GCC cannot find stdio.h with OS X even though command line tools is installed
I'm using OS X 10.10.4 and am having trouble compiling .c files. Most help I've found on this issue suggests installing Xcode's command line tools; however, trying to do so gives
xcode-select --install
xcode-select: error: command line tools are already installed, use "Software Update" to install updates
I used the find command to locate stdio.h:
find /Applications/Xcode.app -name stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/usr/include/c++/4.2.1/tr1/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/usr/include/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/usr/include/sys/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/usr/include/c++/4.2.1/tr1/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/usr/include/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/usr/include/sys/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/System/Library/Frameworks/Kernel.framework/Versions/A/Headers/sys/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/c++/4.2.1/tr1/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/sys/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/c++/4.2.1/tr1/stdio.h
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/stdio.h
So clearly there are several locations with it.
I tried adding the '-v' flag while compiling to try and get more insight and saw that it wasn't searching very thoroughly.
#include "..." search starts here:
#include <...> search starts here:
/usr/local/include
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.1.0/include
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include
/usr/include
/System/Library/Frameworks (framework directory)
/Library/Frameworks (framework directory)
End of search list.
Can anyone help with this please?
Russian Owl, if you do code by indenting four spaces rather than backticks, it solves your octothorpe problem - I've adjusted your text to do this. To do it fast for a big block, just mark all the text and use CTRL-K. If only I could help you with the actual question :-)
Helpful tip anyway. Thanks!
First of all, the default compiler on OS X is clang, not gcc,
so it's better to use clang.
If you have to use gcc, you can set the header search path with the -I compilation option (example: gcc -Ipath_to_headers yourprogram.c -o yourprogram).
Thank you so much! That worked. I honestly thought I had been using gcc this whole time but clang worked immediately.
VBA: Find and replace text
In my word document, there are multiple instances of ABC, but only one of ABC123.
My task is to find the one instance of ABC123 and copy it onto a separate word document. As there are thousands of documents I will need to sort through, I would like to make a macro to relieve some of the pain.
Please note that ABC remains constant, but the numbers, represented above by 123, are always changing.
As it stands, the macro I have tried to put together is only able to find all of the instances of text starting with ABC. Is there an "If" I could add to the code that could find the instance of ABC that ends with a number?
Thanks so much in advance!
Please post the code you have already tried, along with a representative sample of your document's text.
You can use this search term to find ABC followed by 3 digits:
ABC^#^#^#
Or, if you don't know how many digits there will be, you can use this wildcard search which will find ABC followed by 1-10 digits:
With Selection.Find
.MatchWildcards = True
.Text = "ABC[0-9]{1,10}"
.Execute
End With
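For readers who know standard regular expressions better than Word's wildcard syntax, the pattern ABC[0-9]{1,10} corresponds to the regex below, sketched in Python (the sample sentence is made up for illustration):

```python
import re

# Word's wildcard ABC[0-9]{1,10}: "ABC" followed by one to ten digits.
pattern = re.compile(r"ABC[0-9]{1,10}")

text = "Invoice ABC4521 was filed alongside the older ABC records."
match = pattern.search(text)
print(match.group())  # ABC4521
```

Note the plain "ABC" later in the sentence is not matched, because the pattern requires at least one digit after it.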
CockroachDB C++ transaction retry issue
I'm trying to run the basic C++ example, basically:
void executeTx(
pqxx::connection *c, function<void (pqxx::dbtransaction *tx)> fn) {
pqxx::work tx(*c);
while (true) {
try {
pqxx::subtransaction s(tx, "cockroach_restart");
fn(&s);
s.commit();
break;
} catch (const pqxx::pqxx_exception& e) {
// Swallow "transaction restart" errors; the transaction will be retried.
// Unfortunately libpqxx doesn't give us access to the error code, so we
// do string matching to identify retryable errors.
if (string(e.base().what()).find("restart transaction:") == string::npos) {
throw;
}
}
}
tx.commit();
}
However, transactions never get retried correctly, I'm getting this error:
current transaction is aborted, commands ignored until end of transaction block
It looks like the top-level transaction pqxx::work tx(*c); gets poisoned: the second iteration of the loop always fails.
Adding a HTML table to a bookmark in a Word document with Excel VBA
There are 2 parts to my macro: one part creates emails and one creates a Word document. We use the Word feature when we don't have an email address. This is an Excel macro using VBA.
The email macro creates a table (3 column) with the details, a blank row and then the total. The # of rows depends on the number of records for the table. The email table is created with HTML. I'd like to use the same code (below) for the Word document and insert it at a bookmark (top of the 2nd page) in the Word document. I'm not committed to HTML for this table. I was trying to use code I had already developed instead of reinventing the wheel.
Do Until vStartRow > vEndRow
DoEvents
If Val(Workbooks(DataBook).Worksheets("DNR List").Cells(vStartRow, 20)) <> 0 Then
strRows = strRows & "<tr>"
strRows = strRows & "<td ""col width=10%"" ""col align=center"">" & Workbooks(DataBook).Worksheets("DNR List").Cells(vStartRow, 3) & "</td>"
strRows = strRows & "<td ""col width=10%"" ""col align=center"">" & Workbooks(DataBook).Worksheets("DNR List").Cells(vStartRow, 4) & "</td>"
strRows = strRows & "<td ""col width=10%"" ""col align=right"">" & Format(Workbooks(DataBook).Worksheets("DNR List").Cells(vStartRow, 20), "#,##0.00") & "</td>"
strRows = strRows & "</tr>"
MyBalance = MyBalance + Workbooks(DataBook).Worksheets("DNR List").Cells(vStartRow, 20)
End If
vStartRow = vStartRow + 1
Loop
strRows = strRows & "<tr>"
strRows = strRows & "<td ""col width=10%"" ""col align=center"">" & " " & "</td>"
strRows = strRows & "<td ""col width=10%"" ""col align=center"">" & " " & "</td>"
strRows = strRows & "<td ""col width=10%"" ""col align=center"">" & " " & "</td>"
strRows = strRows & "</tr>"
strRows = strRows & "<tr>"
strRows = strRows & "<td ""col width=10%"" ""col align=center"">" & "Total" & "</td>"
strRows = strRows & "<td ""col width=10%"" ""col align=center"">" & " " & "</td>"
strRows = strRows & "<td ""col width=10%"" ""col align=right"">" & Format(MyBalance, "#,##0.00") & "</td>"
strRows = strRows & "</tr>"
strAfterRows = "</table></body>"
strAll = strBeforeRows & strRows & strAfterRows
StrBody3 = "<font size=""3"" face=""Calibri"">" & strAll
I use the following code to write the table to bookmark. What I get is the code written to the bookmark instead of a table. I'm sure the code I'm using is writing the string instead of the table.
WrdApp.ActiveDocument.Bookmarks(BookMarkName).Select
WrdApp.Selection.GoTo What:=wdGoToBookmark, Name:=BookMarkName
WrdApp.Selection.TypeText StrBody3
This is the first time I've tried something like this. Thanks in advance for your help or suggestions on writing a table to a bookmark.
You can use VBA to build a table in Word - eg: https://stackoverflow.com/questions/61640242/draw-table-from-excel-to-word-bookmark/61641834#61641834
Alternatively you can look these things up in the Object Explorer and online help.
@TimWilliams............Thanks........I'll take a look at that article and give it a try
@TimothyRylatt...........Thanks........I always try multiple things and search multiple times before posting on this site. I have a 60 minute rule; where I try to figure things out on my own by reading and researching. If I'm still spinning my wheels after 60 minutes, I'll post a question here. I'm not looking for anyone to do this for me but rather kind of point me in the right direction; like you did earlier today. That helped me solved that problem and I moved on to the final situation that needs to be fixed before I can roll out the macro. Thanks for your help.........
"Pre-commit" command is not found by bash but is installed on macOS
Problem Description
I'm having trouble making commits. When I try to make a commit with, for example, the command:
$ git commit -m "add readme"
pre-commit not found. Install pre-commit with the command pip3 install --user pre-commit or follow the steps on official documentation: https://pre-commit.com/#install
Following the steps described in the pre-commit installation documentation I installed by the command:
$ pip install pre-commit
However when I trigger the command the following error occurs:
$ pre-commit --version
bash: pre-commit: command not found
My attempt fails, so I've tried some other solutions for this but they didn't work.
Export bash:
I already tried a solution described elsewhere, which is to reload my ~/.bashrc with the command source ~/.profile, but the following error happens:
bash:/Users/pvieira/.profile: No such file or directory
Install using homebrew:
This results in the same error that occurs when manually installing by pip above.
pre-commit would not produce that output so it's something custom that you or your company has set up
Are you using a virtual environment? Maybe pre-commit was installed only inside of a virtual env, but you are doing git commit outside of a virtualenv.
I was able to solve the problem by simply restarting the terminal.
The solution that everyone forgets to do is finally here!
Le sigh. Yep. This was the fix.
I use macOS. I solved this problem by installing pre-commit with brew install pre-commit.
Ubuntu suggests installing pre-commit as apt package:
sudo apt install pre-commit
command not found
This error generally means that your shell could not find a matching executable file in its hash table, or that the file is no longer located at its previous location. The shell creates a hash of all programs in the PATH environment variable when it loads.
Depending on your shell you can refresh their hash by running hash or rehash.
The reason reloading your shell or terminal fixes the issue is that it, in effect, creates a new hash. Along the same lines, restarting the computer also recreates the hash.
Thanks for your remark. In my case, I was using a venv and the PATH was not pointing to the right location after venv activation. I suspect this problem was related to the fact that I moved the venv in the past. I recreated the venv to solve the problem, and it updated the PATH properly.
For me the problem was that I used pip install instead of pip3 install, which wasn't using the default Python version. After adding this to my bash profile and restarting bash, it started working:
alias python='python3'
alias pip='pip3'
If you are using nvm and recently installed a new version of node while uninstalling a previous version, this could be due to nvm default pointing to an old version.
Run the following command to see the current version of nvm default:
nvm ls
And then set nvm default to a version that is installed:
nvm alias default [VERSION]
i.e.
nvm alias default 18.18.0
In my case, the pre-commit file was a PHP file and I didn't have PHP installed. I was getting this:
fatal: cannot exec 'pre-commit': No such file or directory
And installing php-cli with
sudo apt install php-cli
Solved the problem.
I solved this problem by restarting my ubuntu system
Secure Passwordless MFA authentication on mobile app
I want to secure my mobile app with a passwordless MFA mechanism.
The registration/login flow would be:
You register your account online with a username and a mobile phone number (an OTP will be sent to verify the phone number).
You login to the app for the first time by providing your username and the system will send an OTP to the registered mobile phone.
After the first login, the app will create a private key on the device and ask the user to setup a biometric factor and a PIN fallback.
On subsequent logins, the app will verify that you are in possession of the mobile (1st factor), and that you can unlock the private key with the biometric or PIN (2nd factor)
Now, let’s say my mobile got stolen and the thief knows my device PIN (1st factor is compromised). The 2nd biometric factor will fail. Then comes the fallback.
If the fallback is the native device PIN, the thief who knows my device PIN can log in to the app.
If the PIN fallback is user-defined, where do I store it? Remotely on my server, and it becomes a password. If stored locally, can I securely store it on iOS/Android and unlock the private key with it?
Also, how can I secure first time logins in case of theft?
If the thief takes my SIM and inserts it into another phone, he can enter my username and have the SMS OTP sent to the registered number, which he now controls.
I’ve seen apps which login flow ask for the phone number as username and send an SMS OTP to that phone number. How is it secure? If a thief get access to the phone or SIM, the login process is not secure.
to secure against sim-swap where the attacker also knows this PIN, you need another factor. An example of this might be access to an e-mail account that cannot be recovered via access to the phone... or maybe access to another piece of hardware.... or, of course, knowledge of a password that also cannot be reset or recovered via the phone. (as you've discovered many "multi-factor" logins really only require a single-factor... access to the phone. What you've describe has 2, access to the phone and knowledge of the PIN)
Displaying validation errors on "new" view
In this section of the Rails guide, it is instructed to add @article = Article.new in the new method of ArticlesController, explaining that otherwise we won't be able to access @article.errors.
From what I understand, @article = Article.new creates a new instance of Article, but what we need is the @article variable that we tried to submit. I know it works, but I need to understand why.
Controller code:
class ArticlesController < ApplicationController
def index
@articles = Article.all
end
def show
@article = Article.find(params[:id])
end
def new
@article = Article.new
end
def create
@article = Article.new(article_params)
if @article.save
redirect_to @article
else
render 'new'
end
end
private
def article_params
params.require(:article).permit(:title, :text)
end
end
View code:
<%= form_with scope: :article, url: articles_path, local: true do |form| %>
<% if @article.errors.any? %>
<div id='error_explanation'>
<h2>
<%= pluralize(@article.errors.count, "error") %> prohibited this article from being saved:
</h2>
<ul>
<% @article.errors.full_messages.each do |msg| %>
<li><%= msg %></li>
<% end %>
</ul>
</div>
<% end %>
<p>
<%= form.label :title %><br>
<%= form.text_field :title %>
</p>
<p>
<%= form.label :text %><br>
<%= form.text_area :text %>
</p>
<p>
<%= form.submit %>
</p>
<% end %>
<%= link_to 'Back', articles_path %>
It says this:
The reason why we added @article = Article.new in the
ArticlesController is that otherwise @article would be nil in our
view, and calling @article.errors.any? would throw an error.
so it has nothing to do with accessing validation errors.
Without the @article variable in the new action, you would call the errors method in your view on a nil value, and nil doesn't have such a method, so you would get the error undefined method 'errors' for nil:NilClass. With @article set to Article.new, you call errors on an instance of the Article class, and since there are no validation errors (yet), the #error_explanation block will not be rendered.
But when you try to create a new record, validation occurs. If there are validation errors, your Rails app renders the new template again, but it does so from the create action. Therefore, the @article variable this time is the one from the create method, and since it contains validation errors, the #error_explanation block will be rendered and the user will see what's wrong.
Material UI: How to change font size of label in React Material Ui Stepper?
In React Material UI, I want to change the font size of the stepper label. How do I achieve that?
function getSteps() {
return [
"OPTION 1",
"OPTION 2",
"OPTION 3"
"OPTION 4"
];
}
Perhaps you could add custom CSS to your webpage:
.MuiStepLabel-labelContainer span {
font-size: xx-large;
}
You can adjust to your desired font size by changing the "font-size" value.
I didn't get it... can you explain?
You could just apply this CSS in your CSS file to alter the stepper label's font size.
Material UI generates the stepper label with the ".MuiStepLabel-labelContainer" class; the custom CSS mentioned above overrides the default font-size.
Target the label CSS rule name in the classes prop
import { makeStyles } from "@material-ui/core";
const useStyles = makeStyles({
customLabelStyle: {
fontSize: "24px"
}
});
function App () {
const classes = useStyles();
return (
<StepLabel classes={{ label: classes.customLabelStyle }}>OPTION 1</StepLabel>
);
}
Using cfx to make a Thunderbird addin
I'm trying to make a Thunderbird addon that will call external code to produce a lump of data to attach to an outbound email, labelled as some custom type so a recipient mailer can just be told to use our app to handle such inbound attachments. Regardless of implementation detail, I'm falling at the first hurdle.
Having briefly tried and given up with XPCOM components in C, I've found this page which tells me to go here from where I downloaded and installed the Add-on SDK. That uses a command cfx to create skeleton addins and test them. It has an experimental parameter --app which allows you to load Thunderbird instead of the default Firefox. However, it doesn't seem to load the addin. From the docs, I see that pretty much the simplest possible case is for main.js to simply contain
console.log("Hello World");
Firing this up using cfx run --app=thunderbird gives no "Hello World" anywhere that I can see, though Thunderbird does open using a temporary test profile. Running it in firefox opens firefox and outputs
reference to undefined property exn.stack
console.log: cfxtest1: Hello World
Which is outputting what I want it to output but in a rather suspicious way!
So two questions; is the AddOn-SDK the way to go for Thunderbird extensions, and why even in Firefox is it looking like it's not actually working?
Versions; Firefox is 25.0.1, Thunderbird is 24.1.1, the AddOn-SDK is 1.14 and Python is 2.6.6 (after 3.3 and then 2.7 turned out to be incompatible). Platform is Windows 7.
As of today, I still cannot find a definitive answer to this question either.
Add-on SDK is mainly made of cfx + the actual SDK (see Comparing Extension Techniques). The same reference says that the SDK only formally supports Firefox for Desktop and Firefox for Android, and no other Gecko applications.
14.04 updates broke Java
Yesterday (8/15/2018) the regular Ubuntu updates on 14.04 LTS caused Java to stop working on two different VirtualBox machines. Netbeans 8.2, reliable until now, can no longer start up, and after a delay I get a kernel oops in the java process.
After this happened to my first machine, I made a duplicate copy of my hard drive image (.vdi) before allowing the update on the second machine. After the update (which required a reboot), Netbeans was broken on this machine like the other. After I restored the .vdi this morning (and declined the update!) Netbeans is functional again.
Is anyone else having this sort of problem? Thanks!
14.04 LTS is supported into April 2019, according to https://www.ubuntu.com/info/release-end-of-life and also this web site's [14.04] tag. Anyway, I'm stuck with the version of Ubuntu that supports the build tools I need for my job.
It is, indeed, still supported.
It seems that kernel 3.13.0-155 broke things. We also ran into this and had to revert back to 3.13.0-153. It doesn't seem to be just related to Java; we had our AWS EC2 instance completely fail to reboot with the new Kernel.
Canonical is aware of this issue and their kernel team is working to fix it.
UPDATE: Canonical has fixed this issue in 3.13.0-156 kernel.
The Ubuntu update on 8/20 included kernel 3.13.0-156 which solved the Java problem. Meanwhile, thank you Kaitsu for pointing me at the bug report, and also telling me what I could do to work around it before the fix.
Why is off-centered prior necessary for HMC sampler?
In this PyMC3 tutorial on Bayesian Mixed Effects Models, there is some Re-parameterization "to avoid chain divergences."
with pm.Model(coords=coords) as hierarchical:
# Hyperpriors
intercept_mu = pm.Normal("intercept_mu", 0, sigma=1)
intercept_sigma = pm.HalfCauchy("intercept_sigma", beta=2)
slope_mu = pm.Normal("slope_mu", 0, sigma=1)
slope_sigma = pm.HalfCauchy("slope_sigma", beta=2)
# Define priors
sigma = pm.HalfCauchy("sigma", beta=2, dims="group")
# Reparameterise to avoid chain divergences
# β0 = pm.Normal("β0", intercept_mu, sigma=intercept_sigma, dims="group")
β0_offset = pm.Normal("β0_offset", 0, sigma=1, dims="group")
β0 = pm.Deterministic("β0", intercept_mu + β0_offset * intercept_sigma, dims="group")
# Reparameterise to avoid chain divergences
# β1 = pm.Normal("β1", slope_mu, sigma=slope_sigma, dims="group")
β1_offset = pm.Normal("β1_offset", 0, sigma=1, dims="group")
β1 = pm.Deterministic("β1", slope_mu + β1_offset * slope_sigma, dims="group")
# Data
x = pm.Data("x", data.x, dims="observation")
g = pm.Data("g", data.group_idx, dims="observation")
# Linear model
μ = pm.Deterministic("μ", β0[g] + β1[g] * x, dims="observation")
# Define likelihood
pm.Normal("y", mu=μ, sigma=sigma[g], observed=data.y, dims="observation")
Consider that β1 = pm.Normal("β1", slope_mu, sigma=slope_sigma, dims="group") was swapped out for the below:
β1_offset = pm.Normal("β1_offset", 0, sigma=1, dims="group")
β1 = pm.Deterministic("β1", slope_mu + β1_offset * slope_sigma, dims="group")
Why is this necessary and what does it accomplish?
Edit: I see the comment linking to the Stan material; is there a slightly more concise, higher-level explanation available?
I understand that HMC doesn't do well with high curvature areas; but why is multiplying the slope by an additional Normally distributed random variable helpful?
The Stan team has produced quite a lot of good materials on the topic, e.g. https://mc-stan.org/users/documentation/case-studies/divergences_and_bias.html
See also Neals Funnel from the Stan documentation.
The explanation below is based on Statistical Rethinking, 2nd edition, pages 420-423.
As you well say, samplers have issues exploring high curvature areas. A re-parametrization is a way to change the area that is being explored.
Consider a funnel made from the following parametrization:
$$
\begin{aligned}
v &\sim \operatorname{Normal}(0, 3) \\
x &\sim \operatorname{Normal}(0, \exp(v))
\end{aligned}
$$
Where does the funnel come from? Half the values of $v$ will be negative, and for negative numbers $\exp(v) < 1$. So we know that $x$ will be centered around 0, with a quite small variance. For the other half of the values of $v$, $\exp(v)$ will not only be larger than 1, it will grow—literally—exponentially. Hence, a funnel arises: increasingly more concentrated values of $x$ for more negative values of $v$, increasingly more spread values of $x$ for more positive values of $v$.
Consider a reparametrization now, as follows:
$$
\begin{aligned}
v &\sim \operatorname{Normal}(0, 3) \\
z &\sim \operatorname{Normal}(0, 1) \\
x &= z \exp(v)
\end{aligned}
$$
Where did the funnel go? We are no longer sampling $x$, but a standard normal variable $z$. Conveniently, multiplying a normally distributed variable by a constant changes its scale (or variance), so we can get our desired $x$ by multiplying $z$ by the exponentiated $v$. This step is a deterministic one; there is no probability involved, so there is no need for the sampler to explore anything here. The sampler now just has to explore normal distributions, which have straightforward curvatures. The joint distribution of $v$ and $z$ will now just be a multivariate normal, which has nothing pointy or weird going on.
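The equivalence of the two parametrizations is easy to check numerically. The sketch below (with NumPy; the seed and sample size are arbitrary) draws $x$ both ways and compares a robust scale summary of the two samples:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

v = rng.normal(0.0, 3.0, size=n)

# Centered form: x ~ Normal(0, exp(v)) -- the funnel the sampler struggles with.
x_centered = rng.normal(0.0, np.exp(v))

# Non-centered form: z ~ Normal(0, 1), then x = z * exp(v), a deterministic step.
z = rng.normal(0.0, 1.0, size=n)
x_noncentered = z * np.exp(v)

# Both constructions have the same distribution, so robust summaries agree.
m_centered = np.median(np.abs(x_centered))
m_noncentered = np.median(np.abs(x_noncentered))
print(m_centered, m_noncentered)
```

Only the geometry the sampler sees changes: in the non-centered form it explores independent, well-behaved normals for $v$ and $z$.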
Finding matches with multiple needles
Sorry, I couldn't find the answer to this elsewhere, but I'm using str_contains to find matches of one string within another.
str_contains($haystack, $needle);
However, is there a way to have multiple needles or do I need to use this function several times?
There is no built-in function to do this, but you can make your own
function str_containsa(string $haystack, array $needles){
foreach ($needles as $needle){
if (str_contains($haystack, $needle)){
return true;
}
}
return false;
}
usage
str_containsa('the quick brown fox jumped over the lazy dog', ['fox', 'cow', 'goat']);
//returns true as string contains fox
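As an aside, the same any-of-several-needles idea is a one-liner in languages with a built-in any(); here is a sketch in Python (the function name and sample strings are illustrative):

```python
def str_contains_any(haystack: str, needles: list[str]) -> bool:
    # True as soon as any needle is a substring of the haystack.
    return any(needle in haystack for needle in needles)

print(str_contains_any("the quick brown fox jumped over the lazy dog",
                       ["fox", "cow", "goat"]))  # True
print(str_contains_any("the quick brown fox", ["cow", "goat"]))  # False
```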
Also, you can use regex and preg_match_all:
join the needles into a pattern string like (\bneedle1\b)|(\bneedle2\b), etc.
make the regex pattern case-insensitive (i) and multiline (m)
function stringHasWords(string $string, array $needles)
{
$nPattern = join('|', array_map(function ($needle) {
// note \b token
return "(\b$needle\b)";
}, $needles));
// m - multiline, i - case insensitive
$pattern = "/$nPattern/mi";
// returns count of matches or false on error
$count = preg_match_all($pattern, $string, $matches);
return $count && $count > 0;
}
Note that the \b token makes the regex match only whole words: the needle o would not match inside the string fox.
You can use preg_match with implode like following :
$string = 'some value1 or some value4';
$needles = ['value1','value2','value3','value4'];
if (preg_match("/(?:". implode('|', $needles) .")/", $string)) {
echo 'does contain';
} else {
echo 'does not';
}
one liner :
$doesMatch = preg_match("/(?:". implode('|', $needles) .")/", $string);
or as a function :
function contain(string $string, array $needles) {
return preg_match("/(?:". implode('|', $needles) .")/", $string);
}
in `initialize': string contains null byte Ruby
I wrote the following code on my desktop and it worked fine. I downloaded it to my laptop, downloaded Ruby (v1.9.3), and tried to run it, but got the following error. I'm pretty sure it has to do with Ruby being run for the first time, but I never got this problem on my desktop when I first ran Ruby.
C:/Users/Downloads/vscript.rb:18:in 'initialize': string contains null byte (ArgumentError)
from C:/Users/Downloads/vscript.rb:18:in 'open'
from C:/Users/Downloads/vscript.rb:18:in 'main'
Line 18 is the File.open line:
File.open("filename", "r") do |f|
# Do while there are characters in the text file
f.each do |line|
# Checks to see if any parts in file match the regex and inform the user
if x = line.match(/\d\.\d\.\d{4}\.\d/)
puts "#{x} was found in the file."
end
end
end
What encoding is vscript.rb in? UTF-16 perhaps?
It got encoded in UTF-8 without BOM
Are you certain that it is UTF-8? UTF-8 text shouldn't contain any zero bytes but UTF-16 will contain lots of zero bytes.
Figured it out. When I originally wrote the code the filename had /'s separating the folders. When I downloaded the file on my laptop, I copied its new directory from the address bar which uses \'s. Changed that and it works fine now.
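Python behaves the same way, which makes the root cause easy to demonstrate: in a normal (non-raw) string literal, a backslash followed by certain characters becomes an escape sequence, so "\0" is a single NUL byte, not a backslash and a zero. A small sketch:

```python
# "\0" inside a non-raw string literal is a single NUL character.
bad_path = "dir\0file"
print(len(bad_path))  # 8 characters, not 9 -- the backslash is gone

# Filesystem calls reject embedded NUL bytes, much like Ruby's ArgumentError.
try:
    open(bad_path)
except ValueError as err:
    caught = type(err).__name__
print(caught)
```

Raw strings (r"...") or forward slashes, which the asker switched back to, avoid the accidental escape.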
Solution For Bypassing a Step: Text To CSV Pandas
I read the text file with header=None because the first 6 lines are unnecessary and become an obstacle to using '|' as the delimiter. I need to convert the text file into a CSV file.
Then I need to convert that file into a CSV file, and I again need to import the 27evening.csv file using the delimiter '|'. Only then can I save df2 as the final CSV.
I do not want to save (as CSV) after the 4th step shown in the figure, and want to open userhistory_aam.txt using the delimiter '|'.
Here I don't want to generate the unnecessary intermediate file (27evening.csv). Can you please provide an alternative?
'''
import pandas as pd
import numpy as np
df = pd.read_csv("userhistory_aam[50][100]27May.txt", header = None)
df.columns = [''] * len(df.columns)
df.drop([0,1,2,3,4,6],0,inplace=True)
df.to_csv("27evening.csv", index = None)
df2 = pd.read_csv("27evening.csv", delimiter = '|')
df2.to_csv('final.csv')
'''
please don't post pictures of code
@Isotope Ok I am removing
@Isotope Without removing the first 6 lines, I am not able to convert the text file into a CSV file using '|' as the delimiter. Hence I have to save the txt file after removing the first 6 lines. But I need to bypass this step because it generates the unnecessary 27evening.csv file.
use skiprows in your read method.
df = pd.read_csv('data.txt',sep='|',skiprows=6)
data.txt.
random text and data
# could also start with a #
more random text
1|2|3|4|5|6|7
1|2|3|4|5|6|7
1|2|3|4|5|6|7
A|B|C|D|E|F|G # <-- data starts here.
a|b|c|d|e|f|g
print(df)
A B C D E F G
0 a b c d e f g
Note # <-- data starts here. is just for illustration, don't put that into your text file.
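Putting the answer together end to end (a sketch; an in-memory buffer stands in for data.txt, and the sample rows mirror the ones above): a single read_csv call with skiprows and sep='|' replaces the whole read, drop, save, re-read round trip.

```python
from io import StringIO

import pandas as pd

# Stand-in for the data.txt shown above.
raw = """random text and data
# could also start with a #
more random text
1|2|3|4|5|6|7
1|2|3|4|5|6|7
1|2|3|4|5|6|7
A|B|C|D|E|F|G
a|b|c|d|e|f|g
"""

# Skip the six lines before the header row; no intermediate CSV needed.
df = pd.read_csv(StringIO(raw), sep="|", skiprows=6)
print(df)
# To produce the final file directly:
# df.to_csv("final.csv", index=False)
```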
How to solve a React Native TLS/SSL network error?
I use a GraphQL server (the graphql-yoga library),
and I apply a certificate created with the openssl command to the 'https' option.
Then I connect to graphql-playground through the Chrome browser on my PC using the 'https' protocol.
It worked fine,
but it did not work in an app made with React Native.
The app uses Apollo Client from the apollo-boost library,
and I set the test server address 'https://<IP_ADDRESS>:4001' in the uri field of the ApolloClient constructor options.
Once again the app gives me a network error!
WARN Possible Unhandled Promise Rejection (id: 1):
Error: Network error: Network request failed
ApolloError@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:102209:32
http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:103704:51
http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:104124:25
forEach@[native code]
http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:104122:35
forEach@[native code]
broadcastQueries@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:104120:29
http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:103599:47
tryCallOne@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:27024:16
http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:27125:27
_callTimer@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:30579:17
_callImmediatesPass@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:30615:19
callImmediates@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:30834:33
callImmediates@[native code]
__callImmediates@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:2591:35
http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:2368:34
__guard@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:2574:15
flushedQueue@http://localhost:8081/index.bundle?platform=android&dev=true&minify=false:2367:21
flushedQueue@[native code]
callFunctionReturnFlushedQueue@[native code]
Please let me know a solution to this problem.
You need to add the certificate to Android for HTTPS to work: https://stackoverflow.com/questions/36289125/react-native-fetch-from-https-server-with-self-signed-certificate/54475750#54475750
I solved my problem myself.
In my case, two things had to be checked.
First, you must use a certificate from a trusted authority, not a self-signed certificate.
(I think React Native forces the use of a trusted certificate.)
(So I used a certificate issued by Let's Encrypt.)
Second, you must set the 'ca' option (field) in the 'httpsOption' object (in any GraphQL server).
In my case, I had not set the 'ca' option.
Once I set the 'ca' option to the path of the CA file, it worked fine.
(You must set the key, cert, and ca options in 'httpsOption'.)
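A minimal sketch of that server-side fix. This is a hypothetical helper, not my actual config; the point is only that 'ca' must be present alongside 'key' and 'cert' (in a real server these three values are read from the certificate files, e.g. Let's Encrypt's privkey.pem, cert.pem and chain.pem):

```javascript
// Hypothetical sketch: build the 'httpsOption' object, refusing to
// proceed when the 'ca' entry is missing. In my case the browser
// tolerated the missing chain but React Native did not.
function buildHttpsOption(key, cert, ca) {
  if (!ca) {
    throw new Error("httpsOption needs a 'ca' entry as well as key/cert");
  }
  return { key, cert, ca };
}

// Placeholder PEM strings stand in for the real file contents here.
const httpsOption = buildHttpsOption("KEY_PEM", "CERT_PEM", "CHAIN_PEM");
console.log(Object.keys(httpsOption).join(","));  // key,cert,ca
```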
I found the cause by using axios: I wrote the GraphQL query and sent it using axios.
Unlike Apollo Client, axios kindly reports the error in detail.
...from Republic of Korea.
ObjectDataProvider with MethodName: Refresh not working
I am a newbie to WPF. I created a window for adding customers and wrote the code-behind in VB, and everything works fine: the entry is inserted into the database successfully. Now I have a problem refreshing the DataGrid.
Here's my DataGrid binding:
<DataGrid Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="2" IsReadOnly="True" x:Name="DGCustomers" ItemsSource="{Binding Source={StaticResource PopulateCustomers}}" />
Whenever I call
TryCast(Me.FindResource("PopulateCustomers"), ObjectDataProvider).Refresh()
Nothing happens! Only when I restart the application am I able to see the changes!
Here's my XAML
<ObjectDataProvider x:Key="PopulateCustomers" ObjectType="{x:Type local:CustomersDataProvider}" MethodName="LoadDG" />
Here's my class
Public Class CustomersDataProvider
Dim con As String = "Provider=Microsoft.ACE.OLEDB.12.0; Data Source= C:\Users\Nothing But WIND\Documents\DBSAMPLE.accdb"
Dim oc As New OleDbConnection(con)
Dim cmd As New OleDbCommand("SELECT * FROM Customers", oc)
Dim da As New OleDbDataAdapter(cmd)
Dim ds As New DataSet
Dim dt As New DataTable
Public Sub New()
da.Fill(ds)
dt = ds.Tables(0)
End Sub
Public Function LoadDG() As DataView
Return dt.DefaultView
End Function
End Class
Here's my update code:
Private Sub btnAdd_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) Handles btnAdd.Click
Dim cmd As New OleDbCommand("Select * FROM Customers", oc)
Dim da As New OleDbDataAdapter(cmd)
Dim ds As New DataSet
Dim dt As New DataTable
da.Fill(ds)
dt = ds.Tables(0)
dt.Rows.Add(txt_CustomerID.Text, txt_CustomerName.Text, txt_ContactPerson.Text, txt_Address.Text, txt_Area.Text, txt_City.Text, txt_Pincode.Text, txt_Number.Text, Chk_1Discount.IsChecked)
If oc.State = ConnectionState.Closed Then
oc.Open()
End If
Dim cb As OleDbCommandBuilder = New OleDbCommandBuilder(da)
da.InsertCommand = cb.GetInsertCommand
da.Update(dt)
oc.Close()
MsgBox("Item added")
TryCast(Me.FindResource("PopulateCustomers"), ObjectDataProvider).Refresh()
DGCustomers.Items.Refresh()
End Sub
Any help is heartily appreciated!
Getting the current Application and Document from IExternalApplication - Revit
While executing an IExternalCommand, I can easily obtain the Application and Document via ExternalCommandData:
UIApplication uiApp = commandData.Application;
Document doc = uiApp.ActiveUIDocument.Document;
Transaction trans = new Transaction(doc);
While executing an IExternalApplication, there is no ExternalCommandData object. I need to find the path of the currently opened Revit file. How do I gain access to the Document from an IExternalApplication?
Well, this worked. When onViewActivated is called, the document is available in the event args e:
public class AddPanel : IExternalApplication
{
void onViewActivated(object sender, ViewActivatedEventArgs e)
{
View vCurrent = e.CurrentActiveView;
Document doc = e.Document;
string pathname = doc.PathName;
TaskDialog.Show("asd", pathname);
string id = Convert.ToString(vCurrent.Id);
string name = vCurrent.Name;
string userName = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
string now = Convert.ToString(DateTime.Now);
string content = now + ", " + id + ", " + name + ", " + userName + "\n";
string path = @"E:\H1503200 Montreign Resort Casino\3-CD\views.txt";
using (System.IO.StreamWriter sw = System.IO.File.AppendText(path))
{
sw.WriteLine(content);
}
}
// Both OnStartup and OnShutdown must be implemented as public methods
public Result OnStartup(UIControlledApplication application)
{
// Add a new ribbon panel
RibbonPanel ribbonPanel = application.CreateRibbonPanel("JCJ Addin");
// Create a push button to trigger a command add it to the ribbon panel.
string thisAssemblyPath = Assembly.GetExecutingAssembly().Location;
PushButtonData buttonData = new PushButtonData("SheetsToUpper",
"Sheets\n To Upper", thisAssemblyPath, "SheetsToUpper");
PushButtonData buttonData1 = new PushButtonData("ViewsToUpper",
"Views\n To Upper", thisAssemblyPath, "ViewsToUpper");
PushButtonData buttonData2 = new PushButtonData("RenumberViews",
"Renumber\n Views on\nSheet", thisAssemblyPath, "RenumberViews");
PushButtonData buttonData3 = new PushButtonData("viewerLocations",
"Find\n View on\nInstances", thisAssemblyPath, "viewerLocations");
PushButton pushButton = ribbonPanel.AddItem(buttonData) as PushButton;
PushButton pushButton1 = ribbonPanel.AddItem(buttonData1) as PushButton;
PushButton pushButton2 = ribbonPanel.AddItem(buttonData2) as PushButton;
PushButton pushButton3 = ribbonPanel.AddItem(buttonData3) as PushButton;
// Optionally, other properties may be assigned to the button
// a) tool-tip
pushButton.ToolTip = "Converts all the text in Sheet titles to uppercase - Global Operation.";
pushButton1.ToolTip = "Converts all the text in View titles to uppercase - Global Operation.";
pushButton2.ToolTip = "Select all views in the order you want them re-numbered.";
pushButton3.ToolTip = "Select View.";
// b) large bitmap
Uri uriImage = new Uri(@"H:\!PRACTICE GROUPS\Revit Scripts\icon-font-theme-lowercase-uppercase.png");
Uri uriImage1 = new Uri(@"H:\!PRACTICE GROUPS\Revit Scripts\icon-font-theme-lowercase-uppercase.png");
Uri uriImage2 = new Uri(@"H:\!PRACTICE GROUPS\Revit Scripts\icon-font-theme-lowercase-uppercase.png");
Uri uriImage3 = new Uri(@"H:\!PRACTICE GROUPS\Revit Scripts\icon-font-theme-lowercase-uppercase.png");
BitmapImage largeImage = new BitmapImage(uriImage);
BitmapImage largeImage1 = new BitmapImage(uriImage1);
BitmapImage largeImage2 = new BitmapImage(uriImage2);
BitmapImage largeImage3 = new BitmapImage(uriImage3);
pushButton.LargeImage = largeImage;
pushButton1.LargeImage = largeImage1;
pushButton2.LargeImage = largeImage2;
pushButton3.LargeImage = largeImage3;
application.ViewActivated += new EventHandler<Autodesk.Revit.UI.Events.ViewActivatedEventArgs>(onViewActivated);
return Result.Succeeded;
}
public Result OnShutdown(UIControlledApplication application)
{
// nothing to clean up in this simple case
return Result.Succeeded;
}
}
You can also do this (in IExternalApplication.OnStartup), but it relies on undocumented features of the UIControlledApplication object. I have used this technique from Revit 2012 through 2017, so I guess it is a stable assumption for now:
var versionNumber = uiControlledApplication.ControlledApplication.VersionNumber;
var fieldName = versionNumber == "2017" ? "m_uiapplication" : "m_application";
var fi = uiControlledApplication.GetType().GetField(
fieldName, BindingFlags.NonPublic | BindingFlags.Instance);
var uiApplication = (UIApplication)fi.GetValue(uiControlledApplication);
The idea is to use reflection to access the non-public field (m_uiapplication, or m_application in earlier versions) of the UIControlledApplication object. The field is of type UIApplication.
Finding a VBA-defined Named Range definition
I've inherited a large VBA project, and whilst I have lots of dev experience I have only a small amount of VBA experience. The code reads data off a sheet in the form:
Intersect(Range("colName"), .Rows(intCurrentRow)).Value
Where colName is a named range, or so I thought. I have searched all of the project code and the Excel sheets and cannot find where colName is defined.
So far I have searched the code, looked in Name Manager on the sheet, and googled furiously, but hit a total blank. As I now need to read in another value from the sheet, I would really prefer to reuse the code that is currently used, with another value instead of colName, to reference my new data field.
Is there anything obvious I'm missing?
Edits:
activesheet.range("colName").address gives a value of "$L:$l"
colName in "" would mean that it's a sheet-level range name. It could be a dynamic range, e.g. OFFSET(A1,0,0,COUNTA(A:A),1) for an expanding range, which may not appear in the Ctrl+G list of names. Check in the Define Names box to see if it's there.
It's not listed in Define Names ...
In the Immediate pane, try activesheet.range("colName").address. Does the code open another sheet, or define the range?
Is there an On Error ... statement above the intersect? Perhaps it never worked properly? What happens when you put a breakpoint on that line? What is the value/address of Range("colName") when it arrives there? Stepping into and past that line, is anything processed? You have not provided enough code/data/behavior examples to determine anything.
@Jeeped I appreciate that. If I break on that line and hover over Value, I get no value from the IDE; adding a Watch gives an invalid property error. However, the value is part of a set of parameters passed to another function, and breaking in that function and reading the value passed in shows it is reading a value from the sheet. I just cannot understand where colName is defined.
It's probably a hidden name: try unhiding it to see.
@Nathan_Sav range("colName").address gives a value of "$L:$l"
I second what @CharlesWilliams suggested. Try ActiveWorkbook.Names("colName").Visible=True.
@CharlesWilliams thank you for that - I really thought that was it - there is a function to HideNames that I commented out - unfortunately no further ranges have appeared.
@DougGlancy - OK, we're getting somewhere - typing that in the Immediate window caused the named range to appear, so I guess I can add another range with the HideRange function enabled and it will be defined and then hidden with the rest.
OK that worked fine - if someone would like to add it as the answer I'll very happily accept it as being correct. Many thanks to all who contributed.
It's probably a hidden name. As Doug Glancy said, you can unhide it using VBA:
ActiveWorkbook.Names("colName").Visible = True
If you are working with defined names, you may find it useful to get my and Jan Karel Pieterse's Name Manager add-in, which (amongst many other things) handles hidden names. Download from
http://www.decisionmodels.com/downloads.htm
It could be a hidden Name. Try:
ActiveWorkbook.Names("colName").Visible=True
Do the Gemara and the Rema prohibit, or encourage, women putting on talit and/or tefillin?
An article in Ynet Judaism (in Hebrew) addresses the recent controversy regarding the prayer of the Women of the Wall at the Western Wall, and makes the following claims:
אף אחד גם לא הקריא להן את הגמרא שמספרת על ברוריה אשת רבי מאיר, שהייתה מניחה תפילין ומתעטפת בטלית, וגם לא את זו שמספרת על מיכל בת שאול שהניחה תפילין ולא מיחו בידה, ובטח שלא את פסק ההלכה של הרמ"א שמחייב אותן בברכה על כך.
In short, it claims that the Gemara depicts "Bruria, wife of Rabbi Meir" and "Michal, daughter of Saul" putting on tefillin and tallit, with nobody stopping them; and also that the Rema ruled that they are obliged to make a blessing on the act (לברך), thus implying that this is allowed (and maybe even encouraged).
What is the Gemara's and orthodox adjudicators' view on the issue?
I know that this is an English site, but please, when you put references to books and scriptures, put them in Hebrew as well, it makes the job of understanding what is written and checking it by myself much easier.
I take it the article doesn't say where in the g'mara? (If it does, please add that to the question to help guide answerers.)
No it doesn't I have also read some of the talkbacks, and none of those say where in the gemara it appears, or from what book the Rema's Halacha is taken.
Thanks for clarifying. Also, welcome to Mi Yodeya and thank you for bringing your question here! I hadn't heard the claim of g'mara support and, if it's there, would be interested in knowing too.
See Women & the Mitzvot: Serving the Creator, pp. 95-105, which discusses this at length.
See also this article
There is much discussion in Jewish literature about this subject, and there is also a difference between a woman wearing a tallit and tefillin. It is easy to show what the Gemara and the Rema say, but leaving out all of the rishonim and acharonim on the topic would prevent learning where the halakha stands. But here is a start.
Regarding tefillin
Mishna - מסכת ברכות פרק ג:ג (Berakhot 3:3)
נשים ועבדים וקטנים פטורין מקריאת שמע ומן התפילין וחייבין בתפלה ובמזוזה ובברכת המזון
Women, slaves, and minors are exempt from the recitation of Shema, and from tefillin, and are obligated in prayer and mezuzah and the blessings on food.
The reason why women are not obligated is explained in the gemara קידושין (Kiddushin 33a-34b): the Torah uses the word "sons" for the mitzvah of learning Torah, which is in the same verse as the mitzvah of wearing tefillin (this is a general principle of applying Torah verses). Also, women are not obligated in positive time-bound mitzvot, so they are not obligated to wear tefillin (same sugya in Kiddushin). But can they wear tefillin anyway? Here is the main story of a woman wearing tefillin in the gemara.
Talmud Bavli - תלמוד בבלי מסכת עירובין דף צו עמוד א-ב (Eruvin 96a-b)
דתניא: מיכל בת כושי היתה מנחת תפילין ולא מיחו בה חכמים. ואשתו של יונה היתה עולה לרגל ולא מיחו בה חכמים. מדלא מיחו בה חכמים - אלמא קסברי: מצות עשה שלא הזמן גרמא היא. - ודילמא סבר להכרבי יוסי, דאמר: נשים סומכות רשות. דאי לא תימא הכי - אשתו של יונה היתה עולה לרגל ולא מיחו בה. מי איכא למאן דאמר רגל לאו מצות עשה שהזמן גרמא הוא? אלא: קסבר רשות, הכא נמי: רשות. אלא האי תנא היא, דתניא: המוצא תפילין מכניסן זוג זוג, אחד האיש ואחד האשה, אחד חדשות ואחד ישנות, דברי רבי מאיר. רבי יהודה אוסר בחדשות ומתיר בישנות. ע"כ לא פליגי אלא בחדשות וישנות, אבל באשה - לא פליגי. שמע מינה: מצות עשה שלא הזמן גרמא הוא, וכל מצות עשה שאין הזמן גרמא נשים חייבות. - ודילמא סבר לה כרבי יוסי, דאמר: נשים סומכות רשות? - לא סלקא דעתך, דלא רבי מאיר סבר לה כרבי יוסי, ולא רבי יהודה סבר לה כרבי יוסי. לא רבי מאיר סבר לה כרבי יוסי - דתנן: אין מעכבין את התינוקות מלתקוע. הא נשים - מעכבין. וסתם מתניתין רבי מאיר. ולא רבי יהודה סבר לה כרבי יוסי - דתניא: אדבר אל בני ישראל וסמך - בני ישראל סומכין ואין בנות ישראל סומכות. רבי יוסי ורבי שמעון אומרים: נשים סומכות רשות. וסתם סיפרא מני - רבי יהודה.
For it was taught: Michal the daughter of the Kushite wore tefillin and the Sages did not attempt to prevent her, and the wife of Jonah attended the festival pilgrimage and the Sages did not prevent her. Now since the Sages did not prevent her it is clearly evident that they hold the view that it is a positive precept the performance of which is not limited to a particular time.
But is it not possible that he holds the same view as R. Jose who ruled: It is optional for women to lay their hands upon an offering? For were you not to say so, how is it that Jonah's wife attended the festival pilgrimage and the Sages did not prevent her, seeing that there is no one who contends that the observance of a festival is not a positive precept the performance of which is limited to a particular time? You must consequently admit that he holds it to be optional; could it not then here also be said to be optional? — It represents rather the view of the following Tanna. For it was taught: If tefillin are found they are to be brought in, one pair at a time, irrespective of whether the person who brings them in is a man or a woman, and irrespective of whether the tefillin were new or old; so R. Meir. R. Judah forbids this in the case of new ones but permits it in that of old ones.
Now since their dispute is confined to the question of new and old while in respect of the woman there is no divergence of opinion, it may be concluded that it is a positive precept the performance of which is not restricted to a particular time, women being subject to the obligations of such precepts. But is it not possible that he holds the same view as R. Jose who stated: It is optional for women to lay their hands upon an offering? — This cannot be entertained at all, since neither R. Meir holds the same view as R. Jose nor does R. Judah hold the same view as R. Jose. ‘Neither R. Meir holds the same view as R. Jose’, since we learned: ‘Children are not to be prevented from blowing the shofar’; from which it follows that women are to be prevented; and any anonymous Mishnah represents the view of R. Meir. ‘Nor does R. Judah hold the same view as R. Jose’, since it was taught: Speak unto the children of Israel ... and he shall lay; only the sons of Israel ‘shall lay’ but not the daughters of Israel. R. Jose and R. Simeon ruled: It is optional for women to lay. Now who is the author of an anonymous statement in the Sifra? R. Judah. [Soncino]
Talmud Yerushalmi - תלמוד ירושלמי מסכת ברכות ב:ג ומקבילו מסכת ערובין י:א (Brachot 2:3 and Eruvin 10:1)
הרי מיכל בת כושי הית' לובשת תפילי' ואשתו של יונה הית' עולה לרגלי' ולא מיחו בידיה חכמ' ר' חזקיה בשם ר' אבהו אשתו של יונה הושבה מיכל בת כושי מיחו בידיה חכמ'
Behold Michal the daughter of the Kushite wore tefillin and the Sages did not attempt to prevent her, and the wife of Jonah attended the festival pilgrimage and the Sages did not prevent her. Rabbi Chizkiya [said] in the name of Rabbi Abahu: the wife of Jonah was sent back; the Sages did attempt to prevent Michal the daughter of the Kushite.
Rema - שו"ע או"ח לח:ג (Shulchan Aruch Orech Chaim 38:3)
DoubleAA already quoted this source. I'll just add that the Rema is quoting the Kol Bo's language, which, besides noting women's exemption, also mentions a problem with cleanliness.
כתב הר"ם נשים פטורות מתפילין מפני שהוא מצות עשה שהזמן גרמה שהרי אין מניחין אותן בשבת ויום טוב ואם רצו להניח אין שומעין להן מפני שאינן יודעות לשמור עצמן בנקיות ע"כ
The Maharam [whose students, like the Kol Bo, recorded his teachings] wrote that women are exempt from tefillin because it is a positive time-bound mitzvah, since tefillin are not worn on Shabbat or Yom Tov. And if women want to wear tefillin, one does not listen to them, because they do not know how to keep themselves in cleanliness.
Because the Rema is the foremost Ashkenazic authority on halakha, the great modern Ashkenazic poskim (R. Moshe Feinstein, Igrot Moshe IV OC #9; the Chafetz Chaim, Mishnah Berurah OC 38:3; the Arukh HaShulchan, OC 38) have similarly ruled against it. Sephardic opinions, and other poskim who have explicitly allowed it, deserve their own question.
Regarding Tallit
Like tefillin, there are time-bound and other issues (male clothing, arrogance, etc.) that concern its permissibility. However, there is no story of women wearing it in the gemara, and there is generally a more stringent opinion against women wearing tefillin than tallit (perhaps because there isn't a requirement to buy a tallit and it's not obvious if it's time-bound). The discussion of women and tallit in the gemara is here:
Talmud Bavli - מנחות דף מג עמוד א (Menachot 43a)
ת"ר: הכל חייבין בציצית, כהנים, לוים וישראלים, גרים, נשים ועבדים; ר"ש
פוטר בנשים, מפני שמצות עשה שהזמן גרמא הוא, וכל מצות עשה שהזמן גרמא
נשים פטורות. אמר מר: הכל חייבין בציצית, כהנים, לוים וישראלים. פשיטא,
דאי כהנים לוים וישראלים פטירי, מאן ליחייב? כהנים איצטריכא ליה, ס"ד
אמינא, הואיל וכתיב: אלא תלבש שעטנז צמר ופשתים יחדיו גדילים תעשה לך,
מאן דלא אישתרי כלאים לגביה בלבישה הוא דמיחייב בציצית, הני כהנים הואיל
ואישתרי כלאים לגבייהו לא ליחייבו, קמ"ל, נהי דאישתרי בעידן עבודה, בלא
עידן עבודה לא אישתרי. ר"ש פוטר בנשים. מאי טעמא דר"ש? דתניא: בוראיתם
אותו - פרט לכסות לילה
Our Rabbis taught: All must observe the law of zizith, priests, Levites, and Israelites, proselytes, women and slaves. R. Simeon declares women exempt, since it is a positive precept dependent on a fixed time, and women are exempt from all positive precepts that are dependent on a fixed time.
The Master said, ‘All must observe the law of zizith, priests, Levites, and Israelites’. Is not this obvious? For if priests and Levites and Israelites were exempt, then who would observe it? — It was stated particularly on account of priests. For I might have argued, since it is written, Thou shalt not wear a mingled stuff, wool and linen together, and [it is followed by,] Thou shalt make thee twisted cords, that only those who are forbidden to wear mingled stuff must observe the law of zizith, and as priests are permitted to wear mingled stuff they need not observe [the law of zizith]; we are therefore taught [that they, too, are bound], for although while performing the service [in the Temple] they may wear [mingled stuff] they certainly may not wear it when not performing the service.
‘R. Simeon declares women exempt’. What is R. Simeon's reason? — It was taught: That ye may look upon it: this excludes a night garment. [Soncino]
Rema - שו"ע או"ח יז:ב (Shulchan Aruch Orech Chaim 17:2)
נשים ועבדים פטורים, מפני שהיא מצות עשה שהזמן גרמא. הגה: ומ"מ אם רוצים לעטפו ולברך עליו הרשות בידן כמו בשאר מצות עשה שהזמן גרמא (תוס' והרא"ש והר"ן פ"ב דר"ה ופ"ק דקדושין), אך מחזי כיוהרא, ולכן אין להן ללבוש ציצית, הואיל ואינו חובת גברא (אגור סימן כ"ז) פי' אינו חייב לקנות לו טלית כדי שיתחייב בציצית. ולקמן בסימן י"ט אמר כשיש לו טלית מארבע כנפות (ולבשו).
Women and slaves are exempt [from wearing tzitzit] because it is a time-dependent commandment.
Rem"a: And if they wish to wrap [in tzitzit] and say the blessing on them, it is up to them to do so, as with all positive time-dependent commandments... Yet it appears arrogant, and therefore women should not wear tzitzit, since it is not an obligation of the person, meaning a man isn't obligated to buy a tallit for himself in order to observe the mitzvah of tzitzit; they are doing it to appear more observant than others, in which case they may not wear them, since they are not required as men are.
Halakha considers the whole gamut of opinions, which are too numerous to cite here, but this is what the Gemara and the Rema have to say on the subject.
Thanks for all the information! Regarding your final paragraph under t'filin, asked.
The Targum Yonasan on Parshas Ki Teitzei (22:5), attributed to Rabbi Yonasan ben Uziel, explains that the verse prohibiting men from wearing women's clothing and vice versa includes women wearing tzitzis and tefillin.
Does that mean that the Rema didn't write that women need to bless when they put on Tefillin, or that the Gemara doesn't mention those two women putting on Tefillin and Talit?
@IlyaMelamed The gemara does mention Michal wearing tefillin (not a tallis, if I recall), though the Talmud Yerushalmi maintains that the sages did object to her actions. I don't know of a source for Bruria wearing tefillin, though I've heard a lot of people mention that she did. The Rema did write that a woman who wears a tallis should recite a blessing, but he said that women may not wear tefillin. (The Rema and some other poskim seem to disregard the Targum Yonasan's rationale regarding cross dressing).
This is not written by Yonasan ben Uziel, and it is full of incredibly strange opinions that are not followed. Not the best source for a practical question.
Michal is a different case; if you have access to the sefer Mrafsin Igri, Inyanim vol. 1 p. 17, it explains the difference.
@DoubleAA that is debatable, even though most say what you are saying.
@sam Who says otherwise with basis?
@Fred, Please write what you said, including appropriate references and the missing data (about Bruria). Obviously that more relevant info, like in what way (positive or negative) the Gemara describes the actions of those women, or that the Rema was against women putting tefillin, provided it's properly sourced, will be extremely appreciated.
@DoubleAA Are you referring to that targum in general ("Targum Pseudo-Jonathan"), or are you referring to something about the targum on this specific verse?
@Fred The whole thing. It's a nice "shitta" to have in an overall discussion, but I'd be wary of basing an entire conclusion on it. See this and some responses in the next volumes.
http://judaism.stackexchange.com/questions/18340/should-a-woman-wear-a-gartel#comment44058_18340
@DoubleAA the Levush holds the same thing.
@DoubleAA I just saw this Rambam (Shabbat 19:14) that wearing tefillin is a malbush (garment). I would say it has become a beged ish (man's garment), but it seems that the Rambam held that way. The question is: if a woman found tefillin in a reshus harabim, would it be a malbush for her or not? See halacha 23, which uses masculine language only.
One Orthodox adjudicator's view:
The Ben Ish Chai, in Chukei Nashim chapter 43 (below), says women can do all time-bound mitzvot except tefillin and tzitzit.
In his shut Rav Pealim, part 1 (end), Sod Yesharim 12, he gives a kabbalistic source/explanation for their not having permission to do them.
The question was really about the Talmud and Rema. This answer presents neither...
How can I make my button clickable only under certain circumstances?
I am a beginner in front-end development and I am trying to make this clicker game. When I press
Increase, the value increases by 1. After 20 clicks (a number of my choice), you can press Upgrade and
get your clicks doubled. The problem is that after 20 clicks and the first upgrade, I can still press Upgrade, resetting my score. You can try it and see! I want to create different upgrade points: the first at 20 clicks, increasing by 1 per click;
the second at 80 clicks, increasing by 2 per click; and so on. Please help.
JavaScript code:
let count = 0;
let upgrade = 1;
var snd = new Audio("click.wav");
var upgradeSnd = new Audio("upgrade.wav");
document.getElementById("increase").onclick = function(){
count = count + upgrade;
document.getElementById("value").innerHTML = count;
if(count >= 20)
document.getElementById("upgrade").onclick = function() {
count = -2;
upgrade = 2;
count += upgrade;
document.getElementById("value").innerHTML = count;
}
}
HTML code:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="style.css">
<title>Document</title>
</head>
<body>
<label type="number" id="value">0</label><br>
<div class="buttons">
<button type="button" id="decrease">Decrease</button>
<button type="button" id="reset">Reset</button>
<button type="button" id="increase">Increase</button>
<button type="button" id="upgrade">Upgrade</button>
</div>
<script src="index.js"></script>
</body>
</html>
CSS code:
#value {
display: flex;
align-content: center;
justify-content: center;
font-size: 50px;
font-family: Verdana;
color: yellowgreen;
}
body {
background-color: black;
display: flex;
flex-direction: column;
align-content: center;
justify-content: center;
column-gap: 10px;
row-gap: 10px;
margin: auto;
height: auto;
width: 100%;
}
.buttons {
display: flex;
align-items: center;
justify-content: center;
column-gap: 20px;
size: 30px;
}
#decrease {
background-color: yellowgreen;
width: 90px;
height: 30px;
color: black;
cursor: pointer;
}
#increase {
background-color: yellowgreen;
width: 90px;
height: 30px;
cursor: pointer;
}
#reset {
background-color: yellowgreen;
width: 90px;
height: 30px;
cursor: pointer;
}
#upgrade {
background-color: yellowgreen;
width: 90px;
height: 30px;
cursor: pointer;
}
I think this is what you want.
This code checks that your count is at least 20, but also makes sure you haven't already bought that upgrade. It then subtracts the upgrade's cost from your existing count, and the upgrade won't fire again even if the count afterwards is high enough. When the count reaches the next upgrade level, the process repeats and the number of clicks per press increases again. The values can be changed for further upgrade options.
let count = 0;
let upgrade = 1;
var snd = new Audio("click.wav");
var upgradeSnd = new Audio("upgrade.wav");
// Increase and update the count.
document.getElementById("increase").onclick = function () {
count = count + upgrade;
document.getElementById("value").innerHTML = count;
};
// Handle upgrade events
document.getElementById("upgrade").onclick = function () {
// Check if the count is high enough for an upgrade but the upgrade hasn't already happened.
if (count >= 20 && upgrade < 2) {
count = count - 20;
upgrade = 2;
document.getElementById("value").innerHTML = count;
} else if (count >= 80 && upgrade < 4) {
count = count - 80;
upgrade = 4;
document.getElementById("value").innerHTML = count;
}
// Repeat for however many you want.
};
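The repeated if/else branches above can also be generalized into an upgrade table, so adding a new tier only means adding an entry. The tier numbers below are just example values:

```javascript
// Data-driven variant of the upgrade logic: each tier has a cost
// (clicks spent to buy it) and the new per-click gain it unlocks.
const upgrades = [
  { cost: 20, gain: 2 },
  { cost: 80, gain: 4 },
  { cost: 200, gain: 8 },
];

let count = 0;     // current score
let gain = 1;      // clicks added per press
let nextTier = 0;  // index of the next purchasable upgrade

function increase() {
  count += gain;
}

function upgrade() {
  const tier = upgrades[nextTier];
  // Buy only if a tier is left and we can afford it; a bought tier
  // can never be bought again because nextTier moves forward.
  if (tier && count >= tier.cost) {
    count -= tier.cost;
    gain = tier.gain;
    nextTier += 1;
  }
}
```

The existing onclick handlers would then just call increase() or upgrade() and write count back into the #value element, so the HTML stays unchanged.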
Hope this helps!
Thank you so much! It made me understand how to do it! Very useful!
All good! Happy that it helped you
Laravel Artisan CLI safely stop daemon queue workers
In order to process large numbers of jobs, I run a variable number of queue workers depending on how much work there is to complete. I don't want to run more workers than are necessary to complete the work that needs to be done in a time period that we deem appropriate.
At the moment, I start 5 daemon queue workers for testing purposes; however, in production this number could be between 25 and 100 workers, possibly more. I understand that when deploying, I have to stop the queue workers by first placing the framework in maintenance mode using php artisan down, because the --daemon flag causes the framework to be loaded only when the worker starts up, so new code would not take effect during the deploy until the worker restarts.
If I needed to stop the workers for some reason, I could put the application in maintenance mode using php artisan down, which will cause the workers to die once they have finished processing their current job (if they are working on one). However, there may be times when I want to kill the workers without putting the whole application in maintenance mode.
Is there a safe way to stop the workers in a way where they will continue processing their current job and then die without placing the whole application in maintenance mode?
Essentially what I need is a php artisan queue:stop, which behaves like php artisan queue:restart, but does not restart the worker once the job is done.
I was expecting there to be something like a php artisan queue:stop command that would do this, but that doesn't seem to be the case.
Using ps aux | grep php I am able to get the process IDs of the workers, and I could kill the processes that way, but I don't want to kill a process in the middle of working on a job.
Thanks.
For anyone who finds this question: http://stackoverflow.com/questions/30060526/laravel-artisan-cli-safely-stop-daemon-queue-workers#comment48332831_30087630 has the answer. php artisan queue:restart will stop the workers. The "restart" name is misleading; the workers won't actually restart after they are stopped.
That is correct. The "restart" only happens if you have something like Supervisor set up to monitor the workers and automatically restart them once they quit. (Which is the typical setup.)
We've implemented something like this in our application, but it was not something built into Laravel itself. You would have to edit this file by adding another condition to the if-block so that it calls the stop function. You can do this either by setting a static variable in the Worker class that gets changed whenever you run a custom command that you'll have to write (i.e. php artisan queue:pause), or by checking an atomic value somewhere (i.e. set it in some cache like Redis, Memcached, APC, or even MySQL, though the latter would mean one MySQL query for every cycle of this while-loop) that you set using the same custom command.
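The shape of that change, in rough pseudocode (queue:pause is a hypothetical command you would write yourself, and the actual Worker loop internals vary by Laravel version):

```text
# inside Illuminate\Queue\Worker's daemon while-loop
loop:
    if cache.get("queue:paused"):   # flag set by your custom command
        stop()                      # worker exits; the current job has already finished
    job = pop_next_job()
    if job: process(job)
    else: sleep(n)

# the custom commands just toggle the flag:
#   php artisan queue:pause   -> cache.forever("queue:paused", true)
#   php artisan queue:unpause -> cache.forget("queue:paused")
```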
Yeah, I was afraid I was going to have to do something like this. I'll have to get Redis or Memcached set up; checking the flag is kind of a waste of a query, especially when it's happening for every iteration in a loop, with 25 - 100 workers. Best answer I've got so far, so +. Hopefully they add something like this in later versions.
Yeah, APC/redis/memcached would be the way to go, with APC being the best in this situation.
When using the --daemon flag workers shouldn't quit when the queue is empty.
I think what you are looking for is in the documentation for queues.
The php artisan queue:restart command will prompt the workers to restart after they are done their current job.
Correct, I do not want the workers to stop when the queue is empty. What I am looking to do is stop the queue workers on command, but to do it "safely", so that they do not stop in the middle of processing a job. php artisan queue:restart, unfortunately, does not do what I need it to do. I am not looking to restart the workers, I am looking to stop them "safely". The documentation does not contain any information on stopping a queue worker; it merely mentions that you may have to do it "manually".
Essentially what I need is a php artisan queue:stop, which behaves like php artisan queue:restart, but does not restart the worker once the job is done.
Open a terminal with php artisan queue:work --daemon and run php artisan queue:restart in another terminal. The first terminal will actually just shut down after a while. It doesn't actually restart it; that's up to a separate process like http://supervisord.org/ to handle.
Since Laravel 5.5 there is an event called Illuminate\Queue\Events\Looping that gets fired from the daemonShouldRun() call inside the main worker loop of Illuminate\Queue\Worker. So if you set up a listener to do your should-process-jobs check and return false, then the queue worker(s) will stop until the check returns true. There's a sleep before the next check, which you can customise by passing --sleep <seconds> to the queue:work command.
I'm currently using this technique during deployments to stop workers which run inside docker containers, as it's not so easy to run the suggested queue:restart on them without hacking around.
My Laravel is 5.6.
You can kill your worker's PID; don't worry about losing your work.
Just load the pcntl extension; Laravel can listen for the signal and exit safely.
Part of the source is shown below (in ./vendor/laravel/framework/src/Illuminate/Queue/Worker.php):
protected function listenForSignals()
{
    pcntl_async_signals(true);

    pcntl_signal(SIGTERM, function () {
        $this->shouldQuit = true;
    });

    pcntl_signal(SIGUSR2, function () {
        $this->paused = true;
    });

    pcntl_signal(SIGCONT, function () {
        $this->paused = false;
    });
}
And my test below:
for ($i = 0; $i < 100; $i++) {
    pcntl_async_signals(true);
    pcntl_signal(SIGTERM, function () {
        echo 'SIGTERM';
    });
    pcntl_signal(SIGUSR2, function () {
        echo 'SIGUSR2';
    });
    pcntl_signal(SIGCONT, function () {
        echo 'SIGCONT';
    });
    echo $i;
    sleep(1);
}
you can try to kill it
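Roughly the same experiment can be reproduced outside PHP. Here is a hedged analog using Python's standard signal module (POSIX-only, since SIGUSR2/SIGCONT do not exist on Windows); the flags mirror Laravel's shouldQuit/paused, and the process signals itself instead of receiving a kill from another shell:

```python
import os
import signal

state = {"should_quit": False, "paused": False}

# Mirror the three handlers Laravel's Worker installs via pcntl_signal().
signal.signal(signal.SIGTERM, lambda signum, frame: state.update(should_quit=True))
signal.signal(signal.SIGUSR2, lambda signum, frame: state.update(paused=True))
signal.signal(signal.SIGCONT, lambda signum, frame: state.update(paused=False))

# Send ourselves the signals instead of running `kill <pid>` from another shell.
os.kill(os.getpid(), signal.SIGUSR2)   # pause
os.kill(os.getpid(), signal.SIGCONT)   # resume
os.kill(os.getpid(), signal.SIGTERM)   # request a graceful quit
```

The worker's loop then consults these flags between jobs, which is why killing the PID with SIGTERM is safe.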
Yup. Not underestimating the importance of the pcntl extension: making sure your kill signal isn't set to SIGKILL (the default on the official Docker PHP images) but rather SIGTERM fixes it!
also - if (something=something) ? app('queue.worker')->shouldQuit = 1;
works well
It's also really good for jobs that suck up RAM, as it kills off the process and then your Horizon or whatever will kick it back in.
Objective-C memory management and nil?
In my book, a *joystick ivar was assigned @property (nonatomic, retain), and it wasn't released, only set to nil, in the -dealloc method. In the -init method, the same joystick was set to nil. What does this mean?
If it's not released in the dealloc, that's a memory leak if it's allocated or retained in the init. For example: myproperty = [aProperty retain] or self.myproperty = aProperty
If you have a property like:
@property (nonatomic, retain)
the setter method generated by synthesize will take care of releasing the object currently pointed to by the ivar before assigning the new one to it. So,
self.property = xxx;
is equivalent (if you like) to:
if (property != xxx) {
    [xxx retain];
    [property release];
    property = xxx;
}
Now, it is considered good practice to set an ivar to nil after releasing it:
[property release];
property = nil;
This is a common release idiom in ObjC.
As you see, if you assign nil to a property (i.e., xxx = nil in the example above) what you get is just this: the ivar will be released and its value set to nil. Assigning nil to a property is therefore just a shorthand for this "release idiom".
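As an illustration only (Python, not Objective-C), here is a toy reference-count model showing what the synthesized retain setter and the release idiom above do, including why the new value is retained before the old one is released:

```python
class Obj:
    def __init__(self, name):
        self.name = name
        self.refcount = 1              # owned by whoever alloc'd it

    def retain(self):
        self.refcount += 1
        return self

    def release(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.name = "<deallocated>"

def set_property(holder, new_obj):
    """Mimic a synthesized retain setter: retain new, release old, assign."""
    old = holder.get("ivar")
    if old is not new_obj:
        if new_obj is not None:
            new_obj.retain()           # retain FIRST ...
        if old is not None:
            old.release()              # ... then release, as in the snippet above
        holder["ivar"] = new_obj

holder = {"ivar": None}
a = Obj("a")
set_property(holder, a)                # a.refcount -> 2 (caller + property)
a.release()                            # caller gives up ownership; property still owns it
set_property(holder, None)             # the "release idiom": old released, ivar set to None
```

After the last call the object's count reaches zero and it is "deallocated", while the ivar is left pointing at nothing, which is exactly what assigning nil to a retain property achieves.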
+1, but correct property = xxx; — it should be property = [xxx retain];
@Anton: you are right, thanks! I was writing by memory thinking about the release part...
thanks for the reply! Does that mean if something like @property (nonatomic, retain) NSString *name only has to be set to nil in the dealloc method? And that releasing it in the dealloc method is optional?
You can set a property to nil whenever it makes sense (consider a label: when it is non-nil you display it, when it is nil you don't). And you should always release, one way or another, in dealloc if you care about memory leaks. Don't count on being able to foresee what happens to your ivars so that you can skip this... furthermore, properties make it so simple...
I'd say self.property = xxx is equivalent to if (property != xxx) { [property release]; property = [xxx retain]; }, i.e. the retain should also only happen if xxx does not equal the value already there. IIRC, this is surrounded by locks too, but I may be remembering incorrectly.
@sergio: your correction is better than what I wrote. Mine could dealloc it before it could be retained. +1
um so how would you release name? With a [name release] and name = nil in the dealloc method or just a name = nil in the dealloc method?
The standard way is: [name release]; name = nil;. you can use self.name = nil; as a shorthand to the same effect (but then you should be sure that the property is a retain one). Actually, using the property is not as clear and could be considered bad practice, because you say you do something (assigning nil), but you are looking for a side effect of doing that (releasing and assigning nil).
Oh I'll just use the standard way instead of the short cut then. :p But why was joystick only set to nil and never released? Won't that cause a memory leak? Why was it set to nil in the init method? Thanks again!
please, notice that self.name = nil is different to name = nil; the latter will not release, thus it would be a memory leak. setting to nil in the init method means simply setting an initial value.
Properties are essentially automatically generated accessor methods for your ivars. You can even override properties, so if your property is called joystick then the automatically generated method is equivalent to
- (void)setJoystick:(MyType *)aValue
{
    if( aValue != joystick )
    {
        [joystick release];
        joystick = [aValue retain];
    }
}
More than this actually happens if you do not have nonatomic; there may be stuff to deal with thread access.
So you can see that if you set self.joystick = nil you are calling setJoystick: with nil, and so releasing the current value and setting the ivar to nil.
Thanks for the reply! Is joystick being retained or is it aValue? In joystick = [aValue retain];
Yes to both: aValue and joystick are pointers to some location in memory. Think of pointers as indices to memory blocks. Initially joystick has the value 6, which refers to memory block 6, and aValue has the value 8, which points to memory block 8. [joystick release] tells the memory at block 6 to decrease its reference count; [aValue retain] just increases the retain count of memory block 8 and then returns 8, so that joystick is now assigned 8, referring to the exact same memory block.
Thank you! I understand what's going on in the @synthesize now.
Separate text from hyphens line
I am wondering if there is a better approach than what I am currently taking to parse this file. I have a string that is in the general format of:
[Chunk of text]
--------------------
[Another chunk of text]
(There can be multiple chunks of text with the same separator between them)
I am trying to parse the chunks of text into elements of a list, which I can do with data.split('-'*20) in this case; however, if there are not exactly 20 hyphens the split will not work as intended. I have been playing around with regex, but am currently unsure of a proper regex that could be used.
Are there any better methods that I should use in this situation, or is there a regex I should use as opposed to the .split() method?
re.compile(r'-+') matches at least one -. re.compile(r'^-+$') matches a row of just -.
related: http://stackoverflow.com/questions/10974932/python-split-string-based-on-regular-expression
You want a regex split. I'm not python-literate, but I found the function in the official 2.7.10 documentation, and modified to your case:
>>> re.split('\n\-{4,}\n', input)
4 is the minimum amount of dashes you want to match.
\n are the newlines before and after. You probably don't want those in your text.
I would try to use re.split() with the regex --+ which means:
- - one hyphen
-+ - one or more hyphens
... this way it would not match a single hyphen, but everything more than one, alternatively you could use -{2,} which means two or more.
As far as I know, this will capture the hyphens themselves, resulting in the text being ignored (which is not the intended result).
@MarkN But, did you used split() with that? I thought that this is what you ask about, regex to split text
I see..I understand now, thank you. You may wish to add this to your answer? [About the combination of regex and split]
You may want to use ^--+$ (or ^-{2,}$) so that it splits only on a line containing only hyphens and doesn't split on two or more hyphens within the text.
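A short sketch of that combination, using re.MULTILINE so the ^ and $ anchors match at line boundaries (the sample text is made up):

```python
import re

data = """First chunk of text
with an in-line hyphen - that must survive
--------------------
Second chunk
-----
Third chunk"""

# Split on separator lines made of two or more hyphens, regardless of count.
chunks = re.split(r"\n-{2,}\n", data)

# Equivalent, anchoring to whole lines explicitly (pieces keep the
# surrounding newlines, so strip them):
chunks2 = [c.strip() for c in re.split(r"^-{2,}$", data, flags=re.MULTILINE)]
```

Both forms leave the single " - " inside a chunk untouched and handle separators of any length from 2 hyphens up.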
Would it become more complex to try and handle cases with different numbers of hyphens per separator, or could this be handled as well with this combination, because .split() would not cooperate as wanted? (Not required)
@MarkN This regex (as well as those from the comments) will match any number of hyphens greater than one; you can test it here. However, pay attention to the fact that in the example the multiline and global match modes are used.
@MarkN I don't exactly understand what you mean... this regex will split text if there is more than one hyphen in a row: 2, 4, 10, 5689 or more. But as long as they are in a row, '------' is treated as a single match. I encourage you to paste your example text into the regex101 web site and try it with this regex; you will see how it exactly works.
@m.cekiera I was confusing your reference of split() with str.split() instead of re.split().
@m.cekiera I'm sorry again, I was concentrated on the regex, as I don't know Python too well. Thx for your edit on my post.
Gulp appending to files, not overwriting
I'm trying to concatenate my JS files and run them through Babel for a new project, but instead of overwriting the destination file on each task run, my gulpfile only appends changes to the file. So my destination file ends up looking like this:
console.log('hello');
//# sourceMappingURL=app.js.map
console.log('goodbye');
//# sourceMappingURL=app.js.map
What am I missing? Below is my gulpfile.
Thanks in advance.
var gulp = require('gulp');
var sourcemaps = require("gulp-sourcemaps");
var uglify = require('gulp-uglify');
var rename = require('gulp-rename');
var concat = require('gulp-concat');
var babel = require('gulp-babel');
var browserSync = require('browser-sync').create();
var reload = browserSync.reload;
gulp.task('js', function(){
return gulp.src("./app/js/*.js")
.pipe(sourcemaps.init())
.pipe(concat("app.js"))
.pipe(babel())
.pipe(sourcemaps.write("."))
.pipe(gulp.dest("./app/js/"));
});
gulp.task('js-reload', ['js'], reload);
gulp.task('serve', ['js'], function() {
browserSync.init({
server: "./app"
});
gulp.watch("./app/js/*.js").on('change', ['js-reload']);
gulp.watch("./app/*.html").on('change', reload);
});
gulp.task('default', ['js', 'serve']);
If I remember correctly, you have to add ./ to overwrite files.
You're reading and writing to the same destination directory. Therefore the file app.js is first read, some stuff is added to it, and then the result is written to app.js, causing this appending behaviour. You should output to a different directory than you are reading from.
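The feedback loop is easy to reproduce outside gulp. This Python sketch simulates two runs of "concatenate every *.js in a directory into app.js", first writing back into the source directory and then writing to a separate output directory (file names here are made up):

```python
import pathlib
import tempfile

def concat_run(src_dir, out_file):
    """Concatenate every .js file found in src_dir into out_file."""
    parts = [p.read_text() for p in sorted(src_dir.glob("*.js"))]
    out_file.write_text("\n".join(parts))

# Case 1: output lands back in the source directory (the gulpfile above).
same = pathlib.Path(tempfile.mkdtemp())
(same / "a.js").write_text("hello")
concat_run(same, same / "app.js")           # app.js == "hello"
concat_run(same, same / "app.js")           # now globs a.js AND app.js
duplicated = (same / "app.js").read_text()  # "hello\nhello": previous output re-read

# Case 2: output to a separate directory, as suggested.
src = pathlib.Path(tempfile.mkdtemp())
(src / "a.js").write_text("hello")
out = pathlib.Path(tempfile.mkdtemp()) / "app.js"
concat_run(src, out)
concat_run(src, out)
clean = out.read_text()                     # still just "hello"
```

The second run in case 1 picks up the previous app.js as an input, which is exactly the appending behaviour in the question; changing gulp.dest to a directory outside the src glob makes the task idempotent.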
Silverlight 4: Resolving Microsoft.Silverlight.CSharp.targets was not found?
I've been upgrading some Silverlight 3 apps to Silverlight 4 in Visual Studio 2010. My Silverlight 3 apps open fine in Visual Studio, but SL4 apps don't, with the following error:
C:\Path\To\MyProject.csproj : error : Unable to read the project file 'XNTVOD.AdminClient.csproj'.
C:\Path\To\MyProject.csproj(593,3): The imported project "C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Silverlight.CSharp.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
I had a problem with older VS Silverlight components and recently uninstalled most of the SL components, and right now in Add/Remove programs I have:
Microsoft Silverlight
Microsoft Silverlight 3 SDK
Microsoft Silverlight 4 Toolkit April 2010
The <import> declaration looks like this for the SL4 project:
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\Silverlight\$(SilverlightVersion)\Microsoft.Silverlight.CSharp.targets" />
That folder, C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0 only has two files in it:
Microsoft.Ria.Client.targets
Microsoft.Ria.Client.VisualStudio.targets
What Silverlight development component am I missing in particular? I see a bunch of different options, from Silverlight 4 SDK Beta to VS Tools for Silverlight 4 and a bunch of others. I don't want to install stuff that will get me right back to the situation I had before this one with outdated components.
I'm having a similar problem. My error is telling me I'm missing the proper file in "\Silverlight\v3.0" even though I have v4.0 installed. I also have the proper Silverlight 4.0 file, but VS doesn't seem to see it.
Looks like this is the missing piece...
Silverlight 4 Tools for Visual Studio 2010
What about Visual Studio 2015? Is there a distinct link per Visual Studio version, or is the version indifferent?
The file that's missing ships in the Silverlight 4 SDK. You can either install just the Silverlight 4 SDK, or re-install the entire Silverlight 4 Tools for VS2010 package (which will re-install the developer runtime, SDK, a hotfix for VS2010, the Silverlight 4 Tools package, and WCF RIA Services).
In case of VS SP1, you can't reinstall the SL 4 Tools for VS2010. Just (re)install the SDK.
Installing the Silverlight 4 SDK was what I needed, because I needed it available while not having VS2010 but 2013
You may get this with Silverlight version 4 projects when trying to open in version 5 if version 4 bits are not on that machine. What has worked for me (after several hours of trying everything) is to edit the csproj file and change the silverlight target version
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
From 4 to 5
This worked great for me! And I didn't have to install any of the 4.0 components that I would have never used. Thanks!
I'm getting this problem, and already ha[d,ve] v5.0 in that section.
You need to build it using x86 instead of Any CPU.
This worked well for me...also had to install Silverlight 4 Developer runtime
After many tries, what worked for me was:
1. go to Add or Remove Programs
2. remove all Silverlight versions installed (4, 5, or even if the version is not specified, remove it too!)
3. install Silverlight 4 SDK
4. install Silverlight 5 for developers
Finally opened the project in VS 2010 SP1
A previously working installation can break when you install a new version of TFS on the server (or whatever it was that messed with my MSBuild).
My 'targets' files had disappeared from C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v5.0 on Server 2012, and reinstalling the tools brought them back.
I believe it may be possible to just copy the targets files from another machine, but I'm not 100% sure.
Organizing Dynamics CRM customizations and update test environment
We are actually reorganizing our CRM customizations. Till now we had one main solution which was containing all the customizations and now we would like to split it by technical matters.
So now on our development instance, we have 4 unmanaged solutions that we would like to publish on tests instance which has the old managed solution.
We plan to do the following:
-> Export the 4 solutions to managed
-> Import them to test instance
-> Uninstall the old solution from test instance
I have a doubt concerning that procedure. Will it break something?
At some point we'll have the same customizations from different solutions. What do you think ?
I tested your steps on a trial environment with a couple of solutions, and although when I began I believed (as Arun answered) that uninstalling a managed solution would delete all objects regardless of usage by other solutions, when I actually tested it, it did not delete them. Data was also kept.
So the steps:
-> Export the 4 solutions to managed
-> Import them to test instance
-> Uninstall the old solution from test instance
Might work with no issues.
I would recommend that you make sure to checklist all elements so nothing is left behind.
If you have an available instance I would also say that you first restore a backup and test that everything goes as planned, but from my test it worked out.
Thanks for testing & sharing the results, you saved me a couple of hours this weekend :)
I'm also curious about this exercise. If this is another sandbox to play around in, only disrupting the QA team (without any concern for the Prod instance), I will go with the listed steps to see if they go through. We can wipe out this Test org with a restore from Prod anytime later if it didn't go all the way through.
Or else spin a new sandbox copy of exact Test replica for dry run.
At some point we'll have the same customizations from different solutions.
True, but uninstalling the existing managed solution will remove the components even though they are part of another managed solution, if I'm not wrong.
This is a common approach. We also have split our customizations into several solutions. (e.g. one for Plugins, Security Roles, Web Ressources...)
You can split your customization work into as many solutions as you like, but don't overdo it.
Tabs not coming in bottom when using fragment tab host
Hi I am using FragmentTabHost to display some tabs in my application.
Tabs are displayed but not appearing at the bottom. Here is my layout code for activity
<android.support.v4.app.FragmentTabHost
xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@android:id/tabhost"
android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<TabWidget
android:id="@android:id/tabs"
android:orientation="horizontal"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_weight="0"/>
<FrameLayout
android:id="@android:id/tabcontent"
android:layout_width="0dp"
android:layout_height="0dp"
android:layout_weight="0"/>
<FrameLayout
android:id="@+android:id/realtabcontent"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_weight="1"/>
</LinearLayout>
</android.support.v4.app.FragmentTabHost>
The code I used to display is taken from Android documentation. My problem is tabs are always on top and they're not appearing at the bottom
Actually it's against the Android Design Guidelines, since at the bottom there are soft/hard buttons like the back button, home button, etc.
http://developer.android.com/design/patterns/pure-android.html
But if you insist on putting them at the bottom, you should tweak it; there is an already-implemented one below:
https://github.com/rameshkec85/BottomTabsFragmentTabHost
Hi, I tried this solution and it works great. I have a little situation with this: actually I want to replace one fragment with another, and I am able to do that also. However, when I go back to some other tab and then come back again, it ends with a "no activity" error.
I have asked this question in stackoverflow but no one answered http://stackoverflow.com/questions/18635403/fragmenttabhost-no-activity-illegalstateexception-error
Hmm, can you clarify a bit, maybe post some code? I couldn't figure it out so far.
Can I mail you at your email ID?
i will take a look at your post, btw someone already answered, check it out
Try this: https://stackoverflow.com/a/23150258/2765497
Simply change TabHost to android.support.v4.app.FragmentTabHost to support API < 11.
Android: Activity taking too long to display because of web service Http Request
One of my activities make a http request to a webservice to get some weather data when I start the application.
The issue is that the activity will take 3-4 seconds to display because of the webservice request. (Tested on an actual device.)
I know I'm not doing this the right way. All I'm doing, in the onCreate method, is making the request, getting the XML back, parsing it, and displaying the data.
What is the best way to deal with webservice requests in Android so the application won't display a white screen while the request is being made? Maybe some threads.......
I know this is not happening in other applications I have on my device that make requests to get live data.
Notes:
1) The xml I getting back is not that big ( 5 elements with 5 nested elements on each one).
2) I tried with the 3G network and Wifi but the response time is still the same.
sample code:
@Override
public void onCreate(Bundle icicle) {
    super.onCreate(icicle);
    setContentView(R.layout.clock_weather);

    // this is where it is making the request and parsing the xml.
    WeatherSet set = getWeatherCondition("New York, NY");

    TextView currentWeather = (TextView) findViewById(R.id.current_weather);
    currentWeather.setText("" + set.getWeatherCurrentCondition().getTempFahrenheit());

    TextView currentWeatherH = (TextView) findViewById(R.id.current_weatherH);
    currentWeatherH.setText("H: " + set.getWeatherForecastConditions().get(0).getTempMaxFahrenheit());

    TextView currentWeatherL = (TextView) findViewById(R.id.current_weatherL);
    currentWeatherL.setText("L: " + set.getWeatherForecastConditions().get(0).getTempMinFahrenheit());

    ImageView currentWeatherIcon = (ImageView) findViewById(R.id.current_weather_icon);
    String imageUrl = set.getWeatherCurrentCondition().getIconURL();
    Drawable bitmapDrawable = getImageBitmap(imageUrl);
    currentWeatherIcon.setImageDrawable(bitmapDrawable);

    setForecastInfo(set, R.id.day1, R.id.day1_icon, R.id.day1_temp, 1);
    setForecastInfo(set, R.id.day2, R.id.day2_icon, R.id.day2_temp, 2);
    setForecastInfo(set, R.id.day3, R.id.day3_icon, R.id.day3_temp, 3);
    setForecastInfo(set, R.id.day4, R.id.day4_icon, R.id.day4_temp, 4);
}
The time for your response is unpredictable: your network connection can be very poor and take seconds to transfer a few bytes. So the correct way to do this (as you propose) is to use a thread. In our case Android provides a very useful class to handle these situations: AsyncTask. After you read the docs you will notice that it has 3 very powerful methods that can help you:
onPreExecute runs in the ui thread - very helpful to show some spinner or some progress indicator to show the user that you are doing some work in background
doInBackground runs in background - do your background work here
onPostExecute runs in the ui thread - when you are done with your background work, hide the progress and update the gui with the newly received data.
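AsyncTask is Android-specific, but the three-phase shape above (prepare on the UI thread, work in the background, publish on the UI thread) can be sketched generically. Here it is in Python, with a worker thread and a queue standing in for the UI thread's message loop (the event strings and temperature value are made up):

```python
import queue
import threading

ui_events = queue.Queue()   # stand-in for the UI thread's handler/looper

def on_pre_execute():
    ui_events.put("show spinner")

def do_in_background():
    # Slow network/parse work belongs here, off the UI thread.
    return {"temp_f": 72}

def on_post_execute(result):
    ui_events.put("hide spinner")
    ui_events.put(f"display {result['temp_f']}F")

def execute():
    on_pre_execute()                  # still on the "UI" thread
    def run():
        result = do_in_background()
        on_post_execute(result)       # AsyncTask marshals this back to the UI thread
    t = threading.Thread(target=run)
    t.start()
    return t

execute().join()
events = [ui_events.get_nowait() for _ in range(ui_events.qsize())]
```

The UI never blocks on the network call; it just reacts to the "show spinner" / "hide spinner" / "display" events in order.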
Thank you All for the quick and great responses.
I have a question. When I open the HTC built-in weather app or any other weather app from the Market, it does not show me any progress bar and does not take any time to show the response. Would it be showing old data and then refreshing in the background after the activity is launched? Would it be OK to do something like that for a weather activity?
It's just not having the progressdialog show. You can do that, but know that it'll show defaults or something different until the AsyncTask sets what you're changing (in the onPostExecute)
private class getWeather extends AsyncTask<Context, Void, Cursor> {
    ProgressDialog dialog = null;

    protected void onPreExecute() {
        dialog = ProgressDialog.show(CLASS.this, "",
                "Loading. Please wait...", true);
    }

    @Override
    protected Cursor doInBackground(Context... params) {
        WeatherSet set = getWeatherCondition("New York, NY");
        return null;
    }

    protected void onPostExecute(Cursor c) {
        dialog.dismiss();
    }
}
Then where you have WeatherSet set = getWeatherCondition("New York, NY"); now, you'll put new getWeather().execute(this);
I suggest reading how the AsyncTask works, and see why this should work. It goes outside the onCreate() method.
This is regarding AsyncTask, I just want to help understanding the concept, it is really useful:
DownloadFilesTask dft = new DownloadFilesTask(this);
//Executes the task with the specified parameters
dft.execute(Void1...);
...
...
...
dft.cancel(boolean);
private class DownloadFilesTask extends AsyncTask<Void1, Void2, Void3> {
    // Runs on the UI thread before doInBackground(Void1...)
    protected void onPreExecute() {
    }

    // Runs in the BACKGROUND thread
    protected Void3 doInBackground(Void1... urls) {
        // it can be invoked from doInBackground(Void1...) to publish updates
        // on the UI thread while doInBackground(Void1...) is still running
        publishProgress(Void2...);
    }

    // Runs on the UI thread after publishProgress(Void2...) is invoked
    protected void onProgressUpdate(Void2... progress) {
    }

    // Runs on the UI thread after doInBackground(Void1...) has finished
    protected void onPostExecute(Void3) {
    }

    // Runs in the UI thread after cancel(boolean) is invoked and
    // doInBackground(Void1...) has finished
    protected void onCancelled(Void3) {
    }
}
The response time is normal. Don't worry. Make it a point to run the web-service call in a separate thread.
Regarding the white screen, as soon as you start the web service call, fire a ProgressDialog box. This will run till you receive the response. As soon as you receive the response, dismiss the progressDialog box and start the new activity where you can display the result.
Use the following URLs for reference
http://www.helloandroid.com/tutorials/using-threads-and-progressdialog
http://thedevelopersinfo.wordpress.com/2009/10/16/showing-progressdialog-in-android-activity/
I have implemented the idea I'm giving you and it works perfectly.
Hope I was of some help
You can use the AsyncTask class for your web service. You can write your time-consuming task in doInBackground. You can also use a progress dialog.
Here you can see how to work with AsyncTask. You can also update your UI while the web service response is being parsed, without waiting for the complete parse, using the onProgressUpdate method.
What is the best way to deal with webservice requests in Android so the application won't display a white screen while the request is being made?
Because you said 'white screen" I am assuming you are not using a progress dialog. You need to show a progress spinner/dialog to let the user know you are processing.
Have you checked how large the data is? If the data is too large you really can't do anything; if you have control over the service, it's best to reduce the size.
Logback-classic conflict with SLF4J through different dependencies
I have an internal library that was upgraded to use Logstash, which has a mandatory dependency on Logback, hence logback-classic (which carries its own SLF4J binding inside its packages, which means I can't exclude any library here).
When I try to use this dependency as a jar over any other legacy module (all of them using slf4j-log4j12), I get the log dependency hell message:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/asdf/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/asdf/.m2/repository/org/slf4j/slf4j-log4j12/1.7.26/slf4j-log4j12-1.7.26.jar!/org/slf4j/impl/StaticLoggerBinder.class]
but the problem itself is that I can't just exclude logback-classic, since it is mandatory for logstash, and I can't migrate my applications to use logback, since they are configured with the log4j.xml
Is there a way to force SLF4J to use the application's appender instead of the one that comes from the Logback library, or any other idea to make just a given package use one appender and the rest use another?
Do jQuery ajax calls using JSONP need to be modified (or avoided) due to Rosetta Flash exploit
The fix for the Rosetta Flash exploit is for a service to prepend an empty comment (/**/) to the callback function invocation.
Instead of returning:
my_callback({key1:"value1",key2:"value2"})
it must now produce:
/**/my_callback({key1:"value1",key2:"value2"})
If we're using jQuery ($.ajax) to make the request, do we need to prepend the empty comment to the jsonCallback parameter in the request?
jsonpCallback: "/**/my_callback"
Somehow I don't think that will solve the exploit.
Let me know if you need any further detail adding to my answer.
No, the fix is something the server should do regardless of what the client asks for.
The comment characters won't change the name of the callback that the script will call.
See these slides for some possible mitigations.
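Server-side, the fix is just a prefix plus (ideally) strict validation of the callback name. A framework-agnostic sketch in Python (the function name and the whitelist regex are illustrative assumptions, not from any particular library):

```python
import json
import re

# Conservative whitelist for callback names: dotted JS identifiers only.
CALLBACK_RE = re.compile(r"^[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*$")

def jsonp_response(callback, payload):
    if not CALLBACK_RE.match(callback):
        raise ValueError("invalid callback name")
    # The leading /**/ is the Rosetta Flash mitigation: the reflected
    # response can no longer begin with attacker-controlled bytes that
    # form a valid SWF header.
    return "/**/" + callback + "(" + json.dumps(payload) + ")"

body = jsonp_response("my_callback", {"key1": "value1"})
```

Real deployments would also set an explicit Content-Type (and, per the slides above, headers such as Content-Disposition); the point here is only that the prefix and validation live on the server, not in the client's $.ajax call.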
These protections are all server side as the attack leverages the JSONP callback in order to reflect the encoded flash file to the victim's client. The attacker's script would be sending the encoded Flash to the end-point, not your js, which is why you do not need to alter your jQuery request.
Adding arrays in php
Arrays have been comprehensively covered, but I am still stumped about how to go about this. I have two arrays which I want to merge without overwriting duplicate keys, i.e.
Array1
(
[0] => 0
[1] => 1
[2] => 1
[3] => 1
[4] => 1
[5] => 0
[6] => 0
)
+
Array2
(
[0] => 0
[1] => 0
[2] => 0
[3] => 0
[4] => 0
[5] => 0
[6] => 1
)
my ideal result is
Array1 + Array2
(
[0] => 0
[1] => 1
[2] => 1
[3] => 1
[4] => 1
[5] => 0
[6] => 1
)
How would I do this? I've tried using + but it gives the first array as the result.
So are you saying you want to select the higher value from each of the two array keys? Or you want to mathematically add the total of the two arrays at each key?
Your requirements are inconsistent. Do you want to use values from first array unless overwritten by the second, or you want to do something opposite? In both cases the result you are showing is incorrect. Make up your mind.
The array1 + array2 is the solution. Could you please provide an example of the code you tried?
What you want to do is map both arrays into a single array containing the max of the two respective values, like that:
$array1 = array(0, 1, 1, 1, 1, 0, 0);
$array2 = array(0, 0, 0, 0, 0, 0, 1);
$result = array_map('max', $array1, $array2);
See the result here: http://ideone.com/clone/MN568
It looks like that:
array(7) {
[0]=>
int(0)
[1]=>
int(1)
[2]=>
int(1)
[3]=>
int(1)
[4]=>
int(1)
[5]=>
int(0)
[6]=>
int(1)
}
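For comparison, the same element-wise-max mapping exists almost verbatim in other languages; in Python, map() walks both lists in lockstep just like PHP's array_map('max', $array1, $array2) above:

```python
a1 = [0, 1, 1, 1, 1, 0, 0]
a2 = [0, 0, 0, 0, 0, 0, 1]

# max is applied pairwise: max(a1[0], a2[0]), max(a1[1], a2[1]), ...
result = list(map(max, a1, a2))
```

As with the PHP version, this happens to coincide with a bitwise OR only because the values are 0/1; max generalises to arbitrary numbers.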
+1 for intelligent array_map usage over looping. I assume this is what the OP was trying to do, but with the vague question, who knows?
This worked perfectly, just what I wanted. Thanks for the quick response guys :-)
Just for fun (although in your case with 0 and 1 values it works :)
$array1 = array(0, 1, 1, 1, 1, 0, 0);
$array2 = array(0, 0, 0, 0, 0, 0, 1);
$str1 = implode('', $array1);
$str2 = implode('', $array2);
$result = str_split($str1 | $str2);
Sorry for this variant, I know it's crazy, but I just couldn't not post it. The arrays look like the bit masks 0111100 and 0000001, so I'm just using the bitwise | operator.
So result:
Array
(
[0] => 0
[1] => 1
[2] => 1
[3] => 1
[4] => 1
[5] => 0
[6] => 1
)
array_merge() does not overwrite duplicate elements that have numeric keys.
But the numeric keys that are not overwritten will be appended at the end of the array...
If you are looking for the greater (nonzero) of the two arrays, you can iterate like so:
$array1 = array(1,0,0,1,1,1);
$array2 = array(0,0,1,0,0,1);
$newarr = array();
foreach ($array1 as $k => $v) {
$newarr[$k] = max($array1[$k], $array2[$k]);
}
print_r($newarr);
Array
(
[0] => 1
[1] => 0
[2] => 1
[3] => 1
[4] => 1
[5] => 1
)
If what you need is to add the values, use:
$newarr = array();
foreach ($array1 as $k => $v) {
$newarr[$k] = $array1[$k] + $array2[$k];
}
Given the arrays are of the same length:
function bitwise_or_arrays($arr1, $arr2) {
$result = array();
for ($i = 0; $i < count($arr1); $i++) {
$result[$i] = $arr1[$i] | $arr2[$i];
}
return $result;
}
Just for the fun of it:
$array1 = array(0,1,1,1,1,0,0);
$array2 = array(0,0,0,0,0,0,1);
$array3 = str_split(decbin(bindec(implode('',$array1)) | bindec(implode('',$array2))));
var_dump($array3);
Unfortunately it trims leading zeroes
Using
$array3 = str_split(str_pad(decbin(bindec(implode('',$array1)) | bindec(implode('',$array2))),count($array1),'0',STR_PAD_LEFT));
will restore leading zeroes, but doesn't feel as clean
If what you're looking for is to combine them, use array_combine().
$a = array('green', 'red', 'yellow');
$b = array('avocado', 'apple', 'banana');
$c = array_combine($a, $b);
print_r($c);
// output:
array(
[green] => avocado
[red] => apple
[yellow] => banana
)
Here's how to merge arrays:
$beginning = 'foo';
$end = array(1 => 'bar');
$result = array_merge((array)$beginning, (array)$end);
print_r($result);
//output:
Array(
[0] => foo
[1] => bar
)
Here's how to add values:
$a = array(0=>1, 1=>2, 2=>3, 3=>4);
$b = array(0=>5, 1=>6, 2=>7, 3=>8);
$c = $a[0] += $b[0];
print_r($c);//output: 6
I'm not a guru on PHP, but I hope this helps you even just a bit.
Trying to submit a javascript variable as a placeholder value with no user input. Next.js
There are four inputs in my form: token_name, token_expire_date, token_issue_date, token_value. The first two require user input; however, the second two (token_issue_date, token_value) do not. These fields are predetermined, but I need them all to end up in the same MongoDB document. Here is my pages/index.js file.
import {FieldValues, useForm, UseFormRegister} from "react-hook-form";
import {ToastContainer, toast} from 'react-toastify';
import 'react-toastify/dist/ReactToastify.css';
import useSWR from 'swr';
import React, {useState, useEffect} from "react";
import {useRouter} from "next/router";
const Loading = () => <div>Loading...</div>;
const fetcher = (url) => fetch(url).then(r => r.json())
async function saveFormData(data, url) {
return await fetch(url, {
body: JSON.stringify(data),
headers: {"Content-Type": "application/json"},
method: "POST"
})
}
const Index = () => {
const issuedDate = new Date();
const selectedKey = "123abc";
const tokenPlaceholder = selectedKey;
const fields = [
{type: "text", name: "token_name", required: true, label: "token_name"},
{type: "date", name: "token_expire_date", required: true, label: "token_expire_date"},
{type: "text", name: "token_issue_date", required: true, label: "token_issue_date", placeholder: issuedDate},
{type: "text", name: "token_value", required: true, label: "token_value", placeholder: tokenPlaceholder}
]
const renderForm = ({register, errors, isSubmitting}) => {
return <>
{fields.map(field => {
return (
<>
<label htmlFor={field.name}>{field.label}</label>
<input type={field.type} autoComplete={field.autoComplete} placeholder={field.placeholder}
{...register(field.name, {required: field.required})} />
<div className="error">{errors[field.name]?.message}</div>
</>
)
})}
<button disabled={isSubmitting}>
{isSubmitting ? <Loading/> : "Submit"}
</button>
</>;
}
return <FormComponent url="/api/form" renderForm={renderForm} />
}
function FormComponent({url, renderForm}) {
const {data, error} = useSWR(url, fetcher)
const {register, reset, handleSubmit, setError, formState: {isSubmitting, errors, isDirty}} = useForm();
useConfirmRedirectIfDirty(isDirty)
const onSubmit = async (data) => {
const response = await saveFormData(data, url)
if (response.status === 400) {
// Validation error, expect response to be a JSON response {"field": "error message for that field"}
const fieldToErrorMessage = await response.json()
for (const [fieldName, errorMessage] of Object.entries(fieldToErrorMessage)) {
setError(fieldName, {type: 'custom', message: errorMessage})
}
} else if (response.ok) {
// successful
toast.success("Successfully saved")
} else {
// unknown error
toast.error("An unexpected error occurred while saving, please try again")
}
}
useEffect(() => {
if (data === undefined) {
return; // loading
}
reset(data);
}, [reset, data]);
if (error) {
return <div>An unexpected error occurred while loading, please try again</div>
} else if (!data) {
return <div>Loading...</div>
}
return <form onSubmit={handleSubmit(onSubmit)}>
{renderForm({register, errors, isSubmitting})}
<ToastContainer position="bottom-center"/>
</form>;
}
I am able to get the variables to render as placeholders, but I am not able to get them to submit to my database as placeholders... It doesn't seem to work without the user typing anything in. Any help would be greatly appreciated!
Hey y'all I was able to answer my question from earlier with the help of this other Stackoverflow post... Send post-variable with javascript?
Just add the HTML input hidden attribute with
type: "hidden"
Thanks @Guffa
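A sketch of how the fields array might change with that answer applied. Note that a placeholder is only a visual hint and is never submitted with a form; it is the input's value that gets posted, so the predetermined data has to go in `value` (field names are from the question; the key `'123abc'` is hypothetical, and in the JSX each hidden input would also need `value={field.value}` passed through):

```javascript
// The two predetermined fields become hidden inputs carrying a value.
const issuedDate = new Date().toISOString();
const tokenPlaceholder = '123abc'; // hypothetical key, as in the question

const fields = [
  { type: 'text',   name: 'token_name',        required: true },
  { type: 'date',   name: 'token_expire_date', required: true },
  // value (not placeholder) is what actually gets submitted:
  { type: 'hidden', name: 'token_issue_date',  value: issuedDate },
  { type: 'hidden', name: 'token_value',       value: tokenPlaceholder },
];

console.log(fields.filter(f => f.type === 'hidden').map(f => f.name));
// [ 'token_issue_date', 'token_value' ]
```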
Accessor method is not receiving var from mutator method
I am making a basic calculator in Eclipse, in Java, but I have a problem with one of the methods, as it doesn't receive the right variable.
I know that the problem is in the calculateDifference() and setCurrentValue() method.
public class Dollar {
static int startingValue = 2650;
static int currentValue;
static int dollars;
static int differenceValue = calculateDifference();
static void setDollarQuantity (int dollarValue) {
dollars = dollarValue;
}
static void setCurrentValue(int currentDollar) {
currentValue = currentDollar;
}
static int calculateDifference() {
return ( currentValue - startingValue) * dollars;
}
public static void main(String[] args) {
setCurrentValue(2780);
setDollarQuantity(111);
calculateDifference();
}
}
The expected result from the calculateDifference method was 14,430, but the actual result is 0. I have found that the problem is that the calculateDifference method is not receiving currentValue as 2780, but as 0. Can anyone help me and modify my code?
You are not using the return value of calculateDifference();. Have you put your complete code here?
I think I made a mistake while copying the code; the real code should have
System.out.println(differenceValue); instead of calculateDifference();
Change
static int differenceValue = calculateDifference();
to
static int differenceValue;
and in main()
calculateDifference();
to
differenceValue = calculateDifference();
System.out.println(differenceValue);
This way you will set the differenceValue after the other variables are initialized with correct value, not before.
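Putting the answer's changes together, a corrected version of the class might look like this. The root cause is that a static initializer runs when the class is loaded, before main has set currentValue or dollars, so differenceValue was computed from zeros:

```java
public class Dollar {
    static int startingValue = 2650;
    static int currentValue;
    static int dollars;
    static int differenceValue; // no longer initialized at class-load time

    static void setDollarQuantity(int dollarValue) {
        dollars = dollarValue;
    }

    static void setCurrentValue(int currentDollar) {
        currentValue = currentDollar;
    }

    static int calculateDifference() {
        return (currentValue - startingValue) * dollars;
    }

    public static void main(String[] args) {
        setCurrentValue(2780);
        setDollarQuantity(111);
        differenceValue = calculateDifference(); // now uses the values set above
        System.out.println(differenceValue);     // 14430
    }
}
```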
Derivative of an interpolation function with multiple arguments
I have the following interpolation function:
f[x_,a_,b_]:=70 s[a,b][[1,1,2]][x]
where s[a,b] is the solution of a differential equation.
s[a_, b_] := NDSolve[{
H2[x] == Rx[x]^2 + G[x, a, b],
H2[xmax ]== 1,
H2'[xmax] == Ans[xmax, a, b],
H2''[xmax] == DAns[xmax, a, b]},
H2, {x, xmin, xmax}];
I get this:
Now I want to define the derivative of this function as:
fd[x_,a_,b_]:=Derivative[1,0,0][f]
But it doesn't know how to compute it. I have also tried with
fd[x_,a_,b_]:=Derivative[1,0,0][f][x,a,b]
but it also gives me errors.
What's your code for s?
I didn't write it because it is a bit messy. It's an NDSolve (and I don't know how to write it as a proper code line here): s1[om_,ok_,h_,f0_,f1_,f2_,f3_]:=NDSolve[{
H2[x]==(Rx[x]/ff1[x,f0,f1,f2,f3])*
(1/2*(-f[x,f0,f1,f2,f3]+R[x]ff1[x,f0,f1,f2,f3]/Rx[x])-
3H2[x](ff2[x,f0,f1,f2,f3]Rx[x]-ff1[x,f0,f1,f2,f3]R2x[x])/(Rx[x]^2)+
3(omExp[-3x]+or[h]Exp[-4x]+okExp[-2x]))/3,
H2[xmax1]==1,
H2'[xmax1]==DH2ans[xmax1,om,ok,h],
H2''[xmax1]==DDH2ans[xmax1,om,ok,h]},
(H2[xmin]==H2ans[xmin,om,ok,h],
H2'[xmin]==DH2ans[xmin,om,ok,h],
H2''[xmin]==DDH2ans[xmin,om,ok,h]},)
H2,{x,xmin1,xmax}];
@Maria. It's not messy if it can fit in one comment! Please edit your original post with this code. Comments are ephemeral by design, and so we like questions to be complete on their own. Alternatively, you can come up with a minimal example (one that is simpler than your actual problem that still shows the problem that you are having) and post that instead; this would actually be preferred.
Try this instead. Define fd[x_, a_, b_] := D[70 s[a, b][[1, 1, 2]][y], y] /. y -> x.
@march It works indeed. May I ask why? Thank you :)
It's all about order of evaluation. For instance, if you define fd[x_, a_, b_] := D[70 s[a, b][[1, 1, 2]][x], x], then when you call fd[1, 1, 1], the right-hand side gets evaluated to D[70 s[1, 1][[1, 1, 2]][1], 1], then the insides get evaluated to a number, etc. Basically, you want the derivative with respect to a variable, and a number gets put in its place before it can take the derivative. This is why you put y there, take the derivative with respect to that symbol (which is undefined), and then replace y with x (which is now a number).
You could also simply define fd[x_, a_, b_] := 70 Derivative[1][s[a, b][[1, 1, 2]]][x], which can be typed as 70 s[a,b][[1,1,2]]'[x]
Squashing a Large Branch with Several Merges with Itself
I have a feature branch off of master as shown below.
--S1-- (side branch)
/ \
/ \
-F1---F2---F3---F4---F5--> (feature branch - HEAD)
/ / / /
/ / / /
--M0-----M1---M2---M3---M4 (master)
I've been working on this branch for a long time, so periodically I merge in any updates from master:
git checkout feature
git merge master
I also collaborate with a few others so it's useful to branch off this feature branch to another side-branch where we safely do our work, and then merge that back into the feature
git checkout feature
git merge side-branch
Let's say the feature branch has built up 100+ commits and I want to get the branch ready to merge into master by squashing everything.
I tried using a git rebase, but it only picks up the commits that are on my feature or side branch (i.e. F1, F3 and S1) and ignores the merges.
Usually this would be ok, but applying each of those one by one to master results in conflicts that I have to clear up. But I just did all that hard work to resolve conflicts when I created the merge commits F2, F4, and F5.
Is there a better way to do this?
One thought was that I could take the entire effective diff between M0 and F5 and somehow apply it as a patch to the end of master (M4). That seems messy though.
I'm also interested in the answer to this question. Usually I use your last suggestion, which is to create a patch of the diff between M0 and F5, so as to minimize the conflict resolution later on.
Is there any reason why you can't just merge your feature back into master? You'll end up with a single commit in master from a merge whether or not you do the squash. I personally prefer to keep as much history as possible, unless of course you and you coworkers have sullied the feature with too many commits.
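One approach worth sketching: running `git merge --squash` from master takes the feature branch's final tree as a single staged change, so the conflict resolutions already baked into F2, F4, and F5 carry over without being replayed commit by commit. A hedged demonstration against a throwaway repository (file names and commit messages are made up):

```shell
set -e
# Throwaway repo; branch names match the question's diagram.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com && git config user.name you
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git version

echo M0 > app.txt && git add app.txt && git commit -qm M0
git checkout -qb feature
echo F1 >> app.txt && git commit -qam F1
echo F2 >> app.txt && git commit -qam F2

git checkout -q "$base"
# --squash stages the feature branch's final tree without creating a merge
# commit and without replaying individual commits:
git merge --squash feature
git commit -qm "feature, squashed"
git rev-list --count HEAD   # prints 2: M0 plus the single squash commit
```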
Component Updating Out of Order Vue
Here is the following code I have using Bootstrap, Vue, and Axios:
SETUP:
*Ignore the tab-contents in component_a
main.js
Vue.component('component_a', {
props: ['info'],
template: `<div>
<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 px-md-4">
<ul class="nav nav-tabs" id="myTab" role="tablist" v-for="ep in info">
<li class="nav-item">
<a class="nav-link active" id="home-tab" data-toggle="tab" href="#home" role="tab" aria-controls="home"
aria-selected="true">{{ep.epname}}</a>
</li>
</ul>
<div class="tab-content" id="myTabContent">
<div class="tab-pane fade show active" id="home" role="tabpanel" aria-labelledby="home-tab">Raw denim you
probably haven't heard of them jean shorts Austin. Nesciunt tofu stumptown aliqua, retro synth master
cleanse. Mustache cliche tempor, williamsburg carles vegan helvetica. Reprehenderit butcher retro
keffiyeh dreamcatcher synth. Cosby sweater eu banh mi, qui irure terry richardson ex squid. Aliquip
placeat salvia cillum iphone. Seitan aliquip quis cardigan american apparel, butcher voluptate nisi
qui.</div>
<div class="tab-pane fade" id="profile" role="tabpanel" aria-labelledby="profile-tab">Food truck fixie
locavore, accusamus mcsweeney's marfa nulla single-origin coffee squid. Exercitation +1 labore velit,
blog sartorial PBR leggings next level wes anderson artisan four loko farm-to-table craft beer twee.
Qui photo booth letterpress, commodo enim craft beer mlkshk aliquip jean shorts ullamco ad vinyl cillum
PBR. Homo nostrud organic, assumenda labore aesthetic magna delectus mollit. Keytar helvetica VHS
salvia yr, vero magna velit sapiente labore stumptown. Vegan fanny pack odio cillum wes anderson 8-bit,
sustainable jean shorts beard ut DIY ethical culpa terry richardson biodiesel. Art party scenester
stumptown, tumblr butcher vero sint qui sapiente accusamus tattooed echo park.</div>
</div>
</main>
</div>
</div>`
})
Vue.component('component_b', {
data: () => {
return {
columns: null
}
},
props: ['info'],
template: `<div>
<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
<main role="main" class="col-md-9 ml-sm-auto col-lg-10 px-md-4">
<h2>EP</h2>
<table class="table table-striped table-hover">
<thead>
<tr>
<th v-for="col in columns">{{col}}</th>
</tr>
</thead>
<tbody >
<tr v-for="row in info">
<td v-for="col in columns">
<template v-if="col === 'id'">
<a @click="setActiveDisplay(row[col],'activeEp')" href="#">{{row[col]}}</a>
</template>
<template v-else>
{{row[col]}}
</template>
</td>
</tr>
</tbody>
</table>
</main>
</div>
</div>`,
methods: {
setActiveDisplay: function (id, ad){
this.$emit("set-active-display", id, ad)
}
},
watch: {
info: {
immediate: false,
handler (val,oldVal){
if (val.length == 0) {
this.columns = null
}
this.columns = Object.keys(val[0])
}
}
}
})
var app = new Vue({
el: '#app',
data () {
return {
info: [],
activeDisplay: "dashboard",
}
},
methods: {
getAPIData(id,type) {
axios
.get(`${SERVER}:${PORT}/api/${type}/${id}`)
.then(response => {
this.info = response.data;
})
.catch(error => console.log(error.response));
},
setActiveDisplay: function (id, ad){
if(ad === 'ep'){
if(this.activeDisplay === 'ep'){
return
}
this.getAPIData('*','ep')
}
else if(ad === 'activeEp'){
if(this.activeDisplay === 'activeEp'){
return
}
this.getAPIData(id,'ep')
}
this.activeDisplay = ad;
}
},
mounted () {
this.getData
}
})
main.html
<div v-if="activeDisplay === 'ep'">
<component_b @set-active-display="setActiveDisplay" :info="info"></component_b>
</div>
<div v-else-if="activeDisplay === 'dashboard'">
<dashboard></dashboard>
</div>
<div v-else-if="activeDisplay === 'activeEp'">
<component_a :info="info"></component_a>
</div>
<div v-else>
<dashboard></dashboard>
</div>
PROBLEM:
After clicking a link in B in the table, it correctly calls the setActiveDisplay method and getAPIData. However, after the getAPIData (i.e., Axios GET call) runs, info (root instance) is not updated to the single item that was clicked in the table in B, and I know this from logging to console. info (root instance) contains all objects that are in the table. (Component B is the first component to display when you open the webpage)
Then, the webpage changes (correctly) to display Component A (due to activeDisplay setting the variable, and displaying the correct component), but I can see the same number of tabs as the number of items in the table. And I see this for a split second. Then, Component A is updated to show the correct one tab that was clicked initially.
I'm sure this is an order of operations issue (as I am new to Vue), but can't seem to figure out why the info (root instance) object is not set to the single item when it is run. I've ran through the debugger in the browser, and the getAPIData is never called a second time. The getAPIData is grabbing the correct data, but it's like Vue is holding the result of the getAPIData call from modifying info until Component A is rendered (even though I am asking it to update after clicking the link in Component B)
Order of Operations
Render B in table with multiple objects in root info => User clicks link in table => setActiveDisplay called in B => event emitted calls setActiveDisplay in root => getAPIData called within root setActiveDisplay (correctly working up until here)=> should update root info object to single item => A rendered with one tab due to root info object (and due to passing to A's prop) containing one object from getAPIData call
EDIT1:
After more debugging, I am seeing that axios.js's dispatchRequest is only called with the correct API URL after the click has begun to be processed by vue.js. So, this.info = response.data within getAPIData has no data in it after that method finishes. Here is the flow breakdown I've discovered from debugging:
run getAPIData to completion (causing the root instance's info object to be undefined) => vue.js processes the click event of the item in the table in Component B => axios.js calls the dispatchRequest event with the full URL => more vue.js processing => a response is received from the getAPIData function and the info (root) object is set to the response
So I see that I will need to wait for the Axios call to complete before displaying the data I need. Question is now, how do I do this within Vue? Thank you.
So I see that I will need to wait for the Axios call to complete before displaying the data I need. Question is now, how do I do this within Vue?
I assume you need to wait for the axios.get() call in getApiData() to complete before setting this.activeDisplay. To do this, first return the result of axios.get() (i.e., a Promise), so that callers could await it.
getAPIData(id,type) {
// BEFORE:
//axios.get(...)
// AFTER:
return axios.get(...)
}
Then make setActiveDisplay() an async function, allowing the result of getApiData() to be awaited:
// BEFORE:
//setActiveDisplay: function (id, ad) {
// AFTER:
async setActiveDisplay(id, ad) {
if (...) {
await this.getApiData(...)
}
this.activeDisplay = ad
}
Alternatively, you could do without async/await and use Promise.prototype.then():
setActiveDisplay: function (id, ad) {
if (...) {
this.getApiData(...).then(() => this.activeDisplay = ad)
}
}
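Stripped of Vue and axios, the core of the fix looks like this (plain JavaScript; getAPIData is replaced by a stand-in promise, and the sample data is made up). Because the fetch is returned as a promise and awaited, the view state only changes after the data has landed:

```javascript
// Stand-in for the axios call: resolves with freshly fetched data.
// In the real component this would be `return axios.get(...)`.
function getAPIData() {
  return Promise.resolve([{ id: 1, epname: 'pilot' }]);
}

const state = { info: [], activeDisplay: 'ep' };

// Awaiting the returned promise guarantees state.info holds the new data
// before the displayed component is switched.
async function setActiveDisplay(ad) {
  state.info = await getAPIData();
  state.activeDisplay = ad;
}

setActiveDisplay('activeEp').then(() => {
  console.log(state.activeDisplay, state.info.length); // activeEp 1
});
```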
Incoming shortwave radiation for SEBAL in GEE
I am trying to compute incoming shortwave radiation for the SEBAL model in GEE. Unfortunately, this error occurs: "Image.select: Pattern 'EARTH_SUN_DISTANCE' did not match any bands."
var filtered = landsattoa.filterDate('2021-01-01','2021-12-31')
.filterBounds(uae)
.filter(ee.Filter.lt('CLOUD_COVER',10))
var dem = ee.Image('CGIAR/SRTM90_V4').clip(uae);
var elevation = dem.select('elevation');
Map.addLayer(elevation, {min: 0, max: 60}, 'elevation');
function addRinc(image) {
var transmissivity = ee.Image.constant(0.75).multiply(2).multiply(0.00001).multiply(elevation);
var rinc = (transmissivity.multiply(1367).multiply(image.select('EARTH_SUN_DISTANCE')).multiply(ee.Number(image.select('SUN_ELEVATION')).cos())).rename('R_incoming_shortwave');
return image.addBands(rinc);
}
var filtered = filtered.map(addRinc);
Map.addLayer(filtered.select('R_incoming_shortwave'), imageVisParam, 'R incoming shortwave');
Your image collection filtered doesn't have 'EARTH_SUN_DISTANCE' band.
Why? In the metadata profile it said it has it.
I use ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
The issue is produced because you're assuming that 'EARTH_SUN_DISTANCE' and 'SUN_ELEVATION' are bands in the filtered collection, and this is not true. They are properties, and the method for retrieving them is .get. On the other hand, you did not include the formula for determining rinc, so I am assuming that your arithmetic operations are correct. As 'SUN_ELEVATION' is expressed in degrees, you should also check whether the cosine function requires radians instead of degrees.
So, I modified your code as follows, where uae is an arbitrary area in the image zone.
var uae = ee.Geometry.Polygon(
[[[57.36213307516186, 26.493020474959547],
[57.36213307516186, 26.14836450809302],
[57.80158620016186, 26.14836450809302],
[57.80158620016186, 26.493020474959547]]], null, false);
var landsattoa = ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA");
var filtered = landsattoa.filterDate('2021-01-01','2021-12-31')
.filterBounds(uae)
.filter(ee.Filter.lt('CLOUD_COVER',10));
print(filtered);
var test = filtered.first();
print("EARTH_SUN_DISTANCE", test.get('EARTH_SUN_DISTANCE'));
print("SUN_ELEVATION", test.get('SUN_ELEVATION'));
var dem = ee.Image('CGIAR/SRTM90_V4').clip(uae);
var imageVisParam = {"opacity":1,
"bands":["elevation"],
"min":54,
"max":1508,
"gamma":1};
Map.centerObject(uae);
Map.addLayer(uae);
Map.addLayer(dem, imageVisParam, 'dem');
function addRinc(image) {
var transmissivity = ee.Image.constant(0.75)
.multiply(2)
.multiply(0.00001)
.multiply(dem);
var earth_sun_distance = image.get('EARTH_SUN_DISTANCE');
var sun_elevation = image.get('SUN_ELEVATION');
var rinc = transmissivity.multiply(1367)
.multiply(ee.Number(earth_sun_distance)
.multiply(ee.Number(sun_elevation)
.cos()))
.toFloat()
.rename('R_incoming_shortwave');
return image.addBands(rinc);
}
var filtered = filtered.map(addRinc);
var imageVisParam = {"opacity":1,
"bands":["R_incoming_shortwave"],
"min":-5.9374,
"max":2.7116,
"palette":["fbff0e","ff9b04","ff3f06","1224ff"]};
Map.addLayer(filtered.select('R_incoming_shortwave'), imageVisParam, 'R incoming shortwave');
After running the above code in the GEE code editor, the result shown in the following picture was obtained for the 'R_incoming_shortwave' image.
How to change attribute of MIMEMultipart object?
from email.mime.multipart import MIMEMultipart
msg=MIMEMultipart('mixed')
msg['To']='test'
msg['To']='test2'
print(msg)
produces
Content-Type: multipart/mixed; boundary="===============1302686855105723805=="
MIME-Version: 1.0
To: test
To: test2
--===============1302686855105723805==
--===============1302686855105723805==--
I would expect test2 to replace test, but it just adds to the recipient list. I don't want that. I want to replace the current recipient and reuse the current MIMEMultipart variable/message, as I need to send multiple emails with the same parameters to different recipients, and I don't want all of them in the header. How do I change the value of the current 'To' header, or discard the current 'To' header in an object of MIMEMultipart type?
Use msg.replace_header() to overwrite an existing header, e.g.
from email.mime.multipart import MIMEMultipart
msg=MIMEMultipart('mixed')
msg['To']='test'
msg.replace_header('To', 'test2')
print(msg)
Output:
From nobody Tue Oct 21 09:51:52 2014
Content-Type: multipart/mixed; boundary="===============0295162158244343135=="
MIME-Version: 1.0
To: test2
--===============0295162158244343135==
--===============0295162158244343135==--
You can use replace_header() method. This method replace the first matching header found in the message with header order and case. If there is no matching header then you will get a KeyError
In your case simply use msg.replace_header('To','test2')
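When reusing one message for many recipients, another option is del msg['To'], which removes every matching header and, unlike replace_header(), does not raise KeyError when the header is absent (the addresses below are hypothetical):

```python
from email.mime.multipart import MIMEMultipart

msg = MIMEMultipart('mixed')
msg['To'] = 'test'

# Replace the To header on each iteration. `del` removes every existing
# 'To' header and is a no-op if none is set, unlike replace_header(),
# which raises KeyError when the header is absent.
for recipient in ('alice@example.com', 'bob@example.com'):
    del msg['To']
    msg['To'] = recipient
    # ...send the message here...

print(msg['To'])  # bob@example.com
```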
In GWT, how can I make my widget escape the display bounds of its containing widget?
So for example, if a context menu was too small for its panel when the menu opened, I should like the menu to spill out of the panel in order to display properly.
In Swing, you can arrange for the widget to render on a 'glass pane' which is a transparent layer in front of 'everything else'. Is there an equivalent in GWT?
In my case, my 'context menu' is not a classical menu - it's some arbitrary GWT panel, which might contain (for example) a form with some editable fields, more resembling a dialog ... but it isn't modal, and its position needs to be anchored into the surrounding HTML.
GWT's PopupPanel appears to fulfil this role.
PipedInputStream and PipedOutputStream: synchronous or asynchronous?
I'm reading a Java book I found a sentence that seems incorrect:
When reading, if there's no data available, the thread will be blocked until new data is available. Notice that this is a typical asynchronous behavior: the threads are communicating through a channel (pipe).
Why does the author call the operation "asynchronous"? Shouldn't asynchronous imply that the thread will NOT be blocked until it receives new data?
Later Edit:
I ran this code, and from the output it seems the behaviour is asynchronous.
Here is a part of the output: http://ideone.com/qijn0B
And the code is below.
What do you think?
import java.io.*;
import java.util.logging.Level;
import java.util.logging.Logger;
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
/**
*
* @author dragos
*/
class Consumator extends Thread{
DataInputStream in;
public Consumator(DataInputStream in){
this.in = in;
}
public void run(){
int value = -1;
for(int i=0;i<=1000;i++){
try {
value = in.readInt();
} catch (IOException ex) {
Logger.getLogger(Consumator.class.getName()).log(Level.SEVERE, null, ex);
}
System.out.println("Consumator received: "+ value);
}
}
}
class Producator extends Thread{
DataOutputStream out;
public Producator(DataOutputStream out){
this.out = out;
}
public void run(){
for(int i=0;i<=1000;i++){
try {
out.writeInt(i);
} catch (IOException ex) {
Logger.getLogger(Producator.class.getName()).log(Level.SEVERE, null, ex);
}
System.out.println("Producator wrote: "+i);
}
}
}
public class TestPipes {
/**
* @param args the command line arguments
*/
public static void main(String[] args) throws IOException {
PipedInputStream pipeIn = new PipedInputStream();
PipedOutputStream pipeOut = new PipedOutputStream(pipeIn);
DataInputStream in = new DataInputStream(pipeIn);
DataOutputStream out = new DataOutputStream(pipeOut);
Consumator c = new Consumator(in);
Producator p = new Producator(out);
p.start();
c.start();
}
}
Asynchronous means the writer is not blocked waiting for the reader (if there is free space in the buffer).
Uh! I just noticed the reader writes the same numbers the writer produced previously. I was thinking of synchronous as 1 1 2 2 3 3 4 4 5 5, not 1 2 3 1 2 3 4 5 4 5 6 7 8 6 7 8
Why does the author call the operation "asynchronous"? Shouldn't asynchronous imply that the thread will NOT be blocked until it receives new data?
Either this is incorrect or the author is talking about how the consumer thread would block but the producer thread would still run. At the very least this wording is certainly confusing.
In any case, the PipedInputStream and PipedOutputStream streams share an internal buffer but otherwise are "synchronous". If the buffer is full the writer will block, and if the buffer is empty the reader will block.
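To see that buffer-mediated blocking concretely, here is a small self-contained sketch (the class name and the tiny 4-byte buffer are made up for illustration). The writer blocks whenever the buffer is full and resumes as the reader drains it; each stream is synchronous, but the buffer lets the two threads run out of lock-step up to its capacity:

```java
import java.io.*;

public class PipeDemo {
    // Pushes 8 bytes through a pipe with a deliberately tiny 4-byte buffer
    // and returns how many bytes the reader received.
    static int pipeThrough() throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out, 4);

        Thread writer = new Thread(() -> {
            try {
                for (int i = 0; i < 8; i++) {
                    out.write(i); // blocks once the 4-byte buffer is full
                }
                out.close();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        writer.start();

        int count = 0;
        while (in.read() != -1) { // blocks while the buffer is empty
            count++;
        }
        writer.join();
        return count;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("bytes received: " + pipeThrough()); // bytes received: 8
    }
}
```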
it's confusing wording. it could mean multiple things. technically, yes, each individual stream is synchronous, however the movement of data betwee streams is asynchronous.
How is it asynchronous @jtahlborn? You mean because there is a read and write buffers?
yes, writer can write and return before reader sees data (up to a limit). i guess, in a way, that's similar to any buffered stream. your edit is a good clarification, though.
Yes, you do not show the whole text. If the author says that the producer is not blocked, then it is asynchronous. Consuming is always blocking (OK, sometimes you can check the availability of the answer first, but sync/async is distinguished by the behaviour of the sender). If the buffer length were 0, the writer would always block and synchronization would occur. So we can say that blocking/non-blocking (sync/async) is a feature of the channel.
Is it okay to recreate class component state behavior using useReducer?
so in react hooks we may have multiple states in our component:
const [stateA, setStateA] = useState(0);
const [stateB, setStateB] = useState('Hello World');
const [stateC, setStateC] = useState(true);
but there is a case that we need to update multiple states at once. let say when clicking event:
const handleClick = () => {
setStateA(prev => prev+1);
setStateB('John Doe');
setStateC(false);
}
As we can see I need to update these states 1 by 1. So to update all states at once and ensure all updates in the same batch, I tried making this reducer:
const stateReducer = (prev, next) => {
switch (typeof next) {
case 'object':
return { ...prev, ...next };
case 'function':
return { ...prev, ...next(prev) };
default:
return prev;
}
}
...
const [state, setState] = useReducer(stateReducer, {
stateA: 0,
stateB: 'Hello World',
stateC: true
});
...
// just set single state
setState({stateA: 0});
// set multiple state
setState({
stateA: 5,
stateB: 'John Doe'
});
// set multiple state with prev state
setState(prev => ({
stateA: prev.stateA +1,
stateB: 'John Doe',
stateC: !prev.stateC,
}));
It works! But how is it going to affect performance? Or is this okay?
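For what it's worth, the reducer's merge semantics can be exercised outside React (plain JavaScript, no hooks), which makes the partial-update behavior easy to check in isolation:

```javascript
// Standalone version of the reducer above -- no React involved, just the
// merge logic applied to a plain state object.
const stateReducer = (prev, next) => {
  switch (typeof next) {
    case 'object':
      return { ...prev, ...next };
    case 'function':
      return { ...prev, ...next(prev) };
    default:
      return prev;
  }
};

let state = { stateA: 0, stateB: 'Hello World', stateC: true };

// Partial object update merges over the previous state:
state = stateReducer(state, { stateA: 5, stateB: 'John Doe' });

// Functional update sees the previous state:
state = stateReducer(state, prev => ({ stateA: prev.stateA + 1, stateC: !prev.stateC }));

console.log(state); // { stateA: 6, stateB: 'John Doe', stateC: false }
```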
So to update all states at once and ensure all updates in the same batch - Are you observing an actual problem with the multiple useState method? This sounds like a premature optimization for a problem I'm not so sure actually exists.
Your performance will not be affected.
In the note section of the page Functional updates in the react hooks reference they post useReducer as an alternative for useState when we need to manage different state variables.
Another option is useReducer, which is more suited for managing state objects that contain multiple sub-values.
The performance key does not reside in the usage of useState or useReducer itself, it's more about why do you need to use it, and as they say in the useReducer documentation you are doing a good usage of it simplifying the way you handle this 3 state variables.
useReducer is usually preferable to useState when you have complex state logic that involves multiple sub-values or when the next state depends on the previous one.
Not 100% sure about this, but I think useState uses useReducer internally.
Finding the determinant of a matrix with $0$s on the diagonal, $1$s in the first row and column, and $x$ elsewhere.
How do you attempt to solve the deterimnant of
$$D_n=\begin{vmatrix}
0 & 1 & 1 & 1 & \cdots & 1 \\
1 & 0 & x & x & \cdots & x \\
1 & x & 0 & x & \cdots & x \\
1 & x & x & 0 & \ddots & \vdots \\
\vdots & \vdots & \vdots & x & \ddots & x \\
1 & x & x & \cdots & x & 0
\end{vmatrix}$$
My approach is to try and create a similar matrix of small fixed size (something like a $4\times 4$) and then try to simplify it, but I always get stuck in the process of simplifying...
In general, how do you approach these types of questions?
Does this answer your question? Determinant of a specially structured matrix ($a$'s on the diagonal, all other entries equal to $b$)
The first approach is to compute the first few values.
Here they are (WA):
$$
\begin{array}{c}
n& 1 & 2 & 3 & 4 & 5
\\
D_n&0 & -1 & 2x & -3x^2 & 4x^3
\end{array}
$$
The pattern is clear. Try a proof by induction.
By Determinant of a specially structured matrix ($a$'s on the diagonal, all other entries equal to $b$)
we know that the determinant is
$$
(-1)^{n-1}(n-1)x^{n}
$$
for the matrix where the ones are replaced by $x$. By multiplying the first row and column by $x$, we can obtain such a matrix from the given one; the determinant then changes by a factor of $x^2$. So the determinant of our matrix is
$$
(-1)^{n-1}(n-1)x^{n-2}.
$$
We have $$x^2D_n = \begin{vmatrix}
0 & x & x & \cdots & x \\
x & 0 & x & \cdots & x \\
x & x & 0 & \cdots & x \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x & x & x & \cdots & 0
\end{vmatrix} = x^n\begin{vmatrix}
0 & 1 & 1 & \cdots & 1 \\
1 & 0 & 1 & \cdots & 1 \\
1 & 1 & 0 & \cdots & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & 1 & 1 & \cdots & 0
\end{vmatrix} = x^n\det(J-I)$$
where $J$ is the matrix with all entries equal to $1$. We know that $J$ has eigenvalues $0,0,\ldots,0,n$ (with multiplicity); indeed, a corresponding basis of eigenvectors is $$e_2-e_1,\ e_3-e_2,\ \ldots,\ e_n-e_{n-1},\ e_1+e_2+\cdots+e_n.$$ The matrix $J-I$ therefore has eigenvalues $-1,-1,\ldots,-1,n-1$. Since the determinant is the product of the eigenvalues, $$\det(J-I)=(-1)^{n-1}(n-1).$$
We conclude $D_n = (-1)^{n-1}(n-1)x^{n-2}$.
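The closed form is easy to sanity-check numerically; here is a sketch using a pure-Python cofactor expansion (exact integer arithmetic, so there are no floating-point concerns):

```python
def det(m):
    """Determinant by cofactor expansion along the first row (fine for small n)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def build(n, x):
    """The matrix D_n: zeros on the diagonal, ones in row/column 0, x elsewhere."""
    m = [[x] * n for _ in range(n)]
    for i in range(n):
        m[i][i] = 0
        m[0][i] = m[i][0] = 1
    m[0][0] = 0
    return m

# Compare against the claimed formula (-1)^(n-1) (n-1) x^(n-2) for a few n.
x = 3
for n in range(2, 7):
    assert det(build(n, x)) == (-1) ** (n - 1) * (n - 1) * x ** (n - 2)
```

This is only a spot check, not a proof, but it quickly catches a wrong sign or exponent.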
Making iptables easier to maintain
My network is completely locked down except for a few sites which are whitelisted. This is all done through iptables, which looks something like this:
# Allow traffic to google.com
iptables -A zone_lan_forward -p tcp -d <IP_ADDRESS>/24 -j ACCEPT
iptables -A zone_lan_forward -p udp -d <IP_ADDRESS>/24 -j ACCEPT
iptables -A zone_lan_forward -p tcp -d <IP_ADDRESS>/24 -j ACCEPT
iptables -A zone_lan_forward -p udp -d <IP_ADDRESS>/24 -j ACCEPT
iptables -A zone_lan_forward -p tcp -d <IP_ADDRESS>/24 -j ACCEPT
iptables -A zone_lan_forward -p udp -d <IP_ADDRESS>/24 -j ACCEPT
...
Obviously those addresses are hypothetical, but you get the idea. My firewall is becoming enormous. It would be much easier to maintain if I could just do this:
# Allow traffic to google.com
iptables -A zone_lan_forward -p tcp -d google.com -j ACCEPT
iptables -A zone_lan_forward -p udp -d google.com -j ACCEPT
I believe this is possible, since man iptables says:
Address can be either a network name, a hostname (please note that specifying any name to be resolved with a remote query such as DNS is a really bad idea), a network IP address (with /mask), or a plain IP address.
But what I'm concerned about is the part that says "specifying any name to be resolved with... DNS is a really bad idea". Why is this a bad idea? Does it just slow everything down?
If I really shouldn't use hostnames in iptables rules, then what should I do to simplify my firewall?
You'll get a better answer on security.stackexchange.com
You might be right that this question belongs on that site, but if you have a problem with the accepted answer please explain why.
If you want to know about locking down your network and how to do so, ask on Security; if you want to know how to use iptables (as opposed to whether to use it), then this is the place.
DNS Names are resolved when the rules are added, not, when packets are checked. This violates the expectations most people have.
The rule does not get updated to reflect changed DNS results. It is resolved when added and that is it. You will need to either periodically reload rules, or some sites may break.
There is a bit of a security issue in that you are basically delegating control of your firewall rules to an external entity.
What if your parent DNS server is compromised and returns false data.
If your purpose is to block HTTP access, then you are usually far better off setting up a piece of software designed to filter at that level (e.g. squid + squidGuard).
I see how this could be an issue - google.com could resolve to <IP_ADDRESS> today but tomorrow that address is invalid. 1) Would simply restarting the firewall take care of this? 2) Is it still a security issue if the DNS server is well known, such as Google's DNS or OpenDNS?
I marked this as the answer because it explains why I shouldn't use hostnames in iptables rules and gives me a course of action for simplifying my firewall.
I'd like to add my support for Squid. I implemented Squid for my office, and once set up it's very easy to maintain for whitelisting hosts (though I use it to blacklist). It seems like you have a monumental undertaking on your hands; I wouldn't even know where to start whitelisting Google for example. www.google.com resolves to 5 IPs alone, which is to say nothing for ssl.gstatic.com, and all the other hosts involved in authentication, G+, etc which probably resolve to multiple IPs each.
Monumental is a good way to put it. But I was just using Google as an example. A basic outline of my firewall goes like this: if the packet's destination is whitelisted, accept it. Otherwise, send it through a proxy server.
There's also the problem of systems that use DNS for load balancing. You may not get the same results if you look up such a domain twice in a row, so a single lookup isn't even going to give you an exhaustive list of IP addresses that the domain could possibly resolve to.
If you use hostnames in your firewall, your firewall is now dependent on DNS. This opens the firewall to a number of issues:
DNS lookups under high volumes could cause latency.
DNS changes do not propagate instantly, so your firewall could be using cached IPs.
DNS can be spoofed, hijacked, hacked.
DNS can fail - meaning your firewall fails.
Your firewall rules are now controlled by a 3rd party.
If you use hostnames and you do not control the DNS, then someone else effectively controls your IPtables rules. Mistakes, errors or security issues on their end become problems for you.
The only time I've seen hostnames used well is for internal operations. I've worked in an office where IPs and hostnames were assigned via DHCP. The firewalls used hostnames to put barriers between different groups. Since this was all internally controlled it worked well.
This is a good answer but it's missing the part that would help me simplify the firewall.
You could use a wrapper around iptables like shorewall to make your rules easier to mantain.
This is a good idea but you didn't tell me why I shouldn't use hostnames in iptables rules.
I don't know much about Shorewall, but I'm under the impression that davidkennedy85 would still need to maintain lists of every IP address of a service he'd want to allow within the Shorewall configs. It might make managing netfilter [& etc] a bit easier, but wouldn't solve his core problem, which is a massive list of IPs.
I personally assign a hostname to an IP manually in /etc/hosts and then use it in iptables.
This way you:
Do not offload your Firewall rules to an external entity
Have easily maintainable iptables
As the others have already said, you shouldn't use DNS-resolvable names in iptables rules. They are inaccurate, controlled by a third party, and are generally a Bad Thing (tm). I'd also add that your DNS may fail or may be unreachable at the time the iptables service is starting, in which case the rule won't be added at all and a whole new type of problem can happen (like losing ssh access after a restart).
What you can do is:
Use custom chains to logically separate the rules
Use ipsets to have addresses grouped and separated from the rules
Add comments to the rules
Also, no one said anything bad about hostnames which are not resolved by DNS (i.e. are specified in /etc/hosts). You can use them if you really need to.
Yes, ipsets are the answer. With a script run from crontab to update the ipset.
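A sketch of what such a cron script could look like (the set and chain names are hypothetical, and the existing set is assumed to be of type hash:ip; resolution happens here, outside the rules, so the iptables rules themselves never depend on DNS):

```shell
#!/bin/sh
# Refresh an ipset named "whitelist" from a list of hostnames.
# The iptables rule then matches the set, never a hostname, e.g.:
#   iptables -A zone_lan_forward -m set --match-set whitelist dst -j ACCEPT

resolve() {
    # Pull IPv4 addresses for a hostname; swap in your resolver of choice.
    getent ahostsv4 "$1" | awk '{print $1}' | sort -u
}

build_commands() {
    # Emit the ipset commands for the given hostnames. They are printed
    # rather than executed so the refresh can be reviewed or piped to a shell.
    echo "ipset create whitelist-tmp hash:ip -exist"
    for host in "$@"; do
        for ip in $(resolve "$host"); do
            echo "ipset add whitelist-tmp $ip"
        done
    done
    # Atomic swap: matching never sees a half-filled set.
    echo "ipset swap whitelist-tmp whitelist"
    echo "ipset destroy whitelist-tmp"
}
```

From cron you would run something like `refresh-whitelist google.com | sh` every few minutes; the swap-and-destroy at the end keeps the live set consistent while it is rebuilt.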
Dialogflow agent query with input string in Japanese / Arabic
I am using Dialogflow V1 APIs to query the agent. There is a scenario where the input query string is not English; the query I want to send is a Japanese / Arabic string.
Sample Request
endpoint = "https://api.dialogflow.com/v1/query?v=20150910";
JSON Data
{
"originalRequest": {
"data": {
"incomingMessage": "朝食時間は何ですか?"
}
},
"lang": "ja",
"query": "朝食時間は何ですか?",
"sessionId": "###########"
}
In the Dialogflow agent, it is received as
æé£æéã¯ä½ã§ããï¼
How do I pass it to the query endpoint so that the Dialogflow agent can read the input query in the language I send?
I am also aware that Dialogflow does not support Arabic. I tried with a Japanese string as well and ended up with the same kind of results. I tried changing the "lang" property to "ja", and it still didn't work. Should I encode the "query" property in a certain format?
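That particular garbage is the classic signature of UTF-8 bytes being decoded as Latin-1 somewhere between the HTTP client and the agent, so the fix is usually to serialize the JSON body as UTF-8 and declare it in the header (`Content-Type: application/json; charset=utf-8`). A minimal sketch reproducing the symptom:

```python
# The Japanese query, encoded the way the client should send it: UTF-8 bytes.
query = "朝食時間は何ですか?"
utf8_bytes = query.encode("utf-8")

# If the receiving side decodes those bytes as Latin-1 instead of UTF-8,
# every multi-byte character explodes into accented mojibake:
garbled = utf8_bytes.decode("latin-1")  # begins with "æ", like the text the agent saw

# Decoding with the correct charset round-trips cleanly.
assert utf8_bytes.decode("utf-8") == query
```

If you build the request by hand in Java, avoid any String conversion that uses the platform default charset; always pass the charset explicitly.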
Unfortunately, Dialogflow does not currently support Arabic. In terms of working with Japanese, if you're only interested in having it all in that language then you'll need to set the agent's root language to Japanese:
Go to your agent's Settings ⚙ > Languages tab > Choose the language > Save
And if you need a multi-language agent, then here are the reference docs
jQuery: resize font size to a div's height?
Is there any way to adjust the font size to the height of a div box? I know there are many ways to adjust the font size to the width of the parent element, but I cannot find anything to adjust the font size to the height.
I have several lines in a table:
<table>
<tr>
<td> Line 1 </td>
</tr>
<tr>
<td> Line 2 </td>
</tr>
<tr>
<td> Line 3 </td>
</tr>
</table>
The table has a height of 100% of the viewport, so every td has a height of 33.33%. This means the font height of every line should be 33.3% of the viewport! No idea how to achieve this! I'm happy about every helpful link or answer! Thanks!
If you don't need to support older browsers, you could use the new viewport-unit in CSS3:
html, body {
height: 100%;
font-size: 100vh;
}
table {
height: 100%;
}
td {
font-size: 33.3%;
}
http://jsfiddle.net/ZaJ8w/
Could you explain what 100vh means? I've never seen something like this! Thanks
100vh = 100% of viewport height (1vh = 1%)
Yeah, problem is I need it especially for the Android browser, which doesn't support vh. But thanks! I learned new things!
Would jQuery's height() work for you?
var height = $('td').height();
$('td').each(function() {
$(this).css({'font-size': height + 'px', 'line-height': height + 'px'});
});
@Em Sta - Sorry about that. Fixed syntax error in answer. http://jsfiddle.net/UQ7Ba/
But why is the first line bigger than the other ones?
@Em Sta - The text is wrapping in the top row, making the top row in the table larger than the others.
@Em Sta - There may be some padding messing things up. Try this http://jsfiddle.net/XXuAh/.
Is there a way to change the line height so that all things are a little bit closer?
@Em Sta - If we get the height ahead of time, we can guarantee that all of them are the same height. See http://jsfiddle.net/XXuAh/
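The same height-based sizing can also be done without jQuery; here is a sketch with the math separated out so it can be tested on its own (the DOM wiring below is the hypothetical part):

```javascript
// Given the viewport height and a row count, return the font size in px
// so that each row's text fills its share of the height.
function rowFontSize(viewportHeight, rowCount, fillRatio) {
  // fillRatio < 1 leaves a little breathing room for line-height/padding.
  return Math.floor((viewportHeight / rowCount) * fillRatio);
}

// Hypothetical DOM wiring: apply to every <td>, e.g. on load and on resize.
function applyRowFontSize(doc, win) {
  var cells = doc.querySelectorAll('td');
  var size = rowFontSize(win.innerHeight, 3, 0.9) + 'px';
  for (var i = 0; i < cells.length; i++) {
    cells[i].style.fontSize = size;
    cells[i].style.lineHeight = size;
  }
}
```

Because `rowFontSize` is pure, the Android-browser case (no vh support) can be handled by calling `applyRowFontSize(document, window)` from load and resize handlers.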
Adding a UserControl dynamically to a panel on button click
In my form I have a button and I want to add a user control to a panel each time it's clicked:
public partial class AddInstanceForm : MetroForm
{
private List<Material> material { get; set; }
public AddInstanceForm()
{
InitializeComponent();
}
// get data from db for mycombobox which exit in MaterilControl
private void AddInstanceForm_Load(object sender, EventArgs e)
{
using(DBContext db=new DBContext())
{
material = db.Materials.ToList();
}
}
// This Attached to button click
private void anotherMaterial_Click(object sender, EventArgs e)
{
MaterialControl mc = new MaterialControl(material);
this.SuspendLayout();
panel1.Controls.Add(mc);
//EDIT
this.Invalidate();
this.Update();
this.ResumeLayout(false);
}
}
The problem is that just one user control is added to the panel, no matter how many times the button is clicked:
public partial class MaterialControl : UserControl
{
private MaterialView _material;
private List<Material> MaterialboData { get; set; }
public MaterialView Material
{
get
{
_material.Name=MaterialName.Text;
_material.Quntity = MaterialQu.Text;
_material.MaterialID = Convert.ToInt32(MaterialName.ValueMember);
return _material;
}
set
{
MaterialName.Text=value.Name;
MaterialQu.Text = value.Quntity;
MaterialName.ValueMember = Convert.ToString(_material.MaterialID);
}
}
public MaterialControl(List<Material> Data)
{
_material = new MaterialView();
this.MaterialboData = new List<Material>();
this.MaterialboData = Data;
InitializeComponent();
}
}
Try calling Invalidate() after drawing the control.
Instead of adding them all at the same position, if you have a panel dedicated just to these controls, you could set each control's Dock to Top and let the panel automatically lay out the controls for you.
Also, what are the settings of panel1? Is it possible that panel1 is only the size of the first item and the other items are all being added outside the viewable area of panel1? Try adding a panel1.Height += 10 in the click handler.
When you step through it in the debugger, is the Z order different for each?
Highcharts Legend formatting
I have a line chart which includes 3 lines with 3 different names: Apple, Orange, Watermelon. I would like to add text to each of these legend entries, for example: Apple - 123, Orange - 456, Watermelon - 789. Could anyone tell me how I can do that?
Thanks in advance!
Use legend.labelFormat or legend.labelFormatter.
Can you please explain in a little more detail? I also tried labelFormatter, but it applied the same text to all series; I would like to set separate text for each series.
I got the answer: I used labelFormatter and put a 'customLegendText' variable on each series. Here it is: http://jsfiddle.net/wp2e21du/
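The approach from that fiddle can be sketched like this (`customLegendText` is a user-defined series option, not a built-in Highcharts one; the formatter logic is plain JavaScript, shown as a standalone function so it can be tested without a chart):

```javascript
// Builds the legend label for one series from its user-supplied option.
// In Highcharts, custom properties from the series config end up on
// series.userOptions (treat that detail as an assumption to verify).
function legendLabel(series) {
  var extra = series.userOptions && series.userOptions.customLegendText;
  return extra ? series.name + ' - ' + extra : series.name;
}

// Hypothetical chart config: each series carries its own customLegendText.
var chartOptions = {
  legend: {
    labelFormatter: function () {
      // Inside labelFormatter, `this` is the series being labelled.
      return legendLabel(this);
    }
  },
  series: [
    { name: 'Apple',      data: [1, 2, 3], customLegendText: '123' },
    { name: 'Orange',     data: [4, 5, 6], customLegendText: '456' },
    { name: 'Watermelon', data: [7, 8, 9], customLegendText: '789' }
  ]
};
```

If a series does not define customLegendText, the formatter falls back to the plain name, which keeps the chart usable while the values are wired in.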
java generic class extends the generic argument
Short version:
What is the correct syntax to make a class extends its generic argument:
class A<T> extends T {} // doesn't compile
Long version:
I have a base abstract class (the following code has been shortened just for this post purposes)
abstract class D {
abstract void f();
abstract void g();
}
Then, there is a set of possible combinations for each f() and each g(). The goal is to have all the possible combinations.
One solution is to make for each f_j() a class Fj that extends D and implements the f_j(), then for each g_i() we make a class GiFj which extends Fj and implements g_i().
For example
abstract class F1 extends D {
@Override
void f() { /* F1 Work */ }
}
abstract class F2 extends D {
@Override
void f() { /* F2 Work */ }
}
class G1F1 extends F1 {
@Override
void g() { /* G1 Work */ }
}
class G1F2 extends F2 {
@Override
void g() { /* G1 Work : Duplicated Code */ }
}
The problem is that the code for Gi is going to get duplicated many times.
The solution that I want to implement is to regroup all GiF classes into one generic class
class G1<F extends D> extends F {
@Override
void g() { /* G1 work */ }
}
So how can this be done in Java, or if it is impossible, what is a good way to do it?
Unfortunately, it looks like you are wanting multiple inheritance of classes, which is not supported in Java. Have a look at using the mixin strategy.
Answer to short version
That is not possible in Java. A compiler needs to know at compile time which methods and fields the superclass of a class has. Thus, a class can only extend a concrete class (possible with type variables), but it cannot extend a type variable on its own.
Answer to long version (which on the surface seems to be unrelated to the short version)
You can use composition to achieve sort of what you want:
class Foo {
final FDelegate fDelegate;
final GDelegate gDelegate;
Foo(FDelegate fDelegate, GDelegate gDelegate) {
this.fDelegate = fDelegate;
this.gDelegate = gDelegate;
}
void f() {
fDelegate.f(this);
}
void g() {
gDelegate.g(this);
}
}
interface FDelegate {
void f(Foo container);
}
interface GDelegate {
void g(Foo container);
}
Then you have classes F1, F2, G1, G2 which contain the various method implementations. You pass a Foo object so that you can access its (visible) members.
Is Lucene's '+' in classic query syntax the same as using AND?
A Lucene query of the form
field1:+"term1" field2:+"term2"
seems to be equivalent to
field1:"term1" OR field2:"term2"
I expected it to be equivalent to
field1:"term1" AND field2:"term2"
(i.e for my particular query on my database query 1 and 2 are returning 10 records, whereas query 3 is returning 6 records, I would expect query 2 to only return six records)
I'm aware that if there is no OR or AND it defaults to OR, but I thought the + means that the term has to match; otherwise, what is the point of the +?
What am I misunderstanding?
That query doesn't look equivalent to either of those, to me.
field1:+"term1" field2:+"term2"
Is just invalid syntax, and the standard QueryParser does kick out a ParseException for it (maybe your code is swallowing the exception silently or something?).
It should be:
+field1:"term1" +field2:"term2"
Are you sure? Because on https://lucene.apache.org/core/2_9_4/queryparsersyntax.html the example shows title:(+return +"pink panther"), i.e. the plus is before the term, not the field?
Absolutely sure, yes. field1:(+"term1") is valid (though the + is meaningless). field1:+"term1" is not.
Okay, that is probably it then, but how I could deduce that from the Lucene docs I do not know.
I'm still having no luck with this, but I am testing via a web interface, so maybe that is causing an issue. Is there a website where I could test out Lucene syntax?
Automate a deployment task (Use case)
What is the best approach to automate a deployment with the following constraints:
The repository located on bitbucket.
Steps:
pull latest changes from the master branch.
increment the version using this command "npm version patch".
execute build task.
push changes into the repository.
Can something like this achieve what I want?
$ git pull origin master && npm version patch && gulp && git push origin master
All is good, I just need to find a safe way to store credentials of the bitbucket user.
My question is about the best approach to this task.
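Wrapped in a script with fail-fast behaviour, the chain becomes easier to maintain than a one-liner (the guard variable and the `--follow-tags` flag are my additions; for credentials, prefer an SSH deploy key for Bitbucket or a git credential helper such as osxkeychain or store, so nothing secret sits in the script):

```shell
#!/bin/sh
# deploy.sh -- pull, bump the patch version, build, then push.
# `set -eu` aborts the run as soon as any step fails, so a broken
# build is never pushed back to Bitbucket.
set -eu

deploy() {
    git pull origin master
    npm version patch                  # bumps package.json and tags the commit
    gulp                               # the build task
    git push origin master --follow-tags
}

# Only run when invoked explicitly, e.g. `DEPLOY=1 sh deploy.sh`
# (guarded so the function can be sourced or tested without side effects).
if [ "${DEPLOY:-}" = "1" ]; then
    deploy
fi
```

`npm version patch` creates a commit and a git tag by default, which is why the push uses `--follow-tags`.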
That is a far too broad question. If you show us your gulpfile and tell us the concrete problems you encounter, you can get an answer.
Actually it's not related to the gulp APIs. I edited the question. Thank you ^_^
It's not clear where you want to deploy to
Why do object boundaries look better on a badly rectified image? (OpenCV stereo correspondence)
I have two pictures which are almost parallel and not positioned very far from each other.
I'm using OpenCV to try to create a disparity map (Stereo correspondence).
Because I'm trying to use it in a real-world scenario, the use of chessboard calibration is a bit impractical.
Because of that, I'm using stereoRectifyUncalibrated().
I tried to compare the results, using 2 different sets of corresponding points for the rectification:
Points manually selected(point & click)
Points generated from SURF and filtered with RANSAC
Input image1:
Input image2:
(Note that I do undistortion on the images before using them for rectification etc)
Rectified images with SURF and RANSAC:(1 and 2 in that order):
Rectified images using the manually selected points(which is more inaccurate!):
Now, the thing is, looking at the result we see that the surf-version is almost perfectly rectified.(The epipolar lines are quite well aligned).
While the manually selected point version is quite badly rectified...the epipolar lines are nowhere near aligned.
But when we look at the result of OpenCV's StereoSGBM using both our rectifications:
Manual point result:
SURF point result:
The disparity/depth shown is more accurate/correct with the SURF-point (well-rectified) version. No surprise there.
However, the actual detected object pixels and object boundaries are a lot better on the badly rectified version.
For instance, you can see that the pen is actually a pen and has the shape of a pen in the badly rectified disparity map, but not in the well-rectified map.
Question is, why?
And how can I fix it?
(I tried fooling around with the StereoSGBM parameters, making sure they are the same for both, etc., but it does not have any effect. It is the different rectifications alone that make the badly rectified image look good for some reason, with respect to object boundaries.)
Both range / disparity maps seem very noisy. And I, for one, find it hard (impossible?) to distinguish the pen in either map. Do you have some sort of objective measure of the improvement in object boundaries in the badly rectified ones?
objectCast Sideways casting
I'm trying to replace all dynamicCasts in my code with QT's objectCast. But I've run into a bit of a snag. Here's the hierarchy of my objects:
class ABase : public QObject
class IAbility; // Registered as Qt Interface
class ISize; // Registered as Qt Interface
class Derived : public ABase, public IAbility, public ISize; // Uses Q_INTERFACES
Using objectCast I can convert a Derived to a ISize or IAbility. However, in one point of my code I want to perform the following conversion: Derived->ISize->IAbility. That last cast is where I get an error. Since IAbility does not relate to ISize in any way, that could cause a problem. I could do a dynamic cast at this point, but I'd rather not.
As I see it you have three options:
You relate the two interfaces to each other by putting them in an inheritance hierarchy, letting one inherit the other. This will let you cast from one to the other, in one direction, but will also be clonky and wonky if there is no real realtion between the two.
You create a super interface that the two others inherit from, and use that as a common casting ground, if you will. This will let you cast the object two ways, using the super interface, but will also create an unnessecary abstraction.
You cast your Derived object to an ISize, do what you want, and then cast that same derived reference to an IAbility. Hence: Derived -> ISize, Derived -> IAbility.
I would go for option 3, simply because it's the least forced solution, abstraction-wise.
Unfortunately, the way the classes are laid out in the actual application. The casting is happening in a class that doesn't know about Derived, so I'm not sure that that would work. But option 2 would work. I do have a top level Object class so creating a top level IObject interface that all interfaces/objects must virtually inherit may not be that bad of an option. Thanks.
Glad I could help. Best of luck to you!
| common-pile/stackexchange_filtered |
What is 'Normalized frequency in the range [0,1)', à la DTMF & Goertzel algorithm
I have a functioning DTMF program. It takes a larger signal, breaks it up into many small components, then analyzes each segment with an FFT to determine whether the magnitude of the response for a particular "bin", corresponding to a specific frequency, is present, thereby indicating that the number represented by that frequency has been pressed. Straightforward enough, but I want to implement the Goertzel algorithm from the jmathstudio library and I can't quite figure out how to search for the frequency I want. The documentation says this:
I construct the bins like this; the term '200' is dictated by the fact that my FFT is currently of size 400, and 2000 is determined by the fact that the sample rate is 4000:
int sixNineSeven = (int)(697.0*200/2000);
int sevenSevenZero = (int)(770.0*200/2000);
int eightFiveTwo = (int)(852.0*200/2000);
int nineFourOne = (int)(941.0*200/2000);
int twelveZeroNine = (int)(1209.0*200/2000);
int thirteenThirtySix = (int)(1336.0*200/2000);
int fourteenSeventySeven = (int)(1477.0*200/2000);
I guess the main thing I need to do is this step:
Normalized frequency in the range [0,1)
But does that mean that I can totally ignore the sample frequency?
I suppose I would be doing this Goertzel operation in lieu of an FFT, so should I just search for this normalized frequency in each of the small segments?
But when I do this:
Complex coeff = su.goertzelFrequencyAnalysis(new Vector(c), (float)(1/697));
System.out.println("coeff "+coeff);
My output is just a bunch of garbage like this:
coeff org.JMathStudio.DataStructure.Complex@5e228a02
coeff org.JMathStudio.DataStructure.Complex@7bd63e39
coeff org.JMathStudio.DataStructure.Complex@2e8f4fb3
coeff org.JMathStudio.DataStructure.Complex@42b988a6
Does w=1 correspond to the sampling frequency or one half of it? You might want to check for integer division (with result 0) and use float(1)/697 instead.
'coeff' is an object of type Complex, and hence to get the real and imaginary parts, do the following:
System.out.println("coeff = "+coeff.getRealPart() +" i"+coeff.getImaginaryPart());
Hi Bhavya! So then, just to be clear, how would I use the goertzel function to check an input signal for the 697 frequency?
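To make the normalization concrete: with a documented range of [0,1), the value to pass is typically f/fs, so 697 Hz at a 4000 Hz sample rate would be 697.0/4000 (and note that (float)(1/697) is integer division in Java, which yields 0 before the cast). A pure-Python Goertzel sketch, independent of the jmathstudio API, so treat that library's exact convention as an assumption to verify against its docs:

```python
import math

def goertzel_power(samples, normalized_freq):
    """Squared magnitude of one analysis frequency via the Goertzel recurrence.

    normalized_freq is f / fs, mapping the range [0, 1) onto [0, fs).
    """
    w = 2.0 * math.pi * normalized_freq
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Standard end-of-block power formula; valid for any w, not just DFT bins.
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

fs = 4000.0                      # sample rate from the question
n = 400                          # one analysis segment, as in the question
tone = [math.sin(2.0 * math.pi * 697.0 * i / fs) for i in range(n)]

# For a pure 697 Hz tone, the 697 Hz bin should dominate another DTMF bin.
p_697 = goertzel_power(tone, 697.0 / fs)
p_1209 = goertzel_power(tone, 1209.0 / fs)
assert p_697 > 100.0 * p_1209
```

With this convention, checking a segment for 697 Hz means calling the routine with 697.0/fs and comparing the returned power against a threshold, or against the powers of the other DTMF frequencies.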
PHPUnit CakePHP 4.X - mock up the identity object, but how?
I have written a web application with CakePHP 4.x. In my controller (AppController.php) I have implemented the following:
...
public function beforeRender(\Cake\Event\EventInterface $event)
{
#Load User_ID
if($this->Authentication->getIdentity()){
$identity = $this->Authentication->getIdentity();
$user_id = $this->Authentication->getIdentity()->getIdentifier();
}
}
...
Now I have written the following PHPUnit test:
...
private function loginAsAdmin() {
$this->Users = TableRegistry::getTableLocator()->get('Users');
$user = $this->Users->get(1);
$this->session(['Auth.User'=> $user->toArray()]);
}
/**
* Test loginAsUser method
*
* @return void
*/
public function testLoginAsAdmin(): void
{
$this->loginAsAdmin();
$this->get('/admin/dashboard');
$this->assertSession(1, 'Auth.User.id'); //check if the user is logged in
}
...
But my identity object is always empty.
The object should actually look like this:
I have no more ideas how to implement it. Can someone help me? :)
With the authentication plugin, users are by default stored as objects; if you pass anything other than an object that implements \ArrayAccess, the authentication service will wrap the given data in an instance of \ArrayObject.
Also the data is by default expected to be stored at the Auth key, not Auth.User.
Theoretically, passing array data should work too (as in "being treated as authenticated and data being set in the identity object"), even when nested in an additional key (unless you've enabled the identify option for the session authenticator). So maybe there are additional problems somewhere in your code; the fact that there are keys like storage and _config in the array object is really weird, and that shouldn't happen in an out-of-the-box setup.
Anyways, normally you'd test with authentication like this:
$user = $this->Users->get(1);
$this->session(['Auth' => $user]);
See also
Authentication Cookbook > Testing with Authentication
Thanks @ndm for your help. It is really appreciated. I have also added my vote to your answer. :)
Explanation of the Monero Daemon running arguments
I started the monero daemon via the GUI on Mac OS X. I downloaded the binary unpacked from monero-gui-mac-x64-v<IP_ADDRESS>.tar.bz2.
When I do a simple $ ps aux | grep monero from the command line I get the following two entries:
/private/var/folders/p_/{really long random string}/T/AppTranslocation/{long random hex string}/d/monero-wallet-gui.app/Contents/MacOS/monerod --detach --data-dir /Users/{username}/Monero/blockchain --check-updates disabled
/private/var/folders/p_/{really long random string}/T/AppTranslocation/{long random hex string}/d/monero-wallet-gui.app/Contents/MacOS/monero-wallet-gui -psn_0_3179605
I was confused about this so I ran $ tail -f ~/.bitmonero/bitmonero.log and I found
WARN daemon src/daemon/executor.cpp:62 Monero 'Helium Hydra' (v<IP_ADDRESS>-release) Daemonised
This seems off since I downloaded version v<IP_ADDRESS>.
Some questions:
Why the weird /private/var/folders/p_/ locations?
If I want to run the monero daemon, should I run it from this weird folder location, or is there a better way?
Should the daemon be on the same version as the GUI (v<IP_ADDRESS>) or is this an error?
The command line says the data directory is /Users/{username}/Monero/blockchain, and you say you check for the log in ~/.bitmonero. You're looking at an old log then. The log includes timestamps, double check those.
Looks like you had a previous version already running. I would close your GUI, killall monerod, ensure there are none running with ps aux | grep monero, restart the GUI (which starts a daemon automatically).
How to divide a circle into 60 parts in a Java GUI
I want a circle that works like the second hand of a clock, but with a text field instead of a clock hand. For example, when the second hand of the clock is at 30, the text field is at the lowest point of the circle and its text is 30. But I can't divide the circle into 60 parts. Please help me fix my code.
I should mention that I'm using the Timer class.
This is my code:
public static void main(String[] args) {
EventQueue.invokeLater(new Runnable() {
public void run() {
try {
ClockWiseRotation window = new ClockWiseRotation();
window.frame.setVisible(true);
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
/**
* Create the application.
*/
public ClockWiseRotation() {
initialize();
}
/**
* Initialize the contents of the frame.
*/
private void initialize() {
timer = new Timer(1000, new ActionListener() {
@Override
public void actionPerformed(ActionEvent arg0) {
second = Integer.toString(Integer.parseInt(handle.getText()) + 1);
for (int i = 33; i <= 60; i++) {
int angle = i * (360 / 60);
handle.setBounds((int)(202 + 100 * Math.cos(angle)),
(int)(119 + 100 * Math.sin(angle)), 23, 26);
}
handle.setText(second);
}
});
timer.start();
frame = new JFrame();
frame.setBounds(100, 100, 450, 300);
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.getContentPane().setLayout(null);
handle = new JTextField();
handle.setText("0");
handle.setBounds(202, 22, 23, 26);
frame.getContentPane().add(handle);
handle.setColumns(10);
JComboBox centerPoint = new JComboBox();
centerPoint.setBounds(202, 119, 17, 20);
frame.getContentPane().add(centerPoint);
}
}
Swing is a single-threaded environment; blocking the EDT for any reason (like a long-running loop or Thread.sleep) will prevent the UI from being updated. Swing is also not thread safe, and updates to the UI should never be made from outside the context of the EDT.
With that in mind: on every tick of the Timer, you are looping some 28 times (i = 33 to 60) trying to move the handle, but the only position the handle will appear in is the last one it is given when the actionPerformed method exits.
See Concurrency in Swing for more details.
What you should be doing is simply placing the handle at the current location based on your requirements/needs at the time of the call
I've also been looking at your calculation...
handle.setBounds((int)(202 + 100 * Math.cos(angle)),
(int)(119 + 100 * Math.sin(angle)), 23, 26);
From what I remember, to find the point on a circle, based on an angle, you need to use...
x = xOffset + Math.cos(radians) * radius;
y = yOffset - Math.sin(radians) * radius;
So, two things to point out there, but the most important is that Math.cos and Math.sin expect the value to be in radians, not degrees.
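With radians handled correctly, the seconds-to-position mapping is just a matter of offsetting the angle so that 0 seconds points straight up and the hand sweeps clockwise; sketched in plain Python for clarity (the center and radius values are arbitrary):

```python
import math

def second_hand_position(second, cx, cy, radius):
    """Point on the circle for a given second (0-59), 0 at the top, clockwise."""
    # 90 degrees puts 0 s at twelve o'clock; each second sweeps 6 degrees clockwise.
    angle = math.radians(90.0 - 6.0 * second)
    x = cx + math.cos(angle) * radius
    y = cy - math.sin(angle) * radius   # minus because screen y grows downward
    return round(x), round(y)

# 0 s is at the top, 15 s at the right, 30 s at the bottom, 45 s at the left.
assert second_hand_position(0, 200, 120, 100) == (200, 20)
assert second_hand_position(15, 200, 120, 100) == (300, 120)
assert second_hand_position(30, 200, 120, 100) == (200, 220)
assert second_hand_position(45, 200, 120, 100) == (100, 120)
```

The same mapping dropped into the Timer callback (using Java's Math.toRadians) positions the text field once per tick; no loop over all 60 positions is needed.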
You should also avoid using null layouts, pixel perfect layouts are an illusion within modern ui design. There are too many factors which affect the individual size of components, none of which you can control. Swing was designed to work with layout managers at the core, discarding these will lead to no end of issues and problems that you will spend more and more time trying to rectify.
You should also avoid using components in this manner, instead, I'd consider using a custom painting processing....
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Dimension;
import java.awt.EventQueue;
import java.awt.FontMetrics;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.Calendar;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.Timer;
import javax.swing.UIManager;
import javax.swing.UnsupportedLookAndFeelException;
public class TestClock {
public static void main(String[] args) {
new TestClock();
}
public TestClock() {
EventQueue.invokeLater(new Runnable() {
@Override
public void run() {
try {
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
} catch (ClassNotFoundException ex) {
} catch (InstantiationException ex) {
} catch (IllegalAccessException ex) {
} catch (UnsupportedLookAndFeelException ex) {
}
JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setLayout(new BorderLayout());
frame.add(new ClockPane());
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
});
}
protected class ClockPane extends JPanel {
public ClockPane() {
Timer timer = new Timer(1000, new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
repaint();
}
});
timer.setRepeats(true);
timer.setCoalesce(false);
timer.start();
}
@Override
public Dimension getPreferredSize() {
return new Dimension(200, 200);
}
protected Point getPointTo(float angle) {
int x = Math.round(getWidth() / 2);
int y = Math.round(getHeight() / 2);
double rads = Math.toRadians(angle);
// This is an arbitrary amount, you will need to correct for this
// I'm working off a width of 200 pixels, so that makes the radius
// 100...
int radius = 100;
// Calculate the outer point of the line
int xPosy = Math.round((float) (x + Math.cos(rads) * radius));
int yPosy = Math.round((float) (y - Math.sin(rads) * radius));
return new Point(xPosy, yPosy);
}
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2d = (Graphics2D) g.create();
g2d.setColor(Color.RED);
Calendar cal = Calendar.getInstance();
int seconds = cal.get(Calendar.SECOND);
float angle = -(360f * (seconds / 60f));
angle += 90; // Correct for 0 being out to the right instead of up
Point p = getPointTo(angle);
int x = getWidth() / 2;
int y = getHeight() / 2;
g2d.drawLine(x, y, p.x, p.y);
FontMetrics fm = g2d.getFontMetrics();
String text = Integer.toString(seconds);
g2d.drawString(text, getWidth() - fm.stringWidth(text), getHeight() - fm.getHeight() + fm.getAscent());
g2d.dispose();
}
}
}
Have a look at Painting in AWT and Swing, Performing Custom Painting and 2D Graphics for more details
Thanks, your guidance was really helpful and I have solved it like this (by removing the for loop):
public void actionPerformed(ActionEvent arg0) {
int intSecond = Integer.parseInt(handle.getText()) + 1;
if (intSecond <= 60) {
second = Integer.toString(intSecond);
handle.setBounds(
(int) (Math.cos(intSecond * Math.PI / 30 - Math.PI
/ 2) * 100 + 202),
(int) (Math.sin(intSecond * Math.PI / 30 - Math.PI
/ 2) * 100 + 119), 17, 20);
handle.setText(second);
} else {
timer.stop();
}
}
});
Xamarin Android Emulator Failed
When I try to install the Xamarin Android Emulator v27.3.9, it fails to install.
Is there any way around this? Anything I can do to fix it?
Any error messages?
I just get the message "installing Android emulator 27.3.9 failed".
Well, it's hard to say where the problem is. You could first check whether the Windows firewall is blocking it.
I would first check whether there is an updated version in the Visual Studio Installer. I had issues with the Android emulator myself until the update I got a couple of days ago. The emulator version on my system is now 29.3.0.
Subscript out of range when dealing with CountIF over multiple arrays
I have code that takes values from the spreadsheet and stores them in an array (named JanDates); those values are the days of every month in a year, taken from the spreadsheet. I then have another array to store values from a COUNTIF function that counts every instance of JanDates in a column.
The issue is that I get a Subscript out of Range error when the number of values in the column exceeds the array size, but the array size is directly linked to the number of dates in the month and cannot be changed.
Some context for the arrays in the code below: month dates are stored in every 5th row; that's why I step by 5 and count the columns in that row to get the number of days in that month.
JanDates holds the actual values of the dates in those rows, which are used to run the COUNTIF.
ComLength is the number of entries in the column (the entries are in date format).
I hope it makes sense, any support would be appreciated.
Dim ComLength As Long
Dim x As Long
Dim LetterSht As Variant
Dim FrstLtr As Variant
Dim JanDates() As Variant
Dim MnthCount As Long
Dim MnthDayCount As Long
Dim MnthDates As Long
Dim d As Long
LetterSht = (Array("First Letter", "Second Letter", "Third Letter"))
For x = LBound(LetterSht) To UBound(LetterSht)
With Worksheets(LetterSht(x))
ComLength = .Range("J" & Rows.Count).End(xlUp).Row
If ActiveSheet.Name = "First Letter" Then
ReDim FrstLtr(2 To ComLength)
For MnthCount = 2 To 57 Step 5
MnthDayCount = WorksheetFunction.CountA(Worksheets("Log").Range("F" & MnthCount & ":AJ" & MnthCount))
JanDates = Worksheets("Log").Range("F" & MnthCount & ":AJ" & MnthCount).Value2
ReDim FrstLtr(1 To MnthDayCount)
For d = 1 To MnthDayCount
FrstLtr(d) = WorksheetFunction.CountIfs(.Range("K2:K" & ComLength), "=" & JanDates(1, d))
If d = MnthDayCount Then
Worksheets("Log").Range("F" & MnthCount + 4 & ":AJ" & MnthCount + 4) = Application.WorksheetFunction.Transpose(FrstLtr)
End If
Next
Next
End If
End With
Next x
End Sub
JanDates has 31 columns (I don't know why you have the ReDim in there when you immediately override it). If ComLength is more than 31, there aren't enough columns in the array to use d as a column index, but I can't really see why you are using d to iterate the columns at all.
What I want to achieve is to use d to iterate through each value of JanDates so the stored values line up with the days of the month. So FrstLtr will store the count for 01/01/2022 in array element 1, and so on. I want to get a count for each of the 31 days, but I want it to count through more than 31 entries, because multiple entries per date are possible.
That doesn't make sense. You're counting the items in the same range each time, so why are you using the number of rows as a counter? If you are checking how many times each of 31 dates appears in the 100 rows, why would you want to loop 100 times rather than 31?
I made adjustments to my code; however, now I'm not able to transpose the entirety of the array into my range. It only outputs the first array value in each cell of the range. Can I output the entirety of the array into the range without looping over the output?
You don't need the Transpose there since you are populating a row, not a column.
How to implement MVVM in Windows Phone 8.1 app (xaml)?
Could anyone give an example of an implementation? Is it done with the ViewModel implementing INotifyPropertyChanged (and raising events, as is done in Silverlight), or some other way? How is the ViewModel bound to the view?
All the examples I've found so far are incomplete or outdated (they refer to Silverlight apps, not XAML ones).
I have a simple MVVM example on my blog if you're interested. As Abdurrahman said, MVVM is a pattern so the pattern would be the same regardless of if you're using WPF or Silverlight.
It's worth investigating some MVVM frameworks, Caliburn Micro now supports WP8.1 and Universal apps.
@Rachel AFAIK MVVM is supported out of the box in Silverlight. That means no external dependency of any kind; you can wire it up yourself quite easily. The API is different in XAML apps.
@Arnthor I'm not sure what you're meaning by "out of the box". MVVM is a pattern to follow while designing your application, and can easily be used with both WPF and Silverlight with no 3rd party tools. The link I added in my first comment is a simple example that does not use any 3rd party components. It sounds like you found what you were looking for though, so good luck with your project.
I currently use the following approach in my own Universal/W8.1/WP8.1 apps. This approach uses the RArcher WinRT Toolkit, which is an experimental toolkit based on the MVVM pattern.
It provides a way to maintain app state and you can use the ViewModelBase to implement INPC.
It also uses the Unity dependency injection container.
I'd start with making the ViewModelLocator an application-wide resource, so my Views can access it easily.
<Application.Resources>
<vm:ViewModelLocator x:Key="ViewModelLocator" />
</Application.Resources>
The view can use it like so:
<Page.DataContext>
<Binding Source="{StaticResource ViewModelLocator}" Path="MainViewModel" />
</Page.DataContext>
The ViewModelLocator looks like this:
public sealed class ViewModelLocator
{
public ViewModelLocator()
{
RegisterIoCBindings();
}
public IMainViewModel MainViewModel
{
get { return IocContainer.Get<IMainViewModel>(); }
}
private void RegisterIoCBindings()
{
IocContainer.Container.RegisterType(typeof(IMainViewModel), typeof(MainViewModel),
null, new ContainerControlledLifetimeManager());
}
}
The MainViewModel has the ViewModelBase as baseclass and implements the IMainViewModel:
public sealed class MainViewModel : ViewModelBase, IMainViewModel
{
private string myText;
[AutoState]
public string MyText
{
get { return myText; }
set
{
myText = value;
OnPropertyChanged();
}
}
public MainViewModel() // You can use constructor injection here
{
}
}
That is the basic setup. And as the others have stated, MVVM is a pattern and there are many ways to use it. I would say, use what feels good ;-)
If you like to know more about this approach, check out the toolkit and unity DI.
Thanks for the detailed answer, I'll go check out some frameworks, like the one you described. I'm really disappointed, though, if there's no way to get MVVM out of the box.
You can wire it up yourself of course, if that is what you mean by "out of the box". Just implement INPC on your view model, and you're almost finished.
I would be careful about using that service-locator pattern. As stated, the framework contains Unity, so you might as well inject the view models into the views and not have the views take these sorts of dependency hits.
In the case of Windows RT I would advise looking at PRISM. It provides really good, modern development practices. You will get a suitable navigation service, app lifecycle management, great MVVM support and a very flexible view and view-model resolution mechanism.
You can easily add it to your project via NuGet.
It has pretty good documentation, so if you have any questions you can find an answer on MSDN or even download the free book Prism for the Windows Runtime. Our team has had successful experience building projects using PRISM.
There is no difference; it's the same, because MVVM is a pattern. You can implement it in your Windows Phone app easily. I'm using MVVM Light for my WP apps and the EventToCommand behavior to raise events. I have an open-sourced app on GitHub; you can check it out if you want.
ODI 12c Smart Import into Exec Repository with SDK behave like importing into Dev Repository
I try to smart import an XML file generated from an Exec Repository via ODI client to another Exec Repository with the SDK. I am using ODI <IP_ADDRESS>.
I already had a java program that was working fine in 11g (<IP_ADDRESS>) but since I upgraded to 12, I have errors (I made some changes to the code due to the new version).
Smart importing the xml file to the Exec Repository with ODI studio is working fine so the file seems okay.
The error given by the program is :
oracle.odi.impexp.smartie.OdiSmartImportException: com.sunopsis.dwg.SQLWorkReposException: ORA-00942: Table ou vue inexistante
Caused by: Error : 942, Position : 14, Sql = SELECT 1 FROM SNP_VAR WHERE I_PROJECT IS NULL AND VAR_NAME = 'CONN_DEC_USER', OriginalSql = SELECT 1 FROM SNP_VAR WHERE I_PROJECT IS NULL AND VAR_NAME = 'CONN_DEC_USER', Error Msg = ORA-00942: Table ou vue inexistante
According to Oracle support, it is completely normal that my Exec repository doesn't have the SNP_VAR table ("Table ou vue inexistante" is French for "table or view does not exist"); it only exists in a Dev repository.
What is strange is that the smart import makes this request at all. It seems to me that the smart import is acting as if it were working with a Dev repository (just guessing).
My code :
private String TargetUrl = "jdbc:oracle:thin:@";
private String TargetDriver="oracle.jdbc.OracleDriver";
private String TargetMaster_User;
private String TargetMaster_Pass;
private String WorkRep_Execution="WORK1";
private String TargetOdi_User="USER";
private String TargetOdi_Pass="PASS";
private String ExportFile;
private MasterRepositoryDbInfo targetmasterInfo;
private WorkRepositoryDbInfo workInfo_exec;
private OdiInstance odiInstance_exec;
private Authentication auth_exec;
private SmartImportServiceImpl smartImport;
private ITransactionManager Tm;
ITransactionStatus TStatus;
SmartImport(String Url, String File, String targetMasterUser, String targetMasterPwd) throws IOException{
TargetUrl = TargetUrl+Url;
ExportFile = File;
TargetMaster_User = targetMasterUser;
TargetMaster_Pass = targetMasterPwd;
targetmasterInfo = new MasterRepositoryDbInfo(TargetUrl, TargetDriver, TargetMaster_User,TargetMaster_Pass.toCharArray(), new PoolingAttributes());
workInfo_exec = new WorkRepositoryDbInfo(WorkRep_Execution, new PoolingAttributes());
odiInstance_exec=OdiInstance.createInstance(new OdiInstanceConfig(targetmasterInfo,workInfo_exec));
auth_exec = odiInstance_exec.getSecurityManager().createAuthentication(TargetOdi_User,TargetOdi_Pass.toCharArray());
odiInstance_exec.getSecurityManager().setCurrentThreadAuthentication(auth_exec);
smartImport = new SmartImportServiceImpl(odiInstance_exec);
Tm = odiInstance_exec.getTransactionManager();
TStatus = odiInstance_exec.getTransactionManager().getTransaction( new DefaultTransactionDefinition());
}
smartImport.importObjectsFromXml(ExportFile, "MyPassword".toCharArray(), false);
The master repository I am working on has only one work repository (called "WORK1") which is an execution repository.
Any clue on what is wrong?
Thanks
PEB
Have you tried importFromXml instead of importObjectsFromXml? To me the term "objects" refers to base objects, as opposed to scenarios/load plans only. Haven't tried it though. Also, did you have a look at Deployment Archives? I have used them since ODI 12.2.1 for promoting code from one environment to another. I can share a Groovy script if needed.
Hello JeromeFr, thanks for your response. I tried importFromXml with the same result. I did not try Deployment Archives, but I just need to deploy scenarios in my case; would that work? I am not familiar with Groovy, but yes please, it would be great to read your script.
Thanks to JeromeFr's suggestion, I found a workaround for my issue using Deployment Archives instead of Smart Export/Import.
It works fine exporting the archive from my Dev repository (through the ODI client) and importing it into my Exec repositories via the Java SDK (I have several exec environments).
The code for importing the archive, replacing the smart import:
DeploymentService.applyFullDeploymentArchive(odiInstance_exec, "EXEC_Init.zip", true, "Password".toCharArray(), true);
Thx
PEB
S3 sync exclude piggybacking off .gitignore
Is there any way to tell the aws s3 sync command to exclude file patterns as outlined in a .gitignore file?
I have a CodeBuild pipeline which I want to trigger, but have it run against my local files, for testing changes I don't want to check in.
You would need to convert the .gitignore information into the --exclude format expected by the AWS Command-Line Interface (CLI).
See: Using High-Level s3 Commands with the AWS Command Line Interface
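For the simple cases, the conversion can be scripted. Below is a rough Python sketch (my own, not part of the CLI) that turns `.gitignore` lines into `--exclude` arguments. Note that full `.gitignore` semantics (negation with `!`, anchoring, nested `.gitignore` files) do not map cleanly onto the CLI's glob-style filters, so treat this as an approximation:

```python
def gitignore_to_excludes(path=".gitignore"):
    """Build `--exclude PATTERN` arguments for `aws s3 sync` from a .gitignore.

    A rough approximation only: comments, blank lines and negations (!pattern)
    are skipped, and a directory pattern like "build/" becomes "build/*".
    Full .gitignore anchoring/glob semantics are NOT reproduced.
    """
    flags = []
    with open(path) as fh:
        for line in fh:
            pattern = line.strip()
            # skip blanks, comments, and negations we can't express
            if not pattern or pattern.startswith(("#", "!")):
                continue
            # a directory pattern should exclude everything under it
            if pattern.endswith("/"):
                pattern += "*"
            flags += ["--exclude", pattern]
    return flags
```

The resulting list can then be spliced into the command, e.g. `aws s3 sync . s3://my-bucket --exclude 'node_modules/*' --exclude '*.log'` (the bucket name here is a placeholder).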
If you have a script I'll give you a tick ( ; haha Otherwise I will write my own one once I muster up the motivation.
How to apply a function to the output of nn.conv()/nn.conv3d()
Assume I have an output from a 3D nn.conv(), or nn.conv3d().
The output is a tensor in following shape:
[1(batch),28(depth), 28(rows), 28(cols), 32(channels)]
<tf.Tensor 'Relu_1:0' shape=(1, 28,28, 28, 32) dtype=float32>
I want to apply a function to each 28x28x28 volume of data. I really do not understand what the data looks like inside. Why should it not be (channels, 28, 28, 28)? I would like to think of it as an np.array.
I saw it could be done by:
function_to_map = f(x)
res = tf.map_fn(f, input)
But I do not know how to define the function f on this tensor. I am really confused by the channel dimension here and by what the data looks like inside.
conv1=tf.pack([tf.py_func(func,[tf.unpack(conv1)[i]],[tf.float32])[0] for i in range(n)])
The input to func is a numpy.array and the output of the function is also a numpy array; func can be any Python function. The data is (b, c, d) if the tensor is (n, b, c, d), where n is the batch size. Finally figured it out.
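On the layout question: TensorFlow's convolutions default to channels-last, so the tensor is indexed as [batch, depth, rows, cols, channels] rather than (channels, 28, 28, 28); the k-th 28x28x28 volume of batch item b is t[b, :, :, :, k]. A NumPy sketch of applying a per-volume function under that assumption (the function f here is just an illustrative placeholder, not from the question):

```python
import numpy as np

# Stand-in for the conv output: [batch, depth, rows, cols, channels]
t = np.random.rand(1, 28, 28, 28, 32).astype(np.float32)

def f(volume):
    # any per-volume function; here, subtract the volume's mean
    return volume - volume.mean()

# Channels-last layout: iterate batch items and channels explicitly,
# applying f to each 28x28x28 volume, then restack to the original shape
out = np.stack(
    [np.stack([f(t[b, :, :, :, k]) for k in range(t.shape[-1])], axis=-1)
     for b in range(t.shape[0])],
    axis=0,
)
assert out.shape == t.shape  # (1, 28, 28, 28, 32)
```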
How to understand the Java String source code
How does it work? I can't understand how it is that a new string is created every time you change something in the original one. What do offset, value and count stand for?
private final char value[];
private final int offset;
private final int count;
public String() {
this.offset = 0;
this.count = 0;
this.value = new char[0];
}
public String(String original) {
int size = original.count;
char[] originalValue = original.value;
char[] v;
if (originalValue.length > size) {
// The array representing the String is bigger than the new
// String itself. Perhaps this constructor is being called
// in order to trim the baggage, so make a copy of the array.
int off = original.offset;
v = Arrays.copyOfRange(originalValue, off, off+size);
} else {
// The array representing the String is the same
// size as the String, so no point in making a copy.
v = originalValue;
}
this.offset = 0;
this.count = size;
this.value = v;
}
public String(char value[]) {
int size = value.length;
this.offset = 0;
this.count = size;
this.value = Arrays.copyOf(value, size);
}
public String(char value[], int offset, int count) {
if (offset < 0) {
throw new StringIndexOutOfBoundsException(offset);
}
if (count < 0) {
throw new StringIndexOutOfBoundsException(count);
}
// Note: offset or count might be near -1>>>1.
if (offset > value.length - count) {
throw new StringIndexOutOfBoundsException(offset + count);
}
this.offset = 0;
this.count = count;
this.value = Arrays.copyOfRange(value, offset, offset+count);
}
public String substring(int beginIndex, int endIndex) {
if (beginIndex < 0) {
throw new StringIndexOutOfBoundsException(beginIndex);
}
if (endIndex > count) {
throw new StringIndexOutOfBoundsException(endIndex);
}
if (beginIndex > endIndex) {
throw new StringIndexOutOfBoundsException(endIndex - beginIndex);
}
return ((beginIndex == 0) && (endIndex == count)) ? this :
new String(offset + beginIndex, endIndex - beginIndex, value);
}
public String concat(String str) {
int otherLen = str.length();
if (otherLen == 0) {
return this;
}
char buf[] = new char[count + otherLen];
getChars(0, count, buf, 0);
str.getChars(0, otherLen, buf, count);
return new String(0, count + otherLen, buf);
}
public static String valueOf(char data[]) {
return new String(data);
}
public static String valueOf(char c) {
char data[] = {c};
return new String(0, 1, data);
}
This is the source of String from Java 6. In Java 7, the source is very different.
The value is the underlying character array of the string. The offset is where the string starts, and the count is how long it is. A string may be on the array {'a','b','c','d','e'} with count 3 and offset 1, and it's "bcd". This way the array isn't copied around for every substring operation.
Still confused. What's the value in your example? What do you mean by "a string is on the array"?
value is the array. The string may not be the entire array, but instead is a substring of it.
...every time a new string has been created once you change something in the original one.
I assume you are talking about the immutability of String. It's not something complex or clever. Rather, every operation on String does not change the original one. It simply copies the result over to a new String, or keeps around an unchanged reference to the old one.
A string is based on a character array, and the various string operations access that character array. When a new string that is different than the old string is to be created, a new character array is created, and data is copied over from the old string, with the changes in place. Then a new String object is made from that character array.
For example, the concat method prepares a new character array, copies over the data from the two Strings (the current one and the one passed as a parameter), and then makes a new String object backed by this new character array. The two old String objects are not changed.
But the version you have brought here is from Java 6. Before Java 7, the authors of Java wanted to allocate less memory, by allowing substring operations to point to the original character array. The idea here was that since the original, long character array is never going to be changed (because none of the methods ever changes that array), all its substrings can actually be backed by the same character array, if you define a string by three items:
Which character array is backing it.
Where in that character array does our current string start (offset).
How many characters in that character array are considered to be part of our current string (count).
So, a string such as "ABC" can be represented as:
char array: { 'A', 'B', 'C' }, offset: 0, count: 3,
char array: { 'A', 'B', 'C', 'D', 'E' }, offset: 0, count: 3,
char array: { 'F', 'O', 'O', 'A', 'B', 'C' }, offset: 3, count: 3
All of these are valid implementations that Java (up to version 6) will consider to be "ABC".
This trick allowed them to avoid copying arrays when doing the substring operation. It doesn't cause the string to be immutable. It's a trick that's based on the fact that String is immutable.
However, in Java 7, this trick has been abandoned and now the only valid representation of "ABC" in Java is #1 above. The reason for this is that this trick actually caused memory leaks. You could create a huge string, take a few tiny substrings of it, then get rid of all references to the huge string... but still, it would not be garbage-collected. Why? Because the tiny substrings were still referring to the internal, huge, character array inside it.
To sum up:
String immutability is achieved by making sure that there is no method that changes the backing character array. All operations are read operations. You can see that the methods that return String values invoke new String(...) to return. And the various constructors do not change anything in the original strings passed to them as parameters.
The offset and count trick, now obsolete, relied on immutability to save copy operations. It did not cause immutability, just relied on it.
You're amazing. What about the order in the String? If the String I created is "dcb", then how is it created through the constructor using only offset and count, since I can't see it here.
If the string you created was "dcb" using which method?
I mean there must be a difference between "dcb" and "bcd", so how is that handled? It isn't included here?
They are not created based on the same character array, so what's the problem? Each of them is created with a different character array.
yeah...I get it. It's an array, not a set.
Why not change the short string's internal representation when the GC runs, so the array can be collected on the next pass? Or, alternately, why not have a string as a special variable-length object like arrays, so that they are only one object, as C# does?
@Random832 The string object is not aware that a garbage collection is running or that the reference it has to the char array is one that could be interfering with it. As for why Java and C# are different, only the designers of the language can say.
I was actually suggesting having the garbage collector be aware of strings.
Distribution of order statistics
We are given a sample of size $n$ (the $X_i$); the order statistics are as follows:
$Y_1\leq Y_2 \leq Y_3 \leq \dots \leq Y_{n-1}\leq Y_n$
I am trying to establish the probability density function of the $\alpha$th order statistic, i.e. $f_{Y_{\alpha}}(y)$.
The solution goes as follows:
$f_{Y_{\alpha}}(y)=\lim_{\Delta y\to 0}\dfrac{F_{Y_{\alpha}}(y+\Delta y)-F_{Y_{\alpha}}(y)}{\Delta y}$
$= \lim_{\Delta y\to 0}\dfrac{P(y<Y_{\alpha}\leq y+\Delta y)}{\Delta y}$
$= \lim_{\Delta y\to 0}\dfrac{P\big((\alpha -1)\text{ of the }X_i \leq y;\ \text{one } X_i \text{ in } (y,y+\Delta y];\ (n-\alpha)\text{ of the }X_i > y+\Delta y\big)}{\Delta y}$
Now, let $N$ be the number of $X_i$'s such that $X_i\leq y$; thus $N\sim\mathrm{Bin}(n,F_{X_i}(y))$.
Also, let $N'$ be the number of $X_i$'s such that $X_i> y+\Delta y$; thus $N'\sim\mathrm{Bin}(n,1-F_{X_i}(y))$.
Thus, $f_{Y_{\alpha}}(y)=\lim_{\Delta y \to 0}\dfrac{\left(\binom{n}{\alpha -1}[F(y)]^{\alpha -1}[1-F(y)]^{n-\alpha}\right)(F(y+\Delta y)-F(y))\left(\binom{n}{n-\alpha}[F(y)]^{\alpha}[1-F(y)]^{n-\alpha}\right)}{\Delta y}$
Following this, I am not able to get to the correct answer, which is $\dfrac{n!}{(\alpha -1)!\,(n-\alpha)!}[F(y)]^{\alpha-1}[1-F(y)]^{n-\alpha}f(y)$.
Can anyone help ?
Have a look here and here
Is this method not good? @RRL
Well, we know that the PDF is the derivative of the CDF: $f_{Y_\alpha}(y) = \frac{d}{dy} F_{Y_\alpha}(y)$. You start pursuing this by writing the definition of the derivative, but it's not going anywhere for you, since you have not clearly expressed the CDF. The first link shows that $F_{Y_\alpha}(y) = \sum_{j=\alpha}^{n} \binom{n}{j}[F(y)]^j[1-F(y)]^{n-j}$. Now you take the derivative, which is not trivial to reduce, but it is explained in the second link.
Got the proof in the links you provided, but I was stuck on the proof above for like $1\frac{1}{2}$ days. I thought I was close to the solution; well, never mind, thanks anyway! @RRL
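For completeness, the sticking point in the attempted limit above is the coefficient: choosing which $\alpha-1$ of the $n$ observations fall below $y$, which single one lands in $(y,y+\Delta y]$, and which $n-\alpha$ fall above it is one multinomial count, $\frac{n!}{(\alpha-1)!\,1!\,(n-\alpha)!}$, not a product of two binomial coefficients; the middle factor then tends to $f(y)$:

$$f_{Y_{\alpha}}(y)=\frac{n!}{(\alpha-1)!\,1!\,(n-\alpha)!}[F(y)]^{\alpha-1}[1-F(y)]^{n-\alpha}\lim_{\Delta y\to 0}\frac{F(y+\Delta y)-F(y)}{\Delta y}=\frac{n!}{(\alpha-1)!\,(n-\alpha)!}[F(y)]^{\alpha-1}[1-F(y)]^{n-\alpha}f(y).$$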
Express routers
I am developing content preprocessor on NodeJS
I have 3 concrete ways of preprocessing:
building html
building xhtml
building xml
Each way is very different from each other (different middlewares)
So I initialized 3 routers:
var xmlRouter = express.Router();
var xhtmlRouter = express.Router();
var htmlRouter = express.Router();
All I need is to dispatch each request to concrete router.
I can't use app.use() to mount each router because of the stripping effect it has on my URL:
// Binding
app.use(/\/\S*\.fast\.xml(?=$)/, xmlRouter);
app.use(/\/\S*\.xhtml(?=$)/, xhtmlRouter);
app.use([/\/\S*\.html(?=$)/, /\/\S*\/(?=$)/], htmlRouter);
I would lose my URL, which I still need later. No way.
So is there any solution?
I can't test it right now, but as it won't fit into a comment, I'll write it here in the answer section.
IMHO it should work this way:
var xmlRouter = express.Router();
app.use(function(req, res, next) {
if( req.url.match(/\/\S*\.fast\.xml(?=$)/) ) {
//if the url matches, pass the res, res, next to the xmlRouter
xmlRouter.handle(req, res, next);
//if handle does not work try: xmlRouter(req, res, next)
} else {
//otherwise pass it to the next registered route
next();
}
});
//do the same for the other routers
Maybe there is an error in this sample because I could not test it, but I think the idea should be clear.
Seems to be the perfect answer. Thank you, Mr. t.niese.
@Artem great that it works. I had a look at the docs, and it seems that handle might be private, so please check if xmlRouter(req, res, next) would work too and if so, then please use this instead of handle I'll update my answer as soon as I can test it myself.
perfect:
// Bindings
app.use(function(req, res, next) {
    if (req.url.match(/\/\S*\.fast\.xml(?=$)/)) {
        xmlRouter.handle(req, res, next);
    } else if (req.url.match(/\/\S*\.xhtml(?=$)/)) {
        xhtmlRouter.handle(req, res, next);
    } else if (req.url.match(/\/\S*\.html(?=$)/) || req.url.match(/\/\S*\/(?=$)/)) {
        htmlRouter.handle(req, res, next);
    } else {
        next();
    }
});
forgot to remove handle
Converting VBScript into C#.NET/VB.NET?
Well, I am stuck on a problem. I have a VBScript script which I need to convert to C#.NET (VB.NET would also work). I am new to programming and don't have much idea of how to do it. One traditional way is to rewrite the same thing in C#, but I have very little time. Is there any converter, or can someone just give me a start? Please suggest where I should begin. It is actually for an e-learning product named Lectora, and I need to pass values from Lectora to my database.
Any help would be appreciated. Thank you.
Here is my VBscript Code.
Sample ASP Script
<%@ Language=VBScript %>
<%
'Get the parameters posted from the test'
testname=Request.form("TestName")
score=Request.form("Score")
user=Request.form("name")
numQuestions=Request.form("NumQuestions")
passingGrade=Request.form("PassingGrade")
'Validate that this is actually from a Lectora test'
if testname="" Or score="" Or user="" Or numQuestions="" Or passingGrade="" then
Response.Write "<html>"
Response.Write "<head><title>Failure</title></head>"
Response.Write "<body>"
Response.Write "STATUS=500"
Response.Write "<br>"
Response.Write "Could not parse test results due to a parameter error."
Response.Write "</body></html>"
else
'Write the results to a file named the same as the test'
'This could be a database or any kind of object store, but'
'to keep it simple, we will just use a flat text file'
fileName = "c:\" & testname & ".log"
'Open the results file for append'
Const ForReading = 1, ForWriting = 2, ForAppending = 8
Set objFSO = CreateObject("Scripting.FileSystemObject")
if not objFSO.FileExists(fileName) then
objFSO.CreateTextFile(fileName)
end if
Set objInFile = objFSO.OpenTextFile( fileName, ForAppending, True )
'Write the results'
objInFile.WriteLine( Date & ", " & Time & ", " & user & ", " & score )
'Older courses produced by Lectora used a zero based index for the questions '
'(i.e. Question0 is the first question)'
'Newer courses are one based (i.e. Question1 is the first question)'
'determine which one it is'
Dim startIndex
valTemp = Request.form("Question0")
if( valTemp="" ) then
startIndex=1
else
startIndex=0
end if
'Write all of the questions and answers'
for i = startIndex to cint(startIndex + numQuestions-1)
nameQ = "Question" + CStr(i)
nameA = "Answer" + CStr(i)
valQ = Request.form(nameQ)
valA = Request.form(nameA)
objInFile.WriteLine( nameQ & ": " & valQ )
objInFile.WriteLine( nameA & ": " & valA )
Next
'Close results file'
objInFile.Close
Set objInFile = Nothing
Set objFSO = Nothing
end if
%>
Is it for Windows or macOS?
I would copy the code directly into VB.NET, then debug it. The majority of the code is compatible. Some things, such as Set objFSO = CreateObject("Scripting.FileSystemObject"), only require minor modifications: objFSO = New Scripting.FileSystemObject (although there are more modern ways to accomplish file I/O).
React loop through and create element
I have array of items in the following structure.
[{column: 1, name: 'one'}, {column: 2, name: 'Two'},{column: 2, name: 'Three'},{column: 3, name: 'Four'}]
I need to loop through the array, and items with the same column value should be wrapped in a single parent.
Like,
<div class="parent">
One
</div>
<div class="parent">
<div>Two</div>
<div>Three</div>
</div>
<div class="parent">
Four
</div>
As you can see, the second parent element has two children, because they have the same column value, 2.
Aggregate the array based on column, and then generate the markup from the aggregated value.
const array = [{column: 1, name: 'one'}, {column: 2, name: 'Two'}, {column: 2, name: 'Three'}, {column: 3, name: 'Four'}]
const aggregatedArray = array.reduce((agg, {column, name}) => {
  if (!Array.isArray(agg[column])) {
    agg[column] = []
  }
  agg[column].push(name)
  return agg
}, {})
Object.entries(aggregatedArray).map(([key, value]) => <div className="parent">
  {value.map(v => <div key={v}>{v}</div>)}
</div>)
(The answer as originally posted had a couple of syntax errors — agg[column] was missing its closing ], volumn was a typo for column, and ([key, value]) was missing its parentheses — which are corrected above.)
const arr0 = [
{column: 1, name: 'one'},
{column: 2, name: 'Two'},
{column: 2, name: 'Three'},
{column: 3, name: 'Four'},
];
/* convert Array<{ column: number, name: string }>
* to Array<Array<string>>
*
 * convert(arr0) returns [['one'], ['Two', 'Three'], ['Four']]
*/
const convert = arr => arr.reduce((acc, { column, name }) => {
const index = column - 1;
if (!acc[index]) {
acc[index] = [name];
} else {
acc[index].push(name);
}
return acc;
}, []);
Open urls from a mark_text() data table that was filtered based on chart selections
I'm trying to display a table of url values based on interactive selections.
I am able to display it, following the example on the docs, but I want to be able to click on one of the urls in the table and have it open it in a new tab.
Idea is to have multiple charts and their selections filter down the url data table.
This way the user can choose one of the rows from the table and open up the source link of that row, or in this case a google search.
From what I gathered, I should be using href='url' as one of the encodings, but it has no effect.
Here's the code I've been testing it on:
import altair as alt
from vega_datasets import data
from urllib.parse import urlencode
source = data.cars()
# placeholder url column
def make_google_query(name):
return "https://www.google.com/search?" + urlencode({'q': '"{0}"'.format(name)})
source['url'] = source['Name'].apply(make_google_query)
# Brush for selection
brush = alt.selection_interval()
# Scatter Plot
points = alt.Chart(source).mark_point().encode(
x='Horsepower:Q',
y='Miles_per_Gallon:Q',
color=alt.condition(brush, alt.value('steelblue'), alt.value('grey'))
).add_params(brush)
# Base chart for data tables
ranked_text = alt.Chart(source).mark_text(align='right').encode(
y=alt.Y('row_number:O').axis(None)
).transform_filter(
brush
).transform_window(
row_number='row_number()'
).transform_filter(
alt.datum.row_number < 15
)
# Data Table
urls = ranked_text.encode(text='url:N').properties(
title=alt.Title(text='url', align='right')
)
# Build chart
alt.hconcat(
points,
urls
).resolve_legend(
color="independent"
).configure_view(
stroke=None
)
I'm unsure why it doesn't work with mark_text. If you use it with mark_circle as in this example, it should work: https://altair-viz.github.io/gallery/scatter_href.html. Maybe create a smaller example and post it as a bug on the Vega-Lite issue tracker if it reproduces?
How do I make ImageAwesome react to hover (change foreground color) like a TextBlock on a Button in WPF?
I have found a workaround by using unicode instead of ImageAwesome but I would much rather not have to look up all of the icons' unicode of all of the font awesome icons I am using in my program.
The font awesome package I am using is: https://github.com/MartinTopfstedt/FontAwesome5
Here's a snippet of my button style:
<Style TargetType="{x:Type Button}" x:Key="ButtonStyle">
<Setter Property="Cursor" Value="Hand"/>
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type Button}">
<Border Background="{TemplateBinding Background}" BorderBrush="Transparent" BorderThickness="1" CornerRadius="4">
<ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center"/>
</Border>
</ControlTemplate>
</Setter.Value>
</Setter>
<Setter Property="Foreground">
<Setter.Value>
<SolidColorBrush Color="{DynamicResource PrimClr}" />
</Setter.Value>
</Setter>
<Setter Property="Background">
<Setter.Value>
<SolidColorBrush Color="{DynamicResource SecClr}" Opacity="0.8"/>
</Setter.Value>
</Setter>
<Style.Triggers>
<Trigger Property="IsMouseOver" Value="True">
<Setter Property="Foreground">
<Setter.Value>
<SolidColorBrush Color="{DynamicResource SecClr}" />
</Setter.Value>
</Setter>
<Setter Property="TextBlock.FontWeight" Value="Bold"/>
<Setter Property="Background">
<Setter.Value>
<SolidColorBrush Color="{DynamicResource PrimClr}" Opacity="0.8" />
</Setter.Value>
</Setter>
</Trigger>
<Trigger Property="IsEnabled" Value="False">
<Setter Property="TextBlock.FontStyle" Value="Italic"/>
<Setter Property="Foreground" Value="DarkGray"/>
<Setter Property="Background">
<Setter.Value>
<SolidColorBrush Color="Gray" Opacity="0.3"/>
</Setter.Value>
</Setter>
</Trigger>
</Style.Triggers>
</Style>
Here's the example of how I use a button in my program:
<Button x:Name="SubmitBtn" Grid.Column="1" Grid.Row="14" Grid.ColumnSpan="2" Width="200" Height="45" FontSize="24" FontWeight="SemiBold" HorizontalAlignment="Center" VerticalAlignment="Center"
Style="{StaticResource ButtonStyle}" Click="SubmitBtn_Click">
<StackPanel Orientation="Horizontal">
<fa5:ImageAwesome Icon="Solid_UserCheck" Foreground="GhostWhite" Height="24" Width="24" Margin="0,0,10,0" />
<TextBlock Text="Save Player"/>
</StackPanel>
</Button>
Please note: not putting a color for Foreground results in the default black color; I at least want something to show in the meantime.
The ImageAwesome does not pick up the colors from the button style like the TextBlock does... I want to be able to make it do so, but I cannot find an answer anywhere! Any help would be appreciated.
Also, here's the workaround I found, and I hope it's not the only solution...
<Button x:Name="SubmitBtn" Grid.Column="1" Grid.Row="12" Grid.ColumnSpan="4" Width="200" Height="40" HorizontalAlignment="Center" VerticalAlignment="Center" Style="{StaticResource ButtonStyle}">
<StackPanel Orientation="Horizontal">
<TextBlock FontFamily="/FontAwesome.Sharp;component/fonts/#Font Awesome 5 Free Solid" Text="" FontSize="24" Margin="0,0,10,0" TextAlignment="Center" VerticalAlignment="Center" />
<TextBlock Text="Submit" FontSize="24" HorizontalAlignment="Center" VerticalAlignment="Center" />
</StackPanel>
</Button>
I found the solution. I should have been using FontAwesome instead of ImageAwesome.
<Button x:Name="SubmitBtn" Grid.Column="1" Grid.Row="14" Grid.ColumnSpan="2" Width="200" Height="45" FontSize="24" FontWeight="SemiBold" HorizontalAlignment="Center" VerticalAlignment="Center"
Style="{StaticResource ButtonStyle}" Click="SubmitBtn_Click">
<StackPanel Orientation="Horizontal">
<fa5:FontAwesome Icon="Solid_UserCheck" FontSize="24" Margin="0,0,10,0" />
<TextBlock Text="Save Player"/>
</StackPanel>
</Button>
The only problem, and it's not a big deal: in the designer it shows as a square, so I just have to make sure I'm choosing the correct icon.
Why aren't all my processes starting at once?
I have a process that adds up a bunch of numbers:
def slow(x):
num = 0
for i in xrange(int(1E9)):
num += 1
And I start 500 of these.
for x in range(500):
out.write("Starting slow process - " + str(datetime.now()) + "\n")
p = multiprocessing.Process(target = slow, args = (x, ))
p.start()
I would expect the processes to start all at once, since the maximum number of processes allowed on my computer is greater than 500.
user@computer$ cat /proc/sys/kernel/pid_max
32768
However, there's a brief delay between the start time of one process and the start time of the next process.
Starting slow process - 2015-05-14 16:41:35.276839
Starting slow process - 2015-05-14 16:41:35.278016
Starting slow process - 2015-05-14 16:41:35.278666
Starting slow process - 2015-05-14 16:41:35.279328
Starting slow process - 2015-05-14 16:41:35.280053
Starting slow process - 2015-05-14 16:41:35.280751
Starting slow process - 2015-05-14 16:41:35.281444
Starting slow process - 2015-05-14 16:41:35.282094
Starting slow process - 2015-05-14 16:41:35.282720
Starting slow process - 2015-05-14 16:41:35.283364
And this delay gets longer as we start more processes:
Starting slow process - 2015-05-14 16:43:40.572051
Starting slow process - 2015-05-14 16:43:41.630004
Starting slow process - 2015-05-14 16:43:42.716438
Starting slow process - 2015-05-14 16:43:43.270189
Starting slow process - 2015-05-14 16:43:44.336397
Starting slow process - 2015-05-14 16:43:44.861934
Starting slow process - 2015-05-14 16:43:45.948424
Starting slow process - 2015-05-14 16:43:46.514324
Starting slow process - 2015-05-14 16:43:47.516960
Starting slow process - 2015-05-14 16:43:48.051986
Starting slow process - 2015-05-14 16:43:49.145923
Starting slow process - 2015-05-14 16:43:50.228910
Starting slow process - 2015-05-14 16:43:50.236215
What might account for this phenomenon?
check GIL for python.
@BarunSharma I thought multiprocessing sidestepped the Global Interpreter Lock? Or at least that's what it says in the documentation: https://docs.python.org/2/library/multiprocessing.html
yes, you are right. My mistake(I took it as multithreaded). Sorry, have no idea on this. :(
You only show part of your code, you are using the if __name__ == "__main__": trap before calling multiprocessing I hope?
@cdarke Wait, what does that do? I haven't been doing that
Ah Ha! Sounds like you have a fork bomb, I'll post.
You should take a look at multiprocessing.Pool for this kind of application.
@HarryPotfleur: Thanks! I tried that but it runs slower for some reason.
@Jessica "I would expect the processes to start all at once" - you mean "I would expect an equal delay between the starting of each process", since the processes still start sequentially (each process takes at least some time to start; they cannot start at the same time).
I may be wrong, looks like that line is only needed on Windows. Sorry.
You are starting 500 processes, each of which you're asking to spin by counting to a billion. I'm not sure why it surprises you that this takes time?
Starting 500 processes would take a bit of time even if they did nothing, but when each of them uses Python to count to a billion, it's pretty much a given that a second or two will elapse. These other processes will now compete for CPU time, and it's not a given that the process that does the spawning wins this race and gets to spawn the rest immediately.
Edit: you're also doing 500 calls to the system to get the time and print it; this takes some time too. If you printed the time only when you start spawning and when you're done, I suspect that would speed things up as well.
I suspect that this would go quicker if you replaced the counting loop with a call to sleep or something of that nature, and thus that what you're seeing is not really just the time to start processes at all.
@Jessica The best practice with multiprocessing is to only spawn a process for each CPU on your system (at least for CPU-bound work, you can do more if you're doing I/O bound work). You've probably got 4 or 8 CPUs on your machine. Asking 4 or 8 CPUs to give time to hundreds of processes running a tight, CPU-bound loop that wants 100% CPU time is definitely going to grind your system to a halt.
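A minimal sketch of that per-core Pool approach (Python 3 syntax, unlike the question's Python 2 code; the loop bound is shortened so the example finishes quickly, and run_all is a hypothetical wrapper name, not from the question):

```python
import multiprocessing

def slow(x):
    # the question's CPU-bound loop, shortened from int(1E9) for illustration
    num = 0
    for _ in range(1000):
        num += 1
    return num

def run_all(n_tasks=500):
    # one worker per CPU core; Pool queues the tasks and feeds them to idle workers
    with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool:
        return pool.map(slow, range(n_tasks))

if __name__ == "__main__":
    print(len(run_all()))  # 500 results, computed by only cpu_count() processes
```

Because the workers are reused, only cpu_count() processes ever compete for the CPU at once, instead of 500 processes all fighting the scheduler while more are still being spawned.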
Here are some changes to your code based on @Agrajag's suggestions, which at least on my system, confirm his suspicions.
Sleep for 10 seconds before starting the computation (to avoid hogging the CPU while the kernel is trying to spawn more processes).
Remove the IO overhead of writing to out in the middle of spawning.
Code
import sys
import time
import multiprocessing
from datetime import datetime
def slow(x):
time.sleep(10)
num = 0
for i in xrange(int(1E9)):
num += 1
times = []
for x in range(500):
times.append(datetime.now())
p = multiprocessing.Process(target = slow, args = (x, ))
p.start()
for x in times:
sys.stdout.write("Starting slow process - " + str(x) + "\n")
Sample output from the tail.
Starting slow process - 2015-05-18 04:17:02.557117
Starting slow process - 2015-05-18 04:17:02.574186
Starting slow process - 2015-05-18 04:17:02.594736
Starting slow process - 2015-05-18 04:17:02.616716
Starting slow process - 2015-05-18 04:17:02.637369
Starting slow process - 2015-05-18 04:17:02.658615
Starting slow process - 2015-05-18 04:17:02.675418
Starting slow process - 2015-05-18 04:17:02.696439
Starting slow process - 2015-05-18 04:17:02.713795
Starting slow process - 2015-05-18 04:17:02.734777
Starting slow process - 2015-05-18 04:17:02.753063
Your computer doesn't really like to run more processes than there are CPU cores. Normally, it's not a big deal, because no one process is hogging the CPU. The operating system can happily allocate resources to each process in turn according to the rules of its process scheduler.
When lots of processes really need the CPU, bad things start to happen. The operating system does its best, but things are likely to slow down. None of the jobs are able to complete their task efficiently.
As you add more active processes, things get worse. Why does that happen?
Well, one factor - among several - is that the CPU caches are probably going to have stale data in them when a new process takes over. CPUs have several levels of cache that act as super fast memory. If a long-running process gets to have sole access to a CPU, it will enjoy much faster speeds because it will have the cache all to itself.
When there are many more processes than CPUs, some of those processes just wait in the queue. Each time the OS allocates CPU time to a waiting process, its memory has to be loaded back into the caches, and so on, slowing everything down for the next process.
Oh - and let's not forget that spawning processes is not instantaneous either. The operating system has other jobs to do, like ensuring you have access to the Internet and checking that files are being written to disk.
Laravel blade "request()->is()" syntax issue
I am using the following piece of code to make a link appear active.
<a href="/profile/{{ $user->id }}" class="{{ request()->is('/profile/'.$user->id) ? 'text-blue-500' : '' }}">View Profile</a>
It was working previously for links that weren't dynamic, but now that I'm passing data in the URL it has stopped working. I've tried dd-ing the request()->is() part and it returns false although the URL and the pattern match.
You can compare with the current URL using url()->current(), or use routeIs() with the route name (without parameters). Note also that request()->is() matches against the path without a leading slash, so the pattern 'profile/'.$user->id should match where '/profile/'.$user->id does not.
Are you using named routes ??
I would also advise you to use the route() helper to echo the href instead of hardcoding it. that way, if you ever change the route to for example /user/profile, all the links change too.
What are the meanings of the field 'on_cpu' in struct task_struct and the field 'cpu' in struct thread_info?
I want to know which cpu the current process is running on in Linux system,
and I have two choices —
get the field on_cpu in struct task_struct or
get the field cpu in struct thread_info.
I wrote a kernel module to probe the two fields, and got the result below:
[ 3991.419185] the field 'on_cpu' in task_struct is :1
[ 3991.419187] the field 'cpu' in thread_info is :0
[ 3991.419199] the field 'on_cpu' in task_struct is :1
[ 3991.419200] the field 'cpu' in thread_info is :0
[ 3991.419264] the field 'on_cpu' in task_struct is :1
[ 3991.419266] the field 'cpu' in thread_info is :1
[ 3991.419293] the field 'on_cpu' in task_struct is :1
[ 3991.419294] the field 'cpu' in thread_info is :1
[ 3991.419314] the field 'on_cpu' in task_struct is :1
[ 3991.419315] the field 'cpu' in thread_info is :1
[ 3991.419494] the field 'on_cpu' in task_struct is :1
[ 3991.419495] the field 'cpu' in thread_info is :0
[ 3991.419506] the field 'on_cpu' in task_struct is :1
[ 3991.419507] the field 'cpu' in thread_info is :1
and I don't know the correct meaning of the two fields.
The cpu field in thread_info specifies the number of the CPU where the process is executing on. This is what you are searching for.
The on_cpu flag in task_struct is actually used as a lock during context switching, so that the runqueue can be unlocked and interrupts can stay enabled during a context switch, avoiding high latency. Basically, when it's 0 the task is not currently executing on any CPU and can be moved to a different CPU.
Difference between QML's Component and Instantiator?
The QML types Component and Instantiator appear to do similar things; create QML objects on demand, as opposed to when parsing their definitions. So what's the difference? Why would I want to use one over the other?
An Instantiator creates instances of the given Component - one for each model-entry given in model. It's similar to the Repeater.
A Component is a Class. A Instantiator is sort of a Factory for the given Component.
But can't I just create instances of a Component without an Instantiator?
There are multiple possibilities; choose the one that fits best for you: you can use a Loader for a single instance, or a Repeater (http://doc.qt.io/qt-4.8/qml-repeater.html) for multiple instances. Since QtQml 2.2 there is also the Instantiator, which is a Repeater that can load asynchronously. You can also use Component.createObject() or Component.incubateObject(), which are attached to every Item.
You can imagine a Component as a prototype of the object, like a class MyClass {} in C++. An Instantiator is a real object which creates items, so these items are parented to the Instantiator. When you create an instance of a Component, you should set a parent.
WrappedComponent is undefined
I am stuck trying to implement graphql with react-apollo and graphcool. I am following this Tutorial
My code with the request query looks like this :
import React from 'react';
import { graphql } from 'react-apollo'
import gql from 'graphql-tag'
const ALL_INSPECTIONS_QUERY = gql`
# 2
query AllInspectionsQuery {
allInspections {
id
date
observation
note
next
temporality
components
done
}
}
`
// 3
export default graphql(ALL_INSPECTIONS_QUERY, { name: 'allInspectionsQuery' })(InspectionRow)
export { InspectionTable }
class InspectionRow extends React.Component {
render() {
if (this.props.allInspectionsQuery && this.props.allInspectionsQuery.loading) {
return <div>Loading</div>
}
if (this.props.allInspectionsQuery && this.props.allInspectionsQuery.error) {
return <div>Error</div>
}
const linksToRender = this.props.allInspectionsQuery.allLinks
return (
// some code
);
}
}
class InspectionTable extends React.Component {
// some other code
}
Everything worked fine before I tried to add the query. Graphcool is also working, I got some data on it.
I get this error now :
TypeError: WrappedComponent is undefined
getDisplayName
node_modules/react-apollo/react-apollo.browser.umd.js:230
227 | return fields;
228 | }
229 | function getDisplayName(WrappedComponent) {
> 230 | return WrappedComponent.displayName || WrappedComponent.name || 'Component';
231 | }
232 | var nextVersion = 0;
233 | function graphql(document, operationOptions) {
wrapWithApolloComponent
node_modules/react-apollo/react-apollo.browser.umd.js:246
243 | var operation = parser(document);
244 | var version = nextVersion++;
245 | function wrapWithApolloComponent(WrappedComponent) {
> 246 | var graphQLDisplayName = alias + "(" + getDisplayName(WrappedComponent) + ")";
247 | var GraphQL = (function (_super) {
248 | __extends$2(GraphQL, _super);
249 | function GraphQL(props, context) {
./src/components/inspection.js
src/components/inspection.js:53
50 | `
51 |
52 | // 3
> 53 | export default graphql(ALL_INSPECTIONS_QUERY, { name: 'allInspectionsQuery' })(InspectionRow)
54 |
55 | export { InspectionTable }
56 |
Here's the tree of the project folder :
├── README.md
├── node_modules
├── package-lock.json
├── package.json
├── public
│ ├── favicon.ico
│ ├── index.html
│ └── manifest.json
├── server
│ ├── graphcool.yml
│ ├── package.json
│ ├── src
│ │ ├── hello.graphql
│ │ └── hello.js
│ └── types.graphql
└── src
├── components
│ ├── App.js
│ ├── data.js
│ ├── description.js
│ ├── inspection.js
│ └── maintenance.js
├── index.css
├── index.js
└── registerServiceWorker.js
I have installed all the npm packages required, but I got this message when installing react-apollo package :
<EMAIL_ADDRESS>has unmet peer dependency<EMAIL_ADDRESS>
I really don't know where this could come from, thanks for your help ! I am new on stackoverflow so tell me if I should be more precise in my explanations.
are you using withApollo HOC?
I am new to this, but I think I'm not, I am just following along the tutorial from the link I put in the post. I only changed the data.
I added this warning yarn message<EMAIL_ADDRESS>has unmet peer dependency<EMAIL_ADDRESS>
Finally solved, it was just an issue with the export:
export default graphql(ALL_INSPECTIONS_QUERY, { name: 'allInspectionsQuery' })(InspectionRow)
export { InspectionTable }
has to be
export default graphql(ALL_INSPECTIONS_QUERY, { name: 'allInspectionsQuery' }) (InspectionTable)
Java / Swing app fails to transition to fullscreen on OS X
I'm using a JOGL FPSAnimator and Apple's FullScreenUtilies class. I implemented this some time ago, and it worked fine. Here is my code for enabling the native OS X fullscreen capability, similar to other code on SO and around the web:
String className = "com.apple.eawt.FullScreenUtilities";
String methodName = "setWindowCanFullScreen";
try {
Class<?> clazz = Class.forName(className);
Method method = clazz.getMethod(methodName,
new Class<?>[] { Window.class, boolean.class });
method.invoke(null, frame, true);
} catch ...
It also works fine in the context of a simple test program I made in an attempt to isolate the issue. I'm not sure at what point the behaviour changed - I haven't spotted anything incriminating in SVN logs. It's likely that I first implemented the feature on an earlier version of OS X, and have also upgraded JOGL version and MyDoggy which we use for docking since. However, all of these things work with fullscreen in the context of other applications.
When I press the green fullscreen button, the usual OSX fullscreen transition starts (it gets its own desktop space), but the window appears frozen from that point on.
The main AWT Event thread carries on running, and I can see that my GLEventListener.display() method is being regularly called. I've tried adding a return to the beginning of that method to eliminate the impact of my rendering code, this unsurprisingly made no difference.
For testing purposes, I added a FullScreenListener:
FullScreenUtilities.addFullScreenListenerTo(frame, new FullScreenAdapter() {
@Override
public void windowEnteringFullScreen(FullScreenEvent arg0) {
log(">>> Entering fullscreen... <<<");
}
@Override
public void windowEnteredFullScreen(FullScreenEvent arg0) {
log(">>> Entered fullscreen. <<<");
}
});
As anticipated, I get the entering fullscreen event, but not the entered one.
It's a fairly large program, but there should be a fairly small surface of things that are potentially relevant here... unfortunately I haven't managed to track them down. Happy if anyone has any pointers.
A complete example is shown here.
Like I say, my application uses similar code to what can be found in various examples on SO and elsewhere on the web... the code in my question is very similar to the enableOSXFullscreen(Window) method from that example, which worked at first, and works in a trivial test app, but not now in my real app unfortunately.
but thanks @trashgod, I suppose it gives more full context to have a complete example.
Java 8, MacOS 10.11.3, seems to work okay for me
Like I say, the basic method works, but "something" in my application goes wrong when invoked in that context meaning that it doesn't complete the transition. Perhaps SO isn't the right place to ask, since I can't provide enough info for someone to answer, but I'm hoping someone will have some idea of the sorts of things that could cause a lock here... must be related to Swing / AWT threading...
What is a UUID?
Well, what is one?
You should also note that a GUID is the same thing.
Well, a Microsoft GUID is the same thing.
@Dave that link was enough to answer this question.
@Dave online UUID generator links seems broken.
Online uuid generator: https://www.uuidgenerator.net/
It's an identification number that will uniquely identify something. The idea is that the number is universally unique; thus, no two things should have the same UUID. In fact, if you were to generate 10 trillion UUIDs, there would be something along the lines of a .00000006 chance of two UUIDs being the same.
"So you're telling me there's a chance!"
Probability of a collision will never be 0
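As a rough sanity check on figures like the one above, the standard birthday-bound approximation for n random 122-bit (version-4) UUIDs is p ≈ n(n−1)/2^123. A sketch (the exact number depends on which UUID version and which assumptions a quoted figure used):

```python
from fractions import Fraction

def collision_probability(n, random_bits=122):
    # birthday approximation: p ≈ n*(n-1) / (2 * 2^random_bits)
    return Fraction(n * (n - 1), 2 * 2**random_bits)

# for 10 trillion random version-4 UUIDs
p = float(collision_probability(10**13))
print(p)  # roughly 9.4e-12 -- tiny, but never exactly zero
```

Exact arithmetic via Fraction avoids float overflow in the 2^122 denominator; the result only becomes a float at the end.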
Standardized identifiers
UUIDs are defined in RFC 4122. They're Universally Unique IDentifiers, that can be generated without the use of a centralized authority. There are four major types of UUIDs which are used in slightly different scenarios. All UUIDs are 128 bits in length, but are commonly represented as 32 hexadecimal characters separated by four hyphens.
Version 1 UUIDs, the most common, combine a MAC address and a timestamp to produce sufficient uniqueness. In the event of multiple UUIDs being generated fast enough that the timestamp doesn't increment before the next generation, the timestamp is manually incremented by 1. If no MAC address is available, or if its presence would be undesirable for privacy reasons, 6 random bytes sourced from a cryptographically secure random number generator may be used for the node ID instead.
Version 3 and Version 5 UUIDs, the least common, use the MD5 and SHA1 hash functions respectively, plus a namespace, plus an already unique data value to produce a unique ID. This can be used to generate a UUID from a URL for example.
Version 4 UUIDs are simply 128 bits of random data, with some bit-twiddling to identify the UUID version and variant.
UUID collisions are extremely unlikely to happen, especially not in a single application space.
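Python's standard-library uuid module (one possible implementation — the answer itself is language-agnostic) exposes each of these versions directly:

```python
import uuid

u1 = uuid.uuid1()   # version 1: MAC address (or random node) + timestamp
u4 = uuid.uuid4()   # version 4: random bits
# versions 3 and 5: namespace + name hashed with MD5 / SHA-1 -- deterministic
u3 = uuid.uuid3(uuid.NAMESPACE_URL, "https://example.com/")
u5 = uuid.uuid5(uuid.NAMESPACE_URL, "https://example.com/")

# every UUID is 128 bits, printed as 32 hex digits in five hyphen-separated groups
print(len(str(u4)), u4.version)  # 36 4
```

Because versions 3 and 5 are deterministic, the same namespace and name always yield the same UUID, which is what makes them useful for deriving an ID from, say, a URL.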
UUID stands for Universally Unique IDentifier.
It's a 128-bit value used for a unique identification in software development. UUID is the same as GUID (Microsoft) and is part of the Distributed Computing Environment (DCE), standardized by the Open Software Foundation (OSF).
As mentioned, they are intended to have a high likelihood of uniqueness over space and time and are computationally difficult to guess. Their generation is based on the current timestamp and a unique property of the workstation that generated the UUID.
Image from https://segment.com/blog/a-brief-history-of-the-uuid/
It's a very long string of bits that is supposed to be unique now and forever, i.e. no possible clash with any other UUID produced by you or anybody else in the world.
The way it works is simple: the current timestamp and a network-related unique property of the computer that generated it are part of the bit string (like the IP address, which ought to be unique at the moment you're connected to the internet, or the MAC address, which is lower level: a hard-wired ID for your network card).
Originally, every network card in the world had its own unique MAC address, but in later generations you can change the MAC address through software, so it's no longer as reliable as a unique ID.
It's a Universally Unique Identifier
A UUID is a 128-bit number that is used to uniquely identify some entity. Depending on the specific mechanisms used, a UUID is guaranteed to be different or is, at least, extremely likely to be different from any other UUID generated. The UUID relies upon a combination of components to ensure uniqueness. A UUID contains a reference to the network address of the host that generated the UUID, a timestamp and a randomly generated component. Because the network address identifies a unique computer, and the timestamp is unique for each UUID generated from a particular host, those two components should sufficiently ensure uniqueness.
I just want to add that it is better to use usUUID (static Windows identifiers).
For example, if a computer user relies on third-party software like a screen reader for blind or low-vision users, that other software (in this case the screen reader) will play better with unique identifiers!
After all, how happy would you be if someone moved your car after you had noted where you parked it?
A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems. The term globally unique identifier (GUID) is also used, typically in software created by Microsoft.
When generated according to the standard methods, UUIDs are for practical purposes unique. Their uniqueness does not depend on a central registration authority or coordination between the parties generating them, unlike most other numbering schemes. While the probability that a UUID will be duplicated is not zero, it is close enough to zero to be negligible.
What originated the use of hawkish in a figurative sense?
Hawkish in a figurative sense is often used to refer to politicians who are in favor of using force rather than diplomacy to achieve something. By extension hawkish is used in financial or economic contexts to refer to an aggressive tone used to indicate possible future threats.
The above figurative usage derives from the “militaristic” sense which developed from the mid-20th century. According to Etymonline:
Hawkish:
"hawk-like," by 1703, from hawk (n.) + -ish. Sense of "militaristic" is from 1965, from hawk in the transferred sense.
and from hawk
transferred sense of "militarist" attested from 1956, probably based on its opposite, dove.
As explained above, the figurative sense is quite intuitive and may have been first used in contrast to the figurative sense of dove, but:
in what context was the term hawkish initially used with the above meaning? Was it originally a British or an American expression? Is there a more precise date in which the term was first used?
Edit:
The suggested expression “war hawk” doesn’t explain the reason of the spike in usage of “hawkish” from the late ‘50s early ‘60s. See Ngram
"The term "War Hawk" was coined by the prominent Virginia Congressman John Randolph of Roanoke, a staunch opponent of entry into the War of 1812." https://en.wikipedia.org/wiki/War_hawk#Modern_usage "And the man to do it was Congressman John Randolph in the run-up to the War of 1812. Randolph described those clamoring for military action against Great Britain in the name of American honor and territory as “war hawks.” The term had talons and caught on." https://daily.jstor.org/the-original-hawks-doves/
So, I'm guessing sometime 1810-1812, American. (For '(war) hawk', not necessarily 'hawkish'.)
@Keepthesemind - interesting, but hawkish actually took off from the ‘60s as you can see from here: https://books.google.com/ngrams/graph?content=hawkish&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Chawkish%3B%2Cc0 and my question is about its usage and why it spread in that period.
But that seems to be merely a transition from 'war hawk' https://books.google.com/ngrams/graph?content=war+hawk&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cwar%20hawk%3B%2Cc0 So, are you asking why it became used as an adjective in the '60s?
@Keepthesemind - as I said, the meaning is intuitive, but something has generated its usage which has now spread to different fields also.
Your question above is "What originated the use of hawkish in a militaristic sense?". But now you seem to want to ask "What originated the use of hawkish in a non-militaristic (and unrelated to biological hawks) sense?"
@Keepthesemind - please see the material I provided. Both etymonline and Ngram suggest the use of hawkish from the ‘60s. Why? That is my question!!!
Because somebody made an adjective from '(war) hawk' and made up 'hawkish' and it stuck. I don't want to irritate you, but why not make up hawk-ish? I'm reasonably sure that when first used (without referring to biology) it was still used strictly for foreign policy/war, and not yet for central banking etc.
The tendency to use "-ish" to adjectivize a noun "caught on" in the past 50 years. Nothing special about "hawk", in this regard.
@HotLicks - not sure about your assumption on the usage of -ish in the past 50 years. A quick check suggests that -ish as a suffix to form adjectives is much older in usage. https://www.etymonline.com/search?q=-ish
@user159691 - I wasn't saying that the suffix did not exist before, but rather that it became "cool" to use it in cases that were not previously idiomatic.
@HotLicks - well, you should provide evidence of what you are saying, otherwise that’s just a personal impression.
I don't have the time to delve into the depths of Google Books or the Internet Archive, nor to search through the archives of American newspapers, but a cursory view suggests that the war in Vietnam provoked this surge in use:
In February 1968, after the Vietcong attack on Saigon and after the North Korean seizure of the Pueblo, Gallup reported that the hawkish rating rose from 56 to 61 percent in one month. A "dove" stance would have been particularly unpopular with the college-trained who, according to Gallup, "have been found to be more hawkish than those with less formal education." (N.Y. Times, February 17, 1968.) To campaign for President as a "dove," Kennedy would have to fly against the wind.
The American Federationist
… hearings had increased the hawkish element in our country, and the hawkish pressures, I believe he said something to that effect, I read in the paper this morning. Dr. Morgenthau. Yes ; I read this, too. The Chairman. How do you interpret that ? Dr. Morgenthau. Well, you put me on the spot. [Laughter.] Not everything that emanates from the White House is equivalent in truth to what has emanated from Mount Sinai.
U.S. Policy with Respect to Mainland China
One would expect a majority of Japanese to opt for a moderate position neither hawkish or dove-like, but again we find that most of those with any opinion on the Vietnam conflict disagree with the Sato policy. Twice as many Japanese advocated immediate United States withdrawal as wanted limited defense of South Vietnam or total victory over the north, and many 1966 subgroups were even more opposed to the Tokyo support for Washington policy.
Asian Survey - Volume 7 - Page 455
Talks with hundreds of citizens from all corners of the country, all walks of life, show clearly that hawks are becoming more hawkish, doves more dovish. The great mass of Americans in the middle-—those who are tormented by the war but honestly don't have any idea of how it can be ended with honor-— is slowly shrinking as many of those once in its ranks begin to choose sides.
Congressional Record (1967)
The adjectival form, hawkish, is derived from the expression “doves and hawks” that arose during the Cuban missile crisis in 1962.
Like his brother, the president, RFK originally favored an air strike against Cuba. But they both quickly changed their minds and came down on the side of the blockade option outlined by McNamara.
The term “doves and hawks” can be traced back to the missile crisis.
When it was all over, the leader of the hawks, Mac Bundy, had the grace to concede that the doves had got the better of the argument.
“Everybody knows who were the hawks and who were the doves,” Bundy told the ExComm on the morning of October 28, after Khrushchev announced that he was withdrawing his missiles. “Today was the day of the doves.”
Foreign Policy.com Article posted online OCTOBER 11, 2012,
List field: translated values are not translated
In my Drupal 8 project I have some list fields. For example maybe a field "project type" with the following key|values, coming from the original language German:
1 -> öffentliches Projekt
2 -> privates Projekt
Which I translated in the field's settings:
1 -> public project
2 -> private project
The field is NOT set to be translatable, because the value should not change in a translation: the project type is the same in all languages! But when I go to the node edit page, view the node, or add this field as an exposed filter to views, I always see the German values of the field, no matter what the current language of the page is. When I make the field translatable, the value of the list field can be changed in the English translation, so a project could end up "public" in German and "private" in English, which is not the desired result.
How do I get the translated values of a list field to be used in the form element and node view, when the list field itself should not be translatable?
You always enter the English strings/labels only in the field config, then display the form at least once in German, and after that go to interface translation and translate them to German. I think.
Ok, it seems that it was just a weird cache thing. I cleared the cache several times after changing these translations. Now the translated values of the select list appear without setting the field to be translatable.
Android: how to pass objects from a non-activity (view) class to an activity?
I have a class (called Doodling) that extends View, and an activity (called DrawActivity) that displays that view. I need to pass the bitmap object created by my Doodling class to the activity. Is it possible to do that? And if yes, how?
The code is as below:
public class Doodling extends View {
Bitmap bitmap;
public Doodling(Context context, AttributeSet attrs) {
...
}
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
if (bitmap != null) {
bitmap.recycle();
}
canvas= new Canvas();
bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
canvas.setBitmap(bitmap);
}
protected void onDraw(Canvas c) {
...
}
}
public class DrawActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_draw);
}
The doodling view is then placed in the layout via .xml
Could you use getDrawingCache method on your view?
I think I can, but that's not the problem. I have the bitmap object; I just don't know how to pass it.
What exactly are you trying to do with this: I need to pass the bitmap object created by my doodling class to the draw activity
I need to save the bitmap of the drawing, and load it later.
If you can guarantee that Doodling will only be used in DrawActivity, then you can obtain a reference to the Activity using getSystemService().
public class DrawActivity extends AppCompatActivity {
private static final String TAG = "DrawActivity";
public static DrawActivity get(Context context) {
// noinspection ResourceType
return (DrawActivity)context.getSystemService(TAG);
}
@Override
public Object getSystemService(String name) {
if (TAG.equals(name)) return this;
return super.getSystemService(name);
}
And
public class Doodle extends View {
....
DrawActivity drawActivity = DrawActivity.get(getContext());
Can you elaborate on your answer? What is TAG? And does using this code mean I will have access to DrawActivity methods in Doodling?
Whoops, I missed adding the TAG field.
In your DrawActivity create a new instance of your Doodling class.
Then use the following code :-
doodlingObject.setDrawingCacheEnabled(true);
doodlingObject.buildDrawingCache();
Bitmap bitmap = doodlingObject.getDrawingCache();
Hope this helps!
I will add the hard way to achieve this, but I suggest you use EventBus for a much cleaner approach, without worrying about interfaces or the activity lifecycle.
You can do this using interfaces if you have created the View object from the activity itself.
Something like:
interface BitMapListener{
void onBitMapCreated(Bitmap bMap);
}
Then implement this interface in the Activity:
public class DrawActivity extends AppCompatActivity implements BitMapListener {
Bitmap created;
@Override
public void onBitMapCreated(Bitmap map) {
created = map;
}
//when calling DrawView activity pass in this object
//DrawView(DrawActivity.this);
When calling the DrawView class from the activity, remember to pass in the activity object, like:
DrawView ...{
DrawActivity callback;
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
if (bitmap != null) {
bitmap.recycle();
}
canvas= new Canvas();
bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
canvas.setBitmap(bitmap);
callback.onBitMapCreated(bitmap);
}
IRFZ44N getting very hot in Peltier driving circuit
I have got a question about my Peltier driving circuit. First of all this is a drawing of my circuit:
I am confused because only one of my IRFZ44Ns gets very hot, even though I run the same gate signal to both of them. The Peltier module that I use has a maximum current of 10 A (TEC1-12710), so I thought the IRFZ44N should be able to handle it (the datasheet says 49 A continuous drain current).
Any idea why only one of my IRFZ44Ns gets very hot?
Are you really driving them with a 1 MHz signal?
Are both FETs IRFZ44N? They're not labelled on your schematic.
Why the two MOSFETs and inductors with one PWM signal?
I agree with the comments. I would not drive the mosfet that fast. Your switching losses are going to be very high with that amount of current. 100khz would be fine with how big of an inductor you're using. You could probably even go down to 50khz and still be fine. I would also only use 1 mosfet instead of two. One reason I can think of as to why one is getting hotter is that one of the mosfets is turning on faster than the other and one of them is seeing more switching losses or conduction losses.
Both of my MOSFETs are IRFZ44N; sorry, I made a mistake on my schematic.
Please update it and please answer our questions.
The inductors are only capable of 5 A of current; that's why I use two inductors. As for the MOSFETs, at first I wanted to use PWM signals with a 180-degree phase difference between the two, but from what I have tried it seems to be very difficult or next to impossible (for me at least) to generate them using an ATmega2560 microcontroller.
Yes, I did use a 1 MHz signal so I could reduce the ripple voltage on the Peltier as much as possible. I have tried reducing the frequency to 100 kHz and it does get better (at 1 MHz the MOSFET reaches 170 °C in a matter of ~5 seconds, but at 100 kHz it reaches about 120 °C in 20 s or so).
Should I replace my MOSFET with a faster-switching one? Any suggestions for a MOSFET that can handle at least 100 kHz without any problem in my circuit?
Document the actual gate drive waveform, and be more realistic about frequency. It's quite possible at this point you've blown the gate oxide on one FET, causing it to turn itself partially on even in the absence of drive. It's also possible you are only partially turning them on with your drive, causing them to act like resistors and heat up.
Since no MOSFET drivers are shown on the schematic, I assume the FETs are driven directly from Arduino pins. The IRFZ44 requires more than 5 V Vgs to turn on fully; with 5 V gate drive it will be somewhere in the linear region. So you must either switch to a logic-level FET driven through a proper MOSFET driver with a 5 V supply, or keep your FETs and use a proper MOSFET driver with a +12 V supply.
Likely explanation for the uneven heating:
FET RdsON has a positive tempco, so FETs in parallel normally share current well: resistance in the hottest one increases, which diverts current to the other FETs.
However, in the linear region it's the opposite: threshold voltage has a negative tempco, so it goes down as the FET gets hot.
This means your FETs run in conditions that are ideal for thermal runaway.
Additionally, the absence of gate resistors means they are likely to oscillate due to layout parasitics; and 1 MHz is way too high a frequency. You must calculate the inductor ripple current according to frequency and do the math to design a buck converter.
With a proper FET driver chip using a 12 V power supply, you should be able to drive these FETs at 100 kHz without trouble.
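As a worked example of the ripple-current math mentioned above: for an ideal buck converter, the peak-to-peak inductor ripple is ΔI = (Vin − Vout)·D / (L·f), with duty cycle D = Vout/Vin. A quick sketch (JavaScript, used only as a calculator here; all component values are illustrative assumptions, not measurements from the circuit in question):

```javascript
// Peak-to-peak inductor ripple current of an ideal buck converter:
// deltaI = (Vin - Vout) * D / (L * f), with duty cycle D = Vout / Vin.
function rippleCurrent(vinV, voutV, inductanceH, switchingHz) {
  const duty = voutV / vinV;
  return (vinV - voutV) * duty / (inductanceH * switchingHz);
}

// Illustrative numbers: 12 V in, 6 V out, 100 uH, 100 kHz.
console.log(rippleCurrent(12, 6, 100e-6, 100e3)); // ≈ 0.3 A peak-to-peak
```

Raising f from 100 kHz to 1 MHz would cut the ripple tenfold, but switching losses also scale with frequency, so a larger inductor at 100 kHz is usually the better trade.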
How about changing to the IRLZ44N? Is it possible to drive it fully on using Arduino pins, since the datasheet says it can handle 80 A of ID at about 5 V Vgs?
Yeah, it'll turn on at 5 V Vgs. It's an obsolete part from 1997 though, with high gate charge, so it'll need a strong gate driver for >100 kHz...
Ohh, I see. Sorry if this is an obvious question, but I want to ask one thing about the IRFZ44N: if I follow the chart that you gave me, it seems that at 5 V gate voltage we can use up to 20 A of ID. So why do we need a separate MOSFET driver in this case?
I mean you need a driver chip with strong output current to drive your MOSFET because your microcontroller won't have enough output current capability.
mod rewrite question mark and ampersands
The first GET argument should come after a ?. When I try this URL, I'm not able to read $_GET['type']:
http://localhost/category/general?type=pages&v=1
It only works with an &; when I use $_GET['type'], I get pages:
http://localhost/category/general&type=pages&v=1
Here is my mod_rewrite rule:
RewriteRule ^category/([A-Za-z0-9-]+)(\?type=[A-Za-z0-9-]+)?([^.]+)?/?$ /category.php?c=$1&type=$2&query=$3 [L]
How do I solve this so that the URL http://localhost/category/general?type=pages&v=1 gives me pages from $_GET['type']?
Your second URL is incorrect. & only starts working AFTER a ?. The url should be http://localhost/category/general?type=pages&v=1
That's exactly what I said: I can't read $_GET['type'] with that URL.
Not sure why you're using rewrites anyway; other than stripping off the .php portion, everything else in the URL could simply be handled by adding a QSA flag.
Could I have an example? I thought I needed rewrites to create a pretty URL like localhost/category/general/topics.
I think the issue here is that in your RewriteRule you're trying to match a part of the query string, which isn't possible.
If I understand your problem correctly, you want requests to http://example.com/category/general?type=pages&v=1 to be interpreted by the server as http://example.com/category.php?c=general&type=pages&query=1?
To match a query string, you need to prefix your rule with a condition. The following should work:
RewriteCond %{QUERY_STRING} type=([^&]+)
RewriteCond %{QUERY_STRING} v=([^&]+)
RewriteRule ^category/([^/]+) /category.php?c=$1&type=%1&query=%2
The first two lines are pre-conditions that state that the requested URL needs to have both type= and v= as part of the query string. The final line rewrites the request. This means that the URL will only be rewritten if it conforms to: category/SOMETHING?type=SOMETHING&v=SOMETHING.
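The capture groups in those two RewriteCond patterns can be sanity-checked outside Apache, since the regex syntax is essentially the same. A quick illustration in JavaScript (the query string below is the one from the question):

```javascript
const queryString = 'type=pages&v=1';

// Same patterns as the RewriteCond lines: capture everything up to the next '&'.
const type = queryString.match(/type=([^&]+)/)[1]; // "pages" -> becomes %1
const v = queryString.match(/v=([^&]+)/)[1];       // "1"     -> becomes %2

console.log(type, v);
```

If a pattern fails to match, `match` returns `null`; analogously, a RewriteCond that doesn't match simply prevents the rewrite from firing for URLs without those parameters.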
Replace your existing RewriteRule with this one:
RewriteRule ^category/([A-Za-z0-9-]+)(&.+)$ /category.php?c=$1$2 [L,NC]
This will make all of your $_GET variables available to category.php
e.g. for your URI of http://localhost/category/general&type=pages&v=1, this solution will give the following query string:
$_SERVER["QUERY_STRING"] = 'c=general&type=pages&v=1'
What is a more accurate translation of the title of Legend of the Galactic Heroes?
Several people have mentioned to me that the title of Legend of the Galactic Heroes is based on a rather "interesting" translation. What would be a more accurate translation of the original title (銀河英雄伝説 Ginga Eiyū Densetsu)? I've also heard that it is based on a German title, that was then translated to Japanese, and then to English - I don't know if that is true, though.
Ginga Eiyuu Densetsu is a series of Japanese novels written by a Japanese person, and is not, to the best of my knowledge, materially based on any previously-existing work.
The 110-episode OVA series (which is probably what most viewers are familiar with) was given the alternate title "Heldensagen vom Kosmosinsel". I don't speak German, but various discussions on the internet suggest that "Heldensagen vom Kosmosinsel" is not valid German. A look at de.wiktionary.org suggests that "Kosmosinsel" is not even a word, or perhaps is a compound noun meaning "cosmos island" (which is not idiomatic for "galaxy"; that would be "Galaxie").
Given this, it looks like the producers of the OVA figured that it would look cool to have a faux-German title (in Gothic font, too!), ran "銀河英雄伝説" through Babelfish (or maybe through some dude who'd had a year of German at university), and got out "Heldensagen vom Kosmosinsel". It's probably the case that the translation went Japanese -> German (badly) and Japanese -> English, rather than German -> Japanese -> English.
I'm not sure why you've been hearing that that "Legend of the Galactic Heroes" is an "interesting" translation of 銀河英雄伝説 (ginga eiyuu densetsu), though. 銀河 ginga means "galaxy"; 英雄 eiyuu means "hero"; and 伝説 densetsu means "legend".
It's a bit contracted to omit particles, as titles often are, but the meaning is clear - this is a densetsu about the ginga eiyuus, i.e. a legend about galaxy heroes. Hence, Legend of the Galactic Heroes.
Note: in the comments, @Krazer correctly points out that an alternate parse of ginga eiyuu densetsu is as "eiyuu densetsu about the ginga", i.e. "Heroic Legends from the Galaxy". Now that I think about it, I'm not really sure which parse is actually better, in terms of the show.
If we draw from both sources we can infer that 銀河(Milky Way/Galaxy) 英雄伝説 (heroic [legendary] tales) can also potentially mean: "Heroic Tales/Legends from (the) Milky Way/Galaxy."
@Krazer Good catch. I've incorporated that into the answer.
Patch SUPEE 11086
My Magento version is <IP_ADDRESS>
Which file should I choose between the below given options?
PATCH_SUPEE-11086_CE_<IP_ADDRESS>_v1-2019-03-26-03-03-13
PATCH_SUPEE-11086_CE_<IP_ADDRESS>_v1-2019-03-26-03-03-50
It's not clear to me..
Maybe this is not useful, because for your Magento version no patch is released; see http://prntscr.com/n4kvss. Also check your website on https://www.magereport.com/ to see which security patches are needed.
For Magento versions <IP_ADDRESS> to <IP_ADDRESS>, you need to choose PATCH_SUPEE-11086_CE_<IP_ADDRESS>_v1-2019-03-26-03-03-50.
If you compare the two patches, you will find that patch PATCH_SUPEE-11086_CE_<IP_ADDRESS>_v1-2019-03-26-03-03-13, at line number 302, is looking for the file below:
app/code/core/Mage/Adminhtml/Block/System/Design/Edit.php
In this file, it expects the code below to start at line number 71:
public function getDeleteUrl()
{
- return $this->getUrl('*/*/delete', array('_current'=>true));
+ return $this->getUrlSecure('*/*/delete', array(
+ 'id' => $this->getDesignChangeId(),
+ Mage_Core_Model_Url::FORM_KEY => $this->getFormKey()
+ ));
}
But from Magento versions <IP_ADDRESS> to <IP_ADDRESS>, the above code starts at line number 75, which is what PATCH_SUPEE-11086_CE_<IP_ADDRESS>_v1-2019-03-26-03-03-50 expects at its line number 302.
So if you apply the patch below, it will not work, because it will not find the above code at line 71:
PATCH_SUPEE-11086_CE_<IP_ADDRESS>_v1-2019-03-26-03-03-13
You need to apply the patch file below, because it expects the above code to start at line 75, and from Magento version <IP_ADDRESS> it is present at line 75:
PATCH_SUPEE-11086_CE_<IP_ADDRESS>_v1-2019-03-26-03-03-50
There could be other differences present as well; I have only compared this one for now, and you can check further at your end. You can refer to the files below to verify my answer:
magento version <IP_ADDRESS>
magento version <IP_ADDRESS>
magento version <IP_ADDRESS>
Choose PATCH_SUPEE-11086_CE_<IP_ADDRESS>_v1-2019-03-26-03-03-50
explain why please
This patch is applicable for Magento versions <IP_ADDRESS> to <IP_ADDRESS>
Removing a key with empty values from a map so they are not included in a query of a REST GET request
I have a Query class which holds the queries i can send like so:
class Query {
Integer product_id
Integer collection_id
Integer id
}
I use object mapper to convert my Query object to map like this:
def q = new Query(product_id: 12345)
Map <String, Object> toMap = new ObjectMapper().convertValue( q, Map )
I then pass this on to my RESTClient so it is included in the request:
def client = new RESTClient ('http://somewebsite.com')
client.get(path: 'somePath/anotherPath.json',
contentType: ContentType.JSON,
query: q)
After I send the request, the empty keys in the query map are also sent, which causes problems in the response:
GET somePath/anotherPath.json?product_id=12345&collection_id=&id=
As the title says: is there a way to remove keys with empty values from the map so they are not included when I send a REST GET request? I want it to be like this:
GET somePath/anotherPath.json?product_id=12345
where the keys with empty values (collection_id and id) are not sent in the request.
You can use the annotation @JsonInclude(Include.NON_NULL)
class Query {
@JsonInclude(Include.NON_NULL)
Integer product_id
...
}
See documentation here
Thank you! I was having trouble with annotations because I'm just starting out. It is now working how I want it to. Thanks a lot!
Doesn't ObjectMapper have a parameter that would disable exporting nulls?
If not you may do:
Map <String, Object> toMap = new ObjectMapper().convertValue(q, Map).findAll { it.value != null }
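The same null-stripping idea, shown end to end: filter the map before it is serialized, so only non-empty keys end up in the query string. A sketch in JavaScript (the parameter names are the ones from the question; the serialization is a hand-rolled illustration, not RESTClient's own):

```javascript
const params = { product_id: 12345, collection_id: null, id: null };

// Drop entries whose value is null/undefined, then serialize the rest.
const query = Object.entries(params)
  .filter(([, value]) => value !== null && value !== undefined)
  .map(([key, value]) => `${key}=${encodeURIComponent(value)}`)
  .join('&');

console.log(query); // → "product_id=12345"
```

Whether the filtering happens before or after the object-to-map conversion doesn't matter, as long as it happens before the map reaches the client call.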
Assign what res.json return to variable in Node.js
I'm building a social network, and for that I use Node.js. I'm new to the subject and this is my first post about it; I'll be happy if you understand me.
In my social network I want to use the k-means algorithm, which is available on npm.
I'm trying to store what the function returns in a variable and then continue doing calculations with it. I think the problem is very minor, but I've been sitting on it for a few hours and can't figure it out.
I'm adding the code of what I've done so far:
//kmeans.js file
const kmeans = require('kmeans-engine');
exports.addUserKmeansMatch = (req, res) => {
const engineers = [
// frontend engineers
{ html: 5, angular: 5, react: 3, css: 3 },
{ html: 4, react: 5, css: 4 },
{ html: 4, react: 5, vue: 4, css: 5 },
{ html: 3, angular: 3, react: 4, vue: 2, css: 3 },
// backend engineers
{ nodejs: 5, python: 3, mongo: 5, mysql: 4, redis: 3 },
{ java: 5, php: 4, ruby: 5, mongo: 3, mysql: 5 },
{ python: 5, php: 4, ruby: 3, mongo: 5, mysql: 4, oracle: 4 },
{ java: 5, csharp: 3, oracle: 5, mysql: 5, mongo: 4 },
// mobile engineers
{ objc: 3, swift: 5, xcode: 5, crashlytics: 3, firebase: 5, reactnative: 4 },
{ java: 4, swift: 5, androidstudio: 4 },
{ objc: 5, java: 4, swift: 3, androidstudio: 4, xcode: 4, firebase: 4 },
{ objc: 3, java: 5, swift: 3, xcode: 4, apteligent: 4 },
// devops
{ docker: 5, kubernetes: 4, aws: 4, ansible: 3, linux: 4 },
{ docker: 4, marathon: 4, aws: 4, jenkins: 5 },
{ docker: 3, marathon: 4, heroku: 4, bamboo: 4, jenkins: 4, nagios: 3 },
{ marathon: 4, heroku: 4, bamboo: 4, jenkins: 4, linux: 3, puppet: 4, nagios: 5 }
];
kmeans.clusterize(engineers, { k: 4, maxIterations: 5, debug: true }, (err, result) => {
res.json(result.clusters)
.then((data) => {
let resultCluster = data; //<--- I want to perform a calculation on the object and then return it.
res.json(resultCluster)
})
.catch((err) => {
console.error(err);
return res.status(500).json({ error: err.code });
});
})
};
//index.js file
const {
addUserKmeansMatch
} = require('./kmeans.js');
app.get('/kmeans', addUserKmeansMatch);
exports.api = functions.region('europe-west1').https.onRequest(app);
The problem is: I want to put the information that the kmeans.clusterize function returns into the resultCluster variable, but I cannot get the information into resultCluster.
Then I want to perform calculations with resultCluster and return the result of those calculations.
What is res (i.e. what server framework do you use that calls addUserKmeansMatch)? This looks like a http response being written on the server. If so, the .json() method does not return a promise. It's really unclear why you are doing res.json(result.clusters) and then res.json(resultCluster).
This is actually how you call clusterize in kmeans.
kmeans.clusterize(engineers, { k: 4, maxIterations: 5, debug: true }, (err, result) => { if (err) console.error(err); else { // Do operations on data } }
@ParikshithKedilayaM No, don't throw err; in an asynchronous callback.
@Bergi I usually do it that way. It's always better to log onto the console. Thanks.
@Bergi Thank you for the help. Can you tell me what is not clear, I will explain again.
@Haham Ooops, I was missing the // index.js file at the bottom of the code. I assume that app is an express server?
@Bergi Yes. I want to put in the variable what the "kmeans.clusterize" function returns, to do calculations according to the variable. And then return the result of the calculation. I have trouble putting in the variable what a function returns
You're calling res.json() twice, and you're using promise syntax on it for no reason (res.json() does not return a promise). It should simply be:
kmeans.clusterize(engineers, { k: 4, maxIterations: 5, debug: true }, (err, result) => {
if (err) {
console.error(err);
res.status(500).json({ error: err.code });
} else {
const data = result.clusters;
… // perform a calculation on the object
res.json(data);
}
})
GraphQL resolver logic - Call query/mutation from mutation?
I am creating a workout/exercise logger where users can add a log of a set of an exercise to their account. Users will also be able to see their history of workouts and exercises (with set data). I am using MongoDB to store this data, with GraphQL and Mongoose to query and mutate the data. I have separated workout and exercise into their own types, as a workout object will only hold exercises and the sets that were recorded in the last 4 hours (the duration of a workout), while the exercise object will hold all sets ever logged by the user.
Type Definitions
type Workout {
id: ID!
workoutName: String!
username: String!
createdAt: String
exercises: [Exercise]!
notes: String
}
type Exercise {
id: ID!
exerciseName: String!
username: String!
sets: [Set]!
}
type Set {
id: ID!
reps: Int!
weight: Float!
createdAt: String!
notes: String
}
My problem lies in my resolver code for adding a set (mutation). This resolver should:
Query the database to see whether the exercise has ever been done by the user, by matching the exercise name in the database. If there is a match, add the set data the user entered to it; otherwise, create a new exercise entry first and then add the set to it.
Query the database to see whether there is a workout that was done in the last 4 hours. If there isn't, create a new workout entry in the database. If there is, check for a matching exercise name on the workout object to add the set data to, or create a new exercise entry for it.
I realise that this mutation will be fairly large and will combine both querying and mutating data. So I'm wondering: can I call separate queries/mutations from my addSet resolver, similar to a function call? Or is there another way I should go about this?
addSet Resolver
async addSet(_, { exerciseName, reps, weight, notes }, context) {
const user = checkAuth(context); // Authenticates and gets logged in user's details
if (exerciseName.trim() === '') {
throw new UserInputError('Empty Field', {
// Attached payload of errors - can be used on client side
errors: {
body: 'Choose an exercise'
}
})
} else {
exerciseName = exerciseName.toLowerCase();
console.log(exerciseName);
}
if ((isNaN(reps) || reps === null) || (isNaN(weight) || weight === null)) {
throw new UserInputError('Empty Fields', {
// Attached payload of errors - can be used on client side
errors: {
reps: 'Enter the number of reps you did for this set',
weight: 'Enter the amount of weight you did for this set'
}
})
}
// TODO: Check to see if the exercise has been done before by the user. If it has, then update the entry by adding the set data to it. If not create a new entry for the
// exercise and then add the data to it - Completed and working.
const exerciseExists = await Exercise.findOne({ exerciseName: exerciseName, username: user.username });
if (exerciseExists) {
console.log("This exercise exists");
exerciseExists.sets.unshift({
reps,
weight,
username: user.username,
createdAt: Date.now(),
notes
})
await exerciseExists.save();
//return exerciseExists;
} else {
console.log("I don't exist");
const newExercise = new Exercise({
exerciseName,
user: user.id,
username: user.username,
});
const exercise = await newExercise.save();
console.log("new exercise entry");
exercise.sets.unshift({
reps,
weight,
username: user.username,
createdAt: Date.now(),
notes
})
await exercise.save();
//return exercise;
}
// TODO: Get the most recent workout from the user and check if the time it was done was from the last 4 hours. If it wasn't, create a new workout entry for the user.
// If it was within the last 4 hours, check to see if the workout has an exercise that matches one the user inputted. If there isn't an exercise, create a new entry
// and add the set data to it, otherwise update the existing entry for the exercise.
const workoutExists = await Workout.findOne({ username: user.username }).sort({ createdAt: -1 }); // Gets the user's most recent workout
const now = Date.now();
if (now > workoutExists.createdAt + 14400000) { // True if the most recent workout started more than 4 hours ago
console.log("workout was from another session");
// rest of code not implemented yet
} else {
console.log("workout is still in progress");
// rest of code not implemented yet
}
},
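On the question of calling other queries/mutations from addSet: resolvers are plain functions, so instead of invoking one resolver from another, the usual approach is to pull the shared lookup logic into helper functions that any resolver can await. A minimal, self-contained sketch (the in-memory store and every name in it are illustrative stand-ins, not Mongoose or GraphQL APIs):

```javascript
// Illustrative in-memory "store" so the sketch runs on its own; a real
// resolver would call Mongoose models (e.g. Exercise.findOne) instead.
function makeStore() {
  const exercises = [];
  return {
    findExercise: async (username, exerciseName) =>
      exercises.find(e => e.username === username && e.exerciseName === exerciseName) || null,
    createExercise: async (username, exerciseName) => {
      const exercise = { username, exerciseName, sets: [] };
      exercises.push(exercise);
      return exercise;
    },
  };
}

// Shared helper: find the user's exercise or create it. addSet and any
// other resolver can call this directly, like an ordinary function.
async function findOrCreateExercise(store, username, exerciseName) {
  let exercise = await store.findExercise(username, exerciseName);
  if (!exercise) {
    exercise = await store.createExercise(username, exerciseName);
  }
  return exercise;
}

(async () => {
  const store = makeStore();
  const first = await findOrCreateExercise(store, 'alice', 'squat');
  first.sets.unshift({ reps: 5, weight: 100, createdAt: Date.now() });
  const again = await findOrCreateExercise(store, 'alice', 'squat');
  console.log(again.sets.length); // → 1: the same exercise entry was reused
})();
```

This keeps addSet itself small: validate input, then await findOrCreateExercise (and a similar, hypothetical findOrCreateWorkout) instead of duplicating the query logic inline.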
"Engineer at the Quality Assurance Department" vs. "Engineer of the Quality Assurance Department"
Which preposition do we use? Which one is better here?
John Smith, Process Engineer at the Quality Assurance Department.
John Smith, Process Engineer of the Quality Assurance Department.
John Smith, Process Engineer in the Quality Assurance Department.
This is from a table in an MS Word document. The column lists the names of employees who took part in performing a particular task. John Smith works in the QA Dept.; another employee works for the same company, but in the Production Department; and so on.
I know that if it were the name of a company, not division, then we would use at. But maybe of is allowed (or even preferred) when we talk of a division or department.
@JavaLatte - ah! It never occurred to me. I'll add it now
The normal choice is "in the department".
But if there is only one person in the role, then it could be of.
...an engineer in the department
...Lead Engineer of|in|for the department
but "head of the department". We don't ever say "head in the department", but we could say "head of communications in the department".
So, if we were to encounter the phrase "Process Engineer of the department", we might reasonably assume that Process Engineer is a role occupied by a single person, comparable to Lead Engineer. That would be the implication of of. For example, there might be a law governing pharmaceutical manufacturing facilities, say, which mandates that the QA Department designate a Process Engineer.
For-loop return array value
I'm trying to insert URLs for a menu through a Mustache template, but only the first value of the array is being returned.
Or is the return method wrong?
var main_menu_link = ["main_dashboard.html", "#", "online_dashboard.html","index.html","#","#","#"];
var url = "";
var i;
var url_link="";
for(i = 0; i < main_menu_link.length; i++) {
url += main_menu_link[i];
return '<a href="'+ url +'">' + text + '</a>';
}
CodePen working here
The return statement has to be after the loop:
var main_menu_link = ["main_dashboard.html", "#", "online_dashboard.html","index.html","#","#","#"];
var url = "";
var i;
var url_link="";
for(i = 0; i < main_menu_link.length; i++) {
url += '<a href="'+ main_menu_link[i] +'">' + text + '</a>';
}
return url;
This will give the result below:
Dashboard
Edited to create separate links for each element in the array; is this what you were after?
Now the li text is being displayed 5 times :) . Should I put that in a loop as well?
If you're creating a new list element for each item in the array then yes
Yeah, that may be the case. To answer your question, though: the reason it was only outputting the first element of the array is that a return statement exits the function immediately. You need to build up a variable first and then return it after the loop has finished. Hope you get your project sorted.
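A minimal sketch of that behaviour, with made-up function names:

```javascript
// A return inside the loop exits the function on the first iteration,
// so only items[0] is ever processed.
function firstOnly(items) {
  for (let i = 0; i < items.length; i++) {
    return items[i]; // exits here immediately
  }
}

// Accumulate inside the loop, return once after it finishes.
function buildAll(items) {
  let out = '';
  for (let i = 0; i < items.length; i++) {
    out += items[i];
  }
  return out;
}

console.log(firstOnly(['a', 'b', 'c'])); // "a"
console.log(buildAll(['a', 'b', 'c']));  // "abc"
```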
I got the menu built from the template; the formatting of the template itself was wrong, so the for-loop error was an additional error on top of an already wrong method :) Thank you for correcting the loop issue.
Correct template below, in case it can be of use to someone else:
var link_details = { "link_details" :[
{ main_menu: "Dashboard", main_menu_link: "dashboard.html" },
{ main_menu: "Analytics", main_menu_link: "#" },
{ main_menu: "System", main_menu_link: "system.html" }
]};
var template = "<ul>{{#link_details}}<li><a href =\" {{main_menu_link}}\">{{main_menu}}</a></li>{{/link_details}}</ul>";
var html = Mustache.to_html(template, link_details);
document.write(html)
colocate_gradients_with_ops argument in TensorFlow?
I'm trying to understand what this argument does, the TF api docs for AdamOptimizer's compute_gradients method say the following -
colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
but it is not clear to me. What does colocating gradients mean in this context and what is the said op?
colocate_gradients_with_ops=False (default): compute gradients on a single GPU (most likely /gpu:0)
colocate_gradients_with_ops=True: compute gradients on multiple GPUs in parallel.
In more detail:
colocate_gradients_with_ops=False (default)
The forward pass (predictions) is computed in parallel on each GPU, and the computed predictions are then moved to the parameter server (cpu:0 or gpu:0). With those predictions, the parameter server (a single device) computes the gradients for the whole batch (gpu_count * batch_size).
colocate_gradients_with_ops=True
Place the computation of gradients on the same device as the original (forward-pass) op.
Namely, compute gradients together with predictions on each particular GPU and then move the gradients to the parameter server to average them to get the overall gradients.
Reference 1 @mrry
Reference 2 @bzamecnik
Existence of solutions for $x'=f(t,x)$ for $f$ not necessarily $C^1$, but with other conditions
Let $f:\mathbb{R}\times\mathbb{R}^n \to \mathbb{R}^n$ be a continuous, locally Lipschitz function that satisfies the following condition:
$$|f(t,x)|\leq C(1+|x|)\; , \forall (t,x)\in \mathbb{R}\times\mathbb{R}^n$$
Show that the IVP $x' = f(t,x),\; x(t_0)=x_0$ admits a unique solution.
We can get existence of solutions from Peano's existence theorem. It's also easy to see that $C(1+|x|)$ is globally Lipschitz. Though, I don't know how to connect the dots. It feels like we would need $f$ to be globally Lipschitz to be able to say anything about uniqueness of solutions...
The case where $f$ doesn't depend on $t$ is proved as Proposition IV.3 in Zehnder's book Lectures on Dynamical Systems, using Grönwall's lemma. Probably something similar works in the general case.
With the equation locally Lipschitz you also get that the solutions are locally unique. As any branching of IVP solutions is also a local event, local uniqueness is sufficient for global uniqueness, that is, for as long as the solution is defined.
What is usually associated with a linear bound of $f$ of the given form is that the domain of any solution is $\Bbb R$. But your task description does not even mention that?
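A sketch of the Grönwall estimate behind that remark, for $t \ge t_0$ (the case $t < t_0$ is symmetric):

```latex
% Integrating the equation and using the linear bound |f(t,x)| <= C(1+|x|):
|x(t)| \le |x_0| + \int_{t_0}^{t} C\bigl(1 + |x(s)|\bigr)\,ds .
% Applying Gr\"onwall's lemma to u(t) = 1 + |x(t)| gives
1 + |x(t)| \le \bigl(1 + |x_0|\bigr)\, e^{C (t - t_0)} ,
% so |x(t)| stays bounded on every bounded interval, no finite-time
% blow-up can occur, and the maximal solution is defined on all of \mathbb{R}.
```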
Your guess is correct; that's exactly what I'm supposed to prove. It seems I forgot to write the most important part of the problem.
Get remote page title from url in foreach
How can I return the titles of sites in a foreach (CodeIgniter)?
The code works, but takes too long to load.
<?php
foreach ($links->result() as $value) :
$url = $value->lnkUrl;
$domain = parse_url($url, PHP_URL_HOST);
$web_link = "http://".$domain;
$str = file_get_contents($web_link);
if(strlen($str) > 0) {
preg_match("/\<title\>(.*)\<\/title\>/",$str,$title);
echo "<span class='directi_web'>". $title[1] ."</span>";
} else {
echo "<span class='directi_web'></span>";
}
endforeach;
?>
This code probably takes too long because the remote sites take a long time to respond or send a large amount of data. You should primarily avoid running such updates too often.
can you show me how to do this?
There are lots of tutorials on using a database like mysql with php. This platform is for specific questions/problems.
Add HTML5 required attribute to webform fields set as required
Could someone advise me how to use a hook to add the HTML 5 required attribute to all inputs that have been selected as 'required' within webforms.
Webforms already adds class 'required' but the HTML 5 required attribute provides great instant validation in Opera, Firefox and Chrome browsers.
Do you need a javascript solution or PHP or both ? You can do this by setting form attributes in a hook_form_alter()
PHP, could you produce an example please?
See the answer below.
You will need to create a module for this.
Note that this answer assumes you want to set the required attribute on all form elements that are marked as required in the webform settings;
you will need to implement a different hook and/or add conditions to make this stricter.
Create a file named webform_formalters.info and put the following in it.
name = webform form alters
description = a collection of custom form alters for webforms.
core = 7.x
package = Custom
dependencies[] = webform
Create another file named 'webform_formalters.module' and put the following in it.
<?php
/**
 * Implements hook_form_BASE_FORM_ID_alter().
 */
function webform_formalters_form_webform_client_form_alter(&$form){
//dsm($form);
foreach ($form['submitted'] as &$field){
  if (is_array($field) && isset($field['#required']) && $field['#required'] == 1){
    //dsm($field);
    $field['#attributes']['required'] = 'required';
  }
}
}
Put both files in a folder named webform_formalters, upload it to your site's modules folder, and enable the module. See if it works.
Unlike other forms, webform fields are stored in the $form['submitted'] array. So we go through each item (which should be an array of form element definitions) and check whether the element has been marked as required.
If so, we add or merge the required attribute. We are altering an existing form, so we do not wipe out existing attributes.
Good luck!
Thanks for your time with this. Out of interest why can't you perform this function in template.php?
Hey @ayesh-k thanks for your help! I added this into template.php. The functionality works! But also produces this error message Notice: Undefined index: #required in innovista_form_webform_client_form_alter() (line 188 of /home/sites/innovista.org/public_html/beta/sites/all/themes/innovista/template.php). But only once per form.
I fixed this by wrapping your if statement within if(isset($field['#required'])){. Is this acceptable?
Yes, it will work. But it's better if you put the isset() check right into the existing if statement: if (is_array($field) && isset($field['#required']) && $field['#re ...
One column value missing and no values are shown for entire row
Below are the expected results. Below that is the code I'm working from, but if no procurement has been done it returns a blank row.
SELECT PRJ$Projects.project_id
,VW_QUOTE_SECTION_TOTALS.Quoted_section_total "Amount"
,VW_SECTION_TOTALS.Costed_section_total "Quoted Budget"
,(VW_QUOTE_SECTION_TOTALS.Quoted_section_total - VW_SECTION_TOTALS.Costed_section_total) "Gross Profit"
,ROUND(((VW_QUOTE_SECTION_TOTALS.Quoted_section_total - VW_SECTION_TOTALS.Costed_section_total) / VW_QUOTE_SECTION_TOTALS.Quoted_section_total) * 100) "GP %"
,ROUND(VW_PROC_SUM.total, 2) "Procured"
,(VW_SECTION_TOTALS.Costed_section_total - VW_PROC_SUM.total)
FROM PRJ$Projects
INNER JOIN prj$quotes ON prj$quotes.PROJECT_ID = PRJ$Projects.PROJECT_ID
AND PRJ$Projects.QUOTE_TOTAL IS NOT NULL
INNER JOIN VW_PROC_SUM ON VW_PROC_SUM.PROJECT_ID = PRJ$Projects.PROJECT_ID
INNER JOIN prj$quote_sections ON prj$quote_sections.quote_id = prj$quotes.quote_id
INNER JOIN VW_SECTION_TOTALS ON VW_SECTION_TOTALS.QUOTE_SECTION_ID = prj$quote_sections.QUOTE_SECTION_ID
INNER JOIN VW_QUOTE_SECTION_TOTALS ON VW_QUOTE_SECTION_TOTALS.QUOTE_SECTION_ID = prj$quote_sections.QUOTE_SECTION_ID
AND prj$quote_sections.quote_section_id = :P13_SECT_ID_H
The miracles of using LEFT JOIN instead of INNER JOIN.
Just split your query into small pieces and try to find the error. We can't test your query without data or a DB schema; please read How-to-Ask.
And here is a great place to START learning how to improve your question quality and get better answers.
Try creating a sample in http://sqlfiddle.com
Great, thanks, it shows procured, but this part (VW_SECTION_TOTALS.Costed_section_total - VW_PROC_SUM.total) "Available Budget" also says null.
Add NVL: (VW_SECTION_TOTALS.Costed_section_total - nvl(VW_PROC_SUM.total,0)), because 10 - null = null.
Hello Thanks I had just done that
djoser activate account by link
How do I activate the account after clicking on the link sent by djoser?
my settings
'''
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'djoser',
'rest_framework',
'rest_framework_simplejwt',
'data',
]
REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES':(
'rest_framework_simplejwt.authentication.JWTAuthentication',
'rest_framework.authentication.SessionAuthentication',
),
}
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_USE_TLS = True
EMAIL_HOST_USER = <EMAIL_ADDRESS>
EMAIL_HOST_PASSWORD = 'naz@technomancer7629'
EMAIL_PORT = 587
PROTOCOL = "http"
DOMAIN = "<IP_ADDRESS>:8000"
DJOSER = {
'PASSWORD_RESET_CONFIRM_URL': '/password/reset/confirm/{uid}/{token}',
'USERNAME_RESET_CONFIRM_URL': '/username/reset/confirm/{uid}/{token}',
'ACTIVATION_URL': 'auth/user/activate/{uid}/{token}',
'SEND_ACTIVATION_EMAIL': True,
'SEND_CONFIRMATION_EMAIL': True,
'SERIALIZERS': {},
'EMAIL':{
'activation': 'djoser.email.ActivationEmail',
},
}
'''
urls.py
'''
urlpatterns = [
path('admin/', admin.site.urls),
path('auth/',include('djoser.urls')),
path('auth/',include('djoser.urls.jwt')),
path("api/data/",include("data.urls")),
]
'''
my email link
http://<IP_ADDRESS>:8000/auth/users/activate/Mjc/5bx-5f9542251fd9db7e980b
error:
Using the URLconf defined in startgo1.urls, Django tried these URL patterns, in this order:
admin/
auth/ ^users/$ [name='user-list']
auth/ ^users\.(?P<format>[a-z0-9]+)/?$ [name='user-list']
auth/ ^users/activation/$ [name='user-activation']
auth/ ^users/activation\.(?P<format>[a-z0-9]+)/?$ [name='user-activation']
auth/ ^users/me/$ [name='user-me']
auth/ ^users/me\.(?P<format>[a-z0-9]+)/?$ [name='user-me']
auth/ ^users/resend_activation/$ [name='user-resend-activation']
auth/ ^users/resend_activation\.(?P<format>[a-z0-9]+)/?$ [name='user-resend-activation']
auth/ ^users/reset_password/$ [name='user-reset-password']
auth/ ^users/reset_password\.(?P<format>[a-z0-9]+)/?$ [name='user-reset-password']
auth/ ^users/reset_password_confirm/$ [name='user-reset-password-confirm']
auth/ ^users/reset_password_confirm\.(?P<format>[a-z0-9]+)/?$ [name='user-reset-password-confirm']
auth/ ^users/reset_username/$ [name='user-reset-username']
auth/ ^users/reset_username\.(?P<format>[a-z0-9]+)/?$ [name='user-reset-username']
auth/ ^users/reset_username_confirm/$ [name='user-reset-username-confirm']
auth/ ^users/reset_username_confirm\.(?P<format>[a-z0-9]+)/?$ [name='user-reset-username-confirm']
auth/ ^users/set_password/$ [name='user-set-password']
auth/ ^users/set_password\.(?P<format>[a-z0-9]+)/?$ [name='user-set-password']
auth/ ^users/set_username/$ [name='user-set-username']
auth/ ^users/set_username\.(?P<format>[a-z0-9]+)/?$ [name='user-set-username']
auth/ ^users/(?P<pk>[^/.]+)/$ [name='user-detail']
auth/ ^users/(?P<pk>[^/.]+)\.(?P<format>[a-z0-9]+)/?$ [name='user-detail']
auth/ ^$ [name='api-root']
auth/ ^\.(?P<format>[a-z0-9]+)/?$ [name='api-root']
auth/ ^jwt/create/? [name='jwt-create']
auth/ ^jwt/refresh/? [name='jwt-refresh']
auth/ ^jwt/verify/? [name='jwt-verify']
api/data/
The current path, auth/users/activate/Mjc/5bx-5f9542251fd9db7e980b, didn't match any of these.
Are you sure you're supposed to add a URL with users/activation/ and not users/activate/ to the email link?
I don't know. Check my activation URL. Is it wrong?
I did not add those URLs; djoser adds them.
I read through the djoser docs: your activation URL is correct, but you are using it the wrong way. That URL is supposed to receive a POST request (clicking the link in the email issues a GET request), so I suggest creating a URL endpoint in Django that handles the GET request and forwards it as a POST, following this issue.
In your urls.py:
path('activate/<str:uid>/<str:token>/', UserActivationView.as_view()),
And you views.py will handle it and call POST request on URL:
import requests
from rest_framework.response import Response
from rest_framework.views import APIView

class UserActivationView(APIView):
    def get(self, request, uid, token):
        protocol = 'https://' if request.is_secure() else 'http://'
        web_url = protocol + request.get_host()
        post_url = web_url + "/auth/users/activate/"
        post_data = {'uid': uid, 'token': token}
        result = requests.post(post_url, data=post_data)
        content = result.text  # requests' .text is a property, not a method
        return Response(content)
Yeah, I tried it before, but it says there is no reverse function for user-activate.
Well, if you scroll down you can see this; you need to have the djoser URLs set.
Where can I get the URL set?
In that page, the last comment is mine.
@safakat001 maybe because he wants to specify the function name, but you can do a normal POST request with it to check, like this
Thanks for your help. I just solved it with this code.
Then this question will be a duplicate of the other question
With djoser try this:
In auth.urls :
path('activate/<str:uid>/<str:token>/',
ActivateUserEmail.as_view(),
name='activate email')
class ActivateUserEmail(APIView):
    def get(self, request, uid, token):
        protocol = 'https://' if request.is_secure() else 'http://'
        web_url = protocol + request.get_host()
        post_url = web_url + "/auth/users/activation/"
        post_data = {'uid': uid, 'token': token}
        result = requests.post(post_url, data=post_data)
        message = result.text
        return Response(message)
Thanks for your help. I just solved it with this code.
from django.contrib.auth import login
from django.contrib.auth.tokens import default_token_generator
from django.utils.encoding import force_text
from django.utils.http import urlsafe_base64_decode

from .models import User  # custom user model with an is_email_verified field

def ActivateUserAccount(request, uidb64=None, token=None):
    try:
        uid = force_text(urlsafe_base64_decode(uidb64))
        user = User.objects.get(pk=uid)
    except User.DoesNotExist:
        user = None
    if user and default_token_generator.check_token(user, token):
        user.is_email_verified = True
        user.is_active = True
        user.save()
        login(request, user)
        print("Activation done")
    else:
        print("Activation failed")
Is this a view? How does this work? I am facing the same problem where they tell me my "uid" is invalid
When to use Angular 2 factory functions?
I can't imagine a situation where I need to use a factory provider.
According to the official docs https://angular.io/docs/ts/latest/guide/dependency-injection.html, the situation is that one may not be able to access a service (service-b) from within another service (service-a), but the factory function does have access to service-b. So when would something like this really happen?
Where can I find the text you mention in the linked doc? "one may not be able to access a service (service-b) from within another service (service-a), but, the factory function does"
That was my interpretation of the docs, look for it in https://angular.io/docs/ts/latest/guide/dependency-injection.html#!#injector-providers under the section "Factory providers" just before the text "Why? We don't know either. Stuff like this happens."
You can register for a provider by just passing the class
providers: [MyService]
This only works if Angular's DI can instantiate MyService.
If you have for example
@Injectable()
class MyService {
  constructor(private http: Http, private configVal: string) {}
}
then DI is not able to create an instance, because string is not a valid key for a provider (primitive types don't work as provider keys).
If you need this you can use a factory function like
providers: [
{
provide: MyService,
useFactory: (http) => {
return new MyService(http, 'http://mydbserver.com:12345');
},
deps: [Http]
}
]
This way you fully control how a new instance is created, and Angular's DI only needs to know that it has to call the factory function with an instance of Http.
I get that, but one can easily create a service, MyConfigService, which gives you those config values, and inject it into MyService. So I still wonder if there is actually another reason for the Angular framework to allow us to define providers this way. Anyway, thanks for your answer!
For example if you want to inject classes that don't have the Injectable() decorator and you can't add it because you don't own the source. I'm sure there are several others.
Should the requirement for 100 questions in a tag for tag badges be removed?
Inspired by a poor soul who wanted a my-little-pony badge.
waffles describes this here as an anti-gaming feature. However, I'm not sure if that's necessarily true. If anything, it seems to me that this would simply encourage more abuse, by requiring (ab)users to add the tag for which they wish to receive a badge to a larger number of questions. On a larger site like Stack Overflow, this could probably be pretty easily concealed among all the other editing/posting traffic, especially if done over a few days by someone with full editing privileges. On a smaller site, it'd likely be noticeable either way (and there'd be less cleanup work).
On the negative side, it prevents recognition of subject-matter experts in niche subjects, whether it's My Little Pony on SFF or a specialized library on Stack Overflow. It also removes an incentive (other than rep, of course) for people to strive to answer more questions in such niche tags.
Should this requirement be removed?
Heh, I remember trying to hunt a [winter-bash-20something] tag badge on Meta Stack Overflow, only to realize it'd never reach the 100 questions required.
"... provided you haven't edited the tags or previously voted to close or reopen, respectively.".
Unlike normal badges, tag badges are revoked if one ceases to meet their criteria, so even if one does add the tag to different questions to game the badge, once those are reverted by the community, their ill-gotten badge will be revoked.
I continue to agree with waffles. It's vastly easier to cover up 20 retags on SO than it would be to cover up 100 of them. If a tag is only ever used for 20 questions, it's unlikely that the tag has enough currency or utility on the site to justify awarding badges for it.
@CodyGray basically means that it is close to impossible getting a tag badge for any game that isn't Minecraft on Arqade, every fiction that isn't Harry Potter/Star Wars/LotR on SciFi and so on...
@SPArcheon nope
@SPArcheon I don't know, I would leave it to users of smaller sites to make the decision about what they want, but this is a major issue affecting Stack Overflow, and I can speak with authority on what would be the consequences of it on that site.
@CodyGray True, and that is an historical problem StackExchange has as a whole. Most settings in the network are tuned around StackOverflow volumes, but those often don't make sense for smaller sites.
btw, about the abuse part, I think that Ryan has a point, since, as he probably saw firsthand, the suggestion I got was basically to post the missing questions myself.
Devise + LocomotiveCMS - Remember the page I was trying to visit before being asked to log in
I'm trying to add a simple feature to locomotivecms (github). A very simple feature: currently users are redirected to the "main admin hub" (/admin/) after logging in - even if they were trying to edit a different page. I want them to be directed to that page after logging in instead.
It seems a very simple and reasonable thing to add, but after two days of trying I've decided to ask for help.
This is what I've found out so far.
This app doesn't use ActiveRecord, but Mongoid (the backend db is MongoDB)
Locomotive doesn't use a simple "User" schema. Authentication is divided into two parts: "accounts" (email, password, name, etc) and "site" (it's a multi-site cms). A "site" has many "memberhips". A membership has one site_id and account_id (and also, a role, but that's not important for this, I think).
Most of the "action" in locomotive is behind the /admin/ route. For example, the login path is /admin/log_in . Most controllers are inside an /admin/ subfolder, too.
I've found this bit in the /admin/sessions_controller that apparently "fixes" the url to be visited after being logged in to the /admin/ root.
Here's the relevant bit:
def after_sign_in_path_for(resource)
admin_pages_url
end
I'm nearly sure that what I need is this instead:
def after_sign_in_path_for(resource)
stored_location_for(resource) || admin_pages_url
end
If I have understood Devise's documentation correctly, stored_location_for searches for a value in the session (session['admin_return_to'] in Locomotive's case) to "see if someone wants to go back somewhere". If it's empty, the || ensures a safe path to the admin root.
Unfortunately this doesn't work. It seems that the session variable I need is never set up. I was assuming that Devise did this kind of automatically.
Must I set the session value myself? If so, where? Or are my assumptions wrong?
Thanks a lot!
Best solution is updating to the latest version of LocomotiveCMS.
site level folder read/write/delete permissions from web page
I am hoping that someone could offer some assistance in helping implement some functionality on a production server.
I have a link on a VB.net aspx page that needs to open a folder in a subdirectory of a web site and provide read and write permissions so that files can be copied into and read from as well as deleted. ONLY from this folder.
I have this functionality working on a dev and demo PC but they are both on my domain. Clicking the link opens Windows Explorer and I can copy /cut & Paste, etc.
The public/production PC is on the LAN, but not part of the domain. I know there are ways to allow folder read/write permissions on a public server like this but I am not too sure on the safest way to do this. The upside, the only users that need to have read/write/delete access are employees. Forms Authentication, ASP Membership directs non-employees to other pages within the site. Likewise, the membership directs employees to an admin section of the site.
This is Server 03, Web Edition, so no AD, using IIS 6, and SSEE.
Any sample code, links to articles, MSDN etc would all be appreciated.
Thanks,
The simplest solution is to provide read/write access to the directory from IIS. Another approach is to tap into the kernel's logon methods and impersonate a user who has read/write access. See here.
How do I pass credentials to a machine so I can use Microsoft.Win32.RegistryKey.OpenRemoteBaseKey() on it? This post may provide some insight into how to go about impersonating a user to perform an operation.
Your link looks promising I will look it over.
Order of resolution for multiple awaits on one promise
If multiple tasks are waiting on a single promise, is there a guaranteed/specified by a standard order in which those tasks will begin executing once the promise is resolved? For example, consider the following
var promise = null;
function setPromiseIfNeeded() {
if (!promise) {
promise = new Promise(resolve => setTimeout(resolve, 100));
}
}
client.on('event1', async event => {
  setPromiseIfNeeded();
  await promise;
  console.log('event 1 done awaiting');
});
client.on('event2', async event => {
  setPromiseIfNeeded();
  await promise;
  console.log('event 2 done awaiting');
});
If event1 occurs before event2, is it guaranteed that the handler for event1 will awake from its promise before the handler for event2?
If the behavior is implementation dependent, I'm particularly curious about the behavior for node.js v12 and above.
This really smells like an XY Problem. The first one to call setPromiseIfNeeded() should be the first to do the log...however what is the actual higher level issue you are trying to solve?
@charlietfl I have multiple event-driven tasks that each reach out to a web API that has global rate limits. If I am at the rate limit, I want to back off to avoid getting blocked. Rather than having each task separately calculate how long it needs to wait and have tons of promises, I thought it was more sensible to have the first task calculate the wait time, make a global promise, and have all the threads await it. But in that scenario, I want the thread that made the promise to wake up first so that it can properly clear the global promise state (although maybe that isn't necessary...)
@Alec The clearing of the global state should be done in the handler that also calls resolve, not in the "thread" that was waiting first. Btw, for rate-limiting I would recommend implementing an explicit queue
@Bergi Clearing state when you resolve makes a lot of sense - I hadn't thought of that possibility! As for the explicit queue, that's more or less what's happening under the hood, it's just complicated by there being both per-route and global (all routes) rate limits, so it can't all go in a single queue :/
Yes, this is guaranteed. Promise handlers (then callbacks) are run in the same order in which they were installed on the promise (.then() call). This translates into the same for await syntax.
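A small node.js sketch illustrating this guarantee (the names are made up for the demo):

```javascript
// Three awaiters attach to the same pending promise in a known order;
// when it resolves, they resume in exactly that attachment order.
const order = [];
let release;
const gate = new Promise(resolve => { release = resolve; });

async function waiter(name) {
  await gate;          // registers a reaction on `gate`; reactions run FIFO
  order.push(name);
}

const done = Promise.all([waiter('event1'), waiter('event2'), waiter('event3')]);
release();             // resolve the shared promise

done.then(() => console.log(order.join(','))); // prints "event1,event2,event3"
```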
thanks for the insight! Do you have links for more details about this behaviour on JavaScript?
@Christian https://promisesaplus.com required it, native promises implemented it, you can find their exact behaviour specified in the ECMAScript spec. You can find the claim and some examples also in the MDN docs and elsewhere.
Get data from both side of a query
I have the following query, but if I search for data contained in table_b, table_c or table_d I get no results. Can you please explain why and correct the problem?
SELECT c.contratto,nominativo, email_pers,dt_ch_conto
FROM clienti AS c
LEFT JOIN table_b AS s ON (c.contratto=s.contratto)
LEFT JOIN table_c AS r ON (c.contratto=r.contratto)
LEFT JOIN table_d AS n ON (c.contratto=n.contratto)
WHERE (nominativo LIKE '%$stringa%' OR c.contratto LIKE '%$stringa%' OR c.email_pers LIKE '%$stringa%')
OR ((s.login LIKE '%$stringa%' AND s.attivo='SI')
OR (r.login LIKE '%$stringa%' AND r.attivo='SI')
OR (n.login LIKE '%$stringa%' AND n.attivo='SI'))
AND ((dt_ch_conto is null) AND (dt_ch_conto=0) AND (dt_ch_conto='')) GROUP BY c.contratto LIMIT 15
Why not c.nominativo, c.email_pers, c.dt_ch_conto?
Because AND binds tighter than OR, your WHERE clause parses as
(first-or-expression) OR ((second-or-expression) AND (and-expression))
with the and-expression
((dt_ch_conto is null) AND (dt_ch_conto=0) AND (dt_ch_conto=''))
This expression can never be true, because dt_ch_conto can't simultaneously have the values
NULL,
0,
''
So the second branch reduces to
(expression) AND false
which is false for every row; only the first or-expression can ever match, which is why searches on table_b, table_c and table_d return nothing. I can't be certain, but you probably want OR in the second expression:
((dt_ch_conto is null) OR (dt_ch_conto=0) OR (dt_ch_conto=''))
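As an aside, the binding rule that produces this parse is easy to check in JavaScript, where && binds tighter than || just as AND binds tighter than OR in SQL:

```javascript
// "A || B && C" parses as "A || (B && C)", not "(A || B) && C".
const A = true, B = true, C = false;
console.log(A || B && C);   // true: the always-false (B && C) branch is bypassed
console.log((A || B) && C); // false: explicit grouping changes the result
```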
Thanks for your answer, but if I search for data contained in clienti I get results; if I search on table_b I get nothing.