A limit property and the Z-transform of a 2-sided sequence
Here is a property about limits that we were taught in school:
Theorem:
If the limits of functions $f(x),g(x)$ exist at $x_0 \in \mathbb{R}$ then:
$$\lim_{x \to x_0} (f(x) + g(x)) = \lim_{x \to x_0} f(x) + \lim_{x \to x_0} g(x) $$
This can be extended for $x_0 = \infty \blacksquare$
Now I will describe a violation of this property that is observed in the computation of the Z-transform (ZT) of a 2-sided sequence $x[n], n \in \mathbb{Z}$, where $x[n] \neq 0$ for some $n_1 < 0$ and for some $n_2 > 0$.
$$\sum_{n =-\infty}^{\infty} x[n] z^{-n} = \sum_{n =-\infty}^{-1} x[n] z^{-n} + \sum_{n = 0}^{\infty} x[n] z^{-n} $$
We know how to compute each of the two new sums separately. If the first sum converges for $|z| < r_{+}$ and the second sum converges for $|z| > r_{-}$, then the Z-transform exists at least for all $z$ such that $r_{-} < |z| < r_{+}$, given that $r_{-} < r_{+}$.
Given the above assumptions, we have found a subset of the Region of Convergence (ROC) of the ZT.
But if $r_{-} \geq r_{+}$ textbooks mention that the Z-Transform does not exist!
However, the Theorem stated above only justifies splitting the sum when both one-sided limits exist; when they do not exist simultaneously, the sum cannot be broken apart this way. Thus we cannot claim non-existence of the ZT, since the initial two-sided sum may still converge.
Example:
I tried computing the ZT of the following sequence, assuming that $|a| \geq |b|$.
$$x[n] =
\left\{
\begin{array}{ll}
a^n & n \geq 0\\
-b^n & n < 0
\end{array}
\right.$$
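Working the two one-sided sums out explicitly (a standard pair of geometric series, sketched here for completeness):

$$\sum_{n=0}^{\infty} a^{n} z^{-n} = \frac{1}{1-az^{-1}}, \quad |z|>|a|, \qquad \sum_{n=-\infty}^{-1} (-b^{n}) z^{-n} = \frac{1}{1-bz^{-1}}, \quad |z|<|b|,$$

so for $|a| \geq |b|$ the two regions $|z|>|a|$ and $|z|<|b|$ do not intersect. (Note that at the two points $z = \pm\sqrt{ab}$, where $az^{-1} = b^{-1}z$, the terms of the symmetric partial sums $\sum_{n=-N}^{N}$ cancel pairwise, leaving the constant $1$ for every $N$; this is what produces isolated points of apparent convergence.)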
The result was that $X(z)$ can be computed only at two discrete antipodal points $z_1, z_2$ in the complex plane. Of course, in this case the two numbers $X(z_1), X(z_2)$ cannot describe a sequence with infinitely many terms...
Hence, it might be the case that for many common sequences we can only compute their ZT at finitely many discrete points. In that case the ZT doesn't carry any useful information about the original sequence, so the ZT is practically non-existent.
But such assumptions should be proved rigorously.
1. So is there some theory behind this?
2. Can you point out any mistakes I made in the thought process?
Thanks in advance!
EDIT: Here the correspondence to the limit property is clarified.
$$\sum_{n = -\infty}^{\infty} x[n]z^{-n} = \lim_{N \to \infty} \sum_{n = -N}^{N}x[n]z^{-n} = \lim_{N \to \infty} \Biggl\{ \sum_{n = -N}^{-1}x[n]z^{-n} + \sum_{n = 0}^{N}x[n]z^{-n} \Biggr\}$$
Unlike one-sided series, convergence of two-sided series of the form $\sum_{n=-\infty}^\infty a_n$ is only unambiguously defined if $\sum_{n=-\infty}^\infty |a_n|<\infty$. In this case, if either $\sum_{n=0}^\infty |x[n]z^{-n}|=\infty$ or $\sum_{n=-\infty}^{-1} |x[n]z^{-n}|=\infty$, then $\sum_{n=-\infty}^\infty |x[n]z^{-n}|=\infty$. This will happen when $r_->r_+$. So convergence of $\sum_{n=-\infty}^\infty x[n]z^{-n}$ without separate convergence of $\sum_{n=0}^\infty x[n]z^{-n}$ and $\sum_{n=-\infty}^{-1} x[n]z^{-n}$ isn't possible.
There is a temptation to believe that convergence of $\lim_N \sum_{n=-N}^N a_n$ should be how convergence of $\sum_{n=-\infty}^\infty a_n$ is defined, but it isn't. If $\sum_{n=-\infty}^\infty a_n$ is unambiguously defined, it should be independent of rate of approach of $\infty$. That is, if we say that $\sum_{n=-\infty}^\infty a_n$ is convergent, it should be the case that for any $M_k,N_k\in \mathbb{N}$ with $\lim_k M_k=\lim_k N_k=\infty$, $\lim_k \sum_{n=-M_k}^{N_k}a_n=\lim_N \sum_{n=-N}^N a_n$.
To see why $\sum_{n=-\infty}^\infty a_n$ can be ambiguous when $\sum_{n=-\infty}^\infty |a_n|=\infty$, let $a_n=1$ if $n>0$, $a_n=-1$ if $n<0$, and $a_0=0$. Then $\sum_{n=-N}^N a_n=0$ for any $N\in \mathbb{N}$, but $\sum_{n=-N}^{N+1}a_n=1$. So if the upper and lower bounds on the partial sums approach $\pm \infty$ in different ways, you can get different results. This isn't an issue when you only have a one-sided sum.
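The sign-flip example above can be checked numerically; a minimal sketch (plain Python, the helper name is mine):

```python
# a_n = +1 for n > 0, -1 for n < 0, 0 for n = 0.
def partial_sum(lo, hi):
    """Sum of a_n for n in [lo, hi]."""
    return sum(1 if n > 0 else (-1 if n < 0 else 0) for n in range(lo, hi + 1))

# Symmetric bounds -N..N always cancel exactly...
symmetric = [partial_sum(-N, N) for N in range(1, 6)]
# ...but shifting the upper bound by one leaves a stray +1 every time.
lopsided = [partial_sum(-N, N + 1) for N in range(1, 6)]

print(symmetric)  # [0, 0, 0, 0, 0]
print(lopsided)   # [1, 1, 1, 1, 1]
```

Both "limits" exist but disagree, so the two-sided sum has no unambiguous value.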
Oh I see, now I get it, thank you!
Are the charges induced on the surface of an earthed conductor all of the same sign?
Suppose a positive charge is placed near an earthed conductor. Now a true/false statement in my book reads as follows: "On the surface of the conductor at some points, charges are negative and at some other points, charges may be positive, and distributed non-uniformly" This statement was given as True.
Yes, negative charges will be induced and they will be distributed non-uniformly. But how can positive charge be induced on an earthed conductor? Since the conductor is earthed, it doesn't have to maintain charge neutrality; only its potential needs to be zero, right? So how can, and why would, positive charge be induced on the surface of an earthed conductor? Is there something I am missing?
I cannot really reason out why they should not induce. But it's just that I don't feel that they should. In case of a non-earthed conductor, they will since it has to maintain a charge neutrality. But I don't understand why would that happen in this case
If a positive charge is brought near an “earthed” conductor, it will attract electrons from the earth. There will be no “net” positive charge on any part of the conductor (which initially had no net charge).
Does it then imply that the statement given was false?
I think so. If the conductor were not grounded, electrons would be attracted toward the external positive charge, leaving a positive charge behind, but that gets neutralized if electrons can flow from the ground.
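This matches the textbook image-charge result for the idealized case of a point charge $q$ held a distance $d$ above an infinite grounded plane (stated here for reference): the induced surface density is negative everywhere and integrates to exactly $-q$,

$$\sigma(\rho) = -\frac{q\,d}{2\pi\left(\rho^{2}+d^{2}\right)^{3/2}}, \qquad \int_{0}^{\infty} \sigma(\rho)\, 2\pi\rho \, d\rho = -q.$$

Positive patches of induced charge appear when the conductor is isolated and must conserve its total charge, not in this grounded idealization.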
I guess it depends on the symmetry of the conductor as well. When a positively charged body is brought near the earthed conductor, a high negative charge density builds up at the near end; the mutual repulsion between those electrons makes the (negative) charge density decrease gradually with distance. So it is a relative phenomenon, i.e. the farther end is relatively more positive.
MongoDB query choosing the wrong index in the winning plan, even though executionTimeMillisEstimate is lower for the other index?
The MongoDB query planner chooses the wrong index in the winning plan. I have two indexes on the same field: one a single-field index and the other a compound index with another field.
Eg.
Field name: Field1, Contains Yes or No
Field name: Field2, Contains 0 or 1 or 2 or 3
Index 1: {'Field1':1} Single Field Index
Index 2: {'Field1':1,'Field2':1} Compound Index.
For the search query {'Field1':'Yes'} on Field1 it uses the compound index instead of the single-field index. The query execution plan is attached below.
{
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "xxxx",
"indexFilterSet" : false,
"parsedQuery" : {
"Field1" : {
"$eq" : "Yes"
}
},
"winningPlan" : {
"stage" : "FETCH",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"Field1" : 1,
"Field2" : 1
},
"indexName" : "Field1_Field2_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"Field1" : [],
"Field2" : []
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"Field1" : [
"[\"Yes\", \"Yes\"]"
],
"Field2" : [
"[MinKey, MaxKey]"
]
}
}
},
"rejectedPlans" : [
{
"stage" : "FETCH",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"Field1" : 1
},
"indexName" : "Field1_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"Field1" : []
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"Field1" : [
"[\"Yes\", \"Yes\"]"
]
}
}
}
]
},
"executionStats" : {
"executionSuccess" : true,
"nReturned" : 762490,
"executionTimeMillis" : 379131,
"totalKeysExamined" : 762490,
"totalDocsExamined" : 762490,
"executionStages" : {
"stage" : "FETCH",
"nReturned" : 762490,
"executionTimeMillisEstimate" : 377572,
"works" : 762491,
"advanced" : 762490,
"needTime" : 0,
"needYield" : 0,
"saveState" : 16915,
"restoreState" : 16915,
"isEOF" : 1,
"invalidates" : 0,
"docsExamined" : 762490,
"alreadyHasObj" : 0,
"inputStage" : {
"stage" : "IXSCAN",
"nReturned" : 762490,
"executionTimeMillisEstimate" : 1250,
"works" : 762491,
"advanced" : 762490,
"needTime" : 0,
"needYield" : 0,
"saveState" : 16915,
"restoreState" : 16915,
"isEOF" : 1,
"invalidates" : 0,
"keyPattern" : {
"Field1" : 1,
"Field2" : 1
},
"indexName" : "Field1_Field2_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"Field1" : [],
"Field2" : []
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"Field1" : [
"[\"Yes\", \"Yes\"]"
],
"Field2" : [
"[MinKey, MaxKey]"
]
},
"keysExamined" : 762490,
"seeks" : 1,
"dupsTested" : 0,
"dupsDropped" : 0,
"seenInvalidated" : 0
}
},
"allPlansExecution" : [
{
"nReturned" : 101,
"executionTimeMillisEstimate" : 0,
"totalKeysExamined" : 101,
"totalDocsExamined" : 101,
"executionStages" : {
"stage" : "FETCH",
"nReturned" : 101,
"executionTimeMillisEstimate" : 0,
"works" : 101,
"advanced" : 101,
"needTime" : 0,
"needYield" : 0,
"saveState" : 10,
"restoreState" : 10,
"isEOF" : 0,
"invalidates" : 0,
"docsExamined" : 101,
"alreadyHasObj" : 0,
"inputStage" : {
"stage" : "IXSCAN",
"nReturned" : 101,
"executionTimeMillisEstimate" : 0,
"works" : 101,
"advanced" : 101,
"needTime" : 0,
"needYield" : 0,
"saveState" : 10,
"restoreState" : 10,
"isEOF" : 0,
"invalidates" : 0,
"keyPattern" : {
"Field1" : 1
},
"indexName" : "Field1_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"Field1" : []
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"Field1" : [
"[\"Yes\", \"Yes\"]"
]
},
"keysExamined" : 101,
"seeks" : 1,
"dupsTested" : 0,
"dupsDropped" : 0,
"seenInvalidated" : 0
}
}
},
{
"nReturned" : 101,
"executionTimeMillisEstimate" : 260,
"totalKeysExamined" : 101,
"totalDocsExamined" : 101,
"executionStages" : {
"stage" : "FETCH",
"nReturned" : 101,
"executionTimeMillisEstimate" : 260,
"works" : 101,
"advanced" : 101,
"needTime" : 0,
"needYield" : 0,
"saveState" : 10,
"restoreState" : 10,
"isEOF" : 0,
"invalidates" : 0,
"docsExamined" : 101,
"alreadyHasObj" : 0,
"inputStage" : {
"stage" : "IXSCAN",
"nReturned" : 101,
"executionTimeMillisEstimate" : 0,
"works" : 101,
"advanced" : 101,
"needTime" : 0,
"needYield" : 0,
"saveState" : 10,
"restoreState" : 10,
"isEOF" : 0,
"invalidates" : 0,
"keyPattern" : {
"Field1" : 1,
"Field2" : 1
},
"indexName" : "Field1_Field2_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"Field1" : [],
"Field2" : []
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"Field1" : [
"[\"Yes\", \"Yes\"]"
],
"Field2" : [
"[MinKey, MaxKey]"
]
},
"keysExamined" : 101,
"seeks" : 1,
"dupsTested" : 0,
"dupsDropped" : 0,
"seenInvalidated" : 0
}
}
}
]
},
"serverInfo" : {
"host" : "xxxxx",
"port" : 27017,
"version" : "3.6.0",
"gitVersion" : "xxxxx"
},
"ok" : 1.0
}
The executionTimeMillisEstimate for the single-field index is 0, whereas the executionTimeMillisEstimate for the compound index is 260, so why does the winning plan still use the compound index? I am running a single-field query that matches the single-field index; why does it use the compound index?
When evaluating candidate query plans MongoDB determines which query plan returns the first batch of results (by default 101 documents) with the least amount of overall work (indicated by the works score). The works score is a proxy for different effort involved in query stages (index key comparisons, fetching documents, etc). If multiple plans perform identical work during evaluation, there are some small tie-breaking bonuses that can help choose a plan to cache (for example, if a plan is a covered query or does not require an in-memory sort).
In this case both of your plans do the same amount of work so there is no deterministic winner. In fact, your single key index is a prefix of the compound index so the compound index can answer all of the same queries. You should drop the single index rather than having both.
The executionTimeMillisEstimate for the single-field index is 0 whereas the executionTimeMillisEstimate for the compound index is 260, so why does the winning plan still use the compound index?
The estimated execution time in this explain output is only informational. The execution estimate is not factored into the works scores since it will change frequently based on other concurrent activity. If you run the same explain multiple times the estimates are likely to be 0 once all the relevant indexes and documents are loaded into memory.
I am using a single-field query for a single-field index; why does it use the compound index?
Since the field you are querying on is a prefix of the compound index, it is also a candidate evaluated by the query planner.
For more information, see How does the MongoDB query optimizer evaluate candidate plans?.
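As an aside, if you want to compare the plans yourself before dropping the redundant single-field index, the shell's hint() and dropIndex() helpers can be used; a sketch using the collection and index shapes from the question (not runnable without a live server):

```javascript
// Force the single-field index and compare the explain output.
db.collection.find({ Field1: "Yes" }).hint({ Field1: 1 }).explain("executionStats")

// The single-field index is a prefix of { Field1: 1, Field2: 1 },
// so it is redundant and can be dropped.
db.collection.dropIndex({ Field1: 1 })
```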
The order in which the indexes are defined also seems to influence the selection.
You can also use partial indexes, for example:
db.collection.createIndex(
{'Field1':1}
)
db.collection.createIndex(
{'Field1':1,'Field2':1},
{ partialFilterExpression: { Field2: { $exists: true} } }
)
How do I analyze the treatment effect while controlling for covariates in a pretest–posttest design in R?
I ran a repeated-measures ANOVA in R to look at the effects of treatment (3 different treatment groups), gender, age, and education level on a specific biomarker (continuous variable). The data is in long-form with two time points (baseline and post) corresponding to the id column.
library(lme4)  # lmer() comes from the lme4 package
model1 <- lmer(Measure1 ~ Treatment + Gender + Education_Level + Age + (1|id), data=dataset)
anova(model1)
I've seen some examples of repeated measures ANOVA that include a time interaction. I just want to make sure that this is correct and is actually testing what I need to test. Can anyone verify that my code looks correct? Also, do I need to conduct Mauchly's Test of Sphericity to verify that the assumptions of the ANOVA have been met? If so, how do I do that in R with the lmer model?
I've also tried to run a repeated measures ANOVA in R using the car package and the ez package as shown below, however, I keep getting errors that tell me I am missing data like the following:
Error in ezANOVA_main(data = data, dv = dv, wid = wid, within = within, : One or more cells is missing data. Try using ezDesign() to check your data.
ezANOVA
ezANOVA(data=dataset_3_lfclean, dv=.(Measure1), wid=.(ID), within_covariates=.(Age), within=.(Gender,Education_Level),
between=.(Treatment), detailed=T, type=3)
Car
Measure1_Response <- with(dataset_3_lfclean,cbind(Measure1[Group==1], Measure1[Group==2], Measure1[Group==3]))
mlm1 <- lm(Measure1_Response ~ 1)
rfactor <- factor(c("g1", "g2", "g3"))
mlm1.aov <- Anova(mlm1, idata=data.frame(rfactor), idesign = ~rfactor, type="III")
summary(mlm1.aov, multivariate=FALSE)
Here's my data in wide-form, where each dependent variable (measured at time 1 and time 2) has its own column and each participant has a single row:
Here's my data in long-form where each participant has multiple rows:
On the second point, there is this function - https://stat.ethz.ch/R-manual/R-devel/library/stats/html/mauchly.test.html Or do ?mauchly.test from your R session
@AllisonGrossberg - can you not do something like mauchly.test(lm(cbind(mpg, disp) ~ 1, data=mtcars)) to make a temporary linear model and test it? Obviously adapting to your data rather than the built-in mtcars
Is there any specific reason why you have to use a linear mixed model (fitted via lmer()) to model the data? This model is not exactly equivalent to a repeated measures ANOVA.
@statmerkur - I tried to use the Car package in R but was having a hard time getting it to work. I'm new to R and I don't entirely understand the differences between the different models. Can you tell me why a linear mixed model using lmer is not equivalent to a repeated measures ANOVA? What is the correct alternative?
@statmerkur - Just for a little extra context - I do have some missing data (only for some dependent variables) and the study design is a little complex. I also want to account for baseline differences in the dependent variables.
Regarding the difference between linear mixed models and rmANOVAs, I think this Q and Jake Westfall's answer are a good starting point. The model that you specified assumes compound symmetry which is not exactly the same as sphericitiy.
Also, please explain clearly which hypothesis you want to test. If you want to compare the effect of the Treatment at time point 1 and time point 2, you have to construct a different model. What do you mean by "two time points (baseline and post) corresponding to the id column"? Do you mean that data of the same subject (=id) are collected at time point 1 and at time point 2?
@statmerkur - Ok. I have 38 participants who each underwent one of three different 8-week interventions. Seven different dependent variables were measures once before the intervention and once after. So I'm interested in the effect of group (treatments 1,2, or 3) on the various dependent variables. There are major differences in the baseline scores of the different groups so I need to account for individual differences in each participant. I have my data in both long-form and wide-form - I'll post some examples.
I've edited the heading. Please check if you are OK with that and feel free to change it back if you aren't.
If you want to analyze your data with ANCOVAs and use the pre/baseline scores as a covariate, I think you should create separate columns for the pre and post scores (e.g. Measure1_pre and Measure1_post). Depending on whether you are interested only in the main effect of Treatment or also in the main effect of and interaction with Gender your ezANOVA()s should look like (1) or (2), respectively. Note that you should set orthogonal contrasts in order to get meaningful Type-III tests (see, e.g., John Fox' answer here).
# set orthogonal contrasts
options(contrasts = c("contr.sum", "contr.poly"))
# (1)
ezANOVA(data = dataset_3_lfclean,
dv = Measure1_post,
wid = ID,
between = Treatment,
between_covariates = .(Measure1_pre, Age, Gender, Education_Level),
detailed = TRUE,
type = 3)
# (2)
ezANOVA(data = dataset_3_lfclean,
dv = Measure1_post,
wid = ID,
between = .(Treatment, Gender),
between_covariates = .(Measure1_pre, Age, Education_Level),
observed = Gender,
detailed = TRUE,
type = 3)
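The ANCOVA calls above assume separate Measure1_pre / Measure1_post columns exist. One way to build them from the long-form data is tidyr::pivot_wider(); the sketch below assumes columns named ID, Treatment, Gender, Age, Education_Level, Measure1, and a Timepoint column with values "pre"/"post" (adjust to your actual names):

```r
library(tidyr)

# Widen: one row per participant, with Measure1 split into
# Measure1_pre and Measure1_post columns keyed by Timepoint.
dataset_wide <- pivot_wider(
  dataset_3_lfclean,
  id_cols      = c(ID, Treatment, Gender, Age, Education_Level),
  names_from   = Timepoint,
  values_from  = Measure1,
  names_prefix = "Measure1_"
)
```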
A repeated-measures-ANOVA-like approach, where your focus is on the Treatment by Timepoint interaction, is different, and tests different hypotheses.
Especially, the discussions under the heading Best practice when analysing pre-post treatment-control design seem to be a good starting point to decide how you want to analyze your data.
The corresponding ezANOVA() syntaxes to analyze this interaction would be
# (1)
ezANOVA(data = dataset_3_lfclean,
dv = Measure1,
wid = ID,
between = Treatment,
within = Timepoint,
between_covariates = .(Age, Gender, Education_Level),
detailed = TRUE,
type = 3)
and
# (2)
ezANOVA(data = dataset_3_lfclean,
dv = Measure1,
wid = ID,
between = .(Treatment, Gender),
within = Timepoint,
between_covariates = .(Age, Education_Level),
observed = .(Gender, Timepoint),
detailed = TRUE,
type = 3)
Your lmer() model is more in line with the second approach, however, it assumes (positive) compound symmetry.
Thank you so much for this really detailed response! I really appreciate it. I tried to run both the ANCOVA and the Repeated Measures ANOVA and I am still getting the following error message: Warning: Data is unbalanced (unequal N per group). Make sure you specified a well-considered value for the type argument to ezANOVA(). Error in temp$ezCov - mean(temp$ezCov) : non-numeric argument to binary operator
In addition: Warning message: In mean.default(temp$ezCov): argument is not numeric or logical: returning NA
I guess you get the error only in the models where Gender is a covariate, right? Does it disappear if you put in as.numeric(Gender) (only for the covariates) instead? The warning is about a different issue: whether you should use (type I vs) type II vs type III Sums of Squares. Should I put in a link that addresses this issue?
Adjust <div> in CSS
I'm trying to solve 2 issues in the picture below:
a) adjust space between the image and the text (red rectangle)
b) put away at the bottom line of the image the link called "enviar" (green rectangle with the arrow)
I'm using Bootstrap style and I tried a lot of different solutions but I'm really stuck.
Here is the code:
<div class="col-md-6 order-md-3 mb-4">
<h4 class="d-flex justify-content-between align-items-center mb-3">
<span class="text-muted">Meu carrinho</span>
<span class="badge badge-secondary badge-pill">3</span>
</h4>
<ul class="list-group mb-3">
{% for item in cart %}
<div class="list-group-item d-flex justify-content-between">
<img src="{{ item.image.url }}" class="img-bg margem">
<div style="margin-right: 20px;float:right;padding-left: 0;">
<h6 class="my-0">{{ item.category }}</h6>
<small class="text-muted">{{ item.description|linebreaks }}</small>
</div>
<span class="text-muted">{{ item.sell_price }}
<form action="{% url 'cart:cart_remove' item.id %}" method="post">
<input type="image" src="{% static 'remove.png' %}" class="img-sm">
{% csrf_token %}
</form>
</span>
</div>
{% endfor %}
<li class="list-group-item d-flex justify-content-between">
<span>Total</span>
<strong>R$ {{ cart.get_total_price }}</strong>
</li>
</ul>
<hr class="mb-4">
<button class="btn btn-primary btn-lg btn-block" type="submit">Finalizar pedido</button>
</div>
Thank you!
Please add your CSS and JS code. Also, if you provide a JSFiddle of your work we can help you faster and better.
By the way, did you try giving them margin-left together with float: left?
mkafiyan, I'm working with Bootstrap, so if I can overwrite it, I think it would be better! And yes, I tried margin-left with no success!
I need to see your code! don't you upload your website any where?
Just don't use Bootstrap... I think you can manage to do it without it, especially since it's overused. Try using Tailwind, or try to make it without any of these. I know you can do it. As the saying goes: 'The more original, the better'
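For the two original issues, a sketch using Bootstrap 4's spacing and flex utility classes (the class names mr-3 and mt-auto assume Bootstrap 4; the markup is a simplified stand-in for the cart item in the question):

```html
<div class="list-group-item d-flex">
  <!-- (a) mr-3 adds a right margin between the image and the text -->
  <img src="..." class="img-bg mr-3">
  <div class="d-flex flex-column">
    <h6 class="my-0">Category</h6>
    <small class="text-muted">Description</small>
    <!-- (b) mt-auto pushes the link to the bottom edge of the flex column,
         aligning it with the bottom line of the image -->
    <a href="#" class="mt-auto">enviar</a>
  </div>
</div>
```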
Constructing a type based on the object in Typescript
I am in need of extracting a type from an object literal that I am constructing by hand. I have the following:
const makeFunc = <TParam, TResult>(p: TParam): TResult => doSomething(p);
const myObj = {
getSomething: makeFunc(a),
getSomethingElse: makeFunc(b)
}
Let's assume a is of type A and makeFunc(a) is of type AR and similarly b is of type B and makeFunc(b) is of type BR. Now I need a type that resembles something like this:
interface extractedType {
getSomething: (a: A) => AR,
getSomethingElse: (b: B) => BR
}
Is there a way to accomplish this? If so, could someone throw some light or point me in the right direction?
I don't currently have access to a computer, so I can't verify this, but something like this should work,
interface extractedType {
getSomething: <A, AR> (a: A) => AR,
getSomethingElse: <B, BR> (b: B) => BR
}
you might have to play around with it a bit, hopefully it helps!
Source
https://www.typescriptlang.org/docs/handbook/generics.html
Edit:
Link to thread that might be related, see comments.
https://stackoverflow.com/a/44078574/7307141
What I needed to know was how I can extract this interface type using a piece of code. I know what the resultant interface type would be, but given an object literal, I need a function that would extract the resultant interface type I showed.
Hmm... Okay, that is more difficult, and I don't have a good answer for that, I post a link to a thread that might be related, good luck!
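For reference: TypeScript's `typeof` type operator (in type position) captures the inferred type of a value directly, which is often what "extract a type from an object literal" comes down to. A sketch with hypothetical stand-ins for `makeFunc(a)` / `makeFunc(b)`:

```typescript
// Hypothetical object standing in for myObj; the property values are
// functions with different parameter/return types, as in the question.
const myObj = {
  getSomething: (a: number): string => `id-${a}`,
  getSomethingElse: (b: string): boolean => b.length > 0,
};

// `typeof` captures the inferred type of the value, so the interface
// never has to be written out by hand.
type ExtractedType = typeof myObj;

// Structurally identical to the hand-written interface in the question:
const check: ExtractedType = myObj;
console.log(check.getSomething(7));        // id-7
console.log(check.getSomethingElse("hi")); // true
```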
Is Guid considered a value type or reference type?
Guids are created using the new keyword which makes me think it's a reference type.
Is this correct?
Guid uid = new Guid();
Are Guids stored on the heap?
You can see the definition of a Guid yourself:
public struct Guid ...
Or you can test it like this:
bool guidIsValueType = typeof(Guid).IsValueType;
Quote: "GUID's are created using the new keyword which makes me think it's a reference type."
Structs can have constructors too, for example new DateTime(2012, 12, 23).
No it's a Value Type -> see @Randolpho's answer
@CodingYourLife Actually, you read the answer wrong way.
Guid is a Value Type.
See MSDN. Note that Guid is a struct. All structs are Value Types.
Except of course for System.ValueType which is in fact a class :)
@JaredPar: True, but it's also abstract, so no danger of instantiation.
GUID's are created using the new keyword which makes me think it's a reference type.
Stop thinking that. Value types can have constructors too. It is perfectly legal, though strange, to say
int x = new int();
That's the same as assigning zero to x.
Is this correct?
Nope.
Are GUID's stored on heap?
Yes. Guids are also stored on the stack.
Note that the analysis below assumes that the implementation of the CLI is the Microsoft "desktop" or "Silverlight" CLR running on Windows. I have no idea what other versions of the CLI do, what they do on Macs, and so on. If you need to know whether a particular hunk of memory is stored on the stack in other implementations, you'll have to ask someone who is an expert on those implementations.
A Guid is stored on the stack under the following circumstances:
(1) when the Guid is a "temporary" result of an ongoing calculation or is being used as an argument to a method. For example, if you have a method call M(new Guid()) then temporary storage for the new Guid is allocated on the stack.
(2) when the Guid is a local variable which is (a) not in an iterator block, (b) not a closed-over outer variable of an anonymous method or lambda expression.
In all other situations the Guid is not stored on the stack. A Guid is stored on the heap when it is a field of a reference type, an element of an array, a closed-over local of an anonymous method or lambda expression, or a local in an iterator block.
A Guid may also be stored in neither the GC heap nor the stack. A Guid might be stored in entirely unmanaged memory, accessed via unsafe pointer arithmetic.
I am curious as to why you care so much as to whether the bits of a guid are on the stack or on the heap. What difference does it make?
Well, now that it's clear a Guid can be stored anywhere, I guess it wouldn't matter.
When writing soft-real-time applications (animation, games, some UI work), it's often necessary to reduce, amortize, or eliminate GC allocations within a specific "loop" or iteration of the software. Doing so reduces or eliminates GC collections within those loops, which cause animation "hitches" that are visible to the user. Thus, knowing whether a particular line of code "allocs" is needed to decide when to cache objects or use object pooling. Example: smooth realtime physics simulations written entirely in C# must not allocate in either the collide or integration phases.
It's a value type.
http://msdn.microsoft.com/en-us/library/system.guid.aspx
It's actually Guid. All types are constructed using the new keyword. You can identify reference types from value types by whether they are a class, interface, or delegate (all reference types), or a struct or enum (value types).
You might want to add enum to your list of value types.
It's a value type. See the example below:
using System;
public class Program
{
public static void Main()
{
Guid a1 = new Guid();
Console.WriteLine(a1);
Guid b1 = a1;
Console.WriteLine(b1);
a1 = Guid.NewGuid();
Console.WriteLine(a1);
Console.WriteLine(b1);
}
}
/* OUTPUT
00000000-0000-0000-0000-000000000000
00000000-0000-0000-0000-000000000000
164f599e-d42d-4d97-b390-387e8a80a328
00000000-0000-0000-0000-000000000000
*/
Changing label placement (anchor point) using ArcPy?
ArcPy lets the user automatically turn labels on and use label expressions to save time. Is there a way to change the position of the label, as we normally do in the Label properties dialog (like in the screenshot below)?
So if I want to change the label placement/position of 100 layers, even though I can use arcpy to turn labels on and set label expressions, do I still have to double-click into each layer's properties to change the label placement?
I assume there is a way to solve that; any suggestions?
What I need to do is put the label on top of the point feature, but the default is to the top right of the feature.
This is beyond what python can do alone, you will need to use ArcObjects to alter the label placement properties for a layers' label class. If you think you can manage ArcObjects I could try to track down a method using ArcObjects to point you in the right direction. Do you also need to account for Maplex label placement?
Well, I have figured out another workaround, as someone described before in this forum: I convert the labels into annotation (my final output needs to be in annotation form anyway), then, using ModelBuilder with an iterator, I iterate over every layer I use and run Field Calculator to fill the fields named "xoffset" and "yoffset" with the values I want. But if you have an easier way I'm happy to hear it, thank you.
BTW, I've tried using Maplex labels, but you still have to click into each layer; it can't be done automatically for all layers' labels. I really want to skip repetitive work like that, the way we usually do with arcpy and ModelBuilder.
If it works then put it in as an answer. Considering annotation was your goal it fits this solution, there is a limit to what you can do with labels via arcpy and Maplex would be extra complexity.. I find Maplex beneficial where half an hour of modifying rules prior to conversion saves hours of moving annotation manually or where annotation isn't an option; Maplex produces labels with better placement options and avoidance rules but there is no way to turn it on in arcpy https://gis.stackexchange.com/questions/180039/using-arcpy-to-turn-on-maplex-label-engine
Well, I have figured out another workaround, as someone described before in this forum:
I convert the labels into annotation (my final output needs to be in annotation form anyway), then I iterate over every layer I use with ModelBuilder; inside the model I use Field Calculator to fill the fields named "xoffset" and "yoffset" with the values I want.
Was the previous Q&A that helped you this one (https://gis.stackexchange.com/questions/65959/moving-offsetting-point-locations-using-modelbuilder-or-arcpy) or another? Whenever, you refer to other Q&As in posts here please provide links to them.
Xamarin.UITest: Navigating repeating objects
Let's say I have a ListView in a native app (not a web app). Each ListView item has child labels with AutomationIds 'name', 'date', and 'time' and a button with AutomationId 'info'.
In my test, I want to click on the info button for an item that has a specific date.
Coming from Selenium, one way I'd find my button is to first find the ListView item that has a child 'date' with value "specificDate", then find that item's child 'info' button and click it.
This is the ugly solution I'm using now:
x => x.Marked("VisitStartDate").All().Text(date)
.Parent().Marked("VisitEntry").Descendant()
.Marked("PatientName").Text(name)
.Parent().Marked("VisitEntry").Descendant()
.Marked("VisitInfo");
Id like to avoid all these repeating Parent() Descendant() calls.
In short I'm having trouble navigating repeatable elements without the structure selenium provides.
The REPL is the way to go, my friend. Could you provide some output from it, so we can clearly see the structure you describe? An image says more than 1000 words! (if you have the right image)
@PixelPlex any complex structure can be navigated the same way in Selenium with its provided tools. I'm asking if Xamarin provides similar tools I'm missing.
Why is my laptop fan running so much?
I have a Dell N5110 laptop and the fan often runs fast and noisily. I'm concerned my laptop might be getting damaged.
Is there any way to identify what is causing this and, if so, how I might fix it safely?
Please run sudo apt-get install lm-sensors and paste the output of the sensors command, preferably when the system is idling, so we can see the typical temperature of your processor and (depending on the hardware) information about the fan speed.
It may also help to know what version of Ubuntu you're running.
You should consider installing Jupiter.
You can select power modes with it; it's very easy to control. I just set it to the power-saving mode, and you won't notice that your laptop is on until you start using heavy apps.
Open a terminal and do this.
Add the repository:
sudo add-apt-repository ppa:webupd8team/jupiter
Update:
sudo apt-get update
Install Jupiter:
sudo apt-get install jupiter
And for other people reading this who are using an Asus Eee PC netbook, install this as well:
sudo apt-get install jupiter-support-eee
Have a nice day :)
Looking at earlier questions, it looks like your laptop is one of those hybrid laptops that have an integrated (Intel) and a discrete (Nvidia) GPU. You probably do not need a lot of graphical power, so if your BIOS has an option to choose the integrated card (disable Optimus mode), do so. Alternatively, you can install Bumblebee, which is described in Is a NVIDIA GeForce with Optimus Technology supported by Ubuntu?
How is this access token stored on the client, in FastAPI's tutorial "Simple OAuth2 with Password and Bearer"
I'm pretty new to FastAPI and OAuth2 in general. I just worked through the tutorial "Simple OAuth2 with Password and Bearer" and it mostly made sense, but there was one step that felt like magic to me.
How does the access token get stored onto the client and subsequently get passed into the client's requests?
My understanding of the flow is that it's basically
User authenticates with their username and password (these get POST'ed to the /token endpoint).
User's credentials are validated, and the /token endpoint returns the access token (johndoe) inside some JSON. (This is how the user receives his access token)
???
User makes a subsequent request to a private endpoint, like GET /users/me. The user's request includes the header Authorization: Bearer johndoe. (I don't think the docs mention this, but it's what I've gathered from inspecting the request in Chrome Developer Tools.)
The authorization token is then used to lookup the user who made the request in (4)
Step (3) is the part that I don't understand. How does the access token seemingly get stored on the client, and then passed as a header into the next request?
Demo
When you run the code in the tutorial, you get the following swagger docs. (Note the Authorize button.)
I click Authorize and enter my credentials. (username: johndoe, password: secret)
And now I can access the /users/me endpoint.
Notice how the header Authorization: Bearer johndoe was automagically included in my request.
Last notes:
I've checked my cookies, session storage, and local storage and all are empty
The authorization header disappears if I refresh the page or open a new tab
I suspect Swagger is doing something under the hood here, but I can't put my finger on it.
How you store the access token is up to you - but localstorage is probably what most people do. You could just store it in a variable inside your javascript application; there is no need to persist it if you don't want it to survive a reload (which apparently swagger doesn't)
@MatsLindh I think you're misunderstanding my question. I'm asking how the example above stores the access token, not how should I store the access token.
Sorry, that wasn't clear from your indexed list of how the flow worked at the start. SwaggerUI keeps the reference internally in the library, unless you enable persistence. In that case it gets persisted to localStorage. You can see the code for persisting authentication information here: https://github.com/swagger-api/swagger-ui/blob/cc408812fc927e265da158bf68239530740ab4cc/src/core/plugins/auth/actions.js#L274
Ah, there it is. Thank you for helping me understand. If you feel like posting this as an answer, I'll accept it.
If you need persistence for the token you'd usually use localStorage or similar, but in SwaggerUI's specific case, the authentication information is kept internally in the library.
If you have enabled persistence, SwaggerUI will persist the access token to localStorage:
export const persistAuthorizationIfNeeded = () => ( { authSelectors, getConfigs } ) => {
const configs = getConfigs()
if (configs.persistAuthorization)
{
const authorized = authSelectors.authorized()
localStorage.setItem("authorized", JSON.stringify(authorized.toJS()))
}
}
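To make the header-construction part of step (3) concrete, here is a minimal illustrative sketch (not code from the tutorial): any client, SwaggerUI included, keeps the JSON token response in memory and derives the Authorization header for later requests such as GET /users/me.

```python
import json

# Token response returned by POST /token in the tutorial (access token "johndoe")
token_response = json.loads('{"access_token": "johndoe", "token_type": "bearer"}')

def auth_header(resp):
    """Build the header sent on later requests, e.g. GET /users/me."""
    return {"Authorization": f"{resp['token_type'].capitalize()} {resp['access_token']}"}

print(auth_header(token_response))  # {'Authorization': 'Bearer johndoe'}
```

Nothing here requires cookies or localStorage: as long as the token lives in a variable, the client can attach the header, which matches the observation that the header disappears on a page refresh.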
How to glue two functions continuously
I am trying to construct a family of real functions, $f_1,f_2,\ldots$, on the interval $[0,1]$ with the following properties:
For every $k$, $f_k$ is strictly increasing.
For every $k$, $f_k(1)=1$.
Define $I_k = \int_{0}^{x_0} f_k(y)dy$ (for some constant $x_0$). Then $I_k$ should be as small as possible, i.e., $I_k\to 0$ as $k\to\infty$.
Define $J_k = \int_{x_0}^{1} f_k(y)dy$. Then $J_k$ should be as large as possible, i.e., $J_k\to(1-x_0)$ as $k\to\infty$.
For every $k$, $f_k$ has continuous first and second derivatives in $[0,1]$.
Without the last condition, the problem is quite easy. We can "glue" together two functions - a function that is nearly 0 at the left, and a function that is nearly 1 at the right. For example, we can take (See graph here):
$$
f_k(x) =
\begin{cases}
(1-1/k)({x\over x_0})^{k} & x\in[0,x_0]
\\
{(1-1/k)(1-x) + (x-x_0) \over 1-x_0} & x\in[x_0,1]
\end{cases}
$$
Note that every $f_k$ is continuous with $f_k(x_0)=1-1/k$, but not continuously differentiable.
Is there a way to construct functions $f_k$ with similar properties, but that also have continuous first and second derivatives?
Maybe $\frac{1}{2} + \frac{1}{\pi} \arctan \left[ N \tan \pi \left( x - \frac{1}{2} \right) \right]$ ? (see here) I don't get whether it should be $f$ or $1/f$ though.
@Adayah the integral for this function does not go to 0 at the left... see the updated question.
The integral $$\int\limits_0^{1/2} f_N(x)\,\mathrm{d}x$$ definitely goes to $0$ for $f_N(x) = \frac{1}{2} + \frac{1}{\pi} \arctan \left[ N \tan \pi \left( x - \frac{1}{2} \right) \right]$. You can replace $N$ by $N^2$ in the graph drawing tool above to see how it behaves for bigger $N$.
@Adayah I see! Indeed this looks good.
A textbook solution is to use as a building block the function $h(x)=\exp(-1/x)$ for $x>0$ and $h(x)=0$ otherwise. This transitions from $0$ to positive values in a $C^\infty$ way as $x$ passes from negative to positive. Use it to build $g(x) = h(1/2+x)h(1/2-x)$, a $C^\infty$ function whose graph looks like a blip centered at $0$, vanishing outside of $[-1/2,1/2]$. Let $G(x) = \int_{-\infty}^xg(t)\,dt$ be the indefinite integral of $g$. Note that $G(x)/G(1)$ climbs smoothly from $0$ to $1$ on $[-1/2,1/2]$. Finally, let $f_n(x) = G(n(x-1/2))/G(1)$, which climbs smoothly from $0$ to $1$ in the range $[0,1]$, with all the action within $1/n$ of $1/2$.
This function is not optimal. Probably the question was not clear enough. See the clarified question now.
I have edited my answer. I did not suggest that $g(x)$ answered your question, merely that it could be a piece of an answer to your question. I have now spelled out the detail for how to do this.
Indeed, this is what I get: https://www.desmos.com/calculator/cwmqqe4iav it looks good.
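As a numerical sanity check of the bump-function construction above (a sketch added here; the integral is a crude midpoint rule, so the values are approximate):

```python
import math

def h(x):
    # C-infinity building block: exp(-1/x) for x > 0, else 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    # smooth blip centered at 0, vanishing outside [-1/2, 1/2]
    return h(0.5 + x) * h(0.5 - x)

def G(x, steps=2000):
    # midpoint-rule approximation of the integral of g over [-1/2, x]
    lo = -0.5
    if x <= lo:
        return 0.0
    dx = (x - lo) / steps
    return sum(g(lo + (i + 0.5) * dx) for i in range(steps)) * dx

def f(n, x):
    # climbs smoothly from 0 to 1, with all the action within 1/n of 1/2
    return G(n * (x - 0.5)) / G(0.5)

print(f(10, 0.0), round(f(10, 0.5), 3), round(f(10, 1.0), 3))
```

The printout shows $f_{10}$ is exactly $0$ at the left end, about $1/2$ at $x_0=1/2$ (by the symmetry of $g$), and essentially $1$ at the right end, as the answer claims.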
mongoimport when the header line is on the 2nd line
I am importing a CSV file into MongoDB. I have a region.csv file whose first line is the header line.
I used the following command:
mongoimport -h localhost --db maxmind --collection region --type csv --file region.csv --headerline
It worked fine. Now I have another CSV file whose first line is a copyright line and whose 2nd line is the header line. I could delete the first line and use the same command, but how can I do it without deleting the first line manually?
Assuming you are using an environment with the standard Linux/Unix tail utility, you can use tail -n +K to start output from line K of a text file.
In your case, you want to start from the second line of the CSV file (so tail -n +2) and pipe the output of tail to mongoimport.
Putting that together, the command line would look like:
tail -n +2 another.csv | mongoimport --db maxmind --collection region --type csv --headerline
Note that you also don't need to specify -h localhost as this is the default value.
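A quick illustration of the tail step on its own (the file name region2.csv and its contents are made up for this sketch):

```shell
# Create a sample CSV whose first line is a copyright notice
printf 'Copyright 2014 Example Corp\nid,name\n1,Alice\n' > region2.csv

# `tail -n +2` starts output at line 2, dropping only the copyright line
tail -n +2 region2.csv
```

The surviving first line (`id,name`) is then consumed by mongoimport's `--headerline` exactly as before.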
Android Pay API - get payment methods
I'm developing an Android application and I need to get user's credit cards/payment methods like in the picture below
is there a way I can do this with Android Pay API or any other method?
Yes, there is. In fact, there's significant documentation on exactly what you need to do. Here's a link to start:
https://developers.google.com/android-pay/get-started
active directory with xamarin method
I wrote the following code to find out if the user logging in has an account in Active Directory, so I may allow them to proceed, and it's working fine:
public bool AuthenticateUser(string domain, string username, string password, string LdapPath)
{
    string domainAndUsername = domain + @"\" + username;
    DirectoryEntry entry = new DirectoryEntry(LdapPath, domainAndUsername, password);
    try
    {
        // Bind to the native AdsObject to force authentication.
        Object obj = entry.NativeObject;
        DirectorySearcher search = new DirectorySearcher(entry);
        search.Filter = "(SAMAccountName=" + username + ")";
        search.PropertiesToLoad.Add("cn");
        SearchResult result = search.FindOne();
        if (null == result)
        {
            return false;
        }
        return true;
    }
    catch (Exception)
    {
        // The bind failed, so the supplied credentials are invalid.
        return false;
    }
}
And it works great. The only problem is that I need to do the same thing using Xamarin.Forms. How may I?
DirectorySearcher is a class that you would use from a server API/code.
I suggest you to create a Web API that would do the same job and that will be called by your Xamarin application.
Torsion and Curvature in general relativity
In Landau's book on field theory (The Classical Theory of Fields), the famous physicist states: "Because of the equivalence principle, there should be a 'Galilean' frame in which the Christoffel symbols are zero, and thus the torsion is zero. If the torsion is zero in one coordinate system, then it is zero in any coordinate system."
My questions are the following:
1. Are there any limitations upon the allowed transformations between coordinates? For example, smoothness?
I've just learned about manifolds. Can the two coordinate systems be viewed as the coordinates of one point of the space-time manifold in two different open sets, so that smoothness is the only requirement?
2. In this so-called 'Galilean' frame, the curvature is also zero. Then shouldn't the curvature tensor also be zero in any other coordinates? If so, why did Landau only make statements about torsion, omitting curvature?
The Galileo frame is one in which the Christoffel symbols are zero. These are also known as normal coordinates. However just because the Christoffel symbols are zero does not mean the curvature is zero. The Christoffel symbols are not tensors so they are not coordinate invariant. It is always possible to choose coordinates in which they are zero even when the Riemann tensor is non-zero.
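The coordinate-independence of vanishing torsion can be made explicit (a standard argument, added here for completeness). Under a change of coordinates the connection transforms inhomogeneously,

$$\Gamma'^{\lambda}{}_{\mu\nu} = \frac{\partial x'^{\lambda}}{\partial x^{\alpha}}\frac{\partial x^{\beta}}{\partial x'^{\mu}}\frac{\partial x^{\gamma}}{\partial x'^{\nu}}\,\Gamma^{\alpha}{}_{\beta\gamma} + \frac{\partial x'^{\lambda}}{\partial x^{\alpha}}\frac{\partial^{2} x^{\alpha}}{\partial x'^{\mu}\,\partial x'^{\nu}},$$

but the second (inhomogeneous) term is symmetric in $\mu\nu$, so it cancels in the antisymmetric part

$$T^{\lambda}{}_{\mu\nu} \equiv \Gamma^{\lambda}{}_{\mu\nu} - \Gamma^{\lambda}{}_{\nu\mu},$$

which therefore transforms as a tensor: if the torsion vanishes in one coordinate system, it vanishes in all of them. No such argument applies to the Christoffel symbols themselves, which is why they can be zeroed at a point without the Riemann tensor vanishing.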
Thanks for your clarification of the concept 'Galilean frame', which I had misunderstood for a long time. Are the Christoffel symbols only zero in the vicinity of a fixed point?
Generally it is only possible to make all the Christoffel symbols zero at a point.
Specify decimal precision of column in a SQL Server indexed view
I am wondering if it's possible to override the default column precision of columns within an indexed view. The view seems to always create the columns with the largest possible precision.
A complete run-able example is below:
-- drop and recreate example table and view if they exist
IF EXISTS(SELECT * FROM sys.objects WHERE name = 'ExampleIndexedView')
DROP VIEW [dbo].[ExampleIndexedView]
IF EXISTS(SELECT * FROM sys.objects WHERE name = 'Example')
DROP TABLE [dbo].[Example]
-- create example table
CREATE TABLE [dbo].[Example](
[UserID] [int],
[Amount] [decimal](9, 2)
) ON [PRIMARY]
-- insert sample rows
INSERT INTO [dbo].[Example] ([UserID], [Amount]) VALUES (1, 10)
INSERT INTO [dbo].[Example] ([UserID], [Amount]) VALUES (2, 20)
INSERT INTO [dbo].[Example] ([UserID], [Amount]) VALUES (3, 30)
INSERT INTO [dbo].[Example] ([UserID], [Amount]) VALUES (1, 15)
INSERT INTO [dbo].[Example] ([UserID], [Amount]) VALUES (1, 2.5)
GO
-- create indexed view
CREATE VIEW [dbo].[ExampleIndexedView]
WITH SCHEMABINDING
AS
SELECT
e.UserID as UserID
,SUM(ISNULL(e.[Amount], 0)) as [Amount]
,COUNT_BIG(*) as [Count] --Required for indexed views
FROM [dbo].[Example] e
GROUP BY
e.UserID
GO
CREATE UNIQUE CLUSTERED INDEX [CI_ExampleIndexedView]
ON [dbo].[ExampleIndexedView]
([UserID])
-- show stats for view
exec sp_help [ExampleIndexedView]
This results in a view with the columns:
UserID(int, null)
Amount(decimal(38,2), null)
I understand why the view would automatically use the largest possible storage type for a SUM column, however let's say I know that the summing of that Amount column will never exceed the limits of a decimal(19, 2) - is there a way I can force the view to create the column as decimal(19, 2) instead of decimal(38, 2)?
decimal(38, 2) takes 17 bytes to store, decimal(19,2) only takes 9 bytes. As a test I duplicated my indexed view into a regular table where I used decimal(19,2) and the overall storage space saving was around 40%, so it seems like it would be a worthwhile thing to do for a view that contains a large number of decimal aggregations.
Edit: Posted a complete runnable example that creates an example table, populates it with a few rows, then creates an indexed view on that table. The result is that the Amount column in the indexed view is decimal(38,2); I would like to find a way to force it to decimal(19,2) for space-saving reasons.
CAST() or CONVERT() it to your required precision
SELECT
    UserID,
    CONVERT(DECIMAL(9,2), SUM(Amount)) AS Amount
FROM
    Example
GROUP BY
    UserID
That does not make any difference, the indexed view still creates the column with the maximum possible decimal precision
In my test, I do get (9,2).
What version of SQL Server are you using? I'm using SQL Server 2014.
It looks to me like you are creating a regular view, not an indexed view, as your syntax isn't correct for an indexed view. If I do it as a regular view then you are correct, the column type adjusts accordingly, but this is not the case for an indexed view. I will update my OP to include the full indexed view creation script.
Oh, I didn't create the index, just the schemabinding on the view. Just tried: with the CONVERT() it does not allow creating the index. But the data type of the view is (9,2). exec sp_help [ExampleIndexedView]
It's not for me.
Column_name  Type     Computed  Length  Prec  Scale  Nullable  TrimTrailingBlanks  FixedLenNullInSource  Collation
UserID       int      no        4       10    0      yes       (n/a)               (n/a)                 NULL
Amount       decimal  no        17      38    2      yes       (n/a)               (n/a)                 NULL
Count        bigint   no        8       19    0      yes       (n/a)               (n/a)                 NULL
Can you run my exact view creation and see what you get? There must be some difference in what you are doing.
Or post your view + index creation query in your answer and I will try it on my PC and see what I get.
I have updated my OP to include a full run-able example script to demonstrate the issue. I would be interested to see if it is doing the same thing for you, seeing as you said you were getting a different result with the sample you made.
CKEDITOR 4 How to make snapshot before using custom command to use CTRL+Z to undo
The problem is the following:
We have custom block element, such as quote.
We want to have a possibility to "CTRL+Z" (Undo) its creation.
How to make snapshot of current state of ckeditor before inserting its html, so CTRL+Z after that would be usable?
To save a snapshot just fire saveSnapshot event on editor instance. You have to do this before and after performing an action which should be recorded as a separate snapshot. For example:
editor.fire( 'saveSnapshot' );
editor.insertHtml( '...' );
editor.fire( 'saveSnapshot' );
Also, if your functionality is a single command, remember that editor records snapshots whenever you execute it. So this wouldn't make sense:
editor.fire( 'saveSnapshot' );
editor.execCommand( 'myCmd' );
editor.fire( 'saveSnapshot' );
Update: If you want to merge some operations which could make their own snapshots (like executed command) then you can lock snapshot before performing them and unlock after.
editor.fire( 'lockSnapshot' );
editor.execCommand( 'myCmd1' );
editor.execCommand( 'myCmd2' );
editor.fire( 'unlockSnapshot' );
While a snapshot is locked, new snapshots won't be recorded. If the snapshot stack was up to date at the moment of locking, then unlockSnapshot will update the last snapshot. But if it wasn't, then all those changes will not be recorded until the next saveSnapshot is fired.
This is a bit tricky and requires some practice and testing to start using this mechanism properly :).
What if I want to do the sequence: snapshot -> exec my command -> insert HTML -> snapshot. Is it possible to undo the whole operation with Ctrl+Z?
I would try this today and accept answer if success, thank you!
Thank you very much. Managed to make my toolbar fully usable by adding Undo/Redo to it :)
TypeError: Cannot read property 'roles' of undefined in discord.js
I am trying to get the bot to add roles in one discord and also add a role in another discord. but I keep getting the "roles" is not defined error. I have little to no coding knowledge and most of my code is a combination of things I find on google or friends teach me, so please excuse me if it is a dumb problem with a simple solution.
const client = new Client({ intents: [Intents.FLAGS.GUILDS, Intents.FLAGS.GUILD_MESSAGES, Intents.GUILD_ROLES] });
module.exports = {
name: 'accept',
description: 'Adds A Player To The Whitelist',
permissions: `ADMINISTRATOR`,
execute(message, args, add, roles, addRole, guild) {
message.delete()
let rMember = guild.roles.cache
message.mentions.members.first() || // `.first()` is a function.
message.guild.members.cache.find((m) => m.user.tag === args[0]) ||
message.guild.members;
let role1 =
message.guild.roles.cache.find((r) => r.id == roleID)
let role2 =
message.guild.roles.cache.find((r) => r.id == roleID)
let server = client.guilds.cache.get('guild ID')
let memberRole = server.guild.roles.get("role ID");
rMember.roles.add(role1).catch((e) => console.log(e));
rMember.roles.add(role2).catch((e) => console.log(e));
rMember.roles.add(memberRole).catch((e) => console.log(e));
rMember.send(`message content`);
message.channel.send('message content')
}};
The error occurs in this line:
let memberRole = server.guild.roles.get("872432775330955264");
On which line is the error located?
@MrMythical sorry for not including that, it is on line js:17:29 so <let memberRole = server.guild.roles.get("872432775330955264");>
I don't think you need the GUILD_MEMBER flag to access the cache for the roles or the guilds
let server = client.guilds.cache.get('guild ID')
let memberRole = server.guild.roles.get("role ID");
These particular lines explain your error. In the first case you are assigning a guild object to server by getting the guild from the cache. Now, a guild object does not have a further property named guild, so your error actually rests in the next line: trying to access server.guild is the same as saying guild.guild, which makes no sense.
The only correction you would want to make to your code would be something of this sort:
let server = client.guilds.cache.get('guild ID')
let memberRole = server.roles.cache.get("role ID");
And if the guild isn't cached, you can use .fetch('ID') instead of .cache.get
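To see why that line throws, here is a minimal sketch with plain objects (not discord.js itself; the role ID is the one from the question, and the role name is made up):

```javascript
// A cached guild has `roles.cache`, but no `.guild` property of its own.
const guild = { roles: { cache: new Map([["872432775330955264", { name: "Member" }]]) } };

// `server.guild` in the question is equivalent to `guild.guild` here -> undefined,
// and reading `.roles` off undefined raises the TypeError from the title.
console.log(guild.guild); // undefined

// The corrected access path goes through roles.cache directly.
console.log(guild.roles.cache.get("872432775330955264").name); // Member
```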
matrix with two unknowns
I am to calculate the value of this matrix
$$
\begin{bmatrix}
1 & 1 & 1 \\
1 & 1 & a \\
1 & b & 1
\end{bmatrix}
$$
I do a basic transformation to
$$
\begin{bmatrix}
1 & 1 & 1 \\
0 & -b+1 & 0 \\
0 & 0 & -a+1
\end{bmatrix}
$$
We have the "stairs" in the left corner. How could we proceed to get the right answer? I know how to calculate basic matrix , but variables $a,b$ in it makes it hard for me to understand it
Is your question about the rank of the initial matrix ? If so, say it explicitly.
What do you mean by the "value of a matrix"? Did you mean to say the "determinant"? If so, then the determinant of a triangular matrix, as you have in the "transformed" matrix, is just the product of the numbers on the main diagonal. However, the "transformation" you use to reduce to the triangular matrix (row reduction?) may change the determinant of a matrix. And, in fact, the determinant of that triangular matrix is NOT the same as the determinant of the original matrix. I have talked about the determinant but I still don't know if that is what you intend.
What you have to do, now that you have a triangularized matrix, is to "discuss" according to the values of $a$ and $b$:
if $a\neq1$ and $b\neq1$, the determinant is $\neq 0$: thus rank=3.
if $a=1$ and $b\neq1$, the determinant is zero, thus rank $<3$, but the first two vectors are independent, thus rank=2.
if $a\neq1$ and $b=1$, same conclusion, this time with column 1 and 3.
if $a=1$ and $b=1$, the rank drops to 1 because all the columns are multiple of the first one.
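As a numerical cross-check of this discussion (an addition, not part of the original answer): expanding the original determinant gives $\det = -(1-a)(1-b)$, which indeed vanishes exactly when $a=1$ or $b=1$; note the sign differs from the product of the triangular matrix's diagonal, consistent with the row-reduction caveat in the comments. A small script confirms it for sample values:

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

a, b = 3.0, 5.0
M = [[1, 1, 1], [1, 1, a], [1, b, 1]]
print(det3(M), -(1 - a) * (1 - b))  # both -8.0

# with a = 1 (or b = 1) the determinant vanishes and the rank drops
print(det3([[1, 1, 1], [1, 1, 1.0], [1, 5.0, 1]]))  # 0.0
```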
As level computer science paper 2 question
Could someone help explain the logic of the mark scheme (MS)?
The question is
The RandomChar() module is to be modified so that alphabetic characters are generated
twice as often as numeric characters.
Describe how this might be achieved.
and the marking scheme says
My logic is that if you randomly generate an integer between 1 and 3 inclusive and multiply that by 3, you would get 3, 6, or 9, of which only a third is divisible; you put that case as numeric values and the rest as alphabetic characters.
Is this right?
Logic of milliseconds? Please don't use images to represent essential text in your posts. Without even mentioning, e.g., uniform, RandomChar() looks ill defined.
(only 1/3rd [of 3,6,9] is divisible looks as weird as the marking scheme)
How to connect IF in my ELSE condition inside of while?
C++
I'm stuck; my if and else are not connecting.
error: else without previous if
#include<iostream>
int main(){
int i=1, num, sum=0;
while (i <= 5) {
cout<<"Enter a number: ";
cin>>num;
if (num % 2 ==0 ){
sum = sum + num;
i++;}
cout<<"\nThe sum is " << sum;
}
else{
sum=sum*num;
i++;
cout<<"\nThe product is"<<sum; }
}
}
How to connect if in my else condition inside of while?
Check your parentheses.
I need to build a program that will let the user enter five numbers and display the product of odd inputted numbers and the sum of even inputted numbers using a while loop.
I don't really know if I built the correct program.
Format your code properly.
Remember, it's always important, especially when learning and asking questions on Stack Overflow, to keep your code as organized as possible. Consistent indentation helps communicate structure and, importantly, intent, which helps us navigate quickly to the root of the problem without spending a lot of time trying to decode what's going on. The problem here seems to be entirely a product of the confused indentation.
The presence of not just one, but two instances of "enter code here" suggests that the copying/pasting and proofreading leaves something to be desired
The real problem here is indentation and structure, which is easily fixed by writing it in idiomatic C++:
#include <iostream>
// Avoid using namespace std; as that separation exists for a reason
int main() {
int sum = 0;
// Use a for() loop instead of while(n < L) { ... n++ }
// Declare iterator variables inside the scope in which they're used
for (int i = 0; i < 5; ++i) {
// Declare variables individually if/when they are needed
int num;
std::cout << "Enter a number: ";
std::cin >> num;
if (num % 2 == 0) {
sum += num; // x += y -> x = x + y
std::cout << "The sum is " << sum << std::endl;
}
else {
sum *= num; // x *= y -> x = x * y
std::cout << "The product is " << sum << std::endl;
}
}
return 0;
}
Where the key is to use for() instead of the inappropriate while().
Tip: When writing code, structure as conveyed by indentation and formatting is extremely important. You should be able to see at a glance what's going on without having to read and parse the code on a syntax level.
Tip: When you're faced with a perplexing syntax problem or a bug, if you get stuck spend that time organizing your code to be more clear and understandable. Improve variable names. Organize things into functions where appropriate. Break down your problem until it becomes clear what the issue is.
Thanks for answering my stupid question, and thanks a lot for noticing my indentation and structure; now I know what I need to work on.
Nothing "stupid" about it. Learning is all about making mistakes. Upvote any answers that help, and when ready, accept the one that solves your problem. Helps steer others towards solutions as well.
When properly formatted, your code looks like the following:
int i=1, num, sum=0;
while (i <= 5)
{
cout << "Enter a number: ";
cin >> num;
if (num % 2 == 0)
{
sum = sum + num;
i++;
}
cout << "\nThe sum is " << sum;
}
else
{
sum=sum*num;
i++;
}
cout<<"\nThe product is"<<sum;
}
}
As you can see, the else you wrote is connected to the while statement, not the if statement. You also have two extra right curly braces at the bottom. I have removed the enter code here lines, since I don't think they're in your original code. If they are, you should be getting a syntax error.
Require Select Not Allow First Option
Say we have the following:
<select name="select" required>
<option disabled selected value>Please Select Option</option>
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
</select>
<button type="submit" name="submit">Submit</button>
I would like this field to be required but I do not want to allow the user to submit the first option (<option disabled selected value>Please Select Option</option>)
By default, the first option (<option disabled selected value>Please Select Option</option>) should be selected until the user changes the option.
I've tried Googling this problem but I'm only able to find different problems, not this particular one. My thought is that it would have to use JavaScript to accomplish this, but I would like to use an HTML option before going that route.
Thank you for your help!
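For reference, and relying only on standard HTML5 constraint validation (no script): when the placeholder option carries an empty value — which a bare `value` attribute, as in the question's snippet, already does — a `required` select blocks native form submission until the user picks a real option, in browsers that support constraint validation:

```html
<form>
  <select name="select" required>
    <!-- value="" (a bare `value` attribute) is treated as "no selection" by `required` -->
    <option disabled selected value="">Please Select Option</option>
    <option>Option 1</option>
    <option>Option 2</option>
    <option>Option 3</option>
  </select>
  <button type="submit" name="submit">Submit</button>
</form>
```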
You may want to use validation here, e.g. with jQuery Validation: http://jqueryvalidation.org/
Is my answer working for you?
Using JS
$("#clickButton").click(function (event) {
var picked = $('#selector option:selected').val();
if (picked == 0) {
alert("Please select any value");
} else {
console.log("Selected")
}
})
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<select id="selector" name="select" required>
<option disabled selected value="0">Select</option>
<option>Data 1</option>
<option>Data 2</option>
<option>Data 3</option>
</select>
<button id="clickButton" type="submit" name="submit">Submit</button>
<select>
<option selected="true" style="display:none;">Select language</option>
<option>Option 1</option>
<option>Option 2</option>
</select>
Much more complicated than the awesome witchcraft @Mr. HK showed, but another approach:
HTML:
<select id="selector" name="select" required>
<option disabled selected value="0">Please Select Option</option>
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
</select>
<button id="mainButton" type="submit" name="submit">Submit</button>
JS:
$("#mainButton").click(function(event) {
var picked = $('#selector option:selected').val();
if (picked == 0){
alert("you must pick an option")
}
else{
//do whatever
}
})
See here: http://jsfiddle.net/ShL4T/105/
If you do not mind using PHP or JavaScript to validate the input of the select tag, here is a simple JavaScript verification:
function verify(){
var somevar = document.getElementById("select").value;
if(somevar == 'Please Select Option'){
document.getElementById("message").innerHTML = 'Invalid Choice, Please choose Option 1, 2, or 3';
}else{
//do whatever you want to do
}
}
<p id="message"></p>
<select name="select" id="select" required>
<option disabled selected>Please Select Option</option>
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
</select>
<button type="submit" onclick="verify()" name="submit">Submit</button>
I'm sorry about the first answer, I kinda messed it up; I'm a little rusty in JavaScript.
<form>
<select name="select" required>
<option disabled selected value>Please Select Option</option>
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
</select>
<button type="submit" name="submit">Submit</button>
</form>
<select id="option" name="select" required>
<option disabled selected value="0">Please Select Option</option>
<option>Option 1</option>
<option>Option 2</option>
<option>Option 3</option>
</select>
Eager loading a tree in NHibernate
I have a problem trying to load a tree. This is my case: I have an entity associated with itself (hierarchic) with n levels. The question is: can I eagerly load the entire tree using ICriteria or HQL?
Thanks in advance for any help.
Ariel
Yes... just set the correct fetch mode.
I'll include an example in a minute.
Example taken from here =>
IList cats = sess.CreateCriteria(typeof(Cat))
.Add( Expression.Like("Name", "Fritz%") )
.SetFetchMode("Mate", FetchMode.Eager)
.SetFetchMode("Kittens", FetchMode.Eager)
.List();
You can specify to eager load child of child too =>
.SetFetchMode("Kittens.BornOn", FetchMode.Eager)
In case you are using Linq to NHibernate, use Expand method =>
var feedItemQuery = from ad in session.Linq<FeedItem>().Expand("Ads")
where ad.Id == Id
select ad;
And i would recommend to use helper method that creates string from passed in lambda expression.
It's quite likely possible to tell Criteria to load the whole tree, but I'm not aware of how, and I prefer specifying exactly what I need (it seems dangerous to load everything).
Does this helps?
I know I can load collections related to an entity using FetchMode, but I want to load the entire tree, not only the next level.
Using your approach I would do:
.SetFetchMode("Children", FetchMode.Join)
.SetFetchMode("Children.Children", FetchMode.Join)
.SetFetchMode("Children.Children.Children", FetchMode.Join) etc
Parse HTML in Objective-C
Hello!
This is for the iPhone.
This is what I am trying to achieve:
This website expands URLs like bit.ly/miniscurl. The creator provides an API: all you have to do is go to this address:
http://expandurl.appspot.com/expand?url=YOURURL
So when I put:
http://expandurl.appspot.com/expand?url=bit.ly/miniscurl
it returns a page with this information:
{
"status": "OK",
"end_url": "http:\/\/miniscurl.dafk.net",
"redirects": 1,
"urls": ["http:\/\/bit.ly\/miniscurl", "http:\/\/miniscurl.dafk.net"],
"start_url": "http:\/\/bit.ly\/miniscurl"
}
And that is great! But how do I get that information into an NSString and then search through it for the different fields, so that in the end I have this:
NSString *status = @"OK";
NSString *end_url = @"http://miniscurl.dafk.net";
etc...
Also, an NSArray containing all the redirects (if there is more than one) would be great!
Conclusion:
I need:
a fast and easy way to get the HTML source from a website.
a fast and easy way to search through an NSString and cut it up into pieces.
Thank you!
Best regards
Kristian
To get the data from an HTTP-server, you can use [NSData dataWithContentsOfURL:].
To parse the data (in this case it seems to be JSON data), you can use a simple JSON parser like TouchJSON.
Here is some example code I wrote for one of my apps:
NSURL *url = [NSURL URLWithString:@"http://server.com/data.json"];
NSData *rawJsonData = [NSData dataWithContentsOfURL:url];
CJSONDeserializer *parser = [CJSONDeserializer new];
NSError *error;
NSDictionary *jsonDictionary = [parser deserializeAsDictionary:rawJsonData
error:&error];
[parser release];
Hope that helps!
Also, it looks like you are new here. Please mark the answer that helps you most by clicking the "√" on the left, thanks!
Hello! Thanks for the reply =D I got it working by using SBJson:
NSString *url=@"http://expandurl.appspot.com/expand?url=bit.ly/miniscurl";
NSString *output=[NSString stringWithContentsOfURL:[NSURL URLWithString:url]];
id theObject= [output JSONValue];
NSLog(@"%@",theObject);
this gave this output:
{
"end_url" = "http://miniscurl.dafk.net";
redirects = 1;
"start_url" = "http://bit.ly/miniscurl";
status = OK;
urls = (
"http://bit.ly/miniscurl",
"http://miniscurl.dafk.net"
);
}
How would I continue from here to separate the NSString? :)
The Stacks project
I have a question concerning the admirable Stacks Project.
Which comparable projects are there:
approach-wise: "an open source textbook on algebraic stacks and the algebraic geometry that is needed to define them", i.e. projects
as "an open source textbook on X and the Y that is needed to define
(and understand) X"
technical-wise [Side question: which technical framework(s) is the Stacks Project based on?]
tag-system-wise (using the same tag system as the Stacks Project)
tag-wise (with a significant number of tags shared with the Stacks Project)?
The question concerns also the possible interoperability of several such projects.
This looks like a question for Meta to me.
https://gerby-project.github.io
I am not sure if this question is on topic here..
tag-system-wise Anyone can use Gerby, with a similar tag system. The actual tag assignment isn't dealt with by Gerby, you can set up your own conventions if you like. There might be some minor things you'll have to change if you decide to do so.
So far I'm not aware of anyone actually doing this, except for the work-in-progress Kerodon, which will be Gerby applied to the works of Jacob Lurie. There's no fixed date to go live yet, maybe in September we'll have a more definite idea about this.
Observe that the text will not be open-source (there's no need for this in the Gerby system).
tag-wise No projects share tags with the Stacks project (as there are no other projects using a similar tag system), although it is often cited. It's also not quite desirable to "share" tags, as nothing ensures uniqueness of tags then.
technical-wise This has been answered before, but feel free to ask follow-up questions (which probably would be better done via e-mail, or chat), as I'm the person responsible for the majority of the implementation (with help from @RaymondCheng).
approach-wise:
Open source textbooks on a variety of topics in mathematics (and other sciences), mainly CC BY licensed, so the content can be freely shared and adapted, even for commercial use:
OpenStax
College Open (some with a non-commercial-use restriction)
Wikibooks
Answer to side question: Stacks is based on a LaTeX processing tool built using plastex and
a website built using Flask (see Gerby).
Bytes between two elements
My teacher asked me a question and said the answer is 48 bytes.
Please explain it to me.
Print p1 and p2 and look at the values.
sizeof(double) is 8 bytes.
I think it is 47.
@ĐăngKhôi - I know I was about to say the same thing (or suggest 40)! (ಠ⌣ಠ) "between" is such an ambiguous word.
@ĐăngKhôi Let's consider another array containing char. How many bytes would you say there are between m_char[0] and m_char[1]?
Can you please stop posting pictures here!
It's a trick question: p1 and p2 are right next to each other on the stack, so there are zero bytes between them. Tell your teacher to provide less ambiguous questions.
The real question here is to interpret your teacher's language, which is not on topic.
I believe that "between" here means how far apart, in bytes, the two pointers point.
Given:
p1 = m; // 0th index, 1st element
p2 = &m[6]; // 6th index, 7th element
// 7 - 1 = 6 elements (between)
So, p1 and p2 are 6 elements of double type far apart.
sizeof(double) on that architecture should be 8 bytes.
Hence,
6 elements x sizeof(double) = 6 x 8 = 48 bytes
Programmatically, it would be:
auto bytes = (p2 - p1) * sizeof(double);
Here's an example (live):
#include <iostream>
int main()
{
double m[100];
double *p1, *p2;
p1 = m; // 0th index, 1st element
p2 = &m[6]; // 6th index, 7th element
const auto bytes = (p2 - p1) * sizeof(double);
std::cout << "Bytes: " << bytes;
return 0;
}
Output:
Bytes: 48
std::distance may also be used for calculating distance:
auto bytes = std::distance( p1, p2 );
ClickOnce Publishing in Visual Studio 2017 RC not available
When installing Visual Studio 2017 RC, I already included the required component in the installation:
Although when opening the "Publish" page in the project settings, it's not available.
What am I missing?
"You need to reinstall Visual Studio to publish your application"
Possible duplicate of Visual Studio is acting weird. How do I fix this?
I had big problems with this. Turns out that having Visual Studio 2008 installed was the problem. I uninstalled it and everything was fine.
Appears to be there in RTM bits.
ActionResult - Service
I got bored writing the same code for the service and the UI, so I tried to write a converter for simple actions. This converter, which converts service results to MVC results, seems like a good solution to me, but I think it may go against the MVC pattern.
So I need help here: what do you think about the approach? Is it good or not?
Thanks
ServiceResult - Base:
public abstract class ServiceResult
{
public static NoPermissionResult Permission()
{
return new NoPermissionResult();
}
public static SuccessResult Success()
{
return new SuccessResult();
}
public static SuccessResult<T> Success<T>(T result)
{
return new SuccessResult<T>(result);
}
protected ServiceResult(ServiceResultType serviceResultType)
{
_resultType = serviceResultType;
}
private readonly ServiceResultType _resultType;
public ServiceResultType ResultType
{
get { return _resultType; }
}
}
public class SuccessResult<T> : ServiceResult
{
public SuccessResult(T result)
: base(ServiceResultType.Success)
{
_result = result;
}
private readonly T _result;
public T Result
{
get { return _result; }
}
}
public class SuccessResult : SuccessResult<object>
{
public SuccessResult() : this(null) { }
public SuccessResult(object o) : base(o) { }
}
Service - eg. ForumService:
public ServiceResult Delete(IVUser user, int id)
{
Forum forum = Repository.GetDelete(id);
if (!Permission.CanDelete(user, forum))
{
return ServiceResult.Permission();
}
Repository.Delete(forum);
return ServiceResult.Success();
}
Controller:
public class BaseController : Controller
{
public ActionResult GetResult(ServiceResult result)
{
switch (result.ResultType)
{
case ServiceResultType.Success:
var successResult = (SuccessResult)result;
return View(successResult.Result);
break;
case ServiceResultType.NoPermission:
return View("Error");
break;
default:
return View();
break;
}
}
}
[HandleError]
public class ForumsController : BaseController
{
[ValidateAntiForgeryToken]
[Transaction]
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Delete(int id)
{
ServiceResult result = ForumService.Delete(WebUser.Current, id);
/* Custom result */
if (result.ResultType == ServiceResultType.Success)
{
TempData[ControllerEnums.GlobalViewDataProperty.PageMessage.ToString()] = "The forum was successfully deleted.";
return this.RedirectToAction(ec => Index());
}
/* Custom result */
/* Execute Permission result etc. */
TempData[ControllerEnums.GlobalViewDataProperty.PageMessage.ToString()] = "A problem was encountered preventing the forum from being deleted. " +
"Another item likely depends on this forum.";
return GetResult(result);
}
}
In the company I am working in, we use the same approach. We consider it to be MVVM, not MVC. So I think this approach is good.
I'm just wondering, how do you handle redirections? Do you have a redirection service result type, or do you decide whether or not to redirect in the controller?
IMO, only for basic things (e.g. an exception thrown, or no such record). Other things should be taken care of in the action method.
MySQL connection works except for SELECT * INTO OUTFILE statement
My website works fine, except when I execute the backup function for MySQL it throws this error:
SQLSTATE[28000]: Invalid authorization specification: 1045 Access denied for user 'username'@'localhost' (using password: YES) (SQL: SELECT * INTO OUTFILE '/archive/db-backup-date-09-09-2015-time-10-30-04/accounts.csv' FROM accounts)
I tried this statement:
GRANT FILE ON *.* TO 'username'@'localhost'
but it throws another error
#1045 - Access denied for user 'myHostingUserName'@'localhost' (using password: YES)
Note that in this error the user for whom access is denied is my hosting username (HostGator), but I don't have a MySQL user with this name.
Actually MySQL does not have write rights outside its own directories, so you need to export your file somewhere MySQL can write. You can use something like this:
SELECT * INTO OUTFILE '/tmp/accounts.csv' FROM accounts;
assuming /tmp is a directory where MySQL creates temp files.
I'm also assuming that your CSV creation command is fine for you, since you are not handling field separation etc. in it.
Regarding the 2nd error: you don't have permission to grant rights.
Thanks for the answer, but in my local environment the script worked fine. And how can I find the directory where I can place my .csv file?
Get your temp dir with "SHOW VARIABLES LIKE 'tmpdir'" and you can use this path to save the file, but this is a temporary path, so you have to move the file from there, otherwise it can be lost. Otherwise you can save into the data directory, which is also not a good suggestion. You can create a backup directory, give MySQL permission on it, and save your CSV there.
I did 'tmpdir', but it didn't show me the full path, so I couldn't find the directory. But how can I give permission on a directory to MySQL? Is there a command?
Did you check with SHOW VARIABLES LIKE 'tmpdir'?
Yes, I did SHOW VARIABLES LIKE 'tmpdir', but it didn't show me the full path, so I couldn't find the directory.
What is it showing? You can also try to create a backup directory and give permission to MySQL, like "chown -R mysql.mysql /your_backup_directory". Or you can keep it in your data dir, like "/var/lib/mysql", but you need to keep watch as it can be risky.
Thank you Zafar, I'll try it, but I have contacted support and they say the function is not supported on the shared plan.
Then you can export to temp and move the file from there. Check your temp location with "SHOW GLOBAL VARIABLES LIKE 'tmpdir'" from the root (admin) user. You can also check your configuration file, my.cnf (Linux) or my.ini (Windows); if it's not mentioned in the config file then it will be /tmp, so you can give the path simply as /tmp/file.csv.
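When the FILE privilege isn't available at all (as on many shared hosting plans), one alternative worth noting is to skip SELECT ... INTO OUTFILE entirely and write the CSV client-side: fetch the rows over the normal connection and write the file from the application, which needs no server-side filesystem rights. A sketch (Python; sqlite3 is used only to keep the example self-contained, but any DB-API driver such as PyMySQL exposes the same cursor interface, and the table name is just an example):

```python
import csv
import sqlite3

# Fetch rows over the normal connection and write the CSV in the
# application, so no server-side FILE privilege is required.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

cur = conn.execute("SELECT * FROM accounts")
with open("accounts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur)                                 # data rows
```

The backup code on the website would use its existing MySQL connection in place of sqlite3.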
CakePHP 2.0 Livesearch
I'm trying to create this: http://www.justkez.com/cakephp-livesearch/ in CakePHP 2.0.
The problem is that the AjaxHelper is not available anymore in CakePHP 2.0, and so
echo $this->Ajax->observeField('query', $options);
Does not work anymore.
Any suggestions?
Investigate the new helpers and go from there. Probably a good place to start... http://book.cakephp.org/2.0/en/core-libraries/helpers/js.html
I ended up implementing it using the JSHelper, like suggested above. I will be marking this as an answer, since it contains the actual code example on how to do it.
<h3>Search Reservations</h3>
<?php
echo $this->Html->css('screen');
// we need some javascripts for this
echo $this->Html->script('jquery');
// create the form
echo $this->Form->create(false, array('type' => 'get', 'default' => false));
echo $this->Form->input('query', array('type' => 'text','id' => 'query', 'name' => 'query', 'label' => false))?>
<div id="loading" style="display: none; ">
<?php
echo $this->Html->image('ajax_clock.gif');
?>
</div>
<?php
$this->Js->get('#query')->event('keyup', $this->Js->request(
array('controller' => 'sales','action' => 'searchReservations', $event['Event']['id']),
array(
'update' => '#view',
'async' => true,
'dataExpression' => true,
'method' => 'post',
'data' => $this->Js->serializeForm(array('isForm' => false, 'inline' => true)))
));
?>
The AjaxHelper calls were all just convenience methods for JavaScript functions, most of which can be easily achieved in JavaScript itself, which is the way I prefer to do it myself. I suppose that observeField would simply add an "onchange" event listener, but I've never used it so I can't be sure.
There is a replacement in Cake 2.0 (and 1.3, by the way) for the AjaxHelper though, it's called the JsHelper, and it uses various engines to adapt to the multiple JavaScript frameworks that exist. It probably doesn't have the same methods the AjaxHelper used to have, but it's much more flexible and I'm pretty sure it'll fit your needs.
| common-pile/stackexchange_filtered |
Convert ASCII code to equivalent character
I am using the charAt(integer) method, which returns the ASCII code as an integer. How do I convert that ASCII code back to its original character in Apex?
String myChar = String.fromCharArray( new List<integer> { 65 } );
myChar is 'A' now.
Hey, how can I get exactly the reverse of this?
Like 'A' => 65
@OmkarDeokar see https://salesforce.stackexchange.com/questions/85495/string-to-ascii
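As a side note, the round trip in both directions is easy to sanity-check; here it is in Python, purely as an illustration of code-point-to-character conversion (not Apex syntax):

```python
# Illustration only (Python, not Apex): converting between a
# character and its numeric code in both directions.
assert chr(65) == 'A'   # code -> character (the fromCharArray direction)
assert ord('A') == 65   # character -> code (the charAt direction)
```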
How to detect cycles in a directed graph using the iterative version of DFS?
In the recursive DFS, we can detect a cycle by coloring the nodes as WHITE, GRAY and BLACK as explained here.
A cycle exists if a GRAY node is encountered during the DFS search.
My question is: When do I mark the nodes as GRAY and BLACK in this iterative version of DFS? (from Wikipedia)
procedure DFS-iterative(G,v):
    let S be a stack
    S.push(v)
    while S is not empty
        v = S.pop()
        if v is not labeled as discovered:
            label v as discovered
            for all edges from v to w in G.adjacentEdges(v) do
                S.push(w)
See my answer here: https://stackoverflow.com/a/60196714/1763149
One option is to push each node onto the stack twice, along with the information whether you're entering or exiting it. When you pop a node from the stack, check which of the two it is. On enter, color it gray, push it onto the stack again (this time as an exit entry), and advance to its neighbors. On exit, just color it black.
Here's a short Python demo which detects a cycle in a simple graph:
from collections import defaultdict
WHITE = 0
GRAY = 1
BLACK = 2
EDGES = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 0)]
ENTER = 0
EXIT = 1
def create_graph(edges):
graph = defaultdict(list)
for x, y in edges:
graph[x].append(y)
return graph
def dfs_iter(graph, start):
state = {v: WHITE for v in graph}
stack = [(ENTER, start)]
while stack:
act, v = stack.pop()
if act == EXIT:
print('Exit', v)
state[v] = BLACK
else:
print('Enter', v)
state[v] = GRAY
stack.append((EXIT, v))
for n in graph[v]:
if state[n] == GRAY:
print('Found cycle at', n)
elif state[n] == WHITE:
stack.append((ENTER, n))
graph = create_graph(EDGES)
dfs_iter(graph, 0)
Output:
Enter 0
Enter 2
Enter 3
Found cycle at 0
Exit 3
Exit 2
Enter 1
Exit 1
Exit 0
Here you assumed that there is only one connected component.
@VarunNarayanan: The question was asking on how/when to mark nodes as gray/black and quoted iterative DFS algorithm instead of whole algorithm for finding connected components. As a result the answer is only about iterative DFS.
that makes sense :)
You could do that simply by not popping the stack element right away.
For every iteration, do v = stack.peek() and if v is White, mark it as Grey and go ahead exploring its neighbours.
However, if v is Grey, it means that you have encountered v for the second time in the stack and you have completed exploring it. Mark it Black and continue the loop.
Here's how your modified code should look:
procedure DFS-iterative(G,v):
let S be a stack
S.push(v)
while S is not empty
v = S.peek()
if v is not labeled as Grey:
label v as Grey
for all edges from v to w in G.adjacentEdges(v) do
if w is labeled White do
S.push(w)
elif w is labeled Grey do
return False # Cycle detected
# if w is black, it's already explored so ignore
elif v is labeled as Grey:
S.pop() # Remove the stack element as it has been explored
label v as Black
If you're using a visited list to mark all visited nodes, plus another list recStack which keeps track of the nodes currently being explored, then instead of popping the element from the stack, just do stack.peek(). If the element is not visited (i.e. you're encountering it for the first time on the stack), mark it True in both visited and recStack and explore its children.
However, if the peek() value is already visited, it means you're ending the exploration of that node, so just pop it and set its recStack entry back to False.
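For reference, here is a runnable translation of the peek-based pseudocode above into Python (the graph representation, a dict of adjacency lists, is my own choice; one deliberate difference from the pseudocode is that an already-black node found on top of the stack is simply popped rather than re-explored):

```python
WHITE, GRAY, BLACK = 0, 1, 2

def has_cycle(graph, start):
    """graph: dict mapping node -> list of neighbours."""
    color = {}                          # missing nodes default to WHITE
    stack = [start]
    while stack:
        v = stack[-1]                   # peek, don't pop yet
        if color.get(v, WHITE) == WHITE:
            color[v] = GRAY             # first time on top: being explored
            for w in graph.get(v, []):
                c = color.get(w, WHITE)
                if c == GRAY:           # back edge to a node on the path
                    return True
                if c == WHITE:
                    stack.append(w)
        else:                           # GRAY or BLACK: exploration done
            stack.pop()
            color[v] = BLACK
    return False
```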
In DFS, the end of a branch is a node that has no children; these nodes become black. Then the parents of these nodes are checked: if a parent does not have a gray child, it becomes black. If you keep setting nodes black like this, eventually every node becomes black.
For example, I want to perform DFS in the graph below.
DFS starts from u and visits u -> v -> y -> x. x has no children, so you should change the color of this node to black.
Then return to the parent of x on the visited path according to discovery time. The parent of x is y. y has no children with gray color, so you should change the color of this node to black.
I solved this as a solution to this LeetCode problem - https://leetcode.com/problems/course-schedule/
I implemented it in Java: recursive DFS using colors, recursive DFS using a visited array, iterative DFS, and BFS using in-degree to compute a topological sort.
class Solution {
//prereq is the edges and numCourses is number of vertices
public boolean canFinish(int numCourses, int[][] prereq) {
//0 -> White, -1 -> Gray, 1 -> Black
int [] colors = new int[numCourses];
boolean [] v = new boolean[numCourses];
int [] inDegree = new int[numCourses];
Map<Integer, List<Integer>> alMap = new HashMap<>();
for(int i = 0; i < prereq.length; i++){
int s = prereq[i][0];
int d = prereq[i][1];
alMap.putIfAbsent(s, new ArrayList<>());
alMap.get(s).add(d);
inDegree[d]++;
}
// if(hasCycleBFS(alMap, numCourses, inDegree)){
// return false;
// }
for(int i = 0; i < numCourses; i++){
if(hasCycleDFS1(i, alMap, colors)){
// if(hasCycleDFS2(i, alMap, v)){
//if(hasCycleDFSIterative(i, alMap, colors)){
return false;
}
}
return true;
}
//12.48
boolean hasCycleBFS(Map<Integer, List<Integer>> alMap, int numCourses, int [] inDegree){
//short [] v = new short[numCourses];
Deque<Integer> q = new ArrayDeque<>();
for(int i = 0; i < numCourses; i++){
if(inDegree[i] == 0){
q.offer(i);
}
}
List<Integer> tSortList = new ArrayList<>();
while(!q.isEmpty()){
int cur = q.poll();
tSortList.add(cur);
//System.out.println("cur = " + cur);
if(alMap.containsKey(cur)){
for(Integer d: alMap.get(cur)){
//System.out.println("d = " + d);
// if(v[d] == true){
// return true;
// }
inDegree[d]--;
if(inDegree[d] == 0){
q.offer(d);
}
}
}
}
return tSortList.size() == numCourses? false: true;
}
// inspired from - https://leetcode.com/problems/course-schedule/discuss/58730/Explained-Java-12ms-Iterative-DFS-solution-based-on-DFS-algorithm-in-CLRS
//0 -> White, -1 -> Gray, 1 -> Black
boolean hasCycleDFSIterative(int s, Map<Integer, List<Integer>> alMap, int [] colors){
Deque<Integer> stack = new ArrayDeque<>();
stack.push(s);
while(!stack.isEmpty()){
int cur = stack.peek();
if(colors[cur] == 0){
colors[cur] = -1;
if(alMap.containsKey(cur)){
for(Integer d: alMap.get(cur)){
if(colors[d] == -1){
return true;
}
if(colors[d] == 0){
stack.push(d);
}
}
}
}else if (colors[cur] == -1 || colors[cur] == 1){
colors[cur] = 1;
stack.pop();
}
}
return false;
}
boolean hasCycleDFS1(int s, Map<Integer, List<Integer>> alMap, int [] colors){
// if(v[s] == true){
// return true;
// }
colors[s] = -1;
if(alMap.containsKey(s)){
for(Integer d: alMap.get(s)){
//grey vertex
if(colors[d] == -1){
return true;
}
if(colors[d] == 0 && hasCycleDFS1(d, alMap, colors)){
return true;
}
}
}
colors[s] = 1;
return false;
}
// not efficient because we process black vertices again
boolean hasCycleDFS2(int s, Map<Integer, List<Integer>> alMap, boolean [] v){
// if(v[s] == true){
// return true;
// }
v[s] = true;
if(alMap.containsKey(s)){
for(Integer d: alMap.get(s)){
if(v[d] == true || hasCycleDFS2(d, alMap, v)){
return true;
}
}
}
v[s] = false;
return false;
}
}
Java version :
public class CycleDetection {
private List<ArrayList<Integer>> adjList = new ArrayList<>();
private boolean[] visited;
public static void main(String[] args) {
CycleDetection graph = new CycleDetection();
graph.initGraph(4);
graph.addEdge(0, 1);
graph.addEdge(0, 2);
//graph.addEdge(1, 2);
graph.addEdge(2, 0);
graph.addEdge(2, 3);
//graph.addEdge(3, 3);
System.out.println(graph.isCyclic());
}
private boolean isCyclic() {
Stack<Integer> stack = new Stack<>();
//DFS
boolean[] recStack = new boolean[this.adjList.size()];
stack.add(0);//push root node
while (!stack.empty()) {
int node = stack.pop();
/*if (recStack[node]) {
return true;
}*/
visited[node] = true;
recStack[node] = true;
List<Integer> neighbours = this.adjList.get(node);
ListIterator<Integer> adjItr = neighbours.listIterator();
while (adjItr.hasNext()) {
int currentNode = adjItr.next();
if (!visited[currentNode]) {
visited[currentNode] = true;
stack.push(currentNode);
} else {
if (recStack[currentNode]) {
return true;
}
}
}
if (neighbours == null || neighbours.isEmpty())
recStack[node] = false;
}
return false;
}
private void initGraph(int nodes) {
IntStream.range(0, nodes).forEach(i -> adjList.add(new ArrayList<>()));
visited = new boolean[nodes];
}
private void addEdge(int u, int v) {
this.adjList.get(u).add(v);
}
}
Can I use Unicode characters as regex patterns when using re2 library?
I want to know if the re2 library can handle all kinds of special characters. I have to mention that I've done a lot of research on the net but didn't find anything that made this clear. Thank you!
http://www.regular-expressions.info/unicode.html
I worked with this yesterday; it works.
For example, when I have the input StringPiece "München" and I want to search for the pattern "ü", it gives me:
re2\re2.cc:173: Error parsing '(ⁿ)': invalid UTF-8
re2\re2.cc:768: Invalid RE2: invalid UTF-8
@paulotorrens Could you please share your solution? How did you work it out?
Pairwise distinct subsequence
Suppose there is an infinite sequence $S_n = (s_1, s_2, \dots )$ generated by a finite set of numbers $\{1, 2, \dots, n\}$. Given a number $m$ such that $m < n$, every subsequence $(s_i, s_{i+1}, \dots, s_{i + m -1} )$ is pairwise distinct. In addition, the sets formed by consecutive subsequences are different. That is, $\{s_i, s_{i+1}, \dots, s_{i + m -1}\}$ and $\{s_{i+1}, s_{i+2}, \dots, s_{i + m}\}$ are different sets.
For instance, we have $S_n = (1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8, \dots )$. If we choose $m = 4$, then any subsequence of length 4 satisfies the criteria.
My question is: given an $m$, is there a way to determine the smallest $n$, and is there a pseudo-random algorithm to generate such a sequence? Thanks.
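One observation that may help: the windows starting at $i$ and $i+1$ share $\{s_{i+1}, \dots, s_{i+m-1}\}$, so their sets differ exactly when $s_{i+m} \neq s_i$. Combined with the distinctness inside each window, both constraints together say each new symbol must avoid the previous $m$ symbols. That forces $n \geq m+1$ (and with $n = m+1$ the sequence becomes forced and periodic, like $1, 2, \dots, m+1$ repeating), while for $n \geq m+2$ there is always a choice, so a greedy pseudo-random generator works. A sketch under this reading of the constraints (Python; names are mine):

```python
import random

def generate(n, m, length, seed=0):
    """Greedy pseudo-random sequence over {1, ..., n} in which every
    new symbol differs from the previous m symbols.  Under the reading
    above this enforces both constraints at once; it needs n >= m + 1,
    and n >= m + 2 for any actual randomness."""
    rng = random.Random(seed)
    seq = []
    for _ in range(length):
        forbidden = set(seq[-m:])                       # last m symbols
        choices = [x for x in range(1, n + 1) if x not in forbidden]
        seq.append(rng.choice(choices))
    return seq
```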
Replace value of an element with a value from sitecore variable
I have a Variables.config file where I have all the variables and corresponding values stored as shown below:
This is the default Sitecore file:
Now I want to replace the value 1000 with the value stored in the Variables.config file.
Now I am trying this:
But it is not working.
Did you include the Variables.config in the Sitecore.config file?
Variable replacement often doesn't work that way. Sometimes you can get lucky and try this (works on pipelines)... <policy batchSize="$(otap-batch-size)" ...>$(batchSize).
@Hishaam yeah, I can see the variable in showconfig.
@jrap I tried your suggestion. It replaces the attribute value but not the value of the element.
Then it's not supported. I have done extensive testing of the method I suggested; some elements support it, others do not. I never dug into the details as to why it sometimes works and sometimes doesn't.
It is not possible to use Sitecore config variables in this way to set replacement values. The global config variable replacement process only works on node attributes, e.g. <setting name="xyz" value="$(variable)" />. It will not work on element values, e.g. <element>$(value)</element>.
You will find some elements in the Sitecore config which appear to look like global variables, but checking the expanded config in "/sitecore/admin/showconfig.aspx" you will note that they have not been expanded by the ConfigReader; in fact the replacement is made by the underlying code used by that process (the Sitecore.Pipelines.Loader.DumpConfigurationFiles processor is a good example of this).
For the replacement of the <Limit> node, in your case you should use a regular config patch which targets this element:
<contentSearch>
<configuration>
<indexes>
<index id="sitecore_web_index">
<commitPolicyExecutor>
<policies>
<policy>
<Limit>1000</Limit>
</policy>
</policies>
</commitPolicyExecutor>
</index>
</indexes>
</configuration>
</contentSearch>
How to interpret font-size of text from adobe XD?
font: SemiBold 14px/17px Basier Square;
I am trying to copy the styles of a text element from Adobe XD, and it shows me the font as above. I am confused: should I interpret the font size as 14px or 17px?
17px is the line-height - https://developer.mozilla.org/en-US/docs/Web/CSS/font
Answer: Font-size = 14px, Line-Height = 17px;
It is not Adobe-specific; it's the standard CSS font shorthand definition.
Reference: from the Mozilla docs we can understand that if font is specified as a shorthand for several font-related properties, then line-height must immediately follow font-size, preceded by "/", like this: "16px/3".
font: SemiBold 14px/17px Basier Square;
That would be
.element {
font-family: 'Basier Square';
font-weight: 600; /* SemiBold */
font-size: 14px;
line-height: 17px;
}
JPA @OrderBy Annotation - Ordering Strings
I am using the @OrderBy annotation in my entity class to sort a collection that is eagerly fetched. The column I am ordering on is of type String. However, in some instances, these strings may contain numbers. How do I ensure that the @OrderBy annotation will order numbers stored as strings in their numeric order instead of 1, 10, 2, 3, 4, 5, 6, 7, 8, 9?
@OrderBy will sort according to the database since it will be part of your SQL query. Using @SortComparator with a custom comparator will achieve your result. You can also consider combining the 2 as well (@OrderBy so that you return in some consistent order from the database, then the @SortComparator for the desired order)
Not sure if you have an idea of how to write the comparator, but this question is a start to that.
Where does @SortComparable come from?
Typo, it's meant to be @SortComparator .. updating my answer for this
You may want to consider storing the strings zero-prefixed to a fixed maximum length, e.g. 001, 023, 234, etc. Ordering will then work correctly using normal string (alpha) comparison.
See Ordering results by computed value in Hibernate for a JPA-specific way of achieving close to the same thing.
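To see why zero-prefixing makes plain string ordering numeric, here is a language-agnostic illustration (Python, just to show the comparison; the entity would store the padded form):

```python
# Lexicographic (alpha) order puts "10" before "2"...
raw = ["1", "10", "2"]
assert sorted(raw) == ["1", "10", "2"]

# ...but zero-prefixing to a fixed width makes alpha order match
# numeric order, which is the order the database's ORDER BY produces.
padded = [s.zfill(3) for s in raw]            # ["001", "010", "002"]
assert sorted(padded) == ["001", "002", "010"]
```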
Another option is to store your collection in a TreeSet with a custom comparator.
I have been trying to make a clock in JavaScript but sometimes it shows up as undefined
JS
At first the JS would work, but when testing I left the prompt blank various times; the time still showed up. After a couple more times of leaving the prompt blank, though, the time showed undefined. Basically everything in getGreeting would only come up in the HTML as undefined, which was strange because it had worked before.
window.onload = function () {
document.getElementById("time").innerHTML = getGreeting();
document.getElementById("welc").innerHTML = open();
};
var count = setInterval(function () {
getGreeting()
}, 1000);
function getGreeting() {
var dateNow = new Date();
var time = dateNow.toTimeString();
var hours = time.substring(0, 2);
var today = dateNow.toDateString();
var now = dateNow.toLocaleTimeString();
var clock = document.getElementById("time");
//var numhours = hours.parseInt();
if (hours >= 12 && hours < 18) {
return
clock.textContent = "Good Afternoon, Welcome Time:" + today + " " + now;
clock.innerText = "Good Afternoon, Welcome Time:" + today + " " + now;
} else if (hours >= 18 && hours <= 23) {
return
clock.textContent = "Good Evening, Welcome Time : " + today + " " + now;
clock.innerText = "Good Evening, Welcome Time : " + today + " " + now;
} else if (hours >= 0 && hours < 12) {
return
clock.textContent = "Good Morning, Welcome Time:" + today + " " + now;
clock.innerText = "Good Morning, Welcome Time:" + today + " " + now;
}
}
function open() {
var name = prompt("Hello User. What is your name?");
if (name === '') {
return "Hello User";
} else if (name) {
return "Hello " + name + "!";
}
}
HTML
All there is here is a menu with submenus and a basic outline for a webpage.
I don't think the HTML is the problem.
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" type="text/css" href="final.css">
<script type ="text/javascript" src="final.js"></script>
<title>Final HTML/CSS Project-History of Major Tech Companies</title>
</head>
<body>
<div id="header">
<h1>Home</h1>
</div>
<div id="nav">
<nav>
<ul>
<li><a href="#">ppgphb</a>
<ul>
<li><a href="#">ppgphb</a>
<ul>
<li><a href="#">ppgphb</a></li>
<li><a href="#">myymmy</a></li>
<li><a href="#">ymymmy</a></li>
<li><a href="#">ymmyym</a></li>
<li><a href="#">ymmy</a></li>
<li><a href="#">mymyym</a></li>
</ul>
</li>
<li><a href="#">myymmy</a></li>
<li><a href="#">ymymmy</a></li>
<li><a href="#">ymmyym</a></li>
<li><a href="#" >ymmy</a></li>
<li><a href="#">mymyym</a></li>
</ul>
</li>
<li><a href="#" >myymmy</a></li>
<li><a href="#">ymymmy</a></li>
<li><a href="#">ymmyym</a></li>
<li><a href="#">ymmy</a></li>
<li><a href="#">mymyym</a></li>
</ul>
</nav>
</div>
<div id="content">
</div>
<div id ="footer"> <p id = "welc"></p><p id = "time"></p></div>
</body>
</html>
CSS
Here is the CSS; it really has nothing to do with the issue, but I added it just in case.
body{
background-color:#DED4FF;
}
nav{
margin-right:25px;
}
nav ul{
list-style-type:none;
position:relative;
display:inline;
}
nav ul li{
position:relative;
border:1px outset #3B315C;
color:#DED4FF;
background-color:#52447F;
text-align:center;
margin:5px;
width:125px;
padding:5px;
margin-bottom:25px;
box-shadow:0 1px 7px #CAC1E8;
}
nav li:visited{
}
nav ul ul{
border:1px solid black;
margin-bottom:15px;
margin-left:1px;
position:absolute;
transition:display 8s;
display:none;
}
nav ul li a{
color:#DED4FF;
text-decoration:none;
display:block;
}
nav ul ul li a{
transition:8s;
}
nav ul ul li{
border:1px outset #715FB5;
background-color:#A388FF;
margin:0px;
margin-left:90px;
box-shadow:0 1px 7px #413D63;
}
nav ul ul li a{
color:#6F6A7F;
}
nav ul ul ul li {
box-shadow:0 1px 7px ;
border:1px outset #52447F;
background-color:#826DCC;
}
nav ul ul ul li a{
color:#52447F;
}
nav ul ul ul{
margin-right:18px;
float:none;
position:absolute;
transition:display 8s;
}
nav ul:after{
content:"";
clear:both;
display:block;
}
nav ul li:hover>ul{
display:block;
}
#header,#nav,#footer,#content{
height:100px;
}
#nav{
border-right:1px solid black;
text-align:center;
width:145px;
height:500px;
float:left;
}
#content{
float:right;
}
#footer{
border-top:1px solid black;
clear:both;
}
#header{
border-bottom:1px solid black;
border-top:1px solid black;
}
It's better to use something like jsFiddle for really long and multi-file examples like this.
There is no need for all that HTML and CSS, the OP should post a minimal test case that shows the problem. Often in developing the test case the issue will become clear and no post is required.
Well, the problem lies in the JS file.
@user2888333: note that the Date methods you are calling are implementation dependent; treating them as if they are exactly the same in every browser and for every possible user preference setting is not a good idea (e.g. toDateString will almost certainly produce different results for US settings compared to almost any other setting).
The clock should not work at all. In:
if (hours >= 12 && hours < 18) {
return
clock.textContent = "Good Afternoon, Welcome Time:" + today + " " + now;
clock.innerText = "Good Afternoon, Welcome Time:" + today + " " + now;
automatic semicolon insertion will insert a semicolon after return, so the function returns undefined and does not execute the following two statements. You should have something like:
var clockString;
...
if (hours >= 12 && hours < 18) {
clockString = "Good Afternoon, Welcome Time:" + today + " " + now;
} else if (...) {
...
}
if (typeof clock.textContent == 'string') {
clock.textContent = clockString;
} else if (typeof clock.innerText == 'string') {
clock.innerText = clockString;
} else {
clock.innerHTML = clockString;
}
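For illustration, here is a minimal, self-contained demonstration of the automatic semicolon insertion behavior described above (the function names are made up):

```javascript
// Automatic semicolon insertion (ASI): a line break right after
// `return` ends the statement, so the value on the next line is dead code.
function brokenGreeting() {
  return
  "Good Afternoon";        // never reached: parsed as `return;`
}

function fixedGreeting() {
  return "Good Afternoon"; // value kept on the same line as `return`
}

console.log(brokenGreeting()); // undefined
console.log(fixedGreeting());  // "Good Afternoon"
```

This is exactly why the clock's branches silently returned undefined: the string literals after the bare return were parsed as unreachable statements.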
About the bottom of your code: I'm not exactly sure of the reason for it. Can you explain?
Separate creation of the string from assigning as element content (which really should be a separate function), and fix the (unnecessary) return statement.
Well, in the state that the code is in, it is impossible for it to have worked unless you added the return recently. The returns in the if statement short-circuit the code.
if (hours >= 12 && hours < 18) {
return // <-- anything after return will not run...
Since you are not returning anything back, it will return undefined, which you are then shoving into the innerHTML. You should be returning the text.
if (hours >= 12 && hours < 18) {
return "Good Afternoon, Welcome Time:" + today + " " + now;
}
Wow, really? But it did before, for another project that only had the welcome message.
What is the proper (efficient) way to write this function?
The following function returns a list of possible paths starting from the root node to the deepest node of a tree:
paths :: Tree a -> [[a]]
paths (Node element []) = [[element]]
paths (Node element children) = map (element :) $ concat $ map paths children
This looks very inefficient on paper, since concat has terrible complexity. Can this function be rewritten in a way that keeps the complexity lower without using intermediate data structures (like sequence)?
EDIT: to be honest, I know one could avoid the O(n)/loop complexity of concat by:
Building the path (list) as you go down on the recursion;
Only when you reach the last recursion level, append the path to a global "result" list.
Here is a JavaScript implementation that illustrates this algorithm:
function paths(tree){
var result = [];
(function go(node,path){
if (node.children.length === 0)
result.push(path.concat([node.tag]));
else
node.children.map(function(child){
go(child,path.concat([node.tag]));
});
})(tree,[]);
return result;
}
console.log(paths(
{tag: 1,
children:[
{tag: 2, children: [{tag: 20, children: []}, {tag: 200, children: []}]},
{tag: 3, children: [{tag: 30, children: []}, {tag: 300, children: []}]},
{tag: 4, children: [{tag: 40, children: []}, {tag: 400, children: []}]}]}));
(It is not actually O(1)/iteration since I used Array.concat instead of list consing (JS has no built-in lists), but just using cons lists instead would make it constant-time per iteration.)
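For illustration, here is a hedged sketch of the same JavaScript algorithm with paths built as cons cells ({head, tail} pairs; all names here are my own) so that extending a path is O(1) per step:

```javascript
// Same algorithm as above, but paths are built as cons cells
// so extending a path is O(1) instead of copying an array.
function cons(head, tail) { return { head, tail }; }

// Convert a cons list (built leaf-to-root) back to a root-to-leaf array.
function toArray(list) {
  const out = [];
  for (let p = list; p !== null; p = p.tail) out.push(p.head);
  return out.reverse();
}

function paths(tree) {
  const result = [];
  (function go(node, path) {
    const extended = cons(node.tag, path);   // O(1) extension, shared prefix
    if (node.children.length === 0) result.push(toArray(extended));
    else node.children.forEach(child => go(child, extended));
  })(tree, null);
  return result;
}

console.log(paths(
  { tag: 1, children: [
    { tag: 2, children: [] },
    { tag: 3, children: [{ tag: 4, children: [] }] }
  ]}));
// [[1, 2], [1, 3, 4]]
```

Note that the prefixes are shared between sibling paths, mirroring the reversed-list idea discussed in the answers below only in spirit; the final toArray still pays O(depth) per leaf, which is unavoidable if arrays are the required output.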
DList should emulate the performance of the global solution you describe -- it's the functional equivalent of mutating a list cons (as long as it is only consumed once)
... I'm not sure how? I'd love to see an answer! :) Also, if you just use the list monad like this: "do { x <- map f xs; x }", and then, on the deepest iteration, cons to a normal list in an IORef, would this do what I want?
concat does not have terrible complexity; it is O(n), where n is the total number of elements in each list but the last. In this case, I don't think it's possible to do any better, with or without an intermediate structure, unless you change the type of the result. The list of lists, in this context, offers absolutely no potential for sharing, so you have no choice but to allocate each "cons" of each list. The concatMap only adds a constant factor overhead, and I'd be surprised if you could find a way to reduce that significantly.
If you want to use some sharing (at the cost of structural laziness), you can indeed switch to a different data structure. This will only matter if the tree is somewhat "bushy". Any sequence type supporting snoc will do. At the simplest, you can even use lists in reverse, so you get paths leading from the leaves to the root instead of the other way around. Or you can use something more flexible like Data.Sequence.Seq:
import qualified Data.Sequence as S
import Data.Sequence ((|>), Seq)
import qualified Data.DList as DL
import Data.Tree
paths :: Tree a -> [Seq a]
paths = DL.toList . go S.empty
where
go s (Node a []) = DL.singleton (s |> a)
go s (Node a xs) = let sa = s |> a
in sa `seq` DL.concat . map (go sa) $ xs
Edit
As Viclib and delnan point out, there was a problem with my original answer, because the bottom level got traversed multiple times.
One invocation of concat takes O(n) time, but since it's being used at each level of recursion, the complexity of paths is worse (I'm too lazy to figure out what exactly). Also, "terrible" is relative, but there are certainly persistent sequential containers with better (logarithmic or amortized constant, for concatenating two lists) complexity.
@delnan, the concat is applied at each level to the elements one level below, so the sum of all the steps in the concats follows the total number of elements in the tree.
Thanks for the answer! But I do think that it can be done in constant time per iteration - maybe using ST - since you can in another language. I've edited the main post with what I have in mind. Thanks again!
@Viclib, I'm not sure what you mean by "per iteration". The solution I gave is O(n), where n is the total number of elements in the tree. There is no way to do better than that in any language.
No, I mean - your solution involves using another data structure. If I get it, it is perfectly fine and a great way to do it - but I'm specifically asking if this can be done with lists alone. Mostly for learning purposes.
@dfeuer, it's not the elements that are concatenated, but the results of paths element, which include many copies of said elements (except for leaves). In fact, at the root of a full binary tree with depth d there are 2 child elements, but concat $ map paths elements has 2^(d-1) elements, give or take a few, most of which have already been copied several times by previous concat calls.
@delnan, it should be fixed now. Sorry about that.
Let's benchmark:
{-# LANGUAGE BangPatterns #-}
import Control.DeepSeq
import Criterion.Main
import Data.Sequence ((|>), Seq)
import Data.Tree
import GHC.DataSize
import qualified Data.DList as DL
import qualified Data.Sequence as S
-- original version
pathsList :: Tree a -> [[a]]
pathsList = go where
go (Node element []) = [[element]]
go (Node element children) = map (element:) (concatMap go children)
-- with reversed lists, enabling sharing of path prefixes
pathsRevList :: Tree a -> [[a]]
pathsRevList = go [] where
go acc (Node a []) = [a:acc]
go acc (Node a xs) = concatMap (go (a:acc)) xs
-- dfeuer's version
pathsSeqDL :: Tree a -> [Seq a]
pathsSeqDL = DL.toList . go S.empty
where
go s (Node a []) = DL.singleton (s |> a)
go s (Node a xs) = let sa = s |> a
in sa `seq` DL.concat . map (go sa) $ xs
-- same as previous but without DLists.
pathsSeq :: Tree a -> [Seq a]
pathsSeq = go S.empty where
go acc (Node a []) = [acc |> a]
go acc (Node a xs) = let acc' = acc |> a
in acc' `seq` concatMap (go acc') xs
genTree :: Int -> Int -> Tree Int
genTree branch depth = go 0 depth where
go n 0 = Node n []
go n d = Node n [go n' (d - 1) | n' <- [n .. n + branch - 1]]
memSizes = do
let !tree = force $ genTree 4 4
putStrLn "sizes in memory"
putStrLn . ("list: "++) . show =<< (recursiveSize $!! pathsList tree)
putStrLn . ("listRev: "++) . show =<< (recursiveSize $!! pathsRevList tree)
putStrLn . ("seq: "++) . show =<< (recursiveSize $!! pathsSeq tree)
putStrLn . ("tree itself: "++) . show =<< (recursiveSize $!! tree)
benchPaths !tree = do
defaultMain [
bench "pathsList" $ nf pathsList tree,
bench "pathsRevList" $ nf pathsRevList tree,
bench "pathsSeqDL" $ nf pathsSeqDL tree,
bench "pathsSeq" $ nf pathsSeq tree
]
main = do
memSizes
putStrLn ""
putStrLn "normal tree"
putStrLn "-----------------------"
benchPaths (force $ genTree 6 8)
putStrLn "\ndeep tree"
putStrLn "-----------------------"
benchPaths (force $ genTree 2 20)
putStrLn "\nwide tree"
putStrLn "-----------------------"
benchPaths (force $ genTree 35 4)
Some notes:
I benchmark on GHC 7.8.4 with -O2 and -fllvm.
I fill the tree in genTree with some Int-s in order to prevent GHC optimization causing subtrees to be shared.
In memSizes the tree must be pretty small, because recursiveSize has quadratic complexity.
Results on my Core i7 3770:
sizes in memory
list: 37096
listRev: 14560
seq: 26928
tree itself: 16576
normal tree
-----------------------
pathsList 372.9 ms
pathsRevList 213.6 ms
pathsSeqDL 962.2 ms
pathsSeq 308.8 ms
deep tree
-----------------------
pathsList 554.1 ms
pathsRevList 266.7 ms
pathsSeqDL 919.8 ms
pathsSeq 438.4 ms
wide tree
-----------------------
pathsList 191.6 ms
pathsRevList 129.1 ms
pathsSeqDL 448.2 ms
pathsSeq 157.3 ms
Comments:
I am entirely unsurprised. The original version with lists is asymptotically optimal for the job. Also, it makes sense to use DList only when we would otherwise have inefficient list appends, but it's not the case here.
Note that the list of reversed paths takes less space than the tree itself.
The performance patterns are consistent over differently shaped trees. In the "deep tree" case Seq performs relatively worse, presumably because Seq snoc is costlier than list cons.
I think a Clojure-style persistent vector (Int-indexed shallow tries) would be nice here, since they can be pretty fast, can possibly have less space overhead than plain lists and support efficient snoc and random reads/writes. In comparison, Seq is heavier in weight, though it supports a wider range of efficient operations.
exactly. foldl (++) is bad; foldr (++) (and so, concat) is perfectly fine.
I think your trees are much too small to reflect some of the differences between these algorithms. Using fewer benchmarks of deeper/bushier trees may be interesting. It would also be good to add a revList version using DList (or something). I say "or something" because I'm also trying to find a nice way to transform the DLists away while using that same essential algorithm.
@dfeuer: my 6-branching 8-depth tree has 6^8 ~ 1.6 million nodes, that's surely not too small! Anyway, I added results for wide and deep trees too, and the overall picture remains the same.
Interesting. I'm still trying to understand why the DLists are hurting performance for the reversed lists. I'll need to think about this more.
Speaking of algorithm optimization, not code optimization: tree by definition has only one path from root to any node, there is no need to return a list in first place. It only makes sense if you want to return paths to all deepest nodes, if there are many of them on the same depth.
Actually, your code returns paths not to the deepest nodes, but to the all leaf nodes, depth isn't taken into account at all.
... yes, that is what I meant: terminal nodes. I'm not a native English speaker, but I see your confusion.
How do I query a field from Mongo where the value is actually a number but stored as string?
The confusing part is the number in the string field can be with leading zeros but my query param will not contain that
Object 1:
{
"_id" : ObjectId("5c3f6aec29c2e3193315b485"),
"flightCode" : "000541300157840"
}
Object 2:
{
"_id" : ObjectId("5c3f6aec29c2e3193315b485"),
"flightCode" : "00054130015784"
}
If my intent is to find flight code that matches number<PHONE_NUMBER>4, how will I write my query?
Just to be clear, you want to match object 2, right?
Thats correct. I am looking to match the 2nd Object only
You need $regex operator with following regular expression:
var code = "541300157840";
var regex = "^0*" + code + "$"
db.col.find({ flightCode: { $regex: new RegExp(regex) } })
where * means that 0 occurs zero or more times which means that it works both for<PHONE_NUMBER>57840 and for<PHONE_NUMBER>40
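The anchored pattern can also be tried outside of MongoDB; a small illustrative sketch in plain JavaScript (the helper name is made up), assuming the code contains only digits so no regex escaping is needed:

```javascript
// Build the same anchored pattern the answer uses: optional leading
// zeros, then the exact code, then end-of-string.
function matchesFlightCode(stored, code) {
  const regex = new RegExp("^0*" + code + "$");
  return regex.test(stored);
}

console.log(matchesFlightCode("00054130015784", "54130015784"));  // true
console.log(matchesFlightCode("000541300157840", "54130015784")); // false (trailing digit differs)
```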
How will the regex expression change if the number actually comes from a variable? Is this correct: /^0*var1$/ ?
@satsamsk Note that the ^ anchor is required for MongoDB to be able to use an index on this field. Without it, MongoDB won't be able to use the index and query performance would suffer.
If you think that your data would have text flight code so the string can be identified, we can use this.
Regex:
54130015784(?="\n)
Explanation:
Positive Lookahead (?="\n)
Assert that the Regex below matches
" matches the character " literally (case sensitive)
\n matches a line-feed (newline) character (ASCII 10)
Example:
https://regex101.com/r/sF0YfH/3
Let me know if it works. If not, give a clearer idea of what you want.
How do I display logged-in username IF logged-in?
I'm working on creating some text that says 'Login' to users that are not logged in, and the user's username or display name when logged in.
It seems like it should be an easy problem to solve, and I've found the following two bits of code on the WordPress Codex that each do half of what I am looking for, but I haven't figured out how to combine them (without breaking the site).
Is this the correct direction, or way off base?
To check if the user is logged in and display something different depending:
<?php if ( is_user_logged_in() ) {
echo '{username code here}';
} else {
echo 'Login';
}
?>
To get and display the current user's information:
<?php global $current_user;
wp_get_current_user();
echo 'Username: ' . $current_user->user_login . "\n";
echo 'User display name: ' . $current_user->display_name . "\n";
?>
This seems to do what you need.
<?php global $current_user; wp_get_current_user(); ?>
<?php
if ( is_user_logged_in() ) {
echo 'Username: ' . $current_user->user_login . "\n";
echo 'User display name: ' . $current_user->display_name . "\n";
} else {
wp_loginout();
} ?>
If is_user_logged_in() returns true, then wp_get_current_user(); is probably not needed. I was able to skip that and still echo the $current_user->display_name.
Iterate through datatable progressively slower on Azure
I've got a c# problem where we load around 10 000 rows in a data table, and then loop through them to change some data.
Problem is that it gets progressively slower as it goes through the loop. Every iteration is a few milliseconds slower, up to a point where it is v e r y slow.
But before you ask for code or give workarounds for my existing code, here is what is interesting: it only happens on Azure. And not on all Azure servers, only on 1, maybe 2. The exact same config, DB, and data on a different server does not have the same problem.
Is anyone aware of something in Azure or Win 2012 R2 that might cause such behavior?
EDIT:
Quite interesting is that the problem seems to have nothing to do with what is in the loop. This loop now just prints snapshots to the screen. The first 100 rows take 6 seconds; the 40th set takes 40 seconds, and it keeps getting slower. Here is the simplified code.
var format = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
var fs = new FileStream(path, FileMode.Open);
var ds = format.Deserialize(fs) as DataSet;
fs.Close();
System.Data.DataTable table = ds.Tables[0];
table.Columns.Add("TEMP", typeof(string));
DateTime lastSnap = DateTime.Now;
for (int i = 0; i < table.Rows.Count; i++)
{
if (i % 100 == 0)
{
PrintToScreen((DateTime.Now - lastSnap).Seconds);
lastSnap = DateTime.Now;
}
}
If you -1 at least tell us why please.
Sorry, but without any code inside your loop it's unlikely you get an answer. And commenting on -1 is explicitly prohibited in SO rules. You should have posted your code anyway, because it could be a trivial algorithmic complexity issue, but this way others just have no chance to guess.
OK - thanks. If it was a logical error then it should happen on all servers. If it is 1 server in 20, then I don't think the coding is the main cause. The reason I did not put code is to prevent people from giving alternative coding, as it will solve the cause but not the underlying problem. IMHO anyway.
If it's infrastructure related, you should show the functions you use to access and update your data to find out which might cause the problem. But i'm pretty sure it's an algorithmic complexity bug. Here is a trivial example: https://dotnetfiddle.net/RIzZq1 Try to increase the number by 10 and you will understand.
The infrastructure and configuration could only contribute to trigger such a behaviour by altering some data involved in the computation, but i'm pretty sure azure as well as any other db technology is not subject to algorithmic complexity bugs by itself.
So it's the PrintToScreen part which gets slower. Please post the PrintToScreen code.
Try also measuring the loop with a Stopwatch rather than the TimeSpan arithmetic, see if the output still behaves similar.
How is the question related to Azure? The code simply iterates over an in-memory datatable. And doesn't even measure the elapsed time, just the timespan's seconds value.
It relates to Azure in the sense that we only get the degrading time performance on an Azure box, and not on any other VM that we've tested on. It MAY be Azure as that is the only difference that we know of.
Well, as suspected, it did not have anything to do with the logic. For some reason or other, the exe was running in Windows 8 compatibility mode.
Removing that flag solved the problem.
how to restore deleted records in MDB
is it possible to restore deleted records in MDB?
Two words: time machine.
To whomever voted for ServerFault: how exactly does a question about a file-based desktop database engine belong on ServerFault?
I had a near catastrophic loss of data in an MDB file once and I researched this a lot. I ended up purchasing a product named Advanced Access Repair. The free trial allowed me to preview the data available for restore so I was sure that it was worth the $300 price tag.
I tried finding documentation on how to get the data back with code, but that didn't lead anywhere (for me at least).
Graph with Cycle and Two-Colorable
I think that if the graph $G$ has an odd cycle, it's not two-colorable; otherwise it is two-colorable.
I read in some notes that the following is true: we cannot two-color any graph $G$ that has a cycle.
Could anyone clarify this for me?
This is correct: if $G$ has an odd cycle, then $G$ is not two-colorable. Suppose that $V$ is the set of vertices of $G$, and $c:V\to\{0,1\}$ is a two-coloring of $V$ with the colors $0$ and $1$. Suppose further that $C=\{v_0,v_1,\ldots,v_n\}$ is a cycle in $G$. We can start the listing of vertices of $C$ at any point in the cycle, so we may assume that $c(v_0)=0$. Clearly this implies that $c(v_1)=1$, $c(v_2)=0$, and so on. It’s clear, in fact, that if $0\le k\le n$, then
$$c(v_k)=\begin{cases}
0,&\text{if }k\text{ is even}\\
1,&\text{if }k\text{ is odd}\;.
\end{cases}$$
And $v_n$ is adjacent to $v_0$, so we must have $c(v_n)=1$, which implies that $n$ is odd. Finally, $C$ has $n+1$ vertices (and edges), so $C$ must be an even cycle.
In short, if $G$ has a two-coloring, then every cycle in $G$ is even.
we cannot two-color any graph $G$ that has a cycle.
This is not true. The right statement is the following Theorem:
Theorem A graph $G$ is two colorable if and only if it has no odd cycle.
Here odd cycle means cycle of odd length.
Brian proved one implication. For the other implication, note that it is enough to prove the statement for each component, so you can assume that $G$ is connected.
Now you can 2-color the graph the following way: pick some vertex $v$ as the start vertex.
Color $v$ with color $0$, and color each vertex $u$ with color 0/1 if there exists a path of even/odd length from $v$ to $u$. Then the condition (no odd cycle) implies that each vertex is colored in a well-defined way (i.e. it receives ONE color, not both), and that this is a valid 2-coloring.
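For illustration, the construction above can be run directly as a breadth-first coloring; here is a hedged sketch in JavaScript (the function name and adjacency-list representation are my own choices), which returns null exactly when some component contains an odd cycle:

```javascript
// Attempt to 2-color a graph given as an adjacency list (adj[v] lists
// the neighbours of vertex v). Returns an array of 0/1 colors if the
// graph is bipartite, or null if some component contains an odd cycle.
function twoColor(adj) {
  const n = adj.length;
  const color = new Array(n).fill(-1);
  for (let start = 0; start < n; start++) {
    if (color[start] !== -1) continue;      // component already colored
    color[start] = 0;
    const queue = [start];
    while (queue.length > 0) {
      const v = queue.shift();
      for (const u of adj[v]) {
        if (color[u] === -1) {
          color[u] = 1 - color[v];          // opposite color along each edge
          queue.push(u);
        } else if (color[u] === color[v]) {
          return null;                      // odd cycle detected
        }
      }
    }
  }
  return color;
}

// Even cycle (4-cycle): 2-colorable.
console.log(twoColor([[1, 3], [0, 2], [1, 3], [2, 0]])); // [0, 1, 0, 1]
// Odd cycle (triangle): not 2-colorable.
console.log(twoColor([[1, 2], [0, 2], [0, 1]]));         // null
```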
While deploying two application authenticated using Azure AD on a single IIS server, Authentication page keeps on looping infinitely
I have created two .NET applications having Azure AD authentication. I have deployed both of them on the IIS server with different ports for HTTP and HTTPS.
1st Application: Deployed on HTTP Port 80 and HTTPS Port 443 with the Redirect URL of app1.xyz.com
2nd Application: Deployed on HTTP Port 88 and HTTPS Port 9443 with the Redirect URL of https://app2.xyz.com:9443
While authenticating a user for the 1st application, the authentication flow works fine: the user is redirected to the login page and, after a successful login, is redirected back to the application URL.
When authenticating a user for the 2nd application, the authentication flow is not working: the user is redirected to the login page and it keeps looping infinitely on the login page.
Could anyone please share your comments/suggestions on the above issue?
Still, if you have any questions, just let me know here in a comment. Thank you.
As this is a common issue, maybe you have not chosen a pattern syntax, so you are using the default regular expression syntax in your web.config file, which I am assuming is the cause of the infinite loop. Though you have not shared your web.config file, you can try this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<system.webServer>
<rewrite>
<rules>
<rule name="Add www" patternSyntax="Wildcard" stopProcessing="true">
<match url="*" />
<conditions>
<add input="{HTTP_HOST}" pattern="example.com" />
</conditions>
<action type="Redirect" url="http://www.example.com/{R:0}" />
</rule>
</rules>
</rewrite>
</system.webServer>
</configuration>
Note: replace the example URL in the code with your desired URL.
For more information see https://blogs.msdn.microsoft.com/kaushal/2013/05/22/http-to-https-redirects-on-iis-7-x-and-higher/
SpringBoot one Service, many classes
I create simple news system with comments using Spring Boot and MongoDB. I would like to focus on code quality. I create service using generic to save data from class.
My code:
Dao.java
@Repository
public interface Dao<T, ID extends Serializable> extends MongoRepository<T, ID>{
}
DaoService.java
@Service
public class DaoService<T> extends AbstractService<T, Long> {
@Autowired
public DaoService(Dao<T, Long> dao) {
super(dao);
}
}
Service.java
@org.springframework.stereotype.Service
public interface Service<T, ID extends Serializable> {
T save(T entity);
}
AbstractService.java
public abstract class AbstractService<T, ID extends Serializable> implements
Service<T, ID> {
protected final Logger logger = LoggerFactory.getLogger(getClass());
protected Dao<T, ID> dao;
public AbstractService(Dao<T, ID> dao) {
this.dao = dao;
}
@Override
public T save(T entity) {
this.logger.debug("Create a new {} with information: {}", entity.getClass(),
entity.toString());
return this.dao.save(entity);
}
}
and 2 repo
@Repository
public interface CommentDao extends Dao<Comment, Long> {
}
and
@Repository
public interface NewsDao extends Dao<News, Long> {
}
My controller:
@RestController
@RequestMapping("/news")
public class NewsController {
private final DaoService<News> newsService;
private final DaoService<Comment> commentDaoService;
public NewsController(DaoService<News> newsService, DaoService<Comment> commentDaoService) {
this.newsService = newsService;
this.commentDaoService = commentDaoService;
}
@RequestMapping(method = RequestMethod.GET)
public void save(){
newsService.save(new News("elo","a","c"));
commentDaoService.save(new Comment("iss","we"));
}
}
error:
2016-03-02 23:22:30.265 WARN 6100 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'daoService' defined in file [C:\Users\Lukasz\IdeaProjects\NewsSystem_REST\build\classes\main\com\newssystem\lab\dao\DaoService.class]: Unsatisfied dependency expressed through constructor argument with index 0 of type [com.newssystem.lab.dao.Dao]: : No qualifying bean of type [com.newssystem.lab.dao.Dao] is defined: expected single matching bean but found 2: newsDao,commentDao; nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [com.newssystem.lab.dao.Dao] is defined: expected single matching bean but found 2: newsDao,commentDao
2016-03-02 23:22:30.273 INFO 6100 --- [ main] o.apache.catalina.core.StandardService : Stopping service Tomcat
2016-03-02 23:22:30.296 ERROR 6100 --- [ main] o.s.boot.SpringApplication : Application startup failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'daoService' defined in file [C:\Users\Lukasz\IdeaProjects\NewsSystem_REST\build\classes\main\com\newssystem\lab\dao\DaoService.class]: Unsatisfied dependency expressed through constructor argument with index 0 of type [com.newssystem.lab.dao.Dao]: : No qualifying bean of type [com.newssystem.lab.dao.Dao] is defined: expected single matching bean but found 2: newsDao,commentDao; nested exception is org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [com.newssystem.lab.dao.Dao] is defined: expected single matching bean but found 2: newsDao,commentDao
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:749) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:185) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1143) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1046) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:510) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:482) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:772) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:839) ~[spring-context-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:538) ~[spring-context-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118) ~[spring-boot-1.3.2.RELEASE.jar:1.3.2.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:766) [spring-boot-1.3.2.RELEASE.jar:1.3.2.RELEASE]
at org.springframework.boot.SpringApplication.createAndRefreshContext(SpringApplication.java:361) [spring-boot-1.3.2.RELEASE.jar:1.3.2.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) [spring-boot-1.3.2.RELEASE.jar:1.3.2.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1191) [spring-boot-1.3.2.RELEASE.jar:1.3.2.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1180) [spring-boot-1.3.2.RELEASE.jar:1.3.2.RELEASE]
at com.newssystem.lab.NewsSystemApplication.main(NewsSystemApplication.java:18) [main/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_25]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_25]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_25]
at java.lang.reflect.Method.invoke(Method.java:483) ~[na:1.8.0_25]
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144) [idea_rt.jar:na]
Caused by: org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type [com.newssystem.lab.dao.Dao] is defined: expected single matching bean but found 2: newsDao,commentDao
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1126) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1014) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:813) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741) ~[spring-beans-4.2.4.RELEASE.jar:4.2.4.RELEASE]
... 24 common frames omitted
You must use the @Qualifier annotation to define which bean will be injected into your service. In Spring an interface can have many implementations, but if none of those implementations has the @Primary annotation, Spring does not know which one to pick automatically.
@Autowired
@Qualifier("bean-name")
I did this: @Autowired @Qualifier("DaoService") private final DaoService newsService; but it's still not working.
Is this name defined for the bean class?
What do you mean exactly?
As you can see in your log file, there are 2 DAOs, so you must tell your service which one should be injected using @Qualifier
For example @Qualifier("newsDao")
In my Service class yes?
That's it: you must tell your service which DAO to inject. Another option is using @Resource, which injects a bean by name, where the attribute name is the name of the bean
I don't know exactly; I put this in my Service.java class but it's still not working. Please tell me where I should put this annotation in the code in my question?
As you are making the injection by constructor, you should put it there. DaoService is the class that receives your repository
public DaoService(@Qualifier("newsDao") Dao<T, Long> dao)
OK, I'll try this, but what about CommentDao?
You must have an implementation for each service, and that is where you put the qualifier. Got it?
No, I don't get it. I create one service for many table objects.
When I create another model class I want to use the same service.
Offhand, I do not know if we can dynamically create many instances of the same service (services are singleton-scoped) and set the qualifier on each. Conceptually, a service should hold all your business rules; what you're trying to build is just a bridge to your repository
So I should create a service for each model class? Is there no shorter way using generic types?
That's it... For now I can't remember a better way
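Putting the advice in this thread together, the wiring might look like the sketch below: one concrete service per DAO, each naming its bean with @Qualifier. The class names (NewsService, CommentService) and the generic Dao type are illustrative, based on the discussion above, not actual code from the project:

```java
// Sketch only: each service names the DAO bean it wants, so Spring no
// longer has to choose between newsDao and commentDao on its own.
@Service
class NewsService {
    private final Dao<News, Long> dao;

    @Autowired
    public NewsService(@Qualifier("newsDao") Dao<News, Long> dao) {
        this.dao = dao;
    }
}

@Service
class CommentService {
    private final Dao<Comment, Long> dao;

    @Autowired
    public CommentService(@Qualifier("commentDao") Dao<Comment, Long> dao) {
        this.dao = dao;
    }
}
```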
Foreign key refering non primary-key column on parent table
Let's say we have 2 tables Parent(col1, col2, col3) and Child(col4, col5, col6). What is the requirement on Parent's columns to have a FOREIGN KEY (col4, col5) REFERENCES Parent(col1, col2) ?
I remember reading that a PRIMARY KEY on (col1, col2) is not required. In SQL Server the only requirement is that (col1, col2) has a unique constraint on it. How about Oracle ?
Try and see. (Should have known by now.)
Or look at the documentation; the first sentence describes "a relationship between that foreign key and a specified primary or unique key". (I believe that's part of the SQL standard)
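So for Oracle the answer is the same as for SQL Server: the referenced columns need a UNIQUE (or PRIMARY KEY) constraint. A sketch using the tables from the question (constraint names and column types are illustrative):

```sql
-- A UNIQUE constraint on (col1, col2) is enough for the composite
-- foreign key; no PRIMARY KEY on those columns is needed.
CREATE TABLE parent (
  col1 NUMBER,
  col2 NUMBER,
  col3 NUMBER,
  CONSTRAINT parent_uq UNIQUE (col1, col2)
);

CREATE TABLE child (
  col4 NUMBER,
  col5 NUMBER,
  col6 NUMBER,
  CONSTRAINT child_fk FOREIGN KEY (col4, col5)
    REFERENCES parent (col1, col2)
);
```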
Can I send c2dm push messages to an android device in which the primary account is the role sender account
I developed an app which uses c2dm. It's implemented at
http://www.controlyourandroid.appspot.com
It's working perfectly when I try it in an emulator or on my friend's device. But when I use my own Android phone (HTC Wildfire, Android 2.2.1), I'm not receiving C2DM messages, and I don't know why.
The registration is successful and I'm getting a registration ID on my device.
Is it because the primary account on my phone and the role sender account that I'm using for the app are the same?
Thanks
As I recall the answer is no. This may change as C2DM is still a beta project (as far as I know). We are using C2DM in one of our Android applications, and we ran into problems using the role sender account on a device. There may be a workaround, but we just created a dedicated gmail email address to use with the C2DM service.
WooCommerce - Checking if my first variable subscription item is sold
I have two Questions:
Question 1:
I am using the "WooCommerce Product Bundles" and "WooCommerce Subscriptions" plugins and want to restrict my FIRST variable subscription item so it can be sold to only one customer. In other words, no other customer can buy it if one customer is already subscribed to it.
Question 2:
Also, how do I check in a custom loop whether the customer is subscribed, so I can exclude that product from my archive product page?
Thanks!
When a customer selects a subscription and checks out, an order is generated (post_type => 'shop_order') with a related subscription (post_type => 'shop_subscription'). When this order is paid you can use the woocommerce_thankyou hook to detect this item on the order and then exclude the product (see this example of different usage for this hook). With a foreach loop you can get the items (products) in the order… so you can exclude an item from the shop (each order/subscription is related to a customer)…
Hi, I don't want to exclude the entire product; I only want to restrict the FIRST "variable subscription" ITEM from the list. Since I am using "WooCommerce Product Bundles" I have some other items attached to the product, but I only want to restrict the first item if it is already purchased by someone.
I haven't used WooCommerce Product Bundles yet… so I can't really tell. You have to look in your database to see how data is arranged for your bundled product (in wp_post and wp_postmeta) and for your order/subscription too. Or alternatively use the var_dump() function on the $product, $order and $subscription objects. This way you will find how you can use foreach loops to access the data you need.
Maintain Markup Format with Highlight.js
I am attempting to display dynamically generated HTML on my web page and do highlighting/formatting using highlight.js. I have the highlighting working correctly, however the indentation is not correct. Here's the jsFiddle.
The code shows like this:
<div class="parent">parentContent<div class="child">childContent</div><div class="child">childContent</div><div class="child">childContent</div></div>
whereas I'd like it to show up as it would in an IDE:
<div class="parent">
parentContent
<div class="child">
childContent
</div>
<div class="child">
childContent
</div>
<div class="child">
childContent
</div>
</div>
I understand it's called highlight.js not format.js :) but I thought it was possible and I haven't had much luck getting an answer from the API. I have tried configuring line breaks via hljs.configure({ useBR: true }); and the fixMarkup('value') looked promising but I have not implemented it with any success.
Hear me out. I know this might seem kludgey, but you can put your html together as a string, like so:
var html = '';
for (var i = 0; i < 3; i++) {
    html += '<div class="parent">' +
        '\n\tparentContent';
    for (var j = 0; j < 3; j++) {
        html += '\n\t<div class="child">childContent</div>';
    }
    html += '\n</div>\n';
}
$('.grid-container')[0].innerHTML += html;
This gives you full control of the white space. It's also probably faster because you're not appending to the DOM several times, just once. You only trigger a redraw one time when you set the innerHTML of .grid-container.
JSFiddle here: https://jsfiddle.net/dgrundel/fjLwa592/1/
I hadn't considered manually adding the tabs, it works for my purposes thank you.
You know what they say, "when you want something done right...". :)
Is there a .Net (prefer F# or C#) implementation of the Hilbert-Huang Transform?
Hilbert-Huang Transform, Empirical Mode Decomposition...
I have found it implemented in R and Matlab. I'd like to find an open source implementation of it in C#/F#/.NET.
The amount of quality open source numerical code for .NET is tiny. I struggled to find a decent FFT only a couple of years ago. So I seriously doubt you'll find a decent existing implementation of this algorithm because it is pretty obscure!
Your best bet is to build a Hilbert-Huang Transform in terms of an FFT (like the one from either of my F# books or the F#.NET Journal articles) which is, I guess, what you did in MATLAB and R?
I'm curious why you would want this though? It doesn't look very compelling to me...
I didn't do it in MATLAB or R; I just found something that someone else had done. And it's quite different from the FFT (similar in purpose, yes, very different in approach), which is one of the reasons it's interesting to me. For the things I'm working with, the FFT has various significant issues.
@taotree: I meant that I suspect the R and MATLAB code you found uses the FFT to compute the Hilbert transform. You might appreciate some of the alternative techniques I described in the third chapter of my PhD thesis: http://www.ffconsultancy.com/free/thesis.pdf
I appreciate your response. It's not a Hilbert transform, and as far as I know an FFT would not be used in its calculation.
I skimmed through your thesis, it looks interesting. It appears you suggest standard wavelet analysis but propose a new mother wavelet?
@taotree: Standard complex-valued continuous wavelet transform with a new mother wavelet and new quantifications of instantaneous frequency and amplitude from the results.
@JonHarrop The R implementation of HHT that I have seen definitely do not use FFT. The Hilbert transform is done once the IMFs have been created. Huang used splines to decompose the signal.
@IanThompson: That's very interesting, thanks. Do you know what the motivation is for not using an FFT?
@JonHarrop Not sure if I can do the answer justice in a comment, but here goes: Huang was interested in processing data measuring turbulence. A data signal representing turbulence is not one with smooth linear transitions, and most decomposition methods including FFT do not deal well with non-linear transitions. To FFT, these transitions appear as noise. However, with turbulence data, it was the sudden transitions in the signal that Huang was interested in, so he set out to develop another method of decomposition that was closer to continuous than discrete ... continued.
@JonHarrop ... so as to capture those non-linear transitions in the signal. The decomposition first decomposes the signal into IMFs (see the wikipedia article for a full explanation), after which the Hilbert transform is applied to those IMFs. Huang's method was to form a cubic spline through the local maxima and minima of the signal, take the mean of splines and subtract the original signal. This worked well for Huang's application on turbulence data, but presents challenges for continuous signal processing due to problems with cubic splines. If I get time I will do a Q&A on this later.
Here's my implementation of the Hilbert transform from Matlab. I've done some comparisons with Matlab's output and this code seems to produce identical answers, but I have not done any kind of extensive testing.
This uses the publicly-available MathNet library to do the FFT/iFFT calculations.
public static Complex[] MatlabHilbert(double[] xr)
{
var fft = new MathNet.Numerics.IntegralTransforms.Algorithms.DiscreteFourierTransform();
var x = (from sample in xr select new Complex(sample, 0)).ToArray();
fft.BluesteinForward(x, FourierOptions.Default);
var h = new double[x.Length];
var fftLengthIsOdd = (x.Length & 1) == 1;
if (fftLengthIsOdd)
{
h[0] = 1;
for (var i = 1; i < (xr.Length + 1) / 2; i++) h[i] = 2;
}
else
{
h[0] = 1;
h[(xr.Length / 2)] = 1;
for (var i = 1; i < xr.Length / 2; i++) h[i] = 2;
}
for (var i = 0; i < x.Length; i++) x[i] *= h[i];
fft.BluesteinInverse(x, FourierOptions.Default);
return x;
}
Dave, is this an implementation of the Hilbert-Huang transform (HHT)?
@tofutim No, it's not. See this: http://en.wikipedia.org/wiki/Hilbert%E2%80%93Huang_transform
Median for a sequency with missing values in R
I'm trying to write a median-like function for vectors with even and odd numbers of observations (the calculation method differs between the two). I need to include an option for missing values as well. How can I do this? I need to do it without using the ready-made median(x) in R. Is that possible? Below is what I tried:
x <- c(1, 2, 3)
mediana <- function(x, na.rm = FALSE) {
  x <- x[!is.na(x)]
  return(median(x))
}
For the odd-length case: if (length(x) %% 2) x[length(x) %/% 2 + 1], and a similar expression for the even case, except you average the two middle values.
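Putting that hint together with the NA handling, a manual median might look like this (a sketch; mediana is an illustrative name, and base median's behavior of returning NA when na.rm = FALSE and NAs are present is mimicked):

```r
mediana <- function(x, na.rm = FALSE) {
  if (na.rm) x <- x[!is.na(x)] else if (any(is.na(x))) return(NA)
  x <- sort(x)
  n <- length(x)
  if (n %% 2) x[n %/% 2 + 1] else (x[n / 2] + x[n / 2 + 1]) / 2
}

mediana(c(1, 2, 3))                 # 2
mediana(c(1, 2, 3, 4))              # 2.5
mediana(c(1, NA, 3), na.rm = TRUE)  # 2
```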
What's wrong with using median for the final step (as in your post)? Homework?
Eclipse is not recognizing Javadoc images properties in Eclipse
Really new to java here... I'm trying to embed Javadoc images in Eclipse. I have the image stored in a "doc-files" folder inside the package that will reference it. However when I add the following line:
<img src="doc-files/MyImage.png" width="250" alt="description">
Eclipse comes back saying "The word 'src' is not correctly spelled" and provides 5 fixes for it, none of which make sense. Is one of my Javadocs core components missing? My version of Eclipse is 4.9.0. Many thanks.
Your HTML line displayed in your post does work in NetBeans but only if you create the doc-files directory (folder) directly within the directory that contains the source files (.java files) for your project application and then place your image files within that doc-files directory. This image will most likely not show within your IDE's JavaDoc display. Instead a generic image icon will be displayed. It will however be displayed within the actual JavaDoc viewed from your Web-Browser.
This should be a bug report with a patch rather than a Stack Overflow question. Adding 'src' to the dictionary can be used as a workaround. By the way, you are using a pretty old Eclipse version; the current version is Eclipse 2019-12 (4.14).
I see, thank you both. I'll look into perhaps switching IDE's or at least upgrading Eclipse...
Log4j2 Automatically Log all console outputs
Is there any way to configure the log4j2 properties file to have it automatically log all console output? The current legacy program I am working on has a lot of console output, and I would like to set up log4j2 to automatically log this output to a file if possible. Without using System.setOut, that is.
Thank you!
Try to give as much information as possible. Are you using any frameworks? Spring Boot for instance? Properties file or YAML?
Yes, you can. System.out is an OutputStream. You can set System.out to an OutputStream you control. You can then log all the data coming in to that and send it to files using the logging configuration. However, if you do that you must not use the Console appender or you will end up in an endless loop.
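The redirection this answer describes can be sketched in plain Java. There is no Log4j2 dependency here; in practice each captured chunk would be handed to a logger rather than a buffer, and the method name captureConsole is illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class Main {
    // Redirect System.out while `body` runs and return everything it wrote.
    // A real setup would forward the text to a Log4j2 logger instead of a
    // ByteArrayOutputStream; this only demonstrates the redirection itself.
    static String captureConsole(Runnable body) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer, true));
        try {
            body.run();
        } finally {
            // Restore the real stream; keeping the logger's Console appender
            // off this path is how you avoid the endless loop mentioned above.
            System.setOut(original);
        }
        return buffer.toString();
    }

    public static void main(String[] args) {
        String logged = captureConsole(() -> System.out.println("legacy output"));
        System.out.print("captured: " + logged);
    }
}
```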
How to display recent posts on a website and build your own search engine with PHP?
On an OLX-like website, any time a user posts a new ad, it appears at the top of the page.
No search is involved in this.
Which technology can we use to build such functionality?
Suppose X is visiting my website abc.com.
Should I query the database each time and generate the page dynamically, or is there a better approach?
Question 2: Suppose X is searching for an ad on my website abc.com.
Should I run an SQL query on my database every time?
Please guide me.
Basically you would have to generate it dynamically of course. Though in this case you could cache the page or parts of it. For the search part you would most likely want to use a search engine like Elasticsearch or Apache Solr.
Usually you have to query DB every time you render a page.
Q1 - Yes, you just have to select all items ordered by created date (well, you don't want to select ALL, but only how much you need for current page - see 'Pagination')
Q2 - Yes. Same as above. This might not work exactly as you want, because SQL databases are not search engines. In an SQL DB you can only do simple search and filter. If you need better search methods (search even when typos are present, ranking results, similar items, etc.) then you need a search engine (Apache Solr, for example).
Don't worry about querying the database; it's designed for this :) Start worrying about performance when you really feel a page is running slow. When that happens you can start thinking about using a cache to store some database query results.
EDIT: it's best to use a PHP framework to help you with some of this ;)
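For Q1, the "select items ordered by created date" with pagination might look like this (a sketch; the table and column names are illustrative, and LIMIT/OFFSET is MySQL-style syntax):

```sql
-- Newest ads first, fetching one page of 20 rows.
SELECT id, title, created_at
FROM ads
ORDER BY created_at DESC
LIMIT 20 OFFSET 0;
```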
Find two binary numbers with a multiplication of a specific form
Not sure if this is the best place to ask this question but I tried to search online, wasn't able to find anything useful.
Consider binary numbers of the form $$x=0|y|0|y$$
where $a|b$ means concatenate the bits of $a$ with the bits of $b$, $y$ is any sequence of bits of size $n$.
Is it possible to find three binary numbers $s_1,s_2,s\neq 0$ of the previous form such that $s_1.s_2=s$ where $.$ is just the plain multiplication operator? If not why?
Obviously, all of $s_1,s_2,s$ need to be of size $\leq 2n+2$ bits
Thanks!
Observe that $x = (2^{n+1}+1)y$, since the leading zero makes each half of the concatenation $n+1$ bits wide. If you consider $2^{n+1}+1$ to be of the desired form (take $y$ to be $1$ padded to $n$ bits), then we're done. Otherwise, this factorization places some constraints on your desired $s$.
This will require $y$ to be of size greater than $n$ though, since it needs to be of that form too
@Shadowfirex: no. Consider $y=11_2=3_{10}$. Then $x=11011_2=27_{10}=3\cdot 9$
Sorry, I noticed the question said "two numbers"; I edited that. I meant that all of $s_1,s_2,s$ are of that form; for $y=11_2$ that's not the case.
@Shadowfirex: and how about $y=101_2=5_{10}, x=1010101_2=85_{10}=5\cdot 17$
Oh wow, thanks a lot!
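For anyone who wants to check these examples numerically, a quick Python sketch (my own, not from the thread) confirms that $85 = 5\cdot 17$ is a product of three numbers of the form $0|y|0|y$, and verifies the identity $x=(2^{n+1}+1)y$:

```python
def from_y(y_bits: str) -> int:
    """Value of 0|y|0|y for an n-bit string y: equals (2**(n+1) + 1) * y."""
    return int("0" + y_bits + "0" + y_bits, 2)

# The comment's example: s1 = 0|1|0|1, s2 = 0|001|0|001, s = 0|101|0|101.
s1, s2, s = from_y("1"), from_y("001"), from_y("101")
assert (s1, s2, s) == (5, 17, 85)
assert s1 * s2 == s

# The identity x = (2**(n+1) + 1) * y for a few choices of y:
for y_bits in ("11", "101", "110111"):
    n, y = len(y_bits), int(y_bits, 2)
    assert from_y(y_bits) == (2 ** (n + 1) + 1) * y
```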
Magento 2 Restrict SKU Characters
I am using Magento 2.2.2.
I want to restrict the number of SKU characters that can be entered on the admin product detail page. How can I achieve this?
Try updating your eav_attribute table:
UPDATE eav_attribute SET frontend_class = "validate-length maximum-length-11" where attribute_code = 'sku'
How can I fix this at the code level, so that whenever we edit the product it won't raise the error "Please enter less or equal than 64 symbols."?
Nice answer mohan.
Git or svn on windows server 2003
I was wondering if anyone has any experience getting a Git server working on Windows Server 2003? Is there an easy solution to get it up and running? If not, are there any good alternatives out there?
It ideally has to be free but willing to pay. It is only for a small team (3 users) but we desperately need source control!
We need to host it on our Windows server, as this is where our sites/code are kept!
I have previous experience with Git but not with setting up my own server, so a pointer in the right direction would be great.
Thanks.
Why does it matter that the site is kept on the same server? Why not use bitbucket private repos? Free for teams smaller than 5.
At the moment we have a development and a live site as expected. How would we (from git) update the relevant site when the code is actually hosted on the bit bucket site?
ftp, sftp, ssh, any program that can "synchronize" folders. Develop locally, push your code to test, test, push your code to production.
@BenThomas What did you end up using? I was going to try installing GitHub for Windows on Windows Server 2003 (used by a client) but it appears that it may not be supported. I was going to try anyway but I'm hesitant as I only have a live production server to work with. Did you find solution you're happy with?
From my experience, the easiest way to get a Git server on a Windows machine is to run a virtual Linux server, and then use something like Gitolite. There's GitStack which is a proprietary package that allows two users for free, but charges for more than that.
I'm more familiar with the Subversion servers for Windows. Maybe someone more familiar with Git can explain the Git on Windows server options.
Meanwhile, here are just a few of the Subversion on Windows options.:
SVNSERVE
Subversion comes with svnserve, which is a lightweight server that uses its own protocol. It's fast and easy to install on Windows as a Windows service.
HTTP
There are two well known GUI Windows Subversion servers out there: VisualSVN and UberSVN. Both of these are proprietary servers, but are free as long as you don't use any advanced features (like tying your repository to LDAP). Both have GUI interfaces which make it quick and easy to setup.
If you're more of a brave soul, you can get CollabNet's Subversion Edge package, which is entirely open source. It's a bit more work, but it's completely free. CollabNet gives you an Apache httpd server that's compiled to work as a Subversion server with all the required modules. Setup is a bit more complex than the others, but it's not something a developer can't easily do. The big advantage is that you can do things like integrate LDAP or Active Directory or use secure certificates (https), so your Subversion commits and updates aren't sent as plain text over the network, without paying extra. Just be prepared to open the back and scout around for user-serviceable parts.
Then again, you can share Git repositories back and forth with each other without even having a server. You can use things like email and Dropbox to share updates to the repository with each other. It's one of the big advantages Git has, since it is a distributed version control system.
scrapy shell testing xpath invalid syntax
I am trying to retrieve reviews from TripAdvisor, and instead of writing code I decided to use the shell that Scrapy comes with. While I was testing this XPath:
response.xpath(//div[@id="REVIEWS"]/a/text())
I am getting an invalid syntax error.
Try
response.xpath('//div[@id="REVIEWS"]/a/text()')
This works for the most part, but the array length is 0, even though looking through Google Chrome's inspect element it seems like it should not be 0.
Unable to retrieve the result set from database the second time i query it from my application
I am developing an application using Hibernate + Spring MVC. At the login page, I simply query a user table and validate whether the user is valid or not. It works smoothly with my Spring and Hibernate configuration. But the problem is that once I create a session factory, the next time I try to validate, it doesn't fetch the user details. The code runs fine and no error is thrown. I tried disabling the second-level cache too, but the problem still exists.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"></property>
<property name="url" value="jdbc:oracle:thin:@//localhost:1521/XE"></property>
<property name="username" value="testdata"></property>
<property name="password" value="testdata"></property>
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource"></property>
</bean>
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource"></property>
<property name="packagesToScan" value="org.test.model"></property>
<property name="hibernateProperties">
<props>
<prop key="dialect">org.hibernate.dialect.OracleDialect</prop>
<prop key="hibernate.cache.use_second_level_cache">false</prop>
</props>
</property>
</bean>
package org.test.dao.impl;
import javax.transaction.Transaction;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.test.dao.DaoMethods;
import org.test.model.Sla_Users;
@Repository
public class DaoMethodsImpl implements DaoMethods{
@Autowired
private SessionFactory sessionFactory;
public SessionFactory getSessionFactory() {
return sessionFactory;
}
public void setSessionFactory(SessionFactory sessionFactory) {
this.sessionFactory = sessionFactory;
}
@Override
public int getUsersCount() {
System.out.println("INSIDE REPO->GETUSERSCOUNT"+getSessionFactory());
String sql = "select count(*) from Sla_Users";
Query query = getSessionFactory().openSession().createQuery(sql);
return ((Long) query.uniqueResult()).intValue();
}
@Override
public Sla_Users getUser(String username) {
System.out.println("INSIDE REPO->GETUSERPASSWORD"+getSessionFactory());
Session validateSession = this.getSessionFactory().openSession();
System.out.println(validateSession);
Sla_Users user = null;
try
{
user= (Sla_Users) validateSession.get(Sla_Users.class, username);
}
catch(Exception ex)
{
System.out.println("exception fetching user at login: "+ex);
}
finally
{
validateSession.close();
this.getSessionFactory().close();
System.out.println(validateSession+":"+getSessionFactory());
}
//Sla_Users user = (Sla_Users)this.getSessionFactory().openSession().get(Sla_Users.class, username);
return user;
}
}
That is the XML I used. I know I'm missing some basic part of Hibernate, since this happens for all the values I retrieve in my application. The first time the session factory is created, everything goes as planned, but the next time I query the DB... boom. The saddest thing is that I don't even get an exception; the code just stops running (waiting for a response from the DB).
Help me out guys.
Can you provide the code you use to query the database? One awkward thing is the presence of a JdbcTemplate in your Spring config.
@benzonico: Please check it now. I declared the template at one point but didn't use it; I just left it there (it was invisible). I'll gladly take it out, but I don't think it would cause any problem. Would it?
You are closing the session factory. See http://stackoverflow.com/questions/4699381/best-way-to-inject-hibernate-session-by-spring-3
Thanks for the responses. I finally figured out the reason.
It was not due to the session. I was not returning the connection to the pool once I opened it, which was starving the connection pool.
Once i added the following to hibernate properties in my spring xml, everything was working great.
<prop key="hibernate.connection.release_mode">after_transaction</prop>
What is happening is that you are opening your session and not closing it, so its connection is never returned to the pool. So when you make your second request, you are waiting for the first session to be closed in order to open a new one.
What you should do is either use sessionFactory.getCurrentSession() or close the session at the end of your requests using session.close().
Moreover you should not close the sessionFactory as mentioned in the comment made by @Jukka. Therefore you have to remove the line this.getSessionFactory().close();
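Putting those two points together, a sketch of the corrected getUser follows (same Hibernate API as the question's code; this fragment is not runnable on its own without a configured SessionFactory):

```java
@Override
public Sla_Users getUser(String username) {
    Session session = getSessionFactory().openSession();
    try {
        return (Sla_Users) session.get(Sla_Users.class, username);
    } finally {
        // Close the session so its connection goes back to the pool,
        // but never close the SessionFactory itself here.
        session.close();
    }
}
```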
Hi benzonico, thank you so much for answering. The problem is that the application has queries that hit the database often. I was thinking that some kind of caching in Hibernate was stopping the queries from hitting the DB again and again. I will give you a small sample of the problem: when I log in, the console shows the log of retrieving the user details from the DB. The second time, it works fine in the same way. But the third time, it hits the DB and the console doesn't show anything, and the application just keeps loading with nothing retrieved from the DB. I will try your solution, but feel free to advise further.
AMPscript for adding the Order Number to an email subject
I am trying to add the order number to an email subject and I don't know how. Plus, I am really new to the AMPscript world.
I want my email subject to be like: ''Your #OrderID is on the go''
For the moment I went here and chose the Data Extension and then the field that I wanted to be mapped.
Can you please advise?
And did it work?
I'd recommend adding an AMPscript block at beginning of your email body and setting the subject there:
%%[
var @subject
var @idCommanda
set @idCommanda = AttributeValue("idCommanda") /* check if this send context attribute has a value */
if not empty(@idCommanda) then
set @subject = concat("Comanda ta #", @idCommanda, " fost expidiata")
else
set @subject = concat("Comanda ta fost expidiata")
endif
]%%
Then in the Subject field:
%%=v(@subject)=%%
Works perfectly. Thank you so much.
jquery css dock menu not working in ASP.NET masterpage
I'm using the CSS dock menu in my ASP.NET web app. I've used it in my master page, but it doesn't work and I get a strange error:
$('#dock').Fisheye is not a function
it is my document ready function:
<script type="text/javascript" src="js/jquery.js"></script>
<script type="text/javascript" src="js/interface.js"></script>
<link href="dock-menu.css" rel="stylesheet" type="text/css" />
$(document).ready(function() {
try
{
$('#dock').Fisheye(
{
maxWidth: 50,
items: 'a',
itemsText: 'span',
container: '.dock-container',
itemWidth: 40,
proximity: 90,
halign: 'center'
}
)
}
catch (ex) {
}
$('#scrollbar1').tinyscrollbar();
});
and this is my HTML containing the dock object:
<div class="dock" id="dock">
<div class="dock-container">
<a class="dock-item" href="/site/fa/DepartmentsNews.html" title="اخبار واحدهای سازمانی" alt="اخبار واحدهای سازمانی"><span>اخبار واحدهای سازمانی</span></a>
<a class="dock-item" href="/site/fa/ReportsCommunities.html" title="گزارش مجامع" alt="گزارش مجامع"><span>گزارش مجامع</span></a>
<a class="dock-item" href="/site/fa/FinancialReports.html" title="گزارشات مالی" alt="گزارشات مالی"><span>گزارشات مالی</span></a>
</div>
</div>
what is going wrong here?
have you referenced the script on your page for fisheye?
yes I've included js files containing this function, what do you mean exactly?
I mean to say that there will be a script file which contains the definition of fisheye function. So did you reference that file before the code you have written above?
yes I've included this js file
can you show your jquery includes and the html for the menu?
I've added some code to my question
You are missing the fisheye.js file; you need to download it and include it. Just a guess, but your JS code is in interface.js, right? Then fisheye needs to be included before that file.
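For reference, the include order would look something like the sketch below (file names vary by distribution; some builds define Fisheye inside interface.js, others ship a separate fisheye file, so treat the middle line as illustrative):

```html
<!-- jQuery first, then the plugin that defines Fisheye, then your own code -->
<script type="text/javascript" src="js/jquery.js"></script>
<script type="text/javascript" src="js/fisheye-iutil.min.js"></script>
<script type="text/javascript" src="js/interface.js"></script>
```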
How to use JavaScript code in angular to create new folders?
I am trying to create new folders into my Angular project files (src/assets/images/NewFolder) based on the user input.
I have tried to add a JavaScript file in my assets folder:
var fs = require('fs');
fs.mkdirSync('d:/stuff');
and then added the reference in angular.json file:
"scripts": [
"src/assets/test.js"
]
Then I need to run this in my TS file, but I don't know how; I don't have a function in the JavaScript file above.
Is this within Angular logic running on the client? If so, then you will not be able to update the file system on the host machine in any way - for security reasons.
I have property webpage and I have a button to add new property and there I am adding images for that new property. I want to create new folder with the property name and add these images in the new folder. is there a way to create folders in angular other than this way?
I would strongly suggest you use a database/server side storage for this. Putting them on the client side is not a workable solution.
Move elements in a List?
Consider: ['A', 'B', 'C']
I'd like to do
list.move("B", -1);
list.move("A", 1);
resulting in ["B", "C", "A"].
Is there some sort of one-liner to move items in a list in Dart? Obviously I could do this manually in a few lines; just curious if there is an easier way.
There is indeed no built-in way to do this.
Since the list keeps the same length, the most efficient approach would move the elements in-place rather than remove an element and then insert it again. The latter requires moving all later elements as well, where doing it in-place only requires moving the elements between from and to.
Example:
extension MoveElement<T> on List<T> {
void move(T element, int offset) {
var from = indexOf(element);
if (from < 0) return; // Or throw, whatever you want.
var to = from + offset;
// Check to position is valid. Or cap it at 0/length - 1.
RangeError.checkValidIndex(to, this, "target position", length);
element = this[from];
if (from < to) {
this.setRange(from, to, this, from + 1);
} else {
this.setRange(to + 1, from + 1, this, to);
}
this[to] = element;
}
}
I would probably prefer a more general "move" function like:
extension MoveElement<T> on List<T> {
void move(int from, int to) {
RangeError.checkValidIndex(from, this, "from", length);
RangeError.checkValidIndex(to, this, "to", length);
var element = this[from];
if (from < to) {
this.setRange(from, to, this, from + 1);
} else {
this.setRange(to + 1, from + 1, this, to);
}
this[to] = element;
}
}
and then build the "find element, then move it relative to where it was" on top of that.
A one-liner? I don't think there is any. In any case, here is some sample code:
extension on List<String> {
void move(String element, int shift) {
if (contains(element)) {
final curPos = indexOf(element);
final newPos = curPos + shift;
if (newPos >= 0 && newPos < length) {
removeAt(curPos);
insert(newPos, element);
}
}
}
}
void main(List<String> args) {
var list = ['A', 'B', 'C', 'D', 'E', 'F'];
print(list);
list.move('B', 2);
print(list);
list.move('B', -3);
print(list);
}
Result:
[A, B, C, D, E, F]
[A, C, D, B, E, F]
[B, A, C, D, E, F]
Marking as correct as I think this correctly answers that no, there is no shortcut way to do this. Thx for the snippet!
'For' loop in tidyverse filtering
I'm doing a project for university, working on the dataset "elo_blatter" from the fivethirtyeight library. I want to print the statistics for each football confederation, so I wanted to use a for loop.
My code looks good but it doesn't work.
confs <- c('AFC','CAF','CONCACAF','CONMEBOL','OFC','UEFA')
for (i in 1:6) {
eb %>% filter(confederation==confs[i]) %>%
select(elo98, elo15, gdp06, popu06) %>% summary()
}
I'd like to get following results:
elo98 elo15 gdp06 popu06
Min. : 631 Min. : 611 Min. : 1076 Min. :1.584e+05
1st Qu.:1012 1st Qu.: 964 1st Qu.: 3264 1st Qu.:3.575e+06
Median :1262 Median :1225 Median : 6382 Median :1.933e+07
Mean :1249 Mean :1211 Mean : 20526 Mean :8.444e+07
3rd Qu.:1515 3rd Qu.:1480 3rd Qu.: 31676 3rd Qu.:4.329e+07
Max. :1787 Max. :1755 Max. :118199 Max. :1.311e+09
NA's :1
elo98 elo15 gdp06 popu06
Min. : 891 Min. : 788 Min. : 457.8 Min. : 84600
1st Qu.:1205 1st Qu.:1266 1st Qu.: 1173.4 1st Qu.: 2349126
Median :1366 Median :1372 Median : 2025.3 Median : 9729954
Mean :1350 Mean :1361 Mean : 4469.1 Mean : 17253651
3rd Qu.:1484 3rd Qu.:1512 3rd Qu.: 4926.3 3rd Qu.: 18772579
Max. :1723 Max. :1737 Max. :30740.7 Max. :143314909
NA's :1
And so on for each confederation. Has anybody some idea why it doesn't work?
In general, try to be more specific than "doesn't work". Is there an error? A wrong result? What's wrong? In this case, I think all your calculations are probably working fine, but non-explicit printing is disabled inside for loops, and you don't assign anything. Perhaps try sticking %>% print on the end of your pipeline.
Damn, it's working. It's all because of I didn't use print function. Thanks a lot!
how to prevent boot if a drive is not detected at bootup
I've got two drives in RAID1 that serve as the root partition on my workstation. Sometimes (1 out of 10) one of the drives is not detected, and thus not included in the raid. As the drive is not detected at all I cannot reattach it that time (and I may not even notice it, at least not right away).
Once, after such a time, I rebooted and the kernel used the previously undetected drive for md0. After that I ran mdadm --add and it just re-added the other half of the array instead of reconstructing it, so I had an inconsistent array (I ran a check and mismatch_cnt came up with more than 300k blocks!). In the end, I failed one of the drives and then re-attached it, thus doing the reconstruction that should have been done in the first place.
To avoid such situations in the future, I want to detect such errors as early as possible: in GRUB or during boot, before the root file system is mounted. This is a desktop, so turning off and on again in such case is not that much of a problem. Also, I am willing to take manual measures (i.e. booting from a live CD to remove the boot block) if the disks really fail.
Is there a grub/kernel parameter that prevents boot in case of a raid disk's absence? Or the only way to do so is to modify the init scripts on the initramfs?
But why? The goal of RAID1 is exactly to boot (continue to operate) when one device has failed.
I think you're looking for LVM... it's a logical volume manager for the Linux kernel; it manages disk drives and similar mass-storage devices. ( https://wiki.archlinux.org/index.php/LVM )
@MichaelD. I know and use LVM heavily, but do not see how it is relevant here.
@IporSircer This is a rare (but not very rare) temporary error I am speaking about. I do not want to boot if one drive is down, only if I am sure it is permanently down. Otherwise, I must do a reconstruction after every 10th boot. Instead, I would like to do a poweroff/poweron after every 10th boot.
@P.Péter The VG would not come up with a missing PV, i.e. a missing disk, so you would not be able to access the logical volumes in the group.
@MichaelD. Certainly a possibility, but seems to need reformat. Or creating a small fake PV that is somehow required at boot-time.
A last idea would be to update the mainboard's BIOS if using onboard SATA, or update that RAID controller's BIOS and, if possible, the firmware on the hard drives; this can be done without data loss/formatting etc. Good luck.
I managed to solve half of the problem: I simply removed GRUB from the MBR of the drive that usually does not fail to be detected. Now, if that drive has a temporary error at bootup, I will still get a de-synchronised system, but (at least for now), this does not seem to occur often.
Chrome popup window always on top
I have a Chrome app that opens a popup window. I would like it to be able to stay on top, but am currently unable to do so.
The Chat for Google extension opens a popup window that not only stays on top of all windows, but the window itself also appears to have a completely customized appearance. Unfortunately, all the JavaScript in this extension is obfuscated, and I can't make heads or tails of it.
The Chrome API lists the "alwaysOnTop" boolean as part of the Window type, but neither the create nor the update functions allow for changing this property.
It’s a panel type of window. Call chrome.windows.create with a type: 'panel' parameter. This currently only works in the dev and canary channels.
This can be found in the obfuscated code : chrome.windows.create({type:"panel",focused:c,width:d.width,height:d.height,url:a}); (in ace-extension.js, with a similar one in ace-datachannels.js)
It's worth noting that I seem to need the --enable-panels option when I start Chrome, otherwise it will just create a popup window.
The Chat extension seems to create a panel even without the option enabled, but I don't know if that's some special magic because it's Google, or what.
Correct. The flag must be enabled, or you must use the dev or canary channel, or the extension ID must be hardcoded in chrome/browser/ui/panels/panel_manager.cc.
Do we have a timeline for when panels are going to be available to developers?
They must be using some undocumented API for the Talk extension. It works without --enable-panels.
Themes, Styles OR hardcode?
I have an application with 7 activities. Each activity has 1, 2, 3, or 4 root elements inside a main root element. These root elements have a dark background color. I want a light theme in my app, so I want to know which way is better and more professional to change my app theme from dark to light:
Define two themes with my color values for each `root` element? If the answer is yes, how can I define them? Example code?
Define style for each `root` element?
Or simply find my 1, 2, 3 or 4 `root` element and change background color with Options menu setting UI?
Thanks
Themes are styles too, so if you want your whole application to derive from, for example, Holo.Light, you can simply define a style like the one below, change your desired colors and styles for the predefined elements, and then set your app theme in the Manifest:
<style name="MyTheme" parent="@android:style/Theme.Holo.Light">
<item name="progressBarStyle">@android:style/Widget.ProgressBar.Inverse</item>
<item name="progressBarStyleSmall">@android:style/Widget.ProgressBar.Small.Inverse</item>
<item name="progressBarStyleLarge">@android:style/Widget.ProgressBar.Large.Inverse</item>
</style>
But if you want to set a background color for your activities and change it at runtime or by clicking a menu item, I think the better approach is to define two styles for your root elements (the outermost elements in your activity layouts) and define the background in those styles, like:
<style name="LinearLayoutLight">
<item name="android:background">@color/your_light_color</item>
</style>
<style name="LinearLayoutDark">
<item name="android:background">@color/your_dark_color</item>
</style>
@SamanMoradiZiarani: dear Saman, thanks for accepting. For your first question, you define a style in styles.xml, then in your activity layout use style="@style/yourstyle" on your root element. For changing the theme at runtime, you can find nice answers here on Stack Overflow; it's too broad to answer here.
@SamanMoradiZiarani : create another style with background Item , then assign that style to those Elements.
@SamanMoradiZiarani: why do you keep accepting and un-accepting the answer!?
Go: type assertion for maps
I'm reading data structures from JSON. There's a bit of conversion going on, and at the end I have a struct where one of the fields is of type interface{}. It's actually a map, so JSON puts it inside a map[string]interface{}.
I actually know that the underlying structure is map[string]float64 and I would like to use it like that, so I try to do an assertion. The following code reproduces the behaviour:
type T interface{}
func jsonMap() T {
result := map[string]interface{}{
"test": 1.2,
}
return T(result)
}
func main() {
res := jsonMap()
myMap := res.(map[string]float64)
fmt.Println(myMap)
}
I get the error:
panic: interface conversion: main.T is map[string]interface {}, not map[string]float64
I can do the following:
func main() {
// A first assertion
res := jsonMap().(map[string]interface{})
myMap := map[string]float64{
"test": res["test"].(float64), // A second assertion
}
fmt.Println(myMap)
}
This works fine, but I find it very ugly since I need to reconstruct the whole map and use two assertions. Is there a correct way to force the first assertion to drop the interface{} and use float64? In other words, what is the correct way to do the original assertion .(map[string]float64)?
Edit:
The actual data I'm parsing looks like this:
[
{"Type":"pos",
"Content":{"x":0.5, "y":0.3}},
{"Type":"vel",
"Content":{"vx": 0.1, "vy": -0.2}}
]
In Go I use a struct and encoding/json in the following way.
type data struct {
Type string
Content interface{}
}
// I read the JSON from a WebSocket connection
_, event, _ := c.ws.ReadMessage()
j := make([]data,0)
json.Unmarshal(event, &j)
So why don't you use map[string]float64 as your fields type instead of interface{}?
Because it's not the only kind of data that I parse from JSON; it can also be a string or an int. I should rephrase: "it can sometimes be a map, and when it is, JSON puts it inside a map[string]interface{}."
Do you know what type it will be before parsing the JSON?
Can you provide sample JSON data and struct to which you are trying to Unmarshal? From your example it's not really clear what you want to achieve.
No, I do not know in advance. @s7anley, I edited my comment to show the actual data.
Type assertion won't do it; you have to actually loop over the map[string]interface{} and create a map with float64 values. That's because map[string]interface{} and map[string]float64 actually have different representations in memory; it can't just treat one as the other. (And encoding/json gives the map interface{} values because Go doesn't know what type the values will be when it creates the map.)
You cannot type assert map[string]interface{} to map[string]float64. You need to manually create new map.
package main
import (
"encoding/json"
"fmt"
)
var exampleResponseData = `{
"Data":[
{
"Type":"pos",
"Content":{
"x":0.5,
"y":0.3
}
},
{
"Type":"vel",
"Content":{
"vx":0.1,
"vy":-0.2
}
}
]
}`
type response struct {
Data []struct {
Type string
Content interface{}
}
}
func main() {
var response response
err := json.Unmarshal([]byte(exampleResponseData), &response)
if err != nil {
fmt.Println("Cannot process not valid json")
}
for i := 0; i < len(response.Data); i++ {
response.Data[i].Content = convertMap(response.Data[i].Content)
}
}
func convertMap(originalMap interface{}) map[string]float64 {
convertedMap := map[string]float64{}
for key, value := range originalMap.(map[string]interface{}) {
convertedMap[key] = value.(float64)
}
return convertedMap
}
Are you sure you cannot define Content as map[string]float64? See example below. If not, how can you know that you can cast it in the first place?
type response struct {
Data []struct {
Type string
Content map[string]float64
}
}
var response response
err := json.Unmarshal([]byte(exampleResponseData), &response)
Yes I think I can do a little bit of plumbing to define Content as map[string]float64. It's gonna save me the new map and it's worth it. Thanks!
Name for the effect where people cause others to fulfill their expectations
I recall hearing a social cognition lecture a number of years ago in which the lecturer described a particular idea that centered around the role of self-fulfilling prophecies in relationships. For example, if I believe that x is a hostile jerk, I'll tend to treat them in a way that makes them more hostile. (A good example of a book that subscribes to this type of a view is Feeling Good Together by David Burns; I'm not sure if he subscribes to the exact theory that I'm trying to remember, though).
I'm pretty sure that there was a specific name for this, but I don't recall what it was.
Can someone help me identify which theory or term this is?
Interesting concept. The only thing that comes to mind is confirmation bias. You prefer to validate your beliefs, instead of getting a positive reaction. However this does not describe the inter-personal behavior that triggers the negative behavior in the other person. It only describes the motivation for it.
I believe what you might be referring to is the Pygmalion effect, an effect in social psychology where high expectations lead to improved performance in a given area: a sort of self-fulfilling prophecy.
An interesting idea, but one which has sparked a lot of criticism over the years. I won't go as far as saying it has been debunked, but you might want to have a look for yourself.
Finding a closed form for the series $\sum_{n=0}^{\infty}\frac{\binom{2n}{n}^3}{64^n(n+1)^k}$ for $k=1,2,3,4$
Context:
This question is related to Calculate $\sum_{n = 0}^\infty \frac{C_n^2}{16^n}$ and Is there a closed form for a give infinite sum?.
We have also:
$$\sum_{n=0}^{\infty}\frac{\binom{2n}{n}^3}{64^n(n+1)^2}=\frac{\Gamma(\frac{1}{4})^4}{2\pi^3}-\frac{96\pi}{\Gamma(\frac{1}{4})^4},\tag{1}$$
$$\sum_{n=0}^{\infty}\frac{\binom{2n}{n}^3}{64^n(n+1)^3}=8-\frac{384\pi}{\Gamma{(\frac{1}{4})}^4}\tag{2}.$$
Both can be obtained in an elementary way (avoiding hypergeometric functions).
I have derived:
$$I=-\frac{32}{\pi^2}\int_{0}^{\pi/2}\int_{0}^{\pi/2}\sqrt{1-\sin^2{(k)}\sin^2{(x)}}dxdk\\=4\sum_{n=0}^{\infty}\frac{\binom{2n}{n}^3}{64^n(n+1)}-\frac{2\Gamma(\frac{1}{4})^4}{\pi^3}.\tag{3}$$
I tried several different ways to compute: $$S=\sum_{n=0}^{\infty}\frac{\binom{2n}{n}^3}{64^n(n+1)}\tag{4}.$$
It seems that is easier than $(1)$ and $(2)$, but it still eludes me.
Updated 1:
Thanks to Mariusz Iwaniuk and KStarGamer we have (see https://functions.wolfram.com/HypergeometricFunctions/Hypergeometric3F2/03/08/05/01/01/06/0002/):
$$I=\int_{0}^{\pi/2}E(\sin({k}))dk=\frac{\Gamma(\frac{3}{4})^4}{2\pi}+\frac{\pi^3}{8\Gamma(\frac{3}{4})^4} ,$$
where $E(k)$ is the complete elliptic integral of the second kind. This integral implies:
$${_3F_2(-1/2,1/2,1/2;1,1;1)}=\frac{\pi}{2\Gamma(3/4)^4}+\frac{2\Gamma(3/4)^4}{\pi^3},$$
and Wolfram and Mathematica are unable to deal with it.
Updated 2:
The natural question is to ask about the closed form of:
$$S'=\sum_{n=0}^{\infty}\frac{\binom{2n}{n}^3}{64^n(n+1)^ 4}={_5F_4(1/2,1/2,1/2,1,1;2,2,2,2;1)}\tag{5},$$
If we find the closed form of $S'$ also we will get the closed form for:
$$I'=\int_{0}^{\pi/2}k\cot(k)K(\sin(k))dk\\=\frac{3\pi^2\log{2}}{4}+\frac{5\Gamma(3/4)^4}{\pi}+\frac{\pi^3}{4\Gamma(3/4)^4}-\pi G+\frac{\pi^2S'}{64}-\frac{3\pi^2}{4} \tag{6},$$
where $K(k)$ is the complete elliptic integral of the first kind and G is Catalan's constant.
Updated 3:
It can be done using $(6)$ also:
$$
I'=\frac{3\pi^2\log{2}}{4}-\frac{\pi^2 S_{2}}{64}-\pi G \tag{7},
$$
where:
$$S_{2}={_5F_4(1,1,3/2,3/2,3/2;2,2,2,2;1)}.$$
As @SetnessRamesory has suggested probably $S'$ and $S_{2}$ don't have easy closed forms. But at least we know from $(6)$ and $(7)$ this nice relation:
$${_5F_4(1/2,1/2,1/2,1,1;2,2,2,2;1)}+{_5F_4(1,1,3/2,3/2,3/2;2,2,2,2;1)}=-\frac{320\Gamma{(3/4)}^4}{\pi^3}-\frac{16\pi}{\Gamma{(3/4)}^4}+48.\tag{8}$$
This question is going to be updated for a while. Thanks for your feedback!
Looks like: $$\sum _{n=0}^{\infty } \frac{\binom{2 n}{n}^3}{64^n (n+1)}=\frac{\pi }{\Gamma \left(\frac{3}{4}\right)^4}-\frac{4 \Gamma \left(\frac{3}{4}\right)^4}{\pi ^3}$$
For what it’s worth, the more general series $$\sum_{n=0}^{\infty}\frac{\binom{2n}{n}^3}{64^n(n+1)}z^n$$ is known.
Thanks @KStarGamer this is what I was looking for.
Please try to choose more descriptive titles in the future. There are a lot of series with binomial coefficients after all.
Thanks @BrunoB I will improve the titles. Best all.
Maple does $S'$ in terms of a Meijer G function:
$$ S' = \pi^{-3/2} \text{MeijerG}([[1],[2,2,2,2]],[[1,1,1/2,1/2,1/2],[]],-1)$$
@RobertIsrael S' is: $$_5F_4(1/2,1/2,1/2,1,1;2,2,2,2;1)$$ maybe Dougall's formula?
We can do term by term integration of the series https://math.stackexchange.com/q/2151053/72031 and arrive at your series.
The cases $k=1,2,3$ can be done directly (even WA can solve them). $k=4$ seems not to have an easier expression.
@SetnessRamesory I've updated the question.
show that $\mathcal{A}_{f}$ is a $\sigma$-algebra
Given:
$S$ is a set and $f: S \to \mathbb{R}$
Problem:
Showing that $\mathcal{A}_f = \{ f^{-1}(B): B \in \mathcal{B}(\mathbb{R}) \}$ is a $\sigma $ - algebra.
My approach:
Try to show all three properties of a $\sigma $ - algebra:
So I would like to show that the following three statements hold:
$\emptyset \in \mathcal{A}_f $ and $S \in \mathcal{A}_f $
$\forall A \in \mathcal{A}_f: A^{c} \in \mathcal{A}_f $
$A_1,A_2,\ldots \in \mathcal{A}_f \implies \bigcup_{n=1}^{\infty} A_n \in \mathcal{A}_f$
I guess I should do something with the measurability of $f$, but I can't figure out what, since I know nothing about the measurability of $f$.
Question:
Can someone tell me how I should tackle this problem?
How can you for example show property (1) in the first place?
Why don't you focus on the problem and show that ${\cal A_f}$ is a $\sigma$-algebra. What do you mean by measurability of $f$???
Just use set-theoretic properties of preimages. For $1.)$ for example we have $\emptyset \in \mathcal{B}(\mathbb{R})$ and thus $$\emptyset =f^{-1}(\emptyset) \in \mathcal{A}_f$$
$f^{-1}(A \cup B) =f^{-1}(A) \cup f^{-1}(B).$ Same logic with the complementary.
@SeverinSchraven thank you. You are getting me in the right direction. I understand that $\emptyset = f^{-1}(\emptyset) \in \mathcal{A}_f$. So how would you then show that $S \in \mathcal{A}_f$. You would like to show that there is some set,say $F$, such that $F \in \mathcal{B}(\mathbb{R})$ such that $f^{-1}(F) =S$ right? How would you do that?
@copper.hat yes I mean the measurability of f
@UBM Thanks, I didn't think of that, even though it is quiet trivial hahah. Then property 2 and 3 immediately follow, but how to do property 1?
@Phoenix_10: property 1 is even easier. $\emptyset \in \mathscr{B}(\mathbb{R})$ and $\mathbb{R} \in \mathscr{B}(\mathbb{R})$ so $\emptyset = f^{-1}(\emptyset) \in \mathcal{A}_f$ and $S = f^{-1}(\mathbb{R}) \in \mathcal{A}_f.$
@UBM but how do you know that $S = f^{-1}(\mathbb{R})$, since that is only true when f is surjective right?
@Phoenix_10: No, this is always true whether $f$ is surjective or not. Suppose we have a not surjective function $f$ such that $f(S) = B \subset \mathbb{R}.$ Then $f^{-1}(B) = S$, and since $B \subset \mathbb{R},$ we have that $f^{-1}(\mathbb{R}) = S$
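Collecting the preimage identities used in the comments above, all three properties reduce to the corresponding facts about Borel sets:

$$S = f^{-1}(\mathbb{R}), \qquad \left(f^{-1}(B)\right)^c = f^{-1}(B^c), \qquad \bigcup_{n=1}^{\infty} f^{-1}(B_n) = f^{-1}\!\left(\bigcup_{n=1}^{\infty} B_n\right),$$

and since $\mathcal{B}(\mathbb{R})$ is itself a $\sigma$-algebra, each right-hand side is again the preimage of a Borel set, hence lies in $\mathcal{A}_f$.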
Convert Html to markdown using xwiki
How do I convert from HTML to Markdown with XWiki?
I am getting "java.lang.NoSuchFieldError: fRecognizedFeatures" for shouldRenderHtmlToMarkdown; I have tried different formats of HTML.
public class HtmlRendererTest
{
private Converter converter;
private WikiPrinter printer;
@Test
public void testHtmlToMarkdown() throws ComponentLookupException, ConversionException, ParseException, ComponentRepositoryException
{
WikiPrinter printer = new DefaultWikiPrinter();
converter.convert(new StringReader("<h3 id=\"HHeader3\"><span>Header 3</span></h3>"), Syntax.XHTML_1_0, Syntax.MARKDOWN_1_1, printer);
System.out.println(printer.toString());
assertThat(printer.toString(), containsString("###"));
}
@Test
public void testMarkdownToHtml() throws ComponentLookupException, ConversionException, ParseException, ComponentRepositoryException
{
WikiPrinter printer = new DefaultWikiPrinter();
converter.convert(new StringReader("### Header 3"), Syntax.MARKDOWN_1_1, Syntax.ANNOTATED_XHTML_1_0, printer);
System.out.println(printer.toString());
assertThat(printer.toString(), containsString("</h3>"));
}
@Before
public void setUp() throws ComponentLookupException, ConversionException
{
EmbeddableComponentManager componentManager = new EmbeddableComponentManager();
componentManager.initialize(this.getClass().getClassLoader());
converter = componentManager.getInstance(Converter.class);
printer = new DefaultWikiPrinter();
}
}
XWiki only provides a parser for Markdown right now, so unless you wrote a Markdown serializer yourself, shouldRenderHtmlToMarkdown cannot really work.
Now, that is not what "java.lang.NoSuchFieldError: fRecognizedFeatures" is about; usually it means you have some incompatible jars (one class expects to find a field, but the target class is not in the expected version). The complete stack trace might help in understanding which ones.
Not able to recreate same sound using FFT
I am trying to recreate musical note using top 10 frequencies returned by Fourier Transform (FFT). Resulting sound does not match the original sound. Not sure if I am not finding frequencies correctly or not generating sound from it correctly. The goal of this code is to match the original sound.
Here is my code:
import numpy as np
from scipy.io import wavfile
from scipy.fftpack import fft
import matplotlib.pyplot as plt
i_framerate = 44100
fs, data = wavfile.read('./Flute.nonvib.ff.A4.stereo.wav') # load the data
def findFrequencies(arr_data, i_framerate = 44100, i_top_n =5):
a = arr_data.T[0] # this is a two channel soundtrack, I get the first track
# b=[(ele/2**8.)*2-1 for ele in a] # this is 8-bit track, b is now normalized on [-1,1)
y = fft(a) # calculate fourier transform (complex numbers list)
xf = np.linspace(0,int(i_framerate/2.0),int((i_framerate/2.0))+1) /2 # Need to find out this last /2 part
yf = np.abs(y[:int((i_framerate//2.0))+1])
plt.plot(xf,yf)
yf_top_n = np.argsort(yf)[-i_top_n:][::-1]
amp_top_n = yf[yf_top_n] / np.max(yf[yf_top_n])
freq_top_n = xf[yf_top_n]
return freq_top_n, amp_top_n
def createSoundData(a_freq, a_amp, i_framerate=44100, i_time = 1, f_amp = 1000.0):
n_samples = i_time * i_framerate
x = np.linspace(0,i_time, n_samples)
y = np.zeros(n_samples)
for i in range(len(a_freq)):
y += np.sin(2 * np.pi * a_freq[i] * x)* f_amp * a_amp[i]
data2 = np.c_[y,y] # 2 Channel sound
return data2
top_freq , top_freq_amp = findFrequencies(data, i_framerate = 44100 , i_top_n = 200)
print('Frequencies: ',top_freq)
print('Amplitudes : ',top_freq_amp)
soundData = createSoundData(top_freq, top_freq_amp,i_time = 2, f_amp = 50 / len(top_freq))
wavfile.write('createsound_A4_v6.wav',i_framerate,soundData)
Have you tried plotting the waveform and spectra?
“Does not match” is very vague - And very subjective.
It looks like you are throwing away the phase. Don't do that. Use the phase, Luke.
Thanks @CrisLuengo. I will change the code to add phase and update the result here in the comment.
The top 10 spectral frequencies in a musical note are not the same as the center frequencies of the top 10 FFT result bin magnitudes. The actual frequency peaks can be between the FFT bins.
Not only can the frequency peak information be between FFT bins, but the phase information required to reproduce any note transients (attack, decay, etc.) can also be between bins. Spectral information that is between FFT bins is carried by a span (up to the full width) of the complex FFT result.
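Following the comments above, here is a minimal sketch of what "use the phase" means in practice: keep the top-N rFFT bins as complex values (magnitude and phase together) and invert, instead of rebuilding sines from magnitudes alone. The function name and the toy signal are illustrative, not taken from the original post.

```python
import numpy as np

def resynthesize_top_bins(signal, n_top=200):
    """Keep the n_top largest-magnitude rFFT bins (complex, so phase
    is preserved) and invert back to the time domain."""
    spectrum = np.fft.rfft(signal)
    # Zero out everything except the strongest bins.
    keep = np.argsort(np.abs(spectrum))[-n_top:]
    pruned = np.zeros_like(spectrum)
    pruned[keep] = spectrum[keep]
    return np.fft.irfft(pruned, n=len(signal))

# Toy check: a signal made of 3 exact-bin sinusoids survives pruning to 3 bins.
t = np.arange(4096) / 4096.0
sig = (np.sin(2 * np.pi * 440 * t)
       + 0.5 * np.sin(2 * np.pi * 880 * t)
       + 0.25 * np.sin(2 * np.pi * 1320 * t))
approx = resynthesize_top_bins(sig, n_top=3)
```

For a real note the peaks fall between bins, so exact reconstruction needs many more bins (or interpolation), but the complex-bin round trip above is already far closer than magnitude-only resynthesis.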
Can somebody explain how to solve this bug?
%%cython
cimport cython
import numpy as np
cimport openmp
from numpy.linalg import inv
from cython.parallel cimport prange
@cython.boundscheck(False)
@cython.wraparound(False)
def inv_unit_diagon_blocks(double[::1,:] A,double[::1,:] U,int m):
cdef n=int(len(A)/m)
cdef double[::1,:] Lambda=np.empty((A.shape[0],A.shape[1]))
cdef int i
A=A.reshape(A.shape[0]//n, n, A.shape[1]//n, n).swapaxes(1, 2).reshape(-1, n, n)
with nogil:
for i in prange(m*m, schedule='static'):
Lambda[i]=inv(U[i])@A[i]@U[i]
return Lambda
Error compiling Cython file:
------------------------------------------------------------
...
cdef double[::1,:] Lambda=np.empty((A.shape[0],A.shape[1]))
cdef int i
A=A.reshape(A.shape[0]//n, n, A.shape[1]//n, n).swapaxes(1, 2).reshape(-1, n, n)
with nogil:
for i in prange(m*m, schedule='static'):
Lambda[i]=inv(U[i])@A[i]@U[i]
^
------------------------------------------------------------
.ipython\cython\_cython_magic_f2335949c00c765c6ffad6f007548be3.pyx:15:36: Coercion from Python not allowed without the GIL
It gives this error probably because I try to multiply the n×n matrices. Can somebody tell me how I can do that? How can I multiply the two matrices m*m times in parallel? I know that it can be done using prange, but matrix multiplication needs the GIL and prange can only be used with nogil. Is there any way to make it work?
Does this answer your question? "Cython error: Coercion from Python not allowed without the GIL", "Coercion from Python not allowed without the GIL, using arrays in parallel (multithreading in Cython)"
Welcome to SO. Please take a look at the [help], especially "[ask]".
@outis I don't think either really answers the question. They cover similar errors, but they don't really explain why this particular operation is failing.
@DavidW: note the import numpy …, from numpy.linalg import … and cdef n lines in the sample.
Related: "calling dot products and linear algebra operations in Cython?", "Missing numpy attribute when using Cython"
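One hedged workaround (an assumption on my part, not something stated in the thread): drop prange entirely and let NumPy broadcast the linear algebra over the stacked blocks. np.linalg.inv and the @ operator both operate on stacks of matrices, so the per-block loop disappears and the heavy lifting runs in C:

```python
import numpy as np

# Illustrative shapes: m*m blocks, each n x n, stacked on the leading axis.
rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.standard_normal((m * m, n, n))
U = rng.standard_normal((m * m, n, n)) + 4.0 * np.eye(n)  # keep U well-conditioned

# np.linalg.inv and @ both broadcast over leading axes, so this computes
# inv(U[i]) @ A[i] @ U[i] for every block at once.
Lambda = np.linalg.inv(U) @ A @ U
```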
synchronized block on grails works on windows but no in linux
I have a Grails application that relies on a synchronized block in a service. When I run it on Windows the synchronization works as expected, but when I run it on Amazon Linux I get a StaleObjectStateException.
The problem is reproduced in the following example.
class TestService {
private final Object $lock = new Object[0];
TesteSync incrementa() {
synchronized ($lock) {
TesteSync t = TesteSync.findById(1)
t.contador++
t.save(flush: true)
Thread.sleep(10000)
return t
}
}
}
In my understanding, this exception occurs because multiple threads are trying to save the same object. That's why I'm using a synchronized block.
Linux java:
java version "1.7.0_85"
OpenJDK Runtime Environment (amzn-<IP_ADDRESS>.61.amzn1-x86_64 u85-b01)
OpenJDK 64-Bit Server VM (build 24.85-b03, mixed mode)
Windows java:
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
Any clues?
Thanks
Just a guess, maybe nothing will change: Can you try installing the Oracle JDK on linux too?
You are missing a try catch or a throws for InterruptedException
Are you sure you only have one instance of TestService?
@RealSkeptic Spring should ensure that the service is a singleton under grails architecture. But I'll verify anyway
@BackSlash I'll give a try to Oracle JDK as well
@RealSkeptic yes, just one instance
@BackSlash, Oracle didn't solve the problem neither :(
Try adding @Transactional above incrementa. As a test, you can use a withTransaction { } block inside, before your find.
You are right about why you're getting the StaleObjectStateException.
If what you're looking for is pessimistic locking (allowing only one transaction access to the data at any given time), then you can use the domain class lock() method:
class TestService {
static transactional = true
TesteSync incrementa() {
TesteSync t = TesteSync.lock(1)
t.contador++
return t.save()
}
}
You can learn more about Grails pessimistic locking here.
PS: Grails services are transactional by default. But in my example I explicitly made the service transactional to call something to your attention: The lock is released by Grails automatically when the transaction commits. I also removed the flush because the data gets flushed when the transaction commits. If you were doing this from a controller method that's not explicitly set to @Transactional, then you would need the flush.
TIP: When you query by ID you can do this...
SomeDomainClass.get(1)
...instead of this...
SomeDomainClass.findById(1)
Thanks @Emmanuel Rosa, the pessimistic locking approach worked! Thank you!
Where is the uniform distribution with one parameter ($U(\theta, k \theta)$) useful for modelling?
I recently came across the distribution $U(\theta, k \theta)$ (where k is known) in the context of statistical theory (as a nice toy example for finding MLE and the likes).
However, I was wondering if there are real use cases where one might wish to use this distribution for modelling something. But I couldn't seem to find any paper using this (or similar) distribution for actual data.
My question is if you happen to know of such a use for this distribution (or one similar to it). And more generally, how can one search for it effectively?
See the German Tank Problem
It might be of interest to note that the logarithm of a variable with this distribution would be a doubly truncated Exponential variable.
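As a quick illustration of the toy-example role mentioned in the question: the density is $1/((k-1)\theta)$ on $[\theta, k\theta]$, so the likelihood is proportional to $\theta^{-n}$ on the feasible interval $[\max_i x_i/k,\ \min_i x_i]$ and decreasing, giving the MLE $\hat\theta = \max_i x_i / k$. A numerical sketch (parameter values are illustrative):

```python
import numpy as np

# MLE for U(theta, k*theta) with k known: likelihood ~ theta**(-n) on the
# feasible interval [max(x)/k, min(x)], so the MLE sits at the left end.
rng = np.random.default_rng(42)
theta, k, n = 2.0, 3.0, 10_000
x = rng.uniform(theta, k * theta, size=n)
theta_hat = x.max() / k
```

This is the same flavor of extreme-value estimator as in the German tank problem mentioned in the first comment.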
What must/do astronomers reveal beyond their academic papers?
Academic papers often make claims about observational data and results from computations. The papers only refer to names of data sources (such as an observatory) and of mathematical methods without any specifics, other than some of it illustrated in a few graphs and tables. References lead almost only to other papers of similar format.
In order to replicate (or even peer review) a paper, obviously one must have access to specifics such as the actual calculations, the raw data, the software, the blue prints of the measurement equipment and logs of how it has been operated.
What data do peer reviewers and replicating scientists (and the public) have access to beyond a paper making some claims?
Two examples: the results of the infamous cold fusion press conference of 1989 could never be satisfactorily replicated. But were the replication attempts based only on the paper the authors then published, or did they also publish a wealth of data about their procedures on the side? And the more recent "neutrinos travel faster than light" media debacle was, as far as I know, all about the details of the equipment; was all the relevant information about it made public (for the science community)?
I hope my question isn't perceived as a suspicion of bad science. I think astronomy would be a very bad choice of subject for cheating scientists. Besides my curiosity about the practicalities of the scientific method, I rather think about how I could access source code and other yummies from the very edge of the science community. :-P
Unless you're talking about review papers (ones which talk generally about a specific field of astronomy/astrophysics), it is simply not realistic for scientists to report this level of detail in published papers. Usually they will have a methods or samples (or both) section to their paper highlighting the source of their data as well as the analysis/reduction procedure which was applied to it. Most likely, they do not make the specific chunk of data they use available simply because it comes from some publicly available database. Providing the coordinates for the region should be sufficient to be able to reproduce the result. All of that being said, some papers do actually provide the exact data they use. I'd imagine that papers within fields of research which are under the highest levels of scrutiny (climatology, medical research, high-profile astronomy projects like Kepler) probably make their own datasets available more often than ones in fields which are not.
Most catalogs have a public release somewhere on the web. Here is the best listing of astronomical catalogs that I know of:
http://vizier.u-strasbg.fr/viz-bin/VizieR
There is also a NASA Extragalactic Database (NED) that can be helpful: http://ned.ipac.caltech.edu/
In general, astronomical observations are available to the public to be analyzed. Different telescopes/space missions maintain their own archives. Some give an observer a proprietary period (6-7 months is the time frame I have heard given for most observatories) to give the person who requested the observations time to analyze the data before it is released to the general public.
In publications the author(s) will list the catalogs they used to come to their conclusions, so replicating scientists can use the same dataset.
If you read scientific papers (at least in astronomy, as far as I am concerned), you will always find a section called Observations and data, or similar.
There you (as author) have to explain which kind of data you used and the detailed method for data reduction.
Most of the time, this section goes along with a table in which all used data are described in detail: the observation log. The guiding principle in this section is that any reader can reproduce your results.
In the same section, information about instrument calibration is also reported in detail, as are error sources and error determinations. Basically, everything that is included in your published data.
Of course, mistakes can happen, and sometimes we do not consider variables which are important. I do not know the details of the two examples that you mention, but they are not the first and will not be the last.
Paradoxically, what you report is exactly the reason for disproving those experiments. Both major and minor results are always tested twice or thrice or more, by different groups, different facilities/observatories, different software versions. There is no way a groundbreaking result can be vouched for without being "differently" tested.
In the end, the method works well this way! If I can't reproduce your results by following your description, either you are a bad writer, or your experiment is going to be refuted.
All the rest is explained well in the answer from @moonboy13, with the only exception that, in my experience, when data are private, they are released after 1 year.
All data from the Hubble Space Telescope that is used in scientific research is released into the public domain (usually within one year, sometimes less) and available from the Mikulski Archive for Space Telescopes.
Qt and Digia licensing
I wanted to download the Qt SDK today and found that the Nokia site doesn't work now. I was redirected to the Digia site. I found that now I can't just download the open-licensed SDK. They require my name and other info. In the email they sent me I found the links for the commercial version of the SDK and my license key.
Why did they do this? Will it be free to use in the future?
As the website says, if you want the Qt Open Source version, you have to go to Qt Project website.
The copyright holders can change the license as they please. This is usually easier when all the copyright is held by one single entity. Still, you can take the Free/Open Source versions and continue working with those.
As far as I know, Qt has had a dual license since they opened it. What Digia seems to be doing is distributing the one that has a proprietary license, leaving the open one for the project to distribute.
'init' is inaccessible due to 'internal' protection level
public class SelectedValue {
    static let sharedInstance = SelectedValue()
    public var reportPostId : Int? = nil
    public var report_GroupId : Int? = nil
    public var postType : String? = nil
}
'init' is inaccessible due to 'internal' protection level
Please help me to solve this.
Thanks
Post code, not screenshots
public class SelectedValue {
    public static let sharedInstance = SelectedValue()
    public var reportPostId : Int? = nil
    public var report_GroupId : Int? = nil
    public var postType : String? = nil
}
There is nothing wrong with the code posted and you don't have to initialize optional values to nil
As it says, the init method (which you probably called without its name) is internal, not public. Make its access level more open to access it from outside the module.
Click on the gray lines that say 'init' declared here to jump to the source of the issue.
Note that MyClass() is equal to MyClass.init(). That means you are calling the init method each time you build an object.
And also note that if you don't specify the access level, it will be internal by default.
So this:
class MyClass { }
is equal to:
internal class MyClass {}
So you should change it to:
public class MyClass { }
Hey, I tried, but it's not working, as shown in my edited question.
As I mentioned, click on the gray message to see what is internal.
Worked. It was my fault. It was a syntax error. I wrote Alamofire.Request instead of Alamofire.request
How to calculate the cell size?
How do I calculate the cell size so that different devices display the same number of cells per row and column, and the cell height is adjusted to the device's screen size?
I tried calculating it this way, but it's not right.
Where did I make a mistake?
override func viewDidLoad() {
    super.viewDidLoad()
    let homeFlowLayout = UICollectionViewFlowLayout()
    let itemsPerRow: CGFloat = 2
    let itemsPerColumn: CGFloat = 2
    let window = UIApplication.shared.connectedScenes.first as? UIWindowScene
    let tabBarHeight = window?.windows.first?.safeAreaInsets.bottom ?? 0
    let availableWidth = UIScreen.main.bounds.width
    let width = availableWidth / itemsPerRow
    let availableHeight = UIScreen.main.bounds.height - tabBarHeight
    let height = availableHeight / itemsPerColumn
    homeFlowLayout.itemSize = CGSize(width: width, height: height)
    homeFlowLayout.minimumLineSpacing = 10
    homeFlowLayout.scrollDirection = .vertical
    collectionView.setCollectionViewLayout(homeFlowLayout, animated: false)
}
Your logic is based on total screen size. You don't account for the status bar and nav bar at the top nor the tab bar at the bottom. You also don't account for spacing between cells.
@HangarRash, could you please tell me how to calculate it correctly?
@HangarRash, the distance from one cell to another is 10
@HangarRash, maybe you have an example of the correct calculation ?
@Rob, and there's nothing in the video to help me
Ideally, rather than calculating the size ourselves, we would rather let the framework do it.
Now, this may be beyond the scope of what you might contemplate, but compositional layouts (as described in the Apple Implementing Modern Collection Views sample and in WWDC 2020’s Advances in Collection View Layout) gracefully handle heights as a percentage of the collection view (as well as orthogonal scrolling which you posted in a prior question).
To get five rows, I just defined the fractionalHeight of the group to be ⅕ (and set the orthogonalScrolling behavior):
func createLayout() -> UICollectionViewCompositionalLayout {
    let itemSize = NSCollectionLayoutSize(
        widthDimension: .estimated(100),
        heightDimension: .fractionalHeight(1)
    )
    let item = NSCollectionLayoutItem(layoutSize: itemSize)
    let groupSize = NSCollectionLayoutSize(
        widthDimension: .estimated(100),
        heightDimension: .fractionalHeight(1/5)
    )
    let group = NSCollectionLayoutGroup.horizontal(layoutSize: groupSize, subitems: [item])
    let section = NSCollectionLayoutSection(group: group)
    section.orthogonalScrollingBehavior = .continuous
    return UICollectionViewCompositionalLayout(section: section)
}
That yields:
The only trick that I employed was regarding the spacing. If I set the height of each group to be ⅕ and added spacing, then the total height was too tall by the sum of the spacing. So, I just left it with no spacing, but then inset the content within my cells (the images in my example) to achieve the desired spacing. Maybe there is another way to achieve exactly ⅕ height including spacing, but it was not jumping out at me.
I freely acknowledge that if you have not played around with compositional layouts before, this might be a lot to take in. But when you throw in the orthogonal scrolling (if, indeed, that is still desired), then I think compositional layout really starts to shine. I’ve done the old “collection view inside a table view cell” technique, but it gets mightily ugly very quickly. The compositional layout might feel alien at first, but it is worth it, IMHO.
The aforementioned sample code has a rich array of examples, so I would encourage you to check that out.
All that having been said, if you wanted to calculate the cell sizes yourself using traditional techniques, there are a variety of approaches one could adopt.
But we would avoid using UIScreen.main. It might feel convenient at this point, but will break if we later support iPads (with their split-screen multitasking), device rotations, etc. We should perform calculations on the basis of the bounds of the content view, not the device’s main screen. That also largely eliminates brittle code that subtracts out the height of navigation bars, tab bars, etc. Just use the bounds of the collection view. Perhaps:
func updateItemSize() {
    let layout = collectionView.collectionViewLayout as! UICollectionViewFlowLayout
    let rowCount: CGFloat = 5
    let columnCount: CGFloat = 10
    let spacing = layout.minimumInteritemSpacing
    let insets = layout.sectionInset
    layout.itemSize = CGSize(
        width: (collectionView.bounds.width - (columnCount - 1) * spacing - insets.left - insets.right) / columnCount,
        height: (collectionView.bounds.height - (rowCount - 1) * spacing - insets.top - insets.bottom) / rowCount
    )
}
The next question is where you call this. The problem is that viewDidLoad is too early in the view rendering process, so the bounds of the collection view may not yet be reliable. There are lots of approaches, but you could do it in viewDidLayoutSubviews:
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    updateItemSize()
}
There are other approaches, but the key observation is that (a) you should just use the bounds of the collection view; but that (b) viewDidLoad is too early in the process.
Todonotes problem: No line connecting notes to text
When using the todonotes package, all notes are missing correct lines. That is, instead of having a line connecting the note and the text, the line connects to the note itself (with the default color you cannot see this, since both the line and the background of the note have the same color, resulting in no visible line).
See code below and the result it produces.
What can the problem be?
\documentclass[11pt,reqno]{article}
\usepackage{todonotes}
\author{name}
\begin{document}
\title{Title}
\maketitle
\section{Introduction}
Bla bla bla \todo{where is the line?} Bla blah
More Blah, \todo{and now?} blah
\end{document}
Welcome to TeX.SX! Using your code I got the lines working properly. Are you sure that the MWE provided gives the picture of your question?
You have to compile twice.
Indeed, when using WinEdt (and MiKTeX), the resulting DVI also had flawed lines, but the resulting PDF had correct lines. What's going on here?
By the way, what is MWE?
Oh, DVI. The positioning of TikZ graphics aren't usually correct in DVI I think (see http://tex.stackexchange.com/questions/5826/tikz-with-positioning-does-not-work-with-dvi), you have to convert to PDF. I have no idea how Bakoma works, so I can't really help with that.
How to find prediction probability in given CNN in tensor flow?
I am very new to TensorFlow. Let us assume that I already have a trained convolutional neural network. Now I give one new data point to this CNN, and I want to see the prediction probability for each class. (E.g., the CNN is for handwritten digits 0-2; now I give a new data point "2" to this trained CNN, and the prediction probabilities should give me something like 0.01 for class 0, 0.02 for class 1, and 0.97 for class 2.)
May I ask someone to advise me what's the right code to do that in TensorFlow (1.13.1) for Python? Sorry about the elementary-level question.
I am using the online example MNIST code,
import numpy as np
import tensorflow as tf

def cnn_model_fn(features, labels, mode):
    input_layer = tf.reshape(features["x"], [-1, 28, 28, 1])
    conv1 = tf.layers.conv2d(inputs=input_layer, filters=30, kernel_size=[5, 5], padding="same", activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
    pool2_flat = tf.reshape(pool1, [-1, 14 * 14 * 30])
    dense = tf.layers.dense(inputs=pool2_flat, units=1000, activation=tf.nn.relu)
    dropout = tf.layers.dropout(inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN)
    logits = tf.layers.dense(inputs=dropout, units=10)
    predictions = {
        # Generate predictions (for PREDICT and EVAL mode)
        "classes": tf.argmax(input=logits, axis=1),
        # Add `softmax_tensor` to the graph. It is used for PREDICT and by the
        # `logging_hook`.
        "probabilities": tf.nn.softmax(logits, name="softmax_tensor")
    }
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
    # Calculate Loss (for both TRAIN and EVAL modes)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    # Configure the Training Op (for TRAIN mode)
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
        train_op = optimizer.minimize(
            loss=loss,
            global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
    # Add evaluation metrics (for EVAL mode)
    eval_metric_ops = {
        "accuracy after all": tf.metrics.accuracy(
            labels=labels, predictions=predictions["classes"])}
    return tf.estimator.EstimatorSpec(
        mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)

def main(unused_argv):
    model_path = "/tmp/mnist_convnet_model"
    # Load training and eval data
    mnist = tf.contrib.learn.datasets.load_dataset("mnist")
    train_data = mnist.train.images  # Returns np.array
    train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
    eval_data = mnist.test.images  # Returns np.array
    eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
    # Create the Estimator
    mnist_classifier = tf.estimator.Estimator(
        model_fn=cnn_model_fn, model_dir=model_path)
    # Set up logging for predictions
    # Log the values in the "Softmax" tensor with label "probabilities"
    logging_hook = tf.train.LoggingTensorHook(
        tensors={"probabilities": "softmax_tensor"}, every_n_iter=50)
    # Train the model
    train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
        x={"x": train_data},
        y=train_labels,
        batch_size=100,
        num_epochs=None,
        shuffle=True)
    mnist_classifier.train(
        input_fn=train_input_fn,
        steps=5000,
        hooks=[logging_hook])
    # Evaluate the model and print results
    eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
        x={"x": eval_data}, y=eval_labels, num_epochs=1, shuffle=False)
    eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
    print(eval_results)

if __name__ == "__main__":
    tf.app.run()
A general answer is: run the correct tensors in a session. Can you share a snippet (as small as possible) of your code showing what your CNN looks like, so it is clear which tensors should be executed?
@Jindřich I just added the example code.
Call the predict method of your estimator (i.e., mnist_classifier) and set predict_keys="probabilities".
The predict method only runs the inference (unlike evaluate, it skips the evaluation). Setting the key will choose the correct tensor from the dictionary called predictions that you have in the cnn_model_fn method.
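For intuition about what "probabilities" contains: it is simply the softmax of the logits, i.e. each class gets exp(logit) normalized so the values sum to 1. A dependency-free sketch of that conversion (the logit values here are made up, not from the model above):

```python
import math

def softmax(logits):
    # Subtract the max first for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 3-class digit model (classes 0, 1, 2)
probs = softmax([1.0, 2.0, 6.0])
print(probs)  # most of the probability mass lands on class 2
```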
CSV Download not working within Views Bulk Operation's Action extension function
I am not getting any errors in the logs, and the CSV doesn't seem to download. I am trying to get the selected rows from Views Bulk Operations into a CSV that downloads, using the Action extension that I am making:
I tried the VBO Export module, and the same thing happens.
EDIT: I took out the dots indicating that there were more rows, so there isn't any confusion that they were part of my code.
public function executeMultiple(array $objects) {
    $tempFile = tempnam('/tmp', 'download_csv_');
    $out = fopen($tempFile, 'wb');
    if ($out) {
        fputcsv($out, array('Company Name'));
        foreach ($objects as $entity) {
            $company_name = $entity->getTitle() ?? 'Test';
            fputcsv($out, array($company_name));
        }
        fclose($out);
        $headers = new ResponseHeaderBag();
        $headers->set('Content-Type', 'text/csv');
        $headers->set('Content-Disposition', 'attachment; filename="data.csv"');
        $headerArray = $headers->all();
        try {
            $response = new BinaryFileResponse($tempFile, 200, $headerArray, true);
        } catch (Exception $e) {
            \Drupal::logger('VBO')->notice($e->getMessage());
        }
        return $response;
    }
}
What is the code snippet representing as pertaining to the Question? If that is your custom code, it doesn't contain valid PHP syntax. It can't work as written.
This is my custom code that extends views bulk operations that pertains to the question. I also tried VBO Export module with no code with the same results. Do you know what is wrong with my custom code above that extends views bulk operations? Drush cr is working okay and I don't get any errors in watchdog logs
I also don't get a WSOD.
Ah you mean the fputcsv($out, array('Company Name'...)); section. I just abbreviated all the columns. I'll edit it so people don't get confused
Add clarifying information to the Question and explain what "doesn't seem to download" means, on a technical level. Using the network tab of the browser dev tools will probably show useful information about the failing request.
ok I'll go do that
You won't find any errors because there aren't any - executeMultiple is a void function, the response you're returning just isn't used for anything. It's possible to override the completion redirect (see https://www.drupal.org/project/views_bulk_operations/issues/3042535 & https://www.drupal.org/project/views_bulk_operations/issues/3207304), so in theory you could redirect to a route with a force-download, but it would leave your users on the batch page, rather than taking you back to the start. Weird UX. The standard solution for this in my experience is to save the file, and render a...
...message with a link to it on batch completion. Or you might be able to hack something with javascript/meta refresh tags on the completion page to redirect to a route serving the force-download, but it'd be messy
Thanks @Clive! I went to https://www.drupal.org/project/vbo_export/issues/2925601 and there was a comment in there about revision metadata fields fouling this up, and I was using revision metadata fields. I made a test view page with VBO Export with regular content, and it works like you said. I'm going to consider the routes you suggested. Thanks again!
So I tested it with new revision metadata fields and it worked. I'm thinking that the same comment was referring to this same issue where the content has been in the database for awhile. I'll follow that issue.
Thanks to @clive, I was able to find the pertinent issue links. Based on the information I found, I just needed to create a new views table and VBO Export works.
EDIT:
I was also hiding the message box, so that is why I couldn't see VBO Export's link. I didn't think a link to download the file would appear there.
Using two lists to make rows and columns of a matrix
I am trying to make a matrix with the rows and columns being passed as lists yvars and xvars. How would I make a matrix from the two and then populate it with the Spearman correlation rho (using pspearman) between the column and the row title?
xvars and yvars are lists of names of the columns in the dataset access_sam2.
xvars <- c("Count_iTLS", "Count_iTLS_intra", "Count_iTLS_peri","Count_mTLS", "Count_mTLS_intra", "Count_mTLS_peri","Count_LA", "Count_LA_intra", "Count_LA_peri", "Distance_iTLS", "Distance_iTLS_intra", "Distance_iTLS_peri","Distance_mTLS", "Distance_mTLS_intra", "Distance_mTLS_peri","Distance_LA", "Distance_LA_intra", "Distance_LA_peri", "Area_iTLS", "Area_iTLS_intra", "Area_iTLS_peri","Area_mTLS", "Area_mTLS_intra", "Area_mTLS_peri","Area_LA", "Area_LA_intra", "Area_LA_peri")
yvars<-c("CD8_PD1_D","CD8_PDL1_D","CD8_GBNEG_FOXP3_D")
This is the code for the Spearman correlation and to access the rho coefficient
x<-spearman.test(access_sam2$x,access_sam2$y)
rho=x[["estimate"]][["rho"]]
Please show a small reproducible example
I have added the two lists xvars and yvars
You have two vectors with different length? Is it a combination of correlation?
each of the the xvars and yvars are the name of a column in the same dataframe (they have the same number of rows(some are NA, but should be able to remove with na.rm=TRUE))
We can use outer
library(pspearman)
f1 <- function(x, y) spearman.test(access_sam2[[x]],
    access_sam2[[y]])[["estimate"]][["rho"]]
outer(xvars, yvars, FUN = Vectorize(f1))
Using a reproducible example
f2 <- function(x, y) spearman.test(mtcars[[x]],
mtcars[[y]])[["estimate"]][["rho"]]
xvars <- c( "mpg", "cyl", "disp", "hp" )
yvars <- c("drat", "wt" )
out <- outer(xvars, yvars, FUN = Vectorize(f2))
out
# [,1] [,2]
#[1,] 0.6514555 -0.8864220
#[2,] -0.6788812 0.8577282
#[3,] -0.6835921 0.8977064
#[4,] -0.5201250 0.7746767
dimnames(out) <- list(xvars, yvars)
I got the error: Error in .subset2(x, i, exact = exact) :
recursive indexing failed at level 2
@Andre As I mentioned, it is not tested. You haven't provided any reproducible example
The data frame has 33 rows and 677 columns. All the vectors passed contain numbers.
Can I deprecate usage of specific class (not under our control) in Android Studio?
Does Android Studio's inspection feature contain some option to mark certain classes as deprecated?
I suppose that the inspection rules currently do not support parametrized inspections e.g. "Mark following classes as deprecated: _______" but is there another way to fix that?
I want to move away from class X (which is not under our control) and use class XExtended and would like to reliably ensure that in our codebase no-one uses X again. Since we do use inspections regularly, having an inspection which looks for certain classes mark them as warning or error would be very useful.
Search and replace is somewhat tedious and does not protect for new uses. I also cannot remove the library where X is located as it is part of the android sdk.
Edit: During refactoring we find all those old uses due to unit test failures but not everything is yet fully covered with tests, so there might be undetected issues.
You could write a custom Lint check for this. I don't know if custom inspections are a thing. I'm fairly certain that the deprecation inspection is looking for @Deprecated, which isn't an option in your case.
@CommonsWare thanks, I am looking into it but it looks like writing a lint extension is neither trivial nor a quick task to do.
There is definitely a learning curve, compounded by the fact that it is all largely undocumented, particularly given a twice-overhauled API. This blog post is close, IIRC.
It is possible. I managed to do it with Android Studio 3.1.1. I marked a WebView class as deprecated as follows:
Then whenever I try to use WebView it will display deprecated as follows:
Steps to deprecate:
Click on the class which you want to deprecate.
Yellow bulb icon will appear at left
Expand it and click on Annotate class <Class> as @Deprecated
Throughout the project, this class will be deprecated.
Wonderful, exactly the thing I was looking for!
@Samuel Great! Good know it was useful
Linux networking using macvlan interfaces
This setup is working for my needs, but I don't understand it, which concerns me. For example, I assume it's possible that a system update could break this functionality, and if I don't properly understand it, how can I repair/restore it? Also, I'm open to better/proper ways of working, so I welcome any insights learnt here.
This is about linux networking, macvlan interfaces and yes ultimately docker containers but I'm more concerned with the OS networking and the layer 2/3 outcomes I have. My host is a Synology DS1019+ NAS.
Using config files placed in /etc/sysconfig/network-scripts/ I can create a new br0 interface against eth0. The mac of the new br0 interface is the same as the eth0 interface. In this configuration I no longer have an IPv4 against eth0, only br0 has an address and is routable on my subnet. My current understanding is that the above is sound and as expected.
It's my understanding of the above that is causing me to question the next part.
Instead of using config files in /etc/sysconfig/network-scripts/ I have instead used the linux ip command to achieve a similar outcome.
ip link add link eth0 name br0 type macvlan mode bridge
After reading a number of other articles proposing this method, I now have an IPv4 address on both the linked eth0 (physical) interface and on the newly created br0 interface, and br0 has an entirely unique MAC address of its own. To be clear, the subnets on each interface are different. Both interfaces work; I can reach them on the host as well as from remote devices in my LAN. The only additional step for remote devices was the placement of a static route on my router pointing back towards my NAS for the new br0 subnet.
I don't understand how eth0 still has an IPv4 address, and I don't understand how br0 has its own unique MAC address, given my expectations of the command and it being a macvlan bridge.
My interface layout before modification...
eth0: <IP_ADDRESS>/24, eth1: disconnected
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet <IP_ADDRESS>/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/sit <IP_ADDRESS> brd <IP_ADDRESS>
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:11:32:c0:fb:37 brd ff:ff:ff:ff:ff:ff
inet <IP_ADDRESS>/24 brd <IP_ADDRESS> scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::211:32ff:fec0:fb37/64 scope link
valid_lft forever preferred_lft forever
4: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 00:11:32:c0:fb:38 brd ff:ff:ff:ff:ff:ff
inet <IP_ADDRESS>/16 brd <IP_ADDRESS> scope global eth1
valid_lft forever preferred_lft forever
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:42:ad:6c:d4:7f brd ff:ff:ff:ff:ff:ff
inet <IP_ADDRESS>/16 brd <IP_ADDRESS> scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:adff:fe6c:d47f/64 scope link
valid_lft forever preferred_lft forever
I issue 3 commands as below to create br0 and place it in a seperate subnet...
ip link add link eth0 name br0 type macvlan mode bridge
ip addr add <IP_ADDRESS>/24 dev br0
ip link set br0 up
Which then changes my interface layout to be...
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet <IP_ADDRESS>/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
link/sit <IP_ADDRESS> brd <IP_ADDRESS>
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:11:32:c0:fb:37 brd ff:ff:ff:ff:ff:ff
inet <IP_ADDRESS>/24 brd <IP_ADDRESS> scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::211:32ff:fec0:fb37/64 scope link
valid_lft forever preferred_lft forever
4: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether 00:11:32:c0:fb:38 brd ff:ff:ff:ff:ff:ff
inet <IP_ADDRESS>/16 brd <IP_ADDRESS> scope global eth1
valid_lft forever preferred_lft forever
5: br0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1
link/ether 76:31:e6:37:27:97 brd ff:ff:ff:ff:ff:ff
inet <IP_ADDRESS>/24 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::7431:e6ff:fe37:2797/64 scope link
valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 02:42:ad:6c:d4:7f brd ff:ff:ff:ff:ff:ff
inet <IP_ADDRESS>/16 brd <IP_ADDRESS> scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:adff:fe6c:d47f/64 scope link
valid_lft forever preferred_lft forever
I note the "state UNKNOWN" for the br0 interface however it does work!
Your opening paragraph is confusing. The use of 'aka' feels like you probably mean e.g. 'for example' not a.k.a 'also known as' but then lack of punctuation makes it unclear whether it's the updates you don't understand, or the breakage; or what that may actually entail.
Thanks @Tetsujin, made some edits to hopefully make it clearer.
How to add & remove an overlay image using the grid cell index
here is my code
-(void)gridCell:(GridCell *)gCell didSelectImageAtIndex:(NSInteger)index imagSize:(CGRect)frameSize
{
    UIImage *img = [UIImage imageNamed:@"Overlay"];
    UIImageView *imgView = [[UIImageView alloc] initWithFrame:frameSize];
    imgView.image = img;
    [_scrPage addSubview:imgView];
    [self.view addSubview:_scrPage];
}
Please help me with how to remove the overlay.
By "overlay" I assume you mean the UIImageView you added to _scrPage? Have you retained some sort of reference to that view? If so, then all you need to do is [imgView removeFromSuperview]; If not, you need to find out how you can get a reference to that specific view.
MySQL versions that install with MAMP or XAMPP for Windows are old
I download MAMP for Windows 10 or XAMPP and I get an older version of MySQL installed than the one listed in the data sheet. I need 5.6 and I keep getting 5.5. What is wrong?
MAMP by default has the 5.6.34 version of MySQL (as specified here) and XAMPP doesn't even have MySQL but MariaDB version 10.1.32 (as stated here), so how did you get the 5.* version?
Google search and downloaded from their site.
| common-pile/stackexchange_filtered |
How to search for a substring in a user input string and print it out
So I have an assignment where the user puts in a string and a word, and then the program will search for this word in the string and print it out with the starting index.
For example if the user inputs this string:
abccbaabcccabbcacbbbca
And this word:
ab
The program will print out this:
1 abccbaabcccabbcacbbbca
7 abcccabbcacbbbca
12 abbcacbbbca
3 occurrences were found
As I just started taking classes in Java, I have only learned about basic loops (for, while), and just learning to use a few methods from the string-class including:
int compareTo()
boolean equals()
int length()
static String valueOf()
char charAt()
int indexOf()
String substring()
String toLowerCase()/toUpperCase()
String trim()
As far as I understand, I am supposed to use these methods in a for-loop, but I am not sure how to do this. The search through the user input string needs to be case-sensitive as well. Could anyone come up with an example or explanation of how to do this? An example with an explanation would be appreciated.
Edit 1: @Zong Zheng Li, I don't see how my question is a duplicate of your linked question when it only addresses one aspect of my question.
Try taking a closer look at the documentation for indexOf.
Voting to close as duplicate of return index of the first letter of a specific substring of a string
I understand you don't have enough rep but please try not to use edits to reply. Think about your problem in parts. Do you know how to read user input? Can you use indexOf with the inputs? Can you do this repeatedly and count number of iterations? Do you know how to print output? The solutions to these subproblems can be easily found on this website. Just take it one step at a time.
I'm sorry, I didn't realize I shouldn't edit to answer. It just popped up that I should edit to explain why my question was different. Anyway, I have only written the part of my code where I let the user write in the string and word. I'm not sure how to build my code after this. It's a bit confusing what to search for since I am not learning Java in English. Could you link me to questions that address these problems?
Use a for loop, where (without giving away the code):
the iteration variable is initially set to the index of the word in the text
the terminating condition is where the index returned is -1
the next value is the index of the substring after the current index value
In the body of the loop, print out the substring of the text from the index onwards.
To count the number of times a match was found, declare another variable before the loop and set it to zero. Inside the loop, increment it by one.
After the loop print out the value of the counter.
The methods you'll need are indexOf(String), indexOf(String, int) and substring(int).
Could you give an example? I am not learning Java in English, so it is a bit confusing for me what you just wrote. Help appreciated, though.
Edit: Thanks for listing the needed methods
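Putting the hints above together, a minimal sketch using only indexOf and substring might look like this (the 1-based indices printed match the expected output in the question; the hardcoded input and the wording of the final count line are assumptions for illustration):

```java
public class SubstringSearch {
    public static void main(String[] args) {
        // In the real assignment these would come from a Scanner
        String text = "abccbaabcccabbcacbbbca";
        String word = "ab";

        int count = 0;
        // indexOf returns -1 when no further occurrence exists
        for (int i = text.indexOf(word); i != -1; i = text.indexOf(word, i + 1)) {
            // print a 1-based index followed by the rest of the text
            System.out.println((i + 1) + " " + text.substring(i));
            count++;
        }
        System.out.println(count + " occurrences were found");
    }
}
```

Because String.indexOf is case-sensitive by default, no extra work is needed for the case-sensitivity requirement.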
| common-pile/stackexchange_filtered |
accessing data members of nested struct and passing struct to function C++
I have a simple problem but I need to understand the concept behind it.
How do I access the data members of a 1st struct by instantiating it as a pointer in a 2nd struct?
And if I make the data members of the 1st struct pointers, how do I print their values by accessing them? E.g.
struct temp
{
    int a = 5;
    float b = 6.0f;
    int *i = &a;    // pointer members pointing at the value members
    float *f = &b;
};
I am working on a complex code so I need to understand the logic behind it as how it works in-terms of memory and logic.
Thanks a lot for your time in advance.
#include <iostream>
using namespace std;

struct temp {
    int i = 5;
    float f = 6.0;
};

struct qlt {
    temp *d;
};

int sum(qlt *s)
{
    int a = s->d->i;
    // std::cout << a;
    return a;   // sum() is declared int, so it must return a value
}

int main() {
    qlt x;
    // int b = ;
    std::cout << sum(&x);
    return 0;
}
What is the question? Please clarify
Actually, I want to print the values of the 1st struct with the help of the function sum(), by making an object of the 1st struct a pointer in the 2nd struct (qlt).
qlt x;
This creates a qlt all right, but not the d inside it. So you have a dangling pointer (since it's also left uninitialized).
qlt x;
temp b;
x.d = &b;
this would be a C-style solution. C++ has way better ways to do it.
Forget all sorts of pointers at this moment and use the STL.
Yes, you are right; that's why it's giving me a segmentation fault.
Just one more thing: if I make the data members of the 1st struct pointers, how do I print them, with everything else the same? Like struct temp { int a = 5; float b = 6.0; int *i = &a; float *f = &b; };
You access them with *, for example cout << a, cout << b, cout << *i.
Thanks for the precise answer I have marked it as +1 also regarding 2nd part of question incase my data members of 1st struct are pointers.
I'm still having an issue accessing the member values within the function, like int sum (qlt *s) { int a = s->d->*i; }, because I am still getting an error that i does not name a type when I make the data members pointers.
d is a pointer; i isn't. So you use -> correctly, then i, not *i.
| common-pile/stackexchange_filtered |
Angular select defaults to the 2nd option, why?
I have an array of objects from which I render a select. The problem is that the select defaults its selection to the 2nd object (which shouldn't even be selectable; it's just used as a category label) in the warehouseSelect array (hence why I also tried to use the unshift method as well).
What would possess Angular to resort to such behavior?
Angular HTML
<select [disableControl]="allDisabled" class="form-control" id="warehouse_id" formControlName="warehouse_id" tooltip="Select a warehouse" data-placement="top">
<option *ngFor="let warehouse of warehouseSelect" [disabled]="warehouse.id == null" [selected]="orderform.controls['warehouse_id'].value == warehouse.id" [ngValue]='warehouse.id' id="{{warehouse.id}}">{{warehouse.name}}</option>
</select>
Typescript
constructor(
private warehouseService: WarehouseService,
) {
this.warehouseService.getWarehouses(this.selectedOrganization.organization_id)
.subscribe((warehouses) => {
warehouses.forEach(warehouse => {
if (!this.purposesWarehouses[warehouse.purpose]) {
this.purposesWarehouses[warehouse.purpose] = [];
}
this.purposesWarehouses[warehouse.purpose].push(warehouse);
this.warehouses.push(warehouse);
});
for (let purpose in this.purposesWarehouses) {
this.warehouseSelect.push({ id: null, name: purpose });
for (let warehouse in this.purposesWarehouses[purpose]) {
this.warehouseSelect.push(this.purposesWarehouses[purpose][warehouse]);
}
}
this.warehouseSelect.unshift({ id: -1, name: 'Auto', });
});
}
Probably your orderform.controls['warehouse_id']'s initial value is null; that is why the one with the null id is selected by default.
@firatozcevahir you sir are correct, please add that as an answer so i can accept it. Many thanks!!!
Since your orderform.controls['warehouse_id']'s initial value is null, the one with null id is selected by default.
You can change the value of the form control using setValue function.
For example: orderform.controls['warehouse_id'].setValue(1) the item with id of 1 will be selected by default
| common-pile/stackexchange_filtered |
How can I restrict an ArrayList to accept only a specific type of object prior to generics?
How can we restrict an ArrayList to accept only a specific type of object prior to generics?
Write a wrapper function that accepts only the allowed type, and hide the collection. That was standard best-practice pre-Java-5.
private final List strings = new ArrayList();
public void add(String s)
{
strings.add(s);
}
public boolean remove(String s)
{
return strings.remove(s); // List.remove(Object) returns a boolean, not the removed element
}
// etc...
Yes, this sucks.
Might I ask: is there a reason you're not using generics? They are bytecode-compatible with Java 1.4
The application is a very old legacy system so not sure if it is written with java 1.4
Two options (I am assuming C# here, but this applies to pretty much all OO languages):
1) Inherit from collection type of choice (or its interfaces), override all methods to throw exception on wrong type, something like this:
public class MyType
{
// Your type here
}
public class MyTypeCollection : ArrayList
{
public override int Add(object value)
{
if (!(value is MyType))
{
throw new ArgumentException("value must be of type MyType");
}
return base.Add(value);
}
public int Add(MyType myType)
{
return base.Add(myType);
}
// Other overrides here
}
or
2) (probably better), create your own type altogether and implement interfaces as desirable for collections and use a non-generic, non-typed collection internally. Something like this:
public class MyTypeCollection2 : IEnumerable
{
private readonly ArrayList _myList = new ArrayList();
public void Add(MyType myType)
{
_myList.Add(myType);
}
// Other collection methods
public IEnumerator GetEnumerator()
{
// Yield each element; yield-returning _myList.Cast<MyType>() directly would
// produce a single item that is itself the whole sequence
foreach (MyType item in _myList)
{
yield return item;
}
}
}
Make sure to implement all interfaces you will care about. In the .NET Framework the interfaces implemented for ArrayList are: IList, ICloneable
Hope this helps.
| common-pile/stackexchange_filtered |
Media not found error when using Gentoo minimal install CD on Lenovo ThinkPad T450s
I'm trying to use the Gentoo minimal install CD (specifically install-amd64-minimal-20150910.iso, though that is unlikely to matter) to install Gentoo on my new Lenovo ThinkPad T450s.
I can boot off my USB CD-ROM or USB key by hitting enter to interrupt the normal boot, hitting F12 for the boot menu, then selecting my USB device, and the minimal CD boots up to the kernel selection menu. However, when I attempt to boot that kernel, I end up with a "Media not found" error when Gentoo tries to mount the root filesystem from the USB device.
I've tried both an external USB CD-ROM and a USB key, and both exhibit the same behaviour.
The culprit turned out to be USB 3.0! I was able to boot from the Gentoo CD by temporarily disabling USB 3.0 in my BIOS:
Reboot the laptop and hit Enter to interrupt the boot process
Hit F1 to enter setup
Go to the Config tab and select USB
Go to "USB 3.0 Mode", hit Enter, and select "Disabled"
Hit F10 to save and exit
When the laptop reboots, hit Enter to interrupt the boot process, F12 to enter the boot menu, then select your USB device
When the SYSLINUX Gentoo boot menu appears, enter the following: gentoo slowusb scandelay and hit Enter
Everything should now proceed as normal.
Apologies for answering my own question, but I couldn't find anything with Google, and I wanted this info searchable in case someone else runs into this annoying issue. (It cost me an hour of my life to figure it out.)
Worked for me w/o going to BIOS with just scandelay=50 option.
| common-pile/stackexchange_filtered |
How to unlock Ubuntu 24.04 from AnyDesk remotely
After restarting the server from AnyDesk and entering the password, the screen remains black and I can't do anything anymore. I have to go to where the server is to unlock it.
Ensure a monitor is connected to the remote device. By connecting one, the computer will detect that a display is attached and will keep the display drivers active. If there is one connected, make sure that it is on and awake.
If you do not wish to connect a monitor: AnyDesk currently might need an active display signal to function properly. As a workaround you can use a headless adapter (a dummy display plug) to emulate a video signal.
https://www.reddit.com/r/AnyDesk/comments/esd3m1/anydesk_in_linux_gives_black_screen/
| common-pile/stackexchange_filtered |
Alter playback speed (tempo) using Azure Media Services
Is it possible to alter the playback speed of the outputted audio from the “Windows Azure Media Encoder”?
I’m using the “Windows Azure Media Encoder” media processor with a "WMA High Quality Audio" configuration to convert an mp3 audio file to a WMA audio file; this generally works ok.
I’d like to speed up the track so that the outputted wma file plays at double (or x) speed, is this possible?
The feature existed in Windows Media Encoder; it was called 'Time Compression'. The description of Windows Media Encoder (search the page for "Applying time compression to your content") makes it sound like it would be ideal, but I never used it, so I can't say how effective it was one way or the other.
SOX has a simple tempo command line parameter which will "Adjust tempo without changing pitch (WSOLA alg.)" but obviously the preference is to use the "Windows Azure Media Encoder"
Scott,
No, this is not currently supported by the Azure Media Encoder. I assume you would want pitch correction as well? Please add your vote for it on our uservoice page here
ideally there would be pitch correction as well, but I could live without it to start with. I couldn't see an existing uservoice item that matched so I created one here: http://azuremediaservices.uservoice.com/forums/88965-media-services-feature-suggestions/suggestions/4323657-support-altering-playback-speed-tempo-as-part-of but let me know if one already exists and I'll move my votes to it.
| common-pile/stackexchange_filtered |
How to track object inside heap in node.js to find memory leak?
I have a memory leak, and I think I know where it is, but I don't know why it is happening.
Memory leak occurs while load-testing following endpoint (using restify.js server):
server.get('/test',function(req,res,next){
fetchSomeDataFromDB().done(function(err,rows){
res.json({ items: rows })
next()
})
})
I am pretty sure that the res object is not disposed of (by the garbage collector). On every request the memory used by the app grows. I have done an additional test:
var data = {}
for(var i = 0; i < 500; ++i) {
data['key'+i] = 'abcdefghijklmnoprstuwxyz1234567890_'+i
}
server.get('/test',function(req,res,next){
fetchSomeDataFromDB().done(function(err,rows){
res._someVar = _.extend({},data)
res.json({ items: rows })
next()
})
})
So on each request I am assigning a big object to the res object as an attribute. I observed that with this additional attribute memory grows much faster: about 100 MB per 1000 requests made over 60 seconds. After the next identical test memory grows by 100 MB again, and so on. Now that I know the res object is not "released", how can I track what is still keeping a reference to res? Let's say I perform a heap snapshot: how can I find what is referencing res?
Screenshot of a heap comparison between 10 requests:
Actually it seems that Instance.DAO is leaking? This class belongs to the ORM that I am using to query the DB... What do you think?
One more screenshot of the same comparison sorted by #delta:
It seems more likely that the GC hasn't collected the object yet since you are not leaking res anywhere in this code. Try running your script with the --expose-gc node argument and then set up an interval that periodically calls gc();. This will force the GC to run instead of being lazy.
If after that you find that are leaking memory for sure, you could use tools like the heapdump module to use the Chrome developer heap inspector to see what objects are taking up space.
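A sketch of the forced-GC suggestion above (this assumes the process was started with node --expose-gc; the interval length is arbitrary):

```javascript
// Periodically force a collection instead of waiting for the lazy GC,
// and log heap usage so a real leak shows up as steady growth.
var timer = setInterval(function () {
  if (global.gc) {
    global.gc(); // global.gc only exists when --expose-gc was passed
  }
  console.log('heap used (bytes):', process.memoryUsage().heapUsed);
}, 10000);

// Don't let this diagnostic timer keep the process alive on its own
timer.unref();
```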
I tried gc() already. If I call gc after each request, memory still increases, but more slowly because requests take much more time. I also tried calling gc after memory reaches 200 MB; that does not work, memory is still leaking. I also use a lot of middleware which may have a bug. I have been trying to solve this since last week without luck. I also inspected heapdumps in Chrome, but they do not tell me anything (arrays are increased?)
If you've already tried all of that, then I would say your memory leak is elsewhere, not with the block of code you've shown in the question.
Yes I suspect that for some reason res is referenced from some root and is not garbage collected. But I can't figure out what is referencing res... I can't put code here because it is quite big app now...
I updated my question with screenshot of heapdump comparison - what do you think about it?
| common-pile/stackexchange_filtered |
Can't construct a java object for tag:yaml.org,2002:
When I am trying to create an object from a data file, I am getting the following exception, even though the Asset class is present. I tried dump; it was able to dump the data, but when I tried to read the same data back I got the following exception:
[ConstructorException: null; Can't construct a java object for tag:yaml.org,2002:model.Asset; exception=Class not found: model.Asset]
File reader:
package utill;

import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.Constructor;

import java.io.*;
import java.util.*;

import model.*;

public class FileReaderUtill {
    public static List getAssest(String fileName) {
        LinkedHashMap<String, Asset> assest = null;
        List<Asset> data = new ArrayList<Asset>();
        try {
            InputStream input = new FileInputStream(new File("conf/datafile.yaml"));
            Yaml yaml = new Yaml();
            data = (List<Asset>) yaml.load(input);
            // System.out.println(assest.get("Asset0"));
        } catch (IOException e) {
            e.printStackTrace();
        }
        return data;
    }
}
Datafile.yaml
- !!model.Asset {cid: null, enable: '1', id: re, internalName: df, name: fd}
- !!model.Asset {cid: null, enable: '0', id: rexz, internalName: dxxf, name: fdxxx}
Assest.java
package model;
public class Asset {
public Asset(){
}
public Asset(String id,String cid,String name,String internalName,String enable ){
this.id=id;
this.name=name;
this.internalName=internalName;
this.enable=enable;
}
public String id;
public String cid;
public String name;
public String internalName;
public String enable;
}
Please help me solve this issue.
Is your Asset class actually in a file called Assest.java?
| common-pile/stackexchange_filtered |
Pandas: how to duplicate a row n times and insert at desired location of a dataframe?
Let's say I have a dataframe as follows:
index  col 1  col2
0      a01    a02
1      a11    a12
2      a21    a22
I want to duplicate a row n times and then insert the duplicated rows at a certain index. For example, duplicating row 0 two times and then inserting before the original row 0:
index  col 1  col2
0      a01    a02
1      a01    a02
2      a01    a02
3      a11    a12
4      a21    a22
What I'm doing now is creating an empty dataframe and then filling it with the values of the row that I want duplicated.
# create empty dataframe with 2 rows
temp_df = pd.DataFrame(columns=original_df.columns, index=list(range(2)))
# replacing each column by target value, I don't know how to do this more efficiently
temp_df.iloc[:,0] = original_df.iloc[0,0]
temp_df.iloc[:,1] = original_df.iloc[0,1]
temp_df.iloc[:,2] = original_df.iloc[0,2]
# concat the dataframes together
# to insert at indexes in the middle, I would have to slice the original_df and concat separately
original_df = pd.concat([temp_df, original_df])
This seems like a terribly obtuse way to do something I presume should be quite simple. How should I accomplish this more easily?
Does this answer your question? Is it possible to insert a row at an arbitrary position in a dataframe using pandas?
This could work: reset the index into a column so you can use it for sorting at the end. Take the row you want and concat it to the original df using np.repeat, then sort on the index column, drop it, and reset the index.
import pandas as pd
import numpy as np
df = pd.DataFrame({'index': [0, 1, 2],
'col 1': ['a01', 'a11', 'a21'],
'col2': ['a02', 'a12', 'a22']})
index_to_copy = 0
number_of_extra_copies = 2
pd.concat([df,
pd.DataFrame(np.repeat(df.iloc[[index_to_copy]].values,
number_of_extra_copies,
axis=0),
columns=df.columns)]).sort_values(by='index').drop(columns='index').reset_index(drop=True)
Output
col 1 col2
0 a01 a02
1 a01 a02
2 a01 a02
3 a11 a12
4 a21 a22
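An alternative sketch that avoids the concat-and-sort dance entirely: build a per-row repeat count and index with Index.repeat (this assumes the copies should sit adjacent to the original row, as in the example; the column names follow the question):

```python
import pandas as pd

df = pd.DataFrame({'col 1': ['a01', 'a11', 'a21'],
                   'col2': ['a02', 'a12', 'a22']})

row_to_copy = 0
extra_copies = 2  # copies in addition to the original row

# Each row's repeat count: the chosen row appears extra_copies + 1 times
repeats = [extra_copies + 1 if i == row_to_copy else 1 for i in range(len(df))]

result = df.loc[df.index.repeat(repeats)].reset_index(drop=True)
print(result)
```

df.index.repeat produces the positions [0, 0, 0, 1, 2], so df.loc materializes the duplicated rows in place without any sorting.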
| common-pile/stackexchange_filtered |