qid int64 1 74.7M | question stringlengths 15 58.3k | date stringlengths 10 10 | metadata list | response_j stringlengths 4 30.2k | response_k stringlengths 11 36.5k |
|---|---|---|---|---|---|
68,046,767 | I'm new to Python; please have a look at the code below:
```
n = 5
m = 5
mat = [[0]*m]*n
print(mat)
i = 0
while(i < m):
mat[0][i] = i
i += 1
print(mat)
```
This code gives the following output:
```
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
[[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]
```
In an equivalent C program, this would give the final output:
```
[[0, 1, 2, 3, 4], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
```
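A minimal sketch showing how to get that C-style output: build each row independently instead of replicating one row (this fix is editorial, not part of the original post):

```python
n = 5
m = 5
# A list comprehension builds n independent rows; [[0]*m]*n instead
# stores n references to the SAME row, so mat[0][i] = i shows up in
# every row when the matrix is printed.
mat = [[0] * m for _ in range(n)]
for i in range(m):
    mat[0][i] = i
print(mat)
```

Here every row is a distinct list, so assigning through `mat[0]` leaves the other rows untouched.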
I want this output, but I can't get it: in the loop I'm only accessing the **0th row via mat[0][i]**, yet Python changes all the rows. How am I supposed to get this output? And if I'm doing something wrong, why does the matrix print correctly during traversal? Please explain how accessing the matrix differs in Python. | 2021/06/19 | [
"https://Stackoverflow.com/questions/68046767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13275764/"
] | I have used the SafetyNet API to check the device's runtime environment. I keep the app's signing certificate on the server to verify its SHA-256 against what we get in the SafetyNet response. Below are the steps; refer to them if they apply to you too.
1. Get the SHA-256 fingerprint of the signing X509Certificate:
```
MessageDigest md = MessageDigest.getInstance("SHA-256");
byte[] der = cert.getEncoded();
md.update(der);
byte[] sha256 = md.digest();
```
2. Encode sha256 as a Base64 string:
```
String checksum = Base64.getEncoder().encodeToString(sha256);
```
3. Match **checksum** with **apkCertificateDigestSha256** of the SafetyNet response. | Now Google has deprecated the SafetyNet API and introduced the Play Integrity API for attestation. The Play Integrity service provides a response like the following:
```
{
"tokenPayloadExternal": {
"accountDetails": {
"appLicensingVerdict": "LICENSED"
},
"appIntegrity": {
"appRecognitionVerdict": "PLAY_RECOGNIZED",
"certificateSha256Digest": ["pnpa8e8eCArtvmaf49bJE1f5iG5-XLSU6w1U9ZvI96g"],
"packageName": "com.test.android.safetynetsample",
"versionCode": "4"
},
"deviceIntegrity": {
"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]
},
"requestDetails": {
"nonce": "SafetyNetSample1654058651834",
"requestPackageName": "com.test.android.safetynetsample",
"timestampMillis": "1654058657132"
}
}}
```
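Illustratively, the checks a server might run over this payload can be sketched in Python (field names are taken from the sample above; which verdicts to require is an assumption, not prescribed by the answer):

```python
import json

# Trimmed copy of the sample Play Integrity payload shown above.
sample = json.loads("""
{
  "tokenPayloadExternal": {
    "appIntegrity": {
      "appRecognitionVerdict": "PLAY_RECOGNIZED",
      "certificateSha256Digest": ["pnpa8e8eCArtvmaf49bJE1f5iG5-XLSU6w1U9ZvI96g"],
      "packageName": "com.test.android.safetynetsample",
      "versionCode": "4"
    },
    "deviceIntegrity": {
      "deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]
    }
  }
}
""")

def verdict_ok(payload, expected_digest, expected_package):
    # Accept only Play-recognized builds on devices meeting integrity,
    # whose signing-cert digest matches the one we computed ourselves.
    app = payload["tokenPayloadExternal"]["appIntegrity"]
    dev = payload["tokenPayloadExternal"]["deviceIntegrity"]
    return (app["appRecognitionVerdict"] == "PLAY_RECOGNIZED"
            and expected_digest in app["certificateSha256Digest"]
            and app["packageName"] == expected_package
            and "MEETS_DEVICE_INTEGRITY" in dev["deviceRecognitionVerdict"])

print(verdict_ok(sample,
                 "pnpa8e8eCArtvmaf49bJE1f5iG5-XLSU6w1U9ZvI96g",
                 "com.test.android.safetynetsample"))  # → True
```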
The response contains only *certificateSha256Digest* (the SHA-256 digest of the app's certificates) instead of *apkDigestSha256* and *apkCertificateDigestSha256*.
How do we validate the received *certificateSha256Digest* on the server?
If the app is deployed on the Google Play Store, follow the steps below.
Download the *App signing key certificate* from the Google Play Console (if you are using a managed signing key); otherwise download the *Upload key certificate*. Then find the checksum of the certificate.
```
public static Certificate getCertificate(String certificatePath) throws Exception {
    CertificateFactory certificateFactory = CertificateFactory.getInstance("X.509");
FileInputStream in = new FileInputStream(certificatePath);
Certificate certificate = certificateFactory.generateCertificate(in);
in.close();
return certificate;
}
```
Generate the checksum of the certificate:
```
Certificate x509Cert = getCertificate("<Path of file>/deployment_cert.der");
MessageDigest md = MessageDigest.getInstance("SHA-256");
byte[] x509Der = x509Cert.getEncoded();
md.update(x509Der);
byte[] sha256 = md.digest();
String checksum = Base64.getUrlEncoder().withoutPadding().encodeToString(sha256); // Play Integrity digests are base64url without padding
```
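For illustration, the same digest computation can be sketched in Python with the standard library (`hashlib`, `base64`); the DER bytes passed in are a placeholder, not a real certificate:

```python
import base64
import hashlib

def cert_digest(der_bytes: bytes) -> str:
    # SHA-256 of the DER-encoded certificate, base64url-encoded without
    # padding -- the format certificateSha256Digest values appear in
    # (note the '-' and '_' alphabet in the sample digest above).
    sha256 = hashlib.sha256(der_bytes).digest()
    return base64.urlsafe_b64encode(sha256).rstrip(b"=").decode("ascii")

print(cert_digest(b"placeholder DER bytes"))
```

A SHA-256 digest is 32 bytes, so the unpadded base64url string is always 43 characters long.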
Then compare the checksum with the received *certificateSha256Digest*:
```
List<String> digests = jwsResponse.tokenPayloadExternal.appIntegrity.certificateSha256Digest;
if (digests.contains(checksum)) {
    // the app's signing certificate matches the attested digest
}
``` |
279,678 | Will it be slower to place functions' definitions inside a main function?
I usually do that if the subfunctions are short.
However, with a long subfunction I usually place it outside, as I think that makes it easier to read and faster to run.
Now I want to use the first method below, as I want to make the code self-contained and easier to manage when I copy-paste and re-use it.
If I place the subfunctions outside, I often miscopy some of them and the code doesn't run properly. But I'm worried that this approach is slower and harder to read.
I understand the local/global effect, but I want to focus on speed and readability here.
**Method 1:**
Place `function1` and `function2` inside the main function.
```
myFunction[parameters_] := Module[{},
function1[parameters1_] := Module[{},
(* a long function*)
do something here
];
function2[parameters2_] := Module[{},
(* a short function*)
do something here
];
(*use function1 and function2 to do something more*)
]
```
**Method 2:**
```
function1[parameters1_] := Module[{},
(* a long function*)
do something here
];
function2[parameters2_] := Module[{},
(* a short function*)
do something here
];
myFunction[parameters_] := Module[{},
(*use function1 and function2 to do something more*)
]
``` | 2023/02/07 | [
"https://mathematica.stackexchange.com/questions/279678",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/87122/"
] | We may first create all k-subsets of 1..n.
Then we need to select n/k subsets that have no common elements. To this end we define a function that, given a collection of subsets, selects a further subset that shares no element with them; this repeats recursively until there are n/k subsets. We then feed the first subset to our routine, then the second, etc., until we have fed all subsets starting with 1. Finally we join all the results.
```
main[n_, k_] := Module[{d = Range[n], p, step},
If[! Divisible[n, k], Print["n not divisible by k."]; Return[]];
p = Subsets[d, {k}];
step[e1_] := Module[{es = Flatten[e1], new, res,fun},
fun[e_] := (
new = Select[p, (! IntersectingQ[Flatten[{e}], #]) &];
Append[e, #] & /@ new);
res = Flatten[fun /@ e1, 1];
If[Length[res[[1]]] == n/k, res, step[res]]
];
Join @@ Reap[
Do[
Sow[step[{{p[[1]]}}]];
p = Rest[p];
, Binomial[n - 1, k - 1]]][[2, 1]]
]
```
Now for a test:
```
main[4, 2]
{{{1, 2}, {3, 4}}, {{1, 3}, {2, 4}}, {{1, 4}, {2, 3}}}
```
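The recursive selection above can be sketched in Python for comparison (a hypothetical helper, not from the answer). It emits each unordered partition exactly once by always placing the smallest remaining element first, so for larger inputs it returns fewer entries than `main`, which also lists reorderings of the later blocks:

```python
from itertools import combinations

def partitions_k(elems, k):
    # Partition the list `elems` into blocks of size k, always putting
    # the smallest remaining element into the next block so that each
    # unordered partition is generated exactly once.
    if not elems:
        return [[]]
    first, rest = elems[0], elems[1:]
    out = []
    for others in combinations(rest, k - 1):
        block = (first,) + others
        remaining = [e for e in rest if e not in others]
        for tail in partitions_k(remaining, k):
            out.append([block] + tail)
    return out

print(partitions_k([1, 2, 3, 4], 2))
# → [[(1, 2), (3, 4)], [(1, 3), (2, 4)], [(1, 4), (2, 3)]]
```

For n=4, k=2 this matches the three results of `main[4, 2]` above.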
Or:
```
main[6, 3]
{{{1, 2, 3}, {4, 5, 6}}, {{1, 2, 4}, {3, 5, 6}}, {{1, 2, 5}, {3, 4,
6}}, {{1, 2, 6}, {3, 4, 5}}, {{1, 3, 4}, {2, 5, 6}}, {{1, 3,
5}, {2, 4, 6}}, {{1, 3, 6}, {2, 4, 5}}, {{1, 4, 5}, {2, 3,
6}}, {{1, 4, 6}, {2, 3, 5}}, {{1, 5, 6}, {2, 3, 4}}}
```
Or:
```
main[6, 2]
{{{1, 2}, {3, 4}, {5, 6}}, {{1, 2}, {3, 5}, {4, 6}}, {{1, 2}, {3,
6}, {4, 5}}, {{1, 2}, {4, 5}, {3, 6}}, {{1, 2}, {4, 6}, {3,
5}}, {{1, 2}, {5, 6}, {3, 4}}, {{1, 3}, {2, 4}, {5, 6}}, {{1,
3}, {2, 5}, {4, 6}}, {{1, 3}, {2, 6}, {4, 5}}, {{1, 3}, {4, 5}, {2,
6}}, {{1, 3}, {4, 6}, {2, 5}}, {{1, 3}, {5, 6}, {2, 4}}, {{1,
4}, {2, 3}, {5, 6}}, {{1, 4}, {2, 5}, {3, 6}}, {{1, 4}, {2, 6}, {3,
5}}, {{1, 4}, {3, 5}, {2, 6}}, {{1, 4}, {3, 6}, {2, 5}}, {{1,
4}, {5, 6}, {2, 3}}, {{1, 5}, {2, 3}, {4, 6}}, {{1, 5}, {2, 4}, {3,
6}}, {{1, 5}, {2, 6}, {3, 4}}, {{1, 5}, {3, 4}, {2, 6}}, {{1,
5}, {3, 6}, {2, 4}}, {{1, 5}, {4, 6}, {2, 3}}, {{1, 6}, {2, 3}, {4,
5}}, {{1, 6}, {2, 4}, {3, 5}}, {{1, 6}, {2, 5}, {3, 4}}, {{1,
6}, {3, 4}, {2, 5}}, {{1, 6}, {3, 5}, {2, 4}}, {{1, 6}, {4, 5}, {2,
3}}}
``` | I define `permutationsNk[n,k]` for `Integer` arguments, where `k` divides into `n` (or, alternatively, where `k` belongs to the `Divisors` of `n`).
Evaluating, e.g., `res=permutationsNk[4,2]` returns
[](https://i.stack.imgur.com/LMt3r.png)
Looking inside the [`Iconize`](https://reference.wolfram.com/language/ref/Iconize.html)d object, one finds
```
res // First
```
[](https://i.stack.imgur.com/b0e7q.png)
Following along, down the depth of the result, one finds that the first iconized sub-element (of the top-level result) evaluates to
```
res // First/*(Part[#, 1] &)/*First
```
[](https://i.stack.imgur.com/B34Ob.png)
which can be further shown to be
```
res // First/*(Part[#, 1] &)/*First/*(Part[#, 1] &)/*First
```
[](https://i.stack.imgur.com/Rr8lB.png)
Similarly, the second and third iconized sub-elements of the top-level result evaluate to
[](https://i.stack.imgur.com/C4P2y.png)
and
[](https://i.stack.imgur.com/b15JH.png)
respectively.
It might have been easier or more convenient to evaluate the result all at once, using something like
```
res // First/*Map[First/*Map[First]]/*Apply[Join]
```
[](https://i.stack.imgur.com/Dkx65.png)
however, the results for the general case ($n$ and $k$ integers and $k|n$) can be *too many* in number, and an approach like the one above might temporarily freeze or hang the notebook.
This is not to say that accessing the results in a piecemeal fashion is a definite workaround for the combinatorial explosion in the number of results, but it does seem to help for *reasonable* input arguments (I haven't tried anything above $n=20$, so I could be wrong even for moderate values of $n$, but I'm confident that's not the case).
Just to provide *another* **example**, evaluating `res=permutationsNk[8,4]` and using the bulk way (the one-liner presented above) to access the results (again, which *might* cause trouble with *larger* values for $n$) returns
[](https://i.stack.imgur.com/uYam2.png)
Before providing the code, I will produce another example, for the case `permutationsNk[12,6]`; this time the result is provided in plain-text format and can be used for testing purposes:
```
permutationsNk[12,6]
```
evaluates to
```
{{{1,2,3,4,5,6},{2,3,4,5,6,7},{3,4,5,6,7,8},{4,5,6,7,8,9},{5,6,7,8,9,10},{6,7,8,9,10,11},{7,8,9,10,11,12}},{{1,2,3,4,5,7},{2,3,4,5,6,8},{3,4,5,6,7,9},{4,5,6,7,8,10},{5,6,7,8,9,11},{6,7,8,9,10,12}},{{1,2,3,4,6,7},{2,3,4,5,7,8},{3,4,5,6,8,9},{4,5,6,7,9,10},{5,6,7,8,10,11},{6,7,8,9,11,12}},{{1,2,3,5,6,7},{2,3,4,6,7,8},{3,4,5,7,8,9},{4,5,6,8,9,10},{5,6,7,9,10,11},{6,7,8,10,11,12}},{{1,2,4,5,6,7},{2,3,5,6,7,8},{3,4,6,7,8,9},{4,5,7,8,9,10},{5,6,8,9,10,11},{6,7,9,10,11,12}},{{1,3,4,5,6,7},{2,4,5,6,7,8},{3,5,6,7,8,9},{4,6,7,8,9,10},{5,7,8,9,10,11},{6,8,9,10,11,12}},{{1,2,3,4,5,8},{2,3,4,5,6,9},{3,4,5,6,7,10},{4,5,6,7,8,11},{5,6,7,8,9,12}},{{1,2,3,4,6,8},{2,3,4,5,7,9},{3,4,5,6,8,10},{4,5,6,7,9,11},{5,6,7,8,10,12}},{{1,2,3,4,7,8},{2,3,4,5,8,9},{3,4,5,6,9,10},{4,5,6,7,10,11},{5,6,7,8,11,12}},{{1,2,3,5,6,8},{2,3,4,6,7,9},{3,4,5,7,8,10},{4,5,6,8,9,11},{5,6,7,9,10,12}},{{1,2,3,5,7,8},{2,3,4,6,8,9},{3,4,5,7,9,10},{4,5,6,8,10,11},{5,6,7,9,11,12}},{{1,2,3,6,7,8},{2,3,4,7,8,9},{3,4,5,8,9,10},{4,5,6,9,10,11},{5,6,7,10,11,12}},{{1,2,4,5,6,8},{2,3,5,6,7,9},{3,4,6,7,8,10},{4,5,7,8,9,11},{5,6,8,9,10,12}},{{1,2,4,5,7,8},{2,3,5,6,8,9},{3,4,6,7,9,10},{4,5,7,8,10,11},{5,6,8,9,11,12}},{{1,2,4,6,7,8},{2,3,5,7,8,9},{3,4,6,8,9,10},{4,5,7,9,10,11},{5,6,8,10,11,12}},{{1,2,5,6,7,8},{2,3,6,7,8,9},{3,4,7,8,9,10},{4,5,8,9,10,11},{5,6,9,10,11,12}},{{1,3,4,5,6,8},{2,4,5,6,7,9},{3,5,6,7,8,10},{4,6,7,8,9,11},{5,7,8,9,10,12}},{{1,3,4,5,7,8},{2,4,5,6,8,9},{3,5,6,7,9,10},{4,6,7,8,10,11},{5,7,8,9,11,12}},{{1,3,4,6,7,8},{2,4,5,7,8,9},{3,5,6,8,9,10},{4,6,7,9,10,11},{5,7,8,10,11,12}},{{1,3,5,6,7,8},{2,4,6,7,8,9},{3,5,7,8,9,10},{4,6,8,9,10,11},{5,7,9,10,11,12}},{{1,4,5,6,7,8},{2,5,6,7,8,9},{3,6,7,8,9,10},{4,7,8,9,10,11},{5,8,9,10,11,12}},{{1,2,3,4,5,9},{2,3,4,5,6,10},{3,4,5,6,7,11},{4,5,6,7,8,12}},{{1,2,3,4,6,9},{2,3,4,5,7,10},{3,4,5,6,8,11},{4,5,6,7,9,12}},{{1,2,3,4,7,9},{2,3,4,5,8,10},{3,4,5,6,9,11},{4,5,6,7,10,12}},{{1,2,3,4,8,9},{2,3,4,5,9,10},{3,4,5,6,10,11},{4,5,6,7,11,12}},{{1,2,3,5,6,9},{2,3,4,6,7,10},{3,4
,5,7,8,11},{4,5,6,8,9,12}},{{1,2,3,5,7,9},{2,3,4,6,8,10},{3,4,5,7,9,11},{4,5,6,8,10,12}},{{1,2,3,5,8,9},{2,3,4,6,9,10},{3,4,5,7,10,11},{4,5,6,8,11,12}},{{1,2,3,6,7,9},{2,3,4,7,8,10},{3,4,5,8,9,11},{4,5,6,9,10,12}},{{1,2,3,6,8,9},{2,3,4,7,9,10},{3,4,5,8,10,11},{4,5,6,9,11,12}},{{1,2,3,7,8,9},{2,3,4,8,9,10},{3,4,5,9,10,11},{4,5,6,10,11,12}},{{1,2,4,5,6,9},{2,3,5,6,7,10},{3,4,6,7,8,11},{4,5,7,8,9,12}},{{1,2,4,5,7,9},{2,3,5,6,8,10},{3,4,6,7,9,11},{4,5,7,8,10,12}},{{1,2,4,5,8,9},{2,3,5,6,9,10},{3,4,6,7,10,11},{4,5,7,8,11,12}},{{1,2,4,6,7,9},{2,3,5,7,8,10},{3,4,6,8,9,11},{4,5,7,9,10,12}},{{1,2,4,6,8,9},{2,3,5,7,9,10},{3,4,6,8,10,11},{4,5,7,9,11,12}},{{1,2,4,7,8,9},{2,3,5,8,9,10},{3,4,6,9,10,11},{4,5,7,10,11,12}},{{1,2,5,6,7,9},{2,3,6,7,8,10},{3,4,7,8,9,11},{4,5,8,9,10,12}},{{1,2,5,6,8,9},{2,3,6,7,9,10},{3,4,7,8,10,11},{4,5,8,9,11,12}},{{1,2,5,7,8,9},{2,3,6,8,9,10},{3,4,7,9,10,11},{4,5,8,10,11,12}},{{1,2,6,7,8,9},{2,3,7,8,9,10},{3,4,8,9,10,11},{4,5,9,10,11,12}},{{1,3,4,5,6,9},{2,4,5,6,7,10},{3,5,6,7,8,11},{4,6,7,8,9,12}},{{1,3,4,5,7,9},{2,4,5,6,8,10},{3,5,6,7,9,11},{4,6,7,8,10,12}},{{1,3,4,5,8,9},{2,4,5,6,9,10},{3,5,6,7,10,11},{4,6,7,8,11,12}},{{1,3,4,6,7,9},{2,4,5,7,8,10},{3,5,6,8,9,11},{4,6,7,9,10,12}},{{1,3,4,6,8,9},{2,4,5,7,9,10},{3,5,6,8,10,11},{4,6,7,9,11,12}},{{1,3,4,7,8,9},{2,4,5,8,9,10},{3,5,6,9,10,11},{4,6,7,10,11,12}},{{1,3,5,6,7,9},{2,4,6,7,8,10},{3,5,7,8,9,11},{4,6,8,9,10,12}},{{1,3,5,6,8,9},{2,4,6,7,9,10},{3,5,7,8,10,11},{4,6,8,9,11,12}},{{1,3,5,7,8,9},{2,4,6,8,9,10},{3,5,7,9,10,11},{4,6,8,10,11,12}},{{1,3,6,7,8,9},{2,4,7,8,9,10},{3,5,8,9,10,11},{4,6,9,10,11,12}},{{1,4,5,6,7,9},{2,5,6,7,8,10},{3,6,7,8,9,11},{4,7,8,9,10,12}},{{1,4,5,6,8,9},{2,5,6,7,9,10},{3,6,7,8,10,11},{4,7,8,9,11,12}},{{1,4,5,7,8,9},{2,5,6,8,9,10},{3,6,7,9,10,11},{4,7,8,10,11,12}},{{1,4,6,7,8,9},{2,5,7,8,9,10},{3,6,8,9,10,11},{4,7,9,10,11,12}},{{1,5,6,7,8,9},{2,6,7,8,9,10},{3,7,8,9,10,11},{4,8,9,10,11,12}},{{1,2,3,4,5,10},{2,3,4,5,6,11},{3,4,5,6,7,12}},{{1,2,3,4,6,10},{2,3,4,5,7,11},{3,4,5,6
,8,12}},{{1,2,3,4,7,10},{2,3,4,5,8,11},{3,4,5,6,9,12}},{{1,2,3,4,8,10},{2,3,4,5,9,11},{3,4,5,6,10,12}},{{1,2,3,4,9,10},{2,3,4,5,10,11},{3,4,5,6,11,12}},{{1,2,3,5,6,10},{2,3,4,6,7,11},{3,4,5,7,8,12}},{{1,2,3,5,7,10},{2,3,4,6,8,11},{3,4,5,7,9,12}},{{1,2,3,5,8,10},{2,3,4,6,9,11},{3,4,5,7,10,12}},{{1,2,3,5,9,10},{2,3,4,6,10,11},{3,4,5,7,11,12}},{{1,2,3,6,7,10},{2,3,4,7,8,11},{3,4,5,8,9,12}},{{1,2,3,6,8,10},{2,3,4,7,9,11},{3,4,5,8,10,12}},{{1,2,3,6,9,10},{2,3,4,7,10,11},{3,4,5,8,11,12}},{{1,2,3,7,8,10},{2,3,4,8,9,11},{3,4,5,9,10,12}},{{1,2,3,7,9,10},{2,3,4,8,10,11},{3,4,5,9,11,12}},{{1,2,3,8,9,10},{2,3,4,9,10,11},{3,4,5,10,11,12}},{{1,2,4,5,6,10},{2,3,5,6,7,11},{3,4,6,7,8,12}},{{1,2,4,5,7,10},{2,3,5,6,8,11},{3,4,6,7,9,12}},{{1,2,4,5,8,10},{2,3,5,6,9,11},{3,4,6,7,10,12}},{{1,2,4,5,9,10},{2,3,5,6,10,11},{3,4,6,7,11,12}},{{1,2,4,6,7,10},{2,3,5,7,8,11},{3,4,6,8,9,12}},{{1,2,4,6,8,10},{2,3,5,7,9,11},{3,4,6,8,10,12}},{{1,2,4,6,9,10},{2,3,5,7,10,11},{3,4,6,8,11,12}},{{1,2,4,7,8,10},{2,3,5,8,9,11},{3,4,6,9,10,12}},{{1,2,4,7,9,10},{2,3,5,8,10,11},{3,4,6,9,11,12}},{{1,2,4,8,9,10},{2,3,5,9,10,11},{3,4,6,10,11,12}},{{1,2,5,6,7,10},{2,3,6,7,8,11},{3,4,7,8,9,12}},{{1,2,5,6,8,10},{2,3,6,7,9,11},{3,4,7,8,10,12}},{{1,2,5,6,9,10},{2,3,6,7,10,11},{3,4,7,8,11,12}},{{1,2,5,7,8,10},{2,3,6,8,9,11},{3,4,7,9,10,12}},{{1,2,5,7,9,10},{2,3,6,8,10,11},{3,4,7,9,11,12}},{{1,2,5,8,9,10},{2,3,6,9,10,11},{3,4,7,10,11,12}},{{1,2,6,7,8,10},{2,3,7,8,9,11},{3,4,8,9,10,12}},{{1,2,6,7,9,10},{2,3,7,8,10,11},{3,4,8,9,11,12}},{{1,2,6,8,9,10},{2,3,7,9,10,11},{3,4,8,10,11,12}},{{1,2,7,8,9,10},{2,3,8,9,10,11},{3,4,9,10,11,12}},{{1,3,4,5,6,10},{2,4,5,6,7,11},{3,5,6,7,8,12}},{{1,3,4,5,7,10},{2,4,5,6,8,11},{3,5,6,7,9,12}},{{1,3,4,5,8,10},{2,4,5,6,9,11},{3,5,6,7,10,12}},{{1,3,4,5,9,10},{2,4,5,6,10,11},{3,5,6,7,11,12}},{{1,3,4,6,7,10},{2,4,5,7,8,11},{3,5,6,8,9,12}},{{1,3,4,6,8,10},{2,4,5,7,9,11},{3,5,6,8,10,12}},{{1,3,4,6,9,10},{2,4,5,7,10,11},{3,5,6,8,11,12}},{{1,3,4,7,8,10},{2,4,5,8,9,11},{3,5,6,9,10,12}},{{1,3,4,7,9,1
0},{2,4,5,8,10,11},{3,5,6,9,11,12}},{{1,3,4,8,9,10},{2,4,5,9,10,11},{3,5,6,10,11,12}},{{1,3,5,6,7,10},{2,4,6,7,8,11},{3,5,7,8,9,12}},{{1,3,5,6,8,10},{2,4,6,7,9,11},{3,5,7,8,10,12}},{{1,3,5,6,9,10},{2,4,6,7,10,11},{3,5,7,8,11,12}},{{1,3,5,7,8,10},{2,4,6,8,9,11},{3,5,7,9,10,12}},{{1,3,5,7,9,10},{2,4,6,8,10,11},{3,5,7,9,11,12}},{{1,3,5,8,9,10},{2,4,6,9,10,11},{3,5,7,10,11,12}},{{1,3,6,7,8,10},{2,4,7,8,9,11},{3,5,8,9,10,12}},{{1,3,6,7,9,10},{2,4,7,8,10,11},{3,5,8,9,11,12}},{{1,3,6,8,9,10},{2,4,7,9,10,11},{3,5,8,10,11,12}},{{1,3,7,8,9,10},{2,4,8,9,10,11},{3,5,9,10,11,12}},{{1,4,5,6,7,10},{2,5,6,7,8,11},{3,6,7,8,9,12}},{{1,4,5,6,8,10},{2,5,6,7,9,11},{3,6,7,8,10,12}},{{1,4,5,6,9,10},{2,5,6,7,10,11},{3,6,7,8,11,12}},{{1,4,5,7,8,10},{2,5,6,8,9,11},{3,6,7,9,10,12}},{{1,4,5,7,9,10},{2,5,6,8,10,11},{3,6,7,9,11,12}},{{1,4,5,8,9,10},{2,5,6,9,10,11},{3,6,7,10,11,12}},{{1,4,6,7,8,10},{2,5,7,8,9,11},{3,6,8,9,10,12}},{{1,4,6,7,9,10},{2,5,7,8,10,11},{3,6,8,9,11,12}},{{1,4,6,8,9,10},{2,5,7,9,10,11},{3,6,8,10,11,12}},{{1,4,7,8,9,10},{2,5,8,9,10,11},{3,6,9,10,11,12}},{{1,5,6,7,8,10},{2,6,7,8,9,11},{3,7,8,9,10,12}},{{1,5,6,7,9,10},{2,6,7,8,10,11},{3,7,8,9,11,12}},{{1,5,6,8,9,10},{2,6,7,9,10,11},{3,7,8,10,11,12}},{{1,5,7,8,9,10},{2,6,8,9,10,11},{3,7,9,10,11,12}},{{1,6,7,8,9,10},{2,7,8,9,10,11},{3,8,9,10,11,12}},{{1,2,3,4,5,11},{2,3,4,5,6,12}},{{1,2,3,4,6,11},{2,3,4,5,7,12}},{{1,2,3,4,7,11},{2,3,4,5,8,12}},{{1,2,3,4,8,11},{2,3,4,5,9,12}},{{1,2,3,4,9,11},{2,3,4,5,10,12}},{{1,2,3,4,10,11},{2,3,4,5,11,12}},{{1,2,3,5,6,11},{2,3,4,6,7,12}},{{1,2,3,5,7,11},{2,3,4,6,8,12}},{{1,2,3,5,8,11},{2,3,4,6,9,12}},{{1,2,3,5,9,11},{2,3,4,6,10,12}},{{1,2,3,5,10,11},{2,3,4,6,11,12}},{{1,2,3,6,7,11},{2,3,4,7,8,12}},{{1,2,3,6,8,11},{2,3,4,7,9,12}},{{1,2,3,6,9,11},{2,3,4,7,10,12}},{{1,2,3,6,10,11},{2,3,4,7,11,12}},{{1,2,3,7,8,11},{2,3,4,8,9,12}},{{1,2,3,7,9,11},{2,3,4,8,10,12}},{{1,2,3,7,10,11},{2,3,4,8,11,12}},{{1,2,3,8,9,11},{2,3,4,9,10,12}},{{1,2,3,8,10,11},{2,3,4,9,11,12}},{{1,2,3,9,10,11},{2,3,4,10,11,12}},{
{1,2,4,5,6,11},{2,3,5,6,7,12}},{{1,2,4,5,7,11},{2,3,5,6,8,12}},{{1,2,4,5,8,11},{2,3,5,6,9,12}},{{1,2,4,5,9,11},{2,3,5,6,10,12}},{{1,2,4,5,10,11},{2,3,5,6,11,12}},{{1,2,4,6,7,11},{2,3,5,7,8,12}},{{1,2,4,6,8,11},{2,3,5,7,9,12}},{{1,2,4,6,9,11},{2,3,5,7,10,12}},{{1,2,4,6,10,11},{2,3,5,7,11,12}},{{1,2,4,7,8,11},{2,3,5,8,9,12}},{{1,2,4,7,9,11},{2,3,5,8,10,12}},{{1,2,4,7,10,11},{2,3,5,8,11,12}},{{1,2,4,8,9,11},{2,3,5,9,10,12}},{{1,2,4,8,10,11},{2,3,5,9,11,12}},{{1,2,4,9,10,11},{2,3,5,10,11,12}},{{1,2,5,6,7,11},{2,3,6,7,8,12}},{{1,2,5,6,8,11},{2,3,6,7,9,12}},{{1,2,5,6,9,11},{2,3,6,7,10,12}},{{1,2,5,6,10,11},{2,3,6,7,11,12}},{{1,2,5,7,8,11},{2,3,6,8,9,12}},{{1,2,5,7,9,11},{2,3,6,8,10,12}},{{1,2,5,7,10,11},{2,3,6,8,11,12}},{{1,2,5,8,9,11},{2,3,6,9,10,12}},{{1,2,5,8,10,11},{2,3,6,9,11,12}},{{1,2,5,9,10,11},{2,3,6,10,11,12}},{{1,2,6,7,8,11},{2,3,7,8,9,12}},{{1,2,6,7,9,11},{2,3,7,8,10,12}},{{1,2,6,7,10,11},{2,3,7,8,11,12}},{{1,2,6,8,9,11},{2,3,7,9,10,12}},{{1,2,6,8,10,11},{2,3,7,9,11,12}},{{1,2,6,9,10,11},{2,3,7,10,11,12}},{{1,2,7,8,9,11},{2,3,8,9,10,12}},{{1,2,7,8,10,11},{2,3,8,9,11,12}},{{1,2,7,9,10,11},{2,3,8,10,11,12}},{{1,2,8,9,10,11},{2,3,9,10,11,12}},{{1,3,4,5,6,11},{2,4,5,6,7,12}},{{1,3,4,5,7,11},{2,4,5,6,8,12}},{{1,3,4,5,8,11},{2,4,5,6,9,12}},{{1,3,4,5,9,11},{2,4,5,6,10,12}},{{1,3,4,5,10,11},{2,4,5,6,11,12}},{{1,3,4,6,7,11},{2,4,5,7,8,12}},{{1,3,4,6,8,11},{2,4,5,7,9,12}},{{1,3,4,6,9,11},{2,4,5,7,10,12}},{{1,3,4,6,10,11},{2,4,5,7,11,12}},{{1,3,4,7,8,11},{2,4,5,8,9,12}},{{1,3,4,7,9,11},{2,4,5,8,10,12}},{{1,3,4,7,10,11},{2,4,5,8,11,12}},{{1,3,4,8,9,11},{2,4,5,9,10,12}},{{1,3,4,8,10,11},{2,4,5,9,11,12}},{{1,3,4,9,10,11},{2,4,5,10,11,12}},{{1,3,5,6,7,11},{2,4,6,7,8,12}},{{1,3,5,6,8,11},{2,4,6,7,9,12}},{{1,3,5,6,9,11},{2,4,6,7,10,12}},{{1,3,5,6,10,11},{2,4,6,7,11,12}},{{1,3,5,7,8,11},{2,4,6,8,9,12}},{{1,3,5,7,9,11},{2,4,6,8,10,12}},{{1,3,5,7,10,11},{2,4,6,8,11,12}},{{1,3,5,8,9,11},{2,4,6,9,10,12}},{{1,3,5,8,10,11},{2,4,6,9,11,12}},{{1,3,5,9,10,11},{2,4,6,10,11,12}},{{1,3,6,7
,8,11},{2,4,7,8,9,12}},{{1,3,6,7,9,11},{2,4,7,8,10,12}},{{1,3,6,7,10,11},{2,4,7,8,11,12}},{{1,3,6,8,9,11},{2,4,7,9,10,12}},{{1,3,6,8,10,11},{2,4,7,9,11,12}},{{1,3,6,9,10,11},{2,4,7,10,11,12}},{{1,3,7,8,9,11},{2,4,8,9,10,12}},{{1,3,7,8,10,11},{2,4,8,9,11,12}},{{1,3,7,9,10,11},{2,4,8,10,11,12}},{{1,3,8,9,10,11},{2,4,9,10,11,12}},{{1,4,5,6,7,11},{2,5,6,7,8,12}},{{1,4,5,6,8,11},{2,5,6,7,9,12}},{{1,4,5,6,9,11},{2,5,6,7,10,12}},{{1,4,5,6,10,11},{2,5,6,7,11,12}},{{1,4,5,7,8,11},{2,5,6,8,9,12}},{{1,4,5,7,9,11},{2,5,6,8,10,12}},{{1,4,5,7,10,11},{2,5,6,8,11,12}},{{1,4,5,8,9,11},{2,5,6,9,10,12}},{{1,4,5,8,10,11},{2,5,6,9,11,12}},{{1,4,5,9,10,11},{2,5,6,10,11,12}},{{1,4,6,7,8,11},{2,5,7,8,9,12}},{{1,4,6,7,9,11},{2,5,7,8,10,12}},{{1,4,6,7,10,11},{2,5,7,8,11,12}},{{1,4,6,8,9,11},{2,5,7,9,10,12}},{{1,4,6,8,10,11},{2,5,7,9,11,12}},{{1,4,6,9,10,11},{2,5,7,10,11,12}},{{1,4,7,8,9,11},{2,5,8,9,10,12}},{{1,4,7,8,10,11},{2,5,8,9,11,12}},{{1,4,7,9,10,11},{2,5,8,10,11,12}},{{1,4,8,9,10,11},{2,5,9,10,11,12}},{{1,5,6,7,8,11},{2,6,7,8,9,12}},{{1,5,6,7,9,11},{2,6,7,8,10,12}},{{1,5,6,7,10,11},{2,6,7,8,11,12}},{{1,5,6,8,9,11},{2,6,7,9,10,12}},{{1,5,6,8,10,11},{2,6,7,9,11,12}},{{1,5,6,9,10,11},{2,6,7,10,11,12}},{{1,5,7,8,9,11},{2,6,8,9,10,12}},{{1,5,7,8,10,11},{2,6,8,9,11,12}},{{1,5,7,9,10,11},{2,6,8,10,11,12}},{{1,5,8,9,10,11},{2,6,9,10,11,12}},{{1,6,7,8,9,11},{2,7,8,9,10,12}},{{1,6,7,8,10,11},{2,7,8,9,11,12}},{{1,6,7,9,10,11},{2,7,8,10,11,12}},{{1,6,8,9,10,11},{2,7,9,10,11,12}},{{1,7,8,9,10,11},{2,8,9,10,11,12}},{{1,2,3,4,5,12}},{{1,2,3,4,6,12}},{{1,2,3,4,7,12}},{{1,2,3,4,8,12}},{{1,2,3,4,9,12}},{{1,2,3,4,10,12}},{{1,2,3,4,11,12}},{{1,2,3,5,6,12}},{{1,2,3,5,7,12}},{{1,2,3,5,8,12}},{{1,2,3,5,9,12}},{{1,2,3,5,10,12}},{{1,2,3,5,11,12}},{{1,2,3,6,7,12}},{{1,2,3,6,8,12}},{{1,2,3,6,9,12}},{{1,2,3,6,10,12}},{{1,2,3,6,11,12}},{{1,2,3,7,8,12}},{{1,2,3,7,9,12}},{{1,2,3,7,10,12}},{{1,2,3,7,11,12}},{{1,2,3,8,9,12}},{{1,2,3,8,10,12}},{{1,2,3,8,11,12}},{{1,2,3,9,10,12}},{{1,2,3,9,11,12}},{{1,2,3,10,11,12}},{
{1,2,4,5,6,12}},{{1,2,4,5,7,12}},{{1,2,4,5,8,12}},{{1,2,4,5,9,12}},{{1,2,4,5,10,12}},{{1,2,4,5,11,12}},{{1,2,4,6,7,12}},{{1,2,4,6,8,12}},{{1,2,4,6,9,12}},{{1,2,4,6,10,12}},{{1,2,4,6,11,12}},{{1,2,4,7,8,12}},{{1,2,4,7,9,12}},{{1,2,4,7,10,12}},{{1,2,4,7,11,12}},{{1,2,4,8,9,12}},{{1,2,4,8,10,12}},{{1,2,4,8,11,12}},{{1,2,4,9,10,12}},{{1,2,4,9,11,12}},{{1,2,4,10,11,12}},{{1,2,5,6,7,12}},{{1,2,5,6,8,12}},{{1,2,5,6,9,12}},{{1,2,5,6,10,12}},{{1,2,5,6,11,12}},{{1,2,5,7,8,12}},{{1,2,5,7,9,12}},{{1,2,5,7,10,12}},{{1,2,5,7,11,12}},{{1,2,5,8,9,12}},{{1,2,5,8,10,12}},{{1,2,5,8,11,12}},{{1,2,5,9,10,12}},{{1,2,5,9,11,12}},{{1,2,5,10,11,12}},{{1,2,6,7,8,12}},{{1,2,6,7,9,12}},{{1,2,6,7,10,12}},{{1,2,6,7,11,12}},{{1,2,6,8,9,12}},{{1,2,6,8,10,12}},{{1,2,6,8,11,12}},{{1,2,6,9,10,12}},{{1,2,6,9,11,12}},{{1,2,6,10,11,12}},{{1,2,7,8,9,12}},{{1,2,7,8,10,12}},{{1,2,7,8,11,12}},{{1,2,7,9,10,12}},{{1,2,7,9,11,12}},{{1,2,7,10,11,12}},{{1,2,8,9,10,12}},{{1,2,8,9,11,12}},{{1,2,8,10,11,12}},{{1,2,9,10,11,12}},{{1,3,4,5,6,12}},{{1,3,4,5,7,12}},{{1,3,4,5,8,12}},{{1,3,4,5,9,12}},{{1,3,4,5,10,12}},{{1,3,4,5,11,12}},{{1,3,4,6,7,12}},{{1,3,4,6,8,12}},{{1,3,4,6,9,12}},{{1,3,4,6,10,12}},{{1,3,4,6,11,12}},{{1,3,4,7,8,12}},{{1,3,4,7,9,12}},{{1,3,4,7,10,12}},{{1,3,4,7,11,12}},{{1,3,4,8,9,12}},{{1,3,4,8,10,12}},{{1,3,4,8,11,12}},{{1,3,4,9,10,12}},{{1,3,4,9,11,12}},{{1,3,4,10,11,12}},{{1,3,5,6,7,12}},{{1,3,5,6,8,12}},{{1,3,5,6,9,12}},{{1,3,5,6,10,12}},{{1,3,5,6,11,12}},{{1,3,5,7,8,12}},{{1,3,5,7,9,12}},{{1,3,5,7,10,12}},{{1,3,5,7,11,12}},{{1,3,5,8,9,12}},{{1,3,5,8,10,12}},{{1,3,5,8,11,12}},{{1,3,5,9,10,12}},{{1,3,5,9,11,12}},{{1,3,5,10,11,12}},{{1,3,6,7,8,12}},{{1,3,6,7,9,12}},{{1,3,6,7,10,12}},{{1,3,6,7,11,12}},{{1,3,6,8,9,12}},{{1,3,6,8,10,12}},{{1,3,6,8,11,12}},{{1,3,6,9,10,12}},{{1,3,6,9,11,12}},{{1,3,6,10,11,12}},{{1,3,7,8,9,12}},{{1,3,7,8,10,12}},{{1,3,7,8,11,12}},{{1,3,7,9,10,12}},{{1,3,7,9,11,12}},{{1,3,7,10,11,12}},{{1,3,8,9,10,12}},{{1,3,8,9,11,12}},{{1,3,8,10,11,12}},{{1,3,9,10,11,12}},{{1,4,5,6,7,1
2}},{{1,4,5,6,8,12}},{{1,4,5,6,9,12}},{{1,4,5,6,10,12}},{{1,4,5,6,11,12}},{{1,4,5,7,8,12}},{{1,4,5,7,9,12}},{{1,4,5,7,10,12}},{{1,4,5,7,11,12}},{{1,4,5,8,9,12}},{{1,4,5,8,10,12}},{{1,4,5,8,11,12}},{{1,4,5,9,10,12}},{{1,4,5,9,11,12}},{{1,4,5,10,11,12}},{{1,4,6,7,8,12}},{{1,4,6,7,9,12}},{{1,4,6,7,10,12}},{{1,4,6,7,11,12}},{{1,4,6,8,9,12}},{{1,4,6,8,10,12}},{{1,4,6,8,11,12}},{{1,4,6,9,10,12}},{{1,4,6,9,11,12}},{{1,4,6,10,11,12}},{{1,4,7,8,9,12}},{{1,4,7,8,10,12}},{{1,4,7,8,11,12}},{{1,4,7,9,10,12}},{{1,4,7,9,11,12}},{{1,4,7,10,11,12}},{{1,4,8,9,10,12}},{{1,4,8,9,11,12}},{{1,4,8,10,11,12}},{{1,4,9,10,11,12}},{{1,5,6,7,8,12}},{{1,5,6,7,9,12}},{{1,5,6,7,10,12}},{{1,5,6,7,11,12}},{{1,5,6,8,9,12}},{{1,5,6,8,10,12}},{{1,5,6,8,11,12}},{{1,5,6,9,10,12}},{{1,5,6,9,11,12}},{{1,5,6,10,11,12}},{{1,5,7,8,9,12}},{{1,5,7,8,10,12}},{{1,5,7,8,11,12}},{{1,5,7,9,10,12}},{{1,5,7,9,11,12}},{{1,5,7,10,11,12}},{{1,5,8,9,10,12}},{{1,5,8,9,11,12}},{{1,5,8,10,11,12}},{{1,5,9,10,11,12}},{{1,6,7,8,9,12}},{{1,6,7,8,10,12}},{{1,6,7,8,11,12}},{{1,6,7,9,10,12}},{{1,6,7,9,11,12}},{{1,6,7,10,11,12}},{{1,6,8,9,10,12}},{{1,6,8,9,11,12}},{{1,6,8,10,11,12}},{{1,6,9,10,11,12}},{{1,7,8,9,10,12}},{{1,7,8,9,11,12}},{{1,7,8,10,11,12}},{{1,7,9,10,11,12}},{{1,8,9,10,11,12}}}
```
---
Definitions
-----------
The following definitions are mostly user interface definitions:
```
ClearAll[permutationsNk, doPermutationsNk]
permutationsNk[n_?IntegerQ,k_?IntegerQ]/;Mod[n,k]==0:=doPermutationsNk[n,k]
```
Here we define the messages generated from erroneous user input:
```
permutationsNk::notint="`1` is not an Integer";
permutationsNk[x_?(IntegerQ/*Not),y_?IntegerQ]:=(Message[permutationsNk::notint,x])
permutationsNk[x_?IntegerQ,y_?(IntegerQ/*Not)]:=(Message[permutationsNk::notint,y])
permutationsNk::badargs="`2` does not divide into `1`, exactly";
permutationsNk[x_,y_]:=(Message[permutationsNk::badargs,x,y])
```
This is essentially what drives the results: the way to combine different elements of the list `{1,2,...,n}` is done through [`ListCorrelate`](https://reference.wolfram.com/language/ref/ListCorrelate.html). The *auxiliary* function `times` combines the entries in the kernel (`ker`, ones or zeros) with the numbers in `list` (`{1,2,...,n}`). Whenever it encounters a zero in the kernel, it evaluates to [`Nothing`](https://reference.wolfram.com/language/ref/Nothing.html). This behavior, coupled with the fact that the last argument of `ListCorrelate` is `List`, allows for the construction of the desired tuples.
```
Clear[times,listCorrelate]
times=(Switch[#1,0,Nothing,_,Times[##]]&);
listCorrelate[ker_,list_]:=ListCorrelate[ker,list,{1,-1},{},times,List]
```
As an example of how `listCorrelate` works, consider a kernel `ker={a,0,b}` and `list=Range[4]`:
```
listCorrelate[{a,0,b},Range[4]]
```
[](https://i.stack.imgur.com/x64Uk.png)
Whenever a zero is encountered in the kernel, `Nothing` is returned, which in turn is wrapped in a list, e.g. `{a,Nothing,3b}`, which simplifies to `{a,3b}`.
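A rough Python analogue of this zero-skipping window (simplified: no overhang handling, and it keeps the list values rather than multiplying them by kernel entries; the helper name is made up):

```python
def sliding_select(ker, lst):
    # Slide the kernel over the list; wherever the kernel entry is zero,
    # drop the aligned value (the analogue of times returning Nothing).
    k = len(ker)
    return [[x for c, x in zip(ker, lst[i:i + k]) if c != 0]
            for i in range(len(lst) - k + 1)]

print(sliding_select([1, 0, 1], [1, 2, 3, 4]))
# → [[1, 3], [2, 4]]
```

With a 0/1 kernel this selects, for each window position, exactly the tuple of entries under the ones, which is the role the kernel plays in building the subsets.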
Putting everything together:
```
doPermutationsNk[n_,k_]:=Module[{ker,list,firstLL,lastLL,reMo,ico,null,res,permutations,outer,apply,through,res0},
ker=ConstantArray[1,k];
list=Range[n];
firstLL=First/*List/*List;
lastLL=Last/*List/*List;
reMo=Rest/*Most;
ico=Iconize[#1,ToString[#2]]&;
permutations[j_]:=(Join[#,ConstantArray[0,j]]&)/*Permutations;
outer[j_,args__]:=Outer[Join/*(listCorrelate[#,list]&)/*(ico[#,j]&),args,1];
apply[j_]:=(With[{len=Length[{##}]},outer[j,##]//(Flatten[#,len-1]&)]&)/*(ico[#,Length[#]]&);
through[j_]:={firstLL,reMo/*permutations[j],lastLL};
{null,{res}}=Reap@Do[ker//(Through[through[j][#]]&)/*Apply[apply[j]]/*Sow,{j,n-k}];
res0=listCorrelate[ConstantArray[1,k],list]//(ico[#,Length[#]]&)/*List/*(ico[#,Length[#]]&);
Join[{res0},res]//(ico[#,Length[#]]&)
]
```
The following two expressions deal with boundary values for $k$:
```
doPermutationsNk[n_,1]:=Module[{list,ico,res0},
list=Range[n];
ico=Iconize[#1,ToString[#2]]&;
res0=list//List/*Transpose/*(ico[#,Length[#]]&)/*List/*(ico[#,Length[#]]&);
Join[{res0},{}]//(ico[#,Length[#]]&)
]
```
and
```
doPermutationsNk[n_,n_]:=Module[{list,ico,res0},
list=Range[n];
ico=Iconize[#1,ToString[#2]]&;
res0=list//List/*(ico[#,Length[#]]&)/*List/*(ico[#,Length[#]]&);
Join[{res0},{}]//(ico[#,Length[#]]&)
]
``` |
279,678 | Will it be slower to place functions' definitions inside a main function?
I usually do that if the subfunctions are short.
However, with a long subfunction I usually place it outside, as I think that makes it easier to read and faster to run.
Now I want to use the first method below, as I want to make the code self-contained and easier to manage when I copy-paste and re-use it.
If I place the subfunctions outside, I often miscopy some of them and the code doesn't run properly. But I'm worried that this approach is slower and harder to read.
I understand the local/global effect, but I want to focus on speed and readability here.
**Method 1:**
Place `function1` and `function2` inside the main function.
```
myFunction[parameters_] := Module[{},
function1[parameters1_] := Module[{},
(* a long function*)
do something here
];
function2[parameters2_] := Module[{},
(* a short function*)
do something here
];
(*use function1 and function2 to do something more*)
]
```
**Method 2:**
```
function1[parameters1_] := Module[{},
(* a long function*)
do something here
];
function2[parameters2_] := Module[{},
(* a short function*)
do something here
];
myFunction[parameters_] := Module[{},
(*use function1 and function2 to do something more*)
]
``` | 2023/02/07 | [
"https://mathematica.stackexchange.com/questions/279678",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/87122/"
] | We may first create all k-subsets of 1..n.
Then we need to select n/k subsets that have no common elements. To this end we define a function that, given a collection of subsets, selects a further subset that shares no element with them; this repeats recursively until there are n/k subsets. We then feed the first subset to our routine, then the second, etc., until we have fed all subsets starting with 1. Finally we join all the results.
```
main[n_, k_] := Module[{d = Range[n], p, step},
If[! Divisible[n, k], Print["n not divisible by k."]; Return[]];
p = Subsets[d, {k}];
step[e1_] := Module[{es = Flatten[e1], new, res,fun},
fun[e_] := (
new = Select[p, (! IntersectingQ[Flatten[{e}], #]) &];
Append[e, #] & /@ new);
res = Flatten[fun /@ e1, 1];
If[Length[res[[1]]] == n/k, res, step[res]]
];
Join @@ Reap[
Do[
Sow[step[{{p[[1]]}}]];
p = Rest[p];
, Binomial[n - 1, k - 1]]][[2, 1]]
]
```
Now for a test:
```
main[4, 2]
{{{1, 2}, {3, 4}}, {{1, 3}, {2, 4}}, {{1, 4}, {2, 3}}}
```
Or:
```
main[6, 3]
{{{1, 2, 3}, {4, 5, 6}}, {{1, 2, 4}, {3, 5, 6}}, {{1, 2, 5}, {3, 4,
6}}, {{1, 2, 6}, {3, 4, 5}}, {{1, 3, 4}, {2, 5, 6}}, {{1, 3,
5}, {2, 4, 6}}, {{1, 3, 6}, {2, 4, 5}}, {{1, 4, 5}, {2, 3,
6}}, {{1, 4, 6}, {2, 3, 5}}, {{1, 5, 6}, {2, 3, 4}}}
```
Or:
```
main[6, 2]
{{{1, 2}, {3, 4}, {5, 6}}, {{1, 2}, {3, 5}, {4, 6}}, {{1, 2}, {3,
6}, {4, 5}}, {{1, 2}, {4, 5}, {3, 6}}, {{1, 2}, {4, 6}, {3,
5}}, {{1, 2}, {5, 6}, {3, 4}}, {{1, 3}, {2, 4}, {5, 6}}, {{1,
3}, {2, 5}, {4, 6}}, {{1, 3}, {2, 6}, {4, 5}}, {{1, 3}, {4, 5}, {2,
6}}, {{1, 3}, {4, 6}, {2, 5}}, {{1, 3}, {5, 6}, {2, 4}}, {{1,
4}, {2, 3}, {5, 6}}, {{1, 4}, {2, 5}, {3, 6}}, {{1, 4}, {2, 6}, {3,
5}}, {{1, 4}, {3, 5}, {2, 6}}, {{1, 4}, {3, 6}, {2, 5}}, {{1,
4}, {5, 6}, {2, 3}}, {{1, 5}, {2, 3}, {4, 6}}, {{1, 5}, {2, 4}, {3,
6}}, {{1, 5}, {2, 6}, {3, 4}}, {{1, 5}, {3, 4}, {2, 6}}, {{1,
5}, {3, 6}, {2, 4}}, {{1, 5}, {4, 6}, {2, 3}}, {{1, 6}, {2, 3}, {4,
5}}, {{1, 6}, {2, 4}, {3, 5}}, {{1, 6}, {2, 5}, {3, 4}}, {{1,
6}, {3, 4}, {2, 5}}, {{1, 6}, {3, 5}, {2, 4}}, {{1, 6}, {4, 5}, {2,
3}}}
``` | Here is one possible solution:
```
perm[set_List, k_Integer] := If[Length[set]<2k,
List/@Subsets[set,{k}],
Flatten[
Function[s,Prepend[#,s]&/@perm[Complement[
Drop[set,First@FirstPosition[set,First[s]]],s],k]]
/@Subsets[set, {k}], 1]
];
perm[n_Integer, k_Integer] := perm[Range[n], k];
```
It might not be the most efficient, but it is much faster than @Daniel's `main` function.
For example,
```
In[1]:= Timing[perm[10, 2]]
Out[1]= {0.1,{{{1, 2}, {3, 4}, {5, 6}, {7, 8}, {9, 10}},...}}(*4260 elements*)
``` |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password, and even though I type the password in it never fully connects; it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | The error you get is likely because the Guest Additions CD image is already mounted.
To see all mounted drives, open a terminal in the guest and issue `mount`. This will give you (among others) a line similar to this:
```
/dev/sr0 on /media/takkat/VBOXADDITIONS_4.2.12_849801 type iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2)
```
In the Unity Launcher you will see a CD-ROM icon. To unmount the CD right click on this icon and select *"Eject"*.

We cannot unmount the Guest Additions CD from the command line when it was mounted with the help of the VirtualBox Manager. Please select *"Devices -> CD/DVD Devices -> Remove disk from virtual drive"* and choose *"Force unmount"* to remove the CD ISO.
To install the guest additions we will have to load the CD again from the VirtualBox Manager and select the icon from the Unity Launcher.
See also the following question, which also has a command-line method for installing guest additions:
* [How do I install Guest Additions in a VirtualBox VM?](https://askubuntu.com/questions/22743/how-do-i-install-guest-additions-in-virtualbox/22745#22745) | For what it's worth, I was unable to install the guest additions until I installed the extension pack from Oracle. After that, right-ctrl+d worked like a charm. |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password and even though I write the password down it never fully connects, it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | The error you get is likely because the Guest Additions CD image is already mounted.
To see all mounted drives, open a terminal in the guest and issue `mount`. This will give you (among others) a line similar to this:
```
/dev/sr0 on /media/takkat/VBOXADDITIONS_4.2.12_849801 type iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2)
```
In the Unity Launcher you will see a CD-ROM icon. To unmount the CD right click on this icon and select *"Eject"*.

We cannot unmount the Guest Additions CD from the command line when it was mounted with the help of the VirtualBox Manager. Please select *"Devices -> CD/DVD Devices -> Remove disk from virtual drive"* and choose *"Force unmount"* to remove the CD ISO.
To install the guest additions we will have to load the CD again from the VirtualBox Manager and select the icon from the Unity Launcher.
See also the following question, which also has a command-line method for installing guest additions:
* [How do I install Guest Additions in a VirtualBox VM?](https://askubuntu.com/questions/22743/how-do-i-install-guest-additions-in-virtualbox/22745#22745) | I tried this and it worked:
1. From the terminal (`Ctrl`+`Alt`+`T`), enter the following commands:
```
cd /mnt
cd /cdrom
eject
```
2. From the VirtualBox menu (top left pane) go to "device" and insert guest additions (it should work this time)
3. Then from the terminal again type:
```
sudo sh /media/cdrom/VBoxLinuxAdditions.run
``` |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password and even though I write the password down it never fully connects, it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | The error you get is likely because the Guest Additions CD image is already mounted.
To see all mounted drives, open a terminal in the guest and issue `mount`. This will give you (among others) a line similar to this:
```
/dev/sr0 on /media/takkat/VBOXADDITIONS_4.2.12_849801 type iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2)
```
In the Unity Launcher you will see a CD-ROM icon. To unmount the CD right click on this icon and select *"Eject"*.

We cannot unmount the Guest Additions CD from the command line when it was mounted with the help of the VirtualBox Manager. Please select *"Devices -> CD/DVD Devices -> Remove disk from virtual drive"* and choose *"Force unmount"* to remove the CD ISO.
To install the guest additions we will have to load the CD again from the VirtualBox Manager and select the icon from the Unity Launcher.
See also the following question, which also has a command-line method for installing guest additions:
* [How do I install Guest Additions in a VirtualBox VM?](https://askubuntu.com/questions/22743/how-do-i-install-guest-additions-in-virtualbox/22745#22745) | I had this problem but at the time I was running ubuntu as a guest (when you start up the system it asks if you want to install ubuntu or run as guest). As soon as I installed ubuntu on the box, the guest additions installed without any problems |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password and even though I write the password down it never fully connects, it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | The error you get is likely because the Guest Additions CD image is already mounted.
To see all mounted drives, open a terminal in the guest and issue `mount`. This will give you (among others) a line similar to this:
```
/dev/sr0 on /media/takkat/VBOXADDITIONS_4.2.12_849801 type iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2)
```
In the Unity Launcher you will see a CD-ROM icon. To unmount the CD right click on this icon and select *"Eject"*.

We cannot unmount the Guest Additions CD from the command line when it was mounted with the help of the VirtualBox Manager. Please select *"Devices -> CD/DVD Devices -> Remove disk from virtual drive"* and choose *"Force unmount"* to remove the CD ISO.
To install the guest additions we will have to load the CD again from the VirtualBox Manager and select the icon from the Unity Launcher.
See also the following question, which also has a command-line method for installing guest additions:
* [How do I install Guest Additions in a VirtualBox VM?](https://askubuntu.com/questions/22743/how-do-i-install-guest-additions-in-virtualbox/22745#22745) | My workaround with a Mac OS X 10.10 host and an Ubuntu 12.04 guest is to copy the VirtualBox Guest Additions ISO to a shared folder and just use the terminal in my guest to run `sudo sh /pathto/VBoxLinuxAdditions.run` |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password and even though I write the password down it never fully connects, it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | The error you get is likely because the Guest Additions CD image is already mounted.
To see all mounted drives, open a terminal in the guest and issue `mount`. This will give you (among others) a line similar to this:
```
/dev/sr0 on /media/takkat/VBOXADDITIONS_4.2.12_849801 type iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2)
```
In the Unity Launcher you will see a CD-ROM icon. To unmount the CD right click on this icon and select *"Eject"*.

We cannot unmount the Guest Additions CD from the command line when it was mounted with the help of the VirtualBox Manager. Please select *"Devices -> CD/DVD Devices -> Remove disk from virtual drive"* and choose *"Force unmount"* to remove the CD ISO.
To install the guest additions we will have to load the CD again from the VirtualBox Manager and select the icon from the Unity Launcher.
See also the following question, which also has a command-line method for installing guest additions:
* [How do I install Guest Additions in a VirtualBox VM?](https://askubuntu.com/questions/22743/how-do-i-install-guest-additions-in-virtualbox/22745#22745) | I ran into this problem recently (Sept 2017) trying to install Guest Additions into Oracle VM VirtualBox Manager version 5.1.28, in a newly-installed VM for Ubuntu version 16.04.3. The problem was indeed that the Guest Additions CD image was already mounted. To fix it:
* In the VM Manager, under Settings -> Storage, under Controller:
Right-click on VBoxGuestAdditions and select “Remove Attachment.”
There will be a confirmation box: click “Remove.”
* Click the “+” icon next to “Controller” to add a new optical drive. A dialog box will open.
Choose “Leave Empty” -> “Okay.”
* Open the VM and select Devices->Insert Guest Additions. It should
install now. Power off the VM (not the VM Manager) and restart the
VM. GuestAdditions should now be working. |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password and even though I write the password down it never fully connects, it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | I tried this and it worked:
1. From the terminal (`Ctrl`+`Alt`+`T`), enter the following commands:
```
cd /mnt
cd /cdrom
eject
```
2. From the VirtualBox menu (top left pane) go to "device" and insert guest additions (it should work this time)
3. Then from the terminal again type:
```
sudo sh /media/cdrom/VBoxLinuxAdditions.run
``` | For what it's worth, I was unable to install the guest additions until I installed the extension pack from Oracle. After that, right-ctrl+d worked like a charm. |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password and even though I write the password down it never fully connects, it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | I tried this and it worked:
1. From the terminal (`Ctrl`+`Alt`+`T`), enter the following commands:
```
cd /mnt
cd /cdrom
eject
```
2. From the VirtualBox menu (top left pane) go to "device" and insert guest additions (it should work this time)
3. Then from the terminal again type:
```
sudo sh /media/cdrom/VBoxLinuxAdditions.run
``` | I had this problem but at the time I was running ubuntu as a guest (when you start up the system it asks if you want to install ubuntu or run as guest). As soon as I installed ubuntu on the box, the guest additions installed without any problems |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password and even though I write the password down it never fully connects, it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | I tried this and it worked:
1. From the terminal (`Ctrl`+`Alt`+`T`), enter the following commands:
```
cd /mnt
cd /cdrom
eject
```
2. From the VirtualBox menu (top left pane) go to "device" and insert guest additions (it should work this time)
3. Then from the terminal again type:
```
sudo sh /media/cdrom/VBoxLinuxAdditions.run
``` | My workaround with a Mac OS X 10.10 host and an Ubuntu 12.04 guest is to copy the VirtualBox Guest Additions ISO to a shared folder and just use the terminal in my guest to run `sudo sh /pathto/VBoxLinuxAdditions.run` |
301,722 | I can't connect to Wi-Fi in Ubuntu 12.04 with an Intel PRO/Wireless 3945ABG card. What can I do?
Every time I try to connect to a wireless router it keeps asking for a password and even though I write the password down it never fully connects, it just stops after a while and asks for the password again. | 2013/05/29 | [
"https://askubuntu.com/questions/301722",
"https://askubuntu.com",
"https://askubuntu.com/users/162892/"
] | I tried this and it worked:
1. From the terminal (`Ctrl`+`Alt`+`T`), enter the following commands:
```
cd /mnt
cd /cdrom
eject
```
2. From the VirtualBox menu (top left pane) go to "device" and insert guest additions (it should work this time)
3. Then from the terminal again type:
```
sudo sh /media/cdrom/VBoxLinuxAdditions.run
``` | I ran into this problem recently (Sept 2017) trying to install Guest Additions into Oracle VM VirtualBox Manager version 5.1.28, in a newly-installed VM for Ubuntu version 16.04.3. The problem was indeed that the Guest Additions CD image was already mounted. To fix it:
* In the VM Manager, under Settings -> Storage, under Controller:
Right-click on VBoxGuestAdditions and select “Remove Attachment.”
There will be a confirmation box: click “Remove.”
* Click the “+” icon next to “Controller” to add a new optical drive. A dialog box will open.
Choose “Leave Empty” -> “Okay.”
* Open the VM and select Devices->Insert Guest Additions. It should
install now. Power off the VM (not the VM Manager) and restart the
VM. GuestAdditions should now be working. |
6,063,541 | I am currently faced with a problem where I need to be able to launch another batch file depending on a file's size.
E.g. If xxxxx.txt is greater than 10KB launch stage2.bat | 2011/05/19 | [
"https://Stackoverflow.com/questions/6063541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/761628/"
] | Here's some pseudocode that I think will get you on the path to what you want: <https://gist.github.com/981513>
```
class Person
has_many :groups
has_many :group_memberships, :foreign_key => "member_id", :through => :groups
scope :owned_groups, where(:is_owner => true).joins(:group_memberships) # gets all groups where this person is owner
end
class Group_Membership
belongs_to :member, :class_name => 'Person'
belongs_to :group
# note that these attributes need to be defined
# is_owner (boolean)
# member_approved (boolean)
scope :requested, :where(:member_approved => false)
end
class Group
belongs_to :person
has_many :group_memberships
has_many :members, :class_name => "Person", :through => "group_memberships", :foreign_key => "member_id"
end
```
Fair warning, I haven't tested it at all, and I'm still learning the new AR patterns :)
I think that your group\_memberships relation is probably a best fit as a :through relationship and then creating scopes around the different "states" that relationship can have. You might check out [state machine](https://github.com/pluginaweek/state_machine) for some help on this too. | I think something like this would work:
```
class Person
has_many :groups
has_many :group_memberships, :foreign_key => "member_id"
has_many :own_groups, :class_name => "Group", :through => "group_memberships", :foreign_key => "group_id"
end
class Group_Membership
belongs_to :member, :class_name => 'Person'
belongs_to :group
end
class Group
belongs_to :person
has_many :group_memberships
has_many :asked_group_memberships, :class_name => 'Group_Membership', :conditions => ['status = ?', false]
has_many :members, :class_name => "Person", :through => "group_memberships", :foreign_key => "member_id", :conditions => ['group_memberships.status = ?', true]
end
``` |
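The batch-file question itself (launch `stage2.bat` when `xxxxx.txt` is greater than 10KB) boils down to a file-size check, sketched here in Python. `should_launch` and the threshold constant are illustrative names rather than anything from the question, and the actual launch command is left as a comment because it is Windows-specific.

```python
import os

THRESHOLD = 10 * 1024  # 10 KB, the size mentioned in the question

def should_launch(path):
    """True when the file is strictly larger than the threshold."""
    return os.path.getsize(path) > THRESHOLD

# On Windows, the follow-up script could then be started with, e.g.:
# if should_launch("xxxxx.txt"):
#     subprocess.run(["cmd", "/c", "stage2.bat"], check=True)
```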
13,827,004 | I'm looking for confirmation/clarification with these LINQ expressions:
```
var context = new SomeCustomDbContext()
// LINQ to Entities?
var items = context.CustomItems.OrderBy(i => i.Property).ToList();
// LINQ to Objects?
var items2 = context.CustomItems.ToList().OrderBy(i => i.Property);
```
1. Am I correct in thinking the first method is `LINQ to Entities` where EF builds a more specific SQL statement to pass on, putting the ordering effort on the database?
2. Is the second method `LINQ to Objects` where LINQ drags the whole collection into memory (the `ToList()` enumeration?) before ordering thus leaving the burden on the server side (the web server in this case)?
If this is the case, I can quickly see situations where L2E would be advantageous (ex. filtering/trimming collections before pulling them into memory).
3. But are there any other details/trade-offs I should be aware of, or times when "method 2" might be advantageous over the first method?
**UPDATE:**
Let's say we are not using EntityFramework, this is still true so long as the underlying repository/data source implements `IQueryable<T>` right? And if it doesn't both these statements result in `LINQ to Objects` operations in memory? | 2012/12/11 | [
"https://Stackoverflow.com/questions/13827004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146610/"
] | 1. Yes.
2. Yes.
3. Yes.
You are correct that calling `ToList()` forces linq-to-entities to evaluate and return the results as a list. As you suspect, this can have huge performance implications.
There are cases where linq-to-entities cannot figure out how to parse what looks like a perfectly simple query (like `Where(x => SomeFunction(x))`). In these cases you often have no choice but to call `ToList()` and operate on the collection in memory.
---
In response to your update:
`ToList()` always forces everything ahead of it to evaluate immediately, as opposed to deferred execution. Take this example:
```
someEnumerable.Take(10).ToList();
```
vs
```
someEnumerable.ToList().Take(10);
```
In the second example, any deferred work on `someEnumerable` must be executed before taking the first 10 elements. If `someEnumerable` is doing something labor intensive (like reading files from the disk using `Directory.EnumerateFiles()`), this could have very real performance implications. | >
> Am I correct in thinking the first method is LINQ to Entities where EF builds a more specific SQL statement to pass on, putting the ordering effort on on the database?
>
>
>
Yes
>
> Is the second method LINQ to Objects where LINQ drags the whole collection into memory ... before ordering thus leaving the burden on the server side ...?
>
>
>
Yes
>
> But are there any other details/trade-offs I should be aware of, or times when "method 2" might be advantageous over the first method?
>
>
>
There will be many times where Method 1 is not possible - usually when you have a complex filter or sort order that can't be directly translated to SQL (or more appropriately where EF does not support a direct SQL translation). Also since you can't transmit lazy-loaded `IQueryable`s over-the-wire, any time you have to serialize a result you're going to have to materialize it first with `ToList()` or something similar. |
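The eager-versus-deferred trade-off in the `Take(10)` example above is not specific to C#. As a rough analogy (Python generators standing in for deferred execution, not EF itself), slicing before materializing only does the work needed, while materializing first does everything up front:

```python
from itertools import islice

calls = 0

def expensive_items(n):
    """Stand-in for a labor-intensive source (disk reads, database rows, ...)."""
    global calls
    for i in range(n):
        calls += 1  # count how much work was actually done
        yield i * i

# Analogue of Take(10).ToList(): the deferred source is only pulled 10 times.
first_ten = list(islice(expensive_items(1_000_000), 10))

# Analogue of ToList().Take(10), commented out because it would
# produce all 1,000,000 items before slicing:
# first_ten = list(expensive_items(1_000_000))[:10]
```

After running this, `calls` is 10, not 1,000,000.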
13,827,004 | I'm looking for confirmation/clarification with these LINQ expressions:
```
var context = new SomeCustomDbContext()
// LINQ to Entities?
var items = context.CustomItems.OrderBy(i => i.Property).ToList();
// LINQ to Objects?
var items2 = context.CustomItems.ToList().OrderBy(i => i.Property);
```
1. Am I correct in thinking the first method is `LINQ to Entities` where EF builds a more specific SQL statement to pass on, putting the ordering effort on the database?
2. Is the second method `LINQ to Objects` where LINQ drags the whole collection into memory (the `ToList()` enumeration?) before ordering thus leaving the burden on the server side (the web server in this case)?
If this is the case, I can quickly see situations where L2E would be advantageous (ex. filtering/trimming collections before pulling them into memory).
3. But are there any other details/trade-offs I should be aware of, or times when "method 2" might be advantageous over the first method?
**UPDATE:**
Let's say we are not using EntityFramework, this is still true so long as the underlying repository/data source implements `IQueryable<T>` right? And if it doesn't both these statements result in `LINQ to Objects` operations in memory? | 2012/12/11 | [
"https://Stackoverflow.com/questions/13827004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146610/"
] | 1. Yes.
2. Yes.
3. Yes.
You are correct that calling `ToList()` forces linq-to-entities to evaluate and return the results as a list. As you suspect, this can have huge performance implications.
There are cases where linq-to-entities cannot figure out how to parse what looks like a perfectly simple query (like `Where(x => SomeFunction(x))`). In these cases you often have no choice but to call `ToList()` and operate on the collection in memory.
---
In response to your update:
`ToList()` always forces everything ahead of it to evaluate immediately, as opposed to deferred execution. Take this example:
```
someEnumerable.Take(10).ToList();
```
vs
```
someEnumerable.ToList().Take(10);
```
In the second example, any deferred work on `someEnumerable` must be executed before taking the first 10 elements. If `someEnumerable` is doing something labor intensive (like reading files from the disk using `Directory.EnumerateFiles()`), this could have very real performance implications. | The other thing to be aware of is that IQueryable makes no guarantees on either (a) the semantic reasoning of the underlying provider, or (b) how much of the set of IQueryable methods are implemented by the provider.
For example:
1. EF does not support Last().
2. Nor does it support time-part comparisons of DateTimes into valid T-SQL.
3. It doesn't support FirstOrDefault() in subqueries.
In such circumstances you need to bring data back to the client and then perform further evaluation client-side.
You also need to have an understanding of "how" it parses the LINQ pipeline to generate (in the case of EF) T-SQL. So you sometimes have to think carefully about how you construct your LINQ queries in order to generate effective T-SQL.
Having said all that, IQueryable<> is an extremely powerful tool in the .NET framework and well worth getting more familiar with. |
13,827,004 | I'm looking for confirmation/clarification with these LINQ expressions:
```
var context = new SomeCustomDbContext()
// LINQ to Entities?
var items = context.CustomItems.OrderBy(i => i.Property).ToList();
// LINQ to Objects?
var items2 = context.CustomItems.ToList().OrderBy(i => i.Property);
```
1. Am I correct in thinking the first method is `LINQ to Entities` where EF builds a more specific SQL statement to pass on, putting the ordering effort on the database?
2. Is the second method `LINQ to Objects` where LINQ drags the whole collection into memory (the `ToList()` enumeration?) before ordering thus leaving the burden on the server side (the web server in this case)?
If this is the case, I can quickly see situations where L2E would be advantageous (ex. filtering/trimming collections before pulling them into memory).
3. But are there any other details/trade-offs I should be aware of, or times when "method 2" might be advantageous over the first method?
**UPDATE:**
Let's say we are not using EntityFramework, this is still true so long as the underlying repository/data source implements `IQueryable<T>` right? And if it doesn't both these statements result in `LINQ to Objects` operations in memory? | 2012/12/11 | [
"https://Stackoverflow.com/questions/13827004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/146610/"
] | >
> Am I correct in thinking the first method is LINQ to Entities where EF builds a more specific SQL statement to pass on, putting the ordering effort on on the database?
>
>
>
Yes
>
> Is the second method LINQ to Objects where LINQ drags the whole collection into memory ... before ordering thus leaving the burden on the server side ...?
>
>
>
Yes
>
> But are there any other details/trade-offs I should be aware of, or times when "method 2" might be advantageous over the first method?
>
>
>
There will be many times where Method 1 is not possible - usually when you have a complex filter or sort order that can't be directly translated to SQL (or more appropriately where EF does not support a direct SQL translation). Also since you can't transmit lazy-loaded `IQueryable`s over-the-wire, any time you have to serialize a result you're going to have to materialize it first with `ToList()` or something similar. | The other thing to be aware of is that IQueryable makes no guarantees on either (a) the semantic reasoning of the underlying provider, or (b) how much of the set of IQueryable methods are implemented by the provider.
For example:
1. EF does not support Last().
2. Nor does it support time-part comparisons of DateTimes into valid T-SQL.
3. It doesn't support FirstOrDefault() in subqueries.
In such circumstances you need to bring data back to the client and then perform further evaluation client-side.
You also need to have an understanding of "how" it parses the LINQ pipeline to generate (in the case of EF) T-SQL. So you sometimes have to think carefully about how you construct your LINQ queries in order to generate effective T-SQL.
Having said all that, IQueryable<> is an extremely powerful tool in the .NET framework and well worth getting more familiar with. |
67,222,585 | For example, this is the part of my data set I'm interested in:
| EventID | Action | Actor |
| --- | --- | --- |
| EventID1 | ActionB | ActorX |
| EventID2 | ActionB | ActorZ |
| EventID1 | ActionA | ActorY |
| EventID2 | ActionC | ActorZ |
| EventID3 | ActionA | ActorX |
| EventID3 | ActionB | ActorZ |
| EventID2 | ActionB | ActorZ |
| EventID2 | ActionA | ActorY |
I Want:
| | ActorX | ActorY | ActorZ |
| --- | --- | --- | --- |
| ActionA | 1 | 2 | 0 |
| ActionB | 1 | 0 | 3 |
| ActionC | 0 | 0 | 1 |
The problem is I have LOTS of actors and actions, so I need a way of doing this without listing each one. | 2021/04/23 | [
"https://Stackoverflow.com/questions/67222585",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15743710/"
] | In your template, `Global` should be renamed to `Globals`.
Please refer to the [Globals Section](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy-globals.html) link that you already shared. | Use `sam validate -t yourtemplate.yaml` to check for syntax errors. This error came up for me because of a missing parameter in my lambda function. Not very helpful exception but the validate tool exposed my problems. |
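The cross-tabulation the question asks for can be sketched in plain Python with `collections.Counter` (the rows below are copied from the question's table; in pandas, `pd.crosstab(df['Action'], df['Actor'])` would give the same counts):

```python
from collections import Counter

rows = [
    ("EventID1", "ActionB", "ActorX"), ("EventID2", "ActionB", "ActorZ"),
    ("EventID1", "ActionA", "ActorY"), ("EventID2", "ActionC", "ActorZ"),
    ("EventID3", "ActionA", "ActorX"), ("EventID3", "ActionB", "ActorZ"),
    ("EventID2", "ActionB", "ActorZ"), ("EventID2", "ActionA", "ActorY"),
]

# Count each (Action, Actor) pair, then lay the counts out as a dense matrix.
counts = Counter((action, actor) for _event, action, actor in rows)
actions = sorted({action for _e, action, _a in rows})
actors = sorted({actor for _e, _act, actor in rows})
table = {a: {x: counts.get((a, x), 0) for x in actors} for a in actions}
```

Here `table['ActionB']['ActorZ']` is 3, and missing pairs such as `('ActionC', 'ActorX')` come out as 0, matching the desired output.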
123,794 | Please advise how to install the patch command on Ubuntu Linux
I downloaded the following file, patch\_2.6-2ubuntu1\_amd64.deb (from the site
<http://packages.ubuntu.com/lucid/patch> )
But it is not clear how to install it.
I tried ./patch\_2.6-2ubuntu1\_amd64.deb, but I get errors.
Please advise how to install the package patch\_2.6-2ubuntu1\_amd64.deb
in order to use the patch command on my Ubuntu Linux | 2014/04/08 | [
"https://unix.stackexchange.com/questions/123794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
] | This website tells you that the package is in the repository. You should not download the package from the website to install it.
You should use your package manager :
```
sudo apt-get install patch
``` | ".deb" files are not executable binaries. Use `dpkg` command to install your package :
```
dpkg -i your_package.deb
``` |
123,794 | Please advise how to install the patch command on Ubuntu Linux
I downloaded the following file, patch\_2.6-2ubuntu1\_amd64.deb (from the site
<http://packages.ubuntu.com/lucid/patch> )
But it is not clear how to install it.
I tried ./patch\_2.6-2ubuntu1\_amd64.deb, but I get errors.
Please advise how to install the package patch\_2.6-2ubuntu1\_amd64.deb
in order to use the patch command on my Ubuntu Linux | 2014/04/08 | [
"https://unix.stackexchange.com/questions/123794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
] | ".deb" files are not executable binaries. Use `dpkg` command to install your package :
```
dpkg -i your_package.deb
``` | Since your patch is a tar.gz, a simple way to install it is to get alien:
```
sudo apt-get install alien
```
Then turn the tar.gz into a deb:
```
sudo alien -k file.tar.gz
``` |
123,794 | Please advise how to install the patch command on Ubuntu Linux
I downloaded the following file, patch\_2.6-2ubuntu1\_amd64.deb (from the site
<http://packages.ubuntu.com/lucid/patch> )
But it is not clear how to install it.
I tried ./patch\_2.6-2ubuntu1\_amd64.deb, but I get errors.
Please advise how to install the package patch\_2.6-2ubuntu1\_amd64.deb
in order to use the patch command on my Ubuntu Linux | 2014/04/08 | [
"https://unix.stackexchange.com/questions/123794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
] | This website tells you that the package is in the repository. You should not download the package from the website to install it.
You should use your package manager :
```
sudo apt-get install patch
``` | Since your patch is a tar.gz, a simple way to install it is to get alien:
```
sudo apt-get install alien
```
Then turn the tar.gz into a deb:
```
sudo alien -k file.tar.gz
``` |
24,010,421 | I have a problem where I'm more or less using the jsPlumb flow-chart demo example, but where there is only ever one drop target per window and there may be one or many drag targets. However, I want to forbid self-connections so that a connection can be dragged from any window to any other window EXCEPT itself.
I was thinking that maybe you would use scopes but this would mean a different scope for each window which seems over the top. Does anyone have a tidy solution? | 2014/06/03 | [
"https://Stackoverflow.com/questions/24010421",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2837057/"
] | Thanks for the answers; they pointed me in the right direction. In the end I used "beforeDrop", because when binding "connection" it was detaching the source endpoint of the window as well as the connection.
The final solution was:
```
instance.bind("beforeDrop", function (info) {
// console.log("before drop: " + info.sourceId + ", " + info.targetId);
if (info.sourceId === info.targetId) { //source and target ID's are same
console.log("source and target ID's are the same - self connections not allowed.")
return false;
} else {
return true;
}
});
``` | Bind an event to get notified whenever a new connection is created. After creation, check whether the source and target of the connection are the same; if so, detach the connection to avoid a self-loop. Code:
```
jsPlumb.bind("jsPlumbConnection", function(ci) {
var s=ci.sourceId,c=ci.targetId;
if( s===c ){ //source and target ID's are same
jsPlumb.detach(ci.connection);
}
else{ // Keep connection if ID's are different (Do nothing)
// console.log(s+"->"+c);
}
});
``` |
24,010,421 | I have a problem where I'm more or less using the jsPlumb flow-chart demo example, but where there is only ever one drop target per window and there may be one or many drag targets. However, I want to forbid self-connections so that a connection can be dragged from any window to any other window EXCEPT itself.
I was thinking that maybe you would use scopes but this would mean a different scope for each window which seems over the top. Does anyone have a tidy solution? | 2014/06/03 | [
"https://Stackoverflow.com/questions/24010421",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2837057/"
] | From <http://www.jsplumb.org/doc/connections.html#draganddrop>:
Preventing Loopback Connections
In vanilla jsPlumb only, you can instruct jsPlumb to prevent loopback connections without having to resort to a beforeDrop interceptor. You do this by setting allowLoopback:false on the parameters passed to the makeTarget method:
```
jsPlumb.makeTarget("foo", {
allowLoopback:false
});
``` | Bind an event to get notified whenever a new connection is created. After the connection is created, check whether its source and target are the same; if so, detach the connection to avoid a self loop. Code:
```
jsPlumb.bind("jsPlumbConnection", function(ci) {
var s=ci.sourceId,c=ci.targetId;
if( s===c ){ //source and target ID's are same
jsPlumb.detach(ci.connection);
}
else{ // Keep connection if ID's are different (Do nothing)
// console.log(s+"->"+c);
}
});
``` |
30,995,731 | I have a web page with a huge form to fill. In most cases, the session times out and users lose a lot of data. I searched for this problem and found [Prevent session expired in PHP Session for inactive user](https://stackoverflow.com/questions/5962671/prevent-session-expired-in-php-session-for-inactive-user)
I implemented an Ajax call:
```
function heartbeat() {
clearTimeout(window.intervalID);
$.ajax({
url: "trash.png",
type: "post",
cache: false,
dataType: 'json',
success: function(data) {
},
complete: function() {
window.intervalID = setTimeout(function() {
heartbeat();
}, 300000);
}
});
}
```
and call `heartbeat();` in `$(document).ready`; `trash.png` is in the same directory as the file where I have the jQuery code with the Ajax call.
I checked with fiddler and jQuery is sending requests to `trash.png` every 5 minutes. But after 30 minutes my session still expires.
`session_start();` is called when the user logs in to the web page.
What am I doing wrong? | 2015/06/23 | [
"https://Stackoverflow.com/questions/30995731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2549665/"
] | You cannot keep the session alive without calling a PHP script that starts a session; just downloading a PNG file will not keep the session from dying. Create a PHP script like this:
```
<?php session_start(); ?>
```
Put it into the directory and call this instead of that trash.png asset.
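For illustration only (not part of the original answer), the question's heartbeat loop can be written with the timer injected, so the re-arming logic is testable without a browser; the endpoint name `keepalive.php` below is an assumption standing in for whatever script calls `session_start()`.

```javascript
// Sketch: a heartbeat that pings a session-refreshing endpoint and
// re-arms itself after each ping. `schedule` is normally setTimeout;
// it is passed in so the logic can be exercised synchronously.
function makeHeartbeat(ping, intervalMs, schedule) {
  function beat() {
    ping();                      // e.g. $.post("keepalive.php")
    schedule(beat, intervalMs);  // queue the next beat
  }
  return beat;
}

// In the page: makeHeartbeat(() => $.post("keepalive.php"), 300000, setTimeout)();
```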
You might need to call other things before calling `session_start()`, depending on how you are starting it in your other scripts. | In the Ajax call you can set a timeout as
```
jQuery.ajax({
url: 'ajaxhandler.php',
success: function (result) {
returned_value=result;
},
timeout: 10000,
async: false
});
``` |
1,553,047 | By using DeMoivre's theorem, express
$$\frac{\sin 7\theta}{\sin \theta}$$
in powers of sine only.
The answer given in the book is
$$7-56\sin ^2\theta+112\sin ^4 \theta-64\sin^6 \theta$$
Can anyone help to solve the question? | 2015/11/30 | [
"https://math.stackexchange.com/questions/1553047",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/153864/"
] | **Steps To Carry Out**
1) First take a look [at this link](https://en.wikipedia.org/wiki/De_Moivre%27s_formula) which is a guide for DeMoivre's formula.
2) Using step 1 show that
$$\sin (7x) = 64\sin \left( x \right)\cos {\left( x \right)^6} - 80\sin \left( x \right)\cos {\left( x \right)^4} + 24\sin \left( x \right)\cos {\left( x \right)^2} - \sin \left( x \right)$$
3) Replace $\cos^2(x)=1-\sin^2(x)$ and obtain
$$\sin (7x) = 7\sin \left( x \right) - 56\sin {\left( x \right)^3} + 112\sin {\left( x \right)^5} - 64\sin {\left( x \right)^7}$$
4) Divide by $\sin(x)$
$${{\sin (7x)} \over {\sin (x)}} = 7 - 56\sin {\left( x \right)^2} + 112\sin {\left( x \right)^4} - 64\sin {\left( x \right)^6}$$ | Take a look at [Chebyshev Polynomial of the Second Kind](https://en.wikipedia.org/wiki/Chebyshev_polynomials).
Since $U\_6 (\cos \theta) = \frac{\sin 7 \theta}{\sin \theta}$, we can find $U\_6(x)$ by the recurrence given in the Wikipedia link, i.e. $$U\_0(x)=1$$ $$U\_1(x)=2x$$ $$U\_{n+1}(x)=2xU\_n(x)-U\_{n-1}(x)$$
We can then use $\cos^2 \theta = 1-\sin^2 \theta$ to convert it into a polynomial of $\sin$.
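As a quick numeric sanity check of the book's identity (an illustration added here, not part of the original answer):

```javascript
// Verify sin(7t)/sin(t) === 7 - 56 s^2 + 112 s^4 - 64 s^6, with s = sin(t).
function lhs(t) { return Math.sin(7 * t) / Math.sin(t); }
function rhs(t) {
  const s2 = Math.sin(t) ** 2;
  return 7 - 56 * s2 + 112 * s2 ** 2 - 64 * s2 ** 3;
}
for (const t of [0.1, 0.7, 1.3, 2.9]) {
  if (Math.abs(lhs(t) - rhs(t)) > 1e-9) throw new Error("mismatch at t=" + t);
}
```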
If you want to use DeMoivre's Theorem, use $$(\cos \theta + i \sin \theta)^7 = (\cos 7 \theta + i \sin 7 \theta)$$
Expand L.H.S to find the desired value. |
1,553,047 | By using DeMoivre's theorem, express
$$\frac{\sin 7\theta}{\sin \theta}$$
in powers of sine only.
The answer given in the book is
$$7-56\sin ^2\theta+112\sin ^4 \theta-64\sin^6 \theta$$
Can anyone help to solve the question? | 2015/11/30 | [
"https://math.stackexchange.com/questions/1553047",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/153864/"
] | Take a look at [Chebyshev Polynomial of the Second Kind](https://en.wikipedia.org/wiki/Chebyshev_polynomials).
Since $U\_6 (\cos \theta) = \frac{\sin 7 \theta}{\sin \theta}$, we can find $U\_6(x)$ by the recurrence given in the Wikipedia link, i.e. $$U\_0(x)=1$$ $$U\_1(x)=2x$$ $$U\_{n+1}(x)=2xU\_n(x)-U\_{n-1}(x)$$
We can then use $\cos^2 \theta = 1-\sin^2 \theta$ to convert it into a polynomial of $\sin$.
If you want to use DeMoivre's Theorem, use $$(\cos \theta + i \sin \theta)^7 = (\cos 7 \theta + i \sin 7 \theta)$$
Expand the L.H.S. to find the desired value. | $\sin(7\theta)=\sin(\theta+6\theta)=\sin(\theta)\cos(6\theta)+\sin(6\theta)\cos(\theta)$; then $\cos(6\theta)=\cos(3\theta+3\theta)=\cos(3\theta)\cos(3\theta)-\sin(3\theta)\sin(3\theta)$ and $\sin(6\theta)=2\sin(3\theta)\cos(3\theta)$, and we know $\cos(3\theta)=4\cos^3(\theta)-3\cos(\theta)$, $\sin(3\theta)=3\sin(\theta)-4\sin^3(\theta)$. Now you can just back-substitute and get the job done. Hope it's clear. |
1,553,047 | By using DeMoivre's theorem, express
$$\frac{\sin 7\theta}{\sin \theta}$$
in powers of sine only.
The answer given in the book is
$$7-56\sin ^2\theta+112\sin ^4 \theta-64\sin^6 \theta$$
Can anyone help to solve the question? | 2015/11/30 | [
"https://math.stackexchange.com/questions/1553047",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/153864/"
] | Take a look at [Chebyshev Polynomial of the Second Kind](https://en.wikipedia.org/wiki/Chebyshev_polynomials).
Since $U\_6 (\cos \theta) = \frac{\sin 7 \theta}{\sin \theta}$, we can find $U\_6(x)$ by the recurrence given in the Wikipedia link, i.e. $$U\_0(x)=1$$ $$U\_1(x)=2x$$ $$U\_{n+1}(x)=2xU\_n(x)-U\_{n-1}(x)$$
We can then use $\cos^2 \theta = 1-\sin^2 \theta$ to convert it into a polynomial of $\sin$.
If you want to use DeMoivre's Theorem, use $$(\cos \theta + i \sin \theta)^7 = (\cos 7 \theta + i \sin 7 \theta)$$
Expand L.H.S to find the desired value. | $(\cos x+i\sin x)^7=\cos 7x+i\sin7x$.
We have
$$(r+t)^7=r^7+7r^6t+21r^5t^2+35r^4t^3+35r^3t^4+21r^2t^5+7rt^6+t^7$$
hence, setting $\sin x=t$ and $\cos x=r$ and taking into account the powers of $i$, we get
$$\sin 7x=7r^6t-35r^4t^3+21r^2t^5-t^7$$ so, because of $\cos^2x=1-\sin^2x$,
$$7t-21t^3+21t^5-7t^7-35t^3+70t^5-35t^7+21t^5-21t^7-t^7=-64t^7+112t^5-56t^3+7t$$
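As an illustrative numeric check (added here, not part of the original answer), the resulting polynomial does reproduce $\sin 7x$:

```javascript
// Check sin(7x) === -64 t^7 + 112 t^5 - 56 t^3 + 7 t, with t = sin(x).
function sin7ViaPoly(x) {
  const t = Math.sin(x);
  return -64 * t ** 7 + 112 * t ** 5 - 56 * t ** 3 + 7 * t;
}
for (const x of [0.3, 1.1, 2.4]) {
  if (Math.abs(sin7ViaPoly(x) - Math.sin(7 * x)) > 1e-9) {
    throw new Error("mismatch at x=" + x);
  }
}
```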
Now dividing by $t$, we finish. |
1,553,047 | By using DeMoivre's theorem, express
$$\frac{\sin 7\theta}{\sin \theta}$$
in powers of sine only.
The answer given in the book is
$$7-56\sin ^2\theta+112\sin ^4 \theta-64\sin^6 \theta$$
Can anyone help to solve the question? | 2015/11/30 | [
"https://math.stackexchange.com/questions/1553047",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/153864/"
] | **Steps To Carry Out**
1) First take a look [at this link](https://en.wikipedia.org/wiki/De_Moivre%27s_formula) which is a guide for DeMoivre's formula.
2) Using step 1 show that
$$\sin (7x) = 64\sin \left( x \right)\cos {\left( x \right)^6} - 80\sin \left( x \right)\cos {\left( x \right)^4} + 24\sin \left( x \right)\cos {\left( x \right)^2} - \sin \left( x \right)$$
3) Replace $\cos^2(x)=1-\sin^2(x)$ and obtain
$$\sin (7x) = 7\sin \left( x \right) - 56\sin {\left( x \right)^3} + 112\sin {\left( x \right)^5} - 64\sin {\left( x \right)^7}$$
4) Divide by $\sin(x)$
$${{\sin (7x)} \over {\sin (x)}} = 7 - 56\sin {\left( x \right)^2} + 112\sin {\left( x \right)^4} - 64\sin {\left( x \right)^6}$$ | $sin(7\theta)=sin(\theta+6\theta)=sin(\theta).cos(6\theta)+sin(6\theta)cos(\theta)$ then we have $cos(6\theta)=cos(3\theta+3\theta)=cos(3\theta).cos(\theta)-[sin(3\theta).sin(3\theta)]$ now we know $cos(3\theta)=4cos^3(\theta)-3cos(\theta),sin(\theta)=3sin(\theta)+4sin^3(\theta)$. Now you can just back substitute and get the job done. Hope its clear. |
1,553,047 | By using DeMoivre's theorem, express
$$\frac{\sin 7\theta}{\sin \theta}$$
in powers of sine only.
The answer given in the book is
$$7-56\sin ^2\theta+112\sin ^4 \theta-64\sin^6 \theta$$
Can anyone help to solve the question? | 2015/11/30 | [
"https://math.stackexchange.com/questions/1553047",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/153864/"
] | **Steps To Carry Out**
1) First take a look [at this link](https://en.wikipedia.org/wiki/De_Moivre%27s_formula) which is a guide for DeMoivre's formula.
2) Using step 1 show that
$$\sin (7x) = 64\sin \left( x \right)\cos {\left( x \right)^6} - 80\sin \left( x \right)\cos {\left( x \right)^4} + 24\sin \left( x \right)\cos {\left( x \right)^2} - \sin \left( x \right)$$
3) Replace $\cos^2(x)=1-\sin^2(x)$ and obtain
$$\sin (7x) = 7\sin \left( x \right) - 56\sin {\left( x \right)^3} + 112\sin {\left( x \right)^5} - 64\sin {\left( x \right)^7}$$
4) Divide by $\sin(x)$
$${{\sin (7x)} \over {\sin (x)}} = 7 - 56\sin {\left( x \right)^2} + 112\sin {\left( x \right)^4} - 64\sin {\left( x \right)^6}$$ | $(\cos x+i\sin x)^7=\cos 7x+i\sin7x$.
We have
$$(r+t)^7=r^7+7r^6t+21r^5t^2+35r^4t^3+35r^3t^4+21r^2t^5+7rt^6+t^7$$
hence, setting $\sin x=t$ and $\cos x=r$ and taking into account the powers of $i$, we get
$$\sin 7x=7r^6t-35r^4t^3+21r^2t^5-t^7$$ so, because of $\cos^2x=1-\sin^2x$,
$$7t-21t^3+21t^5-7t^7-35t^3+70t^5-35t^7+21t^5-21t^7-t^7=-64t^7+112t^5-56t^3+7t$$
Now dividing by $t$, we finish. |
1,553,047 | By using DeMoivre's theorem, express
$$\frac{\sin 7\theta}{\sin \theta}$$
in powers of sine only.
The answer given in the book is
$$7-56\sin ^2\theta+112\sin ^4 \theta-64\sin^6 \theta$$
Can anyone help to solve the question? | 2015/11/30 | [
"https://math.stackexchange.com/questions/1553047",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/153864/"
] | $(\cos x+i\sin x)^7=\cos 7x+i\sin7x$.
We have
$$(r+t)^7=r^7+7r^6t+21r^5t^2+35r^4t^3+35r^3t^4+21r^2t^5+7rt^6+t^7$$
hence, setting $\sin x=t$ and $\cos x=r$ and taking into account the powers of $i$, we get
$$\sin 7x=7r^6t-35r^4t^3+21r^2t^5-t^7$$ so, because of $\cos^2x=1-\sin^2x$,
$$7t-21t^3+21t^5-7t^7-35t^3+70t^5-35t^7+21t^5-21t^7-t^7=-64t^7+112t^5-56t^3+7t$$
Now dividing by $t$, we finish. | $\sin(7\theta)=\sin(\theta+6\theta)=\sin(\theta)\cos(6\theta)+\sin(6\theta)\cos(\theta)$; then $\cos(6\theta)=\cos(3\theta+3\theta)=\cos(3\theta)\cos(3\theta)-\sin(3\theta)\sin(3\theta)$ and $\sin(6\theta)=2\sin(3\theta)\cos(3\theta)$, and we know $\cos(3\theta)=4\cos^3(\theta)-3\cos(\theta)$, $\sin(3\theta)=3\sin(\theta)-4\sin^3(\theta)$. Now you can just back-substitute and get the job done. Hope it's clear. |
29,756,074 | I'm trying to get a flot tooltip to appear, but nothing is happening. Can anyone tell me what I'm doing wrong, please? Maybe it doesn't recognize my points; they are appearing on the graph (number of people on the y-axis, the years on the x-axis).
```
$.post('php/myprogram.php',
function(output){
var obj = jQuery.parseJSON( output );
var data = [];
var coordinate = [];
for (var i = 0; i< obj.length-1; i++) {
coordinate.push(obj[i][0]);
coordinate.push(obj[i][1]);
data.push(coordinate);
coordinate = [];
}
var options = {
xaxis: {
axisLabel: 'YEAR',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial',
tickDecimals: 0
},
yaxis: {
axisLabel: '',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial'
},
series: {
lines: {
show: true,
color: '#ffa500'
},
points: {
show: true
}
},
grid: {
hoverable: true
}
};
$.plot($("#byYear"),
[data],
options
);
function showTooltip(x, y, contents) {
$("<div id='tooltip'>" + contents + "</div>").css({
position: "absolute",
display: "none",
top: y + 5,
left: x + 5,
border: "1px solid #fdd",
padding: "2px",
"background-color": "#fff",
opacity: 0.80
}).appendTo("body").fadeIn(200);
}
$("#byYear").bind("plothover", function (event, pos, item) {
var str = "(" + pos.x.toFixed(2) + ", " + pos.y.toFixed(2) + ")";
//$("#hoverdata").text(str);
if (item) {
if (previousPoint != item.dataIndex) {
previousPoint = item.dataIndex;
$("#tooltip").remove();
var x = item.datapoint[0].toFixed(2);
var y = item.datapoint[1].toFixed(2);
showTooltip(item.pageX, item.pageY, str);
}
} else {
$("#tooltip").remove();
previousPoint = null;
}
}); //end bind
});
``` | 2015/04/20 | [
"https://Stackoverflow.com/questions/29756074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1229001/"
] | Returning a type (not a reference) will make a *copy* of the object you're returning. Thus there's no need to cast it, the copy can be changed without affecting the original. The code will compile cleanly if you remove the `const_cast`.
Edit: based on the latest edit to the question, I'd say this is an abuse of `const_cast`. The principle followed by C++ is that a `const` member function should not only make no changes to the object itself, but should not return anything that could be used to make changes outside of the function. By returning a non-const pointer to a member variable, you violate this principle. | you can just return `n_`, as it is
```
int n() const
{
return n_;
}
``` |
29,756,074 | I'm trying to get a flot tooltip to appear, but nothing is happening. Can anyone tell me what I'm doing wrong, please? Maybe it doesn't recognize my points; they are appearing on the graph (number of people on the y-axis, the years on the x-axis).
```
$.post('php/myprogram.php',
function(output){
var obj = jQuery.parseJSON( output );
var data = [];
var coordinate = [];
for (var i = 0; i< obj.length-1; i++) {
coordinate.push(obj[i][0]);
coordinate.push(obj[i][1]);
data.push(coordinate);
coordinate = [];
}
var options = {
xaxis: {
axisLabel: 'YEAR',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial',
tickDecimals: 0
},
yaxis: {
axisLabel: '',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial'
},
series: {
lines: {
show: true,
color: '#ffa500'
},
points: {
show: true
}
},
grid: {
hoverable: true
}
};
$.plot($("#byYear"),
[data],
options
);
function showTooltip(x, y, contents) {
$("<div id='tooltip'>" + contents + "</div>").css({
position: "absolute",
display: "none",
top: y + 5,
left: x + 5,
border: "1px solid #fdd",
padding: "2px",
"background-color": "#fff",
opacity: 0.80
}).appendTo("body").fadeIn(200);
}
$("#byYear").bind("plothover", function (event, pos, item) {
var str = "(" + pos.x.toFixed(2) + ", " + pos.y.toFixed(2) + ")";
//$("#hoverdata").text(str);
if (item) {
if (previousPoint != item.dataIndex) {
previousPoint = item.dataIndex;
$("#tooltip").remove();
var x = item.datapoint[0].toFixed(2);
var y = item.datapoint[1].toFixed(2);
showTooltip(item.pageX, item.pageY, str);
}
} else {
$("#tooltip").remove();
previousPoint = null;
}
}); //end bind
});
``` | 2015/04/20 | [
"https://Stackoverflow.com/questions/29756074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1229001/"
] | Returning a type (not a reference) will make a *copy* of the object you're returning. Thus there's no need to cast it, the copy can be changed without affecting the original. The code will compile cleanly if you remove the `const_cast`.
Edit: based on the latest edit to the question, I'd say this is an abuse of `const_cast`. The principle followed by C++ is that a `const` member function should not only make no changes to the object itself, but should not return anything that could be used to make changes outside of the function. By returning a non-const pointer to a member variable, you violate this principle. | There is no need for using a cast. In the function, `this->n_` is a `const` pointer; it does not point to `const int`.
```
int* n() const
{
// No need for a cast.
return n_;
}
```
It makes more sense to return a `const int*` from the `const` function. You don't want something like this:
```
const A a;
a.n()[0] = 10;
```
That subverts the `const`-ness of the object. You can prevent that by using:
```
const int* n() const
{
// No need for a cast either.
return n_;
}
``` |
29,756,074 | I'm trying to get a flot tooltip to appear, but nothing is happening. Can anyone tell me what I'm doing wrong, please? Maybe it doesn't recognize my points; they are appearing on the graph (number of people on the y-axis, the years on the x-axis).
```
$.post('php/myprogram.php',
function(output){
var obj = jQuery.parseJSON( output );
var data = [];
var coordinate = [];
for (var i = 0; i< obj.length-1; i++) {
coordinate.push(obj[i][0]);
coordinate.push(obj[i][1]);
data.push(coordinate);
coordinate = [];
}
var options = {
xaxis: {
axisLabel: 'YEAR',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial',
tickDecimals: 0
},
yaxis: {
axisLabel: '',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial'
},
series: {
lines: {
show: true,
color: '#ffa500'
},
points: {
show: true
}
},
grid: {
hoverable: true
}
};
$.plot($("#byYear"),
[data],
options
);
function showTooltip(x, y, contents) {
$("<div id='tooltip'>" + contents + "</div>").css({
position: "absolute",
display: "none",
top: y + 5,
left: x + 5,
border: "1px solid #fdd",
padding: "2px",
"background-color": "#fff",
opacity: 0.80
}).appendTo("body").fadeIn(200);
}
$("#byYear").bind("plothover", function (event, pos, item) {
var str = "(" + pos.x.toFixed(2) + ", " + pos.y.toFixed(2) + ")";
//$("#hoverdata").text(str);
if (item) {
if (previousPoint != item.dataIndex) {
previousPoint = item.dataIndex;
$("#tooltip").remove();
var x = item.datapoint[0].toFixed(2);
var y = item.datapoint[1].toFixed(2);
showTooltip(item.pageX, item.pageY, str);
}
} else {
$("#tooltip").remove();
previousPoint = null;
}
}); //end bind
});
``` | 2015/04/20 | [
"https://Stackoverflow.com/questions/29756074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1229001/"
] | Returning a type (not a reference) will make a *copy* of the object you're returning. Thus there's no need to cast it, the copy can be changed without affecting the original. The code will compile cleanly if you remove the `const_cast`.
Edit: based on the latest edit to the question, I'd say this is an abuse of `const_cast`. The principle followed by C++ is that a `const` member function should not only make no changes to the object itself, but should not return anything that could be used to make changes outside of the function. By returning a non-const pointer to a member variable, you violate this principle. | Generally speaking, casting a `T const` to a `T` with `const_cast<>` is almost always unnecessary. This is because the constant object is being converted into a non-constant temporary, and this can be safely accomplished without a cast.
```
int const n = 0; // n is a constant int
int x = n; // perfectly safe
```
This is true even if `T` is a pointer type.
```
int * const n = nullptr; // n is a constant pointer to an int
int * x = n; // perfectly safe
```
However, if you move the `const` keyword to the front, it is no longer making the pointer type constant, but the type that is being pointed to constant. Thus, for our example above:
```
const int * n; // n is a pointer to a constant int
int * x = n; // error, x is a pointer to an int
```
You can see that `x` points to a something different than what `n` points to, and so the initialization will fail. In this case, the initialization would require a `const_cast<>`:
```
int * x = const_cast<int *>(n);
// cast away the const-ness that n is pointing to
```
You should only do this if you know that `n` is actually modifiable (it may not be if the pointer is to actual read-only memory), or if you know that the user of `x` will not actually try to modify the contents pointed to by `n`.
---
For your example, you seem to believe your `const` method should return a pointer to data held by your object in such a way that the data be modifiable by the caller. That is, since the `n()` method is declared `const`, it means the contents of the object being accessed should be considered constant. Thus, `n_` is an array of constant `int`, which will decay to a pointer to constant `int`, but you want to return a pointer to `int`.
If you intend `n_` to be modifiable regardless of whether the object is being treated as constant, you can declare that intention with the use of `mutable`. This will make `n_` be treated as non-constant even if the containing object is `const`, and it thus makes the `const_cast` unnecessary.
```
class A
{
private:
mutable int n_[10];
public:
/* ... */
int* n() const
{
return n_;
}
};
``` |
29,756,074 | I'm trying to get a flot tooltip to appear, but nothing is happening. Can anyone tell me what I'm doing wrong, please? Maybe it doesn't recognize my points; they are appearing on the graph (number of people on the y-axis, the years on the x-axis).
```
$.post('php/myprogram.php',
function(output){
var obj = jQuery.parseJSON( output );
var data = [];
var coordinate = [];
for (var i = 0; i< obj.length-1; i++) {
coordinate.push(obj[i][0]);
coordinate.push(obj[i][1]);
data.push(coordinate);
coordinate = [];
}
var options = {
xaxis: {
axisLabel: 'YEAR',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial',
tickDecimals: 0
},
yaxis: {
axisLabel: '',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial'
},
series: {
lines: {
show: true,
color: '#ffa500'
},
points: {
show: true
}
},
grid: {
hoverable: true
}
};
$.plot($("#byYear"),
[data],
options
);
function showTooltip(x, y, contents) {
$("<div id='tooltip'>" + contents + "</div>").css({
position: "absolute",
display: "none",
top: y + 5,
left: x + 5,
border: "1px solid #fdd",
padding: "2px",
"background-color": "#fff",
opacity: 0.80
}).appendTo("body").fadeIn(200);
}
$("#byYear").bind("plothover", function (event, pos, item) {
var str = "(" + pos.x.toFixed(2) + ", " + pos.y.toFixed(2) + ")";
//$("#hoverdata").text(str);
if (item) {
if (previousPoint != item.dataIndex) {
previousPoint = item.dataIndex;
$("#tooltip").remove();
var x = item.datapoint[0].toFixed(2);
var y = item.datapoint[1].toFixed(2);
showTooltip(item.pageX, item.pageY, str);
}
} else {
$("#tooltip").remove();
previousPoint = null;
}
}); //end bind
});
``` | 2015/04/20 | [
"https://Stackoverflow.com/questions/29756074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1229001/"
] | No, that is not a valid use of `const_cast`.
Your function is not modifying the array, but it is granting rights to modify the array to the caller. As such, it should have those rights itself. You can't give someone access to something to which you yourself don't have access. So the method should not be const, and the const cast is then unnecessary.
You might also want a const version of the function, which returns a `const int*`. This is the same principle as, for example, `std::vector::operator[]`. Even though the operator doesn't modify the vector, it grants access to modify the vector, and so it is not a const function. (but there is also a const overloaded version, which returns a const reference, thereby not granting the right to modify the vector) | you can just return `n_`, as it is
```
int n() const
{
return n_;
}
``` |
29,756,074 | I'm trying to get a flot tooltip to appear, but nothing is happening. Can anyone tell me what I'm doing wrong, please? Maybe it doesn't recognize my points; they are appearing on the graph (number of people on the y-axis, the years on the x-axis).
```
$.post('php/myprogram.php',
function(output){
var obj = jQuery.parseJSON( output );
var data = [];
var coordinate = [];
for (var i = 0; i< obj.length-1; i++) {
coordinate.push(obj[i][0]);
coordinate.push(obj[i][1]);
data.push(coordinate);
coordinate = [];
}
var options = {
xaxis: {
axisLabel: 'YEAR',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial',
tickDecimals: 0
},
yaxis: {
axisLabel: '',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial'
},
series: {
lines: {
show: true,
color: '#ffa500'
},
points: {
show: true
}
},
grid: {
hoverable: true
}
};
$.plot($("#byYear"),
[data],
options
);
function showTooltip(x, y, contents) {
$("<div id='tooltip'>" + contents + "</div>").css({
position: "absolute",
display: "none",
top: y + 5,
left: x + 5,
border: "1px solid #fdd",
padding: "2px",
"background-color": "#fff",
opacity: 0.80
}).appendTo("body").fadeIn(200);
}
$("#byYear").bind("plothover", function (event, pos, item) {
var str = "(" + pos.x.toFixed(2) + ", " + pos.y.toFixed(2) + ")";
//$("#hoverdata").text(str);
if (item) {
if (previousPoint != item.dataIndex) {
previousPoint = item.dataIndex;
$("#tooltip").remove();
var x = item.datapoint[0].toFixed(2);
var y = item.datapoint[1].toFixed(2);
showTooltip(item.pageX, item.pageY, str);
}
} else {
$("#tooltip").remove();
previousPoint = null;
}
}); //end bind
});
``` | 2015/04/20 | [
"https://Stackoverflow.com/questions/29756074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1229001/"
] | No, that is not a valid use of `const_cast`.
Your function is not modifying the array, but it is granting rights to modify the array to the caller. As such, it should have those rights itself. You can't give someone access to something to which you yourself don't have access. So the method should not be const, and the const cast is then unnecessary.
You might also want a const version of the function, which returns a `const int*`. This is the same principle as, for example, `std::vector::operator[]`. Even though the operator doesn't modify the vector, it grants access to modify the vector, and so it is not a const function. (but there is also a const overloaded version, which returns a const reference, thereby not granting the right to modify the vector) | There is no need for using a cast. In the function, `this->n_` is a `const` pointer; it does not point to `const int`.
```
int* n() const
{
// No need for a cast.
return n_;
}
```
It makes more sense to return a `const int*` from the `const` function. You don't want something like this:
```
const A a;
a.n()[0] = 10;
```
That subverts the `const`-ness of the object. You can prevent that by using:
```
const int* n() const
{
// No need for a cast either.
return n_;
}
``` |
29,756,074 | I'm trying to get a flot tooltip to appear, but nothing is happening. Can anyone tell me what I'm doing wrong, please? Maybe it doesn't recognize my points; they are appearing on the graph (number of people on the y-axis, the years on the x-axis).
```
$.post('php/myprogram.php',
function(output){
var obj = jQuery.parseJSON( output );
var data = [];
var coordinate = [];
for (var i = 0; i< obj.length-1; i++) {
coordinate.push(obj[i][0]);
coordinate.push(obj[i][1]);
data.push(coordinate);
coordinate = [];
}
var options = {
xaxis: {
axisLabel: 'YEAR',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial',
tickDecimals: 0
},
yaxis: {
axisLabel: '',
axisLabelUseCanvas: true,
axisLabelFontSizePixels: 15,
axisLabelFontFamily: 'Arial'
},
series: {
lines: {
show: true,
color: '#ffa500'
},
points: {
show: true
}
},
grid: {
hoverable: true
}
};
$.plot($("#byYear"),
[data],
options
);
function showTooltip(x, y, contents) {
$("<div id='tooltip'>" + contents + "</div>").css({
position: "absolute",
display: "none",
top: y + 5,
left: x + 5,
border: "1px solid #fdd",
padding: "2px",
"background-color": "#fff",
opacity: 0.80
}).appendTo("body").fadeIn(200);
}
$("#byYear").bind("plothover", function (event, pos, item) {
var str = "(" + pos.x.toFixed(2) + ", " + pos.y.toFixed(2) + ")";
//$("#hoverdata").text(str);
if (item) {
if (previousPoint != item.dataIndex) {
previousPoint = item.dataIndex;
$("#tooltip").remove();
var x = item.datapoint[0].toFixed(2);
var y = item.datapoint[1].toFixed(2);
showTooltip(item.pageX, item.pageY, str);
}
} else {
$("#tooltip").remove();
previousPoint = null;
}
}); //end bind
});
``` | 2015/04/20 | [
"https://Stackoverflow.com/questions/29756074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1229001/"
] | No, that is not a valid use of `const_cast`.
Your function is not modifying the array, but it is granting rights to modify the array to the caller. As such, it should have those rights itself. You can't give someone access to something to which you yourself don't have access. So the method should not be const, and the const cast is then unnecessary.
You might also want a const version of the function, which returns a `const int*`. This is the same principle as, for example, `std::vector::operator[]`. Even though the operator doesn't modify the vector, it grants access to modify the vector, and so it is not a const function. (but there is also a const overloaded version, which returns a const reference, thereby not granting the right to modify the vector) | Generally speaking, casting a `T const` to a `T` with `const_cast<>` is almost always unnecessary. This is because the constant object is being converted into a non-constant temporary, and this can be safely accomplished without a cast.
```
int const n = 0; // n is a constant int
int x = n; // perfectly safe
```
This is true even if `T` is a pointer type.
```
int * const n = nullptr; // n is a constant pointer to an int
int * x = n; // perfectly safe
```
However, if you move the `const` keyword to the front, it is no longer making the pointer type constant, but the type that is being pointed to constant. Thus, for our example above:
```
const int * n; // n is a pointer to a constant int
int * x = n; // error, x is a pointer to an int
```
You can see that `x` points to a something different than what `n` points to, and so the initialization will fail. In this case, the initialization would require a `const_cast<>`:
```
int * x = const_cast<int *>(n);
// cast away the const-ness that n is pointing to
```
You should only do this if you know that `n` is actually modifiable (it may not be if the pointer is to actual read-only memory), or if you know that the user of `x` will not actually try to modify the contents pointed to by `n`.
---
For your example, you seem to believe your `const` method should return a pointer to data held by your object in such a way that the data be modifiable by the caller. That is, since the `n()` method is declared `const`, it means the contents of the object being accessed should be considered constant. Thus, `n_` is an array of constant `int`, which will decay to a pointer to constant `int`, but you want to return a pointer to `int`.
If you intend `n_` to be modifiable regardless of whether the object is being treated as constant, you can declare that intention with the use of `mutable`. This will make `n_` be treated as non-constant even if the containing object is `const`, and it thus makes the `const_cast` unnecessary.
```
class A
{
private:
mutable int n_[10];
public:
/* ... */
int* n() const
{
return n_;
}
};
``` |
59,147,042 | Parent Component code:
```
const ParentPage = ({ }) => {
const [filteredResults, setFilteredResults] = useState([]);
```
Inside render:
```
<ChildPage records={filteredResults}/>
```
ChildPage code:
```
const ChildPage= ({ records}) => {
const [displayStore, setDisplayStores] = useState([]);
useEffect(() => {
if (records.length === 0 || records.length > max) {
setDisplayStores([]);
return;
}
records.forEach(record=> {
if (record.total) {
let hoverText = '';
if (!record.sum) {
hoverText += '- Missing sumData';
}
if (hoverText === '') {
record.indicator = <DataIndicatorIcon status="good" />;
} else {
record.indicator = (
<DataIndicatorIcon status="ok" hoverText={hoverText} />
);
}
}
});
setDisplayStore(records);
}, [records]);
return (
<ReactTable
getTdProps={(state, row) => ({
onClick: (event, cb) => {
onRowSelected(row.original);
cb();
},
})}
data={displayStore}
);
};
```
Error:
```
Error (Invariant Violation: A state mutation was detected between dispatches, in the path....) is caused by setting record.indicator
```
Assumed Problem:
```
record.indicator = <DataIndicatorIcon status="good" />;
} else {
record.indicator = (
<DataIndicatorIcon status="ok" hoverText={hoverText} />
);
}
}
```
How can I update the props here without causing a state mutation error? | 2019/12/02 | [
"https://Stackoverflow.com/questions/59147042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12209835/"
] | Create a new array `updatedRecords` from `records` using `.map` instead of updating it in-place using `.forEach`.
```
const updatedRecords = records.map(record => {
  if (!record.total) {
    return record;
  }
  let hoverText = '';
  if (!record.sum) {
    hoverText += '- Missing sumData';
  }
  const indicator =
    hoverText === '' ? (
      <DataIndicatorIcon status="good" />
    ) : (
      <DataIndicatorIcon status="ok" hoverText={hoverText} />
    );
  // Return a new object instead of mutating the record held in parent state.
  return { ...record, indicator };
});
setDisplayStores(updatedRecords);
``` | The problem is that you are passing an array, and arrays are passed by reference. If you don't want to change the state of the parent component, you need to pass a new array.
this could be one solution:
```
<ChildPage records={[...filteredResults]}/>
``` |
59,147,042 | Parent Component code:
```
const ParentPage = ({ }) => {
const [filteredResults, setFilteredResults] = useState([]);
```
Inside render:
```
<ChildPage records={filteredResults}/>
```
ChildPage code:
```
const ChildPage= ({ records}) => {
const [displayStore, setDisplayStores] = useState([]);
useEffect(() => {
if (records.length === 0 || records.length > max) {
setDisplayStores([]);
return;
}
records.forEach(record=> {
if (record.total) {
let hoverText = '';
if (!record.sum) {
hoverText += '- Missing sumData';
}
if (hoverText === '') {
record.indicator = <DataIndicatorIcon status="good" />;
} else {
record.indicator = (
<DataIndicatorIcon status="ok" hoverText={hoverText} />
);
}
}
});
setDisplayStores(records);
}, [records]);
return (
<ReactTable
getTdProps={(state, row) => ({
onClick: (event, cb) => {
onRowSelected(row.original);
cb();
},
})}
data={displayStore}
);
};
```
Error:
```
Error (Invariant Violation: A state mutation was detected between dispatches, in the path....) is caused due to setting record.indicator
```
Assumed Problem:
```
record.indicator = <DataIndicatorIcon status="good" />;
} else {
record.indicator = (
<DataIndicatorIcon status="ok" hoverText={hoverText} />
);
}
}
```
How can I update the props here without causing a state mutation error? | 2019/12/02 | [
"https://Stackoverflow.com/questions/59147042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12209835/"
] | Create a new array `updatedRecords` from `records` using `.map` instead of updating it in-place using `.forEach`.
```
const updatedRecords = records.map(record => {
  if (!record.total) {
    return record;
  }
  let hoverText = '';
  if (!record.sum) {
    hoverText += '- Missing sumData';
  }
  const indicator =
    hoverText === '' ? (
      <DataIndicatorIcon status="good" />
    ) : (
      <DataIndicatorIcon status="ok" hoverText={hoverText} />
    );
  // Return a new object instead of mutating the record held in parent state.
  return { ...record, indicator };
});
setDisplayStores(updatedRecords);
``` | I used the below code in ChildPage which resolved the issue
const updatedRecords = JSON.parse(JSON.stringify(records)); |
112,506 | I'm new to photography and I'm working on this project where I need to find the right balance between ISO and exposure compensation, to take pictures in different environments. This is an App for android phones, so the options are limited. However I can use the **light sensor** of the phone to get the current ambient light, reported in lx.
I'm trying to automate the process of calculating the right ISO and exposure based on the received light but so far I'm unable to find material on this.
All the videos/tutorials/docs point to how to configure **manually** the ISO/Shutter Speed/Aperture for the perfect picture.
I believe some calculation method exists that has as an input the light level (lx) and as output the ISO and exposure compensation values? | 2019/10/11 | [
"https://photo.stackexchange.com/questions/112506",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/87410/"
] | I don't know what 'exposure compensation value' means (is it EV?), but there is no automatic way of being able to do this. Simple-mindedly there are four parameters which define how bright (a bit of) the image will be:
* *b*, how bright the scene (or the bit of it corresponding to the bit of the image) is;
* *s*, how sensitive the sensor is, which is essentially its ISO (see below);
* *t*, the exposure time, in seconds;
* *a*, the aperture, given as f/11 or whatever, which we interpret as 1/11 or whatever.
The brightness of the image is then roughly proportional to
>
> *b* ∗ *s* ∗ *t* ∗ *a*^2
>
>
>
It's clear from this that, if you know *b*, you can adjust *s*, *t*, and *a* to give an equivalently bright image. As an example, if
*t* = 1/100 second
*s* = 100
*a* = 1/8
then *s* ∗ *t* ∗ *a*^2 = 1/64. But if
*t* = 1/50 second
*s* = 100
*a* = 1/11
then *s* ∗ *t* ∗ *a*^2 = 2/121 which is very close to 1/64.
In other words an exposure of 1/50 at f/11 is the same as 1/100 at f/8.
Similarly I can adjust *s*:
*t* = 1/200 second
*s* = 200
*a* = 1/8
then the product *s* ∗ *t* ∗ *a*^2 = 1/64 again: an exposure of 1/200s at ISO 200 at f/8 is the same as 1/100s at ISO 100 at f/8.
If you know *b*, then there are an *infinite number* of combinations of *t*, *s*, and *a* which will work: which you pick depends on factors other than how bright the scene is.
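As a quick numerical check of the equivalences above, here is a minimal Python sketch (the function name and the helper itself are illustrative, not part of any camera API):

```python
# Image brightness is roughly proportional to b * s * t * a^2.
# Holding scene brightness b fixed, equivalent exposures keep the
# sensor-side product s * t * a^2 (approximately) constant.
def exposure_product(iso, shutter_s, f_number):
    """Return s * t * a^2, where the aperture a is 1 / f_number."""
    return iso * shutter_s * (1.0 / f_number) ** 2

base = exposure_product(100, 1 / 100, 8)          # ISO 100, 1/100 s, f/8  -> 1/64
stopped_down = exposure_product(100, 1 / 50, 11)  # ISO 100, 1/50 s,  f/11 -> 2/121
higher_iso = exposure_product(200, 1 / 200, 8)    # ISO 200, 1/200 s, f/8  -> 1/64

print(base, stopped_down, higher_iso)
```

The first and third products are identical, and the second differs by only a few percent — the rounding built into the f-stop scale.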
---
A note on *s*: I have written the above as if you could vary sensitivity, which really you can't: the sensor in the camera is as sensitive as it is. What you *can* do is then multiply the output of the sensor by some factor, which is, for these purposes, equivalent to controlling *s*. It's not really equivalent because of issues like noise and dynamic range, but it's good enough here.
(You can, of course, change the sensitivity of the sensor in a film camera, by changing film.) | >
> I believe some calculation method exists that has as an input the light level (lx) and as output the ISO and exposure compensation values?
>
>
>
There are plenty of existing methods to set exposure parameters (ISO, exposure duration, and aperture) based on the amount of light received by a brightness sensor (usually called a light meter) in a camera.
Cameras with "Auto" exposure modes set all three parameters based on a hierarchy of prioritization used to produce a "best possible" image under the current shooting conditions.
* Lower ISO settings tend to result in a less "noisy" image
* Shorter exposure durations result in less blur due to camera or subject motion
* Narrower apertures allow deeper depth of field so that more of the scene is acceptably sharp and considered in focus
Unfortunately, to get all three of these things at what are usually considered optimal values, we need very bright light. We often need light that is much brighter than what we actually have with which to take a picture. Each of the three exposure parameters is weighed against the other two: how long can I expose before blur becomes noticeable? How wide can the aperture be opened before depth of field becomes too shallow? How much can I amplify the signal from the sensor before noise begins to noticeably degrade the image? Other factors, such as focal length (which affects the angular size of the field of view, and thus the amount of blur caused by a specific angular displacement of the camera during exposure) may also be considered.
For most phone cameras, aperture is fixed and not variable. That leaves duration (shutter time or shutter "speed", often abbreviated as *Tv* for 'time value') and ISO. In a digital environment 'ISO' really means "amplification" of the signal output by the imaging sensor, which only has one actual "sensitivity."
There are more than a few combinations of different Tv and ISO settings that will result in the same image brightness. If the light meter gives a brightness level suitable for a certain combination of aperture, Tv, and ISO, we can also expose for twice as long and amplify half as much, or we can expose for half as long and amplify twice as much and get the same image brightness. Each of those images will look different. If there is motion in the scene or the camera is moving during exposure, the longer Tv will show more movement. The shorter Tv will show less movement. But a shorter exposure at the same aperture means less light is actually being collected by the sensor, so we must amplify the signal from the sensor more by raising the ISO setting. This also amplifies the noise in the image.
**But none of that above has to do with *Exposure Compensation*. It's strictly about *Exposure*, which assumes that the average brightness of the scene (or the average brightness of whatever portion of the scene is being metered) should be a medium value between pure black and pure white.**
*Exposure Compensation (EC)* is a way of telling the camera to make the resulting photograph darker or lighter, on average over the total field of view (or whatever portion of the FoV that is being metered), than exactly halfway between pure black and pure white. We do not (usually) want a photo of a black cat in a coal mine to be the same brightness as a photo of a white cat in a snowstorm. We want one to be much darker than "medium bright". We want the other to be much brighter than "medium bright." The way this practically works out is that when we set an *EC* value, it calibrates the meter to center exposure a specific number of steps darker or brighter than the midtones halfway between totally dark and totally bright.
Many modern cameras can do a sort of internal *EC* when certain metering modes, such as 'Evaluative' (Canon) or 'Matrix' (Nikon) are selected. The light meter is divided into multiple segments and the brightness of each segment is measured independently. The resulting "map" of varying brightnesses is then compared to a library of different maps stored in the camera's memory. Each prestored map has a set of instructions on how to compensate exposure. The instructions for the stored map that is closest to the measured scene are used to adjust EC.
For example, a landscape scene with bright sky and darker land beneath it is very easily recognized by even the most rudimentary multi segment light meter. If the upper two thirds of the scene is very bright and the lower one-third is darker, the camera may be programmed to assume exposure should be set to capture details in the sky (such as very bright clouds) at the expense of allowing the lower one third of the scene to be grossly underexposed. If, on the other hand, the lower two-thirds of the frame are darker than the very bright upper one-third, the camera is probably programmed to expose for the darker areas at the expense of letting the sky be pure white and details in the bright sky will be lost.
In the past decade or so, dedicated light meters have advanced to the point that they now do multi segment metering in RGB+IR (red, green, blue, and near-infrared). They will even adjust EC based upon the specific colors in a scene compared to ever expanding libraries. Of course the most expensive camera models featured such RGB meters first, but now even many humble entry level cameras have RGB light meters. In the case of mirrorless cameras, metering is done directly off the RGB imaging sensor.
Many phones also use library based methods to compare with what the RGB image sensor in the camera is "seeing" and then take appropriate measures with regard to exposure. In darker light, the phone may even take multiple exposures and combine the results using auto alignment routines to increase the image quality of the final image. |
801,627 | In a Windows Explorer window where you browse files in Windows 7, I would like to add a new context menu that will allow me to open a file on my local Dev Server.
So it would have to open a browser like Google Chrome, and the URL would have to be the file path, slightly modified by removing part of it and prepending my localhost URL.
For example if the file I am right clicking on, the path for that file might be...
`E:\Server\htdocs\labs\php\testProject\test.php`
I would need a button to click in the context menu `Open in Browser` and it would open my Web Browser with a URL like this...
`http://localhost/labs/php/testProject/test.php`
I would love to be able to do this, any ideas or help would greatly be appreciated!
To go one step further, I would like to be able to make the context menu item only show up for files that are under the folder `E:\Server\htdocs`, but this is far less important.
 | 2014/08/22 | [
"https://superuser.com/questions/801627",
"https://superuser.com",
"https://superuser.com/users/3700/"
] | Yes it is valid to have two nics on the same network. In the case of Windows, it has an algorithm to determine which interface is "best" when deciding how to send packets out to the network. Most likely, your wired connection will get precedence.
DHCP servers work on a broadcast basis. At startup, the PC will issue a DHCP broadcast request asking for an IP. The DHCP server will give an IP address in its configured address range that matches the interface the IP address came in on. A DHCP relay can cause it to choose a different range, by passing its own address as part of the request, and sending it unicast. Then the DHCP server will choose an address from the range matching the relay address.
In order to give your two nics different address ranges with DHCP, they will need to be on different networks separated at layer two - so separate VLANs or physically separate. Your DHCP server will either need to have a presence in both networks, or another device in between can act as a DHCP relay. | More than one IP address in the same subnet on the same host (whether bound to the same adapter or to different adapters) are not only valid, but are routine in many applications (e.g. web servers).
It is, though, quite unusual to configure more than one with DHCP - unusual being the key word, not invalid. |
68,629,633 | I have this vector with these numbers `{0, 1, 0, 3, 2, 3}` and I'm trying to use this approach to have the minimum number and its index:
```
int min_num = *min_element(nums.begin(), nums.end());
int min_num_idx = min_element(nums.begin(), nums.end()) - nums.begin();
```
However, this returns the first smallest number it found so the `0` in index `0`. Is there a way to make it return the last smallest number instead? (the `0` in index `2` instead) | 2021/08/03 | [
"https://Stackoverflow.com/questions/68629633",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16256986/"
] | >
> However, this returns the first smallest number it found so the 0 in index 0. Is there a way to make it return the last smallest number instead? (the 0 in index 2 instead)
>
>
>
You can try [`std::vector<int>::reverse_iterator`](https://en.cppreference.com/w/cpp/iterator/reverse_iterator):
```
#include <vector>
#include <iostream>
#include <iterator>
#include <algorithm>
int main() {
std::vector<int> n { 0, 1, 0, 3, 2, 3 };
std::vector<int>::reverse_iterator found_it = std::min_element(n.rbegin(), n.rend());
if (found_it != n.rend()) {
std::cout << "Minimum element: " << *found_it << std::endl;
std::cout << "Minimum element index: " << std::distance(n.begin(), std::next(found_it).base()) << std::endl;
}
}
```
>
> Output:
>
>
> Minimum element: 0
>
>
> Minimum element index: 2
>
>
> | ```cpp
int min_num_idx = nums.size() - 1 - (min_element(nums.rbegin(), nums.rend()) - nums.rbegin());
``` |
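The same reverse-scan idea — search from the end so the last occurrence of the minimum wins — can be sketched in Python for comparison (an illustrative aside, not part of either answer):

```python
nums = [0, 1, 0, 3, 2, 3]

# Find the minimum, then locate it in the reversed list and map the
# reversed index back to a forward index: the last occurrence wins.
min_num = min(nums)
min_num_idx = len(nums) - 1 - nums[::-1].index(min_num)

print(min_num, min_num_idx)  # 0 2
```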
36,196 | Have any linguists studied/described a language that was totally foreign to them? ie the linguist has totally no idea of what the utterances and writing of a language mean.
How did they do it - infer the meaning and grammar of utterances and writing - if they did not understand a single word at the start? What methodologies did they employ? I can think of a very likely scenario where some writing or sound record has been uncovered by archaeology, and it's totally new ground. How does the linguist even start to figure out what the writing or sounds mean? (And also, is this the domain of linguistic anthropology?)
(Please pardon my tags. I don't have an idea which domain or sub-domain this question best fits. I can only guess.) | 2020/05/14 | [
"https://linguistics.stackexchange.com/questions/36196",
"https://linguistics.stackexchange.com",
"https://linguistics.stackexchange.com/users/23964/"
] | There are two major variables in doing this: can you interact with a speaker, and do you have some common ground. If the answer to both questions is no, then you pretty much cannot figure the language out.
You mentioned archaeologists: numerous texts of dead languages have been uncovered and deciphered. But you need some basis for figuring out what the text is – it would help massively if you have a text written in an known language that translates the mystery text. Or if you have independent factual knowledge about the culture (names of kings and the like), you can make intelligent guesses based on recurring text. If you only have scratch marks, you won't be able to figure out the meaning.
If you have an actual speaker, you can ask various questions that lead you to understand the mystery language. This is reasonably easy if you and the speaker both speak some other language (English, Potawatomi...). By "speak" I don't mean "be fluent in", I mean "can make reasonable use of".
The harder way is if you have no common language, then you have to rely on the person's cooperation. You might present objects like "knife; rock; grass; dog" and infer the words for those things when they respond to the presented stimulus. There is always the chance that they will say something different like "Are you threatening me?", "I charge $15/hr, I don't work for rocks", "Do I look like a cow?" or "This interview is really going to the dogs". If the speaker does not understand that you are trying to learn about the language, then there is not much you can do until you have at least that common ground. | The film "The grammar of happiness" has a scene where Daniel Everett demonstrates how he learned the first words of Piraha without having any common language to the Piraha people.
He starts by presenting some things in order to learn the nouns depicting those things. Then he goes on with actions on the things and learns some verbs.
It all starts with conversing with the native speakers of the unknown language. |
36,196 | Have any linguists studied/described a language that was totally foreign to them? ie the linguist has totally no idea of what the utterances and writing of a language mean.
How did they do it - infer the meaning and grammar of utterances and writing - if they did not understand a single word at the start? What methodologies did they employ? I can think of a very likely scenario where some writing or sound record has been uncovered by archaeology, and it's totally new ground. How does the linguist even start to figure out what the writing or sounds mean? (And also, is this the domain of linguistic anthropology?)
(Please pardon my tags. I don't have an idea which domain or sub-domain this question best fits. I can only guess.) | 2020/05/14 | [
"https://linguistics.stackexchange.com/questions/36196",
"https://linguistics.stackexchange.com",
"https://linguistics.stackexchange.com/users/23964/"
] | For field linguists, this isn't unheard of, but in most cases there is a contact language or lingua franca in the region which allows more extensive communication. The University of Toronto [maintains a bibliography](http://projects.chass.utoronto.ca/lingfieldwork/) of fieldwork resources which you may find interesting. Most of my knowledge of field methods in early documentation work comes from Robbins Burling's book [*Learning a field language*](https://www.worldcat.org/title/learning-a-field-language/oclc/1086361926) and my discussions with [Kate Lindsey](https://ling.bu.edu/people/lindsey), a field worker who documented Ende. She describes a bit of her initial work in [this news article](https://news.stanford.edu/2018/08/30/stanford-phd-student-documents-indigenous-language-papua-new-guinea/).
In field situations, linguists will often use elicitation sessions to gather data. In these settings they will often try to elicit basic vocabulary to develop a preliminary lexicon from which they can make other generalizations. These elicitation session can use a number of methodologies. Linguists can use images or flashcards to learn what words associate with what objects. Linguists can use actual objects in a similar manner.
More complex grammatical patterns can be elicited through methods like [map tasks](http://groups.inf.ed.ac.uk/maptask/maptask-description.html). In these tasks, speakers are paired up and both are given copies of a map. The instruction giver has a map with a route they need to get the instruction follower to draw on their own blank map. Because the route is known to the linguist, the correspondence between phrases and meanings can be deduced. If the word "gavagai" occurs every time the speaker wants the hearer to draw a line to the left, the linguist can make an informed guess that "gavagai" means left. As the number of examples grows, more complex grammatical structures can be uncovered through comparison. If you have examples such as "he sits" "he sat" and "she sits", a linguist can determine what sound sequences correspond to what meanings by comparing what meanings share sound sequences across examples. From these the linguist can develop hypotheses about new sentences they have not seen before.
As the linguist builds up understanding from these tasks, they can begin to ask questions of their collaborators and do quasi-experiments with their grammar. They may quickly learn how to ask questions, and so it is possible the linguist can ask their collaborator "Is [...] a good sentence?" As the linguist learns to speak the language, they will also try to use it. Through using the language, they get positive and negative feedback about whether the grammar they deduced is correct. They may also get feedback from their collaborators on mispronunciations, or better ways to phrase things.
Once you get past the initial hurdles of establishing basic communication and collecting a basic corpus, further work becomes more similar to any other field situation. More complex patterns can be gleaned through analysis of the corpus, and those methodologies are similar to solving a phonology or morphology problem. More complicated tasks can be used to gather new data, and hypotheses can be tested through acceptability judgments. | The film "The grammar of happiness" has a scene where Daniel Everett demonstrates how he learned the first words of Piraha without having any common language to the Piraha people.
He starts by presenting some things in order to learn the nouns depicting those things. Then he goes on with actions on the things and learns some verbs.
It all starts with conversing with the native speakers of the unknown language. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | I have solved the issue. I renamed the apk file to zip, explored the drawable folder, and found there were some images which I had already deleted from the project but which were still showing in the apk. After deletion of those files the apk uploaded successfully.
I do not know why the deleted images were still in the apk's drawable folder. | Do a clean build under `Build` in Android Studio; it fixed the issue for me. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | I faced that issue as well. I tried different builds, even old ones, previously uploaded without any issues. Rebuilding, cleaning and still got the same error. Nothing really helped.
I think it's a Google Play problem, because it just started working after some time (uploading exactly the same apk which was earlier rejected).
So take a seat and wait.
Nice try Google. | I have solved the issue. I renamed the apk file to zip, explored the drawable folder, and found there were some images which I had already deleted from the project but which were still showing in the apk. After deletion of those files the apk uploaded successfully.
I do not know why the deleted images were still in the apk's drawable folder. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | I also ran into this problem. Our app was originally using an XML for the icon in the manifest:
```
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:drawable="@drawable/icon_default" />
</selector>
```
This has been working for years. But when I tried to upload my new build this week, I was getting the "icon invalid" error. I tried to make sure I had an icon png in all of the res folders for all resolutions, but that didn't fix it. Finally, I tried removing the XML and, in the manifest, just pointed the android:icon directly to the PNG. That seemed to fix the problem. | Do a clean build under `Build` in Android Studio; it fixed the issue for me. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | I faced that issue as well. I tried different builds, even old ones, previously uploaded without any issues. Rebuilding, cleaning and still got the same error. Nothing really helped.
I think it's a Google Play problem, because it just started working after some time (uploading exactly the same apk which was earlier rejected).
So take a seat and wait.
Nice try Google. | I also ran into this problem. Our app was originally using an XML for the icon in the manifest:
```
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:drawable="@drawable/icon_default" />
</selector>
```
This has been working for years. But when I tried to upload my new build this week, I was getting the "icon invalid" error. I tried to make sure I had an icon png in all of the res folders for all resolutions, but that didn't fix it. Finally, I tried removing the XML and, in the manifest, just pointed the android:icon directly to the PNG. That seemed to fix the problem. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | I faced that issue as well. I tried different builds, even old ones, previously uploaded without any issues. Rebuilding, cleaning and still got the same error. Nothing really helped.
I think it's a Google Play problem, because it just started working after some time (uploading exactly the same apk which was earlier rejected).
So take a seat and wait.
Nice try Google. | Do a clean build under `Build` in Android Studio; it fixed the issue for me. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | Created a bug report for this: <https://code.google.com/p/android/issues/detail?id=229018>
Tried very hard to not add any insults as to the quality of the error message and the coder who's responsible. | Do a clean build under `Build` in Android Studio; it fixed the issue for me. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | What worked for me today November 2016:
1. Created the application icon under "mipmap" using "new" > "image asset" dialog.
2. Then I removed the old icon from the drawable folder.
3. Updated the manifest to point to the newly created icon, since the name and folder have changed. | Do a clean build under `Build` in Android Studio; it fixed the issue for me. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | I faced that issue as well. I tried different builds, even old ones, previously uploaded without any issues. Rebuilding, cleaning and still got the same error. Nothing really helped.
I think it's a Google Play problem, because it just started working after some time (uploading exactly the same apk which was earlier rejected).
So take a seat and wait.
Nice try Google. | Created a bug report for this: <https://code.google.com/p/android/issues/detail?id=229018>
Tried very hard to not add any insults as to the quality of the error message and the coder who's responsible. |
18,097,504 | Please, can anyone tell me how to remove this error?
Thanks | 2013/08/07 | [
"https://Stackoverflow.com/questions/18097504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921653/"
] | I faced that issue as well. I tried different builds, even old ones, previously uploaded without any issues. Rebuilding, cleaning and still got the same error. Nothing really helped.
I think it's a Google Play problem, because it just started working after some time (uploading exactly the same apk which was earlier rejected).
So take a seat and wait.
Nice try Google. | What worked for me today November 2016:
1. Created the application icon under "mipmap" using "new" > "image asset" dialog.
2. Then I removed the old icon from the drawable folder.
3. Updated the manifest to point to the newly created icon, since the name and folder have changed. |
47,055,098 | Input JSON
```
{
"digital-profiles": [{
"Id": "INTID1",
"status": "ACTIVE",
"cId": "12"
},
{
"dId": "INTID2",
"status": "barred",
"cId": "13"
},
{
"Id": "INTID3",
"status": "ACTIVE",
"cId": "14"
}
]
}
```
Output:
```
{
"Results": {
"NewId": "INTID1:ACTIVE,INTID2:barred,INTID3:ACTIVE"
}
}
```
I am trying to achieve the above mentioned output JSON using the input which is mentioned above. How to achieve this using dataweave transformation. Any help would be appreciated. | 2017/11/01 | [
"https://Stackoverflow.com/questions/47055098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6114085/"
] | Use [map](https://docs.mulesoft.com/mule-user-guide/v/3.8/dataweave-operators#map) and [joinBy](https://docs.mulesoft.com/mule-user-guide/v/3.8/dataweave-operators#join-by) to get desired output. This works for me
```
%dw 1.0
%output application/json
---
Results : {
NewId : payload.digital-profiles map ($.Id ++ ':' ++ $.status) joinBy ','
}
```
Hope this helps. | This is one way you could do it (code not tested, but I am working on something very similar at the moment):
```
function ParseJSON(MyObject)
{
    // parse the incoming string into a JSON object.
    var MyObjectParsed = JSON.parse(MyObject);
    // the hyphenated key requires bracket notation.
    var profiles = MyObjectParsed["digital-profiles"];
    var parts = [];
    // for each profile, collect "Id:status".
    for (var i = 0; i < profiles.length; i++)
    {
        parts.push(profiles[i].Id + ':' + profiles[i].status);
    }
    // join with commas so there is no trailing separator.
    return '{"Results": {"NewId": "' + parts.join(',') + '"}}';
}
``` |
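For comparison, the same map-and-join aggregation sketched in Python (illustrative only; note that the second sample record uses the key `dId` rather than `Id`, which looks like a typo in the input, so the sketch accepts either):

```python
payload = {
    "digital-profiles": [
        {"Id": "INTID1", "status": "ACTIVE", "cId": "12"},
        {"dId": "INTID2", "status": "barred", "cId": "13"},  # note: "dId" in the sample
        {"Id": "INTID3", "status": "ACTIVE", "cId": "14"},
    ]
}

# map + joinBy equivalent: format each profile as "id:status", then comma-join.
pairs = [
    f"{p.get('Id') or p.get('dId')}:{p['status']}"
    for p in payload["digital-profiles"]
]
result = {"Results": {"NewId": ",".join(pairs)}}

print(result)  # {'Results': {'NewId': 'INTID1:ACTIVE,INTID2:barred,INTID3:ACTIVE'}}
```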
57,075,443 | Consider the table:
```
CREATE TABLE event(
event_id UUID PRIMARY KEY,
user_id UUID NOT NULL,
trigger_id UUID NOT NULL,
name VARCHAR (255) NOT NULL,
type VARCHAR (50) NOT NULL,
trigger_name VARCHAR (255) NOT NULL,
status smallint,
date_created TIMESTAMP NOT NULL DEFAULT NOW()
);
```
I want to
1) order by "type" ASC first, then by "date_created" DESC
This part is easily done like this
```
SELECT *
FROM event
WHERE user_id = 'fd80059a-3a16-40fe-9f6b-ad2812875d92'
ORDER BY type ASC, date_created DESC
```
2) group by "trigger\_id", "type" and "name" with a count for each group. Yes I want to make a group if "trigger\_id", "type" and "name" are the same, and show the most recent one with a count of all events in the group (basically how many times the event has occured, because if those 3 are the same, the event can considered to be related).
Here is the challenging part. Ideally something like this would work:
```
SELECT * --, count(since the count/grouping is based on 3 columns, how??)
FROM event
WHERE account_id = 'fd80059a-3a16-40fe-9f6b-ad2812875d92'
ORDER BY type ASC, date_created DESC
GROUP BY trigger_id, type, name
```
Would give me only the first record in each group (since they're already ordered by date), but with ALL its columns (and not just the columns in the GROUP BY clause) + a group count column at the end.
I'm solving this now with option 1, and then using the following javascript code in my node API, but if you understand the snippet, it's doing exactly what I need to do in postgres:
```
[...arrayOfEventsFromDB.reduce((r, o) => {
const key = `${o.trigger_id}-${o.type}-${o.name}`;
const item = r.get(key) || Object.assign({}, o, {
count: 0,
});
item.count++;
return r.set(key, item);
}, new Map).values()];
```
But ideally, if postgres is a good fit for this kind of aggregation I'd like to to this within the SQL query.
Edit
Since I cannot paste code in comments to answers.
As requested here is a running fiddle with the table creation, data and the SELECT query I've come up with, by combining the 2 below answers. Seems a bit inefficient with all those subselects but works.
<https://www.db-fiddle.com/f/mNNzwiDbx2iUdgd2vFTRuJ/0>
```
SELECT *
FROM (
SELECT DISTINCT ON (trigger_id, type, name)
*
FROM (
SELECT *,
row_number () over(PARTITION BY trigger_id, type, name order by date_created DESC )
FROM (
SELECT
*,
COUNT(*) OVER (PARTITION BY trigger_id, type, name)
FROM event
WHERE user_id = 1
ORDER BY type ASC, date_created DESC
) s
) t
) u
ORDER BY type ASC, date_created DESC
``` | 2019/07/17 | [
"https://Stackoverflow.com/questions/57075443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3973406/"
] | [Window functions](https://www.postgresql.org/docs/current/tutorial-window.html) to your rescue (Edit: And `DISTINCT ON` as well):
```
SELECT DISTINCT ON (type, date_created)
*
FROM (
SELECT
*,
COUNT(*) OVER (PARTITION BY trigger_id, type, name)
FROM event
WHERE account_id = 'fd80059a-3a16-40fe-9f6b-ad2812875d92'
) s
ORDER BY type ASC, date_created DESC
```
---
Edit: After chatting this solution fits best:
```
SELECT *
FROM (
SELECT DISTINCT ON (trigger_id, type, name)
*
FROM (
SELECT *
FROM (
SELECT
*,
COUNT(*) OVER (PARTITION BY trigger_id, type, name)
FROM event
WHERE user_id = 1
) s
) t
ORDER BY trigger_id, type, name, date_created DESC
) u
ORDER BY type ASC, date_created DESC
``` | You can check below query
```
SELECT *
FROM (
SELECT
a.*,
COUNT(*) OVER (PARTITION BY trigger_id, type, name) cnt , row_number () over(PARTITION BY trigger_id, type, name order by date_created DESC ) rn
FROM event a
WHERE account_id = 'fd80059a-3a16-40fe-9f6b-ad2812875d92'
) s
where rn < = 4
ORDER BY type ASC, date_created DESC
``` |
47,536,813 | I am writing a Korn shell script. I have two arrays (say, `arr1` and `arr2`), both containing strings, and I need to check which elements from `arr1` are present (as whole strings or substrings) in `arr2`. The most intuitive solution is having nested for loops, and checking if each element from `arr1` can be found in `arr2` (through `grep`) like this:
```
for arr1Element in ${arr1[*]}; do
for arr2Element in ${arr2[*]}; do
# using grep to check if arr1Element is present in arr2Element
echo $arr2Element | grep $arr1Element
done
done
```
The issue is that `arr2` has around 3000 elements, so running a nested loop takes a long time. I am wondering if there is a better way to do this in Bash.
If I were doing this in Java, I could have calculated hashes for elements in one of the arrays, and then looked for those hashes in the other array, but I don't think Bash has any functionality for doing something like this (unless I was willing to write a hash calculating function in Bash).
Any suggestions? | 2017/11/28 | [
"https://Stackoverflow.com/questions/47536813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2604472/"
] | Since version 4.0 Bash has associative arrays:
```
$ declare -A elements
$ elements[hello]=world
$ echo ${elements[hello]}
world
```
You can use this in the same way you would a Java Map.
```
declare -A map
for el in "${arr1[@]}"; do
map[$el]="x"
done
for el in "${arr2[@]}"; do
if [ -n "${map[$el]}" ] ; then
echo "${el}"
fi
done
```
Dealing with substrings is an altogether more weighty problem, and would be a challenge in any language, short of the brute-force algorithm you're already using. You could build a binary-tree index of character sequences, but I wouldn't try *that* in Bash! | Here's a `bash/awk` idea:
```
# some sample arrays
$ arr1=( my first string "hello world")
$ arr2=( my last stringbean strings "well, hello world!")
# break array elements into separate lines
$ printf '%s\n' "${arr1[@]}"
my
first
string
hello world
$ printf '%s\n' "${arr2[@]}"
my
last
stringbean
strings
well, hello world!
# use the 'printf' command output as input to our awk command
$ awk '
NR==FNR { a[NR]=$0 ; next }
{ for (i in a)
if ($0 ~ a[i]) print "array1 string {"a[i]"} is a substring of array2 string {"$0"}" }
' <( printf '%s\n' "${arr1[@]}" ) \
<( printf '%s\n' "${arr2[@]}" )
array1 string {my} is a substring of array2 string {my}
array1 string {string} is a substring of array2 string {stringbean}
array1 string {string} is a substring of array2 string {strings}
array1 string {hello world} is a substring of array2 string {well, hello world!}
```
* `NR==FNR` : for file #1 only: store elements into awk array named 'a'
* `next` : process next line in file #1; at this point the rest of the awk script is ignored for file #1; then for each line in file #2 ...
* `for (i in a)` : for each index 'i' in array 'a' ...
* `if ($0 ~ a[i] )` : see if a[i] is a substring of the current line ($0) from file #2 and if so ...
* `print "array1...` : output info about the match
---
A test run using the following data:
```
arr1 == 3300 elements
arr2 == 500 elements
```
When all `arr2` elements have a substring/pattern match in `arr1` (ie, 500 matches), total time to run is ~27 seconds ... so the repetitive looping through the array takes a toll.
Obviously (?) need to reduce the volume of repetitive actions ...
* for an exact string match the `comm` solution by Charles Duffy makes sense (it runs against the same 3300/500 test set in about 0.5 seconds)
* for a substring/pattern match I was able to get a `egrep` solution to run in about 5 seconds (see my other answer/post) |
47,536,813 | I am writing a Korn shell script. I have two arrays (say, `arr1` and `arr2`), both containing strings, and I need to check which elements from `arr1` are present (as whole strings or substrings) in `arr2`. The most intuitive solution is having nested for loops, and checking if each element from `arr1` can be found in `arr2` (through `grep`) like this:
```
for arr1Element in ${arr1[*]}; do
for arr2Element in ${arr2[*]}; do
# using grep to check if arr1Element is present in arr2Element
echo $arr2Element | grep $arr1Element
done
done
```
The issue is that `arr2` has around 3000 elements, so running a nested loop takes a long time. I am wondering if there is a better way to do this in Bash.
If I were doing this in Java, I could have calculated hashes for elements in one of the arrays, and then looked for those hashes in the other array, but I don't think Bash has any functionality for doing something like this (unless I was willing to write a hash calculating function in Bash).
Any suggestions? | 2017/11/28 | [
"https://Stackoverflow.com/questions/47536813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2604472/"
] | Since version 4.0 Bash has associative arrays:
```
$ declare -A elements
$ elements[hello]=world
$ echo ${elements[hello]}
world
```
You can use this in the same way you would a Java Map.
```
declare -A map
for el in "${arr1[@]}"; do
map[$el]="x"
done
for el in "${arr2[@]}"; do
if [ -n "${map[$el]}" ] ; then
echo "${el}"
fi
done
```
Dealing with substrings is an altogether more weighty problem, and would be a challenge in any language, short of the brute-force algorithm you're already using. You could build a binary-tree index of character sequences, but I wouldn't try *that* in Bash! | An `egrep` solution for substring/pattern matching ...
```
egrep -f <(printf '.*%s.*\n' "${arr1[@]}") \
<(printf '%s\n' "${arr2[@]}")
```
* `egrep -f` : take patterns to search from the file designated by the `-f`, which in this case is ...
* `<(printf '.*%s.*\n' "${arr1[@]}")` : convert `arr1` elements into 1 pattern per line, appending a regex wild card character (.\*) for prefix and suffix
* `<(printf '%s\n' "${arr2[@]}")` : convert `arr2` elements into 1 string per line
When run against a sample data set like:
```
arr1 == 3300 elements
arr2 == 500 elements
```
... with 500 matches, total run time is ~5 seconds; there's still a good bit of repetitive processing going on with `egrep` but not as bad as seen with my other answer (`bash/awk`) ... and of course not as fast as the `comm` solution, which eliminates the repetitive processing.
47,536,813 | I am writing a Korn shell script. I have two arrays (say, `arr1` and `arr2`), both containing strings, and I need to check which elements from `arr1` are present (as whole strings or substrings) in `arr2`. The most intuitive solution is having nested for loops, and checking if each element from `arr1` can be found in `arr2` (through `grep`) like this:
```
for arr1Element in ${arr1[*]}; do
for arr2Element in ${arr2[*]}; do
# using grep to check if arr1Element is present in arr2Element
echo $arr2Element | grep $arr1Element
done
done
```
The issue is that `arr2` has around 3000 elements, so running a nested loop takes a long time. I am wondering if there is a better way to do this in Bash.
If I were doing this in Java, I could have calculated hashes for elements in one of the arrays, and then looked for those hashes in the other array, but I don't think Bash has any functionality for doing something like this (unless I was willing to write a hash calculating function in Bash).
Any suggestions? | 2017/11/28 | [
"https://Stackoverflow.com/questions/47536813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2604472/"
] | Since you're OK with using `grep`, and since you want to match substrings as well as full strings, one approach is to write:
```
printf '%s\n' "${arr2[@]}" \
    | grep -o -F "$(printf '%s\n' "${arr1[@]}")"
```
and let `grep` optimize as it sees fit. | Here's a `bash/awk` idea:
```
# some sample arrays
$ arr1=( my first string "hello world")
$ arr2=( my last stringbean strings "well, hello world!")
# break array elements into separate lines
$ printf '%s\n' "${arr1[@]}"
my
first
string
hello world
$ printf '%s\n' "${arr2[@]}"
my
last
stringbean
strings
well, hello world!
# use the 'printf' command output as input to our awk command
$ awk '
NR==FNR { a[NR]=$0 ; next }
{ for (i in a)
if ($0 ~ a[i]) print "array1 string {"a[i]"} is a substring of array2 string {"$0"}" }
' <( printf '%s\n' "${arr1[@]}" ) \
<( printf '%s\n' "${arr2[@]}" )
array1 string {my} is a substring of array2 string {my}
array1 string {string} is a substring of array2 string {stringbean}
array1 string {string} is a substring of array2 string {strings}
array1 string {hello world} is a substring of array2 string {well, hello world!}
```
* `NR==FNR` : for file #1 only: store elements into awk array named 'a'
* `next` : process next line in file #1; at this point the rest of the awk script is ignored for file #1; then for each line in file #2 ...
* `for (i in a)` : for each index 'i' in array 'a' ...
* `if ($0 ~ a[i] )` : see if a[i] is a substring of the current line ($0) from file #2 and if so ...
* `print "array1...` : output info about the match
---
A test run using the following data:
```
arr1 == 3300 elements
arr2 == 500 elements
```
When all `arr2` elements have a substring/pattern match in `arr1` (ie, 500 matches), total time to run is ~27 seconds ... so the repetitive looping through the array takes a toll.
Obviously (?) need to reduce the volume of repetitive actions ...
* for an exact string match the `comm` solution by Charles Duffy makes sense (it runs against the same 3300/500 test set in about 0.5 seconds)
* for a substring/pattern match I was able to get a `egrep` solution to run in about 5 seconds (see my other answer/post) |
47,536,813 | I am writing a Korn shell script. I have two arrays (say, `arr1` and `arr2`), both containing strings, and I need to check which elements from `arr1` are present (as whole strings or substrings) in `arr2`. The most intuitive solution is having nested for loops, and checking if each element from `arr1` can be found in `arr2` (through `grep`) like this:
```
for arr1Element in ${arr1[*]}; do
for arr2Element in ${arr2[*]}; do
# using grep to check if arr1Element is present in arr2Element
echo $arr2Element | grep $arr1Element
done
done
```
The issue is that `arr2` has around 3000 elements, so running a nested loop takes a long time. I am wondering if there is a better way to do this in Bash.
If I were doing this in Java, I could have calculated hashes for elements in one of the arrays, and then looked for those hashes in the other array, but I don't think Bash has any functionality for doing something like this (unless I was willing to write a hash calculating function in Bash).
Any suggestions? | 2017/11/28 | [
"https://Stackoverflow.com/questions/47536813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2604472/"
] | Since you're OK with using `grep`, and since you want to match substrings as well as full strings, one approach is to write:
```
printf '%s\n' "${arr2[@]}" \
    | grep -o -F "$(printf '%s\n' "${arr1[@]}")"
```
and let `grep` optimize as it sees fit. | An `egrep` solution for substring/pattern matching ...
```
egrep -f <(printf '.*%s.*\n' "${arr1[@]}") \
<(printf '%s\n' "${arr2[@]}")
```
* `egrep -f` : take patterns to search from the file designated by the `-f`, which in this case is ...
* `<(printf '.*%s.*\n' "${arr1[@]}")` : convert `arr1` elements into 1 pattern per line, appending a regex wild card character (.\*) for prefix and suffix
* `<(printf '%s\n' "${arr2[@]}")` : convert `arr2` elements into 1 string per line
When run against a sample data set like:
```
arr1 == 3300 elements
arr2 == 500 elements
```
... with 500 matches, total run time is ~5 seconds; there's still a good bit of repetitive processing going on with `egrep` but not as bad as seen with my other answer (`bash/awk`) ... and of course not as fast as the `comm` solution, which eliminates the repetitive processing.
47,536,813 | I am writing a Korn shell script. I have two arrays (say, `arr1` and `arr2`), both containing strings, and I need to check which elements from `arr1` are present (as whole strings or substrings) in `arr2`. The most intuitive solution is having nested for loops, and checking if each element from `arr1` can be found in `arr2` (through `grep`) like this:
```
for arr1Element in ${arr1[*]}; do
for arr2Element in ${arr2[*]}; do
# using grep to check if arr1Element is present in arr2Element
echo $arr2Element | grep $arr1Element
done
done
```
The issue is that `arr2` has around 3000 elements, so running a nested loop takes a long time. I am wondering if there is a better way to do this in Bash.
If I were doing this in Java, I could have calculated hashes for elements in one of the arrays, and then looked for those hashes in the other array, but I don't think Bash has any functionality for doing something like this (unless I was willing to write a hash calculating function in Bash).
Any suggestions? | 2017/11/28 | [
"https://Stackoverflow.com/questions/47536813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2604472/"
] | [BashFAQ #36](http://mywiki.wooledge.org/BashFAQ/036) describes doing set arithmetic (unions, disjoint sets, etc) in bash with `comm`.
Assuming your values can't contain literal newlines, the following will emit a line per item in both arr1 and arr2:
```
comm -12 <(printf '%s\n' "${arr1[@]}" | sort -u) \
<(printf '%s\n' "${arr2[@]}" | sort -u)
```
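A quick illustration of the `comm` approach with hypothetical sample arrays (requires bash for the process substitutions):

```shell
arr1=(apple banana cherry)
arr2=(banana cherry date)
# emits one line per item present in both (sorted, de-duplicated) arrays
comm -12 <(printf '%s\n' "${arr1[@]}" | sort -u) \
         <(printf '%s\n' "${arr2[@]}" | sort -u)
# prints:
# banana
# cherry
```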
If your arrays are pre-sorted, you can remove the `sort`s (which will make this extremely memory- and time-efficient with large arrays, moreso than the `grep`-based approach). | Here's a `bash/awk` idea:
```
# some sample arrays
$ arr1=( my first string "hello world")
$ arr2=( my last stringbean strings "well, hello world!")
# break array elements into separate lines
$ printf '%s\n' "${arr1[@]}"
my
first
string
hello world
$ printf '%s\n' "${arr2[@]}"
my
last
stringbean
strings
well, hello world!
# use the 'printf' command output as input to our awk command
$ awk '
NR==FNR { a[NR]=$0 ; next }
{ for (i in a)
if ($0 ~ a[i]) print "array1 string {"a[i]"} is a substring of array2 string {"$0"}" }
' <( printf '%s\n' "${arr1[@]}" ) \
<( printf '%s\n' "${arr2[@]}" )
array1 string {my} is a substring of array2 string {my}
array1 string {string} is a substring of array2 string {stringbean}
array1 string {string} is a substring of array2 string {strings}
array1 string {hello world} is a substring of array2 string {well, hello world!}
```
* `NR==FNR` : for file #1 only: store elements into awk array named 'a'
* `next` : process next line in file #1; at this point the rest of the awk script is ignored for file #1; then for each line in file #2 ...
* `for (i in a)` : for each index 'i' in array 'a' ...
* `if ($0 ~ a[i] )` : see if a[i] is a substring of the current line ($0) from file #2 and if so ...
* `print "array1...` : output info about the match
---
A test run using the following data:
```
arr1 == 3300 elements
arr2 == 500 elements
```
When all `arr2` elements have a substring/pattern match in `arr1` (ie, 500 matches), total time to run is ~27 seconds ... so the repetitive looping through the array takes a toll.
Obviously (?) need to reduce the volume of repetitive actions ...
* for an exact string match the `comm` solution by Charles Duffy makes sense (it runs against the same 3300/500 test set in about 0.5 seconds)
* for a substring/pattern match I was able to get a `egrep` solution to run in about 5 seconds (see my other answer/post) |
47,536,813 | I am writing a Korn shell script. I have two arrays (say, `arr1` and `arr2`), both containing strings, and I need to check which elements from `arr1` are present (as whole strings or substrings) in `arr2`. The most intuitive solution is having nested for loops, and checking if each element from `arr1` can be found in `arr2` (through `grep`) like this:
```
for arr1Element in ${arr1[*]}; do
for arr2Element in ${arr2[*]}; do
# using grep to check if arr1Element is present in arr2Element
echo $arr2Element | grep $arr1Element
done
done
```
The issue is that `arr2` has around 3000 elements, so running a nested loop takes a long time. I am wondering if there is a better way to do this in Bash.
If I were doing this in Java, I could have calculated hashes for elements in one of the arrays, and then looked for those hashes in the other array, but I don't think Bash has any functionality for doing something like this (unless I was willing to write a hash calculating function in Bash).
Any suggestions? | 2017/11/28 | [
"https://Stackoverflow.com/questions/47536813",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2604472/"
] | [BashFAQ #36](http://mywiki.wooledge.org/BashFAQ/036) describes doing set arithmetic (unions, disjoint sets, etc) in bash with `comm`.
Assuming your values can't contain literal newlines, the following will emit a line per item in both arr1 and arr2:
```
comm -12 <(printf '%s\n' "${arr1[@]}" | sort -u) \
<(printf '%s\n' "${arr2[@]}" | sort -u)
```
If your arrays are pre-sorted, you can remove the `sort`s (which will make this extremely memory- and time-efficient with large arrays, moreso than the `grep`-based approach). | An `egrep` solution for substring/pattern matching ...
```
egrep -f <(printf '.*%s.*\n' "${arr1[@]}") \
<(printf '%s\n' "${arr2[@]}")
```
* `egrep -f` : take patterns to search from the file designated by the `-f`, which in this case is ...
* `<(printf '.*%s.*\n' "${arr1[@]}")` : convert `arr1` elements into 1 pattern per line, appending a regex wild card character (.\*) for prefix and suffix
* `<(printf '%s\n' "${arr2[@]}")` : convert `arr2` elements into 1 string per line
When run against a sample data set like:
```
arr1 == 3300 elements
arr2 == 500 elements
```
... with 500 matches, total run time is ~5 seconds; there's still a good bit of repetitive processing going on with `egrep` but not as bad as seen with my other answer (`bash/awk`) ... and of course not as fast as the `comm` solution, which eliminates the repetitive processing.
35,098,534 | So I'm following this tutorial: <https://devcenter.heroku.com/articles/paperclip-s3>
I managed to deploy it to Heroku and the app works in development. The app is running on Heroku, but when I try to upload a photo it gives me this page
>
> We're sorry, but something went wrong.
>
>
>
I tried to debug it by going to the console and typing `heroku logs`; it gives me the following error:
>
> heroku[router]: at=info method=POST path="/friends"
> host=s3friends.herokuapp.com
> request\_id=4173ed9e-ed69-492c-b1b9-d98227ca678c fwd="98.207.140.59"
> dyno=web.1 connect=9ms service=2668ms status=500 bytes=1754
>
>
>
Production.rb
```
config.paperclip_defaults = {
:storage => :s3,
:s3_credentials => {
:bucket => ENV['S3_BUCKET_NAME'],
:access_key_id => ENV['AWS_ACCESS_KEY_ID'],
:secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
}
}
```
gem file:
```
gem 'aws-sdk'
gem 'rails', '4.2.5'
group :production do
gem 'pg'
end
```
I've also made sure that the credentials in my Heroku config match the credentials on my S3.
Any help would be greatly appreciated. | 2016/01/30 | [
"https://Stackoverflow.com/questions/35098534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5213790/"
] | Add `region` and `s3_host_name`.
```
config.paperclip_defaults = {
storage: :s3,
s3_credentials: {
bucket: ENV["S3_BUCKET_NAME"],
access_key_id: ENV["AWS_ACCESS_KEY_ID"],
secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
s3_region: ENV["S3_REGION"],
s3_host_name: ENV["S3_HOST_NAME"]
  }
}
```
S3\_REGION="eu-central-1"
S3\_HOST\_NAME="s3.eu-central-1.amazonaws.com"
Regions and endpoints: <http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region>
Using `gem "aws-sdk", "~> 2.0"` | For those facing the same problem, I solved this by renaming `:bucket => ENV['S3_BUCKET_NAME'],` to `:bucket => ENV['AWS_BUCKET'],`
and downgrading `gem 'aws-sdk'` to gem `'aws-sdk', '~> 1.61.0'`
and it fixed my problem.
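As a hedged sketch, the hash after the rename described above might look like this (the env var names are assumptions based on the description, not verified against the poster's setup):

```ruby
# Hypothetical sketch of the adjusted Paperclip credentials hash after renaming
# S3_BUCKET_NAME to AWS_BUCKET; in a real app this would be assigned to
# config.paperclip_defaults in production.rb.
paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV['AWS_BUCKET'],
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  }
}
```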
26,668,509 | I want to save and retrieve some application settings in my Xamarin.Android project.
I know that in Android (java), I use the class `SharedPreferences` to store this information, but I do not know how to convert that to Xamarin C#.
When I type "SharedPreferences" into the Xamarin Studio IDE, there is no auto-completion, so I don't know what to use.
---
An initial search of the interwebs took me to a related question, but only contains Android java:
* [Use Shared Preferences in xamarin](https://stackoverflow.com/questions/25082199/use-shared-preferences-in-xamarin)
---
So to summarise:
* What is the **Xamarin Android C# equivalent** of Android Java's `SharedPreferences`? | 2014/10/31 | [
"https://Stackoverflow.com/questions/26668509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383414/"
] | The Xamarin.Android equivalent of `SharedPreferences` is an interface called `ISharedPreferences`.
Use it in the same way, and you will be able to easily port Android code across.
---
For example, to save a true/false `bool` using some `Context` you can do the following:
```
ISharedPreferences prefs = PreferenceManager.GetDefaultSharedPreferences (mContext);
ISharedPreferencesEditor editor = prefs.Edit ();
editor.PutBoolean ("key_for_my_bool_value", mBool);
// editor.Commit(); // applies changes synchronously on older APIs
editor.Apply(); // applies changes asynchronously on newer APIs
```
Access saved values using:
```
ISharedPreferences prefs = PreferenceManager.GetDefaultSharedPreferences (mContext);
mBool = prefs.GetBoolean ("key_for_my_bool_value", <default value>);
mInt = prefs.GetInt ("key_for_my_int_value", <default value>);
mString = prefs.GetString ("key_for_my_string_value", <default value>);
```
---
From this sample, you can see that once you know the correct C# interface to use, the rest is easy. There are many Android java examples on how to use `SharedPreferences` for more complex situations, and these can be ported very easily using `ISharedPreferences`.
For more information, read this thread:
* [Android Shared Preference on Xamarin forum](http://forums.xamarin.com/discussion/4758/android-shared-preference) | You can use this example for you SharedPreferences in Xamarin.Android
First, you need to use:
```
ISharedPreferences //Interface for accessing and modifying preference data
ISharedPreferencesEditor // Interface used for modifying values in a ISharedPreferences
```
You can create a similar class:
```
public class AppPreferences
{
private ISharedPreferences mSharedPrefs;
private ISharedPreferencesEditor mPrefsEditor;
private Context mContext;
private static String PREFERENCE_ACCESS_KEY = "PREFERENCE_ACCESS_KEY";
public AppPreferences (Context context)
{
this.mContext = context;
mSharedPrefs = PreferenceManager.GetDefaultSharedPreferences(mContext);
mPrefsEditor = mSharedPrefs.Edit ();
}
public void saveAccessKey(string key){
mPrefsEditor.PutString(PREFERENCE_ACCESS_KEY, key);
mPrefsEditor.Commit();
}
public string getAccessKey(){
return mSharedPrefs.GetString(PREFERENCE_ACCESS_KEY, "");
}
}
```
From the Activity:
```
Context mContext = Android.App.Application.Context;
AppPreferences ap = new AppPreferences (mContext);
```
If you want to save some value:
```
string key = "123123";
ap.saveAccessKey (key);
```
If you want to get the value:
```
string key = ap.getAccessKey();
``` |
26,668,509 | I want to save and retrieve some application settings in my Xamarin.Android project.
I know that in Android (java), I use the class `SharedPreferences` to store this information, but I do not know how to convert that to Xamarin C#.
When I type "SharedPreferences" into the Xamarin Studio IDE, there is no auto-completion, so I don't know what to use.
---
An initial search of the interwebs took me to a related question, but only contains Android java:
* [Use Shared Preferences in xamarin](https://stackoverflow.com/questions/25082199/use-shared-preferences-in-xamarin)
---
So to summarise:
* What is the **Xamarin Android C# equivalent** of Android Java's `SharedPreferences`? | 2014/10/31 | [
"https://Stackoverflow.com/questions/26668509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383414/"
] | The Xamarin.Android equivalent of `SharedPreferences` is an interface called `ISharedPreferences`.
Use it in the same way, and you will be able to easily port Android code across.
---
For example, to save a true/false `bool` using some `Context` you can do the following:
```
ISharedPreferences prefs = PreferenceManager.GetDefaultSharedPreferences (mContext);
ISharedPreferencesEditor editor = prefs.Edit ();
editor.PutBoolean ("key_for_my_bool_value", mBool);
// editor.Commit(); // applies changes synchronously on older APIs
editor.Apply(); // applies changes asynchronously on newer APIs
```
Access saved values using:
```
ISharedPreferences prefs = PreferenceManager.GetDefaultSharedPreferences (mContext);
mBool = prefs.GetBoolean ("key_for_my_bool_value", <default value>);
mInt = prefs.GetInt ("key_for_my_int_value", <default value>);
mString = prefs.GetString ("key_for_my_string_value", <default value>);
```
---
From this sample, you can see that once you know the correct C# interface to use, the rest is easy. There are many Android java examples on how to use `SharedPreferences` for more complex situations, and these can be ported very easily using `ISharedPreferences`.
For more information, read this thread:
* [Android Shared Preference on Xamarin forum](http://forums.xamarin.com/discussion/4758/android-shared-preference) | I had trouble using PreferenceManager as the example shows. I added this code at the top and now I'm good using it.
```
using Android.Preferences;
```
Plus to get the preferences you have to add the default value as a second parameter or it will not compile
```
mInt = prefs.GetInt ("key_for_my_int_value", defaultInt);
``` |
26,668,509 | I want to save and retrieve some application settings in my Xamarin.Android project.
I know that in Android (java), I use the class `SharedPreferences` to store this information, but I do not know how to convert that to Xamarin C#.
When I type "SharedPreferences" into the Xamarin Studio IDE, there is no auto-completion, so I don't know what to use.
---
An initial search of the interwebs took me to a related question, but only contains Android java:
* [Use Shared Preferences in xamarin](https://stackoverflow.com/questions/25082199/use-shared-preferences-in-xamarin)
---
So to summarise:
* What is the **Xamarin Android C# equivalent** of Android Java's `SharedPreferences`? | 2014/10/31 | [
"https://Stackoverflow.com/questions/26668509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383414/"
] | The Xamarin.Android equivalent of `SharedPreferences` is an interface called `ISharedPreferences`.
Use it in the same way, and you will be able to easily port Android code across.
---
For example, to save a true/false `bool` using some `Context` you can do the following:
```
ISharedPreferences prefs = PreferenceManager.GetDefaultSharedPreferences (mContext);
ISharedPreferencesEditor editor = prefs.Edit ();
editor.PutBoolean ("key_for_my_bool_value", mBool);
// editor.Commit(); // applies changes synchronously on older APIs
editor.Apply(); // applies changes asynchronously on newer APIs
```
Access saved values using:
```
ISharedPreferences prefs = PreferenceManager.GetDefaultSharedPreferences (mContext);
mBool = prefs.GetBoolean ("key_for_my_bool_value", <default value>);
mInt = prefs.GetInt ("key_for_my_int_value", <default value>);
mString = prefs.GetString ("key_for_my_string_value", <default value>);
```
---
From this sample, you can see that once you know the correct C# interface to use, the rest is easy. There are many Android java examples on how to use `SharedPreferences` for more complex situations, and these can be ported very easily using `ISharedPreferences`.
For more information, read this thread:
* [Android Shared Preference on Xamarin forum](http://forums.xamarin.com/discussion/4758/android-shared-preference) | Not sure if you still don't know, but make sure you **Dispose** of your variables if they are inside a function:
```
prefs.Dispose();
prefEditor.Dispose();
```
I had crashes/freezes in my app over time because I was not disposing of these objects once they were no longer needed. |
26,668,509 | I want to save and retrieve some application settings in my Xamarin.Android project.
I know that in Android (java), I use the class `SharedPreferences` to store this information, but I do not know how to convert that to Xamarin C#.
When I type "SharedPreferences" into the Xamarin Studio IDE, there is no auto-completion, so I don't know what to use.
---
An initial search of the interwebs took me to a related question, but only contains Android java:
* [Use Shared Preferences in xamarin](https://stackoverflow.com/questions/25082199/use-shared-preferences-in-xamarin)
---
So to summarise:
* What is the **Xamarin Android C# equivalent** of Android Java's `SharedPreferences`? | 2014/10/31 | [
"https://Stackoverflow.com/questions/26668509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383414/"
] | You can use this example for your SharedPreferences in Xamarin.Android.
First, you need to use:
```
ISharedPreferences //Interface for accessing and modifying preference data
ISharedPreferencesEditor // Interface used for modifying values in a ISharedPreferences
```
You can create a similar class:
```
public class AppPreferences
{
private ISharedPreferences mSharedPrefs;
private ISharedPreferencesEditor mPrefsEditor;
private Context mContext;
private static String PREFERENCE_ACCESS_KEY = "PREFERENCE_ACCESS_KEY";
public AppPreferences (Context context)
{
this.mContext = context;
mSharedPrefs = PreferenceManager.GetDefaultSharedPreferences(mContext);
mPrefsEditor = mSharedPrefs.Edit ();
}
public void saveAccessKey(string key){
mPrefsEditor.PutString(PREFERENCE_ACCESS_KEY, key);
mPrefsEditor.Commit();
}
public string getAccessKey(){
return mSharedPrefs.GetString(PREFERENCE_ACCESS_KEY, "");
}
}
```
From the Activity:
```
Context mContext = Android.App.Application.Context;
AppPreferences ap = new AppPreferences (mContext);
```
If you want to save some value:
```
string key = "123123";
ap.saveAccessKey (key);
```
If you want to get the value:
```
string key = ap.getAccessKey();
``` | I had trouble using PreferenceManager as the example shows. I added this code at the top and now I'm good using it.
```
using Android.Preferences;
```
Plus, to get the preferences, you have to pass the default value as a second parameter or it will not compile:
```
mInt = prefs.GetInt ("key_for_my_int_value", defaultInt);
``` |
26,668,509 | I want to save and retrieve some application settings in my Xamarin.Android project.
I know that in Android (java), I use the class `SharedPreferences` to store this information, but I do not know how to convert that to Xamarin C#.
When I type "SharedPreferences" into the Xamarin Studio IDE, there is no auto-completion, so I don't know what to use.
---
An initial search of the interwebs took me to a related question, but only contains Android java:
* [Use Shared Preferences in xamarin](https://stackoverflow.com/questions/25082199/use-shared-preferences-in-xamarin)
---
So to summarise:
* What is the **Xamarin Android C# equivalent** of Android Java's `SharedPreferences`? | 2014/10/31 | [
"https://Stackoverflow.com/questions/26668509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383414/"
] | You can use this example for your SharedPreferences in Xamarin.Android.
First, you need to use:
```
ISharedPreferences //Interface for accessing and modifying preference data
ISharedPreferencesEditor // Interface used for modifying values in a ISharedPreferences
```
You can create a similar class:
```
public class AppPreferences
{
private ISharedPreferences mSharedPrefs;
private ISharedPreferencesEditor mPrefsEditor;
private Context mContext;
private static String PREFERENCE_ACCESS_KEY = "PREFERENCE_ACCESS_KEY";
public AppPreferences (Context context)
{
this.mContext = context;
mSharedPrefs = PreferenceManager.GetDefaultSharedPreferences(mContext);
mPrefsEditor = mSharedPrefs.Edit ();
}
public void saveAccessKey(string key){
mPrefsEditor.PutString(PREFERENCE_ACCESS_KEY, key);
mPrefsEditor.Commit();
}
public string getAccessKey(){
return mSharedPrefs.GetString(PREFERENCE_ACCESS_KEY, "");
}
}
```
From the Activity:
```
Context mContext = Android.App.Application.Context;
AppPreferences ap = new AppPreferences (mContext);
```
If you want to save some value:
```
string key = "123123";
ap.saveAccessKey (key);
```
If you want to get the value:
```
string key = ap.getAccessKey();
``` | Not sure if you still need this, but make sure you **Dispose** your variables if they are inside a function:
```
prefs.Dispose();
prefEditor.Dispose();
```
My app would crash/freeze over time because I was not disposing of these objects when they were no longer needed. |
26,668,509 | I want to save and retrieve some application settings in my Xamarin.Android project.
I know that in Android (java), I use the class `SharedPreferences` to store this information, but I do not know how to convert that to Xamarin C#.
When I type "SharedPreferences" into the Xamarin Studio IDE, there is no auto-completion, so I don't know what to use.
---
An initial search of the interwebs took me to a related question, but only contains Android java:
* [Use Shared Preferences in xamarin](https://stackoverflow.com/questions/25082199/use-shared-preferences-in-xamarin)
---
So to summarise:
* What is the **Xamarin Android C# equivalent** of Android Java's `SharedPreferences`? | 2014/10/31 | [
"https://Stackoverflow.com/questions/26668509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383414/"
] | I had trouble using PreferenceManager as the example shows. I added this code at the top and now I'm good using it.
```
using Android.Preferences;
```
Plus, to get the preferences, you have to pass the default value as a second parameter or it will not compile:
```
mInt = prefs.GetInt ("key_for_my_int_value", defaultInt);
``` | Not sure if you still need this, but make sure you **Dispose** your variables if they are inside a function:
```
prefs.Dispose();
prefEditor.Dispose();
```
My app would crash/freeze over time because I was not disposing of these objects when they were no longer needed. |
13,212,366 | I've recently tried to access the debug keystore created by the Eclipse SDK in order to use the Google maps API within my application. Now I know the file exists and have its path. However to access it and receive an MD5 fingerprint I have to use a keytool command.
Now I've been told that this command has to be done in the computer's command prompt, as there is no keytool GUI. And this is where the problem is, as my command prompt doesn't recognise the command I'm giving it. Here's the command:
```
keytool -list-alias androiddebugkey-keystore(path_to debug_keystore).keystore-storepass android -keypass android
```
(The brackets should be angle brackets.) To which the command prompt replies:
>
> keytool is not recognised as an internal or external command, operable
> program or batch file.
>
>
>
Now I tried manually entering the path, which I believe would be (C: \Users\Adam.Android\debug.keystore).keystore. I've also tried variations of C: \Adam.Android\debug.keystore).keystore, Adam.Android\debug.keystore).keystore, .Android\debug.keystore).keystore, and debug.keystore).keystore, to which it then replies: "the system cannot find the path specified"
So either command prompt doesn't recognise the command "keytool" or I'm entering the path wrong (which is likely as I don't use command prompt commands often enough to know how to write paths successfully).
I also ran `C:\Program Files\Java\jre6\bin` through my command prompt and it replied:
>
> C: program\ is not reconized as internal or external command
>
>
>
Please help me out. | 2012/11/03 | [
"https://Stackoverflow.com/questions/13212366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1796974/"
] | You need to enclose commands/directories that contain spaces (or special characters) with double quotes **"**
So to run your command, you would use:
```
C:\> "C:\Program Files\Java\jre6\bin\keytool" (option parameters)
```
So your complete command should look something like this then:
```
C:\> "C:\Program Files (x86)\java\jre6\bin\keytool.exe" -list -alias androiddebugkey -keystore C:\Users\Shazar\.android\debug.keystore -storepass android -keypass android
```
I've verified it on my system. | I accept @chrkad's answer, but for generating the MD5 certificate you should always use the keytool.exe located in the JDK folder (jdk5 or jdk6, the Development Kit). jre6 only provides the runtime environment, and problems arise when you use an MD5 certificate generated by the JRE keytool to get a Maps API key. I faced this problem myself and am sharing it with you. |
29,211,173 | I created a subclass of list that writes to file every so often so that I can recover data even in the event of a catastrophic failure. However, I'm not sure I'm handling IO in the best way.
```
import cPickle
class IOlist(list):
def __init__(self, filename, sentinel):
list.__init__(self)
self.filename = filename
def save(self):
with open(self.filename, 'wb') as ouf:
cPickle.dump(list(self), ouf)
def load(self):
with open(self.filename, 'rb') as inf:
lst = cPickle.load(inf)
for item in lst:
self.append(item)
```
Adding every object back into the list one-by-one after I read in the file feels wrong. Is there a better way to do this? I was hoping you could access the internals of a list object and do something like
```
def load(self):
with open(self.filename, 'rb') as inf:
self.list_items = cPickle.load(inf)
```
Unfortunately `vars(list)` seems to show that list does not have a `__dict__` attribute and I don't know where else to look for where the items of a list are stored.
And I tried `self = cPickle.load(inf)` but that didn't work either. | 2015/03/23 | [
"https://Stackoverflow.com/questions/29211173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1858363/"
] | You should be able to load the pickle directly into the list using
```
def load(self):
with open(self.filename, 'rb') as inf:
self[:] = cPickle.load(inf)
```
One other observation, if something goes wrong during the save, you might obliterate the latest persisted list, leaving no method of recovery. You would be better off using a separate file (perhaps using tempfile or similar, or just manage 2 files), and then replacing the previous file once you are certain that the list has successfully been persisted. | You could use `extend()` to get your unpickled list loaded:
```
self.extend(cPickle.load(inf))
``` |
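The idea above (bulk-load the pickled items instead of appending one by one) round-trips cleanly. A minimal Python 3 sketch of it, using `pickle` instead of the Python 2 `cPickle` and dropping the unused `sentinel` parameter:

```python
import os
import pickle
import tempfile

class IOList(list):
    """A list that can persist itself to a file (sketch of the idea above)."""
    def __init__(self, filename):
        super().__init__()
        self.filename = filename

    def save(self):
        # Dump a plain list so the pickle does not depend on this class.
        with open(self.filename, "wb") as ouf:
            pickle.dump(list(self), ouf)

    def load(self):
        # Bulk-append the unpickled items instead of a manual loop.
        with open(self.filename, "rb") as inf:
            self.extend(pickle.load(inf))

path = os.path.join(tempfile.mkdtemp(), "state.pkl")
a = IOList(path)
a.extend([1, 2, 3])
a.save()

b = IOList(path)
b.load()
print(b)  # [1, 2, 3]
```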
29,211,173 | I created a subclass of list that writes to file every so often so that I can recover data even in the event of a catastrophic failure. However, I'm not sure I'm handling IO in the best way.
```
import cPickle
class IOlist(list):
def __init__(self, filename, sentinel):
list.__init__(self)
self.filename = filename
def save(self):
with open(self.filename, 'wb') as ouf:
cPickle.dump(list(self), ouf)
def load(self):
with open(self.filename, 'rb') as inf:
lst = cPickle.load(inf)
for item in lst:
self.append(item)
```
Adding every object back into the list one-by-one after I read in the file feels wrong. Is there a better way to do this? I was hoping you could access the internals of a list object and do something like
```
def load(self):
with open(self.filename, 'rb') as inf:
self.list_items = cPickle.load(inf)
```
Unfortunately `vars(list)` seems to show that list does not have a `__dict__` attribute and I don't know where else to look for where the items of a list are stored.
And I tried `self = cPickle.load(inf)` but that didn't work either. | 2015/03/23 | [
"https://Stackoverflow.com/questions/29211173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1858363/"
] | You should be able to load the pickle directly into the list using
```
def load(self):
with open(self.filename, 'rb') as inf:
self[:] = cPickle.load(inf)
```
One other observation, if something goes wrong during the save, you might obliterate the latest persisted list, leaving no method of recovery. You would be better off using a separate file (perhaps using tempfile or similar, or just manage 2 files), and then replacing the previous file once you are certain that the list has successfully been persisted. | You actually want to replace the entire contents of the current list with that of the loaded one. For that you can use slicing:
```
self[:] = lst
``` |
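The reason `self[:] = ...` works while `self = ...` does not is that slice assignment mutates the existing list object, whereas plain assignment merely rebinds a local name. A standalone demonstration:

```python
def replace_contents(lst, new_items):
    # Slice assignment mutates the object the caller holds...
    lst[:] = new_items

def rebind(lst, new_items):
    # ...whereas plain assignment only rebinds the local name.
    lst = new_items

a = [1, 2, 3]
replace_contents(a, ["x", "y"])
print(a)  # ['x', 'y']

b = [1, 2, 3]
rebind(b, ["x", "y"])
print(b)  # [1, 2, 3] -- unchanged
```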
61,679,902 | I have a zsh config on MacOS Catalina which works well. Now, I would like to get the same for Debian 10 Buster.
The issue occurs in the function used for PROMPT, which displays the current working directory's path with directory names in blue, separated by pink slashes.
On MacOS, I do it like this (in my .zshrc):
```
# Path with colorized forward slash
slash_color() {
dirs | awk -F "/" '{ blue="%{\033[38;5;75m%}"; \
pink="%{\033[38;5;206m%}"; \
for (i=1; i<NF; i++) \
printf blue $i pink "/"; \
printf blue $NF pink; \
}';
}
# Prompt final
PROMPT=$'%F{13}|%F{green}%n@%F{cyan}%m%F{13}|%f%T%F{13}|$(slash_color)%F{13}|%F{7} '
```
The result looks like for PROMPT :
[](https://i.stack.imgur.com/Z9H4Z.png)
Now, on Debian Buster, I have copied the ~/.zshrc from MacOS Catalina.
and **when PROMPT is displayed, the PATH of the current working directory is not displayed (empty)** and I get the following error:
```
awk: run time error: not enough arguments passed to printf("%{%}~%{%}/")
FILENAME="-" FNR=1 NR=1
```
I don't know why I have this error on Debian and not on MacOS. I suspect this is due to a difference in the behavior of my `slash_color()` function, but I don't understand its origin.
It seems that an argument is missing for `awk` in the Debian version, but I can't see which one. | 2020/05/08 | [
"https://Stackoverflow.com/questions/61679902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1773603/"
] | Do not do `printf something`. Always do `printf "%s", something`. awk errors because you passed the invalid `printf` format specifiers `%{` and `%}`, yet did not pass any arguments. Do:
```
printf "%s%s%s/", blue, $i, pink;
```
I think you can just:
```
{gsub("/", pink "/" blue)}1
``` | I would use a pre-command hook and simple parameter expansion instead of forking various external programs.
```
precmd () {
bar='%F{13}|'
prompt="$bar%F{green}%n@%F{cyan}%m$bar%f%T$bar%F{75}"
prompt+=${PWD:gs./.%F{206}/%F{75}}
prompt+="$bar%F{7} "
}
```
Add this to your `.zshrc` file, and `prompt` will be reset prior to displaying it, rather than embedding a shell function in the prompt itself.
`PWD` is the current working directory. The `gs.---.---` expansion modifier replaces each `/` with `%F{206}/%F{75}`, using `zsh`'s own color escape sequences rather than using raw ANSI escape sequences. |
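The slash-colorizing transformation in both answers (each `/` in the pink color, everything else in blue) is easy to sanity-check outside the shell. A quick Python sketch of the same replacement, using placeholder tags instead of zsh's `%F{...}` escapes:

```python
BLUE, PINK = "<blue>", "<pink>"  # stand-ins for %F{75} and %F{206}

def colorize_slashes(path):
    # Prefix with the directory color, then recolor every slash pink
    # and switch back to blue afterwards -- the same idea as the awk
    # gsub and the zsh :gs parameter-expansion variants above.
    return BLUE + path.replace("/", f"{PINK}/{BLUE}")

print(colorize_slashes("home/user/src"))
# <blue>home<pink>/<blue>user<pink>/<blue>src
```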
26,378,704 | We have string inputs of the format `hello_EP_-12.5_201414`, `welcome_EP_22.5_20345` etc
We have to extract the double value `-12.5`, `22.5` from the above strings. The format `*_EP_double_*` is fixed.
One way to extract is to split strings with '\_' and take the string next to 'EP' and convert it. The other way is to use regex, where we extract the decimal value part. Is there any other efficient way to do it? | 2014/10/15 | [
"https://Stackoverflow.com/questions/26378704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1440973/"
] | I always prefer this regex for extracting the double value from a string:
```
(-)?\d+\.\d+
```
it does not have any constraint like `*_EP_double_*`
<http://regex101.com/r/dN8sA5/16>
But in your case you want to extract the double that follows `_EP_`; in a test case like `12.4345_hello_EP_34.5_4444`, where you want the 34.5, you have to use
```
(?<=_EP_)(-)?\d+\.\d+
```
<http://regex101.com/r/dN8sA5/17> | Not that it's any better, but I fail to see what's wrong with something similar to:
```
var parts = s.Split(new[] {"_EP_"}, StringSplitOptions.None);
string dString = parts[1].Substring(0, parts[1].IndexOf('_'));
double d = double.Parse(dString);
``` |
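The answers above are C#, but the lookbehind pattern is portable. A quick check of the same regex in Python (which also supports fixed-width lookbehind):

```python
import re

# Same idea as the C# answer: a signed decimal immediately after "_EP_".
PATTERN = re.compile(r"(?<=_EP_)-?\d+\.\d+")

def extract_ep_value(s):
    m = PATTERN.search(s)
    return float(m.group()) if m else None

print(extract_ep_value("hello_EP_-12.5_201414"))  # -12.5
print(extract_ep_value("welcome_EP_22.5_20345"))  # 22.5
```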
26,378,704 | We have string inputs of the format `hello_EP_-12.5_201414`, `welcome_EP_22.5_20345` etc
We have to extract the double value `-12.5`, `22.5` from the above strings. The format `*_EP_double_*` is fixed.
One way to extract is to split strings with '\_' and take the string next to 'EP' and convert it. The other way is to use regex, where we extract the decimal value part. Is there any other efficient way to do it? | 2014/10/15 | [
"https://Stackoverflow.com/questions/26378704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1440973/"
] | Try :
```
string input = "hello_EP_-12.5_201414";
int start = input.IndexOf("_EP_") + "_EP_".Length;
int length = input.IndexOf('_', start + 1) - start;
double d;
double.TryParse(input.Substring(start, length), out d);
``` | I always prefer this regex for extracting the double value from a string:
```
(-)?\d+\.\d+
```
it does not have any constraint like `*_EP_double_*`
<http://regex101.com/r/dN8sA5/16>
But in your case you want to extract the double that follows `_EP_`; in a test case like `12.4345_hello_EP_34.5_4444`, where you want the 34.5, you have to use
```
(?<=_EP_)(-)?\d+\.\d+
```
<http://regex101.com/r/dN8sA5/17> |
26,378,704 | We have string inputs of the format `hello_EP_-12.5_201414`, `welcome_EP_22.5_20345` etc
We have to extract the double value `-12.5`, `22.5` from the above strings. The format `*_EP_double_*` is fixed.
One way to extract is to split strings with '\_' and take the string next to 'EP' and convert it. The other way is to use regex, where we extract the decimal value part. Is there any other efficient way to do it? | 2014/10/15 | [
"https://Stackoverflow.com/questions/26378704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1440973/"
] | Try :
```
string input = "hello_EP_-12.5_201414";
int start = input.IndexOf("_EP_") + "_EP_".Length;
int length = input.IndexOf('_', start + 1) - start;
double d;
double.TryParse(input.Substring(start, length), out d);
``` | Not that it's any better, but I fail to see what's wrong with something similar to:
```
var parts = s.Split(new[] {"_EP_"}, StringSplitOptions.None);
string dString = parts[1].Substring(0, parts[1].IndexOf('_'));
double d = double.Parse(dString);
``` |
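For comparison, the split-then-parse approach from the answers above, sketched in Python:

```python
def extract_ep_value(s):
    # Take the token immediately after "_EP_", up to the next underscore.
    after = s.split("_EP_", 1)[1]
    return float(after.split("_", 1)[0])

print(extract_ep_value("hello_EP_-12.5_201414"))  # -12.5
print(extract_ep_value("welcome_EP_22.5_20345"))  # 22.5
```

Like the C# versions, this assumes the `*_EP_double_*` shape is guaranteed; it raises on strings without an `_EP_` marker.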
59,156,333 | I want to pause the playing audio if another one is played. I have this
```
<ul class="music-list">
<li *ngFor="let song of Songs">
<div class="music-panel">
<div class="music-image">
<img src="{{ song.song_image }}" alt="music-img">
</div>
<div class="music-detail">
<span class="date-remind">Season 1 / 10 September 2018</span>
<h4>{{ song.song_title | titlecase }}</h4>
<div class="music-play">
<audio controls>
<source src="{{ song.audio_file }}" type="audio/mpeg">
</audio>
</div>
</div>
</div>
</li>
</ul>
```
Please help. Thanks | 2019/12/03 | [
"https://Stackoverflow.com/questions/59156333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11066320/"
] | You need to get a reference to all the audio elements in the view, then loop through them and run the pause method:
```
@ViewChildren('audio') audioElms :ElementRef[];
```
Now, in the template, bind a method to the audio `play` event:
```
<audio controls #audio (play)="onPlay(audio)">
<source src="{{ song.audio_file }}" type="audio/mpeg">
</audio>
```
The play handler will loop through the audio elements and pause the others:
```
onPlay(elm: HTMLAudioElement) {
this.audioElms.forEach(({nativeElement:e})=>{
if (e !== elm) {
e.pause();
}
})
}
```
check the complete demo [**demo**](https://stackblitz.com/edit/angular-xswwfh)
**Updated for better performance**
If we have many elements, looping through all of them every time just to pause one is inefficient.
Instead, we simply save a reference to the currently playing audio element and pause it when another one starts playing:
```
private currentPlayedElem: HTMLAudioElement = null;
onPlay(elm: HTMLAudioElement) {
if (this.currentPlayedElem && this.currentPlayedElem !== elm ) {
this.currentPlayedElem.pause();
}
this.currentPlayedElem = elm;
}
```
[**demo**](https://stackblitz.com/edit/angular-fjdymi) | Add a listener to the play event in the capturing phase and pause all audio files except the target one:
```
document.addEventListener('play', function(e){
var audios = document.getElementsByTagName('audio');
for(var i = 0, len = audios.length; i < len;i++){
if(audios[i] != e.target){
audios[i].pause();
}
}
}, true);
``` |
59,156,333 | I want to pause the playing audio if another one is played. I have this
```
<ul class="music-list">
<li *ngFor="let song of Songs">
<div class="music-panel">
<div class="music-image">
<img src="{{ song.song_image }}" alt="music-img">
</div>
<div class="music-detail">
<span class="date-remind">Season 1 / 10 September 2018</span>
<h4>{{ song.song_title | titlecase }}</h4>
<div class="music-play">
<audio controls>
<source src="{{ song.audio_file }}" type="audio/mpeg">
</audio>
</div>
</div>
</div>
</li>
</ul>
```
Please help. Thanks | 2019/12/03 | [
"https://Stackoverflow.com/questions/59156333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11066320/"
] | You could omit the `<audio>` element from your template and move the player logic to a service:
```
export class AudioPlayerService {
private globalPlayer: HTMLMediaElement = new Audio();
}
```
**UPDATE** Demo: <https://angular-eg8uik.stackblitz.io/>
You can keep track of the player's state using `addEventListener()`. Pass it to an Observable stream and use it in your component.
```
private playerState = new BehaviorSubject<any>({ isPlaying: false });
constructor() {
this.globalPlayer.addEventListener('play', () => {
this.playerState.next({ isPlaying: true, audioId: 'foo' });
});
this.globalPlayer.addEventListener('pause', () => {
this.playerState.next({ isPlaying: false });
});
}
getState(): Observable<any> {
return this.playerState.asObservable();
}
```
Pass `getState()` to your custom audio player component and update the buttons accordingly.
You can also add functions for loading, playing, pausing, stopping, seeking, `currentTime` values, etc.
This will scale well because you have only one `MediaElement` and two `EventListeners` no matter how many files you have in your view. You are also more flexible with your UI; the native `<audio>` element is pretty limited as far as design goes. | Add a listener to the play event in the capturing phase and pause all audio files except the target one:
```
document.addEventListener('play', function(e){
var audios = document.getElementsByTagName('audio');
for(var i = 0, len = audios.length; i < len;i++){
if(audios[i] != e.target){
audios[i].pause();
}
}
}, true);
``` |
59,156,333 | I want to pause the playing audio if another one is played. I have this
```
<ul class="music-list">
<li *ngFor="let song of Songs">
<div class="music-panel">
<div class="music-image">
<img src="{{ song.song_image }}" alt="music-img">
</div>
<div class="music-detail">
<span class="date-remind">Season 1 / 10 September 2018</span>
<h4>{{ song.song_title | titlecase }}</h4>
<div class="music-play">
<audio controls>
<source src="{{ song.audio_file }}" type="audio/mpeg">
</audio>
</div>
</div>
</div>
</li>
</ul>
```
Please help. Thanks | 2019/12/03 | [
"https://Stackoverflow.com/questions/59156333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11066320/"
] | You need to get a reference to all the audio elements in the view, then loop through them and run the pause method:
```
@ViewChildren('audio') audioElms :ElementRef[];
```
Now, in the template, bind a method to the audio `play` event:
```
<audio controls #audio (play)="onPlay(audio)">
<source src="{{ song.audio_file }}" type="audio/mpeg">
</audio>
```
The play handler will loop through the audio elements and pause the others:
```
onPlay(elm: HTMLAudioElement) {
this.audioElms.forEach(({nativeElement:e})=>{
if (e !== elm) {
e.pause();
}
})
}
```
check the complete demo [**demo**](https://stackblitz.com/edit/angular-xswwfh)
**Updated for better performance**
If we have many elements, looping through all of them every time just to pause one is inefficient.
Instead, we simply save a reference to the currently playing audio element and pause it when another one starts playing:
```
private currentPlayedElem: HTMLAudioElement = null;
onPlay(elm: HTMLAudioElement) {
if (this.currentPlayedElem && this.currentPlayedElem !== elm ) {
this.currentPlayedElem.pause();
}
this.currentPlayedElem = elm;
}
```
[**demo**](https://stackblitz.com/edit/angular-fjdymi) | You could omit the `<audio>` element from your template and move the player logic to a service:
```
export class AudioPlayerService {
private globalPlayer: HTMLMediaElement = new Audio();
}
```
**UPDATE** Demo: <https://angular-eg8uik.stackblitz.io/>
You can keep track of the player's state using `addEventListener()`. Pass it to an Observable stream and use it in your component.
```
private playerState = new BehaviorSubject<any>({ isPlaying: false });
constructor() {
this.globalPlayer.addEventListener('play', () => {
this.playerState.next({ isPlaying: true, audioId: 'foo' });
});
this.globalPlayer.addEventListener('pause', () => {
this.playerState.next({ isPlaying: false });
});
}
getState(): Observable<any> {
return this.playerState.asObservable();
}
```
Pass `getState()` to your custom audio player component and update the buttons accordingly.
You can also add functions for loading, playing, pausing, stopping, seeking, `currentTime` values, etc.
This will scale well because you have only one `MediaElement` and two `EventListeners` no matter how many files you have in your view. You are also more flexible with your UI; the native `<audio>` element is pretty limited as far as design goes. |
9,161,904 | I'm relatively new to scala and made some really simple programs succesfully.
However, now that I'm trying some real-world problem solving, things are getting a little bit harder...
I want to read some files into 'Configuration' objects, using various 'FileTypeReader' subtypes that can 'accept' certain files (one for each FileTypeReader subtype) and return an Option[Configuration] if it can extract a configuration from it.
I'm trying to avoid the imperative style and wrote, for example, something like this (using scala-io; scaladoc for Path here <http://jesseeichar.github.com/scala-io-doc/0.3.0/api/index.html#scalax.file.Path>):
```
(...)
trait FileTypeReader {
import scalax.file.Path
def accept(aPath : Path) : Option[Configuration]
}
var readers : List[FileTypeReader] = ...// list of concrete readers
var configurations = for (
nextPath <- Path(someFolder).children();
reader <- readers
) yield reader.accept(nextPath);
(...)
```
Of course, that does not work; for-comprehensions return a collection of the first generator's type (here, some IterablePathSet).
Since I've tried many variants and feel like I'm running in circles, I beg for your advice on how to solve my - trivial? - problem with elegance! :)
Many thanks in advance,
sni. | 2012/02/06 | [
"https://Stackoverflow.com/questions/9161904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921177/"
] | If I understand correctly, your problem is that you have a `Set[Path]` and want to yield a `List[Option[Configuration]]`. As written, `configurations` will be a `Set[Option[Configuration]]`. To change this to a `List`, use the `toList` method i.e.
```
val configurations = (for {
nextPath <- Path(someFolder).children
reader <- readers
} yield reader.accept(nextPath) ).toList
```
or, change the type of the generator itself:
```
val configurations = for {
nextPath <- Path(someFolder).children.toList
reader <- readers
} yield reader.accept(nextPath)
```
You probably actually want to get a `List[Configuration]`, which you can do elegantly since `Option` is a monad:
```
val configurations = for {
nextPath <- Path(someFolder).children.toList
reader <- readers
conf <- reader.accept(nextPath)
} yield conf
``` | Are you trying to find the *first* configuration that it can extract? If not, what happens if multiple configurations are returned?
In the first case, I'd just get the result of the for-comprehension and call `find` on it, which will return an `Option`. |
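For readers who don't know Scala, the pattern of yielding only the `Some` results corresponds roughly to skipping `None` values in a nested comprehension. A loose Python analogy (the `accept` stub and the file extensions are invented for illustration, not part of the original code):

```python
def accept(reader, path):
    # Stand-in for FileTypeReader.accept: returns a config or None.
    return f"{reader}:{path}" if path.endswith(reader) else None

readers = [".ini", ".yaml"]
paths = ["a.ini", "b.txt", "c.yaml"]

# Nested iteration with the "no result" cases filtered out, mirroring
# how the extra `conf <- reader.accept(nextPath)` generator flattens
# away the empty Options in the Scala version.
configurations = [
    conf
    for path in paths
    for reader in readers
    if (conf := accept(reader, path)) is not None
]
print(configurations)  # ['.ini:a.ini', '.yaml:c.yaml']
```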
39,971,831 | Hello team, I have a small issue and I would like to know the cause. Please see the code below:
`<input type="text" class="span6" id="item_title" value ="<?php echo set_value('$item_title'); ?>">`
The preceding code gives me a validation error in CodeIgniter; however, this code
`<?php echo form_input('item_title', $item_title); ?>`
It works fine with no validation error. The error indicates that the ***Item title field is required***; however, I do not get this error with the latter script. Any ideas? | 2016/10/11 | [
"https://Stackoverflow.com/questions/39971831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5811421/"
] | I found this code in [spring-cloud-sleuth](https://github.com/spring-cloud/spring-cloud-sleuth/tree/master/benchmarks/src/main/java/org/springframework/cloud/sleuth/benchmarks/jmh/benchmarks); it works for me:
```
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
public class DemoApplicationTests {
volatile DemoApplication app;
volatile ConfigurableApplicationContext context;
private VideoService videoService;
@Test
public void contextLoads() throws RunnerException {
Options opt = new OptionsBuilder()
.include(DemoApplicationTests.class.getSimpleName())
.forks(1)
.build();
new Runner(opt).run();
}
@Setup
public void setup() {
this.context = new SpringApplication(DemoApplication.class).run();
Object o = this.context.getBean(VideoService.class);
videoService = (VideoService)o;
}
@TearDown
public void tearDown(){
this.context.close();
}
@Benchmark
public String benchmark(){
return videoService.find("z");
}
}
``` | I would opt for the `getAutowireCapableBeanFactory().autowire()` solution you already sketched out.
There has to be some boilerplate code that loads the application context and triggers autowiring. If you prefer to specify your app config with annotations the setup method could look something like this:
```
AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
context.register(MyBenchmarkWithConfig.class);
context.refresh();
``` |
39,971,831 | Hello team, I have a small issue and I would like to know the cause. Please see the code below:
`<input type="text" class="span6" id="item_title" value ="<?php echo set_value('$item_title'); ?>">`
The preceding code gives me a validation error in CodeIgniter; however, this code
`<?php echo form_input('item_title', $item_title); ?>`
It works fine with no validation error. The error indicates that the ***Item title field is required***; however, I do not get this error with the latter script. Any ideas? | 2016/10/11 | [
"https://Stackoverflow.com/questions/39971831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5811421/"
] | ```
@State(Scope.Benchmark)
public static class SpringState {
AnnotationConfigApplicationContext context;
@Setup(Level.Trial)
public void setup() {
context = new AnnotationConfigApplicationContext();
context.register(CLASSNAME.class);
context.register(ANOTHER_CLASSNAME_TO_BE_LOADED.class);
context.refresh();
}
@TearDown(Level.Trial)
public void tearDown() {
context.close();
}
}
``` | I would opt for the `getAutowireCapableBeanFactory().autowire()` solution you already sketched out.
There has to be some boilerplate code that loads the application context and triggers autowiring. If you prefer to specify your app config with annotations the setup method could look something like this:
```
AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
context.register(MyBenchmarkWithConfig.class);
context.refresh();
``` |
39,971,831 | Hello team, I have a small issue and I would like to know the cause. Please see the code below:
`<input type="text" class="span6" id="item_title" value ="<?php echo set_value('$item_title'); ?>">`
The preceding code gives me a validation error in CodeIgniter; however, this code
`<?php echo form_input('item_title', $item_title); ?>`
It works fine with no validation error. The error indicates that the ***Item title field is required***; however, I do not get this error with the latter script. Any ideas? | 2016/10/11 | [
"https://Stackoverflow.com/questions/39971831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5811421/"
] | I found this code in [spring-cloud-sleuth](https://github.com/spring-cloud/spring-cloud-sleuth/tree/master/benchmarks/src/main/java/org/springframework/cloud/sleuth/benchmarks/jmh/benchmarks); it works for me:
```
@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
public class DemoApplicationTests {
volatile DemoApplication app;
volatile ConfigurableApplicationContext context;
private VideoService videoService;
@Test
public void contextLoads() throws RunnerException {
Options opt = new OptionsBuilder()
.include(DemoApplicationTests.class.getSimpleName())
.forks(1)
.build();
new Runner(opt).run();
}
@Setup
public void setup() {
this.context = new SpringApplication(DemoApplication.class).run();
Object o = this.context.getBean(VideoService.class);
videoService = (VideoService)o;
}
@TearDown
public void tearDown(){
this.context.close();
}
@Benchmark
public String benchmark(){
return videoService.find("z");
}
}
``` | ```
@State(Scope.Benchmark)
public static class SpringState {
AnnotationConfigApplicationContext context;
@Setup(Level.Trial)
public void setup() {
context = new AnnotationConfigApplicationContext();
context.register(CLASSNAME.class);
context.register(ANOTHER_CLASSNAME_TO_BE_LOADED.class);
context.refresh();
}
@TearDown(Level.Trial)
public void tearDown() {
context.close();
}
}
``` |
47,738,652 | I just installed a fresh new Android Studio on a fresh new Windows 7. I created a new empty project and Android Studio keeps failing at project syncing. I tried a few solutions people posted on SO, but none of them worked, so I decided to show you my project structure hoping someone can help.
So, this is my project's gradle file:
```
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
google()
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
google()
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
```
And module's gradle file:
```
apply plugin: 'com.android.application'
android {
compileSdkVersion 26
defaultConfig {
applicationId "com.example.kompjutor.myapplication"
minSdkVersion 15
targetSdkVersion 26
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support:appcompat-v7:26.1.0'
implementation 'com.android.support.constraint:constraint-layout:1.0.2'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.1'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'
}
```
And those are the errors I'm getting:
[](https://i.stack.imgur.com/cY10e.png)
Any ideas, please? | 2017/12/10 | [
"https://Stackoverflow.com/questions/47738652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8789149/"
] | The issue could be caused by the following reasons:
1. Gradle has not synced the libraries.
2. Android Studio's offline mode is activated.
See [this](https://stackoverflow.com/a/47114291/7702686) for more info | Go to File->Project Structure.
In the Dependencies tab, click the green plus icon and choose Library dependency, then add the library through that dialog. If the problem persists, update the Support Repository through the SDK Manager.
[](https://i.stack.imgur.com/wZLAi.png) |
34,424,846 | I already opened a [question on this topic](https://stackoverflow.com/questions/34362012/trouble-with-curve-fitting-lmfit-wont-produce-proper-fit-to-peak-data), but I wasn't sure if I should post it there, so I opened a new question here.
I have trouble again when fitting two or more peaks. The first problem occurs with a calculated example function.
```
xg = np.random.uniform(0,1000,500)
mu1 = 200
sigma1 = 20
I1 = -2
mu2 = 800
sigma2 = 20
I2 = -1
yg3 = 0.0001*xg
yg1 = (I1 / (sigma1 * np.sqrt(2 * np.pi))) * np.exp( - (xg - mu1)**2 / (2 * sigma1**2) )
yg2 = (I2 / (sigma2 * np.sqrt(2 * np.pi))) * np.exp( - (xg - mu2)**2 / (2 * sigma2**2) )
yg=yg1+yg2+yg3
plt.figure(0, figsize=(8,8))
plt.plot(xg, yg, 'r.')
```
I tried two different approaches, I found in the documentation, which are shown below (modified for my data), but both give me wrong fitting data and a messy chaos of graphs (I guess one line per fitting step).
1st attempt:
```
import numpy as np
from lmfit.models import PseudoVoigtModel, LinearModel, GaussianModel, LorentzianModel
import sys
import matplotlib.pyplot as plt
gauss1 = PseudoVoigtModel(prefix='g1_')
pars = gauss1.make_params()
pars['g1_center'].set(200)
pars['g1_sigma'].set(15, min=3)
pars['g1_amplitude'].set(-0.5)
pars['g1_fwhm'].set(20, vary=True)
#pars['g1_fraction'].set(0, vary=True)
gauss2 = PseudoVoigtModel(prefix='g2_')
pars.update(gauss2.make_params())
pars['g2_center'].set(800)
pars['g2_sigma'].set(15)
pars['g2_amplitude'].set(-0.4)
pars['g2_fwhm'].set(20, vary=True)
#pars['g2_fraction'].set(0, vary=True)
mod = gauss1 + gauss2 + LinearModel()
pars.add('intercept', value=0, vary=True)
pars.add('slope', value=0.0001, vary=True)
init = mod.eval(pars, x=xg)
out = mod.fit(yg, pars, x=xg)
print(out.fit_report(min_correl=0.5))
plt.figure(5, figsize=(8,8))
out.plot_fit()
```
When I include the 'fraction'-parameter, I often get
```
'NameError: name 'pv1_fraction' is not defined in expr='<_ast.Module object at 0x00000000165E03C8>'.
```
although it should be defined. I get this Error for real data with this approach, too.
2nd attempt:
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import lmfit
def gauss(x, sigma, mu, A):
return A*np.exp(-(x-mu)**2/(2*sigma**2))
def linear(x, m, n):
return m*x + n
peak1 = lmfit.model.Model(gauss, prefix='p1_')
peak2 = lmfit.model.Model(gauss, prefix='p2_')
lin = lmfit.model.Model(linear, prefix='l_')
model = peak1 + lin + peak2
params = model.make_params()
params['p1_mu'] = lmfit.Parameter(value=200, min=100, max=250)
params['p2_mu'] = lmfit.Parameter(value=800, min=100, max=1000)
params['p1_sigma'] = lmfit.Parameter(value=15, min=0.01)
params['p2_sigma'] = lmfit.Parameter(value=20, min=0.01)
params['p1_A'] = lmfit.Parameter(value=-2, min=-3)
params['p2_A'] = lmfit.Parameter(value=-2, min=-3)
params['l_m'] = lmfit.Parameter(value=0)
params['l_n'] = lmfit.Parameter(value=0)
out = model.fit(yg, params, x=xg)
print(out.fit_report())
plt.figure(8, figsize=(8,8))
out.plot_fit()
```
So the result looks like this, in both cases. It seems to plot all fitting attempts, but never solves it correctly. The best fitted parameters are in the range that I gave it.
[](https://i.stack.imgur.com/y0SSc.png)
[](https://i.stack.imgur.com/rs0pU.png)
Does anyone know this type of error, or have any solutions for it? And does anyone know how to avoid the `NameError` when calling a model function from `lmfit` with those approaches? | 2015/12/22 | [
"https://Stackoverflow.com/questions/34424846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5500160/"
] | I have a somewhat tolerable solution for you. Since I don't know how variable your data is, I cannot say that it will work in a general sense but should get you started. If your data is along 0-1000 and has two peaks or dips along a line as you showed, then it should work.
I used the scipy curve\_fit and put all of the components of the function together into one function. One can pass starting locations into the curve\_fit function. (you can probably do this with the lib you're using but I'm not familiar with it) There is a loop in loop where I vary the mu parameters to find the ones with the lowest squared error. If you are needing to fit your data many times or in some real-time scenario then this is not for you but if you just need to fit some data, launch this code and grab a coffee.
```
from scipy.optimize import curve_fit
import numpy as np
import matplotlib.pyplot as plt
import pylab
from matplotlib import cm as cm
import time
def my_function_big(x, m, n, #lin vars
sigma1, mu1, I1, #gaussian 1
sigma2, mu2, I2): #gaussian 2
y = m * x + n + (I1 / (sigma1 * np.sqrt(2 * np.pi))) * np.exp( - (x - mu1)**2 / (2 * sigma1**2) ) + (I2 / (sigma2 * np.sqrt(2 * np.pi))) * np.exp( - (x - mu2)**2 / (2 * sigma2**2) )
return y
#make some data
xs = np.random.uniform(0,1000,500)
mu1 = 200
sigma1 = 20
I1 = -2
mu2 = 800
sigma2 = 20
I2 = -1
yg3 = 0.0001 * xs
yg1 = (I1 / (sigma1 * np.sqrt(2 * np.pi))) * np.exp( - (xs - mu1)**2 / (2 * sigma1**2) )
yg2 = (I2 / (sigma2 * np.sqrt(2 * np.pi))) * np.exp( - (xs - mu2)**2 / (2 * sigma2**2) )
ys = yg1 + yg2 + yg3
xs = np.array(xs)
ys = np.array(ys)
#done making data
#start a double loop...very expensive but this is quick and dirty
#it would seem that the regular optimizer has trouble finding the minima so i
#found that having the near proper mu values helped it zero in much better
start = time.time()
serr = []
_x = []
_y = []
for x in np.linspace(0, 1000, 61):
for y in np.linspace(0, 1000, 61):
cfiti = curve_fit(my_function_big, xs, ys, p0=[0, 0, 1, x, 1, 1, y, 1], maxfev=20000000)
serr.append(np.sum((my_function_big(xs, *cfiti[0]) - ys) ** 2))
_x.append(x)
_y.append(y)
serr = np.array(serr)
_x = np.array(_x)
_y = np.array(_y)
print('done loop in loop fitting')
print('time: %0.1f' % (time.time() - start))
gridsize=20
plt.subplot(111)
plt.hexbin(_x, _y, C=serr, gridsize=gridsize, cmap=cm.jet, bins=None)
plt.axis([_x.min(), _x.max(), _y.min(), _y.max()])
cb = plt.colorbar()
cb.set_label('SE')
plt.show()
ix = np.argmin(serr.ravel())
mustart1 = _x.ravel()[ix]
mustart2 = _y.ravel()[ix]
print(mustart1)
print(mustart2)
cfit = curve_fit(my_function_big, xs, ys, p0=[0, 0, 1, mustart1, 1, 1, mustart2, 1], maxfev=2000000000)
xp = np.linspace(0, 1000, 1001)
plt.figure()
plt.scatter(xs, ys) #plot synthetic dat
plt.plot(xp, my_function_big(xp, *cfit[0]), '-', label='fit function') #plot data evaluated along 0-1000
plt.legend(loc=3, numpoints=1, prop={'size':12})
plt.show()
pylab.close()
```
[](https://i.stack.imgur.com/xO9AM.png)
[](https://i.stack.imgur.com/ogObd.png)
Good luck! | In your first attempt:
```
pars['g1_fraction'].set(0, vary=True)
```
The fraction must be a value between 0 and 1, but I believe it cannot be exactly zero. Try something like 0.000001, and it will work. |
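To see why a tiny nonzero `fraction` behaves almost identically to zero, here is a minimal numpy sketch of a pseudo-Voigt profile, i.e. a Gaussian/Lorentzian mix weighted by `fraction`. This is only an illustration using a common unit-area convention; lmfit's exact parameterisation may differ, and the function name is mine:

```python
import numpy as np

def pseudo_voigt(x, amplitude, center, sigma, fraction):
    # `sigma` is treated as the half-width at half-maximum (HWHM) of both
    # components; `fraction` is the Lorentzian share of the mix.
    sigma_g = sigma / np.sqrt(2 * np.log(2))  # HWHM -> Gaussian standard deviation
    gauss = np.exp(-(x - center) ** 2 / (2 * sigma_g ** 2)) / (sigma_g * np.sqrt(2 * np.pi))
    lorentz = sigma / (np.pi * ((x - center) ** 2 + sigma ** 2))
    return amplitude * ((1 - fraction) * gauss + fraction * lorentz)
```

Evaluating at the peak center, `fraction=0` and `fraction=1e-6` give values that agree to a few parts per million, so a tiny starting value is effectively a pure Gaussian while keeping the parameter well-defined.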
43,039,407 | I would like to ask your assistance on how to calculate the SHA-256 of large files in PHP. Currently, I use Amazon Glacier to store old files and use their API to upload the archive. Initially, I only used small files under 1 MB. When I tried to upload more than 1 MB, the API response said that the checksum I gave them is different from what they had calculated.
Here is my code to upload the file:
```
//get the sha256 using the file path
$image = //image path;
$sha256 = hash_file("sha256", $image);
$archive = $glacier->uploadArchive([
'accountId' => '',
'body' => "",
'checksum' => $sha256,
'contentSHA256' => $sha256,
'sourceFile' => $image,
'vaultName' => 'my-vault'
]);
```
And the error:
```
AWS HTTP error: Client error: `POST https://glacier.us-west-2.amazonaws.com/vaults/70/archives` resulted in a `400 Bad Request` response:{"code":"InvalidParameterValueException","message":"Checksum mismatch: expected 9f1d4da29b6ec24abde48cb65cc32652ff589467 (truncated...)
```
I tried the function below to check the final hash, but it seems it's not the right hash when I print it:
```
private function getFinalHash($file)
{
$fp = fopen($file, "r");
$ctx = hash_init('sha256');
while (!feof($fp)) {
$buffer = fgets($fp, 1024);
hash_update($ctx, $buffer);
}
$hash = hash_final($ctx, true); print_r($hash);exit;
fclose($fp);
}
```
The resulting hash looks like this: `ŸM¢›nÂJ½äŒ¶\Ã&RÿX”gíÖ'„IoA\C÷×`
The Amazon Glacier API documentation shows how to compute the checksum as stated:
>
> For each 1 MB chunk of payload data, compute the SHA-256 hash. The last chunk of data can be less than 1 MB. For example, if you are uploading a 3.2 MB archive, you compute the SHA-256 hash values for each of the first three 1 MB chunks of data, and then compute the SHA-256 hash of the remaining 0.2 MB data. These hash values form the leaf nodes of the tree.
>
>
>
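The tree-hash scheme quoted above can be sketched language-agnostically. Here is a minimal Python illustration of the same idea (my own helper, not Amazon's reference code): hash each 1 MB chunk to form the leaves, then hash adjacent pairs upward, carrying an odd leftover node to the next level, until one root remains.

```python
import hashlib

def glacier_tree_hash(data: bytes, chunk_size: int = 1024 * 1024) -> str:
    """SHA-256 tree hash over 1 MB chunks, per the scheme described above."""
    # Leaf nodes: SHA-256 of each chunk (the last chunk may be shorter than 1 MB).
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    # Combine adjacent pairs; an odd leftover node is carried up unchanged.
    while len(level) > 1:
        nxt = [hashlib.sha256(level[i] + level[i + 1]).digest()
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])
        level = nxt
    return level[0].hex()
```

For payloads of 1 MB or less the tree hash reduces to a plain SHA-256 of the data, which is why small uploads worked with `hash_file` alone.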
I think it has something to do with the correct way of providing the checksum, but I don't know how to do it for large files in PHP. I really need your help with this one. | 2017/03/27 | [
"https://Stackoverflow.com/questions/43039407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6729008/"
] | Glacier has its own way to compute the SHA256 tree hash.
Here is working code in PHP.
This function returns the SHA-256 tree hash built from 1 MB parts, as Glacier requires. It works perfectly for me, for both large and small files.
```
private function getFinalHash($path, $MB = 1048576)
{
$fp = fopen($path, "rb");
$hashes = [];
while (($buffer = fread($fp, $MB))!=="") {
$hashes[] = hash("sha256", $buffer, true);
}
if(count($hashes)==1){
return bin2hex($hashes[0]);
}
while(true){
$hashes_new = [];
foreach($hashes as $k => $hash){
if ($k % 2 == 0) {
if(isset($hashes[$k+1])){
$hashes_new[] = hash("sha256", $hash.$hashes[$k+1], true);
}
}
}
if(count($hashes)>2 && count($hashes) % 2 != 0){
$hashes_new[] = $hashes[count($hashes)-1];
}
if(count($hashes_new)>1){
$hashes = $hashes_new;
}else{
fclose($fp);
return bin2hex($hashes_new[0]);
}
}
}
``` | The trick is that the SHA-256 hash is computed by the AWS SDK for PHP, which you are using.
So you do not need to calculate the hash yourself.
Here is an example:
```
$client = new GlacierClient(array(
    'key'    => '[aws access key]',
    'secret' => '[aws secret key]',
    'region' => '[aws region]', // (e.g., us-west-2)
));

$result = $client->uploadArchive(array(
    'vaultName' => $vaultName,
    'body'      => fopen($filename, 'r'),
));

$archiveId = $result->get('archiveId');
``` |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | There's really two parts to your question:
1. Is native log shipping good enough?
2. If not, whose log shipping should I use?
Here's my two cents, but like you're already discovering, a lot of this is based on opinions.
About the first question - native log shipping is fine for small implementations - say, 1-2 servers, a handful of databases, and a full time DBA. In environments like this, the native log shipping's lack of monitoring, alerting, and management isn't a problem. If it breaks, you don't sweat bullets because it's relatively easy to repair. When would it break? For example, if someone accidentally deletes the transaction log backup file before it's restored on the disaster recovery server. (Happens all the time with automated processes.)
When you grow beyond a couple of servers, the lack of management automation starts to become a problem. You want better automated email alerting, alerts when the log shipping gets more than X minutes/hours behind, alerts when the file copying is taking too long, easier handling of multiple secondary servers, etc. That's when people turn to alternate solutions.
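As a rough illustration of the kind of lag alert described above, here is a sketch (in Python; the function names, threshold, and the `.trn` extension are my assumptions, not part of any particular product) that flags when the newest shipped transaction-log backup in a folder is older than a threshold:

```python
import os
import time

def log_shipping_lag_minutes(backup_dir: str, suffix: str = ".trn") -> float:
    """Age in minutes of the newest transaction-log backup file in backup_dir."""
    files = [os.path.join(backup_dir, f) for f in os.listdir(backup_dir)
             if f.endswith(suffix)]
    if not files:
        return float("inf")  # no backups at all: treat as infinitely behind
    newest = max(os.path.getmtime(f) for f in files)
    return (time.time() - newest) / 60.0

def check_lag(backup_dir: str, threshold_minutes: float = 30.0) -> bool:
    """True when shipping is within the threshold, False when it has fallen behind."""
    return log_shipping_lag_minutes(backup_dir) <= threshold_minutes
```

A real monitor would also wire this to email or paging and watch the restore side, but the core check is just a timestamp comparison.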
About the second question - I'll put it this way. I work for Quest Software, the makers of LiteSpeed, a SQL Server backup & recovery product. I regularly talk to database administrators who use our product and other products like Idera SQLSafe and Red Gate SQL Backup to make their backup management easier. We build GUI tools to automate the log shipping process, give you a nice graphical dashboard showing exactly where your bottlenecks are, and help make sure your butt is covered when your primary datacenter goes down. We sell a lot of licenses. :-)
If you roll your own scripts - and you certainly can - you will be completely alone when your datacenter goes down. You won't have a support line to call, you won't have tools to help you, and you won't be able to tell your coworkers, "Open this GUI and click here to fail over." You'll be trying to walk them through T-SQL scripts in the middle of a disaster. Expert DBAs who have a lot of time on their hands sometimes prefer writing their own scripts, and it does give you a lot of control, but you have to make sure you've got enough time to build them and test them before you bank your job on it. | I tried the built-in log shipping and found some real problems with it so I developed my own. I blogged about it [here](http://sqlblogcasts.com/blogs/davidwimbush/archive/2009/07/05/roll-your-own-log-shipping.aspx).
PS: And just for the record, you definitely get log shipping in the Workgroup edition. I don't know where this Enterprise-only thing started. |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | There's really two parts to your question:
1. Is native log shipping good enough?
2. If not, whose log shipping should I use?
Here's my two cents, but like you're already discovering, a lot of this is based on opinions.
About the first question - native log shipping is fine for small implementations - say, 1-2 servers, a handful of databases, and a full time DBA. In environments like this, the native log shipping's lack of monitoring, alerting, and management isn't a problem. If it breaks, you don't sweat bullets because it's relatively easy to repair. When would it break? For example, if someone accidentally deletes the transaction log backup file before it's restored on the disaster recovery server. (Happens all the time with automated processes.)
When you grow beyond a couple of servers, the lack of management automation starts to become a problem. You want better automated email alerting, alerts when the log shipping gets more than X minutes/hours behind, alerts when the file copying is taking too long, easier handling of multiple secondary servers, etc. That's when people turn to alternate solutions.
About the second question - I'll put it this way. I work for Quest Software, the makers of LiteSpeed, a SQL Server backup & recovery product. I regularly talk to database administrators who use our product and other products like Idera SQLSafe and Red Gate SQL Backup to make their backup management easier. We build GUI tools to automate the log shipping process, give you a nice graphical dashboard showing exactly where your bottlenecks are, and help make sure your butt is covered when your primary datacenter goes down. We sell a lot of licenses. :-)
If you roll your own scripts - and you certainly can - you will be completely alone when your datacenter goes down. You won't have a support line to call, you won't have tools to help you, and you won't be able to tell your coworkers, "Open this GUI and click here to fail over." You'll be trying to walk them through T-SQL scripts in the middle of a disaster. Expert DBAs who have a lot of time on their hands sometimes prefer writing their own scripts, and it does give you a lot of control, but you have to make sure you've got enough time to build them and test them before you bank your job on it. | I would expect this to be close to the last place you'd want to save a few bucks, especially given the likely consequences if you screw up. Would you rather have your job on the line? I don't even think I'd admit it, if I felt I had a chance of getting this one right?
What's your personal upside benefit in this? |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | Have you considered mirroring instead? Here is some [documentation](http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirror.mspx) to determine if you could do that instead | I would expect this to be close to the last place you'd want to save a few bucks, especially given the likely consequences if you screw up. Would you rather have your job on the line? I don't even think I'd admit it, if I felt I had a chance of getting this one right?
What's your personal upside benefit in this? |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | I tried the built-in log shipping and found some real problems with it so I developed my own. I blogged about it [here](http://sqlblogcasts.com/blogs/davidwimbush/archive/2009/07/05/roll-your-own-log-shipping.aspx).
PS: And just for the record, you definitely get log shipping in the Workgroup edition. I don't know where this Enterprise-only thing started. | I would expect this to be close to the last place you'd want to save a few bucks, especially given the likely consequences if you screw up. Would you rather have your job on the line? I don't even think I'd admit it, if I felt I had a chance of getting this one right?
What's your personal upside benefit in this? |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | There's really two parts to your question:
1. Is native log shipping good enough?
2. If not, whose log shipping should I use?
Here's my two cents, but like you're already discovering, a lot of this is based on opinions.
About the first question - native log shipping is fine for small implementations - say, 1-2 servers, a handful of databases, and a full time DBA. In environments like this, the native log shipping's lack of monitoring, alerting, and management isn't a problem. If it breaks, you don't sweat bullets because it's relatively easy to repair. When would it break? For example, if someone accidentally deletes the transaction log backup file before it's restored on the disaster recovery server. (Happens all the time with automated processes.)
When you grow beyond a couple of servers, the lack of management automation starts to become a problem. You want better automated email alerting, alerts when the log shipping gets more than X minutes/hours behind, alerts when the file copying is taking too long, easier handling of multiple secondary servers, etc. That's when people turn to alternate solutions.
About the second question - I'll put it this way. I work for Quest Software, the makers of LiteSpeed, a SQL Server backup & recovery product. I regularly talk to database administrators who use our product and other products like Idera SQLSafe and Red Gate SQL Backup to make their backup management easier. We build GUI tools to automate the log shipping process, give you a nice graphical dashboard showing exactly where your bottlenecks are, and help make sure your butt is covered when your primary datacenter goes down. We sell a lot of licenses. :-)
If you roll your own scripts - and you certainly can - you will be completely alone when your datacenter goes down. You won't have a support line to call, you won't have tools to help you, and you won't be able to tell your coworkers, "Open this GUI and click here to fail over." You'll be trying to walk them through T-SQL scripts in the middle of a disaster. Expert DBAs who have a lot of time on their hands sometimes prefer writing their own scripts, and it does give you a lot of control, but you have to make sure you've got enough time to build them and test them before you bank your job on it. | I'm pretty sure it's available in Standard, since we're doing some shipping, but I'm not sure about the Workgroup edition - it's pretty stripped down.
I'm always in favor of the packages solution, but mostly because I trust a whole team of MSFT developers more than I trust myself, but that comes with a price for sure. I'd second that **any solution you roll on your own has to come with a lag notification piece so that you'll know immediately if it isn't working** - how many times do we only find out backup solutions aren't working when somebody needs a backup? Also, think honestly about how much time it will take you to design and roll your own solution, including bug fixes and maintenance - can you really do it more cheaply? Maybe you can, but maybe not.
Also, one problem we ran into with Workgroup edition is that it only supports 5 connections at once, and it seems to start dropping connections if you get more users than that, so we had to upgrade to Standard. We were getting ASP.NET errors that our connections were closed if we left them unattended for even a few seconds, which caused us all kinds of problems. |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | Have you considered mirroring instead? Here is some [documentation](http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirror.mspx) to determine if you could do that instead | If you decide to roll your own, [here's a nice guide](http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1327798,00.html).
I'm assuming you're going this route because Enterprise Edition is so costly?
If you don't need a "live-backup", but really just want a frequently updated backup, I think this approach makes a lot of sense.
---
**One more thing:**
Make sure you regularly verify that your backup strategy is working. |
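One hedged sketch of such a verification step (Python; the file layout and function names are hypothetical): record a checksum manifest when backups are taken, then re-hash on a schedule and flag any file that no longer matches. This only catches corruption or missing files; it does not replace periodic test restores.

```python
import hashlib
import json
import os

def record_checksums(backup_dir: str, manifest_path: str) -> None:
    """Write a manifest of SHA-256 checksums for every file in backup_dir."""
    manifest = {}
    for name in sorted(os.listdir(backup_dir)):
        path = os.path.join(backup_dir, name)
        if os.path.isfile(path):
            with open(path, "rb") as fh:
                manifest[name] = hashlib.sha256(fh.read()).hexdigest()
    with open(manifest_path, "w") as fh:
        json.dump(manifest, fh)

def verify_checksums(backup_dir: str, manifest_path: str) -> list:
    """Return the names of files whose current hash no longer matches the manifest."""
    with open(manifest_path) as fh:
        manifest = json.load(fh)
    bad = []
    for name, expected in manifest.items():
        path = os.path.join(backup_dir, name)
        try:
            with open(path, "rb") as fh:
                actual = hashlib.sha256(fh.read()).hexdigest()
        except FileNotFoundError:
            actual = None
        if actual != expected:
            bad.append(name)
    return bad
```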
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | I tried the built-in log shipping and found some real problems with it so I developed my own. I blogged about it [here](http://sqlblogcasts.com/blogs/davidwimbush/archive/2009/07/05/roll-your-own-log-shipping.aspx).
PS: And just for the record, you definitely get log shipping in the Workgroup edition. I don't know where this Enterprise-only thing started. | I'm pretty sure it's available in Standard, since we're doing some shipping, but I'm not sure about the Workgroup edition - it's pretty stripped down.
I'm always in favor of the packages solution, but mostly because I trust a whole team of MSFT developers more than I trust myself, but that comes with a price for sure. I'd second that **any solution you roll on your own has to come with a lag notification piece so that you'll know immediately if it isn't working** - how many times do we only find out backup solutions aren't working when somebody needs a backup? Also, think honestly about how much time it will take you to design and roll your own solution, including bug fixes and maintenance - can you really do it more cheaply? Maybe you can, but maybe not.
Also, one problem we ran into with Workgroup edition is that it only supports 5 connections at once, and it seems to start dropping connections if you get more users than that, so we had to upgrade to Standard. We were getting ASP.NET errors that our connections were closed if we left them unattended for even a few seconds, which caused us all kinds of problems. |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | I tried the built-in log shipping and found some real problems with it so I developed my own. I blogged about it [here](http://sqlblogcasts.com/blogs/davidwimbush/archive/2009/07/05/roll-your-own-log-shipping.aspx).
PS: And just for the record, you definitely get log shipping in the Workgroup edition. I don't know where this Enterprise-only thing started. | If you decide to roll your own, [here's a nice guide](http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1327798,00.html).
I'm assuming you're going this route because Enterprise Edition is so costly?
If you don't need a "live-backup", but really just want a frequently updated backup, I think this approach makes a lot of sense.
---
**One more thing:**
Make sure you regularly verify that your backup strategy is working. |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | Have you considered mirroring instead? Here is some [documentation](http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirror.mspx) to determine if you could do that instead | I'm pretty sure it's available in Standard, since we're doing some shipping, but I'm not sure about the Workgroup edition - it's pretty stripped down.
I'm always in favor of the packages solution, but mostly because I trust a whole team of MSFT developers more than I trust myself, but that comes with a price for sure. I'd second that **any solution you roll on your own has to come with a lag notification piece so that you'll know immediately if it isn't working** - how many times do we only find out backup solutions aren't working when somebody needs a backup? Also, think honestly about how much time it will take you to design and roll your own solution, including bug fixes and maintenance - can you really do it more cheaply? Maybe you can, but maybe not.
Also, one problem we ran into with Workgroup edition is that it only supports 5 connections at once, and it seems to start dropping connections if you get more users than that, so we had to upgrade to Standard. We were getting ASP.NET errors that our connections were closed if we left them unattended for even a few seconds, which caused us all kinds of problems. |
420,530 | I'm looking into using log shipping for disaster recovery and I'm getting mixed messages about whether to use the built-in stuff or roll my own. Which do you recommend, please, and if you favour rolling your own what's wrong with the built-in stuff? If I'm going to reinvent the wheel I don't want to make the same mistakes! (We have the Workgroup edition.) Thanks in advance. | 2009/01/07 | [
"https://Stackoverflow.com/questions/420530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43631/"
] | There's really two parts to your question:
1. Is native log shipping good enough?
2. If not, whose log shipping should I use?
Here's my two cents, but like you're already discovering, a lot of this is based on opinions.
About the first question - native log shipping is fine for small implementations - say, 1-2 servers, a handful of databases, and a full time DBA. In environments like this, the native log shipping's lack of monitoring, alerting, and management isn't a problem. If it breaks, you don't sweat bullets because it's relatively easy to repair. When would it break? For example, if someone accidentally deletes the transaction log backup file before it's restored on the disaster recovery server. (Happens all the time with automated processes.)
When you grow beyond a couple of servers, the lack of management automation starts to become a problem. You want better automated email alerting, alerts when the log shipping gets more than X minutes/hours behind, alerts when the file copying is taking too long, easier handling of multiple secondary servers, etc. That's when people turn to alternate solutions.
About the second question - I'll put it this way. I work for Quest Software, the makers of LiteSpeed, a SQL Server backup & recovery product. I regularly talk to database administrators who use our product and other products like Idera SQLSafe and Red Gate SQL Backup to make their backup management easier. We build GUI tools to automate the log shipping process, give you a nice graphical dashboard showing exactly where your bottlenecks are, and help make sure your butt is covered when your primary datacenter goes down. We sell a lot of licenses. :-)
If you roll your own scripts - and you certainly can - you will be completely alone when your datacenter goes down. You won't have a support line to call, you won't have tools to help you, and you won't be able to tell your coworkers, "Open this GUI and click here to fail over." You'll be trying to walk them through T-SQL scripts in the middle of a disaster. Expert DBAs who have a lot of time on their hands sometimes prefer writing their own scripts, and it does give you a lot of control, but you have to make sure you've got enough time to build them and test them before you bank your job on it. | If you decide to roll your own, [here's a nice guide](http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1327798,00.html).
I'm assuming you're going this route because Enterprise Edition is so costly?
If you don't need a "live-backup", but really just want a frequently updated backup, I think this approach makes a lot of sense.
---
**One more thing:**
Make sure you regularly verify that your backup strategy is working. |
39,490 | I have a PC where the desktop background options are disabled. I have tried resetting the ActiveDesktop - AllowChangingWallpaper (something like that) to no avail. There is no security enabled on the machine, it is running BitDefender after a recent trojan attack. That is all I know at the moment, but nowhere can I find any way to enable the background options.
Machine is running XP Home. | 2009/09/11 | [
"https://superuser.com/questions/39490",
"https://superuser.com",
"https://superuser.com/users/10670/"
] | Install XP **first**. After that install 7. When 7 is installed, its bootloader will also recognize XP; that way you'll be able to boot in both operating systems without needing to do anything else. | Why do you need to dual boot XP? If you have Ultimate just install Windows 7 then install XP mode in Windows 7 and it's all virtualized. Unless you have some specific reason for it which you could add to the question. |
39,490 | I have a PC where the desktop background options are disabled. I have tried resetting the ActiveDesktop - AllowChangingWallpaper (something like that) to no avail. There is no security enabled on the machine, it is running BitDefender after a recent trojan attack. That is all I know at the moment, but nowhere can I find any way to enable the background options.
Machine is running XP Home. | 2009/09/11 | [
"https://superuser.com/questions/39490",
"https://superuser.com",
"https://superuser.com/users/10670/"
] | Although the recommended method is to install XP and then Windows 7, there is no need to reinstall in your case.
Follow this [guide](http://www.ehow.com/how_4900122_use-easybcd-windows-xp.html) (edited below) using a free tool called [EasyBCD](http://neosmart.net/dl.php?id=1).
>
> 1. Download and install [**EasyBCD**](http://neosmart.net/dl.php?id=1). Click **I Agree** to the license agreement,
> click **Next** to install in the default
> location, and the installation wizard
> will do the rest.
> 2. Click **View Settings**.
> 3. Change the **Default OS** to **Windows 7**. The operating system to
> associate the settings with should be
> Windows 7 too. Select the drive on
> which Windows 7 is installed under
> **Drive**. Type **Windows 7** in the
> Name box and press **Save Settings**.
> 4. Click **Add/Remove Entries**.
> 5. Under **Add an Entry**, choose the **Windows** tab. Select the drive on
> which Windows 7 is installed. Type
> **Windows 7** in the **Name** box and
> press **Add Entry**.
> 6. Under **Add an Entry**, choose the **Windows** tab. Select the drive on
> which Windows XP is installed. Type
> **Windows XP** in the Name box and press
> **Add Entry**.
> 7. Exit EasyBCD and restart your computer to be presented with a
> multi-boot option screen for Windows
> XP and Windows 7.
>
>
> | Install XP **first**. After that install 7. When 7 is installed, its bootloader will also recognize XP; that way you'll be able to boot in both operating systems without needing to do anything else. |
39,490 | I have a PC where the desktop background options are disabled. I have tried resetting the ActiveDesktop - AllowChangingWallpaper (something like that) to no avail. There is no security enabled on the machine, it is running BitDefender after a recent trojan attack. That is all I know at the moment, but nowhere can I find any way to enable the background options.
Machine is running XP Home. | 2009/09/11 | [
"https://superuser.com/questions/39490",
"https://superuser.com",
"https://superuser.com/users/10670/"
] | Install XP **first**. After that install 7. When 7 is installed, its bootloader will also recognize XP; that way you'll be able to boot in both operating systems without needing to do anything else. | Why don't you install and run XP from a VHD file?
[Windows 7 is able to natively boot VHD files](http://edge.technet.com/Media/Windows-7-Boot-from-VHD/), so this might be the easiest way to get XP installed.
If you still want to install XP and Win7 side by side, I'd install XP first, then Windows 7. Why? Because XP's installation does not know or recognize the Windows 7 bootloader, while the Windows 7 bootloader will recognize the XP bootloader. |
39,490 | I have a PC where the desktop background options are disabled. I have tried resetting the ActiveDesktop - AllowChangingWallpaper (something like that) to no avail. There is no security enabled on the machine, it is running BitDefender after a recent trojan attack. That is all I know at the moment, but nowhere can I find any way to enable the background options.
Machine is running XP Home. | 2009/09/11 | [
"https://superuser.com/questions/39490",
"https://superuser.com",
"https://superuser.com/users/10670/"
] | Install XP **first**. After that install 7. When 7 is installed, its bootloader will also recognize XP; that way you'll be able to boot in both operating systems without needing to do anything else. | Also is installing xp completely needed why do you need xp for?
First check if all of the programs if you could use all of those in either vista or win7.
(for me I use a website called FileHippo is has an update checker to check all of you're programs for an update)
Than if there is a reason you have to use win7 than consider virtualization.
For me using [virtualbox](http://www.virtualbox.org/wiki/Downloads) is easy and simple.
For instructions view the [pdf][4]. |
39,490 | I have a PC where the desktop background options are disabled. I have tried resetting the ActiveDesktop - AllowChangingWallpaper (something like that) to no avail. There is no security enabled on the machine, it is running BitDefender after a recent trojan attack. That is all I know at the moment, but nowhere can I find any way to enable the background options.
Machine is running XP Home. | 2009/09/11 | [
"https://superuser.com/questions/39490",
"https://superuser.com",
"https://superuser.com/users/10670/"
] | Although the recommended method is to install XP and then Windows 7, there is no need to reinstall in your case.
Follow this [guide](http://www.ehow.com/how_4900122_use-easybcd-windows-xp.html) (edited below) using a free tool called [EasyBCD](http://neosmart.net/dl.php?id=1).
>
> 1. Download and install [**EasyBCD**](http://neosmart.net/dl.php?id=1). Click **I Agree** to the license agreement,
> click **Next** to install in the default
> location, and the installation wizard
> will do the rest.
> 2. Click **View Settings**.
> 3. Change the **Default OS** to **Windows 7**. The operating system to
> associate the settings with should be
> Windows 7 too. Select the drive on
> which Windows 7 is installed under
> **Drive**. Type **Windows 7** in the
> Name box and press **Save Settings**.
> 4. Click **Add/Remove Entries**.
> 5. Under **Add an Entry**, choose the **Windows** tab. Select the drive on
> which Windows 7 is installed. Type
> **Windows 7** in the **Name** box and
> press **Add Entry**.
> 6. Under **Add an Entry**, choose the **Windows** tab. Select the drive on
> which Windows XP is installed. Type
> **Windows XP** in the Name box and press
> **Add Entry**.
> 7. Exit EasyBCD and restart your computer to be presented with a
> multi-boot option screen for Windows
> XP and Windows 7.
>
>
> | Why do you need to dual boot XP? If you have Ultimate just install Windows 7 then install XP mode in Windows 7 and it's all virtualized. Unless you have some specific reason for it which you could add to the question. |
39,490 | I have a PC where the desktop background options are disabled. I have tried resetting the ActiveDesktop - AllowChangingWallpaper (something like that) to no avail. There is no security enabled on the machine, it is running BitDefender after a recent trojan attack. That is all I know at the moment, but nowhere can I find any way to enable the background options.
Machine is running XP Home. | 2009/09/11 | [
"https://superuser.com/questions/39490",
"https://superuser.com",
"https://superuser.com/users/10670/"
] | Why do you need to dual boot XP? If you have Ultimate just install Windows 7 then install XP Mode in Windows 7 and it's all virtualized. Unless you have some specific reason for it, which you could add to the question. | Also, is installing XP completely needed? What do you need XP for?
First check whether all of the programs you use are available in either Vista or Win7.
(For me, I use a website called FileHippo; it has an update checker that checks all of your programs for updates.)
Then, if there is a reason you still have to use XP, consider virtualization.
For instructions view the [pdf][4]. |
39,490 | I have a PC where the desktop background options are disabled. I have tried resetting the ActiveDesktop - AllowChangingWallpaper (something like that) to no avail. There is no security enabled on the machine, it is running BitDefender after a recent trojan attack. That is all I know at the moment, but nowhere can I find any way to enable the background options.
Machine is running XP Home. | 2009/09/11 | [
"https://superuser.com/questions/39490",
"https://superuser.com",
"https://superuser.com/users/10670/"
] | Although the recommended method is to install XP and then Windows 7, there is no need to reinstall in your case.
Follow this [guide](http://www.ehow.com/how_4900122_use-easybcd-windows-xp.html) (edited below) using a free tool called [EasyBCD](http://neosmart.net/dl.php?id=1).
>
> 1. Download and install [**EasyBCD**](http://neosmart.net/dl.php?id=1). Click **I Agree** to the license agreement,
> click **Next** to install in the default
> location, and the installation wizard
> will do the rest.
> 2. Click **View Settings**.
> 3. Change the **Default OS** to **Windows 7**. The operating system to
> associate the settings with should be
> Windows 7 too. Select the drive on
> which Windows 7 is installed under
> **Drive**. Type **Windows 7** in the
> Name box and press **Save Settings**.
> 4. Click **Add/Remove Entries**.
> 5. Under **Add an Entry**, choose the **Windows** tab. Select the drive on
> which Windows 7 is installed. Type
> **Windows 7** in the **Name** box and
> press **Add Entry**.
> 6. Under **Add an Entry**, choose the **Windows** tab. Select the drive on
> which Windows XP is installed. Type
> **Windows XP** in the Name box and press
> **Add Entry**.
> 7. Exit EasyBCD and restart your computer to be presented with a
> multi-boot option screen for Windows
> XP and Windows 7.
>
>
> | Why don't you install and run XP from a VHD file?
[Windows 7 is able to natively boot VHD files](http://edge.technet.com/Media/Windows-7-Boot-from-VHD/), so this might be the easiest way to get XP installed.
If you still want to install XP and Win7 side by side, I'd install XP first, then Windows 7. Why? Because XP's installation does not know or recognize the Windows 7 bootloader, while the Windows 7 bootloader will recognize the XP bootloader. |