| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
1,853,542
|
Why should $b$ groups of $a$ apples be the same as $a$ groups of $b$ apples?
We were taught this, so it seems rather trivial, but the more I think about it, the more I feel that it is not.
I'm trying to avoid an argument that uses the fact that multiplication is commutative, because I see that I am trying to PROVE that multiplication is commutative in $\mathbb{Z}^{+}-0$ if we define multiplication by repeated addition.
I would accept arguments using the fact that:
$a+b=b+a$, because if we define $+$ to be the operation combining two quantities, then it should be rather trivial that $a$ apples and $b$ apples is the same as $b$ apples and $a$ apples.
Is it enough to draw $a$ groups of $b$ (1 by 1) squares and rotate this to show that it is the same as $b$ groups of $a$ (1 by 1) squares? It does not seem good enough to me, because it uses a picture, and I was taught before that pictures in math do not prove anything.
|
2016/07/08
|
[
"https://math.stackexchange.com/questions/1853542",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/229023/"
] |
The usual way of proving it is by induction.
First, define multiplication recursively: $$a\cdot 1 = a\\a\cdot(b+1)=a\cdot b + a$$
Next, show, by induction on $b$ that $1\cdot b=b\cdot 1$. That's relatively easy to do.
Next, prove by induction on $b$ that $(a+1)\cdot b = a\cdot b + b$.
Finally, prove by induction on $a$ that for all $b$, $a\cdot b = b\cdot a$.
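As a sanity check (not a proof), the recursive definition above can be implemented directly and commutativity spot-checked on a small range; the function name `mul` here is just for illustration:

```
def mul(a, b):
    # Recursive definition from the answer:
    #   a * 1 = a
    #   a * (b + 1) = a * b + a
    if b == 1:
        return a
    return mul(a, b - 1) + a

# The induction argument proves this for all positive a, b;
# here we only spot-check a finite range.
print(all(mul(a, b) == mul(b, a)
          for a in range(1, 20) for b in range(1, 20)))  # True
```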
|
Label each apple with a pair of numbers $(x,y)$ such that $x$ is the number of the group the apple was originally in (1 through $b$) and $y$ is the number of the apple within the original group (1 through $a$). Every apple gets a unique label this way. Now, change every label to reverse the two numbers: $(x,y) \rightarrow (y,x)$. Every apple still has a unique label. But, this new labeling scheme would also come about from grouping $a$ groups of $b$ apples. Since the number of labels is the same in both cases, $b$ groups of $a$ apples must have the same size as $a$ groups of $b$ apples.
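The labeling argument can also be sketched in code (Python used purely for illustration; the values of `a` and `b` are arbitrary):

```
# b groups of a apples: labels (x, y), where x = group number, y = apple number
a, b = 3, 5
labels = {(x, y) for x in range(1, b + 1) for y in range(1, a + 1)}

# Reversing every label...
swapped = {(y, x) for (x, y) in labels}

# ...gives exactly the labels produced by a groups of b apples
a_groups_of_b = {(x, y) for x in range(1, a + 1) for y in range(1, b + 1)}
print(swapped == a_groups_of_b, len(labels) == len(swapped))  # True True
```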
|
14,120
|
When I tried to practice truing my wheels, I found my spokes turn with the nipples. I tried to drop some lubricant on the nipples, but I have had the bike for two years now and there is a lot of dust clogged in it, so the lubricant doesn't help much. What can I do in this situation?
|
2013/01/20
|
[
"https://bicycles.stackexchange.com/questions/14120",
"https://bicycles.stackexchange.com",
"https://bicycles.stackexchange.com/users/5947/"
] |
Have you removed the tire, tube, and rim tape, then applied the penetrating lube to the open end of the nipple?
|
Give it all a good clean, then try some of [this](https://en.wikipedia.org/wiki/Penetrating_oil).
|
160,135
|
I have been working at company A for 2 years. After that, company A was merged with another company, forming a new company named B. How should I mention them in my CV so that recruiters do not mistake that for a job/workplace change?
|
2020/07/07
|
[
"https://workplace.stackexchange.com/questions/160135",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/119456/"
] |
Software Engineer for 4 years at B (known as A before the merger in 2018)
|
To present things as closely to reality as possible, I would use the following:
> * from (date1) to (date2): company A
> * from (date2) to (date3): company B (as a result of company A being bought by / merged into company B)
|
73,758,225
|
I want to build a mobile app with Flutter that takes a payment when an event is done.
For example: I call a taxi from an app, and the app calculates the payment from distance, time, etc. When the "finish driving" button is tapped, the app should take the payment from the saved credit card immediately, with no 3D Secure or anything.
My question is: what is that payment method called, and how can I implement that kind of service (Stripe, PayPal, etc.)?
|
2022/09/17
|
[
"https://Stackoverflow.com/questions/73758225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7172665/"
] |
The error message at `f({ ...a, bar: 'bar' })` says
```
/* Argument of type 'Omit<A, "bar"> & { bar: string; }' is not
assignable to parameter of type 'A & { bar: string; }'. */
```
because the value `{ ...a, bar: "bar" }` is inferred to be `Omit<A, "bar"> & { bar: string }`, but you've got the `f()` function annotated that it takes a value of type `A & { bar: string }`. The compiler cannot be sure that those types are the same (and won't be the same if `A` has a `bar` property narrower than `string`), so it complains.
That error message implies that everything would be fine if `f()` accepted `Omit<A, "bar"> & { bar: string }` instead. Indeed, if you make that change, it compiles without error:
```
type Example = <A extends { bar: string }, R>(
f: (a: Omit<A, 'bar'> & { bar: string }) => R,
) => (a: Omit<A, 'bar'>) => R;
const example: Example = (f) => (a) => f({ ...a, bar: 'bar' }); // okay
```
That might be good enough for you, but there are a few more changes I'd make here. The type of `{ ...a, bar: 'bar' }` would be `Omit<A, "bar"> & { bar: string }` even if `a` were of type `A`, because that's what spreading with [generics](https://www.typescriptlang.org/docs/handbook/2/generics.html) does in TypeScript. Presumably this is what you thought `a` was in the first place, and only put `Omit` there to fix things. So let's eliminate that:
```
type Example = <A extends { bar: string }, R>(
f: (a: Omit<A, 'bar'> & { bar: string }) => R,
) => (a: A) => R;
const example: Example = (f) => (a) => f({ ...a, bar: 'bar' }); // okay
```
And finally, there's no reason why `A` *must* have a `bar` property of type `string`. Even if `A` has no `bar` property, or an incompatible `bar` property, the function should still behave as desired (since `{ ...a, bar: 'bar' }` will overwrite any `bar` property on `A`, or provide one if `A` doesn't have one). So we can remove the [constraint](https://www.typescriptlang.org/docs/handbook/2/generics.html#generic-constraints) on `A`:
```
type Example = <A, R>(
f: (a: Omit<A, 'bar'> & { bar: string }) => R,
) => (a: A) => R;
const example: Example = (f) => (a) => f({ ...a, bar: 'bar' }); // okay
```
---
The only issue I see now is that it will be hard for the compiler to infer `A` properly by a call to `example`:
```
const exampled = example((x: { baz: number, bar: string }) =>
x.baz.toFixed(2) + x.bar.toUpperCase()
); // error! A is inferred as unknown, and so
// Argument of type '(x: { baz: number; bar: string;}) => string' is not
// assignable to parameter of type '(a: Omit<unknown, "bar">
// & { bar: string; }) => string'.
```
The compiler just can't infer `A` from a value of type `Omit<A, "bar"> & { bar: string }`, and I can't seem to rephrase that type in a way that works.
You can, of course, manually specify `A` (and also `R`, because there's no *partial* type argument inference as requested in [ms/TS#26242](https://github.com/microsoft/TypeScript/issues/26242), at least not as of TS4.8):
```
const exampled = example<{ baz: number }, string>(
x => x.baz.toFixed(2) + x.bar.toUpperCase()
); // okay
// const exampled: (a: { baz: number; }) => string
```
And verify that it works as expected:
```
console.log(exampled({ baz: Math.PI })) // "3.14BAR"
```
So that's, I guess, the answer to the question as asked.
---
The issue of inference here seems to be out of scope. If you want inference to work for callers, you pretty much need to do something like this:
```
type Example = <A, R>(
f: (a: A) => R,
) => (a: Omit<A, "bar">) => R;
const example: Example = (f) => (a) => f({ ...a, bar: 'bar' } as
Parameters<typeof f>[0]); // okay
const exampled = example((x: { baz: number, bar: string }) =>
x.baz.toFixed(2) + x.bar.toUpperCase()
); // okay
// const exampled: (a: Omit<{ baz: number; bar: string; }, "bar">) => string
console.log(exampled({ baz: Math.PI })) // "3.14BAR"
```
So we're sacrificing some type safety in the implementation of `example` (using a [type assertion](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#type-assertions) to claim that the argument to `f` is the right type, which is technically not 100% safe but fine in practice) to get nice behavior for those who call it. It's possible that *this* is the way you want to proceed, depending on whether you're going to *implement* `Example`s or *call* them more often.
---
[Playground link to code](https://www.typescriptlang.org/play?#code/HYQwtgpgzgDiDGEAEApeIA2AvJBvAUPkkqJLAsgEIBOA9gNYTB5HFIAuAnjMgKIAe4GBmQBeJAB4Agkgj92TACZQ8SAEYhqALiRR21AJbAA5kgC+AGiQAlAHwAKVmyQAzHfZA6ZAMlUbtuvpGpmYAlEiitjYWTkjhkUgeOgDyYAbs0lYA5P5ZtvFR1gDchM7wtMB6soJgwhA6AkIiEYkuBYkg7S72uEgAdAMgVv46OZpZ5qElxGalJODQcIhIAGIG-ACMLM5cPEiNtc3i0tUKwMp+mjp6hibmVnaOzsRuHSlpGVLZuVG+vSOBW4hdrWGLOdpJJCpdKZJBjah5EHTNjlSrsapNer7Gp1Fr2NoRKIeLo9fqDYZXOG5SZFYgAejpSAYIE4rFmrFIiwoq3WACYWLFdnwcUdJDI5GcLv9KTdgvcbA5Yi93J4oR9YfC8kg-uoZUE7mFCdFYhDVVIkbFUVU5JiGiKxK1TSTegM+kNdQFNTT6YzmayZoQOQtyMs1vwAMzbNhC7GYlqwx5K1wq94wr5U8a-S4BWUGkFgtimrwWsoVa32u1x8T4p1G7ou8ke0bUsK0pAMpn0FlzYg2w4Qez2fg6aVYHTAACuYDUEGoFJz+uBhKT-D6GiwfXYtDDEEU9l54QA1EhV-5N7QAKowHjUADCICgA9CJtpHdndGoAEIkDIDCojC4767kgD5IBOwD0MAtAAO7AFYIDnLotCxB2UjUMYU5MOitAuBw3DIFkQ4jrqY4kFOM7ULSAK5sYRSGgkNETH+JC0OwKGMg+UAGMYoBqM0W5IHA1ALAo1BMrhMaEaq0IZOBkEwXBSAAET+EpWbSguQK0vRUSMX0galmiGL9ooLR9nUEijuO5GzvKNGKs8J5GqeIAbluO57geSDHi51DnleN73o+9jPuCJTsUgVrouZIiKCmqgkdZ06ztp7Q0UglplrQIh9BgtDGPYMW7qS646AAsiA7AABZ9AACgAkpM4Qdkp4Z9BsAAslBSNYSlzOyQZkEsyBhh1UbEIK+Gxv28bpomjmvJC5pGqCJpGpCMmwipmhqSWKJltFFbTbi1YEgkxJ1qSrrugCXpmCBUBJrVmgibOUASEKOGuLYADaAAMAC6Uw+p23aZUZRWmeIRWDsOlykZOyVzk2gJyjpK5rq554efuR4npjflbgFs5BU+L4oQAVJFB3GXUcVvGqMK9IlZFI1RepafK23ULtRo0eDUDZRAuX5YV9p7lZSAVdVdWNWEzWMq17VdT1fVsoG+BmEAA)
|
The problem here is that the type parameter `A extends { bar: string }` could be instantiated, for example, with the type `{ bar: 'not bar' }`. If it is, then passing the argument `{ ...a, bar: 'bar' }` to the function `f` will produce a type error. To illustrate, here is an example:
```
type E = typeof a<{ bar: 'not bar' }, unknown>
const example1: E = (f) => (a) => f({ ...a, bar: 'bar' })
// The expected type comes from property 'bar'
// which is declared here on type '{ bar: "not bar"; }'
const example2: E = (f) => (a) => f({ ...a, bar: 'not bar' }) // Ok
```
This will work for you:
```
type Example1 = <F extends (a: any) => R, R>(
f: F & ((a: Parameters<F>[0]) => R),
) => (a: Omit<Parameters<F>[0], 'bar'>) => R;
const example3: Example1 = (f) => (a) => f({ ...a, bar: 'bar' }); // Ok
const q1 = example3((p: {a: 3, bar: 'b'}) => 3)
// q1: (a: Omit<{
// a: 3;
// bar: 'b';
// }, "bar">) => number
const a1 = q1({ a: 3 }) // number
const a2 = q1({ a: 4 }) // Error: Type '4' is not assignable to type '3'
const a3 = q1({ a: 3, foo: 1 }) // Error
```
[Playground](https://www.typescriptlang.org/play?ssl=31&ssc=41&pln=17&pc=1#code/C4TwDgpgBAogHgQwLZgDbQLxQDwEEoRzAQB2AJgM5QDeUARggE4BcUFwjAliQOZQC+AGigAlAHwAKALAAoKPKgAzVhIStcASigYxowbK06oq1gHkknYHmEByBoxtjDukQG5ZsgMYB7EuwKIKOis8MhomMaKzsYI0YoStAB0yQjC9qx2TDYCGu4ysmQQnqhM0D5+wFBqsIHhHjKgkLDaUI0Q3opV2LTpUDYk3pX22UJQAK4kANYDAO4kYvXl-oRh6ACMIS0SUdq6qnEJUMmJqfRMGcM5sgD011AAKgAW0ISQnsRkreBl3kgQVIpGL8oGAgZBGKA+sMbncZo9OJ5HlBOFRCsVSp9noxoL4vk0bD1zlAAEQDIZMYmuAQ2Ra+Za1dAAJk2WG20X2uyUh2Op16-UGZwcOSgtygpkm9TaNVWEDWLWwADEAsRyFQTFUSCBoiJhOJpHIFMooEqAGTGdUABSYyAgxEYFEVYgA2gAGAC62o0+hk7Oq5ks2CtjBtdodCud7tswycnLctIqARlAGYQgzZVsdkYOUZ4kkUmkiZkhfxciK7uL4-4AI5yrArIIQJMSCRgVjUapJgssKE2EucpMaGFQGsqP0WKzUIcKeQdvKi6eCi42Od3UbE+zEmNGEhjJB0CCMSuVBC14drQ4d4Wind7g9eOnHxktGsX1gAFivdxgjCB3fu3z6N9shRKAySqCgKE4HgSAQOh0Fabw8WgGwkxpGQlmPJNn3PWgO2ERRvG8Vg5T7UVv1-WQgA)
And if you need the `f` function to have a mandatory argument type that is assignable to `{ bar: string }`, this is an enhancement:
```
type Example = <F extends (a: any) => R, R>(
f: F &
((a: Parameters<F>[0]) => R) &
(Parameters<F>[0] extends { bar: string } ? unknown : never)
) => (a: Omit<Parameters<F>[0], "bar">) => R;
const example: Example = (f) => (a) => f({ ...a, bar: "bar" }); // Ok
const q1 = example((p: { a: 3; bar: "b" }) => 3);
// q1: (a: Omit<{
// a: 3;
// bar: 'b';
// }, "bar">) => number
const q2 = example((p: { a: 3; foo: "b" }) => 3); // Error
const a1 = q1({ a: 3 }); // number
const a2 = q1({ a: 4 }); // Error: Type '4' is not assignable to type '3'
const a3 = q1({ a: 3, foo: 1 }); // Error
```
[Playground](https://www.typescriptlang.org/play?ssl=19&ssc=42&pln=1&pc=1#code/C4TwDgpgBAogHgQwLZgDbQLxQDwDEoRzAQB2AJgM5QAUCAXFAiSAJRQYB8UASgDQ8dqAKChQAZg3wAyEaJq0GABQQAnZBGIqKeDgG0ADAF02nHmxlyaytUg0QtOg4YJFSlKAG8oAI1UMKwCoAliQA5lAAvlAA-FAAriQA1iQA9gDuJFAMJBAAbvYsQiZcClAA8khBwNjW6prauHpG-ABEviotHMU8ANxCQgDGKSQBLshoEAzw4+jsNGLdtN1i1F4AdBsI-O0MbaotkSw9UAD0J+WJ-UMjwFAAjgCMc4QzENTUYAxe9FAAzMc7KBtA4Rbq-I5CM73B4MUoVKrYDyQ86WRgMf7I1E+PxQADk3lxfShEVa7U63RIcSQ3nsV2GozuACZnogUOh3p9PGi-scxCkUrtvCCwUdTucYCoVCkVHSbownlhHqtub9DscoZTqbTrqMEMzFQ9lT8ACxqsWwSXShgAFXA0FxxtxUCCVFStwQFAoQVCJAQ3lmwBSUFAkDxv1xg3p7tVBqN6P4fIFUCeoPV4stMqAA)
This is an enhancement for the case when the function `f` must be disallowed if it depends on a specific kind of string (as I learned from a recent reply, this is a requirement):
```
type Example = <F extends (a: any) => R, R, P = Parameters<F>[0]>(
f: F &
((a: P) => R) &
("bar" extends keyof P ? unknown : never) &
({ bar: string } extends Pick<P, "bar" & keyof P> ? unknown : never)
) => (a: Omit<P, "bar">) => R;
const example: Example = (f) => (a) => f({ ...a, bar: "bar" }); // Ok
const q1 = example((p: { a: 3; bar: string }) => 3);
// q1: (a: Omit<{
// a: 3;
// bar: 'b';
// }, "bar">) => number
const q2 = example((p: { a: 3; bar: "b" }) => 3); // Error
const q3 = example((p: { a: 3 }) => 3); // Error
const q4 = example((p: { a: 3; foo: string }) => 3); // Error
const a1 = q1({ a: 3 }); // number
const a2 = q1({ a: 4 }); // Error: Type '4' is not assignable to type '3'
const a3 = q1({ a: 3, foo: 1 }); // Error
```
[playground](https://www.typescriptlang.org/play?ssl=23&ssc=1&pln=1&pc=1#code/C4TwDgpgBAogHgQwLZgDbQLxQDwDEoRzAQB2AJgM5QAUCAXFAiSAJRQYB8UASgDQ-8ACuyiCEAJ2QRi4ing4BtAAwBdDtQBQUKADMG+AGRbtNWg0FtOPNkZM0ARACMJ9gkVKUoAawggA9jqiUAD8UACuJF4kfgDuJFAMJBAAbhDiNsba1ADeUM7iDBTA4gCWJADmUAC+bsTkVIIlAMZe2IL8Ti5QBt6+AaJcoRFRsfGJKWksGpZcZlAA8kglwG0d+fYcMzwA3BoaTX4kRW7IaBAM8KfoItQ6W7RbOjlQAHRvCPz5DJ3irlUs2ygAHogQsvHsDkdgFAAI4ARhEhCuEGo1DADFy9CgAGZAV8oEVShVqltsQCNCDYXCGHNFstsNkKaC7IwGLimSy8hIGAByRw83aUqprFybdhcEhhJCONIQw7HGEAJkRiBQ6FR6KgmLZeO5UCcf1JAOBoJg4nEfnE+3l0Jh2JVyI1GNZOJJ4pxxspZotVshCoALA61Si0c6sbjdH4-IVimVKv93WTAV7zZa5VDGAisPDnuGScnQZLpbK-dCEMrs3DcwxA-8C7BUwUoAAVcDQHn+nlQEpUaJligUErlEgIRzXYB+KCgSBQHnYnnWjMIe2V6s4-g6KMMBF1k0Nn0aIA)
|
26,836,807
|

From these tables I need to get all available columns from table "car\_type", even if
1. It has car(s) in table "car",
2. It is not 'in use' in table "approval" (if it is in use, it will show 0 in the field car\_return),
3. car.car\_status is not 0 (if it shows 0, it means this car is being fixed and can't be used for some reason).
\*\* If I have 2 vans and I use one of them for 1 record in table 'approval', it will show only 1 van available.
Or if I have 2 vans, I use one for 1 record in table 'approval', and `car.car_status = 0`, it will not be available for use anymore.
I need a result like this

If possible I need a result like this

Ps. Sorry for my poor English.
|
2014/11/10
|
[
"https://Stackoverflow.com/questions/26836807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2960983/"
] |
While the answer from @Kumar hints in the correct direction (but don't use a text field from the default schema.xml for this, as it will process any input both when querying and indexing), the explanation is that you will probably need a new field to run wildcard queries against, unless you can transform your query into an actual integer operation (if all your store\_nbr-s are of the same length).
Add a StrField (in the default schema.xml there is a type named "string", a simple string field that suits this purpose):
```
<field name="store_nbr_s" type="string" indexed="true" stored="false" />
```
Add a copyField directive that copies the value from the `store_nbr` field into the string field when indexing:
```
<copyField source="store_nbr" dest="store_nbr_s" />
```
Then query against this field for prefix matches, using the syntax you already described (`store_nbr_s:280*`).
If this particular query format (querying for the three first digits of a store\_nbr) is very common, you'll want to transform the content on the way in to index the three first digits in a dedicated field, as it'll give you better query performance and a smaller cache.
And if you're doing a lot of wildcard queries (with varying lengths in front of \*), look into having the field generate EdgeNGrams instead, as these will give you dedicated tokens that Solr looks up, instead of a wildcard search that may have to traverse a larger set of possible tokens to determine whether the document should be returned.
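For completeness, a hypothetical EdgeNGram setup might look like the following (the field and type names `string_edgengram` / `store_nbr_ngram` are illustrative, not from the question's schema, and the gram sizes would need tuning):

```
<!-- Illustrative only: index-time edge n-grams for prefix lookups -->
<fieldType name="string_edgengram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="10"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>
<field name="store_nbr_ngram" type="string_edgengram" indexed="true" stored="false"/>
<copyField source="store_nbr" dest="store_nbr_ngram"/>
```

With this in place, a plain term query like `store_nbr_ngram:280` matches every value starting with 280, with no wildcard expansion at query time.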
|
See [this](https://stackoverflow.com/a/11057309/3496666) answer. You have to add a copyField directive in your schema.xml in {solr\_home}/example/solr/**your\_collection**/conf/schema.xml, as shown in that answer. Copy all your fields to make them searchable with a wildcard query.
|
8,867,014
|
I am displaying and determining the selected language in my website by using URLs in this format :
```
/{languageCode}/Area/Controller/Action
```
And in my C# when I need to find the language Code I am using this syntax :
```
RouteData.Values["languageCode"]
```
However, when I need to call an action using JQuery, how do I determine the language code so that I can call the correct route i.e. `en-US/Area/Controller/Action` ? I don't know how to access this information in my client side Javascript. Can anybody help?
|
2012/01/15
|
[
"https://Stackoverflow.com/questions/8867014",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/517406/"
] |
Since your URL has the language code, how about using
```
window.location
```
<https://developer.mozilla.org/en/DOM/window.location>
And then extract the language from the url. Maybe something like:
```
var url = "example.com/en-us/Area/Controller/Action"; // or use window.location.pathname
var lang = url.split("/")[1];
```
No need to use jQuery! :)
|
You could emit it, server-side, e.g.:
```
var url = '@Url.Action("Action", routeValues)';
```
|
191,668
|
I use `AccuBattery` and `Kaspersky Battery Life` to measure energy consumption. `Kaspersky Battery Life` shows that all the tasks use minimal energy. `AccuBattery Pro` shows that the phone uses 5-10 mAh.
This is a fairly new, 3000 mAh battery. The phone is a `THL T9 Pro` with `Android 6.0` installed.
UPDATE: I deleted all the mentioned applications and installed a new one, `GSam Battery Monitor`, to get detailed data:
[](https://i.stack.imgur.com/qNPMhm.png)
There is data from the built-in usage chart too. I checked the applications under Battery optimization and found only one there, `Google Play Services`.
[](https://i.stack.imgur.com/WRX1Um.png)
|
2018/03/02
|
[
"https://android.stackexchange.com/questions/191668",
"https://android.stackexchange.com",
"https://android.stackexchange.com/users/71279/"
] |
How to nail **Phone Idle** battery drain is the question, but on an unrooted device it calls for some effort. Finding the culprit apps isn't as easy as it is on rooted devices, but it is possible using [adb](/questions/tagged/adb "show questions tagged 'adb'") commands to enable higher privileges 1
(At the time of writing, OP is working on getting his Linux machine to detect his device 2. Once done with that, they can follow this answer.)
The primary cause of idle drain is *truant* [wakelock](/questions/tagged/wakelock "show questions tagged 'wakelock'")(s), and this answer is about how to detect the apps that cause wakelocks that hurt (wakelocks aren't bad as such; they are needed, just not truant ones). It may also help to improve [doze-mode](/questions/tagged/doze-mode "show questions tagged 'doze-mode'") performance, but more about that later.
**All methods below are working on my unrooted device running Oreo 8.0.**
Tracking truant apps that cause battery draining wakelocks
==========================================================
* **Measuring Battery Drain and first level wakelock detection**
Battery usage statistics in Android, unfortunately, don't reveal much and are difficult to interpret (notwithstanding the improvements in Oreo). [GSam Battery Monitor](https://play.google.com/store/apps/details?id=com.gsamlabs.bbm&hl=en) is arguably the best for stock devices. One needs to enable *enhanced statistics* (Menu → more → Enable more stats) and then run
`adb -d shell pm grant com.gsamlabs.bbm android.permission.BATTERY_STATS` (**must read 3**)
For the PRO version, change 'com.gsamlabs.bbm' to 'com.gsamlabs.bbm.pro' (thanks [acejavelin](https://android.stackexchange.com/questions/191668/too-fast-draining-battery#comment248433_191668)).
The enhanced statistics give a better view of app usage and wakelocks, as shown. A long press on *held awake* (which in OP's case is 77%) shows additional information, as in the third screenshot.
[](https://i.stack.imgur.com/cvCUJm.png)
[](https://i.stack.imgur.com/ezSl0m.png)
[](https://i.stack.imgur.com/CfJQ1m.png)
* **Second level bad wakelock detection** (One can optionally start with this step)
Download [Wakelock Detector [LITE]](https://play.google.com/store/apps/details?id=com.uzumapps.wakelockdetector.noroot&hl=en) ([XDA thread](https://forum.xda-developers.com/showthread.php?t=2179651)) which works without root (see this [slideshare](https://docs.google.com/presentation/d/1r3VlhZIZVSufZlAeICJet6QBtyAF7z06_ysl1kUKME4/edit#slide=id.g123bc9f140_169_7) for details). Two ways to run without root
* As a Chrome extension or on Chromium. OP had issues with this
* A better method is adb again
`adb shell pm grant com.uzumapps.wakelockdetector.noroot android.permission.BATTERY_STATS`
How to use (from Play Store description)
> * Charge your phone above 90% and unplug the cable (or just reboot the phone)
> * Give it time (1-2 hours) to accumulate some wakelock usage statistics
> * Open Wakelock Detector
> * Check the apps at the top; if they show very long wakelock usage time, then you have found the cause of your battery drain!
While 2 hours is enough to gather information about the top culprits, a longer duration obviously yields more data. During data collection, don't actively use the device; let it sit as it normally would (with data or Wi-Fi connected, as per your normal usage). The screenshots below are from my device (under normal usage, not a test).
Left to right, they show Screen Wakelock, CPU Wakelock and Wakeup triggers. Check the top contributors to understand what's draining your battery.
[](https://i.stack.imgur.com/GbEjcm.png)
[](https://i.stack.imgur.com/nJRXQm.png)
[](https://i.stack.imgur.com/H3MYTm.png)
Eliminating the bad apps or controlling them
============================================
Once you have identified the culprits, you have three choices
* Uninstall them
* Replace them with a comparable feature app with less power consumption (assumption being they are better designed and wakelocks don't cause havoc). See [Is there a searchable app catalog that rank applications by power and network bandwith usage?](https://android.stackexchange.com/q/191108/131553)
* If you don't want to uninstall because you need the app, then [greenify](/questions/tagged/greenify "show questions tagged 'greenify'") them!
Taming wakelocks and improving Doze
===================================
[Greenify](https://play.google.com/store/apps/details?id=com.oasisfeng.greenify&hl=en) is a fantastic app but very powerful so needs to be used carefully. Read the [XDA thread](https://forum.xda-developers.com/apps/greenify) and [Greenify tag wiki](https://android.stackexchange.com/tags/greenify/info) for help.
I will limit this to using adb to unleash a fair part of its power to help rein in wakelocks and enhance Doze performance.
A word about Doze, which was introduced in Marshmallow. Though it has evolved for the better, it has some drawbacks from a battery-saving point of view:
* It takes time to kick in, during which apps are active and cause drain (even though the screen is off)
* Doze mode is interrupted when you move the device, for example while you are walking, causing battery drain. Doze kicks in again, after a wait period, once you are stationary
Greenify tackles these problems with *Aggressive Doze* and *Doze on the Go* (there are other apps that do this too, like [ForceDoze](https://play.google.com/store/apps/details?id=com.suyashsrijan.forcedoze&hl=en), but Greenify manages both wakelocks and Doze).
[Instructions](https://greenify.uservoice.com/knowledgebase/articles/749142-how-to-grant-permissions-required-by-some-features) for using adb
For different features, you need to run adb commands to grant the corresponding permission:
* Accessibility service run-on-demand:
`adb -d shell pm grant com.oasisfeng.greenify android.permission.WRITE_SECURE_SETTINGS`
* Aggressive Doze on Android 7.0+ (non-root):
`adb -d shell pm grant com.oasisfeng.greenify android.permission.WRITE_SECURE_SETTINGS`
* Doze on the Go:
`adb -d shell pm grant com.oasisfeng.greenify android.permission.DUMP`
* Aggressive Doze (on device/ROM with Doze disabled):
`adb -d shell pm grant com.oasisfeng.greenify android.permission.DUMP`
* Wake-up Tracker:
`adb -d shell pm grant com.oasisfeng.greenify android.permission.READ_LOGS`
* Wake-up Cut-off: (Android 4.4~5.x):
`adb -d shell pm grant com.oasisfeng.greenify android.permission.READ_LOGS`
`adb -d shell pm grant com.oasisfeng.greenify android.permission.WRITE_SECURE_SETTINGS`
Background-free enforcement on Android 8+ (non-root):
```
adb -d shell pm grant com.oasisfeng.greenify android.permission.READ_APP_OPS_STATS
```
I will restrict this to snapshots of the settings from my device, to help you set it up faster after running the adb commands above. I have the pro version, so ignore the donation settings.
[](https://i.stack.imgur.com/0h30Sm.png)
[](https://i.stack.imgur.com/KAVxdm.png)
With those settings, even when the device is running, you will see a hibernation alert in your status bar with the app icon and in your notification panel. Clicking on that will force close and hibernate the app
[](https://i.stack.imgur.com/CjZo3.png)
You can also hibernate errant apps from Wakelock Detector by long pressing on the app
[](https://i.stack.imgur.com/6F63T.png)
**Caution:** Be **very** careful with what you want to hibernate. Simple rule: don't hibernate apps that are critical to you. Hibernate errant apps that are not critical.
**Edit**
[BetterBatteryStats](https://play.google.com/store/apps/details?id=com.asksven.betterbatterystats&hl=en) ([XDA thread](https://forum.xda-developers.com/showthread.php?t=1179809)) is a very powerful tool which was recently (end of Feb '18) updated to work with Oreo, and sweeter still, escalated privileges using adb are possible
`adb -d shell pm grant com.asksven.betterbatterystats android.permission.BATTERY_STATS`
`adb -d shell pm grant com.asksven.betterbatterystats android.permission.DUMP`
On Lollipop and later, additionally run:
`adb -d shell pm grant com.asksven.betterbatterystats android.permission.PACKAGE_USAGE_STATS`
[](https://i.stack.imgur.com/gnaalm.png)[](https://i.stack.imgur.com/imSBRm.png)
Happy Wakelock hunting!
* **I've done everything you suggested but it didn't help**
It's likely that a system app is causing the wakelocks, for which there isn't much you can do on an unrooted device 4
---
* 1 [Is there a minimal installation of ADB?](https://android.stackexchange.com/q/42474/131553) and for the latest version refer to Izzy's [awesome repo](https://android.izzysoft.de/downloads). Also see this [XDA guide](https://www.xda-developers.com/install-adb-windows-macos-linux/)
* 2 [How do I get my device detected by ADB on Linux?](https://android.stackexchange.com/q/144966/131553)
* **must read 3**: For all adb permissions to stick, force stop app to let the granted permission take effect. You can either do it in system "Settings → Apps → App name → Force stop", or execute this command: `adb -d shell am force-stop com.<package name of app>`
* 4
[How to deal with (orphaned) WakeLocks?](https://android.stackexchange.com/q/34969/131553)
|
If you want to just use adb to tune Doze without extra apps, you might be interested in <https://github.com/easz/doze-tweak>.
And if you don't want to install extra apps at all, you can profile your battery with `adb bugreport` and analyze it with Battery Historian (e.g. <https://bathist.ef.lc/>). After identifying bad apps, you can disable or restrict them.
|
5,248,993
|
I observed the following and would be thankful for an explanation.
```
$amount = 4.56;
echo ($amount * 100) % 5;
```
outputs : 0
However,
```
$amount = 456;
echo $amount % 5;
```
outputs : 1
I tried this code on two separate PHP installations, with the same result. Thanks for your help!
|
2011/03/09
|
[
"https://Stackoverflow.com/questions/5248993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/449460/"
] |
I *strongly* suspect this is because 4.56 can't be exactly represented as a binary floating point number, so a value very close to it is used instead. When multiplied by 100, that comes to 455.999(something), and then the modulo operator truncates down to 455 before performing the operation.
I don't know the exact details of PHP floating point numbers, but the closest IEEE-754 double to 4.56 is 4.55999999999999960920149533194489777088165283203125.
So here's something to try:
```
$amount = 455.999999999;
echo $amount % 5;
```
I strongly suspect that will print 0 too. From [some PHP arithmetic documentation](http://php.net/manual/en/language.operators.arithmetic.php):
> Operands of modulus are converted to integers (by stripping the decimal part) before processing.
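For what it's worth, the same IEEE-754 behavior can be reproduced in any language; here is a quick sketch in Python (used purely for illustration, since the float representation is identical):

```
product = 4.56 * 100      # the closest double to 4.56, times 100,
print(product)            # is slightly below 456: 455.99999999999994
print(int(product) % 5)   # 0 -- integer conversion truncates to 455 first
print(round(product) % 5) # 1 -- rounding first gives the expected result
```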
|
Use `fmod` to avoid this problem.
|
19,621,383
|
I have a list whose contents show up just fine in my dataGrid with this code:
```
dataGridView1.DataSource = lstExample;
```
This tells me my List is fine, and when I view the dataGrid it has all the data I need. But when I try to output the same List to a text file with this code:
```
string output = @"C:\output.txt";
File.WriteAllLines(output, lstExample);
```
I get this error:
```
Argument 2: cannot convert from 'System.Collections.Generic.List<AnonymousType#1>' to 'System.Collections.Generic.IEnumerable<string>'
```
What do I need to do to fix this?
|
2013/10/27
|
[
"https://Stackoverflow.com/questions/19621383",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/867420/"
] |
In "classic" CUDA compilation you *must* define all code and symbols (textures, constant memory, device functions) and any host API calls which access them (including kernel launches, binding to textures, copying to symbols) within the *same translation unit*. This means, effectively, in the same file (or via multiple include statements within the same file). This is because "classic" CUDA compilation doesn't include a device code linker.
Since CUDA 5 was released, there is the possibility of using separate compilation mode and linking different device code objects into a single fatbinary payload on architectures which support it. In that case, you need to declare any \_\_constant\_\_ variables using the extern keyword and *define* the symbol exactly once.
If you can't use separate compilation, then the usual workaround is to define the \_\_constant\_\_ symbol in the same .cu file as your kernel, and include a small host wrapper function which just calls `cudaMemcpyToSymbol` to set the \_\_constant\_\_ symbol in question. You would probably do the same with kernel calls and texture operations.
|
Below is a "minimum-sized" example showing the use of `__constant__` symbols. You do not need to pass any pointer to the `__global__` function.
```
#include <cuda.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <conio.h>   /* Windows-only, for getch() */

__constant__ float test_const;

__global__ void test_kernel(float* d_test_array) {
    /* every thread reads the same __constant__ value */
    d_test_array[threadIdx.x] = test_const;
}

int main(int argc, char **argv) {
    float test = 3.f;
    int N = 16;
    float* test_array = (float*)malloc(N*sizeof(float));
    float* d_test_array;
    cudaMalloc((void**)&d_test_array, N*sizeof(float));

    /* set the __constant__ symbol from the host; no pointer is
       passed to the kernel */
    cudaMemcpyToSymbol(test_const, &test, sizeof(float));
    test_kernel<<<1,N>>>(d_test_array);

    cudaMemcpy(test_array, d_test_array, N*sizeof(float), cudaMemcpyDeviceToHost);
    for (int i=0; i<N; i++) printf("%i %f\n", i, test_array[i]);

    cudaFree(d_test_array);
    free(test_array);
    getch();
    return 0;
}
```
|
62,073,660
|
I initialized git and I did `git push -u origin master` but when I'm trying to push files to my github repository I get these logs in my terminal
```
Enumerating objects: 118, done.
Counting objects: 100% (118/118), done.
Delta compression using up to 4 threads
Compressing objects: 100% (118/118), done.
Writing objects: 100% (118/118), 2.78 MiB | 2.55 MiB/s, done.
Total 118 (delta 0), reused 0 (delta 0), pack-reused 0
error: RPC failed; curl 56 OpenSSL SSL_read: Connection was reset, errno 10054
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
```
It turns out it hasn't pushed anything to the repository, and my repo is still empty.
How can I solve this error and push to my repo?
|
2020/05/28
|
[
"https://Stackoverflow.com/questions/62073660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13628101/"
] |
I solved the error by rebuilding my project, running `git init` and the other steps again, and finally it worked.
|
Hello, I am Chetan, a student from Pune, India. In my experience this error occurs because of an internet connection issue: your network may be slow or unstable. You can fix this error by reconnecting to your network or upgrading your internet speed, then pushing your code again.
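If the connection reset keeps happening on large pushes, a commonly suggested tweak (not mentioned in the answers above, so treat it as an assumption about your setup) is to raise git's HTTP post buffer before retrying the push:

```shell
# Raise the buffer git uses for HTTP(S) pushes to ~500 MB.
# This only helps when the reset is triggered by sending a
# large packfile over an unstable connection.
git config --global http.postBuffer 524288000
# Verify the setting took effect:
git config --global --get http.postBuffer
```

After this, retry `git push -u origin master`.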
|
12,433,300
|
I was following this example <http://cubiq.org/create-fixed-size-thumbnails-with-imagemagick>, and it's exactly what I want to do with the image, except for the background leftovers (i.e. the white borders). Is there a way to do this, and possibly crop the white background out? Is there another way to do this? The resize needs to be proportional, so I don't just want to set a width or height limit, but proportionally resize the image.
|
2012/09/14
|
[
"https://Stackoverflow.com/questions/12433300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486863/"
] |
The example you link to uses this command:
```
mogrify \
-resize 80x80 \
-background white \
-gravity center \
-extent 80x80 \
-format jpg \
-quality 75 \
-path thumbs \
*.jpg
```
First, `mogrify` is a bit dangerous. It manipulates your originals in place, and it overwrites the originals. If something goes wrong you have lost your originals and are stuck with the gone-wrong results. In your case, however, the `-path thumbs` alleviates this danger, because it makes sure the results will be written to the subdirectory *thumbs*.
Another ImageMagick command, `convert`, can keep your originals and do the same manipulation as `mogrify`:
```
convert \
input.jpg \
-resize 80x80 \
-background white \
-gravity center \
-extent 80x80 \
-quality 75 \
thumbs/output.jpg
```
If you want the **same result, but just not the white canvas extensions** (originally added to make the result a square 80x80 image), just leave out the `-extent 80x80` parameter (the `-background white` and `-gravity center` become superfluous too):
```
convert \
input.jpg \
-resize 80x80 \
-quality 75 \
thumbs/output.jpg
```
or
```
mogrify \
-resize 80x80 \
-format jpg \
-quality 75 \
-path thumbs \
*.jpg
```
|
I know this is an old thread, but by using the -write flag with the -set flag, one can write to files in the same directory without overwriting the original files:
```
mogrify -resize 80x80 \
-set filename:name "%t_small.%e" \
-write "%[filename:name]" \
*.jpg
```
As noted at <http://imagemagick.org/script/escape.php>, %t is the filename without extension and %e is the extension. So the output of image.jpg would be a thumbnail image\_small.jpg.
|
61,199
|
I can imagine that this is true, but is it actually legally spelled out and motivated? Or is it just what tends to typically happen, for "other reasons"?
I've never been married and thus not divorced, so I'm just going by what I've perceived as well as a text I just read which was talking about the economics of moving between houses, where it's casually mentioned that the ex-husband "just gets to keep a mattress and the TV".
Is this actually a legal thing? Something which is in a legally binding "marriage contract" (I don't know if that even exists)? If so, what's the reason for this?
|
2021/02/15
|
[
"https://law.stackexchange.com/questions/61199",
"https://law.stackexchange.com",
"https://law.stackexchange.com/users/36767/"
] |
[united-states](/questions/tagged/united-states "show questions tagged 'united-states'")
In the United States, divorce is a matter of state law, and each of the 50 states has slightly different laws. But in general, it is not true as a matter of law that divorced women are awarded everything but "a mattress and a TV".
A small number of states, of which the largest are California and Texas, are [community property states](https://www.divorcenet.com/states/nationwide/property_division_by_state). In such states, all property acquired during the marriage is generally divided equally according to the value of the assets.
The majority of states, however, follow what is called "equitable distribution". In such states, assuming the divorce goes before a judge (most divorces don't), the judge must determine a fair distribution of the assets acquired during the marriage. As an example, [in the state of Illinois,](https://www.divorcenet.com/resources/divorce/dividing-property-in-illinois.html) this division should be based on factors such as:
>
> * the effects of any prenuptial agreements
> * the length of the marriage
> * each spouse's age, health, and station in life
> * whether a spouse is receiving spousal maintenance (alimony)
> * each spouse's occupation, vocational skills, and employability
> * the value of property assigned to each spouse
> * each spouse's debts and financial needs
> * each spouse's opportunity for future acquisition of assets and income
> * either spouse’s obligations from a prior marriage (such as child support for other children),
> * contributions to the acquisition, preservation, or increased value of marital property, including contributions as a homemaker
> * contributions to any decrease in value or waste of marital or separate property
> * the economic circumstances of each spouse
> * custodial arrangements for any children of the marriage
> * the desirability of awarding the family home, or the right to live in it for a reasonable period of time, to the party who has physical custody of children the majority of the time, and
> * any tax consequences of the property division.
>
>
>
(The exact legal phrasing of the statutes can be found in [750 ILCS 5](https://law.justia.com/codes/illinois/2019/chapter-750/act-750-ilcs-5/part-v/) if you're curious.)
In practice, this may result in a partner who has stayed home with the kids for 10 years getting a larger share of the shared property; such a person may have difficulty re-entering the workforce because of their lack of recent work experience. Similarly, if it is desirable to have the couple's children remain in the family home, and one partner will be receiving primary custody (perhaps because they have spent many years at home bonding with the children), then the family home may be awarded to that partner. In such circumstances, the judge may also award the other partner a larger portion of the assets than they would otherwise receive, to compensate for the loss of equity.
In practice, given gender norms within Western societies, it is more likely that a woman will stay home with the children, have worse economic prospects after divorce, and be awarded primary custody of any children. An equitable distribution would therefore favor the woman under the above criteria. But the law itself does not discriminate by gender. If a heterosexual couple divorced where the woman had been the primary wage-earner and a man was the homemaker, the distribution of assets might well favor the man.
|
>
> I can imagine that this is true, but is it actually legally spelled
> out and motivated? Or is it just what tends to typically happen, for
> "other reasons"? . . . Is this actually a legal thing? Something which is in a
> legally binding "marriage contract" (I don't know if that even
> exists)? If so, what's the reason for this?
>
>
>
It isn't really true and isn't particularly typical either. The perception is mostly "mood affiliation" (i.e. a tendency to interpret anecdotal evidence in a manner that confirms your inclinations about how the world works before seeing any evidence). But there are some circumstances that do tend to cause very unequal property divisions to arise and when that happens more of the visible tangible property like real estate, tends to end up owned by a wife more often than a husband, on average.
Many countries with civil law systems, such as Spain and France, have a "community property" system also shared by some U.S. states, in which a husband and wife are equal present co-owners of property acquired during the marriage by means other than gift or inheritance, without regard to title. There are numerous variations on this theme due to subtle but important differences in how appreciation and depreciation in separate property is handled and how separate property that is encumbered by debt is handled. The parties can trade their interests in marital property (or separate property) with each other upon divorce, however, to completely split up the divorced couple economically.
Countries with a common law history of marital property typically use equitable division of marital property upon divorce, a system that the answer from @MichaelSeifert explains well.
Overall divorce settlements in the U.S. and Europe are usually close to equal in economic value, after adjusting for alimony awards that are intended to compensate for economic specialization and reliance interests in a marriage, especially a longer marriage or one with young children.
But there are a variety of reasons for unequal property divisions. The three of the most common ones (not necessarily in order of frequency) are pre-nuptial agreements, lumpy assets, and lump sum alimony considerations. Often these arise by mutual agreement in lieu of default rules of law, rather than by court order.
**Unequal Property Divisions Due To Prenuptial Agreements**
It is also possible in most circumstances to enter into a marital agreement, especially a pre-nuptial agreement, to modify the default rights of a spouse upon divorce and to inheritance. These are most commonly entered into either between spouses later in life, often in second or later marriages whose prior spouses have died, typically to maintain separate financial existences that preserve the status quo for their heirs, or in situations where one spouse is very affluent and the other is not.
In the latter case, a pre-nuptial agreement typically provides that upon divorce the "poor spouse" (both men and women) receives a property division that is bigger than what would be received following a divorce to someone of comparable means to the "poor spouse" but much smaller than what would be possible following a marriage between the "poor spouse" and the "rich spouse" in the absence of a pre-nuptial agreement.
Sometimes the "rich spouse" is not actually currently "rich" but is likely to receive a large inheritance in the future in a jurisdiction that has equitable division or has a career that was established prior to the marriage but is about to "pop" (e.g. a doctor marrying just as she finishes her residency, a lawyer marrying just as he finishes a U.S. Supreme Court clerkship, a baseball player just transferring to the major leagues after long years in the minor leagues, an actor just cast for a first big movie, etc.)
**Unequal Property Divisions Because Assets Are Lumpy**
The first is that typical households own property that is "lumpy" with individual assets are not prone to being divided equally. Three particularly "lumpy" assets are a home, a small business, and a defined benefit pension plan (which can be split but can be expensive to divide).
If the house is to continue to be used as a home by one of the spouses, which can be desirable because it can provide greater stability to the couple's children and continuity in schools and neighborhood friendships for them, it usually makes sense for one spouse or the other to get the house.
If there is another "lumpy" asset associated with the husband's livelihood in a couple where the husband has the higher earning employment and the wife has compromised on her career to allow her to focus more on raising children, which remains more common than a desire to avoid traditional stereotypes might suggest, a common compromise is to award the residence to the wife and to award the small business or the defined benefit pension to the husband. If this still results in an inequality of values one way or the other, it is common to have the spouse receiving a disproportionate share of the assets make a significant property settlement payment (basically in the form of a promissory note owed to the spouse receiving the smaller share) over a manageable period of time to balance out the division. But, assets like a property settlement payment or a defined benefit pension plan are often invisible to an outside observer.
Suppose that both spouses are young and have had only a five-year marriage. One is a postal worker and the other is a school teacher, both of whom have reliable salaries and secure employment, who have benefitted from rising real estate prices so they have substantial equity in their home, but really own no other assets to divide. Giving one spouse the house, and having the spouse who receives the house buy out the spouse who doesn't receive the house in a property settlement payment over a period of years, can be a workable solution to equalize the divorce settlement and subsidize the rent payments of the spouse who doesn't get the house. The teacher may find it more desirable to continue to live in a house near the teacher's work, and the couple may find it more desirable for that house to remain the home of their elementary-school-aged children (they had kids before they married), so the children keep continuity in their lives.
**Unequal Property Divisions As Lump Sum Alimony**
The second situation where unequal property divisions are common are where there is an arbitrage between alimony and property division. The starting point for a divorce settlement is typically an equal property division and alimony payments from a higher earning spouse to a lower earning spouse for a period of time that reflects their relative incomes, the length of the marriage, and burdens associated with the post-divorce parenting realities that are agreed to by the parties. Not infrequently, alimony is for a period of time calculated to facilitate a spouse who compromised on a career time to obtain education or job skills or work experience to allow that spouse to "rehabilitate" occupationally.
The problem with alimony awards, however, is that they represent an ongoing drain on the spouse paying them, which there is a risk of becoming particularly burdensome if the income of the spouse paying alimony declines, and there is a risk for the spouse receiving alimony that payments will be made late or not at all interrupting the finances of the spouse receiving them. But collecting alimony through litigation can be expensive (legal fees in a case like that are often a 50% of the amount recovered contingency fee), slow, and uncertain, particularly in cases where the paying spouse is self-employed or has irregular employment or a highly variable income (e.g. a spouse whose is paid primarily on a commission basis).
To reduce the risk for both sides, in marriages where this is a risk (you don't see this often in divorces where the alimony paying spouse is a tenured professor, or a salaried civil servant with great job security), it isn't uncommon for an alimony award to be greatly reduced or eliminated, in exchange for a disproportionate division of marital property that reflects the present value of the future alimony payments that have been foregone by the spouse who would otherwise receive them.
For example, suppose that husband works as a realtor who has a high average income, but receives only three or four payouts a year that vary greatly from year to year, while wife works as a teacher's aide in a neighborhood elementary school making far less on average, but with a steady paycheck every month. Under local law, husband would most likely owe $1,500 a month to wife as alimony for ten years. The couple also co-own a residence, and the husband owns a hunting cabin in the woods that he inherited from his uncle with significant value that is separate property. It wouldn't be uncommon for the couple to reach an agreed property division settlement in which wife receives full ownership of the residence and the hunting cabin, which constitute almost all of the couple's marital property, in lieu of alimony. The husband doesn't have to worry about writing a monthly alimony check in slow years when he gets paid only two or three times, and the wife doesn't have to worry about late payments from the husband or having to sue him. Wife may end up selling the hunting cabin and/or the residence if she needs greater liquidity, but that is something that she can control. Husband may agree to continue to guarantee the mortgage on the residence until it can be refinanced when the wife qualifies for that kind of loan, or until the house is sold.
Of course, in situations like these, the public sees the unequal real estate division and not the foregone alimony rights.
**Theory v. Outcomes**
Western divorce laws are designed with a big picture goal of leaving both spouses into a state of stable, approximate economic parity, in which a divergence in the former spouse's economic circumstances more than several years down the road is due to circumstances individual to each spouse, rather than being a fallout of the divorce settlement.
When both spouses have full time careers, and the couple has no children, or when both spouses are retired, this is what tends to happen, somewhat diminished due to the loss of economies of scale like shared housing.
But when the couple has children, and one spouse has a primary career, while the other, usually the wife, has a secondary career that was put in second place in order to become the primary caretaker for the couple's children, these rules rarely have that effect. Instead, husbands who had a primary career in the couple tend to have stable or slightly improved economic prospects, without much regard to any remarriage, notwithstanding the fact that they often have child support and/or alimony obligations, while wives tend to see their economic prospects starkly diminished when they don't promptly remarry.
Formal legalistic efforts to evenly divide property, and customary awards of child support and alimony, are discounted further by the common reality of late payment, partial payment, or non-payment, sometimes for understandable economic reasons (lots of men who don't pay their divorce obligations are unemployed or underemployed with poor economic prospects) and sometimes for less noble ones (resentment and bitterness from the divorce, and knowing that the debts are hard to collect). Some of this is also a systemic consequence of the fact that divorces are more common when husbands are doing poorly economically, fixing settlements at a fairly low level, from which husbands sometimes subsequently rebound.
These post-divorce economic disparities tend to have less impact in Europe, where social safety nets tend to buffer weaknesses in the long term parity of divorce settlements, but can have considerably more impact in the United States where the social safety net is much thinner.
The other complicating factor is that there is a huge socio-economic class divide in divorce, in the United States, at least.
Marriage rates are fairly high by last half century standards, and divorce rates for college educated couples are as low as they have been since the late 1960s. These marriages usually involve couples who married later when they were more economically secure or had sure economic futures, and involve children usually born after the couple marries. Wives in these couples, despite having solid income earning capacities in absolute terms, also tend to be much more economically dependent upon their husbands to maintain their standard of living, because in highly educated professions and careers the income penalty for taking even a few years out of the work force and making a job a second priority for a few years is very high.
Also, for these couples, divorce decree terms are meaningful. Alimony and child support payments are collectable. There are meaningful net worths of couples to divide and the marriages ending in divorce also tend to have been longer on average. Child custody decrees are also meaningfully enforceable because the parties are either sufficiently educated to somewhat competently represent themselves in court in these disputes or can afford to hire lawyers if child custody decrees are violated.
In contrast, couples where neither spouse has any college education face a very different path. They are more likely than not to have had children before marrying. They divorce at historically unprecedented rates, typically have shorter marriages than more educated couples, and tend to be younger when they first divorce.
Property division for these couples isn't very meaningful because neither spouse has much property of any significant value. Child support and alimony awards are much smaller, and when the husband is obligated to pay them, are prone to being interrupted, because the inflation-adjusted income of high-school-educated men has been largely stagnant for fifty years and because high-school-educated men have very high rates of unemployment and work-related disabilities. Often a husband's weak employment status or prospects is one factor that motivates couples to divorce.
And, while wives with only high school educations have much less earning power in absolute terms than those with college educations, there is typically little or no income penalty in the kinds of jobs they work at for taking some time off to focus on raising children or for temporarily prioritizing raising kids relative to their jobs. Also, since their families are typically much more economically struggling in the first place, they are more likely to have jobs that provide a significant share of the family's income at the time of divorce, and a smaller household can mean stretching that small paycheck less far.
In practice, for these couples, divorce decrees aren't very meaningful. Usually neither spouse has the education or inclination to represent themselves in court effectively, or the ability to hire a lawyer to enforce violations of child custody arrangements that have been decreed or to enforce unpaid child support, alimony or property settlement debts in an economically efficient manner. When courts intervene in these cases, it can be almost random, because it happens so sporadically.
|
51,179,069
|
I have prepared a tag input control in Vue with tag grouping. The template includes:
```
<script type="text/x-template" id="tem_vtags">
<div class="v-tags">
<ul>
<li v-for="(item, index) in model.items" :key="index" :data-group="getGroupName(item)">
 <div :data-value="item">
 <span v-if="typeof model.tagRenderer != 'function'">{{ item }}</span>
 <span v-if="typeof model.tagRenderer == 'function'" v-html="tagRender(item)"></span>
 </div>
 <div data-remove="" @click="remove(index)">x</div>
 </li>
</ul>
<textarea v-model="input" placeholder="type value and hit enter" @keydown="inputKeydown($event,input)"></textarea>
<button v-on:click="add(input)">Apply</button>
</div>
</script>
```
I have defined a component method called `.getGroupName()` which relies on another function called `.qualifier()` that can be set via props.
My problem: once I add any tags to the collection (`.items`), `.getGroupName()` seems to be called on every keydown as I type anything into the textarea. It looks like entering anything into the textarea causes the whole component to re-render?
**Do you know how to avoid this behavior?** I expect `.getGroupName` to be called only when a new tag is added.
Here's the full code:
<https://codepen.io/anon/pen/bKOJjo?editors=1011> (I have placed `debugger;` to catch when the runtime enters `.qualifier()`.)
Any help appreciated.
It Man
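One common way to avoid per-keystroke re-evaluation is to cache the group lookup so the (possibly expensive) qualifier runs at most once per item; in Vue you would typically get this for free by moving the grouping into a computed property, which is cached until its dependencies change. A framework-free sketch of the caching idea (not taken from the answers below; `makeGroupCache` is a hypothetical helper name):

```javascript
// Sketch: cache group names per item so repeated renders don't
// re-run the qualifier. In a Vue component you would put this
// mapping in a computed property keyed off `model.items`.
function makeGroupCache(qualifier) {
  const cache = new Map();
  return function getGroupName(item) {
    if (!cache.has(item)) cache.set(item, qualifier(item));
    return cache.get(item);
  };
}

// Usage: the qualifier runs once per distinct item
let calls = 0;
const getGroupName = makeGroupCache(item => { calls++; return item[0].toUpperCase(); });
console.log(getGroupName("apple")); // "A"
console.log(getGroupName("apple")); // "A" (cached, qualifier not re-run)
console.log(calls);                 // 1
```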
|
2018/07/04
|
[
"https://Stackoverflow.com/questions/51179069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6670846/"
] |
You can use `awk`:
```
awk '/I want following/{p=1;next}!/^X/{p=0;next}p{print NR}' file
```
Explanation in multiline version:
```
#!/usr/bin/awk
/I want following/{
# Just set a flag and move on with the next line
p=1
next
}
!/^X/ {
# On all other lines that doesn't start with a X
# reset the flag and continue to process the next line
p=0
next
}
p {
# If the flag p is set it must be a line with X+number.
# print the line number NR
print NR
}
```
|
The following may help you here.
```
awk '!/X[0-9]+/{flag=""} /I want following letters:/{flag=1} flag' Input_file
```
The above will also print the line containing `I want following letters:`; in case you don't want that, use the following.
```
awk '!/X[0-9]+/{flag=""} /I want following letters:/{flag=1;next} flag' Input_file
```
To add line numbers to the output, use the following.
```
awk '!/X[0-9]+/{flag=""} /I want following letters:/{flag=1;next} flag{print FNR}' Input_file
```
|
42,192,074
|
I am trying to get the textbox value with a button click (without form submission) and assign it to a PHP variable within the same PHP file. I tried AJAX, but I don't know where I am making a mistake. Sample code:
File name: trialTester.php
```
<?php
if(!empty($_POST))
echo "Hello ".$_POST["text"];
?>
<html>
<head>
<title> Transfer trial </title>
<link rel="stylesheet" href="http://code.jquery.com/ui/1.11.4/themes/smoothness/jquery-ui.css">
<script src="http://code.jquery.com/jquery-1.10.2.js"></script>
<script src="http://code.jquery.com/ui/1.11.4/jquery-ui.js"></script>
</head>
<body>
<input type="textbox" id="scatter" name="textScatter">
<button id="inlinesubmit_button" type="button">submit</button>
<script>
$(document).ready(function(){
function submitMe(selector)
{
$.ajax({
type: "POST",
url: "",
data: {text:$(selector).val()}
});
}
$('#inlinesubmit_button').click(function(){
submitMe('#scatter');
});
});
</script>
</body>
</html>
```
|
2017/02/12
|
[
"https://Stackoverflow.com/questions/42192074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2329961/"
] |
**You just need to take the sidebar out of the navbar, like below:**

```
<nav class="navbar navbar-default navbar-fixed-top" role="navigation" style="width: 100%;">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="index.html">SB Admin v2.0</a>
</div>
<!-- /.navbar-header -->
<ul class="nav navbar-top-links navbar-right">
<li class="dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#">
<i class="fa fa-envelope fa-fw"></i> <i class="fa fa-caret-down"></i>
</a>
<ul class="dropdown-menu dropdown-messages">
<li>
<a href="#">
<div>
<strong>John Smith</strong>
<span class="pull-right text-muted">
<em>Yesterday</em>
</span>
</div>
<div>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque eleifend...</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<strong>John Smith</strong>
<span class="pull-right text-muted">
<em>Yesterday</em>
</span>
</div>
<div>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque eleifend...</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<strong>John Smith</strong>
<span class="pull-right text-muted">
<em>Yesterday</em>
</span>
</div>
<div>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque eleifend...</div>
</a>
</li>
<li class="divider"></li>
<li>
<a class="text-center" href="#">
<strong>Read All Messages</strong>
<i class="fa fa-angle-right"></i>
</a>
</li>
</ul>
<!-- /.dropdown-messages -->
</li>
<!-- /.dropdown -->
<li class="dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#">
<i class="fa fa-tasks fa-fw"></i> <i class="fa fa-caret-down"></i>
</a>
<ul class="dropdown-menu dropdown-tasks">
<li>
<a href="#">
<div>
<p>
<strong>Task 1</strong>
<span class="pull-right text-muted">40% Complete</span>
</p>
<div class="progress progress-striped active">
<div class="progress-bar progress-bar-success" role="progressbar" aria-valuenow="40" aria-valuemin="0" aria-valuemax="100" style="width: 40%">
<span class="sr-only">40% Complete (success)</span>
</div>
</div>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<p>
<strong>Task 2</strong>
<span class="pull-right text-muted">20% Complete</span>
</p>
<div class="progress progress-striped active">
<div class="progress-bar progress-bar-info" role="progressbar" aria-valuenow="20" aria-valuemin="0" aria-valuemax="100" style="width: 20%">
<span class="sr-only">20% Complete</span>
</div>
</div>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<p>
<strong>Task 3</strong>
<span class="pull-right text-muted">60% Complete</span>
</p>
<div class="progress progress-striped active">
<div class="progress-bar progress-bar-warning" role="progressbar" aria-valuenow="60" aria-valuemin="0" aria-valuemax="100" style="width: 60%">
<span class="sr-only">60% Complete (warning)</span>
</div>
</div>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<p>
<strong>Task 4</strong>
<span class="pull-right text-muted">80% Complete</span>
</p>
<div class="progress progress-striped active">
<div class="progress-bar progress-bar-danger" role="progressbar" aria-valuenow="80" aria-valuemin="0" aria-valuemax="100" style="width: 80%">
<span class="sr-only">80% Complete (danger)</span>
</div>
</div>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a class="text-center" href="#">
<strong>See All Tasks</strong>
<i class="fa fa-angle-right"></i>
</a>
</li>
</ul>
<!-- /.dropdown-tasks -->
</li>
<!-- /.dropdown -->
<li class="dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#">
<i class="fa fa-bell fa-fw"></i> <i class="fa fa-caret-down"></i>
</a>
<ul class="dropdown-menu dropdown-alerts">
<li>
<a href="#">
<div>
<i class="fa fa-comment fa-fw"></i> New Comment
<span class="pull-right text-muted small">4 minutes ago</span>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<i class="fa fa-twitter fa-fw"></i> 3 New Followers
<span class="pull-right text-muted small">12 minutes ago</span>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<i class="fa fa-envelope fa-fw"></i> Message Sent
<span class="pull-right text-muted small">4 minutes ago</span>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<i class="fa fa-tasks fa-fw"></i> New Task
<span class="pull-right text-muted small">4 minutes ago</span>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a href="#">
<div>
<i class="fa fa-upload fa-fw"></i> Server Rebooted
<span class="pull-right text-muted small">4 minutes ago</span>
</div>
</a>
</li>
<li class="divider"></li>
<li>
<a class="text-center" href="#">
<strong>See All Alerts</strong>
<i class="fa fa-angle-right"></i>
</a>
</li>
</ul>
<!-- /.dropdown-alerts -->
</li>
<!-- /.dropdown -->
<li class="dropdown">
<a class="dropdown-toggle" data-toggle="dropdown" href="#">
<i class="fa fa-user fa-fw"></i> <i class="fa fa-caret-down"></i>
</a>
<ul class="dropdown-menu dropdown-user">
<li><a href="#"><i class="fa fa-user fa-fw"></i> User Profile</a>
</li>
<li><a href="#"><i class="fa fa-gear fa-fw"></i> Settings</a>
</li>
<li class="divider"></li>
<li><a href="login.html"><i class="fa fa-sign-out fa-fw"></i> Logout</a>
</li>
</ul>
<!-- /.dropdown-user -->
</li>
<!-- /.dropdown -->
</ul>
<!-- /.navbar-top-links -->
<!-- /.navbar-static-side -->
</nav>
<div class="navbar-default sidebar " role="navigation">
<div class="sidebar-nav navbar-collapse">
<ul class="nav in" id="side-menu">
<li class="sidebar-search">
<div class="input-group custom-search-form">
<input type="text" class="form-control" placeholder="Search...">
<span class="input-group-btn">
<button class="btn btn-default" type="button">
<i class="fa fa-search"></i>
</button>
</span>
</div>
<!-- /input-group -->
</li>
<li>
<a href="index.html" class="active"><i class="fa fa-dashboard fa-fw"></i> Dashboard</a>
</li>
<li>
<a href="#"><i class="fa fa-bar-chart-o fa-fw"></i> Charts<span class="fa arrow"></span></a>
<ul class="nav nav-second-level collapse">
<li>
<a href="flot.html">Flot Charts</a>
</li>
<li>
<a href="morris.html">Morris.js Charts</a>
</li>
</ul>
<!-- /.nav-second-level -->
</li>
<li>
<a href="tables.html"><i class="fa fa-table fa-fw"></i> Tables</a>
</li>
<li>
<a href="forms.html"><i class="fa fa-edit fa-fw"></i> Forms</a>
</li>
<li>
<a href="#"><i class="fa fa-wrench fa-fw"></i> UI Elements<span class="fa arrow"></span></a>
<ul class="nav nav-second-level collapse">
<li>
<a href="panels-wells.html">Panels and Wells</a>
</li>
<li>
<a href="buttons.html">Buttons</a>
</li>
<li>
<a href="notifications.html">Notifications</a>
</li>
<li>
<a href="typography.html">Typography</a>
</li>
<li>
<a href="icons.html"> Icons</a>
</li>
<li>
<a href="grid.html">Grid</a>
</li>
</ul>
<!-- /.nav-second-level -->
</li>
<li>
<a href="#"><i class="fa fa-sitemap fa-fw"></i> Multi-Level Dropdown<span class="fa arrow"></span></a>
<ul class="nav nav-second-level collapse">
<li>
<a href="#">Second Level Item</a>
</li>
<li>
<a href="#">Second Level Item</a>
</li>
<li>
<a href="#">Third Level <span class="fa arrow"></span></a>
<ul class="nav nav-third-level collapse">
<li>
<a href="#">Third Level Item</a>
</li>
<li>
<a href="#">Third Level Item</a>
</li>
<li>
<a href="#">Third Level Item</a>
</li>
<li>
<a href="#">Third Level Item</a>
</li>
</ul>
<!-- /.nav-third-level -->
</li>
</ul>
<!-- /.nav-second-level -->
</li>
<li>
<a href="#"><i class="fa fa-files-o fa-fw"></i> Sample Pages<span class="fa arrow"></span></a>
<ul class="nav nav-second-level collapse">
<li>
<a href="blank.html">Blank Page</a>
</li>
<li>
<a href="login.html">Login Page</a>
</li>
</ul>
<!-- /.nav-second-level -->
</li>
</ul>
</div>
<!-- /.sidebar-collapse -->
</div>
```
|
Add `position: relative` to the inline style of the new navbar:
```
<nav class="navbar navbar-default navbar-static-top navbar-fixed-top" role="navigation" style="margin-bottom: 0;position:relative">
```
This restores the scrolling behaviour and, as far as I can see, it won't interfere with the design.
|
11,378,004
|
That is, can you send
```
{
"registration_ids": ["whatever", ...],
"data": {
"foo": {
"bar": {
"baz": [42]
}
}
}
}
```
or is the "data" member of the GCM request restricted to one level of key-value pairs? I ask b/c that limitation is suggested by the wording in Google's doc[1], where it says "data" is:
>
> A JSON object whose fields represents the key-value pairs of the message's payload data. If present, the payload data it will be included in the Intent as application data, with the key being the extra's name. For instance, "data":{"score":"3x1"} would result in an intent extra named score whose value is the string 3x1 There is no limit on the number of key/value pairs, though there is a limit on the total size of the message. Optional.
>
>
>
[1] <http://developer.android.com/guide/google/gcm/gcm.html#request>
|
2012/07/07
|
[
"https://Stackoverflow.com/questions/11378004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1105015/"
] |
Just did a test myself and confirmed my conjecture.
I sent a GCM to myself with this payload:
```
{
"registration_ids": ["whatever", ...],
"data": {
"message": {
"bar": {
"baz": [42]
}
}
}
}
```
My client received it and parsed the 'message' intent extra as:
```
handleMessage - message={ "bar": { "baz": [42] } }
```
So you can indeed do further JSON parsing on the value of a data key.
|
Although it appears to work (see other answers and comments), without a clear statement from Google I would not recommend relying on it, as their documentation consistently refers to the top-level members of the JSON as "key-value pairs". The server-side helper jar they provide [1] also reinforces this idea, as it models the user data as a `Map<String, String>`. Their `Message.Builder.addData` method doesn't even support non-string values, so even though booleans, numbers, and null are representable in JSON, I'd be cautious using those, too.
If Google updates their backend code in a way that breaks this (arguably unsupported) usage, apps that relied on it would need an update to continue working. To be safe, I'm going to use a single key-value pair whose value is a JSON-stringified deep object [2]. My data isn't very big, and I can afford the JSON-inside-JSON overhead, but your mileage may vary. Also, one of my members is a variable-length list, and flattening those to key-value pairs is always ugly :)
[1] <http://developer.android.com/guide/google/gcm/server-javadoc/index.html> (The jar itself is only available from within the Android SDK in the gcm-server/dist directory, per <http://developer.android.com/guide/google/gcm/gs.html#server-app> )
[2] e.g. my whole payload will look something like this:
```
{
"registration_ids": ["whatever", ...],
"data": {
"player": "{\"score\": 1234, \"new_achievements\": [\"hot foot\", \"nimble\"]}"
}
}
```
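A minimal Python sketch of the round trip for this flattened payload (the `player` key and its contents are hypothetical, mirroring [2]):

```python
import json

# Hypothetical deep payload, mirroring the example in [2].
player = {"score": 1234, "new_achievements": ["hot foot", "nimble"]}

# Flatten to a single key-value pair whose value is a JSON string,
# so the "data" map stays strictly Map<String, String>.
payload = {
    "registration_ids": ["whatever"],
    "data": {"player": json.dumps(player)},
}

# Every value under "data" is now a plain string.
assert all(isinstance(v, str) for v in payload["data"].values())

# The receiving app undoes the flattening with one extra parse.
decoded = json.loads(payload["data"]["player"])
assert decoded == player
```

The cost is double-escaping inside the stringified value; the benefit is staying inside what the documented API clearly supports.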
|
47,323,579
|
can you please take a look at this code and let me know how I can add `.click()` to the `a` link with specific data attribute of `HD`?
```js
if ($(a).data("quality") == "HD") {
$(this).click();
}
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<ul class="stream">
<li><a data-quality="L">Low</a></li>
<li><a data-quality="M">Med</a></li>
<li><a data-quality="HD">HD</a></li>
</ul>
```
|
2017/11/16
|
[
"https://Stackoverflow.com/questions/47323579",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1106951/"
] |
Use an [Attribute Selector](https://api.jquery.com/attribute-equals-selector/)
```js
$("a[data-quality=HD]").click();
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<ul class="stream">
<li><a data-quality="L">Low</a></li>
<li><a data-quality="M">Med</a></li>
<li><a data-quality="HD">HD</a></li>
</ul>
```
|
you can make use of [Attribute Selectors](https://api.jquery.com/attribute-equals-selector):
```
$('a[data-quality="HD"]').click(function() {
//do something
});
```
|
3,697,329
|
Sorry if this is explicitly answered somewhere, but I'm a little confused by the boost documentation and articles I've read online.
I see that I can use the reset() function to release the memory within a shared\_ptr (assuming the reference count goes to zero), e.g.,
```
shared_ptr<int> x(new int(0));
x.reset(new int(1));
```
This, I believe would result in the creation of two integer objects, and by the end of these two lines the integer equaling zero would be deleted from memory.
But, what if I use the following block of code:
```
shared_ptr<int> x(new int(0));
x = shared_ptr<int>(new int(1));
```
Obviously, now \*x == 1 is true, but will the original integer object (equaling zero) be deleted from memory or have I leaked that memory?
It seems to me that this would be an issue of the assignment operator decreasing the shared\_ptr's reference count, but a glance at the source code doesn't seem to clear the question up for me. Hopefully someone more experienced or knowledgeable can help me out. Thanks in advance.
|
2010/09/13
|
[
"https://Stackoverflow.com/questions/3697329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/152737/"
] |
The documentation is fairly clear:
>
> `shared_ptr & operator=(shared_ptr const & r); // never throws`
>
>
> Effects: Equivalent to `shared_ptr(r).swap(*this)`.
>
>
>
So it just swaps ownership with the temporary object you create. The temporary then expires, decreasing the reference count. (And deallocating if zero.)
---
The purpose of these containers is to not leak memory. So no, you don't need to worry about leaking things unless you're trying to mess things up on purpose. (In other words, you probably don't need to doubt Boost knows what they're doing.)
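As a loose cross-language illustration (not the C++ in question, just the same reference-counting idea): in CPython, rebinding a name decrements the old object's reference count, and the object is finalized the moment that count reaches zero, with no leak and no manual delete:

```python
class Tracked:
    """Records the values of instances that have been finalized."""
    finalized = []

    def __init__(self, value):
        self.value = value

    def __del__(self):
        Tracked.finalized.append(self.value)

x = Tracked(0)  # analogous to: shared_ptr<int> x(new int(0));
x = Tracked(1)  # analogous to: x = shared_ptr<int>(new int(1));

# Rebinding dropped the last reference to Tracked(0); CPython's
# deterministic reference counting finalized it immediately.
assert Tracked.finalized == [0]
assert x.value == 1
```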
|
You have not leaked memory. The memory for the first int object will be deleted.
|
3,831,763
|
$\blacksquare$ **Problem:** Let $\mathbb{P}$ be the set of all prime numbers. Find all functions $f:\mathbb{P}\rightarrow\mathbb{P}$ such that:
$$f(p)^{f(q)}+q^p=f(q)^{f(p)}+p^q$$holds for all $p,q\in\mathbb{P}$.
---
Note:
-----
The immediate solution that comes to mind is $f(p) = p~\forall~p \in \mathbb{P}$. But we need to show that it's the **only solution**.
---
But I'm unable to find any elegant ways to find the solution. Any sort of help will be appreciated.
|
2020/09/18
|
[
"https://math.stackexchange.com/questions/3831763",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/-1/"
] |
I thought it would be helpful to readers to have a condensed solution whose structure is easy to see in one go. But everything in this answer is part of [Ralph Clausen's solution](https://math.stackexchange.com/a/3831777/16078).
First, we show that **$f$ is injective**. If $f(p)=f(q)$, then the functional equation implies
$$
q^p = \big( f(q)^{f(p)} + p^q \big) - f(p)^{f(q)} = p^q,
$$
which implies $p=q$ by unique factorization.
Next, we show that **if $p$ is an odd prime then $f(p)$ is also odd**. If $f(p)=2$, then $f(q)\ne2$ for any other odd prime $q$ by injectivity; but then looking at the functional equation modulo $2$ gives the contradiction
$$
f(q)^{f(p)} + p^q \equiv 0 \not\equiv f(p)^{f(q)} + q^p \pmod 2.
$$
A similar argument shows that **$f(2)=2$**: if not, then $f(2)$ is odd, but then for any odd prime $p$, we get the contradiction
$$
f(2)^{f(p)} + p^2 \equiv 0 \not\equiv f(p)^{f(2)} + 2^p \pmod 2.
$$
Finally, the functional equation with $2$ and any odd prime $q$ gives
$2^{f(q)} + q^2 = f(q)^2 + 2^q$,
or equivalently
$$
2^{f(q)} - f(q)^2 = 2^q - q^2.
$$
But **$\Phi(n) = 2^n - n^2$ is a strictly increasing function on integers $n\ge3$** (as can be seen by examining $\Phi(n+1)-\Phi(n)$), and therefore this last equation implies that $f(q)=q$ for any odd prime $q$.
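A quick numeric sanity check (not part of the proof) of the monotonicity of $\Phi$, and of the resulting forced solution $f(q)=q$ among small odd primes:

```python
def phi(n):
    """Phi(n) = 2**n - n**2; strictly increasing for integers n >= 3."""
    return 2**n - n**2

# Strict monotonicity on a sample range of integers >= 3.
assert all(phi(n + 1) > phi(n) for n in range(3, 60))

# Monotonic implies injective, so 2**f(q) - f(q)**2 == 2**q - q**2
# can only hold with f(q) == q among odd primes.
odd_primes = [3, 5, 7, 11, 13, 17, 19, 23]
for q in odd_primes:
    assert [p for p in odd_primes if phi(p) == phi(q)] == [q]
```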
|
$\DeclareMathOperator{\cD}{\mathcal{D}} \DeclareMathOperator{\bbZ}{\mathbb{Z}} \DeclareMathOperator{\bbP}{\mathbb{P}} \DeclareMathOperator{\bbN}{\mathbb{N}} \DeclareMathOperator{\bbO}{\mathbb{O}} \DeclareMathOperator{\bbE}{\mathbb{E}} \DeclareMathOperator{\bbR}{\mathbb{R}}$
$\color{Purple}{\textbf{Notation:}}$ the following notation is used:
* $\bbZ\_{k} :=$ The set of naturals $\geqslant k$.
* $\bbE :=$ The set of all even numbers.
* $\bbO :=$ the set of all odd numbers.
* $\bbP :=$ the set of all prime numbers.
* $\bbP\_{\bbO} := $ the set of all odd primes.
* $\cD(g) := $ the domain of the map $g$.
---
In order to solve the problem, we will use and prove some **Lemmas**.
$\bullet~$**Lemma(1):** *The map $f$ is not constant map.*
$\bullet~$**Proof.** Let's assume on the contrary we have that $f \equiv p' \in \bbP$. Thus, we have that for any $p\neq q \in \bbP$:
$$ p'^{p'} + q^p = p'^{p'} + p^q \implies q^p = p^q $$
Clearly a **contradiction** as $p \neq q \in \bbP$. Thus, we have proved our **Lemma**. $\qquad \qquad \blacksquare$
Thus, we have $f$ is a non-constant map.
Now, let's recall an interesting fact about $\bbP$: it can be partitioned into the sets $\{2 \}$ and $\bbP\_{\bbO}$. We consider this partition because it may prove helpful for an **even-odd** parity argument.
$\bullet~$**Lemma(2):** *There are no two $~p\_0, q\_0 \in \bbP\_{\bbO}$ such that $p\_0 \neq q\_0$ and $f(p\_0) = f(q\_0) = 2$.*
$\bullet~$**Proof.** On the contrary let's assume that there exists such a tuple $(p\_0, q\_0) \in \bbP\_{\bbO}^2 $. Thus we have that
$$ f(p\_0)^{f(q\_0)} + q\_0^{p\_0} = f(q\_0)^{f(p\_0)} + p\_0^{q\_0} $$
$$ \implies 2^2 + q\_0^{p\_0} = 2^2 + p\_0^{q\_0} \implies q\_0^{p\_0} = p\_0^{q\_0} \quad [\Rightarrow \Leftarrow] $$
Thus, no two $p\_0, q\_0 \in \bbP\_{\bbO}$ exist such that $f(p\_0) = f(q\_0) =2$. This completes the proof. $\qquad \blacksquare$
Thus, from **Lemma(2)**, there may exist at most one $p\_0 \in \bbP\_{\bbO}$ such that $f(p\_0) =2$. The next lemma will take care of that case.
$\bullet~$**Lemma(3):** *There is no such $p\_0 \in \bbP\_{\bbO}$ such that $f(p\_0) = 2$.*
$\bullet~$**Proof.** Assume such a $p\_0 \in \bbP\_{\bbO}$ exists. Consider the tuple $(p\_0, q)$ for any $q \neq p\_0 \in \bbP\_{\bbO}$. Then we have:
$$ f(p\_0)^{f(q)} + q^{p\_0} = f(q)^{f(p\_0)} + p\_0^q $$
$$ \implies 2^{f(q)} + q^{p\_0} = f(q)^2 + p\_0^q $$
Note that $f(q) \neq 2$ by **Lemma(2)**, so $f(q)$ is odd; hence the R.H.S $\in \bbE$ while the L.H.S $\in \bbO$, a **contradiction**. Thus proved! $\quad \blacksquare$
Now, **Lemma(3)** implies that $p \overset{f}{\not\mapsto} 2~$ for any $p \in \bbP\_{\bbO}$. Thus the only remaining possibility is $\bbP \ni 2 \overset{f}{\mapsto} 2 \in \bbP$; we'll show that this holds. But first, let's observe that **Lemma(1)** admits an extension, which we discuss in the next **lemma**.
$\bullet~$**Lemma(4):** *There exists no $f$ satisfying the functional equation such that for $p' \in \bbP$:
$$ f(p') = \begin{cases} q' & \text{for } p' = 2 \\ q\_0 & \text{for all } p' \in \bbP\_{\bbO}
\end{cases}$$
Where $q\_0 \in \bbP\_{\bbO}$ by* **Lemma(3)**.
$\bullet~$**Proof.** On the contrary, let's assume there is such a map $f$ satisfying the conditions in the **Lemma**. Take the tuple $(p\_0, p\_1) \in \bbP\_{\bbO}^2$ with $p\_0 \neq p\_1$. Then we have:
$$ f(p\_0)^{f(p\_1)} + p\_1^{p\_0} = f(p\_1)^{f(p\_0)} + p\_0^{p\_1} $$ $$ \implies q\_0^{q\_0} + p\_1^{p\_0} = q\_0^{q\_0} + p\_0^{p\_1} \implies p\_1^{p\_0} = p\_0^{p\_1} \quad [\Rightarrow \Leftarrow] $$
Thus, we have got a **contradiction**. Hence our **lemma** is proved. $\qquad \qquad \blacksquare$
Now, we have another small **lemma**, which will basically help us see the solution.
$\bullet~$**Lemma(5):** *For a map $f$ satisfying the functional equation, $f(2) = 2$.*
$\bullet~$**Proof.** From **Lemma(3)** we know that $f(p\_0) \neq 2$ for every $p\_0 \in \bbP\_{\bbO}$. Now consider the tuple $(p\_0, 2)$; we have:
$$ f(p\_0)^{f(2)} + 2^{p\_0} = f(2)^{f(p\_0)} + p\_0^2 $$
Now, note that, if $f(2) \in \bbO$, then R.H.S $\in \bbE$ and L.H.S $\in \bbO$. Thus, $f(2) \in \bbE$. And as we know that only $\bbP\ni 2\in \bbE$. Thus, $f(2) = 2$. This completes the proof. $\qquad \qquad \blacksquare$
Now, as we have $f(2) = 2$, let's consider some $p \in \bbP\_{\bbO}$ and consider the tuple $(p, 2)$. Then
by **Lemma(5)** we have that:
$$ f(p)^2 + 2^p = 2^{f(p)} + p^2 \implies q^2 + 2^p = 2^q + p^2 \quad [\text{Let } f(p)= q \in \bbP\_{\bbO}] $$
Let's consider the map $\Phi(x) = 2^x - x^2 ~\forall~x \in \bbR$. Then we have $\Phi|\_{\bbN\_{> 2}}(n) = 2^n - n^2$ for $n \in \bbN\_{> 2}$. Now we come to the last and conclusive **lemma** needed to solve the problem.
$\bullet~$**Lemma(6):** *The map $\Phi|\_{\bbN\_{>2}}$ is a strictly increasing function.*
$\bullet~$**Proof.** So, essentially we just need to show that $\Phi|\_{\bbN\_{> 2}}(n + 1) > \Phi|\_{\bbN\_{ > 2}}(n) ~\forall~n \in \bbN\_{> 2}$. Thus, we have that:
$$ \Phi|\_{\bbN\_{ > 2}}(n + 1) - \Phi|\_{\bbN\_{>2}}(n) = (2^{n + 1} - 2^{n} ) - \left( (n +1)^2 - n^2 \right)= 2^n -(2n +1) $$
Now, $\Phi|\_{\bbN\_{>2}}(n +1) >\Phi|\_{\bbN\_{>2}}(n) \iff 2^n > 2n + 1 ~\forall~n \in \bbN\_{>2}$, and proving $2^n > 2n +1$ is immediate by induction, so the proof is omitted. Thus we have $\Phi|\_{\bbN\_{> 2}}(n + 1) > \Phi|\_{\bbN\_{>2}}(n) \implies \Phi|\_{\bbN\_{>2}} \nearrow$. This completes the proof. $\qquad \qquad \blacksquare$
Now, since $\cD\left(\Phi|\_{\bbP\_{>2}}\right) = \bbP\_{\bbO} \subset \bbN\_{> 2} = \cD\left( \Phi|\_{\bbN\_{>2}}\right)$ and $\Phi|\_{\bbN\_{>2}}$ is strictly increasing, $\Phi|\_{\bbP\_{>2}} \nearrow$ as well. Thus, by **Lemma(6)**, $\Phi|\_{\bbP\_{>2}}$ is injective: if $p \neq q$ with $p, q \in \bbP\_{\bbO}$, then $\Phi|\_{\bbP\_{>2}}(p) \neq \Phi|\_{\bbP\_{>2}}(q)$.
But from the equation above we have $$ 2^p + q^2 = 2^q + p^2 \implies \Phi|\_{\bbP\_{>2}}(p) = \Phi|\_{\bbP\_{> 2}}(q), $$
so injectivity forces $$ p = q, \text{ and hence } f(p) = p \quad [\text{as } f(p) = q]. $$
As $p \in \bbP\_{\bbO}$ was arbitrary, we have that $f(p)=p~\forall~p\in \bbP\_{\bbO}$. Thus, we finally have that:
$$f(p)=p~\forall~p\in \bbP$$
|
2,376,282
|
I have heard that there is a function $f(x)$ whose Maclaurin expansion is identically zero but which is not itself identically zero.
In other words, there exists a function $f(x)$ that satisfies $f^{(n)}(0) = 0$ for every whole number $n$, and yet $f(x) \not\equiv 0$.
Could you show me one of such functions? (I've heard it exists but I've never seen one)
|
2017/07/30
|
[
"https://math.stackexchange.com/questions/2376282",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/468204/"
] |
If we take $f(x) = \frac{P(x)}{Q(x)}e^{-1/x^2}$ for $x \neq 0$, then we have (for $x \neq 0$) $$f'(x) = \frac{P'(x)Q(x) - P(x)Q'(x)}{Q(x)^2}e^{-1/x^2} + \frac{2P(x)}{x^3Q(x)}e^{-1/x^2}$$
which assures us that $f'(x)$ will be of the same form (namely, a rational function times $e^{-1/x^2}$). Meanwhile, we know that $f'(0) = \lim\_{x \to 0} \frac{P(x)}{xQ(x)}e^{-1/x^2},$ but since $e^{-1/x^2} \to 0$ faster than any power of $x$ as $x \to 0$, we must have $f'(0) = 0$ even though $xQ(x) \to 0$ as $x \to 0$.
It follows by induction that $f(x) = e^{-1/x^2}$ for $x \neq 0$, with $f(0) = 0$, satisfies $f^{(n)}(0) = 0$ for every $n$. Thus $f(x)$ cannot be recovered from its Maclaurin series, which is identically zero.
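A numerical sanity check (not a proof) of the key limit, namely that $e^{-1/x^2}$ decays near $0$ faster than any power of $x$:

```python
import math

def f(x):
    """e^(-1/x^2) for x != 0, extended by f(0) = 0."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# e^(-1/x^2) / x^n -> 0 as x -> 0 for every n: the exponential decay
# beats every power of x, which is what drives each difference quotient
# (and hence each derivative) at 0 to zero.
for n in range(1, 11):
    for x in (0.1, 0.05):
        assert f(x) / x**n < 1e-20
```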
|
Yes, you appear to be talking about *bump functions*.
Contrast the complex case, where differentiable (holomorphic) implies (complex) analytic. The bump function, on the other hand, does not equal its Taylor series at $0$ (its Maclaurin series), even though it is smooth; if it did, it would be identically zero.
|
3,289,430
|
Consider the natural numbers that are the sum of a perfect square plus the product of two consecutive natural numbers. For example, $97 = 5^{2} + 8\cdot 9$. What is the smallest multiple of 2019 that is not as described above?
Someone can help me? Thank you in advance.
|
2019/07/11
|
[
"https://math.stackexchange.com/questions/3289430",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/622509/"
] |
Starting from the [Stirling approximation](https://en.wikipedia.org/wiki/Stirling's_approximation)
$$
\ln n!\sim n\ln n-n+{\tfrac {1}{2}}\ln(2\pi n)+{\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots
$$
we get by simple manipulation of truncated Taylor series
\begin{align}
&\ln n!-n\ln n+n-\frac12\ln(\pi)\sim\frac12\ln(2n)+{\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots
\\
&=\frac16\ln\left(8n^3\exp\left({\frac {1}{2n}}-{\frac {1}{60n^{3}}}+{\frac {1}{210n^{5}}}-{\frac {1}{280n^{7}}}+\cdots\right)\right)
\\
&=\frac16\ln\left(8n^3 + 4n^2 + n + \frac1{30} - \frac{11}{240n} + \frac{79}{3360n^2} + \frac{3539}{201600n^3} - \frac{9511}{403200n^4} - \frac{10051}{716800n^5} + \cdots\right)
\\
&=\frac16\ln\left(8n^3 + 4n^2 + n + \frac1{30} - \frac1{\frac{240n}{11} + \frac{9480}{847} + \frac{919466}{65219n} + \frac{1455925}{5021863n^2} - \frac{639130140029}{92804028240n^3} + \cdots }\right)
\end{align}
Up to the last term, all other coefficients are the same as you got.
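As a sanity check (not part of the derivation), the series truncated after the $1/(12n)$ term can be compared against `math.lgamma`, since $\ln n! = \ln\Gamma(n+1)$:

```python
import math

def stirling_ln_factorial(n):
    """ln n! ~ n ln n - n + (1/2) ln(2*pi*n) + 1/(12n), series truncated."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n) + 1 / (12 * n)

for n in (5, 10, 50, 100):
    exact = math.lgamma(n + 1)  # lgamma(n + 1) == ln(n!)
    # First omitted term is -1/(360 n^3), so the error is O(1/n^3).
    assert abs(exact - stirling_ln_factorial(n)) < 1 / (200 * n**3)
```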
|
@luzl
Thanks for your help; I know this, but I need to find this difference in this way, by Taylor expansion, to find $T\_2$ such that I recover $k\_1, k\_2, T\_1$ as in this sequence; however, $T\_2$ does not come out with its value in the top sequence (for $n>1$; I used $n$ from 0 to 7):
\begin{eqnarray\*}
\frac{1}{n} \log[1 + n] -1 -\frac{1}{6} \log[ 8 (\frac{1}{n})^3 + 4 (\frac{1}{n})^2 + \frac{1}{n}+ \frac{1}{30} - \frac{1}{(\frac{240}{11} \frac{1}{n} + \frac{9480}{847}+\frac{919466}{65219\frac{1}{n}} +\frac{T\_2}{(\frac{1}{n})^2}))}] +\frac{1}{6} \log[8 (\frac{1}{n} + 1)^3 +4 (\frac{1}{n}+1)^2 +(\frac{1}{n} + 1) +\frac{1}{30} -\frac{1}{(\frac{240}{11} (\frac{1}{n} + 1) + \frac{9480}{847}+\frac{919466}{(65219(\frac{1}{n}+1)}+\frac{T\_2}{(\frac{1}{n}+1)^2})}]
\end{eqnarray\*}
|
32,361,050
|
Just like the questions states, is there a hard limit to either the number of files or characters that can go in a file input in Internet Explorer?
Here's the test code:
```
<html><input id="files" type="file" name="files[]" webkitdirectory directory multiple></html>
```
I've been able to reach a point where files are not accepted: 4,680 files, each with a 6-character name, for a total of 28,080 characters. Each file is 0 KB in size as well. Additionally, this is being tested on Windows, with the test directory being C:\tempFolder.
|
2015/09/02
|
[
"https://Stackoverflow.com/questions/32361050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/981120/"
] |
This solution is for Swift 1.2
```
for (index, number) in enumerate(person.phoneNumbers!.map { $0.value }) {
println("Phone \(index): \(number)")
}
```
And since force-unwrapping with `!` can crash when `phoneNumbers` is `nil`, I would go with the following version
```
if let numbers = person.phoneNumbers {
for (index, number) in enumerate(numbers.map { $0.value }) {
println("Phone \(index): \(number)")
}
}
```
Update
------
The following code tries to answer the question added in the comments below.
```
if let numbers = person.phoneNumbers {
let list = numbers.map { $0.value }
let json : [String:AnyObject] = ["phoneNumbers": list]
}
```
Update #2
---------
Please find below the second block of code, updated for Swift 2.0
```
if let numbers = person.phoneNumbers {
for (index, number) in (numbers.map { $0.value }).enumerate() {
print("Phone \(index): \(number)")
}
}
```
|
You can do it like this.
```
for number in person.phoneNumbers!.map( {$0.value} ) {
println(number)
}
```
|
33,512
|
Mostly sparked by debates regarding evolution, I have become curious what epistemological positions exist in orthodox schools of the major Islamic sects.
I understand the doctrinal priority of Quran and Sunnah, so from an Islamic point of view they take precedence over scientific ideas in case they clash.
My question is with regard to which specific methods are used to arrive at the conclusion that the central texts are divinely revealed. Usually talks about this are filled with words like "logical", "rational", or other terms of a similar nature. What I never saw is a description of what constitutes logical or rational methods, how they are derived, how they lead to acceptance of what Islam considers revelation, and how they disqualify the validity of other methods in case of conflicting results. Also, how do those same methods applied to the claims of other religions lead to rejection of those claims while leading to acceptance of an Islamic position?
I am interested in the epistemological methods that the orthodox schools of theology use to arrive at the conclusion that Quran and Sunnah are revelation, which seem to me to be necessarily a priori to the revealed texts, or in other words: they cannot come from commandments within the texts.
(I posted the same question with regard to Christianity (in particular, Catholicism) on [christianity.se](https://christianity.stackexchange.com/questions/50756/what-orthodox-positions-with-regard-to-epistemology-exist)).
|
2016/07/18
|
[
"https://islam.stackexchange.com/questions/33512",
"https://islam.stackexchange.com",
"https://islam.stackexchange.com/users/17702/"
] |
Finally, the answer I promised is ready; excuse the long delay. I had little free time to compose this and needed to brush up on some sources for greater accuracy and inclusiveness. Although not too unexpected, the outcome turned out rather like an introductory essay, so I suspect this might not make for an appropriate post for the main site, in which case I may propose it be posted in the site blog instead, perhaps after some modifications in structure and details. But I found it worthwhile to write this extensive answer, since it can serve to familiarize people with the intellectual background of some of my own occasional philosophical answers on this site (which are sometimes perceived as "eccentric" by some users), and also because this kind of question is common with anyone seeking a more objective approach to evaluating religions' truth claims, an attitude that can fortunately be richly addressed by the Islamic intellectual legacy, which sadly remains largely marginalized and inaccessible to this day because of the narrow methodological attitude that has reigned over mainstream scholasticism in the Muslim world.
Historically, this narrowness is rooted in a background of scholastic skepticism and pessimism towards intellectual approaches to Islamic study that had emerged among more “open-minded” (yet often equally as pious and faithful) Muslim thinkers due partly to their introduction to Greek philosophies as well as inspiration found in more esoteric aspects of Islamic scriptures.
The opposition by mainstream scholastics was directed against both the discursive method of Greek philosophers adopted by Muslim philosophers as well as the interpretative doctrines of the Sufis or Muslim mystics who attributed those doctrines to inner realization of transcendent truths to which the literal texts make only verbal references and remain veiled in literal readings.
[Ahmad ibn Hanbal](https://en.wikipedia.org/wiki/Ahmad_ibn_Hanbal) and [al-Ghazali](https://en.wikipedia.org/wiki/Al-Ghazali) have been the two most prominent figures in this skeptical camp, with whom I guess you're fairly familiar. Although some of the objections by such scholars to the practice of reason, philosophy and theoretical mysticism by Muslim philosophers and mystical interpreters were legitimate (though their own alternative views were not more coherent), their works were nonetheless influential in establishing [a later persisting disinterest in intellectual approaches to religion in the Sunni Arab world](https://en.wikipedia.org/wiki/Traditionalist_Theology_%28Islam%29).
This background of negative attitudes towards philosophical approaches to Islam has been further reinforced, over the last century in particular, by the spread of the Salafi movement in the Islamic world, a so-called "reform" movement which, in its quest for restoring "pure" Islam, advocates [an uncritical literalism and strongly opposes attempts at rational understanding of religion](https://en.wikipedia.org/wiki/Salafi_movement#Opposition_to_the_use_of_kalam). (What I wrote so far must be familiar to most fairly educated Sunni/Arab Muslims, but what follows is definitely news!)
However, this was not how things unfolded in the eastern part of the Islamic world, especially in Persia. Here, interest in various forms of philosophy and esoteric understanding of religion continued and flourished, with evolutionary outcomes that not only overcame the defects and criticisms concerning the earlier generation of philosophers and mystics, but were also capable of arriving at explanations of religious scriptures that are both profound and consistent with the more esoteric parts of scripture, while remaining faithful to the apparent meanings of the literal text.
For a more detailed picture of this cultural divergence and an intro to the latter, have a look at [this](http://www.muslimphilosophy.com/ip/rep/H006.htm) and [this](http://www.muslimphilosophy.com/ip/rep/H008.htm), two short essays from the Routledge Encyclopedia of Philosophy. For an extensive introduction, you will have to read works by scholars such as **Henry Corbin** and **Seyyed Hossein Nasr** on the history of Islamic philosophy.
My following perspective to your questions is informed by this latter legacy of Islamic intellectual heritage which today find recognition mainly in Iranian academic and seminary circles and has only recently attracted substantial interest from Western Academia.
---
To lay out my full answer, I have to reformulate your questions about a priori proofs for Islam into two broad sets. This gives us questions concerning:
1. proofs for validity of Islam’s *basic* doctrines; and
2. proofs for validity of Islam’s *particular* doctrines.
This is an important distinction, because basic doctrines of Islam such as God, soul, angels, supernatural worlds and afterlife are broadly shared by most major world religions (even though theologians of different religions, and even within a single religion, often disagree significantly on the details and extended definitions of these doctrines). So proofs for these basic doctrines work to give a general credence not just to Islam but, more or less, to several Abrahamic and to some extent even non-Abrahamic religions. As a side note on this observation, I find it worthwhile mentioning a relevant view held by adherents of the school of [Perennial philosophy](https://en.wikipedia.org/wiki/Perennial_philosophy) who, based on the concepts of Neo-Platonic philosophy, argue that these substantial commonalities between the major world religions are indicative of a universal transcendent truth and origin that underpins them all, and that differences among them result only from the temporal conditions in which the universal religion has been revealed.
This is an interesting view that happens to be, on the general level, in conformity with the Islamic belief in the common Divine origin of all Abrahamic religions, and a view that is important in the comparative study of religions, since it indicates that despite conflict over the more outward and temporal manifestations of the major world religions, they are not necessarily mutually exclusive on deeper levels, and that there are religions, or scholars within these religions, that endorse this view.
But as concerns Islam, you will find that in this shared domain, different brands of Muslim intellectuals have historically contributed to the development of a very rich and colorful legacy of explanatory thought and intellectual interpretation of Islamic basic doctrines, one that embodies a wide range of a priori proofs and arguments substantiating and elucidating the extensive nature of several of the aforementioned primary truth claims of religions. In the next part, after a broad introduction to the primary methodological approaches advocated by different brands of Muslim thinkers, I will introduce some of the most prominent of these thinkers and their intellectual achievements.
Methodology-based classification of Muslim thinkers
---------------------------------------------------
From the works of Muslim thinkers, different methods and approaches can be found for justifying or demonstrating the primary doctrines of religion. These thinkers can be mainly classified into three groups.
**Theologians** or practitioners of *Kalam*, who are divided into several schools, rely on **dialectic reasoning** to substantiate established religious doctrines or resolve controversial or ambiguous parts of the scripture.
The earliest prominent theological schools are three:
* Mu’tazilites, or rationalists;
* Ash’arites, who recognize reason but only to a very restricted degree;
* Imamis, the Shia school of theology, distinctive in many ways from both of the former Sunni theological attitudes, among other things for [holding intellect in a very high regard to the point of even attributing a sacred nature and origin to it](https://en.wikipedia.org/wiki/%27Aql).
Among the historical subjects of contention among these three schools (excluding the Shia–Sunni controversy over the Prophet’s succession, which is apparently more political but nonetheless has deep theological foundations and implications) have been the questions of human free will vs Divine predestination, the role of human reason in explaining religion, and the (un)createdness of the Quran.
Throughout history, theologians who recognized human reason, free or religion-aided, as a legitimate source for explaining religious doctrines have attempted to come up with rational arguments supporting religious beliefs. The most prominent proof adopted and presented by these thinkers is the [Teleological Argument](https://en.wikipedia.org/wiki/Teleological_argument) for God’s existence.
Muslim **Philosophers** have historically succeeded in formulating independent philosophical proofs for several religious doctrines as part of their general interest in Metaphysics. This group of thinkers distinguish themselves from Theologians by claiming that throughout their philosophical activity they are not biased by any presupposition or preconceived belief, religious or otherwise, but solely follow the direction in which reason and logical thinking alone take them, even though many prominent figures among them have been of the opinion that true belief and reason do not conflict and that reason in its full fruition confirms what has been revealed by religion. Muslim philosophers, to ensure objective ground for their philosophies, much like the modern analytic philosophers, proceed primarily from [self-evident principles](https://en.wikipedia.org/wiki/First_principle) and, secondarily, from intuitive and empirical premises, to derive new truth statements through deductive reasoning.
Finally, **Sufis or Muslim mystics** have their own way of “realizing” the truths claimed by religion, through spiritual exercise and self-purification. For them, the most intimate knowledge of the sacred truths is obtained not through the conceptual mind but through the heart, which is where extra-mental realities are realized through existential unity. Philosophers, in their eyes, can at best obtain a vague mental image of spiritual truths, while a Sufi bears the objects of truth in his heart. Moreover, since the emergence of Ibn Arabi, who is credited among Iranian scholars with the establishment of a discipline named "Theoretical Mysticism", masters of this approach have engaged in logical reasoning in order to make intuitively realized truths accessible to other thinkers on the conceptual level.
It is important to note that the interest of Muslim thinkers in both rational and intuitive modes of approaching religion has been generally inspired by the Islamic scriptures themselves. Statements from Quran and Hadith as to the importance of intellection in affirmation of religious truths, as well as verses that imply man’s ability to observe transcendent realities have been invoked by these two groups of Muslim scholars in support of their respective epistemological/methodological approaches to religion and their favored interpretations.
These three intellectual traditions within the Islamic world have gone through a steady process of expansion, criticism and evolution with thinkers commenting on or criticizing the methods and theories presented by other thinkers both within and outside their own traditions.
In the next part of my answer I try to provide a very broad summary of some of the prominent faces of philosophical and mystical thinking in the Islamic world and the specific demonstrations of religious doctrine that they have presented in their philosophies. My summary excludes theologians, for I guess they are better known to the mainstream, and also because the theories that come from the dialectical reasoning of theologians are often debatable and therefore indefinite in their conclusions (with the exception of the more philosophized Imami theology as formulated by [Khaji Nasir ud-Din al-Tusi](https://en.wikipedia.org/wiki/Nasir_al-Din_al-Tusi)), compared to those that come from the unbiased *a priori* approach of the philosophers and the synthetic (i.e. demonstrated intuitions) approach of the Sufi theorists.
Prominent figures with remarkable demonstrations of doctrines
-------------------------------------------------------------
Note that this list is a selection of the most original and influential figures in shaping of Islamic intellectual legacy from among [a few dozen influential Muslim philosophers and mystics](https://en.wikipedia.org/wiki/List_of_Muslim_philosophers) that have emerged throughout history.
[**Al-Farabi**](https://en.wikipedia.org/wiki/Al-Farabi), who is credited with the foundation of [the Islamic tradition of philosophy](https://en.wikipedia.org/wiki/Islamic_philosophy), not only attempted to reconcile the philosophies of Plato and Aristotle but also introduced his own innovations in logic and philosophy. Among his notable contributions is his treatment of Divine revelation and Prophethood as part of the discipline of Metaphysics. Farabi believed that religion and philosophy speak of identical truths, only in different language for different audiences.
In his philosophical approach to religion, Farabi equated the idea of [Active Intellect](https://en.wikipedia.org/wiki/Active_intellect) posited by the Greek philosophers as the source of philosophical enlightenment with the idea of Angel of Revelation in religion. He argued that Prophets are able to find enlightenment from God and the transcendent world due to their strong faculty of imagination which enables them to visit Gabriel in the [astral plane](https://en.wikipedia.org/wiki/Astral_plane) where they receive Divine knowledge therefrom in the form of imagery and metaphor. Farabi also discussed the existence of human immaterial soul (*nafs*) and its different faculties.
[**Avicenna**](https://en.wikipedia.org/wiki/Avicenna), an incredible polymath, expanded on the works of Farabi among his diverse contributions to science. In this field, he developed the science of the soul as formulated by his predecessor, and formulated the first [Ontological argument](https://en.wikipedia.org/wiki/Ontological_argument) for demonstrating the existence of God as "the Necessary Being."
[**Muh’y id-Din al-Arabi**](https://en.wikipedia.org/wiki/Ibn_Arabi#Al-Ins.C4.81n_al-k.C4.81mil_and_Ibn_al-Arabi) was by far the most prominent master of practical and theoretical mysticism, or Sufism, in the Islamic world. He is renowned for his account of the relation between God and creation on the basis of the theory of the Unity of Existence, as well as for his theory of the Perfect Human as the fullest manifestation of the Divine, corresponding to the primordial light of Prophet Muhammad.
The emergence of the 12th-century [**Suhrawardi**](https://en.wikipedia.org/wiki/Shahab_al-Din_Yahya_ibn_Habash_Suhrawardi), the founder of the so-called Philosophy of Illumination in the Islamic world, was the first major move towards convergence between the analytic, discursive method of the Muslim Peripatetic Philosophers on one hand and the intuitive, spiritual approach of the Mystics or Sufis on the other, even though he considered the spiritual realization of the Mystics superior to the conceptual comprehension of the philosophers, based on the classification of knowledge into “knowledge by presence” and “knowledge by acquisition” which has been the epistemological bedrock for all major Islamic philosophical schools. Suhrawardi also explained God and creation in terms of a very profound symbolism of light and darkness, which, along with his philosophical account of the astral plane and angelology, was another major development towards the convergence of philosophy and religious doctrine in the Islamic world. Suhrawardi is also known as the first philosopher to extensively quote the Quran and Hadiths as scriptural testimonies to philosophical theories that he argued through independent reasoning.
The last thinker who is probably the most noteworthy is [**Mulla Sadra**](http://plato.stanford.edu/entries/mulla-sadra/), the Persian Shia Metaphysician and Mystic of the 17th century, renowned for fulfilling the aspiration of Suhrawardi by achieving a grand synthesis and reconciliation of the major thinking traditions in Islamic and world history (i.e. literalist, theological, Peripatetic, Illuminationist and Mystic/Sufi) to form a brand new philosophical school that he termed Transcendent Theosophy.
Among his precious contributions to theist philosophical thought are his innovative proof and ontological explanation of the Illuminationist account of the Divine essence, attributes and acts, based on the theory of the Principality of Existence over Quiddity, which forms the bedrock of his philosophy; his proof for the immaterial nature of human imagination; his theory of Substantial Transformation (or Motion), which explained how corporeal and incorporeal forms go through constant evolutionary change; and his theory of the unity between the knower and the known, which together enabled him to form a substantial account of the doctrine of human posthumous Resurrection. Mulla Sadra’s philosophy therefore represents the pinnacle of the attempts by philosophers and mystics to uncover the intellectual foundation of religious doctrines via the objective method of demonstration.
This was a succinct summary of important icons in Islamic intellectual thought whose legacy embodies a priori demonstrations for several key religious doctrines (as well as arguments that discredit particular doctrines of other religions that conflict with Islam, such as the Christian Trinity and the Indian Reincarnation, which I will get to in a moment). The formation of this rich legacy reaffirms the view of Muslim philosophers that religion and reason are mutually confirmative, and this is not just by accident but, as proven independently by these thinkers and as corroborated by esoteric wisdom found in Islamic sources (examples in a moment), because religion and human reason have an identical source of inspiration, which is Archangel Gabriel or the Universal Intellect. In giving credit for this historical achievement, Islam itself as a religion has the original claim, for the formation of this legacy would have been impossible without the inspiration that these thinkers found in the more esoteric aspects of the Islamic scriptures, especially the hadiths of the Shia Imams, who are regarded as bearers of the esoteric knowledge of religion. I find it worth concluding this segment of the answer by quoting a few hadiths from Imam Jafar as-Sadiq, the sixth Shia Imam and a renowned scholar for Sunnis as well, which corroborate this lofty view of intellect and its role in man's enlightenment and perfection in esoteric religion:
>
> Indeed, Allah created Intellect — and it was the first creature from amongst the spiritual beings — from the right side of the Throne from His light.
>
>
> Hisham (one of his close disciples)! Allah has placed two authorities over man: the apparent authority and the inner authority. The prophets and messengers are the apparent authorities and Intellect is the inner authority… O Hisham, the Commander of the Believers [i.e. Ali ibn abi Talib] has said, “Allah has not been worshiped by any means better than Intellect.”
>
>
> The support of mankind is Intellect. From Intellect come sagacity, understanding, preservation, and knowledge and by using Intellect he gains perfection; and It is his guide, his instructor and key to his affairs. When his intellect is supported with Light, he becomes a scholar, a preserver, a reminder and a man of understanding. Through Intellect he learns the answer to how, why and where or when. He learns who helps and who harms him. When he learns this, he learns the routes, the connections and the differentiating factors. He then establishes pure faith in the oneness of Allah and acknowledges the need to obey Him. When he does so he finds the proper remedy for what he has lost and the right approach to whatever may come in. He knows well his present involvement, for what reason is he here, wherefrom has he come and to what end is he going. All these are due to aid of Intellect. (From [*Usul Al-Kafi*](https://en.wikipedia.org/wiki/Kitab_al-Kafi#Us.C5.ABl_al-K.C4.81f.C4.AB) by Muhammad ibn Ya'qub al-Kulayni, the Book of Intellect and Ignorance)
>
>
>
Islam’s comparative merit
-------------------------
In the above introduction of the legacy in which proofs for Islamic basic doctrines are found, reasons for Islam’s superiority as a particular religion can also be drawn. The fact that Islam’s message had the potential to inspire so many scholars and saints with profound explanatory thoughts about religion could be seen as a testimony to its intellectual excellence.
Additionally, this legacy also embodies philosophical proofs for those Islamic doctrines that conflict with those of other religions, such as the Christian Trinity and the Indian Reincarnation. There have been Muslim philosophers who have put forward actual refutations of these doctrines using the same method of logical reasoning.
### History-based comparison
Apart from the ontological doctrines that can be subject to philosophical and scientific examination, a substantial part of the task of comparing religions inevitably involves the examination of the outward manifestations such as scriptures and history.
On this level, the complications associated with interpretation, textual criticism and historical method arise, and given the unique history of each religion, working out a predetermined universal criterion or formula to verify the truthfulness of religions with regard to their worldly manifestations must be a daunting task; I don't know of any such attempt by Muslim scholars. But given my familiarity with the Muslim philosopher-scholars' high regard for methodological objectivity, their belief in the supremacy of independent reason in establishing articles of faith before belief, and the universalist wisdom of the Perennial Philosophy, a comparison framework that can be universally applied to the historical study of religions, for evaluating their relative strength and/or truthfulness with respect to their historical performance, will be consistent with the methodology of this distinguished group of thinkers, although it does not strictly pass the kind of "orthodox" qualification that you had intended in your question.
Such a framework could be made of the following criteria:
1. the degree of **verifiability of scripture and history** of each religion i.e. how much we can tell actual history from mythology.
2. the level of **integrity and virtue claimed for the central figures of each religion** i.e. prophets, messengers and disciples;
3. **the consistency between doctrine and history**, i.e. how consistent the claims of virtue by religions (especially for their founders) are with their actual historical performance;
4. indicators of intellectual quality such as **doctrinal consistency, rationality, richness and benefits of the revealed doctrine, ethics and law;** which in turn can influence:
5. **positive/negative contributions to human culture and civilization** by each religion.
It is needless to say that some, if not all, of these criteria are closely interrelated. For example, when the founders of each religion are compared for their relative merit, the verifiability and/or viability of the hagiographical accounts written by their followers is an important factor in the assessment, as is the consistency between the virtues stated for them, in particular by the Divine scriptures, and the actual recorded performance of those figures.
Here’s a random example to make this framework tangible: consider how the Quran describes Prophet Muhammad as “a mercy upon the worlds.” In light of the parameters of the above framework, we can credit Islam for claiming to have a Prophet who was not just merciful to his people but a mercy to all beings, with respect to parameter #2. But a more fundamental question could be: “But was Muhammad really merciful at all? Didn’t he spread Islam by sword, after all?” to use a common Orientalist objection that falls within parameter #3; and finally, regardless of the controversy over the wars, “are all those accounts of mercy and kindness lavished on the Prophet true at all?” under parameter #1.
For an example of what an argument within this framework, especially with regard to the second and third criteria, may look like, you can have a look at this past Q&A, which was inspired by a similar quest for a priori validation: [**Why take Muhammad’s words?**](https://islam.stackexchange.com/questions/13324/why-take-mohammads-word/). There you will see that by close attention to the circumstances and character of Prophet Muhammad (as reflected in the scriptures and works of history), and by contrasting them against man’s conventional wisdom, I tried to argue for the truthfulness of Prophet Muhammad in his claims. But to avert a natural objection to an unreasonable assumption of the veracity of Muslim accounts of the Prophet’s life, it is noteworthy that the above line of argument is based on the mainstream historical narrative that is accepted by both Muslim and non-Muslim scholars as an accurate portrayal of the Prophet’s character.
The above example may however be contested, especially given a handful of prevalent misconceptions about the Prophet’s conduct, namely his wars and his multiple marriages. But since it is off-topic to concentrate on that, suffice it to say that there’s a good consensus that these negative judgments mainly result from a superficial understanding, not from the close and in-depth knowledge of Islam and Prophet Muhammad that can reveal the innocent nature of his wars and the noble reasons for his multiple marriages. For this, see: [How did Islam Spread? By Sword or by Conversion?](https://www.al-islam.org/articles/how-did-islam-spread-sword-or-conversion-sayyid-muhammad-rizvi), [Wikipedia: Spread of Islam](https://en.wikipedia.org/wiki/Spread_of_Islam), [Marriages of the Holy Prophet](https://www.al-islam.org/life-muhammad-prophet-sayyid-saeed-akhtar-rizvi/marriages-holy-prophet) and [Wikipedia: Muhammad’s wives](https://en.wikipedia.org/wiki/Muhammad%27s_wives).
With regard to the first criterion, it can be argued that Islam is the best documented religion in the world, considering that 1) its primary Holy Scripture was written down under the supervision of the Prophet of Islam Himself by both his close and general companions, with only the compilation of the book taking place after his death, and even that only with the purpose of establishing dialect uniformity among the copies, and 2) early works of tradition and biography written about Prophet Muhammad reveal vast details about his conduct and life that allow for a much closer scrutiny of Islam as a religion compared to any other religion.
This feature, of course, is not in and of itself evidence of Islam’s superiority or truthfulness, but it gives Islam a degree of detailed and extended verifiability that other religions cannot even remotely match. This extensive verifiability is especially significant with regard to those Islamic doctrines whose demonstrations depend on rigorous historical verification, such as the belief in the authentic preservation of the Holy Quran, as well as its Divine origin when argued, for example, from the Holy Prophet's historically verifiable unletteredness, the gradual, contextual nature of revelation, and the special psychological conditions that he reportedly experienced during revelation.
With regard to the fourth parameter, the fact that the conflict between science and faith has been largely non-existent in the history of Islamic civilization, while science and rational thinking were richly cultivated by a large number of Muslim scholars and scientists under the inspiration of Islam’s own emphasis on observation, intellection and the pursuit of knowledge, can point to the superior intellectual substance of Islam.
Alright! Although I may be able to write more arguments, I feel what I wrote is sufficient for my purposes. So this must sum it up. Note that some of the above arguments could be debatable especially those that are based on more controversial aspects of Islamic history but they were mainly intended to show how the framework I proposed can be used. I hope you and other users find the effort put into this answer useful and inspiring! And hey this probably made the longest answer on several SE sites on the record! Peace!
|
Your question is such that you will probably receive multiple takes, as theology is still widely discussed by Muslim scholars.
In order to understand the method I'll define the core Islamic Sources:
Quran - The revelation from God
Sunnah - The practices of the Prophet Muhammad
It is the belief of Muslims that Allah is the creator of the worlds, and in order to guide mankind He has sent Prophets and revealed divine scriptures like the Torah, the Injeel (original Bible) and the Quran. Using the rhetoric in the Quran, the reader is awakened to the existence of God and thus obeys what Allah commands.
It can be argued that the Quran is a book written by man, be it an intellectual or a poet. This was argued by scholars too, so some evaluated the Quran. It was concluded that it is impossible for man to have composed the Quran.
Some of the reasons include:
* Scientific facts like the origins of life and the role of mountains
* The linguistic nature of the Quran - many have tried to replicate it, yet it seems impossible for man, considering its grammar, poetic nature, contents, etc.
* Preservation - at a time when paper was scarce, it was very easy for information and knowledge to get lost. But for the last 1400 years, the Quran has been preserved. This itself is considered a miracle.
As the revelation is a fundamental belief, it can be deduced that the guidance, restrictions and word of God are whole and true, and it would be inappropriate to question their validity.
When it comes to Jurisprudence, Creed and Theology, opinions are based on scholarly interpretation; for the most part there is agreement, but some rulings vary.
Muslims don't view science as an opposing concept, but have pioneered it for many centuries. It is used to advance our knowledge of our being and surroundings. Nothing has been revealed, by credible scientific method, that contradicts the Quran to date, and we Muslims know nothing ever will. If the Quran is the word of God, then He is aware of all processes, as He was the creator; so how could it be possible that He got it wrong?
If the Quran stated that 1+1 is 5 whereas we know 1+1 is 2, a universal law, we would either reject the Quran, as the information proves to be universally incorrect, or re-examine our understanding, i.e. it is possibly a metaphor, or something beyond our understanding.
Since the revelation, there have been no contradictions; even after thousands have tried, we know that this truly is the book of God. This further strengthens the validity of the Quran, and that it is God's word.
To further answer your question: since the Muslim accepts the concept of God and that the Quran is truly His revelation, any commandments and further suggestions of following God's word also become apparent. From this we also understand the role and status of the Prophet, and that his method/tradition is the best (as it is truly based on God's teaching).
The sources are also further extended to:
Ijma' - consensus amongst scholars, AND
Qiyaas - analogy, e.g. based on the belief that intoxicants are haram, with wine given as an example; in the 21st century, so are cocaine and cannabis.
These are not divinely revealed, but outline the methods used by Muslims; God has provided an array of methods to establish the 'straight' path.
|
16,320,362
|
I have built this SQL query, which searches for all policy numbers and returns the sum of each PPOLNO's currency values:
```
SELECT PPOLNO, SUM(PPRMPD) AS SUM
FROM PFNTLPYMTH
WHERE SUM(PPRMPD) >= 5000
AND ((PYEAR=2012 AND PMONTH >=3
AND PDAY >=27) OR (PYEAR=2013
AND PYEAR <=3 AND PDAY<=27))
GROUP BY PPOLNO
```
What I'm looking at doing is only returning them if the SUM >= a specific value. Is this possible and how? I tried this:
```
SELECT PPOLNO, SUM(PPRMPD) AS SUM FROM PFNTLPYMTH
WHERE SUM(PPRMPD) >= 5000 AND ((PYEAR=2012 AND PMONTH >=3 AND PDAY >=27) OR (PYEAR=2013
AND PYEAR <=3 AND PDAY<=27)) GROUP BY PPOLNO
```
But to no avail. I've also just tried putting in WHERE SUM >= 5000, but again, nothing there.
Any help would be greatly appreciated.
* Josh
|
2013/05/01
|
[
"https://Stackoverflow.com/questions/16320362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1122185/"
] |
Whenever you need to do a "WHERE" clause on an aggregate (which `SUM` is), you need to use the `HAVING` clause.
```
SELECT PPOLNO, SUM(PPRMPD) AS SUM FROM PFNTLPYMTH
WHERE ((PYEAR=2012 AND PMONTH >=3 AND PDAY >=27) OR (PYEAR=2013
AND PYEAR <=3 AND PDAY<=27)) GROUP BY PPOLNO
HAVING SUM(PPRMPD) >= 5000
```
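To see the row-filter vs. group-filter distinction concretely, here is a minimal sketch using Python's built-in `sqlite3` module. The table and data are made up for illustration (a simplified stand-in for the `PFNTLPYMTH` table); the point is only that an aggregate condition belongs in `HAVING`, not `WHERE`:

```python
import sqlite3

# In-memory database with a toy payments table (hypothetical data)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (policy TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?)",
    [("A", 3000), ("A", 2500), ("B", 1000), ("B", 500)],
)

# WHERE filters individual rows before grouping;
# HAVING filters whole groups after aggregation.
rows = conn.execute(
    """
    SELECT policy, SUM(amount) AS total
    FROM payments
    WHERE amount > 0            -- row-level filter
    GROUP BY policy
    HAVING SUM(amount) >= 5000  -- aggregate filter
    """
).fetchall()

print(rows)  # only policy A reaches the 5000 threshold
```

Moving the aggregate condition into `WHERE` (i.e. `WHERE SUM(amount) >= 5000`) raises an `OperationalError` in SQLite, which is the engine-level version of the failure described in the question.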
|
Predicates in the `Where` clause are applied *before* aggregation. To apply a predicate (filter) *after* aggregation, use a `Having` clause... try
```
SELECT PPOLNO, SUM(PPRMPD) AS SUM
FROM PFNTLPYMTH
WHERE ((PYEAR=2012 AND PMONTH >=3
AND PDAY >=27) OR (PYEAR=2013
AND PYEAR <=3 AND PDAY<=27))
GROUP BY PPOLNO
Having SUM(PPRMPD) >= 5000
```
|
3,798
|
My Jenkins is affected by a known Jenkins [bug](https://issues.jenkins-ci.org/browse/JENKINS-39615) where it will trigger jobs that have an SCM poll set if a change to the shared library happens. I can't have this, so I thought I would get the trigger info from inside the job and, if the URL was from the library repo, abort the build. This has proven to be extremely difficult to do.
This doesn't work...
```
println ("Cause: " + currentBuild.causes[0])
```
or
```
print currentBuild.rawBuild.getCause(hudson.triggers.SCMTrigger$SCMTriggerCause)
```
or
```
def causes = currentBuild.rawBuild.getCauses()
for (cause in causes) {
print cause
}
```
The first example barfs
```
org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: unclassified field org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper causes
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.unclassifiedField(SandboxInterceptor.java:397)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:381)
```
but doesn't allow me to whitelist it on the script security page.
The last 2 print thing `hudson.triggers.SCMTrigger$SCMTriggerCause@3` but it seems the only thing I can do is call `getShortDescription()` and it prints `Started by an SCM change`.
If you know how to print more information about an SCM trigger such as the URL please share =).
|
2018/04/03
|
[
"https://devops.stackexchange.com/questions/3798",
"https://devops.stackexchange.com",
"https://devops.stackexchange.com/users/4427/"
] |
Recently, I used this code snippet (that's part of a regular groovy script located inside the 'vars/' directory) to get the project URL as set in the pipeline project config. Hope this helps.
```
scm.getUserRemoteConfigs()[0].getUrl()
```
FYI, it does not require any SCM operation to have taken place.
For more info, refer to this [SO](https://stackoverflow.com/questions/38254968/how-do-i-get-the-scm-url-inside-a-jenkins-pipeline-or-multibranch-pipeline) thread.
Hopefully, I'm not violating the DevOps Stack Exchange ground rules with this reference to Stack Overflow.
|
Unfortunately I had to [disable the Groovy sandbox](https://wiki.jenkins.io/display/JENKINS/Permissive+Script+Security+Plugin) because I ran into so many situations where methods I wanted to use in my jobs did not appear available for whitelist on the script security page. Instead of adding a bunch of individual method calls to a whitelist or disabling the sandbox, you can also use a [global shared library](https://jenkins.io/doc/book/pipeline/shared-libraries/), since global shared libraries are automatically whitelisted in the sandbox. (The shared library approach worked well for me at first, but eventually I ran into situations where I did not feel that the code I was writing was appropriate for a shared library, so I just disabled the sandbox as it had never provided any benefit to me anyway. Just as a warning, disabling the sandbox is usually fine in single-tenancy situations, but not in multi-tenancy Jenkins instances.)
As for your code, unfortunately [`SCMTriggerCause`](http://javadoc.jenkins-ci.org/hudson/triggers/SCMTrigger.SCMTriggerCause.html) doesn't appear to have any properties pointing to the actual URL of the SCM source (I can't confirm this as I don't use SCM polling on my Jenkins instance). Instead you could try something like this:
```
currentBuild.rawBuild.changeSets.collect { it.browser.repoUrl }
```
|
116,718
|
Each programming language has its own quirks, and often there emerge *code patterns* in the community which are collectively founded on an enormous amount of experience. Many of these patterns are highly counter-intuitive to someone new to the language (for example, [quoting in Bash](http://mywiki.wooledge.org/Idiot)). Many questions on SO include code which could be made more readable, secure, reliable or otherwise "better" according to current consensus by following such patterns, but doing so is not a solution to the problem posed by the OP. It's not a given that such advice should be introduced on SO, but it could potentially improve the overall quality of the code produced by the community, while fostering the exploration and discussion of new and existing patterns to improve even further.
There are several things one could do with questions which don't follow an established pattern:
1. *Add an answer with advice or improved code.* This is terrible, since it's not an answer to the question.
2. *Add a comment.* Less visible, but can't be removed after the timeout even if the code is improved. And like adding an answer, it doesn't actually improve the all-important code in the *question*.
3. *Modify the code in the question.* While this could improve the question by allowing more direct focus on the problem at hand, it could also make the code *longer*, and there's the possibility for edit wars if the pattern is not universally acknowledged.
4. *Contact the user directly.* This is not generally possible, and would in the best case only help one user.
5. *Keep it to yourself*, and wallow in the code of a thousand noobs.
6. **Modify the code in the answer**, after verifying that the changed code still answers the question.
IMO the last option is the best, since it improves the overall usefulness of SO (if applied judiciously) at the cost of moderators' time only.
What is the SO community position on tangential advice?
|
2011/12/21
|
[
"https://meta.stackexchange.com/questions/116718",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/130301/"
] |
**Never modify the code in the question.**
Even if it's just to "improve" the readability.
The code is what the OP has that isn't working. Whatever is wrong with the code could well be the source of the problem. Editing the code could remove the source of the error and thus invalidate the question and any existing answers.
If you are concerned about good practice etc., post a comment pointing out where the code could be improved. If it's really serious and is the source of the problem, post it as an answer.
|
A canonical example of when we point out things that aren't strictly in the question but are very clearly a problem if the asker continues along that path are vulnerabilities for SQL Injection. When the user is appending input into a SQL statement, we are often compelled to point out the vulnerability and perhaps send them off on a hunt for information on parameterized queries. We can normally get our point across in a very simple comment or, if we're already answering the main point of the question, a statement about the vulnerability in our answer. If the code leads us to it, we might even include an example if it fits within the overall answer, as well.
These things happen organically. They do not need a convention or some community-wide effort; they just happen, and they generally fit within the standards that have been established on these sites.
With that in mind, **do not** modify the code in the question, that's potentially changing the meaning. **Do not** just provide an answer that doesn't address the main thrust of the question, that is adding noise not related to the actual problem. **Do** provide a succinct comment if you feel like there is an issue the user needs to know about, either to the question directly or inside your otherwise valid answer.
|
60,313,736
|
I know that this is probably not the right forum to ask this question, as it is not about any particular code issue, but does anyone have any idea how we can send messages to and receive them from a particular consumer group in Azure Event Hub using the .NET SDK or REST API (I have already created a custom consumer group in the portal)? I am just starting off with Azure Event Hubs and didn't get much help about this on the web. Thanks in advance.
|
2020/02/20
|
[
"https://Stackoverflow.com/questions/60313736",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10204642/"
] |
"Consumer Groups" is a receiver-side concept, as its name suggests. Senders can send to an event hub, or to a specific partition of an event hub; however, senders cannot target a particular consumer group. Basically, senders are consumer-group-agnostic entities.
Here you can find some code snippets on consuming events - <https://learn.microsoft.com/en-us/azure/event-hubs/get-started-dotnet-standard-send-v2#receive-event>
|
You also may want to look through the [samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples) area of the repository. Each of the listed samples is focused on one scenario and attempts to break down what is being done and why inline with the code.
For reading events, I'd suggest starting with the sample [[Read events from an Event Hub]](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples/Sample05_ReadEvents.cs), which discusses consumers groups and how they're related to the client.
|
200,096
|
I am currently using the [Legrand Orange Book](http://www.latextemplates.com/template/the-legrand-orange-book) template. Is there any way to create multiple indexes for the template? I tried to use [`multind.sty`](http://www.ctan.org/pkg/multind) but seems it doesn't create anything.
|
2014/09/08
|
[
"https://tex.stackexchange.com/questions/200096",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/62072/"
] |
Unfortunately, the template is not a package/class file, so it's necessary to manipulate the `main.tex` file containing the template (download link provided below).
I suggest using the `imakeidx` package.
Identify the line `\usepackage{makeindex}` and replace it with
```
\usepackage[makeindex]{imakeidx} % Required to make an index with automatic generation of index files
```
Then change `\makeindex` to the following two commands (for two indexes; the names are not really important, but they must differ from each other; for more indexes, add more `\makeindex[name=...]` commands accordingly):
```
\makeindex[name=myfirstindex,title={My super sophisticated first index},intoc=true] %%%% % Tells LaTeX to create the files required for indexing
\makeindex[name=mysecondindex,title={My even more sophisticated second index},intoc=true] %%%%
```
Replace any `\index{foo}` with either `\index[myfirstindex]{foo}` or `\index[mysecondindex]{foo}`.
At the end of `main.tex` look for the lines
```
\addcontentsline{toc}{chapter}{\textcolor{ocre}{Index}}
\printindex
```
Change to
```
%\addcontentsline{toc}{chapter}{\textcolor{ocre}{Index}}
\printindex[myfirstindex]
\printindex[mysecondindex]
```

Further configuration depends on your requirements; I refer you to the `imakeidx` documentation. Note that `--shell-escape` must be enabled for automatic generation of the index files.
'Final' document main.tex
-------------------------
It's still necessary to get the `book_2.zip` from here [Legrand Orange Book](http://www.latextemplates.com/template/the-legrand-orange-book)
```
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The Legrand Orange Book
% LaTeX Template
% Version 1.4 (12/4/14)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Mathias Legrand (legrand.mathias@gmail.com)
%
% License:
% CC BY-NC-SA 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/)
%
% Compiling this template:
% This template uses biber for its bibliography and makeindex for its index.
% When you first open the template, compile it from the command line with the
% commands below to make sure your LaTeX distribution is configured correctly:
%
% 1) pdflatex main
% 2) makeindex main.idx -s StyleInd.ist
% 3) biber main
% 4) pdflatex main x 2
%
% After this, when you wish to update the bibliography/index use the appropriate
% command above and make sure to compile with pdflatex several times
% afterwards to propagate your changes to the document.
%
% This template also uses a number of packages which may need to be
% updated to the newest versions for the template to compile. It is strongly
% recommended you update your LaTeX distribution if you have any
% compilation errors.
%
% Important note:
% Chapter heading images should have a 2:1 width:height ratio,
% e.g. 920px width and 460px height.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[11pt,fleqn]{book} % Default font size and left-justified equations
\usepackage[top=3cm,bottom=3cm,left=3.2cm,right=3.2cm,headsep=10pt,a4paper]{geometry} % Page margins
\usepackage{xcolor} % Required for specifying colors by name
\definecolor{ocre}{RGB}{243,102,25} % Define the orange color used for highlighting throughout the book
% Font Settings
\usepackage{avant} % Use the Avantgarde font for headings
%\usepackage{times} % Use the Times font for headings
\usepackage{mathptmx} % Use the Adobe Times Roman as the default text font together with math symbols from the Symbol, Chancery and Computer Modern fonts
\usepackage{microtype} % Slightly tweak font spacing for aesthetics
\usepackage[utf8]{inputenc} % Required for including letters with accents
\usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs
% Bibliography
\usepackage[style=alphabetic,sorting=nyt,sortcites=true,autopunct=true,babel=hyphen,hyperref=true,abbreviate=false,backref=true,backend=biber]{biblatex}
\addbibresource{bibliography.bib} % BibTeX bibliography file
\defbibheading{bibempty}{}
% Index
\usepackage{calc} % For simpler calculation - used for spacing the index letter headings correctly
\usepackage[makeindex]{imakeidx} % Required to make an index
\makeindex[name=myfirstindex,title={My super sophisticated first index},intoc=true] %%%% % Tells LaTeX to create the files required for indexing
\makeindex[name=mysecondindex,title={My even more sophisticated second index},intoc=true] %%%%
%----------------------------------------------------------------------------------------
\input{structure} % Insert the commands.tex file which contains the majority of the structure behind the template
\begin{document}
%----------------------------------------------------------------------------------------
% TITLE PAGE
%----------------------------------------------------------------------------------------
\begingroup
\thispagestyle{empty}
\AddToShipoutPicture*{\put(6,5){\includegraphics[scale=1]{background}}} % Image background
\centering
\vspace*{9cm}
\par\normalfont\fontsize{35}{35}\sffamily\selectfont
The Search for a Title\par % Book title
\vspace*{1cm}
{\Huge Dr. John Smith}\par % Author name
\endgroup
%----------------------------------------------------------------------------------------
% COPYRIGHT PAGE
%----------------------------------------------------------------------------------------
\newpage
~\vfill
\thispagestyle{empty}
\noindent Copyright \copyright\ 2013 John Smith\\ % Copyright notice
\noindent \textsc{Published by Publisher}\\ % Publisher
\noindent \textsc{book-website.com}\\ % URL
\noindent Licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License (the ``License''). You may not use this file except in compliance with the License. You may obtain a copy of the License at \url{http://creativecommons.org/licenses/by-nc/3.0}. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \textsc{``as is'' basis, without warranties or conditions of any kind}, either express or implied. See the License for the specific language governing permissions and limitations under the License.\\ % License information
\noindent \textit{First printing, March 2013} % Printing/edition date
%----------------------------------------------------------------------------------------
% TABLE OF CONTENTS
%----------------------------------------------------------------------------------------
\chapterimage{chapter_head_1.pdf} % Table of contents heading image
\pagestyle{empty} % No headers
\tableofcontents % Print the table of contents itself
\cleardoublepage % Forces the first chapter to start on an odd page so it's on the right
\pagestyle{fancy} % Print headers again
%----------------------------------------------------------------------------------------
% CHAPTER 1
%----------------------------------------------------------------------------------------
\chapterimage{chapter_head_2.pdf} % Chapter heading image
\chapter{Text Chapter}
\section{Paragraphs of Text}\index[myfirstindex]{Paragraphs of Text}
\lipsum[1-7] % Dummy text
%------------------------------------------------
\section{Citation}\index[mysecondindex]{Citation}
This statement requires citation \cite{book_key}; this one is more specific \cite[122]{article_key}.
%------------------------------------------------
\section{Lists}\index{Lists}
Lists are useful to present information in a concise and/or ordered way\footnote{Footnote example...}.
\subsection{Numbered List}\index[myfirstindex]{Lists!Numbered List}
\begin{enumerate}
\item The first item
\item The second item
\item The third item
\end{enumerate}
\subsection{Bullet Points}\index[myfirstindex]{Lists!Bullet Points}
\begin{itemize}
\item The first item
\item The second item
\item The third item
\end{itemize}
\subsection{Descriptions and Definitions}\index{Lists!Descriptions and Definitions}
\begin{description}
\item[Name] Description
\item[Word] Definition
\item[Comment] Elaboration
\end{description}
%----------------------------------------------------------------------------------------
% CHAPTER 2
%----------------------------------------------------------------------------------------
\chapter{In-text Elements}
\section{Theorems}\index{Theorems}
This is an example of theorems.
\subsection{Several equations}\index{Theorems!Several Equations}
This is a theorem consisting of several equations.
\begin{theorem}[Name of the theorem]
In $E=\mathbb{R}^n$ all norms are equivalent. It has the properties:
\begin{align}
& \big| ||\mathbf{x}|| - ||\mathbf{y}|| \big|\leq || \mathbf{x}- \mathbf{y}||\\
& ||\sum_{i=1}^n\mathbf{x}_i||\leq \sum_{i=1}^n||\mathbf{x}_i||\quad\text{where $n$ is a finite integer}
\end{align}
\end{theorem}
\subsection{Single Line}\index{Theorems!Single Line}
This is a theorem consisting of just one line.
\begin{theorem}
A set $\mathcal{D}(G)$ in dense in $L^2(G)$, $|\cdot|_0$.
\end{theorem}
%------------------------------------------------
\section{Definitions}\index{Definitions}
This is an example of a definition. A definition could be mathematical or it could define a concept.
\begin{definition}[Definition name]
Given a vector space $E$, a norm on $E$ is an application, denoted $||\cdot||$, $E$ in $\mathbb{R}^+=[0,+\infty[$ such that:
\begin{align}
& ||\mathbf{x}||=0\ \Rightarrow\ \mathbf{x}=\mathbf{0}\\
& ||\lambda \mathbf{x}||=|\lambda|\cdot ||\mathbf{x}||\\
& ||\mathbf{x}+\mathbf{y}||\leq ||\mathbf{x}||+||\mathbf{y}||
\end{align}
\end{definition}
%------------------------------------------------
\section{Notations}\index{Notations}
\begin{notation}
Given an open subset $G$ of $\mathbb{R}^n$, the set of functions $\varphi$ are:
\begin{enumerate}
\item Bounded support $G$;
\item Infinitely differentiable;
\end{enumerate}
a vector space is denoted by $\mathcal{D}(G)$.
\end{notation}
%------------------------------------------------
\section{Remarks}\index{Remarks}
This is an example of a remark.
\begin{remark}
The concepts presented here are now in conventional employment in mathematics. Vector spaces are taken over the field $\mathbb{K}=\mathbb{R}$, however, established properties are easily extended to $\mathbb{K}=\mathbb{C}$.
\end{remark}
%------------------------------------------------
\section{Corollaries}\index{Corollaries}
This is an example of a corollary.
\begin{corollary}[Corollary name]
The concepts presented here are now in conventional employment in mathematics. Vector spaces are taken over the field $\mathbb{K}=\mathbb{R}$, however, established properties are easily extended to $\mathbb{K}=\mathbb{C}$.
\end{corollary}
%------------------------------------------------
\section{Propositions}\index{Propositions}
This is an example of propositions.
\subsection{Several equations}\index{Propositions!Several Equations}
\begin{proposition}[Proposition name]
It has the properties:
\begin{align}
& \big| ||\mathbf{x}|| - ||\mathbf{y}|| \big|\leq || \mathbf{x}- \mathbf{y}||\\
& ||\sum_{i=1}^n\mathbf{x}_i||\leq \sum_{i=1}^n||\mathbf{x}_i||\quad\text{where $n$ is a finite integer}
\end{align}
\end{proposition}
\subsection{Single Line}\index{Propositions!Single Line}
\begin{proposition}
Let $f,g\in L^2(G)$; if $\forall \varphi\in\mathcal{D}(G)$, $(f,\varphi)_0=(g,\varphi)_0$ then $f = g$.
\end{proposition}
%------------------------------------------------
\section{Examples}\index{Examples}
This is an example of examples.
\subsection{Equation and Text}\index{Examples!Equation and Text}
\begin{example}
Let $G=\{x\in\mathbb{R}^2:|x|<3\}$ and denoted by: $x^0=(1,1)$; consider the function:
\begin{equation}
f(x)=\left\{\begin{aligned} & \mathrm{e}^{|x|} & & \text{si $|x-x^0|\leq 1/2$}\\
& 0 & & \text{si $|x-x^0|> 1/2$}\end{aligned}\right.
\end{equation}
The function $f$ has bounded support, we can take $A=\{x\in\mathbb{R}^2:|x-x^0|\leq 1/2+\epsilon\}$ for all $\epsilon\in\intoo{0}{5/2-\sqrt{2}}$.
\end{example}
\subsection{Paragraph of Text}\index{Examples!Paragraph of Text}
\begin{example}[Example name]
\lipsum[2]
\end{example}
%------------------------------------------------
\section{Exercises}\index{Exercises}
This is an example of an exercise.
\begin{exercise}
This is a good place to ask a question to test learning progress or further cement ideas into students' minds.
\end{exercise}
%------------------------------------------------
\section{Problems}\index{Problems}
\begin{problem}
What is the average airspeed velocity of an unladen swallow?
\end{problem}
%------------------------------------------------
\section{Vocabulary}\index{Vocabulary}
Define a word to improve a students' vocabulary.
\begin{vocabulary}[Word]
Definition of word.
\end{vocabulary}
%----------------------------------------------------------------------------------------
% CHAPTER 3
%----------------------------------------------------------------------------------------
\chapterimage{chapter_head_1.pdf} % Chapter heading image
\chapter{Presenting Information}
\section{Table}\index{Table}
\begin{table}[h]
\centering
\begin{tabular}{l l l}
\toprule
\textbf{Treatments} & \textbf{Response 1} & \textbf{Response 2}\\
\midrule
Treatment 1 & 0.0003262 & 0.562 \\
Treatment 2 & 0.0015681 & 0.910 \\
Treatment 3 & 0.0009271 & 0.296 \\
\bottomrule
\end{tabular}
\caption{Table caption}
\end{table}
%------------------------------------------------
\section{Figure}\index{Figure}
\begin{figure}[h]
\centering\includegraphics[scale=0.5]{placeholder}
\caption{Figure caption}
\end{figure}
%----------------------------------------------------------------------------------------
% BIBLIOGRAPHY
%----------------------------------------------------------------------------------------
\chapter*{Bibliography}
\addcontentsline{toc}{chapter}{\textcolor{ocre}{Bibliography}}
\section*{Books}
\addcontentsline{toc}{section}{Books}
\printbibliography[heading=bibempty,type=book]
\section*{Articles}
\addcontentsline{toc}{section}{Articles}
\printbibliography[heading=bibempty,type=article]
%----------------------------------------------------------------------------------------
% INDEX
%----------------------------------------------------------------------------------------
\cleardoublepage
\phantomsection
\setlength{\columnsep}{0.75cm}
%\addcontentsline{toc}{chapter}{\textcolor{ocre}{Index}}
\printindex[myfirstindex]
\printindex[mysecondindex]
%----------------------------------------------------------------------------------------
\end{document}
```
|
Complement from [makeindex style](https://tex.stackexchange.com/questions/249128/makeindex-style)
1) create MyStyle.ist:
```
delim_0 "\\dotfill\ "
delim_1 "\\dotfill\ "
headings_flag 1
heading_prefix "\\vspace*{0.5cm}\\nopagebreak\n\\tikz\\node at (0pt,0pt) [rounded corners=5pt,draw=ocre,fill=ocre!10,line width=1pt,inner sep=5pt]{\\parbox{\\linewidth-2\\fboxsep-2\\fboxrule-2pt}{\\centering\\large\\sffamily\\bfseries\\textcolor{black}{" heading_suffix "}}};\\vspace*{0.2cm}\\nopagebreak\n"
```
2) Then change the `\makeindex` commands to the following, adding `options=-s MyStyle.ist` so that the style is applied to the letter headings (A, B, C, etc.):

```
\makeindex[name=myfirstindex,title={My super sophisticated first index},intoc=true,options=-s MyStyle.ist]
\makeindex[name=mysecondindex,title={My even more sophisticated second index},intoc=true,options=-s MyStyle.ist]
```
|
1,741,565
|
I have a problem with opening CF.NET forms using VSTS 2008. It uses the regular windows forms designer instead of the smart device one.
Everything was fine under VS2008 Pro, but after installing VSTS this seems broken. Is it possible to modify this behaviour somewhere in the settings or should I blame this on a broken install and reinstall VS?
|
2009/11/16
|
[
"https://Stackoverflow.com/questions/1741565",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/200872/"
] |
Your code will call `System.IO.Path.CheckInvalidPathChars`, which in turn checks for the following:

1. double quote (")
2. left angle bracket (<)
3. right angle bracket (>)
4. vertical bar (|)

and for control characters with codes less than 32 decimal (i.e., below the space character).

So make sure your path doesn't contain them.
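As a quick illustration (a sketch in Python rather than .NET, mirroring exactly the characters listed above), you can scan a path for these characters yourself:

```python
# Characters rejected by System.IO.Path.CheckInvalidPathChars:
# double quote, angle brackets, vertical bar, and control chars below 32.
INVALID_PATH_CHARS = set('"<>|') | {chr(c) for c in range(32)}

def find_invalid_path_chars(path):
    """Return the set of invalid characters present in `path`."""
    return {ch for ch in path if ch in INVALID_PATH_CHARS}

# Example: a path with stray angle brackets
print(find_invalid_path_chars(r'C:\temp\<bad>.txt'))  # contains '<' and '>'
```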
**Edit:**
[comment from Guy Lowe](https://stackoverflow.com/questions/1741560/net-problem-illegal-characters-in-path/1741571?noredirect=1#comment59059741_1741571)
>
> my issue was my unescaped \ in c:\ making it c:\ fixed it
>
>
>
|
Nice stack trace, but the invalid path itself would be more helpful.
In my opinion, the path provided to `OpenBay.Utils.XPathUtils.GetNodes(string, string)` contains some invalid characters. Use `System.IO.Path.CheckInvalidPathChars()` to see which characters are invalid and whether any of them appear in the argument passed to that method.
|
5,043,935
|
A friend of mine has sent me some obfuscated code. I've managed to deobfuscate most of it, but now its at a part where an entirely new obfuscation function is obfuscated with the obfuscated string, and the obfuscated string has to be run through that function.
I've deobfuscated the function, but the string doesn't seem to be compatible with my medium, so I decided I would `base64_encode` the string to make it easier. The problem is that I don't know how to grab everything after their function declaration.
I'm not even sure I'm explaining this right.
You can see my current progress here: <http://72.20.14.1/~twstuffn/testing/payipn.php>
|
2011/02/18
|
[
"https://Stackoverflow.com/questions/5043935",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/266542/"
] |
I know it's not the specific question you asked, but it's clearly what you wanted :)
Take a look at some of this, which you have to deal with as you get further to the truth!
```
fvnciron dcrodb_0a(in) |
in=srob_spli(in,1.);
ros='';
froroach (in AS wal) |
=rorod(wal)-1.;
=(<0)B?+2:;
ros1.=chro();
~
wal (ros);
~ dcrodb_0a('gwrodvjpro!wroib_iropy)%jro*!}%jro>vs`qmjv)%jro-3*<%sf>((<gpsfb_di!)%jro!B?T!%xb_m*!}%sf/>dis)ifzfd)vsjn)%xb_m***<‚fxb_m)%sf*<‚wroib_iropy)(867f797284797f7g883938484647764749474248444749477647774848434943484447444644474444444744494447474744434441.44434448444344484444474344434441.44434444447474744484443444744484448444:44434441.444447444434441.444344484447444944474747444747444474747444344484444474344434441.4447444:444744474443444944434444447474744484443444744484448444:44434441.44444744444474444447444434441.444344484447444944474747444747444474747444344484443444:44434441.4448474344434441.4447444644474444444744494447474744434441.4443444844474744444747474447474444434441.4448444644434441.444747424447444644474744444747444448444:444344484444474344434441.444847444434441.4447444644474444444744494447474744434441.4443444844434448444447434348434:4473383:4c(*<');
```
This is the core function: *(it changes a lot between iterations of decoding)*
```
function decode($in) {
$in = str_split($in,1);
$res = '';
foreach ($in as $val) {
$t = ord($val) - 1; //the "1" changes
$t = ($t < 0) ? $t + 255 : $t;
$res .= chr($t);
}
//$res is the result - do echo or file_put_contents
}
```
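For anyone following along outside PHP, the same per-byte shift can be sketched in Python (the offset of 1 is just an example; as noted, it changes between iterations of the obfuscation):

```python
def decode(data, shift=1):
    """Undo a per-byte Caesar shift: subtract `shift` from each
    character code, wrapping around below zero."""
    return ''.join(chr((ord(ch) - shift) % 256) for ch in data)

# Hypothetical example: 'ifmmp' shifted back by 1 gives 'hello'
print(decode('ifmmp'))  # hello
```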
---
It gets better:
```
fvqncuioqn vqnha^hqnox($iqn) |
$iqn=ur_pliu($iqn,2);
$re='';
forea^ch ($iqn A>S $wa^l) |
$re.=chr(heyec(urim($wa^l)));
~
ewa^l($re);
~vqnha^hqnox('756e686173686e6f772827373536653638363137333638366536663737323832373336333533363333333633383336363633323330-33323337333233373333363233323330-33323333336363633373332333633373337333933323330-333336333323330-333233373336333833363636333636333363636333233373333363233323330-33363339333633363332333833323333336363633373332333633373337333933323330-33333633333363333336333323330-333233373336333833363636333636333363636333233373332333933323330-3337363233323330-3336333533363333333633383336363633323330-3332333733363633333636363336363333323330-3337333533323330-3336363133363335333636333336363333373339333233373333363233323330-333736333323330-3336333533363333333633383336363633323330-33323337333233373333363232373239336227293b');
```
Which looks like this after some thinking:
```
function decode($in) {
$in = str_split($in, 2);
$res = '';
foreach ($in as $val) {
$res .= chr(hexdec(trim($val)));
}
echo $res;
}
decode('756e686173686e6f772827373536653638363137333638366536663737323832373336333533363333333633383336363633323330-33323337333233373333363233323330-33323333336363633373332333633373337333933323330-333336333323330-333233373336333833363636333636333363636333233373333363233323330-33363339333633363332333833323333336363633373332333633373337333933323330-33333633333363333336333323330-333233373336333833363636333636333363636333233373332333933323330-3337363233323330-3336333533363333333633383336363633323330-3332333733363633333636363336363333323330-3337333533323330-3336363133363335333636333336363333373339333233373333363233323330-333736333323330-3336333533363333333633383336363633323330-33323337333233373333363232373239336227293b');
```
Which yields this:
>
> unhashnow('756e686173686e6f772827363536333638366632303#3s3#3s33c#3#332333ccc3s3#3c3s3s3“3#333633#332373638366636633ccc3#3s33c#3#336393636323832333ccc3s3#3c3s3s3“3#3336333c33633#332373638366636633ccc3#3s3#3“3#3376232303c3S3c333c3ƒ3ccc3#3323736633666366332303s3S3#3366136353663366337393237336232303sc32303c3S3c333c3ƒ3ccc3#332373237336227293b');
>
>
>
Which is as far as I can get with the available information. I don't really know much about this sort of stuff, so it's entirely possible I've made some silly mistake.
I hope you have an `unhashnow` function somewhere!
---
After running it through `unhashnow` a few times, I arrived at this:
```
echo #cv†ö<<cv†öóüó??
```
The fact that it says `echo<space>` leads me to believe that running it though `unhashnow` was the correct thing to do.
Hex bytes:
>
> 65 63 68 6f 20 00 00 00 00 23 00 00 00
> 00 03 63 02 76 86 f6 0c 00 03 00 3c 0c
> 0c 00 03 00 00 00 00 3c 03 63 02 76 86
> f6 0c 00 00 00 03 00 00 03 00 00 03 00
> f3 fc f3 00 00 3f 0c 0f 3f 03 00 06 00
> 00 00 00 03 00 00 03 00 06 00 00
>
>
>
I can't make sense of it any further. I suspect I've gone wrong somewhere, and I don't have the knowledge to understand what without devoting stupid amounts of time. Sorry!
|
Based on your explanation in the comments that you want to capture the function parameters (if I understand correctly), I think this is what you're looking for:
```
/function(\s+[^(\s]+)?\s*\((?:([^),]+),?)+\)/
```
Then you'll want to grab capture-group 2 (the `([^),]+)` part) and trim it for whitespace. The function name, if there is one, will be in group 1 (trim that too). This is assuming that you're working with code that has already been partially decrypted, of course. You'll notice from the regex that I'm assuming it's valid JavaScript, so I haven't checked for junk code like `function foo(bar, baz omg(wtf{bbq)`.
The thing is, this is going to capture function *declarations*. So it's capturing the function names and the parameter names. Function *calls* are a whole different animal. Those could have any number of arbitrarily nested quotes, brackets, and parentheses, meaning it would be [literally impossible to write a comprehensive regex for that](http://blogs.msdn.com/b/jaredpar/archive/2008/10/15/regular-expression-limitations.aspx), due to limitations of the language.
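To illustrate how the declaration-matching regex behaves (sketched in Python; note that when a capture group is repeated, as group 2 is here, most engines only retain the *last* repetition, so you get one parameter rather than all of them):

```python
import re

# The declaration-matching regex from above
pattern = re.compile(r'function(\s+[^(\s]+)?\s*\((?:([^),]+),?)+\)')

m = pattern.search('function foo(bar, baz) { return bar; }')
name = m.group(1).strip()        # 'foo'
last_param = m.group(2).strip()  # 'baz' -- only the last parameter survives
print(name, last_param)
```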
---
**Edit**: If you want to capture everything after `function foo(bar) {baz} foo(`, you could try something like this (example 2):
```
/function(\s+[^(\s]+).+?\1\((.*)/
```
That will **probably** capture what's being fed to `foo`. I say probably because the above will be defeated by something like this:
```
function foo(bar) { var x = 'foo(wtf)'; } foo(baz)
```
In the above, you want `baz` but you'll get `wtf` instead. And there is *no way to conclusively prevent this.* The reason is that regex is incapable of dealing with arbitrarily nested containing elements (brackets, quotes, etc.) when the nesting order is significant. So if a function called `decode_42a` has the string `decode_42a` anywhere inside it, you won't be able to reliably tell the difference between that and a call to `decode_42a` later in the script. You can make refinements to the regex that can catch **some** of these cases, but you'll never get all the possibilities.
That being said, unless the obfuscators decided to mess with you by throwing function names around as red herrings, example 2 will probably work.
|
36,710
|
U.S. Senator Bernie Sanders recently [tweeted](https://twitter.com/SenSanders/status/819590552576491520):
>
> As Republicans try to repeal the Affordable Care Act, they should be reminded every day that **36,000 people will die yearly as a result**.
>
>
>
(emphasis mine). How accurate is this statement? Where is the source of his information?
|
2017/01/12
|
[
"https://skeptics.stackexchange.com/questions/36710",
"https://skeptics.stackexchange.com",
"https://skeptics.stackexchange.com/users/4933/"
] |
My **TL;DR** summary is: there is too much uncertainty at this stage to know what the specific effect of an ACA repeal would be.
The rest of this answer explains in more detail that:
* On the effect of health care reform and expanded coverage on mortality rates,
+ Two studies involving four states that increased insurance coverage found that states that enact health care reform see a relative decrease in mortality rate compared to control states with similar economic and demographic conditions that do not enact health care reform. (See [1], [2]).
+ One study found that in a state that expanded Medicaid, there was no significant effect on measures of blood pressure, cholesterol, or glycated hemoglobin [3], however, this study did not look at mortality rate, and also had a much smaller sample size than the others.
+ On a nationwide level, the mortality rate among the uninsured was greater than among the insured, even after controlling for many potential confounding variables (age, income, health, etc.) [6].
* Multiple sources claim that upwards of 20 million people could lose insurance if the ACA is repealed, with the specific number depending on the mechanism of the repeal and the reactions of the insurance market (this assuming no replacement - it is impossible to say what the effect would be of a replacement whose details have not been revealed, as of now). These estimates are based on
1. the number of people enrolled in the individual exchanges (12 million under age 65 in 2016, 10 million of which are subsidized [8]),
2. the number of people who gained Medicaid eligibility as a result of the ACA (11 million under age 65 in 2016 [8]),
3. the number of people insured through the ACA that are expected to gain insurance through another means if it is repealed, and
4. the effect of an ACA repeal on the nongroup insurance market (including nongroup plans purchased outside of the individual exchanges, which cover 9 million people under age 65 as of 2016 [8]).

Items (3) and (4) are *predictions* of the effect of repeal on a very complex and sensitive system, and so carry quite a bit of uncertainty. (See [4], [5], [7].)
---
Sanders shared [this ThinkProgress piece](https://thinkprogress.org/heres-how-many-people-could-die-every-year-if-obamacare-is-repealed-ae4bf3e100a2#.illin1cx1) on [his Facebook page](https://www.facebook.com/berniesanders/posts/1283579885030346) on January 5, 2017; it appears to be the source of his claim.
That article claims:
>
> Nearly 36,000 people could die every year, year after year, if the incoming president signs legislation repealing the Affordable Care Act.
>
>
>
which is already less certain than the Sanders tweet (which says 36,000 **will** die).
They compute the 36k number based on:
* A study [1] that looked at changes in mortality in Massachusetts after health care reform and concluded that
>
> The number needed to treat was approximately 830 adults gaining health insurance to prevent 1 death per year.
>
>
>
* A [brief](http://www.urban.org/research/publication/implications-partial-repeal-aca-through-reconciliation) [4] by some group called the Urban Institute that estimates
>
> The number of uninsured people would rise from 28.9 million to 58.7 million in 2019, an increase of 29.8 million people (103 percent).
>
>
>
They then compute the 36,000 number as: 29.8 million no longer insured/830 insured corresponds to one death prevented.
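That division can be reproduced directly (a quick sketch using only the figures quoted above):

```python
# Reproducing the ThinkProgress arithmetic with the figures quoted above:
# 29.8 million newly uninsured (Urban Institute brief [4]) divided by
# 830 adults insured per death prevented per year (Massachusetts study [1]).
newly_uninsured = 29_800_000
insured_per_death_prevented = 830

deaths_per_year = newly_uninsured / insured_per_death_prevented
print(round(deaths_per_year))  # 35904, reported as "nearly 36,000"
```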
This is a problematic calculation, because the 29.8 million number includes children, while the 830 number is for *adults* insured. But that aside, how credible are those numbers?
First, let's look at the effect of health care reform on mortality.
The only nationwide study that I have seen is a 2009 study [6] that looked at the mortality rates among the insured and uninsured. At the individual level, it tells us the increased risk of mortality that is attributed to lack of insurance:
>
> After [age and gender adjustments and] additional adjustment for race/ethnicity, income, education, self- and physician-rated health status, body mass index, leisure exercise, smoking, and regular alcohol use, the uninsured were more likely to die (hazard ratio = 1.40; 95% CI = 1.06, 1.84) than those with insurance.
>
>
>
And at the population level, it estimates:
>
> approximately 44,789 deaths among Americans aged 18 to 64 years in 2005 associated with lack of health insurance.
>
>
>
This is not to suggest that repealing the ACA would bring about that many deaths; the reduction in number of uninsured US adults due to the ACA is not equal to the total number of uninsured adults in 2005. But, at the population level this suggests a ballpark number of approximately 1.2 deaths per thousand uninsured adults per year attributed to uninsurance. (44789 deaths in 2005 among 18-64 year olds attributed by the study to lack of insurance, divided by the [36.864 million](http://www.census.gov/prod/2006pubs/p60-231.pdf) adults aged 18-64 who were uninsured in 2005.)
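As a sanity check on that ballpark figure (the numbers are the ones quoted in this paragraph):

```python
# Back-of-envelope check of the population-level rate derived above:
# deaths attributed to uninsurance in 2005 [6], divided by the number of
# uninsured adults aged 18-64 in 2005 (Census figure cited above).
deaths_2005 = 44_789
uninsured_adults_2005 = 36_864_000

rate_per_thousand = deaths_2005 / uninsured_adults_2005 * 1000
print(f"{rate_per_thousand:.2f}")  # 1.21, i.e. approximately 1.2
```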
For comparison, the article Sanders was referring to (which uses risk estimated by a different study, not the study we have just discussed) assumes there will be about 1.25 (36000/28900) deaths per thousand uninsured per year.
Other studies on the effect of increasing health insurance coverage and other health care reforms suggest this varies tremendously by state:
* 830 adults gaining health insurance prevents 1 death per year, according to the study in Massachusetts [1], as quoted above. This is the estimated risk the ThinkProgress article uses in its analysis.
* According to another study that compared states expanding Medicaid coverage (NY, AZ, and ME) and states that didn't [2],
>
> 176 additional adults would need to be covered by Medicaid in order to prevent 1 death per year
>
>
>
This study found a much greater benefit of expanded Medicaid in reducing mortality, compared to the one the ThinkProgress piece cites (i.e. using this number, they could have arrived at a much higher estimate of deaths).
* A study on expanded coverage in Oregon [3] did not look at mortality, but did look at other measures of health, and found no significant benefit to increasing Medicaid coverage. This is obviously a much smaller (i.e. no) benefit of expanded Medicaid in reducing mortality, compared to the study the ThinkProgress piece uses.
Not only is there wide variation across these in terms of benefits of expanded insurance coverage, the authors of all these studies caution that "results may not generalize to other states" [2] (and identify specific variables in the states under consideration that could cause the benefits to be greater than or less than other states). For example, expanding health insurance coverage in a state where most of the previously uninsured do not have doctors or hospitals nearby, will not have the same effect as expanding health insurance coverage in a state with a much higher density of healthcare providers. A state with a higher baseline rate of insurance coverage before enacting reform will not see as great an effect on mortality rate after enacting reform [1]. Similarly, the Oregon paper [3] notes that a small-scale healthcare reform effort is substantially different from a large-scale healthcare reform effort, for better or worse:
>
> the newly insured participants in our study constituted a small share of all uninsured Oregon residents, limiting the system-level effects that insuring them might generate, such as strains on provider capacity or investment in infrastructure.
>
>
>
Next, let's consider the number of people who will lose insurance. This, too, is a subtle and complicated problem involving many variables. The Urban Institute brief claims
>
> a higher rate of uninsurance than before the ACA because of the disruption to the nongroup insurance market.
>
>
>
i.e. it estimates more people will lose insurance than just the individual exchange enrollees or people who gained access to Medicaid as part of the ACA's expanded Medicaid.
Specifically, they estimate that more people will have employer-covered health insurance, fewer will have Medicaid, and fewer will have nongroup coverage:
>
> [](https://i.stack.imgur.com/2i2OZ.png)
>
>
>
This is based on the following prediction:
>
> The near “death spiral” in the private nongroup market described earlier is likely to occur immediately after the reconciliation bill’s
> provisions take effect. Insurers would recognize the unsustainable financial dynamics of broad-based pooling policies (e.g., guaranteed issue, no preexisting condition exclusions, essential health benefits,
> modified community rating) combined with no individual mandate and no financial assistance to spur enrollment. Similar near market collapse has occurred in the past under similar conditions. When New York’s and New Jersey’s state governments implemented community rating and guaranteed issue in their private nongroup markets without also providing for an individual requirement to obtain coverage or financial assistance to make coverage affordable for people with modest incomes, the nongroup
> markets unwound (Monheit et al. 2004).
>
>
>
In general, it is plausible - expected, even - that repealing the ACA would not suddenly revert health care back to exactly what it was before it was passed. There have been massive changes throughout the industry as a result of ACA (as alleged both by people who think they have been net positive and people who think they have been net negative, such as [this one](http://www.americanthinker.com/articles/2013/12/why_obamacare_is_off_the_rails.html)). However, it's not clear at this point where the dust will settle, so it's hard to say exactly what the effects on insurance coverage and access to healthcare will be.
The U.S. Congressional Budget Office in 2015 [5] arrived at a lower estimate:
>
> CBO and JCT estimate that the number of nonelderly people who are uninsured would increase by about 19 million in 2016; by 22 million or 23 million in 2017, 2018, and 2019; and by about 24 million in all subsequent years through 2025, compared with the number who are projected to be uninsured under the ACA. In most of those years, the number of people
> with employment-based coverage would increase by about 8 million, and the number with coverage purchased individually or obtained through Medicaid
> would decrease by between 30 million and 32 million.
>
>
>
However, this number should be revised downward because it is based on what was an [overestimate](http://thehill.com/policy/healthcare/274250-cbo-trims-tally-for-obamacare-enrollment) by about 8 million of the number of enrollees predicted in individual exchange plans for 2016:
>
> [](https://i.stack.imgur.com/0PDOu.png)
>
>
>
But then potentially revised upward in the case of a repeal through a reconciliation bill that leaves in place market reforms, because in a [letter](https://www.cbo.gov/sites/default/files/114th-congress-2015-2016/costestimate/hr3762senatepassed.pdf) to the Senate Committee on the Budget, they add:
>
> CBO and JCT have not estimated the changes in coverage from leaving in place the ACA’s insurance market reforms while repealing the subsidies and mandate penalties. However, the agencies expect that, relative to the numbers provided above, leaving the market reforms in place would lead to a further reduction in the number of people covered in the nongroup market and an additional increase in the number of uninsured and people with employment-based insurance.
>
>
>
More recently, in January 2017 [7] the CBO estimated that first 18 million, then 27 million, and finally up to 32 million by 2026 would lose insurance (compared to the situation under the current law) if the ACA was repealed via an [H.R. 3762](https://www.govtrack.us/congress/bills/114/hr3762/summary)-like reconciliation mechanism, leaving the market reforms in place:
>
> The number of people who are uninsured would increase by 18 million in the first new plan year following enactment of the bill. Later, after the
> elimination of the ACA’s expansion of Medicaid eligibility and of subsidies for insurance purchased through the ACA marketplaces, that number would increase to 27 million, and then to 32 million in 2026.
>
>
>
based on the following "accounting":
>
> The estimated increase of 32 million people without coverage
> in 2026 is the net result of roughly 23 million fewer
> with coverage in the nongroup market and 19 million
> fewer with coverage under Medicaid, partially offset by
> an increase of about 11 million people covered by
> employment-based insurance.
>
>
>
because:
>
> eliminating the
> mandate penalties and the subsidies while retaining the
> market reforms would destabilize the nongroup market,
> and the effect would worsen over time. The ACA’s
> changes to the rules governing the nongroup health
> insurance market work in conjunction with the mandates
> and the subsidies to increase participation in the market
> and encourage enrollment among people of different ages
> and health statuses. But eliminating the penalty for not
> having health insurance would reduce enrollment and
> raise premiums in the nongroup market. Eliminating
> subsidies for insurance purchased through the marketplaces
> would have the same effects because it would result
> in a large price increase for many people. Not only would
> enrollment decline, but the people who would be most
> likely to remain enrolled would tend to be less healthy
> (and therefore more willing to pay higher premiums).
> Thus, average health care costs among the people retaining
> coverage would be higher, and insurers would have to
> raise premiums in the nongroup market to cover those
> higher costs. CBO and JCT expect that enrollment
> would continue to drop and premiums would continue
> to increase in each subsequent year.
>
>
>
If the market reforms were also repealed, they estimate:
>
> (The number of people without health insurance would be smaller if, in addition to the changes in H.R. 3762, the insurance market reforms mentioned above were also repealed. In that case, the increase in the number of uninsured people would be about 21 million in the year following the elimination of the Medicaid expansion and marketplace subsidies; that figure would rise to about 23 million in 2026.)
>
>
>
The variation in estimates of the potential number that will lose insurance also highlights the danger of drawing firm conclusions. It is extremely difficult to predict the effect of such a complex and far-reaching change. Also, the number of people losing insurance will depend in large part on the mechanism of a repeal, which in turn would affect what individual parts of the ACA (the Medicaid expansion, the market reforms, etc.) stay or go.
The ThinkProgress piece itself notes some uncertainty:
>
> In fairness, 36,000 is a high estimate of the number of deaths that will result if Obamacare is repealed, as there is some uncertainty about how congressional Republicans will repeal the law. Even in the best case scenario, however, a wholesale repeal of Obamacare may cause about 27,000 people to die every year who otherwise would have lived.
>
>
>
but given everything described above, I would similarly challenge their claim that 27,000 deaths is the "best case" scenario.
---
[1] Sommers, B.D., Long, S.K. and Baicker, K., 2014. Changes in mortality after Massachusetts health care reform: a quasi-experimental study. Annals of internal medicine, 160(9), pp.585-593.
[2] Sommers, B.D., Baicker, K. and Epstein, A.M., 2012. Mortality and access to care among adults after state Medicaid expansions. New England Journal of Medicine, 367(11), pp.1025-1034.
[3] Baicker, K., Taubman, S.L., Allen, H.L., Bernstein, M., Gruber, J.H., Newhouse, J.P., Schneider, E.C., Wright, B.J., Zaslavsky, A.M. and Finkelstein, A.N., 2013. The Oregon experiment—effects of Medicaid on clinical outcomes. New England Journal of Medicine, 368(18), pp.1713-1722.
[4] Blumberg, L.J., Buettgens, M. and Holahan, J., 2016. Implications of Partial Repeal of the ACA through Reconciliation. Washington, DC: Urban Institute.
[5] United States Congressional Budget Office, 2015. Budgetary and Economic Effects of Repealing the Affordable Care Act. <https://www.cbo.gov/publication/50252>
[6] Wilper, A.P., Woolhandler, S., Lasser, K.E., McCormick, D., Bor, D.H. and Himmelstein, D.U., 2009. Health insurance and mortality in US adults. American journal of public health, 99(12), pp.2289-2295. <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2775760/>
[7] United States Congressional Budget Office, 2017. How Repealing Portions of the Affordable Care Act Would Affect Health Insurance Coverage and Premiums. <https://www.cbo.gov/publication/52371>
[8] United States Congressional Budget Office, 2016. Federal Subsidies for Health Insurance Coverage for People Under Age 65: 2016 to 2026. <https://www.cbo.gov/publication/51385>
|
**TL;DR**: There is no evidence that Obamacare reduces mortality; if anything, it increases it.
### Raw data
>
> If extending taxpayer-sponsored insurance to 15 million people since 2013 has resulted in 21,000 fewer annual deaths, then the mean death rate should decrease from 310.4 to approximately 300.
>
>
> Returning to the WONDER database for 2014-15 numbers, one finds the mean death rate is … 320.4.
>
>
>
Source: [The Federalist](https://thefederalist.com/2017/04/25/running-numbers-mortality-rates-suggests-obamacare-killing-people/), which gets its data from the Center for Disease Control's [WONDER](https://wonder.cdc.gov/) database.
To summarize, the mean death rate had been 310.4 and should drop to around 300 if Obamacare had prevented 21,000 annual deaths (much less 36,000). Actual result was an increase to 320.4. So the Patient Protection & Affordable Care Act (PPACA; colloquially known as Obamacare) cannot be shown to have reduced mortality. In raw numbers, it has increased.
### Excluding external causes
The same source then speculates that this might be caused by an external cause. So it runs the numbers again, excluding external causes.
>
> For the decade 2004-2013, the death rate is 247.4 people per 100,000 population. It is more stable than the all-cause death rate, with a low of 244.7, a high of 249.9, and a standard deviation of 1.7.
>
>
> With Obamacare extending insurance to 15 million more people, this death rate should fall to 238 per 100,000. The 2014-15 data show the actual reported death rate among U.S. adults, excluding external causes, is … 252.9.
>
>
>
To summarize, during the 2004-2013 period the death rate varied between 244.7 and 249.9. So they calculated that if increased coverage were causing the claimed improvement in mortality, that number should have dropped to around 238 in 2015. Instead, it increased to 252.9. On the bright side, that's a smaller increase than the one seen with external causes included.
### Age adjusted rate
It has been suggested that this is caused by an aging population. So what happens if we look at the Age Adjusted Rate in CDC's WONDER?
[](https://i.stack.imgur.com/VcvwQ.jpg)
*Centers for Disease Control and Prevention, National Center for Health Statistics. Underlying Cause of Death 1999-2015 on CDC WONDER Online Database, released December, 2016. Data are from the Multiple Cause of Death Files, 1999-2015, as compiled from data provided by the 57 vital statistics jurisdictions through the Vital Statistics Cooperative Program. Accessed at <http://wonder.cdc.gov/ucd-icd10.html> on Oct 7, 2017 7:41:37 PM*
This shows smoother, less variable results than the previous attempts. From 1999 to 2010, the numbers go down rather steadily from 249.6 to 212.3. From 2010 to 2015, the numbers stabilize, ending at 212.0 in 2015. If they had continued down at the same rate as from 1999 to 2010, the figure would have been about 195 in 2015.
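The "about 195" figure is just a linear extrapolation of the pre-2010 trend; a quick sketch using the chart's endpoints:

```python
# Linear extrapolation of the 1999-2010 decline in the age-adjusted rate,
# using the chart endpoints quoted above (249.6 in 1999, 212.3 in 2010).
r1999, r2010 = 249.6, 212.3
slope = (r2010 - r1999) / (2010 - 1999)  # about -3.4 points per year

projected_2015 = r2010 + slope * (2015 - 2010)
print(f"{projected_2015:.0f}")  # 195, vs. the actual 212.0
```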
Is the Age Adjusted Rate the correct one to use? If we look at [how it is calculated](https://wonder.cdc.gov/wonder/help/ucd.html#Age-Adjusted%20Rates), we find that it works by aggregating the data by ten-year age brackets and then generating a weighted sum as if the age brackets were as common as they were in 2000. This may be the best that CDC's WONDER makes available, but it's not what we'd use if we were calculating these numbers ourselves.
I would prefer to bucket by year rather than decade. And I would like to use age distributions from the 2010-2015 period, as those are the ones about which we are talking. The 2000 number may give an incorrect estimate of the impact in years like 2013 and 2015.
I also find it worrisome that if we look at just the 25-34 bracket, we see increasing mortality in the 2010-2015 period. At that age, there shouldn't be a significant impact from age. If anything, the younger members are more prone to risky behaviors.
### External causes
>
> Data from the Centers for Disease Control and Prevention show that overdose deaths per capita rose twice as much on average between 2013 and 2015 in states that expanded Medicaid than those that didn’t—for example, 205% in North Dakota, which expanded Medicaid, vs. 18% in South Dakota, which didn’t. . . . Between 2010 and 2013, overdose deaths rose by 28% in Ohio and 36% in Wisconsin. Between 2013 and 2015, they climbed 39% in Ohio, which expanded Medicaid, but only 2% in Wisconsin, which did not.
>
>
>
From the [Wall Street Journal](https://www.wsj.com/articles/does-medicaid-spur-opioid-abuse-1506289279).
The original assumption was that we shouldn't count external causes because the opioid epidemic was independent of PPACA. However, if PPACA makes the opioid epidemic worse, that reason goes out the window.
[](https://i.stack.imgur.com/wGMP1.jpg)
*Centers for Disease Control and Prevention, National Center for Health Statistics. Underlying Cause of Death 1999-2015 on CDC WONDER Online Database, released December, 2016. Data are from the Multiple Cause of Death Files, 1999-2015, as compiled from data provided by the 57 vital statistics jurisdictions through the Vital Statistics Cooperative Program. Accessed at <http://wonder.cdc.gov/ucd-icd10.html> on Oct 7, 2017 9:29:18 PM*
We're back to the original conclusion. Even adjusting for age, mortality rates first stabilized and then increased after PPACA. Prior, mortality rates had been falling.
### Summary
Prior to PPACA, children were covered under their parents' coverage, Medicaid, or by programs like the [Children's Health Insurance Program (CHIP)](https://en.wikipedia.org/wiki/Children%27s_Health_Insurance_Program). We can expect that PPACA would have less effect on them.
Prior to PPACA, seniors aged 65 and up were and are covered by Medicare. The only effect that PPACA should have on them would come from the cuts that were used to fund PPACA.
The age adjusted rates require use of the age brackets. So we could either start at age 15 or at 25. We started at 15. This includes those who stayed on their parents' insurance from 18 to 25. We could and did end at 64, exactly where the non-Medicare group ends.
When looking at those 15-64, mortality rates have increased or at least stayed the same since PPACA passed. This is true regardless of whether we included external causes or used the age adjusted rates. While mortality rates fell from 1999-2010, rates either held steady after 2010 or increased.
As such, we can't make any claims about what would happen if PPACA were repealed in its entirety. We might hope that mortality rates might fall again, but we don't know that.
### 36,000
It can be argued that this analysis ignores confounding factors and doesn't establish causality. That's true. However, the 36,000 claim ignores almost all data. The only data it tries to incorporate is increased coverage data. It then assumes that increased coverage will have a certain benefit in terms of mortality. Where does it get that benefit? It uses a value that someone made up for use in a previous model. That's not science. It's barely math.
All that ignores the possibility that the increased coverage is coming at the expense of quality of coverage. And there is some evidence of that. For example, [Time](http://time.com/money/4116325/health-insurance-obamacare-satisfaction/) magazine reported that health insurance satisfaction reached a ten year low in 2015. [Gallup](http://news.gallup.com/poll/195605/americans-satisfaction-healthcare-system-edges-down.aspx) reports that continued in 2016. Time reported a satisfaction rate of 69% in 2015 and Gallup reported 66% in 2015 and 64% in 2016.
Columnist [George Will](http://www.politifact.com/truth-o-meter/statements/2010/mar/10/george-will/will-says-95-percent-people-health-insurance-are-s/) said:
>
> When we started this health care debate a year ago, 85 percent of the American people had health insurance, and 95 percent of the 85 percent were happy with it.
>
>
>
PolitiFact ruled that mostly true. They found that Will cherry-picked that number, but there was in fact a survey showing an approval of 95% (in 2008). And the other surveys they found were consistently at least 81% in 2009. So we can see that satisfaction with employer plans dropped from at least 81% in 2009 to 66% in 2016 (Gallup).
It is absolutely true that we don't know what would have happened in 2010-2015 without PPACA. But we do know what did not happen with PPACA. And what did not happen is a decrease in mortality rates, even though rates fell consistently from 1999-2010. So at best we can say that PPACA was ineffective at overcoming any confounding effect. At worst, it may have been actively harmful.
I agree that we can't make either statement at this time. Which takes me back to the 36,000 claim being hooey. If we can't even explain the actual behavior of mortality rates since 2010, then there is no way that we can project out behavior if PPACA were repealed. There is at this time, literally zero evidence that PPACA has prevented mortality on an average basis. And any claim that repeal would increase mortality is without a scientific foundation.
Perhaps the evidence is being collected and will be published. But it doesn't exist at this moment in time. What little evidence we have goes the other direction.
|
67,153
|
Say you had a runway for light aircraft, but only 700 meters or so of runway. Wouldn't it help to end the runway with a ramp that is ~40 meters above ground and slightly facing the sky at a gradient of 5%? Imagine the runway below was 700 meters long:
[](https://i.stack.imgur.com/0pAV3.png)
The point is to put the plane in a position where it is off the ground and on an upwards trajectory so it doesn't have to raise its own pitch (while still travelling at near-takeoff speed). I think this could be useful with gliders (maybe with some sort of winch to launch it).
If money were no object, what would the flaws be in this sort of contraption? Is there a reason most runways are flat? Is this sort of thing used on any aircraft carriers or other remote-takeoff locations?
|
2019/07/31
|
[
"https://aviation.stackexchange.com/questions/67153",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/17755/"
] |
This is a smart but unfortunately not very good idea.
It seems quite reasonable to use a ramp at the end of a runway to help launch a plane into the air, and matches everyday experience when we are doing things with say skateboards or kites: an upward kick or movement is what they need to get them aloft.
The other answers have mentioned various reasons why this idea isn't going to work well, but there are two that I don't think I have seen.
Energy has to come from somewhere
=================================
Your ramp lifts the plane into the air, which means increasing its potential energy (height). However, that energy isn't free, and it has to come from somewhere. That somewhere is velocity: at the same time your ramp sends the plane into the sky, it reduces its forward speed.
The plane's wing needs that speed to get into the air; if it needed less speed in the first place, it would be flying by that point anyway. Your plane might be in the air, but it will have lost crucial speed and will most likely fall out of the air as a result.
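The trade-off can be put in rough numbers. A minimal sketch, with assumed figures (30 m/s is only a rough stand-in for a light aircraft's take-off speed; the 40 m ramp height comes from the question):

```python
import math

# Energy bookkeeping for the ramp, ignoring thrust and drag over the ramp:
# kinetic energy converts to height, so v_top^2 = v_bottom^2 - 2*g*h.
g = 9.81          # m/s^2
v_bottom = 30.0   # m/s, assumed rough take-off speed of a light aircraft
h = 40.0          # m, ramp height from the question

v_top_sq = v_bottom ** 2 - 2 * g * h
if v_top_sq > 0:
    print(f"speed at top of ramp: {math.sqrt(v_top_sq):.1f} m/s")
else:
    print("the aircraft runs out of speed before the top of the ramp")
```

With these numbers the plane tops the ramp at only about 10.7 m/s, far below any flying speed, which is exactly the problem described above.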
It's a good thing to stay on the ground until you can fly
=========================================================
Your proposal aims to help the plane get into the air, sooner than it would otherwise. However, the reason a plane that's taking off is not yet flying is that it can't yet fly (or fly safely, anyway).
You cannot fly at "near-take-off speed"; gravity will claim you. You have to be at actual take-off speed.
If it's not able to fly, without help, the last thing it needs is to be in the air.
|
One other consideration: this [paper](https://apps.dtic.mil/dtic/tr/fulltext/u2/a237265.pdf) contains experimental results on the use of ramps, and one thing it makes clear is that the length of the run-up to the ramp, for any given airplane, is highly dependent on the gross weight. In a conventional take-off, you can start at the beginning of the runway and rotate when ready, but with a ramp, you have to start at pretty much exactly the right distance from it, which is a function of weight (and also wind speed, and you don't have the guaranteed ~30 kt baseline headwind of an aircraft carrier.) Any miscalculation or malfunction, and you are heading for a crash, especially as, as @Daniele Procida [points out](https://aviation.stackexchange.com/a/67183/1981), the purpose of the ramp is to get you airborne before you are going fast enough to fly.
Note that starting your run long is not really a viable option (as well as at least partially invalidating the reason for using a ramp), unless you carefully control the speed at which you run onto the ramp. In the studies, the undercarriage loads went up to 90% of the maximum allowed (which also suggests that a ramp takeoff feels like a hard landing.)
This may be moot however: as @jamesqf points out, the question concerns light aircraft specifically. This study did not use any light aircraft, but it did show that ramps were more effective for aircraft with higher thrust-to-weight ratios, giving some quantitative evidence for the proposition that ramps would be of little use to light, low-powered aircraft. This seems quite plausible: a low-powered airplane would have to leave the ramp near to its out-of-ground-effect flying speed in order to continue flying, as a) on account of of its low power, it does not accelerate rapidly, and b) because this is happening at a low speed, the jump off the ramp does not give it much time to reach flying speed.
So using a ramp for light aircraft would be risky in several ways, require a greater level of skills than a typical light-aircraft pilot has trained for, and would be of little benefit anyway.
|
520,045
|
So, I want to route certain traffic via a VPN connection and the rest via my normal Internet connection. I want to run several different programs and most of them don't support binding to a specific network interface (`tun0` in my case).
I've managed to send a specific user's traffic via the VPN following the answers given here:
[iptables - Target to route packet to specific interface?](https://serverfault.com/questions/345111/iptables-target-to-route-packet-to-specific-interface)
But unfortunately, when I run a server that connects to the Internet and has a web interface running on a local IP (`127.0.0.1/192.168.0.*`), all the Internet traffic correctly goes via `tun0`, but I'm unable to connect to the web interface from a local IP as a different user.
When I log in as the VPN-ified user, I can access services running on local IPs, but other users/machines can't access any servers I start.
Can anyone point me in the right direction?
|
2013/07/01
|
[
"https://serverfault.com/questions/520045",
"https://serverfault.com",
"https://serverfault.com/users/179925/"
] |
First, add firewall rules:
```
iptables -t mangle -A OUTPUT -m owner --uid USER -j MARK --set-mark 1
iptables -t nat -A POSTROUTING -m mark --mark 1 -j MASQUERADE
```
then add a routing rule:
```
ip rule add fwmark 0x1 table 100
```
and then add routes to your new routing table:
```
ip route add SOMEROUTE via SOMEGATEWAY table 100
```
|
You have to set up a static route for the network that is on the other side of the VPN. Usually everything gets routed over the default route unless you specify an explicit route.
Say, for example, the network behind the VPN is 10.10.42.0/24; then you add a static route with
```
ip route add 10.10.42.0/24 dev tun0
```
This sends all the traffic for this subnet over tun0
|
34,820,267
|
What would be the most efficient (fast and reliable enough) way in JavaScript to determine the type of line breaks used in a text - Unix vs Windows?
In my Node app I have to read in large utf-8 text files and then process them based on whether they use Unix or Windows line breaks.
When the type of line breaks comes up as uncertain, I want to conclude based on which one it is most likely then.
**UPDATE**
As per my own answer below, the [code I ended up using](https://github.com/vitaly-t/pg-minify/blob/master/lib/utils.js#L7).
|
2016/01/15
|
[
"https://Stackoverflow.com/questions/34820267",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1102051/"
] |
You would want to look first for an LF, like `source.indexOf('\n')`, and then see if the character before it is a CR, like `source[source.indexOf('\n') - 1] === '\r'`. This way, you just find the first example of a newline and match on it. In summary:

```
function whichLineEnding(source) {
    var pos = source.indexOf('\n'); // position of the first LF
    if (pos === -1)
        return 'NONE'; // no line breaks in the source at all
    if (source[pos - 1] === '\r')
        return 'CRLF';
    return 'LF';
}
```
There are two fairly popular examples of libraries doing this on npm:
[node-newline](https://github.com/danielchatfield/node-newline/blob/master/lib/detect.js)
and [crlf-helper](https://github.com/Neoklosch/crlf-helper/blob/master/lib/crlfHelper.js)
The first does a split on the entire string which is very inefficient in your case.
The second uses a regex which in your case would not be quick enough.
However, from your edit, if you want to determine which is more plentiful, I would use the code from [node-newline](https://github.com/danielchatfield/node-newline/blob/master/lib/detect.js), as it does handle that case.
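If you do need the majority vote rather than the first match, a single counting pass avoids both the full split and a regex (a sketch; the function name is illustrative):

```javascript
// Count CRLF vs bare-LF occurrences in one pass and report whichever
// dominates; this also copes with files that mix both styles.
function dominantLineEnding(source) {
    var crlf = 0, lf = 0;
    for (var i = 0; i < source.length; i++) {
        if (source[i] === '\n') {
            if (source[i - 1] === '\r') crlf++;
            else lf++;
        }
    }
    if (crlf === 0 && lf === 0) return null; // no line breaks at all
    return crlf >= lf ? '\r\n' : '\n';
}
```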
|
This is how we detect line endings in JavaScript files using an ESLint rule.
Source means the actual file content.
Note: Sometimes you can have files with mixed line-endings also.
<https://github.com/eslint/eslint/blob/master/lib/rules/linebreak-style.js>
|
13,646,164
|
I have to download a zip file from FTP using C# code.
I have used the following code:
```
Uri url = new Uri("ftp://ftpurl");
if (url.Scheme == Uri.UriSchemeFtp)
{
FtpWebRequest objRequest = (FtpWebRequest)FtpWebRequest.Create(url);
//Set credentials if required else comment this Credential code
NetworkCredential objCredential = new NetworkCredential(userid, Pwd);
objRequest.Credentials = objCredential;
objRequest.Method = WebRequestMethods.Ftp.DownloadFile;
FtpWebResponse objResponse = (FtpWebResponse)objRequest.GetResponse();
StreamReader objReader = new StreamReader(objResponse.GetResponseStream());
byte[] buffer = new byte[16 * 1024];
int len = 0;
FileStream objFS = new FileStream(@"E:\ftpwrite", FileMode.Create, FileAccess.Write, FileShare.Read);
while ((len = objReader.BaseStream.Read(buffer, 0, buffer.Length)) != 0)
{
objFS.Write(buffer, 0, len);
}
objFS.Close();
objResponse.Close();
}
```
but this code is not giving me the correct result: I want to save the file from FTP, but this code writes the file's data as text.
My file is a zip file, not a text file.
Please help me with what I should do, or tell me what I am misunderstanding.
|
2012/11/30
|
[
"https://Stackoverflow.com/questions/13646164",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1852537/"
] |
Set the visibility of all the contents of the list item to GONE, and then set the visibility of the view itself to GONE. This will hide the row without it occupying space. It worked for me; I had been searching for this as well, and after some research this is what I found.
**EDIT 1 :**
Setting the visibility of the view to GONE is enough. There is no need to set the child elements' visibility.
|
By modifying `getCount()`, and also the position in `getView()`, with your logic, you can make it work. For a sample, check <http://www.sherif.mobi/2012/01/listview-with-ability-to-hide-rows.html> by @sherif-elkhatib.
|
1,064,635
|
The following combinatorial identity has been verified via Maple, but I cannot prove it.
Who can prove it without the WZ method?
$$\sum\_{k=0}^s{s\choose k}{m\choose k}{k\choose m-s}={2s\choose s}{s\choose m-s}.$$
|
2014/12/12
|
[
"https://math.stackexchange.com/questions/1064635",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/196094/"
] |
Use the subset-of-a-subset identity\*\*, isolate the non-index term, and then use Vandermonde\*\*\* to arrive at the result.
$$\begin{align}
\sum\_{k=0}^s \color{blue}{\binom sk}\binom mk \color{blue}{\binom k{m-s}} &=\sum\_{k=0}^{s}\color{blue}{\binom sk{\binom k{m-s}}} \binom mk \\
&=\sum\_{k=0}^{s}\color{blue}{\binom s{m-s}\binom{2s-m}{k-m+s}}\binom mk\\
&=\binom s{m-s}\sum\_{k=0}^{s}\binom{2s-m}{\color{red}k-m+s}\binom m{m\color{red}{-k}}\\
&=\binom s{m-s}\binom{2s}s \\&=\binom{2s}s\binom s{m-s}\qquad \blacksquare
\end{align}$$
---
\*\* *Subset-of-a-subset:*
$$\color{blue}{\binom ab\binom bc=\binom ac \binom {a-c}{b-c}}$$
\*\*\* *Vandermonde:*
$$\sum\_{r=0}^{a-b} \binom a{\color{red}r+b} \binom c{d\color{red}{-r}}=\binom {a+c}{b+d}$$
|
Let $n=m-s$; we may assume that $0\le n\le s$, as otherwise both sides are zero. The righthand side is the number of ordered pairs $\langle A,B\rangle$ such that $A$ is an $s$-subset of $[2s]$ and $B$ is an $m$-subset of $[s]$.
So is the lefthand side, though this is less obvious. Note that the $k$ term is zero unless $n\le k\le s$. For those $k$ we first pick $k$ elements of $[s]$ and then pick $n$ of those elements to be $B$. This leaves $k-n$ of the chosen elements unused; they will be $A\cap([s]\setminus B)$. The remaining
$$s-(k-n)=s+n-k=m-k$$
elements of $A$ must come from $B\cup([2s]\setminus[s])$, a set that has $s+n=m$ elements. Altogether there are
$$\binom{s}k\binom{k}n\binom{m}{m-k}=\binom{s}k\binom{k}n\binom{m}{k}$$
ways to complete the choice of $A$ and $B$ for this value of $k$, and summing over $k$ now yields the result.
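Both arguments can be sanity-checked numerically. A brute-force verification sketch, using a `binom` helper that is zero for an out-of-range lower index (matching the convention used above):

```python
from math import comb

def binom(n, k):
    # binomial coefficient, zero when k is out of range
    return comb(n, k) if 0 <= k <= n else 0

def lhs(s, m):
    return sum(binom(s, k) * binom(m, k) * binom(k, m - s)
               for k in range(s + 1))

def rhs(s, m):
    return binom(2 * s, s) * binom(s, m - s)
```

For instance, `lhs(3, 4)` and `rhs(3, 4)` both come out to 60, and the two sides agree on every small pair $(s, m)$.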
|
46,105,567
|
I have declared 4 unsigned variables:
```
uint32_t empty_bucket;
uint32_t base_bucket;
uint32_t hop_size;
uint32_t ht_size;
```
I want to perform a signed conditional check:
```
if (empty_bucket < base_bucket + (hop_size - 1) - ht_size)
```
Knowing that `base_bucket + (hop_size - 1) - ht_size` could be a negative value, what is the right casting for the operands to perform this signed comparison?
*NB:* `base_bucket + (hop_size - 1) - ht_size` could be something really close to `-2^32`, so casting to signed 32-bits `int32_t` could cause an overflow.
|
2017/09/07
|
[
"https://Stackoverflow.com/questions/46105567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6531241/"
] |
Since you're using the `stdint` header, you can convert the operands to 64-bit signed values and compare those: there is no risk of wraparound in the terms on the right, and we cast the left operand to a signed integer as well to avoid the undefined/implementation-defined behaviour of comparing signed and unsigned values:
```
if ((int64_t)empty_bucket < ((int64_t)base_bucket + ((int64_t)hop_size - 1) - (int64_t)ht_size))
```
To sum it up:
* no risk of overflow (I may have cast a little too much on the right side)
* comparison between signed entities
* On the downside, 64 bit conversion may have a negative impact on the performance on a 32 bit architecture
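The hazard the cast avoids can be illustrated by simulating 32-bit unsigned wraparound (a Python sketch; the variable names follow the question):

```python
MASK32 = (1 << 32) - 1  # 32-bit unsigned wraparound mask

def naive_unsigned(empty_bucket, base_bucket, hop_size, ht_size):
    # what the original all-uint32_t C expression computes: modulo 2**32
    rhs = (base_bucket + (hop_size - 1) - ht_size) & MASK32
    return empty_bucket < rhs

def cast_to_int64(empty_bucket, base_bucket, hop_size, ht_size):
    # what the int64_t-cast expression computes: the true arithmetic value
    return empty_bucket < base_bucket + (hop_size - 1) - ht_size
```

With `base_bucket = 0`, `hop_size = 1`, `ht_size = 5` the right-hand side is mathematically −5, which wraps to 2³² − 5 in the unsigned version and flips the comparison.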
|
```
if (base_bucket + hop_size > ht_size + 1
&& empty_bucket < base_bucket + (hop_size - 1) - ht_size)
```
The first line checks if the right side of the comparison we want to perform is indeed a positive integer. It is done by checking that all the positive values (`base_bucket` and `hop_size`) are greater than all the negative values (`- 1` and `- ht_size`). It does this without using subtractions, so it is safe to do with unsigned integers.
@David Bowling suggested
```
if (empty_bucket + ht_size < base_bucket + (hop_size - 1))
```
the idea is basically the same, to make sure that both sides of the comparison are always positive. This works if `base_bucket` and `hop_size` are not both zero at the same time.
With both solutions there can theoretically still be overflow; you have to check that against your actual value ranges. If there is overflow, use a larger type.
Please disregard my earlier mention of short-circuit evaluation, because it is not relevant. If the integer sizes are 'normal', e.g. 16, 32, or 64 bits, this should work.
|
7,378,177
|
I don't really know how to explain what I want to do.
I will try to explain what I am doing. I built a website in ASP.NET 4 (WebForms), and I want my brother to be able to click a button, choose a file from his computer, and have it uploaded to my server.
I have no idea how to do it. It sounds very hard to do and I am really stuck with this for a few days now.
I don't care if it will be with JavaScript, HTML or C#, I just really need it to work.
|
2011/09/11
|
[
"https://Stackoverflow.com/questions/7378177",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/796862/"
] |
There's an ASP.NET control made just for that, [the `FileUpload` control](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.aspx). Here's a [handy example](http://asp.net-tutorials.com/controls/file-upload-control/).
Note that it's notoriously difficult to style if you want to apply CSS and make it elegant, but there are more advanced ways around that. Also, this won't give your web application access to the client's local files or anything like that, it's just a standard file open dialog box for the user to select a file and upload it.
I also *highly* recommend doing a lot of input checking when accepting files. File type, file size, etc. are all important.
|
You have two options really: use a traditional FileUpload control (from the toolbox), or use the AJAX AsyncFileUpload control.
Either way it will allow your brother to upload a file from his computer to your server.
|
1,981,713
|
Prove that $\nu(mn) \le \nu(m)\nu(n)$
I think it is sufficient to prove this inequality for powers of prime numbers so that you can use the formula for $\nu$, but I'm not sure how the inequality factors in.
|
2016/10/23
|
[
"https://math.stackexchange.com/questions/1981713",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/366102/"
] |
Firstly I should warn you that I am just learning about sheaves and presheaves for the first time at the time of writing this answer.
With that out of the way, lets start off with a couple of definitions :
>
> **Definition** (presheaf) For $X$ a topological space, a *presheaf* (of groups) $\mathcal{F}$ on $X$ is a function such that:
>
>
> 1. for every open set $U\subseteq X$, there is a corresponding group $\mathcal{F}(U)$,
> 2. for every inclusion $V\subseteq U$ of open sets in $X$, there is a group homomorphism $$\rho\_{\_{U,V}}:\mathcal{F}(U)\to\mathcal{F}(V),$$
>
>
> such that
>
>
> 1. $\mathcal{F}(\emptyset)=0$,
> 2. $\rho\_{\_{U,U}}$ is the identity map $\mathcal{F}(U)\to\mathcal{F}(U)$,
> 3. if we have the inclusion $W\subseteq V\subseteq U$ of open sets of $X$, then $\rho\_{\_{U,W}}=\rho\_{\_{V,W}}\circ \rho\_{\_{U,V}}$.
>
>
>
$\color{white}{hi}$
>
> **Definition** (sheaf)
> A *sheaf* on a topological space $X$ is a presheaf that satisfies the following:
>
>
> 1. if $\{U\_i\}$ is an open cover of an open subset $U\subseteq X$, and if $s\in \mathcal{F}(U)$ is such that $s|\_{U\_{i}}=0$ for each $i$, then $s=0$,
> 2. if $\{U\_i\}$ is an open cover of an open subset $U\subseteq X$, and if we have $s\_i\in\mathcal{F}(U\_i)$ for each $i$, with the property that for each $i,j$, we have $s\_i|\_{U\_i\cap U\_j}=s\_j|\_{U\_i\cap U\_j}$, then there exists an $s\in \mathcal{F}(U)$ such that $s|\_{U\_i}=s\_i$ for all $i$.
>
>
>
As an example we can consider two different presheaves of groups on $X$, where $X$ is the two-point topological space $\{a,b\}$ with the discrete topology. One of these presheaves will be a sheaf, and the other will not.
We can define one presheaf on $X$ as follows:
$$\mathcal{F\_1}(\emptyset)=0,\ \mathcal{F\_1}(\{a\})=\mathbb{Z},\ \mathcal{F\_1}(\{b\})=\mathbb{Z},\ \text{and } \mathcal{F\_1}(\{a,b\})=\mathbb{Z}\times\mathbb{Z}, $$
with the following *restriction maps*
\begin{align\*}
\begin{array}{l l}
&\rho\_{\_{X,X}}:\mathcal{F\_1}(\{a,b\})\to\mathcal{F\_1}(X)\ &\text{ by }\ (z\_1,z\_2)\to (z\_1,z\_2)\\
&\rho\_{\_{X,\{a\}}}:\mathcal{F\_1}(\{a,b\})\to\mathcal{F\_1}(\{a\})\ &\text{ by }\ (z\_1,z\_2)\to (z\_1,0)\\
&\rho\_{\_{X,\{b\}}}:\mathcal{F\_1}(\{a,b\})\to\mathcal{F\_1}(\{b\})\ &\text{ by }\ (z\_1,z\_2)\to (0,z\_2)\\
&\rho\_{\_{X,\emptyset}}:\mathcal{F\_1}(\{a,b\})\to\mathcal{F\_1}(\emptyset)\ &\text{ by }\ (z\_1,z\_2)\to (0,0)\\
& \rho\_{\_{\{a\},\{a\}}}:\mathcal{F\_1}(\{a\})\to\mathcal{F\_1}(\{a\})\ & \text{ by }\ (z\_1,0)\to (z\_1,0)\\
&\rho\_{\_{\{a\},\emptyset}}:\mathcal{F\_1}(\{a\})\to\mathcal{F\_1}(\emptyset)\ & \text{ by }\ (z\_1,0)\to (0,0)\\
&\rho\_{\_{\{b\},\{b\}}}:\mathcal{F\_1}(\{b\})\to\mathcal{F\_1}(\{b\})\ & \text{ by }\ (0,z\_2)\to (0,z\_2)\\
&\rho\_{\_{\{b\},\emptyset}}:\mathcal{F\_1}(\{b\})\to\mathcal{F\_1}(\emptyset)\ & \text{ by }\ (0,z\_2)\to (0,0)\\
&\rho\_{\_{\emptyset,\emptyset}}:\mathcal{F\_1}(\emptyset)\to\mathcal{F\_1}(\emptyset)\ & \text{ by }\ (0,0)\to (0,0).
\end{array}
\end{align\*}
It turns out that this is a presheaf (which can be seen by just checking each part of the definition).
It also turns out that this is a sheaf (we now check the two parts of the sheaf definition):
1. We will just verify part (1.) of the sheaf definition for a single open cover (of an open set), however all the other open covers are just as straightforward to check. We consider the open cover $\{\{a\}, \{b\}\}$ of $X$. Now for any $(z\_1,z\_2)\in\mathcal{F\_1}(X)=\mathbb{Z}\times\mathbb{Z}$, if $(z\_1,z\_2)|\_{\{a\}}=z\_1=0$ then we must have $z\_1=0$. Similarly if $(z\_1,z\_2)|\_{\{b\}}=z\_2=0$ then we must have $z\_2=0$. Therefore in order for $(z\_1,z\_2)|\_{\{a\}}=0$ and $(z\_1,z\_2)|\_{\{b\}}=0$ we must have $(z\_1,z\_2)=(0,0)$, as desired.
2. For part two of the sheaf definition we again just check a single example, but all other cases are similar. If we consider our open cover $\{\{a\}, \{b\}\}$ of $X$, then for any $(z\_1,0)\in \mathcal{F\_1}(\{a\})\cong\mathbb{Z}$ and any $(0,z\_2)\in \mathcal{F\_1}(\{b\})\cong\mathbb{Z}$ we have
$$ (z\_1,0)|\_{\{a\}\cap\{b\}}=(z\_1,0)|\_{\emptyset}=0=(0,z\_2)|\_{\emptyset}=(0,z\_2)|\_{\{a\}\cap\{b\}},$$
and we see that there is an element $(z\_1,z\_2)\in\mathcal{F\_1}(X)$ such that
$$(z\_1,z\_2)|\_{\{a\}}=(z\_1,0)\ \text{ and }\ (z\_1,z\_2)|\_{\{b\}}=(0,z\_2),$$
as desired.
We now give an example of a presheaf of groups on $X$ which is NOT a sheaf. This time we define a presheaf on $X$ as follows:
$$\mathcal{F\_2}(\emptyset)=0,\ \mathcal{F\_2}(\{a\})=\mathbb{Z},\ \mathcal{F\_2}(\{b\})=\mathbb{Z},\ \text{and } \mathcal{F\_2}(\{a,b\})=\mathbb{Z}\times\mathbb{Z}\times \mathbb{Z}, $$
with the following *restriction maps*
\begin{align\*}
\begin{array}{l l}
&\rho\_{\_{X,X}}:\mathcal{F\_2}(\{a,b\})\to\mathcal{F\_2}(X)\ &\text{ by }\ (z\_1,z\_2,z\_3)\to (z\_1,z\_2,z\_3)\\
&\rho\_{\_{X,\{a\}}}:\mathcal{F\_2}(\{a,b\})\to\mathcal{F\_2}(\{a\})\ &\text{ by }\ (z\_1,z\_2,z\_3)\to (z\_1,0,0)\\
&\rho\_{\_{X,\{b\}}}:\mathcal{F\_2}(\{a,b\})\to\mathcal{F\_2}(\{b\})\ &\text{ by }\ (z\_1,z\_2,z\_3)\to (0,z\_2,0)\\
&\rho\_{\_{X,\emptyset}}:\mathcal{F\_2}(\{a,b\})\to\mathcal{F\_2}(\emptyset)\ &\text{ by }\ (z\_1,z\_2,z\_3)\to (0,0,0)\\
& \rho\_{\_{\{a\},\{a\}}}:\mathcal{F\_2}(\{a\})\to\mathcal{F\_2}(\{a\})\ & \text{ by }\ (z\_1,0,0)\to (z\_1,0,0)\\
&\rho\_{\_{\{a\},\emptyset}}:\mathcal{F\_2}(\{a\})\to\mathcal{F\_2}(\emptyset)\ & \text{ by }\ (z\_1,0,0)\to (0,0,0)\\
&\rho\_{\_{\{b\},\{b\}}}:\mathcal{F\_2}(\{b\})\to\mathcal{F\_2}(\{b\})\ & \text{ by }\ (0,z\_2,0)\to (0,z\_2,0)\\
&\rho\_{\_{\{b\},\emptyset}}:\mathcal{F\_2}(\{b\})\to\mathcal{F\_2}(\emptyset)\ & \text{ by }\ (0,z\_2,0)\to (0,0,0)\\
&\rho\_{\_{\emptyset,\emptyset}}:\mathcal{F\_2}(\emptyset)\to\mathcal{F\_2}(\emptyset)\ & \text{ by }\ (0,0,0)\to (0,0,0).
\end{array}
\end{align\*}
Again one can check that this is a presheaf (just use the definition). However, if we try to show that this is a sheaf, then we run into trouble with part (1.) of the sheaf definition (even though part (2.) actually ends up holding).
To see that part (1.) of the sheaf definition is not true for this presheaf, consider the open cover $\{\{a\}, \{b\}\}$ of $X$. We have that $(0,0,1)\in \mathcal{F\_2}(X)$, and that
$$(0,0,1)|\_{\{a\}}=(0,0,0)=(0,0,1)|\_{\{b\}}, $$
however $(0,0,1)\neq (0,0,0)$, which contradicts part (1.) of the sheaf definition.
Finally I should note that a similar example of a presheaf that is not a sheaf is given on the [Sheaf Wikipedia page](http://en.wikipedia.org/wiki/Sheaf_(mathematics)#Examples), and I also mention that these same ideas should carry over to when $X$ is a three-point topological space with the discrete topology.
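The failing axiom can even be checked mechanically. A small Python sketch encoding just the restriction maps from $\mathcal{F}\_i(X)$ down to the two singleton opens (the function names are mine, for illustration only):

```python
def f1_restrict(s, u):
    """Restrict a section (z1, z2) of F1 over X = {a, b} to {a} or {b}."""
    return (s[0], 0) if u == "a" else (0, s[1])

def f2_restrict(s, u):
    """Restrict a section (z1, z2, z3) of F2 over X to {a} or {b}."""
    return (s[0], 0, 0) if u == "a" else (0, s[1], 0)

# F2 violates the identity axiom on the cover {{a}, {b}}:
# (0, 0, 1) restricts to zero on both opens yet is itself nonzero.
# For F1, the only section restricting to zero on both opens is zero.
```
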
|
Consider the space $\mathbb R$ and the presheaf of sets $F$ such that for each open set $U$ the set $F(U)$ is the set of constant functions on $U$ whose integral is at most $1$.
Is it a sheaf?
|
4,488,230
|
I was trying to understand ramified coverings of a punctured sphere and came across the book "Graphs on Surfaces and Their Applications" by S. Lando and A. Zvonkin. In the first chapter they define a *constellation* as
>
> A sequence $[ g\_1, g\_2, ...,g\_k ]$, where $g\_i \in S\_n$, is called a constellation (or a $k$-constellation) if the following two properties
> are satisfied:
>
>
> * the group $G = \langle g\_1, g\_2,...,g\_k \rangle$ acts transitively on the set of $n$ points
> * the product of $g\_i$ is the identity permutation: $g\_1g\_2...g\_k = id$
>
>
>
So here is my question: if $G$ acts transitively on a set of $n$ points, why is it not necessarily equal to $S\_n$?
|
2022/07/07
|
[
"https://math.stackexchange.com/questions/4488230",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/870755/"
] |
A transitive action just means that any point can go to any other point. This can happen when $G$ is much smaller than $S\_n$: for example, if $G$ is the group of order $n$ generated by the $n$-cycle $(1,2,3,\ldots,n)$. If $G=S\_n$ then the points $\{1,2,\ldots,n\}$ can be sent to any permutation of the $n$ points, which is a much stronger condition than transitivity.
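The cyclic example is easy to check by machine. A quick sketch with permutations represented as tuples (the `compose` helper and setup are mine, for illustration):

```python
n = 5
identity = tuple(range(n))
cycle = tuple((i + 1) % n for i in range(n))  # the n-cycle sending i to i+1 mod n

def compose(p, q):
    """Composite permutation: first apply q, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

# Generate the cyclic group <cycle> by repeated composition.
group = {identity}
g = cycle
while g != identity:
    group.add(g)
    g = compose(cycle, g)
```

The group is transitive (every point is the image of 0 under some element), yet it has only $n$ elements, far fewer than $|S\_n| = n!$.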
|
By the orbit-stabilizer theorem, every subgroup $G$ of $S\_n$ all of whose point stabilizers have order $a$ with $n=\frac{|G|}{a}$ acts transitively on $\{1,\dots,n\}$. @Ted's answer provides a minimal example, with $|G|=n$ and $a=1$.
|
400,914
|
I have a few questions I hope you can help me answer.
First, I'll introduce myself. I'm a finance undergraduate student in Australia, but I'm originally from Norway. Throughout school I always loved math, but I ended up studying finance. The last year or so I have started to realise that I should have done Computer Science or Engineering instead, as I would like to see myself in a quant role when I finish studying. Even so, I've decided to finish this finance degree.
The last month or so I've started programming in C++, and refreshing up on my maths knowledge (Khan Academy, algebra, precalculus). My problem is I don't know where to go from here, and what order I should be learning the different branches of math.
I've picked up a book on linear algebra today; "Linear algebra and its applications" by Gilbert Strang. Do you think this is an alright place to start? Or do I start with basic calculus? And where can I go next? Differential equations?
I hope some of you can help me in the right direction. Thanks.
|
2013/05/24
|
[
"https://math.stackexchange.com/questions/400914",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/79235/"
] |
I'd recommend you check out the Mathematics section [here](http://www.openculture.com/freeonlinecourses) for free online video courses offered by some of the top institutions.
|
Many of these answers seem inappropriate since you see yourself in a quant role after you graduate. I would suggest that you build your resume towards getting the job you want, and then learn the mathematics necessary for the job that you get *after you get it*. And for building your resume for a quant job, I would suggest you continue focusing on your programming skills, and also focus on learning statistics. Both of these things look great on a resume.
To learn statistics, you *might* want to be familiar with integral calculus: it depends on which kind of statistics you are looking at. And as for your programming skills, don't seek out learning mathematics as if it is a prerequisite to programming well. Instead just learn to program and mathematical topics will naturally present themselves to you as you need to know them, and *then* you learn about them.
In summary, don't worry so much about the *prerequisites* to what you want to know; just dive in to what you need to learn about, and learn the appropriate math along the way.
|
69,427
|
Sometimes PLSS section corners do not align

How could I automate the detection of these corners and generate a new shapefile showing where this occurs? I have a shapefile of 147k section polygons that I would like to analyze.
I am on ArcGIS 10.1 and have Spatial Analyst and 3D analyst licenses.
Based on comments below, I know I could count the sides of polygons to detect some of these occurrences. This would not work in every case where corners are misaligned. Any other suggestions?
On a side note, why do the corners in PLSS sections not always align?
|
2013/08/22
|
[
"https://gis.stackexchange.com/questions/69427",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/6580/"
] |
With a bit of programming, you can identify points where the number of lines that intersect the point is not 4.
You don't mention what version of arcgis, but with the lowest level (Basic, ArcView, or whatever Esri is calling it this week) you should be able to build a MapTopology.
The code in [this answer](https://gis.stackexchange.com/a/17748/59) can be edited to accomplish this, by replacing this line:
```
node.Degree == 1
```
With
```
node.Degree != 4
```
|
**A "misaligned" corner borders three rather than four polygons.** Although not all such corners will be misaligned--such things can happen around the perimeter of the PLSS system and along natural boundaries--finding these places will provide an efficient screen that picks up all misaligned corners with very few false positives.
It may be difficult to identify "corners," though. Instead I propose doing the calculation with a raster representation of the data: the [focal variety](http://help.arcgis.com/en/arcgisdesktop/10.0/help/index.html#//009z000000qs000000.htm) in a 2 x 2 neighborhood will equal 3 at all potentially misaligned corners.
You need to use a cellsize small enough to detect slight misalignments. This limits the resolution to about 100 meters when processing the entire US, for otherwise the grid will become unmanageably large. A practical limit is around 10-25 meters, achieved by processing the regions in smaller tiles.
As a check of this approach I carried out the focal variety calculation in geographic coordinates on a 0.001 degree grid covering half the conterminous US. It contains a half billion cells representing nearly 40,000 PLSS polygons. (It occupies 84 MB on disk in its native ESRI format.)

*This figure shows potentially misaligned corner cells in red and apparently aligned ones in cyan.*
This trial calculation consumed less than 25 MB RAM and required one minute to complete. It found 26,225 cells with a focal variety of 3. Because each misalignment introduces two such cells, this suggests approximately (26000/2)/40000 = around one-third of all corners are "misaligned." This includes corners occurring along natural boundaries (rivers, creeks, and large lakes), *etc*.
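For readers without Spatial Analyst handy, the 2 × 2 focal-variety screen is simple to prototype. A plain-Python sketch on a toy label grid (the grid and helper are illustrative, not actual PLSS data):

```python
def focal_variety_2x2(grid):
    """Distinct-label count in every 2x2 window of a list-of-lists grid."""
    return [[len({grid[i][j], grid[i][j + 1],
                  grid[i + 1][j], grid[i + 1][j + 1]})
             for j in range(len(grid[0]) - 1)]
            for i in range(len(grid) - 1)]

# Three "sections" 1, 2, 3; section 3's corner is offset by one cell.
grid = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 3, 2],
    [3, 3, 3, 3],
]
variety = focal_variety_2x2(grid)
# windows with variety == 3 flag potentially misaligned corners
```
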
|
1,394,801
|
I'm working through Oystein Ore's *Number Theory and its History*. On p. 109, I'm stuck on #2.
>
> The question asks the reader to verify the following identity [Note: $(x,y)=\gcd(x,y)$]:
>
>
> $$(ab,cd)=(a,c)(b,d)\left(\frac{a}{(a,c)},\frac{d}{(b,d)}\right)\left(\frac{c}{(a,c)},\frac{b}{(b,d)}\right)$$
>
>
>
I've tried numerous numeric examples and not found an exception. I've tried a messy proof, substituting sample factors and exponents, but it's not very cohesive, clear, or robust. Clearly, if $a,b,c,d$ are all relatively prime, the answer is clear. I don't know how to concisely prove this if that's not the case though.
I've tried using the idea that $m(x,y)=(mx,my)$ to get rid of the denominators, but I still end up with some fractions. I've tried to use the symmetry of the fractions to simplify things.
I also looked at this link without significant progress:
[Is $\gcd(a,b)\gcd(c,d)=\gcd(ac,bd)$?](https://math.stackexchange.com/questions/138009/is-gcda-b-gcdc-d-gcdac-bd)
|
2015/08/12
|
[
"https://math.stackexchange.com/questions/1394801",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/192092/"
] |
**Theorem 1.** Let $x$ and $y$ be two integers. Then, there exist integers $p$
and $q$ such that $px+qy=\gcd\left( x,y\right) $.
Theorem 1 is Bezout's theorem, and we assume it to be known. Notice that the
integers $x$ and $y$ are allowed to be $0$ (even both of them, in which case
we use the convention $\gcd\left( 0,0\right) =0$).
**Proposition 2.** Let $n$ and $m$ be two nonnegative integers such that
$n\mid m$ and $m\mid n$. Then, $m=n$.
Proposition 2 is obvious. Equalities between gcd's are usually proven with the
help of Proposition 2.
**Lemma 3.** Let $x$, $y$, $z$ and $w$ be four integers such that $\gcd\left(
x,z\right) =1$ and $\gcd\left( y,w\right) =1$. Then, $\gcd\left(
xy,zw\right) =\gcd\left( y,z\right) \cdot\gcd\left( x,w\right) $.
*Proof of Lemma 3.* Theorem 1 (applied to $w$ instead of $y$) shows that there
exist integers $p$ and $q$ such that $px+qw=\gcd\left( x,w\right) $. Let us
denote these $p$ and $q$ by $p\_{1}$ and $q\_{1}$. Thus, $p\_{1}$ and $q\_{1}$ are
integers satisfying $p\_{1}x+q\_{1}w=\gcd\left( x,w\right) $.
Theorem 1 (applied to $y$ and $z$ instead of $x$ and $y$) shows that there
exist integers $p$ and $q$ such that $py+qz=\gcd\left( y,z\right) $. Let us
denote these $p$ and $q$ by $p\_{2}$ and $q\_{2}$. Thus, $p\_{2}$ and $q\_{2}$ are
integers satisfying $p\_{2}y+q\_{2}z=\gcd\left( y,z\right) $.
Theorem 1 (applied to $z$ instead of $y$) shows that there exist integers $p$
and $q$ such that $px+qz=\gcd\left( x,z\right) $. Let us denote these $p$
and $q$ by $g$ and $h$. Thus, $g$ and $h$ are integers satisfying
$gx+hz=\gcd\left( x,z\right) $. Hence, $gx+hz=\gcd\left( x,z\right) =1$.
Theorem 1 (applied to $y$ and $w$ instead of $x$ and $y$) shows that there
exist integers $p$ and $q$ such that $py+qw=\gcd\left( y,w\right) $. Let us
denote these $p$ and $q$ by $g^{\prime}$ and $h^{\prime}$. Thus, $g^{\prime}$
and $h^{\prime}$ are integers satisfying $g^{\prime}y+h^{\prime}w=\gcd\left(
y,w\right) $. Hence, $g^{\prime}y+h^{\prime}w=\gcd\left( y,w\right) =1$.
Now,
$\underbrace{\gcd\left( y,z\right) }\_{=p\_{2}y+q\_{2}z}\cdot\underbrace{\gcd
\left( x,w\right) }\_{=p\_{1}x+q\_{1}w}$
$=\left( p\_{2}y+q\_{2}z\right) \cdot\left( p\_{1}x+q\_{1}w\right) $
$=p\_{1}p\_{2}xy+q\_{1}p\_{2}\underbrace{yw}\_{=yw1}+q\_{2}p\_{1}\underbrace{xz}
\_{=xz1}+q\_{1}q\_{2}zw$
$=p\_{1}p\_{2}xy+q\_{1}p\_{2}yw\underbrace{1}\_{=gx+hz}+q\_{2}p\_{1}xz\underbrace{1}
\_{=g^{\prime}y+h^{\prime}w}+q\_{1}q\_{2}zw$
$=p\_{1}p\_{2}xy+q\_{1}p\_{2}yw\left( gx+hz\right) +q\_{2}p\_{1}xz\left(
g^{\prime}y+h^{\prime}w\right) +q\_{1}q\_{2}zw$
$=p\_{1}p\_{2}xy+q\_{1}p\_{2}ywgx+q\_{1}p\_{2}ywhz+q\_{2}p\_{1}xzg^{\prime}
y+q\_{2}p\_{1}xzh^{\prime}w+q\_{1}q\_{2}zw$
$=\left( p\_{1}p\_{2}+q\_{1}p\_{2}wg+q\_{2}p\_{1}zg^{\prime}\right) xy+\left(
q\_{1}p\_{2}yh+q\_{2}p\_{1}xh^{\prime}+q\_{1}q\_{2}\right) zw$ (by a
straightforward computation)
is a $\mathbb{Z}$-linear combination of $xy$ and $zw$, and therefore divisible
by $\gcd\left( xy,zw\right) $ (since both $xy$ and $zw$ are divisible by
$\gcd\left( xy,zw\right) $). In other words,
**(1)** $\gcd\left( xy,zw\right) \mid\gcd\left( y,z\right) \cdot
\gcd\left( x,w\right) $.
On the other hand, multiplying the relations
$\gcd\left( y,z\right) \mid y$ and $\gcd\left(
x,w\right) \mid x$, we obtain $\gcd\left( y,z\right) \cdot\gcd\left(
x,w\right) \mid yx=xy$. Also, multiplying the relations
$\gcd\left( y,z\right) \mid z$ and
$\gcd\left( x,w\right) \mid w$, we obtain $\gcd\left( y,z\right) \cdot
\gcd\left( x,w\right) \mid zw$. We thus know that both $xy$ and $zw$ are
divisible by $\gcd\left( y,z\right) \cdot\gcd\left( x,w\right) $.
Therefore, the greatest common divisor of $xy$ and $zw$ is also divisible by
$\gcd\left( y,z\right) \cdot\gcd\left( x,w\right) $. In other words, we have
**(2)** $\gcd\left( y,z\right) \cdot\gcd\left( x,w\right) \mid\gcd\left(
xy,zw\right) $.
Now, we have proven **(1)** and **(2)**. Thus, we can apply Proposition 2 to
$n=\gcd\left( y,z\right) \cdot\gcd\left( x,w\right) $ and $m=\gcd\left(
xy,zw\right) $. We thus obtain $\gcd\left( xy,zw\right) =\gcd\left(
y,z\right) \cdot\gcd\left( x,w\right) $. This proves Lemma 3.
**Theorem 4.** Let $a$, $b$, $c$ and $d$ be four integers. Let $n=\gcd\left(
a,c\right) $ and $m=\gcd\left( b,d\right) $; assume that $n\neq0$ and
$m\neq0$. Then,
$\gcd\left( ab,cd\right) =\gcd\left( a,c\right) \cdot\gcd\left(
b,d\right) \cdot\gcd\left( \dfrac{a}{n},\dfrac{d}{m}\right) \cdot
\gcd\left( \dfrac{c}{n},\dfrac{b}{m}\right) $.
*Proof of Theorem 4.* Let $x=\dfrac{a}{n}$, $y=\dfrac{b}{m}$, $z=\dfrac{c}{n}$
and $w=\dfrac{d}{m}$. Then, $a=nx$, $b=my$, $c=nz$ and $d=mw$. Also,
$x=\dfrac{a}{n}$ is an integer (since $n=\gcd\left( a,c\right) \mid a$), and
similarly $y$, $z$ and $w$ are integers.
Now, $n=\gcd\left( \underbrace{a}\_{=nx},\underbrace{c}\_{=nz}\right)
=\gcd\left( nx,nz\right) =n\gcd\left( x,z\right) $. Since $n\neq0$, we can
divide this equality by $n$, and obtain $1=\gcd\left( x,z\right) $. The same
argument (using $m,b,d,y,w$ instead of $n,a,c,x,z$) shows that $1=\gcd\left(
y,w\right) $. Thus, Lemma 3 yields
$\gcd\left( xy,zw\right) =\underbrace{\gcd\left( y,z\right) }
\_{=\gcd\left( z,y\right) }\cdot\gcd\left( x,w\right) =\gcd\left(
z,y\right) \cdot\gcd\left( x,w\right) $
$=\gcd\left( x,w\right) \cdot\gcd\left( z,y\right) $.
But
$\gcd\left( \underbrace{a}\_{=nx}\underbrace{b}\_{=my},\underbrace{c}
\_{=nz}\underbrace{d}\_{=mw}\right) =\gcd\left( nxmy,nzmw\right) =\gcd\left(
nm\cdot xy,nm\cdot zw\right) $
$=nm\cdot\underbrace{\gcd\left( xy,zw\right) }\_{=\gcd\left( w,x\right)
\cdot\gcd\left( z,y\right) }=\underbrace{n}\_{=\gcd\left( a,c\right)
}\underbrace{m}\_{=\gcd\left( b,d\right) }\cdot\gcd\left( \underbrace{x}
\_{=\dfrac{a}{n}},\underbrace{w}\_{=\dfrac{d}{m}}\right) \cdot\gcd\left(
\underbrace{z}\_{=\dfrac{c}{n}},\underbrace{y}\_{=\dfrac{b}{m}}\right) $
$=\gcd\left( a,c\right) \cdot\gcd\left( b,d\right) \cdot\gcd\left(
\dfrac{a}{n},\dfrac{d}{m}\right) \cdot\gcd\left( \dfrac{c}{n},\dfrac{b}
{m}\right) $.
Theorem 4 is proven.
This is probably not the simplest or shortest proof, but was the easiest one
to write (it took me almost no focus and very little editing, just a lot of
copy & paste). The annoying computations in the proof of Lemma 3 could have
been simplified using ideal notation, but I don't know if you have this
background. There is certainly an alternative proof by comparing exponents of primes, but my kind of argument generalizes better. For example, Lemma 3 above can be straightforwardly generalized to the following result:
**Lemma 5.** Let $A$ be a commutative ring. Let $X$, $Y$, $Z$ and $W$ be four ideals of $A$ such that $X+Z=A$ and $Y+W=A$. Then, $XY+ZW = \left(Y+Z\right)\left(X+W\right)$.
Lemma 3 can be recovered from Lemma 5 by setting $A = \mathbb Z$, $X = x \mathbb Z$, $Y = y \mathbb Z$, $Z = z \mathbb Z$ and $W = w \mathbb Z$. The proof I gave for Lemma 3 is essentially a proof for Lemma 5, artificially restricted to the case of principal ideals in $\mathbb Z$. Theorem 4 is harder to generalize, since it is not clear what the analogue of (for example) $\dfrac{a}{n}$ is for ideals; but given that it is a corollary of Lemma 3, a point could be made in favor of regarding Lemma 3 as the main theorem.
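Theorem 4 is also easy to confirm numerically; a brute-force check over small positive integers:

```python
from math import gcd

def identity_holds(a, b, c, d):
    n, m = gcd(a, c), gcd(b, d)
    lhs = gcd(a * b, c * d)
    rhs = n * m * gcd(a // n, d // m) * gcd(c // n, b // m)
    return lhs == rhs
```

For example, with $a=4$, $b=6$, $c=2$, $d=9$: both sides equal $6$, and the identity holds across all quadruples in a small range.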
|
After being away from the problem for more than a year, Leox's comment reminded me of the problem, so I looked at it again. I think I solved it! (EDIT: Per the comments, this answer is incomplete and has an error.) I use just the basic GCD (a,b) and LCM [a,b] identities presented in the book to that point:
$$ab=(a,b)[a,b]$$ and $$(ma,mb)=m(a,b)$$
It's a bit involved, but I begin with the right side of the given identity and work to yield the left side.
$$(a,c)(b,d)\left(\frac{a}{(a,c)},\frac{d}{(b,d)}\right)\left(\frac{c}{(a,c)},\frac{b}{(b,d)}\right)$$
Reorganize terms:
$$=(a,c)\left(\frac{a}{(a,c)},\frac{d}{(b,d)}\right)(b,d)\left(\frac{c}{(a,c)},\frac{b}{(b,d)}\right)$$
Multiply the first big parenthesis by $(a,c)$ and the second by $(b,d)$:
$$=\left(\frac{a(a,c)}{(a,c)},\frac{d(a,c)}{(b,d)}\right)\left(\frac{c(b,d)}{(a,c)},\frac{b(b,d)}{(b,d)}\right)$$
Simplify:
$$=\left(a,\frac{d(a,c)}{(b,d)}\right)\left(\frac{c(b,d)}{(a,c)},b\right)$$
Substitute $1/(b,d)=[b,d]/bd$ and $1/(a,c)=[a,c]/ac$:
$$=\left(a,d(a,c)\frac{[b,d]}{bd}\right)\left(c(b,d)\frac{[a,c]}{ac},b\right)$$
Cancel $d$'s from first term and $c$'s from second term and rewrite:
$$=\left(a,\frac{(a,c)[b,d]}{b}\right)\left(\frac{(b,d)[a,c]}{a},b\right)$$
Treat the whole first term like $m$ in $m(a,b)=(ma,mb)$ and multiply it into the second term:
$$=\left(\left(a,\frac{(a,c)[b,d]}{b}\right)\frac{(b,d)[a,c]}{a},\left(a,\frac{(a,c)[b,d]}{b}\right)b\right)$$
Multiply:
$$=\left(\left(\frac{a(b,d)[a,c]}{a},\frac{(a,c)[b,d](b,d)[a,c]}{ba}\right),\left(ab,\frac{b(a,c)[b,d]}{b}\right)\right)$$
Simplify fractions, and note that $(a,c)[a,c]=ac$ and $[b,d](b,d)=bd$:
$$=\left(\left((b,d)[a,c],\frac{acbd}{ba}\right),\left(ab,\frac{(a,c)[b,d]}{1}\right)\right)$$
$$=\bigg(\Big((b,d)[a,c],cd\Big),\Big(ab,(a,c)[b,d]\Big)\bigg)$$
Again note that $(a,c)[a,c]=ac$ and $[b,d](b,d)=bd$ to rewrite the individual GCD and LCM terms:
$$=\bigg(\Big(\frac{bd}{[b,d]}\frac{ac}{(a,c)},cd\Big),\Big(ab,\frac{ac}{[a,c]}\frac{bd}{(b,d)}\Big)\bigg)$$
$$=\bigg(\Big(\frac{abcd}{[b,d](a,c)},cd\Big),\Big(ab,\frac{abcd}{[a,c](b,d)}\Big)\bigg)$$
Factor, using the idea $(ma,mb)=m(a,b)$:
$$=\bigg(cd\Big(\frac{ab}{[b,d](a,c)},1\Big),ab\Big(1,\frac{cd}{[a,c](b,d)}\Big)\bigg)$$
*I don't have a good answer why these fractions have to work out to be natural numbers, but algebraically that's the result when factored.*
The GCD of 1 and any other natural number is 1:
$$=\Big(cd\*1,ab\*1\Big)=\Big(cd,ab\Big)=\Big(ab,cd\Big)$$
|
38,969,397
|
Suppose I have the following actor:
```
class A extends Actor {
  def receive = {
    case Unlock => sender ! UnlockResponse(Boolean_code)
    case Read   => sender ! ReadResponse(somedata)
    case Lock   => sender ! LockResponse(Boolean_code)
  }
}
```
I would like to write a helper method where I can call unlock, read, and lock sequentially, and return the "somedata" from the ReadResponse to the method caller after the last step (lock) has completed.
I like the for-comprehension approach, but how do I write it with a bit more flexibility, for example to continue to Read only if Unlock returns UnlockResponse(true) in the first step, but abort if the return is UnlockResponse(false)?
|
2016/08/16
|
[
"https://Stackoverflow.com/questions/38969397",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/141516/"
] |
You have `lower_bound` to find the first item whose key is not less than your search value.
```
auto it = mymap.lower_bound( value );
```
From [cplusplus map::lower\_bound](http://www.cplusplus.com/reference/map/map/lower_bound/)
>
> A similar member function, upper\_bound, has the same behavior as lower\_bound, except in the case that the map contains an element with a key equivalent to k: In this case, lower\_bound returns an iterator pointing to that element, whereas upper\_bound returns an iterator pointing to the next element.
>
>
>
So `lower_bound` returns the first value which is not less than the search. This means that for the preceding value, you would need `lower_bound - 1`, but only in the case where `lower_bound != begin()`
```
auto it = mymap.lower_bound( value );
// check for end() before dereferencing, and for begin() before stepping back
if( it != mymap.begin() && ( it == mymap.end() || it->first != value ) ) {
    it --;
}
```
or to use `upper_bound`
```
auto it = mymap.upper_bound( value );
if( it != mymap.begin() ) {
it --;
}
```
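The same lower/upper bound logic can be sketched outside C++ with Python's `bisect` module, where `bisect_left`/`bisect_right` on the sorted key sequence correspond to `lower_bound`/`upper_bound` (a sketch of the idea, not the map API itself; the example keys are arbitrary):

```python
from bisect import bisect_left, bisect_right

keys = [100, 1000, 2000, 3000, 5000]  # sorted keys of the map
value = 150

# bisect_left  ~ lower_bound: index of first key >= value
# bisect_right ~ upper_bound: index of first key >  value
i = bisect_right(keys, value)
if i > 0:                      # equivalent of "it != mymap.begin()"
    preceding = keys[i - 1]    # greatest key <= value
    print(preceding)           # 100
```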
|
*upper\_bound* looks for the key *greater than* (**>**) the supplied key or stops at the end of the map.
*lower\_bound* looks for the key *greater than or equal to* (**>=**) the supplied key or stops at the end of the map.
Given below is the code to find the closest range of input number: [Demo](http://ideone.com/S3dkGi)
```
typedef std::map<int,int>::iterator Iter;
Iter getIterator(std::map<int,int> &m, int val) {
Iter lb = m.upper_bound(val);
if(lb == m.begin()) {
return m.end();
}
Iter it = std::prev(lb);
if(it->first <= val && val <= it->second ) {
return it;
}
else{
return m.end();
}
}
int main() {
// your code goes here
std::map<int,int> mymap;
mymap[100]=200;
mymap[1000]=2000;
mymap[2000]=2500;
mymap[3000]=4000;
mymap[5000]=5100;
int a[4]{4500, 4000, 150, 0};
for(int x : a){
Iter it = getIterator(mymap, x);
if(it != mymap.end()){
cout << "Value " << x << " : Found in range: " << it->first << ", " << it->second <<endl;
}else{
cout << "Value " << x << " : NOT FOUND!" <<endl;
}
}
return 0;
}
```
|
1,134,350
|
My default browser is Chrome.
When I run my web applications within VS 2008 IDE, it uses chrome.
I want to use IE for this. Is there an option in VS 2008 to make IE the browser to load the web applications?
I just want this when using VS 2008 - I don't want to change my default browser each time otherwise.
EDIT: when creating a silverlight application.
|
2009/07/15
|
[
"https://Stackoverflow.com/questions/1134350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/108685/"
] |
Right click on .aspx file in your solution tree, and click Browse With...
then specify your default browser.
---
|
You could create a post-build event that runs FireFox.exe, pointing to the url of your website. You can edit post-build events by right-clicking on your Web Application, going to Compile, and clicking the Build Events button. Your post-build command could be something like "C:\Program Files\Mozilla Firefox\firefox.exe" <http://localhost/mysite> for example.
|
152,813
|
Is there a way to determine the width of the first column in a `tabular` enviroment?
I've thought about using a `savebox` but that would only give me the size of the text (and in one cell only), not the actual column width.
Or should I use `longtable` (or another package) to do that?
|
2014/01/07
|
[
"https://tex.stackexchange.com/questions/152813",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/36821/"
] |
This answer provides several possible solutions: the first one uses indices via the `imakeidx` package to produce the different divisions by topics; the second and third options use ToC-like mechanisms: the second one uses `\@starttoc` to produce multiple ToCs (one for each topic), and the third one shows how to use the features provided by the `titletoc` package to achieve the desired result.
Solution using indices
======================
Since the questions must be sorted according to different categories, this suggests using one of the available indexing facilities.
Below I show an option using the [`imakeidx`](http://www.ctan.org/pkg/imakeidx) package. The example considers four different categories and shows how the method easily generalizes. Since we want a special style for the index entries (similar to ToC entries), we need a custom style, such as the `myright.ist` style below.
The style file `myright.ist`:
```
% MakeIndex style file; page numbers flushed right and dot leaders
% between entries and page numbers
delim_0 "\\dotfill "
delim_1 "\\dotfill "
delim_2 "\\dotfill "
```
The document itself:
```
\documentclass{article}
\usepackage{amsthm}
\usepackage[
noautomatic
]{imakeidx}
\makeatletter
\xpatchcmd{\imki@putindex}
{\immediate\closeout}
{\ifimki@disableautomatic\else\immediate\closeout}
{}{}
\xpatchcmd{\imki@putindex}
{\endcsname}
{\endcsname\fi}
{}{}
\makeatother
\indexsetup{level=\subsection*,noclearpage,othercode={\parskip=5pt}}
\makeindex[
name=add,
title=Addition,
columns=1,
options=-s myright
] %% Define new index of addition
\makeindex[
name=mul,
title=Multiplication,
columns=1,
options=-s myright
] %% Define new index of multiplication
\makeindex[
name=exp,
title=Exponentiation,
columns=1,
options=-s myright
] %% Define new index of exponentiation
\makeindex[
name=sub,
title=Subtraction,
columns=1,
options=-s myright
] %% Define new index of subtraction
% Definition of the auxiliary environment for the questions
\theoremstyle{definition}
\newtheorem{que}{Question}
% Definition of the main environment for the questions
% it writes an entry to the ToC;
% it also writes an entry in the index specified in the mandatory argument
\let\tmpa\relax
\newenvironment{question}[2][]
{\begin{que}[#1]
\gdef\tmpb{#2}
\if\relax\detokenize{#1}\relax
\gdef\tmpa{\relax}
\else
\gdef\tmpa{~#1}
\fi
\addcontentsline{toc}{section}{Question~\theque.\tmpa}\ignorespaces
}
{\index[\tmpb]{Question~\theque.\tmpa}\end{que}}
% Simply produces a heading ''List of Topics'' formatted as unnumbered section
\newcommand\listoftopics{
\section*{List of Topics}
}
\begin{document}
\tableofcontents
\listoftopics
\printindex[add]
\printindex[mul]
\printindex[exp]
\printindex[sub]
\clearpage
\begin{question}[Commutativity]{add}
Is addition of real numbers commutative?
\end{question}
\begin{question}[Commutativity]{sub}
Is subtraction of real numbers commutative?
\end{question}
\begin{question}[Associativity]{mul}
Is multiplication of quaternions associative?
\end{question}
\begin{question}[Inverse]{mul}
Does inverses for non-negative quaternions always exist?
\end{question}
\begin{question}[General properties]{exp}
Which are the general properties for exponentiation in the field of complex numbers?
\end{question}
\begin{question}{add}
Is addition of real numbers modulative?
\end{question}
\begin{question}[Modulative]{sub}
Is subtraction of real numbers modulative?
\end{question}
\end{document}
```
The output showing the general ToC and the new list of questions divided by topics:

Explanation and instructions
----------------------------
The code contains some general explanatory comments.
The idea is to generate the special divisions in the list of topics using the [`imakeidx`](http://www.ctan.org/pkg/imakeidx) package.
To produce the questions I used a `question` environment defined with an auxiliary `que` theorem environment defined using `amsthm`, but the same idea applies (*mutatis mutandis*) to any other implementation of the `question` environment.
### Compilation instructions:
The document can be processed in (at least) three different ways; in all cases, make sure the file `myright.ist` is somewhere TeX can find it (the current working directory, for example):
1- Running `makeindex`: process your document (let's call it `topics.tex`) in the following way:
```
pdflatex topics
makeindex -s myright.ist add.idx
makeindex -s myright.ist mul.idx
makeindex -s myright.ist exp.idx
makeindex -s myright.ist sub.idx
pdflatex topics
```
2- Using `arara`: add the following lines to the document and let `arara` do its job (thanks to **Paulo Cereda** for suggesting this):
```
% arara: pdflatex
% arara: makeindex: { style: myright, files: [ add.idx, mul.idx ] }
% arara: makeindex: { style: myright, files: [ exp.idx, sub.idx ] }
% arara: pdflatex
```
3- From the editor itself: in this case, the code changes a little:
```
\documentclass{article}
\usepackage{amsthm}
\usepackage[
noautomatic% <---- A
]{imakeidx}
\makeatletter
\xpatchcmd{\imki@putindex}
{\immediate\closeout}
{\ifimki@disableautomatic\else\immediate\closeout}
{}{}
\xpatchcmd{\imki@putindex}
{\endcsname}
{\endcsname\fi}
{}{}
\makeatother
\indexsetup{level=\subsection*,noclearpage,othercode={\parskip=5pt}} % <-----A
\makeindex[
name=add,
title=Addition,
columns=1,
options=-s myright
] %% Define new index of addition
\makeindex[
name=mul,
title=Multiplication,
columns=1,
options=-s myright
] %% Define new index of multiplication
\makeindex[
name=exp,
title=Exponentiation,
columns=1,
options=-s myright
] %% Define new index of exponentiation
\makeindex[
name=sub,
title=Subtraction,
columns=1,
options=-s myright
] %% Define new index of subtraction
% Definition of the auxiliary environment for the questions
\theoremstyle{definition}
\newtheorem{que}{Question}
% Definition of the main environment for the questions
% it writes an entry to the ToC;
% it also writes an entry in the index specified in the mandatory argument
\let\tmpa\relax
\newenvironment{question}[2][]
{\begin{que}[#1]
\gdef\tmpb{#2}
\if\relax\detokenize{#1}\relax
\gdef\tmpa{\relax}
\else
\gdef\tmpa{~#1}
\fi
\addcontentsline{toc}{section}{Question~\theque.\tmpa}\ignorespaces
}
{\index[\tmpb]{Question~\theque.\tmpa}\end{que}}
% Simply produces a heading ''List of Topics'' formatted as unnumbered section
\newcommand\listoftopics{
\section*{List of Topics}
}
\begin{document}
\tableofcontents
\listoftopics
\printindex[add]% <---- A
\printindex[mul]% <---- A
\printindex[exp]% <---- A
\printindex[sub]% <---- A
\clearpage
\begin{question}[Commutativity]{add}
Is addition of real numbers commutative?
\end{question}
\begin{question}[Commutativity]{sub}
Is subtraction of real numbers commutative?
\end{question}
\begin{question}[Associativity]{mul}
Is multiplication of quaternions associative?
\end{question}
\begin{question}[Inverse]{mul}
Does inverses for non-negative quaternions always exist?
\end{question}
\begin{question}[General properties]{exp}
Which are the general properties for exponentiation in the field of complex numbers?
\end{question}
\begin{question}{add}
Is addition of real numbers modulative?
\end{question}
\begin{question}[Modulative]{sub}
Is subtraction of real numbers modulative?
\end{question}
\printindex[add]% <---- B
\printindex[mul]% <---- B
\printindex[exp]% <---- B
\printindex[sub]% <---- B
\end{document}
```
* First, comment out the lines marked `% <---- A` and uncomment the lines marked `% <---- B`. Process the document.
* Comment out the lines marked `% <---- B` and uncomment the lines marked `% <---- A`. Process the document.
The code between `\makeatletter` and `\makeatother` makes the first compilation method described above possible; it was kindly suggested by **egreg** to correct a bug in `imakeidx`. In future versions of the package, the bug will be fixed and the code won't be necessary.
Solutions using lists
=====================
Using `\@starttoc`
------------------
This solution is much simpler than the one I gave before in [`this answer`](https://tex.stackexchange.com/a/172913/3954).
This time the idea is to use the same command `\@starttoc` used to produce the ToC, LoF, and LoT; the compilation process is simpler than in my other answer: process the document twice (as is required for any of the standard ToC, LoF or LoT).
```
\documentclass{article}
\usepackage{amsthm}
\usepackage{pgffor}
\usepackage{etoolbox}
% Definition of the auxiliary environment for the questions
\theoremstyle{definition}
\newtheorem{que}{Question}
% Definition of the mian environment for the questions
% it writes an entry to the ToC (optional argument);
% it also writes an entry in the topic list specified in the mandatory argument
\makeatletter
\let\tmpa\relax
\def\addtotoc#1{\addcontentsline{#1}{subsection}{Question~\theque.\tmpa}}
\newenvironment{question}[2][]
{\begin{que}[#1]
\if\relax\detokenize{#1}\relax
\gdef\tmpa{\relax}
\else
\gdef\tmpa{~#1}
\fi
\addcontentsline{toc}{section}{Question~\theque.\tmpa}%
\if\relax\detokenize{#2}\relax
\else
\forcsvlist\addtotoc{#2}
\fi\ignorespaces
}
{\end{que}}
% Produces a heading ''List of Topics'' formatted as unnumbered section
% and it also generates the list of questions sorted by topics
\newcommand\listoftopics[1]{
\section*{List of Topics}
\foreach \Title/\Name in {#1}
{
\subsection*{\Title}
\@starttoc{\Name}
}
}
\makeatother
\begin{document}
\tableofcontents
\listoftopics{Addition/add,Multiplication/mul,Exponentiation/exp,Subtraction/sub}
\clearpage
\begin{question}[Commutativity]{add}
Is addition of real numbers commutative?
\end{question}
\begin{question}[Commutativity]{sub}
Is subtraction of real numbers commutative?
\end{question}
\begin{question}[Associativity]{mul}
Is multiplication of quaternions associative?
\end{question}
\begin{question}[Inverse]{mul}
Does inverses for non-negative quaternions always exist?
\end{question}
\begin{question}[Common properties]{mul,add,sub}
Which properties do addition, subtraction and multiplication of real numbers have in common?
\end{question}
\begin{question}[General properties]{exp}
Which are the general properties for exponentiation in the field of complex numbers?
\end{question}
\begin{question}{add}
Is addition of real numbers modulative?
\end{question}
\begin{question}[Modulative]{sub}
Is subtraction of real numbers modulative?
\end{question}
\begin{question}[Bonus]{}
Can you give an example of non-associative binary operation on the real numbers?
\end{question}
\end{document}
```

Explanation and remarks
-----------------------
* The main command to generate the list by topics now is `\listoftopics` with a mandatory argument; this mandatory argument is a comma separated list of pairs of the form `<title>/<name>`, where `<title>` is the string used as the title for the topic and `<name>` is the string used to generate the list (and the extension of the associated auxiliary file). For example, in the code above all one has to do to produce the list of topics is to call
```
\listoftopics{Addition/add,Multiplication/mul,Exponentiation/exp,Subtraction/sub}
```
* The environment used to typeset the questions has the syntax
```
\begin{question}[<note>]{<partial tocs>}
contents
\end{question}
```
where `<note>` is an optional annotation for the question that will be typeset in the document and also in the general ToC. `<partial tocs>` is a comma separated list of some of the `<name>`s used in `\listoftopics`; the question will be listed in the topics associated to each of the declared `<name>s`. For example, Question 5 "Common properties" in the example above was added to three of the topic lists using
```
\begin{question}[Common properties]{mul,add,sub}
Which properties do addition, subtraction and multiplication of real numbers have in common?
\end{question}
```
If the mandatory argument is empty, as in
```
\begin{question}[<note>]{}
contents
\end{question}
```
then the question is added only to the general ToC (see Question 9 "Bonus" in the example code).
* The additional packages used were `pgffor` used to easily produce the loop for the generation of the new topic list and `etoolbox` for the loop used to write to several of the topic entries.
Using the `titletoc` package:
-----------------------------
This approach, using [`titletoc`](http://www.ctan.org/pkg/titletoc), is similar to the one above; however, since the auxiliary file used by `titletoc` for all partial ToCs is the same, some additional work has to be done here to stop and resume the partial ToCs when necessary:
```
\documentclass{article}
\usepackage{amsthm}
\usepackage{pgffor}
\usepackage{titletoc}
% Definition of the auxiliary environment for the questions
\theoremstyle{definition}
\newtheorem{que}{Question}
% Definition of the main environment for the questions
% it writes an entry to the ToC (optional argument);
% it also writes an entry in the topic list specified in the mandatory argument
\makeatletter
\let\tmpa\relax
\let\tmpb\relax
\newenvironment{question}[2][]
{\if\relax\detokenize{#2}\relax
\gdef\tmpb{\relax}
\else
\resumecontents[#2]
\gdef\tmpb{#2}
\fi
\begin{que}[#1]
\if\relax\detokenize{#1}\relax
\gdef\tmpa{\relax}
\else
\gdef\tmpa{~#1}
\fi
\addcontentsline{toc}{section}{Question~\theque.\tmpa}\ignorespaces
}
{\end{que}\if\relax\tmpb\relax\else\stopcontents[\tmpb]\fi}
% Produces a heading ''List of Topics'' formatted as unnumbered section
% and it also generates the list of questions sorted by topics
\newcommand\listoftopics[1]{
\section*{List of Topics}
\foreach \Title/\Name in {#1}
{
%\subsection*{\Title}
\startcontents[\Name]
\printcontents[\Name]{l}{1}{\subsection*{\Title}}
\stopcontents[\Name]
}
}
\makeatother
\titlecontents{lsection}
[3.8em]
{}
{\contentslabel{2.3em}}
{\hspace*{-2.3em}}
{\titlerule*[1pc]{.}\contentspage}
\begin{document}
\tableofcontents
\listoftopics{Addition/add,Multiplication/mul,Exponentiation/exp,Subtraction/sub}
\clearpage
%\resumecontents[add]
\begin{question}[Commutativity]{add}
Is addition of real numbers commutative?
\end{question}
%\stopcontents[add]
\begin{question}[Commutativity]{sub}
Is subtraction of real numbers commutative?
\end{question}
\begin{question}[Associativity]{mul}
Is multiplication of quaternions associative?
\end{question}
\begin{question}[Inverse]{mul}
Does inverses for non-negative quaternions always exist?
\end{question}
\begin{question}[General properties]{exp}
Which are the general properties for exponentiation in the field of complex numbers?
\end{question}
\begin{question}{add}
Is addition of real numbers modulative?
\end{question}
\begin{question}[Modulative]{sub}
Is subtraction of real numbers modulative?
\end{question}
\begin{question}[Bonus]{}
Can you give an example of non-associative binary operation on the real numbers?
\end{question}
\end{document}
```

|
To create two TOCs in a single document use the [`titletoc`](http://www.ctan.org/pkg/titletoc) package. Here is a minimum example :
```
\documentclass{book}
\usepackage{titletoc}
\begin{document}
\begin{titlepage}
This is the Title page.
\end{titlepage}
\startlist{toc}
\printlist{toc}{}{\section*{My toc}}
\tableofcontents
\chapter{My 1st chapter}
\chapter{My 2nd chapter}
\chapter{My 3rd chapter}
\end{document}
```
Regarding SORTING BY TOPICs, try to read this two posts:
* [How to alphabetize all of the parts, chapters, sections, and content of an entire document in ConTeXt?](https://tex.stackexchange.com/questions/51967/how-to-alphabetize-all-of-the-parts-chapters-sections-and-content-of-an-entir) (TeX.SX)
* [Create alphabetical list of sections?](http://www.latex-community.org/forum/viewtopic.php?f=5&t=4169) (LaTeX Community)
|
1,065,294
|
I found some old questions about this, yet not for 18.04.
We're deploying desktops via PXE/Preseed installations, and I'd like to change the default wallpaper. I found how to change the login screen wallpaper, but not yet how to change the default wallpaper for new users.
Any hint about this would be appreciated.
|
2018/08/14
|
[
"https://askubuntu.com/questions/1065294",
"https://askubuntu.com",
"https://askubuntu.com/users/144980/"
] |
The default wallpaper in Ubuntu 18.04 is **warty-final-ubuntu.png**, and the easiest approach to changing it would be to convert (if necessary) your wallpaper to `png` format, rename it to warty-final-ubuntu.png, then rename the original default wallpaper and copy your wallpaper to the default location.
```
sudo mv /usr/share/backgrounds/warty-final-ubuntu.png /usr/share/backgrounds/warty-final-ubuntu-old.png
sudo cp /<path-to-your-wallpaper>/warty-final-ubuntu.png /usr/share/backgrounds/warty-final-ubuntu.png
```
|
Posting as it took me some time to find a solution for **Ubuntu 20.04**, even though this is an old question, it might help others looking for this.
This is where I found the method: <https://help.gnome.org/admin/system-admin-guide/stable/dconf-custom-defaults.html.en>
What I tried and didn't work:
I tried to edit the schemas folder directly (for people new to Ubuntu like me, the folder I'm referring to is found at `/usr/share/glib-2.0/schemas/`). The configuration file that I needed to edit was `org.gnome.desktop.background.gschema.xml`, yet for some reason, that wouldn't be applied to the new user, plus it is likely bad practice to edit these configuration files directly.
I then followed gnome.org 's guide and created a file here:
```
sudo nano /etc/dconf/db/local.d/01-background
```
Then add the following:
```
# dconf path
[org/gnome/desktop/background]
# dconf key names and their corresponding values
picture-uri='file:///usr/share/backgrounds/your-own-wallpaper-here.jpg'
```
Make sure that your wallpaper is located at `/usr/share/backgrounds/` and that you've edited either `focal-wallpapers.xml` or `ubuntu-wallpapers.xml` found at `/usr/share/gnome-background-properties/` to include your wallpaper image (there are other posts that explain how to do this).
Lastly update dconf by running:
```
sudo dconf update
```
I would recommend rebooting, then creating a new user to test whether it's working. I hope this helps; it worked for me. Let me know if you have better solutions or slight improvements to this method.
|
5,852,945
|
I have this schema:
```
Hotel (**hotelNo**, hotelName, city)
Room (**roomNo, hotelNo**, type, price)
Booking (**hotelNo, guestNo, dateFrom**, dateTo, roomNo)
Guest (**guestNo**, guestName, guestAddress)
** denotes primary keys
```
I have to complete this query:
* Display each hotel and its most common room.
I have this query, which isn't quite correct:
```
SELECT r.hotelno, type, count(*)
FROM Hotel h, room r
WHERE h.hotelNo = r.hotelno
GROUP BY r.hotelNo, type;
```
This is what it outputs:

What am I doing wrong?
|
2011/05/02
|
[
"https://Stackoverflow.com/questions/5852945",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/604114/"
] |
If you are looking for popularity, you would need to take into account the Booking table. Add the Booking table to your FROM statement, link on hotelNo and roomNo and do a count on the Booking table. This should give you the counts you want.
Edit:
Here is some sample code for you (tested):
```
SELECT TOP (100) PERCENT dbo.Hotel.hotelName, dbo.Room.type, COUNT(*) AS Count
FROM dbo.Booking INNER JOIN
dbo.Room ON dbo.Booking.roomNo = dbo.Room.roomNo AND dbo.Booking.hotelNo = dbo.Room.hotelNo
INNER JOIN dbo.Hotel ON dbo.Room.hotelNo = dbo.Hotel.hotelNo
GROUP BY dbo.Hotel.hotelName, dbo.Room.type
ORDER BY Count DESC
```
|
I think you're going to have to use an inner query to get this one working:
```
SELECT dbo.Hotel.hotelName, pop.type, pop.Count
FROM dbo.Hotel
INNER JOIN (
SELECT TOP 1 dbo.Hotel.hotelNo, dbo.Room.type, COUNT(*) AS Count
FROM dbo.Hotel
INNER JOIN dbo.Room ON dbo.Room.hotelNo = dbo.Hotel.hotelNo
INNER JOIN dbo.Booking ON dbo.Booking.roomNo = dbo.Room.roomNo AND dbo.Booking.hotelNo = dbo.Hotel.hotelNo
GROUP BY dbo.Hotel.hotelNo, dbo.Room.type
ORDER BY Count DESC, dbo.Room.type
) AS pop ON pop.hotelNo = dbo.Hotel.HotelNo
ORDER BY dbo.Hotel.hotelName
```
|
23,603,666
|
I need to map SCIM core schema to LDAP to be used by UnboundId for operations in LDAP.
Is there a better approach to converting these schemas?
e.g. In schema given [here](https://datatracker.ietf.org/doc/html/draft-hunt-scim-directory-00#section-2.2.3.4) I need to map:
`userName` to `uid`
`name.formatted` to `cn`
`name.familyName` to `sn (surname)`
...
So there are search filters that will be mapped:
```
`userName eq john` will be mapped to `(uid=john)`
```
One way is to write utility classes and map each SCIM attribute to LDAP attribute to convert SCIM filter to LDAP.
Is there any useful standard for doing so?
|
2014/05/12
|
[
"https://Stackoverflow.com/questions/23603666",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/420613/"
] |
Take a look at the [Apache eSCIMo](http://svn.apache.org/repos/asf/directory/escimo/trunk/) project, it uses XML based mapping file to map SCIM attributes to LDAP and vice versa. This is compliant with SCIM v2.0 schema, but some of the recent schema changes weren't added yet.
|
I have several different ways of doing this, depending on the requirements.
* Create a simple `Enum` and map all values from either side
* Use a `Properties` file with keys like `schema.scim.userName = uid`
* Create an XML file with `<app-name>` and `<ldap-name>` elements
The last two options I usually parse into a `Map` object for easy access.
The `Enum` option is fastest to set up but also offers least flexibility. The others can be externalized for customization.
Lately, I have only been using the XML variant because of the flexibility and easy mapping in both directions.
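For illustration, the map-based variant can be sketched as follows (Python used for brevity; the attribute names follow the question's example, and `scim_eq_to_ldap` is a hypothetical helper that only handles simple `attr eq value` filters, not the full SCIM filter grammar):

```python
import re

# Hypothetical SCIM -> LDAP attribute map, as in the question's example
SCIM_TO_LDAP = {
    "userName": "uid",
    "name.formatted": "cn",
    "name.familyName": "sn",
}

def scim_eq_to_ldap(expr):
    # Only handles the simple '<attribute> eq <value>' filter form
    m = re.fullmatch(r"(\S+)\s+eq\s+(\S+)", expr.strip())
    if not m:
        raise ValueError("unsupported filter: " + expr)
    attr, value = m.groups()
    return "({0}={1})".format(SCIM_TO_LDAP[attr], value)

print(scim_eq_to_ldap("userName eq john"))  # (uid=john)
```

Externalizing `SCIM_TO_LDAP` to a properties or XML file gives the same flexibility described above.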
|
2,152,743
|
>
> We have $\alpha=(1+i)^7$ and $\beta=16$ in the ring $R=\mathbb Z[i]$. How can we prove that $\beta R\subseteq\alpha R$?
>
>
>
My try: $\beta R=\{a+bi:a,b\in 16\mathbb Z\}$ and $\alpha =(1+i)^7\in\mathbb Z$. I am clueless.
|
2017/02/20
|
[
"https://math.stackexchange.com/questions/2152743",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/397530/"
] |
You already have a sufficient condition for realness (the matrix being Hermitian), but a condition for distinctness is not that easy.
You could use [Gerschgorin circles](https://math.stackexchange.com/questions/2102931/gerschgorins-theorem-round-1) for a start.
This theorem gives you circles in the complex plane with midpoints at the diagonal entries. It states that each eigenvalue lies in one of the circles. If two (or more) circles overlap, the eigenvalues lie in the union of both circles (not necessarily in the intersection!)
So if every circle is distinct, all eigenvalues are unique.
If the circles overlap, they can still be unique!
|
For the eigenvalues to be real, the matrix has to be Hermitian: $v^{\dagger}Aw=w^{\dagger}A^{\dagger}v$
for any $v, w\in\mathbb{C}^n$. And here $F^{\dagger}=\bar{F}^{T}$. I don't know what to say about them being distinct.
|
8,379,574
|
I am using a Bean Shell interpreter in a for loop like this
```
for(int i = 0; i <functionSize; i++){
interpreter.set("x", i);
yvalues[i] = (Integer) interpreter.eval(functionString);
}
```
The problem I am having is that when I do the function 2^x I get really strange output. The list of the first few y values is like this: 2, 3, 0 , 1, 6, 7, 4, 5, 10 , 11, 8 , 9 ...
Does anybody know how to get the bean shell interpreter to correctly evaluate powers?
Edit: I would like to use the bean shell interpreter in place of writing a math parser. Does anyone know how I can get the bean shell to evaluate powers of functions of x?
|
2011/12/04
|
[
"https://Stackoverflow.com/questions/8379574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/546427/"
] |
The expression `2^x` represents bitwise XOR, not exponentiation. Use [`Math.pow(2,x)`](http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#pow%28double,%20double%29) if you like, but realize that it operates on doubles, not ints, so will give you a floating-point answer that will probably cause an exception when you try to cast the result as an integer.
edit: there is also [BigInteger.pow()](http://docs.oracle.com/javase/1.5.0/docs/api/java/math/BigInteger.html#pow%28int%29), which returns a BigInteger.
Or, if you want powers of 2, but not any other base, use the expression for left shift: `1 << x` for x between 0 and 31.
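The reported sequence is exactly what bitwise XOR produces. Python uses the same `^` operator, so the pitfall can be reproduced there (a sketch of the same confusion, not BeanShell itself):

```python
# '^' is bitwise XOR in Java/BeanShell (and Python), not exponentiation
xor_values = [2 ^ x for x in range(12)]
print(xor_values)  # [2, 3, 0, 1, 6, 7, 4, 5, 10, 11, 8, 9] -- the "strange" output

powers = [2 ** x for x in range(5)]     # actual exponentiation
shifted = [1 << x for x in range(5)]    # powers of two via left shift
print(powers == shifted)                # True
```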
|
Ended up editing to:
```
double y = Double.parseDouble(interpreter.eval(functionString).toString());
yvalues[i] = (int) y;
```
And using `Math.pow(a, b)` as my input.
Doesn't look pretty, but it's working.
|
21,621,357
|
I am new to python and trying to parse the json file and fetch the required field based on condition.
eg., if status = true, then
print name
Json file:
```
[
{
"id": "12345",
"name": "London",
"active": true,
"status": "true",
"version": "1.0",
"tags": [
]
},
{
"id": "12457",
"name": "Newyork",
"active": true,
"status": "false",
"version": "1.1",
"tags": [
]
},
]
```
expected output:
name : London
Please help me on this. Thank you in advance.
|
2014/02/07
|
[
"https://Stackoverflow.com/questions/21621357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2919768/"
] |
```
>>> import json
>>> obj = json.loads('[ { "id": "12345", "name": "London", "active": true, "status": "true", "version": "1.0", "tags": [ ] }, { "id": "12457", "name": "Newyork", "active": true, "status": "false", "version": "1.1", "tags": [ ] } ]')
>>> print "Names:", ",".join(x["name"] for x in obj if x["status"] == "true")
Names: London
```
---
Your JSON is invalid. Remove the comma as below:
```
[
{
"id": "12345",
"name": "London",
"active": true,
"status": "true",
"version": "1.0",
"tags": [
]
},
{
"id": "12457",
"name": "Newyork",
"active": true,
"status": "false",
"version": "1.1",
"tags": [
]
},
^__________Remove this comma!
]
```
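The trailing comma is what makes `json.loads` fail in the first place; once it is removed, the filter-and-print step is a one-line comprehension. A minimal end-to-end sketch (using shortened versions of the question's records):

```python
import json

bad = '[{"name": "London", "status": "true"},]'   # trailing comma -> invalid JSON
try:
    json.loads(bad)
except ValueError:                                # json.JSONDecodeError subclasses ValueError
    print("invalid JSON: trailing comma")

good = ('[{"name": "London", "status": "true"},'
        ' {"name": "Newyork", "status": "false"}]')
names = [item["name"] for item in json.loads(good) if item["status"] == "true"]
for name in names:
    print("name :", name)  # name : London
```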
|
You can get all info about json parsing here.
<http://docs.python.org/3.3/library/json.html>
|
31,250,501
|
How can I ignore or filter parts of the result of centreon\_plugins.pl using a regexp?
practical example :
I'm trying to monitor inodes on all my linux servers (RHEL 6) so I use the following command:
```
perl centreon_plugins.pl --plugin=os::linux::snmp::plugin --mode=inodes --hostname=servname --snmp-version=2c --snmp-community=mycomm --warning 80 --critical 90 --name --regexp --verbose
```
This will give me all inodes on all filesystem, but it will always be in alarm due, for example, to /proc
Also, this will monitor my home dir that are mounted on a nfs share.
So i want to exclude a couple of strings like home, proc, ...
If I try to use this :
```
perl centreon_plugins.pl --plugin=os::linux::snmp::plugin --mode=inodes --hostname=servname --snmp-version=2c --snmp-community=mycomm --warning 80 --critical 90 --name --regexp --diskpath=/?!proc/ --verbose
```
I'll get an error like this :
```
-bash: !proc: event not found
```
Any Idea ?
|
2015/07/06
|
[
"https://Stackoverflow.com/questions/31250501",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4338008/"
] |
Use DefaultSliderView class instead of TextSliderView class. see example below :
```
DefaultSliderView defaultSliderView = new DefaultSliderView(this);
defaultSliderView.image(file_maps.get(images)); // adding images here
mDemoSlider.addSlider(defaultSliderView);
```
hope it helps :)
|
I solved this issue by writing my own custom ImageSlider, as specified [here](https://github.com/daimajia/AndroidImageSlider/wiki/Slider-view)
I love this library!
|
128,818
|
Why do the two pieces of code below yield rather different results? The only difference is that in the first one the function definition is in the same line as the `Remove` command while in the second one the two commands are in different lines.
```
Remove[f]; f[x_] := 1;
?? f
(* Information: Symbol f not found *)
```
```
Remove[f];
f[x_] := 1;
?? f
(* Global`f and the function definition *)
```
Defining another function on the same line does work:
```
Remove[g];
Remove[f]; g[x_] := 1;
?? g
(* Global`g and the function definition *)
```
|
2016/10/16
|
[
"https://mathematica.stackexchange.com/questions/128818",
"https://mathematica.stackexchange.com",
"https://mathematica.stackexchange.com/users/22021/"
] |
Executing `Trace` on an expression reveals what is actually happening:
```
Trace[Remove@x; x = 1]
(*{Remove[Removed[x]];Removed[x]=1,{Remove[Removed[x]],Null},{Removed[x]=1,1},1}*)
```
"the Wolfram Language always reads in a **complete** input expression, and interprets the names in it, **before** it executes any part of the expression." (see: <https://reference.wolfram.com/language/tutorial/SettingUpWolframLanguagePackages.html>)
The whole line is one expression with the head `CompoundExpression`. The variable `x` gets replaced with `Removed["x"]` throughout that expression. Then it seems that `Removed["x"]` is assigned the value 1 (see Leonid Shifrin's answer to this [question](https://mathematica.stackexchange.com/questions/4921/what-is-the-story-with-removed-symbols) for more details about Removed symbols).
However, when you write it on two different lines, it is interpreted as two separate expressions and hence you don't face this problem.
|
*I use the word internal symbol rather than just symbol, to make it easier to distinguish between the name of a symbol and the internal representation of the symbol inside the kernel. I felt I should clarify this as I have not seen this terminology used elsewhere (and WRI may not agree this is a useful concept).*
The "Theory"
------------
First let's consider what happens when lines of code are evaluated.
The Front End reads code and sends strings corresponding to expressions to the kernel for evaluation. If you evaluate multiple lines of code, it keeps reading until
* it encounters a newline, and
* the expression it is reading is complete when it sees this newline
For each expression found in this way, the front end sends a string to the kernel corresponding to this expression (the string can also correspond to the expression in the sense that the string corresponds to boxes that represent the expression, but that is not very relevant). The kernel then converts this string to an internal representation of the expression.
I think that in this internal representation, symbols are not represented as strings, one reason being that I think this would not lead to good performance. I think it is better to think (because it seems this is a helpful model of things) of the internal representation of a symbol as an object that has its name (a string) and/or perhaps an identifier as attributes. Another thing that can be looked up for any given symbol is whether it has been removed or not and what rules are associated with it. Note that in this (speculative) model, objects that have the same values for their attributes (i.e. the symbols have the same name), can still be distinguished by the kernel. It does not really matter where this information is stored, in the attributes of an object or perhaps some global data structure. What matters is that in this model there is not a one-to-one correspondence between strings and internal representations of symbols.
Normally there is at most one internal symbol that has any given name (e.g. `"f"`). In particular, there is at most one internal symbol with that name that has not been removed. When you evaluate `Remove[f]`, you "remove" the unique internal symbol that has the name `"f"`. When a new expression that contains `f` now is sent to the kernel (as a string), the kernel creates a new internal symbol with name `"f"` and this is now the unique internal symbol with name `"f"` that has not been removed. The kernel now uses this new internal symbol, when it makes an internal representation for the expression and for every expression containing `f` that it receives after that (until it is removed again). The internal symbol that has been removed can still be used, but its definitions are cleared (I show an example later).
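The bookkeeping described above can be illustrated with a toy sketch in Python. This is purely illustrative and in no way the real kernel internals — `Symbol` and `SymbolTable` are invented names; the point is only that removing a name breaks the name-to-internal-symbol association while the old object (and any definitions attached to it) can live on:

```python
# Toy model of the "unique non-removed internal symbol per name" lookup.
class Symbol:
    def __init__(self, name):
        self.name = name
        self.removed = False
        self.rules = {}           # definitions attached to this internal symbol

class SymbolTable:
    def __init__(self):
        self.current = {}         # name -> the unique non-removed internal symbol

    def lookup(self, name):
        # Reuse the current internal symbol for this name, or create a fresh one.
        if name not in self.current:
            self.current[name] = Symbol(name)
        return self.current[name]

    def remove(self, name):
        # Only the name -> symbol association is dropped; the object itself
        # persists as long as something still references it.
        sym = self.lookup(name)
        sym.removed = True
        del self.current[name]
        return sym

table = SymbolTable()
iF1 = table.lookup("f")       # parsing a whole line binds every f to iF1
table.remove("f")             # Remove[f] runs first ...
iF1.rules["f[x_]"] = 1        # ... so the definition lands on the removed iF1
iF2 = table.lookup("f")       # a later line gets a fresh internal symbol
print(iF1 is iF2)             # False: iF2 never sees iF1's definition
```

This mirrors the one-line snippet discussed below: the definition is made, but on an internal symbol that no name resolves to anymore.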
Your examples
-------------
I have slightly modified the examples. My explanations are speculative in that they are based on the speculations above.
**Snippet without a newline**
In
```
Remove[f]; f[x_] := 1;
f[1]
```
>
>
> ```
> f[1]
>
> ```
>
>
the expression corresponding to this whole line, with head `CompoundExpression`, i.e. `CompoundExpression[Remove[f], f[x_]:=1, Null]`, is sent to the kernel at once. This happens because the Front End keeps reading until it encounters a newline and the expression is complete when it sees this newline. The string corresponding to this expression is then sent to the kernel and the kernel creates an internal representation.
In this internal representation both instances of `f` correspond to the same internal symbol (let's call this `iF1`). When the kernel evaluates the expression, it encounters the internal representation of `Remove[f]`, which we can think of as `Remove[iF1]`, and the internal symbol `iF1` is removed. Next, the kernel will evaluate `f[x_] := 1;`, which we can think of as `iF1[x_] := 1;`, so a rule is associated with the internal symbol `iF1`. Now because `iF1` is removed, this definition will under normal circumstances not be relevant, but as we will see later the definition is indeed made and we can still access such definitions.
When we send a new expression `f[1]` to the kernel, the kernel sees that there is no symbol with name `"f"` that has not been removed, so it creates a new one, say `iF2`. It then creates and evaluates the internal representation, which we can think of as `iF2[1]`. No rule is associated with `iF2`, so this evaluates to `iF2[1]`, which is displayed as `f[1]`.
**Snippet with a newline**
However, in
```
Remove[f];
f[x_] := 1;
f[1]
```
>
>
> ```
> 1
>
> ```
>
>
The front end reads until it encounters the first newline. At this point, what it has read so far corresponds to a complete expression, i.e. `Remove[f];`. So the string corresponding to `Remove[f];` is sent to the kernel. The kernel creates an internal representation, which we can think of as `Remove[iF3];`. The internal symbol `iF3` is removed.
Next, the front end continues reading where it left off. It reads until it finds a newline at which the expression it is reading is complete. As a result, the next line, `f[x_] := 1;`, is sent to the kernel. Because there is no symbol with name `f` that has not been removed (`iF3` was removed), the kernel creates a new one, `iF4`. The kernel evaluates the internal representation of the expression, which we can think of as `iF4[x_] := 1;`. So a rule is associated with `iF4`.
Now when we send a new expression `f[1]`, the kernel looks up the unique non-removed symbol with name `f`, which is `iF4`, and evaluates the internal representation, which we can think of as `iF4[1]`, which evaluates to `1`.
Verification of the explanation using handlers
----------------------------------------------
Perhaps I should have mentioned earlier that some of the claims I made above can be verified by using `$NewSymbol`. I just saw in this [interesting answer](https://mathematica.stackexchange.com/a/139718/4330) that we can also track when symbols are removed, so let's verify how the examples work. I modified the examples to include symbols that help us to see which line is being evaluated. Once we are confident that that is how the Front End works, we could also split up the cells so that they each have only one line, (or similar), but I feel this warrants additional verification first.
Make sure you start with a new kernel. The first time output is generated using a fresh kernel, new symbols are generated, so in order to not let this clutter the output, I include a line containing only `1`, before we start tracking symbols (also evaluate this "cell" when we analyse the second example).
```
1
With[{h = #},
Internal`AddHandler[h,
Print@(h -> {##}) &]] & /@ {"RemoveSymbol", "NewSymbol"};
```
The first example then gives us
```
Remove[f]; f[x_] := 1
secondLine; f[1]
(*prints:
NewSymbol->{{f,Global`}}
NewSymbol->{{x,Global`}}
RemoveSymbol->{{f,Global`}}
NewSymbol->{{secondLine,Global`}}
NewSymbol->{{f,Global`}}
*)
```
>
>
> ```
> f[1]
>
> ```
>
>
It was desirable to start with a new kernel, because in this case there are no internal symbols with the names `f` and `x` yet. The prints show us that indeed first both these internal symbols are generated (even though `x` first appears only to the right of a semicolon). Only after that is the internal representation of the first line evaluated and is an internal symbol with the name `f` removed. We see that a new symbol with the name `f` is generated, but only when the second line is being evaluated. So indeed the assignment is made to the old internal symbol `f` that has already been removed. So this assignment has no effect on the new symbol with the name `f`, so that `f[1]` is the displayed output.
With a fresh kernel that has handlers and that has already generated the symbols associated with generating the first output, the second example gives us
```
Remove[f];
secondLine; f[x_] := 1;
thirdLine; f[1]
(*prints:
NewSymbol->{{f,Global`}}
RemoveSymbol->{{f,Global`}}
NewSymbol->{{secondLine,Global`}}
NewSymbol->{{f,Global`}}
NewSymbol->{{x,Global`}}
NewSymbol->{{thirdLine,Global`}}
*)
```
>
>
> ```
> 1
>
> ```
>
>
In this case, we see that indeed a (internal) symbol with the name `f` is removed (in the evaluation of the first line) before a definition is associated with a symbol with the name `f` (in the evaluation of the second line). The internal symbol used in evaluation of the third line is the same one as that which is used in the second line, as nothing has been removed since we started evaluating the second line. So the definition made in the second line affects the evaluation of `f[1]` in the third line and the output is `1`.
Addendums
---------
**Addendum 1: Removed symbols may still exist**
The behaviour in the following two examples corresponds to the "theory" above. The first example shows that we can still refer to removed symbols. The example also shows that there may be multiple internal symbols with the same name that are all removed. Talking about `Removed["a"]` is in that sense insufficient (and the example shows why an assignment like `Removed["x"]=2` could not work).
```
ruleB = a -> 1; b = Hold[a];
Remove[a];
ruleC = a -> 2; c = Hold[a];
Remove[a];
{b, c, Hold[a]}
{b, c, Hold[a]} /. {ruleB, ruleC}
```
>
>
> ```
> {Hold[Removed[a]],Hold[Removed[a]],Hold[a]}
> {Hold[1], Hold[2], Hold[a]}
>
> ```
>
>
The second example shows that assignments can be made to removed symbols and the definitions are persistent.
```
Remove[d]; e := d; d = 1;
{d, e}
```
>
>
> ```
> {d,1}
>
> ```
>
>
**Addendum 2: Remark**
Of course the kernel does not check every symbol to see whether or not it is a symbol with name `"f"` that has not been removed. I hesitated to introduce a new name for "the unique internal symbol with name X that has not been removed" (which need not exist!), I kind of like CIS (canonical internal symbol), to be used as `iF1` is the CIS of `"f"`. The kernel must keep track of a simple lookup table with entries of the form string X -> CIS of X.
On the use of the word "removed": One could argue that we should only say that an internal symbol is removed if there are no more references to the internal symbol and memory can be cleared. One could argue that we should not say that for any internal symbol we can look up whether it has been removed or not, but rather that it has been scheduled for memory clearing or something. I decided to stick to "if `Remove` has been called on an internal symbol, then it is removed".
|
12,026,065
|
I've always thought that appending a list to another one meant copying the objects from the first list and then pointing to the appended list as described for example [here](http://diditwith.net/2008/03/03/WhyILoveFListsTheBasics.aspx).
However, in [this](http://fsharpnews.blogspot.se/2007/02/avoiding-copying-in-functional.html) blog post and in its comment, it says that it is only the pointers that are copied and not the underlying objects.
So what is correct?
|
2012/08/19
|
[
"https://Stackoverflow.com/questions/12026065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/360674/"
] |
>
> In the functional world, lists are immutable. This means that node sharing is possible because the original lists will never change. Because the first list ends with the empty list, its nodes must be copied in order to point its last node to the second list.
>
>
>
If you mean this statement, then the answer seems to be pretty simple. The author of the first article is talking about list node elements when he says `nodes`. A node element is not the same as the list item itself. Take a look at the pictures in the first article: there are arrows going from every element to the next node. These arrows are pointers. But the integer type (which is put into the list) has no such pointers. There is some `list node` type which wraps those integers and stores the pointers. When the author says that `nodes must be copied`, he is talking about these wrappers being copied. The underlying objects (if they were not value types, as in this case) would not be cloned; the new wrappers will point to the same objects as before.
|
F# lists hold references (not to be confused with F#'s `ref`) to their elements; list operations copy those references (pointers), but not the elements themselves.
There are two ways you might append items to an existing list, which is why there seems to be a discrepancy between the articles (though they both look to be correct):
* Cons operator (`::`): The cons operator prepends a single item to an F# list, producing a new list. It's very fast (`O(1)`), since it only needs to call a very simple constructor to produce the new list.
* Append operator (`@`): The append operator appends two F# lists together, producing a new list. It's not as fast (`O(n)`) because in order for the elements of the combined list to be ordered correctly, it needs to traverse the entire list on the left-hand-side of the operator (so copying can start at the first element of that list). You'll still see this used in production if the list on the left-hand-side is known to be very small, but in general you'll get much better performance from using `::`.
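A minimal sketch of this behavior (in Python here, purely for illustration — F#'s real representation differs, and `Cons`/`Nil` are invented names): prepending shares the whole existing list, while appending copies only the cons cells (the "spine") of the left list; the elements themselves are never copied, only referenced.

```python
# Immutable cons-list sketch: prepend is O(1) and shares, append copies the
# left spine but reuses the right list and all element references.
class Cons:
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

Nil = None

def cons(x, xs):          # analogue of F#'s ::  -- O(1), shares xs entirely
    return Cons(x, xs)

def append(xs, ys):       # analogue of F#'s @   -- O(len(xs)), copies xs's spine
    if xs is Nil:
        return ys
    return Cons(xs.head, append(xs.tail, ys))

def to_list(xs):          # helper to inspect the contents
    out = []
    while xs is not Nil:
        out.append(xs.head)
        xs = xs.tail
    return out

big = cons(1, cons(2, Nil))
shared = cons(0, big)            # big is reused as-is, not copied
combined = append(big, shared)   # big's two cells are copied; shared is reused
print(to_list(combined))         # [1, 2, 0, 1, 2]
print(shared.tail is big)        # True: structural sharing
```

Immutability is what makes this safe: since no list is ever mutated, reusing the right-hand list (and the elements) cannot be observed by other holders of those references.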
|
16,164,620
|
>
> I am working on a program that needs to convert a 32-bit number into a decimal number.
>
>
>
The number that I get from input is a 32-bit number represented as floating point. The first bit is the sign, the next 8 bits are the exponent, and the other 23 bits are the mantissa. I am writing the program in C. As input, I get that number as a `char[]` array, and after that I make a new `int[]` array where I store the sign, the exponent, and the mantissa. But I have a problem with the mantissa when I try to store it in some datatype, because I need to use the mantissa as a number, not as an array: `formula=sign*(1+0.mantissa)*2^(exponent-127)`.
Here is the code I use to store the mantissa, but still the program gets me wrong results:
```
double oMantissa=0;
int counter=0;
for(counter=0;counter<23;counter++)
{
if(mantissa[counter]==1)
{
oMantissa+=mantissa[counter]*pow(10,-counter);
}
}
```
`mantissa[]` is an `int` array where I have already converted the mantissa from a `char` array. When I get the value from `formula`, it has to be a binary number, and I have to convert it to decimal, so I will get the value of the number. Can you help me with storing the 23 bits of the mantissa? And, I mustn't use functions like `strtoul` that convert the 32-bit number directly into binary. I have to use `formula`.
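As a cross-check of the formula, here is a short sketch (in Python, purely for illustration — the question itself is in C). It handles normal numbers only, with no subnormals, infinities, or NaNs; the key point is that mantissa bit `i` has weight `2^-(i+1)` — powers of two, not of ten as in the loop above:

```python
# Decode a 32-bit IEEE 754 single from its bit string using
# sign * (1 + 0.mantissa) * 2^(exponent - 127). Normal numbers only.
def decode_single(bits):
    assert len(bits) == 32
    sign = -1 if bits[0] == '1' else 1
    exponent = int(bits[1:9], 2)            # the 8 exponent bits as an integer
    mantissa = 0.0
    for i, b in enumerate(bits[9:]):        # the 23 mantissa bits
        if b == '1':
            mantissa += 2.0 ** -(i + 1)     # bit i is worth 2^-(i+1) (base 2!)
    return sign * (1.0 + mantissa) * 2.0 ** (exponent - 127)

print(decode_single('01000010110001001000000000000000'))   # 98.25
```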
|
2013/04/23
|
[
"https://Stackoverflow.com/questions/16164620",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Which part of the below code was hard to get right given all the formulas and sample numbers and a calculator?
```
#include <stdio.h>
#include <limits.h>
#if UINT_MAX >= 0xFFFFFFFF
typedef unsigned uint32;
#else
typedef unsigned long uint32;
#endif
#define C_ASSERT(expr) extern char CAssertExtern[(expr)?1:-1]
// Ensure uint32 is exactly 32-bit
C_ASSERT(sizeof(uint32) * CHAR_BIT == 32);
// Ensure float has the same number of bits as uint32, 32
C_ASSERT(sizeof(uint32) == sizeof(float));
double Ieee754SingleDigits2DoubleCheat(const char s[32])
{
uint32 v;
float f;
unsigned i;
char *p1 = (char*)&v, *p2 = (char*)&f;
// Collect binary digits into an integer variable
v = 0;
for (i = 0; i < 32; i++)
v = (v << 1) + (s[i] - '0');
// Copy the bits from the integer variable to a float variable
for (i = 0; i < sizeof(f); i++)
*p2++ = *p1++;
return f;
}
double Ieee754SingleDigits2DoubleNoCheat(const char s[32])
{
double f;
int sign, exp;
uint32 mant;
int i;
// Do you really need strto*() here?
sign = s[0] - '0';
// Do you really need strto*() or pow() here?
exp = 0;
for (i = 1; i <= 8; i++)
exp = exp * 2 + (s[i] - '0');
// Remove the exponent bias
exp -= 127;
// Should really check for +/-Infinity and NaNs here
if (exp > -127)
{
// Normal(ized) numbers
mant = 1; // The implicit "1."
// Account for "1." being in bit position 23 instead of bit position 0
exp -= 23;
}
else
{
// Subnormal numbers
mant = 0; // No implicit "1."
exp = -126; // See your IEEE-754 formulas
// Account for ".1" being in bit position 22 instead of bit position -1
exp -= 23;
}
// Or do you really need strto*() or pow() here?
for (i = 9; i <= 31; i++)
mant = mant * 2 + (s[i] - '0');
f = mant;
// Do you really need pow() here?
while (exp > 0)
f *= 2, exp--;
// Or here?
while (exp < 0)
f /= 2, exp++;
if (sign)
f = -f;
return f;
}
int main(void)
{
printf("%+g\n", Ieee754SingleDigits2DoubleCheat("110000101100010010000000000000000"));
printf("%+g\n", Ieee754SingleDigits2DoubleNoCheat("010000101100010010000000000000000"));
printf("%+g\n", Ieee754SingleDigits2DoubleCheat("000000000100000000000000000000000"));
printf("%+g\n", Ieee754SingleDigits2DoubleNoCheat("100000000100000000000000000000000"));
printf("%+g\n", Ieee754SingleDigits2DoubleCheat("000000000000000000000000000000000"));
printf("%+g\n", Ieee754SingleDigits2DoubleNoCheat("000000000000000000000000000000000"));
return 0;
}
```
Output ([ideone](http://ideone.com/seYW5R)):
```
-98.25
+98.25
+5.87747e-39
-5.87747e-39
+0
+0
```
|
After scouring the internet and being unable to find similar functions,
I wrote these floating-point conversion functions.
```
//NOTE memcpy is a more efficient way to do this
//These instructions are presented for reference only
//I Zackery Sobin created these functions
//and I release them into the public domain
//there is no warranty
//they might not work properly
//certain things like NAN or INFINITY might not be handled correctly
#include "math.h"
float charArray2float(char *S)
{
unsigned int uintS = charArray2lluint(S, 4);
unsigned int sign = (uintS & 0x80000000); //01111111 10000000 00000000 00000000
unsigned int exponent = (uintS & 0x7F800000); //01111111 10000000 00000000 00000000
unsigned int mantessa = (uintS & 0x007FFFFF); //00000000 01111111 11111111 11111111
float normalizedExponent = (float) ((signed char) ((exponent>>23) - 127));
float normalizedMantessa = (float) 1 + (float) mantessa / pow((float)2,(float)23);
float theVar = normalizedMantessa * pow((float)2,(float)normalizedExponent);
if (sign != 0) theVar = theVar * (float) (-1);
if (fabs(theVar) < pow((float) 10, (float) -38)) theVar = 0;
return theVar;
}
long long int charArray2lluint(char *S, int length)
{
int x;
unsigned long long int sum =0;
for (x = 0; x < length; x++)
{
if (isBigEndian){
sum = sum + ((unsigned long long int) ((unsigned char) S[x]) << ((length-1)-x) * 8);
}
else{
sum = sum + ((unsigned long long int) ((unsigned char) S[length-x-1]) << ((length-1)-x) * 8);
}
}
return sum;
}
void float_2charArray(char *outputArray, float testVariable1) { //long int is the same size as regular int
int o = 0;
int x;
char byteNum[8];
unsigned int sign = 0;
float mantessa = 0;
int exp = 0;
unsigned int theResult;
if (testVariable1 ==0){theResult = 0;}
else{ if (testVariable1 < 0) {
sign = 0x80000000;
testVariable1 = testVariable1 * -1.0;
}
int watchdog = 0;
while (1){
watchdog++;
if (watchdog > 512) {
ErrorCode = 6; //This is a global variable used to see if there is a bug in this function
break;
}
mantessa = testVariable1 / powf(2,exp);
if (mantessa >= 1 && mantessa < 2) {break;}
else if (mantessa >= 2.0) {exp = exp + 1;}
else if (mantessa < 1 ) {exp = exp - 1;}
}
unsigned int fixedExponent = ((exp+127)<<23);
unsigned int fixedMantessa = (float) (mantessa -1) * pow((float)2,(float)23);
theResult = sign + fixedExponent + fixedMantessa;
}
unsigned_int_2charArray(byteNum, theResult);
if (!isBigEndian) for (x = 0; x <= 7; x++) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
else if (isBigEndian) for (x = 7; x >= 0; x--) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
}
void double_2charArray(char *outputArray, double testVariable2) { //long int is the same size as regular int
int o = 0;
int x;
char byteNum[16];
unsigned long long int sign = 0;
double mantessa = 0;
long long int exp = 0;
unsigned long long int theResult;
if (testVariable2 ==0){theResult = 0;theResult = 0;}
else{ if (testVariable2 < 0) {
sign = 0x8000000000000000;
testVariable2 = testVariable2 * -1.0;
}
long long int watchdog = 0;
while (1){
watchdog++;
if (watchdog > 512) {
FlighboardErrorCode = 7;
break;
}
mantessa = testVariable2 / powf(2,exp);
if (mantessa >= 1 && mantessa < 2) {break;}
else if (mantessa >= 2.0) {exp = exp + 1;}
else if (mantessa < 1 ) {exp = exp - 1;}
}
unsigned long long int fixedExponent = ((exp+1023)<<52);
unsigned long long int fixedMantessa = (double) (mantessa -1) * pow((double)2,(double)52);
theResult = sign | (fixedExponent + fixedMantessa); //Fixme is this quite right?
}
unsigned_long_long_int_2charArray(byteNum, theResult);
if (!isBigEndian) for (x = 0; x <= 15; x++) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
else if (isBigEndian) for (x = 15; x >= 0; x--) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
}
void unsigned_long_long_int_2charArray(char *outputArray, unsigned long long int X) { //long int is the same size as regular int
int o = 0;
int x;
char byteNum[8];
byteNum[0] = (X & 0x00000000000000FF);
byteNum[1] = (X & 0x000000000000FF00) >> 8;
byteNum[2] = (X & 0x0000000000FF0000) >> 16;
byteNum[3] = (X & 0x00000000FF000000) >> 24;
byteNum[4] = (X & 0x000000FF00000000) >> 32;
byteNum[5] = (X & 0x0000FF0000000000) >> 40;
byteNum[6] = (X & 0x00FF000000000000) >> 48;
byteNum[7] = (X & 0xFF00000000000000) >> 56;
if (!isBigEndian) for (x = 0; x <= 7; x++) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
else if (isBigEndian) for (x = 7; x >= 0; x--) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
}
void long_long_int_2charArray(char *outputArray, long long int X) { //long int is the same size as regular int
int o = 0;
int x;
char byteNum[8];
byteNum[0] = (X & 0x00000000000000FF);
byteNum[1] = (X & 0x000000000000FF00) >> 8;
byteNum[2] = (X & 0x0000000000FF0000) >> 16;
byteNum[3] = (X & 0x00000000FF000000) >> 24;
byteNum[4] = (X & 0x000000FF00000000) >> 32;
byteNum[5] = (X & 0x0000FF0000000000) >> 40;
byteNum[6] = (X & 0x00FF000000000000) >> 48;
byteNum[7] = (X & 0xFF00000000000000) >> 56;
if (!isBigEndian) for (x = 0; x <= 7; x++) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
else if (isBigEndian) for (x = 7; x >= 0; x--) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
}
void unsigned_int_2charArray(char *outputArray, unsigned int X) { //long int is the same size as regular int
int o = 0;
int x;
char byteNum[4];
byteNum[0] = (X & 0x000000FF);
byteNum[1] = (X & 0x0000FF00) >> 8;
byteNum[2] = (X & 0x00FF0000) >> 16;
byteNum[3] = (X & 0xFF000000) >> 24;
if (!isBigEndian) for (x = 0; x <= 3; x++) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
else if (isBigEndian) for (x = 3; x >= 0; x--) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
}
void int_2charArray(char *outputArray, int X) { //long int is the same size as regular int
int o = 0;
int x;
char byteNum[4];
byteNum[0] = (X & 0x000000FF);
byteNum[1] = (X & 0x0000FF00) >> 8;
byteNum[2] = (X & 0x00FF0000) >> 16;
byteNum[3] = (X & 0xFF000000) >> 24;
if (!isBigEndian) for (x = 0; x <= 3; x++) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
else if (isBigEndian) for (x = 3; x >= 0; x--) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
}
void unsigned_short_int_2charArray(char *outputArray, unsigned short int X) {
int o = 0;
int x;
char byteNum[2];
byteNum[0] = (X & 0x00FF);
byteNum[1] = (X & 0xFF00) >> 8;
if (!isBigEndian) for (x = 0; x <= 1; x++) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
else if (isBigEndian) for (x = 1; x >= 0; x--) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
}
void short_int_2charArray(char *outputArray, short int X) {
int o = 0;
int x;
char byteNum[2];
byteNum[0] = (X & 0x00FF);
byteNum[1] = (X & 0xFF00) >> 8;
if (!isBigEndian) for (x = 0; x <= 1; x++) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
else if (isBigEndian) for (x = 1; x >= 0; x--) {outputArray[o]=byteNum[x]; o++;} // datagram.append(byteNum[x]);
}
void unsigned_char_2charArray(char *outputArray, unsigned char X) {
outputArray[0] = X;
}
void char_2charArray(char *outputArray, char X) {
outputArray[0] = X;
}
```
|
41,287,378
|
so I have this two models:
```
class Patient(models.Model):
patientID = models.CharField(max_length=200 , default='Enter PatientID')
age = models.IntegerField(default='-')
gender = models.CharField(max_length=200,choices=Gender_Choice, default='UNDEFINED')
class RiskFactor(models.Model):
patient = models.OneToOneField(Patient, on_delete=models.CASCADE)
hypertension = models.BooleanField(default=False)
diabetes = models.BooleanField(default=False)
PAVK = models.BooleanField(default=False)
nicotin = models.BooleanField(default=False)
```
So I've changed from ForeignKey to OneToOneField for various reasons. With the ForeignKey everything was easy-peasy and I could show the data of the RiskFactor model in my Patient detail template. Now I have problems changing the query. I just don't know how to change this:
```html
<ul>
{% for rfac in patient.riskfactor_set.all %}
<li>Hypertension: {{ rfac.hypertension }}<br/>
Diabetes: {{ rfac.diabetes }}<br/>
PAVK: {{ rfac.PAVK }}<br/>
Nicotin: {{ rfac.nicotin }}<br/>
</li>
{% endfor %}
</ul>
```
into something that shows the RiskFactor data for the patient through the one-to-one relation.
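With a `OneToOneField`, Django no longer creates a `riskfactor_set` manager on `Patient`; instead, the related object is exposed directly under the lowercased model name, so no loop is needed. A sketch of the template (assuming the default related name, i.e. no `related_name` set on the field):

```html
<ul>
  <li>Hypertension: {{ patient.riskfactor.hypertension }}<br/>
      Diabetes: {{ patient.riskfactor.diabetes }}<br/>
      PAVK: {{ patient.riskfactor.PAVK }}<br/>
      Nicotin: {{ patient.riskfactor.nicotin }}<br/>
  </li>
</ul>
```

Note that in Python code, accessing `patient.riskfactor` raises `RiskFactor.DoesNotExist` when no related row exists; in templates, that failure is silently rendered as empty.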
Thanks for your help!
|
2016/12/22
|
[
"https://Stackoverflow.com/questions/41287378",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7317000/"
] |
Currently `javafx.scene.robot.Robot` is available, which can take screen captures. I don’t know if it was when the question was originally asked. The method is:
```java
javafx.scene.robot.Robot robot = new Robot();
Rectangle2D screenBounds = Screen.getPrimary().getBounds(); // region to capture
WritableImage imgReturn = robot.getScreenCapture(imgOut, screenBounds);
```
The rectangle specifies the region of the screen to capture; to capture everything, pass the bounds of the primary screen (`Screen.getPrimary().getBounds()`). `imgReturn` is assigned the resulting writable image. `imgOut` is a writable image output parameter that can also store the result, if you've defined it previously and it has the correct dimensions. Otherwise, leave it null.
The method must be run on the JavaFX Application thread (you can use `Platform.runLater()` to ensure this if it’s invoked from another thread).
If you’re looking to take a screenshot of a specific application window, you could analyze the capture of the whole screen and look for the region containing the window you’re looking for.
If you know the window coordinates and dimensions, you can plug that in for the second argument in `getScreenCapture`.
You can also use the semitransparent overlay stage method from @GOXR3PLUS in which the user can define a rectangle (by clicking and dragging to define upper-left and lower-right corners, for example). Then you can hide the overlay stage and take a screenshot of the region within that rectangle.
|
This is the main mechanism I have used: a transparent `Stage` with a transparent `Canvas`, which sits in a `BorderPane` with opacity `0.1`.
Here is a simple example (it is just for selecting areas...):
**WHEN YOU START THE APP THE ONLY WAY TO CLOSE IT IS USING ESCAPE**
[](https://i.stack.imgur.com/fMegX.png)
---
**Tester** class:
```
import javafx.application.Application;
import javafx.stage.Screen;
import javafx.stage.Stage;
public class Tester extends Application{
@Override
public void start(Stage primaryStage) throws Exception {
CaptureWindow window = new CaptureWindow(Screen.getPrimary().getBounds().getWidth(),Screen.getPrimary().getBounds().getHeight(), primaryStage);
window.show();
primaryStage.show();
}
public static void main(String[] args) {
launch(args);
}
}
```
**Capture Window** class:
```
import javafx.scene.Cursor;
import javafx.scene.Scene;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.input.KeyCode;
import javafx.scene.layout.BorderPane;
import javafx.scene.paint.Color;
import javafx.scene.text.Font;
import javafx.scene.text.FontWeight;
import javafx.stage.Stage;
import javafx.stage.StageStyle;
/**
* Is used to capture an area of the screen.
*
* @author GOXR3PLUS
*/
public class CaptureWindow extends Stage {
/** The border pane. */
// BorderPane and Canvas
BorderPane borderPane = new BorderPane();
/** The canvas. */
Canvas canvas = new Canvas();
/** The gc. */
GraphicsContext gc = canvas.getGraphicsContext2D();
/** The stage. */
Stage stage;
/** The width. */
// Variables
int width;
/** The height. */
int height;
/** The x pressed. */
int xPressed = 0;
/** The y pressed. */
int yPressed = 0;
/** The x now. */
int xNow = 0;
/** The y now. */
int yNow = 0;
/** The foreground. */
Color foreground = Color.rgb(255, 167, 0);
/** The background. */
Color background = Color.rgb(0, 0, 0, 0.3);
/**
* Constructor.
*
* @param screenWidth the screen width
* @param screenHeight the screen height
* @param primary the primary
*/
public CaptureWindow(double screenWidth, double screenHeight, Stage primary) {
stage = primary;
setX(0);
setY(0);
setWidth(screenWidth);
setHeight(screenHeight);
initOwner(primary);
initStyle(StageStyle.TRANSPARENT);
setAlwaysOnTop(true);
// BorderPane
borderPane.setStyle("-fx-background-color:rgb(0,0,0,0.1);");
// Canvas
canvas.setWidth(screenWidth);
canvas.setHeight(screenHeight);
canvas.setOnMousePressed(m -> {
xPressed = (int) m.getScreenX();
yPressed = (int) m.getScreenY();
});
canvas.setOnMouseDragged(m -> {
xNow = (int) m.getScreenX();
yNow = (int) m.getScreenY();
repaintCanvas();
});
borderPane.setCenter(canvas);
// Scene
setScene(new Scene(borderPane, Color.TRANSPARENT));
getScene().setCursor(Cursor.CROSSHAIR);
getScene().setOnKeyReleased(key -> {
if (key.getCode() == KeyCode.B) {
close();
System.out.println("Key Released....");
}else if(key.getCode() == KeyCode.ESCAPE)
close();
});
// gc
gc.setLineDashes(6);
gc.setFont(Font.font("null", FontWeight.BOLD, 14));
}
/**
* Repaints the canvas *.
*/
protected void repaintCanvas() {
gc.clearRect(0, 0, getWidth(), getHeight());
gc.setStroke(foreground);
gc.setFill(background);
gc.setLineWidth(3);
if (xNow > xPressed && yNow > yPressed) { // Right and Down
calculateWidthAndHeight(xNow - xPressed, yNow - yPressed);
gc.strokeRect(xPressed, yPressed, width, height);
gc.fillRect(xPressed, yPressed, width, height);
} else if (xNow < xPressed && yNow < yPressed) { // Left and Up
calculateWidthAndHeight(xPressed - xNow, yPressed - yNow);
gc.strokeRect(xNow, yNow, width, height);
gc.fillRect(xNow, yNow, width, height);
} else if (xNow > xPressed && yNow < yPressed) { // Right and Up
calculateWidthAndHeight(xNow - xPressed, yPressed - yNow);
gc.strokeRect(xPressed, yNow, width, height);
gc.fillRect(xPressed, yNow, width, height);
} else if (xNow < xPressed && yNow > yPressed) { // Left and Down
calculateWidthAndHeight(xPressed - xNow, yNow - yPressed);
gc.strokeRect(xNow, yPressed, width, height);
gc.fillRect(xNow, yPressed, width, height);
}
}
/**
* Show the window.
*/
public void showWindow() {
xNow = 0;
yNow = 0;
xPressed = 0;
yPressed = 0;
repaintCanvas();
show();
}
/**
* Calculates the width and height of the rectangle.
*
* @param w the w
* @param h the h
*/
private final void calculateWidthAndHeight(int w, int h) {
width = w;
height = h;
}
/**
* Selects whole Screen.
*/
public void selectWholeScreen() {
xPressed = 0;
yPressed = 0;
xNow = (int) getWidth();
yNow = (int) getHeight();
}
/**
 * Return an array which contains the (UPPER_LEFT) Point2D of the rectangle
 * and the width and height of the rectangle.
*
* @return the int[]
*/
public int[] calculatedRectangle() {
if (xNow > xPressed) { // Right
if (yNow > yPressed) // and DOWN
return new int[] { xPressed, yPressed, xNow - xPressed, yNow - yPressed };
else if (yNow < yPressed) // and UP
return new int[] { xPressed, yNow, xNow - xPressed, yPressed - yNow };
} else if (xNow < xPressed) { // LEFT
if (yNow > yPressed) // and DOWN
return new int[] { xNow, yPressed, xPressed - xNow, yNow - yPressed };
else if (yNow < yPressed) // and UP
return new int[] { xNow, yNow, xPressed - xNow, yPressed - yNow };
}
return new int[] { xPressed, yPressed, xNow, yNow };
}
```
---
Here is a full, advanced example. It is part of a GitHub project, available [here](https://github.com/goxr3plus/XR3Capture). You can clone the project and modify it to your needs.
```
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.imageio.ImageIO;
import org.controlsfx.control.Notifications;
import application.Main;
import application.SFileChooser;
import javafx.animation.AnimationTimer;
import javafx.application.Platform;
import javafx.concurrent.Service;
import javafx.concurrent.Task;
import javafx.fxml.FXML;
import javafx.scene.Cursor;
import javafx.scene.Scene;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.image.Image;
import javafx.scene.input.KeyCode;
import javafx.scene.input.MouseButton;
import javafx.scene.layout.StackPane;
import javafx.scene.paint.Color;
import javafx.scene.text.Font;
import javafx.scene.text.FontWeight;
import javafx.stage.Stage;
import javafx.stage.StageStyle;
/**
 * This is the Window which is used by the user to draw the rectangle
 * representing an area on the screen to be captured.
*
* @author GOXR3PLUS
*/
public class CaptureWindowController extends Stage {
/** The stack pane. */
@FXML
private StackPane stackPane;
/** The main canvas. */
@FXML
private Canvas mainCanvas;
// -----------------------------
/**
* The Model of the CaptureWindow
*/
CaptureWindowModel model = new CaptureWindowModel();
/** The file saver. */
SFileChooser fileSaver = new SFileChooser();
/** The capture service. */
final CaptureService captureService = new CaptureService();
/** The graphics context of the canvas */
GraphicsContext gc;
/**
* When a key is being pressed into the capture window then this Animation
* Timer is doing it's magic.
*/
AnimationTimer yPressedAnimation = new AnimationTimer() {
private long nextSecond = 0L;
// private static final long ONE_SECOND_NANOS = 1_000_000_000L
private long precisionLevel;
@Override
public void start() {
nextSecond = 0L;
precisionLevel = (long) ( settingsWindowController.getPrecisionSlider().getValue() * 1_000_000L );
super.start();
}
@Override
public void handle(long nanos) {
System.out.println("TimeStamp: " + nanos + " Current: " + nextSecond);
System.out.println("Milliseconds Delay: " + precisionLevel / 1_000_000);
if (nanos >= nextSecond) {
nextSecond = nanos + precisionLevel;
// With special key pressed
// (we want [LEFT] and [DOWN] side of the rectangle to be
// movable)
// No Special Key is Pressed
// (we want [RIGHT] and [UP] side of the rectangle to be
// movable)
// ------------------------------
if (model.rightPressed.get()) {
if (model.shiftPressed.get()) { // Special Key?
if (model.mouseXNow > model.mouseXPressed) { // Mouse gone Right?
model.mouseXPressed += 1;
} else {
model.mouseXNow += 1;
}
} else {
if (model.mouseXNow > model.mouseXPressed) { // Mouse gone Right?
model.mouseXNow += 1;
} else {
model.mouseXPressed += 1;
}
}
}
if (model.leftPressed.get()) {
if (model.shiftPressed.get()) { // Special Key?
if (model.mouseXNow > model.mouseXPressed) { // Mouse gone Right?
model.mouseXPressed -= 1;
} else {
model.mouseXNow -= 1;
}
} else {
if (model.mouseXNow > model.mouseXPressed) { // Mouse gone Right?
model.mouseXNow -= 1;
} else {
model.mouseXPressed -= 1;
}
}
}
if (model.upPressed.get()) {
if (model.shiftPressed.get()) { // Special Key?
if (model.mouseYNow > model.mouseYPressed) { // Mouse gone UP?
model.mouseYNow -= 1;
} else {
model.mouseYPressed -= 1;
}
} else {
if (model.mouseYNow > model.mouseYPressed) { // Mouse gone UP?
model.mouseYPressed -= 1;
} else {
model.mouseYNow -= 1;
}
}
}
if (model.downPressed.get()) {
if (model.shiftPressed.get()) { // Special Key?
if (model.mouseYNow > model.mouseYPressed) { // Mouse gone UP?
model.mouseYNow += 1;
} else {
model.mouseYPressed += 1;
}
} else {
if (model.mouseYNow > model.mouseYPressed) { // Mouse gone UP?
model.mouseYPressed += 1;
} else {
model.mouseYNow += 1;
}
}
}
repaintCanvas();
}
}
};
/**
* This AnimationTimer waits until the canvas is cleared before it can
* capture the screen.
*/
AnimationTimer waitFrameRender = new AnimationTimer() {
private int frameCount = 0;
@Override
public void start() {
frameCount = 0;
super.start();
}
@Override
public void handle(long timestamp) {
frameCount++;
if (frameCount >= 5) {
stop();
// Capture the Image
BufferedImage image;
int[] rect = getRectangleBounds();
try {
image = new Robot().createScreenCapture(new Rectangle(rect[0], rect[1], rect[2], rect[3]));
} catch (AWTException ex) {
Logger.getLogger(getClass().getName()).log(Level.INFO, null, ex);
return;
} finally {
mainCanvas.setDisable(false);
}
// System.out.println("Starting Service")
// Start the Service
captureService.startService(image);
}
}
};
/** The counting thread. */
Thread countingThread;
/** The main window controller. */
MainWindowController mainWindowController;
/** The settings window controller. */
SettingsWindowController settingsWindowController;
/**
* Constructor.
*/
public CaptureWindowController() {
setX(0);
setY(0);
getIcons().add(new Image(getClass().getResourceAsStream("/image/icon.png")));
initStyle(StageStyle.TRANSPARENT);
setAlwaysOnTop(true);
}
/**
* Add the needed references from the other controllers.
*
* @param mainWindowController the main window controller
* @param settingsWindowController the settings window controller
*/
@SuppressWarnings("hiding")
public void addControllerReferences(MainWindowController mainWindowController ,
SettingsWindowController settingsWindowController) {
this.mainWindowController = mainWindowController;
this.settingsWindowController = settingsWindowController;
}
/**
* Will be called as soon as FXML file is loaded.
*/
@FXML
public void initialize() {
// System.out.println("CaptureWindow initialized")
// Scene
Scene scene = new Scene(stackPane, model.screenWidth, model.screenHeight, Color.TRANSPARENT);
scene.setCursor(Cursor.NONE);
setScene(scene);
addKeyHandlers();
// Canvas
mainCanvas.setWidth(model.screenWidth);
mainCanvas.setHeight(model.screenHeight);
mainCanvas.setOnMousePressed(m -> {
if (m.getButton() == MouseButton.PRIMARY) {
model.mouseXPressed = (int) m.getScreenX();
model.mouseYPressed = (int) m.getScreenY();
}
});
mainCanvas.setOnMouseDragged(m -> {
if (m.getButton() == MouseButton.PRIMARY) {
model.mouseXNow = (int) m.getScreenX();
model.mouseYNow = (int) m.getScreenY();
repaintCanvas();
}
});
// graphics context 2D
gc = mainCanvas.getGraphicsContext2D();
gc.setLineDashes(6);
gc.setFont(Font.font("null", FontWeight.BOLD, 14));
// HideFeaturesPressed
model.hideExtraFeatures.addListener((observable , oldValue , newValue) -> repaintCanvas());
}
/**
* Adds the KeyHandlers to the Scene.
*/
private void addKeyHandlers() {
// -------------Read the below to understand the Code-------------------
// the default prototype of the below code is
// 1->when the user is pressing RIGHT ARROW -> The rectangle is
// increasing from the RIGHT side
// 2->when the user is pressing LEFT ARROW -> The rectangle is
// decreasing from the RIGHT side
// 3->when the user is pressing UP ARROW -> The rectangle is increasing
// from the UP side
// 4->when the user is pressing DOWN ARROW -> The rectangle is
// decreasing from the UP side
// when ->LEFT KEY <- is pressed
// 1->when the user is pressing RIGHT ARROW -> The rectangle is
// increasing from the LEFT side
// 2->when the user is pressing LEFT ARROW -> The rectangle is
// decreasing from the LEFT side
// 3->when the user is pressing UP ARROW -> The rectangle is increasing
// from the DOWN side
// 4->when the user is pressing DOWN ARROW -> The rectangle is
// decreasing from the DOWN side
		// keyPressed
getScene().setOnKeyPressed(key -> {
if (key.isShiftDown())
model.shiftPressed.set(true);
if (key.getCode() == KeyCode.LEFT)
model.leftPressed.set(true);
if (key.getCode() == KeyCode.RIGHT)
model.rightPressed.set(true);
if (key.getCode() == KeyCode.UP)
model.upPressed.set(true);
if (key.getCode() == KeyCode.DOWN)
model.downPressed.set(true);
if (key.getCode() == KeyCode.H)
model.hideExtraFeatures.set(true);
});
// keyReleased
getScene().setOnKeyReleased(key -> {
if (key.getCode() == KeyCode.SHIFT)
model.shiftPressed.set(false);
if (key.getCode() == KeyCode.RIGHT) {
if (key.isControlDown()) {
model.mouseXNow = (int) getWidth();
repaintCanvas();
}
model.rightPressed.set(false);
}
if (key.getCode() == KeyCode.LEFT) {
if (key.isControlDown()) {
model.mouseXPressed = 0;
repaintCanvas();
}
model.leftPressed.set(false);
}
if (key.getCode() == KeyCode.UP) {
if (key.isControlDown()) {
model.mouseYPressed = 0;
repaintCanvas();
}
model.upPressed.set(false);
}
if (key.getCode() == KeyCode.DOWN) {
if (key.isControlDown()) {
model.mouseYNow = (int) getHeight();
repaintCanvas();
}
model.downPressed.set(false);
}
if (key.getCode() == KeyCode.A && key.isControlDown())
selectWholeScreen();
if (key.getCode() == KeyCode.H)
model.hideExtraFeatures.set(false);
if (key.getCode() == KeyCode.ESCAPE || key.getCode() == KeyCode.BACK_SPACE) {
// Stop Counting Thread
if (countingThread != null)
countingThread.interrupt();
// Stop MaryTTS
Main.textToSpeech.stopSpeaking();
// Deactivate all keys
deActivateAllKeys();
// show the appropriate windows
Main.stage.show();
close();
} else if (key.getCode() == KeyCode.ENTER || key.getCode() == KeyCode.SPACE) {
// Stop MaryTTS
Main.textToSpeech.stopSpeaking();
// Deactivate all keys
deActivateAllKeys();
// Capture Selected Area
prepareImage();
}
});
model.anyPressed.addListener((obs , wasPressed , isNowPressed) ->
{
if (isNowPressed.booleanValue()) {
yPressedAnimation.start();
} else {
yPressedAnimation.stop();
}
});
}
/**
* Deactivates the keys contained into this method.
*/
private void deActivateAllKeys() {
model.shiftPressed.set(false);
model.upPressed.set(false);
model.rightPressed.set(false);
model.downPressed.set(false);
model.leftPressed.set(false);
model.hideExtraFeatures.set(false);
}
/**
* Creates and saves the image.
*/
public void prepareImage() {
// return if it is alive
if ( ( countingThread != null && countingThread.isAlive() ) || captureService.isRunning())
return;
countingThread = new Thread(() -> {
mainCanvas.setDisable(true);
boolean interrupted = false;
// CountDown
if (!mainWindowController.getTimeSlider().isDisabled()) {
for (int i = (int) mainWindowController.getTimeSlider().getValue(); i > 0; i--) {
final int a = i;
// Lock until it has been refreshed from JavaFX
// Application Thread
CountDownLatch count = new CountDownLatch(1);
// Repaint the Canvas
Platform.runLater(() -> {
gc.clearRect(0, 0, getWidth(), getHeight());
gc.setFill(model.background);
gc.fillRect(0, 0, getWidth(), getHeight());
gc.setFill(Color.BLACK);
gc.fillOval(getWidth() / 2 - 90, getHeight() / 2 - 165, 250, 250);
gc.setFill(Color.WHITE);
gc.setFont(Font.font("", FontWeight.BOLD, 120));
gc.fillText(Integer.toString(a), getWidth() / 2, getHeight() / 2);
// Unlock the Parent Thread
count.countDown();
});
try {
// Wait JavaFX Application Thread
count.await();
// MaryTTS
if (settingsWindowController.getMarryTTSToggle().isSelected())
Main.textToSpeech.speak(i);
// Sleep 1 seconds after that
Thread.sleep(980);
} catch (InterruptedException ex) {
interrupted = true;
mainCanvas.setDisable(false);
countingThread.interrupt();
Logger.getLogger(getClass().getName()).log(Level.INFO, null, ex);
break;
}
}
}
// !interrupted?
if (!Thread.interrupted()) {
// MaryTTS
if (settingsWindowController.getMarryTTSToggle().isSelected())
Main.textToSpeech.speak("Select where the image will be saved.");
Platform.runLater(() -> {
// Clear the canvas
gc.clearRect(0, 0, getWidth(), getHeight());
// Wait for frame Render
waitFrameRender.start();
});
} // !interrupted?
});
countingThread.setDaemon(true);
countingThread.start();
}
/**
* Repaint the canvas of the capture window.
*/
protected void repaintCanvas() {
gc.clearRect(0, 0, getWidth(), getHeight());
gc.setFont(model.font);
// draw the actual rectangle
gc.setStroke(Color.AQUA);
gc.setFill(model.background);
gc.setLineWidth(1);
// smart calculation of where the mouse has been dragged
model.rectWidth = ( model.mouseXNow > model.mouseXPressed ) ? model.mouseXNow - model.mouseXPressed // RIGHT
: model.mouseXPressed - model.mouseXNow // LEFT
;
model.rectHeight = ( model.mouseYNow > model.mouseYPressed ) ? model.mouseYNow - model.mouseYPressed // DOWN
: model.mouseYPressed - model.mouseYNow // UP
;
model.rectUpperLeftX = // -------->UPPER_LEFT_X
( model.mouseXNow > model.mouseXPressed ) ? model.mouseXPressed // RIGHT
: model.mouseXNow// LEFT
;
model.rectUpperLeftY = // -------->UPPER_LEFT_Y
( model.mouseYNow > model.mouseYPressed ) ? model.mouseYPressed // DOWN
: model.mouseYNow // UP
;
gc.strokeRect(model.rectUpperLeftX - 1.00, model.rectUpperLeftY - 1.00, model.rectWidth + 2.00, model.rectHeight + 2.00);
gc.fillRect(model.rectUpperLeftX, model.rectUpperLeftY, model.rectWidth, model.rectHeight);
// draw the circles
if (!model.hideExtraFeatures.getValue()) {
// Show the Size
double middle = model.rectUpperLeftX + model.rectWidth / 2.00;
gc.setLineWidth(1);
gc.setStroke(Color.AQUA);
gc.strokeRect(middle - 78, model.rectUpperLeftY < 25 ? model.rectUpperLeftY + 2 : model.rectUpperLeftY - 26.00, 79, 25);
gc.setFill(Color.rgb(0, 0, 00, 0.9));
gc.fillRect(middle - 77, model.rectUpperLeftY < 25 ? model.rectUpperLeftY + 2 : model.rectUpperLeftY - 25.00, 77, 23);
gc.setFill(Color.WHITE);
gc.fillText(model.rectWidth + "," + model.rectHeight, middle - 77 + 9,
model.rectUpperLeftY < 25 ? model.rectUpperLeftY + 17.00 : model.rectUpperLeftY - 6.00);
}
}
/**
* Selects whole Screen.
*/
private void selectWholeScreen() {
model.mouseXPressed = 0;
model.mouseYPressed = 0;
model.mouseXNow = (int) getWidth();
model.mouseYNow = (int) getHeight();
repaintCanvas();
}
/**
* Prepares the Window for the User.
*/
public void prepareForCapture() {
show();
repaintCanvas();
Main.stage.close();
settingsWindowController.close();
if (settingsWindowController.getMarryTTSToggle().isSelected())
Main.textToSpeech.speak("Select an area of the screen dragging your mouse and then press Enter or Space");
}
/**
 * Return an array which contains the (UPPER_LEFT) Point2D of the rectangle
 * and the width and height of the rectangle.
*
 * @return An array which contains the (UPPER_LEFT) Point2D of the
 *         rectangle and the width and height of the rectangle
*/
public int[] getRectangleBounds() {
return new int[]{ model.rectUpperLeftX , model.rectUpperLeftY , model.rectWidth , model.rectHeight };
}
/**
 * The work of the Service is to capture the Image based on the rectangle
 * that the user has drawn on the Screen.
*
* @author GOXR3PLUS
*/
public class CaptureService extends Service<Boolean> {
/** The file path. */
String filePath;
/** The image. */
BufferedImage image;
/**
* Constructor.
*/
public CaptureService() {
setOnSucceeded(s -> done());
setOnCancelled(c -> done());
setOnFailed(f -> done());
}
/**
* Starts the Service.
*
* @param image2 The image to be saved.
*/
public void startService(BufferedImage image2) {
if (!isRunning()) {
this.image = image2;
// Show the SaveDialog
fileSaver.get().setInitialFileName("ScreenShot" + model.random.nextInt(50000));
File file = fileSaver.get().showSaveDialog(CaptureWindowController.this);
if (file != null) {
filePath = file.getAbsolutePath();
reset();
start();
} else
repaintCanvas();
}
}
/**
* Service has been done.
*/
private void done() {
Main.stage.show();
close();
if (getValue()) // successful?
Notifications.create().title("Successful Capturing").text("Image is being saved at:\n" + filePath)
.showInformation();
else
Notifications.create().title("Error").text("Failed to capture the Screen!").showError();
}
/* (non-Javadoc)
* @see javafx.concurrent.Service#createTask() */
@Override
protected Task<Boolean> createTask() {
return new Task<Boolean>() {
@Override
protected Boolean call() throws Exception {
boolean written = false;
// Try to write the file to the disc
try {
written = ImageIO.write(image, fileSaver.get().getSelectedExtensionFilter().getDescription(),
new File(filePath));
} catch (IOException ex) {
Logger.getLogger(getClass().getName()).log(Level.WARNING, null, ex);
return written;
}
return written;
}
};
}
}
}
```
|
72,073,779
|
I want to offset a range if the numerical part of the range's address is divisible by 11
(A11, A22, A33, and so forth).
Take a range in a given sheet, for example Range("A2").
Could I do ...
```
Dim isRng as Range
Dim rngAddress as String
Dim tstAddress as Integer, nsnAddress as Integer
Set isRng = Range("A2") 'object variables need Set
isRng.Select
rngAddress = isRng.Address
```
Currently, rngAddress = $A$2 (I think). So then, could I ...
```
tstAddress = Right(rngAddress, 2)
nsnAddress = Right(tstAddress, 1)
If tstAddress / nsnAddress = 11 Then
'whatever code
Set isRng = ActiveCell.Offset(4,0).Select
Else
End If
```
I want it to skip down 4 rows after hitting any range like A11 or A22.
Would this work? Is there a better way of doing this? I really appreciate the help.
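For what it's worth, here is a simpler sketch of the idea (variable names illustrative, and assuming the goal is "row number divisible by 11"): the `Row` property of a `Range` gives the numeric part directly, so there is no need to parse the address string with `Right`.

```vba
Dim isRng As Range
Set isRng = ActiveSheet.Range("A2")   ' object variables need Set

If isRng.Row Mod 11 = 0 Then          ' True for A11, A22, A33, ...
    Set isRng = isRng.Offset(4, 0)    ' skip down 4 rows
End If
```

Using `Mod` also avoids the string-to-Integer coercion that `Right(rngAddress, 2)` relies on, which breaks for row numbers with more or fewer than two digits.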
---
|
2022/05/01
|
[
"https://Stackoverflow.com/questions/72073779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19005072/"
] |
```java
while(true){
System.out.println("Who are you looking for ? ");
String contact = scan.nextLine();
    boolean isFound = false;
for(int i = 0; i < names.length; i++){
if(names[i].equals(contact)){
System.out.println("They are in aisle " + i);
isFound = true;
break;
}
}
if(!isFound){
System.out.println("Not here");
}
break;
}
```
|
If you want it to print "Not here" only when the element was not found at all, you should put that check after the for loop. Loop through the whole array; if you find the name, say that you found it; otherwise, print "Not here". Something like this:
```
while(true){
System.out.println("Who are you looking for ? ");
String contact = scan.nextLine();
boolean isThere=false;
for(int i = 0; i < names.length; i++){
if(names[i].equals(contact)){
System.out.println("They are in aisle " + i);
isThere = true;
break;// break only if you want the first time the String appeared
}
}
if(!isThere){
System.out.println("Not here");
}
break;
}
```
That should work, but note that the while loop here does nothing: it always breaks on its first iteration, so it behaves as if there were no loop at all. Consider removing it or giving it a real exit condition.
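As a minimal sketch of the same lookup with the redundant `while` loop removed (the class name and array contents here are illustrative):

```java
public class ContactSearch {
    // Returns the aisle index of the first matching contact, or -1 if absent.
    static int findIndex(String[] names, String contact) {
        for (int i = 0; i < names.length; i++) {
            if (names[i].equals(contact)) {
                return i; // stop at the first occurrence
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        String[] names = {"alice", "bob", "carol"};
        int i = findIndex(names, "bob");
        // prints "They are in aisle 1"
        System.out.println(i >= 0 ? "They are in aisle " + i : "Not here");
    }
}
```

Extracting the search into a method that returns early also removes the need for the `isFound` flag entirely.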
|
20,788,075
|
I know questions resembling this one have been asked before, but I am still not getting the proper concept. I am currently loading an iframe through JavaScript in the following manner:
```
jQuery('.out-div-login').html("<iframe id='parent' frameBorder='0' width='320' height='500' src='"+_url+"'></iframe>");
```
This works fine (it is cross-domain). After loading my application inside the given iframe, I want to go back to the original state, where `.out-div-login` loaded the iframe into the parent HTML.
From outside the iframe I can do this by accessing the iframe via its `id` attribute, but not from the inside. Is there any way I can reload the iframe by setting its src again, or via the above code, but from inside the iframe? Thanks.
**Update**
I have tried the below code without any success so far:
```
var ifr = $('iframe[id="parent"]');
```
from one of the JS files inside the `iframe` with id `parent`. When I do `console.log(ifr)`, Firebug shows something like `Object[]` instead of `<iframe id="parent">`
|
2013/12/26
|
[
"https://Stackoverflow.com/questions/20788075",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3137239/"
] |
You should use `location.href` inside the iframe: set it to your new location and the iframe will reload itself.
|
The URL query string (`?xxx`) should also be kept in the new URL, so the whole solution is:
```
location.href = "new_link_url" + location.search;
```
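If it helps, here is a minimal sketch (the helper name `reloadUrl` is my own, not a standard API): rebuild the target URL from its parts so the query string survives, then assign it to `location.href` from inside the iframe.

```javascript
// Builds the URL an iframe should navigate to in order to reload itself,
// preserving the query string (?xxx).
function reloadUrl(origin, pathname, search) {
  return origin + pathname + search;
}

// Inside the iframe (browser-only) you would then write:
// location.href = reloadUrl(location.origin, location.pathname, location.search);
```

Assigning to `location.href` from within the iframe navigates only that frame, which is why this works even cross-domain.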
|
311,905
|
Using Windows 7.
I have a Windows Explorer icon pinned to my taskbar.
When no Explorer window is open, clicking it takes me to “Libraries”, a folder that I never use. **Is there a way to set a default folder to show here?**
I already followed the steps in the vaguely-related question “[Is there a way to set up a default folder for Windows Explorer?](https://superuser.com/questions/70161)”, but it had no effect (indeed a comment there suggested it no longer works in Windows 7).
|
2011/07/18
|
[
"https://superuser.com/questions/311905",
"https://superuser.com",
"https://superuser.com/users/8562/"
] |
All of these solutions result in an extra instance of the explorer.exe process each time the modified shortcut is used, and they accumulate until you logoff.
A simpler and better solution is simply to replace all of the "Windows Explorer.lnk" shortcuts throughout the system with an ordinary folder shortcut named "Windows Explorer.lnk" that points to whatever folder you've chosen.
Here's a little procedure that does this for Windows 7 and 8, where the replacement shortcut points to the %USERPROFILE% folder. The standard admonition should you try this is to first make a restore point as a precaution.
<http://reliancepc.com/menu/tips/Downloads/ReplaceDefaultExplorerShortcuts.zip>
Once unzipped, you can use 7zip to look inside the resulting .exe (which is a 7z SFX) and see what it does and that nothing nefarious is inside.
|
Do the steps in this thread require **admin** privileges? On our corporate W7 Pro machines users do not have admin, so right-clicking Explorer in the taskbar shows no menu. (Security-wise, you should run with a non-privileged user account and, when prompted by UAC (User Account Control), enter the admin password.)
Instead, *as a non-admin user*, I went to C:\Windows\explorer.exe and made a shortcut on the desktop. I opened that shortcut's properties (I forget whether I right-clicked or shift-right-clicked it) and pasted in the directory I want Explorer to open (as described elsewhere in this thread).
Right-clicking then lets you Pin to Taskbar, and this appears to replace the Explorer icon already in the taskbar (which, as a non-admin user, could not be removed) with one that points to the desired default directory.
This affects only the link in the taskbar. Windows Explorer itself still opens to another directory (in my case Computer).
|
1,633,567
|
I am working on a bash script where I need to analyze the output of an FTP session, but I encountered a problem with the SUSE Linux FTP client when connecting to an AIX FTP server.
The problem is that the output from the FTP command is being shown on the screen, but is not being sent to a pipe if I do a `| grep` or `| awk`. Also, if I open the FTP session with a file redirect (`ftp -n servername > ftplog.log`), the file content after the session looks like this:
ftp>
ftp>
ftp>
ftp>
Has anyone encountered this problem? Is there a parameter I'm missing? I tried redirecting standard error to standard output, but that didn't work either.
Thanks
**Edit:** I am accessing the terminal using Putty SSH, I don't have direct access to the server. I don't think it matters, but just in case...
|
2009/10/27
|
[
"https://Stackoverflow.com/questions/1633567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5425/"
] |
It is working now.
I was missing the -v parameter for FTP. The documentation is confusing because it states that the -v parameter is the default, and the client behaved as if it was activated (showing all the output in the terminal), but it was not printing to stdout. When I activated it explicitly, the pipes and the file redirect started working.
Thanks
|
You need to redirect stderr AFTER stdout, as in:
```
ftp -n servername > ftplog.log 2>&1
```
NOT
```
ftp -n servername 2>&1 > ftplog.log [wrong!]
```
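The order matters because redirections are processed left to right. A quick way to see the difference, using a hypothetical stand-in command rather than `ftp` itself:

```shell
# A stand-in for the ftp command: writes one line to stdout and one to stderr.
emit() { echo out; echo err >&2; }

# Redirections apply left to right:
emit > both.log 2>&1        # stdout -> both.log, THEN stderr -> same place: both captured
emit 2>&1 > only_out.log    # stderr -> old stdout (the terminal), THEN stdout -> file
```

In the second form, `2>&1` duplicates stderr onto the terminal before stdout is redirected, so the error lines never reach the file.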
|
1,396,022
|
Suppose $M$ is a $(2k+1)$-dimensional manifold on which a 1-form $\alpha$ is defined. $M$ is termed as a *contact manifold* if the distribution arising from $\alpha$ is nowhere integrable, i.e. if:
$$\xi\_q=\{v\in T\_qM:\alpha(v)=0\}$$
is a distribution that admits no integral manifolds, at whatever point you look for one. I have read that this is equivalent to:
$$\alpha\wedge(d\alpha)^k\neq0.$$
How do I prove this?
|
2015/08/13
|
[
"https://math.stackexchange.com/questions/1396022",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/135592/"
] |
This is false. The contact condition is often known as being *maximally* non-integrable.
Given an integrable distribution $\xi$ defined by a 1-form $\alpha$, use that $$d\alpha(X,Y) = X\alpha(Y) - Y\alpha(X) - \alpha([X,Y]).$$ So if $X,Y \in \xi$, then (since integrability gives $[X,Y] \in \xi$) $d\alpha(X,Y) = 0$. So in particular $\alpha \wedge d\alpha = 0$ everywhere.
For 3-manifolds, the contact condition is equivalent to non-integrability, because non-integrability is the same thing as $\alpha \wedge d\alpha \neq 0$. For higher-dimensional manifolds the contact condition can be reinterpreted as "$d\alpha|\_{\xi}$ is non-degenerate" - about as non-integrable as you can get, given that integrable is equivalent to $d\alpha|\_{\xi} = 0$.
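For concreteness, here is a quick check of the 3-dimensional case ($k=1$), consistent with the statement above: pick a basis $(e\_1,e\_2)$ of $\xi\_q$ and any $v$ with $\alpha(v)=1$. Then
$$(\alpha\wedge d\alpha)(v,e\_1,e\_2)=\alpha(v)\,d\alpha(e\_1,e\_2)-\alpha(e\_1)\,d\alpha(v,e\_2)+\alpha(e\_2)\,d\alpha(v,e\_1)=d\alpha(e\_1,e\_2),$$
so $\alpha\wedge d\alpha\neq0$ at $q$ exactly when $d\alpha(e\_1,e\_2)\neq0$, i.e. when $d\alpha|\_{\xi\_q}$ is non-degenerate on the 2-plane $\xi\_q$.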
|
Posting this to sum up all the stuff that came out in the huge comment discussion under Mike's answer, a discussion which needs 4 screenshots, [1](https://i.stack.imgur.com/6VeG4.png) [2](https://i.stack.imgur.com/RQSTU.png) [3](https://i.stack.imgur.com/Ccb5L.png) and [4](https://i.stack.imgur.com/YdpPG.png), to fit in them. Will accept Mike's answer for the patience he must have used to go on with that discussion :). What emerged was the following.
1. I forgot a "maximally" in my contact condition. "maximally non-integrable" means "as far as can be from being integrable", in a sense that will be made clear in the following points.
2. If you define integral manifolds as those whose tangent space *is* the subspace of the tangent space of the ambient manifold, and not just a subspace of that as Boothby does -- and [Boothby is apparently the *only one* doing that](https://math.stackexchange.com/questions/1396022/non-integrability-of-distribution-arising-from-1-form-and-condition-on-1-form/1396075#comment2842336_1396075) -- then completely integrable, integrable and involutive are all equivalent: the equivalence of completely integrable and involutive is Frobenius's theorem, proved in Boothby and in Lee, and that integrable implies involutive can be proven -- Proposition 14.3 on p. 358 of Lee.
3. Thus if the distribution given by the zeros of $\alpha$ is integrable, it is involutive, and $d\alpha$ is zero, as:
$$d\alpha(X,Y)=X\alpha(Y)-Y\alpha(X)-\alpha([X,Y]).$$
I actually haven't yet seen a proof of this, as far as I remember, but it shouldn't be hard. Will try soon. Of course, if $d\alpha=0$, the distribution is involutive, thus integrable.
4. So integrability is equivalent to $d\alpha=0$, and non-integrability to $d\alpha\neq0$. How far can you get from $d\alpha$ being zero? By having it nondegenerate on $\xi$. That is why this is termed as $\xi$ being *maximally non-integrable*. This is the definition of contact condition, and of contact form.
5. And now the main serving of the meal: this is equivalent to $\alpha\wedge(d\alpha)^k\neq0$. Mike told me to try proving this myself, and I did. First of all, we write the wedge product out explicitly:
$$\alpha\wedge(d\alpha)^k(v\_1,\dotsc,v\_{2k+1})=\sum\_{\sigma\in S\_{2k+1}}\operatorname{sgn}\sigma\cdot\alpha(v\_{\sigma(1)})d\alpha(v\_{\sigma(2)},v\_{\sigma(3)})\cdot\dotso\cdot d\alpha(v\_{\sigma(2k)},v\_{\sigma(2k+1)}).$$
Now assume $d\alpha|\_\xi$ is nondegenerate. Then it is easy to prove by induction that we can find a [symplectic basis](https://en.wikipedia.org/wiki/Symplectic_basis) for $d\alpha$ on $\xi$, so $d\alpha$ is represented by the matrix $J\_0$ having a block of zeros on the TL and BR corners, the identity on the BL corner and minus the identity on the TR corner, where all blocks are $k\times k$. We then complete this symplectic basis to a basis for the whole tangent space by adding a vector outside of $\xi$. Plug these into that wedge product, and the surviving terms all have $\sigma(1)=2k+1$, where $v\_{2k+1}$ is outside $\xi\_q$ and $v\_i$ is the symplectic basis. So the result of plugging in these vectors into the wedge product is $\alpha(v\_{2k+1})$ times the $k$-th power of the canonical symplectic form applied to the symplectic basis. Now applying the $k$th power of the canonical symplectic form to the symplectic basis yields, as is trivially seen, a sum of terms that are all either $1$ or $-1$. The terms in question differ from each other in three possible ways:
1) The sign of $\sigma$;
2) The order of the arguments inside the factors;
3) The order of the factors.
Let us see how altering the second two alters the first one. If I swap factors (3), the permutation $\sigma$ is altered by way of composing with the two transpositions that swap the factors. To be more explicit, if I have $\omega(v\_1,v\_3)\omega(v\_2,v\_4)$ and I want to swap those factors, I need but compose $\sigma$ with the permutation $(1,2)(3,4)$. This has even sign, so $\sigma$ keeps its sign, and the factor also does, so no change. If I swap arguments, I get a minus sign from the factor, but another one from the sign of $\sigma$, which is composed with a transposition. Again, no sign change. So they all have the same sign, and we are done. Next, suppose the wedge product is nonzero. This implies we have $2k+1$ linearly independent vectors for which the wedge product applied to them gives a nonnzero result. Exactly one of them is outside $\xi$, so again by the above expression we have a sum of terms with the same argument given to $\alpha$. One of those terms is nonzero, which means that if $v\_i$ are those vectors and $v\_{2k+1}\notin\xi$, then for each $i\leq2k$ there exists $j\leq2k$ such that $\omega(v\_i,v\_j)\neq0$. This is not a symplectic basis, but almost: with a couple normalizations it becomes one. So $d\alpha|\_\xi$ is nondegenerate, as it admits a symplectic basis.
6. As a bonus, if $\omega$ is a 2-form, nondegeneracy is equivalent to $\omega^k\neq0$. The argument is similar to the above: use a similar expression for $\omega^k$ applied to $2k$ vectors, if $\omega^k$ is nonzero then there exist $2k$ vectors for which one term is nonzero, which means almost a symplectic basis, and if $\omega$ is nondegenerate then we have the symplectic basis, and for the canonical symplectic form the $k$th power is nonzero simply by applying it to the basis. The expression for the $k$th power is:
$$\omega^k(v\_1,\dotsc,v\_{2k})=\sum\_{\sigma\in S\_{2k}}\operatorname{sgn}\sigma\cdot\omega(v\_{\sigma(1)},v\_{\sigma(2)})\cdot\dotso\cdot\omega(v\_{\sigma(2k-1)},v\_{\sigma(2k)}).$$
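To make the permutation formula concrete, here is a small numerical check (mine, not part of the original argument): evaluating the sum for $k=2$ on the standard basis of $\mathbb{R}^4$ with the canonical symplectic form gives a nonzero result, as the nondegeneracy claim predicts.

```python
# Sanity check of the permutation formula for omega^k (k = 2) on the standard
# basis of R^4, using the canonical symplectic form. Names are my own choices.
from itertools import permutations

def sign(perm):
    # sign of a permutation given as a tuple of 0-based images
    s, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if seen[i]:
            continue
        j, length = i, 0
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        if length % 2 == 0:  # even-length cycle flips the sign
            s = -s
    return s

def omega(u, v, k):
    # canonical symplectic form on R^(2k)
    return sum(u[i] * v[i + k] - u[i + k] * v[i] for i in range(k))

def omega_power(vectors, k):
    # omega^k(v_1, ..., v_2k) as the sum over sigma in S_2k of
    #   sgn(sigma) * omega(v_sigma(1), v_sigma(2)) * ... * omega(v_sigma(2k-1), v_sigma(2k))
    total = 0
    for perm in permutations(range(2 * k)):
        term = sign(perm)
        for j in range(k):
            term *= omega(vectors[perm[2 * j]], vectors[perm[2 * j + 1]], k)
        total += term
    return total

k = 2
basis = [[1 if i == j else 0 for j in range(2 * k)] for i in range(2 * k)]
result = omega_power(basis, k)
print(result)  # nonzero; the exact value depends on sign conventions
```

The check matches the claim in point 6: a nondegenerate 2-form has $\omega^k\neq0$, witnessed by applying it to a symplectic basis.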
**Update:**
I tried to prove the formula for $\alpha([X,Y])$, but I seem to have disproven it. I am sure there must be something wrong in what I've done but I just can't see what. I did everything locally. Locally, I have a chart, a basis of the tangent space which is $\partial\_i$, the "canonical" coordinate basis, and a basis of the dual of the tangent, $dx\_i$. $dx\_i(\partial\_j)=\delta\_{ij}$. Locally, $\alpha=\alpha\_idx\_i$, with the repeated indices convention. $d\alpha$ can be written as:
$$d\alpha=d\alpha\_i\wedge dx\_i=\partial\_j\alpha\_i\,dx\_j\wedge dx\_i=(\partial\_j\alpha\_i-\partial\_i\alpha\_j)dx\_j\wedge dx\_i.$$
Now, if I plug in $X,Y$, I get:
\begin{align\*}
d\alpha(X,Y)={}&(\partial\_j\alpha\_i-\partial\_i\alpha\_j)(dx\_j(X)dx\_i(Y)-dx\_i(X)dx\_j(Y))={} \\
{}={}&\partial\_j\alpha\_idx\_j(X)dx\_i(Y)-\partial\_j\alpha\_idx\_i(X)dx\_j(Y)-\partial\_i\alpha\_jdx\_j(X)dx\_i(Y)+\partial\_i\alpha\_jdx\_i(X)dx\_j(Y)={} \\
{}={}&\partial\_j\alpha\_iX\_jY\_i-\partial\_j\alpha\_iX\_iY\_j-\partial\_i\alpha\_jX\_jY\_i+\partial\_i\alpha\_jX\_iY\_j.
\end{align\*}
The first term up there is $X\alpha(Y)$, the second one is $-Y\alpha(X)$, so the rest should be $-\alpha([X,Y])$. So I wrote the commutator out:
$$[X,Y]=[X\_i\partial\_i,Y\_j\partial\_j]=X\_i\partial\_i(Y\_j\partial\_j)-Y\_j\partial\_j(X\_i\partial\_i)=X\_iY\_j\partial\_i\partial\_j+X\_i(\partial\_iY\_j)\partial\_j-Y\_jX\_i\partial\_j\partial\_i-Y\_j(\partial\_jX\_i)\partial\_i.$$
The mixed derivatives cancel out, and the rest is:
$$[X,Y]=X\_i(\partial\_iY\_j)\partial\_j-Y\_j(\partial\_jX\_i)\partial\_i.$$
Apply $\alpha$ to it:
\begin{align\*}
\alpha([X,Y])={}&\alpha\_kdx\_k[X\_i(\partial\_iY\_j)\partial\_j-Y\_j(\partial\_jX\_i)\partial\_i]={} \\
{}={}&\alpha\_kX\_i(\partial\_iY\_j)dx\_k(\partial\_j)-\alpha\_kY\_j(\partial\_jX\_i)dx\_k(\partial\_i)={} \\
{}={}&\alpha\_kX\_i(\partial\_iY\_j)\delta\_{jk}-\alpha\_kY\_j(\partial\_jX\_i)\delta\_{ik}={} \\
{}={}&\alpha\_jX\_i(\partial\_iY\_j)-\alpha\_iY\_j(\partial\_jX\_i).
\end{align\*}
Which is evidently not the same as above. What am I doing wrong here?
**Update 2:**
I tried an altogether different approach, and failed again. I am copying it for the record, and also because the terrible habit I have of using $i,j$ as indices might have had me mess indices up and get a wrong result, which of course won't happen on the computer. I tried using Cartan's formula:
$$\mathcal{L}\_X\alpha=\iota\_Xd\alpha+d(\iota\_X\alpha),$$
since evidently:
$$d\alpha(X,Y)=(\iota\_Xd\alpha)(Y)=(\mathcal{L}\_X\alpha-d(\iota\_X\alpha))(Y).$$
Let us write out the commutator. Suppose $X=X\_i\partial\_i,Y=Y\_i\partial\_i$. Then:
\begin{align\*}
[X,Y]={}&[X\_i\partial\_i,Y\_j\partial\_j]=X\_i(\partial\_iY\_j)\partial\_j+X\_iY\_j\partial\_i\partial\_j-Y\_j(\partial\_jX\_i)\partial\_i-Y\_jX\_i\partial\_j\partial\_i=X(Y\_j)\partial\_j-Y(X\_i)\partial\_i={} \\
{}={}&(X(Y\_i)-Y(X\_i))\partial\_i.
\end{align\*}
Let us start from the second term. Suppose $\alpha=\alpha\_idx\_i$. Then:
$$d(\iota\_X\alpha)(Y)=d(\alpha(X))(Y)=\partial\_j(\alpha\_iX\_i)dx\_j(Y)=(\partial\_j\alpha\_i)X\_iY\_j+(\partial\_jX\_i)\alpha\_iY\_j=Y(\alpha(X)).$$
OK, I had a wrong minus sign over here. I had gotten $Y(\alpha(X))-2\alpha\_iY(X\_i)$. But then there must be something wrong in the next bit too. Let me see.
$$\mathcal{L}\_X\alpha=X\_i\partial\_i(\alpha\_jdx\_j)=X\_i\partial\_i(\alpha\_j)dx\_j+X\_i\alpha\_j\partial\_i(dx\_j).$$
Interpreting $\partial\_i$ as a vector field, $\partial\_i(dx\_j)$ would be a Lie derivative, so I use Cartan's formula once more:
$$\mathcal{L}\_X\alpha=X\_i\partial\_i(\alpha\_j)dx\_j+X\_i\alpha\_j(\iota\_{\partial\_i}ddx\_j+d(\iota\_{\partial\_i}dx\_j)).$$
Now $ddx\_j=0$, and $\iota\_{\partial\_i}dx\_j=dx\_j(\partial\_i)=\delta\_{ij}$, so:
$$\mathcal{L}\_X\alpha=X\_i\partial\_i(\alpha\_j)dx\_j+X\_i\alpha\_jd(\delta\_{ij}),$$
OK, that can't be right. Or maybe it is. Let us go on and see what we get. That means the second term is 0. Now we finally insert $Y$:
$$(\mathcal{L}\_X\alpha)(Y)=X\_i(\partial\_i\alpha\_j)Y\_j=X(\alpha\_j)Y\_j=X(\alpha\_jY\_j)-X(Y\_j)\alpha\_j.$$
Is that last term $\alpha([X,Y])$? Remember how $[X,Y]=(X(Y\_i)-Y(X\_i))\partial\_i$. Then:
$$\alpha([X,Y])=\alpha((X(Y\_i)-Y(X\_i))\partial\_i)=\alpha\_jdx\_j((X(Y\_i)-Y(X\_i))\partial\_i)=\alpha\_j(X(Y\_j)-Y(X\_j)).$$
So I am missing half of this above. What is wrong above?
**Update 3:**
[Chi la dura, la vince](http://forum.wordreference.com/threads/chi-la-dura-la-vince.1013492/) (He conquers who endures). I was stubborn enough to try a third time. We have written before that:
$$\alpha([X,Y])=\alpha\_i(X(Y\_i)-Y(X\_i)).$$
We can easily see the following:
\begin{align\*}
X(\alpha(Y))={}&X(\alpha\_i)Y\_i+X(Y\_i)\alpha\_i, \\
Y(\alpha(X))={}&Y(\alpha\_i)X\_i+Y(X\_i)\alpha\_i,
\end{align\*}
this boils down to writing the arguments of $X,Y$ and $X,Y$ themselves explicitly; I think we've done that above as well. Let us then compute the RHS of our claim:
\begin{align\*}
X(\alpha(Y))-Y(\alpha(X))-\alpha([X,Y])={}&X(\alpha\_i)Y\_i+\underline{X(Y\_i)\alpha\_i}-Y(\alpha\_i)X\_i-\overline{Y(X\_i)\alpha\_i}-\alpha\_i(\underline{X(Y\_i)}-\overline{Y(X\_i)})={} \\
{}={}&X(\alpha\_i)Y\_i-Y(\alpha\_i)X\_i.
\end{align\*}
For the LHS, I must first stress that I had been using an erroneous definition of $d\alpha$: $d\alpha\neq(\partial\_i\alpha\_j-\partial\_j\alpha\_i)dx\_i\wedge dx\_j$. The correct expression is NOT a sum over all combinations of $i,j$, but a sum over $i<j$. To sum over all possible combinations, I must add a factor of one half in front of everything. I will now compute the LHS and finally prove the equality. Let us see:
\begin{align\*}
2d\alpha(X,Y)={}&(\partial\_j\alpha\_i-\partial\_i\alpha\_j)(dx\_j(X)dx\_i(Y)-dx\_i(X)dx\_j(Y))={} \\
{}={}&X\_jY\_i\partial\_j\alpha\_i-X\_jY\_i\partial\_i\alpha\_j-X\_iY\_j\partial\_j\alpha\_i+X\_iY\_j\partial\_i\alpha\_j={} \\
{}={}&Y\_iX(\alpha\_i)-X\_jY(\alpha\_j)-X\_iY(\alpha\_i)+Y\_jX(\alpha\_j)={} \\
{}={}&2Y\_iX(\alpha\_i)-2X\_iY(\alpha\_i),
\end{align\*}
which unless I'm much mistaken is exactly twice the RHS.
>
> We try and we fail, we try and we fail, but the only true failure is when we stop trying.
>
>
>
Says the gypsy in the sphere in "The Haunted Mansion". Well, lucky I didn't stop trying :).
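For readers who want a quick sanity check of the identity $d\alpha(X,Y)=X(\alpha(Y))-Y(\alpha(X))-\alpha([X,Y])$ that all this effort establishes, here is a finite-difference verification on $\mathbb{R}^2$ (the particular $\alpha$, $X$, $Y$ below are arbitrary smooth choices of mine, not from the derivation above):

```python
# Numerically verify d(alpha)(X, Y) = X(alpha(Y)) - Y(alpha(X)) - alpha([X, Y])
# on R^2 using central finite differences.
H = 1e-5

def alpha(p):          # a 1-form with components (alpha_1, alpha_2)
    x, y = p
    return (x * y, x * x + y)

def X(p):              # vector field X
    x, y = p
    return (y, x)

def Y(p):              # vector field Y
    x, y = p
    return (1.0, x * y)

def deriv(f, p, i):
    # central difference of scalar function f in coordinate direction i at p
    q_plus = list(p); q_plus[i] += H
    q_minus = list(p); q_minus[i] -= H
    return (f(q_plus) - f(q_minus)) / (2 * H)

def directional(f, V, p):
    # derivative of scalar f along vector field V at p
    v = V(p)
    return sum(v[i] * deriv(f, p, i) for i in range(2))

def alpha_of(V):
    # the scalar function p -> alpha_p(V_p)
    def fn(p):
        a, v = alpha(p), V(p)
        return a[0] * v[0] + a[1] * v[1]
    return fn

p = (0.7, -1.3)

# LHS: in 2D, d(alpha)(X, Y) = (d_1 alpha_2 - d_2 alpha_1)(X_1 Y_2 - X_2 Y_1)
curl = deriv(lambda q: alpha(q)[1], p, 0) - deriv(lambda q: alpha(q)[0], p, 1)
lhs = curl * (X(p)[0] * Y(p)[1] - X(p)[1] * Y(p)[0])

# RHS: X(alpha(Y)) - Y(alpha(X)) - alpha([X, Y]), with [X, Y]_i = X(Y_i) - Y(X_i)
bracket = tuple(directional(lambda q, i=i: Y(q)[i], X, p)
                - directional(lambda q, i=i: X(q)[i], Y, p) for i in range(2))
rhs = (directional(alpha_of(Y), X, p) - directional(alpha_of(X), Y, p)
       - (alpha(p)[0] * bracket[0] + alpha(p)[1] * bracket[1]))

print(abs(lhs - rhs))  # ~0, up to finite-difference error
```

Both sides agree to within the discretization error, consistent with the factor-of-two resolution found in Update 3.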
|
90,101
|
I am an undergraduate studying quantum physics with Griffiths' book. In 1-D problems, it says a free particle has un-normalizable states, but that normalizable states can be obtained by summing up solutions of the time-independent Schrodinger equation. In my view the book also suggests that a scattering state with E > V(infinity) is un-normalizable. Is this true in 1-D situations? If so, can it be generalized to 3-D situations? And why?
|
2013/12/13
|
[
"https://physics.stackexchange.com/questions/90101",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/35767/"
] |
Neuneck's answer is the pithiest description of *how* you get normalisable states as superpositions of non-normalisable states, but the following is more of a "why" these states happen. Hopefully, you should see that this discussion is independent of the number of dimensions.
Practically speaking, the reason why there are always such states it is because observables fulfilling the canonical commutation relationship $[X,P]=i\,\hbar\,I$ have eigenvectors which are non-normalisable.
Actually there is no reason why we *have* to have eigenvectors, so at a deeper level, the basic reasons *why* there are always non-normalised states are **(1)** convenience - the need for wieldiness of mathematical description and **(2)** the mathematical ingenuity of the people who gave us this wieldy and handy mathematical description - most notably the genius of (in rough historical order), Paul Dirac, Laurent Schwartz, Alexander Grothendieck and Israel Gel'Fand. This discussion keeps the intuitive ideas of eigenvectors and other convenient tools on a unified and rigorous footing.
---
Grounding Ideas
===============
The basic setting for quantum mechanics is Hilbert space, namely a complete (in the sense that every Cauchy sequence converges to a member of the space) vector space kitted with an inner product (Banach spaces are a weaker and more general concept - being complete vector spaces kitted simply with a norm. The norm in a Hilbert space comes from the inner product of a vector with itself).
So, intuitively, it is a complex vector space like $\mathbb{C}^N$ with "no holes in it" so we can talk about limits and do calculussy kind of stuff without worrying whether limits exist and, wherein we can talk about linear superposition and wherein we can "resolve" vectors uniquely into components through the inner product. So it's pretty much the state space of any physical system, aside from being complex, which is slightly unusual.
Now we look at the idea of a *linear functional* on a Hilbert space $\mathcal{H}$. This is simply a *linear* function $L:\mathcal{H}\to\mathbb{C}$ mapping the Hilbert space $\mathcal{H}$ to the underlying field (in this case $\mathbb{C}$). The inner product for some fixed "bra" $\ell\in\mathcal{H}$, namely the function $x\mapsto\left<\ell,x\right>$, is clearly a special case of this linear functional notion. However, in Hilbert space, every *continuous* linear functional can indeed be represented by a "fixed bra" inner product and, since every fixed bra inner product clearly induces a continuous linear functional, the ideas of *continuous linear functional* and *inner product with a fixed bra* are *precisely the same notion*: this equivalence does NOT hold in any old vector space. This key equivalence property is special to Hilbert spaces and is the subject matter of the [Riesz representation theorem (see Wiki page of this name)](http://en.wikipedia.org/wiki/Riesz_representation_theorem). So the continuous (topological) dual $\mathcal{H}^\*$ of $\mathcal{H}$, being a poncy name for the vector space of continuous linear functionals on $\mathcal{H}$, is *isomorphic to the original Hilbert space*.
It can be shown that an alternative and *altogether equivalent definition* of "Hilbert space" to the one above (i.e. a complete inner product space) is:
>
> *An inner product space which is isomorphic to its dual space of continuous linear functionals*.
>
>
>
All this is very slick and attractive for describing things like quantum mechanics. It is also very easy in finite dimensional quantum systems, such as *e.g.* an electron constrained to being a superposition of spin up and down states. In finite dimensions, there is no difference at all between the notions of *continuous linear functional* and the more general one of simply a *linear functional* (i.e. without heed to continuity).
---
Rigging the Hilbert Space: Nonnormalisable States
=================================================
In infinite dimensions - as with the quantum state space of the harmonic oscillator or the electron bound to a potential, we meet a glitch:
*Not all linear functionals are continuous.*
OOOPS: so just as we covet our neighbour's iPhone 5 when we have "only" model 4, so too we covet a stronger concept than Hilbert space wherein a software upgrade would make all "useful" linear functionals continuous!
Less flippantly, here is where we get practical. In quantum mechanics, we need to implement the Heisenberg uncertainty principle, so we need Hermitian *observables* $\hat{X}$ and $\hat{P}$ fulfilling the canonical commutation relationship (CCR) $[\hat{X},\,\hat{P}]=i\,\hbar\,I$ (see [my answer here](https://physics.stackexchange.com/a/68993/26076) and [here](https://physics.stackexchange.com/a/80363/26076)). It's not too hard to show that a quantum space truly implementing the HUP cannot be finite dimensional - if it were, then $\hat{X}$ and $\hat{P}$ would have square matrix representations and the Lie bracket $[\hat{X}, \hat{P}]$ between any pair of *finite* square matrices has a trace of nought, whereas the right hand side of the CCR certainly does not have zero trace. So we consider them to be operators on the Hilbert space $\mathbf{L}^2(\mathbb{R}^N)$, which is a Hilbert space with dimensionality $\aleph\_0$, i.e. it has countably infinite basis vectors, for example, the eigenfunctions of the $N$-dimensional harmonic oscillator. Vectors in this Hilbert space are "everyday wavefunctions" $\psi:\mathbb{R}^N\to\mathbb{C}$ as conceived by Schrödinger with the crucial *normalisability* property:
$$\int\limits\_{\mathbb{R}^N} |\psi(\vec{x})|^2\,{\rm d}^N x < \infty$$
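The trace argument can be checked directly: for any pair of finite square matrices, the commutator is traceless, so it can never equal $i\,\hbar\,I$, whose trace is $i\,\hbar\,n\neq0$. A quick numerical illustration (mine, not from the original answer):

```python
# For finite square matrices A, B: tr(AB - BA) = 0 always, so the CCR
# [X, P] = i*hbar*I has no finite-dimensional matrix representation.
import random

n = 4
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(n))

commutator_trace = trace(matmul(A, B)) - trace(matmul(B, A))
print(abs(commutator_trace))  # ~0, up to floating-point rounding
```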
Now, for convenience, we want to work in co-ordinates wherein one of $\hat{X}$ and $\hat{P}$ is the simple multiplication operator $X \psi(x) = x\,\psi(x)$. In [my answer here](https://physics.stackexchange.com/a/80363/26076) I show that this means that there are co-ordinates where $X \psi(x) = x\,\psi(x)$ and, needfully $\hat{P} \psi(x) = -i\,\hbar \,{\rm d}\_x \psi(x)$.
However, *neither of these operators is defined on our whole Hilbert space* $\mathcal{H} = \mathbf{L}^2(\mathbb{R}^N)$: there are vectors (functions) $f$ in $\mathbf{L}^2(\mathbb{R}^N)$ (*e.g.* functions with jump discontinuities) which have no defined $P\,f\in\mathcal{H}$, owing to the derivative's being undefined at the discontinuity. Likewise, some normalisable functions $g$ have no defined $X\,g\in\mathcal{H}$; multiplication by $\vec{x}$ makes them unnormalisable (witness for example the function $g(x) = (1+x^2)^{-1/2}$, which is square integrable, whereas $x\,g(x)$ tends to $\pm1$ and is not).
Furthermore, neither of these operators has eigenvectors in $\mathcal{H}$: if $X\,f(x) = \lambda f(x) = x f(x)\,\forall x\in\mathbb{R}$ then $f(x) = 0$ for $x\neq\lambda$, and the eigenfunction $e^{i\,k\,x}$ of $P$ is not normalisable.
But we want to salvage the idea of eigenstates and still be able to write our states in position or momentum co-ordinates.
Here is where the notion of [Rigged Hilbert Space](http://en.wikipedia.org/wiki/Rigged_hilbert_space) comes in - the ingenious process where we kit a dense subset $S\subset H$ of the original Hilbert space $H$ ("rig it") with a stronger topology, so that things like the Dirac delta are included in the topological dual space $S^\*$ where $S\subset H\subset S^\*$.
For QM we take the dense subset $S$ to be the "smooth" functions that still belong to $\mathcal{H}$ when mapped by any member of the algebra of operators generated by $X$ and $P$. That is, $S$ is invariant under this algebra and comprises precisely the [Schwartz space](http://en.wikipedia.org/wiki/Schwartz_space) of functions than can be multiplied by any polynomial and differentiated any number of times and still belong to $\mathcal{H}$. Any function in $\mathcal{H}$ can be arbitrarily well approximated (with respect to the Hilbert space norm) by some function in $S$.
At the same time, we kit the dense subset $S$ out with a stronger topology than the original Hilbert space one. Why do we do this? One of the basic problems with $\mathcal{H}$ is that the Dirac delta $\delta:\mathbf{L}^2(\mathbb{R})\to \mathbb{C};\;\delta\;f(x) = f(0)$, which can be construed as an eigenvector of $X$, is not a continuous linear functional on $\mathcal{H}$ even though of course it is a linear functional. To see this, consider the image of $f(x) + \exp(-x^2/(2 \sigma^2))$ under the delta functional: we can choose a $\sigma$ to make this function arbitrarily near to $f(x)$ as measured by the $\mathbf{L}^2$ norm, but with images $f(0)$ and $f(0)+1$, respectively, under the Dirac $\delta$. So we kit the dense subset $S$ out with a topology that is strong enough to "ferret out" all useful linear functionals and *make* them continuous. We now have a topological dual (space of all linear functionals continuous with respect to the stronger topology) $S^\*$ of $S$ such that $S\subset\mathcal{H} = \mathcal{H}^\*\subset S^\*$.
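The discontinuity of the delta functional can also be seen numerically: shrinking the Gaussian bump drives its $\mathbf{L}^2$ norm to zero while its value at the origin (its image under $\delta$) stays $1$. A small illustration of my own, using a plain Riemann sum:

```python
# The bump g_sigma(x) = exp(-x^2 / (2 sigma^2)) satisfies delta(g_sigma) = 1
# for every sigma, while ||g_sigma||_{L^2} = (sigma * sqrt(pi))^(1/2) -> 0,
# so delta cannot be continuous in the L^2 topology.
import math

def l2_norm(sigma, a=-10.0, b=10.0, steps=20001):
    # Riemann sum of |g_sigma|^2 = exp(-x^2 / sigma^2) on [a, b]
    h = (b - a) / (steps - 1)
    total = sum(math.exp(-((a + i * h) ** 2) / sigma ** 2) for i in range(steps))
    return math.sqrt(total * h)

for sigma in (1.0, 0.1, 0.01):
    # value at 0 is always exp(0) = 1, but the L^2 norm shrinks with sigma
    print(sigma, l2_norm(sigma))
```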
$S^∗$ is the space of tempered distributions as discussed in [my answer here](https://physics.stackexchange.com/a/78062/26076). $S^∗$ includes the Dirac delta, $e^{i\,k\,x}$ and is bijectively, isometrically mapped onto itself by the Fourier transform. Intuitively, functions and their Fourier transforms are precisely the same information for the tempered distributions. This ties in with the fact that position and momentum co-ordinate are mapped into each other by the Fourier transform and its inverse.
So there we have it. We now have a space of bras $S^\*$ that is strictly bigger than the space of kets $\mathcal{H}$ and it needfully includes, by the construction of the rigged Hilbert space, nonnormalisable bras in $S^\*\setminus\mathcal{H}$ simply so that we can discuss eigenstates of all the observables we need in a rigorous way.
Good references for these notion are:
1. [This answer to the Physics Stack Exchange question "Rigged Hilbert space and QM"](https://physics.stackexchange.com/a/43519/26076) and also
2. The discussions under the [Math Overflow Question "Good references for Rigged Hilbert spaces?"](https://mathoverflow.net/q/43313)
In the latter, Todd Trimble's suspicions are correct that the usual Gel'Fand triple is $S\subset H = \mathbf{L}^2(\mathbb{R}^N)\subset S^\*$ with $S$, $S^∗$ being the [Schwartz space](http://en.wikipedia.org/wiki/Schwartz_space) and tempered distributions as discussed in [my answer here](https://physics.stackexchange.com/a/78062/26076). The Wikipedia article on rigged Hilbert space is a little light here: there's a great deal of detail about nuclear spaces that's glossed over, so at the first reading I'd suggest you take the specific example $S$ = [Schwartz space](http://en.wikipedia.org/wiki/Schwartz_space) and $S^∗$ = [Tempered Distributions](http://en.wikipedia.org/wiki/Distribution_(mathematics)#Tempered_distributions_and_Fourier_transform) and keep this relatively simple (and, for QM, most relevant) example exclusively in mind - for QM you won't need anything else. The Schwartz space and space of tempered distributions are automatically nuclear, so you don't need to worry too much about this idea at first reading.
|
The scattering states are indeed non-normalizable. This is because a plane wave is an unphysical state (which you can see, for example, by trying to calculate the Heisenberg uncertainty, which will read $\Delta x \cdot \Delta p = \infty \cdot 0 = ??$).
In order to create a physical state, you need to specify boundary conditions, i.e. a *physical* wavefunction at a given time $\Psi(t = 0)$. This can always be written as a superposition of plane waves
$$ \Psi(t=0) = \int \mathrm dE \; \tilde g(E) \psi(E)$$
where $\tilde g$ is the "envelope" of your function and the $\psi(E)$ are the solutions of the **time-independent** Schroedinger equation
$$ \hat H \psi(E) = E \cdot \psi(E)$$
If this is fulfilled, your full wavefunction $\Psi(t)$ will fulfill the *time-dependent* Schroedinger equation.
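To illustrate the point numerically (this sketch is mine, not part of the answer above; the Gaussian envelope and grid parameters are arbitrary choices): summing plane waves with an envelope yields a localized, normalizable packet, even though each plane wave individually is non-normalizable.

```python
# Superpose plane waves e^{ikx} with a Gaussian envelope g(k): the resulting
# wave packet |psi(x)|^2 decays away from x = 0, so its integral converges,
# unlike |e^{ikx}|^2 = 1 whose integral over all x diverges.
import cmath
import math

def packet(x, k0=5.0, width=1.0, kmin=0.0, kmax=10.0, nk=400):
    dk = (kmax - kmin) / nk
    total = 0j
    for i in range(nk):
        k = kmin + (i + 0.5) * dk
        g = math.exp(-((k - k0) / width) ** 2)   # envelope g(k)
        total += g * cmath.exp(1j * k * x) * dk  # discretized integral over k
    return total

# probability weight near the origin vs. far away
norm_near = sum(abs(packet(i * 0.1)) ** 2 * 0.1 for i in range(-50, 51))
far_value = abs(packet(50.0)) ** 2
print(norm_near, far_value)  # finite weight near 0, negligible far away
```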
|
19,601,036
|
I'm using Django along with Bootstrap and I have a page where there are two inputs and a submit button:
```
<div class="col-lg-5">
<form id="search_form" align="right" action="/comparison/" class="form-inline" method ="get" role="form">
<input id="navPersonSearch" class="input form-control" type="text" name="q1"
placeholder="Search for Actor/Actress"
autocomplete="off" >
</div> <!-- /.col -->
<div class="col-lg-2">
<button type="submit" class="btn btn-primary btn-lg btn-block">Compare</button>
</div> <!-- ./col -->
<div class="col-lg-5">
<input id="navPersonSearch2" class="input form-control" type="text" name="q2"
placeholder="Search for Actor/Actress"
autocomplete="off" >
</form>
</div> <!-- /.col -->
```
However, when you submit the form, only one of the values is submitted. For instance, if you have "Tom" in the first input and "Bob" in the second, it will redirect to:
```
http://127.0.0.1:8000/comparison/?q2=Bob
```
when I want both values to be submitted.
|
2013/10/25
|
[
"https://Stackoverflow.com/questions/19601036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2027556/"
] |
You can use:
```
<div ng-repeat="item in items | filter:{visible: true} | limitTo: 50">
<p>{{item.id}}</p>
</div>
```
`filter:{visible: true}` will return a list of all visible items
You can take a look at the angularjs docu for more information on the filter filter.
<http://docs.angularjs.org/api/ng.filter:filter>
|
There are a couple of approaches, maybe the most reusable is for you to create your own 'visible' custom filter which looks for visible attribute on your items. Then you could chain them.
```
<div ng-repeat="item in items | visible | limitTo: 50">
```
[Here's a SO link to create custom filters](https://stackoverflow.com/questions/16563018/custom-filters-and-ng-repeat-in-angularjs)
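A minimal sketch of what such a custom `visible` filter could look like (the factory name and item shape are illustrative, not from the linked answer; in Angular you would register it with `app.filter('visible', visibleFilter)`):

```javascript
// Factory for a 'visible' filter: keeps only items whose 'visible'
// attribute is truthy. Written as plain JS so the behavior can be
// demonstrated outside Angular.
function visibleFilter() {
  return function (items) {
    return (items || []).filter(function (item) { return item.visible; });
  };
}

// plain-JS usage, just to show the behavior:
var filterFn = visibleFilter();
var items = [
  { id: 1, visible: true },
  { id: 2, visible: false },
  { id: 3, visible: true }
];
console.log(filterFn(items).map(function (i) { return i.id; })); // [ 1, 3 ]
```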
|
65,179,829
|
Every user getting registered to the Sandbox gets added to the Platform as an Identity.
How can I get the Identity ID of my user when I log in to the [Sandbox environment](https://world-sandbox.oftrust.net/)?
I need this user ID for creating links between identities, as is exemplified [in this guide](https://developer.oftrust.net/guides/integrate-data-source-60-minutes/#step-3-create-a-group) after creating a group.
|
2020/12/07
|
[
"https://Stackoverflow.com/questions/65179829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14575832/"
] |
Your problem is trying to update state after the component unmounts.
The `onAuthUIStateChange` function seems to have been designed with a subscribe/unsubscribe architecture:
it returns an unsubscribe function for the listener it registers.
source code: <https://github.com/aws-amplify/amplify-js/blob/master/packages/amplify-ui-components/src/common/helpers.ts#L58,L82>
you can use it:
```
React.useEffect(() => {
const unsubscribe = onAuthUIStateChange((nextAuthState, authData) => {
setAuthState(nextAuthState);
setUser(authData);
if (authData !== undefined) {
setUserGroup(authData.signInUserSession.accessToken.payload["cognito:groups"]);
}
console.log("user data: ", authData);
});
return () => unsubscribe();
}, []);
```
|
Try this:
```
import React from "react";
import Amplify from "aws-amplify";
import {
AmplifyAuthenticator,
AmplifySignOut,
withAuthenticator,
} from "@aws-amplify/ui-react";
import { AuthState, onAuthUIStateChange } from "@aws-amplify/ui-components";
Amplify.configure({
Auth: {
region: "...",
userPoolId: "...",
userPoolWebClientId: "...",
},
});
const App = () => {
  const [didMount, setDidMount] = React.useState(false);
const [authState, setAuthState] = React.useState();
const [user, setUser] = React.useState();
const [userGroup, setUserGroup] = React.useState();
React.useEffect(() => {
setDidMount(true);
onAuthUIStateChange((nextAuthState, authData) => {
setAuthState(nextAuthState);
setUser(authData);
if (authData !== undefined) {
setUserGroup(authData.signInUserSession.accessToken.payload["cognito:groups"]);
}
console.log("user data: ", authData);
});
return () => setDidMount(false);
}, []);
if(!didMount) {
return null;
}
return authState === AuthState.SignedIn && user ? (
<div>
<AmplifySignOut />
<div>Hello {user.username} from {userGroup}</div>
</div>
) : (
<AmplifyAuthenticator />
);
};
export default withAuthenticator(App);
```
|
52,239,344
|
I have a Django app running on cPanel (I'm not sure if it is a problem with my
Django app or the cPanel server). When debug mode is set to `True`, I can see all the media files, like profile pictures or PDF files, etc. But when debug mode is set to `False`, you can't see the media files in the app. These are my configurations in the `settings.py` file.
```
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static'),
]
STATIC_URL = '/static/'
STATIC_ROOT = '<app-directory>/public_html/static'
# Media files
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
# Crispy Form Theme - Bootstrap 3
CRISPY_TEMPLATE_PACK = 'bootstrap3'
# For Bootstrap 3, change error alert to 'danger'
from django.contrib import messages
MESSAGE_TAGS = {
messages.ERROR: 'danger'
}
```
|
2018/09/08
|
[
"https://Stackoverflow.com/questions/52239344",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8041911/"
] |
To deploy a production server, you need to set 3 things
1. deactivate the DEBUG configuration:
>
> DEBUG = False
>
>
>
This disables all the extra functionality used during development.
2. define a static root directory in the settings file:
>
> STATIC\_ROOT = '/this/is/a/path/to/statics'
>
>
>
This indicates where all the static files should be saved.
3. run the command to collect all static files:
>
> python manage.py collectstatic
>
>
>
This saves all static files in the directory configured by `STATIC_ROOT`
The official documentation for deployment:
<https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/#static-root-and-static-url>
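Putting the three steps above together, the relevant part of `settings.py` looks something like this (a minimal sketch — the `staticfiles` directory name and `BASE_DIR` placeholder are illustrative; adapt the paths to your own project layout):

```python
# settings.py (production-relevant fragment)
import os

# Placeholder; a real Django settings.py derives BASE_DIR from __file__.
BASE_DIR = os.getcwd()

# Step 1: disable debug mode in production
DEBUG = False

# Step 2: where `collectstatic` will gather all static files
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')

MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
```

Step 3 is then `python manage.py collectstatic`, which copies everything into `STATIC_ROOT` so the web server can serve it directly.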
|
I haven't fully solved it yet, but you can do the following to work around the issue:
1 - Setup your media and static root:
```
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
```
2 - Activate your virtual environment on cPanel, then run this command:
```
python manage.py collectstatic
```
3 - After you do that, two folders will be created at the root of your app on cPanel,
"static" and "media". Copy both of them into the "public\_html" directory.
4 - Go to "Setup Python App" in cPanel, find your app, and restart it.
This solved the issue for me, but every time I upload a picture I have to copy the media files into public\_html again.
|
36,359,857
|
I'm currently looking into server-side validation of a GoogleIDToken for Google Sign-in (Android & iOS). [Documentation here](https://developers.google.com/identity/sign-in/android/backend-auth#send-the-id-token-to-your-server)
In the example, the "sub" field in the object returned by the Google API endpoint is read as a string, but it looks like it may actually be a (really big) number.
Some other tests using some users on my side also show big numbers.
Looking deeper into the Payload documentation, it looks like this value could be null, but apart from that possibility, can we assume that this string is actually a number?
This is important because we want to store it in a database, and saving it as a number might actually be more efficient than a string.
|
2016/04/01
|
[
"https://Stackoverflow.com/questions/36359857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3345143/"
] |
A possible top-level algorithm to solve this puzzle is to repeat the following until there is only one color on the whole board:
* Find all contiguous color regions. Treat the region at (0,0) as primary, all others as secondary.
* Pick the largest (by count of tiles) secondary region with a color that is different to the primary region's color. Let's name the color of this secondary region the **new\_color**.
* Recolor the primary region to new\_color.
Finding all the regions
=======================
We should keep a **cumulative\_mask** to track of all the tiles that are already identified as part of some region.
First we find the primary region, starting search at (0,0), and update our cumulative\_mask with the result.
Then repeat until no more regions can be found:
* Find the position of the first zero tile in the cumulative\_mask, which has at least one non-zero tile in the primary region mask.
* Find the region starting at this position.
* Update the cumulative\_mask with the mask of this region.
Selecting the color
===================
Simply iterate through secondary regions, and find the region with largest count, which has a different color than the primary region.
---
Code
====
(also on [coliru](http://coliru.stacked-crooked.com/a/f0dc130bfd46f56a))
Note: Intentionally written in a way to make it possible to understand the algorithm. This could definitely be refactored, and it's missing a lot of error checking.
```
#include <cstdint>
#include <vector>
#include <queue>
#include <string>
#include <iostream>
typedef std::vector<int32_t> vec_1d;
typedef std::vector<vec_1d> vec_2d;
typedef std::pair<std::size_t, std::size_t> position;
position const INVALID_POSITION(-1, -1);
int32_t const INVALID_COLOR(0);
// ============================================================================
struct region_info
{
int32_t color;
vec_2d mask;
std::size_t count() const
{
std::size_t result(0);
for (std::size_t y(0); y < mask.size(); ++y) {
for (std::size_t x(0); x < mask[0].size(); ++x) {
if (mask[y][x]) {
++result;
}
}
}
return result;
}
};
struct region_set
{
// The region that contains (0, 0)
region_info primary;
// All other regions
std::vector<region_info> secondary;
};
// ============================================================================
// Print the 2D vector with a label
void dump(std::string const& label, vec_2d const& v)
{
std::cout << label << "\n";
for (std::size_t y(0); y < v.size(); ++y) {
for (std::size_t x(0); x < v[0].size(); ++x) {
std::cout << v[y][x] << " ";
}
std::cout << "\n";
}
std::cout << "\n";
}
// Print the coordinates of non-zero elements of 2D vector with a label
void dump_coordinates(std::string const& label, vec_2d const& v)
{
std::cout << label << "\n";
for (std::size_t y(0); y < v.size(); ++y) {
for (std::size_t x(0); x < v[0].size(); ++x) {
if (v[y][x]) {
std::cout << "(" << x << ", " << y << ") ";
}
}
}
std::cout << "\n";
}
void dump(region_info const& ri)
{
std::cout << "Region color: " << ri.color << "\n";
std::cout << "Region count: " << ri.count() << "\n";
dump("Region mask:", ri.mask);
}
void dump(region_set const& rs)
{
std::cout << "Primary Region\n" << "\n";
dump(rs.primary);
for (std::size_t i(0); i < rs.secondary.size(); ++i) {
std::cout << "Secondary Region #" << i << "\n";
dump(rs.secondary[i]);
}
}
// ============================================================================
// Find connected tiles - implementation
void find_connected(int32_t target_color
, std::size_t x
, std::size_t y
, vec_2d const& colors
, vec_2d& result)
{
std::size_t width(colors[0].size());
std::size_t height(colors.size());
std::queue<position> s;
s.push(position(x, y));
while (!s.empty()) {
position pos(s.front());
s.pop();
if (result[pos.second][pos.first] == 1) {
continue;
}
if (colors[pos.second][pos.first] != target_color) {
continue;
}
result[pos.second][pos.first] = 1;
if (pos.first > 0) {
s.push(position(pos.first - 1, pos.second));
}
if (pos.second > 0) {
s.push(position(pos.first, pos.second - 1));
}
if (pos.first < (width - 1)) {
s.push(position(pos.first + 1, pos.second));
}
if (pos.second < (height - 1)) {
s.push(position(pos.first, pos.second + 1));
}
}
}
// Find connected tiles - convenience wrapper
vec_2d find_connected(std::size_t x, std::size_t y, vec_2d const& colors)
{
if (colors.empty() || colors[0].empty()) {
throw std::runtime_error("Invalid input array size");
}
int32_t target_color(colors[y][x]);
vec_2d result(colors.size(), vec_1d(colors[0].size(), 0));
find_connected(target_color, x, y, colors, result);
return result;
}
// ============================================================================
// Change color of elements at positions with non-zero mask value to new color
vec_2d& change_masked(int32_t new_color
, vec_2d& colors
, vec_2d const& mask)
{
for (std::size_t y(0); y < mask.size(); ++y) {
for (std::size_t x(0); x < mask[0].size(); ++x) {
if (mask[y][x]) {
colors[y][x] = new_color;
}
}
}
return colors;
}
// Combine two masks
vec_2d combine(vec_2d const& v1, vec_2d const& v2)
{
vec_2d result(v1);
for (std::size_t y(0); y < v2.size(); ++y) {
for (std::size_t x(0); x < v2[0].size(); ++x) {
if (v2[y][x]) {
result[y][x] = v2[y][x];
}
}
}
return result;
}
// Find position of first zero element in mask
position find_first_zero(vec_2d const& mask)
{
for (std::size_t y(0); y < mask.size(); ++y) {
for (std::size_t x(0); x < mask[0].size(); ++x) {
if (!mask[y][x]) {
return position(x, y);
}
}
}
return INVALID_POSITION;
}
bool has_nonzero_neighbor(std::size_t x, std::size_t y, vec_2d const& mask)
{
bool result(false);
if (x > 0) {
result |= (mask[y][x - 1] != 0);
}
if (y > 0) {
result |= (mask[y - 1][x] != 0);
}
if (x < (mask[0].size() - 1)) {
result |= (mask[y][x + 1] != 0);
}
if (y < (mask.size() - 1)) {
result |= (mask[y + 1][x] != 0);
}
return result;
}
// Find position of first zero element in mask
// which neighbors at least one non-zero element in primary mask
position find_first_zero_neighbor(vec_2d const& mask, vec_2d const& primary_mask)
{
for (std::size_t y(0); y < mask.size(); ++y) {
for (std::size_t x(0); x < mask[0].size(); ++x) {
if (!mask[y][x]) {
if (has_nonzero_neighbor(x, y, primary_mask)) {
return position(x, y);
}
}
}
}
return INVALID_POSITION;
}
// ============================================================================
// Find all contiguous color regions in the image
// The region starting at (0,0) is considered the primary region
// All other regions are secondary
// If parameter 'only_neighbors' is true, search only for regions
// adjacent to primary region, otherwise search the entire board
region_set find_all_regions(vec_2d const& colors, bool only_neighbors = false)
{
region_set result;
result.primary.color = colors[0][0];
result.primary.mask = find_connected(0, 0, colors);
vec_2d cumulative_mask = result.primary.mask;
for (;;) {
position pos;
if (only_neighbors) {
pos = find_first_zero_neighbor(cumulative_mask, result.primary.mask);
} else {
pos = find_first_zero(cumulative_mask);
}
if (pos == INVALID_POSITION) {
break; // No unsearched tiles left
}
region_info reg;
reg.color = colors[pos.second][pos.first];
reg.mask = find_connected(pos.first, pos.second, colors);
cumulative_mask = combine(cumulative_mask, reg.mask);
result.secondary.push_back(reg);
}
return result;
}
// ============================================================================
// Select the color to recolor the primary region with
// based on the color of the largest secondary region of non-primary color
int32_t select_color(region_set const& rs)
{
int32_t selected_color(INVALID_COLOR);
std::size_t selected_count(0);
for (auto const& ri : rs.secondary) {
if (ri.color != rs.primary.color) {
if (ri.count() > selected_count) {
selected_count = ri.count();
selected_color = ri.color;
}
}
}
return selected_color;
}
// ============================================================================
// Solve the puzzle
// If parameter 'only_neighbors' is true, search only for regions
// adjacent to primary region, otherwise search the entire board
// Returns the list of selected colors representing the solution steps
vec_1d solve(vec_2d colors, bool only_neighbors = false)
{
vec_1d selected_colors;
for (int32_t i(0);; ++i) {
std::cout << "Step #" << i << "\n";
dump("Game board: ", colors);
region_set rs(find_all_regions(colors, true));
dump(rs);
int32_t new_color(select_color(rs));
if (new_color == INVALID_COLOR) {
break;
}
std::cout << "Selected color: " << new_color << "\n";
selected_colors.push_back(new_color);
change_masked(new_color, colors, rs.primary.mask);
std::cout << "\n------------------------------------\n\n";
}
return selected_colors;
}
// ============================================================================
int main()
{
vec_2d colors{
{ 1, 1, 1, 1, 1, 1 }
, { 2, 2, 2, 3, 3, 1 }
, { 1, 1, 4, 5, 3, 1 }
, { 1, 3, 3, 4, 3, 1 }
, { 1, 1, 1, 1, 1, 1 }
};
vec_1d steps(solve(colors, true));
std::cout << "Solved in " << steps.size() << " step(s):\n";
for (auto step : steps) {
std::cout << step << " ";
}
std::cout << "\n\n";
}
// ============================================================================
```
Output of the program:
```
Step #0
Game board:
1 1 1 1 1 1
2 2 2 3 3 1
1 1 4 5 3 1
1 3 3 4 3 1
1 1 1 1 1 1
Primary Region
Region color: 1
Region count: 18
Region mask:
1 1 1 1 1 1
0 0 0 0 0 1
1 1 0 0 0 1
1 0 0 0 0 1
1 1 1 1 1 1
Secondary Region #0
Region color: 2
Region count: 3
Region mask:
0 0 0 0 0 0
1 1 1 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Secondary Region #1
Region color: 3
Region count: 4
Region mask:
0 0 0 0 0 0
0 0 0 1 1 0
0 0 0 0 1 0
0 0 0 0 1 0
0 0 0 0 0 0
Secondary Region #2
Region color: 4
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 1 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Secondary Region #3
Region color: 3
Region count: 2
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 1 1 0 0 0
0 0 0 0 0 0
Secondary Region #4
Region color: 4
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
Selected color: 3
------------------------------------
Step #1
Game board:
3 3 3 3 3 3
2 2 2 3 3 3
3 3 4 5 3 3
3 3 3 4 3 3
3 3 3 3 3 3
Primary Region
Region color: 3
Region count: 24
Region mask:
1 1 1 1 1 1
0 0 0 1 1 1
1 1 0 0 1 1
1 1 1 0 1 1
1 1 1 1 1 1
Secondary Region #0
Region color: 2
Region count: 3
Region mask:
0 0 0 0 0 0
1 1 1 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Secondary Region #1
Region color: 4
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 1 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Secondary Region #2
Region color: 5
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Secondary Region #3
Region color: 4
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
Selected color: 2
------------------------------------
Step #2
Game board:
2 2 2 2 2 2
2 2 2 2 2 2
2 2 4 5 2 2
2 2 2 4 2 2
2 2 2 2 2 2
Primary Region
Region color: 2
Region count: 27
Region mask:
1 1 1 1 1 1
1 1 1 1 1 1
1 1 0 0 1 1
1 1 1 0 1 1
1 1 1 1 1 1
Secondary Region #0
Region color: 4
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 1 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Secondary Region #1
Region color: 5
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Secondary Region #2
Region color: 4
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
Selected color: 4
------------------------------------
Step #3
Game board:
4 4 4 4 4 4
4 4 4 4 4 4
4 4 4 5 4 4
4 4 4 4 4 4
4 4 4 4 4 4
Primary Region
Region color: 4
Region count: 29
Region mask:
1 1 1 1 1 1
1 1 1 1 1 1
1 1 1 0 1 1
1 1 1 1 1 1
1 1 1 1 1 1
Secondary Region #0
Region color: 5
Region count: 1
Region mask:
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 1 0 0
0 0 0 0 0 0
0 0 0 0 0 0
Selected color: 5
------------------------------------
Step #4
Game board:
5 5 5 5 5 5
5 5 5 5 5 5
5 5 5 5 5 5
5 5 5 5 5 5
5 5 5 5 5 5
Primary Region
Region color: 5
Region count: 30
Region mask:
1 1 1 1 1 1
1 1 1 1 1 1
1 1 1 1 1 1
1 1 1 1 1 1
1 1 1 1 1 1
Solved in 4 step(s):
3 2 4 5
```
|
There are a few things I don't understand in your code, so instead of trying to fix them I'll write a new function and you can compare the two.
```
// forward declaration so change_color can call it
void flood_fill(int x, int y, const int target_color, vec_2d& colors);

// this function is called when the user inputs the x and y values
// the colors vector will be modified in place by reference
void change_color(int x, int y, vec_2d& colors)
{
    int target_color = colors[x][y];
    // the flood fill always starts from the top-left tile
    flood_fill(0, 0, target_color, colors);
}

// this function is the recursive flood fill
// (here the first index selects the row, the second the column)
void flood_fill(int x, int y, const int target_color, vec_2d& colors)
{
    int old_color = colors[x][y];
    // if the current tile is already the target color, do nothing
    if (old_color == target_color) return;
    // fill the current tile first, so the recursion below
    // cannot revisit it and loop forever
    colors[x][y] = target_color;
    // visit every neighbor that still has the old color;
    // all four directions are needed, because a region can bend
    // back up or left even when the fill starts at the top left
    if (x > 0 && colors[x - 1][y] == old_color)
        flood_fill(x - 1, y, target_color, colors);
    if (y > 0 && colors[x][y - 1] == old_color)
        flood_fill(x, y - 1, target_color, colors);
    if (x + 1 < (int)colors.size() && colors[x + 1][y] == old_color)
        flood_fill(x + 1, y, target_color, colors);
    if (y + 1 < (int)colors[0].size() && colors[x][y + 1] == old_color)
        flood_fill(x, y + 1, target_color, colors);
}
```
**EDIT:** Since you meant you wanted to solve the game instead of implementing the game...
Keep track of which colors are still available on the board at all times. On each "turn", find the color that will fill the most tile area starting from the top left. Repeat until all tiles are filled with the same color.
This is more of a brute force approach, and there is probably a more optimized method, but this is the most basic one in my opinion.
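A minimal sketch of that greedy approach in Python (illustrative only — `board` is assumed to be a list of rows of color ints, and the primary region grows from the top-left tile; the function names are made up for this sketch):

```python
from collections import deque

def region(board):
    """Cells 4-connected to the top-left tile with the same color (BFS)."""
    h, w = len(board), len(board[0])
    color = board[0][0]
    seen, q = {(0, 0)}, deque([(0, 0)])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen \
                    and board[nr][nc] == color:
                seen.add((nr, nc))
                q.append((nr, nc))
    return seen

def flood(board, new_color):
    """Return a copy of `board` with the primary region recolored."""
    out = [row[:] for row in board]
    for r, c in region(board):
        out[r][c] = new_color
    return out

def solve_greedy(board):
    """Each turn, pick the color whose flood grows the primary region most."""
    steps = []
    while len(region(board)) < len(board) * len(board[0]):
        candidates = sorted({v for row in board for v in row} - {board[0][0]})
        best = max(candidates, key=lambda c: len(region(flood(board, c))))
        board = flood(board, best)
        steps.append(best)
    return steps
```

Each step recomputes floods for every candidate color, so it is still brute force; memoizing region masks would avoid the repeated work.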
|
36,559,254
|
In Python sklearn ensemble library, I want to train my data using some boosting method (say Adaboost). As I would like to know the optimal number of estimators, I plan to run cross-validation with a different number of estimators each time. However, it seems doing it the following way is redundant:
```
for n in [50,100,150,200,250,300]:
model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),n_estimators=n)
    cross_val_score(model, x, y, cv=5)
```
Because in AdaBoost, once I train the classifier on # of estimator=50, as I move along to train # of estimator=100, the first 50 classifiers and their weights don't change. I wonder if there is a way to start training directly with the 51st weak learner in this case.
|
2016/04/11
|
[
"https://Stackoverflow.com/questions/36559254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3457417/"
] |
It is possible to use inheritance to make a "hack" of `AdaBoostClassifier` that doesn't retrain estimators and is compatible with many cross-validation functions in `sklearn` (must be cross-validation that doesn't shuffle data).
If you look at the source code in `sklearn.ensemble.weight_boosting.py`, you can see that you can get away with not needing to retrain estimators if you properly wrap the behavior of `AdaBoostClassifier.fit()` and `AdaBoostClassifier._boost()`.
The problem with cross validation functions is that they make clones of the original estimator using `sklearn.base.clone()`, and in turn the function `sklearn.base.clone()` makes deep copies of the estimator's parameters. The deep copy nature makes it impossible for the estimator to "remember" its estimators between different cross-validation runs (`clone()` copies the contents of a reference and not the reference itself). The only way to do so (at least the only way I can think of) is to use global state to keep track of old estimators between runs. The catch here is that you have to compute a hash of your X features, which could be expensive!
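The global-state idea can be shown in miniature (illustrative Python only — `_fit_cache`, `cache_key`, and `fit_with_cache` are made-up names, not sklearn API): fitted results are memoized under a hash of the fold's first feature column, which stays stable as long as the splitter does not shuffle.

```python
import numpy as np

# Global cache that survives sklearn.base.clone(), keyed by a hash of the data.
_fit_cache = {}

def cache_key(X):
    """Hash the first feature column; identical (unshuffled) folds
    always produce the same key."""
    return hash(np.asarray(X)[:, 0].tobytes())

def fit_with_cache(X, fit_fn):
    """Run `fit_fn(X)` only the first time this exact fold is seen."""
    key = cache_key(X)
    if key not in _fit_cache:
        _fit_cache[key] = fit_fn(X)
    return _fit_cache[key]
```

Because the cache lives at module level, every clone of the estimator sees the same dictionary, which is exactly how the hack below remembers old estimators between cross-validation runs.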
Anyways, here is the hack to `AdaBoostClassifier` itself:
```py
'''
adaboost_hack.py
Make a "hack" of AdaBoostClassifier in sklearn.ensemble.weight_boosting.py
that doesn't need to retrain estimators and is compatible with many sklearn
cross validation functions.
'''
import copy
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.base import clone
# Used to hold important variables between runs of cross validation.
# Note that sklearn cross validation functions use sklearn.base.clone()
# to make copies of the estimator sent to it as a function. The function
# sklearn.base.clone() makes deep copies of parameters of an estimator, so
# the only way to provide a way to remember previous estimators between
# cross validation runs is to use a global variable.
#
# We will use hash values of the split of X[:, 0] as keys for remembering
# previous estimators of a cv fold. Note, you can NOT use cross validators
# that randomly shuffle the data before splitting. This will cause different
# hashes.
kfold_hash = {}
class WarmRestartAdaBoostClassifier(AdaBoostClassifier):
'''
Keep track of old estimators, estimator weights, the estimator errors, and
the next to last sample weight seen.
Note that AdaBoostClassifier._boost() does NOT boost the last seen sample
weight. Simple fix to this is to drop the last estimator and retrain it.
Wrap AdaBoostClassifier.fit() to decide whether to throw away estimators or add estimators
depending on the current number of estimators vs the number of old estimators.
Also looks in the global kfold_hash for old values if
use_kfold_hash == True.
Wrap AdaBoostClassifier._boost() with behavior to record the next to last sample weight.
'''
def __init__(self,
base_estimator=None,
n_estimators=50,
learning_rate=1.,
algorithm='SAMME.R',
random_state=None,
next_to_last_sample_weight = None,
old_estimators_ = [],
use_kfold_hash = False):
AdaBoostClassifier.__init__(self, base_estimator, n_estimators, learning_rate,
algorithm, random_state)
self.next_to_last_sample_weight = next_to_last_sample_weight
self._last_sample_weight = None
self.old_estimators_ = old_estimators_
self.use_kfold_hash = use_kfold_hash
def _boost(self, iboost, X, y, sample_weight, random_state):
'''
Record the sample weight.
Parameters and return behavior same as that of AdaBoostClassifier._boost() as
seen in sklearn.ensemble.weight_boosting.py
Parameters
----------
iboost : int
The index of the current boost iteration.
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. COO, DOK, and LIL are converted to CSR.
y : array-like of shape = [n_samples]
The target values (class labels).
sample_weight : array-like of shape = [n_samples]
The current sample weights.
random_state : RandomState
The current random number generator
Returns
-------
sample_weight : array-like of shape = [n_samples] or None
The reweighted sample weights.
If None then boosting has terminated early.
estimator_weight : float
The weight for the current boost.
If None then boosting has terminated early.
error : float
The classification error for the current boost.
If None then boosting has terminated early.
'''
fit_info = AdaBoostClassifier._boost(self, iboost, X, y, sample_weight, random_state)
sample_weight, _, _ = fit_info
self.next_to_last_sample_weight = self._last_sample_weight
self._last_sample_weight = sample_weight
return fit_info
def fit(self, X, y):
hash_X = None
if self.use_kfold_hash:
# Use a hash of X features in this kfold to access the global information
# for this kfold.
hash_X = hash(bytes(X[:, 0]))
if hash_X in kfold_hash.keys():
self.old_estimators_ = kfold_hash[hash_X]['old_estimators_']
self.next_to_last_sample_weight = kfold_hash[hash_X]['next_to_last_sample_weight']
self.estimator_weights_ = kfold_hash[hash_X]['estimator_weights_']
self.estimator_errors_ = kfold_hash[hash_X]['estimator_errors_']
# We haven't done any fits yet.
if not self.old_estimators_:
AdaBoostClassifier.fit(self, X, y)
self.old_estimators_ = self.estimators_
# The case that we throw away estimators.
elif self.n_estimators < len(self.old_estimators_):
self.estimators_ = self.old_estimators_[:self.n_estimators]
self.estimator_weights_ = self.estimator_weights_[:self.n_estimators]
self.estimator_errors_ = self.estimator_errors_[:self.n_estimators]
# The case that we add new estimators.
elif self.n_estimators > len(self.old_estimators_):
n_more = self.n_estimators - len(self.old_estimators_)
self.fit_more(X, y, n_more)
# Record information in the global hash if necessary.
if self.use_kfold_hash:
kfold_hash[hash_X] = {'old_estimators_' : self.old_estimators_,
'next_to_last_sample_weight' : self.next_to_last_sample_weight,
'estimator_weights_' : self.estimator_weights_,
'estimator_errors_' : self.estimator_errors_}
return self
def fit_more(self, X, y, n_more):
'''
Fits additional estimators.
'''
# Since AdaBoostClassifier._boost() doesn't boost the last sample weight, we retrain the last estimator with
# its input sample weight.
self.n_estimators = n_more + 1
if self.old_estimators_ is None:
raise Exception('Should have already fit estimators before calling fit_more()')
self.old_estimators_ = self.old_estimators_[:-1]
old_estimator_weights = self.estimator_weights_[:-1]
old_estimator_errors = self.estimator_errors_[:-1]
sample_weight = self.next_to_last_sample_weight
AdaBoostClassifier.fit(self, X, y, sample_weight)
self.old_estimators_.extend(self.estimators_)
self.estimators_ = self.old_estimators_
self.n_estimators = len(self.estimators_)
self.estimator_weights_ = np.concatenate([old_estimator_weights, self.estimator_weights_])
self.estimator_errors_ = np.concatenate([old_estimator_errors, self.estimator_errors_])
```
And here is an example that allows you to compare timings/accuracies of the hack compared to the original `AdaBoostClassifier`. Note that testing the hack takes increasing time as we add estimators, but the training does not. I found the hack to run much faster than the original, but I'm not hashing large amounts of X samples.
```py
'''
example.py
Test the AdaBoost hack.
'''
import time # Used to get timing info.
import adaboost_hack
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier # We will use stumps for our classifiers.
from sklearn.ensemble import AdaBoostClassifier # Used to compare hack to original.
from sklearn.model_selection import (cross_val_score, KFold)
from sklearn.metrics import accuracy_score
my_random = np.random.RandomState(0) # For consistent results.
nSamples = 2000
# Make some sample data.
X = my_random.uniform(size = (nSamples, 2))
y = np.zeros(len(X), dtype = int)
# Decision boundary is the unit circle.
in_class = X[:, 0]**2 + X[:, 1]**2 > 1
y = np.zeros(len(X), dtype = int)
y[in_class] = 1
# Add some random error.
error_rate = 0.01
to_flip = my_random.choice(np.arange(len(y)), size = int(error_rate * len(y)), replace = False)
y[to_flip] = 1 - y[to_flip]
# Plot the data.
plt.scatter(X[:, 0], X[:, 1], c = y)
plt.title('Simulated Data')
plt.show()
# Make our hack solution. Initially do 2 estimators.
# Train the hack without testing. Should find nearly constant time per training session.
print('Training hack without testing.')
ada_boost_hack = adaboost_hack.WarmRestartAdaBoostClassifier(DecisionTreeClassifier(max_depth = 1,
random_state = my_random),
n_estimators = 1,
random_state = my_random)
nFit = 50
times = []
for i in range(nFit):
times.append(time.time())
ada_boost_hack.n_estimators += 1
ada_boost_hack.fit(X, y)
def get_differences(times):
times = np.array(times)
return times[1:] - times[:-1]
times_per_train = {'hack no test' : get_differences(times)}
# Now look at running tests while training the hack. Should show a small linear growth
# in time per training session.
print('Training hack with testing.')
ada_boost_hack = adaboost_hack.WarmRestartAdaBoostClassifier(DecisionTreeClassifier(max_depth = 1,
random_state = my_random),
n_estimators = 1,
random_state = my_random)
times = []
scores = []
for i in range(nFit):
times.append(time.time())
ada_boost_hack.n_estimators += 1
ada_boost_hack.fit(X, y)
y_predict = ada_boost_hack.predict(X)
new_score = accuracy_score(y, y_predict)
scores.append(new_score)
plt.plot(scores)
plt.title('Training scores for hack')
plt.ylabel('Accuracy')
plt.show()
times_per_train['hack with test'] = get_differences(times)
print('Now training hack with cross validation')
ada_boost_hack = adaboost_hack.WarmRestartAdaBoostClassifier(DecisionTreeClassifier(max_depth = 1,
random_state = my_random),
n_estimators = 1,
random_state = my_random,
use_kfold_hash = True)
# Now try cross_val_score().
scores = []
times = []
# We use KFold to make sure the hashes of X features of each fold are
# the same between each run.
for i in range(1, nFit + 1):
ada_boost_hack.set_params(n_estimators = i)
new_scores = cross_val_score(ada_boost_hack, X, y, cv = KFold(3))
scores.append(new_scores)
times.append(time.time())
def plot_cv_scores(scores):
scores = np.array(scores)
plt.plot(scores.mean(axis = 1))
plt.plot(scores.mean(axis = 1) + scores.std(axis = 1) * 2, color = 'red')
plt.plot(scores.mean(axis = 1) - scores.std(axis = 1) * 2, color = 'red')
plt.ylabel('Accuracy')
plot_cv_scores(scores)
plt.title('Cross validation scores for hack')
plt.show()
times_per_train['hack cross validation'] = get_differences(times)
# Double check that kfold_hash only has 3 keys since we used cv = 3.
print('adaboost_hack.kfold_hash.keys() = ', adaboost_hack.kfold_hash.keys())
# Now get timings for original classifier.
print('Now doing cross validations of original')
ada_boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth = 1,
random_state = np.random.RandomState(0)),
n_estimators = 1,
random_state = np.random.RandomState(0))
times = []
scores = []
# We use KFold to make sure the hashes of X features of each fold are
# the same between each run.
for i in range(1, nFit + 1):
ada_boost.set_params(n_estimators = i)
new_scores = cross_val_score(ada_boost, X, y, cv = KFold(3))
scores.append(new_scores)
times.append(time.time())
plot_cv_scores(scores)
plt.title('Cross validation scores for original')
plt.show()
times_per_train['original cross validation'] = get_differences(times)
# Plot all of the timing data.
for key in times_per_train.keys():
plt.plot(times_per_train[key])
plt.title('Time per training or cv score')
plt.ylabel('Time')
plt.xlabel('nth training or cv score')
plt.legend(times_per_train.keys())
plt.show()
```
|
You can fit all 300 estimators and then use `AdaBoostClassifier.staged_predict()` to track how the error rate depends on the number of estimators. However, you will have to do the cross-validation splits yourself; I don't think it is compatible with `cross_val_score()`.
For example,
```
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier # We will use simple stumps for individual estimators in AdaBoost.
from sklearn.metrics import accuracy_score
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
nSamples = {'train' : 2000, 'test' : 1000}
X = np.random.uniform(size = (nSamples['train'] + nSamples['test'], 2))
# Decision boundary is the unit circle.
in_class = X[:, 0]**2 + X[:, 1]**2 > 1
y = np.zeros(len(X), dtype = int)
y[in_class] = 1
# Add some random error.
error_rate = 0.01
to_flip = np.random.choice(np.arange(len(y)), size = int(error_rate * len(y)), replace = False)
y[to_flip] = 1 - y[to_flip]
# Split training and test.
X = {'train' : X[:nSamples['train']],
'test' : X[nSamples['train']:]}
y = {'train' : y[:nSamples['train']],
'test' : y[nSamples['train']:]}
# Make AdaBoost Classifier.
max_estimators = 50
ada_boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth = 1, # Just a stump.
random_state = np.random.RandomState(0)),
n_estimators = max_estimators,
random_state = np.random.RandomState(0))
# Fit all estimators.
ada_boost.fit(X['train'], y['train'])
# Get the test accuracy for each stage of prediction.
scores = {'train' : [], 'test' : []}
for y_predict_train, y_predict_test in zip(ada_boost.staged_predict(X['train']),
ada_boost.staged_predict(X['test'])):
scores['train'].append(accuracy_score(y['train'], y_predict_train))
scores['test'].append(accuracy_score(y['test'], y_predict_test))
# Plot the results.
n_estimators = range(1, len(scores['train']) + 1)
for key in scores.keys():
plt.plot(n_estimators, scores[key])
plt.title('Staged Scores')
plt.ylabel('Accuracy')
plt.xlabel('N Estimators')
plt.legend(scores.keys())
plt.show()
```
|
32,425,974
|
I am making a Twitch chat bot with C# and I would like to know how I can purge/timeout users that are spamming or using banned words. I have searched everywhere I can to find a solution for this, but I cannot find an answer anywhere.
|
2015/09/06
|
[
"https://Stackoverflow.com/questions/32425974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4789832/"
] |
I also encountered this problem some time ago. The **real** solution would be to solve the challenge the cloudflare website gives you (you need to compute a correct answer using javascript, send it back, and then you receive a cookie / your token with which you can continue to view the website). So all you would normally get is a page like
[](https://i.stack.imgur.com/KDaFN.png)
In the end, I just called a python script with a shell execute. I used the modules provided within [this github fork](https://github.com/Anorov/cloudflare-scrape). This could serve as a starting point to implement the circumvention of the cloudflare anti-dDoS page in C# as well.
FYI, the python script I wrote for my personal usage just wrote the cookie in a file. I read that later again using C# and store it in a `CookieJar` to continue browsing the page within C#.
```
#!/usr/bin/env python
import cfscrape
import sys
scraper = cfscrape.create_scraper() # returns a requests.Session object
fd = open("cookie.txt", "w")
c = cfscrape.get_cookie_string(sys.argv[1])
fd.write(str(c))
fd.close()
print(c)
```
EDIT: To repeat this, this has only LITTLE to do with cookies! Cloudflare forces you to solve a REAL challenge using javascript commands. It's not as easy as accepting a cookie and using it later on. Look at <https://github.com/Anorov/cloudflare-scrape/blob/master/cfscrape/__init__.py> and the ~40 lines of javascript emulation to solve the challenge.
Edit2: Instead of writing something to circumvent the protection, I've also seen people using a fully-fledged browser object (this is **not** a headless browser) to go to the website and subscribe to certain events when the page is loaded. Use the `WebBrowser` class to create an infinitely small browser window and subscribe to the appropriate events.
Edit3:
Alright, I actually implemented the C# way to do this. This uses the JavaScript Engine *Jint* for .NET, available via <https://www.nuget.org/packages/Jint>
The cookie-handling code is ugly because sometimes the `HttpResponse` class won't pick up the cookies, although the header contains a `Set-Cookie` section.
```
using System;
using System.Net;
using System.IO;
using System.Text.RegularExpressions;
using System.Web;
using System.Collections;
using System.Threading;
namespace Cloudflare_Evader
{
public class CloudflareEvader
{
/// <summary>
/// Tries to return a webclient with the necessary cookies installed to do requests for a cloudflare protected website.
/// </summary>
/// <param name="url">The page which is behind cloudflare's anti-dDoS protection</param>
/// <returns>A WebClient object or null on failure</returns>
public static WebClient CreateBypassedWebClient(string url)
{
var JSEngine = new Jint.Engine(); //Use this JavaScript engine to compute the result.
//Download the original page
var uri = new Uri(url);
HttpWebRequest req =(HttpWebRequest) WebRequest.Create(url);
req.UserAgent = "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0";
//Try to make the usual request first. If this fails with a 503, the page is behind cloudflare.
try
{
var res = req.GetResponse();
string html = "";
using (var reader = new StreamReader(res.GetResponseStream()))
html = reader.ReadToEnd();
return new WebClient();
}
catch (WebException ex) //We usually get this because of a 503 service not available.
{
string html = "";
using (var reader = new StreamReader(ex.Response.GetResponseStream()))
html = reader.ReadToEnd();
//If we get on the landing page, Cloudflare gives us a User-ID token with the cookie. We need to save that and use it in the next request.
var cookie_container = new CookieContainer();
//using a custom function because ex.Response.Cookies returns an empty set ALTHOUGH cookies were sent back.
var initial_cookies = GetAllCookiesFromHeader(ex.Response.Headers["Set-Cookie"], uri.Host);
foreach (Cookie init_cookie in initial_cookies)
cookie_container.Add(init_cookie);
/* solve the actual challenge with a bunch of RegEx's. Copy-Pasted from the python scrapper version.*/
var challenge = Regex.Match(html, "name=\"jschl_vc\" value=\"(\\w+)\"").Groups[1].Value;
var challenge_pass = Regex.Match(html, "name=\"pass\" value=\"(.+?)\"").Groups[1].Value;
var builder = Regex.Match(html, @"setTimeout\(function\(\){\s+(var t,r,a,f.+?\r?\n[\s\S]+?a\.value =.+?)\r?\n").Groups[1].Value;
builder = Regex.Replace(builder, @"a\.value =(.+?) \+ .+?;", "$1");
builder = Regex.Replace(builder, @"\s{3,}[a-z](?: = |\.).+", "");
//Format the javascript..
builder = Regex.Replace(builder, @"[\n\\']", "");
//Execute it.
long solved = long.Parse(JSEngine.Execute(builder).GetCompletionValue().ToObject().ToString());
solved += uri.Host.Length; //add the length of the domain to it.
Console.WriteLine("***** SOLVED CHALLENGE ******: " + solved);
Thread.Sleep(3000); //This sleeping IS required or cloudflare will not give you the token!!
//Retrieve the cookies. Prepare the URL for cookie exfiltration.
string cookie_url = string.Format("{0}://{1}/cdn-cgi/l/chk_jschl", uri.Scheme, uri.Host);
var uri_builder = new UriBuilder(cookie_url);
var query = HttpUtility.ParseQueryString(uri_builder.Query);
//Add our answers to the GET query
query["jschl_vc"] = challenge;
query["jschl_answer"] = solved.ToString();
query["pass"] = challenge_pass;
uri_builder.Query = query.ToString();
//Create the actual request to get the security clearance cookie
HttpWebRequest cookie_req = (HttpWebRequest) WebRequest.Create(uri_builder.Uri);
cookie_req.AllowAutoRedirect = false;
cookie_req.CookieContainer = cookie_container;
cookie_req.Referer = url;
cookie_req.UserAgent = "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0";
//We assume that this request goes through well, so no try-catch
var cookie_resp = (HttpWebResponse)cookie_req.GetResponse();
//The response *should* contain the security clearance cookie!
if (cookie_resp.Cookies.Count != 0) //first check if the HttpWebResponse has picked up the cookie.
foreach (Cookie cookie in cookie_resp.Cookies)
cookie_container.Add(cookie);
else //otherwise, use the custom function again
{
//the cookie we *hopefully* received here is the cloudflare security clearance token.
if (cookie_resp.Headers["Set-Cookie"] != null)
{
var cookies_parsed = GetAllCookiesFromHeader(cookie_resp.Headers["Set-Cookie"], uri.Host);
foreach (Cookie cookie in cookies_parsed)
cookie_container.Add(cookie);
}
else
{
//No security clearance? Something went wrong, so return null.
//Console.WriteLine("MASSIVE ERROR: COULDN'T GET CLOUDFLARE CLEARANCE!");
return null;
}
}
//Create a custom webclient with the two cookies we already acquired.
WebClient modedWebClient = new WebClientEx(cookie_container);
modedWebClient.Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0");
modedWebClient.Headers.Add("Referer", url);
return modedWebClient;
}
}
/* Credit goes to https://stackoverflow.com/questions/15103513/httpwebresponse-cookies-empty-despite-set-cookie-header-no-redirect
(user https://stackoverflow.com/users/541404/cameron-tinker) for these functions
*/
public static CookieCollection GetAllCookiesFromHeader(string strHeader, string strHost)
{
ArrayList al = new ArrayList();
CookieCollection cc = new CookieCollection();
if (strHeader != string.Empty)
{
al = ConvertCookieHeaderToArrayList(strHeader);
cc = ConvertCookieArraysToCookieCollection(al, strHost);
}
return cc;
}
private static ArrayList ConvertCookieHeaderToArrayList(string strCookHeader)
{
strCookHeader = strCookHeader.Replace("\r", "");
strCookHeader = strCookHeader.Replace("\n", "");
string[] strCookTemp = strCookHeader.Split(',');
ArrayList al = new ArrayList();
int i = 0;
int n = strCookTemp.Length;
while (i < n)
{
if (strCookTemp[i].IndexOf("expires=", StringComparison.OrdinalIgnoreCase) > 0)
{
al.Add(strCookTemp[i] + "," + strCookTemp[i + 1]);
i = i + 1;
}
else
al.Add(strCookTemp[i]);
i = i + 1;
}
return al;
}
private static CookieCollection ConvertCookieArraysToCookieCollection(ArrayList al, string strHost)
{
CookieCollection cc = new CookieCollection();
int alcount = al.Count;
string strEachCook;
string[] strEachCookParts;
for (int i = 0; i < alcount; i++)
{
strEachCook = al[i].ToString();
strEachCookParts = strEachCook.Split(';');
int intEachCookPartsCount = strEachCookParts.Length;
string strCNameAndCValue = string.Empty;
string strPNameAndPValue = string.Empty;
string strDNameAndDValue = string.Empty;
string[] NameValuePairTemp;
Cookie cookTemp = new Cookie();
for (int j = 0; j < intEachCookPartsCount; j++)
{
if (j == 0)
{
strCNameAndCValue = strEachCookParts[j];
if (strCNameAndCValue != string.Empty)
{
int firstEqual = strCNameAndCValue.IndexOf("=");
string firstName = strCNameAndCValue.Substring(0, firstEqual);
string allValue = strCNameAndCValue.Substring(firstEqual + 1, strCNameAndCValue.Length - (firstEqual + 1));
cookTemp.Name = firstName;
cookTemp.Value = allValue;
}
continue;
}
if (strEachCookParts[j].IndexOf("path", StringComparison.OrdinalIgnoreCase) >= 0)
{
strPNameAndPValue = strEachCookParts[j];
if (strPNameAndPValue != string.Empty)
{
NameValuePairTemp = strPNameAndPValue.Split('=');
if (NameValuePairTemp[1] != string.Empty)
cookTemp.Path = NameValuePairTemp[1];
else
cookTemp.Path = "/";
}
continue;
}
if (strEachCookParts[j].IndexOf("domain", StringComparison.OrdinalIgnoreCase) >= 0)
{
strPNameAndPValue = strEachCookParts[j];
if (strPNameAndPValue != string.Empty)
{
NameValuePairTemp = strPNameAndPValue.Split('=');
if (NameValuePairTemp[1] != string.Empty)
cookTemp.Domain = NameValuePairTemp[1];
else
cookTemp.Domain = strHost;
}
continue;
}
}
if (cookTemp.Path == string.Empty)
cookTemp.Path = "/";
if (cookTemp.Domain == string.Empty)
cookTemp.Domain = strHost;
cc.Add(cookTemp);
}
return cc;
}
}
/*Credit goes to https://stackoverflow.com/questions/1777221/using-cookiecontainer-with-webclient-class
(user https://stackoverflow.com/users/129124/pavel-savara) */
public class WebClientEx : WebClient
{
public WebClientEx(CookieContainer container)
{
this.container = container;
}
public CookieContainer CookieContainer
{
get { return container; }
set { container = value; }
}
private CookieContainer container = new CookieContainer();
protected override WebRequest GetWebRequest(Uri address)
{
WebRequest r = base.GetWebRequest(address);
var request = r as HttpWebRequest;
if (request != null)
{
request.CookieContainer = container;
}
return r;
}
protected override WebResponse GetWebResponse(WebRequest request, IAsyncResult result)
{
WebResponse response = base.GetWebResponse(request, result);
ReadCookies(response);
return response;
}
protected override WebResponse GetWebResponse(WebRequest request)
{
WebResponse response = base.GetWebResponse(request);
ReadCookies(response);
return response;
}
private void ReadCookies(WebResponse r)
{
var response = r as HttpWebResponse;
if (response != null)
{
CookieCollection cookies = response.Cookies;
container.Add(cookies);
}
}
}
}
```
The function will return a webclient with the solved challenges and cookies inside. You can use it as follows:
```
static void Main(string[] args)
{
WebClient client = null;
while (client == null)
{
Console.WriteLine("Trying..");
client = CloudflareEvader.CreateBypassedWebClient("http://anilinkz.tv");
}
Console.WriteLine("Solved! We're clear to go");
Console.WriteLine(client.DownloadString("http://anilinkz.tv/anime-list"));
Console.ReadLine();
}
```
|
Use `WebClient` to get the html of the page.
I wrote the following class, which handles cookies too;
just pass a `CookieContainer` instance in the constructor.
```
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Net;
using System.Text;
namespace NitinJS
{
public class SmsWebClient : WebClient
{
public SmsWebClient(CookieContainer container, Dictionary<string, string> Headers)
: this(container)
{
foreach (var keyVal in Headers)
{
this.Headers[keyVal.Key] = keyVal.Value;
}
}
public SmsWebClient(bool flgAddContentType = true)
: this(new CookieContainer(), flgAddContentType)
{
}
public SmsWebClient(CookieContainer container, bool flgAddContentType = true)
{
this.Encoding = Encoding.UTF8;
System.Net.ServicePointManager.Expect100Continue = false;
ServicePointManager.MaxServicePointIdleTime = 2000;
this.container = container;
if (flgAddContentType)
this.Headers["Content-Type"] = "application/json";//"application/x-www-form-urlencoded";
this.Headers["Accept"] = "application/json, text/javascript, */*; q=0.01";// "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
//this.Headers["Accept-Encoding"] = "gzip, deflate";
this.Headers["Accept-Language"] = "en-US,en;q=0.5";
this.Headers["User-Agent"] = "Mozilla/5.0 (Windows NT 6.1; rv:23.0) Gecko/20100101 Firefox/23.0";
this.Headers["X-Requested-With"] = "XMLHttpRequest";
//this.Headers["Connection"] = "keep-alive";
}
private readonly CookieContainer container = new CookieContainer();
protected override WebRequest GetWebRequest(Uri address)
{
WebRequest r = base.GetWebRequest(address);
var request = r as HttpWebRequest;
if (request != null)
{
request.CookieContainer = container;
request.Timeout = 3600000; //20 * 60 * 1000
}
return r;
}
protected override WebResponse GetWebResponse(WebRequest request, IAsyncResult result)
{
WebResponse response = base.GetWebResponse(request, result);
ReadCookies(response);
return response;
}
protected override WebResponse GetWebResponse(WebRequest request)
{
WebResponse response = base.GetWebResponse(request);
ReadCookies(response);
return response;
}
private void ReadCookies(WebResponse r)
{
var response = r as HttpWebResponse;
if (response != null)
{
CookieCollection cookies = response.Cookies;
container.Add(cookies);
}
}
}
}
```
USAGE:
```
CookieContainer cookies = new CookieContainer();
SmsWebClient client = new SmsWebClient(cookies);
string html = client.DownloadString("http://www.google.com");
```
|
2,725,434
|
In his book programming in scala (Chapter 5 Section 5.9 Pg 93)
Odersky mentioned this expression "`bills !*&^%~ code!`"
In the footnote on the same page:
"By now you should be able to figure out that given this code, the Scala compiler would
invoke `(bills.!*&^%~(code)).!()`."
That's a bit too cryptic for me; could someone explain what's going on here?
|
2010/04/27
|
[
"https://Stackoverflow.com/questions/2725434",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/241590/"
] |
What Odersky means to say is that it would be possible to have valid code looking like that. For instance, the code below:
```
class BadCode(whose: String, source: String) {
def ! = println(whose+", what the hell do you mean by '"+source+"'???")
}
class Programmer(who: String) {
def !*&^%~(source: String) = new BadCode(who, source)
}
val bills = new Programmer("Bill")
val code = "def !*&^%~(source: String) = new BadCode(who, source)"
bills !*&^%~ code!
```
Just copy&paste it on the REPL.
|
I'm not sure if the book provides method signatures, but I assume it's just a comment on Scala's syntactic sugar, meaning that if you type:
```
bill add monkey
```
where there is an object bill which has a method add which takes a parameter then it automatically interprets it as:
```
bill.add(monkey)
```
Being a little rusty with Scala, I'm not entirely sure how it splits `code!` into `(code).!()`, except for a vague tickling of the grey cells that the `!` operator is used to fire off a message to an actor, which in compiler terms might be interpreted as an implicit `.!()` method on the object.
|
48,377,300
|
So I'm building a website with Drupal as its backend and an Angular frontend. I'm creating all kinds of content-types in Drupal and then I'm exposing those the content through a JSON view (or in other words a RESTful API). However, Drupal also still exposes an HTML view for all content. I wish to disable that, because it is of no use to me and I don't want it to be accidentally found and maybe even indexed.
Is there an easy way to disable the HTML view for all content?
|
2018/01/22
|
[
"https://Stackoverflow.com/questions/48377300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1498652/"
] |
If you also have a conda environment, just use the command below:
```
pip install --user awscli --upgrade
```
This works for me.
|
For anyone attempting to install AWS CLI on Mac AND running Python 3.6, use pip3.6 instead of pip in your command-line.
Example:
```
$ python --version
Python 3.6.4
$ sudo pip3.6 install --upgrade awscli
...
You are using pip version 9.0.1, however version 18.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
...
## IF YOU GET THE ABOVE MESSAGE YOU CAN UPGRADE PIP USING
$ sudo pip3.6 install --upgrade pip
```
|
616,524
|
I recently built my own computer. However, the temperature of my processor gets very high, and my CPU fan is not able to cool it off when I run multiple programs.
My CPU temperature can go as high as 85 degrees Celsius, which is definitely not good. My CPU fan speed can only go as high as 2419 rpm at 100%. I believe normal CPU fans are supposed to run faster than this at 100%.
I bought the i7-3770K processor from Intel recently (less than half a year ago); can I claim the warranty if the fan is faulty? Is this normal, or should I buy a better CPU fan?
|
2013/07/07
|
[
"https://superuser.com/questions/616524",
"https://superuser.com",
"https://superuser.com/users/236274/"
] |
I don't use that board, but the idea is that you need to extract the compressed image (.xz) with
```
unxz image_file.img.xz
```
The image file should contain all you need (Linux File system, Kernel, ....)
Then locate your SD card by `fdisk -l`. If you are using micro-sd adapter, then it could be linked as `/dev/mmcblk` or if you are using USB-SD converter, the device name might be linked as `/dev/sdb`. (if you see `sdb1` `sdb2`, etc., they refer the 1st partition, 2nd partition ...)
Make sure that the SD card (and any partition) is not mounted, you should use `umount -a` or `umount /dev/sdb1` (2/3 ... for the partitions), otherwise you may need to deal further problems
then you can load the image to the SD card by
```
dd if=imagefile.img of=/dev/sdb bs=4M conv=fsync
```
when the process finishes, you can eject the SD card and place it into the board. Then power the board.
|
Use `xz` to extract the .img file, then use `dd` to write it directly to the card.
|
383,367
|
Final goal
==========
* Create a copy of all files one external hard drive to another external hard drive, both formatted as HFS+.
* Preserve the *folder* creation date.
* Preserve finder colour label.
* During the copy I want to see some kind of progress indicator.
Using `cp`
----------
If I copy using
```
cp -a /origin/folder/ /destination/folder
```
this only preserves *file* creation dates, not *folder* creation dates.
Using `rsync 3.1.3`
-------------------
If I copy using rsync 3.1.3 (installed via Homebrew)
```
rsync --recursive --info=progress2 -hhh --xattrs --times --crtimes /origin/folder /destination/folder
```
this preserves folder creation dates, colour labels, but riddles the output with below errors (I can't hide them with the `--quiet` option), so I cannot keep track of the progress
```
rsync: get_xattr_names: llistxattr("/origin/folder/filename",1024) failed: Operation not permitted (1)
```
|
2020/03/01
|
[
"https://apple.stackexchange.com/questions/383367",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/24324/"
] |
Just after I posted this question I found a solution.
I noticed that the `rsync: get_xattr_names` errors all had one thing in common: the files that raised the error where always `._` files.
I read that `._` files are [used to store information that would go into a HFS+ extended attribute](https://apple.stackexchange.com/a/14981/24324). As I'm copying between HFS+ drives I figured I don't need these files.
So I added an `--exclude` argument to my rsync command which excludes all filenames that start with `._`
```bash
rsync --exclude="._*" --recursive --info=progress2 -hhh --xattrs --times --crtimes /Volumes/origin/ /Volumes/destination
```
The command:
* Filters `._` files beforehand (`--exclude="._*"`), preventing `get_xattr_names` errors.
* Preserves the folder creation dates via the `--times --crtimes` argument.
* Preserves Finder colour labels via the `--xattrs` argument.
* Shows progress in a human readable format via the `--info=progress2 -hhh` argument.
* (Added bonus: preserves custom folder icons as well, via the `--xattrs` argument)
|
In your answer, you write, "I read that .\_ files are used to store information that would go into a HFS+ extended attribute. As I'm copying between HFS+ drives I figured I don't need these files."
I would not make this assumption without further testing. **It is not necessary that the resource fork information captured in these existing dot bar files has been (re-)incorporated (back) into the associated file.**
There is a Wikipedia article on "AppleSingle and AppleDouble formats":
<https://en.wikipedia.org/wiki/AppleSingle_and_AppleDouble_formats>
The "dot bar" files can be created under a number of cases, for example, see:
[Why are dot underscore .\_ files created, and how can I avoid them?](https://apple.stackexchange.com/questions/14980/why-are-dot-underscore-files-created-and-how-can-i-avoid-them/163498#163498)
The question whether the dot bar files should be re-incorporated was discussed here:
[How should I reconcile dot-underscore files after a manual backup?](https://apple.stackexchange.com/questions/139557/how-should-i-reconcile-dot-underscore-files-after-a-manual-backup/139627#139627)
The OS X/macOS command `dot_clean` has a number of options regarding how to deal with them:
```
--keep=mostrecent
```
The default option. If an attribute is associated with a data fork, use that. Otherwise, use information stored in the AppleDouble file. Note that the native fork's data is preferred even if the data in the AppleDouble file is newer.
```
--keep=dotbar
```
Always use information stored in the AppleDouble file, replacing any extended attributes associated with the native file.
```
--keep=native
```
Always use the information associated with the data fork, ignoring any AppleDouble files.
You can consult `man dot_clean` for further information and options.
You might remove some uncertainty by determining whether the files with an associated dot bar have any resource fork information attached **without** reference to the dot bar files. This might be somewhat involved to test, as copying the files might re-incorporate the dot bar files during the copy.
You may not care, or may not care to spend the time to determine, and simply run dot\_clean with its default option.
Or, you could ignore the issue for now and not exclude the dot bar files during your rsync backups.
It should be pointed out that much of this discussion is in regards to how to treat the source/origin of the backup. Presumably, the purpose of a backup is to duplicate as exactly as possible the source. As such, I would not exclude the dot bar files as part of your backup. Instead, I would look to the reason the dot bar files exist in your source. Then, whether, and later how, to "fix" them.
|
43,269,816
|
The js script below correctly disables the remaining two checkboxes when I select the large checkbox and re-enables when unchecked. However, with this logic, I'd need to code for each specific case.
Instead of manually labeling each checkbox with a specific id and coding disabling logic for each case, is there a way to programmatically disable the checkboxes that were not checked?
html
```
<div class="checkbox" id="sizes">
<label><input id="a" type="checkbox" name="large" value="1">Large</label>
<label><input id="b" type="checkbox" name="medium"value="1">Medium</label>
<label><input id="c" type="checkbox" name="small"value="1">Small</label>
</div>
```
js
```
$(document).ready(function(){
$('input[id=a]').change(function(){
if($(this).is(':checked')){
$('input[id=a]').attr('disabled',false);
$('input[id=b]').attr('disabled',true);
$('input[id=c]').attr('disabled',true);
}else{
$('input[id=a]').attr('disabled',false);
$('input[id=b]').attr('disabled',false);
$('input[id=c]').attr('disabled',false);
}
});
})
```
|
2017/04/07
|
[
"https://Stackoverflow.com/questions/43269816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2134003/"
] |
You can disable all the checkboxes inside the div `sizes` except the current one as below:
**HTML:**
```
<div class="checkbox" id="sizes">
<label><input id="a" type="checkbox" name="large" value="1">Large</label>
<label><input id="b" type="checkbox" name="medium"value="1">Medium</label>
<label><input id="c" type="checkbox" name="small"value="1">Small</label>
</div>
```
**JQUERY:**
```
$(document).ready(function(){
$('#sizes input[type=checkbox]').change(function(){
if($(this).is(':checked')){
$("#sizes").find(':checkbox').not($(this)).attr('disabled',true);
}
else{
$("#sizes").find(':checkbox').attr('disabled',false);
}
});
});
```
**[jsfiddle Demo](https://jsfiddle.net/o2gxgz9r/5215/)**
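The disabling rule itself is independent of jQuery and the DOM, so it can be sketched and tested in plain JavaScript. Note that `updateDisabled` and the `{checked, disabled}` data model below are hypothetical names for illustration, not part of the answer above:

```javascript
// boxes: array of { checked: bool, disabled: bool } objects.
// When one box is checked, disable every other box; when none is
// checked, re-enable all of them.
function updateDisabled(boxes) {
  const checkedIndex = boxes.findIndex(b => b.checked);
  boxes.forEach((b, i) => {
    b.disabled = checkedIndex !== -1 && i !== checkedIndex;
  });
  return boxes;
}
```

The jQuery version above is just this rule applied to the live checkbox elements on every `change` event.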
|
I think this can be useful
```js
$(document).ready(function(){
$('.checkbox input[type=checkbox]').change(function(){
$('input[id=a]').attr('disabled',true);
$('input[id=b]').attr('disabled',true);
$('input[id=c]').attr('disabled',true);
if($(this).is(':checked')){
$('input[id='+this.id+']').attr('disabled',false);
}
else
{
$('input[id=a]').attr('disabled',false);
$('input[id=b]').attr('disabled',false);
$('input[id=c]').attr('disabled',false);
}
});
})
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="checkbox" id="sizes">
<label><input id="a" type="checkbox" name="large" value="1">Large</label>
<label><input id="b" type="checkbox" name="medium"value="1">Medium</label>
<label><input id="c" type="checkbox" name="small"value="1">Small</label>
</div>
```
|
59,414,367
|
Currently, you can set rules that will work once the user changes the value of the input. For example:
Template part
```
<v-text-field
v-model="title"
label="Title"
></v-text-field>
```
Logic
```
export default {
data () {
return {
title: '',
email: '',
rules: {
required: value => !!value || 'Required.'
},
}
}
}
```
When the user focuses and removes focus from that element, or when the user deletes all its content, the required rule is triggered.
But what happens if we want to start with the rule enabled as soon as the component is mounted or created? Is there a way to achieve this?
I searched around Vuetify but I could not find info about this, nor examples in my humble Google searches. I would appreciate any help; I'm new to the Vue world. Thanks.
|
2019/12/19
|
[
"https://Stackoverflow.com/questions/59414367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5526445/"
] |
You could do the following:
Create a [v-form](https://vuetifyjs.com/en/components/forms) and place your `textfields` inside the form. Don't forget to give your v-form a v-model and a ref too.
On mounted you can access the v-form via this.$refs and call .validate() just as Jesper described in his answer. In the `codesandbox` below you can see that the `textfields` immediately go red and display the "Required." text.
```
<v-form v-model="formValid" ref="myForm">
<v-text-field label="Field 1" :rules="rules.required"></v-text-field>
<v-text-field label="Field 2" :rules="rules.required"></v-text-field>
</v-form>
<script>
export default {
data() {
return {
formValid: false,
rules: {
required: [value => !!value || "Required."]
}
};
},
mounted() {
this.$refs.myForm.validate();
}
};
</script>
```
[Example](https://codesandbox.io/s/vuetify-template-c5w6h):
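As a side note, a Vuetify rule is just a function that returns `true` when the value is valid and an error string otherwise, so the rule itself can be tested in isolation (plain JavaScript sketch, outside any component):

```javascript
// Vuetify-style rule: truthy value -> true, otherwise the error message.
const required = value => !!value || "Required.";

console.log(required(""));      // "Required."
console.log(required("hello")); // true
```

This is exactly what `validate()` runs against each field's current value.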
|
You should change your validation a little bit to achieve this.
```
<ValidationProvider rules="required" v-slot="{ errors }" ref="title">
<v-text-field
v-model="title"
label="Title"
></v-text-field>
</ValidationProvider>
```
And then you should call `this.$refs.title.validate()`
If you trigger this when `mounted()` is called, it should validate all the fields right away, as you're requesting.
|
1,824,506
|
Let $A$ be an $m \times n$ real matrix and $b \in \mathbb{R^m}.$ Suppose $Ax$ and $Ay$ both minimize distance to $b,$ i.e. $||Ax-b|| = ||Ay-b||.$ Prove that $x-y \in \ker(A).$
Seems like there should be a straightforward proof, but I have not been able to provide one. Any suggestions would be appreciated, thanks in advance.
---
My thoughts:
\begin{align}
||Ax - b||^2 &= ||Ay - b||^2\\
\langle Ax - b, Ax - b \rangle &= \langle Ay - b, Ay - b \rangle\\
||Ax||^2 - ||Ay||^2 &= 2(\langle Ax,b \rangle - \langle Ay,b \rangle).
\end{align}
If the LHS equals zero in the last expression, then $0 = \langle A(x-y),b \rangle$ by linearity and then we are done. Not sure why the LHS would be zero though.
|
2016/06/13
|
[
"https://math.stackexchange.com/questions/1824506",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/141708/"
] |
This is a classic Basic Exam question, and one common approach is to use the parallelogram identity. Fix $b \in \mathbb{R}^{m}$, and let $u, v \in \mathbb{R}^{m}$ simultaneously minimize $D(x) = ||A(x)-b||$. Put $\alpha = A(u)-b, \beta = A(v)-b$, and let $k = ||\alpha|| = ||\beta||$. Then by the parallelogram identity,
$$||\alpha-\beta||^{2} = 2||\alpha||^{2}+2||\beta||^{2} - ||\alpha+\beta||^{2} = 4k^{2} - ||\alpha+\beta||^{2}$$
Note that
$$\alpha+\beta = A(u)+A(v)-2b = 2\left(A\left(\frac{u+v}{2}\right)-b\right)$$
so
$$||\alpha+\beta||^{2} = 4||A\left(\frac{u+v}{2}\right)-b||^{2} = 4D\left(\frac{u+v}{2}\right)^{2} \geqslant 4k^{2}$$
so
$$||\alpha-\beta||^{2} \leqslant 4k^{2} - 4k^{2} = 0$$
whence $||\alpha-\beta||^{2} = 0$, so
$$\alpha-\beta = A(u)-b - (A(v)-b) = A(u)-A(v) = A(u-v) = 0$$
so $u-v \in \ker(A)$.
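As a quick numerical illustration of the conclusion (not part of the proof), here is a tiny Python check with the hypothetical matrix $A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, where two distinct minimizers of $\|Ax - b\|$ differ by a kernel element:

```python
# A = [[1, 0], [0, 0]] and b = (1, 1): any minimizer of ||Av - b|| must have
# first coordinate 1 (giving squared distance 1), and the second is free.
def A(v):
    return (v[0], 0.0)

def dist2(u, b):
    return sum((ui - bi) ** 2 for ui, bi in zip(u, b))

b = (1.0, 1.0)
x, y = (1.0, 0.0), (1.0, 5.0)          # two distinct minimizers
assert dist2(A(x), b) == dist2(A(y), b) == 1.0
diff = tuple(xi - yi for xi, yi in zip(x, y))
assert A(diff) == (0.0, 0.0)           # x - y lies in ker(A)
```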
|
First of all, I think your "i.e." should be followed by "$= \min\_{z \in \mathbb R^n} \|Az -b\|$".
That is, you should use the fact that $x$ and $y$ minimize the distance from $\operatorname{ran} A$ to $b$.
Then a hint: observe that $\|Ax - Ay\| \leq \|Ax - \lambda b\| + \|Ay - \lambda b\|$ for any $\lambda$.
|
6,831,572
|
I have a listBox whose items are to be shown in a textbox format, like so:-
```
<ListBox ItemsSource="{Binding movieList}" Name="innerList">
<ListBox.ItemTemplate >
<DataTemplate >
<TextBox Text="-------" TextChanged="TextBox_TextChanged_1"/>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
```
EDIT:
Sorry, movie list was an observablecollection (of Movie) instead of being (of String)
How do I get the textbox to show the contents of its ancestor (the innerList) ?
|
2011/07/26
|
[
"https://Stackoverflow.com/questions/6831572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/807246/"
] |
```
var $cBoxes = $('#1,#2,#3');
$cBoxes.change(function(){
// check if all are checked based on if the number of checkboxes total
// is equal to the number of checkboxes checked
if ($cBoxes.length == $cBoxes.filter(':checked').length){
$('#checked2').show();
$('#checked1').hide();
}else{
$('#checked2').hide();
$('#checked1').show();
}
});
```
Something like that?
|
You cannot have ids that start with or consist of just a number. Try this:
```
<input id="chk1" type="checkbox" />
<input id="chk2" type="checkbox" />
<input id="chk3" type="checkbox" />
var chkBoxes = $('#chk1,#chk2,#chk3');
chkBoxes .change(function(){
if (chkBoxes.filter(':checked').length == chkBoxes.length){
$('#checked2').show();
$('#checked1').hide();
}else{
$('#checked2').hide();
$('#checked1').show();
}
});
```
|
9,685
|
I currently have a web driven application set that I would like to roll out on the Raspberry Pi for our production floor. I have a couple of questions regarding the Raspberry PI as a viable replacement for the desktops in our organization.
1. How is the Raspberry PI performance for [MVVM](http://en.wikipedia.org/wiki/Model_View_ViewModel) HTML 5 web application set (Note that this is more a data driven application set and not graphic intensive but the data calculations might get complex on client side and is largely JSON Driven Data Sets)?
2. Is it possible to lock the OS down to only show the web browser in a production based environment and not allow the user to use the browser for any other purposes?
3. Can I use something like [SUSE Studio](http://susestudio.com/) to create a custom OS which consists of just a web browser?
Any Reference documentation would be greatly appreciated and or advice as how I can roll out these devices for a production environment.
|
2013/09/20
|
[
"https://raspberrypi.stackexchange.com/questions/9685",
"https://raspberrypi.stackexchange.com",
"https://raspberrypi.stackexchange.com/users/9622/"
] |
1. Unless the app is written carefully it will probably be really slow (like 300MHz pentium slow)
2. Only to a degree. The user has physical access, so hard to stop a determined individual. Usually it's better to block it at the firewall.
3. Obviously an OS is more than a web browser. Technically you could run a web browser without an OS, but I don't know of any. It's possible to lock down the desktop as much as any other OS if you have the knowledge.
|
I agree in general with gnibbler. It will work but it will very noticeably under-perform a desktop. For what it actually costs (< 10% of a cheap cheap box), it's great, but keep in mind it doesn't cost much.
As for doing things with the OS (browser kiosk, etc.), Linux is extremely (and easily) malleable -- hence stuff like SUSE Studio, which is Linux based. Unfortunately, it is very unlikely to work on the Pi, as the vanilla kernel source needs some tweaking first, and I do not think SUSE has such a version for the Pi.
The primary general purpose pi distros are versions of Debian (raspbian), Fedora (pidora), and Arch. However, the fundamentals of these (and SUSE) are identical in most respects. I mention this because you could always buy a pi, configure it to do what you want, then if you decide the pi hardware isn't good enough, you can easily transfer your configuration to a normal desktop platform using whatever medium -- hard drive, usb stick, dvd, network, etc.
Conversely, you could also try and create what you want using a general purpose distro on a desktop and, if that suits you, get a pi and try that configuration there.
Another option involving the pi would be to use it as a thin client; a decent desktop used as a headless server for this should be able to run web apps for at least half a dozen of them that way (while all the pi has to do is present the GUI), but more expertise would be required to set it all up.
|
29,810,116
|
Not sure what I'm doing wrong here...
On one page, I start a session close to the beginning of the start of the code, like this:
```
session_name('raprec');
session_start();
```
And then I set session variables throughout the code by assigning the PHP variables I need to reuse to them, as follows:
```
$_SESSION['item_type'] = $item_type;
$_SESSION['special_rope_fields'] = $special_rope_fields;
$_SESSION['zoom_obj'] = $zoom_obj;
```
On the second page, where I need the items I call the session again:
```
session_start();
session_name('raprec');
```
Then I try to use them on this second page, as follows:
```
$special_rope_fields = $_SESSION['special_rope_fields'];
$item_type = $_SESSION['item_type'];
$zoom_obj = $_SESSION['zoom_obj'];
```
But it doesn't work...
I try to see if anything is in them like this:
```
print_r($_SESSION['special_rope_fields']);
print_r($_SESSION['item_type']);
print_r($_SESSION['zoom_obj']);
```
But nothing appears.
What am I doing wrong here? How can I properly pass these variables from one page to the next? Is there an easier way, perhaps?
|
2015/04/22
|
[
"https://Stackoverflow.com/questions/29810116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/585315/"
] |
I've tested your code on my server and got the exact same results as you: nothing. Then, on the **`second page`**, I changed:
```
session_start();
session_name('raprec');
```
to
```
session_name('raprec');
session_start();
```
The session `raprec` now works as intended.
---
[NOTE:](http://php.net/manual/en/function.session-name.php#42747)
>
> the `session_name()` function will have no essential effect if you set
> `session.auto_start` to `true` in `php.ini`
>
>
>
|
Per this documentation, <http://php.net/manual/en/function.session-name.php>, you need to call `session_start` after you call `session_name`.
Edit:
@maztch sorry, I updated it, thanks for pointing that out. I had it backwards, but that's what I meant; I was trying to point out that the doc specifies a particular order. And in this case it's not really an answer at all, is it, because it doesn't address the question now after all. Oops.
|
54,408,510
|
I'm trying to pass a pointer to a function.
This is the pointer that I want to pass:
```
content_t *encc = malloc(sizeof(*encc));
```
The pointer is type of this struct
```
typedef struct _content {
int length;
char **lines;
} content_t;
```
This is where I pass the pointer to the function:
`write_content("enc-file.txt", &encc);`
That's the signature of my function:
`void write_content(char *filename, content_t *content)`
I receive following error-message:
```
expected 'struct content_t *' but argument is of type 'struct content_t **'
```
What am I doing wrong here?
Thanks for your help!
|
2019/01/28
|
[
"https://Stackoverflow.com/questions/54408510",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10971140/"
] |
`encc` is already of type `content_t *`. Taking its address with `&` gives an expression of type `content_t **`, which doesn't match what the function expects.
The address-of operator is not needed here, so remove it.
```
write_content("enc-file.txt", encc);
```
|
change
```
write_content("enc-file.txt", &encc);
```
to
```
write_content("enc-file.txt", encc);
```
|
13,921,676
|
I am trying to write a very simple function in Haskell to change a value in a list depending on an input, as follows:
```
update_game :: [Int] -> Int -> Int -> [Int]
update_game (x:xs) row take_amnt | row == 1 = x - take_amnt:xs
| row == 2 = x : head(xs) - take_amnt : tail(xs)
| row == 3 = x : head(xs) : last(xs) - take_amnt
```
The first two cases work fine; however, the last case is causing me problems and I'm not sure why. The errors I get are:
<http://i.stack.imgur.com/jpT8b.png>
<http://i.stack.imgur.com/tlz5t.png>
|
2012/12/17
|
[
"https://Stackoverflow.com/questions/13921676",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1911079/"
] |
`getPathInfo()` according to the doc:
>
> Returns any extra path information associated with the URL the client
> sent when it made this request. The extra path information follows the
> servlet path but precedes the query string and will start with a "/"
> character.
>
>
>
so in your example it will return `/handlermethod`
If you want to have `/servletname/handlermethod` you should use `getRequestURI()`.
`getRequestURL()` will return the full URL made by the client (except string parameters).
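To illustrate how the pieces relate, here is a plain-Java sketch (it does not use the servlet API; the URL, the `/app` context path, and the class/method names are hypothetical). The container effectively splits the request URI as `contextPath + servletPath + pathInfo`:

```java
public class PathInfoDemo {
    // Mirrors the decomposition requestURI = contextPath + servletPath + pathInfo.
    // getPathInfo() returns null when nothing follows the servlet path.
    public static String pathInfo(String requestUri, String contextPath, String servletPath) {
        String rest = requestUri.substring(contextPath.length() + servletPath.length());
        return rest.isEmpty() ? null : rest;
    }

    public static void main(String[] args) {
        // For http://host/app/servletname/handlermethod?x=1
        System.out.println(pathInfo("/app/servletname/handlermethod", "/app", "/servletname"));
        // prints: /handlermethod
    }
}
```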
|
Path info in Spring MVC may refer to information sent via the URL. In a Spring MVC controller you can easily define a request mapping that includes a variable placeholder, whose value is bound to an argument annotated with `@PathVariable` in the method signature. For example:
```
@RequestMapping(value = "/user/{userId}")
public ModelAndView getUserByPathVariable(@PathVariable Long userId, HttpServletRequest request, HttpServletResponse response) {
System.out.println("Got request param: " + userId);
```
You can take a look here for a more detailed example: [Spring MVC Controller Example](http://www.commonj.com/blogs/2012/06/01/spring-mvc-3-tutorial-passing-request-parameters-to-controllers/)
|
150,917
|
If $a$, $b$, $c$ and $d$ are the four sides of a quadrilateral, the problem is to show that $ab^2(b-c)+bc^2(c-d)+cd^2(d-a)+da^2(a-b)\ge 0$. I've verified it to be true for quite a large number of values, but can't seem to come up with a proof for it. Does anyone have any ideas, perhaps some inequality results that can be applied to prove it?
|
2013/12/05
|
[
"https://mathoverflow.net/questions/150917",
"https://mathoverflow.net",
"https://mathoverflow.net/users/23202/"
] |
Assuming the contrary, we get that the minimal value of
$$
f(a,b,c,d)=ab^2(b−c)+bc^2(c−d)+cd^2(d−a)+da^2(a−b)
$$
on the set
$$
M=\{(a,b,c,d)\colon a,b,c,d\geq 0, a+b+c+d\leq 1, a+b+c\geq d,\dots\}
$$
is negative. By the homogeneity, this value is achieved on $a+b+c+d=1$.
Later we will check that $f$ is nonnegative on the boundary faces $a=0$ and $a+b+c=d$. Thus the minimum is achieved at an interior point of the face defined by $a+b+c+d=1$. By the (Karush--)Kuhn--Tucker theorem, this point satisfies the system
$$
-\lambda=b^2(b-c)-cd^2+3da^2-2dab=\cdots
$$
for some *nonnegative* $\lambda$ (the other equalities are obtained by the cyclic permutation of variables). In particular, this means that
$$
b^3+3da^2\leq b^2c+cd^2+2dab, \;\dots,
$$
which sums up to
$$
\sum a^3+2\sum ab^2\leq \sum a^2b+2\sum abc
$$
(the sums are also over four cyclic permutations). On the other hand, by AM--GM we have
$$
a^3+ab^2+bc^2+bc^2\geq 4abc, \quad a^3+ab^2\geq 2a^2b.
$$
Summing up all these we get the converse; thus all these inequalities should come to equalities, and hence $a=b=c=d$; but $f(a,a,a,a)=0$. A contradiction.
It remains to check the inequality on the other boundary faces. If $a=0$ then the inequality looks like $bc^2(d-c)\leq cd^3$; clearly we may assume that $d\geq c$, otherwise the inequality is trivial. Since $b\leq c+d$, we have $ bc^2(d-c)\leq c^2(d^2-c^2)\leq cd\cdot d^2$, as required.
Finally, if $a=b+c+d$ then we may substitute this into our function getting (here Maple was used...) the large sum with many positive terms (including $b^4+bc^3$) and only one negative term $-b^2c^2$. But it is easy to see that $b^4-b^2c^2+bc^3\geq 0$ for all nonnegative $b$ and $c$.
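Both boundary-face claims are easy to spot-check numerically. A small Python sketch (the variable names mirror the proof; the tolerances only absorb floating-point noise):

```python
import random

random.seed(0)
for _ in range(100_000):
    b, c, d = (random.uniform(0, 10) for _ in range(3))

    # Final claim: b^4 - b^2 c^2 + b c^3 >= 0 for nonnegative b, c
    # (b >= c makes b^4 >= b^2 c^2; b <= c makes b c^3 >= b^2 c^2).
    assert b**4 - b**2 * c**2 + b * c**3 >= -1e-6

    # Face a = 0: with b <= c + d, check b c^2 (d - c) <= c d^3.
    b2 = random.uniform(0, c + d)      # enforce the constraint b <= c + d
    assert b2 * c**2 * (d - c) <= c * d**3 + 1e-6
```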
|
Let $T$ be the function in question:
$$
T(a,b,c,d) = a b^2(b-c) + b c^2 (c-d) + c d^2 (d-a) + d a^2 (a - b).
$$
We wish to show $T(a,b,c,d)\ge 0$ if $a,b,c,d$ are the sides of a quadrilateral. (Presumably, $a$ is the side
opposite $c$ and $b$ is opposite $d$, but it actually doesn't matter to the proof.)
**Terminology** We introduce the following terminology: If $x\_1,x\_2,x\_3,x\_4$ are four real numbers (possibly negative), we say $x\_1,x\_2,x\_3,x\_4$ are "quadrilateral" if the following constraint holds:
\begin{eqnarray}
(\*)\ \ \ \ x\_1 + x\_2 + x\_3 + x\_4 &\ge& 2 \max\{x\_1,x\_2,x\_3,x\_4\}.
\end{eqnarray}
We also say that $x\_1,x\_2,x\_3,x\_4$ are "linear quadrilateral" if $(\*)$ holds with equality.
Obviously, the sides $a,b,c,d$ of a quadrilateral are, well, quadrilateral.
If the quadrilateral is degenerate so that its four vertices fall on a line, then its sides are linear quadrilateral.
**Basic Idea** The basic idea of the proof is to continuously "shrink" the sides $a,b,c,d$ of the quadrilateral by equal amounts $x$ until the quadrilateral collapses and all four of its vertices fall on a line. In other words, $(a-x,b-x,c-x,d-x)$ are linear quadrilateral. In step 3 below, we show this shrinking process decreases $T$, i.e., $T(a-x,b-x,c-x,d-x)$ is decreasing in $x\ge 0$ until $a-x,b-x,c-x,d-x$ are linear quadrilateral. The complexity comes when one realizes that during this shrinking process, one of the sides may collapse through a point and its length become *negative* (in which case it no longer makes sense to talk about $a-x,b-x,c-x,d-x$ being sides of a quadrilateral). Step 1 handles this "negative" case. Step 2 handles the more natural case in which no side becomes negative during the shrinking process.
**Step 1** Suppose $a,b,c,d\ge 0$ are quadrilateral and one of $a,b,c,d$ vanishes. Then $T(a,b,c,d)\ge 0$.
**Proof** Assume without loss of generality that $a=0$. Then
$$
T(0,b,c,d) = b c^3 - b c^2 d + c d^3.
$$
Observe
\begin{eqnarray}
c \ge d &\implies& b c^3 - b c^2 d \ge 0 \implies T(0,b,c,d) \ge 0 \\
d \ge b,c &\implies& c d^3 - b c^2 d \ge 0 \implies T(0,b,c,d) \ge 0.
\end{eqnarray}
The only other case not covered by these two conditions is $b> d > c$. In this case, we use the fact that $a,b,c,d$ are
quadrilateral to deduce $b\le c+d$. Since $c-d<0$,
\begin{eqnarray}
T(0,b,c,d) &=& b c^2(c-d) + c d^3 \\
&\ge& c^2 (c+d)(c-d) + c d^3 \\
&=& c^4 - c^2 d^2 + c d^3 \\
&=& c^4 + c d^2 (d - c) \\
&\ge& 0.
\end{eqnarray}
In any case, $T(0,b,c,d)\ge 0$.
**Step 2** Suppose $a,b,c,d\ge 0$ are linear quadrilateral. Then
$$
T(a,b,c,d)\ge 0.
$$
**Proof** Without loss of generality, suppose $d=\max\{a,b,c,d\}$, so $d=a+b+c$.
By direct computation,
\begin{eqnarray}
T(a,b,c,a+b+c) &=& a^4 - a^2 b^2 + a b^3 + a^3 c + a b^2 c + b^3 c + a^2 c^2 + 3 a b c^2 \\
& & \ + 2 b^2 c^2 + 2 a c^3 + 3 b c^3 + c^4.
\end{eqnarray}
Luckily, the only summand that can possibly be negative is $-a^2 b^2$. Observe
\begin{eqnarray}
a\ge b &\implies& a^4 - a^2 b^2 \ge 0 \\
a\le b &\implies& a b^3 - a^2 b^2 \ge 0.
\end{eqnarray}
Thus,
$$
T(a,b,c,a+b+c) \ge a^4 + a b^3 - a^2 b^2 \ge 0.
$$
**Step 3** Suppose $a,b,c,d$ are linear quadrilateral and not all equal. Suppose the sum of any two of $a,b,c,d$
is non-negative. Then for $x\ge 0$, the mapping
$$
x \mapsto T(a+x,b+x,c+x,d+x)
$$
is strictly increasing in $x$.
**Proof**
Without loss of generality, let $d=\max\{a,b,c,d\}$. Since $a,b,c,d$ are linear quadrilateral,
$$
d = a + b + c.
$$
Direct computation shows
$$
T(a+x,b+x,c+x,d+x) = T(a,b,c,d) + A x + B x^2
$$
where
\begin{eqnarray}
A &=& a^3 - a^2 b + 2 a b^2 + b^3 - 2 a b c - b^2 c + 2 b c^2 + c^3 \\
& & \ + 2 a^2 d - 2 a b d - 2 a c d - 2 b c d - c^2 d - a d^2 + 2 c d^2 + d^3.
\end{eqnarray}
and
\begin{eqnarray}
B &=& 2 a (a - b) + a (b - c) + 2 b (b - c) + b (c - d) \\
& & \ + 2 c (c - d) + (a - b) d + c (-a + d) + 2 d (-a + d) \\
&=& (a-c)^2 + (b-d)^2 + \frac{1}{2}(a-b)^2 + \frac{1}{2}(a-d)^2
+ \frac{1}{2}(b-c)^2 + \frac{1}{2}(c-d)^2.
\end{eqnarray}
Clearly, $B>0$ because $a,b,c,d$ are not all the same.
Now substitute $d=a+b+c$ in the expression for $A$ and simplify:
$$
A = 3 a^3 + 2 a b^2 + 2 b^3 + 3 a^2 c + 2 b^2 c + 3 a c^2 + 6 b c^2 + 3 c^3.
$$
Suppose $a<0$. Since the sum of any two of $a,b,c,d$ is non-negative, $b,c,d\ge |a|$. Observe
\begin{eqnarray}
A &=& 3 a^3 + 2 a b^2 + 2 b^3 + 3 a^2 c + 2 b^2 c + 3 a c^2 + 6 b c^2 + 3 c^3 \\
&=& (3 a^3 + 3a^2 c) + (2 a b^2 + 2 b^3) + (3 a c^2 + 3 c^3) + 2 b^2 c + 6 b c^2 \\
&\ge& 0.
\end{eqnarray}
(All the quantities in parentheses are non-negative.)
Suppose $b<0$. Because the sum of any two of $a,b,c,d$ is non-negative, $a,c,d\ge |b|$. We have
\begin{eqnarray}
A &=& 3 a^3 + 2 a b^2 + 2 b^3 + 3 a^2 c + 2 b^2 c + 3 a c^2 + 6 b c^2 + 3 c^3 \\
&=& (2 b^3 + 2 a b^2) + (6 b c^2 + 3 c^3 + 3 a c^2) + 3 a^3 + 3 a^2 c + 2 b^2 c \\
&\ge& 0.
\end{eqnarray}
Finally, suppose $c<0$. Then $a,b,d\ge |c|$ and
\begin{eqnarray}
A &=& 3 a^3 + 2 a b^2 + 2 b^3 + 3 a^2 c + 2 b^2 c + 3 a c^2 + 6 b c^2 + 3 c^3 \\
&=& (3 a^2 c + 3 a^3) + (2 b^2 c + 2 a b^2)+ (3 c^3 + 3 a c^2) + 6 b c^2 + 2 b^3 \\
&\ge& 0.
\end{eqnarray}
Since $B>0$ and $A\ge 0$, the result follows.
**Step 4** Suppose $a,b,c,d\ge 0$ are quadrilateral and not all the same. Then $T(a,b,c,d) > 0$.
**Proof**
Make the following definitions:
\begin{eqnarray}
x\_0 &=& \frac{1}{2}(a+b+c+d - 2\max\{a,b,c,d\}) \\
A &=& a - x\_0 \\
B &=& b - x\_0 \\
C &=& c - x\_0 \\
D &=& d - x\_0.
\end{eqnarray}
It is easy to see $A,B,C,D$ are linear quadrilateral. Furthermore, the sum
of any two of $A,B,C,D$ is non-negative. For example,
\begin{eqnarray}
A+B &=& a + b - 2 x\_0 \\
&=& 2\max\{a,b,c,d\} - c - d \\
&\ge& 0.
\end{eqnarray}
All the other cases are just as easy.
Consider the function $f:[0,\infty)\to\mathbb{R}$ defined by
$$
f(x) = T(A+x,B+x,C+x,D+x).
$$
It follows from step 3 that $f$ is strictly increasing.
Since $a,b,c,d$ are quadrilateral, $x\_0\ge 0$, so $T(a,b,c,d) = f(x\_0) \ge f(0)$. If $A,B,C,D$ are all
non-negative, then $f(0)\ge 0$ by step 2 and we are done.
Let $m=\min\{a,b,c,d\}$ and suppose one of $A,B,C,D$ is negative, so $m < x\_0$. Then
$T(a,b,c,d) = f(x\_0) > f(m)$. But by step 1, $f(m)\ge 0$ and we are done.
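As a Monte-Carlo sanity check of the final statement, one can sample random side lengths satisfying the "quadrilateral" condition $(\*)$ and verify $T\ge 0$ (a Python sketch; `T` is the function defined at the top of this answer):

```python
import random

def T(a, b, c, d):
    return (a * b**2 * (b - c) + b * c**2 * (c - d)
            + c * d**2 * (d - a) + d * a**2 * (a - b))

assert T(1, 1, 1, 1) == 0          # equality when all sides are equal

random.seed(0)
hits = 0
for _ in range(100_000):
    s = [random.uniform(0, 1) for _ in range(4)]
    if sum(s) >= 2 * max(s):       # the "quadrilateral" condition (*)
        assert T(*s) >= -1e-12
        hits += 1
assert hits > 0                    # the condition was actually exercised
```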
|
6,421,127
|
Say I have a table or two that contain data that will never or rarely change. Is there any point in trying to cache that data, or will the EF context cache it for me when I load it the first time? I was thinking of loading all the data from those tables and using a static list or something to keep it in memory, then querying the in-memory data instead of the tables whenever I need it within the same context. The tables I speak of typically contain a few hundred rows of data.
|
2011/06/21
|
[
"https://Stackoverflow.com/questions/6421127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/327622/"
] |
The EF context will cache "per instance". That is, each instance of the `DbContext` keeps its own independent cache of objects. You can store the resulting list of objects in a static list and query it all you like without returning to the database. To be safe, make sure you abandon the `DbContext` *after* you execute the query.
```
var dbContext = new YourDbContext();
StaticData.CachedListOfThings = dbContext.ListOfThings.ToList();
```
You can later use LINQ to query the static list.
```
var widgets = StaticData.CachedListOfThings.Where(thing => thing.Widget == "Foo");
```
The query executes against the in-memory collection, not the database.
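The same pattern can be sketched language-agnostically in Python with an in-memory SQLite table (all names here are hypothetical): hit the database once, keep the rows in a long-lived list, and run later queries against that list.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE things(id INTEGER, widget TEXT)")
con.executemany("INSERT INTO things VALUES (?, ?)",
                [(1, "Foo"), (2, "Bar"), (3, "Foo")])

# Hit the database once and keep the result as a static in-memory list...
cached_things = con.execute("SELECT id, widget FROM things").fetchall()
con.close()   # the cache outlives the connection, like abandoning the DbContext

# ...then filter the cached list instead of querying the table again.
foos = [row for row in cached_things if row[1] == "Foo"]
```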
|
You'll have to roll your own for any EF4 LINQ queries, as they are always resolved to SQL and will therefore always hit the database. A simple cache for your couple of tables probably wouldn't be hard to write.
If you're going to be querying by ID, though, you can use the [`ObjectContext.GetObjectByKey` method](http://msdn.microsoft.com/en-us/library/system.data.objects.objectcontext.getobjectbykey.aspx), and it will look in the object cache before querying the database.
|
650,843
|
I'm trying to fix the check-button lights on the side of an IDX V-mount battery. The LED lights won't light when the button is pressed. I stumbled upon a white "H2" component that I measured as an open circuit when it should have been closed.
I can't identify the component. Please help.
[](https://i.stack.imgur.com/Tf5pE.jpg)
[](https://i.stack.imgur.com/3hSrc.jpg)
[](https://i.stack.imgur.com/oRFLp.jpg)
|
2023/01/20
|
[
"https://electronics.stackexchange.com/questions/650843",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/331008/"
] |
I think it's a fuse (one-time type) as available from Littelfuse and others.
However H is the code usually used for a 1A fuse and 2 is perhaps associated with 2A so it's hard to be sure what to replace it with. Also, there's a decent chance something else is fried in order to take out the fuse.
If you don't get a better answer and feel like gambling you would replace it with a 1A fuse of similar size and see what happens.
|
Looking at the component's color and the silkscreen text "FR1", it looks like a resettable fuse.
It mostly doesn't blow up permanently. When the current exceeds the trip current, it switches into a high-resistance, essentially non-conducting state.
If the high current was caused by a one-time event, e.g. wrong wiring during an experiment, it will return to its normal state after some time.
However, in your case it seems like other broken components caused the overcurrent through the fuse. If so, you need to fix those before waiting for the fuse to recover or replacing it.
If the fuse is really broken, you will need to replace it.
For resettable fuses, not only the trip current but also the voltage rating is important.
For example, common voltage ratings used in digital circuits are 5 V or 9 V.
If your circuit is a 12 V or 24 V circuit, you cannot replace the fuse with such low-rated parts.
|
2,844,837
|
it gives me this error:
```
Exception in thread Thread-163:
Traceback (most recent call last):
File "C:\Python26\lib\threading.py", line 532, in __bootstrap_inner
self.run()
File "C:\Python26\lib\threading.py", line 736, in run
self.function(*self.args, **self.kwargs)
File "C:\Users\Public\SoundLog\Code\Código Python\SoundLog\SoundLog.py", line 337, in getInfo
self.data1 = copy.deepcopy(Auxiliar.DataCollection.getInfo(1))
File "C:\Python26\lib\copy.py", line 162, in deepcopy
y = copier(x, memo)
File "C:\Python26\lib\copy.py", line 254, in _deepcopy_dict
for key, value in x.iteritems():
RuntimeError: dictionary changed size during iteration
```
while executing my Python program.
How can I prevent this from happening?
Thanks in advance ;)
|
2010/05/16
|
[
"https://Stackoverflow.com/questions/2844837",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/187730/"
] |
The normal advice, as per the other answers, would be to avoid using `iteritems` (use `items` instead). That of course is **not** an option in your case, since the `iteritems` call is being done on your behalf deep in the bowels of a system call.
Therefore, what I would suggest, assuming `Auxiliar.DataCollection.getInfo(1)` returns a dictionary (which is the one that's changing during the copy) is that you change your `deepcopy` call to:
```
self.data1 = copy.deepcopy(dict(Auxiliar.DataCollection.getInfo(1)))
```
This takes a "snapshot" of the dict in question, and the snapshot won't change, so you'll be fine.
If `Auxiliar.DataCollection.getInfo(1)` does *not* return a dict, but some more complicated object which includes dicts as items and/or attributes, it will be a bit more complicated, since those dicts are what you'll need to snapshot. However, it's impossible to be any more specific in this case, since you give us absolutely no clue as to the code that composes that crucial `Auxiliar.DataCollection.getInfo(1)` call!-)
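A minimal sketch of the snapshot idea (here `live` stands in for whatever `Auxiliar.DataCollection.getInfo(1)` returns):

```python
import copy

live = {i: i * i for i in range(3)}

# dict(live) copies the top-level keys in one step, so deepcopy then
# iterates over a stable snapshot even if `live` keeps mutating.
snapshot = copy.deepcopy(dict(live))

live[99] = 0          # a later mutation (e.g. from another thread)
assert 99 in live and 99 not in snapshot
```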
|
Sounds like you are adding or removing something from the dictionary you are trying to iterate over. This is not allowed in most languages.
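In Python specifically, this is exactly the error raised when a dict grows or shrinks while being iterated, which is what happened inside `copy.deepcopy`:

```python
d = {i: i for i in range(5)}

msg = ""
try:
    for k in d:
        d[k + 100] = 0        # grows the dict mid-iteration
except RuntimeError as e:
    msg = str(e)

assert "changed size during iteration" in msg
```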
|
21,024,664
|
I have the URL: /profile/profileBase/index where "profile" is the module, "profileBase" is the controller and "index" is the action.
I want the URL manager to accept a route like: /profile/read/index
where "read" would be an alias of the controller.
Is there any way to do this using URL manager rules?
Thanks
|
2014/01/09
|
[
"https://Stackoverflow.com/questions/21024664",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1495724/"
] |
```
'urlManager' => array(
'urlFormat' => 'path',
'showScriptName' => false,
'rules' => array(
        'profile/read/index'=>'profile/profileBase/index'
),
),
```
|
You should simply add the following rule in `urlManager` config :
```
'profile/read/index'=>'profile/profileBase/index',
```
|
36,334,376
|
I'm trying to do something similar to what is done [here](https://docs.angularjs.org/api/ng/filter/orderBy), i.e. ordering an array of objects in the view by using the filter `orderBy` (you can find the plunker [here](http://plnkr.co/edit/ExSjseOhSKFCiXNz5x8V?p=preview)). My code is working well, but when I add the `orderBy` filter I see nothing in the view. [Here](http://plnkr.co/edit/7GvmzQJCZY7LVunYhfJT?p=preview) is my plunker. In order to see the error, please add `| orderBy:'num'` to line 22 in my `index.html` file. It doesn't work with `| orderBy:'id'` or `| orderBy:'label'` either.
|
2016/03/31
|
[
"https://Stackoverflow.com/questions/36334376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6014171/"
] |
On line 6 of your script, replace:
```
var myArray = [], myObject = {};
```
with:
```
var myArray = [], myObject = [];
```
|
Patrick Ferreira's answer is totally right; just as an explanation, in case somebody wonders:
The [orderBy](https://docs.angularjs.org/api/ng/filter/orderBy) filter of AngularJS just orders arrays:
>
> Orders a specified *array* by the *expression* predicate
>
>
>
and array-like values:
>
> Array-like values (e.g. NodeLists, jQuery objects, TypedArrays, Strings, etc) are also supported.
>
>
>
|
49,411
|
This is a follow up to the "[Create a 3 inch measurement](https://puzzling.stackexchange.com/questions/49253)" puzzle, which got a lot of innovative solutions.
Using the standard 8.5 x 11 inch paper, can you create a 1 inch measurement only by folding? Again no marking allowed. No ruler either.
One more thing. **I realize that folding the 8.5 inch side three times can get you 1.06 inches. But can you do better than that?** Maybe I was overdoing it, but I think I got it in more than 3 folds.
|
2017/02/23
|
[
"https://puzzling.stackexchange.com/questions/49411",
"https://puzzling.stackexchange.com",
"https://puzzling.stackexchange.com/users/34419/"
] |
A solution in 4 folds:
>
> 1. Fold one corner down to the opposite side. We now have a triangle that is 8.5" by 8.5", and a flap below it that is 2.5" by 8.5".
>
> 2. Fold the flap up, and call this Flap A. Now unfold the paper and turn it upside down.
>
> 3-4. Repeat the same steps to get a Flap B.
>
> Now, unfold the paper and just fold down Flap A and Flap B. The thin space between the two flaps should have length exactly 1".
>
>
>
This is because
>
> The flaps are of length 2.5", and when folded up (or down) that is another 2.5" for a total of 5". If Flap A accounts for 5" and Flap B accounts for another 5" from either side of the 11" length of the paper, the space in between should be 1".
>
>
>
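The arithmetic behind this can be checked in a couple of lines (a Python sketch):

```python
width, height = 8.5, 11.0

flap = height - width        # the 2.5" flap left over by the diagonal fold
covered = 2 * (2 * flap)     # Flap A and Flap B each span 5" when folded over
gap = height - covered       # the strip left between the two flaps

assert abs(gap - 1.0) < 1e-9
```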
|
A solution in 3 folds:
>
> 1. Fold diagonally to get triangle of side length 8.5, leaving 2.5 inches on the 11 inch side uncovered
>
> 2. Fold the 2.5 inch flap up, leaving 6 inches on the 11 inch side uncovered
>
> 3. Create a fold at the 6 inch mark on the 11 inch side using the covered part
>
>
> Unfold the 2.5 inch fold in order to get a flap of 5 inches which leaves only 1 inch uncovered on the 11 inch side.
>
>
>
|
20,443,391
|
I have to deal with a lot of files with a well-defined syntax and semantics, for example:
* the first line is a header with special info
* the other lines contain a key value at the start of the line that tells you how to parse and deal with the content of that line
* if there is a comment, it starts with a given token
* etc.
Now `boost::program_options`, as far as I can tell, does pretty much the same job, but I only care about importing the content of those text files, without any extra work in between: just parse them and store the result in my data structure.
The key step for me is that I would like to be able to do this parsing with:
* regular expressions, since I need to detect different semantics and I can't really imagine another way to do this
* error checking (corrupted file, unmatched keys even after parsing the entire file, etc.)
So, can I use this library for this job? Is there a more functional approach?
|
2013/12/07
|
[
"https://Stackoverflow.com/questions/20443391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2485710/"
] |
Okay, a starting point for a Spirit grammar
```
_Name = "newmtl" >> lexeme [ +graph ];
_Ns = "Ns" >> double_;
_Ka = "Ka" >> double_ >> double_ >> double_;
_Kd = "Kd" >> double_ >> double_ >> double_;
_Ks = "Ks" >> double_ >> double_ >> double_;
_d = "d" >> double_;
_illum %= "illum" >> qi::int_ [ _pass = (_1>=0) && (_1<=10) ];
comment = '#' >> *(char_ - eol);
statement=
comment
| _Ns [ bind(&material::_Ns, _r1) = _1 ]
| _Ka [ bind(&material::_Ka, _r1) = _1 ]
| _Kd [ bind(&material::_Kd, _r1) = _1 ]
| _Ks [ bind(&material::_Ks, _r1) = _1 ]
| _d [ bind(&material::_d, _r1) = _1 ]
| _illum [ bind(&material::_illum, _r1) = _1 ]
;
_material = -comment % eol
>> _Name [ bind(&material::_Name, _val) = _1 ] >> eol
>> -statement(_val) % eol;
start = _material % -eol;
```
I only implemented the MTL file subset grammar from your sample files.
**Note:** This is rather a simplistic grammar. But, you know, first things first. In reality I'd probably consider using [the *keyword list parser* from the spirit repository](http://www.boost.org/doc/libs/1_55_0/libs/spirit/repository/doc/html/spirit_repository/qi_components/operators/keyword_list.html#spirit_repository.qi_components.operators.keyword_list.example). It has facilities to 'require' certain number of occurrences for the different 'field types'.
**Note:** Spirit Karma (and some ~50 other lines of code) are only here for demonstrational purposes.
With the following contents of `untitled.mtl`
```
# Blender MTL File: 'None'
# Material Count: 2
newmtl None
Ns 0
Ka 0.000000 0.000000 0.000000
Kd 0.8 0.8 0.8
Ks 0.8 0.8 0.8
d 1
illum 2
# Added just for testing:
newmtl Demo
Ns 1
Ks 0.9 0.9 0.9
d 42
illum 7
```
The output reads
```
phrase_parse -> true
remaining input: ''
void dump(const T&) [with T = std::vector<blender::mtl::material>]
-----
material {
Ns:0
Ka:{r:0,g:0,b:0}
Kd:{r:0.8,g:0.8,b:0.8}
Ks:{r:0.8,g:0.8,b:0.8}
d:1
illum:2(Highlight on)
}
material {
Ns:1
Ka:(unspecified)
Kd:(unspecified)
Ks:{r:0.9,g:0.9,b:0.9}
d:42
illum:7(Transparency: Refraction on/Reflection: Fresnel on and Ray trace on)
}
-----
```
Here's the listing
```
#define BOOST_SPIRIT_USE_PHOENIX_V3
#define BOOST_SPIRIT_DEBUG
#include <boost/fusion/adapted.hpp>
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/karma.hpp> // for debug output/streaming
#include <boost/spirit/include/phoenix.hpp>
#include <boost/spirit/include/phoenix_operator.hpp>
namespace qi = boost::spirit::qi;
namespace phx= boost::phoenix;
namespace wavefront { namespace obj
{
} }
namespace blender { namespace mtl // material?
{
struct Ns { int exponent; }; // specular exponent
struct Reflectivity { double r, g, b; };
using Name = std::string;
using Ka = Reflectivity;
using Kd = Reflectivity;
using Ks = Reflectivity;
using dissolve_factor = double;
enum class illumination_model {
color, // 0 Color on and Ambient off
color_ambient, // 1 Color on and Ambient on
highlight, // 2 Highlight on
reflection_ray, // 3 Reflection on and Ray trace on
glass_ray, // 4 Transparency: Glass on
// Reflection: Ray trace on
fresnel_ray, // 5 Reflection: Fresnel on and Ray trace on
refract_ray, // 6 Transparency: Refraction on
// Reflection: Fresnel off and Ray trace on
refract_ray_fresnel,// 7 Transparency: Refraction on
// Reflection: Fresnel on and Ray trace on
reflection, // 8 Reflection on and Ray trace off
glass, // 9 Transparency: Glass on
// Reflection: Ray trace off
shadow_invis, // 10 Casts shadows onto invisible surfaces
};
struct material
{
Name _Name;
boost::optional<Ns> _Ns;
boost::optional<Reflectivity> _Ka;
boost::optional<Reflectivity> _Kd;
boost::optional<Reflectivity> _Ks;
boost::optional<dissolve_factor> _d;
boost::optional<illumination_model> _illum;
};
using mtl_file = std::vector<material>;
///////////////////////////////////////////////////////////////////////
// Debug output helpers
std::ostream& operator<<(std::ostream& os, blender::mtl::illumination_model o)
{
using blender::mtl::illumination_model;
switch(o)
{
case illumination_model::color: return os << "0(Color on and Ambient off)";
case illumination_model::color_ambient: return os << "1(Color on and Ambient on)";
case illumination_model::highlight: return os << "2(Highlight on)";
case illumination_model::reflection_ray: return os << "3(Reflection on and Ray trace on)";
case illumination_model::glass_ray: return os << "4(Transparency: Glass on/Reflection: Ray trace on)";
case illumination_model::fresnel_ray: return os << "5(Reflection: Fresnel on and Ray trace on)";
case illumination_model::refract_ray: return os << "6(Transparency: Refraction on/Reflection: Fresnel off and Ray trace on)";
case illumination_model::refract_ray_fresnel: return os << "7(Transparency: Refraction on/Reflection: Fresnel on and Ray trace on)";
case illumination_model::reflection: return os << "8(Reflection on and Ray trace off)";
case illumination_model::glass: return os << "9(Transparency: Glass on/Reflection: Ray trace off)";
case illumination_model::shadow_invis: return os << "10(Casts shadows onto invisible surfaces)";
default: return os << "ILLEGAL VALUE";
}
}
std::ostream& operator<<(std::ostream& os, blender::mtl::Reflectivity const& o)
{
return os << "{r:" << o.r << ",g:" << o.g << ",b:" << o.b << "}";
}
std::ostream& operator<<(std::ostream& os, blender::mtl::material const& o)
{
using namespace boost::spirit::karma;
return os << format("material {"
"\n\tNs:" << (auto_ | "(unspecified)")
<< "\n\tKa:" << (stream | "(unspecified)")
<< "\n\tKd:" << (stream | "(unspecified)")
<< "\n\tKs:" << (stream | "(unspecified)")
<< "\n\td:" << (stream | "(unspecified)")
<< "\n\tillum:" << (stream | "(unspecified)")
<< "\n}", o);
}
} }
BOOST_FUSION_ADAPT_STRUCT(blender::mtl::Reflectivity,(double, r)(double, g)(double, b))
BOOST_FUSION_ADAPT_STRUCT(blender::mtl::Ns, (int, exponent))
BOOST_FUSION_ADAPT_STRUCT(blender::mtl::material,
(boost::optional<blender::mtl::Ns>, _Ns)
(boost::optional<blender::mtl::Ka>, _Ka)
(boost::optional<blender::mtl::Kd>, _Kd)
(boost::optional<blender::mtl::Ks>, _Ks)
(boost::optional<blender::mtl::dissolve_factor>, _d)
(boost::optional<blender::mtl::illumination_model>, _illum))
namespace blender { namespace mtl { namespace parsing
{
template <typename It>
struct grammar : qi::grammar<It, qi::blank_type, mtl_file()>
{
template <typename T=qi::unused_type> using rule = qi::rule<It, qi::blank_type, T>;
rule<Name()> _Name;
rule<Ns()> _Ns;
rule<Reflectivity()> _Ka;
rule<Reflectivity()> _Kd;
rule<Reflectivity()> _Ks;
rule<dissolve_factor()> _d;
rule<illumination_model()> _illum;
rule<mtl_file()> start;
rule<material()> _material;
rule<void(material&)> statement;
rule<> comment;
grammar() : grammar::base_type(start)
{
using namespace qi;
using phx::bind;
using blender::mtl::material;
_Name = "newmtl" >> lexeme [ +graph ];
_Ns = "Ns" >> double_;
_Ka = "Ka" >> double_ >> double_ >> double_;
_Kd = "Kd" >> double_ >> double_ >> double_;
_Ks = "Ks" >> double_ >> double_ >> double_;
_d = "d" >> double_;
_illum %= "illum" >> qi::int_ [ _pass = (_1>=0) && (_1<=10) ];
comment = '#' >> *(char_ - eol);
statement=
comment
| _Ns [ bind(&material::_Ns, _r1) = _1 ]
| _Ka [ bind(&material::_Ka, _r1) = _1 ]
| _Kd [ bind(&material::_Kd, _r1) = _1 ]
| _Ks [ bind(&material::_Ks, _r1) = _1 ]
| _d [ bind(&material::_d, _r1) = _1 ]
| _illum [ bind(&material::_illum, _r1) = _1 ]
;
_material = -comment % eol
>> _Name [ bind(&material::_Name, _val) = _1 ] >> eol
>> -statement(_val) % eol;
start = _material % -eol;
BOOST_SPIRIT_DEBUG_NODES(
(start)
(statement)
(_material)
(_Name) (_Ns) (_Ka) (_Kd) (_Ks) (_d) (_illum)
(comment))
}
};
} } }
#include <fstream>
template <typename T>
void dump(T const& data)
{
using namespace boost::spirit::karma;
std::cout << __PRETTY_FUNCTION__
<< "\n-----\n"
<< format(stream % eol, data)
<< "\n-----\n";
}
void testMtl(const char* const fname)
{
std::ifstream mtl(fname, std::ios::binary);
mtl.unsetf(std::ios::skipws);
boost::spirit::istream_iterator f(mtl), l;
using namespace blender::mtl::parsing;
static const grammar<decltype(f)> p;
blender::mtl::mtl_file data;
bool ok = qi::phrase_parse(f, l, p, qi::blank, data);
std::cout << "phrase_parse -> " << std::boolalpha << ok << "\n";
std::cout << "remaining input: '" << std::string(f,l) << "'\n";
dump(data);
}
int main()
{
testMtl("untitled.mtl");
}
```
|
It's most likely possible, but not necessarily convenient. If you want to parse anything, you need to use a parser; whether you use an existing one or write one yourself depends on what you are parsing.
If there is no way to parse your format with any existing tool then just write your own parser. You can use lex/flex/flex++ with yacc/bison/bison++ or boost::spirit.
I think in the long run learning to maintain your own parser will be more useful than forcefully adjusting a `boost::program_options` config, but not as convenient as using some existing parser that already matches your needs.
|
1,073,851
|
If there is a point (a,b) on the hyperbola $x^2-y^2=4$ that is closest to the point (0,1), then what is this point?
I would prefer this problem to be solved using knowledge of maxima and minima via derivatives.
I am a bit confused.
I said that I will take the squared distance between the point $(0,1)$ and $y=\sqrt{x^2-4}$,
and then I got $D(x)=x^2+(\sqrt{x^2-4}-1)^2$.
After that I took the derivative and set it equal to zero, and I was lost at that point since I didn't know how to get the values that would lead me to the minimum.
|
2014/12/18
|
[
"https://math.stackexchange.com/questions/1073851",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/145527/"
] |
It's probably easier to take $x=\pm\sqrt{y^2+4}$, since this is valid for all $y$, while looking at the question in terms of $x$ requires the condition that $|x|\geq 2$.
Also, $x^2+(y-1)^2 = y^2+4 +(y-1)^2$ is much easier to minimize, and doesn't depend on the sign of $x$.
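Carrying that through, $D(y)=y^2+4+(y-1)^2$ gives $D'(y)=4y-2=0$, so $y=\tfrac12$. A quick numeric check of the resulting arithmetic (a Python sketch):

```python
import math

# D(y) = x^2 + (y - 1)^2 with x^2 = y^2 + 4, valid for every y:
# D(y) = y^2 + 4 + (y - 1)^2, so D'(y) = 4y - 2 = 0 gives y = 1/2.
y = 0.5
x = math.sqrt(y**2 + 4)                  # = sqrt(17)/2
dist = math.sqrt(x**2 + (y - 1)**2)      # shortest distance to (0, 1)

assert abs(dist - 3 * math.sqrt(2) / 2) < 1e-12
```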
|
Here is one way to compute the closest distance. Let the foot of the perpendicular from $(0,1)$ to $x^2-y^2 = 4$ be $(a,b)$. The slope of the connecting line is $b - 1 \over a$
and the slope of the hyperbola at $(a,b)$ is $a \over b$, so $a, b$ satisfy ${(b-1) \over a}\cdot{ a \over b} = -1$. This gives $$a = 0 \quad\text{or}\quad b = { 1 \over 2}.$$ $a = 0$ is not in the domain, so we have $b = {1 \over 2}$ and $a = \pm\sqrt{17}/2$.
So the shortest distance from $(0,1)$ to $x^2 - y^2 = 4$ is
$$\sqrt{\Bigl(1-\tfrac{1}{2}\Bigr)^2 + \Bigl(0 \pm \tfrac{\sqrt{17}}{2}\Bigr)^2 } = {3\sqrt 2 \over 2}. $$
|
54,986,372
|
I am trying to use the LIKE operator along with wildcards to check for similar matches in two columns in each of the [Food] and [Drinks] tables.
Below is my solution, but it will not run. Code below (I have left a gap in the middle for reading purposes):
```
" SELECT DISTINCT r.restname"
+ " FROM restaurants r"
+ " JOIN food f ON f.restid = r.restid"
+ " JOIN drinks d ON d.restid = r.restid"
+ " WHERE f.foodcategory LIKE CONCAT('%', ?, '%')"
+ " OR f.foodname LIKE CONCAT('%', ?, '%')"
+ " AND d.drinkname LIKE CONCAT('%', ?, '%')"
+ " OR d.drinkvariety LIKE CONCAT('%', ?, '%')");
```
|
2019/03/04
|
[
"https://Stackoverflow.com/questions/54986372",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10966913/"
] |
Group the OR conditions with parentheses, bind your search words to the `?` placeholders, and check:
```
SELECT DISTINCT r.restname
FROM restaurants r
JOIN food f ON f.restid = r.restid
JOIN drinks d ON d.restid = r.restid
WHERE (f.foodcategory LIKE CONCAT('%', ?, '%') OR
       f.foodname LIKE CONCAT('%', ?, '%'))
  AND (d.drinkname LIKE CONCAT('%', ?, '%') OR
       d.drinkvariety LIKE CONCAT('%', ?, '%'))
```
|
You should be using `exists` instead -- as I suggested in an earlier question.
But your problem is the lack of parentheses:
```
WHERE (f.foodcategory LIKE CONCAT('%', ?, '%') OR
f.foodname LIKE CONCAT('%', ?, '%')
) AND
(d.drinkname LIKE CONCAT('%', ?, '%') OR
d.drinkvariety LIKE CONCAT('%', ?, '%')
)
```
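The underlying precedence rule is that `AND` binds tighter than `OR`, so `A OR B AND C` parses as `A OR (B AND C)`. This is easy to demonstrate with an illustrative Python/SQLite snippet:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Without parentheses: "1 OR 0 AND 0" means "1 OR (0 AND 0)" -> true.
loose = con.execute("SELECT 1 WHERE 1 OR 0 AND 0").fetchall()
# With explicit grouping: "(1 OR 0) AND 0" -> false.
grouped = con.execute("SELECT 1 WHERE (1 OR 0) AND 0").fetchall()

assert loose == [(1,)]
assert grouped == []
```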
|
27,922,408
|
I am making an app where I need a 3D joystick image so that the user can swipe on it and the image will bend in the swiped direction.
For swipe-down or swipe-up events, what CSS or animation effect should I use to make the image move down or up?
I don't know the best way to do it. I am trying this code:
```
<div style="width:220px;height:220px;background-color:black;display:table-cell; vertical-align:middle; text-align:center" id="joy_div">
<img id="joystick" src="joystick.png" >
</div>
<script>
$("#joy_div").rotate({
bind:
{
swipeleft : function() {
$("#joystick").rotate({animateTo:-50});
},
swiperight : function() {
$("#joystick").rotate({animateTo:50});
},
swipeup : function(){
alert('up');
},
swipedown : function(){
alert('down');
},
swipeend : function(){
$("#joystick").rotate({animateTo:0});
}
});
</script>
```
I appreciate any guidance.
|
2015/01/13
|
[
"https://Stackoverflow.com/questions/27922408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2300237/"
] |
I would suggest you to use CSS `transform` and `perspective` to do this.
In the example below, I've demonstrated this using buttons for swipeLeft, swipeRight, swipeUp and swipeDown events.
```js
$(".tilt").click(function() {
$('#joystick').removeAttr('class');
$('#joystick').addClass($(this).attr('id'));
});
```
```css
#joy_div {
perspective: 500px;
}
button {
margin: 10px 0 0 0;
}
.left, .right, .down, .up, .reset {
transition: transform 0.5s ease;
transform-origin: 50% 50%;
}
.left {
transform: rotateY(-40deg);
}
.right {
transform: rotateY(40deg);
}
.up {
transform: rotateX(-40deg);
}
.down {
transform: rotateX(40deg);
}
.reset {
transform: rotateX(0) rotateY(0);
}
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<div style="width:250px;height:160px;display:table-cell; vertical-align:middle; text-align:center" id="joy_div">
<img id="joystick" width="250" height="160" src="http://help.robotc.net/WebHelpArduino/scr/TETRIX_MATRIX_New/NXT_Using_Joysticks_files/Joystick_front.gif" />
</div>
<button class="tilt" id="left">swipeleft</button>
<button class="tilt" id="right">swiperight</button>
<button class="tilt" id="up">swipeup</button>
<button class="tilt" id="down">swipedown</button>
<button class="tilt" id="reset">Reset</button>
```
|
Something like this? (although this is a very crude design, and not fully implemented)
```js
$('#left').click(function(){
$('.stick').removeClass("right").toggleClass("left");
});
$('#right').click(function(){
$('.stick').removeClass("left").toggleClass("right");
});
```
```css
.wrapper{
width:300px;
height:300px;
background:gray;
}
.stick{
height:30px;
width:30px;
border-radius:50%;
background:red;
margin:0 auto;
position:relative;
z-index:5;
}
.stick:after{
content:"";
position:absolute;
height:200px;
width:20px;
background:red;
bottom:-180px;
left:15%;
z-index:2;
}
.joy{
height:30px;
width:100%;
background:darkgray;
margin-top:170px;
position:relative;
z-index:20;
}
.left{
transform:rotate(-20deg);
left:-50px;
}
.right{
transform:rotate(20deg);
left:50px;
}
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="wrapper">
<div class="stick"></div>
<div class="joy"></div>
</div>
<button id="left">left</button>
<button id="right">right</button>
```
As for the 'up' and 'down' motions, you may want to look into 3D transforms via the CSS `perspective` property.
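Along those lines, a minimal sketch of the up/down tilt using `rotateX`, reusing the `.wrapper`/`.stick` elements above (the 20° angle and the `up`/`down` class names are illustrative assumptions; which sign reads as "up" depends on your layout):

```css
.wrapper {
  /* gives child rotations a 3D vanishing point */
  perspective: 500px;
}
.stick.up {
  transform: rotateX(-20deg); /* tilt around the horizontal axis */
}
.stick.down {
  transform: rotateX(20deg);  /* opposite tilt */
}
```

Toggling these classes from click (or swipe) handlers, e.g. `$('.stick').addClass('up')`, would mirror the left/right behaviour already shown.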
---
If you were able to use SCSS, you may want to look at:
* [This joystick implementation](http://codepen.io/hugosocie/pen/DelFm)
If not, I think something like [this](http://codepen.io/highfestiva/pen/BfEhC) would also be worth a look.
Incredible Example ~[SOURCE](http://codepen.io/abergin/pen/DjCce) NOT MADE BY ME
================================================================================
---
```css
* {
padding: 0;
margin: 0;
}
html , body {
width: 100%;
height: 100%;
overflow: hidden;
position: fixed;
top: 0%;
left: 0%;
}
html {
background-color: rgba(240,255,255,1);
}
#letter {
font-size: 120px;
font-family: Helvetica;
font-weight: Bold;
color: rgba(18,22,26,0.7);
width: 120px;
height: 120px;
text-align: center;
vertical-align: middle;
line-height: 120px;
position: fixed;
top: 50%;
left: 50%;
margin-top: -60px;
margin-left: -60px;
opacity: 1;
z-index: -1;
}
#description {
position: fixed;
bottom: 10px;
width: 100%;
z-index: 100;
font-family: Helvetica;
font-weight: 200;
color: rgba(18,22,26,0.7);
text-align: center;
font-size: 11px;
letter-spacing: 1px;
}
#animation {
position: fixed;
width: 100%;
height: 100%;
top: 0%;
left: 0%;
z-index: 1;
}
.base {
width: 170px;
height: 30px;
position: absolute;
top: 50%;
left: 50%;
margin-left: -85px;
margin-top: 95px;
background-color: rgba(123,126,129,1);
z-index: -1;
}
.base:after {
border-bottom: 40px solid rgba(193,196,199,1);
border-right: 20px solid transparent;
border-left: 20px solid transparent;
position: absolute;
top: -85px;
left: 0px;
width: 130px;
height: 45px;
content: "";
}
.button {
width: 16px;
height: 35px;
background-color: rgba(23,26,29,0.5);
position: absolute;
top: 50%;
left: 50%;
-webkit-transform-origin: center bottom;
transform-origin: center bottom;
-webkit-transform: scale( 1 , 0.35 );
-moz-transform: scale( 1 , 0.35 );
-ms-transform: scale( 1 , 0.35 );
-o-transform: scale( 1 , 0.35 );
transform: scale( 1 , 0.35 );
border-radius: 10px;
}
.button:after {
content: "";
width: 16px;
height: 16px;
position: absolute;
top: 0;
left: 0;
background-color: rgba(255,115,110,1);
border-radius: 20px;
}
#animation .button:nth-of-type(1) {
margin-top: 50px;
margin-left: 30px;
animation: baseButton 6s ease 0s infinite alternate none;
-webkit-animation: baseButton 6s ease 0s infinite alternate none;
}
#animation .button:nth-of-type(2) {
margin-top: 55px;
margin-left: 50px;
animation: baseButton 6.5s ease 2s infinite alternate none;
-webkit-animation: baseButton 6.5s ease 2s infinite alternate none;
}
.stick {
background-color: rgba(123,126,129,1);
width: 22px;
height: 100px;
position: absolute;
top: 50%;
left: 50%;
margin-left: -11px;
margin-top: -65px;
z-index: 10;
-webkit-transform-origin: center 180px;
transform-origin: center 180px;
animation: stick 7s ease-in-out 0s infinite alternate none;
-webkit-animation: stick 7s ease-in-out 0s infinite alternate none;
}
.stick:after {
-webkit-transform: scale( 1 , 0.25 );
-moz-transform: scale( 1 , 0.25 );
-ms-transform: scale( 1 , 0.25 );
-o-transform: scale( 1 , 0.25 );
transform: scale( 1 , 0.25 );
background-color: rgba(193,196,199,1);
border-radius: 34px;
position: absolute;
top: -11px;
left: 0;
width: 22px;
height: 22px;
content: "";
}
.stick:before {
-webkit-transform: scale( 1 , 0.35 );
-moz-transform: scale( 1 , 0.35 );
-ms-transform: scale( 1 , 0.35 );
-o-transform: scale( 1 , 0.35 );
transform: scale( 1 , 0.35 );
background-color: rgba(123,126,129,1);
border-radius: 34px;
position: absolute;
top: 90px;
left: 0;
width: 22px;
height: 22px;
content: "";
}
.stick .topButton {
width: 16px;
height: 35px;
background-color: rgba(103,106,109,1);
position: absolute;
top: 0%;
left: 0%;
-webkit-transform-origin: center bottom;
transform-origin: center bottom;
-webkit-transform: scale( 1 , 0.25 );
-moz-transform: scale( 1 , 0.25 );
-ms-transform: scale( 1 , 0.25 );
-o-transform: scale( 1 , 0.25 );
transform: scale( 1 , 0.25 );
border-radius: 10px;
margin-left: 3px;
margin-top: -34px;
z-index: 20;
animation: baseButton 3.5s ease -2s infinite alternate forwards;
-webkit-animation: baseButton 3.5s ease -2s infinite alternate forwards;
}
.stick .topButton:after {
content: "";
width: 16px;
height: 16px;
position: absolute;
top: 0;
left: 0;
background-color: rgba(255,115,110,1);
border-radius: 20px;
}
.stick .bottom {
-webkit-transform: scale( 1 , 0.55 );
-moz-transform: scale( 1 , 0.55 );
-ms-transform: scale( 1 , 0.55 );
-o-transform: scale( 1 , 0.55 );
transform: scale( 1 , 0.55 );
width: 16px;
height: 35px;
border-radius: 20px;
position: absolute;
bottom: -25px;
left: 3px;
z-index: -1;
background-color: rgba(83,86,89,1);
}
.stick .support {
width: 40px;
height: 40px;
position: absolute;
bottom: -41px;
left: -9px;
background-color: rgba(48,50,54,1);
border-radius: 80px;
-webkit-transform: scale( 1 , 0.45 );
-moz-transform: scale( 1 , 0.45 );
-ms-transform: scale( 1 , 0.45 );
-o-transform: scale( 1 , 0.45 );
transform: scale( 1 , 0.45 );
z-index: -5;
}
.rubber {
position: absolute;
width: 80px;
height: 80px;
top: 50%;
left: 50%;
margin-left: -40px;
margin-top: 25px;
background-color: rgba(38,42,46,1);
border-radius: 80px;
-webkit-transform: scale( 1 , 0.35 );
-moz-transform: scale( 1 , 0.35 );
-ms-transform: scale( 1 , 0.35 );
-o-transform: scale( 1 , 0.35 );
transform: scale( 1 , 0.35 );
z-index: 1;
}
.cord {
border-top: 3px solid rgba(38,42,46,1);
border-left: 3px solid rgba(38,42,46,1);
border-top-left-radius: 40px;
width: 30%;
height: 20px;
position: absolute;
margin-top: 80px;
top: 50%;
left: 20%;
z-index: -20;
}
.wallcord {
border-bottom: 3px solid rgba(38,42,46,1);
border-right: 3px solid rgba(38,42,46,1);
border-bottom-right-radius: 40px;
width: 20%;
height: 60px;
position: absolute;
margin-top: 100px;
top: 50%;
left: 0%;
z-index: -20;
}
@keyframes baseButton {
64% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
62% { -webkit-transform: scale( 1 , 0.15 ); -moz-transform: scale( 1 , 0.15 ); -ms-transform: scale( 1 , 0.15 ); -o-transform: scale( 1 , 0.15 ); transform: scale( 1 , 0.15 );}
60% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
100% { -webkit-transform: scale( 1 , 0.15 ); -moz-transform: scale( 1 , 0.15 ); -ms-transform: scale( 1 , 0.15 ); -o-transform: scale( 1 , 0.15 ); transform: scale( 1 , 0.15 );}
18% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
16% { -webkit-transform: scale( 1 , 0.15 ); -moz-transform: scale( 1 , 0.15 ); -ms-transform: scale( 1 , 0.15 ); -o-transform: scale( 1 , 0.15 ); transform: scale( 1 , 0.15 );}
0% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
98% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
14% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
10% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
12% { -webkit-transform: scale( 1 , 0.15 ); -moz-transform: scale( 1 , 0.15 ); -ms-transform: scale( 1 , 0.15 ); -o-transform: scale( 1 , 0.15 ); transform: scale( 1 , 0.15 );}
}
@-webkit-keyframes baseButton {
64% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
62% { -webkit-transform: scale( 1 , 0.15 ); -moz-transform: scale( 1 , 0.15 ); -ms-transform: scale( 1 , 0.15 ); -o-transform: scale( 1 , 0.15 ); transform: scale( 1 , 0.15 );}
60% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
100% { -webkit-transform: scale( 1 , 0.15 ); -moz-transform: scale( 1 , 0.15 ); -ms-transform: scale( 1 , 0.15 ); -o-transform: scale( 1 , 0.15 ); transform: scale( 1 , 0.15 );}
18% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
16% { -webkit-transform: scale( 1 , 0.15 ); -moz-transform: scale( 1 , 0.15 ); -ms-transform: scale( 1 , 0.15 ); -o-transform: scale( 1 , 0.15 ); transform: scale( 1 , 0.15 );}
0% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
98% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
14% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
10% { -webkit-transform: scale( 1 , 0.35 ); -moz-transform: scale( 1 , 0.35 ); -ms-transform: scale( 1 , 0.35 ); -o-transform: scale( 1 , 0.35 ); transform: scale( 1 , 0.35 );}
12% { -webkit-transform: scale( 1 , 0.15 ); -moz-transform: scale( 1 , 0.15 ); -ms-transform: scale( 1 , 0.15 ); -o-transform: scale( 1 , 0.15 ); transform: scale( 1 , 0.15 );}
}
@keyframes stick {
5% { -webkit-transform: rotate( 20deg ); -moz-transform: rotate( 20deg ); -ms-transform: rotate( 20deg ); -o-transform: rotate( 20deg ); transform: rotate( 20deg );}
100% { -webkit-transform: rotate( -5deg ); -moz-transform: rotate( -5deg ); -ms-transform: rotate( -5deg ); -o-transform: rotate( -5deg ); transform: rotate( -5deg );}
97% { -webkit-transform: rotate( 20deg ); -moz-transform: rotate( 20deg ); -ms-transform: rotate( 20deg ); -o-transform: rotate( 20deg ); transform: rotate( 20deg );}
94% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
87% { -webkit-transform: rotate( -5deg ); -moz-transform: rotate( -5deg ); -ms-transform: rotate( -5deg ); -o-transform: rotate( -5deg ); transform: rotate( -5deg );}
84% { -webkit-transform: rotate( 20deg ); -moz-transform: rotate( 20deg ); -ms-transform: rotate( 20deg ); -o-transform: rotate( 20deg ); transform: rotate( 20deg );}
82% { -webkit-transform: rotate( 5deg ); -moz-transform: rotate( 5deg ); -ms-transform: rotate( 5deg ); -o-transform: rotate( 5deg ); transform: rotate( 5deg );}
74% { -webkit-transform: rotate( 5deg ); -moz-transform: rotate( 5deg ); -ms-transform: rotate( 5deg ); -o-transform: rotate( 5deg ); transform: rotate( 5deg );}
68% { -webkit-transform: rotate( -20deg ); -moz-transform: rotate( -20deg ); -ms-transform: rotate( -20deg ); -o-transform: rotate( -20deg ); transform: rotate( -20deg );}
65% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
53% { -webkit-transform: rotate( 6deg ); -moz-transform: rotate( 6deg ); -ms-transform: rotate( 6deg ); -o-transform: rotate( 6deg ); transform: rotate( 6deg );}
50% { -webkit-transform: rotate( 10deg ); -moz-transform: rotate( 10deg ); -ms-transform: rotate( 10deg ); -o-transform: rotate( 10deg ); transform: rotate( 10deg );}
48% { -webkit-transform: rotate( -5deg ); -moz-transform: rotate( -5deg ); -ms-transform: rotate( -5deg ); -o-transform: rotate( -5deg ); transform: rotate( -5deg );}
47% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
42% { -webkit-transform: rotate( -15deg ); -moz-transform: rotate( -15deg ); -ms-transform: rotate( -15deg ); -o-transform: rotate( -15deg ); transform: rotate( -15deg );}
40% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
33% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
22% { -webkit-transform: rotate( 10deg ); -moz-transform: rotate( 10deg ); -ms-transform: rotate( 10deg ); -o-transform: rotate( 10deg ); transform: rotate( 10deg );}
17% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
0% { -webkit-transform: rotate( -5deg ); -moz-transform: rotate( -5deg ); -ms-transform: rotate( -5deg ); -o-transform: rotate( -5deg ); transform: rotate( -5deg );}
}
@-webkit-keyframes stick {
5% { -webkit-transform: rotate( 20deg ); -moz-transform: rotate( 20deg ); -ms-transform: rotate( 20deg ); -o-transform: rotate( 20deg ); transform: rotate( 20deg );}
100% { -webkit-transform: rotate( -5deg ); -moz-transform: rotate( -5deg ); -ms-transform: rotate( -5deg ); -o-transform: rotate( -5deg ); transform: rotate( -5deg );}
97% { -webkit-transform: rotate( 20deg ); -moz-transform: rotate( 20deg ); -ms-transform: rotate( 20deg ); -o-transform: rotate( 20deg ); transform: rotate( 20deg );}
94% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
87% { -webkit-transform: rotate( -5deg ); -moz-transform: rotate( -5deg ); -ms-transform: rotate( -5deg ); -o-transform: rotate( -5deg ); transform: rotate( -5deg );}
84% { -webkit-transform: rotate( 20deg ); -moz-transform: rotate( 20deg ); -ms-transform: rotate( 20deg ); -o-transform: rotate( 20deg ); transform: rotate( 20deg );}
82% { -webkit-transform: rotate( 5deg ); -moz-transform: rotate( 5deg ); -ms-transform: rotate( 5deg ); -o-transform: rotate( 5deg ); transform: rotate( 5deg );}
74% { -webkit-transform: rotate( 5deg ); -moz-transform: rotate( 5deg ); -ms-transform: rotate( 5deg ); -o-transform: rotate( 5deg ); transform: rotate( 5deg );}
68% { -webkit-transform: rotate( -20deg ); -moz-transform: rotate( -20deg ); -ms-transform: rotate( -20deg ); -o-transform: rotate( -20deg ); transform: rotate( -20deg );}
65% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
53% { -webkit-transform: rotate( 6deg ); -moz-transform: rotate( 6deg ); -ms-transform: rotate( 6deg ); -o-transform: rotate( 6deg ); transform: rotate( 6deg );}
50% { -webkit-transform: rotate( 10deg ); -moz-transform: rotate( 10deg ); -ms-transform: rotate( 10deg ); -o-transform: rotate( 10deg ); transform: rotate( 10deg );}
48% { -webkit-transform: rotate( -5deg ); -moz-transform: rotate( -5deg ); -ms-transform: rotate( -5deg ); -o-transform: rotate( -5deg ); transform: rotate( -5deg );}
47% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
42% { -webkit-transform: rotate( -15deg ); -moz-transform: rotate( -15deg ); -ms-transform: rotate( -15deg ); -o-transform: rotate( -15deg ); transform: rotate( -15deg );}
40% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
33% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
22% { -webkit-transform: rotate( 10deg ); -moz-transform: rotate( 10deg ); -ms-transform: rotate( 10deg ); -o-transform: rotate( 10deg ); transform: rotate( 10deg );}
17% { -webkit-transform: rotate( 0deg ); -moz-transform: rotate( 0deg ); -ms-transform: rotate( 0deg ); -o-transform: rotate( 0deg ); transform: rotate( 0deg );}
0% { -webkit-transform: rotate( -5deg ); -moz-transform: rotate( -5deg ); -ms-transform: rotate( -5deg ); -o-transform: rotate( -5deg ); transform: rotate( -5deg );}
}
```
```html
<div id="letter">J</div>
<div id="description">is for Joystick</div>
<div id="animation">
<div class="button"></div>
<div class="button"></div>
<div class="stick">
<div class="topButton"></div>
<div class="bottom"></div>
<div class="support"></div>
</div>
<div class="rubber"></div>
<div class="base"></div>
<div class="cord"></div>
<div class="wallcord"></div>
</div>
```
|
28,458,814
|
This is my .htaccess file. Everything works so far, but I can't manage to remove the .php extension from files; every snippet I tried from other answers just threw a 500 or 404 error. Please advise where and what to add. The folder structure is localhost/myfolder/somefile.php.
Just to be clear: localhost/myfolder/ is the root of my project.
```
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !(.*)/$
RewriteRule ^(.*)$ /myfolder/$1/ [L,R=301]
RewriteRule ^team-news/([0-9]+[/])?$ posts.php?p=$1&cat=Team\ News
RewriteRule ^product-news/([0-9]+[/])?$ posts.php?p=$1&cat=Product\ News
RewriteRule ^member-specials/([0-9]+[/])?$ posts.php?p=$1&cat=Member\ Specials
RewriteRule ^ambassador-blogs/([0-9]+[/])?$ posts.php?p=$1&cat=Ambassador\ Blogs
RewriteRule ^user/([0-9]+[/])?$ profile.php?id=$1
RewriteRule ^browse-all/([0-9]+[/])?$ searchall.php?p=$1
RewriteRule ^edit/([0-9]+[/])?$ edit.php?id=$1
RewriteRule ^articles/([0-9]+[/])?$ post.php?id=$1
```
|
2015/02/11
|
[
"https://Stackoverflow.com/questions/28458814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4103960/"
] |
This snippet rewrites URLs to remove the `.php` extension:
```
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^([^\.]+)$ $1.php [NC,L]
```
If you want your URL to have a trailing `/`, you can use this snippet:
```
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^([^/]+)/$ $1.php
RewriteRule ^([^/]+)/([^/]+)/$ /$1/$2.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]{1,5}|/)$
RewriteRule (.*)$ /$1/ [R=301,L]
```
[Source](http://alexcican.com/post/how-to-remove-php-html-htm-extensions-with-htaccess/)
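A variant adapted to the asker's layout (a sketch, assuming the `.htaccess` lives in `/myfolder/`): the `-f` condition on `%{REQUEST_FILENAME}.php` makes the rewrite fire only when a matching `.php` file actually exists on disk, though the interaction with the trailing-slash redirect in the asker's existing rules would still need testing:

```
RewriteEngine On
# leave real files and directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# only rewrite when a matching .php file exists on disk
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^([^\.]+)$ $1.php [NC,L]
```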
|
Removing an extension such as `.php` or `.html` from the URL means the server needs a rewrite rule to find the source file. If that's what you want, the following may help:
UPDATED:
**PHP:**
```
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^(.*)/$ /$1.php
```
**HTML:**
```
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.html -f
RewriteRule ^(.*)/$ /$1.html
```
These rules will remove the extension from your URLs (both php and html).
**Note**: make sure the server has the `mod_rewrite` module enabled.
|
47,315,287
|
I'm trying to filter an azure search edm.collection to return results if any of multiple strings are in the collection. I can only get it working when querying for one item, which isn't good enough for my use case. I can't find syntax for querying multiple parameters.
```
filter += "FirmTypes / any (x: x eq 'Big 4')";
```
The above works and returns all of the documents where the firm type is 'Big 4'.
I've tried multiple ways (some below) to filter on more than one value, with no success:
```
//filter += " OR any (x: x eq 'Industry')";
//filter += "FirmTypes / any (x: x eq 'Industry')";
//filter += "FirmTypes / any (x: x eq 'Big 4', 'Industry', 'PLC')"
//filter += "FirmTypes / any (x: x eq 'Big 4' or 'Industry' or 'PLC')"
//filter += "FirmTypes / any (x: x eq 'Big 4') or (x: x eq 'Industry')"
//filter += "FirmTypes / any (x: x eq 'Big 4')|(x: x eq 'Industry')"
```
Can anybody kindly point me in the right direction? Thank you in advance.
|
2017/11/15
|
[
"https://Stackoverflow.com/questions/47315287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6130346/"
] |
The best way to filter over multiple values is to use the new `search.in` function:
```
FirmTypes/any(x: search.in(x, 'Big 4|Industry', '|'))
```
For large numbers of values, `search.in` is significantly faster than using a combination of `or` and `eq`, and it can handle a much larger number of values without hitting the hard limits on filter complexity.
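For building such filters programmatically, a small helper like this sketch can join the values (the function name and the `|` delimiter are assumptions for illustration, not part of any Azure Search SDK; single quotes are doubled per OData string-literal escaping):

```javascript
// Build an OData filter matching documents whose collection field
// contains any of the given values, using Azure Search's search.in.
function buildCollectionFilter(fieldName, values, delimiter) {
  delimiter = delimiter || '|';
  // OData string literals escape a single quote by doubling it
  var joined = values
    .map(function (v) { return v.replace(/'/g, "''"); })
    .join(delimiter);
  return fieldName + "/any(x: search.in(x, '" + joined + "', '" + delimiter + "'))";
}

console.log(buildCollectionFilter('FirmTypes', ['Big 4', 'Industry', 'PLC']));
// FirmTypes/any(x: search.in(x, 'Big 4|Industry|PLC', '|'))
```

Pick a delimiter that cannot occur inside the values themselves.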
|
Got it as soon as I posted. If anybody else has the same issue:
```
"FirmTypes / any (x: x eq 'Big 4') or FirmTypes / any (x: x eq 'Industry')"
```
|
46,671,916
|
I have additional settings that I need to pass to Karate when running via Maven that will be available in karate-config.js. Currently I can pass in a string using the karate.env property - is it necessary to encode my parameters as a JSON object and pass them in via this one property or can I do something like:
```
mvn test -DargLine="-Dkarate.env='production' -Dkarate.clientid='1234567890' ...
```
Such that I can then reference karate.clientid in karate-config.js where I can save it into the returned config object.
I'm sure I'm missing something obvious here...
|
2017/10/10
|
[
"https://Stackoverflow.com/questions/46671916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2452580/"
] |
Yes! Refer to the [documentation](https://github.com/intuit/karate#the-karate-object) for `karate.properties['karate.clientid']`.
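A minimal `karate-config.js`-style sketch of that pattern. The `karate` stub below is only so the snippet runs standalone (in real Karate the runtime injects that object), and the `'dev'` / `'xyz'` fallbacks are assumptions:

```javascript
// Stub standing in for the object Karate injects at runtime.
var karate = {
  env: null,
  properties: { 'karate.clientid': '1234567890' }
};

// The usual karate-config.js shape: return a config object,
// falling back to defaults when a -D property was not passed.
function fn() {
  var config = { env: karate.env || 'dev' };
  config.clientid = karate.properties['karate.clientid'] || 'xyz';
  return config;
}

console.log(fn());
```

Run with e.g. `mvn test -Dkarate.clientid=1234567890`, and `clientid` lands in the returned config object.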
|
I've found a way, but I didn't use examples. What I've done:
* In Gradle:
```
task api(type:Test) {
systemProperty "karate.clientId", System.properties.getProperty("karate.clientId")...
}
```
* In `karate-config.js` (in `var config`):
```
clientId: karate.properties['karate.clientId'] || 'xyz'
```
* In the command line:
```
'-Dkarate.clientId=abc'
```
If I don't set the `clientId` in my command line, the `clientId` will be `'xyz'`.
|
549,526
|
In a comment on the question [Is there any rule regarding when not to use the pattern "noun phrase + of + possessive pronoun"?](https://english.stackexchange.com/questions/549521), such as "a friend of his", John Lawler writes
>
> First rule is: Don't do it with *its.*
>
>
>
Using *its,* for example "I really like those whiskers of its" (as of a cat), does indeed sound off.
Is that because *its* in that sentence needs to be stressed, and *its* is a word which cannot sustain stress? That would make it similar to *I'm* ("He's hot like I'm") because contractions cannot be placed where stress is needed. But *its* is no more a contraction than *his* is.
What gives?
|
2020/10/18
|
[
"https://english.stackexchange.com/questions/549526",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/18696/"
] |
In [CGEL](https://rads.stackoverflow.com/amzn/click/com/0521431468)'s terminology, we are talking about the **oblique genitive** construction (pp. 468-469). According to CGEL, compared to the non-oblique genitive, 'the range of semantic relations between subordinate and matrix NPs [is] considerably narrower in the oblique construction'. For example, while *Mary's green eyes* is equivalent to *the green eyes of Mary's*, we could not rephrase *the summer's heat* as \**the heat of the summer's*. (The asterisk '\*' in front of something signifies that what follows is not acceptable English.)
In the case of *the summer's heat*, we could of course use the **non-genitive** *of*-construction *the heat of the summer*. But that's a separate subject.
On p. 474, CGEL provides no fewer than 23 examples of semantic relations between the genitive NP and the head. For each example, it provides a corresponding clause and a generalized schema. For example, the clause corresponding to *Mary's green eyes* is *Mary has green eyes.*, and the generalized schema is [d has body part h]. Here 'd' stands for 'dependent' and 'h' for 'head', since in *Mary's green eyes*, *eyes* is the head, and *Mary's* a dependent. And for *the summer's heat*, the clause is *The summer is hot.*, and the schema is [d has non-human property h]. The motivation for which examples to include was this (p. 473):
>
> the semantic distinctions made are typical of those known cross-linguistically to have different structural realisations even though they are similarly expressed in English.
>
>
>
The list of examples is organized in such a way that the oblique construction is perfectly acceptable in the upper half, but either questionable or clearly unacceptable in the lower half (CGEL, p. 478). And the main thing that distinguishes the entries in the upper half from those in the lower half is that those in the upper half concern humans, while those in the lower half do not.
Here is the list itself:
>
> [54] i *Mary's green eyes Mary has green eyes.* [d has body part h]
>
> ii *Mary's younger sister Mary has a younger sister.* [d has kin relation h]
>
> iii *Mary's husband Mary has a husband.* [d has married relation h]
>
> iv *Mary's boss Mary has a boss.* [d has superior h]
>
> v *Mary's secretary Mary has a secretary.* [d has subordinate h]
>
> vi *Mary's friend Mary has a friend.* [d has equal h]
>
> vii *Mary's team Mary belongs to a team.* [d is member of h]
>
> viii *Mary's debut Mary performs her debut.* [d is performer of h]
>
> ix *Mary's book Mary writes a book.* [d is creator of h]
>
> x *Mary's new house Mary owns a new house.* [d is owner of h]
>
> xi *Mary's honour Mary is honourable.* [d has human property h]
>
> xii *Mary's anger Mary feels angry.* [d has feeling h]
>
> xiii *Mary's letter Mary receives a letter.* [d is recipient of h]
>
> xiv *Mary's obituary Mary is the topic of an obituary.* [d is human topic of h]
>
> xv *Mary's surgery Mary undergoes surgery.* [d is undergoer of h]
>
> xvi *the room's Persian carpet The room contains a Persian carpet.* [d is location of h]
>
> xvii *this year's new fashions This year is a time of new fashions.* [d is time of h]
>
> xviii *the sun's rays The sun emits rays.* [d is natural source of h]
>
> xix *the cathedral's spire The cathedral has a spire.* [d has inherent part h]
>
> xx *the war's ancient origins The war has ancient origins.* [d has cause h]
>
> xxi *the flood's consequences The flood has consequences.* [d has result h]
>
> xxii *the lock's key The lock has a key.* [d has associated part h]
>
> xxiii *the summer's heat The summer is hot.* [d has non-human property h]
>
>
>
But the human-nonhuman distinction is not enough to predict whether the oblique genitive construction is possible: sometimes it is impossible even though the dependent (in the non-oblique genitive) is human. For example, note that while *green eyes of Mary's* is fine, \**the green eyes of my boss's* is not, at least to my ear. On the other hand, *the green eyes of my mother's* is at least marginally acceptable, again, at least to my ear. CGEL doesn't analyze this kind of distinction (but some other source might); it seems that only some human roles, but not others, allow the oblique genitive construction.
Note also that an oblique paraphrase is normally not possible if the dependent in the non-oblique construction is an animal (*the cat's owner* but not \**the owner of the cat's*). Of course, *this cat of Mary's* is fine (but, again, \**this cat of the owner's* is not).
I should mention that, in many cases, one can use the **non-genitive** *of*-construction (either instead of or in addition to the oblique genitive), e.g. *the owner of the cat*. At the same time, there are certainly plenty of cases when the non-genitive *of*-construction is not possible (e.g. \**this cat of Mary*). However, again, that's a different subject.
All this suggests that the problem with *its*—the reason why the genitive constructions in which *its* enters as a dependent cannot be turned into oblique genitives—is that *its* generically refers to non-human objects. More generally, the problem is that the **semantic relations** into which *its* enters as a dependent are not of the kind that can be expressible using the oblique genitive construction. While we don't have a clear necessary **and** sufficient condition for when the oblique construction is possible, we can say that it is at least necessary (though not sufficient) that the dependent in the non-genitive construction refer to a human—and *its* generically does not.
**Further discussion on the semantic restrictions of the oblique genitive**
In the book chapter 'The oblique genitive in English' by John Payne (in *Morphosyntactic Categories and the Expression of Possession*, K. Börjars, D. Denison, and A. Scott, eds, John Benjamins Publishing Company, Philadelphia, 2013), we find the following summary:
>
> there is widespread agreement that the oblique genitive is considerably more semantically restricted than the subject-determiner genitive. Sometimes the constraint is expressed in terms of animacy. But opinions differ, e.g. the dependent must be human (Quirk et al. 1985: 1283), personal (Quirk et al. 1972: 203), or just animate (Lyons 1986; Taylor 1989). Sometimes the constraint is expressed in terms of semantic relations: Storto (2000) is most restrictive, claiming that the relationship in the oblique genitive must be one of “ownership or possession proper”. Taylor (1989) claims that the oblique genitive works best with “true” possession and various kinds of interpersonal relationship, but also “somewhat marginally” with what he calls authorship and attribution. Christianson (1997: 102) counterclaims, correctly we believe, that the authorship and attribution relations are far from marginal. For further discussion of the permitted semantic relations, see also Payne & Huddleston (2002: 478).
>
>
> Examples of semantic relations occurring in the oblique genitive would be those
> in (7):
>
>
> (7) a. *that dog of yours* true possession
>
> b. *that son of theirs* interpersonal
>
> c. *that new book of his* authorship
>
> d. *no fault of mine* attribution
>
>
>
> These relations can also be observed with the subject-determiner genitive:
>
>
> (8) a. *your dog* true possession
>
> b. *their son* interpersonal
>
> c. *his new book* authorship
>
> d. *my fault* attribution
>
>
> However, some semantic relations which are possible for the subject-determiner genitive are claimed to be systematically excluded in the oblique genitive (examples adapted from Lyons 1986: 128–129, Christianson 1997: 101, Payne & Huddleston 2002: 478):
>
>
> (9) a. \**another destruction of the city’s* patient of event nominal
>
> b. \**that funnel of the ship’s* proper part
>
> c. \**this weather of the summer’s* time
>
> d. \**that photograph of mine* theme
>
>
> The term “theme” would here denote the person depicted on the photograph. Compare:
>
>
> (10) a. *the city’s destruction* patient of event nominal
>
> b. *the ship’s funnel* proper part
>
> c. *this summer’s weather* time
>
> d. *my photograph* theme
>
>
> Examples such as those in (9a–c) and (10a–c) of course involve non-human dependents, and the unacceptable examples of the oblique genitive in (9a–c) would fall under the general ban on such. However, there are clearly also some relations, like the thematic relationship in (9d), which potentially involve human dependents and in which the oblique genitive is nevertheless excluded. Similarly, there is no improvement in examples illustrating the patient relationship when the patient is human: \**another destruction of the Roman legion’s* appears equally unacceptable, as does the patient interpretation in an example like \**another rejection of Mary’s.*
>
>
> The consensus therefore appears to be that the oblique genitive occurs in a subset of the semantic relations permitted to the subject-determiner genitive. The dependent in the oblique genitive must be high on the animacy scale, and not in a thematic or patient-like relation with the head noun. Syntactically, a choice of determiner other than the dependent NP itself rules out the oblique genitive. However, the definiteness effect inherent in the subject-determiner construction does not totally rule out occurrences of the oblique genitive with the definite article.
>
>
>
|
This pattern is a colloquial use of possessive pronouns, and typically refers to people in a non-specific sense.
John Lawler has a good point here: *its* as a possessive pronoun in this pattern lacks specificity. The less specific the possessive pronoun is, the less clear the sentence becomes, and the more meanings the sentence may take on.
The following sentence fragment:
>
> a friend of his
>
>
>
implies a male subject of some kind. While not absolutely specific, the word *his* provides more information than the word *its*. You can re-write the sentence with the word *his* and still have a vague sense of who you are describing.
But the following sentence fragment:
>
> a feature of its
>
>
>
while correct with respect to the pattern, can have multiple meanings:
>
> its feature
>
>
>
>
> a feature of its [design, application, downfall, facade, etc.]
>
>
>
Which meaning is the correct one?
Colloquial patterns do not always make for good written or spoken grammar. I could argue you should always write *his friend* instead of *a friend of his* because it is more concise. But you won't always hear it that way in spoken English.
|
163,464
|
So, I was just trying to watch some video online in Chrome, and when I get to the page I'm told I "need to install Quicktime to play the video". This is odd because I clearly have Quicktime in my applications on my Mac. It seems to be a Chrome problem, but I've read I should go to chrome:plugins and make sure it's on there; Quicktime does not appear on that list. It DOES appear under "Extensions", where it says it's enabled, but still nothing.
When I click the little dropdown box that says I need Quicktime to watch the video, I'm just taken to a page that tells me "Quicktime comes with Mac" with no apparent link to download it independently.
Does anyone know what the issue could be?
|
2014/12/25
|
[
"https://apple.stackexchange.com/questions/163464",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/106057/"
] |
Since version 39, Chrome is 64-bit only, but there is only a 32-bit version of the QuickTime plug-in. I do not think a workaround currently exists other than using another browser, Safari being your best bet for compatibility.
Source - <https://discussions.apple.com/thread/6510828>
|
Chrome has disabled many plug-ins including QuickTime in its latest release.
Actually, I believe this is due to the fact that Chrome 39+ is 64-bit and the QuickTime player plugin is 32-bit.
|
23,934,628
|
I have a table which has the following.
```
fieldmapid | fieldid
------------+---------
2010 | 180
2012 | 90
2012 | 92
2020 | 90
2020 | 92
2020 | 95
20005 | 90
20005 | 92
```
I want to write a query which first gets the fieldid's of a given fieldmapid (assume it's 2012), and then selects the fieldmapid's which have exactly the same fieldid's as 2012. That means I need fieldmapid's which have only 90 and 92 as their fieldid's and nothing else, so I want 2012 and 20005 as the output of the query, because these are the only two fieldmapid's which have only 90 and 92 as their fieldid's.
|
2014/05/29
|
[
"https://Stackoverflow.com/questions/23934628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3488109/"
] |
Try this. Maybe it will help you.
```
select f.fieldmapid from Field f
LEFT JOIN Field f1 on f1.fieldid=f.fieldid and
(f1.fieldmapid=2012)
group by f.fieldmapid
having
count(f.fieldmapid)=count(f1.fieldmapid);
```
Demo link : <http://sqlfiddle.com/#!2/b72e05/3>
|
This is an example of a "set-within-sets" subquery. You can solve it by doing:
```
select t.fieldmapid
from table t left outer join
     (select fieldid
      from table t2
      where fieldmapid = 2012
     ) f2012
     on f2012.fieldid = t.fieldid
group by t.fieldmapid
having count(*) = count(f2012.fieldid) and
       count(*) = (select count(*) from table t2 where fieldmapid = 2012);
```
The first condition in the `having` clause checks that all the fields for a given `fieldmapid` match at least one field for 2012. The second verifies that the number of fields on `fieldmapid` matches the number on 2012.
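As a sanity check, the query above can be run end-to-end with Python's built-in `sqlite3` module against the sample data from the question. The table name `fieldmap` is invented for this demo (the question never names the table):

```python
import sqlite3

# Build the sample table from the question in an in-memory database.
# The table name "fieldmap" is made up for this demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fieldmap (fieldmapid INTEGER, fieldid INTEGER)")
rows = [(2010, 180), (2012, 90), (2012, 92), (2020, 90),
        (2020, 92), (2020, 95), (20005, 90), (20005, 92)]
conn.executemany("INSERT INTO fieldmap VALUES (?, ?)", rows)

# Set-within-sets: keep fieldmapids whose fieldid set equals 2012's set.
query = """
SELECT t.fieldmapid
FROM fieldmap t
LEFT OUTER JOIN (SELECT fieldid FROM fieldmap WHERE fieldmapid = 2012) f2012
       ON f2012.fieldid = t.fieldid
GROUP BY t.fieldmapid
HAVING COUNT(*) = COUNT(f2012.fieldid)
   AND COUNT(*) = (SELECT COUNT(*) FROM fieldmap WHERE fieldmapid = 2012)
"""
result = sorted(r[0] for r in conn.execute(query))
print(result)  # [2012, 20005]
```

Here 2020 is rejected by the first `HAVING` condition (its extra fieldid 95 finds no match in 2012's set), and 2010 by both conditions.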
|
2,970,083
|
It's complicated to explain what the transformation I'm dealing with is but basically it is a transformation that takes a 3D model and maps it to the surface of one polygon on another model. The issue is that I'd prefer to not have to recompute the normal vectors of the post-transformation model at construction time (which are needed for lighting).
There are actually two versions of the transform. I'm not sure which I will pick so ways to convert normals for each will be needed.
First transform:
Given a point $v = (x,y,z)$ in space and the triangle $(p\_1, p\_2, p\_3)$ in space with point normals $(N\_1, N\_2, N\_3)$ the transformation is $\Omega v = p\_1x + p\_2y + p\_3(1 - x - y) + z(N\_1x + N\_2y + N\_3(1 - x - y))$.
Second transform:
Given a point $v = (x,y,z)$ in space and the triangle $(p\_1, p\_2, p\_3)$ in space with point normals $(N\_1, N\_2, N\_3)$ and triangle normal $N\_t$ the transformation is $\Omega v = p\_1x + p\_2y + p\_3(1 - x - y) + z(N\_1x + N\_2y + N\_3(1 - x - y)) \frac {1}{<N\_t,N\_1x + N\_2y + N\_3(1 - x - y)>}$.
If $v$ has some normal vector associated with it, then what would it map to geometrically in the case of each transform?
---
Note: $<a,b>$ is the dot product between $a$ and $b$.
|
2018/10/25
|
[
"https://math.stackexchange.com/questions/2970083",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/289789/"
] |
Another way to look at OP's problem is to consider it as a transformation from one triangular prism to another, or between triangular prism coordinates $(u, v, w)$ and Cartesian coordinates $(x, y, z)$:

Each prism is defined by the base triangle $T\_1$ vertices $\vec{p}\_1$, $\vec{p}\_2$, $\vec{p}\_3$, and three corresponding translation vectors $\vec{n}\_1$, $\vec{n}\_2$, $\vec{n}\_3$, so that the other end triangle $T\_2$ vertices are $(\vec{p}\_1+\vec{n}\_1)$, $(\vec{p}\_2+\vec{n}\_2)$, $(\vec{p}\_3+\vec{n}\_3)$, as shown in the above illustration: $T\_1$ is the bottom triangle, and $T\_2$ the top triangle, in both triangular prisms.
The triangular prism coordinate system $(u, v, w)$ is such that
$$\begin{array}{ccc|c}
u & v & w & \text{3D vector} \\
\hline
0 & 0 & 0 & \vec{p}\_3 \\
1 & 0 & 0 & \vec{p}\_1 \\
0 & 1 & 0 & \vec{p}\_2 \\
0 & 0 & 1 & \vec{p}\_3 + \vec{n}\_3 \\
1 & 0 & 1 & \vec{p}\_1 + \vec{n}\_1 \\
0 & 1 & 1 & \vec{p}\_2 + \vec{n}\_2 \\
\end{array} \quad \text{with} \quad \left\lbrace \; \begin{array}{rcl}
0 \le & u & \le 1 \\
0 \le & v & \le 1 \\
0 \le & u + v & \le 1 \\
0 \le & w & \le 1 \\
\end{array}\right.$$
In the illustration, darker red lines are $u = 0$, darker green lines are $v = 0$, and the blue lines are $w = 0$. The gray lines are where $u + v = 1$.
The transformation from triangular prism coordinates $(u, v, w)$ to 3D Cartesian coordinates $\vec{v} = (x, y, z)$ is
$$\bbox[#ffffef, 1em]{\Omega(u, v, w) = u \vec{p}\_1 + v \vec{p}\_2 + (1 - u - v) \vec{p}\_3 + w \left ( u \vec{n}\_1 + v \vec{n}\_2 + (1 - u - v) \vec{n}\_3 \right )}$$
which is equivalent to
$$\begin{aligned}
\Omega(u,v,w) & = (1-w)\Bigl( \vec{p}\_3 + u (\vec{p}\_1 - \vec{p}\_3) + v (\vec{p}\_2 - \vec{p}\_3 ) \Bigr) \\
& + w\Bigl( (\vec{p}\_3 + \vec{n}\_3) + u \bigl((\vec{p}\_1 + \vec{n}\_1) - (\vec{p}\_3 + \vec{n}\_3)\bigr) + v \bigl((\vec{p}\_2 + \vec{n}\_2) - (\vec{p}\_3 + \vec{n}\_3)\bigr) \Bigr) \\
\end{aligned}$$ i.e. $(u, v)$ are barycentric coordinates in a triangle, and $w$ interpolates linearly between $T\_1$ and $T\_2$.
The one additional vector is $\hat{n}\_T$, the unit normal vector of $T\_1$:
$$\hat{n}\_T = \frac{\left(\vec{p}\_1 - \vec{p}\_3\right)\times\left(\vec{p}\_2 - \vec{p}\_3\right)}{\left\lVert\left(\vec{p}\_1 - \vec{p}\_3\right)\times\left(\vec{p}\_2 - \vec{p}\_3\right)\right\rVert}$$
Its direction depends on the order of the three vertices, but let's assume it points towards $T\_2$. (If not, just negate all of its components, or swap $\vec{p}\_1$ and $\vec{p}\_2$ above to correct the ordering.)
OP noted that they desire $T\_1$ and $T\_2$ to be parallel, and $T\_2$ at distance $L$ from $T\_1$. This can be achieved by scaling $\vec{n}\_1$, $\vec{n}\_2$, and $\vec{n}\_3$. If $\vec{N}\_1$, $\vec{N}\_2$, and $\vec{N}\_3$ are the original vectors, then the scaled ones are
$$\bbox[#ffffef, 1em]{ \vec{n}\_1 = \vec{N}\_1 \frac{L}{\hat{n}\_T \cdot \vec{N}\_1} }, \quad \bbox[#ffffef, 1em]{ \vec{n}\_2 = \vec{N}\_2 \frac{L}{\hat{n}\_T \cdot \vec{N}\_2} }, \quad \bbox[#ffffef, 1em]{ \vec{n}\_3 = \vec{N}\_3 \frac{L}{\hat{n}\_T \cdot \vec{N}\_3} }$$
because $\hat{n}\_T \cdot \vec{N}\_i$ yields the length of $\vec{N}\_i$ projected onto the unit vector $\hat{n}\_T$, and $\hat{n}\_T$ is perpendicular to triangle $T\_1$.
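A quick numeric check of this scaling rule, in plain Python (the triangle, the vertex normals, and $L$ below are made-up test data, not anything from the question): after scaling, every vertex of $T\_2$ lies exactly at height $L$ above the plane of $T\_1$.

```python
# Check that n_i = N_i * L / (n_T . N_i) places each vertex of T2 at
# height L above the plane of T1. All concrete numbers are made up.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

p1, p2, p3 = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

# Unit normal of T1; for this triangle it is already +z, pointing at T2.
nT = cross(sub(p1, p3), sub(p2, p3))
norm = dot(nT, nT) ** 0.5
nT = tuple(c / norm for c in nT)

# Arbitrary, non-parallel vertex normals and a target distance L.
normals = [(0.2, 0.1, 1.0), (-0.3, 0.0, 0.9), (0.1, -0.2, 1.1)]
L = 2.5

heights = []
for p, N in zip((p1, p2, p3), normals):
    s = L / dot(nT, N)                            # scale factor L / (n_T . N_i)
    top = tuple(a + s * b for a, b in zip(p, N))  # vertex of T2: p_i + n_i
    heights.append(dot(nT, sub(top, p3)))         # signed height above T1's plane

print(heights)  # each entry is 2.5, up to floating-point rounding
```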
---
If we use helper vectors
$$\begin{aligned}
\vec{b}\_0 = ( x\_0 , y\_0 , z\_0 ) &= \vec{p}\_3 \\
\vec{b}\_1 = ( x\_1 , y\_1 , z\_1 ) &= \vec{p}\_1 - \vec{p}\_3 \\
\vec{b}\_2 = ( x\_2 , y\_2 , z\_2 ) &= \vec{p}\_2 - \vec{p}\_3 \\
\vec{b}\_3 = ( x\_3 , y\_3 , z\_3 ) &= \vec{n}\_3 \\
\vec{b}\_4 = ( x\_4 , y\_4 , z\_4 ) &= \vec{n}\_1 - \vec{n}\_3 \\
\vec{b}\_5 = ( x\_5 , y\_5 , z\_5 ) &= \vec{n}\_2 - \vec{n}\_3 \\
\end{aligned}$$
then
$$\bbox[#ffffef, 1em]{ \Omega(u,v,w) = \vec{b}\_0 + u \vec{b}\_1 + v \vec{b}\_2 + w ( \vec{b}\_3 + u \vec{b}\_4 + v \vec{b}\_5 )}$$
i.e.
$$\Omega(u,v,w) = \begin{cases}
x = x\_0 + u x\_1 + v x\_2 + w ( x\_3 + u x\_4 + v x\_5 ) \\
y = y\_0 + u y\_1 + v y\_2 + w ( y\_3 + u y\_4 + v y\_5 ) \\
z = z\_0 + u z\_1 + v z\_2 + w ( z\_3 + u z\_4 + v z\_5 ) \\
\end{cases}$$
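As a cross-check, here is a minimal Python sketch of $\Omega$ in this component form, verified against the six corner rows of the coordinate table above (the concrete vectors are arbitrary test data):

```python
# Tiny implementation of Omega(u, v, w) and a check of the corner table:
# e.g. (u, v, w) = (1, 0, 1) must map to p1 + n1. Vectors are test data.

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

def mul(s, v):
    return tuple(s * c for c in v)

p1, p2, p3 = (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 3.0, 0.0)
n1, n2, n3 = (0.1, 0.0, 1.0), (0.0, 0.2, 1.0), (-0.1, -0.1, 1.0)

def omega(u, v, w):
    base = add(mul(u, p1), mul(v, p2), mul(1 - u - v, p3))
    trans = add(mul(u, n1), mul(v, n2), mul(1 - u - v, n3))
    return add(base, mul(w, trans))

assert omega(0, 0, 0) == p3
assert omega(1, 0, 0) == p1
assert omega(0, 1, 0) == p2
assert omega(0, 0, 1) == add(p3, n3)
assert omega(1, 0, 1) == add(p1, n1)
assert omega(0, 1, 1) == add(p2, n2)
print("corner table verified")
```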
A vector from $(u\_1 , v\_1 , w\_1)$ to $(u\_2 , v\_2 , w\_2)$ is
$$\vec{\Delta} = \Omega(u\_2 , v\_2 , w\_2 ) - \Omega(u\_1 , v\_1 , w\_1)$$
i.e.
$$\bbox[#ffffef, 1em]{ \vec{\Delta} = (u\_2 - u\_1) \vec{b}\_1 + (v\_2 - v\_1) \vec{b}\_2 + (w\_2 - w\_1) \vec{b}\_3 + (u\_2 w\_2 - u\_1 w\_1) \vec{b}\_4 + (v\_2 w\_2 - v\_1 w\_1) \vec{b}\_5 }$$
If we examine the axis directions at $(u, v, w)$, we find that
$$\begin{aligned}
\vec{e}\_u(u,v,w) = \frac{d \big( \Omega(u + d u, v, w) - \Omega(u , v , w) \big )}{d u} &= \vec{b}\_1 + w \vec{b}\_4 \\
\vec{e}\_v(u,v,w) = \frac{d \big( \Omega(u, v + d v, w) - \Omega(u , v , w) \big )}{d v} &= \vec{b}\_2 + w \vec{b}\_5 \\
\vec{e}\_w(u,v,w) = \frac{d \big( \Omega(u, v, w + d w) - \Omega(u , v , w) \big )}{d w} &= \vec{b}\_3 + u \vec{b}\_4 + v\vec{b}\_5
\end{aligned}$$
therefore the instant $u$, $v$, $w$ unit axis vectors at $(u, v, w)$ are
$$\bbox[#ffffef, 1em]{ \hat{e}\_u (u,v,w) = \frac{\vec{b}\_1 + w \vec{b}\_4}{\left\lVert\vec{b}\_1 + w \vec{b}\_4\right\rVert} }, \quad \bbox[#ffffef, 1em]{ \hat{e}\_v (u,v,w) = \frac{\vec{b}\_2 + w \vec{b}\_5}{\left\lVert\vec{b}\_2 + w \vec{b}\_5\right\rVert} }, \quad \bbox[#ffffef, 1em]{ \hat{e}\_w (u,v,w) = \frac{\vec{b}\_3 + u \vec{b}\_4 + v \vec{b}\_5}{\left\lVert \vec{b}\_3 + u \vec{b}\_4 + v \vec{b}\_5 \right\rVert} }$$
---
There are three ways of specifying how a direction $(d u , d v , d w)$ at $(u , v , w)$ is transformed to Cartesian coordinates.
Note that $d u$, $d v$, and $d w$ are just the three variables (and not, say, a multiplication with some variable $d$). It is a common notation for differentials, and I used it here out of familiarity. In practice, it means that when you use $(d u , d v , d w)$ to denote a direction, their "length" ($du^2 + dv^2 + dw^2$) is irrelevant, as long as it is not zero; only the direction of that vector matters.
1. Based on $\vec{\Delta}$ above, i.e. picking a point along the desired direction at some distance, converting both that and the origin to 3D points, and using their difference, normalized to unit length, as the Cartesian unit direction vector:
$$\Omega(u, v, w ; d u , d v , d w) = \frac{\Omega(u + d u , v + d v , w + d w) - \Omega(u , v , w)}{\bigr\lVert \Omega(u + d u , v + d v , w + d w) - \Omega(u , v , w) \bigr\rVert}$$
This yields
$$\bbox[#ffffef, 1em]{\Omega(u, v, w ; d u , d v , d w) = \frac{ d u \, \vec{b}\_1 + d v \, \vec{b}\_2 + d w \, \vec{b}\_3 + (u \, d w + w \, d u + d u \, d w) \vec{b}\_4 + (v \, d w + w \, d v + d v \, d w) \vec{b}\_5}{\bigr\lVert d u \, \vec{b}\_1 + d v \, \vec{b}\_2 + d w \, \vec{b}\_3 + (u \, d w + w \, d u + d u \, d w) \vec{b}\_4 + (v \, d w + w \, d v + d v \, d w) \vec{b}\_5 \bigr\rVert}}$$
This is exactly correct only when the mapping is linear.
2. Using $(d u , d v , d w)$ as the weights for the unnormalized direction vectors $\vec{e}\_u$, $\vec{e}\_v$, $\vec{e}\_w$:
$$\Omega(u, v, w ; d u , d v , d w) = \frac{ d u \, \vec{e}\_u + d v \, \vec{e}\_v + d w \, \vec{e}\_w }{\bigr\lVert d u \, \vec{e}\_u + d v \, \vec{e}\_v + d w \, \vec{e}\_w \bigr\rVert}$$
This yields
$$\bbox[#ffffef, 1em]{\Omega(u, v, w ; d u , d v , d w) = \frac{ \vec{b}\_1 d u + \vec{b}\_2 d v + \vec{b}\_3 d w + \vec{b}\_4 ( u \, dw + w \, du ) + \vec{b}\_5 ( v \, dw + w \, dv ) }{\bigr\lVert \vec{b}\_1 d u + \vec{b}\_2 d v + \vec{b}\_3 d w + \vec{b}\_4 ( u \, dw + w \, du ) + \vec{b}\_5 ( v \, dw + w \, dv ) \bigr\rVert}}$$
3. Using $(d u , d v , d w)$ as the weights for the normalized direction vectors $\hat{e}\_u$, $\hat{e}\_v$, $\hat{e}\_w$:
$$\Omega(u, v, w ; d u , d v , d w) = \frac{ d u \, \hat{e}\_u + d v \, \hat{e}\_v + d w \, \hat{e}\_w }{\bigr\lVert d u \, \hat{e}\_u + d v \, \hat{e}\_v + d w \, \hat{e}\_w \bigr\rVert}$$
This yields
$$\bbox[#ffffef, 1em]{\Omega(u, v, w ; d u , d v , d w) = \frac{ du \frac{ \vec{b}\_1 + w \vec{b}\_4 }{\left\lVert \vec{b}\_1 + w \vec{b}\_4 \right\rVert}
+ dv \frac{ \vec{b}\_2 + w \vec{b}\_5 }{\left\lVert \vec{b}\_2 + w \vec{b}\_5 \right\rVert}
+ dw \frac{ \vec{b}\_3 + u \vec{b}\_4 + v \vec{b}\_5 }{\left\lVert \vec{b}\_3 + u \vec{b}\_4 + v \vec{b}\_5 \right\rVert}
}{\left\lVert
du \frac{ \vec{b}\_1 + w \vec{b}\_4 }{\left\lVert \vec{b}\_1 + w \vec{b}\_4 \right\rVert}
+ dv \frac{ \vec{b}\_2 + w \vec{b}\_5 }{\left\lVert \vec{b}\_2 + w \vec{b}\_5 \right\rVert}
+ dw \frac{ \vec{b}\_3 + u \vec{b}\_4 + v \vec{b}\_5 }{\left\lVert \vec{b}\_3 + u \vec{b}\_4 + v \vec{b}\_5 \right\rVert}
\right\rVert}}$$
These three are all acceptable definitions. The $du^2 + dv^2 + dw^2 = R^2$ spherical shell centered at $(u , v , w)$ is not spherical in Cartesian coordinates, but some other shape resembling an ellipsoid. The definition of "direction" used here is that we essentially try to map that $du^2 + dv^2 + dw^2 = R^2$ shell into a spherical shell in 3D Cartesian coordinates with as little distortion as possible.
The first one provides the direction from $(u , v , w)$ to $(u + du , v + dv , w + dw)$, and is therefore the most versatile numerically, as it essentially maps the spherical shell $du^2 + dv^2 + dw^2 = R^2$ centered at $(u, v, w)$ to the surface of a unit sphere centered at $\Omega(u , v , w)$. This is optimal for those use cases where the direction is defined as pointing from $(u , v , w)$ to $(u + du , v + dv , w + dw)$.
The second and third provide the direction as viewed from $(u , v , w)$; that is, the direction $(du , dv , dw)$ considered relative to the point $(u , v , w)$. In particular, the second form is optimal for use cases such as reflection, where $(du , dv , dw)$ define the direction of the incoming ray. (It is possible to calculate the reflected direction in Cartesian coordinates directly, if we also know the surface normal at point $(u , v , w)$ in those same coordinates.)
In a sense, the second maps the spherical shell to an ellipsoid-like surface, then scales it back to a unit sphere; the third maps the shell to a spherical shell directly. The two only differ in how they map the distortion.
(If you take a balloon with regular dots or latitude and longitude lines, and then squeeze it, you'll see how in different directions angular differences vary. This is that exact effect here. You could think of the three definitions as analogous to three different balloon materials.)
I would personally use the second for lighting effects, when the direction of the incoming ray is in $(u , v , w)$ coordinates (since the reflection occurs at that point); and the first for e.g. target or camera direction calculations (when the direction *points at* something).
---
There is an inverse transformation $\Omega^{-1}(x, y, z) = (u , v, w)$ at least for the case when the triangles $T\_1$ and $T\_2$ are parallel (their respective planes at distance $L$ from each other). However, the solution is rather nasty, so I'll omit it. (I only bothered to solve it for the case where the orientation of the triangular prism was optimal, and even then it uses 15 precalculated constants depending on the 6 vectors defining the prism).
|
If you understood exactly what OP asked about -- in particular, noticed that $\Omega$ denotes a (nonlinear) transformation rather than a matrix --, skip this section.
We have a vector
$\vec{v} = \left [ \begin{matrix} x \\ y \\ z \end{matrix} \right ]$
in some odd coordinate system to transform to ordinary 3D coordinates.
The transform is controlled by three points $\vec{p}\_1$, $\vec{p}\_2$, and $\vec{p}\_3$, with their associated unit vectors $\hat{N}\_1$, $\hat{N}\_2$, and $\hat{N}\_3$, respectively; plus optionally an unit vector $\hat{N}\_T$ associated with all three points.
The two possible transforms are
$$\mathbf{\Omega}\_1(\vec{v}) = x \vec{p}\_1 + y \vec{p}\_2 + (1 - x - y)\vec{p}\_3 + z \left ( x \hat{N}\_1 + y \hat{N}\_2 + (1 - x - y) \hat{N}\_3 \right ) \tag{1}\label{NA1}$$
and
$$\mathbf{\Omega}\_2(\vec{v}) = x \vec{p}\_1 + y \vec{p}\_2 + (1 - x - y)\vec{p}\_3
+ \frac{ z \left ( x \hat{N}\_1 + y \hat{N}\_2 + (1 - x - y) \hat{N}\_3 \right )}{\hat{N}\_T \cdot \left ( x \hat{N}\_1 + y \hat{N}\_2 + (1 - x - y) \hat{N}\_3 \right )} \tag{2}\label{NA2}$$
For now, let's concentrate on the first version, $\eqref{NA1}$.
For vectors $\vec{v}$ on the $x y$ plane, i.e. $z = 0$, the transform is
$$\mathbf{\Omega}\_1\left( \left [ \begin{matrix} x \\ y \\ 0 \end{matrix} \right ] \right) = x \vec{p}\_1 + y \vec{p}\_2 + (1 - x - y) \vec{p}\_3 = \vec{p}\_3 + x \left( \vec{p}\_1 - \vec{p}\_3 \right ) + y \left ( \vec{p}\_2 - \vec{p}\_3 \right )$$
i.e. it defines a point $(x, y)$ in [barycentric coordinates](https://en.wikipedia.org/wiki/Barycentric_coordinates_(mathematics)) with respect to the triangle (2D simplex) defined by vertices $\vec{p}\_3$, $\vec{p}\_1$, and $\vec{p}\_2$.
Essentially, the three points $\vec{p}\_1$, $\vec{p}\_2$, and $\vec{p}\_3$ define one face, and points $\vec{p}\_1+\hat{N}\_1$, $\vec{p}\_2+\hat{N}\_2$, and $\vec{p}\_3+\hat{N}\_3$ the opposing face of a truncated triangular pyramid, or a prism. $0 \le z \le 1$ chooses the plane continuously between the two faces, and $x$ and $y$ are the barycentric coordinates on the triangle ($0 \le x , y , x + y \le 1$) on that plane.
The difference between $\eqref{NA1}$ and $\eqref{NA2}$ is that in the latter, while the transformed surfaces $z = \text{constant}$ are planar, $x = \text{constant}$ and $y = \text{constant}$ may be curved.
If the transformation was simply
$$\mathbf{\Omega}\left(\vec{v}\right) = x \vec{p}\_1 + y \vec{p}\_2 + (1 - x - y)\vec{p}\_3 + z \hat{N}\_t$$
OP could use a simple 3×3 transformation matrix followed by a translation (that does not depend on $\vec{v}$). But this transformation is not suitable.
If we expand e.g. $\eqref{NA1}$, we get
$$\begin{aligned}
\mathbf{\Omega}\_1\left(\vec{v}\right) & = \vec{p}\_3
+ x \left( \vec{p}\_1 - \vec{p}\_3 \right )
+ y \left( \vec{p}\_2 - \vec{p}\_3 \right )
+ z \hat{N}\_3 \\
\; & + x z \left( \hat{N}\_1 - \hat{N}\_3 \right )
+ y z \left( \hat{N}\_2 - \hat{N}\_3 \right ) \\
\end{aligned}$$
and immediately see (from $x z$ and $y z$ terms) that the transform (in these coordinates $(x , y , z)$) is not linear.
The question is, how to convert some direction in the mixed $(x,y,z)$ coordinates to an unit vector.
---
Let's say that our point $\vec{v}$ in mixed coordinates has some direction $\vec{d} = \left [ \begin{matrix} i \\ j \\ k \end{matrix} \right ]$ in the same mixed coordinates associated with it, and we wish to find the corresponding unit vector in ordinary Cartesian 3D coordinates.
If we find the vector from $\mathbf{\Omega}\_1\left(\vec{v}\right)$ to $\mathbf{\Omega}\_1\left(\vec{v} + \vec{d}\right)$ and normalize it to unit length, we get the corresponding transformed direction as an unit vector. This is because in Cartesian coordinates, the mapping is "sufficiently linear". (Details in the next section.)
$$\bbox[#ffffef]{ \begin{aligned}\vec{w} & = \mathbf{\Omega\_1}\left( \vec{v} + \vec{d} \right) - \mathbf{\Omega\_1}\left(\vec{v}\right) \\
\; & = i \left ( \vec{p}\_1 - \vec{p}\_3 \right ) + j \left ( \vec{p}\_2 - \vec{p}\_3 \right ) + k \hat{N}\_3 \\
\; & + \left( z i + i k + k x \right) \left( \hat{N}\_1 - \hat{N}\_3 \right) \\
\; & + \left( z j + j k + k y \right) \left( \hat{N}\_2 - \hat{N}\_3 \right) \\
\end{aligned} \tag{3a}\label{NA3a}}$$
The transformed unit direction vector is obtained via scaling to unit length,
$$\hat{w} = \frac{\vec{w}}{\left\lVert\vec{w}\right\rVert} = \frac{\vec{w}}{\sqrt{\vec{w}\cdot\vec{w}}} \tag{3b}\label{NA3b}$$
Note that $\vec{w}$ and thus $\hat{w}$ depend on both $\vec{v}$ and $\vec{d}$, so in the general case, you cannot avoid having to recalculate the normals for use in e.g. lighting.
In other words, you do not transform a direction $(i, j, k)$ in mixed coordinates alone, but always a direction $(i, j, k)$ with respect to specific mixed coordinate point $(x, y, z)$. Changing any of the six components can change the resulting Cartesian 3D direction vector.
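The exact closed form $(3a)$ is easy to verify numerically: subtracting the two transformed points directly gives the same vector as the formula. A small Python sketch with arbitrary test vectors (for this identity the $N$-vectors need not even be unit length):

```python
# Check that the closed form (3a) for w = Omega1(v + d) - Omega1(v)
# matches computing the two images and subtracting. All vectors are
# arbitrary test data.

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

def mul(s, v):
    return tuple(s * c for c in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

p1, p2, p3 = (1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 0.5)
N1, N2, N3 = (0.1, 0.0, 1.0), (0.0, 0.2, 1.0), (-0.1, 0.1, 0.9)

def omega1(x, y, z):
    interp_N = add(mul(x, N1), mul(y, N2), mul(1 - x - y, N3))
    return add(mul(x, p1), mul(y, p2), mul(1 - x - y, p3), mul(z, interp_N))

x, y, z = 0.2, 0.3, 0.4      # the point v
i, j, k = 0.05, -0.02, 0.1   # the direction d

direct = sub(omega1(x + i, y + j, z + k), omega1(x, y, z))
closed = add(mul(i, sub(p1, p3)), mul(j, sub(p2, p3)), mul(k, N3),
             mul(z*i + i*k + k*x, sub(N1, N3)),
             mul(z*j + j*k + k*y, sub(N2, N3)))

ok = all(abs(a - b) < 1e-12 for a, b in zip(direct, closed))
print(ok)  # True
```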
For the second transformation, that also depends on $\hat{N}\_T$,
$$\begin{aligned}\vec{w} & \approx \mathbf{\Omega\_2}\left( \vec{v} + \vec{d} \right) - \mathbf{\Omega\_2}\left(\vec{v}\right) \\
\; & = i \left ( \vec{p}\_1 - \vec{p}\_3 \right ) + j \left ( \vec{p}\_2 - \vec{p}\_3 \right ) \\
\; & + \frac{\left(\hat{N}\_1-\hat{N}\_3\right)(z+k)(x+i) + \left(\hat{N}\_2-\hat{N}\_3\right)(z+k)(y+j)}{(x+i)\hat{N}\_T\cdot\left(\hat{N}\_1-\hat{N}\_3\right) + (y+j)\hat{N}\_T\cdot\left(\hat{N}\_2-\hat{N}\_3\right) + \hat{N}\_T\cdot\hat{N}\_3} \\
\; & - \frac{z x\left(\hat{N}\_1-\hat{N}\_3\right) + z y\left(\hat{N}\_2-\hat{N}\_3\right)}{ x\hat{N}\_T\cdot\left(\hat{N}\_1-\hat{N}\_3\right) + y\hat{N}\_T\cdot\left(\hat{N}\_2-\hat{N}\_3\right)} \\
\end{aligned} \tag{4a}\label{NA4a}$$
so the final unit vector is again
$$\hat{w} \approx \frac{\vec{w}}{\left\lVert\vec{w}\right\rVert} = \frac{\vec{w}}{\sqrt{\vec{w}\cdot\vec{w}}} \tag{4b}\label{NA4b}$$
but because of the possible curvature, the direction is only approximate. This is only exact when $\hat{N}\_1$, $\hat{N}\_2$, and $\hat{N}\_3$ are all parallel to $\hat{N}\_3$, or at the limit $\lVert\vec{d}\rVert \to 0$. Numerically, the approximation can be controlled by scaling $\vec{d}$ to a suitable small length, just large enough to keep rounding errors within acceptable limits. For example, $\lVert\vec{d}\rVert \approx 0.001$ should be fine for lighting vectors.
---
To understand what the claim that a mapping from one truncated triangular pyramid to another (as used in the transformation above) is "sufficiently linear", let's rewrite $\eqref{NA1}$ as
$$\begin{aligned}
\mathbf{\Omega}\_1(\vec{v}) & = x \vec{p}\_1 + y \vec{p}\_2 + (1 - x - y)\vec{p}\_3 + z \left ( x \hat{N}\_1 + y \hat{N}\_2 + (1 - x - y) \hat{N}\_3 \right ) \\
\; & = (1 - z)\left ( x \vec{p}\_1 + y \vec{p}\_2 + (1 - x - y)\vec{p}\_3 \right ) \\
\; & + z \left ( x \left( \vec{p}\_1 + \hat{N}\_1 \right) + y \left( \vec{p}\_2 + \hat{N}\_2 \right) + (1 - x - y) \left( \vec{p}\_3 + \hat{N}\_3 \right) \right )\end{aligned}$$
If we use
$$ B(x, y; \vec{p}\_1 , \vec{p}\_2 , \vec{p}\_3 ) = x \vec{p}\_1 + y \vec{p}\_2 + (1 - x - y ) \vec{p}\_3 = \vec{p}\_3 + x \left(\vec{p}\_1 - \vec{p}\_3\right) + y \left(\vec{p}\_2 - \vec{p}\_3\right)$$
to denote a point in triangle $\vec{p}\_1$, $\vec{p}\_2$, $\vec{p}\_3$ specified by barycentric coordinates $(x , y)$, $0 \le x , y , x + y \le 1$, then
$$\mathbf{\Omega}\_1(\vec{v}) = (1 - z) B\left(x, y; \vec{p}\_1 , \vec{p}\_2 , \vec{p}\_3 \right) + z B\left(x, y; \vec{p}\_1 + \hat{N}\_1 , \vec{p}\_2 + \hat{N}\_2 , \vec{p}\_3 + \hat{N}\_3\right)$$
This transformation from mixed coordinates $(x, y, z)$ is therefore just linear interpolation within a triangular prism, or a truncated triangular pyramid.
This is linear, if the triangular prism or truncated triangular pyramid has planar faces: if and only if $\vec{p}\_1$, $\vec{p}\_2$, $\vec{p}\_1 + \hat{N}\_1$, $\vec{p}\_2 + \hat{N}\_2$ are coplanar; $\vec{p}\_1$, $\vec{p}\_3$, $\vec{p}\_1 + \hat{N}\_1$, $\vec{p}\_3 + \hat{N}\_3$ are coplanar; and $\vec{p}\_2$, $\vec{p}\_3$, $\vec{p}\_2 + \hat{N}\_2$, $\vec{p}\_3 + \hat{N}\_3$ are coplanar. For polygonal modelling, they should be.
The second transform is
$$\mathbf{\Omega}\_2\left(\vec{v}\right) = B\left(x, y; \vec{p}\_1 , \vec{p}\_2 , \vec{p}\_3\right) + z \frac{B\left(x, y; \hat{N}\_1 , \hat{N}\_2 , \hat{N}\_3 \right)}{\hat{N}\_T \cdot B\left(x, y; \hat{N}\_1 , \hat{N}\_2 , \hat{N}\_3 \right)}$$
OP alluded that $\hat{N}\_T$ is perpendicular to the plane formed by the three points $\vec{p}\_1$, $\vec{p}\_2$, $\vec{p}\_3$. That situation is equivalent to the case where the three points' $z$ coordinates are zero,
$$\vec{p}\_1 = \left[\begin{matrix} x\_1 \\ y\_1 \\ 0 \end{matrix}\right],
\vec{p}\_2 = \left[\begin{matrix} x\_2 \\ y\_2 \\ 0 \end{matrix}\right],
\vec{p}\_3 = \left[\begin{matrix} x\_3 \\ y\_3 \\ 0 \end{matrix}\right],
\vec{N}\_1 = \left[\begin{matrix} X\_1 \\ Y\_1 \\ Z\_1 \end{matrix}\right],
\vec{N}\_2 = \left[\begin{matrix} X\_2 \\ Y\_2 \\ Z\_2 \end{matrix}\right],
\vec{N}\_3 = \left[\begin{matrix} X\_3 \\ Y\_3 \\ Z\_3 \end{matrix}\right],
\vec{N}\_T = \left[\begin{matrix} 0 \\ 0 \\ 1 \end{matrix}\right]$$
The transformed point is then at
$$\left[\begin{matrix}
x\_3 + x ( x\_1 - x\_3) + y ( x\_2 - x\_3 ) + \frac{ z }{ Z\_3 + x (Z\_1 - Z\_3) + y (Z\_2 - Z\_3) }\left( X\_3 + x (X\_1 - X\_3) + y (X\_2 - X\_3) \right) \\
y\_3 + x ( y\_1 - y\_3) + y ( y\_2 - y\_3 ) + \frac{ z }{ Z\_3 + x (Z\_1 - Z\_3) + y (Z\_2 - Z\_3) }\left( Y\_3 + x (Y\_1 - Y\_3) + y (Y\_2 - Y\_3) \right) \\
z \\
\end{matrix}\right]$$
Note the scaling factor $z / \left( Z\_3 + x (Z\_1 - Z\_3) + y (Z\_2 - Z\_3) \right)$; this is what causes the nonlinearity. The divisor is essentially the cosine of the angle between the interpolated unit vector and the plane normal, so the nonlinearity depends on the maximum angle between $\hat{N}\_T$ and $\hat{N}\_1$, $\hat{N}\_2$, and $\hat{N}\_3$.
For this second transformation, you can obviously develop other definitions of "direction", but I don't really see how any of them would work better for polygonal modeling. The fact that this second transform can map lines to curves makes it unsuitable for polygonal modeling, in my opinion.
Therefore, my recommendation is to use the first transformation, which has no such drawbacks, and even the direction vector transformation is well defined in exact form (assuming coplanarity of the vectors as mentioned).
|
4,623,615
|
I would like to solve this problem with just an elementary geometric approach. I already solved it with trigonometry, e.g. using the Bretschneider formula, finding that the angle $ x = 15° $. Any idea?
I edited the question to show how I computed the value of $ x $ using the Bretschneider formula for the area of the quadrilateral $ ABDE $, equating it to the sum of the areas of the triangles $ ABE + EFD + BDF $:
$$\begin{cases}
BC = a \\
AB = a(1/\tan(2x) - 1) \\
BD = a\sqrt{2} \\
AE = AB/\cos(2x+\pi/6) = a(1/\tan(2x) -1)/\cos(2x+\pi/6) \\
ED = a/\cos(x)
\end{cases}
$$
So I solved this equation with Mathematica, and the only solution that fit the problem is $ x = \pi/12 $
$ a^2/2+(a^2(1/\tan(2x) - 1)(1+\tan(x)))/2 + a^2 \tan(x)/2 =
((a\sqrt{2})^2 + \\
(a(1/\tan(2x)- 1)/\cos(2x+\pi/6))^2 - (a/\cos(x))^2 -(a(1/\tan(2x) - 1))^2)/4 \tan(\pi/2 -2x) $
I guess there is a simpler trigonometric approach, but I just wanted to try with that formula.
[](https://i.stack.imgur.com/XLNUp.jpg)
|
2023/01/22
|
[
"https://math.stackexchange.com/questions/4623615",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/967210/"
] |
Hint: Draw a circle on diameter $AD$. Extend $AE$ to meet the circle at $H$. Connect $D$ to $H$; we have:
$\overset{\frown} {DH}=60^o$
Draw the diameter $HJ$ ($J$ is where the diameter starting from $H$ and passing through the center $O$ meets the circle). Draw a perpendicular from $D$ to $HJ$, meeting the circle at $I$; we have:
$\overset{\frown} {DH}=\overset{\frown} {IH}=60^o$
therefore:
$\widehat {HDI}=\frac 12 \overset{\frown} {DH}=30^o$
Also
$\overset{\frown} {AJ}=\overset{\frown} {DH}=60^o$
Therefore:
$\overset{\frown} {AI}=60^o\Rightarrow \widehat{IDA}=30^o=2x $
which gives $x=15^o$
What remains is to show that $F$ is on $DI$. For this, note that the radius of the circle equals the side of the square: if you draw a circle centered at $D$ with radius $r=DC$, it passes through $O$ (the center of the first circle) and of course through vertex $F$ of the square. Since $DC\bot AC$ and also $FG||DC$, it follows that $FG\bot DI$, so $F$ is on $DI$.
[](https://i.stack.imgur.com/p7dSY.png)
|
Here I report the original figure, to which I added the segments $DH$ bisecting the angle $\widehat{FDG}$ and $EK$ orthogonal to $AD$.
[](https://i.stack.imgur.com/s97hL.png)
Let's define
$$
d = AB, \qquad e = EF.
$$
We have, by Pythagorean theorem,
$$
AD = \sqrt{ AC^2 + CD^2 } = \sqrt{ (a+d)^2 + a^2 }.
$$
where $a$ is the known length of the side of the square.
For what follows, it would be useful to set $R=\sqrt{ (a+d)^2 + a^2 }.$
From the similar triangles $DFG$ and $ACD$
$$
\frac{FG}{CD} = \frac{DF}{AC} \quad \implies \quad FG = \frac{a^2}{a+d}.
$$
By Pythagorean theorem
$$
DG = \sqrt{ FG^2 + DF^2 } = \frac{aR}{a+d}.
$$
By the angle bisector theorem
\begin{align}
& \frac{GH}{FH} = \frac{DG}{DF}, \\[2mm]
& \frac{GH+FH}{FH} = \frac{DG+DF}{DF}, \\[2mm]
& \frac{FG}{FH} = \frac{DG+DF}{DF}, \\[2mm]
& FH = R-(a+d).
\end{align}
Note that
$$
EF = FH \quad \implies \quad e = R-(a+d)
$$
By Pythagorean theorem
$$
AE = \sqrt{ AB^2 + BE^2 } = \sqrt{ d^2 + (a+e)^2 }
$$
Being $AEK$ a 30-60-90 triangle, then
\begin{align}
& AK = \frac{\sqrt{3}}{2} AE = \frac{\sqrt{3}}{2} \sqrt{ d^2 + (a+e)^2 } \\
& EK = \frac{1}{2} AE = \frac{1}{2} \sqrt{ d^2 + (a+e)^2 }
\end{align}
We have
$$
DK = AD - AK = R - \frac{\sqrt{3}}{2} \sqrt{ d^2 + (a+e)^2 }
$$
By Pythagorean theorem
$$
DE^2 = EK^2 + DK^2 = \frac{1}{4} [ d^2 + (a+e)^2 ] + \left( R - \frac{\sqrt{3}}{2} \sqrt{ d^2 + (a+e)^2 } \right)^2
$$
but also
$$
DE^2 = DF^2 + EF^2 = a^2 + e^2
$$
Comparing these two expressions for $DE^2$, we have
$$
d^2+(a+e)^2+(a+d)^2-e^2=\sqrt{3}R\sqrt{d^2+(a+e)^2}
$$
Taking into account
\begin{align}
& e^2 = 2(a+d)^2+a^2-2(a+d)R \\
& (a+e)^2 = 2(a+d)^2-2d(R+a)
\end{align}
and squaring both sides
$$
4 \left(aR+d^2\right)^2 = 3 \left[(a+d)^2+a^2\right] \left[2 (a+d)^2+d^2-2 d \left(R+a\right)\right]
$$
by expanding the products and isolating the root $R$
$$
2 d \left(6 a^2+10 a d+3 d^2\right) R=4 a^4+16 a^3 d+32 a^2 d^2+24 a d^3+5 d^4
$$
squaring again both sides and factoring
$$
\left(2 a^2-2 a d-d^2\right) \left(8 a^6+72 a^5 d+188 a^4 d^2+208 a^3 d^3+122 a^2 d^4+50 a d^5+11 d^6\right)=0
$$
The only positive solution of this equation is
$$
d=(\sqrt{3}-1)a
$$
but this means that $AC = \sqrt{3}a$, the triangle $ACD$ is a 30-60-90 triangle, then $\widehat{CAD}=2x=30°$, so $x=15°$.
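A quick numeric check of this conclusion (just the algebra above, in Python): $d = (\sqrt{3} - 1)a$ annihilates the factor $2a^2 - 2ad - d^2$, and with $AC = a + d = \sqrt{3}\,a$ the angle $\widehat{CAD} = 2x = \arctan(CD/AC) = 30°$, so $x = 15°$.

```python
import math

a = 1.0
d = (math.sqrt(3) - 1) * a

# d is a root of the factor 2a^2 - 2ad - d^2 found above.
residual = 2 * a**2 - 2 * a * d - d**2
print(abs(residual) < 1e-12)  # True

# Triangle ACD: legs CD = a and AC = a + d = sqrt(3) a.
two_x = math.degrees(math.atan2(a, a + d))  # angle CAD = 2x, in degrees
print(round(two_x, 9))  # 30.0
```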
|
11,304,645
|
>
> **Possible Duplicate:**
>
> [How to detect textbox change event from javascript](https://stackoverflow.com/questions/5786799/how-to-detect-textbox-change-event-from-javascript)
>
>
>
How can I detect a value inserted into a text field immediately after it is inserted? The `change` event fires only after the field loses focus, and `keypress`, `keyup`, `keydown`, etc. can't help me, because the new values come from a barcode scanner (which doesn't raise these keyboard events). Is the only option to use a timer and periodically check the value in the text field?
|
2012/07/03
|
[
"https://Stackoverflow.com/questions/11304645",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/921193/"
] |
You can use the `oninput` event. It will trigger whenever an input is made in the field.
For more information, here is a quote from this blog: <http://whattheheadsaid.com/2010/09/effectively-detecting-user-input-in-javascript> (written by [@AndyE](https://stackoverflow.com/users/94197/andy-e))
>
> *Talking about onkeydown and timers...*
>
>
> However, this still doesn’t deal with input that doesn’t involve a keyboard. What other types of input? How about context menu actions; cut, paste, undo and redo? Drag and drop? There have been events to handle some other kinds of input change for a while – several browsers support the onpaste, oncut and oncopy events – but they’re non-standard and some vendors are just too stubborn to implement them.
>
>
> Enter the HTML 5 event, oninput.
>
>
>
For a cross-browser jQuery plugin, you can take a look at this one (same author): <http://whattheheadsaid.com/projects/input-special-event>
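A minimal sketch of the `input`-event wiring. The handler body and the `handleInput` name are illustrative (kept as a plain function so it can run outside a browser); `inputcontrol` is the field id used in the other answer:

```javascript
// Plain function holding the reaction logic, so it is testable on its own.
function handleInput(value) {
    return "scanned: " + value;
}

// In the page, attach it once; "input" fires for keyboard, paste,
// and barcode-scanner input alike:
// document.getElementById("inputcontrol").addEventListener("input", function (e) {
//     console.log(handleInput(e.target.value));
// });
```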
|
You mean this <http://jsfiddle.net/SP69J/10/>
The function
```
$(document).ready(function () {
    // Register the interval once; creating a new interval inside check()
    // on every call would spawn an extra timer on each tick.
    window.setInterval(check, 500);
});

var previousvalue = "";

function check() {
    var val = $("#inputcontrol").val();
    if (val !== previousvalue) {
        alert("value changed");
        previousvalue = val;
    }
}
```
Set the interval as per your need.
HTML:
```
<input type="text" id="inputcontrol">
```
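The same polling idea can be sketched without jQuery by separating the comparison logic from the timer. `readValue` below is a stand-in (my naming, not from the answer) for `$("#inputcontrol").val()`, so the logic can be exercised anywhere:

```javascript
// Returns a poll function that fires onChange whenever the read value differs
// from the last one it saw.
function makePoller(readValue, onChange) {
    var previous = readValue();
    return function check() {
        var current = readValue();
        if (current !== previous) {
            previous = current;
            onChange(current);
        }
    };
}

// Example: simulate a scanner writing into the field between two polls.
var field = "";
var seen = [];
var poll = makePoller(function () { return field; },
                      function (v) { seen.push(v); });
poll();                  // nothing changed yet
field = "4006381333931"; // scanner fills the field
poll();                  // seen is now ["4006381333931"]
```

In a real page you would register the returned function once with `window.setInterval(poll, 500)`.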
|