qid: int64 (1 … 74.7M)
question: string (15 … 58.3k chars)
date: string (10 chars)
metadata: list
response_j: string (4 … 30.2k chars)
response_k: string (11 … 36.5k chars)
34,019,654
How can I connect the MySQL database on **my local computer** to an Android application? Is this possible? I developed an Android application and it works perfectly. Now I need to access some data from a MySQL database on my **local computer**. I have been searching for a solution for two weeks; kindly help me solve this. ``` --------------------------- | Order_id | Total_Amt | |-------------------------| | 1025566 | 99.50 | |-------------------------| | 1125426 | 50.00 | |-------------------------| | 1025555 | 150.00 | --------------------------- ``` This is my database table "Bill" on my local PC. I need to access this database from the Android application installed on my mobile: when I enter an order id in the mobile app, I need to get the Total_Amt for that specific Order_id. But the problem is that this table is on my **local computer**, and I can't access the database directly from the mobile app. Is there any solution for this?
2015/12/01
[ "https://Stackoverflow.com/questions/34019654", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2496443/" ]
"current\_observation" does not contain an array; it contains a dictionary: ``` NSDictionary *weather = allData[@"current_observation"]; NSString *currentWeather = nil; NSString *currentTemp = nil; if (weather[@"temperature_string"]) { currentWeather = weather[@"temperature_string"]; } if (weather[@"temp_c"]) { currentTemp = [NSString stringWithFormat:@"%@", weather[@"temp_c"]]; } ``` "temp\_c" is a number, not a string, so you need to convert it to a string.
The explanation in the previous comment doesn't really fit your problem: you have an `NSDictionary`, not an `NSArray`, so there is nothing to iterate. The simple solution is to read the value directly: ``` NSString *weather = [NSString stringWithFormat:@"%@", [allData valueForKeyPath:@"current_observation.temp_c"]]; ```
183,358
My hard disk crashed. I can run Ubuntu from a pendrive by making a live USB of Ubuntu, which I made using Windows 7. In a similar way, I want to run Windows XP from another pen drive (without a hard disk), and I want to make it from Ubuntu (12.04). The resources I have are Ubuntu's live USB, Windows XP and Windows 7 installation disks, and some blank DVDs, but no hard drive. I have very basic knowledge of Linux. Thanks
2012/09/02
[ "https://askubuntu.com/questions/183358", "https://askubuntu.com", "https://askubuntu.com/users/87134/" ]
<http://www.iasptk.com/ubuntu-ppa-repositories/16958-ubuntu-and-winusb-a-simple-tool-that-enable-you-to-create-your-own-usb-stick-windows-installer-from-an-iso-image-or-a-real-dvd> **WinUSB** is a simple tool that enables you to create your own USB-stick Windows installer from an ISO image or a real DVD. This package contains two programs: - WinUSB-gui: a graphical interface which is very easy to use. - winusb: the command-line tool. Supported images: Windows Vista and Seven+ installers for any language and any version (Home, Pro, ...) and Windows PE. **Install WinUSB in Ubuntu** Open the terminal and run the following commands: ``` sudo add-apt-repository ppa:colingille/freshlight sudo apt-get update sudo apt-get install winusb ```
I've tried various approaches to this in the past, and in the end, there are a few possible ways you *can* do it, but nothing that is guaranteed to work. For me, the easiest method is to install [Hiren's BootCD](http://www.hirensbootcd.org/download/) which includes a copy of a "windows live" version with it. Another method would be to create a BartPE install using a VirtualBox Windows VM. With your given resources, you might install your Windows 7 copy in a VM, and use something like [WinBuilder](http://www.instantfundas.com/2010/12/how-to-create-windows-7-live-cddvdusb.html) to create a legal live version of Windows 7. You can then boot into a live Windows 7 version from your USB. For Windows XP, you can use the similar [BartPE tool](http://www.nu2.nu/pebuilder/) to build a live Windows XP install.
10,968,952
How would I apply "an empty text" template for WPF `ComboBox`? ``` <ComboBox ItemsSource="{Binding Messages}" DisplayMemberPath="CroppedMessage" Name="Messages" Width="150" Margin="0,4,4,4"> </ComboBox> ``` I use the above code to display a `ComboBox` with a few messages. Now, when the application starts there's no item chosen by default, and if so, I want to display a custom text on the `ComboBox`. I think I would need some kind of template with a trigger?
2012/06/10
[ "https://Stackoverflow.com/questions/10968952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283055/" ]
Try this... ``` <ComboBox ItemsSource="{Binding Messages}" DisplayMemberPath="CroppedMessage" Name="Messages" Width="150" Margin="0,4,4,4" IsEditable="True" Text="select" /> ```
Try this ``` <ComboBox ItemsSource="{Binding Messages}" DisplayMemberPath="CroppedMessage" Name="Messages" Width="150" Margin="0,4,4,4" IsEditable="True" IsReadOnly="True" Text="Select"/> ```
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
No, since `switch` may only be used with integral types, or a type convertible to an integral type.
Sure, depending on how much work you are willing to do. You can [use a preprocessor and some macros to map strings to integral identifiers](http://timj.testbit.eu/2007/07/14/13072007-switch-on-strings-in-c-and-c/), giving you a syntax like: ``` switch (SOSID_LOOKUP (sample_string)) { case SOSID (hello): printf ("Hello "); break; case SOSID (world): printf ("World! "); break; case 0: default: printf ("unknown "); break; } ``` If you can use C++ instead of C, you can use [litb's](https://stackoverflow.com/users/34509/johannes-schaub-litb) [template-based string switcher](http://bloglitb.blogspot.com/2010/11/fun-with-switch-statements.html), giving you syntax like: ``` sswitch(s) { scase("foo"): { std::cout << "s is foo" << std::endl; break; // could fall-through if we wanted } // supports brace-less style too scase("bar"): std::cout << "s is bar" << std::endl; break; // default must be at the end sdefault(): std::cout << "neither of those!" << std::endl; break; } ```
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
The simplest approach is an if-else chain using `strcmp` to do the comparisons: ``` if (strcmp(str, "String 1") == 0) // do something else if (strcmp(str, "String 2") == 0) // do something else else if (strcmp(str, "String 3") == 0) // do something else ... else printf("%s not found\n", str); ``` A more complicated approach is to use a lookup table, keyed by the string: ``` struct lookup { const char *key; int value; }; struct lookup LookupTable[] = { {"String 1", 1}, {"String 2", 2}, {"String 3", 3}, ... {NULL, -1} }; int lookup(const char *str) { size_t i = 0; while (LookupTable[i].key != NULL) if (strcmp(str, LookupTable[i].key) == 0) return LookupTable[i].value; else i++; return -1; } ... switch (lookup(str)) { case 1: ... case 2: ... case 3: ... ... default: printf("%s not found\n", str); break; } ``` If you want to get really fancy, you could modify the lookup table so that the value is a pointer to a function: ``` void String1Cmd(void) { ... } void String2Cmd(void) { ... } void String3Cmd(void) { ... } ... void BadCmd(void) { printf("Key not found!\n"); } struct lookup { const char *key; void (*cmd)(void); }; struct lookup LookupTable[] = { {"String 1", String1Cmd}, {"String 2", String2Cmd}, {"String 3", String3Cmd}, ... {NULL, BadCmd} }; void (*lookup(const char *str))(void) { size_t i = 0; while (LookupTable[i].key != NULL) if (strcmp(str, LookupTable[i].key) == 0) return LookupTable[i].cmd; else i++; return BadCmd; } ... void (*f)(void) = lookup(str); // retrieve the function for the given string f(); // execute the function ``` In the last example, if `str` == "String 1", then `String1Cmd` will be executed; if `str` is a string not found in the lookup table, then `BadCmd` will be executed. This method is very flexible and, depending on your design, allows you to add behavior at runtime (sort of a plug-in architecture).
However, note that we've just deferred the main problem - branching on a string value - to the `lookup` function, and that the `lookup` function is back to just doing `strcmp` against each value in the table. We could speed that part of the process up by using a [hash table](http://en.wikipedia.org/wiki/Hash_table) or a [tree](http://en.wikipedia.org/wiki/Binary_search_tree) to minimize the number of comparisons. Depending on how many strings you're branching on, that may or may not be worth the additional effort.
Use a [hashtable](http://en.wikipedia.org/wiki/Hash_table) if you have a large number of strings and speed is an issue.
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
The simplest approach is an if-else chain using `strcmp` to do the comparisons: ``` if (strcmp(str, "String 1") == 0) // do something else if (strcmp(str, "String 2") == 0) // do something else else if (strcmp(str, "String 3") == 0) // do something else ... else printf("%s not found\n", str); ``` A more complicated approach is to use a lookup table, keyed by the string: ``` struct lookup { const char *key; int value; }; struct lookup LookupTable[] = { {"String 1", 1}, {"String 2", 2}, {"String 3", 3}, ... {NULL, -1} }; int lookup(const char *str) { size_t i = 0; while (LookupTable[i].key != NULL) if (strcmp(str, LookupTable[i].key) == 0) return LookupTable[i].value; else i++; return -1; } ... switch (lookup(str)) { case 1: ... case 2: ... case 3: ... ... default: printf("%s not found\n", str); break; } ``` If you want to get really fancy, you could modify the lookup table so that the value is a pointer to a function: ``` void String1Cmd(void) { ... } void String2Cmd(void) { ... } void String3Cmd(void) { ... } ... void BadCmd(void) { printf("Key not found!\n"); } struct lookup { const char *key; void (*cmd)(void); }; struct lookup LookupTable[] = { {"String 1", String1Cmd}, {"String 2", String2Cmd}, {"String 3", String3Cmd}, ... {NULL, BadCmd} }; void (*lookup(const char *str))(void) { size_t i = 0; while (LookupTable[i].key != NULL) if (strcmp(str, LookupTable[i].key) == 0) return LookupTable[i].cmd; else i++; return BadCmd; } ... void (*f)(void) = lookup(str); // retrieve the function for the given string f(); // execute the function ``` In the last example, if `str` == "String 1", then `String1Cmd` will be executed; if `str` is a string not found in the lookup table, then `BadCmd` will be executed. This method is very flexible and, depending on your design, allows you to add behavior at runtime (sort of a plug-in architecture).
However, note that we've just deferred the main problem - branching on a string value - to the `lookup` function, and that the `lookup` function is back to just doing `strcmp` against each value in the table. We could speed that part of the process up by using a [hash table](http://en.wikipedia.org/wiki/Hash_table) or a [tree](http://en.wikipedia.org/wiki/Binary_search_tree) to minimize the number of comparisons. Depending on how many strings you're branching on, that may or may not be worth the additional effort.
Sure, depending on how much work you are willing to do. You can [use a preprocessor and some macros to map strings to integral identifiers](http://timj.testbit.eu/2007/07/14/13072007-switch-on-strings-in-c-and-c/), giving you a syntax like: ``` switch (SOSID_LOOKUP (sample_string)) { case SOSID (hello): printf ("Hello "); break; case SOSID (world): printf ("World! "); break; case 0: default: printf ("unknown "); break; } ``` If you can use C++ instead of C, you can use [litb's](https://stackoverflow.com/users/34509/johannes-schaub-litb) [template-based string switcher](http://bloglitb.blogspot.com/2010/11/fun-with-switch-statements.html), giving you syntax like: ``` sswitch(s) { scase("foo"): { std::cout << "s is foo" << std::endl; break; // could fall-through if we wanted } // supports brace-less style too scase("bar"): std::cout << "s is bar" << std::endl; break; // default must be at the end sdefault(): std::cout << "neither of those!" << std::endl; break; } ```
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
If you have the function **lfind** in your library (POSIX, declared in `<search.h>`), you can use it like this: ``` enum { NOTFOUND, HELLO, WORLD, FOO, BAR }; char list[][100] = {"hello", "world", "foo", "bar"}; size_t siz = sizeof *list, num = sizeof list / siz; const char *tosearch = "foo"; char *r = lfind(tosearch, list, &num, siz, (int (*)(const void *, const void *))strcmp); switch ( r ? (r - (char *)list) / siz + 1 : 0 ) { case HELLO: puts("hello"); break; case WORLD: puts("world"); break; case FOO: puts("foo"); break; case BAR: puts("bar"); break; case NOTFOUND: puts("not found"); break; } ``` Each string in the array must occupy a slot of the same size, and the array must hold the characters themselves, not pointers.
Sure, depending on how much work you are willing to do. You can [use a preprocessor and some macros to map strings to integral identifiers](http://timj.testbit.eu/2007/07/14/13072007-switch-on-strings-in-c-and-c/), giving you a syntax like: ``` switch (SOSID_LOOKUP (sample_string)) { case SOSID (hello): printf ("Hello "); break; case SOSID (world): printf ("World! "); break; case 0: default: printf ("unknown "); break; } ``` If you can use C++ instead of C, you can use [litb's](https://stackoverflow.com/users/34509/johannes-schaub-litb) [template-based string switcher](http://bloglitb.blogspot.com/2010/11/fun-with-switch-statements.html), giving you syntax like: ``` sswitch(s) { scase("foo"): { std::cout << "s is foo" << std::endl; break; // could fall-through if we wanted } // supports brace-less style too scase("bar"): std::cout << "s is bar" << std::endl; break; // default must be at the end sdefault(): std::cout << "neither of those!" << std::endl; break; } ```
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
Use a [hashtable](http://en.wikipedia.org/wiki/Hash_table) if you have a large number of strings and speed is an issue.
Yes, and the way is a long `if-else-if` statement. (For reference: [Why switch statement cannot be applied on strings?](https://stackoverflow.com/questions/650162/why-switch-statement-cannot-be-applied-on-strings).) And what do you mean by "a C construct *at all* in C"? I'll edit my post when you answer :)
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
The simplest approach is an if-else chain using `strcmp` to do the comparisons: ``` if (strcmp(str, "String 1") == 0) // do something else if (strcmp(str, "String 2") == 0) // do something else else if (strcmp(str, "String 3") == 0) // do something else ... else printf("%s not found\n", str); ``` A more complicated approach is to use a lookup table, keyed by the string: ``` struct lookup { const char *key; int value; }; struct lookup LookupTable[] = { {"String 1", 1}, {"String 2", 2}, {"String 3", 3}, ... {NULL, -1} }; int lookup(const char *str) { size_t i = 0; while (LookupTable[i].key != NULL) if (strcmp(str, LookupTable[i].key) == 0) return LookupTable[i].value; else i++; return -1; } ... switch (lookup(str)) { case 1: ... case 2: ... case 3: ... ... default: printf("%s not found\n", str); break; } ``` If you want to get really fancy, you could modify the lookup table so that the value is a pointer to a function: ``` void String1Cmd(void) { ... } void String2Cmd(void) { ... } void String3Cmd(void) { ... } ... void BadCmd(void) { printf("Key not found!\n"); } struct lookup { const char *key; void (*cmd)(void); }; struct lookup LookupTable[] = { {"String 1", String1Cmd}, {"String 2", String2Cmd}, {"String 3", String3Cmd}, ... {NULL, BadCmd} }; void (*lookup(const char *str))(void) { size_t i = 0; while (LookupTable[i].key != NULL) if (strcmp(str, LookupTable[i].key) == 0) return LookupTable[i].cmd; else i++; return BadCmd; } ... void (*f)(void) = lookup(str); // retrieve the function for the given string f(); // execute the function ``` In the last example, if `str` == "String 1", then `String1Cmd` will be executed; if `str` is a string not found in the lookup table, then `BadCmd` will be executed. This method is very flexible and, depending on your design, allows you to add behavior at runtime (sort of a plug-in architecture).
However, note that we've just deferred the main problem - branching on a string value - to the `lookup` function, and that the `lookup` function is back to just doing `strcmp` against each value in the table. We could speed that part of the process up by using a [hash table](http://en.wikipedia.org/wiki/Hash_table) or a [tree](http://en.wikipedia.org/wiki/Binary_search_tree) to minimize the number of comparisons. Depending on how many strings you're branching on, that may or may not be worth the additional effort.
If you have the function **lfind** in your library (POSIX, declared in `<search.h>`), you can use it like this: ``` enum { NOTFOUND, HELLO, WORLD, FOO, BAR }; char list[][100] = {"hello", "world", "foo", "bar"}; size_t siz = sizeof *list, num = sizeof list / siz; const char *tosearch = "foo"; char *r = lfind(tosearch, list, &num, siz, (int (*)(const void *, const void *))strcmp); switch ( r ? (r - (char *)list) / siz + 1 : 0 ) { case HELLO: puts("hello"); break; case WORLD: puts("world"); break; case FOO: puts("foo"); break; case BAR: puts("bar"); break; case NOTFOUND: puts("not found"); break; } ``` Each string in the array must occupy a slot of the same size, and the array must hold the characters themselves, not pointers.
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
The simplest approach is an if-else chain using `strcmp` to do the comparisons: ``` if (strcmp(str, "String 1") == 0) // do something else if (strcmp(str, "String 2") == 0) // do something else else if (strcmp(str, "String 3") == 0) // do something else ... else printf("%s not found\n", str); ``` A more complicated approach is to use a lookup table, keyed by the string: ``` struct lookup { const char *key; int value; }; struct lookup LookupTable[] = { {"String 1", 1}, {"String 2", 2}, {"String 3", 3}, ... {NULL, -1} }; int lookup(const char *str) { size_t i = 0; while (LookupTable[i].key != NULL) if (strcmp(str, LookupTable[i].key) == 0) return LookupTable[i].value; else i++; return -1; } ... switch (lookup(str)) { case 1: ... case 2: ... case 3: ... ... default: printf("%s not found\n", str); break; } ``` If you want to get really fancy, you could modify the lookup table so that the value is a pointer to a function: ``` void String1Cmd(void) { ... } void String2Cmd(void) { ... } void String3Cmd(void) { ... } ... void BadCmd(void) { printf("Key not found!\n"); } struct lookup { const char *key; void (*cmd)(void); }; struct lookup LookupTable[] = { {"String 1", String1Cmd}, {"String 2", String2Cmd}, {"String 3", String3Cmd}, ... {NULL, BadCmd} }; void (*lookup(const char *str))(void) { size_t i = 0; while (LookupTable[i].key != NULL) if (strcmp(str, LookupTable[i].key) == 0) return LookupTable[i].cmd; else i++; return BadCmd; } ... void (*f)(void) = lookup(str); // retrieve the function for the given string f(); // execute the function ``` In the last example, if `str` == "String 1", then `String1Cmd` will be executed; if `str` is a string not found in the lookup table, then `BadCmd` will be executed. This method is very flexible and, depending on your design, allows you to add behavior at runtime (sort of a plug-in architecture).
However, note that we've just deferred the main problem - branching on a string value - to the `lookup` function, and that the `lookup` function is back to just doing `strcmp` against each value in the table. We could speed that part of the process up by using a [hash table](http://en.wikipedia.org/wiki/Hash_table) or a [tree](http://en.wikipedia.org/wiki/Binary_search_tree) to minimize the number of comparisons. Depending on how many strings you're branching on, that may or may not be worth the additional effort.
Yes, and the way is a long `if-else-if` statement. (For reference: [Why switch statement cannot be applied on strings?](https://stackoverflow.com/questions/650162/why-switch-statement-cannot-be-applied-on-strings).) And what do you mean by "a C construct *at all* in C"? I'll edit my post when you answer :)
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
No, you have to do it yourself. There are many variants: ``` if (strcmp(str, "toto") == 0) { // ... } else if (strcmp(str, "tata") == 0) { // ... } else { // ... } ``` If the number of strings is expected to grow, then a dispatch table with function pointers ``` struct dispatch_entry { const char *key; void (*action)(void); }; // Make sure it is sorted! struct dispatch_entry dispatch_table[] = { { "tata", &action_tata }, { "toto", &action_toto }, }; ``` coupled with binary search: ``` int dispatch_compare(const void *x, const void *y) { const struct dispatch_entry *xx = x, *yy = y; return strcmp(xx->key, yy->key); } // Return -1 on failure int dispatch(const char *str) { const size_t size = sizeof(struct dispatch_entry); const size_t n = sizeof(dispatch_table) / size; struct dispatch_entry tmp = { str, NULL }; struct dispatch_entry *what = bsearch(&tmp, dispatch_table, n, size, &dispatch_compare); if (what == NULL) return -1; (*what->action)(); return 0; } ``` will do. Hash-table-based approaches are OK as well.
Sure, depending on how much work you are willing to do. You can [use a preprocessor and some macros to map strings to integral identifiers](http://timj.testbit.eu/2007/07/14/13072007-switch-on-strings-in-c-and-c/), giving you a syntax like: ``` switch (SOSID_LOOKUP (sample_string)) { case SOSID (hello): printf ("Hello "); break; case SOSID (world): printf ("World! "); break; case 0: default: printf ("unknown "); break; } ``` If you can use C++ instead of C, you can use [litb's](https://stackoverflow.com/users/34509/johannes-schaub-litb) [template-based string switcher](http://bloglitb.blogspot.com/2010/11/fun-with-switch-statements.html), giving you syntax like: ``` sswitch(s) { scase("foo"): { std::cout << "s is foo" << std::endl; break; // could fall-through if we wanted } // supports brace-less style too scase("bar"): std::cout << "s is bar" << std::endl; break; // default must be at the end sdefault(): std::cout << "neither of those!" << std::endl; break; } ```
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
If you have the function **lfind** in your library (POSIX, declared in `<search.h>`), you can use it like this: ``` enum { NOTFOUND, HELLO, WORLD, FOO, BAR }; char list[][100] = {"hello", "world", "foo", "bar"}; size_t siz = sizeof *list, num = sizeof list / siz; const char *tosearch = "foo"; char *r = lfind(tosearch, list, &num, siz, (int (*)(const void *, const void *))strcmp); switch ( r ? (r - (char *)list) / siz + 1 : 0 ) { case HELLO: puts("hello"); break; case WORLD: puts("world"); break; case FOO: puts("foo"); break; case BAR: puts("bar"); break; case NOTFOUND: puts("not found"); break; } ``` Each string in the array must occupy a slot of the same size, and the array must hold the characters themselves, not pointers.
Use a [hashtable](http://en.wikipedia.org/wiki/Hash_table) if you have a large number of strings and speed is an issue.
6,927,690
Is there a way (in C) to write a construct like the `switch` statement, but for strings? Is there a way to write a C construct *at all* in C? By C construct I mean a statement with braces ... like an `if` statement has braces, and it's a C construct... right?
2011/08/03
[ "https://Stackoverflow.com/questions/6927690", "https://Stackoverflow.com", "https://Stackoverflow.com/users/569183/" ]
No, since `switch` may only be used with integral types, or a type convertible to an integral type.
No, `switch` works on an integer value (floating-point types are not allowed either). You can emulate it with `if`/`else if`/`else` and `strcmp`: ``` if (strcmp(mystr, "this") == 0) { //mystr == "this" } else if (strcmp(mystr, "that") == 0) { //mystr == "that" } else { //mystr is not "this" or "that" } ```
70,811,082
I have this code that transposes a matrix using a loop-tiling strategy. ``` void transposer(int n, int m, double *dst, const double *src) { int blocksize; for (int i = 0; i < n; i += blocksize) { for (int j = 0; j < m; j += blocksize) { // transpose the block beginning at [i,j] for (int k = i; k < i + blocksize; ++k) { for (int l = j; l < j + blocksize; ++l) { dst[k + l*n] = src[l + k*m]; } } } } } ``` I want to optimize this with multi-threading using OpenMP, but I am not sure what to do with so many nested for loops. I thought about just adding `#pragma omp parallel for`, but doesn't this just parallelize the outer loop?
2022/01/22
[ "https://Stackoverflow.com/questions/70811082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14271622/" ]
You can use the `collapse` clause to parallelize over two loops: ``` # pragma omp parallel for collapse(2) for (int i = 0; i < n; i += blocksize) { for (int j = 0; j < m; j += blocksize) { // transpose the block beginning at [i,j] for (int k = i; k < i + blocksize; ++k) { for (int l = j; l < j + blocksize; ++l) { dst[k + l*n] = src[l + k*m]; } } } } ``` As a side note, I think you should swap the two innermost loops. Usually, when you have a choice between writing sequentially and reading sequentially, writing is more important for performance.
> I thought about just adding #pragma omp parallel for but doesn't this just parallelize the outer loop? Yes. To parallelize multiple nested loops, one can use OpenMP's [collapse](https://stackoverflow.com/questions/28482833/understanding-the-collapse-clause-in-openmp) clause. Bear in mind, however, that: * (As pointed out by [Victor Eijkhout](https://stackoverflow.com/a/70813595/1366871).) Even though it does not directly apply to your code snippet, for each additional loop you parallelize you should typically reason about the potential new *race conditions* the parallelization might introduce, *e.g.,* different threads writing concurrently to the same *dst* position. * In some cases, parallelizing nested loops can result in slower execution than parallelizing a single loop, because the concrete implementation of the *collapse* clause uses a more complex heuristic than simple loop parallelization to divide the loop iterations among threads, and that can cost more overhead than it gains. You should benchmark with a single parallel loop, then with two, and so on, and compare the results. ``` void transposer(int n, int m, double *dst, const double *src) { int blocksize; #pragma omp parallel for collapse(...) for (int i = 0; i < n; i += blocksize) for (int j = 0; j < m; j += blocksize) for (int k = i; k < i + blocksize; ++k) for (int l = j; l < j + blocksize; ++l) dst[k + l*n] = src[l + k*m]; } ``` Depending on the number of threads, cores, and matrix sizes, among other factors, running sequentially might actually be faster than the parallel versions. This is especially true because your code is *not very* CPU-intensive (*i.e.,* `dst[k + l*n] = src[l + k*m];`).
70,811,082
I have this code that transposes a matrix using a loop-tiling strategy. ``` void transposer(int n, int m, double *dst, const double *src) { int blocksize; for (int i = 0; i < n; i += blocksize) { for (int j = 0; j < m; j += blocksize) { // transpose the block beginning at [i,j] for (int k = i; k < i + blocksize; ++k) { for (int l = j; l < j + blocksize; ++l) { dst[k + l*n] = src[l + k*m]; } } } } } ``` I want to optimize this with multi-threading using OpenMP, but I am not sure what to do with so many nested for loops. I thought about just adding `#pragma omp parallel for`, but doesn't this just parallelize the outer loop?
2022/01/22
[ "https://Stackoverflow.com/questions/70811082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14271622/" ]
When you try to parallelize a loop nest, you should ask yourself how many levels are conflict-free, as in: every iteration writes to a different location. If two iterations write (potentially) to the same location, you need to 1. use a reduction, 2. use a critical section or other synchronization, 3. decide that this loop is not worth parallelizing, or 4. rewrite your algorithm. In your case, the write location depends on `k,l`: the target index is `k + l*n` with `k < n`, so no two pairs `(k,l)` and `(k',l')` write to the same location. Furthermore, no two inner iterations have the same `k` and `l` values. So all four loops are parallel, and they are perfectly nested, so you can use `collapse(4)`. You could also have drawn this conclusion by considering the algorithm in the abstract: in a matrix transposition each target location is written exactly once, so no matter how you traverse the target data structure, it's completely parallel.
You can use the `collapse` clause to parallelize over two loops: ``` # pragma omp parallel for collapse(2) for (int i = 0; i < n; i += blocksize) { for (int j = 0; j < m; j += blocksize) { // transpose the block beginning at [i,j] for (int k = i; k < i + blocksize; ++k) { for (int l = j; l < j + blocksize; ++l) { dst[k + l*n] = src[l + k*m]; } } } } ``` As a side note, I think you should swap the two innermost loops. Usually, when you have a choice between writing sequentially and reading sequentially, writing is more important for performance.
70,811,082
I have this code that transposes a matrix using a loop tiling strategy. ``` void transposer(int n, int m, double *dst, const double *src) { int blocksize; for (int i = 0; i < n; i += blocksize) { for (int j = 0; j < m; j += blocksize) { // transpose the block beginning at [i,j] for (int k = i; k < i + blocksize; ++k) { for (int l = j; l < j + blocksize; ++l) { dst[k + l*n] = src[l + k*m]; } } } } } ``` I want to optimize this with multi-threading using OpenMP; however, I am not sure what to do with so many nested for loops. I thought about just adding `#pragma omp parallel for`, but doesn't this just parallelize the outer loop?
2022/01/22
[ "https://Stackoverflow.com/questions/70811082", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14271622/" ]
When you try to parallelize a loop nest, you should ask yourself how many levels are conflict free. As in: every iteration writing to a different location. If two iterations write (potentially) to the same location, you need to 1. use a reduction 2. use a critical section or other synchronization 3. decide that this loop is not worth parallelizing, or 4. rewrite your algorithm. In your case, the write location depends on `k,l`. Since `k<n`, the index `k + l*n` is unique for each pair, so there are no pairs `k,l` / `k',l'` that write to the same location. Furthermore, there are no two inner iterations that have the same `k` or `l` value. So all four loops are parallel, and they are perfectly nested, so you can use `collapse(4)`. You could also have drawn this conclusion by considering the algorithm in the abstract: in a matrix transposition each target location is written exactly once, so no matter how you traverse the target data structure, it's completely parallel.
> > I thought about just adding #pragma omp parallel for but doesn't this > just parallelize the outer loop? > > > Yes. To parallelize multiple consecutive loops one can utilize OpenMP's [collapse](https://stackoverflow.com/questions/28482833/understanding-the-collapse-clause-in-openmp) clause. Bear in mind, however, that: * (As pointed out by [Victor Eijkhout](https://stackoverflow.com/a/70813595/1366871).) Even though this does not directly apply to your code snippet, typically, for each new loop to be parallelized one should reason about potential new *race conditions* that this parallelization might have added. For example, different threads writing concurrently into the same *dst* position. * In some cases parallelizing nested loops may result in slower execution times than parallelizing a single loop, since the concrete implementation of the *collapse* clause uses a more complex heuristic (than the simple loop parallelization) to divide the iterations of the loops among threads, which can result in an overhead higher than the gains that it provides. You should try to benchmark with a single parallel loop, then with two, and so on, and compare the results. ``` void transposer(int n, int m, double *dst, const double *src) { int blocksize; #pragma omp parallel for collapse(...) for (int i = 0; i < n; i += blocksize) for (int j = 0; j < m; j += blocksize) for (int k = i; k < i + blocksize; ++k) for (int l = j; l < j + blocksize; ++l) dst[k + l*n] = src[l + k*m]; } ``` Depending upon the number of threads, cores, and size of the matrices, among other factors, it might be that running sequentially would actually be faster than the parallel versions. This is especially true in your code, which is *not very* CPU intensive (*i.e.,* `dst[k + l*n] = src[l + k*m];`).
37,711,447
Relevant code: ``` addDay.setOnClickListener(new View.OnClickListener(){ @Override public void onClick(View v){ ++fragIdCount; FragmentManager fragmentManager = getFragmentManager(); FragmentTransaction fragmentTransaction = fragmentManager.beginTransaction(); String fragString = Integer.toString(fragIdCount); TemplateFragment1 frag2 = new TemplateFragment1(); // creating new fragment frag2 of fragment TemplateFragment1 fragmentTransaction.add(R.id.templateFragmentLayout, frag2, fragString); // "frag2" is where the error is fragmentTransaction.commit(); fragmentManager.executePendingTransactions(); } }); ``` TemplateFragment1 is a fragment. I'm creating a new variable that is a new TemplateFragment1 fragment. When I pass that in (frag2), I get an error saying that it requires a fragment there, but frag2 is a fragment! It's strange because I have this structure elsewhere in my program that works fine. Edit: I did some research and I don't know why I was using the support fragment, but now I know better!
2016/06/08
[ "https://Stackoverflow.com/questions/37711447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6195990/" ]
If you know that the `searchBar` is the first responder, then either: 1. Pass a reference to the `searchBar` to the `UITableViewController` 2. Use KVC: Post a notification that you're loading more results and the keyboard should be hidden, and in your `UISearchController` listen for that notification.
``` [[UIApplication sharedApplication]sendAction:@selector(resignFirstResponder) to:nil from:nil forEvent:nil]; ```
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
The trick is that those two methods belong to different `UITableView` protocols: `tableView:titleForHeaderInSection:` is a **`UITableViewDataSource`** protocol method, where `tableView:viewForHeaderInSection` belongs to **`UITableViewDelegate`**. That means: * If you implement the methods but assign yourself only as the `dataSource` for the `UITableView`, your `tableView:viewForHeaderInSection` implementation will be ignored. * `tableView:viewForHeaderInSection` has a higher priority. If you implement both of the methods and assign yourself as both the `dataSource` and the `delegate` for the `UITableView`, you will return the views for section headers but your `tableView:titleForHeaderInSection:` will be ignored. I have also tried removing `tableView:heightForHeaderInSection:`; it worked fine and didn't seem to affect the procedures above. But the documentation says that it is required for the `tableView:viewForHeaderInSection` to work correctly; so to be safe it is wise to implement this, as well.
The same issue occurred for me, but as I was using **automatic height calculation** from **Xcode 9**, I could not give any explicit height value as mentioned above. After some experimentation I **found a solution**: we have to override this method as ``` -(CGFloat)tableView:(UITableView *)tableView estimatedHeightForHeaderInSection:(NSInteger)section { return 44.0f; } ``` Although I had **checked** both options 1. **Automatic Height Calculation** 2. **Automatic Estimated Height Calculation** from the **storyboard**, as Apple says, I still got this weird error. **Please note**: this error was shown only on **iOS 10**, not on **iOS 11**. Maybe it's a bug in Xcode. Thanks
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
Giving `estimatedSectionHeaderHeight` and `sectionHeaderHeight` values fixed my problem. e.g., `self.tableView.estimatedSectionHeaderHeight = 100 self.tableView.sectionHeaderHeight = UITableViewAutomaticDimension`
It's worth briefly noting that if your implementation of `tableView:heightForHeaderInSection:` returns `UITableViewAutomaticDimension`, then `tableView:viewForHeaderInSection:` will not be called. `UITableViewAutomaticDimension` assumes that a standard `UITableViewHeaderFooterView` will be used that is populated with the delegate method `tableView:titleForHeaderInSection:`. From comments in the `UITableView.h`: > > Returning this value from `tableView:heightForHeaderInSection:` or `tableView:heightForFooterInSection:` results in a height that fits the value returned from `tableView:titleForHeaderInSection:` or `tableView:titleForFooterInSection:` if the title is not nil. > > >
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
You should implement `tableView:heightForHeaderInSection:` and set the height for the header >0. This delegate method goes along with the `viewForHeaderInSection:` method. I hope this helps. ``` - (CGFloat)tableView:(UITableView *)tableView heightForHeaderInSection:(NSInteger)section { return 40; } ```
Sometimes setting `tableview.delegate` or `datasource = nil` in the `viewWillAppear:` or `viewDidAppear:` methods can cause this issue. Make sure not to do this...
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
@rmaddy has misstated the rule, twice: in reality, `tableView:viewForHeaderInSection:` does *not* require that you also implement `tableView:heightForHeaderInSection:`, and also it is perfectly fine to call both `titleForHeader` and `viewForHeader`. I will state the rule correctly just for the record: The rule is simply that `viewForHeader` will not be called unless you somehow give the header a height. You can do this in any combination of three ways: * Implement `tableView:heightForHeaderInSection:`. * Set the table's `sectionHeaderHeight`. * Call `titleForHeader` (this somehow gives the header a default height if it doesn't otherwise have one). If you do none of those things, you'll have no headers and `viewForHeader` won't be called. That's because without a height, the runtime won't know how to resize the view, so it doesn't bother to ask for one.
Sometimes setting `tableview.delegate` or `datasource = nil` in the `viewWillAppear:` or `viewDidAppear:` methods can cause this issue. Make sure not to do this...
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
The use of `tableView:viewForHeaderInSection:` requires that you also implement `tableView:heightForHeaderInSection:`. This should return an appropriate non-zero height for the header. Also make sure you do not also implement the `tableView:titleForHeaderInSection:`. You should only use one or the other (`viewForHeader` or `titleForHeader`).
Here's what I've found (**Swift 4**) (thanks to [this comment](https://stackoverflow.com/a/28488181/428981) on another question). Whether I used titleForHeaderInSection or viewForHeaderInSection, the methods were still getting called when the tableview was scrolled and new cells were being loaded, but any font choices I made for the headerView's textLabel were only appearing on what was initially visible on load, and not as the table was scrolled. The fix was willDisplayHeaderView: ``` func tableView(_ tableView: UITableView, willDisplayHeaderView view: UIView, forSection section: Int) { if let header = view as? UITableViewHeaderFooterView { header.textLabel?.font = UIFont(name: yourFont, size: 42) } } ```
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
The use of `tableView:viewForHeaderInSection:` requires that you also implement `tableView:heightForHeaderInSection:`. This should return an appropriate non-zero height for the header. Also make sure you do not also implement the `tableView:titleForHeaderInSection:`. You should only use one or the other (`viewForHeader` or `titleForHeader`).
@rmaddy has misstated the rule, twice: in reality, `tableView:viewForHeaderInSection:` does *not* require that you also implement `tableView:heightForHeaderInSection:`, and also it is perfectly fine to call both `titleForHeader` and `viewForHeader`. I will state the rule correctly just for the record: The rule is simply that `viewForHeader` will not be called unless you somehow give the header a height. You can do this in any combination of three ways: * Implement `tableView:heightForHeaderInSection:`. * Set the table's `sectionHeaderHeight`. * Call `titleForHeader` (this somehow gives the header a default height if it doesn't otherwise have one). If you do none of those things, you'll have no headers and `viewForHeader` won't be called. That's because without a height, the runtime won't know how to resize the view, so it doesn't bother to ask for one.
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
The trick is that those two methods belong to different `UITableView` protocols: `tableView:titleForHeaderInSection:` is a **`UITableViewDataSource`** protocol method, where `tableView:viewForHeaderInSection` belongs to **`UITableViewDelegate`**. That means: * If you implement the methods but assign yourself only as the `dataSource` for the `UITableView`, your `tableView:viewForHeaderInSection` implementation will be ignored. * `tableView:viewForHeaderInSection` has a higher priority. If you implement both of the methods and assign yourself as both the `dataSource` and the `delegate` for the `UITableView`, you will return the views for section headers but your `tableView:titleForHeaderInSection:` will be ignored. I have also tried removing `tableView:heightForHeaderInSection:`; it worked fine and didn't seem to affect the procedures above. But the documentation says that it is required for the `tableView:viewForHeaderInSection` to work correctly; so to be safe it is wise to implement this, as well.
I've just had an issue with headers not showing for **iOS 7.1**, but working fine with later releases I have tested, explicitly with 8.1 and 8.4. For the exact same code, 7.1 was *not* calling any of the section header delegate methods at all, including: `tableView:heightForHeaderInSection:` and `tableView:viewForHeaderInSection:`. After experimentation, I found that **removing** this line from my `viewDidLoad` made headers re-appear for 7.1 and did not impact other versions I tested: ``` // _Removing_ this line _fixed_ headers on 7.1 self.tableView.estimatedSectionHeaderHeight = 80; ``` … so, there seems to be some kind of conflict there for 7.1, at least.
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
It's worth briefly noting that if your implementation of `tableView:heightForHeaderInSection:` returns `UITableViewAutomaticDimension`, then `tableView:viewForHeaderInSection:` will not be called. `UITableViewAutomaticDimension` assumes that a standard `UITableViewHeaderFooterView` will be used that is populated with the delegate method `tableView:titleForHeaderInSection:`. From comments in the `UITableView.h`: > > Returning this value from `tableView:heightForHeaderInSection:` or `tableView:heightForFooterInSection:` results in a height that fits the value returned from `tableView:titleForHeaderInSection:` or `tableView:titleForFooterInSection:` if the title is not nil. > > >
`viewForHeaderInSection` does not get called for one of two reasons: either you did not set up your `UITableViewDelegate`, or you set it up incorrectly.
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
The use of `tableView:viewForHeaderInSection:` requires that you also implement `tableView:heightForHeaderInSection:`. This should return an appropriate non-zero height for the header. Also make sure you do not also implement the `tableView:titleForHeaderInSection:`. You should only use one or the other (`viewForHeader` or `titleForHeader`).
In my case it was because I did not implement: ``` func tableView(_ tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat ```
15,078,725
I've set up the table view with correct delegate and datasource linkages. The reloadData method calls the datasource and the delegate methods except for `viewForHeaderInSection:`. Why is that so?
2013/02/25
[ "https://Stackoverflow.com/questions/15078725", "https://Stackoverflow.com", "https://Stackoverflow.com/users/962724/" ]
@rmaddy has misstated the rule, twice: in reality, `tableView:viewForHeaderInSection:` does *not* require that you also implement `tableView:heightForHeaderInSection:`, and also it is perfectly fine to call both `titleForHeader` and `viewForHeader`. I will state the rule correctly just for the record: The rule is simply that `viewForHeader` will not be called unless you somehow give the header a height. You can do this in any combination of three ways: * Implement `tableView:heightForHeaderInSection:`. * Set the table's `sectionHeaderHeight`. * Call `titleForHeader` (this somehow gives the header a default height if it doesn't otherwise have one). If you do none of those things, you'll have no headers and `viewForHeader` won't be called. That's because without a height, the runtime won't know how to resize the view, so it doesn't bother to ask for one.
Here's what I've found (**Swift 4**) (thanks to [this comment](https://stackoverflow.com/a/28488181/428981) on another question). Whether I used titleForHeaderInSection or viewForHeaderInSection, the methods were still getting called when the tableview was scrolled and new cells were being loaded, but any font choices I made for the headerView's textLabel were only appearing on what was initially visible on load, and not as the table was scrolled. The fix was willDisplayHeaderView: ``` func tableView(_ tableView: UITableView, willDisplayHeaderView view: UIView, forSection section: Int) { if let header = view as? UITableViewHeaderFooterView { header.textLabel?.font = UIFont(name: yourFont, size: 42) } } ```
20,179,793
What are the differences between a property owned by an association and a property owned by a class in UML? Is there a simple example that helps me understand the differences?
2013/11/24
[ "https://Stackoverflow.com/questions/20179793", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2021253/" ]
There are multiple reasons, each one good enough to justify the choice: 1. You can partially specialize class templates, but you can only fully specialize function templates (at least, so far). Thus, you can provide a replacement for an entire suite of related template arguments with `std::hash<T>` being a class template. Note that partial overloading doesn't help because the hash function would need to be specified somehow as an object, which can't be done with overloaded functions (unless they are accessed via an object, but that's what is differentiated against). 2. The unordered associative containers are parameterized with a static entity (which can also be customized dynamically if the specific type supports that), which is more easily done using class templates. 3. Since the entities used for the hash function are customizable, the choice is between using a type or a function pointer for customization. Function pointers are often hard to inline while inline member functions of a type are trivial to inline, improving the performance for simple functions like computing a simple hash quite a bit.
A function template **cannot** be partially specialized for types, while `std::hash`, as a class template, can be specialized for different types. And in this class-template-based way, you can do some metaprogramming, such as accessing the return type and key type like below: ``` std::hash<X>::argument_type std::hash<X>::result_type ```
141,484
I’ve recently started an LLC with a co-worker and am looking for some advice. We’re currently both employed by a fairly large company that produces software that is specialized and only really relevant in that market. One of the first products we plan on developing for this business was inspired by our current project. The service is relatively complicated and virtually nonexistent publicly; however, the information is publicly available, none of the ideas utilize any proprietary information, and they are entirely backed by private research done outside of work hours. With that being said, we are currently working on a solution for our primary gig while tangentially working on a similar solution to market independently. The code, data, infrastructure, and design are different from our work solution, as we are designing it for general use. Realistically, the only similarity between the two solutions is that we’re solving a similar problem. We don’t plan on leaving our current positions any time soon, so we want some advice on how to approach this opportunity both from a professional and legal point of view. How should we approach informing coworkers and the company at large? Should this be something we announce early on or later after we have a finished product? We also don’t want to make it seem like we’re creating a less than optimal solution for our company with the hopes that they will use our product. The company is very supportive of its employees having side gigs. In fact, another coworker built and released a product very similar to an existing product at our primary gig. That being said, what measures should we take to protect ourselves and our product legally?
2019/08/01
[ "https://workplace.stackexchange.com/questions/141484", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/107522/" ]
You need legal advice here, not workplace-related advice. At the least, make sure you have not signed any non-compete clause and are not violating any terms of your job agreement with your current employer. Seek professional legal assistance if you find the language in the offer letter hard to understand. It is crucial to ensure that you are not in violation, intentionally or unintentionally. Employers generally don't take such acts lightly. On top of it, you are currently on their payroll. Don't discuss this with any co-workers (apart from your partner), and seek legal advice from a professional before taking any step.
Not on the legal aspect of things, but I do not see this working out well with you staying on good terms with your company. If you release the service, they will have essentially lost their edge over their competitors, as their competitors can easily solve the problem your company has been paying you to fix. While you may not be stealing code from the company, you are putting them at a competitive disadvantage by releasing a solution for a problem you would not have known existed if you had not worked for your company. This is one of those situations where it is difficult to have your cake and eat it too. If your goal is to make the service successful, I suggest you seek legal advice and quit your company if your legal counsel gives you the green light. If your goal is to stay on good terms with your company, I suggest you either give up the idea, or offer to give the rights of the project to the company, offering to have them release it as a service.
53,580,952
Recently, we started working to convert an established Django project from a docker stack to Google App Engine. On the way, Google Cloud Build turned out to come in handy. Cloud Build takes care of a few items in preparation for rolling out, in particular the front end part of the application. Now when it comes to Python and Django specific tasks, the obvious choice is to resort to Cloud Build as well. Therefore we tried to follow the pattern Google explains with their official NPM cloud-builder ([here](https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/npm)). The issue we are facing is the following. When building with the official python image, the build steps are set up as follows: ``` steps: [...] - name: 'python:3.7' entrypoint: python3 args: ['-m', 'pip', 'install', '-r', 'requirements.txt'] - name: 'python:3.7' entrypoint: python3 args: ['./manage.py', 'collectstatic', '--noinput'] ``` This works just fine for the first step, to install all requirements. GAE does that when deploying the application as well, but here it's necessary to run *collectstatic* over the repository and installed Django apps before uploading them. While the first step succeeds with the above, the second step fails with the following error: ``` File "./manage.py", line 14, in <module> ) from exc ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment? ``` Is there a better way to approach this situation?
2018/12/02
[ "https://Stackoverflow.com/questions/53580952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1569491/" ]
Without `apiIsReady` set to true, you are creating a loop that adds a new value to the array with each iteration over that same array. ``` function load_all_waitting_inits() { for(var opts of waitting_inits) // new values are being added with each iteration, preventing the loop from ending { init(opts); // parse value of waitting_inits array } } function init(opts) { loadPlayer(); if (apiIsReady) { // always false addVideo(opts.video, opts.playerVars || {}); } else { waitting_inits.push(opts) // here you are adding values infinitely } } ``` Edit ==== Check if the array already includes the object. ``` function init(opts) { loadPlayer(); if (apiIsReady) { addVideo(opts.video, opts.playerVars || {}); } else if(!waitting_inits.includes(opts)) // if array doesn't include opts then push { waitting_inits.push(opts) } } ```
If you remove `apiIsReady = true;` then it creates an infinite loop, and that's why the browser freezes.
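Neither answer above addresses the Cloud Build question itself. A commonly used workaround relies on the fact that `/workspace` persists between build steps: install the packages into a directory under it with pip's `-t` flag, and point `PYTHONPATH` there in later steps. The `/workspace/lib` path below is an arbitrary choice for illustration:

```yaml
steps:
- name: 'python:3.7'
  entrypoint: python3
  # '-t' installs into a directory that survives into later steps
  args: ['-m', 'pip', 'install', '-t', '/workspace/lib', '-r', 'requirements.txt']
- name: 'python:3.7'
  entrypoint: python3
  env: ['PYTHONPATH=/workspace/lib']
  args: ['./manage.py', 'collectstatic', '--noinput']
```

With this, the second step can import Django because the interpreter searches `/workspace/lib` in addition to the container's own site-packages.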
66,490,089
Consider the following code: ``` typedef struct { char byte; } byte_t; typedef struct { char bytes[10]; } blob_t; int f(void) { blob_t a = {0}; *(byte_t *)a.bytes = (byte_t){10}; return a.bytes[0]; } ``` Does this give aliasing problems in the return statement? You do have that `a.bytes` dereferences a type that does not alias the assignment in `patch`, but on the other hand, the `[0]` part dereferences a type that does alias. I can construct a slightly larger example where `gcc -O1 -fstrict-aliasing` does make the function return 0, and I'd like to know if this is a gcc bug, and if not, what I can do to avoid this problem (in my real-life example, the assignment happens in a separate function so that both functions look really innocent in isolation). Here is a longer more complete example for testing: ``` #include <stdio.h> typedef struct { char byte; } byte_t; typedef struct { char bytes[10]; } blob_t; static char *find(char *buf) { for (int i = 0; i < 1; i++) { if (buf[0] == 0) { return buf; }} return 0; } void patch(char *b) { *(byte_t *) b = (byte_t) {10}; } int main(void) { blob_t a = {0}; char *b = find(a.bytes); if (b) { patch(b); } printf("%d\n", a.bytes[0]); } ``` Building with `gcc -O1 -fstrict-aliasing` produces `0`
2021/03/05
[ "https://Stackoverflow.com/questions/66490089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4869375/" ]
The main issue here is that those two structs are not compatible types. And so there can be various problems with alignment and padding. That issue aside, the standard 6.5/7 only allows for this (the "strict aliasing rule"): > > An object shall have its stored value accessed only by an lvalue expression that has one of the following types: > > > * a type compatible with the effective type of the object, > > ... > * an aggregate or union type that includes one of the aforementioned types among its members > > > Looking at `*(byte_t *)a.bytes`, then `a.bytes` has the *effective type* `char[10]`. Each individual member of that array has in turn the effective type `char`. You de-reference that with `byte_t`, which is not a compatible struct type nor does it have a `char[10]` among its members. It does have `char` though. The standard is not exactly clear how to treat an object which effective type is an array. If you read the above part strictly, then your code does indeed violate strict aliasing, because you access a `char[10]` through a struct which doesn't have a `char[10]` member. I'd also be a bit concerned about the compiler padding either struct to meet alignment. Generally, I'd simply advise against doing fishy things like this. If you need type punning, then use a union. And if you wish to use raw binary data, then use `uint8_t` instead of the potentially signed & non-portable `char`.
The error is in `*(byte_t *)a.bytes = (byte_t){10};`. The C spec has a special rule about character types (6.5§7), but that rule only applies when using character type to access any other type, not when using any type to access a character.
66,490,089
Consider the following code: ``` typedef struct { char byte; } byte_t; typedef struct { char bytes[10]; } blob_t; int f(void) { blob_t a = {0}; *(byte_t *)a.bytes = (byte_t){10}; return a.bytes[0]; } ``` Does this give aliasing problems in the return statement? You do have that `a.bytes` dereferences a type that does not alias the assignment in `patch`, but on the other hand, the `[0]` part dereferences a type that does alias. I can construct a slightly larger example where `gcc -O1 -fstrict-aliasing` does make the function return 0, and I'd like to know if this is a gcc bug, and if not, what I can do to avoid this problem (in my real-life example, the assignment happens in a separate function so that both functions look really innocent in isolation). Here is a longer more complete example for testing: ``` #include <stdio.h> typedef struct { char byte; } byte_t; typedef struct { char bytes[10]; } blob_t; static char *find(char *buf) { for (int i = 0; i < 1; i++) { if (buf[0] == 0) { return buf; }} return 0; } void patch(char *b) { *(byte_t *) b = (byte_t) {10}; } int main(void) { blob_t a = {0}; char *b = find(a.bytes); if (b) { patch(b); } printf("%d\n", a.bytes[0]); } ``` Building with `gcc -O1 -fstrict-aliasing` produces `0`
2021/03/05
[ "https://Stackoverflow.com/questions/66490089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4869375/" ]
The error is in `*(byte_t *)a.bytes = (byte_t){10};`. The C spec has a special rule about character types (6.5§7), but that rule only applies when using character type to access any other type, not when using any type to access a character.
According to the Standard, the syntax `array[index]` is shorthand for `*((array)+(index))`. Thus, `p->array[index]` is equivalent to `*((p->array) + (index))`, which uses the address of `p` to compute the address of `p->array`, and then, without regard for `p`'s type, adds `index` (scaled by the size of the array-element type), and then dereferences the resulting pointer to yield an lvalue of the array-element type. Nothing in the wording of the Standard would imply that an access via the resulting lvalue is an access to an lvalue of the underlying structure type. Thus, if the struct member is an array of character type, the constraints of N1570 6.5p7 would allow an lvalue of that form to access storage of any type. The maintainers of some compilers such as gcc, however, appear to view the laxity of the Standard there as a defect. This can be demonstrated via the code: ``` struct s1 { char x[10]; }; struct s2 { char x[10]; }; union s1s2 { struct s1 v1; struct s2 v2; } u; int read_s1_x(struct s1 *p, int i) { return p->x[i]; } void set_s2_x(struct s2 *p, int i, int value) { p->x[i] = value; } __attribute__((noinline)) int test(void *p, int i) { if (read_s1_x(p, 0)) set_s2_x(p, i, 2); return read_s1_x(p, 0); } #include <stdio.h> int main(void) { u.v2.x[0] = 1; int result = test(&u, 0); printf("Result = %d / %d", result, u.v2.x[0]); } ``` The code abides by the constraints in N1570 6.5p7 because all accesses to any portion of `u` are performed using lvalues of character type. Nonetheless, the code generated by gcc will not allow for the possibility that the storage accessed by `(*(struct s1))->x[0]` might also be accessed by `(*(struct s2))->x[i]` despite the fact that both accesses use lvalues of character type.
66,490,089
Consider the following code: ``` typedef struct { char byte; } byte_t; typedef struct { char bytes[10]; } blob_t; int f(void) { blob_t a = {0}; *(byte_t *)a.bytes = (byte_t){10}; return a.bytes[0]; } ``` Does this give aliasing problems in the return statement? You do have that `a.bytes` dereferences a type that does not alias the assignment in `patch`, but on the other hand, the `[0]` part dereferences a type that does alias. I can construct a slightly larger example where `gcc -O1 -fstrict-aliasing` does make the function return 0, and I'd like to know if this is a gcc bug, and if not, what I can do to avoid this problem (in my real-life example, the assignment happens in a separate function so that both functions look really innocent in isolation). Here is a longer more complete example for testing: ``` #include <stdio.h> typedef struct { char byte; } byte_t; typedef struct { char bytes[10]; } blob_t; static char *find(char *buf) { for (int i = 0; i < 1; i++) { if (buf[0] == 0) { return buf; }} return 0; } void patch(char *b) { *(byte_t *) b = (byte_t) {10}; } int main(void) { blob_t a = {0}; char *b = find(a.bytes); if (b) { patch(b); } printf("%d\n", a.bytes[0]); } ``` Building with `gcc -O1 -fstrict-aliasing` produces `0`
2021/03/05
[ "https://Stackoverflow.com/questions/66490089", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4869375/" ]
The main issue here is that those two structs are not compatible types. And so there can be various problems with alignment and padding. That issue aside, the standard 6.5/7 only allows for this (the "strict aliasing rule"): > > An object shall have its stored value accessed only by an lvalue expression that has one of the following types: > > > * a type compatible with the effective type of the object, > > ... > * an aggregate or union type that includes one of the aforementioned types among its members > > > Looking at `*(byte_t *)a.bytes`, then `a.bytes` has the *effective type* `char[10]`. Each individual member of that array has in turn the effective type `char`. You de-reference that with `byte_t`, which is not a compatible struct type nor does it have a `char[10]` among its members. It does have `char` though. The standard is not exactly clear on how to treat an object whose effective type is an array. If you read the above part strictly, then your code does indeed violate strict aliasing, because you access a `char[10]` through a struct which doesn't have a `char[10]` member. I'd also be a bit concerned about the compiler padding either struct to meet alignment. Generally, I'd simply advise against doing fishy things like this. If you need type punning, then use a union. And if you wish to use raw binary data, then use `uint8_t` instead of the potentially signed & non-portable `char`.
According to the Standard, the syntax `array[index]` is shorthand for `*((array)+(index))`. Thus, `p->array[index]` is equivalent to `*((p->array) + (index))`, which uses the address of `p` to compute the address of `p->array`, and then, without regard for `p`'s type, adds `index` (scaled by the size of the array-element type), and then dereferences the resulting pointer to yield an lvalue of the array-element type. Nothing in the wording of the Standard would imply that an access via the resulting lvalue is an access to an lvalue of the underlying structure type. Thus, if the struct member is an array of character type, the constraints of N1570 6.5p7 would allow an lvalue of that form to access storage of any type. The maintainers of some compilers such as gcc, however, appear to view the laxity of the Standard there as a defect. This can be demonstrated via the code: ``` struct s1 { char x[10]; }; struct s2 { char x[10]; }; union s1s2 { struct s1 v1; struct s2 v2; } u; int read_s1_x(struct s1 *p, int i) { return p->x[i]; } void set_s2_x(struct s2 *p, int i, int value) { p->x[i] = value; } __attribute__((noinline)) int test(void *p, int i) { if (read_s1_x(p, 0)) set_s2_x(p, i, 2); return read_s1_x(p, 0); } #include <stdio.h> int main(void) { u.v2.x[0] = 1; int result = test(&u, 0); printf("Result = %d / %d", result, u.v2.x[0]); } ``` The code abides by the constraints in N1570 6.5p7 because all accesses to any portion of `u` are performed using lvalues of character type. Nonetheless, the code generated by gcc will not allow for the possibility that the storage accessed by `(*(struct s1))->x[0]` might also be accessed by `(*(struct s2))->x[i]` despite the fact that both accesses use lvalues of character type.
3,898,485
For instance adding new categories to a list. Once a new category has been added, and successfully submitted to the db, should I use JS to directly append this value to the list or should I query the table again and repopulate the list?
2010/10/10
[ "https://Stackoverflow.com/questions/3898485", "https://Stackoverflow.com", "https://Stackoverflow.com/users/219609/" ]
Yes, what you're doing makes some sense. I find it much more intuitive to have the Window listen to the Game than the other way round. I've also found that Java is much more maintainable if you separate out the different areas of the GUI and pass the Game into each of them through a fine-grained interface. I normally get the GUI elements to listen to changes in the model, and request any interactions to be dealt with. This way round makes for easier unit testing, and you can replace the GUI with a fake for acceptance testing if you don't have a decent automation suite, or even just for logging. Usually splitting up the GUI results in some panels purely listening, and some panels purely interacting. It makes for a really lovely separation of concerns. I represent the panels with their own classes extending `JPanel`, and let the Window pass the Game to them on construction. So for instance, if I have two panels, one of which displays the results and one of which has an "Update" button, I can define two interfaces: `INotifyListenersOfResults` and `IPerformUpdates`. (Please note that I'm making role-based interfaces here using the `IDoThisForYou` pattern; you can call them whatever you like). The Game controller then implements both these interfaces, and the two panels each take the respective interface. The Update interface will have a method called `RequestUpdate` and the Results interface will have `AddResultsListener`. Both these methods then appear on the Game class. Regardless of whether you get the Game to listen to the Window or the Window to the Game, by separating things through interfaces this way you make it much easier to split the Game controller later on and delegate its responsibilities, once things start getting *really* complicated, which they always do!
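The role-based interfaces described above can be sketched roughly as follows. The interface names `INotifyListenersOfResults` and `IPerformUpdates` come from the answer itself; the `ResultsListener` callback and the `Game` body are illustrative assumptions, not a definitive implementation:

```java
// Sketch of the role-based interface split described above.
// Interface names are from the answer; Game's internals are assumed.
import java.util.ArrayList;
import java.util.List;

interface ResultsListener {
    void resultsChanged(String results);
}

// Role seen only by the results panel: it can listen, nothing more.
interface INotifyListenersOfResults {
    void addResultsListener(ResultsListener listener);
}

// Role seen only by the "Update" button panel: it can request, nothing more.
interface IPerformUpdates {
    void requestUpdate();
}

// The Game controller implements both roles; the Window passes each
// panel only the narrow interface it needs, on construction.
class Game implements INotifyListenersOfResults, IPerformUpdates {
    private final List<ResultsListener> listeners = new ArrayList<>();
    private int updates = 0;

    @Override
    public void addResultsListener(ResultsListener listener) {
        listeners.add(listener);
    }

    @Override
    public void requestUpdate() {
        updates++;
        for (ResultsListener l : listeners) {
            l.resultsChanged("update #" + updates);
        }
    }
}
```

A results panel would implement `ResultsListener` and register itself; an update panel would hold only an `IPerformUpdates` reference. Neither panel can reach the rest of the `Game`, which is the separation of concerns the answer is after.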
I think you should implement the Observer design pattern (http://en.wikipedia.org/wiki/Observer\_pattern) without using .NET's events. In my approach, you need to define a couple of interfaces and add a little bit of code. For each different kind of event, create a pair of symmetric interfaces ``` public interface IEventXDispatcher { void Register(IEventXHandler handler); void Unregister(IEventXHandler handler) throws NotSupportedException; } public interface IEventXHandler { void Handle(Object sender, Object args); } ``` X denotes the specific name of the event (Click, KeyPress, EndApplication, WhateverYouWant). Then make your observed class implement IEventXDispatcher and your observer class(es) implement IEventXHandler ``` public class Dispatcher implements IEventXDispatcher, IEventYDispatcher ... { private List<IEventXHandler> _XHandlers; private List<IEventYHandler> _YHandlers; void Register(IEventXHandler handler) { _XHandlers.add(handler); } void Unregister(IEventXHandler handler) throws NotSupportedException { //Simplified code _XHandlers.remove(handler); } private void MyMethod() { [...] for(IEventXHandler handler: _XHandlers) handler.Handle(this, new AnyNeededClass()); [...] } //Same for event Y } ``` All the code is hand-written. I have little experience with Java but I believe this pattern may help you!
212,877
I tried searching but could not find a definitive answer on blender stackexchange. I'm looking to upgrade a rendering rig, and the RAM is a bit old... I know for most of my work I use Cycles, which relies on the GPU for rendering. My question is: is there any benefit to upgrading the normal RAM I use? e.g. If I have 16 GB of 2000 MHz RAM, is it worth going to 32 GB of 4000 MHz RAM? Does Blender only care about amount (GB) or does it take advantage of the MHz as well? I've read that MHz are useful for high-intensity gaming, but what about Blender usage? Does it make sense to leave normal RAM by the wayside and focus all cash towards buying an expensive and powerful GPU with the most VRAM?
2021/02/23
[ "https://blender.stackexchange.com/questions/212877", "https://blender.stackexchange.com", "https://blender.stackexchange.com/users/77364/" ]
There's a setup procedure that happens when you render (subdiv/displacement, BVH calculation, etc.) that is heavily CPU-bound and uses system memory until it settles on the final scene data, which is then copied over to the rendering device. There's a balance to be struck.
RAM matters ( in my experience ) with regard to simulations. For rendering, the biggest bottleneck ( and in many cases direct crashes ) is VRAM. I would look into getting video cards with the most VRAM you can justify paying for. 8GB or higher ( ideally 12+ GB ) for heavy high-poly-count sims. I am on 64GB of RAM and heavy fluid sims are a no-go as my dual-card system is 2x 4GB VRAM cards. I am hoping to get a 3090 to have 24GB of VRAM to solve this issue.
212,877
I tried searching but could not find a definitive answer on blender stackexchange. I'm looking to upgrade a rendering rig, and the RAM is a bit old... I know for most of my work I use Cycles, which relies on the GPU for rendering. My question is: is there any benefit to upgrading the normal RAM I use? e.g. If I have 16 GB of 2000 MHz RAM, is it worth going to 32 GB of 4000 MHz RAM? Does Blender only care about amount (GB) or does it take advantage of the MHz as well? I've read that MHz are useful for high-intensity gaming, but what about Blender usage? Does it make sense to leave normal RAM by the wayside and focus all cash towards buying an expensive and powerful GPU with the most VRAM?
2021/02/23
[ "https://blender.stackexchange.com/questions/212877", "https://blender.stackexchange.com", "https://blender.stackexchange.com/users/77364/" ]
There's a setup procedure that happens when you render (subdiv/displacement, BVH calculation, etc.) that is heavily CPU-bound and uses system memory until it settles on the final scene data, which is then copied over to the rendering device. There's a balance to be struck.
tl;dr = Quantity matters a lot; speed matters almost not at all. Blender does "care" about system RAM quite a lot. Until the point that you're actually rendering, the whole scene resides in system memory. The amount of system memory you have will limit what you can make. Also, although Cycles is 100x faster on GPU, it can run on CPU. **Regarding quantity:** Everything you make resides in system memory. If you have an enormous model that you want to decimate, you'll have to load the whole thing at some point. If you do any video editing, the more RAM you have will mean that more of your video can be cached, which helps with playback. Simulations tend to require a lot of RAM; more RAM means more options available to you. As I alluded to above, although Cycles runs much faster on GPU, it will run on CPU. I have encountered cases where the scene simply couldn't be simplified, and I HAD to render it, but it was too big for GPU vRAM. Switch Cycles to CPU rendering, and although it took FOREVER, it did work. The CPU could use system RAM, and even virtual RAM to render something that occupied 40GB of memory. I.e. having enough system RAM might mean the difference between being able to render very slowly versus not being able to render at all. **Regarding speed:** All software will benefit from faster RAM. This is a very "low level" attribute, which means that none of the software on your system needs to be aware of nor specially coded for faster RAM. It will just be faster. It's worth at least saying out loud that 10yo DDR2 will make your system noticeably less performant than the latest DDR5. I'm assuming that we're talking about choosing from among current-generation specifications. That having been said, the difference between "slow" DDR4 and "fast" DDR4 is something I've found to be negligible and unnoticeable except for the most time-sensitive operations. 
(I mention DDR4; higher specifications are still new enough to warrant a lot of scrutiny; you'll want to do your own research at that point). I would recommend looking up benchmarks for the RAM you're considering, and compare their results. Blender Cycles is a common enough benchmark that you may even be able to find examples comparing various RAM speeds in the exact implementation you care about. My investigation shows that it's really not enough of a difference to notice, except for highly demanding real-time applications (games).
212,877
I tried searching but could not find a definitive answer on blender stackexchange. I'm looking to upgrade a rendering rig, and the RAM is a bit old... I know for most of my work I use Cycles, which relies on the GPU for rendering. My question is: is there any benefit to upgrading the normal RAM I use? e.g. If I have 16 GB of 2000 MHz RAM, is it worth going to 32 GB of 4000 MHz RAM? Does Blender only care about amount (GB) or does it take advantage of the MHz as well? I've read that MHz are useful for high-intensity gaming, but what about Blender usage? Does it make sense to leave normal RAM by the wayside and focus all cash towards buying an expensive and powerful GPU with the most VRAM?
2021/02/23
[ "https://blender.stackexchange.com/questions/212877", "https://blender.stackexchange.com", "https://blender.stackexchange.com/users/77364/" ]
tl;dr = Quantity matters a lot; speed matters almost not at all. Blender does "care" about system RAM quite a lot. Until the point that you're actually rendering, the whole scene resides in system memory. The amount of system memory you have will limit what you can make. Also, although Cycles is 100x faster on GPU, it can run on CPU. **Regarding quantity:** Everything you make resides in system memory. If you have an enormous model that you want to decimate, you'll have to load the whole thing at some point. If you do any video editing, the more RAM you have will mean that more of your video can be cached, which helps with playback. Simulations tend to require a lot of RAM; more RAM means more options available to you. As I alluded to above, although Cycles runs much faster on GPU, it will run on CPU. I have encountered cases where the scene simply couldn't be simplified, and I HAD to render it, but it was too big for GPU vRAM. Switch Cycles to CPU rendering, and although it took FOREVER, it did work. The CPU could use system RAM, and even virtual RAM to render something that occupied 40GB of memory. I.e. having enough system RAM might mean the difference between being able to render very slowly versus not being able to render at all. **Regarding speed:** All software will benefit from faster RAM. This is a very "low level" attribute, which means that none of the software on your system needs to be aware of nor specially coded for faster RAM. It will just be faster. It's worth at least saying out loud that 10yo DDR2 will make your system noticeably less performant than the latest DDR5. I'm assuming that we're talking about choosing from among current-generation specifications. That having been said, the difference between "slow" DDR4 and "fast" DDR4 is something I've found to be negligible and unnoticeable except for the most time-sensitive operations. 
(I mention DDR4; higher specifications are still new enough to warrant a lot of scrutiny; you'll want to do your own research at that point). I would recommend looking up benchmarks for the RAM you're considering, and compare their results. Blender Cycles is a common enough benchmark that you may even be able to find examples comparing various RAM speeds in the exact implementation you care about. My investigation shows that it's really not enough of a difference to notice, except for highly demanding real-time applications (games).
RAM matters ( in my experience ) with regard to simulations. For rendering, the biggest bottleneck ( and in many cases direct crashes ) is VRAM. I would look into getting video cards with the most VRAM you can justify paying for. 8GB or higher ( ideally 12+ GB ) for heavy high-poly-count sims. I am on 64GB of RAM and heavy fluid sims are a no-go as my dual-card system is 2x 4GB VRAM cards. I am hoping to get a 3090 to have 24GB of VRAM to solve this issue.
52,361,501
I have two divs outside a wrapper div, but I want to push both divs inside the wrapper. Here is my code ``` <div class="eg-section-separetor"></div> <div class="eg-parallax-scene"></div> <div class="eg-section-separetor-wrapper eg-section-parallax"></div> function pushInsideSection(){ "use strict"; jQuery(".eg-section-separetor").each(function(){ try{ var elem=jQuery(this).next(".eg-section-separetor-wrapper")[0]; jQuery(this).prependTo(elem); }catch(e){ } }); jQuery(".eg-parallax-scene").each(function(){ try{ var elem=jQuery(this).next(".eg-section-parallax")[0]; jQuery(this).prependTo(elem); }catch(e){ } }); } jQuery(document).ready(function($) { pushInsideSection(); }); ``` The problem is that jQuery pushes the second div, which has the class "eg-parallax-scene", but not the first div, which has the class "eg-section-separetor". Any help!
2018/09/17
[ "https://Stackoverflow.com/questions/52361501", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4767484/" ]
It's because `.next()` works on the immediate next element, while `eg-section-separetor-wrapper` is not the immediate element after `eg-section-separetor`. So use [.siblings()](https://api.jquery.com/siblings/) ``` var elem=jQuery(this).siblings(".eg-section-separetor-wrapper")[0]; ``` Or use `.next()` twice:- ``` var elem=jQuery(this).next().next(".eg-section-separetor-wrapper")[0]; ```
You can use `append` to put elements inside of another. ```js function pushInsideSection() { var wrapper = $('.eg-section-separetor-wrapper'); var sep = $('.eg-section-separetor'); var scene = $('.eg-parallax-scene'); wrapper.append(sep, scene); } $('#nest').click(function($) { pushInsideSection(); }); // To run on page load: //$(function() { // pushInsideSection(); //}); ``` ```css .eg-section-separetor-wrapper { min-height: 50px; background: grey; margin: 10px; padding: 10px; } .eg-section-separetor { background: white; border: solid 1px black; } .eg-parallax-scene { background: white; border: solid 1px black; } ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="eg-parallax-scene">this is the scene</div> <div class="eg-section-separetor">This is the separator. it's ok if its in the wrong order in html. </br>You can append them in any order you want.</div> <div class="eg-section-separetor-wrapper eg-section-parallax"> </div> <button id="nest">click me to nest the divs</button> ```
8,995
I want to get a really nice photo album for myself, without going through some middleman. Why do so many companies require that I be a "professional photographer" before they'll talk to me? Example of several I've found while searching: * <http://asukabook.com/prices.html> * <https://www.pictobooks.com/html/sign.php?menu=signup> * <http://www.lifetimealbums.com/contacts> Is this sort of a buddy thing, where they want to make sure the middlemen don't get undercut? Like maybe why would I pay my photographer $500 for an album, when I could buy it direct for $100?
2011/02/19
[ "https://photo.stackexchange.com/questions/8995", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/36/" ]
Two reasons: 1. They don't have the time/patience to deal with amateurs/brides. 2. They don't want to get in the middle of any copyright issues by dealing with images from someone who may not actually own the rights to have them assembled into a book. I honestly suspect #1 is the prime reason. They price their services with the assumption that they have a very minimal amount of back and forth so they can get the images and design choices in a bundle, do the work and hand it back. They don't have the time budgeted to go back and forth with every random consumer that wants their one album. They'll happily invest the single time effort to train a pro how to submit work to them, knowing that once they've done it they'll be able to get album after album in from the pro in a consistent format and be able to turn it around in a set amount of effort that they built their pricing around. Pros that prove to not meet that expectation will get dropped. Either explicitly and directly, or just by having their work de-prioritized to the point they get the hint and go somewhere else. (disclaimer, I'm talking from a general feeling in the industry, not about those three in particular.)
There is a separate issue that [cabbey](https://photo.stackexchange.com/questions/8995/why-do-nice-photo-albums-require-professional-photographers-to-buy/8997#8997) didn't touch on, and it is a common one in services to business -- hiding prices from consumers helps their business customers in a big way. > > If Company X will create an album for me at price Y, then why are you charging me so damned many Ws for the album? > > > All your customer can see is the cost to have the album produced. You might be able to get away with shipping and sales tax, but forget about cropping, resizing, retouching, colour correction, business administration time associated with the album or any of the other expenses you may have -- never mind the markup. With weddings in particular, the photographer's customer sees him/her snapping pictures for a couple of hours (even if you're there all day and well into the evening, it only registers as "a couple of hours") -- nobody sees the week of work that goes into creating the package that's delivered at the end of it all.
8,995
I want to get a really nice photo album for myself, without going through some middleman. Why do so many companies require that I be a "professional photographer" before they'll talk to me? Example of several I've found while searching: * <http://asukabook.com/prices.html> * <https://www.pictobooks.com/html/sign.php?menu=signup> * <http://www.lifetimealbums.com/contacts> Is this sort of a buddy thing, where they want to make sure the middlemen don't get undercut? Like maybe why would I pay my photographer $500 for an album, when I could buy it direct for $100?
2011/02/19
[ "https://photo.stackexchange.com/questions/8995", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/36/" ]
Two reasons: 1. They don't have the time/patience to deal with amateurs/brides. 2. They don't want to get in the middle of any copyright issues by dealing with images from someone who may not actually own the rights to have them assembled into a book. I honestly suspect #1 is the prime reason. They price their services with the assumption that they have a very minimal amount of back and forth so they can get the images and design choices in a bundle, do the work and hand it back. They don't have the time budgeted to go back and forth with every random consumer that wants their one album. They'll happily invest the single time effort to train a pro how to submit work to them, knowing that once they've done it they'll be able to get album after album in from the pro in a consistent format and be able to turn it around in a set amount of effort that they built their pricing around. Pros that prove to not meet that expectation will get dropped. Either explicitly and directly, or just by having their work de-prioritized to the point they get the hint and go somewhere else. (disclaimer, I'm talking from a general feeling in the industry, not about those three in particular.)
I go through this question quite a bit at work (I am not a pro photographer), where we sell business to business, not to the end consumer. There are additional challenges, as Cabbey outlined, with doing work directly with the end consumer, the main one being the amount of training in the process that is needed. Having said that, I can say that you can find some solutions that are still cost effective with a little looking around. Your local camera store, not a box store but a real camera store with accessories and tools for photographers, can often provide some of these services. There are also online solutions that have some good resources for printing albums. My personal favorite is SmugMug as they have the tools to assist with the process as well as having options for what provider you want to use. You can check it out at <http://www.smugmug.com/prints/catalog/AB#More> Disclaimer - I am not a SmugMug employee, just a very satisfied customer who knows a few of the employees. If you would like a referral code, just let me know and I will share mine.
8,995
I want to get a really nice photo album for myself, without going through some middleman. Why do so many companies require that I be a "professional photographer" before they'll talk to me? Example of several I've found while searching: * <http://asukabook.com/prices.html> * <https://www.pictobooks.com/html/sign.php?menu=signup> * <http://www.lifetimealbums.com/contacts> Is this sort of a buddy thing, where they want to make sure the middlemen don't get undercut? Like maybe why would I pay my photographer $500 for an album, when I could buy it direct for $100?
2011/02/19
[ "https://photo.stackexchange.com/questions/8995", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/36/" ]
Two reasons: 1. They don't have the time/patience to deal with amateurs/brides. 2. They don't want to get in the middle of any copyright issues by dealing with images from someone who may not actually own the rights to have them assembled into a book. I honestly suspect #1 is the prime reason. They price their services with the assumption that they have a very minimal amount of back and forth so they can get the images and design choices in a bundle, do the work and hand it back. They don't have the time budgeted to go back and forth with every random consumer that wants their one album. They'll happily invest the single time effort to train a pro how to submit work to them, knowing that once they've done it they'll be able to get album after album in from the pro in a consistent format and be able to turn it around in a set amount of effort that they built their pricing around. Pros that prove to not meet that expectation will get dropped. Either explicitly and directly, or just by having their work de-prioritized to the point they get the hint and go somewhere else. (disclaimer, I'm talking from a general feeling in the industry, not about those three in particular.)
You may think the pro markup is astronomical, but in reality, the actual cost of these types of high-end books that are only available to pro photographers typically *begins* at $350. And that's only for a 10-page, simple-cover 10x10 book. When you start changing sizes, cover options such as metal or photo covers, gilding, hinged pages, substrates, imprinting and more, the out-of-pocket cost to the photographer can easily reach $700-$800 or more. So don't automatically assume just because it's from the actual manufacturer that it will be cheap.
8,995
I want to get a really nice photo album for myself, without going through some middleman. Why do so many companies require that I be a "professional photographer" before they'll talk to me? Example of several I've found while searching: * <http://asukabook.com/prices.html> * <https://www.pictobooks.com/html/sign.php?menu=signup> * <http://www.lifetimealbums.com/contacts> Is this sort of a buddy thing, where they want to make sure the middlemen don't get undercut? Like maybe why would I pay my photographer $500 for an album, when I could buy it direct for $100?
2011/02/19
[ "https://photo.stackexchange.com/questions/8995", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/36/" ]
Two reasons: 1. They don't have the time/patience to deal with amateurs/brides. 2. They don't want to get in the middle of any copyright issues by dealing with images from someone who may not actually own the rights to have them assembled into a book. I honestly suspect #1 is the prime reason. They price their services with the assumption that they have a very minimal amount of back and forth so they can get the images and design choices in a bundle, do the work and hand it back. They don't have the time budgeted to go back and forth with every random consumer that wants their one album. They'll happily invest the single time effort to train a pro how to submit work to them, knowing that once they've done it they'll be able to get album after album in from the pro in a consistent format and be able to turn it around in a set amount of effort that they built their pricing around. Pros that prove to not meet that expectation will get dropped. Either explicitly and directly, or just by having their work de-prioritized to the point they get the hint and go somewhere else. (disclaimer, I'm talking from a general feeling in the industry, not about those three in particular.)
This is just business, and is not unique to photography. I'm an electrical engineer and have a consulting company that also sells small gizmos on the side that I designed. I won't sell you certain things directly either. The broad reasons are the same regardless of industry. For example, we just got another lot of 1000 made of one particular gizmo. After adding up all the costs, these cost us $3.95 each to have fully built, tested, delivered to our office, and that includes the cost of managing the build process. We sell them to one reseller for about $10.50 in lots of 10s to 100 or so and they resell individual units for $14.95. That's actually a very small reseller markup, which is because they take them on consignment. In other words, they don't pay for stock, but pay me once a quarter for units sold, usually with a order for what they think they need to replenish stock for the next quarter. There is no way I'd sell you one, even for the $14.95 full street price. The $11 profit just isn't worth the hassle of putting it into a box, shipping it, accounting for and charging you the postage, etc. If we sold 1000s a quarter we could probably afford a lackey to pack and ship boxes, but the volumes don't justify that. However, the real reason is you'll expect my attention and support that will greatly outweigh the $11 profit. In another case, we sell units that cost us $17 to produce to resellers for $36, which they sell for around $48 while we publish a list price of $59. Even if you wanted to pay me $50 for one I'd refuse for two reasons. First, you're going to suck up my time. Just one phone call and whatever profit I made is blown on lost consulting time. Second, I don't want to undercut the resellers. I need to let them make a profit selling my stuff, else they won't be selling it. If I regularly take business away from them, then they aren't going to make money and will stop selling my product. 
The $12/unit they cost me is well worth the publicity, front line support, and fulfillment they provide. I expect the business logic behind the photo albums to be basically the same. They are not set up to deal with and support the end customer, and need to let their resellers (the professional photographers) make a buck on their stuff, else those resellers will go resell something else. It's just basic business.
8,995
I want to get a really nice photo album for myself, without going through some middleman. Why do so many companies require that I be a "professional photographer" before they'll talk to me? Example of several I've found while searching: * <http://asukabook.com/prices.html> * <https://www.pictobooks.com/html/sign.php?menu=signup> * <http://www.lifetimealbums.com/contacts> Is this sort of a buddy thing, where they want to make sure the middlemen don't get undercut? Like maybe why would I pay my photographer $500 for an album, when I could buy it direct for $100?
2011/02/19
[ "https://photo.stackexchange.com/questions/8995", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/36/" ]
There is a separate issue that [cabbey](https://photo.stackexchange.com/questions/8995/why-do-nice-photo-albums-require-professional-photographers-to-buy/8997#8997) didn't touch on, and it is a common one in services to business -- hiding prices from consumers helps their business customers in a big way. > > If Company X will create an album for me at price Y, then why are you charging me so damned many Ws for the album? > > > All your customer can see is the cost to have the album produced. You might be able to get away with shipping and sales tax, but forget about cropping, resizing, retouching, colour correction, business administration time associated with the album or any of the other expenses you may have -- never mind the markup. With weddings in particular, the photographer's customer sees him/her snapping pictures for a couple of hours (even if you're there all day and well into the evening, it only registers as "a couple of hours") -- nobody sees the week of work that goes into creating the package that's delivered at the end of it all.
I go through this question quite a bit at work (I am not a pro photographer), where we sell business to business not to end consumer. There are additional challenges as Cabbey outlined with doing work directly with the end consumer. The main one being the amount of training in the process that is needed. Having said that I can say that you can find some solutions that are still cost effective with a little looking around. Going to your local camera store, not a box store but a real camera store with accessories and tools for photographers, they can often provide some of these services. There are also online solutions that have some good resources for printing albums. My personal favorite is SmugMug as they have the tools to assist with the process as well as having options for what provider you want to use. You can check it out at <http://www.smugmug.com/prints/catalog/AB#More> Disclaimer - I am not a SmugMug employee just a very satisfied customer that knows a few of the employees. If you would like a referral code, just let me know I will share mine.
8,995
I want to get a really nice photo album for myself, without going through some middleman. Why do so many companies require that I be a "professional photographer" before they'll talk to me? Example of several I've found while searching: * <http://asukabook.com/prices.html> * <https://www.pictobooks.com/html/sign.php?menu=signup> * <http://www.lifetimealbums.com/contacts> Is this sort of a buddy thing, where they want to make sure the middlemen don't get undercut? Like maybe why would I pay my photographer $500 for an album, when I could buy it direct for $100?
2011/02/19
[ "https://photo.stackexchange.com/questions/8995", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/36/" ]
There is a separate issue that [cabbey](https://photo.stackexchange.com/questions/8995/why-do-nice-photo-albums-require-professional-photographers-to-buy/8997#8997) didn't touch on, and it is a common one in services to business -- hiding prices from consumers helps their business customers in a big way. > > If Company X will create an album for me at price Y, then why are you charging me so damned many Ws for the album? > > > All your customer can see is the cost to have the album produced. You might be able to get away with shipping and sales tax, but forget about cropping, resizing, retouching, colour correction, business administration time associated with the album or any of the other expenses you may have -- never mind the markup. With weddings in particular, the photographer's customer sees him/her snapping pictures for a couple of hours (even if you're there all day and well into the evening, it only registers as "a couple of hours") -- nobody sees the week of work that goes into creating the package that's delivered at the end of it all.
You may think the pro markup is astronomical but in reality, the actual cost of these types of high end books that are only available to pro photographers typically *begins* at $350. And that's only for a 10 page, simple cover 10x10 book. When you start changing sizes, cover options such as metal or photo covers, gilding, hinged pages, substrates, imprinting and more, the out of pocket cost to the photographer can easily reach $700-$800 or more. So don't automatically assume just because it's from the actual manufacturer that it will be cheap.
8,995
I want to get a really nice photo album for myself, without going through some middleman. Why do so many companies require that I be a "professional photographer" before they'll talk to me? Example of several I've found while searching: * <http://asukabook.com/prices.html> * <https://www.pictobooks.com/html/sign.php?menu=signup> * <http://www.lifetimealbums.com/contacts> Is this sort of a buddy thing, where they want to make sure the middlemen don't get undercut? Like maybe why would I pay my photographer $500 for an album, when I could buy it direct for $100?
2011/02/19
[ "https://photo.stackexchange.com/questions/8995", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/36/" ]
There is a separate issue that [cabbey](https://photo.stackexchange.com/questions/8995/why-do-nice-photo-albums-require-professional-photographers-to-buy/8997#8997) didn't touch on, and it is a common one in services to business -- hiding prices from consumers helps their business customers in a big way. > > If Company X will create an album for me at price Y, then why are you charging me so damned many Ws for the album? > > > All your customer can see is the cost to have the album produced. You might be able to get away with shipping and sales tax, but forget about cropping, resizing, retouching, colour correction, business administration time associated with the album or any of the other expenses you may have -- never mind the markup. With weddings in particular, the photographer's customer sees him/her snapping pictures for a couple of hours (even if you're there all day and well into the evening, it only registers as "a couple of hours") -- nobody sees the week of work that goes into creating the package that's delivered at the end of it all.
This is just business, and is not unique to photography. I'm an electrical engineer and have a consulting company that also sells small gizmos on the side that I designed. I won't sell you certain things directly either. The broad reasons are the same regardless of industry. For example, we just got another lot of 1000 made of one particular gizmo. After adding up all the costs, these cost us $3.95 each to have fully built, tested, delivered to our office, and includes the cost of managing the build process. We sell them to one reseller for about $10.50 in lots of 10s to 100 or so and they resell individual units for $14.95. That's actually a very small reseller markup, which is because they take them on consignment. In other words, they don't pay for stock, but pay me once a quarter for units sold, usually with an order for what they think they need to replenish stock for the next quarter. There is no way I'd sell you one, even for the $14.95 full street price. The $11 profit just isn't worth the hassle of putting it into a box, shipping it, accounting for and charging you the postage, etc. If we sold 1000s a quarter we could probably afford a lackey to pack and ship boxes, but the volumes don't justify that. However, the real reason is you'll expect my attention and support that will greatly outweigh the $11 profit. In another case, we sell units that cost us $17 to produce to resellers for $36, which they sell for around $48 while we publish a list price of $59. Even if you wanted to pay me $50 for one I'd refuse for two reasons. First, you're going to suck up my time. Just one phone call and whatever profit I made is blown on lost consulting time. Second, I don't want to undercut the resellers. I need to let them make a profit selling my stuff, else they won't be selling it. If I regularly take business away from them, then they aren't going to make money and will stop selling my product.
The $12/unit they cost me is well worth the publicity, front line support, and fulfillment they provide. I expect the business logic behind the photo albums to be basically the same. They are not set up to deal with and support the end customer, and need to let their resellers (the professional photographers) make a buck on their stuff, else those resellers will go resell something else. It's just basic business.
71,976,778
I'm trying to make a textbox which whenever a textchanged event happens, the function will use the textbox text to search through the database to find the right records. For example: I have 3 rows (name, phone number, birthday - the format is year - month - day - in SQL Server): | Name | PhoneNumber | Birthday | | --- | --- | --- | | John | 482 | 2000-7-9 | | Dennis | 912 | 2001-12-9 | | Mike | 123 | 2000-4-1 | If the textbox.text is `9` or `9/`, I want to return the 2 rows for John and Dennis. | Name | PhoneNumber | Birthday | | --- | --- | --- | | John | 482 | 2000-7-9 | | Dennis | 912 | 2001-12-9 | It is easy to search if I enter the date with `yyyy-MM-dd` format to the textbox, the query will be: ``` SELECT * FROM database WHERE birthday LIKE %textbox.text% ``` I tried and it worked perfectly, but only when the date in textbox is following the `yyyy-MM-dd` format. Is there any way for it to work with the `dd/month/yyyy` format?
2022/04/23
[ "https://Stackoverflow.com/questions/71976778", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18676736/" ]
You have to use a Series as `x-axis` not a DataFrame: ``` pl.plot(x_train['YearsExperience'], model.predict(x_train)) # OR pl.plot(x_train.squeeze(), model.predict(x_train)) ``` [![enter image description here](https://i.stack.imgur.com/0a5Lh.png)](https://i.stack.imgur.com/0a5Lh.png)
This solved my problem, though I don't know why: ``` x = df.iloc[:, :-1].values y = df.iloc[:, 1].values ``` Here is the full code: ``` import pandas as pd from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split df = pd.read_csv('Salary_Data.csv') x = df.iloc[:, :-1].values y = df.iloc[:, 1].values x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.2, random_state=0) model = LinearRegression().fit(x_train,y_train) import matplotlib.pyplot as plt pl = plt pl.scatter(x_train, y_train) pl.plot(x_train, model.predict(x_train)) pl.show() ```
71,976,778
I'm trying to make a textbox which whenever a textchanged event happens, the function will use the textbox text to search through the database to find the right records. For example: I have 3 rows (name, phone number, birthday - the format is year - month - day - in SQL Server): | Name | PhoneNumber | Birthday | | --- | --- | --- | | John | 482 | 2000-7-9 | | Dennis | 912 | 2001-12-9 | | Mike | 123 | 2000-4-1 | If the textbox.text is `9` or `9/`, I want to return the 2 rows for John and Dennis. | Name | PhoneNumber | Birthday | | --- | --- | --- | | John | 482 | 2000-7-9 | | Dennis | 912 | 2001-12-9 | It is easy to search if I enter the date with `yyyy-MM-dd` format to the textbox, the query will be: ``` SELECT * FROM database WHERE birthday LIKE %textbox.text% ``` I tried and it worked perfectly, but only when the date in textbox is following the `yyyy-MM-dd` format. Is there any way for it to work with the `dd/month/yyyy` format?
2022/04/23
[ "https://Stackoverflow.com/questions/71976778", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18676736/" ]
This solved my problem, though I don't know why: ``` x = df.iloc[:, :-1].values y = df.iloc[:, 1].values ``` Here is the full code: ``` import pandas as pd from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split df = pd.read_csv('Salary_Data.csv') x = df.iloc[:, :-1].values y = df.iloc[:, 1].values x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.2, random_state=0) model = LinearRegression().fit(x_train,y_train) import matplotlib.pyplot as plt pl = plt pl.scatter(x_train, y_train) pl.plot(x_train, model.predict(x_train)) pl.show() ```
I had a similar problem, with the exact same error reported: `TypeError: '(slice(None, None, None), None)' is an invalid key`. I also solved it by using `df.iloc[:, :-1].values` instead of `df.iloc[:, :-1]` in `plot`. For me the problem appears only on one computer but not on another, with different matplotlib and/or pandas versions, which might be the root cause of the error.
71,976,778
I'm trying to make a textbox which whenever a textchanged event happens, the function will use the textbox text to search through the database to find the right records. For example: I have 3 rows (name, phone number, birthday - the format is year - month - day - in SQL Server): | Name | PhoneNumber | Birthday | | --- | --- | --- | | John | 482 | 2000-7-9 | | Dennis | 912 | 2001-12-9 | | Mike | 123 | 2000-4-1 | If the textbox.text is `9` or `9/`, I want to return the 2 rows for John and Dennis. | Name | PhoneNumber | Birthday | | --- | --- | --- | | John | 482 | 2000-7-9 | | Dennis | 912 | 2001-12-9 | It is easy to search if I enter the date with `yyyy-MM-dd` format to the textbox, the query will be: ``` SELECT * FROM database WHERE birthday LIKE %textbox.text% ``` I tried and it worked perfectly, but only when the date in textbox is following the `yyyy-MM-dd` format. Is there any way for it to work with the `dd/month/yyyy` format?
2022/04/23
[ "https://Stackoverflow.com/questions/71976778", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18676736/" ]
You have to use a Series as `x-axis` not a DataFrame: ``` pl.plot(x_train['YearsExperience'], model.predict(x_train)) # OR pl.plot(x_train.squeeze(), model.predict(x_train)) ``` [![enter image description here](https://i.stack.imgur.com/0a5Lh.png)](https://i.stack.imgur.com/0a5Lh.png)
I had a similar problem, with the exact same error reported: `TypeError: '(slice(None, None, None), None)' is an invalid key`. I also solved it by using `df.iloc[:, :-1].values` instead of `df.iloc[:, :-1]` in `plot`. For me the problem appears only on one computer but not on another, with different matplotlib and/or pandas versions, which might be the root cause of the error.
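The TypeError in these answers can be reproduced without matplotlib at all: when `plot()` receives a DataFrame, it internally applies NumPy-style tuple indexing to add an axis, and pandas rejects that key, while an ndarray (from `.values`) or a squeezed Series is fine. A minimal sketch (the column name and data are illustrative):

```python
import pandas as pd

# A one-column DataFrame, like x_train in the question's code.
x_train = pd.DataFrame({"YearsExperience": [1.1, 2.0, 3.2]})

# matplotlib effectively does x[:, None], i.e. indexes with the tuple
# (slice(None), None). pandas refuses that key on a DataFrame:
try:
    x_train[(slice(None), None)]
except Exception as err:  # TypeError or InvalidIndexError, depending on pandas version
    print(type(err).__name__, err)

# Either of these is safe to hand to plot():
as_array = x_train.values      # 2-D ndarray; tuple indexing works on it
as_series = x_train.squeeze()  # 1-D Series, one value per row
```

This is also why the fix behaves differently across machines: which exception is raised, and whether matplotlib tolerates the DataFrame at all, depends on the installed pandas/matplotlib versions.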
44,654,505
Is it possible to create a CSS grid that allows for different sized content blocks that don't have fixed starting positions with other blocks flowing around? Here's my test HTML ``` <div class="grid"> <div class="item">Small 1</div> <div class="item">Small 2</div> <div class="item large">Large 1</div> <div class="item large">Large 2</div> <div class="item">Small 3</div> <div class="item">Small 4</div> <div class="item">Small 5</div> <div class="item">Small 6</div> <div class="item">Small 7</div> <div class="item">Small 8</div> <div class="item">Small 9</div> <div class="item">Small 10</div> <div class="item">Small 11</div> <div class="item">Small 12</div> <div class="item">Small 13</div> </div> ``` CSS ``` * { padding: 0; margin: 0; box-sizing: border-box; } body { padding: 5em; } .grid { display: grid; grid-template-columns: 25% 25% 25% 25%; grid-gap: 1em 1em; grid-auto-flow: row dense; } .item { background: rgba(0, 0, 0, 0.1); border-radius: 0.25em; padding: 2em; } .large { background: rgba(255, 0, 0, 0.25); grid-column: auto / span 2; grid-row: auto / span 2; } ``` Fiddle: <https://jsfiddle.net/bLjzscLs/> Expected: [![IMAGE](https://i.stack.imgur.com/KwjFe.png)](https://i.stack.imgur.com/KwjFe.png) Actual: [![IMAGE](https://i.stack.imgur.com/wP3Qq.png)](https://i.stack.imgur.com/wP3Qq.png)
2017/06/20
[ "https://Stackoverflow.com/questions/44654505", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1662536/" ]
Of course you can! Simply add `height: <insert value that's above 100px here>` to your CSS under `.large` and find the correct values to reach the expected result. Example: ```css * { padding: 0; margin: 0; box-sizing: border-box; } body { padding: 5em; } .grid { display: grid; grid-template-columns: 25% 25% 25% 25%; grid-gap: 1em 1em; grid-auto-flow: row dense; } .item { background: rgba(0, 0, 0, 0.1); border-radius: 0.25em; padding: 2em; } .large { background: rgba(255, 0, 0, 0.25); grid-column: auto / span 2; grid-row: auto / span 2; height: 200px; } ``` ```html <div class="grid"> <div class="item">Small 1</div> <div class="item">Small 2</div> <div class="item large">Large 1</div> <div class="item large">Large 2</div> <div class="item">Small 3</div> <div class="item">Small 4</div> <div class="item">Small 5</div> <div class="item">Small 6</div> <div class="item">Small 7</div> <div class="item">Small 8</div> <div class="item">Small 9</div> <div class="item">Small 10</div> <div class="item">Small 11</div> <div class="item">Small 12</div> <div class="item">Small 13</div> </div> ```
Actually your code will work the way you expected. Try adding a large paragraph into the 'large 1' area with as many lines as possible and see the output yourself. > > `<div class="item large">`**Add a large paragraph with as many new lines as possible and check the output**`</div>` > > > Anyway, you can also use the code provided here -> <https://jsfiddle.net/anoopmnm/dopt5s4y/> Or just increase the value from `grid-row: auto / span 2;` to `grid-row: auto / span 11;` in your style sheet
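A further option neither answer mentions, so treat it as an assumption about the layout being targeted: with only auto-sized implicit rows, `grid-row: auto / span 2` spans two rows whose height still comes from their content, so fixing the implicit row track makes the large tiles come out exactly two small-tile rows (plus one gap) tall. The `100px` value is illustrative:

```css
.grid {
  /* size every implicit row; "span 2" then covers two of these tracks */
  grid-auto-rows: 100px;
}
```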
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
You can do this: ``` window.location.reload() ``` It just tells javascript to reload the page, this is not dependent on jQuery.
Works efficiently for me: ``` $(window).unload(function(e) { if (!window.opener.closed) { window.opener.__doPostBack('', ''); e.preventDefault(); } }); ```
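The pattern in the answers above can also be written framework-free. In this sketch the window object is passed in as a parameter purely so the logic can be exercised outside a browser; in a real popup you would call `closeAndRefreshOpener(window)` from the close link's click handler:

```javascript
// Framework-free version of the pattern: refresh the opener, then close.
function closeAndRefreshOpener(win) {
  if (win.opener && !win.opener.closed) {
    win.opener.location.reload(true); // refresh the page that opened the popup
  }
  win.close(); // then close the popup itself
}
```

The `true` argument to `reload` is a legacy force-refresh hint that modern browsers ignore; it is harmless and matches the jQuery answers here.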
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
You can do this: ``` window.location.reload() ``` It just tells javascript to reload the page, this is not dependent on jQuery.
Here is a code that refresh parent window and closes the popup in one operation. ``` <script language="JavaScript"> <!-- function refreshParent() { window.opener.location.href = window.opener.location.href; if (window.opener.progressWindow) { window.opener.progressWindow.close() } window.close(); } //--> </script> ```
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
In your popup window: ``` $('#closeButton').click(function(e) { window.opener.location.reload(true); window.close(); e.preventDefault(); }); ``` Reloads the parent page and closes the popup.
You can do this: ``` window.location.reload() ``` It just tells javascript to reload the page, this is not dependent on jQuery.
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
In your popup window: ``` $('#closeButton').click(function(e) { window.opener.location.reload(true); window.close(); e.preventDefault(); }); ``` Reloads the parent page and closes the popup.
Works efficiently for me: ``` $(window).unload(function(e) { if (!window.opener.closed) { window.opener.__doPostBack('', ''); e.preventDefault(); } }); ```
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
``` onclick="javascript:window.opener.location.reload(true);self.close();" ```
Here is a code that refresh parent window and closes the popup in one operation. ``` <script language="JavaScript"> <!-- function refreshParent() { window.opener.location.href = window.opener.location.href; if (window.opener.progressWindow) { window.opener.progressWindow.close() } window.close(); } //--> </script> ```
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
In your popup window: ``` $('#closeButton').click(function(e) { window.opener.location.reload(true); window.close(); e.preventDefault(); }); ``` Reloads the parent page and closes the popup.
``` onclick="javascript:window.opener.location.reload(true);self.close();" ```
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
You can do this: ``` window.location.reload() ``` It just tells javascript to reload the page, this is not dependent on jQuery.
``` onclick="javascript:window.opener.location.reload(true);self.close();" ```
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
In your popup window: ``` $('#closeButton').click(function(e) { window.opener.location.reload(true); window.close(); e.preventDefault(); }); ``` Reloads the parent page and closes the popup.
Here is a code that refresh parent window and closes the popup in one operation. ``` <script language="JavaScript"> <!-- function refreshParent() { window.opener.location.href = window.opener.location.href; if (window.opener.progressWindow) { window.opener.progressWindow.close() } window.close(); } //--> </script> ```
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
``` onclick="javascript:window.opener.location.reload(true);self.close();" ```
Use this code in the window close event, like: ``` $('#close').click(function(e) { window.opener.location.reload(); window.close(); self.close(); }); ``` <http://www.delhincrjob.com/blog/how-to-close-popup-window-and-refresh-parent-window-in-jquery/>
2,523,703
I have a pop-up window a user logs into, once they are logged in successful, I have a message that has a link to close the window. But I want it to not only close that pop up window, but I want it to refresh the webpage the pop-up window was clicked on. So the page can refresh to see that there is a valid login session for that user. Is this possible w/ jQuery?
2010/03/26
[ "https://Stackoverflow.com/questions/2523703", "https://Stackoverflow.com", "https://Stackoverflow.com/users/26130/" ]
In your popup window: ``` $('#closeButton').click(function(e) { window.opener.location.reload(true); window.close(); e.preventDefault(); }); ``` Reloads the parent page and closes the popup.
Use this code in the `ajaxlogin.js` file for MVC 4 users ``` $('#closeButton').click(function(e) { window.opener.location.reload(true); window.close(); e.preventDefault(); }); ``` It's working fine
65,746,585
During a .map from an array of data, I have another array of data, which I need to do a .map inside it too. It's looking a bit like this First Data load ``` async function handleSubmit(event: FormEvent){ event.preventDefault(); await api.get(`people/?search=${input}`).then(response =>{ setCharacters(response.data.results) }) } ``` Second Data Load ``` async function getFilmName(film: string) { const movie = film.split("/"); await api.get(`films/${movie[5]}`).then(response =>{ return(response.data.title) }) } ``` and then, the .map's ``` {characters.map(characters =>{ return ( <div> <Link to={`character/${characters.url.split("/")[5]}`}>{characters.name}</Link> {characters.films.map((films) =>{ return( <h5>{getFilmName(films)}</h5> ) })} </div> ) })} ``` and, in the end I receive a ``` Error: Objects are not valid as a React child (found: [object Promise]). If you meant to render a collection of children, use an array instead. ```
2021/01/16
[ "https://Stackoverflow.com/questions/65746585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15016805/" ]
`getFilmName` is an async function, so it returns a Promise object; rendering that Promise in JSX is what triggers the error, because you can't resolve the film name directly in a JSX context. You need to call it in `useEffect` (or another event handler) and save the resolved value in state, then render from that state. For that, you can make a new component like: ``` function FileName ({file}){ const [fileName,setFileName] = useState("") const getFileName = useCallback(async ()=> {...},[file]) useEffect(()=>{ getFileName() },[getFileName]) return (<h5>{fileName}</h5>) } ```
I think you were using the same variable name here (the `.map` parameter shadowed the outer `characters` array); renaming it avoids the clash: ``` {characters.map(character =>{ return ( <div> <Link to={`character/${character.url.split("/")[5]}`}>{character.name}</Link> {character.films.map((films) =>{ return( <h5>{getFilmName(films)}</h5> ) })} </div> ) })} ```
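A complementary sketch to the stateful component approach above (names here are illustrative): resolve every title up front so the render path only deals with plain strings. Note that the question's `getFilmName` also resolves to `undefined`, because its `return` sits inside `.then` rather than returning the awaited value; the helper assumed here expects a version that actually returns the title:

```javascript
// films: array of film URLs; getFilmName: async url => title string.
// Promise.all fetches the titles in parallel and yields plain strings,
// which are safe to store in state and map over in JSX.
function resolveFilmNames(films, getFilmName) {
  return Promise.all(films.map(f => getFilmName(f)));
}
```

In the component you would call this once in `useEffect`, e.g. `resolveFilmNames(character.films, getFilmName).then(setTitles)`, and render `titles.map(t => <h5>{t}</h5>)` instead of calling the async function inline.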
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
I resolved the problem. The cause was that I installed * `aspnetcore-runtime-2.1.0-preview1-final-win-x64` and * `.net core SDK 2.1.4-x64` versions. * The installation placed the SDK files in `c:\Program Files\dotnet` * but the 32-bit VS2017 was looking for the SDK files in `c:\Program Files(x86)\dotnet`. To resolve this I * installed the x86 version of the SDK and ASP.NET Core runtime, * set the MSBuildSDKsPath environment variable to point to the new installation path, * deleted all obsolete SDKs from Control Panel. The question [VS2017 Update 3 'Microsoft.NET.Sdk.Web' could not be found](https://stackoverflow.com/questions/45694411/visual-studio-2017-update-3-the-sdk-microsoft-net-sdk-web-specified-could-no?rq=1) helped in resolving this issue.
I had this problem and I did a fresh install of VS2017. That fixed it!
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
The same happened to me, but for version 2.2 of .NET Core. I installed the latest version of the .NET Core 2.2 SDK, which was 2.2.202 at the time. Visual Studio allowed me to create a new project for Core 2.2, but it was showing the error:

> "The current .NET SDK does not support targeting .NET Core 2.2. Either target .NET Core 2.1 or lower, or use a version of the .NET SDK that supports .NET Core 2.2."

The target framework for my project was empty and the dropdown didn't show 2.2. After installing .NET Core SDK 2.2.103, the error was gone and the dropdown did show ".NET Core 2.2".
I started getting this error after I installed SDK 2.2.300. After reading this post and some others, I **downgraded to SDK 2.2.1xx** and the error went away. Note: I had to uninstall SDK 2.2.300 and restart after installing SDK 2.2.1xx.
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
Stopping IIS before publishing solved the problem. But first I needed to install the .NET Core 2.1 SDK and update Visual Studio.
I had this problem and I did a fresh install of VS2017. That fixed it!
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
The problem here is that Microsoft confused a whole lot of people with how they numbered their .NET Core SDKs. In the original poster's message the path C:\Program Files\dotnet\sdk\2.1.100\ DOES NOT appear to represent the .NET Core 2.1 runtime (but you'd think it does). I came across this post [The current .NET SDK does not support targeting .NET Core 2.1](https://developercommunity.visualstudio.com/content/problem/213229/the-current-net-sdk-does-not-support-targeting-net.html) on developercommunity.visualstudio.com where a Microsoft support person explains the confusion: > > "Thank you for your feedback! We have determined that this issue is > not a bug. The first SDK with .NET Core 2.1 support is > 2.1.300-preview1. We know the versioning is confusing which is why starting in 2.1.300, the major.minor versions of the SDK will now be > aligned with the major.minor versions of the runtime as well." > > > So ... in order to get .NET Core 2.1 support for building via the SDK you need to install the SDK with version 2.1.300 at least (since 2.1.2 is NOT .NET Core 2.1) ... yeah, confusing. Thank you Microsoft for some lost time on this.
Check to make sure you don't have `global.json` file in your project root folder that forces your project to use .NET SDK 2.1 only. If you have this global.json file, **delete it**, and then restart visual studio. As embarrassing as it might sound, I spent almost an hour tinkering and I even downloaded several SDK version to force it to use 2.2
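For reference, an SDK-pinning `global.json` looks like the fragment below; the version number here is illustrative, since whatever 2.1.x value appears in yours is what forces the old SDK to be selected:

```json
{
  "sdk": {
    "version": "2.1.403"
  }
}
```

If this file exists in the project root (or any parent folder), deleting it lets the newest installed SDK be picked up again.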
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
Installing [`.NET Core SDK 2.1.300-preview2`](https://www.microsoft.com/net/download/dotnet-core/sdk-2.1.300-preview2) did the trick for me.

UPDATE: just in case, a newer version has been released recently. You can download the new .NET Core SDK for 2.2.0-preview1 (which includes ASP.NET 2.2.0-preview1) [here](https://www.microsoft.com/net/download/dotnet-core/2.2).

See also this [answer](https://stackoverflow.com/a/53318706/804385) when you are getting an error like this in general:

> The current .NET SDK does not support targeting .NET Core 2.X
It looks like Microsoft are encouraging better coding practice among early adopters of .NET Core 2.1 by removing the capability to use older software where bad habits prevail. .NET Core 2.0 and older versions are almost end of life, so they should not be used at all (<https://blogs.msdn.microsoft.com/dotnet/2018/06/20/net-core-2-0-will-reach-end-of-life-on-september-1-2018/>).

1) Microsoft have removed ServiceLocator, since it is widely considered an anti-pattern resulting in difficult-to-understand code.

2) To improve MVC applications, the AccountController was removed from Authentication/Authorization to encourage the use of Razor pages, which implement the Single Responsibility Principle.

It would not be considered best practice to circumvent these changes in order to perpetuate software built to lower software engineering standards.
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
The same happened to me, but for version 2.2 of .NET Core. I installed the latest version of the .NET Core 2.2 SDK, which was 2.2.202 at the time. Visual Studio allowed me to create a new project for Core 2.2, but it was showing the error:

> "The current .NET SDK does not support targeting .NET Core 2.2. Either target .NET Core 2.1 or lower, or use a version of the .NET SDK that supports .NET Core 2.2."

The target framework for my project was empty and the dropdown didn't show 2.2. After installing .NET Core SDK 2.2.103, the error was gone and the dropdown did show ".NET Core 2.2".
Go to your pipeline:

1. Click on Edit pipeline.
2. Click on the Agent Specification dropdown.
3. Change it to Windows 2019.
4. Click Save and Queue.

It worked fine for me.
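For YAML-defined pipelines, the same switch lives in the pool settings; a minimal fragment (assuming a Microsoft-hosted agent, where `windows-2019` is the standard hosted image name):

```yaml
# azure-pipelines.yml fragment: run the build on the Windows 2019 hosted agent
pool:
  vmImage: 'windows-2019'
```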
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
This happened to me after installing 2.2.100-preview3-009430 and then updating to Visual Studio 15.9.2. I resolved it by enabling the "Use previews of the .NET Core SDK" option.

1. Go to: *Tools > Options > Projects and Solutions > .NET Core*
2. Check the "Use previews of the .NET Core SDK" box
3. Restart Visual Studio and rebuild the solution.

[VS Preview Options](https://i.stack.imgur.com/m8Wtu.png)
I resolved the problem. The cause was that I installed

* `aspnetcore-runtime-2.1.0-preview1-final-win-x64` and
* `.net core SDK 2.1.4-x64`.

The installation placed the SDK files in `c:\Program Files\dotnet`, but 32-bit VS2017 was looking for the SDK files in `c:\Program Files (x86)\dotnet`. To resolve this I

* installed the x86 versions of the SDK and the ASP.NET Core runtime,
* set the `MSBuildSDKsPath` environment variable to point to the new installation path,
* deleted all obsolete SDKs from Control Panel.

The question [VS2017 Update 3 'Microsoft.NET.Sdk.Web' could not be found](https://stackoverflow.com/questions/45694411/visual-studio-2017-update-3-the-sdk-microsoft-net-sdk-web-specified-could-no?rq=1) helped in resolving this issue.
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
The problem here is that Microsoft confused a whole lot of people with how they numbered their .NET Core SDKs. In the original poster's message the path C:\Program Files\dotnet\sdk\2.1.100\ DOES NOT appear to represent the .NET Core 2.1 runtime (but you'd think it does). I came across this post [The current .NET SDK does not support targeting .NET Core 2.1](https://developercommunity.visualstudio.com/content/problem/213229/the-current-net-sdk-does-not-support-targeting-net.html) on developercommunity.visualstudio.com where a Microsoft support person explains the confusion: > > "Thank you for your feedback! We have determined that this issue is > not a bug. The first SDK with .NET Core 2.1 support is > 2.1.300-preview1. We know the versioning is confusing which is why starting in 2.1.300, the major.minor versions of the SDK will now be > aligned with the major.minor versions of the runtime as well." > > > So ... in order to get .NET Core 2.1 support for building via the SDK you need to install the SDK with version 2.1.300 at least (since 2.1.2 is NOT .NET Core 2.1) ... yeah, confusing. Thank you Microsoft for some lost time on this.
I started getting this error after I installed SDK 2.2.300. After reading this post and some others, I **downgraded to SDK 2.2.1xx** and the error went away. Note: I had to uninstall SDK 2.2.300 and restart after installing SDK 2.2.1xx.
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
I had the .Net Core SDK 2.1.4 installed and followed the other answers in this post without solving my problem. What finally did it for me was **installing .Net Core SDK version 2.1.301**, and uninstalling every other version. Seems like the SDK 2.1.4 cannot target .Net Core 2.1 but SDK 2.1.301 does the job.
Go to your pipeline:

1. Click on Edit pipeline.
2. Click on the Agent Specification dropdown.
3. Change it to Windows 2019.
4. Click Save and Queue.

It worked fine for me.
49,171,623
have tried upgrading to the professional version of visual studio 2017 v 15.6.0 (Preview 7.0) and installed aspnetcore-runtime-2.1.0-preview1-final-win-x64 and .net core SDK 2.1.4. When I created a new web application I get an error saying > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > When I try to build an existing project I get an error > > "The current .NET SDK does not support targeting .NET Core 2.1. Either > target .NET Core 2.0 or lower, or use a version of the .NET SDK that > supports .NET Core 2.1." > > > I don't see ".net core 2.1" in my target framework I don't have global.json file in my computer When I try dotnet --info, I get this > > c:\source\dnacloud\testapp>dotnet --info .NET Command Line Tools > (2.1.100) > > > > ``` > Product Information: > Version: 2.1.100 > Commit SHA-1 hash: b9e74c6 > > Runtime Environment: > OS Name: Windows > OS Version: 10.0.16299 > OS Platform: Windows > RID: win10-x64 > Base Path: C:\Program Files\dotnet\sdk\2.1.100\ > > Microsoft .NET Core Shared Framework Host > > Version : 2.0.5 > Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54 > > ``` > >
2018/03/08
[ "https://Stackoverflow.com/questions/49171623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8559109/" ]
Stopping IIS before publishing solved the problem. But first I needed to install the .NET Core 2.1 SDK and update Visual Studio.
I had this issue when opening an old project. The solution was to install **.NET Core 2.1 development tools** for my IDE (VS 2017) from the Visual Studio Installer [![Visual Studio Installer](https://i.stack.imgur.com/eHglp.png)](https://i.stack.imgur.com/eHglp.png)
57,411,124
I have one data sets with name `DATA_TEST`.This data frame contain 6-observations in character format.You can see table below. ```r dput(DATA_TEST) structure(list(Ten_digits = c("NA", "207", "0101", "0208 90", "0206 90 99 00", "103")), .Names = "Ten_digits", row.names = c(NA, -6L), class = "data.frame") # ------------------------------------------------------------------------- # > DATA_TEST # Ten_digits # 1 NA # 2 207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 103 ``` So my intention is to convert this data frame with a stringr or other package like picture below. Actually the code needs to do one thing or more precisely first must found only variables with three digits like `207` or `103` and convert this variables into `0207` and `0103`. In the table below you can see finally what the table should look like. ```r # > Desired Output # Ten_digits # 1 NA # 2 0207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 0103 ``` So can anybody help me with this code ?
2019/08/08
[ "https://Stackoverflow.com/questions/57411124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10924836/" ]
You can use `str_length` from `stringr`: ``` library(tidyverse) # in order to load all required packages at once DATA_TEST %>% mutate(Ten_digits = case_when( str_length(Ten_digits) == 3 ~ paste0("0", Ten_digits), TRUE ~ Ten_digits )) # Ten_digits #1 NA #2 0207 #3 0101 #4 0208 90 #5 0206 90 99 00 #6 0103 ``` `str_length` allows you to vectorize lengths of your character vector: from the [function's documentation](https://stringr.tidyverse.org/reference/str_length.html): > > `Technically this returns the number of "code points", in a string. One code point usually corresponds to one character(...)`. > > > `case_when` allows to vectorize multiple `if_else` statements. As mentioned in comments, you can use `ifelse` or `if_else`, which are more straightforward than `case_when`. See example below inside the microbenchmarking: ``` microbenchmark::microbenchmark( DATA_TEST %>% mutate(Ten_digits = case_when( str_length(Ten_digits) == 3 ~ paste0("0", Ten_digits), TRUE ~ Ten_digits )), DATA_TEST %>% mutate(Ten_digits = ifelse( str_length(Ten_digits) == 3, paste0("0", Ten_digits), Ten_digits )), DATA_TEST %>% mutate(Ten_digits = if_else( str_length(Ten_digits) == 3, paste0("0", Ten_digits), Ten_digits )) ) # min lq mean median uq max neval # 785.809 806.9130 1051.9314 858.217 1193.865 2445.434 100 # case_when # 613.398 623.3985 862.6720 636.858 822.027 8610.763 100 # ifelse # 625.485 641.1370 822.3502 664.135 894.812 1995.932 100 # if_else ```
We can do this simply by pasting a `0` in front of 3-char strings, i.e. ``` DATA_TEST$Ten_digits[nchar(DATA_TEST$Ten_digits) == 3] <- paste0("0", DATA_TEST$Ten_digits[nchar(DATA_TEST$Ten_digits) == 3]) DATA_TEST # Ten_digits #1 NA #2 0207 #3 0101 #4 0208 90 #5 0206 90 99 00 #6 0103 ```
57,411,124
I have one data sets with name `DATA_TEST`.This data frame contain 6-observations in character format.You can see table below. ```r dput(DATA_TEST) structure(list(Ten_digits = c("NA", "207", "0101", "0208 90", "0206 90 99 00", "103")), .Names = "Ten_digits", row.names = c(NA, -6L), class = "data.frame") # ------------------------------------------------------------------------- # > DATA_TEST # Ten_digits # 1 NA # 2 207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 103 ``` So my intention is to convert this data frame with a stringr or other package like picture below. Actually the code needs to do one thing or more precisely first must found only variables with three digits like `207` or `103` and convert this variables into `0207` and `0103`. In the table below you can see finally what the table should look like. ```r # > Desired Output # Ten_digits # 1 NA # 2 0207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 0103 ``` So can anybody help me with this code ?
2019/08/08
[ "https://Stackoverflow.com/questions/57411124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10924836/" ]
You can use `str_length` from `stringr`: ``` library(tidyverse) # in order to load all required packages at once DATA_TEST %>% mutate(Ten_digits = case_when( str_length(Ten_digits) == 3 ~ paste0("0", Ten_digits), TRUE ~ Ten_digits )) # Ten_digits #1 NA #2 0207 #3 0101 #4 0208 90 #5 0206 90 99 00 #6 0103 ``` `str_length` allows you to vectorize lengths of your character vector: from the [function's documentation](https://stringr.tidyverse.org/reference/str_length.html): > > `Technically this returns the number of "code points", in a string. One code point usually corresponds to one character(...)`. > > > `case_when` allows to vectorize multiple `if_else` statements. As mentioned in comments, you can use `ifelse` or `if_else`, which are more straightforward than `case_when`. See example below inside the microbenchmarking: ``` microbenchmark::microbenchmark( DATA_TEST %>% mutate(Ten_digits = case_when( str_length(Ten_digits) == 3 ~ paste0("0", Ten_digits), TRUE ~ Ten_digits )), DATA_TEST %>% mutate(Ten_digits = ifelse( str_length(Ten_digits) == 3, paste0("0", Ten_digits), Ten_digits )), DATA_TEST %>% mutate(Ten_digits = if_else( str_length(Ten_digits) == 3, paste0("0", Ten_digits), Ten_digits )) ) # min lq mean median uq max neval # 785.809 806.9130 1051.9314 858.217 1193.865 2445.434 100 # case_when # 613.398 623.3985 862.6720 636.858 822.027 8610.763 100 # ifelse # 625.485 641.1370 822.3502 664.135 894.812 1995.932 100 # if_else ```
You can use `str_pad` from `stringr`. Note that it'll pad any string with length less than 4 characters, so the code will need modification if you specifically want to focus on strings with length 3. Also, `ifelse` wouldn't be needed if you had a literal `NA` instead of "NA".

```
DATA_TEST %>% 
  mutate(
    Ten_digits = ifelse(Ten_digits == "NA", "NA", str_pad(Ten_digits, width = 4, pad = 0))
  )

     Ten_digits
1            NA
2          0207
3          0101
4       0208 90
5 0206 90 99 00
6          0103
```
57,411,124
I have one data sets with name `DATA_TEST`.This data frame contain 6-observations in character format.You can see table below. ```r dput(DATA_TEST) structure(list(Ten_digits = c("NA", "207", "0101", "0208 90", "0206 90 99 00", "103")), .Names = "Ten_digits", row.names = c(NA, -6L), class = "data.frame") # ------------------------------------------------------------------------- # > DATA_TEST # Ten_digits # 1 NA # 2 207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 103 ``` So my intention is to convert this data frame with a stringr or other package like picture below. Actually the code needs to do one thing or more precisely first must found only variables with three digits like `207` or `103` and convert this variables into `0207` and `0103`. In the table below you can see finally what the table should look like. ```r # > Desired Output # Ten_digits # 1 NA # 2 0207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 0103 ``` So can anybody help me with this code ?
2019/08/08
[ "https://Stackoverflow.com/questions/57411124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10924836/" ]
You may use a simple regex with `sub`: ``` DATA_TEST<-data.frame(Ten_digits=c("NA","207","0101","0208 90","0206 90 99 00","103"),stringsAsFactors = FALSE) DATA_TEST$Ten_digits <- sub("^(\\d{3})$", "0\\1", DATA_TEST$Ten_digits) DATA_TEST ## => Ten_digits 1 NA 2 0207 3 0101 4 0208 90 5 0206 90 99 00 6 0103 ``` Here, `^(\\d{3})$` matches a three digit string and captures the digits into Group 1 (since the pattern is inside parentheses) and the `0\1` replacement pattern inserts a `0` and adds back the whole match value in Group 1. **Pattern details** * `^` - start of string * `(\d{3})` - Group 1: three digits * `$` - end of string.
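The pattern is plain regex, so it is portable; as an illustrative cross-check (Python here, not part of the R answer), the same substitution produces the same result:

```python
import re

values = ["NA", "207", "0101", "0208 90", "0206 90 99 00", "103"]
# ^(\d{3})$ only matches strings that are exactly three digits,
# so "0101", "0208 90" and "0206 90 99 00" are left untouched.
padded = [re.sub(r"^(\d{3})$", r"0\1", v) for v in values]
print(padded)  # ['NA', '0207', '0101', '0208 90', '0206 90 99 00', '0103']
```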
We can do this simply by pasting a `0` in front of 3-char strings, i.e. ``` DATA_TEST$Ten_digits[nchar(DATA_TEST$Ten_digits) == 3] <- paste0("0", DATA_TEST$Ten_digits[nchar(DATA_TEST$Ten_digits) == 3]) DATA_TEST # Ten_digits #1 NA #2 0207 #3 0101 #4 0208 90 #5 0206 90 99 00 #6 0103 ```
57,411,124
I have one data sets with name `DATA_TEST`.This data frame contain 6-observations in character format.You can see table below. ```r dput(DATA_TEST) structure(list(Ten_digits = c("NA", "207", "0101", "0208 90", "0206 90 99 00", "103")), .Names = "Ten_digits", row.names = c(NA, -6L), class = "data.frame") # ------------------------------------------------------------------------- # > DATA_TEST # Ten_digits # 1 NA # 2 207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 103 ``` So my intention is to convert this data frame with a stringr or other package like picture below. Actually the code needs to do one thing or more precisely first must found only variables with three digits like `207` or `103` and convert this variables into `0207` and `0103`. In the table below you can see finally what the table should look like. ```r # > Desired Output # Ten_digits # 1 NA # 2 0207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 0103 ``` So can anybody help me with this code ?
2019/08/08
[ "https://Stackoverflow.com/questions/57411124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10924836/" ]
We can do this simply by pasting a `0` in front of 3-char strings, i.e. ``` DATA_TEST$Ten_digits[nchar(DATA_TEST$Ten_digits) == 3] <- paste0("0", DATA_TEST$Ten_digits[nchar(DATA_TEST$Ten_digits) == 3]) DATA_TEST # Ten_digits #1 NA #2 0207 #3 0101 #4 0208 90 #5 0206 90 99 00 #6 0103 ```
You can use `str_pad` from `stringr`. Note that it'll pad any string with length less than 4 characters, so the code will need modification if you specifically want to focus on strings with length 3. Also, `ifelse` wouldn't be needed if you had a literal `NA` instead of "NA".

```
DATA_TEST %>% 
  mutate(
    Ten_digits = ifelse(Ten_digits == "NA", "NA", str_pad(Ten_digits, width = 4, pad = 0))
  )

     Ten_digits
1            NA
2          0207
3          0101
4       0208 90
5 0206 90 99 00
6          0103
```
57,411,124
I have one data sets with name `DATA_TEST`.This data frame contain 6-observations in character format.You can see table below. ```r dput(DATA_TEST) structure(list(Ten_digits = c("NA", "207", "0101", "0208 90", "0206 90 99 00", "103")), .Names = "Ten_digits", row.names = c(NA, -6L), class = "data.frame") # ------------------------------------------------------------------------- # > DATA_TEST # Ten_digits # 1 NA # 2 207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 103 ``` So my intention is to convert this data frame with a stringr or other package like picture below. Actually the code needs to do one thing or more precisely first must found only variables with three digits like `207` or `103` and convert this variables into `0207` and `0103`. In the table below you can see finally what the table should look like. ```r # > Desired Output # Ten_digits # 1 NA # 2 0207 # 3 0101 # 4 0208 90 # 5 0206 90 99 00 # 6 0103 ``` So can anybody help me with this code ?
2019/08/08
[ "https://Stackoverflow.com/questions/57411124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10924836/" ]
You may use a simple regex with `sub`: ``` DATA_TEST<-data.frame(Ten_digits=c("NA","207","0101","0208 90","0206 90 99 00","103"),stringsAsFactors = FALSE) DATA_TEST$Ten_digits <- sub("^(\\d{3})$", "0\\1", DATA_TEST$Ten_digits) DATA_TEST ## => Ten_digits 1 NA 2 0207 3 0101 4 0208 90 5 0206 90 99 00 6 0103 ``` Here, `^(\\d{3})$` matches a three digit string and captures the digits into Group 1 (since the pattern is inside parentheses) and the `0\1` replacement pattern inserts a `0` and adds back the whole match value in Group 1. **Pattern details** * `^` - start of string * `(\d{3})` - Group 1: three digits * `$` - end of string.
You can use `str_pad` from `stringr`. Note that it'll pad any string with length less than 4 characters, so the code will need modification if you specifically want to focus on strings with length 3. Also, `ifelse` wouldn't be needed if you had a literal `NA` instead of "NA".

```
DATA_TEST %>% 
  mutate(
    Ten_digits = ifelse(Ten_digits == "NA", "NA", str_pad(Ten_digits, width = 4, pad = 0))
  )

     Ten_digits
1            NA
2          0207
3          0101
4       0208 90
5 0206 90 99 00
6          0103
```
1,359,983
Let $G$ be a group of order $150$. I must show that it has a normal subgroup of order $25$. The hint says to show that is has a normal subgroup of order $5$ or $25$. Now from Sylow, I know that the number $n\_5$ of Sylow-$5$ subgroups (which each have $25$ elements) must be either $1$ or $6$, since $n\_5$ must also divide $6$. Now clearly if $n\_5=1$ we are done, so I can assume that $n\_5=6$. My problem is I haven't figured out how to use this information (I am going for a contradiction, just based on what the problem asks me to prove). I know by Cauchy's theorem I can get a subgroup of order $5$, but I don't see immediately if it must be normal. Any direction I should try to be moving?
2015/07/13
[ "https://math.stackexchange.com/questions/1359983", "https://math.stackexchange.com", "https://math.stackexchange.com/users/235510/" ]
As you say, by Sylow's theorem, either $n\_5=1$ or $n\_5=6$. If $n\_5=1$, we have found a normal subgroup of order $25$, so we are done. So let's assume that $n\_5=6$. We will exhibit a normal subgroup of order $5$ as follows. Let $P$ and $Q$ be *distinct* Sylow $5$-subgroups. Using the fact that $|PQ| = \frac{|P|\cdot |Q|}{|P\cap Q|}$, it is easy to see that $|P\cap Q| = 1$ is impossible! So we conclude that the subgroup $T=P\cap Q$ has order $5$. We claim that $T$ is normal in $G$. It is a fact that if $H$ is a proper subgroup of a $p$-group $L$, then $H$ is properly contained in its normalizer $N\_{L}(H)$ (see [here](https://math.stackexchange.com/questions/234426/if-h-is-a-proper-subgroup-of-a-p-group-g-then-h-is-proper-in-n-gh) for proof). Since $P$ and $Q$ are $p$-groups, we can apply this result. Hence, $T\subsetneq N\_{P}(T)$ and $T\subsetneq N\_{Q}(T)$. So $N\_{P}(T) = P$ and $N\_{Q}(T)=Q$. This immediately gives $P\subseteq N\_{G}(T)$ and $Q\subseteq N\_{G}(T)$, which implies $PQ \subseteq N\_{G}(T)$. Again using the formula above, $PQ=\{p \cdot q: p\in P, q\in Q\}$ (as a subset) has cardinality $125$. By order considerations, we get that $|N\_{G}(T)|=150$, so $N\_{G}(T)=G$ and $T$ is a normal subgroup of order $5$. This proves the hint. Now let's finally show that $G$ has a normal subgroup of order $25$. So let $T$ be the normal subgroup of order $5$ constructed above. Then the quotient group $G/T$ has order $150/5=30$. Now every group of order $30$ contains a normal Sylow $5$-subgroup (see [this MSE thread](https://math.stackexchange.com/questions/352501/a-group-of-order-30-has-a-normal-5-sylow-subgroup)). Let $H$ be a normal subgroup of order $5$ in $G/T$, and consider the canonical map $\pi: G\to G/T$. Then $\pi^{-1}(H)$ is a normal subgroup of $G$. By the correspondence theorem, $[G: \pi^{-1}(H)]=[G/T: H]=6$, so $\pi^{-1}(H)$ is the desired normal subgroup of index $6$, i.e. the desired subgroup of order $25$.
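A sketch of the order computation behind the claim that $|P\cap Q| = 1$ is impossible (using only the product formula already quoted):

$$|PQ| \;=\; \frac{|P|\cdot|Q|}{|P\cap Q|} \;=\; \frac{25 \cdot 25}{|P\cap Q|} \;\le\; |G| \;=\; 150 \quad\Longrightarrow\quad |P\cap Q| \;\ge\; \frac{625}{150} \;>\; 4.$$

Since $P\cap Q$ is a subgroup of $P$, its order divides $25$, and $P \neq Q$ rules out order $25$; hence $|P\cap Q| = 5$.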
(This shows that the case $n\_5=6$ is actually impossible, and there is only one subgroup of order $25$, which is the $5$-Sylow subgroup)
There are easier solutions, especially if your primary goal is to show the group is not simple. (See the paragraph starting (C) for a really simple argument.) (A) For non-simplicity: You already know by Sylow that if the group is simple it must have $6$ $5$-Sylow subgroups. If this is the case, then it acts by conjugation on these $6$ subgroups both transitively (by Sylow) and faithfully (since it's simple), and so embeds monomorphically as a subgroup of $S\_6$. As $150$ does not divide $720$ (the order of $S\_6$), this is impossible. So no group of order $150$ is simple. (B) Furthermore, doing just a little more work, we can also use this homomorphism to $S\_6$ to show the $5$-Sylow of any group of order $150$ is normal. The image of the homomorphism must not be divisible by $25$ (since $25$ does not divide $720$) and the image must be divisible by $6$ (to be transitive on $6$ elements). Thus, the image has order $6$ or $30$. In either event, you can see by elementary means (and probably already know) that the image has a normal $5$-Sylow, and, since the kernel is a $5$-group, the pre-image of that in $G$ is a normal $5$-Sylow in $G$. (The group of order $30$ has a normal $5$-Sylow for example, because it has a normal subgroup of index $2$--since any element of order $2$ acts by odd permutations by translation--$15$ $2$-cycles--, so just take the elements that act by even permutations--and this normal subgroup of order $15$ both contains all $5$-Sylows and can have only one $5$-Sylow.) Alternatively, you can do without the last paragraph-and-a-half by showing (pretty easy) that $S\_6$ has no subgroup of order $30$ so the image must have order $6$. 
(C) But the simplest proof of all goes like this: $G$ must have an index $2$ subgroup, since in its action on itself by multiplication, an order $2$ element maps to an odd permutation (a product of $75$ transpositions), so the elements that map to even permutations are an index $2$ subgroup $H$, which is normal and has order $75$. By Sylow's theorem, $H$ has a unique $5$-Sylow which is thus characteristic in $H$ and hence normal in $G$. (D) Of course, if you know Philip Hall's theorem, and you know that not simple for order $150$ implies solvable, then (A) is enough since you have immediately that the $5$-Sylow is normal in any solvable group of order $150$ since the number of $5$-Sylows in a *solvable* group is a product of prime powers each *individually* congruent to $1$ mod $5$. This embedding in $S\_n$ idea works for a bunch of similar problems as well. In more complex cases, one can sometimes make more sophisticated use of this embedding in a symmetric group to make inferences about the structure of the given group; for example, if in a simple group the number of $p$-Sylows of order $p^2$ is fewer than $p^2$, then the embedding shows they must be elementary abelian rather than cyclic. This is actually kind of a big deal--if a simple group has an element of order $p^n$ all of its subgroups must have index at least $p^n$, which is an enormous constraint on the structure of the group, especially if $p$ is large. This is why so often Sylow subgroups of simple groups are elementary abelian or extra-special.
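The order arithmetic in (A) and (B) can be verified mechanically — a small Python check of my own, not part of the original argument:

```python
from math import factorial, gcd

order_G = 150
order_S6 = factorial(6)  # |S_6| = 720

# (A): 150 does not divide 720, so G cannot embed in S_6.
print(order_S6 % order_G)  # 120, i.e. nonzero

# (B): the image's order divides gcd(150, 720) = 30 and must be a
# multiple of 6 (transitivity on 6 points), leaving only 6 or 30.
g = gcd(order_G, order_S6)
print([d for d in range(1, g + 1) if g % d == 0 and d % 6 == 0])  # [6, 30]
```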
1,359,983
Let $G$ be a group of order $150$. I must show that it has a normal subgroup of order $25$. The hint says to show that it has a normal subgroup of order $5$ or $25$. Now from Sylow, I know that the number $n\_5$ of Sylow-$5$ subgroups (which each have $25$ elements) must be either $1$ or $6$, since $n\_5$ must also divide $6$. Now clearly if $n\_5=1$ we are done, so I can assume that $n\_5=6$. My problem is I haven't figured out how to use this information (I am going for a contradiction, just based on what the problem asks me to prove). I know by Cauchy's theorem I can get a subgroup of order $5$, but I don't see immediately if it must be normal. Any direction I should try to be moving in?
2015/07/13
[ "https://math.stackexchange.com/questions/1359983", "https://math.stackexchange.com", "https://math.stackexchange.com/users/235510/" ]
As you say, by Sylow's theorem, either $n\_5=1$ or $n\_5=6$. If $n\_5=1$, we have found a normal subgroup of order $25$, so we are done. So let's assume that $n\_5=6$. We will exhibit a normal subgroup of order $5$ as follows. Let $P$ and $Q$ be *distinct* Sylow $5$-subgroups. Using the fact that $|PQ| = \frac{|P|\cdot |Q|}{|P\cap Q|}$, it is easy to see that $|P\cap Q| = 1$ is impossible! So we conclude that the subgroup $T=P\cap Q$ has order $5$. We claim that $T$ is normal in $G$. It is a fact that if $H$ is a proper subgroup of a $p$-group $L$, then $H$ is properly contained in its normalizer $N\_{L}(H)$ (see [here](https://math.stackexchange.com/questions/234426/if-h-is-a-proper-subgroup-of-a-p-group-g-then-h-is-proper-in-n-gh) for proof). Since $P$ and $Q$ are $p$-groups, we can apply this result. Hence, $T\subsetneq N\_{P}(T)$ and $T\subsetneq N\_{Q}(T)$. So $N\_{P}(T) = P$ and $N\_{Q}(T)=Q$. This immediately gives $P\subseteq N\_{G}(T)$ and $Q\subseteq N\_{G}(T)$, which implies $PQ \subseteq N\_{G}(T)$. Again using the formula above, $PQ=\{p \cdot q: p\in P, q\in Q\}$ (as a subset) has cardinality $125$. By order considerations, we get that $|N\_{G}(T)|=150$, so $N\_{G}(T)=G$ and $T$ is a normal subgroup of order $5$. This proves the hint. Now let's finally show that $G$ has a normal subgroup of order $25$. So let $T$ be the normal subgroup of order $5$ constructed above. Then the quotient group $G/T$ has order $150/5=30$. Now every group of order $30$ contains a normal Sylow $5$-subgroup (see [this MSE thread](https://math.stackexchange.com/questions/352501/a-group-of-order-30-has-a-normal-5-sylow-subgroup)). Let $H$ be a normal subgroup of order $5$ in $G/T$, and consider the canonical map $\pi: G\to G/T$. Then $\pi^{-1}(H)$ is a normal subgroup of $G$. By the correspondence theorem, $[G: \pi^{-1}(H)]=[G/T: H]=6$, so $\pi^{-1}(H)$ is the desired normal subgroup of index $6$, i.e. the desired subgroup of order $25$.
(This shows that the case $n\_5=6$ is actually impossible, and there is only one subgroup of order $25$, which is the $5$-Sylow subgroup)
Just to state the simplest answer alone so people see it: Since $|G|=2m$ with $m$ odd, $G$ has an index $2$ subgroup $H$ of order $75$. By Sylow, $H$ has a unique 5-Sylow $P$ of order $25$. Since $P$ is characteristic in $H$, which is normal in $G$ (because index $2$), $P$ is normal in $G$, so it is the unique $5$-Sylow of $G$.
33,083,616
As I'm writing more and more Groovy to use with the Jenkins Workflow plugin I've started getting to the point where I've got re-usable code that could be used in multiple scripts. What would be the best way of sharing this code? Is it possible to produce my own .jar with the shared code in it and utilize this from within the Workflow script? Or is there a simpler way?
2015/10/12
[ "https://Stackoverflow.com/questions/33083616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3617723/" ]
You can use the Global Lib as pointed out in other comments and/or use the `load` step to load your own scripts from somewhere (e.g. from your SCM, checked out previously). More info about `load`: <https://github.com/jenkinsci/workflow-plugin/blob/master/TUTORIAL.md#triggering-manual-loading>
This is what the Workflow Global Library is for! <https://github.com/jenkinsci/workflow-plugin/blob/master/cps-global-lib/README.md> I use this in my installation, it's a great feature of Workflow. Right now I just have one "helper" class that contains methods common to all builds, but as other teams start to adopt Workflow they are showing interest in creating their own classes to use for subsets of our builds.
33,083,616
As I'm writing more and more Groovy to use with the Jenkins Workflow plugin I've started getting to the point where I've got re-usable code that could be used in multiple scripts. What would be the best way of sharing this code? Is it possible to produce my own .jar with the shared code in it and utilize this from within the Workflow script? Or is there a simpler way?
2015/10/12
[ "https://Stackoverflow.com/questions/33083616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3617723/" ]
I've actually got this working in the end by using our own git repo and putting a symlink into workflow-libs/src that points at that repo.
This is what the Workflow Global Library is for! <https://github.com/jenkinsci/workflow-plugin/blob/master/cps-global-lib/README.md> I use this in my installation, it's a great feature of Workflow. Right now I just have one "helper" class that contains methods common to all builds, but as other teams start to adopt Workflow they are showing interest in creating their own classes to use for subsets of our builds.
33,083,616
As I'm writing more and more Groovy to use with the Jenkins Workflow plugin I've started getting to the point where I've got re-usable code that could be used in multiple scripts. What would be the best way of sharing this code? Is it possible to produce my own .jar with the shared code in it and utilize this from within the Workflow script? Or is there a simpler way?
2015/10/12
[ "https://Stackoverflow.com/questions/33083616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3617723/" ]
You can use the Global Lib as pointed out in other comments and/or use the `load` step to load your own scripts from somewhere (e.g. from your SCM, checked out previously). More info about `load`: <https://github.com/jenkinsci/workflow-plugin/blob/master/TUTORIAL.md#triggering-manual-loading>
The [Workflow Remote File Loader plugin](https://github.com/jenkinsci/workflow-remote-loader-plugin/) might meet your needs.
33,083,616
As I'm writing more and more Groovy to use with the Jenkins Workflow plugin I've started getting to the point where I've got re-usable code that could be used in multiple scripts. What would be the best way of sharing this code? Is it possible to produce my own .jar with the shared code in it and utilize this from within the Workflow script? Or is there a simpler way?
2015/10/12
[ "https://Stackoverflow.com/questions/33083616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3617723/" ]
I've actually got this working in the end by using our own git repo and putting a symlink into workflow-libs/src that points at that repo.
The [Workflow Remote File Loader plugin](https://github.com/jenkinsci/workflow-remote-loader-plugin/) might meet your needs.
43,423,508
I am really struggling to optimize this query: ``` SELECT wins / (wins + COUNT(loosers.match_id) + 0.) winrate, wins + COUNT(loosers.match_id) matches, winners.winning_champion_one_id, winners.winning_champion_two_id, winners.winning_champion_three_id, winners.winning_champion_four_id, winners.winning_champion_five_id FROM ( SELECT COUNT(match_id) wins, winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id FROM matches WHERE 157 IN (winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id) GROUP BY winning_champion_one_id, winning_champion_two_id, winning_champion_three_id, winning_champion_four_id, winning_champion_five_id ) winners LEFT OUTER JOIN matches loosers ON winners.winning_champion_one_id = loosers.loosing_champion_one_id AND winners.winning_champion_two_id = loosers.loosing_champion_two_id AND winners.winning_champion_three_id = loosers.loosing_champion_three_id AND winners.winning_champion_four_id = loosers.loosing_champion_four_id AND winners.winning_champion_five_id = loosers.loosing_champion_five_id GROUP BY winners.wins, winners.winning_champion_one_id, winners.winning_champion_two_id, winners.winning_champion_three_id, winners.winning_champion_four_id, winners.winning_champion_five_id HAVING wins + COUNT(loosers.match_id) >= 20 ORDER BY winrate DESC, matches DESC LIMIT 1; ``` And this is the output of `EXPLAIN (BUFFERS, ANALYZE)`: ``` Limit (cost=72808.80..72808.80 rows=1 width=58) (actual time=1478.749..1478.749 rows=1 loops=1) Buffers: shared hit=457002 -> Sort (cost=72808.80..72837.64 rows=11535 width=58) (actual time=1478.747..1478.747 rows=1 loops=1) " Sort Key: ((((count(matches.match_id)))::numeric / ((((count(matches.match_id)) + count(loosers.match_id)))::numeric + '0'::numeric))) DESC, (((count(matches.match_id)) + count(loosers.match_id))) DESC" Sort Method: top-N heapsort Memory: 25kB Buffers: 
shared hit=457002 -> HashAggregate (cost=72462.75..72751.12 rows=11535 width=58) (actual time=1448.941..1478.643 rows=83 loops=1) " Group Key: (count(matches.match_id)), matches.winning_champion_one_id, matches.winning_champion_two_id, matches.winning_champion_three_id, matches.winning_champion_four_id, matches.winning_champion_five_id" Filter: (((count(matches.match_id)) + count(loosers.match_id)) >= 20) Rows Removed by Filter: 129131 Buffers: shared hit=457002 -> Nested Loop Left Join (cost=9857.76..69867.33 rows=115352 width=26) (actual time=288.086..1309.687 rows=146610 loops=1) Buffers: shared hit=457002 -> HashAggregate (cost=9857.33..11010.85 rows=115352 width=18) (actual time=288.056..408.317 rows=129214 loops=1) " Group Key: matches.winning_champion_one_id, matches.winning_champion_two_id, matches.winning_champion_three_id, matches.winning_champion_four_id, matches.winning_champion_five_id" Buffers: shared hit=22174 -> Bitmap Heap Scan on matches (cost=1533.34..7455.69 rows=160109 width=18) (actual time=26.618..132.844 rows=161094 loops=1) Recheck Cond: ((157 = winning_champion_one_id) OR (157 = winning_champion_two_id) OR (157 = winning_champion_three_id) OR (157 = winning_champion_four_id) OR (157 = winning_champion_five_id)) Heap Blocks: exact=21594 Buffers: shared hit=22174 -> BitmapOr (cost=1533.34..1533.34 rows=164260 width=0) (actual time=22.190..22.190 rows=0 loops=1) Buffers: shared hit=580 -> Bitmap Index Scan on matches_winning_champion_one_id_index (cost=0.00..35.03 rows=4267 width=0) (actual time=0.045..0.045 rows=117 loops=1) Index Cond: (157 = winning_champion_one_id) Buffers: shared hit=3 -> Bitmap Index Scan on matches_winning_champion_two_id_index (cost=0.00..47.22 rows=5772 width=0) (actual time=0.665..0.665 rows=3010 loops=1) Index Cond: (157 = winning_champion_two_id) Buffers: shared hit=13 -> Bitmap Index Scan on matches_winning_champion_three_id_index (cost=0.00..185.53 rows=22840 width=0) (actual time=3.824..3.824 rows=23893 
loops=1) Index Cond: (157 = winning_champion_three_id) Buffers: shared hit=89 -> Bitmap Index Scan on matches_winning_champion_four_id_index (cost=0.00..537.26 rows=66257 width=0) (actual time=8.069..8.069 rows=67255 loops=1) Index Cond: (157 = winning_champion_four_id) Buffers: shared hit=244 -> Bitmap Index Scan on matches_winning_champion_five_id_index (cost=0.00..528.17 rows=65125 width=0) (actual time=9.577..9.577 rows=67202 loops=1) Index Cond: (157 = winning_champion_five_id) Buffers: shared hit=231 -> Index Scan using matches_loosing_champion_ids_index on matches loosers (cost=0.43..0.49 rows=1 width=18) (actual time=0.006..0.006 rows=0 loops=129214) Index Cond: ((matches.winning_champion_one_id = loosing_champion_one_id) AND (matches.winning_champion_two_id = loosing_champion_two_id) AND (matches.winning_champion_three_id = loosing_champion_three_id) AND (matches.winning_champion_four_id = loosing_champion_four_id) AND (matches.winning_champion_five_id = loosing_champion_five_id)) Buffers: shared hit=434828 Planning time: 0.584 ms Execution time: 1479.779 ms ``` Table and index definitions: ``` create table matches ( match_id bigint not null, winning_champion_one_id smallint, winning_champion_two_id smallint, winning_champion_three_id smallint, winning_champion_four_id smallint, winning_champion_five_id smallint, loosing_champion_one_id smallint, loosing_champion_two_id smallint, loosing_champion_three_id smallint, loosing_champion_four_id smallint, loosing_champion_five_id smallint, constraint matches_match_id_pk primary key (match_id) ); create index matches_winning_champion_one_id_index on matches (winning_champion_one_id); create index matches_winning_champion_two_id_index on matches (winning_champion_two_id); create index matches_winning_champion_three_id_index on matches (winning_champion_three_id); create index matches_winning_champion_four_id_index on matches (winning_champion_four_id); create index matches_winning_champion_five_id_index on matches 
(winning_champion_five_id); create index matches_loosing_champion_ids_index on matches (loosing_champion_one_id, loosing_champion_two_id, loosing_champion_three_id, loosing_champion_four_id, loosing_champion_five_id); create index matches_loosing_champion_one_id_index on matches (loosing_champion_one_id); create index matches_loosing_champion_two_id_index on matches (loosing_champion_two_id); create index matches_loosing_champion_three_id_index on matches (loosing_champion_three_id); create index matches_loosing_champion_four_id_index on matches (loosing_champion_four_id); create index matches_loosing_champion_five_id_index on matches (loosing_champion_five_id); ``` The table can have 100m+ rows. At the moment it does have about 20m rows. Current size of table and indexes: ``` public.matches, 2331648 rows, 197 MB public.matches_riot_match_id_pk, 153 MB public.matches_loosing_champion_ids_index, 136 MB public.matches_loosing_champion_four_id_index, 113 MB public.matches_loosing_champion_five_id_index, 113 MB public.matches_winning_champion_one_id_index, 113 MB public.matches_winning_champion_five_id_index, 113 MB public.matches_winning_champion_three_id_index, 112 MB public.matches_loosing_champion_three_id_index, 112 MB public.matches_winning_champion_four_id_index, 112 MB public.matches_loosing_champion_one_id_index, 112 MB public.matches_winning_champion_two_id_index, 112 MB public.matches_loosing_champion_two_id_index, 112 MB ``` These are the only changes I made to `postgresql.conf`: ``` max_connections = 50 shared_buffers = 6GB effective_cache_size = 18GB work_mem = 125829kB maintenance_work_mem = 1536MB min_wal_size = 1GB max_wal_size = 2GB checkpoint_completion_target = 0.7 wal_buffers = 16MB default_statistics_target = 100 max_parallel_workers_per_gather = 8 min_parallel_relation_size = 1 ``` There is probably something I am overlooking. **EDIT:** For anyone wondering: the best approach was the `UNION ALL` approach.
The suggested schema of Erwin unfortunately doesn't work well. Here is the `EXPLAIN (ANALYZE, BUFFERS)` output of the suggested schema: ``` Limit (cost=2352157.06..2352157.06 rows=1 width=48) (actual time=1976.709..1976.710 rows=1 loops=1) Buffers: shared hit=653004 -> Sort (cost=2352157.06..2352977.77 rows=328287 width=48) (actual time=1976.708..1976.708 rows=1 loops=1) " Sort Key: (((((count(*)))::numeric * 1.0) / (((count(*)) + l.loss))::numeric)) DESC, (((count(*)) + l.loss)) DESC" Sort Method: top-N heapsort Memory: 25kB Buffers: shared hit=653004 -> Nested Loop (cost=2.10..2350515.62 rows=328287 width=48) (actual time=0.553..1976.294 rows=145 loops=1) Buffers: shared hit=653004 -> GroupAggregate (cost=1.67..107492.42 rows=492431 width=16) (actual time=0.084..1409.450 rows=154547 loops=1) Group Key: w.winner Buffers: shared hit=188208 -> Merge Join (cost=1.67..100105.96 rows=492431 width=8) (actual time=0.061..1301.578 rows=199530 loops=1) Merge Cond: (tm.team_id = w.winner) Buffers: shared hit=188208 -> Index Only Scan using team_member_champion_team_idx on team_member tm (cost=0.56..8978.79 rows=272813 width=8) (actual time=0.026..103.842 rows=265201 loops=1) Index Cond: (champion_id = 157) Heap Fetches: 0 Buffers: shared hit=176867 -> Index Only Scan using match_winner_loser_idx on match w (cost=0.43..79893.82 rows=2288093 width=8) (actual time=0.013..597.331 rows=2288065 loops=1) Heap Fetches: 0 Buffers: shared hit=11341 -> Subquery Scan on l (cost=0.43..4.52 rows=1 width=8) (actual time=0.003..0.003 rows=0 loops=154547) Filter: (((count(*)) + l.loss) > 19) Rows Removed by Filter: 0 Buffers: shared hit=464796 -> GroupAggregate (cost=0.43..4.49 rows=2 width=16) (actual time=0.003..0.003 rows=0 loops=154547) Group Key: l_1.loser Buffers: shared hit=464796 -> Index Only Scan using match_loser_winner_idx on match l_1 (cost=0.43..4.46 rows=2 width=8) (actual time=0.002..0.002 rows=0 loops=154547) Index Cond: (loser = w.winner) Heap Fetches: 0 Buffers: shared 
hit=464796 Planning time: 0.634 ms Execution time: 1976.792 ms ``` And now with the `UNION ALL` approach and the new schema: ``` Limit (cost=275211.80..275211.80 rows=1 width=48) (actual time=3540.420..3540.421 rows=1 loops=1) Buffers: shared hit=199478 CTE t -> Index Only Scan using team_member_champion_team_idx on team_member (cost=0.56..8978.79 rows=272813 width=8) (actual time=0.027..103.732 rows=265201 loops=1) Index Cond: (champion_id = 157) Heap Fetches: 0 Buffers: shared hit=176867 -> Sort (cost=266233.01..266233.51 rows=200 width=48) (actual time=3540.417..3540.417 rows=1 loops=1) " Sort Key: ((((count((true)))::numeric * 1.0) / (count(*))::numeric)) DESC, (count(*)) DESC" Sort Method: top-N heapsort Memory: 25kB Buffers: shared hit=199478 -> HashAggregate (cost=266228.01..266232.01 rows=200 width=48) (actual time=3455.112..3540.301 rows=145 loops=1) Group Key: t.team_id Filter: (count(*) > 19) Rows Removed by Filter: 265056 Buffers: shared hit=199478 -> Append (cost=30088.37..254525.34 rows=936214 width=9) (actual time=315.399..3137.115 rows=386575 loops=1) Buffers: shared hit=199478 -> Merge Join (cost=30088.37..123088.80 rows=492454 width=9) (actual time=315.398..1583.746 rows=199530 loops=1) Merge Cond: (match.winner = t.team_id) Buffers: shared hit=188208 -> Index Only Scan using match_winner_loser_idx on match (cost=0.43..79893.82 rows=2288093 width=8) (actual time=0.033..583.016 rows=2288065 loops=1) Heap Fetches: 0 Buffers: shared hit=11341 -> Sort (cost=30087.94..30769.97 rows=272813 width=8) (actual time=315.333..402.516 rows=310184 loops=1) Sort Key: t.team_id Sort Method: quicksort Memory: 24720kB Buffers: shared hit=176867 -> CTE Scan on t (cost=0.00..5456.26 rows=272813 width=8) (actual time=0.030..240.150 rows=265201 loops=1) Buffers: shared hit=176867 -> Merge Join (cost=30088.37..122074.39 rows=443760 width=9) (actual time=134.118..1410.484 rows=187045 loops=1) Merge Cond: (match_1.loser = t_1.team_id) Buffers: shared hit=11270 -> Index 
Only Scan using match_loser_winner_idx on match match_1 (cost=0.43..79609.82 rows=2288093 width=8) (actual time=0.025..589.773 rows=2288060 loops=1) Heap Fetches: 0 Buffers: shared hit=11270 -> Sort (cost=30087.94..30769.97 rows=272813 width=8) (actual time=134.076..219.529 rows=303364 loops=1) Sort Key: t_1.team_id Sort Method: quicksort Memory: 24720kB -> CTE Scan on t t_1 (cost=0.00..5456.26 rows=272813 width=8) (actual time=0.003..60.179 rows=265201 loops=1) Planning time: 0.401 ms Execution time: 3548.072 ms ```
2017/04/15
[ "https://Stackoverflow.com/questions/43423508", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5851426/" ]
Your query and explain output don't look so bad. Still, a couple of observations: 1. An [**index-only scan**](https://www.postgresql.org/docs/current/indexes-index-only-scans.html) instead of an index scan on `matches_loosing_champion_ids_index` would be faster. The reason you don't see that: the useless `count(match_id)`. 2. 5 bitmap index scans + BitmapOR step are pretty fast but a single bitmap index scan would be faster. 3. The most expensive part in *this* query plan is the `Nested Loop Left Join`. Might be different for other players. With your schema ---------------- ### Query 1: `LEFT JOIN LATERAL` This way, we aggregate before we join and don't need another `GROUP BY`. Also fewer join operations. And `count(*)` should unblock index-only scans: ``` SELECT player1, player2, player3, player4, player5 , ((win * 1.0) / (win + loss))::numeric(5,5) AS winrate , win + loss AS matches FROM ( SELECT winning_champion_one_id AS player1 , winning_champion_two_id AS player2 , winning_champion_three_id AS player3 , winning_champion_four_id AS player4 , winning_champion_five_id AS player5 , COUNT(*) AS win -- see below FROM matches WHERE 157 IN (winning_champion_one_id , winning_champion_two_id , winning_champion_three_id , winning_champion_four_id , winning_champion_five_id) GROUP BY 1,2,3,4,5 ) w LEFT JOIN LATERAL ( SELECT COUNT(*) AS loss -- see below FROM matches WHERE loosing_champion_one_id = w.player1 AND loosing_champion_two_id = w.player2 AND loosing_champion_three_id = w.player3 AND loosing_champion_four_id = w.player4 AND loosing_champion_five_id = w.player5 GROUP BY loosing_champion_one_id , loosing_champion_two_id , loosing_champion_three_id , loosing_champion_four_id , loosing_champion_five_id ) l ON true WHERE win + loss > 19 ORDER BY winrate DESC, matches DESC LIMIT 1; ``` **`count(*)`**: is slightly shorter and faster in Postgres, doing the same as `count(match_id)` here, because `match_id` is never NULL. 
Removing the only reference to `match_id` allows an **index-only scan** on `matches_loosing_champion_ids_index`! Some other preconditions must be met ... ### Query 2: `UNION ALL` Another way around the expensive `Nested Loop Left Join`, and a single `GROUP BY`. But we add 5 more bitmap index scans. May or may not be faster: ``` SELECT player1, player2, player3, player4, player5 ,(count(win) * 1.0) / count(*) AS winrate -- I would round ... , count(*) AS matches FROM ( SELECT winning_champion_one_id AS player1 , winning_champion_two_id AS player2 , winning_champion_three_id AS player3 , winning_champion_four_id AS player4 , winning_champion_five_id AS player5 , TRUE AS win FROM matches WHERE 157 IN (winning_champion_one_id , winning_champion_two_id , winning_champion_three_id , winning_champion_four_id , winning_champion_five_id) UNION ALL SELECT loosing_champion_one_id , loosing_champion_two_id , loosing_champion_three_id , loosing_champion_four_id , loosing_champion_five_id , NULL AS win -- following "count(win)" ignores NULL values FROM matches WHERE 157 IN (loosing_champion_one_id , loosing_champion_two_id , loosing_champion_three_id , loosing_champion_four_id , loosing_champion_five_id) ) m GROUP BY 1,2,3,4,5 HAVING count(*) > 19 -- min 20 matches -- AND count(win) > 0 -- min 1 win -- see below! ORDER BY winrate DESC, matches DESC LIMIT 1; ``` `AND count(win) > 0` is commented out, because it's redundant, while you pick the single best `winrate` anyways. Different schema ---------------- I really would use a different schema to begin with: ``` CREATE TABLE team ( team_id serial PRIMARY KEY -- or bigserial if you expect > 2^31 distinct teams -- more attributes? ); CREATE TABLE player ( player_id smallserial PRIMARY KEY -- more attributes? 
); CREATE TABLE team_member ( team_id integer REFERENCES team , player_id smallint REFERENCES player , team_pos smallint NOT NULL CHECK (team_pos BETWEEN 1 AND 5) -- only if position matters , PRIMARY KEY (team_id, player_id) , UNIQUE (team_id, team_pos) ); CREATE INDEX team_member_player_team_idx on team_member (player_id, team_id); -- Enforce 5 players per team. Various options, different question. CREATE TABLE match ( match_id bigserial PRIMARY KEY , winner integer NOT NULL REFERENCES team , loser integer NOT NULL REFERENCES team , CHECK (winner <> loser) -- wouldn't make sense ); CREATE INDEX match_winner_loser_idx ON match (winner, loser); CREATE INDEX match_loser_winner_idx ON match (loser, winner); ``` Subsidiary tables add to the disk footprint, but the main table is a bit smaller. And most importantly, you need fewer indexes, which should be substantially smaller overall for your cardinalities. ### Query We don't need any other indexes for this equivalent query. Much simpler and presumably faster now: ``` SELECT winner , win * 1.0/ (win + loss) AS winrate , win + loss AS matches FROM ( SELECT w.winner, count(*) AS win FROM team_member tm JOIN match w ON w.winner = tm.team_id WHERE tm.player_id = 157 GROUP BY w.winner ) w LEFT JOIN LATERAL ( SELECT count(*) AS loss FROM match l WHERE l.loser = w.winner GROUP BY l.loser ) l ON true WHERE win + loss > 19 ORDER BY winrate DESC, matches DESC LIMIT 1; ``` Join the result to `team_member` to get individual players. 
You can also try the corresponding `UNION ALL` technique from above: ``` WITH t AS ( SELECT team_id FROM team_member WHERE player_id = 157 -- provide player here ) SELECT team_id ,(count(win) * 1.0) / count(*) AS winrate , count(*) AS matches FROM ( SELECT t.team_id, TRUE AS win FROM t JOIN match ON winner = t.team_id UNION ALL SELECT t.team_id, NULL AS win FROM t JOIN match ON loser = t.team_id ) m GROUP BY 1 HAVING count(*) > 19 -- min 20 matches ORDER BY winrate DESC, matches DESC LIMIT 1; ``` ### `bloom` index I briefly considered a [bloom index](https://www.postgresql.org/docs/current/bloom.html) for your predicate: ``` WHERE 157 IN (loosing_champion_one_id , loosing_champion_two_id , loosing_champion_three_id , loosing_champion_four_id , loosing_champion_five_id) ``` Didn't test, probably won't pay for just 5 `smallint` columns.
The execution plan looks pretty good in my opinion. What you could try is to see if performance improves when a nested loop join is avoided. To test this, run ``` SET enable_nestloop = off; ``` before your query and see if that improves the speed. Other than that, I cannot think of any improvements.
71,058,311
Similar to this question, but in XSLT 2.0 or 3.0, and I want to break on the last non-word character (like the regex \W) [Finding the last occurrence of a string xslt 1.0](https://stackoverflow.com/questions/11166784/xslt-1-0-finding-the-last-occurence-and-taking-string-before) The input is `REMOVE-THIS-IS-A-TEST-LINE,XXX,XXXXX` The desired output is: ``` REMOVE-THIS-IS-A-TEST-LINE,XXX, ``` This works for one delimiter at a time, but I need to break on at least commas, spaces and dashes. ``` substring('REMOVE-THIS-IS-A-TEST-LINE,XXX,XXXXX',1,index-of(string-to-codepoints('REMOVE-THIS-IS-A-TEST-LINE,XXX,XXXXX'),string-to-codepoints(' '))[last()]) ``` I am using oxygen with saxon 9.9EE and antenna house.
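The "break on the last non-word character" requirement reduces to a greedy regex match — sketched here in Python for illustration (the question itself targets XSLT 2.0/3.0, where the analogous call would presumably be `replace(., '^(.*\W).*$', '$1')`):

```python
import re

s = "REMOVE-THIS-IS-A-TEST-LINE,XXX,XXXXX"

# Greedy .* forces \W to match the LAST non-word character,
# so group(1) is everything up to and including it.
m = re.match(r"(.*\W)", s)
print(m.group(1))  # REMOVE-THIS-IS-A-TEST-LINE,XXX,
```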
2022/02/10
[ "https://Stackoverflow.com/questions/71058311", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3826797/" ]
I think I figured it out myself. In your YAML file, change the target vmImage from windows-latest to windows-2019. ``` pool: vmImage: 'windows-2019' ```
It might be caused by environment variables on the Azure agent server (build server). You can review environment variables via Project Settings - Agent pools - your agent pool - Agents tab - agent - Capabilities tab [![enter image description here](https://i.stack.imgur.com/ISrBA.png)](https://i.stack.imgur.com/ISrBA.png) Check the value of the MSBuild variable to see if it points to "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Current\Bin"; if so, log in to the Azure agent server and update the MSBuild variable to point to VS 2019.
246,639
I'm trying to lower radon gas levels at home. I currently have one Airthings Wave Plus device in one room and I'm waiting for a second one to triangulate radon sources, but I'm looking for another, hopefully faster, way to solve this problem because the manufacturer says that to get a 10% accurate reading (no definition of "accurate") you need to leave the devices 7 days in the same place. Between that and the 24h rolling average charting they do, they feel like a clunky tool, and the industrial tools I've seen seem too expensive for what I'm trying to do. Is there a way to "color" a gas like radon that could let me temporarily see the house as a heatmap and identify where the gas is coming into the house? Thanks
2022/03/24
[ "https://diy.stackexchange.com/questions/246639", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/149964/" ]
Radon is a noble gas and won't be affected by dyes like that. You could rent a smoke machine and see how the air moves. However, the source of the smoke will be of your choosing, not where the radon is emitting from. Also, the smoke tends to stink, and if airflow is too low it's difficult to differentiate between natural dissipation and actual airflow.
There is no practical way to color Radon gas which enters the basement of a house. But if a gas or oil burner, which depends on room air, is working in the basement, there would be a pretty high air exchange rate in the cold season. And also in the warm season, if the warm water is made by that burner. The measuring results/triangulation in different locations could be misleading in this case, depending on the locations of the openings and the locations where Radon enters the basement. Some modern burners do get their air via a double walled round stainless steel tube (inlet) chimney, in this case the air exchange rate in the basement is not affected. Otherwise airing via (small) openings on opposite sides of a basement (west, east is optimal) should decrease the danger of accumulating Radon gas.
246,639
I'm trying to lower radon gas levels at home. I currently have one Airthings Wave Plus device in one room and I'm waiting for a second one to triangulate radon sources, but I'm looking for another, hopefully faster, way to solve this problem because the manufacturer says that to get a 10% accurate reading (no definition of "accurate") you need to leave the devices 7 days in the same place. Between that and the 24h rolling average charting they do, they feel like a clunky tool, and the industrial tools I've seen seem too expensive for what I'm trying to do. Is there a way to "color" a gas like radon that could let me temporarily see the house as a heatmap and identify where the gas is coming into the house? Thanks
2022/03/24
[ "https://diy.stackexchange.com/questions/246639", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/149964/" ]
Radon mitigation best practices are well defined - radon is heavy and tends to accumulate at the lowest point of the house where there is no ventilation. Really I don't see much point in trying to find the source. [![enter image description here](https://i.stack.imgur.com/2VCxa.gif)](https://i.stack.imgur.com/2VCxa.gif) If you are in Andorra then your predicted radon levels are high and you should follow the [best practices for radon mitigation](https://www.caloryfrio.com/construccion-sostenible/rehabilitacion-de-edificios/que-es-gas-radon-gas-invisible-letal-como-prevenirlo.html). Air sealing and depressurization/ventilation of your sub slab. Don't have bedrooms in the basement / ground level. If you have radon gas you want to minimize the time you are exposed to it and sleep is likely the majority of your indoor time. [![enter image description here](https://i.stack.imgur.com/6UzUk.jpg)](https://i.stack.imgur.com/6UzUk.jpg)
Radon is a noble gas and won't be affected by dyes like that. You could rent a smoke machine and see how the air moves. However, the source of the smoke will be of your choosing, not where the radon is emitting from. Also, the smoke tends to stink, and if airflow is too low it's difficult to differentiate between natural dissipation and actual airflow.
246,639
I'm trying to lower radon gas levels at home. I currently have one Airthings Wave Plus device in one room and I'm waiting for a second one to triangulate radon sources, but I'm looking for another, hopefully faster, way to solve this problem because the manufacturer says that to get a 10% accurate reading (no definition of "accurate") you need to leave the devices 7 days in the same place. Between that and the 24h rolling average charting they do, they feel like a clunky tool, and the industrial tools I've seen seem too expensive for what I'm trying to do. Is there a way to "color" a gas like radon that could let me temporarily see the house as a heatmap and identify where the gas is coming into the house? Thanks
2022/03/24
[ "https://diy.stackexchange.com/questions/246639", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/149964/" ]
Radon is a noble gas and won't be affected by dyes like that. You could rent a smoke machine and see how the air moves. However, the source of the smoke will be of your choosing, not where the radon is emitting from. Also, the smoke tends to stink, and if airflow is too low it's difficult to differentiate between natural dissipation and actual airflow.
Radon is produced in minuscule traces by radium. Radium is an alkaline earth metal like calcium, with very similar chemical properties, so where there is radium it tends to be found with calcium. Limestone is mostly calcium and may contain traces of radium. That radium produces radon VERY, VERY slowly. Because radon is a gas, it seeps out of cracks in the limestone, almost always with water. So water from limestone formations is a likely source of radon: ground water and well water from limestone may contain it. A water sump in a basement is a possible source. The other most likely place is the shower, if it is supplied from a limestone reservoir. The diagram from Natural Resources Canada shows this very well; use the diagram to search for radon.
246,639
I'm trying to lower radon gas levels at home. I currently have one Airthings Wave Plus device in one room and I'm waiting for a second one to triangulate radon sources, but I'm looking for another, hopefully faster, way to solve this problem because the manufacturer says that to get a 10% accurate reading (no definition of "accurate") you need to leave the devices 7 days in the same place. Between that and the 24h rolling average charting they do, they feel like a clunky tool, and the industrial tools I've seen seem too expensive for what I'm trying to do. Is there a way to "color" a gas like radon that could let me temporarily see the house as a heatmap and identify where the gas is coming into the house? Thanks
2022/03/24
[ "https://diy.stackexchange.com/questions/246639", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/149964/" ]
Radon is a noble gas and won't be affected by dyes like that. You could rent a smoke machine and see how the air moves. However, the source of the smoke will be of your choosing, not where the radon is emitting from. Also, the smoke tends to stink, and if airflow is too low it's difficult to differentiate between natural dissipation and actual airflow.
There are tools like the Radon Sniffer which don't color the gas but give you radon levels in as little as 15 seconds; they cost about $1750. That's too expensive for my purpose, but it might be useful to others.
246,639
I'm trying to lower radon gas levels at home. I currently have one Airthings Wave Plus device in one room and I'm waiting for a second one to triangulate radon sources, but I'm looking for another, hopefully faster, way to solve this problem because the manufacturer says that to get a 10% accurate reading (no definition of "accurate") you need to leave the devices 7 days in the same place. Between that and the 24h rolling average charting they do, they feel like a clunky tool, and the industrial tools I've seen seem too expensive for what I'm trying to do. Is there a way to "color" a gas like radon that could let me temporarily see the house as a heatmap and identify where the gas is coming into the house? Thanks
2022/03/24
[ "https://diy.stackexchange.com/questions/246639", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/149964/" ]
Radon mitigation best practices are well defined - radon is heavy and tends to accumulate at the lowest point of the house where there is no ventilation. Really I don't see much point in trying to find the source. [![enter image description here](https://i.stack.imgur.com/2VCxa.gif)](https://i.stack.imgur.com/2VCxa.gif) If you are in Andorra then your predicted radon levels are high and you should follow the [best practices for radon mitigation](https://www.caloryfrio.com/construccion-sostenible/rehabilitacion-de-edificios/que-es-gas-radon-gas-invisible-letal-como-prevenirlo.html). Air sealing and depressurization/ventilation of your sub slab. Don't have bedrooms in the basement / ground level. If you have radon gas you want to minimize the time you are exposed to it and sleep is likely the majority of your indoor time. [![enter image description here](https://i.stack.imgur.com/6UzUk.jpg)](https://i.stack.imgur.com/6UzUk.jpg)
There is no practical way to color Radon gas which enters the basement of a house. But if a gas or oil burner, which depends on room air, is working in the basement, there would be a pretty high air exchange rate in the cold season. And also in the warm season, if the warm water is made by that burner. The measuring results/triangulation in different locations could be misleading in this case, depending on the locations of the openings and the locations where Radon enters the basement. Some modern burners do get their air via a double walled round stainless steel tube (inlet) chimney, in this case the air exchange rate in the basement is not affected. Otherwise airing via (small) openings on opposite sides of a basement (west, east is optimal) should decrease the danger of accumulating Radon gas.
246,639
I'm trying to lower radon gas levels at home. I currently have one Airthings Wave Plus device in one room and I'm waiting for a second one to triangulate radon sources, but I'm looking for another, hopefully faster, way to solve this problem because the manufacturer says that to get a 10% accurate reading (no definition of "accurate") you need to leave the devices 7 days in the same place. Between that and the 24h rolling average charting they do, they feel like a clunky tool, and the industrial tools I've seen seem too expensive for what I'm trying to do. Is there a way to "color" a gas like radon that could let me temporarily see the house as a heatmap and identify where the gas is coming into the house? Thanks
2022/03/24
[ "https://diy.stackexchange.com/questions/246639", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/149964/" ]
Radon mitigation best practices are well defined - radon is heavy and tends to accumulate at the lowest point of the house where there is no ventilation. Really I don't see much point in trying to find the source. [![enter image description here](https://i.stack.imgur.com/2VCxa.gif)](https://i.stack.imgur.com/2VCxa.gif) If you are in Andorra then your predicted radon levels are high and you should follow the [best practices for radon mitigation](https://www.caloryfrio.com/construccion-sostenible/rehabilitacion-de-edificios/que-es-gas-radon-gas-invisible-letal-como-prevenirlo.html). Air sealing and depressurization/ventilation of your sub slab. Don't have bedrooms in the basement / ground level. If you have radon gas you want to minimize the time you are exposed to it and sleep is likely the majority of your indoor time. [![enter image description here](https://i.stack.imgur.com/6UzUk.jpg)](https://i.stack.imgur.com/6UzUk.jpg)
Radon is produced in minuscule traces by radium. Radium is an alkaline earth metal like calcium, with very similar chemical properties, so where there is radium it tends to be found with calcium. Limestone is mostly calcium and may contain traces of radium. That radium produces radon VERY, VERY slowly. Because radon is a gas, it seeps out of cracks in the limestone, almost always with water. So water from limestone formations is a likely source of radon: ground water and well water from limestone may contain it. A water sump in a basement is a possible source. The other most likely place is the shower, if it is supplied from a limestone reservoir. The diagram from Natural Resources Canada shows this very well; use the diagram to search for radon.
246,639
I'm trying to lower radon gas levels at home. I currently have one Airthings Wave Plus device in one room and I'm waiting for a second one to triangulate radon sources, but I'm looking for another, hopefully faster, way to solve this problem because the manufacturer says that to get a 10% accurate reading (no definition of "accurate") you need to leave the devices 7 days in the same place. Between that and the 24h rolling average charting they do, they feel like a clunky tool, and the industrial tools I've seen seem too expensive for what I'm trying to do. Is there a way to "color" a gas like radon that could let me temporarily see the house as a heatmap and identify where the gas is coming into the house? Thanks
2022/03/24
[ "https://diy.stackexchange.com/questions/246639", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/149964/" ]
Radon mitigation best practices are well defined - radon is heavy and tends to accumulate at the lowest point of the house where there is no ventilation. Really I don't see much point in trying to find the source. [![enter image description here](https://i.stack.imgur.com/2VCxa.gif)](https://i.stack.imgur.com/2VCxa.gif) If you are in Andorra then your predicted radon levels are high and you should follow the [best practices for radon mitigation](https://www.caloryfrio.com/construccion-sostenible/rehabilitacion-de-edificios/que-es-gas-radon-gas-invisible-letal-como-prevenirlo.html). Air sealing and depressurization/ventilation of your sub slab. Don't have bedrooms in the basement / ground level. If you have radon gas you want to minimize the time you are exposed to it and sleep is likely the majority of your indoor time. [![enter image description here](https://i.stack.imgur.com/6UzUk.jpg)](https://i.stack.imgur.com/6UzUk.jpg)
There are tools like the Radon Sniffer which don't color the gas but give you radon levels in as little as 15 seconds; they cost about $1750. That's too expensive for my purpose, but it might be useful to others.
29,928,903
Currently my design\_dialog is saving the settings under etc/designs/default/jcr. How do I modify the template so that it saves under etc/designs/(mydesign)/jcr? I was looking at the documentation but couldn't find anything specific on how to ensure the design\_dialog creates the properties under its own design template.
2015/04/28
[ "https://Stackoverflow.com/questions/29928903", "https://Stackoverflow.com", "https://Stackoverflow.com/users/209904/" ]
You can try this with a Grid instead of Stack, also add RowHeight to your ListView ``` <ListView ... RowHeight="55"> ... <ViewCell Height="55"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="44"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="44"/> <!-- for the checkmark --> </Grid.ColumnDefinitions> <Grid Grid.Column="0"> <Grid.RowDefinitions> <RowDefinition Height="5" /> <RowDefinition Height="22" /> <RowDefinition Height="17" /> </Grid.RowDefinitions> <BoxView Grid.Row="0" WidthRequest="44" HeightRequest="5" BackgroundColor="Purple"/> <!-- experiment with the vertical alignment to get it right --> <Label Grid.Row="1" Text="AUG" .../> <Label Grid.Row="2" Text="31" .../> </Grid> <Grid Grid.Column="0"> <Grid.RowDefinitions> <RowDefinition Height="22" /> <RowDefinition Height="22" /> </Grid.RowDefinitions> <!-- if the vertical alignment doesn't work well add two more rows for top and bottom padding --> <Label Grid.Row="0" Text="Relay for life" VerticalOptions="End" .../> <Label Grid.Row="1" Text="Hope city" VerticalOptions="Start" .../> </Grid> </Grid> </ViewCell> ```
Set HasUnevenRows="True" on the ListView
14,397,788
Default style for DataGridRow is as following: ``` <Style x:Key="{x:Type DataGridRow}" TargetType="{x:Type DataGridRow}"> <Setter Property="Background" Value="{DynamicResource {x:Static SystemColors.WindowBrushKey}}" /> <Setter Property="SnapsToDevicePixels" Value="true"/> <Setter Property="Validation.ErrorTemplate" Value="{x:Null}" /> <Setter Property="ValidationErrorTemplate"> <Setter.Value> <ControlTemplate> <TextBlock Margin="2,0,0,0" VerticalAlignment="Center" Foreground="Red" Text="!" /> </ControlTemplate> </Setter.Value> </Setter> <Setter Property="Template"> ... </Setter> </Style> ``` What I want is to add ToolTip to TextBlock which displays "!" in the row header, and it will get an error message from DataGridRow.Item.Error property (My entity object implements IDataErrorInfo). So I did the following: ``` <TextBlock Margin="2,0,0,0" VerticalAlignment="Center" Foreground="Red" Text="!" ToolTip="{Binding RelativeSource={ RelativeSource FindAncestor, AncestorType={x:Type DataGridRow}}, Path=Item.Error}"/> ``` So far so good. Now, Error property returns multi-line string, so I want to use TextBlock for ToolTip: ``` <TextBlock Margin="2,0,0,0" VerticalAlignment="Center" Foreground="Red" Text="!"> <TextBlock.ToolTip > <TextBlock Text="{Binding RelativeSource={ RelativeSource FindAncestor, AncestorType={x:Type DataGridRow}}, Path=Item.Error}" TextWrapping="Wrap"/> </TextBlock.ToolTip> </TextBlock> ``` But the above won't display error message; the problem seems to be ToolTip is not part of parent element's visual tree. I've read about PlacementTarget etc., but still couldn't get the job done. Can someone show me proper way of doing the above?
2013/01/18
[ "https://Stackoverflow.com/questions/14397788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/85443/" ]
I think the problem is to bind to relative source of the given element's property (PlacementTarget), not the element itself. But RelativeSource markup extension describes the location of the binding source relative to the given element. So, what I did was to set PlacementTarget to the ancestor of original tooltip target: ``` <Setter Property="ValidationErrorTemplate"> <Setter.Value> <ControlTemplate> <TextBlock Margin="2,0,0,0" VerticalAlignment="Center" HorizontalAlignment="Center" TextAlignment="Center" Foreground="Red" Text="!" ToolTipService.PlacementTarget="{Binding RelativeSource={ RelativeSource FindAncestor, AncestorType={x:Type DataGridRow}}}"> <TextBlock.ToolTip > <ToolTip DataContext="{Binding Path=PlacementTarget, RelativeSource={x:Static RelativeSource.Self}}"> <TextBlock Text="{Binding Path=Item.Error}"/> </ToolTip> </TextBlock.ToolTip> </TextBlock> </ControlTemplate> </Setter.Value> </Setter> ``` Now it works as I wanted.
Use the ancestor level in that TextBlock as shown below: ``` <TextBlock Text="{Binding RelativeSource={ RelativeSource FindAncestor, AncestorType={x:Type DataGridRow},AncestorLevel=1}, Path=Item.Error}" TextWrapping="Wrap"/> ```
9,324,723
I want to make a TCP server and send/receive messages to clients **as needed**, not in the OnExecute event of the TCPServer. Sending/receiving messages is not the problem; I do it like this: ``` procedure TFormMain.SendMessage(IP, Msg: string); var I: Integer; begin with TCPServer.Contexts.LockList do try for I := 0 to Count-1 do if TIdContext(Items[I]).Connection.Socket.Binding.PeerIP = IP then begin TIdContext(Items[I]).Connection.IOHandler.WriteBuffer(Msg[1], Length(Msg)); // and/or Read Break; end; finally TCPServer.Contexts.UnlockList; end; end; ``` Note 1: If I don't use OnExecute, the program raises an exception when a client connects. Note 2: If I use OnExecute without doing anything, the CPU usage goes to 100%. Note 3: I don't have a chance to change the TCP clients. So what should I do?
2012/02/17
[ "https://Stackoverflow.com/questions/9324723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62313/" ]
Use OnExecute and if you have nothing to do, Sleep() for a period of time, say 10 milliseconds. Each connection has its own OnExecute handler so this will only affect each individual connection.
The `Indy` component set is designed to emulate *blocking* operation on a network connection. You're supposed to encapsulate all your code in the `OnExecute` event handler. That's supposed to be *easier*, because most protocols are blocking anyway (send command, wait for response, etc.). You apparently don't like its mode of operation; you'd like something that works without blocking. You should consider using a component suite that's designed for the way you intend to use it: give the [ICS](http://www.overbyte.be) suite a try! ICS doesn't use threads; all the work is done in event handlers.
9,324,723
I want to make a TCP server and send/receive messages to clients **as needed**, not in the OnExecute event of the TCPServer. Sending/receiving messages is not the problem; I do it like this: ``` procedure TFormMain.SendMessage(IP, Msg: string); var I: Integer; begin with TCPServer.Contexts.LockList do try for I := 0 to Count-1 do if TIdContext(Items[I]).Connection.Socket.Binding.PeerIP = IP then begin TIdContext(Items[I]).Connection.IOHandler.WriteBuffer(Msg[1], Length(Msg)); // and/or Read Break; end; finally TCPServer.Contexts.UnlockList; end; end; ``` Note 1: If I don't use OnExecute, the program raises an exception when a client connects. Note 2: If I use OnExecute without doing anything, the CPU usage goes to 100%. Note 3: I don't have a chance to change the TCP clients. So what should I do?
2012/02/17
[ "https://Stackoverflow.com/questions/9324723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62313/" ]
Use OnExecute and if you have nothing to do, Sleep() for a period of time, say 10 milliseconds. Each connection has its own OnExecute handler so this will only affect each individual connection.
In the OnExecute handler, you can use thread communication primitives like TEvent and TMonitor to wait until there is data for the client. TMonitor is available since Delphi 2009 and provides methods (Wait, Pulse and PulseAll) to send / receive notifications with minimal CPU usage.
9,324,723
I want to make a TCP server and send/receive messages to clients **as needed**, not in the OnExecute event of the TCPServer. Sending/receiving messages is not the problem; I do it like this: ``` procedure TFormMain.SendMessage(IP, Msg: string); var I: Integer; begin with TCPServer.Contexts.LockList do try for I := 0 to Count-1 do if TIdContext(Items[I]).Connection.Socket.Binding.PeerIP = IP then begin TIdContext(Items[I]).Connection.IOHandler.WriteBuffer(Msg[1], Length(Msg)); // and/or Read Break; end; finally TCPServer.Contexts.UnlockList; end; end; ``` Note 1: If I don't use OnExecute, the program raises an exception when a client connects. Note 2: If I use OnExecute without doing anything, the CPU usage goes to 100%. Note 3: I don't have a chance to change the TCP clients. So what should I do?
2012/02/17
[ "https://Stackoverflow.com/questions/9324723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62313/" ]
Use OnExecute and if you have nothing to do, Sleep() for a period of time, say 10 milliseconds. Each connection has its own OnExecute handler so this will only affect each individual connection.
I had a similar situation with 100% CPU usage, and I solved it by adding an IdThreadComponent and: ``` void __fastcall TForm3::IdThreadComponent1Run(TIdThreadComponent *Sender) { Sleep(10); } ``` Is it right? I am not sure.
9,324,723
I want to make a TCP server and send/receive messages to clients **as needed**, not in the OnExecute event of the TCPServer. Sending/receiving messages is not the problem; I do it like this: ``` procedure TFormMain.SendMessage(IP, Msg: string); var I: Integer; begin with TCPServer.Contexts.LockList do try for I := 0 to Count-1 do if TIdContext(Items[I]).Connection.Socket.Binding.PeerIP = IP then begin TIdContext(Items[I]).Connection.IOHandler.WriteBuffer(Msg[1], Length(Msg)); // and/or Read Break; end; finally TCPServer.Contexts.UnlockList; end; end; ``` Note 1: If I don't use OnExecute, the program raises an exception when a client connects. Note 2: If I use OnExecute without doing anything, the CPU usage goes to 100%. Note 3: I don't have a chance to change the TCP clients. So what should I do?
2012/02/17
[ "https://Stackoverflow.com/questions/9324723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62313/" ]
`TIdTCPServer` requires an `OnExecute` event handler assigned by default. To get around that, you would have to derive a new class from `TIdTCPServer` and override its virtual `CheckOkToBeActive()` method, and should also override the virtual `DoExecute()` to call `Sleep()`. Otherwise, just assign an event handler and have it call `Sleep()`. This is not an effective use of `TIdTCPServer`, though. A better design is to not write your outbound data to clients from inside of your `SendMessage()` method directly. Not only is that error-prone (you are not catching exceptions from `WriteBuffer()`) and blocks `SendMessage()` during writing, but it also serializes your communications (client 2 cannot receive data until client 1 does first). A much more effective design is to give each client its own thread-safe outbound queue, and then have `SendMessage()` put the data into each client's queue as needed. You can then use the `OnExecute` event to check each client's queue and do the actual writing. This way, `SendMessage()` does not get blocked anymore, is less error-prone, and clients can be written to in parallel (like they should be). 
Try something like this: ``` uses ..., IdThreadSafe; type TMyContext = class(TIdServerContext) private FQueue: TIdThreadSafeStringList; FEvent: TEvent; public constructor Create(AConnection: TIdTCPConnection; AYarn: TIdYarn; AList: TThreadList = nil); override; destructor Destroy; override; procedure AddMsgToQueue(const Msg: String); function GetQueuedMsgs: TStrings; end; constructor TMyContext.Create(AConnection: TIdTCPConnection; AYarn: TIdYarn; AList: TThreadList = nil); begin inherited; FQueue := TIdThreadSafeStringList.Create; FEvent := TEvent.Create(nil, True, False, ''); end; destructor TMyContext.Destroy; begin FQueue.Free; FEvent.Free; inherited; end; procedure TMyContext.AddMsgToQueue(const Msg: String); begin with FQueue.Lock do try Add(Msg); FEvent.SetEvent; finally FQueue.Unlock; end; end; function TMyContext.GetQueuedMsgs: TStrings; var List: TStringList; begin Result := nil; if FEvent.WaitFor(1000) <> wrSignaled then Exit; List := FQueue.Lock; try if List.Count > 0 then begin Result := TStringList.Create; try Result.Assign(List); List.Clear; except Result.Free; raise; end; end; FEvent.ResetEvent; finally FQueue.Unlock; end; end; procedure TFormMain.FormCreate(Sender: TObject); begin TCPServer.ContextClass := TMyContext; end; procedure TFormMain.TCPServerExecute(AContext: TIdContext); var List: TStrings; I: Integer; begin List := TMyContext(AContext).GetQueuedMsgs; if List = nil then Exit; try for I := 0 to List.Count-1 do AContext.Connection.IOHandler.Write(List[I]); finally List.Free; end; end; procedure TFormMain.SendMessage(const IP, Msg: string); var I: Integer; begin with TCPServer.Contexts.LockList do try for I := 0 to Count-1 do begin with TMyContext(Items[I]) do begin if Binding.PeerIP = IP then begin AddMsgToQueue(Msg); Break; end; end; end; finally TCPServer.Contexts.UnlockList; end; end; ```
The `Indy` component set is designed to emulate *blocking* operation on a network connection. You're supposed to encapsulate all your code in the `OnExecute` event handler. That's supposed to be *easier*, because most protocols are blocking anyway (send command, wait for response, etc.). You apparently don't like its mode of operation; you'd like something that works without blocking. You should consider using a component suite that's designed for the way you intend to use it: give the [ICS](http://www.overbyte.be) suite a try! ICS doesn't use threads; all the work is done in event handlers.
9,324,723
I want to make a TCP server and send/receive messages to clients **as needed**, not in the OnExecute event of the TCPServer. Sending/receiving messages is not the problem; I do it like this: ``` procedure TFormMain.SendMessage(IP, Msg: string); var I: Integer; begin with TCPServer.Contexts.LockList do try for I := 0 to Count-1 do if TIdContext(Items[I]).Connection.Socket.Binding.PeerIP = IP then begin TIdContext(Items[I]).Connection.IOHandler.WriteBuffer(Msg[1], Length(Msg)); // and/or Read Break; end; finally TCPServer.Contexts.UnlockList; end; end; ``` Note 1: If I don't use OnExecute, the program raises an exception when a client connects. Note 2: If I use OnExecute without doing anything, the CPU usage goes to 100%. Note 3: I don't have a chance to change the TCP clients. So what should I do?
2012/02/17
[ "https://Stackoverflow.com/questions/9324723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62313/" ]
`TIdTCPServer` requires an `OnExecute` event handler assigned by default. To get around that, you would have to derive a new class from `TIdTCPServer` and override its virtual `CheckOkToBeActive()` method, and should also override the virtual `DoExecute()` to call `Sleep()`. Otherwise, just assign an event handler and have it call `Sleep()`. This is not an effective use of `TIdTCPServer`, though. A better design is to not write your outbound data to clients from inside of your `SendMessage()` method directly. Not only is that error-prone (you are not catching exceptions from `WriteBuffer()`) and blocks `SendMessage()` during writing, but it also serializes your communications (client 2 cannot receive data until client 1 does first). A much more effective design is to give each client its own thread-safe outbound queue, and then have `SendMessage()` put the data into each client's queue as needed. You can then use the `OnExecute` event to check each client's queue and do the actual writing. This way, `SendMessage()` does not get blocked anymore, is less error-prone, and clients can be written to in parallel (like they should be). 
Try something like this: ``` uses ..., IdThreadSafe; type TMyContext = class(TIdServerContext) private FQueue: TIdThreadSafeStringList; FEvent: TEvent; public constructor Create(AConnection: TIdTCPConnection; AYarn: TIdYarn; AList: TThreadList = nil); override; destructor Destroy; override; procedure AddMsgToQueue(const Msg: String); function GetQueuedMsgs: TStrings; end; constructor TMyContext.Create(AConnection: TIdTCPConnection; AYarn: TIdYarn; AList: TThreadList = nil); begin inherited; FQueue := TIdThreadSafeStringList.Create; FEvent := TEvent.Create(nil, True, False, ''); end; destructor TMyContext.Destroy; begin FQueue.Free; FEvent.Free; inherited; end; procedure TMyContext.AddMsgToQueue(const Msg: String); begin with FQueue.Lock do try Add(Msg); FEvent.SetEvent; finally FQueue.Unlock; end; end; function TMyContext.GetQueuedMsgs: TStrings; var List: TStringList; begin Result := nil; if FEvent.WaitFor(1000) <> wrSignaled then Exit; List := FQueue.Lock; try if List.Count > 0 then begin Result := TStringList.Create; try Result.Assign(List); List.Clear; except Result.Free; raise; end; end; FEvent.ResetEvent; finally FQueue.Unlock; end; end; procedure TFormMain.FormCreate(Sender: TObject); begin TCPServer.ContextClass := TMyContext; end; procedure TFormMain.TCPServerExecute(AContext: TIdContext); var List: TStrings; I: Integer; begin List := TMyContext(AContext).GetQueuedMsgs; if List = nil then Exit; try for I := 0 to List.Count-1 do AContext.Connection.IOHandler.Write(List[I]); finally List.Free; end; end; procedure TFormMain.SendMessage(const IP, Msg: string); var I: Integer; begin with TCPServer.Contexts.LockList do try for I := 0 to Count-1 do begin with TMyContext(Items[I]) do begin if Binding.PeerIP = IP then begin AddMsgToQueue(Msg); Break; end; end; end; finally TCPServer.Contexts.UnlockList; end; end; ```
In the `OnExecute` handler, you can use thread synchronization primitives such as `TEvent` and `TMonitor` to wait until there is data for the client. `TMonitor` has been available since Delphi 2009 and provides methods (`Wait`, `Pulse` and `PulseAll`) to send and receive notifications with minimal CPU usage.
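The wait/notify idea behind `TEvent`/`TMonitor` is language-agnostic. Here is a minimal Python sketch of the same pattern using a condition variable — an analogy only, with hypothetical names; `TMonitor`'s actual Delphi API differs:

```python
import threading

class ClientQueue:
    """Minimal wait/notify queue: the consumer blocks until a producer pulses."""
    def __init__(self):
        self._items = []
        self._cond = threading.Condition()

    def put(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify()          # analogous to TMonitor.Pulse

    def get(self, timeout=None):
        with self._cond:
            # analogous to TMonitor.Wait: releases the lock while sleeping,
            # so the waiting thread uses essentially no CPU
            self._cond.wait_for(lambda: self._items, timeout=timeout)
            return self._items.pop(0) if self._items else None

q = ClientQueue()
threading.Thread(target=lambda: q.put("hello")).start()
print(q.get(timeout=1))  # "hello"
```

The consumer sleeps inside `wait_for()` instead of spinning, which is exactly what removes the 100% CPU usage seen with an empty busy loop.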
9,324,723
I want to make a TCP server and send/receive messages to clients **as needed**, not in the `OnExecute` event of the TCP server. Sending/receiving a message is not a problem; I do it like this:

```
procedure TFormMain.SendMessage(IP, Msg: string);
var
  I: Integer;
begin
  with TCPServer.Contexts.LockList do
  try
    for I := 0 to Count-1 do
      if TIdContext(Items[I]).Connection.Socket.Binding.PeerIP = IP then
      begin
        TIdContext(Items[I]).Connection.IOHandler.WriteBuffer(Msg[1], Length(Msg)); // and/or Read
        Break;
      end;
  finally
    TCPServer.Contexts.UnlockList;
  end;
end;
```

Note 1: If I don't use `OnExecute`, the program raises an exception when a client connects.

Note 2: If I use `OnExecute` without doing anything, the CPU usage goes to 100%.

Note 3: I don't have a chance to change the TCP clients.

So what should I do?
2012/02/17
[ "https://Stackoverflow.com/questions/9324723", "https://Stackoverflow.com", "https://Stackoverflow.com/users/62313/" ]
`TIdTCPServer` requires an `OnExecute` event handler to be assigned by default. To get around that, you would have to derive a new class from `TIdTCPServer` and override its virtual `CheckOkToBeActive()` method, and you should also override the virtual `DoExecute()` method to call `Sleep()`. Otherwise, just assign an event handler and have it call `Sleep()`. This is not an effective use of `TIdTCPServer`, though.

A better design is to not write your outbound data to clients from inside your `SendMessage()` method directly. Not only is that error-prone (you are not catching exceptions from `WriteBuffer()`) and blocking (`SendMessage()` stalls while writing), it also serializes your communications (client 2 cannot receive data until client 1 does first). A much more effective design is to give each client its own thread-safe outbound queue, and then have `SendMessage()` put the data into each client's queue as needed. You can then use the `OnExecute` event to check each client's queue and do the actual writing. This way, `SendMessage()` no longer blocks, the code is less error-prone, and clients can be written to in parallel (as they should be).
Try something like this:

```
uses
  ..., IdThreadSafe;

type
  TMyContext = class(TIdServerContext)
  private
    FQueue: TIdThreadSafeStringList;
    FEvent: TEvent;
  public
    constructor Create(AConnection: TIdTCPConnection; AYarn: TIdYarn; AList: TThreadList = nil); override;
    destructor Destroy; override;
    procedure AddMsgToQueue(const Msg: String);
    function GetQueuedMsgs: TStrings;
  end;

constructor TMyContext.Create(AConnection: TIdTCPConnection; AYarn: TIdYarn; AList: TThreadList = nil);
begin
  inherited;
  FQueue := TIdThreadSafeStringList.Create;
  FEvent := TEvent.Create(nil, True, False, '');
end;

destructor TMyContext.Destroy;
begin
  FQueue.Free;
  FEvent.Free;
  inherited;
end;

procedure TMyContext.AddMsgToQueue(const Msg: String);
begin
  with FQueue.Lock do
  try
    Add(Msg);
    FEvent.SetEvent;
  finally
    FQueue.Unlock;
  end;
end;

function TMyContext.GetQueuedMsgs: TStrings;
var
  List: TStringList;
begin
  Result := nil;
  if FEvent.WaitFor(1000) <> wrSignaled then Exit;
  List := FQueue.Lock;
  try
    if List.Count > 0 then
    begin
      Result := TStringList.Create;
      try
        Result.Assign(List);
        List.Clear;
      except
        Result.Free;
        raise;
      end;
    end;
    FEvent.ResetEvent;
  finally
    FQueue.Unlock;
  end;
end;

procedure TFormMain.FormCreate(Sender: TObject);
begin
  TCPServer.ContextClass := TMyContext;
end;

procedure TFormMain.TCPServerExecute(AContext: TIdContext);
var
  List: TStrings;
  I: Integer;
begin
  List := TMyContext(AContext).GetQueuedMsgs;
  if List = nil then Exit;
  try
    for I := 0 to List.Count-1 do
      AContext.Connection.IOHandler.Write(List[I]);
  finally
    List.Free;
  end;
end;

procedure TFormMain.SendMessage(const IP, Msg: string);
var
  I: Integer;
begin
  with TCPServer.Contexts.LockList do
  try
    for I := 0 to Count-1 do
    begin
      with TMyContext(Items[I]) do
      begin
        if Binding.PeerIP = IP then
        begin
          AddMsgToQueue(Msg);
          Break;
        end;
      end;
    end;
  finally
    TCPServer.Contexts.UnlockList;
  end;
end;
```
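The per-client outbound queue design is not specific to Delphi or Indy. As a language-neutral illustration, here is the same pattern in Python — the names `send_message` and `drain` are hypothetical, chosen to mirror `SendMessage()` and `GetQueuedMsgs()`, and are not part of any library:

```python
import queue
import threading

# One outbound queue per connected client, keyed by peer IP (illustrative).
clients = {}                 # peer_ip -> queue.Queue
clients_lock = threading.Lock()

def send_message(ip, msg):
    """Non-blocking: just enqueue; the client's own thread does the writing."""
    with clients_lock:
        q = clients.get(ip)
    if q is not None:
        q.put(msg)

def drain(ip, timeout=1.0):
    """Like GetQueuedMsgs: wait up to `timeout` for at least one message,
    then grab everything that is currently queued."""
    q = clients[ip]
    try:
        msgs = [q.get(timeout=timeout)]
    except queue.Empty:
        return []
    while True:
        try:
            msgs.append(q.get_nowait())
        except queue.Empty:
            return msgs

with clients_lock:
    clients["10.0.0.5"] = queue.Queue()
send_message("10.0.0.5", "hi")
print(drain("10.0.0.5"))  # ['hi']
```

As in the Delphi version, the sender never blocks on the socket: each client's serving thread calls `drain()` and performs the actual writes, so slow clients cannot stall `send_message()` or each other.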
I had a similar situation where the CPU usage went to 100%, and it was solved by adding an `IdThreadComponent` and:

```
void __fastcall TForm3::IdThreadComponent1Run(TIdThreadComponent *Sender)
{
    Sleep(10);
}
```

Is it right? I am not sure.
24,337,958
I have a function I want to add to dynamically as the program runs. Let's say I have a function `foo`:

```
function foo()
    Function1()
    Function2()
    Function3()
end
```

and I want to change `foo()` to:

```
function foo()
    Function1()
    Function2()
    Function3()
    Function4()
end
```

later in the program. Is there any way to do this?
2014/06/21
[ "https://Stackoverflow.com/questions/24337958", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3762069/" ]
Just do it. The code that you wrote works fine. Functions in Lua can be redefined as desired.

If you don't know what `foo` does, you can do this:

```
do
    local old = foo
    foo = function()
        old()
        Function4()
    end
end
```

Or perhaps it is clearer to use a table of functions:

```
local F = { Function1, Function2, Function3 }

function foo()
    for i = 1, #F do
        F[i]()
    end
end
```

Later, do

```
F[#F+1] = Function4
```

and you don't need to redefine `foo`.
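Both tricks work in any language where functions are first-class values. For comparison, here is the same idea sketched in Python (illustrative only; the Lua code above is the actual answer):

```python
# Trick 1: wrap the existing function, like `local old = foo` in Lua.
def foo():
    return ["f1", "f2", "f3"]

_old = foo
def foo():
    return _old() + ["f4"]

print(foo())  # ['f1', 'f2', 'f3', 'f4']

# Trick 2: keep a table (list) of steps and extend it, like the `F` version.
steps = [lambda: "f1", lambda: "f2"]

def run():
    return [s() for s in steps]

steps.append(lambda: "f3")
print(run())  # ['f1', 'f2', 'f3']
```

The list-of-steps variant avoids building a chain of wrappers when you extend the function many times.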
This is a supplementary answer with background information.

Lua identifiers are used for global variables, local variables, parameters and table fields. They hold any type of value. Lua functions are values. Lua functions are all anonymous, regardless of the syntax used to define them.

```
function f()
    --...
end
```

is a Lua statement that compiles to a function definition and an assignment to a variable. It's an alternative to

```
f = function()
    --...
end
```

Each time a function definition is executed, it produces a new function value, which is then used in any associated expression or assignment. It should be clear that neither statement necessarily creates a new variable, nor requires the variable to always have the same value, nor requires it to always hold a function value. Also, the function value created need not be held only by the one variable; it can be copied just like any other value.

Also, just like other values, function values are garbage collected. So, if `f` held a function value and is assigned a different value or goes out of scope (say, if it wasn't a global variable), the previous value will be garbage collected once nothing else refers to it.

Without any other context for `function f() end`, we would assume that `f` is a global variable. But that's not necessarily the case. If `f` was an in-scope local or parameter, that is the `f` that would be assigned to.
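Python treats function values the same way, so the key point — redefining a name produces a new function value without disturbing other references to the old one — can be demonstrated like this (a cross-language illustration, not Lua code):

```python
# Functions are ordinary values: assignment copies the reference, and
# redefining a name does not affect other references to the old value.
def f():
    return "old"

g = f          # g now refers to the same function value as f

def f():       # a brand-new function value is created and bound to f
    return "new"

print(f())     # "new"
print(g())     # "old" -- the original value survives via g
```

Once `g` is also rebound or goes out of scope, nothing refers to the first function value any more and it becomes eligible for garbage collection, just as described above for Lua.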
955,155
I have a server running 2012. I have a file in a directory, and I wish to move it to a subdirectory, which I do by dragging and dropping. I get an error telling me I require admin privileges to do this. I search for File Explorer in the TRULY AWFUL 2012 Start menu and find it, but it won't allow me to run it as administrator. How can I drag and drop this file?
2015/08/11
[ "https://superuser.com/questions/955155", "https://superuser.com", "https://superuser.com/users/3765/" ]
How can I click-drag to move a folder that needs admin permissions in explorer? ------------------------------------------------------------------------------- 1. `Win+X` --> `Command prompt (admin)` (alternatively right click the `Start` tile in `Desktop` mode) 2. `explorer` (`Enter`) 3. Using the new administrative `explorer` window, click and drag to move the folder
In order to move something that requires admin privileges, you will have to go through the UAC elevation process. Launching Windows Explorer as an administrator will enable you to drag and drop. This can be done from the right-click context menu, or by launching a command prompt with administrator privileges, typing `explorer.exe`, and hitting Enter.