qid: int64 (min 1, max 74.7M)
question: string (length 0 to 58.3k)
date: string (length 10)
metadata: list
response_j: string (length 2 to 48.3k)
response_k: string (length 2 to 40.5k)
351,085
I know of [How to remove an uninstalled package's dependencies?](https://askubuntu.com/questions/443/how-to-remove-an-uninstalled-packages-dependencies) and I tried ``` apt-get autoremove ``` but that does not remove dependencies that are recommended/suggested by other packages. That is, if I install a package X that recommends Y, but I do not install Y myself, and then I install a package Z that depends on Y, and later I do ``` apt-get remove --auto-remove Z ``` then Y is not automatically removed even though nothing depends on it. (X "picked up" Y, even though it does not depend on it.)
2013/09/28
[ "https://askubuntu.com/questions/351085", "https://askubuntu.com", "https://askubuntu.com/users/195582/" ]
### Overriding APT options Unlike dependencies, automatically installed "recommended" or "suggested" packages may be ignored by `apt-get autoremove`. As described elsewhere, this behavior of APT can be changed in its configuration. Likewise, the configuration of the `apt-get` command can be temporarily changed through the `-o` command line option. This is how you would force autoremove to remove left-over "recommended" and "suggested" packages, in addition to unused dependencies: ``` sudo apt-get autoremove -o APT::Autoremove::RecommendsImportant=0 -o APT::Autoremove::SuggestsImportant=0 ``` ### Caution! Some functionality may be lost. Be prepared to investigate and reinstall things. It may be easier to leave these packages alone. ### Other options To uninstall the 'recommended' and 'suggested' packages solely for a particular package, have a look at the apt history log.
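For the last suggestion, here is a minimal sketch of how one might inspect the APT history log on a typical Debian/Ubuntu system (the log path is standard, but the exact entries depend on your system):

```
# Show recent install/remove transactions, including which packages
# were pulled in alongside the ones you asked for
grep -A3 'Commandline' /var/log/apt/history.log

# Older transactions are rotated and compressed
zgrep -A3 'Commandline' /var/log/apt/history.log.1.gz
```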
Actually the command is: ``` sudo apt-get autoremove <Z> ``` But this has a trick! If any of the dependencies is also recommended/suggested by some other previously installed package, then apt will not remove it. You didn't specify which package it was, but for example, if I were to install the IcedTea plugin, it would install Java/OpenJRE as dependencies. If I uninstall them using `sudo apt-get autoremove icedtea-7-plugin`, you would notice that it won't remove Java/OpenJRE, since LibreOffice also suggests those packages. So, to remove them you have to be more specific about the packages you want to uninstall than a plain `autoremove` allows: ``` sudo apt-get autoremove <Z> <dependency of Z> ``` This way you can be sure your package gets removed. You can also use deborphan to remove some dependencies.
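A rough sketch of the deborphan route (note: by default deborphan only reports orphaned library packages, so review its output before feeding it to apt):

```
sudo apt-get install deborphan
# List orphaned packages that nothing depends on
deborphan
# Remove them after reviewing the list
deborphan | xargs sudo apt-get -y remove --purge
```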
34,301,141
I'd prefer to do the following in R, but am open to (easy-to-learn) other solutions. I have multiple (let's say 99) tab-delimited files (let's call them S1.txt through S99.txt) with tables, all with the exact same format. Each table is ~2,000,000 rows by 5 cols. Here's a toy example: ``` ID Chr Position DP1 DP2 A1 1 123 1.5 2.0 A2 1 124 1.4 0.3 ``` ID by definition is unique and always in the same order; Chr and Position are always in the same order. The only things different in each input file are the DP1 and DP2 columns. I'd like the output table to be "collated", I think that's the word. Here's an example of the output if there were ONLY 3 sample input files: ``` ID Chr Position S1.DP1 S1.DP2 S2.DP1 S2.DP2 S3.DP1 S3.DP2 A1 1 123 1.5 2.0 1.2 2.0 1.5 2.1 A2 1 124 1.4 0.3 1.0 0.5 0.5 0.05 ``` Notice that each input file contributes a new column for DP1 and DP2. ALSO, the column names are informative (they tell me which input file the values came from & which datapoint - DP). I've found questions for when the columns are different: [R: merging a lot of data.frames](https://stackoverflow.com/questions/14096814/r-merging-a-lot-of-data-frames) I'm also aware of merge, although I feel like you end up with strange column names: [How to join (merge) data frames (inner, outer, left, right)?](https://stackoverflow.com/questions/1299871/how-to-join-merge-data-frames-inner-outer-left-right) My other solution has been to initialize a data frame and then load each file and add the data points, but this would use a loop and be incredibly slow and horrible. So, I need a more elegant solution. Thank you for your help.
2015/12/15
[ "https://Stackoverflow.com/questions/34301141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4789486/" ]
I'm going to assume that all the files are stored in a single folder and that you want to load all the files with `.txt` extensions in that folder. ``` ## List all the files in the current directory that end in .txt ## (list.files() takes a regular expression, not a glob) files <- list.files(path = ".", pattern = "\\.txt$") ## Load them into a list called datlist and name each element after the file it came from datlist <- lapply(files, read.table, sep = "\t", header = TRUE) names(datlist) <- sub("\\.txt$", "", files) ``` However, for the purposes of a reproducible example I'm going to manually create a list of data frames like the one you showed. ``` S1 <- read.table(text = "ID Chr Position DP1 DP2 A1 1 123 1.5 2.0 A2 1 124 1.4 0.3", header = TRUE) S2 <- read.table(text = "ID Chr Position DP1 DP2 A1 1 123 1.2 2.0 A2 1 124 1.0 0.5", header = TRUE) S3 <- read.table(text = "ID Chr Position DP1 DP2 A1 1 123 1.5 2.1 A2 1 124 0.5 0.05", header = TRUE) datlist <- list(S1 = S1, S2 = S2, S3 = S3) ``` Now load the packages we're going to use ``` library("dplyr") library("tidyr") ``` With a mix of dplyr and tidyr functions we can get the result you want: ``` ## First, combine the list into a single data frame, adding a column to indicate ## which file each row came from bind_rows(datlist, .id = "file") %>% ## Gather this into a longer format with DP1/DP2 as variables gather(key = col, value = value, which(!names(.) %in% c("ID", "Chr", "Position", "file"))) %>% ## Create a new column that combines the file name and DP1/DP2 -- this will be ## the final column names unite(newcol, file, col, sep = ".") %>% ## Spread the data so that each combination of file and DP1/DP2 is its own ## column spread(newcol, value) ``` End result: ``` ## Source: local data frame [2 x 9] ## ID Chr Position S1.DP1 S1.DP2 S2.DP1 S2.DP2 S3.DP1 S3.DP2 ## (fctr) (int) (int) (dbl) (dbl) (dbl) (dbl) (dbl) (dbl) ## 1 A1 1 123 1.5 2.0 1.2 2.0 1.5 2.10 ## 2 A2 1 124 1.4 0.3 1.0 0.5 0.5 0.05 ```
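As a side note: in current tidyr (1.0 and later), `gather()`/`spread()` are superseded by `pivot_longer()`/`pivot_wider()`. A minimal sketch of the same reshaping with the newer verbs, under the same column-name assumptions as above:

```
library(dplyr)
library(tidyr)

bind_rows(datlist, .id = "file") %>%
  ## Long format: one row per (ID, file, DP-column) combination
  pivot_longer(cols = c(DP1, DP2), names_to = "col", values_to = "value") %>%
  ## Back to wide, joining the file name onto the DP column name
  pivot_wider(names_from = c(file, col), values_from = value, names_sep = ".")
```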
A one-liner with base R ``` l = list(S1=S1, S2=S2, S3=S3) idx = c("ID","Chr","Position") d <- Reduce(function(x, y) merge(x, y, by = idx), l) ``` **Update** Forgot the variable names. This might be a bit excessive, but it is the best way I can think of to avoid hard-coding the names. Note that merge() leaves the columns grouped by file (S1.DP1, S1.DP2, S2.DP1, ...) while expand.grid() varies its *first* argument fastest, so the DP names have to come first: ``` n <- expand.grid(setdiff(names(S1), idx), names(l)) names(d)[!names(d) %in% idx] <- paste(n[ ,2], n[ ,1], sep = ".") ```
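A more defensive variant (just a sketch, not required above): rename the non-key columns inside each data frame before merging, so the final names never depend on column order:

```
## Prefix each non-key column with its list element's name, then merge as before
l2 <- Map(function(df, nm) {
  keep <- names(df) %in% idx
  names(df)[!keep] <- paste(nm, names(df)[!keep], sep = ".")
  df
}, l, names(l))
d <- Reduce(function(x, y) merge(x, y, by = idx), l2)
```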
34,301,141
I'd prefer to do the following in R, but am open to (easy-to-learn) other solutions. I have multiple (let's say 99) tab-delimited files (let's call them S1.txt through S99.txt) with tables, all with the exact same format. Each table is ~2,000,000 rows by 5 cols. Here's a toy example: ``` ID Chr Position DP1 DP2 A1 1 123 1.5 2.0 A2 1 124 1.4 0.3 ``` ID by definition is unique and always in the same order; Chr and Position are always in the same order. The only things different in each input file are the DP1 and DP2 columns. I'd like the output table to be "collated", I think that's the word. Here's an example of the output if there were ONLY 3 sample input files: ``` ID Chr Position S1.DP1 S1.DP2 S2.DP1 S2.DP2 S3.DP1 S3.DP2 A1 1 123 1.5 2.0 1.2 2.0 1.5 2.1 A2 1 124 1.4 0.3 1.0 0.5 0.5 0.05 ``` Notice that each input file contributes a new column for DP1 and DP2. ALSO, the column names are informative (they tell me which input file the values came from & which datapoint - DP). I've found questions for when the columns are different: [R: merging a lot of data.frames](https://stackoverflow.com/questions/14096814/r-merging-a-lot-of-data-frames) I'm also aware of merge, although I feel like you end up with strange column names: [How to join (merge) data frames (inner, outer, left, right)?](https://stackoverflow.com/questions/1299871/how-to-join-merge-data-frames-inner-outer-left-right) My other solution has been to initialize a data frame and then load each file and add the data points, but this would use a loop and be incredibly slow and horrible. So, I need a more elegant solution. Thank you for your help.
2015/12/15
[ "https://Stackoverflow.com/questions/34301141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4789486/" ]
I re-read your question and thought of an even better solution. First of all, I would not load all the .txt files into R at once. If your .txt files are 2e6x5 and there are 100 of them, you are likely going to run out of RAM before you load them all. I would load them one at a time and iteratively merge them. ``` library(readr) # Use this to load your data, it is much better than the base functions f <- list.files(path = "path/to/file", pattern = "\\.txt$", full.names = TRUE) d <- read_delim(f[1], delim = "\t") idx = c("ID", "Chr", "Position") for (i in seq(2, length(f))) { d_temp <- read_delim(f[i], delim = "\t") d <- merge(d, d_temp, by = idx) rm(d_temp) # not necessary but I like to include it to be explicit } ``` Naming d (note that merge() leaves the columns grouped by file, and expand.grid() varies its first argument fastest, so the DP names go first): ``` n <- expand.grid(c("DP1", "DP2"), paste0("S", seq_along(f))) names(d)[!names(d) %in% idx] <- paste(n[ ,2], n[ ,1], sep = ".") ``` **Update** Ugh, I missed the obvious: if you truly have 100 2e6x5 .txt files, you are probably not going to be able to use R for this task. I doubt it will be possible to store a 2e6x500 data frame in R. Even if you are on a server with loads of RAM, computation time will be non-trivial. I think the most important question going forward is what you are trying to do with this data. Once you answer this you might be able to use your data efficiently.
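An alternative to the post-hoc renaming (a sketch reusing `f` and `idx` from above): rename the DP columns right after reading each file, which also sidesteps merge's automatic `.x`/`.y` suffixes:

```
d <- read_delim(f[1], delim = "\t")
names(d)[!names(d) %in% idx] <- paste0("S1.", names(d)[!names(d) %in% idx])
for (i in seq(2, length(f))) {
  d_temp <- read_delim(f[i], delim = "\t")
  keep <- names(d_temp) %in% idx
  names(d_temp)[!keep] <- paste0("S", i, ".", names(d_temp)[!keep])
  d <- merge(d, d_temp, by = idx)
}
```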
I'm going to assume that all the files are stored in a single folder and that you want to load all the files with `.txt` extensions in that folder. ``` ## List all the files in the current directory that end in .txt ## (list.files() takes a regular expression, not a glob) files <- list.files(path = ".", pattern = "\\.txt$") ## Load them into a list called datlist and name each element after the file it came from datlist <- lapply(files, read.table, sep = "\t", header = TRUE) names(datlist) <- sub("\\.txt$", "", files) ``` However, for the purposes of a reproducible example I'm going to manually create a list of data frames like the one you showed. ``` S1 <- read.table(text = "ID Chr Position DP1 DP2 A1 1 123 1.5 2.0 A2 1 124 1.4 0.3", header = TRUE) S2 <- read.table(text = "ID Chr Position DP1 DP2 A1 1 123 1.2 2.0 A2 1 124 1.0 0.5", header = TRUE) S3 <- read.table(text = "ID Chr Position DP1 DP2 A1 1 123 1.5 2.1 A2 1 124 0.5 0.05", header = TRUE) datlist <- list(S1 = S1, S2 = S2, S3 = S3) ``` Now load the packages we're going to use ``` library("dplyr") library("tidyr") ``` With a mix of dplyr and tidyr functions we can get the result you want: ``` ## First, combine the list into a single data frame, adding a column to indicate ## which file each row came from bind_rows(datlist, .id = "file") %>% ## Gather this into a longer format with DP1/DP2 as variables gather(key = col, value = value, which(!names(.) %in% c("ID", "Chr", "Position", "file"))) %>% ## Create a new column that combines the file name and DP1/DP2 -- this will be ## the final column names unite(newcol, file, col, sep = ".") %>% ## Spread the data so that each combination of file and DP1/DP2 is its own ## column spread(newcol, value) ``` End result: ``` ## Source: local data frame [2 x 9] ## ID Chr Position S1.DP1 S1.DP2 S2.DP1 S2.DP2 S3.DP1 S3.DP2 ## (fctr) (int) (int) (dbl) (dbl) (dbl) (dbl) (dbl) (dbl) ## 1 A1 1 123 1.5 2.0 1.2 2.0 1.5 2.10 ## 2 A2 1 124 1.4 0.3 1.0 0.5 0.5 0.05 ```
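As a side note: in current tidyr (1.0 and later), `gather()`/`spread()` are superseded by `pivot_longer()`/`pivot_wider()`. A minimal sketch of the same reshaping with the newer verbs, under the same column-name assumptions as above:

```
library(dplyr)
library(tidyr)

bind_rows(datlist, .id = "file") %>%
  ## Long format: one row per (ID, file, DP-column) combination
  pivot_longer(cols = c(DP1, DP2), names_to = "col", values_to = "value") %>%
  ## Back to wide, joining the file name onto the DP column name
  pivot_wider(names_from = c(file, col), values_from = value, names_sep = ".")
```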
34,301,141
I'd prefer to do the following in R, but am open to (easy-to-learn) other solutions. I have multiple (let's say 99) tab-delimited files (let's call them S1.txt through S99.txt) with tables, all with the exact same format. Each table is ~2,000,000 rows by 5 cols. Here's a toy example: ``` ID Chr Position DP1 DP2 A1 1 123 1.5 2.0 A2 1 124 1.4 0.3 ``` ID by definition is unique and always in the same order; Chr and Position are always in the same order. The only things different in each input file are the DP1 and DP2 columns. I'd like the output table to be "collated", I think that's the word. Here's an example of the output if there were ONLY 3 sample input files: ``` ID Chr Position S1.DP1 S1.DP2 S2.DP1 S2.DP2 S3.DP1 S3.DP2 A1 1 123 1.5 2.0 1.2 2.0 1.5 2.1 A2 1 124 1.4 0.3 1.0 0.5 0.5 0.05 ``` Notice that each input file contributes a new column for DP1 and DP2. ALSO, the column names are informative (they tell me which input file the values came from & which datapoint - DP). I've found questions for when the columns are different: [R: merging a lot of data.frames](https://stackoverflow.com/questions/14096814/r-merging-a-lot-of-data-frames) I'm also aware of merge, although I feel like you end up with strange column names: [How to join (merge) data frames (inner, outer, left, right)?](https://stackoverflow.com/questions/1299871/how-to-join-merge-data-frames-inner-outer-left-right) My other solution has been to initialize a data frame and then load each file and add the data points, but this would use a loop and be incredibly slow and horrible. So, I need a more elegant solution. Thank you for your help.
2015/12/15
[ "https://Stackoverflow.com/questions/34301141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4789486/" ]
I re-read your question and thought of an even better solution. First of all, I would not load all the .txt files into R at once. If your .txt files are 2e6x5 and there are 100 of them, you are likely going to run out of RAM before you load them all. I would load them one at a time and iteratively merge them. ``` library(readr) # Use this to load your data, it is much better than the base functions f <- list.files(path = "path/to/file", pattern = "\\.txt$", full.names = TRUE) d <- read_delim(f[1], delim = "\t") idx = c("ID", "Chr", "Position") for (i in seq(2, length(f))) { d_temp <- read_delim(f[i], delim = "\t") d <- merge(d, d_temp, by = idx) rm(d_temp) # not necessary but I like to include it to be explicit } ``` Naming d (note that merge() leaves the columns grouped by file, and expand.grid() varies its first argument fastest, so the DP names go first): ``` n <- expand.grid(c("DP1", "DP2"), paste0("S", seq_along(f))) names(d)[!names(d) %in% idx] <- paste(n[ ,2], n[ ,1], sep = ".") ``` **Update** Ugh, I missed the obvious: if you truly have 100 2e6x5 .txt files, you are probably not going to be able to use R for this task. I doubt it will be possible to store a 2e6x500 data frame in R. Even if you are on a server with loads of RAM, computation time will be non-trivial. I think the most important question going forward is what you are trying to do with this data. Once you answer this you might be able to use your data efficiently.
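An alternative to the post-hoc renaming (a sketch reusing `f` and `idx` from above): rename the DP columns right after reading each file, which also sidesteps merge's automatic `.x`/`.y` suffixes:

```
d <- read_delim(f[1], delim = "\t")
names(d)[!names(d) %in% idx] <- paste0("S1.", names(d)[!names(d) %in% idx])
for (i in seq(2, length(f))) {
  d_temp <- read_delim(f[i], delim = "\t")
  keep <- names(d_temp) %in% idx
  names(d_temp)[!keep] <- paste0("S", i, ".", names(d_temp)[!keep])
  d <- merge(d, d_temp, by = idx)
}
```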
A one-liner with base R ``` l = list(S1=S1, S2=S2, S3=S3) idx = c("ID","Chr","Position") d <- Reduce(function(x, y) merge(x, y, by = idx), l) ``` **Update** Forgot the variable names. This might be a bit excessive, but it is the best way I can think of to avoid hard-coding the names. Note that merge() leaves the columns grouped by file (S1.DP1, S1.DP2, S2.DP1, ...) while expand.grid() varies its *first* argument fastest, so the DP names have to come first: ``` n <- expand.grid(setdiff(names(S1), idx), names(l)) names(d)[!names(d) %in% idx] <- paste(n[ ,2], n[ ,1], sep = ".") ```
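A more defensive variant (just a sketch, not required above): rename the non-key columns inside each data frame before merging, so the final names never depend on column order:

```
## Prefix each non-key column with its list element's name, then merge as before
l2 <- Map(function(df, nm) {
  keep <- names(df) %in% idx
  names(df)[!keep] <- paste(nm, names(df)[!keep], sep = ".")
  df
}, l, names(l))
d <- Reduce(function(x, y) merge(x, y, by = idx), l2)
```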
11,302,118
My question is: how can we check the value of posted variables such as those in $\_POST without defining them first, as you can see in the code. ``` if ($_SESSION["Admin"]=="" AND $_REQUEST[act]!="show_login" AND $_REQUEST[act]!="chk_login" ) { #not logged in show_login(); return; ``` I am getting these errors: * Undefined index: act in F:\xampp\htdocs\shangloo\admin\index.php on line 6 * Undefined index: Admin in F:\xampp\htdocs\shangloo\admin\index.php on line 6
2012/07/02
[ "https://Stackoverflow.com/questions/11302118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1461776/" ]
Use [`isset()`](http://php.net/isset) before you try to access the index in the `$_SESSION` and `$_REQUEST` arrays, like so: ``` if( (!isset($_SESSION["Admin"]) || $_SESSION["Admin"] == "") && (!isset( $_REQUEST['act']) || ( $_REQUEST['act'] != "show_login" && $_REQUEST['act'] != "chk_login"))) ``` I think I've added the correct logic that you're looking for.
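On PHP 7 and later, the null coalescing operator gives a more compact guard. A sketch of the same logic (`??` substitutes the default when the index is not set):

```
$admin = $_SESSION["Admin"] ?? "";
$act   = $_REQUEST['act'] ?? "";
if ($admin == "" && $act != "show_login" && $act != "chk_login") {
    # not logged in
    show_login();
    return;
}
```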
``` if(isset($_SESSION)) { foreach($_SESSION as $sessKey => $sessVal) { echo $sessKey . ' - ' . $sessVal.'<br>'; } } echo count($_SESSION); ``` Now I am not sure if this is exactly what you're asking, but as I take it you want to know if a session is set, and if it is, what the data in said session is. `$_SESSION, $_COOKIE, $_POST, $_GET, $_REQUEST` are all essentially arrays, so you can treat them like arrays when working with them. Another problem you may be having (I noticed your error after posting an answer): you are using XAMPP, presumably on Windows. You may be running into a permissions error with the Windows file system, with sessions not being able to write to the local temp or tmp directory. Also, I don't suggest using `$_REQUEST` if you don't have to; it's a global variable. So if I make a POST or GET request to your site/software with the name admin, or I set a cookie and call it admin, $\_REQUEST will treat it the same as it would any of the specifically defined versions.
11,302,118
My question is: how can we check the value of posted variables such as those in $\_POST without defining them first, as you can see in the code. ``` if ($_SESSION["Admin"]=="" AND $_REQUEST[act]!="show_login" AND $_REQUEST[act]!="chk_login" ) { #not logged in show_login(); return; ``` I am getting these errors: * Undefined index: act in F:\xampp\htdocs\shangloo\admin\index.php on line 6 * Undefined index: Admin in F:\xampp\htdocs\shangloo\admin\index.php on line 6
2012/07/02
[ "https://Stackoverflow.com/questions/11302118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1461776/" ]
Use [`isset()`](http://php.net/isset) before you try to access the index in the `$_SESSION` and `$_REQUEST` arrays, like so: ``` if( (!isset($_SESSION["Admin"]) || $_SESSION["Admin"] == "") && (!isset( $_REQUEST['act']) || ( $_REQUEST['act'] != "show_login" && $_REQUEST['act'] != "chk_login"))) ``` I think I've added the correct logic that you're looking for.
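On PHP 7 and later, the null coalescing operator gives a more compact guard. A sketch of the same logic (`??` substitutes the default when the index is not set):

```
$admin = $_SESSION["Admin"] ?? "";
$act   = $_REQUEST['act'] ?? "";
if ($admin == "" && $act != "show_login" && $act != "chk_login") {
    # not logged in
    show_login();
    return;
}
```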
I find it easier on maintenance to check all my variables in a loop: ``` <?php // VARIABLES NEEDED BY THIS PAGE $needed_session = array( 'Admin' => "", ); $needed_request = array( 'act' => "", ); foreach($needed_session as $var => $default) if (!isset($_SESSION[$var])) $_SESSION[$var] = $default; foreach($needed_request as $var => $default) if (!isset($_REQUEST[$var])) $_REQUEST[$var] = $default; ?> ``` Even though writing to $\_REQUEST is really a bad coding practice, and at the least one should differentiate between POST and GET. As a variation, you might declare only those variables which are known: ``` foreach($needed_request as $var => $default) if (!isset($_REQUEST[$var])) ${$var} = $default; else ${$var} = $_REQUEST[$var]; ``` There are various possibilities: 1. you might validate the variables syntactically upon import, e.g. through a regexp. 2. you might declare some "really really needed variables" whose absence throws out an error page. If you're going much farther down that road, however, you'd be better advised to investigate some higher-level framework.
67,004,431
I have a table with values extracted from a csv that I want to make a contour plot from. Let's use this table as an example ```matlab tdata.x = [1;2;1;2]; tdata.y = [3;3;4;4]; tdata.z = randn(4,1); tdata=struct2table(tdata); >> tdata tdata = 4×3 table x y z _ _ _______ 1 3 0.53767 2 3 1.8339 1 4 -2.2588 2 4 0.86217 ``` I would like to pivot this so that I can use it for plotting a contour. In principle I want a 2x2 z matrix whose rows/columns are given by y and x respectively, something in this direction: ```matlab x 1 2 y 3 0.53767 1.8339 4 -2.2588 0.86217 ``` where the first row is the x coordinates, the first column is the y coordinates, and in between are the corresponding z-values. That is to say, the z-value corresponding to (x,y)=(1,4) is -2.2588. Note: I am going to use this grid for other things down the road, so solutions involving interpolation are not valid; also, the data is guaranteed to be given on a grid.
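A minimal sketch of the requested pivot using MATLAB's table functions (assuming R2013b or later for `unstack`, and that the rows are ordered with x varying fastest, as in the example):

```matlab
% One column per unique x value, rows keyed by y
wide = unstack(tdata, 'z', 'x');   % columns: y, x1, x2

% Or build the raw z-grid directly for contour(): rows = y, cols = x
xs = unique(tdata.x);
ys = unique(tdata.y);
Z  = reshape(tdata.z, numel(xs), numel(ys)).';  % column-major fill, then transpose
contour(xs, ys, Z)
```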
2021/04/08
[ "https://Stackoverflow.com/questions/67004431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9585520/" ]
Try to correct your `SessionFactory` definition in this way: ```java SessionFactory sessionFactory = new Configuration() .configure("hibernate.cfg.xml") .addAnnotatedClass(Employee.class) .addAnnotatedClass(Detail.class) .buildSessionFactory(); ```
The problem was in the Main method: you should always add the annotated class. **.addAnnotatedClass(Detail.class)**
4,387,216
**Question**: Suppose that $f\in C^2((0,1))$, $\lim\limits\_{x\to 1^{-}}f(x)=0$. Assume that there exists a constant $C>0$ such that $\forall x\in (0,1)$, $(1-x)^2|f''(x)|\leqslant C$. Prove that $\lim\limits\_{x\to 1^{-}} (1-x)f'(x)=0$. **Attempt**: I've tried to use Taylor expansion and got $0=f(x)+f'(x)(1-x)+\frac{f''(c)}{2}(1-x)^2,\ c\in (x,1).$ But that's not enough.
2022/02/21
[ "https://math.stackexchange.com/questions/4387216", "https://math.stackexchange.com", "https://math.stackexchange.com/users/977241/" ]
ParamanandSingh's hint helped enormously. Given $\epsilon>0$, there exists $\delta \in (0,1)$ such that if $x > 1 - \delta$, then $|f(x)| < \epsilon$. For any $0 < x\_0 < x < \tfrac12(1+x\_0)$, we have $|f''(x)| \le C/(1-x)^2 \le 4C/(1-x\_0)^2$. Hence $$ f'(x) = f'(x\_0) + \int\_{x\_0}^x f''(y) \, dy \ge f'(x\_0) - 4C\frac{x-x\_0}{(1-x\_0)^2} .$$ Similarly, $$ f'(x) = f'(x\_0) + \int\_{x\_0}^x f''(y) \, dy \le f'(x\_0) + 4C\frac{x-x\_0}{(1-x\_0)^2} .$$ Now suppose $f'(x\_0) \ge L /(1-x\_0)$ for some $x\_0>1-\delta$, where $L$ will be chosen later. Then for $$ x\_0 \le x \le x\_1 := \min\left\{\frac{1+x\_0}2, x\_0 + \frac{L(1-x\_0)}{8C}\right\} $$ we have $$ f'(x) \ge \frac L{2(1-x\_0)} $$ Hence if $L \le 4C$, we have $$ 2 \epsilon > f(x\_1) - f(x\_0) = \int\_{x\_0}^{x\_1} f'(x) \, dx \ge \frac L{2(1-x\_0)} \cdot \frac{L(1-x\_0)}{8C} = \frac{L^2}{16C} .$$ Without loss of generality, we can assume $32 \epsilon < 16C^2$. Hence if we set $ L^2 \in( 32 \epsilon, 16C^2] $, we obtain a contradiction. Similarly, if $f'(x\_0) \le -L/(1-x\_0)$, we obtain a similar contradiction. Hence for any $x\_0 > 1 - \delta$, as long as $\epsilon < C^2/2$, we have $$ (1-x\_0)|f'(x\_0)| \le \sqrt{32 \epsilon} .$$
By Taylor expansion we have: $f(\delta+(1-\delta)x)=f(x)+\delta(1-x)f'(x)+\frac{\delta^2}{2}\frac{(1-x)^2}{(1-\xi)^2}(1-\xi)^2f''(\xi),\ \xi\in(x,\delta+(1-\delta)x)$. Dividing by $\delta$ gives: $|(1-x)f'(x)|\leqslant|\frac{f(\delta+(1-\delta)x)-f(x)}{\delta}|+|\frac{\delta}{2}\frac{(1-x)^2}{(1-\xi)^2}(1-\xi)^2f''(\xi)|\leqslant|\frac{f(\delta+(1-\delta)x)-f(x)}{\delta}|+|\frac{\delta}{2(1-\delta)^2}M|$, where $M$ is the constant $C$ from the hypothesis. $\forall \epsilon >0$, $\exists \delta'>0$ such that $\forall 0<\delta<\delta'$ and $\forall 0<x<1$, the second term is less than $\epsilon/2$ (in short, let $\delta\to 0$); then, with such a $\delta$ fixed, let $x\to 1$ so that the first term is also less than $\epsilon/2$, since both $f(\delta+(1-\delta)x)$ and $f(x)$ tend to $0$.
158,097
Question -------- Does the concept of “***planned* sick leave**”, as in time off work planned in advance for medical reasons, exist out there? Or is it totally unheard of? I found [this question](https://workplace.stackexchange.com/questions/5859/i-have-sick-leave-for-an-appointment-that-got-cancelled-what-should-i-do), which seems to imply that “planned sick leave” does exist, since the asker mentions that he had sick leave for a dentist appointment. --- Background ---------- My manager informed me that I cannot take sick leave for an upcoming doctor’s appointment because “sick leave can't be planned”. For routine medical appointments, this makes perfect sense. One could take an hour or two off work to go to the doctor and then continue working. However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. My manager expects me to adjust my shift accordingly in order to put in my hours of work, or take paid vacation leave.
2020/05/12
[ "https://workplace.stackexchange.com/questions/158097", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/62851/" ]
*Note: since no location/culture/company policy is provided in the question, no guarantees can be made. However, your interpretation of what constitutes sick leave seems to diverge from the general interpretation, which is what this answer is responding to.* > > One could take an hour or two off work to go to the doctor and then continue working. > > > However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. > > > The length of time you are absent does not define whether it is sick leave or not. Sick leave is defined by the nature of the absence, not its length. There seems to be a misunderstanding on your part about what constitutes sick leave. Sick leave is not "medical appointment leave". Sick leave is granted when you are unable to work due to illness. Illness often entails medical appointments, but medical appointments do not always entail illness. When you are ill and take sick leave, you generally make an appointment with a medical professional, but that does not mean that every appointment with a medical professional therefore entails sick leave. A very clear-cut example of this distinction would be elective plastic surgery. That being said, "sick leave can't be planned" is also an oversimplification on your employer's part. Plannable sick leave can range from dentist appointments (which even in urgent situations often need to be planned one or two days ahead) to treatment for illnesses which don't compromise your ability to work on a daily basis unless you miss regular treatment. While I suspect that your employer may have overstated their case by stating that there's no such thing as planned sick leave, it's possible that their basis for rejecting your sick leave application is valid. That being said, without specifying a location, no final conclusion can be made on the legalities of this situation. > > I am not asking for any information regarding my specific country/company policy. I am just asking if "planned sick leave" is a thing. > > > You seem to think that the definition and workings of "sick leave" are universally defined. **They are not**. There is no legal definition that transcends national borders. The largest (currently existing) legislative scope is a country's legal system. It's impossible to fully answer the question without knowing the country in question.
This question is company-specific, but I hope this answer isn't. Sick leave varies by jurisdiction and company. In some places it will be mandated and defined by law; in other places it will be a term defined by the company. When defined by a company it can mean anything from “we certainly aren't paying you, but we probably won't fire you for missing work as long as it isn't *too* much work and you have a doctor's note” to “you get X hours of paid sick leave a year, and as long as you don't appear on the tube surfing and you don't go over it, we are good” to “unlimited paid sick leave as long as you can convince us it's health-related”. Then there are companies without sick leave at all, which can be some variation of “don't show up and you are fired”, “don't show up, don't get paid, don't care why”, “you get X hours of paid leave a year (aka accrued PTO), don't care how it is used, don't use more” or even “unlimited PTO for whatever purpose you like”. I have personally applied for sick leave to take my mother-in-law to a routine doctor's appointment known well in advance; I have also worked places where sick leave didn't exist at all. Whether scheduled sick leave is known or acceptable to your employer is something only your employer can answer. I don't know what “sick leave” means in Cyprus or for your company. You'll have to ask someone more informed about your personal situation.
158,097
Question -------- Does the concept of “***planned* sick leave**”, as in time off work planned in advance for medical reasons, exist out there? Or is it totally unheard of? I found [this question](https://workplace.stackexchange.com/questions/5859/i-have-sick-leave-for-an-appointment-that-got-cancelled-what-should-i-do), which seems to imply that “planned sick leave” does exist, since the asker mentions that he had sick leave for a dentist appointment. --- Background ---------- My manager informed me that I cannot take sick leave for an upcoming doctor’s appointment because “sick leave can't be planned”. For routine medical appointments, this makes perfect sense. One could take an hour or two off work to go to the doctor and then continue working. However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. My manager expects me to adjust my shift accordingly in order to put in my hours of work, or take paid vacation leave.
2020/05/12
[ "https://workplace.stackexchange.com/questions/158097", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/62851/" ]
*Note: since no location/culture/company policy is provided in the question, no guarantees can be made. However, your interpretation of what constitutes sick leave seems to diverge from the general interpretation, which is what this answer is responding to.* > > One could take an hour or two off work to go to the doctor and then continue working. > > > However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. > > > The length of time you are absent does not define whether it is sick leave or not. Sick leave is defined by the nature of the absence, not its length. There seems to be a misunderstanding on your part about what constitutes sick leave. Sick leave is not "medical appointment leave". Sick leave is granted when you are unable to work due to illness. Illness often entails medical appointments, but medical appointments do not always entail illness. When you are ill and take sick leave, you generally make an appointment with a medical professional, but that does not mean that every appointment with a medical professional therefore entails sick leave. A very clear-cut example of this distinction would be elective plastic surgery. That being said, "sick leave can't be planned" is also an oversimplification on your employer's part. Plannable sick leave can range from dentist appointments (which even in urgent situations often need to be planned one or two days ahead) to treatment for illnesses which don't compromise your ability to work on a daily basis unless you miss regular treatment. While I suspect that your employer may have overstated their case by stating that there's no such thing as planned sick leave, it's possible that their basis for rejecting your sick leave application is valid. That being said, without specifying a location, no final conclusion can be made on the legalities of this situation. > > I am not asking for any information regarding my specific country/company policy. I am just asking if "planned sick leave" is a thing. > > > You seem to think that the definition and workings of "sick leave" are universally defined. **They are not**. There is no legal definition that transcends national borders. The largest (currently existing) legislative scope is a country's legal system. It's impossible to fully answer the question without knowing the country in question.
To show the absurdity of this: You need a life-saving operation within the next 3 months. Your doctor says “let’s do it on the 12th of June, and you’ll be in hospital for a week”. You say “sorry, I can’t have planned sick leave. Just call me the night before”. June 11th you get a call and take a week of unplanned sick leave. Surely your boss would have preferred knowing ahead?
158,097
Question -------- Does the concept of “***planned* sick leave**”, as in time off work planned in advance for medical reasons, exist out there? Or is it totally unheard of? I found [this question](https://workplace.stackexchange.com/questions/5859/i-have-sick-leave-for-an-appointment-that-got-cancelled-what-should-i-do), which seems to imply that “planned sick leave” does exist, since the asker mentions that he had sick leave for a dentist appointment. --- Background ---------- My manager informed me that I cannot take sick leave for an upcoming doctor’s appointment because “sick leave can't be planned”. For routine medical appointments, this makes perfect sense. One could take an hour or two off work to go to the doctor and then continue working. However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. My manager expects me to adjust my shift accordingly in order to put in my hours of work, or take paid vacation leave.
2020/05/12
[ "https://workplace.stackexchange.com/questions/158097", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/62851/" ]
*Note: since no location/culture/company policy is provided in the question, no guarantees can be made. However, your interpretation of what constitutes sick leave seems to diverge from the general interpretation, which is what this answer is responding to.* > > One could take an hour or two off work to go to the doctor and then continue working. > > > However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. > > > The length of time you are absent does not define whether it is sick leave or not. Sick leave is defined by the nature of the absence, not its length. There seems to be a misunderstanding on your part about what constitutes sick leave. Sick leave is not "medical appointment leave". Sick leave is granted when you are unable to work due to illness. Illness often entails medical appointments, but medical appointments do not always entail illness. When you are ill and take sick leave, you generally make an appointment with a medical professional, but that does not mean that every appointment with a medical professional therefore entails sick leave. A very clear-cut example of this distinction would be elective plastic surgery. That being said, "sick leave can't be planned" is also an oversimplification on your employer's part. Plannable sick leave can range from dentist appointments (which even in urgent situations often need to be planned one or two days ahead) to treatment for illnesses which don't compromise your ability to work on a daily basis unless you miss regular treatment. While I suspect that your employer may have overstated their case by stating that there's no such thing as planned sick leave, it's possible that their basis for rejecting your sick leave application is valid. That being said, without specifying a location, no final conclusion can be made on the legalities of this situation. > > I am not asking for any information regarding my specific country/company policy. I am just asking if "planned sick leave" is a thing. > > > You seem to think that the definition and workings of "sick leave" are universally defined. **They are not**. There is no legal definition that transcends national borders. The largest (currently existing) legislative scope is a country's legal system. It's impossible to fully answer the question without knowing the country in question.
> > Does the concept of "planned sick leave" as in time off work planned > in advance for medical reasons exist out there? > > > Yes, it is routine in every company I have worked for or heard of. E.g. you go to the doctor for an appointment, he refers you to a consultant who examines you and then schedules an operation for a month's time, advising you that you will be in hospital for 3 days and require a further 5 days' recuperation. Exactly how each of these is treated might vary slightly: e.g. the initial appointment in your lunch hour, the consultant visit as discretionary time off, but the operation would be planned sick leave - you would be off from X until Y due to a medical procedure. Planned sick leave is a good thing as it allows the company to, you know, plan for your absence! It may be that your company has a slightly different procedure for planned appointments, in which case your manager should clarify: "In ACME plc we don't have planned sick leave, that's only for sudden emergencies; instead this will be treated as *planned absence*" (or non-discretionary leave or whatever). Bottom line: they might not call it planned sick leave, but it will exist. Consult HR or read your company manual if you need to, but there will be something.
158,097
Question -------- Does the concept of “***planned* sick leave**”, as in time off work planned in advance for medical reasons, exist out there? Or is it totally unheard of? I found [this question](https://workplace.stackexchange.com/questions/5859/i-have-sick-leave-for-an-appointment-that-got-cancelled-what-should-i-do), which seems to imply that “planned sick leave” does exist, since the asker mentions that he had sick leave for a dentist appointment. --- Background ---------- My manager informed me that I cannot take sick leave for an upcoming doctor’s appointment because “sick leave can't be planned”. For routine medical appointments, this makes perfect sense. One could take an hour or two off work to go to the doctor and then continue working. However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. My manager expects me to adjust my shift accordingly in order to put in my hours of work, or take paid vacation leave.
2020/05/12
[ "https://workplace.stackexchange.com/questions/158097", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/62851/" ]
The statement of your manager is obviously, trivially wrong. I have had planned sick leave way too many times for my liking. It's easy. You visit a doctor. The doctor makes a frowny face and sends you to a specialized surgeon. The surgeon makes a happy face, looks at their calendar and says "no problem, the procedure can be scheduled as soon as next Tuesday. You will not be able to work the rest of that week." And then you go to your manager and say "I will have planned sick days next week from Tuesday till the end of that week; you will get the doctor's note as soon as I have it and the painkillers have kicked in so I can operate my scanner." Wherever you live, I guarantee that your medical system isn't a McDrive where you spontaneously get surgery with french fries, mayo and a two-liter diet coke when you feel the need. That stuff has to be organized and *planned*. So if your country has any notion of sick leave at all, yes, it can definitely be planned. Assuming otherwise is either stupid or over-the-top naive. Now, whether a non-emergency doctor's visit counts as sick leave is another matter and up to your country's laws. In my country a doctor's appointment only counts as sick leave if it cannot be scheduled outside working hours. For example, a 20-minute non-emergency dentist appointment should be in your private time. A check, then an x-ray, then a talk with a specialist in the next city probably cannot. So whether your trip for medical reasons counts as sick leave depends on your laws and your company's regulations. Maybe your specific requested sick leave does not qualify for "sick leave" where you work, you will have to figure that out, but sick leave *can* be planned; there is no doubt about that.
This question is company-specific, but I hope this answer isn't. Sick leave varies by jurisdiction and company. In some places it will be mandated and defined by law; in other places it will be a term defined by the company. When defined by a company it can mean anything from “we certainly aren't paying you, but we probably won't fire you for missing work as long as it isn't *too* much work and you have a doctor's note” to “you get X hours of paid sick leave a year, and as long as you don't appear on the tube surfing and you don't go over it, we are good” to “unlimited paid sick leave as long as you can convince us it's health-related”. Then there are companies without sick leave at all, which can be some variation of “don't show up and you are fired”, “don't show up, don't get paid, don't care why”, “you get X hours of paid leave a year (aka accrued PTO), don't care how it is used, don't use more” or even “unlimited PTO for whatever purpose you like”. I have personally applied for sick leave to take my mother-in-law to a routine doctor's appointment known well in advance; I have also worked places where sick leave didn't exist at all. Whether scheduled sick leave is known or acceptable to your employer is something only your employer can answer. I don't know what “sick leave” means in Cyprus or for your company. You'll have to ask someone more informed about your personal situation.
158,097
Question -------- Does the concept of “***planned* sick leave**”, as in time off work planned in advance for medical reasons, exist out there? Or is it totally unheard of? I found [this question](https://workplace.stackexchange.com/questions/5859/i-have-sick-leave-for-an-appointment-that-got-cancelled-what-should-i-do), which seems to imply that “planned sick leave” does exist, since the asker mentions that he had sick leave for a dentist appointment. --- Background ---------- My manager informed me that I cannot take sick leave for an upcoming doctor’s appointment because “sick leave can't be planned”. For routine medical appointments, this makes perfect sense. One could take an hour or two off work to go to the doctor and then continue working. However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. My manager expects me to adjust my shift accordingly in order to put in my hours of work, or take paid vacation leave.
2020/05/12
[ "https://workplace.stackexchange.com/questions/158097", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/62851/" ]
The statement of your manager is obviously, trivially wrong. I have had planned sick leave way too many times for my liking. It's easy. You visit a doctor. The doctor makes a frowny face and sends you to a specialized surgeon. The surgeon makes a happy face, looks at their calendar and says "no problem, the procedure can be scheduled as soon as next Tuesday. You will not be able to work the rest of that week." And then you go to your manager and say "I will have planned sick days next week from Tuesday till the end of that week; you will get the doctor's note as soon as I have it and the painkillers have kicked in so I can operate my scanner." Wherever you live, I guarantee that your medical system isn't a McDrive where you spontaneously get surgery with french fries, mayo and a two-liter diet coke when you feel the need. That stuff has to be organized and *planned*. So if your country has any notion of sick leave at all, yes, it can definitely be planned. Assuming otherwise is either stupid or over-the-top naive. Now, whether a non-emergency doctor's visit counts as sick leave is another matter and up to your country's laws. In my country a doctor's appointment only counts as sick leave if it cannot be scheduled outside working hours. For example, a 20-minute non-emergency dentist appointment should be in your private time. A check, then an x-ray, then a talk with a specialist in the next city probably cannot. So whether your trip for medical reasons counts as sick leave depends on your laws and your company's regulations. Maybe your specific requested sick leave does not qualify for "sick leave" where you work, you will have to figure that out, but sick leave *can* be planned; there is no doubt about that.
To show the absurdity of this: You need a life-saving operation within the next 3 months. Your doctor says “let’s do it on the 12th of June, and you’ll be in hospital for a week”. You say “sorry, I can’t have planned sick leave. Just call me the night before”. June 11th you get a call and take a week of unplanned sick leave. Surely your boss would have preferred knowing ahead?
158,097
Question -------- Does the concept of “***planned* sick leave**”, as in time off work planned in advance for medical reasons, exist out there? Or is it totally unheard of? I found [this question](https://workplace.stackexchange.com/questions/5859/i-have-sick-leave-for-an-appointment-that-got-cancelled-what-should-i-do), which seems to imply that “planned sick leave” does exist, since the asker mentions that he had sick leave for a dentist appointment. --- Background ---------- My manager informed me that I cannot take sick leave for an upcoming doctor’s appointment because “sick leave can't be planned”. For routine medical appointments, this makes perfect sense. One could take an hour or two off work to go to the doctor and then continue working. However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. My manager expects me to adjust my shift accordingly in order to put in my hours of work, or take paid vacation leave.
2020/05/12
[ "https://workplace.stackexchange.com/questions/158097", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/62851/" ]
The statement of your manager is obviously, trivially wrong. I have had planned sick leave way too many times for my liking. It's easy. You visit a doctor. The doctor makes a frowny face and sends you to a specialized surgeon. The surgeon makes a happy face, looks at their calendar and says "no problem, the procedure can be scheduled as soon as next Tuesday. You will not be able to work the rest of that week." And then you go to your manager and say "I will have planned sick days next week from Tuesday till the end of that week; you will get the doctor's note as soon as I have it and the painkillers have kicked in so I can operate my scanner." Wherever you live, I guarantee that your medical system isn't a McDrive where you spontaneously get surgery with french fries, mayo and a two-liter diet coke when you feel the need. That stuff has to be organized and *planned*. So if your country has any notion of sick leave at all, yes, it can definitely be planned. Assuming otherwise is either stupid or over-the-top naive. Now, whether a non-emergency doctor's visit counts as sick leave is another matter and up to your country's laws. In my country a doctor's appointment only counts as sick leave if it cannot be scheduled outside working hours. For example, a 20-minute non-emergency dentist appointment should be in your private time. A check, then an x-ray, then a talk with a specialist in the next city probably cannot. So whether your trip for medical reasons counts as sick leave depends on your laws and your company's regulations. Maybe your specific requested sick leave does not qualify for "sick leave" where you work, you will have to figure that out, but sick leave *can* be planned; there is no doubt about that.
> > Does the concept of "planned sick leave" as in time off work planned > in advance for medical reasons exist out there? > > > Yes, it is routine in every company I have worked for or heard of. E.g. you go to the doctor for an appointment, he refers you to a consultant who examines you and then schedules an operation for a month's time, advising you that you will be in hospital for 3 days and require a further 5 days' recuperation. Exactly how each of these is treated might vary slightly: e.g. the initial appointment in your lunch hour, the consultant visit as discretionary time off, but the operation would be planned sick leave - you would be off from X until Y due to a medical procedure. Planned sick leave is a good thing as it allows the company to, you know, plan for your absence! It may be that your company has a slightly different procedure for planned appointments, in which case your manager should clarify: "In ACME plc we don't have planned sick leave, that's only for sudden emergencies; instead this will be treated as *planned absence*" (or non-discretionary leave or whatever). Bottom line: they might not call it planned sick leave, but it will exist. Consult HR or read your company manual if you need to, but there will be something.
158,097
Question -------- Does the concept of “***planned* sick leave**”, as in time off work planned in advance for medical reasons, exist out there? Or is it totally unheard of? I found [this question](https://workplace.stackexchange.com/questions/5859/i-have-sick-leave-for-an-appointment-that-got-cancelled-what-should-i-do), which seems to imply that “planned sick leave” does exist, since the asker mentions that he had sick leave for a dentist appointment. --- Background ---------- My manager informed me that I cannot take sick leave for an upcoming doctor’s appointment because “sick leave can't be planned”. For routine medical appointments, this makes perfect sense. One could take an hour or two off work to go to the doctor and then continue working. However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. My manager expects me to adjust my shift accordingly in order to put in my hours of work, or take paid vacation leave.
2020/05/12
[ "https://workplace.stackexchange.com/questions/158097", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/62851/" ]
This question is company-specific, but I hope this answer isn't. Sick leave varies by jurisdiction and company. In some places it will be mandated and defined by law; in other places it will be a term defined by the company. When defined by a company it can mean anything from “we certainly aren't paying you, but we probably won't fire you for missing work as long as it isn't *too* much work and you have a doctor's note” to “you get X hours of paid sick leave a year, and as long as you don't appear on the tube surfing and you don't go over it, we are good” to “unlimited paid sick leave as long as you can convince us it's health-related”. Then there are companies without sick leave at all, which can be some variation of “don't show up and you are fired”, “don't show up, don't get paid, don't care why”, “you get X hours of paid leave a year (aka accrued PTO), don't care how it is used, don't use more” or even “unlimited PTO for whatever purpose you like”. I have personally applied for sick leave to take my mother-in-law to a routine doctor's appointment known well in advance; I have also worked places where sick leave didn't exist at all. Whether scheduled sick leave is known or acceptable to your employer is something only your employer can answer. I don't know what “sick leave” means in Cyprus or for your company. You'll have to ask someone more informed about your personal situation.
> > Does the concept of "planned sick leave" as in time off work planned > in advance for medical reasons exist out there? > > > Yes, it is routine in every company I have worked for or heard of. E.g. you go to the doctor for an appointment, he refers you to a consultant who examines you and then schedules an operation for a month's time, advising you that you will be in hospital for 3 days and require a further 5 days' recuperation. Exactly how each of these is treated might vary slightly: e.g. the initial appointment in your lunch hour, the consultant visit as discretionary time off, but the operation would be planned sick leave - you would be off from X until Y due to a medical procedure. Planned sick leave is a good thing as it allows the company to, you know, plan for your absence! It may be that your company has a slightly different procedure for planned appointments, in which case your manager should clarify: "In ACME plc we don't have planned sick leave, that's only for sudden emergencies; instead this will be treated as *planned absence*" (or non-discretionary leave or whatever). Bottom line: they might not call it planned sick leave, but it will exist. Consult HR or read your company manual if you need to, but there will be something.
158,097
Question -------- Does the concept of “***planned* sick leave**”, as in time off work planned in advance for medical reasons, exist out there? Or is it totally unheard of? I found [this question](https://workplace.stackexchange.com/questions/5859/i-have-sick-leave-for-an-appointment-that-got-cancelled-what-should-i-do), which seems to imply that “planned sick leave” does exist, since the asker mentions that he had sick leave for a dentist appointment. --- Background ---------- My manager informed me that I cannot take sick leave for an upcoming doctor’s appointment because “sick leave can't be planned”. For routine medical appointments, this makes perfect sense. One could take an hour or two off work to go to the doctor and then continue working. However, in my case, my doctor is located in another city. Therefore, it will take around 6 hours to go, have the appointment, and come back. My manager expects me to adjust my shift accordingly in order to put in my hours of work, or take paid vacation leave.
2020/05/12
[ "https://workplace.stackexchange.com/questions/158097", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/62851/" ]
To show the absurdity of this: You need a life-saving operation within the next 3 months. Your doctor says “let’s do it on the 12th of June, and you’ll be in hospital for a week”. You say “sorry, I can’t have planned sick leave. Just call me the night before”. June 11th you get a call and take a week of unplanned sick leave. Surely your boss would have preferred knowing ahead?
> > Does the concept of "planned sick leave" as in time off work planned > in advance for medical reasons exist out there? > > > Yes, it is routine in every company I have worked for or heard of. E.g. you go to the doctor for an appointment, he refers you to a consultant who examines you and then schedules an operation for a month's time, advising you that you will be in hospital for 3 days and require a further 5 days' recuperation. Exactly how each of these is treated might vary slightly: e.g. the initial appointment in your lunch hour, the consultant visit as discretionary time off, but the operation would be planned sick leave - you would be off from X until Y due to a medical procedure. Planned sick leave is a good thing as it allows the company to, you know, plan for your absence! It may be that your company has a slightly different procedure for planned appointments, in which case your manager should clarify: "In ACME plc we don't have planned sick leave, that's only for sudden emergencies; instead this will be treated as *planned absence*" (or non-discretionary leave or whatever). Bottom line: they might not call it planned sick leave, but it will exist. Consult HR or read your company manual if you need to, but there will be something.
3,989,407
Find the surface area of the cylinder $y^2 + z^2 = 1$ between two planes: $x + y - 2 = 0$, $x-z+4 = 0$. This was my approach: I drew the picture and projected the figure onto the $xOy$ plane, and after that I found the partial derivatives $\frac{\partial z}{\partial x} $ and $ \frac{\partial z}{\partial y}$ where $z = + \sqrt{1-y^2}$. I chose the positive square root because of the symmetry, and that's why I will multiply the following integral by $2$. Eventually, my integral looks like this: $P(S) = 2\iint\_D \sqrt{1+ \frac{y^2}{1-y^2} + 0^2}\,dx\,dy = 2\iint\_D \frac{1}{\sqrt{1-y^2}}\,dx\,dy = \dots = 9\pi$. $D$ is $\{ -4 \leq x \leq 1 \land -1\leq y \leq 1 \} \cup \{ 1 \leq x \leq 3 \land -1 \leq y \leq 2-x \}$ I wonder if this is correct; if anyone could tell me whether I am wrong on this, I would really appreciate it.
2021/01/17
[ "https://math.stackexchange.com/questions/3989407", "https://math.stackexchange.com", "https://math.stackexchange.com/users/872128/" ]
Alternatively, let $t=e^{-x}$ to express the integral as $$\int\_0^{\infty} \frac{x}{e^x+1} dx= - \int\_0^1 \frac{\ln t}{1+t}dt \overset{IBP} = \int\_0^1 \frac{\ln (1+t)}{t}dt = \frac{\pi^2}{12} $$ where the result [Finding $ \int^1\_0 \frac{\ln(1+x)}{x}dx$](https://math.stackexchange.com/questions/2046219/finding-int1-0-frac-ln1xxdx/3826136#3826136) is used.
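For completeness, a quick sketch of the linked result: expand $\ln(1+t)$ as its Maclaurin series and integrate term by term, $$ \int\_0^1 \frac{\ln(1+t)}{t}\,dt = \int\_0^1 \sum\_{n\ge 1}\frac{(-1)^{n-1}t^{n-1}}{n}\,dt = \sum\_{n\ge 1}\frac{(-1)^{n-1}}{n^2} = \eta(2) = \frac{\pi^2}{12}. $$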
Let $ n\in\mathbb{N} $; we have: \begin{aligned}\left\vert\sum\_{k=1}^{n}{\left(-1\right)^{k-1}\int\_{0}^{+\infty}{x\,\mathrm{e}^{-kx}\,\mathrm{d}x}}-\int\_{0}^{+\infty}{\frac{x}{\mathrm{e}^{x}+1}\,\mathrm{d}x}\right\vert&=\left\vert\int\_{0}^{+\infty}{x\sum\_{k=n+1}^{+\infty}{\left(-1\right)^{k-1}\,\mathrm{e}^{-kx}}\,\mathrm{d}x}\right\vert\\ &\leq\int\_{0}^{+\infty}{x\left\vert\sum\_{k=n+1}^{+\infty}{\left(-1\right)^{k-1}\,\mathrm{e}^{-kx}}\right\vert\mathrm{d}x}\\ &\leq\int\_{0}^{+\infty}{x\,\mathrm{e}^{-\left(n+1\right)x}\,\mathrm{d}x}=\frac{\Gamma\left(2\right)}{\left(n+1\right)^{2}}=\frac{1}{\left(n+1\right)^{2}}\underset{n\to +\infty}{\longrightarrow}0\end{aligned} which means that: $$ \sum\_{n=1}^{+\infty}{\frac{\left(-1\right)^{n-1}}{n^{2}}}=\lim\_{n\to +\infty}{\sum\_{k=1}^{n}{\left(-1\right)^{k-1}\int\_{0}^{+\infty}{x\,\mathrm{e}^{-kx}\,\mathrm{d}x}}}=\int\_{0}^{+\infty}{\frac{x}{\mathrm{e}^{x}+1}\,\mathrm{d}x} $$ Thus: $$ \int\_{0}^{+\infty}{\frac{x}{\mathrm{e}^{x}+1}\,\mathrm{d}x}=\eta\left(2\right)=\left(1-\frac{1}{2}\right)\zeta\left(2\right)=\frac{\pi^{2}}{12} $$
3,989,407
Find the surface area of the cylinder $y^2 + z^2 = 1$ between two planes: $x + y - 2 = 0$, $x-z+4 = 0$. This was my approach: I drew the picture and projected the figure onto the $xOy$ plane, and after that I found the partial derivatives $\frac{\partial z}{\partial x} $ and $ \frac{\partial z}{\partial y}$ where $z = + \sqrt{1-y^2}$. I chose the positive square root because of the symmetry, and that's why I will multiply the following integral by $2$. Eventually, my integral looks like this: $P(S) = 2 \iint\_D \sqrt{1+ \frac{y^2}{1-y^2} + 0^2}\,dx\,dy = 2\iint\_D \frac{1}{\sqrt{1-y^2}}\,dx\,dy = ... = 9\pi $. $D$ is $\{ -4 \leq x \leq 1 \land -1\leq y \leq 1 \} \cup \{ 1 \leq x \leq 3 \land -1 \leq y \leq 2-x \}$. I wonder if this is correct; if anyone could tell me whether I am wrong on this, I would really appreciate it.
2021/01/17
[ "https://math.stackexchange.com/questions/3989407", "https://math.stackexchange.com", "https://math.stackexchange.com/users/872128/" ]
Alternatively, let $t=e^{-x}$ to express the integral as $$\int\_0^{\infty} \frac{x}{e^x+1} dx= - \int\_0^1 \frac{\ln t}{1+t}dt \overset{IBP} = \int\_0^1 \frac{\ln (1+t)}{t}dt = \frac{\pi^2}{12} $$ where the result [Finding $ \int^1\_0 \frac{\ln(1+x)}{x}dx$](https://math.stackexchange.com/questions/2046219/finding-int1-0-frac-ln1xxdx/3826136#3826136) is used.
Using power series, we convert the integral into an infinite sum: $$ \begin{aligned} I &=\int\_{0}^{\infty} \frac{x}{e^{x}+1} d x \\ &=\int\_{0}^{\infty} \frac{x e^{-x}}{1+e^{-x}} d x \\ &=\int\_{0}^{\infty} x e^{-x} \sum\_{k=0}^{\infty}(-1)^{k} e^{-k x} d x \\ &=\sum\_{k=0}^{\infty}(-1)^{k} \underbrace{\int\_{0}^{\infty} x e^{-(k+1) x} d x}\_{J\_k} \end{aligned} $$ Integration by parts yields $$ \begin{aligned} J\_{k} &=-\frac{1}{k+1} \int\_{0}^{\infty} x \cdot d\left(e^{-(k+1) x}\right) \\ &=-\left[\frac{1}{k+1} x e^{-(k+1) x}\right]\_{0}^{\infty}+\frac{1}{k+1} \int\_{0}^{\infty} e^{-(k+1) x} d x \\ &=\frac{1}{(k+1)^{2}} \end{aligned} $$ Now we can conclude that \begin{aligned} I&=\sum\_{k=0}^{\infty} \frac{(-1)^{k}}{(k+1)^{2}} \\ &=\sum\_{k=1}^{\infty} \frac{1}{k^{2}}-2 \sum\_{k=1}^{\infty} \frac{1}{(2 k)^{2}} \\ &=\frac{\pi^{2}}{6}-\frac{2}{4} \cdot \frac{\pi^{2}}{6} \\ &=\frac{\pi^{2}}{12} \end{aligned}
3,989,407
Find the surface area of the cylinder $y^2 + z^2 = 1$ between two planes: $x + y - 2 = 0$, $x-z+4 = 0$. This was my approach: I drew the picture and projected the figure onto the $xOy$ plane, and after that I found the partial derivatives $\frac{\partial z}{\partial x} $ and $ \frac{\partial z}{\partial y}$ where $z = + \sqrt{1-y^2}$. I chose the positive square root because of the symmetry, and that's why I will multiply the following integral by $2$. Eventually, my integral looks like this: $P(S) = 2 \iint\_D \sqrt{1+ \frac{y^2}{1-y^2} + 0^2}\,dx\,dy = 2\iint\_D \frac{1}{\sqrt{1-y^2}}\,dx\,dy = ... = 9\pi $. $D$ is $\{ -4 \leq x \leq 1 \land -1\leq y \leq 1 \} \cup \{ 1 \leq x \leq 3 \land -1 \leq y \leq 2-x \}$. I wonder if this is correct; if anyone could tell me whether I am wrong on this, I would really appreciate it.
2021/01/17
[ "https://math.stackexchange.com/questions/3989407", "https://math.stackexchange.com", "https://math.stackexchange.com/users/872128/" ]
Let $ n\in\mathbb{N} $, we have: \begin{aligned}\left\vert\sum\_{k=1}^{n}{\left(-1\right)^{k-1}\int\_{0}^{+\infty}{x\,\mathrm{e}^{-kx}\,\mathrm{d}x}}-\int\_{0}^{+\infty}{\frac{x}{\mathrm{e}^{x}+1}\,\mathrm{d}x}\right\vert&=\left\vert\int\_{0}^{+\infty}{x\sum\_{k=n+1}^{+\infty}{\left(-1\right)^{k-1}\,\mathrm{e}^{-kx}}\,\mathrm{d}x}\right\vert\\ &\leq\int\_{0}^{+\infty}{x\left\vert\sum\_{k=n+1}^{+\infty}{\left(-1\right)^{k-1}\,\mathrm{e}^{-kx}}\right\vert\mathrm{d}x}\\ &\leq\int\_{0}^{+\infty}{x\,\mathrm{e}^{-\left(n+1\right)x}\,\mathrm{d}x}=\frac{\Gamma\left(2\right)}{\left(n+1\right)^{2}}\underset{n\to +\infty}{\longrightarrow}0\end{aligned} Which means that: $$ \sum\_{n=1}^{+\infty}{\frac{\left(-1\right)^{n-1}}{n^{2}}}=\lim\_{n\to +\infty}{\sum\_{k=1}^{n}{\left(-1\right)^{k-1}\int\_{0}^{+\infty}{x\,\mathrm{e}^{-kx}\,\mathrm{d}x}}}=\int\_{0}^{+\infty}{\frac{x}{\mathrm{e}^{x}+1}\,\mathrm{d}x} $$ Thus: $$ \int\_{0}^{+\infty}{\frac{x}{\mathrm{e}^{x}+1}\,\mathrm{d}x}=\eta\left(2\right)=\left(1-\frac{1}{2}\right)\zeta\left(2\right)=\frac{\pi^{2}}{12} $$
Using power series, we convert the integral into an infinite sum: $$ \begin{aligned} I &=\int\_{0}^{\infty} \frac{x}{e^{x}+1} d x \\ &=\int\_{0}^{\infty} \frac{x e^{-x}}{1+e^{-x}} d x \\ &=\int\_{0}^{\infty} x e^{-x} \sum\_{k=0}^{\infty}(-1)^{k} e^{-k x} d x \\ &=\sum\_{k=0}^{\infty}(-1)^{k} \underbrace{\int\_{0}^{\infty} x e^{-(k+1) x} d x}\_{J\_k} \end{aligned} $$ Integration by parts yields $$ \begin{aligned} J\_{k} &=-\frac{1}{k+1} \int\_{0}^{\infty} x \cdot d\left(e^{-(k+1) x}\right) \\ &=-\left[\frac{1}{k+1} x e^{-(k+1) x}\right]\_{0}^{\infty}+\frac{1}{k+1} \int\_{0}^{\infty} e^{-(k+1) x} d x \\ &=\frac{1}{(k+1)^{2}} \end{aligned} $$ Now we can conclude that \begin{aligned} I&=\sum\_{k=0}^{\infty} \frac{(-1)^{k}}{(k+1)^{2}} \\ &=\sum\_{k=1}^{\infty} \frac{1}{k^{2}}-2 \sum\_{k=1}^{\infty} \frac{1}{(2 k)^{2}} \\ &=\frac{\pi^{2}}{6}-\frac{2}{4} \cdot \frac{\pi^{2}}{6} \\ &=\frac{\pi^{2}}{12} \end{aligned}
4,869,656
I have an array that looks like the following.

```
$ => Array (2)
(
|    ['0'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "2"
|    )
|    ['1'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "1"
|    )
)
```

But it could also be bigger, or smaller, having only one array. Each array represents a row of results that has been returned from a database. The first field [0][0] is an ID number, which is going to be needed. [0][1] is the value I need to check. I need to know whether it is present or not, say whether I got a 2 or a 1 or whether I didn't. If I didn't, then I need to send the ID ([0][0]) off to another function. Sometimes I may end up with more results, or fewer. So this needs to be done using loops, but I am struggling to get it right; each time I think I have some code that will work, it won't. Can anyone help out? Edit: This is what I have got so far...

```
$tweet_sentiment = array();
$analyzer = array();

foreach($get_sentiment as $sentiment) {
    $tweet_id = $sentiment[0];
    $analyzer[] = $sentiment[1];
    $tweet_sentiment[$tweet_id] = $analyzer;
}
```

This changes the way the arrays look into the following:

```
$ => Array (1)
(
|    ['2'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "1"
|    )
)
```
2011/02/02
[ "https://Stackoverflow.com/questions/4869656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161570/" ]
This is how I understand your question. The first loop goes through the main array and works on its index => array2. The second loop goes through that second array and checks if the value of that array is "value1". If it is, it executes `doWhenValueIsThere()`; otherwise it executes `doWhenValueIsNotThere()`. You have to create the two functions depending on your needs.

```
foreach ($array1 as $index => $array2) {
    foreach (array_keys($array2) as $id) {
        if ($array2[$id] == "value1")
            doWhenValueIsThere();
        else
            doWhenValueIsNotThere();
    }
}
```
Maybe you can do the check like this: `array_key_exists(1, $x[1]) ? $x[1][1] : otherFunction($x[0][0])` where $x is your array.
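A minimal sketch of how that check could sit inside a loop over every row (the sample data and `handleMissingValue()` are hypothetical placeholders, not from the original question):

```
<?php
// Hypothetical handler for rows whose value field is missing.
function handleMissingValue($id) {
    echo "No value for ID $id\n";
}

$rows = array(
    array("2", "2"),
    array("2", "1"),
    array("3"),        // value at index 1 is missing
);

foreach ($rows as $row) {
    // Index 0 is the ID, index 1 is the value to check
    if (array_key_exists(1, $row)) {
        echo "ID {$row[0]} has value {$row[1]}\n";
    } else {
        handleMissingValue($row[0]);
    }
}
?>
```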
4,869,656
I have an array that looks like the following.

```
$ => Array (2)
(
|    ['0'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "2"
|    )
|    ['1'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "1"
|    )
)
```

But it could also be bigger, or smaller, having only one array. Each array represents a row of results that has been returned from a database. The first field [0][0] is an ID number, which is going to be needed. [0][1] is the value I need to check. I need to know whether it is present or not, say whether I got a 2 or a 1 or whether I didn't. If I didn't, then I need to send the ID ([0][0]) off to another function. Sometimes I may end up with more results, or fewer. So this needs to be done using loops, but I am struggling to get it right; each time I think I have some code that will work, it won't. Can anyone help out? Edit: This is what I have got so far...

```
$tweet_sentiment = array();
$analyzer = array();

foreach($get_sentiment as $sentiment) {
    $tweet_id = $sentiment[0];
    $analyzer[] = $sentiment[1];
    $tweet_sentiment[$tweet_id] = $analyzer;
}
```

This changes the way the arrays look into the following:

```
$ => Array (1)
(
|    ['2'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "1"
|    )
)
```
2011/02/02
[ "https://Stackoverflow.com/questions/4869656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161570/" ]
I ended up changing a few of the internal SQL statements so that my result from the database was different. I found that the SQL statements I was using and the returned arrays were too complex to process in the way I wanted - mainly due to how the SQL query results were formatted.
Maybe you can do the check like this: `array_key_exists(1, $x[1]) ? $x[1][1] : otherFunction($x[0][0])` where $x is your array.
4,869,656
I have an array that looks like the following.

```
$ => Array (2)
(
|    ['0'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "2"
|    )
|    ['1'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "1"
|    )
)
```

But it could also be bigger, or smaller, having only one array. Each array represents a row of results that has been returned from a database. The first field [0][0] is an ID number, which is going to be needed. [0][1] is the value I need to check. I need to know whether it is present or not, say whether I got a 2 or a 1 or whether I didn't. If I didn't, then I need to send the ID ([0][0]) off to another function. Sometimes I may end up with more results, or fewer. So this needs to be done using loops, but I am struggling to get it right; each time I think I have some code that will work, it won't. Can anyone help out? Edit: This is what I have got so far...

```
$tweet_sentiment = array();
$analyzer = array();

foreach($get_sentiment as $sentiment) {
    $tweet_id = $sentiment[0];
    $analyzer[] = $sentiment[1];
    $tweet_sentiment[$tweet_id] = $analyzer;
}
```

This changes the way the arrays look into the following:

```
$ => Array (1)
(
|    ['2'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "1"
|    )
)
```
2011/02/02
[ "https://Stackoverflow.com/questions/4869656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161570/" ]
This is how I understand your question. The first loop goes through the main array and works on its index => array2. The second loop goes through that second array and checks if the value of that array is "value1". If it is, it executes `doWhenValueIsThere()`; otherwise it executes `doWhenValueIsNotThere()`. You have to create the two functions depending on your needs.

```
foreach ($array1 as $index => $array2) {
    foreach (array_keys($array2) as $id) {
        if ($array2[$id] == "value1")
            doWhenValueIsThere();
        else
            doWhenValueIsNotThere();
    }
}
```
try this

```
<?php
$tweet_sentiment = array();
$analyzer = array();

foreach($get_sentiment as $sentiment) {
    $tweet_id = $sentiment[0];
    if(isset($sentiment[1])){
        $analyzer[] = $sentiment[1];
        $tweet_sentiment[$tweet_id] = $analyzer;
    }
}
?>
```
4,869,656
I have an array that looks like the following.

```
$ => Array (2)
(
|    ['0'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "2"
|    )
|    ['1'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "1"
|    )
)
```

But it could also be bigger, or smaller, having only one array. Each array represents a row of results that has been returned from a database. The first field [0][0] is an ID number, which is going to be needed. [0][1] is the value I need to check. I need to know whether it is present or not, say whether I got a 2 or a 1 or whether I didn't. If I didn't, then I need to send the ID ([0][0]) off to another function. Sometimes I may end up with more results, or fewer. So this needs to be done using loops, but I am struggling to get it right; each time I think I have some code that will work, it won't. Can anyone help out? Edit: This is what I have got so far...

```
$tweet_sentiment = array();
$analyzer = array();

foreach($get_sentiment as $sentiment) {
    $tweet_id = $sentiment[0];
    $analyzer[] = $sentiment[1];
    $tweet_sentiment[$tweet_id] = $analyzer;
}
```

This changes the way the arrays look into the following:

```
$ => Array (1)
(
|    ['2'] => Array (2)
|    (
|    |    ['0'] = String(1) "2"
|    |    ['1'] = String(1) "1"
|    )
)
```
2011/02/02
[ "https://Stackoverflow.com/questions/4869656", "https://Stackoverflow.com", "https://Stackoverflow.com/users/161570/" ]
I ended up changing a few of the internal SQL statements so that my result from the database was different. I found that the SQL statements I was using and the returned arrays were too complex to process in the way I wanted - mainly due to how the SQL query results were formatted.
try this

```
<?php
$tweet_sentiment = array();
$analyzer = array();

foreach($get_sentiment as $sentiment) {
    $tweet_id = $sentiment[0];
    if(isset($sentiment[1])){
        $analyzer[] = $sentiment[1];
        $tweet_sentiment[$tweet_id] = $analyzer;
    }
}
?>
```
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
Well, actually there is a potential Ubuntu-specific answer to this question. As mentioned in Gergo's link, this is basically about modifying */etc/mysql/my.cnf* and setting a new value for **datadir =** in the **[mysqld]** section. So far the generic part of the answer. Assuming you are running a somewhat modern version of Ubuntu, you might very well have [AppArmor](https://help.ubuntu.com/10.04/serverguide/C/apparmor.html) installed by default, with a profile for */usr/sbin/mysqld* in enforced mode. That default profile will most likely not accept your new datadir. Let us assume that your new datadir will be */home/data/mysql*. If you open the file */etc/apparmor.d/usr.sbin.mysqld* you will find these two lines among the rules.

```
/var/lib/mysql/ r,
/var/lib/mysql/** rwk,
```

Assuming our example above, they will have to be replaced or (probably preferably) complemented by these two lines.

```
/home/data/mysql/ r,
/home/data/mysql/** rwk,
```

Before we can start up our MySQL server with its new datadir, we will also have to explicitly reload our new AppArmor profile.

```
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
```
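A minimal command sequence tying the steps above together (a sketch assuming the example paths above; on older Ubuntu releases the service commands may be `sudo stop mysql` / `sudo start mysql` instead):

```
# Stop the server before touching the datadir
$ sudo service mysql stop

# Edit /etc/mysql/my.cnf and set, in the [mysqld] section:
#   datadir = /home/data/mysql

# Reload the updated AppArmor profile, then start MySQL again
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
$ sudo service mysql start
```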
This really isn't Ubuntu specific. Nevertheless, here is something that might help: <http://developer.spikesource.com/wiki/index.php/How_to_change_the_mysql_database_location>
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
[Super User has nice step-by-step instructions on how to solve this problem](https://serverfault.com/q/168957/71120) Here is another set of instructions for doing the same thing <http://www.ubuntugeek.com/how-to-change-the-mysql-data-default-directory.html> Here it is reposted. Go and upvote the original on Super User if you can. After some general confusion about permissions, I realized that the problem wasn't that I didn't have my permissions and paths right, but that AppArmor was preventing mysql from reading and writing to the new location. This is my solution: First stop MySQL so nothing weird happens while you're fiddling:

```
$ sudo stop mysql
```

Then move all the database directories to their new home:

```
$ sudo mv /var/lib/mysql/<all folders> /new-mysql-dir/
```

Don't move the files, they will be generated by mysql; just move the folders (which are the databases). Then politely ask AppArmor to allow mysql to use the new folder:

```
$ sudo vim /etc/apparmor.d/usr.sbin.mysqld
```

add lines:

```
/new-mysql-dir/ r,
/new-mysql-dir/** rwk,
```

Then tell mysql that the datadir has moved:

```
$ sudo vim /etc/mysql/my.cnf
```

change the line:

```
datadir=/var/lib/mysql
```

to:

```
datadir=/my-new-db-dir/
```

NOTE: Depending on your database setup, you might need to change innodb-data-home-dir etc. as well. Then restart AppArmor to read the new settings:

```
$ sudo /etc/init.d/apparmor restart
```

And start up MySQL again using the new datadir:

```
$ sudo start mysql
```

Hope this helps!
This really isn't Ubuntu specific. Nevertheless, here is something that might help: <http://developer.spikesource.com/wiki/index.php/How_to_change_the_mysql_database_location>
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
This really isn't Ubuntu specific. Nevertheless, here is something that might help: <http://developer.spikesource.com/wiki/index.php/How_to_change_the_mysql_database_location>
This won't work just like that: the mysql user has to have the right to write to the new dir:

```
sudo chown -R mysql:mysql /newdatadir
sudo chmod -R 754 /newdatadir
sudo chmod 754 /newdatadir/..
```
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
Well, actually there is a potential Ubuntu-specific answer to this question. As mentioned in Gergo's link, this is basically about modifying */etc/mysql/my.cnf* and setting a new value for **datadir =** in the **[mysqld]** section. So far the generic part of the answer. Assuming you are running a somewhat modern version of Ubuntu, you might very well have [AppArmor](https://help.ubuntu.com/10.04/serverguide/C/apparmor.html) installed by default, with a profile for */usr/sbin/mysqld* in enforced mode. That default profile will most likely not accept your new datadir. Let us assume that your new datadir will be */home/data/mysql*. If you open the file */etc/apparmor.d/usr.sbin.mysqld* you will find these two lines among the rules.

```
/var/lib/mysql/ r,
/var/lib/mysql/** rwk,
```

Assuming our example above, they will have to be replaced or (probably preferably) complemented by these two lines.

```
/home/data/mysql/ r,
/home/data/mysql/** rwk,
```

Before we can start up our MySQL server with its new datadir, we will also have to explicitly reload our new AppArmor profile.

```
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
```
[Super User has nice step-by-step instructions on how to solve this problem](https://serverfault.com/q/168957/71120) Here is another set of instructions for doing the same thing <http://www.ubuntugeek.com/how-to-change-the-mysql-data-default-directory.html> Here it is reposted. Go and upvote the original on Super User if you can. After some general confusion about permissions, I realized that the problem wasn't that I didn't have my permissions and paths right, but that AppArmor was preventing mysql from reading and writing to the new location. This is my solution: First stop MySQL so nothing weird happens while you're fiddling:

```
$ sudo stop mysql
```

Then move all the database directories to their new home:

```
$ sudo mv /var/lib/mysql/<all folders> /new-mysql-dir/
```

Don't move the files, they will be generated by mysql; just move the folders (which are the databases). Then politely ask AppArmor to allow mysql to use the new folder:

```
$ sudo vim /etc/apparmor.d/usr.sbin.mysqld
```

add lines:

```
/new-mysql-dir/ r,
/new-mysql-dir/** rwk,
```

Then tell mysql that the datadir has moved:

```
$ sudo vim /etc/mysql/my.cnf
```

change the line:

```
datadir=/var/lib/mysql
```

to:

```
datadir=/my-new-db-dir/
```

NOTE: Depending on your database setup, you might need to change innodb-data-home-dir etc. as well. Then restart AppArmor to read the new settings:

```
$ sudo /etc/init.d/apparmor restart
```

And start up MySQL again using the new datadir:

```
$ sudo start mysql
```

Hope this helps!
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
Well, actually there is a potential Ubuntu-specific answer to this question. As mentioned in Gergo's link, this is basically about modifying */etc/mysql/my.cnf* and setting a new value for **datadir =** in the **[mysqld]** section. So far the generic part of the answer. Assuming you are running a somewhat modern version of Ubuntu, you might very well have [AppArmor](https://help.ubuntu.com/10.04/serverguide/C/apparmor.html) installed by default, with a profile for */usr/sbin/mysqld* in enforced mode. That default profile will most likely not accept your new datadir. Let us assume that your new datadir will be */home/data/mysql*. If you open the file */etc/apparmor.d/usr.sbin.mysqld* you will find these two lines among the rules.

```
/var/lib/mysql/ r,
/var/lib/mysql/** rwk,
```

Assuming our example above, they will have to be replaced or (probably preferably) complemented by these two lines.

```
/home/data/mysql/ r,
/home/data/mysql/** rwk,
```

Before we can start up our MySQL server with its new datadir, we will also have to explicitly reload our new AppArmor profile.

```
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
```
This won't work just like that: the mysql user has to have the right to write to the new dir:

```
sudo chown -R mysql:mysql /newdatadir
sudo chmod -R 754 /newdatadir
sudo chmod 754 /newdatadir/..
```
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
Well, actually there is a potential Ubuntu-specific answer to this question. As mentioned in Gergo's link, this is basically about modifying */etc/mysql/my.cnf* and setting a new value for **datadir =** in the **[mysqld]** section. So far the generic part of the answer. Assuming you are running a somewhat modern version of Ubuntu, you might very well have [AppArmor](https://help.ubuntu.com/10.04/serverguide/C/apparmor.html) installed by default, with a profile for */usr/sbin/mysqld* in enforced mode. That default profile will most likely not accept your new datadir. Let us assume that your new datadir will be */home/data/mysql*. If you open the file */etc/apparmor.d/usr.sbin.mysqld* you will find these two lines among the rules.

```
/var/lib/mysql/ r,
/var/lib/mysql/** rwk,
```

Assuming our example above, they will have to be replaced or (probably preferably) complemented by these two lines.

```
/home/data/mysql/ r,
/home/data/mysql/** rwk,
```

Before we can start up our MySQL server with its new datadir, we will also have to explicitly reload our new AppArmor profile.

```
$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
```
For those who, like me, work with VirtualBox and need to move the MySQL datadir to a shared folder on the host system, follow the simple tutorial at <http://vacilando.org/en/article/moving-mysql-data-files-virtualbox-shared-folder>
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
[Super User has nice step-by-step instructions on how to solve this problem](https://serverfault.com/q/168957/71120) Here is another set of instructions for doing the same thing <http://www.ubuntugeek.com/how-to-change-the-mysql-data-default-directory.html> Here it is reposted. Go and upvote the original on Super User if you can. After some general confusion about permissions, I realized that the problem wasn't that I didn't have my permissions and paths right, but that AppArmor was preventing mysql from reading and writing to the new location. This is my solution: First stop MySQL so nothing weird happens while you're fiddling:

```
$ sudo stop mysql
```

Then move all the database directories to their new home:

```
$ sudo mv /var/lib/mysql/<all folders> /new-mysql-dir/
```

Don't move the files, they will be generated by mysql; just move the folders (which are the databases). Then politely ask AppArmor to allow mysql to use the new folder:

```
$ sudo vim /etc/apparmor.d/usr.sbin.mysqld
```

add lines:

```
/new-mysql-dir/ r,
/new-mysql-dir/** rwk,
```

Then tell mysql that the datadir has moved:

```
$ sudo vim /etc/mysql/my.cnf
```

change the line:

```
datadir=/var/lib/mysql
```

to:

```
datadir=/my-new-db-dir/
```

NOTE: Depending on your database setup, you might need to change innodb-data-home-dir etc. as well. Then restart AppArmor to read the new settings:

```
$ sudo /etc/init.d/apparmor restart
```

And start up MySQL again using the new datadir:

```
$ sudo start mysql
```

Hope this helps!
This won't work just like that: the mysql user has to have the right to write to the new dir:

```
sudo chown -R mysql:mysql /newdatadir
sudo chmod -R 754 /newdatadir
sudo chmod 754 /newdatadir/..
```
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
[Super User has nice step-by-step instructions on how to solve this problem](https://serverfault.com/q/168957/71120) Here is another set of instructions for doing the same thing <http://www.ubuntugeek.com/how-to-change-the-mysql-data-default-directory.html> Here it is reposted. Go and upvote the original on Super User if you can. After some general confusion about permissions, I realized that the problem wasn't that I didn't have my permissions and paths right, but that AppArmor was preventing mysql from reading and writing to the new location. This is my solution: First stop MySQL so nothing weird happens while you're fiddling:

```
$ sudo stop mysql
```

Then move all the database directories to their new home:

```
$ sudo mv /var/lib/mysql/<all folders> /new-mysql-dir/
```

Don't move the files, they will be generated by mysql; just move the folders (which are the databases). Then politely ask AppArmor to allow mysql to use the new folder:

```
$ sudo vim /etc/apparmor.d/usr.sbin.mysqld
```

add lines:

```
/new-mysql-dir/ r,
/new-mysql-dir/** rwk,
```

Then tell mysql that the datadir has moved:

```
$ sudo vim /etc/mysql/my.cnf
```

change the line:

```
datadir=/var/lib/mysql
```

to:

```
datadir=/my-new-db-dir/
```

NOTE: Depending on your database setup, you might need to change innodb-data-home-dir etc. as well. Then restart AppArmor to read the new settings:

```
$ sudo /etc/init.d/apparmor restart
```

And start up MySQL again using the new datadir:

```
$ sudo start mysql
```

Hope this helps!
For those who, like me, work with VirtualBox and need to move the MySQL datadir to a shared folder on the host system, follow the simple tutorial at <http://vacilando.org/en/article/moving-mysql-data-files-virtualbox-shared-folder>
224
I have my hard drive partitioned with two partitions, so I can easily re-install Ubuntu and try out different versions without losing my home directory data. It is set up like this:

```
20GB  -> / (root)
180GB -> /home
```

I do a lot of development work, so I have my `/var/www` folder symlinking to `/home/valorin/workspace`. But I want to do this with my MySQL data files as well, as I am getting annoyed that each time I reinstall my machine I need to do a full SQL dump and then restore all the DBs before I can do more work. What is the best way to do this without breaking MySQL?
2010/07/28
[ "https://askubuntu.com/questions/224", "https://askubuntu.com", "https://askubuntu.com/users/176/" ]
For those who, like me, work with VirtualBox and need to move the MySQL datadir to a shared folder on the host system, follow the simple tutorial at <http://vacilando.org/en/article/moving-mysql-data-files-virtualbox-shared-folder>
This won't work just like that: the mysql user has to have the right to write to the new dir:

```
sudo chown -R mysql:mysql /newdatadir
sudo chmod -R 754 /newdatadir
sudo chmod 754 /newdatadir/..
```
43,587,817
I have an MVC pattern and I am trying to display an image from a database path. Here is my view:

```
<div class="col-md-8">
    <h1 ><?php echo htmlspecialchars( $results['task']->username )?></h1>
    <p><?php echo htmlspecialchars( $results['task']->email )?></p>
    <p><?php echo $results['task']->text?></p>
    <img src="<?php echo $results['task']->imagePath?>" />
    <p>Published on <?php echo date('j F Y', $results['task']->publicationDate)?></p>
    <p><a href=".?action=home">Return to Homepage</a></p>
</div>
```

And the controller:

```
<?php
require( "config/config.php" );

$action = isset( $_GET['action'] ) ? $_GET['action'] : "";

switch ( $action ) {
  case 'viewTask':
    viewTask();
    break;
}

function viewTask() {
  if ( !isset($_GET["taskId"]) || !$_GET["taskId"] ) {
    homepage();
    return;
  }

  $results = array();
  $results['task'] = Task::getById( (int)$_GET["taskId"] );
  $results['pageTitle'] = $results['task']->username . " ";

  require( TEMPLATE_PATH . "/viewTask.php" );
}
```

And the getById function inside the Task class:

```
class Task
{
  // Properties
  public $id = null;
  public $username = null;
  public $email = null;
  public $text = null;
  public $publicationDate = null;

  public function __construct( $data=array() ) {
    if ( isset( $data['id'] ) ) $this->id = (int) $data['id'];
    if ( isset( $data['username'] ) ) $this->username = $data['username'];
    if ( isset( $data['email'] ) ) $this->email = $data['email'];
    if ( isset( $data['text'] ) ) $this->text = $data['text'];
    if ( isset( $data['publicationDate'] ) ) $this->publicationDate = (int) $data['publicationDate'];
    if ( isset( $data['status'] ) ) $this->status = (int) $data['status'];
  }

  public static function getById( $id ) {
    $conn = new PDO( DB_DSN, DB_USERNAME, DB_PASSWORD );
    $sql = "SELECT *, UNIX_TIMESTAMP(publicationDate) AS publicationDate FROM tasks WHERE id = :id";
    $st = $conn->prepare( $sql );
    $st->bindValue( ":id", $id, PDO::PARAM_INT );
    $st->execute();
    $row = $st->fetch();
    $conn = null;
    if ( $row ) return new Task( $row );
  }
}
```

How can I get the image and display it on the page? Thanks
2017/04/24
[ "https://Stackoverflow.com/questions/43587817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6725091/" ]
No. StaggeredGridLayoutManager doesn't support items that differ in both width and height. You would need to use an external library, this one might help you out: <https://github.com/lucasr/twoway-view> Fair warning: I've used this library and it hasn't been supported for a long time, and it definitely has some bugs in it. So read up on the Git issues first.
I have achieved this using `SpannableGridLayoutManager`; for a more detailed answer, follow the link provided: [Solution is here](https://stackoverflow.com/questions/48821961/how-to-design-spannable-gridview-using-recyclerview-spannablegridlayoutmanager/48825958#48825958). This solution is working perfectly.
22,283,526
I have a form that will update specified entries in a MySQL table. The form will only submit if all the fields are filled in. Is there a way to make it so that the form will only update fields that have a new value and leave the ones that have been left blank? Form code:

```
<?php
error_reporting(E_ALL ^ E_DEPRECATED);

if(isset($_POST['update']))
{
    $con = mysql_connect($server, $db_user, $db_pass);

    if(! $con )
    {
        die('Could not connect: ' . mysql_error());
    }

    $id = $_POST['id'];
    $english = $_POST['english'];
    $math = $_POST['math'];
    $science = $_POST['science'];
    $table = $_POST['year'];

    $sql = "UPDATE $table ".
           "SET english = $english ,math = $math ,science = $science ".
           "WHERE id = $id" ;

    mysql_select_db('education');
    $retval = mysql_query( $sql, $con );

    if(! $retval )
    {
        die('Could not enter data: ' . mysql_error());
    }

    //header("Location: " . $_SERVER['PHP_SELF']);
    //echo "Entered data successfully\n";

    mysql_close($con);
}
else
{
?>
<?php
}
?>
<h4 align="center">Update student details</h4>
<form action="<?php $_SERVER['PHP_SELF'] ?>" method="post">
    Student ID: <input name="id" type="text" id="id">
    <br>
    English mark: <input name="english" type="number" id="english">
    <br>
    Math's mark: <input name="math" type="number" id="math">
    <br>
    Science mark: <input name="science" type="number" id="science">
    <br>
    Year:
    <br>
    <select name="year" id="year">
        <option value="">Select...</option>
        <option value="year1">Year 1</option>
        <option value="year2">Year 2</option>
        <option value="year3">Year 3</option>
        <option value="year4">Year 4</option>
    </select>
    <br>
    <br>
    <input name="update" type="submit" id="update" value="Submit">
</form>
```
2014/03/09
[ "https://Stackoverflow.com/questions/22283526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3396245/" ]
The Angular way of doing this would be using `$watch`:

```
var myApp = angular.module('myApp',[]);

myApp.directive('d6', function(){
    return {
        require: 'ngModel',
        restrict: 'A',
        link: function link(scope, elem, attrs, ngModel) {
            scope.$watch(
                function () {
                    return ngModel.$modelValue;
                },
                function(newValue) {
                    console.log(newValue);
                }
            );
        }
    }
});
```

There is a working [JSFiddle](http://jsfiddle.net/Zd3FT/2/).
**Angular code** This is found in the Angular source, in the `checkboxInputType` function used for all input[type=checkbox] elements with an `ngModelController`:

```
element.on('click', function() {
    scope.$apply(function() {
        ctrl.$setViewValue(element[0].checked);
    });
});
```

This code updates the `ngModelController` with the boolean as the view value, which is instantly piped into the provided parsers, thus setting the model value. **Click and change events** You are listening to the `change` event, which is triggered before the `click` event in Chrome, and the opposite in Firefox. That is the whole problem. **Solutions** The jQuery solution is to listen to the `click` event too. The Angular solution would be to watch the model value directly.
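A sketch of the jQuery-style fix mentioned above - binding the same handler to both events so it fires regardless of which one the browser dispatches first (illustrative only; you may want to guard against the handler running twice when both events fire):

```
// Listen to both events so ordering differences between browsers don't matter
element.on('click change', function() {
    scope.$apply(function() {
        ctrl.$setViewValue(element[0].checked);
    });
});
```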
22,283,526
I have a form that will update specified entries in a MySQL table. The form will only submit if all the fields are filled in. Is there a way to make it so that the form will only update fields that have a new value and leave the ones that have been left blank? Form code:

```
<?php
error_reporting(E_ALL ^ E_DEPRECATED);

if(isset($_POST['update']))
{
    $con = mysql_connect($server, $db_user, $db_pass);

    if(! $con )
    {
        die('Could not connect: ' . mysql_error());
    }

    $id = $_POST['id'];
    $english = $_POST['english'];
    $math = $_POST['math'];
    $science = $_POST['science'];
    $table = $_POST['year'];

    $sql = "UPDATE $table ".
           "SET english = $english ,math = $math ,science = $science ".
           "WHERE id = $id" ;

    mysql_select_db('education');
    $retval = mysql_query( $sql, $con );

    if(! $retval )
    {
        die('Could not enter data: ' . mysql_error());
    }

    //header("Location: " . $_SERVER['PHP_SELF']);
    //echo "Entered data successfully\n";

    mysql_close($con);
}
else
{
?>
<?php
}
?>
<h4 align="center">Update student details</h4>
<form action="<?php $_SERVER['PHP_SELF'] ?>" method="post">
    Student ID: <input name="id" type="text" id="id">
    <br>
    English mark: <input name="english" type="number" id="english">
    <br>
    Math's mark: <input name="math" type="number" id="math">
    <br>
    Science mark: <input name="science" type="number" id="science">
    <br>
    Year:
    <br>
    <select name="year" id="year">
        <option value="">Select...</option>
        <option value="year1">Year 1</option>
        <option value="year2">Year 2</option>
        <option value="year3">Year 3</option>
        <option value="year4">Year 4</option>
    </select>
    <br>
    <br>
    <input name="update" type="submit" id="update" value="Submit">
</form>
```
2014/03/09
[ "https://Stackoverflow.com/questions/22283526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3396245/" ]
The Angular way of doing this would be using `$watch`:

```
var myApp = angular.module('myApp',[]);

myApp.directive('d6', function(){
    return {
        require: 'ngModel',
        restrict: 'A',
        link: function link(scope, elem, attrs, ngModel) {
            scope.$watch(
                function () {
                    return ngModel.$modelValue;
                },
                function(newValue) {
                    console.log(newValue);
                }
            );
        }
    }
});
```

There is a working [JSFiddle](http://jsfiddle.net/Zd3FT/2/).
Either what the two answers said, or you can simply watch for the changes in $scope.accept (which is your model):

```
$scope.$watch('accept', function(){
    $scope.value = $scope.accept;
});
```

See it here: <http://jsfiddle.net/A8Vgk/307/>. This way seems the most natural to me.
22,283,526
I have a form that will update specified entries in a MySQL table. The form will only submit if all the fields are filled in. Is there a way to make it so that the form will only update fields that have a new value and leave the ones that have been left blank? Form code:

```
<?php
error_reporting(E_ALL ^ E_DEPRECATED);

if(isset($_POST['update']))
{
    $con = mysql_connect($server, $db_user, $db_pass);

    if(! $con )
    {
        die('Could not connect: ' . mysql_error());
    }

    $id = $_POST['id'];
    $english = $_POST['english'];
    $math = $_POST['math'];
    $science = $_POST['science'];
    $table = $_POST['year'];

    $sql = "UPDATE $table ".
           "SET english = $english ,math = $math ,science = $science ".
           "WHERE id = $id" ;

    mysql_select_db('education');
    $retval = mysql_query( $sql, $con );

    if(! $retval )
    {
        die('Could not enter data: ' . mysql_error());
    }

    //header("Location: " . $_SERVER['PHP_SELF']);
    //echo "Entered data successfully\n";

    mysql_close($con);
}
else
{
?>
<?php
}
?>
<h4 align="center">Update student details</h4>
<form action="<?php $_SERVER['PHP_SELF'] ?>" method="post">
    Student ID: <input name="id" type="text" id="id">
    <br>
    English mark: <input name="english" type="number" id="english">
    <br>
    Math's mark: <input name="math" type="number" id="math">
    <br>
    Science mark: <input name="science" type="number" id="science">
    <br>
    Year:
    <br>
    <select name="year" id="year">
        <option value="">Select...</option>
        <option value="year1">Year 1</option>
        <option value="year2">Year 2</option>
        <option value="year3">Year 3</option>
        <option value="year4">Year 4</option>
    </select>
    <br>
    <br>
    <input name="update" type="submit" id="update" value="Submit">
</form>
```
2014/03/09
[ "https://Stackoverflow.com/questions/22283526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3396245/" ]
**Angular code** This is found in the Angular source, in the `checkboxInputType` function used for all input[type=checkbox] elements with an `ngModelController`:

```
element.on('click', function() {
    scope.$apply(function() {
        ctrl.$setViewValue(element[0].checked);
    });
});
```

This code updates the `ngModelController` with the boolean as the view value, which is instantly piped into the provided parsers, thus setting the model value. **Click and change events** You are listening to the `change` event, which is triggered before the `click` event in Chrome, and the opposite in Firefox. That is the whole problem. **Solutions** The jQuery solution is to listen to the `click` event too. The Angular solution would be to watch the model value directly.
Either what the two answers said, or you can simply watch for the changes in $scope.accept (which is your model):

```
$scope.$watch('accept', function(){
    $scope.value = $scope.accept;
});
```

See it here: <http://jsfiddle.net/A8Vgk/307/>. This way seems the most natural to me.
249,821
Our router (Asus RT-AC68U) has been slowing down our speeds, up until I disabled the IPv6 firewall (we went from 250 to 380, which is the modem cap for now). We've always had IPv6 disabled on the router, but I was wondering whether also disabling the IPv6 firewall poses any security risk.
2021/05/28
[ "https://security.stackexchange.com/questions/249821", "https://security.stackexchange.com", "https://security.stackexchange.com/users/258098/" ]
The Debian and Ubuntu projects maintain a list of "unfixed" vulnerabilities which they've assessed and decided not to patch. One of the problems with vulnerability scanning container images is that most of the tools default to reporting those unfixed issues (it's worth noting that "traditional" vulnerability scanners don't usually report unfixed issues at all, more info [here](https://raesene.github.io/blog/2020/11/22/When_Is_A_Vulnerability_Not_A_Vulnerability/)) With [Trivy](https://github.com/aquasecurity/trivy) there's an `--ignore-unfixed` option which will provide a report without those issues. In the case of `php:7.3-fpm` on Docker hub adding that option takes the vulnerability count from 565 issues to 1. Depending on your threat model, you might want to use a vulnerability scanner which ignores unfixed vulnerabilities or you might want to manually compile fixed versions of packages that are included in the container image which you use and where the vulnerabilities are relevant to you.
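As a usage sketch (vulnerability counts will differ over time as the database is updated):

```
# Default report: includes vulnerabilities with no available fix
trivy image php:7.3-fpm

# Report only issues that already have a fixed package version
trivy image --ignore-unfixed php:7.3-fpm
```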
From those links you posted, it is clear that Debian is not patching them because they consider them too minor and/or unrealistic to exploit:

> **CVE-2019-19603** [buster] - sqlite3 (Minor issue, too intrusive to backport)

> **CVE-2019-3844** [buster] - systemd (Minor issue; exploit vector needs control both of the service and a helper

The high severity could be overstated, but only you can determine if the above notes are good enough for your threat model. If not, you could download the newer, patched versions of these programs and build them from source. It also looks like the issues may be fixed in Debian sid (unstable). You could try to configure apt to install these newer versions, but it may cause dependency issues.
62,756,177
Since the 2011 revision of the C++ standard, variables can be initialized in the three different ways given below.

```
int i = 0;
int i (0);
int i {0};
```

As far as I know all three different initializations have the same effect. If they all have the same effect, why not stick to one way of initialisation, like the first one? Is there any special need to initialise variables by surrounding their initial values in () or {}?
2020/07/06
[ "https://Stackoverflow.com/questions/62756177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8247997/" ]
The reason for not disallowing any of the three is to maintain backward compatibility. There are tons of production code written in each of the three ways. If the standard were changed, working code would need to be rewritten, causing costs and possibly bugs. Since the C++ committee is very serious about backward compatibility, we end up in this situation. It is still way better than other languages like Python, where, from minor version to minor version, you need to rewrite code to format a string, or to loop from zero to ten. If you have the choice, pick `{0}`, it's called uniform initialization for a reason :-)
```
int i = 0;
int i (0); //For backward compatibility
int i {0}; //Uniform Initialization
```

There is a difference between these 3 types of initialization.

```
int i = 2.2;
int i (2.2);
int i {2.2};
```

The first two will do the implicit conversion, but `int i {2.2};` will give you an error/warning for narrowing. For details about initializer lists, [check this](https://youtu.be/XjkRIc5mDK8) video.
62,756,177
Since the 2011 revision of the C++ standard, variables can be initialized in the three different ways given below.

```
int i = 0;
int i (0);
int i {0};
```

As far as I know all three different initializations have the same effect. If they all have the same effect, why not stick to one way of initialisation, like the first one? Is there any special need to initialise variables by surrounding their initial values in () or {}?
2020/07/06
[ "https://Stackoverflow.com/questions/62756177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8247997/" ]
It is an historical question. The first one is `int i=0;`. This one comes from the early C language of the 1970s. Then early C++ versions introduced a function-like initialization syntax, written here as `int i(0);`. But because of the [most vexing parse](https://en.wikipedia.org/wiki/Most_vexing_parse) ambiguities, the curly-brace initialization was invented. And for compatibility reasons, all those syntaxes are still valid...
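A minimal sketch of the ambiguity that braces resolve (the `Widget` type is a hypothetical placeholder for illustration):

```
#include <iostream>

struct Widget {
    int value = 42;
};

int main() {
    Widget w1();  // most vexing parse: declares a function returning Widget!
    Widget w2{};  // unambiguous: default-initializes a Widget
    std::cout << w2.value << "\n";  // prints 42
    return 0;
}
```

Here `w1.value` would be a compile error, because `Widget w1();` is parsed as a function declaration, not an object definition; `{}` avoids the ambiguity entirely.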
```
int i = 0;
int i (0); //For backward compatibility
int i {0}; //Uniform Initialization
```

There is a difference between these 3 types of initialization.

```
int i = 2.2;
int i (2.2);
int i {2.2};
```

The first two will do the implicit conversion, but `int i {2.2};` will give you an error/warning for narrowing. For details about initializer lists, [check this](https://youtu.be/XjkRIc5mDK8) video.
38,234,487
In C, when I return a pointer to a stack-allocated variable from a function, that memory is reclaimed after the function returns, making the pointer unsafe to dereference. But in Go, the compiler is not giving me any errors. Does that mean that this is safe to do?

```
package main

import (
    "fmt"
)

func main() {
    fmt.Println(*(something()))
}

func something() *string {
    s := "a"
    return &s
}
```
2016/07/06
[ "https://Stackoverflow.com/questions/38234487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4527337/" ]
Yes, this is safe and a normal pattern in Go programming. Go uses [escape analysis](https://en.wikipedia.org/wiki/Escape_analysis) to move any values with pointers that escape the stack to the heap automatically. You don't need to be concerned with where values are allocated. From the Go FAQ: ["How do I know whether a variable is allocated on the heap or the stack?"](https://golang.org/doc/faq#stack_or_heap)

> if the compiler cannot prove that the variable is not referenced after the function returns, then the compiler must allocate the variable on the garbage-collected heap to avoid dangling pointer errors

You can see these optimization choices during compilation by using the `-gcflags -m` option.
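As a sketch, running it on the program from the question (the exact diagnostic wording and positions vary between Go versions):

```
$ go build -gcflags=-m main.go
# Typical output includes a line like:
#   ./main.go:13:2: moved to heap: s
```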
Yes, in Golang it is fine to return a pointer to a local variable. Golang will manage the object's lifetime for you and free it when all pointers to it are gone. In another answer I point out all the differences between C/C++ pointers and Golang pointers: [What is the meaning of '\*' and '&' in Golang?](https://stackoverflow.com/questions/38172661/what-is-the-meaning-of-and-in-golang/38172757#38172757)
70,353,721
Description
===========

Say I have a lot of strings, some of them very long:

```
Aim for the moon. If you miss, you may hit a star. – Clement Stone
Nothing about us without us
```

I want to have a text wrapper implementing this algorithm:

1. Starting from the beginning of the string, identify the nearest blank character (` `) around position 25
2. **If the residue is smaller than 5 characters in length, then do nothing. If not, replace that blank character with `\n`**
3. Identify the next nearest blank character around the end of the next 25 characters
4. Return to 2 until the end of the line

So that the text will be replaced with:

```
Aim for the moon. If you\nmiss, you may hit a star.\n– Clement Stone
Nothing about us without us
```

Attempt 1
=========

Consulting [Wrapping Text With Regular Expressions](https://macromates.com/blog/2006/wrapping-text-with-regular-expressions/ "Wrapping Text With Regular Expressions")

* Matching pattern: `(.{1,25})( +|$\n?)`
* Replacing pattern: `$1\n`

But this will produce `Nothing about us without\nus`, which is not preferable.

Attempt 2
=========

Using a [Lookahead Construct](https://www.regular-expressions.info/lookaround.html "Regex Tutorial - Lookahead and Lookbehind Zero-Length Assertions") in an [If-Then-Else Conditional](https://www.regular-expressions.info/conditional.html "Regex Tutorial - If-Then-Else Conditionals"):

* Matching pattern: `(.{1,25})(?(?=(.{1,5}$).*))( +|$\n?)`
* Replacing pattern: `$1$2\n`

It still produces `Nothing about us without\nus`, which is not preferable.
2021/12/14
[ "https://Stackoverflow.com/questions/70353721", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3416774/" ]
Created this based on @sln's [answer to a different word wrap problem](https://stackoverflow.com/a/20434776/406712). All I have added is this alternative point to add a line break: "Expand by up to 5 characters until before a linebreak or EOS", and I changed the number of characters allowed from `50` to `25`:

```
[^\r\n]{1,5}(?=\r?\n|$)
```

Compressed
----------

```
(?:((?>.{1,25}(?:[^\r\n]{1,5}(?=\r?\n|$)|(?<=[^\S\r\n])[^\S\r\n]?|(?=\r?\n)|$|[^\S\r\n]))|.{1,25})(?:\r?\n)?|(?:\r?\n|$))
```

Replacement
-----------

`$1` followed by a linebreak

```
$1\r\n
```

Preview
-------

<https://regex101.com/r/pRqdhi/1>

Detailed Regular Expression
---------------------------

```
(?: # -- Words/Characters
    (                                 # (1 start)
       (?>                            # Atomic Group - Match words with valid breaks
          .{1,25}                     # 1-N characters
                                      # Followed by one of 4 prioritized, non-linebreak whitespace
          (?:                         # break types:
             [^\r\n]{1,5}(?=\r?\n|$)  # Expand by up to 5 characters until before a linebreak or EOS
           |
             (?<= [^\S\r\n] )         # 1. - Behind a non-linebreak whitespace
             [^\S\r\n]?               # ( optionally accept an extra non-linebreak whitespace )
           |  (?= \r? \n )            # 2. - Ahead a linebreak
           |  $                       # 3. - EOS
           |  [^\S\r\n]               # 4. - Accept an extra non-linebreak whitespace
          )
       )                              # End atomic group
     |
       .{1,25}                        # No valid word breaks, just break on the N'th character
    )                                 # (1 end)
    (?: \r? \n )?                     # Optional linebreak after Words/Characters
 |
    # -- Or, Linebreak
    (?: \r? \n | $ )                  # Stand alone linebreak or at EOS
)
```
If your input is run line-by-line, and there is no newline character in the middle of a line, then you can try this:

* Pattern: `(.{1,25}.{1,5}$|.{1,25}(?= ))`
* Substitution: `$1\n`

Then apply this:

* Pattern: `\n`
* Substitution: `\n`
17,676
Just what the title states; there's a good deal of noise made about transport and storage of spent nuclear fuel. Why all the hullabaloo when the fuel is all spent?
2011/11/30
[ "https://physics.stackexchange.com/questions/17676", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5265/" ]
For a vanilla nuclear reactor (no MOX, breeder, etc.) the fuel (U-235) is only about 4% of the fuel rod itself. The problem with this, however, is that a "spent" fuel rod still has U-235 (more than the naturally occurring level of 0.71%) as well as U-238, Pu-239, etc. Being "spent" just means that there isn't enough U-235 around to keep the chain reaction going. All these materials in the fuel rods (as mentioned above) have half-lives of thousands to tens of thousands of years, so by the time they are "safe" to handle, nobody will be around to take care of them. The material encasing these rods also becomes radioactive from all the neutrons floating around and hitting it.
Well, it's not really all spent in the chemical-fuel sense. The leftovers from nuclear fission are themselves still radioactive, just not usefully radioactive for energy generation. These products will continue to be dangerously radioactive for thousands of years, and so the problem is to store dangerous materials for longer than modern civilisation has been on the planet.
17,676
Just what the title states; there's a good deal of noise made about transport and storage of spent nuclear fuel. Why all the hullabaloo when the fuel is all spent?
2011/11/30
[ "https://physics.stackexchange.com/questions/17676", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5265/" ]
Mart's answer gets to some of the problem (e.g. ensuring that storage remains stable for the period of decay), but I believe the other answers here are a bit off base. Nuclear fuel is determined to be "spent" when the nuclear engineers determine it is no longer economically warranted to continue using it. In a physical sense, the fuel could certainly be used for much longer. Besides, it depends on what kind of reactor you are talking about as to how long the fuel is used and what the composition of the spent fuel is. The plutonium that is such a dangerous part of most used fuel is actually a major contributor to the energy output of a CANDU (Canadian heavy water) reactor toward the end of the useful fuel life. As for the composition of waste, the majority is composed of rather inert U238 (~91%) that did not transmute through neutron capture or fission. This material is part of the original fuel composition and is not harmful. A small percentage (~1%) consists of the remaining U235 that did not transmute or fission. About the same amount is plutonium that results from neutron capture by U238 and subsequent decay. Depending on reactor operation, about 4% is [daughter products](http://en.wikipedia.org/wiki/Nuclear_fission_product) and the rest is actinides and activation products. The half-lives of these isotopes vary significantly, but it is a convenient fact that the more radioactive a material is, the shorter its half-life and consequently the shorter the time before it is "safe." There are many [graphs](http://www.world-nuclear.org/education/phys.htm) (e.g. see the second graph) out there of decay times for spent fuel, but as I said above, the exact time for decay depends on a lot of things like the original composition of the fuel, what kind of reactor was used, and the final processing of the used fuel. Several options for processing spent fuel exist, including recycling it to retrieve the usable uranium and plutonium. Doing so reduces the waste volume considerably but also necessitates the development of separation technologies and the handling of concentrated waste. It is also possible to place the waste in special reactors that are dedicated to "burning" the waste with high neutron flux; even still, there will always be some waste. Waste that is slated for disposal is often vitrified, that is, it is mixed with borated glass. These glass logs are put into steel containers and then stored in concrete. Whatever is done with the waste, we must be confident in the stability of the storage for at least several hundred years (though thousands of years in some cases). A great deal of research continues on this subject. That being said, the absolute amount of waste is quite small. I've read various numbers, but for order of magnitude we are talking about one football field 20 feet deep of waste for all of the nuclear reactors in the United States for the last 60 years. That is a lot of bad stuff, but in comparison, it seems quite manageable. If we recycled the fuel, that would reduce to about 6 inches of waste spread over one football field. Coal plants, in general, produce on the order of 10,000 times more waste by volume, which contains more radioactive material in absolute terms than nuclear plant waste.
Well, it's not really all spent in the chemical-fuel sense. The leftovers from nuclear fission are themselves still radioactive, just not usefully radioactive for energy generation. These products will continue to be dangerously radioactive for thousands of years, and so the problem is to store dangerous materials for longer than modern civilisation has been on the planet.
17,676
Just what the title states; there's a good deal of noise made about the transport and storage of spent nuclear fuel. Why all the hullabaloo when the fuel is all spent?
2011/11/30
[ "https://physics.stackexchange.com/questions/17676", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5265/" ]
Here are the common engineering safety concerns regarding spent fuel: * **Radioisotope source** - in its normal form it is a danger to anyone next to it from penetrating radiation, including gammas and neutrons, and there is a much greater danger that, should the fuel be torn apart, it releases radioactive gases. Even worse is the possibility that the entire thing gets dissolved into a water supply (like a Yucca Mtn concern). * **Heat production leads to danger of melting** - there wouldn't be a strong concern about physical damage to the fuel if not for the fact that the fuel produces heat from those decaying isotopes. This means that for a category of spent fuel (recently taken out of the core), active cooling is necessary or else it will just melt down on its own and create a small nuclear disaster. * **Potential to go critical** - as has been pointed out here, the spent fuel only lacks sufficient reactivity (from U-235 mostly) to go critical in reactor conditions. However, reactor conditions are very different from ordinary life, and the Doppler temperature coefficient of reactivity makes it less critical inside the reactor because it is at a higher temperature. Our world is lower temperature, so a reactor-like conglomeration of assemblies submerged in ordinary water could create an active nuclear reactor. This would be very bad. In practice, however, these assemblies are almost always coupled with absorber materials such that even very extreme conditions would not make them critical again. * **Proliferation and now terrorism concerns** - spent fuel can be recycled to use again, but the downside to this is that people can get access to somewhat usable nuclear materials through spent fuel. The actual reprocessing process is very difficult, but the difficulty depends on the safety and safeguards you employ - things a rogue regime may not be very concerned with. The plutonium, in particular, is very potent and, unlike the uranium-235, can be isolated; although it would be suboptimal to use in a nuclear weapon, a crude weapon with reactor plutonium is still theoretically possible. Also, we are more worried about terrorism today and, as I've pointed out, the material within the fuel is very dangerous when its physical integrity is no longer maintained. There is some logic to saying that this fuel is "self protected" because you can't pick it up and take it away on your own - the radiation would kill you. But concerns about a variety of types of attacks exist, since even bad people can be creative.
Well, it's not really all spent in the chemical-fuel sense. The leftovers from nuclear fission are themselves still radioactive, just not usefully radioactive for energy generation. These products will continue to be dangerously radioactive for thousands of years, and so the problem is to store dangerous materials for longer than modern civilisation has been on the planet.
17,676
Just what the title states; there's a good deal of noise made about the transport and storage of spent nuclear fuel. Why all the hullabaloo when the fuel is all spent?
2011/11/30
[ "https://physics.stackexchange.com/questions/17676", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5265/" ]
For a vanilla nuclear reactor (no MOX, breeder, etc.) the fuel (U-235) is only about 4% of the fuel rod itself. The problem with this, however, is that a "spent" fuel rod still has U-235 (more than the naturally occurring level of 0.71%) as well as U-238, Pu-239, etc. Being "spent" just means that there isn't enough U-235 around to keep the chain reaction going. All these materials in the fuel rods (as mentioned above) have half-lives of thousands to tens of thousands of years, so by the time they are "safe" to handle, nobody now alive will be around to take care of them. The material encasing these rods also becomes radioactive from all the neutrons floating around and hitting it.
As mentioned in the other answers, there's still radioactive material in the spent fuel. Any containment will have to deal with the heat from said radioactive material. Also, radioactivity can change the quality of container materials - metals can become brittle, glass might crack (see here: <http://jol.liljenzin.se/KAPITEL/CH07NY3.PDF> ) So you need a containment that stands up to radiation in addition to all other environmental stresses over a geological timeframe, while still being able to dissipate the residual heat.
17,676
Just what the title states; there's a good deal of noise made about the transport and storage of spent nuclear fuel. Why all the hullabaloo when the fuel is all spent?
2011/11/30
[ "https://physics.stackexchange.com/questions/17676", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5265/" ]
Mart's answer gets to some of the problem (e.g. ensuring that storage remains stable for the period of decay) but I believe the other answers here are a bit off base. Nuclear fuel is determined to be "spent" when the nuclear engineers determine it is no longer economically warranted to continue using it. In a physical sense, the fuel could certainly be used for much longer. Moreover, how long the fuel is used and what the composition of the spent fuel is depend on what kind of reactor you are talking about. The plutonium that is such a dangerous part of most used fuel is actually a major contributor to the energy output of a CANDU (Canadian heavy water) reactor toward the end of the useful fuel life. As for the composition of the waste, the majority is composed of rather inert U238 (~91%) that did not transmute through neutron capture or fission. This material is part of the original fuel composition and is not harmful. A small percentage (~1%) consists of the remaining U235 that did not transmute or fission. About the same amount is plutonium that results from neutron capture by U238 and subsequent decay. Depending on reactor operation, about 4% is [daughter products](http://en.wikipedia.org/wiki/Nuclear_fission_product) and the rest is actinides and activation products. The half-lives of these isotopes vary significantly, but it is a convenient fact that the more radioactive a material is, the shorter its half-life and consequently the shorter the time before it is "safe." There are many [graphs](http://www.world-nuclear.org/education/phys.htm) (e.g. see the second graph) out there of decay times for spent fuel, but as I said above, the exact time for decay depends on a lot of things like the original composition of the fuel, what kind of reactor was used, and the final processing of the used fuel. Several options for processing spent fuel exist, including recycling it to retrieve the usable uranium and plutonium. Doing so reduces the waste volume considerably but also necessitates the development of separations technologies and the handling of concentrated waste. It is also possible to place the waste in special reactors that are dedicated to "burning" the waste with high neutron flux; even still, there will always be some waste. Waste that is slated for disposal is often vitrified, that is, it is mixed with borated glass. These glass logs are put into steel containers and then stored in concrete. Whatever is done with the waste, we must be confident in the stability of the storage for at least several hundred years (though thousands of years in some cases). A great deal of research continues on this subject. That being said, the absolute amount of waste is quite small. I've read various numbers, but for order of magnitude we are talking about one football field 20 feet deep of waste for all of the nuclear reactors in the United States for the last 60 years. That is a lot of bad stuff, but in comparison, it seems quite manageable. If we recycled the fuel, that would reduce to about 6 inches of waste spread over one football field. Coal plants, in general, produce on the order of 10,000 times more waste by volume, which contains more radioactive material in absolute terms than nuclear plant waste.
As mentioned in the other answers, there's still radioactive material in the spent fuel. Any containment will have to deal with the heat from said radioactive material. Also, radioactivity can change the quality of container materials - metals can become brittle, glass might crack (see here: <http://jol.liljenzin.se/KAPITEL/CH07NY3.PDF> ) So you need a containment that stands up to radiation in addition to all other environmental stresses over a geological timeframe, while still being able to dissipate the residual heat.
17,676
Just what the title states; there's a good deal of noise made about the transport and storage of spent nuclear fuel. Why all the hullabaloo when the fuel is all spent?
2011/11/30
[ "https://physics.stackexchange.com/questions/17676", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5265/" ]
Here are the common engineering safety concerns regarding spent fuel: * **Radioisotope source** - in its normal form it is a danger to anyone next to it from penetrating radiation, including gammas and neutrons, and there is a much greater danger that, should the fuel be torn apart, it releases radioactive gases. Even worse is the possibility that the entire thing gets dissolved into a water supply (like a Yucca Mtn concern). * **Heat production leads to danger of melting** - there wouldn't be a strong concern about physical damage to the fuel if not for the fact that the fuel produces heat from those decaying isotopes. This means that for a category of spent fuel (recently taken out of the core), active cooling is necessary or else it will just melt down on its own and create a small nuclear disaster. * **Potential to go critical** - as has been pointed out here, the spent fuel only lacks sufficient reactivity (from U-235 mostly) to go critical in reactor conditions. However, reactor conditions are very different from ordinary life, and the Doppler temperature coefficient of reactivity makes it less critical inside the reactor because it is at a higher temperature. Our world is lower temperature, so a reactor-like conglomeration of assemblies submerged in ordinary water could create an active nuclear reactor. This would be very bad. In practice, however, these assemblies are almost always coupled with absorber materials such that even very extreme conditions would not make them critical again. * **Proliferation and now terrorism concerns** - spent fuel can be recycled to use again, but the downside to this is that people can get access to somewhat usable nuclear materials through spent fuel. The actual reprocessing process is very difficult, but the difficulty depends on the safety and safeguards you employ - things a rogue regime may not be very concerned with. The plutonium, in particular, is very potent and, unlike the uranium-235, can be isolated; although it would be suboptimal to use in a nuclear weapon, a crude weapon with reactor plutonium is still theoretically possible. Also, we are more worried about terrorism today and, as I've pointed out, the material within the fuel is very dangerous when its physical integrity is no longer maintained. There is some logic to saying that this fuel is "self protected" because you can't pick it up and take it away on your own - the radiation would kill you. But concerns about a variety of types of attacks exist, since even bad people can be creative.
As mentioned in the other answers, there's still radioactive material in the spent fuel. Any containment will have to deal with the heat from said radioactive material. Also, radioactivity can change the quality of container materials - metals can become brittle, glass might crack (see here: <http://jol.liljenzin.se/KAPITEL/CH07NY3.PDF> ) So you need a containment that stands up to radiation in addition to all other environmental stresses over a geological timeframe, while still being able to dissipate the residual heat.
34,928,635
I am using the following query to grab the index columns on a table along with their data type: ``` SELECT DISTINCT COL.COLUMN_NAME, COL.DATA_TYPE FROM DBA_IND_COLUMNS IND INNER JOIN DBA_TAB_COLUMNS COL ON ( IND.TABLE_OWNER = COL.OWNER AND IND.TABLE_NAME = COL.TABLE_NAME AND IND.COLUMN_NAME = COL.COLUMN_NAME) WHERE IND.TABLE_NAME = 'MY_TABLE' AND TABLE_OWNER = 'SCHEMA' ``` But how can I grab the columns for just one index, instead of the columns for all the indexes? For example: If a table has indexes: INDEX1: column\_a,column\_b INDEX2: column\_c,column\_d My current query would result in: ``` column_a, varchar column_b, varchar column_c, varchar column_d, varchar ``` but I want it to result in just: ``` column_a, varchar column_b, varchar ```
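A hedged sketch of one way to restrict the query above to a single index: `DBA_IND_COLUMNS` carries an `INDEX_NAME` column that can be filtered on (the index name `'INDEX1'` here is taken from the example and is a placeholder):

```sql
SELECT DISTINCT COL.COLUMN_NAME, COL.DATA_TYPE
FROM DBA_IND_COLUMNS IND
INNER JOIN DBA_TAB_COLUMNS COL
  ON (IND.TABLE_OWNER = COL.OWNER
      AND IND.TABLE_NAME = COL.TABLE_NAME
      AND IND.COLUMN_NAME = COL.COLUMN_NAME)
WHERE IND.TABLE_NAME = 'MY_TABLE'
  AND IND.TABLE_OWNER = 'SCHEMA'
  AND IND.INDEX_NAME = 'INDEX1'  -- restrict to just this one index
```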
2016/01/21
[ "https://Stackoverflow.com/questions/34928635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5437107/" ]
I came across the same problem and solved it using your second option. However, I found a way to prevent the test cases from the base class from running: override the `defaultTestSuite()` class method in your base class to return an empty `XCTestSuite`: ``` class InterfaceTests: XCTestCase { var implToTest: Interface! override class func defaultTestSuite() -> XCTestSuite { return XCTestSuite(name: "InterfaceTests Excluded") } } ``` With this, no tests from `InterfaceTests` are run. Unfortunately, no tests of `ImplementationATests` are run either. This can be solved by overriding `defaultTestSuite()` in `ImplementationATests`: ``` class ImplementationATests : InterfaceTests { override func setUp() { super.setUp() implToTest = ImplementationA() } override class func defaultTestSuite() -> XCTestSuite { return XCTestSuite(forTestCaseClass: ImplementationATests.self) } } ``` Now the test suite of `ImplementationATests` will run all tests from `InterfaceTests`, but no tests from `InterfaceTests` are run directly without setting `implToTest`.
The way I've done this before is with a shared base class. Make `implToTest` nillable. In the base class, if an implementation is not provided, simply `return` out of the test in a guard clause. It's a little annoying that the test run includes reports of the base class tests when it's not doing anything. But that's a small annoyance. The test subclasses will provide useful feedback.
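A minimal sketch of the guard-clause approach described above (type names follow the `Interface`/`ImplementationA` naming used in the other answer; the test body is illustrative):

```swift
import XCTest

class InterfaceTests: XCTestCase {
    var implToTest: Interface?  // nillable: the base class injects no implementation

    func testSharedBehaviour() {
        // Base-class run: nothing was injected, so bail out quietly.
        guard let impl = implToTest else { return }
        // ...shared assertions against `impl` go here...
        XCTAssertNotNil(impl)
    }
}

class ImplementationATests: InterfaceTests {
    override func setUp() {
        super.setUp()
        implToTest = ImplementationA()  // subclass supplies the implementation under test
    }
}
```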
34,928,635
I am using the following query to grab the index columns on a table along with their data type: ``` SELECT DISTINCT COL.COLUMN_NAME, COL.DATA_TYPE FROM DBA_IND_COLUMNS IND INNER JOIN DBA_TAB_COLUMNS COL ON ( IND.TABLE_OWNER = COL.OWNER AND IND.TABLE_NAME = COL.TABLE_NAME AND IND.COLUMN_NAME = COL.COLUMN_NAME) WHERE IND.TABLE_NAME = 'MY_TABLE' AND TABLE_OWNER = 'SCHEMA' ``` But how can I grab the columns for just one index, instead of the columns for all the indexes? For example: If a table has indexes: INDEX1: column\_a,column\_b INDEX2: column\_c,column\_d My current query would result in: ``` column_a, varchar column_b, varchar column_c, varchar column_d, varchar ``` but I want it to result in just: ``` column_a, varchar column_b, varchar ```
2016/01/21
[ "https://Stackoverflow.com/questions/34928635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5437107/" ]
The way I've done this before is with a shared base class. Make `implToTest` nillable. In the base class, if an implementation is not provided, simply `return` out of the test in a guard clause. It's a little annoying that the test run includes reports of the base class tests when it's not doing anything. But that's a small annoyance. The test subclasses will provide useful feedback.
Building on top of ithron's solution, if you carefully craft your `defaultTestSuite`, you can remove the need for each subclass to re-override it. ```swift class InterfaceTests: XCTestCase { override class var defaultTestSuite: XCTestSuite { // Subclasses inheriting this property fail this check and hit the `else`, // at which point they inherit the full test suite generated for them. if self == InterfaceTests.self { return XCTestSuite(name: "Empty suite for InterfaceTests") } else { return super.defaultTestSuite } } } ``` Of course, the same limitation applies: this won't hide the empty test suite from the Xcode test navigator. Generalizing this into an `AbstractTestCase` class ================================================= I would go a step further and make a base `class AbstractTestCase: XCTestCase` to store this `defaultTestSuite` trick, from which all your other abstract classes can inherit. For completeness, to make it truly abstract you'd also want to override all the `XCTestCase` initializers to make them error out if an attempt is made to instantiate your abstract classes. On Apple's platforms, there are 3 initializers to override: 1. `-[XCTestCase init]` 2. `-[XCTestCase initWithSelector:]` 3. `-[XCTestCase initWithInvocation:]` Unfortunately, this can't be done from Swift because of that last initializer, which uses `NSInvocation`. `NSInvocation` isn't available in Swift (it isn't compatible with Swift's ARC). So you need to implement this in Objective-C. Here's my stab at it: `AbstractTestCase.h` -------------------- ```objectivec #import <XCTest/XCTest.h> @interface AbstractTestCase : XCTestCase @end ``` `AbstractTestCase.m` -------------------- ```objectivec #import "AbstractTestCase.h" @implementation AbstractTestCase + (XCTestSuite *)defaultTestSuite { if (self == [AbstractTestCase class]) { return [[XCTestSuite alloc] initWithName: @"Empty suite for AbstractTestCase"]; } else { return [super defaultTestSuite]; } } - (instancetype)init { self = [super init]; NSAssert(![self isMemberOfClass:[AbstractTestCase class]], @"Do not instantiate this abstract class!"); return self; } - (instancetype)initWithSelector:(SEL)selector { self = [super initWithSelector:selector]; NSAssert(![self isMemberOfClass:[AbstractTestCase class]], @"Do not instantiate this abstract class!"); return self; } - (instancetype)initWithInvocation:(NSInvocation *)invocation { self = [super initWithInvocation:invocation]; NSAssert(![self isMemberOfClass:[AbstractTestCase class]], @"Do not instantiate this abstract class!"); return self; } @end ``` Usage ----- You can then just use this as the superclass of your abstract test, e.g. ```swift class InterfaceTests: AbstractTestCase { var implToTest: Interface! func testSharedTest() { } } ```
34,928,635
I am using the following query to grab the index columns on a table along with their data type: ``` SELECT DISTINCT COL.COLUMN_NAME, COL.DATA_TYPE FROM DBA_IND_COLUMNS IND INNER JOIN DBA_TAB_COLUMNS COL ON ( IND.TABLE_OWNER = COL.OWNER AND IND.TABLE_NAME = COL.TABLE_NAME AND IND.COLUMN_NAME = COL.COLUMN_NAME) WHERE IND.TABLE_NAME = 'MY_TABLE' AND TABLE_OWNER = 'SCHEMA' ``` But how can I grab the columns for just one index, instead of the columns for all the indexes? For example: If a table has indexes: INDEX1: column\_a,column\_b INDEX2: column\_c,column\_d My current query would result in: ``` column_a, varchar column_b, varchar column_c, varchar column_d, varchar ``` but I want it to result in just: ``` column_a, varchar column_b, varchar ```
2016/01/21
[ "https://Stackoverflow.com/questions/34928635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5437107/" ]
I came across the same problem and solved it using your second option. However, I found a way to prevent the test cases from the base class from running: override the `defaultTestSuite()` class method in your base class to return an empty `XCTestSuite`: ``` class InterfaceTests: XCTestCase { var implToTest: Interface! override class func defaultTestSuite() -> XCTestSuite { return XCTestSuite(name: "InterfaceTests Excluded") } } ``` With this, no tests from `InterfaceTests` are run. Unfortunately, no tests of `ImplementationATests` are run either. This can be solved by overriding `defaultTestSuite()` in `ImplementationATests`: ``` class ImplementationATests : InterfaceTests { override func setUp() { super.setUp() implToTest = ImplementationA() } override class func defaultTestSuite() -> XCTestSuite { return XCTestSuite(forTestCaseClass: ImplementationATests.self) } } ``` Now the test suite of `ImplementationATests` will run all tests from `InterfaceTests`, but no tests from `InterfaceTests` are run directly without setting `implToTest`.
Building on top of ithron's solution, if you carefully craft your `defaultTestSuite`, you can remove the need for each subclass to re-override it. ```swift class InterfaceTests: XCTestCase { override class var defaultTestSuite: XCTestSuite { // Subclasses inheriting this property fail this check and hit the `else`, // at which point they inherit the full test suite generated for them. if self == InterfaceTests.self { return XCTestSuite(name: "Empty suite for InterfaceTests") } else { return super.defaultTestSuite } } } ``` Of course, the same limitation applies: this won't hide the empty test suite from the Xcode test navigator. Generalizing this into an `AbstractTestCase` class ================================================= I would go a step further and make a base `class AbstractTestCase: XCTestCase` to store this `defaultTestSuite` trick, from which all your other abstract classes can inherit. For completeness, to make it truly abstract you'd also want to override all the `XCTestCase` initializers to make them error out if an attempt is made to instantiate your abstract classes. On Apple's platforms, there are 3 initializers to override: 1. `-[XCTestCase init]` 2. `-[XCTestCase initWithSelector:]` 3. `-[XCTestCase initWithInvocation:]` Unfortunately, this can't be done from Swift because of that last initializer, which uses `NSInvocation`. `NSInvocation` isn't available in Swift (it isn't compatible with Swift's ARC). So you need to implement this in Objective-C. Here's my stab at it: `AbstractTestCase.h` -------------------- ```objectivec #import <XCTest/XCTest.h> @interface AbstractTestCase : XCTestCase @end ``` `AbstractTestCase.m` -------------------- ```objectivec #import "AbstractTestCase.h" @implementation AbstractTestCase + (XCTestSuite *)defaultTestSuite { if (self == [AbstractTestCase class]) { return [[XCTestSuite alloc] initWithName: @"Empty suite for AbstractTestCase"]; } else { return [super defaultTestSuite]; } } - (instancetype)init { self = [super init]; NSAssert(![self isMemberOfClass:[AbstractTestCase class]], @"Do not instantiate this abstract class!"); return self; } - (instancetype)initWithSelector:(SEL)selector { self = [super initWithSelector:selector]; NSAssert(![self isMemberOfClass:[AbstractTestCase class]], @"Do not instantiate this abstract class!"); return self; } - (instancetype)initWithInvocation:(NSInvocation *)invocation { self = [super initWithInvocation:invocation]; NSAssert(![self isMemberOfClass:[AbstractTestCase class]], @"Do not instantiate this abstract class!"); return self; } @end ``` Usage ----- You can then just use this as the superclass of your abstract test, e.g. ```swift class InterfaceTests: AbstractTestCase { var implToTest: Interface! func testSharedTest() { } } ```
5,431,381
I am trying to connect an `onMouseDown` event to an image with `dojo.connect` like: ``` dojo.connect(dojo.byId("workpic"), "onMouseDown", workpicDown); function workpicDown() { alert("mousedown"); } ``` Similar code a few lines later, where I'm connecting `onMouse*` events to `dojo.body`, does work completely properly. But when I click on the image, I'm not seeing the alert window, so the event doesn't get called. Why is that?
2011/03/25
[ "https://Stackoverflow.com/questions/5431381", "https://Stackoverflow.com", "https://Stackoverflow.com/users/522479/" ]
"onMouseDown" should be all lower case when used with DOM events as opposed to Widget events. Try: ``` dojo.connect(dojo.byId("workpic"), "onmousedown", workpicDown); ``` From the [documentation](http://dojotoolkit.org/reference-guide/quickstart/events.html#quickstart-events): > > A note about the event names: Event > names now are lower case, except in > special cases (e.g., some Mozilla DOM > events). Dojo will add "on" to your > event name if you leave it off (e.g., > 'click' and 'onclick' are the same > thing to dojo). This differs from > Widget Events in the sense Dijit uses > mixedCase event names, to avoid > potential conflicts. > > >
Probably it's a problem with the execution context. Try the following: ``` dojo.connect(dojo.byId("workpic"), "onMouseDown", window, "workpicDown"); window.workpicDown = function() { alert("mousedown"); } ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
``` val i = "42".toIntOrNull() ``` Keep in mind that the result is nullable as the name suggests.
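Since the question also wants an exception on invalid input, the nullable result combines naturally with the elvis operator — a small sketch:

```kotlin
fun main(args: Array<String>) {
    // Parse every argument, throwing on the first value that is not a valid Int.
    val numbers = args.map {
        it.toIntOrNull() ?: throw NumberFormatException("'$it' is not a valid Int")
    }
    println(numbers)
}
```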
You can convert the input directly by using `readLine()!!.toInt()`. Example: ``` fun main(){ print("Enter the radius = ") var r1 = readLine()!!.toInt() var area = (3.14*r1*r1) println("Area is $area") } ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
``` val i = "42".toIntOrNull() ``` Keep in mind that the result is nullable as the name suggests.
**In Kotlin:** Simply do this: ``` val abc = try { stringNumber.toInt() } catch (e: Exception) { 0 } ``` In the catch block you can set a default value for the case where the string cannot be converted to an Int.
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
You could call `toInt()` on your `String` instances: ``` fun main(args: Array<String>) { for (str in args) { try { val parsedInt = str.toInt() println("The parsed int is $parsedInt") } catch (nfe: NumberFormatException) { // not a valid int } } } ``` Or `toIntOrNull()` as an alternative: ``` for (str in args) { val parsedInt = str.toIntOrNull() if (parsedInt != null) { println("The parsed int is $parsedInt") } else { // not a valid int } } ``` If you don't care about the invalid values, then you could combine `toIntOrNull()` with the safe call operator and a scope function, for example: ``` for (str in args) { str.toIntOrNull()?.let { println("The parsed int is $it") } } ```
You can convert the input directly by using `readLine()!!.toInt()`. Example: ``` fun main(){ print("Enter the radius = ") var r1 = readLine()!!.toInt() var area = (3.14*r1*r1) println("Area is $area") } ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
As suggested above, use `toIntOrNull()`. > > Parses the string as an [Int] number and returns the result > or `null` if the string is not a valid representation of a number. > > > ``` val a = "11".toIntOrNull() // 11 val b = "-11".toIntOrNull() // -11 val c = "11.7".toIntOrNull() // null val d = "11.0".toIntOrNull() // null val e = "abc".toIntOrNull() // null val f = null?.toIntOrNull() // null ```
You can convert the input directly by using `readLine()!!.toInt()`. Example: ``` fun main(){ print("Enter the radius = ") var r1 = readLine()!!.toInt() var area = (3.14*r1*r1) println("Area is $area") } ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
**In Kotlin:** Simply do this: ``` val abc = try { stringNumber.toInt() } catch (e: Exception) { 0 } ``` In the catch block you can set a default value for the case where the string cannot be converted to an Int.
You can convert the input directly by using `readLine()!!.toInt()`. Example: ``` fun main(){ print("Enter the radius = ") var r1 = readLine()!!.toInt() var area = (3.14*r1*r1) println("Area is $area") } ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
I use this util function: ``` fun safeInt(text: String, fallback: Int): Int { return text.toIntOrNull() ?: fallback } ```
Add a safe-call operator (`?.`) before `toInt()`: ```kotlin val number_int = str?.toInt() ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
Actually, there are several ways: Given: ``` // aString is the string that we want to convert to a number // defaultValue is the backup value (integer) we'll have in case the conversion fails var aString: String = "aString" var defaultValue : Int = defaultValue ``` Then we have: | Operation | Successful operation | Unsuccessful Operation | | --- | --- | --- | | aString.toInt() | Numeric value | NumberFormatException | | aString.toIntOrNull() | Numeric value | null | | aString.toIntOrNull() ?: defaultValue | Numeric value | defaultValue | If `aString` is a valid integer, then we will get its numeric value; otherwise, based on the function used, see the result in the `Unsuccessful Operation` column.
You can convert the input directly by using `readLine()!!.toInt()`. Example: ``` fun main(){ print("Enter the radius = ") var r1 = readLine()!!.toInt() var area = (3.14*r1*r1) println("Area is $area") } ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
I would go with something like this: ``` import java.util.* fun String?.asOptionalInt() = Optional.ofNullable(this).map { it.toIntOrNull() } fun main(args: Array<String>) { val intArgs = args.map { it.asOptionalInt().orElseThrow { IllegalArgumentException("cannot parse to int $it") } } println(intArgs) } ``` This is quite a nice way to do this without introducing unsafe nullable values.
``` fun getIntValueFromString(value: String): Int { // Keep only the digit characters from the input string... var returnValue = "" value.forEach { val item = it.toString().toIntOrNull() if (item is Int) { returnValue += item.toString() } } // ...then parse them (throws NumberFormatException if no digits were found). return returnValue.toInt() } ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
Actually, there are several ways: Given: ``` // aString is the string that we want to convert to a number // defaultValue is the backup value (integer) we'll have in case the conversion fails var aString: String = "aString" var defaultValue : Int = defaultValue ``` Then we have: | Operation | Successful operation | Unsuccessful Operation | | --- | --- | --- | | aString.toInt() | Numeric value | NumberFormatException | | aString.toIntOrNull() | Numeric value | null | | aString.toIntOrNull() ?: defaultValue | Numeric value | defaultValue | If `aString` is a valid integer, then we will get its numeric value; otherwise, based on the function used, see the result in the `Unsuccessful Operation` column.
I use this util function: ``` fun safeInt(text: String, fallback: Int): Int { return text.toIntOrNull() ?: fallback } ```
50,570,262
I am working on a console application in Kotlin where I accept multiple arguments in the `main()` function ``` fun main(args: Array<String>) { // validation & String to Integer conversion } ``` I want to check whether the `String` is a valid integer and convert it, or else throw some exception. How can I resolve this?
2018/05/28
[ "https://Stackoverflow.com/questions/50570262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6284931/" ]
Actually, there are several ways: Given: ``` // aString is the string that we want to convert to a number // defaultValue is the backup value (integer) we'll have in case the conversion fails var aString: String = "aString" var defaultValue : Int = defaultValue ``` Then we have: | Operation | Successful operation | Unsuccessful Operation | | --- | --- | --- | | aString.toInt() | Numeric value | NumberFormatException | | aString.toIntOrNull() | Numeric value | null | | aString.toIntOrNull() ?: defaultValue | Numeric value | defaultValue | If `aString` is a valid integer, then we will get its numeric value; otherwise, based on the function used, see the result in the `Unsuccessful Operation` column.
Add a safe-call operator (`?.`) before `toInt()`: ```kotlin val number_int = str?.toInt() ```
2,404,150
I need to save all ".xml" file names in a directory to a vector. To make a long story short, I cannot use the dirent API. It seems as if C++ does not have any concept of "directories". Once I have the filenames in a vector, I can iterate through and "fopen" these files. Is there an easy way to get these filenames at runtime?
2010/03/08
[ "https://Stackoverflow.com/questions/2404150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/257569/" ]
An easy way is to use the [Boost.Filesystem](http://www.boost.org/doc/libs/1_42_0/libs/filesystem/doc/index.htm) library. ``` #include <boost/filesystem.hpp> #include <boost/algorithm/string/case_conv.hpp> // boost::to_lower_copy #include <boost/algorithm/string/predicate.hpp> // boost::iends_with #include <string> #include <vector> namespace fs = boost::filesystem; // ... std::string path_to_xml = CUSTOM_DIR_PATH; std::vector<std::string> xml_files; fs::directory_iterator dir_iter( static_cast<fs::path>(path_to_xml) ), dir_end; for (; dir_iter != dir_end; ++dir_iter ) { // case-insensitive check for the ".xml" extension if ( boost::iends_with( boost::to_lower_copy( dir_iter->filename() ), ".xml" ) ) xml_files.push_back( dir_iter->filename() ); } ```
I suggest having a look at [boost::filesystem](http://www.boost.org/doc/libs/1_42_0/libs/filesystem/doc/index.htm) if it should be portable and bringing boost in isn't too heavy.
2,404,150
I need to save all ".xml" file names in a directory to a vector. To make a long story short, I cannot use the dirent API. It seems as if C++ does not have any concept of "directories". Once I have the filenames in a vector, I can iterate through and "fopen" these files. Is there an easy way to get these filenames at runtime?
2010/03/08
[ "https://Stackoverflow.com/questions/2404150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/257569/" ]
An easy way is to use the [Boost.Filesystem](http://www.boost.org/doc/libs/1_42_0/libs/filesystem/doc/index.htm) library. ``` #include <boost/filesystem.hpp> #include <boost/algorithm/string/case_conv.hpp> // boost::to_lower_copy #include <boost/algorithm/string/predicate.hpp> // boost::iends_with #include <string> #include <vector> namespace fs = boost::filesystem; // ... std::string path_to_xml = CUSTOM_DIR_PATH; std::vector<std::string> xml_files; fs::directory_iterator dir_iter( static_cast<fs::path>(path_to_xml) ), dir_end; for (; dir_iter != dir_end; ++dir_iter ) { // case-insensitive check for the ".xml" extension if ( boost::iends_with( boost::to_lower_copy( dir_iter->filename() ), ".xml" ) ) xml_files.push_back( dir_iter->filename() ); } ```
If you don't like boost, try Poco. It has a DirectoryIterator. <http://pocoproject.org/>
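A rough sketch of what that could look like with Poco's `DirectoryIterator` (API usage written from memory — worth verifying against the Poco documentation):

```cpp
#include <Poco/DirectoryIterator.h>
#include <Poco/Path.h>
#include <string>
#include <vector>

std::vector<std::string> listXmlFiles(const std::string& dir)
{
    std::vector<std::string> names;
    Poco::DirectoryIterator it(dir), end;
    for (; it != end; ++it)
    {
        // getExtension() returns the extension without the leading dot.
        if (Poco::Path(it.name()).getExtension() == "xml")
            names.push_back(it.name());
    }
    return names;
}
```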
2,404,150
I need to save all ".xml" file names in a directory to a vector. To make a long story short, I cannot use the dirent API. It seems as if C++ does not have any concept of "directories". Once I have the filenames in a vector, I can iterate through and "fopen" these files. Is there an easy way to get these filenames at runtime?
2010/03/08
[ "https://Stackoverflow.com/questions/2404150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/257569/" ]
An easy way is to use the [Boost.Filesystem](http://www.boost.org/doc/libs/1_42_0/libs/filesystem/doc/index.htm) library. ``` #include <boost/filesystem.hpp> #include <boost/algorithm/string/case_conv.hpp> // boost::to_lower_copy #include <boost/algorithm/string/predicate.hpp> // boost::iends_with #include <string> #include <vector> namespace fs = boost::filesystem; // ... std::string path_to_xml = CUSTOM_DIR_PATH; std::vector<std::string> xml_files; fs::directory_iterator dir_iter( static_cast<fs::path>(path_to_xml) ), dir_end; for (; dir_iter != dir_end; ++dir_iter ) { // case-insensitive check for the ".xml" extension if ( boost::iends_with( boost::to_lower_copy( dir_iter->filename() ), ".xml" ) ) xml_files.push_back( dir_iter->filename() ); } ```
Something like this (note: `Format` is a sprintf-ish function you can replace): ``` bool MakeFileList(const wchar_t* pDirectory, vector<wstring> *pFileList) { // The search pattern adds "\\*", but the concatenation below assumes // pDirectory already ends with a path separator. wstring sTemp = Format(L"%s\\*.%s",pDirectory,L"xml"); _wfinddata_t first_file; long hFile = _wfindfirst(sTemp.c_str(),&first_file); if(hFile != -1) { wstring sFile = first_file.name; wstring sPath = Format(L"%s%s",pDirectory,sFile.c_str()); pFileList->push_back(sPath); while(_wfindnext(hFile,&first_file) != -1) { wstring sFile = first_file.name; wstring sPath = Format(L"%s%s",pDirectory,sFile.c_str()); pFileList->push_back(sPath); } _findclose(hFile); }else return false; return true; } ```
859,636
Is the lower limit topology finer than the standard topology on $\mathbb{R}$? In Lemma 13.4 on p.82 of Munkres' *Topology* (2nd ed.), it is stated that the lower limit topology is (strictly) finer than the standard topology on $\mathbb{R}$. In the argument, he uses the fact that the interval $[a,b)$ lies in the interval $(a,b)$, which is certainly not true. On the other hand, the converse is true, that is: $$(a,b) \text{ lies in the interval } [a,b).$$ So we can conclude that the standard topology is finer than the lower limit topology. Am I right? If not, why? I think it's an erratum. I have checked some existing errata online, but it's not included there.
2014/07/08
[ "https://math.stackexchange.com/questions/859636", "https://math.stackexchange.com", "https://math.stackexchange.com/users/50948/" ]
Yes! Since one has that $$ (a,b) = \cup\_{n\ge 1} \ [a+\frac{\epsilon}{n},b) $$ where $\epsilon < \frac{b-a}{2}$. Note that if, for a topology ${\mathcal T}\_1$ with basis ${\mathcal S}\_1$ and a topology ${\mathcal T}\_2$, one has that ${\mathcal S}\_1 \subseteq {\mathcal T}\_2$, then ${\mathcal T}\_2$ is finer than ${\mathcal T}\_1$. In this case, if ${\mathcal S}\_2$ is a basis for the topology ${\mathcal T}\_2$ and ${\mathcal S}\_2 \not\subseteq{\mathcal T}\_1$, then ${\mathcal T}\_2$ is **strictly** finer than ${\mathcal T}\_1$. For this, it is enough to note that $[0,1)$ is not open in the standard topology. For more details you can consult this textbook: James Munkres, "Topology; A First Course".
Well, I don't have a copy of Munkres' book at hand, but I doubt it says that. If $[a,b)$ is open, then $(a,b)=\bigcup\_{n\in\mathbb{N}}\left[ a+\frac{1}{n},b\right)$ must be open. Conversely, $[0,1)$ is not open in the standard topology. This means that the topology on $\mathbb{R}\_l$ is finer than the topology on $\mathbb{R}.$
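For completeness, a short LaTeX note spelling out the step both answers rely on — that $[0,1)$ is not open in the standard topology:

```latex
% No standard-basis element around 0 fits inside [0,1):
\[
  0 \in (a,b) \;\Longrightarrow\; a < 0 \;\Longrightarrow\; (a,b) \not\subseteq [0,1),
\]
% since (a,b) then contains negative reals. Hence [0,1) is not open in the
% standard topology, while it is (by definition) open in the lower limit topology.
```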
29,921,679
**Error code** : ``` The type name or alias UnitOfWorkFactory could not be resolved. Please check your configuration file and verify this type name. ``` I've been scraping google results / trying to debug for 2 days now, and I haven't found any solution yet. Note that "ApplicationService" is being resolved. * I verified the names of assemblies and namespaces many times * I already tried these concepts in the config file: <https://stackoverflow.com/a/18671286/3264998> , and other approaches. * I verified the connection to the database. * I tried debugging mode in VS without any success. * Probably some other stuff that I don't remember right now. Below you have my code; hope it's enough. If there is any other file/info that I've omitted, I apologize and I will edit the post immediately. **IApplicationService.cs** ``` using System.Collections.Generic; using Abc.Project.Domain.Model.DTO; namespace Abc.Project.Application.Interfaces { public interface IApplicationService { void AddFile(FileDTO fileDTO); } } ``` **ApplicationService.cs** ``` using System.Collections.Generic; using AutoMapper; using Microsoft.Practices.Unity; using Abc.Project.Application.Interfaces; using Abc.Project.Domain.Model.DTO; using Abc.Project.Domain.Model.Poco.Entities; using Abc.Project.Domain.Repository.UnitOfWork; using Abc.Project.Domain.Unity; namespace Abc.Project.Application.Services.Global { public class ApplicationService : IApplicationService { public void AddFile(FileDTO fileDTO) { File file = new File { Id = fileDTO.ID, FileObs = fileDTO.FileObs, Ind = fileDTO.Ind, Levels = fileDTO.Levels, }; using (var uow = IoC.Container.Resolve<IUnitOfWorkFactory>().Create()) { uow.Context.File.Add(file); uow.Commit(); } } } } ``` **IUnitOfWorkFactory.cs** ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Abc.Project.Domain.Repository.UnitOfWork { public interface IUnitOfWorkFactory { IUnitOfWork Create(); } } ``` **UnitOfWorkFactory.cs** ``` using System.Data; using System.Reflection; using FluentNHibernate.Cfg; using FluentNHibernate.Cfg.Db; using NHibernate; using NHibernate.Tool.hbm2ddl; using Abc.Project.Domain.Repository.UnitOfWork; namespace Abc.Project.DataAccess.NHibernate.UnitOfWork { public class UnitOfWorkFactory : IUnitOfWorkFactory { private static ISessionFactory CreateSessionFactory() { return Fluently.Configure() .Database(MsSqlConfiguration.MsSql2008 .ConnectionString(c => c.FromConnectionStringWithKey("FilesDB")) ) .Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly())) .ExposeConfiguration(cfg => new SchemaExport(cfg) .Create(false, false)) .BuildSessionFactory(); } public IUnitOfWork Create() { UnitOfWork UnitOfWork = new UnitOfWork(CreateSessionFactory().OpenSession()); UnitOfWork.BeginTransaction(IsolationLevel.ReadCommitted); return UnitOfWork; } } } ``` **App.config** ``` <?xml version="1.0"?> <configuration> <configSections> <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration"/> </configSections> <system.web> <compilation debug="true" targetFramework="4.0" /> </system.web> <connectionStrings> <add name="FilesDB" providerName="System.Data.SqlClient" connectionString="server=SP2010;database=FilesDB;User ID=sa;Password=password;"/> </connectionStrings> <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> <typeAliases> <!-- Lifetime manager types --> <typeAlias alias="singlecall" type="Microsoft.Practices.Unity.TransientLifetimeManager,
Microsoft.Practices.Unity"/> <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity"/> <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity"/> <typeAlias alias="percall" type="Abc.Project.Domain.Unity.StaticPerCallLifeTimeManager, Abc.Project.Domain.Model"/> <!-- SERVICE APPLICATION INTERFACES--> <typeAlias alias="IApplicationService" type="Abc.Project.Application.Interfaces.IApplicationService, Abc.Project.Application.Interfaces"/> <!-- DOMAIN INTERFACES--> <typeAlias alias="IUnitOfWorkFactory" type="Abc.Project.Domain.Repository.UnitOfWork.IUnitOfWorkFactory, Abc.Project.Domain.Repository"/> <!-- CONCRETE CLASSES--> <!-- SERVICE APPLICATION--> <typeAlias alias="ApplicationService" type="Abc.Project.Application.Services.Global.ApplicationService, Abc.Project.Application.Services"/> <!--DATA ACCESS--> <typeAlias alias="UnitOfWorkFactory" type="Abc.Project.DataAccess.NHibernate.UnitOfWork.UnitOfWorkFactory, Abc.Project.DataAccess.NHibernate"/> </typeAliases> <containers> <container> <!--<extension type="Interception" />--> <types> <type type="IApplicationService" mapTo="ApplicationService"> <lifetime type="singlecall"/> </type> <type type="IUnitOfWorkFactory" mapTo="UnitOfWorkFactory"> <lifetime type="singleton"/> </type> </types> </container> </containers> </unity> <system.serviceModel> <behaviors> <endpointBehaviors> <behavior name="Abc.Project.WcfService.WcfServiceAspNetAjaxBehavior"> <enableWebScript/> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="metadataAndDebug"> <serviceMetadata httpGetEnabled="true" httpGetUrl=""/> <serviceDebug httpHelpPageEnabled="true" includeExceptionDetailInFaults="true"/> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/> <services> <service name="Abc.Project.WcfService.WcfService" behaviorConfiguration="metadataAndDebug"> <endpoint address="" behaviorConfiguration="Abc.Project.WcfService.WcfServiceAspNetAjaxBehavior" binding="webHttpBinding" contract="Abc.Project.WcfService.WcfService"/> </service> </services> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> </configuration> ``` How can I solve this? English is not my native language; please excuse typing errors.
2015/04/28
[ "https://Stackoverflow.com/questions/29921679", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3264998/" ]
Looks like you have the following options: 1. Convert your binary column to a non-binary text column, using a temp column, because binary columns cannot be case-insensitive 2. Use the Convert function as in the link you mentioned 3. Use the Lower or Upper methods If you really want the column to always be case-insensitive, I'd say go for option 1.
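A hedged illustration of options 2 and 3 as one-off query fixes (table and column names are placeholders):

```sql
-- Option 2: compare through a non-binary character set, so collation rules apply.
SELECT * FROM my_table
WHERE CONVERT(my_binary_col USING utf8mb4) = 'SomeValue';

-- Option 3: normalize case on both sides of the comparison
-- (the binary value still needs converting before LOWER() is meaningful).
SELECT * FROM my_table
WHERE LOWER(CONVERT(my_binary_col USING utf8mb4)) = LOWER('SomeValue');
```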
In MySQL there is a collation for each column in addition to the overall collation of the table. You will need to change the collation for each individual column. (I believe the overall table collation determines the default collation if you create a new column, but don't quote me on that.)
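A sketch of a per-column collation change (names, type, and the exact character set are placeholders — check the column's current definition first):

```sql
-- Make a single text column case-insensitive by giving it a _ci collation.
ALTER TABLE my_table
  MODIFY my_column VARCHAR(255)
  CHARACTER SET utf8mb4
  COLLATE utf8mb4_general_ci;
```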
53,957,876
I have an input file like: ``` > cat test_mfd_1 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 ``` And I need the output like: ``` 281474976750348 16,17 281474976749447 16,17 ``` Columns 2 and 1 both have duplicated values. The output should list each unique value in column 2 once, followed by all the corresponding unique values from column 1 in a row. I am using awk and I get the output below. ``` awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1 281474976749447 17 281474976750348 17 ``` I am not able to print all the unique values from column 1 in front of the column 2 value.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Using Perl ``` $ cat jeevan.txt 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 $ perl -F, -lane ' $kv{$F[1]}{$F[0]}++; END { while(my($x,$y) = each(%kv)) { print "$x ",join(",",keys %$y) } }' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' $kv{$F[1]}{$F[0]}++; END { print "$_ ",join(",",keys %{$kv{$_}}) for(keys %kv) } ' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for(keys %kv) { %p=map{ $_ => 1} @{$kv{$_}} ; print "$_ ",join(",", keys %p) } } ' jeevan.txt 281474976749447 17,16 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for my $a (keys %kv) { @p=grep{ !$s{$a}{$_}++ } @{$kv{$a}} ; print "$a ",join(",", @p) } } ' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for my $a (keys %kv) { print "$a ",join(",", grep{ !$s{$a}{$_}++ } @{$kv{$a}}) } } ' jeevan.txt 281474976750348 16,17 281474976749447 16,17 ``` Since this resembles SQL, you can use sqlite also ``` $ cat ./sqllite_unique.sh #!/bin/sh sqlite3 << EOF create table t1(a,b); .separator ',' .import $1 t1 select b|| ' ' || group_concat(distinct a) from t1 group by b; EOF $ ./sqllite_unique.sh jeevan.txt 281474976749447 16,17 281474976750348 16,17 ```
Here is a `Perl`: ``` $ perl -F, -lanE '$HoH{$F[1]}{$F[0]}++; END{for (keys %HoH) { say "$_ ", join(", ", keys %{$HoH{$_}}); }}' file 281474976749447 16, 17 281474976750348 17, 16 ``` Here is an awk: ``` $ awk -F, '{a[$2][$1]} END{ for (e in a){ s="" for (x in a[e]) s=s?s ", " x:x print e, s}}' file 281474976749447 16, 17 281474976750348 16, 17 ``` NOTE: Since both the `awk` and the `perl` use an associative array, the order printed will likely be different than the order the elements are encountered in the file.
53,957,876
I have an input file like: ``` > cat test_mfd_1 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 ``` And I need the output like: ``` 281474976750348 16,17 281474976749447 16,17 ``` Columns 2 and 1 both have duplicated values. The output should list each unique value in column 2 once, followed by all the corresponding unique values from column 1 in a row. I am using awk and I get the output below. ``` awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1 281474976749447 17 281474976750348 17 ``` I am not able to print all the unique values from column 1 in front of the column 2 value.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Here is a `Perl`: ``` $ perl -F, -lanE '$HoH{$F[1]}{$F[0]}++; END{for (keys %HoH) { say "$_ ", join(", ", keys %{$HoH{$_}}); }}' file 281474976749447 16, 17 281474976750348 17, 16 ``` Here is an awk: ``` $ awk -F, '{a[$2][$1]} END{ for (e in a){ s="" for (x in a[e]) s=s?s ", " x:x print e, s}}' file 281474976749447 16, 17 281474976750348 16, 17 ``` NOTE: Since both the `awk` and the `perl` use an associative array, the order printed will likely be different than the order the elements are encountered in the file.
`sort` assisted `awk` ``` $ sort -t, -u -k2 -k1,1 file | awk -F, '{a[$2]=a[$2] sep[$2] $1; sep[$2]=FS} END{for(k in a) print k,a[k]}' 281474976749447 16,17 281474976750348 16,17 ``` sep is for lazy separator initialization to skip the first one.
53,957,876
I have an input file like: ``` > cat test_mfd_1 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 ``` And I need the output like: ``` 281474976750348 16,17 281474976749447 16,17 ``` Columns 2 and 1 both have duplicated values. The output should list each unique value in column 2 once, followed by all the corresponding unique values from column 1 in a row. I am using awk and I get the output below. ``` awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1 281474976749447 17 281474976750348 17 ``` I am not able to print all the unique values from column 1 in front of the column 2 value.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Using Perl ``` $ cat jeevan.txt 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 $ perl -F, -lane ' $kv{$F[1]}{$F[0]}++; END { while(my($x,$y) = each(%kv)) { print "$x ",join(",",keys %$y) } }' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' $kv{$F[1]}{$F[0]}++; END { print "$_ ",join(",",keys %{$kv{$_}}) for(keys %kv) } ' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for(keys %kv) { %p=map{ $_ => 1} @{$kv{$_}} ; print "$_ ",join(",", keys %p) } } ' jeevan.txt 281474976749447 17,16 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for my $a (keys %kv) { @p=grep{ !$s{$a}{$_}++ } @{$kv{$a}} ; print "$a ",join(",", @p) } } ' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for my $a (keys %kv) { print "$a ",join(",", grep{ !$s{$a}{$_}++ } @{$kv{$a}}) } } ' jeevan.txt 281474976750348 16,17 281474976749447 16,17 ``` Since this resembles SQL, you can use sqlite also ``` $ cat ./sqllite_unique.sh #!/bin/sh sqlite3 << EOF create table t1(a,b); .separator ',' .import $1 t1 select b|| ' ' || group_concat(distinct a) from t1 group by b; EOF $ ./sqllite_unique.sh jeevan.txt 281474976749447 16,17 281474976750348 16,17 ```
Here's another. It appends `$1` values comma-separated to `a[$2]` but uses `match()` first to check that the value isn't there already: ``` $ awk -F, '{ a[$2]=a[$2] (match(a[$2],"(^|,)" $1 "($|,)")?"":(a[$2]==""?"":",")$1) } END { for(i in a) print i,a[i] } ' file 281474976749447 16,17 281474976750348 16,17 ``` Explained a bit: * `a[$2]=a[$2] (...` append to array * `match(a[$2],"(^|,)" $1 "($|,)")?""` null if `match` finds a matching value * `:(a[$2]==""?"":",")$1)` or a comma if needed and the value
53,957,876
I have an input file like: ``` > cat test_mfd_1 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 ``` And I need the output like: ``` 281474976750348 16,17 281474976749447 16,17 ``` Columns 2 and 1 both have duplicated values. The output should list each unique value in column 2 once, followed by all the corresponding unique values from column 1 in a row. I am using awk and I get the output below. ``` awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1 281474976749447 17 281474976750348 17 ``` I am not able to print all the unique values from column 1 in front of the column 2 value.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Using [`GNU Datamash`](https://www.gnu.org/software/datamash/manual/datamash.html): ``` $ datamash --sort -t, -g 2 unique 1 < file 281474976749447,16,17 281474976750348,16,17 ``` If you insist on the space: ``` $ datamash --sort -t, -g 2 unique 1 < file | sed 's/,/ /' 281474976749447 16,17 281474976750348 16,17 ```
Here's another. It appends `$1` values comma-separated to `a[$2]` but uses `match()` first to check that the value isn't there already: ``` $ awk -F, '{ a[$2]=a[$2] (match(a[$2],"(^|,)" $1 "($|,)")?"":(a[$2]==""?"":",")$1) } END { for(i in a) print i,a[i] } ' file 281474976749447 16,17 281474976750348 16,17 ``` Explained a bit: * `a[$2]=a[$2] (...` append to array * `match(a[$2],"(^|,)" $1 "($|,)")?""` null if `match` finds a matching value * `:(a[$2]==""?"":",")$1)` or a comma if needed and the value
53,957,876
I have an input file like: ``` > cat test_mfd_1 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 ``` And I need the output like: ``` 281474976750348 16,17 281474976749447 16,17 ``` Columns 2 and 1 both have duplicated values. The output should list each unique value in column 2 once, followed by all the corresponding unique values from column 1 in a row. I am using awk and I get the output below. ``` awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1 281474976749447 17 281474976750348 17 ``` I am not able to print all the unique values from column 1 in front of the column 2 value.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Using [`GNU Datamash`](https://www.gnu.org/software/datamash/manual/datamash.html): ``` $ datamash --sort -t, -g 2 unique 1 < file 281474976749447,16,17 281474976750348,16,17 ``` If you insist on the space: ``` $ datamash --sort -t, -g 2 unique 1 < file | sed 's/,/ /' 281474976749447 16,17 281474976750348 16,17 ```
Here is a `Perl`: ``` $ perl -F, -lanE '$HoH{$F[1]}{$F[0]}++; END{for (keys %HoH) { say "$_ ", join(", ", keys %{$HoH{$_}}); }}' file 281474976749447 16, 17 281474976750348 17, 16 ``` Here is an awk: ``` $ awk -F, '{a[$2][$1]} END{ for (e in a){ s="" for (x in a[e]) s=s?s ", " x:x print e, s}}' file 281474976749447 16, 17 281474976750348 16, 17 ``` NOTE: Since both the `awk` and the `perl` use an associative array, the order printed will likely be different than the order the elements are encountered in the file.
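As a side note on that ordering caveat: a hedged sketch (assuming the numeric keys from the sample data) is to pipe the output through `sort -n`, which makes the row order deterministic even though the associative-array traversal is not:

```
$ awk -F, '{a[$2][$1]} END{for (e in a){s=""; for (x in a[e]) s=s?s ", " x:x; print e, s}}' file | sort -n
```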
53,957,876
I have an input file like:

```
> cat test_mfd_1
16,281474976750348
17,281474976750348
16,281474976750348
17,281474976750348
16,281474976749447
17,281474976749447
16,281474976749447
17,281474976749447
```

And I need the output like:

```
281474976750348 16,17
281474976749447 16,17
```

Columns 2 and 1 both have duplicated values. The output should contain each unique value from column 2, followed by all of its corresponding unique column 1 values on the same row. I am using awk and I get the output below.

```
awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1
281474976749447 17
281474976750348 17
```

I am not able to print all the unique values from column 1 in front of column 2.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Using [`GNU Datamash`](https://www.gnu.org/software/datamash/manual/datamash.html): ``` $ datamash --sort -t, -g 2 unique 1 < file 281474976749447,16,17 281474976750348,16,17 ``` If you insist on the space: ``` $ datamash --sort -t, -g 2 unique 1 < file | sed 's/,/ /' 281474976749447 16,17 281474976750348 16,17 ```
`sort` assisted `awk` ``` $ sort -t, -u -k2 -k1,1 file | awk -F, '{a[$2]=a[$2] sep[$2] $1; sep[$2]=FS} END{for(k in a) print k,a[k]}' 281474976749447 16,17 281474976750348 16,17 ``` sep is for lazy separator initialization to skip the first one.
53,957,876
I have an input file like:

```
> cat test_mfd_1
16,281474976750348
17,281474976750348
16,281474976750348
17,281474976750348
16,281474976749447
17,281474976749447
16,281474976749447
17,281474976749447
```

And I need the output like:

```
281474976750348 16,17
281474976749447 16,17
```

Columns 2 and 1 both have duplicated values. The output should contain each unique value from column 2, followed by all of its corresponding unique column 1 values on the same row. I am using awk and I get the output below.

```
awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1
281474976749447 17
281474976750348 17
```

I am not able to print all the unique values from column 1 in front of column 2.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Using [`GNU Datamash`](https://www.gnu.org/software/datamash/manual/datamash.html): ``` $ datamash --sort -t, -g 2 unique 1 < file 281474976749447,16,17 281474976750348,16,17 ``` If you insist on the space: ``` $ datamash --sort -t, -g 2 unique 1 < file | sed 's/,/ /' 281474976749447 16,17 281474976750348 16,17 ```
For GNU awk: ``` awk -F, '{a[$2][$1]} END {for(i in a) {printf i; first=1; for (j in a[i]) if (first) {printf " " j; first=0;} else printf "," j; print ""} }' test_mfd_1 #=> 281474976749447 16,17 #=> 281474976750348 16,17 ``` Just improved your attempt. The idea is to use a two-dimensional array and an inner `for` loop. `printf` won't print a newline, so use `print ""` to append a newline at the end.
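For readability, here is the same one-liner laid out as a standalone script; this is just a reformatted sketch of the logic above (GNU awk assumed, since it uses arrays of arrays), not a change in behavior:

```
#!/usr/bin/gawk -f
BEGIN { FS = "," }

# Array of arrays: column 2 -> set of column 1 values (duplicates collapse).
{ a[$2][$1] }

END {
    for (i in a) {
        printf "%s", i                 # print the key, no newline yet
        first = 1
        for (j in a[i]) {
            if (first) { printf " %s", j; first = 0 }
            else       { printf ",%s", j }
        }
        print ""                       # terminate the row
    }
}
```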
53,957,876
I have an input file like:

```
> cat test_mfd_1
16,281474976750348
17,281474976750348
16,281474976750348
17,281474976750348
16,281474976749447
17,281474976749447
16,281474976749447
17,281474976749447
```

And I need the output like:

```
281474976750348 16,17
281474976749447 16,17
```

Columns 2 and 1 both have duplicated values. The output should contain each unique value from column 2, followed by all of its corresponding unique column 1 values on the same row. I am using awk and I get the output below.

```
awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1
281474976749447 17
281474976750348 17
```

I am not able to print all the unique values from column 1 in front of column 2.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Using Perl ``` $ cat jeevan.txt 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 $ perl -F, -lane ' $kv{$F[1]}{$F[0]}++; END { while(my($x,$y) = each(%kv)) { print "$x ",join(",",keys %$y) } }' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' $kv{$F[1]}{$F[0]}++; END { print "$_ ",join(",",keys %{$kv{$_}}) for(keys %kv) } ' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for(keys %kv) { %p=map{ $_ => 1} @{$kv{$_}} ; print "$_ ",join(",", keys %p) } } ' jeevan.txt 281474976749447 17,16 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for my $a (keys %kv) { @p=grep{ !$s{$a}{$_}++ } @{$kv{$a}} ; print "$a ",join(",", @p) } } ' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for my $a (keys %kv) { print "$a ",join(",", grep{ !$s{$a}{$_}++ } @{$kv{$a}}) } } ' jeevan.txt 281474976750348 16,17 281474976749447 16,17 ``` Since this resembles SQL, you can use sqlite also ``` $ cat ./sqllite_unique.sh #!/bin/sh sqlite3 << EOF create table t1(a,b); .separator ',' .import $1 t1 select b|| ' ' || group_concat(distinct a) from t1 group by b; EOF $ ./sqllite_unique.sh jeevan.txt 281474976749447 16,17 281474976750348 16,17 ```
`sort` assisted `awk` ``` $ sort -t, -u -k2 -k1,1 file | awk -F, '{a[$2]=a[$2] sep[$2] $1; sep[$2]=FS} END{for(k in a) print k,a[k]}' 281474976749447 16,17 281474976750348 16,17 ``` sep is for lazy separator initialization to skip the first one.
53,957,876
I have an input file like:

```
> cat test_mfd_1
16,281474976750348
17,281474976750348
16,281474976750348
17,281474976750348
16,281474976749447
17,281474976749447
16,281474976749447
17,281474976749447
```

And I need the output like:

```
281474976750348 16,17
281474976749447 16,17
```

Columns 2 and 1 both have duplicated values. The output should contain each unique value from column 2, followed by all of its corresponding unique column 1 values on the same row. I am using awk and I get the output below.

```
awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1
281474976749447 17
281474976750348 17
```

I am not able to print all the unique values from column 1 in front of column 2.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Using Perl ``` $ cat jeevan.txt 16,281474976750348 17,281474976750348 16,281474976750348 17,281474976750348 16,281474976749447 17,281474976749447 16,281474976749447 17,281474976749447 $ perl -F, -lane ' $kv{$F[1]}{$F[0]}++; END { while(my($x,$y) = each(%kv)) { print "$x ",join(",",keys %$y) } }' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' $kv{$F[1]}{$F[0]}++; END { print "$_ ",join(",",keys %{$kv{$_}}) for(keys %kv) } ' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for(keys %kv) { %p=map{ $_ => 1} @{$kv{$_}} ; print "$_ ",join(",", keys %p) } } ' jeevan.txt 281474976749447 17,16 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for my $a (keys %kv) { @p=grep{ !$s{$a}{$_}++ } @{$kv{$a}} ; print "$a ",join(",", @p) } } ' jeevan.txt 281474976749447 16,17 281474976750348 16,17 ``` or ``` $ perl -F, -lane ' push @{$kv{$F[1]}},$F[0]; END { for my $a (keys %kv) { print "$a ",join(",", grep{ !$s{$a}{$_}++ } @{$kv{$a}}) } } ' jeevan.txt 281474976750348 16,17 281474976749447 16,17 ``` Since this resembles SQL, you can use sqlite also ``` $ cat ./sqllite_unique.sh #!/bin/sh sqlite3 << EOF create table t1(a,b); .separator ',' .import $1 t1 select b|| ' ' || group_concat(distinct a) from t1 group by b; EOF $ ./sqllite_unique.sh jeevan.txt 281474976749447 16,17 281474976750348 16,17 ```
For GNU awk: ``` awk -F, '{a[$2][$1]} END {for(i in a) {printf i; first=1; for (j in a[i]) if (first) {printf " " j; first=0;} else printf "," j; print ""} }' test_mfd_1 #=> 281474976749447 16,17 #=> 281474976750348 16,17 ``` Just improved your attempt. The idea is to use a two-dimensional array and an inner `for` loop. `printf` won't print a newline, so use `print ""` to append a newline at the end.
53,957,876
I have an input file like:

```
> cat test_mfd_1
16,281474976750348
17,281474976750348
16,281474976750348
17,281474976750348
16,281474976749447
17,281474976749447
16,281474976749447
17,281474976749447
```

And I need the output like:

```
281474976750348 16,17
281474976749447 16,17
```

Columns 2 and 1 both have duplicated values. The output should contain each unique value from column 2, followed by all of its corresponding unique column 1 values on the same row. I am using awk and I get the output below.

```
awk -F, '{a[$2]=$1;} END {for(i in a) print i" "a[i];}' test_mfd_1
281474976749447 17
281474976750348 17
```

I am not able to print all the unique values from column 1 in front of column 2.
2018/12/28
[ "https://Stackoverflow.com/questions/53957876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10842759/" ]
Here's another. It appends `$1` values comma-separated to `a[$2]` but uses `match()` first to check that the value isn't there already: ``` $ awk -F, '{ a[$2]=a[$2] (match(a[$2],"(^|,)" $1 "($|,)")?"":(a[$2]==""?"":",")$1) } END { for(i in a) print i,a[i] } ' file 281474976749447 16,17 281474976750348 16,17 ``` Explained a bit: * `a[$2]=a[$2] (...` append to array * `match(a[$2],"(^|,)" $1 "($|,)")?""` null if `match` finds a matching value * `:(a[$2]==""?"":",")$1)` or a comma if needed and the value
`sort` assisted `awk` ``` $ sort -t, -u -k2 -k1,1 file | awk -F, '{a[$2]=a[$2] sep[$2] $1; sep[$2]=FS} END{for(k in a) print k,a[k]}' 281474976749447 16,17 281474976750348 16,17 ``` sep is for lazy separator initialization to skip the first one.
135,461
As the title states, when I open up Terminal to type a command, I cannot see what I am typing; it is as if the terminal is frozen. I can still execute commands when I hit return, I just cannot see what I am typing. The weird part is that when I hit the delete key, I am suddenly able to see what I am typing, and the terminal functions normally until I run the command. As well, when I hit delete, the header at the top of the terminal window changes from: name - bash - 80x24 to: name - 37m - bash - 80x24. Any help would be greatly appreciated. Thank you.

edit: Thanks for all the help, I've tried some of the suggestions. Creating a new Admin account and opening Terminal seemed to do the trick; I can type in Terminal in this new account without pressing delete. Any ideas for my main account? Here is what I get when I run `/usr/bin/env`:

```
$ /usr/bin/env
TERM_PROGRAM=Apple_Terminal
SHELL=/bin/bash
TERM=xterm-256color
TMPDIR=/var/folders/h5/rp872k9n0zq2lkl0kbbykjx00000gn/T/
Apple_PubSub_Socket_Render=/tmp/launch-KfwCn3/Render
TERM_PROGRAM_VERSION=326
TERM_SESSION_ID=F81718AA-A3FC-4FB9-9FF4-00037406DBAF
USER=derekbogdanoff
SSH_AUTH_SOCK=/tmp/launch-qQfC1a/Listeners
__CF_USER_TEXT_ENCODING=0x1F5:0:0
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
__CHECKFIX1436934=1
PWD=/Users/derekbogdanoff
LANG=en_CA.UTF-8
PS1=$[\033]0;37m]
SHLVL=1
HOME=/Users/derekbogdanoff
LOGNAME=derekbogdanoff
_=/usr/bin/env
```
2014/06/19
[ "https://apple.stackexchange.com/questions/135461", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/82063/" ]
Your prompt is messed up, specifically PS1: ``` PS1=$[\033]0;37m] ``` that's missing a lot of escape characters (`\e[`) needed for the colors (and most useful parameters for a PS1). That's also why you get the `37m` in the terminal window title. Try setting it to something different by running: ``` export PS1="\e[0;31m[\h:\W \u]\$\e[m " ``` and see if that works. It should give a red (that's `\e[0;31m`) prompt showing hostname (`\h`), current working directory (`\W`) and logged in user (`\u`) inside brackets `[]` and the bash exit status of the previous command (`\$`). Note that at the end the color is reset to the default of the session with `\e[m`. If the above worked, you only have to find out which configuration file your "bad" PS1 comes from: look for an "`export PS1=`" line in `~/.profile`, `~/.bash_profile`, `~/.bashrc` (as M K already suggested in [his answer](https://apple.stackexchange.com/a/135469/2418)) and put in the above version. There are a lot of answers around here with helpful color codes and inputs for configuring the PS1, like [this one](https://apple.stackexchange.com/questions/9821/can-i-make-my-mac-os-x-terminal-color-items-according-to-syntax-like-the-ubuntu/9825#9825) for example.
Firstly, look at Terminal preferences (`Cmd`+`,`) for the font, text and color settings and change them appropriately. This may not be the issue, but it could make troubleshooting easier. It may be the case that you have some profile script that changes the colors. Within `Terminal.app`, type the following command to start `bash` on a clean slate (without executing any profile scripts): ``` bash --noprofile --norc ``` If you no longer face the text visibility issue, then check all the profile scripts (some may not be present) like `/etc/profile`, `~/.profile`, `~/.bash_profile`, `~/.bashrc`, look for any ANSI escape sequences and remove those.
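To locate the offending line quickly, a simple search across the standard bash startup files works (a sketch; redirecting stderr just silences "no such file" noise for files you don't have):

```
grep -n 'PS1' /etc/profile ~/.profile ~/.bash_profile ~/.bashrc 2>/dev/null
```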
61,813,249
My database teacher has given me the following exercise to translate into a single SQL query:

> 
> Show, for every exam module, the number of students who got a mark between the values 18 and 21, then the ones who got a mark between 22 and 26, and finally the ones who got a mark between 27 and 30.
> 
> 

The tables involved are:

```sql
CREATE TABLE student(
    code CHAR(6) PRIMARY KEY,
    bachelor_course CHAR(3),
    name VARCHAR(50) NOT NULL,
    surname VARCHAR(50) NOT NULL,
    birth_date DATE NOT NULL,
    fiscal_code CHAR(16) NOT NULL UNIQUE,
    photo BLOB
);

CREATE TABLE module(
    code CHAR(3) PRIMARY KEY,
    name VARCHAR(50) NOT NULL,
    description VARCHAR(100),
    university_credits TINYINT NOT NULL CHECK(university_credits > 0 AND university_credits < 13)
);

CREATE TABLE exam(
    student_code CHAR(6),
    module_code CHAR(3),
    teacher_code CHAR(6),
    exam_date DATE NOT NULL,
    mark TINYINT NOT NULL CHECK(mark > 0 AND mark < 31),
    notes VARCHAR(100)
);
```

I've been trying for half a day and I think I'm near the correct answer. After searching on the web I found a way using multiple SELECTs in the main one, like this:

```sql
SELECT module.code,
    (SELECT COUNT(*)
     FROM module m1
     JOIN exam e1 ON m1.code = e1.module
     JOIN student s1 ON e1.student_number = s1.number
     WHERE e1.mark >= 18 AND e1.mark <= 21
     GROUP BY m1.code) AS StudentNumber_18_21,
    (SELECT same as the first one but with 22 and 26 values in the WHERE clause) AS StudentNumber_22_26,
    (SELECT same as the first one but with 27 and 30 values in the WHERE clause) AS StudentNumber_27_30
FROM module;
```

The output should be like this:

```
+----------+-------------------+-------------------+-------------------+
|ModuleName|StudentNumber_18_21|StudentNumber_22_26|StudentNumber_27_30|
+----------+-------------------+-------------------+-------------------+
//values
```
2020/05/15
[ "https://Stackoverflow.com/questions/61813249", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13535799/" ]
Your checking function should look like this:

```
bool checking(BSTNode* parent, int val) {
    if(parent == nullptr) // point 1
        return false;
    if (val == parent->data){ // point 2
        return true;
    }
    else{
        bool left = checking(parent->left, val);
        bool right = checking(parent->right, val);
        return left||right;
    }
}
```

Your assist function should look something like this:

```
bool assist(BSTNode* parent) {
    if (parent != nullptr) {
        if(checking(parent->left, parent->data))
            return true; // point 3
        if(checking(parent->right, parent->data))
            return true;
        return assist(parent->left)||assist(parent->right); // point 4
    }
    else
        return false;
}
```

1. You need to check for null values.
2. If `val` is the same, why keep checking? Just stop.
3. You need to check the node's value in the left and right subtrees.
4. Recurse for the child nodes.
If you want to check that parent value is different than child values, you might do: ``` bool checking(const BSTNode* node, int parent_value) { if (node == nullptr) { return false; } if (node->data == parent_value) { return true; } return checking(node->left, node->data) || checking(node->right, node->data); } bool assist(const BSTNode* parent) { if (parent == nullptr) { return false; } return checking(parent->left, parent->data) || checking(parent->right, parent->data); } ```
61,813,249
My database teacher has given me the following exercise to translate into a single SQL query:

> 
> Show, for every exam module, the number of students who got a mark between the values 18 and 21, then the ones who got a mark between 22 and 26, and finally the ones who got a mark between 27 and 30.
> 
> 

The tables involved are:

```sql
CREATE TABLE student(
    code CHAR(6) PRIMARY KEY,
    bachelor_course CHAR(3),
    name VARCHAR(50) NOT NULL,
    surname VARCHAR(50) NOT NULL,
    birth_date DATE NOT NULL,
    fiscal_code CHAR(16) NOT NULL UNIQUE,
    photo BLOB
);

CREATE TABLE module(
    code CHAR(3) PRIMARY KEY,
    name VARCHAR(50) NOT NULL,
    description VARCHAR(100),
    university_credits TINYINT NOT NULL CHECK(university_credits > 0 AND university_credits < 13)
);

CREATE TABLE exam(
    student_code CHAR(6),
    module_code CHAR(3),
    teacher_code CHAR(6),
    exam_date DATE NOT NULL,
    mark TINYINT NOT NULL CHECK(mark > 0 AND mark < 31),
    notes VARCHAR(100)
);
```

I've been trying for half a day and I think I'm near the correct answer. After searching on the web I found a way using multiple SELECTs in the main one, like this:

```sql
SELECT module.code,
    (SELECT COUNT(*)
     FROM module m1
     JOIN exam e1 ON m1.code = e1.module
     JOIN student s1 ON e1.student_number = s1.number
     WHERE e1.mark >= 18 AND e1.mark <= 21
     GROUP BY m1.code) AS StudentNumber_18_21,
    (SELECT same as the first one but with 22 and 26 values in the WHERE clause) AS StudentNumber_22_26,
    (SELECT same as the first one but with 27 and 30 values in the WHERE clause) AS StudentNumber_27_30
FROM module;
```

The output should be like this:

```
+----------+-------------------+-------------------+-------------------+
|ModuleName|StudentNumber_18_21|StudentNumber_22_26|StudentNumber_27_30|
+----------+-------------------+-------------------+-------------------+
//values
```
2020/05/15
[ "https://Stackoverflow.com/questions/61813249", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13535799/" ]
Your checking function should look like this:

```
bool checking(BSTNode* parent, int val) {
    if(parent == nullptr) // point 1
        return false;
    if (val == parent->data){ // point 2
        return true;
    }
    else{
        bool left = checking(parent->left, val);
        bool right = checking(parent->right, val);
        return left||right;
    }
}
```

Your assist function should look something like this:

```
bool assist(BSTNode* parent) {
    if (parent != nullptr) {
        if(checking(parent->left, parent->data))
            return true; // point 3
        if(checking(parent->right, parent->data))
            return true;
        return assist(parent->left)||assist(parent->right); // point 4
    }
    else
        return false;
}
```

1. You need to check for null values.
2. If `val` is the same, why keep checking? Just stop.
3. You need to check the node's value in the left and right subtrees.
4. Recurse for the child nodes.
You could just go through the BST breadth-wise with a deque. Store the values in a set and check whether each value is already in the set; if it is, return true, otherwise wait for the loop to finish and return false. This has the benefit of hash-table lookups for the values at the cost of O(n) extra storage, and it's also easier to follow, in my opinion, since it's not recursion.

```
#include <deque>
#include <unordered_set>

bool hasDuplicate(BSTNode *parent) {
    if (!parent)
        return false;

    std::deque<BSTNode*> nodes;     // nodes still to visit, breadth-first
    std::unordered_set<int> vals;   // values seen so far

    nodes.push_back(parent);
    while (!nodes.empty()) {
        BSTNode *node = nodes.front();
        nodes.pop_front();

        int v = node->data;
        // Check if the value was already seen and return true
        if (vals.find(v) != vals.end())
            return true;
        // Otherwise record it
        vals.insert(v);

        // enqueue left child if it exists
        if (node->left)
            nodes.push_back(node->left);
        // enqueue right child if it exists
        if (node->right)
            nodes.push_back(node->right);
    }
    // no dups found
    return false;
}
```
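For completeness, a small driver to exercise `hasDuplicate()` from the answer above. The `BSTNode` shape here is an assumption inferred from the other answers' usage (`data`, `left`, `right` members), not the asker's actual definition:

```
#include <iostream>

// Assumed node shape; declare it before hasDuplicate() when compiling.
struct BSTNode {
    int data;
    BSTNode *left;
    BSTNode *right;
};

int main() {
    BSTNode leaf  {8,  nullptr, nullptr};   // duplicates the root's value
    BSTNode right {10, &leaf,   nullptr};
    BSTNode left  {3,  nullptr, nullptr};
    BSTNode root  {8,  &left,   &right};

    std::cout << std::boolalpha << hasDuplicate(&root) << '\n';  // prints: true
    return 0;
}
```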
15,382,063
I have a DLL file which does Log4Net logging to a file. There is a process which loads the DLL and can create multiple instances of it. Each instance of the DLL has to create a separate log file, therefore I do all the Log4Net configuration programmatically. I've used some help from [here.](https://stackoverflow.com/questions/1519728/programmatically-adding-and-removing-log-appenders-in-log4net) Here is my code:

```
public class LogHelper
{
    private PatternLayout _layout = new PatternLayout();
    private const string LOG_PATTERN = "%date %-5level - %message%newline";
    private String Configuration;

    public static string DefaultPattern
    {
        get { return LOG_PATTERN; }
    }

    public ILog log = null;

    public LogHelper(String configuration)
    {
        Configuration = configuration;
        InitialiseLogger();

        _layout.ConversionPattern = DefaultPattern;
        _layout.ActivateOptions();

        Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository();
        hierarchy.Configured = true;
        hierarchy.LevelMap.Add(log4net.Core.Level.Debug);
        hierarchy.LevelMap.Add(log4net.Core.Level.Critical);
        hierarchy.LevelMap.Add(log4net.Core.Level.Info);
        hierarchy.LevelMap.Add(log4net.Core.Level.Warn);
        hierarchy.LevelMap.Add(log4net.Core.Level.Error);
        hierarchy.LevelMap.Add(log4net.Core.Level.Fatal);
    }

    ~LogHelper()
    {
        log.Debug("Closing myself down");
        IAppender[] appenders = log.Logger.Repository.GetAppenders(); //appenders are empty
        log.Logger.Repository.Shutdown();
    }

    public void InitialiseLogger()
    {
        Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository();
        Logger newLogger = hierarchy.GetLogger(Configuration) as Logger;

        PatternLayout patternLayout = new PatternLayout();
        patternLayout.ConversionPattern = LOG_PATTERN;
        patternLayout.ActivateOptions();

        RollingFileAppender roller = new RollingFileAppender();
        roller.Layout = patternLayout;
        roller.AppendToFile = true;
        roller.RollingStyle = RollingFileAppender.RollingMode.Size;
        roller.MaxSizeRollBackups = 4;
        roller.MaximumFileSize = "10MB";
        String name = String.Format("-{0:yyyy-MM-dd_HH-mm-ss}", DateTime.Now);
        roller.File = "C:\\Logs\\" + Configuration + name + ".log";
        roller.ImmediateFlush = true;
        roller.ActivateOptions();
        newLogger.AddAppender(roller);

        log = LogManager.GetLogger(Configuration);
    }
}
```

The problem is that `log.Debug("Closing myself down");` is never written to the log file; I know it is being called. And the log files never get released unless I stop the process that loads my DLL, and I do not want to stop it. A link from [here](https://stackoverflow.com/questions/5892916/proper-way-to-shutdown-a-logger-instance-in-log4net) explains how to shut down appenders. But the problem is that in my destructor a call to `log.Logger.Repository.GetAppenders();` returns an empty array. How should I solve it?

Just a note: the process that loads my DLL is from a 3rd party and I don't know its internals.
2013/03/13
[ "https://Stackoverflow.com/questions/15382063", "https://Stackoverflow.com", "https://Stackoverflow.com/users/788314/" ]
You are using the destructor of the LogHelper to release the file(s). According to 1.6.7.6 Destructors in [the language specification](http://msdn.microsoft.com/en-us/library/ms228593.aspx), the destructor will be called, but you cannot know when. You just know it will be called before the process terminates. The most obvious thing to do is to move the logic of the destructor to a method that will be called explicitly (e.g. [Dispose](http://msdn.microsoft.com/en-us/library/498928w2.aspx)). That way you will be able to call the method and thus release the files.
What you call a "Destructor" is actually a `Finalizer`. They should only be used to release unmanaged resources, so it looks to me like you are abusing it. Also note that the Finalizer is likely to be called on a separate thread, it may be called at effectively a random time, and it may not even be called at all. You should make `LogHelper` implement `IDisposable` and implement `Dispose()` (which will contain the logic that is currently in your Finalizer). Then you need to managed the lifetime of your `LogHelper` by calling `Dispose()` at the appropriate time.
28,531,526
I'm trying to understand Laravel's basic Blade template engine, but I can't seem to get past a basic example. My Blade template is not loading. It only shows a white screen, but when I rename hello.blade.php to hello.php it works. Any suggestions?

**Routes.php**

```
Route::get('/', 'PagesController@home');
```

**PagesController.php**

```
<?php

namespace App\Http\Controllers;

use App\Http\Requests;
use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class PagesController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return Response
     */
    public function home()
    {
        return Views('hello');
    }
}
```

**hello.blade.php**

```
<html>
    <head>
        <title>Hello World</title>
    </head>
    <body>
        <div class="container">
            <div class="content">
                <div class="title">Starting to learn Laravel 5</div>
            </div>
        </div>
    </body>
</html>
```
2015/02/15
[ "https://Stackoverflow.com/questions/28531526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4341561/" ]
There is no `Views` helper. It's called `view`: ``` return view('hello'); ```
Well, I had the same "white screen problem". And the blade docs in laravel is so confused. Then, try this: 1. Create a layout, lets say `template.blade.php`: ``` <html> <head> <title>Hello World</title> </head> <body> <div class="container"> <div class="content"> <div class="title">Starting to learn Laravel 5</div> @yield('view_content') </div> </div> </body> ``` 2. Create a simple view that you want to wrap into the template, lets say `hello.blade.php`: ``` @extends('template') @section('view_content') This is my view content @stop ``` Now, in your controller just call the view instead the layout: ``` public function home(){ return view('hello'); } ```
28,531,526
I'm trying to understand Laravel's basic Blade template engine, but I can't seem to get past a basic example. My Blade template is not loading. It only shows a white screen, but when I rename hello.blade.php to hello.php it works. Any suggestions?

**Routes.php**

```
Route::get('/', 'PagesController@home');
```

**PagesController.php**

```
<?php

namespace App\Http\Controllers;

use App\Http\Requests;
use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class PagesController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return Response
     */
    public function home()
    {
        return Views('hello');
    }
}
```

**hello.blade.php**

```
<html>
    <head>
        <title>Hello World</title>
    </head>
    <body>
        <div class="container">
            <div class="content">
                <div class="title">Starting to learn Laravel 5</div>
            </div>
        </div>
    </body>
</html>
```
2015/02/15
[ "https://Stackoverflow.com/questions/28531526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4341561/" ]
There is no `Views` helper. It's called `view`: ``` return view('hello'); ```
I'm not sure how it works regardless of the extension, as the syntax in your controller is incorrect. You return the view, not Views... `return view('hello')`. You should technically see something like the following: `FatalErrorException in PagesController.php line 18: Call to undefined function App\Http\Controllers\Views()` Even if app\_debug is false, you should see `Whoops, looks like something went wrong.`
28,531,526
I'm trying to understand Laravel's basic Blade template engine, but I can't seem to get past a basic example. My Blade template is not loading. It only shows a white screen, but when I rename hello.blade.php to hello.php it works. Any suggestions?

**Routes.php**

```
Route::get('/', 'PagesController@home');
```

**PagesController.php**

```
<?php

namespace App\Http\Controllers;

use App\Http\Requests;
use App\Http\Controllers\Controller;
use Illuminate\Http\Request;

class PagesController extends Controller
{
    /**
     * Display a listing of the resource.
     *
     * @return Response
     */
    public function home()
    {
        return Views('hello');
    }
}
```

**hello.blade.php**

```
<html>
    <head>
        <title>Hello World</title>
    </head>
    <body>
        <div class="container">
            <div class="content">
                <div class="title">Starting to learn Laravel 5</div>
            </div>
        </div>
    </body>
</html>
```
2015/02/15
[ "https://Stackoverflow.com/questions/28531526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4341561/" ]
There is no `Views` helper. It's called `view`: ``` return view('hello'); ```
I got this when I made a typo like so: ``` return views('abc'); ``` instead of ``` return view('abc'); ``` That extra `s` was killing everything.
18,941,520
I want to know the cursor position inside the body tag. Actually, what I want is: if the cursor is inside a span tag having the class edit, the cursor should be prevented from jumping to a new line when the Enter key is pressed. And if the cursor is outside the span tag, the cursor should jump to a new line on pressing the Enter key. I would be greatly thankful if you could provide a solution with an example. I am not so strong in JavaScript. I know how to stop the cursor from jumping to a new line. The code for this would be:

```
$('p').keydown(function(e){
    e.preventDefault();
});
```

But in my case, it should be prevented only if the cursor is inside the span tag. Thanks in advance.
2013/09/22
[ "https://Stackoverflow.com/questions/18941520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2702406/" ]
Try this: ``` var canPressEnter = true; $("span.edit").on("focus", function(){ canPressEnter = false; }).on("keypress", function(e){ var code = (e.keyCode ? e.keyCode : e.which); if (canPressEnter === false && code === 13) { e.preventDefault(); } }).on("blur", function(){ canPressEnter = true; }); ``` <http://jsfiddle.net/hescano/S6hzY/> If you don't need the flag from somewhere else, this will do: ``` $("span.edit").on("keypress", function(e){ var code = (e.keyCode ? e.keyCode : e.which); if (code === 13) { e.preventDefault(); } }); ``` <http://jsfiddle.net/hescano/S6hzY/1/> This works for me: ``` tinyMCE.init({ theme : "advanced", mode: "exact", elements : "elm1", setup : function(ed) { ed.onInit.add(function(ed, evt) { tinymce.dom.Event.add(ed.getDoc(), 'keydown', function(e) { var existing = tinyMCE.get('elm1').getElement(e); var code = (e.keyCode ? e.keyCode : e.which); var spans = tinyMCE.activeEditor.getBody().getElementsByTagName("span"); if (spans.length > 0) { for (var i = 0; i < spans.length; i++) { if (spans[i].getAttribute("class") === "AMedit") { if (code === 13) { e.preventDefault(); } } } } }); }); }, themes... ```
If there is no selection, you can use the properties .selectionStart or .selectionEnd (with no selection they're equal). ``` var cursorPosition = $('#myTextarea').prop("selectionStart"); ``` Note that this is not supported in older browsers, most notably IE8-.
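For example, a hedged sketch tying this back to the question (the `#myTextarea` id and the handler are illustrative, not from the original markup):

```
$('#myTextarea').on('keydown', function (e) {
    var pos = this.selectionStart;   // caret index when nothing is selected
    if (e.which === 13) {            // 13 = Enter
        e.preventDefault();          // suppress the newline
        console.log('Enter pressed at position ' + pos);
    }
});
```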
18,941,520
I want to know the cursor position inside the body tag. Actually, what I want is: if the cursor is inside a span tag having the class edit, the cursor should be prevented from jumping to a new line when the Enter key is pressed. And if the cursor is outside the span tag, the cursor should jump to a new line on pressing the Enter key. I would be greatly thankful if you could provide a solution with an example. I am not so strong in JavaScript. I know how to stop the cursor from jumping to a new line. The code for this would be:

```
$('p').keydown(function(e){
    e.preventDefault();
});
```

But in my case, it should be prevented only if the cursor is inside the span tag. Thanks in advance.
2013/09/22
[ "https://Stackoverflow.com/questions/18941520", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2702406/" ]
Try this: ``` var canPressEnter = true; $("span.edit").on("focus", function(){ canPressEnter = false; }).on("keypress", function(e){ var code = (e.keyCode ? e.keyCode : e.which); if (canPressEnter === false && code === 13) { e.preventDefault(); } }).on("blur", function(){ canPressEnter = true; }); ``` <http://jsfiddle.net/hescano/S6hzY/> If you don't need the flag from somewhere else, this will do: ``` $("span.edit").on("keypress", function(e){ var code = (e.keyCode ? e.keyCode : e.which); if (code === 13) { e.preventDefault(); } }); ``` <http://jsfiddle.net/hescano/S6hzY/1/> This works for me: ``` tinyMCE.init({ theme : "advanced", mode: "exact", elements : "elm1", setup : function(ed) { ed.onInit.add(function(ed, evt) { tinymce.dom.Event.add(ed.getDoc(), 'keydown', function(e) { var existing = tinyMCE.get('elm1').getElement(e); var code = (e.keyCode ? e.keyCode : e.which); var spans = tinyMCE.activeEditor.getBody().getElementsByTagName("span"); if (spans.length > 0) { for (var i = 0; i < spans.length; i++) { if (spans[i].getAttribute("class") === "AMedit") { if (code === 13) { e.preventDefault(); } } } } }); }); }, themes... ```
Hope this may be helpful to you. ``` $('#myarea')[0].selectionStart; // return start index of cursor ```
260,462
I turned off the water to two of the three toilets in my house because they started leaking. By doing this, do I affect the water pressure in, say, the shower? One toilet has been off for a year with no issue. But the other I turned off this morning around 4 am because it started running constantly. All of a sudden, about 5:15 am, my shower started dripping like crazy. So I turned on the shower and it shot out real fast for a split second. Was this just a coincidence? Or does turning off the water to toilets affect the water pressure in other items that use water? Thanks
2022/11/14
[ "https://diy.stackexchange.com/questions/260462", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/158945/" ]
Turning off the water to toilets won't make any difference in your shower. Thinking this through: what's the difference between the toilet itself shutting off water when the tank is full vs. turning off the supply to the toilet? NOTHING! So that's not the source of your problem. What sort of water heater do you have? Tank type? Is there an expansion tank somewhere near the WH? The reason I ask is that in a home my son was living in, the water company replaced the meter and added check valves to prevent back-flow. He called me, concerned that he'd get sudden spurts of high pressure at times and that the Temp/Pressure valve (TPV) on the WH was leaking. He said it usually happened after doing laundry or taking showers. I asked him to get one of those inexpensive pressure gauges and put it on an outside hose bib. Sure enough, the pressure was spiking to over 100 psi. The TPV on the WH was doing its job. This was caused by cold water entering the WH and getting heated (which makes it expand), now with no place to go (given the new check valves), so the pressure went up. Adding an expansion tank fixed the problem. If this is your problem, the high pressure probably overwhelmed the shower valve. For further diagnosis, does the "spurt" happen from any faucet? Does it continue to leak after the short spurt? Like I told my son, you should consider getting a simple pressure gauge, which is usually available in the plumbing/sprinkler system area of a big box store. Attach it to a hose bib and see what kind of pressures you're dealing with. That's just one possibility that I personally experienced. Crip might also be right in that if you have a pressure-reducing valve it might be failing, but I would not expect that to be intermittent.
If you turn off the water supply to the whole system for a bit, drain the lines to do some work, then turn the supply back on, some spurting from the faucets is normal.
260,462
I turned off the water to two of the three toilets in my house because they started leaking. By doing this, do I affect the water pressure in, say, the shower? One toilet has been off for a year with no issue. But the other I turned off this morning around 4 am because it started running constantly. All of a sudden, about 5:15 am, my shower started dripping like crazy. So I turned on the shower and it shot out real fast for a split second. Was this just a coincidence? Or does turning off the water to toilets affect the water pressure in other items that use water? Thanks
2022/11/14
[ "https://diy.stackexchange.com/questions/260462", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/158945/" ]
If you turn off the water supply to the whole system for a bit, drain the lines to do some work, then turn the supply back on, some spurting from the faucets is normal.
I would get a pressure check of your incoming water supply. Also, I would say all your water fixtures are old and in need of replacing. When you shut off leaking fixtures in the system, the pressure on the other fixtures will increase, since you are shutting off a leak that was lowering the water pressure. Hope this helps.
260,462
I turned off the water to two of the three toilets in my house because they started leaking. By doing this, do I affect the water pressure in, say, the shower? One toilet has been off for a year with no issue. But the other I turned off this morning around 4 am because it started running constantly. All of a sudden, about 5:15 am, my shower started dripping like crazy. So I turned on the shower and it shot out real fast for a split second. Was this just a coincidence? Or does turning off the water to toilets affect the water pressure in other items that use water? Thanks
2022/11/14
[ "https://diy.stackexchange.com/questions/260462", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/158945/" ]
Turning off the water to toilets won't make any difference in your shower. Thinking this through: what's the difference between the toilet itself shutting off water when the tank is full vs. turning off the supply to the toilet? NOTHING! So that's not the source of your problem. What sort of water heater do you have? Tank type? Is there an expansion tank somewhere near the WH? The reason I ask is that in a home my son was living in, the water company replaced the meter and added check valves to prevent back-flow. He called me, concerned that he'd get sudden spurts of high pressure at times and that the Temp/Pressure valve (TPV) on the WH was leaking. He said it usually happened after doing laundry or taking showers. I asked him to get one of those inexpensive pressure gauges and put it on an outside hose bib. Sure enough, the pressure was spiking to over 100 psi. The TPV on the WH was doing its job. This was caused by cold water entering the WH and getting heated (which makes it expand), now with no place to go (given the new check valves), so the pressure went up. Adding an expansion tank fixed the problem. If this is your problem, the high pressure probably overwhelmed the shower valve. For further diagnosis, does the "spurt" happen from any faucet? Does it continue to leak after the short spurt? Like I told my son, you should consider getting a simple pressure gauge, which is usually available in the plumbing/sprinkler system area of a big box store. Attach it to a hose bib and see what kind of pressures you're dealing with. That's just one possibility that I personally experienced. Crip might also be right in that if you have a pressure-reducing valve it might be failing, but I would not expect that to be intermittent.
If the toilet was leaking badly, it would reduce the water pressure to other fixtures, but stopping the leak will not increase the normal house water pressure. I would have the water pressure coming into the house checked. Some houses on city water have a pressure-reducing valve near where the water comes in. It might not be working right and letting too much water pressure in.
260,462
I turned off the water to two of the three toilets in my house because they started leaking. By doing this, do I affect the water pressure in, say, the shower? One toilet has been off for a year with no issue. But the other I turned off this morning around 4 am because it started running constantly. All of a sudden, about 5:15 am, my shower started dripping like crazy. So I turned on the shower and it shot out real fast for a split second. Was this just a coincidence? Or does turning off the water to toilets affect the water pressure in other items that use water? Thanks
2022/11/14
[ "https://diy.stackexchange.com/questions/260462", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/158945/" ]
If the toilet was leaking badly, it would reduce the water pressure to other fixtures, but stopping the leak will not increase the normal house water pressure. I would have the water pressure coming into the house checked. Some houses on city water have a pressure-reducing valve near where the water comes in. It might not be working right and letting too much water pressure in.
I would get a pressure check of your incoming water supply. Also, I would say all your water fixtures are old and in need of replacing. When you shut off leaking fixtures in the system, the pressure on the other fixtures will increase, since you are shutting off a leak that was lowering the water pressure. Hope this helps.
260,462
I turned off the water to two of the three toilets in my house because they started leaking. By doing this, do I affect the water pressure in, say, the shower? One toilet has been off for a year with no issue. But the other I turned off this morning around 4 am because it started running constantly. All of a sudden, about 5:15 am, my shower started dripping like crazy. So I turned on the shower and it shot out real fast for a split second. Was this just a coincidence? Or does turning off the water to toilets affect the water pressure in other items that use water? Thanks
2022/11/14
[ "https://diy.stackexchange.com/questions/260462", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/158945/" ]
Turning off the water to toilets won't make any difference in your shower. Thinking this through: what's the difference between the toilet itself shutting off water when the tank is full vs. turning off the supply to the toilet? NOTHING! So that's not the source of your problem. What sort of water heater do you have? Tank type? Is there an expansion tank somewhere near the WH? The reason I ask is that in a home my son was living in, the water company replaced the meter and added check valves to prevent back-flow. He called me, concerned that he'd get sudden spurts of high pressure at times and that the Temp/Pressure valve (TPV) on the WH was leaking. He said it usually happened after doing laundry or taking showers. I asked him to get one of those inexpensive pressure gauges and put it on an outside hose bib. Sure enough, the pressure was spiking to over 100 psi. The TPV on the WH was doing its job. This was caused by cold water entering the WH and getting heated (which makes it expand), now with no place to go (given the new check valves), so the pressure went up. Adding an expansion tank fixed the problem. If this is your problem, the high pressure probably overwhelmed the shower valve. For further diagnosis, does the "spurt" happen from any faucet? Does it continue to leak after the short spurt? Like I told my son, you should consider getting a simple pressure gauge, which is usually available in the plumbing/sprinkler system area of a big box store. Attach it to a hose bib and see what kind of pressures you're dealing with. That's just one possibility that I personally experienced. Crip might also be right in that if you have a pressure-reducing valve it might be failing, but I would not expect that to be intermittent.
I would get a pressure check of your incoming water supply. Also, I would say all your water fixtures are old and in need of replacing. When you shut off leaking fixtures in the system, the pressure on the other fixtures will increase, since you are shutting off a leak that was lowering the water pressure. Hope this helps.
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ``` <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <skipTests>true</skipTests> </configuration> </plugin> <!-- enable scalatest --> <plugin> <groupId>org.scalatest</groupId> <artifactId>scalatest-maven-plugin</artifactId> <version>1.0</version> <configuration> <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory> <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites> </configuration> <executions> <execution> <id>test</id> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> ``` And here's the test class: ``` class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter { private var schemaStrategy: SchemaStrategy = _ private var dataReader: DataFrameReader = _ before { val sqlContext = new SQLContext(sc) import sqlContext._ import sqlContext.implicits._ val dataInReader = sqlContext.read.format("com.databricks.spark.csv") .option("header", "true") .option("nullValue", "") schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency") dataReader = schemaStrategy.applySchema(dataInReader) } "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in { val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv") val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn)) valid match { case Success(v) => () case Failure(e) => fail("Validation failed on what should have been a clean file: ", e) } } } ``` When I run `mvn test`, it can't find any tests and outputs this message: ``` [INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db --- [36mDiscovery starting.[0m [36mDiscovery completed in 54 milliseconds.[0m [36mRun starting. Expected test count is: 0[0m [32mDiscoverySuite:[0m [36mRun completed in 133 milliseconds.[0m [36mTotal number of tests run: 0[0m [36mSuites: completed 1, aborted 0[0m [36mTests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0[0m [33mNo tests were executed.[0m ``` **UPDATE** By using: ``` <suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites> ``` Instead of: ``` <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites> ``` I can get that one Test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
The issue I had with tests not getting discovered came down to the fact that the tests are discovered from the `class` files, so to make the tests get discovered I needed to add `<goal>testCompile</goal>` to the `scala-maven-plugin` `goals`.
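For reference, a minimal sketch of that configuration (plugin version omitted; `net.alchim31.maven` is the usual groupId for `scala-maven-plugin`):

```
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```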
**Cause:** Maven plugins do not compile your test code whenever you run mvn commands. **Work around:** > > Run the Scala tests using your IDE, which will compile the test code and save it in the target directory. The next time you run mvn test, or any Maven command which internally triggers Maven's test cycle, it should run the Scala tests. > > >
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ``` <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <skipTests>true</skipTests> </configuration> </plugin> <!-- enable scalatest --> <plugin> <groupId>org.scalatest</groupId> <artifactId>scalatest-maven-plugin</artifactId> <version>1.0</version> <configuration> <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory> <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites> </configuration> <executions> <execution> <id>test</id> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> ``` And here's the test class: ``` class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter { private var schemaStrategy: SchemaStrategy = _ private var dataReader: DataFrameReader = _ before { val sqlContext = new SQLContext(sc) import sqlContext._ import sqlContext.implicits._ val dataInReader = sqlContext.read.format("com.databricks.spark.csv") .option("header", "true") .option("nullValue", "") schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency") dataReader = schemaStrategy.applySchema(dataInReader) } "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in { val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv") val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn)) valid match { case Success(v) => () case Failure(e) => fail("Validation failed on what should have been a clean file: ", e) } } } ``` When I run `mvn test`, it can't find any tests and outputs this message: ``` [INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db --- [36mDiscovery starting.[0m [36mDiscovery completed in 54 milliseconds.[0m [36mRun starting. Expected test count is: 0[0m [32mDiscoverySuite:[0m [36mRun completed in 133 milliseconds.[0m [36mTotal number of tests run: 0[0m [36mSuites: completed 1, aborted 0[0m [36mTests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0[0m [33mNo tests were executed.[0m ``` **UPDATE** By using: ``` <suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites> ``` Instead of: ``` <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites> ``` I can get that one Test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
This is probably because there are some space characters in the project path. Remove the spaces from the project path and the tests can be discovered successfully. Hope this helps.
The issue I had with tests not getting discovered came down to the fact that the tests are discovered from the `class` files, so to make the tests get discovered I needed to add `<goal>testCompile</goal>` to the `scala-maven-plugin` `goals`.
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ``` <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <skipTests>true</skipTests> </configuration> </plugin> <!-- enable scalatest --> <plugin> <groupId>org.scalatest</groupId> <artifactId>scalatest-maven-plugin</artifactId> <version>1.0</version> <configuration> <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory> <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites> </configuration> <executions> <execution> <id>test</id> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> ``` And here's the test class: ``` class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter { private var schemaStrategy: SchemaStrategy = _ private var dataReader: DataFrameReader = _ before { val sqlContext = new SQLContext(sc) import sqlContext._ import sqlContext.implicits._ val dataInReader = sqlContext.read.format("com.databricks.spark.csv") .option("header", "true") .option("nullValue", "") schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency") dataReader = schemaStrategy.applySchema(dataInReader) } "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in { val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv") val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn)) valid match { case Success(v) => () case Failure(e) => fail("Validation failed on what should have been a clean file: ", e) } } } ``` When I run `mvn test`, it can't find any tests and outputs this message: ``` [INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db --- [36mDiscovery starting.[0m [36mDiscovery completed in 54 milliseconds.[0m [36mRun starting. Expected test count is: 0[0m [32mDiscoverySuite:[0m [36mRun completed in 133 milliseconds.[0m [36mTotal number of tests run: 0[0m [36mSuites: completed 1, aborted 0[0m [36mTests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0[0m [33mNo tests were executed.[0m ``` **UPDATE** By using: ``` <suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites> ``` Instead of: ``` <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites> ``` I can get that one Test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
This is probably because there are some space characters in the project path. Remove the spaces from the project path and the tests can be discovered successfully. Hope this helps.
With me, it's because I wasn't using the following plugin: ```html <plugin> <groupId>org.scala-tools</groupId> <artifactId>maven-scala-plugin</artifactId> <executions> <execution> <goals> <goal>compile</goal> <goal>testCompile</goal> </goals> </execution> </executions> <configuration> <scalaVersion>${scala.version}</scalaVersion> <args> <arg>-target:jvm-1.8</arg> </args> </configuration> </plugin> ```
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ``` <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <skipTests>true</skipTests> </configuration> </plugin> <!-- enable scalatest --> <plugin> <groupId>org.scalatest</groupId> <artifactId>scalatest-maven-plugin</artifactId> <version>1.0</version> <configuration> <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory> <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites> </configuration> <executions> <execution> <id>test</id> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> ``` And here's the test class: ``` class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter { private var schemaStrategy: SchemaStrategy = _ private var dataReader: DataFrameReader = _ before { val sqlContext = new SQLContext(sc) import sqlContext._ import sqlContext.implicits._ val dataInReader = sqlContext.read.format("com.databricks.spark.csv") .option("header", "true") .option("nullValue", "") schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency") dataReader = schemaStrategy.applySchema(dataInReader) } "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in { val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv") val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn)) valid match { case Success(v) => () case Failure(e) => fail("Validation failed on what should have been a clean file: ", e) } } } ``` When I run `mvn test`, it can't find any tests and outputs this message: ``` [INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db --- [36mDiscovery starting.[0m [36mDiscovery completed in 54 milliseconds.[0m [36mRun starting. Expected test count is: 0[0m [32mDiscoverySuite:[0m [36mRun completed in 133 milliseconds.[0m [36mTotal number of tests run: 0[0m [36mSuites: completed 1, aborted 0[0m [36mTests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0[0m [33mNo tests were executed.[0m ``` **UPDATE** By using: ``` <suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites> ``` Instead of: ``` <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites> ``` I can get that one Test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
In my case, it was because I wasn't using the following plugin: ```xml
<plugin>
    <groupId>org.scala-tools</groupId>
    <artifactId>maven-scala-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <scalaVersion>${scala.version}</scalaVersion>
        <args>
            <arg>-target:jvm-1.8</arg>
        </args>
    </configuration>
</plugin>
```
**Cause:** Maven does not compile your Scala test code when you run `mvn` commands (no Scala compiler is bound to the test phase), so there are no compiled test classes for scalatest to discover. **Workaround:** > > Run the Scala tests from your IDE, which compiles the test code and saves it in the target directory. The next time you run `mvn test`, or any Maven command that internally triggers Maven's test cycle, the Scala tests should run. > > >
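A more durable fix is to let Maven compile the Scala test sources itself, so `mvn test` works without the IDE step. A minimal sketch, assuming `scala-maven-plugin` (the version below is an assumption; match it to your build):
```xml
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <version>3.2.2</version> <!-- assumed version; use whatever matches your build -->
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>     <!-- compiles src/main/scala -->
                <goal>testCompile</goal> <!-- compiles src/test/scala so scalatest has class files to discover -->
            </goals>
        </execution>
    </executions>
</plugin>
```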
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ```
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <skipTests>true</skipTests>
    </configuration>
</plugin>
<!-- enable scalatest -->
<plugin>
    <groupId>org.scalatest</groupId>
    <artifactId>scalatest-maven-plugin</artifactId>
    <version>1.0</version>
    <configuration>
        <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
        <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
    </configuration>
    <executions>
        <execution>
            <id>test</id>
            <goals>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
``` And here's the test class: ```
class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter {
    private var schemaStrategy: SchemaStrategy = _
    private var dataReader: DataFrameReader = _

    before {
        val sqlContext = new SQLContext(sc)
        import sqlContext._
        import sqlContext.implicits._

        val dataInReader = sqlContext.read.format("com.databricks.spark.csv")
            .option("header", "true")
            .option("nullValue", "")
        schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency")
        dataReader = schemaStrategy.applySchema(dataInReader)
    }

    "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in {
        val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv")
        val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn))
        valid match {
            case Success(v) => ()
            case Failure(e) => fail("Validation failed on what should have been a clean file: ", e)
        }
    }
}
``` When I run `mvn test`, it can't find any tests and outputs this message: ```
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db ---
Discovery starting.
Discovery completed in 54 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 133 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
``` **UPDATE** By using: ```
<suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites>
``` Instead of: ```
<wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
``` I can get that one test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
In my case, it was because I wasn't using the following plugin: ```xml
<plugin>
    <groupId>org.scala-tools</groupId>
    <artifactId>maven-scala-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <scalaVersion>${scala.version}</scalaVersion>
        <args>
            <arg>-target:jvm-1.8</arg>
        </args>
    </configuration>
</plugin>
```
In my case, it was because of the nesting of tests inside the test directory combined with the `<memberOnlySuites>` configuration. `<memberOnlySuites>` only looks for test files in the given package/directory. Instead, use `<wildcardSuites>`, which looks into a package/directory and all of its subdirectories. This happens quite often when you are adding more tests to your test suite and organising them in a more structured manner.
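For example, a minimal sketch of the relevant configuration (the package name `com.example.tests` is a placeholder, not from the original project):
```xml
<configuration>
    <!-- discovers suites in com.example.tests and in every subpackage -->
    <wildcardSuites>com.example.tests</wildcardSuites>
    <!-- whereas memberOnlySuites would discover only suites sitting directly in the package:
    <memberOnlySuites>com.example.tests</memberOnlySuites>
    -->
</configuration>
```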
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ```
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <skipTests>true</skipTests>
    </configuration>
</plugin>
<!-- enable scalatest -->
<plugin>
    <groupId>org.scalatest</groupId>
    <artifactId>scalatest-maven-plugin</artifactId>
    <version>1.0</version>
    <configuration>
        <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
        <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
    </configuration>
    <executions>
        <execution>
            <id>test</id>
            <goals>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
``` And here's the test class: ```
class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter {
    private var schemaStrategy: SchemaStrategy = _
    private var dataReader: DataFrameReader = _

    before {
        val sqlContext = new SQLContext(sc)
        import sqlContext._
        import sqlContext.implicits._

        val dataInReader = sqlContext.read.format("com.databricks.spark.csv")
            .option("header", "true")
            .option("nullValue", "")
        schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency")
        dataReader = schemaStrategy.applySchema(dataInReader)
    }

    "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in {
        val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv")
        val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn))
        valid match {
            case Success(v) => ()
            case Failure(e) => fail("Validation failed on what should have been a clean file: ", e)
        }
    }
}
``` When I run `mvn test`, it can't find any tests and outputs this message: ```
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db ---
Discovery starting.
Discovery completed in 54 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 133 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
``` **UPDATE** By using: ```
<suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites>
``` Instead of: ```
<wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
``` I can get that one test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
This is probably because there are space characters in the project path. Remove the spaces from the project path, and the tests should be discovered successfully. Hope this helps.
In my case, it was because of the nesting of tests inside the test directory combined with the `<memberOnlySuites>` configuration. `<memberOnlySuites>` only looks for test files in the given package/directory. Instead, use `<wildcardSuites>`, which looks into a package/directory and all of its subdirectories. This happens quite often when you are adding more tests to your test suite and organising them in a more structured manner.
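For example, a minimal sketch of the relevant configuration (the package name `com.example.tests` is a placeholder, not from the original project):
```xml
<configuration>
    <!-- discovers suites in com.example.tests and in every subpackage -->
    <wildcardSuites>com.example.tests</wildcardSuites>
    <!-- whereas memberOnlySuites would discover only suites sitting directly in the package:
    <memberOnlySuites>com.example.tests</memberOnlySuites>
    -->
</configuration>
```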
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ```
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <skipTests>true</skipTests>
    </configuration>
</plugin>
<!-- enable scalatest -->
<plugin>
    <groupId>org.scalatest</groupId>
    <artifactId>scalatest-maven-plugin</artifactId>
    <version>1.0</version>
    <configuration>
        <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
        <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
    </configuration>
    <executions>
        <execution>
            <id>test</id>
            <goals>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
``` And here's the test class: ```
class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter {
    private var schemaStrategy: SchemaStrategy = _
    private var dataReader: DataFrameReader = _

    before {
        val sqlContext = new SQLContext(sc)
        import sqlContext._
        import sqlContext.implicits._

        val dataInReader = sqlContext.read.format("com.databricks.spark.csv")
            .option("header", "true")
            .option("nullValue", "")
        schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency")
        dataReader = schemaStrategy.applySchema(dataInReader)
    }

    "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in {
        val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv")
        val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn))
        valid match {
            case Success(v) => ()
            case Failure(e) => fail("Validation failed on what should have been a clean file: ", e)
        }
    }
}
``` When I run `mvn test`, it can't find any tests and outputs this message: ```
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db ---
Discovery starting.
Discovery completed in 54 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 133 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
``` **UPDATE** By using: ```
<suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites>
``` Instead of: ```
<wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
``` I can get that one test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
This is probably because there are space characters in the project path. Remove the spaces from the project path, and the tests should be discovered successfully. Hope this helps.
Try excluding junit as a transitive dependency; that works for me. Example below, but note the Scala and Spark versions are specific to my environment. ```
<dependency>
    <groupId>com.holdenkarau</groupId>
    <artifactId>spark-testing-base_2.10</artifactId>
    <version>1.5.0_0.6.0</version>
    <scope>test</scope>
    <exclusions>
        <!-- junit is not compatible with scalatest -->
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ```
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <skipTests>true</skipTests>
    </configuration>
</plugin>
<!-- enable scalatest -->
<plugin>
    <groupId>org.scalatest</groupId>
    <artifactId>scalatest-maven-plugin</artifactId>
    <version>1.0</version>
    <configuration>
        <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
        <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
    </configuration>
    <executions>
        <execution>
            <id>test</id>
            <goals>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
``` And here's the test class: ```
class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter {
    private var schemaStrategy: SchemaStrategy = _
    private var dataReader: DataFrameReader = _

    before {
        val sqlContext = new SQLContext(sc)
        import sqlContext._
        import sqlContext.implicits._

        val dataInReader = sqlContext.read.format("com.databricks.spark.csv")
            .option("header", "true")
            .option("nullValue", "")
        schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency")
        dataReader = schemaStrategy.applySchema(dataInReader)
    }

    "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in {
        val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv")
        val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn))
        valid match {
            case Success(v) => ()
            case Failure(e) => fail("Validation failed on what should have been a clean file: ", e)
        }
    }
}
``` When I run `mvn test`, it can't find any tests and outputs this message: ```
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db ---
Discovery starting.
Discovery completed in 54 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 133 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
``` **UPDATE** By using: ```
<suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites>
``` Instead of: ```
<wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
``` I can get that one test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
In my case, it was because of the nesting of tests inside the test directory combined with the `<memberOnlySuites>` configuration. `<memberOnlySuites>` only looks for test files in the given package/directory. Instead, use `<wildcardSuites>`, which looks into a package/directory and all of its subdirectories. This happens quite often when you are adding more tests to your test suite and organising them in a more structured manner.
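For example, a minimal sketch of the relevant configuration (the package name `com.example.tests` is a placeholder, not from the original project):
```xml
<configuration>
    <!-- discovers suites in com.example.tests and in every subpackage -->
    <wildcardSuites>com.example.tests</wildcardSuites>
    <!-- whereas memberOnlySuites would discover only suites sitting directly in the package:
    <memberOnlySuites>com.example.tests</memberOnlySuites>
    -->
</configuration>
```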
**Cause:** Maven does not compile your Scala test code when you run `mvn` commands (no Scala compiler is bound to the test phase), so there are no compiled test classes for scalatest to discover. **Workaround:** > > Run the Scala tests from your IDE, which compiles the test code and saves it in the target directory. The next time you run `mvn test`, or any Maven command that internally triggers Maven's test cycle, the Scala tests should run. > > >
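A more durable fix is to let Maven compile the Scala test sources itself, so `mvn test` works without the IDE step. A minimal sketch, assuming `scala-maven-plugin` (the version below is an assumption; match it to your build):
```xml
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <version>3.2.2</version> <!-- assumed version; use whatever matches your build -->
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>     <!-- compiles src/main/scala -->
                <goal>testCompile</goal> <!-- compiles src/test/scala so scalatest has class files to discover -->
            </goals>
        </execution>
    </executions>
</plugin>
```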
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ```
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <skipTests>true</skipTests>
    </configuration>
</plugin>
<!-- enable scalatest -->
<plugin>
    <groupId>org.scalatest</groupId>
    <artifactId>scalatest-maven-plugin</artifactId>
    <version>1.0</version>
    <configuration>
        <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
        <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
    </configuration>
    <executions>
        <execution>
            <id>test</id>
            <goals>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
``` And here's the test class: ```
class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter {
    private var schemaStrategy: SchemaStrategy = _
    private var dataReader: DataFrameReader = _

    before {
        val sqlContext = new SQLContext(sc)
        import sqlContext._
        import sqlContext.implicits._

        val dataInReader = sqlContext.read.format("com.databricks.spark.csv")
            .option("header", "true")
            .option("nullValue", "")
        schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency")
        dataReader = schemaStrategy.applySchema(dataInReader)
    }

    "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in {
        val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv")
        val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn))
        valid match {
            case Success(v) => ()
            case Failure(e) => fail("Validation failed on what should have been a clean file: ", e)
        }
    }
}
``` When I run `mvn test`, it can't find any tests and outputs this message: ```
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db ---
Discovery starting.
Discovery completed in 54 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 133 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
``` **UPDATE** By using: ```
<suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites>
``` Instead of: ```
<wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
``` I can get that one test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
Try excluding junit as a transitive dependency; that works for me. Example below, but note the Scala and Spark versions are specific to my environment. ```
<dependency>
    <groupId>com.holdenkarau</groupId>
    <artifactId>spark-testing-base_2.10</artifactId>
    <version>1.5.0_0.6.0</version>
    <scope>test</scope>
    <exclusions>
        <!-- junit is not compatible with scalatest -->
        <exclusion>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```
**Cause:** Maven does not compile your Scala test code when you run `mvn` commands (no Scala compiler is bound to the test phase), so there are no compiled test classes for scalatest to discover. **Workaround:** > > Run the Scala tests from your IDE, which compiles the test code and saves it in the target directory. The next time you run `mvn test`, or any Maven command that internally triggers Maven's test cycle, the Scala tests should run. > > >
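A more durable fix is to let Maven compile the Scala test sources itself, so `mvn test` works without the IDE step. A minimal sketch, assuming `scala-maven-plugin` (the version below is an assumption; match it to your build):
```xml
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <version>3.2.2</version> <!-- assumed version; use whatever matches your build -->
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>     <!-- compiles src/main/scala -->
                <goal>testCompile</goal> <!-- compiles src/test/scala so scalatest has class files to discover -->
            </goals>
        </execution>
    </executions>
</plugin>
```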
38,700,319
I'm trying to use [scalatest](http://www.scalatest.org/) and [spark-testing-base](https://github.com/holdenk/spark-testing-base) on Maven for integration testing Spark. The Spark job reads in a CSV file, validates the results, and inserts the data into a database. I'm trying to test the validation by putting in files of known format and seeing if and how they fail. This particular test just makes sure the validation passes. Unfortunately, scalatest can't find my tests. Relevant pom plugins: ```
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <skipTests>true</skipTests>
    </configuration>
</plugin>
<!-- enable scalatest -->
<plugin>
    <groupId>org.scalatest</groupId>
    <artifactId>scalatest-maven-plugin</artifactId>
    <version>1.0</version>
    <configuration>
        <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>
        <wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
    </configuration>
    <executions>
        <execution>
            <id>test</id>
            <goals>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
``` And here's the test class: ```
class ProficiencySchemaITest extends FlatSpec with Matchers with SharedSparkContext with BeforeAndAfter {
    private var schemaStrategy: SchemaStrategy = _
    private var dataReader: DataFrameReader = _

    before {
        val sqlContext = new SQLContext(sc)
        import sqlContext._
        import sqlContext.implicits._

        val dataInReader = sqlContext.read.format("com.databricks.spark.csv")
            .option("header", "true")
            .option("nullValue", "")
        schemaStrategy = SchemaStrategyChooser("dim_state_test_proficiency")
        dataReader = schemaStrategy.applySchema(dataInReader)
    }

    "Proficiency Validation" should "pass with the CSV file proficiency-valid.csv" in {
        val dataIn = dataReader.load("src/test/resources/proficiency-valid.csv")
        val valid: Try[DataFrame] = Try(schemaStrategy.validateCsv(dataIn))
        valid match {
            case Success(v) => ()
            case Failure(e) => fail("Validation failed on what should have been a clean file: ", e)
        }
    }
}
``` When I run `mvn test`, it can't find any tests and outputs this message: ```
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ load-csv-into-db ---
Discovery starting.
Discovery completed in 54 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 133 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
``` **UPDATE** By using: ```
<suites>com.cainc.data.etl.schema.proficiency.ProficiencySchemaITest</suites>
``` Instead of: ```
<wildcardSuites>com.cainc.data.etl.schema.proficiency</wildcardSuites>
``` I can get that one test to run. Obviously, this is not ideal. It's possible wildcardSuites is broken; I'm going to open a ticket on GitHub and see what happens.
2016/08/01
[ "https://Stackoverflow.com/questions/38700319", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2683012/" ]
The issue I had with tests not getting discovered came down to the fact that tests are discovered from the compiled `class` files, so to make the tests discoverable I needed to add `<goal>testCompile</goal>` to the `scala-maven-plugin` `goals`, as sketched below.
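A minimal sketch of that configuration, assuming the `scala-maven-plugin` coordinates below match your build:
```xml
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <!-- without testCompile, no test class files are produced, so discovery finds nothing -->
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```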
In my case, it was because of the nesting of tests inside the test directory combined with the `<memberOnlySuites>` configuration. `<memberOnlySuites>` only looks for test files in the given package/directory. Instead, use `<wildcardSuites>`, which looks into a package/directory and all of its subdirectories. This happens quite often when you are adding more tests to your test suite and organising them in a more structured manner.
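For example, a minimal sketch of the relevant configuration (the package name `com.example.tests` is a placeholder, not from the original project):
```xml
<configuration>
    <!-- discovers suites in com.example.tests and in every subpackage -->
    <wildcardSuites>com.example.tests</wildcardSuites>
    <!-- whereas memberOnlySuites would discover only suites sitting directly in the package:
    <memberOnlySuites>com.example.tests</memberOnlySuites>
    -->
</configuration>
```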