8,493,521
The problem is the following: * Input: All articles from Wikipedia (33 GB of text) * Output: Count of each word skipgram (n-gram with at most k skips) from Wikipedia in an SQLite file. Output table schema is: ``` CREATE TABLE [tokens] ([token] TEXT UNIQUE NOT NULL PRIMARY KEY, [count] INTEGER NOT NULL) ``` The naive approach is that for each skipgram we create a new record in the table or increment the counter in the existing record: ``` INSERT OR REPLACE INTO [tokens] VALUES (@token, COALESCE((SELECT count FROM [tokens] WHERE token=@token), 0) + 1) ``` The problem with this approach is that the index is constantly updated, and when the database grows to several gigabytes those updates become very slow. We can solve this by creating the "tokens" table without an index and adding the index at the end of processing. The problem then is that the select statement `SELECT count FROM [tokens] WHERE token=@token` has to scan the table, which significantly reduces performance. The best method I have found so far is the following (I am using C#): 1. Create a `Dictionary<string,int>` in order to count tokens. 2. Add tokens to this dictionary until it gets too big to fit in RAM. 3. Insert (not update) the tokens from the dictionary into a temporary table without an index. The table has the following schema: ``` CREATE TABLE [temp] ([token] TEXT, [count] INTEGER) ``` 4. If there are more tokens, clear the dictionary and go to step 2. 5. Copy the tokens from the temp table to the tokens table: ``` INSERT INTO [tokens] SELECT [token], SUM([count]) AS [count] FROM [temp] GROUP BY [token] ``` This method takes "only" 24 hours to process the dataset, but I believe it is not the best approach, because step 5 takes 22 of those 24 hours. Do you know an alternative approach that can solve this problem? P.S. My application is single-threaded and I make the above inserts in batches (100,000 per batch) within a transaction.
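For reference, steps 1-5 can be sketched compactly (Python's built-in sqlite3 here instead of C#, with an artificially tiny "RAM limit"; the table and column names follow the post):

```python
import sqlite3
from collections import Counter

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE [temp] ([token] TEXT, [count] INTEGER)")
conn.execute(
    "CREATE TABLE [tokens] "
    "([token] TEXT UNIQUE NOT NULL PRIMARY KEY, [count] INTEGER NOT NULL)"
)

def flush(counts):
    # Step 3: dump the in-memory counts into the unindexed temp table.
    conn.executemany("INSERT INTO [temp] VALUES (?, ?)", list(counts.items()))
    counts.clear()

# Steps 1-4: count in RAM, flushing whenever the dictionary grows too large.
counts = Counter()
for token in ["a b", "a b", "b c"]:  # stand-in for the skipgram stream
    counts[token] += 1
    if len(counts) >= 2:             # artificially small RAM limit
        flush(counts)
flush(counts)

# Step 5: a single grouped insert builds the final, indexed table.
conn.execute(
    "INSERT INTO [tokens] SELECT [token], SUM([count]) FROM [temp] GROUP BY [token]"
)
counts_on_disk = dict(conn.execute("SELECT [token], [count] FROM [tokens]"))
```

The expensive part in the post is exactly that last grouped insert, which sorts or hashes the whole temp table while maintaining the primary-key index of the target.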
2011/12/13
[ "https://Stackoverflow.com/questions/8493521", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1096250/" ]
I would suggest creating another table with the same definition, populating it to a certain size, merging the results into the main one, purging it, and then processing the next set of items.
I would suggest adding `SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED`. That means the counts could be slightly off, especially in a threaded environment where multiple writers are trying to insert or update at the same time.
If you have many gigs to spare.... I suggest that you do not count the tokens as you go, but rather add all the tokens into a single table and create an index that organizes the tokens. ``` CREATE TABLE tokens (token TEXT); CREATE INDEX tokens_token ON tokens (token ASC); ``` then add all of the tokens one at a time... ``` INSERT INTO tokens VALUES ('Global Warming'); INSERT INTO tokens VALUES ('Global Cooling'); ``` finally execute a `SELECT ... GROUP BY` ``` SELECT token, COUNT(0) token_count FROM tokens GROUP BY token ```
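A self-contained sketch of this insert-everything-then-aggregate idea (Python's sqlite3 here for brevity; the token values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tokens (token TEXT)")
conn.execute("CREATE INDEX tokens_token ON tokens (token ASC)")

# One row per occurrence - no per-token lookup while loading.
for t in ["Global Warming", "Global Cooling", "Global Warming"]:
    conn.execute("INSERT INTO tokens VALUES (?)", (t,))

# The index keeps the tokens organized, so GROUP BY can aggregate them.
rows = conn.execute(
    "SELECT token, COUNT(0) AS token_count FROM tokens GROUP BY token"
).fetchall()
```

Whether maintaining the index during 200M+ single-row inserts beats the post's temp-table batching is an open question; the trade is per-insert index maintenance against one huge final aggregation.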
This sounds like a good place to use a "counting bloom filter" to me. It'd require two passes over your data, and it's a bit heuristic, but it should be fast. Bloom filters allow set insertion and presence tests in constant time. A counting bloom filter counts how many of a particular value have been found, as opposed to the usual bloom filter that only keeps track of presence/absence.
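A counting Bloom filter can be sketched in a few lines (the hash construction and sizes here are illustrative choices, not a tuned implementation):

```python
import hashlib

class CountingBloomFilter:
    """Approximate multiset: may over-count on collisions, never under-counts."""

    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.slots = [0] * size

    def _positions(self, item):
        # k derived hash positions per item (illustrative construction).
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.slots[p] += 1

    def count(self, item):
        # The true count is at most the minimum of the touched slots.
        return min(self.slots[p] for p in self._positions(item))

cbf = CountingBloomFilter()
for token in ["the cat", "the cat", "a dog"]:
    cbf.add(token)
```

The first pass could fill a filter like this to find which skipgrams are frequent enough to matter; the second pass would count only those exactly, which is the "bit heuristic" the answer mentions.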
63,389,074
I have an application where I have two types of data: 1. Persons 2. Toys Conditions: * User can assign a person to one of the toys, depending on whether the toyType === the person's type. For example, ```js { name: "Lisa", age: 7, type: "F" }, ``` ...can't be assigned to a toy with toyType `M`, only `F` * User can assign a person to just one type of toy. My code: ```js const info = [ { name: "Bill", age: 11, type: "M" }, { name: "Lisa", age: 7, type: "F" }, { name: "Carl", age: 17, type: "M" }, { name: "John", age: 8, type: "M" } ]; const toys = [ { color: "red", toyType: "M" }, { color: "white", toyType: "F" } ]; const thisPerson = { name: "Carl",age: 17}; function app(selectedtoyType) { const result = toys.map((i) => { if (selectedtoyType === i.toyType) { return { ...i, persons: toys.persons ? [...i.persons, thisPerson] : [thisPerson] }; } else { return { ...i }; } }); return result; } console.log('1 time assign', app('M')) console.log('2 time assign', app('M')) ``` I took an example with `const thisPerson = { name: "Carl",age: 17};` and tried to assign him twice. The first time I should get just one person in the `persons` array; the second time I should get an error in the console. I should also be able to add another person with type `M` to this array, of course a different one from the previous. **Question: What is the problem with my code?** P.S.: the code is just a simulation of the app.
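For reference, a likely explanation: `app` reads `toys.persons` (a property of the *array* itself, which is always undefined) instead of `i.persons`, and it returns a new array without ever writing back to `toys`, so the second call starts from the original, person-less state. A minimal sketch of a version that persists assignments and rejects duplicates (names taken from the post; checking duplicates by `name` is an assumption):

```javascript
const toys = [
  { color: "red", toyType: "M" },
  { color: "white", toyType: "F" }
];

// Mutates `toys`, so repeated calls see earlier assignments.
function assign(person, selectedToyType) {
  if (person.type !== selectedToyType) {
    throw new Error(`${person.name} cannot take a ${selectedToyType} toy`);
  }
  for (const toy of toys) {
    if (toy.toyType !== selectedToyType) continue;
    toy.persons = toy.persons || [];
    if (toy.persons.some((p) => p.name === person.name)) {
      throw new Error(`${person.name} is already assigned`);
    }
    toy.persons.push(person);
  }
  return toys;
}

assign({ name: "Carl", age: 17, type: "M" }, "M");
```

A second `assign` call with the same Carl now throws, because the first call actually stored him on the `M` toy.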
2020/08/13
[ "https://Stackoverflow.com/questions/63389074", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12540500/" ]
By doing: ``` cv::Mat testInputImage(80, 80, CV_32FC(3), TF_TensorData(*OutputValues)); ``` you are "wrapping" the existing data in a `cv::Mat`, which avoids a copy. Note that the third argument should be `CV_32FC(3)` (a 32-bit floating point image, with 3 channels). This approach should work if `OutputValues` is a `TF_Tensor**` type, and if the underlying `TF_Tensor` holds appropriate data. However, I don't think that this: ``` TF_Tensor** OutputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*) * NumOutputs); ``` is an appropriate way to allocate a TF\_Tensor; I think you should be using [TF\_AllocateTensor](https://github.com/tensorflow/tensorflow/blob/f0030f31d1e9021169d6f679c9021b21a79d90b0/tensorflow/c/tf_tensor.h#L93L104) instead. All that said, if you are using C++, you might consider using the [tf::Tensor](https://www.tensorflow.org/api_docs/cc/class/tensorflow/tensor) API instead of `TF_Tensor` (which is used for C, and is less common). You omitted some details, but let's say your tensor is 4-dimensional (as is common), has float32 values, and is laid out as NxHxWxC (in other words, the tensor is holding a collection of float images). If you want to convert the `idx`-th element in the batch to a `cv::Mat`, you can do it like this: ``` tf::Tensor tensor = /* tensor from somewhere */; int idx = /* index of the image in the batch */; int batch_size = tensor.dim_size(0); int rows = tensor.dim_size(1); int cols = tensor.dim_size(2); int channels = tensor.dim_size(3); int row_size = channels * cols * sizeof(float); cv::Mat image(rows, cols, CV_32FC(channels)); auto tensor_mapped = tensor.tensor<float, 4>(); for (int r = 0; r < rows; ++r) { float* row = reinterpret_cast<float*>(image.data + r * row_size); for (int c = 0; c < cols; ++c) { for (int k = 0; k < channels; ++k) { row[k + c * channels] = tensor_mapped(idx, r, c, k); } } } ```
Try: ``` cv::Mat mat(height, width, CV_32F); // cv::Mat takes (rows, cols) std::memcpy((void *)mat.data, camBuf, height * width * sizeof(float)); // copy the pixel data, not the pointer array ```
209,367
I am trying to plot something like the [Frenet-Serret Formulas](https://en.wikipedia.org/wiki/Frenet%E2%80%93Serret_formulas) using a parametric plot, like this: ``` r[t_] := {t, t^2, 2 t^3/3} t[t_] := r'[t]/Norm[r'[t]] Manipulate[ParametricPlot3D[{r[t]}, {t, 0, p}, PlotRange -> {{-0.1, 1.1}, {-0.1, 1.1}, {-0.1, 1.1}}], {p, 10^-10, 1}] ``` Now I would like to have the function `t[x]` also plotted, but as a single vector at the coordinates of the current point `r[p]`. This probably helps to visualize what I am trying to do, as it is exactly the same thing: [Frenet-Serret Frame moving along a parametric helix](https://en.wikipedia.org/wiki/File:Frenetframehelix.gif) The helix would be my function `r` and `t` is the first of those three vectors that I would like to see moving around. I tried two things, first using ``` Manipulate[ParametricPlot3D[{r[t]}, {t, 0, p}, PlotRange -> {{-0.1, 1.1}, {-0.1, 1.1}, {-0.1, 1.1}}, Epilog -> {Arrow[{r[p], t[p]}]}], {p, 10^-10, 1}] ``` But the Arrow is shown as an overlay in 2D above the plot, and then ``` Manipulate[ParametricPlot3D[{r[t]}, {t, 0, p}, PlotRange -> {{-0.1, 1.1}, {-0.1, 1.1}, {-0.1, 1.1}}, Epilog -> {ParametricPlot3D[t[p]*u, {u, 0, 1}] /. Line -> Arrow}], {p, 10^-10, 1}] ``` which I cannot get to work, because `ParametricPlot3D` is not a primitive that can be shown in `Epilog`. So, any ideas? ~~I'm sure, because I'm just a noob.~~ Thanks in advance guys and have a nice day :)
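For reference, the three vectors of the frame in question are the standard Frenet-Serret ones (the question's `t` is the unit tangent $\mathbf{T}$):

```latex
\mathbf{T}(t) = \frac{\mathbf{r}'(t)}{\lVert \mathbf{r}'(t) \rVert}, \qquad
\mathbf{N}(t) = \frac{\mathbf{T}'(t)}{\lVert \mathbf{T}'(t) \rVert}, \qquad
\mathbf{B}(t) = \mathbf{T}(t) \times \mathbf{N}(t)
```

so an animation of the full frame draws these three unit vectors anchored at the moving point $\mathbf{r}(p)$.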
2019/11/10
[ "https://mathematica.stackexchange.com/questions/209367", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/68374/" ]
The problem is that [`Epilog`](https://reference.wolfram.com/language/ref/Epilog.html) creates a 2D graphic that is overlaid on top of the main image. From the **Details** section of the documentation > > In three-dimensional graphics, two-dimensional graphics primitives can be specified by the Epilog option. > > > Thus we have to create a separate 3D object and "overlay"/superimpose it onto the parametric plot with [`Show`](https://reference.wolfram.com/language/ref/Show.html). And to make sure we can always see all of the arrows (i.e. make it so the arrows don't go outside of the bounding box), we need to find the right plot range. Finding the right plot range ============================ **Summary**: We need to find the minimum of the minima and the maximum of the maxima of each coordinate of each vector over the parametric domain. Given the curves ``` r[t_] := {t, t^2, 2 t^3/3} v[t_] := Normalize[r'[t]] /. Abs[x_] :> x ``` the probably-not-best-way to find the right plot range is to find the minimum and maximum values of each coordinate of each vector over the whole parametric domain. 
``` ( {NMinValue[{#, 0 < t < 1}, t], NMaxValue[{#, 0 < t < 1}, t]} & /@ ( r[t] + 0.5 Normalize@# ) ) & /@ {v[t], v'[t], Cross[v[t], v'[t]]} ``` > > > ``` > { > (*Min and Max for each x,y,z for v[t]*) > {{0.5, 1.16667}, {0., 1.33333}, {1.42102*10^-19, 1.}}, > (*v'[t]*) {{7.95036*10^-15, 0.666667}, {0.414214, 0.833333}, {0., 1.}}, > (*Cross[v[t], v'[t]]*) {{0., 1.33333}, {-0.164252, 0.666667}, {0.415978, 0.833333}} > } > > ``` > > Then group them by coordinate ``` Transpose[%, {3, 2, 1}] ``` > > > ``` > { > { > (*Minimum x for v[t], v'[t], Cross[v[t], v'[t]]*) > {0.5, 7.95036*10^-15, 0.}, > (*y*) {0., 0.414214, -0.164252}, > (*z*) {1.42102*10^-19, 0., 0.415978} > }, > (*Maxima*) { > {1.16667, 0.666667, 1.33333}, > {1.33333, 0.833333, 0.666667}, > {1., 1., 0.833333}} > } > > ``` > > Then find the minimal minimum and maximal maximum of each coordinate so that each vector is always within the formed box. ``` infimumbox = Transpose@{Min /@ %[[1]], Max /@ %[[2]]} ``` > > > ``` > { > (*Min, Max x for all vectors over time*) {0., 1.33333}, > (*y*) {-0.164252, 1.33333}, > (*z*) {0., 1.} > } > > ``` > > And those edges form the smallest cuboid that holds all three vectors over the whole parametric domain. Making the animation ==================== Now that we have a plot range `infimumbox`, we can animate the problem. Since we need to plot from zero to something positive, we can't include `p = 0` in our time/parametric domain. So instead we choose the closest thing, [`$MinMachineNumber`](https://reference.wolfram.com/language/ref/$MinMachineNumber.html). 
``` Manipulate[ Show[ ParametricPlot3D[ r[t], {t, 0, p}, PlotRange -> infimumbox ], Graphics3D[ {Thickness[.006], {Red, Arrow[{r[p], r[p] + 0.5 Normalize@v[p]}]}, {Blue, Arrow[{r[p], r[p] + 0.5 Normalize[v'[p]]}]}, {Darker[Green, 3/5], Arrow[{r[p], r[p] + 0.5 Normalize@Cross[v[p], v'[p]]}]} } ], PlotRange -> infimumbox ], {p, $MinMachineNumber, 1, Animator} ] ``` [![enter image description here](https://i.stack.imgur.com/jBRMy.gif)](https://i.stack.imgur.com/jBRMy.gif) Another example (a simple helix i.e. `r[t_] := {Cos[2 π t], Sin[2 π t], 0.5 t}` which yields `infimumbox = {{-1.11733, 1.11733}, {-1.11733, 1.11733}, {0., 2.06922}}` over the domain of `$MinMachineNumber <= t <= π`) that clearly shows the relationship between the vectors. [![enter image description here](https://i.stack.imgur.com/0Gr6W.gif)](https://i.stack.imgur.com/0Gr6W.gif)
Try `Show`: ``` Manipulate[Show[ ParametricPlot3D[{r[t]}, {t, 0, p}, PlotRange -> {{-0.1, 1.1}, {-0.1, 1.1}, {-0.1, 1.1}}], Graphics3D@Arrow[{r[p], t[p]}] ], {p, 10^-10, 1}] ``` [![enter image description here](https://i.stack.imgur.com/SM3l3.png)](https://i.stack.imgur.com/SM3l3.png)
21,830,761
I used a PHP form and a jQuery script so that if a customer leaves any field unfilled, they can't proceed. The related field's code is as follows: ``` <div class="storagetype"> <div class="input select"> <div class="labelfortype"> <label for="storagetype">select your storage type</label></div><!--end of labelfortype class--> <select name="storagetype" id="storagetype"> <option value="">(select)</option> <option value="business">business</option> <option value="domestic">domestic</option> <option value="student">student</option> </select> </div> </div><!--end of storagetype class--> ``` When nothing is selected, the (select) option appears by default. So when (select) is active, the customer hasn't chosen one of the options yet. But if the customer still tries to submit the form, the form lets them do so, since it sees (select) as an option with a value. How can I make sure that unless one of the "business, domestic or student" options is selected, they can't submit the form? The live example can be found at this [Link](http://urbanlocker.co.uk/testingforselectoption.php) The JavaScript code that I use for the "required fill" function is below: ``` function formCheck(formobj){ var x=document.forms["form1"]["email"].value; var atpos=x.indexOf("@"); var dotpos=x.lastIndexOf("."); var fieldRequired = Array("storagetype", "location", "durationnumber", "duration", "txtDate", "fname", "sname", "email", "number"); // Enter field description to appear in the dialog box var fieldDescription = Array("Type of Storage", "Post Code", "Duration Number", "Rental Duration Period", "Rental Start Date", "First Name", "Surname", "Email Address", "Telephone Number"); // dialog message var alertMsg = "Please complete all fields and enter a valid email address"; var l_Msg = alertMsg.length; for (var i = 0; i < fieldRequired.length; i++){ var obj = formobj.elements[fieldRequired[i]]; if (obj){ switch(obj.type){ case "select-one": if (obj.selectedIndex == -1 || obj.options[obj.selectedIndex].text == ""){ 
alertMsg += " - " + fieldDescription[i] + "\n"; } break; case "select-multiple": if (obj.selectedIndex == -1){ alertMsg += " - " + fieldDescription[i] + "\n"; } break; case "text": case "textarea": if (obj.value == "" || obj.value == null || atpos<1 || dotpos<atpos+2 || dotpos+2>=x.length){ alertMsg += " " + " " + "\n"; } break; default: } if (obj.type == undefined){ var blnchecked = false; for (var j = 0; j < obj.length; j++){ if (obj[j].checked){ blnchecked = true; } } if (!blnchecked){ alertMsg += " - " + fieldDescription[i] + "\n"; } } } } if (alertMsg.length == l_Msg){ return true; }else{ alert(alertMsg); return false; } } // --> </script> ``` This JavaScript works great for all the text fields, but gives the error I've mentioned only for the dropdown options, such as the storage type in the example.
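The usual fix is to reject submission while the select's value is still the placeholder's empty string, rather than checking the option *text*. A minimal sketch of that check, using plain objects as hypothetical stand-ins for the real DOM elements:

```javascript
// A <select> whose placeholder has value="" is unanswered while its
// value is still the empty string.
function selectIsAnswered(selectLike) {
  return selectLike.value !== "";
}

// Hypothetical stand-ins for the form's select element.
const unanswered = { value: "" };       // "(select)" still chosen
const answered = { value: "business" };
```

In the posted `formCheck`, that would mean testing `obj.value == ""` in the `"select-one"` case instead of `obj.options[obj.selectedIndex].text == ""`, since the placeholder's visible text "(select)" is not empty even though its value is.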
2014/02/17
[ "https://Stackoverflow.com/questions/21830761", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3280147/" ]
Use slicing with a stride: ``` x = x[1::2] ``` or to select the odd items instead of the even ones: ``` x = x[::2] ``` The first takes every second element in the input list, starting from the second item. The other takes every second element from the list, starting at the first item. Demo: ``` >>> x = ['apple', 'fruit', 'orange', 'fruit', 'lemon', 'fruit'] >>> x[1::2] ['fruit', 'fruit', 'fruit'] >>> x[::2] ['apple', 'orange', 'lemon'] ``` The original code only works if you wanted to select just the even *values*, not the items at even indices. You can use the [`enumerate()` function](http://docs.python.org/2/library/functions.html#enumerate) to add indices to that loop: ``` >>> [f for i, f in enumerate(x) if i % 2] ['fruit', 'fruit', 'fruit'] ``` but slicing is way easier here.
Your code fails because you are trying to modify the list you are iterating over, without considering the side effects. ``` x = ['apple','fruit','orange','fruit','lemon','fruit'] for i in range(0,len(x),2): if i%2 !=0: x.pop(i) print x ``` **Note 1:** `range(0,len(x),2)` will produce `[0, 2, 4]` and none of those will satisfy the condition. So, I am going to assume that you meant `range(1,len(x),2)` **Note 2:** Since you are iterating over `range(1,len(x),2)`, which is actually `[1, 3, 5]`, the `if` condition is redundant. When `i` is 1, we pop the element at 1. So what actually happens is *Before popping element at 1* ``` x = ['apple','fruit','orange','fruit','lemon','fruit'] 0 1 2 3 4 5 ``` *After popping element at 1* ``` x = ['apple','orange','fruit','lemon','fruit'] 0 1 2 3 4 ``` In the same way, when `i` becomes 3, we pop the element at 3 *Before popping element at 3* ``` x = ['apple','orange','fruit','lemon','fruit'] 0 1 2 3 4 ``` *After popping element at 3* ``` x = ['apple','orange','fruit','fruit'] 0 1 2 3 ``` Now, `i` becomes 5 and there is no element at location 5. That is why `x.pop(5)` raises a ``` pop index out of range ``` error. 
You can confirm that with this program ``` x = ['apple','fruit','orange','fruit','lemon','fruit'] try: for i in range(1,len(x),2): if i%2 !=0: x.pop(i) except IndexError, e: print e, x, i ``` The output would be like this ``` pop index out of range ['apple', 'orange', 'fruit', 'fruit'] 5 ``` **Solution** You can use slicing notation to get only the elements at even indices, like this ``` print x[::2] # ['apple', 'orange', 'lemon'] ``` You can get the elements at odd indices like this ``` print x[1::2] # ['fruit', 'fruit', 'fruit'] ``` Otherwise, you can use a list comprehension to filter by index, like this ``` x = ['apple','fruit','orange','fruit','lemon','fruit'] print [item for idx, item in enumerate(x) if idx % 2 == 0] # ['apple', 'orange', 'lemon'] print [item for idx, item in enumerate(x) if idx % 2 == 1] # ['fruit', 'fruit', 'fruit'] ```
189,424
I'm trying to plot a Table of functions. Since the number of elements is unspecified, I want to specify a "repeating" graphic directive. Supposing I have two directives, *m* and *s*, I would like to do something like: ``` PlotStyle -> {m, s...} ``` But I can only specify ``` PlotStyle -> {m, s} ``` However, since this specification is applied cyclically, I get (the equivalent of) ``` {m, s, m, s, m, s...} ``` So I'm forced to specify ``` PlotStyle -> {m, s, s, s, s, s, s, s, s, s, s, s} ``` for the exact number of elements in the Table, which breaks the graphical presentation for any other number of elements. Is there a way I can specify a directive for the first *n* elements in the Table, and another for the remaining elements?
2019/01/13
[ "https://mathematica.stackexchange.com/questions/189424", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/62370/" ]
My usual approach would be to programmatically generate the set of styles: ``` fns = x^Range[5]; Plot[fns, {x, -1, 1}, PlotStyle -> Prepend[Table[Black, 4], Red]] ``` You could also use `Style` to override the setting coming from PlotStyle: ``` Plot[Evaluate[MapAt[Style[#, Red] &, fns, 1]], {x, -1, 1}, PlotStyle -> Black] ``` [![enter image description here](https://i.stack.imgur.com/9iPVD.png)](https://i.stack.imgur.com/9iPVD.png)
Here's a way with an `UpValue`: ``` repPlotStyle /: Plot[f_, {x_, a_, b_}, o1___, repPlotStyle[PlotStyle -> {s1 : Except[_List] ..., s2_List}], o2___] := With[{n = Length[Block[{x = (a + b)/2.}, f]]}, (* could use Length[f] *) With[{s = Take[ Join[{s1}, Apply[Join, Table[s2, {(n/ Length[s2]) + 1}]]], n]}, Plot[f, {x, a, b}, o1, PlotStyle -> s, o2] ]]; Plot[Evaluate[ChebyshevT[Range@11, x]], {x, -1, 1}, PlotStyle -> {Red, {Black}} // repPlotStyle] ``` [![enter image description here](https://i.stack.imgur.com/DIAZw.png)](https://i.stack.imgur.com/DIAZw.png) ``` Plot[Evaluate[ChebyshevT[Range@11, x]], {x, -1, 1}, PlotStyle -> {Red, Orange, {Black, Blue, Green}} // repPlotStyle] ``` [![enter image description here](https://i.stack.imgur.com/J0zKO.png)](https://i.stack.imgur.com/J0zKO.png) Or tweak an internal function in the same way (not guaranteed to work in versions other than 11.3): ``` Internal`InheritedBlock[ {Charting`padList}, Unprotect@Charting`padList; Charting`padList[{a_, b_List}, n_Integer] := Take[ Prepend[Apply[Join, Table[b, {(n/ Length[b]) + 1}]], a], n]; Protect@Charting`padList; Plot[Evaluate[ChebyshevT[Range@5, x]], {x, -1, 1}, PlotStyle -> {Red, {Black}}] ] (* same output as the first graphics *) ```
687,474
I am trying to create a link to destroy an entry in the DB using AJAX, but I also want it to function without JavaScript enabled. However, the following code ``` <%=link_to_remote "Delete", :update => "section_phone", :url => {:controller => "phone_numbers", :action => "destroy", :id => phone_number_display.id }, :href => url_for(:controller => "phone_numbers", :action => "destroy", :id => phone_number_display.id)%> ``` produces the output ``` <a href="#" onclick="new Ajax.Updater('section_phone', '/phone_numbers/destroy/1', {asynchronous:true, evalScripts:true, parameters:'authenticity_token=' + encodeURIComponent('b64efb643e49e9af5e2e195a90fd5a8b6b99ece2')}); return false;">Delete</a> ``` For some reason the url\_for does not work properly and the href tag is set to #. I do not know why this does not work, since I am not notified of any errors in the log. Does anyone know why this could be? Thank you
2009/03/26
[ "https://Stackoverflow.com/questions/687474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/72537/" ]
You're missing curly braces around the options{} hash, which :update and :url belong to, to separate them from the html\_options{} hash that :href belongs to. Try this: ``` <%=link_to_remote "Delete", {:update => "section_phone", :url => {:controller => "phone_numbers", :action => "destroy", :id => phone_number_display.id }}, :href => url_for(:controller => "phone_numbers", :action => "destroy", :id => phone_number_display.id)%> ``` That'll get the URL to show up as the href attribute of your link, but a GET request to your destroy action shouldn't delete it. You'll need something else (like what vrish88 suggests) so that you can make a GET request to the destroy action to get a form, then POST that form to actually delete the phone number.
I believe your looking for something like this: [RailsCasts - Destroy Without JavaScript](http://railscasts.com/episodes/77-destroy-without-javascript)
3,460,437
So lots of pedantic opinions rather than answers to [this question](https://stackoverflow.com/questions/595081/can-svn-handle-case-sensitivity-issues "this question"). We had a couple of Java packages accidentally checked in with initial capitalization (com.foo.PackageName), and then renamed them correctly (com.foo.packagename). Allow me to reiterate based on reading some of the responses. We have an existing "com.foo.packagename" that needs to stick around. We used to have a "com.foo.PackageName" that we renamed to "com.foo.packagename". Our svn server lives on a Linux box with a case-sensitive file system. We develop on Macs with "case-preserving" file systems. My command-line svn client seems to have handled the issue fine, and doesn't mention anything about files in com.foo.PackageName. The svn client built into NetBeans seems to think there are ghost files of "unknown" status in the two directories that used to be capitalized. I'm guessing the solution is to make the svn server think those directories never existed.... or perhaps some other solution? They were so short-lived under the wrong names that losing history on their contents when wrongly named wouldn't be a problem. Copying off the files, deleting the directories from svn, and then re-adding them didn't do anything for us. Also, rm -Rf on my local copy and then doing a fresh svn co still shows the ghost directories. The main pain point is that I can't do a commit at the project level, or it tries to commit ghost files and freaks out, so I have to give it a specific set of files to commit, or jump to the command line.
2010/08/11
[ "https://Stackoverflow.com/questions/3460437", "https://Stackoverflow.com", "https://Stackoverflow.com/users/409/" ]
First take a look at what's currently in your repository: ``` svn ls http://server/svn... ``` if com.foo.PackageName is still in there remove it with ``` svn rm http://server/svn.../com.foo.PackageName ``` After you have your repository in order I would suggest you do a fresh checkout of your working copy.
Try using `svn delete --force` on the "Packagename" directory from a case-sensitive o.s. and then commit this change.
38,284,300
Is it possible to find rows preceding and following a matching rows in a BigQuery query? For example if I do: ``` select textPayload from logs.logs_20160709 where textPayload like "%something%" ``` and say that I get these results back: ``` something A something B ``` How can I also show the 3 rows *preceding* and *following* the matching rows? Something like this: ``` some text 1 some text 2 some text 3 something A some text 4 some text 5 some text 6 some text 90 some text 91 some text 92 something B some text 93 some text 94 some text 95 ``` Is this possible and if so how?
2016/07/09
[ "https://Stackoverflow.com/questions/38284300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/398441/" ]
While on Zuma Beach - I was thinking of avoiding the CROSS JOIN in my original answer. Check below - it should be much cheaper, especially for a big set ``` SELECT textPayload FROM ( SELECT textPayload, SUM(match) OVER(ORDER BY ts ROWS BETWEEN 3 PRECEDING AND 3 FOLLOWING) AS flag FROM ( SELECT textPayload, ts, IF(textPayload CONTAINS 'something', 1, 0) AS match FROM YourTable ) ) WHERE flag > 0 ``` *Of course, another way to avoid the cross join is to use BigQuery Standard SQL. But still - the above solution, with no joins at all, is better than my original answer*
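The windowed-SUM trick can be sanity-checked outside BigQuery: a row survives when at least one row within three positions of it matches. A small Python sketch of the same flag logic (plain lists stand in for the table, ordered as `ts` would order it):

```python
def context_rows(rows, needle, width=3):
    # match[i] mirrors IF(textPayload CONTAINS 'something', 1, 0)
    match = [1 if needle in r else 0 for r in rows]
    # flag[i] mirrors SUM(match) OVER (ROWS BETWEEN width PRECEDING
    # AND width FOLLOWING): the match count in a 7-row window around i.
    flag = [
        sum(match[max(0, i - width): i + width + 1])
        for i in range(len(rows))
    ]
    return [r for r, f in zip(rows, flag) if f > 0]

rows = [f"some text {i}" for i in range(1, 10)]
rows[4] = "something A"
kept = context_rows(rows, "something")
```

Here `kept` is the matching row plus its three neighbors on each side, exactly the shape of output the question asks for.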
I think one piece is missing from your example - an extra field that defines the order - so I added a ts field for this in my answer. This means I assume your table has two fields: textPayload and ts. Try the query below. It should give you exactly what you need ``` SELECT all.textPayload FROM ( SELECT start, finish FROM ( SELECT textPayload, LAG(ts, 3) OVER(ORDER BY ts ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS start, LEAD(ts, 3) OVER(ORDER BY ts ROWS BETWEEN CURRENT ROW AND 3 FOLLOWING) AS finish FROM YourTable ) WHERE textPayload CONTAINS 'something' ) AS matches CROSS JOIN YourTable AS all WHERE all.ts BETWEEN matches.start AND matches.finish ``` Please note: depending on the type of your ts field, you might need to do some data casting in the query. Hope not
48,192,185
I am trying to do what should be a pretty straightforward insert statement in a postgres database. It is not working, but it's also not erroring out, so I don't know how to troubleshoot. This is the statement: ``` INSERT INTO my_table (col1, col2) select col1,col2 FROM my_table_temp; ``` There are around 200m entries in the temp table, and 50m entries in my\_table. The temp table has no index or constraints, but both columns in my\_table have btree indexes, and col1 has a foreign key constraint. I ran the first query for about 20 days. Last time I tried a similar insert of around 50m, it took 3 days, so I expected it to take a while, but not a month. Moreover, my\_table isn't getting longer. Queried 1 day apart, the following produces the same exact number. select count(\*) from my\_table; So it isn't inserting at all. But it also didn't error out. And looking at system resource usage, it doesn't seem to be doing much of anything at all, the process isn't drawing resources. Looking at other running queries, nothing else that I have permissions to view is touching either table, and I'm the only one who uses them. I'm not sure how to troubleshoot since there's no error. It's just not doing anything. Any thoughts about things that might be going wrong, or things to check, would be very helpful.
2018/01/10
[ "https://Stackoverflow.com/questions/48192185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7335256/" ]
For the sake of anyone stumbling onto this question in the future: After a lengthy discussion (see [linked discussion](https://chat.stackoverflow.com/rooms/162930/discussion-between-bma-and-reen) from the comments above), the issue turned out to be related to psycopg2 buffering the query in memory. Another useful note: inserting into a table with indices is slow, so it can help to remove them before bulk loads, and then add them again after.
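That last note (drop the indexes, bulk load, recreate) can be sketched like this; the snippet uses Python's stdlib `sqlite3` purely for illustration, but the same `DROP INDEX` / load / `CREATE INDEX` sequence applies to Postgres:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (col1 INTEGER, col2 TEXT)")
conn.execute("CREATE INDEX idx_col1 ON my_table (col1)")

rows = [(i, f"row-{i}") for i in range(10_000)]

# 1. Drop the index so the bulk load doesn't pay for per-row index updates.
conn.execute("DROP INDEX idx_col1")

# 2. Bulk load inside a single transaction.
with conn:
    conn.executemany("INSERT INTO my_table VALUES (?, ?)", rows)

# 3. Recreate the index once, in one pass over the data.
conn.execute("CREATE INDEX idx_col1 ON my_table (col1)")

print(conn.execute("SELECT COUNT(*) FROM my_table").fetchone()[0])
# 10000
```

Rebuilding an index in one pass is generally much cheaper than maintaining it row by row during the load.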
In my case it was a date format issue. I commented out the date attribute before inserting into the DB and it worked.
48,192,185
I am trying to do what should be a pretty straightforward insert statement in a Postgres database. It is not working, but it's also not erroring out, so I don't know how to troubleshoot. This is the statement:

```
INSERT INTO my_table (col1, col2)
SELECT col1, col2
FROM my_table_temp;
```

There are around 200m entries in the temp table and 50m entries in my\_table. The temp table has no indexes or constraints, but both columns in my\_table have btree indexes, and col1 has a foreign key constraint.

I ran the first query for about 20 days. Last time I tried a similar insert of around 50m, it took 3 days, so I expected it to take a while, but not a month. Moreover, my\_table isn't getting longer. Queried 1 day apart, the following produces the same exact number:

```
SELECT count(*) FROM my_table;
```

So it isn't inserting at all. But it also didn't error out. And looking at system resource usage, it doesn't seem to be doing much of anything at all; the process isn't drawing resources. Looking at other running queries, nothing else that I have permission to view is touching either table, and I'm the only one who uses them.

I'm not sure how to troubleshoot since there's no error. It's just not doing anything. Any thoughts about things that might be going wrong, or things to check, would be very helpful.
2018/01/10
[ "https://Stackoverflow.com/questions/48192185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7335256/" ]
For the sake of anyone stumbling onto this question in the future: After a lengthy discussion (see [linked discussion](https://chat.stackoverflow.com/rooms/162930/discussion-between-bma-and-reen) from the comments above), the issue turned out to be related to psycopg2 buffering the query in memory. Another useful note: inserting into a table with indices is slow, so it can help to remove them before bulk loads, and then add them again after.
In my case it was a `TRIGGER` on the same table I was updating and it failed without errors. Deactivated the trigger and the update worked flawlessly.
53,397,916
<http://tabulator.info/examples/4.1>

The Editable Data example above shows the use of a custom editor for the date field (the example in the link is DOB). Similar examples exist in earlier Tabulator versions, as well as here and on GitHub.

The JavaScript date picker that results works perfectly for most users, but not all (even if they are also on Chrome). So the alternate approach often attempted by the users is to try and enter the date directly into the cell. But unfortunately this is problematic -- in the same way it is with the linked example. Changing the month and day isn't too bad, but directly changing the year is very difficult.

Does anyone have a potential solution? I've explored everything from blur/focus/different formats/"flatpickr"/etc., but I'm coming up empty.
2018/11/20
[ "https://Stackoverflow.com/questions/53397916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9871424/" ]
The best approach to get full cross-browser support would be to create a custom editor that uses a 3rd party datepicker library, for example the [jQuery UI datepicker](https://jqueryui.com/datepicker/). The correct choice of date picker will depend on your needs and your existing frontend framework.

In the case of the jQuery datepicker, the custom editor could look something like this (this example uses the standard input editor; you will notice that in the ***onRendered*** function it turns the standard input into the jQuery datepicker):

```
var dateEditor = function(cell, onRendered, success, cancel, editorParams){
    var cellValue = cell.getValue(),
        input = document.createElement("input");

    input.setAttribute("type", "text");
    input.style.padding = "4px";
    input.style.width = "100%";
    input.style.boxSizing = "border-box";

    input.value = typeof cellValue !== "undefined" ? cellValue : "";

    onRendered(function(){
        input.style.height = "100%";
        $(input).datepicker(); //turn input into datepicker
        input.focus();
    });

    function onChange(e){
        if(((cellValue === null || typeof cellValue === "undefined") && input.value !== "") || input.value != cellValue){
            success(input.value);
        }else{
            cancel();
        }
    }

    //submit new value on blur or change
    input.addEventListener("change", onChange);
    input.addEventListener("blur", onChange);

    //submit new value on enter, cancel on escape
    input.addEventListener("keydown", function(e){
        switch(e.keyCode){
            case 13:
                success(input.value);
                break;
            case 27:
                cancel();
                break;
        }
    });

    return input;
}
```

You can then add this to a column in the column definition:

```
{title:"Date", field:"date", editor:dateEditor}
```
I couldn't get what Oli suggested to work. Then again, I might be missing something simple, as I am much more of a novice. After a lot of trial and error, this is the hack-style approach I ended up creating -- it builds upon Oli's onRendered suggestion but then uses the datepicker's onSelect the rest of the way.

The good: the datepicker comes up regardless of where in the cell the user clicks, so the user is less tempted to try and enter the date manually. If the user happens to try and enter it manually, they can do so.

The less-than-ideal: if the user does enter it manually, the datepicker won't go away until he/she clicks elsewhere. But not a showstopper.

```
//Date Editor//
var dateEditor = function(cell, onRendered, success, cancel, editorParams){
    var cellValue = cell.getValue(),
        input = document.createElement("input");

    input.setAttribute("type", "text");
    input.style.padding = "4px";
    input.style.width = "100%";
    input.style.boxSizing = "border-box";

    input.value = typeof cellValue !== "undefined" ? cellValue : "";

    onRendered(function(){
        $(input).datepicker({
            onSelect: function(dateStr) {
                var dateselected = $(this).datepicker('getDate');
                var cleandate = (moment(dateselected, "YYYY-MM-DD").format("MM/DD/YYYY"));
                $(input).datepicker("destroy");
                cell.setValue(cleandate, true);
                cancel();
            },
        });
        input.style.height = "100%";
    });

    return input;
};
```
53,397,916
<http://tabulator.info/examples/4.1>

The Editable Data example above shows the use of a custom editor for the date field (the example in the link is DOB). Similar examples exist in earlier Tabulator versions, as well as here and on GitHub.

The JavaScript date picker that results works perfectly for most users, but not all (even if they are also on Chrome). So the alternate approach often attempted by the users is to try and enter the date directly into the cell. But unfortunately this is problematic -- in the same way it is with the linked example. Changing the month and day isn't too bad, but directly changing the year is very difficult.

Does anyone have a potential solution? I've explored everything from blur/focus/different formats/"flatpickr"/etc., but I'm coming up empty.
2018/11/20
[ "https://Stackoverflow.com/questions/53397916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9871424/" ]
The best approach to get full cross-browser support would be to create a custom editor that uses a 3rd party datepicker library, for example the [jQuery UI datepicker](https://jqueryui.com/datepicker/). The correct choice of date picker will depend on your needs and your existing frontend framework.

In the case of the jQuery datepicker, the custom editor could look something like this (this example uses the standard input editor; you will notice that in the ***onRendered*** function it turns the standard input into the jQuery datepicker):

```
var dateEditor = function(cell, onRendered, success, cancel, editorParams){
    var cellValue = cell.getValue(),
        input = document.createElement("input");

    input.setAttribute("type", "text");
    input.style.padding = "4px";
    input.style.width = "100%";
    input.style.boxSizing = "border-box";

    input.value = typeof cellValue !== "undefined" ? cellValue : "";

    onRendered(function(){
        input.style.height = "100%";
        $(input).datepicker(); //turn input into datepicker
        input.focus();
    });

    function onChange(e){
        if(((cellValue === null || typeof cellValue === "undefined") && input.value !== "") || input.value != cellValue){
            success(input.value);
        }else{
            cancel();
        }
    }

    //submit new value on blur or change
    input.addEventListener("change", onChange);
    input.addEventListener("blur", onChange);

    //submit new value on enter, cancel on escape
    input.addEventListener("keydown", function(e){
        switch(e.keyCode){
            case 13:
                success(input.value);
                break;
            case 27:
                cancel();
                break;
        }
    });

    return input;
}
```

You can then add this to a column in the column definition:

```
{title:"Date", field:"date", editor:dateEditor}
```
I use the datepicker from [Bootstrap](https://bootstrap-datepicker.readthedocs.io/en/latest/); this is my code:

```
var dateEditor = function (cell, onRendered, success, cancel, editorParams) {
    //create and style input
    var editor = $("<input type='text'/>");

    // datepicker
    editor.datepicker({
        language: 'ja',
        format: 'yyyy-mm-dd',
        autoclose: true,
    }).on('changeDate', function() {
        if(editorParams != 'row'){
            editor.trigger('keyup');
        }else{
            editor.trigger('change');
        }
    });

    editor.css({
        "padding": "3px",
        "width": "100%",
        "height": "100%",
        "box-sizing": "border-box",
    });

    editor.val(cell.getValue());

    onRendered(function(){
        editor.focus();
    });

    editor.on("blur", function (e) {
        e.preventDefault();
        if(editor.val() === '') {
            success(cell.getValue());
        } else {
            //submit new value on change
            editor.on("change", function (e) {
                success(editor.val());
            });
        }
    });

    return editor;
}
```
2,498,022
> $$
> \sum\_{k = 1}^\infty\sin\left(\frac1k + k\pi\right)
> $$

I was thinking of using the alternating series test, but I am not sure how to prove that it is alternating or decreasing.
2017/10/31
[ "https://math.stackexchange.com/questions/2498022", "https://math.stackexchange.com", "https://math.stackexchange.com/users/448610/" ]
First note that $\sin\left(\frac1k + k\pi\right) = \cos(k\pi)\sin\left(\frac1k\right) = (-1)^k\sin\left(\frac1k\right)$, so the series is

$$\sum\_{k=1}^\infty (-1)^k\sin\Big(\frac{1}{k}\Big) = \sum\_{k=1}^\infty x\_k \quad\text{with } x\_k = (-1)^k\sin\Big(\frac{1}{k}\Big).$$

Because $\sin(x)$ is increasing when $0 \leq x \leq \frac{\pi}{2}$,

$$\frac{1}{k+1} \leq \frac{1}{k} \Rightarrow \sin\Big(\frac{1}{k+1}\Big) \leq \sin\Big(\frac{1}{k}\Big),$$

so we have $|x\_{k+1}| \leq |x\_k|$. We also have

$$\lim\limits\_{k\to\infty}\sin\Big(\frac{1}{k}\Big) = 0.$$

By the alternating series test, we conclude that $\sum\_{k=1}^\infty x\_k$ converges.
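Not part of the proof, but the two Leibniz hypotheses (terms shrinking in absolute value, successive partial sums closing in on a limit) can be sanity-checked numerically. A quick sketch in Python:

```python
import math

terms = [(-1) ** k * math.sin(1 / k) for k in range(1, 2001)]

# |x_{k+1}| <= |x_k|: the absolute values are non-increasing.
decreasing = all(abs(terms[i + 1]) <= abs(terms[i]) for i in range(len(terms) - 1))

# The gap between consecutive partial sums is |x_{k+1}|, so it shrinks too.
partial = []
s = 0.0
for t in terms:
    s += t
    partial.append(s)
gaps = [abs(partial[i + 1] - partial[i]) for i in range(len(partial) - 1)]

print(decreasing, gaps[0] > gaps[-1])
# True True
```

The partial sums oscillate around the limit with ever-smaller amplitude, exactly as the alternating series test promises.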
The given series is $\sum\_{k\geq 1}(-1)^{k}\sin\tfrac{1}{k}$ which is conditionally convergent by [Leibniz' test](https://en.wikipedia.org/wiki/Alternating_series_test), *sic et simpliciter*. By the inverse Laplace transform, such series equals $$ \int\_{0}^{+\infty}\sum\_{k\geq 1}(-1)^k \sum\_{n\geq 0}\frac{(-1)^n x^{2n}}{(2n)!(2n+1)!}e^{-kx}\,dx = -\int\_{0}^{+\infty}\frac{\text{Ke}(x)}{e^x+1}\,dx $$ where $\text{Ke}$ is related to Kelvin and Bessel functions. An alternative representation is given by $$ \sum\_{n\geq 0}\sum\_{k\geq 1}\frac{(-1)^n(-1)^{k+1}}{(2n+1)! k^{2n+1}}=-\sum\_{n\geq 0}\frac{(1-4^{-n})(-1)^n\,\zeta(2n+1)}{(2n+1)!} $$ which is equally well suited for numerical purposes.
17,404,885
I am stuck and hope someone has an easy solution I've not thought about :-)

1. I have a 1040px centered div for page content, menu and footer.
2. The header image shall have the same left margin as the content div AND grow to the right side (for those with higher screen resolutions).

Is there any way to do this using CSS? I know I could calculate the left margin of the content box with JavaScript and set the header margin dynamically, but I would prefer a CSS solution.

Regards, Martin
2013/07/01
[ "https://Stackoverflow.com/questions/17404885", "https://Stackoverflow.com", "https://Stackoverflow.com/users/649749/" ]
You just have to correct your `Repeater` declaration. After that there will be no need to handle the `ItemDataBound` event at all:

```
<asp:Repeater ID="Repeater1" runat="server">
    <HeaderTemplate>
        <table>
    </HeaderTemplate>
    <ItemTemplate>
        <tr>
            <td>
                <asp:Label ID="Label6" runat="server" Text='<%#Eval("Name")%>'></asp:Label>
                <asp:Label ID="Label5" runat="server" Text='<%#Eval("Surname")%>'></asp:Label>
            </td>
            <td>
            </td>
        </tr>
    </ItemTemplate>
    <FooterTemplate>
        </table>
    </FooterTemplate>
</asp:Repeater>
```
To get an object of your class back in the DataBound event, you just need to cast `e.Item.DataItem` to your class:

```
protected void Repeater1_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
    {
        var result = (SearchResult)e.Item.DataItem;
    }
}
```
11,106,441
I created a `RadioGroup` layout in XML. Now I'm creating it dynamically:

```java
RadioGroup segmentRadioGroup = new RadioGroup(parentActivity);
inflater.inflate(R.layout.segm_btn_stores, segmentRadioGroup);

segmentRadioGroup.setOnCheckedChangeListener(new RadioGroup.OnCheckedChangeListener() {
    @Override
    public void onCheckedChanged(RadioGroup radioGroup, int i) {
        showMap();
    }
});
```

Oh, it doesn't work! `showMap` is not firing! But... wait. What if we do it this way?

```java
RadioGroup segmentRadioGroup = (RadioGroup) inflater.inflate(R.layout.segm_btn_stores, null);
```

It... works. Why? `segmentRadioGroup` is a `RadioGroup` in both cases. And if I pass a `segmentRadioGroup` created beforehand instead of `null`, it won't work either.
2012/06/19
[ "https://Stackoverflow.com/questions/11106441", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1397218/" ]
```
RadioGroup segmentRadioGroup = new RadioGroup(parentActivity);
```

In the above line, you create an 'empty' `RadioGroup`. Then...

```
inflater.inflate(R.layout.segm_btn_stores, segmentRadioGroup);
```

...in the above line, you inflate another `RadioGroup` from the layout file, and it is then 'added' to the first `RadioGroup`. The logic here seems to be that as `RadioGroup` extends (and effectively IS) `LinearLayout`, it is legal for a `RadioGroup` to contain another `RadioGroup`.

```
segmentRadioGroup.setOnCheckedChangeListener(new RadioGroup.OnCheckedChangeListener() { ... });
```

Finally, on the line above, you set the listener on the outer / parent `RadioGroup` and not on the inner `RadioGroup`. As such, the `onCheckedChanged(...)` method is never called for the inner `RadioGroup`. Well, that's the only logic I can come up with.

With your second approach...

```
RadioGroup segmentRadioGroup = (RadioGroup) inflater.inflate(R.layout.segm_btn_stores, null);
```

...you are simply inflating one `RadioGroup` without an outer parent layout, because you pass `null` as the second parameter.
This should work (note the cast, since `inflate` returns a plain `View`):

```
RadioGroup segmentRadioGroup = (RadioGroup) inflater.inflate(R.layout.segm_btn_stores, null);

segmentRadioGroup.setOnCheckedChangeListener(new RadioGroup.OnCheckedChangeListener() {
    @Override
    public void onCheckedChanged(RadioGroup radioGroup, int i) {
        showMap();
    }
});

// add to the parent layout here..
```
62,100,880
I have written a simple function in Python which aims to find whether, given two arrays `a` and `b`, one can be obtained from the other by swapping at most one pair of elements in one of the arrays. This is my function:

```
def areSimilar(a, b):
    test = 0
    for i in range(len(b)):
        for j in range(len(b)):
            b2 = b
            b2[i] = b[j]
            b2[j] = b[i]
            if a == b2:
                test = 1
    return (test == 1)
```

The issue is that upon inspecting `b`, it has changed, even though I don't actually perform any calculations on `b` - what's going on!!??
2020/05/30
[ "https://Stackoverflow.com/questions/62100880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5429273/" ]
(**EDITED**: to better address the second point)

There are two issues with your code:

* When you do `b2 = b` this just creates another reference to the underlying object. If `b` is mutable, any change made to `b2` will be reflected in `b` too.
* When a single swap suffices there is no need to test further, but if you keep on looping the test will be successful again with `i` and `j` swapped, so the `test` condition is hit either never or (at least -- depending on the amount of duplicates) twice. While this would not lead to incorrect results, it would normally be regarded as an error in the logic.

To fix your code, you could just create a copy of `b`. Assuming that by Python arrays you actually mean Python `list`s, one way of doing it would be to create a new `list` every time by replacing `b2 = b` with `b2 = list(b)`.

A more efficient approach is to perform the swapping on `b` itself (and swap back):

```
def are_similar(a, b):
    for i in range(len(b)):
        for j in range(len(b)):
            b[i], b[j] = b[j], b[i]
            if a == b:
                b[i], b[j] = b[j], b[i]  # swap back
                return True
            else:
                b[i], b[j] = b[j], b[i]  # swap back
    return False


print(are_similar([1, 1, 2, 3], [1, 2, 1, 3]))  # True
print(are_similar([1, 1, 2, 3], [3, 2, 1, 1]))  # False
```

---

By contrast, you can see how inefficient (while correct) the copying-based approach is:

```
def are_similar2(a, b):
    for i in range(len(b)):
        for j in range(len(b)):
            b2 = list(b)
            b2[i] = b[j]
            b2[j] = b[i]
            if a == b2:
                return True
    return False


print(are_similar2([1, 1, 2, 3], [1, 2, 1, 3]))  # True
print(are_similar2([1, 1, 2, 3], [3, 2, 1, 1]))  # False
```

with much worse timings, even on relatively small inputs:

```
a = [1, 1, 2, 3] + list(range(100))
b = [1, 2, 1, 3] + list(range(100))

%timeit are_similar(a, b)
# 10000 loops, best of 3: 22.9 µs per loop
%timeit are_similar2(a, b)
# 10000 loops, best of 3: 73.9 µs per loop
```
I would go with [Sadap](https://stackoverflow.com/users/8733066/sadap)'s code, but if you want to copy, use:

```
import copy

def areSimilar(a, b):
    test = 0
    for i in range(len(b)):
        for j in range(len(b)):
            b2 = copy.deepcopy(b)
            b2[i] = copy.deepcopy(b[j])
            b2[j] = copy.deepcopy(b[i])
            if a == b2:
                test = 1
    if test == 1:
        return True
    else:
        return False
```
72,544,665
I've got a comma-separated list `var arr = [1,2,3,4]` and want it to be:

```
var new_arr = [
  {x:1, y:2},
  {x:3, y:4}
]
```

Struggling with how to get the key/value change done.
2022/06/08
[ "https://Stackoverflow.com/questions/72544665", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4120417/" ]
Man, Java developers are so deceptively verbose... In this (pile of garbage) log, what's really important is:

```
* What went wrong:
Execution failed for task ':launcher:packageRelease'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
   > com.android.ide.common.signing.KeytoolException: Failed to read key AndroidDebugKey from store
     "C:\Users\Samprit Hazra\.android\debug.keystore": Invalid keystore format
```

*Failed to read key AndroidDebugKey from store "C:\Users\Samprit Hazra\.android\debug.keystore": Invalid keystore format*

You might want to take a look at your file using <http://keystore-explorer.org/> and see if something is wrong with it.
I needed to create a custom Keystore.
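For reference, the grouping asked for in the question (consecutive values turned into `{x, y}` pairs) is a stride-2 zip. A sketch in Python, with dicts standing in for the JavaScript objects:

```python
arr = [1, 2, 3, 4]

# Pair each even-index element with the odd-index element that follows it.
new_arr = [{"x": a, "y": b} for a, b in zip(arr[::2], arr[1::2])]

print(new_arr)  # [{'x': 1, 'y': 2}, {'x': 3, 'y': 4}]
```

The same idea translates directly to JavaScript by looping over the array two indices at a time.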
44,553,077
I have an animated background from Codepen (link below). I cannot get my text to float in front of the background, however. I haven't included the JS as I don't think it will help.

Codepen: <https://codepen.io/zessx/pen/ZGBMXZ>

Screenshot: <https://gyazo.com/37568fdb9681e4c9d67d4d88fc7658ba>

I have tried using z-index, and using an absolute position isn't helping either.

`Index.html` (note: I have removed code that is irrelevant)

```
<!DOCTYPE html>
<html>
<head>
    <link rel="stylesheet" href="https://www.w3schools.com/w3css/4/w3.css">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="css/style.css">
    <script type="text/javascript" src="js/index.js"></script>
</head>
<body>
    <div id="bg"></div>
    <div class="content w3-content" style="width: 80%;margin-left: 10%;">
        <h1 class="font w3-jumbo w3-text-black">MOLLY URS</h1>
    </div>
</body>
</html>
```

`style.css`

```
.bg {
    z-index: 1;
    background: -webkit-radial-gradient(center ellipse, #721B94 0%, #210627 100%) no-repeat center center fixed;
    background: radial-gradient(ellipse at center, #721B94 0%, #210627 100%);
    overflow: hidden;
}

body {
    margin: 0;
}

.content {
    position: absolute;
    z-index: 100;
    overflow: hidden;
}
```
2017/06/14
[ "https://Stackoverflow.com/questions/44553077", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7927952/" ]
Giving the text container a fixed position will solve this issue:

```
<div class="content w3-content" style="width: 80%; margin-left: 10%; position: fixed">
    <h1 class="font w3-jumbo w3-text-black">MOLLY URS</h1>
</div>
```

I have a working plunker here: [[link]](https://plnkr.co/edit/6CMBuJ2CteEF5FQr9Mkr?p=preview)
`position: absolute` should in most cases be paired with `top`, `bottom`, `left`, and/or `right`. You are missing `top:0` or similar. You shouldn't need to change `z-index`. `.content` comes later in the DOM, so it'll be "painted" above `#bg`.
44,553,077
I have an animated background from Codepen (link below). I cannot get my text to float in front of the background, however. I haven't included the JS as I don't think it will help.

Codepen: <https://codepen.io/zessx/pen/ZGBMXZ>

Screenshot: <https://gyazo.com/37568fdb9681e4c9d67d4d88fc7658ba>

I have tried using z-index, and using an absolute position isn't helping either.

`Index.html` (note: I have removed code that is irrelevant)

```
<!DOCTYPE html>
<html>
<head>
    <link rel="stylesheet" href="https://www.w3schools.com/w3css/4/w3.css">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="css/style.css">
    <script type="text/javascript" src="js/index.js"></script>
</head>
<body>
    <div id="bg"></div>
    <div class="content w3-content" style="width: 80%;margin-left: 10%;">
        <h1 class="font w3-jumbo w3-text-black">MOLLY URS</h1>
    </div>
</body>
</html>
```

`style.css`

```
.bg {
    z-index: 1;
    background: -webkit-radial-gradient(center ellipse, #721B94 0%, #210627 100%) no-repeat center center fixed;
    background: radial-gradient(ellipse at center, #721B94 0%, #210627 100%);
    overflow: hidden;
}

body {
    margin: 0;
}

.content {
    position: absolute;
    z-index: 100;
    overflow: hidden;
}
```
2017/06/14
[ "https://Stackoverflow.com/questions/44553077", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7927952/" ]
Giving the text container a fixed position will solve this issue:

```
<div class="content w3-content" style="width: 80%; margin-left: 10%; position: fixed">
    <h1 class="font w3-jumbo w3-text-black">MOLLY URS</h1>
</div>
```

I have a working plunker here: [[link]](https://plnkr.co/edit/6CMBuJ2CteEF5FQr9Mkr?p=preview)
Place the HTML text first and the bg last, then attach `position: fixed;` to the text container.
655,068
Using CakePHP, I have a generic address table which I want to link to customers, vendors, and contacts. Most of the tables only have a 1-to-1 relationship, but I want my customers table to have 2.

Perhaps for clarification: I have a customers table

```
id, int
mailing_address_id, int
billing_address_id, int
```

and an addresses table

```
id, int
addr, varchar
city, varchar
etc....
```

Now I know I could put a `customer_id` in the addresses table. But I don't want to do that, because I have a vendors table, a contacts table, and other tables that are all going to use the addresses table; the `customer_id` would not really be relevant to those other tables.

I'd like the Customer model to automatically link in the two addresses.
2009/03/17
[ "https://Stackoverflow.com/questions/655068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3800/" ]
Follow Travis Leleu's suggestion, because it's a good idea regardless. Then add an enum field to the `Addresses` table called `table_id`. The value of the `table_id` field could be "customer", "vendor", "contact", and whatever other tables would link to the addresses table. Also include a single foreign key called `entity_id`. This foreign key would be the primary key of the corresponding customer, vendor, or whatever.

When you, for example, want the billing address for a certain vendor, add in the `$conditions` array:

```
'Address.entity_id' => '123456',
'Address.table_id' => 'vendor',
'Address.type' => 'billing'
```

With this set-up you could have as many tables as you want referencing the `Addresses` table.
Jack, Perhaps I do not understand the question correctly, so I apologize if I've misinterpreted. I would probably just make an enum field in the address table to record what type of address it is (billing or mailing). Then you can use a direct $hasMany relationship between your customers model and your address model. When you perform a query (e.g. you just want the billing address for a given customer), just specify in the $conditions array that 'Address.type'=>'billing'. HTH, Travis
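The `table_id` + `entity_id` + `type` lookup described above can be sketched with a tiny schema; the stdlib `sqlite3` module here stands in for the real database, and all names and values are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE addresses (
    id INTEGER PRIMARY KEY,
    table_id  TEXT NOT NULL,     -- 'customer', 'vendor', 'contact', ...
    entity_id INTEGER NOT NULL,  -- PK of the row in that table
    type      TEXT NOT NULL,     -- 'billing' or 'mailing'
    addr TEXT,
    city TEXT
);
""")

conn.execute(
    "INSERT INTO addresses (table_id, entity_id, type, addr, city) "
    "VALUES ('vendor', 123456, 'billing', '1 Main St', 'Springfield')")
conn.execute(
    "INSERT INTO addresses (table_id, entity_id, type, addr, city) "
    "VALUES ('customer', 123456, 'mailing', '9 Elm St', 'Shelbyville')")

# "The billing address for vendor 123456" -- the three conditions together.
row = conn.execute(
    "SELECT addr, city FROM addresses WHERE table_id=? AND entity_id=? AND type=?",
    ("vendor", 123456, "billing"),
).fetchone()
print(row)  # ('1 Main St', 'Springfield')
```

Note how the same `entity_id` can appear for different `table_id` values without colliding, which is exactly what lets many tables share one addresses table.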
655,068
Using CakePHP, I have a generic address table which I want to link to customers, vendors, and contacts. Most of the tables only have a 1-to-1 relationship, but I want my customers table to have 2.

Perhaps for clarification: I have a customers table

```
id, int
mailing_address_id, int
billing_address_id, int
```

and an addresses table

```
id, int
addr, varchar
city, varchar
etc....
```

Now I know I could put a `customer_id` in the addresses table. But I don't want to do that, because I have a vendors table, a contacts table, and other tables that are all going to use the addresses table; the `customer_id` would not really be relevant to those other tables.

I'd like the Customer model to automatically link in the two addresses.
2009/03/17
[ "https://Stackoverflow.com/questions/655068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3800/" ]
I like [Kyle's](https://stackoverflow.com/questions/655068/how-do-i-use-multiple-foreign-keys-in-one-table-referencing-another-table-in-cake/684105#684105) and [Travis's](https://stackoverflow.com/questions/655068/how-do-i-use-multiple-foreign-keys-in-one-table-referencing-another-table-in-cake/655407#655407) suggestions, but you can also point the foreign keys the other direction. If you want your addresses to be independent and have several other tables reference them, then you should be able to define two [belongsTo relationships](http://book.cakephp.org/view/78/Associations-Linking-Models-Together#belongsTo-81) from customer to address. Each relationship then has to specify which field to use as the foreign key.

```
<?php
class Customer extends AppModel {
    var $name = 'Customer';
    var $belongsTo = array(
        'BillingAddress' => array(
            'className'  => 'Address',
            'foreignKey' => 'billing_address_id'
        ),
        'MailingAddress' => array(
            'className'  => 'Address',
            'foreignKey' => 'mailing_address_id'
        )
    );
}
?>
```

However, both of these solutions leave you open to orphaned addresses, because the foreign key constraint isn't really correct. The simplest solution might be to just add a bunch of optional foreign keys to the address table, like `customer_id`, `company_id`, `employee_id`, and so on. Then you've got a standard arc pattern, and the keys are pointing the right direction, so you get correct referential integrity.

Another solution is to design a more general entity table that has address as a child table. Then customer, company, and employee are all subtypes of the entity table. For more details on that style of schema, I recommend [Data Model Patterns](https://rads.stackoverflow.com/amzn/click/com/0932633293) by David Hay.
Jack, Perhaps I do not understand the question correctly, so I apologize if I've misinterpreted. I would probably just make an enum field in the address table to record what type of address it is (billing or mailing). Then you can use a direct $hasMany relationship between your customers model and your address model. When you perform a query (e.g. you just want the billing address for a given customer), just specify in the $conditions array that 'Address.type'=>'billing'. HTH, Travis
655,068
Using CakePHP, I have a generic address table which I want to link to customers, vendors, and contacts. Most of the tables only have a 1-to-1 relationship, but I want my customers table to have 2.

Perhaps for clarification: I have a customers table

```
id, int
mailing_address_id, int
billing_address_id, int
```

and an addresses table

```
id, int
addr, varchar
city, varchar
etc....
```

Now I know I could put a `customer_id` in the addresses table. But I don't want to do that, because I have a vendors table, a contacts table, and other tables that are all going to use the addresses table; the `customer_id` would not really be relevant to those other tables.

I'd like the Customer model to automatically link in the two addresses.
2009/03/17
[ "https://Stackoverflow.com/questions/655068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3800/" ]
Follow Travis Leleu's suggestion, because it's a good idea regardless. Then add an enum field to the `Addresses` table called `table_id`. The value of the `table_id` field could be "customer", "vendor", "contact", and whatever other tables would link to the addresses table. Also include a single foreign key called `entity_id`. This foreign key would be the primary key of the corresponding customer, vendor, or whatever.

When you, for example, want the billing address for a certain vendor, add in the `$conditions` array:

```
'Address.entity_id' => '123456',
'Address.table_id' => 'vendor',
'Address.type' => 'billing'
```

With this set-up you could have as many tables as you want referencing the `Addresses` table.
If a customer HASMANY addresses, use the has-many association: <http://book.cakephp.org/view/82/hasMany>

If a customer HASONE (and only one) address, use the has-one association: <http://book.cakephp.org/view/80/hasOne>

If customers could possibly share the same address (same record), you will need to use HABTM with the join table you alluded to.

More info about Cake's associations: <http://book.cakephp.org/view/78/Associations-Linking-Models-Together>
655,068
Using CakePHP, I have a generic address table which I want to link to customers, vendors, and contacts. Most of the tables only have a 1-to-1 relationship, but I want my customers table to have 2.

Perhaps for clarification: I have a customers table

```
id, int
mailing_address_id, int
billing_address_id, int
```

and an addresses table

```
id, int
addr, varchar
city, varchar
etc....
```

Now I know I could put a `customer_id` in the addresses table. But I don't want to do that, because I have a vendors table, a contacts table, and other tables that are all going to use the addresses table; the `customer_id` would not really be relevant to those other tables.

I'd like the Customer model to automatically link in the two addresses.
2009/03/17
[ "https://Stackoverflow.com/questions/655068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3800/" ]
I like [Kyle's](https://stackoverflow.com/questions/655068/how-do-i-use-multiple-foreign-keys-in-one-table-referencing-another-table-in-cake/684105#684105) and [Travis's](https://stackoverflow.com/questions/655068/how-do-i-use-multiple-foreign-keys-in-one-table-referencing-another-table-in-cake/655407#655407) suggestions, but you can also point the foreign keys the other direction. If you want your addresses to be independent and have several other tables reference them, then you should be able to define two [belongsTo relationships](http://book.cakephp.org/view/78/Associations-Linking-Models-Together#belongsTo-81) from customer to address. Each relationship then has to specify which field to use as the foreign key.

```
<?php
class Customer extends AppModel {
    var $name = 'Customer';
    var $belongsTo = array(
        'BillingAddress' => array(
            'className'  => 'Address',
            'foreignKey' => 'billing_address_id'
        ),
        'MailingAddress' => array(
            'className'  => 'Address',
            'foreignKey' => 'mailing_address_id'
        )
    );
}
?>
```

However, both of these solutions leave you open to orphaned addresses, because the foreign key constraint isn't really correct. The simplest solution might be to just add a bunch of optional foreign keys to the address table, like `customer_id`, `company_id`, `employee_id`, and so on. Then you've got a standard arc pattern, and the keys are pointing the right direction, so you get correct referential integrity.

Another solution is to design a more general entity table that has address as a child table. Then customer, company, and employee are all subtypes of the entity table. For more details on that style of schema, I recommend [Data Model Patterns](https://rads.stackoverflow.com/amzn/click/com/0932633293) by David Hay.
If a customer HASMANY addresses, use the has-many association: <http://book.cakephp.org/view/82/hasMany> If a customer HASONE (and only one) address, use the has-one association: <http://book.cakephp.org/view/80/hasOne> If customers could possibly share the same address (same record), you will need to use HABTM with the join table you alluded to. More info about Cake's associations: <http://book.cakephp.org/view/78/Associations-Linking-Models-Together>
655,068
Using CakePHP, I have a generic address table which I want to link to customers, vendors, and contacts. Most of the tables only have a one-to-one relationship, but I want my customers table to have two. Perhaps for clarification: I have a customers table ``` id, int mailing_address_id, int billing_address_id, int ``` and an addresses table ``` id, int addr, varchar city, varchar etc.... ``` Now I know I could put a `customer_id` in the addresses table, but I don't want to do that because I have a vendors table, a contacts table, and other tables that are all going to use the addresses table. The `customer_id` would not really be relevant to those other tables. I'd like the Customer model to automatically link in the two addresses.
2009/03/17
[ "https://Stackoverflow.com/questions/655068", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3800/" ]
Follow Travis Leleu's suggestion - because it's a good idea, regardless. Then add an enum field to the `Addresses` table called `table_id`. The value of the `table_id` field could be "customer", "vendor", "contact", and whatever other tables would link to the addresses table. Also include a single foreign key called `entity_id`. This foreign key would be the primary key of the corresponding customer, vendor, or whatever. When you, for example, want the billing address for a certain vendor, add in the `$conditions` array: ``` 'Address.entity_id'=>'123456' 'Address.table_id'=>'vendor' 'Address.type'=>'billing' ``` With this set-up you could have as many tables as you want referencing the `Addresses` table.
I like [Kyle's](https://stackoverflow.com/questions/655068/how-do-i-use-multiple-foreign-keys-in-one-table-referencing-another-table-in-cake/684105#684105) and [Travis's](https://stackoverflow.com/questions/655068/how-do-i-use-multiple-foreign-keys-in-one-table-referencing-another-table-in-cake/655407#655407) suggestions, but you can also put the foreign keys the other direction. If you want your addresses to be independent and have several other tables reference them, then you should be able to define two [belongsTo relationships](http://book.cakephp.org/view/78/Associations-Linking-Models-Together#belongsTo-81) from customer to address. Each relationship then has to specify which field to use as the foreign key. ``` <?php class Customer extends AppModel { var $name = 'Customer'; var $belongsTo = array( 'BillingAddress' => array( 'className' => 'Address', 'foreignKey' => 'billing_address_id' ), 'MailingAddress' => array( 'className' => 'Address', 'foreignKey' => 'mailing_address_id' ) ); } ?> ``` However, both of these solutions leave you open to orphaned addresses, because the foreign key constraint isn't really correct. The simplest solution might be to just add a bunch of optional foreign keys to the address table, like `customer_id`, `company_id`, `employee_id`, and so on. Then you've got a standard arc pattern, and the keys are pointing the right direction, so you get correct referential integrity. Another solution is to design a more general entity table that has address as a child table. Then customer, company, and employee are all subtypes of the entity table. For more details on that style of schema, I recommend [Data Model Patterns](https://rads.stackoverflow.com/amzn/click/com/0932633293) by David Hay.
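The "arc pattern" of optional foreign keys described above can be sketched quickly with Python's built-in sqlite3 module. This is only an illustration of the schema idea; the table and column names beyond those in the question (`vendors`, `type`, the CHECK constraint) are assumptions for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE vendors   (id INTEGER PRIMARY KEY, name TEXT);
-- Arc pattern: one optional foreign key per owning table.
-- The CHECK enforces that exactly one owner is set per address row.
CREATE TABLE addresses (
    id          INTEGER PRIMARY KEY,
    addr        TEXT,
    city        TEXT,
    type        TEXT,                               -- e.g. 'billing' or 'mailing'
    customer_id INTEGER REFERENCES customers(id),
    vendor_id   INTEGER REFERENCES vendors(id),
    CHECK ((customer_id IS NULL) + (vendor_id IS NULL) = 1)
);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute(
    "INSERT INTO addresses VALUES (1, '1 Main St', 'Springfield', 'billing', 1, NULL)"
)
# Looking up a customer's billing address only needs the owning FK plus the type:
row = conn.execute(
    "SELECT addr FROM addresses WHERE customer_id = 1 AND type = 'billing'"
).fetchone()
print(row[0])  # → 1 Main St
```

The keys point from address to owner, so referential integrity holds and no orphaned-address problem arises, at the cost of one nullable column per owning table.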
42,551,867
I am working on vtiger CRM. For this CRM I need to develop a plugin which, after installation, is accessible through the organization or leads detail view. I have successfully reached this level of my plugin. For linking my module I have used the setRelatedList API, and my code is ``` include_once('vtlib/Vtiger/Module.php'); $moduleInstance = Vtiger_Module::getInstance('Payslip'); $accountsModule = Vtiger_Module::getInstance('Accounts'); $relationLabel = 'Accounts'; $moduleInstance->setRelatedList( $accountsModule, $relationLabel, Array('ADD','SELECT') ); ``` My plugin's name is mailAddon, and it is showing in the sidebar of the built-in detail module. Now the task is: when I click on my plugin, it should fetch data according to my requirements from my defined table. I just want to know how to extend this behavior of vtiger. Thanks
2017/03/02
[ "https://Stackoverflow.com/questions/42551867", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7538880/" ]
Use SQLite to store the data...
SQLite is a binary format that is relatively fast. Use SQLite or Realm. Imho, the Realm engine (based on SQLite) might be better in your case (it is faster than pure SQLite). [Realm vs Sqlite for mobile development](https://stackoverflow.com/questions/37151580/realm-vs-sqlite-for-mobile-development) Nowadays there is also the Firebase Database from Google. Imho, you can use this format too because it has both server and offline interaction. Both SQLite formats must be stored in the APK file, and in order to work inside the application the sqlite file has to be copied from the assets (or raw) directory into the (writable) database directory. That's why all data would be duplicated(!). The Firebase Database does not have this disadvantage. <https://firebase.google.com/docs/database/android/start/> JSON and XML formats are too huge (and too memory- and CPU-consuming) to work with. Oh, I forgot! It is also possible to embed your data directly inside the code. Just create a class to hold it :)
56,140,277
I want to convert from `ATL::CImage` to `cv::Mat` for image handling in OpenCV (C++). Could you please help me convert this object? I got the `CImage` from a Windows screenshot (using MFC). Then I want to handle the image in an OpenCV `Mat` object, but I do not know how to convert it. * C++ Project (VC 2017) * MFC * OpenCV 3.4.6 --- ``` CImage image; int cx; int cy; CWnd* pWndDesktop = CWnd::GetDesktopWindow(); CWindowDC srcDC(pWndDesktop); RECT rcDesktopWindow; ::GetWindowRect(pWndDesktop->m_hWnd, &rcDesktopWindow); cx = (rcDesktopWindow.right - rcDesktopWindow.left); cy = (rcDesktopWindow.bottom - rcDesktopWindow.top); image.Create(cx, cy, srcDC.GetDeviceCaps(BITSPIXEL)); CDC* pDC = CDC::FromHandle(image.GetDC()); pDC->BitBlt(0, 0, cx, cy, &srcDC, 0, 0, SRCCOPY); image.ReleaseDC(); cv::Mat mat; // I want to set the image into mat! mat = image??? ``` I cannot convert `ATL::CImage` to `cv::Mat`.
2019/05/15
[ "https://Stackoverflow.com/questions/56140277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10644169/" ]
`CImage` creates a bottom-top bitmap if height is positive. You have to pass a negative height to create top-bottom bitmap for `mat` Use `CImage::GetBits` to retrieve the bits as follows: ``` HDC hdc = GetDC(0); RECT rc; GetClientRect(GetDesktopWindow(), &rc); int cx = rc.right; int cy = rc.bottom; CImage image; image.Create(cx, -cy, 32); BitBlt(image.GetDC(), 0, 0, cx, cy, hdc, 0, 0, SRCCOPY); image.ReleaseDC(); ReleaseDC(0, hdc); cv::Mat mat; mat.create(cy, cx, CV_8UC4); memcpy(mat.data, image.GetBits(), cy * cx * 4); //or borrow pixel data from CImage cv::Mat mat(cy, cx, CV_8UC4, image.GetBits()); ``` Or force a deep copy as follows: ``` cv::Mat mat; mat = cv::Mat(cy, cx, CV_8UC4, image.GetBits()).clone(); ``` Note, `CImage` makes its own allocation for pixel data. And `Mat` needs to make its own allocation, or it has to borrow from `CImage` which can be tricky. If you are just taking a screen shot, you can do that with plain Windows API, then write directly to `cv::Mat`. This way there is a single allocation (a bit faster) and `mat` does not rely on other objects. Example: ``` void foo() { HDC hdc = ::GetDC(0); RECT rc; ::GetClientRect(::GetDesktopWindow(), &rc); int cx = rc.right; int cy = rc.bottom; cv::Mat mat; mat.create(cy, cx, CV_8UC4); HBITMAP hbitmap = CreateCompatibleBitmap(hdc, cx, cy); HDC memdc = CreateCompatibleDC(hdc); HBITMAP oldbmp = (HBITMAP)SelectObject(memdc, hbitmap); BitBlt(memdc, 0, 0, cx, cy, hdc, 0, 0, SRCCOPY); BITMAPINFOHEADER bi = { sizeof(bi), cx, -cy, 1, 32, BI_RGB }; GetDIBits(hdc, hbitmap, 0, cy, mat.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS); //GDI cleanup: SelectObject(memdc, oldbmp); DeleteDC(memdc); DeleteObject(hbitmap); ::ReleaseDC(0, hdc); } ``` --- Edit: Changed `mat.data = (unsigned char*)image.GetBits();` to `memcpy(mat.data, image.GetBits(), cy * cx * 4);` Changed `ReleaseDC(0, hdc)` to `::ReleaseDC(0, hdc)` to avoid conflict with `CWnd::ReleaseDC(dc)`
``` #include <opencv2\opencv.hpp> #include <opencv2/imgproc/types_c.h> #include <atlimage.h> using namespace cv; Mat CImage2Mat(CImage cimg) { BITMAP bmp; ::GetObject(cimg.Detach(), sizeof(BITMAP), &bmp); int nChannels = bmp.bmBitsPixel == 1 ? 1 : bmp.bmBitsPixel / 8; int depth = bmp.bmBitsPixel == 1 ? IPL_DEPTH_1U : IPL_DEPTH_8U; IplImage* img = cvCreateImageHeader(cvSize(bmp.bmWidth, bmp.bmHeight), depth, nChannels); img->imageData = (char*)malloc(bmp.bmHeight * bmp.bmWidth * nChannels * sizeof(char)); memcpy(img->imageData, (char*)(bmp.bmBits), bmp.bmHeight * bmp.bmWidth * nChannels); cvFlip(img, img, 0); return cvarrToMat(static_cast<IplImage*>(img)); } ``` Usage: ``` CImage cimg; cimg.Load(L"resim.png"); Mat img = CImage2Mat(cimg); ```
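The bottom-up storage detail that both answers handle (negative height, or `cvFlip`) can be illustrated with a tiny stand-in for the pixel rows. This is just a sketch of the row-order fix, not real bitmap code:

```python
# A tiny 3-row "image" stored bottom-up, which is what CImage produces
# when created with a positive height.
bottom_up = [[5, 6], [3, 4], [1, 2]]

# Reversing the row order recovers the top-down layout that cv::Mat expects;
# passing a negative height to CImage::Create (or calling cvFlip(img, img, 0))
# achieves the same thing on the real bitmap.
top_down = bottom_up[::-1]
print(top_down)  # → [[1, 2], [3, 4], [5, 6]]
```

Forgetting this step leaves the captured screenshot vertically mirrored, which is the most common symptom of a raw `CImage` → `Mat` copy.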
5,662,161
I am using a CMS that has been poorly configured with horrific CSS (e.g. H1 is about 12px). How can I load my content without it being infected by this diseased CSS? I was considering an iframe, but I would want to keep it in the CMS if possible. Would frames work?
2011/04/14
[ "https://Stackoverflow.com/questions/5662161", "https://Stackoverflow.com", "https://Stackoverflow.com/users/617794/" ]
If you can keep your content within an element with a specific class or id (e.g. `<div class="content">`, then you could adapt a reset stylesheet (like [Eric Meyer’s](http://meyerweb.com/eric/tools/css/reset/)) to reset everything within that class: ``` .content div, .content span, /* ...and so on */ { margin: 0; padding: 0; border: 0; font-size: 100%; font: inherit; vertical-align: baseline; } ``` Then write all your styles prefixed with that class too, e.g. ``` .content h1 { font-size: 3em; } ``` If you’d rather reset everything to the default browser styles (rather than the unstyled settings you get with a reset stylesheet), you could adapt [Firefox’s built-in html.css stylesheet](http://davidwalsh.name/firefox-internal-rendering-css) in a similar way (i.e. prefix all its selectors with the class/id on the element containing all your content). Bit of a drag, but it might be less of a faff than frames. (I assume the CMS generates your HTML, so it’d be harder to change that to use frames than to work around their issues in your CSS file.) You might consider changing your CMS — they’re meant to reduce the amount of work you have to do, not increase it.
Is there any possibility of loading your own custom CSS? You should load your CSS after the CMS's CSS and override it.
24,588,929
I have an old MySQL database with a time column containing values like: ``` 2013-06-03 21:33:15 ``` I want to convert this time to my local time, UTC+6, in my PHP script. How is that possible? I can make a MySQL query to get the time from the database into my PHP variable $TimeFromMySQL. Now I want to show it like: ``` 11:32:44 PM 05 July 2014 ``` Thank you
2014/07/05
[ "https://Stackoverflow.com/questions/24588929", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2178781/" ]
See VMai's comment above if you want to do this in MySQL. For PHP: ``` $inDate = '2013-06-03 21:33:15'; $inDate_tz = 'America/Chicago'; $original_date = new DateTime($inDate, new DateTimeZone($inDate_tz) ); $original_date->setTimeZone(new DateTimeZone('Asia/Dhaka')); $new_date = $original_date->format('H:i:s d F Y'); echo $new_date; //outputs 08:33:15 04 June 2013 ```
My answer here might be too late; still, it might be helpful for someone who runs into the same situation I did before I worked out this solution. To convert a datetime to UTC (that is, to get the correct local time and date), I came up with this: ``` // Get time Zone. $whereTimeNow = date_default_timezone_get(); // Set the time zone. date_default_timezone_set($whereTimeNow); $dateTime = date('d-m-Y H:i'); $dateTimeConvert = new DateTime($dateTime, new DateTimeZone($whereTimeNow) ); $dateTimeConvert->setTimeZone(new DateTimeZone('UTC')); $dateTimeNow = $dateTimeConvert->format('Y-m-d H:i:s'); echo $dateTimeNow; ```
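For comparison, the conversion from the accepted PHP approach can be sketched in Python with the standard-library `zoneinfo` module. The source zone `America/Chicago` is the same assumption the first answer made; `Asia/Dhaka` is UTC+6, matching the asker's local time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# MySQL DATETIME string, assumed (as in the answer above) to be stored
# in US Central time.
stored = "2013-06-03 21:33:15"
src = datetime.strptime(stored, "%Y-%m-%d %H:%M:%S").replace(
    tzinfo=ZoneInfo("America/Chicago")
)

# Convert to the asker's local zone (UTC+6).
local = src.astimezone(ZoneInfo("Asia/Dhaka"))
print(local.strftime("%I:%M:%S %p %d %B %Y"))  # → 08:33:15 AM 04 June 2013
```

Note that the date rolls over to June 4th: 21:33 CDT (UTC-5) is 02:33 UTC, and adding six more hours crosses midnight.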
1,561,780
Is it possible to force the horizontal (or vertical) scroll bar to NOT display, even when needed? The thing is that I need to display colors that differ depending on the item. That works fine, but you can clearly see that the color does not reach both edges of the list view, which is kinda ugly. To make things worse, I have, inside my list view, another list view that contains another list of items. Those items' backgrounds don't come even close to the edge of the list view.
2009/10/13
[ "https://Stackoverflow.com/questions/1561780", "https://Stackoverflow.com", "https://Stackoverflow.com/users/164377/" ]
You can specify the visibility of the scrollbar for both vertical and horizontal scrolling to four options, using the `ScrollViewer.HorizontalScrollBarVisibility` and `ScrollViewer.VerticalScrollBarVisibility` attached properties: `Auto`, `Disabled`, `Hidden` and `Visible`. ``` <ListView ScrollViewer.HorizontalScrollBarVisibility="Disabled"> ``` `Disabled` will have it never show up and scrolling is not possible, `Hidden` will have it not show, but will allow users to scroll using text selection and arrow keys/mousewheel, etc.
Directly on the scroll bar: ``` <ScrollViewer HorizontalScrollBarVisibility="Hidden" /> ``` If you're doing it in a control that implements it in its ControlTemplate: ``` <StackPanel ScrollViewer.HorizontalScrollBarVisibility="Hidden" /> ```
2,651,165
Let $n$ be a 6-digit number that is both a perfect square and a perfect cube. If $n-6$ is neither even nor a multiple of 3, find $n$. **My try** Playing with the first ten perfect squares and cubes I ended up with: the last digit of $n \in \{1,5,9\}$. If $n$'s last digit is $9$, then the cube root ends in $9$ and the square root ends in $3$ or $7$. **Ex:** if $n$ were $729$, the cube root is $9$ (ends in $9$) and the square root is $27$ (ends in $7$). If $n$'s last digit is $5$, then the cube root ends in $5$ and the square root ends in $5$. If $n$'s last digit is $1$, then the cube root ends in $1$ and the square root ends in $1$. By brute force I saw that from $47^3$ onwards the cubes have 6 digits, so I tried some cubes (luckily for me, not for long) and $49^3 = 343^2 = 117649$ worked. So I found $n=117649$, but I want to know the *elegant* (brute-force-free) method of finding this number, because my method isn't very good; maybe it was just pure luck.
2018/02/15
[ "https://math.stackexchange.com/questions/2651165", "https://math.stackexchange.com", "https://math.stackexchange.com/users/455734/" ]
Note that the required number is *both* a square and a cube, so it must be a sixth power. Already $10^6=1000000$ has seven digits, while $6^6=46656$ has only five digits, so that leaves us with $7^6,8^6,9^6$ to test. Furthermore, we are given that $n-6$ is not even and not a multiple of 3, which implies that $n$ itself is also not even and not a multiple of 3. This eliminates $8^6$ and $9^6$ immediately, leaving $7^6$ as the only possible answer.
If $n$ is both a perfect square and a perfect cube, then $n = a^6$. If $n-6$ is neither even nor divisible by $3$, then $n$ is neither even nor divisible by $3$, and hence $a$ is neither even nor divisible by $3$. Since $a^6$ is a $6$-digit number, $6<a<10$. $7$ is the only integer in that interval that is not divisible by $2$ or by $3$, so $n = 7^6$.
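Both arguments above can be double-checked with a short brute-force search over sixth powers in Python:

```python
# n must be a perfect square and a perfect cube, hence a sixth power a**6.
# The conditions on n - 6 translate directly into the two modular filters.
candidates = [
    a**6
    for a in range(1, 100)
    if 100_000 <= a**6 <= 999_999   # six digits
    and (a**6 - 6) % 2 != 0          # n - 6 not even
    and (a**6 - 6) % 3 != 0          # n - 6 not a multiple of 3
]
print(candidates)  # → [117649], i.e. 7**6 = 343**2
```

The search confirms that $7^6 = 117649$ is the unique solution, matching the asker's lucky find.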
126,303
I rooted my LG G2 D802 and changed the font using [Font Installer ★ Root ★](https://play.google.com/store/apps/details?id=com.jrummy.font.installer). Then I changed my mind and switched the font back to the one that came with the phone. But the font hasn't changed across the whole phone: the custom font persists in some apps, like the Play Store, while the stock one shows in the rest of the phone! What can I do? (Click image to enlarge) [![IMG: and here is the rooted font](https://i.stack.imgur.com/kbdLml.png)](https://i.stack.imgur.com/kbdLm.png) [![IMG: here is the stock font](https://i.stack.imgur.com/FTowLl.jpg)](https://i.stack.imgur.com/FTowL.jpg)
2015/10/18
[ "https://android.stackexchange.com/questions/126303", "https://android.stackexchange.com", "https://android.stackexchange.com/users/132617/" ]
You will have to find someone with the same (rooted) phone as you, have them send you their font files, copy them into the location of your font files, and reboot. That worked for me. Alternatively, you can find a dump for your phone, download it, find the fonts in there, and place those into your font folder. (A friend did that and it worked as well.) I have warned people not to download that app, as it ruins the font settings.
OK, so I found out the solution! First of all, before you change the font from a rooted app you have to back up all your stock fonts! I didn't do that, and then this problem occurred! After some research, the only way to fix this (if you don't have a backup) is to reflash the stock ROM! Thanks, everyone, for the help!
23,337,763
I have some sensitive data that I need to store in a database; however, I also need to be able to decrypt that data back to its original state. I have been doing some reading, and it seems like AES is the way to go (if you disagree, then I'm more than happy to receive any suggestions!). The thing I don't quite get with AES is that there is something called an IV, and if I got this right, the IV acts like some sort of "key/password". So, my question is: if I want to decrypt the database-stored value, do I also need to know the IV as well as the key? Would I need to store these two values in the database as well?
2014/04/28
[ "https://Stackoverflow.com/questions/23337763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3557855/" ]
The strength is in the key. There's usually no problem with the IV being *known*, so storing it alongside the data (either as a separate column or just concatenated onto the start, which is a common way to do this) is fine. There may be some other requirements for the IV, however, that you should ensure you follow. These may be around the apparent randomness of the IV, or a rule that IVs must not be reused (although in such a case, it would more correctly be referred to as a nonce).
The IV is used for 'randomising' your data, so that the same text never gets encrypted in the same way twice. This increases the strength of your encrypted data. An example of when an IV is useful: you encrypt passwords, and User A and User B both use the password 'HelloWorld!'. Without an IV, the encrypted data is equal in both cases. If someone knows the password for User A and sees that the encrypted data is the same as for User B, he can then use the password of User A to log in as User B.
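To make the effect concrete, here is a toy sketch in Python. This is NOT real AES (in practice, use a vetted library such as `cryptography` with an AEAD mode like AES-GCM); the keystream here is just SHA-256 of key+IV, purely to illustrate why a fresh IV makes equal plaintexts encrypt differently while the IV itself can be stored in the clear:

```python
import hashlib
import os

def toy_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Derive a keystream from key+IV and XOR it with the plaintext.
    # Toy construction only; works for plaintexts up to 32 bytes.
    stream = hashlib.sha256(key + iv).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

key = b"shared-secret-key"
pw = b"HelloWorld!"

# Same password, two random IVs -> two different ciphertexts.
iv_a, iv_b = os.urandom(16), os.urandom(16)
ct_a = toy_encrypt(key, iv_a, pw)
ct_b = toy_encrypt(key, iv_b, pw)
print(ct_a != ct_b)

# Decryption (XOR is its own inverse) needs only the key plus the
# stored-in-the-clear IV, which is exactly the point of the answer above.
print(toy_encrypt(key, iv_a, ct_a) == pw)
```

Secrecy rests entirely on the key; the IV just ensures that an attacker comparing two ciphertexts learns nothing about whether the plaintexts match.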
1,324,962
How do I install QtiPlot or SciDAVis on Ubuntu 20.10? I tried to install them following the tutorials here in the community for version 20.04, but it doesn't work. The terminal says that the package "has broken packages, that it depends on libgsl23 (>= 2.5), and that it is not installable".
2021/03/20
[ "https://askubuntu.com/questions/1324962", "https://askubuntu.com", "https://askubuntu.com/users/1194443/" ]
After downloading and extracting the zip file, please consult the README. It says, in part: > > Linux > > > > > We recommend to install `stlink-tools` from the package repository of > the used distribution: > > > * Ubuntu Linux: [(Link)](https://packages.ubuntu.com/stlink-tools) > > > I suggest that you open a terminal and do: ``` sudo apt update sudo apt install stlink-tools ``` You should be all set.
A very fast alternative that is easy to install on Linux is EBlink. EBlink uses a more sophisticated algorithm and is the fastest GDB server (for ST-Link). stlink-org is no longer maintained, so for newer devices you have to look for alternatives anyway. <https://github.com/EmBitz/EBlink>
81,569
Although the phrase "sweep me off my feet" probably means "make me fall in love with you in a short time", what does it mean exactly? "Sweeping" can be difficult to associate with "love" (it can be difficult to read the words "sweeping" and "feet" and get the feeling that they mean love). Below is one example of its usage, from Steve Jobs's letter to his wife: > > We didn’t know much about each other twenty years ago. We were guided > by our intuition; you swept me off my feet. It was snowing when we got > married at the Ahwahnee. Years passed, kids came, good times, hard > times, but never bad times. Our love and respect has endured and > grown. We’ve been through so much together and here we are right back > where we started 20 years ago—older, wiser— with wrinkles on our faces > and hearts. We now know many of life’s joys, sufferings, secrets and > wonders and we’re still here together. My feet have never returned to > the ground. -- Steve Jobs > > >
2012/09/14
[ "https://english.stackexchange.com/questions/81569", "https://english.stackexchange.com", "https://english.stackexchange.com/users/1204/" ]
Although the phrase *can* mean that, and often does, it's also sometimes applied in a more broad context. To be "swept off your feet" is to be surprised, enthralled, exhilarated. Critics can be swept off their feet by an epic film; operagoers can be swept off their feet by a beautiful aria, etc. As for how sweeping became associated with love, that's referring to the aspect of *sweeping* that means *a smooth movement*, not the act of using a broom. Ballroom dancers can sweep across the dance floor, a powdery snow can sweep across the barren fields. It's that smooth, fluid motion – and the idea of your emotions being carried in that fashion – that brought about the idiom. A strong ocean or river current can literally sweep you off your feet, and young lovers can do the same thing to each other, figuratively and emotionally.
> > It's an English expression referring to the feeling that one gets when > completely *taken by* someone, *carried away*, *swept away* (all > emotionally). > > > So "Are you trying to sweep me off my feet?" translates to, literally, > "Are you trying to make me fall (in love) with you?" > > > It's like making someone fall in love with you in a short amount of > time. > > > [Source](http://www.urbandictionary.com/define.php?term=sweep%20me%20off%20my%20feet) Urban Dictionary
81,569
Although the phrase "sweep me off my feet" probably means "make me fall in love with you in a short time", what does it mean exactly? "Sweeping" can be difficult to associate with "love" (it can be difficult to read the words "sweeping" and "feet" and get the feeling that they mean love). Below is one example of its usage, from Steve Jobs's letter to his wife: > > We didn’t know much about each other twenty years ago. We were guided > by our intuition; you swept me off my feet. It was snowing when we got > married at the Ahwahnee. Years passed, kids came, good times, hard > times, but never bad times. Our love and respect has endured and > grown. We’ve been through so much together and here we are right back > where we started 20 years ago—older, wiser— with wrinkles on our faces > and hearts. We now know many of life’s joys, sufferings, secrets and > wonders and we’re still here together. My feet have never returned to > the ground. -- Steve Jobs > > >
2012/09/14
[ "https://english.stackexchange.com/questions/81569", "https://english.stackexchange.com", "https://english.stackexchange.com/users/1204/" ]
Although the phrase *can* mean that, and often does, it's also sometimes applied in a more broad context. To be "swept off your feet" is to be surprised, enthralled, exhilarated. Critics can be swept off their feet by an epic film; operagoers can be swept off their feet by a beautiful aria, etc. As for how sweeping became associated with love, that's referring to the aspect of *sweeping* that means *a smooth movement*, not the act of using a broom. Ballroom dancers can sweep across the dance floor, a powdery snow can sweep across the barren fields. It's that smooth, fluid motion – and the idea of your emotions being carried in that fashion – that brought about the idiom. A strong ocean or river current can literally sweep you off your feet, and young lovers can do the same thing to each other, figuratively and emotionally.
It is an expression used mainly by women. Swept off my feet refers to the time when they are hugged by a taller man and spun around, their feet not touching the ground. Hence, 'swept off my feet'.
81,569
Although the phrase "sweep me off my feet" probably means "make me fall in love with you in a short time", what does it mean exactly? "Sweeping" can be difficult to associate with "love" (it can be difficult to read the words "sweeping" and "feet" and get the feeling that they mean love). Below is one example of its usage, from Steve Jobs's letter to his wife: > > We didn’t know much about each other twenty years ago. We were guided > by our intuition; you swept me off my feet. It was snowing when we got > married at the Ahwahnee. Years passed, kids came, good times, hard > times, but never bad times. Our love and respect has endured and > grown. We’ve been through so much together and here we are right back > where we started 20 years ago—older, wiser— with wrinkles on our faces > and hearts. We now know many of life’s joys, sufferings, secrets and > wonders and we’re still here together. My feet have never returned to > the ground. -- Steve Jobs > > >
2012/09/14
[ "https://english.stackexchange.com/questions/81569", "https://english.stackexchange.com", "https://english.stackexchange.com/users/1204/" ]
Although the phrase *can* mean that, and often does, it's also sometimes applied in a more broad context. To be "swept off your feet" is to be surprised, enthralled, exhilarated. Critics can be swept off their feet by an epic film; operagoers can be swept off their feet by a beautiful aria, etc. As for how sweeping became associated with love, that's referring to the aspect of *sweeping* that means *a smooth movement*, not the act of using a broom. Ballroom dancers can sweep across the dance floor, a powdery snow can sweep across the barren fields. It's that smooth, fluid motion – and the idea of your emotions being carried in that fashion – that brought about the idiom. A strong ocean or river current can literally sweep you off your feet, and young lovers can do the same thing to each other, figuratively and emotionally.
Imagine a broom sweeping the floor, in one sweeping motion, dust particles are lifted into the air. In the same way when you fall in love with someone, you are lifted off your feet effortlessly. This feeling of elation is felt by both sexes, so I would disagree with @user72209 that the idiom is almost exclusive to women, indeed the touching tribute left by Steve Jobs to his wife disproves this idea. > > *[to sweep someone off their feet](http://dictionary.cambridge.org/dictionary/american-english/sweep-someone-off-their-feet)* > > to cause someone to fall suddenly and completely in love with you > > >
81,569
Although the phrase "sweep me off my feet" probably means "make me fall in love with you in a short time", what does it mean exactly? "Sweeping" can be difficult to associate with "love" (it can be difficult to read the words "sweeping" and "feet" and get the feeling that they mean love). Below is one example of its usage, from Steve Jobs's letter to his wife: > > We didn’t know much about each other twenty years ago. We were guided > by our intuition; you swept me off my feet. It was snowing when we got > married at the Ahwahnee. Years passed, kids came, good times, hard > times, but never bad times. Our love and respect has endured and > grown. We’ve been through so much together and here we are right back > where we started 20 years ago—older, wiser— with wrinkles on our faces > and hearts. We now know many of life’s joys, sufferings, secrets and > wonders and we’re still here together. My feet have never returned to > the ground. -- Steve Jobs > > >
2012/09/14
[ "https://english.stackexchange.com/questions/81569", "https://english.stackexchange.com", "https://english.stackexchange.com/users/1204/" ]
> > It's an English expression referring to the feeling that one gets when > completely *taken by* someone, *carried away*, *swept away* (all > emotionally). > > > So "Are you trying to sweep me off my feet?" translates to, literally, > "Are you trying to make me fall (in love) with you?" > > > It's like making someone fall in love with you in a short amount of > time. > > > [Source](http://www.urbandictionary.com/define.php?term=sweep%20me%20off%20my%20feet) Urban Dictionary
It is an expression used mainly by women. Swept off my feet refers to the time when they are hugged by a taller man and spun around, their feet not touching the ground. Hence, 'swept off my feet'.
81,569
Although the phrase "sweep me off my feet" probably means "make me fall in love with you in a short time", what does it mean exactly? "Sweeping" can be difficult to associate with "love" (it can be difficult to read the words "sweeping" and "feet" and get the feeling that they mean love). Below is one example of its usage, from Steve Jobs's letter to his wife: > > We didn’t know much about each other twenty years ago. We were guided > by our intuition; you swept me off my feet. It was snowing when we got > married at the Ahwahnee. Years passed, kids came, good times, hard > times, but never bad times. Our love and respect has endured and > grown. We’ve been through so much together and here we are right back > where we started 20 years ago—older, wiser— with wrinkles on our faces > and hearts. We now know many of life’s joys, sufferings, secrets and > wonders and we’re still here together. My feet have never returned to > the ground. -- Steve Jobs > > >
2012/09/14
[ "https://english.stackexchange.com/questions/81569", "https://english.stackexchange.com", "https://english.stackexchange.com/users/1204/" ]
> > It's an English expression referring to the feeling that one gets when > completely *taken by* someone, *carried away*, *swept away* (all > emotionally). > > > So "Are you trying to sweep me off my feet?" translates to, literally, > "Are you trying to make me fall (in love) with you?" > > > It's like making someone fall in love with you in a short amount of > time. > > > [Source](http://www.urbandictionary.com/define.php?term=sweep%20me%20off%20my%20feet) Urban Dictionary
Imagine a broom sweeping the floor, in one sweeping motion, dust particles are lifted into the air. In the same way when you fall in love with someone, you are lifted off your feet effortlessly. This feeling of elation is felt by both sexes, so I would disagree with @user72209 that the idiom is almost exclusive to women, indeed the touching tribute left by Steve Jobs to his wife disproves this idea. > > *[to sweep someone off their feet](http://dictionary.cambridge.org/dictionary/american-english/sweep-someone-off-their-feet)* > > to cause someone to fall suddenly and completely in love with you > > >
28,641,462
We've created a pipeline which performs a transformation from 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have slightly different schemas. One of the writes has failed twice in succession, with a different error each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, showing the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
When you build and run Android Tests for your app, the Android Gradle plugin builds two APKs (the app and the test APK). During the Gradle run the dependencies for the app and test builds are compared. Dependencies that exist in both are removed from the test build when the version numbers are the same. When the same dependency is in use but the version numbers differ, you will need to manually resolve the conflict, and this error is presented. To resolve the conflict you first need to figure out the two versions that are conflicting. If you aren't already using Android Gradle Plugin v1.1.1+, upgrading to that version will make the error message give you the conflicting version numbers. Choose which one you need. \*When choosing between the conflicting numbers it might be important to keep in mind that unless you've overridden the default Gradle dependency resolution strategy ([failOnVersionConflict](http://gradle.org/docs/current/dsl/org.gradle.api.artifacts.ResolutionStrategy.html)), conflicts internally within the app and test builds (separately) will be resolved by choosing the greater version. Now you need to decide how to resolve the conflict. If you need to force the use of the lower version (1.2) of the library, you will need to force the dependency to be resolved to a specific version of the library for both the app and test builds, like this: ``` // Needed to resolve app vs test dependencies, specifically, transitive dependencies of // libraryq and libraryz. Forcing the use of the smaller version after regression testing. configurations.all { resolutionStrategy.force 'org.somelibrary:library-core:1.2' } ``` If you need to use the 2.1 version of the dependency, you can use the snippet above as well, but you will never start using a newer version of the library regardless of whether transitive dependency updates require it.
Alternatively, you can also add a new normal dependency to either the app or the test build (whichever was trying to use the 1.2 version of the dependency). This will force the app or test build to depend on the (previously mentioned) Gradle dependency resolution strategy and therefore use the 2.1 version of the library for that build. ``` // Force the use of 2.1 because the app requires that version in libraryq transitively. androidTestCompile 'org.somelibrary:library-core:2.1' ``` or ``` // Force the use of 2.1 because the Android Tests require that version in libraryz. compile 'org.somelibrary:library-core:2.1' ``` In this solution the error could resurface if, say, version 3.3 started to be used in only one of the test or app builds, but this is typically OK because you'll be notified of another incompatibility at build time and can take action. Update: A few newer answers to this question also suggest excluding a particular transitive dependency from a declared dependency. This is a valid solution, but puts more onus on the developers. In the same way that the forced dependency resolution suggestion above hard-codes a version into the build, the exclude-transitive-dependency solution specifically overrides the stated requirements of a library. Sometimes library developers have bugs or work around bugs in various other libraries, so when you implement these solutions you take some risk of having to chase down very obscure bugs.
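The `failOnVersionConflict` strategy linked in the answer above can also be applied proactively. A minimal sketch in the Gradle Groovy DSL (a build-script fragment, not a complete build file) that turns every silent version conflict into a hard build error, so conflicts like this surface immediately rather than being resolved to the greater version:

```groovy
// Make Gradle fail the build on ANY version conflict instead of
// silently picking the greater version. Useful while auditing the
// app vs. test dependency trees; remove or relax once resolved.
configurations.all {
    resolutionStrategy {
        failOnVersionConflict()
    }
}
```

With this in place, the build report names the conflicting modules and versions, which you can then pin with `resolutionStrategy.force` as shown in the answer above.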
If you look at the (generated) .iml file(s), you can see the conflicting version numbers quite easily. In my case: ``` <orderEntry type="library" exported="" scope="TEST" name="support-annotations-20.0.0" level="project" /> <orderEntry type="library" exported="" name="support-annotations-21.0.3" level="project" /> ``` Going back to version 1.0.1 of the gradle plugin resolves the problem.
28,641,462
We've created a pipeline that performs a transformation on 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have a slightly different schema. One of the writes has failed twice in succession with different errors each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, which show the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
When you build and run Android Tests for your app, the Android Gradle plugin builds two APKs (the app and the test APK). During the Gradle run the dependencies for the app and test builds are compared. Dependencies that exist in both are removed from the test build when the version numbers are the same. When the same dependency is in use but the version numbers differ, you will need to manually resolve the conflict, and this error is presented. To resolve the conflict you first need to figure out the two versions that are conflicting. If you aren't already using Android Gradle Plugin v1.1.1+, upgrading to that version will make the error message give you the conflicting version numbers. Choose which one you need. \*When choosing between the conflicting numbers it might be important to keep in mind that unless you've overridden the default Gradle dependency resolution strategy ([failOnVersionConflict](http://gradle.org/docs/current/dsl/org.gradle.api.artifacts.ResolutionStrategy.html)), conflicts internally within the app and test builds (separately) will be resolved by choosing the greater version. Now you need to decide how to resolve the conflict. If you need to force the use of the lower version (1.2) of the library, you will need to force the dependency to be resolved to a specific version of the library for both the app and test builds, like this: ``` // Needed to resolve app vs test dependencies, specifically, transitive dependencies of // libraryq and libraryz. Forcing the use of the smaller version after regression testing. configurations.all { resolutionStrategy.force 'org.somelibrary:library-core:1.2' } ``` If you need to use the 2.1 version of the dependency, you can use the snippet above as well, but you will never start using a newer version of the library regardless of whether transitive dependency updates require it.
Alternatively, you can also add a new normal dependency to either the app or the test build (whichever was trying to use the 1.2 version of the dependency). This will force the app or test build to depend on the (previously mentioned) Gradle dependency resolution strategy and therefore use the 2.1 version of the library for that build. ``` // Force the use of 2.1 because the app requires that version in libraryq transitively. androidTestCompile 'org.somelibrary:library-core:2.1' ``` or ``` // Force the use of 2.1 because the Android Tests require that version in libraryz. compile 'org.somelibrary:library-core:2.1' ``` In this solution the error could resurface if, say, version 3.3 started to be used in only one of the test or app builds, but this is typically OK because you'll be notified of another incompatibility at build time and can take action. Update: A few newer answers to this question also suggest excluding a particular transitive dependency from a declared dependency. This is a valid solution, but puts more onus on the developers. In the same way that the forced dependency resolution suggestion above hard-codes a version into the build, the exclude-transitive-dependency solution specifically overrides the stated requirements of a library. Sometimes library developers have bugs or work around bugs in various other libraries, so when you implement these solutions you take some risk of having to chase down very obscure bugs.
Had a similar problem. First, I upgraded the Gradle plugin to 1.1.1 (in the project's gradle file): ``` classpath 'com.android.tools.build:gradle:1.1.1' ``` which helped me realize that the problem was the app referring to: ``` com.android.support:support-annotations:21.0.3 ``` while the test app was referring to: ``` com.android.support:support-annotations:20.0.0 ``` (due to specifying `androidTestCompile 'com.squareup.assertj:assertj-android-appcompat-v7:1.0.0'`). I solved it by specifying: ``` androidTestCompile 'com.android.support:support-annotations:21.0.3' ```
28,641,462
We've created a pipeline that performs a transformation on 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have a slightly different schema. One of the writes has failed twice in succession with different errors each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, which show the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
When you build and run Android Tests for your app, the Android Gradle plugin builds two APKs (the app and the test APK). During the Gradle run the dependencies for the app and test builds are compared. Dependencies that exist in both are removed from the test build when the version numbers are the same. When the same dependency is in use but the version numbers differ, you will need to manually resolve the conflict, and this error is presented. To resolve the conflict you first need to figure out the two versions that are conflicting. If you aren't already using Android Gradle Plugin v1.1.1+, upgrading to that version will make the error message give you the conflicting version numbers. Choose which one you need. \*When choosing between the conflicting numbers it might be important to keep in mind that unless you've overridden the default Gradle dependency resolution strategy ([failOnVersionConflict](http://gradle.org/docs/current/dsl/org.gradle.api.artifacts.ResolutionStrategy.html)), conflicts internally within the app and test builds (separately) will be resolved by choosing the greater version. Now you need to decide how to resolve the conflict. If you need to force the use of the lower version (1.2) of the library, you will need to force the dependency to be resolved to a specific version of the library for both the app and test builds, like this: ``` // Needed to resolve app vs test dependencies, specifically, transitive dependencies of // libraryq and libraryz. Forcing the use of the smaller version after regression testing. configurations.all { resolutionStrategy.force 'org.somelibrary:library-core:1.2' } ``` If you need to use the 2.1 version of the dependency, you can use the snippet above as well, but you will never start using a newer version of the library regardless of whether transitive dependency updates require it.
Alternatively, you can also add a new normal dependency to either the app or the test build (whichever was trying to use the 1.2 version of the dependency). This will force the app or test build to depend on the (previously mentioned) Gradle dependency resolution strategy and therefore use the 2.1 version of the library for that build. ``` // Force the use of 2.1 because the app requires that version in libraryq transitively. androidTestCompile 'org.somelibrary:library-core:2.1' ``` or ``` // Force the use of 2.1 because the Android Tests require that version in libraryz. compile 'org.somelibrary:library-core:2.1' ``` In this solution the error could resurface if, say, version 3.3 started to be used in only one of the test or app builds, but this is typically OK because you'll be notified of another incompatibility at build time and can take action. Update: A few newer answers to this question also suggest excluding a particular transitive dependency from a declared dependency. This is a valid solution, but puts more onus on the developers. In the same way that the forced dependency resolution suggestion above hard-codes a version into the build, the exclude-transitive-dependency solution specifically overrides the stated requirements of a library. Sometimes library developers have bugs or work around bugs in various other libraries, so when you implement these solutions you take some risk of having to chase down very obscure bugs.
Alternatively, one can exclude the conflicting dependency (e.g. support annotations library) pulled in by the test app dependency (e.g. assertj-android), by using the following: `testCompile('com.squareup.assertj:assertj-android:1.0.0') { exclude group: 'com.android.support', module: 'support-annotations' }`
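The same exclusion works for instrumentation-test dependencies. A hedged sketch in the Gradle Groovy DSL using the `androidTestCompile` configuration with the same example coordinates as the answer above (the coordinates are illustrative, not verified against your build):

```groovy
// Exclude the transitive support-annotations module from the
// instrumentation-test dependency so the app's version is the
// only one on the test classpath.
androidTestCompile('com.squareup.assertj:assertj-android:1.0.0') {
    exclude group: 'com.android.support', module: 'support-annotations'
}
```

This carries the caveat noted elsewhere on this question: excluding a transitive dependency overrides the library's stated requirements, so regression-test after applying it.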
28,641,462
We've created a pipeline that performs a transformation on 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have a slightly different schema. One of the writes has failed twice in succession with different errors each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, which show the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
When you build and run Android Tests for your app, the Android Gradle plugin builds two APKs (the app and the test APK). During the Gradle run the dependencies for the app and test builds are compared. Dependencies that exist in both are removed from the test build when the version numbers are the same. When the same dependency is in use but the version numbers differ, you will need to manually resolve the conflict, and this error is presented. To resolve the conflict you first need to figure out the two versions that are conflicting. If you aren't already using Android Gradle Plugin v1.1.1+, upgrading to that version will make the error message give you the conflicting version numbers. Choose which one you need. \*When choosing between the conflicting numbers it might be important to keep in mind that unless you've overridden the default Gradle dependency resolution strategy ([failOnVersionConflict](http://gradle.org/docs/current/dsl/org.gradle.api.artifacts.ResolutionStrategy.html)), conflicts internally within the app and test builds (separately) will be resolved by choosing the greater version. Now you need to decide how to resolve the conflict. If you need to force the use of the lower version (1.2) of the library, you will need to force the dependency to be resolved to a specific version of the library for both the app and test builds, like this: ``` // Needed to resolve app vs test dependencies, specifically, transitive dependencies of // libraryq and libraryz. Forcing the use of the smaller version after regression testing. configurations.all { resolutionStrategy.force 'org.somelibrary:library-core:1.2' } ``` If you need to use the 2.1 version of the dependency, you can use the snippet above as well, but you will never start using a newer version of the library regardless of whether transitive dependency updates require it.
Alternatively, you can also add a new normal dependency to either the app or the test build (whichever was trying to use the 1.2 version of the dependency). This will force the app or test build to depend on the (previously mentioned) Gradle dependency resolution strategy and therefore use the 2.1 version of the library for that build. ``` // Force the use of 2.1 because the app requires that version in libraryq transitively. androidTestCompile 'org.somelibrary:library-core:2.1' ``` or ``` // Force the use of 2.1 because the Android Tests require that version in libraryz. compile 'org.somelibrary:library-core:2.1' ``` In this solution the error could resurface if, say, version 3.3 started to be used in only one of the test or app builds, but this is typically OK because you'll be notified of another incompatibility at build time and can take action. Update: A few newer answers to this question also suggest excluding a particular transitive dependency from a declared dependency. This is a valid solution, but puts more onus on the developers. In the same way that the forced dependency resolution suggestion above hard-codes a version into the build, the exclude-transitive-dependency solution specifically overrides the stated requirements of a library. Sometimes library developers have bugs or work around bugs in various other libraries, so when you implement these solutions you take some risk of having to chase down very obscure bugs.
Gradle has a [Resolution Strategy Mechanism](https://docs.gradle.org/current/dsl/org.gradle.api.artifacts.ResolutionStrategy.html). You can resolve this conflict by adding the lines below to the app-level build.gradle file: ``` configurations.all { resolutionStrategy { force 'com.google.code.findbugs:jsr305:1.3.9', 'com.google.code.findbugs:jsr305:2.0.1' } } ```
28,641,462
We've created a pipeline that performs a transformation on 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have a slightly different schema. One of the writes has failed twice in succession with different errors each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, which show the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
Had a similar problem. First, I upgraded the Gradle plugin to 1.1.1 (in the project's gradle file): ``` classpath 'com.android.tools.build:gradle:1.1.1' ``` which helped me realize that the problem was the app referring to: ``` com.android.support:support-annotations:21.0.3 ``` while the test app was referring to: ``` com.android.support:support-annotations:20.0.0 ``` (due to specifying `androidTestCompile 'com.squareup.assertj:assertj-android-appcompat-v7:1.0.0'`). I solved it by specifying: ``` androidTestCompile 'com.android.support:support-annotations:21.0.3' ```
If you look at the (generated) .iml file(s), you can see the conflicting version numbers quite easily. In my case: ``` <orderEntry type="library" exported="" scope="TEST" name="support-annotations-20.0.0" level="project" /> <orderEntry type="library" exported="" name="support-annotations-21.0.3" level="project" /> ``` Going back to version 1.0.1 of the gradle plugin resolves the problem.
28,641,462
We've created a pipeline that performs a transformation on 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have a slightly different schema. One of the writes has failed twice in succession with different errors each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, which show the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
Alternatively, one can exclude the conflicting dependency (e.g. support annotations library) pulled in by the test app dependency (e.g. assertj-android), by using the following: `testCompile('com.squareup.assertj:assertj-android:1.0.0') { exclude group: 'com.android.support', module: 'support-annotations' }`
If you look at the (generated) .iml file(s), you can see the conflicting version numbers quite easily. In my case: ``` <orderEntry type="library" exported="" scope="TEST" name="support-annotations-20.0.0" level="project" /> <orderEntry type="library" exported="" name="support-annotations-21.0.3" level="project" /> ``` Going back to version 1.0.1 of the gradle plugin resolves the problem.
28,641,462
We've created a pipeline that performs a transformation on 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have a slightly different schema. One of the writes has failed twice in succession with different errors each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, which show the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
Gradle has a [Resolution Strategy Mechanism](https://docs.gradle.org/current/dsl/org.gradle.api.artifacts.ResolutionStrategy.html). You can resolve this conflict by adding the lines below to the app-level build.gradle file: ``` configurations.all { resolutionStrategy { force 'com.google.code.findbugs:jsr305:1.3.9', 'com.google.code.findbugs:jsr305:2.0.1' } } ```
If you look at the (generated) .iml file(s), you can see the conflicting version numbers quite easily. In my case: ``` <orderEntry type="library" exported="" scope="TEST" name="support-annotations-20.0.0" level="project" /> <orderEntry type="library" exported="" name="support-annotations-21.0.3" level="project" /> ``` Going back to version 1.0.1 of the gradle plugin resolves the problem.
28,641,462
We've created a pipeline that performs a transformation on 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have a slightly different schema. One of the writes has failed twice in succession with different errors each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, which show the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
Had a similar problem. First, I upgraded the Gradle plugin to 1.1.1 (in the project's gradle file): ``` classpath 'com.android.tools.build:gradle:1.1.1' ``` which helped me realize that the problem was the app referring to: ``` com.android.support:support-annotations:21.0.3 ``` while the test app was referring to: ``` com.android.support:support-annotations:20.0.0 ``` (due to specifying `androidTestCompile 'com.squareup.assertj:assertj-android-appcompat-v7:1.0.0'`). I solved it by specifying: ``` androidTestCompile 'com.android.support:support-annotations:21.0.3' ```
Alternatively, one can exclude the conflicting dependency (e.g. support annotations library) pulled in by the test app dependency (e.g. assertj-android), by using the following: `testCompile('com.squareup.assertj:assertj-android:1.0.0') { exclude group: 'com.android.support', module: 'support-annotations' }`
28,641,462
We've created a pipeline that performs a transformation on 3 streams located in GCS ('Clicks', 'Impressions', 'ActiveViews'). We have the requirement that we need to write the individual streams back out to GCS, but to separate files (to be later loaded into BigQuery), because they all have a slightly different schema. One of the writes has failed twice in succession with different errors each time, which in turn causes the pipeline to fail. These are the last 2 workflows/pipelines represented visually in the GDC, which show the failure: ![Write failing](https://i.stack.imgur.com/6PT8H.png) ![Write failing](https://i.stack.imgur.com/4d8cX.png) The 1st error: ``` Feb 21, 2015, 12:55:14 PM (b0cbc05dfc56dbd9): Workflow failed. Causes: (f98c177c56055863): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (2d838e694976dc6): Expansion failed for filepattern: gs://cdf/binaries/tmp-38156614004ed90e-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].avro. ``` The 2nd error: ``` Feb 21, 2015, 1:20:15 PM (19dcdcf1fe125eeb): Workflow failed. Causes: (2a27345ef73673d3): Map task completion for Step "ActiveViews-GSC-write" failed. Causes: (8f79a20dfa5c4d2b): Unable to view metadata for file: gs://cdf/binaries/tmp-2a27345ef7367fe6-00001-of-00015.avro. ``` It's only happening on the "ActiveViews-GCS-Write" step. Any idea what we're doing wrong?
2015/02/21
[ "https://Stackoverflow.com/questions/28641462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877278/" ]
Had a similar problem. First, I upgraded the Gradle plugin to 1.1.1 (in the project's gradle file): ``` classpath 'com.android.tools.build:gradle:1.1.1' ``` which helped me realize that the problem was the app referring to: ``` com.android.support:support-annotations:21.0.3 ``` while the test app was referring to: ``` com.android.support:support-annotations:20.0.0 ``` (due to specifying `androidTestCompile 'com.squareup.assertj:assertj-android-appcompat-v7:1.0.0'`). I solved it by specifying: ``` androidTestCompile 'com.android.support:support-annotations:21.0.3' ```
Gradle has a [Resolution Strategy Mechanism](https://docs.gradle.org/current/dsl/org.gradle.api.artifacts.ResolutionStrategy.html). You can resolve this conflict by adding the lines below to the app-level build.gradle file: ``` configurations.all { resolutionStrategy { force 'com.google.code.findbugs:jsr305:1.3.9', 'com.google.code.findbugs:jsr305:2.0.1' } } ```
565,578
I'm currently looking at the [SRR1240-8R2M](https://www.bourns.com/docs/Product-Datasheets/SRR1240.pdf). On the datasheet, it shows the inductance of the inductor at 100 kHz is 8.2 uH. But I'm trying to run my circuit at 2.2 MHz. How do I determine the inductance when I'm running at 2.2 MHz? The SRF is 7.96 MHz. Thanks. Edit: Sorry, I made a mistake. The SRF is 32 MHz, not 7.96 MHz. 7.96 MHz is the test frequency.
2021/05/17
[ "https://electronics.stackexchange.com/questions/565578", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/236717/" ]
You'll have to derate Imax and Ipp ripple significantly @ 2MHz due to core loss. This means L may reduce 10% for a 50°C rise. \*updated with new info on SRF L won't change at all unless you overheat it or get within the Q BW effects at resonance. The higher the Q factor, the closer you may operate near resonance, but core loss rises with f and impedance rises sharply near the SRF. For an LC resonant filter, the gain rises with Q and peaks at resonance. For an inductor, the Q drops to 0 at resonance, and the inductance rises like the Q in an LC filter, then drops to 0 and it becomes just a resistor with DCR, after which it becomes capacitive as the phase shifts rapidly from interwinding parasitics; then C declines at the same rate as L rose before resonance. \*Core loss ----------- This loss in watts increases in 3 dimensions with the cube root of the frequency ratio as a function of current or magnetic flux. Thus, when tested at 100kHz and operated away from Q=0 at 2.2MHz, the core-loss-induced temperature rise increases by \$22^{0.33}\approx 2.8\$, so current max ratings must be reduced to ~35% of the 100kHz spec. ~~but FET losses also increase due to C effects near SRF unless using ZVS. Thermal runaway exists if core losses cause a temp rise that then drops L, then faster with thermal runaway. You should put a heat sensor on magnetics until the design has an acceptable margin. (Dk > 1)~~ Recommendation -------------- <https://www.mouser.com/datasheet/2/427/ihlp-6767gz-01-1763230.pdf> Only use parts rated for your operating frequency.
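The derating arithmetic above can be sketched quickly. This is a rough check only: the \$f^{1/3}\$ core-loss scaling is the rule of thumb stated in the answer, not a datasheet figure, and real core loss depends on the material and flux level.

```python
# Rough derating check, assuming core-loss heating scales ~ f^(1/3)
# (the rule of thumb from the answer, not a datasheet parameter).
f_test = 100e3    # datasheet test frequency, Hz
f_op = 2.2e6      # intended operating frequency, Hz

ratio = f_op / f_test               # 22x the test frequency
loss_factor = ratio ** 0.33         # ~2.8x the temperature rise
derated_fraction = 1 / loss_factor  # Imax falls to roughly a third of the 100 kHz rating

print(f"loss factor ~{loss_factor:.1f}x, derate Imax to ~{derated_fraction:.0%}")
```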
With the inductance specified at 100KHz, you're likely going to be operating this well outside its design intent at 2.2MHz. You generally want to be well away from the SRF. You should pick a different inductor technology than this one if you intend to run it at 2.2MHz.
565,578
I'm currently looking at [SRR1240-8R2M](https://www.bourns.com/docs/Product-Datasheets/SRR1240.pdf). On the datasheet, it shows us the inductance of the inductor at 100KHz is 8.2uH. But I'm trying to run my circuit at 2.2MHz. How do I determine the inductance when I'm running at 2.2MHz? The SRF is 7.96MHz. Thanks. Edit: Sorry, I made a mistake. The SRF is 32MHz, not 7.96MHz. 7.96MHz is the test frequency.
2021/05/17
[ "https://electronics.stackexchange.com/questions/565578", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/236717/" ]
You'll have to derate Imax and Ipp ripple significantly @ 2MHz due to core loss. This means L may reduce 10% for a 50°C rise. \*updated with new info on SRF L won't change at all unless you overheat it or get within the Q BW effects at resonance. The higher the Q factor, the closer you may operate near resonance, but core loss rises with f and impedance rises sharply near the SRF. For an LC resonant filter, the gain rises with Q and peaks at resonance. For an inductor, the Q drops to 0 at resonance, and the inductance rises like the Q in an LC filter, then drops to 0 and it becomes just a resistor with DCR, after which it becomes capacitive as the phase shifts rapidly from interwinding parasitics; then C declines at the same rate as L rose before resonance. \*Core loss ----------- This loss in watts increases in 3 dimensions with the cube root of the frequency ratio as a function of current or magnetic flux. Thus, when tested at 100kHz and operated away from Q=0 at 2.2MHz, the core-loss-induced temperature rise increases by \$22^{0.33}\approx 2.8\$, so current max ratings must be reduced to ~35% of the 100kHz spec. ~~but FET losses also increase due to C effects near SRF unless using ZVS. Thermal runaway exists if core losses cause a temp rise that then drops L, then faster with thermal runaway. You should put a heat sensor on magnetics until the design has an acceptable margin. (Dk > 1)~~ Recommendation -------------- <https://www.mouser.com/datasheet/2/427/ihlp-6767gz-01-1763230.pdf> Only use parts rated for your operating frequency.
You typically want to be operating at least 10x below the SRF. When you're more than 10x below the SRF, the inductance is pretty much the nominal inductance. As you approach the SRF, inductance increases, but you'll have to measure what it is. It might be OK, it might not.
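To put numbers on this, a first-order parallel-LC model of a real inductor (a textbook approximation, not a figure from the datasheet) gives the apparent inductance rise as you approach the SRF:

```python
# First-order model: the inductor behaves like L in parallel with its
# self-capacitance, so L_eff(f) = L / (1 - (f / f_srf)^2).
# Textbook approximation; the datasheet does not specify this curve.
L_nom = 8.2e-6   # nominal inductance, H
f_srf = 32e6     # SRF from the corrected question, Hz
f_op = 2.2e6     # intended operating frequency, Hz

L_eff = L_nom / (1 - (f_op / f_srf) ** 2)
rise = L_eff / L_nom - 1

print(f"L_eff = {L_eff * 1e6:.2f} uH ({rise:.1%} above nominal)")
```

At 2.2 MHz, about 15x below the 32 MHz SRF, this model predicts well under a 1% rise over nominal, consistent with the "10x below SRF" rule of thumb.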
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the user's groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My question is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look and where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Added memoization which would be used in practice for these types of recursive functions to improve efficiency by removing repetitive computations (i.e. <https://www.python-course.eu/python3_memoization.php>) ``` def memoize(f): memo = {} def helper(x): if x not in memo: memo[x] = f(x) return memo[x] return helper @memoize def sum_of_factorial(n): if n==0: return 0 return factorial(n) + sum_of_factorial(n-1) @memoize def factorial(n): if n == 0: return 1 else: return n * factorial(n-1) ```
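For comparison, the standard library provides the same memoization without a hand-rolled decorator, via `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def factorial(n):
    # cached recursive factorial
    return 1 if n == 0 else n * factorial(n - 1)

@lru_cache(maxsize=None)
def sum_of_factorial(n):
    # cached sum 1! + 2! + ... + n!
    return 0 if n == 0 else factorial(n) + sum_of_factorial(n - 1)

print(sum_of_factorial(4))  # 1! + 2! + 3! + 4! = 33
```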
Not sure if the math is right (it probably isn't), but you might want to define a function and iterate through it; something like this is what I'm guessing: ``` def factorial(n): if n == 0: return 1 else: return n * factorial(n-1) def sum_of_factorial(n): sum_output = 0 while n >= 0: sum_output += factorial(n) n -= 1 return sum_output print(factorial(10)) print(sum_of_factorial(10)) ```
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the users groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My questions is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look at where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
@DarryIG's and @bkbb's answers would work but are inefficient, since they make repeated recursive calls with the same numbers, which have the same results, over and over again for the higher numbers. You can cache the results for better efficiency. Also, since: ``` sum_factorials(n) = (sum_factorials(n-1) - sum_factorials(n-2)) * n + sum_factorials(n-1) ``` you don't actually need two functions to implement the recursion: ``` def sum_factorials(n, cache=[0, 1]): if len(cache) > n: return cache[n] previous = sum_factorials(n - 1) cache.append((previous - sum_factorials(n - 2)) * n + previous) return cache[n] ``` so that `sum_factorials(4)` returns: ``` 33 ```
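The recurrence above is easy to sanity-check against a brute-force sum of factorials:

```python
from math import factorial

def brute(n):
    # direct sum 1! + 2! + ... + n! (0 for n = 0)
    return sum(factorial(k) for k in range(1, n + 1))

# verify: sum(n) = (sum(n-1) - sum(n-2)) * n + sum(n-1),
# because sum(n-1) - sum(n-2) = (n-1)! and n * (n-1)! = n!
for n in range(2, 12):
    assert (brute(n - 1) - brute(n - 2)) * n + brute(n - 1) == brute(n)

print(brute(4))  # 33
```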
Not sure if the math is right, probably is not, yet you might want to define a method and iterate through, that I'm guessing: ``` def factorial(n): if n == 0: return 1 else: return n * factorial(n-1) def sum_of_factorial(n): sum_output = 0 while n >= 0: sum_output += factorial(n) n -= 1 return sum_output print(factorial(10)) print(sum_of_factorial(10)) ```
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the users groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My questions is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look at where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Alternative single recursive solution with generators, using `functools.reduce` for multiplication: ``` from functools import reduce as _r def fact_sum(n, f = True): if not f and n: yield from [n, *fact_sum(n - 1, f = False)] if f and n: new_n = _r(lambda x, y:x*y, list(fact_sum(n, f = False))) yield from [new_n, *fact_sum(n - 1, f = True)] print(sum(fact_sum(3))) print(sum(fact_sum(4))) ``` Output: ``` 9 33 ```
Not sure if the math is right, probably is not, yet you might want to define a method and iterate through, that I'm guessing: ``` def factorial(n): if n == 0: return 1 else: return n * factorial(n-1) def sum_of_factorial(n): sum_output = 0 while n >= 0: sum_output += factorial(n) n -= 1 return sum_output print(factorial(10)) print(sum_of_factorial(10)) ```
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the users groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My questions is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look at where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Your logic is almost right; I have corrected it. Please check it below. > > You can try it online at <https://rextester.com/AGS44863> > > > ``` def factorial(n): if n == 0: return 0 else: mul_sum = 1 for i in range(1, n + 1): mul_sum *= i return factorial(n-1) + mul_sum print(factorial(4)) # 33 ``` To make sure the above one is correct, you can also have a look at the below function, which forms the expression (string) using recursion. ``` def factorial_expression(n): if str(n) == '0': return "" else: if n == 1: return "1" mul_sum = "(" for i in range(1, n + 1): mul_sum += str(i) + "*" return (factorial_expression(n-1) + "+" + mul_sum.rstrip("*") + ")").lstrip("+") # 1+(1*2)+(1*2*3)+(1*2*3*4) print(factorial_expression(4)) ```
Not sure if the math is right, probably is not, yet you might want to define a method and iterate through, that I'm guessing: ``` def factorial(n): if n == 0: return 1 else: return n * factorial(n-1) def sum_of_factorial(n): sum_output = 0 while n >= 0: sum_output += factorial(n) n -= 1 return sum_output print(factorial(10)) print(sum_of_factorial(10)) ```
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the users groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My questions is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look at where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I'm not sure why we need any subtraction or caches. Recursion can go forwards as well as backwards: ``` def f(n, i=1, factorial=1, result=1): if i == n: return result next = factorial * (i + 1) return f(n, i + 1, next, result + next) print(f(4)) # 33 ``` (This also seems slightly [faster](https://ideone.com/jp07ah) than blhsing's [answer](https://stackoverflow.com/a/58071468/2034787).)
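The same forward accumulation can also be written as a plain loop, which avoids Python's recursion depth limit for large n:

```python
def sum_of_factorials(n):
    # accumulate the running factorial and the running sum in one pass
    total = fact = 1
    for i in range(2, n + 1):
        fact *= i
        total += fact
    return total

print(sum_of_factorials(4))  # 33
```

Like the recursive version above, this assumes n >= 1.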
Not sure if the math is right, probably is not, yet you might want to define a method and iterate through, that I'm guessing: ``` def factorial(n): if n == 0: return 1 else: return n * factorial(n-1) def sum_of_factorial(n): sum_output = 0 while n >= 0: sum_output += factorial(n) n -= 1 return sum_output print(factorial(10)) print(sum_of_factorial(10)) ```
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the users groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My questions is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look at where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
@DarryIG and @bkbb's answers would work but are inefficient since it makes repeated recursive calls with the same numbers, which have the same results, over and over again for the higher numbers. You can cache the results for better efficiency. Also, since: ``` sum_factorials(n) = (sum_factorials(n-1) - sum_factorials(n-2)) * n + sum_factorials(n-1) ``` you don't actually need two functions to implement the recursion: ``` def sum_factorials(n, cache=[0, 1]): if len(cache) > n: return cache[n] previous = sum_factorials(n - 1) cache.append((previous - sum_factorials(n - 2)) * n + previous) return cache[n] ``` so that `sum_factorials(4)` returns: ``` 33 ```
Added memoization which would be used in practice for these types of recursive functions to improve efficiency by removing repetitive computations (i.e. <https://www.python-course.eu/python3_memoization.php>) ``` def memoize(f): memo = {} def helper(x): if x not in memo: memo[x] = f(x) return memo[x] return helper @memoize def sum_of_factorial(n): if n==0: return 0 return factorial(n) + sum_of_factorial(n-1) @memoize def factorial(n): if n == 0: return 1 else: return n * factorial(n-1) ```
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the users groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My questions is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look at where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
@DarryIG and @bkbb's answers would work but are inefficient since it makes repeated recursive calls with the same numbers, which have the same results, over and over again for the higher numbers. You can cache the results for better efficiency. Also, since: ``` sum_factorials(n) = (sum_factorials(n-1) - sum_factorials(n-2)) * n + sum_factorials(n-1) ``` you don't actually need two functions to implement the recursion: ``` def sum_factorials(n, cache=[0, 1]): if len(cache) > n: return cache[n] previous = sum_factorials(n - 1) cache.append((previous - sum_factorials(n - 2)) * n + previous) return cache[n] ``` so that `sum_factorials(4)` returns: ``` 33 ```
Alternative single recursive solution with generators, using `functools.reduce` for multiplication: ``` from functools import reduce as _r def fact_sum(n, f = True): if not f and n: yield from [n, *fact_sum(n - 1, f = False)] if f and n: new_n = _r(lambda x, y:x*y, list(fact_sum(n, f = False))) yield from [new_n, *fact_sum(n - 1, f = True)] print(sum(fact_sum(3))) print(sum(fact_sum(4))) ``` Output: ``` 9 33 ```
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the users groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My questions is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look at where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
@DarryIG and @bkbb's answers would work but are inefficient since it makes repeated recursive calls with the same numbers, which have the same results, over and over again for the higher numbers. You can cache the results for better efficiency. Also, since: ``` sum_factorials(n) = (sum_factorials(n-1) - sum_factorials(n-2)) * n + sum_factorials(n-1) ``` you don't actually need two functions to implement the recursion: ``` def sum_factorials(n, cache=[0, 1]): if len(cache) > n: return cache[n] previous = sum_factorials(n - 1) cache.append((previous - sum_factorials(n - 2)) * n + previous) return cache[n] ``` so that `sum_factorials(4)` returns: ``` 33 ```
Your logic is almost good, I have corrected it. Please check it below. > > You can try it online at <https://rextester.com/AGS44863> > > > ``` def factorial(n): if n == 0: return 0 else: mul_sum = 1 for i in range(1, n + 1): mul_sum *= i return factorial(n-1) + mul_sum print(factorial(4)) # 33 ``` To make sure, the above one is correct, you can also have a look at the below function which forms the expression (string) using recursion. ``` def factorial_expression(n): if str(n) == '0': return "" else: if n == 1: return "1" mul_sum = "(" for i in range(1, n + 1): mul_sum += str(i) + "*" return (factorial_expression(n-1) + "+" + mul_sum.rstrip("*") + ")").lstrip("+") # 1+(1*2)+(1*2*3)+(1*2*3*4) print(factorial_expression(4)) ```
58,071,322
In my app, each user can create their own social 'group', similar to meetup.com. An example group might be "Let's play tennis on Thursday". Users can see each group they've created from within their dashboard. I have a [bootstrap badge](https://getbootstrap.com/docs/4.1/components/badge/) which links to the users groups. This badge displays a little number indicating the number of groups that user has created. Here's my button with a dummy number '3' inside: ``` <a href="groups" class="list-group-item list-group-item-action">Groups I've Created<span class="badge badge-primary badge-pill">3</span> ``` My questions is, how should I dynamically update the badge number so that it shows how many groups a user has created? I have a groups table so I could count the number of rows and populate the badge that way? I've only been coding for a few weeks so if anybody can show me how the code should look at where I need to put it, it would really help!! Thank you guys. Here's my database `Groups` data ``` DROP TABLE IF EXISTS `groups`; CREATE TABLE IF NOT EXISTS `groups` ( `id` bigint(20) UNSIGNED NOT NULL AUTO_INCREMENT, `created_at` timestamp NULL DEFAULT NULL, `updated_at` timestamp NULL DEFAULT NULL, `group_title` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, `group_description` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL, PRIMARY KEY (`id`) ) ``` User.php ``` public function groups() { return $this->hasMany('App\Group'); } ``` GroupsController.php ``` public function () { $user = User::with('groups')->get(); $check = $user->count(); return view('groups.index', compact('check')); } ``` home.blade.php ``` <a href="groups" class="list-group-item list-group-item-action"> Groups I've Created <span class="badge badge-primary badge-pill">{{ $check }}</span> </a> ```
2019/09/23
[ "https://Stackoverflow.com/questions/58071322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
@DarryIG and @bkbb's answers would work but are inefficient since it makes repeated recursive calls with the same numbers, which have the same results, over and over again for the higher numbers. You can cache the results for better efficiency. Also, since: ``` sum_factorials(n) = (sum_factorials(n-1) - sum_factorials(n-2)) * n + sum_factorials(n-1) ``` you don't actually need two functions to implement the recursion: ``` def sum_factorials(n, cache=[0, 1]): if len(cache) > n: return cache[n] previous = sum_factorials(n - 1) cache.append((previous - sum_factorials(n - 2)) * n + previous) return cache[n] ``` so that `sum_factorials(4)` returns: ``` 33 ```
I'm not sure why we need any subtraction or caches. Recursion can go forwards as well as backwards: ``` def f(n, i=1, factorial=1, result=1): if i == n: return result next = factorial * (i + 1) return f(n, i + 1, next, result + next) print(f(4)) # 33 ``` (This also seems slightly [faster](https://ideone.com/jp07ah) than blhsing's [answer](https://stackoverflow.com/a/58071468/2034787).)
24,914,631
I'm trying to assign a value to a variable but I am getting this compile error: `s(String) is a 'field' but is used like a 'type'` ``` public class SForceTest { String s = ""; s = "asdsa"; } ``` What can I do to fix that problem? Thanks
2014/07/23
[ "https://Stackoverflow.com/questions/24914631", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2391964/" ]
You can only create members (and initialize them) outside of a constructor or method. You need to place the assignment inside a constructor or method: ``` public class SForceTest { String s = ""; public SForceTest() { s = "asdsa"; } } ```
You need to either initialize it in a method, as above, or at the time of creation, e.g: ``` public class SForceTest { String s = "asdsa"; } ``` You'd normally give it an accessibility modifier as well, making it look more like this (assuming we only ever want instances of this class to be able to access the field): ``` public class SForceTest { private String s = "asdsa"; } ```
2,124,553
I need to define a (Java) regex that will match any string that does NOT contain any of these * 'foo' or 'foos' as a whole word * 'bar' or 'bars' as a whole word * 'baz' or 'bazs' as a whole word Is it possible to express this as a single regex? I know it would be more readable to use 3 separate regexs, but I'd like to do it in one if possible. Thanks, Don
2010/01/23
[ "https://Stackoverflow.com/questions/2124553", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2648/" ]
Try the following: ``` final private static Pattern p = Pattern.compile(".*\\b(?:foos?|bars?|bazs?)\\b.*"); public boolean isGoodString(String stringToTest) { return !p.matcher(stringToTest).matches(); } ```
Here you go: ``` ^((?!\b(?:foos?|bars?|bazs?)\b).)*$ ``` (The non-capturing group around the alternation is needed so that the `\b` anchors apply to every alternative, not just the first and last.)
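The same whole-word exclusion is easy to experiment with outside Java, since Python's `re` supports the same `\b` and negative-lookahead syntax; an illustrative check:

```python
import re

# Reject any string containing foo/foos, bar/bars, or baz/bazs as a whole word
pattern = re.compile(r'^(?:(?!\b(?:foos?|bars?|bazs?)\b).)*$')

def is_good(s):
    return pattern.match(s) is not None

print(is_good("hello world"))  # True
print(is_good("a foo b"))      # False
print(is_good("food fight"))   # True: 'food' is not the whole word 'foo'
```

In Java the same pattern works with `String.matches`, which implicitly anchors at both ends.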
51,292
In the paper [A Riemannian framework for tensor computing](https://doi.org/10.1007/s11263-005-3222-z "Pennec, X., Fillard, P. & Ayache, N. Int J Comput Vision 66, 41–66 (2006). zbMATH review at https://zbmath.org/?q=an:1287.53031"), by Pennec et al., on page 46 the authors state a "distance" function on the manifold of positive definite matrices $\mathcal{Sym}\_n^+$ given by $$d(A,B) = \lVert\log A^{-1/2}BA^{-1/2}\rVert$$ where the norm on the right is the standard sum of squares euclidean norm (presumably on the eigenvalues of the argument to the log-term). The authors then say that they could not prove (but had strong empirical evidence) that the above function $d$ satisfies the triangle inequality. I am not that familiar with differential geometry, so I don't know where to check whether this property holds (I need it in an application). Where should I look for a proof / disproof of the triangle inequality for $d$?
2011/01/06
[ "https://mathoverflow.net/questions/51292", "https://mathoverflow.net", "https://mathoverflow.net/users/12028/" ]
Let $X$, $Y$ and $Z$ be positive definite Hermitian matrices (you ask about the real symmetric case, the Hermitian one includes it). Let the eigenvalues of $(X^\*)^{-1/2} Y X^{-1/2}$ be $e^{\alpha\_i}$, those of $(Y^\*)^{-1/2} Z Y^{-1/2}$ be $e^{\beta\_i}$ and those of $(X^\*)^{-1/2} Z X^{-1/2}$ be $e^{\gamma\_i}$. Set $A=Y^{1/2} X^{-1/2}$, $B=Z^{1/2} Y^{-1/2}$ and $C=Z^{1/2} X^{-1/2}$. The [singular values](http://en.wikipedia.org/wiki/Singular_value) of $A$ are then $e^{\alpha\_i/2}$, and so forth. Note that $AB=C$. By a [result of Klyachko](http://www.math.neu.edu/~suciu/GAS/klyachko/kl.pdf), there exist Hermitian matrices $\mathfrak{a}$, $\mathfrak{b}$ and $\mathfrak{c}$ such that $\mathfrak{a}+\mathfrak{b}=\mathfrak{c}$, the eigenvalues of $\mathfrak{a}$ are $\alpha\_i/2$, those of $\mathfrak{b}$ are $\beta\_i/2$ and those of $\mathfrak{c}$ are $\gamma\_i/2$. This result can be thought of as saying that, although $\log A + \log B \neq \log C$, and although $\log A$, $\log B$ and $\log C$ are not Hermitian, we can find matrices which do have those properties and have the same eigenvalues. The inequality you want to prove is that $$\left( \sum \alpha\_i^2 \right)^{1/2} + \left( \sum \beta\_i^2 \right)^{1/2} \geq \left( \sum \gamma\_i^2 \right)^{1/2}$$ But $\sum \alpha\_i^2 = 4 \mathrm{Tr} \ \mathfrak{a}^\* \mathfrak{a}$. So this turns into the (standard) fact that $\mathrm{Tr} \ \mathfrak{a}^\* \mathfrak{a}$ is a positive definite norm on the Hermitian matrices. --- I can't resist indulging in a little self promotion. This argument appears at the beginning of my paper [Horn's Problem, Vinnikov Curves and the Hive Cone](http://arxiv.org/abs/math.AG/0311428). There I consider the curve $\det(xX+yY+zZ)=0$ in $\mathbb{RP}^2$. This curve is hyperbolic and, by results of Helton and Vinnikov, all hyperbolic curves are of this form. It meets the three coordinate lines precisely at $-e^{\alpha\_i}$, $-e^{\beta\_i}$ and $-e^{\gamma\_i}$. 
The point is to use results from the theory of hyperbolic curves to explain Horn's results on eigenvalues of matrix sums.
It seems that $d(A,B)$ is the distance function for the (unique up to scalar multiplication) Riemannian metric on $\mathcal{Sym}\_n^+$ invariant by the $\operatorname{GL}\_n(R)$ action $(g,A)\mapsto gAg^t$. Indeed $d(A,B)$ is then equal to $d(I,A^{-1/2}BA^{-1/2})$, and if $S$ is any symmetric matrix, $t\mapsto \exp(tS)$ is a geodesic through $I$ with speed $\lVert S\rVert=\operatorname{tr}(S^2)^{1/2}$. To see that it is indeed geodesic, note that the inverse map $s\_I:B\mapsto B^{-1}$ is a riemannian isometry ("symmetry about $I$"), as well as $s\_A:B\mapsto A^{1/2}B^{-1}A^{1/2}$ ("symmetry about $A$"). But then when $A$ is near enough $I$, the (unique) geodesic segment from $I$ to $A^2$ has to be invariant (as a set) by $s\_A$ (which switches its end points), so to contain $A$. By iterating and passing to the limit, the geodesic has to coincide with a one parameter group $t\mapsto\exp(tS)$ for small $t$, hence for all $t$. ADDED: in particular, $d$ satisfies the triangle inequality. The Riemannian manifold $\mathcal{Sym}\_n^+$ is the prototypical *symmetric space*, and most books on riemannian geometry must treat it, for instance [Gallot–Hulin–Lafontaine - Riemannian geometry](https://doi.org/10.1007/978-3-642-18855-8) (I've not checked).
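As a quick sanity check of the triangle inequality, one can at least test the commuting (diagonal) case numerically: if $A$ and $B$ are diagonal with positive entries $a\_i$, $b\_i$, then $A^{-1/2}BA^{-1/2}$ is diagonal with entries $b\_i/a\_i$, so $d(A,B)=\bigl(\sum\_i \log^2(b\_i/a\_i)\bigr)^{1/2}$ and the triangle inequality reduces to the Euclidean one on log-vectors. An illustrative Python check of this special case (not part of the original discussion):

```python
import math
import random

def d(a, b):
    # Affine-invariant distance restricted to diagonal SPD matrices:
    # eigenvalues of A^{-1/2} B A^{-1/2} are b_i / a_i, so
    # d(A, B) = sqrt(sum_i log(b_i / a_i)^2)
    return math.sqrt(sum(math.log(bi / ai) ** 2 for ai, bi in zip(a, b)))

random.seed(0)
for _ in range(1000):
    A = [random.uniform(0.1, 10) for _ in range(3)]
    B = [random.uniform(0.1, 10) for _ in range(3)]
    C = [random.uniform(0.1, 10) for _ in range(3)]
    assert d(A, C) <= d(A, B) + d(B, C) + 1e-12

print("triangle inequality holds on 1000 random diagonal triples")
```

The general (non-commuting) case is exactly what the answers above establish.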
51,292
In the paper [A Riemannian framework for tensor computing](https://doi.org/10.1007/s11263-005-3222-z "Pennec, X., Fillard, P. & Ayache, N. Int J Comput Vision 66, 41–66 (2006). zbMATH review at https://zbmath.org/?q=an:1287.53031"), by Pennec et al., on page 46 the authors state a "distance" function on the manifold of positive definite matrices $\mathcal{Sym}\_n^+$ given by $$d(A,B) = \lVert\log A^{-1/2}BA^{-1/2}\rVert$$ where the norm on the right is the standard sum of squares euclidean norm (presumably on the eigenvalues of the argument to the log-term). The authors then say that they could not prove (but had strong empirical evidence) that the above function $d$ satisfies the triangle inequality. I am not that familiar with differential geometry, so I don't know where to check whether this property holds (I need it in an application). Where should I look for a proof / disproof of the triangle inequality for $d$?
2011/01/06
[ "https://mathoverflow.net/questions/51292", "https://mathoverflow.net", "https://mathoverflow.net/users/12028/" ]
It seems that $d(A,B)$ is the distance function for the (unique up to scalar multiplication) Riemannian metric on $\mathcal{Sym}\_n^+$ invariant by the $\operatorname{GL}\_n(R)$ action $(g,A)\mapsto gAg^t$. Indeed $d(A,B)$ is then equal to $d(I,A^{-1/2}BA^{-1/2})$, and if $S$ is any symmetric matrix, $t\mapsto \exp(tS)$ is a geodesic through $I$ with speed $\lVert S\rVert=\operatorname{tr}(S^2)^{1/2}$. To see that it is indeed geodesic, note that the inverse map $s\_I:B\mapsto B^{-1}$ is a riemannian isometry ("symmetry about $I$"), as well as $s\_A:B\mapsto A^{1/2}B^{-1}A^{1/2}$ ("symmetry about $A$"). But then when $A$ is near enough $I$, the (unique) geodesic segment from $I$ to $A^2$ has to be invariant (as a set) by $s\_A$ (which switches its end points), so to contain $A$. By iterating and passing to the limit, the geodesic has to coincide with a one parameter group $t\mapsto\exp(tS)$ for small $t$, hence for all $t$. ADDED: in particular, $d$ satisfies the triangle inequality. The Riemannian manifold $\mathcal{Sym}\_n^+$ is the prototypical *symmetric space*, and most books on riemannian geometry must treat it, for instance [Gallot–Hulin–Lafontaine - Riemannian geometry](https://doi.org/10.1007/978-3-642-18855-8) (I've not checked).
This distance is related to the notion of *Geometric Mean* in $\mathcal{Sym}\_n^+$ (see exercises 198/199 of my [extra exercises for *Matrices : Theory and Applications*](http://perso.ens-lyon.fr/serre/DPF/exobis.pdf)). The geometric mean of $A$, $B$ is given by (assume that $A$ is positive definite) $$A\mathbin\sharp B=A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{1/2}A^{1/2}.$$ Although this is not clear on the formula, $A\mathbin\sharp B=B\mathbin\sharp A$. This mean turns out to be the middle point of the geodesic segment $[A,B]$ for the metric $d$ of the question. The geodesic segment is unique, the underlying Riemannian manifold is hyperbolic. The segment is parametrized by $$s\mapsto A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{1-s}A^{1/2}.$$ The following inequality relates the geometric, arithmetic and harmonic means $$\frac12\left(A^{-1}+B^{-1}\right)^{-1}\le A\mathbin\sharp B\le\frac12(A+B).$$ Finally, the geometric mean of the harmonic and arithmetic means is the geometric mean. Actually, the arithmetico-harmonic mean (defined as a limit by iterating both arithmetic and harmonic means) is the geometric mean.
51,292
In the paper [A Riemannian framework for tensor computing](https://doi.org/10.1007/s11263-005-3222-z "Pennec, X., Fillard, P. & Ayache, N. Int J Comput Vision 66, 41–66 (2006). zbMATH review at https://zbmath.org/?q=an:1287.53031"), by Pennec et al., on page 46 the authors state a "distance" function on the manifold of positive definite matrices $\mathcal{Sym}\_n^+$ given by $$d(A,B) = \lVert\log A^{-1/2}BA^{-1/2}\rVert$$ where the norm on the right is the standard sum of squares euclidean norm (presumably on the eigenvalues of the argument to the log-term). The authors then say that they could not prove (but had strong empirical evidence) that the above function $d$ satisfies the triangle inequality. I am not that familiar with differential geometry, so I don't know where to check whether this property holds (I need it in an application). Where should I look for a proof / disproof of the triangle inequality for $d$?
2011/01/06
[ "https://mathoverflow.net/questions/51292", "https://mathoverflow.net", "https://mathoverflow.net/users/12028/" ]
Let $X$, $Y$ and $Z$ be positive definite Hermitian matrices (you ask about the real symmetric case, the Hermitian one includes it). Let the eigenvalues of $(X^\*)^{-1/2} Y X^{-1/2}$ be $e^{\alpha\_i}$, those of $(Y^\*)^{-1/2} Z Y^{-1/2}$ be $e^{\beta\_i}$ and those of $(X^\*)^{-1/2} Z X^{-1/2}$ be $e^{\gamma\_i}$. Set $A=Y^{1/2} X^{-1/2}$, $B=Z^{1/2} Y^{-1/2}$ and $C=Z^{1/2} X^{-1/2}$. The [singular values](http://en.wikipedia.org/wiki/Singular_value) of $A$ are then $e^{\alpha\_i/2}$, and so forth. Note that $AB=C$. By a [result of Klyachko](http://www.math.neu.edu/~suciu/GAS/klyachko/kl.pdf), there exist Hermitian matrices $\mathfrak{a}$, $\mathfrak{b}$ and $\mathfrak{c}$ such that $\mathfrak{a}+\mathfrak{b}=\mathfrak{c}$, the eigenvalues of $\mathfrak{a}$ are $\alpha\_i/2$, those of $\mathfrak{b}$ are $\beta\_i/2$ and those of $\mathfrak{c}$ are $\gamma\_i/2$. This result can be thought of as saying that, although $\log A + \log B \neq \log C$, and although $\log A$, $\log B$ and $\log C$ are not Hermitian, we can find matrices which do have those properties and have the same eigenvalues. The inequality you want to prove is that $$\left( \sum \alpha\_i^2 \right)^{1/2} + \left( \sum \beta\_i^2 \right)^{1/2} \geq \left( \sum \gamma\_i^2 \right)^{1/2}$$ But $\sum \alpha\_i^2 = 4 \mathrm{Tr} \ \mathfrak{a}^\* \mathfrak{a}$. So this turns into the (standard) fact that $\mathrm{Tr} \ \mathfrak{a}^\* \mathfrak{a}$ is a positive definite norm on the Hermitian matrices. --- I can't resist indulging in a little self promotion. This argument appears at the beginning of my paper [Horn's Problem, Vinnikov Curves and the Hive Cone](http://arxiv.org/abs/math.AG/0311428). There I consider the curve $\det(xX+yY+zZ)=0$ in $\mathbb{RP}^2$. This curve is hyperbolic and, by results of Helton and Vinnikov, all hyperbolic curves are of this form. It meets the three coordinate lines precisely at $-e^{\alpha\_i}$, $-e^{\beta\_i}$ and $-e^{\gamma\_i}$. 
The point is to use results from the theory of hyperbolic curves to explain Horn's results on eigenvalues of matrix sums.
You can understand this geometrically by interpreting a positive definite symmetric matrix as a Euclidean metric on $\mathbb R^n$. Imagine starting with a lump of some kind of moldable material, like clay, and reshaping it (we may as well restrict to linear deformations) until its shape is that defined by a given symmetric matrix. (Actually, for this particular metaphor, we need to restrict to the case that the volume element $\det(A)$ remains constant, but the determinant is easily separated out in both the formula and the process.) What is the total energy needed to change the shape? Here, total energy of an infinitesimal change is the $L^2$ norm of the change in length of unit vectors (up to a constant, this can be measured by integrating over the unit sphere, or summing over any orthonormal basis). It's intuitively obvious and easy to prove that the most efficient path is to find the principal directions and stretch them log-linearly to get to the desired endshape. To measure the distance between two metrics, you can do the same thing: find the principal directions of one with respect to the other. That's what the formula does: $A^{-1/2}$ is a linear transformation that sends the metric defined by $A$ to the standard Euclidean metric, and $A^{-1/2} B A^{-1/2}$ is the metric $B$ transformed into the new coordinates.
51,292
In the paper [A Riemannian framework for tensor computing](https://doi.org/10.1007/s11263-005-3222-z "Pennec, X., Fillard, P. & Ayache, N. Int J Comput Vision 66, 41–66 (2006). zbMATH review at https://zbmath.org/?q=an:1287.53031"), by Pennec et al., on page 46 the authors state a "distance" function on the manifold of positive definite matrices $\mathcal{Sym}\_n^+$ given by $$d(A,B) = \lVert\log A^{-1/2}BA^{-1/2}\rVert$$ where the norm on the right is the standard sum of squares euclidean norm (presumably on the eigenvalues of the argument to the log-term). The authors then say that they could not prove (but had strong empirical evidence) that the above function $d$ satisfies the triangle inequality. I am not that familiar with differential geometry, so I don't know where to check whether this property holds (I need it in an application). Where should I look for a proof / disproof of the triangle inequality for $d$?
2011/01/06
[ "https://mathoverflow.net/questions/51292", "https://mathoverflow.net", "https://mathoverflow.net/users/12028/" ]
Let $X$, $Y$ and $Z$ be positive definite Hermitian matrices (you ask about the real symmetric case, the Hermitian one includes it). Let the eigenvalues of $(X^\*)^{-1/2} Y X^{-1/2}$ be $e^{\alpha\_i}$, those of $(Y^\*)^{-1/2} Z Y^{-1/2}$ be $e^{\beta\_i}$ and those of $(X^\*)^{-1/2} Z X^{-1/2}$ be $e^{\gamma\_i}$. Set $A=Y^{1/2} X^{-1/2}$, $B=Z^{1/2} Y^{-1/2}$ and $C=Z^{1/2} X^{-1/2}$. The [singular values](http://en.wikipedia.org/wiki/Singular_value) of $A$ are then $e^{\alpha\_i/2}$, and so forth. Note that $AB=C$. By a [result of Klyachko](http://www.math.neu.edu/~suciu/GAS/klyachko/kl.pdf), there exist Hermitian matrices $\mathfrak{a}$, $\mathfrak{b}$ and $\mathfrak{c}$ such that $\mathfrak{a}+\mathfrak{b}=\mathfrak{c}$, the eigenvalues of $\mathfrak{a}$ are $\alpha\_i/2$, those of $\mathfrak{b}$ are $\beta\_i/2$ and those of $\mathfrak{c}$ are $\gamma\_i/2$. This result can be thought of as saying that, although $\log A + \log B \neq \log C$, and although $\log A$, $\log B$ and $\log C$ are not Hermitian, we can find matrices which do have those properties and have the same eigenvalues. The inequality you want to prove is that $$\left( \sum \alpha\_i^2 \right)^{1/2} + \left( \sum \beta\_i^2 \right)^{1/2} \geq \left( \sum \gamma\_i^2 \right)^{1/2}$$ But $\sum \alpha\_i^2 = 4 \mathrm{Tr} \ \mathfrak{a}^\* \mathfrak{a}$. So this turns into the (standard) fact that $\mathrm{Tr} \ \mathfrak{a}^\* \mathfrak{a}$ is a positive definite norm on the Hermitian matrices. --- I can't resist indulging in a little self promotion. This argument appears at the beginning of my paper [Horn's Problem, Vinnikov Curves and the Hive Cone](http://arxiv.org/abs/math.AG/0311428). There I consider the curve $\det(xX+yY+zZ)=0$ in $\mathbb{RP}^2$. This curve is hyperbolic and, by results of Helton and Vinnikov, all hyperbolic curves are of this form. It meets the three coordinate lines precisely at $-e^{\alpha\_i}$, $-e^{\beta\_i}$ and $-e^{\gamma\_i}$. 
The point is to use results from the theory of hyperbolic curves to explain Horn's results on eigenvalues of matrix sums.
This distance is related to the notion of *Geometric Mean* in $\mathcal{Sym}\_n^+$ (see exercises 198/199 of my [extra exercises for *Matrices : Theory and Applications*](http://perso.ens-lyon.fr/serre/DPF/exobis.pdf)). The geometric mean of $A$, $B$ is given by (assume that $A$ is positive definite) $$A\mathbin\sharp B=A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{1/2}A^{1/2}.$$ Although this is not clear on the formula, $A\mathbin\sharp B=B\mathbin\sharp A$. This mean turns out to be the middle point of the geodesic segment $[A,B]$ for the metric $d$ of the question. The geodesic segment is unique, the underlying Riemannian manifold is hyperbolic. The segment is parametrized by $$s\mapsto A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{1-s}A^{1/2}.$$ The following inequality relates the geometric, arithmetic and harmonic means $$\frac12\left(A^{-1}+B^{-1}\right)^{-1}\le A\mathbin\sharp B\le\frac12(A+B).$$ Finally, the geometric mean of the harmonic and arithmetic means is the geometric mean. Actually, the arithmetico-harmonic mean (defined as a limit by iterating both arithmetic and harmonic means) is the geometric mean.
51,292
In the paper [A Riemannian framework for tensor computing](https://doi.org/10.1007/s11263-005-3222-z "Pennec, X., Fillard, P. & Ayache, N. Int J Comput Vision 66, 41–66 (2006). zbMATH review at https://zbmath.org/?q=an:1287.53031"), by Pennec et al., on page 46 the authors state a "distance" function on the manifold of positive definite matrices $\mathcal{Sym}\_n^+$ given by $$d(A,B) = \lVert\log A^{-1/2}BA^{-1/2}\rVert$$ where the norm on the right is the standard sum of squares euclidean norm (presumably on the eigenvalues of the argument to the log-term). The authors then say that they could not prove (but had strong empirical evidence) that the above function $d$ satisfies the triangle inequality. I am not that familiar with differential geometry, so I don't know where to check whether this property holds (I need it in an application). Where should I look for a proof / disproof of the triangle inequality for $d$?
2011/01/06
[ "https://mathoverflow.net/questions/51292", "https://mathoverflow.net", "https://mathoverflow.net/users/12028/" ]
You can understand this geometrically by interpreting a positive definite symmetric matrix as a Euclidean metric on $\mathbb R^n$. Imagine starting with a lump of some kind of moldable material, like clay, and reshaping it (we may as well restrict to linear deformations) until its shape is that defined by a given symmetric matrix. (Actually, for this particular metaphor, we need to restrict to the case that the volume element $\det(A)$ remains constant, but the determinant is easily separated out in both the formula and the process.) What is the total energy needed to change the shape? Here, total energy of an infinitesimal change is the $L^2$ norm of the change in length of unit vectors (up to a constant, this can be measured by integrating over the unit sphere, or summing over any orthonormal basis). It's intuitively obvious and easy to prove that the most efficient path is to find the principal directions and stretch them log-linearly to get to the desired endshape. To measure the distance between two metrics, you can do the same thing: find the principal directions of one with respect to the other. That's what the formula does: $A^{-1/2}$ is a linear transformation that sends the metric defined by $A$ to the standard Euclidean metric, and $A^{-1/2} B A^{-1/2}$ is the metric $B$ transformed into the new coordinates.
This distance is related to the notion of *Geometric Mean* in $\mathcal{Sym}\_n^+$ (see exercises 198/199 of my [extra exercises for *Matrices : Theory and Applications*](http://perso.ens-lyon.fr/serre/DPF/exobis.pdf)). The geometric mean of $A$, $B$ is given by (assume that $A$ is positive definite) $$A\mathbin\sharp B=A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{1/2}A^{1/2}.$$ Although this is not clear on the formula, $A\mathbin\sharp B=B\mathbin\sharp A$. This mean turns out to be the middle point of the geodesic segment $[A,B]$ for the metric $d$ of the question. The geodesic segment is unique, the underlying Riemannian manifold is hyperbolic. The segment is parametrized by $$s\mapsto A^{1/2}\left(A^{-1/2}BA^{-1/2}\right)^{1-s}A^{1/2}.$$ The following inequality relates the geometric, arithmetic and harmonic means $$\frac12\left(A^{-1}+B^{-1}\right)^{-1}\le A\mathbin\sharp B\le\frac12(A+B).$$ Finally, the geometric mean of the harmonic and arithmetic means is the geometric mean. Actually, the arithmetico-harmonic mean (defined as a limit by iterating both arithmetic and harmonic means) is the geometric mean.
30,529,142
I am using CakePHP 2.x and I am having trouble redirecting logged-in users based on their role. I have two roles, admin and collegesupervisor. When an admin logs in, he should be redirected to the users controller's index page; when a collegesupervisor logs in, he should be redirected to the collegeprofiles controller's addinfo page. Is it possible to redirect users based on their role without using CakePHP's ACL component? Thanks in advance. Here is my AppController and UsersController code:

```
//AppController
<?php
/**
 * Application level Controller
 *
 * This file is application-wide controller file. You can put all
 * application-wide controller-related methods here.
 *
 * CakePHP(tm) : Rapid Development Framework (http://cakephp.org)
 * Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
 *
 * Licensed under The MIT License
 * For full copyright and license information, please see the LICENSE.txt
 * Redistributions of files must retain the above copyright notice.
 *
 * @copyright Copyright (c) Cake Software Foundation, Inc. (http://cakefoundation.org)
 * @link http://cakephp.org CakePHP(tm) Project
 * @package app.Controller
 * @since CakePHP(tm) v 0.2.9
 * @license http://www.opensource.org/licenses/mit-license.php MIT License
 */
App::uses('Controller', 'Controller');
/**
 * Application Controller
 *
 * Add your application-wide methods in the class below, your controllers
 * will inherit them.
 *
 * @package app.Controller
 * @link http://book.cakephp.org/2.0/en/controllers.html#the-app-controller
 */
class AppController extends Controller {

    public $components = array(
        'Session',
        'Auth' => array(
            'loginRedirect' => array('controller' => 'users', 'action' => 'index'),
            'logoutRedirect' => array('controller' => 'users', 'action' => 'index'),
            'authError' => 'You do not have the authority to view this page.',
            'loginError' => 'Invalid Username or Password entered, please try again.',
            'authorize' => array('Controller'),
        ));

    public function isAuthorized($user) {
        // Here is where we should verify the role and give access based on role
        return true;
    }

    // only allow the login controllers only
    public function beforeFilter() {
        parent::beforeFilter();
        $this->layout = 'bootstrap';
        $this->Auth->allow("login","logout");
        $this->set('logged_in', $this->Auth->loggedIn());
        $this->set('current_user', $this->Auth->user());
        $wr = $this->webroot;
        //$this->set('authUser', $this->Auth->user());
        $user1 = $this->Session->read("Auth.User");
        $user = $user1['username'];
        //pr($user);
        $this->set(compact('user','wr'));
        $this->set('admin', $this->_isAdmin());
    }

    function _isAdmin() {
        $admin = FALSE;
        if($this->Auth->user('role') == 'admin') {
            $admin = TRUE;
        }
        return $admin;
    }
}

//User Controller
<?php
App::uses('AppController', 'Controller');
/**
 * Users Controller
 *
 * @property User $User
 * @property PaginatorComponent $Paginator
 */
class UsersController extends AppController {

    /**
     * Components
     *
     * @var array
     */
    public $components = array('Paginator');

    /**
     * index method
     *
     * @return void
     */
    public function beforeFilter() {
        parent::beforeFilter();
        $this->Auth->allow('login','logout');
    }

    public function isAuthorized($user) {
        if($user['role'] == 'admin')
            return true;
        if(in_array($this->action, array('edit', 'delete', 'add'))) {
            if($user['id'] != $this->request->params['pass'][0]) {
                return false;
            }
        }
        return true;
    }

    public function login() {
        //if already logged-in, redirect
        if($this->Session->check('Auth.User')){
            $this->redirect(array('controller'=>'football_results','action' => 'index2'));
        }
        // if we get the post information, try to authenticate
        if ($this->request->is('post')) {
            if ($this->Auth->login()) {
                $this->Session->setFlash(__('Welcome, '. $this->Auth->user('username')));
                $this->redirect($this->Auth->redirectUrl());
            } else {
                $this->Session->setFlash(__('Invalid username or password'));
            }
        }
    }

    public function logout() {
        $this->redirect($this->Auth->logout());
    }

    public function index() {
        $this->User->recursive = 0;
        $this->set('users', $this->Paginator->paginate());
    }

    /**
     * add method
     *
     * @return void
     */
    public function add() {
        if ($this->request->is('post')) {
            $this->User->create();
            if ($this->User->save($this->request->data)) {
                $this->Session->setFlash(__('The user has been saved.'), 'default', array('class' => 'alert alert-success'));
                return $this->redirect(array('action' => 'index'));
            } else {
                $this->Session->setFlash(__('The user could not be saved. Please, try again.'), 'default', array('class' => 'alert alert-danger'));
            }
        }
    }
}
```
2015/05/29
[ "https://Stackoverflow.com/questions/30529142", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4905982/" ]
Just do the following in `AppController::beforeFilter()`:

```
if($this->Auth->user('role') == 'admin'){
    $this->Auth->loginRedirect = array('controller' => 'controller1', 'action' => 'action1');
}else{
    $this->Auth->loginRedirect = array('controller' => 'controller2', 'action' => 'action2');
}
```

[See the accepted answer of this related question](https://stackoverflow.com/questions/11626741/cakephp-auth-loginredirect-for-admin)
This problem is a perfect fit for CakePHP's event system. Inside your users controller's login action, dispatch an event, e.g. 'afterLogin', and put your post-login logic there:

```
public function login() {
    if($this->Auth->loggedIn()) {
        $event = new CakeEvent('Controller.users.afterLogin', $this, $this->Auth->user());
        $manager = $this->getEventManager();
        $manager->dispatch($event);
    }
}

/* @Event afterLogin */
public function afterLogin($user) {
    //check roles against acl and redirect
    if($user['role'] == 'admin'){
        $this->Auth->loginRedirect = array('controller' => 'controller1', 'action' => 'admin');
    }else{
        $this->Auth->loginRedirect = array('controller' => 'controller2', 'action' => 'supervisor');
    }
}
```
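For what it's worth, the role-to-destination mapping in both answers is language-agnostic; here is a minimal illustrative sketch in Python (the table and function names are made up, not CakePHP API):

```python
# Hypothetical role -> post-login destination table
LOGIN_REDIRECTS = {
    "admin": ("users", "index"),
    "collegesupervisor": ("collegeprofiles", "addinfo"),
}

def login_redirect(user, default=("users", "login")):
    # Fall back to a default route for unknown or missing roles
    return LOGIN_REDIRECTS.get(user.get("role"), default)

print(login_redirect({"role": "admin"}))  # ('users', 'index')
```

Keeping the mapping in one table makes adding a third role a one-line change.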
13,146,756
Is it possible to transmit GPS data from a GPS-enabled iPhone to a wifi-only iPad? Does anyone have sample code to share that would do this? How about getting GPS data from an Android via bluetooth, over to a wifi iPad?
2012/10/30
[ "https://Stackoverflow.com/questions/13146756", "https://Stackoverflow.com", "https://Stackoverflow.com/users/108512/" ]
Yes, but you would need to create an application for both devices that would communicate with each other. You cannot get the location over wifi without a custom application sending the data.
Have a look at the WiTap sample app. This app allows two devices to find each other and send data to each other using WiFi. You can adapt this code so one device sends location data it obtains to the other device instead of info about which rectangle was tapped. I have no info for doing this with Android.
13,028,403
I am using this jQuery basic ajax reader: ``` $.ajax({ url: url, dataType: 'jsonp', success: function (data) { console.log('data is', data); } }); ``` The full server response I get is: ``` jQuery17107194540228229016_1350987657731({"action":"", "type":"", "callerId":""}, {"errorCode":0,"errorDescription":"OK","success":true,"payload":null}); ``` However, when I try to output it with the `console.log('data is,data);` the output I get is: ``` data is Object {action: "", type: "", callerId: ""} ``` **How do I receive the other part of the server response?** ie: The part that tells me `success:true`: ``` {"errorCode":0,"errorDescription":"OK","success":true,"payload":null} ```
2012/10/23
[ "https://Stackoverflow.com/questions/13028403", "https://Stackoverflow.com", "https://Stackoverflow.com/users/657801/" ]
Try this; I don't know if it will help:

```
success: function(data, second) {
    console.log('data is', data, 'second is', second);
}
```

As several people have pointed out, the success function will only run if the request is a success. But if you have some special reason to use those return values, you could try adding an extra parameter (I think; I still haven't tested it myself).
With a regular same-origin request, jQuery's error callback *is* triggered for HTTP errors such as a 404. With JSONP, however, the response is loaded via a `<script>` tag, so jQuery cannot see the HTTP status code, and the success callback is essentially all you get (older jQuery versions never fired the error callback for JSONP at all). From that perspective, you'll always have to analyze the payload to see whether the result is the desired one, for example by checking a status field the server includes, like the `errorCode` and `success` fields in your response.
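As an aside, the raw response in the question is a JSONP call with two arguments, which is why `data` only ever holds the first object. For illustration, here is how that raw text could be unwrapped and both arguments inspected outside the browser (a Python sketch; the callback name is just copied from the question):

```python
import json
import re

raw = ('jQuery17107194540228229016_1350987657731('
       '{"action":"", "type":"", "callerId":""}, '
       '{"errorCode":0,"errorDescription":"OK","success":true,"payload":null});')

# Strip the callback wrapper, then parse the argument list as a JSON array
inner = re.match(r'^[\w$]+\((.*)\);?$', raw).group(1)
args = json.loads('[' + inner + ']')

data, status = args
print(status["success"])  # True
```

In the browser you would instead have the server fold everything into a single object, so jQuery hands the whole thing to `success` as `data`.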
20,987,595
I am working on a project in which I need to open a popup window when a div is clicked (`onclick="window.scrollTo(0,0);"`). I used an iframe for the link. My problem is this: when I click on an image, a lightbox opens with the product detail, which is loaded by JS (the data is loaded with the `.html()` function). When the page first loads and I click on the div, the popup window opens; but after I close the product-detail lightbox, reopen it, and click on the div again, the popup window does not open.

```
function ssdd() {
    //var myid=myid1;
    //$(document).ready(function() {
    $('#cboxLoadedContent div').on('click', '#learn', function() {
        $('#learn_more').AeroWindow({
            WindowTitle: 'Learn More',
            WindowPositionTop: 5,
            WindowPositionLeft: 'center',
            WindowWidth: 650,
            WindowHeight: 490,
            WindowAnimationSpeed: 1000,
            WindowAnimation: 'easeOutCubic',
            WindowResizable: false,
            WindowDraggable: true,
            WindowMinimize: true,
            WindowMaximize: false,
            WindowClosable: true
        });
    });
    //var afd= sdp();
    return false;
    // });
}
```
2014/01/08
[ "https://Stackoverflow.com/questions/20987595", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I found the solution: on the popup window's close button, we remove the div with id `learn_more` that contains the iframe, and then append a fresh `learn_more` div with the iframe below it. In other words, on close we delete the iframe div and at the same time recreate it, like this:

```
BTNClose.click(function () {
    $(this).find(".AeroWindow").css('display', 'block');
    var sdx = $('#cboxLoadedContent div').find('div.AeroWindow ui-draggable active');
    $('#cboxLoadedContent div').find('div.AeroWindow').removeClass('.active ui-draggable');
    $('#learn_more').remove();
    $('#cboxLoadedContent div').find('div#learn').append("<div id='learn_more' style='display: none; width: 100%; height: 100%;'><iframe width='100%' height='100%' frameborder='0' scrolling='yes' marginheight='0' marginwidth='0' src='window_page-price_learn_more' align='bottom'></iframe></div>");
    $('.AeroWindow ui-draggable active').remove();
    WindowContent = "";
    Window.css('display', 'none');
    return (false);
});
```
```
$(document).ready(function () {
    ssdd();
});

var prm = Sys.WebForms.PageRequestManager.getInstance();
prm.add_endRequest(function () {
    ssdd();
});
```

This calls the function again after every partial postback; check whether it works after a postback.
1,067,673
I have code like this:

```
public class A : IDisposable
{
    public CPlusCode cPlusCode { get; set; }

    public void CallB()
    {
        using (var bCode = new B(cPlusCode))
        {
            //do everything in B
        }
    }

    public void Dispose()
    {
        cPlusCode.Dispose();
    }
}

public class B : IDisposable
{
    private CPlusCode cpp;

    public B(CPlusCode cPlus)
    {
        cpp = cPlus;
    }

    public void Dispose()
    {
        cpp.Dispose(); //dispose everything
    }
}

public static void Main()
{
    for (int i = 0; i < 100000; i++)
    {
        var aObject = new A();
        aObject.CallB();
    }
}
```

The issue is that when I execute `Main`, `B` eats up a lot of memory to instantiate, and from my observation the memory used by the program is not freed up. Can Dispose really free the memory if there are other objects pointing to it?
2009/07/01
[ "https://Stackoverflow.com/questions/1067673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3834/" ]
`IDisposable` has nothing to do with reclaiming managed memory. `IDisposable` allows types to free resources not handled by garbage collection such as handles etc. For normal .NET types, the garbage collector will handle reclaiming memory when the objects are no longer referenced.
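The distinction can be sketched in Python, whose `with` statement plays the same role as C#'s `using`. This is an illustrative analog, not .NET code; the `Resource` class is invented for the example:

```python
# Illustrative analog of the C# using/IDisposable pattern: "disposing"
# releases an external resource, while the object's own memory is reclaimed
# separately, by the garbage collector, once nothing references it.

class Resource:
    """Stands in for something holding an unmanaged resource (handle, socket, ...)."""
    def __init__(self):
        self.handle_open = True

    def close(self):            # the Dispose() analog
        self.handle_open = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()            # runs like the end of a C# using block
        return False

with Resource() as r:
    assert r.handle_open        # resource is usable inside the block

# After the block the resource is released...
assert not r.handle_open
# ...but the object itself still exists; its memory is only reclaimed
# when the last reference (r) goes away and the collector runs.
```

The point is the same in both languages: the dispose/close call frees the external resource, not the object's memory.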
The GC will run when it decides it needs to, so 'timely' is not relevant. It will happen when it happens; i.e. it is non-deterministic
> > from my observation it seems that the > memory eaten by the program is not > freed up. > > > That is perfectly normal. The program will hang on to a certain amount of unused memory, and this will be released if the system really needs it. One might think that the best thing for performance would be to keep the memory usage as small as possible, but it's actually the other way around. The computer doesn't benefit anything at all from having a lot of unused memory, so for best performance the application should not do the extra work to minimise the memory usage until it's really needed. > > Can Dispose really free the memory if > there are other objects pointing to > it? > > > Yes and no... Calling Dispose on an object will not free the object itself, however if the object contains other objects, those can be released by the Dispose method. That will not free any memory by itself, but it will let the garbage collector do it on the next run.
If you have implemented it correctly, and you are releasing all unmanaged resources inside `Dispose()`, then the object should be collected (or better, will be eligible for collection) after you release all references to that object. Note, however, that you are not disposing object `A` in your example, which is also `IDisposable`. If `A` contains a reference to `B`, and you don't dispose `A`, then this might delay the collection of `B` as well (in case that `A` has some unmanaged stuff which may be creating a reference to `A`). Since unmanaged code seems to be referenced by `A` in your example, `A` should be responsible for disposing it.
`Dispose` is just a method. It doesn't have to do anything at all. After calling `Dispose` on an object, the object still exists, but can no longer be safely used. The runtime doesn't assist in enforcing this, however. A "solid" implementation of `Dispose` (one designed to assist in catching bugs) would set a `_disposed` flag inside the object to true, and every other method on the object would throw `ObjectDisposedException` if that flag is true (the `Dispose` method itself should silently ignore further calls). But it is totally up to the implementer how far they go in enforcing this pattern. An example would be `FileStream`. When it has an open file, the process's handle count will have increased by 1. When you call `Dispose` on it, the handle count will decrease. But this is only because the author of `FileStream` wrote their `Dispose` method to make that happen. Which leads to the next problem - you can see the process's handle count in Task Manager and that is a very simple counter, but how are you measuring the memory usage? Note that the numbers shown in Task Manager are far from straightforward measures.
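That defensive pattern can be roughly transliterated as follows (here in Python, with `RuntimeError` standing in for `ObjectDisposedException`; the `Connection` class and its methods are invented for illustration):

```python
# Sketch of the "solid" dispose pattern described above: a _disposed flag,
# an idempotent close(), and every other method refusing to run once the
# object has been disposed.

class Connection:
    def __init__(self):
        self._disposed = False

    def close(self):                 # the Dispose() analog; repeats are ignored
        self._disposed = True

    def send(self, data):
        if self._disposed:           # ObjectDisposedException analog
            raise RuntimeError("object has been disposed")
        return len(data)

c = Connection()
assert c.send(b"ping") == 4
c.close()
c.close()                            # second call is a harmless no-op
try:
    c.send(b"pong")
except RuntimeError:
    pass                             # disposed object correctly refuses use
else:
    raise AssertionError("disposed object should refuse further use")
```

Note that even after `close()`, the object still occupies memory until it is unreferenced and collected, which is exactly the behavior the question is observing.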
273,355
I am trying to build a simple JFET common source amplifier to get about 5-10 times gain for a signal coming from a microphone that already has a built-in amplifier, from Adafruit, [here](https://www.adafruit.com/products/1063) I am working with a circuit just like the one below, but with the capacitor on the output removed. [![enter image description here](https://i.stack.imgur.com/FoXzp.gif)](https://i.stack.imgur.com/FoXzp.gif) I have tried various values, but currently I have RD at 10 kΩ, RS at 1 kΩ, Cs and Cin are 0.1 μF, and RG is 1 MΩ. VDD is 5 volts. I tried two different transistors, the FQP30N06L [here](http://cdn.sparkfun.com/datasheets/Components/General/FQP30N06L.pdf) and the J310 [here](http://danssmallpartsandkits.net/J309-D.pdf). From what I understand, this should give a gain of 10x. I can generate a signal by whistling, and the preamp on my circuit gives about a 100-500 mV sine wave output. However, my output signal from the drain is always smaller than my input signal. I am not sure what is wrong here; any advice would be appreciated :)
2016/12/06
[ "https://electronics.stackexchange.com/questions/273355", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/111346/" ]
If you leave Cs out then you are correctly expecting a gain of 10 in the Common Source configuration you show. But the resistor values and devices you have selected will prevent a successful result. The FQP30N06L is an enhancement-mode device and won't work at all in this bias configuration. The J310 is a depletion-mode device (the right type of device), but its VGS(off) and zero-VGS current (IDSS) are too high to work in this configuration with this supply voltage and resistor values. You should read this to help your understanding: <http://www.vishay.com/docs/70595/70595.pdf> Your biasing is this type: [![enter image description here](https://i.stack.imgur.com/h6Mi5.png)](https://i.stack.imgur.com/h6Mi5.png) In this configuration Rs is part of both the bias and gain setting, which creates some compromises in setting the operating current. In your case the device (J310) has: VGS(off) of -2 to -6.5 V. Zero-volt VGS(ID) of 24 to 60 mA (this is usually called IDSS, the zero-VGS saturation current). Note: This device is really designed as an RF amplifier where Rs would be zero. Let's work through the design and see where the problems are when using a J310. Ignoring Rd for the moment (assume it is shorted out while we bias the device operating current), if you look at Figure 1 in the datasheet, you can see the VGS curve (RHS of graph) for the device. If VGS(off) is -2.0 V (the best of the J310 devices) the voltage across Rs can set the operating point (ID) somewhere under 2.0 V measured on the Source pin. Here is Figure 1 with our extra information added: [![enter image description here](https://i.stack.imgur.com/oMvFI.png)](https://i.stack.imgur.com/oMvFI.png) Notice that with a 1 kΩ Rs the Source voltage will be about 1.8 V and the operating current about 2 mA. If we now tried to add back the RD value of 10 kΩ we have a real problem: to draw 2 mA through 10k you need 20 V across it! The end result is that the JFET simply saturates, so you get no signal out.
You should be able to confirm this by measuring VD and VS. We'd typically expect that the quiescent point of VD (the Drain) should be about 2/3 of the supply voltage, or about 3.3 V in this case. That means the value of RD would be about 750 Ohms. That would limit the gain to less than 1. We just made an active attenuator...not very useful. Let's select a device that might be more appropriate. We can try a J113: <https://www.fairchildsemi.com/datasheets/J1/J111.pdf> This is a relatively common small-signal JFET. There is still a range of VGS(off) and IDSS and the graphs are a little less helpful this time, but we can use Figure 6 and get an idea of where the operating point might be. If we use the VGS(off) value as -1.1 V there is a graph for it (but all the devices will vary of course). [![enter image description here](https://i.stack.imgur.com/hUHT7.png)](https://i.stack.imgur.com/hUHT7.png) We now have an ID of about 520 uA and a VS of about 520 mV. At this current the voltage drop across a 10k load resistor would be about 5.2 V....closer to working, but it still won't work. We have some choices to make if we want to keep the 1k in the Source side. We could drop the value of RD to set the voltage on the Drain to about 3.3 V; that would require RD = (5-3.3)/0.00052 --> approximately 3.3k Ohms. However this would limit our gain to 3.3. Or we could get creative and make RS up of two resistors that total 1k Ohm and bypass one to AC signals. To get a gain of 10 we need a 3.3k RD and a 330 Ohm RS, leaving us 680 Ohms to be bypassed. The circuit would then look this way: ![schematic](https://i.stack.imgur.com/dZEuL.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fdZEuL.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
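The resistor arithmetic in this answer can be sanity-checked numerically. This is just a back-of-the-envelope script using the values quoted in the text (5 V supply, drain quiescent point at ~2/3 of VDD, ID ≈ 520 µA, target gain of 10), not a circuit simulation:

```python
# Checking the resistor arithmetic from the answer.

VDD = 5.0
VD_target = VDD * 2 / 3          # quiescent drain voltage, about 3.3 V
ID = 520e-6                      # operating current from the J113 example

RD = (VDD - VD_target) / ID      # drain resistor needed to drop VDD - VD
print(round(RD))                 # lands near 3.2k, i.e. a standard 3.3k part

# With RD = 3.3k, a gain of ~RD/RS needs 330 ohms left unbypassed in the
# source leg; the remaining 680 ohms of the original 1k gets bypassed by Cs.
RD_std, RS_unbypassed = 3300, 330
gain = RD_std / RS_unbypassed
assert abs(gain - 10) < 0.01
```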
The gain of this circuit isn't set by the ratio of R2 to R1. The capacitor CS is shorting out R1, and helps with the gain. However, to do this at audio frequencies, the capacitor value needs to be much larger. This circuit is an excellent example for learning LTspice. I suggest trying it, and you can get help from the LTspice mailing list if you need it. To get help from them, you will need to correctly upload an LTspice schematic showing what you tried. The job of R1 is to set the drain-to-source current. The gate voltage will be about 0V. When the supply is turned on, the JFET will start to conduct, and the source voltage will rise because of the current in R1. It will rise until the JFET begins to turn off. This provides a little bit of negative feedback, and it will find an equilibrium point. This equilibrium point is set by the value of R1, and it should be adjusted until you have about 10mA, which is 0.75V across the 75 Ohm R1. The resistor R2 can't be so large that the voltage on the source of J1 is too close to the voltage on the drain of J1. This is your analog output, and there needs to be a range of output where the drain voltage can vary without becoming equal to or less than the source voltage. That is the problem with the existing design. This topic is called 'how to bias a transistor' and it is lots of fun. Search for "[how to bias a JFET](https://www.google.com/search?q=how%20to%20bias%20a%20JFET)" I tried the same circuit in LTspice, except I used a U309 JFET and a 9V supply instead of a J310 and a 5V supply. For my transistor, the values R2=750, R1=75, and C1=100uF gave reasonable results. The values that work will depend on the parameters of the transistor, and I don't expect them to work with the J310. Usually, IDSS and VGS(off) are enough to do the bias calculations, and these values are in the datasheet.
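The self-biasing equilibrium described here can be sketched numerically with the common square-law JFET model, ID = IDSS·(1 − VGS/VGS(off))². The gate sits at 0 V, so VGS = −ID·RS, and we can bisect for the current where both relations hold. The IDSS and VGS(off) values below are assumed, datasheet-style example numbers, and the model is idealized, so this only approximates a real device:

```python
# Numerical sketch of the JFET self-bias equilibrium.

IDSS = 0.024       # 24 mA at VGS = 0 (low end of a J310-style spec, assumed)
VGS_off = -2.0     # pinch-off voltage (assumed)
RS = 1000.0        # source resistor from the original circuit

def drain_current(vgs):
    if vgs <= VGS_off:          # beyond pinch-off the channel is off
        return 0.0
    return IDSS * (1 - vgs / VGS_off) ** 2

lo, hi = 0.0, IDSS              # the equilibrium ID lies between 0 and IDSS
for _ in range(60):
    mid = (lo + hi) / 2
    # if the model supplies more current than the guess, the true ID is higher
    if drain_current(-mid * RS) > mid:
        lo = mid
    else:
        hi = mid

ID = (lo + hi) / 2
# self-consistency: the model current at VGS = -ID*RS equals ID
assert abs(drain_current(-ID * RS) - ID) < 1e-9
print(ID * 1e3, "mA, VS =", ID * RS, "V")   # with these assumed numbers, ~1.5 mA
```

The loop is exactly the negative-feedback settling the answer describes: more current raises the source voltage, which makes VGS more negative and throttles the current back, until the two balance.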
I made the circuit designed by Jack Creasey with some modifications for my needs: * I placed a 5K adjustable resistor for R1 * I removed C2 * I removed R4 Here are the FRA of 2 different boards: [![enter image description here](https://i.stack.imgur.com/fgHFf.png)](https://i.stack.imgur.com/fgHFf.png) [![enter image description here](https://i.stack.imgur.com/49Y0R.png)](https://i.stack.imgur.com/49Y0R.png) I hope I was helpful to someone. Thanks Jack Creasey!
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
You can't back up to an NTFS-formatted disk, as stated below: > > Note: Every available disk that can be used to store backups is listed. If you've partitioned a disk, the available partitions are listed. **Time Machine can't back up to** an external disk that's connected to an AirPort Extreme, or to an iPod, iDisk, or **a disk formatted for Microsoft Windows (NTFS or FAT format)**. If you select an NTFS or FAT-formatted disk, Time Machine prompts you to reformat the disk. Choose a different disk or reformat the disk in Mac OS Extended (Journaled) format. Because reformatting erases any files on the disk, only do this if you no longer need the files or if you have copies of them on a different disk. > > > This quote is from the [apple support page for Time Machine](http://support.apple.com/kb/HT1427) You could always reformat the disk in Mac OS Extended (Journaled) format, which would allow you to use it.
Copied to [A Super User answer](https://superuser.com/a/452316/84988) to **Equivalent for Time Machine that writes to NTFS disks**: --- Backup to NTFS ============== If you wish to use Time Machine in Lion or greater with an NTFS volume – and if you have a write-enabled driver for NTFS: * with [tmutil](https://developer.apple.com/library/mac/#documentation/darwin/reference/manpages/man8/tmutil.8.html) you can configure Time Machine to back up to a sparse bundle disk image, the .sparsebundle stored on NTFS. In some situations you may find that Time Machine simply offers to use an NTFS volume. This may occur if, say, a write-enabled driver for NTFS is installed *before* a physical disk with NTFS is introduced to OS X. Restore from NTFS ================= OS X can read NTFS, and so should be able to restore from a .sparsebundle in this environment. Whether Recovery OS is similarly prepared to read from NTFS and restore, I don't know.
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
You can't back up to an NTFS-formatted disk, as stated below: > > Note: Every available disk that can be used to store backups is listed. If you've partitioned a disk, the available partitions are listed. **Time Machine can't back up to** an external disk that's connected to an AirPort Extreme, or to an iPod, iDisk, or **a disk formatted for Microsoft Windows (NTFS or FAT format)**. If you select an NTFS or FAT-formatted disk, Time Machine prompts you to reformat the disk. Choose a different disk or reformat the disk in Mac OS Extended (Journaled) format. Because reformatting erases any files on the disk, only do this if you no longer need the files or if you have copies of them on a different disk. > > > This quote is from the [apple support page for Time Machine](http://support.apple.com/kb/HT1427) You could always reformat the disk in Mac OS Extended (Journaled) format, which would allow you to use it.
As others said, you cannot use it directly. The only way I found is: * Create a virtual disk in VMDK format * Mount it using some freeware tool * Create a sparsebundle in the VMDK * Configure Time Machine to use that VMDK Note that the intermediate VMDK is needed to prevent OS X from unmounting the sparsebundle (expect that behaviour if you mount a sparsebundle directly from a USB disk).
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
You can't back up to an NTFS-formatted disk, as stated below: > > Note: Every available disk that can be used to store backups is listed. If you've partitioned a disk, the available partitions are listed. **Time Machine can't back up to** an external disk that's connected to an AirPort Extreme, or to an iPod, iDisk, or **a disk formatted for Microsoft Windows (NTFS or FAT format)**. If you select an NTFS or FAT-formatted disk, Time Machine prompts you to reformat the disk. Choose a different disk or reformat the disk in Mac OS Extended (Journaled) format. Because reformatting erases any files on the disk, only do this if you no longer need the files or if you have copies of them on a different disk. > > > This quote is from the [apple support page for Time Machine](http://support.apple.com/kb/HT1427) You could always reformat the disk in Mac OS Extended (Journaled) format, which would allow you to use it.
If you have some data on the disk and don't want to format the whole disk, and the disk itself is quite big, create a separate partition on the NTFS disk. Do it on a PC with Windows XP/7 using Partition Magic or Partition Manager, then format this partition with Mac OS Disk Utility using the Mac OS Extended (Journaled) format. Next open Time Machine and choose the disk. You should see both the NTFS and Extended (Journaled) partitions. Choose the Extended (Journaled) one and back up your Mac.
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
You can't back up to an NTFS-formatted disk, as stated below: > > Note: Every available disk that can be used to store backups is listed. If you've partitioned a disk, the available partitions are listed. **Time Machine can't back up to** an external disk that's connected to an AirPort Extreme, or to an iPod, iDisk, or **a disk formatted for Microsoft Windows (NTFS or FAT format)**. If you select an NTFS or FAT-formatted disk, Time Machine prompts you to reformat the disk. Choose a different disk or reformat the disk in Mac OS Extended (Journaled) format. Because reformatting erases any files on the disk, only do this if you no longer need the files or if you have copies of them on a different disk. > > > This quote is from the [apple support page for Time Machine](http://support.apple.com/kb/HT1427) You could always reformat the disk in Mac OS Extended (Journaled) format, which would allow you to use it.
You **can** backup to an NTFS-formatted **volume**. I backed up my Mac (Yosemite) with *Time Machine* as per Graham's answer here (<https://apple.stackexchange.com/a/57082/134740>), **but** I had to use a *.sparseimage*, as a .sparsebundle image failed to be created on the NTFS volume – for details on the differences between the two, see: <https://support.apple.com/kb/PH22247>. In terms of restoring that image for recovery purposes, I tested it by restarting and holding the keys `Cmd+R` to boot into Mac OSX Recovery (<https://support.apple.com/en-ie/HT201314>) and Time Machine could not find the disk. I had to start Disk Utility, mount the image manually, then go back to Time Machine and it could see the volume and all the available backups in it. I didn't actually go ahead and start the restore but I **assume** if it can see the backups, it should be able to restore them : )
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
Copied to [A Super User answer](https://superuser.com/a/452316/84988) to **Equivalent for Time Machine that writes to NTFS disks**: --- Backup to NTFS ============== If you wish to use Time Machine in Lion or greater with an NTFS volume – and if you have a write-enabled driver for NTFS: * with [tmutil](https://developer.apple.com/library/mac/#documentation/darwin/reference/manpages/man8/tmutil.8.html) you can configure Time Machine to back up to a sparse bundle disk image, the .sparsebundle stored on NTFS. In some situations you may find that Time Machine simply offers to use an NTFS volume. This may occur if, say, a write-enabled driver for NTFS is installed *before* a physical disk with NTFS is introduced to OS X. Restore from NTFS ================= OS X can read NTFS, and so should be able to restore from a .sparsebundle in this environment. Whether Recovery OS is similarly prepared to read from NTFS and restore, I don't know.
As others said, you cannot use it directly. The only way I found is: * Create a virtual disk in VMDK format * Mount it using some freeware tool * Create a sparsebundle in the VMDK * Configure Time Machine to use that VMDK Note that the intermediate VMDK is needed to prevent OS X from unmounting the sparsebundle (expect that behaviour if you mount a sparsebundle directly from a USB disk).
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
Copied to [A Super User answer](https://superuser.com/a/452316/84988) to **Equivalent for Time Machine that writes to NTFS disks**: --- Backup to NTFS ============== If you wish to use Time Machine in Lion or greater with an NTFS volume – and if you have a write-enabled driver for NTFS: * with [tmutil](https://developer.apple.com/library/mac/#documentation/darwin/reference/manpages/man8/tmutil.8.html) you can configure Time Machine to back up to a sparse bundle disk image, the .sparsebundle stored on NTFS. In some situations you may find that Time Machine simply offers to use an NTFS volume. This may occur if, say, a write-enabled driver for NTFS is installed *before* a physical disk with NTFS is introduced to OS X. Restore from NTFS ================= OS X can read NTFS, and so should be able to restore from a .sparsebundle in this environment. Whether Recovery OS is similarly prepared to read from NTFS and restore, I don't know.
If you have some data on the disk and don't want to format the whole disk, and the disk itself is quite big, create a separate partition on the NTFS disk. Do it on a PC with Windows XP/7 using Partition Magic or Partition Manager, then format this partition with Mac OS Disk Utility using the Mac OS Extended (Journaled) format. Next open Time Machine and choose the disk. You should see both the NTFS and Extended (Journaled) partitions. Choose the Extended (Journaled) one and back up your Mac.
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
You **can** backup to an NTFS-formatted **volume**. I backed up my Mac (Yosemite) with *Time Machine* as per Graham's answer here (<https://apple.stackexchange.com/a/57082/134740>), **but** I had to use a *.sparseimage*, as a .sparsebundle image failed to be created on the NTFS volume – for details on the differences between the two, see: <https://support.apple.com/kb/PH22247>. In terms of restoring that image for recovery purposes, I tested it by restarting and holding the keys `Cmd+R` to boot into Mac OSX Recovery (<https://support.apple.com/en-ie/HT201314>) and Time Machine could not find the disk. I had to start Disk Utility, mount the image manually, then go back to Time Machine and it could see the volume and all the available backups in it. I didn't actually go ahead and start the restore but I **assume** if it can see the backups, it should be able to restore them : )
Copied to [A Super User answer](https://superuser.com/a/452316/84988) to **Equivalent for Time Machine that writes to NTFS disks**: --- Backup to NTFS ============== If you wish to use Time Machine in Lion or greater with an NTFS volume – and if you have a write-enabled driver for NTFS: * with [tmutil](https://developer.apple.com/library/mac/#documentation/darwin/reference/manpages/man8/tmutil.8.html) you can configure Time Machine to back up to a sparse bundle disk image, the .sparsebundle stored on NTFS. In some situations you may find that Time Machine simply offers to use an NTFS volume. This may occur if, say, a write-enabled driver for NTFS is installed *before* a physical disk with NTFS is introduced to OS X. Restore from NTFS ================= OS X can read NTFS, and so should be able to restore from a .sparsebundle in this environment. Whether Recovery OS is similarly prepared to read from NTFS and restore, I don't know.
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
You **can** backup to an NTFS-formatted **volume**. I backed up my Mac (Yosemite) with *Time Machine* as per Graham's answer here (<https://apple.stackexchange.com/a/57082/134740>), **but** I had to use a *.sparseimage*, as a .sparsebundle image failed to be created on the NTFS volume – for details on the differences between the two, see: <https://support.apple.com/kb/PH22247>. In terms of restoring that image for recovery purposes, I tested it by restarting and holding the keys `Cmd+R` to boot into Mac OSX Recovery (<https://support.apple.com/en-ie/HT201314>) and Time Machine could not find the disk. I had to start Disk Utility, mount the image manually, then go back to Time Machine and it could see the volume and all the available backups in it. I didn't actually go ahead and start the restore but I **assume** if it can see the backups, it should be able to restore them : )
As others said, you cannot use it directly. The only way I found is: * Create a virtual disk in VMDK format * Mount it using some freeware tool * Create a sparsebundle in the VMDK * Configure Time Machine to use that VMDK Note that the intermediate VMDK is needed to prevent OS X from unmounting the sparsebundle (expect that behaviour if you mount a sparsebundle directly from a USB disk).
7,280
When I plugged in my USB HDD (WD Passport Elite) for the first time, the system asked me whether I wanted to use this HDD as a Time Machine drive. I chose something like 'decide later' and continued my work. When I later tried to set up the Time Machine preferences, I couldn't find a way to set my USB HDD as the Time Machine drive. When I press 'Select drive for backup' I see an empty list, even though my drive is plugged in and works well. By the way, it is NTFS-formatted; could that be the problem? Thanks in advance
2011/01/29
[ "https://apple.stackexchange.com/questions/7280", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/1097/" ]
You **can** backup to an NTFS-formatted **volume**. I backed up my Mac (Yosemite) with *Time Machine* as per Graham's answer here (<https://apple.stackexchange.com/a/57082/134740>), **but** I had to use a *.sparseimage*, as a .sparsebundle image failed to be created on the NTFS volume – for details on the differences between the two, see: <https://support.apple.com/kb/PH22247>. In terms of restoring that image for recovery purposes, I tested it by restarting and holding the keys `Cmd+R` to boot into Mac OSX Recovery (<https://support.apple.com/en-ie/HT201314>) and Time Machine could not find the disk. I had to start Disk Utility, mount the image manually, then go back to Time Machine and it could see the volume and all the available backups in it. I didn't actually go ahead and start the restore but I **assume** if it can see the backups, it should be able to restore them : )
If you have some data on the disk and don't want to format the whole disk, and the disk itself is quite big, create a separate partition on the NTFS disk. Do it on a PC with Windows XP/7 using Partition Magic or Partition Manager, then format this partition with Mac OS Disk Utility using the Mac OS Extended (Journaled) format. Next open Time Machine and choose the disk. You should see both the NTFS and Extended (Journaled) partitions. Choose the Extended (Journaled) one and back up your Mac.
16,437,033
Hello, I have a DB (MongoDB) with many entries with a date in milliseconds. I want to extract all the entries that have a date between 6:00 and 10:00 in the morning. How can I do it? Is it possible to do it in a single query? Something like this extracts all the entries before Tue Jul 17 2012 14:09:05, for example: ``` db.OBSERVABLEPARAMETER.find({startDate:{$lte:1342526945150}}) ```
2013/05/08
[ "https://Stackoverflow.com/questions/16437033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/487561/" ]
Analogous to the $lte operator there is also the [$gte (greater-than-or-equal) operator](http://docs.mongodb.org/manual/reference/operator/gte/). Both can be combined in the same object: `db.OBSERVABLEPARAMETER.find({startDate:{$gte:1342560000000, $lte:1342570000000}})` (the values aren't specific timestamps, they are just to illustrate the concept). This allows you to get all data in a specific timeframe. But when you want to have all data within a specific time period **on any day**, it gets a lot more complicated, both for you and for the database. Such a complex query requires a [$where operator](http://docs.mongodb.org/manual/reference/operator/where/) with a JavaScript function which extracts the hours from the timestamp and returns true when they are between 6 and 10. By the way: the recommended way to store dates in MongoDB is using the Date type. Using integer timestamps is discouraged. See <http://docs.mongodb.org/manual/core/document/#document-bson-type-considerations>
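For the "between 6:00 and 10:00 on any day" case, the $where-style predicate described above could be sketched like this. This is a sketch under stated assumptions: it takes the collection/field names from the question, treats the millisecond timestamps as UTC (swap in `getHours()` if you want server-local time), and treats 10:00 as an exclusive upper bound:

```javascript
// Predicate for "startDate falls between 06:00 (inclusive) and 10:00 (exclusive)"
// on any day. Assumes startDate is stored as milliseconds since the epoch
// and that hours should be taken in UTC.
function isMorning(startDateMs) {
  const h = new Date(startDateMs).getUTCHours();
  return h >= 6 && h < 10;
}

// In the mongo shell this would be used roughly as:
// db.OBSERVABLEPARAMETER.find({ $where: function () {
//   var h = new Date(this.startDate).getUTCHours();
//   return h >= 6 && h < 10;
// } });

console.log(isMorning(Date.UTC(2012, 6, 17, 8, 0, 0)));  // an 08:00 UTC timestamp; prints true
```

Note that $where queries run JavaScript per document and cannot use indexes, so this is far slower than a plain range query; the cleaner long-term fix is to store the hour (or a Date) as its own indexed field.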
I wasn't able to build the query with JavaScript; I solved it with a PHP script on a web page that queries MongoDB in this manner (this function calculates the average hour at which an object with the field name "glicemia" is inserted):

```
public function count_glicemie_momento_media($momento) {
    try {
        // access the collection
        $collection = $this->db->OBSERVABLEPARAMETER;
        // execute the query: retrieve all matching documents
        $cursor = $collection->find(array(
            "parameter_name" => "glicemia",
            "measuredValue2" => $momento,
            "status" => array('$ne' => "DELETE")));
        $sum = 0;
        foreach ($cursor as $glicemia) {
            // date("G", ...) extracts the hour (0-23) from the ms timestamp
            $sum += date("G", $glicemia["startDate"] / 1000);
        }
        return round($sum / $cursor->count(), 2);
    } catch (MongoConnectionException $e) {
        die('Error connecting to MongoDB server');
    } catch (MongoException $e) {
        die('Error: ' . $e->getMessage());
    }
    return -1;
}
```
13,615,176
I'm using Visual Studio 2010 Pro to build a solution that contains two projects. Project A contains most of my source code, while Project B is intended to run independently but must use some of the source code contained in Project A. Under the current configuration, Project A is contained as a reference within Project B. I'd like to be able to build and maintain versions of each project independently, but it appears that when I build the entire solution, ProjectB.exe cannot run without ProjectA.exe in the same local directory. I would think and hope that when the .exe binaries are compiled, all of their dependencies are packaged within each, but that appears not to be the case. In fact, any attempt to run ProjectB.exe while ProjectA.exe is not present results in a System.IO.FileNotFoundException. Is there a way to build a version of ProjectB.exe that runs independently and avoids code duplication?
2012/11/28
[ "https://Stackoverflow.com/questions/13615176", "https://Stackoverflow.com", "https://Stackoverflow.com/users/731898/" ]
In cases where you want common code, the best solution is to break out the common classes into a third assembly to serve as a library (as per Adriano's suggestion). The other option he hints at is to use the "as link" option when adding an existing file to the second project. If you don't know where this option is: use "Add existing file", then in the file-selection dialog the "Add" button has a drop-down where you can select "As Linked File" (or something to that effect). This allows you to compile the same classes into multiple projects. But keep in mind that the namespace of the linked file cannot be changed for the second project: if the namespace was "ProjectA.Domain", this is how you need to access it in Project B. This was a useful trick for Silverlight projects back before multi-platform assemblies were introduced.
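For reference, what the "Add As Link" choice produces in the consuming project file is roughly the following MSBuild fragment (the file and project names here are placeholders, not taken from the question):

```xml
<!-- In ProjectB.csproj: compile a source file that physically lives in
     ProjectA's folder, without copying it. The Link element controls the
     display path inside Solution Explorer. -->
<ItemGroup>
  <Compile Include="..\ProjectA\Common.cs">
    <Link>Common.cs</Link>
  </Compile>
</ItemGroup>
```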
If you want to get rid of the dependency on A, you will have to extract the common logic into another project (let's call it C), as Adriano suggested in a comment. If you need an even looser bond between the projects, you can reference A (or C) not as a project, but as a built assembly (.dll file) and set the `Specific Version` reference property to `True`. Additionally, if your project/codebase structure is more complex, check more assembly sharing options **[here](https://stackoverflow.com/questions/13520063/share-common-codes-between-multiple-projects)**.
13,615,176
I'm using Visual Studio 2010 Pro to build a solution that contains two projects. Project A contains most of my source code, while Project B is intended to run independently but must use some of the source code contained in Project A. Under the current configuration, Project A is contained as a reference within Project B. I'd like to be able to build and maintain versions of each project independently, but it appears that when I build the entire solution, ProjectB.exe cannot run without ProjectA.exe in the same local directory. I would think and hope that when the .exe binaries are compiled, all of their dependencies are packaged within each, but that appears not to be the case. In fact, any attempt to run ProjectB.exe while ProjectA.exe is not present results in a System.IO.FileNotFoundException. Is there a way to build a version of ProjectB.exe that runs independently and avoids code duplication?
2012/11/28
[ "https://Stackoverflow.com/questions/13615176", "https://Stackoverflow.com", "https://Stackoverflow.com/users/731898/" ]
If you want to get rid of the dependency on A, you will have to extract the common logic into another project (let's call it C), as Adriano suggested in a comment. If you need an even looser bond between the projects, you can reference A (or C) not as a project, but as a built assembly (.dll file) and set the `Specific Version` reference property to `True`. Additionally, if your project/codebase structure is more complex, check more assembly sharing options **[here](https://stackoverflow.com/questions/13520063/share-common-codes-between-multiple-projects)**.
Some options: 1. The common option: separate the common code into a third class library (DLL) project and have both ProjectA and ProjectB depend on it. The downside is that in order to run the projects you now need two files (the main exe and the dll). This method is how most software is developed: a single executable and a bunch of DLLs. 2. The correct option: separate the common code into a third project and modify the project files to create executables that contain both assemblies (similar to statically linked libraries in unmanaged code). The downside is that Visual Studio does not support this out of the box, and you need to modify the project files, which are actually MSBuild definition files, to do this. 3. The ugly option: create links to the common files from ProjectA inside ProjectB. This is the same as copying the common code to the other project, but you're still left with one source file. The downside is that you have to do this for every file and maintain the same structure in both projects. This is an ugly, if viable, option. Choose one of the others.
13,615,176
I'm using Visual Studio 2010 Pro to build a solution that contains two projects. Project A contains most of my source code, while Project B is intended to run independently but must use some of the source code contained in Project A. Under the current configuration, Project A is contained as a reference within Project B. I'd like to be able to build and maintain versions of each project independently, but it appears that when I build the entire solution, ProjectB.exe cannot run without ProjectA.exe in the same local directory. I would think and hope that when the .exe binaries are compiled, all of their dependencies are packaged within each, but that appears not to be the case. In fact, any attempt to run ProjectB.exe while ProjectA.exe is not present results in a System.IO.FileNotFoundException. Is there a way to build a version of ProjectB.exe that runs independently and avoids code duplication?
2012/11/28
[ "https://Stackoverflow.com/questions/13615176", "https://Stackoverflow.com", "https://Stackoverflow.com/users/731898/" ]
In cases where you want common code, the best solution is to break out the common classes into a third assembly to serve as a library (as per Adriano's suggestion). The other option he hints at is to use the "as link" option when adding an existing file to the second project. If you don't know where this option is: use "Add existing file", then in the file-selection dialog the "Add" button has a drop-down where you can select "As Linked File" (or something to that effect). This allows you to compile the same classes into multiple projects. But keep in mind that the namespace of the linked file cannot be changed for the second project: if the namespace was "ProjectA.Domain", this is how you need to access it in Project B. This was a useful trick for Silverlight projects back before multi-platform assemblies were introduced.
Some options: 1. The common option: separate the common code into a third class library (DLL) project and have both ProjectA and ProjectB depend on it. The downside is that in order to run the projects you now need two files (the main exe and the dll). This method is how most software is developed: a single executable and a bunch of DLLs. 2. The correct option: separate the common code into a third project and modify the project files to create executables that contain both assemblies (similar to statically linked libraries in unmanaged code). The downside is that Visual Studio does not support this out of the box, and you need to modify the project files, which are actually MSBuild definition files, to do this. 3. The ugly option: create links to the common files from ProjectA inside ProjectB. This is the same as copying the common code to the other project, but you're still left with one source file. The downside is that you have to do this for every file and maintain the same structure in both projects. This is an ugly, if viable, option. Choose one of the others.
69,195,856
I have two arrays.

```
let dateArray = ['2018-05-04T00:00:00+01:00', '2019-04-20T00:00:00+01:00', '2020-05-29T00:00:00+01:00'];
let rangesArray = [['2021-09-01','2022-09-01'],['2019-09-01','2020-09-01']];
```

How can I check whether the dates from dateArray fall between the dates in rangesArray? rangesArray[0] is the first range - I'm interested in dates between 2021-09-01 and 2022-09-01. rangesArray[1] is the second range - I'm interested in dates between 2019-09-01 and 2020-09-01.
2021/09/15
[ "https://Stackoverflow.com/questions/69195856", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11815078/" ]
You said: > > i thought that this shouldn't happen since the function += .add() creates a new index for the newly added list/line only. > > > That is correct. However, the new element points to the same `line` object as all the other elements, because you are using a single object for all lines. So this, for example: ``` minesField.add(line) minesField.add(line) ``` causes `minesField` to be a list of two references to the same list (same as the list object that `line` refers to). One way to make each line refer to a unique object and avoid duplication is to add a *copy* of `line` each time: ``` minesField.add(line.toMutableList()) ```
I assume you perform `minesField.add(line)` repeatedly using exactly the same `line` object. The point is: `minesField.add(line)` does not copy the contents of `line` into `minesField`. It adds the `line` object itself to it. If you then modify `line` you will modify the contents of `minesField` as well. As a result, you end up with `minesField` that contains 9 references to exactly the same `line` object. You need to either create a new `line` object with each new line or you need to create a copy before adding to `minesField` by using: `line.toList()`.
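The reference-aliasing behavior both answers describe is not specific to Kotlin's `MutableList`; the same pitfall and the copy-based fix (analogous to `line.toMutableList()`) can be sketched in plain JavaScript, used here purely for illustration:

```javascript
// Adding the same mutable object twice stores two references, not two copies.
const line = [0, 0, 0];
const minesField = [];
minesField.push(line);
minesField.push(line);          // same object as element 0, not a new row

line[0] = 9;                    // mutate through the original reference
console.log(minesField[0][0], minesField[1][0]);   // prints 9 9 - both "rows" changed

// Fix: add an independent copy each time.
const minesField2 = [];
minesField2.push([...line]);
minesField2.push([...line]);
line[1] = 7;                    // later mutation no longer leaks into the field
console.log(minesField2[0][1], minesField2[1][1]); // prints 0 0 - copies unaffected
```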
57,251,504
Is there a way to use `MapBox GL JS` without access token? I cannot find any hint in the documentation of [MapBox GL JS](https://docs.mapbox.com/mapbox-gl-js/api/), however, `Uber` suggest that it is [possible with their library](https://uber.github.io/react-map-gl/#/Documentation/getting-started/about-mapbox-tokens), [providing `React` Components](https://github.com/uber/react-map-gl) for `MapBox GL JS`. From the documentation of `react-map-gl` > > Display Maps Without A Mapbox Token > > > It is possible to use the map component without the Mapbox service, if > you use another tile source (for example, if you host your own map > tiles). You will need a custom Mapbox GL style that points to your own > vector tile source, and pass it to ReactMapGL using the mapStyle prop. > This custom style must match the schema of your tile source. > > > Source <https://uber.github.io/react-map-gl/#/Documentation/getting-started/about-mapbox-tokens> Is it possible to use the "native" `MapBox GL JS` without Access Token? If so, how?
2019/07/29
[ "https://Stackoverflow.com/questions/57251504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3935035/" ]
Yep, as the comments mention, just don't set the accessToken and refrain from using any Mapbox styles or tiles: ``` var map = new mapboxgl.Map({ container: 'map', center: [-74.50, 40], zoom: 9 }); ``` Then you can add your layers programmatically via `map.addLayer`/`map.addSource`, or just create your own style.json file referencing your tile server and layers. The style specification is documented extensively here: <https://docs.mapbox.com/mapbox-gl-js/style-spec/>
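If you prefer configuring everything up front rather than calling `addSource`/`addLayer`, the `style` option can be a plain object pointing at your own tiles. A minimal sketch (the tile URL template below is a placeholder for your own server, not a real service):

```javascript
// Build a minimal Mapbox GL style object for a self-hosted raster source.
// No Mapbox services are referenced, so no access token is required.
function buildSelfHostedStyle(tileUrlTemplate) {
  return {
    version: 8,
    sources: {
      'my-tiles': {
        type: 'raster',
        tiles: [tileUrlTemplate], // placeholder tile server
        tileSize: 256,
      },
    },
    layers: [
      { id: 'base', type: 'raster', source: 'my-tiles' },
    ],
  };
}

const style = buildSelfHostedStyle('https://example.com/tiles/{z}/{x}/{y}.png');
// Then: new mapboxgl.Map({ container: 'map', style: style, center: [...], zoom: 9 })
console.log(style.version); // 8
```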
Check out the code snippet at <https://docs.mapbox.com/mapbox-gl-js/example/map-tiles/>. You can delete the line with "mapboxgl.accessToken" and you're good to go. I have just tested it with the ReactMapboxGL component and it works! Just pass the "mapStyle" prop to the component with the style object from the docs.
57,251,504
Is there a way to use `MapBox GL JS` without an access token? I cannot find any hint in the documentation of [MapBox GL JS](https://docs.mapbox.com/mapbox-gl-js/api/); however, `Uber` suggests that it is [possible with their library](https://uber.github.io/react-map-gl/#/Documentation/getting-started/about-mapbox-tokens), [providing `React` Components](https://github.com/uber/react-map-gl) for `MapBox GL JS`. From the documentation of `react-map-gl`: > > Display Maps Without A Mapbox Token > > > It is possible to use the map component without the Mapbox service, if > you use another tile source (for example, if you host your own map > tiles). You will need a custom Mapbox GL style that points to your own > vector tile source, and pass it to ReactMapGL using the mapStyle prop. > This custom style must match the schema of your tile source. > > > Source <https://uber.github.io/react-map-gl/#/Documentation/getting-started/about-mapbox-tokens> Is it possible to use the "native" `MapBox GL JS` without an Access Token? If so, how?
2019/07/29
[ "https://Stackoverflow.com/questions/57251504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3935035/" ]
Yep, as the comments mention, just don't set the accessToken and refrain from using any Mapbox styles or tiles: ``` var map = new mapboxgl.Map({ container: 'map', center: [-74.50, 40], zoom: 9 }); ``` Then you can add your layers programmatically via `map.addLayer`/`map.addSource`, or just create your own style.json file referencing your tile server and layers. The style specification is documented extensively here: <https://docs.mapbox.com/mapbox-gl-js/style-spec/>
As people have already commented, you need to add your own data source; Stamen has some open tile services, or plain OSM tiles would do. Change the `style` key to be an object with `sources` and `layers` parameters. The Mapbox style docs are pretty good: <https://docs.mapbox.com/mapbox-gl-js/style-spec/> I have created a Medium post which goes step by step - <https://medium.com/@markallengis/simple-web-map-using-mapbox-gl-js-a44e583e0589> Quick example of what I mean below; note that if your service is vector, update *type* accordingly. ``` style:{ 'version': 8, 'sources': { 'raster-tiles': { 'type': 'raster', 'tiles': [ 'https://yourtileservicehere/{z}/{x}/{y}.jpg' ], 'tileSize': 256, } }, 'layers': [{ 'id': 'simple-tiles', 'type': 'raster', 'source': 'raster-tiles', 'minzoom': 0, 'maxzoom': 22 }] } ```
57,251,504
Is there a way to use `MapBox GL JS` without an access token? I cannot find any hint in the documentation of [MapBox GL JS](https://docs.mapbox.com/mapbox-gl-js/api/); however, `Uber` suggests that it is [possible with their library](https://uber.github.io/react-map-gl/#/Documentation/getting-started/about-mapbox-tokens), [providing `React` Components](https://github.com/uber/react-map-gl) for `MapBox GL JS`. From the documentation of `react-map-gl`: > > Display Maps Without A Mapbox Token > > > It is possible to use the map component without the Mapbox service, if > you use another tile source (for example, if you host your own map > tiles). You will need a custom Mapbox GL style that points to your own > vector tile source, and pass it to ReactMapGL using the mapStyle prop. > This custom style must match the schema of your tile source. > > > Source <https://uber.github.io/react-map-gl/#/Documentation/getting-started/about-mapbox-tokens> Is it possible to use the "native" `MapBox GL JS` without an Access Token? If so, how?
2019/07/29
[ "https://Stackoverflow.com/questions/57251504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3935035/" ]
As people have already commented, you need to add your own data source; Stamen has some open tile services, or plain OSM tiles would do. Change the `style` key to be an object with `sources` and `layers` parameters. The Mapbox style docs are pretty good: <https://docs.mapbox.com/mapbox-gl-js/style-spec/> I have created a Medium post which goes step by step - <https://medium.com/@markallengis/simple-web-map-using-mapbox-gl-js-a44e583e0589> Quick example of what I mean below; note that if your service is vector, update *type* accordingly. ``` style:{ 'version': 8, 'sources': { 'raster-tiles': { 'type': 'raster', 'tiles': [ 'https://yourtileservicehere/{z}/{x}/{y}.jpg' ], 'tileSize': 256, } }, 'layers': [{ 'id': 'simple-tiles', 'type': 'raster', 'source': 'raster-tiles', 'minzoom': 0, 'maxzoom': 22 }] } ```
Check out the code snippet at <https://docs.mapbox.com/mapbox-gl-js/example/map-tiles/>. You can delete the line with "mapboxgl.accessToken" and you're good to go. I have just tested it with the ReactMapboxGL component and it works! Just pass the "mapStyle" prop to the component with the style object from the docs.
55,674,323
I am using sequelize.js with node.js and postgres. I got 2 simple tables from an example as a 'POC' of sorts. I changed the ID to be UUID and I am having an issue with the insert into the second table ( with the UUID FK ). I am using postman to test it. I am creating todo rows with UUID with no issues, Then I am trying to create a todo item which has a todo id as foreign key and it seems that it is failing to recognize that ID! I tried a manual script in postgres and it worked. I am probably missing something code wise but I cant figure out what. here is the error which is being returned to me in postman - ``` { "name": "SequelizeDatabaseError", "parent": { "name": "error", "length": 96, "severity": "ERROR", "code": "22P02", "file": "uuid.c", "line": "137", "routine": "string_to_uuid", "sql": "INSERT INTO \"TodoItems\" (\"id\",\"content\",\"complete\",\"createdAt\",\"updatedAt\",\"todoId\") VALUES ($1,$2,$3,$4,$5,$6) RETURNING *;" }, "original": { "name": "error", "length": 96, "severity": "ERROR", "code": "22P02", "file": "uuid.c", "line": "137", "routine": "string_to_uuid", "sql": "INSERT INTO \"TodoItems\" (\"id\",\"content\",\"complete\",\"createdAt\",\"updatedAt\",\"todoId\") VALUES ($1,$2,$3,$4,$5,$6) RETURNING *;" }, "sql": "INSERT INTO \"TodoItems\" (\"id\",\"content\",\"complete\",\"createdAt\",\"updatedAt\",\"todoId\") VALUES ($1,$2,$3,$4,$5,$6) RETURNING *;" } ``` Here are the relevant js files - todoItems.js controller - ``` const TodoItem = require('../dal/models').TodoItem; const uuid = require('uuid/v4'); module.exports = { create(req, res) { return TodoItem .create({ content: req.body.content, todoId: req.params.todoId, }) .then(todoItem => res.status(201).send(todoItem)) .catch(error => res.status(400).send(error)); }, update(req, res) { return TodoItem .find({ where: { id: req.params.todoItemId, todoId: req.params.todoId, }, }) .then(todoItem => { if (!todoItem) { return res.status(404).send({ message: 'TodoItem Not Found', }); } return todoItem 
.update({ content: req.body.content || todoItem.content, complete: req.body.complete || todoItem.complete, }) .then(updatedTodoItem => res.status(200).send(updatedTodoItem)) .catch(error => res.status(400).send(error)); }) .catch(error => res.status(400).send(error)); }, destroy(req, res) { return TodoItem .find({ where: { id: req.params.todoItemId, todoId: req.params.todoId, }, }) .then(todoItem => { if (!todoItem) { return res.status(404).send({ message: 'TodoItem Not Found', }); } return todoItem .destroy() .then(() => res.status(204).send()) .catch(error => res.status(400).send(error)); }) .catch(error => res.status(400).send(error)); }, }; ``` todos.js controller- ``` const Todo = require('../dal/models').Todo; const TodoItem = require('../dal/models').TodoItem; module.exports = { create(req, res) { return Todo .create({ title: req.body.title, }) .then((todo) => res.status(201).send(todo)) .catch((error) => res.status(400).send(error)); }, list(req, res) { return Todo .findAll({ include: [{ model: TodoItem, as: 'todoItems', }], order: [ ['createdAt', 'DESC'], [{ model: TodoItem, as: 'todoItems' }, 'createdAt', 'ASC'], ], }) .then((todos) => res.status(200).send(todos)) .catch((error) => res.status(400).send(error)); }, retrieve(req, res) { return Todo .findByPk(req.params.todoId, { include: [{ model: TodoItem, as: 'todoItems', }], }) .then((todo) => { if (!todo) { return res.status(404).send({ message: 'Todo Not Found', }); } return res.status(200).send(todo); }) .catch((error) => res.status(400).send(error)); }, update(req, res) { return Todo .findByPk(req.params.todoId, { include: [{ model: TodoItem, as: 'todoItems', }], }) .then(todo => { if (!todo) { return res.status(404).send({ message: 'Todo Not Found', }); } return todo .update({ title: req.body.title || todo.title, }) .then(() => res.status(200).send(todo)) .catch((error) => res.status(400).send(error)); }) .catch((error) => res.status(400).send(error)); }, destroy(req, res) { return Todo 
.findByPk(req.params.todoId) .then(todo => { if (!todo) { return res.status(400).send({ message: 'Todo Not Found', }); } return todo .destroy() .then(() => res.status(204).send()) .catch((error) => res.status(400).send(error)); }) .catch((error) => res.status(400).send(error)); }, }; ``` todo table create migration - ``` module.exports = { up: (queryInterface, Sequelize) => queryInterface.createTable('Todos', { id: { allowNull: false, primaryKey: true, type: Sequelize.UUID, }, title: { type: Sequelize.STRING, allowNull: false, }, createdAt: { allowNull: false, type: Sequelize.DATE, }, updatedAt: { allowNull: false, type: Sequelize.DATE, }, }), down: (queryInterface /* , Sequelize */) => queryInterface.dropTable('Todos'), }; ``` todo-item table create migration - ``` module.exports = { up: (queryInterface, Sequelize) => queryInterface.createTable('TodoItems', { id: { allowNull: false, primaryKey: true, type: Sequelize.UUID, }, content: { type: Sequelize.STRING, allowNull: false, }, complete: { type: Sequelize.BOOLEAN, defaultValue: false, }, createdAt: { allowNull: false, type: Sequelize.DATE, }, updatedAt: { allowNull: false, type: Sequelize.DATE, }, todoId: { type: Sequelize.UUID, onDelete: 'CASCADE', references: { model: 'Todos', key: 'id', as: 'todoId', }, }, }), down: (queryInterface /* , Sequelize */) => queryInterface.dropTable('TodoItems'), }; ``` todo model - ``` const uuid = require('uuid/v4'); 'use strict'; module.exports = (sequelize, DataTypes) => { const Todo = sequelize.define('Todo', { title: { type: DataTypes.STRING, allowNull: false, } }); Todo.associate = (models) => { Todo.hasMany(models.TodoItem, { foreignKey: 'todoId', as: 'todoItems', }); }; Todo.beforeCreate((item, _ ) => { return item.id = uuid(); }); return Todo; }; ``` todo-item model - ``` const uuid = require('uuid/v4'); 'use strict'; module.exports = (sequelize, DataTypes) => { const TodoItem = sequelize.define('TodoItem', { content: { type: DataTypes.STRING, allowNull: false, }, 
complete: { type: DataTypes.BOOLEAN, defaultValue: false, } }); TodoItem.associate = (models) => { TodoItem.belongsTo(models.Todo, { foreignKey: 'todoId', onDelete: 'CASCADE', }); }; TodoItem.beforeCreate((item, _ ) => { return item.id = uuid(); }); return TodoItem; }; ```
2019/04/14
[ "https://Stackoverflow.com/questions/55674323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8918221/" ]
What does your router code look like? Are you using the correct path parameter for todoId? If you're using Express, for example, it should look like `app.post("/todos/:todoId/todo_items", todoItemController.create)`. Note the camelCase todoId. That will ensure that the `req.params.todoId` you're referencing in the todoItems controller has the right value. Also, make sure you have a correct body parser so `req.body.content` is handled correctly. In Express, this is done via the body-parser library and `app.use(bodyParser.json())`. Add a breakpoint or log statement in the todoItem controller's create code and verify that you actually have the correct parameter values.
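To see why the parameter name in the route pattern matters, here is a tiny sketch of how an Express-style `:todoId` segment maps onto `req.params` (illustrative only, not Express internals):

```javascript
// Illustrative sketch of Express-style path-parameter matching:
// the name after ':' in the pattern becomes the key on req.params.
function matchParams(pattern, path) {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = path.split('/').filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      params[patternParts[i].slice(1)] = pathParts[i]; // capture segment
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

const params = matchParams(
  '/todos/:todoId/todo_items',
  '/todos/f2ec9ecf-31e5-458d-847e-5fcca0a90c3e/todo_items'
);
console.log(params.todoId); // the UUID segment from the path
// With a pattern like '/todos/:todo_id/todo_items' instead, params.todoId
// would be undefined — and Sequelize would then receive undefined for the
// uuid column, producing exactly this kind of string_to_uuid failure.
```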
If you happen to hit the error above, it might be because you are nesting other entities in your request body, and therefore the UUID is not getting converted from string to a UUID. For instance, if you have a request body like ``` { "Transaction": { "id" : "f2ec9ecf-31e5-458d-847e-5fcca0a90c3e", "currency" : "USD", "type_id" : "bfa944ea-4ce1-4dad-a74e-aaa449212ebf", "total": 8000.00, "fees": 43.23, "description":"Description here" }, } ``` and in your controller you are creating your entity like ``` try { await Transaction.create( { id: req.body.Transaction.id, currency: req.body.Transaction.currency, type_id: req.body.Transaction.type_id, total: req.body.Transaction.total, fees: req.body.Transaction.fees, description: req.body.Transaction.description, }...... ``` your `id` and `type_id` are most likely not being converted from string to a UUID. There are multiple ways of tackling this. The most straightforward approach is to do an explicit conversion from string to UUID. To do this, import `parse` from the `uuid` npm module and do the explicit conversion as in the code sample below. ``` const { parse: uuidParse } = require("uuid"); try { await Transaction.create( { id: uuidParse(req.body.Transaction.id), currency: req.body.Transaction.currency, type_id: uuidParse(req.body.Transaction.type_id), total: req.body.Transaction.total, fees: req.body.Transaction.fees, description: req.body.Transaction.description, }..... ``` This explicit conversion from string to a UUID will mostly solve the issue.
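An alternative is to validate the incoming string before it ever reaches the database, so a malformed or missing value fails fast with a clear 400 instead of Postgres's opaque `22P02`. A minimal sketch using a plain regex, with no extra dependency assumed (the `assertUuid` helper name is illustrative):

```javascript
// Validate a UUID string before handing it to the database, so a bad or
// undefined value produces a clear client error instead of error 22P02
// ("invalid input syntax for type uuid") from string_to_uuid in Postgres.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function assertUuid(value, field) {
  if (typeof value !== 'string' || !UUID_RE.test(value)) {
    const err = new Error(`${field} is not a valid UUID: ${value}`);
    err.status = 400; // let the error handler map this to a 400 response
    throw err;
  }
  return value;
}

// Valid value passes straight through:
console.log(assertUuid('f2ec9ecf-31e5-458d-847e-5fcca0a90c3e', 'id'));
// assertUuid(undefined, 'todoId') would throw — the typical symptom when
// the route parameter name does not match what the controller reads.
```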