Q: how to calculate recall quickly and correctly with duplicate document IDs in Python 2.7 I have two tsv data files (gold and results) with four columns and I would like to calculate the recall score. Here is the format of the gold file: (doc1) 173 174 mike (doc2) 189 194 sara (doc2) 189 194 mike (doc2) 200 207 car format of the results file: (doc1) 173 174 mike (doc2) 189 194 mike (doc2) 200 207 car and here is my code: from __future__ import division def eval(): tp=0 #true positives fn=0 #false negatives gold=[] results=[] with open('/path/to/gold.tsv', 'rb') as goldFile, open('/path/to/results.tsv', 'rb') as f: for l in goldFile: parts = l.decode('utf-8').split('\t') try: gold.append((parts[0], parts[3])) except:pass for line in f: cols = line.decode('utf-8').split('\t') try: results.append((cols[0], cols[3])) except:pass for i,j in gold: doc = [] for docid in results: if results[0] == i: doc.append(docid) for k, v in doc: if j == v: tp += 1 gold.remove((i,j)) results.remove((i,j)) continue gold.remove((i,j)) fn += 1 return float(tp)/(tp+fn) print(eval()) The output is: 0.0 Recall should be 0.75 in the above case; what am I doing wrong? And how can I calculate it more efficiently? I tried with dictionaries, but since I have duplicate document IDs that is not possible. I'd really appreciate any guidance. Thanks in advance.
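A multiset comparison handles the duplicate (document, name) pairs directly. Below is a sketch using collections.Counter, with the sample rows hard-coded instead of read from the TSV files; it runs on both Python 2.7 and 3:

```python
from collections import Counter

def recall(gold_pairs, result_pairs):
    """Recall = true positives / gold size, with duplicates counted as a multiset."""
    gold = Counter(gold_pairs)
    results = Counter(result_pairs)
    tp = sum((gold & results).values())  # multiset intersection handles duplicates
    total = sum(gold.values())
    return float(tp) / total if total else 0.0

# Pairs taken from the sample files in the question.
gold = [("doc1", "mike"), ("doc2", "sara"), ("doc2", "mike"), ("doc2", "car")]
results = [("doc1", "mike"), ("doc2", "mike"), ("doc2", "car")]
print(recall(gold, results))  # 0.75
```

The Counter lookups make this roughly linear in the number of rows, rather than the quadratic remove-inside-a-loop approach in the question.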
{ "language": "en", "url": "https://stackoverflow.com/questions/52407630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: excel show/hide group based on combobox selection I need some help to figure out how to unhide/hide a group based on an activeX combo box selection. I currently have two groups (group_1 and group_2) and a combobox (activeX) with two selections (2021-2022 and 2022-2023). When 2021-2022 is selected from the drop down, I want group_1 to be unhidden (it is hidden by default). When 2022-2023 is selected from the drop down, I want group_1 to be hidden and group_2 unhidden (it is hidden by default). I am very new to VBA and have tried to put some code together for the first group and drop down selection option, but I have had no luck. Private Sub ComboBox1_Change_2() Select Case ComboBox1.Text Case "2021-2022" With ActiveSheet.Shapes("group_1") If .Visible = False Then .Visible = True Else .Visible = False End With End If End Sub Is this something that can be done? A: Try this: Private Sub ComboBox1_Change() Dim txt txt = ComboBox1.Text With Me 'assuming this is in the worksheet code module .Shapes("group_1").Visible = txt = "2021-2022" .Shapes("group_2").Visible = txt = "2022-2023" .Shapes("group_3").Visible = Len(txt) > 0 'any option selected End With End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/72816584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Getting TypeError trying to open() a file in write mode with Python I have a Python script that in my mind should: * *Open a file *Save its content in a variable *For each line in the variable: * *Edit it with a regular expression *Append it to another variable *Write the second variable to the original file Here is a MWE version of the script: # [omitting some setup] with open(setFile, 'r') as setFile: olddata = setFile.readlines() newdata = '' for line in olddata: newdata += re.sub(regex, newset, line) with open(setFile, 'w') as setFile: setFile.write(newdata) When I run the script I get the following error: Traceback (most recent call last): File C:\myFolder\myScript.py, line 11, in <module> with open(setFile, 'w') as setFile: TypeError: expected str, bytes or os.PathLike object, not _io.TextIOWrapper As far as I can understand, Python is complaining about receiving the setFile variable as an argument of open() because it isn't of the expected type, but then why did it accept it before (when I only read the file)? I suppose that my mistake is quite evident but, as I am a neophyte in Python, I can't find out where it is. Could anyone help me? A: Just curious why you are using the same variable name for your file path, then as your file handle, and then again in your next with statement. _io.TextIOWrapper is the object from your previous open, which has been assigned to the setFile variable. Try: with open(setFile, 'r') as readFile: olddata = readFile.readlines() newdata = '' for line in olddata: newdata += re.sub(regex, newset, line) with open(setFile, 'w') as writeFile: writeFile.write(newdata)
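A minimal reproduction may make the rebinding clearer. The path, regex, and newset values here are hypothetical stand-ins for the question's setup:

```python
import os
import re
import tempfile

regex, newset = r"old", "new"  # hypothetical pattern and replacement
path = os.path.join(tempfile.mkdtemp(), "settings.txt")
with open(path, "w") as f:
    f.write("old value\n")

setFile = path                       # starts out as a str (a path)
with open(setFile, "r") as setFile:  # ...but the with-target rebinds it to the file object
    olddata = setFile.readlines()

newdata = "".join(re.sub(regex, newset, line) for line in olddata)

# At this point open(setFile, "w") would raise the TypeError from the question,
# because setFile is now a (closed) TextIOWrapper, not a path.
# Using a separate name for the handle avoids the problem:
with open(path, "w") as out:
    out.write(newdata)
```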
{ "language": "en", "url": "https://stackoverflow.com/questions/59020487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Breaking on a deadlock between multiple startup projects in Visual Studio I have multiple startup projects in my client/server solution. The server is a Console app and the client is a WinForms app. The server/console is launched first in case that matters. Now there is a deadlock caused by some synchronization client-side code that blocks the server. Thread synchronization is done using simple lock statements. When the deadlock occurs, both apps freeze of course and hitting pause/break in VS only breaks the server app, not the client. There are two questions here: * *How could I choose which project to break out of multiple startup projects? *If a lock statement is stuck in a deadlock, is there a way to find out which line of code has a current lock on that object? A: I think your best solution would be to debug your client and server in separate instances of Visual Studio, setting the startup projects accordingly. As for the second question, I normally set a GUID and output it on acquire and release of a lock, to see if this is happening. If it is, I set breakpoints, debug, and look at the stack to see where the calls are coming from. You may be able to output System.Environment.StackTrace to a log to get this information, but I've never attempted it. A: You can use two Visual Studio instances: one starting the console (server), one starting the client. I would also check whether you really need a lock statement. What do you want to lock? Do you always need an exclusive lock? Or are there some operations which can happen in parallel and only some which are exclusive? In that case you could use ReaderWriterLockSlim, which can reduce the risk of deadlocks.
{ "language": "en", "url": "https://stackoverflow.com/questions/20451007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: HTML5 tag as container bad for SEO? I'm trying to use the <a> tag in HTML5 more as a container as this tag can now have block elements as children, example: before (valid XHTML 1.1) <div> <h3> <a href="page.html" title="article title">article title</a> </h3> <p> text </p> <a href="page.html" title="article title" > <img alt="image"> </a> <a href="page.html" title="article title" > read more </a> </div> after (valid HTML5) <a href="page.html" title="article title" > <h3> article title </h3> <p> text </p> <img alt="image"> <div> read more </div> </a> Does this new way of markup have any effects for SEO? A: OK, removing pure semantics from your question (which, in my mind, does have a material impact on deciding on implementing your chosen method) and concentrating on pure "SEO" value and impact: The first example needs to be qualified more, as if we take your example as literal, then you are linking to the same page.html 3 times. Google (specifically) only takes the link anchor value from the 1st link to any page that it comes across, so - the value for the first example is only extracted from that first link. The 2nd link (using an IMG tag with an ALT attribute as the anchor value), and the 3rd link using read more as the anchor value are effectively "ignored". It's important that other signals are used to supplement the first link's true intended value, such as surrounding text, images etc. The 2nd example (HTML5), wraps all of that semantic/surrounding content up to make the effective 'anchor' value from which search engines will derive the link's intended meaning, and then as a consequence, the meaning of the destination page of the link. Using an anchor tag as a containing wrapper for content that contains additional emphasis (the H tag), an image and an additional div only increases the difficulty that a search engine has to decipher the intended meaning of the link so it can associate it with the destination page. 
Search engines (and Google predominantly) are constantly improving their crawling ability to enable better algorithmic parsing and processing of the HTML. Apart from emphasis signals (which are very low), Google mostly ignores the mark-up. The exception is of course links - so making an effort to simplify the parsing/processing by providing clear signals as to a link's anchor text is the safest way forward. Expecting them to understand all of the differences of HTML3, vs HTML4, vs HTML5 and all of the transitional, strict and other variations of each, is probably expecting too much. TL;DR Possibly, but only in terms of true link value. A: As far as I know, the second way is not bad in any way in terms of SEO, but the first may be slightly better, as the titles and images are more closely tied to the link. Q. But better by how much? A. Maybe not by much.
{ "language": "en", "url": "https://stackoverflow.com/questions/8383846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: how to pass pointer to a method as another method argument Hi all! I have some class hierarchy class A {public: virtual void foo(int, T*) = 0; virtual void foo1(int, T*) = 0;}; class B : public A {public: void foo(int, T*) override; void foo1(int, T*) override;}; class C : public B {public: void foo(int, T*) override; void foo1(int, T*) override;}; In client code class D{void client_foo(A* pA, bool, T*);}; void D::client_foo(A* pA, bool b, T* pT) { if (b) pA->foo(1050, pT); else pA->foo1(5010, pT); } I want to introduce a new function void D::client_helper(???) which will receive pA, an int value, pT, and a pointer to the method of class A to be called. So, D::client_foo(...) could be rewritten as: void D::client_foo(A* pA, bool b, T* pT) { if (b) client_helper(pA, 1050, pT, std::mem_fn(&A::foo)); else client_helper(pA, 5010, pT, std::mem_fn(&A::foo1)); } The question is: what signature should D::client_helper() have? A: Since both A::foo and A::foo1 have the same signature, there's no need for std::mem_fn, or another abstraction; just have client_helper take a plain pointer to member function of A. void client_helper(A* pA, int i, T* pT, void(A::*memfn)(int, T*)) { (pA->*memfn)(i, pT); } And call it as void client_foo(A* pA, bool b, T* pT) { if (b) client_helper(pA, 1050, pT, &A::foo); else client_helper(pA, 5010, pT, &A::foo1); } Live demo A: For passing functions as arguments, you have std::function. A: I've used std::function<ret_type(arg_1)> and std::bind instead of std::mem_fn, e.g. client function: void client_helper(std::function<void (T*)> fnFoo, T* pT) { fnFoo(pT); } call of the function: using namespace std::placeholders; A* pA; T* pT; client_helper(std::bind(&A::foo, pA, 1050, _1), pT);
{ "language": "en", "url": "https://stackoverflow.com/questions/25670766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: looking for maximum in the array of structs I'm just a beginner in the learning stage. I am supposed to arrange a struct of point (x,y,z) so that structure p[n] has the point with the greatest x stored in it. Is my method correct? If not, are there any simpler methods to do this? struct point { float x; float y; } p[1000]; void sortptx(struct point *t, int ctr); int main() { int n = 100; sortptx(&p, n); return 0; } void sortptx(struct point *t, int ctr) { float temp; int i; for(i = 0; i < ctr-1; i++) { if (t[ctr]->x < t[i]->x) { temp = t[ctr]->x; t[ctr]->x = t[i]->x; t[i]->x = temp; } } } A: There are at least a few bugs here: * *As mentioned in the comments, struct point *t is either a pointer to a single struct point, or an array of points. It is not a pointer to an array of pointers to struct point. So t[i]->x should be t[i].x. *If ctr is intended to be the length of an array of points, t[ctr] will run off the end of the array, and could access uninitialized memory. In that case, t[ctr-1] would be the last element of the array. *Rather than swapping entire points, you are just swapping the x coordinates. A: From what I understand you don't want to sort it, but find the point with maximum x and make sure it is stored at p[n-1]. struct point { float x; float y; } p[1000]; This is how it could look: void max(struct point *t, int ctr) { struct point temp; int i; for(i = 0; i < (ctr - 1); i++) { if (t[ctr - 1].x < t[i].x) { temp = t[ctr - 1]; t[ctr - 1] = t[i]; t[i] = temp; } } } This function takes struct point *t as an argument, which is a pointer to the first point, just like p is. t[i] is the struct point at index i (not a pointer to struct point), so you use . instead of -> when you access its members. And since you are indexing from zero, the last element of an array of size n has index n-1, which is the place where the struct point with the highest x will be stored.
Example: int main () { for(int i = 0; i < 99; i++) { p[i].x = 1; p[i].y = 3; } p[4].x = 7; int n = 100; max(p, n); cout << p[n - 1].x; return 0; } output: 7
{ "language": "en", "url": "https://stackoverflow.com/questions/9166777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to concatenate a list of values in sparql? Suppose I have the URI http://dbpedia.org/page/Manmohan_Singh, which has a list of years in its dbpprop:years property. When I write a query like PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> PREFIX dbpedia: <http://dbpedia.org/resource/>PREFIX dcterms: <http://purl.org/dc/terms/> PREFIX dbpedia-owl: <http://dbpedia.org/ontology/>PREFIX category: <http://dbpedia.org/resource/Category:> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>PREFIX foaf: <http://xmlns.com/foaf/0.1/>PREFIX dbpprop: <http://dbpedia.org/property/> PREFIX dbprop: <http://dbpedia.org/property/>PREFIX grs: <http://www.georss.org/georss/> PREFIX category: <http://dbpedia.org/resource/Category:> PREFIX owl: <http://www.w3.org/2002/07/owl#> PREFIX dbpprop: <http://dbpedia.org/property/> PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT DISTINCT ?x ?name ?abs ?birthDate ?birthplace ?year ?party ?office ?wiki WHERE { ?x owl:sameAs? dbpedia:Manmohan_Singh. ?x dbpprop:name ?name. ?x dbpedia-owl:birthDate ?birthDate. ?x dbpedia-owl:birthPlace ?birthplace. ?x dbpprop:years ?year. ?x dbpprop:party ?party. ?x dbpedia-owl:office ?office. ?x foaf:isPrimaryTopicOf ?wiki. ?x rdfs:comment ?abs. FILTER(lang(?abs) = 'en') } I get the result for each year in a different row, and hence repeated data in the other columns. Is there a way I can get it as a list in just one column, like all the years in one column, comma separated, or something like that? Similarly for the property dbpedia-owl:office. A: Look at GROUP_CONCAT, but it may be easier to retrieve the data in multiple rows (you can sort to put repeats adjacent to each other) and process in code. A: This is similar to Aggregating results from SPARQL query, but the problem is actually a bit more complex, because there are multiple variables that have more than one result. ?name, ?office, and ?birthPlace have the same issue.
You can work around this using group_concat, but you'll need to use distinct as well, to keep from getting, e.g., the same ?year repeated multiple times in your concatenated string. group by reduces the number of rows that you have in a solution, but in each of those rows, you have a set of values for the variables that you didn't group by. E.g., since ?year isn't in the group by, you have a set of values for ?year, and you have to do something with them. You could, e.g., select (sample(?year) as ?aYear) to grab just one from the set, or you could do as we've done here, and select (group_concat(distinct ?year;separator=", ") as ?years) to concatenate the distinct values into a string. You'll want a query like the following, which produces one row: SELECT ?x (group_concat(distinct ?name;separator="; ") as ?names) ?abs ?birthDate (group_concat(distinct ?birthplace;separator=", ") as ?birthPlaces) (group_concat(distinct ?year;separator=", ") as ?years) ?party (group_concat(distinct ?office;separator=", ") as ?offices) ?wiki WHERE { ?x owl:sameAs? dbpedia:Manmohan_Singh. ?x dbpprop:name ?name. ?x dbpedia-owl:birthDate ?birthDate. ?x dbpedia-owl:birthPlace ?birthplace. ?x dbpprop:years ?year. ?x dbpprop:party ?party. ?x dbpedia-owl:office ?office. ?x foaf:isPrimaryTopicOf ?wiki. ?x rdfs:comment ?abs. FILTER(langMatches(lang(?abs),"en")) } group by ?x ?abs ?birthDate ?party ?wiki SPARQL results
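To illustrate the "process in code" route from the first answer above: once the endpoint returns one row per value, the rows can be merged client-side. Here is a sketch with itertools.groupby, using hypothetical rows rather than live DBpedia results:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical (subject, year) rows, as a SPARQL endpoint might return them.
rows = [
    ("Manmohan_Singh", "2004"),
    ("Manmohan_Singh", "1991"),
    ("Manmohan_Singh", "2009"),
]

rows.sort(key=itemgetter(0))  # groupby only merges adjacent keys
merged = {
    subject: ", ".join(sorted({year for _, year in group}))
    for subject, group in groupby(rows, key=itemgetter(0))
}
print(merged["Manmohan_Singh"])  # 1991, 2004, 2009
```

The set comprehension deduplicates repeated values, mirroring what `group_concat(distinct ...)` does server-side.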
{ "language": "en", "url": "https://stackoverflow.com/questions/20231536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: 7506 Error: The library for UDF/XSP/UDM HCPRE_AUTODDP.oreplace2 could not be found Below are two queries. The first query runs fine; however, the second one throws a 7506 error. I recompiled the function, but that didn't help. SEL PID, SimpleDefinitionID, SimpleDefinitionName, SimpleDefinitionQuery ,ABC.OREPLACE (cast(SimpleDefinitionQuery as varchar(1000)),') OR (',')') FROM ABC.SimpleDefinitions where PID='71500001' SEL PID, SimpleDefinitionID, SimpleDefinitionName, SimpleDefinitionQuery ,ABC.OREPLACE (cast(SimpleDefinitionQuery as varchar(1000)),') OR (',')') FROM ABC.SimpleDefinitions where PID='71400001' ; Any idea why that's happening?
{ "language": "en", "url": "https://stackoverflow.com/questions/44459736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I resolve "unexpected EOF" error when pulling docker image I have Docker Desktop 4.16.2 (95914) on MacOS Ventura 13.1, and as far as I know I'm not connected to any proxies. I've authenticated to docker hub via docker desktop, and have tried to pull a python base image by running "docker build .", with the following line at the top of the Dockerfile: FROM python:3 I get the following error: Get "https://registry-1.docker.io/v2/": unexpected EOF Does anyone know what might be causing this and how to resolve it? A: I pinned down the issue to privileges changes since Docker Desktop 4.15.0 for Mac. What fixed the issue on my end was to downgrade to 4.14.1: * *Uninstall Docker completely. This can be done by opening Docker Desktop UI, clicking the Bug icon and clicking "Uninstall". Then, the application can be moved to the bin. *Install Docker Desktop 4.14.1 I encountered this issue on a company Mac where I suspect the Privileges app I need to use to elevate permissions doesn't work well with the privileges changes Docker made starting from 4.15.0. Extract from their 4.15.0 changelog: Docker Desktop for Mac no longer needs to install the privileged helper process com.docker.vmnetd on install or on the first run. For more information see Permission requirements for Mac.
{ "language": "en", "url": "https://stackoverflow.com/questions/75286398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CSS Wordpress z-index not stacking canvas elements I'm trying to stack 2 canvas elements on a Wordpress site. The problem is that the canvas elements refuse to stack. Link to actual page: WebPage CSS: #plane{ position:relative; z-index: 110; border: 1px solid black; width: 100%; } #plane1{ position:relative; z-index: 120; border: 1px solid black; width: 100%; } I was using clear:both;, left: 0px;, top: 0px; but it didn't make any difference. The only way to shift the second canvas to the top is to add top: -200px;, but top: -100% does not work. Need help with this. A: You can try this: * *Wrap them inside a common div which you set position:relative; on *Then set position:absolute; for both the canvas elements inside that div element *Use left:0;top:0; (unit not needed when 0) to adjust the position for the canvases if necessary Using relative on the parent element makes the absolute positioned elements inside it relative to it. If you use relative on both you would need to adjust the position for one of them (as you already discovered).
{ "language": "en", "url": "https://stackoverflow.com/questions/23756782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: sql data change Currently my data looks like this: A 15902 8.11 9.20 7 8 5 6 and I need it to look like this: A 15902 2021 8.11 7 5 A 15902 2022 9.20 8 6 I'm quite unsure how to do this. Any help is greatly appreciated! A: You can unpivot this using CROSS APPLY (VALUES ...): SELECT t.Hospital, t.Zip, v.Year, v.Paid$, v.Visits, v.LOS FROM [MyTable] T CROSS APPLY (VALUES (2021, Paid$_21, Visits21, LOS21), (2022, Paid$_22, Visits22, LOS22) ) v(Year, Paid$, Visits, LOS) Note that this only queries the base table once. db<>fiddle A: SELECT Hospital, Zip, '2021' As Year, Paid$_21 As Paid$, Visits21 as Visits, LOS21 as LOS FROM [MyTable] UNION SELECT Hospital, Zip, '2022' As Year, Paid$_22 As Paid$, Visits22 as Visits, LOS22 as LOS FROM [MyTable] You can also try UNPIVOT
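The UNION approach can be sketched end-to-end with SQLite (which supports UNION ALL but not CROSS APPLY). Column names such as Paid21 are assumed stand-ins for the Paid$_21-style names in the answers, since $ is not valid in an unquoted SQLite identifier:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyTable (
    Hospital TEXT, Zip TEXT,
    Paid21 REAL, Paid22 REAL,
    Visits21 INTEGER, Visits22 INTEGER,
    LOS21 INTEGER, LOS22 INTEGER)""")
conn.execute("INSERT INTO MyTable VALUES ('A', '15902', 8.11, 9.20, 7, 8, 5, 6)")

# One SELECT per year, stacked with UNION ALL, turns the wide row into long rows.
rows = conn.execute("""
    SELECT Hospital, Zip, 2021 AS Year, Paid21 AS Paid, Visits21 AS Visits, LOS21 AS LOS
      FROM MyTable
    UNION ALL
    SELECT Hospital, Zip, 2022, Paid22, Visits22, LOS22
      FROM MyTable
     ORDER BY Year
""").fetchall()

for row in rows:
    print(row)
# ('A', '15902', 2021, 8.11, 7, 5)
# ('A', '15902', 2022, 9.2, 8, 6)
```

UNION ALL is used rather than UNION because there are no duplicate rows to eliminate, so the extra distinct step would be wasted work.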
{ "language": "en", "url": "https://stackoverflow.com/questions/73238660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to convert php array to javascript array in php I'm working on core PHP, and I need to know how to convert a PHP array to a JavaScript array. My example code is below, please check it. I have tried for a long time to debug this but am not getting any leads; please help me resolve this issue. Here is my PHP array data: Array ( [0] => 001-1234567 [1] => 1234567 [2] => 12345678 [3] => 12345678 [4] => 12345678 ) Here is the JavaScript array: var cities = [ "Aberdeen", "Ada", "Adamsville", "Addyston", "Adelphi" ]; A: You can use this script to convert a PHP array to JavaScript: <script type='text/javascript'> <?php $php_array = array('abc','def','ghi'); $js_array = json_encode($php_array); echo "var javascript_array = ". $js_array . ";\n"; ?> </script> A: You can simply convert with JSON encode: echo json_encode($your_array);
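The reason json_encode works here is that JSON is (almost entirely) a subset of JavaScript literal syntax. The same idea can be demonstrated with Python's json.dumps standing in for PHP's json_encode:

```python
import json

php_style_array = ["001-1234567", "1234567", "12345678"]

# json.dumps plays the role of json_encode: its output is also a valid
# JavaScript array literal, so it can be embedded directly in a <script> block.
js_array = json.dumps(php_style_array)
script = "var cities = %s;" % js_array
print(script)
# var cities = ["001-1234567", "1234567", "12345678"];
```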
{ "language": "en", "url": "https://stackoverflow.com/questions/49269963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: I would like to create a text generator using JFrame just like in the picture inside : I'm starting to get familiar with Java and the JFrame package, but I'm not quite there. This is what I would like to get: As you can see, I'd like to have the possibility to fill boxes and then take what is in the boxes and make a pre-written sentence with it ! A: In Swing, there are two different components: JTextArea and JTextPane. The JTextArea is easy to use, but doesn't allow formatting. If you do not plan on changing the formatting of different words, that is the one to use. The JTextPane is more robust but harder to use. Check out the Java tutorial for more information. http://docs.oracle.com/javase/tutorial/uiswing/components/text.html A: I hope this helps... TextGenerator.java package textgenerator; import java.awt.GridLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JTextField; import javax.swing.JTextPane; import javax.swing.SwingUtilities; public class TextGenerator { JFrame frame; JPanel panel; JTextPane textPane; JLabel namel; JLabel agel; JTextField namef; JTextField agef; JButton button; public TextGenerator() { frame = new JFrame("My Frame");//Construct the frame frame.setBounds(200, 100, 1000, 500);//set the size and position frame.setLayout(new GridLayout(1, 2));//set layout with 1 row and 2 columns frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setResizable(false);//restrict the resizing of the frame panel = new JPanel();//create a panel panel.setLayout(null); textPane = new JTextPane();//create a textpane textPane.setEditable(false); frame.add(panel);//add panel to the frame frame.add(textPane);//add textpane to the frame //create labels and textfields and add them to the panel namel = new JLabel("Name : "); namel.setBounds(20, 200, 150, 20); agel = new JLabel("Age : "); agel.setBounds(20, 250, 150, 20); namef = new
JTextField(); namef.setBounds(220, 200, 150, 20); agef = new JTextField(); agef.setBounds(220, 250, 150, 20); panel.add(namel); panel.add(agel); panel.add(namef); panel.add(agef); //create button and add it to the panel button = new JButton("Done !"); button.setBounds(350, 400, 100, 20); panel.add(button); //set the required text to the textfield on button click button.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { textPane.setText("Hello !\n" + "My name is " + namef.getText() + ", and I'm " + agef.getText() + ".\n" + "How are you ?"); } }); frame.setVisible(true);//make the frame visible } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { @Override public void run() { new TextGenerator(); } }); } } Output looks like this
{ "language": "en", "url": "https://stackoverflow.com/questions/27360091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Recursively Draw a Box JavaScript I am trying to draw a box inside another box, and keep drawing nested boxes until the innermost box is 2px wide. Here is the fiddle: http://jsfiddle.net/443wovkk/ JavaScript: var innerBox = $('.box'); var innerBoxDimentions; var boxHtml = innerBox.clone().removeClass('outter'); $(document).ready(function(){ do { innerBox = findInnerBox(innerBox); innerBoxDimentions = innerBox.width() / 2; boxHtml = boxHtml.width(innerBoxDimentions).height(innerBoxDimentions); innerBox.append(boxHtml); } while (innerBoxDimentions > 2); //var boxHtml = innerBox.clone().width(innerBoxDimentions/2).height(innerBoxDimentions/2); innerBox.append(boxHtml); }); function findInnerBox(box){ if(box.children('.box').length > 0){ findInnerBox(box.children('.box')) } else{ return box; } } Currently it only draws one box. It should keep drawing more boxes inside the innermost box until the last box is 2px wide. How do I recursively draw a box inside the innermost box until the last box drawn reaches 2px wide? A: There are two critical problems with your code keeping it from working. 1) You are always updating the same "box" variable. You need to create a different one each time. I fixed this by adding a call to clone(). 2) The recursive function does not return a value after recursing. Add a return here in findInnerBox.
The following code does what you want with those minor adjustments: var innerBox = $('.box'); var innerBoxDimentions; var boxHtml = innerBox.clone().removeClass('outter'); $(document).ready(function(){ do { innerBox = findInnerBox(innerBox); innerBoxDimentions = innerBox.width() / 2; boxHtml = boxHtml.clone().width(innerBoxDimentions).height(innerBoxDimentions); innerBox.append(boxHtml); } while (innerBoxDimentions > 2); //var boxHtml = innerBox.clone().width(innerBoxDimentions/2).height(innerBoxDimentions/2); innerBox.append(boxHtml); }); function findInnerBox(box){ if(box.children('.box').length > 0){ return findInnerBox(box.children('.box')) } else{ return box; } } /* CSS Styles for Recursive Box */ .box{ border:solid 1px #000; } .outter { width: 500px; height:500px } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.0/jquery.min.js"></script> <div class="box outter"></div> A: Opps, too late I guess. Here is my version of the code var size = 500; while(size = size - 20){ $('.box:last').append('<div class="'+size+' box"></div>'); $('.'+size).css('height',size).css('width',size); } http://jsfiddle.net/443wovkk/2/
{ "language": "en", "url": "https://stackoverflow.com/questions/28118016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Error: Return value of Maatwebsite\Excel\Sheet::mapArraybleRow() must be of the type array, string returned I just upgraded Laravel 5.4 to 5.5 and now I have to change all the code that used the old Laravel-Excel. I'm using PHP 7.2.25, Windows/Wamp. I am trying to upload an Excel file, get its data, do lots of checks and calculations on it (not in the code yet) and then create a new Excel file and give the user the Windows 'Save File' option. * *Not sure how to get the Windows save file window; I don't see any explanation in the documentation. * *If I can't get the Windows save file window, I'm not sure how to set the path to: My Documents\test for example. *With the code I have, I get this error: Symfony \ Component \ Debug \ Exception \ FatalThrowableError (E_RECOVERABLE_ERROR) Type error: Return value of Maatwebsite\Excel\Sheet::mapArraybleRow() must be of the type array, string returned My Import class code: namespace App\Imports; use Illuminate\Support\Collection; use Maatwebsite\Excel\Concerns\ToCollection; use App\Exports\TimesheetsExport; use Maatwebsite\Excel\Concerns\WithHeadingRow; use Maatwebsite\Excel\Facades\Excel; use App\User; class TimesheetsImport implements ToCollection, WithHeadingRow { private $user; public function __construct($param) { $this->user = $param; } public function collection(Collection $rows) { $data = new Collection([$this->user->fullName()]); foreach ($rows as $row) { $data->put($row['date'], $row['in'], $row['out']); } return Excel::download(new TimesheetsExport($data), 'testtttt.xlsx'); My Export class: namespace App\Exports; use Maatwebsite\Excel\Concerns\FromCollection; use Illuminate\Support\Collection; class TimesheetsExport implements FromCollection { protected $rows; public function __construct(Collection $rows) { $this->rows = $rows; } public function collection() { return $this->rows; } } Can someone please help? A: After a day and a half, I managed to make it work.
The working code: Import Class: namespace App\Imports; use Maatwebsite\Excel\Concerns\ToCollection; use Maatwebsite\Excel\Concerns\WithHeadingRow; use Maatwebsite\Excel\Facades\Excel; use App\User; class TimesheetsImport implements ToCollection, WithHeadingRow { public $data; public function collection($rows) { $this->data = $rows; } } Export class: namespace App\Exports; use Maatwebsite\Excel\Concerns\FromCollection; class TimesheetsExport implements FromCollection { protected $rows; public function __construct($rows) { $this->rows = $rows; } public function collection() { return $this->rows; } } My controller: public function importTimesheets(Request $request) { $import = new TimesheetsImport; $rows = Excel::toCollection($import, $request->file('file')); return Excel::download(new TimesheetsExport($rows), 'test.xlsx'); } With this code, I get the Windows 'Save File' dialog as well. This was not fun, but it's done; I hope it will help someone else. A: use Maatwebsite\Excel\Concerns\ToModel; use Maatwebsite\Excel\Concerns\WithHeadingRow; use Excel; use Maatwebsite\Excel\Concerns\FromArray; use Maatwebsite\Excel\Excel as ExcelType; ... $array = [[1,2,3],[3,2,1]]; return \Excel::download(new class($array) implements FromArray{ public function __construct($array) { $this->array = $array; } public function array(): array { return $this->array; } },'db.xlsx', ExcelType::XLSX); I decided to do it this way, without additional export classes. Laravel 7*, Maatwebsite 3*
{ "language": "en", "url": "https://stackoverflow.com/questions/59824682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: The use of **.join()** in this particular code The code is from a UVA problem. It goes like: Consider a list (list = []). You can perform the following commands: insert, print, remove, append, sort, pop, reverse. Initialize your list and read in the value of n, followed by n lines of commands where each command will be of the types listed above. Iterate through each command in order and perform the corresponding operation on your list. Inputs: The first line contains an integer, denoting the number of commands. Each of the subsequent lines contains one of the following commands. Sample Input 12 insert 0 5 insert 1 10 insert 0 6 print remove 6 append 9 append 1 sort print pop reverse print Sample Output [6, 5, 10] [1, 5, 9, 10] [9, 5, 1] A neat solution I found is: n = input() l = [] for _ in range(n): s = raw_input().split() cmd = s[0] args = s[1:] if cmd !="print": cmd += " ("+ ",".join(args) +") " eval("l."+cmd) else: print l I don't really understand how the part " ("+ ",".join(args) +") " works. Especially why the + in the beginning and at the end. An explanation would be great. Thanks. A: .split will divide your string into an array. Each space will create a new element, so if you have "I like food", it will become ["I","like","food"]. Now we can look at our .join. The .join is doing the reverse. So if we were to call ','.join(["I","like","food"]), we'd get "I,like,food". In your function, we are using .join to combine a series of list elements into a series of string arguments, then we invoke eval and it executes. And, eval is not recommended.
So, instead, we can write a chain of if and elif statements:

inpu = """12
insert 0 5
insert 1 10
insert 0 6
print
remove 6
append 9
append 1
sort
print
pop
reverse
print"""

li = []
for line in inpu.split("\n"):
    args = line.split()
    if args[0] == "insert":
        li.insert(int(args[1]), int(args[2]))
    elif args[0] == "print":
        print(li)
    elif args[0] == "remove":
        li.remove(int(args[1]))
    elif args[0] == "sort":
        li.sort()
    elif args[0] == "reverse":
        li = li[::-1]
    elif args[0] == "append":
        li.append(int(args[1]))
    elif args[0] == "pop":
        li.pop()
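A middle ground between eval and a long if/elif chain (my own sketch, not from the answers above) is to look the list method up by name with getattr, which avoids executing arbitrary strings while staying compact:

```python
def run_commands(commands):
    """Apply list commands like 'insert 0 5' or 'pop' to a fresh list.

    Returns the snapshots produced by 'print' commands.
    """
    l = []
    outputs = []
    for line in commands:
        cmd, *args = line.split()
        if cmd == "print":
            outputs.append(list(l))
        else:
            # look the list method up by name instead of eval-ing a string
            getattr(l, cmd)(*map(int, args))
    return outputs
```

For the sample input above, this returns the three printed snapshots. A real program should still validate cmd against a whitelist of allowed method names before calling getattr.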
{ "language": "en", "url": "https://stackoverflow.com/questions/43155545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Change a parent window's URL using history.pushState when a link in an iframe is clicked

I have a page with a header and sidebar, and a right content panel that is an iframe. When a link is clicked in a page loaded into the right content panel, I am trying to update the browser URL in the parent window to the URL of the new page loaded into the iframe. I do not want the actual parent window to reload the URL, but simply to update the URL in the address bar. Something like:

window.history.pushState('obj', 'newtitle', '/bookmarks/list/');

Is this possible from an iframe?

A: I was able to update the parent window's URL in the address bar using history.pushState by sending the new URL to the parent from the child iframe window using postMessage, and listening for this event on the parent window. When the parent receives the child iframe's postMessage event, it updates the URL with pushState using the URL passed in that message.

Child iframe:

<script>
// Detect if this page is loaded inside an Iframe window
function inIframe() {
    try {
        return window.self !== window.top;
    } catch (e) {
        return true;
    }
}

// Detect if the CTRL key is pressed, to be used when CTRL+clicking a link
$(document).keydown(function(event){
    if(event.which == 17)
        cntrlIsPressed = true;
});
$(document).keyup(function(){
    cntrlIsPressed = false;
});
var cntrlIsPressed = false;

// check if page is loaded inside an Iframe?
if(inIframe()){
    // is the CTRL key pressed?
    if(cntrlIsPressed){
        // CTRL key is pressed, so the link will open in a new tab/window, so no need to append the URL of the link
    }else{
        // click event on links that are clicked without the CTRL key pressed
        $('a').on('click', function() {
            // is this link local, on the same domain as this page?
            if( window.location.hostname === this.hostname ) {
                // new URL with ?sidebar=no appended to the URL of local links that are clicked on inside of an iframe
                var linkUrl = $(this).attr('href');
                var noSidebarUrl = $(this).attr('href') + '?sidebar=no';

                // send URL to parent window
                parent.window.postMessage('message-for-parent=' + linkUrl, '*');

                alert('load URL with no sidebar: ' + noSidebarUrl + ' and update URL in parent window to: ' + linkUrl);

                // load Iframe with clicked on URL content
                //document.location.href = url;
                //return false;
            }
        });
    }
}
</script>

Parent window:

<script>
// parent_on_message(e) will handle the reception of postMessages (a.k.a. cross-document messaging or XDM).
function parent_on_message(e) {
    // You really should check origin for security reasons
    // https://developer.mozilla.org/en-US/docs/DOM/window.postMessage#Security_concerns
    //if (e.origin.search(/^http[s]?:\/\/.*\.localhost/) != -1
    //    && !($.browser.msie && $.browser.version <= 7)) {
        var returned_pair = e.data.split('=');
        if (returned_pair.length != 2){
            return;
        }
        if (returned_pair[0] === 'message-for-parent') {
            alert(returned_pair[1]);
            window.history.pushState('obj', 'newtitle', returned_pair[1]);
        }else{
            console.log("Parent received invalid message");
        }
    //}
}

jQuery(document).ready(function($) {
    // Setup XDM listener (except for IE < 8)
    if (!($.browser.msie && $.browser.version <= 7)) {
        // Connect the parent_on_message(e) handler function to the receive postMessage event
        if (window.addEventListener){
            window.addEventListener("message", parent_on_message, false);
        }else{
            window.attachEvent("onmessage", parent_on_message);
        }
    }
});
</script>

A: Another solution using Window.postMessage().

Iframe:

<a href="/test">/test</a>
<a href="/test2">/test2</a>

<script>
Array.from(document.querySelectorAll('a')).forEach(el => {
    el.addEventListener('click', event => {
        event.preventDefault();
        // note: el.href, not this.href, since arrow functions do not bind this
        window.parent.postMessage(el.href, '*');
    });
});
</script>

Main page:

Current URL:
<div id="current-url"></div>

<iframe src="iframe-url"></iframe>

<script>
const $currentUrl = document.querySelector('#current-url');
$currentUrl.textContent = location.href;

window.addEventListener('message', event => {
    history.pushState(null, null, event.data);
    $currentUrl.textContent = event.data;
});
</script>

See demo on JS Fiddle.
{ "language": "en", "url": "https://stackoverflow.com/questions/35518763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Windows Authentication with separate web server and database server

What settings do we need at the server level when we are using Windows authentication and our web server and database server are separate machines?

Note: I am well aware of the connection string setting in the web.config file. My only concern is the setting at the database server level for Windows authentication.

Suggestions will be appreciated. Thanks

A: If you use Windows authentication, you just need to make sure that the user(s) under which your application runs also have access to the database(s) on the SQL server. Using Management Studio you can add the Windows logins or groups and grant them access.
{ "language": "en", "url": "https://stackoverflow.com/questions/28314818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Design: Passing on class instances or using singletons?

My app project contains several helper classes that serve all kinds of different purposes (e.g. time/date calculations, db access, ...). Instantiating these classes is quite expensive, since they contain some properties that need to be filled from the database or recalculated each time a new instance is created. To avoid performance problems I tend to instantiate each of these classes in the application delegate and pass them on from viewController to viewController. This has worked for me for some time, but the more complicated the app gets, the more problems I bump into, mostly problems related to classes getting entangled in circular references.

I would like to know how to solve this properly. I already thought about turning each helper class into a singleton and using the singleton instead of passing a class instance around. But since some helper classes depend on each other, I'll have singletons that call other singletons, and I can't figure out whether this would lead to other problems in the end. Anyone have any advice on this?

A: The problem with singletons is that they make it harder to mock and unit test your application. You should decouple your dependencies; and if you do somehow need a singleton (which should be very, very rare) then consider having the singleton implement an interface that you can mock for testing purposes.

A: Whenever I'm tempted to use a singleton, I re-read Global Variables are Bad and consider whether the convenience is really worth putting up with this (slightly) paraphrased list of problems:

* Non-locality of methods and variables
* No access control or constraint checking
* Implicit coupling
* Concurrency issues
* Namespace pollution
* Memory allocation issues
* Unduly complicated testing

A: Singletons are basically global variables, and it's a bad idea to create a global variable just to avoid passing things around.

Then again, often the right thing to do is simply to pass objects around from one class to another. The trick is figuring out the minimum amount of data you can pass to minimize the coupling. This is where well-designed classes are important. For example, you rarely need to pass an NSManagedObjectContext, because you can get it from any NSManagedObject.

Now, let me address the specific case of your expensive-to-create objects. You might try pooling those objects instead of creating them every time one is needed. Database access is a good example of this: rather than allocating a database connection every time you ask for one, you grab one out of a cache. Of course, when the cache is empty, you need to allocate one. And, for memory reasons, you should be willing and able to empty out the cache when the system asks you to. That the object is expensive to create shouldn't matter to the user. That's an implementation detail, but it's one that you can design around. You do have to be careful, because only objects that don't have mutable state can be handled this way, so you may have to rethink the design of your classes if you want to go this route.

A: Why don't you just make your app delegate a factory for the expensive-to-create instances? Each time a view controller needs an instance of a helper class, it asks the appDelegate for it.

A: Personally, I usually use a singleton. In my opinion, it makes the code cleaner, and I am sure that the class instance is unique. I use it to have a single point of access to a resource.

Edit: Seems I'm wrong! What about this flexible implementation?

static Singleton *sharedSingleton = nil;

+ (Singleton*)sharedManager {
    if (sharedSingleton == nil) {
        sharedSingleton = [[super alloc] init];
    }
    return sharedSingleton;
}
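For comparison (my own illustrative sketch, in Python rather than Objective-C), the lazily-initialized shared-instance pattern from the last answer looks like this; the testability caveats from the earlier answers apply just the same:

```python
class SharedHelper:
    """Expensive-to-create helper exposed through one shared instance."""

    _instance = None

    def __init__(self):
        # stand-in for expensive setup, e.g. loading values from a database
        self.cache = {"ready": True}

    @classmethod
    def shared(cls):
        # lazily create the single instance on first use
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
```

Every call to SharedHelper.shared() returns the same object, so the expensive __init__ runs only once per process.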
{ "language": "en", "url": "https://stackoverflow.com/questions/4097322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Fire and forget worker inside controller

I need to create a simple solution to receive input from a user, query our database and return the result in some way, but the queries can take as long as half an hour to run (and our cloud is configured to time out after 2 minutes; I'm not allowed to change that). I made the following solution, which works locally, and want to include code to send the query's result via email to the user (in a fire and forget manner), but am unsure how to do that while returning HTTP 200 to the user.

index.js:

const express = require('express')
const bodyParser = require('body-parser')
const db = require('./queries')

const app = express()
const port = 3000

app.use(bodyParser.json())
app.use(
    bodyParser.urlencoded({
        extended: true,
    })
)

app.get('/', (request, response) => {
    response.json({ info: 'Node.js, Express, and Postgres API' })
})

app.post('/report', db.getReport)
app.get('/test', db.getTest)

app.listen(port, () => {
    console.log(`App running on port ${port}.`)
})

queries.js:

const Pool = require('pg').Pool
const pool = new Pool({
    user: 'xxx',
    host: 'xx.xxx.xx.xxx',
    database: 'xxxxxxxx',
    password: 'xxxxxxxx',
    port: xxxx,
})

const getReport = (request, response) => {
    const { business_group_id, initial_date, final_date } = request.body
    pool.query(`SELECT GIANT QUERY`, [business_group_id, initial_date, final_date], (error, results) => {
        if (error) {
            throw error
        }
        response.status(200).json(results.rows)
    })
    // I want to change that to something like:
    // FireNForgetWorker(params)
    // response.status(200)
}

module.exports = {
    getReport
}

A: Through the use of callbacks, and based on the design of Express, you can send a response and continue to perform actions in that same function. You can, therefore, restructure it to look something like this:

const Pool = require('pg').Pool
const pool = new Pool({
    user: 'xxx',
    host: 'xx.xxx.xx.xxx',
    database: 'xxxxxxxx',
    password: 'xxxxxxxx',
    port: xxxx,
})

const getReport = (request, response) => {
    const { business_group_id, initial_date, final_date } = request.body
    pool.query(`SELECT GIANT QUERY`, [business_group_id, initial_date, final_date], (error, results) => {
        if (error) {
            // TODO: Do something to handle the error, or send an email that the query was unsuccessful
            throw error
        }
        // Send the email here.
    })
    response.status(200).json({message: 'Process Began'});
}

module.exports = {
    getReport
}

=============================================================================

Another approach could be to implement a queuing system that would push these requests to a queue and have another process listening and sending the emails. That would be a bit more complicated, though.
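The same respond-first, work-later idea in a framework-agnostic Python sketch (my own illustration, not part of the answer): the handler returns immediately while a background thread finishes the slow work and reports its result:

```python
import queue
import threading
import time

def handle_request(params, results):
    """Reply immediately and run the slow work in a background worker."""
    def slow_report():
        # stand-in for a long-running database query followed by an email send
        time.sleep(0.01)
        results.put(("report", params))

    # fire and forget: the worker keeps running after we have 'responded'
    threading.Thread(target=slow_report, daemon=True).start()
    return {"status": 200, "message": "Process Began"}
```

Here 'results' is a queue standing in for the email step, so the background outcome can be observed; in a real service the worker would send the email (or push to a job queue) instead.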
{ "language": "en", "url": "https://stackoverflow.com/questions/54905943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: All node modules in package.json are being re-downloaded after a small change

I have a NodeJS container with the following Dockerfile:

FROM node:6

COPY package.json /tmp/package.json
RUN npm config set registry http://registry.npmjs.org/
RUN cd /tmp && npm install
RUN mkdir -p /app && cp -a /tmp/node_modules /app/

WORKDIR /app
CMD npm run dev
EXPOSE 80

The node modules aren't being re-installed if package.json isn't modified whenever I run docker-compose build, which is good. However, if I add one more dependency to package.json, it seems that all my dependencies are being re-downloaded from npm, which wastes a lot of time. Is this behaviour intended?

A: This is the design of the layer caching. When you run the same command with the same inputs as before, Docker finds a layer where you started from the same parent and ran the same command, and is able to reuse that layer. When your input changes (from the COPY command changing its input), the cache becomes invalid, and it goes back to building on top of a fresh node:6 image. From that image, none of your previously downloaded files are available.
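One way to soften this (my own sketch, not part of the answer, and it relies on BuildKit cache mounts, a Docker feature added long after the node:6 era of this question) is to give npm a persistent download cache that survives layer invalidation, so a changed package.json re-resolves dependencies but re-downloads only the new ones:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18

COPY package.json /tmp/package.json
WORKDIR /tmp

# /root/.npm is npm's download cache; the cache mount persists across
# rebuilds even though the npm install layer itself is invalidated
RUN --mount=type=cache,target=/root/.npm npm install
```

This requires building with BuildKit enabled (the default in recent Docker versions).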
{ "language": "en", "url": "https://stackoverflow.com/questions/39357105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to build a recommendation model for calling prospects

My goal is to better target prospects, at a higher call success rate, based on time of day and prior history. I have created a "Prodprobability" column showing the probability of a PropertyID answering the phone at that hour, over the history of calls. Instead of merely omitting PropertyID 233303.13 from any calls, I want to retarget them at hour 13 or hour 16 (the sample data doesn't show it, but the probabilities of pickup at those hours are 100% and 25% respectively). So, moving forward, based on hour of day and the history of each prospect picking up the phone (or not) during that hour, I'd like to re-target every prospect during the hours they're most likely to pick up.

sample data

EDIT: I guess I need a formula to do this: if "S425=0", I want to search for where "A425" has the highest probability in the S column, and return the hour and probability for that "PropertyID". Hopefully that makes sense.

EDIT: sample data returns this

A: The question here would be: are you dead set on creating a 'model', or does an automation work for you? I would suggest ordering the dataframe by the probability of picking up the call every hour (so you can call the more probable leads first) and then further sorting them by the number of calls on that day. Something along the lines of:

require(dplyr)

todaysCall = df %>%
  dplyr::group_by(propertyID) %>%
  dplyr::summarise(noOfCalls = n())

hourlyCalls = df %>%
  dplyr::filter(hour == format(Sys.time(), "%H")) %>%
  dplyr::left_join(todaysCall) %>%
  dplyr::arrange(desc(Prodprobability), noOfCalls)

Essentially, getting the probability of pickups is what models are all about, and you already seem to have that information.

Alternate solution

Get the top 5 calling times for each propertyID:

top5Times = df %>%
  dplyr::filter(Prodprobability != 0) %>%
  dplyr::group_by(propertyID) %>%
  dplyr::arrange(desc(Prodprobability)) %>%
  dplyr::slice(1:5L) %>%
  dplyr::ungroup()

Get alternate calling times for cases with zero Prodprobability:

zeroProb = df %>%
  dplyr::filter(Prodprobability == 0)

alternateTimes = df %>%
  dplyr::filter(propertyID %in% zeroProb$propertyID) %>%
  dplyr::filter(Prodprobability != 0) %>%
  dplyr::arrange(propertyID, desc(Prodprobability))

Best calling hour for cases with zero probability at a given time:

# Identifies the zero prob cases; can be by hour or at a particular instant
zeroProb = df %>%
  dplyr::filter(Prodprobability == 0)

# Gets the highest calling probability and the corresponding closest hour if the probability is the same for more than one timeslot
bestTimeForZero = df %>%
  dplyr::filter(propertyID %in% zeroProb$propertyID) %>%
  dplyr::filter(Prodprobability != 0) %>%
  dplyr::group_by(propertyID) %>%
  dplyr::arrange(desc(Prodprobability), hour) %>%
  dplyr::slice(1L) %>%
  dplyr::ungroup()

Returning the same number of records as the original df:

zeroProb = df %>%
  dplyr::filter(Prodprobability == 0) %>%
  dplyr::group_by(propertyID) %>%
  dplyr::summarise(total = n())

bestTimesList = lapply(1:nrow(zeroProb), function(i){
  limit = zeroProb$total[i]
  bestTime = df %>%
    dplyr::filter(propertyID == zeroProb$propertyID[i]) %>%
    dplyr::arrange(desc(Prodprobability)) %>%
    dplyr::slice(1:limit)
  return(bestTime)
})

bestTimeDf = bind_rows(bestTimesList)

Note: You can combine the filter statements; I have written them separately to highlight what each step does.
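As a cross-language illustration (my own sketch, not part of the answer, written in plain Python rather than dplyr), the first suggestion's ordering logic is just a grouped count followed by a two-key sort; the field names here are placeholders for the columns above:

```python
def best_leads(rows, current_hour):
    """Order this hour's leads by pickup probability (descending),
    breaking ties by how many calls each property already got today.

    rows is a list of dicts with 'propertyID', 'hour' and 'prob' keys.
    """
    # grouped count, equivalent to group_by(propertyID) + summarise(n())
    calls_today = {}
    for r in rows:
        calls_today[r["propertyID"]] = calls_today.get(r["propertyID"], 0) + 1

    # filter to the current hour, then sort on (probability desc, calls asc)
    hourly = [r for r in rows if r["hour"] == current_hour]
    return sorted(hourly, key=lambda r: (-r["prob"], calls_today[r["propertyID"]]))
```

The most promising, least-called leads come first, mirroring the arrange(desc(Prodprobability), noOfCalls) step.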
{ "language": "en", "url": "https://stackoverflow.com/questions/58510395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: navigator.geolocation.getCurrentPosition returns null

I have a problem with Cordova's geolocation. I am currently developing an application that uses the phone's geolocation, but the geolocation call always ends up in the error function, and when I log the error I get this:

PositionError {code: null, message: ""}

Here is my function:

var getGeoLocation = function() {
    if (typeof(navigator.geolocation) != 'undefined') {
        try {
            navigator.geolocation.getCurrentPosition(function(position) {
                var lat = position.coords.latitude;
                var lng = position.coords.longitude;
                alert(lat + "/" + lng);
            }, function(err) {
                console.log(err);
            });
        } catch(e) {
            alert(e);
        }
    }
}

getGeoLocation();

I'm on Intel XDK with version 4.0 of Cordova. The org.apache.cordova.geolocation plugin is installed. I looked all over the internet before posting this question but found absolutely nothing :/ If anyone knows the reason for the bug I would be very grateful. Thanks,
{ "language": "en", "url": "https://stackoverflow.com/questions/27186868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: AttributeError: 'numpy.ndarray' object has no attribute 'dim'

1. Issues with a PyTorch nn: I made an autoencoder out of linear layers for a TensorFlow conversion project (to use PySyft at a future point).
2. I have made sure that the forward method does return a value; it works in other situations.
3. The full error:

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "B:\tools and software\PyCharm 2020.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "B:\tools and software\PyCharm 2020.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/pytorch_conversion.py", line 157, in <module>
    torch.from_numpy(x_test).float(), tr=1)
  File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/pytorch_conversion.py", line 88, in test
    x_test_predictions = net(x_test.detach().numpy())
  File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "B:/projects/openProjects/githubprojects/BotnetTrafficAnalysisFederaedLearning/anomaly-detection/pytorch_conversion.py", line 121, in forward
    x = torch.tanh(self.fc1(x))
  File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "B:\tools and software\Anaconda\envs\pysyft-pytorch\lib\site-packages\torch\nn\functional.py", line 1368, in linear
    if input.dim() == 2 and bias is not None:
AttributeError: 'numpy.ndarray' object has no attribute 'dim'

The method:

def test(net, x_test, tr):
    correct = 0
    total = 0
    x_test_predictions = net(x_test.detach().numpy())
    print("Calculating MSE on test set...")
    mse_test = np.mean(np.power(x_test - x_test_predictions, 2), axis=1)
    over_tr = mse_test > tr
    false_positives = sum(over_tr)
    test_size = mse_test.shape[0]
    print(f"{false_positives} false positives on dataset without attacks with size {test_size}")
    with torch.no_grad():
        for i in tqdm(range(len(x_test))):
            real_class = torch.argmax(x_test[i])
            net_out = net(x_test[i])
            predicted_class = torch.argmax(net_out)
            if predicted_class == real_class:
                correct += 1
            total += 1
    print("Accuracy: ", round(correct / total, 3))

The NN:

class Net(nn.Module):
    def __init__(self, input_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, int(0.75 * input_dim))
        self.fc2 = nn.Linear(int(0.75 * input_dim), int(0.5 * input_dim))
        self.fc3 = nn.Linear(int(0.5 * input_dim), int(0.33 * input_dim))
        self.fc4 = nn.Linear(int(0.33 * input_dim), int(0.25 * input_dim))
        self.fc5 = nn.Linear(int(0.25 * input_dim), int(0.33 * input_dim))
        self.fc6 = nn.Linear(int(0.33 * input_dim), int(0.5 * input_dim))
        self.fc7 = nn.Linear(int(0.5 * input_dim), int(0.75 * input_dim))
        self.fc8 = nn.Linear(int(0.75 * input_dim), input_dim)

    def forward(self, x):
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        x = torch.tanh(self.fc3(x))
        x = torch.tanh(self.fc4(x))
        x = torch.tanh(self.fc5(x))
        x = torch.tanh(self.fc6(x))
        x = torch.tanh(self.fc7(x))
        x = self.fc8(x)
        return torch.softmax(x, dim=1)
{ "language": "en", "url": "https://stackoverflow.com/questions/62801695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WPF Dual Monitor Application Overlay I looked at this thread to create a dual monitor application in WPF: http://social.msdn.microsoft.com/forums/en-US/wpf/thread/5d181304-8952-4663-8c3c-dc4d986aa8dd where a WPF Window will be displayed on each of the two monitors. The issue I am having is that the windows are overlapping - they are both being displayed on the same screen. The Debugger tells me that there are 2 Displays in the System.Windows.Forms.Screen.AllScreens array and that the Top and Left values of the working areas of each screen are 0, -1600 and 0, 0 respectively (which seem to be accurate to me). Both screens have a resolution of 1600x1200. Has anyone come across a similar issue before? The monitors are set to 'Extend desktop to this display' in the Screen resolution settings. Thanks! A: I managed to display a window in the second monitor (placed at the right of the primary) by using the following code: window.Left = System.Windows.SystemParameters.VirtualScreenWidth / 2;
{ "language": "en", "url": "https://stackoverflow.com/questions/13689644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to easily parse the AT command response from a GSM module?

I'm trying to parse the following output, taken from a GSM module in Arduino, to get the voltage part (3.900V) only. However, I can't get it to work.

+CBC: 0,66,3.900V

OK

I have tried the following code, but it fails and even crashes.

float getVoltage() {
    if (atCmd("AT+CBC\r") == 1) {
        char *p = strchr(buffer, ',');
        if (p) {
            p += 3; // get voltage
            int vo = atof(p);
            p = strchr(p, '.');
            if (p)
                vo += *(p + 1) - '0'; // ??
            return vo;
        }
    }
    return 0;
}

How can this be done in a better or more transparent way?

A: You can do it using the C function strtok to tokenize the buffer:

void setup() {
    Serial.begin(115200);

    char buffer[20] = "+CBC: 1,66,3.900V";
    const char* delims = " ,V";

    char* tok = strtok(buffer, delims); // +CBC:
    tok = strtok(NULL, delims);
    int first = atoi(tok);
    tok = strtok(NULL, delims);
    int second = atoi(tok);
    tok = strtok(NULL, delims);
    float voltage = atof(tok);

    Serial.println(first);
    Serial.println(second);
    Serial.println(voltage);
}

void loop() {
}

A: This fixed it:

float getVoltage() {
    if (atCmd("AT+CBC\r") == 1) {
        char *p = strchr(buffer, 'V');
        if (p) {
            p -= 5; // get voltage
            double vo = atof(p);
            //printf("%1.3f\n", vo);
            return vo;
        }
    }
    return 0;
}
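As a cross-check of the parsing logic (a sketch in Python, purely illustrative; the real code runs as C on the Arduino), splitting off the "+CBC:" prefix and then splitting on commas recovers the three fields the same way the strtok answer does:

```python
def parse_cbc(response):
    """Parse a '+CBC: <bcs>,<bcl>,<voltage>V' line into its three fields."""
    # keep only the part after the prefix, then split on commas
    payload = response.split(":", 1)[1].strip()
    bcs, bcl, volt = payload.split(",")
    return int(bcs), int(bcl), float(volt.rstrip("V"))
```

For the module output above, parse_cbc("+CBC: 0,66,3.900V") yields (0, 66, 3.9).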
{ "language": "en", "url": "https://stackoverflow.com/questions/56939302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Golang - missing expression error on structs

type Old struct {
    UserID int `json:"user_ID"`
    Data   struct {
        Address string `json:"address"`
    } `json:"old_data"`
}

type New struct {
    UserID int `json:"userId"`
    Data   struct {
        Address string `json:"address"`
    } `json:"new_data"`
}

func (old Old) ToNew() New {
    return New{
        UserID: old.UserID,
        Data: { // from here it says missing expression
            Address: old.Data.Address,
        },
    }
}

What is the "missing expression" error when using structs? I am transforming an old object into a new one. I minified them just to get straight to the point, but the real transformation is much more complex. The UserID field, for example, works great. But when I use the struct (which is intended to be a JSON object in the end) the GoLand IDE says "missing expression" and the compiler says "missing type in composite literal" on this line. What am I doing wrong? Maybe I should use something else instead of a struct? Please help.

A: Data is an anonymous struct, so you need to write it like this:

type New struct {
    UserID int `json:"userId"`
    Data   struct {
        Address string `json:"address"`
    } `json:"new_data"`
}

func (old Old) ToNew() New {
    return New{
        UserID: old.UserID,
        Data: struct {
            Address string `json:"address"`
        }{
            Address: old.Data.Address,
        },
    }
}

(playground link)

I think it'd be cleanest to create a named Address struct.

A: You're defining Data as an inline struct. When assigning values to it, you must first repeat the inline declaration:

func (old Old) ToNew() New {
    return New{
        UserID: old.UserID,
        Data: struct {
            Address string `json:"address"`
        }{
            Address: old.Data.Address,
        },
    }
}

Hence it is generally better to define a separate type for Data, just like User.
Q: Multiple Roles for a single User

The following code adds multiple roles to a single user. It should be noted that this will only work for a single session, since we are defining the roles and users every time we start the app; to prevent any crashes due to that, add a check against the database and create the roles and user only IF they don't exist.

import trippinspring.*

class BootStrap {
    def init = { servletContext ->
        def adminRole = new SpringRole(authority: 'ROLE_ADMIN').save(flush: true)
        def userRole = new SpringRole(authority: 'ROLE_USER').save(flush: true)

        def testUser = new SpringUser(username: 'me', enabled: true, password: 'password')
        testUser.save(flush: true)

        if (!testUser.authorities.contains(adminRole)) {
            new SpringUserSpringRole(springUser: testUser, springRole: adminRole).save(flush: true, failOnError: true)
        }
        if (!testUser.authorities.contains(userRole)) {
            new SpringUserSpringRole(springUser: testUser, springRole: userRole).save(flush: true, failOnError: true)
        }
    }
}

Most of the code is a direct reference to Aram Arabyan's answer and Ian Roberts's comments, with some fixes to work with my code.

A: if (!testUser.authorities.contains(adminRole)) {
    new SpringUserSpringRole(user: testUser, role: adminRole).save(flush: true, failOnError: true)
}
if (!testUser.authorities.contains(userRole)) {
    new SpringUserSpringRole(user: testUser, role: userRole).save(flush: true, failOnError: true)
}

A: Just a suggestion: maybe you should try creating a hierarchy for your roles instead of adding two roles for a single user; see the doc.
{ "language": "en", "url": "https://stackoverflow.com/questions/12565041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I dispose all of the controls in a panel or form at ONCE??? c#

Possible Duplicate: Does Form.Dispose() call controls inside's Dispose()?

Is there a way to do this?

A: You don't give much detail as to why. This happens in the Dispose override method of the form (in Form.Designer.cs). It looks like this:

protected override void Dispose(bool disposing)
{
    if (disposing && (components != null))
    {
        components.Dispose();
    }
    base.Dispose(disposing);
}

A: Both the Panel and the Form class have a Controls collection property, which has a Clear() method...

MyPanel.Controls.Clear();

or

MyForm.Controls.Clear();

But Clear() doesn't call Dispose() (all it does is remove the control from the collection), so what you need to do is:

List<Control> ctrls = new List<Control>(MyPanel.Controls);
MyPanel.Controls.Clear();
foreach (Control c in ctrls)
    c.Dispose();

You need to create a separate list of the references, because Dispose also removes the control from the collection, changing the index and messing up the foreach...

A: I don't believe there is a way to do this all at once. You can just iterate through the child controls and call each of their Dispose methods one at a time:

foreach (var control in this.Controls)
{
    control.Dispose();
}

A: You didn't share whether this is ASP.NET or WinForms. If the latter, you can do well enough by first calling SuspendLayout() on the panel. Then, when finished, call ResumeLayout().
{ "language": "en", "url": "https://stackoverflow.com/questions/1511047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Why do college computer science classes promote 'using namespace std'? I've taken 2 classes on C++ so far, one each at a different school, and both of them have used 'using namespace std;' to teach basic programming. It may be a coincidence but I had to go out of my way to find out it's not a good practice to do so. A: Because best practices when writing sample code are not necessarily best practices when writing large projects. In a C++ course, you write mainly small programs (up to a few hundred lines of code) that have to solve a relatively small problem. This means little to no focus on future maintenance (and avoiding sources of confusion for future maintainers). Because many teachers simply do not have coding experience in large projects, the problem doesn't even get acknowledged (let alone discussed) in most C++ courses. A: Because college computer science professors do not necessarily know how to write good code.
{ "language": "en", "url": "https://stackoverflow.com/questions/18595204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Change parent block background when child input is checked(vanilla js) I am new to the world of coding, and would be very grateful for an advise or any idea. I was wondering if it is possible to change color of current parent block background(.checkbox-container), when child input is checked. And the main problem is that I have multiple blocks with inputs, and require to change background color only to current block, not to all? As I understood there is no pure css solution without mark-up change, but this is not my case. Could someone please give any idea how that could be done in vanilla js? Here is the link to fiddle: https://jsfiddle.net/william_eduards/r4jxvuz5/4/ and visual code example here: .checkbox-container { display: flex; align-items: center; justify-content: center; flex-wrap: nowrap; background: #E8EBF0; border: 1px solid #E8EBF0; transition: background .3s ease-in-out; margin-bottom: 8px; border-radius: 4px; } <div class="dimensional-container checkbox-container"> <input type="checkbox" name="snapchat" class="checkbox-container__snapchat" id="snapchat"> <label for="snapchat">snapchat</label> </div> <div class="dimensional-container checkbox-container"> <input type="checkbox" name="facebook" class="checkbox-container__facebook" id="facebook"> <label for="facebook">Facebook</label> </div> <div class="dimensional-container checkbox-container"> <input type="checkbox" name="hangouts" class="checkbox-container__hangouts" id="hangouts"> <label for="hangouts">hangouts</label> </div> A: * *Find all input using querySelectorAll. *Loop over all input and addEventListener *check if the elment is checked or not using e.target.checked, If it is checked change its parent e.target.parentElement background style. I've used red, you can select color on your own. 
const allInputs = document.querySelectorAll("input");
allInputs.forEach(input => {
  input.addEventListener("click", e => {
    if (e.target.checked) {
      e.target.parentElement.style.background = "#ff0000";
    } else {
      e.target.parentElement.style.background = "#E8EBF0";
    }
  })
})
.checkbox-container {
  display: flex;
  align-items: center;
  justify-content: center;
  flex-wrap: nowrap;
  background: #E8EBF0;
  border: 1px solid #E8EBF0;
  transition: background .3s ease-in-out;
  margin-bottom: 8px;
  border-radius: 4px;
}
<div class="dimensional-container checkbox-container">
  <input type="checkbox" name="snapchat" class="checkbox-container__snapchat" id="snapchat">
  <label for="snapchat">snapchat</label>
</div>
<div class="dimensional-container checkbox-container">
  <input type="checkbox" name="facebook" class="checkbox-container__facebook" id="facebook">
  <label for="facebook">Facebook</label>
</div>
<div class="dimensional-container checkbox-container">
  <input type="checkbox" name="hangouts" class="checkbox-container__hangouts" id="hangouts">
  <label for="hangouts">hangouts</label>
</div>
{ "language": "en", "url": "https://stackoverflow.com/questions/67089526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: DataGrid shows strange Checkboxes below heading I've got a problem displaying a DataGrid using Dojo. Populating the grid with data provided by an ItemFileReadStore works fine, but the result looks like this: The two checkboxes below the grid headings are not supposed to be there. I've already experimented with the DataGrid's rowSelector property, but I obviously wasn't successful. I've created the DataGrid programmatically. This is the source code:
var oStore = new dojo.data.ItemFileReadStore({
  data: {
    identifier: 'catID',
    items: [
      {catID: '3', duration: '1,5'},
      {catID: '4', duration: '2,0'},
      {catID: '9', duration: '1,0'},
      {catID: '7', duration: '2,0'}
    ]
  }
});
var oGrid = new dojox.grid.DataGrid({
  store: oStore,
  query: { catID: '*' },
  autoHeight: 5,
  structure: [
    {name: 'KatalogID', field: 'catID', width: 'auto'},
    {name: 'Dauer', field: 'duration', width: 'auto'}
  ]
}, dojo.create('div', {'id': 'oGrid'}));
oGrid.startup();
Does anybody know where these checkboxes come from and how they can be removed?
A: I found a possible workaround, but it doesn't fix the underlying problem: including the following CSS hides the div container that contains the unwanted checkboxes.
<style type="text/css">
  .dojoxGridView > .dijitCheckBox {
    display: none;
  }
</style>
Unfortunately, this also hides the checkboxes that are generated by the rowSelector option within the DataGrid declaration. So if you don't need the row-selection feature (at least via checkboxes), this works.
A: I had this problem many times, and it was removed by including Grid.css and the theme CSS. This can be fixed by just playing with the CSS.
{ "language": "en", "url": "https://stackoverflow.com/questions/13267649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Pagination problem in WordPress For some reason the pagination is not working here, and I can't figure out why.
<?php
if ( get_query_var('paged') )
    $paged = get_query_var('paged');
elseif ( get_query_var('page') )
    $paged = get_query_var('page');
else
    $paged = 1;
$post_type = 'portfolio';
$tax = 'type';
$tax_terms = get_terms($tax);
?>
<?php
//print_r($tax_terms);
if ($tax_terms) {
    foreach ($tax_terms as $tax_term) {
        $args = array (
            'post_type' => $post_type,
            "$tax" => $tax_term->slug,
            'post_status' => 'publish',
            'posts_per_page' => 2,
            'caller_get_posts'=> 1,
            'paged' => $paged
        );
        $my_query = new WP_Query($args);
?>
<?php if ( $my_query->have_posts () ) { ?>
<?php while ( $my_query->have_posts () ) : $my_query->the_post(); $count++; global $post; ?>
<?php include (TEMPLATEPATH . '/_framework/includes/portContent.php'); ?>
<?php endwhile;?>
<?php } ?>
<?php } ?>
<?php } ?>
<?php if (function_exists("pagination")) { pagination($additional_loop->max_num_pages); } ?>
Any ideas?
A: Try this:
$paged = (get_query_var('paged')) ? get_query_var('paged') : 1;
{ "language": "en", "url": "https://stackoverflow.com/questions/5597439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In an assignment A(:) = B, the number of elements in A and B must be the same Can you please help me to correct the above-mentioned problem in the following MATLAB code?
E = [5,200]; % Selected edge values
X = imread('LENNA128.bmp');
N = length(X);
Y = false(N+2);
for k = 1:numel(E);
    Y(2:end-1,2:end-1) = X==E(k);
    Z = Y(1:end-2,2:end-1) | Y(3:end,2:end-1) | Y(2:end-1,3:end) | Y(2:end-1,1:end-2);
    X(Z) = round((X(end-3,3:end-2) + X(end-3,4:end-1))/2);
end
A: I guess that's MATLAB code (maybe add the matlab tag next time). If you look at the colon operator in the MATLAB docs http://de.mathworks.com/help/matlab/ref/colon.html you will see that, when used on the left side of an assignment, it fills the matrix while keeping its dimensions, so both sides need the same number of elements. In your code, X(Z) selects one element for every true entry of Z, while the right-hand side has a fixed number of elements, so the two counts will generally differ.
{ "language": "en", "url": "https://stackoverflow.com/questions/34903448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Please, how to count the number of elements in a field of a structure? I would like to count the number of elements in a field of a structure in MATLAB. The file name is "data.m". I have something like this:
* The file has 3 fields (columns): x, y and z
* The file has at most 6 lines, which is the number of lines of the y-column
* x has 5 elements (0, 9, 5, 6, 6)
* y has 6 elements (6, 1, 2, 2, 8, 2)
* z has 4 elements (8, 8, 4, 9)
Using:
number_of_element = numel(data.x);
returns 1. It only takes the first element (the first line, which is "0" here). I would like to have the number of elements of the column x, which is "5" in this case. Then I tried this:
number_of_element = numel(data(:,x));
But it doesn't work. I thought MATLAB could recognise "x" as a field name. This also did not work:
count = 0;
for i = 1:end % I get an error because of this "end". Why is it not recognised here?
    number_of_element = numel(data(i).x);
    count = count+number_of_element;
end
How could I get the number of elements in the x-column? Thank you in advance.
{ "language": "en", "url": "https://stackoverflow.com/questions/55685103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: return a list of files in a folder in C I have this code which will print to the console all files in a given folder which have a given extension: int scandir(char dirname[], char const *ext) /* Scans a directory and retrieves all files of given extension */ { DIR *d = NULL; struct dirent *dir = NULL; d = opendir(dirname); if (d) { while ((dir = readdir(d)) != NULL) { if (has_extension(dir->d_name, ext)) { printf("%s\n", dir->d_name); } } closedir(d); } return(0); } This function works, but I would like to modify it so that it returns an array of filenames. (I have done a lot of google searches but only come up with functions like mine, which print to console) I'm fairly new to C and 'low-level' programming, so I am unsure of how to correctly handle the memory here. How do I create and add things to a character array when I do not know how big it is going to be? I'm using MinGW.. A: You can use realloc: #include <stdio.h> #include <string.h> #include <stdlib.h> #include <dirent.h> extern char *strdup(const char *src); int scandir(char ***list, char dirname[], char const *ext) /* Scans a directory and retrieves all files of given extension */ { DIR *d = NULL; struct dirent *dir = NULL; size_t n = 0; d = opendir(dirname); if (d) { while ((dir = readdir(d)) != NULL) { if (has_extension(dir->d_name, ext)) { *list = realloc(*list, sizeof(**list) * (n + 1)); (*list)[n++] = strdup(dir->d_name); } } closedir(d); } return n; } int main(void) { char **list = NULL; size_t i, n = scandir(&list, "/your/path", "jpg"); for (i = 0; i < n; i++) { printf("%s\n", list[i]); free(list[i]); } free(list); return 0; } Note that strdup() is not a standard function but its available on many implementations. As an alternative to realloc you can use a singly linked list of strings. 
EDIT: As pointed out by @2501, is better to return an allocated array of strings from scandir and pass elems as param: #include <stdio.h> #include <string.h> #include <stdlib.h> #include <dirent.h> extern char *strdup(const char *src); char **scandir(char dirname[], char const *ext, size_t *elems) /* Scans a directory and retrieves all files of given extension */ { DIR *d = NULL; struct dirent *dir = NULL; char **list = NULL; d = opendir(dirname); if (d) { while ((dir = readdir(d)) != NULL) { if (has_extension(dir->d_name, ext)) { list = realloc(list, sizeof(*list) * (*elems + 1)); list[(*elems)++] = strdup(dir->d_name); } } closedir(d); } return list; } int main(void) { size_t i, n = 0; char **list = scandir("/your/path", "jpg", &n); for (i = 0; i < n; i++) { printf("%s\n", list[i]); free(list[i]); } free(list); return 0; } Finally, do you really need an array? Consider using a call-back function: #include <stdio.h> #include <string.h> #include <dirent.h> void cb_scandir(const char *src) { /* do whatever you want with the passed dir */ printf("%s\n", src); } int scandir(char dirname[], char const *ext, void (*callback)(const char *)) /* Scans a directory and retrieves all files of given extension */ { DIR *d = NULL; struct dirent *dir = NULL; size_t n = 0; d = opendir(dirname); if (d) { while ((dir = readdir(d)) != NULL) { if (has_extension(dir->d_name, ext)) { callback(dir->d_name); n++; } } closedir(d); } return n; } int main(void) { scandir("/your/path", "jpg", cb_scandir); return 0; }
{ "language": "en", "url": "https://stackoverflow.com/questions/26357792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a way to extract the highlighted text from a table in docx document python? Image of the table i need to extract the highlighted text from. I need to write a python script which helps me convert this docx table to csv and just writing the highlighted information from a row in a csv. Like the column name would be "overall verdict" so its value underneath it must be "4". Please if anyone can help, It would be appericiated.
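No answer was recorded for this question. One stdlib-only direction (no python-docx) relies on the fact that a .docx file is a zip archive whose word/document.xml marks highlighted runs with a w:highlight run property inside w:rPr. The sketch below is an assumption-laden illustration of that idea: the function names, the row/cell traversal, and the CSV layout are mine, and real documents (merged cells, nested tables) would need more care.

```python
import csv
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace, Clark notation for ElementTree.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def highlighted_rows(document_xml):
    """For each table row, return the concatenated text of highlighted runs per cell."""
    root = ET.fromstring(document_xml)
    rows = []
    for tr in root.iter(W + "tr"):          # every table row
        cells = []
        for tc in tr.findall(W + "tc"):     # every cell in the row
            texts = []
            for r in tc.iter(W + "r"):      # every run in the cell
                if r.find(W + "rPr/" + W + "highlight") is not None:
                    texts.extend(t.text or "" for t in r.iter(W + "t"))
            cells.append("".join(texts))
        rows.append(cells)
    return rows

def docx_table_to_csv(docx_path, csv_path):
    """Dump the highlighted cell texts of all tables in a .docx into a CSV file."""
    with zipfile.ZipFile(docx_path) as z:
        xml = z.read("word/document.xml")
    with open(csv_path, "w", newline="") as f:
        csv.writer(f).writerows(highlighted_rows(xml))
```

For a table like the one in the question, the row holding the "overall verdict" value would come back with "4" in the highlighted cell and empty strings elsewhere, which is then easy to reduce to a header/value CSV.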
{ "language": "en", "url": "https://stackoverflow.com/questions/55827647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Avoid hard-coded TCM URI in Tridion-Related Code We often need specific items (schemas, templates, or components) in Tridion-related code. Template, Content Delivery, Workflow, or the Business Connector (Core Service) regularly need references to Tridion Content Manager URIs. We can link to components, but I typically see either hard-coded values or WebDAV URLs for everything else. Hard-coded values I understand hard-coding Tridion Content Manager (Native) URI's is a bad practice except for a few scenarios: * *To simplify example code and make it clear what a variable is *When generated for use in Content Delivery (CD) API logic Whenever possible we use the given API or WebDAV URLs to reference items, otherwise we must avoid using Content Porter on anything that references TCM URIs (or somehow make these references "configurable" outside of Tridion). WebDAV URLs WebDAV URLs seem to be better for a few reasons: * *Hard-coded values in design template building blocks (TBBs) or other template formats remain intact with SDL Content Porter (breaking a relationship when moved through CMS environments, with an exception described below) *"Configuration" components that refer to specific items also do better with SDL Content Porter, though differently-named paths can "break" relationships Use cases In addition to having template that work well with Content Porter, I would like to localize folders and/or structure groups in lower publications. This can help with: * *CMS authors that read different languages *translate item names and paths to appropriate languages *maybe help users navigate better (e.g. I suspect different-named folders may reduce confusion for where authors are in the BluePrint) One Approach To make references "Content Porter-friendly," at least for Template Building Blocks, I know we can use WebDAV Urls in components making sure to localize each path to the right locations in children publications. 
For example: * *Code checks Publication Metadata *Publication Metadata points to a "config component" *Config component has paths as WebDAV URLs As long as we set the Publication Metadata and localize the fields to the correct paths per publication, this will work for most scenarios. Questions * *Did I get this right? Is there a simpler or easier-to-maintain setup? I believe we can alternatively use includes or map unmanaged URI in template code. * *Anyone have an example of the #include approach? Do I use that at the top of a TBB and/or DWT and do references get replaced regardless of Template Mediator (e.g. will this work with XSLT Mediator, Razor Mediator, etc?) *Does the included reference work in lower publications or is this just for Content Porter? In other words, if I reference "tcm:5-123" will the template correctly reference "tcm:17-123" in publication 17? A: I tend to follow a few simple rules... * *There is no single valid reason to ever use a TCM ID in anything - template code, configuration components, configuration files. *If I need webdav URLs in configuration, I try to always make them "relative", usually starting at "/Building%20Blocks" instead of the publication name. At runtime I can use Publication.WebDavUrl or PublicationData.LocationInfo.WebDavUrl to get the rest of the URL *Tridion knows what to do with managed links, so as much as possible, use them. (managed links are the xlink:href stuff you see a bit all over Tridion XML). I also tend to use a "configuration page" for content delivery, with a template that outputs the TCM IDs I may need to "know" from the content delivery application. This is then either loaded at run time as a set of configuration variables or as dictionary or as a set of constants (I guess it depends how I'm feeling that day). A: Although we commonly refer to a Template Type implementation as a Template Mediator, that's not the whole story. 
A Template Type implementation consists of both a Template Mediator, and a Template Content Handler, although the latter is optional. Whether a given implementation will process "includes" correctly depends not on the mediator but the handler. A good starting point in the documention can be found by searching for AbstractTemplateContentHandler. SDL Tridion's own Dreamweaver Templating implementation has such a handler. I've just looked at the Razor implementation, and it currently makes use of the Dreamweaver content handler. Obviously, YMMV for the various XSLT template type implementations that exist. As this is an extensibility point of SDL Tridion, whether included references will work "correctly" in lower publications depends on the implementer's view of what that would mean. One interesting possibility is to implement your own custom handler that behaves as you wish. The template type configuration (in the Tridion Content Manager configuration file) allows for the mediator and content handler for a given template type to be specified independently, which means you could potentially customise the content handling behaviour for an existing template type.
{ "language": "en", "url": "https://stackoverflow.com/questions/14541895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: c# LoadXml randomly return they are several root element without any obvious reasons I am currently developing some solution based on communication through httpwebrequest between some distant (Prestashop) MYSQL database and return on information. The concept is working like a charm, and I use to load some object, like customers, groups or products through a build Xml response, I de-serialized. All object return same kind of process and all are loading perfectly. But in case of the products, I recently faced a mysterious bug, saying that my xml contains several root element, witch is completely wrong. I feel very at ease with Xml building, so I became crazy not finding a way out. The c# function who operate de serialization is quite simple : public static Dictionary<string, string> get(IRestResponse response) { var objectToLoad = new Dictionary<string, string>(); XmlDocument doc = new XmlDocument(); doc.LoadXml(response.Content.ToString()); XmlNodeList idNodes = doc.SelectNodes("object"); foreach (XmlNode node1 in idNodes) { foreach (XmlNode node in node1.ChildNodes) { objectToLoad.Add(node.Name, node.InnerText); } } return objectToLoad; } To illustrate It, here is a first example loading a group who is working perfectly : <?xml version="1.0" encoding="UTF-8"?> <object> <id>4</id> <reduction>0</reduction> <price_display_method>0</price_display_method> <show_prices>1</show_prices> <date_add>2014-09-23 16:23:05</date_add> <date_upd>2014-10-14 09:27:09</date_upd> <name>Public VIP</name> <id_lang>1</id_lang> </object> But when I load a Object type product : <?xml version="1.0" encoding="UTF-8"?> <object> <id>6</id> <id_shop_default>1</id_shop_default> <id_manufacturer>1</id_manufacturer> <id_supplier>1</id_supplier> <reference>MTDENIMJR</reference> <supplier_reference></supplier_reference> <location></location> <width>0</width> <height>0</height> <depth>0</depth> <weight>0.05</weight> <quantity_discount>0</quantity_discount> <ean13>815264500995</ean13> 
<upc>815264500995</upc> <cache_is_pack>0</cache_is_pack> <cache_has_attachments>0</cache_has_attachments> <is_virtual>0</is_virtual> <id_category_default>9</id_category_default> <id_tax_rules_group>1</id_tax_rules_group> <on_sale>1</on_sale> <online_only>0</online_only> <ecotax>0</ecotax> <minimal_quantity>1</minimal_quantity> <price>10.313</price> <wholesale_price>2.9</wholesale_price> <unity></unity> <unit_price_ratio>0</unit_price_ratio> <additional_shipping_cost>0</additional_shipping_cost> <customizable>0</customizable> <text_fields>0</text_fields> <uploadable_files>0</uploadable_files> <active>1</active> <redirect_type>404</redirect_type> <id_product_redirected>0</id_product_redirected> <available_for_order>1</available_for_order> <available_date>0000-00-00</available_date> <condition>new</condition> <show_price>1</show_price> <indexed>1</indexed> <visibility>both</visibility> <cache_default_attribute>0</cache_default_attribute> <advanced_stock_management>0</advanced_stock_management> <date_add>2013-02-27 08:03:35</date_add> <date_upd>2018-05-09 10:59:40</date_upd> <pack_stock_type>3</pack_stock_type> <groups_allowed></groups_allowed> <flashsale>0</flashsale> <id_google_category>36</id_google_category> <meta_description>Vernis à Ongle Morgan Taylor Denim Du Jour Format 15 ml</meta_description> <meta_keywords>vernis à ongles,morgantaylor,manucure,beauté des mains,nails,harmony</meta_keywords> <meta_title></meta_title> <link_rewrite>morgan-taylor-denim-du-jour</link_rewrite> <name>Morgan Taylor Denim Du Jour</name> <description>&lt;p&gt;Vernis à Ongle Morgan Taylor Denim Du Jour 15 ml&lt;/p&gt;</description> <description_short></description_short> <available_now></available_now> <available_later></available_later> <id_lang>1</id_lang> <id_shop>1</id_shop> </object> I get with this products and all other a System.Xml.XmlException: 'They are several root element. Line 2, position 2.' 
There is only one root element: "object", and every node is unique. I have tried all the online XML checkers; my returned XML passes every test successfully, so I am going a little crazy. So if some good soul would maybe give me a suggestion, or the beginning of an explanation, or simply point out a huge mistake I am making, I would highly appreciate it :) A million thanks in advance! Jeff
A: So the mistake came from the Linux server side, due to **@ini_set('display_errors', 'on');**. With error display enabled, PHP warning output was leaking into the HTTP response alongside the XML, so the document the client parsed was no longer a single well-formed root element. Everything is rolling great again! Thanks for all your concerns and support! Jeff
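The self-answer is terse, so it is worth spelling out the failure mode: with display_errors on, PHP prints notices into the HTTP body, so the client no longer receives a lone XML document. Different parsers report this differently (.NET happened to say "several root element"). Here is a small Python illustration with an invented notice line, plus a defensive client-side trim; neither is part of the original code:

```python
import xml.etree.ElementTree as ET

# An invented PHP notice leaking into the body before the real payload:
response = ('Notice: Undefined index: foo in /var/www/api.php on line 12\n'
            '<?xml version="1.0" encoding="UTF-8"?><object><id>6</id></object>')

try:
    ET.fromstring(response)   # junk before the declaration: not well-formed
except ET.ParseError as err:
    print("parse failed:", err)

# The real fix is server-side (turn display_errors off); a defensive trim:
clean = response[response.index("<?xml"):]
assert ET.fromstring(clean).find("id").text == "6"
```

The trim only papers over the symptom; warnings injected in the middle of the payload would still corrupt it, which is why fixing the server configuration was the right answer.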
{ "language": "en", "url": "https://stackoverflow.com/questions/51571255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Form submission without revealing form content I run a website which includes several radio streams. I have set up Icecast to request an .htaccess account in order to authenticate and start streaming. The same account is used for all streams. I submit the form (it is hidden via CSS) with jQuery once the page loads, so the user does not have to know the account nor submit the form. The problem is that the form information is revealed if the user views the source. Is there any way to hide this information? Searching the internet, what most people say is that this is not possible, because the browser needs to be able to clearly read this information in order to function properly. Does anyone know a way, if it is possible?
A: I ended up creating the form (document.createElement) on page load with jQuery, submitting it (.trigger("click")) and then removing it (.remove()). In addition, I obfuscated the jQuery code with the tool found here Crazy Obfuscation as @André suggested. That way the user cannot see the htaccess username and password in the page source, nor find them using "inspect element" or Firebug.
A: Personally, I need a bit more information to clearly deduce a solution for your issue; I hope you can give me that. However, have you tried simply .remove()ing the form after submission? That way it gets submitted on page load and then gets removed, so by the time the page loads and the user clicks view source, he will not be able to see it. He can, of course, disable JS, for example, or use any other workaround, but this is a very quick fix with the amount of information we have.
A: You cannot directly hide values in 'view source'. Similarly, when the form is being submitted, using tools like Fiddler the user could view the values. If you want to really hide them, what you can do is never have those values appear in the form. You could try techniques like encrypting those values on the server, or something similar, if the value never needs to be displayed to the user in the web page.
{ "language": "en", "url": "https://stackoverflow.com/questions/22617417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MySql - having issues with double left join I am having issues with getting this double left join to get the listingspecificsListPrice, but that info exists in the table, cant figure out why it would not include it. This is my sql. SELECT mls_subject_property.*, mls_images.imagePath, mls_forms_listing_specifics.listingspecificsListPrice FROM mls_subject_property LEFT JOIN mls_images ON mls_subject_property.mls_listingID = mls_images.mls_listingID LEFT JOIN mls_forms_listing_specifics ON mls_forms_listing_specifics.mls_listingID = mls_subject_property.mls_listingID AND mls_images.imgOrder = 0 WHERE userID = 413 GROUP BY mls_subject_property.mls_listingID The result comes out like this.. All of the other fields come back, but it doesnt seem to want to bring back those two items. This is a picture of the other table, to show that the data does in fact exist. A: The mls_images.imgOrder = 0 condition should be in the join with mls_images, not mls_forms_listing_specifics. Don't use GROUP BY if you're not using any aggregation functions. Use SELECT DISTINCT to prevent duplicates. SELECT DISTINCT mls_subject_property.*, mls_images.imagePath, mls_forms_listing_specifics.listingspecificsListPrice FROM mls_subject_property LEFT JOIN mls_images ON mls_subject_property.mls_listingID = mls_images.mls_listingID AND mls_images.imgOrder = 0 LEFT JOIN mls_forms_listing_specifics ON mls_forms_listing_specifics.mls_listingID = mls_subject_property.mls_listingID WHERE userID = 413
{ "language": "en", "url": "https://stackoverflow.com/questions/69256606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Compilation Error on my server on SmartStore App I have installed SmartStore on my server. Everything is working fine except the add-category module. Whenever I try to add or edit any category, an error pops up saying:
Compilation Error Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.
Compiler Error Message: CS0121: The call is ambiguous between the following methods or properties: 'Telerik.Web.Mvc.UI.Fluent.GridToolBarCommandFactory.Template(System.Action>)' and 'Telerik.Web.Mvc.UI.Fluent.GridToolBarCommandFactory.Template(System.Func,object>)'
Line 441: .ToolBar(commands => commands.Template(CategoryProductsGridCommands))
Please help me out. I'm stuck :(
A: After working on it, we got the solution for this. Kindly try it like this; hope it will work: .ToolBar(commands => commands.Template(pp=>GridCommands(pp)))
{ "language": "en", "url": "https://stackoverflow.com/questions/52237520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: BackgroundWorker not firing RunWorkerCompleted The first time I run my backgroundworker it runs correctly - updates a datatable in the background and then RunWorkerCompleted sets the datatable as a datagridview datasource. If I then run it again, the datagridview clears and doesn't update. I can't work out why. I've verified that the datatable contains rows when my code hits dgvReadWrites.DataSource. private void btnGenerateStats_Click(object sender, EventArgs e) { dtJobReadWrite.Columns.Clear(); dtJobReadWrite.Rows.Clear(); dgvReadWrites.DataSource = dtJobReadWrite; List<Tuple<string, string>>jobs = new List<Tuple<string, string>>(); foreach (ListViewItem job in lstJobs.SelectedItems) { jobs.Add(new Tuple<string, string>(job.Text, job.SubItems[2].Text)); } BackgroundWorker bgw = new BackgroundWorker(); bgw.WorkerReportsProgress = true; bgw.WorkerSupportsCancellation = true; bgw.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bgw_RunWorkerCompleted); bgw.DoWork += new DoWorkEventHandler(bgw_DoWork); pbarGenStats.Style = ProgressBarStyle.Marquee; pbarGenStats.MarqueeAnimationSpeed = 30; pbarGenStats.Visible = true; bgw.RunWorkerAsync(jobs); } private void bgw_DoWork(object sender, DoWorkEventArgs e) { BackgroundWorker bgw = sender as BackgroundWorker; List<Tuple<string, string>> jobs = (List<Tuple<string, string>>)e.Argument; GetReadWriteStats(jobs); } private void bgw_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { BackgroundWorker bgw = sender as BackgroundWorker; bgw.RunWorkerCompleted -= new RunWorkerCompletedEventHandler(bgw_RunWorkerCompleted); bgw.DoWork -= new DoWorkEventHandler(bgw_DoWork); pbarGenStats.MarqueeAnimationSpeed = 0; pbarGenStats.Value = 0; pbarGenStats.Visible = false; dgvReadWrites.DataSource = dtJobReadWrite; dgvReadWrites.Visible = true; dgvReadWrites.Refresh(); } A: private void btnGenerateStats_Click(object sender, EventArgs e) { //... dgvReadWrites.DataSource = dtJobReadWrite; // etc... 
}
That's a problem: you are updating dtJobReadWrite in the BGW. That causes the bound grid to get updated by the worker thread. Illegal: controls are not thread-safe and may only be updated from the thread that created them. This is normally checked, producing an InvalidOperationException while debugging, but this check doesn't work for bound controls. What goes wrong next is all over the place; you are lucky that you got a highly repeatable deadlock. The more common misbehavior is occasional painting artifacts and a deadlock only when you are not close. Fix: dgvReadWrites.DataSource = null; and rebinding the grid in the RunWorkerCompleted event handler, like you already do.
A: Because you unsubscribe from those events:
bgw.RunWorkerCompleted -= new RunWorkerCompletedEventHandler(bgw_RunWorkerCompleted);
bgw.DoWork -= new DoWorkEventHandler(bgw_DoWork);
Remove those lines.
A: Why are you creating a new BackgroundWorker every time you want to run it? I would like to see what happens with this code if you use one instance of BackgroundWorker (GetReadWriteWorker or something along those lines), subscribe to the events only once, and then run that worker async on btnGenerateStats_Click.
{ "language": "en", "url": "https://stackoverflow.com/questions/7635081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Pandas groupBy with conditional grouping I have two data frames and need to group the first one based on some criteria from the second df. df1= summary participant_id response_date 0 2.0 11 2016-04-30 1 3.0 11 2016-05-01 2 3.0 11 2016-05-02 3 3.0 11 2016-05-03 4 3.0 11 2016-05-04 5 3.0 11 2016-05-05 6 3.0 11 2016-05-06 7 4.0 11 2016-05-07 8 4.0 11 2016-05-08 9 3.0 11 2016-05-09 10 3.0 11 2016-05-10 11 3.0 11 2016-05-11 12 3.0 11 2016-05-12 13 3.0 11 2016-05-13 14 3.0 11 2016-05-14 15 3.0 11 2016-05-15 16 3.0 11 2016-05-16 17 4.0 11 2016-05-17 18 3.0 11 2016-05-18 19 3.0 11 2016-05-19 20 3.0 11 2016-05-20 21 4.0 11 2016-05-21 22 4.0 11 2016-05-22 23 4.0 11 2016-05-23 24 3.0 11 2016-05-24 25 3.0 11 2016-05-25 26 3.0 11 2016-05-26 27 3.0 11 2016-05-27 28 3.0 11 2016-05-28 29 3.0 11 2016-05-29 .. ... ... ... df2 = summary participant_id response_date 0 12.0 11 2016-04-30 1 12.0 11 2016-05-14 2 14.0 11 2016-05-28 . ... ... ... I need to group (get blocks) of df1 between the dates in the column of df2. Namely: df1= summary participant_id response_date 2.0 11 2016-04-30 3.0 11 2016-05-01 3.0 11 2016-05-02 3.0 11 2016-05-03 3.0 11 2016-05-04 3.0 11 2016-05-05 3.0 11 2016-05-06 4.0 11 2016-05-07 4.0 11 2016-05-08 3.0 11 2016-05-09 3.0 11 2016-05-10 3.0 11 2016-05-11 3.0 11 2016-05-12 3.0 11 2016-05-13 3.0 11 2016-05-14 3.0 11 2016-05-15 3.0 11 2016-05-16 4.0 11 2016-05-17 3.0 11 2016-05-18 3.0 11 2016-05-19 3.0 11 2016-05-20 4.0 11 2016-05-21 4.0 11 2016-05-22 4.0 11 2016-05-23 3.0 11 2016-05-24 3.0 11 2016-05-25 3.0 11 2016-05-26 3.0 11 2016-05-27 3.0 11 2016-05-28 3.0 11 2016-05-29 .. ... ... ... Is there an elegant solution with groupby? A: There might be a more elegant solution but you can loop through the response_date values in df2 and create a boolean series of values by checking against the all the response_date values in df1 and simply summing them all up. 
df1['group'] = 0 for rd in df2.response_date.values: df1['group'] += df1.response_date > rd Output: summary participant_id response_date group 0 2.0 11 2016-04-30 0 1 3.0 11 2016-05-01 1 2 3.0 11 2016-05-02 1 3 3.0 11 2016-05-03 1 4 3.0 11 2016-05-04 1 Building off of @Scott's answer: You can use pd.cut but you will need to add a date before the earliest date and after the latest date in response_date from df2 dates = [pd.Timestamp('2000-1-1')] + df2.response_date.sort_values().tolist() + [pd.Timestamp('2020-1-1')] df1['group'] = pd.cut(df1['response_date'], dates) A: You want the .cut method. This lets you bin your dates by some other list of dates. df1['cuts'] = pd.cut(df1['response_date'], df2['response_date']) grouped = df1.groupby('cuts') print grouped.max() #for example
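Both answers assume pandas is available. The underlying rule in the first answer (a row's group number is the count of cutoff dates strictly before its response_date) is exactly what the stdlib bisect module computes against a sorted cutoff list, so the same bucketing can be sketched without any dependency. The dates below are a hand-picked subset of the frames above:

```python
from bisect import bisect_left
from datetime import date

# Cutoff dates (df2.response_date, sorted) and some response dates from df1:
cutoffs = [date(2016, 4, 30), date(2016, 5, 14), date(2016, 5, 28)]
responses = [date(2016, 4, 30), date(2016, 5, 1), date(2016, 5, 14),
             date(2016, 5, 15), date(2016, 5, 28), date(2016, 5, 29)]

# Group = number of cutoffs strictly less than the date, i.e. bisect_left.
groups = [bisect_left(cutoffs, d) for d in responses]
print(groups)  # [0, 1, 1, 2, 2, 3]
```

This matches the accepted loop (summing df1.response_date > rd over the cutoffs) value for value, since each cutoff strictly below a date contributes exactly 1.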
{ "language": "en", "url": "https://stackoverflow.com/questions/44617917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: java.lang.NoClassDefFoundError: edu/emory/mathcs/backport/java/util/concurrent/BlockingQueue I am working with SWTBot, and created a plugin in order to test my application's GUI. At this point I have been able to initiate the bot, but I am now getting the following exception when testing the product:
java.lang.NoClassDefFoundError: edu/emory/mathcs/backport/java/util/concurrent/BlockingQueue
 at net.sf.ehcache.config.ConfigurationHelper.createCache(ConfigurationHelper.java:418)
 at net.sf.ehcache.config.ConfigurationHelper.createDefaultCache(ConfigurationHelper.java:334)
 at net.sf.ehcache.CacheManager.configure(CacheManager.java:306)
 at net.sf.ehcache.CacheManager.init(CacheManager.java:226)
 at net.sf.ehcache.CacheManager.<init>(CacheManager.java:213)
 at net.sf.ehcache.hibernate.EhCacheProvider.start(EhCacheProvider.java:127)
 at org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge.start(RegionFactoryCacheProviderBridge.java:72)
 at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:250)
 at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1385)
 at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:954)
It happens when the program tries to build a session factory in Hibernate. I've been googling a lot and most of the answers I found are related to Maven/Spring usage, which is not what I am using. The problem seems to be the lack of backport.util.concurrent.jar, which is (or should be) included in the java.util.concurrent.jar. I managed to create a plugin from the backport.util.concurrent.jar and include it in my target definitions, but the problem still persists. Does anyone have a clue how this problem can be solved? Any help will be much appreciated. Thanks in advance!
A: I figured it out. I thought the problem was in my SWTBot tester plugin, but it was indeed in one of the several plugins present in the product I am testing.
Solution was to add the dependency in the correct plugin of the product (instead of adding it in the swtbot tester plugin). Thanks anyway
{ "language": "en", "url": "https://stackoverflow.com/questions/19834487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Error while initializing node sdk in hyperledger fabric I'm trying to connect to the node SDK in hyperledger Fabric using my networkConnection.yaml for 3 organizations with 2 peers each and a kafka orderer but I get the following error: 2019-02-04T12:34:45.710Z - error: [Network]: _initializeInternalChannel: Unable to initialize channel. Attempted to contact 6 Peers. Last error was Error: 2 UNKNOWN: Stream removed Error processing transaction. Error: Unable to initialize channel. Attempted to contact 6 Peers. Last error was Error: 2 UNKNOWN: Stream removed Error: Unable to initialize channel. Attempted to contact 6 Peers. Last error was Error: 2 UNKNOWN: Stream removed at Network._initializeInternalChannel (/home/alberto/ibotics-network-1/application/node_modules/fabric-network/lib/network.js:127:12) Disconnect from Fabric gateway. Issue program complete. Thank you My Fabric network is running on Gcloud in docker swarm in 3 instances (16.04 LTS).
{ "language": "en", "url": "https://stackoverflow.com/questions/54516326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there a way to pass the **type** HTML input attribute value to the $_POST array in an HTML/PHP Form? Can we somehow pass the type HTML input attribute value to the $_POST array, or grab it any other way with PHP? I am aware that I can create a hidden field and basically put the type of the real input into the value of the hidden field, but this seems a bit like "repeating" work to me. I want to create a Form where input values are submitted to $_POST and I can detect the type of each input without the need to hardcode/map the single inputs each to a type. In this way I could detect the field type and act upon it without the need to create a "map" that maps my custom inputs (by name or ID) to a certain type, which I already declare in the HTML form anyway. It seems a real shortcoming that the type of an input is undetectable in a Form Submit - or perhaps (hopefully) I miss something? A: Can we somehow pass the type HTML input attribute value to the $_POST array or grab it anyhow else with PHP? Not per se. I am aware that I can create a hidden field and basically put the type of the real input into the value of the hidden field That is a way to do it. It seems a real shortcoming that the type of an input is undetectable in a Form Submit Usually you know what type of data you expect for a given field because you aren't processing them generically, so it would rarely be a useful feature. perhaps (hopefully) I miss something? No. A: Here is the breakdown: GET (accessed via $_GET in PHP) and POST (accessed via $_POST) are transport methods, as are PUT, DELETE, etc. For a form it does not matter which method you use; the form only works on the client side and only knows how to map everything in it into a serialised query string. For example, <input type="text" id="firstname" name="fname"> is serialised using only the name attribute, into this: ?fname=ferret See, it didn't even bother with the ID attribute.
When we hit the submit button, the form only runs through the name attributes of the inputs, making each name the left-hand side of a pair and the user input the right-hand side of the value. It will not do anything else at all. On the PHP side we ask the $_GET or $_POST channel whether there are any query strings in the request; emphasis on the word string. PHP explodes the string into an array and gives it to you, hence $_POST['fname']. It looks something like this: $_POST = [ fname => 'ferret', somethingelse => 'someothervalue' ] So what you are trying to do, or at least asking to do, is to make the browser change this behaviour (which we cannot do in any real sense of the matter) so that the form adds something like this: ?fname=ferret,text ?fname=ferret-text ?fname=ferret/text A form will not do this by default, unless you run a custom function updating each query pair before submit, and that is prone to escaping mistakes: miss one case in a hundred and you have a hole (in case you are wondering, that is how XSS and injection attacks happen). Then on the PHP side you would want PHP to figure out on its own that the part after the slash is a type, like so: $_POST = [ fname => 'ferret/text' ] PHP will not do that on its own, unless you fork it and make something custom (like Facebook has done), or at least build some kind of low-level library, but that too would be after the fact. Query string standards are rigid precisely to keep things a plain, serialised string of minimal data. So yes, what you intended to do with the hidden field is one tested way of achieving what you want.
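The serialisation behaviour described above can be sketched in a few lines. This is only an illustrative model of what the browser does on submit, and the `__type` suffix in the workaround is a made-up convention (nothing HTML or PHP defines), but it shows why the type never arrives server-side unless you encode it yourself:

```javascript
// Illustrative model of form serialisation: only name=value pairs are
// emitted; the type attribute never reaches the query string.
function serialize(fields) {
  return fields
    .map((f) => encodeURIComponent(f.name) + "=" + encodeURIComponent(f.value))
    .join("&");
}

// Workaround sketch: emit a companion "<name>__type" pair per field, which
// the server can then read back from $_POST like any other value.
function serializeWithTypes(fields) {
  const pairs = [];
  for (const f of fields) {
    pairs.push(encodeURIComponent(f.name) + "=" + encodeURIComponent(f.value));
    pairs.push(encodeURIComponent(f.name + "__type") + "=" + encodeURIComponent(f.type));
  }
  return pairs.join("&");
}
```

In a real page you would build the companion pairs (or hidden fields) in a submit handler; the hidden-field approach from the question is the equivalent done in markup.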
{ "language": "en", "url": "https://stackoverflow.com/questions/66931914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: bad URI(is not URI?) in tests Rspec I've got these strange issues and I don't know how to fix them. Please help me if you can. Here is a testing result: 1) Admin can edit a hotel Failure/Error: visit edit_admin_hotel_path(hotel) URI::InvalidURIError: bad URI(is not URI?): # ./spec/requests/admin_spec.rb:32:in `block (2 levels) in <top (required)>' 2) Admin can edit a user Failure/Error: visit edit_admin_user_path(admin) URI::InvalidURIError: bad URI(is not URI?): # ./spec/requests/admin_spec.rb:54:in `block (2 levels) in <top (required)>' rake routes shows me nice edit routes for users and hotels: edit_admin_hotel GET /admin/hotels/:id/edit(.:format) admin/hotels#edit edit_admin_user GET /admin/users/:id/edit(.:format) admin/users#edit And everything works just fine if I start the server and check it manually. So I have no idea where these issues come from. Thanks for any help! And my admin_spec.rb file: require 'spec_helper' describe "Admin" do let(:admin) { FactoryGirl.create(:user) } let(:hotel) { FactoryGirl.create(:hotel) } before(:each) do sign_up_as_admin admin visit admin_hotels_path end subject { page } it { expect(page).to have_content("Manage Hotels") } it { expect(page).to have_content("Manage Users") } it { expect(page).to have_link("Sign out") } it { expect(page).to have_content("List of hotels") } it { expect(page).to have_content("Hello, Admin") } it "can add a hotel" do click_link "Add Hotel" expect(current_path).to eq(new_admin_hotel_path) fill_in 'name', with: "TestHotel" fill_in 'price', with: "666" fill_in 'star_rating', with: "5" expect { click_button "Submit" }.to change(Hotel,:count).by(1) expect(current_path).to eq(admin_hotel_path(1)) end it "can edit a hotel" do visit edit_admin_hotel_path(hotel) end it "can delete a hotel" do visit admin_hotel_path(hotel) expect { click_link "Delete hotel" }.to change(Hotel,:count).by(-1) #expect { click_link "Delete hotel" }.to redirect_to(admin_hotels_path) end it "can create a new user" do click_link "Add
User" expect(current_path).to eq(new_admin_user_path) expect(page).to have_content("Create New User") fill_in "Name", :with => "user" fill_in "Email", :with => "user@auser.com" fill_in "Password", :with => "user.password" fill_in "password_confirmation", :with => "user.password" expect { click_button "Create User" }.to change(User,:count).by(1) expect(current_path).to eq(admin_users_path) end it "can edit a user" do visit edit_admin_user_path(admin) end end Edit/update actions in users_controller.rb: # GET admin/users/1/edit def edit @user = User.find(params[:id]) render "edit", status: 302 end # PATCH/PUT admin/users/1 def update @user = User.find(params[:id]) if @user.try(:update_attributes, user_params) render "edit", notice: 'User was successfully updated.' else render action: 'edit' end end private def user_params params.require(:user).permit(:name, :email, :password, :password_confirmation, :admin) end And user/edit.html.erb: <% provide(:title, "Edit user") %> <h1>Update profile</h1> <div class="row"> <div class="span6 offset3"> <%= form_for([:admin, @user]) do |f| %> <%= render 'shared/error_messages', object: f.object %> <%= f.label :name %> <%= f.text_field :name %> <%= f.label :email %> <%= f.text_field :email %> <%= f.label :password %> <%= f.password_field :password %> <%= f.label :password_confirmation, "Confirm Password" %> <%= f.password_field :password_confirmation %> <%= f.submit "Save changes" %> <% end %> <%= button_to 'Delete User', [:admin, @user], :data => { confirm: 'Are you sure?' }, method: :delete %> </div> </div> Update 1: I found out that these bad URI(is not URI?) errors also occur in hotel_controller and comment_controller while testing the edit action. These errors are in the edit actions of all of my controllers and I don't know what is causing them :(
{ "language": "en", "url": "https://stackoverflow.com/questions/30554892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MongoDb Lookup gives empty aggregation result when using Spring Data Mongodb The following is a app_relation Collection : { "_id" : ObjectId("5bf518bb1e9f9d2f34a8299b"), "app_id" : "123456789", "dev_id" : "1", "user_id" : "1", "status" : "active", "created" : NumberLong(1542789294) } The other collection is app : { "_id" : ObjectId("5bd02abb1e9f9d2adc211138"), "app_id" : "123456789", "custom_app_name" : "Demo", "price" : 10, "created" : NumberLong(1540369083) } Using Lookup in mongodb I want to embed App collection in AppRelation For the same my mongodb query is: db.app_relation.aggregate([ { $lookup: { "from": "app", "localField": "app_id", "foreignField": "app_id", "as": "data" } }, { $match: { "data": { "$size": 1 } } } ]) The equivalent code in Spring Java is : LookupOperation lookupOperation = LookupOperation.newLookup().from("app").localField("app_id") .foreignField("app_id").as("data"); AggregationOperation match = Aggregation.match(Criteria.where("data").size(1)); Aggregation aggregation = Aggregation.newAggregation(lookupOperation, match) .withOptions(Aggregation.newAggregationOptions().cursor(new BasicDBObject()).build()); List<AppRelation> results = mongoTemplate.aggregate(aggregation, AppRelation.class, AppRelation.class) .getMappedResults(); When executing the above code it provides the empty collection, whereas executing mongo db query it provides proper result. The query generated in Debug logs is: { "aggregate": "app_relation", "pipeline": [ { "$lookup": { "from": "app", "localField": "app_id", "foreignField": "app_id", "as": "data" } }, { "$match": { "data": { "$size": 1 } } } ], "cursor": {} }
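For reference, the join the working shell pipeline performs can be simulated in plain code. This is only an illustration of `$lookup` semantics on the two sample documents (the real join runs server-side in MongoDB, and the Spring mapping question is separate from it):

```python
def lookup(local_docs, foreign_docs, local_field, foreign_field, as_field):
    """Simulate a $lookup stage: for each local document, attach the list of
    foreign documents whose foreign_field equals its local_field."""
    joined = []
    for doc in local_docs:
        out = dict(doc)
        out[as_field] = [f for f in foreign_docs
                         if f.get(foreign_field) == doc.get(local_field)]
        joined.append(out)
    return joined

app_relation = [{"app_id": "123456789", "dev_id": "1", "user_id": "1"}]
app = [{"app_id": "123456789", "custom_app_name": "Demo", "price": 10}]

result = lookup(app_relation, app, "app_id", "app_id", "data")
# result[0]["data"] holds the single matching app document, so the
# $match { data: { $size: 1 } } stage would keep this document
```

Comparing this expected shape against what the Spring aggregation returns can help narrow down whether the pipeline or the result mapping is at fault.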
{ "language": "en", "url": "https://stackoverflow.com/questions/53479739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How does work the signature appearance in Itext 7? Has anybody already played with the signature appearance of PdfSignatureFormField in Itext 7 ? If yes, could you please give a little explanation and/or a little example Thanks in advance David L. A: You have itext7 samples here: http://gitlab.itextsupport.com/itext7/samples/tree/develop This is a sample about signature with appearance: http://gitlab.itextsupport.com/itext7/samples/blob/develop/publications/signatures/src/test/java/com/itextpdf/samples/signatures/chapter03/C3_01_SignWithCAcert.java
{ "language": "en", "url": "https://stackoverflow.com/questions/38975377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to check if an application got closed and returned to device application list using robotium? Is there any way to check whether an Android application got closed and returned to the device application list using Robotium? A: There is no direct way, however two ideas came to my mind. First: private boolean isApplicationClosed() { return solo.getCurrentViews().size() == 0; } Second (this may affect your application): private boolean isApplicationClosed() { try { solo.clickOnScreen(100, 100); } catch (AssertionFailedError e) { if ("Click can not be completed!".equals(e.getMessage().trim())) { return true; } } return false; }
{ "language": "en", "url": "https://stackoverflow.com/questions/19808599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: JQuery addClass() and removeClass() synchronisation I have the following table: <table> <tr class="change"><td>Click to change</td></tr> <tbody id="p1" class="now"> <tr>...</tr> <tr>...</tr> <tr>...</tr> <tr>...</tr> </tbody> <tbody id="p2" class="next"> <tr>...</tr> <tr>...</tr> <tr>...</tr> <tr>...</tr> </tbody> <tbody id="p3" class="previous"> <tr>...</tr> <tr>...</tr> <tr>...</tr> <tr>...</tr> </tbody> </table> I want to hide the tbody with class "now" and display the one with class "next" when I click the change row. My jQuery (computes only now and next): $(document).ready(function(){ $('.change').click(function(){ $('.now').hide('slow', function(){ $('.next').show('slow', function(){ $('#p1').removeClass('recent'); $('#p2').removeClass('next'); $('#p1').addClass('next'); $('#p2').addClass('recent'); }); }); }); }); I can see I'm doing it wrong, so I want to ask: how do I synchronize it nicely, so that when I click on "change" my "now" tbody becomes "previous", "next" becomes "now" and "previous" becomes "next"? A: try $(document).ready(function () { $('.change').click(function () { $('.now').hide('slow', function () { $('.next').show('slow', function () { $prev = $('.previous'); $now = $('.now'); $next = $('.next'); $prev.removeClass('previous').addClass('next'); $now.removeClass('now').addClass('previous'); $next.removeClass('next').addClass('now'); }); }); }); }); A: $('.change').click(function(){ var now = $('.now'); var next = $('.next'); var previous = $('.previous'); now.hide('slow', function(){ next.show('slow'); previous.removeClass('previous').addClass('next'); now.removeClass('now').addClass('previous'); next.removeClass('next').addClass('now'); }); }); DEMO A: $('.change').click(function() { var $now = $('.now'), $next = $('.next'), $previous = $('.previous'); $now.attr('class', 'previous'); $next.attr('class', 'now'); $previous.attr('class', 'next'); }); (the three selections are cached before any reassignment; otherwise the $('.previous') selector on the last line would also match the element that the first line had just renamed to "previous")
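The three-way rotation itself can be written as a pure mapping, which makes the pitfall explicit: snapshot the current classes before reassigning any of them, or a later selection can match an element you just renamed. A standalone sketch (not jQuery code):

```javascript
const ROTATION = { now: "previous", next: "now", previous: "next" };

// Rotate an array of class names in one pass. Reading the whole input
// before writing anything back mirrors caching $('.now') / $('.next') /
// $('.previous') before calling removeClass/addClass on any of them.
function rotateClasses(classes) {
  return classes.map((cls) => ROTATION[cls] || cls);
}
```

Applying the same mapping again advances the cycle another step, so repeated clicks keep rotating the three tbody elements.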
{ "language": "en", "url": "https://stackoverflow.com/questions/11048210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can floating point multiplication by zero be optimised at runtime? I am writing an algorithm to find the inverse of an nxn matrix. Let us take the specific case of a 3x3 matrix. When you invert a matrix by hand, you typically look for rows/columns containing one or more zeros to make the determinant calculation faster as it eliminates terms you need to calculate. Following this logic in C/C++, if you identify a row/column with one or more zeros, you will end up with the following code: float term1 = currentElement * DetOf2x2(...); // ^ // This is equal to 0. // // float term2 = ... and so on. As the compiler cannot know currentElement will be zero at compile time, it cannot be optimised to something like float term = 0; and thus the floating point multiplication will be carried out at runtime. My question is, will these zero values make the floating point multiplication faster, or will the multiplication take the same amount of time regardless of the value of currentElement? If there is no way of optimising the multiplication at runtime, then I can remove the logic that searches for rows/columns containing zeros. A: float term1 = currentElement * DetOf2x2(...); The compiler will call DetOf2x2(...) even if currentElement is 0: that's sure to be far more costly than the final multiplication, whether by 0 or not. There are multiple reasons for that: * *DetOf2x2(...) may have side effects (like output to a log file) that need to happen even when currentElement is 0, and *DetOf2x2(...) may return values like the Not-a-Number / NaN sentinel that should propagate to term1 anyway (as noted first by Nils Pipenbrinck) Given DetOf2x2(...) is almost certainly working on values that can only be determined at run-time, the latter possibility can't be ruled out at compile time. If you want to avoid the call to Detof2x2(...), try: float term1 = (currentElement != 0) ? currentElement * DetOf2x2(...) 
: 0; A: Modern CPUs will actually handle a multiply-by-zero very quickly, more quickly than a general multiply, and much more quickly than a branch. Don't even bother trying to optimize this unless that zero is going to propagate through at least several dozen instructions. A: The compiler is not allowed to optimize this unless the calculation is trivial (e.g. all constants). The reason is that DetOf2x2 may return a NAN floating point value. Multiplying a NAN with zero does not return zero but a NAN again. You can try it yourself using this little test here:
#include <stdio.h>
#include <math.h>

int main (int argc, char **args)
{
    // generate a NAN
    float a = sqrt (-1);
    // Multiply NAN with zero..
    float b = 0*a;
    // this should *not* output zero
    printf ("%f\n", b);
}
If you want to optimize your code, you have to test for zero on your own. The compiler will not do that for you. A: Optimisations performed at runtime are known as JIT (just-in-time) optimisations. Optimisations performed at translation (compilation) are known as AOT (ahead-of-time) optimisations. You're referring to JIT optimisations. A compiler might introduce JIT optimisations into your machine code, but it's certainly a far more complex optimisation to implement than the common AOT optimisations. Optimisations are typically implemented based on significance, and this kind of "optimisation" might be seen to affect other algorithms negatively. C implementations aren't required to perform any of these optimisations. You could provide the optimisation manually, which would be "the logic that searches for rows/columns containing zeros", or something like this: float term1 = currentElement != 0 ? currentElement * DetOf2x2(...) : 0; A: The following construct is valid at compile time when the compiler can guess the value of "currentElement". float term1 = currentElement ? currentElement * DetOf2x2(...)
: 0; If it cannot be guessed at compile time, it will be checked at run-time, and the performance depends on the processor architecture: the trade-off is between a branch (including branch latency and the delay to rebuild the instruction pipeline, which can be up to 10 or 20 cycles), flat code (some processors run 3 instructions per cycle), and hardware branch prediction (when the hardware supports it). Since multiplication throughput is close to 1 cycle on an x86_64 processor, there is no performance difference depending on operand values like 0.0, 1.0, 2.0 or 12345678.99; if such a difference existed, it would be perceived as a covert channel in cryptographic-style software. GCC allows you to check function parameters at compile time:
inline float myFn(float currentElement, myMatrix M)
{
    if (__builtin_constant_p(currentElement) && currentElement == 0.0)
        return 0.0;
    else
        return currentElement * det(M);
}
you need to enable inlining and interprocedural optimizations in the compiler.
{ "language": "en", "url": "https://stackoverflow.com/questions/15214673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: In Openpyxl - want to convert the excel cell value into an integer while copying data into csv I am copying CSV values into Excel using openpyxl.
with open('file_modified.csv', 'r') as f:
    reader = csv.reader(f, delimiter="|")
    next(reader, None)  # skip the headers in the csv file
    i = 2  # paste into second row of the excel
    for row in reader:
        for j in range(1, len(row)):
            if (j == 1 or j == 10):
                ws_1.cell(row=i, column=j).value = float(row[j - 1])
                ws_1.cell(row=i, column=j).number_format = '0000000000'
            else:
                # writing the read value to destination excel file
                ws_1.cell(row=i, column=j).value = row[j - 1]
        i += 1
wb_1.save(destination_file)
But when I open the Excel file, columns 1 and 10 still show an error, and I need to manually go and convert them to strings. Also, when I open the Excel file it throws a recovery error. Not sure why. -Prasanna.K
{ "language": "en", "url": "https://stackoverflow.com/questions/67866059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using git as backup tool with cron and ssh aliases I have a repository for storing all my configs (e. g. .ssh/config). From time to time I need to back them up, so I need to run these commands git commit -am 'Auto git backup `date -I`' git push origin master It is pretty annoying, so I decided to make a cron job which runs a simple script with the lines above. I ran crontab -e and added a new line at the end of the file. I also replaced git with which git inside of the script. The git commit command works perfectly, in contrast with git push. When I added some debug prints I got ssh: Could not resolve hostname bitbucket.org: Name or service not known fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. But when I run the script manually, it works. I think it is because of the SSH aliases in my .ssh/config, which looks like Host bitbucket.org-personal HostName bitbucket.org PreferredAuthentications publickey IdentityFile ~/.ssh/id_rsa Host bitbucket.org-work HostName bitbucket.org PreferredAuthentications publickey IdentityFile ~/.ssh/another_id_rsa Does somebody have an idea how to propagate SSH aliases to the environment which runs cron jobs? Also, maybe I am wrong and the problem is caused by something else. A: The issue was the network connection. When I added a sleep (or the modified solution from How to check internet access using bash script in linux?) at the beginning of the script, it worked perfectly.
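A sketch of the cron-side fix: retry a connectivity probe instead of a fixed sleep. The probe command, repository path and remote below are placeholders for your own setup, not anything the question specifies:

```shell
#!/bin/sh
# Retry a connectivity probe a bounded number of times before giving up.
wait_for_network() {
    probe=$1
    tries=$2
    n=0
    while [ "$n" -lt "$tries" ]; do
        if $probe >/dev/null 2>&1; then
            return 0
        fi
        n=$((n + 1))
        sleep 1
    done
    return 1
}

# Hypothetical usage in the backup script run from crontab:
#   wait_for_network "ping -c1 -W2 bitbucket.org" 30 || exit 1
#   cd "$HOME/configs" || exit 1
#   git commit -am "Auto git backup $(date -I)"
#   git push origin master
```

This bounds the wait (here at most 30 seconds) rather than guessing a single sleep duration, which matters when the machine's network comes up at a variable time after boot.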
{ "language": "en", "url": "https://stackoverflow.com/questions/45653685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to add more data into LocalStorage instead of overwriting it in contenteditable attribute? I want to create a contenteditable div; when the user adds something to it, it should be added to local storage, but right now it is overwritten whenever another edit is added. HTML code: <!DOCTYPE html> <html> <head> </head> <body onload="checkEdits()"> <div id="edit" contenteditable="true"> Here is the element's original content </div> <input type="button" value="save my edits" onclick="saveEdits()" /> <div id="update"> - Edit the text and click to save for next time</div> <h1 contentEditable="true">Your Name</h1> <script src="script.js"></script> </body> </html> JS code:
function saveEdits() {
    //get the editable element
    var editElem = document.getElementById("edit");
    //get the edited element content
    var userVersion = editElem.innerHTML;
    //save the content to local storage
    localStorage.userEdits = userVersion;
    //write a confirmation to the user
    document.getElementById("update").innerHTML = "Edits saved!";
}
function checkEdits() {
    //find out if the user has previously saved edits
    if (localStorage.userEdits != null)
        document.getElementById("edit").innerHTML = localStorage.userEdits;
}
A: You need to change your saveEdits function to check whether there is anything already saved in storage under the same key. To achieve this I recommend using getItem and setItem from the Web Storage API; here is an example of how you can do it:
function saveEdits() {
    //get the editable element
    var editElem = document.getElementById("edit");
    //get the edited element content
    var userVersion = editElem.innerHTML;
    //get previously saved edits
    const previousEditsStr = localStorage.getItem('userEdits');
    // parse already saved edits or create empty array
    const savedEdits = previousEditsStr ?
    JSON.parse(previousEditsStr) : [];
    // push the latest one
    savedEdits.push(userVersion);
    //stringify and save the content to local storage
    localStorage.setItem('userEdits', JSON.stringify(savedEdits));
    //write a confirmation to the user
    document.getElementById("update").innerHTML = "Edits saved!";
}
Please note that storage here is limited and you need to keep it under control, for example by limiting the number of previously saved edits. Since we are saving an array, that means you need to change your reading part as well:
function checkEdits() {
    const userEdits = localStorage.getItem('userEdits');
    //find out if the user has previously saved edits
    if (userEdits) {
        // here are the saved edits
        const comments = JSON.parse(userEdits);
        // showing the previously saved message
        document.getElementById("edit").innerHTML = comments[comments.length - 1];
    }
}
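The read-parse-push-stringify-write pattern above can be isolated into one helper. To keep this sketch runnable outside a browser it stubs localStorage with a Map; in a page you would use the real `window.localStorage`, and the 50-entry cap is just an example limit for the quota concern mentioned above:

```javascript
// Stub with the two localStorage methods the helper needs (browser code
// would use window.localStorage instead of this).
const store = new Map();
const localStorage = {
  getItem: (key) => (store.has(key) ? store.get(key) : null),
  setItem: (key, value) => { store.set(key, String(value)); },
};

// Append one edit to the saved array instead of overwriting the key.
function appendEdit(key, edit, limit = 50) {
  const raw = localStorage.getItem(key);
  const edits = raw ? JSON.parse(raw) : [];
  edits.push(edit);
  // Storage is quota-limited, so keep only the most recent `limit` entries.
  localStorage.setItem(key, JSON.stringify(edits.slice(-limit)));
  return edits.length;
}
```

Each call grows the stored array by one entry until the cap is reached, at which point the oldest entries are dropped.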
{ "language": "en", "url": "https://stackoverflow.com/questions/74918223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Read data from json This is my JSON file { "styleMapping": { "1": { "zIndex": 1, "StyleMappingCollection": { "636404903145791477": { "border": { "color": "#FF000000", "width": "Small", "type": "Solid" }, "background": { "bgOption": "Image", "thumbOption": "Pointer", "opacity": 1.0, "bgColor": "#FFF5F5DC", "bgImage": "C:\Users\raj\Downloads\images\image.jpg", } } } i want to capture bgImage parameter in my javascript. My script code is $http.get('Data//xyz.json').then(successCallback, errorCallback); function successCallback(response) { sliderCtrlPtr.sliderParams = response.data; sliderCtrlPtr.sliderParams.height = response.data.deviceHeight; console.log("After JSON read : ",sliderCtrlPtr.sliderParams); } function errorCallback(error) { //error code } sliderCtrlPtr.GetsliderStyle = function () { if(sliderCtrlPtr.sliderParams != undefined) { var styleObj = sliderCtrlPtr.sliderParams; canvas.color = styleObj.StyleMappingCollection. 636404903145791477.background.bgColor; } }; }]); })(); i want to retrieve bgColor or bgImage parameter from my json file in my script. How can I do that? A: I think the line should be. From the JSON provided which is not complete, I could only find this error! sliderCtrlPtr.GetsliderStyle = function() { if (sliderCtrlPtr.sliderParams != undefined) { var styleObj = sliderCtrlPtr.sliderParams; canvas.color = styleObj["styleMapping"]["1"]["StyleMappingCollection"] ["636404903145791477"]["background"]["bgColor"]; canvas.image = styleObj["styleMapping"]["1"]["StyleMappingCollection"] ["636404903145791477"]["background"]["bgImage"]; } } A: Dot notation: First you need to parse the JSON with obj = JSON.parse(json); Since the JSON contains objects, you then use the dot notation, e.g. var backColor = obj.StyleMappingCollection.background.bgcolor; The thing is, those numeric field names in the JSON could cause problems in using this. 
This is en excerpt from a program I've written, in which I have a partners.json file (which contains nested objects in different levels) and I want to store the JSON offices field in an object. The JSON file is in this form: [ { "id": 1, "urlName": "test-url", "organization": "Test Organization", "customerLocations": "Global", "willWorkRemotely": true, "website": "http://www.testurl.com/", "services": "This is a test services string", "offices": [ { "location": "Random, Earth", "address": "Randomness 42, 109 Some St \nRandomville 2000", "coordinates": "-33.8934219,151.20404600000006" } ] }, {"id": 2, ... }] I import the JSON in my JavaScript code: /* import partner information from the supplied JSON file */ var partners = require('./partners.json'); And then I parse and stringify the required "offices" fields accordingly: /* extract all office locations */ var officeLocations = {}; for (var i = 0; i < partners.length; i++) { officeLocations[i] = JSON.parse(JSON.stringify(partners[i].offices)); } Hope this helps.
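Putting the two answers together for the JSON in the question: after `JSON.parse`, keys that start with a digit (like `"1"` and `"636404903145791477"`) cannot follow a dot, so bracket notation is required. A trimmed, valid version of the file (the original snippet has a trailing comma and an unescaped Windows path, replaced here with a placeholder) could be read like this:

```javascript
// Trimmed, syntactically valid version of the question's JSON.
const json = '{"styleMapping":{"1":{"zIndex":1,"StyleMappingCollection":{"636404903145791477":{"background":{"bgOption":"Image","bgColor":"#FFF5F5DC","bgImage":"image.jpg"}}}}}}';

const obj = JSON.parse(json);

// Numeric-looking keys are not valid identifiers after a dot,
// so use bracket notation for "1" and "636404903145791477":
const background =
  obj.styleMapping["1"].StyleMappingCollection["636404903145791477"].background;
```

`background.bgColor` and `background.bgImage` are then plain property reads, since those keys are ordinary identifiers.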
{ "language": "en", "url": "https://stackoverflow.com/questions/46315273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: 'getNetworkInfo' is deprecated, what is the solution to check internet connectivity? I am using compileSdkVersion 24. A: Simple way!
public static boolean isConnectingToInternet(@NonNull Context context) {
    ConnectivityManager connectivity = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
    if (connectivity != null) {
        NetworkInfo info = connectivity.getActiveNetworkInfo();
        if (info != null) {
            if (info.getType() == ConnectivityManager.TYPE_WIFI || info.getType() == ConnectivityManager.TYPE_MOBILE || info.getType() == ConnectivityManager.TYPE_ETHERNET || info.getType() == ConnectivityManager.TYPE_WIMAX) {
                return true;
            }
        }
    }
    return false;
}
How to use: just check
if (!isConnectingToInternet(getContext())) {
    // here no internet connection
}
A: You can use getActiveNetworkInfo() instead of getNetworkInfo(), because there were some other factors which weren't considered before, like the network state varying from app to app while using getNetworkInfo(), hence it was deprecated (see the docs).
ConnectivityManager connectivityManager = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo activeNetwork = connectivityManager.getActiveNetworkInfo();
if (activeNetwork != null && activeNetwork.isConnectedOrConnecting()) {
    // yeah we are online
} else {
    // oops! no network
}
Note: put a nullity check in too, to confirm it is not null before accessing the network status. A:
public class NetworkUtils {
    public static boolean isConnected(Context context) {
        ConnectivityManager connectivityManager = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo activeNetwork = connectivityManager.getActiveNetworkInfo();
        return (activeNetwork != null) && activeNetwork.isConnectedOrConnecting();
    }
}
Use this class to check the network connection as:
if (NetworkUtils.isConnected(getContext())) {
    //Call your network related task
} else {
    //Show a toast displaying no network connection
}
A: Use ConnectivityManager to get network access. You can use a BroadcastReceiver to get constant updates of your network status.
package your_package_name;
import android.content.Context;
import android.content.ContextWrapper;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
public class ConnectivityStatus extends ContextWrapper {
    public ConnectivityStatus(Context base) {
        super(base);
    }
    public static boolean isConnected(Context context) {
        ConnectivityManager manager = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo connection = manager.getActiveNetworkInfo();
        if (connection != null && connection.isConnectedOrConnecting()) {
            return true;
        }
        return false;
    }
}
Receive updates from your network class; use this in the class where you want to update the status. Register the receiver:
getContext().registerReceiver(receiver, new IntentFilter(ConnectivityManager.CONNECTIVITY_ACTION));
private BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (!ConnectivityStatus.isConnected(getContext())) {
            //not connected
        } else {
            //connected
        }
    }
};
{ "language": "en", "url": "https://stackoverflow.com/questions/39408697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to change the Facebook comment box if params change in the current component (Angular 6) Currently I use the Facebook comments plugin; it only shows comments for the first params. If I route to the current component again (but send new params), the comment box does not update. My question: Is it possible, if I route to the current component but change the params (http://localhost:4000/content?contentId=1 to http://localhost:4000/content?contentId=2), for commentBox Id=1 to change to commentBox Id=2? Sorry for my English, thanks for your help. ** I tried angular2+lazyload, angular6+universal and ngx-facebook, but it did not work (or did I make a mistake?) A: This is the approach you want:
ngOnInit() {
    this.activeRoute.queryParams.subscribe(queryParams => {
        // do something with the query params
    });
    this.activeRoute.params.subscribe(routeParams => {
        this.loadUserDetail(routeParams.id);
    });
}
See https://kamranahmed.info/blog/2018/02/28/dealing-with-route-params-in-angular-5/
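Why subscribing fixes this: when only the params change, Angular reuses the component instance, so code that reads the params once (e.g. in the constructor) runs only for the first navigation, while a subscription keeps firing. The mechanism can be illustrated with a minimal observable stand-in (this is not Angular's ActivatedRoute, just a sketch of the pattern):

```javascript
// Minimal observable stand-in to illustrate the subscription mechanism.
class SimpleSubject {
  constructor() { this.handlers = []; }
  subscribe(handler) { this.handlers.push(handler); }
  next(value) { this.handlers.forEach((h) => h(value)); }
}

const queryParams = new SimpleSubject();
const rendered = [];

// ngOnInit runs once per component instance, but the handler below fires
// on every params change; this is where the comment box would be rebuilt.
queryParams.subscribe((params) => rendered.push("commentBox " + params.contentId));

queryParams.next({ contentId: "1" }); // /content?contentId=1
queryParams.next({ contentId: "2" }); // same component instance, new params
```

A one-shot read would have produced only the first entry; the subscription produces one per navigation, which is the behaviour the comment box needs.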
{ "language": "en", "url": "https://stackoverflow.com/questions/51697305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to trigger the change event of Kendo Jquery Spreadsheet when using custom Paste function? I tried the below custom paste function to paste only values but it doesn't trigger the change event in order for me to sync the data to the data source. Dojo Demo Link ... change: onExcelChange, paste: function(e) { e.preventDefault() var currentRange = e.range; var fullData = e.clipboardContent.data; var mergedCells = e.clipboardContent.mergedCells; var topLeft = currentRange.topLeft(); var initialRow = topLeft.row; var initialCol = topLeft.col; var origRef = e.clipboardContent.origRef; var numberOfRows = origRef.bottomRight.row - origRef.topLeft.row + 1; var numberOfCols = origRef.bottomRight.col - origRef.topLeft.col + 1; var spread = e.sender; var sheet = spread.activeSheet(); var rangeToPaste = sheet.range(initialRow, initialCol, numberOfRows, numberOfCols); sheet.batch(function() { for(var i = 0; i < fullData.length; i += 1) { var currentFullData = fullData[i]; for(var j = 0; j < currentFullData.length; j += 1 ) { var range = sheet.range(initialRow + i, initialCol + j); var value = currentFullData[j].value; if (value !== null) { range.input(value); range.format(null); } } } sheet.select(rangeToPaste); }, { layout: true, recalc: true }); } ... I have tried to use recalc: true in sheet.batch to trigger the change to no avail. Any help would be greatly appreciated. A: All Kendo widgets inherit from Observable, which has a trigger method: var obj = new kendo.Observable(); obj.bind("myevent", function(e) { console.log(e.data); // outputs "data" }); obj.trigger("myevent", { data: "data" }); You need to manually trigger the Spreadheet's change event with its correct parameters.
{ "language": "en", "url": "https://stackoverflow.com/questions/62276644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
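The trigger-based answer above boils down to a tiny publish/subscribe dispatcher: programmatic edits bypass the widget's own change detection, so you fire the event yourself with the payload your handlers expect. A minimal Python sketch of the same bind/trigger shape (a hand-rolled stand-in, not Kendo's actual implementation):

```python
class Observable:
    """Minimal bind/trigger dispatcher, mirroring the kendo.Observable API shape."""

    def __init__(self):
        self._handlers = {}

    def bind(self, event, handler):
        # Register a handler for a named event.
        self._handlers.setdefault(event, []).append(handler)

    def trigger(self, event, payload=None):
        # Manually fire the event, as the answer does for "change".
        for handler in self._handlers.get(event, []):
            handler(payload)


seen = []
obs = Observable()
obs.bind("change", seen.append)
obs.trigger("change", {"range": "A1:B2"})  # programmatic change -> fire it yourself
```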
Q: Showing Popup Async I want an alert to be shown according to an async logic. This is my code : @IBAction func AddToFavoriteButton(_ sender: Any) { IsItemInFavoritesAsync(productId: productToDisplay.UniqueID()) { success in // Product exists in favorite tree Constants.showAlert(title: "Item in Tree", message: "Yes it is", timeShowing: 1, callingUIViewController: self) } // Product doesn't exist in favorite tree Constants.showAlert(title: "Item NOT in Tree", message: "Isn't", timeShowing: 1, callingUIViewController: self) Product.RegisterProductOnDatabaseAsFavorite(prodToRegister: productToDisplay, productType: productType) } private func IsItemInFavoritesAsync(productId: String, completionHandler: @escaping (Bool) -> ()) { let favoriteTree = productType == Constants.PRODUCT_TYPE_SALE ? Constants.FAVORITE_TREE_SALE : Constants.FAVORITE_TREE_PURCHASE Constants.refs.databaseRoot.child(favoriteTree).child((Constants.refs.currentUserInformation!.uid)).observeSingleEvent(of: .value, with: {(DataSnapshot) in return DataSnapshot.hasChild(self.productToDisplay.UniqueID()) ? completionHandler(true) : completionHandler(false) }) } What this code snippet does is: * *Check if a product was saved to favorites *If it has, show popup A *If it has not, show popup B I followed another post on StackOverflow that stated that's how I was supposed to call the async function. (Can Swift return value from an async Void-returning block?) What actually happens is that only popup B shows (the one outside the IsItemInFavoritesAsync closure). What is my mistake? I guess that IsItemInFavoritesAsync enters the closure upon finishing, and the function IsItemInFavoritesAsync continues onwards to the "Product not in tree" section. Notice: the function IsItemInFavoritesAsync reads from Firebase. How do I fix this? A: Both actions (positive result and negative result) need to be inside the completion handler. 
So something like this: IsItemInFavoritesAsync(productId: productToDisplay.UniqueID()) { success in if success { // Product exists in favorite tree Constants.showAlert(title: "Item in Tree", message: "Yes it is", timeShowing: 1, callingUIViewController: self) } else { // Product doesn't exist in favorite tree Constants.showAlert(title: "Item NOT in Tree", message: "Isn't", timeShowing: 1, callingUIViewController: self) Product.RegisterProductOnDatabaseAsFavorite(prodToRegister: productToDisplay, productType: productType) } }
{ "language": "en", "url": "https://stackoverflow.com/questions/52015149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
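The accepted fix generalizes beyond Swift: with any callback-based API, code placed after the async call runs before the result exists, so both the success and the failure branch must live inside the completion handler. A minimal Python sketch of that discipline (the favorites set and all names here are invented for illustration):

```python
import threading

def is_item_in_favorites_async(product_id, completion_handler):
    """Simulates an async lookup (like a Firebase observeSingleEvent call):
    the result only exists inside the completion handler."""
    def worker():
        completion_handler(product_id in {"fav-1", "fav-2"})  # pretend DB read
    t = threading.Thread(target=worker)
    t.start()
    return t  # returns immediately -- before completion_handler has run

results = []
done = threading.Event()

def on_result(success):
    # BOTH branches live inside the callback, mirroring the accepted answer.
    results.append("popup A" if success else "popup B")
    done.set()

t = is_item_in_favorites_async("fav-1", on_result)
# Anything written here, outside the callback, would run regardless of the
# lookup's outcome -- which is exactly the bug in the question.
done.wait(timeout=2)
```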
Q: double quotes pandas.read_csv I have large txt file with multiple words and chars and I'm trying to read this file into a pandas dataframe, with each word or char in a different row. The problem is that " is one of the chars, and the function reads all the words between two " as a single word (because of the quoting). How can I address this char as another regular char and not as a quoting char? I tried to play with the parameters of the read_csv function but couldn't manage to fix it. My code now: data = pd.read_csv(filepath, header=None, delimiter = "\t") Thanks in advance! A: you can use the parameter quotechar data = pd.read_csv("a.txt", delim_whitespace=True, header=None,quotechar="~") print(data.head()) a.txt abc def xyz "abc xyz" def Output 0 1 2 0 abc def xyz 1 "abc xyz" def there are qoutes left this way. A: Try via numpy's genfromtxt() method: import numpy as np data=np.genfromtxt('data.csv',dtype='str',delimeter='\t',skip_header=1) columns=np.genfromtxt('data.csv',dtype='str',delimiter='\t',skip_footer=len(data)) Finally: df=pd.Dataframe(data=data,columns=columns)
{ "language": "en", "url": "https://stackoverflow.com/questions/67762140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
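Both answers work around the quoting rather than switching it off. The direct route is the `csv` module's `QUOTE_NONE` constant, which makes `"` an ordinary character; pandas' `read_csv` accepts the same constant through its `quoting` parameter. A stdlib demonstration of the difference:

```python
import csv
import io

raw = 'abc\tdef\txyz\n"abc\txyz"\tdef\tghi\n'

# Default quoting treats "abc\txyz" as one quoted field spanning the tab.
default_rows = list(csv.reader(io.StringIO(raw), delimiter="\t"))

# QUOTE_NONE makes " an ordinary character, so every tab splits a field.
plain_rows = list(csv.reader(io.StringIO(raw), delimiter="\t",
                             quoting=csv.QUOTE_NONE))
```

So for the question's case, `pd.read_csv(filepath, header=None, delimiter="\t", quoting=csv.QUOTE_NONE)` reads each tab-separated token as its own field, quotes included.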
Q: How to create calculated field based on another time series I have a set of prices as data source for given timeseries and I would like to create a calculated field by combining two prices for each date: i.e., Price A *5 - Price B. Data source: Date Product Price 01.01.2018 A 10 01.01.2018 B 15 02.01.2018 A 20 02.01.2018 B 30 03.01.2018 A 10 03.01.2018 B 30 I don't know how to write the formula correctly for the Calculated field. What I expect is to build the following table: Date A B Combined Price (A *5 - B) 01.01.2018 10 15 35 02.01.2018 20 30 70 03.01.2018 10 30 20 Thank you A: Answer from Mohfooj can be found in Tableau forum here: https://community.tableau.com/message/900181#900181
{ "language": "en", "url": "https://stackoverflow.com/questions/55373075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
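Since the accepted answer is only a link, here is the underlying reshape spelled out: pivot the long table to one row per date, then compute `A*5 - B` per row. A plain-Python sketch over the question's sample data (in Tableau itself the equivalent would be a calculated field along the lines of `SUM(IF [Product]="A" THEN [Price]*5 END) - SUM(IF [Product]="B" THEN [Price] END)` — stated as an assumption, not verified against the linked thread):

```python
rows = [
    ("01.01.2018", "A", 10), ("01.01.2018", "B", 15),
    ("02.01.2018", "A", 20), ("02.01.2018", "B", 30),
    ("03.01.2018", "A", 10), ("03.01.2018", "B", 30),
]

# Pivot long -> wide: one dict of product prices per date.
by_date = {}
for date, product, price in rows:
    by_date.setdefault(date, {})[product] = price

# Row-wise combined price: A * 5 - B.
combined = {d: p["A"] * 5 - p["B"] for d, p in by_date.items()}
```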
Q: Why is this VBA line erroring out? Little bit of context, I'm updating a VBA macro from the early 2000's to work in 2021 AutoCAD... Lots of little bugfixes mostly related to changes in language or references. Here is the line that errors out with a "Run-time error '-2147221164(80040154)': Class not registered.". Dim OutPDF as ABCpdf.Doc 'init the ABCpdf 'uses the OutPDF a few times, works fine Set OutPDF = New ABCpdf.Doc The ABCpdf.Doc works fine in between those two lines, everything executes correctly up until it hits that Set OutPDF line. If there is more context needed for this, I can elaborate further.
{ "language": "en", "url": "https://stackoverflow.com/questions/65183935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: If my entity has a (0-1):1 relation to another entity, how would I model that in the database? For example, lets say I have an entity called user and an entity called profile_picture. A user may have none or one profile picture. So I thought, I would just create a table called "user" with this fields: user: user_id, profile_picture_id (I left all other attributes like name, email, etc. away, to simplify this) Ok, so if an user would have no profile_picture, it's id would be NULL in my relational model. Now someone told me that I have to avoid setting anything to NULL, because NULL is "bad". What do you think about this? Do I have to take off that profile_picture_id from the user table and create a link-table like user__profile_picture with user_id, profile_picture_id? Which would be considered to be "better practice" in database design? A: This is a perfectly reasonable model. True, you can take the approach of creating a join table for a 1:1 relationship (or, somewhat better, you could put user_id in the profile_picture table), but unless you think that very few users will have profile pictures then that's likely a needless complication. Readability is an important component in relational design. Do you consider the profile picture to be an attribute of the user, or the user to be an attribute of the profile picture? You start from what makes logical sense, then optimize away the intuitive design as you find it necessary through performance testing. Don't prematurely optimize. A: "NULL is bad" is a rather poor excuse for a reason to do (or not do) something. That said, you may want to model this as a dependent table, where the user_id is both the primary key and a foreign key to the existing table. Something like this: Users UserPicture Picture ---------------- -------------------- ------------------- | User_Id (PK) |__________| User_Id (PK, FK) |__________| Picture_Id (PK) | | ... | | Picture_Id (FK) | | ... 
| ---------------- -------------------- ------------------- Or, if pictures are dependent objects (don't have a meaningful lifetime independent of users) merge the UserPicture and Picture tables, with User_Id as the PK and discard the Picture_Id. Actually, looking at it again, this really doesn't gain you anything - you have to do a left join vs. having a null column, so the other scenario (put the User_Id in the Picture table) or just leave the Picture_Id right in the Users table both make just as much sense. A: NULL isn't "bad". It means "I don't know." It's not wrong for you or your schema to admit it. A: Your user table should not have a nullable field called profile_picture_id. It would be better to have a user_id column in the profile_picture table. It should of course be a foreign key to the user table. A: Since when is a nullable foreign key relationship "bad?" Honestly introducing another table here seems kind of silly since there's no possibility to have more than one profile picture. Your current schema is more than acceptable. The "null is bad" argument doesn't hold any water in my book. If you're looking for a slightly better schema, then you could do something like drop the "profile_picture_id" column from the users table, and then make a "user_id" column in the pictures table with a foreign key relationship back to users. Then you could even enforce a UNIQUE constraint on the user_id foreign key column so that you can't have more than one instance of a user_id in that table. EDIT: It's also worth noting that this alternate schema could be a little bit more future-proof should you decide to allow users to have more than one profile picture in the future. You can simply drop the UNIQUE constraint on the foreign key and you're done. A: It is true that having many columns with null values is not recommended. I would suggest you make the picture table a weak entity of user table and have an identifying relationship between the two. 
Picture table entries would depend on user id. A: Make the profile picture a nullable field on the user table and be done with it. Sometimes people normalize just for normalization sake. Null is perfectly fine, and in DB2, NULL is a first class citizen of values with NULL being included in indices. A: I agree that NULL is bad. It is not relational-database-style. Null is avoided by introducing an extra table named UserPictureIds. It would have two columns, UserId and PictureId. If there's none, it simply would not have the respective line, while user is still there in Users table. Edit due to peer pressure This answer focuses not on why NULL is bad - but, on how to avoid using NULLs in your database design. For evaluating (NULL==NULL)==(NULL!=NULL), please refer to comments and google.
{ "language": "en", "url": "https://stackoverflow.com/questions/1974940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
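The design several answers converge on — move the foreign key into the picture table and make it UNIQUE — is easy to verify with SQLite's stdlib driver. Table and column names below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT)")
# user_id is both a FK and UNIQUE, so each user has at most one picture;
# users without a row here simply have no picture -- no NULL column needed.
conn.execute("""CREATE TABLE profile_pictures (
                  picture_id INTEGER PRIMARY KEY,
                  user_id INTEGER UNIQUE NOT NULL REFERENCES users(user_id),
                  path TEXT)""")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ann"), (2, "bob")])
conn.execute("INSERT INTO profile_pictures VALUES (1, 1, 'ann.png')")

# A LEFT JOIN still yields NULL for picture-less users, but only at query time.
rows = conn.execute("""SELECT u.name, p.path
                       FROM users u
                       LEFT JOIN profile_pictures p ON p.user_id = u.user_id
                       ORDER BY u.user_id""").fetchall()
```

A second picture row for user 1 raises an `IntegrityError`, which is the 0..1 cardinality enforced by the schema rather than by application code.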
Q: Aggregation matrices in subgroups I have few matrix. They are in few subgroups (I have vector which shows these subgroups - for example matrix 1 and 2 are in group A, matrix 3 in group B, matrix 4,5,6 in group C, and so on). I want to add all matrices in one group to have new matrix. To sum up: input - matrices in some subgroups, output - matrices in amount of dimension of vector of subgroups) How can I do it? I tried with 'aggregate' and 'tapply', but it doesn't work: aggregate(list1,list2,func) where list1 contains all my matrices list2 contains subgroups func - my function to add matrices (it could be standard "+") EXAMPLE > subgroups=c(1,1,2,3,3,3) > M1<-matrix(c(2,3,1,4),2,2) > M1 [,1] [,2] [1,] 2 1 [2,] 3 4 > M2<-matrix(c(3,0,1,1),2,2) > M2 [,1] [,2] [1,] 3 1 [2,] 0 1 > M3<-matrix(c(0,0,1,1),2,2) > M3 [,1] [,2] [1,] 0 1 [2,] 0 1 > M4<-matrix(c(0,2,-9,-3),2,2) > M4 [,1] [,2] [1,] 0 -9 [2,] 2 -3 > M5<-matrix(c(0,0,1,1),2,2) > M5 [,1] [,2] [1,] 0 1 [2,] 0 1 > M6<-matrix(c(-1,2,2,1),2,2) > M6 [,1] [,2] [1,] -1 2 [2,] 2 1 > result=list(M1+M2,M3,M4+M5+M6) > result [[1]] [,1] [,2] [1,] 5 2 [2,] 3 5 [[2]] [,1] [,2] [1,] 0 1 [2,] 0 1 [[3]] [,1] [,2] [1,] -1 -6 [2,] 4 -1 A: Use tapply as shown: L <- list(M1, M2, M3, M4, M5, M6) # or mget(ls(pattern = "^M\\d$")) tapply(L, subgroups, Reduce, f = "+") giving: $`1` [,1] [,2] [1,] 5 2 [2,] 3 5 $`2` [,1] [,2] [1,] 0 1 [2,] 0 1 $`3` [,1] [,2] [1,] -1 -6 [2,] 4 -1
{ "language": "en", "url": "https://stackoverflow.com/questions/65263939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
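For comparison, the same group-wise matrix summation in Python: a dict of running sums keyed by subgroup plays the role of `tapply(L, subgroups, Reduce, f = "+")`. The matrices below are transcribed row-wise from the example (R's `matrix(c(...), 2, 2)` fills column-wise):

```python
from collections import defaultdict

subgroups = [1, 1, 2, 3, 3, 3]
M1 = [[2, 1], [3, 4]]; M2 = [[3, 1], [0, 1]]; M3 = [[0, 1], [0, 1]]
M4 = [[0, -9], [2, -3]]; M5 = [[0, 1], [0, 1]]; M6 = [[-1, 2], [2, 1]]
mats = [M1, M2, M3, M4, M5, M6]

def mat_add(a, b):
    # Element-wise sum of two equally shaped matrices.
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

sums = defaultdict(lambda: None)
for g, m in zip(subgroups, mats):
    sums[g] = m if sums[g] is None else mat_add(sums[g], m)

result = dict(sums)
```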
Q: West Europe Cloud Service on Azure shows in the United States I have a deployed application on Windows Azure in West Europe. However, the IP I have been assigned (168.63.108.xx) is marked as being in the US on http://cqcounter.com/whois/ Is there something wrong with my deployment? If not, what is the reason that it is shown in the US? Thanks in advance A: This is probably an issue with the whois and how it maps the IP to a location. Take a look at this file, it contains the IP address ranges for the Azure datacenters. Here's what you'll see for West Europe: <subregion name="West Europe"> .. <network>168.63.0.0/19</network> <network>168.63.96.0/19</network> .. </subregion> Now, since this is IP range is in CIDR notation, there are a few tools which make it easy to find the complete range, like this one. So actually, 168.63.96.0/19 = 168.63.96.0 - 168.63.127.255. And this range includes 168.63.108.xx. So there's no issue with your deployment and you can be sure it's located in West Europe.
{ "language": "en", "url": "https://stackoverflow.com/questions/13306554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
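The CIDR arithmetic in the answer can be checked directly with Python's stdlib `ipaddress` module — `168.63.96.0/19` does span `168.63.96.0` through `168.63.127.255`, which contains the 168.63.108.xx block:

```python
import ipaddress

west_europe = ipaddress.ip_network("168.63.96.0/19")
ip = ipaddress.ip_address("168.63.108.10")  # sample address from the question's block

first, last = west_europe[0], west_europe[-1]  # network and broadcast addresses
contained = ip in west_europe
```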
Q: two tables into one? My question is simple I just want to joint two tables into one table without any PK first table is completely different they have nothing same table1. table2. |в|q| |@|John | |ы|a| |£|Sara | |в|f| |$|ciro | |с|g| |%|Jo. | |ф|s| what I need is this Table3 |в|q|@|John | |ы|a|£|Sara | |в|f|$|ciro | |с|g|%|Jo. | |ф|s|-|- | A: This is a little complicated. You want a "vertical" list but have nothing to match the columns. You can use row_number() and union all: select max(t1_col1), max(t1_col2), max(t2_col1), max(t2_col2) from ((select t1.col1 as t1_col1, t1.col2 as t1_col2, null as t2_col1, null as t2_col2, row_number() over () as seqnum from table1 t1 ) union all (select null, null, t2.col1, t2.col2, row_number() over () as seqnum from table2 t2 ) ) t group by seqnum; Here is a db<>fiddle. Note that this will keep all rows in both tables, regardless of which is longer. The specific ordering of the rows in each column is not determinate. SQL tables represent unordered sets. If you want things in a particular order, you need a column that specifies the ordering. If you want to save this in a new table, put create table as table3 before the select. If you want to insert into an existing table, use insert.
{ "language": "en", "url": "https://stackoverflow.com/questions/58917520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
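Done client-side, the positional pairing that the `row_number()`/`UNION ALL` query performs is just a padded zip. A Python sketch over the question's data, padding with `-` as in the expected output:

```python
from itertools import zip_longest

table1 = [("в", "q"), ("ы", "a"), ("в", "f"), ("с", "g"), ("ф", "s")]
table2 = [("@", "John"), ("£", "Sara"), ("$", "ciro"), ("%", "Jo.")]

# Pair rows purely by position, padding the shorter table with ("-", "-").
table3 = [left + right
          for left, right in zip_longest(table1, table2, fillvalue=("-", "-"))]
```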
Q: How can I stub a gRPC or HTTP/2 request using WireMock.NET? I have a netcore web service that makes additional calls out to other webservices. One of those other web services is gRPC-based. I would like to write some tests at the protocol level by stubbing out the gRPC-based service with a simulated server. How can I stub a gRPC or HTTP/2 request using WireMock.NET? A: WireMock.NET does not currently support simulating HTTP/2 servers. This would require a change to the library to allow configuring the Protocol on the WireMockServer and a change to the internal ResponseMessage to better support stream bodies and trailing headers.
{ "language": "en", "url": "https://stackoverflow.com/questions/65078013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Enforce single click in angular js I am creating a client and calling my API but it takes at least 2 seconds to respond so I want to that user cannot click the REGISTER twice. I am incorporating this fiddle in my controller. https://jsfiddle.net/zsp7m155/ But What happening is that first function takes 2 second to disable the button and after that it enables he button and send request to API. Am I doing something wrong here? Code my my controller CONTROLLER.JS $scope.createClient = function() { ClientService.createClient($scope.theClient).then(function (aCreateClientResponse) { if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_SUCCESS) { alert('success'); } else if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_DOMAINNAME_ERROR) { alert('Check your domain name'); } else if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_INVALID_INPUT) { alert('Invalid request'); } else { alert('Service is not available'); } }); }; Code of my service SERVICE.JS App.factory('ClientService', function($http, API_URL, REQUEST_HEADER, RESPONSE_CODE) { createClient: function(aClient) { var myCreateClientRequest = { "CreateClientRequest": { "Header": { "CMMHeader": { "Id": uuid2.newuuid() } }, "Client": { "OrganizationName": aClient.Name, "OrganizationDomain": aClient.Domain, }, } }; //API Call var promise = $http.post(API_URL.CREATE_CLIENT, myCreateClientRequest, REQUEST_HEADER).then( function(aCreateClientResponse) { //Success Callback return [aCreateClientResponse.data.CreateClientResponse.Result.ResponseCode,'']; }, function(aCreateClientResponse) { //Error Callback return [aCreateClientResponse.status,'']; }); return promise; }, }); HTML <button id="register-btn" name="register-btn" class="btn btn-primary" ng-disabled="((Client.User.UserPassword !== Client.User.UserconfirmPassword) || CreateClientForm.domain.$invalid || CreateClientForm.username.$invalid || CreateClientForm.email.$invalid || CreateClientForm.password.$invalid || CreateClientForm.confirmpassword 
.$invalid || CreateClientForm.organizationname.$invalid )" single-click="createClient()">{{ running ? 'Please wait...' : 'Register' }}</button> A: You have a couple of options, the most simple is just to disable the button when you call your API and re-enable when it resolves. You could do it like this: <button id="register-btn" name="register-btn" class="btn btn-primary" ng-disabled="isRegistering" single-click="createClient()">{{ running ? 'Please wait...' : 'Register' }}</button> and in your controller $scope.isRegistering = false; $scope.createClient = function() { $scope.isRegistering = true; return $timeout(function() { ClientService.createClient($scope.theClient).then(function (aCreateClientResponse) { $scope.isRegistering = false; if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_SUCCESS) { alert('success'); } else if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_DOMAINNAME_ERROR) { alert('Check your domain name'); } else if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_INVALID_INPUT) { alert('Invalid request'); } else { alert('Service is not available'); } }, function(){ $scope.isRegistering = false; }); }, 1000); };
{ "language": "en", "url": "https://stackoverflow.com/questions/45318887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
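The busy-flag idea in the answer is a general re-entrancy guard, not an Angular detail: set a flag on entry, drop any call that arrives while it is set, clear it when the work finishes. A single-threaded Python sketch of the same pattern (real UI code would also clear the flag in the async completion, as the answer does):

```python
import functools

def single_flight(func):
    """Drop calls that arrive while a previous call is still running --
    the same idea as disabling the button via an isRegistering flag."""
    busy = {"flag": False}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if busy["flag"]:
            return None          # ignore the double click
        busy["flag"] = True
        try:
            return func(*args, **kwargs)
        finally:
            busy["flag"] = False  # re-enable once the work completes
    return wrapper

calls = []

@single_flight
def register(client):
    calls.append(client)
    register(client)  # a re-entrant "click" while the handler runs is swallowed
    return "ok"
```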
Q: Setting param to an element through jquery to ng-click I need to set param inside the ng-click i.e., I am able to set the id for the element, but I need to set the param inside the ng-click like ng-click="editorder(5)" Here is the html <i class="fa fa-pencil fa-2x order-edit" aria-hidden="true" ng-click='editOrder()'></i> Script : $(".order-edit").attr("id",message.data.id) Help pls A: Assign the id to a $scope $scope.id = message.data.id And use it as: <i class="fa fa-pencil fa-2x order-edit" aria-hidden="true" id="editOrder" ng-click='editOrder(id)'></i> UPDATE: Assigned a DOM Id to the li element in above and fetched the element as: var editOrder = document.getElementById("editOrder"); Now, binding the ng-click with the id. editOrder.bind('ng-click', { id: message.data.id}, function(event) { var data = event.data; alert(data.id); }); PS: The update works for the javascript click event, its not tested for ng-click. EDIT: Tested for 'ng-click', it doesnt work. You could look into this Fiddle and create a custom directive that suits the requirement.
{ "language": "en", "url": "https://stackoverflow.com/questions/37406165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Spring cloud config doesn't detect git uri How do I reference resources directory (Or a directory relative to my source files) as my local git uri for the config server (On Windows)? I've tried file:///resources and file:///full/path/to/resources, all seem to fail. As requested, here's some code: ConfigServiceApplication.java @EnableConfigServer @SpringBootApplication public class ConfigServiceApplication { public static void main(String[] args) { SpringApplication.run(ConfigServiceApplication.class, args); } } application.properties spring.cloud.config.server.git.uri=file:///full/path/to/resources spring.application.name=config-service server.port=8888 A: If you want to use file system as a backend, simply try this settings: spring.profiles.active=native spring.cloud.config.server.native.searchLocations: file:///full/path/to/resources See also this documentation A: I'm embarrassed to write this but intellij wouldn't clean the build. I ran the gradle task and it all worked out.
{ "language": "en", "url": "https://stackoverflow.com/questions/41161587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the point of a Facade in Java EE? I'm not really understanding the point of a facade. public abstract class AbstractFacade<T> { private Class<T> entityClass; public AbstractFacade(Class<T> entityClass) { this.entityClass = entityClass; } protected abstract EntityManager getEntityManager(); public void create(T entity) { getEntityManager().persist(entity); } public void edit(T entity) { getEntityManager().merge(entity); } public void remove(T entity) { getEntityManager().remove(getEntityManager().merge(entity)); } public T find(Object id) { return getEntityManager().find(entityClass, id); } public List<T> findAll() { CriteriaQuery cq = getEntityManager().getCriteriaBuilder().createQuery(); cq.select(cq.from(entityClass)); return getEntityManager().createQuery(cq).getResultList(); } public List<T> findRange(int[] range) { CriteriaQuery cq = getEntityManager().getCriteriaBuilder().createQuery(); cq.select(cq.from(entityClass)); Query q = getEntityManager().createQuery(cq); q.setMaxResults(range[1] - range[0]); q.setFirstResult(range[0]); return q.getResultList(); } public int count() { CriteriaQuery cq = getEntityManager().getCriteriaBuilder().createQuery(); Root<T> rt = cq.from(entityClass); cq.select(getEntityManager().getCriteriaBuilder().count(rt)); Query q = getEntityManager().createQuery(cq); return ((Long) q.getSingleResult()).intValue(); } } If I have this code and then I have an EJB like this. @Stateless public class WrapSpecFacade extends AbstractFacade<WrapSpec> { @PersistenceContext private EntityManager em; @Override protected EntityManager getEntityManager() { return em; } public WrapSpecFacade() { super(WrapSpec.class); } } What is the point of this? Why call this a facade? To me it's just an abstract class that groups similar functionality. Thanks. 
A: Typically this pattern is used to either hide the implementation of the underlying classes it is presenting an interface for, or to simplify the underlying implementation of something that may be complex. A facade may present a simple interface to the outside world, but under the hood do things like create instances of other classes, manage transactions, handle files or TCP/IP connections -- all stuff that you can be shielded from by the simplified interface. A: In your particular context, this is not really a Facade. What you have in that code is basically a DAO (Data Access Object). A DAO can be seen as a Facade for DB operations, but this is not its main purpose. It mainly intends to hide the DB internals. In your example, if you're switching the underlying storage system to XML files or to some key-value store like HBase, you can still use the methods defined in that "Facade" and no change is required in the client code. A (traditional) Facade deals with complex designs that need to be hidden from the clients. Instead of exposing a complex API and complex flows (get this from this service, pass it to this converter, get the result and validate it with this and then send it to this other service), you just encapsulate all that in a Facade and simply expose a simple method to clients. This way, along with the fact that your API is a lot easier to use, you are also free to change the underlying (complex) implementation without breaking your clients code. A: Facade is a design pattern. A pattern, a software pattern, is a set of rules in order to organize code and provide a certain structure to it. Some goals can be reached by using a pattern. A design pattern is used when designing the application. The Facade pattern allows programmers to create a simple interface for objects to use other objects. Consider working with a very complex group of classes, all implementing their own interfaces. 
Well, you want to provide an interface to expose only some functionality of the many you have. By doing so, you achieve code simplicity, flexibility, integration and loose-coupling. Facade, in your example, is used in order to manage coupling between many actors. It is a design issue. When you have many components interacting together, the more they are tied the harder it will be to maintain them (I mean code maintenance). Facade allows you to reach loose coupling, which is a goal a programmer should always try to reach. Consider the following: public class MyClass1 implements Interface1 { public void call1() {} public call call2() {} } public class MyClass2 implements Interface2 { public void call3() {} public void call4() {} } public class MyClass { private MyClass1 a; private MyClass2 b; //calling methods call1 call2 call3 and call4 in other methods of this class ... ... } If you had to change business logic located in a class used by call1 or call2... by not changing the interface, you would not need to change all these classes, but just the class inside the method used by one of the interface methods of the first two classes. Facade lets you improve this mechanism. I am sorry but I realize that it does not look so wonderful. Design patterns are heavily used in the software industry and they can be very useful when working on large projects. You might point out that your project is not that large and that may be true, but Java EE aims to help business and enterprise-level application programming. That's why sometimes the facade pattern is used by default (some IDEs use it too).
{ "language": "en", "url": "https://stackoverflow.com/questions/4798184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
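Stripped of JPA, the shape the answers describe fits in a few lines: one simple entry point that hides which subsystems are involved and in what order they must be called. All class names here are invented for illustration:

```python
class UserStore:
    """Subsystem 1: data access."""
    def load(self, user_id):
        return {"id": user_id, "name": f"user-{user_id}"}

class AuditLog:
    """Subsystem 2: bookkeeping the caller should not have to remember."""
    def __init__(self):
        self.entries = []
    def record(self, event):
        self.entries.append(event)

class UserFacade:
    """Single simple entry point; callers never touch the subsystems
    (or the order they must be invoked in) directly, so the internals
    can change without breaking client code."""
    def __init__(self):
        self._store = UserStore()
        self._audit = AuditLog()

    def get_user(self, user_id):
        user = self._store.load(user_id)
        self._audit.record(f"read:{user_id}")
        return user

facade = UserFacade()
user = facade.get_user(7)
```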
Q: Programmatically Presenting and Dismissing Views SwiftUI I am working on a project that is attempting to present and dismiss views in a NavigationView using state and binding. The reason I am doing this is there is a bug in the @Environment(.presentationMode) var presentaionMode: Binding model. It's causing odd behavior. It's discussed in this post here. The example below has three views that are progressively loaded on to the view. The first two ContentView to NavView1 present and dismiss perfectly. However, once NavView2 is loaded, the button that is used to toggle the state of presentNavView2 ends up adding another NavView2 view on the stack and does not dismiss it as expected. Any thoughts as to why this would be? ContentView struct ContentView: View { @State private var presentNavView1 = false var body: some View { NavigationView { List { NavigationLink(destination: NavView1(presentNavView1: self.$presentNavView1), isActive: self.$presentNavView1, label: { Button(action: { self.presentNavView1.toggle() }, label: { Text("To NavView1") }) // Button }) // NavigationLink } // List .navigationTitle("Home") } // NavigationView } // View } NavView1 struct NavView1: View { @State private var presentNavView2 = false @Binding var presentNavView1: Bool var body: some View { List { NavigationLink(destination: NavView2(presentNavView2: self.$presentNavView2), isActive: self.$presentNavView2, label: { Button(action: { self.presentNavView2.toggle() }, label: { Text("To NavView2") }) // Button }) // NavigationLink Button(action: { self.presentNavView1.toggle() }, label: { Text("Back") }) } // List .navigationTitle("NavView1") } // View } NavView2 struct NavView2: View { @Binding var presentNavView2: Bool var body: some View { VStack { Text("NavView2") Button(action: { self.presentNavView2.toggle() }, label: { Text("Back") }) // Button } // VStack .navigationTitle("NavView2") } } A: You can use DismissAction, because PresentationMode will be deprecated. 
I tried the code and it works perfectly! Here you go! import SwiftUI struct MContentView: View { @State private var presentNavView1 = false var body: some View { NavigationView { List { NavigationLink(destination: NavView1(), isActive: self.$presentNavView1, label: { Button(action: { self.presentNavView1.toggle() }, label: { Text("To NavView1") }) }) } .navigationTitle("Home") } } } struct NavView1: View { @Environment(\.dismiss) private var dismissAction: DismissAction @State private var presentNavView2 = false var body: some View { List { NavigationLink(destination: NavView2(), isActive: self.$presentNavView2, label: { Button(action: { self.presentNavView2.toggle() }, label: { Text("To NavView2") }) }) Button(action: { self.dismissAction.callAsFunction() }, label: { Text("Back") }) } .navigationTitle("NavView1") } } struct NavView2: View { @Environment(\.dismiss) private var dismissAction: DismissAction var body: some View { VStack { Text("NavView2") Button(action: { self.dismissAction.callAsFunction() }, label: { Text("Back") }) } .navigationTitle("NavView2") } } struct MContentView_Previews: PreviewProvider { static var previews: some View { MContentView() } }
{ "language": "en", "url": "https://stackoverflow.com/questions/70087666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Setting quality levels to Plyr player with HLS from .m3u8 file I am using Plyr player with HLS implementation in an angular component, I did all the settings mentioned in the below post Adding Quality Selector to plyr when using HLS Stream Still, I can't get the quality setting option in UI. Upon clicking gear icon, only speed options with 8 options showing(.75 to 4 ). For the .m3u8 file, even if I set the quality level to player quality option, it won't be showing in UI. In the console, from player.config.speed attribute i can find the speed options showing in the player UI, but the player.config.quality is showing some in build value like { default: 576, options: [4320, 2880, 2160, 1440, 1080, 720, 576, 480, 360, 240] } same as plyr README.md file I tried to set the quality level from Hls.Event.Manifest.Parsed event. Does anybody know how to set quality levels from .m3u8 file to Plyr player? Please help
{ "language": "en", "url": "https://stackoverflow.com/questions/65578946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Autofilter a range of time in excel using VBA I have tried to create a macro to auto filter the time range from 12:00:00 am to 3:00:00 am. But the code gave me an error. What is wrong with my code? Sub TIME() T1 = TimeSerial(0, 0, 0) T2 = TimeSerial(0, 3, 0) ActiveSheet.Range("$B$1:$L$597064").AutoFilter Field:=3, Criteria1:=">=" & T1, Operator:=xlAnd, Criterial2:="<" & T2 End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/46996538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unsupported Media Type 415 Error in Angular 7 & .NET Core API I'm passing in a selected row to delete in my Angular Application using selection on a material data table. For some reason though, I'm getting a 415 error. Not sure what I'm doing wrong, either on the server or the client side, but I'm not sure even if I'm passing the correct object. What's the issue here? I'm using Angular 7 for the client and making the API in .NET Core ActionsController.cs .NET Core [HttpDelete("deleteRow")] public Msg DeleteRows(string sessionId, T table, Tag[] rows) { try { UserState userState = GetUserState(sessionId); Msg m = CheckTableAccess(sessionId, table, TableAccessLevel.ReadModifyCreateDelete, userState); if (m.IsNotOk) return m; if (table == T.Action) { foreach (Tag t in rows) { m = CheckUpdatableAction(sessionId, rows[0]); if (m.IsNotOk) return m; } } if (table == T.RouteStop) { XioTransaction xt = new XioTransaction(userState); XioWriter xwd = null; xwd = xt.CreateDeleteWriter(table); foreach (Tag t in rows) { XioTable routeStop = new XioTable(userState, T.RouteStop); Tag ownerTag = ((DbrRouteStop)routeStop.LoadSingleRow(t, C.RouteStop_RouteTag)).RouteTag; xwd.DeleteRow(t, ownerTag); } xt.WriteAll(); } else if (table == T.RouteEvent) { XioTransaction xt = new XioTransaction(userState); XioWriter xwd = null; xwd = xt.CreateDeleteWriter(table); foreach (Tag t in rows) { XioTable routeEvent = new XioTable(userState, T.RouteEvent); Tag ownerTag = ((DbrRouteEvent)routeEvent.LoadSingleRow(t, C.RouteEvent_RouteTag)).RouteTag; xwd.DeleteRow(t, ownerTag); } xt.WriteAll(); } else if (table == T.CompanyResource) { XioTransaction xt = new XioTransaction(userState); XioWriter xwd = null; xwd = xt.CreateDeleteWriter(table); foreach (Tag t in rows) { XioTable cr = new XioTable(userState, T.CompanyResource); DbrCompanyResource crRec = (DbrCompanyResource)cr.LoadSingleRow(t, C.CompanyResource_CompanyTag, C.CompanyResource_Tab); XioTable xtr = new XioTable(userState, crRec.Tab); // 
the critical where is on divisiontag and all tables that are passed in will have a divion tag // luckily the code will just look at the field name xtr.WhereList.Where(C.Driver_DivisionTag, ComparisonOp.EqualTo, crRec.CompanyTag); xtr.LoadData(); if (xtr.GetAllRows().Length > 0) return new Msg(M.ResourcesExistAtCompanyLevel); xwd.DeleteRow(t); } xt.WriteAll(); } else DbRow.DeleteRecursive(userState, table, rows); userState.Completed(LogEntryType.DeleteRows, null); } catch (MsgException e) { return e.Msg; } catch (SqlException e) { if (e.Number == 547) { return new Msg(M.CannotDeleteOwnerRowWithComponent); } else return new Msg(M.UnexpectedViewDeleteError, e.ToString()); } catch (Exception e) { return new Msg(M.UnexpectedViewDeleteError, e.ToString()); } return Msg.Ok; } ViewComponent.ts export class ViewComponent implements OnInit, OnDestroy { // User Fields currentUser: User; users: User[] = []; currentUserSubscription: Subscription; loading : boolean; // Action Fields viewData: any; viewName: string; refNumber: number; currentActionSubscription: Subscription; displayedColumns: string[] = []; dataSource: any = new MatTableDataSource([]); pageSizeOptions: number[] = [10, 20, 50]; @ViewChild(MatSort) sort: MatSort; @ViewChild(MatPaginator) paginator: MatPaginator; selection = new SelectionModel<TableRow>(true, []); defaultSort: MatSortable = { id: 'defColumnName', start: 'asc', disableClear: true }; defaultPaginator: MatPaginator; constructor( private iconRegistry: MatIconRegistry, private sanitizer: DomSanitizer, private actionService: ActionService ) { this.loading = false; this.iconRegistry.addSvgIcon( 'thumbs-up', this.sanitizer.bypassSecurityTrustResourceUrl( 'assets/img/examples/thumbup-icon.svg' ) ); } loadAction(action: any) { this.loading = true; // If there is already data loaded into the View, cache it in the service. if (this.viewData) { this.cacheAction(); } if (this.sort) { // If there is sorting cached, load it into the View. 
if (action.sortable) { // If the action was cached, we should hit this block. this.sort.sort(action.sortable); } else { // Else apply the defaultSort. this.sort.sort(this.defaultSort); } } if (this.paginator) { // If we've stored a pageIndex and/or pageSize, retrieve accordingly. if (action.pageIndex) { this.paginator.pageIndex = action.pageIndex; } else { // Apply default pageIndex. this.paginator.pageIndex = 0; } if (action.pageSize) { this.paginator.pageSize = action.pageSize; } else { // Apply default pageSize. this.paginator.pageSize = 10; } } // Apply the sort & paginator to the View data. setTimeout(() => this.dataSource.sort = this.sort, 4000); setTimeout(() => this.dataSource.paginator = this.paginator, 4000); // Load the new action's data into the View: this.viewData = action.action; this.viewName = action.action.ActionName; this.refNumber = action.refNumber; // TODO: add uniquifiers/ids and use these as the sort for table const displayedColumns = this.viewData.Columns.map((c: { Name: any; }) => c.Name); displayedColumns[2] = 'Folder1'; this.displayedColumns = ['select'].concat(displayedColumns); // tslint:disable-next-line: max-line-length const fetchedData = this.viewData.DataRows.map((r: { slice: (arg0: number, arg1: number) => { forEach: (arg0: (d: any, i: string | number) => any) => void; }; }) => { const row = {}; r.slice(0, 9).forEach((d: any, i: string | number) => (row[this.displayedColumns[i]] = d)); return row; }); this.dataSource = new MatTableDataSource(fetchedData); this.loading = false; } // Stores the current Action, sort, and paginator in an ActionState object to be held in the action service's stateMap. cacheAction() { let actionState = new ActionState(this.viewData); // Determine the sort direction to store. let cachedStart: SortDirection; if (this.sort.direction == "desc") { cachedStart = 'desc'; } else { cachedStart = 'asc'; } // Create a Sortable so that we can re-apply this sort. 
actionState.sortable = { id: this.sort.active, start: cachedStart, disableClear: this.sort.disableClear }; // Store the current pageIndex and pageSize. actionState.pageIndex = this.paginator.pageIndex; actionState.pageSize = this.paginator.pageSize; // Store the refNumber in the actionState for later retrieval. actionState.refNumber = this.refNumber; this.actionService.cacheAction(actionState); } ngOnInit() { // Subscribes to the action service's currentAction, populating this component with View data. this.actionService.currentAction.subscribe(action => this.loadAction(action)); } /** Whether the number of selected elements matches the total number of rows. */ isAllSelected() { const numSelected = this.selection.selected.length; const numRows = this.dataSource.data.length; return numSelected === numRows; } /** Selects all rows if they are not all selected; otherwise clear selection. */ masterToggle() { this.isAllSelected() ? this.selection.clear() : this.dataSource.data.forEach((row: TableRow) => this.selection.select(row)); } // Delete row functionality deleteRow() { console.log(this.selection); this.selection.selected.forEach(item => { const index: number = this.dataSource.data.findIndex((d: TableRow) => d === item); console.log(this.dataSource.data.findIndex((d: TableRow) => d === item)); this.dataSource.data.splice(index, 1); this.dataSource = new MatTableDataSource<Element>(this.dataSource.data); }); this.selection = new SelectionModel<TableRow>(true, []); this.actionService.deleteRow(this.selection).subscribe((response) => { console.log('Success!'); }); } ActionsService.ts deleteRow(selection: any): Observable<{}> { console.log('testing service'); // create an array of query params using the property that you use to identify a table row const queryParams = [selection._selection].map((row: { value: any; }) => `id=${row.value}`); // add the query params to the url const url = `http://localhost:15217/actions/deleteRow`; return this.http.delete<any>(url); } A: 
You need to add an HTTP header to specify the content type for your HTTP request body. If you are sending a JSON body, the header is content-type: application/json. You can update actionService.ts, passing the header via the request options (the deprecated Headers class from @angular/http is not needed with HttpClient): deleteRow(selection: any): Observable<{}> { const queryParams = [selection._selection].map((row: { value: any; }) => `id=${row.value}`); const url = `http://localhost:15217/actions/deleteRow`; const options = { headers: new HttpHeaders({ 'Content-Type': 'application/json' }), body: {} }; return this.http.delete<any>(url, options); }
{ "language": "en", "url": "https://stackoverflow.com/questions/60143201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MSSQL variable = null Anyone know why a situation like the following would run fine on MSSQL 2005 and not MSSQL 2008: declare @X int = null; select A, B, C from TABLE where X=@X Without going into detail, I've got a stored proc which calls another stored proc that takes a hard coded Null as one of the parameters and it runs fine apparently on MSSQL2005 but not 2008. A: The code is poorly written regardless of which version of SQL you're using, because NULL is never "equal" to anything (even itself). It's "unknown", so whether or not it's equal (or greater than, or less than, etc.) another value is also "unknown". One thing that can affect this behavior is the setting of ANSI_NULLS. If your 2005 server (or that connection at least) has ANSI_NULLS set to "OFF" then you'll see the behavior that you have. For a stored procedure the setting is dependent at the time that the stored procedure was created. Try recreating the stored procedure with the following before it: SET ANSI_NULLS ON GO and you'll likely see the same results as in 2008. You should correct the code to properly handle NULL values using something like: WHERE X = @X OR (X IS NULL AND @X IS NULL) or WHERE X = COALESCE(@X, X) The specifics will depend on your business requirements. A: That might be due to your ansi_null settings in two servers. When SET ANSI_NULLS is ON, a SELECT statement that uses WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement that uses WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name. When SET ANSI_NULLS is OFF, the Equals (=) and Not Equal To (<>) comparison operators do not follow the SQL-92 standard. A SELECT statement that uses WHERE column_name = NULL returns the rows that have null values in column_name. A SELECT statement that uses WHERE column_name <> NULL returns the rows that have nonnull values in the column. 
Also, a SELECT statement that uses WHERE column_name <> XYZ_value returns all rows that are not XYZ_value and that are not NULL. You can find detailed information here: https://msdn.microsoft.com/en-us/library/ms188048(v=sql.90).aspx A: Try this: select A, B, C from TABLE where X IS NULL The reason why your original query did not return the expected result is explained here A: SET ANSI_NULLS OFF GO ^^That made the stored procs work the way I expected.
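The difference the answers describe can be reproduced with any ANSI-compliant engine; a minimal sketch using Python's built-in sqlite3, which follows the ANSI comparison rules, so `= NULL` never matches:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t (x) VALUES (?)", [(1,), (None,), (3,)])

# Under ANSI semantics, `x = NULL` evaluates to UNKNOWN for every row,
# so the predicate matches nothing, not even the row whose x IS NULL.
eq_null = conn.execute("SELECT COUNT(*) FROM t WHERE x = NULL").fetchone()[0]

# IS NULL is the correct test; it matches exactly the NULL row.
is_null = conn.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0]

# The explicit NULL-safe pattern from the first answer, with a NULL parameter:
both = conn.execute(
    "SELECT COUNT(*) FROM t WHERE x = :p OR (x IS NULL AND :p IS NULL)",
    {"p": None},
).fetchone()[0]

print(eq_null, is_null, both)  # 0 1 1
```

This is the behavior SQL Server shows with SET ANSI_NULLS ON; with it OFF, the `= NULL` comparison would instead match the NULL row, which is why the stored procedure behaved differently across servers.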
{ "language": "en", "url": "https://stackoverflow.com/questions/34881866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Function isn't working with a global variable. Should it be expected? As the title says, I can't use the variable "countDash" in my function if it's global, only if it's local. Should it be like this? Am I missing something? Thanks in advance. //count let countEl = document.getElementById("count-el"); let saveEl = document.getElementById("save-el"); let count = 0; //message to user let username = "Mr. Unknown"; let message = "You have three new notifications"; let messageToUser = `${message}, ${username}!`; //welcome message let welcomeEl = document.getElementById("welcome-el"); let name = "Eduardo"; let greeting = "Welcome back"; welcomeEl.innerHTML = `${greeting}, ${name}!`; function increment() { count += 1; countEl.innerHTML = count; } // let countDash = ` ${count} -`; //does not work function save() { let countDash = ` ${count} -`; //it only works if I have it here locally saveEl.innerHTML += countDash; } A: When you declare countDash in the global scope, that line is only run once, so countDash is initialised with the value ' 0 - '. So even when you update count in your increment function, countDash will not be updated. If you'd like to keep countDash as a global variable for whatever reason (although we should reduce global variable use where possible) you can just update it after you update count in the increment function :)
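The snapshot-vs-recompute distinction the answer describes is easy to reproduce outside the browser; a quick Python analogue (names are made up for illustration):

```python
count = 0
count_dash = f" {count} -"   # evaluated once, so it captures the value 0 forever

def increment():
    global count
    count += 1

def save_stale():
    # Uses the global snapshot taken at definition time.
    return count_dash

def save_fresh():
    # Rebuilds the string from the current count on every call.
    return f" {count} -"

increment()
increment()
print(save_stale())  # " 0 -": the snapshot never changed
print(save_fresh())  # " 2 -": recomputed from the up-to-date count
```

The string itself never "tracks" the variable it was built from; only re-running the expression (inside the function, or after each increment) picks up the new value.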
{ "language": "en", "url": "https://stackoverflow.com/questions/70395306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to implement fill constructor and range constructor for sequence containers unambiguously Sequence containers need to have fill constructors and range constructors, i.e. these must both work, assuming MyContainer models a sequence container whose value_type is int and size_type is std::size_t: // (1) Constructs a MyContainer containing the number '42' 4 times. MyContainer<int> c = MyContainer<int>(4, 42); // (2) Constructs a MyContainer containing the elements in the range (array.begin(), array.end()) std::array<int, 4> array = {1, 2, 3, 4}; MyContainer<int> c2 = MyContainer<int>(array.begin(), array.end()); Trouble is, I'm not sure how to implement these two constructors. These signatures don't work: template<typename T> MyContainer<T>::MyContainer(const MyContainer::size_type n, const MyContainer::value_type& val); template<typename T> template<typename OtherIterator> MyContainer<T>::MyContainer(OtherIterator i, OtherIterator j); In this case, an instantiation like in example 1 above selects the range constructor instead of fill constructor, since 4 is an int, not a size_type. It works if I pass in 4u, but if I understand the requirements correctly, any positive integer should work. If I template the fill constructor on the size type to allow other integers, the call is ambiguous when value_type is the same as the integer type used. I had a look at the Visual C++ implementation for std::vector and they use some special magic to only enable the range constructor when the template argument is an iterator (_Is_iterator<_Iter>). I can't find any way of implementing this with standard C++. So my question is... how do I make it work? Note: I am not using a C++11 compiler, and boost is not an option. A: I think you've got the solution space right: Either disambiguate the call by passing in only explicitly size_t-typed ns, or use SFINAE to only apply the range constructor to actual iterators. 
I'll note, however, that there's nothing "magic" (that is, nothing based on implementation-specific extensions) about MSVC's _Is_iterator. The source is available, and it's basically just a static test that the type isn't an integral type. There's a whole lot of boilerplate code backing it up, but it's all standard C++. A third option, of course, would be to add another fill constructor overload which takes a signed size.
{ "language": "en", "url": "https://stackoverflow.com/questions/21042872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the easy way to make suffixes from this js code? Note the code below shows the array in the console, not in the snippet output var nodes = ["maria", "mary", "marks", "michael"]; function insert_word(split_nodes) { var rest = []; for (var i = 0; i < split_nodes.length; i++) { //console.log(current); var word = split_nodes[i]; var letters = word.split(""); var current = rest; console.log(current); for (var j = 0; j < letters.length; j++) { var character = letters[j]; var position = current[character]; if (position == null) { current = current[character] = j == letters.length - 1 ? 0 : {}; } else { current = current[character]; } } } } insert_word(nodes); Above outputs this M :{a : {r :{i :{a :0}, k :0, y : }, }, i :{c :{h :{a :{e :{l :0}}}}}}} but I want to output this : M :{ar:{ia:0, k :0, y :0 }, ichael :0 } Can anyone help me get this output from my code? How could I make suffixes from this code? A: This solution takes a slightly changed object structure for the end indicator, with a property isWord, because the original structure does not support entries like 'marc' and 'marcus': if only 'marc' is used, a zero at the end of the tree denotes the end of the word, but it does not allow adding a longer word, because the property is a primitive and not an object. Basically this solution first creates a complete tree with single letters and then joins all properties which have only one child object.
function join(tree) { Object.keys(tree).forEach(key => { var object = tree[key], subKeys = Object.keys(object), joinedKey = key, found = false; if (key === 'isWord') { return; } while (subKeys.length === 1 && subKeys[0] !== 'isWord') { joinedKey += subKeys[0]; object = object[subKeys[0]]; subKeys = Object.keys(object); found = true; } if (found) { delete tree[key]; tree[joinedKey] = object; } join(tree[joinedKey]); }); } var node = ["maria", "mary", "marks", "michael"], tree = {}; node.forEach(string => [...string].reduce((t, c) => t[c] = t[c] || {}, tree).isWord = true); console.log(tree); join(tree); console.log(tree); .as-console-wrapper { max-height: 100% !important; top: 0; } A recursive single pass approach with a function for inserting a word into a tree which updates the nodes. It works by * *Checking the given string with all keys of the object and if string start with the actual key, then a recursive call with the part string and the nested part of the trie is made. *Otherwise, it checks how many characters are the same from the key and the string. Then it checks the counter and creates a new node with the common part and two nodes, the old node content and a new node for the string. Because of the new node, the old node is not more necessary and gets deleted, as well as the iteration stops by returning true for the update check. *If no update took place, a new property with string as key and zero as value is assigned. 
function insertWord(tree, string) { var keys = Object.keys(tree), updated = keys.some(function (k) { var i = 0; if (string.startsWith(k)) { insertWord(tree[k], string.slice(k.length)); return true; } while (k[i] === string[i] && i < k.length) { i++; } if (i) { tree[k.slice(0, i)] = { [k.slice(i)]: tree[k], [string.slice(i)]: 0 }; delete tree[k]; return true; } }); if (!updated) { tree[string] = 0; } } var words = ["maria", "mary", "marks", "michael"], tree = {}; words.forEach(insertWord.bind(null, tree)); console.log(tree); insertWord(tree, 'mara'); console.log(tree); .as-console-wrapper { max-height: 100% !important; top: 0; }
{ "language": "en", "url": "https://stackoverflow.com/questions/48599675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to check if there is a specific user in PFUser's Pointer Array in Parse Subclass? My class has a PFUser Pointer Array. I'm trying to check whether PFUser.current() is in the Pointer Array or not. Here is my snippet code. class News: PFObject, PFSubclassing { @NSManaged var notiBy: PFUser? @NSManaged var notiTo: PFUser? //[News]() // @NSManaged var notiArray: [PFUser]? @NSManaged var notiArray: Array<String> var newsLiveQuery: PFQuery<News> { return News.query()? .whereKeyExists("objectId") .includeKeys(["notiBy", "postObj", "notiTo", "notiArray"]) .whereKey("notiArray", equalTo: PFUser.current()!) .order(byDescending: "createdAt") as! PFQuery<News> } But I got a JSON error message. But I can get a PFObject result like this [<News: 0x1740b5480, objectId: zyCEHc78l6, localId: (null)> { checked = 0; messageText = Hellow; notiArray = ( "<PFUser: 0x1742ee800, objectId: a85SoYwEiE, localId: (null)>" ); notiBy = "<PFUser: 0x1742ed100, objectId: tmWHuptLmd, localId: (null)>"; notiTo = "<PFUser: 0x1742ede00, objectId: a85SoYwEiE, localId: (null)>"; postObj = "<Post: 0x1742ee700, objectId: CXaSlvrW4G, localId: (null)>"; type = comment; }] Looking at that, I think it should be working, but...why can't I run the query? Does anyone know about that? A: I believe you are not getting back the username as a string. Try using PFUser.current()!.username instead
{ "language": "en", "url": "https://stackoverflow.com/questions/41244573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: pyspark - Join two RDDs - Missing third column I'm very new to Pyspark, please take that into consideration :) Basically I have these two text files: file1: 1,9,5 2,7,4 3,8,3 file2: 1,g,h 2,1,j 3,k,i And the Python code: file1 = sc.textFile("/user/cloudera/training/file1.txt").map(lambda line: line.split(",")) file2 = sc.textFile("/user/cloudera/training/file2.txt").map(lambda line: line.split(",")) Now doing this join: join_file = file1.join(file2) I was hoping to get this: (1,(9,5),(g,h)) (2,(7,4),(i,j)) (3,(8,3),(k,1)) However, I am getting a different result: (1, (9,g)) (3, (8,k)) (2, (7,1)) Am I missing any parameter on join? Thanks! A: This should do the trick: file1 = sc.textFile("/FileStore/tables/f1.txt").map(lambda line: line.split(",")).map(lambda x: (x[0], list(x[1:]))) file2 = sc.textFile("/FileStore/tables/f2.txt").map(lambda line: line.split(",")).map(lambda x: (x[0], list(x[1:]))) join_file = file1.join(file2) join_file.collect() The result comes back with Unicode u' prefixes: Out[3]: [(u'2', ([u'7', u'4'], [u'1', u'j'])), (u'1', ([u'9', u'5'], [u'g', u'h'])), (u'3', ([u'8', u'3'], [u'k', u'i']))]
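To see why the fix works, it helps to remember that a pair-RDD join matches only on the key and pairs up whatever the values are; without the extra map, each record was a 3-element list, and the join effectively kept only the second element as the value, which is why the third column disappeared. A rough pure-Python model of the join behavior (not Spark itself):

```python
def pair_join(left, right):
    """Model of Spark's pair-RDD join: match on key, combine the values."""
    by_key = {}
    for k, v in right:
        by_key.setdefault(k, []).append(v)
    return [(k, (v, w)) for k, v in left for w in by_key.get(k, [])]

# After the extra .map(lambda x: (x[0], list(x[1:]))), each record is
# (key, [rest of the columns]), so the join keeps all columns:
file1 = [("1", ["9", "5"]), ("2", ["7", "4"]), ("3", ["8", "3"])]
file2 = [("1", ["g", "h"]), ("2", ["1", "j"]), ("3", ["k", "i"])]
result = pair_join(file1, file2)
print(result)
# [('1', (['9', '5'], ['g', 'h'])), ('2', (['7', '4'], ['1', 'j'])), ('3', (['8', '3'], ['k', 'i']))]
```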
{ "language": "en", "url": "https://stackoverflow.com/questions/53677714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to make electron app configurable in the release version? I have created an electron app with Angular as the frontend, and in the backend I have used a MySQL db. I want to do the following things after I make the release version of this app:- * *Make a json file in the release directory which contains info like db name, password, port, or any other info, etc. I want this file to work as a config file which can be edited by the end user (in the release version). The changes that users make in the file should be reflected in my app too, like changing the whole database, etc. I am not sure how to achieve this. A: You don't need to generate your json config file during your build process (unless you generate the user's initial username, password, database name, etc. during any application registration / purchase process). Instead, make this function part of your application's start-up / first-run code. During the initialisation phase of your application starting, look for the existence of the json config file. The best place for this file to be stored is in the user's data directory, appended with your application's name. See app.getPath('userData') for more information. If the json config file exists, load it (obviously in your main process) and reference its values as per normal. If the file does not exist (i.e. first run), create it with default values. You can also give the user (if you want) the option to change the default values during the first run by popping up a dialog (or something similar) with the appropriate config fields. The json config values should be changeable in your application's UI for users who are not comfortable editing the json config file directly. Those who do edit the json config file directly will see their changes take effect immediately upon application restart. You could also make any changes hot reloadable, though that would require a file watcher, etc. (or a manual 'reload' button in the UI).
If the json config file is allowed to be edited by the user, you should validate the structure and content of the file during application start-up (or after hot reloading), else any malformed json config file would crash your application.
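The load-or-create logic described above is a handful of lines in any language; a sketch of the pattern, shown in Python for brevity (in Electron the same code would live in the main process, using fs and a path under app.getPath('userData'); the file name and default keys below are made up):

```python
import json
import os
import tempfile

DEFAULTS = {"db_name": "mydb", "db_port": 3306}  # placeholder default values

def load_config(path):
    """Load the user-editable JSON config, creating it with defaults on first run."""
    if os.path.exists(path):
        with open(path) as f:
            cfg = json.load(f)
        # Validate the structure: keep known keys, fall back to defaults
        # for anything the user deleted or mistyped.
        return {key: cfg.get(key, default) for key, default in DEFAULTS.items()}
    # First run: write the defaults so the user has a file to edit.
    with open(path, "w") as f:
        json.dump(DEFAULTS, f, indent=2)
    return dict(DEFAULTS)

cfg_path = os.path.join(tempfile.mkdtemp(), "myapp-config.json")
first_run = load_config(cfg_path)   # file did not exist: defaults written out
second_run = load_config(cfg_path)  # file now exists: values read back
```

A malformed file would still raise inside json.load, so a real version would wrap that call in a try/except and fall back to (or rewrite) the defaults, in line with the validation advice above.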
{ "language": "en", "url": "https://stackoverflow.com/questions/75258108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Sqoop not able to detect updates when there is no lastmodified date I have an Orders MySQL table which has an auto-increment id; the order status gets updated, there is no last-modified date, and it is a very big table. Now I plan to create a real-time sink to HDFS using Sqoop. I am not able to capture updates in Sqoop. I tried both the append and merge-key options with lastmodified, but neither solved the problem. I can't reload the entire table since it is huge. I am looking for a way, using Sqoop, to detect updates like status changes and move them to HDFS
{ "language": "en", "url": "https://stackoverflow.com/questions/53666327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Way to know what Windows service pack version is from the browser? Is there a way to find what service pack is installed from the browser? It doesn't look like it's in the System.Web.HttpBrowserCapabilities in asp.net. I need a way to warn users that they need to update to XP Service Pack 3 before proceeding and installing some software. A: Not directly, no. Unless it's in the browser's UA, there's no way of detecting it without some kind of plugin. A: If you can use VBSCRIPT you can get what you are looking for. The WMI class Win32_OperatingSystem has the properties ServicePackMajorVersion, ServicePackMinorVersion, Name and Version. Try samples here: WMI Tasks Hope this can help
{ "language": "en", "url": "https://stackoverflow.com/questions/5970385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Count unique values by group eliminating duplicates across groups by in Excel NOTE: I have already checked this count unique text values based on condition in another column between date criteria and it does not solve my problem. I have an Excel chart with names and their groups. I want to distinct count by group with no-repetition. See Unique (Requirement) column for desired output. The purpose is to count how many people appeared first time by month. Name Group Unique-(NEED THIS COLUMN) Ryan Jan-16 1 Ryan Jan-16 0 Sam Jan-16 1 Sally Feb-16 1 Ryan Feb-16 0 Sam Mar-16 0 Tom Mar-16 1 Peter Mar-16 1 Note: Although Ryan appears three times, he is only counted the first time he appears in any group. The requirement is for last Column "Unique" to have a 1 or 0 so we can create a pivot table to group by "Group" like below: Group Count Jan-16 2 Feb-16 1 Mar-16 2 The names are listed only once per group by month and do not repeat even if their name appears in a subsequent month. A: If the names are starting in A2 with a heading in A1, you can just use this starting in C2 and pulled down:- =COUNTIF(A$1:A1,A2)=0 to get true/false values, or =--(COUNTIF(A$1:A1,A2)=0) to get ones and zeroes.
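The formula's logic, flag a name with 1 only the first time it appears anywhere above and then sum the flags per group, can be sanity-checked with a few lines of Python on the sample data:

```python
rows = [("Ryan", "Jan-16"), ("Ryan", "Jan-16"), ("Sam", "Jan-16"),
        ("Sally", "Feb-16"), ("Ryan", "Feb-16"), ("Sam", "Mar-16"),
        ("Tom", "Mar-16"), ("Peter", "Mar-16")]

seen = set()
per_group = {}
for name, group in rows:
    # Mirrors =--(COUNTIF(A$1:A1, A2)=0): 1 if the name has not appeared above.
    unique = 0 if name in seen else 1
    seen.add(name)
    per_group[group] = per_group.get(group, 0) + unique

print(per_group)  # {'Jan-16': 2, 'Feb-16': 1, 'Mar-16': 2}
```

This matches the desired pivot output: Ryan and Sam count once in Jan-16, only Sally is new in Feb-16, and Tom and Peter are new in Mar-16.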
{ "language": "en", "url": "https://stackoverflow.com/questions/39558771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: IntelliJ IDEA, when opening a new task - it doesn't create a local branch from origin IDEA version - 2017.2. If I create a task from Jira (for which a branch has already been created in the remote) and select this branch for this Jira from the dialog, it won't create a local branch on my computer and just points to a commit instead. Of course, if you try to commit you will not succeed without a local branch. So it would be very handy to tell IDEA to create a local branch automatically. How can I do it? A: Idea currently does not allow checking out remote branches without creating a local one that tracks it. Here is the request: https://youtrack.jetbrains.com/issue/IDEA-140077 Since there is no local branch that matches the remote one, you need to use Create branch and select the remote one in the From dropdown.
{ "language": "en", "url": "https://stackoverflow.com/questions/45503923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ImageView shows different output on different resolution devices I have an ImageView which is 450px*450px; it shows perfectly on a Note 3 (1080*1920), but when I run the app on a smaller resolution device, the ImageView is shown larger, and some other contents don't fit on the screen. Any solution to this kind of problem? Is it about the unit (px, dp)? A: The reason is that every device has different screen dimensions, so either you have to re-size the ImageView based on the screen size or simply use different size images for your ImageView, which will be used automatically based on the screen dimension. To do so, check this: A. Getting the screen dimensions programmatically and setting the appropriate size on the ImageView, Display display = getWindowManager().getDefaultDisplay(); DisplayMetrics outMetrics = new DisplayMetrics (); display.getMetrics(outMetrics); float density = getResources().getDisplayMetrics().density; float dpHeight = outMetrics.heightPixels / density; float dpWidth = outMetrics.widthPixels / density; B. Adding different image sizes for different screens, res/drawable-mdpi/my_icon.png // bitmap for medium density res/drawable-hdpi/my_icon.png // bitmap for high density res/drawable-xhdpi/my_icon.png // bitmap for extra high density The following code in the Manifest supports all dpis. <supports-screens android:smallScreens="true" android:normalScreens="true" android:largeScreens="true" android:xlargeScreens="true" android:anyDensity="true" /> A: Your screen resolution suggests that you are setting the ImageView height and width for an xxhdpi resolution device. The second thing: use dp (density-independent pixels) so the view scales according to the resolution; use 150dp x 150dp for the ImageView. Use the following link for px to dp conversion http://pixplicity.com/dp-px-converter/
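The px/dp relationship behind both answers is just a scale factor: density = dpi / 160, so dp = px / density. A quick sketch of the conversion (device numbers are examples):

```python
def px_to_dp(px, density_dpi):
    # 160 dpi (mdpi) is Android's baseline: 1 dp == 1 px at mdpi.
    return px / (density_dpi / 160.0)

def dp_to_px(dp, density_dpi):
    return dp * (density_dpi / 160.0)

# A Note 3 class screen is xxhdpi (480 dpi), so a 450 px view is 150 dp,
# which is why the second answer suggests 150dp x 150dp.
print(px_to_dp(450, 480))  # 150.0
# On an hdpi (240 dpi) device the same 150 dp maps to only 225 px:
print(dp_to_px(150, 240))  # 225.0
```

Specifying the size in dp therefore keeps the view the same physical size across densities, while a fixed 450 px grows or shrinks relative to the screen.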
{ "language": "en", "url": "https://stackoverflow.com/questions/30519579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Mockito: creating a parameter matcher for a class Given a method definition as follows: MyClass.myMethod(SecondClass secondClass); and a mock of MyClass: MyClass myClass = mock(MyClass.class); how would you match the method parameter when defining the expectation? when(myClass.myMethod(???)).thenReturn(null); Thanks A: when(myClass.myMethod(Mockito.any(SecondClass.class))).thenReturn(null); A: You can use Mockito.any(SecondClass.class) or (SecondClass)any() A: Actually the best way to do it is doReturn(object).when(myClass).myMethod(???); in ??? you have some possibilities. * *You can pass a given object and then expect that specific one *Or you can pass Mockito.any(Clazz.class) and then you will accept any object of that kind
{ "language": "en", "url": "https://stackoverflow.com/questions/21408853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: git show logs excluding some user or committer I don't know how to do this. I have tried these already: git log -p --author="user" --not Sites/Web/Templates git log -p --author="!user" Sites/Web/Templates They still show the log of that user. Help, please.
{ "language": "en", "url": "https://stackoverflow.com/questions/19850985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: libtorch c++ doesn't release tensor memory I'm having an issue when using libTorch C++. I load a module torch::jit::script::Module net = torch::jit::load(universal_path); As this takes some time to load, I store it to use later on the different images that I want to process: net.to(device); net.eval(); torch::Tensor out_tensor = net.forward({ input_tensor.to(device) }).toTensor().to(cpy); The problem is that the forward function allocates a lot of memory (between 2 and 8 MB) which is not released when out_tensor goes out of scope. The only way to release the memory is to delete the module as well, but this is not efficient. Am I missing something? I'm using libtorch 1.5.1, which is a bit old; are there known leaks in this version? Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/71841985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Rails fuzzy searching on title and description I have a simple rails 3 application that lists restaurants as a training exercise. I want to be able to search name and description using one textfield on the restaurant index page. Given the query pizza, the matches should be * *name: Tony's, description: ... is a pizzeria that has been around since the 1950's ... *name: Domino's Pizza, description: ... *name: The Hall, description: ... pizzas, pastas and steaks ... Because: * *the word pizza is a fuzzy match to " pizz eri a " using similar logic as TextMate's Cmd-T. (the spaces in the word pizzeria are only used to get the mini-Markdown to work) *pizza is a lowercase match to Pizza *pizza is a substring of pizzas (should work with ends-with begins-with and includes) How would I go about doing this in rails 3? Do I use thinking_sphinx, tire, sunspot-rails or just a custom query for my application? A: The only tricky one is pizza/pizzeria and it's an issue called stemming. Both sphinx and solr/sunspot support stemming but I imagine you will need to teach them both that pizza is a stem of pizzeria. A: One way to remove false positives is to run a user defined function (UDF) to compute the edit distance between a candidate answer and the original string, and ignore those answers whose edit distance is too large. A: I found a very simple solution that serves my needs. "%#{"pizza".scan(/./).join("%")}%" This creates a string that looks like this "%p%i%z%z%a%" Then I use it in a LIKE query and I get the expected results. Now all that remains is to solve the non-trivial problem of determining the order of relevance :) UPDATE: Found a quick and dirty way of determining order of relevance based on the assumption that a shorter string will most likely be a closer match than a longer one. ORDER BY length(sequence) ASC
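The pattern-building trick from the last answer can be wrapped in a small helper; this is a sketch, and the ActiveRecord call in the trailing comment uses assumed model and column names (Restaurant, name, description) rather than anything from the original app.

```ruby
# Turn a query into a SQL LIKE pattern: "pizza" -> "%p%i%z%z%a%".
# Any text may appear between the characters, so "pizza" matches
# "pizzeria" and "pizzas"; downcasing handles "Pizza" (pair it with
# lower() on the column side of the query).
def fuzzy_like_pattern(query)
  "%#{query.downcase.chars.join('%')}%"
end

puts fuzzy_like_pattern("pizza")  # => %p%i%z%z%a%

# Hypothetical ActiveRecord usage (names are assumptions):
# Restaurant.where("lower(name) LIKE :p OR lower(description) LIKE :p",
#                  p: fuzzy_like_pattern(params[:q]))
#           .order("length(name) ASC")
```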
{ "language": "en", "url": "https://stackoverflow.com/questions/12994212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to get the value from button on click I have some buttons which are created dynamically based on the number of phone numbers present. Suppose I have 3 names in the DB, so there will be three buttons. When I click on the 1st button it should give the value of the first button; if the 2nd is clicked then the 2nd button's value should display. This is a simple jsfiddle which describes my requirement. I thought of assigning each button a different id, which should be the phone number. In the jsfiddle, when I click on a particular button an alert pops up but it does not give any value. I did it like this: $('.btn').click(function(){ var number1 = $('#s2').val(); alert(number1); }); A: You have to use event delegation using jQuery's on() method. From its documentation: When a selector is provided, the event handler is referred to as delegated. The handler is not called when the event occurs directly on the bound element, but only for descendants (inner elements) that match the selector. jQuery bubbles the event from the event target up to the element where the handler is attached (i.e., innermost to outermost element) and runs the handler for any elements along that path matching the selector. In your case, you'll need to delegate the .btn click event to an ancestor which exists prior to that element being dynamically added to the page: $('body').on('click', '.btn', function() { var number1 = $('#s2').val(); alert(number1); }); The closer you get to the .btn element, the better, so unless your document's body is the nearest non-dynamic ancestor then you'll want to change this to something a bit closer. Edit: Further question in comments: can I get the ID of each button To get the id of each button, simply use this.id: $('body').on('click', '.btn', function() { var id = this.id; alert(id); }); Edit 2: Further question in comments So as I said, the IDs are numbers. So if I click button1 then it will print s1 in the input field. Can you please tell me how to do this? 
As your input has an id of "number", you can simply use jQuery's .val() method to set the value of the input to the id of the clicked button: $('body').on('click', '.btn', function(){ $('#number').val(this.id); }); Working JSFiddle demo. A: $(document).on('click', '.btn', function(){ alert($(this).val()); }); Should work. The "on" is a way of working with dynamically placed elements. A: Put a value attribute on each button input <button type="button" class="btn" value="test2" >test</button> <button type="button" class="btn" value="test1" id="s1">test</button> <button type="button" class="btn" value="test" id="s2">test</button> And then use this script: $('.btn').on('click', function() { var number = $('#s2').val();//get value }); A: If you want to get the id of the clicked button you should use attr: $('.btn').click(function(){ var number1 = $('#s2').attr('id'); alert(number1); });
{ "language": "en", "url": "https://stackoverflow.com/questions/19540564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Error in my trigger I am working on a project using SqlSiteMapProvider. I have two tables, Articles and SiteMap. Articles table: CREATE TABLE [dbuser].[Articles]( [ArticleId] [uniqueidentifier] NOT NULL, [Title] [nvarchar](500) NULL, [Description] [ntext] NULL, [CategoryId] [int] NULL, [pubDate] [datetime] NULL, [Author] [nvarchar](255) NULL, [Hit] [int] NULL, [Auth] [bit] NULL )... And SiteMap Table: CREATE TABLE [dbuser].[SiteMap]( [ID] [int] IDENTITY(0,1) NOT NULL, [Title] [nvarchar](50) NULL, [Description] [nvarchar](512) NULL, [Url] [nvarchar](512) NULL, [Roles] [nvarchar](512) NULL, [Parent] [int] NULL, CONSTRAINT [PK_SiteMap] PRIMARY KEY CLUSTERED (... When I insert an Article, my asp.net page also inserts that article's URL and related information into the SiteMap table. What I am trying to do is: when I delete an Article from my Articles table (from the asp.net page), a trigger should delete the related row from the SiteMap table. My asp.net page inserts the Article info into the SiteMap table in this format: Dim SMUrl As String = "~/c.aspx?g=" & ddlCategoryId.SelectedValue & "&k=" & BaslikNo.ToString I mean there is no one column exactly matching in the two tables. My Trigger is as follows: USE [MyDB] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET NOCOUNT ON GO SET ROWCOUNT 1 GO ALTER TRIGGER [dbuser].[trig_SiteMaptenSil] ON [dbuser].[Articles] AFTER DELETE AS BEGIN DECLARE @AID UNIQUEIDENTIFIER, @ttl NVARCHAR,@CID INT SELECT @AID=ArticleId, @ttl = Title, @CID=CategoryId FROM DELETED IF EXISTS(SELECT * FROM dbuser.SiteMap WHERE Url = N'~/c.aspx?g=' + CONVERT(nvarchar(5), @CID) + N'&k=' + CONVERT(nvarchar(36), @AID)) BEGIN DELETE FROM dbuser.SiteMap WHERE Url = N'~/c.aspx?g=' + CONVERT(nvarchar(5), @CID) + N'&k=' + CONVERT(nvarchar(36), @AID) END END GO I am using SSMSE 2008 and my remote db server's version is 8.0. The error I get is: No rows were deleted. A problem occurred attempting to delete row 1. Error Source: Microsoft.VisualStudio.DataTools. 
Error Message: The row value(s) updated or deleted either do not make the row unique or they alter multiple rows(2 rows). Correct the errors and attempt to delete the row again or press ESC to cancel the change(s). Can you help me get this working? I have searched for this for a few days but couldn't find a solution for my case. Thanks in advance A: To deal with the original issue and cope with deletions of multiple rows your trigger could be rewritten as follows. ALTER TRIGGER [dbuser].[trig_SiteMaptenSil] ON [dbuser].[Articles] AFTER DELETE AS BEGIN SET NOCOUNT ON DELETE FROM s FROM dbuser.SiteMap AS s INNER JOIN Deleted AS d ON s.Url = N'~/c.aspx?g=' + CONVERT(nvarchar(5), d.CategoryId) + N'&k=' + CONVERT(nvarchar(36), d.ArticleId) END
{ "language": "en", "url": "https://stackoverflow.com/questions/3845867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Get jsondata value in javascript JSON var jsondata={"id": "10", "skills": "english", "post": "devloper", "emp_name": "jaydeep","timestemp":"10:45"} I am trying to get each element key and value: javascript .. }).done(function(data){ console(data['post']); }); Expected Output : emp_name = jaydeep post = devloper I am getting undefined in console. WHY? I tried data.post, i tried loop but no success.. A: I think you'll need to decode the JSON first. }).done(function(data){ data = JSON.parse(data); console(data['post']); }); A: You can use basic JS too to attain this. // property is an optional parameter. function disp(obj, property) { var prop; if (property) { obj[property] && (console.log(obj[property])); } else { for (prop in obj) { if (obj.hasOwnProperty(prop)) { console.log(prop + " = " + obj[prop]) } } } } var jsondata = { "id": "10", "skills": "english", "post": "devloper", "emp_name": "jaydeep", "timestemp": "10:45" } //disp(jsondata, "post"); disp(jsondata);
{ "language": "en", "url": "https://stackoverflow.com/questions/38408607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: trying to run junit test cases in a spring boot application I tried to run junit test cases in a spring boot application, but there is a bean creation issue: setProxyTargetClass cannot be found on the PersistenceExceptionTranslationPostProcessor class. An attempt was made to call the method org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor.setProxyTargetClass(Z)V but it does not exist. Below is my stack trace. Description: An attempt was made to call the method org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor.setProxyTargetClass(Z)V but it does not exist. Its class, org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor, is available from the following locations: jar:file:/C:/Users/r2um2k/.m2/repository/org/springframework/spring-dao/2.0.8/spring-dao-2.0.8.jar!/org/springframework/dao/annotation/PersistenceExceptionTranslationPostProcessor.class jar:file:/C:/Users/r2um2k/.m2/repository/org/springframework/spring-tx/5.1.4.RELEASE/spring-tx-5.1.4.RELEASE.jar!/org/springframework/dao/annotation/PersistenceExceptionTranslationPostProcessor.class It was loaded from the following location: file:/C:/Users/r2um2k/.m2/repository/org/springframework/spring-dao/2.0.8/spring-dao-2.0.8.jar Action: Correct the classpath of your application so that it contains a single, compatible version of org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor 2020-07-30 12:49:55.741 ERROR 17624 --- [ main] o.s.test.context.TestContextManager : Caught exception while allowing TestExecutionListener [org.springframework.test.context.web.ServletTestExecutionListener@73afe2b7] to prepare test instance [com.fanniemae.acquisitions.cdds.datafeed.task.UcdpCompDataFeedTaskHelperTest@5b251fb9] java.lang.IllegalStateException: Failed to load ApplicationContext at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:125) 
~[spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:108) ~[spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.web.ServletTestExecutionListener.setUpRequestContextIfNecessary(ServletTestExecutionListener.java:190) ~[spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.web.ServletTestExecutionListener.prepareTestInstance(ServletTestExecutionListener.java:132) ~[spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:246) ~[spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:227) [spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289) [spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12] at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291) [spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:246) [spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97) [spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12] at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12] at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12] at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12] at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12] at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) [spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70) [spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12] at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190) [spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) [surefire-junit4-2.22.1.jar:2.22.1] at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) [surefire-junit4-2.22.1.jar:2.22.1] at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) [surefire-junit4-2.22.1.jar:2.22.1] at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) [surefire-junit4-2.22.1.jar:2.22.1] at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) [surefire-booter-2.22.1.jar:2.22.1] at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) [surefire-booter-2.22.1.jar:2.22.1] at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) [surefire-booter-2.22.1.jar:2.22.1] at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) [surefire-booter-2.22.1.jar:2.22.1] Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'persistenceExceptionTranslationPostProcessor' defined in class path resource [org/springframework/boot/autoconfigure/dao/PersistenceExceptionTranslationAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception 
is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor]: Factory method 'persistenceExceptionTranslationPostProcessor' threw exception; nested exception is java.lang.NoSuchMethodError: org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor.setProxyTargetClass(Z)V at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:627) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:607) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1288) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1127) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:538) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:498) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:204) ~[spring-beans-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.context.support.PostProcessorRegistrationDelegate.registerBeanPostProcessors(PostProcessorRegistrationDelegate.java:228) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.registerBeanPostProcessors(AbstractApplicationContext.java:707) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:531) ~[spring-context-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775) ~[spring-boot-2.1.2.RELEASE.jar:2.1.2.RELEASE] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) ~[spring-boot-2.1.2.RELEASE.jar:2.1.2.RELEASE] at org.springframework.boot.SpringApplication.run(SpringApplication.java:316) ~[spring-boot-2.1.2.RELEASE.jar:2.1.2.RELEASE] at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:127) ~[spring-boot-test-2.1.2.RELEASE.jar:2.1.2.RELEASE] at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99) ~[spring-test-5.1.4.RELEASE.jar:5.1.4.RELEASE] at org.springframework.test.con Here is my POM file. 
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.1.2.RELEASE</version> </parent> <modelVersion>4.0.0</modelVersion> <groupId>com.fanniemae.acquisitions.cdds</groupId> <artifactId>ucdp-company-datafeed</artifactId> <packaging>jar</packaging> <version>0.0.1-SNAPSHOT</version> <name>ucdp-company-datafeed</name> <properties> <java.version>1.8</java.version> <ucdp-ram-common.version>0.0.1-SNAPSHOT</ucdp-ram-common.version> <common.version>0.0.1-SNAPSHOT</common.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> <exclusions> <exclusion> <groupId>org.junit.vintage</groupId> <artifactId>junit-vintage-engine</artifactId> </exclusion> </exclusions> </dependency> <!-- https://mvnrepository.com/artifact/org.mybatis/mybatis-spring --> <dependency> <groupId>org.mybatis</groupId> <artifactId>mybatis-spring</artifactId> <version>2.0.5</version> </dependency> <!-- https://mvnrepository.com/artifact/org.springframework/spring-ibatis --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-ibatis</artifactId> <version>2.0.8</version> </dependency> <dependency> <groupId>com.oracle.ojdbc</groupId> <artifactId>ojdbc8</artifactId> <version>19.3.0.0</version> </dependency> <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> <version>1.10</version> </dependency> <dependency> <groupId>commons-lang</groupId> <artifactId>commons-lang</artifactId> <version>2.6</version> </dependency> <dependency> <groupId>org.apache.commons</groupId> 
<artifactId>commons-io</artifactId> <version>1.3.2</version> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.4</version> </dependency> <dependency> <groupId>org.jsoup</groupId> <artifactId>jsoup</artifactId> <version>1.11.3</version> </dependency> <dependency> <groupId>commons-pool</groupId> <artifactId>commons-pool</artifactId> <!-- <version>1.6</version> --> </dependency> <dependency> <groupId>org.pojava</groupId> <artifactId>pojava</artifactId> <version>2.8.0</version> </dependency> <dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> <!-- <version>42.2.11</version> --> </dependency> <dependency> <groupId>org.apache.axis2</groupId> <artifactId>addressing</artifactId> <version>1.3</version> <type>mar</type> <exclusions> <exclusion> <groupId>xalan</groupId> <artifactId>xalan</artifactId> </exclusion> <exclusion> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>com.fanniemae.acquisitions.cdds.internal.dependentjar</groupId> <artifactId>rahas</artifactId> <version>1.3</version> <type>mar</type> </dependency> <!-- <dependency> <groupId>com.oracle.weblogic</groupId> <artifactId>wlthint3client</artifactId> <version>12.2</version> <scope>test</scope> </dependency> --> </dependencies> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>jaxb2-maven-plugin</artifactId> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.3.2</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>2.10</version> </plugin> </plugins> </build> </project> Any help will be appreciated. A: Remove the dependency to spring-ibatis - this is for Spring v 2 and you are using version 5! Look at the release date, this is from 2008. 
mybatis-spring is enough for the Spring integration of MyBatis (see here)
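A sketch of the pom.xml change the answer describes (versions copied from the question; verifying with `mvn dependency:tree` that nothing else pulls in spring-dao is a reasonable follow-up):

```xml
<!-- DELETE this block: spring-ibatis 2.0.8 is Spring 2-era and pulls in
     spring-dao, whose old PersistenceExceptionTranslationPostProcessor
     shadows the Spring 5 one from spring-tx (the NoSuchMethodError). -->
<!--
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-ibatis</artifactId>
    <version>2.0.8</version>
</dependency>
-->

<!-- KEEP this block: MyBatis' own Spring integration is sufficient. -->
<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis-spring</artifactId>
    <version>2.0.5</version>
</dependency>
```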
{ "language": "en", "url": "https://stackoverflow.com/questions/63179313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Finding pairs in a list I'm trying to look for pairs of elements in a list, assuming that they are the only pair in the list, and there are no more than 3 identical consecutive elements. I have a function that takes in a list, and returns the index of the first element of the pair, if there is any. If not, then it returns -1 searchForPairs xs = searchHelp xs ((genericLength xs) - 1) where searchHelp xs n | searchHelp xs 0 = -1 -- no pairs found | (xs !! n) == (xs !! (n - 1)) = n | otherwise = searchHelp xs n-1 For some reason, it is returning the error: Couldn't match expected type `Bool' with actual type `Int' In the expression: n In an equation for `searchHelp': searchHelp xs n | searchHelp xs 0 = - 1 | (xs !! n) == (xs !! (n - 1)) = n | otherwise = searchHelp xs n - 1 In an equation for `searchForPairs': searchForPairs xs = searchHelp xs ((genericLength xs) - 1) where searchHelp xs n | searchHelp xs 0 = - 1 | (xs !! n) == (xs !! (n - 1)) = n | otherwise = searchHelp xs n - 1 It seems like it should work. Any ideas why it is not? A: You have two problems. The first is in this line: | otherwise = searchHelp xs n-1 The compiler interperets this as (searchHelp xs n) - 1, not searchHelp xs (n-1), as you intended. The second problem is in you use of guards: | searchHelp xs 0 = -1 -- no pairs found Since searchHelp xs 0 is not a boolean expression (you wanted to use it as a pattern), the compiler rejected it. I can see two easy solutions: searchForPairs xs = searchHelp xs ((genericLength xs) - 1) where searchHelp xs n | n == 0 = -1 -- no pairs found | (xs !! n) == (xs !! (n - 1)) = n | otherwise = searchHelp xs (n-1) and searchForPairs xs = searchHelp xs ((genericLength xs) - 1) where searchHelp xs 0 = -1 -- no pairs found searchHelp xs n | (xs !! n) == (xs !! (n - 1)) = n | otherwise = searchHelp xs (n-1) Now, unfortunately, although this works, it is terribly inefficient. This is because of your use of !!. In Haskell, lists are linked lists, and so xs !! 
n will take n steps, instead of 1. This means that the time your function takes is quadratic in the length of the list. To rectify this, you want to loop along the list forward, using pattern matching: searchForPairs xs = searchHelp xs 0 where searchHelp (x1 : x2 : xs) pos | x1 == x2 = pos | otherwise = searchHelp (x2 : xs) (pos + 1) searchHelp _ _ = -1 A: @gereeter already explained your errors, I would just like to point out that you should not return -1 in case the answer is not found. Instead, you should return Nothing if there is no answer and Just pos if the answer is pos. This protects you from many kinds of errors. A: I couldn't quite grok what you want to do, but from the code, it looks like you're trying to find two consecutive elements in a list that are equal. Instead of using !! to index the list, you can use pattern matching to extract the first two elements of the list, check if they are equal, and continue searching the remainder (including the second element) if they are not. If the list doesn't have at least two elements, you return Nothing searchForPairs xs = go 0 xs where go i (x1:xs@(x2:_)) | x1 == x2 = Just i | otherwise = go (i+1) xs go _ _ = Nothing A: For what it's worth, here is a somewhat idiomatic (and point-free) implementation of what you are trying to do: searchPairs :: Eq a => [a] -> Maybe Int searchPairs = interpret . span (uncurry (/=)) . (zip <*> tail) where interpret (flag, res) = if null flag then Nothing else Just $ length res Explanation: zip <*> tail creates a list of pairs of successive elements (using the reader Applicative type class). uncurry (/=) tests if such a pair is made of identical elements. Finally, interpret translates the result in a value of Maybe Int type.
{ "language": "en", "url": "https://stackoverflow.com/questions/12773721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: C# Event causing Session Variable Error ASP.NET 5 MVC 6 So I basically have an event that fires off when my connected cash receiver takes in money, and I'm using a session variable to store the index of the bill inserted, and I get an error. My session variables work correctly everywhere else except here. Any idea why this is happening? Everywhere I look online it tells me to set up my session correctly, which it must be, since my other ones work fine. This event is fired while on a view page with no other methods running. Any help is appreciated. If the information is too vague I apologize; I will provide more information if you need it to help me solve this problem. Thank you. Edit: This is the event being fired where the SetString is causing the error. void validator_OnCredit(object sender, CreditArgs e) { Console.WriteLine("Credited bill#: {0}", e.Index); switch (e.Index) { case 1: HttpContext.Session.SetString("Test", "1"); break; default: HttpContext.Session.SetString("Test", e.Index.ToString()); break; } Thread.Sleep(500); } A: 3) Updated Answer Session Namespace Reference for beta(s) or RC1 Microsoft.AspNet.Session Session Namespace Reference from CORE 1.0 Microsoft.AspNetCore.Session 2) Updated Answer - If accessing a session outside the controller then you need to inject the session into it as follows public class TestClass { private readonly IHttpContextAccessor _httpContextAccessor; private ISession _session => _httpContextAccessor.HttpContext.Session; public TestClass(IHttpContextAccessor httpContextAccessor) { _httpContextAccessor = httpContextAccessor; } public void SetString() { _session.SetString("Test", "Ben Rules!"); } public string GetString() { return _session.GetString("Test"); } } 1) Initial Answer - Incorrect for current scenario... but if session isn't working this is a good place to start From the error message I assume you are using .NET Core? 
If so then the middleware is added sequentially, so you need to ensure that your code initialises Session Management before it uses MVC: public void ConfigureServices(IServiceCollection services) { // ... services.AddSession(); // ... } public void Configure(IApplicationBuilder app) { app.UseSession(); app.UseMvc(); // ... }
{ "language": "en", "url": "https://stackoverflow.com/questions/39071303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }