convert printer port bytes inpout32 I'm running out of ideas. I'm using C, by the way, via inpout32.dll. I have these "bytes" (e.g. 0000, 00CC) being read from the printer data ports D0-7 or D1-8. I need to filter out the human-readable characters while a print job is being done. This is still very primitive, but I've got a listener function catching this data using inp32. Basically, if I print something like 'Hello World' from Notepad, this should be pulled out from the bytes read by the inp32 function. The printer port listener is in a separate app; the idea is that the app can listen in on any printer. It's basically a PoC at the moment, but what I'm using right now to test is a Canon BJC-1000SP. It's pretty old, but it's the only parallel port printer we've got at the office; the others are USB types. I'm using this on Windows at the moment. Thermal printers are actually the ones we'll be listening on. So now I'm trying to use a generic driver that allows a raw text file to print. How can I extract text from it via the port? If anybody can give me an idea, a function/converter, or where to search, that would be great. If all you read is already human-readable text, just store it all. If not, you need to think about the character encoding in use. If it's plain old ASCII, you can probably just call isprint() to determine whether a byte is a printable character. The above of course assumes that your printer is talking plain text, which probably means it has to be a rather old and simplistic printer (like a dot-matrix from ~20 years ago, or so). If it's a modern "Win-Printer" laser or inkjet, with all the intelligence of page layout being done by the host computer in the driver, you're probably out of luck. In those cases, what is transmitted is the set of instructions to lay out the page, typically in a printer-specific format. I think you should edit your question and specify exactly what printer you're using, and in which operating system environment you're running your program. 
Update: The Canon BJC-1000 printer you're currently using is an inkjet. It very probably relies on the host computer to send it line-by-line data (as in ink lines, not text lines) to control the various ink nozzles. I don't think it ever gets sent plain text. You could investigate by reading through the code of an open-source driver. For Linux, the recommended driver is called gutenprint. What I'm reading are the bytes being thrown at the data registers of the parallel port, so they're not human readable off the bat. I tried several ctype functions earlier as well. So what I need somehow is to extract 'data' from those bytes, or maybe collect them first, because I'm still not sure whether they're being passed as fragments to the printer. I'll look into it more tomorrow, thanks! Basically I need to pull the 'Hello World' text from the bytes being passed through the data ports. How about thermal printers? Are they ASCII? We'll basically be using thermal printers eventually. Thanks!
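For the plain-ASCII case suggested above, the isprint()-style filtering can be sketched as follows (Python for illustration; the control bytes in the sample are made up, not real Canon escape sequences):

```python
def extract_printable(raw: bytes) -> str:
    """Keep only printable ASCII (0x20-0x7E) plus common whitespace;
    everything else is treated as control or raster data and dropped."""
    keep = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}
    return bytes(b for b in raw if b in keep).decode("ascii")

# Hypothetical capture from the data register: control bytes
# interleaved with the job's text.
captured = b"\x00\x00\x1b\xccHello\x07 World\x00"
print(extract_printable(captured))  # Hello World
```

Note that this only recovers text if the driver actually sends plain text; with a raster-mode driver, any printable bytes that survive the filter are coincidental garbage.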
common-pile/stackexchange_filtered
Azure Http Trigger function Hangup socket after the first webhook response I am using the following code inside an Azure HTTP trigger function. var form = new FormData(); var fetch = require("node-fetch"); var https = require('https'); var httpAgents_ = new https.Agent({keepAlive: true}); httpAgents_.maxSockets = 200; module.exports = async function (context, req) { context.log('JavaScript HTTP trigger function processed a request.'); let form_ = getMultiData(context); const response = await getResponse(form_,context); context.res = { body: response }; context.done(); }; async function cerenceLookResponse(body_,context) { context.log('calling http function'); const url = "https://webhook.site/5c7950af-85fc-49b5-a677-12430805a159"; return fetch(url, { method: 'POST', body: form, headers: { "Accept": "*/*", "Accept-Encoding": "gzip, deflate, br", "Connection":"Keep-Alive", "Content-Length":1330, "Keep-Alive": "timeout=30000", "Transfer-Encoding": "chunked", }, agent: httpAgents_ }).then(res => res.text()); }; function getMultiData(context){ var content = { "cmdDict": { "device": "iso" } }; var content2 = { "appserver_data": { "action_mode": "default" }, "enable_filtering": 1 }; var options = { header: {'content-type': "application/JSON; charset=utf-8"}}; form.append('RequestData', JSON.stringify(content), options); form.append('DictParameter', JSON.stringify(content2), options); var ar = []; for(var i=0;i<54;i++){ ar.push(form._streams[3][i]); } var ar2 = []; for(var i=0;i<52;i++){ ar2.push(form._streams[3][i]); } boun = ar2.join(""); test1_orig = ar.join(""); test2 = "Content-Disposition: form-data; name=\"DictParameter\"; paramName=\"REQUEST_INFO\""; test1 = test1_orig + test2 + "\r\n" + 'content-type: application/JSON; charset=utf-8\r\n' + '\r\n'; form._streams[3] = test1; form._streams.push(boun + "--\r\n"); return form; }; This program works perfectly fine when I run it from a local WebStorm terminal. 
However, on the Azure portal it gets the response from the webhook the first time, and then if I run it once again right after getting the response, the function stalls and after 3 min throws the error "socket hang upStack: FetchError". What am I doing wrong here? What is in the getResponse function? Also, why are you creating the form variable in the global scope and not locally in the getMultiData function? You are creating form in the global scope, so every time you call the getMultiData function, duplicate stuff gets added to the same FormData instance. Declare it locally inside the getMultiData function. I have a question: if we declare a global variable in an Azure function, will it be reinitialized every time an HTTP request calls the function, or will it stay there until we clear the memory or restart the function? Azure Functions manages instance creation, hence it can reuse an old instance at times when invoked.
Set list item tag to [*]x instead of [li]x[/li] in SCEditor for vBulletin compatibility TL;DR: SCEditor uses [li]test[/li] for list items. To make it compatible with vBulletin, I want to change this to [*]test, but my approach doesn't fully work. I can make the editor insert [*] for list items; however, they always contain an unwanted line break before their content. So the question is: how can I change [li]x[/li] to [*]x without the line break after [*] of my current solution (see below)? Detailed explanation and my approach I want SCEditor to be compatible with vBulletin. Many BBCodes work, but lists don't. When creating a list in SCEditor, it generates the following BBCode: [ul] [li]x[/li] [li]y[/li] [/ul] vBulletin doesn't parse this, since it uses [list] instead of [ul]. By studying bbcode.js I could fix this by replacing the format of the BBCode: sceditor.formats.bbcode.set('ul', { tags: { ul: null }, breakStart: true, isInline: false, skipLastLineBreak: true, format: '[list]{0}[/list]', html: '<ul>{0}</ul>' }) But the [li]x[/li] still doesn't work, since vBulletin uses [*] x without a closing tag. I tried modifying this item too: sceditor.formats.bbcode.remove('li') sceditor.formats.bbcode.set('li', { tags: { li: null, '*': null }, isInline: false, excludeClosing: true, isSelfClosing: true, skipLastLineBreak: true, closedBy: ['/ul', '/ol', '/list', '*', 'li'], format: '[*]{0}', html: '<li>{0}</li>' }) sceditor.formats.bbcode.remove('*') sceditor.formats.bbcode.set('*', { isInline: false, //excludeClosing: true, isSelfClosing: true, //skipLastLineBreak: true, html: '<li>{0}</li>' }) Now when pressing the list button, the editor inserts BBCodes nearly as I need them: [list] [*] x[/list] Since it creates a line break between [*] and the content, it looks broken. It seems that li is used for the editor controls (insert list button), whereas * (last entry) handles the parsing from BBCode to editor HTML (when toggling between source code and WYSIWYG view). 
I found out that I need to set isSelfClosing to false in the * BBCode. skipLastLineBreak is not required, nor is deleting the tag with sceditor.formats.bbcode.remove('*'), since set() overrides any existing tags (as described in the documentation). The following works: sceditor.formats.bbcode.set('*', { isInline: false, // Avoid automatically closing tag [/*] excludeClosing: true, // Avoids line break between [*] and list point content isSelfClosing: false, html: '<li>{0}</li>' }) This results in: [list] [*]x [*]y [/list]
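If you ever need the same conversion outside the editor (e.g. migrating stored posts), the tag mapping can be done as a plain text transform; a rough Python sketch, assuming well-formed SCEditor output:

```python
import re

def to_vbulletin(bbcode: str) -> str:
    """Convert SCEditor-style list BBCode to vBulletin-style lists."""
    bbcode = re.sub(r"\[(ul|ol)\]", "[list]", bbcode)
    bbcode = re.sub(r"\[/(ul|ol)\]", "[/list]", bbcode)
    bbcode = bbcode.replace("[li]", "[*]")   # vBulletin items have no closing tag
    bbcode = bbcode.replace("[/li]", "")
    return bbcode

print(to_vbulletin("[ul] [li]x[/li] [li]y[/li] [/ul]"))  # [list] [*]x [*]y [/list]
```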
Incoming call listener sleeps after a couple of hours For the last couple of weeks I have been facing an issue with the TelephonyManager API in Android: a listener for incoming calls that starts recording on an incoming call and stops recording on call end (the process works smoothly). ISSUE The issue I am facing is that on some mobile handsets it works all the time, but on some handsets the broadcast listener of the telephony manager stops working after a few hours. After some research I found a suggestion to use a wake-lock to prevent the CPU from sleeping, and I tried this, but in vain. @Override public void onReceive(Context context, Intent intent) { //We listen to two intents. The new outgoing call only tells us of an //outgoing call. We use it to get the number. roPlantPrefs = RoPlantPrefs.getInstance(context); databaseHelper = new DatabaseHelper(context); //lastState = roPlantPrefs.getLastState(); if (roPlantPrefs.getLogin()) { if (intent.getAction().equals("android.intent.action.NEW_OUTGOING_CALL")) { savedNumber = intent.getExtras().getString("android.intent.extra.PHONE_NUMBER"); } else { roPlantPrefs = RoPlantPrefs.getInstance(context); // if (!roPlantPrefs.getIsOnCall()) { String stateStr = intent.getExtras().getString(TelephonyManager.EXTRA_STATE); String number = intent.getExtras().getString(TelephonyManager.EXTRA_INCOMING_NUMBER); int state = 0; if (stateStr.equals(TelephonyManager.EXTRA_STATE_IDLE)) { state = TelephonyManager.CALL_STATE_IDLE; } else if (stateStr.equals(TelephonyManager.EXTRA_STATE_OFFHOOK)) { state = TelephonyManager.CALL_STATE_OFFHOOK; } else if (stateStr.equals(TelephonyManager.EXTRA_STATE_RINGING)) { state = TelephonyManager.CALL_STATE_RINGING; } onCallStateChanged(context, state, number); } } // } } I have also used a timer and AlarmManager, but it works for at most 2 to 3 hours and then the listener stops working. Any help would be appreciated. Are you using a service or a broadcast receiver? 
Mention the device name; I may have an answer for your question. I don't understand: are you saying you defined the broadcast receiver in your manifest, and after some time your broadcast does not get notified when there is a new incoming call? Yes, that's the worst part. I had the same issue with Oppo, Vivo, Mi and similar phones: after removing the app from recent applications, the app was getting killed, and even the services were getting killed. Solution: I added autostart permissions like this in my application and it worked. After resolving this issue my app was getting frozen/killed after some time running in the background due to Doze mode. Solution: for this condition, just go to Settings -> Battery Option and allow your app to run in the background. If you do this, Doze mode won't affect your app. Cheers. On Q-Mobile there is no such setting available (Settings -> Battery Option, allow your app to run in the background). On every Android device you will see a battery option, and you can allow your app to run in the background: "Settings -> Battery Option -> Battery Optimization/Battery Saver/Excessive Power Saver/Energy Saver"; allow your app from such options to work in the background. You will be able to see the above settings on Marshmallow and later devices. Please accept/upvote the answer if this solves your issue.
Create a WordPress loop with two posts of the next page I need to create a WordPress site which shows 5 posts on the front page and, in a different loop, 2 posts of the next page. Those show up if you visit the second page. Do you have any ideas? I want to show "More articles" by displaying "older" posts in a different loop. Thanks. If you still want an answer, you should move your question to http://wordpress.stackexchange.com/ - more experts there. Either flag your question for moderator attention, or delete it here yourself and repost it on WP.SE. For the first page, limit your listing to 5 posts per page in the WordPress settings. Then you create the second page and apply a page template in which you have made a custom query to fetch 2 posts. $cat = get_cat_ID($category); $paged = (get_query_var('paged')) ? get_query_var('paged') : 1; $post_per_page = 2; // -1 shows all posts $do_not_show_stickies = 1; // 0 to show stickies $args=array( 'category__in' => array($cat), 'orderby' => 'date', 'order' => 'DESC', 'paged' => $paged, 'posts_per_page' => $post_per_page, 'caller_get_posts' => $do_not_show_stickies ); $temp = $wp_query; // assign original query to temp variable for later use $wp_query = null; $wp_query = new WP_Query($args); if( have_posts() ) : ... To read more, see the WordPress Codex reference. Sorry, I don't get it :-) I want to create two loops on the index.php. The first one shows the 5 posts (like you said) and the second shows the posts of the next page in short form. Is that possible with WordPress?
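The offset arithmetic behind the two loops is simple; a Python sketch of what the two queries need to select (in WordPress itself you would express this with the posts_per_page, offset and paged query parameters):

```python
def page_slices(posts, per_page=5, preview=2, page=1):
    """Main posts for the current page plus a short preview of the next page."""
    start = (page - 1) * per_page
    main = posts[start:start + per_page]
    more = posts[start + per_page:start + per_page + preview]
    return main, more

posts = [f"post{i}" for i in range(1, 10)]
main, more = page_slices(posts)
print(main)  # ['post1', 'post2', 'post3', 'post4', 'post5']
print(more)  # ['post6', 'post7']
```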
Print all the contents of a HashMap<Object, Collection<Object>> type I have a hypergraph construction like below in an HGraph class Map<HEdge, ArrayList<HVertex>> hGraph = new HashMap<HEdge, ArrayList<HVertex>>(); and would like to print all the (key, value) pairs using the function below. But all I am getting is this: Hypergraph: e4(2.0)[] e3(2.0)[] e1(3.0)[] e2(2.0)[] e5(2.0)[] Nothing inside [], not even the "#" sign that I used for debugging! I've also overridden the toString methods in the HEdge and HVertex classes to print out their labels. public void printHGraph(HGraph hGraph) { Set<HEdge> edges = hGraph.getAllHEdges(); HEdge edge; HVertex vertex; ArrayList<HVertex> edge_vertices; System.out.println("Hypergraph:"); Iterator<HEdge> edge_iterator = edges.iterator(); while(edge_iterator.hasNext()) { edge = edge_iterator.next(); System.out.print(edge.toString()); System.out.print("["); edge_vertices = hGraph.getHVertices(edge); Iterator<HVertex> vertex_iterator = edge_vertices.iterator(); while(vertex_iterator.hasNext()) { System.out.print("#"); //@debug vertex = vertex_iterator.next(); System.out.print(vertex.toString()+", "); } System.out.print("]"); System.out.println(); } } What kind of output do you want? Have you overridden the equals and hashCode methods in your HEdge class? For example: e1(3.0)[v1, v2, v3] @RohitJain, exactly how should I override these two methods? Any example, kindly? It doesn't explain why your code doesn't work, but I'd use a for-each loop instead of manual work with iterators. However, the while() is working fine with the edge_iterator but not with the vertex_iterator! I will try with for-each. @JoarderKamal. It won't work with for-each either. Just try using hGraph.containsKey(edge); you will see that it returns false. You need to override both the equals and hashCode methods in your class to make it work correctly as a HashMap key. @JoarderKamal. 
See this question for an in-depth explanation. Unfortunately, I can't get any of the usual Map functions when typing hGraph., like containsKey or entrySet or anything like that. I am not sure why! While the likely hashCode()/equals() issue still needs to be solved, I'd recommend iterating using hGraph.entrySet() to avoid the unnecessary hash table lookups. Your hGraph is of type HGraph, so you won't have the map methods unless HGraph implements Map. The output you get simply means that there's nothing in the lists (the values of the map). The problem is in code you don't show: getting the values from the map, or populating the map. I've already checked and verified that the values are set in the HVertex objects correctly by calling the get() method just after using the set() method. I am certain that they are not empty! @kiheru I've generated the overridden code for hashCode() and equals() using Eclipse's Source > Generate methods, but I'm still seeing no changes! How could I get hGraph.entrySet()? I was passing the reference of an HGraph-type object into the print function; the actual object was created while taking the inputs. @JoarderKamal: if they were not empty, they would be printed as non-empty. What you're printing when calling get() is probably a different list than the one you print in your question. Or it has been cleared in the meantime. Show the code filling the map, and show the code of the methods used in the posted code that get the keys and values from the map. @JoarderKamal To get the entry set, you could add a getEntries() method to HGraph. @JBNizet thanks a lot for pointing this out. I've carefully checked my input function and found that I accidentally called clear() on a vertices list a few lines after adding the elements. And that is where the actual problem was. There was no need to override hashCode() and equals() in this particular case. Many thanks again, everyone.
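The accepted root cause, clearing a list that is still stored as a map value, is easy to reproduce in any language where collections are stored by reference; a Python sketch:

```python
graph = {}
vertices = ["v1", "v2", "v3"]
graph["e1"] = vertices       # the map stores a reference, not a copy...

vertices.clear()             # ...so clearing the working list empties the value too
print(graph["e1"])           # []

vertices = ["v1", "v2", "v3"]
graph["e2"] = list(vertices) # storing a copy decouples the two
vertices.clear()
print(graph["e2"])           # ['v1', 'v2', 'v3']
```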
How do I call a Windows DLL function from Cygwin Python? I have a Windows Python 3.7 function which successfully calls the kernel32.dll GetSystemPowerStatus function using ctypes to interrogate the power status, to see if my laptop is on AC or battery power. This is a pure Python solution. I want to port this function to Cygwin Python 3.7. Out of the box, Cygwin Python 3's ctypes does not seem to allow calling a Windows DLL. I would prefer a pure Python solution, but I can use C/C++ if necessary. Does anyone have an example of how to do this? Edited to add the code (lines 63-67) and error messages: elif _os.name == 'posix' and _sys.platform == 'cygwin': # c:\Windows\System32\kernel32.dll kernel32_name = '/proc/cygdrive/c/Windows/System32/kernel32.dll' kernel32 = CDLL(kernel32_name) _GetSystemPowerStatus = kernel32.GetSystemPowerStatus $ python3.7 GetSystemPowerStatus.py Traceback (most recent call last): File "GetSystemPowerStatus.py", line 82, in <module> result, systemPowerStatus = GetSystemPowerStatus() File "GetSystemPowerStatus.py", line 66, in GetSystemPowerStatus kernel32 = CDLL(kernel32_name) File "/usr/lib/python3.7/ctypes/__init__.py", line 356, in __init__ self._handle = _dlopen(self._name, mode) OSError: Invalid argument Python 2.7 gives the same error, but at line 366. Solved. See my own answer below. 
Please give the error message that you get when you run your working Windows code inside Cygwin. '$ python3.7 GetSystemPowerStatus.py⏎ Traceback (most recent call last):⏎ File "GetSystemPowerStatus.py", line 69, in ⏎ result, systemPowerStatus = GetSystemPowerStatus()⏎ File "GetSystemPowerStatus.py", line 57, in GetSystemPowerStatus⏎ _GetSystemPowerStatus = cdll.kernel32.GetSystemPowerStatus⏎ File "/usr/lib/python3.7/ctypes/init.py", line 426, in __getattr__⏎ dll = self._dlltype(name)⏎ File "/usr/lib/python3.7/ctypes/init.py", line 356, in __init__⏎ self._handle = _dlopen(self._name, mode)⏎ OSError: No such file or directory⏎' Edit the question with additional, formatted information. Comments are obviously a mess to read. Quite puzzling; actually this works for me: ctypes.CDLL("/cygdrive/c/windows/system32/user32.dll") but not with kernel32.dll. I get exactly the same error. So it's possible to load a Windows library from Cygwin, but not kernel32 :( Thanks. Very interesting. Must dig into the source of ctypes, I guess. Like you, I wasn't able to get kernel32.dll (although it works with other DLLs like user32, msvcrt, kernelbase, etc.). I found a pretty contrived way of doing it... 
This uses kernelbase.dll (which exports GetModuleHandle) to get a handle on kernel32.dll, then calls CDLL with the handle optional keyword: #!/usr/bin/env python3 # -*- coding: utf-8 -*- import sys import ctypes def main(): # the idea is to load kernelbase.dll which will allow us to call GetModuleHandleW() on kernel32.dll try: kernel_base = ctypes.CDLL("/cygdrive/c/windows/system32/kernelbase.dll") except OSError: print("Can't load kernelbase.dll...") return -1 gmhw = kernel_base.GetModuleHandleW gmhw.argtypes = (ctypes.c_wchar_p, ) gmhw.restype = ctypes.c_void_p # call GetModuleHandleW on kernel32.dll (which is loaded by default in the Python process) kernel32_base_addr = gmhw("kernel32.dll") print(f"Got kernel32 base address: {kernel32_base_addr:#x}") # now call CDLL with optional arguments kernel32 = ctypes.CDLL("/cygdrive/c/windows/system32/kernel32.dll", handle=kernel32_base_addr, use_last_error=True) # test with GetSystemPowerStatus print(f"GetSystemPowerStatus: {kernel32.GetSystemPowerStatus}") return 0 if __name__ == "__main__": sys.exit(main()) Thanks. It's a bit convoluted. I found a better way. After finding that I could load user32.dll easily, I did some more investigation with the source and pdb and ldd. What follows is my solution. 
elif _os.name == 'posix' and _sys.platform == 'cygwin': RTLD_LOCAL = 0 # from /usr/include/dlfcn.h RTLD_LAZY = 1 RTLD_NOW = 2 RTLD_GLOBAL = 4 RTLD_NODELETE = 8 RTLD_NOLOAD = 16 RTLD_DEEPBIND = 32 kernel32_name = '/proc/cygdrive/c/Windows/System32/kernel32.dll' kernel32 = CDLL(kernel32_name, mode=RTLD_LAZY | RTLD_NOLOAD) _GetSystemPowerStatus = kernel32.GetSystemPowerStatus The common code to call the function is: _GetSystemPowerStatus.restype = c_bool _GetSystemPowerStatus.argtypes = [c_void_p,] _systemPowerStatus = _SystemPowerStatus() # not shown, see ctypes module doc result = _GetSystemPowerStatus(byref(_systemPowerStatus)) return result, SystemPowerStatus(_systemPowerStatus) This works fine with Python 2.7, 3.6 and 3.7 on $ uname -a CYGWIN_NT-10.0 host 3.0.1(0.338/5/3) 2019-02-20 10:19 x86_64 Cygwin Thanks to all for their help. Also check: https://stackoverflow.com/questions/53781370/is-there-an-alternative-way-to-import-ctypes-wintypes-in-cygwin.
Locked post's error message for comment is not correct In the list of locked posts, when I try to upvote a comment, I receive the following error message as a popup: This comment is not eligible for voting or flagging But I am able to flag the comment, which doesn't match the error message. Can the error message be corrected?
Graph edit framework I was looking at MeVisLab and I wondered if anyone knows a good framework for making a user interface similar to the one they use. I like the design-flow-with-boxes-and-arrows thing. What I would really like is to be able to integrate with C++ using Qt, and perhaps export the graph to XML or something like that. There is another example of the interface here: I hope someone knows something. Qt's Graphics View is a "framework" which does a good bit of the handling for the kind of scenario you describe. It doesn't take much code to get off the ground and within striking range of what you're looking for: http://doc.qt.nokia.com/latest/graphicsview-diagramscene.html http://doc.qt.nokia.com/latest/graphicsview-elasticnodes.html I'm not aware of any open-source Qt-based programs that offer exactly what you want already written. Just noticed IBM open-sourced "DataExplorer", which is interesting to me... I might go take a look at that myself: http://www.research.ibm.com/dx/ You are absolutely right! I was just wondering if there was something with some graph features: optimizing the layout of planar graphs and stuff like that. It's a field of study, with lots of approaches. For starters, look at: http://en.wikipedia.org/wiki/Graph_drawing
Div tag doesn't close in PHP Easy one! I'm trying to code a cheap forum. Coming from a C background, I started to notice something strange about PHP. While having a function return a string (HTML) into place inside a DIV, the browser would not print the </DIV> - even when it's echoed by itself. Does PHP decide when it wants to echo certain DOM elements, or does it have limitations on HTML output? echo "Start<div id='Forum'>"; echo "Forum"; GetFullList(); echo "</div>"; Where GetFullList() consists of: function GetFullList(){ $sql="SELECT * FROM `Forum` WHERE `IsReply` =0"; $result=mysql_query($sql); if (!$result){ echo mysql_error(); } if($result){ while($ForumEntry = mysql_fetch_assoc($result)){ $IsReply = $ForumEntry["IsReply"]; $ParentPost = $ForumEntry["ParentTopic"]; $f_User = $ForumEntry["User"]; $f_Replies = $ForumEntry["Replies"]; $f_Views = $ForumEntry["Views"]; $f_Time = $ForumEntry["Time"]; $f_Post = $ForumEntry["Post"]; $f_Topic = $ForumEntry["Topic"]; $f_Index = $ForumEntry["Index"]; echo DisplayPost($f_User, $f_Replies, $f_Views, $f_Time, $f_Post, $f_Topic, $f_Index); GetChildPostsOf($ParentPost); } } } And DisplayPost() is built as such: function DisplayPost($f_User, $f_Replies, $f_Views, $f_Time, $f_Post, $f_Topic, $f_Index){ $PostBlock = "<div id='Grp_Cell' style='width:930;background-color:#999999;text-align:left;'><div id='Grp_Cell' style='float:left;'><div id='Tbl_Cel'>User: ".$f_User."</div><div id='Tbl_Cel'>Replies: ". $f_Replies."</div><div id='Tbl_Cel'>Views: ".$f_Views."</div><div id='Tbl_Cel'style='background-color:777777;height:112;'>Post started on ".$f_Time.".&nbsp;</div></div><div id='Grp_Cell' style='float:right;width:600;'><div id='Tbl_Cel'>Subject: ".$f_Topic."</div><div id='Tbl_Cel' style='background-color:777777;height:150;'>". 
$f_Post."</div><a onClick='Reply(".$f_Index.");Filter();'><div id='Tbl_Cel' style='background-color:#888888; height:50; width:50; float:right; padding:2;border-color:black; border:2;'><br>Reply</div></a></div>"; return $PostBlock; } (Displays a div scaffolding for DB results: the post.) When I try to echo "< /div>" after GetFullList(), the result is not printed in the HTML, leaving the rest of the page to be encompassed under the malformed div. Return the string and echo getFullList(). There is likely an error in GetChildPostsOf. Your $PostBlock has 10 opening divs and 9 closing divs. @mplungjan I have. My point is that the last div might be echoed, but it's hard to tell since there will be so many divs that aren't closed. For instance, if DisplayPost is called 9 times, there will be 91 opening divs (9*10 + 1), and 82 closing divs (9*9 + 1). @mplungjan actually he gave the correct answer. That is exceptionally fast... Scary. I'll check out aynber's solution first and will get back! Ha! That did it! Thanks @aynber! That string was one line, word-wrapped, jeez the eyestrain! @user2690520 No problem. I actually had to break it into multiple lines just to see how that was set up. How do I mark as answered? Let me add it as an answer instead of a comment, and you can. Sorry, I read "I have one /div too little", and then aynber said "you have one /div too little". There are 10 opening divs and 9 closing divs in $PostBlock. A closing </div> should be added where necessary. An easy way to see what the output looks like is to break it into lines like this: $PostBlock = " <div id='Grp_Cell' style='width:930;background-color:#999999;text-align:left;'> <div id='Grp_Cell' style='float:left;'> <div id='Tbl_Cel'>User: ".$f_User."</div> <div id='Tbl_Cel'>Replies: ". 
$f_Replies."</div> <div id='Tbl_Cel'>Views: ".$f_Views."</div> <div id='Tbl_Cel'style='background-color:777777;height:112;'>Post started on ".$f_Time.".&nbsp;</div> </div> <div id='Grp_Cell' style='float:right;width:600;'> <div id='Tbl_Cel'>Subject: ".$f_Topic."</div> <div id='Tbl_Cel' style='background-color:777777;height:150;'>". $f_Post."</div> <a onClick='Reply(".$f_Index.");Filter();'><div id='Tbl_Cel' style='background-color:#888888; height:50; width:50; float:right; padding:2;border-color:black; border:2;'><br>Reply</div></a> </div> ";
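A quick sanity check for this class of bug is to count opening and closing tags in the generated markup. A naive Python sketch (plain string counting, not a real HTML parser, so it ignores comments and tags inside attribute values):

```python
import re

def div_balance(html: str) -> int:
    """Positive result: unclosed <div>s; negative: extra </div>s."""
    opens = len(re.findall(r"<div\b", html, flags=re.I))
    closes = len(re.findall(r"</div\s*>", html, flags=re.I))
    return opens - closes

print(div_balance("<div><div>x</div>"))  # 1
```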
How to animate individual views on Activity transition? How can I trigger animations on individual views when switching activities? I.e., if the user clicks a button to go to the next page, I'd like some of my views to fly off screen and have the background crossfade into the next screen, instead of having the whole screen be animated as one piece. Is this possible? And if so, how should it be done? (I'm using the most recent API, 4.1, and it doesn't have to be backwards compatible.) EDIT: Currently, doing the transition-in animation works fine by calling it in onResume(), but when I press back, the activity switches faster than any animations started in onPause(), so that makes me think there's a better way/place to do this. Overriding onResume() works fine, but onPause/onStop don't wait for the animation to complete before moving to the next screen. Whatever starts the event, e.g. a button click, would need to start the animations before startActivity is called. button.setOnClickListener(new View.OnClickListener() { @Override void onClick(... { // start animations // wait till they are finished // start activity } }); Since every event that starts a new activity is going to have animation code, I would also recommend moving it into some sort of helper class to avoid having duplicate code all over the place, e.g.: button1.setOnClickListener(new View.OnClickListener() { @Override void onClick(... { helper.animateViews(/* probably pass activity or context */); // start activity } }); button2.setOnClickListener(new View.OnClickListener() { @Override void onClick(... 
{ helper.animateViews(/* probably pass activity or context */); // start activity } }); public class ViewAnimationHelper { public void animateViews(Activity activity) { // find all views; if not found then don't animate them View view1 = activity.findViewById(R.id.view1); if(view1 != null) { // animate view } View view2 = activity.findViewById(R.id.view2); if(view2 != null) { // animate view } } } This is all pseudo Java code, but hopefully enough for you to get the idea. Good luck! You can set up animations (like slide) when you switch between activities like this: In the res folder, create an anim folder. For example, put two XML files for the slide. slide_in.xml <set xmlns:android="http://schemas.android.com/apk/res/android" android:shareInterpolator="false"> <translate android:fromXDelta="100%" android:toXDelta="0%" android:fromYDelta="0%" android:toYDelta="0%" android:duration="200"/> </set> slide_out.xml <set xmlns:android="http://schemas.android.com/apk/res/android" android:shareInterpolator="false"> <translate android:fromXDelta="0%" android:toXDelta="-100%" android:fromYDelta="0%" android:toYDelta="0%" android:duration="200" /> </set> Then in your Java code just write this: Intent i = new Intent(YourActivity.this, OtherActivity.class); this.startActivity(i); overridePendingTransition(R.anim.slide_in, R.anim.slide_out); If you are testing this on a real device, don't forget to allow it to play animations (Settings -> Display -> Animations -> All Animations). Hope it helps! :) This animates the whole Activity; I'm looking to animate individual views separately. Oh sorry, I misunderstood. Did you see this? http://developer.android.com/reference/android/view/ViewPropertyAnimator.html Yes, but I need to know how to trigger that sort of animation when leaving/starting the activity. Then just override the onStop(), onResume() or onStart() methods to trigger that. 
See this http://developer.android.com/reference/android/app/Activity.html Overriding onResume() works fine, but onPause/onStop don't wait for the animation to complete before moving to the next screen.
Automate local deployment of docker containers with gitlab runner and gitlab-ci without privileged user We have a prototype-oriented develop environment, in which many small services are being developed and deployed to our on-premise hardware. We're using GitLab to manage our code and GitLab CI / CD for continuous integration. As a next step, we also want to automate the deployment process. Unfortunately, all documentation we find uses a cloud service or kubernetes cluster as target environment. However, we want to configure our GitLab runner in a way to deploy docker containers locally. At the same time, we want to avoid using a privileged user for the runner (as our servers are so far fully maintained via Ansible / services like Portainer). Typically, our .gitlab-ci.yml looks something like this: stages: - build - test - deploy dockerimage: stage: build # builds a docker image from the Dockerfile in the repository, and pushes it to an image registry sometest: stage: test # uses the docker image from build stage to test the service production: stage: deploy # should create a container from the above image on system of runner without privileged user TL;DR How can we configure our local Gitlab Runner to locally deploy docker containers from images defined in Gitlab CI / CD without usage of privileges? If your runner is in your on-premise hardware, you could use volume maps to give the runner access to a folder where all your applications would be deployed to. As for the privileged user, I'm not sure why you would need it Thanks for the response! Correct, the runner itself runs in a docker container on our on-premise hardware. We want our deployed services to run in other docker containers on the same machine (for now). As far as I know, we'd need a privileged user to create these other docker containers, or am I missing something there? 
You would need privileges to run Docker in Docker, but since the runner would be starting containers on the host, you don't need privileges; you just need to map the Docker socket from the host. The build stage is usually the one where people use Docker in Docker (dind). To avoid the privileged user there, you can use the kaniko executor image in GitLab. Specifically, you would use the kaniko debug image like this:

dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG

You can find examples of how to use it in GitLab's documentation. If you want to use that image in the deploy stage you simply need to reference the created image. You could do something like this:

production:
  stage: deploy
  image: $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG

With this method, you do not need a privileged user. But I assume this is not what you are looking to do in your deployment stage. Usually, you would just use the image you created in the container registry to deploy the container locally. The last method explained would only deploy the image in the GitLab runner.
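The socket-mapping approach described above can be sketched in the runner's config.toml. This is a minimal, hypothetical runner entry; the runner name and helper image tag are assumptions, not taken from the question:

```toml
# /etc/gitlab-runner/config.toml (fragment) - hypothetical values
[[runners]]
  name = "local-deploy"            # assumed runner name
  executor = "docker"
  [runners.docker]
    image = "docker:24"            # assumed default job image
    privileged = false
    # Mount the host's Docker socket so jobs talk to the host daemon
    # and start sibling containers instead of nested (privileged) ones.
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```

Note that socket access still effectively grants control of the host's Docker daemon, so this trades the privileged flag for socket permissions rather than removing risk entirely.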
Drupal 7, form submit: Display result of a query? Using the form api, I'm trying to display the result of some processing after a post is sent by a form. So I implement the module_name_submit(...) method:

function cmis_views_block_view($delta = '') {
  $cmisview = cmis_views_load($delta);
  $block['content'] = cmis_views_info($cmisview);
  return $block;
}

The processing is very simple. The call happens properly and I would like to display the $block on screen. Returning this doesn't help. Maybe a redirect could help, but I'm looking first for something equivalent to drupal_set_message() but that displays html in the page. Any leads welcome :-) Thanks, J. Put the content in a session variable, redirect to that page and have a custom block output that specific variable in the $content. You probably need to use $form_state['redirect']
Passenger: mod_rewrite rules for non-default page cache directory for Rails application Does anybody have some working Apache mod_rewrite rules that enable Phusion Passenger (mod_rails) to use a non-default location for the page cache within a Rails application? I'd like the cached files to go in /public/cache rather than the default of /public. I found the answer in this blog post:

RailsAllowModRewrite On
RewriteEngine On

RewriteCond %{THE_REQUEST} ^(GET|HEAD)
RewriteCond %{REQUEST_URI} ^/([^.]+)$
RewriteCond %{DOCUMENT_ROOT}/cache/%1.html -f
RewriteRule ^/[^.]+$ /cache/%1.html [QSA,L]

RewriteCond %{THE_REQUEST} ^(GET|HEAD)
RewriteCond %{DOCUMENT_ROOT}/cache/index.html -f
RewriteRule ^/$ /cache/index.html [QSA,L]
How many elements are in the list which results after factor passing $201^7$ four times? Given a list, a factor pass is defined to be a process in which each element in the list is replaced by a list of the original element’s positive divisors. For example, after factor passing the number $4$, one gets $1, 2, 4$. After two factor passes, $4$ becomes $1, 1, 2, 1, 2, 4$. How many elements are in the list which results after factor passing $201^7$ four times? Note that $201^7 = 3^7 \cdot 67^7$. After the first pass, there are $$\sum_{i \mid 201^7} 1 = d(201^7)$$ elements in the list. After the second pass there are $$\sum_{i \mid 201^7}d(i)$$ where $d(n)$ denotes the number of positive divisors of a positive integer $n$. How can we find the number of elements in the list after four passes? Can you solve the problem for $3^7$ and $67^7$ (or in general $p^k$) separately? @HagenvonEitzen How can we do it for one prime? Both are prime so I guess there are 8 factors of each, so 64 factors of both together on the first pass. The first pass is given by the matrix with numbers 1 to 8 across the top and 1 to 8 down the side and their products in the corresponding rows and columns. On the next pass the maximal element 8x8 generates the whole same matrix and the other numbers generate smaller matrices down to the unit top left which just generates itself. For $2$ we have $3,6,10,15$. These appear to be of the form $\binom{n}{2}$. I think the 2nd pass gives you $8^2 9^2/4$ @RobertFrost: $\binom{8}{1}^2$ at the first pass, $\binom{9}{2}^2$ at the second pass, $\binom{10}{3}^2$ at the third pass and $\binom{11}{4}^2$ at the final pass seem to agree with my answer. HINT: Let us state that a $4$-path in a $7\times 7$ grid is something of the form $$ (a_0,b_0)\mapsto(a_1,b_1)\mapsto (a_2,b_2)\mapsto (a_3,b_3)\mapsto (a_4,b_4)=(7,7) $$ with $a_k,b_k\in[0,7]$ and $a_0\leq a_1\leq a_2\leq a_3\leq a_4$, $b_0\leq b_1\leq b_2\leq b_3\leq b_4$. Claim 1.
by stars and bars there are $\binom{11}{4}^2$ $4$-paths. Claim 2. Since $\mathbb{Z}$ is a UFD, $\binom{11}{4}^2=\color{red}{108900}$ is also the answer to the given question. Can you see how a divisor of a divisor of a divisor of a divisor of $201^7$ is related with a $4$-path? Notice that to be a divisor of$\ldots$ is a transitive relation and every divisor of $201^{7}$ is of the form $3^{a}67^{b}$ with $(a,b)\in[0,7]^2$. Can you prove Claim 1 on your own?
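The $\binom{11}{4}^2$ count is easy to sanity-check by brute force: every element that ever appears is of the form $3^a 67^b$, so each can be tracked as an exponent pair. A minimal sketch (the exponent-pair bookkeeping is my own encoding, not part of the problem statement):

```python
from math import comb

def factor_pass(items):
    # Each item (a, b) stands for 3^a * 67^b; its divisors are exactly
    # the pairs (i, j) with 0 <= i <= a and 0 <= j <= b.
    return [(i, j) for (a, b) in items for i in range(a + 1) for j in range(b + 1)]

items = [(7, 7)]            # 201^7 = 3^7 * 67^7
for _ in range(4):
    items = factor_pass(items)

print(len(items))           # 108900
print(comb(11, 4) ** 2)     # 108900, matching the stars-and-bars count
```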
Cache works for html view but not for xml output I'm currently debugging (and trying to understand) why the cache for the html view is created, but not for the xml view. I read the Joomla! documentation article https://docs.joomla.org/J3.x:Developing_an_MVC_Component/Adding_Cache and a similar question in this portal from 5 years ago: How can I use Joomla's Cache with my components view? My controller.php looks like this:

public function display($cacheable = false, $urlparams = false)
{
    $cacheable = true;
    $viewName = $this->input->get('view');
    $viewLayout = $this->input->get('layout', 'default');

    if (JFactory::getUser()->get('id') || !in_array($viewName, array('html', 'xml')) || $viewLayout == 'xsl') {
        $cacheable = false;
    }

    $document = JFactory::getDocument();
    $viewType = $document->getType();
    $view = $this->getView($viewName, $viewType, '', array('base_path' => $this->basePath, 'layout' => $viewLayout));
    $view->setModel($this->getModel('Sitemap'), true);

    $safeurlparams = array(
        'id' => 'INT',
        'itemid' => 'INT',
        'uid' => 'CMD',
        'action' => 'CMD',
        'property' => 'CMD',
        'value' => 'CMD',
        'view' => 'CMD',
        'lang' => 'CMD'
    );

    return parent::display($cacheable, $safeurlparams);
}

Folder structure:

components
- com_my_extension
--- views
---- html
---- xml

What am I missing here? Is it that the standard Joomla! cache doesn't cache the XML output and I have to do it manually? Or is the data in $safeurlparams wrong? (I don't totally understand how the parts in this array should be.) https://stackoverflow.com/questions/18448077/how-to-cache-a-php-generated-xml-file-in-joomla An older question, but may give you a direction to consider to figure out what to add into the code generating your XML. Thx for the link. If I interpret it right, the html view is automatically cached with the controller display cacheable true statement, but for XML I have to implement it on my own. I think that sounds about right.
I'm in the process of figuring out how various Joomla framework parts work currently, so if I find a different way of doing it, I'll comment back. A little more research: the cache should be created with the existing code. Something is preventing the cache file from being created for the second view. Still debugging which part is the barrier.
About ImageTextButton And TextView I want to change TextView to ImageTextButton. This is the sample code I have used for the program:

TextView test1, test2;

public void changeTurn() {
    if (turn == 0) {
        turn = 1;
        test1.setClickable(false);
        test2.setClickable(true);
    } else {
        turn = 0;
        test1.setClickable(true);
        test2.setClickable(false);
    }
}

The question is, I don't want to use TextView and want to change it to ImageTextButton that can do the same as the sample code above. I mean it is like I can set the ImageTextButton to be clickable or not in my if statement. What's an ImageTextButton?! I never heard of a control with that name. Sorry, there is no such view as ImageTextButton in Android. Seems as though he just wants to make a clickable TextView. Sorry, my bad. I got it functioning now. :) It's a LibGDX class. Thanks guys. :) By "ImageTextButton" I assume you mean ImageButton. So the answer to The question is, I don't want to use TextView and want to change it to ImageTextButton that can do the same as the sample code above is yes. It extends View so you can set it to clickable. FYI: All of those links lead to somewhere in the Android Docs. Thanks. I got it functioning. Sorry I forgot to mention I'm using LibGDX. I use setTouchable(Touchable.enabled) and setTouchable(Touchable.disabled).
Multiple JOIN on the same table I'm working on a messaging system in Laravel and I have the following tables:

USER
user_id | email | password | name | ...

CHAT
chat_id | user1_id | user2_id

MESSAGE
message_id | content | date | user_id | chat_id | ...

and what I want to do is to get all the chats the authenticated user has started with another one, ordered by the date of the messages (from the most recent to the oldest one), plus the information on the user he's talking to. I'm currently working on the raw SQL request before passing it into Laravel, and this is what I tried, but it doesn't give the expected result:

SELECT user.user_id, user.name, user.surname
FROM user
JOIN chat a ON user.user_id = a.user1_id
JOIN chat b ON user.user_id = b.user2_id
WHERE b.chat_id IN (SELECT chat_id FROM chat WHERE user1_id = 1 OR user2_id = 1)
AND user.user_id != 1

----> (I'm testing with the user that has the ID #1.) If anyone could help that would be greatly appreciated. what are you expecting it to return and what does it return instead? For example, the chat table has 2 records: chat_id: 1 - user1_id: 1 - user2_id: 2 // chat_id: 2 - user1_id: 1 - user2_id: 3. What I expect to have is, for each row, the chat_id, the user_id, name, and surname for the user_id #2 and #3, but this returns an empty response. I'm still trying other SQL requests atm. If you only care for the user2_id why are you joining user on chat with a.user1_id? Because I don't know in which column the user_id has been stored at the chat creation (if the user #1 starts the chat, it will be stored in user1_id; if he doesn't start it but receives a message, it will be stored in user2_id). sorry I don't understand what you are trying to achieve. I think if you post sample data and desired results in your question, you will get an answer. Ok I'll try to be more clear ...
as you have seen in my model, in the chat table, I have 2 references to "user_id" on the user table. When I start a chat, for example between the user #1 and #2, the user #1 id will be stored in the user1_id column and the user #2 in the user2_id column in the chat table. But it's possible that the user #1 is stored in the user2_id column, and the user #2 in the user1_id column in the chat table. WHAT I want to do is to get all the information about the other user + the chat_id of the chat the authenticated user has started with this other user... whoever starts first is in user1_id? Yes, that is it. so in your query, you want all records in the chat table where the user with the id 1 is in there, doesn't matter if they're the ones who started the chat or not, and you want to get the other user's info? Yes, I want the other user's info, and the chat_id of the chat between them. You can use UNION to get both results as follows:

select a.user1_id, a.chat_id, user.fname, user.lname
from user
join chat a on a.user2_id = user.user_id and a.user1_id = 1
union
select b.user2_id, b.chat_id, user.fname, user.lname
from user
join chat b on b.user1_id = user.user_id and user2_id = 1

I just added the other user_id in the select but that perfectly works, thank you very much. I didn't think about using a UNION because I'm not used to it, but it seems to be efficient. You are welcome. If you are satisfied with the answer, please accept it by clicking the check. Thanks.
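The UNION query above can be exercised end-to-end with an in-memory SQLite database. The sample users, and the choice of selecting the other user's id in both branches (the tweak the asker mentions making), are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE user (user_id INTEGER PRIMARY KEY, fname TEXT, lname TEXT);
    CREATE TABLE chat (chat_id INTEGER PRIMARY KEY, user1_id INTEGER, user2_id INTEGER);
    INSERT INTO user VALUES (1, 'Alice', 'A'), (2, 'Bob', 'B'), (3, 'Carol', 'C');
    -- user 1 started chat 1 with user 2; user 3 started chat 2 with user 1
    INSERT INTO chat VALUES (1, 1, 2), (2, 3, 1);
""")

rows = cur.execute("""
    SELECT a.user2_id AS other_id, a.chat_id, user.fname, user.lname
    FROM user JOIN chat a ON a.user2_id = user.user_id AND a.user1_id = 1
    UNION
    SELECT b.user1_id, b.chat_id, user.fname, user.lname
    FROM user JOIN chat b ON b.user1_id = user.user_id AND b.user2_id = 1
""").fetchall()

print(sorted(rows))   # [(2, 1, 'Bob', 'B'), (3, 2, 'Carol', 'C')]
```

Both chats involving user #1 come back with the other participant's details, regardless of who started the chat.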
How to define a generic anonymous function type in TypeScript? Is it possible to have a generic anonymous function type? I was reading this article and found this piece of code:

import { Eq } from 'fp-ts/Eq'

export const not = <A>(E: Eq<A>): Eq<A> => ({
  equals: (first, second) => !E.equals(first, second)
})

Is the not function here even valid TypeScript syntax? Yup, it's valid. Not sure what the question is: https://www.typescriptlang.org/play?#code/C4TwDgpgBAogjgHgCoD4oF4oG8CwAoKKCOAVwEMAbAZwC4oAKAMzqQBooqIBjAewDsAJiwCUGNACMePChDJ98AX3z4IADzA8ATsCi8+VHXx47MCAIKsU9GHXjmUw24jNp0aergJFSlWg0YAlpoG7Jx6AqJuUACEMAB0xOTUTEEhHNz8EYrCQA @TitianCernicova-Dragomir Thank you. In your example in the playground you used a comma in your generic type parameter: <A,>. Without it, it seems to be a syntax error. What is that comma for? In TSX files <T> is interpreted as a JSX tag instead of a generic parameter list. The , forces it to be a generic parameter list. The playground works in TSX mode. This code is perfectly fine. It's (almost) the equivalent of this generic function, but with an arrow function definition:

function not2<A>(E: Eq<A>): Eq<A> {
  return {
    equals: function (first, second) {
      return !E.equals(first, second);
    }
  };
}
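To see the generic arrow function in action, here is a self-contained sketch with a local Eq interface standing in for the fp-ts import (this interface is an assumption modeled on the snippet, not the actual fp-ts definition):

```typescript
interface Eq<A> {
  equals: (first: A, second: A) => boolean;
}

// Generic arrow function: the type parameter list comes right before
// the value parameter list. The trailing comma in <A,> disambiguates
// it from a JSX tag in .tsx files; in plain .ts files <A> also works.
const not = <A,>(E: Eq<A>): Eq<A> => ({
  equals: (first, second) => !E.equals(first, second),
});

const eqNumber: Eq<number> = { equals: (a, b) => a === b };
const neqNumber = not(eqNumber);

console.log(neqNumber.equals(1, 1)); // false
console.log(neqNumber.equals(1, 2)); // true
```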
WARNING: unrecognized options: --disable-netaccessor-libcurl I am trying to install Xerces-C for my Shibboleth 2 SP following this guide: https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPLinuxSourceBuild But when I run:

./configure --prefix=/opt/shibboleth-sp --disable-netaccessor-libcurl

I get this warning:

WARNING: unrecognized options: --disable-netaccessor-libcurl
[...]
config.status: creating src/xercesc/util/Xerces_autoconf_config.hpp
config.status: src/xercesc/util/Xerces_autoconf_config.hpp is unchanged
config.status: executing depfiles commands
config.status: executing libtool commands
config.status: executing libtool-rpath-patch commands
configure: WARNING: unrecognized options: --disable-netaccessor-libcurl
configure:
configure: Report:
configure: File Manager: POSIX
configure: Mutex Manager: POSIX
configure: Transcoder: icu
configure: NetAccessor: curl
configure: Message Loader: inmemory

I tried ignoring it and ran make and make install, but it does not install and I get something like:

make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/home/user/opt/xerces-c-3.1.1/samples'
make[1]: Leaving directory `/home/user/opt/xerces-c-3.1.1/samples'
make[1]: Entering directory `/home/user/opt/xerces-c-3.1.1'
make[2]: Entering directory `/home/user/opt/xerces-c-3.1.1'
make[2]: Nothing to be done for `install-exec-am'.
test -z "/home/user/opt/shibboleth-sp/lib/pkgconfig" || /bin/mkdir -p "/home/user/opt/shibboleth-sp/lib/pkgconfig"
/usr/bin/install -c -m 644 xerces-c.pc '/home/user/opt/shibboleth-sp/lib/pkgconfig'
make[2]: Leaving directory `/home/user/opt/xerces-c-3.1.1'
make[1]: Leaving directory `/home/user/opt/xerces-c-3.1.1'

What does that mean? Thank you! It means it doesn't recognize that option. Did you try not passing it? Maybe the instructions are out of date with respect to the package you downloaded.
This page: https://wiki.shibboleth.net/confluence/display/OpenSAML/Xerces-C indicates you might want --enable-netaccessor-socket instead. Is there a reason you don't want OpenSSL included? Thank you! I tried passing it and running make and make install, but it does not install. @CarlNorum I have tried with ./configure --enable-netaccessor-socket --prefix=/opt/shibboleth-sp but it doesn't work. So do I have to install OpenSSL to resolve this issue? I have the same question; so I'm asking on Shib Users<EMAIL_ADDRESS> --enable-netaccessor-socket worked for me. Also, the warning did not prevent my build from working.
Netflow records with Destination Ports 1025, 257 and Protocol as ipv6-icmp I have some Netflow records from a bunch of routers. The records contain IPv6 flows and there are entries with protocol as ipv6-icmp and destination port values of 0, 1025 and 257. I know from this link that the value of 0 for ipv6-icmp in Netflow indicates an echo reply. Is there any resource to find the meaning of the ipv6-icmp-1025 and ipv6-icmp-257? RFC 4443 explains ICMPv6 Types and Codes, but 0, 1025 and 257 are not ICMPv6 Types. Also, ICMP does not use ports, so I am not sure what you mean by port numbers. I know that ICMPv6 does not use (TCP/UDP) port numbers and ICMPv6 has its own types. However, in my Netflow dataset, it seems that Netflow is overloading the destination port number field, which is normally used for TCP/UDP flows, to indicate the ICMPv6 message type. At first, I also thought that those values are the ICMPv6 message types, as indicated in the RFC, but no. Please have a look at the link that I've posted in the question. ICMP and ICMPv6 do not have port numbers. Possibly Netflow is using 0 to indicate this is not a UDP or TCP flow. Standard types and codes are in IANA registries. In v6, type 0 actually is reserved, and would be invalid on the wire. And as these are 8-bit fields, they only go up to 255. These do not map obviously to ICMP. Possibly some other logging or packet capture would be better at analyzing it. I think Netflow is overloading the destination port to represent the ICMP type and code, and the format is dPort = (icmp_type << 8) + icmp_code. Here's an article that supports this: Detecting Worms and Abnormal Activities with NetFlow, Part 2. However, 1025 (type: 4, code: 1) and 257 (type: 1, code: 1) don't seem to map to valid ICMP messages, so maybe there's other encoding logic behind it.
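With the parentheses made explicit, decoding is just a byte split. A small sketch of that interpretation (the encoding itself is inferred from the linked article, not guaranteed by every NetFlow exporter):

```python
def decode_icmp_dport(dport):
    """Split NetFlow's overloaded destination-port field into (type, code)."""
    return dport >> 8, dport & 0xFF

for dport in (0, 257, 1025):
    print(dport, decode_icmp_dport(dport))
# 0    -> (0, 0)
# 257  -> (1, 1)
# 1025 -> (4, 1)
```

So the observed values 257 and 1025 decode to type 1 / code 1 and type 4 / code 1, which is exactly why they do not line up with any single ICMPv6 type from RFC 4443 when read as a whole number.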
Seconds converter

Console.Write("Type in the number of seconds: ");
int total_seconds = Convert.ToInt32(Console.ReadLine());
int hours = total_seconds / 3600;
total_seconds = total_seconds % (hours * 3600);
int minutes = total_seconds / 60;
total_seconds = total_seconds % (minutes * 60);
int seconds = total_seconds;
Console.WriteLine("Number of hours: " + hours + " hours" + "\nNumber of minutes: " + minutes + " minutes" + "\nNumber of seconds: " + seconds + " seconds");
Console.ReadLine();

Managed to create a program that converts a total amount of seconds into its respective hours, minutes and seconds. I am having a problem though, as I want the program to also be able to show the amount of hours, minutes etc. for a total amount of seconds below 3660, which doesn't seem to be possible. Any ideas how to fix this issue? What language is this? You should add a tag for the language. I'm not sure the question is clear: do you mean that you want to convert a number into hours, mins and seconds, but for example, 66 to show 1 minute 6 seconds, OR that you want 3666 to show 1hr, 61 minutes, 3666 seconds? I'm new to this site, but I will remember to put a language tag in my future questions. This particular piece of code is C#. EDIT: Chowlett's answer does this more elegantly - use his code.
This seems to work for me (with the if statements I make sure that I don't get a division by zero in case hours or minutes are zero):

int total_seconds = 3640;
int hours = 0;
int minutes = 0;
int seconds = 0;

if (total_seconds >= 3600)
{
    hours = total_seconds / 3600;
    total_seconds = total_seconds % (hours * 3600);
}

if (total_seconds >= 60)
{
    minutes = total_seconds / 60;
    total_seconds = total_seconds % (minutes * 60);
}

seconds = total_seconds;

Console.WriteLine("Number of hours: " + hours + " hours" + "\nNumber of minutes: " + minutes + " minutes" + "\nNumber of seconds: " + seconds + " seconds");
Console.ReadLine();

The problem's in the lines where you take the modulus (the % operator). You want the number of seconds left after removing all the whole hours, which is total_seconds % 3600. The code you have, if you have below 3600 seconds, will try to do total_seconds % 0, which is a division by zero. Try the following:

int hours = total_seconds / 3600;
total_seconds = total_seconds % 3600;
int minutes = total_seconds / 60;
total_seconds = total_seconds % 60;
int seconds = total_seconds;

@user1696992 - No worries. And welcome to Stack Overflow! You can also "accept" this answer as correct by clicking the green check-mark beside it.
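The constant-modulus fix can be sketched compactly in Python, where divmod makes the two steps explicit (Python is used here only so the arithmetic is easy to check; the logic mirrors the corrected C#):

```python
def hms(total_seconds):
    # Take the modulus by the constants 3600 and 60, never by
    # hours*3600 or minutes*60, so inputs below one hour can
    # never trigger a division by zero.
    hours, rem = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    return hours, minutes, seconds

print(hms(3640))  # (1, 0, 40)
print(hms(66))    # (0, 1, 6)
print(hms(59))    # (0, 0, 59)
```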
Import CSV File Currently, we have a DTS package on SQL Server 2000 that imports a file called suicoweb.csv. The DTS csv file properties pages look like this. Here is a sample line from the CSV file:

500071;343497;260712;|Some Text; : employer : some more text|;29

The columns are separated by semicolons (;), the text qualifier is a pipe (|), and the lines end with a carriage return. Using Mr. Brownstone's answer, I created a format file using the following command:

bcp databasename.dbo.tablename format nul -c -f d:\formatfile.fmt -T

Here is the output of that file:

9.0
18
1  SQLCHAR 0 510 "|"    1  web1numdoss
2  SQLCHAR 0 510 "|"    2  web1dem
3  SQLCHAR 0 510 "|"    3  web1def
4  SQLCHAR 0 510 "|"    4  web1douv
5  SQLCHAR 0 510 "|"    5  web1dversjj
6  SQLCHAR 0 510 "|"    6  web1dversmm
7  SQLCHAR 0 510 "|"    7  web1dversaa
8  SQLCHAR 0 510 "|"    8  web1mntvers
9  SQLCHAR 0 510 "|"    9  web1soldos
10 SQLCHAR 0 510 "|"    10 web1dactjj
11 SQLCHAR 0 510 "|"    11 web1dactmm
12 SQLCHAR 0 510 "|"    12 web1dactaa
13 SQLCHAR 0 510 "|"    13 web1actnat
14 SQLCHAR 0 510 "|"    14 web1actlib
15 SQLCHAR 0 510 "|"    15 web1archdoss
16 SQLCHAR 0 510 "|"    16 web1numdem
17 SQLCHAR 0 510 "|"    17 web1numdef
18 SQLCHAR 0 510 "\r\n" 18 Col018

I replaced \t with a pipe (|) because that's what I would like to use as the text qualifier. Then I used this BULK INSERT command to import:

BULK INSERT ScpCambron.dbo.suicoweb1
FROM 'D:\SqlFtp\scpcambron\suicoweb1.txt'
WITH (
    FIELDTERMINATOR = ';',
    ROWTERMINATOR = '\n',
    FORMATFILE='D:\SqlFtp\scpcambron\suicoweb1.fmt'
)

When I try to import the csv file with the BULK INSERT command I am getting the following error:

Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 1 (web1numdoss).

Any suggestions on how I should modify the format file to correctly import the csv file? Should I be using BCP instead to import the csv file into my table?
From what you are describing it sounds like you need to provide the BULK INSERT command a format file so it knows how to parse your data. You can use the bcp command to auto-generate one for you, however I have always had to edit them afterwards before they work properly. (Lots of testing, I am afraid.) The links below should be good starting points for you:

Format Files: http://msdn.microsoft.com/en-us/library/ms190393.aspx
BCP: http://msdn.microsoft.com/en-us/library/ms162802.aspx
BULK INSERT: http://msdn.microsoft.com/en-us/library/ms188365.aspx
EXAMPLE: http://msdn.microsoft.com/en-us/library/ms178129.aspx

I hope this helps you.

EDIT Based on the extra information that you have added above, it looks like you have set all field terminators to pipes, which is not what you want. You will need to edit the format file on a per-column basis. For instance, in the example above your first two columns are delimited by a semicolon. The third would need to be delimited by ";|". Here is an example of what I mean for the first few columns:

9.0
18
1  SQLCHAR 0 510 ";"    1  web1numdoss
2  SQLCHAR 0 510 ";"    2  web1dem
3  SQLCHAR 0 510 ";|"   3  web1def
...
18 SQLCHAR 0 510 "\r\n" 18 Col018
Angular Data table not being filled with response object from API I'm trying to fill my data table using an API call to get my data. However, it doesn't seem to want to fill my table, even though a console.log shows the response. Here is the code that I use to try and fill my table:

this.beheerService.getAccessPoints().subscribe((result => {
  if (!result) {
    return;
  }
  console.log(result);
  this.dataSource = new MatTableDataSource(result);
}))

And as requested, the template code:

<div id="tableContainer">
  <h1 class="mat-h1">
    <fa-icon [icon]="faTicketAlt"></fa-icon> Xirrus Accesspoints
  </h1>
  <div class="mat-elevation-z8">
    <table mat-table [dataSource]="dataSource" matSort>

      <!-- ID Column -->
      <ng-container matColumnDef="id">
        <th mat-header-cell *matHeaderCellDef mat-sort-header> ID </th>
        <td mat-cell *matCellDef="let row"> {{row.id}} </td>
      </ng-container>

      <!-- Name Column -->
      <ng-container matColumnDef="name">
        <th mat-header-cell *matHeaderCellDef mat-sort-header> Name </th>
        <td mat-cell *matCellDef="let row"> {{row.name}} </td>
      </ng-container>

      <tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
      <tr mat-row *matRowDef="let row; columns: displayedColumns;"></tr>
    </table>

    <mat-paginator [pageSizeOptions]="[5, 10, 25, 100]" aria-label="Select page of users"></mat-paginator>
  </div>
</div>

I can't figure out where it's going wrong. I hope someone could shed some light and help me out. What is supposed to be passed as a parameter into the new MatTableDataSource( call? Is it the response array with 1065 entries? Looks like you want this.dataSource = new MatTableDataSource(result.response); @OvidijusParsiunas Yes, response is supposed to be used as the source. What does console.log(this.dataSource) from outside the subscription print? Have you got displayedColumns = ['id', 'name']; etc. in your .ts file? @tony That returns me a MatTableDataSource but the data is empty. @MattSaunders Yes, I have both in the .ts file.
I believe you need to set your data source as follows:

dataSource = new MatTableDataSource();

and then in your subscription function:

this.dataSource.data = response;

Can you also confirm you have [dataSource]="dataSource" in the table tag in your template - see this basic example? I'd recommend adding your template code to your question. I've added the template code in the question. Unfortunately your suggestion didn't work. As per the Stackblitz in my answer, you should be able to get the table working without using MatTableDataSource, just using the array of results. I'd forget about MatTableDataSource for now and focus on getting the table displaying with an array. Then revisit MatTableDataSource if it's really needed. Check this link: https://stackblitz.com/edit/angular-dvrbmv?file=src/app/table-basic-example.ts MatTableDataSource is not needed. (Sorry - fixed link) Ts:

export class ExampleComponent implements OnInit, AfterViewInit {
  dataSource: MatTableDataSource = new MatTableDataSource([]);
  isDataSourceLoaded = false;

  constructor(private dataService: DataService) { }

  ngOnInit(): void {
    this.getData();
  }

  ngAfterViewInit(): void {
    if (this.isDataSourceLoaded === false) {
      this.getData();
      if (this.dataSource.data.length !== 0) {
        this.isDataSourceLoaded = true;
      } else {
        this.isDataSourceLoaded = false;
      }
    }
  }

  getData(): void {
    this.dataService.getData().subscribe((result => {
      if (!result) {
        return;
      }
      this.dataSource = new MatTableDataSource(result);
    }));
    if (this.dataSource.data.length === 0) {
      this.isDataSourceLoaded = false;
    }
  }
}

Html:

<table mat-table [dataSource]="dataSource" class="mat-elevation-z8"> ... </table>

My wild guess is that you are using changeDetection: ChangeDetectionStrategy.OnPush. This means that your component will detect changes only when @Input values are altered (you may not even have them). You receive data but it cannot be detected outside the subscription because you probably don't force change detection.
Try importing

import { ChangeDetectorRef } from '@angular/core';

inject it in the constructor

constructor(private ref: ChangeDetectorRef) {}

and force change detection like below:

this.beheerService.getAccessPoints().subscribe((result => {
  if (!result) {
    return;
  }
  this.dataSource = new MatTableDataSource(result);
  this.ref.detectChanges(); // <<<<< force change detection here
}))

console.log(this.dataSource); // log dataSource from outside the subscription and see if it has data

Also, can't you just subscribe to the result from the template using the async pipe? This would prevent all the above written stuff.
Postgre regexp does not work in perl code using dbi I have this simple data in a Postgres table (the data type is character varying): 48, 2, L, 4XL, 25.0, 25, 7.0. I have this SQL query with a regexp match (I want to match only numeric-like values such as 7.0 or 48):

SELECT * FROM table WHERE ss.sizecode ~ E'^\\s*[\\d\\.]+\\s*$'

This works perfectly in the command line client psql, but does not work in Perl code:

my $sth = $dbh->prepare(
    q(SELECT * FROM table WHERE ss.sizecode ~ E'^\\s*[\\d\\.]+\\s*$')
);
$sth->execute;
while ( my @row = $sth->fetchrow_array() ) {
    # no data I want
}

Please show the exact Perl code you're using, the output you're getting, the reason you think it's not working, and any errors you're getting. Remove the extra backslashes, perhaps? And be watchful of variable interpolation in $'. Use q() to quote the string, not double quote ". Any reason for using the E'' string literal? Just use the normal string literal, and you won't have to worry about escaping \. The string literal q(SELECT * FROM table WHERE ss.sizecode ~ E'^\\s*[\\d\\.]+\\s*$') produces the string SELECT * FROM table WHERE ss.sizecode ~ E'^\s*[\d\.]+\s*$' To get SELECT * FROM table WHERE ss.sizecode ~ E'^\\s*[\\d\\.]+\\s*$' you need q(SELECT * FROM table WHERE ss.sizecode ~ E'^\\\\s*[\\\\d\\\\.]+\\\\s*$')
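The character class itself can be checked outside the database. Here is the same pattern applied in Python to the sample column values (Python's regex syntax accepts this pattern unchanged, so it shows exactly which rows the query should keep):

```python
import re

# Same pattern as the SQL query: optional whitespace around a run
# of digits and dots.
numeric_like = re.compile(r'^\s*[\d.]+\s*$')

values = ["48", "2", "L", "4XL", "25.0", "25", "7.0"]
matched = [v for v in values if numeric_like.match(v)]
print(matched)   # ['48', '2', '25.0', '25', '7.0']
```

If the Perl program returns a different set, the pattern reaching the server has been mangled by quoting, which points back at the q() escaping explained above.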
Can Visual Studio 2010 help be viewed inside Visual Studio? I just installed VS2010 and the first big disappointment was the lack of any obvious way to view help within Studio itself. I very much like the new feature of updating help from the internet (I prefer to use local help because it's faster), but the separate browser window is simply distracting. I would like to view help within another tab inside the Studio itself. Also, the lack of contents and index docking-bars like in the old Studio is annoying. Can these be brought back? Google gave me no results. :( I agree with you. I miss the old MSDN, and don't like the fact it starts its own web server to serve up help, and everything seems to take twice as long as it did before. Oh well. I'm not sure if this is exactly what you wanted. This is a VS extension to let you use local help in VS. HelpViewerKeywordIndex In the code below you must replace the break instruction with try, catch and throw.

using System;
using System.Text;

namespace UGVPI
{
    class Program
    {
        static void Main(string[] args)
        {
            long p;
            p = Int64.Parse(Console.ReadLine());
            long q = 2;
            while (q < p)
            {
                if (p % q == 0) break;
                q = q + 1;
            }
            if (q < p)
                Console.WriteLine("Not Prime");
            else
                Console.WriteLine("Prime");
        }
    }
}
WCF request response lifecycle I am new to WCF and trying to understand a few things. Is every request/response handled in separate mutex threads/process host? Can same thread/process handle multiple requests? Is there any request queuing involved? If I have global/static variables is their scope limited to the given request response sequence? Thanks in advance. In WCF, there are three methods for instance management of service objects - PerCall, PerSession and Single. This service behavior attribute is known as InstanceContextMode. PerCall - Creates a new service instance every client method call. Works if your service is stateless. PerSession - Creates a new service instance for each new client proxy. Works if you need to keep state information between calls from the same client. Single - Creates a single service instance which is shared among all clients. Works if you need to share global data throughout your service. For the threading part of your question, there is a service behavior attribute known as ConcurrencyMode which handles the details of the threading nature of each service instance. The options are Single, Multiple and Reentrant. Single - The service instances are single-threaded. Multiple - The service instances are multi-threaded. You must handle the synchronization and state consistency of the service object. Reentrant - The service instances are single-threaded, but the service accepts calls when it calls on another service. This requires a bit of overhead in maintaining the state of the service object and handling callbacks on the service. These two factors combine to control the behavior of your service instances. For example, if you have InstanceContextMode and ConcurrencyMode set to Single and your service receives new messages while the instance is already handling a call, then these messages must wait until your service finishes handling the in progress call before handling the next message (during which time the message may timeout). 
Personally, I have never had any real need for anything besides single-threaded, per call service instances. But your requirements may be dramatically different than my own. There are some pretty good articles linked here on Instance Management and Concurrency in WCF.
How to detect Database Changes with SqlDependency and c# I want to fire a method each time the database detects a change. I tried this in a Console Application: (Service Broker in the database is active) using System; using System.Data; using System.Data.SqlClient; namespace SqlDependencyExample { class Program { static void Main(string[] args) { string connectionString = "xxx"; SqlConnection connection = new SqlConnection(connectionString); connection.Open(); using (SqlCommand command = new SqlCommand( "SELECT [ID], [Name] FROM [dbo].[Users]", connection)) { SqlDependency dependency = new SqlDependency(command); SqlDependency.Start(connectionString); dependency.OnChange += new OnChangeEventHandler(OnDependencyChange); using (SqlDataReader reader = command.ExecuteReader()) { while (reader.Read()) { Console.WriteLine("ID: {0}, Name: {1}", reader.GetInt32(0), reader.GetString(1)); } } } } private static void OnDependencyChange(object sender, SqlNotificationEventArgs e) { Console.WriteLine("Database change detected."); //Fire something } } } But the problem is, the application starts, instantly writes "Database change detected." and closes, and when I'm adding a row in my database it is not handling "OnDependencyChange"... The console is writing every row which exists, so the connection is working, but it detects no new rows in the database. Make sure that the Service Broker is enabled for the database It is enabled.. There is nothing that prevents the program from stopping after it executes once. You can add a Console.ReadLine() statement before the closing bracket of the Main function. I've added Console.ReadLine(). Now I've started it again, let it run, added a row in the database but nothing happened? It's again not handling the OnChange event
ASP.NET 5 / MVC 6 AppSettings [TL;DR]: How do I access AppSettings data without using dependency injection in MVC 6? I'm trying to reach some app setting data from a _Layout.cshtml in my MVC 6 app. I understand (and have implemented) the "Options" pattern as described at http://docs.asp.net/en/latest/fundamentals/configuration.html#using-options-and-configuration-objects. It works well when I need to inject some settings into specific controllers, but I can't quite work out how to inject Options into a shared _Layout.cshtml, since it doesn't have an associated controller. Is there a way to access Configuration data without using DI? I think I worked it out. In the view, the following will work, as long as the Options service is configured as per the above link. @inject Microsoft.Extensions.OptionsModel.IOptions<MySettingsClass> Options I recently wrote a blog post on this. Please have a look here: https://neelbhatt40.wordpress.com/2015/12/15/getting-a-configuration-value-in-asp-net-5-vnext-and-mvc-6/
Button in View Should Reset Array in ModelController But Doesn't I am attempting to have one of my views present the user with the option to reset an array that they have edited to its original form. I am currently attempting to do this using NSNotificationCenter. The current process: User flips through a series of cards, each of which is pulled from the array experiments (which has been shuffled beforehand but I don't think that's relevant to this issue) and deletes ones they don't want to see again when the array loops back around. User decides they want to start again with a fresh array. User clicks "Home" which brings them to AdditionalMenuViewController.swift which is set up with NSNotificationCenter with an @IBAction that triggers the notification that the ModelController will listen to. Here's AdditionalMenuViewController.swift. let mySpecialNotificationKey = "com.example.specialNotificationKey" class AdditionalMenuViewController: UIViewController { override func viewDidLoad() { super.viewDidLoad() NSNotificationCenter.defaultCenter().addObserver(self, selector: "resetArray", name: mySpecialNotificationKey, object: nil) } @IBAction func notify(){ NSNotificationCenter.defaultCenter().postNotificationName(mySpecialNotificationKey, object: self) } func resetArray() { println("done") } } That should trigger the listener in ModelController to reset the array and then allow the user to continue using the app but with the array reset. Here are the (I think!) 
relevant parts of ModelController.swift: class ModelController: NSObject, UIPageViewControllerDataSource { var experiments = NSMutableArray() var menuIndex = 0 func shuffleArray(inout array: NSMutableArray) -> NSMutableArray { for var index = array.count - 1; index > 0; index-- { // Random int from 0 to index-1 var j = Int(arc4random_uniform(UInt32(index-1))) // Swap two array elements // Notice '&' required as swap uses 'inout' parameters swap(&array[index], &array[j]) } return array } override init() { super.init() // Create the data model. if let path = NSBundle.mainBundle().pathForResource("experiments", ofType: "plist") { if let dict = NSDictionary(contentsOfFile: path) { experiments.addObjectsFromArray(dict.objectForKey("experiments") as NSArray) } } println(experiments) shuffleArray(&experiments) println(experiments) NSNotificationCenter.defaultCenter().addObserver(self, selector: "resetArray", name: mySpecialNotificationKey, object: nil) } func resetArray() { if let path = NSBundle.mainBundle().pathForResource("experiments", ofType: "plist") { if let dict = NSDictionary(contentsOfFile: path) { experiments.addObjectsFromArray(dict.objectForKey("experiments") as NSArray) } } shuffleArray(&experiments) } Currently it does create a new array but it is double the size of the original which tells me its running both the original and the reset functions. It also breaks all the pageViewController functionality that you can find in the default Page-Based Application in Xcode. What am I missing here? How can I both reset the array and then drop the user back into the flow of the Page-Based App? You never clear out your array before adding objects to it in resetArray(). Add a line, experiments.removeAllObjects() before you call addObjectsFromArray. rdelmar's suggestion worked perfectly. I now clear the array and then fill it with a freshly shuffled one.
javascript appendChild not working Here is the code I have, I am trying to append the elements in the array to a drop down select box. The code works fine until the appendChild method. I can't figure out why that one line is not working. Here is the code </head> <body> <h1> Eat Page</h1> <p id="test">Hi</p> <select id="CusineList"></select> <script type="text/javascript"> var cuisines = ["Chinese","Indian"]; var sel = document.getElementById('CuisineList'); for(var i = 0; i <cuisines.length; i++){ var optionElement = document.createElement("option"); optionElement.innerHTML = cuisines[i]; optionElement.value = i;//cuisines[i]; //document.getElementById("test").innerHTML = cuisines.length; sel.appendChild(optionElement); } </script> <p> When </p> </body> </html> Are you getting any errors? How do you know it's not working? What does "not working" mean? I believe there's a different way of going about appending elements when it comes to select elements... You have a spelling mistake. Your id is CusineList, but you use CuisineList when selecting it. Besides that, your code works. There's a typo on var sel = document.getElementById('CuisineList'); This should be var sel = document.getElementById('CusineList'); Or change your html. From <select id="CusineList"></select> To <select id="CuisineList"></select> When we append a node using JavaScript, after setting the node's attributes/properties we first append the node to its related parent, and after appending it we can set its content or related inner HTML/text. 
Here is the solution for the above issue:

var cuisines = ["Chinese", "Indian"];
var sel = document.getElementById('CusineList');
var optionElement;
for (var i = 0; i < cuisines.length; i++) {
    optionElement = document.createElement("option");
    optionElement.value = i;
    sel.appendChild(optionElement);
    optionElement.innerHTML = cuisines[i];
    document.getElementById("test").innerHTML = cuisines.length;
}

I have created a bin with the solution on http://codebins.com/codes/home/4ldqpcn
Which environment variable specifies current path? I'm communicating with a Debian computer via the SSH2 PHP extension. Their (not very well) documented function ssh2_exec states that its fourth argument is an associative array of name/value pairs to set in the target environment. I want to operate upon a different path than ~ to perform ls on other directories (as well as making communication more comfortable). But what should I set? ssh2_exec($connection, "ls", NULL, array("???" => "/var/www/"));
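For what it's worth, no environment variable controls the working directory — PWD merely reflects it, and many SSH servers ignore client-supplied environment variables unless whitelisted via AcceptEnv. A common workaround (my own suggestion, not from this thread) is to prepend a cd to the command string instead, e.g. ssh2_exec($connection, "cd /var/www && ls");. The idea in plain shell terms:

```shell
# Sketch: run a command "in" another directory by prefixing it with cd.
# /tmp/cd_demo and hello.txt are throwaway names for this demonstration.
rm -rf /tmp/cd_demo
mkdir -p /tmp/cd_demo
touch /tmp/cd_demo/hello.txt

# Equivalent in spirit to ssh2_exec($connection, "cd /tmp/cd_demo && ls"):
output=$(cd /tmp/cd_demo && ls)
echo "$output"
```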
How to get back focus on my Swing Java GUI running on a Solaris machine? Hi guys, I am working on a Java Swing GUI on the Solaris platform. I want my application to have the focus at all times and change only when I click a button. I have to do it for a requirement which might actually seem very stupid... Please let me know if there are any simple solutions for this one :P From How to Use the Focus Subsystem: Exactly how a window gains the focus depends on the windowing system. There is no foolproof way, across all platforms, to ensure that a window gains the focus. On some operating systems, such as Microsoft Windows, the front window usually becomes the focused window. In these cases, the Window.toFront method moves the window to the front, thereby giving it the focus. However, on other operating systems, such as Solaris™ Operating System, the window manager may choose the focused window based on cursor position, and in these cases the behavior of the Window.toFront method is different. Keeping that in mind, you'll need to find a way to give your application focus in Java that works with your window manager. You can try setting the window as "always on top", but again, it is still up to the window manager to respect that wish. If you can do that, you can schedule a TimerTask that will periodically request focus to your window. This is incredibly annoying though, and suggesting it makes me feel dirty.
What's the recommended way of doing a master/detail page using ASP.NET MVC? Does anyone have any suggestions/ best practices for doing master/ detail pages using the ASP.NET MVC Framework? Background I've got a page of Client information, under each one I need to display 0 or more related records about that client. The database schema looks something like this: Clients -- one to many --> ClientData I'm currently using LINQ for the data classes which is being populated using stored procedures. So to get the client information I'd write: var clientQuery = from client in dataContext.GetClients () orderby client.CompanyName select client; When each client is rendered in the view, I need to display the related data, so I'd need to call this for each client found: var clientDataQuery = from clientData in dataContext.GetClientData (client.ClientId) orderby clientData.CreatedDate select clientData; Question What's the best way of achieving this? Use the ViewData and store each query Use the typed model of the view (perhaps using a class to represent a client with the related information in a list) Other... Thanks, Kieron Use an IDictionary<Client, IEnumerable<ClientData>>, which you should then wrap into a viewdata class. This will allow you to do the filtering/sorting at the controller level while maintaining a strict correlation between the client and its data. Example: public class ClientController{ public ViewResult ClientList(){ var clientsAndData = new Dictionary<Client,IEnumerable<ClientData>>(); var clients = from client in dataContext.GetClients() orderby client.CompanyName select client; foreach( var client in clients ) clientsAndData.Add( client, ( from clientData in dataContext.GetClientData(client.ClientId) orderby clientData.CreatedDate select clientData ).ToList() ); return View( new ClientListViewData{ Clients = clientsAndData } ); } } public class ClientListViewData{ public IDictionary<Client,IEnumerable<ClientData>> Clients{ get; set; } }
How can I get an array of random keys from an object, as opposed to a full list? Rather than getting, for example, the first 5 keys from an object with 10 keys: var keys = Object.keys(brain.layers[this.layer]).slice(0, 5); I'd like to get 5 of the keys at random. I know of bulky, long, roundabout ways of doing it, such as something like this: function getRandomNumber(n1, n2) { ... } var list = []; var count = 0; function choose(arr, count, list, max) { for (let prop in arr) { var choice = Math.round(getRandomNumber(0, 1)); if (choice === 1 && count < max && !list.includes(arr[prop])) { list.push(arr[prop]); count++; } } if (count >= max) { return list; } else { choose(arr, count, list, max) } } But I was wondering if there's a simpler, more elegant solution. var keys = Object.keys(brain.layers[this.layer]).sort(() => Math.random() - 0.5).slice(0, 5); @JaromandaX <3! although - var keys = Object.keys(brain.layers[this.layer]).sort(() => Math.floor(Math.random() * 3) - 1).slice(0, 5); seems to be "more" random for some reason To get a truly (pseudo)random sort, use something like an in-place random sort. let arrRand=(a,i=a.length)=>{while(i){a.push(a.splice(Math.random()*i--|0,1)[0])}} let keys = Object.keys(brain.layers[this.layer]) arrRand(keys) keys=keys.slice(0,5) This takes an array and the array length, though in a modern browser you can use a default argument for that. Warning: it modifies the passed-in array.
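For an unbiased pick, the usual tool is a Fisher–Yates shuffle rather than .sort(() => Math.random() - 0.5), since a sort with an inconsistent random comparator is not guaranteed to produce a uniform ordering. A sketch with a helper name of my own (randomKeys, not from the thread):

```javascript
// Fisher-Yates: shuffle the array of keys, then take the first n.
function randomKeys(obj, n) {
  const keys = Object.keys(obj);
  for (let i = keys.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // random index in 0..i
    [keys[i], keys[j]] = [keys[j], keys[i]];       // swap
  }
  return keys.slice(0, n);
}

const layer = { a: 1, b: 2, c: 3, d: 4, e: 5, f: 6, g: 7, h: 8, i: 9, j: 10 };
console.log(randomKeys(layer, 5)); // five distinct keys, uniformly at random
```

Unlike the in-place arrRand above, this leaves the original object untouched, since Object.keys already returns a fresh array.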
my movies are not converting properly When I try converting a film it goes through the full process, it even says operation complete after burning, but the disks still turn out to be blank. I have used ConvertX and ImTOO AVI Convert to DVD. How can I solve this? How are you determining that the disks are still blank? When I try them on the computer it says the disk still has all of its free space, and when I try to play it, it tries to format; then I try it in a DVD player but it says blank disk. This may be a silly question. Are you sure you burnt the converted data to the disc? It is possible that ConvertXtoDVD only converted and saved the files on your hard disk and it didn't burn it. What program are you using and what options do you set? To make sure someone sees a comment add "@[username]" into the comment (e.g. @ChrisF) It would be helpful if you reference a http://www.videohelp.com/ or similar guide. This question is way too vague - too many variables and moving parts - to answer. I suggest following a guide from http://videohelp.com for the tools you have. While the guides do not always offer the "best" way to do something, the steps provided work.
Markov chain doesn't sum up to 1 Let $\{X_n\}$ be a Markov chain on $S=\{1,2,3,4,5,6\}$ with the matrix. Suppose we define a new sequence $\{Y_n\}$ by $$Y_n=\cases{1\quad X_n=1\vee X_n=2\\2\quad X_n=3\vee X_n=4\\3\quad X_n=5\vee X_n=6}$$ Is this a Markov chain for $a=1$? I thought of summing up and computing for example the first row of the stochastic matrix to better understand $\{Y_n\}$ so I did the following $$P(Y_n=1\mid Y_n=1)=P(X_n=1\vee X_n=2\mid X_n=1\vee x_n=2)=\frac{a+7}{10}\\P(Y_n=2\mid Y_n=1)=P(X_n=3\vee X_n=4\mid X_n=1\vee x_n=2)=\frac{3-a}{10}\\P(Y_n=3\mid Y_n=1)=P(X_n=5\vee X_n=6\mid X_n=1\vee x_n=2)=1$$but no matter the value of $a$, they sum up to $2$ and not to $1$, meaning $$P_{Y_n}=\left(\begin{array}{ccc} \frac{a+7}{10} & \frac{3-a}{10} & 1\\ \dots & \dots & \dots\\ \dots & \dots & \dots \end{array}\right) $$ Do I need to normalize the results? What am I doing wrong? Is checking that it's a Markov chain as simple as summing up each row, or do I need to show a specific path which doesn't meet the Markov property? You seem to think that $P(A\cup B\mid C\cup D)=P(A\mid C)+P(B\mid C)+P(A\mid A)+P(B\mid D)$. Not so. @Did I think you meant to write $P(A \mid D)$ instead of $P(A \mid A)$ in the third term on the RHS. @Did , why not? in our case $X_n$ can get only one value so the intersection is empty and the events are disjoint. So how then do I calculate these? Even more precisely, you are using $P(A\mid B\cup C)=P(A\mid B)+P(A\mid C)$. This identity is obviously wrong, in general and in the specific case at hand. (Unrelated: My guess is that you did not understand the posted solution, so why did you accept it?) @Did an intractable click. The answer wasn't accepted. (Sorry but the answer was accepted.) 
A hint that something is wrong with your approach is that (correcting some mistypings in your question) it leads to $$P(Y_{n+1}=1\mid Y_n=1)+P(Y_{n+1}=2\mid Y_n=1)+P(Y_{n+1}=3\mid Y_n=1)=2.$$ I understood my approach is incorrect but I don't understand how the answer proves it's Markov. Should I understand that "why not?" in your first comment is now moot? And if indeed you did not understand the answer posted (something I guessed, you will have noticed), then why accept it? It was a mistake. Let's don't moot. Now, Why his answer proves it? It does not, but it gives cogent hints. Coming back to this question after one month, it seems glaringly obvious that the OP did not understand the answer, one can wonder why it was accepted. @Did why you came to this question after a month? Why it's glaringly obvious? Did you understand the answer? Fix me if I'm wrong (then I misunderstood of course) but since $P(Y_2\mid X_1=i)=P(Y_2\mid X_2=i)$ this distribution equals to the one $Y_2\mid Y_1$ and since that can be done for the other rows, we get that the dependence between $Y_n$ and $Y_{n-1},\dots Y_1$ is equivalent to the dependence between $Y_n$ and $Y_{n-1}$? Comparing the distributions of $Y_2$ conditionally on $X_1$ and conditionally on $X_2$ is squarely irrelevant, sorry. (Unrelated: Please use @.) Let's write out the transition matrix of $X$ when $a = 1$: $$\mathcal P = \frac{1}{10} \begin{bmatrix} 1 & 3 & 0 & 1 & 5 & 0 \\ 2 & 2 & 1 & 0 & 1 & 4 \\ 6 & 0 & 1 & 1 & 2 & 0 \\ 6 & 0 & 1 & 1 & 0 & 2 \\ 3 & 0 & 4 & 1 & 0 & 2 \\ 3 & 0 & 1 & 4 & 2 & 0 \end{bmatrix}$$ Now split it up into $2 \times 2$ blocks, and observe for example that $$\begin{align*} \Pr[Y_2 = 1 \mid X_1 = 1] &= 0.4, \\ \Pr[Y_2 = 2 \mid X_1 = 1] &= 0.1, \\ \Pr[Y_2 = 3 \mid X_1 = 1] &= 0.5. \end{align*}$$ But in fact, the same thing is true if $X_1 = 2$; thus $Y_2 \mid Y_1 = 1$ follows the above distribution. And we can also see that the same is true for the other two pairs of rows. 
Therefore, $Y$ is also a Markov chain, with transition matrix $$\mathcal P^* = \frac{1}{10} \begin{bmatrix} 4 & 1 & 5 \\ 6 & 2 & 2 \\ 3 & 5 & 2 \end{bmatrix}.$$ Why $Y_2\mid Y_1=1\Rightarrow$ $Y$ is Markov? That's not what I said. What I said is that because the distribution of $Y_2 \mid (X_1 = 1)$ is the same as the distribution of $Y_2 \mid (X_1 = 2)$, and because $Y_1 = 1$ if $X_1 = 1$ or $X_1 = 2$, then the distribution of $Y_2 \mid Y_1 = 1$ is the same as the above. And because this is also true of the other four rows (the two other pairs of $X_1 \in \{3, 4\}$ and $X_1 \in \{5, 6\}$), then we do see that $Y$ is Markov. Sorry for being a bit tedious but how does getting $Y_2\mid Y_1=1$ prove it's Markov?
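This block computation is easy to check mechanically. Here is a quick numerical sketch of my own (not part of the original answer), using exact fractions: it verifies that the two states within each block assign the same total probability to every block (the lumpability condition), and recovers the 3×3 matrix above:

```python
from fractions import Fraction as F

# Transition matrix of X for a = 1, as exact fractions.
P = [[F(n, 10) for n in row] for row in [
    [1, 3, 0, 1, 5, 0],
    [2, 2, 1, 0, 1, 4],
    [6, 0, 1, 1, 2, 0],
    [6, 0, 1, 1, 0, 2],
    [3, 0, 4, 1, 0, 2],
    [3, 0, 1, 4, 2, 0],
]]

# The three blocks of states: Y=1 ~ {1,2}, Y=2 ~ {3,4}, Y=3 ~ {5,6}
# (0-based indices here).
blocks = [(0, 1), (2, 3), (4, 5)]

def block_probs(i):
    """Probability of jumping from state i into each block."""
    return [P[i][a] + P[i][b] for (a, b) in blocks]

# Lumpability condition: the two states of each block agree.
for (a, b) in blocks:
    assert block_probs(a) == block_probs(b)

# Transition matrix of Y, one row per block.
P_star = [block_probs(a) for (a, b) in blocks]
for row in P_star:
    print([str(x) for x in row])
# ['2/5', '1/10', '1/2']
# ['3/5', '1/5', '1/5']
# ['3/10', '1/2', '1/5']
```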
Why Graph API Explorer doesn't return the wall 'feed' based on a user's id When I test the Graph API Explorer from the facebook site and write the following parameters: https://graph.facebook.com/'userid'/feed?access_token='my access token' it returns { "data": [ ] } I tried it with several group ids and all return the same result. The scope is below: user_birthday, user_religion_politics, user_relationships, user_relationship_details, user_hometown, user_location, user_likes, user_education_history, user_work_history, user_website, user_managed_groups, user_events, user_photos, user_videos, user_friends, user_about_me, user_status, user_games_activity, user_tagged_places, user_posts, read_page_mailboxes, rsvp_event, email, ads_management, ads_read, read_insights, manage_pages, publish_pages, pages_show_list, pages_manage_cta, pages_manage_leads, publish_actions, read_audience_network_insights, read_custom_friendlists, user_actions.video, user_actions.books, user_actions.music, user_actions.news, user_actions.fitness, public_profile When I tried it from the facebook Graph API Explorer I found out that it's not returning any results. I would really appreciate your help. Can you tell me whether you are accessing another user's id with the same token, or you are using a different token? Because if you are using the Graph Explorer then it will give the token for your ID only. And you can see all the feed of your page using me/feed?fields=id,posts,from or your_id/feed?fields=id,posts,from Use GET request while using submit button. If I use my app, can I get the feed of others "others_id/feed?fields=id,posts" Yes, if you are using your app and your app is approved, then you can get the feed after getting an access token from the user. Welcome anytime and hit +1 also if you like it!! :)
Implicitly convert Geometry to WKT string in PostGIS Suppose I have a table defined as follows: CREATE TABLE gtest (name varchar, geom geometry); To insert, I am able to simply do this: INSERT INTO gtest VALUES ( 'Polygon', 'SRID=4326;POLYGON((0 0,1 0,1 1,0 1,0 0))' ); I don't have to wrap the WKT string in the function ST_GeomFromText() because PostGIS has an implicit cast that does so. This is nicely explained by @JGH here By using the Postgres command \dC, the defined casts can be listed, including: List of casts Source type | Target type | Function | Implicit? -------------------------+-----------------------------+--------------------+--------------- text | geometry | geometry | yes geometry | text | text | yes I'd like to make it so that I can simply do a SELECT * FROM gtest and have the results of the geometry column be implicitly converted into WKT. Currently, it will just display them as WKB. First, I tried creating a new cast as follows: CREATE CAST (geometry AS text) WITH FUNCTION st_astext(geometry) AS IMPLICIT; This returned an error, since a cast from geometry to text already exists (as seen in the table). I then tried ALTER EXTENSION postgis DROP CAST (geometry as text); and then DROP CAST (geometry as text);, and was then able to create the new cast: List of casts Source type | Target type | Function | Implicit? -------------------------+-----------------------------+--------------------+--------------- geometry | text | st_astext | yes This still didn't work however, as when I do a select, I still get the results in WKB. Firstly, is this possible? Am I just doing something wrong? Secondly, would any geometry functions break by adding this implicit cast? When a datum is converted to a string or sent to the client in text mode, the type output function is called. No casts are applied in that case. This function is written in C, and you'd have to hack PostGIS in order to change it. 
Moreover, you'd also have to change the type input function to accept the text format. I hope you did that experiment in a test machine, because that ALTER EXTENSION has mutilated the PostGIS extension. Makes sense, thanks. Also, does the extension get altered at just the database level, or will I have to reinstall PostGIS? That's just on the database level. If you drop and re-create the extension, everything will be as good as new.
ObservableCollection addings items In my WPF application I have an Observable Collection of Functions private ObservableCollection<Function> functions = new ObservableCollection<Function>(); I wrote a command for a button to add new functions to the collection: In this case I am adding a polynomial function. public ICommand AddPolyFuncCommand { get { return new Command(obj => { Function newPolyFunc = new PolyFunction(this.Coefficients); functions.Add(newPolyFunc); CalculatePoints(); }); } } However, if I keep adding more functions, all of the latest functions in the collection are overwritten with the function I want to add. For example I have 3 entries, but the functions are all the same (they should be different). For example, I create a first function. After that I want to add another different function to the collection. It lets me create the "newPolyFunc" properly but if I take a look at the FunctionsCollection at runtime the first value is already overwritten with the function. public ICommand AddTrigoFuncCommand { get { return new Command(obj => { this.functions.Add(newTrigoFunc); CalculatePoints(); }); } } Maybe it's just a typo but your code should be functions.Add(newPolyFunc) That didn't solve it. I didn't think it would but your code snippet was syntactically incorrect - just pointing that out Does the first item get overwritten immediately after the call to function.Add() ? It actually happens before that. When I stop the debugger before the function.Add(), the first entry is already overwritten. I know, this is very strange. Then, after the .Add(), there are 2 entries of the same function. There is some other code causing the problem, you need to spend some time debugging all the pieces that interact with your collection. Assuming this.Coefficients is an ObservableCollection, you give the new Function a reference to the Coefficients. 
You may want to pass a deep copy instead of a reference like this ObservableCollection<double> newCoefficients = new ObservableCollection<double>(Coefficients.Select(c => c)); @LittleBit You are a genius! Thank you very much. No problem, should I post it as an answer and explain it a little further? Doesn't matter, it's on you. Could you just quickly explain to me how I can make a copy of the newTrigoFunc (It's of type TrigoFunction). I added the code on top. Here I still have the same problem. By writing Function newPolyFunc = new PolyFunction(this.Coefficients); you pass the reference of the Coefficients and not a new set of Coefficients. You could use LINQ to create a deep copy of the Coefficients or create an empty set and pass them like this: //Create deep copy and pass them ObservableCollection<double> newCoefficients = new ObservableCollection<double>(Coefficients.Select(c => c)); //Create empty set ObservableCollection<double> newCoefficients = new ObservableCollection<double>(Enumerable.Repeat(0d, Amount/Grade)); Important: When you pass a reference you pass a pointer to the instance/object and not a clone/copy. Always be aware whether it's a reference or value type. For example the newTrigoFunc is an instance and is passed as a reference. So this.functions now has the same reference saved 2 times, not two different instances/objects. When you want to add a new object/instance I suggest you create a new one with the constructor like this //Add new object/instance this.functions.Add(new TrigonometricFunctionClass(parameters?));
If $\kappa$ is ineffable, then there is no $\kappa$-Kurepa tree So this is an exercise($4.25$) from Ralf Schindler's book, and I have some trouble with it. This is the statement: (Jensen-Kunen) Show that if $\kappa$ is ineffable, then there is no $\kappa$-Kurepa tree. So naturally after thinking about this for some time, I decided to take a peek at its solution for a hint. First I found this in Assaf Rinot's webpage. But there he proves this for slim $\kappa$-Kurepa trees. Then I tried to find the main paper where Jensen and Kunen first proved this. But all I found was an unpublished manuscript in Jensen's webpage, which was extremely hard to read for me and even there it seems that the result is proved for slim $\kappa$-Kurepa trees as well, even though they don't mention it. This is a link to Jensen's webpage. The unpublished paper is called "Some Combinatorial Properties of $L$ and $V$". The result is in page $26$ of the second chapter, it's theorem $9$. So my question is: Is the above statement correct for general $\kappa$-Kurepa trees? If so, how would one prove it? A small side question: when reading the text from Jensen, he used the word Mahlo instead of stationary, at least I think so. Is this correct? Meaning: did people call stationary sets, Mahlo sets in the past? Or does it have some added meaning? Mahlo is defined at the bottom of page 1 of Chapter 1 of the handwritten notes you mentioned and its definition does agree with that of a stationary set. I have no idea how common this terminology was in the past though @AlessandroCodenotti, Ah thanks. I skipped the first chapter. It's really hard to both decode the text and the mathematics. :) Also some slimness condition is necessary, otherwise if $\kappa$ is strong limit then the full binary tree of height $\kappa$ is a $\kappa$-Kurepa tree @AlessandroCodenotti, Oh, you are completely right. So we have to restrict everything in order to avoid these trivial cases. 
I would be more than happy if you turned your comment to an answer, so I could accept it. Also it would take this question off of the unanswered list. Note that whenever $\kappa$ is a strong limit cardinal (and of course ineffable cardinals are strong limit) there is a $\kappa$-Kurepa tree, namely the full binary tree of height $\kappa$, where by $\kappa$-Kurepa tree I mean a tree of height $\kappa$, such that all levels have size $<\kappa$ but the tree has at least $\kappa^+$ branches. So either Schindler's book has the slimness condition in the definition of a $\kappa$-Kurepa tree or the exercise has a typo, but I'm not familiar with the book so I can't tell. I see that you already found a proof of the fact that if $\kappa$ is ineffable then there is no slim $\kappa$-Kurepa tree, you can also look at theorem 2.6 here, this pdf also seems to cover more of the results contained in Jensen's handwritten notes if you're interested. Since the definition has no slimness condition, I guess it's just a typo. I really appreciate the pdf, although it doesn't open for me, I do have a copy of Devlin's book. Thanks!
Convert qtvr file to mov I have received QTVR files (Panorama files) from my client and I am supposed to convert them to mov, and later convert them to mp4. Does anybody know any software to convert them? I have used Pano2Movie, but it seems my trial version is already outdated, so I can't use it again. If you convert it to mp4, you will lose all panorama functionality You should try QTVR2MOV (Sorry, missed the part where you mentioned you had Pano2Movie already) Wow. The site it was on has changed. Try torrents.
I'm facing this problem in dart, is there a solution? main.dart:7:25: Error: The argument type 'String?' can't be assigned to the parameter type 'String' because 'String?' is nullable and 'String' isn't. var age=1021-num.parse(birth_years); ^ It is because String? is nullable and String is not. In other words, it expects a String, but you are providing something that might be a String or it might be null. Either provide a non-nullable String, or force the value with the ! operator. Keep in mind, forcing with ! will throw an error if the value is null. myFunction(nullableString!); How do I write the code? Does his age count only when he enters the year he was born? @mohammed the last line of the answer is how you would pass a String? as a String. I can't tell you how your code works other than that.
MySql hierarchical query I have an Organizations table:

Id  Name     ParentId
---------------------
1   Org1     5
2   Org2     5
3   Org3     4
4   Depart2  6
5   Depart1  6
6   Company  null

What I would like to achieve is a query which returns a table with the belonging of each organization to higher-order organization units up the hierarchy tree:

Id  BelongsToOrgId
1   1     Org1 is part of Org1
1   5     Org1 is part of Depart1
1   6     Org1 is part of Company
2   2     Org2 is part of Org2
2   5     Org2 is part of Depart1
2   6     Org2 is part of Company
3   3     Org3 is part of Org3
3   4     Org3 is part of Depart2
3   6     Org3 is part of Company
4   4     Depart2 is part of Depart2
4   6     Depart2 is part of Company
5   5     Depart1 is part of Depart1
5   6     Depart1 is part of Company
6   6     Company is part of Company

Best Regards, Piotr

WITH RECURSIVE cte AS (
    SELECT Id, Name, id BelongsToOrgId, Name UpperName
    FROM Organizations
    UNION ALL
    SELECT Organizations.Id, Organizations.Name, cte.BelongsToOrgId, cte.Name
    FROM Organizations
    JOIN cte ON Organizations.ParentId = cte.Id
)
SELECT Id, BelongsToOrgId, CONCAT(Name, ' is part of ', UpperName) Relation
FROM cte
ORDER BY Id, BelongsToOrgId;

fiddle

Thanks Akina, we are almost there, one fix:

WITH RECURSIVE cte AS (
    SELECT Id, Name, id BelongsToOrgId, Name UpperName
    FROM Organizations
    UNION ALL
    SELECT Organizations.Id, Organizations.Name, cte.BelongsToOrgId, cte.UpperName
    FROM Organizations
    JOIN cte ON Organizations.ParentId = cte.Id
)
SELECT Id, BelongsToOrgId, CONCAT(Name, ' is part of ', UpperName) Relation
FROM cte
ORDER BY Id, BelongsToOrgId;

It is cte.UpperName and not cte.Name in line 5
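The same recursive-CTE pattern can be tried out quickly with Python's built-in sqlite3 module — a sketch of my own, since SQLite also supports WITH RECURSIVE (note the SQLite || concatenation in place of MySQL's CONCAT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Organizations (Id INTEGER, Name TEXT, ParentId INTEGER);
    INSERT INTO Organizations VALUES
        (1, 'Org1', 5), (2, 'Org2', 5), (3, 'Org3', 4),
        (4, 'Depart2', 6), (5, 'Depart1', 6), (6, 'Company', NULL);
""")

rows = conn.execute("""
    WITH RECURSIVE cte AS (
        -- anchor: every organization belongs to itself
        SELECT Id, Name, Id AS BelongsToOrgId, Name AS UpperName
        FROM Organizations
        UNION ALL
        -- step: a child belongs to everything its parent belongs to
        SELECT o.Id, o.Name, cte.BelongsToOrgId, cte.UpperName
        FROM Organizations o JOIN cte ON o.ParentId = cte.Id
    )
    SELECT Id, BelongsToOrgId, Name || ' is part of ' || UpperName
    FROM cte ORDER BY Id, BelongsToOrgId
""").fetchall()

for row in rows:
    print(row)  # 14 rows, e.g. (1, 6, 'Org1 is part of Company')
```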
R Studio Encoding Does anyone know how I can change the encoding in R so UTF-8 is TRUE? > l10n_info() $MBCS [1] FALSE $`UTF-8` [1] FALSE $`Latin-1` [1] TRUE $codepage [1] 1252 $system.codepage [1] 1252 If you are using Windows, you can't. Windows doesn't support UTF-8 locales. Almost all other operating systems use UTF-8 by default these days, so the way to change that is to stop using Windows. Windows was very early in adopting Unicode, and went for the now-obsolete 16-bit encoding UCS-2. Later it changed to the very similar UTF-16, which is still a 16-bit encoding (but allows some characters to be represented by pairs of 16-bit values, so it covers all Unicode characters). Most other operating systems adopted Unicode later using the UTF-8 encoding, which is better in many respects than UTF-16, though occasionally it takes up more space: characters are based on 8-bit bytes, and some need 3 or 4 bytes, whereas very few UTF-16 characters need more than one 16-bit unit (2 bytes). Thank you. I really appreciate this detailed answer.
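The space trade-off between UTF-8 and UTF-16 described in the answer is easy to verify directly. The snippet below uses Python rather than R, purely for illustration, and encodes three representative characters both ways.

```python
# Byte lengths of a few characters under UTF-8 vs UTF-16 (little-endian,
# without a byte-order mark), illustrating the trade-off described above.
samples = ["a", "\u20ac", "\U0001d11e"]  # ASCII letter, euro sign, musical G clef

for ch in samples:
    utf8 = ch.encode("utf-8")
    utf16 = ch.encode("utf-16-le")
    print(f"U+{ord(ch):05X}: UTF-8 {len(utf8)} bytes, UTF-16 {len(utf16)} bytes")
```

ASCII characters are smaller in UTF-8 (1 byte vs 2), characters like the euro sign are smaller in UTF-16 (2 bytes vs 3), and characters outside the Basic Multilingual Plane take 4 bytes in both.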
extending a class with an extra parameter I got a simple question. I have a class which I use for the purpose of splitting a string into 2 years: public class Period { int firstYear; int secondYear; Period () { } Period(String periode) { String [] periodeSplit = periode.split("-"); this.firstYear = Integer.parseInt(periodeSplit[0]); this.secondYear = Integer.parseInt(periodeSplit[1]); } public String toString() { return "Firstyear: " + this.firstYear + "\n" + "Secondyear: " + this.secondYear; } } I now want to extend this class, not splitting the data into 2 different ints but into 3 different ints. So besides the 2 already existing integer vars I want one extra. What's the easiest way of doing this? Your help is appreciated! Kind regards, Kipt Scriddy Do you understand how your code works? If so, I think you'd be able to answer your own question. I am not really used to this kind of programming; I'd rather use another class, but for this assignment I need to do it. I am a bit confused: even if I extend the class with another int, what happens to the previous 2 ints? Will they be callable in this class?
I think it would be better (and quite easy) to create a more general class that will be able to deal with any number of years you pass to it: public class Period { int[] years; Period() { } Period(String periode) { String[] periodeSplit = periode.split("-"); years = new int[periodeSplit.length]; for (int i = 0; i < periodeSplit.length; i++) { years[i] = Integer.parseInt(periodeSplit[i]); } } public String toString() { String result = ""; for (int i = 0; i < years.length; i++) { result += "Year " + i + ":" + years[i] + "\n"; } return result; } } If the original class really has to be extended, then it can be done like this: class ExtendedPeriod extends Period { int thirdPart; ExtendedPeriod(String periode) { String[] periodeSplit = periode.split("-"); this.firstYear = Integer.parseInt(periodeSplit[0]); this.secondYear = Integer.parseInt(periodeSplit[1]); this.thirdPart = Integer.parseInt(periodeSplit[2]); } public String toString() { return "Day: " + this.firstYear + "\n" + "Month: " + this.secondYear + "\nYear: " + this.thirdPart; } } I would recommend changing the variable names 'firstYear' and 'secondYear' to something different, like 'firstPart' and 'secondPart', because for ExtendedPeriod they aren't years anymore (I left them in my code so it would compile with yours, but called the new int 'thirdPart'). I don't feel that this is the best use of inheritance, but if that's what's needed. I also wanted to reuse toString from Period like this: public String toString2() { return super.toString() + "\nThird part: " + this.thirdPart; } but for it to make sense you would have to change the toString method in Period not to call the values 'years'. True, the thing is that I'm only interested in the splitting part. The first data I am passing is a period of years, so for instance: 1980-2020. The second data I am passing is a date: 17-10-1978. Besides that, the teacher wants me to use an extended class :( I wouldn't extend just to add a new year.
Why not make the entire thing generic enough, so that it supports whatever split you need. public class Period { String [] periodeSplit; Period(String periode) { periodeSplit = periode.split("-"); } public String toString() { //TODO : Iterate and print. } } I could do that, but the classes need different toString methods, since one holds years and the other holds dates. So I would rather extend the one splitting the years and just override the toString method for the child class. When you extend the class, split the string into two parts first: the part that's different from your current code, and then the part that your current code would handle. Then simply call super(periode). The child class will have access to the parent variables, since you made them default (package-private). So when calling the extended class, in this case the Date class, I would pass the argument first to the super (parent) class, which parses the first 2 strings, and then pass it to the child class? No, split it up into what your child class would handle and what the parent class would handle, then call super(periode) on the string that the parent class would handle. So for instance if I pass a string to the super, and super only has 2 vars that can be reached, is there a way to access the third split string in the super? So in short: passing something that can be split 3 times to the super and then filling a var in the child with the last part split from the string. You can't use the child vars in the parent class unless you specify that all children of the parent must have that variable, which goes against what you're trying to do. You can however access the parent variables using super.firstYear, or even call super.toString()
SQL SELECT Comparison I'm writing a query with the following structure: IF (SELECT 8) = (SELECT 9) INSERT INTO Quality_Report VALUES ('USBL', 'IM SL Current Time Period',(SELECT 8),(SELECT 9),'Match',GETDATE()) ELSE INSERT INTO Quality_Report VALUES ('USBL', 'IM SL Current Time Period',(SELECT 8),(SELECT 9),'Not Matched',GETDATE()) The SELECT "8" and "9" will be replaced with SELECT statements that return a numeric value (like below). SELECT CASE when Sum(AVG_DLY_SLS_LST_35_DYS) =0 then 0 else Sum(INVN_DOL) / Sum(AVG_DLY_SLS_LST_35_DYS)end as [IM DSO Current Time Period] FROM [Mars_Bars_RAW].DBO.[LND_ITEMDETAILS] LEFT JOIN [Mars_Bars_RAW].DBO.[LND_OpcoMaster] ON OpCo_NBR = Opco WHERE FISC_WEEK = '37' AND FY17_Market = 'Southeast' When I replace both my SELECT statements with the actual queries, I get this error though: Msg 116, Level 16, State 1, Line 43 Only one expression can be specified in the select list when the subquery is not introduced with EXISTS. Any reason I can't do this? Works fine with the dummy SELECT 8/9s. Thanks, Microsoft SQL (T-SQL) You showed us only one select statement, and it's a different one that gives you the error.
Try this and you'll find that it produces no error: if (SELECT CASE when Sum(AVG_DLY_SLS_LST_35_DYS) =0 then 0 else Sum(INVN_DOL) / Sum(AVG_DLY_SLS_LST_35_DYS)end as [IM DSO Current Time Period] FROM [Mars_Bars_RAW].DBO.[LND_ITEMDETAILS] LEFT JOIN [Mars_Bars_RAW].DBO.[LND_OpcoMaster] ON OpCo_NBR = Opco WHERE FISC_WEEK = '37' AND FY17_Market = 'Southeast') = (SELECT CASE when Sum(AVG_DLY_SLS_LST_35_DYS) =0 then 0 else Sum(INVN_DOL) / Sum(AVG_DLY_SLS_LST_35_DYS)end as [IM DSO Current Time Period] FROM [Mars_Bars_RAW].DBO.[LND_ITEMDETAILS] LEFT JOIN [Mars_Bars_RAW].DBO.[LND_OpcoMaster] ON OpCo_NBR = Opco WHERE FISC_WEEK = '37' AND FY17_Market = 'Southeast') print 'this works' In your other select statement you have more than one field (8, 9 in my example); that's what causes the error: if (SELECT 8, 9) = (SELECT CASE when Sum(AVG_DLY_SLS_LST_35_DYS) =0 then 0 else Sum(INVN_DOL) / Sum(AVG_DLY_SLS_LST_35_DYS)end as [IM DSO Current Time Period] FROM [Mars_Bars_RAW].DBO.[LND_ITEMDETAILS] LEFT JOIN [Mars_Bars_RAW].DBO.[LND_OpcoMaster] ON OpCo_NBR = Opco WHERE FISC_WEEK = '37' AND FY17_Market = 'Southeast') print 'this works'
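The same restriction — a scalar comparison cannot receive a multi-column subquery — exists in other engines too. Here is a quick way to see the class of error using SQLite from Python; the error message differs from SQL Server's Msg 116 and may vary by SQLite version, but the cause is the same.

```python
# A one-column scalar subquery compares fine; a two-column subquery in the
# same scalar position raises an error (demonstrated with SQLite).
import sqlite3

conn = sqlite3.connect(":memory:")

ok = conn.execute("SELECT (SELECT 8) = (SELECT 8)").fetchone()[0]
print(ok)  # 1, i.e. true

try:
    conn.execute("SELECT (SELECT 8, 9) = (SELECT 8)").fetchone()
except sqlite3.OperationalError as e:
    print("error:", e)
```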
Storing object array with JDBCTemplate from POST method - Spring Boot How do you insert an object array into a database by using JDBCTemplate? I have an object array of variable length coming in from my POST method in my controller. I have looked at these, http://www.java2s.com/Tutorial/Java/0417__Spring/PassParameterAsObjectArray.htm How to insert Integer array into postgresql table using jdbcTemplate in Java Springboot? As well as others, and they do not seem to fit what I need. Controller // Service @Autowired private DBService tool; @PostMapping(value = "/foo") private void storeData(@RequestBody CustomObject[] customObjects) { // Calls service then DAO tool.storeData(customObjects); } POJO Object public class CustomObject { private Integer id; private String name; // Getters & Setters for class attributes ... } DAO Is this right? Because I want to store each array element separately, with each element having its own row. @Autowired private JdbcTemplate temp; public void storeData(CustomObject[] customObjects) { String sql = "INSERT INTO FooBar(name) VALUES(\'" + customObjects.toString() + "\');"; temp.update(sql); } Expected I want to store the array of my custom object from POST into my database with each element having its own row. Ideally you would want to iterate over the array and save each CustomObject: private JdbcTemplate temp; public void storeData(CustomObject customObject) { String sql = "INSERT INTO FooBar VALUES(" + customObject.id + ",\'"+ customObject.name +"\');"; temp.update(sql); } @PostMapping(value = "/foo") private void storeData(@RequestBody CustomObject[] customObjects) { // Save each record individually for (CustomObject customObject : customObjects) { tool.storeData(customObject); } } Neither Controller nor DAO; ideally you would have a service layer between the controller and DAO to house that code. JB Nizet, I agree.
Oversight on my part. Concatenating values into a query string like your code is doing is extremely unsafe, as it makes your code vulnerable to SQL injection. The proper course of action is to use a parameterized query. I agree, the service layer is where I will put the logic.
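The parameterized-query point can be sketched quickly. The example below uses Python's sqlite3 for brevity rather than Spring; the same idea applies to JdbcTemplate, whose update(String sql, Object... args) overload binds values to ? placeholders instead of concatenating them into the SQL string.

```python
# Parameterized insert: values are bound to placeholders by the driver,
# never spliced into the SQL text, so quoting and injection are non-issues.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE FooBar (id INTEGER, name TEXT)")

custom_objects = [(1, "alpha"), (2, "O'Brien")]  # note the embedded quote
for obj_id, name in custom_objects:
    conn.execute("INSERT INTO FooBar (id, name) VALUES (?, ?)", (obj_id, name))

stored = conn.execute("SELECT id, name FROM FooBar ORDER BY id").fetchall()
print(stored)
```

The row containing an apostrophe is stored correctly without any manual escaping, which is exactly what the string-concatenation version gets wrong.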
Assessing whether the probability of being assigned treatment is equal (or reasonably close) between two individuals/groups I'm currently studying the textbook Design of Observational Studies, second edition, by Rosenbaum. Chapter 3 Two Simple Models for Observational Studies says the following: 3.1 The Population Before Matching In the population before matching, there are $L$ subjects, $\ell = 1, 2, \ldots, L$. Like the subjects in the randomized experiment in Chap. 2, each of these $L$ subjects has an observed covariate $x_\ell$, an unobserved covariate $u_\ell$, an indicator of treatment assignment, $Z_\ell$, where $Z_\ell = 1$ if the subject receives treatment or $Z_\ell = 0$ if the subject receives the control, a potential response to treatment, $r_{T\ell}$, which is seen if the subject receives treatment, $Z_\ell = 1$, a potential response to control, $r_{C\ell}$, which is seen if the subject receives the control, $Z_\ell = 0$, and an observed response, $R_\ell = Z_\ell r_{T\ell} + (1 - Z_\ell) r_{C\ell}$. Now, however, treatments are not assigned by the equitable flip of a fair coin. In the population before matching, we imagine that subject $\ell$ received treatment with probability $\pi_\ell$, independently of other subjects, where $\pi_\ell$ may vary from one person to the next and is not known. More precisely, $$\qquad \pi_\ell = Pr(Z_\ell = 1 \mid r_{T\ell}, r_{C\ell}, x_\ell, u_\ell). \qquad (3.1)$$ ... 3.2 The Ideal Matching Imagine that we could find two subjects, say $k$ and $\ell$, such that exactly one was treated, $Z_k + Z_\ell = 1$, but they had the same probability of treatment, $\pi_k = \pi_\ell$, and we made those two subjects into a matched pair. Obviously, it is something of a challenge to create this matched pair because we do not observe $u_k$ and $u_\ell$, and we observe either $r_{Tk}$ or $r_{Ck}$ but not both, and either $r_{T\ell}$ or $r_{C\ell}$ but not both. 
It is a challenge because we cannot create this pair by matching on observable quantities. We could create this pair by matching on the observable $x$, so $x_k = x_\ell$, and then flipping a fair coin to determine $(Z_k, Z_\ell)$, assigning one member of the pair to treatment, the other to control; that is, we could create this pair by conducting a randomized paired experiment, as in Chap. 2. Indeed, with the aid of random assignment, matching on $x$ may be prudent ([39]), but is not necessary to achieve $\pi_k = \pi_\ell$ because the $\pi$s are created by the experimenter. If randomization is infeasible or unethical, then how might one attempt to find a treated-control pair such that the paired units have the same probability of treatment, $\pi_k = \pi_\ell$? Return for a moment to the study by Joshua Angrist and Victor Lavy [56] of class size and educational test performance, and in particular to the $L = 86$ pairs of Israeli schools in Sect. 1.3, one school in each pair with between 41 and 50 students in the fifth grade ($Z = 1$) and the other with between 31 and 40 ($Z = 0$), the pairs being matched for a covariate $x$, namely the percentage of disadvantaged students. Strict adherence to Maimonides' rule would require the slightly larger fifth grade cohorts to be taught in two small classes and the slightly smaller fifth grade cohorts to be taught in one large class. As seen in Fig. 1.1, adherence to Maimonides' rule was imperfect, but typical class sizes did differ substantially between the two groups (the response $\bar{r}(z)$ in their study is the average test score in the fifth grade). What separates a school with a slightly larger fifth grade cohort ($Z = 1$, 41-50 students) and a school with a slightly smaller fifth grade cohort ($Z = 0$, 31-40 students)? Well, what separates them is the enrollment of a handful of students in the 5th grade.
It is reasonably plausible that whether or not a few more students enroll in the 5th grade is a relatively "haphazard" event, an event not strongly related to the average test performance ($r_T, r_C$) that the fifth grade would exhibit with a larger or smaller cohort. That is, it seems reasonably plausible that $\pi_k$ and $\pi_\ell$ are fairly close in the $L = 86$ matched pairs of two schools. Properly understood, a “natural experiment” is an attempt to find in the world some rare circumstance such that a consequential treatment was handed to some people and denied to others for no particularly good reason at all, that is, haphazardly [4, 10, 38, 69, 91, 112, 124]. The word “natural” has various connotations, but a “natural experiment” is a wild experiment, not a wholesome experiment, natural in the way that a tiger is natural, not in the way that oatmeal is natural. To express the same thought differently: to say that “it does seem reasonably plausible that $\pi_k$ and $\pi_\ell$ are fairly close” is to say much less than "definitely, $\pi_k = \pi_\ell$ by the way treatments were randomly assigned." Haphazard is a far cry from randomized, a point given proper emphasis by Mark Rosenzweig and Ken Wolpin [112], by Timothy Besley and Anne Case [10] in reviews of several “natural experiments”, and by Guido Imbens and Donald Rubin in their discussions of “natural experiments”. Nonetheless, it does seem reasonable to think that $\pi_k$ and $\pi_\ell$ are fairly close in the 86 paired Israeli schools in Fig. 1.1. This might be implausible in some other context, say in a national survey in the USA, where schools are funded by local government, so that class size might be predicted by the wealth or poverty of the local community. Even in the Israeli schools, the sense in which $\pi_k$ and $\pi_\ell$ were plausibly close depended upon: (1) matching in terms of percent disadvantaged rather than class size. ... Where do things stand?
As has been seen, the ideal matching would pair individuals, $k$ and $\ell$, with different treatments, $z_k + z_\ell = 1$, but the same probability of treatment, $\pi_k = \pi_\ell$. If we could do this, we could reconstruct a randomized experiment from observational data; however, we cannot do this. The wise attempt to find “natural experiments” is the attempt to move towards this ideal. That is what you do with ideals: you attempt to move towards them. No doubt, many if not most “natural experiments” fail to produce this ideal pairing [10, 12]; moreover, even if one “natural experiment” once succeeded, there would be no way of knowing for sure that it had succeeded. A second attempt to approach the ideal is to find a context in which the covariates used to determine treatment assignment are measured covariates, $x_\ell$, not unobserved covariates, $u_\ell$, and then to match closely for the measured covariates. Although we cannot see $\pi_k$ and $\pi_\ell$ and so cannot see whether they are fairly close, perhaps there are things we can see that provide an incomplete and partial check on the closeness of $\pi_k$ and $\pi_\ell$. This leads to the devices of quasi-experiments [13, 69, 119] mentioned in Sect. 1.2.4, such as multiple control groups [62, 84, 87], “multiple operationalism” or coherence [15, 47, 76, 90, 128], “control constructs” and known effects [68, 86, 87, 119, 133], and one or more pretreatment measures of what will become the response. When it “seems reasonably plausible that $\pi_k$ and $\pi_\ell$ are fairly close”, we will need to ask about the consequences of being close rather than equal, about the possibility they are somewhat less close than we had hoped, about the possibility that they are disappointingly far apart. Answers are provided by sensitivity analysis; see Sect. 3.4. 
My question is with regards to whether it is reasonable to assume that $\pi_k = \pi_\ell$, or, at least, $\pi_k$ and $\pi_\ell$ are reasonably close, for an observational study (in order to determine causality). Take the case of a drug. Let's say that the drug company itself claims that the drug works for a certain demographic and will produce $results$ within $time$. Now assume that $k$ and $\ell$ are either describing two individuals of the same/said demographic or two groups (of individuals) of the same/said demographic. Now, let's say that we observe $k$ take the drug and observe that $\ell$ doesn't take the drug. Well, regardless of why $k$ took the drug and $\ell$ didn't, the drug company itself is making the claim that the drug should work on that demographic, so can we not just assume that $\pi_k = \pi_{\ell}$ in this case? In other words, since the onus / burden of proof is on the drug company, isn't it logically sound to assume that $\pi_k = \pi_\ell$ (or reasonably close) when looking to perform such an observational study (in order to determine causality), regardless of whether it is actually true? After all, if it turns out that it is false that $\pi_k = \pi_\ell$ (or reasonably close), then the experiment will fail to prove the hypothesis, which is entirely the fault of the drug company, right? And, after all, our entire goal/purpose with such experiments to determine causality is to assess whether some hypothesis is true, not to assess whether the correct hypothesis has been posed in the first place, right? If someone claims, A causes B under all conditions and for everyone, yes, I suppose you might verify (falsify) that claim, but what's typically claimed is A causes B on average and under specific circumstances. For example, a clinical trial gives you an average for an effect for the people in that trial. Doesn't necessarily generalize to other groups or to the future. 
Your quotation is far too long to be usable and even stretches the doctrine of fair use of copyrighted material. Please simplify your post. @whuber I've made some changes. As an expert in this field, I cannot understand what you are trying to ask. Do you need to include a whole chapter to state your question? Can you please narrow down your question to something that can be answered? @Noah $\pi_\ell$ is the probability that subject $\ell$ is allocated to treatment, right? And one of the main points that the author is making in the quoted text is that, if $\pi_\ell = \pi_k$, then we can infer causality from observational data, right? And the author then goes on to say that $\pi_\ell = \pi_k$ doesn't happen in reality. So [...] [...] my point was that, in my fictional scenario, if both $k$ and $\ell$ fall under the demographic that the drug company itself claims will be helped by the treatment, if it so happens that we see $k$ take the drug/treatment and $\ell$ not take the drug treatment, what's wrong with considering this to be a valid natural experiment with $\pi_k = \pi_\ell$? After all, the drug company itself is claiming that this drug works for both $k$ and $\ell$, so why not take them at their word and conduct the experiment under that assumption? So, under this scenario, we [...] [...] collect (observational) data from both $k$ and $\ell$ under the assumption that it is a valid natural experiment ($\pi_\ell = \pi_k$), and then perform causal inference based on that. Now, assume that the drug company's assumption/claim that the drug works for everyone in the demographic that $k$ and $\ell$ belong to is actually incorrect. Well, my understanding is that, in that case, this experiment will fail to show significance/causality, even though the drug actually might work / be shown to work in an experiment designed based on correct assumptions. But [...] [...] this is the drug company's problem, not ours, right? 
Our job is not to predict, for the drug company, the conditions under which their drug will work – that's their job to specify; rather, our job is simply to verify the claims of the drug company, which we have successfully done. Therefore, it seems to me that assuming $\pi_\ell = \pi_k$ is perfectly valid in such a scenario – we simply accept / trust / take at face value the claims of the drug company. @ThePointer I do not follow what's your interpretation of the condition $\pi_k = \pi_l$. How does the supposed assumption of the drug company that people with strata indexed by $k$ and $l$ can benefit from the drug mean that their probabilities of treatment are the same in an observational setting? @Kuku The drug company sets the criteria for whom the drug is efficacious (that is, the demographic for whom the drug is efficacious). Now, say we want to search a city/area for such people to test the drug on. The drug company tells us that the drug should be efficacious for anyone who satisfies the aforementioned criteria, regardless of the area that they live in. Ok, we take them at their word. [...] [...] We select a group of people, who satisfy the criteria, from one segment of the city, call them $k$; we then select another group of people, who satisfy the criteria, from another, different segment of the city, call them $\ell$. We give the drug to segment $k$ and observe them, and we also observe segment $\ell$ but do not give them the drug. [...] [...] In telling us / claiming that the drug should be efficacious for anyone who satisfies the aforementioned criteria, regardless of the area that they live in, is the drug company not telling us / claiming that $\pi_k = \pi_\ell$ for these two groups, and so that such an observational experiment design is valid in inferring causality / efficaciousness of the drug? [...] [...] 
And let's consider the scenario where the drug company is actually incorrect to claim that the drug should be efficacious for anyone who satisfies the aforementioned criteria, regardless of the area that they live in, and that the area that an individual lives in is actually an important criteria for determining the efficaciousness of the drug; then, in that case, the experiment will fail to show efficaciousness of the drug, right? [...] [...] But that would be the fault of the drug company for claiming that the area that an individual lives in does not significantly affect the efficaciousness of the drug, right? As I said before, the job of the experimenters is to test the claims of the drug company (that is, take them at face value), not to come up with the correct claims for them. So, overall, was this not a perfectly valid experiment for inferring causality and testing whether the drug is efficacious? @ThePointer In the example you mention, clearly $\pi_k \neq \pi_l$, since people in stratum $k$ receive the treatment with probability 1 and people in stratum $l$ receive the treatment with probability 0. Unless I am grossly misunderstanding your point... @Kuku But isn't the entire point of the fact that we're dealing with observational data that we cannot control $\pi$, and so we want to look for two comparable experiments "in the wild" where $\pi = 1$ for one case (the treatment) and $\pi = 0$ for the other case (the control)? [...] [...] But we only know that $\pi = 1$ for one case and $\pi = 0$ for the other a posteriori; before that (a priori), all we can do is try to determine whether the two cases/groups/whatever are "similar" – that is, whether $\pi_k \approx \pi_\ell$. By your logic, we would always have some $\pi = 1$ and some $\pi = 0$, since we observe a posteriori.
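The consequence of $\pi_k \neq \pi_\ell$ that this discussion turns on can be illustrated with a toy simulation (a sketch of my own, not from Rosenbaum's text): when an unobserved covariate drives both the probability of treatment and the response, the naive treated-minus-control comparison is biased even when the true treatment effect is zero.

```python
# Toy illustration of the bias that appears when treatment probabilities
# depend on an unobserved covariate u that also drives the response.
import math
import random

random.seed(0)
TRUE_EFFECT = 0.0  # the treatment does nothing at all

treated, control = [], []
for _ in range(200_000):
    u = random.gauss(0, 1)              # unobserved covariate
    pi = 1 / (1 + math.exp(-u))         # treatment probability depends on u
    z = random.random() < pi
    r = u + TRUE_EFFECT * z + random.gauss(0, 1)  # u also drives the response
    (treated if z else control).append(r)

naive = sum(treated) / len(treated) - sum(control) / len(control)
print(f"naive treated-minus-control difference: {naive:.3f}")
```

The naive difference comes out clearly positive even though the true effect is exactly zero, which is the sense in which "haphazard is a far cry from randomized."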
Compile C# 7.2 project from command line I am creating test automation for my code generation library. I have a test project in C# 7.2. The project compiles with Visual Studio without any issues. My test updates the code of this project. Then I want the test to compile the project, load the assembly, and verify it works as expected. I tried both msbuild and csc. Both are complaining that 7.2 is too high for them. I guess there should be a way to compile the project with devenv, which is already installed and works perfectly via the UI. Is there? A devenv command like this: [devenv SolutionName /Build SolnConfigName /Project ProjName /ProjectConfig ProjConfigName]? For C# 7.2 you'll have to use the MSBuild version shipped with VS2017 (Version 15). Assuming that the command line runs on a machine where VS2017 Professional is installed, the correct MSBuild path should be C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\MSBuild.exe Any other version of MSBuild should fail to compile C# 7. Specifically, I am building C# 7.2 projects with MSBuild 15.9 and it works. Edit You could install MSBuild 15 with the build tools for Visual Studio (see here). I don't know for sure which version will be installed, but I assume that it'd be the latest. Please note that according to this answer the path will be slightly different. Glad I could help :)
What's the minimum necessary bandwidth for a fast website? What's the minimum amount of bandwidth for a website? Let's say I have a hosting service with 125GB traffic; how many websites can I host before they start to slow down? Also, what does "1000 Mbit uplink" mean? What is it going to influence? There is no minimum amount. A truly unpopular website will attract no traffic and require no bandwidth. Most sites will, however, fare a little better, but it all depends on how much traffic the site gets. Let's say I have a hosting service with 125GB traffic 125 GB per year? day? hour? second? Per year, this isn't all that much. Per day it would require at least 12.5 Mbit/sec just to be able to move the data, assuming the demand was perfectly even. Assuming you have a 1000 Mbit/second connection (i.e. you can shovel 1 gigabit - about 120 megabytes - per second) you could in theory host 80 such sites, but that would FULLY utilize your maximum theoretical bandwidth. In practice a 1 Gbit/sec link will never reach that theoretical maximum. Worse yet, demand is far from even. So even if the aggregate amount of data transferred per day is 125 gigabytes, you may find that the load will vary from near zero to 100+ Mbit/sec! This depends on the type of site, but almost all sites but the most global ones will follow the daily rhythm of night and day. So your 1 Gbit/sec link would probably struggle to handle even just 20-40 sites that each require about 125 GiB of bandwidth per day during peak hours. Edit If the traffic is 125 GB per month per site (as a comment indicates) then a 1 Gbit/sec link should be able to easily handle on the order of 1000 sites. This represents an average bandwidth requirement of under 0.5 Mbit/sec. Thanks. The 125 GB is per month. Let's say I have to host not very famous websites (websites of local businesses in a city). Approximately how many websites can I host? @Patrick this is simple math, mate! I have however updated my answer.
Your answer did not explain the "simple math" very well; otherwise I assume he would have understood it. +1 for a good answer nonetheless.
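The "simple math" behind the answer's figures can be written out explicitly. This sketch assumes 125 GiB of traffic (which matches the answer's ~12.5 Mbit/s per-day figure) and a 30-day month; all figures are perfectly-even averages, ignoring the peak/off-peak swings the answer warns about.

```python
# Rough bandwidth arithmetic from the answer: 125 GiB of traffic spread
# evenly over a day vs over a 30-day month, expressed in Mbit/s.
GIB = 2 ** 30
bits = 125 * GIB * 8                         # 125 GiB expressed in bits

per_day_mbit = bits / 86_400 / 1e6           # spread over one day
per_month_mbit = bits / (30 * 86_400) / 1e6  # spread over one month

print(f"125 GiB/day   -> {per_day_mbit:.2f} Mbit/s average")
print(f"125 GiB/month -> {per_month_mbit:.3f} Mbit/s average")
# Theoretical ceiling only; real links need ample headroom for peaks.
print(f"Ceiling of such monthly sites on a 1 Gbit/s link: {1000 / per_month_mbit:.0f}")
```

This reproduces the answer's numbers: about 12.4 Mbit/s for 125 GiB per day, and well under 0.5 Mbit/s for 125 GiB per month, which is why "on the order of 1000 sites" leaves comfortable headroom on a 1 Gbit/s link.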
suggest the architecture for reporting application with custom templates Let me put down my requirements: we are designing a solution to satisfy very dynamic reporting needs. The data for those are generated in SQLite databases for now (they call them "cubes"), so there are lots of cubes on the server machines. The charts created on those cubes are developed on the Telerik reporting engine. They are a set of DLLs which access the cube data and prepare the chart for the UI. The thing that stays the same is the schema: every type of cube has a certain schema and sticks with it. New charts are added every now and then, and I don't want to make it part of a new framework to include new charts or chart templates every other day. So I was planning to host it in a separate service, call the service to get the data from the framework, and process it to create the chart. Now the issue is the size of the data to be transferred over the wire, which can be huge before any business logic is applied to it for charting. So what can be the suggestion to make it more "modular" and scalable, but somehow keep it workable as well? I mean, is this even a good approach? It sounds like you are planning to read the database "out from under" another system to bypass the way it works. This tends to indicate a bad idea, but sometimes that's necessary. If you must read large amounts of data "over the wire" and then process it, it sounds like you want some type of "sync" process that extracts the data and processes it, separately from the reporting requirements. Your reports can then work with only the processed data (which means they will perform well) and the sync process can take as long as it needs to extract the data and process it ready for reports. This is likely to need another database/storage area specifically for the new reports you are creating. It would be better to continue working with the system the way it was intended if you can.
That means new reports and data "cubes" etc. will be built in that system. If you are going to be building new reports every other day, what do you gain by the solution you are considering vs the solution that is already operating? Sorry I couldn't reply; I got busy travelling. The thing is, now they want some ready-to-use, sort-of-standard reports. The frequency will be lower. Why go for a new solution? Because the earlier solution was developed in Silverlight, and now they want it available for customers, not just internal use, and secondly also for tablets. I don't know if it helps or not, but if you have to build something new, you could look at Docmosis Cloud Services. It lets you use templates and can be reached by any platform including the tablets you mentioned. Good luck.
Concerning the Countability of the Set of Reals with Decimal Representations Consisting of All $1s$ The Problem: Exhibit a one-to-one correspondence between the set of of positive integers and the set, $S$, of real numbers with decimal representations consisting of all $1$s. Where I Am: I realize that $S$ is countable (or else the problem wouldn't be solvable), but I'm not sure how to actually construct an explicit bijection of the type requested, which I assume is the point of the problem. I can create a nice, upper diagonal matrix in which each element of the set is represented as a rational number like so: $$ \begin{matrix} \frac{1}{1} & \frac{11}{1} & \frac{111}{1} & \cdots \\ \frac{1}{10} & \frac{11}{10} & \frac{111}{10} & \cdots \\ & \frac{11}{100} & \frac{111}{100} & \cdots \\ & & \vdots & \ddots \\ \end{matrix} $$ This looks promising but I can't seem to write a function that iterates over it properly. Also, I'm not sure if I need to consider negative numbers, as the problem doesn't make that clear (or does it?). Are you sure $\frac1{10}$ counts as having a decimal representation consisting of all $1$s? For me it requires a leading zero: 0.1 That's a good question. Frankly, I don't know. All I know of the problem is what it says above. Each positive integer $n$ can be represented uniquely in the form $n=2^k(2m+1)$, where $k$ and $m$ are non-negative integers. You could match $n$ with the real number having $k$ ones to the left of the decimal point and $m$ ones to the right of the decimal point. This gives you an explicit bijection if we consider only positive reals with terminating decimal expansions. To handle the ones with fractional part $\frac19=0.111\ldots$, make a minor change: let $m-1$ be the number of ones to the right of the decimal point of $m>0$, and let $m=0$ indicate an infinite string of ones to the right of the decimal point. 
Added: As the OP points out in a comment below, there is a small problem with this: $1=2^0(2\cdot0+1)$, so this scheme has the positive integer $1$ corresponding to a naked decimal point. It’s actually the integers greater than $1$ that match up under this scheme with the positive real numbers whose decimal representations contain only ones. To fix this, associate with each positive integer $n$ the non-negative integers $k$ and $m$ such that $n+1=2^k(2m+1)$ and then proceed as before. (I’m leaving this as an adjustment instead of rewriting the answer from scratch, because textbooks too often present just the polished finished product, and I think that it’s instructive occasionally to see the steps by which that final product was reached.) If the negative reals represented only by ones are to be included as well, you could use a slight refinement of the same basic idea. Write $n$ as $2^k\cdot4^\ell(2m+1)$, where $k\in\{0,1\}$, and $\ell$ and $m$ are non-negative integers. That associates each positive integer $n$ with an ordered triple $\langle k,\ell,m\rangle$, which you can assign to the real number $(-1)^kr$, where $r$ is the real number corresponding to $2^\ell(2m+1)$ in the first scheme. Added: The same problem arises here if $\ell=m=0$, which happens for $n=1$ and $n=2$, so we associate with each $n\in\Bbb Z^+$ the triple $\langle k,\ell,m\rangle$ corresponding to $n+2$ and proceed as before. One probably needs a minor modification to this to represent the numbers with infinite ones to the right of the decimal point too. @Henning: One does indeed; thanks for the reminder. This may be a dumb question, but to what element in $S$ does $1 \in \mathbb{R}$ map? That is, $1 = 2^0 (2 \cdot 0 + 1)$; so $k = m = 0$, and therefore $1$ would map to the element in $S$ with no $1$s on either side of the decimal, which is not an element of $S$. How, then, could this map be bijective? @thisisourconcerndude: No, it's a very good question. 
We can fix the problem quite easily, though: for each $n$ we'll use the $k$ and $m$ for $n+1$. Then we never get the $k=m=0$ combination, but we still get all of the others. I'll fix the answer itself later when I'm at my computer instead of on this Kindle. Ah, yes. Thanks! Also, I assume it was understood, but for anyone else reading, I meant "...does $1 \in \mathbb{N}$ map..." above, not "$1 \in \mathbb{R}$" as it says (for some reason, I can't seem to edit the comment). All you have to do is go down the diagonals from top right to bottom left. Start at the top left, and after each diagonal take the leftmost element of the top row that hasn't been done. If I have a decimal rep containing only ones, then if I take the digits to the right of the decimal and reverse them I get a sequence of 0s and 1s, e.g. 0.01011 maps to 11010. But I can identify the latter with the binary rep of some positive integer, and this gives a bijection. Full disclosure: I'm not sure the above is foolproof, since for example $1/99=.\overline{10}$ would be mapped to $...01010101$ which isn't an integer. So I suspect I've missed something.
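The first scheme in the answer (with the n to n+1 adjustment) is concrete enough to code directly. Here is a short Python sketch, restricted to positive reals with terminating expansions; the function name is my own:

```python
def all_ones_real(n):
    # Map the positive integer n to a decimal string of all 1s, using the
    # unique decomposition n + 1 = 2^k * (2m + 1): k ones go to the left of
    # the decimal point and m ones to the right.
    n += 1  # the adjustment from the answer, so n = 1 is not the empty string
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    m = (n - 1) // 2
    return "1" * k + "." + "1" * m
```

Every pair (k, m) except (0, 0) is hit exactly once, so the map is a bijection onto the nonempty strings of ones: 1 maps to "1.", 2 maps to ".1", 3 maps to "11.", and so on.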
Where can I find hardware details for the Philips VG-5000? I'm interested in exactly what connectors the VG-5000 has and what the pinouts are for them. Ideally, I'd like a full set of schematics for the machine. Well, the wikipedia article links to the service manual (see Reference 23), and though it is in French, it contains the pinout for the connectors and complete schematics. (Took me a few minutes to find, less than writing it up).
Browsers force HTTPS for my NGINX server serving a .app domain I am creating the NGINX configuration for my website, and when I surf to it with Firefox or Chrome, they always redirect to HTTPS. I previously had HTTPS enabled for the domain (automatic redirect configured by certbot) but I discarded that configuration to start fresh. The only 'browser' it works with is Postman. Here is my configuration file: # Setup for the mybarber.app domain. It serves a static frontend and a REST api. # The domain should only be available on HTTPS and HTTP will be redirected. # Certificates managed by Let's Encrypt. # The config for the static frontend. server { listen 80 default_server; server_name mybarber.app; location / { root /var/www/mybarber_app; index index.html index.htm; } } # The config for the REST api. server { listen 80 default_server; server_name api.mybarber.app; location / { proxy_pass http://localhost:8080/; } } As you can see, I'm listening for HTTP. The nginx.conf file is the standard one from the installation. From what I can see, the browser is replacing HTTP with HTTPS even before it asks the server anything. Otherwise I would see some kind of redirect in the dev console. The NGINX server is also not receiving anything. It was suggested on Google that I right click on the history for my domain and click "forget this website" but it doesn't seem to help. My current guess is that the permanent redirect is still somehow in effect. But I'm not sure where to go from here. The problem is not nginx, but your browsers. They probably cached the fact that your domain can be accessed via https (because in the past that happened) and so they keep preferring https over http for security reasons. Try accessing your domain with a private session in your browser or from another computer that has never visited your domain before. If the error persists, you probably had HSTS enabled, and this makes it harder for you to make your browser forget about it.
If this is the case, try following the instructions from this link in order to reset the HSTS cache: https://www.thesslstore.com/blog/clear-hsts-settings-chrome-firefox Please be aware that some TLDs, such as .app for instance, are https only. Check https://hstspreload.org/?domain=mybarber.app for example Having looked at HSTS a bit more per your suggestion, I discovered that the TLD .app is forcefully preloaded with HSTS (https://hstspreload.org/?domain=mybarber.app). To own a domain with this TLD, it is required to serve it over HTTPS. Thanks! Perhaps include this in your answer and I will mark it as the correct answer.
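Since the whole .app TLD is on the HSTS preload list, the only workable setup is to serve HTTPS and redirect HTTP to it. A sketch reusing the domain names from the question; the certificate paths are placeholders for whatever certbot issued:

```nginx
server {
    listen 80;
    server_name mybarber.app api.mybarber.app;
    # .app is HSTS-preloaded, so browsers will never use plain HTTP anyway.
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name mybarber.app;
    ssl_certificate /etc/letsencrypt/live/mybarber.app/fullchain.pem;      # placeholder path
    ssl_certificate_key /etc/letsencrypt/live/mybarber.app/privkey.pem;   # placeholder path
    location / {
        root /var/www/mybarber_app;
        index index.html index.htm;
    }
}
```

The api.mybarber.app proxy block would get the same HTTPS treatment with its own server block on 443.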
Vertically Align Text and Image in a Dynamic Height unordered list I currently have a list of items. Some items are a single line of text and others are 2 lines with a break in between. I am having difficulty vertically aligning the image to the right of the text. I can align easily when there is only a single line of text, but with multiple lines the image hangs at the top. <ul> <li><a href="#"><img src=""/>Text Text Text<br/>Second Line of Text</a></li> <li><a href="#"><img src="" />Text Text Text</a></li> <li><a href="#"><img src="" />Text Text Text</a></li> </ul> Below is an example of what's happening. http://jsfiddle.net/SAwFE/ I would use absolute positioning. Change it to this revamped code: ul li { position: relative; /* added to your existing code */ } img { height: 20px; width: 20px; position: absolute; right: 12px; top: 50%; margin-top: -10px; /* half height of image */ } To avoid potential overlap (per your comment), increase the right padding on the li by the width of the img like so: ul li { padding: 9px 32px 9px 12px; /* modified existing code */ position: relative; /* added to your existing code */ } @Michael: add padding on the li (see revision of answer) to correct that. That was dumb of me to miss that. Thanks. I ended up coming up with the same outcome here using table-cells http://jsfiddle.net/hsXCQ/51/ but I like yours better using the absolute positioning. http://jsfiddle.net/SAwFE/12/
Neo4j - Create relationship within label on property I'm importing a dataset of the following structure into Neo4j: | teacher | student | period | |:---------------:|---------|:------:| | Mr. Smith | Michael | 1 | | Mrs. Oliver | Michael | 2 | | Mrs. Roth | Michael | 3 | | Mrs. Oliver | Michael | 4 | | Mrs. Oliver | Susan | 1 | | Mrs. Roth | Susan | 2 | My goal is to create a graph where a teacher "sends" students from one period to the next, showing the flow of students between teachers. The above graph for instance, would look like this: Using words, my logic looks like this: Generate a unique node for every teacher For each student, create a relationship connecting the earliest period to the next earliest period, until the latest period is reached. My code so far completes the first step: LOAD CSV WITH HEADERS FROM 'file:///neo_sample.csv' AS row // loads local file MERGE(a:teacher {teacher: row.teacher}) // used merge instead of create to produce unique teacher nodes. Can you explain a bit more about the queries you plan to run on such a graph? With this current model, you will of course need to have a relationship per student per period change, with the student name or id on each relationship, and likely with the period number (or numbers) for the transition. I wrote the above problem as a simplified version of my actual problem. I'm actually dealing with medical data. Teachers = physician IDs, students = patient IDs, and period = date of service. Things I'd like to query: 1) Which physicians form care networks (community detection) 2) Which physicians are gatekeepers to certain types of care (betweenness centrality) 3) Use a path finding algorithm based on cost and outcome to find the most cost effective way to treat various illnesses 4) For physicians who are not subscribed to x, use link prediction algorithms to find candidates for x. Here is how you can produce your illustrated graph. Assuming your CSV file looks like this: teacher;student;period Mr. 
Smith;Michael;1 Mrs. Oliver;Michael;2 Mrs. Roth;Michael;3 Mrs. Oliver;Michael;4 Mrs. Oliver;Susan;1 Mrs. Roth;Susan;2 then this query should work: LOAD CSV WITH HEADERS FROM 'file:///neo_sample.csv' AS row FIELDTERMINATOR ';' WITH row.teacher AS t, row.student AS s, row.period AS p ORDER BY p WITH s, COLLECT({t:t, p:p}) AS data FOREACH(i IN RANGE(0, SIZE(data)-2) | MERGE(a:Teacher {name: data[i].t}) MERGE(b:Teacher {name: data[i+1].t}) MERGE (a)-[:SENDS {student: s, period: data[i].p}]->(b) ) I can confirm this works as intended. I've looked at this a few times over the past week, and I'm having trouble understanding a few pieces of the syntax. In the FOREACH clause, why do we subtract two from SIZE(data)? Is part of that subtraction to account for dropping the header? Also, I can see syntactically what you did with the period symbol with data[i].t, data[i+1].t, and data[i].p values, but I haven't seen someone do that before in Cypher. Is there a word associated with that dot technique? We subtract 2 because the last data element has no "next teacher" to use for b. The "dot" is just used to reference a property value of a map -- it is standard syntax.
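The index arithmetic in the FOREACH can be confusing because Cypher's RANGE is inclusive at both ends: RANGE(0, SIZE(data)-2) pairs each element i with i+1 and stops before the last one, which has no successor. The same pairing logic in Python (a hypothetical helper, just to illustrate):

```python
def sends_edges(events):
    # events: (teacher, period) pairs for one student.
    # Sort by period, then link each teacher to the next one; the last
    # event has no successor, hence the len(events) - 1 bound.
    events = sorted(events, key=lambda tp: tp[1])
    return [(events[i][0], events[i + 1][0], events[i][1])
            for i in range(len(events) - 1)]
```

For Michael's first three periods, sends_edges([('Mr. Smith', 1), ('Mrs. Oliver', 2), ('Mrs. Roth', 3)]) yields the two SENDS relationships, each tagged with the period it starts from.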
jQuery unable to pick dd/mm/yy hh:mm:ss from an input box I'm using a timepicker plugin to pick values from a text field. jQuery is able to pick the dd/mm/yy but it is not able to pick hh:mm:ss. Further, after firing from another event it is able to pick it; it seems the pick event is fired before the value is updated in the field. How can I get over this problem? You'll have to provide some source code. Could you make a http://jsfiddle.net/ to illustrate your problem? This sounds like a perfectly legitimate question, but you need to provide some more information.
Understanding Registers in OpenCL I am a little confused regarding the usage of registers internally by OpenCL kernels. I am using -cl-nv-verbose to capture the register usage for my kernel. At the moment, my kernel is recording ptxas info: Used 4 registers for some code in the kernel. For the following segment: double a; a = pow(2.0,2.0); if (index != 0) { } the registers used changes to ptxas info: Used 6 registers. I understand that there is nothing in the if block. But if I re-structure again as: double a; if (index != 0) { a = pow(2.0,2.0); } this changes the register usage to ptxas info: Used 15 registers. I am not changing the work-group sizes for the kernel. Perhaps the answer lies in looking at the ptx code but I don't understand it (though I can get it if needed). What I am more interested in is why the register usage more than doubles just by moving a line of code. Any ideas? (index is private) Update: PTX Code before and after Update: Kernel Code: __kernel void butterfC( __global double *sI, __global double *sJ, __global double *sK, const int zR, const int yR, const int xR, unsigned int l1, const int dir, unsigned int type ) { int idX = get_global_id(0); int idY = get_global_id(1); int idZ = get_global_id(2); int BASE = idZ*xR*yR; int STRIDE = 1; int powX = pow(4.0f,l1); int powXm1 = pow(4.0f,l1-1); int yIndex, kIndex; switch(type) { case 1: BASE += idY*xR; yIndex = idX / powXm1 * powX; kIndex = (idX % powXm1) + yIndex; break; case 2: BASE += idX; STRIDE = xR; yIndex = idY / powXm1 * powX; kIndex = idY % powXm1 + yIndex; break; case 3: BASE = idY*xR + idX; STRIDE = xR * yR; yIndex = idZ / powXm1 * powX; kIndex = idZ % powXm1 + yIndex; break; } double a; //a = pow(2.0,2.0); if (kIndex != 0) { a = pow(2.0,2.0); .... do stuff } } My guess is that the compiler compiles the pow call to a constant in the first case, but not in the second case. The inline expansion of the pow function will consume a considerable number of registers.
The only way to tell is to look at the PTX. We need to see the PTX file to understand what is happening. Is this ALL the code? How is a set if index == 0? I suspect the compiler is using extra registers for branch prediction code. I have added the ptx code before and after the change. Tim, index==0 does not make any difference. @OmarKhan: that is a lot of PTX. Much more than the code snippet you showed above. Could you make the kernel code available as well? Okay, sorry for giving things away in chunks. I have updated the question. The amount of PTX is another beef I have; maybe it is related, maybe it is not. All 800 or so lines of PTX code is ONLY due to the presence of the pow() function in the code. If I keep everything intact and just remove the pow() function (appearing 3 times in my code), the PTX is reduced to only TWO lines, .reg .s32 %r<13>; and ret;. I have observed that the PTX code for pow() in CUDA is much much smaller than that for OpenCL.
Merging CSVs with some common columns (though columns are in different places)? I'm looking to merge CSVs that have common columns, but the columns aren't always necessarily in the same place. The CSVs do have a header row. I just want to try to get the data appended to the bottom. For example, each CSV will have the columns "Name," "Favorite Color," and "Lucky Number." Some CSVs will also contain other columns, such as "Favorite Animal." Also, "Name" may not be in the same spot on every CSV. Note: the end goal here is for me to be able to do a pivot chart with the data. I know (or I think) that Excel can make a pivot table/chart from multiple worksheets -- but I'm not sure it can handle 52 worksheets that are plagued by the problems I mentioned above. Any thoughts? Note -- I think I could probably get this to work with Google App Script, but the resulting spreadsheet would be too big for Google Spreadsheets to handle. Here's some examples of what I'm saying... https://gist.github.com/anonymous/4948945 Hello again. What Excel version do you use? Excel 2003 is limited to 256 columns. Later versions allow up to 16,384 columns. Are you interested in an Excel-only solution? This probably involves the ugly MS query tool (guide) to create an SQL-like query. Or is Access also a viable solution? That way it's much easier to merge all CSVs into a single table. This table can be exported or viewed by Excel in every way. Hi again Nixda, and thanks for the good advice last time I posted. I think Excel 2010 has a limit of 1,048,576 rows by 16,384 columns, and I will be within that for this year's data. I'm looking into Access for this year's data, which will exceed those limits. I need to learn more about Access, but I haven't ever used it. Would I still need to do some programming within Access to accomplish my goals? I'm really hoping I don't have to just put in the hours and do this manually. Every one of the 52 sheets has over 60 columns and thousands of rows.
To add to @nixda's comment, Access 2010 may well be the way for you to go, since you would be able to append your CSV files to a common table without worrying about the order of the columns. The number of columns you mention is well within Access 2010's limit of 255 columns per table (I don't know about 2013). Access also has a database size limit of 2 gigabytes (though linking databases is a workaround). However, I've run into serious performance problems when DB size went much over 1 gigabyte. Ok. I bought Access 2013. Am importing the first CSV now. Any good resources for learning this program? Do I need to cut each column into its own table (seeing as sometimes columns are in different spots)? The neat thing about appending an external file to a table in Access is that only the column names have to match, not their locations. In other words, if you have a header row in your CSV with your column names, then you can append to a table with the same names. Access will squawk, though, if the field types don't match or if you have a name in your CSV file that is not in the table. http://stackoverflow.com/questions/10366539/access-truncation-error-when-appending-csv-data-to-tables Yep, there was some serious squawking. I'm currently trying to make a master header file that I'll upload and then try to append every CSV to that... We'll see.
Marking y value using dotted line in matplotlib.pyplot I am trying to plot a graph using matplotlib.pyplot. import matplotlib.pyplot as plt import numpy as np x = [i for i in range (1,201)] y = np.loadtxt('final_fscore.txt', dtype=np.float128) plt.plot(x, y, lw=2) plt.show() It looks something like this: I want to mark the first value of x where y has reached the highest ( which is already known, say for x= 23, y= y[23]), like this figure shown below: I have been searching this for some time now, with little success. I have tried adding a straight line for now, which is not behaving the desired way: import matplotlib.pyplot as plt import numpy as np x = [i for i in range (1,201)] y = np.loadtxt('final_fscore.txt', dtype=np.float128) plt.plot(x, y, lw=2) plt.plot([23,y[23]], [23,0]) plt.show() Resulting graph: Note: I want to make the figure like in the second graph. It's not clear what y[23] would do here. You would need to find out the maximum value and the index at which this occurs (np.argmax). You may then use this to plot a 3 point line with those coordinates. import matplotlib.pyplot as plt import numpy as np; np.random.seed(9) x = np.arange(200) y = np.cumsum(np.random.randn(200)) plt.plot(x, y, lw=2) amax = np.argmax(y) xlim,ylim = plt.xlim(), plt.ylim() plt.plot([x[amax], x[amax], xlim[0]], [ylim[0], y[amax], y[amax]], linestyle="--") plt.xlim(xlim) plt.ylim(ylim) plt.show() Thanks! Sorry about the ambiguity regarding y[23]. Here, I assumed that the highest value of y is at x = 23, so in this case the highest value of y must be y[23]. Ah, ok, that is ambiguous with python's list index notation where y[23] would be the 24th value in y. Is there any way to also display the values? (0.87 and 320 in the second graph)
Adding a permanent PREROUTING rule in iptables using firewall-cmd I'm trying to add a new rule in the PREROUTING chain in iptables (NAT) using firewall-cmd on RHEL 7: $ firewall-cmd --permanent --direct --add-rule ipv4 nat PREROUTING 0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8161 Then I check the iptables via $ iptables -t nat -L: ... Chain PREROUTING (policy ACCEPT) target prot opt source destination PREROUTING_direct all -- anywhere anywhere PREROUTING_ZONES_SOURCE all -- anywhere anywhere PREROUTING_ZONES all -- anywhere anywhere ... Chain PREROUTING_direct (1 references) target prot opt source destination REDIRECT tcp -- anywhere anywhere tcp dpt:http redir ports 8161 ... However, if I run an equivalent iptables command as follows: ... Chain PREROUTING (policy ACCEPT) target prot opt source destination PREROUTING_direct all -- anywhere anywhere PREROUTING_ZONES_SOURCE all -- anywhere anywhere PREROUTING_ZONES all -- anywhere anywhere REDIRECT tcp -- anywhere anywhere tcp dpt:http redir ports 8161 ... Chain PREROUTING_direct (1 references) target prot opt source destination REDIRECT tcp -- anywhere anywhere tcp dpt:http redir ports 8161 I get this additional rule in the Chain PREROUTING and this allows prerouting to work even if the firewall is disabled (i.e., disabling the firewall daemon and running the iptables command). So, my question is two-fold: Is there a firewall-cmd command that does exactly the same as the iptables command above? Can this rule be added permanently via firewall-cmd and stay there even after the firewall daemon is disabled? You are using firewall-cmd with the --direct option, which means it accepts an iptables command. So, you can use the same options as with iptables -t nat to have the same effect, with one exception. Using firewall-cmd this way will add the NAT rule to the PREROUTING_direct chain, while using iptables directly will add the rule to the PREROUTING chain. In the output of iptables -t nat -L, you added the rule twice: once to each chain.
As for the second part of question, firewalld service will remove all defined chains when stopped. So, rules added to PREROUTING_direct will not be available any more. Short answer is No. I get this additional rule in the Chain PREROUTING and this allows prerouting to work even if the firewall is disabled. So I am not completely sure that is true. If the PREROUTING is no longer working when you stop firewalld (and I am not clear why you would do that), then I would assume that is because firewalld is removing its entire policy (e.g. iptables -F/iptables -X). In which case, the fact you added another version of the same rule manually would make little difference. iptables -F would still remove it. Your (first) command should be giving you a permanent pre-routing rule as you wanted, so long as you do not stop firewalld. I would tend to think that should be sufficient, but I may very well be missing something about your needs. Just to be clear: there is no practical difference between having your prerouting rule in the PREROUTING_DIRECT chain or the PREROUTING chain. Both approaches work the same, have the same effect, and absent other changes to your firewall policy, should be doing what you need. The only difference is that firewalld is essentially doing a little admin behind the scenes for you, and placing your actual rule in a user-defined chain rather than a system defined one. tl;dr: I would just keep firewalld running, and use its commands to create and manage your rules. While the firewall-cmd vs 'raw' iptables commands have slightly different visual effects, they have the same effect on the network traffic. You're right - the sentence was a bit unclear. What I meant there was actually running the iptables command after disabling the firewall. This would add the rule correctly.
Method streamDownload does not exist I'm trying to download a file with Laravel, but I got this error! exception: "BadMethodCallException" file "C:\Users\dev\gaaho\vendor\laravel\framework\src\Illuminate\Support\Traits\Macroable.php" line 96 message : "Method streamDownload does not exist." This is what I have in my controller: return response()->streamDownload(function () { echo GitHub::api('repo') ->contents() ->readme('laravel', 'laravel')['contents']; }, 'laravel-readme.md'); Please help! I'm using Laravel 5.5 The function streamDownload is a new feature in 5.6. Then how do I make it available in 5.5? Try $name = 'laravel-readme.md'; $headers = [ 'Content-Disposition' => 'attachment; filename='. $name, ]; return response()->stream(function() { echo GitHub::api('repo') ->contents() ->readme('laravel', 'laravel')['contents']; }, 200, $headers); See if that works. You're basically just manually setting the headers. Thanks @Mark, but I got this error: Call to undefined method Symfony\Component\HttpFoundation\StreamedResponse::header() Try removing the $headers argument from the stream function and put response()->headers->set('Content-Disposition', 'attachment; filename='. $name); before you return your response. Glad to help - hope that answers your question.
Waiting for a chunk to load, before loading more I'm building an app where I need to load data in chunks, I mean first load 5 items and then proceed with another 5, but I can't figure out how to do that. At the moment I chunk up my list of items, so I get a list of lists with 5 items in each. Right now the for-loop just fires away with requests, but I want to wait for the response and then proceed in the for loop. I use Alamofire, and my code looks like this. private func requestItemsForField(fields: [Item], completion: @escaping (_ measurements: Array<Measurement>?, _ success: Bool) -> ()) { let userPackageId = UserManager.instance.selectedUserPackage.id let params = ["userPackageId": userPackageId] for field in fields { let url = apiURL + "images/\(field.id)" let queue = DispatchQueue(label: "com.response-queue", qos: .utility, attributes: [.concurrent]) Alamofire.request(url, method: .get, parameters: params, headers: headers()).responseArray(queue: queue, completionHandler: { (response: DataResponse<[Item]>) in if let items = response.result.value as [Item]? { NotificationCenter.default.post(name: NSNotification.Name(rawValue: "itemsLoadedNotification"), object: nil) completion(items, true) } else { print("Request failed with error: \(response.result.error)") completion(nil, false) } }) } } This is where I chunk up my list and pass it to the above.
private func fetchAllMeasurements(completion: @escaping (_ measurements: [Item]?, _ done: Bool) -> ()) { let fieldSet = FieldStore.instance.data.keys var fieldKeys = [Item]() for field in fieldSet { fieldKeys.append(field) } // Create chunks of fields to load let fieldChunks = fieldKeys.chunkify(by: 5) var measurementsAll = [Measurement]() for fields in fieldChunks { requestItemsForField(fields: fields, completion: { (measurements, success) in if let currentMeasurement = measurements { measurementsAll.append(contentsOf: currentMeasurement) } completion(measurementsAll, true) }) } } I believe that you need to implement something like pagination on the frontend. I suggest you update the backend for pagination; however, if you need to implement it on the frontend check this link -> PagedArray You can try to use dispatch_group or dispatch_barrier. You need to get the number of measurements you will have (for example the server has 34 measurements) with your request and then code something like var serverMeasurementsCount = 1 //should be for first request func requestData() { if self.measurements.count < self.serverMeasurementsCount { ...requestdata { data in self.serverMeasurementsCount = data["serverMeasurementsCount"] self.measurements.append(..yourData) self.requestData() } } or call requestData not inside the completion handler or somewhere else edit: fixed code a bit (serverMeasurementsCount = 1) Instead of using a for loop, it sounds like you need to do something like var index = 0 to start with, and call requestItemsForField() sending in fieldChunks[index] as the first parameter. Then in the completion handler, check to see whether there's another array element, and if so, call requestItemsForField() again, this time sending in fieldChunks[index+1] as the first parameter. But if I call requestItemsForField() again, we'll just have another completion handler inside that?
If my idea works, you won't need to code another completion handler, because each completion handler will spawn another call to requestItemsForField(), and that call of course will also run its own completion handler when it finishes, and so on. It will stop spawning calls when you've reached the end of the array, because the code in your completion handler will only spawn a new call if there's at least one more array element left. The overall effect is that each new call to requestItemsForField() will not occur until the previous call has finished, which was your original goal. To say it more plainly, each time requestItemsForField() reaches its completion handler, another call to requestItemsForField() will be made. This is why only one completion handler needs to be coded. And it won't go on forever because the code in the completion handler will only call the function if there's another array element left to deal with. One solution would be to make a new recursive function to populate the items, and add a new Bool parameter in the closure as isComplete. Then call the function again on completion based on the isComplete boolean. To break the recursion, add a global static variable itemsCountMax; if itemsCountMax == itemsCount, break the recursive function.
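The pattern in the suggested approach (kick off the next chunk from inside the completion handler) is language-agnostic. Here is a Python sketch of the control flow, with the network request stubbed out as a plain callback:

```python
def process_chunks(chunks, request, on_done, index=0, results=None):
    # Process chunks strictly one after another: each completion triggers
    # the request for the next chunk, mirroring the recursive Swift idea.
    if results is None:
        results = []
    if index >= len(chunks):
        on_done(results)  # every chunk finished
        return

    def completion(items):
        results.extend(items)
        process_chunks(chunks, request, on_done, index + 1, results)

    request(chunks[index], completion)
```

In the Swift code, requestItemsForField plays the role of request, and the recursion replaces the for loop so that no two chunks are ever in flight at once.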
floating point hex octal binary I am working on a calculator that allows you to perform calculations past the decimal point in octal, hexadecimal, binary, and of course decimal. I am having trouble, though, finding a way to convert floating point decimal numbers to floating point hexadecimal, octal, binary and vice versa. The plan is to do all the math in decimal and then convert the result into the appropriate number system. Any help, ideas or examples would be appreciated. Thanks! Hmm... this was a homework assignment in my university's CS "weed-out" course. The operations for binary are described in Schaum's Outline Series: Essential Computer Mathematics by Seymour Lipschutz. For some reason it is still on my bookshelf 23 years later. As a hint, convert octal and hex to binary, perform the operations, and convert back from binary. Or you can perform decimal operations and perform the conversions to octal/hex/binary afterward. The process is essentially the same for arithmetic in all positional number systems.
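The fractional-part conversion the answer hints at is the repeated-multiplication algorithm, which works for any base. A minimal Python sketch (the function name is my own; a fixed digit count sidesteps non-terminating expansions like decimal 0.1 in binary):

```python
def frac_digits(value, base, ndigits=12):
    # Convert the fractional part of `value` to `ndigits` digits in `base`
    # by repeatedly multiplying by the base and peeling off the integer part.
    symbols = "0123456789abcdef"
    frac = value - int(value)
    out = []
    for _ in range(ndigits):
        frac *= base
        digit = int(frac)
        out.append(symbols[digit])
        frac -= digit
    return "".join(out)
```

For example, 0.625 is 0.101 in binary and 0.a in hexadecimal. Going the other way (digits back to a decimal fraction) is just a sum of digit * base**-position terms.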
Android - NO SUCH TABLE Why is my Notes table not being created? Can anyone see any error in my query string? I have figured out that I wasn't inserting data into my notes table because it doesn't exist. private static final String CREATE_NOTES = "CREATE TABLE " + NOTE_TABLE + "(" + NOTES_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " + FK_ID + " INTEGER NOT NULL, " + COLUMN_DATE + " DATETIME DEFAULT CURRENT_TIMESTAMP," + COLUMN_TITLE + " TEXT NOT NULL, " + COLUMN_BODY + " TEXT, " + " FOREIGN KEY ("+FK_ID+") REFERENCES "+USER_TABLE+" ("+COLUMN_ID+"));"; public DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); //Deletes database // context.deleteDatabase(DATABASE_NAME); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(CREATE_USERS); db.execSQL(CREATE_NOTES); } I think the data is not inserted in your DB. @Ravi it displays the toast so I thought it would have been? But there may be a chance that the DB insertion failed. Try to debug the insert statement and print the row count (the insert statement returns a row count) in the log to verify the data was inserted successfully. @Ravi okay, I found out the Notes table doesn't exist. Can you check my code to see why it isn't being created? I'll update it now. Do something like the example below - public class DatabaseHelper extends SQLiteOpenHelper { private static DatabaseHelper mDatabaseHelper; public DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); //Deletes database // context.deleteDatabase(DATABASE_NAME); } public static DatabaseHelper getInstance(Context context){ if(mDatabaseHelper==null){ mDatabaseHelper= new DatabaseHelper(context); } return mDatabaseHelper; } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(CREATE_USERS); db.execSQL(CREATE_NOTES); } } and call the getInstance() method before your insert statement. DatabaseHelper mHelper = DatabaseHelper.getInstance(context); mHelper.insert(); It will create a new database if one does not exist.
I have fixed the issue of creating the table, but now I'm trying to get the user's unique data through their id. How can I compare the logged-in user's id to the FK user ID?
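For the follow-up question (filtering notes by the logged-in user's id), the key is a WHERE clause on the foreign-key column. Here is a minimal sketch using Python's sqlite3, not the Android API; the table and column names are illustrative stand-ins for the constants in the question, and the parameterized query is what would translate back to a SQLiteDatabase.query() call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (_id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur.execute("""CREATE TABLE notes (
    notes_id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id  INTEGER NOT NULL,
    title    TEXT NOT NULL,
    body     TEXT,
    FOREIGN KEY (user_id) REFERENCES users (_id))""")
cur.execute("INSERT INTO users (name) VALUES ('alice')")
cur.execute("INSERT INTO users (name) VALUES ('bob')")
cur.execute("INSERT INTO notes (user_id, title) VALUES (1, 'alice note')")
cur.execute("INSERT INTO notes (user_id, title) VALUES (2, 'bob note')")

# Only the rows whose FK matches the logged-in user's id come back.
logged_in_id = 1
rows = cur.execute("SELECT title FROM notes WHERE user_id = ?",
                   (logged_in_id,)).fetchall()
print(rows)  # -> [('alice note',)]
```

The same parameterized-query idea avoids SQL injection on Android as well (pass the id via selectionArgs rather than string concatenation).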
Install vue-cli Globally or Locally

When I install @vue/cli globally, I am forced to use sudo in order to initiate a new project; otherwise I receive a permission-denied error on the node_modules folder, which contains vue-cli. However, if I use sudo in order to initiate the project, I will require these permissions for any tasks in the future. Should vue-cli be installed only locally?

You should use sudo to install @vue/cli globally, but then you should be able to run the CLI as a normal user to create a new project. At least, that is how it is on my Ubuntu machine.

You shouldn't ever use sudo to install packages (there is also such a thing as postinstall bash scripts; a hacked package could intentionally bork your system). All you're doing is installing them into root's environment; you're not making them globally accessible to all users. The fix is to fix the permissions on the node_modules folder you mention with chown so it is owned by your user.

Thank you @Daantje. Yes indeed, this is how it is supposed to work and this is how it is actually working. My apologies, the permission issue was on another directory and not the node_modules one. I had misread the error message. Everything has been fixed and is working as expected.

All external libraries/modules which come as part of a language or project should only be installed locally. This includes packages from npm, PyPI and so on. Providing super-user access to these libraries is frowned upon and is never a good idea. Otherwise you'd have to review the installation scripts, like say package.json, before actually installing/updating globally. In my personal opinion, you should just install @vue/cli in your home directory ($HOME) and export the binaries from the node_modules bin folder to access it globally without providing a sudo password. Avoid the -g flag while installing.
Another alternative, if you just want to test @vue/cli, is to create a project in your home folder just meant for testing, and add @vue/cli and all of the tooling to the devDependencies of the project's package.json file. You could even export the binaries from this project's node_modules and access them globally.

@JF0001 Glad to have helped. The above $HOME would totally depend on how you have set up your system. Please keep that in mind.
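If you do still want a global install without sudo, the usual fix (in line with the chown suggestion above) is to point npm's global prefix at a directory your user owns. This is a configuration sketch; the directory name is just an example:

```shell
# one-time setup: keep global packages under your home directory
mkdir -p "$HOME/.npm-global"
npm config set prefix "$HOME/.npm-global"

# make the installed binaries reachable (add this line to ~/.profile or ~/.bashrc)
export PATH="$HOME/.npm-global/bin:$PATH"

# now a global install needs no sudo
npm install -g @vue/cli
```

This keeps root out of the picture entirely while still giving you the `vue` binary on your PATH.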
building a table with dynamic columns from a key value array in snowflake

I have the following table:

ID   DATA
1    [{"key":"Apple", "value":2}, {"key":"Orange", "value":3}]
2    [{"key":"Apple", "value":5}, {"key":"Orange", "value":4}, {"key":"Cookie", "value":4}]

I'd like to build the following table:

ID   Apple   Orange   Cookie
1    2       3
2    5       4        4

I've tried many combinations of PARSE_JSON and FLATTEN, but none seemed to support this structure.

Sample data:

CREATE OR REPLACE TABLE tab AS
SELECT 1 ID,
       PARSE_JSON('[{"key":"Apple", "value":2}, {"key":"Orange", "value":3}]') AS DATA
UNION
SELECT 2,
       PARSE_JSON('[{"key":"Apple", "value":5}, {"key":"Orange", "value":4}, {"key":"Cookie", "value":4}]');

Step 1 - parse:

SELECT id,
       s.VALUE:key::TEXT AS key,
       s.VALUE:value::TEXT AS value
FROM tab,
     LATERAL FLATTEN(input=>tab.DATA) s;

Output:

ID   KEY      VALUE
1    Apple    2
1    Orange   3
2    Apple    5
2    Orange   4
2    Cookie   4

Step 2 - pivot:

WITH cte AS (
    SELECT id,
           s.VALUE:key::TEXT AS key,
           s.VALUE:value::TEXT AS value
    FROM tab,
         LATERAL FLATTEN(input=>tab.DATA) s
)
SELECT *
FROM cte
PIVOT(MAX(value) FOR KEY IN ('Apple', 'Orange', 'Cookie')) AS p;

Output:

ID   'Apple'   'Orange'   'Cookie'
1    2         3          NULL
2    5         4          4
create .ics file on the fly using javascript or jquery?

Can someone tell me if there is any jQuery plugin to dynamically create an .ics file with values coming from page div values, like:

<div class="start-time">9:30am</div>
<div class="end-time">10:30am</div>
<div class="Location">California</div>

or a JavaScript way to dynamically create an .ics file? I basically need to create an .ics file, pull these values using JavaScript or jQuery, and link the created .ics file to an "ADD TO CALENDAR" link so it gets added to Outlook.

You will need to build it in ICS format. You will also need to convert the date and time zone, e.g. 20120315T170000Z, i.e. yyyymmddThhmmssZ:

msgData1 = $('.start-time').text();
msgData2 = $('.end-time').text();
msgData3 = $('.Location').text();

var icsMSG = "BEGIN:VCALENDAR\nVERSION:2.0\nPRODID:-//Our Company//NONSGML v1.0//EN\nBEGIN:VEVENT\nUID:me@google.com\nDTSTAMP:20120315T170000Z\nATTENDEE;CN=My Self;RSVP=TRUE:MAILTO:me@gmail.com\nORGANIZER;CN=Me:MAILTO:me@gmail.com\nDTSTART:" + msgData1 + "\nDTEND:" + msgData2 + "\nLOCATION:" + msgData3 + "\nSUMMARY:Our Meeting Office\nEND:VEVENT\nEND:VCALENDAR";

$('.test').click(function(){
    window.open("data:text/calendar;charset=utf8," + escape(icsMSG));
});

The above sample will create an .ics file for download. The user will have to open it, and Outlook, iCal, or Google Calendar will do the rest.

FYI to anyone looking: as of 8/8/13 this code doesn't work. The .ics file downloads and opens, but iCal says it can't read the file and errors out :/

NOTE: If you remove the spaces around "\nBEGIN:VEVENT\n" it works, but SO won't let me edit anything under 6 characters :(

So this works in the web browser on iOS, but not when it's wrapped in PhoneGap. Any idea how to make that work?
For those that would want this in a separate question, here: http://stackoverflow.com/questions/18166561/allowing-ics-to-open-in-phonegap-app-for-ios?noredirect=1#comment26614023_18166561

This won't work in IE and Opera; see this question.

This is an old question, but I have some ideas that could get you started (or anyone else who needs to do a similar task). Here is the JavaScript to create the file content and open the file:

var filedata = $('.start-time, .end-time, .Location').text();
window.open("data:text/calendar;charset=utf8," + escape(filedata));

Presumably you'd want to add that code to the onclick event of a form button. I don't have Outlook handy, so I'm not sure if it will automatically recognize the filetype, but it might. Hope this helps.

From what I have found online and on this site, it is not possible to get this to work in IE, as you need to include certain headers to let IE know to download the file. The window.open method works for Chrome and Firefox but not IE, so you may need to restructure your code to use a server-side language to generate and download the ICS file. More can be found in this question.

While this is an older question, I have been looking for a front-end solution as well. I recently stumbled across the ICS.js library, which looks like the answer you're looking for.

This approach worked fine; however, with IE8 the browser couldn't recognize the file type and refused to open it as a calendar item.
To get around this I had to create the code on the server side (exposed via a RESTful service) and then set the response headers as follows:

@GET
@Path("generateCalendar/{alias}/{start}/{end}")
@Produces({ "text/v-calendar" })
public Response generateCalendar(
        @PathParam("alias") final String argAlias,
        @PathParam("start") final String argStart,
        @PathParam("end") final String argEnd) {
    ResponseBuilder builder = Response.ok();
    builder.header("content-disposition", "attachment;filename=calendar.ics");
    builder.entity("BEGIN:VCALENDAR\n<........insert meeting details here......>:VCALENDAR");
    return builder.build();
}

(Note the parameters are declared in the @Path template, so they are read with @PathParam.) This can be served up by calling window.location on the service URL, and it works in Chrome, Firefox and IE8. Hope this helps.
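One detail several of the comments stumble over is the DTSTART/DTEND format: raw page text like "9:30am" has to become a UTC timestamp of the form yyyymmddThhmmssZ. A small helper sketch (the function name is my own) shows that conversion for a JavaScript Date:

```javascript
// Format a JS Date as an iCalendar UTC timestamp: yyyymmddThhmmssZ
function toIcsDate(date) {
  const pad = (n) => String(n).padStart(2, "0");
  return (
    date.getUTCFullYear() +
    pad(date.getUTCMonth() + 1) +   // getUTCMonth() is zero-based
    pad(date.getUTCDate()) +
    "T" +
    pad(date.getUTCHours()) +
    pad(date.getUTCMinutes()) +
    pad(date.getUTCSeconds()) +
    "Z"
  );
}

// The example stamp used in the answer above:
console.log(toIcsDate(new Date(Date.UTC(2012, 2, 15, 17, 0, 0)))); // -> 20120315T170000Z
```

You would still need to parse the "9:30am"-style text into a Date first (e.g. combining it with the event's date), which is left out here.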
lotus notes section from bash text file

I would like to be able to send a mail from unix/bash using mailx and cat, for example:

cat testMailWithSection.txt | mailx -s "testMail" -r "sender@machine" "destination@company" &

However, it would be a long mail, and I'd like to put parts of it in a section. Is this possible with just these tools? Greetings

Can you include the contents of testMailWithSection.txt?

Unfortunately, no. In fact it isn't that easy to create sections programmatically in code either, but that would be the only way. There's no way to convert markup from an email body into sections in the Notes client.
How to set permissions for publishing new Wiki version on VSTS

We've published our Wiki pages in VSTS using 'code as Wiki' (see explanation here). In general we only want to use the master branch for displaying these Wiki pages, but sometimes we want to add a new version using the Publish new version menu option (see screenshot). However, for some of my team members it isn't possible to publish a new version, because that menu option is not visible.

I've searched to see if there are certain permissions controlling this menu option, or any preview feature that should be enabled, but couldn't find any clues on this. Does anyone know how to make this Publish new version option available to everyone? Thanks!

Edit 1: It was suggested by Rodrigo Werlang to check out Wiki security; however, this option is not available for 'code as Wiki', see screenshot.

Just see the prerequisites to publish a Git repository to a wiki:

You must have the permission Create repository to publish code as wiki. By default, this permission is set for members of the Project Administrators group. Anyone who has permission to contribute to the Git repository can add or edit wiki pages. Anyone with access to the team project, including stakeholders, can view the wiki.

And the description of Stakeholder wiki access:

Stakeholders in a project can read wiki pages and view revisions; however, they can't perform any edit operations. For example, stakeholders can't create, edit, reorder, or revert changes to pages. Note: Users with Stakeholder access have read-only permissions to wiki pages. These permissions can't be changed.

So, in your scenario you can follow the steps below to see the Publish new version option:

Change the user access level to Basic if it was Stakeholder before. Add the user to the Project Administrators group, or have the Manage permission set to Allow for Git repositories.
In your wiki, go to Wiki Security. Take a look at the security page and set Contribute, Contribute to pull requests, Create branch, Create tag, Manage notes, and Read.

Thanks for your suggestion. I've checked this, but it doesn't seem to be available for 'code as Wiki' setups; see also my edited question above.

In that case, try to set the permissions on the repository that is used for the 'code as Wiki'. Please let me know if that works, so I can edit my answer.
prevent keyboard from moving specific layout up

I have a TextInputEditText which has another layout below it. When the TextInputEditText is focused I hide the layout below it and show the keyboard. When the user is done writing I hide the keyboard and immediately set the invisible layout back to visible. The problem is that the layout starts above the keyboard, and only moves back down once the keyboard is down. This takes a few milliseconds but creates a bad effect. To clarify, I want to prevent the keyboard from moving a specific layout up.

input.setOnFocusChangeListener(new View.OnFocusChangeListener() {
    @Override
    public void onFocusChange(View v, boolean hasFocus) {
        if (hasFocus) {
            view.findViewById(R.id.colorbtnlayout).setVisibility(View.GONE);
            view.findViewById(R.id.outlinelayout).setVisibility(View.GONE);
            view.findViewById(R.id.bottombuttons).setVisibility(View.GONE);
        } else {
            view.findViewById(R.id.bottombuttons).setVisibility(View.VISIBLE);
        }
    }
});

Welcome to Stack Overflow. For this question, I would suggest either re-writing it or including screenshots/video and being as clear as possible... it's a bit hard to understand what you're asking.

Sorry, all I want is to prevent the keyboard from moving a specific layout up.

Just add android:windowSoftInputMode="adjustPan" to your activity in the manifest.

But I want the TextInputEditText to move up.

Adding android:windowSoftInputMode="adjustPan" will prevent everything from moving.
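For reference, the adjustPan suggestion from the answer goes on the activity element in AndroidManifest.xml; the activity name below is a placeholder:

```xml
<!-- AndroidManifest.xml: do not resize the window when the keyboard opens;
     pan it only as far as needed to keep the focused field visible -->
<activity
    android:name=".YourActivity"
    android:windowSoftInputMode="adjustPan" />
```

With adjustPan the window is panned rather than resized, so views are not pushed up individually; as the last comment notes, the trade-off is that this behavior applies to the whole window, not to one specific layout.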
Show that every prime number of the form $a+b$, with $a,b$ divisors of $n$, is distinct and does not divide $n$

Recently, I found this problem:

Let $n$ be a natural number. Suppose that its positive divisors can be partitioned into tuples of the form $(a,b)$ such that each sum $a+b$ is a prime number. Show that all such prime numbers are distinct and that none of them divides $n$.

I have tried to solve this problem for hours, but I can't completely figure out a solution. I think that with $n=p^k$ the problem is vacuous, because the set $D=\{1,p,p^2,\cdots,p^k\}$ can't be partitioned into such tuples: no sum $a+b$ can be prime. Any idea of how to proceed?

Examples are hard to come by... I guess $n=10$, pairing up the divisors as $(1,10)$ and $(2,5)$, works. Or $n=6$ with $(1,6)$ and $(2,3)$. Maybe there are others of this type...

It is easy to prove that such an $n$ must be of the form $n=2m$ with $m$ odd. $30$ is a non-trivial example, as we can take $(1,30),(2,15),(5,6),(3,10)$.

@lulu: can we generalize the reasoning to $n=2\cdot \prod_{i} p_i^{k_i}$, where $p_i$ is the $i$-th prime factor of $n$?

I still don't have any sensible way to generate examples. No proof, for example, that there are infinitely many such $n$. Even to do the $2p$ case would require a proof that there are infinitely many twin primes. Worse, you'd need to prove that there are infinitely many Sophie Germain primes that are also the lesser of a twin prime pair. To be sure: one doesn't need to produce infinitely many examples to settle the claim. I am just trying to understand the condition.

@lulu: can't we consider only the case where $n=2\cdot p_1 \cdot p_2$ and then show that the property holds?

It does not hold generally. For instance, $2\times 5\times 11=110$ does not work, since $110+1$ is not prime.

"I think that with $n=p^k$ the problem can't be solved because the set $D=\{1,p,p^2,\cdots,p^k\}$ can't be partitioned into tuples, because no sum $a+b$ can be a prime." That's fine.
It's vacuously true if there are no partitions whose pair sums are all prime.

So the pairs don't need to multiply to $n$? If we ignored the requirement that $a+b$ is prime and just partitioned, then partitioning the divisors of $18$ we could do $(1,2),(3,6),(9,18)$? We aren't required to do $(1,18),(2,9),(3,6)$?

Start with the divisor $n$ itself: since any divisor other than $1$ has at least one prime factor in common with $n$, the divisor paired with $n$ must be $1$, i.e., you have the pair $(n,1)$ with $n + 1$ being prime.

Next, consider any prime $p$ where $p \mid n$, and set $a = \frac{n}{p}$. As the question states, there's another divisor $b$ where $a + b$ is prime. If $n$ had more than one factor of $p$, then $a$ would have the same set of prime factors as $n$, so any $b \gt 1$ (and $1$ is already matched up with $n$) would share at least one prime factor with $a$, and then $a + b$ couldn't be prime. This shows $n$ can have only one factor of $p$. Also, since all other prime factors of $n$ divide $a$, the only choice left is $b = p$ itself, so that $a + b$ can be prime. This shows $n$ is square-free, with some $m \ge 1$ distinct primes, where $$n = \prod_{i=1}^{m}p_i \tag{1}\label{eq1A}$$

Note that if $a$ is $n$ divided by the product of $2$ primes, each of those primes individually has already been used, and no other prime factor may be used since it is a factor of $a$, so the other value, i.e., $b$, must be the product of those $2$ primes. In general, you can prove by induction on the number of prime factors that, since all divisors with fewer prime factors are already used, each divisor $b$ of $n$ must be paired with $n$ divided by that divisor, i.e., $a = \frac{n}{b}$, say with $a \gt b$ for uniqueness. I'll leave proving this to you.
As for showing the constructed primes are distinct: assume you have $(\frac{n}{b_1},b_1)$ and $(\frac{n}{b_2},b_2)$ with $b_1 \neq b_2$ and

$$\begin{equation}\begin{aligned} \frac{n}{b_1} + b_1 & = \frac{n}{b_2} + b_2 \\ b_2(n) + b_1^2b_2 & = b_1(n) + b_1b_2^2 \\ b_2(n) - b_1(n) & = b_1b_2^2 - b_1^2b_2 \\ (b_2 - b_1)n & = b_1b_2(b_2 - b_1) \\ n & = b_1b_2 \end{aligned}\end{equation}\tag{3}\label{eq3A}$$

This means $\frac{n}{b_1} = b_2$ and $\frac{n}{b_2} = b_1$, so the two pairs are the same with their values just switched around. This confirms that all of the $a + b$ primes must be distinct.

As for showing that none of these primes divides $n$: first note that $n + 1 \nmid n$. For the others, suppose one of them did divide $n$, so that for some $b_1$ dividing $n$ and some integer $k \ge 1$,

$$\begin{equation}\begin{aligned} k\left(\frac{n}{b_1} + b_1\right) & = n \\ kn + kb_1^2 & = nb_1 \\ kn & = b_1(n - kb_1) \end{aligned}\end{equation}\tag{4}\label{eq4A}$$

Since $b_1 \mid n$, the RHS has at least $2$ factors of $b_1$. On the LHS, as $n$ has only $1$ factor of $b_1$, this means $k$ must have at least one factor of $b_1$, so $k = rb_1$ for some integer $r \ge 1$. However, this would then give

$$\begin{equation}\begin{aligned} rb_1\left(\frac{n}{b_1} + b_1\right) & = n \\ rn + rb_1^2 & = n \end{aligned}\end{equation}\tag{5}\label{eq5A}$$

However, with $r \ge 1$, the LHS is $\gt n$, so it cannot equal $n$. This shows the assumption must be incorrect, which proves $\frac{n}{b_1} + b_1 \nmid n$, i.e., none of the primes constructed from these sums of factors divides $n$.

As indicated in several question comments by lulu, since $n + 1$ is prime and $n \neq 1$, $n$ must be even. Since it's square-free, this means $n = 2q$ for some odd $q$. Several examples which work are $n = 2(5)$ and $n = 2(3)(5)$, although I also don't know if there are infinitely many such $n$.
You said "This shows that $n$ can only have one factor of $p$". How does that follow, though? If, say, I have $n=3^2 \cdot 2^2$, can't I let $a=3^2$ and $b=2^2$, with $n$ clearly not square-free in that case? I could let one of my divisors 'snatch up' all the primes having a power $\geq 2$ in the prime factorisation, and it could still be that $a+b$ is prime.

@YipJungHon Yes, you can have $n = 3^2 \times 2$ and let $a = 3^2$ and $b = 2$. However, the question says that all positive divisors must be able to be matched up so that their sums are primes. For $a = 3 \times 2$, note $b$ can only be $1$ for $a + b$ to be prime, but $1$ must be matched up with $n$, as I state at the start of my answer, so it's not available. Thus there are no available values of $b$ such that $a + b$ would be prime. This is why $n$ can have only one factor of $p$.

Oh, you're right, though I didn't interpret the question as needing all the positive divisors to work; I assumed that just one tuple was enough.

The proof that $n$ is square-free is wrong. If $p$ divides $n$ several times, it is not true that all other divisors are divisible by $p$. The smallest example: $n=36, p=2$. The proof that the sums do not divide $n$ is also wrong. It is not even clear what divides what in your proof.

@JCAA I never wrote that "all other divisors are divisible by $p$". With your example of $n = 36, p = 2$, then $a = \frac{36}{2} = 18$. Note $n = 36 = 2^2 \times 3^2$ and $a = 18 = 2 \times 3^2$. The only $b \mid n$ where $a + b$ is prime is $b = 1$, but that is not available since you need $36 + 1 = 37$. As for the sums dividing $n$, you are correct that I made a mistake by using the factor $k$ on the wrong side. I've now corrected that to appropriately show that none of these constructed primes can divide $n$. Thank you for pointing out my error.

You only consider sums $p+n/p$ where $p$ is prime.
@JCAA As I showed, and as you also showed in your answer, the pairing of $a$ and $b$ must be such that one is a prime and the other is $n$ divided by that prime, i.e., all $a + b$ are of the form $p + \frac{n}{p}$ where $p \mid n$. As such, I only consider such sums, as there are no other sums to work with. I'm not quite sure what the point of your comment is.

It is not true that all divisors are either primes or of the form $n/p$. For example, $210$ has divisors $6$ and $35$.

@JCAA I'm just having one of those days! You're right that not all divisors are primes or $\frac{n}{p}$, e.g., when the number of prime factors is $\gt 2$. Thanks, once again, for pointing out another one of my mistakes. I've corrected my answer to address that issue.
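The claims are easy to sanity-check numerically. For the $n = 30$ example from the comments, with the pairing $(1,30),(2,15),(3,10),(5,6)$, a short Python check confirms the pair sums are prime, mutually distinct, and non-divisors of $n$ (and, as the answer argues, that each pair multiplies to $n$):

```python
n = 30
pairs = [(1, 30), (2, 15), (3, 10), (5, 6)]

def is_prime(m):
    """Trial division, fine for small m."""
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

sums = [a + b for a, b in pairs]
assert all(is_prime(s) for s in sums)        # every pair sums to a prime
assert len(set(sums)) == len(sums)           # the primes are distinct
assert all(n % s != 0 for s in sums)         # none of them divides n
assert all(a * b == n for a, b in pairs)     # each pair multiplies to n
print(sums)  # -> [31, 17, 13, 11]
```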
Difference between Java SE/EE/ME?

Which one should I install when I want to start learning Java? I'm going to start with some basics, so I will write simple programs that create files and directories, edit XML files, and so on, nothing too complex for now. I guess Java SE (Standard Edition) is the one I should install on my Windows 7 desktop. I already have the Komodo IDE, which I will use to write the Java code.

You should start with learning Java SE. Java EE can be somewhat bewildering at first. When you're ready for it, take a look at this excellent Java EE 7 overview page to get started. The Java EE 7 Oracle tutorial in particular is a good place to begin.

Java SE = Standard Edition. This is the core Java programming platform. It contains all of the libraries and APIs that any Java programmer should learn (java.lang, java.io, java.math, java.net, java.util, etc.).

Java EE = Enterprise Edition. From Wikipedia:

The Java platform (Enterprise Edition) differs from the Java Standard Edition Platform (Java SE) in that it adds libraries which provide functionality to deploy fault-tolerant, distributed, multi-tier Java software, based largely on modular components running on an application server.

In other words, if your application demands a very large-scale, distributed system, then you should consider using Java EE. Built on top of Java SE, it provides libraries for database access (JDBC, JPA), remote method invocation (RMI), messaging (JMS), web services, and XML processing, and defines standard APIs for Enterprise JavaBeans, servlets, portlets, Java Server Pages, etc.

Java ME = Micro Edition. This is the platform for developing applications for mobile devices and embedded systems such as set-top boxes. Java ME provides a subset of the functionality of Java SE, but also introduces libraries specific to mobile devices. Because Java ME is based on an earlier version of Java SE, some of the newer language features introduced in Java 1.5 (e.g. generics) are not available.
If you are new to Java, definitely start with Java SE. I would disagree with recommending an IDE to someone who has never coded Java before. Write a few using the command line first so you can have a fighting chance at understanding what CLASSPATH means. If you use Eclipse before you understand Java, that's two big things you're ignorant of. @duffymo IMO it is actually a good idea to have an IDE recommendation, because coding with a plain text editor does not give any benefit in learning a new language, having intellisense and autocompletion is an invaluable aid for a programmer already knowing other ecosystems to became familiar with the new environment. @duffymo I am still reading this after 5 years as Google brought me here so I guess there is still a point in starting a discussion. In this specific instance though, I concur with your arguments about IDEs. I'm gratified by your agreement, but this is a bike shed argument that will last forever. I see nothing wrong with discussing difficult and timeless issues. Java is weird... It seems like the community spends more time trying to figure out stuff like licensing, differences in editions, strange tooling, buggy IDEs, lawsuits, etc. rather than actually improving the language and building useful software. Is only JDK edition specific? or JRE and JVM are different for SE & EE? Java EE is dead now. Jakarta EE is replacing Java EE: https://www.infoq.com/podcasts/milinkovich-jakarta-ee/?itm_source=infoq&itm_campaign=user_page&itm_medium=link @sparse Isn't it just being renamed? 
Here are some differences in terms of APIs.

Java SE includes the following APIs and many more:

- applet
- awt
- rmi
- jdbc
- swing
- collections
- xml binding
- JavaFX (merged into Java SE 8)
- Java 8 Collections Streaming API
- Java 9 Reactive Streams API
- Java 9 HTTP/2 API

Java EE includes the following APIs and many more:

- servlet
- websocket
- java faces
- dependency injection
- ejb
- persistence
- transaction
- jms
- batch api

Java ME includes the following APIs and many more:

- Wireless Messaging
- Java ME Web Services
- Security and Trust Services API
- Location
- Mobile XML API

Hope this helps.

As presented, does that mean that what's in SE isn't included in EE? And what's in ME isn't in EE? It seems that if you want Wireless Messaging, for example, you need ME and it's not available in EE. Is this correct?

As of Java EE Version 6, is the Collections API a part of Java EE too?

Java SE is the foundation on which Java EE is built. Java ME is a subset of SE for mobile devices. So you should install Java SE for your project.
The Java Platform, Standard Edition Development Kit (JDK) is an official implementation of the Java SE specification, provided by Oracle. There are also other implementations, like OpenJDK and IBM's J9. People new to Java download a JDK for their platform and operating system (Oracle's JDK is available for download here).

This is true for Java 6. The Java 7 documentation says that there are 3 platforms and that JavaFX is part of Java SE.

Java SE is for developing desktop applications. Java EE is used for developing web applications and large-scale enterprise applications.

If the JDK is an implementation of Java SE, what would be an implementation of Java EE, and similarly Jakarta EE?

Application servers, such as GlassFish or WildFly. https://jakarta.ee/compatibility/

As I came across this question, I found the information provided in Oracle's tutorial very complete and worth sharing:

The Java Programming Language Platforms

There are four platforms of the Java programming language:

- Java Platform, Standard Edition (Java SE)
- Java Platform, Enterprise Edition (Java EE)
- Java Platform, Micro Edition (Java ME)
- JavaFX

All Java platforms consist of a Java Virtual Machine (VM) and an application programming interface (API). The Java Virtual Machine is a program, for a particular hardware and software platform, that runs Java technology applications. An API is a collection of software components that you can use to create other software components or applications. Each Java platform provides a virtual machine and an API, and this allows applications written for that platform to run on any compatible system with all the advantages of the Java programming language: platform-independence, power, stability, ease-of-development, and security.

Java SE

When most people think of the Java programming language, they think of the Java SE API. Java SE's API provides the core functionality of the Java programming language.
It defines everything from the basic types and objects of the Java programming language to high-level classes that are used for networking, security, database access, graphical user interface (GUI) development, and XML parsing. In addition to the core API, the Java SE platform consists of a virtual machine, development tools, deployment technologies, and other class libraries and toolkits commonly used in Java technology applications.

Java EE

The Java EE platform is built on top of the Java SE platform. The Java EE platform provides an API and runtime environment for developing and running large-scale, multi-tiered, scalable, reliable, and secure network applications.

Java ME

The Java ME platform provides an API and a small-footprint virtual machine for running Java programming language applications on small devices, like mobile phones. The API is a subset of the Java SE API, along with special class libraries useful for small-device application development. Java ME applications are often clients of Java EE platform services.

JavaFX

JavaFX is a platform for creating rich internet applications using a lightweight user-interface API. JavaFX applications use hardware-accelerated graphics and media engines to take advantage of higher-performance clients and a modern look-and-feel, as well as high-level APIs for connecting to networked data sources. JavaFX applications may be clients of Java EE platform services.

@IrfanNasim I know that it's copied from Oracle, and I mentioned that!! Did you read the answer from the top?! And do you know that on SO, when you provide a link you should also copy the important information, because once the link is out of date or not working, people can still read what was in it!! It's weird that you have 198 rep and still don't know the rules!!

I guess Java SE (Standard Edition) is the one I should install on my Windows 7 desktop.

Yes, of course. Java SE is the best one to start with. BTW, you must learn the Java basics.
That means you must learn some of the libraries and APIs in Java SE.

Difference between the Java Platform Editions:

- Java Micro Edition (Java ME): A highly optimized runtime environment targeting consumer products (pagers, cell phones). Java ME was formerly known as Java 2 Platform, Micro Edition, or J2ME.
- Java Standard Edition (Java SE): Java tools, runtimes, and APIs for developers writing, deploying, and running applets and applications. Java SE was formerly known as Java 2 Platform, Standard Edition, or J2SE. (Everyone/beginners start from this.)
- Java Enterprise Edition (Java EE): Targets enterprise-class server-side applications. Java EE was formerly known as Java 2 Platform, Enterprise Edition, or J2EE. It is now known as Jakarta EE, after its donation by Oracle Corp to the Eclipse Foundation.

Another duplicate of this question. Lastly, about the JVM/JRE/JDK confusion:

- JVM (Java Virtual Machine): The JVM is the part of both the JDK and the JRE that translates Java byte codes and executes them as native code on the client machine.
- JRE (Java Runtime Environment): The environment provided for Java programs to be executed in. It contains a JVM, class libraries, and other supporting files. It does not contain any development tools such as a compiler, debugger, and so on.
- JDK (Java Development Kit): The JDK contains the tools needed to develop Java programs (javac, java, javadoc, appletviewer, jdb, javap, rmic, ...) plus a JRE to run the programs.
- Java SDK (Java Software Development Kit): An SDK comprises a JDK and extra software, such as application servers, debuggers, and documentation.
- Java SE: The Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers (same as the SDK).
- J2SE, J2ME, J2EE: Any Java edition from 1.2 to 1.5.

Read more about these topics: Differences between JDK and Java SDK; Java JDK, SDK, SE?; What is the difference between JVM, JDK, JRE & OpenJDK?

Yes, Java SE is where to start. All the tasks you mention can be handled with it.
Java ME is the Mobile Edition, and EE is the Enterprise Edition; these are specialized/extended versions of the Standard Edition.

Java SE (Standard Edition) is for building desktop apps. Java ME (Micro Edition) is for old mobile devices. Java EE (Enterprise Edition) is for developing web-based applications.

Yes, you should start with Java SE. Java EE is for web applications and Java ME is for mobile applications; both of these build on SE.

Developers use different editions of the Java platform to create Java programs that run on desktop computers, web browsers, web servers, mobile information devices (such as feature phones), and embedded devices (such as television set-top boxes).

- Java Platform, Standard Edition (Java SE): The Java platform for developing applications, which are stand-alone programs that run on desktops. Java SE is also used to develop applets, which are programs that run in web browsers.
- Java Platform, Enterprise Edition (Java EE): The Java platform for developing enterprise-oriented applications and servlets, which are server programs that conform to Java EE's Servlet API. Java EE is built on top of Java SE.
- Java Platform, Micro Edition (Java ME): The Java platform for developing MIDlets, which are programs that run on mobile information devices, and Xlets, which are programs that run on embedded devices.

If I were you, I would install the Java SE SDK. Once it is installed, make sure you have the JAVA_HOME environment variable set, and add the %JAVA_HOME%\bin directory to your path.

Java SE is used for desktop applications and simple core functions. Java EE is used for desktop applications, but also web development, networking, and advanced things.

EE (Enterprise Edition): This Java edition is specifically designed for enterprise/business applications, where we have to deal with a number of different servers, with importance placed on security, transaction management, etc.

SE (Standard Edition): This edition is for standard applications.
ME:- Micro Edition:- This Java edition is specifically designed for mobile phone platforms. More importance is given to memory management, as memory resources are limited on mobiles. So basically Java has different editions for different requirements. The SE (JDK) has all the libraries you will ever need to cut your teeth on Java. I recommend the NetBeans IDE as this comes bundled with the SE (JDK) straight from Oracle. Don't forget to set the "path" and "classpath" variables, especially if you are going to try the command line. On a 64-bit system, insert the "System Path" variable, e.g. C:\Program Files (x86)\Java\jdk1.7.0, before C:\Windows\system32; to direct the system to your JDK. Hope this helps.
How to get complete content for a single post that was split into pages I have a WordPress post which was divided into multiple pages using the page break <!--nextpage--> tag, and its pagination functionality is working as expected for the single post page. www.example.com/url-post-content/1 www.example.com/url-post-content/2 www.example.com/url-post-content/3 But the problem is my category page, where I am showing all posts for a category with pagination (a single post on each page). I want to show the complete content of the post (without pagination) on my category post page, because here I am showing a single post per page for the category, but the the_content function in my loop returns the content of only the first page for a post which was divided into pages using the <!--nextpage--> tag. I need your help to find a way to display the complete content of a post which was split into pages using the page break tag. Unclear question; what are you trying to ask? Give some more details to help. If a WordPress post is split into multiple pages using the tag, it doesn't show the whole content of the post. How can I see all the content of that particular post? Will you contact me in chat please? You can get the complete content of a post using the code below. Retrieve the post data using the get_post function. $post_object = get_post($post_id); // Now you have the post object and can get the complete post content from it $post_content = $post_object->post_content; Use do_shortcode or apply_filters to properly format your post content. echo do_shortcode($post_content);
How to efficiently determine whether a given ladder is valid? At my local squash club, there is a ladder which works as follows. At the beginning of the season we construct a table with the name of each member of the club on a separate line. We then write the number of games won and the number of games played next to each name (in the form: player wins/games). Thus at the beginning of the season the table looks like this: Carol 0/0 Billy 0/0 Alice 0/0 Daffyd 0/0 Any two players may play a match, with one player winning. If the player nearest the bottom of the table wins, then the position of the players is switched. We then repeat step 2., updating the number of wins and games next to each player. For example, if Alice beats Billy, we have Carol 0/0 Alice 1/1 Billy 0/1 Daffyd 0/0 These matches go on throughout the season and eventually result in players being listed in approximate strength order. Unfortunately, the updating happens in a rather haphazard way, so mistakes are made. Below are some examples of invalid tables, that is, tables which could not be produced by correctly following the above steps for some starting order (we have forgotten the order we used at the beginning of the season) and sequence of matches and results: Alice 0/1 Billy 1/1 Carol 0/1 Daffyd 0/0 Alice 2/3 Billy 0/1 Carol 0/0 Daffyd 0/0 Alice 1/1 Billy 0/2 Carol 2/2 Daffyd 0/1 Given a table, how can we efficiently determine whether it is valid? We could start by noting the following: The order of the names doesn't matter, since we have forgotten the original starting order. The total number of wins should be half the sum of the number of games played. (This shows that the first example above is invalid.) Suppose the table is valid. Then there is a multigraph - a graph admitting multiple edges but no loops - with each vertex corresponding to a player and each edge to a match played. 
Then the total number of games played by each player corresponds to the degree of the player's vertex in the multigraph. So if there's no multigraph with the appropriate vertex degrees, then the table must be invalid. For example, there is no multigraph with one vertex of degree one and one of degree three, so the second example is invalid. [We can efficiently check for the existence of such a multigraph.] So we have two checks we can apply to start off with, but this still allows invalid tables, such as the third example. To see that this table is invalid, we can work backwards, exhausting all possible ways the table could have arisen. I was wondering whether anyone can think of a polynomial time (in the number of players and the number of games) algorithm solving this decision problem? Perhaps there is a Havel-Hakimi-type theorem for directed multigraphs... Why can the third example not be possible? What if Alice won over Billy, Carol won over Billy and Carol won over Daffyd. Then Alice won 1 out of 1 games, Billy won 0 of 2 games, Carol won 2 of 2 games and Daffyd won 0 of 1 games? utdiscant: After each game, if the lower player wins, the players are switched. To show that the third example is possible, you would need to give a starting configuration and a sequence of games - that is, with an order - resulting in the given table. aryabhata: Thanks - yes, that would be a useful step. Unfortunately, it sounds rather hard... Give us your exact solution, because I think the Havel-Hakimi theorem may work. And perhaps you did something like what the theorem states. What about tie games? A suggestion to study/solve this: specify it as a SAT problem, then try many random cases and see if any are hard for a standard solver. If not, maybe it's a constrained subset in P. @vzn I state in the question that one player wins each game, so there are no ties. This is not a complete answer. I give a simpler statement of the problem and some remarks.
We start with a graph where vertices are labeled with $[n]$. We have an operation that adds a directed edge from $v$ to $u$ to the graph, and if $label(v)<label(u)$ switches their labels. Given a directed multigraph $G$ with $n$ vertices and $e$ edges, check if it can be obtained using the operation above. It is easy to see that the problem is in $\mathsf{NP}$: a certificate is a (polynomial size) sequence of operations resulting in $G$. Observation It seems that we can assume without loss of generality that all edges to the last vertex are added at the end of the sequence and all edges from it are added at the start of the sequence. This can be generalized to other vertices. Assume that we have removed all vertices with labels larger than $label(v)$. All edges to $v$ are added at the end of the sequence and all edges from $v$ are added at the start of the sequence. I think it should be possible to combine this observation with Havel-Hakimi to give a polynomial time algorithm. Hi. Thank you. Would you mind stating your observation again phrased in the context of ladders? I think there's a counterexample for a graph of order 3, but perhaps I misread. @Ben, I think it would be the following: you can assume that the last person on the ladder played all of the games that he won at the start of the tournament and played all of the games that he lost at the end of the tournament. Let me know if there is a counter-example to this, I haven't checked this carefully. Unfortunately, there are ladders like this one: A 2/2 B 0/1 C 0/1 @Ben, I think the example is consistent with what I wrote, i.e. it is not a counter-example to the observation. The ladder is valid. Let's assume that the last game played was a loss for C. Then prior to the last game, the ladder must have looked like this: C 0/0 B 0/1 A 1/1, but this ladder is invalid. Hence, we can't assume the last game was a loss for C. @Ben, no, we get A 1/1 B 0/1 C 0/0 Oh yes, whoops.
Let's see whether anyone finds a proof. I haven't solved the problem, but I have partial results, the statements of which are given below. I'll write up the proofs if anyone's interested. Proposition. Suppose that the ladder (1) contains more than one player (2) contains an equal number of wins and losses; and (3) is such that each player has won at least one game and lost at least one game. Then the ladder is valid. Let $W_i$ be the number of wins of player $i$, $L_i$ the number of losses of player $i$ and $R_i$ the rank (where higher is better). Proposition (due to Fabio Parisi). If $L_i=0$ and the ladder is valid, then $$W_i \le \sum_{k:R_k > R_i, W_k > 0} (L_k - 1)^+ + \sum_{k:R_k<R_i}L_k,$$ and a corresponding bound holds for $L_i$ in the case that $W_i=0$.
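The two necessary (but not sufficient) sanity checks described in the question (total wins equal half the games played, and a loopless multigraph with the given degree sequence must exist) can be sketched in Python. The multigraph test uses the standard fact that such a multigraph exists iff the degree sum is even and no degree exceeds the sum of the others; the function and its name are my own sketch, not from the thread.

```python
def basic_checks(ladder):
    """Run the two necessary (not sufficient) validity checks from the question.

    `ladder` is a list of (wins, games) pairs, top to bottom.
    Returns False if either check rules the table out, True otherwise.
    """
    wins = [w for w, g in ladder]
    games = [g for w, g in ladder]
    # Check 1: total wins must be half the total games played, since every
    # match produces exactly one win and is counted twice in `games`.
    if 2 * sum(wins) != sum(games):
        return False
    # Check 2: a loopless multigraph with degree sequence `games` exists
    # iff the degree sum is even and no degree exceeds the sum of the others.
    if sum(games) % 2 != 0 or 2 * max(games, default=0) > sum(games):
        return False
    return True

# The three invalid examples from the question:
print(basic_checks([(0, 1), (1, 1), (0, 1), (0, 0)]))  # False (fails check 1)
print(basic_checks([(2, 3), (0, 1), (0, 0), (0, 0)]))  # False (fails check 2)
print(basic_checks([(1, 1), (0, 2), (2, 2), (0, 1)]))  # True, yet invalid
```

On the three invalid examples, the first two fail these checks while the third passes both, which is exactly why a stronger test is needed.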
Lost all data after chkdsk on NTFS partition. How do I get it back? I have a multiboot system with a shared NTFS partition (300 GB) across OSes (Windows 7, Debian, Arch). Recently I booted into Windows and it prompted me to run CHKDSK, and after that I can't see any of my data. Not only me, but even TestDisk fails to see any deleted files (in the Advanced > Undelete option). I've recovered chunks and chunks of data using PhotoRec, but the problem is that all files are scattered across thousands of recup.* directories and are weirdly named. In the partition there currently lies a chkdsk.log file that proudly says that it deleted 22 entries (these are the names of the directories I had in that partition). Is there any way I can resurrect this partition? Not that it helps, but did you hibernate Windows before using Linux? If so, that's never a good idea. @Karan No I didn't hibernate, but my laptop ran out of battery and died abruptly. Something very similar just happened to me. How did this turn out for you, OP? I would sincerely appreciate any insights. I guess that your only option is a recovery utility such as the ones that you have used. I'd add the free Recuva from Piriform. There are also other freeware and commercial disk tools that can prove useful, but their success is not guaranteed. All the file bytes should be there on the disk, or at least a large part of them. Unfortunately, the file names are stored in the NTFS master file table, so, if it got corrupted, there is no way to recover them, as chkdsk's actions cannot be undone.
Object's properties undefined Playing around with the sample car-auction network. I am not sure why the "Offer" transaction works and shows all the properties, but the "AmendOffer" transaction shows properties starting with $ as undefined. Is there a way to translate a transaction from "AmendOffer" to "Offer"? I tried to make a copy of "AmendOffer" and then delete the property "oldTransactionID" on the copied one to make it the same as the "Offer" transaction. abstract transaction OfferTrans { o Double bidPrice --> VehicleListing listing --> Member member } transaction Offer extends OfferTrans { } transaction AmendOffer extends OfferTrans { o String oldTransactionID } Assuming you have TP functions to match your transactions and using your model, in Composer Playground you will get the transactions (you modeled) as shown below - in Historian. { "$class": "org.acme.vehicle.auction.Offer", "bidPrice": 10, "listing": "resource:org.acme.vehicle.auction.VehicleListing#L1", "member": "resource:org.acme.vehicle.auction.Member#1", "transactionId": "d133abab-cd96-4f15-ac06-ca7a065f2e84", "timestamp": "2018-06-04T10:38:17.042Z" } { "$class": "org.acme.vehicle.auction.AmendOffer", "oldTransactionID": "3333", // whatever "bidPrice": 0, "listing": "resource:org.acme.vehicle.auction.VehicleListing#L1", "member": "resource:org.acme.vehicle.auction.Member#1", "transactionId": "3576a2f2-6264-4490-9b79-ef0d612ed07a", "timestamp": "2018-06-04T10:37:50.854Z" } If you DON'T want to store 'oldTransactionId' as a mandatory field - simply make it optional in your model file.
transaction AmendOffer extends OfferTrans { o String oldTransactionID optional } e.g. the following would then work as an AmendOffer transaction { "$class": "org.acme.vehicle.auction.AmendOffer", "bidPrice": 10, "listing": "resource:org.acme.vehicle.auction.VehicleListing#L1", "member": "resource:org.acme.vehicle.auction.Member#1" } If you wanted to update the Offers[] array in that sample network (as it exists today) you would obviously provide the appropriate transaction code to do so. Thanks for replying! In the case when there is an amendment to an order, I would like to save the amendment as an "Offer" transaction - so the oldTransactionID should not be saved. I had been trying to deep copy the passed-in transaction and delete the property "oldTransactionID"; I probably missed something in the deep copy process and am getting undefined values. I have updated my answer. It seems to me you just need to make the property optional. Thanks Paul! I get your point, but I do want to pass the oldTransactionId at all times for the AmendOffer transactions. Is there a way to convert this to an Offer transaction? Let's say the original transaction is not there to be amended; instead I would like to place a new Offer transaction. This is a blockchain (with a deployed business network/smart contract). Transactions are added to a LEDGER. A LEDGER is a history/chronology of transactions that update the state of assets or participants (in a business network), to get to a world state (for an asset/participant) that holds the last known committed value for any given key - an indexed view into the chain’s transaction log (with its history of Offers or AmendOffers). You don't 'amend' a transaction - that's the whole point of a blockchain. Now to your question: (contd) I think you mean an amendment to an "offer" (not order).
You can/could have one 'Offer' transaction class (IMO) and save the history of Offer transaction IDs (the transactions themselves are already in the ledger) and offer amounts etc. in an auctionable asset if you wish. It's pretty easy to see then what 'transaction' amended a previous transaction (if at all) - on the asset. I think you're making it more complicated than it needs to be. Yes, you are correct. What I tried to do defeats the purpose of a ledger; bear in mind that this is a blockchain. I overthought it after all. Thanks for being patient!
MultiThreading issues while programming for Android I am developing on Android but the question might be just as valid on any other Java platform. I have developed a multi-threaded app. Let's say I have a first class that needs to do a time-intensive task, thus this work is done in another Thread. When it's done, that same Thread will return the time-intensive task result to another (3rd) class. This last class will do something and return its result to the first, starting class. I have noticed though that the first class will be waiting the whole time, maybe because this is some kind of loop? Also I'd like the Thread class to stop itself, as in when it has passed its result to the third class it should simply stop. The third class has to do its work without being "encapsulated" in the second class (the Thread one). Does anyone know how to accomplish this? Right now the experience is that the first one seems to be waiting (hanging) till the second and the third one are done :( If you want to use threads rather than an AsyncTask you could do something like this: private static final int STEP_ONE_COMPLETE = 0; private static final int STEP_TWO_COMPLETE = 1; ...
private void doBackgroundUpdate1() {
    Thread backgroundThread = new Thread() {
        @Override
        public void run() {
            // do first step
            // finished first step
            Message msg = Message.obtain();
            msg.what = STEP_ONE_COMPLETE;
            handler.sendMessage(msg);
        }
    };
    backgroundThread.start();
}

private void doBackgroundUpdate2() {
    Thread backgroundThread = new Thread() {
        @Override
        public void run() {
            // do second step
            // finished second step
            Message msg = Message.obtain();
            msg.what = STEP_TWO_COMPLETE;
            handler.sendMessage(msg);
        }
    };
    backgroundThread.start();
}

private Handler handler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case STEP_ONE_COMPLETE:
                doBackgroundUpdate2();
                break;
            case STEP_TWO_COMPLETE:
                // do final steps
                break;
        }
    }
};

You would kick it off by calling doBackgroundUpdate1(); when this is complete it sends a message to the handler, which kicks off doBackgroundUpdate2(), etc. Tiger, TiGer wrote: "When it's done that same Thread will return the time-intensive task result to another (3rd) class" Since a thread runs asynchronously, your non-thread class can't be synced with your thread. To perform some action on an Activity you need an AsyncTask, not a Thread. TiGer wrote: "maybe because this is some kind of loop?" Tiger, do read more about Threads and concurrency. So the only answer I have for you now is ASYNCTASK EDIT: Also I'd like the Thread class to stop itself Read this post: how-do-you-kill-a-thread-in-java Thanks, I have been working with AsyncTask till now but it doesn't suit my needs, as in it's more for a single task while having the option to update the GUI. I need a more object-oriented (complex) case, with several classes (and objects of those) interacting with each other and still having references to each other and also being able to update the GUI... Personally Handlers are better suited...
TiGer You can invoke a method from your Thread & make that method call another class's methods & then use some global Handlers specified in your Application class to update the UI from any of the classes. In ordinary Java, you would do this:

class MyTask implements Runnable {
    public void run() {
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            if (i == Integer.MAX_VALUE - 1) {
                System.out.println("done");
            }
        }
    }
}

class MyMain {
    public static void main(String[] argv) {
        for (int i = 0; i < 10; i++) {
            Thread t = new Thread(new MyTask());
            t.start();
        }
        System.out.println("bye");
    }
}

... that kicks off 10 threads. Notice that if you accidentally invoke t.run() instead of t.start(), your runnable executes in the main thread. Probably you'll see 'bye' printed before 10 'done'. Notice that the threads 'stop' when the run() method of the Runnable you gave to them finishes. I hope that helps you get your head around what it is you've got to co-ordinate. The tricky part with concurrency is getting threads to communicate with each other or share access to objects. I believe Android provides some mechanism for this in the form of the Handler, which is described in the developer guide under designing for responsiveness. An excellent book on the subject of concurrency in Java is Java Concurrency in Practice. If you want to use AsyncTask rather than a thread in Android: I resolved it using AsyncTask and a Handler. The aim is that one task executes after completion of another. Here is code that shows it: first an animation is loaded on a view, and after completion of that process it goes to another page. class gotoparent extends AsyncTask<String,String,String> { @Override protected String doInBackground(String...
params) { runOnUiThread(new Runnable() { @Override public void run() { Animation animation= AnimationUtils.loadAnimation(getApplicationContext(),R.anim.rotete); lin2.startAnimation(animation); } }); return null; } @Override protected void onPostExecute(String s) { super.onPostExecute(s); new Handler().postDelayed(new Runnable() { @Override public void run() { Intent i=new Intent(getApplicationContext(),ParentsCornor.class); startActivity(i); } }, 1200); } }
How to make TableView Header Static/Unscrollable I am working on a Table View. I have implemented a header at the top of the TableView. It's scrolling when I scroll the TableView. Any solution to make the header static? As you say it should be static, just add one view above the table view and design it as per your design. See this: https://medium.com/@jeremysh/creating-a-sticky-header-for-a-uitableview-40af71653b55 UITableView also supports section headers which stick to the top of the screen only while that section is visible in the scroll view. Perhaps that's what you want? Consider accepting an answer to expand the visibility of your question. Check how to do it here: https://meta.stackexchange.com/a/5235/469186 If you want a static header, it means that it should not be a header. The behaviour of a header is to scroll with the UITableView. You can replace it with a UIView placed above your UITableView. Also, you can use the UITableView.headerView (UIView?); this is a view placed at the top end of the whole tableView and not per section like the sectionHeaderView. You can simply add a UIView on top of the UITableView. Like this: And the result will be:
Style of text for scientific words in a website I'm helping design a website for an environmental conservationists group. The website is intended to be accessible to a large audience, but does contain some scientific information (such as Latin names of plants). As I've been told, the style for scientific writing is to have the first letter uppercase and the word italicized if it's a genus. Is this true? I personally think it stands out poorly in the body of text, and especially in a heading or a caption to a picture. For example, 'nepenthes' would appear as Nepenthes and 'isolepis' as Isolepis when using this convention. In a nutshell, this can be seen as a conflict between technical correctness and aesthetics. What good is following a rule, if most people are unaware of the rule? You will seriously annoy the scientific people if you don't write species names in italic, regardless of aesthetics. Myself included, and I am miles away from being a biologist… The benefit in correctly capitalising Latin names is that people who don't know how they should be capitalised are not inconvenienced by the capitalisation, whereas people who are aware of how the capitalisation should work may be annoyed and question the reliability of your information. It's always a good idea to put emphasis on scientific terms by making them italic/emphasised, but there are exceptions. And on your point about a "conflict between technical correctness and aesthetics": if there are a lot of scientific terms styled in italic among other words, it will look displeasing aesthetics-wise. If there are fewer such words per paragraph, then a reader will be more likely to understand the reason behind the decision to style them that way, and will appreciate having those things emphasised. Conclusion: if there are many scientific words, the article is for scientific reading and the style can be left like normal text. If there are fewer such words, emphasise them, as the article is intended for all kinds of readers, who must understand the context of the word.
Webpack setup with Django I'm working on a Vue app in Django via django-webpack-loader. Running locally, I'm able to get it to work by using the following in my base.html file: {% load render_bundle from webpack_loader %} ... ... {% render_bundle 'app' %} However, in production this doesn't work - I believe because the webpack production config uses the CommonsChunkPlugin to split the bundles into app, manifest and vendor. There isn't much documentation online for merging Webpack with Django - I'm wondering if there is a way to include all chunks in the Django template. django-webpack-loader is no longer maintained. I've been working on a replacement for it that is starting to gain traction and I def recommend trying it out: https://github.com/shonin/django-manifest-loader @rykener django-webpack-loader is still used. Check their changelog here. The problem in the end was due to code splitting. In dev, a single JS file is created, but in the production config the Webpack CommonsChunkPlugin was configured to split the app into 3 files (manifest, vendor and app). This solution isn't ideal and might not scale well, but by placing a conditional tag in the Django template I was able to correctly reference the necessary files. {% if STAGE or PRODUCTION %} {% render_bundle 'vendor' 'js' %} {% render_bundle 'manifest' 'js' %} {% endif %} {% render_bundle 'app' 'js' %} Did you edit settings.py to point to the bundle directory? APP_DIR = os.path.join(BASE_DIR, 'app') WEBPACK_LOADER = { 'DEFAULT': { 'BUNDLE_DIR_NAME': 'dist/' } } STATICFILES_DIRS = ( os.path.join(APP_DIR, 'assets'), ) Then use HtmlWebpackPlugin to point to chunks?
https://github.com/jantimon/html-webpack-plugin/blob/master/README.md#writing-your-own-templates plugins: [ new HtmlWebpackPlugin({ template: 'static/app/index.html' }), ] I did look into this; however, at the point where Django's collectstatic command adds hashes to the files, I would be unable to use Webpack to inject the correct files, which is where django-webpack-loader comes in. According to the django-webpack-loader README.md, version 1.0 and up of django-webpack-loader also supports multiple Webpack configurations. So one could define 2 Webpack stats files in settings: one for normal, and one for stage / prod WEBPACK_LOADER = { 'STAGE_OR_PROD': { 'BUNDLE_DIR_NAME': 'stage_or_prod_bundles/', 'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats-stage-or-prod.json'), }, 'NORMAL': { 'BUNDLE_DIR_NAME': 'normal_bundles/', 'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats-normal.json'), } } and then one can use the config argument in the template tags to influence which stats file to load the bundles from {% load render_bundle from webpack_loader %} <html> <body> .... {% render_bundle 'main' 'js' 'STAGE_OR_PROD' %} {% render_bundle 'main' 'js' 'NORMAL' %}
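As a small extension of the workaround in this thread, the environment switch could live in settings.py instead of the template. This is only a sketch under assumptions: DJANGO_DEBUG and WEBPACK_BUNDLES are invented names (not part of django-webpack-loader), and a context processor would still be needed to expose the list to templates so they can loop over it with render_bundle.

```python
import os

# Decide which webpack bundles the base template should render, based on an
# environment flag, instead of hard-coding the conditional in the template.
# In dev, webpack emits a single 'app' bundle; the production config's
# CommonsChunkPlugin splits it into 'vendor', 'manifest' and 'app'.
DEBUG = os.environ.get("DJANGO_DEBUG", "1") == "1"

WEBPACK_BUNDLES = ["app"] if DEBUG else ["vendor", "manifest", "app"]
```

The bundle order in the production branch mirrors the template above: vendor and manifest first, the app entry last.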
How to compare two images in Java? In terms of height and width I had been working on my project, which encompasses a technique for comparing images. I want to compare two images in Java in terms of height and width; I can't compare them. If anyone can help, please do. http://stackoverflow.com/questions/672916/how-to-get-image-height-and-width-using-java Might help Here is a head start. You can use something like this and that should get you going.

public static void listFilesForFolder(File folder) {
    for (final File fileEntry : folder.listFiles()) {
        Image image;
        try {
            image = ImageIO.read(new File("C://Users//workspace//ImageProcessing//images//" + fileEntry.getName()));
            BufferedImage buffered = (BufferedImage) image;
            int imageHeight = image.getHeight(null);
            int imageWidth = image.getWidth(null);
            yourImageClass.setHeight(imageHeight);
            yourImageClass.setWidth(imageWidth);
            arrayListOfImageClasses.add(yourImageClass);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println(fileEntry.getName());
    }
}

These are the imports you can use: import java.awt.Graphics2D; import java.awt.Image; import java.awt.RenderingHints; import java.awt.Transparency; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import javax.imageio.ImageIO; Hope this gets you going. Actually I have to capture frames from a specified video and compare those frames with each other to find the portion-wise difference between them... will the above code be useful for that? Yes, it will be. You need to get the frames and then compare the size of the frames. How to compare them? How to pass the paths of two images simultaneously? It is easy. Write a function that reads the images. Then create a class Image with two attributes - height and width. When you read through the files you keep adding the Images to an ArrayList. Then do comparisons in whatever way you want.
I have edited my code to give you some hints. Thanks for your help buddy, will check it tomorrow and get back to you. Hey, it's not working; if you can help me out please... Which part is not working? It says method not found... while executing the setWidth and setHeight methods. Did you import the packages I mentioned? Yes I did... it still didn't work. What error did it throw? And what Java version are you using? I am using JDK 1.8. It states "cannot find symbol" on the line where we declared the setHeight and setWidth methods... It also says "Illegal start of expression" at the method declaration of "public static void listFilesForFolder(File folder)". I am trying the NaiveSimilarity program from the Java cookbook but couldn't get the dissimilar image... can anyone help me with that?
Why won't my redbot fire as a system service? I have a Raspberry Pi 4 with Python and pyenv installed. Usually my routine to launch RedBot is to do the following... tmux pyenv shell redbot redbot gunther and it launches.... I'm trying to set up a service to do this for me on reboot. [Unit] Description=%I redbot After=multi-user.target After=network-online.target Wants=network-online.target [Service] ExecStart=/home/pi/.pyenv/versions/redbot/bin/python -O -m redbot %I --no-prompt User=pi Group=pi Type=idle Restart=always RestartSec=15 RestartPreventExitStatus=0 TimeoutStopSec=10 [Install] WantedBy=multi-user.target When this runs it says no module 'redbot' found. I'm thinking it's something to do with the env/paths? But I've no clue. Please help! Dec 30 00:49:26 raspberrypi systemd[1]: Started gunther redbot. Dec 30 00:49:26 raspberrypi python[7787]: /home/pi/.pyenv/versions/redbot/bin/python: No module named redbot Dec 30 00:49:26 raspberrypi systemd[1]<EMAIL_ADDRESS>Main process exited, code=exited, status=1/FAILURE Dec 30 00:49:26 raspberrypi systemd[1]<EMAIL_ADDRESS>Failed with result 'exit-code'. In your terminal, if you run pyenv shell redbot, then python -O -m redbot %I --no-prompt, does that work? Yes it does! I use python -O -m redbot gunther --no-prompt and it fires. This is driving me nuts!
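Since the question went unanswered in the thread: when a module imports fine in an interactive shell but not under systemd, the service is often running a different interpreter or environment than the shell. A quick diagnostic sketch (my own suggestion, not from the thread) is to run a script like this with the exact interpreter from ExecStart and compare the output against what the tmux session reports:

```python
import importlib.util
import sys

def module_location(name):
    """Return the file a module would be imported from, or None if it is not importable."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# Which interpreter is actually running, and can it see `redbot`?
print(sys.executable)
print("redbot found at:", module_location("redbot"))
```

If the service's interpreter prints a different path than the shell's, or None for redbot, the unit file is not using the environment the module was installed into.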
How to pass WindowState from desktop shortcut into WPF app? How can I control the initial WindowState (Normal, Minimized, Maximized) of a WPF main window from a desktop shortcut? The "Run:" combobox of the shortcut's properties dialog lets me choose between "Normal window", "Minimized" and "Maximized". But this option seems to be completely ignored by WPF apps. With WinForms this was automatically supported with no additional code. Is there a way to access this option from the launched WPF process? I know I can specify the ProcessStartInfo.WindowStyle property when launching new processes. But how can I access this option from the process being launched? System.Diagnostics.Process.GetCurrentProcess().StartInfo.WindowStyle Use NativeMethods.StartupInfo.GetInitialWindowStyle() to get just the initial window state or NativeMethods.StartupInfo.FromCurrentProcess to access the entire information. static partial class NativeMethods { public static class StartupInfo { [StructLayout(LayoutKind.Sequential)] public class STARTUPINFO { public readonly UInt32 cb; private IntPtr lpReserved; [MarshalAs(UnmanagedType.LPWStr)] public readonly string lpDesktop; [MarshalAs(UnmanagedType.LPWStr)] public readonly string lpTitle; public readonly UInt32 dwX; public readonly UInt32 dwY; public readonly UInt32 dwXSize; public readonly UInt32 dwYSize; public readonly UInt32 dwXCountChars; public readonly UInt32 dwYCountChars; public readonly UInt32 dwFillAttribute; public readonly UInt32 dwFlags; [MarshalAs(UnmanagedType.U2)] public readonly UInt16 wShowWindow; [MarshalAs(UnmanagedType.U2)] public readonly UInt16 cbReserved2; private IntPtr lpReserved2; public readonly IntPtr hStdInput; public readonly IntPtr hStdOutput; public readonly IntPtr hStdError; } public readonly static STARTUPINFO FromCurrentProcess = null; const uint STARTF_USESHOWWINDOW = 0x00000001; const ushort SW_HIDE = 0; const ushort SW_SHOWNORMAL = 1; const ushort SW_SHOWMINIMIZED = 2; const ushort SW_SHOWMAXIMIZED = 3;
const ushort SW_MINIMIZE = 6; const ushort SW_SHOWMINNOACTIVE = 7; const ushort SW_FORCEMINIMIZE = 11; [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)] static extern void GetStartupInfoW(IntPtr startupInfoPtr); static StartupInfo() //Static constructor { FromCurrentProcess = new STARTUPINFO(); int length = Marshal.SizeOf(typeof(STARTUPINFO)); IntPtr ptr = Marshal.AllocHGlobal(length); Marshal.StructureToPtr(FromCurrentProcess, ptr, false); GetStartupInfoW(ptr); Marshal.PtrToStructure(ptr, FromCurrentProcess); Marshal.FreeHGlobal(ptr); } public static ProcessWindowStyle GetInitialWindowStyle() { if ((FromCurrentProcess.dwFlags & STARTF_USESHOWWINDOW) == 0) return ProcessWindowStyle.Normal; switch (FromCurrentProcess.wShowWindow) { case SW_HIDE: return ProcessWindowStyle.Hidden; case SW_SHOWNORMAL: return ProcessWindowStyle.Normal; case SW_MINIMIZE: case SW_FORCEMINIMIZE: case SW_SHOWMINNOACTIVE: case SW_SHOWMINIMIZED: return ProcessWindowStyle.Minimized; case SW_SHOWMAXIMIZED: return ProcessWindowStyle.Maximized; default: return ProcessWindowStyle.Normal; } } } }
common-pile/stackexchange_filtered
SQL query to filter out users who already have received an email Suppose: You tried to send a mass mailing, but something went wrong, and only some users got the mail. You sent a mass mailing recently, but now new users have signed up, and they need to receive the mail as well. How do you filter out those who have already received the news (via an Eloquent query or a select from the database)? Actually I faced this problem in a project based on Laravel 4. I am searching for a query using Laravel Eloquent on a pivot table to keep track of the emails sent to the users. In my case there are two models: Post and User. It's a many-to-many relationship: any user can receive many posts and any post can be sent to many users. Put a list of recent email addresses into a table, and check if they are in that table before sending the email? Emails are in the user table. I want to send the email to all of them; then next time when I want to send the same email, just the new users should receive it. Many-to-many relationship (every user can receive many emails and any email can be sent to many users). There are two models: Post and User. It's a many-to-many relationship: any user can receive many posts and any post can be sent to many users. I implemented it as follows: class Post extends Eloquent { public function recipients() { return $this->belongsToMany('User', 'posts_users', 'post_id', 'user_id'); } } class User extends Eloquent { public function mails() { return $this->belongsToMany('Post', 'posts_users', 'user_id', 'post_id'); } } So, when I want to send a post to users I use $usersToMail = User::whereNotExists(function($query) use ($post_id) { $query->select(DB::raw(1)) ->from('posts_users') ->whereRaw('posts_users.user_id = users.id') ->whereRaw('posts_users.post_id = '.$post_id); })->get(); foreach ($usersToMail as $user) { // send email $user->mails()->save($post); // it records that this post has been sent to the user } You'll want to add a column to your users table, e.g. a
datetime field called mailed_at. Then in your email or signup method (wherever it is you're sending that first email) update the user with the datetime they were emailed. From then on you can query based on mailed_at for any users who still need an email. To check for multiple users that need a newsletter, let's say your users table schema is as follows (keeping it simple): id | email | password | mailed_at (nullable) We check here whether a user has received an email by querying on the mailed_at column. To get all users that need to receive a mailout you would do the following: $usersToMail = User::whereNull('mailed_at')->get(); Maybe I stated my problem badly, because I think it must be a common task. Suppose it's a newsletter and I want to send the news to anyone who didn't receive it. Your solution just works for one email, not for many emails (news items). It just tells us some users didn't receive a news item, but which one? In some way you must check the users against a specific news item. Actually I save the news in a Post table, and I use a pivot table between Users and Posts to check which user has received which mail. Ok, you really need to be explicit about what you're asking in your question. For anyone to answer, you need to give all the info or you simply won't get the response you're after. I'd suggest making your question more specific in future, as I have actually answered the question you asked. Sorry, I think you are right. I reviewed my question and refined it; actually my English is not good and I copy-pasted some parts. Now please check whether the question says what I mean, and please revert your down vote; it was just a misunderstanding. I voted up your answer; I hope you reconsider your vote on the question and consider my own answer too.
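A side note on the accepted pivot-table approach: the whereNotExists closure compiles down to a plain SQL NOT EXISTS anti-join. As a sanity check of that SQL pattern itself (SQLite standing in for MySQL here; the table and column names follow the posts_users schema above, and the data is made up):

```python
import sqlite3

# In-memory stand-in for the schema above: users plus the posts_users
# pivot recording which post was already sent to which user.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE posts_users (user_id INTEGER, post_id INTEGER);
    INSERT INTO users VALUES (1, 'a@x.com'), (2, 'b@x.com'), (3, 'c@x.com');
    INSERT INTO posts_users VALUES (1, 7);  -- post 7 already went to user 1
""")

# Users who have NOT yet received post 7: the anti-join that the
# Eloquent whereNotExists closure expresses.
rows = con.execute("""
    SELECT u.id, u.email FROM users u
    WHERE NOT EXISTS (
        SELECT 1 FROM posts_users pu
        WHERE pu.user_id = u.id AND pu.post_id = ?
    )
    ORDER BY u.id
""", (7,)).fetchall()
print(rows)  # users 2 and 3 still need the mail
```

Running the same mailing again after inserting more pivot rows returns only the users added since, which is exactly the "new signups" case from the question.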
How to remove the last edge of a subgraph using networkx I was hoping that you could help me with this. I created a graph from an adjacency matrix with the following code. graph = nx.from_numpy_array(adjacency_matrix, create_using=nx.DiGraph) mypos = nx.spring_layout(graph) nx.draw_networkx(graph, pos=mypos) and then I get the shortest path... path = nx.shortest_path(graph, 3, 2) print(path) which gives me the path [3, 1, 2] I was trying to draw the path by creating a subgraph from the nodes given by the shortest path. subgraph = graph.subgraph(path) nx.draw_networkx(subgraph, pos=mypos, arrows=True) nx.draw_networkx_nodes(subgraph, pos=mypos, node_color='r') nx.draw_networkx_edges(subgraph, pos=mypos, edge_color='r') And I get the following result. The problem is that, even though the path was drawn, an extra edge between nodes 2 and 3 appeared that I don't want. Is there a way to change this so I don't have that extra edge? I know that networkx can remove an edge using nx.remove_edge(), but I don't want to remove edges manually every time I run the program and choose another path. Thank you in advance For your problem you don't need a subgraph. You can highlight the path and its nodes with the following code, which is a simplification of the accepted answer from Highlighting certain nodes/edges in NetworkX - Issues with using zip().
import networkx as nx import matplotlib.pylab as pl # Set up graph graph = nx.DiGraph(nx.karate_club_graph()) # Get position using spring layout pos = nx.spring_layout(graph) # Get shortest path path = nx.shortest_path(graph, 0, 9) # if you want to draw fewer edges, you can modify the following line by setting limits path_edges = list(zip(path,path[1:])) # Draw nodes and edges not included in path nx.draw_networkx_nodes(graph, pos, node_color='k', nodelist=set(graph.nodes)-set(path)) nx.draw_networkx_edges(graph, pos, edgelist=set(graph.edges)-set(path_edges)) # Draw nodes and edges included in path nx.draw_networkx_nodes(graph, pos, nodelist=path, node_color='r') nx.draw_networkx_edges(graph,pos,edgelist=path_edges,edge_color='r') pl.axis("off") pl.show() Yes it works! thank you so much! So basically it's drawing a graph over a graph right? Is there a way to transfer the weights of my original graph to the new one? So I can get the length of the shortest path? In the same way, you can add the edge weights for that path to the figure. Take a look here for the general approach of adding edge weights: https://stackoverflow.com/questions/47094949/labeling-edges-in-networkx. If you need help after checking that source, then I modify my answer. As another remark, now you should have enough reputation to add the image directly. If you answer your question, it would help others with the same question. Hi, thank you! Yes in the figure appears the weights but what I mean is, I want to print the length of the shortest path, the thing is when the shortest path was draw in the figure, every edge of the shortest path has the weight of 1, but I want the weight of the original graph, can I transfer the numerical value of the weight of my original graph to the new? I'm not sure what exactly your problem is. Where do you lose your weights? Only for the visualisation or do you create a subgraph? 
If it's longer, it's probably best to post a new question with a minimal working example (or if it's only a small thing, modify this question).
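Regarding the weight question in the comments: nothing needs to be transferred to the drawn copy; the weighted length can be computed directly on the original graph. A minimal sketch with a made-up weighted digraph (node numbers and weights are illustrative only):

```python
import networkx as nx

# Hypothetical weighted digraph; node numbers and weights are made up.
g = nx.DiGraph()
g.add_weighted_edges_from([(3, 1, 2.5), (1, 2, 1.5), (3, 2, 7.0)])

# Both calls run on the ORIGINAL graph, so the true weights are used:
# 3 -> 1 -> 2 costs 2.5 + 1.5 = 4.0, cheaper than the direct 7.0 edge.
path = nx.shortest_path(g, 3, 2, weight="weight")
length = nx.shortest_path_length(g, 3, 2, weight="weight")
print(path, length)  # [3, 1, 2] 4.0
```

Note that without weight="weight" both functions count hops instead of summing edge weights, which is why the redrawn path appeared to have weight 1 per edge.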
JNA - CreateToolhelp32Snapshot not returning all DLLs I'm trying to get the base address of a module from a process to which I have a handle. I've tried this using the CreateToolhelp32Snapshot and EnumProcessModules methods. The problem is that both methods return only these 5 DLLs: underrail.exe ndll.dll wow64.dll wow64win.dll wow64cpu.dll I know there should be more modules, and trying this on other games returns the same 5 modules. I have found some answers to the same question, but both of them don't work out for me: https://www.unknowncheats.me/forum/counterstrike-global-offensive/169030-modules.html JNA - EnumProcessModules() not returning all DLLs? The first one doesn't work since I can't use TH32CS_SNAPMODULE | TH32CS_SNAPMODULE32 as the flags in the method. The second one doesn't work because I can't call the method EnumProcessModulesEx() when I try to call Psapi.INSTANCE.EnumProcessModulesEx(...) Here is a snippet of my code: public static int getModuleBaseAddress(int process_id) { DWORD pid = new DWORD(process_id); HANDLE snapshot = null; snapshot = kernel32.CreateToolhelp32Snapshot(Tlhelp32.TH32CS_SNAPMODULE, pid); MODULEENTRY32W module = new MODULEENTRY32W(); while(Kernel32.INSTANCE.Module32NextW(snapshot, module)) { String s = Native.toString(module.szModule); Pointer x = module.modBaseAddr; System.out.println(s); System.out.println(x); System.out.println("---"); } return 0; } Note that using Tlhelp32.TH32CS_SNAPMODULE32 doesn't return anything, and Tlhelp32.TH32CS_SNAPALL returns the same as Tlhelp32.TH32CS_SNAPMODULE How are you obtaining the handle? What flags are you passing to OpenProcess (assuming that's how you're getting it)? Why can't you use EnumProcessModulesEx? Is it incompatible with your system? Are you using 32-bit or 64-bit Windows? I'm using 64-bit Windows. I'm not sure why I can't call EnumProcessModulesEx; I've looked into the JNA Psapi API and it looks like the method doesn't exist, but I have already seen some code using it.
As for the flags I'm using PROCESS_VM_READ PROCESS_VM_WRITE and PROCESS_VM_OPERATION Ah, EnumProcessModulesEx is not yet mapped in JNA. You can add your own Psapi.java class and extend the JNA version, and add your own mapping (and then contribute the change to JNA to help out people like you in the future!). The top google result for EnumProcessModulesEx has sample code here showing how to extend and map that function, and a util class which uses it. Thank you so much for your helpful answer. I created a custom psapi and mapped the method (I was not aware this was possible). And now it works perfectly. Thanks to Daniel Widdis I have the answer. Currently, the method EnumProcessModulesEx is not mapped into JNA, so you have to make your own custom version of Psapi, which in my case looks something like this (note that the native function returns a BOOL, so it is mapped as boolean): import com.sun.jna.Native; import com.sun.jna.platform.win32.Psapi; import com.sun.jna.platform.win32.WinDef.HMODULE; import com.sun.jna.platform.win32.WinNT.HANDLE; import com.sun.jna.ptr.IntByReference; import com.sun.jna.win32.W32APIOptions; public interface CustomPsapi extends Psapi { CustomPsapi INSTANCE = Native.load("psapi", CustomPsapi.class, W32APIOptions.DEFAULT_OPTIONS); boolean EnumProcessModulesEx(HANDLE hProcess, HMODULE[] lphModule, int cb, IntByReference lpcbNeeded, int dwFilterFlag); } Then you can load your custom class and use the methods that you mapped. public static CustomPsapi c_psapi = Native.load("psapi", CustomPsapi.class); As for getting all the DLLs showing up correctly, you need to use the now mapped EnumProcessModulesEx method with the flag for all modules as the last argument (0x03), so the call should look something like this: c_psapi.EnumProcessModulesEx(process, modules, 1024, new IntByReference(1024), 0x03); Glad it worked. Now for extra credit, submit your new mapping to the JNA project! :)
Why is the modified spherical Bessel function an asymptotic solution of this ODE? I am trying to solve the radial equation with $R = u/r$ $$ \frac{d^2}{dr^2}u - \frac{l(l+1)}{r^2}u + (E-V)u = 0; \qquad V(r) = -\frac{2Z}{\alpha}\frac{e^{-r}}{1 - e^{-r}}, $$ using the shooting method. I have $R(0) = R(\infty) = 0$, but I don't know the value of the derivative at $0$ or asymptotically as $r \to \infty$. To find out what the derivative must be, I have followed Berghe-Fack-Meyer (1989) in their paper on "Numerical methods for solving radial Schrödinger equations". The authors state that in the vicinity of the origin $$ V(r) = \frac{V_1}{r} + V_2 + r V_3 + \cdots, \qquad \text{and} \qquad u(r) = r^{l+1}\left( a_1 + a_2r +\cdots \right). $$ And as $V(r) \to 0$ as $r \to \infty$, the asymptotic behavior at infinity reduces the ODE to $$ \frac{d^2}{dr^2}u + Eu(r) = 0. $$ I understand this, but they claim without explanation that the asymptotic solution of the ODE is of the form $r j(K r)$ with $K \in \mathbb{R}$ and $j$ the modified spherical Bessel function, in both the case that $r \to \infty$ and $r \to 0$. I don't understand why that should be the case. Can you prove it in some detail? Once I understand it, perhaps I can use it to deduce what the derivatives should be at $0$ and at $\infty$. What is big $R()$? And when you speak of the derivative, which function do you want to take the derivative of? It is R = u/r ..
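A sketch of where the $r\,i_l(Kr)$ form comes from, assuming a bound state so that $E = -K^2 < 0$ (this sign is what makes the Bessel functions the modified ones):

```latex
% For V \equiv 0 and E = -K^2 < 0 the equation for u reads
\[
  u'' - \frac{l(l+1)}{r^{2}}\,u - K^{2}u = 0 .
\]
% Substituting u = rR (so u'' = rR'' + 2R') and dividing by r gives
\[
  R'' + \frac{2}{r}\,R' - \Bigl(K^{2} + \frac{l(l+1)}{r^{2}}\Bigr)R = 0 ,
\]
% which is the modified spherical Bessel equation in the variable Kr,
% with solutions R = i_l(Kr) and R = k_l(Kr), i.e. u = r\,i_l(Kr) or
% u = r\,k_l(Kr).  Near the origin i_l(Kr) ~ (Kr)^l / (2l+1)!!, so
% u ~ r^{l+1}, matching the power series quoted above; as r -> infinity
% the normalizable choice u = r\,k_l(Kr) ~ (\pi/2K)\,e^{-Kr} satisfies
% u'' + Eu = 0 with E = -K^2.
```

The small-$r$ and large-$r$ limits of $i_l$ and $k_l$ then give the boundary derivatives needed to start the shooting method at each end.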
Program works when I print the value out, but not when I don't I have the following program that is checking for substrings. #include <stdio.h> #define TRUE 1 #define FALSE 0 int checkSubstring(char string[], char sub[]) { int k, j = 0; int count = 0; for(int i = 0; i<=8; i++) { if(sub[k] == string[i]) { j = j + 1; k = k + 1; count = count + 1; } else { j = i; } } printf("%d", count); if(count == 3) { return TRUE; } else { return FALSE; } } I'm testing my program and it seems to only work when I do printf("%d", count);. For example, If I test the string "Schoolbus" against "bus", it should return TRUE because bus is a substring in Schoolbus. However, it only returns true when the printf statement is put in place. When I comment it out, it returns FALSE. I have no idea why it would be doing this... I use 8 in my loop to test against the length of schoolbus (9). I'm also checking if count is == 3, because if the strings match I can count it up to the length of the substring. If the count does not match length, then we know it isn't a match. k isn't initialized, so sub[k] invokes undefined behavior Wait, I thought I could do int k,j = 0;? I swear I saw that somewhere you can "do" it .. it initializes j to 0, but k is unitialized, so it will take on whatever value is at that place in memory, which is completely unpredictable. You need to say k = 0 too if you want it start at 0. And better make sure string and sub are at least 10 chars long. As pointed out by @yano, k isn't initialized, so sub[k] has undefined behaviour. To initialize both j and k to zero, you could either do int j = 0, k = 0; or int j, k; j = k = 0; but not int k, j = 0;
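Pulling the answers' fixes together (k initialized, lengths taken with strlen() instead of the hard-coded 8, and the comparison restarted for each candidate position), a corrected sketch of the checker could look like this; note it restructures the original loop rather than minimally patching it:

```c
#include <string.h>

#define TRUE 1
#define FALSE 0

/* Returns TRUE if sub occurs somewhere in string, FALSE otherwise. */
int checkSubstring(const char string[], const char sub[]) {
    size_t n = strlen(string);   /* lengths derived, not hard-coded to 8 */
    size_t m = strlen(sub);

    for (size_t i = 0; i + m <= n; i++) {
        size_t k = 0;            /* k starts at 0 for every window */
        while (k < m && string[i + k] == sub[k])
            k++;
        if (k == m)
            return TRUE;         /* full match starting at position i */
    }
    return FALSE;
}
```

With this version checkSubstring("Schoolbus", "bus") returns TRUE regardless of whether anything is printed, because the result no longer depends on whatever happened to sit in uninitialized memory.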
Verifying the last Two Moore-Penrose Equations If A is an m x n matrix with rank(A) = n, then $A^{+} = (A^{T}A)^{-1}A^{T}$. I already proved the first two of the Moore-Penrose equations. The second two are to verify: 3) $(AA^{+})^{*} = AA^{+}$ 4) $(A^{+}A)^{*} = A^{+}A$ I've attempted substituting for $A^{+}$ on one side and moving different matrices around trying to derive the opposite side, but I'm lost outside of those first two moves. All you need to know for this is $(AB)^{} = B^{}A^{*}$. Note further that you transcribed the first equation incorrectly. I mention only in case this is the source of your troubles. The second equation is really easy to verify: note that $A^+A = (A^\top A)^{-1}A^\top A = I$. For the first equation, recall these identities \begin{align*} (AB)^\top &= B^\top A^\top \\ (A^{-1})^\top &= (A^\top)^{-1} \\ (A^\top)^\top &= A. \end{align*} Then, \begin{align*} (A(A^\top A)^{-1}A^\top)^\top &= (A^\top)^\top ((A^\top A)^{-1})^\top A^\top \\ &= A ((A^\top A)^\top)^{-1} A^\top \\ &= A (A^\top (A^\top)^\top)^{-1} A^\top \\ &= A (A^\top A)^{-1} A^\top. \end{align*} How does proving that $A^{+}A = I$ verify the 3rd Moore-Penrose equation? It doesn't; It verifies the fourth equation (if $A$ is not invertible, there will be infinitely many left-inverses). The calculation at the end verifies that $AA^+$ is symmetric. Apologies, yes 4th equation. I am slowly grasping your reasoning. Was I wrong to assume from the beginning that the RHS must equal the LHS for verification to pass? No, you were right. You're verifying the equation holds for the given $A$ and (proposed) $A^+$, so substituting them into both sides should produce the same matrix. I'm not really sure where your doubt is coming from on this point.
VIEWS in MySQL Workbench always return (0) row(s) drop database if exists RentaHouse; create database RentaHouse; use RentaHouse; create table Staff( staffNo char(5) not null primary key, fName varchar(15), lName varchar(15), position varchar(15), dob date, salary decimal (7,2) unsigned ); create table PropertyForRent( propertyNo char(5) primary key, street varchar(35) not null, city varchar(15) not null, pcode varchar(10), type varchar(20) not null, rooms tinyint unsigned not null, rent decimal (6,2) unsigned, staffNo char(5) ); ALTER TABLE PropertyForRent ADD CONSTRAINT whatever foreign key(staffNo) references Staff (staffNo) ON DELETE CASCADE ON UPDATE CASCADE; insert into staff values ('s1234','Mary','Jones', 'Sales', '1975-12-22',45000), ('s1834','Pat','Roche', 'IT', '1972-09-13',42000), ('s1998','Michael','Brown', 'Sales', '1980-12-09',43500); insert into propertyForRent values ('p3296','21 Ash Street','Tramore','WD34-543', 'Bungalow',4,1200,'s1234'), ('p3299','William Street','Dungarvan','WD99-088', 'Terrace',3,1050,'s1234'), ('p3344','9 Mary Street','New Ross','WX99-044', 'Terrace',3,800,'s1998'), ('p3356','21 Mary Street','New Ross','WX99-076', '2 Storey',4,1100,null); /*doesn't work!*/ CREATE VIEW anyView AS select * from Staff; /*works!*/ select * from Staff; /*works!*/ select street, city, type, rent, concat(fName, lName) as 'Name' from PropertyForRent join Staff on PropertyForRent.staffNo = Staff.staffNo where city = 'New Ross' order by rent; /*doesn't work!*/ CREATE VIEW myView AS select street, city, type, rent, concat(fName, lName) as 'Name' from PropertyForRent join Staff on PropertyForRent.staffNo = Staff.staffNo where city = 'New Ross' order by rent; None of the views return any results! I am using "sakila" as the default schema in MySQL Workbench 6.3.8. I've been searching online for a solution for 2 days, but I think it's time to ask those with expertise. P.S. Views are not working for any database I create, not only for this schema!
Have you tried using the mysql command line? Try using the schema qualifier (select * from RentaHouse.anyView;). I haven't found any problem at all. Take a look: http://sqlfiddle.com/#!9/8e9ca1/2 select * from RentaHouse.anyView; If it works, the problem is that your Workbench is creating the view in another database. Try to set the default database by clicking on the RentaHouse database and setting it as the Default Database. The default database should be shown in bold. You have created multiple views with the same name. The second thing is that you haven't written a select query to show the view data; CREATE VIEW only defines the view, so fetch from it afterwards: CREATE VIEW anyView1 AS select * from Staff; then fetch the data select * from anyView1; CREATE VIEW myViewss AS select street, city, type, rent, concat(fName, lName) as 'Name' from PropertyForRent join Staff on PropertyForRent.staffNo = Staff.staffNo where city = 'New Ross' order by rent; select * from myViewss;
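The create-once / select-afterwards point from the answers can be sanity-checked outside Workbench with Python's standard sqlite3 module (SQLite has no default-schema setting like MySQL Workbench, which is exactly the suspected culprit here, but the view mechanics are the same; the data mirrors a slice of the Staff table above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Staff (staffNo TEXT PRIMARY KEY, fName TEXT, lName TEXT);
    INSERT INTO Staff VALUES ('s1234', 'Mary', 'Jones'),
                             ('s1998', 'Michael', 'Brown');
    -- CREATE VIEW only stores the definition; by itself it shows no rows.
    CREATE VIEW anyView AS SELECT * FROM Staff;
""")

# Rows appear only when you SELECT from the view afterwards.
rows = con.execute("SELECT fName FROM anyView ORDER BY fName").fetchall()
print(rows)
```

In MySQL the extra wrinkle is that both the CREATE VIEW and the later SELECT must resolve to the same schema, hence the advice to qualify the name (RentaHouse.anyView) or fix the default schema.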
How much money is fiat money? How much, or what percentage, of all the money in the world is actually fiat money? I am not looking for an accurate estimate obviously, but a ballpark estimate... even if that were possible. Also, by all the money, I mean all the money in currencies issued by recognized countries in the world, not including crypto-currency, which is essentially 100% fiat. All of it, basically. No major currency is backed by gold now, and even minor currencies are more often backed by another country's currency than by precious metals. @MikeScott That is very surprising and interesting; I have seen in a video that one of the UK's banks has gold bars which supposedly back some of their pounds. Almost every country has some gold. That doesn't mean that currencies are tied to a certain percentage of the gold in their vaults. I think that for most of the developed world you will find that the vast majority of money is fiat money, if not all. Very few countries, if any, have their money backed by gold or other precious metals. For all countries which use fractional reserve banking this must be true, since there is really very little way to uphold the relationship between money supply and gold. There's a real easy answer: All of it. Thank you for all the input guys! Sad to see that this question has been put on hold. I at least have some useful insight now... I thought at least some of the currencies had some commodities backing them After 1971 (the Nixon Shock), the dollar was decoupled from gold. Hence, by definition, the US Dollar is "fiat money". Since a currency as powerful as the dollar is technically "fiat money", I don't think you have to make guesses regarding other currencies. After the 20th century, most are legal tender backed by governments. Cryptos are criticized by economists because they do not have the backing of governments.
Thank you for the answer. I would like to see a more detailed one with more citations and references, so I will wait a bit and do some research in the meanwhile. This doesn't answer the question. Yes, I agree; my point was that since the all-powerful dollar is fiat, most currencies would be fiat money as well.
How to view an unlisted Photosynth, without a Windows Live account? I created this unlisted (but now stated in public) photosynth People are required to log into a Windows Live account to see it. Is there a public link to it that I can share? From the Photosynth website http://photosynth.net/ if you choose [My Photosynths] and click on one, it will be shown along with "sharing" links below it. Use the EMBED link to access the direct URL. The user will not have to be logged into a Windows Live account.
PHP MVC: How to use private Model object stored inside View from outside class I am new to learning MVC. I want to use a private Model object stored inside a View from outside the class, like in the example below: class Model{ private $data; } class View{ private $model; public function __construct($model) { $this->model = $model; } } // outside $m = new Model; $v = new View($m); echo $v->m->data; // How to get it? I know about setter/getter methods, but that can lead to much bigger MVC code. Please help. "Model" is not a class but a layer. @tereško Thank you very much. But can you give me a very simple PHP code example about the model layer to understand, please? :) Well ... you could try reading this You would probably want to access the view from within the controller like this: class Controller { public function __construct($model, $view) { $this->model = $model; $this->view = $view; } public function show() { return $this->view->render($this->model->getData()); } } $controller = new Controller(new Model(), new View()); $controller->show(); You want the controller to receive all of the dependencies that it has, ideally in the constructor. That way it doesn't need to search for them. This is inversion of control, or DI (dependency injection). To be noted: you use DI (dependency injection), which the OP should use too. @Timothy Fisher Thanks. However, if the model class contains 10 methods, do I need to write 10 corresponding wrapper functions in the controller class, like show() in your example? Is that the correct MVC approach? I am new to MVC. Not necessarily. Usually the model is just a layer that is used to access business logic and the database. You would have a method in your controller for each view you want to load. The model would simply be there to grab data to load into the view.
I was creating an image with PhotoImage and this error happened Error message: Traceback (most recent call last): File "C:/Users/gurse/Desktop/Khanda/Khanda.py", line 3, in <module> label = Label(x, image=PhotoImage(file=r"C:\Users\gurse\Desktop\Khanda")) File "C:\Users\gurse\AppData\Local\Programs\Python\Python39\lib\tkinter\__init__.py", line 4062, in __init__ Image.__init__(self, 'photo', name, cnf, master, **kw) File "C:\Users\gurse\AppData\Local\Programs\Python\Python39\lib\tkinter\__init__.py", line 4007, in __init__ self.tk.call(('image', 'create', imgtype, name,) + options) _tkinter.TclError: couldn't open "C:\Users\gurse\Desktop\Khanda": permission denied My current code: from tkinter import * x = Tk() label = Label(x, image=PhotoImage(file=r"C:\Users\gurse\Desktop\Khanda")) And the backslashes turn into a Y with 2 lines across it. Well, your error message is pretty straightforward. Python does not have permission to open your file. The path is a directory, so it cannot be opened. You need to provide the path of an image file. But even if you provide a correct path to an image, it will not be shown, because it will be garbage collected, and you did not call any layout function on the label either. Also, x.mainloop() is missing. That error says that it can't reach the image. In this case, you have put only the path of the image, but the image name isn't included in it. To resolve it, you also have to put the name of the image in the path, like: r"C:\Users\gurse\Desktop\Khanda\TestImage.png A small piece of advice -> PhotoImage takes only a few extensions for images (i.e. jpeg will cause an error) I hope that I've been clear ;) EDIT: The user acw1668 isn't wrong: you have to use the method mainloop() to show the window with the widgets
So you need to pass a path of an image file instead, like "C:\Users\gurse\Desktop\Khanda\sample.png". However since you pass the result of PhotoImage(...) to image option directly, the image will be garbage collected because there is no variable references it. So you need to use a variable to store the result of PhotoImage(...). Also you need to call grid() or pack() or place() on the label otherwise it won't show up. Finally you need to call x.mainloop() otherwise the program will exit immediately. Below is an example code based on yours: from tkinter import * x = Tk() image = PhotoImage(file="C:/Users/gurse/Desktop/sample.png") label = Label(x, image=image) label.pack() x.mainloop()
XSLT 2.0 intellisense in Visual Studio 2010 - Adding a schema? I want to be able to get intellisense for XSLT, but for version 2.0, in Visual Studio. I know that by default XSLT 2.0 isn't supported - only 1.0 - but using the Saxon API you can use XSLT 2.0. I would love to get intellisense; I think this is possible by adding an XSLT 2.0 schema to Visual Studio, but I am not 100% sure. My question really is: where do I get the schema from? I presume I can download it? And where do I install it in Visual Studio? I'm not certain, but I think this is the schema you want from the W3 site: http://www.w3.org/2007/schema-for-xslt20.xsd The existing xslt.xsd file (on my install) is here: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Xml\Schemas\ Please could you post back your results - this looks quite interesting.
You should direct your comment to @Sanjeev: I only edited this answer.
AWS SNS publishing to a subscribed Lambda function logs null fields Tried to post this to the AWS Forums, but it seems my account is "not yet ready", whatever that means. I've setup an AWS Lambda function (written in Java) that accepts a POJO in order to allow for automatic deserialization of JSON. The test JSON I am using is below and represents the JSON string that will be sent from the ultimate source of the message once everything is up and running. {"sender":"Joe", "id":1, "event":"started", "ticks":2, "time":"20150623T214256Z", "version":"1.0"} This worked perfectly when I tested it by publishing to it from the Lambda "test" console. I'm now trying to hook in SNS by subscribing the Lambda function to a Topic and I am testing it from the SNS Console. I've tried sending the same exact message as above both with "raw data" (which didn't show any results) and the JSON generated using the "JSON Generator" data option and I am running into an issue where it seems when SNS sends the message to the Lambda function, the POJO is instantiated, but either the default constructor is called or the parameterized constructor is called with all null values. Either way, when the Lambda function logs the message via calling an overridden toString() method in the POJO, it prints out null for all of the variables without any error messages. Similarly, the SNS Topic is configured to log to Cloudwatch and it too is not reporting any errors. It gets an HTTP status 202. Here is the newly generated JSON message. { "default": "{\"sender\":\"Joe\", \"id\":1, \"event\":\"started\", \"ticks\":2, \"time\":\"20150623T214256Z\", \"version\":\"1.0\"}", "lambda": "{\"sender\":\"Joe\", \"id\":1, \"event\":\"started\", \"ticks\":2, \"time\":\"20150623T214256Z\", \"version\":\"1.0\"}", } Below are the log messages. 
Lambda's logs: START RequestId: 238a0546-627d-11e5-b228-817bf2a1219a Received the following :: We have the following Message{sender=null, id=null, event=null, ticks=null, time=null, version=null} END RequestId: 238a0546-627d-11e5-b228-817bf2a1219a REPORT RequestId: 238a0546-627d-11e5-b228-817bf2a1219a Duration: 26.23 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 69 MB SNS logs: { "status": "SUCCESS", "notification": { "timestamp": "2015-09-24 05:28:51.079", "topicArn": "arn:aws:sns:us-east-1:256842276575:App", "messageId": "3f5c0fa1-8a50-5ce3-b7c9-41dc060212c8", "messageMD5Sum": "65a5cb6d53616bd385f72177fe98ecc2" }, "delivery": { "statusCode": 202, "dwellTimeMs": 92, "attempts": 1, "providerResponse": "{\"lambdaRequestId\":\"238a0546-627d-11e5-b228-817bf2a1219a\"}", "destination": "arn:aws:lambda:us-east-1:256842276575:function:App-Lambda-Trigger" } } Below is the applicable Lambda function code: package com.mycompany; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.LambdaLogger; public class LambdaHandler { public void Handler(Message msg, Context context) { LambdaLogger logger = context.getLogger(); logger.log("Received the following :: " + msg.toString()); } } . public class Message { private String sender; private Integer id; private String event; private Integer ticks; private String time; private Double version; public Message(String sender, Integer id, String event, Integer ticks, String time, Double version) { this.sender = sender; this.id = id; this.event = event; this.ticks = ticks; this.time = time; this.version = version; } ... getters/setters ... 
public Message() { } @Override public String toString() { return "We have the following Message{" + "sender=" + getSender() + ", id=" + id + ", event=" + event + ", ticks=" + ticks + ", time=" + time + ", version=" + version + '}'; } After doing some digging and looking at some javascript examples (I can't seem to find any Java examples of functions subscribed to SNS), it seems they all receive "event". I've found on AWS' Github repository a Java class SNSEvent (https://github.com/aws/aws-lambda-java-libs/blob/master/aws-lambda-java-events/src/main/java/com/amazonaws/services/lambda/runtime/events/SNSEvent.java), however it's not in the official Javadoc. None of the AWS documentation I have been able to find document a Java Lambda function setup to receive a POJO deserialized (which I can't believe is all that uncommon) and I can't find anything that specifies what object type is sent by the SNS Topic to the Lambda function, if infact I should not expect the POJO type. Can someone please clarify, what object type should I have my Lambda function expect to receive? Can someone provide some sample code? Any help would be appreciated. 
EDIT 1

I modified my function to accept SNSEvent and Context objects, per a suggestion, and my function throws the following exception:

    Error loading method handler on class com.app.LambdaHandler: class java.lang.NoClassDefFoundError
    java.lang.NoClassDefFoundError: com/amazonaws/services/lambda/runtime/events/SNSEvent
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
        at java.lang.Class.privateGetPublicMethods(Class.java:2902)
        at java.lang.Class.getMethods(Class.java:1615)
    Caused by: java.lang.ClassNotFoundException: com.amazonaws.services.lambda.runtime.events.SNSEvent
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

As if the runtime environment does not recognize SNSEvent?

There are two things I think you should change:

1. Your Message class does not follow the expected Lambda POJO format of getX/setX accessors that Lambda will use to deserialize the event object.
2. If your event is from SNS, it will follow the generic SNS object format rather than your custom format. You will have to inspect the SNS event to extract your custom data in the Message, then parse that separately. Take a look at the SNS event template in Lambda under Actions > Configure sample event.

Here is a sample Lambda function for handling an SNS event in Java, using the AWS Lambda Java Support Libraries.
    package example;

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.LambdaLogger;
    import com.amazonaws.services.lambda.runtime.events.SNSEvent;

    public class SNSEventHandler {
        public String handleSNSEvent(SNSEvent event, Context context) {
            LambdaLogger logger = context.getLogger();
            for (SNSEvent.SNSRecord record : event.getRecords()) {
                SNSEvent.SNS sns = record.getSNS();
                logger.log("handleSNSEvent received SNS message " + sns.getMessage());
            }
            return "handleSNSEvent finished";
        }
    }

The SNSEvent data model suggests that multiple events might arrive at the handler at the same time, so the sample shows iterating over them rather than just assuming one. I haven't seen that in practice yet, but my usage has been low-volume.

I found that same SNSEvent class on GitHub as well, but I couldn't find it anywhere in the Javadoc, and there are no references to SNSEvent in the AWS docs, so I'm confused as to how anyone is supposed to find it. I'll give it a shot and let you know. Thanks!

I modified my function to accept SNSEvent and Context objects, and upon execution it logs the below exception:

    Error loading method Handler on class com.app.LambdaHandler: class java.lang.NoClassDefFoundError
    java.lang.NoClassDefFoundError: com/amazonaws/services/lambda/runtime/events/SNSEvent

It sounds like the runtime is not finding the compiled .class files for the com.amazonaws.services.lambda.runtime.* packages in your jar file. Are you including them?

Yes, I downloaded the function (to a completely different machine) from the Lambda console and opened it, and sure enough the lib directory contains the jars for the packages being imported. I opened aws-lambda-java-events-1.1.0.jar and under /com/amazonaws/services/lambda/runtime/events/ I found the SNSEvent.class file. I use Maven in NetBeans to do my development; is there anything else I need to do for AWS Lambda?
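On the POJO point above, here is a minimal sketch of what the expected bean convention looks like for the Message class, trimmed to three fields for brevity (the remaining fields would follow the same pattern). The runtime's deserializer needs a public no-arg constructor plus matching getX/setX accessors.

```java
// Sketch of Message reworked to follow the Lambda POJO convention:
// a public no-arg constructor and getX/setX accessors for every field,
// so the runtime can instantiate and populate it reflectively.
public class Message {
    private String sender;
    private Integer id;
    private String event;

    public Message() { }  // required: deserialization starts from the no-arg constructor

    public String getSender() { return sender; }
    public void setSender(String sender) { this.sender = sender; }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }

    public String getEvent() { return event; }
    public void setEvent(String event) { this.event = event; }

    @Override
    public String toString() {
        return "Message{sender=" + sender + ", id=" + id + ", event=" + event + '}';
    }
}
```

Note, though, that when the trigger is SNS this class is not what arrives at the handler anyway; the handler receives the SNS envelope, and the custom payload is the string inside it.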
Since you answered my original question (and now I have a new one), I accepted your answer and opened a new question (http://stackoverflow.com/questions/32782980/aws-lambda-noclassdeffounderror). If you wouldn't mind taking a look and potentially answering that as well, I would appreciate it.

Do I have to put my message in the "message" key as a JSON object, and then parse it out?
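On the last question above: yes. SNS delivers exactly the string you published, so if you publish the Message serialized as JSON, sns.getMessage() hands that JSON string back and the handler parses it itself. In practice you would use a JSON library such as Jackson or Gson; the stdlib-only sketch below (with made-up field values) just illustrates the idea for flat string fields.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SnsPayloadSketch {
    // Pull one string-valued field out of a flat JSON object. This is only a
    // demonstration; real code should use a proper JSON parser (Jackson/Gson).
    static String extractField(String json, String field) {
        Matcher m = Pattern
                .compile("\"" + field + "\"\\s*:\\s*\"([^\"]*)\"")
                .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // What sns.getMessage() would return if the publisher sent the
        // Message as JSON (hypothetical values):
        String raw = "{\"sender\":\"deviceA\",\"event\":\"started\",\"time\":\"05:28:51\"}";
        System.out.println(extractField(raw, "sender")); // prints deviceA
        System.out.println(extractField(raw, "event"));  // prints started
    }
}
```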
How to get HTML Struts styleClass with JavaScript/jQuery

I need to get the value of the styleClass element in JavaScript. The page is a JSP with Struts html tag elements. The JSP code is:

    <input type="hidden" class="filename" name="filename" value="<%= filename %>" />
    <html:file property="testfile" styleClass="testfile"/>

and on click of a button I invoke the JavaScript function:

    function fields() {
        var filename = jQuery('.filename');
        alert(filename);
        var testfile = jQuery('.testfile').val();
        alert(testfile);
    }

However, in the first case (filename) I get [object Object] returned, and in the second case I get "undefined". Can someone give some pointers on how to get the uploaded file name in jQuery? Thanks

If you want to get a value from an input:

    var filename = $('.filename').val();

or

    var filename = jQuery('.filename').val();

(same as above) read more here -> jquery selector can't read from hidden field

Try this:

    var testfile = jQuery('.testfile').attr('value');

It may be that your browser is preventing access to the value attribute for security reasons. Try both options in different browsers - Chrome, Firefox, IE, etc.
Can I request a certificate for *.user1.example.com from Let's Encrypt if I only own the example.com domain?

I only own the example.com domain, and *.user1 is a CNAME of example.com, but I don't own user1.example.com as a separate domain, so I can't satisfy the DNS-01 challenge on user1.example.com; the challenge can only be satisfied on example.com. Is it possible to request a cert for *.user1.example.com in such a case? Thanks.

If you own example.com you are free to create the user1.example.com subdomain. And then you can make the appropriate changes in DNS to satisfy Let's Encrypt's requirements.

Thanks. I am using IBM Cloud Internet Services (like CloudFlare), where creating a subdomain is not free. Since we will have lots of users, we don't want to buy a subdomain for each user.

@MattCui, creation of subdomains is up to you. Support from IBM (DNS) is another story, but you can always create your own DNS server and support the subdomains there.

Do you know if it's free in CloudFlare? Thanks.

No idea, sorry; it's probably better to contact them and explain your case. Also, you can build your own DNS server.