How does tf.estimator.LinearClassifier prediction work? I have a TF model which is a tf.estimator.LinearClassifier built using the FTRL optimizer with L1 regularization. I have a sparse column with a couple of million string keys and a couple of real-valued columns. The problem is that I want to extract the trained model and use it elsewhere, so I just took the weights that resulted after training using model.get_variable_value(var_name) and plugged them in where I needed. The problem is I do not get the same results for identical examples, and it performs far worse. The way I am doing predictions is the standard way: multiplying the weights by the feature values, summing them up, adding the bias, and then applying the sigmoid function. For the keys in the sparse column the corresponding weight gets added only if the key is present in the current example. My question is: why am I not getting the same results? Is TF doing it differently? Is it using the weights in a different way to make predictions? If yes, then how? PS. I have debugged this for over two days so I do not think there are bugs in my code. Hopefully someone can clear the air on this, thank you! Can you match TensorFlow's predictions on an empty example (no features)? Can you match the predictions on an example with a single feature? With just the real-valued columns? Try doing this and you'll see what part of the behavior your code is not matching. I actually figured it out. The problem was that I was using the 'sqrtn' combiner when creating the sparse column with keys. After I removed it I can use the weights outside of TensorFlow with the same predictions.
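The hand-rolled prediction path described above can be sketched in Python. The feature names, weights, and bias below are made-up illustration values, and the sqrtn branch reflects the combiner behavior the asker identified: with unweighted keys, the summed sparse weights get divided by the square root of the number of present keys.

```python
import math

# Hypothetical weights "extracted" from a trained linear model; the key
# names and all numeric values here are illustrative assumptions.
weights = {"key:alice": 0.7, "key:bob": -0.3}   # sparse-column weights
real_weights = [0.5, -1.2]                      # weights for real-valued columns
bias = 0.1

def predict(present_keys, real_values, combiner=None):
    """Replicate a linear classifier's probability output by hand."""
    z = bias + sum(w * x for w, x in zip(real_weights, real_values))
    sparse = [weights[k] for k in present_keys if k in weights]
    if combiner == "sqrtn" and sparse:
        # With the 'sqrtn' combiner (and unweighted keys) the summed sparse
        # weights are divided by sqrt(number of present keys) -- the detail
        # that made the hand-rolled predictions disagree with TensorFlow's.
        z += sum(sparse) / math.sqrt(len(sparse))
    else:
        z += sum(sparse)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid
```

With the default (plain sum) combiner the two paths agree; passing combiner="sqrtn" shows how the same weights produce a different probability.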
common-pile/stackexchange_filtered
Stop Galleria from looping slideshow I have the latest version of Galleria (1.2.6), I'm using the Classic theme with Picasa plugin. I have set the slideshow on autoplay, however I would like the slideshow to STOP after playing through all the images. Also, I would like Galleria to stop/return to the first image - is there a way to do that? Please help, thank you. Provide us link of Galleria Picasa link which you are using... picasa: 'useralbum:110151686478277507568/index' You should add this code after loading the gallery: var gallery = Galleria.get(0) var totalImages = parseInt(gallery.$('total').html()); gallery.bind("loadfinish", function(e) { if (e.index == 0 || e.index == totalImages - 1) { gallery.pause(); } }); It first gets the gallery (assuming you only have one gallery in your page) and the total image count. Then, every time an image is loaded, checks if the image is the first or the last one, and pauses the gallery if so. EDITED: You should add this script after the js are loaded in a <script> tag: // Note .js names might be different in your page <script type="text/javascript" src="galleria-1.2.5.js"></script> <script type="text/javascript" src="galleria.classic.theme.js"></script> <script type="text/javascript" src="galleria.picassa.js"></script> <script type="text/javascript"> // Above code here... </script> Hi, Thank you so much for the reply. Sorry for asking a stupid question, but there are three js linked to my page(galleria 1.2.5 ; galleria classic theme ; galleria picasa), I'm not sure which document I should insert these lines and where in the document exactly? Thanks for your help. I edited the answer. You can add a new script tag after the .js are loaded with the code in it :) Is there a way to prevent the "next picture" and "previous picture" buttons from looping as well? (so in the last picture, the "next" button doesn't go to the first picture and in the first picture the "previous" button doesn't work either).
common-pile/stackexchange_filtered
Is there a way to get the GraphQL schema Object combined by NestJS v6? I'm looking for a way to get the combined GraphQL schema from NestJS v6, in order to mock the schema interface with addMockFunctionsToSchema from Apollo, for testing purposes. i.e. https://www.apollographql.com/docs/graphql-tools/mocking/#addmockfunctionstoschema In short, I need to somehow get the schema object and call: addMockFunctionsToSchema({ schema }) It looks like this was possible on the previous version of NestJS (v5): e.g. https://github.com/alessandrodeste/graphql-nodejs-typescript/blob/master/src/app.module.ts#L24 Is it no longer possible to achieve this? Thanks for your help! Do you mean combining module schemas to get one at the app root level? That can be done in v6 as before. NestJS is combining all the *.graphql into a schema object, under the hood. I want to get this schema object so I can call addMockFunctionsToSchema({ schema }) Found a way to mock the schema. I didn't even know it was built into Nest v6 :) GraphQLModule.forRoot({ ... mockEntireSchema: true, // it's even possible to pass your own mocks if needed // mocks: { ... } ... });
common-pile/stackexchange_filtered
Automating assignment in initialize() methods for Reference Classes in R I'm working with a reference class with a few dozen fields. I've set up an initialize() method that takes a list object in. While some of the fields rely on further computation from list elements, most of the fields are directly assigned from list elements as such: fieldA <<- list$A fieldB <<- list$B I was thinking that it'd be nice to automate this a bit. To give an example in R pseudocode (this example obviously won't work): for (field in c('A', 'B', 'C', 'D')) field <<- list[[field]] I've tried making a few end runs around the <<-, for instance doing something like: for (field in c('A', 'B', 'C', 'D')) do.call('<<-', c(field, list[[field]])) but no dice. My guess is that this sort of behavior simply isn't possible in the current incarnation of reference classes, but thought it might be worth seeing if anyone out in SO land knew of a better way to do this. Use .self to indicate the instance, and select fields using [[. I'm not 100% sure (but who ever is?) that [[ is strictly legal. I added defaults to lst, so it works when invoked as C$new(), an implicit assumption in S4 that seems likely to bite in a similar way with reference classes. C <- setRefClass("C", fields=list(a="numeric", b="numeric", c="character"), methods=list( initialize=function(..., lst=list(a=numeric(), b=numeric(), c=character())) { directflds <- c("a", "b") for (elt in directflds) .self[[elt]] <- lst[[elt]] .self$c <- as.character(lst[["c"]]) .self })) c <- C$new(lst=list(a=1, b=2, c=3)) Or leave the option to pass a list or the elements themselves to the user with B <- setRefClass("B", fields=list(a="numeric", b="numeric", c="character"), methods=list( initialize=function(..., c=character()) { callSuper(...) .self$c <- as.character(c) .self })) b0 <- B$new(a=1, b=2, c=3) b1 <- do.call(B$new, list(a=1, b=2, c=3)) This also seems to be more tolerant of omitting some values from the call to new().
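As a cross-language aside, the accepted trick (looping over field names and assigning through `.self[[elt]]`) is the same pattern as a `setattr` loop; a minimal sketch in Python, with the field names purely illustrative:

```python
class Record:
    """Toy stand-in for a reference-class instance with known fields."""

    def __init__(self, data):
        # Directly-assigned fields: the R answer's
        # `.self[[elt]] <- lst[[elt]]` loop plays the same role as setattr.
        for field in ("a", "b"):
            setattr(self, field, data[field])
        # Fields needing extra computation are still handled explicitly.
        self.c = str(data["c"])
```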
common-pile/stackexchange_filtered
How to connect to a remote meteor mongodb using robomongo I am using Meteor, which is installed on another server. I want to access its mongodb from another Ubuntu machine. Now how can I access that mongodb via robomongo or any other tool? Any guidance or help would be appreciated. In Robomongo in the upper left: Click create In the pop-up window enter the address and port of your Mongo server Give the connection a name and click save. Using the Terminal (on the client): mongo --host <hostname> --port <port> You have to make sure the port is not blocked on the Ubuntu machine running the Meteor application. Note: when developing a Meteor app the default Mongo port is 3001. On the client. (I'll add that to my answer for clarification) I just ran this command: mongo --port 3001 --host <IP_ADDRESS>. And it says: MongoDB shell version: 2.4.9 connecting to: <IP_ADDRESS>:3001/test Thu Jan 21 19:18:42.489 Error: couldn't connect to server <IP_ADDRESS>:3001 at src/mongo/shell/mongo.js:147 exception: connect failed Are you sure the port is open on the Ubuntu machine? Have you correctly configured your firewall? (i.e. if you use ufw, check this) Here is my firewall status: root@myMachine:~# sudo ufw status Status: active To Action From 3001/tcp ALLOW Anywhere 3001 ALLOW Anywhere 3001/tcp ALLOW <IP_ADDRESS> 3001/tcp (v6) ALLOW Anywhere (v6) 3001 (v6) ALLOW Anywhere (v6)
common-pile/stackexchange_filtered
Read a file with both text and binary information in .NET I need to read binary PGM image files. The format is: P5 # comments nrows ncolumns max-value binary values start at this line. (nrows*ncolumns bytes total, as unsigned chars) I know how to do it in C or C++ using a FILE handle by reading several lines first and then reading the binary block. But I don't know how to do it in .NET. Try looking into the Stream.Read() method. Here is how you'd read a binary file in C#. This article discusses reading a PGM file. You should look into System.IO.Stream (and its inheriting classes, such as FileStream) and the various reader classes. Depending on the type of stream, you can set the position. stream.Position = {index of byte}; You could read the first section, determine at which byte the binary part starts, and read the stream from there.
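The "read the header, find where the binary part starts, then read raw bytes" approach is language-independent; a sketch in Python for a binary (P5) PGM, assuming maxval fits in one byte per pixel and '#' comments run to end of line:

```python
def read_pgm_p5(path):
    """Parse a binary (P5) PGM: ASCII header tokens, then raw pixel bytes."""
    with open(path, "rb") as f:
        data = f.read()
    # Tokenize the header: magic, width, height, maxval.
    tokens, i = [], 0
    while len(tokens) < 4:
        if data[i:i + 1].isspace():
            i += 1
        elif data[i:i + 1] == b"#":          # comment: skip to end of line
            i = data.index(b"\n", i) + 1
        else:
            j = i
            while not data[j:j + 1].isspace():
                j += 1
            tokens.append(data[i:j])
            i = j
    magic, w, h, maxval = tokens[0], int(tokens[1]), int(tokens[2]), int(tokens[3])
    assert magic == b"P5" and maxval < 256   # one byte per pixel assumed
    i += 1                                   # single whitespace byte after maxval
    pixels = data[i:i + w * h]               # binary block starts here
    return w, h, list(pixels)
```

The same two-phase idea maps directly onto the stream-position answer above: the header scan determines the byte offset, then the rest is read as raw bytes.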
common-pile/stackexchange_filtered
Xcode and svn client certificates I've been stuck on an error in Xcode for over a year: "The server “my.servername.net” requires a client certificate." I have svn client certificates correctly set up on my Mac and can properly access our internal svn via the command line. I can also browse it via Safari. For whatever reason, I cannot access it through Xcode. I recall some years ago discovering how Xcode uses its own svn client. I'm not sure if that means it may/may-not honor local ~/.subversion/servers settings. In my case I have my servers file correctly identifying my client certificate but it just doesn't work. Help? possible duplicate of Xcode 4 SVN hanging at “Checking out” if client certificate required or of your very own Point XCode4 to my client certificate I think I found the answer with the help of a buddy. I never imported my ".pem" certificate into Keychain. Things seem to work differently after the import.
common-pile/stackexchange_filtered
Redis performance on Windows localhost I have a Redis instance installed on my local machine. The instance contains 147,848 serialized objects. I need to retrieve all objects and then apply some logic. I came to know that there is no way to retrieve all objects at once, so first I get all keys as var keys = client.GetAllKeys(); and then I iterate through the keys to get the JSON and then deserialize them as var keys = client.GetAllKeys(); foreach (string key in keys) { var sobj = client.Get<string>(key); MyClass desobj = JsonConvert.DeserializeObject<MyClass>(sobj); myList.Add(desobj); } All of this (getting all keys, then retrieving all objects and deserializing them) takes approx. 32 seconds. For 0.15 million objects Getting all keys took 0.4 seconds Getting all values took 16 seconds Deserializing took 6 seconds There is another Redis instance which has 1 million objects and Getting all keys took 2 seconds Getting all values took 64 seconds Deserializing took 29 seconds Is there any way to improve the performance? I think you can improve performance by using multiple threads (you can use C#'s Parallel.For). Please note that Redis is a single-threaded application and the gain in performance will be in the deserializing step. If I were you I would measure the time required to get all keys, get all values and deserialize all values. With this information you can better understand which part is slowest. Since the bottleneck is in the back-and-forth round trips to Redis, a few thoughts I have are: Can you use an in-memory dictionary instead of Redis? Can you consolidate values into fewer but larger objects? Can you use multiple Redis instances so that reads can be parallelized?
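Independent of threading, the per-key round trips can be cut by fetching values in batches with MGET. A sketch in Python, assuming a redis-py-style client exposing `keys()` and `mget()` (the original code is C#/ServiceStack, so this is the concept, not a drop-in fix):

```python
import json

def load_all(client, batch_size=1000):
    """Fetch every value in batches of `batch_size` keys via MGET,
    turning ~150k round trips into ~150, then deserialize each payload."""
    keys = client.keys()
    objects = []
    for start in range(0, len(keys), batch_size):
        chunk = keys[start:start + batch_size]
        for raw in client.mget(chunk):
            if raw is not None:  # a key may vanish between KEYS and MGET
                objects.append(json.loads(raw))
    return objects
```

Batching composes with the parallelism suggestion above: each thread can issue MGET on its own slice of the key list.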
common-pile/stackexchange_filtered
TextField in Compose : Double Tap to select text is not working I am playing with Jetpack Compose. In previous Android Views, we could double tap on a word in an EditText/TextInputLayout/TextInputEditText to select the entire word, and provide options like copy-paste, select-all, etc. This does not work with Jetpack Compose TextField. The only way to bring up the copy-paste/select-all options is with a long press on the TextField. Is there a solution for this? (Tested with Compose stable version 1.1.1 and 1.2.0-alpha07) I don't remember being able to do that by double tap in normal Android Views. As far as I know it's always been by long tap. Maybe it even depends on Android version or device Nevermind, it does seem to work in most apps. I just wasn't aware it was even possible like that. long tap was just my natural way to do it, but it actually might depend on device I have the same issue. Any update? It is a jetpack compose bug. You can fix the problem using this code: @Composable fun CustomTextField() { val textFieldValue = remember { mutableStateOf(TextFieldValue("")) } val interactionSource = remember { MutableInteractionSource() } val isDoubleTap by interactionSource.collectIsDoubleTapAsState() LaunchedEffect(isDoubleTap) { val endRange = if (isDoubleTap) textFieldValue.value.text.length else 0 textFieldValue.value = textFieldValue.value.copy( selection = TextRange( start = 0, end = endRange ) ) } BasicTextField( value = textFieldValue.value, onValueChange = { if (!isDoubleTap) { textFieldValue.value = it } }, interactionSource = interactionSource ) } @Composable fun InteractionSource.collectIsDoubleTapAsState(): State<Boolean> { val isDoubleTap = remember { mutableStateOf(false) } var firstInteractionTimeInMillis = 0L LaunchedEffect(this) { interactions.collect { interaction -> when (interaction) { is PressInteraction.Press -> { val pressTimeInMillis = System.currentTimeMillis() if (pressTimeInMillis - firstInteractionTimeInMillis <= 500L) {
firstInteractionTimeInMillis = 0 isDoubleTap.value = true } else { firstInteractionTimeInMillis = System.currentTimeMillis() isDoubleTap.value = false } } } } } return isDoubleTap } You can use it for TextField, BasicTextField and OutlineTextField. Please, check this gist for more details. Please state explicitly that you're the author of the linked gist. Any links to external content that belong to you must always contain disclosure. This issue has been reported on Googles's issue tracker thread here https://issuetracker.google.com/issues/137321832. It's currently assigned but not yet fixed Double or triple click in the TextField is not working, but if you just tap the blue water drop of the cursor once, the pop-up will be shown.
common-pile/stackexchange_filtered
CodeChecker Warning: file contents changed or missing since the latest analysis I executed a static analysis locally using CodeChecker. Now I want to store the results (generated in the reports folder) in the CodeChecker database using the command CodeChecker store ./reports --name my_first_run --url http://<codechecker_ip>:8555/Default as documented in Store analysis results in a CodeChecker DB, but the page is empty, as in the image below. Am I missing something? Do I need to upload the source code? After the command, from the log I see this warning [INFO 2023-09-25 07:11] - Processing report files done. [WARNING 2023-09-25 07:11] - The following source file contents changed or missing since the latest analysis: - /home/runner/work/codechecker-analysis-action-test/codechecker-analysis-action-test/src/main.cpp Please re-analyze your project to update the reports! I would expect to be able to view the results in my web browser http://<codechecker_ip>:8555/Default as in the following image CodeChecker dev here, I know this is an old question, but if anybody gets here this might help them. Between CodeChecker analyze and CodeChecker store, the source files need to be left intact. You should not modify them, or else the CodeChecker store cannot be completed. To fix this error, please re-analyze your project and store the results without modifying your source files. This is suggested in the last line of the warning: Please re-analyze your project to update the reports!
common-pile/stackexchange_filtered
How to sort HashMap<String,ArrayList> in Android? How to sort a HashMap<String,ArrayList<String>>? I am fetching the contact name from phone contacts like: "String name = cur.getString(cur.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME));" and stored it in a HashMap<String,ArrayList<String>>. But I am getting all contacts in unsorted order. May I know how to sort those contacts in alphabetical order so that I can display the list view in sorted order? Did you search this topic before posting the question? I searched, but I only found how to sort HashMap<String,String> or HashMap<String,Integer>; I have been stuck on this HashMap<String,ArrayList> for two days. ArrayList of what? More strings? Just use a Comparator for that.... Possible duplicate of How to sort a Map<Key, Value> on the values in Java? I used the following code to get contacts in ascending order of their names: public void getAllContacts(ContentResolver cr) { Cursor phones = cr.query( ContactsContract.CommonDataKinds.Phone.CONTENT_URI, null, null, null, Phone.DISPLAY_NAME + " ASC"); while (phones.moveToNext()) { String name = phones .getString(phones .getColumnIndex(ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME)); String phoneNumber = phones .getString(phones .getColumnIndex(ContactsContract.CommonDataKinds.Phone.NUMBER)); name1.add(name); phno1.add(phoneNumber); } phones.close(); } Code for reference. Hope this is what you need. One more question: can you tell me how to add a search option to my custom adapter for the list view? @User_B You can add an EditText component as a HeaderView in your ListView. Do you have a snippet? Just put the EditText above the ListView component; else @User_B you are looking for this, I guess: http://www.androidbegin.com/tutorial/android-search-listview-using-filter/ Yes, I have already referred to this link, but when I type 'a', all the names which contain 'a' (e.g. 'manish') are also displayed.
How do I display only records starting with whatever I typed in the edit text? You need to implement something like a trie data structure then. You can retrieve your data from the database already sorted. Just use String orderBy = ContactsContract.Contacts.DISPLAY_NAME + " DESC"; in your query. After that: usually when you need to use a HashMap and also need sorted data, you use a HashMap + an ArrayList. In the HashMap you keep normal key/value pairs; in the ArrayList you keep sorted values (if you receive the data already sorted, you add it to the ArrayList while reading and don't need to sort again). If you need to sort an ArrayList which is a value in your HashMap, you can use: Collections.sort(yourArrayList); Use the class below and pass your List of data to it. It will give you a new sorted list. public class CompareApp implements Comparator<AppDetails> { private List<AppDetails> apps = new ArrayList<AppDetails>(); Context context; public CompareApp(String str, List<AppDetails> apps, Context context) { this.apps = apps; this.context = context; Collections.sort(this.apps, this); } @Override public int compare(AppDetails lhs, AppDetails rhs) { return lhs.label.toUpperCase().compareTo(rhs.label.toUpperCase()); } } AppDetails is a generic class public class AppDetails { public String label; public String packageName; public Drawable icon; }
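The prefix-matching behavior asked about in the last comments (show only names starting with the typed text, rather than names merely containing it) boils down to filtering on "starts with"; a language-neutral sketch in Python:

```python
def prefix_filter(names, typed):
    """Keep only names whose first characters match the typed text,
    case-insensitively. 'Contains' matching is what made 'manish'
    show up for the query 'a'; prefix matching does not."""
    q = typed.lower()
    return [n for n in names if n.lower().startswith(q)]
```

For a handful of contacts a linear scan like this is fine; the trie suggested in the comment only pays off for very large, frequently-queried name sets.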
common-pile/stackexchange_filtered
No output to terminal after inserting a module with insmod I am following the following tutorial, trying to learn how to develop device drivers, and in Chapter 2, the focus is to develop a working module and insert it into the kernel. I used the following code (hello.c): #include <linux/init.h> #include <linux/module.h> #include <linux/kernel.h> MODULE_LICENSE("Dual BSD/GPL"); static int hello_init(void) { printk(KERN_ALERT "Hello World!\n"); return 0; } static void hello_exit(void) { printk(KERN_ALERT "Goodbye, cruel world!\n"); } module_init(hello_init); module_exit(hello_exit); And this is my Makefile: obj-m += hello.o all: make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules clean: make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean I then run the following in LXTerminal: brian@brian-desktop:~/driver_stuff/hello$ su root@brian-desktop:/home/brian/driver_stuff/hello# make make -C /lib/modules/2.6.32-21-generic/build M=/home/brian/driver_stuff/hello modules make[1]: Entering directory `/usr/src/linux-headers-2.6.32-21-generic' Building modules, stage 2. MODPOST 1 modules make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-21-generic' root@brian-desktop:/home/brian/driver_stuff/hello# insmod ./hello.ko root@brian-desktop:/home/brian/driver_stuff/hello# rmmod hello root@brian-desktop:/home/brian/driver_stuff/hello# exit However, after the insmod ./hello.ko command, one should expect that the terminal would print "Hello World!", and then "Goodbye, cruel world!" after the rmmod hello command. The book mentioned that this happens when you run the commands in the console, but not in an emulated terminal. Could this be the problem? I also checked under /var/log/messages and /var/log/messages.1, which had no record of either "Hello World!" or "Goodbye, cruel world!". Is it possible that these messages are in a different file, or is the issue that the messages aren't being pushed to the kernel in the first place?
If you need info about the kernel I am running (Lubuntu 10.04, inside a VM): brian@brian-desktop:~/driver_stuff/hello$ uname -r 2.6.32-21-generic Thank you.
common-pile/stackexchange_filtered
Fourier transform of time-shifted complex conjugate I want to do a Fourier transform of a time-shifted complex conjugate function, $\exp(iat)\bar{f}(t)$ where $a$ is a real, positive constant. If the Fourier transform of the original function is $\exp(iat)f(t) \rightarrow F(\omega-a)$ then does it mean that $\exp(iat)\bar{f}(t) \rightarrow \bar{F}(-\omega-a)$ or is it $\exp(iat)\bar{f}(t) \rightarrow \bar{F}(-\omega+a)$ What physical differences are there between the two solutions? You know that $\bar{f}(t)$ corresponds to $\bar{F}(-\omega)$. There are two (reasonable) options to determine the Fourier transform of $e^{iat}\bar{f}(t)$. The first is to define a function $$g(t)=e^{-iat}f(t)\Longleftrightarrow G(\omega)=F(\omega+a)$$ from which $$\bar{g}(t)=e^{iat}\bar{f}(t)\Longleftrightarrow \bar{G}(-\omega)=\bar{F}(-\omega+a)$$ follows. The other way is to transform $\bar{f}(t)$, and then shift the result by replacing $\omega$ with $\omega -a$: $$\bar{f}(t)\Longleftrightarrow \bar{F}(-\omega)\\ e^{iat}\bar{f}(t)\Longleftrightarrow \bar{F}(-(\omega-a))=\bar{F}(-\omega+a)$$ So your second solution is correct. The difference between the two is that the correct solution has a spectrum centered at $\omega=a$, whereas the spectrum of the incorrect first solution is centered at $\omega=-a$.
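The two candidate answers differ only in where the spectrum is centered, so they are easy to tell apart numerically. The sketch below approximates $F(\omega)=\int f(t)e^{-i\omega t}\,dt$ by a Riemann sum for a sample Gaussian-envelope $f$ (an arbitrary test choice) and checks the second identity, $\bar{F}(-\omega+a)$:

```python
import cmath

# Arbitrary test choices: a Gaussian-envelope f(t) and a shift a.
a = 1.5
N = 4000
ts = [-20 + 40 * k / N for k in range(N + 1)]
dt = 40 / N
f = [cmath.exp(-t * t) * cmath.exp(0.4j * t) for t in ts]

def ft(g, omega):
    """Riemann-sum approximation of F(omega) = integral g(t) e^{-i omega t} dt."""
    return sum(gv * cmath.exp(-1j * omega * t) for gv, t in zip(g, ts)) * dt

def lhs(omega):
    # Transform of e^{iat} * conj(f(t)).
    g = [cmath.exp(1j * a * t) * fv.conjugate() for fv, t in zip(f, ts)]
    return ft(g, omega)

def rhs(omega):
    # The claimed answer: conj(F(-omega + a)).
    return ft(f, -omega + a).conjugate()
```

The agreement is exact up to floating-point rounding, since conjugating the discrete sum reproduces the left-hand side term by term, mirroring the analytic derivation in the answer.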
common-pile/stackexchange_filtered
The spectroscopy data from the cerium compound is confusing me. The f-electrons should be completely localized, but we're seeing this intermediate behavior. That's exactly what the Anderson model predicts though. The f-orbitals aren't purely localized - they're hybridizing with the conduction band, creating this mixed valence state. But how can an electron be both localized and delocalized simultaneously? The Coulomb repulsion U should keep two electrons from occupying the same f-orbital. Think about it this way: when U is much larger than the hybridization strength V, you get Kondo physics. The f-electron forms a local moment that gets screened by conduction electrons. But when V becomes comparable to U, you're in this intermediate regime. So the hybridization parameter V essentially competes with the on-site repulsion U? That would explain why we see different magnetic behavior depending on pressure. Exactly. Under pressure, the lattice contracts, increasing the overlap between f-orbitals and conduction bands. Higher V means stronger hybridization, which can destroy the local moments entirely. But here's what I don't understand - if the f-electrons are hybridizing away, why do we still see magnetic ordering at low temperatures in some of these heavy fermion systems? The periodic Anderson model shows that even with hybridization, you can still get magnetic ground states. The key is the competition between the kinetic energy that wants to delocalize electrons and the Coulomb interaction that favors localization. That makes sense for the single impurity case, but with a lattice of impurities, don't the RKKY interactions between local moments complicate things? Absolutely. You get this three-way competition: direct exchange between f-electrons, RKKY coupling through conduction electrons, and the Kondo effect trying to screen individual moments. The ground state depends on which energy scale dominates. 
So when the Kondo temperature is higher than the RKKY coupling strength, you get a non-magnetic heavy Fermi liquid. But if RKKY wins, you get magnetic order. Right, and the crossover between these regimes is where things get really interesting. Near that quantum critical point, you see non-Fermi liquid behavior - the specific heat coefficient diverges, resistivity becomes linear in temperature. The entanglement between the f-electrons and conduction electrons must be crucial there. The quantum criticality emerges from this many-body entangled state. That's the beauty of the Anderson model - it captures how local quantum mechanics gives rise to emergent classical magnetic order through collective effects.
sci-datasets/scilogues
What is git status -uno in git When I type git status there is a message saying It took 2.39 seconds to enumerate untracked files. 'status -uno' may speed it up, but you have to be careful not to forget to add new files yourself (see 'git help status'). nothing to commit, working tree clean I typed git help status and read it, but I do not understand how to get the same result as git status. Git is telling you that because you have many untracked files, the result of git status takes longer than usual. The fix is to use git status -uno. From the manual: -u[<mode>] --untracked-files[=<mode>] Show untracked files. The mode parameter is optional (defaults to all), and is used to specify the handling of untracked files; when -u is not used, the default is normal, i.e. show untracked files and directories. The possible options are: no - Show no untracked files normal - Shows untracked files and directories all - Also shows individual files in untracked directories.
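If you always want this behavior in a given repository, the flag can be made the default via the standard status.showUntrackedFiles config setting; a sketch, run inside a throwaway repository so nothing real is touched (the file names are illustrative):

```shell
# Demo in a throwaway repository created with mktemp.
cd "$(mktemp -d)"
git init -q .
git config user.email you@example.com
git config user.name you
touch tracked && git add tracked && git commit -qm init
touch newfile                        # an untracked file

git status -uno                      # one-off: skip untracked enumeration

# Per-repository default: every `git status` here behaves like -uno.
# Remember this hides new files until you `git add` them yourself.
git config status.showUntrackedFiles no
git status                           # no longer lists `newfile`

# Revert to the normal behavior later:
git config --unset status.showUntrackedFiles
```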
common-pile/stackexchange_filtered
Trim a string with an array list I want to be able to trim a string where values match an array, like this: $image_formats = array('.png','.jpg', '.jpeg', '.gif'); $file = 'image1.png'; $file_stripped = trim($file, $image_formats); Wanted result: 'image1' Is there a function for this? What's the best method to achieve this? http://php.net/manual/en/function.pathinfo.php In addition, you can also use the in_array() function. You can use str_replace passing an array of search values: $image_formats = array('.png','.jpg', '.jpeg', '.gif'); $file = 'image1.png'; $file_stripped = str_replace($image_formats, '', $file); trim() is for removing matches of individual characters, not longer strings. You can convert $image_formats to a regular expression and use preg_replace(). $image_formats = '/\.(png|jpg|jpeg|gif)$/'; $file_stripped = preg_replace($image_formats, '', $file);
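As a cross-language note, the difference between the two answers matters: str_replace removes the extension anywhere in the string, while the anchored regex only strips a trailing one. The same anchored approach can be sketched in Python (purely illustrative):

```python
import re

IMAGE_FORMATS = (".png", ".jpg", ".jpeg", ".gif")

def strip_image_ext(filename):
    """Remove a known image extension only when it ends the string,
    so 'archive.png.zip' is left alone but 'image1.png' is trimmed."""
    pattern = r"(?:%s)$" % "|".join(re.escape(ext) for ext in IMAGE_FORMATS)
    return re.sub(pattern, "", filename)
```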
common-pile/stackexchange_filtered
Flow rate in split bottleneck Water flows from top to bottom. First there are two pipes, one with $10$ LPM max flow rate, and another with $20$ LPM max flow rate. They both go to another pipe, which is $20$ LPM max flow rate. The total flow rate will be $20$ LPM, because of the lower pipe bottleneck. What would be the flow rate in the two upper pipes? Picture of this problem: [image] I thought it may be $10$ LPM each, or $13.333333$ in the fatter and $6.6666667$ in the slimmer. Both make sense to me. I would also like to know which field this question belongs to (I mean something more specific than just fluid mechanics). What theoretical material should I read? Any specific topics / equations? You claim the flow through the lower, single pipe is 20 LPM, yet you don't know that. Without detailed knowledge of pipe diameters and pipe lengths (and assuming smooth pipes) this cannot be solved. Let's say that the upper fat pipe is twice the area of the upper slimmer pipe, and the same area as the lower pipe. Is that solvable now? Using Darcy-Weisbach, I get: $\frac{Q_1}{Q_2}=\bigg(\frac{A_1}{A_2}\bigg)^{5/2}$ for the volumetric flow rates in the upper pipes ($A$ are the cross-sections of the pipes) but without lengths and friction coefficients it's as far as this goes. Total flow rate would of course be $Q=Q_1+Q_2$. The continuity equation requires that the flow rates from the fat and thin pipe exiting the upper tank equal the flow rate of the pipe entering the lower tank. In addition, the pressure drop in each of the top pipes must be equal. Finally, as Gert pointed out, you need quite a bit more information before you can set up and solve this problem. It completely depends on how the maximum flow rates are enforced. For one extreme, I could imagine a sensor that losslessly watches the flow and, if the max flow is exceeded, closes a valve to limit the flow. In that case, there'd be no difference between the two pipes up to the smaller pipe's 10LPM limit, so the flow would be equally split.
On the other hand, I can imagine a turbulence-based device where the device's back-pressure soars as you approach the configured limit. In that case, the 10LPM pipe would offer greater restriction than the 20LPM pipe, so the flow would be asymmetric. On the other other hand, perhaps the flow is shut down if the rate is exceeded, as it is in new propane tanks' flow limiting devices. In that case, the flow rate might be split anywhere from equal (if the 10LPM limit isn't exceeded) to all in the 20LPM tube. Summary: you need to add more information to your question. Hi. No active enforcement on flow rate, no measurement. Gravity makes the water flow downwards. The cross-sectional areas of the pipes are: 2x for the upper fat pipe, x for the upper slim pipe, 2x for the down pipe. Let's say the water flow rate in the down pipe is 2y LPM. What is the flow in the upper pipes? How are these 2y LPM distributed between them? Still no good. A bare pipe has no intrinsic flow limit; it depends on the force behind the flow overcoming the resistance of the pipe. If you put two pipes that you say have a 20LPM limit in series, you'll get less than 20LPM total because the resistance has increased.
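Taking the Darcy-Weisbach relation quoted in the answer at face value, $Q_1/Q_2 = (A_1/A_2)^{5/2}$ (which, as the answer notes, assumes equal pipe lengths and friction coefficients), the split of a known total flow follows directly; a sketch:

```python
def split_flow(q_total, area_ratio):
    """Split q_total between two parallel pipes whose cross-sections have
    ratio area_ratio = A1/A2, using the quoted Q1/Q2 = (A1/A2)**2.5.
    Equal pipe lengths and friction factors are assumed, as in the answer."""
    r = area_ratio ** 2.5
    q2 = q_total / (1 + r)   # from Q = Q1 + Q2 with Q1 = r * Q2
    q1 = q_total - q2
    return q1, q2
```

For the commenter's geometry (fat pipe twice the area of the slim one, total 2y LPM), this gives an asymmetric split far more lopsided than 2:1, not the equal split.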
common-pile/stackexchange_filtered
Is it possible to use .svg files with React Native without converting them? I have found many answers about converting .svg files to rasterized formats, however I would prefer to keep it as a single svg file. I got excited when I found react-native-svg, but was then disappointed when I saw their image example used a .jpg file. Is keeping my image as a vector possible? Am I trying to do something unwise? Thank you for your help. You need to convert your SVG's to use the react-native-svg library, this is a useful tool to do so: https://react-svgr.com/playground/ This does not rasterise the image, it will remain as a vector just in a format that the library can parse.
common-pile/stackexchange_filtered
How to detect conflicts between two git branches without needing to do the actual merge? I have a repository in github, whose main branch is named master, and the following situation: A contribution of code in a PR (let's name PR-A) from branch feature/pr-a to master. A contribution of code in a PR (let's name PR-B) from branch feature/pr-b to master. Both PRs are independent, so they could in theory include incompatible changes that would cause a git conflict in some files. Is there any simple or scripted way (*) of checking the potential conflict between both PRs/branches once merged in master? I mean, a way of answering the following questions: If I merge PR-A to master first, would I get a conflict when I try to merge PR-B to master? Which files? If I merge PR-B to master first, would I get a conflict when I try to merge PR-A to master? Which files? Thanks in advance! (*) Of course, I can always get this information doing the actual merge in my local copy and check, but I wonder if there is any other mechanism that doesn't need to do the actual merges. You could get the changed file lists from the diff or cherry commands. You would then know if there are any files that are common to both. But just because the files are common does not mean git will flag a conflict unless the changes are close together (or overlapping of course) in the file. It may be enough to keep the teams alert and communicating, or it may be pointless in files with very high churn, such as monolithic resource files or help files. You could simply create a new local branch from the master and try to merge the two working branches to there, and see what happens? That way you can easily just throw away that local branch. But you could do the same just with a pull to local, not push anything and discard everything.
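The throwaway-branch suggestion can be scripted so the probe merge never touches master and is fully discarded afterward. A sketch (the demo builds a tiny repository with two deliberately conflicting branches, using the branch names from the question; file names are illustrative):

```shell
# Demo in a throwaway repo with two deliberately conflicting branches.
cd "$(mktemp -d)"
git init -q .
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git config user.email you@example.com && git config user.name you
echo base > shared.txt && git add shared.txt && git commit -qm base
git checkout -qb feature/pr-a && echo "change A" > shared.txt && git commit -qam pr-a
git checkout -q "$base"
git checkout -qb feature/pr-b && echo "change B" > shared.txt && git commit -qam pr-b
git checkout -q "$base"

# Probe: if feature/pr-a lands first, does feature/pr-b then conflict?
git checkout -qb conflict-probe "$base"
git merge -q --no-ff -m "probe: pr-a" feature/pr-a
if git merge --no-ff --no-commit feature/pr-b; then
    echo "no conflict"
else
    echo "conflict in:"
    git diff --name-only --diff-filter=U    # U = unmerged (conflicted) paths
fi
git merge --abort 2>/dev/null               # drop the probe merge state
git checkout -q "$base" && git branch -qD conflict-probe
```

Swapping the two feature branch names in the probe answers the second question (PR-B first, then PR-A).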
common-pile/stackexchange_filtered
How to merge when each row on table A needs multiple rows from table B in SQL Server I have two tables.

InstructorRelated:

TermCode |Term_Seq_ID |Subject|Course|QuestionNbr|InstructorName |Instructor_Pid|Mean |StdDev
FS15 |1154 |ACC |201 |5 |SKYWALKER, LUKE|BR549 |4.349|1.033
FS15 |1154 |ACC |201 |5 |AMIDALA, PADME |39AHW |4.285|1.030

ClassRelated:

TermCode |Term_Seq_ID |Subject|Course|QuestionNbr|InstructorName |Instructor_Pid|Mean |StdDev
FS15 |1154 |ACC |201 |6 |NULL |ALL |4.078|1.049
FS15 |1154 |ACC |201 |9 |NULL |ALL |3.806|1.128

What the client wants is for these two tables to be merged such that every ACC 201 instructor has a row for questions 5, 6, and 9, like so:

TermCode |Term_Seq_ID |Subject|Course|QuestionNbr|InstructorName |Instructor_Pid|Mean |StdDev
FS15 |1154 |ACC |201 |5 |SKYWALKER, LUKE|BR549 |4.349|1.033
FS15 |1154 |ACC |201 |6 |NULL |ALL |4.078|1.049
FS15 |1154 |ACC |201 |9 |NULL |ALL |3.806|1.128
FS15 |1154 |ACC |201 |5 |AMIDALA, PADME |39AHW |4.285|1.030
FS15 |1154 |ACC |201 |6 |NULL |ALL |4.078|1.049
FS15 |1154 |ACC |201 |9 |NULL |ALL |3.806|1.128

Can this be done?

select *
from (select * from InstructorRelated
      union all
      select * from ClassRelated
      union all
      select * from InstructorRelated) x

...Not sure why you want the info from classrelated in there twice...if you're looking to replace the NULLs and ALLs with the instructor name, then:

select *
from (select TermCode, Term_Seq_ID, Subject, Course, QuestionNbr,
             InstructorName, Instructor_Pid, Mean, StdDev
      from InstructorRelated
      union
      select CR.TermCode, CR.Term_Seq_ID, CR.Subject, CR.Course, CR.QuestionNbr,
             IR.InstructorName, IR.Instructor_Pid, CR.Mean, CR.StdDev
      from InstructorRelated IR
      join ClassRelated CR
        on IR.subject = CR.subject and IR.Course = CR.Course) x

The client wanted it this way for a report. I'll try out your solution. Try it now; I changed the union all to a union, which should group it. That did it. Thanks!
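The accepted UNION-plus-JOIN shape is easy to sanity-check outside SQL Server. A minimal sketch using Python's built-in sqlite3, with cut-down columns (table and column names match the question; data is abridged, and, as in the accepted answer, the class-level rows get each instructor's name rather than NULL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE InstructorRelated (Subject, Course, QuestionNbr, InstructorName, Mean);
    CREATE TABLE ClassRelated      (Subject, Course, QuestionNbr, Mean);
    INSERT INTO InstructorRelated VALUES
        ('ACC', 201, 5, 'SKYWALKER, LUKE', 4.349),
        ('ACC', 201, 5, 'AMIDALA, PADME',  4.285);
    INSERT INTO ClassRelated VALUES
        ('ACC', 201, 6, 4.078),
        ('ACC', 201, 9, 3.806);
""")

# One instructor-level row per instructor (question 5), plus the
# class-level rows (questions 6 and 9) repeated under each instructor.
rows = conn.execute("""
    SELECT Subject, Course, QuestionNbr, InstructorName, Mean
      FROM InstructorRelated
    UNION
    SELECT CR.Subject, CR.Course, CR.QuestionNbr, IR.InstructorName, CR.Mean
      FROM InstructorRelated IR
      JOIN ClassRelated CR
        ON IR.Subject = CR.Subject AND IR.Course = CR.Course
     ORDER BY InstructorName, QuestionNbr
""").fetchall()

for row in rows:
    print(row)
```

With two instructors and two class-level questions, the JOIN fans each class row out once per instructor, giving the six rows the client asked for.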
common-pile/stackexchange_filtered
Adjusting a php function so that it displays link in the form of:- '/widgets?pg=2' instead of 'products.php?cat=20&pg=2' I added the following .htaccess rule:- RewriteRule ^widgets$ products.php?cat=20 [QSA] So now I have a simple link called 'widgets' which leads to the 'widgets' category 1st page. However, the links to the 2nd page looks like the following:- products.php?cat=20&pg=2 What I would like is for the subsequent pages to be rather in the form of:- widgets?pg=2 The QSA flag in the above .htaccess rule does achieve this, but I need to change the function which generates these page links, otherwise the only way of getting to widgets?pg=2 is by typing it in the browser address bar as:- mywebsite.com/widgets?pg=2. I think the following PHP function might need to be adjusted, to achieve the result I want. Can any PHP wizards or anyone with appropriate knowledge please help with this. The reason I want to do this is because I want google to index the simple looking pages, rather than the longer ones:- function writepagebar($CurPage,$iNumPages,$sprev,$snext,$sLink,$nofirstpage){ $startPage = max(1,round(floor((double)$CurPage/10.0)*10)); $endPage = min($iNumPages,round(floor((double)$CurPage/10.0)*10)+10); if($CurPage > 1) $sStr = $sLink . '1' . '" rel="prev"><span style="font-family:Verdana;font-weight:bold">&laquo;</span></a> ' . $sLink . ($CurPage-1) . '">'.$sprev.'</a> | '; else $sStr = '<span style="font-family:Verdana;font-weight:bold">&laquo;</span> '.$sprev.' | '; for($i=$startPage;$i <= $endPage; $i++){ if($i==$CurPage) $sStr .= '<span class="currpage">' . $i . '</span> | '; else{ $sStr .= $sLink . $i . '">'; if($i==$startPage && $i > 1) $sStr .= '...'; $sStr .= $i; if($i==$endPage && $i < $iNumPages) $sStr .= '...'; $sStr .= '</a> | '; } } if($CurPage < $iNumPages) $sStr .= $sLink . ($CurPage+1) . '" rel="next">'.$snext.'</a> ' . $sLink . $iNumPages . '"><span style="font-family:Verdana;font-weight:bold">&raquo;</span></a>'; else $sStr .= ' '.$snext.' 
<span style="font-family:Verdana;font-weight:bold">&raquo;</span>'; if($nofirstpage) $sStr = str_replace(array('&amp;pg=1"','?pg=1"'),'" rel="start"',$sStr); return($sStr); } If it helps to know how the writepagebar function fits into the incproducts.php which itself sits inside the products.php page you can see here:- http://freetexthost.com/3ubiydspzm Can you please tell what is the current output of this function which you have written, and some examples of what the links are currently showing? As an example. If I visit the page 'mywebsite.com/widgets' and then click on the send page, it leads to:- 'mywebsite.com/products.php?cat=20&pg=2’. If I click the 3rd page it leads to:- 'mywebsite.com/products.php?cat=20&pg=3’. What I want is for it to lead to:- 'mywebsite.com/widgets?pg=2' for the 2nd page or 'mywebsite.com/widgets?pg=3' for the 3rd page. Similarly, if visiting the page 'mywebsite.com/doodaas' I would want that the 2nd page number leads to:- 'mywebsite.com/doodaas?pg=2'. I have added a link above which shows where on the incproducts.php page the writepagebar function sits. The incproducts.php page is itself the main part of products.php. Edited after comments from "nitbuntu":- In the function just instead of these lines:- $sStr = str_replace(array('&amp;pg=1"','?pg=1"'),'" rel="start"',$sStr); return($sStr); } write the following lines:- $sStr = str_replace('products.php?cat=20', 'widgets', $sStr); $sStr = str_replace('&amp;pg=', '?pg=', $sStr); $sStr = str_replace(array('&amp;pg=1"', '?pg=1"'), '" rel="start"', $sStr); return($sStr); } Hope it helps. Thanks for contributing, but doing what you mentioned results in the 2nd page link going to:- 'products.php?cat=20?pg=2' and this leads to the wrong page. It appears that all that changed was that the '&' got changed to '?'. I'll test that out. But will this only work for 'widgets'. If I needed to do it for a few more pages. 
For example, if I also had another page called 'doodles' which corresponds to 'products.php?cat=21'....and another called 'dogs' which corresponds to 'products.php?cat=22'. Would I just append additional lines in the code? Well, the answer to my above comment is 'yes', it seems. For 'doodles' I just added '$sStr = str_replace('products.php?cat=21', 'doodles', $sStr);'. Brilliant, Thanks! @nitbuntu - Sorry, I couldn't answer your last comment quickly enough. But yes, you will need to add additional lines in the code. Good that you got a grip on it. Analyzing your code, it seems that $sLink is what contains the 'products.php?cat=20' value; the function only appends the page value, so maybe you need to modify another function. In other words, your writepagebar() only appends the page number to the generated link. You must find where writepagebar() is being called with the 'products.php?cat=20' string and modify it at that level.
common-pile/stackexchange_filtered
input and a on one click So I have an a href that clears the filters, and I have an input that sets the value of the other filter as empty. On a href click I want it to also clear the other filter. I have tried multiple things but none of those were working. It may be done with jQuery, or pure HTML. This is my a href:

<a href="#clear" class="btn clear-filter" title="clear filter">{translate}Clear{/translate}</a>

This is my input:

<label class="btn btn-status all-statuses active"> <input class="filter-status" type="radio" name="options" value="" checked> {translate}All statuses{/translate} </label>

How can I connect them both so that one click on the a href works on both? So what exactly is your question? In one click I want the input and the a to work at the same time, so both filters are cleared. You can do it on its click event:

$('.clear-filter').on('click', function() { $( ".filter-status" ).prop('checked', false); });

This solution was exactly what I wanted. Thank you very much :)

$(document).on("click",".clear-filter",function(){ $("input[name='options']").prop("checked",false); });

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<a class="btn clear-filter" title="clear filter">{translate}Clear CLICK ME{/translate}</a>
<br><br>
<label class="btn btn-status all-statuses active">
<input class="filter-status" type="radio" name="options" value="" checked >
{translate}All statuses{/translate}
</label>

I'm not sure if that's true, but w3schools tells me that this kind of name attribute is not supported in HTML5: https://www.w3schools.com/tags/att_a_href.asp It's under More Examples, the first one. Have a link to a .js file, make a click event, and then just do everything from there. .html: <script src="main.js"></script> main.js file: $('#clear').click(function() { } There is a load of stuff you can google to help you do this, but here is a basic one to get you going: https://www.w3schools.com/jsref/event_onclick.asp Happy Coding :)
common-pile/stackexchange_filtered
raspbian with qemu won't boot up I tried to create a VM for Raspbian in Ubuntu. I made a few changes in the Raspbian image to test whether it works on other machines or not. After that I copied the edited image onto my computer. Now I try to boot it with QEMU, but each time QEMU opens I can't see Raspbian. Here is my command: qemu-system-arm -kernel /usr/share/qemu_vms/kernel-qemu -cpu arm1176 -m 256 -M versatilepb -serial stdio -append "root=/dev/sda2 panic=1 rootfstype=ext4 rw" -hda /usr/share/raspi-image.img So my kernel image is in /usr/share/qemu_vms and my image is in /usr/share. I tried to increase the RAM amount, but in that case QEMU didn't even start up. Also, I was following this tutorial (which is almost the same as other tutorials) and my /etc/ld.so.preload is missing (I can't edit it at all). I deleted the -no-reboot option so it is in a loop rebooting every time, but it seems that it can't find the image to start. Any help would be much appreciated. Sorry to add another thing, but I guess I need to edit the question a bit. All the problem seems to be with the root path. I fixed that problem by adding rootfstype=ext4 rw into the root section. Now it is entering emergency mode. I tried to change the kernel from wheezy to jessie but it is still the same problem. What is going on? From where did you obtain the kernel? The stock Raspbian kernel won't boot under QEMU. What sort of errors -- if any -- are you seeing on the console when trying to start the image? Does it work with a stock Raspbian image? I would verify that first before trying your own (raspi-image.img). I got the image from here: https://github.com/dhruvvyas90/qemu-rpi-kernel/blob/master/kernel-qemu-4.1.13-jessie My Raspbian image works with a Raspberry Pi when I plug in the SD card, and I copied it with the dd command on my PC, which runs Ubuntu. And I don't get any errors; it just stands like that. Is my command wrong? Also, I couldn't boot my computer when I rebooted the whole system, so I had to format my computer. Is my QEMU command correct? It seems something is wrong. I just posted this answer: http://stackoverflow.com/questions/38837606/emulate-raspberry-pi-raspbian-with-qemu/39109860#39109860
common-pile/stackexchange_filtered
Can I repeat the last UI command? I know that I can use . to repeat the last editing command. Is there a way to repeat the last UI manipulation command? For example, I can write 10<C-W>- to shrink a window by ten rows. It'd be nice to be able to press ⟨some key⟩ to easily repeat this command if I want to shrink it more. Related: http://stackoverflow.com/q/6952636/2072269 (no answer given that can be used after you have already done a resize). @muru: nice, but that's for this specific case. What if I've done something like fz and then 10;? What about :tabm +1? Are these all going to have to be special-cased? I think you misunderstood me. I'm saying the linked post has useless answers (before somebody else comes and suggests it). oh! okay, then we're on the same page @muru :) Full example here mapping ++++ to ++++ The dot command . works because Vim "keeps track" of commands that change the contents of buffers. If you run :echo b:changedtick, you'll see it incrementing with each change to the current buffer. But Vim doesn't "keep track" of non-editing commands. Thus, no, what you're asking for can't be done. There is no way of doing this by default in vim because vim does not keep track of the previously executed wincmd. However, it is possible to do this through some clever mappings: function! s:Wincmd(count, key) " If count is not zero, use the original count. If otherwise, don't " include a count. let if_count = a:count ? a:count : "" " This builds a wincmd from the given key, and saves it so " it can be repeated. let g:last_wincmd = "wincmd " . nr2char(a:key) " Execute the built wincmd execute if_count . g:last_wincmd endfunction function! s:WincmdRepeat(count) " If no wincmd has been executed yet, don't do anything if !exists('g:last_wincmd') | return | endif " If a count is given, repeat the last wincmd that amount of times. " If otherwise, just repeat once. let if_count = a:count ? a:count : "" execute if_count . 
g:last_wincmd endfunction " Overwrite the default <C-w> mapping so that the last wincmd can be kept " track of. The getchar function is what captures the key pressed " directly afterwards. The <C-u> is to remove any cmdline range that vim " automatically inserted. nnoremap <silent> <C-w> :<C-u>call <SID>Wincmd(v:count, getchar())<CR> " This just calls the function which repeats the previous wincmd. It " does accept a count, which is the number of times it should repeat the " previous wincmd. You can also replace Q with whatever key you want. nnoremap <silent> Q :<C-u> call <SID>WincmdRepeat(v:count)<CR> Note that if you have any mappings that use <C-w> they can only be repeated if they are not of the nore variety. Any wincmds issued using :wincmd will not be repeated. Also, any wincmds that contain more than one character cannot be performed (such as <C-w>gf). Relevant Help Topics :help v:count :help getchar() :help nr2char() :help expr1 :help :wincmd :help :execute :help :for :help :map-<silent> :help c_CTRL-U :help <SID> This is great, and an excellent example of well-written VimScript! Some minor (perhaps picky) feedback: This repeat command would behave different from the way the built-in . behaves with a count. When a count is supplied to ., the previous count is ignored. So 2dd followed by 3. would delete 2 lines and then 3 lines; in contrast, with your mappings, 2<C-w>- followed by 3Q would shrink the window by 2 lines and then by 6 (= 2x3) lines. That behaviour is fine, but it's nice to draw from analogous built-in Vim commands when choosing how a custom command should behave. Thanks! Also, I see what you mean with how the count works. I may change it so it works that way. The submode plugin can help with this. You could define a "submode" that you enter by typing <C-W>-, wherein you've defined - (and perhaps +) to continue resizing the window. There is another plugin called repmo.vim ("repeat motions") which can do what you want. 
But you will need to specify which motions (or actions in general) you want to repeat. Mine is currently configured like this:

let g:repmo_mapmotions = "j|k h|l zh|zl g;|g, <C-w>w|<C-w>W"
let g:repmo_mapmotions .= " <C-w>+|<C-w>- <C-w>>|<C-w><"
let g:repmo_key = ";"
let g:repmo_revkey = ","

So after doing 5 CTRL-W + I can hit ; to repeat it as many times as I like. The plugin works by creating mappings for each of the keys specified. When f or t are used, the ; and , mappings are cleared back to their default behaviour. I find the mapping for g; especially useful, to get back to an earlier edit point. g; ; ; ; I have created a small vim-remotions plugin that allows repeating the last motion using the ; and , keys (like for the f and t motions). The following settings make the Ctrl < and Ctrl - motions, together with their counterparts, repeatable (including the count):

let g:remotions_direction = 1
let g:remotions_repeat_count = 1
let g:remotions_motions = {
    \ 'vsplit' : { 'backward' : '<C-w><', 'forward' : '<C-w>>', 'direction' : 1 },
    \ 'hsplit' : { 'backward' : '<C-w>-', 'forward' : '<C-w>+', 'direction' : 1 },
    \ }

But the configuration can be extended to repeat motions and other actions. When I was faced with this problem, I fell in love with the idea of having my own Vim mode for dealing with windows. I quickly came across the vim-submode plugin, as already mentioned by tommcdo. Unfortunately, you would have to build such a window mode on top of this plugin yourself. This nice blog post shows how this could look. But finally, I found the plugin tinykeymap, which already comes with a window mode that works out of the box. So if you want to try it out, that's the way to go ;-)
common-pile/stackexchange_filtered
Clarification needed: SP List Graph API webhook notifications I've created a Graph subscription for changes (ChangeType = "updated") in a SharePoint list according to the docs, and am also receiving notifications with no issues. But... why does the resourceData of the notification contain just one property?

[...]
"resourceData": {
    "@odata.type": "#Microsoft.Graph.ListItem"
},
[...]

I would expect something more like this:

"resourceData": {
    "@odata.type": "#Microsoft.Graph.ListItem",
    "AdditionalData": {
        "Id": "1234"
    }
},

Minor config info: the app creating the subscription & receiving the notification has the Sites.ReadWrite.All application permission. As for the clarification: are my expectations just too high and I misunderstood the notification definition/function, or is there some way to get the itemId from the notification body? Thx
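That thin resourceData is by design for SharePoint list subscriptions: the notification tells you something changed, not what, and you are expected to go back to Graph (typically with a delta query on the list's items) to find out which items changed. A small sketch of building that follow-up call from a notification; the payload below is hypothetical apart from the fields shown in the question, and /items/delta is the list-items delta endpoint:

```python
GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def follow_up_url(notification):
    """Build the URL to query after receiving a list change notification.

    The notification's `resource` property identifies the subscribed list;
    appending /items/delta asks Graph which items actually changed.
    """
    return f"{GRAPH_ROOT}/{notification['resource']}/items/delta"

# Hypothetical notification, shaped like the one in the question.
notification = {
    "changeType": "updated",
    "resource": "sites/contoso.sharepoint.com,123,456/lists/789",
    "resourceData": {"@odata.type": "#Microsoft.Graph.ListItem"},
}

print(follow_up_url(notification))
```

In practice you would persist the deltaLink Graph returns and replay it on each notification, so every callback yields only the items changed since the last one.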
common-pile/stackexchange_filtered
Young researchers analyzing samples in a biotech startup laboratory. PERSON 1 Why does the scattered light show different frequencies when we shine the laser on this protein sample? PERSON 2 Most photons bounce back unchanged, but some interact with the molecular vibrations. When a photon hits a vibrating bond, it can give up some of its energy to the vibration or pick some up from it, so it scatters at a frequency shifted by the bond's vibrational frequency. Those shifted lines are the Raman signal, and they fingerprint the bonds in the sample.
sci-datasets/scilogues
Where are the default crontab instructions kept? When I run crontab -l as a new user that does not have any crons yet, the command fails and exits with:

no crontab for [user]

If I run crontab -e as a new user, the following is printed and the crontab editor opens:

no crontab for [user] - using an empty one

Where is it pulling the following verbiage from?

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command

Specifically, what file holds these instructions?
I would like to run the commands like this:

[instructions file] > temp_file
[a cron job] >> temp_file
crontab temp_file
rm temp_file

However, running this fails due to there being no cron file for the new user:

crontab -l > temp_file
[a cron job] >> temp_file
crontab temp_file
rm temp_file

The text actually appears to be built in to the crontab.c source code rather than read from a file at execution time:

if (add_help_text) {
    fprintf(NewCrontab,
    "# Edit this file to introduce tasks to be run by cron.\n"
    "# \n"
    "# Each task to run has to be defined through a single line\n"
    "# indicating with different fields when the task will be run\n"
    "# and what command to run for the task\n"
    "# \n"
    "# To define the time you can provide concrete values for\n"
    "# minute (m), hour (h), day of month (dom), month (mon),\n"
    "# and day of week (dow) or use '*' in these fields (for 'any')."
    "# \n"
    "# Notice that tasks will be started based on the cron's system\n"
    "# daemon's notion of time and timezones.\n"
    "# \n"
    "# Output of the crontab jobs (including errors) is sent through\n"
    "# email to the user the crontab file belongs to (unless redirected).\n"
    "# \n"
    "# For example, you can run a backup of all your user accounts\n"
    "# at 5 a.m every week with:\n"
    "# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/\n"
    "# \n"
    "# For more information see the manual pages of crontab(5) and cron(8)\n"
    "# \n"
    "# m h dom mon dow command\n"
    );
}
/* ignore the top few comments since we probably put them there. */

The variable add_help_text is non-zero if crontab fails to find an existing spool file for the user:

log_it(RealUser, Pid, "BEGIN EDIT", User);
(void) snprintf(n, MAX_FNAME, CRON_TAB(User));
if (!(f = fopen(n, "r"))) {
    if (errno != ENOENT) {
        fprintf(stderr, "%s/: fdopen: %s", n, strerror(errno));
        exit(ERROR_EXIT);
    }
    fprintf(stderr, "no crontab for %s - using an empty one\n", User);
    if (!(f = fopen("/dev/null", "r"))) {
        perror("/dev/null");
        exit(ERROR_EXIT);
    }
    add_help_text = 1;
}

+1 for the good finding. I thought so as well but didn't find the relevant source code. I thought fiddling around with the CRONTAB_NOHEADER environment variable could help, but was wrong. It's just for the DON'T EDIT header. It's built in to the executable:

strings $(type -p crontab) | less "+/Edit this file to introduce tasks to be run by cron"

No need to consult the source. I agree the source is the One True Document. Well, sometimes strings returns garbled output, and often the hardcoded text in a binary is just the last resort if no external config file is found. @PerlDuck Careful inspection of strings output (run on an ELF binary) can show you all the files it may open, embedded texts, version strings, ... . The output isn't garbled, just some binary fields can also be seen as text. I'd also like examples of "often the hardcoded text in a binary is just the last resort". I've seen programs that write a missing config file with default values, copy a default file from somewhere else, ...
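For the script in the question, the practical fix is simply to tolerate crontab -l's non-zero exit when no crontab exists yet; the help text itself never needs to come from a file. A sketch (the job lines are just examples):

```shell
# Append a job to the invoking user's crontab, working even when the
# user has no crontab yet ("no crontab for ..." makes crontab -l exit 1).
add_cron_job() {
    job=$1
    tmp=$(mktemp) || return 1
    crontab -l 2>/dev/null > "$tmp" || true   # empty file if none exists yet
    printf '%s\n' "$job" >> "$tmp"
    crontab "$tmp"                            # install the combined table
    status=$?
    rm -f "$tmp"
    return $status
}

# add_cron_job '0 5 * * 1 tar -zcf /var/backups/home.tgz /home/'
```

The `|| true` is what makes the first run succeed: the "no crontab for [user]" diagnostic goes to stderr and the empty temp file stands in for the missing crontab.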
common-pile/stackexchange_filtered
Correct manners of getting value I think that all of these ways of getting the value are correct, but I want to ask. lockedList is an ArrayList and I want only one thread at a time to get the value.

public T get1(int index) {
    lock.lock();
    try {
        return lockedList.get(index);
    } finally {
        lock.unlock();
    }
}

public T get2(int index) {
    lock.lock();
    try {
        T t = lockedList.get(index);
        return t;
    } finally {
        lock.unlock();
    }
}

public T get3(int index) {
    lock.lock();
    T t = null;
    try {
        t = lockedList.get(index);
    } finally {
        lock.unlock();
    }
    return t;
}

I forgot to add: I know that the best way is to use ready-made synchronized containers. I'm asking whether the approaches written above are correct. Use semaphores or mutex @emd it's implied lock is a mutex. I forgot to add: I know that the best way is to use ready-made synchronized containers. I'm asking whether the approaches written above are correct. All three are correct. The read of the shared variable is all that is needed. Assigning it to a local variable or returning directly has the same thread-safe semantics. You could solve this more canonically by using a blocking queue: http://tutorials.jenkov.com/java-util-concurrent/blockingqueue.html This is what you usually want to use. Alternatively, use Collections.synchronizedList if you want to fully synchronize your list; do not implement your own. Otherwise, your code looks correct to me. It's correct, but you need to change one thing. Instead of an ArrayList use a CopyOnWriteArrayList. That way you won't have to worry about synchronization at all. Check this article about such things: http://walivi.wordpress.com/2013/08/24/concurrency-in-java-a-beginners-introduction/ In regards to CopyOnWriteArrayList you cannot simply make that conclusion without more information. For instance, if he is doing 50% writes and 50% reads he would be crazy to do so. Using a lock would only make his code thread safe. But clients of his class can mutate the objects they get from the list, thus defeating the purpose of it. Any change made by the clients to the objects would reflect in the list. Well, yes, that's true, but CopyOnWriteArrayList won't fix that. Copy on write will copy the array, not the elements in the array. Your code does things correctly in all three instances - it locks the lock before the access, and unlocks it in the finally clause, protecting the list itself. Picking a version is a matter of your personal preference. Since you always lock and unlock your lock in the same method, you could simplify this code by using synchronized:

private final Object theLock = new Object();
...
public T get1(int index) {
    synchronized (theLock) {
        return lockedList.get(index);
    }
}

Note, however, that neither your code nor its modified version would protect the values inside the list, if T happens to be mutable. I'd say they all could result in exactly the same code at run-time through basic optimization, so they must be functionally equivalent. Personally, I'd prefer get1 for its succinctness. Ermm ... isn't that a tautology? Or to put it another way, what is the basis for your assertion that they could compile to the same code? (I agree with the assertion ... but not the explanation.) @StephenC - Obviously get2 could be optimised to get1 trivially by recognising that the variable is only ever used as a transient for the return. Similar arguments could probably be used for get3 - or am I missing something? What you missed was putting that in your Answer!
common-pile/stackexchange_filtered
Spyder console self-defined module not updated I import a function from a self-defined module in the Spyder console as: from self import ver1 Now if I edit the module and add a ver2 function and do this: from self import ver2 I get an error: ImportError: cannot import name ver2 I have tried this (I delete the self.pyc file and regenerate it): import py_compile py_compile.compile("self.py") but it still does not work. However, if I close and reopen Spyder, it does work. Is there any other workaround? You need to reload the module as it has already been imported. You can do that using the Python built-in reload function. Besides, if the Spyder console that you use is an IPython console, you can use the autoreload IPython magic:

%load_ext autoreload
%autoreload 2

You can set up Spyder to run this automatically on startup of a new IPython console in Spyder preferences. Doing this: "reload(ver2)" gave the following error: "reload() argument must be a module" Whereas doing this: "type(ver2)" gives: <type 'function'> ... So I guess my question is how do I reload or refresh a function.
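The last comment's error generalizes beyond Spyder: reload() wants the module object, not a function imported from it, which is why reload(ver2) fails with "argument must be a module". A small helper sketch (in Python 3 the function lives in importlib; the module name mymod below is purely illustrative):

```python
import importlib

def refresh(module_name):
    """(Re-)import module_name so edits to its source file take effect.

    importlib.reload re-executes the module's source, so functions added
    after the first import become visible on the returned module object.
    """
    module = importlib.import_module(module_name)
    return importlib.reload(module)
```

After refresh("self") in the console, from self import ver2 succeeds without restarting Spyder, because the reload re-binds the module's namespace to the current source.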
common-pile/stackexchange_filtered
!role error in Keycloak while accessing a resource I have secured my REST API with Keycloak. After authentication, when I try to access the REST API, I get a 403 error: Unable to access rest api. Reason !role. In the configuration I have specified the role as *:

ConstraintSecurityHandler securityHandler = new ConstraintSecurityHandler();
context.setSecurityHandler(securityHandler);
securityHandler.addRole("*");
ConstraintMapping constraintMapping = new ConstraintMapping();
constraintMapping.setPathSpec("/*");
Constraint constraint = new Constraint();
constraint.setAuthenticate(true);
constraint.setRoles(new String[]{"*"});

Was my assumption of specifying any role by using '*' wrong, or does the error mean something different? I see the following in the JettyKeycloakAuthenticator logs:

2018-05-24 12:55:52,253 [DEBUG] [ers.PreAuthActionsHandler(handleRequest )] - adminRequest http://localhost:7100/api/v1/design/test
2018-05-24 12:55:52,254 [DEBUG] [TokenRequestAuthenticator(thenticateToken)] - Verifying access_token
2018-05-24 12:55:52,255 [DEBUG] [TokenRequestAuthenticator(thenticateToken)] - successful authorized
2018-05-24 12:55:52,255 [DEBUG] [JettyRequestAuthenticator(rAuthentication)] - Completing bearer authentication. Bearer roles: [uma_authorization]
2018-05-24 12:55:52,255 [DEBUG] [ters.RequestAuthenticator(eAuthentication)] - User '8f9381df-2f7e-4ff8-9ef5-2123b03db3c9' invoking 'http://localhost:7100/api/v1/design/test' on client 'my_server'
2018-05-24 12:55:52,255 [DEBUG] [ters.RequestAuthenticator(authenticate )] - Bearer AUTHENTICATED
2018-05-24 12:55:52,255 [DEBUG] [thenticatedActionsHandler(handledRequest )] - AuthenticatedActionsValve.invoke http://localhost:7100/api/v1/design/test

Turns out the correct way to set any role is:

constraint.setRoles(new String[]{"**"});

i.e. double * and not single *. In Jetty, "*" (ANY_ROLE) only matches roles actually known to the security handler, while "**" (ANY_AUTH) means any authenticated user regardless of role, which matters here because the bearer token only carried the uma_authorization role. Also, I removed this line: securityHandler.addRole("*");
common-pile/stackexchange_filtered
Merge two lists (string and int) together My problem is that I have 2 lists, that I get dynamically, but they will be of the same size every time, and I need to merge them into one list. For example I have List<string> chars = [aaa],[bbb],[ccc]; List<int> numbers= [1][2][3] I want to get a 3rd list that will have the combined data, like List<?> combo= [[aaa][1]],[[bbb][2]],[[ccc][3]] Is this possible? Can't you show compiling code? What type has the desired result at all? this seems to be a job for Enumerable.Zip Please indicate what should occur if they are NOT the same size here. they will be same size... "they wont be of same size every time" and "they will be same size" ... well..... Now you confuse me with your question portion of " so they wont be of same size every time," @Carsten Maybe some kind of quantum programming technique? @UweKeim I guess it's Schrödinger's Lists ... but why should the specifications on SO be better than in RL? sry mistake in question. You guys are complicating things too much - I'd say it's a reasonable assumption that the Length of the two arrays will be equal, but not known at compile-time. Hence - "not same size every time", but "same size". Perhaps you could use Enumerable.Zip and a tuple: List<Tuple<string, int>> combo = chars.Zip(numbers, (s, i) => Tuple.Create(s, i)).ToList(); This works fine, thx for help everyone :) chars.Select((x, i) => new object[] { x, numbers[i] }).ToArray();
common-pile/stackexchange_filtered
Correct way to suspend coroutine until Task<T> is complete I've recently dove into Kotlin coroutines. Since I use a lot of Google's libraries, most of the work is done inside the Task class. Currently I'm using this extension to suspend a coroutine:

suspend fun <T> awaitTask(task: Task<T>): T = suspendCoroutine { continuation ->
    task.addOnCompleteListener { task ->
        if (task.isSuccessful) {
            continuation.resume(task.result)
        } else {
            continuation.resumeWithException(task.exception!!)
        }
    }
}

But recently I've seen usage like this:

suspend fun <T> awaitTask(task: Task<T>): T = suspendCoroutine { continuation ->
    try {
        val result = Tasks.await(task)
        continuation.resume(result)
    } catch (e: Exception) {
        continuation.resumeWithException(e)
    }
}

Is there any difference, and which one is correct? UPD: the second example isn't working, idk why. The block of code passed to suspendCoroutine { ... } should not block the thread that it is being invoked on, allowing the coroutine to be suspended. This way, the actual thread can be used for other tasks. This is a key feature that allows Kotlin coroutines to scale and to run multiple coroutines even on the single UI thread. The first example does it correctly, because it invokes task.addOnCompleteListener (see docs), which just adds a listener and returns immediately. That is why the first one works properly. The second example uses Tasks.await(task) (see docs), which blocks the thread that it is being invoked on and does not return until the task is complete, so it does not allow the coroutine to be properly suspended. One of the ways to wait for a Task to complete using Kotlin coroutines is to convert the Task object into a Deferred object by applying the Task.asDeferred extension function.
For example for fetching data from Firebase Database it can look like the following: suspend fun makeRequest() { val task: Task<DataSnapshot> = FirebaseDatabase.getInstance().reference.get() val deferred: Deferred<DataSnapshot> = task.asDeferred() val data: Iterable<DataSnapshot> = deferred.await().children // ... use data } Dependency for Task.asDeferred(): implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-play-services:1.5.2' To call suspend function we need to launch a coroutine: someCoroutineScope.launch { makeRequest() } someCoroutineScope is a CoroutineScope instance. In android it can be viewModelScope in ViewModel class and lifecycleScope in Activity or Fragment, or some custom CoroutineScope instance. Dependencies: implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.4.0' implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.4.0'
Express routes, More DRY method of passing data to routes I have several mongodb models, which I am passing through to my routes. The approach I'm taking leaves a lot of repeated code.
var AboutData = mongoose.model( 'AboutData' );
var BlogData = mongoose.model( 'BlogData' );

app.get('/about', function(req, res, next) {
  BlogData.find(function(err, blogitems){
    if(err) { return next(err); }
    AboutData.find(function(err, items){
      if(err) { return next(err); }
      res.render('index.ejs',{
        homelist: ['home', 'about', 'services', 'volunteer', 'contact', 'give', 'blog'],
        aboutlist: items,
        bloglist: blogitems,
        bootstrappedUser: req.user,
        page: 'about'
      });
    });
  });
});
Is there a better approach that I could take to have multiple models be available to all of my routes? You could create a middleware that sets common view variables by setting properties on res.locals. Here is one example:
app.use(function(req, res, next) {
  res.locals.bootstrappedUser = req.user;
  res.locals.homelist = [
    'home', 'about', 'services', 'volunteer', 'contact', 'give', 'blog'
  ];
  BlogData.find(function(err, blogitems) {
    if (err) return next(err);
    res.locals.bloglist = blogitems;
    next();
  });
});

app.get('/about', function(req, res, next) {
  AboutData.find(function(err, items){
    if (err) return next(err);
    // here `index.ejs` will have access to `bootstrappedUser`, `homelist`,
    // and `bloglist`
    res.render('index.ejs',{
      aboutlist: items,
      page: 'about'
    });
  });
});
You can also set variables in the same fashion on the app.locals object. Typically you set static values that are not request-specific on app.locals during setup of your Express app and set dynamic request-specific values on res.locals.
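To make the middleware idea concrete without needing Express installed, here is a small plain-Node sketch (the runChain helper and the sample data are invented for illustration) of how a (req, res, next) chain can accumulate shared view variables on res.locals before a route handler uses them:

```javascript
// Minimal stand-in for Express's middleware chain: each function gets
// (req, res, next) and calls next() to hand off to the following one.
function runChain(middlewares, req, res) {
  let i = 0;
  function next(err) {
    if (err) throw err; // a real app would fall through to error middleware
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  }
  next();
}

// Middleware that sets common view variables once, instead of per route
function commonLocals(req, res, next) {
  res.locals.bootstrappedUser = req.user;
  res.locals.homelist = ['home', 'about', 'services'];
  next();
}

// A route handler only adds its own variables; "render" merges in the rest
function aboutRoute(req, res, next) {
  res.rendered = Object.assign({}, res.locals, { page: 'about' });
  next();
}

const req = { user: 'alice' };
const res = { locals: {} };
runChain([commonLocals, aboutRoute], req, res);
console.log(res.rendered.page, res.rendered.bootstrappedUser); // about alice
```

The route handler never repeats the shared variables; it only supplies what is specific to that route, which is exactly the duplication the res.locals answer removes.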
How (and by whom) are financial decisions made in the Catholic Church? Coming from a denomination that doesn't subscribe to one earthly central Church authority, or a rigid structure, I find it very interesting to learn about denominations that do have such a structure. While browsing through the "Gospel Topics" section on the LDS website, I ran across the article on Tithing, which says this: Church members give their tithing donations to local leaders. These local leaders transmit tithing funds directly to the headquarters of the Church, where a council determines specific ways to use the sacred funds. This council is comprised of the First Presidency, the Quorum of the Twelve Apostles, and the Presiding Bishopric. Acting according to revelation, they make decisions as they are directed by the Lord. (See D&C 120:1.) That's a very clear, understandable high-level explanation of the process including who makes the decisions. Before getting to the question, here's a high-level explanation of my Church's process and who makes the decisions. In my own Church (A Baptist one) it's handled in a more straightforward democratic fashion. Once per year, the Pastor puts together a budget of anticipated expenses (by category - Pastoral care, Sunday School Supplies, etc.) and anticipated income (based on last year's Tithes, growth trends, etc.), and presents it to the Church. The Church (meaning the voting (age 18+) members of the local Church) discuss and vote on the budgets. Usually we accept the Pastor's budget, but we may discuss and adjust certain line-items. Then, throughout the year, as needs arise, they are met if they fall within the budget. The Pastor has some leeway on certain items. For example, we have a line item for "Needs of others", and the Pastor can dole that out to individuals in need at his discretion. 
One last thing before the actual question, I really am interested in how this is handled in other denominations, but I don't want to turn this into a "list" question. And I don't want to re-post the same question over and over naming different denominations in each. So I'm going to focus on one Church with a central authority and a well-defined structure that interests me, and ask about them. Without further ado: In the Catholic Church, can someone provide an overview of who makes the broad financial decisions, and what is the process? The simple answer is: "it's very complicated". Individual churches, dioceses, national councils of bishops, religious orders, houses of those religious orders, the dicasteries of the Roman Curia, institutions like schools and hospitals: they are all to some extent independent and to some extent related. The Catholic Church is not a simple organisation, and it doesn't have a simple financial structure. @lonesomeday - I sort of figured it was complicated. I'm sure they have a defined set of processes. (As an external observer, it seems to me that the Catholic church has everything well-defined. Very meticulous, the CC is.) That's where the question comes from - can anyone post something that is high-level enough to be reasonably understood, yet still be essentially correct? I'm sure the reality of how the LDS handles finances is also quite complicated, but they were able to provide high-level "layman's" overview. I'm just looking for the same here. The simple answer is: "it's very complicated". Individual churches, dioceses, national councils of bishops, religious orders, houses of those religious orders, the dicasteries of the Roman Curia, institutions like schools and hospitals: they are all to some extent independent and to some extent related. The Catholic Church is not a simple organisation, and it doesn't have a simple financial structure. 
In particular, it is complicated because the exact pattern is not replicated in every country. The national Episcopal Conferences have some degree of autonomy in setting a pattern which will apply in each country, according to what is appropriate. Moreover, the Vatican finances are both complex and opaque. Some income comes directly from donations from the faithful (a system known as Peter's Pence), while other income comes from the surplus from the Istituto per le Opere di Religione, the Vatican Bank. There isn't a simple organisational structure, with the Vatican at the top and the parishes at the bottom and a simple flow of money up and down the chain. This is because the basic structure of the Catholic Church is that of the particular church. This is generally (though not always) a diocese. It is local (that is to say, defined by geography) and headed by a bishop. (Code of Canon Law, Canon 368) If you want some of the theology behind this, the document Communionis Notio is relevant. The dioceses are institutionally independent of one another, though they are grouped in the national/regional Episcopal Conferences which have certain legislative powers in their region, and they are all subject to the Canon Law of the Catholic Church. Each diocese is divided into parishes. The priests in each parish are essentially "deputies" for the bishop. The financial relationship that we might consider best, therefore, is between the diocese and the parish. (The financial organisation of the Vatican would take years to analyse fully!) The parish-diocese relationship is the most common financial relationship. It leaves a whole lot out, as you see from my opening paragraph. So let's look at one such example. I'm going to look at the Roman Catholic Diocese of Westminster and its 2011 financial statement, for the simple reason that I could find it online. 
The parishes receive income of various types (p19):
collections, donations and legacies (by far the largest)
parish activities
investment income
rent
trading income
disposal of assets
This is spent in various ways (p21). The largest are non-clergy salaries and parish assessments, which are in effect a diocesan tax: each parish provides a proportion of their income to the diocese. This proportion varies among Episcopal Conferences and possibly among dioceses: I haven't found any clear information. In each parish, the parish priest is primarily in charge of finances: In all juridic affairs the pastor represents the parish according to the norm of law. He is to take care that the goods of the parish are administered according to the norm of cann. 1281-1288. (Canon 532) On top of this, there is to be a finance council in each parish: In each parish there is to be a finance council which is governed, in addition to universal law, by norms issued by the diocesan bishop and in which the Christian faithful, selected according to these same norms, are to assist the pastor in the administration of the goods of the parish, without prejudice to the prescript of can. 532. (Canon 537) So many decisions are taken on a local basis, by the priest with the advice of the finance council. At a diocesan level, most income comes from the parish assessments mentioned above (p20 of the Diocese of Westminster report). In the diocese, there is again a finance council to manage finances: In every diocese a finance council is to be established, over which the diocesan bishop himself or his delegate presides and which consists of at least three members of the Christian faithful truly expert in financial affairs and civil law, outstanding in integrity, and appointed by the bishop. (Canon 492) The function of this council is to provide "a budget of the income and expenditures which are foreseen for the entire governance of the diocese in the coming year". So a short summary. 
Most income comes to individual parishes. The parish priest is in charge of this, with the assistance of a finance council. A proportion of income goes from the parish to the diocese, where a finance council appointed by and presided over by the bishop is in charge of decisions concerning the governance of the diocese. This is obviously not the whole picture. It covers only the parishes and dioceses. These are the most significant aspects of the Church, but a lot more could be said about the various other aspects of the Church on local, national, regional and international levels. Moreover, to some extent these relationships vary significantly across the world. The Pope, in consultation with the Institute for the Works of Religion, aka 'The Vatican Bank', is in charge of the budget of the Vatican. The lack of transparency into the budgeting and collection of funds has been in the news for the last couple of years, and Pope Francis has brought in outside consultants to suggest reforms. While this is correct as far as it goes, it doesn't go very far. Lonesomeday's comment should be an answer!
EJB2.1 hello world application configuration issue I am trying to make a hello world EJB 2.1 application in RAD6 with WebSphere Application Server, but I'm unable to understand which file I have to edit in RAD6 to make the changes that we do in jboss.xml while using JBoss. Some lines of code from jboss.xml:
<ejb-name>HelloWorld</ejb-name>
<jndi-name>myHelloWorld</jndi-name>
If it's in any way possible for you, I advise you to stay well clear of EJB 2.1. It's a disgrace to computer science in general. EJB 3.0 is already over 5 years old, and is so much better. Try to use at least that if you can. You are lucky when you use RAD. This great tool has the ability to edit anything within a dedicated ejb-jar.xml editor. Try opening ejb-jar.xml, and you will be able to edit everything related to EJB descriptors. WebSphere has two additional files to describe EJB 2.1: ibm-ejb-jar-bnd.xmi and ibm-ejb-jar-ext.xmi. Both of them are better edited via the RAD editor instead of trying to do it manually. For example ibm-ejb-jar-bnd.xmi is:
<ejbbnd:EJBJarBinding xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:ejbbnd="ejbbnd.xmi" xmlns:ejb="ejb.xmi" xmi:id="ejb-jar_ID_Bnd">
  <ejbJar href="META-INF/ejb-jar.xml#ejb-jar_ID"/>
  <ejbBindings xmi:id="Session_1_Bnd" jndiName="ejbs/Authentication">
    <enterpriseBean xmi:type="ejb:Session" href="META-INF/ejb-jar.xml#Session_1"/>
  </ejbBindings>
</ejbbnd:EJBJarBinding>
and ibm-ejb-jar-ext.xmi is:
<ejbext:EJBJarExtension xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:ejbext="ejbext.xmi" xmlns:ejb="ejb.xmi" xmi:id="ejb-jar_ID_Ext">
  <ejbJar href="META-INF/ejb-jar.xml#ejb-jar_ID"/>
  <ejbExtensions xmi:type="ejbext:SessionExtension" xmi:id="Session_1_Ext" timeout="600">
    <enterpriseBean xmi:type="ejb:Session" href="META-INF/ejb-jar.xml#Session_1"/>
    <structure xmi:id="BeanStructure_1" inheritenceRoot="false"/>
    <beanCache xmi:id="BeanCache_1" activateAt="ONCE"/>
    <internationalization xmi:id="BeanInternationalization_1" invocationLocale="CALLER"/>
    <localTran xmi:id="LocalTran_1" boundary="BEAN_METHOD" unresolvedAction="ROLLBACK"/>
  </ejbExtensions>
</ejbext:EJBJarExtension>
So, I suggest using the RAD editor to change EJB 2.1 descriptors. If your editor doesn't work correctly - you cannot see the window with most of the properties divided into tabs and sections - you may have trouble editing EJB descriptors. Try to right click on the ejb-jar.xml and open it with the specialized editor (I don't remember its name, but it is not "xml editor" or "System Default"). OK thanks. But when I try to add the tag in ejb-jar.xml, RAD6 gives an error, the description of which is "Invalid content was found starting with jndi-name". jndi-name is not a part of ejb-jar.xml. Specifying the JNDI location of your EJB must be done in another file. This file is server specific: one for JBoss, another for WebSphere, and another for WebLogic. If you use RAD, try to do this via its editor. You should see the desired section. Filling the JNDI name into this field will put the value into the WebSphere-specific file.
How to loop through selected data in SQL Server? I want to select multiple rows of data from a table and add it onto another one. For example: Select * from Table1 which would return id | name 1 | Chad 2 | Mary 3 | Denise I want to add these rows of data to Table 2 Insert(id, name) values(@id, @name) Thank you! INSERT INTO Table2(id,name) SELECT t.id,t.name FROM Table1 t ;
ElasticSearch NEST DSL Query Cross Fields Query I am trying to convert following ElasticSearch DSL Query to NEST and it seems something is not correct. Here is my DSL Query: { "query": { "multi_match": { "query": "AJ", "type": "cross_fields", "fields": ["name", "shortname", "shortname2", "number"], "operator": "and" } } } I have a POCO class. I want to get result as a List as seen below: public class SearchDto { public Guid Id { get; set; } public string Number { get; set; } public string Name { get; set; } public string ShortName2 { get; set; } public string ShortName1 { get; set; } } Since it is a Cross Fields query, I have created fields like this: Fields nameField = Infer.Field<SearchDto>(p => p.Name); var shortName2 = Infer.Field<SearchDto>(p => p.ShortName2); var shortName1 = Infer.Field<SearchDto>(p => p.ShortName1); var number = Infer.Field<SearchDto>(p => p.Number); Here is my NEST query: var searchRequest = new SearchRequest() { Query = new MultiMatchQuery() { Fields = nameField .And(shortName2) .And(shortName1) .And(number), Query = value, Operator = Operator.And, Type = TextQueryType.CrossFields } } When I get the Json string for my searchRequest, it only prints "{}" using the following: var json = _client.RequestResponseSerializer.SerializeToString(searchRequest); It also posts "{}" as request body I also tried the following: var response = _client.Search <List<SearchDto>> (s => s .Size(500) .Index("mysearchIndex") .Query(q => q .MultiMatch(m => m .Type(TextQueryType.CrossFields) .Fields(nameField) .Fields(shortName1) .Fields(shortName2) .Fields(number) .Operator(Operator.And) .Query(value) ) )); Above query posts only "{"size" : 500}" to my elasticsearch endpoint Can someone please suggest what I am doing wrong and/or suggest better way to handle my query using NEST? It is not even building a full query for some reason. Nest queries are condition less. If input is determined to be null or empty string then the query will be omitted from the request. 
If "value" in your query is empty or null, then the generated query will be "{}". If your intention is to search on an empty value, then you need to mark the individual query as verbatim. An individual query can be marked as verbatim in order to take effect; a verbatim query will be serialized and sent in the request to Elasticsearch, bypassing NEST's conditionless checks. Example
var searchRequest = new SearchRequest()
{
    Query = new MultiMatchQuery()
    {
        Fields = nameField.And(shortName2).And(shortName1).And(number),
        Query = "",
        Operator = Operator.And,
        Type = TextQueryType.CrossFields,
        IsVerbatim = true // <-- note the flag
    }
};
Corresponding query {"query":{"multi_match":{"fields":["name","shortName2","shortName1","number"],"operator":"and","query":"","type":"cross_fields"}}} Jaspreet, thanks for your suggestions. Indeed it was value being NULL from my API controller. I never imagined that it could be my controller action which was the problem, and I never checked for the value. It did solve my issue. Thanks again. @AndyJohnson, glad I could be of help
Fit More Widgets on WordPress Footer My theme shows that I can fit up to 4 widgets in the footer as shown in the demo site here: http://demo.woothemes.com/?name=simplicity But I can only fit 2 on mine when I'd like to have 3. Here's my site for my assignment: http://www.brightpixelstudios.com/ I'm guessing I'll need to modify the CSS. I'd really appreciate any suggestions! Thank you in advance, Will It is possible, if you look at the output, yours says: <div id="footer-widgets" class="col-full col-2"> But changing the output to: <div id="footer-widgets" class="col-full col-4"> Divides the width into 4 parts. I can't help much more without seeing the actual admin side of the website, but if the col-4 is set dynamically, then it'd be a setting you'd have to change for the theme in the admin side. Thank you so much for your suggestions, they really helped me understand the structure of the footer more. But as it turns out, the number of widgets shown in the footer is controlled under Dashboard > Theme Options. Silly me! @WillWhitehead Yes I thought it would be in the admin side, glad it helped anyway! You can use the 'BNS Add Widget' WordPress plugin to add a widget area to the footer of any WordPress theme. This is the plugin link: http://buynowshop.com/plugins/bns-add-widget/
ADB2C Custom Attribute String Data Size What is the max size (number of characters) that a String Custom Attribute can hold on Active Directory B2C? The maximum length for a String property is 256 characters. For more information, see Directory schema extensions | Graph API concepts.
Select how many documents are in other table for each person I have 3 tables: Client, Documents and ClientDocuments. The first one is the clients, where the information for each client is. The second one is the documents - which documents can go in the system. The third one is the ClientDocuments - which client has which document. Tables Here I have to do a select where I get the information from the clients and how many documents of the 3 document types they have. For example, Client 1 has one document called 'Contrato Social' and 2 called 'Ata de Negociação'. The select must return every client, and in the columns ContratoSocial returns 1, Ata Negociacao returns 2 and Aceite de Condições Gerais returns 0. I did this to show:
select idFornecedor,
       txtNomeResumido,
       txtNomeCompleto,
       --txtEmail,
       --txtSenha,
       bitHabilitado,
       (SELECT Count(idDocumentosFornecedoresTitulo) FROM tbDocumentosFornecedores WHERE idDocumentosFornecedoresTitulo = 1) AS 'Contrato Social',
       (SELECT Count(idDocumentosFornecedoresTitulo) FROM tbDocumentosFornecedores WHERE idDocumentosFornecedoresTitulo = 2) AS 'Ata de Negociação',
       (SELECT Count(idDocumentosFornecedoresTitulo) FROM tbDocumentosFornecedores WHERE idDocumentosFornecedoresTitulo = 3) AS 'Aceite de Condições Gerais'
from dbo.tbFornecedores tbf
order by tbf.txtNomeResumido asc
returns this: Returns of the query But it's just counting how many documents of that type are in the database; I want to filter for each client. How should I do that? 
Working answer:
select tbf.idFornecedor,
       tbf.txtNomeResumido,
       tbf.txtNomeCompleto,
       tbf.bitHabilitado,
       sum(case when idDocumentosFornecedoresTitulo = 1 then 1 else 0 end) as contrato_social,
       sum(case when idDocumentosFornecedoresTitulo = 2 then 1 else 0 end) as Ata_de_Negociação,
       sum(case when idDocumentosFornecedoresTitulo = 3 then 1 else 0 end) as Aceite_de_Condições_Gerais
from dbo.tbFornecedores tbf
left join tbDocumentosFornecedores df on tbf.idFornecedor = df.idFornecedor
group by tbf.idFornecedor, tbf.txtNomeResumido, tbf.txtNomeCompleto, tbf.bitHabilitado
order by tbf.txtNomeResumido asc
Hard-coded IDs are generally a terrible idea. You should base your logic on the actual title, not the ID. What should happen when a fourth document is added? Why do you count? Can a client be associated with the same document multiple times? This looks like a pivot issue as well as a schema issue. You need some way of matching the rows in tbDocumentosFornecedores to the rows in tbFornecedores. Your question is not clear on what column is used for that, but I might guess something like idDocumentosFornecedore. You could fix your query by using a correlation clause. However, I might instead suggest conditional aggregation:
select tbf.idFornecedor,
       tbf.txtNomeResumido,
       tbf.txtNomeCompleto,
       tbf.bitHabilitado,
       sum(case when idDocumentosFornecedoresTitulo = 1 then 1 else 0 end) as contrato_social,
       sum(case when idDocumentosFornecedoresTitulo = 2 then 1 else 0 end) as Ata_de_Negociação,
       sum(case when idDocumentosFornecedoresTitulo = 3 then 1 else 0 end) as Aceite_de_Condições_Gerais
from dbo.tbFornecedores tbf
left join tbDocumentosFornecedores df on tbf.idDocumentosFornecedore = df.idDocumentosFornecedore -- this is a guess
group by tbf.idFornecedor, tbf.txtNomeResumido, tbf.txtNomeCompleto, tbf.bitHabilitado
order by tbf.txtNomeResumido asc
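The conditional-aggregation trick is easier to see with toy data; here is a hypothetical in-memory JavaScript sketch of the same per-client counting (rows invented to match the question's example, where Client 1 has one type-1 document and two type-2 documents):

```javascript
// Each sum(case when titulo = N then 1 else 0 end) is just a per-client
// count of the rows of that document type surviving the left join.
const clients = [
  { idFornecedor: 1, nome: 'Client 1' },
  { idFornecedor: 2, nome: 'Client 2' },
];
const clientDocs = [
  { idFornecedor: 1, titulo: 1 }, // 'Contrato Social'
  { idFornecedor: 1, titulo: 2 }, // 'Ata de Negociação'
  { idFornecedor: 1, titulo: 2 },
];

const report = clients.map((c) => {
  const mine = clientDocs.filter((d) => d.idFornecedor === c.idFornecedor);
  const countType = (t) => mine.filter((d) => d.titulo === t).length;
  return {
    ...c,
    contratoSocial: countType(1),
    ataNegociacao: countType(2),
    aceiteCondicoes: countType(3),
  };
});
console.log(report);
// Client 1 -> 1, 2, 0; Client 2 -> 0, 0, 0 (clients with no docs still appear)
```

The key property the left join preserves is the last line of the comment: a client with no matching rows still produces an output row, with all three counters at zero.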
Greyed out folder in GitHub repo? Hey so I pushed a new folder to my GitHub repository and well it is greyed out. Why? I'm new to git and have probably done something to git to cause this so how can i start from fresh? Here's a link to my repo: https://github.com/ZoidinCode/ZoidinCode.github.io/tree/master/dist I've uninstalled Git and then re-installed it and well this has happened :( Git: cd desktop/Artificial-Reason-1.4.6 bash: cd: desktop/Artificial-Reason-1.4.6: No such file or directory XXXX~/desktop/Artificial-Reason-1.4.6 (master) $ git add dist/header_light_dark XXXX ~/desktop/Artificial-Reason-1.4.6 (master) $ git commit -m "First commit to GitHub" [master 0e2035b] First commit to GitHub 1 file changed, 1 insertion(+) create mode 160000 dist/header_light_dark XXXX ~/desktop/Artificial-Reason-1.4.6 (master) $ git push origin master Counting objects: 1229, done. Delta compression using up to 2 threads. Compressing objects: 100% (1223/1223), done. Writing objects: 100% (1229/1229), 49.79 MiB | 443.00 KiB/s, done. Total 1229 (delta 848), reused 0 (delta 0) To https://github.com/ZoidinCode/ZoidinCode.github.io.git * [new branch] master -> master Possible duplicate of What does a grey icon in remote GitHub mean Still stuck as my git is acting funny and i can't do anything, Any help? @Chris "Acting funny" isn't a very useful description. Can you elaborate? Did the answers in the duplicate question help? A gray folder on GitHub looks like a submodule, as I mentioned in: "What is this grey git icon?" "What does a grey icon in remote GitHub mean" It is not a sub-folder, but a special entry in the index which marks it as a submodule-like. If you don't have a .gitmodules file in your main repo, that special entry is typical of adding a nested repo: check if your dist/ folder has itself a .git/ subfolder. To fix the issue, try a git rm --cached dist (no trailing slash). 
See more at "Cannot remove submodule from Git repo"
git rm --cached dist
git commit -m "Remove submodule entry"
rm -Rf dist/.git # delete the nested repo
git add dist
git commit -m "Add dist plain subfolder"
git push
It looks like you have initialised git inside this folder. To fix it, change directory into this folder and delete .git, then add, commit and push again.
Cannot read property 'viewManagersNames' of undefined I'm trying to create an ExpoPixi.Sketch view in React Native, but the error 'Cannot read property 'viewManagersNames' of undefined' shows when the app loads. I cannot find anything on this error online.
import React from 'react';
import { Text, View, StyleSheet } from 'react-native';
import { Provider } from 'react-redux';
import EStyleSheet from 'react-native-extended-stylesheet';
import ExpoPixi from 'expo-pixi';
import { Constants } from 'expo';

export default class App extends React.Component {
  render() {
    const color = 0xff0000;
    const width = 5;
    const alpha = 0.5;
    return (
      <View style={styles.container}>
        <ExpoPixi.Sketch
          strokeColor={color}
          strokeWidth={width}
          strokeAlpha={alpha}
          style={{ flex: 1 }}
        />
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    paddingTop: Constants.statusBarHeight,
    backgroundColor: 'green',
  }
});
I tried this same code in another new React Native project, and it works fine, creating a screen where the user can draw using their finger. This also shows up in the console, but I am using Expo so I don't think I can use react-native link. *No native NativeModulesProxy found among NativeModules, are you sure the expo-react-native-adapter's modules are linked properly How can I resolve this error? I don't see viewManagersNames anywhere in your code. Please post the relevant code where you have viewManagersNames. I searched the entire codebase, and the only place viewManagersNames shows up is inside the expo-react-native-adapter node_module @Think-Twice Did you find any solution? I think it was that Expo Pixi used an older version of react-native-svg, which wasn't installed. I don't remember exactly, but I ended up using another library in place of Expo Pixi.
Linq-To-Sql with WCF, Models, and POCO ViewModels Disconnected "DataContext" Timestamp/Rowversion I have a Linq-To-Sql based repository class which I have been successfully using. I am adding some functionality to the solution, which will provide WCF based access to the database. I have not exposed the generated Linq classes as DataContracts, I've instead created my own "ViewModel" as a POCO for each entity I am going to be returning. My question is, in order to do updates and take advantage of some of the Linq-To-Sql features like cyclic references from within my Service, do I need to add a Rowversion/Timestamp field to each table in my database so I can use code like dc.Table.Attach(myDisconnectedObject)? The alternative seems ugly:
var updateModel = dc.Table.SingleOrDefault(t => t.ID == myDisconnectedObject.ID);
updateModel.PropertyA = myDisconnectedObject.PropertyA;
updateModel.PropertyB = myDisconnectedObject.PropertyB;
updateModel.PropertyC = myDisconnectedObject.PropertyC;
// and so on and so forth
dc.SubmitChanges();
You can also use reflection to set fields if all of them have the same names in Linq and your own classes How would the performance on something like that be? Could you post an example, because that sounds like the best of both worlds to me... I wouldn't need to worry about updating code if I add database fields, just make sure the designer is up to date, etc.
However, the other thing you should definitely check out is AutoMapper - it's a great little component to ease those left-right-assignment orgies you have to go through when using ViewModels / Data Transfer Objects by making this mapping between two object types a snap. It's well used, well tested, used by many and very stable - a winner! What're your thoughts on the comment to my question about using Reflection to deal with the left-right-assignment code? @Nate: that's basically what AutoMapper does - in a nicely wrapped package, so you don't have to deal with too many of the gory details.
Limit a hashed and encoded string to 44 characters: NodeJs Here is a sample input. 426155Grtyhr8888xxxxxxx7777BDTR56654.88555G77D6666FF555W44RT46G666D55TY_3rtyDeeeeeEEE9 And follow the steps given below. add a salt to the string. My salt is: 'tttttttttt' Hash this salted string using "SHA-256". encode using base64 I need to output a string 44 characters long. Here is some sample code I made.
var str_salted = str+'tttttttttt';
var sha256 = require('sha256');
var str_myHash = sha256(str_salted);
var str_encoded = new Buffer(str_myHash).toString('base64');
console.log(str_encoded);
This code outputs a very long string. I need to limit this string to 44 characters. How can I do this? Use substring str_encoded.substring(0, 44); You have misunderstood my word "limit". I don't need part of the string; I need the entire string to be generated 44 characters long. I found the answer. Here I used crypto to solve this problem. Following is my code.
var crypto = require('crypto');
var output = crypto.createHash("sha256").update(str_salted).digest("base64");
Any string hashed using "sha256" and digested to base64 produces a 44-character-long string.
Create Database Instance from Model *Seems like there is some confusion. I created a SQL Server Compact Edition file and can see it from the Server Explorer. I can also right click and add tables manually. What I want to do is run the generated sqlce file to add all of the tables and columns from my model to the SDF. -- background -- In Visual Studio 2012 (Ultimate), I designed a model using the model designer. It created an edmx file. I right clicked the model and chose "Generate Database from Model..." and created an sqlce file. My understanding is that I should be able to execute this file on an sdf somehow to create a SQL Server Compact Edition Instance of my database. I don't see the option on right click to execute the sql code, and the other option is to "Run SQL Scripts in Solution Explorer" which doesn't seem to make sense. http://msdn.microsoft.com/en-us/library/yea4bc1b.aspx It says to drag the sqlce file to a database reference, but I'm not really sure what they mean. I have tried to drag it to the server explorer where the sdf is connected. I tried right clicking on the sdf in the Server Explorer to do New SQL Query and pasting the sqlce in, but it seems that Create Table isn't supported. Any ideas? Generate Database from Model... only generates tables and relationships. You need to have a database created already, and have it in the Database References folder. Then you drag your script file to that database reference, as described in your mentioned link.
common-pile/stackexchange_filtered
Is it safe to mix pthread.h and C++11 standard library threading features? Can I spawn a thread with pthread_create and use std::mutex inside of it safely? I would think that if std::mutex is implemented as a pthread_mutex_t then it would be fine but I don't see this documented anywhere For example: #include <pthread.h> #include <mutex> namespace { std::mutex global_lock; } void* thread_func(void* vp) { // std::mutex used in thread spawned with pthread_create std::lock_guard<std::mutex> guard(global_lock); // critical section return nullptr; } int main() { pthread_t tid; pthread_create(&tid, nullptr, thread_func, nullptr); pthread_join(tid, NULL); } BTW I'm running Debian Wheezy. Spawning a thread and locking a mutex are two separate concepts. Is your question actually about mixing concurrency control from STL and PThread or just this one instance in particular? @sixlettervariables mixing control in general. Can you elaborate on the requirements/use case for this? Do you need to run the code from within C APIs? For this case this might be OK (since a C API running with pthread will need a pthread based C++ implementation). @g-makulik I'm not sure I understand what you're asking. My particular use case isn't as much what I'm concerned with, as whether these can be mixed in general. I don't intend to use any C API. let's assume C++11 all the way through. You could on my machine (Debian too). But I'm not sure if I would call this safe. If you look at the relevant file, /usr/include/c++/4.7/i486-linux-gnu/bits/gthr-default.h in my case, you will see that there will be a 1:1 mapping to the pthreads api. <mutex> uses __gthread_mutex_lock for locking which is defined exactly there to pthread_mutex_lock. Or you will see that std::thread declares typedef __gthread_t native_handle_type; I don't know if there is a documented way to check if pthreads are used. 
But gthr-default.h defines _GLIBCXX_GCC_GTHR_POSIX_H as an include guard, and I think as long as this macro is defined, you can assume that you can mix them both. Edit: Given the hint from @Wakely, I would write: template <typename T> using strip = typename std::remove_pointer<typename std::decay<T>::type>::type; static_assert(std::is_same<strip<std::thread::native_handle_type>, pthread_t>::value, "libstdc++ doesn't use pthread_t"); Don't rely on implementation details like that macro, it could change next release (something like this broke Boost.Thread recently.) std::is_same<std::thread::native_handle_type, pthread_t> is a pretty good indicator that pthreads is used, although native_handle_type could be pthread_t* or another related type and the test would fail Looks like native_handle_type isn't a pointer, and it is pthread_t on gcc on my platform, so the strip part isn't needed (but still cool, I didn't know you could do that). There's no guarantee in any spec that it will work, but it's likely that any C++ implementation on an OS that uses pthreads as its only real threading library will use pthreads underneath C++ threads, so it will likely work. You will likely run into problems if you later try to port the code to some other platform that uses something other than pthreads, even if that platform supports pthreads too (eg, windows). The question is, why bother and risk it? If you're using C++11 std::mutex, why not use std::thread as well? I probably won't risk it, but I thought maybe it would be known to be safe. The reasons for wanting this are a bit contrived. But let's just pretend that I'm using existing C++ code and updating it. Thanks for the answer. Why bother? Because pthread_create supports customization via pthread_attr_t that std::thread doesn't. Even when the C++11 thread implementation isn't based on pthreads, it's very likely that any platform that has both APIs available implements them in terms of a common underlying platform API. 
Despite the lack of guarantee I think there is a reasonable expectation of practical portability. I think platforms lacking an implementation of one or the other will be far more common than platforms that implement the two APIs incompatibly. std::thread does not provide any way to set or otherwise control the stack size. So if you want to do that, you need to write non-portable code and carefully consult your implementation's documentation about how to do that. Both std::thread and std::mutex have a native_handle method which allows you to dig down to the platform implementation of the given object. This says to me that the standard threading library is designed to play nice with the platform implementation. As an aside std::thread and std::mutex are different objects that do different things viz. manage threads and provide cross thread synchronization. In the end the kernel does the heavy lifting. So if you are not worried about portability, I cannot see why this should be an issue. As an aside, sometimes you may need the native platform implementation so as to provide you with the richer feature-set that the platform allows. For example BSD threading allows different types of threads and some threading libraries allow you to set the stack size of your new thread. The C++ threading APIs are a portable lowest common denominator. Maybe worth adding to this that threads, like processes, are implemented by the kernel on most/all operating systems that use pthreads (and probably most operating systems generally), meaning there is really only one kind of thread at the system level regardless of which userspace lib you use. Not so sure about mutexes, but if I had to guess I'd say the same is true there. If your question is: can I freely switch between one type of mutex and another at random? Then the answer is no (unless all the bottom layers use the same implementation, such as pthread_mutex.) 
However, if you have different groups of resources that you want to protect, any one group of resources can be protected with any one implementation. Actually, it can at times be better to use a semaphore (i.e. semaphores are useful to define one lock for writes, many locks for reads). So, if you have 4 groups of resources you are managing, you may use: std::mutex, pthread_mutex, boost::mutex, semaphores. What you cannot do is use the boost::mutex to access data protected by the semaphores and vice versa, or std::mutex to use things protected by pthread_mutex. As a simple example, in terms of code, this means a getter and a setter would be done this way:

void set(int new_value)
{
    guard lock(my_mutex);
    m_value = new_value;
}

int get() const
{
    guard lock(my_mutex);
    return m_value;
}

The two functions use the same mutex (here my_mutex) and obviously the mutex has one type. Opposed to that, you could not do this:

void set(int new_value)
{
    guard lock(this_mutex_here);
    m_value = new_value;
}

int get() const
{
    SafeGuard lock(that_mutex_there);
    return m_value;
}

In this second example, you use two different mutexes and that won't work as expected because the lock in set() won't block the lock in get() and vice versa. So there is nothing safe about it (even if one of the guards is called SafeGuard.) So the rule is, if you protect m_value with a mutex named my_mutex, any time you access m_value you must lock the my_mutex mutex. Which implementation you are using does not matter, as long as you are consistent in this way.
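The consistency rule above is language-agnostic: whichever mutex flavour you choose for a resource, every access to that resource must go through the same lock object. A minimal Python sketch of the rule, with threading.Lock standing in for any of the mutex types:

```python
import threading

counter = 0
counter_lock = threading.Lock()  # the ONE lock that guards `counter`

def increment(n):
    global counter
    for _ in range(n):
        # Every access to `counter` takes counter_lock; taking some other,
        # unrelated lock here would not block concurrent writers at all.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- consistent locking keeps every update atomic
```

Swap counter_lock for a different lock in even one of the accesses and the guarantee disappears, exactly like the SafeGuard example above.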
common-pile/stackexchange_filtered
How to query with orWhereMonth in laravel query builder? How can I convert this into laravel query builder? select *, (col1 + col2 + col3) as total where (MONTH('date_') = '01' OR MONTH('date_') = '02') AND YEAR('date_') = '2019' AND paid = 1 I tried this but I think it is not returning correctly.

$carDataAm = DB::table('carwash')
    ->selectRaw('*, (col1 + col2 + col3) as totalAmount')
    ->where('paid','1')
    ->whereMonth('date_', '=', '01')
    ->orWhereMonth('date_', '=', '02')
    ->whereYear('date_', '=', '2019')
    ->get();

and why do you think it isn't returning correctly? https://laravel.com/docs/6.x/queries#parameter-grouping The equivalent of grouping expressions together in Eloquent (like you grouped (MONTH('date_') = '01' OR MONTH('date_') = '02') in SQL using brackets ()) is to use parameter grouping (thanks to @lagbox for the link) like this:

->where(function ($query) {
    $query->whereMonth('date_', '=', '01')
        ->orWhereMonth('date_', '=', '02');
})

got the idea with this one. Thanks You should use parameter grouping as @lagbox suggested.

DB::table('carwash')
    ->selectRaw('*, (col1 + col2 + col3) as totalAmount')
    ->where('paid','1')
    ->where(function($query){
        $query->whereMonth('date_', '=', '01')
            ->orWhereMonth('date_', '=', '02');
    })
    ->whereYear('date_', '=', '2019')
    ->get();

Otherwise, whenever your orWhere condition matches, it will fetch that data too, regardless of the other where conditions.
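Parameter grouping matters because of SQL operator precedence: AND binds tighter than OR, so without the closure the builder emits paid = 1 AND month = '01' OR month = '02', and every February row matches whether paid or not. A small self-contained sqlite3 sketch of the difference, using a hypothetical simplified table with the month precomputed as a column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE carwash (paid INTEGER, month TEXT, year TEXT)")
conn.executemany(
    "INSERT INTO carwash VALUES (?, ?, ?)",
    [
        (1, "01", "2019"),  # paid, January -- should match
        (0, "02", "2019"),  # unpaid February -- must be excluded
        (1, "03", "2019"),  # wrong month -- must be excluded
    ],
)

# Ungrouped: AND binds tighter than OR, so the unpaid Feb row leaks through.
ungrouped = conn.execute(
    "SELECT COUNT(*) FROM carwash "
    "WHERE paid = 1 AND month = '01' OR month = '02'"
).fetchone()[0]

# Grouped: the OR is evaluated first, as intended.
grouped = conn.execute(
    "SELECT COUNT(*) FROM carwash "
    "WHERE paid = 1 AND (month = '01' OR month = '02')"
).fetchone()[0]

print(ungrouped, grouped)  # 2 1
```

The ungrouped query counts the unpaid February row; the grouped one does not, which is exactly the discrepancy the asker saw.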
common-pile/stackexchange_filtered
How to resize the Legend Item Window in Map Composer? I am just wondering if there is any simple way (no messing with codes) to adjust this Legend Item Window in Map Composer. I am using QGIS 2.14 in windows 8. A much larger (taller) window would really help, rather than this 2-cm window that is so hard to scroll when you have 20+ legend items to edit. Here is the screenshot of the Map Composer Not sure it's currently possible but if you want, you can increase your 2cm window to a 4cm window by moving the Atlas generation dock window somewhere inside the middle of the Legend window. This should turn the Atlas generation window into a menu tab: Now you should have a little more space but not a huge amount: This has been fixed in the upcoming version 2.14.2.
common-pile/stackexchange_filtered
Can I create a Batch File to create subfolders and maybe rename existing subfolders? I have very limited if any knowledge of coding. So I'm not sure if this can be done through creating a batch file. I'm on Windows 10 btw. I have a large database of 'client' folders. Perhaps this would be the parent folder, if not.. There are subfolders within for each letter of the alphabet which is the first letter for each client name. So it would go C:Client/A/Adams, Bill C:Client/A/Anderson, Jill C:Client/B/Burgundy, Jack ..and so on. There are thousands of clients. Under each and every specific client folder (adams, bill etc) I need to create a subfolder called 'ID & PFP'. Is there a batch file I can create to automatically go through all the lettered subfolders and the subsequent client subfolders within those and create the 'ID & PFP' subfolders inside every single client name? Furthermore, some of said subfolders have a folder called 'ID' already. Is there code that could create those subfolders within the client name folders if there is no folder called 'ID', and if there is a folder called 'ID', rename it to 'ID & PFP'? Very kind regards Yes, this can be done using the md command which is used to Make Directory. edited my answer so it also included the case with the ID subfolder The following batch is what you need. edit: added subfolder "ID" case

@echo off
setlocal
rem "delims=" keeps folder names containing spaces (e.g. "Adams, Bill") in one token
for /f "delims=" %%b in ('dir /b /a:d') do (
    rem enter the directory
    pushd "%%b"
    echo In Directory: %%b
    for /f "delims=" %%c in ('dir /b /a:d') do (
        rem enter the directory
        pushd "%%c"
        echo In Directory: %%c
        IF EXIST ID (
            rem Folder "ID" does exist
            echo Folder "ID" Renamed to "ID & PFP"
            rename "ID" "ID & PFP"
        ) ELSE (
            rem Folder "ID" does not exist
            rem create folder
            md "ID & PFP"
            echo Folder: "ID & PFP" created
        )
        rem leave the directory
        popd
    )
    rem leave the directory
    popd
)
pause >nul
endlocal

Put this into a batch file and put it into the parent folder where you have the folder a,b,c etc....
Basically this script will go into every sub folder it is in and then into every sub folder beneath to create a folder. Sample output:

In Directory: test
In Directory: child
Folder "ID" Renamed to "ID & PFP"
In Directory: child1
Folder: "ID & PFP" created
In Directory: child2
Folder: "ID & PFP" created
In Directory: test1
In Directory: child
Folder: "ID & PFP" created
In Directory: test2
In Directory: child
A subdirectory or file named "ID & PFP" already exists.
Folder: "ID & PFP" created

Well, the following code will do everything you asked for; please feel free to copy it into some ".txt" editor and save it with a ".bat" extension (for example: "dirHelper.bat"). Don't forget to change the second line from "CD C:\root" to something that corresponds to your case - "CD C:\Client" - and feel free to ditch the "echo" statements; I added them just to help you understand what's going on while you are testing, or for log purposes on execution.

@Echo Off
CD C:\root
echo entered %cd% directory
FOR /D %%G in ("*") DO (
    CD %%G
    echo entered %cd%\%%G directory
    FOR /D %%N in ("*") DO (
        cd %%N
        echo entered %cd%\%%G\%%N directory
        if EXIST ID (
            rename ID "ID & PFP"
            echo Found ID folder inside %cd%\%%G\%%N
            echo ID folder renamed to "ID & PFP"
            CD ..\
        ) else (
            if EXIST "ID & PFP" (
                echo "ID & PFP folder already exist here:" %cd%\%%G\%%N
                CD ..\
            ) else (
                md "ID & PFP"
                echo created "ID & PFP" folder inside %cd%\%%G\%%N directory
                CD ..\
            )
        )
    )
    CD ..\
)
PAUSE
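If batch syntax ever gets unwieldy, the same two-level walk is short in Python. A sketch under the same assumptions (a Client/<Letter>/<Name> layout, an existing 'ID' folder renamed, an 'ID & PFP' folder created otherwise):

```python
from pathlib import Path

def ensure_id_pfp(root):
    """For every <root>/<Letter>/<Name> folder, make sure an 'ID & PFP'
    subfolder exists, renaming an existing 'ID' folder when there is one."""
    for letter_dir in root.iterdir():
        if not letter_dir.is_dir():
            continue
        for client_dir in letter_dir.iterdir():
            if not client_dir.is_dir():
                continue
            old = client_dir / "ID"
            new = client_dir / "ID & PFP"
            if old.is_dir():
                old.rename(new)   # existing 'ID' becomes 'ID & PFP'
            elif not new.exists():
                new.mkdir()       # otherwise create it fresh

# ensure_id_pfp(Path(r"C:\Client"))  # run against the real root folder
```

Unlike the plain for /f form, pathlib needs no special handling for folder names containing spaces such as "Adams, Bill".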
common-pile/stackexchange_filtered
VB.NET Progress Bar issue I want another form to open with a progress bar when a button is clicked. So far the execution freezes for a couple of seconds and then the progress bar form opens up with the progress bar full, and then closes. I want the main form to pause execution so that the file isn't written until the progress bar has finished doing its thing. Here's my code: Main class:

Private Sub btnWriteFile_Click(sender As Object, e As EventArgs) Handles btnWriteFile.Click
    If hasUserEnteredFileName() = True Then 'don't worry about this
        If reversedString IsNot Nothing Then 'or this
            FileWriting.Show() 'the progress bar class
            Dim sw As StreamWriter = File.CreateText(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) & "\" & fileName.Text & ".txt")
            sw.Write(reversedString)
            sw.Flush()
            sw.Close()
        End If
    End If
End Sub

Progress bar class:

Private Sub FileWriting_Load(sender As Object, e As EventArgs) Handles MyBase.Load
    ProgressBar1.Minimum = 1
    ProgressBar1.Maximum = 100
    For i = ProgressBar1.Minimum To ProgressBar1.Maximum - 2
        Sleep(10)
        ProgressBar1.Value += 2
        ProgressBar1.Value -= 1
    Next i
End Sub

Your progress bar doesn't really show the progress of anything. Remove the Sleep statement and there is no reason for it to exist. It certainly isn't needed to report the progress of writing one line of text. I know it doesn't actually do anything, it is just an aesthetic feature. In MSDN, Form.Load "Occurs before a form is displayed for the first time." Apart from this, your code has no way to show progress status while the file is written; the two operations are executed in sequence. Why does the do-nothing progress bar have to go on a new form? Put it on the current form and have it do nothing there. Don't bother replying, you're not solving it. 
You cannot do such operations without using multithreading, so I changed some parts of your code like this:

Public Class FileWriting
    Private Sub FileWriting_Load(sender As System.Object, e As System.EventArgs) Handles MyBase.Load
        BackgroundWorker1.RunWorkerAsync()
    End Sub
    Private Sub BackgroundWorker1_DoWork(sender As System.Object, e As System.ComponentModel.DoWorkEventArgs) Handles BackgroundWorker1.DoWork
        Me.Invoke(New MethodInvoker(Sub() setprogressbarminmax()))
        Dim min_value As Integer = 0
        Dim max_value As Integer = 0
        Me.Invoke(New MethodInvoker(Sub() min_value = ProgressBar1.Minimum))
        Me.Invoke(New MethodInvoker(Sub() max_value = ProgressBar1.Maximum))
        For i = min_value To max_value
            System.Threading.Thread.Sleep(10)
            If i < 99 Then
                Me.Invoke(New MethodInvoker(Sub() setprogvalue()))
            End If
        Next i
    End Sub
    Private Sub setprogressbarminmax()
        ProgressBar1.Minimum = 1 : ProgressBar1.Maximum = 100
    End Sub
    Private Sub setprogvalue()
        ProgressBar1.Value += 2
        ProgressBar1.Value -= 1
    End Sub
    Private Sub BackgroundWorker1_RunWorkerCompleted(sender As Object, e As System.ComponentModel.RunWorkerCompletedEventArgs) Handles BackgroundWorker1.RunWorkerCompleted
        Me.Close()
    End Sub
End Class

Also, you can find more info about BackgroundWorker here
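The pattern BackgroundWorker implements is general: do the slow work on a worker thread, and push progress updates back to the thread that owns the UI rather than touching UI objects from the worker. A minimal Python sketch of the same shape, with a queue standing in for Invoke/ReportProgress:

```python
import queue
import threading

def write_file_with_progress(progress):
    # Simulate the slow work on the worker thread; report percent-done
    # via the queue instead of touching any UI object directly.
    for pct in range(0, 101, 20):
        progress.put(pct)
    progress.put(-1)  # sentinel: work finished

updates = queue.Queue()
worker = threading.Thread(target=write_file_with_progress, args=(updates,))
worker.start()

seen = []
while True:
    pct = updates.get()   # the "UI" thread consumes updates and repaints
    if pct == -1:
        break
    seen.append(pct)
worker.join()
print(seen)  # [0, 20, 40, 60, 80, 100]
```

The key property, in VB.NET or Python, is that only one thread ever mutates the UI; the worker communicates through a thread-safe channel.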
common-pile/stackexchange_filtered
What is the use of declaring a pointer to a structure with typedef? As in the image I've declared a pointer to a struct, one with typedef keyword and another without typedef keyword, and I'm trying to assign the address of the same user-defined array, but I'm getting an error at line number 19. When I comment out line number 19 and build the project, no error occurs at line number 20. By the way, no one can see the image. I recommend copying and pasting the code into the question as text and then indenting the whole thing in 4 spaces (select the code and use the {} button or CTRL+K keys to indent the whole block). That always works. OK we can see the image now, but you, and everyone else, are far, far better off with text than with an image. I may have misinterpreted the question. Are you asking what typedef does or are you asking why one would use typedef for this case? You're mixing your metaphors. NODE is a typedef (i.e. a type). NODE2 is a global scope pointer to a struct node. You can assign a value to NODE2 because it's a variable. The convention is to not use all caps for a variable because all caps is used for constants, such as #define PI 3.14159. Better to do struct node *my_node_pointer; or NODE my_node_pointer; But, you can't assign a value to a type. Thus, NODE = arr; is invalid. I am voting to close this since all of the code is in an image. Also you're missing the error message and this is not the entire code excerpt. You will want to review: Is it a good idea to typedef pointers?. typedef struct node *NODE; Introduces a synonym NODE for the type struct node *. You cannot assign a value to a type, therefore the line NODE = arr; fails compilation. struct node *NODE2; Introduces a global variable named NODE2 of type struct node *. You can assign to this variable as you wish, therefore the line NODE2 = arr; does not give an error. The above is similar to the following: typedef int Int32; int Counter; ... 
Int32 = 5; // error Counter = 5; // valid @SagarTube: NODE is an alias to struct node* <-- the pointer is important. Otherwise yes. So, according to the answer, NODE is just an alias to struct node whereas NODE2 is a global variable of struct node*. Fine, but now consider code where malloc is used to create a memory block; malloc returns a void pointer, so in order to use the memory block we have to typecast it to a particular type of pointer, and using the NODE keyword we can do that instead of struct node*, e.g. NODE x = (NODE) malloc (sizeof(struct node)); How is this possible? @SagarTube: what's the problem? NODE x = (NODE)malloc(sizeof(struct node)); is the same as struct node *x = (struct node*)malloc(sizeof(struct node)); Also note that you don't need to write an explicit cast in C, only in C++.
common-pile/stackexchange_filtered
laravel php artisan migrate When I run php artisan migrate In Connection.php line 664: SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client (SQL: select * from information_schema.tables where table_schema = aviandb and table_name = migrations) In Connector.php line 68: SQLSTATE[HY000] [2054] The server requested authentication method unknown to the client In Connector.php line 68: PDO::__construct(): The server requested authentication method unknown to the client [caching_sha2_password] How can I solve? Your php mysql extension doesn't support the version of MySQL server you are running. I'm assuming you're running MySQL 8.0, which is new at the time of this post. You need to update or rebuild PHP with support for the latest version of MySQL, or downgrade your MySQL Server version. Another solution is to create a user with the mysql_native_password option. CREATE USER 'user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'yourpassword'; GRANT ALL PRIVILEGES ON *.* TO 'user'@'localhost' WITH GRANT OPTION; I've run into the same issue, so I followed your last suggestion to correct it. Indeed, I had to use the MySQL CLI in order to create the users with that option, because the latest version of MySQL Workbench as I'm typing this (8.0.11rc) doesn't let you do so. I can create with Standard selected as the Authentication Type, but as soon as I Apply those changes, it automatically changes to caching_sha2_password and is grayed-out so I cannot change it again. I even have mysql_native_password set as the default_authentication_plugin in the options file. @Sturm, honestly, I wouldn't even touch MySQL 8.0 yet. Not worth the trouble and possible instability unless you absolutely need one of the new features. I'm inclined to agree, @Devon. I actually didn't mean to do so; I just did a simple brew update and brew upgrade, not realizing it would wipe out my old MySQL installation, replacing it with the new 8.0 version. 
For PDO::__construct(): The server requested authentication method unknown to the client [caching_sha2_password] issue, there's a Japanese blog which covers this: https://qiita.com/r641y/items/7f0ca12ced72363f9448 To summarize, you can log into mysql via the command line then change the password type from caching_sha2_password to mysql_native_password. The code to achieve this within mysql is: ALTER USER 'user'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password' You can replace 'user' and 'password' with your username and password to mysql. Then within mysql again: mysql> FLUSH PRIVILEGES; Once that's done, remember to update the .env file's DB_USERNAME= and DB_PASSWORD= values. There's a sample video on how to get to the .env file below: https://laracasts.com/series/laravel-from-scratch-2017/episodes/4?autoplay=true Hope this helps! It worked on my macbook pro high sierra. Run this statement in your MySQL query tool with your new password and it will work: ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_new_password' MySQL 8 & Laravel: The server requested authentication method unknown to the client So, here's the fix. You can create a user with the "old" authentication mechanism, which the MySQL database driver for PHP still expects.

CREATE USER 'user_name'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_password';
GRANT ALL PRIVILEGES ON db_name.* TO 'user_name'@'localhost';
ALTER USER 'user_name'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_password';

Then restart the MySQL server: sudo service mysql restart;
common-pile/stackexchange_filtered
Developer Console can SELECT or COUNT() more than 50.000 records. Why not Apex? For years people were looking for ways to query more than 50.000 records in Salesforce without running into Too many query rows: 50001. Even doing a COUNT() query was impossible. By accident, I just did a query on a type with more than 1 million records. In the Query Editor of the Dev Console. And it worked! When I tried the same in Apex it failed as expected. Why? Apex uses more resources than a simple API call, such as extra memory, CPU time, etc, so there's additional limits in place. The Developer Console, contrariwise, uses the API, which is just a server-side database cursor. As such, you can technically query up to 50,000,000 rows in the Developer Console (though very large queries may time out or freeze your browser). But would this be a hack to do a "SELECT Count() FROM Account LIMIT 50001" in Apex using the Tooling API? @RobertSösemann You could do it in certain contexts, but you might get a recursive callout error or a DML not committed error. It's probably safe in Visualforce contexts, though, if you wanted to. Though with VF, you can use the readonly mode to get 1 million rows I think? @BritishBoyinDC Certainly, but then you're in read-only mode. So it might be useful if you need to write data back later, or need to get more than 1 million rows, etc. @sfdcfox - agreed, though that point, I think it is time to look at Batch or something else...
common-pile/stackexchange_filtered
Saving a game state to a file I'm now at a point where I need to save the current game state. I'm using libGDX and did add the kryo lib to my project and did some testing. Question: Do I have to overwrite the file on every save, or can I overwrite only the bytes that change from one class? Do I have to always create a new OutputStream if I want the game to be saved? (Save the game every 5 mins, for example) I want the current entities that are created to be saved; should I create one file for all, or a file for each entity? Dario, Do I have to overwrite the file on every save, or can I overwrite only the bytes that change from one class? You don't have to, but should (create a fresh save file on every save). There is no reason to overcomplicate this. Do I have to always create a new OutputStream if I want the game to be saved? (Save the game every 5 mins, for example) Again, why worry? You save once per 5 minutes, you won't notice any difference (besides your coding time and efforts being wasted) whether you reused your OutputStream or created a new one. Create a new one. I want the current entities that are created to be saved; should I create one file for all, or a file for each entity? Depends on what makes sense and what these "entities" are. In any case to save the entity you will need to serialize it, which is just a fancy way of saying create a representation of it in text. To load the entity, reverse the process (deserialize it). The easiest way to learn how to do this is to make a JSONObject (library here). Put the values from the entity into the JSONObject and to turn it into text, call JSONObject.toString(). To deserialize it, create a new JSONObject and pass the text into its constructor. You may then retrieve the values. 
These entities could be 3 types: Enemies max 15, Projectiles max 50, Blocks (triangles that are destructible but grow back after time). They have to grow back also if they're not in the visible area (only visible areas of the map will be loaded). Save it into 1 file as described.
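Following the JSON suggestion above, a save/load round-trip is just serialize-everything, deserialize-everything. A Python sketch with hypothetical entity shapes matching the three types in the question (the Java JSONObject flow is analogous):

```python
import json
import os
import tempfile

def save_state(entities, path):
    # Overwrite the whole save file each time -- simpler and safer than
    # trying to patch individual bytes of an old save in place.
    with open(path, "w") as f:
        json.dump({"entities": entities}, f)

def load_state(path):
    with open(path) as f:
        return json.load(f)["entities"]

# Hypothetical entity records for the three types described above.
state = [
    {"type": "enemy", "x": 3, "y": 7, "hp": 40},
    {"type": "projectile", "x": 5, "y": 1, "vx": -2},
    {"type": "block", "x": 0, "y": 0, "regrow_in": 12.5},
]

fd, save_file = tempfile.mkstemp(suffix=".json")
os.close(fd)
save_state(state, save_file)
print(load_state(save_file) == state)  # True -- lossless round-trip
```

One file holds all entities, and the regrow timer travels with each block, so off-screen blocks can be advanced correctly when the save is reloaded.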
common-pile/stackexchange_filtered
What's hatched on my rose? While watering my rose plant this morning, I noticed it was covered in tiny brown bugs: They're on most of the leaves of the plant. What are they? Is there anything I can do? The plant is currently kept inside - should I move the plant outside to avoid an infestation? I agree they do look like spider mites - however, you are in the UK, and red spider mite is only an issue on indoor plants here, never outdoors. I suggest you relocate your rose to the outdoors, and the spider mite should vanish on its own over a week or two, but if you happen to have any Roseclear Ultra, or another insecticidal spray, use that just in case the webbing is coincidental and what's actually on the plant are aphids. I note also there is some yellowing of areas of leaf on the plant - it's obviously in a pot if it's indoors, so if you can find a sunny spot in the garden to plant it in the ground, it should grow more healthily. Garden? I'll put it on the balcony :-) @TomMedley Ah! You and me both then... roses don't last long in pots in terms of years, they don't like 'em, but you'll have it for a while It was a cheap Sainsbury's pot, which I think might have run its course. Oh well! @TomMedley oh I see - they're just meant as temporary visitors really, but you could pot up into something larger, see if it flowers again or next year. Zooming in on the picture, they definitely look like mites of some kind to me, and with the webs, I concur that they're some kind of spider mite. :) Our spider mites in my section of Idaho in the USA are almost always invisible, though, for some odd reason, even though they're probably our most prevalent pest (so, although I know spider mites generally are often visible, because I'm not used to seeing them, those look huge to me). @Shule Red spider mite I'm pretty sure....probably on the plant at point of sale, acquired from the greenhouse in which it would have been grown Make sure that you harden this plant before taking it out of doors. 
Bring it back in reversing the process for winter unless you plant it in the garden. I would certainly spray with Neem first. Do it in the tub and get all surfaces to include under the leaves. Allow to dry, then turn the shower on, soak and allow to drain. Then start the hardening off to take it out on your balcony. Do not allow to sit in a saucer of water. Are there areas on your balcony that are shaded the entire day? You can put your plant in that shade without hardening. Then allow a bit more sun. Retreat if needed. @stormy - no need for hardening here at the moment - chances are its hotter outdoors than it is in - 30 degrees C here at the mo at 8.45 pm! Not much sun forecast either, just heat... Even humans can get a bad sunburn in overcast. The best place is a covered porch or balcony. Temperature isn't as important as the light rays from the sun. I want to see if this rose is OUT of that dang foo foo foil paper the florists sometimes use to sell plants. Florists or at least their suppliers know that sitting in water the foil holds in will kill the plant. The buyer thinks it is their fault and buys another. grrrr. Same with the packet they give for fresh flowers? It actually shortens the life of fresh flowers! Corruption is everywhere!! Grins! @stormy - its in a pot from a supermarket, meant to be a temporary bit of colour in a house display, they're quite cheap to buy, bit like azaleas or poinsettias in the winter. You just bin 'em when they stop flowering usually. Exactly, I spent a year as a florist in a huge grocery store/chain. I was a bit 'unusual'...my bottom line was higher than all the other stores in the chain combined. Had a full blown nursery in the parking lot, sold huge bouquets that the customers chose flowers fillers wrapped in cool newspaper and raffia. Painted a life size cherry tree on the entrance wall and had it change with the seasons. Dried flowers or sent to hospitals instead of throwing them away. Huge return clientele. 
Amazing that I wasn't LOVED. Grins. Those foils for pots are awful. They look like rose spider mites. You can use a miticide or insecticidal soap to get rid of them. Repeat after 14 days to get the eggs that hatch. http://www.gardeningknowhow.com/ornamental/flowers/roses/rose-spider-mites.htm
common-pile/stackexchange_filtered
Buildozer errors while converting kivy to apk in colab For detailed logs and the code please check my colab notebooks below: most basic attempt: https://colab.research.google.com/drive/1FhDPxcxOy562cl9hlLnTTotxnpKbe7Wk?usp=sharing p4a manually: https://colab.research.google.com/drive/1zB1xK1lCNjpd7Z3JZz2vrh0yul7HZ0wY?usp=sharing desperate final attempt: https://colab.research.google.com/drive/1hyef6UwhZTb7EZDqmIofe1r9Wub2wIBq?usp=sharing I would highly appreciate your help, I'm kind of at a loss. No other current question was answered and the kivy subreddit seems to be dealing with the same issues and giving up because there are no proper answers anywhere. I hope this post can become the main point to start the debugging. Questions have to include all the required information. Please do not use external links to present crucial information or code. Just include the relevant code into your question.
common-pile/stackexchange_filtered
How can I get started developing extensions for python IDLE? How can I develop extensions in IDLE? In IDLE's preferences I've seen this and would love to learn how to make my own extensions: <pythondir>/Lib/idlelib/extend.txt explains how to write an extension module with an extension class. <pythondir>/Lib/idlelib/zzdummy.py is an (incomplete) example extension module, with an example ZzDummy class. The comment in <pythondir>/Lib/idlelib/config-extension.def explains how to add an entry to that file so that IDLE will incorporate an extension. The ZzDummy entry is an example extension entry. If you uncomment ZzDummy.menudefs in zzdummy.py, enable the ZzDummy extension on the Settings dialog Extensions tab, as shown in your image, and restart IDLE, z in and z out entries will appear at the bottom of the Format menu. However, the menu entries do not work, which is why menudef is commented out. Your question reminded me of the existence of https://bugs.python.org/issue32631. I have edited and merged the patch and the backports are in progress. The changes will be in the next releases of 3.8, 3.9, and 3.10. You can see the changes now at https://github.com/python/cpython/pull/14491/files. The new version of zzdummy is https://github.com/python/cpython/blob/master/Lib/idlelib/zzdummy.py and you could copy that into your installed idlelib for any of the above versions.
React Native header / bottom tabbar jumping on first app load I have a single application which includes only navigation packages. On IOS, all is fine but on Android, header and/or bottom tabbar seems like jumping (maybe recalculating their positions). This happens only when I use navigation components and only when app is just launched. Is there anyone faced same problem? Thanks in advance. Packages: "@react-native-community/masked-view": "^0.1.10", "@react-navigation/bottom-tabs": "^5.6.1", "@react-navigation/native": "^5.6.1", "@react-navigation/stack": "^5.6.2", "react": "16.11.0", "react-native": "0.62.2", "react-native-gesture-handler": "^1.6.1", "react-native-reanimated": "^1.9.0", "react-native-safe-area-context": "^3.0.7", "react-native-screens": "^2.9.0" This is the whole app: import * as React from 'react'; import { Button, Text, View } from 'react-native'; import { NavigationContainer } from '@react-navigation/native'; import { createStackNavigator } from '@react-navigation/stack'; import { createBottomTabNavigator } from '@react-navigation/bottom-tabs'; function DetailsScreen() { return ( <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}> <Text>Details!</Text> </View> ); } function HomeScreen({ navigation }) { return ( <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}> <Text>Home screen</Text> <Button title="Go to Details" onPress={() => navigation.navigate('Details')} /> </View> ); } function SettingsScreen({ navigation }) { return ( <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}> <Text>Settings screen</Text> <Button title="Go to Details" onPress={() => navigation.navigate('Details')} /> </View> ); } const HomeStack = createStackNavigator(); function HomeStackScreen() { return ( <HomeStack.Navigator> <HomeStack.Screen name="Home" component={HomeScreen} /> <HomeStack.Screen name="Details" component={DetailsScreen} /> </HomeStack.Navigator> ); } const SettingsStack = 
createStackNavigator(); function SettingsStackScreen() { return ( <SettingsStack.Navigator> <SettingsStack.Screen name="Settings" component={SettingsScreen} /> <SettingsStack.Screen name="Details" component={DetailsScreen} /> </SettingsStack.Navigator> ); } const Tab = createBottomTabNavigator(); export default function App() { return ( <NavigationContainer> <Tab.Navigator> <Tab.Screen name="Home" component={HomeStackScreen} /> <Tab.Screen name="Settings" component={SettingsStackScreen} /> </Tab.Navigator> </NavigationContainer> ); } Hello @Basar Sen, Can you try it with the RN 0.61.5 version? It should not be jumping like that. Also, please try it on a real device. @FreakyCoder Thank you. I tried you suggestion. But it's still same. That's a weird bug :) Let me test it :) @FreakyCoder I tried with this combination. Now it works properly :). "react": "16.11.0", "react-native": "0.62.2", "@react-navigation/bottom-tabs": "^5.2.5", "@react-navigation/native": "^5.1.4", "@react-navigation/stack": "^5.2.9", "@react-native-community/masked-view": "^0.1.7", "react-native-gesture-handler": "^1.6.1", "react-native-reanimated": "^1.7.1", "react-native-screens": "^2.4.0", "react-native-safe-area-context": "^0.7.3" i get the same problem "@react-navigation/bottom-tabs": "^5.5.1", Hello Basar Sen, Did you find any solution to this problem? I am facing same problem. @ArunGirivasan did you find a solution ? @StanlyMedjoYes, please check the answer below. https://stackoverflow.com/a/64274224/10505503 I fixed this problem by using SafeAreaProvider. You should add SafeAreaProvider in your app root component and use SafeAreaView as the root component of your page. Also check the import statement of SafeAreaView , react-native also has SafeAreaView but that component only supports iOS 10+ . I also needed to reset safeAreaInsets. Refer to my updated answer: https://stackoverflow.com/a/67922977/492325 I was struggling with this exact bug for a while. 
I've finally been able to find a workaround. It seems that all the ReactNavigation navigators (eg Tab and Stack) will by default accommodate safe areas. This is mentioned in this page: https://reactnavigation.org/docs/bottom-tab-navigator/ By default, the device's safe area insets are automatically detected So it seems the behaviour we're seeing is due to this. It's not clear why ReactNavigation has buggy "safe area" logic, but the workaround is to disable that. The workaround is similar to what @Arun Girivasan has suggested, with a couple extra steps: Use react-native-safe-area-context to wrap everything in a SafeAreaProvider and SafeAreaView Specify the safeAreaInsets to be 0 for all directions: <Tab.Navigator initialRouteName="AppDashboard" tabBarOptions={{ safeAreaInsets: { top: 0, bottom: 0, left: 0, right: 0, } }} > If you're creating stacks within your tab screens, provide the same safeAreaInsets for your stack navigators. With these changes I'm no longer seeing the tab bar height jump AND i'm no longer seeing the stack header jumping. Basically this workaround resolves all UI glitches for me. Just saved me a bunch of time to solve this... Just to works in the iOS and Android I put in the bottom prop a different value, using the lib react-native-iphone-x-helperi did it: safeAreaInsets: { top: 0, bottom: getBottomSpace(), left: 0, right: 0 } Use react-native-safe-area-context to wrap everything in a SafeAreaProvider and SafeAreaView fixed the issue. Thank you so much! @badsyntax : After followed your instruction also, my bottom tabs jumping for the first time. Anyother possibilities for this issue? 
same thing happend to me on @react-navigation/bottom-tabs i just removed paddingBottom and padding top from "tabstyle" and pasted in "style" this solved the issue BEFORE: tabBarOptions={{ keyboardHidesTabBar: true, activeTintColor: COLOR.white, style: { backgroundColor: COLOR.primary, height: responsiveHeight(7), }, tabStyle: { paddingBottom: responsiveHeight(0.5), paddingTop: responsiveHeight(0.5), }, }} AFTER: tabBarOptions={{ keyboardHidesTabBar: true, activeTintColor: COLOR.white, style: { backgroundColor: COLOR.primary, height: responsiveHeight(7), paddingBottom: responsiveHeight(0.5), paddingTop: responsiveHeight(0.5), }, }} ... i hope it helps :) give this navigationOptions: { headerShown: true, safeAreaInsets: { top: 0, bottom: 0, left: 0, right: 0, }, }, if you are using react-navigation 4x
Designed table cannot be saved in SQL Server Management Studio I am trying to design a table in Microsoft SQL Server Management Studio. I can modify it but it cannot be saved. Does anyone know what the problem is and how to fix it? Details, Details... I'm guessing you get an error message? If so, what is it? You may as well have said "I have a problem, does anyone know how to fix it?" and nothing else. What error are you getting, and what are you modifying? What exactly are you doing? Well, if you create a modify script, you should be able to run it and save. Like @ZoharPeled said, what error are you getting? Look at the screenshot! The error message is pretty clear about what the problem is and how to fix it - what's the question? I found the way to fix it: go to Tools > Options > Designers and uncheck "Prevent saving changes that require table re-creation".
Replacing Leading Minus Signs in Environment In this question concerning negative signs and the alignment of entries in matrix environments, Heiko Oberdiek gave a great answer in which he uses a user-defined command called \matminus instead of the usual -. How can I define a matrix environment that will automatically perform this replacement of leading negative signs for me? Thanks for the help! Edit: I have adapted the code provided by Heiko Oberdiek and egreg, and created a package that defines a set of alternate matrix environments (spmatrix, sbmatrix, sBmatrix, svmatrix, and sVmatrix). These environments function analogously to their counterparts without the 's', but reduce the length of the minus signs within them to achieve better alignment of the entries within each column. The project is hosted here. Here is a picture that shows the difference between the regular and reduced minus matrix environments: Perhaps you should consider making the columns right aligned, e.g. by using extra options to matrices provided by mathtools. @AndrewSwann I tried various techniques, including what you suggest, in the question that I link to above. You can see the results in the PDF file that I include with the question -- the right-aligned columns do not look very good when negative signs are used. We can make - to become \matminus in matrices by exploiting the fact that all matrix environments of amsmath use \env@matrix, by injecting code in it. \documentclass[11pt,a4paper]{article} \usepackage{amsmath} \usepackage{etoolbox} \usepackage{graphicx} %%% Save a copy of the original minus in math mode \mathchardef\realminus\mathcode`- %%% Define the shortened version of minus (H. 
Oberdiek) \newcommand{\reducedminus}{% \leavevmode\hphantom{0}% \llap{% \settowidth{\dimen0 }{$0$}% \resizebox{1.1\dimen0 }{\height}{$\realminus$}% }% } %%% Define \matminus so that it doesn't reduce the minus in sub/superscripts \newcommand{\matminus}{% \mathchoice{\reducedminus}{\reducedminus}% {\realminus}{\realminus}% } \makeatletter %%% Make - become \matminus in matrices \preto\env@matrix{\mathcode`-=\string"8000 \begingroup\lccode`~=`- \lowercase{\endgroup\let~}\matminus } \makeatother \begin{document} \[ A = \begin{pmatrix} -123 & -10 & 1 \\ 100 & 5 & 16 \\ 13 & 7 & 7^{-2} \end{pmatrix}. \] \end{document} However, I can't see any improvement over the normal appearance If you want shorter minus signs also in exponents, change the code as follows: %%% Save a copy of the original minus in math mode \mathchardef\realminus\mathcode`- %%% Define the shortened version of minus (H. Oberdiek) \newcommand{\reducedminus}[2]{% \leavevmode\hphantom{0}% \llap{% \settowidth{\dimen0 }{$#10$}% \resizebox{1.1\dimen0 }{\height}{$#1\realminus$}% }% } %%% Define \matminus so that it doesn't reduce the minus in sub/superscripts \newcommand{\matminus}{\mathpalette\reducedminus\relax} \makeatletter %%% Make - become \matminus in matrices \preto\env@matrix{\mathcode`-=\string"8000 \begingroup\lccode`~=`- \lowercase{\endgroup\let~}\matminus } \makeatother Thanks for the answer. I think that the appearance of the first two columns of A has improved, but when exponents are used (as in the third column), there is no difference. @void-pointer If you want shorter minus signs also in exponents, some more work is needed. Let me add it, so you can choose. Thank you very much! I think the result is much better than the unmodified matrix, though the change is subtle. @void-pointer I continue to prefer the original. 
I have adapted the code you have provided and created a package that defines a set of alternate matrix environments (spmatrix, sbmatrix, sBmatrix, svmatrix, and sVmatrix), in case others would like to experiment. The project is hosted here.
T-invariant subspaces of V. Let $T$ be a linear operator on a vector space $V$ over $F$. If $W_1, W_2, \ldots, W_k$ are $T$-invariant subspaces of $V$, prove that $\sum_{i=1}^{k} W_i$ and $\bigcap_{i=1}^{k} W_i$ are $T$-invariant subspaces of $V$. What have you tried? The result is almost immediate, so where did you get stuck? Let $S:= W_1+...+W_k$. We have to show that $T(S) \subseteq S.$ To this end let $w \in S$. Then there are $w_j \in W_j \quad (j=1,...,k)$ such that $$w=w_1+...+w_k.$$ It follows that $$T(w)=T(w_1)+...+T(w_k).$$ From $T(w_j) \in W_j \quad (j=1,...,k)$ we derive $T(w) \in S.$ The intersection is now your turn.
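For completeness, the intersection half that the answer leaves as an exercise goes the same way:

```latex
Let $I := \bigcap_{i=1}^{k} W_i$ and let $w \in I$. Then $w \in W_j$ for
every $j = 1, \dots, k$, and since each $W_j$ is $T$-invariant,
$T(w) \in W_j$ for every $j$. Hence $T(w) \in \bigcap_{i=1}^{k} W_i = I$,
so $T(I) \subseteq I$, i.e.\ the intersection is $T$-invariant.
(That $I$ is a subspace at all follows as usual, since each $W_j$ is.)
```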
Tracking a branch with Git submodule where some clients < 1.8.2 Git 1.8.2 added the possibility to track remote branches with submodules (which is awesome). # add submodule to track master branch git submodule add -b master [URL to Git repo]; # update your submodule # --remote will also fetch and ensure that # the latest commit from the branch is used git submodule update --remote .gitmodules looks like: [submodule "libraries/shared_libraries"] path = libraries/shared_libraries url =<EMAIL_ADDRESS> branch = develop In our dev shop, all developers use Git >v1.8.2. However, our QA, staging & production servers run either RHEL 6.5 or CentOS, which has 1.7.1 OOTB. These boxes are typically "pull only", and aren't used to commit code. What should we expect to happen when using "git submodule init/update" from our boxes running 1.7.1? Is this a recipe for disaster, or is it a supported use case? Is this a recipe for disaster, or is it a supported use case? Supported, in that it would ignore the .gitmodules branch directive. A git submodule update --init would simply check out the submodule to its gitlink entry as recorded in the index of the main parent repo. That gitlink entry would be the SHA1 of the submodule, as last recorded by the parent repo (via a git add + git commit + git push). As long as both the parent repo has been pushed with that gitlink entry, and the submodule has been pushed with that SHA1 included in its history, both can then be pulled by any client. So in my example, if my submodule "shared_libraries" advances on the develop branch, to get the latest version with the older git clients, I'd just first have to use a newer git client to git submodule update --remote, which would update the gitlink, allowing a subsequent git submodule update from an older client to get this newer version? Does that sound right? @rcourtna yes.
As far as the older client is concerned, all it needs is a parent repo which references a SHA1 existing in the history of the submodule it will pull.
mysqli array fetching is not getting data I can fetch the result as a single row but I can't fetch multiple rows. How can I fix this? $stmt = $this->conn->prepare("SELECT * from user where id=?"); $stmt->bind_param("s", $id); if($stmt->execute()){ $result = $stmt->get_result()->fetch_array(MYSQLI_ASSOC); $stmt->close(); return $result; } I get a result like this {"ID":2,"Name":"Anju"} But I need to get all user results. My code is here: $stmt = $this->conn->prepare("SELECT * from user where id=?"); $stmt->bind_param("s", $id); if($stmt->execute()){ $result = array(); while ($row = $stmt->get_result()->fetch_array(MYSQLI_ASSOC)) { $result[] = $row; } $stmt->close(); return $result; } I got the error Fatal error: Call to a member function fetch_array() on a non-object in line 5 The line is: while ($row = $stmt->get_result()->fetch_array(MYSQLI_ASSOC)) The result I expect is {"ID":1,"Name":"Obi"}, {"ID":3,"Name":"Oman"}, {"ID":4,"Name":"Anju"} $stmt = $this->conn->prepare("SELECT * from user where id=?"); $stmt->bindValue(1, $id); Try that way, and try fetchAll instead of fetch_array. I tried that and got an error like Warning: mysqli_stmt::bind_param(): Undefined fieldtype 1 (parameter 2). Did you try bindValue? From your error I see you are still using bind_param(). You could make the following correction. Change $rows->fetch_assoc(MYSQLI_ASSOC) to $rows->fetch_assoc() It should look like if ($stmt->execute()) { $rows = $stmt->get_result(); $result = array(); while ($row = $rows->fetch_assoc()){ $result[] = $row; } $stmt->close(); return $result; } else { return NULL; } Suggestion: always specify the list of columns in your SELECT, which makes query execution faster. Please read the PHP manual
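The fix above is PHP-specific, but the underlying pattern (prepare once, execute, then drain every row instead of stopping after the first) is the same in any driver. A sketch of the idea in Python with sqlite3, using an invented user table for illustration:

```python
import sqlite3

def fetch_users(conn, min_id):
    """Return every matching row as a list of dicts, not just the first one."""
    cur = conn.execute(
        "SELECT id, name FROM user WHERE id >= ? ORDER BY id", (min_id,))
    columns = [c[0] for c in cur.description]
    # fetchall() plays the role of the while/fetch_assoc() loop above:
    # it keeps reading until there are no rows left.
    return [dict(zip(columns, row)) for row in cur.fetchall()]
```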
Is it possible to mock a type with an attribute using Rhino.Mocks I have this type: [RequiresAuthentication] public class MobileRunReportHandler : IMobileRunReportHandler { public void Post(MobileRunReport report) { ... } } I am mocking it like so: var handler = MockRepository.GenerateStub<IMobileRunReportHandler>(); handler.Stub(x => x.Post(mobileRunReport)); The problem is that the produced mock is not attributed with the RequiresAuthentication attribute. How do I fix it? Thanks. EDIT I want the mocked type to be attributed with the RequiresAuthentication attribute, because the code that I am testing makes use of this attribute. I would like to know how can I change my mocking code to instruct the mocking framework to attribute the produced mock accordingly. possible duplicate of Mocking attributes - C# I will gladly agree with you if you show me where exactly does that post contain the answer to my question. Thanks. In what context do you need the class to be attributed with RequiresAuthentication? Could you elaborate that a little more? How does the code you are testing use the attribute? Could you add a code snippet? The approach to add the attribute to the stub might depend on this. This happens inside the OpenRasta framework. It uses the regular reflection API - Attribute.GetCustomAttributes. Adding an Attribute to a type at runtime and then getting it using reflection isn't possible (see for example this post). The easiest way to add the RequiresAuthentication attribute to the stub is to create this stub yourself: // Define this class in your test library. [RequiresAuthentication] public class MobileRunReportHandlerStub : IMobileRunReportHandler { // Note that the method is virtual. Otherwise it cannot be mocked. public virtual void Post(MobileRunReport report) { ... } } ... var handler = MockRepository.GenerateStub<MobileRunReportHandlerStub>(); handler.Stub(x => x.Post(mobileRunReport)); Or you could generate a stub for the MobileRunReportHandler type. 
But you'd have to make its Post method virtual: [RequiresAuthentication] public class MobileRunReportHandler : IMobileRunReportHandler { public virtual void Post(MobileRunReport report) { ... } } ... var handler = MockRepository.GenerateStub<MobileRunReportHandler>(); handler.Stub(x => x.Post(mobileRunReport)); I am sorry. I just do not get it. The stub generated by Rhino.Mocks is of dynamic type, i.e. the type is created at runtime using the Reflection.Emit facility, which allows to have a dynamic type attributed as we please. It is all a question of whether Rhino.Mocks knows to exploit this ability of Reflection.Emit or not.
Mutate and if else function to create new column It may be a very easy question, but so far I failed. My dataset looks like this Duration Unit 1 day 3 month 5 weeks What I want to do is to create a new column for the number of days depending on the unit. So 3 months should be 90 days, 5 weeks should be 35 days. And in case the unit is day, the value of day should be placed in my new column without any calculation. So the result should look like this Duration Unit Duration_calculated 1 day 1 3 month 90 5 weeks 35 Here is the example dataset Duration <- c(1,3,5) Unit <- c("day", "month", "weeks") dataset <- data.frame(Duration, Unit) I've tried a combination of mutate and the if else function, but it did not work out for me. Average month length is 30.437. Why is it day, month, but weeks - plural? We could use case_when(): library(dplyr) df %>% mutate(Duration_calculated = case_when( Unit == "day" ~ Duration, Unit == "month" ~ Duration * 30, Unit == "weeks" ~ Duration * 7 )) Duration Unit Duration_calculated 1 1 day 1 2 3 month 90 3 5 weeks 35 Or if you stick with ifelse or if_else, we could use a nested if_else() statement: df %>% mutate(Duration_calculated = if_else( Unit == "day", Duration, if_else(Unit == "month", Duration * 30, if_else(Unit == "weeks", Duration * 7, NA_real_) ) )) Make a lookup vector, then match: dataset <- data.frame(Duration = c(1, 3, 5), Unit = c("day", "month", "week")) lookup <- setNames(c(1, 7, 30.437), c("day", "week", "month")) dataset$Duration_calculated <- dataset$Duration * lookup[ dataset$Unit ] dataset # Duration Unit Duration_calculated # 1 1 day 1.000 # 2 3 month 91.311 # 3 5 week 35.000
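The lookup-vector answer translates almost verbatim to other languages. For comparison, the same idea as a plain dictionary lookup in Python, using the 30-day month from the case_when() answer:

```python
# Days per unit, mirroring the lookup-vector approach above
# (a 30-day month, as in the case_when() answer).
UNIT_TO_DAYS = {"day": 1, "weeks": 7, "month": 30}

def duration_in_days(duration, unit):
    # One multiplication covers every unit, including "day" (factor 1),
    # so no special-casing is needed.
    return duration * UNIT_TO_DAYS[unit]
```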
Uniform continuity of continuous functions on compact sets Assume that $f: \mathbb R \rightarrow \mathbb R$ is continuous function on the compact set $A$. Does for any $\varepsilon >0$ exist a $\delta >0$, such that $$ \lvert\, f(x)-f(y)\rvert<\varepsilon \,\,\,\,\,\,\textrm{for every}\,\,\,\, x,y\in A,\,\, \text{with}\,\,\, \lvert x-y\rvert<\delta? $$ Do you mean to say that $f$ is continuous on each compact subset of $\mathbb{R}$ or just one fixed compact subset of $\mathbb{R}$? $f$ is continuous on a fixed compact $A$. Not really, there is the slight problem of when $x\in A$, but $x+t \notin A$. Else it is just trivial since $f$ is uniformly continuous. The last edit changed the question by requiring $y \in A$, which makes it much easier (or at least much more familiar). Assume that $f: \mathbb R \rightarrow \mathbb R$ is continuous on a compact $A$. Then for every $\varepsilon >0$, there exists a $\delta >0$, such that $$ \lvert\,f(x+t)-f(x)\rvert<\varepsilon, $$ whenever $x\in A$, $\lvert t\rvert<\delta$ and $x+t\in A$. If $x+t$ is not required to belong to $A$, then the value $f(x+t)$ does not affect the continuity of $f$, when restricted on $A$. For example (as in D. Fisher's example above), let $$ f(x)=\left\{ \begin{array}{lll} 1 & \text{if} & x\in A,\\ 0 & \text{if} & x\not\in A. \end{array} \right. $$ Then $f$ restricted on $A$ is continuous, while at $x=1$, and $t=1/n$, we have $$ f(1+1/n)-f(1)=-1, $$ for all $n\in\mathbb N$. On the other hand, the reformulated claim is just the fact that continuity on a compact metric space implies uniform continuity. This is an answer to the wrong question. The question as originally asked is meaningful and has an affirmative answer (see my answer). Yes, this example $f$ is not continuous on $A$ if this means that $f$ is supposed to be continuous at each element of $A$. It's continuous on $A$ in the weaker sense that the restriction of $f$ to $A$ is continuous, but that makes the problem much less interesting. 
First notice that we can assume that $f$ is identically zero on $A$ by subtracting off a continuous function $g$ extending the restriction of $f$ to $A$. Such a $g$ can be constructed using the distance function $d$ from $x\in\mathbb{R}$ to $A\subset \mathbb{R}$. The question becomes to show that for every $\epsilon>0$ there exists a $\delta>0$ such that if $d(t,A)<\delta$ then $|f(t)|<\epsilon$. Suppose there were no such $\delta$. Then one can construct a sequence $(t_n)$ with $d(t_n,A)\to 0$ as $n\to\infty$ while $|f(t_n)|\geq\epsilon$. The sequence is obviously bounded and therefore has a convergent subsequence $t_{n_k}\to x_0$. Then $x_0\in A$ by compactness. It follows that $f$ is not continuous at $x_0$, contradiction. +1 for answering the more interesting question that was originally asked (before the last edit). But I'm unclear on how to define $g$. We want $g(x) = f(x)$ for $x \in A$, of course; for $x \notin A$, if there is a unique $y \in A$ such that $d(x,y) = d(x,A)$, then we can use $g(x) = f(y) + d(x,A)$. There is at least one such $y$, but what if there is more than one? If we just choose arbitrarily, then how do we guarantee that $g$ is continuous when we switch from one branch to another? More explicitly, given $A={0,1}$, $f(0)=0$, $f(1)=1$, and $f$ continuous at $0$ and $1$, what is $g(1/2)$? (Well, $g(x)=x$ is an obvious choice, but how do you derive that from $f|_A$ and $d(-,A)$?) Sorry, one last comment: I agree that a $g$ exists to complete the proof, by the Tietze Extension Theorem, so I'm fine with your overall answer. I just don't understand what you were saying about how to construct $g$. 
I think this is called Heine's theorem, here goes my attempt: First note that your condition (uniform continuity) is: $$\forall\varepsilon>0 \ \exists\delta>0 : x,y\in A, \ |x-y|<\delta \Rightarrow|f(x)-f(y)|<\varepsilon$$ Which is equivalent to the sequential form $$(x_n),(y_n)\subset A, \ \lim(x_n-y_n)=0 \Rightarrow \lim\bigl(f(x_n)-f(y_n)\bigr)=0$$ Also I'll use the sequential characterization of compactness: $$\forall (x_n)\subset A \text{ there exists a convergent subsequence } (x_{n_k})\subset A \text{ with } \lim x_{n_k}\in A$$ So, first suppose that $f$ is not uniformly continuous, so we negate the previous statement: there exist $\varepsilon>0$ and (after passing to a subsequence) $(x_n),(y_n)\subset A$ with $$ \lim(x_n-y_n)=0 \quad\text{and}\quad |f(x_n)-f(y_n)|\ge\varepsilon \ \text{ for all } n. \tag{#} $$ So, let $(x_{n_k}), (y_{n_k})$ be convergent subsequences (taken along the same indices), with $(x_{n_k})\to x_0$. Since $\lim(x_n-y_n)=0$, also $(y_{n_k})\to x_0$. Now use the fact that $f$ is continuous$^*$, in which case $$(f(x_{n_k}))\to f(x_0), \ (f(y_{n_k}))\to f(x_0). \ \text{Therefore, }\lim(f(x_{n_k})-f(y_{n_k}))=0.$$ Using the definition of limit this contradicts condition $\text{(#)}$, therefore $f$ is uniformly continuous. Note that by replacing "$|\cdot|$" with the metric $d(\cdot,\cdot)$ we obtain the generalization for metric spaces! (because the sequential characterizations also hold). $*\text{using $f$ continuous} \Leftrightarrow \forall (x_n)\mid\lim(x_n)=p, \ \lim f(x_n)=f(p). $ This only answers the question after the subsequent edits by Yiorgos, not the original (more interesting, less duplicated) question.
How to specify structured streaming time based window in straight Spark SQL We are using structured streaming to perform aggregations on real time data. I'm creating a configurable Spark job that is given a configuration and uses it to group rows across tumbling windows and performs aggregations. I know how to do this with the functional interface. Here is a code fragment using the functional interface var valStream = sparkSession.sql(sparkSession.sql(config.aggSelect)) //<- 1 .withWatermark("eventTime", "15 minutes") //<- 2 .groupBy(window($"eventTime", "1 minute"), $"aggCol1", $"aggCol2") //<- 3 .agg(count($"aggCol2").as("myAgg2Count")) Line 1 executes a SQL string that comes from the configuration. I would like to move lines 2 & 3 into the SQL syntax so that the grouping and aggregations are specified in the configuration. Does anyone out there know how to specify this in Spark SQL? withWatermark does not have a corresponding SQL syntax. You have to use the dataframe API. For aggregation, you can do something like select count(aggcol2) as myAgg2Count from xxx group by window(eventTime, "1 minute"), aggCo1, aggCol2
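Independently of the Spark syntax, the tumbling-window grouping that window(eventTime, "1 minute") performs is just bucketing each event by the floor of its event time. A plain-Python sketch of that idea (this is not Spark code, and the tuple layout is invented for illustration):

```python
from collections import Counter
from datetime import datetime, timedelta

def tumbling_window_counts(events, width=timedelta(minutes=1)):
    """Count events per (window_start, agg_col1, agg_col2) bucket.

    `events` is an iterable of (event_time, agg_col1, agg_col2) tuples;
    an event belongs to the window starting at its event time floored
    down to a multiple of `width`.
    """
    origin = datetime(1970, 1, 1)
    counts = Counter()
    for event_time, col1, col2 in events:
        n = (event_time - origin) // width  # index of the window
        window_start = origin + n * width
        counts[(window_start, col1, col2)] += 1
    return counts
```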
how to access an s3 bucket given user name and keys? a friend gave me information on his s3 bucket, so I can look at his logs: user name access key secret key A lot of the material I have is about accessing your own bucket, but how do people share a particular s3 bucket? If you are using his username, access key, and secret key it shouldn't be your bucket's information I don't understand the exact question. As Ramhound said, if you have an access key and a secret key, as well as the bucket name, you have everything you need to know in order to completely access it as if it were your own. What are you trying to achieve? There are plenty of tools that allow you to connect to S3 shares. @slhck I need to download some logs for work. He gave me his info... I don't know the basis of how to access S3 buckets. The Amazon web site keeps telling me to create my own bucket, not how to access the bucket from my boss' info
how to disable specific date in angular bootstrap datepicker I am using the angular ui bootstrap datepicker with this version (2.5.0): https://angular-ui.github.io/bootstrap/#!#datepicker I want to disable specific dates, for example, every 29th and 30th of each month. How can I do it? Have you tried the date-disabled attribute with some function "disabledDates(date,mode)"? Thank you @AlekseySolovey! It works! I had tried it before but not in the right way. After some more research I discovered how to do it right. For me it works with: date-disabled: vm.disableFN (without parentheses and parameters). After that I define the function like this: "vm.disableFN = function(data){ let date = data.date; let mode = data.mode;}" The date and the mode sit within the same object (data).
TaskExecutor is not working Spring Integration I have set up a file poller with a task executor: ExecutorService executorService = Executors.newFixedThreadPool(10); LOG.info("Setting up the poller for directory {} ", finalDirectory); StandardIntegrationFlow standardIntegrationFlow = IntegrationFlows.from(new CustomFileReadingSource(finalDirectory), c -> c.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS, 5) .taskExecutor(executorService) .maxMessagesPerPoll(10) .advice(new LoggerSourceAdvisor(finalDirectory)) )) //move file to processing first processing .transform(new FileMoveTransformer("C:/processing", true)) .channel("fileRouter") .get(); As seen, I have set up a fixed thread pool of 10 and a maximum of 10 messages per poll. If I put in 10 files it still processes them one by one. What could be wrong here? * UPDATE * It works perfectly fine after Gary's answer, though I have another issue now. I have set up my poller like this: setDirectory(new File(path)); DefaultDirectoryScanner scanner = new DefaultDirectoryScanner(); scanner.setFilter(new AcceptAllFileListFilter<>()); setScanner(scanner); The reason for using AcceptAll is that the same file may come again, which is why I move the file first. But when I enable the thread executor the same file is being processed by multiple threads, I assume because of AcceptAllFileListFilter. If I change to AcceptOnceFileListFilter it works, but then the same file that comes again will not be picked up again! What can be done to avoid this issue?
Issue/Bug In Class AbstractPersistentAcceptOnceFileListFilter We have this code @Override public boolean accept(F file) { String key = buildKey(file); synchronized (this.monitor) { String newValue = value(file); String oldValue = this.store.putIfAbsent(key, newValue); if (oldValue == null) { // not in store flushIfNeeded(); return true; } // same value in store if (!isEqual(file, oldValue) && this.store.replace(key, oldValue, newValue)) { flushIfNeeded(); return true; } return false; } } Now, for example, if I have set up max 5 messages per poll and there are two files, then it's possible the same file would be picked up by two threads. Let's say my code moves the file once I read it. If the other thread then gets to the accept method when the file is no longer there, it will see a lastModified time of 0 and return true. That causes the issue, because the file is NOT there. If it's 0, accept should return false, as the file is not there anymore.
@Bean
public IntegrationFlow flow() {
    ExecutorService exec = Executors.newFixedThreadPool(10);
    return IntegrationFlows.from(Files.inboundAdapter(new File("/tmp/foo")).filter(
                new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "foo")),
            e -> e.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)
                .maxMessagesPerPoll(10)))
        .channel(MessageChannels.executor(exec))
        .handle((p, h) -> {
            try {
                logger.info(p.toString());
                Thread.sleep(10_000);
            }
            catch (InterruptedException e1) {
                Thread.currentThread().interrupt();
            }
            return null;
        })
        .get();
}

and

2018-11-28 11:46:05.196 INFO 57607 --- [pool-1-thread-1] com.example.So53521593Application : /tmp/foo/test1.txt
2018-11-28 11:46:05.197 INFO 57607 --- [pool-1-thread-2] com.example.So53521593Application : /tmp/foo/test2.txt

and with touch test1.txt

2018-11-28 11:48:00.284 INFO 57607 --- [pool-1-thread-3] com.example.So53521593Application : /tmp/foo/test1.txt

EDIT1

Agreed - reproduced with this...

@Bean
public IntegrationFlow flow() {
    ExecutorService exec = Executors.newFixedThreadPool(10);
    return IntegrationFlows.from(Files.inboundAdapter(new File("/tmp/foo")).filter(
                new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "foo")),
            e -> e.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)
                .maxMessagesPerPoll(10)))
        .channel(MessageChannels.executor(exec))
        .<File>handle((p, h) -> {
            try {
                p.delete();
                logger.info(p.toString());
                Thread.sleep(10_000);
            }
            catch (InterruptedException e1) {
                Thread.currentThread().interrupt();
            }
            return null;
        })
        .get();
}

and

2018-11-28 13:22:23.689 INFO 75681 --- [pool-1-thread-1] com.example.So53521593Application : /tmp/foo/test1.txt
2018-11-28 13:22:23.690 INFO 75681 --- [pool-1-thread-2] com.example.So53521593Application : /tmp/foo/test2.txt
2018-11-28 13:22:23.690 INFO 75681 --- [pool-1-thread-3] com.example.So53521593Application : /tmp/foo/test1.txt
2018-11-28 13:22:23.690 INFO 75681 --- [pool-1-thread-4] com.example.So53521593Application : /tmp/foo/test2.txt

But I have to send those files to another channel to be processed. How do I route that from here? Sorry, I don't know what you mean; just put the .channel() before the transformer and remove the executor from the poller. I actually tried that, but I see unexpected behaviour; it looks like it's not thread-safe. Your code? The framework is thread safe. If you want to process files in parallel your code needs to be thread safe. Use a FileSystemPersistentAcceptOnceFileListFilter - it will allow the same file name to pass only if the lastModified time changes. Thanks, I understand this. I have used it like this:

FileSystemPersistentAcceptOnceFileListFilter fileSystemPersistentAcceptOnceFileListFilter =
        new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "");
fileSystemPersistentAcceptOnceFileListFilter.setFlushOnUpdate(true);

I am still seeing the same file being processed twice!!! I don't see how that's possible; it works fine for me; see the edit to my answer. Not for the simple store - it's used when using a shared store such as redis. Good catch - please open a bug report. Reproduced; see my second edit. We were just about to start a new release build - I will create the ticket. INT-4560. Thanks for your help. Thanks for this, I was struggling a lot to do concurrent processing; I wonder why this is not in the documentation anywhere? This is a very common use case... who wants to process sequentially? The "very common use case" is strict ordering. The framework has to take the most conservative approach to avoid unexpected side effects, while providing mechanisms for concurrency.
ArcMap calculate area automatically after edit Is there any way to calculate the area automatically when I edit a polygon? I mean, is there any function I can use on the area field so that it updates itself when I finish editing? Please help, thank you. If you keep your data in a file geodatabase, the area in the SHAPE_Area field is automatically updated/calculated. Is there any way to see the area as square kilometers this way? @AliDağ just change the field display properties: as the number format choose rate with a factor of 1000000 and add a km² suffix
HTML5 Canvas - Can I handle a video with Easeljs into a canvas tag? I would like to put a video into a canvas tag and handle it from there, e.g. color, control playback with scroll etc... But I didn't understand how (and if) it's possible with the Easel lib.

JAVASCRIPT

function init() { // init() is in Body -> onload
    stage = new createjs.Stage("canvas");
    var ctx = stage.canvas.getContext('2d');
    var video = document.getElementById("video");
    createjs.Ticker.addEventListener("tick", function (event) {
        if (video.paused || video.ended) return false;
        drawFrame(video, stage, ctx);
    }); // note: this closing ); is required for the listener registration
} // init

function drawFrame(v, s, c) {
    bitmap = new createjs.Bitmap(v); // draw from the parameter passed in
    bitmap.draw(c);
    s.update();
}

This is the code; it doesn't draw anything and crashes after 5 or 6 seconds. What did you try so far? Please share the code and if possible a fiddle. Not easeljs, but may help: http://html5doctor.com/video-canvas-magic/
Linear Regression (OLS): Confidence Intervals are not being calculated accurately using Statsmodels summary_frame() Incorrect Confidence Intervals I want to calculate the confidence interval of my forecasted values from an OLS model in Python. I found a function in statsmodels that helps you create a dataframe of each forecasted value, the SE of the forecasted value, and the upper and lower bound values of the CI, using get_prediction() and then summary_frame(). Unfortunately my upper and lower CI are not matching the results. Please find attached a screenshot of my code and results Screenshot. Row 1, for example: Forecasted Value - 11.788462, SE - 0.580693, so for a 95% CI the Lower Bound should be = 11.788462 - (1.96 * 0.580693) = 10.65030372 and the Upper Bound should be = 11.788462 + (1.96 * 0.580693) = 12.92662028. But the results in the screenshot are not matching these numbers. I am not sure if I am doing anything wrong. Any help is appreciated. statsmodels uses the t-distribution by default for inference in linear regression models like OLS. Because of the very small sample size and low degrees of freedom, the critical values of the t-distribution differ from those of the normal distribution by an observable magnitude. The following replaces the critical value of 1.96 by the critical values from the t-distribution with df=5. The values match the statsmodels results in the screenshot attached in the question.

from scipy import stats

11.788462 + stats.t.ppf(0.025, 5) * 0.580693
Out[12]: 10.295743121550677

11.788462 + stats.t.isf(0.025, 5) * 0.580693
Out[13]: 13.281180878449325
Does bootstrap popover have a DOM loaded event, or equivalent way to find out? I'm creating a bootstrap popover in javascript on an element, and I need to do some work after the popover DOM has been written. I have the following so far.

// bootstrap popover
element.popover({
    title: scope.title,
    content: content.data,
    placement: "bottom",
    html: true
}).on("show.bs.popover", function () {
    console.log("this is still before the node is in the DOM");
});

So I'm setting title, html content, and placement. However, the html content contains an angular <directive> that I wish to $compile(), and bootstrap doesn't add the node to the DOM until after the click event. What I want to do is, when the node appears on the page, run a function that processes the html inside the popover. Any ideas? Many thanks. Answering my own question, I can do:

.on("shown.bs.popover", function () {
    var id = $(this).attr("aria-describedby");
    var popover = $("#" + id);
});

This gets me the popover html, which I can then run $compile on.
Is there a way to make the WKWebView a firstResponder? The main idea is to reload the WKWebView if no touches or presses are detected (i.e. when the web view remains inactive for some time interval). I tried implementing touchesBegan(_:with:) and touchesEnded(_:with:). I have overridden those functions, but they are not called when a touch is detected. They are called when a TextField is present and touches inside the text field are detected. But why is it not working in the web view? I found that to make the touchesBegan and touchesEnded functions work, I need to make the object a first responder. I used

print(myWebView.canBecomeFirstResponder) // Returns true
myWebView.becomeFirstResponder()
print(myWebView.isFirstResponder) // Returns false

I don't know how to make the web view a first responder. Also, is there any other way to detect presses/touches while in a WKWebView? Can anyone help me? Thank you in advance.
how to visualize entities of code first I have a simple question. I have searched many times but am not getting a satisfactory result. I want to generate a diagram for all the entities (code first) and the relationships between them. I have 19 classes so far. I have seen some articles on reverse engineering code first, but I would like to see a diagram that looks like the one created in the Database First approach. So far, I have installed Entity Framework Power Tools in my Visual Studio 2013, but from there I am only getting two options: 1) Reverse engineer code first 2) Customize reverse engineer template. Thanks in advance! You can use Entity Framework Power Tools' "View Entity Data Model (Read-only)" feature https://msdn.microsoft.com/en-us/data/jj593170.aspx
How do Business Rules fit into or with the Entity Framework generated Entities? Let's say I am using a traditional 3-layer application (UI-BLL-DAL) in a .NET application; where would the business rules be applied in reference to the generated entity class? Would you extend the entity with a partial class and add the rules there, pass the entity up to the BLL, map to a business object and process rules in a separate class, or something entirely different? What has been the common practice thus far? Thank you. Don't put business logic in your entities. Entities exist to map the DB interface to the application and, hence, aren't really even objects. Also, putting business logic in your entities makes them fat and confusing. You'll have some properties which exist for DB mapping. Others which represent runtime concerns. Some methods you can call in an L2E query. Some you can't. It's a mess. Also, it makes your business logic deeply tied up in EF code, which is a bad separation of concerns. We write services for business processes. Each service is constructor-injected with repositories for the data it needs. The business logic is totally separate from the EF mapping concern. It might not even use EF types. For example, you can write code like:

var q = from l in Context.Animals.OfType<Lemur>()
        select new LemurDto
        {
            Id = l.Id,
            IsKing = l.Name.Equals("Julien XIII")
        };
var service = new LemurCountService(q);
return service.Inventory();

So in this case the LemurCountService is totally independent of the EF. Nice, so in your case the service that is injected with the repository is performing the business rules, correct? Also, for something like data binding: would your business objects map to certain properties from the entity and then in turn be bound to, or would you bind directly to the entity? That's right regarding services. No, I don't bind to entities; I bind to presentation models. Perfect, perfect, perfect! That is exactly what I was looking for. Quite similar to creating presentation views mapped from domain objects in the Application Services layer in DDD. Nice post by the way, and thanks for your help.
Ansible Playbook: Exchange variable between playbooks I need an idea of how I can use an output variable of one playbook in another playbook; they are both called using an include statement. Detailed explanation: I have one master playbook and two child playbooks. I want to use an output variable of the child1 playbook in the child2 playbook. I have tried to register it, but no luck.

Main.yml

- hosts: localhost

- name: executing child1 playbook
  include: child1.yaml

- name: executing child2 playbook
  include: child2.yaml

child1.yml

- hosts: localhost
  tasks:
    - name: print to stdout
      command: echo "hello"
      register: test1

child2.yml

- hosts: localhost
  tasks:
    - name: print to stdout
      command: echo '{{ test }}'
      register: test2

I want variable test1 of playbook child1 out so that I can use it in the child2 playbook. I don't see any attempts to use the test1 variable. Use include in your test2 playbook. https://documentation.magnolia-cms.com/display/DOCS60/YAML+inherit+and+include#YAMLinheritandinclude-YAMLinclude
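One way to hand the value across, sketched here rather than taken from the thread, is to promote the registered result to a host fact with set_fact: facts set for a host persist across later plays that target the same host, so a second play on localhost can read them. The variable name child1_output is invented for the illustration.

```yaml
# child1.yml -- register the command result, then promote it to a host fact
- hosts: localhost
  tasks:
    - name: print to stdout
      command: echo "hello"
      register: test1

    - name: keep the result around for later plays
      set_fact:
        child1_output: "{{ test1.stdout }}"

# child2.yml -- the fact set above is still visible for localhost here
- hosts: localhost
  tasks:
    - name: reuse the value from child1
      debug:
        msg: "{{ child1_output }}"
```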
Flask-SocketIO broadcasting to client I read in the Flask-SocketIO documentation (https://flask-socketio.readthedocs.io/en/latest/#broadcasting) that emitting a message to a client can be triggered by an event that originated in the server. So I decided to try that out. The idea is that the client sends the number 15 to the server every second and the server, regardless of what the client is doing, sends the number 12 to the client every two seconds. Server side (.py):

from flask import Flask, jsonify, request, render_template
from flask_socketio import SocketIO, emit

app = Flask(__name__, static_folder='')
io = SocketIO(app, cors_allowed_origins="*")

@io.on('connected')
def connected():
    print("client connected")

@io.on('messageC')
def messageC(data):
    print(data)

def send_messageS():
    io.emit('messageS', 12)

if __name__ == '__main__':
    import _thread, time
    _thread.start_new_thread(lambda: io.run(app), ())
    while True:
        time.sleep(2)
        send_messageS()

Client side (.html):

<html>
<body>
<script type="text/javascript" src="//code.jquery.com/jquery-2.1.3.min.js"></script>
<script type="text/javascript" src="/resources/socket.io.js"></script>
<script type="text/javascript" charset="utf-8">
    $(document).ready(function(){
        var socket = io.connect('http://localhost:5000');
        socket.on('connect', function() {
            socket.emit('connected');
        });
        socket.on('messageS', function(data) {
            console.log(data);
        });
        setInterval(function() {
            socket.emit('messageC', 15);
        }, 1000);
    });
</script>
</body>
</html>

The result is that the number 15 is received by the server every second, as desired, but the number 12 is never received by the client. What am I doing wrong? Are you sure the client connects? Do you get "client connected" in your log? Also check the Developer Console in your browser for errors on the Javascript side. Yes, I do get "client connected" and there is nothing in the browser's console. I just noticed I mistook 12 for 15 in my question; now it's corrected, so please read it again if you can. Number 15 emitted by the client is received by the server. But number 12 emitted by the server is not received by the client. I found a solution. Apparently eventlet does not work with Python threads. So the solution is to "monkey patch the Python standard library so that threading, sockets, etc. are replaced with eventlet friendly versions."

import eventlet
eventlet.monkey_patch()

This did the trick for me. I found this solution here: How to send message from server to client using Flask-Socket IO
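The monkey patch makes the background loop cooperate with eventlet; the emit-on-a-timer structure itself is independent of Flask-SocketIO. A rough stdlib sketch of that structure follows (the helper name start_periodic is invented; in Flask-SocketIO the supported way to spawn such a loop is socketio.start_background_task):

```python
import threading
import time

def start_periodic(emit, interval, stop_event):
    """Call emit() every `interval` seconds on a background thread
    until stop_event is set."""
    def loop():
        # Event.wait doubles as an interruptible sleep: it returns True
        # (ending the loop) as soon as stop_event is set.
        while not stop_event.wait(interval):
            emit()
    worker = threading.Thread(target=loop, daemon=True)
    worker.start()
    return worker

received = []          # stands in for the browser console
stop = threading.Event()
start_periodic(lambda: received.append(12), 0.05, stop)
time.sleep(0.5)        # the "server" keeps emitting on its own meanwhile
stop.set()
assert len(received) >= 3
```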
replace up to nth match With sed I read that you can replace all matches starting with the second: you can combine a number with the g (global) flag. For instance, if you want to leave the first word alone, but change the second, third, etc. to be DELETED instead, use /2g:

example

sed 's/foo/bar/2g' a.txt

However, how could you replace matches up to the second, so that the third, fourth, etc. are not affected? Something like

sed 's/foo/bar/1-2' a.txt

This might work for you (GNU sed):

sed 's/foo/&\n/2;T;h;s//bar/g;G;s/\n.*\n//' file

Replace the nth occurrence of the intended string by itself and a newline. If the substitution fails, bail out; else copy the pattern space (PS) to the hold space (HS). Make the substitution using the replacement string and then reconstitute the original line with the altered line. Also:

sed 's/foo/\n/g;s/\n/foo/3g;s/\n/bar/g' file

where nth + 1 is used instead of the 1 to the nth. Easiest way is:

sed 's/foo/bar/; s/foo/bar/' a.txt

This will effectively replace the first 2 occurrences of foo. It makes two passes per line, always replacing the first occurrence. Since on the second pass the first foo has already been replaced, the second pass 'sees' the original second occurrence as the first. Extending Chirlo's answer...

replace_string='s/foo/bar/'
n=2
replace_string_n=`python -c 'print "'$replace_string';"*'$n`
sed $replace_string_n a.txt
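For comparison, outside of sed the same "only the first n matches" behaviour is a one-liner in most languages; a Python sketch using re.sub's count parameter:

```python
import re

text = "foo foo foo foo"

# sed has no direct "1-2" address for s///, but re.sub's count
# parameter replaces exactly the first n matches, left to right
result = re.sub(r"foo", "bar", text, count=2)
print(result)  # bar bar foo foo
```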
Copying a value (object) from a Dictionary of objects I'm going mad with value vs reference in relation to dictionaries. How come when I do this:

Object obj = ObjDictionary[key];
obj.intproperty++;

C# increments the intproperty of the corresponding object in ObjDictionary as well as obj itself. This suggests to me the operation creates a reference rather than a copy of the object, but I can't find out how to copy the object itself into a new one. This is causing all sorts of havoc for me as I try to create new dictionaries from objects copied from the original dictionary. EDIT: I understand the dictionary does not contain objects, but references to objects. The problem I'm still having is how this line works:

Object obj = ObjDictionary[key];

How does it compile when obj is an Object and ObjDictionary[key] is a reference? Is this something C# does implicitly? What is Object here? Obviously it's not System.Object, since that type doesn't have intproperty. Can you share some more specific example code that's vexing you? The sample you've provided now isn't valid C#, since the compiler doesn't know that intproperty exists on all objects. Sorry, no, it's a custom object with a property of type int. This is simple. When you set obj to the object in the dictionary, it references it. To copy the object, use dictionary[key].copyto or the clone method. The duplicate is helpful, thank you, and confirms what I knew about the Dictionary containing references to objects rather than the objects themselves. How do I use the copyto or clone method? I can't get it to play the game. Also, I still struggle to understand how Object obj = Object obj =ObjDictionary[key] compiles when obj is supposed to be an object, not a reference to one? Careful. What the dictionary contains depends not on the dictionary object itself, but the type of object. Please read up on the difference between class (reference types) and struct (value types). I thought http://stackoverflow.com/questions/129389/how-do-you-do-a-deep-copy-an-object-in-net-c-specifically would be more appropriate for what you are trying to do, but the one found by @PeterDuniho is actually a better match to how the question is asked. @PeterDuniho it contains class types. Thanks @AlexeiLevenkov, I looked at the deep copy solution, but I guess what's throwing me is the Object obj = ObjDictionary[key] line. It doesn't seem to me like it should compile, as ObjDictionary[key] is returning a reference, not an object? Object obj = Object obj =ObjDictionary[key] won't compile under any circumstance. In any case, if you go back and actually understand the difference between reference types and value types, some of this will make more sense. The basic issue is that you are thinking that the variable contains the object itself, but that's true only for value types. For reference types, the variable always contains only a reference to the object. This is true whether the variable is a class field, a local variable, or an element of a collection like a dictionary or list. @PeterDuniho should make that an answer and get some points :) @Chris: I'm happy to answer that different question if the OP decides to post it as an actual different question. I certainly would not want to do something so silly as to post an answer to a question that is a duplicate of one or more other questions. :) @PeterDuniho I appreciate the answer regardless. I had looked at all the related concepts but was still missing your point: For reference types, the variable always contains only a reference to the object. This is true whether the variable is a class field, a local variable, or an element of a collection like a dictionary or list.
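The same reference-versus-copy behaviour shows up in any language where variables hold references to objects; a quick Python sketch (names invented for the illustration) of the shared reference and of the effect of making an explicit copy:

```python
import copy

prices = {"a": {"intproperty": 1}}

obj = prices["a"]            # copies the reference, not the object
obj["intproperty"] += 1
assert prices["a"]["intproperty"] == 2   # the dictionary "sees" the change
assert obj is prices["a"]                # one object, two names

clone = copy.copy(prices["a"])           # shallow copy: a brand-new object
clone["intproperty"] += 1
assert prices["a"]["intproperty"] == 2   # original unaffected
assert clone is not prices["a"]
```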
Stripe with Django - make form clean() method return value that isn't a form field I am integrating Stripe payment processing into my Django app, and I can't figure out the 'correct' way to verify the customer's card information and insert a row into my Users table that contains the user's Stripe Customer ID. Ideally, I'd love to do something along the lines of the following, in which my CheckoutForm verifies card details and raises a form ValidationError if they are incorrect. However, using this solution, I can't figure out a way to get the customer.id that's generated out of the clean() function.

forms.py

class CheckoutForm(forms.Form):
    email = forms.EmailField(label='E-mail address', max_length=128,
                             widget=forms.EmailInput(attrs={'class': 'form-control'}))
    stripe_token = forms.CharField(label='Stripe token', widget=forms.HiddenInput)

    def clean(self):
        cleaned_data = super().clean()
        stripe_token = cleaned_data.get('stripe_token')
        email = cleaned_data.get('email')
        try:
            customer = stripe.Customer.create(
                email=email,
                source=stripe_token,
            )
            # I can now get a customer.id from this 'customer' variable,
            # which I want to insert into my database
        except:
            raise forms.ValidationError("It looks like your card details are incorrect!")

views.py

# If the form is valid...
if form.is_valid():
    # Create a new user
    user = get_user_model().objects.create_user(
        email=form.cleaned_data['email'],
        stripe_customer_id=<<<I want the customer.id generated in my form's clean() method to go here>>>)
    user.save()

The only other solution I can think of is to run the stripe.Customer.create() function in views.py after the form is validated. That'll work, but it doesn't seem like the 'right' way to code things, since as I understand it all validation of form fields is supposed to be done within forms.py. What's the proper Django coding practice in this situation?
Should I just move my card validation code to views.py, or is there a cleaner way to keep the card validation code within forms.py and get the customer.id out of it? I don't think that proper Django coding practice is any different from Python coding practice in this situation. Since Django form is just a class, you can define property for customer. Something like this: class CheckoutForm(forms.Form): email = forms.EmailField(label='E-mail address', max_length=128, widget=forms.EmailInput(attrs={'class': 'form-control'})) stripe_token = forms.CharField(label='Stripe token', widget=forms.HiddenInput) _customer = None def clean(self): cleaned_data = super().clean() stripe_token = cleaned_data.get('stripe_token') email = cleaned_data.get('email') try: self.customer = stripe.Customer.create( email=email, source=stripe_token, ) except: raise forms.ValidationError("It looks like your card details are incorrect!") @property def customer(self): return self._customer @customer.setter def customer(self, value): self._customer = value Then it the views.py after form.is_valid(), you'd call this property. if form.is_valid(): customer = form.customer Or maybe @property is an overkill and you could simply do it like this: class CheckoutForm(forms.Form): email = forms.EmailField(label='E-mail address', max_length=128, widget=forms.EmailInput(attrs={'class': 'form-control'})) stripe_token = forms.CharField(label='Stripe token', widget=forms.HiddenInput) customer = None def clean(self): cleaned_data = super().clean() stripe_token = cleaned_data.get('stripe_token') email = cleaned_data.get('email') try: self.customer = stripe.Customer.create( email=email, source=stripe_token, ) except: raise forms.ValidationError("It looks like your card details are incorrect!") ... and still form.customer in views.py. I guess both should work, but I haven't tested the code. Got it — makes total sense, Borut! Thanks for the response, I've accepted this as the correct answer. 
One quick follow-up: would you consider it 'proper django coding practice' to include all of the form parsing logic in the clean() method? E.g., should I maybe move all of the 'user=' code block from views.py into the clean() method of forms.py? It's a good practice to include everything related to form validation inside clean(). You shouldn't validate forms (or data posted by users) in views. About including user validation in clean(); I have checked a few of my projects and I haven't found many cases where I did that or made sense to me. Wonderful. Thank you for your help!
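Stripped of Django and Stripe, the accepted pattern is simply "store the side product on the form instance during clean() and read it back after is_valid()". A framework-free Python sketch with made-up names (the fake customer id stands in for the result of stripe.Customer.create, and ValueError stands in for ValidationError):

```python
class CheckoutForm:
    """Minimal stand-in for a Django form: clean() stores a side
    product on the instance so the caller can read it after validation."""

    def __init__(self, data):
        self.data = data
        self.customer = None
        self.errors = []

    def is_valid(self):
        try:
            self.clean()
        except ValueError as exc:
            self.errors.append(str(exc))
        return not self.errors

    def clean(self):
        token = self.data.get("stripe_token")
        if not token:
            raise ValueError("It looks like your card details are incorrect!")
        # hypothetical stand-in for stripe.Customer.create(...)
        self.customer = {"id": "cus_" + token}

form = CheckoutForm({"stripe_token": "tok123"})
assert form.is_valid()
assert form.customer["id"] == "cus_tok123"   # read the side product back
```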
rounding cells up when the number hits X.60 I'm making something in Excel which calculates the hours I work a week/month. When a number gets to 30.60 (two shifts which finish on the half hour) it calculates it as 30.60 * wage = not the right pay. So far I have =ROUND(SUM(C6:I6),0), which rounds the number and works fine until I have another day where I work until the half hour; then it will just show 16 or so. As you can see here, it calculates it fine until I work 7.30 hours on a Wednesday; the total shows 23.00 instead of 23.30. How can this be done? Thank you. =ROUND(SUM(C6:I6), 2) Your problem is with Excel's understanding of your "hours". When you write 7.30 you mean 7 hours 30 minutes = 7.5 hours. But Excel understands that as 7 hours and 30/100 of an hour = 18 minutes. The easiest solution would be to use 7.5 for 7 hours 30 minutes. Thank you for your quick reply, changing that now. I would suggest that you change the cell format of your cells to a hh:mm format. You can then use something along the lines of <total time>*<rate>*24. Post the solution as an answer so it can be off the unanswered question list. Your problem is with Excel's understanding of your "hours". When you write 7.30 you mean 7 hours 30 minutes = 7.5 hours. But Excel understands that as 7 hours and 30/100 of an hour = 18 minutes. The easiest solution would be to use 7.5 for 7 hours 30 minutes. (For the sake of checking the question off the unanswered list I copied my comment.) Or, use 7:30, which Excel will treat both as a time, and as 0.3125 (i.e. 7.5 / 24) - so remember to multiply by 24 to calculate hours.
You can enforce this by setting the "Data Validation" to "Time" with appropriate "Start time" and "End time" values. If you don't want to use 7.5 (seven and a half hours) or 7:30 (7 hours, 30 minutes - but remember to multiply this by 24, since Excel stores this as the fraction of a day, 0.3125) then you can use INT and MOD: =INT(C6)+(MOD(C6,1)/0.6) The first part, INT(C6), will give you the integer part (i.e. whole hours), which we don't want to scale/skew. The second part has 2 stages. First, MOD(C6,1) will give us the decimal part of the number (i.e. 7.3 will become 0.3), and the second stage is to divide by 0.6 to convert from "fake minutes" to "fraction of a real hour". Finally, since you want to apply the formula to an array of cells, you will need to swap from SUM to SUMPRODUCT: =SUMPRODUCT(INT(C6:I6)+(MOD(C6:I6,1)/0.6)) But, overall, the best option is to use 7:30 and set Data Validation to only allow actual Time values in that field. {EDIT} Of course, this will give your output with 0.5 for 30 minutes. If you want to reverse back to 0.3 for 30 minutes (although I can scarcely fathom why) then you need to run the same calculation in reverse: =INT(SUMPRODUCT(INT(C6:I6)+(MOD(C6:I6,1)/0.6))) + 0.6*MOD(SUMPRODUCT(INT(C6:I6)+(MOD(C6:I6,1)/0.6)),1)
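The INT + MOD/0.6 trick translates directly to other languages, since mm/60 equals (0.mm)/0.6; a Python sketch of the same "h.mm" pseudo-decimal to real-hours conversion (function name invented for the illustration):

```python
def pseudo_to_hours(x):
    """Convert an 'h.mm' pseudo-decimal (7.30 meaning 7h30m)
    into real decimal hours (7.5)."""
    whole = int(x)                       # the INT(C6) part: whole hours
    minutes = round((x - whole) * 100)   # the MOD(C6,1) part, as minutes;
                                         # round() absorbs float noise
    return whole + minutes / 60          # same as dividing 0.mm by 0.6

assert pseudo_to_hours(7.30) == 7.5
assert pseudo_to_hours(16.45) == 16.75
```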
translation invariance of expectation value of hit counting variable for Lévy process Let $(X_t)_{t \in [0, \infty)}$ be a $\mathbb{R}$-valued Markov process (in my question I'm primarily interested in dealing with Lévy processes), $s, a, u > 0$, and $I(a) := \{[k \cdot a, (k+1) \cdot a] \ : \ k \in \mathbb{Z} \}$ the family of $a$-integral tiles covering $\mathbb{R}$. Let $M_u(a,s)$ be the random variable counting the number of tiles $[k \cdot a, (k+1) \cdot a]$ in $I(a)$ hit by $(X_t)$ at some time $t \in [u,u+s]$. The notion is introduced in the paper "Hausdorff Dimension Theorems for Self-Similar Markov Processes" by Luqin Liu and Yimin Xiao (available online) in Lemma 3.1 for Markov processes. My question is whether, for $(X_t)_{t \in [0, \infty)}$ a $\mathbb{R}$-valued non-deterministic(!) Lévy process, the expectation value of $M_u(a,s)$ is "translation invariant" in the sense that $$\mathbb{E}[M_u(a,s)]= \mathbb{E}[M_0(a,s)],$$ i.e. the expected number of tiles $[k \cdot a, (k+1) \cdot a]$ in $I(a)$ hit by $(X_t)$ at some time $t \in [u,u+s]$ is the same as the expected number hit by $(X_t)$ at some time $t \in [0,s]$. I asked an identical question on MSE. Do you have a response to the answer below? No, this is not true in general. For instance, if $(X_t)$ is a Poisson process of intensity (say) $\lambda=1$, then $$M_u(a,s)=\lfloor X_{u+s}/a\rfloor - \lceil X_u/a\rceil + 2.$$ So, $$EM_{1/2}(2,2)=2.593\ldots\ne2.534\ldots=EM_1(2,2).\quad\Box$$ More generally, here $$EM_u(2,s)=\frac{1}{4} \left(e^{-2 (s+u)}+2 s+e^{-2 u}+6\right),$$ which is strictly decreasing in $u$.
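The closed form quoted in the answer is easy to sanity-check numerically; this stdlib snippet reproduces the two quoted values 2.593... and 2.534... and the claimed monotonicity in u (function name invented for the check):

```python
import math

def expected_hits(u, s):
    # closed form quoted in the answer for E[M_u(2, s)] in the
    # Poisson(lambda = 1) counter-example
    return (math.exp(-2 * (s + u)) + 2 * s + math.exp(-2 * u) + 6) / 4

# reproduces EM_{1/2}(2,2) = 2.593... and EM_1(2,2) = 2.534...
assert abs(expected_hits(0.5, 2) - 2.593) < 1e-3
assert abs(expected_hits(1.0, 2) - 2.534) < 1e-3
assert expected_hits(0.5, 2) > expected_hits(1.0, 2)  # decreasing in u
```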
Android SDK makes my MacBook Pro shut down When I run Android Studio on my Mac it opens perfectly. After Gradle builds, it shows an "indexing..." message and my Mac freezes and shuts down. I tried installing IntelliJ and tried running it to create a project with the Android SDK, but I see the same problem: the MacBook shuts down again after showing the same "indexing..." message at the bottom. To me it looks like a problem with your Mac. I would recommend you uninstall Android Studio completely, delete all the Android Studio folders, and reinstall from scratch. Looks like hardware disk or memory issues. It could be a hard drive issue.
Why does my Python function work in the console, but not when called within code? I'm trying to learn Python by working through the problems on the Project Euler website. I know exactly what I want my code to do, and my method works on paper, but I can't make the code work. GitHub link: https://github.com/albyr/euler-python/blob/master/euler3.py I have created two functions, one which works out the factors of the target number, and one which checks whether a given number is prime.

# Function that finds all the factors of a given number
def findfactors(n):
    # for i in range(1,int(sqrt(n)+1)):
    for i in range(1,n+1):
        if n/i == int(n/i):
            factors.append(i)

# Function that checks if a number is prime
def checkprime(n):
    # Trial division
    for i in range(2,int(sqrt(n)+1)):
        if n/i == int(n/i):
            # i divides n with no remainder, so n is not prime
            isprime = False
            break
        else:
            isprime = True
    if isprime == True:
        return True
    elif isprime == False:
        return False

I'm sure that to experts that code looks horrible. But it works if I use the Python shell:

>>> checkprime(9)
False
>>> checkprime(79)
True
>>> checkprime(factors[3])
True

But when I run the program with F5 I get:

Traceback (most recent call last):
  File "/home/alby/euler-python/euler3.py", line 45, in <module>
    checkprime(factors[i])
  File "/home/alby/euler-python/euler3.py", line 32, in checkprime
    if isprime == True:
UnboundLocalError: local variable 'isprime' referenced before assignment

If I call the checkprime function from within the program with a hardcoded number (e.g. checkprime(77)) I get no output at all. I'm certain that this is something basic about the way that Python works that I don't understand, but I cannot for the life of me work out what. Any suggestions? You get no output because you're not printing anything - unlike the console, top-level function calls do not automatically print their results when run normally.
Well the error is quite obvious: If the loop is never entered you're reading a non initialised variable. Generally that's a pretty strange way to write checkprime to begin with, just return true if you get through the loop and false in the loop. In your Github code, we can see that you're trying to call checkprime(1) (on the first iteration through your last loop). # Check each factor to see if it is prime or compound for i in range(0,len(factors)): print (factors[i]) # Why can't I call checkprime here, like this? It works in the console. checkprime(factors[i]) But look at your code: def checkprime(n): # Trial division for i in range(2,int(sqrt(n)+1)): if n/i == int(n/i): # Number gives a remainder upon division and therefore is not prime isprime = False break else: isprime = True If n = 1, then range(2, int(sqrt(1)+1)) is range(2,2) which is empty... so isprime never gets set, because the loop body never gets run. Keep in mind that the arguments to range() are a half-open interval - range(x,y) is "integers starting at x and ending before y". Thus range(2,3) = [2] and range(2,2) = []. Another issue here is that findfactors() is returning 1 as the first factor - this is probably not what you want: def findfactors(n): # for i in range(1,int(sqrt(n)+1)): for i in range(1,n+1): For prime factorization checking, you probably want to start at 2, not 1 (since everything is divisible by 1). Also, this code is redundant: if isprime == True: return True elif isprime == False: return False You can really just write this as... return isprime Or you can go one step better and never use isprime in the first place - just replace isprime = True with return True and isprime = False with return False. Finally, a shorthand for int(n/i) is n // i - Python's // operator does integer division. That's an amazing answer, Amber. Thank you. In terms of no output printing, simply use print(checkprime(77)) instead when running from F5 and you should get your output. 
When running from a call, python prints nothing (or at least, only prints the last command) by default.
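Folding the answer's suggestions together (return directly from the loop instead of tracking isprime, and guard n < 2 so that checkprime(1) is defined) gives a version with no variable that can be left unassigned; a sketch, not the OP's original code:

```python
import math

def checkprime(n):
    """Trial division, returning straight from the loop as the answer
    suggests; n < 2 is handled up front so checkprime(1) is defined."""
    if n < 2:
        return False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:       # i divides n exactly, so n is composite
            return False
    return True              # no divisor found: n is prime

assert checkprime(79) and not checkprime(9) and not checkprime(1)
```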
Some xml files in layout folder are small, some are normal I created xml files for different screen sizes. I have layout, layout-large, layout-normal and layout-xlarge folders. In the layout folder, some of the xml files are shown as a small screen in the visual editor. Here is my manifest code:

<supports-screens
    android:anyDensity="true"
    android:largeScreens="true"
    android:normalScreens="true"
    android:resizeable="true"
    android:smallScreens="true"
    android:xlargeScreens="true" />

<compatible-screens>
    <!-- all normal size screens -->
    <screen android:screenDensity="ldpi" android:screenSize="normal" />
    <screen android:screenDensity="mdpi" android:screenSize="normal" />
    <screen android:screenDensity="hdpi" android:screenSize="normal" />
    <screen android:screenDensity="xhdpi" android:screenSize="normal" />
    <!-- large screens -->
    <screen android:screenDensity="hdpi" android:screenSize="large" />
    <screen android:screenDensity="xhdpi" android:screenSize="large" />
    <!-- xlarge screens -->
    <screen android:screenDensity="hdpi" android:screenSize="large" />
    <screen android:screenDensity="xhdpi" android:screenSize="xlarge" />
</compatible-screens>

Why are some xml files shown as small screens? One of the xml files appears as a narrow and long screen. Is there a way to define the dimensions of an xml file? Thanks
"Triple Join"?? Can an INNER-JOIN have have an add'l query after 'WHERE' to exclude rows from a 3rd table? Ok this (truncated query) runs fine. Got some expert advice a month ago to fix it. SELECT * FROM artWork WHERE art_id in ( SELECT art_id FROM artWork AS a INNER JOIN userPrefs AS u ON (( ((u.media_oil='1' AND a.media_oil='1') OR (u.media_acrylic='1' AND a.media_acrylic='1') OR (u.media_wc='1' AND a.media_wc='1') OR (u.media_pastel='1' AND a.media_pastel='1')) etc, etc........................................ WHERE a.artist_id NOT EXISTS ( SELECT * FROM removeList AS r WHERE r.artist_id = a.artist_id AND r.user_id ='$user_id') AND a.make_avail='1' AND a.cur_select='1' AND u.user_id='$user_id' AND ((u.pref_painting='1' AND a.pref_painting='1') OR (u.pref_photo='1' AND a.pref_photo='1') OR (u.pref_paper='1' AND a.pref_paper='1') OR (u.pref_print='1' AND a.pref_print='1') OR (u.pref_draw='1' AND a.pref_draw='1') OR (u.pref_sculp='1' AND a.pref_sculp='1') OR (u.pref_install='1' AND a.pref_install='1') OR (u.pref_vid='1' AND a.pref_vid='1') OR (u.pref_public='1' AND a.pref_public='1') OR (u.pref_indef='1' AND a.pref_indef='1')) ) ORDER BY date_submit DESC But now I need to exclude certain rows that may be in another 2 column (user_id & artist_id) child table: 'removeList'. So I am trying without success to what amounts to a "triple join" (look for the code around 'NOT EXISTS'): SELECT * FROM artWork WHERE art_id in ( SELECT art_id FROM artWork AS a INNER JOIN userPrefs AS u ON ( (((u.media_oil='1' AND a.media_oil='1') OR (u.media_acrylic='1' AND a.media_acrylic='1') OR (u.media_wc='1' AND a.media_wc='1') OR (u.media_pastel='1' AND a.media_pastel='1')) etc, etc........................................ 
    WHERE a.artist_id NOT EXISTS (
        SELECT * FROM removeList AS r
        WHERE r.artist_id = a.artist_id AND r.user_id ='$user_id'
    )
    AND a.make_avail='1' AND a.cur_select='1' AND u.user_id='$user_id'
    AND ((u.pref_painting='1' AND a.pref_painting='1') OR
         (u.pref_photo='1' AND a.pref_photo='1') OR
         (u.pref_paper='1' AND a.pref_paper='1') OR
         (u.pref_print='1' AND a.pref_print='1') OR
         (u.pref_draw='1' AND a.pref_draw='1') OR
         (u.pref_sculp='1' AND a.pref_sculp='1') OR
         (u.pref_install='1' AND a.pref_install='1') OR
         (u.pref_vid='1' AND a.pref_vid='1') OR
         (u.pref_public='1' AND a.pref_public='1') OR
         (u.pref_indef='1' AND a.pref_indef='1'))
) ORDER BY date_submit DESC

Am I reaching too far here? Is there a better approach I am overlooking? Thanks to all.

NOT EXISTS shouldn't have a column name before it. Remove the a.artist_id from before the NOT EXISTS. So the relevant line would say

WHERE NOT EXISTS (
    SELECT * FROM removeList AS r
    WHERE r.artist_id = a.artist_id AND r.user_id ='$user_id'
)

Not sure where you're using this query, but it's also recommended to parameterize your queries to prevent SQL injections. Hope this helps!

Thanks! Worked on the first shot. Also, FYI, not shown is that the entire query is saved to a variable and used to pull rows in standard fashion to echo in a dynamic list. This is the second time you've made my code behave. Really appreciate it, ehudokai.

I know this is not an answer, but I could not place a table within the comments so I am putting it here. I feel like extra tables to hold your media and prefs are needed here to greatly simplify things in your database.

media_table
    id, user_id, media_type

prefs_table
    id, user_id, pref_type

Then fields in your art table that correlate to a media type and a pref type:

art_table
    media_type, art_type

This way you could easily use a simple JOIN to find correlating records instead of those hundreds of comparisons.
SELECT art.id
FROM art
JOIN prefs_table ON art.pref_type = prefs_table.pref_type
JOIN users ON prefs_table.user_id = users.id
WHERE users.id = whatever

This is a simplified example that would pull up all pieces of art that matched any one of the user's preferences. You could of course easily incorporate media types as well into the same query in the same fashion.

You need to do some serious work on your schema, along the lines suggested by dqhendricks. You have 3 tables:

ArtWork (Art_ID, Artist_ID, ...characteristics...)
UserPrefs (User_ID, ...characteristics...)
RemoveList (Artist_ID, User_ID)

You want to list art works which match the user preferences of a specific user, but do not want to list any art works where the artist dislikes the user or the user dislikes the artist (or both). You should, therefore, be able to do something like:

SELECT A.*
FROM ArtWork AS A
JOIN UserPrefs AS U
    ON (U.User_ID = ? AND (...ghastly OR'd join conditions...))
WHERE A.Artist_ID NOT IN (SELECT R.Artist_ID FROM RemoveList AS R WHERE R.User_ID = ?)

Note that you can move some of the join conditions into the WHERE clause, and make various other changes, but the basic structure of the query will be a JOIN of ArtWork and UserPrefs and a WHERE clause with the NOT IN clause (which you could write as a NOT EXISTS clause, but I think the NOT IN formulation is easier to read).
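The exclusion pattern from these answers is easy to try on a toy schema. A minimal sketch using Python's built-in sqlite3 (the sample rows and ID values are invented for illustration, not taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE artWork    (art_id INTEGER, artist_id INTEGER);
CREATE TABLE removeList (artist_id INTEGER, user_id INTEGER);
INSERT INTO artWork    VALUES (1, 10), (2, 20), (3, 30);
INSERT INTO removeList VALUES (20, 99);  -- user 99 has removed artist 20
""")

# NOT IN: keep only works whose artist is not on user 99's remove list.
rows = conn.execute("""
    SELECT a.art_id FROM artWork AS a
    WHERE a.artist_id NOT IN (
        SELECT r.artist_id FROM removeList AS r WHERE r.user_id = ?
    )
    ORDER BY a.art_id
""", (99,)).fetchall()
print(rows)  # [(1,), (3,)]
```

Artwork 2 (by artist 20) is excluded; the same result can be written with a correlated NOT EXISTS subquery, as in ehudokai's corrected WHERE clause.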
Working with Dash Cytoscape from data frame

Let us say I have the following table df:

source target Weight
A      B      10
A      C      8
C      F      8
B      F      6
F      D      6
B      E      4

I managed to plot it using the networkx library, and I would like to also give it a try using the Dash Cytoscape component. I have tried the following, but have not succeeded yet.

app.layout = html.Div([
    html.Div([
        cyto.Cytoscape(
            id='org-chart',
            autoungrabify=True,
            minZoom=0.2,
            maxZoom=1,
            layout={'name': 'breadthfirst'},
            style={'width': '100%', 'height': '500px'},
            elements=[
                # Node elements
                {'data': {'id': x, 'label': x}} for x in df.name
            ] + [
                # Edge elements
                {'data': {'source': df['source'], 'target': df['target']}},
            ]
        )
    ], className='six columns'),
    html.Div([
        html.Div(id='empty-div', children='')
    ], className='one column'),
], className='row')

Any help is appreciated on this.

The problem is that you tried to define your edges using the entire data frame columns instead of iterating over them (as you did for the nodes). Have you tried the code below?

edges = [{'data': {'source': x, 'target': y}} for x, y in zip(df.source, df.target)]
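The answer's zip comprehension generalises to building the whole elements list. A dependency-free sketch (plain lists stand in for the data frame columns so it runs without pandas, and the Weight column is ignored):

```python
# Edge data from the question's table, as plain parallel lists.
sources = ["A", "A", "C", "B", "F", "B"]
targets = ["B", "C", "F", "F", "D", "E"]

# One node element per unique name, in first-seen order.
names = list(dict.fromkeys(sources + targets))
nodes = [{"data": {"id": n, "label": n}} for n in names]

# One edge element per row, pairing the two columns with zip().
edges = [{"data": {"source": s, "target": t}} for s, t in zip(sources, targets)]

# This combined list is what would be passed to cyto.Cytoscape(elements=...).
elements = nodes + edges
print(len(nodes), len(edges))  # 6 6
```

Deduplicating the node names also sidesteps the df.name issue in the question's node comprehension: Cytoscape wants one node element per unique id.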
Patroni Postgres Cluster, what does 'TL' mean in the output of 'patronictl list'

Executing

patronictl list

produces

+ Cluster: psql-core03-uat-kong<PHONE_NUMBER>458676291) ------+----+-----------+
| Member                 | Host          | Role   | State   | TL | Lag in MB |
+------------------------+---------------+--------+---------+----+-----------+
| psql-podname-blahbla-0 | ##.##.###.##  | Leader | running | 46 |           |
| psql-podname-blahbla-1 | ##.##.###.##  |        | running | 21 |     14288 |
| psql-podname-blahbla-2 | ##.##.###.##  |        | running | 46 |         0 |
+------------------------+---------------+--------+---------+----+-----------+

Does anyone know what the column 'TL' stands for or means? Scoured the manuals but no joy. Thanks

A quick search of the source indicates that it means timeline. PostgreSQL's timeline ID increments each time a new primary is promoted, so a member reporting an older TL (like 21 above) is still replaying history from before the latest promotion.
How to fit a webpage to different screen sizes without disturbing its original width?

I was flirting with the idea of making a bookmark app which might let users bookmark only a section of a page, like in this image. They will click twice on the page to mark two horizontal lines. The area between these lines will be highlighted to show the selected portion of the page. I will be saving the y-coordinates of the two lines. That's it. But what if:

1. The user opens the bookmark on a bigger or smaller screen afterwards. The webpage will re-size to fit the new width and some of the elements will definitely change their position. Then the coordinates will mismatch and won't show the original selected area.
2. The page content at the url changes later on.
3. The webpage is responsive. Then the mobile view will be completely different. It might not contain some of the elements from the original normal monitor screen.

A solution for problem 2 can be that we save a permanent copy of the page and then set coordinates on it. I have figured out this part of saving the webpage to a single html file which looks very much like the original. And for points 1 and 3, I am wondering if it can be possible to somehow fix the width of the webpage to show against the screen width, i.e. just stretch or shrink the original page width according to the screen width. Is there a way to do this? I did some experimentation with the viewport meta tag but it didn't work. Do you guys have any idea? Anything else, according to you, that can go wrong? Any extra add-on? I just liked the idea of bookmarking only a section of a long page for reference.

I think the best you can do is bookmark a tag.

<p class="markable" onclick="bookmark()">Some Content.....

That will only bookmark just a single point on the page. I would like to select the section between two points.
How do I find an inverse for this injective multivariate function?

I have come up with an injective multivariate function that puts out a unique value for every configuration of four positive natural numbers, provided that $\omega\ge\psi\ge\chi\ge\theta\ge1$:

$f(\omega,\psi,\chi,\theta)= \frac{\omega^4+2\omega^3-\omega^2-2\omega}{24}+\frac{\psi^3-\psi}{6}+\frac{\chi^2-\chi}{2}+\theta$

For example, f(4,2,2,1) = 18.

If $f$ is really injective with your restriction $\omega\ge\psi\ge\chi\ge\theta$, then it means that $f(\omega+1,1,1,1)>f(\omega,\omega,\omega,\omega)$, else we have a contradiction. So it's simple to find the value of $\omega$: find the largest integer $\omega$ such that $f(\omega,1,1,1)$ doesn't exceed the value of the function to be inverted. You can then find the next values in the chain ($\psi$, $\chi$, $\theta$) using a similar technique.

Sorry, I should've mentioned that the four numbers can only be natural numbers; we can't put 0 in any of them, so the function is truly injective with that restriction.

@leaven this doesn't change the argument. I changed the answer anyway for clarity.

Thanks. I understand it.
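The answer's greedy strategy can be sketched directly in code. A hedged Python version (function and variable names are mine; each loop finds the largest value whose term, plus the minimal remaining contribution of 1 for the last variable, still fits):

```python
def f(w, p, c, t):
    # The question's f, with omega, psi, chi, theta written as w, p, c, t.
    return (w**4 + 2*w**3 - w**2 - 2*w) // 24 + (p**3 - p) // 6 + (c**2 - c) // 2 + t

def invert(v):
    # Largest w with f(w, 1, 1, 1) <= v.
    w = 1
    while f(w + 1, 1, 1, 1) <= v:
        w += 1
    rem = v - (w**4 + 2*w**3 - w**2 - 2*w) // 24
    # Largest p whose cubic term still leaves at least 1 for theta.
    p = 1
    while ((p + 1)**3 - (p + 1)) // 6 + 1 <= rem:
        p += 1
    rem -= (p**3 - p) // 6
    # Same idea for chi's quadratic term.
    c = 1
    while ((c + 1)**2 - (c + 1)) // 2 + 1 <= rem:
        c += 1
    rem -= (c**2 - c) // 2
    return (w, p, c, rem)  # whatever is left is theta

print(f(4, 2, 2, 1))  # 18
print(invert(18))     # (4, 2, 2, 1)
```

This mirrors the combinatorial-number-system flavour of the formula: each term is peeled off greedily from the largest variable down.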
Jupyter Notebooks stuck at In[*]:

I reinstalled Anaconda with base Python 3.7. In my base environment, my Jupyter notebooks will not work. I receive the following error in my prompt as I'm trying to run the notebook:

[I 19:07:37.057 NotebookApp] Starting buffering for 39b758d6-fcaa-4f2d-b10f-235ad7b39292:2380a18e272242638ff4fe4d9e628719
[I 19:07:38.115 NotebookApp] Adapting to protocol v5.1 for kernel 39b758d6-fcaa-4f2d-b10f-235ad7b39292
[I 19:07:38.121 NotebookApp] Restoring connection for 39b758d6-fcaa-4f2d-b10f-235ad7b39292:2380a18e272242638ff4fe4d9e628719

This occurs with something as simple as print('World') or any code. The errors just continue to loop. I've searched GitHub and Stack Overflow and have yet to find a real solution. Any help with this?? Thanks! FYI: I'm using 64-bit Windows 10.

As an update, I have reinstalled different Python versions, checked my port, used old versions, and have had several software engineers look at my computer. Any help would be greatly appreciated... One thing I notice in 'About' under the Notebook is that the kernel status is always 'Waiting for kernel to be available..'. In the Anaconda prompt, port 8888 is usually unavailable, so it switches to the next and seems to connect okay. After some time, I also usually get a WebSocket timeout (after ~9000 ms).

Solved on my laptop by uninstalling antivirus. My antivirus was actually causing Jupyter to not be able to use my system-level kernel. I believe this is due to Jupyter's browser-based nature. (FYI: I was using Ad-Aware Free.)
How can I fix error "InvocableMethod methods must be in version 33 or higher"?

I tried to get a value from a flow. This is my code, which causes the error "InvocableMethod methods must be in version 33 or higher":

@InvocableMethod
public static List<List<Opportunity>> getOppIds(List<String> Ids) {
    return new List<List<Opportunity>>();
}

Please check the API version of the given Apex class. It must be 33 or higher, as the error description says.

Thanks, now it works.

There is a ClassName.cls-meta.xml that accompanies your class file when you are using SFDX or the Metadata API to manage your code base. This includes a specification of the API version used for that class. The equivalent information is also available in the Salesforce Setup UI alongside the class's code within Custom Code > Apex Classes. When editing a class through this UI there is a "Version Settings" tab to adjust the value.
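For reference, the API version lives in the class's companion meta file. A typical ClassName.cls-meta.xml looks roughly like this (the version number is illustrative; anything 33.0 or above clears this particular error):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ApexClass xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>48.0</apiVersion>
    <status>Active</status>
</ApexClass>
```

After bumping apiVersion, redeploy the class so the new setting takes effect.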
How to encourage players to not lessen their own gaming experience with mods and cheats?

In many games (often single-player computer games) there's a built-in way to alter the gaming experience; either by modding (e.g. Minecraft, most games on Steam Workshop and Nexus Mods) or by some in-game way to cheat (console commands or retro-style cheat codes). And even if there is no native support, life will find a way. This can be an amazing asset to any player for several reasons:

It can enhance your gaming experience by adding new content (levels, quests, items, etc.).
It can remove / change features you dislike about a game (perhaps you dislike the 10 minute long game intro where you cannot move. I'm looking at you, Skyrim!)
It helps you have some stupid fun (what happens if every force applied to a rag-doll is multiplied by x10?)

These are all great things, but from experience and observation it's very clear that these things can have the complete opposite effect:

New content can be utterly broken and unbalanced (instant-kill weapon).
Removing features can lessen gameplay (I don't like questing; let's remove all quests).
Cheats can remove the need to actually do anything (I need to gather 1000 gold to buy this item? give_gold 1000).

Of course, what a player finds fun and what might ruin their experience is completely subjective, which is why I'm not wanting to remove these features, just discourage extensive use of them. Because the problem here is not that the player is able to do these things; the problem is that the player is unable to see that it might affect their gaming experience in a negative fashion, and when it does, they are often unable to localize the problem and its source, leading to them getting bored of the game because of their own actions.

So my question is this: How do I give players all of the benefits of altering my game but also make them understand that it might lessen their gaming experience, so that they (themselves) can regulate their use of it?
Again note that I'm not trying to completely prevent it; only discourage it.

Is it really impossible for extensive use of mods to be fun? Personally, I wouldn't want you to decide for me. I hate systems I encounter that deliberately and artificially restrict what I can do with them, game or not.

@Anko absolutely not, I adore mods and cheats and I use them extensively wherever I can. And as for deciding, I don't wish to decide for the users - that's why I wrote "...so that they (themselves) can regulate their use of it" instead of simply excluding it.

A very common method is to disable achievements when gameplay-affecting mods are enabled or cheats are used. Players want those achievements. But when they are using overpowered mods or cheats, they become meaningless anyway, so disabling them is completely reasonable. This encourages the player to try everything in the game in vanilla mode at least once.

The nice thing about this method is that it only affects players who have not yet unlocked all achievements. The moment where all achievements are unlocked is also the moment where the player runs out of stuff to do. So this is also the ideal moment to bring out the mods and cheats to refresh the game experience.

To still allow the player some convenience mods which improve the game experience without making the game notably easier or harder, you could mark certain functionality in your modding API as "achievement-safe" and still allow achievements with mods which only use the achievement-safe API features.
What is status_of_proc, and how do I call it?

In the init script of nginx in Debian 7 (Wheezy) I read the following excerpt:

status)
    status_of_proc -p /var/run/$NAME.pid "$DAEMON" nginx && exit 0 || exit $?
    ;;

This code runs just fine, and sudo service nginx status outputs

[ ok ] nginx is running.

Yet status_of_proc is not defined in bash, neither in dash:

$ type status_of_proc
status_of_proc: not found

Though if I inserted the same check into the nginx script I got the following result:

status_of_proc is a shell function

And running bash on the init file itself provided further explanation:

status_of_proc is a function
status_of_proc ()
{
    local pidfile daemon name status OPTIND;
    pidfile=;
    OPTIND=1;
    while getopts p: opt; do
        case "$opt" in
            p)
                pidfile="$OPTARG"
            ;;
        esac;
    done;
    shift $(($OPTIND - 1));
    if [ -n "$pidfile" ]; then
        pidfile="-p $pidfile";
    fi;
    daemon="$1";
    name="$2";
    status="0";
    pidofproc $pidfile $daemon > /dev/null || status="$?";
    if [ "$status" = 0 ]; then
        log_success_msg "$name is running";
        return 0;
    else
        if [ "$status" = 4 ]; then
            log_failure_msg "could not access PID file for $name";
            return $status;
        else
            log_failure_msg "$name is not running";
            return $status;
        fi;
    fi
}

Yet inserting the same function call into an init script made by myself returned that the function was undefined. So it has nothing to do with init scripts being special. Neither is it declared previously in the init script. Around the net I read that it is part of the LSB, but I can't figure out how to call it. Will someone please help me figure out how to use this wonderful function?

Why is this question considered off-topic?

@PiotrJurkiewicz -> Not anymore. :)

I found that the function was sourced from /lib/lsb/init-functions in the nginx init script. So adding:

. /lib/lsb/init-functions

to my init script solved the problem.
Group and Compare multiple dataframe columns with conditions in Python

I'm trying to print out the states with the highest population in each region.

Code sample:

# all unique regions
region_unique = data['Region'].unique()
# highest population
max_pop = data['population'].max()

How can I chain the above lines of code and bring in the 'States' column to achieve my result?

Dataset:

data.groupby(["States","Region"]).Population.max()

Can you try this? Or it could be the inverse:

data.groupby(["Region","States"]).Population.max()

Thanks. Though this doesn't answer my question correctly.

Considering you haven't mentioned any library... You could first create a helper dict, mapping each region to an array of states. Each state is a tuple (state, pop) (name and population count):

regions = {}
for state, pop, region in zip(data['States'], data['population'], data['Region']):
    regions.setdefault(region, []).append((state, pop))

Then for each region you can pull out the most inhabited state:

for region, states in regions.items():
    print(region, max(states, key=lambda s: s[1]))

To list the states under each region with a population less than 100, you can do:

for region, states in regions.items():
    print(region, list(filter(lambda state: state[1] < 100, states)))

Hi @Ivan, also under each region, I want to group states with a population less than 100. How do I go about this please?

Yes, it worked fine. Thanks again @Ivan. I will share the project across when I am done. How can I connect with you privately please?
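Ivan's helper-dict approach can be made fully self-contained. A sketch with invented sample data standing in for the question's (missing) dataset:

```python
# Made-up sample data playing the role of the States, population and Region columns.
states = ["Lagos", "Kano", "Abuja", "Enugu", "Oyo"]
pops = [250, 90, 120, 60, 300]
regions_col = ["West", "North", "Central", "East", "West"]

# Group (state, pop) tuples by region.
regions = {}
for state, pop, region in zip(states, pops, regions_col):
    regions.setdefault(region, []).append((state, pop))

# Most populated state per region.
top = {region: max(members, key=lambda s: s[1]) for region, members in regions.items()}

# States with population below 100, grouped by region.
small = {region: [s for s in members if s[1] < 100] for region, members in regions.items()}

print(top["West"])     # ('Oyo', 300)
print(small["North"])  # [('Kano', 90)]
```

With pandas available, the per-region maximum could instead come from something like data.loc[data.groupby("Region")["population"].idxmax()], but the pure-Python version above needs no extra libraries.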
HDFS disk usage

How do I display Non DFS Used %? If I use the hdfs dfsadmin -report command, it displays like this:

Name: <IP_ADDRESS>:50010 (aline.node5.cdc)
Hostname: aline.node5.cdc
Decommission Status: Normal
Configured Capacity:<PHONE_NUMBER>0 (43.73 GB)
DFS Used: 13463552 (12.84 MB)
Non DFS Used:<PHONE_NUMBER>6 (28.67 GB)
DFS Remaining:<PHONE_NUMBER>2 (15.05 GB)
DFS Used%: 0.03%
DFS Remaining%: 34.41%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Thu Jun 21 01:02:55 EDT 2018

But I want the Non DFS used percentage. Is there any specific command for it?

Try running the 'hdfs dfs -du /' command.

You could parse the output and do that math, but why do you care about non-DFS usage?
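As the second answer suggests, the percentage can be derived by parsing the report and doing the math yourself. A rough sketch (a canned report string stands in for the live hdfs dfsadmin -report output, and the byte counts are invented):

```python
# Canned excerpt of a datanode report; in practice this would come from
# running `hdfs dfsadmin -report` and capturing its stdout.
report = """Configured Capacity: 40000000000 (37.25 GB)
DFS Used: 13463552 (12.84 MB)
Non DFS Used: 10000000000 (9.31 GB)"""

# Map each "Key: value" line to its leading byte count.
fields = {}
for line in report.splitlines():
    key, _, rest = line.partition(": ")
    if rest:
        fields[key] = int(rest.split()[0])

non_dfs_pct = fields["Non DFS Used"] / fields["Configured Capacity"] * 100
print(f"Non DFS Used%: {non_dfs_pct:.2f}%")  # Non DFS Used%: 25.00%
```

The same division works per datanode or on the report's cluster-wide summary lines.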
z-index cross-browser incompatibility

This is the CodePen of a very short, simple piece of code: only HTML and CSS, plus Bootstrap. It shows a very basic Bootstrap carousel with an added white transparent overlay. The carousel caption should appear on top of the overlay, but it doesn't appear like that in the Chrome browser. Firefox, though, shows it like it's supposed to. I want the text to slide and appear on top of the still overlay in all browsers.

A link to CodePen is great, but can you put the relevant piece of code in the question for people to help you better?

@PraveenPuglia the relevant part is the z-index of the captions. I want them to appear on top of everything. They are not, and I just want to know why.

Welcome to Stack Overflow! Questions seeking debugging help must include the shortest code necessary to reproduce it in the question itself. NB - Please don't abuse the code blocks to get around this requirement.

Why not use the following CSS

.carousel-inner > .item::before {
    content: '';
    position: absolute;
    top: 0;
    left: 0;
    right: 0;
    bottom: 0;
    background: white;
    z-index: 1;
    opacity: 0.5;
}

instead of a separate overlay container? I mean, there even is a content: '' in your CSS for this container, so I guess it was, or was supposed to be, a pseudo element.
Change div colour based on query param ember

I am attempting to send a query param through the {{#link-to}} helper, get the query param in my route, and then dynamically change the colour of a div based on the param sent through. I can see I am getting the correct ID but nothing seems to be happening on the page. Here is my link-to helper:

<li>{{#link-to 'usernotification' (query-params highlightedNotification=activeUserNotification.id) classNames="read-more"}}Read more{{/link-to}}</li>

and here is my route:

export default BaseRoute.extend({
  accountService: Ember.inject.service('account'),
  userNotificationService: Ember.inject.service('usernotification'),
  queryParams: {
    highlightedNotification: {
      refreshModel: true
    }
  },
  beforeModel(params) {
    this._super(...arguments);
    Ember.$("#" + params.queryParams.highlightedNotification).attr('style', 'background-color: black !important');
  },
});

Can anyone see where I am going wrong?

At the beforeModel hook, the route hasn't rendered yet! So the DOM element that you try to modify is not in its place. You can make it work in some way; for example, by scheduling that code to run at the afterRender phase, wrapping it with Ember.run.scheduleOnce('afterRender', this, ()=>{/*your code*/});. There may be some other alternatives. But this is not the Ember way of doing it. You should pass this data to the corresponding component, and the component should decide how it will be displayed. Further, the name "highlightedNotification" may be wrong. It should be something like "selectedNotification", "currentNotification", or "notificationToBeAttentioned". Let the component decide how the attention will be emphasized; highlighting is just one way to draw attention. Sample twiddle to illustrate the suggested way.