How to turn a list into many strings - Python

I was wondering, if I have a list such as ['This is my list'], how can I split it into 4 strings so that I can count them? Sorry, I forgot to mention that I am using Python!

It depends on which language you are using! For example, in Python you can use .split(' '), which splits the string using a space as the separator. Is this what you want?

    l = ['This is my list']
    ans = [i.split() for i in l]
    print(ans[0])
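In Python 3 the same idea works; len() then counts the resulting strings (note that print is a function there):

```python
l = ['This is my list']
words = l[0].split()  # split the single element on whitespace
print(words)       # → ['This', 'is', 'my', 'list']
print(len(words))  # → 4
```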
common-pile/stackexchange_filtered
How do I convince Dell blades to PXE boot?

Scenario: brand new Dell M610 blades, connected to the network with a 10GE uplink into a Dell PowerConnect M6220 blade switch. The DHCP server is evidently working, as everything else boots and gets addresses fine. The TFTP server is working fine. The blades can't seem to get a DHCP address from the server. Is there something at the blade-chassis level interfering? Will I need to set up something special on the chassis? On the switch? On the blade?

Portfast = Enabled. IP Helper = points at our DHCP server (but it shouldn't be needed; everything is somewhere on <IP_ADDRESS>/16). DHCP Relay = Enabled. DHCP Snooping = Disabled. Now what?

What do the DHCP server logs say? You should see the DHCP exchange (on my ISC DHCP daemon, the exchange steps are DHCPDISCOVER, DHCPOFFER, DHCPREQUEST and then DHCPACK) before the client gets an IP address. It looks like your server isn't making it past this step (TFTP occurs after this step, so it's irrelevant to the error above).

The traffic isn't getting that far.

You mean there is nothing in the logs at all? Traffic is not making it to the server? That's good to know, because it helps to clarify your question.

It's not that there's nothing in the logs: DHCP works for every other physical device on the network. There's just nothing blade-related there.

MDMarra was closest with the suggestion of an L3 switch. I actually solved this just now (20:20 GMT) by: resetting the switch to factory defaults; double-checking the cables to the fibre core switch and the patch panel; swapping the SFP port on the core switch; swapping the SFP port on the Dell blade switch (why this matters is beyond me, unless there's a duff port); and testing everything one more time.

It sounds like you might have an L3 switch in the mix that needs DHCP forwarding configured on it.

Maybe you haven't enabled Portfast, or the Dell equivalent, on the switch? Portfast is enabled.
Artifactory storage keeps growing

The Artifactory storage summary says the binary size is 1.95 TB and the artifact size is 3.59 TB, but the filestore is 5.1 TB and it keeps growing. It seems that about 3 TB of space is in use by old artifacts. Is there a way to investigate what is consuming this storage, and can it be cleaned up somehow?

Are you using the trash? You might have all the storage used by the trash can. If so, you can empty it, or even configure Artifactory not to use it at all.

Thank you for the remark; we do indeed use the trash can, but it only occupies 0.05% (2 GB) of storage according to the GUI, so I guess that will not help.

Can you check the file system and see the distribution of the storage? Look under /var/opt/jfrog/artifactory (assuming Linux) and see which directory is consuming so much of the storage.

@Peter, when is the last time your Artifactory ran a garbage collection? It is possible that you have storage consumed by artifacts which are no longer in use and should be cleaned by the GC process.

@Dror: garbage collection runs every 4 hours, every day. @Eldad: I had a closer look at the directory sizes. It appears that the database/backup/current/repositories directory is consuming about 3.6 TB. We configured daily incremental backup, so it looks as if I have to change the backup somehow. Any good recommendations?

Backups can eat up a LOT of storage. Company policy is your first priority. Consider putting them on a separately mounted volume, and look at https://jfrog.com/whitepaper/best-practices-for-artifactory-backups-and-disaster-recovery/ for more.
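The "check the file system" step above can be scripted; here is a rough du-style Python sketch (the Artifactory path is the one suggested in the thread, adjust as needed) that totals the bytes under each immediate subdirectory:

```python
import os

def dir_sizes(root):
    """Total bytes under each immediate subdirectory of `root` (du-style)."""
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            total = 0
            for dirpath, _dirs, files in os.walk(entry.path):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass  # file vanished or unreadable; skip it
            sizes[entry.name] = total
    return sizes

# e.g. dir_sizes("/var/opt/jfrog/artifactory")
```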
Create new signal or multiplex SIGALRM?

I am trying to write a benchmark that receives a signal from the kernel telling it to adjust its parameters; I'm studying whether a proactive or a reactive approach works best. In the proactive approach, I use setitimer to set an alarm periodically and force the benchmark to look at its performance thus far and re-tune itself. In the reactive approach, the kernel periodically monitors the process and signals it if it is performing poorly. Since I've been using the setitimer functionality, and since setitimer raises SIGALRM, I have asked the kernel to send a SIGALRM in the reactive approach as well. This has been working fine. However, I now need SIGALRM to run the benchmark for a specific duration of time. Is there a way to multiplex SIGALRM to serve both purposes: to do a timed run and terminate, and to re-tune? Is there a function/syscall similar to setitimer that allows the user to set an alarm but with a custom signal?

Yes. You want to look at the timer_create / timer_settime family of calls. The second parameter of timer_create is a struct sigevent; the sigev_signo field within it can be set to send a specific signal number on timer expiration.
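The same idea can be sketched in Python on a Unix system (a hypothetical analogue only: threading.Timer stands in for the POSIX timer, and the sigev_signo role is played by choosing which signal to raise). SIGALRM stays reserved for the timed run, while a different signal, SIGUSR1 here, drives re-tuning:

```python
import os
import signal
import threading
import time

retune_count = 0

def on_retune(signum, frame):
    # Handler for the re-tuning signal; SIGALRM remains free for the timed run.
    global retune_count
    retune_count += 1

signal.signal(signal.SIGUSR1, on_retune)

# Emulate a timer whose expiration delivers a custom signal (cf. sigev_signo).
timer = threading.Timer(0.1, lambda: os.kill(os.getpid(), signal.SIGUSR1))
timer.start()
timer.join()        # wait until the timer thread has sent the signal
time.sleep(0.05)    # give the main thread a chance to run the handler
print(retune_count)  # → 1
```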
dynamic redux app loading vs store composition and combineReducers

Let's say I've got a very big application written in React/Redux, split into modules with webpack import statements, etc. I need a setup where, for a given production build, I choose which modules get included in the dist (the rest is ignored). For example, I've got modules A, B, C, D. One customer paid for modules A and B, and that's what he gets; another one paid for all of them and gets A, B, C, D. That is what should get bundled; there is, of course, one consistent codebase. At the webpack level I'm just generating a new entry point that includes (at the AST level) the modules I want (import moduleA, import moduleB)... but now comes the question about the Redux store and combineReducers. Is there any way to dynamically add pieces to a combineReducers call? The only way I can think of is to generate the root reducer manually, importing the module reducers. But maybe there is a better approach?

The standard approach to dynamically adding slice reducers is to call combineReducers again, passing in all the reducers you want to have now, and then call store.replaceReducer(newRootReducer). The react-boilerplate project has an example of this; see their utility function injectAsyncReducer.

Sorry for taking so long to accept the answer; now I understand it :P
Using integrals to prove that the mean of the sampling distribution is the population mean

Let the random variables $X_1, X_2, \dots, X_n$ denote a random sample from a population. The sample mean of these random variables is $\overline{X}=\frac{1}{n}\sum\limits_{i=1}^{n}X_i$. I would like to show that the mean of the sampling distribution of the sample mean is $\mu$, the population mean. Here's what I have done:

$$\begin{align} E(\overline{X}) &= \int\limits_{\overline{X}} \bar{x} f(\bar{x})\, d\bar{x} \\ &=\int\limits_{\overline{X}} \left(\frac{1}{n} \sum\limits_{i=1}^n X_i \right) f(\bar{x}) \, d\bar{x} \end{align}$$

From here I am not sure what to do anymore, but in any case I end up with:

$$\frac{1}{n} \left( \int\limits_{\overline{X}}X_1f(\bar{x}) \, d\bar{x} + \int\limits_{\overline{X}}X_2f(\bar{x}) \, d\bar{x}+ \dots + \int\limits_{\overline{X}}X_nf(\bar{x}) \, d\bar{x}\right)$$

Now, I don't know how to complete this, as I am unsure how to interpret the last equation. Somehow, $\int\limits_{\overline{X}}X_if(\bar{x}) \, d\bar{x}$ is supposed to equal $\mu$, but I don't see how that can be true. I know the answer will be $\mu$ from elsewhere, but I would like to arrive at the answer using integrals instead.

The approach in this answer may help you: http://math.stackexchange.com/questions/506352/expectation-of-a-function-of-a-random-variable/506378#506378

@AlecosPapadopoulos Thanks for the link. I can't follow it very well yet, as I have not covered the topics you touched on. Is trying to arrive at the answer using integrals really more complicated than relying on the properties of expected values?

It is more complicated, although not really more complex. You cannot avoid the joint density and the multiple integrals, since the sample mean is a function of many random variables (and you do not assume that the variables are independent in order to prove unbiasedness of the sample mean).

Okay, noted.
Then I will have to revisit this question later in the future. But I would appreciate it if you could post a complete answer; that way, I can accept it and mark this question as answered.

No problem giving a complete answer. Let's see.

As already stated, the sample mean is a function of many random variables, so the symbol $E$ refers to the expected value with respect to their joint distribution. Denoting by $\mathbf X$ the multivariate vector of the $n$ r.v.'s, their joint density can be written as $f_{\mathbf X}(\mathbf x)= f_{X_1,...,X_n}(x_1,...,x_n)$ and their joint support as $D = S_{X_1} \times ...\times S_{X_n}$. The sample mean is a function of this multivariate vector, $\bar X = \frac 1n \sum_{i=1}^{n}X_i = g(\mathbf X)$. Using the Law of the Unconscious Statistician, we have

$$E\left[\frac 1n \sum_{i=1}^{n}X_i\right] = \int_D g(\mathbf x)f_{\mathbf X}(\mathbf x)\,d\mathbf x$$

Under regularity conditions we can decompose the multidimensional integral into an $n$-iterated integral:

$$E\left[\frac 1n \sum_{i=1}^{n}X_i\right] = \int_{S_{X_n}}...\int_{S_{X_1}}\left[\frac 1n \sum_{i=1}^{n}x_i\right]f_{X_1,...,X_n}(x_1,...,x_n)\,dx_1...dx_n $$

and using the linearity of integrals we can decompose this into

$$ \frac 1n\int_{S_{X_n}}...\int_{S_{X_1}}x_1f_{X_1,...,X_n}(x_1,...,x_n)\,dx_1...dx_n \; + ...\\ ...+\frac 1n\int_{S_{X_n}}...\int_{S_{X_1}}x_nf_{X_1,...,X_n}(x_1,...,x_n)\,dx_1...dx_n $$

For each $n$-iterated integral we can rearrange the order of integration so that, in each, the outer integration is with respect to the variable that appears outside the joint density.
Namely,

$$\frac 1n\int_{S_{X_n}}...\int_{S_{X_1}}x_1f_{X_1,...,X_n}(x_1,...,x_n)\,dx_1...dx_n = \\ \frac 1n\int_{S_{X_1}}x_1\int_{S_{X_n}}...\int_{S_{X_2}}f_{X_1,...,X_n}(x_1,...,x_n)\,dx_2...dx_n\,dx_1$$

and in general

$$\frac 1n\int_{S_{X_n}}...\int_{S_{X_j}}...\int_{S_{X_1}}x_jf_{X_1,...,X_n}(x_1,...,x_n)\,dx_1...dx_j...dx_n =$$ $$=\frac 1n\int_{S_{X_j}}x_j\int_{S_{X_n}}...\int_{S_{X_{j-1}}}\int_{S_{X_{j+1}}}...\int_{S_{X_1}}f_{X_1,...,X_n}(x_1,...,x_n)\,dx_1...dx_{j-1}\,dx_{j+1}...dx_n\,dx_j$$

As we calculate the integrals in each $n$-iterated integral one by one (starting from the inside), we "integrate out" a variable, obtaining at each step the "joint-marginal" distribution of the remaining variables. Each $n$-iterated integral therefore ends up as $\frac 1n\int_{S_{X_j}}x_jf_{X_j}(x_j)\,dx_j$. Bringing it all together, we arrive at

$$E\left[\frac 1n\sum_{i=1}^{n} X_i \right] = \frac 1n\int_{S_{X_1}}x_1f_{X_1}(x_1)\,dx_1 +...+\frac 1n\int_{S_{X_n}}x_nf_{X_n}(x_n)\,dx_n $$

But each simple integral is just the expected value of the corresponding random variable, so, since the $X_i$ are identically distributed with common mean $E(X)$,

$$E\left[\frac 1n\sum_{i=1}^{n} X_i \right] = \frac 1nE(X_1) + ...+\frac 1nE(X_n) = \frac 1nE(X) + ...+\frac 1nE(X) = \frac 1n\, n\,E(X) = E(X)$$

You should not confuse the argument of the probability density function with the random variable. Often one uses a lower-case letter for the former and a capital letter for the latter. For example, if the density function of the random variable $X$ is $f$, then one can speak of $\int_{-\infty}^\infty xf(x)\,dx$, and the lower-case "$x$" is not the random variable $X$. It is analogous to such things as "$P(X\le x)$": the "$X$" and the "$x$" mean two different things. One can also write $f(3)$, and it doesn't mean the density function of a random variable called "$3$"; it means the value, at the number $3$, of the density function of the random variable $X$. Thus one can write $f_X(4)$ and $f_Y(4)$, and they're the values at $4$ of the density functions of two different random variables $X$ and $Y$.
If you're writing $$ \mathbb E(\bar X) = \int_{-\infty}^\infty \bar x f(\bar x)\,d\bar x, $$ then that means the same thing as $$ \mathbb E(\bar X) = \int_{-\infty}^\infty x f_{\bar X}(x)\,dx. $$ You can't put the random variable $\bar X$ in place of the bound variable $\bar x$.

To understand this, it may help to realize that $$ \sum_{j=1}^3 (i^2 j^3) \text{ and } \sum_{k=1}^3 (i^2 k^3) $$ both mean $$ i^2 1^3 + i^2 2^3 + i^2 3^3, $$ and thus both depend on the value of the "free variable" $i$ but not on the value of the "bound variable" $j$ or $k$; there is no $j$ or $k$ for them to depend on. One can freely change the name of the bound variable from $j$ to $k$ without changing the value of the expression. The same thing applies to $$ \mathbb E(X) = \int_{\mathbb R} x f_X(x)\,dx = \int_{\mathbb R} w f_X(w)\,dw. $$ One can freely change the name of the bound variable from $x$ to $w$, but the capital $X$ still refers to the same random variable.

If you intend $f$ to be the density function of $\bar X$, then in the second displayed line of your question, $f$ is still the density function of the random variable $\bar X$. If I wanted to do this by using the density function of the sample mean $\bar X$, I'd actually need to show how that function depends on the density function of $X_1$; that would be an $n$-fold convolution. I wouldn't do it that way if I could help it. Instead, I'd use the linearity of expectation:

$$ \mathbb E\left(\frac{X_1+\cdots+X_n}{n}\right) = \frac1n\mathbb E(X_1+\cdots+X_n) = \frac1n((\mathbb E X_1)+\cdots+(\mathbb E X_n)) $$

There is, of course, the problem of how to prove the linearity of expectation. Maybe that's posted here somewhere as a separate question already.
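The unbiasedness result above is easy to check numerically. A quick simulation (arbitrary choices of mu = 5, sigma = 2, n = 10) averages many sample means and lands on the population mean:

```python
import random

random.seed(0)
mu, sigma = 5.0, 2.0
n, trials = 10, 20000

# Draw `trials` samples of size n and record each sample mean.
sample_means = [
    sum(random.gauss(mu, sigma) for _ in range(n)) / n
    for _ in range(trials)
]
estimate = sum(sample_means) / trials  # average of the sample means
print(abs(estimate - mu) < 0.05)  # → True
```

The standard deviation of the estimate is sigma / sqrt(n * trials), about 0.0045 here, so the 0.05 tolerance is comfortable.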
Why is JavaScript saying that the addCallback function is not defined?

My first time on here. My problem is with AS3, JavaScript, and possibly the browsers Firefox and IE. I have done so much searching for an answer, so I will post my code. I am using this line to get the Flash application; in all browsers it is compatible and actually traces in Firebug as holding an OBJECT->FLASH_ID, so that's not the problem:

    var obj = document.getElementById('test');

Then I use the callback registered with addCallback:

    obj.sendStatus(loggedIn);

Now, what's weird is that when I trace all the individual elements in Chrome I get: obj = flash object; sendStatus = flash->function; loggedIn = either false or true. Everything works great. But when I am on Firefox or IE it traces differently: obj = flash object; sendStatus = undefined; loggedIn = either true or false. What am I missing? I tried embedding rather than object insertion, I made sure that the IDs were all unique, and I checked that I had the right Flash object selected with getElementById. I'm so confused, and it feels like something simple.

I know about some browser-dependent timing problems with making the interface of the Flash object available. A timer could help; try this:

    var obj = document.getElementById('test');
    setTimeout(function(){ obj.sendStatus(loggedIn); }, 500);

500 ms is a bit long, but just to be sure. If it works, you can try to lower it to 200-300.

Okay, I tried it and it's not working. It gets me thinking, though: ExternalInterface.available just checks whether the container it is inside of is an HTML document, so I know to use $(document).ready() and $(window).load(...), but can Flash access the DOM before it is ready?

Uhm, I don't understand you quite well. But another thing: is it maybe a logging issue? Are you sure the method is not being called?

Yeah, the method (I assume you mean the callback) is being called in Firefox and Chrome, but not IE. What do you mean by logging?

What I mean is: could there be calls made from Flash Player to JavaScript before the DOM is ready? Are Flash elements loaded before JavaScript is?

You just have to make sure that you are accessing the Flash object, which is a DOM element, when the document is ready. There also seem to be timing issues with this object making its JavaScript interface available; that's why I suggested the timer, but you tested that and it doesn't work. In your original post you say it doesn't work in Firefox and IE; here you say it doesn't work only in IE. Which is true? My question was only: are you sure the callback is not executed in Flash? Have you debugged it? Have you put traces in the method?

Hey lxx, the callback is not executed, as the element is undefined, and the debugger confirms that too. The way I just got it to work in Firefox is that I removed the object-within-object and just used embed instead, and it decided to work in Firefox but not IE. So IE is the only browser saying that it's an undefined element.

Make sure you declared allowScriptAccess = sameDomain in both the embed tag and the object tag, in case you don't use SWFObject.

Maybe the way you get a reference to the SWF is wrong; try this:

    function thisMovie(movieName) {
        if (navigator.appName.indexOf("Microsoft") != -1) {
            return window[movieName];
        } else {
            return document[movieName];
        }
    }

http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/external/ExternalInterface.html

The problem is that using ExternalInterface requires both parties (browser and Flash) to be ready. You can have the Flash poll a method in the page which just returns true, so that you know it's ready to receive calls from Flash. On the flip side, if the page is cached, it can sometimes happen that the page wants to send to Flash before Flash is ready, so I use a callback to the page telling it Flash is ready. It's like a handshake: once both parties are ready, we can start sending data back and forth. This has been my approach since Firefox 3.
Caesar cipher - Trouble with negative numbers in the shift key

When I use a negative number for the shift, like -1, and use the char 'a', I'm supposed to get 'z', but I get ` instead. How can I fix this?

    using System;
    using System.IO;

    namespace CaesarCipher
    {
        class Program
        {
            public static char cipher(char ch, int key)
            {
                if (!char.IsLetter(ch))
                {
                    return ch;
                }
                char d = char.IsUpper(ch) ? 'A' : 'a';
                return (char)((((ch + key) - d) % 26) + d);
            }

            public static string Encipher(string input, int key)
            {
                string output = string.Empty;
                foreach (char ch in input)
                    output += cipher(ch, key);
                return output;
            }

            public static string Decipher(string input, int key)
            {
                return Encipher(input, 26 - key);
            }

            static void Main(string[] args)
            {
                bool Continue = true;
                Console.WriteLine(" Ceasar Cipher");
                Console.WriteLine("-------------------------\n");
                while (Continue)
                {
                    try
                    {
                        Console.WriteLine("\nType a string to encrypt:");
                        string UserString = Console.ReadLine();
                        Console.Write("\nShift: ");
                        int key = Convert.ToInt32(Console.ReadLine());
                        Console.WriteLine("\nEncrypted Data: ");
                        string cipherText = Encipher(UserString, key);
                        Console.WriteLine(cipherText);
                        Console.Write("\n");
                        Console.WriteLine("Decrypted Data:");
                        string t = Decipher(cipherText, key);
                        Console.WriteLine(t);
                        Console.WriteLine("\nDo you want to continue?");
                        Console.WriteLine("Type in Yes to continue or press any other key and then press enter to quit:");
                        string response = Console.ReadLine();
                        Continue = (response == "Yes");
                    }
                    catch (FormatException)
                    {
                        Console.WriteLine("You entered a bad operation, try another one");
                    }
                }
            }
        }
    }

Sample run:

    Ceasar Cipher

    Type a string to encrypt:
    Hello how are you?
    Shift: 1
    Encrypted Data:
    Ifmmp ipx bsf zpv?
    Decrypted Data:
    Hello how are you?

    Do you want to continue?
    Type in Yes to continue or press any other key and then press enter to quit:
    Yes

    Type a string to encrypt:
    Hello how are you?
    Shift: -1
    Encrypted Data:
    Gdkkn gnv `qd xnt?
    Decrypted Data:
    Hello how `re you?

    Do you want to continue?
Type in Yes to continue or press any other key and then press enter to quit:

The answer seems obvious enough: you need a rollover. The English alphabet is only 26 characters wide; if you go below or above that, you need to wrap around to the other end.

You should consider making an array with the characters rather than just doing ASCII math. Not only will that support more languages, you will also avoid small letters turning into capitals or vice versa with large offsets (a shift of only -7 already turns a lower-case 'a' into a capital 'Z'). IIRC the Romans did not use small letters, so they had less of an issue with that.

return (char)((((ch + key) - d) % 26) + d); is equivalent here to return (char)(ch + key);, which is not, I guess, what you want it to do.

` is the character that precedes "a" in the ASCII/Unicode character list (https://unicode-table.com/en/). You'll want to detect when you shift past "a" and wrap around to "z" instead. There are many ways you could do this. One way would be to count back from the last letter when the result falls before the first letter:

    public static char cipher(char ch, int key)
    {
        if (!char.IsLetter(ch))
        {
            return ch;
        }
        char firstLetter = char.IsUpper(ch) ? 'A' : 'a';
        char lastLetter = char.IsUpper(ch) ? 'Z' : 'z';
        int result = ch + key;
        if (result < firstLetter)
            result = lastLetter - (firstLetter - result - 1);
        return (char)result;
    }
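For comparison, here is the same wrap-around fix sketched in Python, where the % operator already returns a non-negative result, so negative shifts need no special case:

```python
def cipher(ch, key):
    if not ch.isalpha():
        return ch
    base = ord('A') if ch.isupper() else ord('a')
    # Python's % yields a value in [0, 26) even when (ord(ch) - base + key)
    # is negative, so 'a' shifted by -1 wraps straight to 'z'.
    return chr((ord(ch) - base + key) % 26 + base)

def encipher(text, key):
    return ''.join(cipher(ch, key) for ch in text)

print(cipher('a', -1))        # → z
print(encipher('Hello', -1))  # → Gdkkn
```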
Setting pandas set_option globally for a particular venv

I find myself setting pd.set_option in multiple files and forgetting about them. Are there any performance issues with setting these values so large? (I tend to do this in DEV.) Is there a way to set an .env variable or a pandas config globally, so that all modules/scripts that use pandas inherit these options? Then I could revert to smaller values in PROD.

    pd.set_option('display.max_rows', 1000, 'display.width', 1000, 'display.max_columns', 1000)

If you are using IPython, then you can create a startup script: https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html#setting-startup-options-in-python-ipython-environment

Not currently; I'm using a remote Python 3.6 on an AWS EC2 instance. Is there a similar startup-script concept for regular Python? I'm wondering if I should create a custom module that imports pandas and calls pd.set_option, and then import that module elsewhere in the codebase... but that seems like a hack to me.
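The custom-module idea from the thread is a common pattern; a sketch is below (the module name and the APP_ENV variable are made up for illustration). For truly interpreter-wide behavior, regular Python also imports a sitecustomize module at startup if one is on the path, which the same snippet could live in:

```python
# pandas_defaults.py (hypothetical shared module)
import os

import pandas as pd

# Widen the display only outside production; APP_ENV is an assumed variable.
if os.environ.get("APP_ENV", "DEV") != "PROD":
    pd.set_option("display.max_rows", 1000)
    pd.set_option("display.width", 1000)
    pd.set_option("display.max_columns", 1000)
```

Elsewhere in the codebase, `import pandas_defaults` once is enough; the options stick for the whole process.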
What's the difference between entropy and the (dis)order of a system?

Entropy is often verbally described as the order/disorder of a thermodynamic system. However, I've been told that this description is a vague, "hand-waving" attempt at describing what entropy is; for example, a messy bedroom doesn't have greater entropy than a tidy room. My question is: why is this the case? Also, what would better describe entropy verbally?

Essential reading: http://ariehbennaim.com/books/entropyd.html. Briefly, spontaneous processes tend to proceed from states of low probability to states of higher probability. The higher-probability states tend to be those that can be realized in many different ways. Entropy is a measure of the number of different ways a state with a particular energy can be realized. Specifically, $$S=k\ln W$$ where $k$ is Boltzmann's constant and $W$ is the number of equivalent ways to distribute energy in the system. If there are many ways to realize a state with a given energy, we say it has high entropy. The many ways to realize a high-entropy state might often be described as "disorder", but the lack of order is beside the point; the state has high entropy because it can be realized in many different ways, not because it's "messy".

Here's an analogy: if energy were money, entropy would be related to the number of different ways of counting it out. For example, there are only two ways of counting out two dollars with American paper money (2 one-dollar bills, or 1 two-dollar bill). But there are five ways of counting out two dollars using 50-cent or 25-cent coins (4 50-cent pieces; 3 50-cent pieces and 2 quarters; and so on). You could say that the "entropy" of a system that dealt in coins was higher than that of a system that dealt only in paper money.
Let's look at the change in entropy for a reaction $\rm A\rightarrow B$, where A molecules can take on energies that are multiples of 10 energy units, and B molecules can take on energies that are multiples of 5 units. Suppose that the total energy of the reacting mixture is 20 units. If we have 3 molecules of A, there are 2 ways to distribute our 20 units among energy levels of 0, 10, and 20 units. If we have 3 molecules of B, there are 4 ways to distribute 20 units among energy levels of 0, 5, 10, 15, and 20 units. The entropy of B is higher than the entropy of A because there are more ways to distribute the same amount of energy in B than in A. Therefore, $\Delta S$ for the reaction $\rm A\rightarrow B$ will be positive.

"What would better describe entropy verbally?" We can also use "measure of randomness", "amount of chaos", or "energy dispersal". Initially, in 1862, Rudolf Clausius asserted that a thermodynamic process always "admits to being reduced to the alteration in some way or another of the arrangement of the constituent parts of the working body" and that internal work associated with these alterations is quantified energetically by a measure of entropy change. A few years later, Ludwig Boltzmann translated the word "alteration" from Clausius's assertion into "order" and "disorder" in gas-phase molecular systems. More recent books, however, use the concept of energy dispersal instead of order or disorder to explain entropy. (Source: Wikipedia; also have a look at Physics S.E.) @RutvikSutaria I have added the link; have a look there. :)

I will take a crack at this, although I admit that this topic sometimes confuses me as well. Here is how I like to think about entropy. Consider a box containing equal amounts of gases A and B. So we have a fixed volume and a fixed number of molecules. Let us also isolate the box from its surroundings so that it has a fixed energy as well. We have just created a microcanonical ensemble (constant NVE).
The ensemble consists of every possible configuration of molecules having the same NVE. Now, if we were able to step back and take a broad view of the ensemble, we would observe that the vast majority of boxes contain a rather bland, homogeneous mixture of gases A and B; in fact, they would be indistinguishable for all practical purposes. Let us count the number of boxes in this state and call the number $W_1$. Continuing with the box counting, we find that there are only a handful of distinguishable boxes left, but they are quite interesting! One box, for instance, might have all of the A molecules crowded together in one corner and all of the B molecules in another corner! This one box can be labelled $W_2$. We start to understand that some configurations of molecules will be extremely improbable because they represent such a small fraction of the total number of possible configurations. Boltzmann quantified this relationship as $S=k \ln W$. If we plug $W_1$ into this equation we get a high value for the entropy $S$, because $W_1$ is so large. On the other hand, if we plug in $W_2$, the entropy we get is very low. So, to sum up, entropy is really a measure of how likely a given system configuration is when compared to all of the possible configurations. In this sense, I would argue that you could say a messy room has a higher entropy than a clean room, because there are so many more ways a room could be considered "messy" than "clean".

That's probably about the best I can do.

I am a rather boring person. Breaking down the wall of text might be helpful.
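The level-counting in the A → B example above is small enough to enumerate directly; a few lines of Python reproduce the 2-versus-4 count of microstates:

```python
from itertools import combinations_with_replacement

def count_ways(total, n_molecules, spacing):
    # Number of distinct ways to distribute `total` energy units among
    # `n_molecules` molecules whose allowed levels are multiples of `spacing`.
    levels = range(0, total + 1, spacing)
    return sum(
        1
        for combo in combinations_with_replacement(levels, n_molecules)
        if sum(combo) == total
    )

print(count_ways(20, 3, 10))  # → 2  (molecule A: {20,0,0}, {10,10,0})
print(count_ways(20, 3, 5))   # → 4  (molecule B: adds {15,5,0}, {10,5,5})
```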
Flexbox: How to achieve 50% width when the content is overflowing?

I have a nav bar and a main area that is divided into 2 equal halves, left and right. When the left half contains a wide element that overflows its width, the 50/50 division of main breaks, causing left to take more than 50%. Why is this happening? How can I avoid it?

    .container {
      display: flex;
      height: 100px;
      background-color: #ccc;
      width: 300px;
    }
    nav {
      flex-shrink: 0;
      background-color: #aaa;
    }
    main {
      display: flex;
      flex: 1;
    }
    .left {
      flex: 1 0 50%;
      background-color: #bbb;
      overflow: auto;
    }
    .wide {
      width: 300px;
    }
    .right {
      flex: 1 0 50%;
    }

    <div class="container">
      <nav>Navigation</nav>
      <main>
        <div class="left">
          <div class="wide">Left</div>
        </div>
        <div class="right">Right</div>
      </main>
    </div>

Add min-width: 0 to main. (An initial setting on flex items is min-width: auto. This means that, by default, a flex item cannot be smaller than its content size. You need to override the default.)
Importing methods from a Python class into another class instance

Let's say I've got Python object A, which is an instance of class A. Every instance of class A has an attribute SubType. I've also got classes SubType_B, SubType_C, and SubType_D, each of which has a method called ingest(). The ingest() method was previously using (self) to get all the parameters it needed, but that self is now the instance of class A. What is the most elegant way to inherit / use the method ingest() from a SubType class (e.g. SubType_B), using the self from object A? Previously, the ingest() method was defined in class A, but I would like to break that out into separate classes. Class A is instantiated and, based on the SubType parameter, that object would inherit the correct ingest() method from the corresponding SubType class. In real simple terms, I'd like one object to grab a method from another object, as if it were its own.

If you have an instance of A, then how would a child's method be relevant to it? Keep the relationship well defined: make a new controlling class which passes the instance of A as necessary to the instance of SubType, and remove the SubType attribute from A. That is, the new class will have an instance of A and an instance of SubType which it deals with as it should.

@HenryGomersall, a controlling class is a good thought, thank you, I will look into that. @IgnacioVazquez-Abrams, you're right, it's not. The question was perhaps too loosely defined, stating not my end goal but how to perform something ill-conceived. The idea of a controlling class, as suggested in other comments, intuited my intent despite the incoherence. Thanks for the XY-problem link, though; good to keep in mind.

Why not create a metaclass which will return a class with the proper ingest method based on the subclass type? This metaclass would have direct access to any of your subclasses' bases and would be able to make the decisions in its __new__ method.
UPDATE: The final solution was to make a factory function that returns instances of the SubType classes. This factory reads the information in the parameters passed to it, which determines which SubType class to instantiate. The SubType classes all extend class A. I had envisioned things backwards, thinking I started with the most general class and then incorporated attributes and methods from sub-classes. This factory-like function does the requisite information gathering, then instantiates the appropriate sub-class. Thanks to @HenryGomersall and @IgnacioVazquez-Abrams for ideas and clarification.
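A minimal sketch of that factory-function pattern (class and parameter names invented for illustration): the factory inspects a parameter, picks the matching subclass of A, and each subclass supplies its own ingest().

```python
class A:
    """Base class holding shared state and behavior."""
    def __init__(self, **params):
        self.params = params

class SubTypeB(A):
    def ingest(self):
        return "ingesting as SubType_B"

class SubTypeC(A):
    def ingest(self):
        return "ingesting as SubType_C"

def make_instance(subtype, **params):
    # Factory: choose the subclass from the parameters, then instantiate it.
    registry = {"B": SubTypeB, "C": SubTypeC}
    return registry[subtype](**params)

obj = make_instance("B", source="example")
print(obj.ingest())  # → ingesting as SubType_B
```

Because each SubType extends A, the returned object carries both the shared behavior of A and the type-specific ingest(), which is exactly the "grab a method as if it were its own" effect described above.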
After chmod, can't log in on VM

I changed the permissions on /etc/passwd to 640 (chmod o-r). After that, I can't log in to the server over SSH; PuTTY says "Network error: Software caused connection abort". What is the problem, and how can I fix it?

What's the problem? Basically, you broke it. If you have console access, boot from a rescue image and change the permissions on /etc/passwd back to what they're supposed to be (typically 644).

But what is wrong with 640 on /etc/passwd?

@PhilippCrux Most likely the login process can't read the file to determine whether the account you're trying to log in with exists.

It's weird, I think. All users can see /etc/passwd; that is not secure...

@PhilippCrux What's not secure about it? Account names aren't secret. You're supposed to be able to read /etc/passwd; it's for user information. Passwords are stored in /etc/shadow.
"ways/steps to increase" And "ways/step to increasing" Way and step I happened to read some sentences containing phrases with two similar forms such as Subject Verb to Verb... and Subject Verb to Verb-ing... I come up examples for each. "I find ways to increase my enjoyment in study." And "I find ways to increasing my enjoyment in study." In what situations each one is correct? Any explanation in meanings? *How about the other word "step"? I take steps to increase my enjoyment in study. and I take steps to increasing my enjoyment in study. A similar example: Steps to overcoming obstacles Both of your second options are wrong. We can say ways of increasing but not steps of or steps to increasing. "Steps to overcoming obstacles" is from a book which I currently read. I am quite surprised. This is interesting. I think the explanation is that steps to overcoming obstacles uses step in the sense of 'a stage in a process', while take steps has the specific meaning of 'do what is necessary', so their grammatical functions are different. @KateBunting I added the souce of "steps to overcoming obstacle". Which shows a drawing of actual steps. Take it from me, I take steps to increasing my enjoyment in study is not idiomatic English. I definitely believed you. You did provide answers in detail. I was a bit curious. I find ways to increase my enjoyment in study. This is a fully grammatical use, which a fluent speaker should understand. Such a speaker might be more likely to say "I have found way..." or "I am finding ways..." but there is nothing with sentence 1. I find ways to increasing my enjoyment in study. This is not grammatical, or at least is not natural. The form "to increasing" is not used in this construction. "increasing" indicates a process, and so does not fit "I find ways to..." (Increasing cvan also indicate a direction, as "the prices are increasing" but it still does not fit in sentence 2.) I take steps to increase my enjoyment in study. 
This, like sentence 1, is fully grammatical. A fluent speaker would understand and might well say sentence 3. I take steps to increasing my enjoyment in study. This has much the same problem as sentence 2. "to increasing" is still not used in this construction, and the change from "find ways" to "take steps" does not alter that. Steps to overcoming obstacles This could be short for 5A. Here are some steps to overcoming obstacles. The participle "overcoming" can accept this "to-form" while "increasing" cannot. I find it hard to spell out a reason for this in the form of a concise rule. Perhaps someone else could do so. Here are some steps for increasing enjoyment in study by changing the preposition from "to" to "for" and taking the sentence out of the first person, sentence 4 becomes acceptable. A similar transformation on sentence 2 yields: Here are some ways for increasing enjoyment in study Examples 5 up to 7 are perfect. Define "perfect". As this usage chart clearly shows, native speaker almost always choose to refer to ways to improve things rather than ways for improving things. Just because no-one wants to rule it out on syntactic grounds doesn't imply the latter form is even "any good", let alone "perfect". @FumbleFingers I claim only that examples 5A to 7 are grammatically valid, would be understood by a fluent speaker, and not thought particularly odd by such a speaker, not that they are favored by such speakers in general. David: Indeed. My only intended response to your actual Answer text here was the upvote itself. I just didn't think it was a good idea to let @Stats Cruncher's "erroneous / questionable" assertion stand. We wouldn't necessarily think the "less favoured" form was "odd", but there's no doubt whatsoever which way most native speakers would jump if required to actually make a choice between the infinitive and continuous verb forms in such contexts. As a non-native English speak, I just take baby steps to improve on my learning outcome. 
May I ask if the related passive is possible: 1a. Steps / ways to increase my enjoyment in study has been found. @Robby zhu If "steps" or "ways" is plural, then the verb form must be "have", not "has": "Ways to increase my enjoyment in study have been found." That is grammatically valid. But I see no advantage to the passive voice here, and would advise against using it unless there was some particular reason to use it.
How to update multiple records in Entity Framework through an async Web API

I am using Web API 2 and Entity Framework 6. I have created an async Web API which updates all the records at once. I am also using Autofac for dependency injection.

My service interface is as follows:

```csharp
Task<Approval> TakeAction(int id, bool isApprove);
void TakeAction(bool isApprove);
```

These are my service methods:

```csharp
public async void TakeAction(bool isApprove)
{
    // GetAllDataToApprove is a method on the same service.
    var approvalList = GetAllDataToApprove().Approvals.ToList();
    foreach (var approval in approvalList)
    {
        // This is an async method as well
        await TakeAction(approval.ApprovalId, isApprove);
    }
}
```

The TakeAction method is as follows:

```csharp
public async Task<Approval> TakeAction(int id, bool isApprove)
{
    Approval approval = approvalrepository.Query(o => o.ApprovalId == id).FirstOrDefault();
    try
    {
        // update the approval status
        approval.StatusId = 5;
        UpdateDashboard(approval);
        approvalrepository.Update(approval);
        await unitOfWork.SaveChangesAsync();
    }
    catch (Exception ex)
    {
        throw ex;
    }
    return approval;
}
```

My Web API method is as follows:

```csharp
[HttpPut]
public IHttpActionResult Put(bool isApprove)
{
    try
    {
        approvalservice.TakeAction(isApprove);
        return Ok();
    }
    catch (Exception ex)
    {
        throw new HttpResponseException(
            Request.CreateErrorResponse(HttpStatusCode.InternalServerError, ex.Message));
    }
}
```

I want to make a non-blocking API call, so that when this API is triggered it approves all the pending data. The reason I made this async is that there is a lot of data in the pending list, so it takes a long time. I am getting the following error: "The operation cannot be completed because the DbContext has been disposed."

How do you manage the lifetime of the DbContext? This doesn't have anything to do with async operations.

It looks like it has to do with you making changes inside a repository (which, presumably, has a DbContext inside it) and then trying to save changes on a unitOfWork (which probably has a different DbContext). For what it's worth, Entity Framework is already a repository / unit of work. Adding these "wrappers" around EF not only makes the code harder to work with, it also tends to lead you into places where you lose some of the power that EF offers.

If I understand your question correctly, it boils down to this: how do I start a long-running task from an API call but respond before it is finished?

Short answer: you don't, in the API. You delegate to a service.

Long answer: the error you are getting is most likely thanks to Autofac. With a Web API request, you initialize everything you need at the start of the request and get rid of it at the end. In your code, you are trying to do stuff with the DbContext after your request ended and it got thrown away. To fix this, simply add await, like this: `await approvalservice.TakeAction(isApprove);`.

Doing the above means you are now waiting for the process to complete, which is not what you wanted. But this is the only way, because Web API was made to work this way. What you need to do instead is delegate the work to a service: something that was made to run long-running tasks. A few options are:

- Hangfire
- Azure WebJobs / Cloud Services
- Windows Services

This is a really nice read on the exact issue you are having: HowToRunBackgroundTasksInASPNET

I completely agree with you; I was able to resolve this. As you said, Autofac was actually the culprit: I had overridden the DbContext Dispose method, and I had to comment out that code. It works fine now, but my concern is, just like the Azure portal, how do I manage long-running tasks on the UI side? Should I use SignalR to do this, or do I need to call the API every few seconds to find out how many of the records have been processed?

Both will work. You can poll, or use something like SignalR for real-time updates. Heads up: although you have gotten it to 'work', it is not guaranteed. If your task is long-running, there is no guarantee Web API will not simply throw it away halfway through. Furthermore, if there is some error, you won't be able to do anything about it.

Oh, so Hangfire, Azure WebJobs and Windows Services are the only options I could use? I think polling would be good, as the client calls the API every few seconds while on the page, whereas SignalR would require the server to broadcast the response to all its clients. Please correct me if I am wrong.

SignalR doesn't have to broadcast the response to all its clients; you can obtain the id of the client, e.g. via $.connection.hub.id in the JavaScript $.connection.hub.start().done(function ()

@user3825003 See my comment above re SignalR. However, I have found that, as hatcyl says, a Web API or MVC function on the server can often stop either broadcasting, or even running altogether, after an unpredictable time. Azure WebJobs have worked for me for long-running tasks (NB you have to make sure that your web app is kept alive).
Fastest way to determine if a collection (or DB table) has two records at the same time or not

I want to check a huge collection to see if it has two records, each matching a specific criterion. What is the fastest way?

For example, say I have a People table with a billion records, and I want to get a True answer if there is a person with first name equal to JACK and a person with last name equal to SMITH. That may be one record, like "Jack Smith", or two records, "Jack some-family" and "some-name Smith".

Please tell me what the fastest way would be in C# (collections and lists) and what the fastest way would be in SQL Server.

My opinion: checking Exists (C#) or Any (SQL) is faster. Someone else's opinion: apply a WHERE (producing a smaller collection than the whole table), then DISTINCT it and count it (this confused me too). Your opinion goes in the answers.

How about running a test to compute that?

Use EXISTS in SQL.

I think using DISTINCT would exclude the 'Jack Smith' result, as it would return 1 record rather than 2.

In SQL you probably won't get much better than:

```sql
SELECT COUNT(*)
FROM (
    SELECT TOP 1 Surname FROM People WHERE FirstName = 'Jack'
    UNION ALL
    SELECT TOP 1 Surname FROM People WHERE Surname = 'Smith'
) AS t
```

Hi Steve, wouldn't it be faster to do the same using EXISTS?

C#: we can try a generic HashSet and use its Contains method.

SQL: we can create an index / covering index on the search fields. In SQL Server we can then jump to the matching records directly via the B-tree structure, so the engine does not need to scan all the pages containing matching rows. It is quite easy to implement:

```sql
CREATE INDEX IX_People_name ON People (FirstName);
CREATE INDEX IX_People_Surname ON People (Surname);
```

Having the right indexes together with a sargable query works well here:

```sql
IF EXISTS (
    SELECT 1 FROM People WHERE FirstName = 'Jack'
    UNION ALL
    SELECT 1 FROM People WHERE Surname = 'Smith'
)
BEGIN
    PRINT 'first condition';
END
ELSE
BEGIN
    PRINT 'second condition';
END
```

This should return the result within a second; you can test it with SET STATISTICS TIME ON.
Scan barcodes from a mobile web site?

What is the best solution for scanning barcodes and QR codes from a mobile web site (no PhoneGap etc.)? The solution has to work on Android, iOS and Windows Phone. I've heard about pic2shop; does it work well (even on Windows Phone)? Are there better solutions?

As nothing else seems to have come up, I'll explain the best solution I found. The idea is to use the app "pic2shop". It is a barcode-scanning app that is available on Android, iOS and Windows Phone 8. It is possible to invoke this app from a web page, let it scan the barcode, and then have it call back your web site with the scanned information:

```javascript
window.location = "pic2shop://scan?callback=http%3A//www.google.com/m/products%3Fgl%3Dus%26source%3Dmog%26hl%3Den%26source%3Dgp2%26q%3DEAN%26btnProductsHome%3DSearch%2BProducts";
```

The above example will open the app and, when the user scans a barcode, open the web browser on a Google search for the scanned product barcode.

If pic2shop is not installed on the phone, the call will fail. The trick is to set a timeout that will redirect to the store (to download the app) if the call fails. User agents can be used to decide which store to redirect to.

Full example (Android store redirection):

```html
<SCRIPT LANGUAGE="JavaScript">
function trygoogle() {
    setTimeout(function() {
        // if pic2shop is not installed yet, go to the App Store
        window.location = "market://details?id=com.visionsmarts.pic2shop";
    }, 25);
    // launch pic2shop and tell it to open Google Products with the scan result
    window.location = "pic2shop://scan?callback=http%3A//www.google.com/m/products%3Fgl%3Dus%26source%3Dmog%26hl%3Den%26source%3Dgp2%26q%3DEAN%26btnProductsHome%3DSearch%2BProducts";
}
</SCRIPT>
```

Documentation: http://www.pic2shop.com/developers.html

There is no solution for mobile web. You need access to the camera. getUserMedia is only supported in the latest versions of Android; iOS and Windows Phone don't support it yet.

Actually, there are solutions. I identified two of them: invoke the "pic2shop" app from the web site, or use the web implementation of ZXing. I was looking for a possibly better solution that would work well on the three OSes I mentioned.

Thanks for sharing. Do you know how they work in Mobile Safari without getUserMedia to access the camera?

You can call apps from a webpage. For example, the pic2shop URL above will call pic2shop (a barcode-scanning app) and make a Google search with the scanned barcode. This way you don't need to access the camera yourself as, in this case, pic2shop will do it for you. I'll answer my own post to explain it better if nobody comes up with something else.
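As a small sketch of the pattern above, here is a helper that builds the pic2shop scan URI from a plain callback URL. The helper name is mine, not part of pic2shop's API; only the `pic2shop://scan?callback=` scheme comes from the thread. encodeURIComponent encodes every reserved character, which is the safe general form (the hand-written URLs above leave the slashes unencoded, which evidently also works).

```javascript
// Build a pic2shop scan URI whose callback is the given URL.
// pic2shop opens the callback after scanning, so the URL is passed
// percent-encoded to survive as a single query-string value.
function buildScanUri(callbackUrl) {
  return "pic2shop://scan?callback=" + encodeURIComponent(callbackUrl);
}

console.log(buildScanUri("http://example.com/lookup?ean=EAN"));
// pic2shop://scan?callback=http%3A%2F%2Fexample.com%2Flookup%3Fean%3DEAN
```

Assigning the result to window.location (wrapped in the timeout fallback shown in the full example) then triggers the app or the store redirect.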
JavaFX: Add text to TextFlow - rendering issue

I'm trying to use a quite simple JavaFX feature, but may be missing something: adding a new Text object to a TextFlow that is already showing. I created a very minimal working example:

```java
public class TextFlowTest extends Application {
    @Override
    public void start(Stage stage) {
        TextFlow textFlow = new TextFlow(new Text("One "), new Text("Two "),
                new Text("Three "), new Text("Four "));
        var layout = new BorderPane(textFlow);
        layout.setOnMouseClicked(e -> {
            textFlow.getChildren().addAll(new Text("myAdd "));
        });
        Scene scene = new Scene(layout, 200, 50);
        stage.setScene(scene);
        stage.show();
    }

    public static void main(String args[]) {
        launch(args);
    }
}
```

If you run the code above and click on the text, it should add a new word: myAdd. However, it only renders the first few pixels of the first character. My feeling is that it is related to the initial calculation of the width of the flow, so I tried requestLayout on the TextFlow or the containing pane, but it didn't help. Shouldn't this just work out of the box? Just add a new Text node to the flow's children, which is an observable list, so the flow should update its layout?

This looks like a bug. I noticed a couple of things: 1. If you add any style to the text flow (I was adding a background color to see the size of it), it seems to work fine. 2. If you switch to a different container (e.g. an HBox), it also seems to work fine.

Interesting, thanks! Worth reporting as a bug? The container solution won't work for me because I want to use a custom layout manager (where the issue originally came up), but with some styling trick I might actually be able to work around the issue.

Yes, certainly. Another workaround is to call textFlow.layout() in the event handler. You might be able to make the layout() call in your custom pane/region layoutChildren() method. That would be a more suitable place for it. I don't fully understand the bug though, so I don't know if that would work.

This looks like a bug (which I'd recommend reporting). Interestingly, it only seems to manifest in a BorderPane, but not in an HBox. It also doesn't show up if you add style to the text flow. If those workarounds aren't usable, calling textFlow.autosize() to recompute the size of the text flow seems to work. Since you say in the comments that you're using a custom pane to contain the text flow, you may find a convenient place to do that in the custom pane's layoutChildren() method, or similar. As a worst case, you can just call it in the event handler:

```java
layout.setOnMouseClicked(e -> {
    textFlow.getChildren().addAll(new Text("myAdd "));
    textFlow.autosize();
});
```
How do you hide an image in Tumblr's permalink page?

I want to have my own thumbnails for video posts on my main Tumblr page, and show the video only on the permalink page, without the thumbnail. Here's an example site of what I'm trying to do: http://devour.com/ (not Tumblr, but the same idea). I figured the best method is to post the thumbnail as a photo post, with the video embedded, and then hide the image on the permalink page. So my question is: how do I hide the image on the permalink page?

Rather than hiding the image on the permalink page, use the theme operators {block:IndexPage} and {block:PermalinkPage} to include only the content that you need:

```html
{block:Posts}
  {block:Photo}
    <!-- On the Index, show only the Image / Cover -->
    {block:IndexPage}
      <img src="{PhotoURL-500}">
    {/block:IndexPage}
    <!-- On the Permalink, show only the Caption, which contains the video -->
    {block:PermalinkPage}
      {block:Caption}
        {Caption}
      {/block:Caption}
    {/block:PermalinkPage}
  {/block:Photo}
{/block:Posts}
```

Reference: http://www.tumblr.com/docs/en/custom_themes#basic_variables
Term doesn't evaluate to a function taking 1 argument in a std algorithm, when using a unary op held in std::unique_ptr

I am trying to use a functor held in a std::unique_ptr with the std algorithms, but if I use it like this:

```cpp
std::unique_ptr<IFormatter> format(new formatter("abcd"));
std::transform(vec.begin(), vec.end(), vec.begin(), *format.get());
```

I get a compilation error saying:

    Error 1 error C2064: term does not evaluate to a function taking 1 arguments.

Complete program below:

```cpp
#include <iostream>
#include <string>
#include <algorithm>
#include <vector>

using namespace std;

struct IFormatter {
protected:
    std::string keyFormat;
    void fun(std::string& str) {
        // do something here.
    }
public:
    IFormatter(std::string keyFormat) : keyFormat(keyFormat) {}
    virtual std::string operator()(const std::string& key) = 0;
};

struct formatter : IFormatter {
    formatter(std::string keyFormat) : IFormatter(keyFormat) {}
    std::string operator()(const std::string& key) {
        std::string s = keyFormat;
        fun(s);
        return s;
    }
};

int main() {
    std::vector<std::string> vec{ "one", "two" };
    std::unique_ptr<IFormatter> format(new formatter("$ID$"));
    // This line compiles fine if I define format as: formatter format("abcd");
    std::transform(vec.begin(), vec.end(), vec.begin(), format.get());
    return 0;
}
```

EDIT: Thanks all for the suggestions, but format.get() was a typo; I was using *format.get(), and I had tried *format as well (I agree that get() is not really needed). Using std::ref(*format) isn't solving the problem; strangely, I still get the same error. P.S. If it matters, I am using Visual Studio 2013.

Quick guess: try *format.get(). A pointer to a formatter is not callable; a reference to a formatter is callable.

@BoBTFish: .get() is unneeded, *format is enough and more idiomatic.

std::transform(vec.begin(), vec.end(), vec.begin(), std::ref(*format)); A pointer isn't callable. But you have a pointer-to-base, so use std::ref to avoid slicing.

Since MSVC 2013's reference_wrapper hates abstract classes, use a lambda:

```cpp
std::transform(vec.begin(), vec.end(), vec.begin(),
               [&](const std::string& s) { return (*format)(s); });
```

Thanks for mentioning slicing. I had missed that part, but the original problem was still as is.

@user888270 Ah, MSVC bug.
How to set up Lerna with git-only

I have a React app using submodules to share code, and I want to swap that setup for Lerna. But instead of publishing to npm, I just want to use git only, where the code is published back to git using a commitish. Basically the folder is structured something like this:

```
app/
  src/
    components/
  shared/
    ui/
    utilities/
  index.js
  .gitmodules
  package.json
```

All submodules go under shared. My question is: how can I set up Lerna so that "app" has a dependency on "ui" and "utilities" using git only? Should the structure be like this:

```
lerna-app/
  packages/
    app/
    ui/
    utilities/
```

or like this?

```
app/
  packages/
    ui/
    utilities/
```
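If it helps, a minimal lerna.json for either layout might look like the following. This is a sketch using only Lerna's standard packages glob; whether npm publishing can be skipped entirely in favor of git commitish dependencies depends on your Lerna version and workflow, so treat the git-only part as an assumption to verify against the Lerna docs.

```json
{
  "version": "independent",
  "packages": ["packages/*"]
}
```

With this at the repo root, `lerna bootstrap` wires up the local cross-dependencies between the listed packages. Note that in the second layout the root app itself is not a Lerna package, which is the main argument for the first (lerna-app/packages/app) structure, where app, ui and utilities are all siblings under packages/.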
CTCallCenter: how to implement it effectively

I am developing a call-related application. I need to get events about call states, so that calls can be logged and the UI updated. When the application is active, for a missed call I get CTCallStateIncoming and CTCallStateDisconnected; for a received call I get CTCallStateIncoming and CTCallStateConnected, but CTCallStateDisconnected is not delivered in the latter case. Also, if the application is in the background, no events are delivered at all. I would like to know how to receive events from CTCallCenter in the background at all times and then have those events used directly by the app. Can somebody show me how to implement CTCallCenter's callEventHandler so that it works for all the cases above? Thanks.

CTCallCenter is not meant for this; it will not work in the background. CTCallCenter is meant for VoIP clients to detect call state.

I don't quite follow: if CTCallCenter won't work in the background, how do VoIP clients get the call state? Can you explain a little more?

Well, VoIP clients keep running in the background. Not all apps can run in the background; this is restricted to VoIP, location, audio and accessory apps.

Yes, I read that. So if I add a VoIP module to my app, will that solve the problem? Is it worth it, or are there other ideas to achieve this?

Well, unless your app really is a VoIP app, Apple will reject it. What you want is just very difficult, and close to impossible, on iOS.

Thanks. Since you didn't say a flat "no", can you shed some light? I want to give it a try.

No, I cannot, since I've never done something like that.
Using Alexander's Theorem to show that the sphere $S^3$ is a prime manifold

I'm completely aware of the triviality of this question, but for some reason I can't visualize the argument. In Hatcher's 3-manifold notes, the following form of Alexander's theorem is given:

Every embedded 2-sphere in $\mathbb{R}^3$ bounds an embedded 3-ball.

Later, he uses this fact to conclude that $S^3$ is prime (recall that a 3-manifold $M$ is prime if, whenever $M=P\#Q$, either $P=S^3$ or $Q=S^3$, where $P\#Q$ denotes the connected sum of $P$ and $Q$), stating merely that every 2-sphere in $S^3$ bounds a 3-ball. I was hoping to visualize why this is true, and so far I'm not having any luck; I'm hoping someone can help.

Things I've read: From Hatcher's notes, it's mentioned that the trivial decomposition $M=M\# S^3$ is obtained by choosing the sphere $S$ (in the connected sum decomposition process) which bounds a ball in $M$. I realize that knowing this implies my result immediately, but it doesn't help me see the result. Elsewhere, Hatcher states that the result follows from the fact that every 2-sphere in $S^3$ bounds a ball on each side. I've seen this justification elsewhere as well, but I can't visualize this one any more easily.

I guess what I'm looking for is a direct proof of some kind. I tried to construct one as follows, but I'm stuck almost immediately after doing all the obvious things. Note that for a sphere $S$ in $M$, $M|S$ is Hatcher's notation for the manifold obtained by splitting $M$ along $S$, i.e. by removing an open tubular neighborhood $N(S)$ of $S$ from $M$.

Suppose that $S^3=P\#Q$. By definition of connected sum, there exists a sphere $S$ in $S^3$ such that $S^3|S$ has two components $P'$ and $Q'$, where $P$, respectively $Q$, is obtained from $P'$, respectively $Q'$, by filling in the boundary sphere corresponding to $S$ with a ball. By Alexander's theorem, $S$ bounds a 3-ball $B$ in $S^3$, so....

And that's it. I really have no idea how to proceed, and despite this being truly one of the most trivial things imaginable, I'm at a loss. Knowing why this is true for $S^3$ would be great, but knowing why a general $M^3$ is trivially decomposed when $S$ bounds a ball would probably be much more helpful from a big-picture perspective.

Here's what I think is happening. Saying that any (smooth) embedding of $S^2$ in $\mathbb{R}^3$ bounds a 3-ball is saying that the interior region of the embedded sphere is already a 3-ball. Thus any time you embed $S^2$ smoothly in $S^3$ (viewed as the one-point compactification of $\mathbb{R}^3$), it is automatically splitting $S^3$ into two 3-balls, one on each side. Thus the statement implies right away that $P$ and $Q$ are already 3-balls.

@BenBlum-Smith - I understand what you're saying, but I still don't see why. Clearly, embedding $S^2$ smoothly into $\mathbb{R}^3$ (or thus into $S^3$) makes the interior region a 3-ball; I guess I'm confused as to why the exterior of an $S^2$ embedded into $S^3$ is necessarily a 3-ball? I agree that $S^3$ is the one-point compactification of $\mathbb{R}^3$... I don't know what I'm missing or why, but I'm definitely missing something.

Maybe this helps? The exterior region is homeomorphic to the interior region. You can see this by thinking about reflection in the unit sphere in $\mathbb{R}^3$. This is the map $\mathbb{R}^3\rightarrow\mathbb{R}^3$ obtained by taking a point at distance $r$ from the origin and, keeping it on the same ray from the origin, relocating it to distance $1/r$. It is defined everywhere except at the origin, and it fixes the points of the unit sphere. But if $\mathbb{R}^3$ is embedded in $S^3$, you can extend the map to the origin, and it interchanges the origin with the point at infinity. This gives a smooth homeomorphism between the interior ball and the exterior "ball" of the unit sphere.

As Dan notes, you can also get some intuitive mileage by considering the analogy with the 2-dimensional case. What I was describing also works in the plane: reflection in the unit circle. Viewing the plane as $\mathbb{C}$, it's the map $z\mapsto 1/\overline{z}$; $z\mapsto 1/z$ would also make the point. Viewed as a map on the Riemann sphere (aka $S^2$ seen as the one-point compactification of $\mathbb{C}$), it exchanges the interior disc with the exterior "disc" of the unit circle.

Imagine an open ball $B^2$ in $\mathbb R^2$. If you consider the one-point compactification of the complement, you get a closed ball. You could also consider an open ball in the one-point compactification of $\mathbb R^2$, which is $S^2$: take the ball out and you'll see another ball. The reason for this is that the boundary of the embedded ball, namely $S^1 = \partial B^2$, closes up at the other end, namely $\infty$. This is how you can imagine the higher-dimensional analogue: the boundary of the ball is also the boundary of another ball. You could also imagine a bicollar of a codimension-1 sphere; it will close up at both ends. Hope it helps your intuition.
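The inversion argument sketched in the comments can be written out as a short derivation (this is only a restatement of the facts already given above, using the standard formula for inversion in the unit sphere):

```latex
% Inversion in the unit sphere, as described in the comments.
\[
  \iota : \mathbb{R}^3 \setminus \{0\} \to \mathbb{R}^3 \setminus \{0\},
  \qquad \iota(x) = \frac{x}{\lVert x \rVert^2},
\]
so that $\lVert \iota(x) \rVert = 1/\lVert x \rVert$ and $\iota$ fixes the
unit sphere pointwise. Viewing $S^3 = \mathbb{R}^3 \cup \{\infty\}$ and
extending by $\iota(0) = \infty$ and $\iota(\infty) = 0$, the map $\iota$
becomes a homeomorphism of $S^3$ exchanging the interior and exterior
regions of the unit sphere. Hence if the interior of an embedded $S^2$ is a
$3$-ball (Alexander's theorem), the exterior region in $S^3$ is a $3$-ball
as well, so both pieces $P'$ and $Q'$ of $S^3|S$ are balls and $P = Q = S^3$.
```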
Replacing raw pointers in vectors with std::shared_ptr

I have the following structure:

```cpp
typedef Memory_managed_data_structure T_MYDATA;

std::vector<T_MYDATA *> object_container;
std::vector<T_MYDATA *> multiple_selection;
T_MYDATA * simple_selection;
```

Edit: this may be very important: Memory_managed_data_structure contains, among other things, a bare raw pointer to some other data.

It aims to be a very simple representation of an original container of memory-managed objects (object_container), plus a "multiple_selection" array (for selecting many objects in a range and doing various operations with them) and a "simple_selection" pointer (for doing those operations on a single object). The lifetime of all objects is managed by object_container, while multiple_selection and simple_selection just point to some of them. multiple_selection and simple_selection can be nullified as needed, and only object_container objects are ever deleted.

The system works just fine, but I am trying to get into shared_ptrs right now and would like to change the structure to something like:

```cpp
typedef Memory_managed_data_structure T_MYDATA;

std::vector<std::shared_ptr<T_MYDATA> > object_container;
std::vector<std::shared_ptr<T_MYDATA> > multiple_selection;
std::shared_ptr<T_MYDATA> simple_selection;
```

Again, object_container would be the "owner" and the rest would just point to its objects. My question is: would this scheme wreak havoc in the application? Is there something I should know before snowballing into these changes? Are shared_ptrs not the appropriate kind of pointer here? I can guarantee that no object exists in multiple_selection or simple_selection unless it is in object_container first. Of course, delete is never called on multiple_selection or simple_selection. Thanks for your time.

Edit: I forgot to mention that I have never used any of these automated pointers before, so I may be wildly confused about their uses. Any tips and rules of thumb will be greatly appreciated.

If the object container is to be the owner, then std::shared_ptr is not the right smart pointer.

I just realised that and edited the question... Any tips about what kind of structures I can use for the owner and for those that borrow from it?

std::unique_ptr is for unique ownership. That means that when the owning container goes out of scope, all the managed pointers will be deleted. If the container is to own the pointers, then it is up to you to make sure they do not get used after the container goes out of scope or erases its elements.

I could use those in the real container, then... What about the ones that borrow from it? Is there anything out there that sidesteps raw pointers? Thanks a lot for your time, Juanchpoanza.

Use raw pointers for the containers that borrow from object_container. Just make sure that these containers don't outlive object_container.

Sebastian, aren't those frowned upon? I don't really care as long as it does the job and is readable anyway. So: unique_ptr for object_container and raw pointers for borrowing... The only difference then would be that object_container would manage the memory itself. Noted. Thanks a lot.

You say that the object container is to be the "owner" of the objects in question. In that case, since you have a clear owning relationship, using std::shared_ptr is not ideal; rather, stick with what you have. However, if you cannot guarantee that a pointer has been removed from multiple_selection and/or simple_selection before it is deleted, you have to act. One possible action would be to use shared_ptr: in that case, an object could continue to exist in one of the selections even if it is removed (via shared_ptr::reset or just by assigning a null value) from object_container.

The other alternative is to make sure that objects get removed thoroughly: if something is to be deleted, remove ALL references to it from the selections and from object_container, and THEN delete it. If you strictly follow this scheme, you don't need the overhead of shared_ptr.

Thanks Kai. I can assure you there is a clear owning relationship, and the code works just as you described, thoroughly removing every single reference before anything is changed in the original vector. The thing is, I really want to learn the C++11 way of doing things, and I wonder if there is any combination of smart pointers that models the owner/borrowers relationship. Some useful tips are given in the other comments, but any others are welcome.

Chosen as the right answer. Not that the rest are wrong, but because I will follow its advice to stick with what I have: I just tried starting the change and the snowball effect was daunting.
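To make the owner/borrower split concrete, here is a hedged sketch (Model and Data are made-up names, not the asker's types): unique_ptr owns, the selections hold non-owning raw pointers, and erasure scrubs every borrowed reference before the owner releases the memory, exactly as the answer prescribes.

```cpp
#include <algorithm>
#include <memory>
#include <vector>

struct Data { int id; };

struct Model {
    std::vector<std::unique_ptr<Data>> objects;  // owner: manages lifetime
    std::vector<Data*> selection;                // borrower: non-owning
    Data* current = nullptr;                     // borrower: non-owning

    Data* add(int id) {
        // C++11-compatible construction (make_unique is C++14).
        objects.push_back(std::unique_ptr<Data>(new Data{id}));
        return objects.back().get();
    }

    // Remove every borrowed reference first, THEN let unique_ptr delete.
    void erase(Data* d) {
        selection.erase(std::remove(selection.begin(), selection.end(), d),
                        selection.end());
        if (current == d) current = nullptr;
        objects.erase(
            std::remove_if(objects.begin(), objects.end(),
                           [d](const std::unique_ptr<Data>& p) {
                               return p.get() == d;
                           }),
            objects.end());
    }
};
```

The invariant (no selection entry without a matching owner entry) lives in one place, which is the main readability win over scattering the cleanup through the code.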
common-pile/stackexchange_filtered
javascript/jquery/mootools/other js lib - can I write text over a CSS3 gradient I have a script that generates gradients using CSS3. I want to do two things with the gradient, either separately or together: (a) Allow the user to write some text over the gradient at a location chosen by the user (indicated using the mouse cursor). (b) Allow the user to choose from a set of fonts to be used when writing the text - the font can be a custom font or from the Google Web Fonts collection. I plan to implement the above in a GWT application, so any kind of JavaScript, including libraries like MooTools/jQuery, is acceptable for this work. possible duplicate of javascript/jquery/mootools/other js lib- can i save a css3 gradient as a image? If your question is simply, "Can I write text over a CSS3 gradient," your answer is Yes. CSS gradients, when used, act as a background to an element. Having a gradient background on an element is no different than declaring the background-color red or white.
common-pile/stackexchange_filtered
Using Macaulay2 to Write the Canonical Module as a Quotient of a Free Module Let $S$ be a polynomial ring over a field. Let $I$ be a homogeneous parameter ideal of $S.$ Observe that $S/I$ is an Artinian local ring, so it is Cohen-Macaulay, and it is finitely generated as an $S$-module. Even more, the projection of $S$ onto $S/I$ maps the homogeneous maximal ideal of $S$ onto the maximal ideal of $S/I,$ hence there exists a canonical module of $S/I.$ Explicitly, it is $\operatorname{Ext}_S^{\dim S}(S/I, S).$ One can find this canonical module by writing a (finite) minimal free resolution of $S/I$ as an $S$-module; applying $\operatorname{Hom}_S(-, S)$; and taking cohomology. Let $S = \Bbb Q[x_1, \dots, x_5]$ and $I = (x_1 x_3, x_1 x_4, x_2 x_4, x_2 x_5, x_3 x_5) + (x_i^2 \mid 1 \leq i \leq 5).$ (One might recognize this as the sum of the Stanley-Reisner ideal of the five-cycle and the ideal generated by the squares of all of the variables.) I am attempting to use Macaulay2 to express the canonical module of $S/I$ as a quotient of a free $S$-module. Unfortunately, when I use the command Ext^5(S^1/I, S^1) in Macaulay2, it produces the following output that I am unable to interpret.

cokernel {-7} | x_5 x_4 x_2 0 x_3 0 0 0 0 0 0 0 0 0 0 x_1 0 0 0 0 |
         {-7} | 0 0 0 x_5 -x_4 x_3 x_2 0 0 x_1 0 0 0 0 0 0 0 0 0 0 |
         {-7} | 0 0 0 0 0 0 0 x_5 x_3 -x_2 x_1 x_4 0 0 0 0 0 0 0 0 |
         {-7} | 0 0 0 0 0 0 0 0 0 0 0 x_5 x_4 x_3 x_1 0 0 x_2 0 0 |
         {-7} | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x_5 x_4 -x_3 x_2 x_1 |

If I am correct, {-7} refers to the degree of the generators as a graded homomorphism; however, I'm not sure how this matrix $A$ is acting. Considering that this is a $5 \times 20$ matrix with entries in $S,$ I believe that this is acting by left-multiplication on the column vectors of $S^{\oplus 20},$ but I am not sure what a vector there looks like.
I have searched extensively for an interpretation of this output in the Macaulay2 documentation, but I have not found anything to settle my question. Ultimately, if I can decipher the output $\operatorname{cokernel}(A),$ then I would like to use the fact that $\omega_{S/I} = \operatorname{Ext}_S^5(S/I, S) = \operatorname{coker}(A) = S^{\oplus 5} / \operatorname{im}(A)$ is the canonical module of $S/I$ to determine the idealization of $\omega_{S/I}$ over $S.$ I would appreciate any insight or suggestions. Notice that the ideal $I$ is not a Stanley--Reisner ideal for the cycle (these are always square-free, you want the complementary set of generators, or just remove the squares). Also, $S$ is just a polynomial ring, so what do you mean by "I am not sure what a vector there looks like."? You are correct. For one, I had not written down the correct ideal, but also, I failed to mention that I wanted to add the sum of all squares. As for your other question, Macaulay2 has described the canonical module as the cokernel of some map induced by (left-)multiplication by some $5 \times 20$ matrix with entries in $S.$ In order to find this, I imagine I would first attempt to describe the image of $A.$ Either way, what I had would have produced the sum of the Stanley-Reisner ideal of the five-cycle (with a possibly non-standard labeling) and the ideal generated by the squares of the variables. I apologize for the confusion. I am not sure if this will clear your confusion, but let me just go through some of what you say in your post. First, notice that if $\Delta$ is some abstract simplicial complex over a set $X$, then the Stanley--Reisner ideal $I_\Delta$ is generated by the square-free monomials $x_{i_1}\cdots x_{i_k}$ where $\{i_1,\ldots,i_k\}$ is not in $\Delta$. If you take the $5$-cycle to be $12,23,34,45,51$, then what you wrote is somewhat the opposite of this, though if you remove the squares you will get the SR-ideal for the cycle $13524$. Now, for the resolutions.
You can ask M2 to tell you what the resolution of $I$ as an $S$-module looks like, and then ask for the transpose of the $5$th differential:

o53 = R
o53 : PolynomialRing

i54 : I = ideal(x1*x2,x2*x3,x3*x4,x4*x5,x5*x1,x1^2,x2^2,x3^2,x4^2,x5^2)

o54 = ideal (x1*x2, x2*x3, x3*x4, x4*x5, x1*x5, x1^2, x2^2, x3^2, x4^2, x5^2)
o54 : Ideal of R

i55 : M = module R/I

o55 = cokernel | x1x2 x2x3 x3x4 x4x5 x1x5 x1^2 x2^2 x3^2 x4^2 x5^2 |
o55 : R-module, quotient of R^1

i56 : C = res M

o56 = R^1 <-- R^10 <-- R^25 <-- R^31 <-- R^20 <-- R^5 <-- 0
      (homological degrees 0 through 6)
o56 : ChainComplex

i57 : (Hom(C,module R)).dd_(-4)

o57 = {-7} | x5 0 -x4 x3 0 -x2 x1 0 0 0 0 0 0 0 0 0 0 0 0 |
      {-7} | 0 -x5 0 0 -x4 0 0 0 x3 0 -x2 x1 0 0 0 0 0 0 0 0 |
      {-7} | 0 0 -x5 0 0 x4 0 0 0 -x3 0 -x2 x1 0 0 0 0 0 0 0 |
      {-7} | 0 0 0 0 0 x5 0 0 0 0 0 0 0 x4 0 x3 -x2 x1 0 0 |
      {-7} | 0 0 0 0 0 0 0 x5 0 0 0 0 0 0 x4 0 x3 0 -x2 x1 |

o57 : Matrix R^5 <--- R^20

Thus, you can see that the matrix that computes $\mathrm{Ext}^5_S(S/I,S)$ is the above; I assume the one that M2 gave you is obtained from this one by some elementary operations. At any rate, you compute this module as the quotient of $S^5$ by the image of this matrix. I apologize for the initial confusion. Thank you for your helpful response. Call your matrix $A.$ I understand that Ext will be the quotient of $S^{\oplus 5}$ by the image of this matrix, but I'm still a little uncertain as to what is being multiplied by this matrix to produce the image. By the way, am I correct to assume that the copies of {-7} appearing to the left of each row of the matrix are the degrees of the map? If so, then it should be important to keep track of this in order to determine the idealization, because it is a graded module.
common-pile/stackexchange_filtered
Flutter removeWhere command I have created a list builder of tasks with a CheckBox. I've created a button that, on click, should delete only the tasks whose checkbox is true. This is the code where I want to remove only the tasks with selected checkboxes

widget.selectedList.productsAndStatus.removeWhere(() => true);
await ProductsList.editSavedListName(widget.selectedList, widget.selectedListIndex);
setState(() {});

Can you help me with what to write instead of removeWhere(() => true), please? Here is the checkbox code

Widget productCell(int index, Map productCell){
  bool productStatus = productCell["productStatus"];
  return GestureDetector(
    onTap: () {
      productStatus = !productStatus;
      setState(() {
        editProductStatus(productStatus, index);
      });
    },

this is the selectedList class

class SavedProducts extends StatefulWidget {
  ProductsList selectedList;
  int selectedListIndex;
  SavedProducts({this.selectedList, this.selectedListIndex});
  @override
  _SavedProductsState createState() => _SavedProductsState();
}

Can you share a bit more code? Where does widget.selectedList come from? Is it your parent list? How do you know that an item is selected in the parent list? I updated the code; the selectedList is in the class @BabC. BabC asked good questions; have you tried something like:

widget.selectedList.productsAndStatus.removeWhere((item) => item['productStatus'] == true);

See the docs for removeWhere
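For reference, removeWhere is just predicate-based removal: the callback receives each item, and the items for which it returns true are dropped. The same idea sketched in Python rather than Dart (the list of maps mirrors productsAndStatus; all names are illustrative):

```python
products_and_status = [
    {"productName": "milk", "productStatus": True},
    {"productName": "eggs", "productStatus": False},
    {"productName": "tea",  "productStatus": True},
]

# Equivalent of: list.removeWhere((item) => item['productStatus'] == true)
# i.e. keep only the items whose checkbox is NOT ticked.
products_and_status = [
    item for item in products_and_status if not item["productStatus"]
]

print(products_and_status)  # only the unchecked item remains
```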
common-pile/stackexchange_filtered
Scalability of C server implementation based on pthreads I am wondering about the feasibility of the following basic implementation of a server and how well it would scale. I know that large-scale, distributed servers should probably be written in a language like Erlang, but I'm interested in the viability of the following code "these days". Other than bugs/issues, I'd primarily like to know 3 things: C headers have many compatibility methods/structs/etc., some of which do similar things. Is this a correct "modern" way to handle incoming IPv4 and IPv6 connections? How scalable is it? If I have a single VPS and don't need a distributed server, is it adequate for today's applications? (Potentially thousands/millions of concurrent connections? I appreciate the latter would also very much be hardware dependent!)

// SimpleCServer.c
// Adapted from http://beej.us/guide/bgnet/output/print/bgnet_A4.pdf
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <pthread.h>

// The port users will be connecting to
#define PORT "12345"

// Prototype for processing function
void *processRequest(void *sdPtr);

// Get sockaddr, IPv4 or IPv6
void *get_in_addr(struct sockaddr *sa)
{
    if (sa->sa_family == AF_INET) {
        return &(((struct sockaddr_in*)sa)->sin_addr);
    }
    return &(((struct sockaddr_in6*)sa)->sin6_addr);
}

int main(int argc, char *argv[])
{
    // Basic server variables
    int sockfd = -1;    // Listen on sock_fd
    int new_fd;         // New connection on new_fd
    int yes = 1;
    int rv;
    struct addrinfo hints, *servinfo, *p;
    struct sockaddr_storage their_addr; // connector's address information
    socklen_t sin_size;
    char s[INET6_ADDRSTRLEN];

    // pthread variables
    pthread_t workerThread;     // Worker thread
    pthread_attr_t threadAttr;

    // Set up detached thread attributes
    pthread_attr_init(&threadAttr);
    pthread_attr_setdetachstate(&threadAttr, PTHREAD_CREATE_DETACHED);

    // Server hints
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE; // use my IP

    if ((rv = getaddrinfo(NULL, PORT, &hints, &servinfo)) != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
        return 1;
    }

    // Loop through all the results and bind to the first we can
    for (p = servinfo; p != NULL; p = p->ai_next) {
        if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
            perror("server: socket");
            continue;
        }
        if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(int)) == -1) {
            perror("setsockopt");
            exit(2);
        }
        if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
            close(sockfd);
            perror("server: bind");
            continue;
        }
        break;
    }

    if (p == NULL) {
        fprintf(stderr, "server: failed to bind\n");
        return 3;
    }

    // All done with this structure
    freeaddrinfo(servinfo);

    // SOMAXCONN - Maximum queue length specifiable by listen. (128 on my machine)
    if (listen(sockfd, SOMAXCONN) == -1) {
        perror("listen");
        exit(4);
    }

    printf("server: waiting for connections...\n");

    // Main accept() loop
    while (1) {
        // Accept
        sin_size = sizeof their_addr;
        new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
        if (new_fd == -1) {
            perror("accept");
            continue;
        }

        // Get IP Address for log
        inet_ntop(their_addr.ss_family,
                  get_in_addr((struct sockaddr *)&their_addr),
                  s, sizeof s);
        printf("server: got connection from %s\n", s);

        // Process the request on a new thread. Spawn (detaching) worker thread
        pthread_create(&workerThread, &threadAttr, processRequest,
                       (void *)((intptr_t)new_fd));
    }

    return 0;
}

void *processRequest(void *sdPtr)
{
    int sd = (int)(intptr_t)sdPtr;
    fprintf(stderr, "Processing fd: %d\n", sd);

    // Processing goes here
    FILE *fpIn = fdopen(sd, "r");
    FILE *fpOut = fdopen(sd, "w");
    fprintf(fpOut, "Processing fd %d on server.", sd);
    fflush(fpOut);
    // fclose(fpIn);
    fclose(fpOut);
    close(sd);
    return NULL;
}

First of all, each connection consumes a local port.
Therefore, the number of concurrent connections is hard-limited by an unsigned short, that is, 65536 (so millions are out of the question). There are also other limitations you may or may not care about. Second, thread creation is somewhat expensive. Consider pre-allocating a thread pool. Third, code for reading data from the connection is missing. I assume that the intention is for each thread to issue a recv system call. This may lead to many thousands of outstanding system calls, each consuming kernel resources. Using poll is way more scalable. Finally, a code review: your main does way too much. Variables are declared too far away from their uses. Consider restructuring. At least 2 functions (setup_listener_socket and mainloop) should be factored out. Thanks! I am not 100% sure what you mean in the first paragraph; however, this SF question seems to indicate more than 65536 connections are possible. I did wonder about thread pools though; I shall go hunting for a decent implementation. "Each connection consumes a local port." This is plain wrong! The server listens on exactly one port, that's it. The accepted socket does not occupy any port. What each connection does use is a socket descriptor, which indeed is a limited system resource. Nope. Same local port for all connections. I stand corrected. +1 for thread pools. You definitely don't want 1 million threads fighting for the scheduler's attention.
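The correction in the comments (all accepted sockets share the server's one listening port; what each connection costs is a file descriptor, not a port) is easy to check empirically. A small Python sketch, used here only because it is compact; the same holds for the C server above:

```python
import socket

# Listener bound to an ephemeral loopback port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(8)
port = srv.getsockname()[1]

# Two independent client connections to the same server.
c1 = socket.create_connection(("127.0.0.1", port))
c2 = socket.create_connection(("127.0.0.1", port))
a1, _ = srv.accept()
a2, _ = srv.accept()

# Both accepted sockets report the SAME local port: the listener's.
# What distinguishes the connections is the remote (client) endpoint.
local_ports = (a1.getsockname()[1], a2.getsockname()[1])
peers_differ = a1.getpeername() != a2.getpeername()
print(local_ports, peers_differ)  # both entries equal `port`; peers_differ is True

for s in (c1, c2, a1, a2, srv):
    s.close()
```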
common-pile/stackexchange_filtered
Looping through a Python dictionary and manipulating each value I am a fairly new Python user and I am stuck on a problem. Any guidance would be greatly appreciated. I have a pandas data frame with three columns 'ID', 'Intervention', and 'GradeLevel'. See the code below:

data = [[100,'Long', 0], [101,'Short', 1], [102,'Medium', 2], [103,'Long', 0], [104,'Short', 1], [105,'Medium', 2]]
intervention_df = pd.DataFrame(data, columns=['ID', 'Intervention', 'GradeLevel'])

I then created a dictionary of data frames grouped by 'Intervention'. See the code below:

intervention_dict = {Intervention: dfi for Intervention, dfi in df.groupby('Intervention')}

My question is: can you loop through the values of the dictionary and manipulate each value of the dictionary? Specifically, I am trying to reference a look-up table. The lookup table can be thought of as a roster. My goal is to label anyone in the roster as either 'Yes - Intervention Name' or 'No Intervention'. It becomes tricky because, let's say, the Long intervention has only GradeLevel 0. That means I would want to tag anyone in the intervention_df with grade level 0 as 'Yes - Long' and anyone not in the intervention_df as 'No - Long'; this would become a new column called 'Value'.
I would also need to create another variable 'Category' which would specify the intervention name; in this example it would simply be 'Long'.

lookup_data = [[100, 0], [101, 1], [102, 2], [103, 0], [104, 1], [105, 2], [106, 0], [107, 0], [108, 2], [109, 1]]
lookup_df = pd.DataFrame(lookup_data, columns=['ID', 'GradeLevel'])

For example, the 'Long' dictionary would look like this after the processing:

longint_data = [[100,'Long', 'Yes - Long'], [103,'Long', 'Yes - Long'], [106,'Long', 'No - Long'], [107,'Long', 'No - Long']]
longint_df = pd.DataFrame(longint_data, columns=['ID','Category', 'Value'])

The desired final output after all manipulation would look like this:

result_data = [[100,'Long', 'Yes - Long'], [101,'Short','Yes - Short'], [102,'Medium','Yes - Medium'], [103,'Long', 'Yes - Long'], [104,'Short','Yes - Short'], [105,'Medium','Yes - Medium'], [106,'Long', 'No - Long'], [107,'Long', 'No - Long'], [108,'Medium','No - Medium'], [109,'Short','No - Short']]
result_df = pd.DataFrame(result_data, columns=['ID','Category', 'Value'])

Thank you! It seems like you're making this more complicated than it needs to be; I'm confused by all the looping and different dataframes. Why not just a single join? Ah shoot, I forgot to explain a part... For each intervention I would only want "No" for people of the same grade level. For example, the Long intervention has only grade level 0; therefore I would only want to merge with people with grade level 0. I forgot to add the step of filtering the lookup_df to only the unique grades in that specific intervention. Still pretty confusing. You have things like 109, 'Short', 'No - Short' in your results dataframe, but nowhere else is 109, 'Short' referenced. 109 itself is referenced in the lookup_df, but no mention of Short. I edited my initial question with slightly more explanation about the lookup_df. It is a roster. So let's say you have an intervention for only kindergartners at a school.
I want to take a list of the students in the intervention and compare it with the whole class roster. If a student is in the intervention they would be tagged with a 'Yes', and if they are not in the intervention they would be tagged with a 'No'. Here is a solution without using the dictionary intervention_dict. Below is your data, which I get from your commands:

In [1048]: intervention_df
Out[1048]:
    ID Intervention  GradeLevel
0  100         Long           0
1  101        Short           1
2  102       Medium           2
3  103         Long           0
4  104        Short           1
5  105       Medium           2

In [1049]: lookup_df
Out[1049]:
    ID  GradeLevel
0  100           0
1  101           1
2  102           2
3  103           0
4  104           1
5  105           2
6  106           0
7  107           0
8  108           2
9  109           1

Step 1: Do an outer merge between lookup_df and intervention_df, create column Value, and set_index to GradeLevel:

In [1059]: df = lookup_df.merge(intervention_df, on=['ID', 'GradeLevel'], how='outer').assign(Value='Yes - '+intervention_df['Intervention']).set_index('GradeLevel')

In [1060]: df
Out[1060]:
             ID Intervention         Value
GradeLevel
0           100         Long    Yes - Long
1           101        Short   Yes - Short
2           102       Medium  Yes - Medium
0           103         Long    Yes - Long
1           104        Short   Yes - Short
2           105       Medium  Yes - Medium
0           106          NaN           NaN
0           107          NaN           NaN
2           108          NaN           NaN
1           109          NaN           NaN

Step 2: Create df_fillna to fill NaN in df:

In [1063]: df_fillna = intervention_df.groupby('Intervention').head(1).assign(Value='No - '+intervention_df['Intervention']).set_index('GradeLevel')

In [1064]: df_fillna
Out[1064]:
             ID Intervention        Value
GradeLevel
0           100         Long    No - Long
1           101        Short   No - Short
2           102       Medium  No - Medium

Step 3 (final): Use combine_first to fill NaN in df from df_fillna values, reset_index to drop 'GradeLevel', and do sort_values on 'ID':

In [1068]: df.combine_first(df_fillna).sort_values('ID').reset_index(drop=True)
Out[1068]:
    ID Intervention         Value
0  100         Long    Yes - Long
1  101        Short   Yes - Short
2  102       Medium  Yes - Medium
3  103         Long    Yes - Long
4  104        Short   Yes - Short
5  105       Medium  Yes - Medium
6  106         Long     No - Long
7  107         Long     No - Long
8  108       Medium   No - Medium
9  109        Short    No - Short

This is what I feel like you're going for, but without a clearer explanation I'm not sure.

data = [[100,'Long', 0], [101,'Short', 1], [102,'Medium', 2], [103,'Long', 0], [104,'Short', 1], [105,'Medium', 2]]
intervention_df = pd.DataFrame(data, columns=['ID', 'Intervention', 'GradeLevel'])
lookup_data = [[100, 0], [101, 1], [102, 2], [103, 0], [104, 1], [105, 2], [106, 0], [107, 0], [108, 2], [109, 1]]
lookup_df = pd.DataFrame(lookup_data, columns=['ID', 'GradeLevel'])

df = pd.merge(intervention_df.assign(y='Yes'), lookup_df, on=['ID', 'GradeLevel'], how='outer')
df.loc[df.y.isnull(), 'y'] = 'No'

    ID Intervention  GradeLevel    y
0  100         Long           0  Yes
1  101        Short           1  Yes
2  102       Medium           2  Yes
3  103         Long           0  Yes
4  104        Short           1  Yes
5  105       Medium           2  Yes
6  106          NaN           0   No
7  107          NaN           0   No
8  108          NaN           2   No
9  109          NaN           1   No
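Both answers reduce to a merge followed by tagging the unmatched rows. As a complementary sketch (my own variant, not taken from the answers), pandas' indicator=True flag records which side each row came from, which yields the Yes/No tag directly; the grade-to-intervention map is derived from intervention_df under the asker's one-grade-per-intervention assumption:

```python
import pandas as pd

data = [[100, 'Long', 0], [101, 'Short', 1], [102, 'Medium', 2],
        [103, 'Long', 0], [104, 'Short', 1], [105, 'Medium', 2]]
intervention_df = pd.DataFrame(data, columns=['ID', 'Intervention', 'GradeLevel'])

lookup_data = [[100, 0], [101, 1], [102, 2], [103, 0], [104, 1],
               [105, 2], [106, 0], [107, 0], [108, 2], [109, 1]]
lookup_df = pd.DataFrame(lookup_data, columns=['ID', 'GradeLevel'])

# indicator=True adds a _merge column saying whether each roster row matched.
df = lookup_df.merge(intervention_df, on=['ID', 'GradeLevel'],
                     how='left', indicator=True)

# One intervention name per grade level, taken from intervention_df itself.
grade_to_name = (intervention_df.drop_duplicates('GradeLevel')
                 .set_index('GradeLevel')['Intervention'])

df['Category'] = df['GradeLevel'].map(grade_to_name)
tag = df['_merge'].eq('both').map({True: 'Yes - ', False: 'No - '})
df['Value'] = tag + df['Category']

result_df = df[['ID', 'Category', 'Value']]
print(result_df)
```

This reproduces the desired result_df from the question, including the 'No - Short' tag for ID 109.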
common-pile/stackexchange_filtered
I need an app code to be corrected. The app displays a spectrogram, but there is an error that does not allow plotting bigger spectrograms I need app code to be corrected. The app displays a spectrogram, but there is an error that does not allow plotting bigger spectrograms, and I get the error

W/OpenGLRenderer: Bitmap too large to be uploaded into a texture (32768x1, max=16384x16384).

Welcome to Stack Overflow! The error message is pretty clear, actually. Maybe you should break the bitmap into two smaller portions. Bitmap too large. You need a smaller Bitmap. Try compressing the Bitmap, for example:

// This is to get the bitmap from resources
Bitmap bitmap = ((BitmapDrawable) getResources().getDrawable(R.drawable.imageBitmap)).getBitmap();
ByteArrayOutputStream out = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 25, out);
// Now the bitmap is compressed and use 'bitmap' anywhere

Thanks, but I need proper app code for this error.
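The "break the bitmap into smaller portions" suggestion is mostly arithmetic: split the oversized dimension into chunks no larger than the GL texture limit reported in the error. A sketch of that calculation only (plain arithmetic, not Android API; the function name is made up):

```python
import math

def split_extent(size, max_side=16384):
    """Return (offset, length) chunks covering `size` pixels,
    each chunk no longer than max_side."""
    n = math.ceil(size / max_side)        # how many chunks are needed
    chunk = math.ceil(size / n)           # spread the pixels evenly
    return [(i * chunk, min(chunk, size - i * chunk)) for i in range(n)]

# The failing bitmap from the error message: 32768 px wide vs a 16384 limit.
print(split_extent(32768))  # [(0, 16384), (16384, 16384)]
```

Each (offset, length) pair would then correspond to one sub-bitmap small enough to upload as a texture.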
common-pile/stackexchange_filtered
CompositeItemWriter with Wrapper reader -> ingests a class named Person; processor -> ingests Person and returns a wrapper object that has 3 fields, all of type Person (1. inputPerson, 2. outputPerson, 3. output); writer -> ingests this wrapper and should write the first two fields to one file, and the third one to a second file as XML. This is the code that I have written for this problem:

@Bean
public CompositeItemWriter<Wrapper> compositeItemWriter() {
    CompositeItemWriter<Wrapper> writer = new CompositeItemWriter<>();
    writer.setDelegates(Arrays.asList(firstTwoWriters(), thirdWriter));
    return writer;
}

@Bean
public StaxEventItemWriter<Wrapper> firstTwoWriters() {
    StaxEventItemWriter<Wrapper> xmlFileWriter = new StaxEventItemWriter<>();
    xmlFileWriter.setRootTagName("something");
    String outputName = applicationArguments.getOptionValues("output").get(0);
    FileSystemResource outputResource = new FileSystemResource(outputName);
    xmlFileWriter.setResource(outputResource);
    Jaxb2Marshaller personMarshaller = new Jaxb2Marshaller();
    personMarshaller.setClassesToBeBound(Person.class);
    xmlFileWriter.setMarshaller(personMarshaller);
    return xmlFileWriter;
}

The problem is that I cannot choose which field (inputPerson, outputPerson or output) should be used by this writer (which field should be converted to XML). Any ideas how I can do this? (If possible, with an example.) https://stackoverflow.com/questions/27836312/use-spring-batch-to-write-in-different-data-sources I have read everything here and it looks similar to my problem, but I do not understand how I should write the Unwrapper, because the problem is the same: I do not know how to choose which field should be used by the writer. Can you explain what you are trying to achieve without referring to Spring Batch? What is the input/output of your job?
common-pile/stackexchange_filtered
Btrfs minimum free space for raid 5 convert I have a btrfs array of 7 drives that I just finished building. It's currently in "single" mode and I'd like to convert it to raid5 (I understand the risks and the write-hole issue). Per the documentation:

The way balance operates, it usually needs to temporarily create a new block group and move the old data there. For that it needs work space, otherwise it fails for ENOSPC reasons. This is not the same ENOSPC as if the free space is exhausted. This refers to the space on the level of block groups.

My current space allocation based on btrfs df is:

Data, single: total=20.46TiB, used=19.93TiB
Data, RAID5: total=3.25TiB, used=3.17TiB
System, RAID5: total=96.00MiB, used=2.38MiB
Metadata, RAID5: total=29.91GiB, used=26.54GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

And my filesystem per-disk usage is:

Total devices 7 FS bytes used 23.13TiB
devid 1 size 7.28TiB used 6.06TiB path /dev/sdc
devid 2 size 7.28TiB used 5.95TiB path /dev/sdd
devid 3 size 7.28TiB used 5.99TiB path /dev/sde
devid 4 size 3.64TiB used 2.42TiB path /dev/sdj
devid 5 size 3.64TiB used 2.43TiB path /dev/sdk
devid 6 size 4.55TiB used 909.00GiB path /dev/sdf
devid 7 size 4.55TiB used 559.00GiB path /dev/sdg

Note: the 3 TiB of RAID5 storage is from running a convert for about 20 hours before I realized the potential for this to be a problem. Is there any way for me to calculate how much free space I'd potentially need, or is 3 TiB of my data already converted to raid5 and the small amount of free space on each drive sufficient?
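There is no authoritative formula in the thread, but the dominant term is simple to estimate: RAID5 over n devices stores n-1 data strips per parity strip, so D bytes of single-profile data need roughly D*n/(n-1) of raw device space once converted, and balance additionally wants roughly one unallocated block group's worth of headroom per device. A back-of-envelope sketch (my arithmetic only, not from btrfs documentation, and it ignores metadata and uneven device sizes):

```python
def raid5_raw_needed(data_tib, ndev):
    """Rough raw capacity needed to hold `data_tib` of data as RAID5
    across `ndev` devices (parity overhead only)."""
    return data_tib * ndev / (ndev - 1)

# Remaining single-profile data from the `btrfs df` output: ~19.93 TiB used.
remaining = 19.93
print(round(raid5_raw_needed(remaining, 7), 2))  # ~23.25 TiB raw
```

Since balance frees each single block group as it converts it, the extra space needed at any moment is closer to the per-device headroom than to the full parity overhead, but the estimate above bounds the final allocation.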
common-pile/stackexchange_filtered
IPv6 network range I was learning about IPv6 in order to build an internal network port scanner (meaning a private network to which one is connected). I wasn't able to find a way to determine the network range from the subnet mask; let me illustrate it with an example. IPv4: <IP_ADDRESS> with a subnet mask of <IP_ADDRESS> means that the 192.168.2 part of the IPv4 address represents the network and the 32 represents the device IP. So, when scanning the network I know I have to scan the IPs in the following range: <IP_ADDRESS> -- <IP_ADDRESS>. IPv6: fd04:ad:32be:: . I know the first 64 bits represent the network if /64, but while scanning an internal network with this IPv6 address, how do I know the range to scan, like in IPv4? Thank you. I'm voting to close this question as off-topic because it belongs on ServerFault. Did you do the math? On a standard IPv6 /64 network, there are 18,446,744,073,709,551,616 possible addresses, and scanning 1,000,000 addresses per second, it will take you over 584,542 years to scan one /64 network. If you have older network equipment and are in an enterprise, rapid sequential scans of large swaths of IPv6 space are not a good idea (and aren't really a good idea in general). With regard to "ranges", this is from the current revision of nmap: IPv6 addresses can be specified by their fully qualified IPv6 address or hostname or with CIDR notation for subnets. Octet ranges aren't yet supported for IPv6. This is a randomly selected range-to-subnet converter. Consider using ICMPv6 Neighbor Discovery vs sequential scans, and also consider using DHCP logs and switch/router logs to get individual IPv6 addresses vs using ranges and performing sequential scans. Also, unless you know what you're doing or are trying to take a deep dive into low-level network programming, don't reinvent the wheel; just use nmap. Finally, this is probably a more appropriate question for ServerFault and can/should likely be closed.
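Python's standard-library ipaddress module answers the range question directly for both families and also reproduces the scale argument from the answer:

```python
import ipaddress

# IPv4: the module derives the scan range from the prefix directly.
v4 = ipaddress.ip_network("192.168.2.0/24")
print(v4.network_address, v4.broadcast_address)  # 192.168.2.0 192.168.2.255

# IPv6: same API, but note the size of a standard /64.
v6 = ipaddress.ip_network("fd04:ad:32be::/64")
print(v6.num_addresses)  # 18446744073709551616, i.e. 2**64

# The answer's figure: at 1,000,000 probes per second, a sequential sweep
# of one /64 takes about 584,542 (Julian) years.
years = v6.num_addresses / 1_000_000 / (3600 * 24 * 365.25)
print(round(years))  # 584542
```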
common-pile/stackexchange_filtered
Minimum for this function I thought of writing this question Minimum for this function in a different way, if it helps. I want to minimize $$\sum_{i=1}^n a_ix_i + \nu \sum_{i=1}^n b_i 2^{x_i} ,$$ where $a_i \in [0,1]$, $b_i \in (0,\infty)$, and $x_i \in [x_{\min},0)$, subject to $$ \sum_{i=1}^n 2^{x_i} = 1 .$$ Here $x_{\min}$, $\nu$, the $a_i$, and the $b_i$ are constants. I guess the tricky part is to minimize the function while ensuring all the $2^{x_i}$ sum up to $1$ for these $n$ variables. Thank you for your patience and help. First, let $x = (x_1, \dots, x_n)$ and write $f(x) = \sum_i^n a_i x_i + \nu\sum_i^n b_i 2^{x_i}$ and $g(x) = \sum_i^n 2^{x_i} - 1$. Now the method of Lagrange multipliers tells you that a solution $x'$ must be an extremal point of the function $f(x) + \lambda g(x)$, with $\lambda$ a parameter to be determined. So you get the equations $a_i + (\nu b_i + \lambda) 2^{x'_i} \log{2} = 0$ for all $1 \leq i \leq n$. It is now easy to express $x'_i$ as a function of $\lambda$ and plug this into the remaining constraint $g(x') = 0$ to find $\lambda$ and complete the solution. Hope this helps. I agree, but is this (http://math.stackexchange.com/questions/9105/solving-series-of-equations) not going to happen here? I will get a polynomial equation for $\lambda$, which will not be solvable in radicals if the degree is greater than 4. It will be solvable numerically. It will not be solvable in radicals (in general). Is this a problem? Another thing is that $x'$ need not be a minimum but only a stationary point, so you will need to further inspect the stationary points. The minimum can also be attained on the boundary of the domain $D(f)$, $g=0$, which means that some $x_k$ might be equal to $x_{\min}$, but that only reduces the problem to having fewer variables. Ok. I understand what you are saying. I guess I got my answer here. :) Thank you all. I don't think rewriting this way helps.
In the first place, how would you show that if $y$ is the optimal solution of $\min_y f(y)$, then $x = 2^y$ is the optimal solution to $\min_x f(\log x)$? This is not a good answer.
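Following the comment that $\lambda$ is best found numerically: the stationarity equations give $2^{x_i} = a_i / (-(\nu b_i + \lambda)\ln 2)$, and the constraint sum is increasing in $\lambda$ on the feasible branch $\lambda < -\nu \max_i b_i$, so bisection recovers it. A stdlib-only sketch; it assumes every $a_i > 0$ and ignores the box constraint $x_i \in [x_{\min}, 0)$, which would need a separate check:

```python
import math

def solve(a, b, nu, iters=200):
    """Bisection on lambda for the stationarity conditions
       a_i + (nu*b_i + lam) * 2**x_i * ln(2) = 0
    under the constraint sum_i 2**x_i = 1. Assumes all a_i > 0."""
    ln2 = math.log(2)

    def g(lam):  # sum of 2**x_i as a function of lambda
        return sum(ai / (-(nu * bi + lam) * ln2) for ai, bi in zip(a, b))

    pole = -nu * max(b)           # g(lam) -> +inf as lam -> pole from the left
    lo = pole - 1.0
    while g(lo) > 1.0:            # walk left until g(lo) < 1 brackets the root
        lo = pole - 2.0 * (pole - lo)
    hi = pole - 1e-12             # stay strictly left of the pole
    for _ in range(iters):        # g is increasing in lam on (-inf, pole)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 1.0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    xs = [math.log2(ai / (-(nu * bi + lam) * ln2)) for ai, bi in zip(a, b)]
    return lam, xs

lam, xs = solve(a=[0.5, 0.3, 0.2], b=[1.0, 2.0, 1.5], nu=0.7)
print(lam, [round(x, 4) for x in xs])
```

Since the recovered $2^{x_i}$ are positive and sum to $1$, each is below $1$, so every $x_i$ is automatically negative; only the lower bound $x_{\min}$ remains to verify.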
common-pile/stackexchange_filtered
how to identify a moderator or high-rep user? If any user gets downvoted, how can they identify whether their post was downvoted by a moderator or by a high-rep user? Is there any way to identify them? Thanks. In addition to Iain's answer, up and down votes are equal. An upvote from a high-rep user or from someone who joined the site half an hour ago are both worth the same. All that high-rep users and moderators can do in addition to voting is vote to close questions, which isn't really the same thing. With the exception of a successful series of close votes (which says who voted to close but not how they voted), all voting is anonymous. One other (minor) exception: close votes that were cast through the review system can be viewed on a user's profile. Delete votes are also not exactly anonymous... though few can see who cast them.
common-pile/stackexchange_filtered
A simple question about why a mapping is bijective. I've been reading Mathematical Analysis by V. A. Zorich recently and have some doubts. Why is the mapping $f:\mathbb{N\times N}\rightarrow\mathbb{N}$ given by $(m,n)\mapsto\frac{(m+n-2)(m+n-1)}{2}+m$ a bijection? ($\mathbb{N}$ in this book does not contain the number zero.) The question may be very simple, but I'm just a beginner in analysis, so I would appreciate it if someone could answer it in detail. https://en.wikipedia.org/wiki/Pairing_function see https://math.stackexchange.com/a/91323/663924 Hint: forget algebra and just focus on intuition. In the mapping, the first expression is the sum of the numbers from $1$ through $(m+n-2)$. Suppose that there exists $(A,B) \in \Bbb{N^2}$ that yields the same value under the mapping, where $(A,B) \neq (m,n)$. Hint: $(A+B) = (m+n)$ generates a contradiction. Suppose $(A+B) \neq (m+n).$ Without loss of generality, $(A+B) > (m+n)$. Hint: If you have two positive integers, $r$ and $s$, with $r > s$, and you compute the sums $\displaystyle E = \sum_{i=1}^r (i) ~~~\text{and}~~~ F = \sum_{i=1}^s (i)$ then what is the minimum possible value of $E - F$? It's very enlightening. I've understood it. This means that as long as the function values are equal, the two pairs must be equal; otherwise a contradiction will be deduced. However, this method seems to only prove that the mapping is injective, and additional proof may be needed for surjectivity. @Erutaner Since I was not permitted to provide a real answer, I was boxed in. The difficulty is that your posting has a number of defects; see this article for details. The surjective property follows from considering that you have total control over the top end of the summation, which is $(m+n-2)$. Within this control, you have fine-tuning control in being able to specify $m$.
...see next comment @Erutaner So, for example, it is easy to see that each of the following numbers is in the range: ${0+1::1+1,1+2::3+1,3+2,3+3::\cdots}.$
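The hints above can also be checked mechanically: restricted to the diagonals $m+n = 2, \dots, N+1$, the map hits each of $1, \dots, N(N+1)/2$ exactly once, since the diagonal $m+n = s$ fills the block of $s-1$ consecutive integers just after the triangular number for $s-2$. A quick finite verification:

```python
def f(m, n):
    # The map from Zorich: f(m, n) = (m+n-2)(m+n-1)/2 + m
    return (m + n - 2) * (m + n - 1) // 2 + m

N = 50  # check all diagonals m + n = 2, ..., N + 1
values = sorted(f(m, s - m) for s in range(2, N + 2) for m in range(1, s))

# Injective AND surjective onto an initial segment: the sorted values
# are exactly 1, 2, ..., N(N+1)/2 with no gaps or repeats.
assert values == list(range(1, N * (N + 1) // 2 + 1))
print("bijective on the first", len(values), "values")
```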
common-pile/stackexchange_filtered
Writing a structured Om application with requests, but not om.next I'd like to write an application in Om - a GitHub issues viewer in particular. To retrieve issues from GitHub, I'll need to XHR request them, and there'll be the action of marking issues as 'viewed' by sending a request back to GitHub. There's quite a bit of documentation for using the current version of Om without async/http calls, and quite a bit for using Om.next with them, but I'm just getting started and feel like Om.next isn't the right place for a complete ClojureScript newbie to dive in. Is there documentation and are there patterns for using the current stable version of Om (0.8.x) with remote resources, that'd lead to a clean architecture for this kind of app? The big applications that are using Om in production, like CircleCI - are they using Om.next? If not, what's the commonly-accepted pattern for requests? I think you can dive into Om's real-world example. They are using Google Closure's XhrIo for async http calls. (defn edn-xhr [{:keys [method url data on-complete]}] (let [xhr (XhrIo.)] (events/listen xhr goog.net.EventType.COMPLETE (fn [e] (on-complete (reader/read-string (.getResponseText xhr))))) (. xhr (send url (meths method) (when data (pr-str data)) #js {"Content-Type" "application/edn"})))) Communicating with the server on user change: (defn on-edit [id title] (edn-xhr {:method :put :url (str "class/" id "/update") :data {:class/title title} :on-complete (fn [res] (println "server response:" res))})) Data loading in om.core/IWillMount: (defn classes-view [app owner] (reify om/IWillMount (will-mount [_] (edn-xhr {:method :get :url "classes" :on-complete #(om/transact! app :classes (fn [_] %))})) om/IRender (render [_] (dom/div #js {:id "classes"} (dom/h2 nil "Classes") (apply dom/ul nil (map (fn [class] (let [id (:class/id class)] (om/build editable class {:opts {:edit-key :class/title :on-edit #(on-edit id %)}}))) (:classes app))))))) This is not an answer to your question, but you can dive into the om examples. The CircleCI frontend is currently written in Om, but they have plans to migrate to Om.next, and they explain why. Regarding Om, there is a repository dedicated to "idioms and patterns", linked from the tutorial section. I would not say that anything is set in stone though; be prepared to experiment a bit.
How do I get startActivityForResult() to bring up just a list of telephone contacts (like when I click on the "People" icon) using the Android SDK? Excuse me if this question is obvious, but I am new to the Android SDK. What I am trying to do is get a list of contacts that have real telephone numbers to send an SMS message. I am deploying directly to my phone and trying to use just the contacts listed on my phone, but I am getting too many weird contacts. I would expect that when I call the startActivityForResult() method it will give me a list of phone contacts. What it seems to do is give me a list of all potential contacts, and that seems to include Twitter, Facebook, and every potential email address I have sent to, instead of just the contacts that are listed when I click on the "People" icon on my phone. The code I am using to call it is here: private static final int PICK_CONTACT_REQUEST = 1;//defined elsewhere but listed for clarity Intent intent = new Intent(Intent.ACTION_PICK, Contacts.CONTENT_URI); startActivityForResult(intent, PICK_CONTACT_REQUEST); The list that pops up starts off with hundreds of "Unnamed" contacts (many with icon photos that look to be Twitter images), but also includes what appear to be just email contacts, including what seems to be every craigslist email I may have ever sent to. Can anybody clear this up for me? How do I get just the "People" list without all the additional contact data? There are several predefined actions for the standard contacts application. I think what you're looking for is: intent = new Intent(android.provider.Contacts.Intents.UI.LIST_CONTACTS_WITH_PHONES_ACTION); Update: well, this seems deprecated now; starting from API 5, one should use ContactsContract. I do not know this API though.
Wow, this should be such an easy task, but Android is written like sh#t; the documentation is written like sh#t, with very few examples of how to use these methods and what all these parameters are for. I did try it; his code brings up a contact list (that looks like a better list than mine), but when I select one of the contacts it takes me to their account instead of returning to my app.
partial view not opening as jQuery UI Dialog I need to open a partial view as a dialog box on click of a button, basically an add/edit scenario. My problem is that my partial view does open, but not as a dialog: it appears at the bottom of the page. Please see my code below. I have an empty div on the page: <div id="addEditSelectionDialog"></div> On the click of the button I call the code below: function addSelectionActivate() { var selectionID = 0; $.ajax({ url: "AddEditSelection", type: "POST", data: "&selectionID=" + selectionID, dataType: "html", success: function (data) { $("#addEditSelectionDialog").html(data); $("#addEditSelectionDialog").dialog('open'); }, error: function (error) { alert(error.status); } }); } My controller has a method "AddEditSelection" which returns the result. But the partial view opens at the end of the page rather than as a dialog. Please help with what I might be doing wrong. You need to add the partial in a separate div contained in the dialog div, e.g.: <div id="DialogDiv"> <div id="AnotherDiv"> </div> </div> and register "DialogDiv" as a dialog and load your partial in "AnotherDiv". Sorry, I couldn't understand what you mean by register "DialogDiv" as a dialog. Ok, it is similar to this: $("#ajax-dialog").dialog({ autoOpen: false, draggable: false, modal: true, resizable: false, closeOnEscape: false, open: function () { } }); this way you register a div as a dialog
Next button does not work in PHP code I'm making two buttons, Next and Previous. I have this line of code in the PHP file, and the function is defined in another PHP file; what the function has to do is go to the table where I am and select the next product. The problem is that the button does nothing. I tried to set the id in another way, but it still does nothing when I hit the button. public static function botoSeguent(){ if(isset($_POST['Seguent'])){ $Ordre=$_POST['Ordre']+1; $sql= "SELECT * FROM " .self::$tablename. " WHERE Ordre='$Ordre'"; $query = Executor::doit($sql); } } The button doesn't do anything. This is the line I used in the PHP file: <button class="btn btn-primary" type="button" id="botoSeguent()">&nbsp;<i class="fa fa-search"></i>&nbsp;Següent</button> This is the line I used in the document:   Següent If that's a PHP function you're attempting to call in HTML - it won't work (not that it makes sense attempting to call it in an id attribute anyway). Possible duplicate of What is the difference between client-side and server-side programming? I'm doing everything in PHP; should I change the id to a link, or what? PHP runs on the server, not the client (unlike JavaScript), so you can't call a PHP function from within HTML. And putting a function call on the id doesn't make sense because that's not an event, it's just an attribute - it's not triggered by click or anything. You probably need dynamic output created by JavaScript based on the response from an Ajax call to the server, triggered by the click event... which is essentially a full-blown Ajax tutorial and therefore too broad in scope for Stack Overflow's simple Q&A format. You need to call the PHP function via JavaScript. There are two parts: one is the server-side script, which is the PHP code, and the other is the client-side script: HTML, JavaScript, etc.
Here is the example. This is the client-side script: <button class="btn btn-primary" type="button" onclick="botoSeguent();">&nbsp;<i class="fa fa-search"></i>&nbsp;Següent</button> <script> function botoSeguent() { $.ajax({ url: 'http://xxxxx.com/botoSeguent', type: 'POST', success: function (data) { } }); } </script> This is your server-side script: public static function botoSeguent(){ if(isset($_POST['Seguent'])){ $Ordre=$_POST['Ordre']+1; $sql= "SELECT * FROM " .self::$tablename. " WHERE Ordre='$Ordre'"; $query = Executor::doit($sql); } } The client-side JavaScript Ajax call invokes the server-side PHP function when you click the button. Hope it answers your question. Many thanks for answering, but it's not working. I have tried to change the xxxx.com for the path where the botoSeguent function is stored. If you could advise me on something else I would be very grateful.
C++: Trouble loading long string from XML file using Mini-XML I'm using the Mini-XML library to parse an XML file. I am able to load just about every element and attribute, but I am having trouble loading a long string. Here is the relevant part of the code: //Load XML file into XmlO void load(wxString filenam){ //First, convert wxString to std::string for safety (char* is transient), then to const char* std::string tmp_filenam = std::string(filenam.mb_str()); const char* tmp_filenam2 = tmp_filenam.c_str(); //Get pointer to file fp = fopen(tmp_filenam2,"r"); //Load tree tree = mxmlLoadFile(NULL, fp, MXML_TEXT_CALLBACK); //Close file (be nice!) fclose(fp); //Load <Systems> node Asset_elem = mxmlWalkNext(tree, tree, MXML_DESCEND_FIRST); //Start loading <asset> elements //Temporary Elements mxml_node_t *node; //Node to save mxml_node_t *subnode_pos; //Subnode for pos nodes mxml_node_t *subnode_GFX; //Subnode for GFX nodes mxml_node_t *subnode_pres; //Subnode for presence nodes mxml_node_t *subnode_gen; //Subnode for general nodes mxml_node_t *subnode_serv; //Subnode for services nodes mxml_node_t *subnode; //Subnode const char* name_tmp; //String for names of asset const char* tmp_str; //String for anything :P float x_pos; //X_pos Float float y_pos; //Y_pos Float const char* gfx_space; const char* gfx_ext; const char* pres_fac; float pres_val; int pres_range; const char* plan_class; int population; bool land; bool refuel; bool bar; bool missions; bool commodity; bool outfits; bool shipyard; const char* descrip; const char* bar_descrip; //Load first asset node = mxmlFindElement(Asset_elem, tree, "asset", NULL, NULL, MXML_DESCEND); //Start loading the rest of the ssys elements (but fail if first element is NULL) int i = 1; while (node != NULL){ //Load name attrib name_tmp = mxmlElementGetAttr(node, "name"); //Mark Branching nodes //Pos Element subnode_pos = mxmlFindElement(node, Asset_elem, "pos", NULL, NULL, MXML_DESCEND); //GFX Element subnode_GFX =
mxmlFindElement(node, Asset_elem, "GFX", NULL, NULL, MXML_DESCEND); //Presence Element subnode_pres = mxmlFindElement(node, Asset_elem, "presence", NULL, NULL, MXML_DESCEND); //General Element subnode_gen = mxmlFindElement(node, Asset_elem, "general", NULL, NULL, MXML_DESCEND); //Services Sub-element subnode_serv = mxmlFindElement(subnode_gen, Asset_elem, "services", NULL, NULL, MXML_DESCEND); /*********Loading routines that work********/ //Get Descriptions const char * tmp_str; mxml_node_t *temp_sub_node; temp_sub_node = mxmlFindElement(subnode_gen, subnode_gen, "description", NULL, NULL, MXML_DESCEND); if(temp_sub_node != NULL){ tmp_str = temp_sub_node->child->value.text.string; } else{ tmp_str = NULL; } delete tmp_str; delete temp_sub_node; Here is one element that I need to parse: <asset name="Ammu"> <pos> <x>90.000000</x> <y>2490.000000</y> </pos> <GFX> <space>A00.png</space> <exterior>lava.png</exterior> </GFX> <presence> <faction>Empire</faction> <value>100.000000</value> <range>2</range> </presence> <general> <class>A</class> <population>60000</population> <services> <land/> <refuel/> <bar/> <missions/> <commodity/> <outfits/> </services> <commodities> <commodity>Food</commodity> <commodity>Ore</commodity> <commodity>Industrial Goods</commodity> </commodities> <description>Ammu is a generally calm planet, once one is accustomed to the constant rumbling of the lava flows. Lava eruptions are often felt in the subterranean spaceport, but the way it doesn't seem to phase the locals reassures you.</description> <bar>The Ammu Spaceport Bar, known as "The Heatsink" due to its frigid temperatures, in contrast to the rest of the station. While primarily known for their temperature, that's not to say they can't whip up a mean Pan-Galactic Gargle Blaster.</bar> </general> <tech> <item>Basic Outfits 1</item> </tech> </asset> I am only getting the first word from the description tag. Why? 
EDIT 1: I tried switching over to std::string, but the MiniXML library is returning a const char*, which apparently cannot hold such a long string. Any suggestions? EDIT 2: I changed the callback to OPAQUE so that it would ignore whitespaces, but now it just returns NULL. EDIT 3: I now changed the methods to acquire value.opaque instead of value.text.string. This makes the "description" tag work great, but the "bar" tag still crashes when I try to load it into a const char*. I tried removing quotes and the like from the xml file to see if that was causing it, but it didn't help. EDIT 4: I even removed all but one "asset" object, and then its "bar" element, and it still crashes. This is absolutely bizarre! EDIT 5: Okay, I isolated the problem piece of code: tmp_str = temp_sub_node->child->value.opaque; However, I have this integrated into a method, the same method I am using for the description element (which directly precedes it), and that works fine. What is wrong? EDIT 6: Oddly enough, when I change the search string to "bar ", it fails gracefully (i.e. returns NULL). It's only when I change it to "bar" (the element I need) that it crashes. Is this a reserved keyword or something that mini xml doesn't like?! EDIT 7: FINALLY! Figured it out. I changed MXML_DESCEND to MXML_DESCEND_FIRST, and it works fine. WHEW!!!! What a relief. Thanks guys! have you tried walking all the children of your description node to see if it's storing each word as a child? Just based on a quick glance at the docs, it sure looks like MiniXML is fond of caring about whitespace, so perhaps the default TEXT processing is splitting your string into multiple children. Hmm, might work. I'll try it. Darn, now it crashes on loading. It does this whenever I really start messing with node pointers. You need to replace: tree = mxmlLoadFile(NULL, fp, MXML_TEXT_CALLBACK); By: tree = mxmlLoadFile(NULL, fp, MXML_OPAQUE_CALLBACK); Is that what you tried? 
I think you also need to read the value, like: tmp_str = temp_sub_node->child->value.opaque; Some bounty will be coming your way :). If you are using C++, then the STL "string" class can be used for string handling. It can hold any number of characters, up to the memory limit. Does const char* have a limit other than memory?
Django chartit loading jquery and highcharts js I'm trying to include some charts in my Django website using Chartit but I'm facing problems. Just to keep it simple, I created a project that replicates Chartit's demo charts, but I'm still having problems. I guess the problem is related to loading the jQuery and Highcharts JS files. Here's what I got. Model from django.db import models class MonthlyWeatherByCity(models.Model): month = models.IntegerField() boston_temp = models.DecimalField(max_digits=5, decimal_places=1) houston_temp = models.DecimalField(max_digits=5, decimal_places=1) new_york_temp = models.DecimalField(max_digits=5, decimal_places=1) san_franciso_temp = models.DecimalField(max_digits=5, decimal_places=1) class MonthlyWeatherSeattle(models.Model): month = models.IntegerField() seattle_temp = models.DecimalField(max_digits=5, decimal_places=1) class DailyWeather(models.Model): month = models.IntegerField() day = models.IntegerField() temperature = models.DecimalField(max_digits=5, decimal_places=1) city = models.CharField(max_length=50) state = models.CharField View from django.shortcuts import render_to_response from chartit import DataPool,Chart from demo.models import MonthlyWeatherByCity def line(request): ds = DataPool( series= [{'options': { 'source': MonthlyWeatherByCity.objects.all()}, 'terms': [ 'month', 'houston_temp', 'boston_temp']} ]) cht = Chart( datasource = ds, series_options = [{'options':{ 'type': 'line', 'stacking': False}, 'terms':{ 'month': [ 'boston_temp', 'houston_temp'] }}], chart_options = {'title': { 'text': 'Weather Data of Boston and Houston'}, 'xAxis': { 'title': { 'text': 'Month number'}}}) return render_to_response('demo/chart.html', {'weatherchart':cht}) Template <html> <head> <script src ="http://ajax.googleapis.com/ajax/libs/jquery/1.7/jquery.min.js"></script> <script src ="http://code.highcharts.com/highcharts.js"></script> {% load chartit %} {{ weatherchart|load_charts:”container” }} </head> <body> <div id=”container”> </div>
</body> </html> and in settings I have the following inside installed apps: INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'demo', 'jquery', 'highcharts', 'chartit', ) The problem is that when I try to load the chart I receive the following message: TemplateSyntaxError at /chart/ Could not parse the remainder: ':”container”' from 'weatherchart|load_charts:”container”' Actually, if I remove the script tags from the template I receive the same message. I've also tried with local versions of jQuery and Highcharts, but with the same results. Does anybody have an idea of what I am missing? I've been looking around at different examples and it looks like I'm doing everything the right way; is there anything else I need to load? Thanks for your help guys... Regards, Alejandro Change the quotation symbols: FROM {{ weatherchart|load_charts:”container” }} --> TO {{ weatherchart|load_charts:"container" }}
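The accepted fix works because the broken template used typographic (curly) quotes, which the template lexer does not treat as string delimiters. A small JavaScript check (illustrative only, not Django code) makes the difference visible and sketches a normalizer for pasted-in smart quotes:

```javascript
// U+201C / U+201D (typographic double quotes) are different code points
// from the ASCII quote U+0022, so '”container”' is not a quoted string.
const curly = "\u201Dcontainer\u201D"; // what the broken template contained
const ascii = '"container"';           // what the template tag expects

// Replace left/right typographic double quotes with plain ASCII quotes.
function normalizeQuotes(s) {
  return s.replace(/[\u201C\u201D]/g, '"');
}
```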
Export from Camtasia to Premiere Pro, with good audio How can I export from Camtasia to Premiere Pro keeping the two audio tracks as separate audio tracks in Premiere Pro? If all else fails, do two different exports, one with each audio track muted and then put them on separate tracks in Premiere.
How to set default group for files created in Samba share I'm sharing a directory, /home/pi/pydev, on a Debian box (a Raspberry Pi, in fact) with Samba. I'm reading from and writing to that directory from a Windows 7 machine. When I create, under W7, a file in that directory, it gets 0764 rights, and it's owned by user rolf and group rolf - that's me on the W7 machine. User pi on the Debian box and user rolf (on W7) both need to be able to modify files in that directory, so I made them both members of group coders, hoping I could configure it so that members of coders have at least read & write access to files in that directory. But user pi can't modify any file that belongs to group rolf. I could chown rolf:coders <filename> file by file. Adding user pi to group rolf is ugly, and doesn't work (didn't expect that. Does Samba maintain an entirely different user administration with groups, beside Debian's?). I could also log on to the Debian machine as rolf, and navigate to that folder. But the most elegant way (to me) would be if a file created by rolf from the W7 machine would get user id rolf and group id coders by default. Can I configure Samba to do that, or is there some other way to automate that task? If I understand what you are asking correctly, then what you want is inside the smb.conf located here: /etc/samba/smb.conf Add these options to the [global] section: force user = rolf force group = coders In case, like me, someone is looking to add user, group, and the actual permissions string, add create mask = 0775 you do know that the force directives make any user do operations as the specified user/group? Say e.g. you have a share /joe and a share /anne; if you do force user = anne on share /anne, then user joe can access /anne, big no-no and a big security risk!
this option is so misleading it should be documented better and not used lightly. You could try adding the sticky bit for the group on that folder: chmod 2770 foldername, then find foldername -type d -exec chmod g+s {} \; Like it. The right answer should be this one. Making group permissions on directories and subdirectories sticky is a typical Linux filesystem problem and not a Samba problem. What does it do? The description of the two kinds of "sticky bits" does not go along with what the answer suggests. Or is it that "execution" of a folder is like execution of a file, and therefore the folder then "runs" with ownership under the group that was specified? That is the only way I could imagine how it works. https://unix.stackexchange.com/questions/79395/how-does-the-sticky-bit-work @bomben that Q&A focuses on executables; see this discussion of permissions for details of setgid's effect on directories. On a Samba domain with samba-tool: samba-tool group addmembers 'My Group Name' 'My Username' samba-tool user setprimarygroup 'My Group Name' 'My Username'
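To untangle the bit-naming confusion in the comments above: in `chmod 2770`, the leading 2 is the setgid bit (new files inherit the directory's group), not the classic sticky bit, which is 1. A small JavaScript sketch of how the octal mode decomposes (illustration only; the inheritance behaviour itself is a kernel/filesystem feature, not something these bit operations perform):

```javascript
const S_ISUID = 0o4000; // setuid
const S_ISGID = 0o2000; // setgid: on a directory, new files inherit its group
const S_ISVTX = 0o1000; // classic sticky bit (e.g. on /tmp)

// Break an octal mode such as 0o2770 into its special bits and rwx triplets.
function describeMode(mode) {
  return {
    setgid: (mode & S_ISGID) !== 0,
    sticky: (mode & S_ISVTX) !== 0,
    owner: (mode >> 6) & 0o7,
    group: (mode >> 3) & 0o7,
    other: mode & 0o7,
  };
}
```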
Django - Filter QuerySet in models.py Is it possible to make queries from Django models? I have 2 models: class Book(models.Model): ... def copies_available(self): pass and class BookCopy(models.Model): ... book_category = models.ForeignKey(Book, related_name='copies') issued_to = models.ForeignKey(EndUser, related_name='issued_books', null=True, blank=True) I want copies_available() to return the number of BookCopy instances of that Book whose issued_to is None This should work: class Book(models.Model): ... def copies_available(self): return self.copies.filter(issued_to__isnull=True).count() I did this a couple years ago, so it may have changed a little bit: You need to create a BookManager class, and put the functionality there. Then assign the manager to your Book model's objects variable, like this: class BookManager(models.Manager): def copies_available(self): queryset = BookCopy.objects.filter(book_category=self.id).filter(issued_to__isnull=True) return queryset.count() class Book(models.Model): ... objects = BookManager() So in your template, you can do something like: <p> Copies: {{ thebook.copies_available }}</p> you should not overwrite Django's default model manager while filtering results; it may lead to undesired results (https://docs.djangoproject.com/en/dev/topics/db/managers/#writing-correct-managers-for-use-in-automatic-manager-instances) Yes just set limit_choices_to in the ForeignKey. From the docs: "A dictionary of lookup arguments and values (see Making queries) that limit the available admin or ModelForm choices for this object. Use this with functions from the Python datetime module to limit choices of objects by date. For example:" limit_choices_to = {'pub_date__lte': datetime.date.today}
Creating a function in JS Possible Duplicate: Javascript: var functionName = function() {} vs function functionName() {} I am curious as to which is best practice when creating a function in JS: function x() { ... } OR var x = function() { ... } Is there a difference or are they the exact same thing? x(); // I work function x() { ... } y(); // I fail var y = function() { ... } The first is a function declaration. You can use functions before you've declared them. The second is assigning a function to a variable. This means you can assign it to anything. You can assign it to foo[0] or foo.bar.baz or foo.get("baz")[0] I prefer the first form because it gets defined before variables are defined. So you could invoke x earlier in your code, because the interpreter already defined that function even though it may be declared later on in your code. This will be simpler with some code: x(); //logs "hi" //... function x() { console.log("hi"); } vs x(); //fails var x = function() { console.log("hi"); }; They are not exactly the same thing; people have mentioned the forward-lookahead difference, so here's a lesser-known subtlety - the function name property: function x(){} x.name; // "x" var x = function(){}; x.name; // "" In the second case, you can pass functions as parameters and store them in arrays You might find this useful: https://developer.mozilla.org/en/JavaScript/Reference/Functions_and_function_scope The first is a function declaration, the latter is a function expression. Read more here while I try to find a very in-depth article I once read on the differences and implications. Edit: Ah, here we go. Grab a cup of tea and settle in for an in-depth discussion.
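The hoisting difference the answers describe can be demonstrated directly. A small Node-style sketch (illustrative; the try/catch only exists to capture the early failure as a value):

```javascript
// Run a thunk and report either its result or the error class it threw.
function tryCall(fn) {
  try { return fn(); } catch (e) { return "failed: " + e.name; }
}

const early = tryCall(() => hoisted());     // works: the declaration is hoisted whole
const early2 = tryCall(() => notHoisted()); // fails: only the var name is hoisted, not the value

function hoisted() { return "hi"; }
var notHoisted = function () { return "hi"; };

const late = tryCall(() => notHoisted());   // works once the assignment has run
```

Calling `notHoisted` before the assignment throws a TypeError, because the hoisted `var` exists but still holds `undefined` at that point.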
Matching occurrence in string with regex for value replacement? I've spent some time on this problem but really need help from a regex guru. So far I have the following, which doesn't quite give me what I need. [.*?]\s[=><]+\s[@]\w+ From the following sample string, I need all occurrences of a field followed by a parameter/variable. A parameter starts with an '@'. I am then going to use the result to replace the contents of each value in .NET. Therefore, the regex expression would match [System.TeamProject] = @project [Microsoft.VSTS.Common.ClosedDate] >= @startOfDay [Microsoft.VSTS.Common.ClosedDate] >= @startOfDay Note [System.State] = 'Closed' is not matched. Sample string select [System.Id], [System.WorkItemType], [System.Title], [System.AssignedTo], [System.State], [System.Tags] from WorkItems where [System.TeamProject] = @project and [Microsoft.VSTS.Common.ClosedDate] >= @startOfDay and [System.State] = 'Closed' and [Microsoft.VSTS.Common.ClosedDate] >= @startOfDay Thanks heaps! Change the square brackets of the character class [.*?] to a group (.*?) This regex should do what you want: \[([^]]*)]\s+[><=]+\s+(\@\w+) The main change is that the [ and ] in the initial part of your regex needed to be escaped. I have also added capture groups to collect the field name (group 1) and parameter value (group 2). Demo on regex101 My guess is that maybe you're trying to write some expression similar to: \[([^\]]*)\]\s*(=|>=|<=)\s*(@\w+) and replace with some string similar to: [new_value] $2 $3 I've added some capturing groups; I wasn't sure about the desired output, so you can simply remove or modify those if/as you wish.
Demo / Test: using System; using System.Text.RegularExpressions; public class Example { public static void Main() { string pattern = @"\[([^\]]*)\]\s*(=|>=|<=)\s*(@\w+)"; string substitution = @"[new_value] $2 $3"; string input = @"select [System.Id], [System.WorkItemType], [System.Title], [System.AssignedTo], [System.State], [System.Tags] from WorkItems where [System.TeamProject] = @project and [Microsoft.VSTS.Common.ClosedDate] >= @startOfDay and [System.State] = 'Closed' and [Microsoft.VSTS.Common.ClosedDate] >= @startOfDay Note [System.State] = 'Closed' is not matched."; RegexOptions options = RegexOptions.Multiline; Regex regex = new Regex(pattern, options); string result = regex.Replace(input, substitution); } } If you wish to simplify/modify/explore the expression, it's been explained on the top right panel of regex101.com. If you'd like, you can also see in this link how it would match against some sample inputs. RegEx Circuit: jex.im visualizes regular expressions.
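For what it's worth, the suggested pattern ports to JavaScript almost unchanged (an illustrative port, not the .NET code above; the `$2 $3` replacement syntax happens to be identical):

```javascript
// Field in [brackets], a comparison operator, then an @parameter.
const pattern = /\[([^\]]*)\]\s*(=|>=|<=)\s*(@\w+)/g;

const input =
  "where [System.TeamProject] = @project " +
  "and [Microsoft.VSTS.Common.ClosedDate] >= @startOfDay " +
  "and [System.State] = 'Closed' " +
  "and [Microsoft.VSTS.Common.ClosedDate] >= @startOfDay";

// All field/parameter pairs; the 'Closed' literal comparison is skipped
// because @\w+ does not match a quoted value.
const matches = [...input.matchAll(pattern)].map((m) => m[0]);

// Same replacement the C# demo performs.
const replaced = input.replace(pattern, "[new_value] $2 $3");
```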
Powershell script using get-process and get-content Trying to create a PowerShell script to find a process that is not running on a list of computers (text file). I have found the script below, but I am not sure how to utilize the Get-Content cmdlet with it. $ProcessName = "VPDICOMServer" if((get-process $ProcessName -ErrorAction SilentlyContinue) -eq $Null){ echo "Process is not running" }else{ echo "Process is running" } So what did you try? What can you do so far? Can you loop through all the computers in your file, printing them? Assuming your computers.txt file is a new-line separated list of computer names, you can use a foreach loop: $ProcessName = 'VPDICOMServer' $Path = "$Env:UserProfile\Documents\computers.txt" foreach ($Computer in (Get-Content -Path $Path)) { and then you can execute Get-Process against each one (assuming you have privileges): if (Get-Process -ComputerName $Computer -Name $ProcessName -ErrorAction 'SilentlyContinue') { 'Process is running' } else { 'Process is not running' } } Awesome! Thanks so much! That worked! Just wondering, however, if I can get the response to also show the computer name. Right now I just get a list with "process is running" or "process is not running" and I have to match it up with the list of computer names. @Amiehl Sure- "${Computer}: Process is running" note the double-quotes. @Amiehl Normally you can achieve this variable replacement without the special syntax (${}), but because : can be used to define drive/scope, it's necessary in this example. As an example of what I mean, this is also valid: "$Computer - Process is running" Thanks. That is so helpful. I knew I had to put that $Computer somewhere - I just was trying to put it in the wrong place.
How to oracle partition by date range Unix Timestamp I have the table below in Oracle. I want to range-partition it monthly on the MyTimestamp column (its data type is NUMBER). Can I partition on this column, or do I need another column? If I need a new column, what should the data type of the new column be, and how do I partition with the new column (convert MyTimestamp to the new data type and partition)? ---------------------------------------------------------------------------------------------------- | id | MyTimestamp | Name | etc ... ---------------------------------------------------------------------------------------------------- | 0 |<PHONE_NUMBER> | John | ... | 1 |<PHONE_NUMBER> | Tom | ... | 2 |<PHONE_NUMBER> | Tom | ... | 3 |<PHONE_NUMBER> | John | ... | 4 |<PHONE_NUMBER> | Jack | ... -------------------------------------------------------------------------------------------------- You can define a virtual column and define the partition key on that: CREATE TABLE ... ( id NUMBER, MyTimestamp NUMBER, Name VARCHAR2(100), etc... PARTITION_KEY TIMESTAMP(0) GENERATED ALWAYS AS ( CAST(TRUNC(TIMESTAMP '1970-01-01 00:00:00 UTC' + MyTimestamp * INTERVAL '1' SECOND) AS TIMESTAMP(0)) ) VIRTUAL ) PARTITION BY RANGE (PARTITION_KEY) INTERVAL (INTERVAL '1' MONTH) ( PARTITION P_INITIAL VALUES LESS THAN (TIMESTAMP '2020-01-01 00:00:00') ); You could also use MyTimestamp directly; however, 2'635'200 seconds (i.e. 30.5 days) is just roughly a month: CREATE TABLE ... ( id NUMBER, MyTimestamp NUMBER, Name VARCHAR2(100), etc... ) PARTITION BY RANGE (MyTimestamp) INTERVAL (2635200) ( PARTITION P_INITIAL VALUES LESS THAN<PHONE_NUMBER>) ); Thanks, I tested that; please help me with indexing this table in this question: https://stackoverflow.com/q/73018170/12780274 If you want to PARTITION by DATE, you need a date column. Below is an example with some dummy data. When new PARTITIONs are automatically added they will have system GENERATED names.
I have code to RENAME them to something meaningful if you like. In addition, you will probably want to implement a RETENTION period for the PARTITIONs: how long to keep them around. I implemented that too. CREATE TABLE t2 ( seq_num NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1) NOT NULL, dt DATE ) PARTITION BY RANGE (dt) INTERVAL(NUMTOYMINTERVAL(1, 'MONTH')) ( PARTITION OLD_DATA values LESS THAN (TO_DATE('2022-01-01','YYYY-MM-DD')) ); / INSERT into t2 (dt) with dt (dt, interv) as ( select date '2022-01-01', numtodsinterval(1,'DAY') from dual union all select dt.dt + interv, interv from dt where dt.dt + interv < date '2022-07-31') select dt from dt; / By timestamp: CREATE TABLE t3 ( seq_num NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1) NOT NULL, dt TIMESTAMP) PARTITION BY RANGE (dt) INTERVAL ( NUMTOYMINTERVAL (1, 'MONTH') ) ( PARTITION OLD_DATA VALUES LESS THAN (TIMESTAMP '2022-01-01 00:00:00.000000') ); / INSERT into t3 (dt) SELECT TIMESTAMP '2022-01-01 00:00:00' + (LEVEL - 1) * INTERVAL '5' MINUTE + MOD(LEVEL - 1, 10) * INTERVAL '0.1' SECOND FROM DUAL CONNECT BY TIMESTAMP '2022-01-01 00:00:00' + (LEVEL - 1) * INTERVAL '5' MINUTE + MOD(LEVEL - 1, 10) * INTERVAL '0.1' SECOND < DATE '2022-01-15'; / Thanks. T2 is partitioned with the DATE type and T3 with the TIMESTAMP type? Yes, look at dt in both tables. In t2 it's a date and in t3 it's a timestamp. I also gave you a way to generate sample data, so the PARTITIONs can be created and you can see what's going on. If I answered your question please upvote my answer. In addition, I have other code to RENAME PARTITIONs and to deal with RETENTION. Is TIMESTAMP faster, or the DATE data type? Faster? What are you referring to? You will need to decide between global or local indexes if they are needed and test your queries. Also decide whether you need to keep dates or timestamps.
I think performance should be equivalent, but you should do the appropriate amount of testing and see what works best for you. Thanks, please help me with indexing this table in this question: https://stackoverflow.com/q/73018170/12780274
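For intuition on the two approaches in the first answer (a JavaScript illustration, not Oracle code): the virtual column derives a real calendar date from the epoch seconds, so interval partitioning can cut on true month boundaries, while the raw-number interval of 2,635,200 seconds only approximates a month and drifts against the calendar.

```javascript
// Calendar-accurate month bucket: epoch seconds -> "YYYY-MM" (UTC), in the
// spirit of the TIMESTAMP '1970-01-01' + n * INTERVAL '1' SECOND conversion.
function monthBucket(epochSeconds) {
  const d = new Date(epochSeconds * 1000);
  const mm = String(d.getUTCMonth() + 1).padStart(2, "0");
  return `${d.getUTCFullYear()}-${mm}`;
}

// Fixed-width bucket: floor(n / 2635200), i.e. 30.5-day slices. It drifts
// because calendar months are not all 30.5 days long.
function fixedBucket(epochSeconds) {
  return Math.floor(epochSeconds / 2635200);
}
```

For example, the first seconds of February and March 1970 fall into different calendar buckets but the same fixed-width bucket, which is the "just roughly a month" caveat made concrete.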
node-horseman async not working? I'm using node-horseman to (hopefully) allow me to carry out asynchronous PhantomJS operations to achieve async headless browsing. The numbers in the array are at this stage irrelevant; I've just stripped the code down to the bare minimum to demonstrate the problem I'm having. When I run the code below it runs async; however, as soon as I create a new Horseman it stops running asynchronously. I know it is not running async because the outputs (console logging the numbers) happen in a linear fashion, with each number being displayed after a uniform amount of time. Running async, it should be instantaneous, as the overhead for showing each one should be the same, so all the numbers should appear at the same time, the same way they do when Horseman objects are not created (as shown in the code below with the Horseman object disabled). var Horseman = require('node-horseman'); var async = require('async'); var testArray = [ 1, 2, 3, 4, 5 ]; function evaluate( item ) { console.log( item ); /*It runs asynchronously but if the two lines below are activated it stops being async and runs synchronously, defeating the whole purpose of using horseman??*/ //var horseman = new Horseman(); //horseman.close(); } async.each( testArray, function( item, callback ) { evaluate( item ) callback(); }, function( err ) { console.log( 'all complete' ); } ) Any help is greatly appreciated. The code that you run with async.each is supposed to be asynchronous. If the evaluate function is not async, the whole async.each call is not async; plus it runs the risk of producing a stack overflow error if the array is very long. Ok thanks. If I put the evaluate function inline in the async.each function parameter, would this fix it? Or are you saying that the new Horseman is making it not async, in which case I should create it outside of the async call and pass it in? Just tried what I asked you, and no, it doesn't!!
var Horseman = require('node-horseman'); var async = require('async'); testArray = [ { number : 1, horseman : new Horseman }, { number : 1, horseman : new Horseman }, ]; async.each( testArray, function( item, callback ) { console.log( item.number ); var horseman = item.horseman; horseman .open( 'http://www.google.com' ); horseman.close(); callback(); }, function( err ) { console.log( 'all complete' ); } ) Sync code can't be made async just by running it through async.each. Your async.each call is sync whether you have new Horseman in there or not. Thanks, so just to clarify, you're saying that my code as shown in my initial question ( without the horseman ) isn't async? @Michael 1. How do you know that it runs async or sync? Have you some indication of this by looking at the output? Please show the output and how you arrived at your conclusion. 2. What are you actually trying to achieve? What are those numbers and what are they used for in conjunction with Horseman? Please [edit] your question to include details. Comments are not well suited for posting console output or code. Thanks, have done to answer your queries Yes, your code as shown in your initial question isn't async. It just looks as if all the numbers appear at the same time because your computer is very fast. You don't notice a delay of a few milliseconds. Also it looks like horseman doesn't support async operations. So if you want to process several websites at the same time you need to use something else. e.g. https://github.com/alexscheelmeyer/node-phantom Ok thanks. I copied the code from an example of async node so presumed it to be. I also wrongly presumed the selling point of node-horseman was that it bridged between phantom and node allowing async, without that what is its purpose? Server side headless browsing? Also I tried node-phantom but being a linux newbie couldn't get it working correctly, will have to retry.
Thanks for your help. I'm also having the same issues using async and node-horseman. If you use the .forEachLimit function instead of .each, you can limit the amount of requests being done. I've opened an issue on Github about this as well: https://github.com/johntitus/node-horseman/issues/28 I switched to phantom-zombie instead, as I couldn't find a solution. So .forEachLimit didn't work quite as well as I wanted it to. Instead, I ended up using a synchronous for loop since node-horseman refuses to play nice with async. Yes agreed; like I said, phantom-zombie gives you async headless browsing, though I did get that to work
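The distinction discussed in this thread can be sketched with plain Promises (no external libraries; node-horseman itself is not used here, and the doubling function is purely illustrative): work only runs concurrently if the work itself is asynchronous — wrapping a synchronous function in an iteration helper does not make it concurrent.

```javascript
// Sketch: the work itself must yield to the event loop to run concurrently.
function evaluateAsync(item) {
  // Simulate an asynchronous operation (a page load, a network call, ...)
  return new Promise(resolve => setTimeout(() => resolve(item * 2), 10));
}

const testArray = [1, 2, 3, 4, 5];

// All five operations start immediately and run concurrently;
// Promise.all resolves once every one of them has finished.
Promise.all(testArray.map(evaluateAsync)).then(results => {
  console.log(results); // [ 2, 4, 6, 8, 10 ]
  console.log('all complete');
});
```

With a synchronous `evaluate` instead, the map would block the event loop item by item, which is exactly the linear behaviour described above.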
common-pile/stackexchange_filtered
Vue typescript router.push() just changes address but does not reload page Hello there everyone. I'm asking this strange question because of one factor: whenever I try to change route in my app, it just changes the address in the bar but does not reload the page. My code is just a basic implementation of vue-router. Routes do not even change if I type the address into my web browser bar and hit enter; it only refreshes when I manually refresh the page (same scenario when I use @click: it changes the address but doesn't reload the page until I do it manually). import Vue from 'vue' import VueRouter from 'vue-router' Vue.use(VueRouter) const routes = [ { path: '/', name: 'home', component: () => import(/* webpackChunkName: "welcome" */<EMAIL_ADDRESS> }, { path: '/face', name: 'faceHeal', component: () => import(/* webpackChunkName: "welcome" */<EMAIL_ADDRESS> }, ] const router = new VueRouter({ routes }) export default router If it helps, I'm using TypeScript. Are you injecting the router into the app? This is where you declare new Vue({ .. }).mount(#app) @JamesTotty Yes, I am: new Vue({ router, store, render: h => h(App) }).$mount('#app') I have the same problem
common-pile/stackexchange_filtered
Unresolved reference: compileKotlin in build.gradle.kts The Kotlin project builds successfully with this build.gradle: compileKotlin { kotlinOptions.jvmTarget = JavaVersion.VERSION_1_8 } compileTestKotlin { kotlinOptions.jvmTarget = JavaVersion.VERSION_1_8 } Nice. But I need to change to build.gradle.kts: plugins { kotlin("jvm") version "1.2.10" id("application") } group = "com.myproject" version = "1.0-SNAPSHOT" application { mainClassName = "MainKt" } java.sourceCompatibility = JavaVersion.VERSION_1_8 repositories { mavenCentral() jcenter() } val kotlinVer = "1.2.10" dependencies { compile(kotlin(module = "stdlib-jre8", version = kotlinVer)) implementation("com.google.code.gson:gson:2.7") implementation("com.squareup.okhttp3:logging-interceptor:3.8.0") implementation("com.squareup.retrofit2:converter-gson:2.1.0") implementation("com.squareup.retrofit2:retrofit:2.5.0") implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8") } compileKotlin { kotlinOptions.jvmTarget = "1.8" } compileTestKotlin { kotlinOptions.jvmTarget = "1.8" } and now I get this error: Line 32: compileKotlin { ^ Unresolved reference: compileKotlin I don't see the kotlin plugin in your build script. https://kotlinlang.org/docs/reference/using-gradle.html @LaksithaRanasingha I updated my post. It did not help. There's an issue in the Kotlin Gradle DSL that causes this. https://github.com/gradle/kotlin-dsl-samples/issues/1368 You will need to use the following workaround until it gets resolved. tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> { kotlinOptions { jvmTarget = "1.8" } } Any ideas on how to extend this task? I want to create a custom task that compiles Kotlin files. You should import KotlinCompile first: import org.jetbrains.kotlin.gradle.tasks.KotlinCompile.
Following the official Kotlin documentation (Using Gradle part), I suggest using constructions like these in the build.gradle.kts: import org.jetbrains.kotlin.gradle.tasks.KotlinCompile plugins { java kotlin("jvm") version ("1.3.21") } // repositories, dependencies, etc... val compileKotlin: KotlinCompile by tasks val compileTestKotlin: KotlinCompile by tasks compileKotlin.kotlinOptions { jvmTarget = "1.8" } compileTestKotlin.kotlinOptions { jvmTarget = "1.8" } Shouldn't there be a difference between compileTestKotlin and compileKotlin? Or maybe I'm missing something? Use the withType keyword: import org.jetbrains.kotlin.gradle.tasks.KotlinCompile plugins { val kotlin = "1.3.61" kotlin("jvm") version kotlin apply false } subprojects { repositories { mavenCentral(); mavenLocal() } apply(plugin = "org.jetbrains.kotlin.jvm") tasks { val java: String by project withType<KotlinCompile>{ kotlinOptions { jvmTarget = java }; sourceCompatibility = java; targetCompatibility = java } } } Unrelated: Unresolved reference: compile. I was able to provide the dependency to implementation() instead of compile(): // build.gradle.kts //... dependencies { //- compile("com.cloudbees:groovy-cps:1.22") //+ implementation("com.cloudbees:groovy-cps:1.22") implementation("com.cloudbees:groovy-cps:1.22") } //... This SO post says the api and implementation keywords are successors to the compile keyword: Gradle Implementation vs API configuration Edit: Google turned up this question for Unresolved reference: compile plugins{ id ("org.jetbrains.kotlin.android") //add this } Remember that Stack Overflow isn't just intended to solve the immediate problem, but also to help future readers find solutions to similar problems, which requires understanding the underlying code. This is especially important for members of our community who are beginners, and not familiar with the syntax. Given that, can you [edit] your answer to include an explanation of what you're doing and why you believe it is the best approach?
common-pile/stackexchange_filtered
Anime about a pirate submarine on a flooded Mars Around 5 to 10 years ago I saw an anime series that was set on Mars, which was flooded to create pretty much an ocean world. The main character - a stereotypical male teen ditz - was picked up by a pirate submarine close to the start, and over the course of the show this pirate submarine went on some kind of odyssey around the planet. I remember that the main character (surprise, surprise) proved to be the key to some ages-old mystery that would save the mankind of Mars or something. A particular scene that I remember relatively clearly was when the main cast visited a preserve for native Martian humans that turned out to be pretty much a farce: behind the surface of primitive life (think of a Native American reservation), the huts were stuffed with all the amenities of modern Martian life. This appears to be Mars Daybreak. It hits all the main points: Ocean terraformed Mars Young male protagonist Invited onto crew of pirate submarine named 夜明けの船 / Yoake-no-fune. That would translate as Vessel of Breaking Dawn or (as the official translation) Ship of Aurora With the surface under heavy satellite surveillance, the pirates have been forced underground. Or rather, underwater. Instead of sailing over the waves, they lurk under them in submarines. And the baddest pirates around are the crew of the sub Ship of Aurora. Which brings us to our hero, Gram River: A young man living on the city-ship Adena. While trying to help a friend get out of a bad situation, he ends up in the middle of a raid by the Ship of Aurora, and thrown overboard when the port is rammed and breached... He is eventually hired on by the crew - https://tvtropes.org/pmwiki/pmwiki.php/Anime/MarsDaybreak And from Wikipedia: The ocean-covered environment makes a perfect setting for commercial trade ships and pirates to utilize submarines to make a living. At the same time, the pirates also raid those very trades for personal profit.
The most renowned and feared of the pirate vessels is the Ship of Aurora, which makes a habit of reselling its booty cheap so that it can be redistributed to the less fortunate folk. - https://en.wikipedia.org/wiki/Mars_Daybreak "Ship of Aurora" is pretty much a translation error, I believe: 夜明けの船 is the title, as far as Wikipedia says, Yoake-no-fune. That would translate as "Vessel of Breaking Dawn". Aurora is the Latin word for "dawn". So is this like Space Pirate Captain Harlock, only on Mars? @chepner the error is in the ship/vessel distinction: submarines in the English language are not ships but either the neutral vessels or (regardless of size) boats.
common-pile/stackexchange_filtered
python module matplotlib not found I'm trying to use the matplotlib module; however, when I try to run it I get: ImportError: No module named 'matplotlib'. I've had a look online but I couldn't find any solution. Any ideas on what I could do to fix this, and if not, any suggestions on similar modules? Install a Python distribution like Anaconda: https://www.continuum.io/downloads You probably don't have it installed. Just type this in your terminal/cmd: pip install matplotlib When I used this in both IDLE and cmd it did not work... am I doing this right? Do you have Python installed properly? pip install matplotlib If you are using Python 3: pip3 install matplotlib
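A frequent cause of this ImportError is installing the package for one interpreter while IDLE runs another. A hedged diagnostic sketch (matplotlib itself may not be installed where this runs, so the runnable check below uses a stdlib module instead):

```python
import importlib.util
import sys

def has_module(name):
    """Return True if `name` is importable by THIS interpreter."""
    return importlib.util.find_spec(name) is not None

print(sys.executable)       # the interpreter actually running this script
print(has_module("math"))   # True: stdlib modules are always present
# For the real case you would check: has_module("matplotlib")
```

If that check returns False for matplotlib, install it with the pip belonging to that same interpreter, e.g. `python -m pip install matplotlib`.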
common-pile/stackexchange_filtered
Re-route all internet traffic through firewall I'm setting up a dual firewall setup with a DMZ and an internal network. The servers are dedicated root servers running Debian Bullseye, all necessarily having a NIC with a public IP. In addition, servers in the DMZ have a second NIC going to a switch. Another dedicated root server is set up as a firewall (pfSense), attached to the same switch. Now I want to route all incoming traffic from each dedicated root server through this firewall by routing all traffic from the public NICs through the second NIC, then through the firewall and back. I'm struggling with the Debian network interface configuration. Could you provide me an example config of how such re-routing should be done? UPDATE This is my bare config at the moment (IPs are fake of course). How should I change the config in order to use the pfSense server as a gateway as suggested? auto lo iface lo inet loopback iface lo inet6 loopback auto enp8s0 iface enp8s0 inet static address <IP_ADDRESS> netmask <IP_ADDRESS> gateway <IP_ADDRESS> # route <IP_ADDRESS>/26 via <IP_ADDRESS> up route add -net <IP_ADDRESS> netmask <IP_ADDRESS> gw <IP_ADDRESS> dev enp8s0 auto enp1s0 iface enp1s0 inet static address <IP_ADDRESS>/24 # gateway <IP_ADDRESS> pointopoint <IP_ADDRESS> up sysctl -w net.ipv4.ip_forward=1 up route add -net <IP_ADDRESS>/24 gw <IP_ADDRESS> dev enp1s0 UPDATE 2 My network is as follows: 10GB Switch with 2 VLANs for internal network and DMZ External Firewall server running pfSense Internal firewall server running OPNsense 2 servers in the DMZ, each having 2 NICs: one with public IP directly connected to the provider, one private NIC connected via the Switch to same VLAN as the external firewall Similar setup for internal network: dedicated VLAN, two NICs. The only way to get into the internal network is via VPN, forwarded from the external firewall to the internal firewall with a VPN server on it.
Thus what I want to achieve is to forward all incoming traffic from the two servers in the DMZ to the external firewall, before any service on the server gets it. For example, how can I configure incoming traffic to be forwarded to the internal NIC, then to the firewall where it's filtered, and back? UPDATE 3 Infrastructure overview: the pfSense has to be set as gateway, but remember that the question here is whether this is a public hoster that denies such a setup, or whether you are running your own steel? This should be possible. I'll update the question with a current bare config. You are still hiding your network. Why aren't you explaining your network situation? Can you provide a drawing of desired traffic flow and physical connections? @vidarlo I've added an infrastructure overview. As you can see, servers e1 and e2 in the DMZ have a public IP and in order to use the IPs they must use the root server company's gateway. What I want is traffic coming from the public WAN NIC to be forwarded to the firewall server (e0). e0 has three NICs: one public WAN, one to the DMZ VLAN and one crossover to the internal firewall. I don't entirely grok your problem. Do the servers in the DMZ have an additional interface towards the public Internet? Or do you have a public subnet routed towards you? If you have additional interfaces on the servers facing the internet, the solution is probably to move those interfaces to your pfSense box, and NAT/forward traffic from there. Servers in the DMZ each have a dedicated WAN interface and public IP on it. The dedicated root servers are managed over this WAN interface, e.g. they can be reset or rescued or managed from the management console of the hosting service. This cannot be changed, thus my idea to forward traffic from the WAN interface to internal and from there to the firewall and back. First off, if you have a firewall, the proper method of accessing anything behind it would be through the public firewall interface (that is, its public IP).
Any host with a public network interface behind a firewall creates a backdoor that could be exploited. I think you assume that forwarding traffic from these interfaces to the firewall via the internal net may solve this issue; however it just gets unauthorized traffic on the internal net. I.e. your current setup with routing enabled allows anyone on the same public subnet to forward traffic to your internal net via routing. @PeterZhabin Thank you very much for this comment. This guided me in probably the right direction: I've requested an additional IP for the external firewall and will forward it from there to the server in the DMZ.
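For completeness, one commonly used building block on dual-NIC Debian hosts like the ones discussed in this thread is policy-based routing, which lets traffic associated with the internal NIC use the firewall as its gateway while the provider NIC keeps its own default route. This is only a hypothetical sketch, not the configuration the asker ended up with; all interface names and addresses are placeholders:

```sh
# Hypothetical sketch (Debian): give the internal NIC its own routing table
# whose default gateway is the pfSense box. All names/IPs are placeholders.

# 1. Declare an extra routing table once:
echo "200 viafw" >> /etc/iproute2/rt_tables

# 2. The default route of that table points at the firewall on the internal NIC:
ip route add default via 192.168.1.1 dev enp1s0 table viafw

# 3. Traffic sourced from the internal address consults that table:
ip rule add from 192.168.1.10/32 table viafw
ip route flush cache
```

Whether this fits depends on the hoster's constraints discussed above; it does not by itself redirect traffic arriving on the public WAN NIC.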
common-pile/stackexchange_filtered
Stop Reduce function in Hadoop on condition I have a reduce function where I want to halt the reduce function after processing some 'n' keys. I have set a counter to increment on each key, and on the condition being satisfied, return from the reduce function. Here is the code public class wordcount { public static class Map extends Mapper<LongWritable, Text, IntWritable, IntWritable> { private final static IntWritable one = new IntWritable(1); private Text word = new Text(); private IntWritable leng=new IntWritable(); public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException { String line = value.toString(); StringTokenizer tokenizer = new StringTokenizer(line); while (tokenizer.hasMoreTokens()) { String lword=tokenizer.nextToken(); leng.set(lword.length()); context.write(leng, one); } } } public static class Reduce extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> { int count=0; public void reduce(IntWritable key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { int sum = 0; for (IntWritable val : values) { sum += val.get(); count++; } context.write(key, new IntWritable(sum)); if(count>19) return; } } } Is there any other way that I can achieve this? Bear in mind that if you have more than one reducer, you cannot handle the processed keys limit internally. For example, if you wanted to stop after 10 keys, but you had 2 reducers, you'd end up processing 20 keys in total. You'll need to control this limit externally from wherever you are starting the job from. I am using a single reducer to achieve my condition of top n keys required. Thanks for the note.
You can achieve this by overriding the run() of the Reducer class (new API) public static class Reduce extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> { //reduce method here // Override the run() @Override public void run(Context context) throws IOException, InterruptedException { setup(context); int count = 0; while (context.nextKey()) { if (count++ < n) { reduce(context.getCurrentKey(), context.getValues(), context); } else { // exit or do whatever you want } } cleanup(context); } }
common-pile/stackexchange_filtered
How can I find a set of characters in a char array I'm writing a C++ Windows httpPlatformHandler web app. The data coming in has HTTP headers separated by \r\n. After the last header is a double \r\n followed by the posted data. I need to get a pointer to the posted data. I looked at strchr but that only looks for a single character. I need to look for "\r\n\r\n" and have a pointer to the posted data after that. The Winsock recv function gets the data into this variable: char szRecvBuffer[1024]; When doing stuff with strings try to use types like std::string/std::string_view. Have a look at std::string's find function; it might help you too. You want std::getline What happens if the client sends more than 1024 bytes of data? Always a good question, the receive function will return a larger number and this needs to be handled correctly. The API allows you to pass the buffer size so that won't crash but data will be lost and error handling is probably a thing. recv is called in a loop. szRecvBuffer is appended to a string. The issue is that the posted data is Unicode while the headers are not. The use of string_view and find on your buffer in code: #include <algorithm> #include <array> #include <string> #include <string_view> #include <iostream> int main() { std::array<char, 1024ul> buf; // receive buffer (nice thing is a std::array will keep track of its own size) std::string data_to_receive{ "Hello world!\r\n\r\nData 123" }; // simulate recv(ClientSocket, buf.data(), buf.size(), 0); by copying data into buf std::copy(data_to_receive.begin(), data_to_receive.end(), buf.begin()); // this will normally be returned by receive function of your socket.
std::size_t bytes_received = data_to_receive.length(); // make a string_view on the buffer that spans the received data (no pointers at all) // this avoids looking at bytes in the buffer that are outside the received data range std::string_view buf_view{ buf.begin(),buf.begin() + bytes_received }; // then within that view find the characters you are looking for auto pos = buf_view.find("\r\n\r\n"); if (pos != std::string::npos) { std::string_view substring{ buf.begin() + pos, buf.begin() + bytes_received }; std::cout << substring; } } I've never used array or string_view, I'll look into using them. What does the ul after 1024 stand for? I should mention that the posted data is Unicode and the headers are UTF8. A std::array has the same performance as "C" style arrays but can be passed around like objects, so by (const) reference if you want to. And can easily be returned from functions as well (like any other object). Oversimplified, a string_view is a struct with a begin and end pointer into existing data. UTF8 is still byte (char) based, just don't confuse the number of bytes (length of view) with the actual number of characters.
common-pile/stackexchange_filtered
Atmega328p resets and crashes when powering up solenoids I'm having problems with my PCB based on an atmega328p microcontroller (very similar to an Arduino standalone board). These are my current PCB schematics and Eagle board. SCHEMATICS UPDATED! I use connectors on VSX, VDX, MOT1A, MOT2A, MOT2A, MOT2B to switch ON and OFF solenoid valves (the valves are rated as 12VDC, 2A at max). The board works randomly fine only for a few seconds or minutes until the microcontroller crashes or resets, and I think this happens due to solenoid noise. The main power is 12VDC, 35A (used for the valves and the relay) and then I use a 5VDC voltage regulator to power the atmega328p. As you can see, I already added flyback diodes across each mosfet, but maybe this is not enough to prevent solenoid noise. I also twisted each couple of wires (maximum wire length is 90 cm) which connect the valve pins to the VSX, VDX, MOT1A, MOT2A, MOT2A, MOT2B connectors, and I added an additional diode in parallel on each valve's pins. How can I solve the problem? I was thinking of adding 100nF ceramic caps between VCC and GND, AREF and GND, and AVCC and GND as close as possible to the microcontroller pads, and another 100nF cap in parallel on each valve's pins. Do you think it can be enough to solve the problem? Unfortunately, I think I can't separate the solenoid power source from the atmega328p power source since they are currently sharing the same GROUND. What can I do to solve the problem while continuing to use the same PCB? I'm not exactly sure what the relay at the extreme right of the PCB is supposed to do, but if one pair of screw terminals is a mains input and the other pair is a relay-switched output it's not wired correctly. It will just short out the mains: Illustration. There is also a second issue with that relay; the clearance between the relay coil and the common terminal of the switch is unnecessarily small. If the relay is used to switch the mains, this could be dangerous.
Thank you, jms, for your support. The relay has no function on this board and I'm not using it. It is not even mounted on this layout. Its task was to short the two terminals to act like a switch, but I'm not using it. Cut that trace running up from pin 22 and jumper it directly back to the middle pin on the 7805 with a fly wire. Add a 10uF X7R or X5R cap directly between pin 7 and 8. Make the layout better next time or use a 4-layer board. So, should I cut the GND traces (pins 22 and 8) from the atmega and directly connect them to the 7805 central pin (GND)? Yes, jumper from pin 8 directly back to GND on the 7805 Can you explain the purpose of this operation, please? Isn't the middle pin of the 7805 the same GND for the entire circuit? I wanna improve my electronics knowledge, so I would like to know what happens if I do this. Thank you! Current from the MOSFETs is flowing through the micro GND trace. The cut and jumper avoids the ground bouncing around. This may not be enough but it may help. Next step is to add series gate resistors of a few hundred ohms. Thank you! I already have the series gate resistors of 220R between the atmega output and the MOSFET gate. Are they not correct?
Do you think they are OK? Yes, looks okay. You could even go higher but it's probably not necessary. I added a 1000uF main capacitor filter across +12v and gnd on the main connector. I also added a 100uF between pin 7 and pin 8. There was already a 100uF cap between aref and gnd but it was not close to the atmega, so I added another cap directly on the aref and gnd pads. Is it OK or do I have to remove the old cap? I've cut the pin 22 trace and I connected it directly to the 7805 gnd. Is there anything else I can do to improve the board? A ceramic cap is requested - 100uF sounds a bit high to be a ceramic, though not impossible. These details do matter. An electrolytic cap will not work as well in that position because of inductance and ESR. I made a typo in my previous post; I used 100nF ceramic capacitors. Unfortunately, the board worked fine for a few hours and then it started to fail again! :( Now, every time I power it up, the atmega crashes after a few seconds. I can't understand why such small solenoids give me so many problems. Is there still anything I can do to solve the situation? Or do I have to give up? :( Do I have to limit the maximum current for the solenoid? It says that it requires a maximum current of 1.9A, but I'm not limiting the current in any way. Or will the solenoid only ask for 1.9A in any case? Can the solenoid draw over-current (more than 2A) and get burned? Maybe the problem is that I need to limit the current! What do you think about this? It is very unlikely that something will help to fix this board layout. This PCB design has a very poor ground, so somewhere the ground bounces, and a glitch kills the processor. Why are so many folks here not using a ground pour? Then the ground return path for relays must be separated from the ground for low-power control electronics. It should be designed in first. Then, a main capacitor is missing on the input 12V jack, which is the main power rail for all relays. Hello Ali, thank you for your answer.
This is my first time in PCB designing and so I made several errors. Is there any way to solve my problem? For example, where can I place the main capacitor? Across +12v and ground? Would 100nF as the main capacitor be enough? No. I would put a 100uF or 2,200uF or something. Maybe you will be able to drop thick blue wires to beef up the ground return from the power electronics. And why do you have heat sinks on the FETs? The transistors should be operating in ON or OFF mode, and dissipate almost nothing. And I miss the 35A parameter. Clearly the traces (as drawn) can't support this kind of current. You can try to put a quarter pound of solder on top to beef them up as well. I'm not using the heat sinks on the FETs, I just added them to the board design in Eagle, but I'm not using them for real. Do you think I will have to also add bypass capacitors on VCC and GND close to the microcontroller? I'm just using a 12V, 35A battery, but I really don't need all those amperes since they will fry the board for sure. The maximum current needed by the board is 2A for each valve. A 100nF bypass at the microcontroller is a must. 2A times 10 valves is still a lot. Do you think the D1 diode on the +12V input is correct? Or do I have to remove it? It is a 1N4004 diode which can only handle 1A. The cap PC1 is 100uF, should I replace it with the 2200uF? The cap PC1 only works for the 5V digital rail, 100uF is definitely ok. The diode is ok, it provides at least some de-coupling from the main power rail. Without an extra big capacitor at the 12V power jack, the inductance of the supply wires will cause drops on the rail on every valve switch. Thank you, so I need to add a big cap between +12V and GND before the diode and PC1, if I understood correctly. Ali, the schematics added to my first message were not the updated ones. I edited my first topic to add the real schematics. Do I have to limit the maximum current for the solenoid? It says that it requires a maximum current of 1.9A, but I'm not limiting the current in any way.
Can the solenoid draw over-current (more than 2A) and get burned? Maybe the problem is that I need to limit the current! What do you think about this? Solenoid coils should be driven to the solenoid manufacturer's specifications. If it is 12V, then it should be 12V regardless of current taken. Otherwise they might not operate to their mechanical specifications. The problem is that the manufacturer says that the solenoid is 12VDC, 3.7 ohm and 1.9A. The measured current between the mosfet and the solenoid is 2.9A, which is quite a bit higher than expected. Maybe this is the problem which makes the board lock up. If I use a voltmeter in series between the mosfet and the solenoid to measure the current and I power on the board... then the board works perfectly!! If I remove the voltmeter, then the board doesn't work again. This is very strange!! Why does it work correctly only if the voltmeter is connected to the circuit? Is your DMM an auto-ranging DMM? Or the DMM leads act as an inductance, which slows di/dt and causes fewer switching spikes and less ground bounce
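The current figures in the exchange above can be sanity-checked with Ohm's law (values taken from the thread; the interpretation is mine, offered as a hedged observation): the coil's quoted DC resistance predicts a steady-state current close to the 2.9 A measured, so the 1.9 A rating may apply to a warm coil or to different test conditions rather than to a cold start.

```python
# Ohm's law sanity check with the values quoted in the thread.
V = 12.0            # supply voltage, volts
R = 3.7             # quoted coil DC resistance, ohms
I = V / R           # predicted steady-state coil current, amperes
print(round(I, 2))  # 3.24 -- in the same ballpark as the measured 2.9 A
```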
common-pile/stackexchange_filtered
Does the MTA (Postfix, Exim, etc.) get installed separately from the mail server I'm new to mail servers and still trying to understand all the components. What I read is that there are many MTAs, but the 4 common ones are qmail, Postfix, Sendmail, and Exim. I also found this list of mail servers. http://en.wikipedia.org/wiki/List_of_mail_servers What's confusing me is that mail servers like Zimbra and Atmail are listed at the same level as qmail/Postfix, etc. Are mail servers and MTAs the same thing? I thought (correct me if I'm wrong) that a mail server includes an MTA as one of its components. Zimbra uses Postfix as its MTA. The problem you're having is that "mail server" is an imprecise term. Some people take it to mean "MTA" (because that's all you need to have a server that handles mail), whilst others take it to mean a server that receives mail and stores it for users to manipulate, while still other people have other definitions entirely. The four software packages you list are, indeed, MTAs, and have little-to-no other functionality provided. Zimbra, et al are what I would call something like "mail service suites", but really there isn't any fully-accepted terminology. It's best to do your own investigation into what a particular program, package or suite does rather than try to rely on imprecise terminology to gauge suitability. (See also: "cloud").
Yahoo uses (or used) their own Frankenstein adaptation of qmail.
common-pile/stackexchange_filtered
new line in react-admin TextField How do I make the TextField recognize a new line with the "\n"? Because when I enter it in the Field, it just prints it as it is: <TextField source="body" component="pre"/> Just Try to post to many stuff not relevent\nand think he is a god Well, I found the solution, and it was just to add multiline, and that fixed the problem
common-pile/stackexchange_filtered
What is returned from a SQL query into a PowerShell variable? Here is the function I have setup that works just fine to send queries to a SQL database from PowerShell and return the results (the results are what I don't quite understand) function Invoke-SQL { param ( [string]$server, [string]$database, [string]$Query ) $connectionString = "Data Source=$server; " + "Integrated Security=SSPI; " + "Initial Catalog=$database" $connection = new-object system.data.SqlClient.SQLConnection($connectionString) $command = new-object system.data.sqlclient.sqlcommand($Query, $connection) $connection.Open() $adapter = New-Object System.Data.sqlclient.sqlDataAdapter $command $dataset = New-Object System.Data.DataSet $adapter.Fill($dataSet) | Out-Null $connection.Close() $dataSet.Tables } If I run a query such as the one below (it returns no results, meaning there were no records that existed that matched the condition) why does it return nothing when I just put in $results? Why is the result 'Table' when I do Write-Host $results ? See below PS>$results = Invoke-SQL -server 'servername' -database 'DBname' -Query "SELECT * FROM [DBname].[dbo].[TableName] WHERE UserID = 'x' AND ComputerName = 'x'" PS>$results PS>Write-Host $results Table When no records are found I thought it would be equal to "" or $null but it is not upon testing $null test PS>If ($results -eq $null) { >> write-host "Null"}else{ >> write-host "Not Null" >> } Not Null "" test PS>If ($results -eq "") { >> write-host "Empty"}else{ >> write-host "Not Empty" >> } If someone could explain this to me, and what options I might have in order to check if a query returns no results, that would be great! You can write ($results | Measure-Object).Count to count the rows in the table (0 = no rows). Thanks Bill, that answers half of my question :) What happens when you run the commands individually instead as a function? Does the $dataset contain any rows? Also, for testing purposes, drop the | Out-Null. 
What's the other half of the question? @Bill_Stewart, Why does Write-Host $results return 'Table' ? Honestly I'm not sure what a 'dataset' is, but that's what the Invoke-SQL function returns. I found that function online a few months ago. Why does it matter? Why do you need Write-Host? It does not matter at all, I simply would like to learn more in depth about what exactly is happening in the code. @Adamar, I will test that tomorrow! If you are not sure what a dataset is, it's all documented on MSDN. https://msdn.microsoft.com/en-us/library/system.data.dataset(v=vs.110).aspx. All the other classes as well. Tons and tons of stuff to read. Yep. DataSet, DataTable, DataView, etc. Honestly you should've read that before you started using System.Data.SqlClient. That said, whenever you want to know the answer to, "What type of object is this?" you should try $results.GetType().FullName or $results | Get-Member. Read the comments on the question post for more details. In order to see if records were returned or not, this will return the number of rows (records) returned. Credit to @Bill_Stewart. ($results | Measure-Object).Count @Tomalak provided a helpful link. @BaconBits had this helpful tip to get the type of an object: $results.GetType().FullName # or $results | Get-Member Thank you all for your help.
common-pile/stackexchange_filtered
Android drawables work in Debug but become black squares in Release I'm working on an Android app (written in Flutter but I don't think it matters) which uses a drawable called quick_plus.png as shown here: It works great in debug mode. But not in release mode, where the quick_plus.png drawable became a black square. After unzipping my APKs I confirmed that's also what they contained, just a black square: No idea if this is relevant, but I generated those APKs by following the bundletool docs: bundletool build-apks \ --bundle=build/app/outputs/bundle/release/app-release.aab \ --output=build/app/outputs/bundle/release/app-release.apks \ --ks=... \ --ks-pass=... \ --ks-key-alias=... \ --key-pass=... bundletool install-apks --apks=build/app/outputs/bundle/release/app-release.apks Here's my build.gradle release config: signingConfigs { release { keyAlias keystoreProperties['keyAlias'] keyPassword keystoreProperties['keyPassword'] storeFile keystoreProperties['storeFile'] ? file(keystoreProperties['storeFile']) : null storePassword keystoreProperties['storePassword'] } } buildTypes { release { signingConfig signingConfigs.release minifyEnabled true useProguard true proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } and the proguard-rules.pro: #Flutter Wrapper -keep class io.flutter.app.** { *; } -keep class io.flutter.plugin.** { *; } -keep class io.flutter.util.** { *; } -keep class io.flutter.view.** { *; } -keep class io.flutter.** { *; } -keep class io.flutter.plugins.** { *; } -keep class com.dexterous.** { *; } Has anyone seen this before? Found the problem! It was due to Android shrinking and obfuscating my code. All I had to do was add a file called android/app/src/main/res/raw/keep.xml with this: <?xml version="1.0" encoding="utf-8"?> <resources xmlns:tools="http://schemas.android.com/tools" tools:keep="@drawable/quick_plus" />
common-pile/stackexchange_filtered
How should I approach consolidation of BEA WebLogic application development workstations? I am hoping to gain some insight as to how I might virtualize/consolidate the desktop environment for my developers. Our dev/unit testing environment consists of a WebLogic server installed on each developer's Windows XP workstation. He/she will make coding changes, compile their JARs, and then compose their work with JARs from their peers' last commit from ClearCase. They run an Eclipse-based client locally to hit the instance of WebLogic on their desktop to do unit testing without affecting other developers' configurations. This development environment is on an isolated "testing" network. The developers use separate workstations for office email, etc. but are not permitted (secure environment) to install the WebLogic Server instance locally. I am fairly new to managing WebLogic, but have experience managing other applications/development environments using Windows Terminal Services. Would it be possible to "sandwich" several instances of WebLogic Server for, say, 25-50 developers on one instance of Windows 2003 or 2008 server, and have the developers access their individual WebLogic Server instances via RDP session? Or can it only be installed in a single Application Server instance per OS instance? My ultimate goal is to have an isolated server (trying to avoid the cost/overhead of using VMware with a bunch of XP or Windows Server VMs) that can be accessed from the "office management" network and do away with the need to maintain an additional set of desktop hardware/OS instances. Any "outside-the-box" ideas are welcome, but there are many security constraints and corporate standards which restrict the set of possible solutions. You can run multiple WebLogic servers on your server-class machine (maybe not 20-25, though). Each WebLogic server is ultimately a Java application which you kick off with the startServerCmd batch file.
You will be limited by the RAM available on the server. For a dev instance, if you give each WebLogic server 512 MB of JVM heap, you can ideally run 8 servers on a 4 GB server. Also, each WLS will need a separate port. Developers can access their servers via RDP. You will have an issue around security and separation of concerns, since each WebLogic domain runs under the common BEA_HOME, so the developers might be able to access other domains. Or you might have to install BEA_HOME within each RDP profile. This approach sounds promising; I could at least consolidate a few desktop VMs into "farms" which would follow the logical division of the WLServers. Would I simply run a single AdminServer and then multiple instances of the custom servers? With some careful parameter configuration, I am willing to give this a try. Thanks very much! @mxmader: You wouldn't be able to run multiple Managed servers to the same Admin BECAUSE the Admin will keep the same codebase (i.e. same jars/wars) in sync across all its Managed servers. I'm assuming you need each farm with a separate code base. So you'll actually need multiple "WebLogic domains", which means "multiple admins". In dev mode you don't need Admin + Managed - we usually only keep Admin servers on which we deploy the code and test. @mxmader: see here http://download.oracle.com/docs/cd/E13222_01/wls/docs103/domain_config/understand_domains.html and you could run multiple domains off a single BEA_HOME or multiple BEA_HOMEs
common-pile/stackexchange_filtered
Sending Hex data I am working with some hardware that can be controlled via hex commands. I already have some snippets of Python code I use for telnet control of other devices that use ASCII commands. How do I go about sending hex commands? For instance, how would I modify skt.send('some ascii command\r') with a hex value, and what's the best data type for storing those values? Thanks. Are you using Python 2.x or Python 3? In Python 2, use string literals: skt.send('\x12\r') In Python 3, use bytes literals or bytes.fromhex: skt.send(b'\x12\r') skt.send(bytes.fromhex('12 0d')) In either case, the bytearray type will probably be useful as it is a mutable (modifiable) type that you can construct with integers as well as a bytes literal: skt.send(bytearray([0x02, 0x03, 0x00, 0x00, 0x05])) So if I need to store x02,x03,x00,x00,x05 and send that as one string, what would that look like? @user1124541 see the bytearray example above. skt.send(bytes.fromhex('02 03 00 00 05')) will do. So if I'm trying to conform to this protocol, Jill's example should work? It's not clear to me whether, if I decide to use telnetlib instead, I can still invoke bytes.fromhex(). This is the protocol I am trying to adhere to. (*3) Checksum : "CKS" inscription This is the value of the lower 8 bits of the results calculated in byte units from all of the data up to the immediately preceding data. Example) 20H 81H 01H 60H 01H 00H 03H = CKS (* Absolutely, you just need to decide whether bytes literals or bytes.fromhex is more readable in your case. Gotcha. But the data that is sent would be identical regardless of which one I used? Yes. ヽ(*・ω・)ノ (this Japanese emoticon is just here to fool Stack Overflow's minimal limit of characters in commentaries, and so does this sentence). Under Python 3, you have the great bytes.fromhex() function as well. >>> bytes.fromhex('AA 11 FE') b'\xaa\x11\xfe' >>> bytes.fromhex('AA11FE') b'\xaa\x11\xfe'
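The checksum rule quoted above (the lower 8 bits of the byte-wise sum of all preceding data) is straightforward with a bytearray. This is only a sketch based on the protocol excerpt; the frame bytes come from the example above, and appending CKS at the end of the frame before sending is an assumption:

```python
def checksum(frame):
    # Lower 8 bits of the byte-wise sum of all preceding data.
    return sum(frame) & 0xFF

# Frame bytes from the protocol example: 20H 81H 01H 60H 01H 00H 03H
frame = bytearray([0x20, 0x81, 0x01, 0x60, 0x01, 0x00, 0x03])
cks = checksum(frame)
frame.append(cks)  # assumption: CKS is appended to the frame before sending
print(hex(cks))    # 0x6
# ready to send with e.g. skt.send(bytes(frame))
```

Because bytearray is mutable, the same buffer can be built incrementally and the checksum appended at the end, which fits the send-as-one-string question above.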
common-pile/stackexchange_filtered
How to create an around plugin for AbstractEntity? I have created an around plugin for the isAttributeValid function of Magento\ImportExport\Model\Import\Entity\AbstractEntity but it is not working. Code: <type name="Magento\ImportExport\Model\Import\Entity\AbstractEntity"> <plugin name="bbb" type="xxxx\yyy\Plugin\Entity\AbstractEntity" sortOrder="10" /> </type> In the plugin: <?php namespace XXX\YYY\Plugin\Entity; class AbstractEntity { public function aroundIsAttributeValid( $subject, $attrCode, $result) { $writer = new \Zend\Log\Writer\Stream(BP . '/var/log/test.log'); $logger = new \Zend\Log\Logger(); $logger->addWriter($writer); $logger->info('hhhuu'); echo "hhh"; exit(); $returnValue = $subject; return $returnValue; } } But the echo is not printed. What is the location of di.xml? The global location. Could you please tell me when isAttributeValid is called? It is called in Magento\CatalogImportExport\Model\Import\Product.php. I want to ask when isAttributeValid is called in the admin, so I can check whether the override is working or not. Try adding a log file - $writer = new \Zend\Log\Writer\Stream(BP . '/var/log/test.log'); $logger = new \Zend\Log\Logger(); $logger->addWriter($writer); $logger->info('Your text message'); The log file is not created. Post the whole plugin code. Updated the plugin code. There's an error in your method name: aroundisAttributeValid should be aroundIsAttributeValid (note the capital "I"). Also, I'm not sure if your parameter $result should exist - you may need to check that. Another thing, you may need to check the scope (frontend, admin, global) in which you're defining the plugin. This can also influence whether it works or not. I have changed it to aroundIsAttributeValid. Still it is not working.
common-pile/stackexchange_filtered
What is the modern approach for Core Data Property Validation? Does Key-Value Validation still the de facto approach? As the Core Data Programming Guide - Object Validation, updated to Swift 3, suggests that the Key-Value Validation utilizing the Key-Value Coding of objective-c runtime is the recommended approach to perform single property validation. As the evolutions of Swift and iOS in recent years, does this approach still represent the best practice? And what are the practical caveats when applying this technic in modern iOS? For example, @objc(AuthorMO) public class AuthorMO: NSManagedObject, Identifiable { @NSManaged public var uuid: UUID? @NSManaged public var name: String? } // MARK: Key-Value Property Validation extension AuthorMO { @objc public func validateUuid(_ value: AutoreleasingUnsafeMutablePointer<AnyObject?>) throws { guard let newValue = value.pointee as? UUID else { return } // Custom property validation. } @objc public func validateName(_ value: AutoreleasingUnsafeMutablePointer<AnyObject?>) throws { guard let newValue = value.pointee as? String else { return } // Custom property validation. } } // MARK: LifeCycle Validation Alternative // previously mainly used for inter-properties validation. extension AuthorMO { public override func validateForInsert() throws { try super.validateForUpdate() try propertyValidations() } public override func validateForUpdate() throws { try super.validateForUpdate() try propertyValidations() } public func propertyValidations() throws { try validateUUID() try validateName() } public func validateUUID() throws { let newValue = primitiveValue(forKey: #keyPath(AuthorMO.uuid)) // Custom property validation } public func validateName() throws { let newValue = primitiveValue(forKey: #keyPath(AuthorMO.name)) // Custom property validation } } This is one way of doing it. It's only automatically used when you're saving changes, and in that case all it can do is prevent saving and return an error message. 
You might display that error message to the user. If that fits your needs, then it's a good approach. You can also run it manually. It might be less useful in other situations, for example importing data from a server API. This is the only built-in technique specifically designed for property validation. There are other ways -- literally any way that works for you to check that the property values fit your requirements. For example, you could write your own failable initializer, and have it return nil if any of the arguments are no good. Or have it throw so it can return an Error explaining why it can't initialize. Custom code like this won't prevent saving changes, but if you use it carefully it'll still make sure values match your requirements. When working with Core Data this is a good way of validating data. If you would like to work with SQLite using the full potential of Swift then you should choose the GRDB project. This project is promoted by Swift developers by means of a dedicated subforum on the https://forums.swift.org forum site. Thanks for the answer. GRDB is very promising and I indeed had plans to use it in a future project. Though in a current project I am heavily using Core Data to utilize its integrated sync with CloudKit. And implementing custom CloudKit sync with GRDB would represent a huge amount of work. @zrfrank Core Data is a very good framework then. Indeed the most practical solution. Though due to its Objective-C nature, a lot of tricks were practiced before the modern persistent history tracking, query generation and CloudKit, not to mention the UI is now almost reactive (Rx/Combine). I'm not very confident when using some old techniques, tbh. Apple hasn't integrated Combine into Core Data, so the developer must do it himself. RxSwift has a Core Data extension. If you would like to use Combine/RxSwift and you have complex queries which depend on other queries then using Combine or RxSwift can be helpful.
common-pile/stackexchange_filtered
Flag condition and file writing I am creating a hotel management system software for my college project.There are two primary problems with the code. The program takes in a room number, runs it through the check function and if according to the condition books it, However when I try to book again with a unique room number it says it has been booked even though it hasn't been in reality.It only books rooms one time. The function of book_rooms is not writing contents to the file. #include<iostream> #include<conio.h> #include<fstream> #include<stdio.h> #include<stdlib.h> #include<dos.h> #include<string.h> using namespace std; struct hotel { int room_no[50]; string name[50] ; string adress[50]; int mobile_no[50]; int bill[50]; int days[50]; int room; int room1; int flag=0; fstream records; void main_menu(); int book_rooms(); int customer_records(); int rooms_alloted(); int edit_records(); int customer_bills(); int check(int); }; void hotel :: main_menu() { int choice; while(choice!=6) { system("cls"); cout<<"\n\t\t\t\t*************************"; cout<<"\n\t\t\t\t HOTEL MANAGEMENT SYSTEM "; cout<<"\n\t\t\t\t * MAIN MENU *"; cout<<"\n\t\t\t\t*************************"; cout<<"\n\n\n\t\t\t1.Book A Room"; cout<<"\n\t\t\t2.Customer Records"; cout<<"\n\t\t\t3.Rooms Allotted"; cout<<"\n\t\t\t4.Edit Record"; cout<<"\n\t\t\t5.Customer Bills"; cout<<"\n\t\t\t6.Exit"; cout<<"\n\n\t\t\tEnter Your Choice: "; cin>>choice; switch (choice) { case 1: { book_rooms(); break; } case 2: { customer_records(); break; } case 3: { rooms_alloted(); break; } case 4: { edit_records(); break; } case 5: { customer_bills(); break; } case 6: { break; } } } } int hotel:: book_rooms() { system("cls"); int room; cout<<"\n\t*************************"; cout<<"\n\t ROOM BOOKING"; cout<<"\n\t*************************"; cout<<"\n\nPlease enter the room number you want to book."; cout<<"\n1: Standard room(Room 1-20) (Rs.1000/night)"; cout<<"\n2.Suite(Room 21-40) (Rs.5000/night)"; cout<<"\n3.Luxury Room(Room 
41-50) (Rs.10000/night)\n"; ofstream records("rooms.txt"); cin>>room; room1=room-1; flag=check(room); if(flag==1) { cout<<"Sorry this room has been taken"; system("pause"); } else{ room_no[room-1]=room; records<<"Room "<<room_no[room-1]<<"\n"; cout<<"Name:"; getline(cin, name[room-1]); getline(cin, name[room-1]); records<<name[room-1]<<"\n"; cout<<"Adress:"; getline(cin, adress[room-1]); records<<adress[room-1]<<"\n"; cout<<"\nMobile no: "; cin>>mobile_no[room-1]; records<<mobile_no[room-1]<<"\n"; records.close(); cout<<"\nYour room has been booked\n"; system("pause"); }} int hotel::check(int r) { ifstream records("rooms.txt"); while(!records.eof()) { records>>room_no[r-1]; if(room_no[r-1]==r) { flag=1; break; } } records.close(); return flag; } int main() { hotel p; p.main_menu(); } Seems very likely there are multiple problems with this code. But your immediate problem seems to be that you are opening "rooms.txt" for input and output simultaneously. int hotel::book_rooms() { ... ofstream records("rooms.txt"); ... flag=check(room); ... } int hotel::check(int r) { ifstream records("rooms.txt"); ... } See the problem? book_rooms immediately opens rooms.txt for output and then check tries to open the same file for input. Reorganize your code like this so you are not opening the file in two places at the same time. int hotel::book_rooms() { ... flag=check(room); ... ofstream records("rooms.txt"); ... } int hotel::check(int r) { ifstream records("rooms.txt"); ... } But as I say I think you are going to find this is only the first of multiple problems with this code. It seems you are trying to be too ambitious and writing too much code without enough testing. It's a very common failing of beginners.
common-pile/stackexchange_filtered
MySQL CASE statement not returning the appropriate message after case evaluation I am selecting the engineStatus field from the car table; the query looks like this: select engineStatus from car LIMIT 1 The car table column can have the values 1 or 0. Assuming engineStatus is set to 1, the above query will return 1. I want to return ON if engineStatus = 1 or return OFF when engineStatus = 0. I've tried the query below, but SQL throws an error: SELECT ( case engineStatus = 1 then SET engineStatus = 'ON' else SET status = 'OFF') FROM `car` WHERE 1 MySQL says #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'then SET engineStatus = 'ON' else SET engineStatus = 'OFF') FROM `car` WHERE 1 LI' at line 1 How can I achieve this in MySQL? WHEN and END are missing. You're missing WHEN and END. Also, you shouldn't use SET here, inside the SELECT. So you simply want to return ON instead of 1, or else OFF? You're missing WHEN, and there's no need for SET here. The proper syntax is: SELECT ( CASE WHEN engineStatus = 1 THEN 'ON' ELSE 'OFF' END ) AS status FROM `car` WHERE 1 Cool, works! I had implemented the same solution a couple of months ago, but forgot to include the WHEN clause. It should be: SELECT ( case WHEN engineStatus = 1 then 'ON' else 'OFF' END) FROM `car`;
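Since CASE WHEN ... THEN ... ELSE ... END is standard SQL, the accepted syntax can be sanity-checked against an in-memory SQLite database from Python. SQLite here is only a stand-in for MySQL; the table name and values mirror the question:

```python
import sqlite3

# Build a throwaway car table with one ON row and one OFF row
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE car (engineStatus INTEGER)")
conn.executemany("INSERT INTO car VALUES (?)", [(1,), (0,)])

# Same CASE expression as in the accepted answer
rows = conn.execute(
    "SELECT CASE WHEN engineStatus = 1 THEN 'ON' ELSE 'OFF' END AS status FROM car"
).fetchall()
print(rows)  # [('ON',), ('OFF',)]
```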
common-pile/stackexchange_filtered
Strange Keyboard Bug I recently installed Raspbian onto my Pi and I started to experience a strange keyboard bug. I was able to get it to work just fine for about an hour, and I was on my way to setting up SSH when all of a sudden my keyboard stopped working. I restarted the Pi and the keyboard worked again for about 5 minutes and then died on me. This continued after every reboot, so I reinstalled Raspbian onto the SD card and now the keyboard won't work at all. I am using a Monoprice K11 USB Keyboard. I tested the keyboard on my Mac and it said that it wasn't recognized, and I had to tell my Mac that it was an ANSI keyboard for it to work. This keyboard is not on the compatible devices list, but I thought that given the nature of the error it was something besides the keyboard. The Pi is recognizing the keyboard; it shows up during boot. Do you think that this is an error with keyboard compatibility, or something else? And does anybody know how to fix it? Have you tried a different, or compatible, keyboard? This would rule out the Pi being the problem. Try using a powered hub; possibly a lack of power. I would second the idea of a lack of power. Using underpowered cords with a higher-draw keyboard causes exactly these kinds of issues. I would agree with the others. Here's the list of verified powered USB hubs if you're unsure of which one to buy: http://elinux.org/RPi_Powered_USB_Hubs I know I had many issues with my Pi freezing or rebooting with my Mac keyboard, which also acted as a USB hub. What I did was power the Pi with a powered USB hub; a cable from the main powered plug to the Pi, then a cable going from the Pi to the hub so I can use other devices. I used a D-Link powered USB hub, specifically, instead of a cheap one. It stays running, and I plug in devices to the hub. Right now, the only thing plugged in to the Pi's USB is the hub, and a basic Microsoft Mouse.
Depending on how you set up your Pi, or what it's for, though, you may be able to get away with using just SSH to use it. That won't require a keyboard to be hooked up until there's something really wrong with it. I'm marking this answer as correct because I'm almost 100% sure it's a power issue. The critical piece of information I was missing is the fact that I also had a mouse plugged in, and when I unplugged it, the keyboard worked. A verified, powered hub would solve my problem.
common-pile/stackexchange_filtered
golang variable setting I need to set a variable at the beginning of my package that will later on be populated with ParseFiles(), but I'm not sure how to set the variable considering it's not a string or int or anything generic like that. How would I set the variable without having to add some arbitrary name in there just to set it? var templatefiles = template.New("foo") // I'm having to do New("foo") just to set the variable // Later on adding files to the original variable templatefiles.New("template name").Parse("Template text here") You just need to replace = template.New("foo") with the type returned. In this case: var templatefiles *template.Template // the return type of html/template.New() templatefiles is now a global variable that contains a nil pointer to a type template.Template. That worked, thanks!!! What if later on I want to clear out the variable? Basically empty out the templates that are in the var templatefiles? As with any pointer, you set it to nil. templatefiles = nil I'm sorry, I was wrong; it did not work. This is the error I get: http://play.golang.org/p/a0irFLNgpo You tried to use templates before creating one. http://play.golang.org/p/RitlvGxTrG Haha, yes you are right. But my issue just adds a bit more complication, and I was wondering your thoughts on how to make it work using your previous playground. http://play.golang.org/p/rkxa3XUNXi That's what I figured. Thanks so much for your help. Shoot me an email using the one on my profile page. Would love to chat on your expertise in the Go language. Let us continue this discussion in chat
common-pile/stackexchange_filtered
Show a loading spinner in the jQuery popup I am using a jQuery popup that loads an external page inside an iframe. Now I want to show a loading spinner or loading bar image and hide the external page/iframe while it's working, so the user only sees the spinner. Can it be done? If yes, how? Please help me. Here is the code <script type="text/javascript"> $(document).ready(function () { $("a#em4c").bind("click", function () { $("#popup2").bPopup({ content: 'iframe', contentContainer: '#pContent', loadUrl: "http://site.com" }); return false }); }); </script> html part <p><a id="em4c"><strong>POP ME</strong>: Simple jQuery popup that loads a page inside an iframe</a></p> Thank you This might be helpful for you: <form> Your input elements </form> <div id="dialog1" title="Waiting" style="display:none"> <img src="/spinning.gif" border="0" align="left" hspace="12"/> Place all your text here <iframe id="iframe1" src="" width="100%" height="100%"></iframe> Place all your text </div> Write this in document.ready: $("#dialog1").dialog({ autoOpen: false, height: 150, width: 50, resizable: false, modal: true }); // On click of the element you can do it this way: $("a#em4c").bind("click", function () { $('#dialog1').removeAttr('style'); $('#dialog1').dialog('open'); var url = "some.php"; $('#iframe1').attr('src', url); }) On click of the element it will open the spinning image; after that it opens some.php in the iframe. Is this what you were expecting? +1 for this! @Priya, you must add the jQuery UI CSS and JS to your page for it to work, and it's a little complex the first time you do it: 1) http://jqueryui.com/download - go here and deselect all boxes except for 'Dialog', then click [Download]. 2) Add the CSS, JS, and images to your site. 3) Import the CSS in the head of your page before any JS imports, and import the jQuery UI JS file in the head of your page immediately AFTER your jQuery JS import. Should work after that.
Thanks, I have done all that but it didn't work... anyway, I figured out a different way.. thanks for the help, both of you :) You could use a jQuery plugin. Here are a few variations of a jQuery Modal (popup). I would use this SimpleModal jQuery plugin at ericmmartin.com. It says "Loading" as the content loads, and I'm sure this can be overridden with a loading spinner graphic. Thanks, but then how do I hide the iframe while it's working in the background? It does not hide it. There are other ways to do this, such as loading everything into a hidden div; then when the content is loaded you can add it to a modal popup.
common-pile/stackexchange_filtered
Error when importing geo data to Mongodb : Can't extract geo keys from object, malformed geometry I am trying to import some geo data (more than 40K) to mongodb(3) but I get sometime an error for some documents: "code" : 16755, "errmsg" : "insertDocument :: caused by :: 16755 Can't extract geo keys from object, malformed geometry I have checked the document and it's a valid geojson, respecting mongodb format [long, lat]. And when I insert the document without the polygon, it works... I cannot figure out what's wrong with those polygons. Here is an example: Thanks for help. { name: "Alco", slug: "alco-montpellier", cities: ["MONTPELLIER"], polygon: { type: "Polygon", coordinates: [ [ [3.834017, 43.610297], [3.833734, 43.610352], [3.832791, 43.612713], [3.832684, 43.613476], [3.832791, 43.614735], [3.830752, 43.614796], [3.829067, 43.614578], [3.831064, 43.617001], [3.832523, 43.61924], [3.832716, 43.619938], [3.833925, 43.620975], [3.834285, 43.621277], [3.834561, 43.621804], [3.834534, 43.621753], [3.834561, 43.621803], [3.835378, 43.622456], [3.835416, 43.622468], [3.835417, 43.622473], [3.836562, 43.622849], [3.835891, 43.624896], [3.838081, 43.626389], [3.839628, 43.627281], [3.840811, 43.628289], [3.841549, 43.629196], [3.842159, 43.631116], [3.84352, 43.629856], [3.845636, 43.628679], [3.846009, 43.628236], [3.846367, 43.62771], [3.847108, 43.627283], [3.848243, 43.62677], [3.850334, 43.625332], [3.852267, 43.624563], [3.853425, 43.623907], [3.854275, 43.623266], [3.854834, 43.622371], [3.85659, 43.621271], [3.857811, 43.62069], [3.855053, 43.620195], [3.852897, 43.620075], [3.852789, 43.620075], [3.850239, 43.620028], [3.847899, 43.619022], [3.844711, 43.618395], [3.843713, 43.617984], [3.842315, 43.617007], [3.840065, 43.615578], [3.839092, 43.613684], [3.838844, 43.612444], [3.838938, 43.610591], [3.838526, 43.609439], [3.834017, 43.610297] ] ] }, centroid: { type: "Point", coordinates: [3.84344, 43.6203] } } Have you checked this? 
http://stackoverflow.com/questions/24375649/mongodb-sphere-index-rejects-my-object There are numerous explanations of your error there. Thanks for your reply. Yes I did; my polygon is valid (using geojsonlint.com), it's not a self-intersecting polygon, and the bug in Jira is for 2.6 and it's closed, so I suppose it's fixed for 3+ versions. If you insert the document via the Mongo Shell in MongoDB v3.0, you should see an error message: "writeError": { "code": 16755, "errmsg": "Can't extract geo keys: ... Loop is not valid: ... ... Edges 11 and 13 cross. Edge locations in degrees: [3.8342850, 43.6212770]-[3.8345610, 43.6218040] and [3.8345340, 43.6217530]-[3.8345610, 43.6218030] Checking those coordinates, they are so close together (almost identical) that it creates an anomaly in the polygon shape: [3.834561, 43.621804] and [3.834561, 43.621803]. If we visualise it in geodndmap.mongodb.com, you can see that there is a tiny edge causing an invalid loop. This edge crossing is caused by the coordinates above. How can I visualize it in http://geodndmap.mongodb.com/? You can drag and drop a GeoJSON file to the page, and it will visualise it for you.
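The two edges quoted in the error message can be checked for a crossing with a small planar sketch. Note this is only an approximation (MongoDB's 2dsphere index works on spherical geometry); the coordinates below are the ones reported in the error output:

```python
def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, p3, p4):
    # True when segments p1-p2 and p3-p4 properly intersect (planar test)
    d1, d2 = cross(p1, p2, p3), cross(p1, p2, p4)
    d3, d4 = cross(p3, p4, p1), cross(p3, p4, p2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

# edges 11 and 13 from the MongoDB error message
e1 = ((3.834285, 43.621277), (3.834561, 43.621804))
e2 = ((3.834534, 43.621753), (3.834561, 43.621803))
print(segments_cross(*e1, *e2))  # True -> the loop self-intersects
```

Looping this test over all non-adjacent edge pairs of a polygon ring makes a cheap pre-insert sanity check before handing documents to MongoDB.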
common-pile/stackexchange_filtered
Parse HTML inside the CDATA text The data inside the CDATA is to be parsed as HTML. <?xml version="1.0" encoding="utf-8" ?> <test> <test1> <![CDATA[ &lt;B&gt; Test Data1 &lt;/B&gt; ]]> </test1> <test2> <![CDATA[ &lt;B&gt; Test Data2 &lt;/B&gt; ]]> </test2> <test3> <![CDATA[ &lt;B&gt; Test Data3 &lt;/B&gt; ]]> </test3> </test> From the above input XML I need the output to be parsed as HTML. But I am getting the output as <B>Test Data1</B> <B>Test Data2</B> <B>Test Data3</B> But I actually need the text to be in bold: **Test Data1 Test Data2 Test Data3** The input is coming from an external system. We cannot change the text inside the CDATA. What does your XSLT look like? Rishe, I have a big XSLT with some other scenarios; this scenario is a part of my XSLT. I am using XSLT 1.0 and the Visual Studio editor. Does your input really look like your example? Escaped HTML in CDATA? Perhaps this could help: http://stackoverflow.com/questions/2067116/convert-an-xml-element-whose-content-is-inside-cdata @hr_117 yes, it looks the same. If I convert my transform to HTML the text should be in bold. Instead of that, the output looks like the above. And the story in the provided link is different from my issue. Parsing as HTML is only possible with an extension function (or with XSLT 2.0 and an HTML parser written in XSLT 2.0) but if you want to create HTML output and want to output the contents of the testX elements as HTML then you can do that with e.g. <xsl:template match="test/*[starts-with(local-name(), 'test')]"> <xsl:value-of select="." disable-output-escaping="yes"/> </xsl:template> Note however that disable-output-escaping is an optional serialization feature not supported by all XSLT processors in all use cases. For instance with client-side XSLT in Mozilla browsers it is not supported. Hi Honnen, if I run your XSLT the result displays as Test Data1 Test Data2 Test Data3. But I need the output in bold.
Yes, sorry, I overlooked that your input data uses both a CDATA section and entity references, so my suggestion does not work that way. If you had e.g. <test1><![CDATA[<b>Test data</b>]]></test1>, then the disable-output-escaping would do. Can you tell us which XSLT processor you want to use to solve that? Are you developing in Visual Studio and do you want to write a .NET application using .NET's XslCompiledTransform? And also tell us more about the input format; the last sample has <![CDATA[ &lt;p&gt; Test Data3 &lt;/B&gt; ]]> with p being closed as /B, which would further complicate things as neither SGML nor XML parsers could handle that without throwing an error. Is that a mistake in your posting? Or do you really need to handle input data with such errors? Sorry Honnen, I have edited my input file. I had placed P instead of B mistakenly. And I am using the XML editor in Visual Studio and I am not using any .NET code. If you have to stay with XSLT 1.0 you have to run two transformation passes. The first one copies your XML but removes the CDATA by generating the content with disable-output-escaping="yes" (see the answer from @Martin Honnen). In the second pass you can access the HTML part. But this may only be possible if the HTML part follows the rules for well-formed XML (XHTML). If not, perhaps an input switch as in xsltproc may help to work with HTML, e.g.: --html: the input document is(are) an HTML file(s) See also: Convert an xml element whose content is inside CDATA
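The double encoding described above (HTML escaped as entities, then wrapped in a CDATA section) can be illustrated outside of XSLT. This Python sketch is not part of the XSLT pipeline; it just shows that an XML parser hands the entity-encoded text back verbatim, so a second unescaping pass is needed before the markup is real HTML:

```python
import html
import xml.etree.ElementTree as ET

doc = """<test>
  <test1><![CDATA[ &lt;B&gt; Test Data1 &lt;/B&gt; ]]></test1>
</test>"""

root = ET.fromstring(doc)
raw = root.findtext("test1")       # CDATA passes the entities through untouched
markup = html.unescape(raw).strip()
print(markup)  # <B> Test Data1 </B>
```

This is exactly why a single disable-output-escaping pass is not enough here: the first pass only removes one of the two layers of encoding.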
Can a single <input> element return more than one value? VS2013, VB, MVC5, html These posts (multiple submits, which button is pressed) show nicely how to use multiple < input > elements in a single form. This is nice for including a similar command at the end of each line in a list. I want to have a page that lists multiple lines and has the same < input > elements at the end of each line. When a specific < input > is clicked, it must return information to the controller method that is specific to that line and reflects which link was clicked. So my display will look something like this: This is line1 LINK1 | LINK2 | LINK3 This is line2 LINK1 | LINK2 | LINK3 This is line3 LINK1 | LINK2 | LINK3 The user will click on LINK1, LINK2, or LINK3 to accomplish some operation on line1, line2, or line3 based on which LINK is clicked. The actions might be something like Edit, Delete, List Users, etc. The controller method will need data specific to the line on which the LINKs are clicked, such as record ID, type of record, and so on, just as an example. It could be that the line was formed from multiple tables with different IDs; I'm just trying to establish that more than one piece of information is required in the transfer of control. Can I provide multiple pieces of different data for each LINK, and how? I've only seen and learned how to get information back to the controller from the element by checking if the "ID" attribute is empty or not, and then capturing the data in the "Value" attribute per this post also linked above. Is there a way to set up an element so that I can read other data from that specific element? The information has to be particular to that button and that line, meaning not something that is an overall form attribute. Perhaps it is not possible by design, but I won't know if I don't ask. I don't really understand your question, so some code would help. You seem to be confused about a lot of things...
for instance, I don't see where you've even mentioned an <input> element anywhere in your question, it's almost entirely about links... which are not input elements. @Gergo. I had thought showing what I was looking for was more useful, but once again I am legitimately reminded some find actual code more useful, so, good point. I will make an addition to the original post to make it more clear. @ErikFunkenbusch. Good point also. I styled the < input > element as text and fell victim to my own technique as I wrote the question. It's not a link, but an < input > element that looks like a link. When I update the post per Gergo above I'll add that to clarify. Why don't you add as many forms as lines you have? Something like: <table> @foreach (var item in Model) { @using (Html.BeginForm()) { <tr> <td> // your inputs and buttons here </td> </tr> } } </table> This way each form will only POST data relevant to your line. Technically, this is not valid HTML, though it will usually work with most browsers... @ErikFunkenbusch That is correct, but not what I wanted to point out. Just remove the table or move the form inside the td's to make the code HTML compliant. @SmartDev. Perfect. I'm not as fully conversant yet as I need to be with HTML and didn't know I could do what you suggest, but that is exactly the kind of solution I was looking for and I understand how to code it. I also get that this is in Razor that will be converted to HTML. @ErikFunkenbusch. I thought once the server converts the Razor to HTML it's a legitimate HTML page. What part won't work with some browsers? Do some browsers not accept multiple form sections per page? @Alan - Razor will do whatever you tell it to, it doesn't mean it's valid HTML. In this case, it's not valid to place form elements within a tr element. The only valid elements in a tr are td or th. @ErikFunkenbusch. Although th/td are the only valid elements in a tr, does your statement imply there's a restriction on what happens inside a th/td?
I spent some time researching after reading your comment, and it seems that many different elements can be used inside the td, so I want to make sure I understand your point correctly. @Alan - The above code places the Form element inside the tr. That is the code I said was invalid. I said nothing about being inside a td or th, as you are correct. However, you cannot span a form tag across multiple td's or th's in a single row because of this. you could go further than below with some ajax but this will do the trick and allow you to use multiple values from each "link" in each "row" @using (Html.BeginForm()) { @Html.Hidden("myType", "") @Html.Hidden("myID","") } This is line1 <a class="action-link" href="#" data-type="typeA" data-id="1">Link 1</a> | <a class="action-link" href="#" data-type="typeA" data-id="2">Link 2</a> | <a class="action-link" href="#" data-type="typeA" data-id="3">Link 3</a> This is line2 <a class="action-link" href="#" data-type="typeB" data-id="1">Link 1</a> | <a class="action-link" href="#" data-type="typeB" data-id="2">Link 2</a> | <a class="action-link" href="#" data-type="typeB" data-id="3">Link 3</a> etc.... <script> $(document).ready(function() { $(".action-link").on("click", function(e) { e.preventDefault(); var myType = $(this).attr("data-type"); var myID = $(this).attr("data-id"); //you could do some validation here... $("#myType").val(myType); $("#myID").val(myID); $("form:first").submit(); }); }); </script> While I didn't state it explicitly other than listing html at the head, I am actually trying to do it in HTML, if possible. HOWEVER, this is certainly a great option and your explicit example is very useful for me from a script point of view. +1. Glad to have been of assistance Alan
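Whichever markup variant is used, what ultimately reaches the controller is an ordinary form body. As a rough illustration of what the hidden-field approach above posts (the field names come from the jQuery answer's snippet; the values are hypothetical), decoded here with Python's standard library rather than MVC's model binder:

```python
from urllib.parse import parse_qs

# Hypothetical body produced by submitting the form after clicking Link 2 on line "typeB"
body = "myType=typeB&myID=2"

# parse_qs returns lists per field; flatten to one value per name for readability
fields = {name: values[0] for name, values in parse_qs(body).items()}
print(fields)   # {'myType': 'typeB', 'myID': '2'}
```

The point of the pattern is visible here: both per-line pieces of data (the record type and the id) arrive together in one request, so the controller action can branch on both without any overall form attribute being involved.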
Using QSocketNotifier to select on a char device I wrote a char device driver and am now writing a Qt "wrapper", part of which is to get a signal to fire when the device becomes readable via the poll mechanism. I had tried: QFile file("/dev/testDriver"); if(file.open(QFile::ReadOnly)) { QSocketNotifier sn(file.handle(), QSocketNotifier::Read); sn.setEnabled(true); connect(&sn, SIGNAL(activated(int)), this, SLOT(readyRead())); } But readyRead was never called and my driver never reported having its poll method called. I was able to get the following code to work, so I know my driver is working: QFile file("/dev/testDriver"); if(file.open(QFile::ReadOnly)) { struct pollfd fd; fd.fd = file.handle(); fd.events = POLLIN; struct pollfd fds[] = {fd}; int ready; qDebug() << "Started poll"; ready = poll(fds, 1, -1); qDebug() << "Poll returned: " << ready; QTextStream in(&file); QTextStream out(stdout); out << in.readAll(); } This properly waits for my driver to call wake_up, and I can see two poll calls from my driver: one for the initial poll registration and one for when the wake_up happens. Doing it this way I would probably have to spawn a separate thread whose only job is to poll on this device, emit a signal, and loop. Is it possible to use QSocketNotifier in this way? The documentation of QFile::handle() seems to indicate it should be. Did you ever get this working? I wrote something similar, but I couldn't get file->read(buf, 1) to work. It would just hang. However, read(file->handle(), buf, 1) worked just fine. @Harvey Yes, the answer I checked worked for me. Turned out to be a simple coding error. Also check out my answer to see if it helps you. To revise my earlier comment, it was a misunderstanding on my part of how the device driver worked. Both pieces of code worked; it just happened that other factors caused the entire test to fail when I was using one of them, causing me to make the erroneous connection.
Your QSocketNotifier gets destroyed as soon as that if block ends. It doesn't stand a chance of reporting anything. You must keep that socket notifier alive as long as you want that file to be monitored. The simplest way of doing that is probably keeping a QSocketNotifier* member in one of your classes. Thanks mat. Go figure it could be something so simple. QSocketNotifier works just fine now. You could add the QFile as parent of the QSocketNotifier, and then when the QFile is deleted, so is the QSocketNotifier. I'll also mention that QSocketNotifier can be used to watch stdin using the following: #include "ConsoleReader.h" #include <QTextStream> #include <unistd.h> //Provides STDIN_FILENO ConsoleReader::ConsoleReader(QObject *parent) : QObject(parent), notifier(STDIN_FILENO, QSocketNotifier::Read) { connect(&notifier, SIGNAL(activated(int)), this, SLOT(text())); } void ConsoleReader::text() { QTextStream qin(stdin); QString line = qin.readLine(); emit textReceived(line); } ---Header #pragma once #include <QObject> #include <QSocketNotifier> class ConsoleReader : public QObject { Q_OBJECT public: explicit ConsoleReader(QObject *parent = 0); signals: void textReceived(QString message); public slots: void text(); private: QSocketNotifier notifier; }; Thanks for the elegant way of reading from stdin. However, imo there are two issues: std::getline(std::cin, line); should be used instead of QTextStream. The reason is that QTextStream uses some kind of buffering, which is problematic when text with \n is pasted to the terminal (some of the text will be lost). Possibly a QTextStream member would also work. It would be better to create the notifier on the heap with ConsoleReader as a parent. This way it'll be moved to a new thread together with ConsoleReader automatically if needed in the future. Remember that stdin is not thread safe.
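The blocking wait in the question's second snippet is plain poll(2) underneath, which is also what QSocketNotifier drives through the event loop once the notifier is kept alive. As a language-neutral sketch of that readiness flow (Python's select module standing in for the C API, and a pipe standing in for the /dev/testDriver descriptor):

```python
import os
import select

read_fd, write_fd = os.pipe()           # pipe read end stands in for the char device fd

poller = select.poll()
poller.register(read_fd, select.POLLIN)  # like the fd.events = POLLIN registration

print(poller.poll(0))                   # [] -- nothing readable yet (before wake_up)

os.write(write_fd, b"hello\n")          # the driver's wake_up would make the fd readable
events = poller.poll(1000)              # now returns a (fd, POLLIN) event instead of blocking
print(events[0][0] == read_fd)          # True

data = os.read(read_fd, 16)
print(data)                             # b'hello\n'
```

A QSocketNotifier does the same registration once and re-arms it inside the event loop, which is why no dedicated poll-and-loop thread is needed as long as the notifier object outlives the scope it was created in.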
Insert data to a SQL Server database that contains apostrophes I'm making this program in VB.NET 2012 with a connection to SQL Server 2012. One of the columns of this database table is Description, and in some cases the data may include apostrophes, for example... 'chainsaw 15'3" X 1 1/2 X .050 X 3/4' When I run the query, the apostrophe inside the data causes a syntax error in the query. This is the query line in VB.NET: CMD.CommandText = "INSERT INTO Table_ARTICLES (NUMPART, DESCRIPTION, LOCATION, MAX, ACTUAL, MIN, Unidad_de_medida) VALUES ('" & txtNumParte.Text & "', '" & txtDescripcion.Text & "', '" & txtLocaclizacion.Text & "', '" & txtMaximo.Text & "', '" & txtActual.Text & "', '" & txtMin.Text & "', '" & cmbUnidad.Text & "')" Does anybody know how to make this query accept those characters? 1) Don't tag SQL Server questions with the MySQL tag. Two entirely different products. 2) Don't build SQL strings via concatenation... you're just asking for SQL injection attacks and issues exactly like this. 3) Use parameterized SQL and the problem goes away. https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.prepare(v=vs.110).aspx Use SQL parameters and you'll also avoid data type mismatch errors and many many other problems. Creating SQL simply isn't meant to be that tedious. Also please read [ask] and take the [tour] As @pmbAustin pointed out, it is a terrible idea to build SQL statements via string concatenation due to SQL injection attacks and other problems. The approach you should use is called a parameterized query: CMD.CommandText = "INSERT INTO Table_ARTICLES (NUMPART, DESCRIPTION, LOCATION, MAX, ACTUAL, MIN, Unidad_de_medida) VALUES (@NUMPART, @DESCRIPTION, @LOCATION, @MAX, @ACTUAL, @MIN, @UNIDAD_DE_MEDIDA)" And then: CMD.Parameters.Add("@NUMPART", txtNumParte.Text); CMD.Parameters.Add("@DESCRIPTION", txtDescripcion.Text); //...and so on CMD.ExecuteNonQuery(); Please use parameterized SQL (i.e.
stored procedures) to prevent SQL injection and the like. As for your question, you would want to replace the single quote (apostrophe) with two single quotes before you add it as a parameter. This way the first one acts as an escape character which will allow for the apostrophe to be inserted into the database. Example: txtNumParte.Text.Replace("'", "''") Any examples where simply escaping the apostrophes wouldn't stop SQL injection? @MichaelZ.thanks for adding the example - makes the answer more complete. As for SQL injection, the way to prevent that is to use parameters, either in a stored procedure or building an insert statement as in the answer provided by Icarus. I focused more on the apostrophe part, as that was the OP's question. No problem. I know parameters are best practice, but I'm wondering if escaping apostrophes is enough to prevent SQL injection and I do believe it is. If you can think of an example where it wouldn't work the please comment an example here.
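The parameterized approach works the same way in any driver: the value travels to the database separately from the SQL text, so apostrophes never touch the statement and need no escaping at all. A small stand-in sketch using Python's built-in sqlite3 (not ADO.NET, but the placeholder mechanism has the same shape; the table name and description value mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table_ARTICLES (NUMPART TEXT, DESCRIPTION TEXT)")

# Apostrophes and double quotes included, as in the question's example value
description = "chainsaw 15'3\" X 1 1/2 X .050 X 3/4'"

# The driver transmits the value out-of-band from the SQL, so no quoting is needed
conn.execute("INSERT INTO Table_ARTICLES VALUES (?, ?)", ("A-100", description))

(stored,) = conn.execute("SELECT DESCRIPTION FROM Table_ARTICLES").fetchone()
print(stored)            # round-trips byte-for-byte, apostrophes intact
```

This is also why parameters beat doubling up quotes: there is no string-level escaping step to get wrong, and the same code is immune to injection regardless of what the user types.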
Already has an open DataReader I have a registration form for my new user: Then, it also has a button: When the button Register is clicked, I set command to check if the email already exists. It does check the values and able to find the values in the debug process. But an error occurs saying that There is already an open DataReader associated with this Command which must be closed first. Here is my ASP.NET markup: <div id="success" runat="server" class="alert alert-success" visible="false"> Account registration success! <br /> Account sent for approval. Thank you. </div> <div id="emailavail" runat="server" class="alert alert-success" visible="false"> Email is Available. </div> <div id="error" runat="server" class="alert alert-danger" visible="false"> Email is already in use <br /> Please choose another email / Login if already registered </div> <div class="form-group"> <br /> <label class="control-label col-lg-4">User Type</label> <div class="col-lg-8"> <asp:Label ID="UserType" runat="server" class="form-control" text="Customer" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">Email</label> <div class="col-lg-8"> <asp:TextBox ID="txtEmail" runat="server" class="form-control" type="email" MaxLength="80" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">Password</label> <div class="col-lg-8"> <asp:TextBox ID="txtPassword" runat="server" class="form-control" TextMode="Password" MaxLength="20" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">First Name</label> <div class="col-lg-8"> <asp:TextBox ID="txtFN" runat="server" class="form-control" MaxLength="80" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">Last Name</label> <div class="col-lg-8"> <asp:TextBox ID="txtLN" runat="server" class="form-control" MaxLength="50" required /> </div> </div> <div class="form-group"> <label class="control-label 
col-lg-4">Company Name</label> <div class="col-lg-8"> <asp:TextBox ID="txtCName" runat="server" class="form-control" MaxLength="50" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">Street</label> <div class="col-lg-8"> <asp:TextBox ID="txtStreet" runat="server" class="form-control" MaxLength="50" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">Municipality</label> <div class="col-lg-8"> <asp:TextBox ID="txtMunicipality" runat="server" class="form-control" MaxLength="100" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">City</label> <div class="col-lg-8"> <asp:TextBox ID="txtCity" runat="server" class="form-control" MaxLength="50" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">Company Phone</label> <div class="col-lg-8"> <asp:TextBox ID="txtCPhone" runat="server" class="form-control" MaxLength="12" type="number" required /> </div> </div> <div class="form-group"> <label class="control-label col-lg-4">Mobile</label> <div class="col-lg-8"> <asp:TextBox ID="txtMobile" runat="server" class="form-control" MaxLength="12" type="number" required /> </div> </div> </div> <div class="col-lg-10"> <span class="pull-right"> <asp:Button ID="btnCancel" runat="server" class="btn" style="color:White" text="Cancel" PostBackUrl="~/Default.aspx" BackColor="Black" /> <asp:Button ID="btnRegister" runat="server" class="btn btn-success" Text="Register" style="color:White" onclick="btnRegister_Click" /> </span> </div> And here is the code-behind: protected void btnRegister_Click(object sender, EventArgs e) { con.Open(); SqlCommand cmd = new SqlCommand(); cmd.Connection = con; cmd.CommandText = "SELECT Email FROM Registration WHERE Email=@Email"; cmd.Parameters.AddWithValue("@Email", txtEmail.Text); SqlDataReader dr = cmd.ExecuteReader(); if (dr.HasRows) { error.Visible = true; emailavail.Visible = false; success.Visible = false; 
con.Close(); } else { cmd.Connection = con; cmd.CommandText = "INSERT INTO Users VALUES (@TypeID, @Email, @Password, " + "@FirstName, @LastName, @CompanyName, @Street, @Municipality, @City, @CompanyPhone, @Mobile, " + "@Status, @DateAdded, @DateModified)"; cmd.Parameters.AddWithValue("@TypeID", "2"); cmd.Parameters.AddWithValue("@Email", txtEmail.Text); cmd.Parameters.AddWithValue("@Password", Helper.CreateSHAHash(txtPassword.Text)); cmd.Parameters.AddWithValue("@FirstName", txtFN.Text); cmd.Parameters.AddWithValue("@LastName", txtLN.Text); cmd.Parameters.AddWithValue("@CompanyName", txtCName.Text); cmd.Parameters.AddWithValue("@Street", txtStreet.Text); cmd.Parameters.AddWithValue("@Municipality", txtMunicipality.Text); cmd.Parameters.AddWithValue("@City", txtCity.Text); cmd.Parameters.AddWithValue("@CompanyPhone", txtCPhone.Text); cmd.Parameters.AddWithValue("@Mobile", txtMobile.Text); cmd.Parameters.AddWithValue("@Status", "Pending"); cmd.Parameters.AddWithValue("@DateAdded", DateTime.Now); cmd.Parameters.AddWithValue("@DateModified", DBNull.Value); } cmd.ExecuteNonQuery(); con.Close(); } I did try: Removing my selectedindexchanged for Label NAME because it does SELECT the email from users again. I tried moving the con.close because as what I have researched it has something to do there. Please help it got me hours for this. Thank you in advance! I am still new to c# You have open reader, but you try to send new command. Always close first connection and just open new one for INSERT. Or, use a "IF NOT EXISTS" syntax to check and insert actions in one SQL command. You need to enable MultipleActiveResultSets where will i close it? i want to happen it in the else command (insert when the email is available) Possible duplicate of There is already an open DataReader associated with this Command which must be closed first @TomTom why can't i include it? isn't helpful for your further understanding as i believe being clear here is a must. 
Because it is not helpful, it is irrelevant noise. No one cares how the form looks and it takes half of the space of your question. Noise. Simply noise. sure thing. thanks @TomTom This is just messed up on so many levels. If dr.HasRows is false, it jumps to the else, and (no kidding) you still have a DataReader open; you then repeat cmd.Connection = con;. If dr.HasRows is true, you close the connection, it jumps past the else, and then you try a cmd.ExecuteNonQuery(); on a connection you have closed, with a SELECT statement that is not even valid for ExecuteNonQuery. There is no purpose to the DataReader in the first place. Just do a count: "SELECT count(*) FROM Registration WHERE Email=@Email". Check the count, and if it is 0, do the insert. Then close the connection. cmd.CommandText = "SELECT count(*) FROM Registration WHERE Email=@Email"; cmd.Parameters.AddWithValue("@Email", txtEmail.Text); if ((Int32)cmd.ExecuteScalar() > 0) { error.Visible = true; emailavail.Visible = false; success.Visible = false; } else { cmd.CommandText = "INSERT INTO Users VALUES (@TypeID, @Email, @Password, " + "@FirstName, @LastName, @CompanyName, @Street, @Municipality, @City, " + "@CompanyPhone, @Mobile, @Status, @DateAdded, @DateModified)"; cmd.Parameters.AddWithValue("@TypeID", "2"); cmd.Parameters.AddWithValue("@Email", txtEmail.Text); cmd.Parameters.AddWithValue("@Password", Helper.CreateSHAHash(txtPassword.Text)); cmd.Parameters.AddWithValue("@FirstName", txtFN.Text); cmd.Parameters.AddWithValue("@LastName", txtLN.Text); cmd.Parameters.AddWithValue("@CompanyName", txtCName.Text); cmd.Parameters.AddWithValue("@Street", txtStreet.Text); cmd.Parameters.AddWithValue("@Municipality", txtMunicipality.Text); cmd.Parameters.AddWithValue("@City", txtCity.Text); cmd.Parameters.AddWithValue("@CompanyPhone", txtCPhone.Text); cmd.Parameters.AddWithValue("@Mobile", txtMobile.Text); cmd.Parameters.AddWithValue("@Status", "Pending"); cmd.Parameters.AddWithValue("@DateAdded", DateTime.Now); cmd.Parameters.AddWithValue("@DateModified", DBNull.Value); cmd.ExecuteNonQuery(); } con.Close(); I could also do it in one trip, but I am not positive that would work with parameters: IF NOT EXISTS(SELECT ...); BEGIN INSERT INTO ...; END; thank you! i did it but i just decided to separate them. Let's re-read your exception message again: There is already an open DataReader associated with this Command which must be closed first You have a few options. You can manually dispose your reader after the con.Close() line, like dr.Dispose();, or you can use a using statement to dispose your reader automatically (which I recommend): using(SqlDataReader dr = cmd.ExecuteReader()) { ... } // <-- On this line your dr will be disposed Or you can create a new SqlCommand object for your INSERT statement, like: cmd = new SqlCommand(); cmd.Connection = con; ... Also related: There is already an open DataReader associated with this Command which must be closed first @PaulineGailMallariChan Why don't you try and see?
It is parameterized and therefore not vulnerable to an SQL injection exploit @Frisbee My other points still stand though. He's not hashing properly for saving passwords and he's not salting his hashes. Both of these are handled by the Membership provider. The Membership provider also gives other benefits. And I did not comment on the other points. Why not just correct your answer? @Frisbee Done. I was pressed for time and didn't have a good way to edit the answer. I wasn't sure if parameters used in this fashion are enough to fully prevent SQLi. All the guides I found on the matter also mention using SqlDbType and a max length. well i need the designs and customize my application so. i'm up for adventure and more learning to c# AND asp.net thanks for advice! thanks for the help and kind consideration @Soner @Frisbee what i did was separate them as i have thought about the answers i got and easier for me to understand when i get back to the same scenario. protected void btnRegister_Click(object sender, EventArgs e) { con.Open(); SqlCommand cmd = new SqlCommand(); cmd.Connection = con; cmd.CommandText = "SELECT Email FROM Users WHERE Email=@Email"; cmd.Parameters.AddWithValue("@Email", txtEmail.Text); SqlDataReader dr = cmd.ExecuteReader(); if (dr.HasRows) { error.Visible = true; emailavail.Visible = false; success.Visible = false; con.Close(); } else { success.Visible = false; con.Close(); RegisterUser(); } } void RegisterUser() { con.Open(); SqlCommand cmd = new SqlCommand(); cmd.Connection = con; cmd.CommandText = "INSERT INTO Users VALUES (@TypeID, @Email, @Password, " + "@FirstName, @LastName, @CompanyName, @Street, @Municipality, @City, @CompanyPhone, @Mobile, " + "@Status, @DateAdded, @DateModified)"; cmd.Parameters.AddWithValue("@TypeID", "2"); cmd.Parameters.AddWithValue("@Email", txtEmail.Text); cmd.Parameters.AddWithValue("@Password", Helper.CreateSHAHash(txtPassword.Text)); cmd.Parameters.AddWithValue("@FirstName", txtFN.Text); 
cmd.Parameters.AddWithValue("@LastName", txtLN.Text); cmd.Parameters.AddWithValue("@CompanyName", txtCName.Text); cmd.Parameters.AddWithValue("@Street", txtStreet.Text); cmd.Parameters.AddWithValue("@Municipality", txtMunicipality.Text); cmd.Parameters.AddWithValue("@City", txtCity.Text); cmd.Parameters.AddWithValue("@CompanyPhone", txtCPhone.Text); cmd.Parameters.AddWithValue("@Mobile", txtMobile.Text); cmd.Parameters.AddWithValue("@Status", "Pending"); cmd.Parameters.AddWithValue("@DateAdded", DateTime.Now); cmd.Parameters.AddWithValue("@DateModified", DBNull.Value); cmd.ExecuteNonQuery(); con.Close(); } thank you again guys
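The final shape of the logic above (one command per step, nothing readable left open when the INSERT runs) can be sketched compactly. This is a hedged stand-in using Python's built-in sqlite3 rather than ADO.NET, with a minimal table and hypothetical emails, purely to show the check-then-insert outline the answers converge on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Email TEXT)")

def register(email):
    # Step 1: a scalar COUNT(*) instead of a reader, so no reader is ever left open
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM Users WHERE Email = ?", (email,)
    ).fetchone()
    if count > 0:
        return False                       # email already in use
    # Step 2: the INSERT runs with no reader attached to the connection
    conn.execute("INSERT INTO Users (Email) VALUES (?)", (email,))
    return True

print(register("pauline@example.com"))     # True  -- first registration succeeds
print(register("pauline@example.com"))     # False -- duplicate rejected
```

The original exception disappears in this shape because the existence check never holds a reader open across the insert, which was the root cause the answers identified.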
Writing to Java program from Mac Terminal I have a small problem with the Mac terminal. I run a Java program from the terminal and try to read some stuff from the stdin using: BufferedReader in = new BufferedReader(new InputStreamReader(System.in)); System.out.println("Produsul:" + System.getProperty("line.separator")); productOrder.setProductName(in.readLine()); The program gets stuck. I think that somehow, it doesn't detect EOL. I tried setting Send string to shell (\033OF) for the end key, but this didn't solve the problem. What am I missing? how have you set up the reader assigned to the variable in? Scanner in = new Scanner(System.in); Scanner doesn't have a readLine() method, though. @stevevls: Yes, I did, you can see it now. I didn't initially post the whole piece of code, as it's very simple and works fine from Eclipse. I'm pretty sure the problem is the fact that I'm running my app from the terminal. Also, I need to run it this way, so I have to solve this. Any other ideas? It's weird. I just tried it from my Mac and used your code (roughly) and it works fine. I'd say the problem is somewhere else in the surrounding code that you haven't shared. I don't think so, the program runs fine from Eclipse. From the terminal, the readLine() remains blocked. Are there any other keyboard settings which I'm missing? All the suggestions I found indicated the change for the end key, explained above, which doesn't solve my problem.
oauth for desktop and mobile with the same code I'm using phonegap and I want to set up google oauth for mobile and web (e.g. desktop chrome) with the same code. The end result should be an app engine cookie on the client side (whether it is the inapp browser or a desktop browser). Since I don't want my users to do the consent screen more than once, I need a refresh_token and not just an access_token. (also I've noticed that the cordova webview doesn't have access to cookies like the regular browser) As I understand, a refresh_token can only be received if you're doing the protocol recommended for web servers, meaning first obtaining the code, then using it to obtain the access_token and refresh_token. When I'm authenticating in this way, the protocol requires that I send the received code to google to receive the access token, but this is a cross domain request and is blocked on desktop browsers. How can this be conveniently solved? What is the correct way to achieve the end result? I crudely solved my problem using my server as a mediator for the cross-domain post requests. So the flow works like this: I get an authentication code using a pop up window. Then the user enters the code and my javascript sends it to my server. Then the server sends a request for access_token and refresh_token to google, and sends it back to the user. Seems to work well for now, and not very complicated.
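For reference, the server-side leg of that flow is a single POST to Google's OAuth 2.0 token endpoint whose form body carries the authorization code plus the app's credentials. A sketch of just the body construction (all values here are hypothetical placeholders; the actual exchange is an HTTPS POST made from the server, which is exactly what keeps the client secret out of the browser):

```python
from urllib.parse import urlencode

# Hypothetical values -- in the real flow the code comes from the popup window,
# and client_id/client_secret come from the project's API credentials
token_request_body = urlencode({
    "code": "CODE_FROM_POPUP",
    "client_id": "YOUR_CLIENT_ID",
    "client_secret": "YOUR_CLIENT_SECRET",   # why the exchange must happen server-side
    "redirect_uri": "https://example.com/oauth2callback",
    "grant_type": "authorization_code",
})
print(token_request_body)
# code=CODE_FROM_POPUP&client_id=YOUR_CLIENT_ID&...&grant_type=authorization_code
```

The JSON response to that POST contains both the access_token and, on first consent, the refresh_token, which the server can then hand back to the client, matching the mediator flow described above.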
The lights in the hotel A hotel has $n$ rooms, numbered 1, 2, 3, etc. until $n$. At the hotel reception there are $n$ buttons, one for each room, also numbered 1, 2, 3, ..., $n$. Button 1 switches the light from on to off or from off to on in all rooms whose number contains a "1", hence in rooms 1, 10, 11, 12, etc. Button 2 switches the light on or off in all rooms whose number contains a "2", hence 2, 12, 20, 21, etc. The same thing happens with all other buttons, so for example button 42 switches the light in rooms 42, 142, 242, etc. The receptionist sees that all rooms have the lights on, so she pushes all buttons from 1 to $n$ in sequence. At the end of the process she notices that there are just as many rooms with the light on as there are rooms with the light off. How many rooms are there in the hotel at most? To clarify, the button for 24 does not toggle the light in room 42? @JonTheMon Nope :) Can we assume the hotel is not absurdly large? Empty the odd numbered rooms by sending the guest in room i to room 2i, then put the first coachload of guests in rooms 3^n, the second coachload in rooms 5^n; for coach number c we use the rooms p^n where p is the cth odd prime number. No - wait - wrong hotel. :) There might be a problem. It looks like Quark's answer of 306 is correct, given this plot for up to 1000 rooms (excess of lights on over lights off) But if we extend that range a little... We see that the initial trend reverses, crossing the y-axis several times after 8000 rooms. It's not clear to me that this back-and-forth trend ends. What happens from 10k to 11k? Just what I was afraid of. At this point either there is a mathematical proof of how all numbers with length greater than X will result in a uniform direction (increasing/decreasing the on/off proportion), or it will fluctuate to infinity. @GOTO0 Thanks for confirming, I'll take another look when I get home in a few hours. 
The answer (assuming I didn't mess up somewhere) is: 306 I sort of brute forced the problem because it seemed faster that way. To still be on, the number has to be called an odd number of times before its own button is pushed. So, since the first hundred numbers can only be called twice max (once for each digit), the only ones that stay on are 10/11, 20/22, etc up to 90/99 (18 on 81 off). Then, After three digits, it changes a bit. There are now 5 ways a 3 digit number can be called, one of its numbers, the first two, or the last two. There are no "1"s (even 100 has 1 and 10), so 3's are 102-110(9 on) and are only possible with the "0" second digit. So, For 5's, from 120 on there are 7 per set of 10, the 3 not included are 1*0, 1*1, and 1**, so for the set from 100-199, 65 are on and 35 are off (9+7*8). Another way to think of this is +30 to the original -63. From here, Every hundred from here on will be another +30 with symmetry. So by 300 the "count" will be -3 And finally, 300(3 and 30) is -1, 301 and 302 are +1, 303 is -1, 304-309 are +1 but by 306 the -3 from before is reached (-1+2-1+3), leaving a final answer of 306. It will never back track far enough to go negative again up to a thousand. It may be explained a bit messily, I'm not a math proofs guy but you can generalize my statements for any number of digits. (There's probably a better way to spoiler a paragraph than to add random words in between sentences, I'm still new at this) I just considered the possibility that in the 4 digit numbers it could loop back to being more negative but my brain is starting to hurt and I wouldn't be able to focus long enough to figure it out anyway if that was the case (because then the solution would be in the 5 digit numbers). You found a repeating pattern in certain groups of 10 and 100 lights, that's great! You can use that symmetry to explore what happens with more than 3 digits without too many calculations. Be sure to double-check your logic for a few flaws. 
There is a solution between $10^{11}$ and $10^{12}$. For $n=10^{11}$, the total number of lights off is <PHONE_NUMBER>8. Just over half. For $n=10^{12}$, the total number of lights off is <PHONE_NUMBER>51. Just under half. So, somewhere between $10^{11}$ and $10^{12}$ there is an $n$ where exactly half of the lights are off. I found this using a computer program. To cut the number of cases, I used symmetry. Permuting the digits 1-9 does not change whether a room is on or off; for example, "377377009" has the same lightness as "899899001". This reduces the number of essentially different 12-digit numbers to only ~27 million. I doubt this puzzle has a nice answer. I would be very impressed/surprised if GOTO 0 has a solution. Not necessarily. Since you are playing with integers, it doesn't have to play nicely. It could go directly from 49.999% to 50.001% lightness. @DrLem Whenever you increase $n$ by one, you either get one new lit room or one new dark room. None of the previous rooms change. Therefore, as you increase $n$, the quantity [# of lit rooms] - [# of dark rooms] changes by 1 at a time. So, since it goes from negative to positive, that quantity must at some point be 0. Thanks, I should have included this to start with. I think it's 12072. I brute forced it with a Java program that I wrote. EDIT: This is the program that I wrote:

    import java.util.HashSet;

    public class Main {
        public static void main(String[] args) {
            long i = 1;
            long even = 0;
            while (i < 1000000) {
                even += check(i) ? 1 : -1;
                if (even == 0) System.out.println(i);
                i++;
            }
        }

        private static boolean check(long i) {
            HashSet<String> subs = new HashSet<String>();
            String string = String.valueOf(i);
            for (int beginIndex = 0; beginIndex < string.length(); beginIndex++) {
                for (int endIndex = beginIndex; endIndex <= string.length(); endIndex++) {
                    if (!string.substring(beginIndex, endIndex).startsWith("0")) {
                        subs.add(string.substring(beginIndex, endIndex));
                    }
                }
            }
            return subs.size() % 2 == 0;
        }
    }

I am sure there must be more efficient ways, but this is how it works. For each number I check how many light switches exist. I do this by checking how many distinct substrings there are that do not begin with 0. I keep track of a variable even that is raised by one on an even amount of these substrings and subtracted by one on an odd amount. Every time this variable is 0 there must be an equal amount of lights that are on and off. The 306 mentioned in another answer is the first it finds, so it gives me confidence that this method works. What was your upper bound? Do you have a mathematical proof that it is the highest possible number? I first tried up to 1,000,000. That program executed in a few seconds. Then I tried 10,000,000. That makes me believe that my solution is the highest. But unfortunately I don't have a proof. And of course I could have made a mistake in my code. Some numbers: for $n$ = 10^5, 10^6, 10^7, the excess of lights on is 3938, 11804, 166586 respectively. (This is # of lights on minus $n/2$.) So for very low values of $n$, there is a significant trend towards having an even number of light flips. Naively, I'd expect that for large $n$ (i.e. around a trillion digits), the long-term behavior looks like a random walk, and crosses 0 infinitely many times. I would be very interested in a proof or even a heuristic arguing otherwise.
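The Java brute force above ports directly to Python; the function names here are my own, not from the thread. It counts, for each n, the distinct decimal substrings that do not start with '0' and tracks the running on/off balance:

```python
def distinct_substrings(n):
    """Distinct substrings of str(n) that do not start with '0'."""
    s = str(n)
    subs = set()
    for i in range(len(s)):
        if s[i] == '0':
            continue  # substrings with a leading zero are not light numbers
        for j in range(i + 1, len(s) + 1):
            subs.add(s[i:j])
    return len(subs)

def balanced_points(limit):
    """Values of n at which exactly half of the toggled lights are on."""
    balance = 0
    hits = []
    for n in range(1, limit + 1):
        balance += 1 if distinct_substrings(n) % 2 == 0 else -1
        if balance == 0:
            hits.append(n)
    return hits

print(balanced_points(400))  # first balanced n is 306, matching the accepted reasoning
```

One small difference: the Java HashSet also collects the empty substring (when endIndex equals beginIndex), which flips every parity uniformly; since that only negates the running balance, the zero-crossings it reports are the same.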
Issue with Semantic UI React calendar I've defined the following component to be rendered using JSX:

    const TestingDate = () => {
      return (
        <Container>
          <DateInput
            clearable
            clearIcon={<Icon name="remove" color="red" />}
            name="date"
            value="2 Apr 2020"
            onChange={a => handleDateChange(a)}
          />
        </Container>
      );
    };

However, the issue I'm having is that, in handleDateChange, I have to keep track of the date (namely the value prop of DateInput, which is imported from "semantic-ui-calendar-react"), but I can't find any reasonable way of passing that to the handleDateChange function... I can see it's a super basic issue, but I'm kind of stuck since it's my first time working with DateInput and the tutorial used the older style where you'd bind a callback to the DateInput as a prop. If it helps, what I'd like to do is just call this line setDate(value) in the handleDateChange function. Thanks! I'm not sure what the issue is, can you elaborate please? onChange of this calendar component passes you date, text, mode; what is missing? When I console.log the input into the onChange I get the event logged, which is the 'event' generic object in callbacks. However, I don't see where date, text and mode are passed. Could you explain? Do I just change the onChange to onChange={(a, date, text, mode) => handleDateChange(a, date, text, mode)}? I've based my response on the docs, but you can verify the params with your snippet. Where'd you find the docs about this? I was reading https://www.npmjs.com/package/semantic-ui-calendar-react?fbclid=IwAR3jcN9OJCqC1YrEzIhS2gXE_sZG0pJuisY7QFLICeef518Nfhr9MgKodjU - but I couldn't find what you're talking about.

    handleChange = (event, {name, value}) => {
      if (this.state.hasOwnProperty(name)) {
        this.setState({ [name]: value });
      }
    }

Yes, I saw that. The thing is I'm using the useState() function, and in that sense I'm not storing state within any "React.Component" --> I'm using functional components. Do you have any recommendations for that either?
Can you make a small https://codesandbox.io/ with your simple example? I was having the same issue and was able to resolve it by doing the following.

    const TestingDate = () => {
      const [date, setDate] = useState(null);

      function handleDateChange(name, value) {
        setDate(value);
      }

      return (
        <Container>
          <DateInput
            clearable
            clearIcon={<Icon name="remove" color="red" />}
            name="date"
            value="2 Apr 2020"
            onChange={(a, {name, value}) => handleDateChange(name, value)}
          />
        </Container>
      );
    };

I hope that helps!
Browser crash with Flash video in Ubuntu 10.04 Firefox crashes every time I open a web page with video. This looks like a Flash problem: what can be wrong? I'm using Ubuntu 10.04. I have had three 10.04.4 machines crash since the Ubuntu update in the first week of June. They lose bookmarks and will not play video or audio. I'm pretty sure it has to do with Firefox. I have been using 10.04 for years and prefer it. When I could not find a fix I reinstalled 10.04 and used it fine before allowing updates. When it updated 237 items it crashed and did the same thing. The bug is in the Ubuntu updates. You can try this to see if it will also work for you. It is likely the plugin, assuming that your install is standard:

1. Shut down Firefox.
2. sudo aptitude reinstall flashplugin-nonfree
3. Start Firefox.

If you don't have aptitude, you can use sudo apt-get install aptitude first. You can also try to purge the package and then reinstall it.
How can I maintain a SQL schema upgrade that goes out to our users on a regular basis? I work on a project that uses a SQL database as a back end, and a desktop app as a front end. We do releases on a regular basis, and as part of a release, our client needs to update their SQL server. Our business requires that we are upgradable from version 1.x to 1.y, and the way we've been doing it so far is by following a simple pattern for dozens/hundreds of tables:

    IF NOT EXISTS (XXX) CREATE TABLE/VIEW YYY
    ...
    ALTER TABLE (XXX) ADD COLUMN (WWW)
    ...
    CREATE INDEX (ZZZ) on table XXX (if needed)

And so on. The problem I see with this is that the steps laid out above are all laid out in a one-for-one relationship with the table that's being created. This is fine for simpler databases that have no constraints, but becomes a pain when foreign keys/triggers/anything complex becomes involved. I was wondering if there was a pattern out there that could help me clean this up and make modifications to my script easier. I was thinking something along the lines of following these steps for every table in the database:

1. perform cleanup from any "old" version
2. create/update all tables/columns (without constraints)
3. create/update all views based on tables
4. create/update all foreign keys / constraints
5. create/update all indexes
6. create/update all triggers
7. ...

I use a tool to merge all of the SQL files in my repository into one, but the rhyme/reason to which file gets merged at what point is becoming unclear - often I just give up and hack something in that says "include file X directly before/after file Y". It's becoming unmaintainable (I can still fix problems, but I want to be able to hand off this responsibility). It's this last annoyance that I want to be able to avoid. I want to be able to create a new table by just dropping a script with the content "CREATE TABLE X..." in a "Tables" directory and call it done. Are there any patterns out there that can help me?
I'm really looking for some sort of ammunition that I can use when I propose this to my colleagues by saying "see - there IS a solution to this cluster* and THIS is how other companies address it." Are you familiar with Red-Gate's products? It could be that you're reinventing the wheel. Or if you don't want to pay for Redgate, you could use something like RoundHousE to migrate your database. If you're using MS SQL Server, SQL Server Data Tools can also help with this problem. This question doesn't look like an "SQL script update" to me. @user61852 I apologize for using the reserved keyword "UPDATE" when talking about SQL. Perhaps it would have been better if I said "upgrade" instead (or did I miss your point?) Answered: http://stackoverflow.com/questions/861728/generating-sql-upgrade-scripts-for-the-customer This is not originally my idea, but here's a manual way to do it:

1. Create an "UpdateScripts" directory.
2. Create a field in the database which tracks the ID# of the last-run script.
3. Every time any change is made to the database structure, save it as a script in that directory, with a sequential number (such as 00001) in front.
4. Every time the customer launches, run every script in order between the stored ID and the last script in the directory.
5. Update the last-script field.

There are a number of alternatives, most more automated than this, discussed here. +1. It may be worthwhile for you to check out how Rails' migrations work. Martin Fowler's Evolutionary Database Design article (http://martinfowler.com/articles/evodb.html) may also be helpful. +1. This is actually a pretty standard way of doing it. @MasonWheeler - I figured it was, but I've never used it myself. I don't have any databases which need to be versioned this way.
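The numbered-scripts-plus-stored-version scheme in the last answer is easy to sketch in code. Here is an illustrative Python/SQLite version; the schema_version table name, the example DDL, and the in-memory database are my own assumptions for demonstration, not part of the answer:

```python
import sqlite3

# Ordered migration scripts; in practice these would be files like
# UpdateScripts/00001_create_users.sql, discovered and sorted by their prefix.
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
    (3, "CREATE INDEX idx_users_email ON users (email)"),
]

def upgrade(conn):
    """Run every migration newer than the stored schema version, in order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
upgrade(conn)  # applies scripts 1..3
upgrade(conn)  # second run is a no-op: nothing newer than the stored version
```

Running the upgrade twice shows the key property of the scheme: it is safe to re-run at every launch, because only scripts newer than the recorded version are applied.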
Using pointers to change the bytes of an integer variable So I am currently trying to teach myself C and am a little bit stuck in an exercise about pointers. The task is to have three variables (char, char, short) which stand for the day, month and year of a certain date. I am asked to then use pointers to save these three variables into one. I thought it would make sense to store them in the different bytes of the variable since we are just using char and short. Now I tried to use pointers to put, for example, the value of the day into the first byte of the integer. I have come to two problems that I cannot figure out the solution for:

    warning: assignment to 'char *' from incompatible pointer type 'int *' [-Wincompatible-pointer-types]
    daypointer = (&date);

So I think this means that I am trying to use a pointer for a type that it is not meant to point to. I am not sure if I understand it right, so please correct me if I'm wrong. I also don't understand how I can fix this, since the exercise asked for char, char and unsigned variables and not integers.

    a2.c:60:11: error: 'date' undeclared (first use in this function)
       60 | month(date);

So I tried to use the returned value date from the first function to call the other functions and don't know why this doesn't work.
For reference, this is my code:

    #include <stdio.h>
    #include <stdlib.h>

    //uses pointers to store the three variables into one integer variable
    int create_date (unsigned char day, unsigned char month, unsigned short year){
        char *daypointer;
        char *monthpointer;
        int *yearpointer;
        int date = 0;

        daypointer = (&date);
        *daypointer = day;
        monthpointer = (&date+1);
        *monthpointer = month;
        yearpointer = (&date+2);
        *yearpointer = year;

        return date;
    }

    //return day, month or year of the date
    int year (int date){
        int *pointer;
        unsigned short year;
        pointer = (&date+2);
        *pointer = year;
        printf("Year:%d", year);
    }

    int month (int date){
        char *pointer;
        unsigned char month;
        pointer = (&date+1);
        *pointer = month;
        printf("Month:%c", month);
    }

    int day (int date){
        char *pointer;
        unsigned char day;
        pointer = (&date);
        *pointer = day;
        printf("Day:%c", day);
    }

    int main(void){
        //int date = 0;
        create_date(11, 12, 2020);
        month(date);
        day(date);
        year(date);
        return EXIT_SUCCESS;
    }

It would be great if someone could help me understand my mistakes a little. Thanks in advance! Presumably you have been given some assignment which instructs you to pack the values of day, month, and year into a four-byte int. To do this, yes, you have to explicitly tell the compiler you are converting a pointer to an int to a pointer to a char. But first, the error you are getting on line 60 is because there is no date declared in main. Why is your line //int date = 0; commented out? Further, it is not clear why you have so many subroutines or how your program is supposed to be structured. Maybe the assignment you were given gives particular instructions about this, but you have not shown us the details of the assignment, so we do not know. In any case, you should perhaps just get the "pack the bytes into the int" operations working, and then you can add more code for the rest of the assignment. You can start with int date;. We will assume int is four bytes.
Then you can make a pointer to its first byte with char *p = (char *) &date;. Then you can write a day of the month to its first byte with p[0] = 13;, and you can write a month to its second byte with p[1] = 6;. To write a year to the last two bytes is more complicated. Options include:

1. Decompose the year and write the two bytes separately: short year = 2020; p[2] = year / 256; p[3] = year % 256;.
2. Make a pointer to short and use that: short *q = (short *) (p+2); *q = 2020;.
3. Copy into the bytes using memcpy: short year = 2020; memcpy(p+2, &year, sizeof year);.

You should not use the second option, because it has semantic problems regarding the rules of C. The other two options are supported by the C standard, but they have different effects regarding the order in which bytes are stored in memory. The memcpy option will generally match the byte order your C implementation uses natively, so it may be the one your assignment intends. I commented l.60 out because I thought then it would give me the date of 0, which would then be month 0, day 0 and year 0. I thought I could comment it out because create_date returns the variable date. Thank you for explaining step by step what my mistakes are, because pointers are just something that feels so counterintuitive to me. I think I can now try to find the solution. You have so many errors here - it is even hard to explain them. I advise starting from something easier than pointers, as you do not yet know the basic C material on assignments, parameters, functions, and variables. In general, pointer punning is dangerous, invokes many effects which may lead to undefined behaviour, and is difficult for beginners to understand. I have modified your code to work on modern systems where little-endian order is used.
    unsigned create_date (unsigned char day, unsigned char month, unsigned short year){
        return day | ((unsigned)month << 8) | ((unsigned)year << 16);
    }

    //return day, month or year of the date
    unsigned short year (unsigned date){
        unsigned short *pointer;
        pointer = (unsigned short *)(((char *)&date) + 2);
        printf("Year:%hd", *pointer);
        return *pointer;
    }

    int month (unsigned date){
        char *pointer = (char *)&date;
        printf("Month:%hhd", *(pointer+1));
        return *(pointer+1);
    }

    int day (unsigned date){
        char *pointer;
        pointer = (char *)&date;
        printf("Day:%hhd", *pointer);
        return *pointer;
    }

    int main(void){
        unsigned date;
        date = create_date(11, 12, 2020);
        month(date);
        day(date);
        year(date);
        return EXIT_SUCCESS;
    }

Hey, thanks for your help. Could you please explain what this line means: return day | ((unsigned)month << 8) | ((unsigned)year << 16);? So do you shift the bits by 8 or 16? It sets day on byte 0, month on byte 1 and year on bytes 2 & 3.
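For comparison, the same little-endian byte layout as the corrected create_date can be reproduced from Python's standard struct module (this is purely an illustration of the layout, not part of the C exercise):

```python
import struct

day, month, year = 11, 12, 2020

# '<BBH' = little-endian, no padding: one byte for day, one for month,
# two for year -- matching day | month << 8 | year << 16 from the C answer.
packed = struct.pack("<BBH", day, month, year)
assert len(packed) == 4

as_int = int.from_bytes(packed, "little")
assert as_int == day | (month << 8) | (year << 16)

# Unpacking recovers the original fields.
assert struct.unpack("<BBH", packed) == (day, month, year)
print(hex(as_int))  # 0x7e40c0b
```

This makes the "day on byte 0, month on byte 1, year on bytes 2 & 3" comment concrete: the packed bytes and the shifted-OR integer are the same four bytes.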
listbox with checkbox, with select All option This is the type of listbox I am looking for - a jQuery plugin, or something in jQuery or Kendo UI. I searched a lot but in vain. I need a listbox where I bind to a datasource. The items in the listbox should have checkboxes, allowing the user to check/uncheck. I also want an "All" item in the list, enabling the user to select or unselect all items in the listbox. Have you come across any plugin like that? I have used this plugin before, and it more or less does what you describe: http://www.erichynds.com/blog/jquery-ui-multiselect-widget Much nicer than a list box in my opinion, and it simply generates automatically off a select element. Did you consider using jqWidgets http://www.jqwidgets.com/listbox-with-checkboxes/ - they have open-source and paid APIs. See which one will best fit your needs. If you want to stick to the plugins you mentioned in your question, you can try the Kendo multiselect custom template http://demos.kendoui.com/web/multiselect/template.html Have you seen: http://joelanman.com/2009/03/simple-multiple-selection-with-checkbox-lists-and-jquery/? It seems to do the trick without a plug-in.
Performance of Scipy lsim/lsim2 In a Python script of mine, I currently use the lsim function to simulate a system model. An issue I encountered recently is that lsim spawns a lot of subprocesses on multiple cores, and together they cause heavy CPU load. That fact also shows in the profiling log; I attached the relevant snippet below. I run this script on a processing machine, which I share with multiple people. If I instead use lsim2, it seems that no subprocesses are spawned; however, running the script takes unbearably long. Does anyone have an idea how I can run lsim fast, while using fewer resources/just one core?

    ncalls    tottime  percall  cumtime  percall  filename:lineno(function)
    3740      25.422   0.007    51.062   0.014    /grid/common/pkgs/python/v3.7.2/lib/python3.7/site-packages/scipy/signal/ltisys.py:1870(lsim)
    26753891  21.519   0.000    21.519   0.000    {built-in method numpy.dot}
    12        0.001    0.000    21.450   1.788    /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:431(run)
    12        0.000    0.000    21.265   1.772    /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:895(communicate)
    24        0.000    0.000    21.265   0.886    /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:985(wait)
    24        0.000    0.000    21.265   0.886    /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:1592(_wait)
    12        0.000    0.000    21.264   1.772    /grid/common/pkgs/python/v3.7.2/lib/python3.7/subprocess.py:1579(_try_wait)

I don't know what you are trying to achieve - is your system model simple? Maybe do the simulation in a non-continuous domain with a good-enough sampling frequency? Scipy is not meant to have the best possible performance; it's for research, not production. Setting the environment variable OPENBLAS_NUM_THREADS=1 did the trick for me. See also the threadpoolctl package for controlling BLAS thread pool size on a per-call basis.
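The OPENBLAS_NUM_THREADS workaround above only helps if it takes effect before NumPy/SciPy first load OpenBLAS, so it belongs at the very top of the script. A minimal sketch (the extra OMP_NUM_THREADS line is my own precaution for OpenMP-backed builds, not from the thread):

```python
import os

# Must be set before the first `import numpy` / `import scipy`,
# otherwise OpenBLAS has already sized its thread pool.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ.setdefault("OMP_NUM_THREADS", "1")  # assumed extra cap, not from the thread

# Safe to import now; lsim's numpy.dot calls stay single-threaded:
# import numpy as np
# from scipy.signal import lsim
```

Alternatively, setting the variable in the shell that launches the script (OPENBLAS_NUM_THREADS=1 python script.py) avoids the ordering concern entirely.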
Failed to install apk on device (null) I have been stuck with my app project for quite a while now because of the error above in the title. I know there are many other topics about it and I have tried several solutions, but none seem to fix my problem. What I have tried so far:

1. Increased ADB connection time out (ms)
2. Changed the environmental path variable suggested here: http://anandpandia.blogspot.nl/2011/01/failed-to-install-helloandroidapk-on.html
3. Restarted my phone
4. Changed install location to external in the manifest file

The problem seems to have something to do with the size of the app. It worked fine until I added a few mp3 files. The app is now around 150MB I think. I do have plenty of space on both the internal disk and the SD. Console output:

    [2013-11-09 14:47:47 - POapp] ------------------------------
    [2013-11-09 14:47:47 - POapp] Android Launch!
    [2013-11-09 14:47:47 - POapp] adb is running normally.
    [2013-11-09 14:47:47 - POapp] Performing com.example.poapp.MainActivity activity launch
    [2013-11-09 14:47:47 - POapp] Automatic Target Mode: Unable to detect device compatibility. Please select a target device.
    [2013-11-09 14:49:03 - POapp] Uploading POapp.apk onto device '0123456789ABCDEF'
    [2013-11-09 14:49:39 - POapp] Installing POapp.apk...
    [2013-11-09 14:51:40 - POapp] Failed to install POapp.apk on device '0123456789ABCDEF!
    [2013-11-09 14:51:40 - POapp] (null)
    [2013-11-09 14:51:40 - POapp] Launch canceled!
Logcat errors (after a second run):

    11-09 15:08:07.697: E/VoldConnector(492): NDC Command {312 asec create smdl2tmp2 151 ext4 8b2b620d8895582cbbc8637606418f53 10016 1} took too long (49368ms)
    11-09 15:09:30.360: E/GetJar SDK(707): PackageMonitor: doOnReceive(): failed
    11-09 15:09:30.360: E/GetJar SDK(707): java.lang.IllegalStateException: Unable to access the application key
    11-09 15:09:30.360: E/GetJar SDK(707): at com.getjar.sdk.data.metadata.PackageMonitor.doOnReceive(PackageMonitor.java:121)
    11-09 15:09:30.360: E/GetJar SDK(707): at com.getjar.sdk.data.metadata.PackageMonitor.access$000(PackageMonitor.java:61)
    11-09 15:09:30.360: E/GetJar SDK(707): at com.getjar.sdk.data.metadata.PackageMonitor$1.run(PackageMonitor.java:86)
    11-09 15:09:30.360: E/GetJar SDK(707): at java.lang.Thread.run(Thread.java:838)
    11-09 15:09:30.631: E/GetJar SDK(707): PackageMonitor: doOnReceive(): failed
    11-09 15:09:30.631: E/GetJar SDK(707): java.lang.IllegalStateException: Unable to access the application key
    11-09 15:09:30.631: E/GetJar SDK(707): at com.getjar.sdk.data.metadata.PackageMonitor.doOnReceive(PackageMonitor.java:121)
    11-09 15:09:30.631: E/GetJar SDK(707): at com.getjar.sdk.data.metadata.PackageMonitor.access$000(PackageMonitor.java:61)
    11-09 15:09:30.631: E/GetJar SDK(707): at com.getjar.sdk.data.metadata.PackageMonitor$1.run(PackageMonitor.java:86)
    11-09 15:09:30.631: E/GetJar SDK(707): at java.lang.Thread.run(Thread.java:838)

I am having the exact same problem with a 2GB apk - have you found the answer to this problem? I am facing the same problem. Have you cleaned the project yet? When adding new resources, the project must first be cleaned before it is rebuilt. If you are using Eclipse, click Project -> Clean... and clean all projects. If you are using the command line, type "ant clean" in the project directory. Now try rebuilding the project. Thank you. I have tried this before, without any luck. Forgot to list it.
SignalR WebSocket Closed with status code 1006 (no reason given) (only on the remote server after deployment) I'm working with a project which combines an Angular app on the client side and a .NET Core 6 web API app on the server side. To provide real-time communication between client and server on different browsers without reloading the page, I use SignalR (with custom hubs, of course) to track connection IDs and send data to specific clients by connection ID, which is stored in a SQL Server database after a client connects and removed after it disconnects. In my local environment everything works fine, but after deployment to hosting a websocket problem appears. On the first screenshot you can see no errors in my browser console during a couple of minutes after client connection and some data manipulations between client and server through hubs. On the second screenshot you can see the result of these data manipulations between client and server, but on the remote hosting server. I'm very confused about it, because this error happens only on the remote server; locally the websocket transfer type works as expected. Is it a hosting server error which needs to be resolved by the hosting provider? Or is this my mistake? I think there are a lot of questions with the same subject on Stack Overflow, but I was searching for the right answer which could be a good solution for my problem, and... I can't find anything which helps. Feel free to ask for more info about the SignalR implementation if it would help in this situation. Thanks! Hi @mikhail, I am currently facing this issue, do you know what the fix for this issue is? @Nothing nope :c I also have this issue
Selecting alpha and beta parameters for a beta distribution, based on a mode and a 95% credible interval I want to select both alpha and beta values using both a mode and a 95% credible interval simultaneously. I can select alpha and beta using either a specified 95% credible interval, or a mode in conjunction with beta, but using both the mode and the 95% credible interval would help to further refine my beta distribution. I am after a beta distribution with a 95% credible interval from 0.01 to 0.15, and a mode of 0.05, to set up a prior distribution. I can select parameters using the 95% CI with this R code:

    library(LearnBayes)
    quantile1=list(p=.025, x=0.01) # 2.5% quantile should be 0.01
    quantile2=list(p=.975, x=0.15) # 97.5% quantile should be 0.15
    beta.select(quantile1, quantile2)
    [1]  2.44 38.21

I can select alpha, for a given mode and beta, using the following R code, by reorganizing the equation mode = (α - 1) / (α + β - 2):

    # Select beta and mode - does not work if alpha + beta = 2
    beta <- 38.21
    mod <- 0.05
    alpha <- beta * mod / (1 - mod) + (-2 * mod + 1) / (1 - mod)
    alpha
    [1] 2.958421

Note the change in the value of alpha, as the mode with alpha = 2.44 and beta = 38.21 is not 0.05 (it's 0.037257439). This is an optimisation problem: matching a distribution to 3 parameters, namely 2 quantiles and the mode. In fact, even the function you used earlier returns estimates from an optimisation.
If you calculate the quantile values from those $\alpha, \beta$ parameters it gave you, you'll see they don't match up exactly:

    > qbeta(p=c(0.025, 0.975), shape1=2.44, shape2=38.21)
    [1] 0.01004775 0.14983899

We'll define an objective function that calculates the squared error between the known and optimised quantile values and the mode, for a given set of parameters $\alpha,\beta$ of the distribution (encoded in the params vector):

    objective.function <- function(params) {
      alpha <- params[1]
      beta <- params[2]

      intended.quantiles <- c(0.01, 0.15)
      calculated.quantiles <- qbeta(p=c(0.025, 0.975), shape1=alpha, shape2=beta)
      squared.error.quantiles <- sum((intended.quantiles - calculated.quantiles)^2)

      intended.mode <- 0.05
      calculated.mode <- calculate.mode(alpha, beta)
      squared.error.mode <- (intended.mode - calculated.mode)^2

      return(squared.error.quantiles + squared.error.mode)
    }

    calculate.mode <- function(alpha, beta) {
      return((alpha-1) / (alpha+beta-2))
    }

You already have some good starting values for $\alpha, \beta$, so let's get those ready:

    starting.params <- c(2.44, 38.21)

Incidentally, this is what the PDF of a Beta distribution parameterised with these starting estimates looks like. Red lines are the actual quantiles & mode, and blue lines are what you are trying to match. As you suggest, the quantiles look fine but the mode is off. Now we use the nlm optimisation function to estimate optimal values $\alpha^*, \beta^*$, starting from those initial values.
The algorithm converges:

    nlm.result <- nlm(f = objective.function, p = starting.params)
    optimal.alpha <- nlm.result$estimate[1]
    optimal.beta <- nlm.result$estimate[2]

So the optimised estimates are $\alpha^* = 3.174725$ and $\beta^* = 44.94454$. And the quantiles and mode corresponding to these optimal parameters are:

    > qbeta(p=c(0.025, 0.975), shape1=optimal.alpha, shape2=optimal.beta)
    [1] 0.01499578 0.15042877
    > calculate.mode(optimal.alpha, optimal.beta)
    [1] 0.04715437

Recreating the plot of the Beta PDF, this time with the new parameters, we can see that the mode is much better matched, at the expense of the lower $2.5\%$ quantile. Possible enhancements: This is a quick implementation that may not deal well with extreme values and numerical issues. Have a look at this answer for better code in the case of 2 quantiles only. Investigate constrained optimisation packages, where the problem would be formulated as estimating 2 parameters ($\alpha, \beta$) subject to the constraint $\frac{\alpha - 1}{\alpha + \beta - 2} = 0.05$. Thanks for the reply. This is definitely closer to the desired model. I'll try it out some more tomorrow to see if I can get even closer, but I have a feeling that your suggestion may be as close as we'll get! I don't think I'm going to get closer to an ideal distribution than with your method. I agree, it's easier to have a more accurate and precise implementation with 2 quantiles as in your suggested link. I'll accept this answer, but I am still open to other suggestions, at the very least for the sake of learning.
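As a cross-check of the mode algebra used in the question, here is a stdlib Python transcription (function names are mine; this only reproduces the arithmetic, not the quantile optimisation, which needs qbeta or an equivalent):

```python
def beta_mode(alpha, beta):
    """Mode of a Beta(alpha, beta) distribution, valid for alpha, beta > 1."""
    return (alpha - 1) / (alpha + beta - 2)

def alpha_from_mode(mode, beta):
    """Invert mode = (alpha - 1) / (alpha + beta - 2) for alpha."""
    return beta * mode / (1 - mode) + (1 - 2 * mode) / (1 - mode)

b = 38.21
a = alpha_from_mode(0.05, b)
print(a)               # ~2.958421, matching the question's R output
print(beta_mode(a, b)) # recovers 0.05
```

This confirms the rearrangement in the question: with beta fixed at 38.21, alpha must be about 2.958421 for the mode to land exactly on 0.05, rather than the 2.44 that matches the quantiles alone.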
Where is the block which causes an IPv4 client to be unable to access an IPv6 server? I have read many documents saying that if you are on an IPv4 network then you cannot access an IPv6 network; for example, when I was at home I could not access some IPv6 websites, such as ipv6test.google.com. My confusion is about what blocks them from talking to each other. When an IPv4 packet goes out, the router routes it to the IPv6 server; then the IPv6 server just routes the packet back, and the router can find the client as well. Though the packet headers are different, the TCP protocol and data format are the same, so what blocks the two IP versions from talking to each other? "My confusion is about what blocks them from talking to each other." Nothing is blocking; they are just two completely different, incompatible protocols. "When an IPv4 packet goes out, the router routes it to the IPv6 server..." That cannot happen. An IPv4 packet has a 32-bit IPv4 destination address, so you cannot send an IPv4 packet to a destination with a 128-bit IPv6 address. IPv6 addresses do not fit in an IPv4 packet header. "...then the IPv6 server just routes the packet back, and the router can find the client as well." Again, that cannot happen because the addressing is incompatible. "Though the packet headers are different, the TCP protocol and data format are the same, so what blocks the two IP versions from talking to each other?" Nothing is blocked, but the addressing simply will not allow a packet from one protocol to be addressed to a destination with a different protocol. TCP was also ported to IPX, but IPX and IP are incompatible protocols that simply cannot communicate. The same holds for IPv4 and IPv6.
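The address-size incompatibility the answer describes is easy to see with Python's standard ipaddress module (purely an illustration; the example addresses are documentation prefixes, not from the thread):

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")

# An IPv4 header has room for 4 address bytes; IPv6 needs 16.
assert len(v4.packed) == 4
assert len(v6.packed) == 16

# Parsing an IPv6 literal as IPv4 simply fails -- the formats are incompatible.
try:
    ipaddress.IPv4Address("2001:db8::1")
except ipaddress.AddressValueError:
    print("an IPv6 address does not fit the IPv4 address format")
```

The 16-byte destination simply cannot be written into the 4-byte destination field of an IPv4 header, which is the whole of the "block".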
Equal Row Height in Sibling Flexbox Items/Columns I have multiple products with a heading, a description and a call-to-action. The word count of each is not regulated. I am using flexbox to display these products so that the containers of each item will be of equal height and can clear to the next row easily without having to mess with nth-child. My question is this. Is there a flex property (or other CSS property) that can allow me to match the height of rows in separate columns? Codepen http://codepen.io/fedaykin00/pen/yexOLV Desired outcome HTML <div class="container"> <div class="item"> <div class="item-row item-heading"> <h1>Item 1: Far far away</h1> </div> <div class="item-row item-body"> <p>Far far away, behind the word mountains, far from the countries Vokalia and Consonantia, there live the blind texts. Separated they live in Bookmarksgrove right at the coast of the Semantics, a large language ocean. A small river named Duden flows by their place and supplies it with the necessary</p> </div> <div class="item-row item-cta"> <a href="#" class="cta">Far far away</a> </div> </div> <div class="item"> <div class="item-row item-heading"> <h1>Item 2: A wonderful serenity has taken possession of my entire soul</h1> </div> <div class="item-row item-body"> <p>A wonderful serenity has taken possession of my entire soul, like these sweet mornings of spring which I enjoy with my whole heart.</p> </div> <div class="item-row item-cta"> <a href="#" class="cta">A wonderful serenity</a> </div> </div> <div class="item"> <div class="item-row item-heading"> <h1>Item 3: One morning, when Gregor Samsa</h1> </div> <div class="item-row item-body"> <p>One morning, when Gregor Samsa woke from troubled dreams, he found himself transformed in his bed into a horrible vermin. He lay on his armour-like back, and if he lifted his head a little he could see his brown belly, slightly domed and divided by arches into stiff sections. 
The bedding was hardly able to cover it and seemed ready to slide off any moment. His many legs, pitifully thin compared with the size of</p> </div> <div class="item-row item-cta"> <a href="#" class="cta">One morning</a> </div> </div> <div class="item"> <div class="item-row item-heading"> <h1>Item 4: The quick, brown fox</h1> </div> <div class="item-row item-body"> <p>The quick, brown fox jumps over a lazy dog. DJs flock by when MTV ax quiz prog. Junk MTV quiz graced by fox whelps. Bawds</p> </div> <div class="item-row item-cta"> <a href="#" class="cta">The quick</a> </div> </div> <div class="item"> <div class="item-row item-heading"> <h1>Item 5: Li Europan lingues es membres del sam familie</h1> </div> <div class="item-row item-body"> <p>Li Europan lingues es membres del sam familie. Lor separat existentie es un myth. Por scientie, musica, sport etc, litot Europa usa li sam vocabular. Li lingues differe solmen in li grammatica, li pronunciation e li plu commun vocabules. Omnicos directe al desirabilite de un nov lingua franca: On refusa</p> </div> <div class="item-row item-cta"> <a href="#" class="cta">Li Europan</a> </div> </div> </div> CSS

    .container {
      display: flex;
      flex-flow: row wrap;
      background-color: darkred;
      padding: 0.5em;
      justify-content: center;
    }

    .item {
      box-sizing: border-box;
      display: flex;
      flex-flow: column;
      width: calc((100% - 3em) / 3);
      margin: 0.5em;
      padding: 1em 1em 0 1em;
      background-color: lightgray;
    }

    .item-row {
      margin: 0 0 1em 0;
      padding: 1em;
      background-color: white;
    }

    .item-row.item-cta {
      margin-top: auto;
      padding: 0;
    }

    .item-row.item-cta a {
      display: block;
      background-color: blue;
      padding: 1em;
      color: white;
      text-align: center;
    }

    h1, p {
      margin: 0;
    }

As Outline

    .container (flex-direction: row)
      .item (flex-direction: column)
        .item-row.item-heading
          h2
        .item-row.item-body
          p
        .item-row.item-cta
          a[href]
      .item (flex-direction: column)
        .item-row.item-heading
          h2
        .item-row.item-body
          p
        .item-row.item-cta
          a[href]

If and when the content of one item's
h2 fills two lines, but another's only fills one line, I would like to be able to match the height of both of their parent elements (.item-row.item-heading). The same goes for the other instances of .item-row, when they/their children have disparate heights. I want to avoid having to manually set heights based on current content, as I'll have to change it when that content changes. I'm new to flexbox, so I'm not sure I know all of the properties and how they work. There may be no built-in way to do this. I'm sure there's a way to throw some scripts in the mix to make it do what I want, but I'm trying to avoid that if possible.

Yeah, this is possible. Check this: http://clearleft.com/thinks/270 and codepen example: http://codepen.io/lottejackson/pen/PwvjPj

@pivemi Thanks for the response. Yeah, I came across that article when I was trying to find a solution. Unfortunately, the 'equal height' aspect there is only referring to the main items (.list-item in her example, .item in my example), whereas I'm wanting the rows inside .list-item/.item to be of equal height.

I understand. Didn't have much time, hence the quick copy and paste. I saw the <a> tags are stretched in the example though (which is why the <a> is pushed down). Not possible to apply to the row inside the element?

@pivemi I see what you're saying. That seems to be from 'flex-grow: 1;' on the <a>, telling it to expand to fill the space available, though it seems to only act within its parent element, unaware of the height of its ... cousin (?) elements. Even though this doesn't make it do precisely what I wanted, this is not useless information, so thank you for pointing that out.

You can use jQuery matchHeight: Your code: https://jsfiddle.net/debraj/38jtno6a/10/ Full documentation here: http://brm.io/jquery-match-height/

I'm trying to see if there's a CSS-only method to solve the problem, but this is very useful in case there is no way to do it without JS.
Updated pen with your JS and flex-grow from @pivemi: http://codepen.io/fedaykin00/pen/KVxzgN. If I ever reach 15 rep, you'll have an upvote.

You can hardly do this kind of dynamic sizing with CSS alone; jQuery or JavaScript is more robust for it. A minimal way to achieve this with JavaScript is:

function setAllToMaxHeight($el) {
  var maxHeight = 0;
  // first pass: find the tallest element in the set
  $el.each(function() {
    var height = parseInt($(this).css('height').replace('px', ''), 10);
    maxHeight = height > maxHeight ? height : maxHeight;
  });
  // then apply that height to the whole set at once
  $el.css({ 'height': maxHeight });
}

setAllToMaxHeight($('.item-heading'));
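The measuring-and-matching logic in the snippet above doesn't depend on jQuery or even the DOM; at its core it is just "find the tallest, assign that height to everyone." Here is a DOM-free sketch of that core in plain JavaScript — the equalizeRow helper name is made up for illustration and is not part of any library:

```javascript
// Core of the match-height idea: given the measured heights of one
// row across all items, every element in that row should end up at
// the tallest measured height.
function equalizeRow(heights) {
  const max = Math.max(...heights); // the tallest element wins
  return heights.map(() => max);    // every element gets that height
}

console.log(equalizeRow([42, 87, 63])); // [ 87, 87, 87 ]
```

In a real page you would measure with offsetHeight (or jQuery's .height()) and write the result back via style.height, once per .item-row group.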
Transcendental extension Galois group

Let $K$ be a field and consider the extension $K(X)$ of rational functions with coefficients in $K$. It is common knowledge that $\text{Gal}(K(X)/K)$ is isomorphic to the group ${PL}_2(K)$, the quotient group of ${GL}_2(K)$ that identifies each matrix with its scalar multiples. I was doing some calculations in this group to verify my understanding, and I needed some help to double-check my work. Is the map
$$ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto \left(X \mapsto \frac{aX + b}{cX + d} \right) $$
a homomorphism or an antihomomorphism into the Galois group?

An antihomomorphism "switches the order of multiplication." If we want to investigate whether that occurs with the map you describe, we can just try out the multiplication. Let $\operatorname{M}_1$, $\operatorname{M}_2\in PL_2(K)$, and $\phi$ be the map you describe:
$$ \operatorname{M}_1\operatorname{M}_2=\begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix} \begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix}=\begin{pmatrix} a_1a_2+b_1c_2 & a_1b_2+b_1d_2 \\ c_1a_2+d_1c_2 & c_1b_2+d_1d_2 \end{pmatrix}$$
$$\phi(\operatorname{M}_1\operatorname{M}_2)= \left(X \mapsto \frac{(a_1a_2+b_1c_2)X + (a_1b_2+b_1d_2)}{(c_1a_2+d_1c_2)X + (c_1b_2+d_1d_2)} \right) $$
Compared to the multiplication on the other end of the map:
$$\phi(\operatorname{M}_1)\phi(\operatorname{M}_2)=\left(X \mapsto \frac{a_1X + b_1}{c_1X + d_1} \right) \circ\left(X \mapsto \frac{a_2X + b_2}{c_2X + d_2} \right) = \left(X \mapsto \frac{a_1\frac{a_2X + b_2}{c_2X + d_2} + b_1}{c_1\frac{a_2X + b_2}{c_2X + d_2} + d_1} \right)=\left(X \mapsto \frac{a_1(a_2X + b_2) + b_1(c_2X + d_2)}{c_1(a_2X + b_2) + d_1(c_2X + d_2)}\right)=\left(X \mapsto \frac{(a_1a_2+b_1c_2)X + (a_1b_2+b_1d_2)}{(c_1a_2+d_1c_2)X + (c_1b_2+d_1d_2)} \right)$$
Multiplication doesn't get turned around, so the map is a homomorphism. You can be reassured of this fact here.
Change view controller on click UIButton Swift (without storyboard)

How do you change the view controller when a button is clicked? I am not using the Storyboard; the views were all created programmatically. I tried adding a target to the button and then calling the following method to push the new view controller:

func switchView() {
    print(123)
    let prodListController = ProductListController()
    self.navigationController?.pushViewController(prodListController, animated: true)
}

This was called from within the UIViewController class. Any help will be greatly appreciated. Thanks!!

EDIT: Made the following modifications:

window = UIWindow(frame: UIScreen.main.bounds)
window?.makeKeyAndVisible()
window?.rootViewController = UINavigationController(rootViewController: ProductListController())

If this code is not doing anything, then I think your navigationController could be nil. Are you using a navigationController?

It looks like it is "nil"... You can do it with "show" instead:

self.show(ProductListController(), sender: self)

However, if you want to use a navigationController, you have to set it up first. For example (supposing you will start your app with a navigationController), in didFinishLaunchingWithOptions (AppDelegate.swift):

// Override point for customization after application launch.
self.window = UIWindow(frame: UIScreen.main.bounds)
let initialViewController = NavViewController()
self.window?.rootViewController = initialViewController
self.window?.makeKeyAndVisible()
return true

In viewDidLoad of NavViewController.swift:

self.pushViewController(ViewController(), animated: true)

And then you can use your code for the button:

func switchView() {
    print(123)
    let prodListController = ProductListController()
    self.navigationController?.pushViewController(prodListController, animated: true)
}

I have the code added to the AppDelegate. How would I add a view controller on top of it when I click a button?
You can use the code inside switchView() (see above) to show a new view controller. If you can't do it, tell me the name of the current view controller that is showing, and the view controller that you want to show.

I have that code exactly as the setupView but it doesn't show me the new view.

Do you have a NavViewController.swift for your navigation view controller? Also a ViewController.swift for your initial view controller? And a ProductListController.swift for your new view controller? Create a breakpoint on your button action method and check on the console "po self.navigationController?.viewControllers"

I do not have a NavViewController.swift. What is that exactly?

What exactly didn't you understand?

What is a breakpoint?

Do you know that arrow you put on your code so that the code stops running on that line?

I have a login page which leads to a productList page by making it the new root controller. I now want to go to the productDetail page when a button is clicked.

Ok, is the login page inside a navigation controller? Or are you starting the navigation controller after login?

I believe so. I did so in the AppDelegate.

If you don't want your login page to be inside a navigation view controller, you need to change your AppDelegate initialViewController to be the login page. Then, after login succeeds, you use "show" (see above) to start the navigation controller; inside the navigation controller's viewDidLoad you "push" the productList page; then, when the button inside the productList page is clicked, "push" the productDetail page.

Did you find a solution? If you have any more problems, please let me know :)

Let us continue this discussion in chat.
You can try this code to push another view controller:

self.window = UIWindow(frame: UIScreen.main.bounds)
let navVC = UINavigationController()
let yourVC = YourViewController()
navVC.viewControllers = [yourVC]
self.window!.rootViewController = navVC
self.window?.makeKeyAndVisible()

Hope this will resolve your problem.

I followed what you did but made some modifications. What would I do to add a view to the navigation controller? For example, when I click a button I want a new view controller.

Use self.navigationController?.viewControllers = [viewController] to add a view controller.
Parsing a delimited multiline string using scala StandardTokenParsers

I have found a few similar questions but nothing that seems to directly address my needs here. I am creating a DSL using Scala and have much of it already defined. However, part of the language needs to handle blocks of multi-line textual documentation that are collected and handled as individual entities by the parser. I would like to delimit these blocks in some way (say with something like {{ and }}) and just collect everything between the delimiters and return it as a DocString (a case class in my parser). These blocks will then be used to create additional end-user documentation along with the rest of the parsed file(s). The parser is already structured as a StandardTokenParsers-derived class. I suppose I could convert it to a RegexParsers-derived class and just use regular expressions, but that would be a major change and a lot of my grammar would have to be reworked. I am not sure if there would be any advantage to doing this (other than supporting the desired documentation blocks). I have seen Using regex in StandardTokenParsers and found this. I am not sure either of those will actually handle what I need, however, or how to begin if they do. If anyone has any ideas as to a viable way to proceed I would appreciate some pointers. As an example, here is something I have tried (from Using regex in StandardTokenParsers):

object DModelParser extends StandardTokenParsers {
  ...
  def modelElement: Parser[ModelElement] =
    (other stuff, not important here) |
    docBlock

  import scala.util.matching.Regex
  import lexical.StringLit

  def regexBlockMatch(r: Regex): Parser[String] = acceptMatch(
    "string block matching regex " + r,
    { case StringLit(s) if r.unapplySeq(s).isDefined => s })

  val bmr = """\{\{((?s).*)\}\}""".r

  def docBlockStr: Parser[String] = regexBlockMatch(bmr)

  def docBlock: Parser[DocString] = docBlockStr ^^ { s => new DocString(s) }
  ...
}

However, when passing it even a single line like the following:

{{ A block of docs }}

it fails to match, causing the parser to stop parsing. I think the problem is in the case StringLit(s) in this case, but I am not sure.

Edit

OK. StringLit was a problem. I forgot that this will only match strings in double quotes. So I tried replacing the string above with:

"{{ A block of docs }}"

and it works fine. However, the multi-line issue still remains. If I replace this with:

"{{
A block
of docs
}}"

then it still fails to parse. Again, I think it is the StringLit not working across line-feeds.

Edit

Another option occurred to me but I am not sure how to make it work in the parser. If I can read and match a line that only contains the opening delimiter, then collect into a List[String] all the lines until a line that only contains the closing delimiter, that would be sufficient. Is there a way to do this?

Edit 6/22/2015

I went a different direction and this seems to work for the examples I have tried so far:

// https://stackoverflow.com/questions/24771341/scala-regex-multiline-match-with-negative-lookahead
def docBlockRE = regex("""(?s).*?(?=}})""".r)

def docBlock: Parser[DocString] =
  "{{" ~> docBlockRE <~ "}}" ^^ { case str => new DocString(str) }

Hope your problem has been solved. I have some trouble of this sort in this area: http://stackoverflow.com/q/30872527/3827280 — would you please take a look at it and help me out?

@Rubbic, see my latest edit.

Thank you! I rewrote my problem using RegexParser and JavaTokenParser and it is solved, but I am curious about this way and I am going to try it because my previous way was more logical!
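The final fix above — a lazy match guarded by a lookahead for the closing delimiter — is a general regex idiom rather than anything specific to Scala's parser combinators. As a rough illustration in JavaScript (an assumption of this sketch: JS has no inline (?s) flag, so [\s\S] is used to match across newlines instead):

```javascript
// Lazily capture everything between {{ and }}.
// [\s\S] matches any character including newlines; *? is lazy, so the
// match stops at the first closing delimiter rather than the last.
const docBlock = /\{\{([\s\S]*?)\}\}/;

console.log("{{ A block of docs }}".match(docBlock)[1]); // captures " A block of docs "
console.log("{{a}} x {{b}}".match(docBlock)[1]);         // captures "a" (lazy, not "a}} x {{b")
```

The same lazy-plus-lookahead shape works in Scala's Regex with (?s) prepended, as in the edit above.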
Hashed Database Password [Hibernate]

Due to security requirements I need to store the database password as an MD5 hash in my hibernate.cfg.xml, but as far as I know Hibernate does not support hashed passwords. I am using Hibernate 5.1.0. My hibernate.cfg.xml looks like this:

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD//EN"
        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.connection.driver_class">org.h2.Driver</property>
    <property name="hibernate.connection.url">jdbc:h2:tcp://localhost/~/test</property>
    <property name="hibernate.connection.username">sa</property>
    <property name="hibernate.connection.password"></property>
    <property name="show_sql">true</property>
    <property name="hibernate.c3p0.min_size">5</property>
    <property name="hibernate.c3p0.max_size">20</property>
    <property name="hibernate.c3p0.timeout">300</property>
    <property name="hibernate.c3p0.max_statements">50</property>
    <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
    <property name="hibernate.hbm2ddl.auto">update</property>
  </session-factory>
</hibernate-configuration>

This is how I create a sessionFactory:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateUtility {
    private static final SessionFactory sessionFactory = buildSessionFactory();

    private static SessionFactory buildSessionFactory() {
        try {
            // Create the SessionFactory from hibernate.cfg.xml
            return new Configuration()
                    .configure()
                    .buildSessionFactory();
        } catch (Throwable ex) {
            System.err.println("Initial SessionFactory creation failed." + ex);
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}

Is there a way to use a hashed database password for Hibernate?

Please note that MD5 is a very weak (and broken) cryptographic hash algorithm.
Either use a salted SHA-256+ with x iterations or, better, switch to a password hash algorithm like BCrypt. Also please note that hashes cannot be reversed; that's why you cannot use them to log in to any system.

Absolutely nothing to do with the JPA API. Removing tag.

You can also use the Jasypt password encryption API; it is more friendly with Hibernate.

The password supplied within hibernate.connection.password is used by Hibernate to connect to the database, and hence it needs the actual password instead of a hashed password. You store hashed passwords only when you need to verify the identity of the user, because once any text has been hashed, it's irreversible. It's a one-way process: you can get hashed text from your password, but you cannot get the password back from the generated hashed text. If you store a hashed password in hibernate.connection.password, then Hibernate won't be able to connect to the database because there's no way to get the password from an MD5 hash. So it's not possible. See also: Fundamental difference between Hashing and Encryption algorithms.

However, you can encrypt the password in hibernate.cfg.xml — see this question.

You would be better externalising the password, i.e. removing it completely from the hibernate.cfg.xml. You can then pass it in via a system property, e.g. add the following to your server's startup command: -Dhibernate.connection.password=password. An even better approach is to define a JNDI datasource in your app server and then have Hibernate obtain a reference to this. All DB credentials are then removed from the app config, and you can then deploy your app to different environments without changing the configuration (assuming the JNDI datasource name remains consistent). See: https://docs.jboss.org/hibernate/orm/3.3/reference/en/html/session-configuration.html
Discuss how "GIS Web Service" package can be used in python to access GIS information

I want brief information about how a Python GIS web service package is used for accessing GIS information, with one example. I am new to Python, but I want this for my further projects.

1) What kind of "Python GIS web service package" are you talking about? 2) What exactly are you trying to do? 3) Are you trying to do web/internet GIS or web-based mapping?

First, let me explain what the concept of GIS is. Over the years, the concept of GIS has experienced continuous changes, from Geographical Information System (GISystem) to Geographical Information Science (GIScience), and then the development of Geographical Information Services (GIServices).

GISystem: Talks about GIS as a tool or system for solving geospatial-related problems. Its components include: Hardware, Software, Data, Users, Procedure/Methods and Network.

GIScience: Talks about GIS as a scientific discipline of study in the academia. The major contributing disciplines are: Computer Science, Mathematics/Statistics, Geomatics (Land Surveying, Photogrammetry, Remote Sensing, Geodesy, GPS), Geography and Cartography.

GIServices: Talks about GIS as a Service or an Overhaul or a Provision for delivering geospatial information, often via the web. Some examples are: MapQuest, Google Maps, Bing Maps, Yahoo Maps, Apple Maps, Yandex Maps, OpenStreetMap and WikiMapia Maps.

I have strong feelings that your question is inclined toward "GIServices". Python is an open source interpreted programming language used in many GIS programs. Python is a dynamic and strongly typed general-purpose programming language. It has an extensive standard library and a community-run index of third-party packages for GIS software; check: GIS PyPI. Python's use in GIS goes back to about 2000 with the development of Python bindings for GDAL and AVPython. Here is the link between the two (Python + GIS web services)...
Python and GIServices

There are several Python packages/libraries for GIS web services and a large variety of useful GIS-related add-ons. Some notable ones include: Folium, GeoDjango, MapFish, ArcPy, PyQGIS etc. All of these packages can be used for accessing GIS information. An example of using the Folium package can be found on the official website.

Further reading: GIS Programming with Python and QGIS software; Introduction to: GISystem, GIScience, and GIService
Template metaprogramming within the body of a template class

I'm trying to write a partially specialised template function within the body of a template class/struct. The partial specialisation is done to perform recursive template metaprogramming.

template<size_t N>
struct my_class {
    template<size_t D>
    double my_func(...){}

    template<>
    double my_func<0>(...){}

    double other_func(...){
        return my_func<N-1>(...);
    }
};

but g++ (using the -std=c++0x option) complains, saying it can't partially specialize a template function within a class/struct, and forces me to write the template functions my_func outside of the class scope in a separate namespace, as if they were static, eventually passing all the private class variables and making the code very messy (all those member variables would otherwise be easily referenced through this). Is there a way I can do partial template specialization (I could make the functions static members of private sub-classes of my_class too) and metaprogramming within the same class? This makes the code cleaner and a lot easier to maintain. I'm using Ubuntu 12.04 and gcc 4.6. Cheers

You can achieve the desired result by overloading the function (not specializing it) then using enable_if to selectively only enable one or other of the overloads:

template<size_t D>
typename std::enable_if<D!=0, double>::type my_func(...){}

template<size_t D>
typename std::enable_if<D==0, double>::type my_func(...){}

The enable_if constraint means that when D!=0 only the first overload is a viable function, and when D==0 only the second overload is a viable function. In C++03 you can do the same thing with boost::enable_if_c.
My preferred solution would replace the ugly enable_if usage with a custom trait type, maybe something like this:

template<size_t>
struct if_zero {
  typedef double disable;
};

template<>
struct if_zero<0> {
  typedef double enable;
};

template<size_t D>
typename if_zero<D>::disable my_func(...){}

template<size_t D>
typename if_zero<D>::enable my_func(...){}

This has the same effect, but in a more literate programming style. Another form that's even easier to read would be:

template<bool, typename T>
struct If_ {
  typedef T enable;
};

template<typename T>
struct If_<false, T> { };

template<bool B, typename T>
using If = typename If_<B, T>::enable;

template<size_t D>
If<D!=0, double> my_func(...){}

template<size_t D>
If<D==0, double> my_func(...){}

I think the "Concepts Lite" proposal would allow this in a far cleaner way by constraining the second overload like so:

template<size_t D>
double my_func(...){}

template<size_t D>
  requires (D == 0)
double my_func(...){}

Here the second overload can only be called when D==0 and will be chosen by overload resolution because it is more constrained than the first overload.

SFINAE for the win, eh? :-) I was genuinely hoping the new standard would support partially specialized template metaprogramming out of the box, without these ugly syntactic tricks...

See the If alias template version I added, which I think is about the best syntax you can get in C++11. If concepts make it into C++17 you'll be able to use constraints.
How to highlight a certain value in a pyplot graph?

I have created a simple pyplot of three Gaussians. Now, I want to draw a dotted straight line from the x-axis to the peak value of each Gaussian. Is this possible?

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import math

mu1 = 5
mu2 = 10
variance1 = 3
variance2 = 1
sigma1 = math.sqrt(variance1)
sigma2 = math.sqrt(variance2)

mu_combined = (variance2*mu1 + variance1*mu2)/(variance1 + variance2)
variance_combined = 1/(1/variance1 + 1/variance2)
sigma_combined = math.sqrt(variance_combined)

x = np.linspace(0, 15, 100)
plt.plot(x, norm.pdf(x, mu1, sigma1), 'b')
plt.plot(x, norm.pdf(x, mu2, sigma2), 'r')
plt.plot(x, norm.pdf(x, mu_combined, sigma_combined), 'g')
plt.plot([5, 5], [0, 0.23], 'b:')
plt.plot([10, 10], [0, 0.4], 'r:')
plt.plot([8.7, 8.7], [0, 0.46], 'g:')

ax = plt.gca()
ax.set_xlabel('Random Variable')
ax.set_ylabel('Probability')
plt.grid()
plt.show()
plt.savefig('Program_Path/Gaussians.svg', format='svg')

You might want to look at this (unfortunately not that easy) answer https://stackoverflow.com/a/49194458/803359. It will even allow you to zoom in and out; you have to modify it, though. If you do not care about that, other answers there might help as well.

Most simple solution: plt.plot([x0, x0], [0, y0]), where (x0, y0) is the position of your max.

@mikuszefski: Thx a lot, man! I have included your answer in my code and it looks fine. Now I am trying to save the figure as SVG. It seems to work, i.e. I get a file.svg image in my specified directory. But I cannot open it, or rather when I open the file it is empty. I know this is a different question, but do you have an idea what could be the problem?

Yep... known thing... look here: https://stackoverflow.com/a/21884187/803359
Pushing a javascript object into another javascript object

I am trying to merge two javascript sets into one. I am trying to put the products javascript object into the Quotes attribute of the solution object. So the end result should be the SolutionProducts object.

$scope.Products = {
  "id": "",
  "attributes": {
    "term": "36"
  },
  "groups": [
    {
      "products": [
        // list of products
      ]
    }
  ]
}

$scope.Solution = {
  "SolutionID": "",
  "Quotes": [
  ]
}

$scope.SolutionProducts = {
  "SolutionID": "",
  "Quotes": [
    {
      "id": "",
      "attributes": {
        "term": "36"
      },
      "groups": [
        {
          "products": [
            // list of products
          ]
        }
      ]
    }
  ]
}

I tried to use the push function but it didn't work:

$scope.SolutionProducts = $scope.Solution.Quotes[0].push($scope.Products.products);

The return value of Array.prototype.push is not the new array, but the number of elements.

This question has nothing to do with JSON, which is a data serialization format. You are simply talking about javascript objects.

Simple mistake: you are assigning the return value of the Array push method to your variable $scope.SolutionProducts. Instead do this:

$scope.Solution.Quotes.push($scope.Products);
$scope.SolutionProducts = $scope.Solution;

Note that $scope.Solution and $scope.SolutionProducts will have the same reference, meaning you actually don't need to have the $scope.SolutionProducts variable and can just go on with $scope.Solution.

@MVP brings up one key issue with your code: you simply want to pass a reference of the Solution object to the SolutionProducts object. With your current code, you are setting $scope.SolutionProducts to the return value of the push() function, which actually returns the length of the array as an integer, not the object. (See MDN's article on push for more.)

The second issue is that you're not actually using push on an array:

$scope.SolutionProducts = $scope.Solution.Quotes[0].push($scope.Products.products);

You're applying .push to Quotes[0], which is a value in the array, not the array itself.
You need something like this:

$scope.Solution.Quotes.push($scope.Products);

Now you're using the push function on a proper array. Putting both of these issues together, you should have something that looks a little like this:

$scope.Solution.Quotes.push($scope.Products);
$scope.SolutionProducts = $scope.Solution; // sets SolutionProducts to the Solution reference
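Both answers hinge on one detail that is easy to miss: Array.prototype.push mutates the array in place and returns the new length, not the array. A self-contained sketch of the wrong and right versions (object shapes trimmed down from the question, no Angular needed):

```javascript
const Solution = { SolutionID: "", Quotes: [] };
const Products = { id: "", attributes: { term: "36" }, groups: [] };

// Wrong: push returns the new length of the array -- a number,
// not the array or the pushed object.
const wrong = Solution.Quotes.push(Products);
console.log(typeof wrong); // "number"

// Right: push already mutated Quotes in place, so just take the
// reference to the whole object afterwards.
const SolutionProducts = Solution;
console.log(SolutionProducts.Quotes.length);          // 1
console.log(SolutionProducts.Quotes[0] === Products); // true
```

Because SolutionProducts and Solution are the same reference, any later change through one is visible through the other — which is why the answer notes you can drop the extra variable entirely.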
Why am I getting a "Cannot GET" error (MongoDB, Postman) while I am using a query parameter?

I am trying to request a query through the router, but I didn't get it:

router.get('/questions/:id', (req, res) => {
  console.log(req.query)
  res.send(req.query)
})

I want to get req.query, but while sending a request through Postman it is showing "Cannot GET /questions". I don't know why, but thanks in advance.

The error is telling you that you cannot get the path /questions. This is because you have implemented /questions/someId and not /questions?id=someId.

// slash syntax
router.get('/questions/:id', (req, res) => {
  console.log(`Got via parameter syntax: id=${req.params.id}`)
  res.send(req.params.id);
})

// query syntax
router.get('/questions', (req, res) => {
  if (typeof req.query.id !== "string") {
    res.status(404).send("Missing required query parameter: id");
    return;
  }
  console.log(`Got via query syntax: id=${req.query.id}`)
  res.send(req.query.id);
})

You can also support both in the same chain:

// dual query & slash syntax
router.get(['/questions', '/questions/:id'], (req, res) => {
  const id = req.params.id || req.query.id;
  if (typeof id !== "string") {
    res.status(404).send("Missing required parameter: id");
    return;
  }
  res.send(id);
})

Thank you so much @samthecodingman, you explained it very clearly and solved my error.
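The distinction the answer draws — an id carried in a path segment (/questions/123) versus in a query string (/questions?id=123) — can be seen without Express at all, using Node's built-in URL class (the localhost base below is an arbitrary placeholder):

```javascript
// Path-parameter style: the id lives inside the pathname, so a route
// registered for plain "/questions" will never match this request.
const withParam = new URL("http://localhost/questions/123");
console.log(withParam.pathname); // "/questions/123"

// Query-string style: the pathname is just "/questions" -- the route
// matches -- and the id arrives separately via the query string.
const withQuery = new URL("http://localhost/questions?id=123");
console.log(withQuery.pathname);               // "/questions"
console.log(withQuery.searchParams.get("id")); // "123"
```

Express routes match on the pathname only, which is why a handler for '/questions/:id' produces "Cannot GET /questions" when the id is sent as a query parameter instead.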