doc_700
```r
sp500 = read.csv(text = getURL("https://raw.githubusercontent.com/datasets/s-and-p-500-companies/master/data/constituents-financials.csv"), header = T)
n = nrow(sp500)
for (i in 1:n) {
  j <- sp500[i, 1]
  getSymbols(j)
  j = as.data.frame(j)
}
```

So I get a lot of datasets, each named after the ticker from the column mentioned before. The problem is that I somehow have to build an aggregated dataset consisting of one particular column from each dataset. In other words, I have to take MMM$MMM.Close, add ABT$ABT.Close, and so on. I suppose it would take a long time to do this manually, so I'd like to find out how to address those datasets one by one (in a loop), knowing their names from the ticker column.

A: Here is code that downloads daily close prices for the selected tickers (credit to @Quant Guy). I couldn't open your file and can't find a free reliable source (apart from Wikipedia) for all the S&P 500 index constituents, so I entered the tickers manually:

```r
library(quantmod)

tickers = c("WMT", "MMM", "AIG", "AAPL", "KO", "COST", "C", "AMZN", "ICE", "VTR")
getSymbols(tickers, from = "2010-01-01", to = "2015-12-31")

P <- NULL
seltickers <- NULL
for (ticker in tickers) {
  tmp = Cl(eval(parse(text = ticker)))  # Cl() from quantmod extracts the Close column
  if (is.null(P)) { timeP = time(tmp) }
  if (any(time(tmp) != timeP)) next
  P = cbind(P, as.numeric(tmp))
  seltickers = c(seltickers, ticker)
}
P = xts(P, order.by = timeP)
colnames(P) = seltickers
```

Just modify the tickers and you should be good to go.
doc_701
my_collxn_view frame size is (5, 200, 310, 368), my_view frame size is (0, 0, 320, 568), my_scroll_view frame size is (0, 0, 320, 568). I am having 10 cells, so the content size is too large. I don't know how to expand the UICollectionView frame size through code. Kindly guide me. I have tried the following.

First attempt:

```swift
func collectionView(collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAtIndexPath indexPath: NSIndexPath) -> CGSize {
    my_collxn_view.frame.size.height = my_collxn_view.contentSize.height              // 635.0
    my_view.frame.size.height = 200 + my_collxn_view.frame.size.height               // 835.0
    my_scroll_view.contentSize = CGSizeMake(320, my_view.frame.size.height)          // Scrolling
    my_collxn_view.frame = CGRectMake(5, 200, 310, my_collxn_view.frame.size.height) // Not expanding
    return CGSizeMake(150, 205)  // Cell size
}
```

Second attempt:

```swift
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    my_collxn_view.frame.size.height = my_collxn_view.contentSize.height              // 635.0
    my_view.frame.size.height = 200 + my_collxn_view.frame.size.height               // 835.0
    my_collxn_view.frame = CGRectMake(5, 200, 320, my_collxn_view.frame.size.height) // Not expanding
    my_view.frame = CGRectMake(0, 0, 320, my_view.frame.size.height)                 // Not expanding
    my_scroll_view.contentSize = CGSizeMake(320, my_view.frame.size.height)          // Scrolling
}
```

Updated:

```swift
my_view.setTranslatesAutoresizingMaskIntoConstraints(true)
my_collxn_view.setTranslatesAutoresizingMaskIntoConstraints(true)
```

I have used the above two lines and received the exact output, but some warnings have been displayed in the debug area.

Warning:

2015-04-21 18:43:52.974 E Commerce[9475:1673654] Unable to simultaneously satisfy constraints. Probably at least one of the constraints in the following list is one you don't want. Try this: (1) look at each constraint and try to figure out which you don't expect; (2) find the code that added the unwanted constraint or constraints and fix it.
(Note: If you're seeing NSAutoresizingMaskLayoutConstraints that you don't understand, refer to the documentation for the UIView property translatesAutoresizingMaskIntoConstraints) ( "<NSIBPrototypingLayoutConstraint:0x7fcf6958a010 'IB auto generated at build time for view with fixed frame' V:|-(200)-[UICollectionView:0x7fcf6a84b000] (Names: '|':UIView:0x7fcf6957abe0 )>", "<NSIBPrototypingLayoutConstraint:0x7fcf6958a0b0 'IB auto generated at build time for view with fixed frame' V:[UICollectionView:0x7fcf6a84b000(368)]>", "<NSAutoresizingMaskLayoutConstraint:0x7fcf696745c0 h=--& v=--& UICollectionView:0x7fcf6a84b000.midY == + 517.5>" ) Will attempt to recover by breaking constraint <NSIBPrototypingLayoutConstraint:0x7fcf6958a010 'IB auto generated at build time for view with fixed frame' V:|-(200)-[UICollectionView:0x7fcf6a84b000] (Names: '|':UIView:0x7fcf6957abe0 )> Make a symbolic breakpoint at UIViewAlertForUnsatisfiableConstraints to catch this in the debugger. The methods in the UIConstraintBasedLayoutDebugging category on UIView listed in <UIKit/UIView.h> may also be helpful. 2015-04-21 18:43:52.978 Test_work[9475:1673654] Unable to simultaneously satisfy constraints. Probably at least one of the constraints in the following list is one you don't want. Try this: (1) look at each constraint and try to figure out which you don't expect; (2) find the code that added the unwanted constraint or constraints and fix it. 
(Note: If you're seeing NSAutoresizingMaskLayoutConstraints that you don't understand, refer to the documentation for the UIView property translatesAutoresizingMaskIntoConstraints) ( "<NSIBPrototypingLayoutConstraint:0x7fcf6958a0b0 'IB auto generated at build time for view with fixed frame' V:[UICollectionView:0x7fcf6a84b000(368)]>", "<NSAutoresizingMaskLayoutConstraint:0x7fcf69674630 h=--& v=--& V:[UICollectionView:0x7fcf6a84b000(635)]>" ) Will attempt to recover by breaking constraint <NSIBPrototypingLayoutConstraint:0x7fcf6958a0b0 'IB auto generated at build time for view with fixed frame' V:[UICollectionView:0x7fcf6a84b000(368)]> Make a symbolic breakpoint at UIViewAlertForUnsatisfiableConstraints to catch this in the debugger. The methods in the UIConstraintBasedLayoutDebugging category on UIView listed in <UIKit/UIView.h> may also be helpful. 2015-04-21 18:43:52.980 Test_work[9475:1673654] Unable to simultaneously satisfy constraints. Probably at least one of the constraints in the following list is one you don't want. Try this: (1) look at each constraint and try to figure out which you don't expect; (2) find the code that added the unwanted constraint or constraints and fix it. 
(Note: If you're seeing NSAutoresizingMaskLayoutConstraints that you don't understand, refer to the documentation for the UIView property translatesAutoresizingMaskIntoConstraints) ( "<NSIBPrototypingLayoutConstraint:0x7fcf6958a6c0 'IB auto generated at build time for view with fixed frame' V:|-(0)-[UIView:0x7fcf6957abe0] (Names: '|':UIScrollView:0x7fcf6957a8b0 )>", "<NSIBPrototypingLayoutConstraint:0x7fcf6958a760 'IB auto generated at build time for view with fixed frame' V:[UIView:0x7fcf6957abe0(568)]>", "<NSAutoresizingMaskLayoutConstraint:0x7fcf69674910 h=--& v=--& UIView:0x7fcf6957abe0.midY == + 442.5>" ) Will attempt to recover by breaking constraint <NSIBPrototypingLayoutConstraint:0x7fcf6958a6c0 'IB auto generated at build time for view with fixed frame' V:|-(0)-[UIView:0x7fcf6957abe0] (Names: '|':UIScrollView:0x7fcf6957a8b0 )> Make a symbolic breakpoint at UIViewAlertForUnsatisfiableConstraints to catch this in the debugger. The methods in the UIConstraintBasedLayoutDebugging category on UIView listed in <UIKit/UIView.h> may also be helpful. 2015-04-21 18:43:52.990 Test_work[9475:1673654] Unable to simultaneously satisfy constraints. Probably at least one of the constraints in the following list is one you don't want. Try this: (1) look at each constraint and try to figure out which you don't expect; (2) find the code that added the unwanted constraint or constraints and fix it. 
(Note: If you're seeing NSAutoresizingMaskLayoutConstraints that you don't understand, refer to the documentation for the UIView property translatesAutoresizingMaskIntoConstraints) ( "<NSIBPrototypingLayoutConstraint:0x7fcf6958a760 'IB auto generated at build time for view with fixed frame' V:[UIView:0x7fcf6957abe0(568)]>", "<NSAutoresizingMaskLayoutConstraint:0x7fcf69674980 h=--& v=--& V:[UIView:0x7fcf6957abe0(885)]>" ) Will attempt to recover by breaking constraint <NSIBPrototypingLayoutConstraint:0x7fcf6958a760 'IB auto generated at build time for view with fixed frame' V:[UIView:0x7fcf6957abe0(568)]> Make a symbolic breakpoint at UIViewAlertForUnsatisfiableConstraints to catch this in the debugger. The methods in the UIConstraintBasedLayoutDebugging category on UIView listed in <UIKit/UIView.h> may also be helpful. Kindly guide me. A: Note: If you're seeing NSAutoresizingMaskLayoutConstraints that you don't understand, refer to the documentation for the UIView property translatesAutoresizingMaskIntoConstraints You need to setTranslatesAutoresizingMaskIntoConstraints(false) on each view you manually set one or more NSLayoutConstraint to.
doc_702
```
.ooo....
..oooo..
....oo..
.oooooo.
..o..o..

...ooooooooooooooooooo...
..........oooo.......oo..
.....ooooooo..........o..
.....oo..................

......ooooooo....
...ooooooooooo...
..oooooooooooooo.
..ooooooooooooooo
..oooooooooooo...
...ooooooo.......
....oooooooo.....
.....ooooo.......
.......oo........
```

Where . is dead space and o is a marked pixel. I only care about "binary" generation: a pixel is either ON or OFF. So for instance these would look like some imaginary blob of ketchup or fictional bacterium or whatever organic substance. What kind of algorithm could achieve this? I'm really at a loss.

A: David Thornley's comment is right on, but I'm going to assume you want a blob with an 'organic' shape and smooth edges. For that you can use metaballs. Metaballs apply a power function over a scalar field, and scalar fields can be rendered efficiently with the marching cubes algorithm. Different shapes can be made by changing the number of balls, their positions and their radii. See here for an introduction to 2D metaballs: https://web.archive.org/web/20161018194403/https://www.niksula.hut.fi/~hkankaan/Homepages/metaballs.html And here for an introduction to the marching cubes algorithm: https://web.archive.org/web/20120329000652/http://local.wasp.uwa.edu.au/~pbourke/geometry/polygonise/ Note that the 256 combinations for the intersections in 3D reduce to only 16 combinations in 2D. It's very easy to implement.

EDIT: I hacked together a quick example with a GLSL shader. Here is the result of using 50 blobs, with the energy function from hkankaan's homepage. Here is the actual GLSL code, though I evaluate this per-fragment; I'm not using the marching cubes algorithm. You need to render a full-screen quad for it to work (two triangles). The vec3 uniform array is simply the 2D positions and radii of the individual blobs, passed with glUniform3fv.
```glsl
/* Trivial bare-bones vertex shader */
#version 150
in vec2 vertex;
void main() {
    gl_Position = vec4(vertex.x, vertex.y, 0.0, 1.0);
}

/* Fragment shader */
#version 150
#define NUM_BALLS 50
out vec4 color_out;
uniform vec3 balls[NUM_BALLS]; // .xy is position, .z is radius

bool energyField(in vec2 p, in float gooeyness, in float iso) {
    float en = 0.0;
    bool result = false;
    for (int i = 0; i < NUM_BALLS; ++i) {
        float radius = balls[i].z;
        float denom = max(0.0001, pow(length(vec2(balls[i].xy - p)), gooeyness));
        en += (radius / denom);
    }
    if (en > iso) result = true;
    return result;
}

void main() {
    bool outside;
    /* gl_FragCoord.xy is in screen space / fragment coordinates */
    outside = energyField(gl_FragCoord.xy, 1.0, 40.0);
    if (outside == true)
        color_out = vec4(1.0, 0.0, 0.0, 1.0);
    else
        discard;
}
```

A: Here's an approach where we first generate a piecewise-affine potato, and then smooth it by interpolating. The interpolation idea is based on taking the DFT, then leaving the low frequencies as they are, padding with zeros at high frequencies, and taking an inverse DFT. Here's code requiring only standard Python libraries:

```python
import cmath
from math import atan2, pi
from random import random

def convexHull(pts):  # Graham's scan.
    xleftmost, yleftmost = min(pts)
    by_theta = [(atan2(x - xleftmost, y - yleftmost), x, y) for x, y in pts]
    by_theta.sort()
    as_complex = [complex(x, y) for _, x, y in by_theta]
    chull = as_complex[:2]
    for pt in as_complex[2:]:
        # Perp product.
        while ((pt - chull[-1]).conjugate() * (chull[-1] - chull[-2])).imag < 0:
            chull.pop()
        chull.append(pt)
    return [(pt.real, pt.imag) for pt in chull]

def dft(xs):
    return [sum(x * cmath.exp(2j * pi * i * k / len(xs))
                for i, x in enumerate(xs))
            for k in range(len(xs))]

def interpolateSmoothly(xs, N):
    """For each point, add N points."""
    fs = dft(xs)
    half = (len(xs) + 1) // 2
    fs2 = fs[:half] + [0] * (len(fs) * N) + fs[half:]
    return [x.real / len(xs) for x in dft(fs2)[::-1]]

pts = convexHull([(random(), random()) for _ in range(10)])
xs, ys = [interpolateSmoothly(zs, 100) for zs in zip(*pts)]  # Unzip.
```

This generates something like this (the initial points, and the interpolation):

Here's another attempt:

```python
pts = [(random() + 0.8) * cmath.exp(2j * pi * i / 7) for i in range(7)]
pts = convexHull([(pt.real, pt.imag) for pt in pts])
xs, ys = [interpolateSmoothly(zs, 30) for zs in zip(*pts)]
```

These have kinks and concavities occasionally. Such is the nature of this family of blobs. Note that SciPy has convex hull and FFT functions, so the functions above could be substituted by them.

A: You could probably design algorithms to do this that are minor variants of a range of random maze-generating algorithms. I'll suggest one based on the union-find method. The basic idea in union-find is, given a set of items partitioned into disjoint (non-overlapping) subsets, to identify quickly which partition a particular item belongs to. The "union" combines two disjoint sets to form a larger set; the "find" determines which partition a particular member belongs to. The idea is that each partition of the set can be identified by a particular member of the set, so you can form tree structures where pointers point from member to member towards the root. You can union two partitions (given an arbitrary member of each) by first finding the root of each partition, then modifying the (previously null) pointer of one root to point to the other.
You can formulate your problem as a disjoint union problem. Initially, every individual cell is a partition of its own. What you want is to merge partitions until you get a small number of partitions (not necessarily two) of connected cells. Then, you simply choose one (possibly the largest) of the partitions and draw it.

For each cell, you will need a pointer (initially null) for the unioning. You will probably need a bit vector to act as a set of neighbouring cells. Initially, each cell will have a set of its four (or eight) adjacent cells.

For each iteration, you choose a cell at random, then follow the pointer chain to find its root. In the details stored at the root, you find its neighbour set. Choose a random member from that, then find the root for it, to identify a neighbouring region. Perform the union (point one root to the other, etc.) to merge the two regions. Repeat until you're happy with one of the regions.

When merging partitions, the new neighbour set for the new root will be the set symmetric difference (exclusive or) of the neighbour sets of the two previous roots.

You'll probably want to maintain other data as you grow your partitions, e.g. the size, in each root element. You can use this to be a bit more selective about going ahead with a particular union, and to help decide when to stop. Some measure of the scattering of the cells in a partition may also be relevant: a small variance or standard deviation (relative to a large cell count) probably indicates a dense, roughly circular blob.

When you finish, you just scan all cells to test whether each is part of your chosen partition, to build a separate bitmap.

In this approach, when you randomly choose a cell at the start of an iteration, there's a strong bias towards choosing the larger partitions. When you choose a neighbour, there's also a bias towards choosing a larger neighbouring partition. This means you tend to get one clearly dominant blob quite quickly.
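The union-find growth procedure described above can be sketched as follows. This is a simplified, hypothetical version: it merges random adjacent cells directly and skips the neighbour-set and scatter-statistics bookkeeping, but it shows the core find/union mechanics and produces a connected blob.

```python
import random

def random_blob(width, height, target_size, seed=None):
    """Grow a connected blob on a grid by repeatedly unioning a random
    cell's region with a randomly chosen adjacent region, stopping once
    some region reaches target_size. Returns rows of '.'/'o' characters."""
    rng = random.Random(seed)
    target_size = min(target_size, width * height)
    parent = {(x, y): (x, y) for x in range(width) for y in range(height)}
    size = {cell: 1 for cell in parent}

    def find(cell):
        # Follow the pointer chain to the root, with path halving.
        while parent[cell] != cell:
            parent[cell] = parent[parent[cell]]
            cell = parent[cell]
        return cell

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return ra
        if size[ra] < size[rb]:      # union by size
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]
        return ra

    while True:
        x, y = rng.randrange(width), rng.randrange(height)
        nx, ny = rng.choice([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
        if 0 <= nx < width and 0 <= ny < height:
            root = union((x, y), (nx, ny))
            if size[root] >= target_size:
                break

    return [''.join('o' if find((x, y)) == root else '.'
                    for x in range(width)) for y in range(height)]
```

Since only adjacent cells are ever unioned, every region is connected by construction; the larger regions absorb neighbours faster, giving the dominant-blob bias the answer mentions.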
doc_703
```java
static int count = 0;

protected synchronized String getSessionID() {
    String[] IDs = new String[1];
    DevTools devTools = ((ChromeDriver) Driver.getDriver()).getDevTools();
    devTools.createSession();
    devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));
    devTools.addListener(Network.responseReceived(), response -> {
        if (count == 0) {
            Response responsePayload = response.getResponse();
            Optional<Object> optionalSessionID =
                    Optional.ofNullable(responsePayload.getHeaders().get("session-id"));
            if (optionalSessionID.isPresent() && !optionalSessionID.get().equals("null")) {
                count++;
                IDs[0] = optionalSessionID.get().toString();
                logger.info("Session ID: " + IDs[0]);
                // Here there's no issue: I see the session id printed in the console.
                // THIS IS THE ONLY PLACE WHERE THE SESSION ID IS AVAILABLE;
                // OUTSIDE OF HERE, IT'S NULL.
            }
        }
    });
    return IDs[0]; // However, it is returning null at the end.
}
```
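The comments in the code point at the root cause: the response listener runs asynchronously, after getSessionID() has already returned, so IDs[0] is still null at the return statement. The general fix is to block on a future (or latch) until the callback has actually fired. A language-agnostic sketch of that pattern in Python (all names here are hypothetical, not part of the Selenium API):

```python
import threading
from concurrent.futures import Future

def make_listener(result: Future):
    """Build a callback that hands its value to a Future, instead of
    writing to a local variable the caller reads too early."""
    def on_response(headers):
        session_id = headers.get('session-id')
        if session_id is not None and not result.done():
            result.set_result(session_id)
    return on_response

def get_session_id(register_listener, timeout=10.0):
    """Register the listener, then block until it has fired (or time out)."""
    result = Future()
    register_listener(make_listener(result))
    return result.result(timeout=timeout)

# Simulated event source: fires the listener from another thread.
def register(listener):
    threading.Thread(target=lambda: listener({'session-id': 'abc123'})).start()
```

In Java the same idea would use a CompletableFuture or CountDownLatch that the DevTools listener completes and the calling method awaits.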
doc_704
A person sitting at a table can interact with the person adjacent to him if that person is a friend. We have to find an algorithm to arrange the n people around the table so as to maximize the total interaction.

A: This problem can be reduced to the Travelling Salesman Problem. Consider each person as a node in a graph. The cost of moving between friends is 0, and between non-friends is 1. The task is now to find a Hamiltonian cycle with the least cost. This is an NP-hard problem. A greedy algorithm is to place the person with the fewest friends first, then try to place two of his friends next to him (if he has more than two friends, choose the two friends who themselves have the fewest friends). Keep going in this manner, placing friends next to friends where possible, until everyone is placed. This won't guarantee finding the optimal solution, but it will be very fast to compute.

A: Mark, "equivalent" means that you've given a reduction from problem A to problem B and a reduction from problem B to problem A. You've reduced this problem to (non-metric) TSP, which tells us that TSP is at least as hard as this problem. All people can be seated simultaneously next to friends if and only if the friendship graph has a Hamiltonian cycle, so this problem is in fact NP-hard. Mark's reduction means that we can use the O(n^2 * 2^n)-time dynamic program for TSP to solve this problem. Let x be the oldest person at the table. The DP computes, for each nonempty set S of people not including x and each possible person y in S, the best solution where the people in S - {y} are sitting in the counterclockwise arc from x to y.

A: We could use edges to represent friendship, and the problem of maximising interaction can be replaced by the problem of finding a closed path in the graph touching all persons, with all edges having the same weight, for instance 1. If that's not possible, we have to find the path touching the maximum number of persons and then start over with the remaining persons.
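The greedy heuristic from the first answer can be sketched as follows. This is a simplified, one-directional variant (it extends the seating in one direction rather than placing two friends on both sides at once), with ties broken deterministically; it is an illustration of the idea, not the definitive algorithm.

```python
def greedy_seating(friends):
    """Greedy arrangement sketch: start with the person who has the
    fewest friends, then repeatedly seat an unseated friend of the
    last-seated person (preferring friends who themselves have few
    friends), falling back to anyone when no friend is available.
    `friends` maps each person to the set of their friends."""
    unseated = sorted(friends)  # sorted for deterministic tie-breaking

    def pick(candidates):
        return min(candidates, key=lambda p: (len(friends[p]), p))

    order = [pick(unseated)]
    unseated.remove(order[0])
    while unseated:
        candidates = [p for p in unseated if p in friends[order[-1]]]
        nxt = pick(candidates) if candidates else pick(unseated)
        order.append(nxt)
        unseated.remove(nxt)
    return order

def interactions(order, friends):
    """Count adjacent friendly pairs around the (circular) table."""
    n = len(order)
    return sum(order[(i + 1) % n] in friends[order[i]] for i in range(n))
```

As the answer notes, this runs fast but gives no optimality guarantee; the exact O(n^2 * 2^n) dynamic program from the second answer is needed for that.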
doc_705
```java
return IntegrationFlows
        .from(Sftp.inboundStreamingAdapter(remoteFileTemplate)
                  .remoteDirectory("remoteDirectory"),
              e -> e.poller(Pollers.fixedDelay(POLL, TimeUnit.SECONDS)))
        .transform(new StreamTransformer())
        .handle(s3UploadMessageHandler(outputFolderPath, "headers['file_remoteFile']")) // Upload to S3
        .get();

private S3MessageHandler s3UploadMessageHandler(String folderPath, String spelFileName) {
    S3MessageHandler s3MessageHandler =
            new S3MessageHandler(amazonS3, s3ConfigProperties.getBuckets().getCardManagementData());
    s3MessageHandler.setKeyExpression(new SpelExpressionParser()
            .parseExpression(String.format("'%s/'.concat(%s)", folderPath, spelFileName)));
    s3MessageHandler.setCommand(S3MessageHandler.Command.UPLOAD);
    return s3MessageHandler;
}
```

And it works as intended: the file is uploaded to my S3 bucket. However, I would like to avoid the SpEL syntax and instead inject headers from the message into the s3UploadMessageHandler method; that way I could use a simple ValueExpression to set the key expression. To do this, I changed

```java
.handle(s3UploadMessageHandler(outputFolderPath, "headers['file_remoteFile']")) // Upload to S3
```

to

```java
.handle(m -> s3UploadMessageHandler(outputFolderPath, (String) m.getHeaders().get("file_remoteFile"))) // Upload to S3
```

But now this handler no longer seems to be triggered. There are no errors in the logs, and I know from the logs that the SFTP polling is still working. Looking for the reason, I saw that when entering the handle method in IntegrationFlowDefinition.java, the messageHandler class type differs: it's an S3MessageHandler when called without a lambda, and MyCallingClass$lambda when called with a lambda expression. What did I miss to make my scenario work?

A: There are two ways to handle a message.
One is via a MessageHandler implementation: this is the most efficient approach, and it is what the framework does for channel adapter implementations like that S3MessageHandler. The other is a POJO method invocation: this is the most user-friendly approach, when you don't need to worry about any framework interfaces. So, when you use it like this:

```java
.handle(s3UploadMessageHandler(...))
```

you refer to a MessageHandler, and the framework knows that a bean for that MessageHandler has to be registered, since your s3UploadMessageHandler() is not a @Bean. When you use it as a lambda, the framework treats it as a POJO method invocation, and a bean is registered for the MethodInvokingMessageHandler, but not for your S3MessageHandler.

Anyway, even if you change your s3UploadMessageHandler() to be a @Bean method, it is not going to work, because you don't let the framework call S3MessageHandler.handleMessage(). What you do here is just call that private method at runtime to create an S3MessageHandler instance for every request message: the MethodInvokingMessageHandler calls your lambda in its handleMessage() and that's all; nothing is going to happen with S3.

The ValueExpression cannot help you here because you need to evaluate a destination file against every single request message. Therefore you need a runtime expression. There is indeed nothing wrong with new SpelExpressionParser().parseExpression(): we don't have a choice, and there has to be only a single, stateless S3MessageHandler, not one recreated at runtime on every request as you try to achieve with that suspicious lambda and ValueExpression.
doc_706
```python
from pathlib import Path
import re

import pandas as pd
import panel as pn

pn.extension('tabulator', 'vega')

RESULTSDIR = Path('/home')

def dir2df(rootdir: str, pattern: str = None):
    pat = re.compile(pattern, re.IGNORECASE) if pattern else re.compile('', re.IGNORECASE)
    dnames = [f.name for f in Path(rootdir).glob('*') if f.is_dir() and pat.search(f.name)]
    mtimes = [pd.Timestamp((Path(rootdir) / f).stat().st_mtime, unit='s') for f in dnames]
    return pd.DataFrame({'dirname': dnames, 'timestamp': mtimes})

pn.widgets.Tabulator(dir2df(RESULTSDIR))
```

Example output:

However, when I try to be clever and attempt to filter and lay this out with other selections, the rendering does not work:

```python
def cb_val2opt(target, event):
    if event.obj.name == 'Select Product:' and event.new:
        target.options = [f for f in event.new.glob('*') if f.is_dir()]
    elif event.obj.name == 'Select Project:' or event.obj.name == 'Filter:':
        rdirs = dir2df(sel_projdir.value, sel_filter.value)
        print(rdirs)
        target.value = rdirs

pdirs = [f for f in RESULTSDIR.glob('*') if f.is_dir()]
sel_proddir = pn.widgets.Select(name='Select Product:', options=pdirs, width=800)
sel_projdir = pn.widgets.Select(name='Select Project:', options=[], width=800)
sel_filter = pn.widgets.TextInput(name='Filter:', width=800)

dirdf = pd.DataFrame({'dirname': [], 'timestamp': []})
sel_resdirs = pn.widgets.Tabulator(dirdf, name='Select Results:',
                                   selectable='checkbox', height=300, width=800)

sel_proddir.link(sel_projdir, callbacks={'value': cb_val2opt})
sel_projdir.link(sel_resdirs, callbacks={'value': cb_val2opt})
sel_filter.link(sel_resdirs, callbacks={'value': cb_val2opt})

pn.Column(sel_proddir, sel_projdir, sel_filter, sel_resdirs)
```

Example of problem rendering:

You can see above that the debug print shows the correctly formed DataFrame. Am I doing anything wrong?
doc_707
"Keyword,slug,description".split(','); Which results in an array like ["Keyword", "slug", "description"] This worked fine for awhile, until someone needed a comma in their description. I know I can replace split with match and use a regular expression, but the only regular expression I can come up with involves a negative lookbehind, like this: "Keyword,slug,description".match(/(?<!\\),/); Unfortunately JavaScript doesn't support lookbehinds. Is there any other way to do it? A: So use match instead of split "Keyword,slug,description".match(/([^,]+),([^,]+),(.*)/); will result in ["Keyword,slug,description", "Keyword", "slug", "description"] There are other ways to write the regular expression, just picked something quick. A: Here's a not-so-nice way, but the general principle can come in handy: "keyword,slug,foo\\,bar" .replace( '\\,', '{COMMA}' ) .split(',') .map(function(v){return v.replace('{COMMA}',',')}) A: If you manually give the string as the input, then you can escape the comma via HTML entity. var message = 'This is a message with some comma(,) and (,) '; message = message.replaceAll(",", "&#x2c;"); var params = 'title=Confirm,message='+message+'position=middle'; showAlert(params); Now regular .split method will work function showAlert(params) { params = params.split(","); // ... }
doc_708
I recently pulled a repository, made some modifications, and deleted some files (not shift-delete). When I undid the delete I saw the attached cross mark on the file. What does that mean? If something is wrong, how can I revert back to the original situation?

A: Looks like you have deleted the file. In order to revert it (since I don't know which client you are using, I assume TortoiseGit), open Git Bash and follow these steps:

```shell
# Open Git Bash in the desired folder
git status
# Now you should see your desired file in the list, marked as deleted
git checkout <file name>
```

Here is a screenshot for you.
doc_709
Is there a way to return, break, cycle, or stop in gnuplot?

A: The exit statement is straightforward and can be used anywhere in the code.

```gnuplot
nmin = 1
nmax = 10
nmiddle = (nmin + nmax)/2
isexit = 0

print "---------------------------------"
print "--------- REGULAR OUTPUTS -------"
do for [i=nmin:nmax] {
    print sprintf("Running No %4d", i)
}

# if (isexit == 1) {
#     print "here"
#     exit
# }

print ""
print "---------------------------------"
print "--------- EXIT OUTPUTS ----------"
do for [i=nmin:nmax] {
    print sprintf("Running No %4d", i)
    if (i == nmiddle) {
        exit
    }
}
```

As for break and continue, they are new features in gnuplot 5.2 and above, as you can see on page 21 (see memo); they are explained on pages 71 and 73 (see memo). I have gnuplot 5.0 right now, so I will just have to upgrade to version 5.2 and that's it. Thanks Ethan and EWCZ.
doc_710
I made an app using py2app that basically checks the save time of a file continuously, monitoring any changes made to the file in a while loop with a small sleep in each iteration. The process should not stop at all, but it is exiting with an error 32 (broken pipe) after 15-20 minutes. How can I resolve this?

```python
try:
    while True:
        app_log.debug("while true")
        time.sleep(5)
        configProp.read(propfile)
        fileNameList = configProp.sections()
        if len(fileNameList) != 0:
            app_log.debug("fileNameList is not zero")
            for i in range(0, len(fileNameList)):
                tempnameinfile = configProp.options(fileNameList[i])
                openTimeLive = configProp.get(fileNameList[i], "openTimeLive")
                openTimeLive = float(openTimeLive)
                openTime = float(openTime)
                configureTime = 3600 * float(configureTime)
                monitorTime = float(openTimeLive + configureTime)
                if monitorTime > time.time():
                    lastSavedTime = os.path.getmtime(str(tempname))
                    app_log.debug(lastSavedTime)
                    aa = abs(float(openTime) - float(lastSavedTime))
                    if abs(aa) > 1:
                        app_log.debug("file modified")
                        t = ThreadClass(fileNameList[i])
                        # t.setDaemon(True)
                        t.start()
                        time.sleep(5)
                        configProp.set(fileNameList[i], str(tempnameinfile[0]), lastSavedTime)
                        with open(propfile, 'wb') as propFile:
                            configProp.write(propFile)
                        app_log.debug("completed")
except Exception as e:
    app_log.error(e)
    print e
```
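As an aside on the polling pattern itself: one common source of errno 32 in GUI-bundled Python apps is writing to stdout (a bare `print`) after its pipe has gone away, so it can help to route all diagnostics through `logging` and to keep per-iteration failures non-fatal. A minimal, hypothetical sketch of such a hardened mtime-polling loop (not the asker's actual code; the `max_polls` parameter exists only to make it testable):

```python
import logging
import os
import time

logging.basicConfig(filename='monitor.log', level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s')
log = logging.getLogger(__name__)

def watch(path, interval=5.0, max_polls=None):
    """Poll a file's modification time, logging whenever it changes.
    Per-iteration errors are logged rather than allowed to kill the
    loop. Returns the last observed mtime (None if never observed)."""
    last_mtime = None
    polls = 0
    while max_polls is None or polls < max_polls:
        try:
            mtime = os.path.getmtime(path)
            if last_mtime is not None and mtime != last_mtime:
                log.debug('file modified: %s', path)
            last_mtime = mtime
        except OSError as exc:  # file briefly missing, permissions, etc.
            log.error('poll failed: %s', exc)
        polls += 1
        time.sleep(interval)
    return last_mtime
```

In the original code, catching only specific exceptions inside the loop (rather than one try around the whole loop) and dropping the final `print e` would keep one bad iteration, or a closed stdout, from terminating the monitor.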
doc_711
However, I don't want them to save such changes when leaving the form; each time the form is opened the default format should be loaded. I've taken care of all but one closing method. To prevent them closing via the default close button I've set Border Style = None. Instead I have a Close Form button that uses

```vba
DoCmd.Close acForm, "Main_form", acSaveNo
```

But if the user clicks the close button of the Access application itself, it pops the "Do you want to save changes to the design of the form" dialog like always. I looked into disabling the application's close button, but messing with the Windows API is beyond my skill (and there should be a way to accomplish this without going to such extreme measures).

A: I found a way to do this. A combination of database options, form format options, and VBA can do it.

* Go to the 'Current Database' options screen in the main Access options and uncheck 'Enable design changes in Datasheet view'. This prevents all datasheet-view design changes in the database, so you will have to go into design mode for any table changes. Users can still reorder and resize columns within a form, but Access no longer considers that a valid design change and will not prompt to save it, no matter how you close the form.
* Set the form format property 'Save Splitter Bar Position' = No. The form will now discard any change to the bar location when the form is closed. Access got really weird on me about this setting, however. Once I had ever set the option to No, I could no longer use design view or layout view to set a new default bar position; it always reverted to the location where it was when I first tried this setting. Even resetting the option to Yes, saving the design change, and fully exiting the database did not fix this.
* So I added an On Load event to reset the split form size when the form opens: `Me.SplitFormSize = 9000`. The numbers involved are surprisingly high; in the form properties list this is set in inches. Mine was 6.5", which apparently translates to 9000.

With these three changes (along with the steps I detailed in the question), Access no longer prompts to save design changes when the form is closed, even if the user closes the Access application entirely. The form also snaps the split-form bar back to where it should be on load.

A: Since the API is beyond my skill too, here is a left-field workaround. Duplicate Main_Form and rename it "Main_Form_Template". Create an AutoExec module or edit an existing one and add:

```vba
DoCmd.DeleteObject acForm, "Main_Form"
DoCmd.CopyObject , "Main_Form", acForm, "Main_Form_Template"
```

That should reinstate the standard template for the user each time they open the database, even if they were to save the form when closing Access.

A: Turn off the close button on the form. On the form's property sheet, Format tab, about two-thirds of the way down, set Close Button = No. This forces the user to close it via the button you created.
doc_712
```php
$propiedadesObtenidas = Property::search($request->get('ubicacion'))
    ->where('tipoDePropiedad_id', '=', $tipoPropiedad_id[0])
    ->get();
```

I would like to add one more condition, similar to:

```php
$propiedadesObtenidas = Property::search($request->get('ubicacion'))
    ->where('tipoDePropiedad_id', '=', $tipoPropiedad_id[0])
    ->where('categoria', '=', $categoria_id)
    ->get();
```

Is it possible?

A: Yes, it is possible.

```php
// This, as you mentioned, is going to work.
$propiedadesObtenidas = Property::search($request->get('ubicacion'))
    ->where('tipoDePropiedad_id', '=', $tipoPropiedad_id[0])
    ->where('categoria', '=', $categoria_id)
    ->get();

// This can also work.
$propiedadesObtenidas = Property::search($request->get('ubicacion'))
    ->where([
        ['tipoDePropiedad_id', '=', $tipoPropiedad_id[0]],
        ['categoria', '=', $categoria_id]
    ])
    ->get();
```

There are other ways, depending on what you want to do, but the above are the easiest approaches that work. Good luck.
doc_713
import java.io.*;
import java.util.Scanner;

public class PeriodicTable {
    public static void main(String[] args) throws IOException {
        final int MAX_ELEMENTS = 128;
        int[] atomicNumber = new int[MAX_ELEMENTS];
        File file = new File("periodictable.dat");
        Scanner inputFile = new Scanner(file);
        int currentElements = 0;
        while (inputFile.hasNext()) {
            atomicNumber[currentElements] = inputFile.nextInt();
            String symbol = inputFile.next();
            float mass = inputFile.nextFloat();
            String name = inputFile.next();
            currentElements++;
        }
        inputFile.close();
        System.out.println("Periodic Table\n");
        System.out.println(currentElements + " elements");
    }
}

A: When I run the sample class it seems to work correctly for me, so I assume there is an issue with the periodictable.dat file. With a test periodictable.dat file filled with the following:
110 T 200.00 Test
300 A 100.18
I get the following output:
Periodic Table

2 elements
If there was a formatting error in the file you would receive a mismatch exception. So I would
* *Check your file is not empty, as this would cause the while condition to be false.
*I would also add what your current file looks like in your question :)
doc_714
So let me explain the situation. On my website you can create and delete posts.
* *On "/create", the user enters contents for the post.
*Then the user clicks the "submit" button, which is routed to "/create_process" (where the data is actually saved in the database)
*Occasionally there is some delay in loading "/create_process". So the user keeps refreshing while loading. -> Here is the problem. Every time the user refreshes at this stage the same inputs are sent again and again. The result is multiple posts with exactly the same contents. I am sure that there must be a way to block such trivial inputs.

A: You can have a throttle function on whatever the user is clicking. Example:
const throttle = (func, limit) => {
  let inThrottle
  return function() {
    const args = arguments
    const context = this
    if (!inThrottle) {
      func.apply(context, args)
      inThrottle = true
      setTimeout(() => inThrottle = false, limit)
    }
  }
}
from medium article
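For comparison, the same leading-edge throttle idea can be sketched in Python (the function and variable names here are illustrative, not from the original answer):

```python
import time

def throttle(func, limit):
    """Return a wrapper that invokes func at most once per `limit` seconds.

    Leading-edge behavior: the first call runs immediately, and calls
    arriving within the window are silently dropped.
    """
    last_run = None

    def wrapper(*args, **kwargs):
        nonlocal last_run
        now = time.monotonic()
        if last_run is None or now - last_run >= limit:
            last_run = now
            return func(*args, **kwargs)
        return None  # call suppressed

    return wrapper

calls = []
submit = throttle(lambda: calls.append("saved"), limit=1.0)
for _ in range(5):   # simulate five rapid clicks
    submit()
print(len(calls))    # -> 1: only the first click got through
```

Note that client-side throttling only reduces duplicate submissions; making the server-side create endpoint idempotent (or using a Post/Redirect/Get flow) is the robust fix for the refresh problem.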
doc_715
When I scroll pass the cell which holds the button, it creates a second instance of the button slightly below the button. Here's a video to illustrate my problem: http://pixori.al/DJ1k Here's the code for the UITableViewCell and also how I populate the cells. Not sure why it's behaving like this. #pragma mark - UITableViewDataSource // 3 sections, (1 = mistarOverview) (2 = hourlyForecast) (3 = dailyForecast) - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 3; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if (section == 0) { return MAX(6,6) + 1; //TODO add getNumberOfClasses for people with 7 or 8 classes } else if (section == 1) { return MIN([[MAManager sharedManager].hourlyForecast count], 6) + 1; } else { return MIN([[MAManager sharedManager].dailyForecast count], 6) + 1; } } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"CellIdentifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; // Redefine layout variables in method from `viewDidLoad` CGFloat inset = 20; // For padding if (! 
cell) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue1 reuseIdentifier:CellIdentifier]; } // Sets up attributes of each cell cell.selectionStyle = UITableViewCellSelectionStyleNone; //TODO none cell.backgroundColor = [UIColor colorWithWhite:0 alpha:0.2]; cell.textLabel.textColor = [UIColor whiteColor]; cell.detailTextLabel.textColor = [UIColor whiteColor]; QBFlatButton* loginButton = nil; if (indexPath.section == 0) { if (indexPath.row == 0) { [self configureHeaderCell:cell title:@"Grades"]; if ([cell.textLabel.text isEqual: @"Grades"] && (!loginButton) && (indexPath.row == 0) && (indexPath.section == 0)) { UIView *cellView = cell.contentView; CGRect loginButtonFrame = CGRectMake((cellView.frame.size.width - (80 + inset)), 18, 80, (cellView.frame.size.height)); loginButton = [[QBFlatButton alloc] initWithFrame:loginButtonFrame]; [loginButton addTarget:self action:@selector(loginButtonWasPressed)forControlEvents:UIControlEventTouchUpInside]; loginButton.faceColor = [UIColor grayColor]; loginButton.sideColor = [UIColor clearColor]; loginButton.radius = 6.0; loginButton.margin = 4.0; loginButton.depth = 3.0; loginButton.alpha = 0.3; loginButton.titleLabel.font = [UIFont fontWithName:@"HelveticaNeue-Light" size:20]; [loginButton setTitleColor:[UIColor blackColor] forState:UIControlStateNormal]; [loginButton setTitle:@"Login" forState:UIControlStateNormal]; [cellView addSubview:loginButton]; } } else { cell.selectionStyle = UITableViewCellSelectionStyleBlue; cell.textLabel.text = [NSString stringWithFormat:@"Period %ld A+", (long)indexPath.row]; cell.detailTextLabel.text = @"Class name"; //TODO get grades and config using method (TB Created) } } else if (indexPath.section == 1) { if (indexPath.row == 0) { [self configureHeaderCell:cell title:@"Hourly Forecast"]; } else { // Get hourly weather and configure using method MACondition *weather = [MAManager sharedManager].hourlyForecast[indexPath.row - 1]; [self configureHourlyCell:cell 
weather:weather]; } } else if (indexPath.section == 2) { if (indexPath.row == 0) { [self configureHeaderCell:cell title:@"Daily Forecast"]; } else if (indexPath.section == 2) { // Get daily weather and configure using method MACondition *weather = [MAManager sharedManager].dailyForecast[indexPath.row - 1]; [self configureDailyCell:cell weather:weather]; } } return cell; } A: Implement the following UITableView Delegate Method -(void)tableView:(UITableView *)tableView didEndDisplayingCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath { //In here, check the index path. When you have the cell that contains the button, pop it out from there by using [button removeFromSuperView]; } Your problem occurs when you dequeue the cell. Since the cell is being reused, it already has the button and you're simply re-adding it again. This will solve your issue. However, I'd recommend you create a subclass for the UITableViewCell, and in it's prepareForReuse method, pop the button out. Up to you. Both will work. A: Table view cells are not just deallocated then they move out of visible area. They are stored for reusing and then returned in tableView dequeueReusableCellWithIdentifier:CellIdentifier]; So you need to clean your cells after using or before reusing. There are several ways: 1.Add tag to your button when you create it loginButton.tag = SOME_TAG; just after UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; search for view with this tag loginButton = [cell viewWithTag:SOME_TAG]; if loginButton != nil you can reuse it or remove from cell and then create a new one. 2.Implement UITableViewDelegate method -(void)tableView:(UITableView *)tableView didEndDisplayingCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath and erase login button inside it. 3.Create custom UITableViewCellclass and implement prepareForReuse method. A: You're adding the button every time you return a cell in this method. 
If you scroll the cell off the screen and back on, this method is called again for the same index path, and you will add the button again. You declare the variable, do nothing with it, then check if it is nil. It will always be nil, so you always add the button. A quick and dirty solution is to give the button a tag, then check for its existence using viewWithTag:. A better solution is to make a custom cell subclass, and set one-time properties like this in the init method. Your cell contents seem very different for each section as well, so use different reuse identifiers for each section, and possibly a different cell subclass. Clearing out subviews is expensive and could hurt your scrolling performance.
A: When you run your project for the first time, cellForRowAtIndexPath is called. Then whenever you scroll the tableView it calls cellForRowAtIndexPath again and reloads data automatically. So you have to make the CellIdentifier unique for each cell. Remove the static keyword from
static NSString *CellIdentifier = @"CellIdentifier";
so that you have
NSString *CellIdentifier = @"CellIdentifier";
and then write it like below:
NSString *CellIdentifier = [NSString stringWithFormat:@"%@", indexPath];
Now enjoy.
doc_716
A: I need to add a combobox on the image set by me
You can set the layout manager of any Swing component. So if you are displaying your image in a JLabel you can set the layout of the label. For example:
JLabel background = new JLabel( new ImageIcon(...) );
background.setLayout( new FlowLayout() );
JComboBox comboBox = new JComboBox();
comboBox.addItem(...);
background.add( comboBox );
If you are painting the image on the JPanel, the default layout is already a FlowLayout; you just need to override the getPreferredSize() method of the panel to be the size of the image.
doc_717
List<Double> testList = new ArrayList<>();
testList.add(0.5);
testList.add(0.2);
testList.add(0.9);
testList.add(0.1);
testList.add(0.1);
testList.add(0.1);
testList.add(0.54);
testList.add(0.71);
testList.add(0.71);
testList.add(0.71);
testList.add(0.92);
testList.add(0.12);
testList.add(0.65);
testList.add(0.34);
testList.add(0.62);
testList.add(0.5);
testList.add(0.2);
testList.add(0.9);
testList.add(0.1);
testList.add(0.1);
testList.add(0.1);
testList.add(0.54);
I have to perform sorting from index 7. How can I do that?

A:
* *List.subList(startIndex, endIndex) creates a "backed" collection which sees your original list, aka "proxy" I think.
*Collections.sort() will just sort the sub-list; any swaps will actually occur in the original list.
The JavaDoc explains this more clearly than I can manage: This method eliminates the need for explicit range operations (of the sort that commonly exist for arrays). Any operation that expects a list can be used as a range operation by passing a subList view instead of a whole list. For example, the following idiom removes a range of elements from a list
list.subList(from, to).clear();
Example using your original problem
List<Double> list = new ArrayList<Double>();
list.add(0.5);
list.add(0.2);
list.add(0.9);
list.add(0.1);
list.add(0.1);
list.add(0.1);
list.add(0.54);
list.add(0.71);
list.add(0.71);
list.add(0.71);
list.add(0.92);
list.add(0.12);
list.add(0.65);
list.add(0.34);
list.add(0.62);
list.add(0.5);
list.add(0.2);
list.add(0.9);
list.add(0.1);
list.add(0.1);
list.add(0.1);
list.add(0.54);
Collections.sort(list.subList(7, list.size()));
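A cross-language aside: the same "sort only the tail" operation can be written in Python with slice assignment. Unlike Java's subList view, a Python slice is a copy, so the sorted copy has to be written back:

```python
test_list = [0.5, 0.2, 0.9, 0.1, 0.1, 0.1, 0.54,
             0.71, 0.71, 0.71, 0.92, 0.12, 0.65, 0.34, 0.62,
             0.5, 0.2, 0.9, 0.1, 0.1, 0.1, 0.54]

# test_list[7:] copies the tail; assigning the sorted copy back through the
# same slice mutates the original list in place.
test_list[7:] = sorted(test_list[7:])

print(test_list[:7])   # first seven elements untouched
print(test_list[7:])   # the rest now ascending
```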
doc_718
with gzip.open("myFile.parquet.gzip", "rb") as f:
    data = f.read()
This does not seem to work, as I get an error that my file is not a gz file. Thanks!

A: You can use the read_parquet function from the pandas module:
* *Install pandas and pyarrow: pip install pandas pyarrow
*use read_parquet, which returns a DataFrame:
import pandas as pd
data = pd.read_parquet("myFile.parquet.gzip")
print(data.count())  # example of an operation on the returned DataFrame
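A minimal sketch of why gzip.open fails here, assuming the file really is Parquet despite its ".gzip" suffix: Parquet applies compression inside the file, per column chunk, so the file itself is not a gzip stream. Checking the magic bytes makes this visible (stdlib only; the stand-in file below is fabricated for illustration):

```python
import os
import tempfile

# A Parquet file starts and ends with the 4-byte magic b"PAR1";
# a gzip stream starts with b"\x1f\x8b". This is why gzip.open()
# rejects the file even though its name ends in ".gzip".
def looks_like_parquet(path):
    with open(path, "rb") as f:
        return f.read(4) == b"PAR1"

def looks_like_gzip(path):
    with open(path, "rb") as f:
        return f.read(2) == b"\x1f\x8b"

# Stand-in file carrying just the Parquet magic (a real one would come
# from Spark, pandas.to_parquet, etc.):
fd, path = tempfile.mkstemp(suffix=".parquet.gzip")
with os.fdopen(fd, "wb") as f:
    f.write(b"PAR1" + b"\x00" * 16 + b"PAR1")

print(looks_like_parquet(path))  # -> True
print(looks_like_gzip(path))     # -> False
```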
doc_719
* *I have to stream/publish, whatever android-camera is picking, to some server over some protocols(RTMP or HLS, etc..) *I have to setup server that will pull this input source and packages & stores it in a form that could be streamed/consumed on the mobile/web browser(Basically, an URL) and I believe AWS's MediaLive, MediaPackage, etc.. resources should do that. *I could use this URL are MediaSource for players on Android(like ExoPlayer) My problem is I couldn't find good documentation on 1st part. I found this, https://github.com/bytedeco/javacv, which doesn't appear to be production-level work. While trying out 2nd part, while creating MediaLive channel on AWS, I was asked to point the channel to 2 destinations(I don't know what it means) which made me doubt my understanding of this process. I'm looking for some skeleton-procedure with official documentation on how to achieve this. EDIT 1: For the Input-Production part, I'm experimenting with this answer. https://stackoverflow.com/a/29061628/3881561 EDIT 2: I've used https://github.com/ant-media/LiveVideoBroadcaster to send video source to RTMP server. I've created RTMP push input source in MediaLive and a channel with output - Archive(stores .ts files in S3). Now that the flow is working, How can I modify this architecture to allow multiple users to create live streaming?
doc_720
I want the load more button to show only when there's data to show and disappear when there's no data to show.
index page: jquery
<script type="text/javascript">
$(document).ready(function(){
    $("#loadmorebutton").click(function (){
        $('#loadmorebutton').html('<img src="ajax-loader.gif" />');
        $.ajax({
            url: "loadmore.php?lastid=" + $(".postitem:last").attr("id"),
            success: function(html){
                if(html){
                    $("#postswrapper").append(html);
                    $('#loadmorebutton').html('Load More');
                }else{
                    $('#loadmorebutton').replaceWith('<center>No more posts to show.</center>');
                }
            }
        });
    });
});
</script>
Html:
<div id="wrapper">
    <div id="postswrapper">
    <?php
    $getlist = mysql_query("SELECT * FROM table_name LIMIT 25");
    while ($gl = mysql_fetch_array($getlist)) { ?>
        <div class="postitem" id="<?php echo $gl['id']; ?>"><?php echo $gl['title']; ?></div>
    <?php } ?>
    </div>
    <button id="loadmorebutton">Load More</button>
</div>
The loadmore.php page has:
<?php
$getlist = mysql_query("SELECT * FROM table_name WHERE id < '".addslashes($_GET['lastid'])."' LIMIT 10");
while ($gl = mysql_fetch_array($getlist)) { ?>
    <div><?php echo $gl['title']; ?></div>
<?php } ?>
Basically what this script does is: the index page will load the first 25 items from the database, and when you click on load more, it triggers loadmore.php, which will load 10 more items starting from the last id already loaded. What I want to do is to remove the Load More button from the screen IF there are fewer than 25 items in the database, and show it if there are more than 25 items in the database.

A: Put this in your jQuery ready function (the button should show only when the initial query returned a full page of 25 items):
if($('#postswrapper .postitem').length == 25) {
    $('#loadmorebutton').show();
} else {
    $('#loadmorebutton').hide();
}

A: <?php
$getlist = mysql_query("SELECT * FROM table_name LIMIT 25");
while ($gl = mysql_fetch_array($getlist)) { ?>
    <div class="postitem" id="<?php echo $gl['id']; ?>"><?php echo $gl['title']; ?></div>
<?php }
if(mysql_num_rows($getlist) < 25) { ?>
    <script type="text/javascript">
    $(function(){
        $('#loadmorebutton').hide();
    });
    </script>
<?php } ?>
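The pattern used by loadmore.php (fetch rows with an id below the last one shown, limited to a page size) is keyset pagination. A self-contained sketch with Python's built-in sqlite3, using an illustrative posts table rather than the asker's real schema:

```python
import sqlite3

# Keyset ("seek") pagination as in the load-more script: the first page is
# the newest 25 rows; each later page asks for rows with id below the
# smallest id already shown.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (id, title) VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(1, 41)])  # 40 sample rows

first_page = conn.execute(
    "SELECT id FROM posts ORDER BY id DESC LIMIT 25").fetchall()
last_id = first_page[-1][0]

next_page = conn.execute(
    "SELECT id FROM posts WHERE id < ? ORDER BY id DESC LIMIT 10",
    (last_id,)).fetchall()

print(len(first_page), len(next_page))  # -> 25 10
print(last_id, next_page[0][0])         # -> 16 15 (next page starts just below)
```

A "has more" check then falls out naturally: if a page comes back shorter than the page size, there is nothing left and the button can be hidden.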
doc_721
Microsoft.EntityFrameworkCore Microsoft.EntityFrameworkCore.Relational Microsoft.EntityFrameworkCore.Tools Microsoft.EntityFrameworkCore.Design Npgsql.EntityFrameworkCore.PostgreSQL EFCore.NamingConventions Now my question is should I install all the above NuGet Packages or should I only install Npgsql.EntityFrameworkCore.PostgreSQL ? Can anyone give me some details? A: Initially all you need is: Npgsql.EntityFrameworkCore.PostgreSQL as it depends on several of the other packages.
doc_722
A: You could do it with ctypes >>> from ctypes import * >>> c = cdll.LoadLibrary("libc.so.6") >>> c.sigqueue <_FuncPtr object at 0xb7dbd77c> >>> c.sigqueue(100, 10, 0) -1 >>> You'll have to look up how to make a union in ctypes which I've never done before but I think is possible. A: One alternative, if no one has done it yet, would be to wrap the C library yourself - should be pretty quick and painless. Look here for more details.
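The union the first answer mentions can be declared like this. The field names follow the POSIX `union sigval` definition; since sigqueue itself is Linux-specific, only the union is exercised here:

```python
import ctypes

# POSIX `union sigval` overlays an int member and a pointer member
# in the same storage.
class Sigval(ctypes.Union):
    _fields_ = [("sival_int", ctypes.c_int),
                ("sival_ptr", ctypes.c_void_p)]

val = Sigval()
val.sival_int = 42
print(val.sival_int)  # -> 42

# The union is as large as its largest member (the pointer).
print(ctypes.sizeof(Sigval) == ctypes.sizeof(ctypes.c_void_p))  # -> True

# On Linux one would then call something like:
#   libc.sigqueue(pid, signum, val)
# since sigqueue takes the union by value.
```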
doc_723
This is my model view. class Charter(models.Model): CHOICES = [('M', 'Male'), ('F', 'Female'), ('O', 'Others')] Gender = forms.ChoiceField(label='Gender', widget= forms.RadioSelect(choices=CHOICES)) created_at = models.DateField() First_name = models.CharField(max_length=200, unique=True) Last_name = models.CharField(max_length=200, unique=True) address = models.CharField(max_length=200, unique=True) Cell_no = models.CharField(max_length=200, unique=True) created_at = models.DateField() This is my forms.py from django.forms import ModelForm from .models import * class CharterForm(ModelForm): class Meta: model= Charter fields = '__all__' widgets = { 'Gender': forms.RadioSelect() } A: Check out this similar link: Dropdown in Django Model or Refer to Django official documentation for more info CHOICES = ( ('M', 'Male'), ('F', 'Female'), ('O', 'Others'), )
doc_724
@RequestMapping("/upload")
public void getDownload(ModelView mv, HttpServletResponse response) {
    String fileName = "";
    String fullZipFileName = zipDir + zipFileName;
    FileManager fm = FileManager.getInstance();
    zipOS = fm.getZipOutputStream(fullZipFileName);
    try {
        for (int i = 0; i < numOfFiles; i++) {
            fileName = (String) selectedFiles.get(i);
            File file = new File(directory, fileName);
            bIS = fm.getBufferedInputStream(file.getAbsolutePath());
            ZipEntry entry = new ZipEntry(file.getName());
            zipOS.putNextEntry(entry);
            while ((count = bIS.read(fileToAddToZip, 0, buffer)) != -1) {
                zipOS.write(fileToAddToZip, 0, count);
            }
            bIS.close();
        }
        zipOS.close();
        FileManager fm = FileManager.getInstance();
        bIS = fm.getBufferedInputStream(zipFileName);
        byte[] byteArray = new byte[bIS.available()];
        bIS.read(byteArray);
        bIS.close();
        response.setContentType("application/octate-stream");
        response.setHeader("Content-Disposition", "attachment;filename" + zippedFilename);
        response.getOutputStream().write(byteArray);
    }
On the JSP side I have added
$.ajax({
    type: 'POST',
    url: "${pageContext.request.contextPath}/download",
    contentType: "application/json",
    data: JSON.stringify(payload),
    success: function (res, textStatus, response) {
        var fileName = response.getResponseHeader("content-disposition").replace("attachment;filename=", "");
        var fileName = response.getResponseHeader("Content-Disposition").replace("attachment;filename=", "");
        var bytes = new Uint8Array(res.length);
        for (var i = 0; i < res.length; i++) {
            var as = res.charCodeAt(i);
            bytes[i] = as;
        }
        var blob = new Blob([bytes], { type: "application/octetstream" });
        //Check the Browser type and download the File.
        var isIE = false || !!document.documentMode;
        if (isIE) {
            window.navigator.msSaveBlob(blob, fileName);
        } else {
            var url = window.URL || window.webkitURL;
            link = url.createObjectURL(blob);
            var a = $("<a />");
            a.attr("download", fileName);
            a.attr("href", link);
            $("body").append(a);
            a[0].click();
            $("body").remove(a);
        }
    },
    error: function (e) {
    }
});
I am able to download the file, but when I try to extract the zip file, I get "Invalid zip file". Can someone tell me how I can fix this issue?
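One useful debugging step for "invalid zip" reports is to verify the archive before it leaves the server. The question's code is Java/Spring; this Python sketch only illustrates the idea of building the zip in memory and checking it with testzip():

```python
import io
import zipfile

# Build the archive fully in memory, then verify it before sending.
# If it verifies here but the downloaded copy is invalid, the corruption
# is happening in transport, not in the archive-building code.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("a.txt", "first file")
    zf.writestr("b.txt", "second file")

data = buf.getvalue()
with zipfile.ZipFile(io.BytesIO(data)) as zf:
    print(zf.testzip())   # -> None (no corrupt entries)
    print(zf.namelist())  # -> ['a.txt', 'b.txt']
```

If the archive verifies server-side, a likely culprit in the posted JavaScript is rebuilding bytes from a text response with charCodeAt: the browser decodes the response as text, which mangles bytes above 0x7F. Having the browser treat the response as binary (e.g. via a Blob/arraybuffer response type) is worth trying.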
doc_725
I prepared 2 simple example Bash-Scripts: test_zfsadd: #!/bin/bash #ARGS=1 err="$(zfs create $1 2>&1 > /dev/null)" if [ $? -ne 0 ] then echo $err exit 1 fi echo "OK" exit 0 test_zfspart: #!/bin/bash #ARGS=1 msg="$(zfs get mounted $1 -H | awk '{print $3}')" echo $msg exit 0 When I call the according binaries from PHP with e. g. <?php $partition = 'raid1/testpart'; $ret = shell_exec("/path/test_zfsadd_bin $partition"); echo "<p>Return value: $ret</p>\n"; $ret = shell_exec("/path/test_zfspart_bin $partition"); echo "<p>Is mounted: $ret</p>\n"; the output is: Return value: OK Is mounted: yes This looks good, but when I call 'test_zfspart_bin raid1/testpart' directly from console, I get the correct result which is no (means that the partition is NOT mounted, checked in /proc/mounts). So I get 2 different answers from the same script depending somehow on the context. I first thought it has something to do with the SUID-Bit, but calling the script in console with an unprivileged user works fine. If I try (as root) zfs mount raid1/testpart in console I get filesystem 'raid1/testpart' is already mounted cannot mount 'raid1/testpart': mountpoint or dataset is busy which is weird. I also can't destroy the 'partition' from console, this works only from PHP. On the other hand, if I create a partition as root directly from bash and try to delete it via PHP, it doesn't work either. Looks like the partitions are somehow separated from each other by context. Everything gets synchronized again if I do systemctl restart httpd I think apache or PHP is keeping the zfs system busy to some extend, but I have absolutely no clue why and how. Any explanation or some workaround is much appreciated. A: I figured it out myself in the meantime.. The problem was not the apache process itself, it was how it is started by systemd. There is an option called 'PrivateTmp', which is set to 'true' in the httpd service file by default (at least in CentOS7). 
The man page says PrivateTmp= Takes a boolean argument. If true sets up a new file system namespace for the executed processes and mounts a private /tmp directory inside it, that is not shared by processes outside of the namespace. This is useful to secure access to temporary files of the process, but makes sharing between processes via /tmp impossible. Defaults to false. This explains it all I think. The newly created zfs partition is mounted in this 'virtual' file system and is therefore invisible to the rest of the system, what is not desired in this case. The apache process is not able to mount or unmount file systems outside its namespace. After disabling the option everything worked as expected.
doc_726
int red=9; int green=10; int blue=11; void setup() { pinMode(red, OUTPUT); pinMode(green, OUTPUT); pinMode(blue, OUTPUT); } void loop() { for (int fade=0; fade <=100; fade=fade+5); analogWrite (red, fade); delay(30); digitalWrite(red, 0); analogWrite (green, fade); delay(30); digitalWrite(green, 0); analogWrite (blue, fade); delay(30); digitalWrite(blue, 0); } A: A for loop will run whatever is the next statement after it until the case in the middle fails. If you put a semicolon after your for loop like this: for(int i=0; i<10; i++); then that "next statement" is just an empty semicolon. So it does nothing 10 times. Or more likely that gets optimized away. If you want to run several statements together in the for loop, then you need to surround those statements with a set of curly braces to group them together into a compound statement or "block". You do this for for loops as well as while and if statements. void loop() { for (int fade=0; fade <=100; fade=fade+5) //<- NO SEMICOLON { //<- OPENING BRACE analogWrite (red, fade); delay(30); digitalWrite(red, 0); analogWrite (green, fade); delay(30); digitalWrite(green, 0); analogWrite (blue, fade); delay(30); digitalWrite(blue, 0); } // <- CLOSING BRACE }
doc_727
I added some integration tests in JUnit that are using SpringBoot. Inside these tests I got the problem that thymeleaf now is trying to resolve any page in any directory. JSF is completely ignored and I got a whole bunch of JUnit tests failing because of that. Is there any point why thymeleaf ignores its configuration and wants to resolve all files? Here is my complete thymeleaf configuration, and as I said this works perfectly if I deploy it on a standalone tomcat. private ApplicationContext applicationContext; @Override public void setApplicationContext(ApplicationContext applicationContext) throws BeansException { this.applicationContext = applicationContext; } @Override public void addResourceHandlers(ResourceHandlerRegistry registry) { String imagesPattern = "/images/**"; String imagesLocation = basePath() + "resources/images/"; registry.addResourceHandler(imagesPattern).addResourceLocations(imagesLocation); log.info("added resourceHandler (pathPattern: '{}'), (resourceLocation: '{}')", imagesPattern, imagesLocation); String cssPattern = "/css/**"; String cssLocation = basePath() + "resources/css/"; registry.addResourceHandler(cssPattern).addResourceLocations(cssLocation); log.info("added resourceHandler (pathPattern: '{}'), (resourceLocation: '{}')", cssPattern, cssLocation); } @Bean(name = "basepath") public String basePath() { String basepath = ""; File file = new File(Optional.ofNullable(System.getenv("THYMELEAF_APP_RESOURCES")) .orElse("thymeleaf-resources/")); if (file.exists()) { basepath = "file:" + file.getAbsolutePath(); } if (!basepath.endsWith("/")) { basepath += "/"; } log.info("basepath: {}", basepath); return basepath; } @Bean @Description("Thymeleaf View Resolver") public ThymeleafViewResolver viewResolver(String basePath) { log.info("setting up Thymeleaf view resolver"); ThymeleafViewResolver viewResolver = new ThymeleafViewResolver(); viewResolver.setTemplateEngine(templateEngine(basePath)); viewResolver.setCharacterEncoding("UTF-8"); 
viewResolver.setCache(true); return viewResolver; } public SpringTemplateEngine templateEngine(String basePath) { log.info("setting up Thymeleaf template engine."); SpringTemplateEngine templateEngine = new SpringTemplateEngine(); templateEngine.setTemplateResolver(templateResolver(basePath)); templateEngine.setEnableSpringELCompiler(true); return templateEngine; } private ITemplateResolver templateResolver(String basePath) { log.info("setting up Thymeleaf template resolver"); SpringResourceTemplateResolver resolver = new SpringResourceTemplateResolver(); resolver.setApplicationContext(applicationContext); resolver.setPrefix(basePath + "thymeleaf/views/"); resolver.setSuffix(".html"); resolver.setTemplateMode(TemplateMode.HTML); resolver.setCacheable(false); return resolver; } @Bean public IMessageResolver thymeleafMessageSource(MessageSource messageSource) { SpringMessageResolver springMessageResolver = new SpringMessageResolver(); springMessageResolver.setMessageSource(messageSource); return springMessageResolver; } EDIT I just found that the problem seems to lie much deeper. Having the dependencies of thymeleaf added into my pom.xml seems to be enough for spring boot to load it into the context... I just deleted my ThymeleafConfig class for testing purposes and still thymeleaf tries to resolve the JSF pages... (yes I did maven clean before executing the test) EDIT 2 I read it now and tried to exclude the ThymeleafAutoConfiguration class but it does not help. My configurations are still overridden. Here is my configuration for this so far. (And yes this is the ONLY EnableAutoConfiguration annotation in the whole project) @Configuration @EnableAutoConfiguration(exclude = {ThymeleafAutoConfiguration.class}) @Import({WebAppConfig.class, ThymeleafConfig.class}) public class SpringBootInitializer extends SpringBootServletInitializer and my ThymeleafConfig class is already added above. 
A: Having the dependencies of thymeleaf added into my pom.xml seems to be enough for spring boot to load it into the context... If this has surprised you then I would recommend spending some time to take a step back and read about how Spring Boot works and, in particular, it's auto-configuration feature. This section of the reference documentation is a good place to start. In short, Spring Boot adopts a convention over configuration approach. If a dependency is on the classpath, Spring Boot assumes that you want to use it, and configures it with sensible defaults. This is what it's doing with Thymeleaf. You can disable this auto-configuration for a specific dependency using the excludes attribute on @SpringBootApplication: @SpringBootApplication(exclude={ThymeleafAutoConfiguration.class}) public class ExampleApplication { } You can also use the spring.autoconfigure.exclude property to provide a comma-separated list of auto-configuration classes to exclude. Each entry in the list should be the fully-qualified name of an auto-configuration class. You could use this property with @TestPropertySource to disable auto-configuration on a test-by-test basis. A: I have been struggling with a similar issue for hours and finally found out the root cause. If you have a dependency to *-data-rest in your pom like this: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-rest</artifactId> </dependency> you will have to add Thymeleaf to your pom as well even if you use a another template engine (Freemarker, JSP, ...) everywhere else. Reason: to expose a JpaRepository as a rest service Spring Boot requires Thymeleaf. I do not understand why this is not defined as a dependency of spring-boot-starter-data-rest so that Maven resolves it automatically. In my opinion it is a Spring Boot configuration bug.
doc_728
var joinAs = document.getElementById('joinAs'); while(joinAs.firstChild) { joinAs.removeChild(joinAs.firstChild); } joinAs.innerHTML= "<div class='form row'><?php $form=$this->beginWidget('CActiveForm', array('id'=>'login-form','enableClientValidation'=>true,'clientOptions'=>array('validateOnSubmit'=>true,),)); ?><div id='LoginDiv' class='answer_list' ><div class='col-lg-12' style='background-color:#F5F5F5;'><div class='row' style='margin:2% 0% 2% 0%'><div align='center' class='col-lg-6'><img src='images/close.png' onclick='hideDiv()' style='width:6%;cursor: pointer'></div><div align='center' class='col-lg-6'><p style='font-size:22px'>LOGIN</p></div></div><div style='clear:both'></div><div align='center' style='width:30%; margin:auto'><form action='' method='POST'><div class='form-group has-feedback'><label for='inputName' class='col-sm-2 col-xs-2 control-label' style='font-weight: 100;'>USERNAME</label><?php echo $form->textField($model,'username'); ?></div><?php echo $form->error($model,'username'); ?><div class='form-group has-feedback'><label for='inputName' class='col-sm-2 col-xs-2 control-label' style='font-weight: 100;'>PASSWORD</label><?php echo $form->passwordField($model,'password'); ?></div><?php echo $form->error($model,'password'); ?><div class='row'><div class='col-xs-4'><button type='submit' class='btn btn-primary btn-block btn-flat' onclick='PostLogin()'>Sign In</button></div></div></form></div></div></div><?php $this->endWidget(); ?></div>}</script><script src='js/validations.js' ></script></html>"; } This is script inside LongDiv() function, but I am getting following error: Uncaught ReferenceError: LoginDiv is not defined I search but all I got is you can't run php code inside javascript. A: Not too sure of my answer... But you can try it out!! 
<script> function LoginDiv() { var joinAs = document.getElementById('joinAs'); while(joinAs.firstChild) { joinAs.removeChild(joinAs.firstChild); } var innerText = "<div class='form row'>"; innerText += <?php $form=$this->beginWidget('CActiveForm', array('id'=>'login-form','enableClientValidation'=>true,'clientOptions'=>array('validateOnSubmit'=>true,),)); ?> innerText += " <div id='LoginDiv' class='answer_list' ><div class='col-lg-12' style='background-color:#F5F5F5;'><div class='row' style='margin:2% 0% 2% 0%'><div align='center' class='col-lg-6'><img src='images/close.png' onclick='hideDiv()' style='width:6%;cursor: pointer'></div><div align='center' class='col-lg-6'><p style='font-size:22px'>LOGIN</p></div></div><div style='clear:both'></div><div align='center' style='width:30%; margin:auto'><form action='' method='POST'><div class='form-group has-feedback'><label for='inputName' class='col-sm-2 col-xs-2 control-label' style='font-weight: 100;'>USERNAME</label>"; innerText += <?php echo $form->textField($model,'username'); ?> + "</div>" + <?php echo $form->error($model,'username'); ?> innerText += "<div class='form-group has-feedback'><label for='inputName' class='col-sm-2 col-xs-2 control-label' style='font-weight: 100;'>PASSWORD</label>" + <?php echo $form->passwordField($model,'password'); ?> +"</div>" + <?php echo $form->error($model,'password'); ?> + "<div class='row'><div class='col-xs-4'><button type='submit' class='btn btn-primary btn-block btn-flat' onclick='PostLogin()'>Sign In</button></div></div></form></div></div></div>" + <?php $this->endWidget(); ?> + "</div>"; joinAs.innerHTML= innerText; } </script> <script src='js/validations.js' ></script> </html>
doc_729
In the classic form-based, the button sends the html form as post, and the action page returns a table of results below the same form. something like: <form action="/search" method="post"> <input type="text" name="searchterms"> <button type="button" onclick="submit();">Search!</button> </form> In the ajax version, there is no form, but just a text field and the button calls a dosearch() function which makes an ajax request to a back end script which return search results, which are then used in a target DIV, showing a table of results, below the search input+button. <input type="text" name="searchterms"> <button type="button" onclick="dosearch();">Search!</button> Both work fine, but the second way avoid the browser to "collect" previously used search terms - I guess because there's no "submit". My users like this "previous search terms" suggestion, and I would like to add it back, if possible, in the search "ajax" form. I also tried to create an html form anyway, which encloses the input and the button, adding this event to its tag: onSubmit='dosearch();return false;' (if I don't return false, the form is submitted and action page loaded), like: <form onsubmit='dosearch();return false;'> <input type="text" name="searchterms"> <button type="button" onclick="dosearch();">Search!</button> </form> but even this seems not to work: new search terms are not "remembered" and suggested typing in the field... I thought maybe my dosearch() function could add (most recent) search terms into a session/cookie but then I would need to read those stored values and create, each time, a sort of "dropdown" list for the input field, just as browsers usually do automatically... it should be not complicated but probably overkill... Is there any way to make browsers "remember" inserted values even without submit? If not, what workaround is best/easiest? edit: I've just found this input remembered without using a form but maybe after 5 years something changed? 
A: I just tried changing the "button" type to "submit", and this way it seems to work... the page is not reloaded, ajax fills the results div and my new terms are remembered... like:

<form onsubmit='dosearch();return false;'>
  <input type="text" name="searchterms">
  <button type="submit">Search!</button>
</form>

It seems that if the "submit" is triggered through a "submit" button (even if the handler returns false) the browser stores input field values...

A: you need to do it with ajax

<form onsubmit='dosearch();return false;'>
  <input type="text" name="searchterms" id="searchTerm">
  <button type="button" onclick="dosearch();">Search!</button>
</form>

<script type="text/javascript">
function dosearch(){
  var val = $("#searchTerm").val();
  $.ajax({
    url: "LoadProduct.php?val=" + val,
    type: "GET",
    processData: false,
    contentType: false,
  }).done(function(respond){
    $("#LoadProductTd").html(respond);
    $(".chzn-select").chosen();
    $(".chzn-select-deselect").chosen({allow_single_deselect: true});
  });
}
</script>

so you can create a LoadProduct.php file, and in that file you will get your search term in $_GET['val'], so you can use it in your query
doc_730
In one of my components I am trying to implement react-select https://github.com/JedWatson/react-select I copied and pasted the CSS from the example directory into my scss file and when I pull up the modal that is supposed to have the select, it's just a squished, tiny, input field with no styling on it at all. Not sure what I am missing here. import React, { Component, PropTypes } from 'react'; import Modal from 'react-modal'; import withStyles from 'isomorphic-style-loader/lib/withStyles'; import s from './Modal.scss'; import SelectField from 'material-ui/lib/select-field'; import MenuItem from 'material-ui/lib/menus/menu-item'; import Checkbox from 'material-ui/lib/checkbox'; import ActionFavorite from 'material-ui/lib/svg-icons/action/favorite'; import ActionFavoriteBorder from 'material-ui/lib/svg-icons/action/favorite-border'; import TextInput from '../UI/TextInput'; import Button from '../UI/Button'; import Select from 'react-select'; class AddQuestionModal extends Component { createQuestion = () => { this.props.createQuestion(); } closeModal = () => { this.props.close(); } changeText = (val) => { this.props.changeText(val); } changeAnswer = (val) => { this.props.changeAnswer(val); } techSelectChange = (event, index, value) => { this.props.techSelectChange(value); } updateTags = (val) => { this.props.updateTags(val); } levelSelectChange = (event, index, value) => { this.props.levelSelectChange(value); } render() { let multiLine = true; return ( <Modal isOpen={this.props.open} onRequestClose={this.closeModal}> <h2>New Question</h2> <TextInput hintText="Question" change={this.changeText} multiLine = {true} default = {this.props.question.text} /> <TextInput hintText="Answer" change={this.changeAnswer} multiLine = {true} default = {this.props.question.answer} /> <div> <SelectField value={this.props.question.tech} onChange={this.techSelectChange} floatingLabelText="Technology"> <MenuItem value={"JavaScript"} primaryText="JavaScript"/> <MenuItem value={"Java"} 
primaryText="Java"/> <MenuItem value={"C#"} primaryText="C#"/> <MenuItem value={".NET"} primaryText=".NET"/> <MenuItem value={"iOS"} primaryText="iOS"/> </SelectField> </div> <div> <SelectField value={this.props.question.level} onChange={this.levelSelectChange} floatingLabelText="Difficulty"> <MenuItem value={"Beginner"} primaryText="Beginner"/> <MenuItem value={"Intermediate"} primaryText="Intermediate"/> <MenuItem value={"Advanced"} primaryText="Advanced"/> <MenuItem value={"Expert"} primaryText="Expert"/> </SelectField> </div> <div> <Select name="tags" options={this.props.question.tags} onChange={this.updateTags} multi={true} allowCreate={true} /> </div> <div className='buttonDiv'> <Button label='Cancel' disabled={false} onSubmit={this.closeModal} /> <Button label='Create Question' disabled={false} onSubmit={this.createQuestion} /> </div> </Modal> ); } } AddQuestionModal.propTypes = { open : PropTypes.bool.isRequired, close : PropTypes.func.isRequired, question : PropTypes.object.isRequired, createQuestion : PropTypes.func.isRequired, changeText : PropTypes.func.isRequired, changeAnswer : PropTypes.func.isRequired, techSelectChange : PropTypes.func.isRequired, levelSelectChange : PropTypes.func.isRequired, updateTags : PropTypes.func.isRequired }; export default withStyles(AddQuestionModal, s); './Modal.scss'; is the stylesheet that is copied directly from the github example. There are no css options being applied to that field when I look in the dev tools. A: I had this same problem the first time I used it. You need to import the css file from react-select. Example: require('../../node_modules/react-select/dist/react-select.min.css') A: From what I can see you are not applying any styles to the <SelectField />. Try adding className = { s.my-class-name }as a property to the select field.
doc_731
library(ggplot2)
library(lubridate)

weeksummary <- data.frame(
  Date = rep(as.POSIXct("2020-01-01") + days(0:6), 2),
  Total = rpois(14, 30),
  Group = c(rep("group1", 7), rep("group2", 7))
)

ggplot(data = weeksummary, mapping = aes(x = Date, y = Total, fill = Group)) +
  geom_col(position = "dodge") +
  geom_text(aes(label = Total), position = position_dodge(width = 0.9), size = 3)

I cannot for the life of me get this to put the numbers at the top of their own bars; I've been hunting around for an answer and trying everything I found with no luck, until I randomly tried this:

weeksummary$Date <- as.factor(weeksummary$Date)

But this seems unnecessary manipulation, and I'd need to make sure the dates appear in the right format and order and rewrite the additional bits that currently rely on dates... I'd rather understand what I'm doing wrong.

A: What you're looking for is to use as.Date.POSIXct. as.factor() works to force weeksummary$Date into a factor, but it forces the conversion of your POSIXct class into a character first (thus erasing "date"). However, you need to convert to a factor so that dodging works properly - that's the question. You can either convert before (e.g. weeksummary$Date <- as.Date.POSIXct(weeksummary$Date)), or do it right in your plot call:

ggplot(weeksummary, aes(x = as.Date.POSIXct(Date), y = Total, fill = Group)) +
  geom_col(position = 'dodge') +
  geom_text(aes(label = Total, y = Total + 1),
            position = position_dodge(width = 0.9), size = 3)

Giving you this:

Note: the values are different than your values, since our randomization seeds are likely not the same :) You'll notice I nudged the labels up a bit. You can normally do this with nudge_y, but you cannot specify nudge_x or nudge_y at the same time you specify a position= argument. In this case, you can just nudge by overwriting the y aesthetic.

A: Because geom_text inherits the x aesthetic, which is Date in this case, which is totally correct. 
You don't have to mutate your data frame; you can specify the conversion when plotting instead: aes(x = factor(Date), y = ...).
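Both fixes can be written out side by side (a sketch; note that as.Date() dispatches to the as.Date.POSIXct method mentioned above, and the y = Total + 1 trick from the first answer nudges each label above its bar):

```r
library(ggplot2)

# Option 1: convert the POSIXct column to Date once, up front
weeksummary$Date <- as.Date(weeksummary$Date)

# Option 2: coerce inside the plot call only, leaving the data untouched
ggplot(weeksummary, aes(x = factor(Date), y = Total, fill = Group)) +
  geom_col(position = "dodge") +
  geom_text(aes(label = Total, y = Total + 1),
            position = position_dodge(width = 0.9), size = 3)
```

Either way, the x aesthetic becomes discrete, so position_dodge lines the labels up over their own bars.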
doc_732
function* watchFetchWatchlist() {
  yield takeLatest('WATCHLIST_FETCH_REQUEST', fetchWatchlist);
}

function* fetchWatchlist() {
  const activity = 'ACTIVITY_FETCH_WATCHLIST';
  yield put(
    addNetworkActivity(activity) // Action 1: enables a global loading indicator before request is made
  );
  const { response, error } = yield call(
    api.fetchWatchlist // make an API request
  );
  yield put(
    removeNetworkActivity(activity) // Action 2: removes the above global loading indicator after request completes
  );
  if (response) {
    yield put(
      updateUserWatchlist(response) // Action 3a: updates Redux store with data if response was successful
    );
  } else {
    yield put(
      watchlistFetchFailed(error) // Action 3b: updates Redux store with error if response failed
    );
  }
}

The flow of this saga is synchronous in nature. Action 1 must run first to set the global loading state for the app. Action 2 must run after Action 1 and after the API response comes back, to remove the global loading state when the network activity is finished.

I'm pretty new to redux-observable but I have been digging around a lot trying to figure out how to convert this saga into an epic. The two goals here:

* Perform actions sequentially, one after the other, as opposed to running in parallel
* Perform these actions / flow in a single epic (kicks off when type: 'WATCHLIST_FETCH_REQUEST' is fired)

How do you achieve this with redux-observable? Thanks! 
A: I found the answer to my question by piecing together parts of the conversation here: https://github.com/redux-observable/redux-observable/issues/62

I ended up with something along the lines of:

import { concat as concat$ } from 'rxjs/observable/concat';
import { from as from$ } from 'rxjs/observable/from';
import { of as of$ } from 'rxjs/observable/of';

export const fetchWatchlistEpic = (action$) => {
  const activity = 'ACTIVITY_FETCH_WATCHLIST';
  return action$.ofType('WATCHLIST_FETCH_REQUEST')
    .switchMap(() => concat$(
      of$(addNetworkActivity(activity)),
      from$(api.fetchWatchlist())
        .map((data) => Immutable.fromJS(data.response))
        .switchMap((watchlist) => of$(
          updateUserWatchlist(watchlist),
          removeNetworkActivity(activity),
        ))
    ));
};

concat and of seem to be the go-to operators when trying to run multiple actions in sequence.
doc_733
How can I add extensions in an Azure web app on Linux?

A: You have to activate the mysql/mssql extension in Azure. https://learn.microsoft.com/en-us/previous-versions/azure/windows-server-azure-pack/dn457758(v%3Dtechnet.10)

Or, in the Azure Web app, open up the web SSH console and execute the following commands:

apt-get update
apt-get install php7.2-mysql
doc_734
Are there solutions so I can see debug messages in the console, or to be able to use the PLAY button on MacOS? I have the new M1. Unity runs great, no issues, just can't see dang debug messages in the console! Thank you for any help!

A: The Unity Console only shows logs while the app runs in the Editor. If you want to look at your logs from an Android device, you should use logcat. You can get it as another Unity window from the Package Manager (the Android Logcat package). Also, there is a logcat within Android Studio.
doc_735
<script type="text/javascript">
window.onerror = function(msg, url, line) {
    if (window.XMLHttpRequest) {
        var xmlhttp = new XMLHttpRequest();
    } else {
        var xmlhttp = new ActiveXObject('Microsoft.XMLHTTP');
    }
    xmlhttp.open('POST', '/logJSerrorsHere', true);
    xmlhttp.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    xmlhttp.send('msg=' + encodeURIComponent(msg) + '&url=' + encodeURIComponent(url) + '&line=' + line);
    return true;
}
</script>

and sometimes it logs some "mysterious" errors: "$ is not defined", but all of them come from "googlebot(at)googlebot.com" or spiderbot. Should I deal with it?

A: Depends :) If your site is readable and indexable without JavaScript (and your site is visible in search) I wouldn't worry too much about it, unless you feel the error is indicative of a bigger issue. You can test this using Fetch and Render in Google Webmaster Tools. If your site relies on JavaScript to render the page (i.e. it uses AngularJS for example) then yes, fix it.
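If the noise is only from crawlers, one option is to skip reporting for known bot user agents before sending the error - a sketch (the pattern list is illustrative, not exhaustive, and the function name is mine):

```javascript
// Returns true when the user agent string looks like a known crawler.
function isBotUserAgent(ua) {
  return /googlebot|bingbot|yandex|baiduspider|spider|crawl/i.test(ua || '');
}

if (typeof window !== 'undefined') {
  window.onerror = function (msg, url, line) {
    if (isBotUserAgent(navigator.userAgent)) {
      return true; // swallow the error silently for crawlers
    }
    // ...send the XMLHttpRequest report as before...
    return true;
  };
}
```

This keeps the real-browser reports while dropping the googlebot/spiderbot ones at the source.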
doc_736
I am building a REST based service using Spring-boot. I want to publish the JAR file to the local maven repository so that my web application can use it. After trying many things, I finally settled for maven-publish plugin. Here is my build.gradle file //Needed for spring-boot buildscript { repositories { mavenCentral() } dependencies { classpath("org.springframework.boot:spring-boot-gradle-plugin:1.1.8.RELEASE") } } apply plugin: 'eclipse' // Apply the groovy plugin to add support for Groovy apply plugin: 'groovy' //apply Spring-boot plugin apply plugin: 'spring-boot' apply plugin: 'maven-publish' // In this section you declare where to find the dependencies of your project repositories { mavenLocal() // Use 'jcenter' for resolving your dependencies. // You can declare any Maven/Ivy/file repository here. jcenter() mavenCentral() } group = "com.proto" publishing { publications { maven(MavenPublication) { groupId "${project.group}" artifactId "${project.name}" version "${project.jar.version}" artifact sourceJar { classifier "sources" } from components.java pom.withXml { asNode().appendNode('parent') .appendNode('groupId', 'org.springframework.boot').parent() .appendNode('artifactId', 'spring-boot-starter-parent').parent() .appendNode('version', '1.1.8.RELEASE') asNode().appendNode('repositories').appendNode('repository') .appendNode('id', 'spring-releases').parent() .appendNode('url', 'http://repo.spring.io/libs-release') } } } } task sourceJar(type: Jar) { from sourceSets.main.allJava } jar { baseName = 'my-api' version = '0.0.1' } task('execJar', type:Jar, dependsOn: 'jar') { baseName = 'my-api' version = '0.0.1' classifier = 'exec' from sourceSets.main.output } bootRepackage { withJarTask = tasks['execJar'] } // In this section you declare the dependencies for your production and test code dependencies { // We use the latest groovy 2.x version for building this library compile 'org.codehaus.groovy:groovy-all:2.3.6' compile 
'org.codehaus.groovy.modules.http-builder:http-builder:0.7.1'

    // tag::jetty[]
    compile("org.springframework.boot:spring-boot-starter-web")
    // {
    //     exclude module: "spring-boot-starter-tomcat"
    // }
    // compile("org.springframework.boot:spring-boot-starter-jetty")
    // end::jetty[]

    // tag::actuator[]
    compile("org.springframework.boot:spring-boot-starter-actuator")

    // We use the awesome Spock testing and specification framework
    testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
    testCompile 'junit:junit:4.11'
    testCompile('org.springframework.boot:spring-boot-starter-test')
    testCompile('cglib:cglib:3.1')
}

// tag::wrapper[]
task wrapper(type: Wrapper) {
    gradleVersion = '2.1'
}

My problem is that, when I run:

gradle publishToMavenLocal

I get the following error:

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':publishMavenPublicationToMavenLocal'.
> Failed to publish publication 'maven' to repository 'MavenLocal'
   > Unable to initialize POM pom-default.xml: Cannot find parent: org.springframework.boot:spring-boot-starter-parent for project: com.proto:proto-api:jar:0.0.1 for project com.proto:proto-api:jar:0.0.1

My gradle environment details:

------------------------------------------------------------
Gradle 2.1
------------------------------------------------------------

Build time:   2014-09-08 10:40:39 UTC
Build number: none
Revision:     e6cf70745ac11fa943e19294d19a2c527a669a53

Groovy:       2.3.6
Ant:          Apache Ant(TM) version 1.9.3 compiled on December 23 2013
JVM:          1.7.0_72 (Oracle Corporation 24.72-b04)
OS:           Linux 3.13.0-39-generic amd64

What am I missing? Thank you in advance for your help.

A: Ok, I have fixed the issue. I am behind our corporate firewall, and had configured the proxy correctly for gradle in the ~/.gradle/gradle.properties file. But I missed setting proxies for maven in the ~/.m2/settings.xml file. I configured our internal nexus repository to handle this issue, but setting a proxies block should work as well. 
Click here for maven settings.xml documentation

A: Same as @aardee, I am sitting behind our corporate firewall, but it seems that my proxy settings (settings.xml) for the local maven did not change anything. Fortunately we have our own maven repository that can proxy out, so I just replaced the repository in the generated pom and made sure that our company maven repository knows the relevant spring repos.

pom.withXml {
    asNode().appendNode('parent')
            .appendNode('groupId', 'org.springframework.boot').parent()
            .appendNode('artifactId', 'spring-boot-starter-parent').parent()
            .appendNode('version', '1.1.8.RELEASE')
    asNode().appendNode('repositories').appendNode('repository')
            .appendNode('id', 'spring-releases').parent()
            .appendNode('url', 'http://my.mavenRepo.com/releases')
}

Replace http://my.mavenRepo.com/releases with your own maven repository.
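For completeness, the proxies block mentioned in the first answer lives in ~/.m2/settings.xml and looks like this (a sketch - host, port and the nonProxyHosts list are placeholders for your own network):

```xml
<settings>
  <proxies>
    <proxy>
      <id>corporate-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
      <nonProxyHosts>localhost|*.internal.example.com</nonProxyHosts>
    </proxy>
  </proxies>
</settings>
```

With this in place, the POM-initialization step can reach repo.spring.io to resolve the spring-boot-starter-parent parent.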
doc_737
I have a nested if statement within the if statement of an if/else block. In the nested if statement, if it meets the criteria, I'd like the code to break to the else statement. When I put a break in the nested if, though, I'm not sure if it's breaking to the else statement.

I'd like to find the longest substring in alphabetical order of a given string, s. Here's my code:

s = 'lugabcdeczsswabcdefghij'
longest = 1
alpha_count = 1
longest_temp = 1
longest_end = 1
for i in range(len(s)-1):
    if (s[i] <= s[i+1]):
        alpha_count += 1
        if (i+1 == (len(s)-1)):
            break
    else:
        longest_check = alpha_count
        if longest_check > longest:
            longest = longest_check
            longest_end = i+1
        alpha_count = 1
print(longest)
print('Longest substring in alphabetical order is: ' + s[(longest_end-longest):longest_end])

(Yes, I realize there's surely lots of unnecessary code here. Still learning!)

At this nested if:

        if (i+1 == (len(s)-1)):
            break

...if True, I'd like the code to break to the else statement. It doesn't seem to break to that section, though. Any help?

A: break is used when you want to break out of loops, not if statements. You can have another branch that executes this logic for you, like this:

if (s[i] <= s[i+1]):
    alpha_count += 1
elif (i+1 == (len(s)-1)) or (s[i] > s[i+1]):
    longest_check = alpha_count
    if longest_check > longest:
        longest = longest_check
        longest_end = i+1
    alpha_count = 1

What this snippet is doing is evaluating two booleans, both for the else part: it executes either in the case of the else from the first if (s[i] > s[i+1]) or in the case of (i+1 == (len(s)-1)).
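Since the answer keeps the asker's index-based structure, it may also help to see the whole task written without any index bookkeeping - a sketch, not the asker's original approach (the function name is mine): it grows the current run as a string and remembers the longest run seen so far.

```python
def longest_alpha_substring(s):
    """Return the longest substring of s whose characters are in
    non-decreasing alphabetical order."""
    best = current = s[:1]
    for i in range(1, len(s)):
        # Extend the current run while letters stay in order,
        # otherwise start a new run at s[i].
        current = current + s[i] if s[i] >= s[i - 1] else s[i]
        if len(current) > len(best):
            best = current
    return best

print(longest_alpha_substring('lugabcdeczsswabcdefghij'))  # prints: abcdefghij
```

Tracking the run itself (instead of counts and end indices) removes the need for the end-of-string special case that the break was trying to handle.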
doc_738
Linguine Maps seems to only work on old Hibernate xml files. And the Hibernate tool task hbm2hbmxml seems to have a bug, so I can't do the two-step process "annotations -> hbmxml -> diagram".

Best, Anders

A: Hmm, I've found this great post on developerWorks. There the author seems to generate entity diagrams from a live database. I wonder if I can go "annotated classes -> live db (e.g. H2) -> SchemaSpy-generated diagram"? And yes, API Viz looks great. I've blogged my hacky solution with SchemaSpy.

A: Not quite what you're looking for, but you could use API Viz, which automatically produces UML-like diagrams from class hierarchies and can be augmented with doclet tags. You'd have to mark up your classes a little, but it's a great tool and worth using wherever you're trying to visualise large or complex projects.
doc_739
I have an active record variable which brings database entries from a model:

$variable = Model::model()->findAll();

So I have $variable available in my view file and I want to check for the existence of a specific entry within the results. I am using the primary key of an entry available in $variable, but I can't seem to get it working. What is the correct way to check if a given entry is contained within that variable from the view file, not the controller?

PS: I do not want to iterate through the result set, it wouldn't be very efficient for my application. Thanks.

A: If I got you right, then:

* Indeed, it's better not to have such pieces of code in a view file.
* If you are forced to use CActiveRecord find*() methods, consider using findByPk($pk). If it returns null - no such record.
* Consider having your models extend PcBaseArModel, as that class has a 'checkExists($pk)' method that looks natural for the check you need to perform. This base class has other goodies - check its documentation.

A: Try this code:

$data = Post::model()->findAllBySql("select * from tbl_post where id=:id", array(':id' => $data->id));

or

$post = Post::model()->find(array(
    'condition' => 'postID=:postID',
    'params' => array(':postID' => $data->id),
));
doc_740
Can anyone help me with the code? I have created arrays like this and am assigning values dynamically:

import numpy as np

A = []
A.append(values)
A1 = np.array(A)

B = []
B.append(values)
B1 = np.array(B)

np.savetxt(filename, zip(A1, B1), delimiter=',', fmt='%f')

It is throwing the error: Expected 1D or 2D array, got 0D array instead

A: You can use np.vstack to stack the arrays on top of each other and then use np.savetxt:

import numpy as np

a = [1,2,3,4]
b = [2,3,4,5]
c = [3,4,5,6]
a = np.array(a)
b = np.array(b)
c = np.array(c)
d = np.vstack((a,b,c))
np.savetxt("file.csv", d, delimiter=",", fmt='% 4d')
doc_741
private static final Logger log = LogManager.getLogger();

public void handler() {
    log.info("handler");
    throw new RuntimeException("error");
}

The logs for an invocation of this function include the log4j2 info message, but the exception is logged plainly, without requestid, timestamp, or ERROR marker:

2019-10-07 11:39:02 43db1f36-0570-4b58-adc6-5c92ea62a862 INFO Example - handler
java.lang.RuntimeException: error
    at Test.main(Test.java:3)

It's harder to find exceptions in logs because you can't grep for the request id or ERROR. Is there a common pattern to emit logs using the log4j2 root logger in lambda? I don't want to wrap my handlers with try-catch-log-rethrow:

try {
    ...
} catch (Throwable t) {
    log.fatal(t);
    throw t;
}
doc_742
When I do this I've noticed that the function is called multiple times. I have created a StackBlitz to show this situation. Here that function is called 4 times, but in my local application it is called more than 6 times. https://stackblitz.com/edit/angular-ypwswn

I know this is because of the change detection cycle, but I could not figure out how to solve this situation logically. I've seen the following post, but since it relates to Angular 2 I'm not sure it's relevant: Angular2 *ngIf="afunctioncall()" results in the function being called 9 times

The console.log is output 4 times. Can anyone point me in the right direction, or am I missing anything? And is there any way to avoid it? Any help will be appreciated. I need to pass an array to that method and get a selected leg from that array.

A: Call the function in ngOnInit instead of in your template:

export class AppComponent implements OnInit {
  finaldata : any;

  ngOnInit() {
    this.finaldata = this.getSelectedLeg(this.data);
  }
}

in html:

<div *ngIf="finaldata; let L">
</div>
doc_743
Has it advantage doing this way? You can watch with inspector (of course) but I'm putting a response example: HeadersContentCookiesTiming { "value": { "html": "<div class=\"dialog_tabs\"><a class=\"tab\" group=\"__w2_PHfxEJe_tabs\" href=\"#\" show=\"signup\" id=\"__w2_PHfxEJe_signup_select\"><span class=\"no_icon signup\">Create an Account</span></a><a class=\"tab\" group=\"__w2_PHfxEJe_tabs\" href=\"#\" show=\"login\" id=\"__w2_PHfxEJe_login_select\"><span class=\"no_icon login\">Login</span></a></div><div group=\"__w2_PHfxEJe_contents\" id=\"__w2_PHfxEJe_signup\"><div class=\"row live_login_signup_form\"><div class=\"row p0_5\">Sorry, you must have an invitation to create an account on Quora.</div></div></div><div class=\"hidden\" group=\"__w2_PHfxEJe_contents\" id=\"__w2_PHfxEJe_login\"><div class=\"row form_row\" id=\"__w2_PHfxEJe_inline_login\"><div id=\"ld_LIJSXr_1\"><div id=\"__w2_b5Jr0f0_associated\"><div id=\"ld_LIJSXr_2\"></div></div><div class=\"w3_5 p1\"><form class=\"row w2_5 col inline_login_form\" method=\"POST\" id=\"__w2_b5Jr0f0_login_form\"><div class=\"form_inputs\"><div class=\"form_row\"><label for=\"__w2_b5Jr0f0_email\">Email Address</label><input class=\"text\" group=\"__w2_b5Jr0f0_interaction\" type=\"text\" name=\"email\" w2cid=\"b5Jr0f0\" id=\"__w2_b5Jr0f0_email\" /><p class=\"hidden input_validation_error_text\" id=\"__w2_b5Jr0f0_email_not_confirmed_error\">You need to confirm your email address\n before you can login. 
<br /><a hred=\"#\" id=\"__w2_b5Jr0f0_resend_confirmation\">Resend Confirmation Link</a></p><span class=\"hidden input_validation_error_text\" id=\"__w2_b5Jr0f0_email_not_found_error\">No account matching that email address was found.</span></div><div class=\"form_row\"><label for=\"__w2_b5Jr0f0_password\">Password</label><input class=\"text\" group=\"__w2_b5Jr0f0_interaction\" type=\"password\" name=\"password\" w2cid=\"b5Jr0f0\" id=\"__w2_b5Jr0f0_password\" /><span class=\"hidden input_validation_error_text\" id=\"__w2_b5Jr0f0_incorrect_password_error\">Incorrect password. <a href=\"#\" id=\"__w2_b5Jr0f0_reset_password_link\">Reset Password</a></span></div></div><div class=\"form_buttons p1\"><input class=\"col p0_5\" group=\"__w2_b5Jr0f0_interaction\" type=\"checkbox\" checked=\"checked\" name=\"allow_passwordless\" value=\"allow_passwordless\" w2cid=\"b5Jr0f0\" id=\"__w2_b5Jr0f0_allow_passwordless\" /><label class=\"login_option\" for=\"__w2_b5Jr0f0_allow_passwordless\">Let me login without a password on this browser</label><input class=\"submit_button\" group=\"__w2_b5Jr0f0_interaction\" type=\"submit\" value=\"Login\" w2cid=\"b5Jr0f0\" id=\"__w2_b5Jr0f0_submit_button\" /></div></form><div class=\"hidden e_col inline_login_preview_box\" id=\"__w2_b5Jr0f0_preview\"><img id=\"__w2_b5Jr0f0_pic\" /><br /><span id=\"__w2_b5Jr0f0_name\"></span></div></div></div></div></div>", "css": "", "js": "W2.addComponentMetadata({parents: {\"b5Jr0f0\": \"PHfxEJe\", \"PHfxEJe\": \"*dialog*_1\", \"NqeVUG8\": \"b5Jr0f0\"}, children: {}, knowsAbout: {\"b5Jr0f0\": {\"inline_login\": \".\"}, \"PHfxEJe\": {\"signup_form\": \"signup_form\"}}, groups: {\"__w2_PHfxEJe_contents\": [\"__w2_PHfxEJe_signup\", \"__w2_PHfxEJe_login\"], \"__w2_b5Jr0f0_interaction\": [\"__w2_b5Jr0f0_email\", \"__w2_b5Jr0f0_password\", \"__w2_b5Jr0f0_allow_passwordless\", \"__w2_b5Jr0f0_submit_button\"], \"__w2_PHfxEJe_tabs\": [\"__w2_PHfxEJe_signup_select\", \"__w2_PHfxEJe_login_select\"]}, domids: {\"b5Jr0f0\": 
\"ld_LIJSXr_1\", \"NqeVUG8\": \"ld_LIJSXr_2\"}});var _components = [new(LiveLoginDialog)(\"PHfxEJe\",\"\",{\"default_tab\": \"signup\", \"autostart\": null},\"cls:a.app.view.login:LiveLoginDialog:OuWttII3ndCni7\",{}), new(InlineLogin) (\"b5Jr0f0\",\"\",{},\"live:ld_LIJSXr_1:cls:a.app.view.login:InlineLogin:zLqmkvFx8WJgk2\", {})];W2.registerComponents(_components);W2.onLoad(_components, false);" }, "pmsg": null } A: Makinde from Facebook talks about this approach in this video and explains the benefits: http://www.facebook.com/video/video.php?v=596368660334&oid=9445547199 In summary: At Facebook, they reached a point where they had 1M of javascript and it made the site slow and was a nightmare to maintain. They found out that the majority of use cases were about sending a request to the server and rendering different HTML. So by pushing the business logic to the server and letting it return the html to be rendered, they managed to remove a huge amount of javascript and make the site faster. It turns out that returning HTML in the response doesn't add too much delay over returning only the json and using javascript to render it. A lot more details in the video, though. I'm working on a library that does some of that and using it in my own projects now. A: its pretty simple on how to handle javascript through ajax.. 
they will probably use some code like this to append the js code to the dom:

var d = document.getElementById('divContents').getElementsByTagName("script");
var t = d.length;
for (var x = 0; x < t; x++) {
    var newScript = document.createElement('script');
    newScript.type = "text/javascript";
    newScript.text = d[x].text; // will refer to the js property of the json
    document.getElementById('divContents').appendChild(newScript);
}

Regarding why they do it: they probably send some script content that they thought wasn't initially required by the user, but when he performs some action - say he fails to authenticate himself - they might send some JavaScript code that will be responsible for generating a "Register new account" markup and also add some validation rules with it. Thus loading script on the client side may be something that is done when they dynamically need to insert some script.
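A DOM-free sketch of the same extraction idea - pulling the script bodies out of a returned HTML string with a regex before re-injecting them (the function name is mine; a regex is fine for simple payloads like this, but it is not a substitute for a real HTML parser):

```javascript
// Collect the text content of every <script>...</script> block
// found in an HTML fragment returned by an AJAX call.
function extractScripts(html) {
  var scripts = [];
  var re = /<script[^>]*>([\s\S]*?)<\/script>/gi;
  var m;
  while ((m = re.exec(html)) !== null) {
    scripts.push(m[1]);
  }
  return scripts;
}
```

Each returned string can then be assigned to a freshly created script element's text property, exactly as in the snippet above, so the browser actually executes it.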
doc_744
I realize it can be done by creating and inserting into a temporary table:

BEGIN;
CREATE TEMPORARY TABLE temp (value INT);
INSERT INTO temp VALUES (1), (2), (2), (3), (3), (4), (5), (6);
SELECT GROUP_CONCAT(DISTINCT value) FROM temp;
DROP TEMPORARY TABLE temp;
ROLLBACK;

but is there a way that does not require a temporary table? The list of integers is not coming from another MySQL table; pretend it is hard-coded.
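One temp-table-free option is to inline the hard-coded values as a derived table of UNION ALL selects - a sketch of the same example (assumes a MySQL version that supports subqueries in FROM):

```sql
SELECT GROUP_CONCAT(DISTINCT value ORDER BY value)
FROM (
  SELECT 1 AS value UNION ALL SELECT 2 UNION ALL SELECT 2
  UNION ALL SELECT 3 UNION ALL SELECT 3 UNION ALL SELECT 4
  UNION ALL SELECT 5 UNION ALL SELECT 6
) AS t;
```

GROUP_CONCAT handles the DISTINCT (and optional ORDER BY) itself, so the duplicates in the inlined list are collapsed without any intermediate table.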
doc_745
The main issue I have right now is that before, the Oracle dialect would generate a fast enough query like this:

SELECT * FROM MY_TABLE WHERE SDO_RELATE(geom,SDO_GEOMETRY(?,4326),'mask=INSIDE+COVEREDBY') ='TRUE'

Now it will generate something terribly slow:

SELECT * FROM MY_TABLE WHERE MDSYS.OGC_WITHIN(MDSYS.ST_GEOMETRY.FROM_SDO_GEOM(geom),MDSYS.ST_GEOMETRY.FROM_SDO_GEOM(?))=1

This one will not finish in time and raises a transaction timeout:

JTA transaction unexpectedly rolled back (maybe due to a timeout)

I can only think that there is something wrong with whatever dialect class is used to translate HQL into proper, performant Oracle spatial SQL.

My configuration is as follows. pom.xml:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>5.0.7.Final</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>5.0.7.Final</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-spatial</artifactId>
    <version>5.0.7.Final</version>
</dependency>

My persistence.xml, where I configure Atomikos (4.0.0M4) as transaction manager. 
<persistence-unit name="pers_unit_name" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jta_data_source_name</jta-data-source>
    <mapping-file>oracle.hbm.xml</mapping-file>
    <class>...</class>
    <properties>
        <property name="hibernate.dialect" value="org.hibernate.spatial.dialect.oracle.OracleSpatial10gDialect" />
        <property name="hibernate.spatial.dialect" value="org.hibernate.spatial.dialect.oracle.OracleSpatial10gDialect" />
        <property name="hibernate.spatial.connection_finder" value="org.geolatte.geom.codec.db.oracle.DefaultConnectionFinder" />
        <property name="hibernate.connection.autocommit" value="false" />
        <property name="hibernate.transaction.manager_lookup_class" value="com.atomikos.icatch.jta.hibernate4.TransactionManagerLookup" />
        <property name="transaction.factory_class" value="org.hibernate.transaction.JTATransactionFactory" />
        <property name="hibernate.transaction.jta.platform" value="com.atomikos.icatch.jta.hibernate4.AtomikosPlatform"/>
        <property name="hibernate.transaction.coordinator_class" value="jta"/>
        <property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider" />
    </properties>
</persistence-unit>

When I debug HQLQueryPlan I see that the query translator it is using internally is org.hibernate.hql.internal.ast.QueryTranslatorImpl. Not sure whether this is right or wrong, or how this could be configured to generate the right query.

This application is running on Tomcat 8. The POJO used with Hibernate to map the entity contains that geom attribute, which is defined as:

@Column(name = "geom", columnDefinition="Geometry", nullable = true)
protected Geometry geom;

A: It looks like setting OGC_STRICT=false did the trick. This tells Hibernate to use Oracle's own spatial functions directly, instead of the Open Geospatial compliant functions, as we can read in the OGC compliance setting documentation. 
Actually, we had already set it up in the org.hibernatespatial.oracle.OracleSpatial10gDialect.properties file, but because after the upgrade the file should be named org.hibernate.spatial.dialect.oracle.OracleSpatial10gDialect.properties, it wouldn't work for us.
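For reference, the properties file itself only needs the one setting (a sketch - placing it on the classpath under the package path matching the renamed dialect class is an assumption based on the naming described above):

```properties
# org/hibernate/spatial/dialect/oracle/OracleSpatial10gDialect.properties
OGC_STRICT=false
```

With OGC_STRICT disabled, the dialect emits the SDO_RELATE-style queries shown at the top instead of the MDSYS.OGC_* wrappers.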
doc_746
Here is my demo => https://jsfiddle.net/fmvucqno/. Inside the options variable I have 10% and 90%; I want the segments to fill 10% and 90% of the wheel for the usernames.

    <input type="button" value="spin" style="float:left;" id='spin'/>
    <canvas id="canvas" width="500" height="500"></canvas>
    <body style="background-color: white">
    <script>
    var options = [
        ['@david', '10%'],
        ['@burn', '90%']
    ];
    var startAngle = 0;
    var arc = Math.PI / (options.length / 2);
    var spinTimeout = null;
    var spinArcStart = 10;
    var spinTime = 0;
    var spinTimeTotal = 0;
    var ctx;

    document.getElementById("spin").addEventListener("click", spin);

    function byte2Hex(n) {
        var nybHexString = "0123456789ABCDEF";
        return String(nybHexString.substr((n >> 4) & 0x0F, 1)) + nybHexString.substr(n & 0x0F, 1);
    }

    function RGB2Color(r, g, b) {
        return '#' + byte2Hex(r) + byte2Hex(g) + byte2Hex(b);
    }

    function getColor(item, maxitem) {
        var phase = 0;
        var center = 128;
        var width = 127;
        var frequency = Math.PI * 2 / maxitem;
        red = Math.sin(frequency * item + 2 + phase) * width + center;
        green = Math.sin(frequency * item + 0 + phase) * width + center;
        blue = Math.sin(frequency * item + 4 + phase) * width + center;
        return RGB2Color(red, green, blue);
    }

    function drawRouletteWheel() {
        var canvas = document.getElementById("canvas");
        if (canvas.getContext) {
            var outsideRadius = 200;
            var textRadius = 160;
            var insideRadius = 125;
            ctx = canvas.getContext("2d");
            ctx.clearRect(0, 0, 500, 500);
            ctx.strokeStyle = "black";
            ctx.lineWidth = 2;
            ctx.font = 'bold 12px Helvetica, Arial';
            for (var i = 0; i < options.length; i++) {
                var angle = startAngle + i * arc;
                ctx.fillStyle = getColor(i, options.length);
                ctx.beginPath();
                ctx.arc(250, 250, outsideRadius, angle, angle + arc, false);
                ctx.arc(250, 250, insideRadius, angle + arc, angle, true);
                ctx.stroke();
                ctx.fill();
                ctx.save();
                ctx.shadowOffsetX = -1;
                ctx.shadowOffsetY = -1;
                ctx.shadowBlur = 0;
                ctx.shadowColor = "rgb(220,220,220)";
                ctx.fillStyle = "black";
                ctx.translate(250 + Math.cos(angle + arc / 2) * textRadius,
                              250 + Math.sin(angle + arc / 2) * textRadius);
                ctx.rotate(angle + arc / 2 + Math.PI / 2);
                var text = options[i];
                ctx.fillText(text, -ctx.measureText(text).width / 2, 0);
                ctx.restore();
            }
            // Arrow
            ctx.fillStyle = "black";
            ctx.beginPath();
            ctx.moveTo(250 - 4, 250 - (outsideRadius + 5));
            ctx.lineTo(250 + 4, 250 - (outsideRadius + 5));
            ctx.lineTo(250 + 4, 250 - (outsideRadius - 5));
            ctx.lineTo(250 + 9, 250 - (outsideRadius - 5));
            ctx.lineTo(250 + 0, 250 - (outsideRadius - 13));
            ctx.lineTo(250 - 9, 250 - (outsideRadius - 5));
            ctx.lineTo(250 - 4, 250 - (outsideRadius - 5));
            ctx.lineTo(250 - 4, 250 - (outsideRadius + 5));
            ctx.fill();
        }
    }

    function spin() {
        spinAngleStart = Math.random() * 10 + 10;
        spinTime = 0;
        spinTimeTotal = Math.random() * 3 + 4 * 1000;
        rotateWheel();
    }

    function rotateWheel() {
        spinTime += 30;
        if (spinTime >= spinTimeTotal) {
            stopRotateWheel();
            return;
        }
        var spinAngle = spinAngleStart - easeOut(spinTime, 0, spinAngleStart, spinTimeTotal);
        startAngle += (spinAngle * Math.PI / 180);
        drawRouletteWheel();
        spinTimeout = setTimeout('rotateWheel()', 30);
    }

    function stopRotateWheel() {
        clearTimeout(spinTimeout);
        var degrees = startAngle * 180 / Math.PI + 90;
        var arcd = arc * 180 / Math.PI;
        var index = Math.floor((360 - degrees % 360) / arcd);
        ctx.save();
        ctx.font = 'bold 30px Helvetica, Arial';
        var text = options[index];
        ctx.fillText(text, 250 - ctx.measureText(text).width / 2, 250 + 10);
        ctx.restore();
    }

    function easeOut(t, b, c, d) {
        var ts = (t /= d) * t;
        var tc = ts * t;
        return b + c * (tc + -3 * ts + 3 * t);
    }

    drawRouletteWheel();
    </script>

A: Although some options are set up for the different people, which include the percentage they should occupy, these are not actually used. The arc is calculated to be equal for all users: Math.PI / (options.length / 2). Instead we need to use the value given in the option for that user, and we have to gradually add to the start angle to know where each arc starts. Here's the snippet with the changes:

    var options = [
        ['@david', '10%'],
        ['@burn', '90%']
    ];
    var startAngle = 0;
    var arc = Math.PI / (options.length / 2);
    var spinTimeout = null;
    var spinArcStart = 10;
    var spinTime = 0;
    var spinTimeTotal = 0;
    var ctx;

    document.getElementById("spin").addEventListener("click", spin);

    function byte2Hex(n) {
        var nybHexString = "0123456789ABCDEF";
        return String(nybHexString.substr((n >> 4) & 0x0F, 1)) + nybHexString.substr(n & 0x0F, 1);
    }

    function RGB2Color(r, g, b) {
        return '#' + byte2Hex(r) + byte2Hex(g) + byte2Hex(b);
    }

    function getColor(item, maxitem) {
        var phase = 0;
        var center = 128;
        var width = 127;
        var frequency = Math.PI * 2 / maxitem;
        red = Math.sin(frequency * item + 2 + phase) * width + center;
        green = Math.sin(frequency * item + 0 + phase) * width + center;
        blue = Math.sin(frequency * item + 4 + phase) * width + center;
        return RGB2Color(red, green, blue);
    }

    function drawRouletteWheel() {
        var canvas = document.getElementById("canvas");
        if (canvas.getContext) {
            var outsideRadius = 200;
            var textRadius = 160;
            var insideRadius = 125;
            ctx = canvas.getContext("2d");
            ctx.clearRect(0, 0, 500, 500);
            ctx.strokeStyle = "black";
            ctx.lineWidth = 2;
            ctx.font = 'bold 12px Helvetica, Arial';
            var angle = startAngle;
            for (var i = 0; i < options.length; i++) {
                // Use each option's own percentage: a full circle is 2*PI,
                // so pct% of it is Math.PI * pct / 50.
                arc = Math.PI * Number(options[i][1].replace('%', '')) / 50;
                ctx.fillStyle = getColor(i, options.length);
                ctx.beginPath();
                ctx.arc(250, 250, outsideRadius, angle, angle + arc, false);
                ctx.arc(250, 250, insideRadius, angle + arc, angle, true);
                ctx.stroke();
                ctx.fill();
                ctx.save();
                ctx.shadowOffsetX = -1;
                ctx.shadowOffsetY = -1;
                ctx.shadowBlur = 0;
                ctx.shadowColor = "rgb(220,220,220)";
                ctx.fillStyle = "black";
                ctx.translate(250 + Math.cos(angle + arc / 2) * textRadius,
                              250 + Math.sin(angle + arc / 2) * textRadius);
                ctx.rotate(angle + arc / 2 + Math.PI / 2);
                var text = options[i];
                ctx.fillText(text, -ctx.measureText(text).width / 2, 0);
                ctx.restore();
                angle += arc;   // the next segment starts where this one ends
            }
            // Arrow
            ctx.fillStyle = "black";
            ctx.beginPath();
            ctx.moveTo(250 - 4, 250 - (outsideRadius + 5));
            ctx.lineTo(250 + 4, 250 - (outsideRadius + 5));
            ctx.lineTo(250 + 4, 250 - (outsideRadius - 5));
            ctx.lineTo(250 + 9, 250 - (outsideRadius - 5));
            ctx.lineTo(250 + 0, 250 - (outsideRadius - 13));
            ctx.lineTo(250 - 9, 250 - (outsideRadius - 5));
            ctx.lineTo(250 - 4, 250 - (outsideRadius - 5));
            ctx.lineTo(250 - 4, 250 - (outsideRadius + 5));
            ctx.fill();
        }
    }

    function spin() {
        spinAngleStart = Math.random() * 10 + 10;
        spinTime = 0;
        spinTimeTotal = Math.random() * 3 + 4 * 1000;
        rotateWheel();
    }

    function rotateWheel() {
        spinTime += 30;
        if (spinTime >= spinTimeTotal) {
            stopRotateWheel();
            return;
        }
        var spinAngle = spinAngleStart - easeOut(spinTime, 0, spinAngleStart, spinTimeTotal);
        startAngle += (spinAngle * Math.PI / 180);
        drawRouletteWheel();
        spinTimeout = setTimeout('rotateWheel()', 30);
    }

    function stopRotateWheel() {
        clearTimeout(spinTimeout);
        var degrees = startAngle * 180 / Math.PI + 90;
        var arcd = arc * 180 / Math.PI;
        var index = Math.floor((360 - degrees % 360) / arcd);
        ctx.save();
        ctx.font = 'bold 30px Helvetica, Arial';
        var text = options[index];
        ctx.fillText(text, 250 - ctx.measureText(text).width / 2, 250 + 10);
        ctx.restore();
    }

    function easeOut(t, b, c, d) {
        var ts = (t /= d) * t;
        var tc = ts * t;
        return b + c * (tc + -3 * ts + 3 * t);
    }

    drawRouletteWheel();

    <input type="button" value="spin" style="float:left;" id='spin'/>
    <canvas id="canvas" width="500" height="500"></canvas>
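The core of the fix above is converting each user's percentage into an arc length in radians (a full circle is 2π, so pct% maps to 2π·pct/100, i.e. π·pct/50, which is what Math.PI * pct / 50 computes) while accumulating the start angle. A language-neutral sketch of that bookkeeping, using the sample data from the question:

```python
import math

def percentage_arcs(options):
    """Turn (name, 'NN%') pairs into (name, start_angle, arc) triples in radians."""
    segments = []
    angle = 0.0
    for name, pct_str in options:
        pct = float(pct_str.rstrip('%'))
        arc = math.pi * pct / 50   # same formula as Math.PI * pct / 50 in the JS
        segments.append((name, angle, arc))
        angle += arc               # the next segment starts where this one ends
    return segments

segments = percentage_arcs([('@david', '10%'), ('@burn', '90%')])
```

If the percentages sum to 100, the accumulated angle ends at exactly 2π, so the wheel closes without gaps or overlap.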
doc_747
However, if the file is played and stopped many times, it starts throwing the following exception when I call MediaPlayer.Play(song):

    Song playback failed. Please verify that the song is not DRM protected.
    DRM protected songs are not supported for creator games.

If I try to access MediaPlayer.State in such a scenario, it gives me the following error:

    Value does not fall within the expected range.

Any attempt to play the file after this fails with the same error. The file can only be played again after terminating and relaunching the app. I have also checked the properties of the file and its protection is Off. Kindly help me in case any of you have come across the same issue and have a solution for it. Thank you.

A: Can you try to create the MediaElement dynamically?

    MediaElement ME = new MediaElement();
    ME.Source = new Uri("source of file");
    ME.Play();
doc_748
I have a column of dates with a value per date:

                 value
    12-01-2014       1
    13-01-2014       2
    ....
    01-05-2014       5

I want to group them into:

1. (Monday, Tuesday, ..., Saturday, Sunday)
2. (Workday, Weekend)

How could I achieve that in pandas?

A: Make sure your dates column is a datetime object and use the datetime attributes:

    df = pd.DataFrame({'dates':['1/1/15','1/2/15','1/3/15','1/4/15','1/5/15','1/6/15',
                                '1/7/15','1/8/15','1/9/15','1/10/15','1/11/15','1/12/15'],
                       'values':[1,2,3,4,5,1,2,3,1,2,3,4]})
    df['dates'] = pd.to_datetime(df['dates'])
    df['dayofweek'] = df['dates'].apply(lambda x: x.dayofweek)

            dates  values  dayofweek
    0  2015-01-01       1          3
    1  2015-01-02       2          4
    2  2015-01-03       3          5
    3  2015-01-04       4          6
    4  2015-01-05       5          0
    5  2015-01-06       1          1
    6  2015-01-07       2          2
    7  2015-01-08       3          3
    8  2015-01-09       1          4
    9  2015-01-10       2          5
    10 2015-01-11       3          6
    11 2015-01-12       4          0

    df.groupby(df['dates'].apply(lambda x: x.dayofweek)).sum()
    df.groupby(df['dates'].apply(lambda x: 0 if x.dayofweek in [5,6] else 1)).sum()

Output:

    In [1]: df.groupby(df['dates'].apply(lambda x: x.dayofweek)).sum()
    Out[1]:
           values
    dates
    0           9
    1           1
    2           2
    3           4
    4           3
    5           5
    6           7

    In [2]: df.groupby(df['dates'].apply(lambda x: 0 if x.dayofweek in [5,6] else 1)).sum()
    Out[2]:
           values
    dates
    0          12
    1          19
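The weekday/weekend split used above can be checked without pandas: `datetime.date.weekday()` returns 0 for Monday through 6 for Sunday, so 5 and 6 mark the weekend. A small stdlib-only sketch of the same grouping logic, applied to the first six dates of the example:

```python
from datetime import date
from collections import defaultdict

# (date, value) pairs: the first six rows of the example data above
rows = [(date(2015, 1, d), v) for d, v in
        [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 1)]]

by_weekday = defaultdict(int)   # keyed 0 (Mon) .. 6 (Sun), like pandas' dayofweek
by_kind = defaultdict(int)      # keyed 'weekend' / 'workday'

for day, value in rows:
    by_weekday[day.weekday()] += value
    by_kind['weekend' if day.weekday() in (5, 6) else 'workday'] += value
```

2015-01-01 is a Thursday, so the weekend total here is 3 + 4 (the Saturday and Sunday rows) and everything else counts as workday.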
doc_749
Table: shopping

    shop_id  shop_name  shop_time
    1        Brian      40
    2        Brian      31
    3        Tom        20
    4        Brian      30

Table: bananas

    banana_id  banana_amount  banana_person
    1          1              Brian
    2          1              Brian

I now want it to print:

    Name: Tom | Time: 20 | Bananas: 0
    Name: Brian | Time: 101 | Bananas: 2

I used this code:

    $result = dbquery("SELECT tz.*, tt.*, SUM(shop_time) as shoptime, count(banana_amount) as bananas
                       FROM shopping tt
                       LEFT OUTER JOIN bananas tz ON tt.shop_name=tz.banana_person
                       GROUP by banana_person
                       LIMIT 40");
    while ($data5 = dbarray($result)) {
        echo 'Name: '.$data5["shop_name"].' | Time: '.$data5["shoptime"].' | Bananas: '.$data5["bananas"].'<br>';
    }

The problem is that I get this instead:

    Name: Tom | Time: 20 | Bananas: 0
    Name: Brian | Time: 202 | Bananas: 6

I just don't know how to get around this.

A: The problem is that you are constructing a cross product of the two tables, which multiplies the results by the number of rows in the opposite table. To solve this, first calculate the aggregate of one of the tables in a derived table and join this aggregated result to the other table.

    SELECT shop_name, shoptime, IFNULL(SUM(banana_amount), 0)
    FROM (
        SELECT shop_name, SUM(shop_time) as shoptime
        FROM shopping
        GROUP BY shop_name
    ) tt
    LEFT JOIN bananas tz ON tt.shop_name = tz.banana_person
    GROUP BY shop_name

A: Using * is the issue (since you are using GROUP BY). Also, the SUM(shop_time) is being multiplied by as many rows as there are in bananas, hence you are getting 202 (for the two rows in bananas). Try this query:

    SELECT tt.shop_name,
           SUM(shop_time) AS shoptime,
           IFNULL(banana_amount, 0) AS bananas
    FROM shopping tt
    LEFT OUTER JOIN (SELECT banana_person,
                            SUM(banana_amount) AS banana_amount
                     FROM bananas
                     GROUP BY banana_person) tz
      ON tt.shop_name = tz.banana_person
    GROUP BY shop_name;

A:

    select xx.shop_name,
           xx.tot_time,
           coalesce(yy.tot_bananas, 0) as tot_bananas
    from (
        select shop_name,
               sum(shop_time) as tot_time
        from shopping
        group by shop_name
    ) as xx
    left join (
        select banana_person,
               sum(banana_amount) as tot_bananas
        from bananas
        group by banana_person
    ) as yy on xx.shop_name = yy.banana_person
    order by xx.shop_name;
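The fan-out described above (each shopping row matching several banana rows, so the SUM gets multiplied) is easy to reproduce and verify with an in-memory SQLite database; the aggregate-then-join query below mirrors the accepted approach:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE shopping (shop_id INTEGER, shop_name TEXT, shop_time INTEGER);
INSERT INTO shopping VALUES (1,'Brian',40),(2,'Brian',31),(3,'Tom',20),(4,'Brian',30);
CREATE TABLE bananas (banana_id INTEGER, banana_amount INTEGER, banana_person TEXT);
INSERT INTO bananas VALUES (1,1,'Brian'),(2,1,'Brian');
""")

# Aggregate each table first, then join, so neither SUM is multiplied up.
rows = con.execute("""
    SELECT tt.shop_name, tt.shoptime, IFNULL(tz.bananas, 0)
    FROM (SELECT shop_name, SUM(shop_time) AS shoptime
          FROM shopping GROUP BY shop_name) tt
    LEFT JOIN (SELECT banana_person, SUM(banana_amount) AS bananas
               FROM bananas GROUP BY banana_person) tz
      ON tt.shop_name = tz.banana_person
    ORDER BY tt.shop_name
""").fetchall()
```

With the sample data, this yields Brian with 101 minutes and 2 bananas, and Tom with 20 minutes and 0 bananas, exactly the wanted output.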
doc_750
I read this tutorial to see how I can show more than one coordinate in Flutter. The tutorial adds elements manually; I need to add them from a REST API. I created a foreach to retrieve all elements in the array, then I add all the coordinates to a list. The problem: the list resets in the initState method, so I can't take the length of the list to loop over all coordinates. This is the code:

    MapController mapController;
    Map<String, LatLng> coords;
    List<Marker> markers;
    List<Map<String, LatLng>> listado = [];

    Future<Null> fetchPost() async {
      final response = await http.get(url);
      final responseJson = json.decode(response.body);
      for (Map user in responseJson) {
        coords.putIfAbsent("Test", () => new LatLng(user['lat'], user['long']));
        listado.add(coords);
        // print(listado.toList());
      }
    }

    @override
    void initState() {
      super.initState();
      mapController = new MapController();
      coords = new Map<String, LatLng>();
      fetchPost();
      markers = new List<Marker>();
      for (int i = 0; i < listado.length; i++) {
        print(listado[1].values.elementAt(i));
        markers.add(new Marker(
            width: 80.0,
            height: 80.0,
            point: listado[1].values.elementAt(i),
            builder: (ctx) => new Icon(Icons.home, color: Colors.red[300])));
      }
    }

    @override
    Widget build(BuildContext context) {
      return new FlutterMap(
        options: new MapOptions(
          center: new LatLng(37.7525244, 139.1650556),
          zoom: 5.0,
        ),
        mapController: mapController,
        layers: [
          new TileLayerOptions(
              urlTemplate: "https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png",
              subdomains: ['a', 'b', 'c']),
          new MarkerLayerOptions(markers: markers)
        ],
      );
    }

    final String url = 'https://my-json-server.typicode.com/tobiobi/myjasonserver/coordinates';

    //STATES
    class UserDetails {
      final String name;
      final double lat, long;
      UserDetails({this.name, this.lat, this.long});
      factory UserDetails.fromJson(Map<dynamic, dynamic> json) {
        return new UserDetails(
          name: json['name'],
          lat: json['lat'],
          long: json['long'],
        );
      }
    }

So, how can I get all the coordinates into the list and iterate over them in a loop?

UPDATE

I tried:

    void initState() {
      super.initState();
      mapController = new MapController();
      coords = new Map<String, LatLng>();
      fetchPost().then((data) {
        print(data);
        for (int i = 0; i < listado.length; i++) {
          markers.add(new Marker(
              width: 80.0,
              height: 80.0,
              point: coords.values.elementAt(i),
              builder: (ctx) => new Icon(Icons.home, color: Colors.red[300])));
        }
      });
    }

But it returns 'the getter iterator was called on null'. Printing the data gives this: MapsPageState data: null. Inside this I have the listado array and coords, but how can I get them?

UPDATE 2: SOLUTION

    import 'package:flutter/material.dart';
    import 'package:flutter_map/flutter_map.dart';
    import 'package:latlong/latlong.dart';
    import 'dart:convert';
    import 'package:http/http.dart' as http;
    import 'package:igota/screens/partials/alertmessages.dart';

    class MapsPage extends StatefulWidget {
      static String tag = 'maps-page';
      @override
      MapsPageState createState() => new MapsPageState();
    }

    class MapsPageState extends State<MapsPage> {
      MapController mapController;
      Map<String, LatLng> coords;
      List<Marker> markers;
      List<Map<String, LatLng>> list = [];
      int _counter = 0;
      bool loading;

      Future<Null> fetchPost() async {
        list = List();
        markers = List();
        mapController = new MapController();
        coords = new Map<String, LatLng>();
        final response = await http
            .get('https://my-json-server.typicode.com/tobiobi/myjasonserver/coordinates')
            .catchError((error) {
          print(error.toString());
          AlertMessages.general(context, 'No ha sido posible acceder a los datos');
        });
        final List responseJson = json.decode(response.body) as List;
        for (Map<String, dynamic> data in responseJson) {
          _counter++;
          coords.putIfAbsent(
              "Test $_counter",
              () => new LatLng(double.parse(data['lat'].toString()),
                  double.parse(data['long'].toString())));
          list.add(coords);
          loading = false;
        }
        return;
      }

      @override
      void initState() {
        loading = true;
        super.initState();
        fetchPost().then((data) {
          for (int i = 0; i < list.length; i++) {
            markers.add(new Marker(
                width: 80.0,
                height: 80.0,
                point: list[0].values.elementAt(i),
                builder: (ctx) => new Icon(Icons.home, color: Colors.red[300])));
          }
          setState(() {});
        }).catchError((error) {
          print(error.toString());
          AlertMessages.general(context, 'Problemas internos de código');
        });
      }

      @override
      Widget build(BuildContext context) {
        if (loading) {
          return new Container(
              color: Colors.red[300],
              child: new Center(
                child: new CircularProgressIndicator(
                  valueColor: new AlwaysStoppedAnimation<Color>(Colors.white),
                ),
              ));
        } else {
          return new FlutterMap(
            options: new MapOptions(
              center: new LatLng(37.7525244, 139.1650556),
              zoom: 5.0,
            ),
            mapController: mapController,
            layers: [
              new TileLayerOptions(
                  urlTemplate: "https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png",
                  subdomains: ['a', 'b', 'c']),
              new MarkerLayerOptions(markers: markers)
            ],
          );
        }
      }
    }

A: The main reason listado.length is zero when you try to iterate through it is that you didn't wait for the fetchPost() method to finish its execution. You declared fetchPost() as a Future, which means it runs asynchronously. To ensure listado has data by the time you attempt to iterate through it, perform the iteration in a callback on fetchPost(). So your code should look something like this:

    fetchPost().then((data) {
      for (int i = 0; i < listado.length; i++) {
        print(listado[1].values.elementAt(i));
        markers.add(new Marker(
            width: 80.0,
            height: 80.0,
            point: listado[1].values.elementAt(i),
            builder: (ctx) => new Icon(Icons.home, color: Colors.red[300])));
      }
    });

This way, once fetchPost() completes, the callback method will run and listado will have the latest data. I'd also advise you to introduce an isLoadingData state which you set to true whenever you call fetchPost() and to false at the end of the callback, just so you can let the user know that some processing is happening in the background and they don't have to stare at a blank page without any feedback. Hope this helps.

A: Use the answer from @user8518697 and then do the following:

- initialize markers as List<Marker> markers = new List();
- use for (int i = 0; i < responseJson.length; i++) instead of for (Map user in responseJson)

I also want to say thanks for your code. I have been able to use it to display multiple markers stored in a database.
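The bug and the fix above are not Dart-specific: kicking off an async fetch and immediately reading the list gives an empty result, while waiting for it (as Dart's .then callback does) sees the filled list. A minimal Python asyncio analogue of the same race, with a stubbed fetch standing in for the real HTTP call:

```python
import asyncio

coords = []

async def fetch_post():
    # Stand-in for the real HTTP request; the await yields control like real I/O.
    await asyncio.sleep(0)
    coords.extend([(40.4, -3.7), (48.9, 2.35)])

async def main():
    # Wrong: fire-and-forget, like calling fetchPost(); then reading right away.
    task = asyncio.ensure_future(fetch_post())
    length_before_wait = len(coords)   # the fetch has not run yet
    await task                         # let the stray task finish

    coords.clear()

    # Right: wait for the fetch, like fetchPost().then((_) { ...build markers... }).
    await fetch_post()
    length_after_wait = len(coords)
    return length_before_wait, length_after_wait

wrong, right = asyncio.run(main())
```

Reading before the await sees zero coordinates; reading inside the continuation sees all of them, which is exactly why the marker loop belongs in the .then callback.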
doc_751
More details. The context: I have a dojox.data.grid table on the screen that can be updated. The modified data is correctly passed to the server as a JSON string, like this:

    data = {
        "deletedItems": {},
        "newItems": {},
        "modifiedItems": {
            "2890": {
                "idFacture": "2890",
                "idClient": "175",
                "idAffaire": "1323",
                "idContrat": "2234",
                "raisonSociale": "xxxxxx",
                "nomAffaire": "xxxxxx",
                "nrFacture": "xxxxx",
                "dateFacture": "2012-12-06",
                "montantFacture": "160000.00",
                "pourcentageFacture": "64.88",
                "pourcentageCalcule": "32.44",
                "noteFacture": "",
                "dateTransfert": "",
                "aTransferer": true,
                "typeDocument": "Facture",
                "factureSoldee": false,
                "montantTotalHeures": "0.00",
                "pourcentageTotalHeures": "0.00",
                "montantTotalMateriel": "0.00",
                "pourcentageTotalMateriel": "0.00",
                "montantTotalSousTraitance": "160000.00",
                "pourcentageTotalSousTraitance": "40.94"
            },
            "2892": {
                "idFacture": "2892",
                "idClient": "50",
                "idAffaire": "1649",
                "idContrat": "2713",
                "raisonSociale": "xxxxx",
                "nomAffaire": "xxxxx",
                "nrFacture": "xxxxx",
                "dateFacture": "2012-12-07",
                "montantFacture": "12004.50",
                "pourcentageFacture": "0.00",
                "pourcentageCalcule": "41.94",
                "noteFacture": "",
                "dateTransfert": "",
                "aTransferer": true,
                "typeDocument": "Facture",
                "factureSoldee": false,
                "montantTotalHeures": "12004.50",
                "pourcentageTotalHeures": "41.95",
                "montantTotalMateriel": "0.00",
                "pourcentageTotalMateriel": "0.00",
                "montantTotalSousTraitance": "0.00",
                "pourcentageTotalSousTraitance": "0.00"
            }
        }
    }

You can see that the data contains three elements: "deletedItems" (empty), "createdItems" (empty) and "modifiedItems". In the PHP code I have the following commands:

    $srvJson = new Services_JSON(SERVICES_JSON_LOOSE_TYPE);
    $data = $srvJson->decode($data);

where $data is filled as above.

Normally, after that last statement the following PHP variables are set:

- $data["deletedItems"]
- $data["createdItems"]
- $data["modifiedItems"]

Here is the problem: if on the production server there are a lot of modified rows (roughly > 30) in the table, the modified data is correctly passed to the server BUT $data["modifiedItems"] is not set. If I only modify a few rows, $data["modifiedItems"] is set correctly. I can modify the whole dataset piece by piece, but not in one go. I suppose this is a question of server settings, but which? I would appreciate any suggestion. Best regards, Roger. PS: Sorry for my English.

A: Since it is not valid JSON (checked with http://jsonlint.com/), the results of json_decode() in PHP 5.3.10 and PHP 5.2.13 are different! While 5.3 returns nothing, 5.2.13 returns the initial string. Probably your code has some error correction somewhere. See the different results of the different PHP versions on 3v4l.org! When we remove `data =` from your JSON it does not throw errors.

A: Based on your issue, you can check the following things on the server:

- whether magic_quotes is on or off on the server compared to your local server
- whether Services_JSON is compatible with PHP 5.3
- Services_JSON is a PEAR PHP library, so please check that it works properly with all PHP extensions: http://pear.php.net/bugs/search.php?cmd=display&package_name%5B%5D=Services_JSON

Or you can use the default PHP JSON library for encoding and decoding. May this help you.
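One way to narrow this kind of problem down is to check whether the payload is actually valid JSON before blaming the decoder: as the first answer notes, a `data = {...}` string (the whole assignment, not just the braces) is not valid JSON, and decoder behaviour on invalid input differs between library versions. The check below uses Python's strict json module, with a trimmed-down version of the payload, just to make the distinction visible:

```python
import json

valid = '{"deletedItems":{},"newItems":{},"modifiedItems":{"2890":{"idFacture":"2890"}}}'
invalid = 'data = ' + valid          # the leading assignment makes it invalid JSON

parsed = json.loads(valid)

try:
    json.loads(invalid)
    invalid_accepted = True
except ValueError:                   # a strict parser rejects the whole string
    invalid_accepted = False
```

A lenient decoder (like Services_JSON in loose mode, or an older PHP version) may silently return something for the invalid string, which is exactly the version-dependent behaviour described in the first answer.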
doc_752
    import 'package:get/get.dart';
    import '../../../common/repo.dart';
    import '../../../models/Channel.dart';
    import '../../../models/Item.dart';
    import '../../../models/enumn/stories_type.dart';
    import '../../../models/request/article/article_request.dart';

    class ChannelDetailController extends GetxController {
      var channel = Channel().obs;
      int isFav = 0;
      StoriesType? currentStoriesType;
      ArticleRequest? articleRequest;
      RxList<Item> articles = List<Item>.empty(growable: true).obs;

      void updateChannelFav(int isFav) {
        channel.value.isFav = isFav;
        update();
      }

      Future<void> getChannelArticles(int channelId) async {
        Channel? channelResponse = await Repo.fetchChannelItem(channelId);
        if (channelResponse != null) {
          channel.value = channelResponse;
          if (channelResponse.articleDTOList != null) {
            articles.value = channelResponse.articleDTOList!;
          }
          update();
        }
      }
    }

When I invoke the updateChannelFav function, I find that the controller's articles list is cleared. Why does this happen? Is the controller reset when update() is called?
doc_753
I can't find how to put buttons at the end of each row without them being in a cell. If they have to be in a cell, I would want to remove all the decoration of that cell to make it look like it is "outside" of the table. Any ideas how to do this with Bootstrap? My HTML looks like this for the moment:

    <div class="table-responsive">
      <table class="table table-striped">
        <thead>
          <tr>
            <th>Date</th>
            <th>Heure</th>
            <th>Appel</th>
            <th>En vente</th>
            <th>En vente web</th>
            <th></th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>2014-12-01</td>
            <td>20:00</td>
            <td>141201</td>
            <td><div class="text-center"><span class="glyphicon glyphicon-ok"></span></div></td>
            <td><div class="text-center"><span class="glyphicon glyphicon-ok"></span></div></td>
            <td>
              <a href=""><span class="glyphicon glyphicon-edit"></span></a>
              &nbsp;
              <a href=""><span class="glyphicon glyphicon-trash"></span></a>
            </td>
          </tr>
          <tr>
            <td>2014-12-02</td>
            <td>20:00</td>
            <td>141202</td>
            <td><div class="text-center"><span class="glyphicon glyphicon-ok"></span></div></td>
            <td><div class="text-center"><span class="glyphicon glyphicon-minus"></span></div></td>
            <td>
              <a href=""><span class="glyphicon glyphicon-edit"></span></a>
              &nbsp;
              <a href=""><span class="glyphicon glyphicon-trash"></span></a>
            </td>
          </tr>
          <tr>
            <td>2014-12-03</td>
            <td>20:00</td>
            <td>141203</td>
            <td><div class="text-center"><span class="glyphicon glyphicon-minus"></span></div></td>
            <td><div class="text-center"><span class="glyphicon glyphicon-minus"></span></div></td>
            <td>
              <a href=""><span class="glyphicon glyphicon-edit"></span></a>
              &nbsp;
              <a href=""><span class="glyphicon glyphicon-trash"></span></a>
            </td>
          </tr>
        </tbody>
      </table>
    </div><!-- table-responsive -->

A: Try applying a class like this to the cells that should appear "disconnected":

    .table.table-striped .minimal_cell {
        background: none;
        border-width: 0;
    }

Apply it to the cells containing the icons:

    <td class="minimal_cell">
      <a href=""><span class="glyphicon glyphicon-edit"></span></a>
      &nbsp;
      <a href=""><span class="glyphicon glyphicon-trash"></span></a>
    </td>

jsFiddle: http://jsfiddle.net/eum3sn97/2/
doc_754
I have been getting the following error:

    PHP Warning: POST Content-Length of 8501809 bytes exceeds the limit of 8388608 bytes in Unknown on line 0

I edited php.ini (which is in the same directory as app.yaml), but it doesn't seem to affect the maximum upload size:

    post_max_size = "16M"
    upload_max_filesize = "16M"
    memory_limit = "128M"

Is there somewhere else I should be setting the max size? Any other fields? Thanks!

A: I'm using this in my .htaccess file (this works well on shared hosting where you might not be able to change php.ini):

    ## I need more memory to upload large image
    <IfModule mod_php5.c>
        php_value memory_limit 256M
        php_value post_max_size 50M
        php_value upload_max_filesize 50M
    </IfModule>

Further: in CodeIgniter you can set file size preferences in your config.php file: see here.

A: The PHP runtime Directives documentation may be useful. As you indicated, the php.ini file should be in the same directory as your app.yaml. I'd suggest specifically looking at the file_uploads and max_file_uploads values. (Note that the default for each is 0; try setting it to 1.)

A: I added the below in php.ini (in the same place where app.yaml is located) and it worked:

    post_max_size = "16M"
    upload_max_filesize = "16M"
    memory_limit = "128M"
doc_755
Quick Edit: Yes, I have considered using a parameter; however, I need the filter to be multi-select, which parameters do not offer.

The filter options are the states:

    State
    -----
    NY
    PA
    FL
    SC
    NC
    WV
    TX
    CA

and the data looks like:

    ID | State
    ---+--------------------
    1  | PA, NY, FL, SC
    2  | CA, WV, PA, NY
    3  | NC, SC, TX, FL, NY

Second Edit: I do not have the ability to reshape this data due to the potential number of options per column that I need to filter on (75+ on at least two), which is why I'm asking this question. I was hoping there might be a solution similar to SSRS, where I can populate my filter with Query B and use the results to filter back to Query A.

A: In case someone finds this question and is still seeking the solution! I had a similar need. The use case was a unique list of employees whose records each included a single, concatenated field of all of the cities they'd visited in the year. (The underlying data was correctly shaped with a distinct record for each city visited, but the client wanted a particular display view that worked great for them.) So the client wanted a table that looked like this:

    EMPLOYEE   CITIES VISITED
    Person A   London
    Person B   London,Paris
    Person C   Geneva
    Person D   Geneva,London,Milan

The easiest way for the client to find everyone who had visited London would have been to include a wildcard search filter. As soon as they typed London, the corresponding "Cities Visited" records would have appeared for them to select. However, the client wanted a filter option with preset cities they could click (rather than freehand entry of city names, because they didn't want to guess what all the options were OR risk spelling errors). Turns out it was very easy to do this using a parameter control, with many thanks to Dave Rawlings in the Tableau Community Forums (https://community.tableau.com/thread/210796)! I am not the OP from the post whom Dave assisted, but I was delighted to find it.

1) Create a list of unique options for the filter: I removed dupes in the underlying data to generate a unique list of cities visited. (The OP here would just use a list of states or state abbreviations.)

2) Create a parameter to create your filter options:
   - On the bottom left of the worksheet, create a new parameter. I called mine "Cities."
   - Type is String; Value Options are from a List.
   - Add your unique options one by one to the List. I also included an option called "All."
   - Set the Current Value to "All."

3) Create a calculated field to regulate the parameter. I named mine "CityFilter" and used this formula to set either "All" or the selected city to "Yes":

    IF [Cities]="All" THEN "Yes"
    ELSEIF FIND([CitiesVisited],[Cities]) > 0 THEN "Yes"
    ELSE "No"
    END

Here, "Cities" refers to the city the user has selected from the parameter list, and "CitiesVisited" refers to the concatenated field in the record with multiple, comma-separated values.

4) Create a filter from the calculated field: drag the field to your filters and set the required value to "Yes."

5) Display the parameter as a worksheet filter: right-click the parameter and choose "Show Parameter." This adds it to the list of filters to the right of your worksheet.

6) Display the parameter as a dashboard filter: on your dashboard, on the upper right-hand side of one of your views, choose the Down arrow, then choose "Parameters." Select the parameter you created to add it to the filters list on the dash, too.

Reminder: as of now, parameters are single-select only, so the client can only filter to one city at a time! The multi-select, dynamic parameters Tableau is developing are focused on parameters across multiple data sources, rather than on multiple values within one field. That is because, in theory, this is bad data-ing. But if an example like this helps a client understand and leverage the data, I'm happy to oblige. I hope this answers the original question for the OP or anyone else who could use a quick solution for a project!

A: Reshape your data. Don't have repeating lists in a cell. In your example, reshape to have one data row describing each association of a state to an ID. You should have 13 rows, each with one ID and one state. Then analysis will be more straightforward, regardless of tool. You can read about data modeling and database schema design for more info. Your data violates first normal form. Try to achieve at least second normal form if possible.

A: I am not sure about your database design. Normally in a database there will be a table that holds all the values individually, which is used in cases like yours. I can think of a way; it may be a bit rough, but it will work. If you have access to the database, take the state column into a separate Excel sheet. Using text-to-columns, make separate columns for all values and then pivot those to make a single column. Now in Tableau, using your original database, join both using a full join; this will create a full set of all data. Now create a filter with the column from the Excel sheet. Hope this helps, but make sure to do a thorough QA as you are joining two data sources.
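The calculated field in step 3 is just a substring test over the concatenated column. For anyone wanting to sanity-check the logic, here is the same rule expressed in Python with the answer's example data; note that this kind of plain FIND match has a known weakness: a search term that is a substring of another value (e.g. "NY" inside "NYC") would also match.

```python
# Python's str.find returns a 0-based index (or -1 if absent), while Tableau's
# FIND is 1-based (or 0), so ">= 0" here plays the role of "> 0" in the formula.
def city_filter(cities_visited, selected):
    if selected == "All":
        return "Yes"
    return "Yes" if cities_visited.find(selected) >= 0 else "No"

records = {
    "Person A": "London",
    "Person B": "London,Paris",
    "Person C": "Geneva",
    "Person D": "Geneva,London,Milan",
}

london = sorted(p for p, v in records.items() if city_filter(v, "London") == "Yes")
everyone = [p for p in records if city_filter(records[p], "All") == "Yes"]
```

Selecting "London" keeps Persons A, B and D; selecting "All" keeps every record, which matches what the CityFilter field does in the dashboard.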
doc_756
When I create a new record using Entity Framework (EF), the workflow is not triggered. But it triggers fine when I create a record using Organization Service Proxy. Below is my code to create a new record using EF:

    Entity e = db.Entity.Create();
    // ......
    // e.Entity.Add(e);
    e.SaveChanges();

The above code works fine and creates a new record, but it doesn't trigger the Workflow Process. What could be the solution in this scenario?
doc_757
https://github.com/ServiceStack/ServiceStack/wiki/Messaging-and-redis

It seems to explain the basics very well. What I don't quite understand, though, are the differences and applicable use cases when publishing via the MessageFactory's .CreateMessageProducer().Publish() versus .CreateMessageQueueClient().Publish(). I plan on reviewing the code but wanted to post this here for an "official" explanation.

A: Here are the APIs of IMessageProducer and IMessageQueueClient:

    public interface IMessageProducer : IDisposable
    {
        void Publish<T>(T messageBody);
        void Publish<T>(IMessage<T> message);
    }

    public interface IMessageQueueClient : IMessageProducer
    {
        void Publish(string queueName, byte[] messageBytes);
        void Notify(string queueName, byte[] messageBytes);
        byte[] Get(string queueName, TimeSpan? timeOut);
        byte[] GetAsync(string queueName);
        string WaitForNotifyOnAny(params string[] channelNames);
    }

Basically, a MessageQueueClient is also a MessageProducer, but it contains other fine-grained methods in addition to Publish: to get messages off the queue, and to publish and subscribe to any MQ topics. The typed Publish<T> API on both the message client and the producer has the same behaviour.
doc_758
I followed all the steps mentioned here. Let's say I have created an external table External_Emp which has 3 columns: ID, Name, Dept. When I run the following query:

    select * from External_Emp;

it shows me all the records, which is right. But when I select a specific column or columns, it shows the column name as a row. For example, if I run:

    select Name from External_Emp;

then the output is:

    Name
    -----
    Name
    1
    2
    3

whereas the output should be:

    Name
    ------
    1
    2
    3

Similarly, when I run:

    select ID, Name from External_Emp;

it shows the following output:

    ID | Name
    ---------
    ID | Name
    1  | abc
    2  | xyz
    3  | pqr

whereas the output should be:

    ID | Name
    --------
    1  | abc
    2  | pqr
    3  | xyz

Why is it showing the column names in a separate row? Is that a bug? I checked the data in the CSV file in Azure Data Lake multiple times; it doesn't have repeated column names. Thanks.

A: Drop the external table and the external file format. Then recreate the external file format with FIRST_ROW = 2, which will skip one row as mentioned in the documentation:

    CREATE EXTERNAL FILE FORMAT TextFileFormat
    WITH (
        FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (
            FIELD_TERMINATOR = '|',
            STRING_DELIMITER = '',
            DATE_FORMAT = 'yyyy-MM-dd HH:mm:ss.fff',
            USE_TYPE_DEFAULT = FALSE,
            FIRST_ROW = 2
        )
    );
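What FIRST_ROW = 2 does is the same thing you would do when reading a headered CSV by hand: treat the first line as column names, not data. A quick illustration with Python's csv module and a made-up file mirroring the question's table:

```python
import csv
import io

raw = "ID,Name,Dept\n1,abc,HR\n2,xyz,IT\n3,pqr,HR\n"

# Without skipping, the header line comes back as an ordinary data row,
# which is why "Name" showed up as a value in the question.
no_skip = [row[1] for row in csv.reader(io.StringIO(raw))]

# Skipping the first row (what FIRST_ROW = 2 achieves) leaves only real data.
reader = csv.reader(io.StringIO(raw))
next(reader)                      # consume the header line
skipped = [row[1] for row in reader]
```

The first list contains the literal string "Name" ahead of the real values; the second contains only the data rows, matching the expected output in the question.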
doc_759
The lang document option does the job when I render an HTML file:

    ---
    title: "Mi título"
    lang: es
    format: html
    ---

    ![Cat](fig.png)

but it does not work when I render a PDF:

    ---
    title: "Mi título"
    lang: es
    format: pdf
    ---

    ![Cat](fig.png)

In the PDF file a figure is referred to as "Figure" instead of "Figura". This problem does not occur when a Quarto document is created in an RStudio project; it occurs only for Quarto book projects. I tried to include lang: es in the _quarto.yml file of the Quarto book project, but it produces an error. Any idea about what could be the problem?
doc_760
The first thread ended OK; the second failed to start the JVM. Thanks.

#include "jni.h"
#include <process.h>
#include "Stdafx.h"

//DISPATCH Thread Check
bool DispatchThreadCreated = FALSE;

if (DispatchThreadCreated == FALSE)
{
    HANDLE hDispThread;
    hDispThread = (HANDLE)_beginthread(DispFrontEnd, 0, (void *)dispatchInputs);
    if ((long)hDispThread == -1)
    {
        log.LogError("Thread DispFrontEnd Returned********BG ", (long)hDispThread);
        log.LogError("errno", errno);
        log.LogError("_doserrno", _doserrno);
    }
    else
    {
        logloc->LogMethod("Dispatch Thread CREATED");
        DispatchThreadCreated = TRUE;
        // Wait for the thread to finish
        WaitForSingleObject(hDispThread, INFINITE);
        DispatchThreadCreated = FALSE; // 01_02_2010
        logloc->LogMethod("Dispatch Thread ENDED");
    }
}

if (DispatchThreadCreated == FALSE)
{
    HANDLE hDispThread3;
    logloc->LogMethod("3 : Dispatch Thread CREATED");
    hDispThread3 = (HANDLE)_beginthread(DispFrontEnd, 0, (void *)dispatchInputs);
    if ((long)hDispThread3 == -1)
    {
        log.LogError("3 : Thread DispFrontEnd Returned********BG ", (long)hDispThread3);
        log.LogError("errno", errno);
        log.LogError("_doserrno", _doserrno);
    }
    else
    {
        logloc->LogMethod("3 : Dispatch Thread CREATED");
        DispatchThreadCreated = TRUE;
        // Wait for the thread to finish
        WaitForSingleObject(hDispThread3, INFINITE);
        DispatchThreadCreated = FALSE; // 01_02_2010
        logloc->LogMethod("3 : Dispatch Thread ENDED");
    }
}

void DispFrontEnd(void *indArr)
{
    JNIEnv *env;
    JavaVM *jvm;
    env = create_vm(&jvm); // returns null on the second call ???
}

JNIEnv* create_vm(JavaVM **jvm)
{
    CString str;
    JNIEnv *env;
    JavaVMInitArgs vm_args;
    JavaVMOption options;
    options.optionString = "-Djava.class.path=C:\\dispatch\\lib\\Run.jar;C:\\dispatch\\classes"; // Path to the java source code
    vm_args.version = JNI_VERSION_1_6; // JDK version. This indicates version 1.6
    vm_args.nOptions = 1;
    vm_args.options = &options;
    vm_args.ignoreUnrecognized = 0;
    int ret = JNI_CreateJavaVM(jvm, (void**)&env, &vm_args);
    if (ret < 0)
    {
        env = NULL;
        str.Format("ERROR! create JVM (%d)", ret); // shows this on the second call!! ?
        logloc->LogMethod(str);
    }
    else
    {
        str.Format("JVM %x created Success!", env->GetVersion());
        logloc->LogMethod(str);
    }
    return env;
}

A: Do you really have to start many JVMs? Could you use

jint AttachCurrentThread(JavaVM *vm, JNIEnv **p_env, void *thr_args);

instead? The only thing I know is that a native thread cannot attach to two different JVMs at the same time.
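A sketch of the AttachCurrentThread approach the answer suggests: create the JVM once and have each worker thread attach and detach. The surrounding structure is illustrative, not compilable on its own; it assumes jni.h and a JVM already created elsewhere via JNI_CreateJavaVM:

```cpp
// Sketch: one process-wide JVM, per-thread attach/detach.
static JavaVM *g_jvm = NULL;   // assumed to be set once by JNI_CreateJavaVM

void DispFrontEnd(void *indArr)
{
    JNIEnv *env = NULL;
    // Attach this native thread to the existing JVM instead of creating a new one.
    if (g_jvm->AttachCurrentThread((void **)&env, NULL) != JNI_OK) {
        // handle the error
        return;
    }

    // ... use env to call into Java ...

    // Detach before the thread exits.
    g_jvm->DetachCurrentThread();
}
```

This sidesteps the second JNI_CreateJavaVM call entirely, since both worker threads share the one JVM.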
doc_761
In my app, I have two HTML pages, login.html and home.html. home.html contains 3 pages: menupage, searchpage and resultpage. The project flow is login.html ---> home.html. In home.html, menupage is displayed as the first page. If I choose some option on the menupage it moves to searchpage and then resultpage. Consider that I am currently on the resultpage. If I press the back button on a mobile browser (iPhone Safari, Android Chrome) then it moves to login.html, but I want it to display the searchpage. How can I solve this? Is it possible? [Note: the pages should all be in the single HTML page (home.html).]

A: Use the attribute data-rel="back" on the anchor tag instead of hash navigation; this will take you to the previous page. Look at back linking: Here

A: try

$(document).ready(function(){
    $('mybutton').click(function(){
        parent.history.back();
        return false;
    });
});

or

$(document).ready(function(){
    $('mybutton').click(function(){
        parent.history.back();
    });
});

A: You can try this script in the header of the HTML code:

<script>
$.extend($.mobile, {
    ajaxEnabled: false,
    hashListeningEnabled: false
});
</script>

A: Newer versions of the jQuery Mobile API (I guess newer than 1.5) require adding a 'back' button explicitly in the header or bottom of each page. So, try adding this in your page div tags:

data-add-back-btn="true" data-back-btn-text="Back"

Example:

<div data-role="page" id="page2" data-add-back-btn="true" data-back-btn-text="Back">

A: You can use the nonHistorySelectors option from jQuery Mobile where you do not want to track history.
You can find the detailed documentation here: http://jquerymobile.com/demos/1.0a4.1/#docs/api/globalconfig.html

A: This is for version 1.4.4:

<div data-role="header">
    <h1>CHANGE HOUSE ANIMATION</h1>
    <a href="#" data-rel="back" class="ui-btn-left ui-btn ui-icon-back ui-btn-icon-notext ui-shadow ui-corner-all" data-role="button" role="button">Back</a>
</div>

A: Try using an li; it can be even simpler:

<ul>
    <li><a href="#one" data-role="button" role="button">back</a></li>
</ul>
doc_762
I've tried setting the MinWidth property, and (based on what I could find in the default template) also reduced the NumberBoxMinWidth theme resource, but nothing changes. What am I missing? Thanks in advance.

A: Try selecting your NumberBox, then right-click in designer view and choose Edit Template => Edit a Copy. That should give you a copy of the control template, where you can set up the internals. The width of the text box comes not only from the Width property, but also from Margins and Padding; it may be enough to set those. Otherwise you have to make a template copy and check which component is not letting it become smaller.
doc_763
$ hdfs zkfc
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
        at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.setConf(DFSZKFailoverController.java:122)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:66)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:168)

A: The error speaks for itself: it looks like you forgot to change a setting Hadoop uses.

A: $ sudo service hadoop-hdfs-namenode start
$ sudo -u hdfs hdfs namenode -bootstrapStandby
$ sudo service hadoop-hdfs-namenode start
$ sudo service hadoop-hdfs-zkfc start

This is for configuring the NN. And it works!! Thanks
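The exception means the namenode's configuration does not enable HA. As a hedged sketch, automatic failover is driven by properties like the following (the nameservice and host names are placeholders; per the HDFS HA docs, dfs.ha.automatic-failover.enabled goes in hdfs-site.xml and ha.zookeeper.quorum in core-site.xml, alongside the other required HA properties such as dfs.ha.namenodes.* and the namenode RPC addresses):

```xml
<!-- Illustrative fragment: "mycluster" and the zk hosts are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```

Without an HA-enabled nameservice configured, the zkfc tool has nothing to fail over, hence the HadoopIllegalArgumentException.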
doc_764
ID  IDZONE  IDVALUE  RANK
A1  ZONE-1  100      1
B1  ZONE-1  100      1
C1  ZONE-1  100      1
C1  ZONE-2  200      2
C1  ZONE-3  300      3
C1  ZONE-4  400      4
C1  ZONE-5  500      5
n rows ----

I want to re-display the table so that every ID contains RANK values 1-5, like below (null/0 for the missing values):

ID  IDZONE  IDVALUE  RANK
A1  ZONE-1  100      1
A1  null    0        2
A1  null    0        3
A1  null    0        4
A1  null    0        5
B1  ZONE-1  100      1
B1  null    0        2
B1  null    0        3
B1  null    0        4
B1  null    0        5
C1  ZONE-1  100      1
C1  ZONE-2  200      2
C1  ZONE-3  300      3
C1  ZONE-4  400      4
C1  ZONE-5  500      5
n rows ----

I tried a left join using a separate WITH clause on the RANK column, but nothing worked. Please suggest how we can achieve this.

A: Since you are in Oracle, you can use this technique to create the artificial data you need. Then join to it, and keep the artificial rows that are not in your primary set.

SELECT LEVEL n
FROM DUAL
CONNECT BY LEVEL <= 5;
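Completing the answer's idea as an untested sketch (the table is assumed to be named T): cross-join the distinct IDs with the generated ranks 1-5, then left-join the real rows and default the missing values:

```sql
SELECT ids.ID,
       t.IDZONE,                        -- stays NULL where the rank is missing
       COALESCE(t.IDVALUE, 0) AS IDVALUE,
       r.n AS RANK
FROM   (SELECT DISTINCT ID FROM T) ids
CROSS JOIN (SELECT LEVEL n FROM DUAL CONNECT BY LEVEL <= 5) r
LEFT JOIN T t
       ON t.ID = ids.ID
      AND t.RANK = r.n
ORDER BY ids.ID, r.n;
```

The cross join guarantees five rows per ID, and the left join fills in real values where they exist.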
doc_765
<?php
$location = '';
$location .= '[';
$counting = count($map_data);
// die;
foreach ($map_data as $key => $feed) {
    if ($key > 0) {
        $location .= ',';
    }
    $location .= "['" . $feed['Attraction']['location'] . "', '" . $feed['Attraction']['id'] . "']";
}
$location .= ']';
?>

If I create the array and use json_encode, it shows a missing ";" after the property id, so I used this method instead. setIcon works on mouseover of the marker, but setIcon does not work when the event is triggered programmatically. The JS is below:

var marker, i;
var markers = new Array();
var locations = "<?php echo $location; ?>";
var geocoder = new google.maps.Geocoder();

function initialize(locations) {
    var pointA = new google.maps.LatLng(-12.449380, -50.339844);
    var zoom_level = 4;
    var mapOpt = {
        mapTypeId: google.maps.MapTypeId.ROADMAP,
        center: pointA,
        zoom: zoom_level,
        disableDefaultUI: false,
        zoomControl: true,
        panControl: false,
        scrollwheel: true,
        styles: [
            { "featureType": "landscape.natural", "elementType": "geometry.fill", "stylers": [{ "color": "#ffffff" }] },
            { "featureType": "landscape.man_made", "stylers": [{ "color": "#ffffff" }, { "visibility": "off" }] },
            { "featureType": "water", "stylers": [{ "color": "#80C8E5" }, { "saturation": 0 }] }, // applying map water color
            { "featureType": "road.arterial", "elementType": "geometry", "stylers": [{ "color": "#999999" }] },
            { "elementType": "labels.text.stroke", "stylers": [{ "visibility": "off" }] },
            { "elementType": "labels.text", "stylers": [{ "color": "#333333" }] },
            { "featureType": "poi", "stylers": [{ "visibility": "off" }] }
        ]
    };
    map = new google.maps.Map(document.getElementById("map"), mapOpt);

    for (var i = 0; i < locations.length; i++) {
        (function (address, attr_id) {
            geocoder.geocode({ 'address': address }, function (results) {
                var marker = new google.maps.Marker({
                    map: map,
                    position: results[0].geometry.location,
                    icon: "http://maps.google.com/mapfiles/ms/icons/blue.png"
                });
                markers.push(marker);
                google.maps.event.addListener(marker, 'mouseover', function () {
                    marker.setIcon("http://maps.google.com/mapfiles/ms/icons/red.png");
                });
                google.maps.event.addListener(marker, 'mouseout', function () {
                    marker.setIcon("http://maps.google.com/mapfiles/ms/icons/blue.png");
                });
            });
        })(locations[i][0], locations[i][1]);
    }
}

function mouseover_event(index) {
    markers[index].setVisible(false);
    map.panTo(markers[index].getPosition());
    google.maps.event.trigger(markers[index], 'mouseover');
}

function mouseout_event(index) {
    markers[index].setVisible(true);
    map.panTo(markers[index].getPosition());
    google.maps.event.trigger(markers[index], 'mouseout');
}

google.maps.event.addDomListener(window, 'load', initialize);

Please help me solve this problem.

A: I found the solution. Instead of

var markers = new Array();

I used

var markers = {};

and when storing the marker I used

markers[i] = marker;

instead of

markers.push(marker);

Thanks
doc_766
class Split_audio():
    def __init__(self):
        """Constructor"""

    def create_folder(self, audio):
        """Create a folder for the chunks"""
        # Name of the folder: for example, audio file's name = test.wav ==> folder's name = test
        pos = audio.get_nameAudioFile()
        pos = pos.rfind('.')
        folder = audio.get_nameAudioFile()[0:pos]
        # If the folder exists, overwrite it
        if os.path.exists(folder):
            shutil.rmtree(folder)
        # Create the folder
        os.makedirs(folder)
        return folder

    def split(self, audio, silence_thresh=None, min_silence_len=500):
        """Split the audio file on silence"""
        sound_file = AudioSegment.from_wav(audio.get_nameAudioFile())
        if silence_thresh is None:
            silence_thresh = int(sound_file.dBFS) - 19
        audio_chunks = split_on_silence(sound_file,
                                        silence_thresh=silence_thresh,
                                        min_silence_len=min_silence_len)
        return audio_chunks

    def export(self, audio, path_folder=None):
        """Export the chunks as wav files"""
        audio_chunks = self.split(audio)
        if path_folder is None:
            path_folder = self.create_folder(audio)
        for i, chunk in enumerate(audio_chunks):
            out_file = "chunk{0}.wav".format(i)
            path = "%s/%s" % (path_folder, out_file)
            chunk.export(path, format="wav")

I conclude that the quality of the google_recognize output depends on silence_thresh and min_silence. After testing on 3 different audio files I set the values to silence_thresh = dBFS of the audio - 19 and min_silence = 500 ms. After 1 month I retested my code on the same audio files and, oops, I got a transcript totally different from the first one. Here are the two results: First result, Second result. Any suggestion?
doc_767
A: I don't think it's possible to list only methods, but you can add a : to the filter textbox to group results by type: If you want to have a keybinding for this, you can pass the text that should be pre-filled via the "args" of the "workbench.action.quickOpen" command (source): { "key": "<keybinding>", "command": "workbench.action.quickOpen", "args": "@:" } Unfortunately the : seems to be preselected, which might be a bug considering other "special characters" like @ and > are not.
doc_768
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.lines as mlines

data = pd.read_csv('lbj.csv')
data2 = data[['PTS', 'AST']]
X1 = data[['Season']]
Y2 = str(data[['AST']])
Y1 = str(data[['PTS']])
plt.tick_params(axis='both', which='both', labelsize=5)
plt.xticks(rotation=50)
season = data.iloc[:, 1]
points = data.iloc[:, -1]
np.polyfit(X1, Y1, 1)
plt.show()

Any advice helps. Thank you.

print(data2.head())

0  20.9  5.9
1  27.2  7.2
2  31.4  6.6
3  27.3  6.0
4  30.0  7.2

A:
1. Do Y1 and Y2 really need to be strings? Maybe the code below is enough. Update: since the 'Season' data are strings, X1 should be a range whose length equals the number of seasons:

X1 = np.arange(data['Season'].size)
Y1 = data['PTS']
Y2 = data['AST']

See this link for the full documentation of np.arange.

2. Regarding polyfit, it returns an array of polynomial coefficients. We can use poly1d to generate an "evaluator" for the polynomial:

Z = np.polyfit(X1, Y1, 1)
poly = np.poly1d(Z)

Of course, these functions have more functionality than just this. See this link for the full documentation of np.polyfit, and this link for np.poly1d.

3. For (basic) plotting, you can do this. Update: 'Season' can directly be the X-axis:

seasons = data['Season']
plt.plot(seasons, Y1, 'blue')
plt.plot(seasons, poly(X1), 'red')
plt.show()

You can do the same for the AST.
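A self-contained sketch of the np.polyfit / np.poly1d pairing described in the answer (synthetic data, not the lbj.csv file from the question):

```python
import numpy as np

# Synthetic points lying exactly on y = 2x + 1
x = np.array([0, 1, 2, 3], dtype=float)
y = 2 * x + 1

# polyfit returns the coefficients, highest degree first: [slope, intercept]
coeffs = np.polyfit(x, y, 1)

# poly1d wraps the coefficients in a callable polynomial "evaluator"
poly = np.poly1d(coeffs)

print(round(float(poly(4)), 6))  # evaluates the fit at x = 4; approximately 9.0
```

In the question's setting, x would be np.arange(data['Season'].size) and y the PTS column, exactly as the answer sets up.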
doc_769
Each item in this list is a pair of numbers that represent the dimensions of rooms in a house:

h = [ [18,12], [14,11], [8,10], [8,10] ]

Write a function named area that computes the total area of all rooms, for example:

> area(h)
530

A: def area(h):
    total_area = 0
    for room in h:
        total_area += room[0] * room[1]
    return total_area
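A quick check of the answer's function against the example data, plus an equivalent one-liner using sum() (the one-liner is just an alternative, not part of the original answer):

```python
def area(h):
    total_area = 0
    for room in h:
        total_area += room[0] * room[1]
    return total_area

h = [[18, 12], [14, 11], [8, 10], [8, 10]]
print(area(h))  # 530

# Equivalent one-liner: sum the width * length products
def area_oneliner(rooms):
    return sum(w * l for w, l in rooms)

print(area_oneliner(h))  # 530
```

Both versions compute 18*12 + 14*11 + 8*10 + 8*10 = 530.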
doc_770
In many other cases I've been able to achieve this behavior by simply setting the constraint's active property to NO. In this case however, it just doesn't work. Wherever I call self.theHatedConstraint.active = NO; it remains active afterwards as the layout doesn't change in the simulator and the debug view hierarchy (this fancy new 3d debugger thingy in Xcode) still lists the constraint on the respective subview. I have placed the above code in each of the following methods of my view: -initWithCoder -awakeFromNib -updateConstraints all without any effect. If I change the constraint's constant instead of deactivating it, it does have the desired effect so I can be sure I have the right constraint and the outlet is set up properly. The only explanation for this weird behavior is that there must be some kind of mechanism that reactivates theHatedConstraint after it's been deactivated. So the next thing I tried was to update the constraint state at a time when I can be sure that all the initialization and loading of the view has finished. I happen to have a collection view as a subview in my custom view so the next thing I did was to call the line of code from above whenever the user taps on any of the collection view cells: - (void)collectionView:(UICollectionView *)collectionView didSelectItemAtIndexPath:(NSIndexPath *)indexPath { self.theHatedConstraint.active = NO; } and finally the constraint was removed from the view. So here's the big question: What happens after awakeFromNib and updateConstraints have been called in a view, regarding its layout constraints? How can it be that a constraint is still active after I've explicitly set its state to inactive in those methods? (I'm not touching theHatedConstraint anywhere else in my code.)
doc_771
I know that the commands: aux_source_directory(. SRC_LIST) add_executable(${PROJECT_NAME} ${SRC_LIST}) add all the source files in the project directory to the project. Also I could say: aux_source_directory(/path/to/folder/ SRC_LIST) add_executable(${PROJECT_NAME} ${SRC_LIST}) to include all source files in a folder. But how can I exclude some specific files (in this example, /path/to/folder/main.cpp and /path/to/folder/CMakeLists.txt)? A: But how can I exclude some specific files Try this (according to Documentation): list (REMOVE_ITEM SRC_LIST /path/to/folder/main.cpp /path/to/folder/CMakeLists.txt) This question looks like related.
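Putting the answer together with the question's snippet as a sketch (note that aux_source_directory only collects source files, so CMakeLists.txt would not normally appear in SRC_LIST anyway):

```cmake
aux_source_directory(/path/to/folder/ SRC_LIST)

# Drop specific files from the collected list
list(REMOVE_ITEM SRC_LIST /path/to/folder/main.cpp)

add_executable(${PROJECT_NAME} ${SRC_LIST})
```

The paths passed to list(REMOVE_ITEM) must match the entries in SRC_LIST exactly, including the directory prefix.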
doc_772
Tnx, any help would be appreciated.

A: Private Sub Command1_Click()
    Dim xlApp As Excel.Application
    Dim xlWB As Excel.Workbook
    Dim xlSH As Excel.Worksheet

    'Open the Excel application
    Set xlApp = New Excel.Application
    'Open the Excel workbook
    Set xlWB = xlApp.Workbooks.Open(FileName:="C:\YourFile.xls")

    'There are two ways to access specific worksheets
    'By index number (the first worksheet in this case)
    Set xlSH = xlWB.Worksheets(1)
    'or by the sheet's name
    Set xlSH = xlWB.Worksheets("TestSheet")

    PrintSheet xlSH, "MyFoot", "MyHead"

    'Close the workbook (optional)
    xlWB.Close
    'Quit Excel (automatically closes all workbooks)
    xlApp.Quit

    'Clean up memory (you must do this)
    Set xlWB = Nothing
    Set xlApp = Nothing
End Sub

Sub PrintSheet(sh As Worksheet, strFooter As String, strHeader As String)
    sh.PageSetup.CenterFooter = strFooter
    sh.PageSetup.CenterHeader = strHeader
    sh.PrintOut
End Sub

A: Yet, to answer your question, you can use:

ActiveWorkbook.PrintOut Copies:=1, Collate:=True

and you can find much information here: http://www.exceltip.com/excel_tips/Printing_in_VBA/210.html

Anyway, I insist, you should accept answers to your previous questions or people won't care to answer your new ones. Max
doc_773
Adding a return false; after the if statement inside my bool function makes it work. I just want to understand why this happens. The following code will execute //contains even though contains_input = "foo":

#include <iostream>
#include <string>

bool char_contains(char *input, const char *contain_input)
{
    std::string contain_input_str = contain_input;
    std::string input_str = input;
    if (input_str.length() > 0 && contain_input_str.length() > 0 &&
        input_str.find(contain_input) != std::string::npos) {
        return true;
    }
}

int main()
{
    char name[] = "Test!";
    if (char_contains(name, "foo")) {
        //contains
    }
}

which is not the desired outcome, as 'Test!' does not contain 'foo'. Adding a return false; statement as follows will not execute //contains, which is the expected behaviour:

bool char_contains(char *input, const char *contain_input)
{
    std::string contain_input_str = contain_input;
    std::string input_str = input;
    if (input_str.length() > 0 && contain_input_str.length() > 0 &&
        input_str.find(contain_input) != std::string::npos) {
        return true;
    }
    return false;
}

Finally, specifying == true will also produce the expected behaviour:

char name[] = "Test!";
if (char_contains(name, "foo") == true) {
    //contains
}

Could someone please explain why this happens? Thank you.

A: If your function has a return type of anything other than void, you need to make sure to always explicitly return a value from it; otherwise you'll invoke undefined behavior (in particular, the value returned from the function will be arbitrary). So adding return false; to the end of your function is the correct thing to do; without it, you're returning something, but it's not defined what. By the way, double-check that you've enabled warnings on your compiler; most compilers will generate a warning like warning: control reaches end of non-void function [-Wreturn-type] when they notice this type of error.
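For reference, a compact version of the fixed function (the parameters are made const char* here so string literals can be passed without a cast; that detail is my change, not the answer's). With every control path returning a value, the behavior is well defined:

```cpp
#include <cassert>
#include <string>

// Every control path returns a value, so the result is well defined.
bool char_contains(const char *input, const char *contain_input)
{
    std::string input_str = input;
    std::string contain_str = contain_input;
    if (!input_str.empty() && !contain_str.empty() &&
        input_str.find(contain_str) != std::string::npos) {
        return true;
    }
    return false;
}
```

Compiling with -Wall (or /W4 on MSVC) would have flagged the original missing return before it could silently produce arbitrary values.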
doc_774
Here's the task: In a CRM system there are two tables, "Contacts" (ContactID, Name, CreateDate) and "ContactActivities" (ContactID, Activity, ActivityDate). Whenever something is modified in the CRM for a contact, a new activity is added to the ContactActivities table with the ContactID of the contact and a string Activity describing the activity. Create a query which indicates for each contact whether a certain activity has happened (1) or not (0). The activity should have happened within a certain time period from the creation of the contact (take 2 weeks).

Here's what I came up with (which seems to work, by checking with SQLFiddle here):

SELECT
    (SELECT SIGN(COUNT(*))
     FROM ContactActivities AS c2
     WHERE c2.Activity = 'opt-in'
       AND c2.ContactID = c1.ContactID
       AND (SELECT MIN(c2.ActivityDate) - c1.CreationDate) < 14
    ) AS OPT_IN,
    (SELECT SIGN(COUNT(*))
     FROM ContactActivities AS c2
     WHERE c2.Activity = 'purchase'
       AND c2.ContactID = c1.ContactID
       AND (SELECT MIN(c2.ActivityDate) - c1.CreationDate) < 14
    ) AS PURCHASE,
    (SELECT SIGN(COUNT(*))
     FROM ContactActivities AS c2
     WHERE c2.Activity = 'deleted'
       AND c2.ContactID = c1.ContactID
       AND (SELECT MIN(c2.ActivityDate) - c1.CreationDate) < 14
    ) AS DELETED
FROM Contacts AS c1

Now I'm wondering (and I'm actually quite sure) that this can be done with better nesting of the WHERE statements, but I don't really know how. I'm happy about any help!
A: Like this, using a join and aggregation:

SELECT c1.ContactID
     , MAX(CASE WHEN c2.Activity = 'opt-in'   THEN 1 ELSE 0 END) AS OPT_IN
     , MAX(CASE WHEN c2.Activity = 'purchase' THEN 1 ELSE 0 END) AS PURCHASE
     , MAX(CASE WHEN c2.Activity = 'deleted'  THEN 1 ELSE 0 END) AS DELETED
FROM Contacts AS c1
LEFT JOIN ContactActivities AS c2
       ON c2.ContactID = c1.ContactID
      AND c2.ActivityDate - c1.CreationDate < 14
GROUP BY c1.ContactID;

Updated fiddle

A: I would check, per ContactID, for a positive sum of a CASE WHEN condition THEN 1 ELSE 0 END:

select c.CONTACTID
     , sum(case when ACTIVITY = 'opt-in'   and datediff(ACTIVITYDATE, CREATIONDATE) <= 14 then 1 else 0 end) > 0 as OPT_IN
     , sum(case when ACTIVITY = 'purchase' and datediff(ACTIVITYDATE, CREATIONDATE) <= 14 then 1 else 0 end) > 0 as PURCHASE
     , sum(case when ACTIVITY = 'deleted'  and datediff(ACTIVITYDATE, CREATIONDATE) <= 14 then 1 else 0 end) > 0 as DELETED
from contacts c
left join contactActivities a on a.CONTACTID = c.CONTACTID
group by c.CONTACTID;

SQL Fiddle here
doc_775
Given a lower and an upper bound input by the user, the function determines the min and the min index within that range. For the test case (lower bound: 2, upper bound: 4), I tried two different versions of the code, with the difference marked below. The following code does not return the expected output:

findMin:
    addi $t0, $a0, 0        # initialise $t0 (current pointer) to lower bound
    addi $t1, $a0, 0        # initialise minimum pointer to lower bound
    lw   $t2, 0($t1)        # initialise min (value) to the lower bound
Loop:
    slt  $t4, $a1, $t0
    bne  $t4, $zero, End    # branch to end if upper < lower
    lw   $t3, 0($t0)        # store the content of the current pointer
    slt  $t4, $t3, $t2      # if current ($t3) < min ($t2), store 1 in $t4
    beq  $t4, $zero, LoopEnd    # if it is 0, go to LoopEnd
    addi $t2, $t3, 0        # store content ($t3) as minimum ($t2)
    addi $v0, $t0, 0        # store the address of min (DIFFERENCE)
LoopEnd:
    addi $t0, $t0, 4        # increment current pointer
    j    Loop               # jump to Loop
End:
    jr   $ra                # return from this function

However, the following code does return the expected value:

findMin:
    addi $t0, $a0, 0        # $t0 is the pointer to the current item
    addi $t1, $a0, 0        # $t1 is the pointer to the minimum item
    lw   $t2, 0($t1)        # $t2 stores the value of the minimum item
loop:
    slt  $t4, $a1, $t0      # check if last pointer < current pointer
    bne  $t4, $zero, exit   # if current pointer > last pointer, exit
    lw   $t3, 0($t0)        # $t3 stores the value of the current item
    slt  $t4, $t3, $t2      # if the current value is less than the minimum value
    beq  $t4, $zero, skip   # if the current value is not less, then skip
    addi $t1, $t0, 0        # minimum pointer = current pointer (DIFFERENCE)
    lw   $t2, 0($t1)        # $t2 stores the value of the minimum item
skip:
    addi $t0, $t0, 4        # move to the next item
    j    loop
exit:
    addi $v0, $t1, 0        # $v0 stores the address of the minimum item (DIFFERENCE)
    jr   $ra                # return from this function

What is the rationale behind this?
The following is the code in its entirety (optional) # arrayFunction.asm .data array: .word 8, 2, 1, 6, 9, 7, 3, 5, 0, 4 newl: .asciiz "\n" .text main: # Print the original content of array # setup the parameter(s) # call the printArray function la $a0, array # base address of array la $a1, 10 # number of elements in array jal printArray # call function # Ask the user for two indices li $v0, 5 # System call code for read_int syscall add $t0, $v0, $zero # store input in $t0 li $v0, 5 # System call code for read_int syscall add $t1, $v0, $zero # store input in $t1 # Call the findMin function # setup the parameter(s) la $a0, array # load address of array into $a0 la $a1, array # load address of array into $a1 sll $t0, $t0, 2 # calculate offset of lower bound sll $t1, $t1, 2 # calculate offset of upper bound add $a0, $a0, $t0 # set $a0 to the lower bound add $a1, $a1, $t1 # set $a1 to the upper bound # call the function jal findMin # call function # Print the min item # place the min item in $t3 for printing addi $t3, $t2, 0 # placing min item in $t3 addi $t4, $v0, 0 # saving the pointer to the min element # Print an integer followed by a newline li $v0, 1 # system call code for print_int addi $a0, $t3, 0 # print $t3 syscall # make system call li $v0, 4 # system call code for print_string la $a0, newl syscall # print newline #Calculate and print the index of min item la $a0, array sub $t3, $t4, $a0 srl $t3, $t3, 2 # Place the min index in $t3 for printing # Print the min index # Print an integer followed by a newline li $v0, 1 # system call code for print_int addi $a0, $t3, 0 # print $t3 syscall # make system call li $v0, 4 # system call code for print_string la $a0, newl syscall # print newline # End of main, make a syscall to "exit" li $v0, 10 # system call code for exit syscall # terminate program ####################################################################### ### Function printArray ### #Input: Array Address in $a0, Number of elements in $a1 #Output: None 
#Purpose: Print array elements #Registers used: $t0, $t1, $t2, $t3 #Assumption: Array element is word size (4-byte) printArray: addi $t1, $a0, 0 #$t1 is the pointer to the item sll $t2, $a1, 2 #$t2 is the offset beyond the last item add $t2, $a0, $t2 #$t2 is pointing beyond the last item l1: beq $t1, $t2, e1 lw $t3, 0($t1) # $t3 is the current item li $v0, 1 # system call code for print_int addi $a0, $t3, 0 # integer to print syscall # print it addi $t1, $t1, 4 j l1 # Another iteration e1: li $v0, 4 # system call code for print_string la $a0, newl # syscall # print newline jr $ra # return from this function ####################################################################### ### Student Function findMin ### #Input: Lower Array Pointer in $a0, Higher Array Pointer in $a1 #Output: $v0 contains the address of min item #Purpose: Find and return the minimum item # between $a0 and $a1 (inclusive) #Registers used: $t0 (counter), $t1 (max add), $t2 (min), $v0 (min pos), $t3 (current item) #Assumption: Array element is word size (4-byte), $a0 <= $a1 findMin: addi $t0, $a0, 0 # initialise $t0 (current pointer) to lower bound addi $t1, $a0, 0 # initialise minimum pointer to upper bound lw $t2, 0($t1) # initialise min (value) to the lower bound Loop: slt $t4, $a1, $t0 bne $t4, $zero, End # branch to end if upper < lower lw, $t3, 0($t0) # store the content of the current pointer slt $t4, $t3, $t2 # if current ($t3) < min ($t2), store 1 in $t4 beq $t4, $zero, LoopEnd # if it is 0, go to LoopEnd addi $t2, $t3, 0 # store content ($t3) as minimum ($t2) addi $t1, $t0, 0 # store the address of min LoopEnd: addi $t0, $t0, 4 # increments current pointer lower bound j Loop # Jump to loop End: addi $v0, $t1, 0 jr $ra # return from this function A: In the first case the problem is that you save the min pointer to the t1 register initially, but on return you expect it to be on v0. 
In cases where the min value is not exactly at the lower-bound index, this won't show up as an issue, because inside the loop you save the newly found minimum's pointer to $v0, so upon return everything is as expected. But in a case like 2,4 the min value is at the lower bound, i.e. at index 2, so the loop never finds a new minimum and nothing is written to $v0; on return it holds some garbage value. Change the beginning to this and it will work fine:

addi $v0, $a0, 0    # initialise minimum pointer to lower bound
lw   $t2, 0($v0)    # initialise min (value) to the lower bound
doc_776
template<size_t N, size_t M>  // N × M matrix
class Matrix {
    // implementation...
};

I managed to implement basic operations such as addition/subtraction, transpose and multiplication. However, I'm having trouble implementing the determinant. I was thinking of implementing it recursively using the Laplace expansion, so I must first implement a way to calculate the i,j minor of a matrix. The problem is, the minor of an N × N matrix is an (N-1) × (N-1) matrix. The following does not compile (the error message is Error C2059 syntax error: '<', pointing to the first line in the function):

template<size_t N>
Matrix<N-1, N-1> Minor(const Matrix<N, N>& mat, size_t i, size_t j) {
    Matrix<N-1, N-1> minor;
    // calculate i,j minor
    return minor;
}

How could I get around this and calculate the minor, while keeping the templated form of the class?

EDIT: I was asked to provide a working example. Here is the relevant part of my code; I tried to keep it as minimal as possible. My Matrix class uses a Vector class, which I also wrote myself. I removed any unrelated code, and also changed any error checks to asserts, as the actual code throws an exception class, which again was written by me.
Here is the Vector.h file: #pragma once #include <vector> #include <cassert> template<size_t S> class Vector { public: Vector(double fInitialValue = 0.0); Vector(std::initializer_list<double> il); // indexing range is 0...S-1 double operator[](size_t i) const; double& operator[](size_t i); private: std::vector<double> m_vec; }; /* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Implementation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ template<size_t S> Vector<S>::Vector(double fInitialValue) : m_vec(S, fInitialValue) { } template<size_t S> Vector<S>::Vector(std::initializer_list<double> il) : m_vec(il) { assert(il.size() == S); } template<size_t S> double Vector<S>::operator[](size_t i) const { return m_vec[i]; } template<size_t S> double& Vector<S>::operator[](size_t i) { return m_vec[i]; } And here is the Matrix.h file: #pragma once #include "Vector.h" template<size_t N, size_t M> class Matrix { public: Matrix(double fInitialValue = 0.0); Matrix(std::initializer_list<Vector<M>> il); // indexing range is 0...N-1, 0...M-1 Vector<M> operator[](int i) const; Vector<M>& operator[](int i); double Determinant() const; private: std::vector<Vector<M>> m_mat; // a collection of row vectors template <size_t N> friend Matrix<N - 1, N - 1> Minor(const Matrix<N, N>& mat, size_t i, size_t j); }; /* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Implementation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */ template<size_t N, size_t M> Matrix<N, M>::Matrix(double fInitialValue) : m_mat(N, Vector<M>(fInitialValue)) {} template<size_t N, size_t M> Matrix<N, M>::Matrix(std::initializer_list<Vector<M>> il) : m_mat(il) { assert(il.size() == N); } template<size_t N, size_t M> Vector<M> Matrix<N, M>::operator[](int i) const { return m_mat[i]; } template<size_t N, size_t M> Vector<M>& Matrix<N, M>::operator[](int i) { return m_mat[i]; } template<size_t N, size_t M> double Matrix<N, M>::Determinant() const { assert(N == M); if (N == 2) { return m_mat[0][0] * m_mat[1][1] - m_mat[0][1] * m_mat[1][0]; } double det = 0; for (size_t j = 0; j < N; 
j++) { if (j % 2) { det += m_mat[0][j] * Minor((*this), 0, j).Determinant(); } else { det -= m_mat[0][j] * Minor((*this), 0, j).Determinant(); } } return det; } template <size_t N> Matrix<N - 1, N - 1> Minor(const Matrix<N, N>& mat, size_t i, size_t j) { Matrix<N - 1, N - 1> minor; for (size_t n = 0; n < i; n++) { for (size_t m = 0; m < j; m++) { minor[n][m] = mat[n][m]; } } for (size_t n = i + 1; n < N; n++) { for (size_t m = 0; m < j; m++) { minor[n - 1][m] = mat[n][m]; } } for (size_t n = 0; n < i; n++) { for (size_t m = j + 1; m < N; m++) { minor[n][m - 1] = mat[n][m]; } } for (size_t n = i + 1; n < N; n++) { for (size_t m = j + 1; m < N; m++) { minor[n - 1][m - 1] = mat[n][m]; } } return minor; } Compiling these along with a simple main.cpp file: #include "Matrix.h" #include <iostream> int main() { Matrix<3, 3> mat = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} }; std::cout << mat.Determinant(); } produces - Error C2760 syntax error: unexpected token '<', expected 'declaration' ...\matrix.h 67 EDIT2: Apparently I had written the template arguments as <N - 1><N - 1> instead of <N -1, N-1> in the implementation of the Minor function. Changing that fixed the error, but introduced a new one - compilation hangs, and after a minute or so I get Error C1060 compiler is out of heap space ...\matrix.h 65
doc_777
BadRequest. Http request failed as the content was not valid: 'Unable to translate bytes [9B] at index 790 from specified code page to Unicode.' I am able to invoke the same API using Postman without any error; however, I can see in the response that there are some unknown characters. Does anyone know how I can work around this issue using Azure Logic Apps?

A: This problem is caused by the encoding of the response from your API: Postman parses the response automatically, but the HTTP action in Logic Apps will not. Since I don't know the encoding of your data, here are some suggestions for your reference.

1. Please check whether the response data is in UTF-8. If not, you can use code like the following to convert the response data in your API:

UTF8.decode(response.bodyBytes)

2. Add an Accept field in your HTTP action headers:

Accept: text/html;charset=US-ASCII, text/html;charset=UTF-8, text/plain;charset=US-ASCII, text/plain;charset=UTF-8
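The byte quoted in the error (0x9B) is not valid on its own in UTF-8, which is why a strict decode fails. A minimal Python sketch (an illustration of the decoding behavior, not Logic Apps code) of the difference between strict and lenient decoding:

```python
# Response body containing a stray 0x9B byte, as hinted at by the error.
body = b"hello \x9b world"

# A strict UTF-8 decode fails, just like the Logic Apps HTTP action does.
try:
    body.decode("utf-8")
except UnicodeDecodeError as e:
    print("strict decode failed:", e)

# A lenient decode substitutes U+FFFD for the bad byte instead of failing.
text = body.decode("utf-8", errors="replace")
print(text)  # hello � world
```

Fixing the upstream API to emit real UTF-8 (suggestion 1 above) is still the cleaner solution; replacement characters only hide the bad bytes.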
doc_778
The app module is:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { AComponent } from './a/a.component';
import { BComponent } from './b/b.component';
import { CComponent } from './c/c.component';

@NgModule({
  declarations: [
    AppComponent,
    AComponent,
    BComponent,
    CComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

The app routing module is:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { AComponent } from './a/a.component';
import { BComponent } from './b/b.component';
import { CComponent } from './c/c.component';
import { AppComponent } from './app.component';

const routes: Routes = [
  { path: 'a', component: AComponent },
  { path: 'a', component: BComponent },
  { path: 'c', component: CComponent },
  { path: '**', component: AppComponent }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule { }

The app.component.html is:

<h3>Routing and Navigation</h3>
<router-outlet></router-outlet>

The A component is:

<h4>It is Component A</h4>
<a class="button" routerLink="a" routerLinkActive='active'>B component</a>
<router-outlet></router-outlet>

The B component is:

<a routerLink="b">C component</a>
<router-outlet></router-outlet>

The C component is:

<p>C component works</p>

Kindly help me with this as I am learning routing in Angular. I would be glad to know the areas of improvement in the code and to get proper guidance. Thank you.

A: Here is a StackBlitz with a working example that has all 3 components: https://angular-7ucy2h.stackblitz.io/a

The thing about Angular routing is that if you want a child route to show inside of a parent route, you will have to add that route as a child.
In your case, C is a child of B and B is a child of A:

const routes: Routes = [
  {
    path: 'a',
    component: AComponent,
    children: [
      {
        path: 'b',
        component: BComponent,
        children: [
          { path: 'c', component: CComponent }
        ]
      },
    ]
  },
];

A: You don't need routing at all to do what you are asking. If components B and C are "inside" component A, then you don't route to them - you just display them:

<h4>It is Component A</h4>
<a class="button" (click)="showBFlg = true">B component</a>
<div *ngIf="showBFlg">
  <app-b></app-b>
  <a class="button" (click)="showCFlg = true">C component</a>
</div>
<app-c *ngIf="showCFlg"></app-c>
doc_779
package sendgrid_failure

import (
    "fmt"
    "net/http"

    "google.golang.org/appengine"
    "google.golang.org/appengine/log"
)

func init() {
    http.HandleFunc("/sendgrid/parse", sendGridHandler)
}

func sendGridHandler(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)
    err := r.ParseMultipartForm(-1)
    if err != nil {
        log.Errorf(ctx, "Unable to parse form: %v", err)
    }
    fmt.Fprint(w, "Test.")
}

When SendGrid POSTs its multipart form, the console shows something similar to the following:

2018/01/04 23:44:08 ERROR: Unable to parse form: open /tmp/multipart-445139883: no file writes permitted on App Engine

App Engine doesn't allow you to read/write files, but Go appears to need that to parse. Is there an App Engine specific library to parse multipart forms, or should we be using a different method from the standard net/http library entirely? We're using the standard Go runtime.

A: The documentation for ParseMultipartForm says:

The whole request body is parsed and up to a total of maxMemory bytes of its file parts are stored in memory, with the remainder stored on disk in temporary files.

The server attempts to write all files to disk because the application passed -1 as maxMemory. Use a value larger than the size of the files you expect to upload.
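Since an inbound-parse payload is usually small, a positive maxMemory keeps everything in RAM and avoids the temp-file write entirely. A minimal sketch (plain net/http with httptest standing in for SendGrid's POST, outside App Engine) showing that in-memory parsing works:

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/http"
	"net/http/httptest"
)

// parseSubject builds a small multipart POST in memory (standing in for
// SendGrid's webhook) and parses it with a 32 MiB in-memory limit, so no
// temporary files are written.
func parseSubject() (string, error) {
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)
	w.WriteField("subject", "hello")
	w.Close()

	r := httptest.NewRequest(http.MethodPost, "/sendgrid/parse", &buf)
	r.Header.Set("Content-Type", w.FormDataContentType())

	// Parts up to 32 MiB stay in memory; nothing is spilled to disk.
	if err := r.ParseMultipartForm(32 << 20); err != nil {
		return "", err
	}
	return r.FormValue("subject"), nil
}

func main() {
	v, err := parseSubject()
	fmt.Println(v, err)
}
```

In the handler above, replacing r.ParseMultipartForm(-1) with a generous positive limit such as 32 << 20 should avoid the "no file writes permitted" error as long as attachments fit in memory.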
doc_780
Please take a look: the proper sum of the presented range is 0, but Excel tells me it sums to 1.818989E-12. When I select a smaller range (e.g. without the first or last cell), it sums properly, and when I change the number format of the range, it sums properly - but those workarounds work only in the worksheet. When I use VBA (and this is actually part of a macro) I'm still getting this strange number - every single WorksheetFunction like Sum, Subtotal or SumIf still returns an improper result; changing the number format of the range doesn't work, and neither does multiplying by 1, so any suggestions are welcome. I can also say that this happens only in this specific case - probably the rest of the analyzed data is fine (=SUM works properly). Also, this is not a variable type issue, because the first version of the code didn't use any variable to hold the sum. Here's the part of the code:

For Each kom In amountRng
    Set baza = Rows("" & prevKom.Offset(1, 0).Row & ":" & kom.Offset(-1, 0).Row & "")
    baza.Copy
    Sheets("roboczy").Activate
    Range("A2").PasteSpecial xlPasteValues

    'multiplying by 1 doesn't work
    'Range("A1").End(xlToRight).Offset(0, 1).Select
    'Selection.Value = 1
    'Selection.Copy
    'Range(Range("V2"), Range("V2").End(xlDown)).PasteSpecial Paste:=xlPasteValues, Operation:=xlMultiply

    'changing of numberformat doesn't work
    'Columns("V:V").NumberFormat = "0.00"

    If IsEmpty(Range("A3")) Then
        Range("M2").Copy
        Range("A4").PasteSpecial xlPasteValues
    Else:
        Range(Range("M2"), Range("M2").End(xlDown)).Copy
        Range("A2").End(xlDown).Offset(2, 0).PasteSpecial xlPasteValues
    End If

    Selection.RemoveDuplicates 1
    Selection.CurrentRegion.Select

    Dim liczba As Single
    For Each zam In Selection
        Range("A1").CurrentRegion.AutoFilter field:=13, Criteria1:=zam.Value
        Set sumowanieRng = Range(Range("V" & Rows.Count).End(xlUp), Range("V2")).Cells.SpecialCells(xlCellTypeVisible)
        sumowanieRng.EntireRow.Copy
        liczba = WorksheetFunction.Sum(sumowanieRng)
        Debug.Print liczba
        If liczba = 0 Then
            Sheets("zerowe").Activate
            Range("A" & Rows.Count).End(xlUp).Offset(1, 0).PasteSpecial xlPasteValues
        Else:
            Sheets("niezerowe").Activate
            Range("A" & Rows.Count).End(xlUp).Offset(1, 0).PasteSpecial xlPasteValues
        End If
        Application.CutCopyMode = False
        Sheets("roboczy").Activate
        If ActiveSheet.AutoFilterMode Then ActiveSheet.AutoFilterMode = False
    Next

    Range("A" & Rows.Count).End(xlUp).Select
    Range(Range("A2"), Selection).EntireRow.Clear
    Sheets(2).Activate
    Set prevKom = kom
Next

A: If you want a more accurate summation, use a non-floating-point data type like Decimal:

Dim d, c, i
For Each c In Range("B2:B25")
    d = d + CDec(c)
    i = i + CDbl(c)
Next c
Debug.Print d, i

Outputs:

 0             1.81898940354586E-12

Keep in mind 1.81898940354586E-12 is VERY close to 0.
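The underlying issue is ordinary binary floating point, not Excel or VBA specifically. A small Python sketch of the same phenomenon and of the exact-decimal fix (mirroring what CDec does in the answer; the payment values are made up for illustration):

```python
from decimal import Decimal

# Ten payments of 0.1 minus a 1.0 refund should sum to exactly zero...
values = [0.1] * 10 + [-1.0]

# ...but 0.1 has no exact binary representation, so the float sum is a
# tiny residue instead of 0 - the same effect as Excel's 1.818989E-12.
float_sum = sum(values)
print(float_sum)  # -1.1102230246251565e-16

# A decimal type accumulates exactly, like VBA's CDec in the answer.
dec_sum = sum(Decimal("0.1") for _ in range(10)) - Decimal("1")
print(dec_sum)  # 0.0
```

This is also why rounding the final result (or comparing against a small tolerance instead of 0) is a common alternative to switching types.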
doc_781
User clicks an “add to cart” link, and a modal window (or alternatively an AJAX load into a div in the same window) is used to show the checkout form. Right now, I'm not clear on the "Drupal way" of how to work properly with Commerce and Ctools. Ctools and its API is the current choice to go with, but I'm willing to consider alternatives to Ctools. So far, I've been unable to include the checkout form in the Ctools modal. The Ctools code works fine with forms such as user register, login, etc. Basically, our checkout configuration includes just 2 checkboxes in the checkout pane. The checkbox fields are included in the checkout with the “Commerce Order Fieldgroup Panes” module, so there is basically just one step for the user to go through in the checkout process. Depending on the user data and submitted checkbox values, we do a redirect or internal payment handling with a third-party payment gateway, for which I've been building a custom payment module that is working fine with Commerce. So back to the modal. I've built a callback function according to the tutorials and Ctools AJAX samples, and I've tried several combinations to include and render the checkout form in a modal, with no success. I'm wondering if the problem lies within the Commerce checkout process, or with the Ctools functions being unable to gather/render the specified checkout form - or probably just that the Commerce checkout API is not used correctly with the Ctools functions. I don't yet know Commerce at a deep level, so the internal checkout procedure is still a bit in the dark. I will not include all the code combinations here now, just the considered alternatives/directions:

- Programmatically include the checkout form with Ctools:

<?php
$checkoutform = ctools_modal_form_wrapper('commerce_checkout_form_checkout', $form_state);
print ajax_render($checkoutform);
?>

The form id is caught from the default checkout page form using form_alter.
The ctools_modal_form_wrapper function does not return a valid checkout form array, so I suspect that Ctools does not support the Commerce checkout form out of the box. I'm considering that $form_state needs to be filled with some extra data before the function call could return the full checkout array - or maybe we need to get the checkout form array using some other method. Or:

- Programmatically include the enabled checkout panes, which would(?) include the rendered checkout form:

<?php
$panes_array = commerce_checkout_panes($conditions = array('enabled' => TRUE), FALSE);
?>

Or:

- Create the order and checkout page programmatically in the callback function and get the form:

<?php
drupal_get_form('commerce_checkout_form_' . $checkout_page['page_id'], $order, $checkout_page);
?>

This form call is copied from commerce_checkout.pages.inc. With this it is possible to get a form array that looks like it might have the needed data to continue. I've tried to render the array with ajax_render(), ctools_modal_render(), and ctools_modal_form_render() - resulting in either a 200 HTTP AJAX error, a blank form (no visible HTML or fields), or just the AJAX loader gif looping in the modal. Or:

- Use a custom form to build the checkboxes and forward the submission data to Commerce, so we would not need to include the actual “commerce_checkout_form_checkout” if it happens that Ctools refuses to work with it. Or:

- Programmatically include the checkout pages (with panes/form included?) using the commerce_checkout_pages() function and a render function.

Of course we could try to open the default checkout URL in a modal, but this would include the whole DOM in the modal, which we don't want. Thank you! -Jussi

A: A simple solution that I can think of is using the module Overlay Paths. Try it and let us know.
doc_782
I tried to run the command "mongod -f /path-to-configuration-file/" and everything went OK, so I tried to give permissions to the db and log folders by running the commands:

1 - chmod 777 /path-to-files
2 - chown mongod:mongod /path-to-files -R

After doing that I tried to open port 40017, but I am still getting the error. My mongod.conf file:

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /bin/rsmongo/log/mongod.log

# Where and how to store data.
storage:
  dbPath: /bin/rsmongo/data/db
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 40017

systemctl status mongod.service result:

● mongod.service - MongoDB Database Server
   Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Ter 2018-12-18 14:02:01 BRST; 9s ago
     Docs: https://docs.mongodb.org/manual
  Process: 3880 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=1/FAILURE)
  Process: 3877 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 3874 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 3872 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)

Dez 18 14:02:01 bon-lap-srv01 systemd[1]: Starting MongoDB Database Server...
Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: 2018-12-18T14:02:01.128-0200 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: about to fork child process, waiting until server is ready for connections.
Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: forked process: 3883 Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: ERROR: child process failed, exited with error number 1 Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: To see additional information in this output, start without the "--fork" option. Dez 18 14:02:01 bon-lap-srv01 systemd[1]: mongod.service: control process exited, code=exited status=1 Dez 18 14:02:01 bon-lap-srv01 systemd[1]: Failed to start MongoDB Database Server. Dez 18 14:02:01 bon-lap-srv01 systemd[1]: Unit mongod.service entered failed state. Dez 18 14:02:01 bon-lap-srv01 systemd[1]: mongod.service failed. Journalctl -xe reult: Dez 18 14:01:01 bon-lap-srv01 CROND[3850]: (root) CMD (run-parts /etc/cron.hourly) Dez 18 14:01:01 bon-lap-srv01 run-parts(/etc/cron.hourly)[3853]: starting 0anacron Dez 18 14:01:01 bon-lap-srv01 run-parts(/etc/cron.hourly)[3859]: finished 0anacron Dez 18 14:01:01 bon-lap-srv01 run-parts(/etc/cron.hourly)[3861]: starting 0yum-hourly.cron Dez 18 14:01:01 bon-lap-srv01 run-parts(/etc/cron.hourly)[3865]: finished 0yum-hourly.cron Dez 18 14:02:01 bon-lap-srv01 polkitd[1006]: Registered Authentication Agent for unix-process:3867:1091339 (system bus name :1.51 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/fr Dez 18 14:02:01 bon-lap-srv01 systemd[1]: Starting MongoDB Database Server... -- Subject: Unidade mongod.service sendo iniciado -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- A unidade mongod.service está sendo iniciada. Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: 2018-12-18T14:02:01.128-0200 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none' Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: about to fork child process, waiting until server is ready for connections. 
Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: forked process: 3883
Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: ERROR: child process failed, exited with error number 1
Dez 18 14:02:01 bon-lap-srv01 mongod[3880]: To see additional information in this output, start without the "--fork" option.
Dez 18 14:02:01 bon-lap-srv01 systemd[1]: mongod.service: control process exited, code=exited status=1
Dez 18 14:02:01 bon-lap-srv01 systemd[1]: Failed to start MongoDB Database Server.
-- Subject: A unidade mongod.service falhou
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- A unidade mongod.service falhou.
--
-- O resultado é failed.
Dez 18 14:02:01 bon-lap-srv01 systemd[1]: Unit mongod.service entered failed state.
Dez 18 14:02:01 bon-lap-srv01 systemd[1]: mongod.service failed.
Dez 18 14:02:01 bon-lap-srv01 polkitd[1006]: Unregistered Authentication Agent for unix-process:3867:1091339 (system bus name :1.51, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, loc

A: SOLVED - I deleted a file called "mongodb-40017.sock" in the /tmp folder, and then it worked as expected.
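The stale socket survives an unclean shutdown because mongod creates a Unix socket named after its port under /tmp and refuses to start while it exists. A hedged sketch of the cleanup (the path follows the mongodb-<port>.sock convention; adjust for your port):

```shell
# Remove a leftover socket from a previous unclean mongod shutdown,
# after which the service can be started again.
SOCK=/tmp/mongodb-40017.sock
if [ -e "$SOCK" ]; then
    rm -f "$SOCK"
    echo "removed stale socket $SOCK"
fi
# sudo systemctl start mongod   # would follow in a real session
```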
doc_783
Here is my query:

select c.branch, c.cust_no, c.name, c.dobirth, c.cust_sex, c.address, c.phone,
       c.group_name, c.group_code, c.INCOR_DATE,
       (l.branch + l.gl_no + l.ac_no) as cust, SUM(l.loan_amt) as lamt,
       l.ac_no, l.cycle, l.exp_date, l.inst_type, l.trx_no, l.ln_period,
       l.full_paid, l.Class_Age, l.disb_date,
       max(h.trx_date) as last_trx,
       (h.principal + h.interest) as instalment,
       (h.principal + h.interest) as overdue,
       h.trx_date
from customer c, loans l, loanhist h
where c.cust_no = l.ac_no
  and l.full_paid != 1
  and c.cust_no = h.ac_no
  and MONTH(h.trx_date) = MONTH(GETDATE())
  and YEAR(h.trx_date) = YEAR(GETDATE())
  and h.trx_type != 'lp'
  and h.trx_date = MAX(h.trx_date)
group by c.branch, c.cust_no, c.name, c.dobirth, c.cust_sex, c.address, c.phone,
         c.group_name, c.group_code, c.INCOR_DATE, l.ac_no, l.cycle, l.exp_date,
         l.inst_type, l.trx_no, l.ln_period, l.full_paid, l.Class_Age, l.disb_date,
         l.branch, l.gl_no, h.principal, h.interest, h.trx_date

Here is the error it was giving me:

Msg 147, Level 15, State 1, Line 11
An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.

A: Move it to the HAVING clause (like the error states). HAVING runs after the GROUP BY:

having h.trx_date = MAX(h.trx_date)

A: You need to use the HAVING clause, which was designed for that:

HAVING h.trx_date = MAX(h.trx_date)

And you place it below the GROUP BY clause.
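The same restriction is easy to reproduce in any SQL engine. A small sqlite3 sketch (a toy table, not the loanhist schema) showing the aggregate rejected in WHERE but accepted in HAVING:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pay (dept TEXT, amt REAL)")
con.executemany("INSERT INTO pay VALUES (?, ?)", [("a", 5), ("a", 10), ("b", 3)])

# An aggregate in WHERE raises an error, as in the question...
try:
    con.execute("SELECT dept, SUM(amt) FROM pay WHERE SUM(amt) > 10 GROUP BY dept")
except sqlite3.OperationalError as e:
    print("WHERE rejected:", e)

# ...while HAVING filters the groups after GROUP BY has run.
rows = con.execute(
    "SELECT dept, SUM(amt) FROM pay GROUP BY dept HAVING SUM(amt) > 10"
).fetchall()
print(rows)  # [('a', 15.0)]
```

WHERE filters individual rows before grouping, so an aggregate has nothing to aggregate over at that point; HAVING filters the already-formed groups.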
doc_784
struct config {
    int x;
    constexpr int multiply() const {
        return x * 3;
    }
};

constexpr config c = {.x = 1};

int main() {
    int x = c.multiply();
    return x;
}

If I compile this with clang and -O0 I get a function call to multiply even though the object c and the function are marked constexpr. If I compile it with -O1 everything gets optimized as expected. GCC, on the other hand, generates no call to multiply. If I change the main to:

int main() {
    constexpr auto y = c.multiply();
    int x = y;
    return x;
}

and compile with clang and -O0, I get no function call and the value 3 directly as a stack variable. The -O1 result is the same as above. So my question is: does constexpr evaluation depend on the compiler's optimization level? I would expect that in the first example the call to multiply would be constexpr and performed at compile time (like GCC does).

BR, go2sh

See https://godbolt.org/z/WvPE5W77h

A: The Standard just requires that a call to a constexpr function is evaluated at compile time if the arguments are constexpr and the result must be constexpr due to context. Basically it just forces more restrictions on the author of the function, thus allowing it to be used in constexpr contexts. Meaning y in the second snippet forces evaluation at compile time. On the other hand, x in the first is an ordinary run-time call. But the as-if rule applies here - as long as the observable effects of the program remain the same, the compiler can generate any instructions it wants. It can evaluate any function, even non-constexpr ones, if it can do so - this happens often in practice with constant propagation. Yes, in general, higher optimization levels will inline more code and push more evaluation to compile time. But "looking at the assembly" is not an observable effect in the sense above, so there are no guarantees. You can use inline to give a hint to inline the function instead of calling it (constexpr implies inline, but for other reasons...), but compilers can ignore it.
Of course, the compiler can evaluate all constexpr functions with constexpr args at compile time, that is why they exist, why clang does not do so with -O0, I do not know. If you need guaranteed compile-time evaluation, use consteval instead.
doc_785
res = re.search(r'Presets = {(.*)Version = 1,', data, re.DOTALL)

What I now want to do is return the two strings surrounding this inner part. Keep in mind this is a multiline string. How can I get the bordering strings - the beginning and end parts in a two-part list would be ideal?

data = """{
    data = {
        friends = {
            max = 0 0,
            min = 0 0,
        },
        family = {
            cars = {
                van = "honda",
                car = "ford",
                bike = "trek",
            },
            presets = {
                location = "italy",
                size = 10,
                travelers = False,
            },
            version = 1,
        },
    },
    stuff = {
        this = "great",
    },
}"""

import re
res = re.search(r'presets = {(.*)version = 1,', data, re.DOTALL)
print res.groups(1)

In this case I would want to return the beginning string:

data = """{
    data = {
        friends = {
            max = 0 0,
            min = 0 0,
        },
        family = {
            cars = {
                van = "honda",
                car = "ford",
                bike = "trek",
            },

And the end string:

        },
    },
    stuff = {
        this = "great",
    },
}"""

A: Regex is really not a good tool for parsing these strings, but you can use re.split to achieve what you want. It can even combine the 2 tasks into one:

begin, middle, end = re.split(r'presets = \{(.*)version = 1,', data, flags=re.DOTALL)

re.split splits the string at matching positions; ordinarily the separator is not in the resulting list. However, if the regular expression contains capturing groups, then the matching contents of the first group are returned in place of the delimiter.
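To make the capturing-group behavior concrete, here is the same call on a short stand-in string (simplified data, not the nested config above):

```python
import re

data = "head presets = { inner stuff version = 1, tail"

# With a capturing group in the pattern, re.split returns the text
# before the match, the captured group, and the text after the match.
begin, middle, end = re.split(r"presets = \{(.*)version = 1,", data, flags=re.DOTALL)
print([begin, middle, end])  # ['head ', ' inner stuff ', ' tail']
```

Without the parentheses around .*, the same call would return only ['head ', ' tail'] - the matched region would be discarded entirely.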
doc_786
For example: I have a sheet with 1000 rows. At this instance, there are only records up to row 25. So, what it does... it adds/appends the new record at row 1001 instead of adding it to the empty row after the last record, i.e. row 26. Here is a dummy sheet link: https://docs.google.com/spreadsheets/d/1mt9G9PWdIvAQsQSWmRb14o8eeocCx9gkOWJT6KHAGa0/edit?usp=sharing

function appendUniqueRows() {
  var ss = SpreadsheetApp.getActive();
  var sourceSheet = ss.getSheetByName('source');
  var destSheet = ss.getSheetByName('destination');
  var sourceData = sourceSheet.getRange("A:D").getValues();
  var destData = destSheet.getRange("E:H").getValues();

  // Check whether destination sheet is empty
  if (destData.length === 1 && "" === destData[0].join('')) {
    // Empty, so ignore the phantom row
    destData = [];
  }

  // Generate hash for comparisons
  var destHash = {};
  destData.forEach(function(row) {
    destHash[row.join('')] = true; // could be anything
  });

  // Concatenate source rows to dest rows if they satisfy a uniqueness filter
  var mergedData = destData.concat(sourceData.filter(function (row) {
    var hashedRow = row.join('');
    if (!destHash.hasOwnProperty(hashedRow)) {
      // This row is unique
      destHash[hashedRow] = true; // Add to hash for future comparisons
      return true; // filter -> true
    }
    return false; // not unique, filter -> false
  }));

  // // Check whether two data sets were the same width
  // var sourceWidth = (sourceData.length > 0) ? sourceData[0].length : 0;
  // var destWidth = (destData.length > 0) ? destData[0].length : 0;
  // if (sourceWidth !== destWidth) {
  //   // Pad out all columns for the new row
  //   var mergedWidth = Math.max(sourceWidth, destWidth);
  //   for (var row = 0; row < mergedData.length; row++) {
  //     for (var col = mergedData[row].length; col < mergedWidth; col++)
  //       mergedData[row].push('');
  //   }
  // }

  // Write merged data to destination sheet
  destSheet.getRange(1, 5, mergedData.length, mergedData[0].length)
           .setValues(mergedData);
}

A: I believe your goal is as follows:
- You want to add the filtered sourceData to the next row after the last row of destData.

I think the reason for your issue is var mergedData = destData.concat(sourceData.filter(function (row) {. In this case, the filtered sourceData is added to destData from var destData = destSheet.getRange("E:H").getValues();, and destSheet.getRange("E:H").getValues() includes the empty rows. So, in this case, how about the following modification?

From:

var sourceData = sourceSheet.getRange("A:D").getValues();
var destData = destSheet.getRange("E:H").getValues();

To:

var sourceData = sourceSheet.getRange("A1:D" + sourceSheet.getLastRow()).getValues();
var destData = destSheet.getRange("E1:H" + destSheet.getLastRow()).getValues();

or

From:

var mergedData = destData.concat(sourceData.filter(function (row) {

To:

var mergedData = destData.filter(row => row.join('')).concat(sourceData.filter(function (row) {

Note:

- When the process cost is considered, the 1st modification will be more suitable.
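Outside Apps Script, the effect of the second modification can be sketched in plain JavaScript (toy rows, hypothetical data): filtering out rows whose cells join to an empty string drops the sheet's blank rows before the concat:

```javascript
// destData as returned by getRange("E:H").getValues(): real rows
// followed by the sheet's empty rows.
const destData = [
  ["a", 1, 2, 3],
  ["", "", "", ""],
  ["", "", "", ""],
];
const sourceData = [["b", 4, 5, 6]];

// row.join('') is "" for an all-blank row, which is falsy, so those
// rows are dropped before the new data is appended.
const mergedData = destData
  .filter((row) => row.join(""))
  .concat(sourceData);

console.log(mergedData); // [ [ 'a', 1, 2, 3 ], [ 'b', 4, 5, 6 ] ]
```

With the blank rows removed, setValues writes the appended record directly under the last real record (row 26 in the question) instead of after row 1000.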
doc_787
org.springframework.security.authentication.InternalAuthenticationServiceException: PreparedStatementCallback; bad SQL grammar [select username, password, active from usr by username = ?]; nested exception is org.postgresql.util.PSQLException: ОШИБКА: ошибка синтаксиса (примерное положение: "username") at org.springframework.security.authentication.dao.DaoAuthenticationProvider.retrieveUser(DaoAuthenticationProvider.java:123) ~[spring-security-core-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.authentication.dao.AbstractUserDetailsAuthenticationProvider.authenticate(AbstractUserDetailsAuthenticationProvider.java:144) ~[spring-security-core-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:175) ~[spring-security-core-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.authentication.ProviderManager.authenticate(ProviderManager.java:200) ~[spring-security-core-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter.attemptAuthentication(UsernamePasswordAuthenticationFilter.java:94) ~[spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:212) ~[spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at 
org.springframework.security.web.csrf.CsrfFilter.doFilterInternal(CsrfFilter.java:124) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:74) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE] at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:215) [spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE] at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178) 
[spring-security-web-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:357) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:270) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:92) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:109) [spring-web-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:490) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:139) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:408) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:853) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1587) [tomcat-embed-core-9.0.21.jar:9.0.21]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.21.jar:9.0.21]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_211]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_211]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.21.jar:9.0.21]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_211]
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:234) ~[spring-jdbc-5.1.8.RELEASE.jar:5.1.8.RELEASE]
Caused by: org.postgresql.util.PSQLException: ERROR: syntax error (approximate position: "username")
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440) ~[postgresql-42.2.5.jar:42.2.5]
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183) ~[postgresql-42.2.5.jar:42.2.5]
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308) ~[postgresql-42.2.5.jar:42.2.5]
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441) ~[postgresql-42.2.5.jar:42.2.5]
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365) ~[postgresql-42.2.5.jar:42.2.5]
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:143) ~[postgresql-42.2.5.jar:42.2.5]
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:106) ~[postgresql-42.2.5.jar:42.2.5]
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeQuery(ProxyPreparedStatement.java:52) ~[HikariCP-3.2.0.jar:na]
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeQuery(HikariProxyPreparedStatement.java) ~[HikariCP-3.2.0.jar:na]
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:677) ~[spring-jdbc-5.1.8.RELEASE.jar:5.1.8.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:616) ~[spring-jdbc-5.1.8.RELEASE.jar:5.1.8.RELEASE]
... 64 common frames omitted

I added all the configuration for security and wrote the queries, but I can't find an error in application.properties either.

@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    DataSource dataSource;

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/", "/registration").permitAll()
                .anyRequest().authenticated()
                .and()
            .formLogin()
                .loginPage("/login")
                .permitAll()
                .and()
            .logout()
                .permitAll();
    }

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.jdbcAuthentication()
            .dataSource(dataSource)
            .passwordEncoder(NoOpPasswordEncoder.getInstance())
            .usersByUsernameQuery("select username, password, active from usr by username = ?")
            .authoritiesByUsernameQuery("select u.username, ur.roles from usr u inner join user_roles ur on u.id = ur.user_id where u.username = ?");
    }
}

spring.datasource.url=jdbc:postgresql://localhost:5432/sweater
spring.datasource.username=postgres
spring.datasource.password=0503040080
spring.jpa.generate-ddl=true
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults=false
spring.mustache.expose-request-attributes=true

The problem occurs when I try to log in after registration. I don't think the problem is caused by the frontend, so I haven't shown that code.

@Entity
@Table(name = "usr")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String username;
    private String password;
    private boolean active;

    @ElementCollection(targetClass = Roles.class, fetch = FetchType.EAGER)
    @CollectionTable(name = "user_roles", joinColumns = @JoinColumn(name = "user_id"))
    @Enumerated(EnumType.STRING)
    private Set<Roles> roles;
}

A: The faulty line is

.usersByUsernameQuery("select username, password, active from usr by username = ?")

It should be

.usersByUsernameQuery("select username, password, active from usr WHERE username = ?")

(the original mixes up a WHERE clause with an incomplete ORDER BY, leaving a bare "by" that PostgreSQL cannot parse).
doc_788
I am trying to integrate Stripe payments into my app. My idea is to have a button that calls a function which goes to a checkout page with Stripe.

<button onClick={() => handleCheckout()}>Payment</button>

const handleCheckout = async () => {
  const response = await fetch('/checkout_sessions', {
    mode: 'no-cors',
    method: 'POST',
    headers: {
      Accept: 'application/json',
      'Access-Control-Allow-Origin': '*',
    },
  })
}

And I have a checkout_sessions.js in another folder which I think will link to the Stripe payment page, but I am still fetching on localhost, which produces URLs like https://www.localhost:3000/www.stripe.com. I have also tried linking directly to the URL (e.g. <Link to='https://react.com/'>), but that doesn't work either. I think I might need to change a route setting or something, but I don't really know what next steps I can take.

A: First of all, creating a session must be done from the server side, so you need to create a route inside the API folder in Next.js.

// api/payment/session.js
import Stripe from "stripe";
import absoluteUrl from "next-absolute-url";

export default async function handler(req, res) {
  if (req.method === "POST") {
    const { priceId, email } = JSON.parse(req.body);
    const { origin } = absoluteUrl(req);
    // redirect the user back to your site after the payment is done;
    // CHECKOUT_SESSION_ID lets you verify the session afterwards
    let success_uri = `${origin}/?session_id={CHECKOUT_SESSION_ID}`;
    const stripe = new Stripe(process.env.NEXT_PUBLIC_STRIPE_SECRET, {
      apiVersion: "2020-08-27",
    });
    const { id } = await stripe.checkout.sessions.create({
      success_url: success_uri,
      cancel_url: origin,
      payment_method_types: ["card"],
      mode: "subscription",
      customer_email: email,
      line_items: [{ price: String(priceId), quantity: 1 }],
    });
    res.status(200).json({ id: id });
  }
}

Now, from the client, you should send the priceId (which comes from Stripe) in the API request body.
// Plans.js
import { loadStripe } from "@stripe/stripe-js";

const handleCheckout = async () => {
  const req = await fetch('/api/payment/session', {
    method: "POST",
    body: JSON.stringify({
      email: user_email,
      priceId: item_price_id
    })
  });
  const { id } = await req.json();

  // client-side Stripe package
  const stripe = await loadStripe(process.env.NEXT_PUBLIC_STRIPE_PUBLISHED);
  if (stripe) {
    // method which redirects the user to the Stripe checkout page
    stripe.redirectToCheckout({
      sessionId: id, // id comes from our API response
    });
  }
}

Make sure you have installed stripe, @stripe/stripe-js, and next-absolute-url (used to get the website's domain dynamically; in development it resolves to the origin of the currently running app).
doc_789
private void foo()
{
    try
    {
        foo1();
    }
    catch (Exception ex)
    {
        if (ex is specificException)
        {
            // Catch only the exceptions that I can really handle:
            // log the exception
            // display a message to the user that something went wrong
            return;
        }
        throw;
    }
}

private void foo1()
{
    try
    {
        // code that could throw an exception
    }
    catch
    {
        throw;
    }
}

So the question is: should I keep using this approach to handle exceptions, or should I log all the exceptions in the low-level functions and then throw them?

P.S. I catch all the unhandled exceptions and then gracefully close the application.

A: As a general rule, you should avoid using try/catch unless you're looking to trap a specific kind of exception.

Example of bad use of try/catch:

try
{
    // some work
}
catch (Exception e)
{
    // this is bad, it traps everything!
}

Good use of try/catch:

try
{
    // some work
}
catch (ArgumentNullException e)
{
    // good, we only trapped the one exception we were interested in;
    // everything else is thrown
}

So how do we handle exceptions then, if we don't have try/catch all over the place? The answer is using an automatic logging tool to do it for you. ELMAH is a great example: you install the NuGet package and voilà, all your exceptions are logged automatically and you can view them.
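The same "handle only what you can, rethrow the rest" pattern applies in other languages too. A minimal JavaScript sketch (the function names and error types here are invented for illustration, not taken from the question's code):

```javascript
// Throws TypeError for non-numbers, RangeError for non-positive numbers.
function parsePositive(s) {
  const n = Number(s);
  if (Number.isNaN(n)) throw new TypeError(`not a number: ${s}`);
  if (n <= 0) throw new RangeError(`not positive: ${s}`);
  return n;
}

function tryParse(s) {
  try {
    return parsePositive(s);
  } catch (err) {
    // handle only the one case we actually know how to recover from...
    if (err instanceof RangeError) return 0;
    // ...and rethrow everything else unchanged
    throw err;
  }
}
```

A bare `throw err` preserves the original error (and its stack), which is the JavaScript analogue of C#'s bare `throw;` as opposed to `throw ex;`.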
doc_790
1.) Uploading each CSV which will include a fault of a certain magnitude. 2.) Comparing the CSV containing the fault with the nominal value. 3.) Being able to print out where the failure occured, or if no failure occured at all. I was wondering which language would make the most sense for these three tasks. We've been given the system in Simulink, and have been able to output well-formatted CSV files containing information about the components which comprise the system. For each component, we have a nominal set of values and a set of values given after injecting a fault. I'd like to be able to compare these two and find where the fault has occurred. So far we've had very little luck in Python or in Matlab itself, and have been strongly considering using C to do this. Any advice on which software will provide which advantages would be fantastic. Thank you. A: If you want to store the outcomes in a database, it might be worth considering a tool like Microsoft SSIS (Sql Server Integration Services) where you could use your CSV files and sets of values as data sources, compare them / perform calculations and store outcomes / datasets in tables. SSIS has an easy enough learning curve and easy to use components as well as support for bespoke SQL / T-SQL and you can visually differentiate your components in separate processes. The package(s) can then be run either manually or in automated batches as desired. https://learn.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-2017 Good luck!
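Whatever tool ends up doing the work, the core of steps 2 and 3 is a field-by-field diff of the faulty data against the nominal values. A minimal sketch in JavaScript (the field names and tolerance are made up for illustration; real signal data would likely need per-component tolerances):

```javascript
// Compare one parsed CSV record against its nominal record and report
// the first field whose deviation exceeds a tolerance, or null if none does.
function findFault(nominal, measured, tolerance = 1e-6) {
  for (const key of Object.keys(nominal)) {
    const delta = Math.abs(nominal[key] - measured[key]);
    if (delta > tolerance) {
      return { component: key, deviation: delta }; // where the failure occurred
    }
  }
  return null; // no failure occurred
}
```

In practice this would run inside a loop over all uploaded CSV files, printing either the faulty component or "no failure" per file.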
doc_791
A: Ugly hack: https://groups.google.com/forum/?fromgroups#!topic/refinery-cms/xiQfYNuWLOs
doc_792
The JSON is this:

{
  "docs": [
    {
      "vehicle": "moto",
      "status": "confirmed",
      "_id": 34401,
      "service": "programmed"
    },
    {
      "vehicle": "moto",
      "status": "confirmed",
      "_id": 34402,
      "service": "programmed"
    }
  ]
}

My code is this:

listOrders.docs.map((order) => {
  order.orderCustomerCount = 1
})

but when I return listOrders, the new attribute orderCustomerCount does not show up.

A: The array map method returns a new array, computed by calling the predicate, in your case

(order) => { order.orderCustomerCount = 1; return undefined; }

(the verbose form; the two functions are equivalent), for each element of the array and storing the return value at the same position in a new array.

[1, 2, 3, 4, 5].map((n) => 2 * n); // returns [2, 4, 6, 8, 10]

In your case, you return undefined from the predicate, because a block-bodied arrow function without a return statement returns undefined by default. A more appropriate solution here would be the array forEach method:

listOrders.docs.forEach((order) => {
  order.orderCustomerCount = 1;
});

This method only calls the predicate for each element of the array. Because arrays and objects are reference types, the elements themselves are mutated to add the property.

Additionally, you might want to consider whether you actually want this. In my own experience, when using objects and arrays, in order to increase predictability I rarely use assignments; instead I construct new arrays of new objects to ensure the source is not mutated:

let newOrders = listOrders.docs.map((order) => {
  let newOrder = {
    ...order, // copy all attributes (properties) of order
    orderCustomerCount: 1, // add orderCustomerCount property equal to 1
  };
  return newOrder;
})

Note the spread syntax. By doing this, the original object is kept as it was, and newOrders along with the elements of the array can be modified independently from it.

A: The .map() function creates and returns a new array.
You need to do the following:

listOrders.docs = listOrders.docs.map((order) => {
  order.orderCustomerCount = 1
  /* NOTE: This line is important, because if you don't return anything
     from this function, your array will have undefined items in it. */
  return order;
})
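The difference between the two approaches in the answers above can be checked in plain Node, using the docs array from the question:

```javascript
const listOrders = {
  docs: [
    { vehicle: 'moto', status: 'confirmed', _id: 34401, service: 'programmed' },
    { vehicle: 'moto', status: 'confirmed', _id: 34402, service: 'programmed' },
  ],
};

// forEach mutates the existing objects in place...
listOrders.docs.forEach((order) => {
  order.orderCustomerCount = 1;
});

// ...whereas map with spread produces a brand-new array of copies,
// leaving the originals free to diverge later.
const copies = listOrders.docs.map((order) => ({ ...order, orderCustomerCount: 1 }));
```

After this runs, both `listOrders.docs` and `copies` carry orderCustomerCount, but the elements of `copies` are distinct objects.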
doc_793
A: The equation of a hypersphere is

(X - Xc)² + (Y - Yc)² + (Z - Zc)² + ... = R²

Write the equations for N+1 points and subtract them pairwise. The quadratic terms cancel out, and a system of N linear equations in N unknowns remains (they are the equations of N bisector hyperplanes). Solve it, then use one of the initial equations to get the radius.

In 1D you use two points:

(X0 - Xc)² = R²
(X1 - Xc)² = R²

Then by subtraction, (X0 - X1)(X0 + X1 - 2Xc) = 0 gives Xc, and then R² = (X0 - Xc)². The generalization is straightforward.

A: Very broad question: calculate, find, and generate?

* To calculate, see the answer by @YvesDaoust.
* To find is very open-ended. Do you know they exist? Is an approximate solution okay? Try looking for least squares fitting of a hypersphere. Standard least squares for N-spheres is non-linear, which can get tricky to do well in high dimensions. I would suggest projecting the points to an N+1 sphere using an N-dimensional stereographic projection, in which case an N-sphere becomes an N+1 hyperplane, which is a linear problem. Once you find the hyperplane, project back to N-space to get the sphere. The cost function is not the original N-sphere least-squares cost function, but I think it's worth it to make the problem linear.
* To generate: not sure I see the difference between this and calculating.
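The subtraction trick in the first answer is easy to check in code. A minimal 2-D instance in JavaScript (the function and variable names are mine): subtracting the circle equation for p0 from those for p1 and p2 leaves a 2x2 linear system in the center (xc, yc), solved here with Cramer's rule.

```javascript
// Center and radius of the circle through three points, via the
// pairwise-subtraction method: the quadratic terms cancel, leaving
//   2(xi - x0) xc + 2(yi - y0) yc = (xi^2 + yi^2) - (x0^2 + y0^2),  i = 1, 2
function circleThrough(p0, p1, p2) {
  const a1 = 2 * (p1.x - p0.x), b1 = 2 * (p1.y - p0.y);
  const c1 = p1.x ** 2 + p1.y ** 2 - p0.x ** 2 - p0.y ** 2;
  const a2 = 2 * (p2.x - p0.x), b2 = 2 * (p2.y - p0.y);
  const c2 = p2.x ** 2 + p2.y ** 2 - p0.x ** 2 - p0.y ** 2;

  const det = a1 * b2 - a2 * b1; // zero when the points are collinear
  const xc = (c1 * b2 - c2 * b1) / det;
  const yc = (a1 * c2 - a2 * c1) / det;

  // one of the initial equations gives the radius
  const r = Math.hypot(p0.x - xc, p0.y - yc);
  return { xc, yc, r };
}
```

The same layout extends to N dimensions with N+1 points and an NxN linear solve instead of Cramer's rule.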
doc_794
{
  "location_id": 73,
  "location_name": "Aunt Mary's Great Coffee Shop",
  "location_town": "London",
  "latitude": 74.567,
  "longitude": 102.435,
  "photo_path": "http://cdn.coffida.com/images/78346822.jpg",
  "avg_overall_rating": 4.5,
  "avg_price_rating": 4.3,
  "avg_quality_rating": 4,
  "avg_clenliness_rating": 3.8,
  "location_reviews": [
    {
      "review_id": 643,
      "overall_rating": 4,
      "price_rating": 2,
      "quality_rating": 3,
      "clenliness_rating": 5,
      "review_body": "Great coffee, but the bathrooms stank!",
      "likes": 4654
    }
  ]
}

I am trying to parse the data from the location_reviews array of this object. I am successfully receiving the data, as I have checked with a console.log, and I have also successfully received and printed the location name and id on the screen:

/* eslint-disable curly */
/* eslint-disable prettier/prettier */
/* eslint-disable react-native/no-inline-styles */
/* eslint-disable semi */
import React, {Component} from 'react';
import {View, Text, ToastAndroid, FlatList, TouchableOpacity, StyleSheet} from 'react-native';

class Location extends Component {
  constructor(props) {
    super(props);
    this.state = {
      locations: [],
      isLoading: true,
    };
  }

  getData = async () => {
    let loc_id = this.props.route.params.location_id;
    return await fetch(`http://10.0.2.2:3333/api/1.0.0/location/${loc_id}`, {
      method: 'get',
      'headers': {
        'Content-Type': 'application/json',
      },
    })
      .then((response) => {
        if (response.status === 200) {
          return response.json();
        } else if (response.status === 404) {
          ToastAndroid.show('Unable to locate location', ToastAndroid.SHORT);
        } else {
          throw 'something went wrong';
        }
      })
      .then((responseJson) => {
        const review = responseJson.location_reviews[0]
        console.log(review);
        this.setState({
          locations: responseJson,
          isLoading: false,
        });
      })
      .catch((error) => {
        ToastAndroid.show(error.toString(), ToastAndroid.SHORT);
      });
  }

  renderItem = ({item, index}) => {
    let { locations } = item;
    if (!locations[0]) return null;
    let details = locations[0]
    return (
      <View>
        <View>
          <Text>{details.review_body}</Text>
          <Text>{details.review_id}</Text>
        </View>
      </View>
    );
  }

  keyExtractor = (item, index) => {
    return index.toString();
  }

  render() {
    return (
      <View style={styles.space}>
        <TouchableOpacity style={styles.space} onPress={() => this.getData()}>
          <View>
            <Text>get data</Text>
            <Text>Location id: {this.props.route.params.location_id}</Text>
          </View>
        </TouchableOpacity>
        <View>
          <Text style={styles.space}>{this.state.locations.location_name}</Text>
          <Text style={styles.space}>{this.state.locations.location_town}</Text>
        </View>
        <View style={{flex: 1}}>
          <FlatList
            data={this.state.dataSource}
            keyExtractor={this.keyExtractor}
            renderItem={this.renderItem}
          />
        </View>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  space: {
    margin: 20,
    padding: 20,
  },
});

export default Location;

However, I am completely stuck on how to access the array of location review data (review_id, review_body, etc.). Any ideas would be appreciated.

A: I managed to get the FlatList to compile using the following:

render() {
  if (this.state.isLoading) {
    return (
      <View>
        <Text>loading...</Text>
      </View>
    )
  } else return (
    <View>
      <Text>Name: {this.state.locations.location_name}</Text>
      <FlatList
        data={this.state.locations.location_reviews}
        renderItem={({item}) => (
          <View>
            <Text>review id: {parseInt(item.review_id)}</Text>
            <Text>body: {item.review_body}</Text>
            <Text>overall rating: {parseInt(item.overall_rating)}</Text>
            <Text>price rating: {parseInt(item.price_rating)}</Text>
            <Text>quality rating: {parseInt(item.quality_rating)}</Text>
            <Text>cleanliness rating: {parseInt(item.clenliness_rating)}</Text>
          </View>
        )}
        keyExtractor={(item) => item.review_id.toString()}
      />
    </View>
  );
}

A: Note that the JSON data you have above is an object, and FlatList must receive an array (it then calls renderItem with each item in that array and displays the resulting list of items).
There are a few changes you should make (I'm assuming that you want the FlatList to display a list of items from the location_reviews array):

* In your constructor, the location field should be an object, and it is initially null:

this.state = {
  location: null,
}

* In your render method, first check that this.state.location exists, and render nothing (or a loading screen) if it does not:

render() {
  const { location } = this.state;
  if (location === null) return null;
  ...
}

* In your render method, pass this.state.location.location_reviews to your FlatList:

const { location } = this.state;

<FlatList
  data={location.location_reviews}
  ...
/>

* Finally, adjust your renderItem method:

renderItem = ({item, index}) => {
  return (
    <View>
      <Text>{item.review_body}</Text>
      <Text>{item.review_id}</Text>
    </View>
  );
}

Note: I have not tested this, as the snack does not work. You might need to adjust a few more things (e.g. I changed this.state.locations to this.state.location, etc.).
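Independent of React Native, the data access itself can be sanity-checked in plain JavaScript, using the JSON from the question (trimmed here to the relevant fields):

```javascript
// The location payload from the question, reduced to what the list needs.
const location = {
  location_name: "Aunt Mary's Great Coffee Shop",
  location_reviews: [
    {
      review_id: 643,
      overall_rating: 4,
      review_body: "Great coffee, but the bathrooms stank!",
    },
  ],
};

// location_reviews is an array, so it maps cleanly to list rows;
// this mirrors what FlatList's renderItem would produce per item.
const rows = location.location_reviews.map(
  (r) => `#${r.review_id}: ${r.review_body} (${r.overall_rating}/5)`
);
```

If this mapping works on the raw object, the remaining work is purely wiring it into the FlatList as the answers describe.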
doc_795
I have 5 modules that I manually launch one after another, in a synchronous manner. I wanted to build a Jenkins pipeline for those 5 modules, so that I won't have to do all the steps manually over and over again.

I have written a pipeline script for the first module, which:

* fetches the repository,
* does mvn clean install,
* does mvn package,
* deploys the module using java -jar ./target/*.jar.

I am planning to write a pipeline script for every module in this fashion, and then I plan to tie them together, one after another. However, this pipeline script gets stuck at the "Deploy" phase, because java -jar successfully deploys the module. (I deploy a server, which keeps the console occupied with logging; nothing interactive.)

I thought about deploying the module using nohup, or as a detached process using &; however, in that case Jenkins will report SUCCESS even if my server throws an exception and exits.

I thought that if I could find a way to mark this job as SUCCESS, then I could proceed to the next module, and so on.

These are the logs at which my Jenkins pipeline hangs:

10-07-2018 11:20:44.179 [main] INFO org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.start - Tomcat started on port(s): 8081 (http)
10-07-2018 11:20:44.180 [main] INFO org.springframework.cloud.netflix.eureka.serviceregistry.EurekaAutoServiceRegistration.onApplicationEvent - Updating port to 8081

This means the server is running successfully; nothing is wrong.

I'm looking for a way to deploy all my modules one after another. I'm new to Jenkins, so I might be asking a stupid question; please feel free to enlighten me with potentially other ways to make it work. Thank you for your efforts :)
doc_796
After much searching around, I found code that seems like it can do that:

cfg.QueueAttributes.Add(QueueAttributeName.MessageRetentionPeriod, 1209600);
cfg.SendTopology.ConfigureErrorSettings = settings => settings.QueueAttributes.Add(QueueAttributeName.MessageRetentionPeriod, 1209600);
cfg.SendTopology.ConfigureDeadLetterSettings = settings => settings.QueueAttributes.Add(QueueAttributeName.MessageRetentionPeriod, 1209600);

However, when building against existing queues, it doesn't seem to update the settings. Here is my full MassTransit setup:

services.AddMassTransit(x =>
{
    x.AddAmazonSqsMessageScheduler();
    x.AddConsumers(assembliesWithConsumers.ToArray());

    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.UseAmazonSqsMessageScheduler();

        cfg.Host("aws", h =>
        {
            if (!String.IsNullOrWhiteSpace(mtSettings.AccessKey))
            {
                h.AccessKey(mtSettings.AccessKey);
            }

            if (!String.IsNullOrWhiteSpace(mtSettings.SecretKey))
            {
                h.SecretKey(mtSettings.SecretKey);
            }

            h.Scope($"{mtSettings.Prefix}-{mtSettings.Environment}", true);

            var sqsConfig = !String.IsNullOrWhiteSpace(mtSettings.SqsServiceUrl)
                ? new AmazonSQSConfig() { ServiceURL = mtSettings.SqsServiceUrl }
                : new AmazonSQSConfig() { RegionEndpoint = RegionEndpoint.GetBySystemName(mtSettings.Region) };
            h.Config(sqsConfig);

            var snsConfig = !String.IsNullOrWhiteSpace(mtSettings.SnsServiceUrl)
                ? new AmazonSimpleNotificationServiceConfig() { ServiceURL = mtSettings.SnsServiceUrl }
                : new AmazonSimpleNotificationServiceConfig() { RegionEndpoint = RegionEndpoint.GetBySystemName(mtSettings.Region) };
            h.Config(snsConfig);
        });

        cfg.QueueAttributes.Add(QueueAttributeName.MessageRetentionPeriod, 1209600);
        cfg.SendTopology.ConfigureErrorSettings = settings => settings.QueueAttributes.Add(QueueAttributeName.MessageRetentionPeriod, 1209600);
        cfg.SendTopology.ConfigureDeadLetterSettings = settings => settings.QueueAttributes.Add(QueueAttributeName.MessageRetentionPeriod, 1209600);

        cfg.ConfigureEndpoints(context, new BusEnvironmentNameFormatter(mtSettings.Environment));
    });
});

Does anyone know how to force an SQS queue settings update with MassTransit?

A: MassTransit will not update settings on existing queues or topics. The only way to get the settings applied is to delete the queue or topic, after which MassTransit will recreate it with the configured settings.
doc_797
In the code-behind where I set the DataGrid template column, I set the ComboBoxItem as follows:

<ComboBoxItem Tag='" + product.ProductGuid + "' Content='" + product.Name + "'></ComboBoxItem>

I need to programmatically select a ComboBoxItem based on the Tag value, not the Content. In the code below, currentProduct holds the ProductGuid value that I need to select, but this code selects the ComboBoxItem whose Content is my currentProduct:

((ComboBox)QuotationDG.Columns[0].GetCellContent(MyData[MyData.Count - 1])).SelectedValue = currentProduct;

Is there a way to set the ComboBox selected value to the ComboBoxItem whose Tag value is currentProduct?

EDIT: Here's the code I use to bind my ComboBox column:

private string CreateDDLColumnEditTemplate(int index, string propertyName, List<Product> ProductList)
{
    StringBuilder CellTemp = new StringBuilder();
    CellTemp.Append("<DataTemplate ");
    CellTemp.Append("xmlns='http://schemas.microsoft.com/winfx/");
    CellTemp.Append("2006/xaml/presentation' ");
    CellTemp.Append("xmlns:x='http://schemas.microsoft.com/winfx/2006/xaml'>");
    CellTemp.Append(String.Format("<ComboBox SelectedValue='{{Binding [{0}], Mode=TwoWay}}'>", 0));
    foreach (Product product in ProductList)
    {
        CellTemp.Append("<ComboBoxItem Tag='" + product.ProductGuid + "' Content='" + product.Name + "'></ComboBoxItem>");
    }
    CellTemp.Append(String.Format("</ComboBox>"));
    CellTemp.Append("</DataTemplate>");
    return CellTemp.ToString();
}

A: What you need is

YourComboBox.SelectedValue = currentProduct

and

<ComboBox SelectedValuePath="Tag" ... />

which means your SelectedValue gets its value from your Tag.

A: The ComboBox has the ability to bind to objects and to specify what is displayed in the dropdown. Why not bind to the Product object you are loading into the Tag, but show something different in the dropdown?
If need be, create an extended partial object to display something wholly different in the dropdown while still accessing the Product item to change the selection. The suggestion here is to simply bind the ComboBox to the Product data item; then you can change the value dynamically without using the Tag property.

For example, I have this data type, similar to your Product type:

public class PriceItem
{
    public string Name { get; set; }
    public int Price { get; set; }
}

and here is the list of data values, which is saved on my VM as Items:

Items = new List<PriceItem>()
{
    new PriceItem() { Name = "Alpha", Price = 100 },
    new PriceItem() { Name = "Beta", Price = 200 },
};

Scenario: two combo boxes, both bound to the Items data. One combo box shows the Name in its dropdown while the other shows the Price. The Name combo box controls the Price combo box: when it changes, the price changes as well.

<ComboBox x:Name="comboName"
          ItemsSource="{Binding Items}"
          DisplayMemberPath="Name" />

<ComboBox x:Name="comboValue"
          ItemsSource="{Binding Items}"
          DisplayMemberPath="Price"
          SelectedValuePath="Price"
          SelectedValue="{Binding SelectedItem.Price, ElementName=comboName, Mode=OneWay}" />

In action, both combo boxes start out blank, but whenever I make a selection in the first one, it changes the second.
doc_798
Example:

"SANAYİ VE TİCARET LİMİTED ŞİRKETİ".toLowerCase().indexOf("şirket".toLowerCase())

returns -1.

Solution 1:

str = "SANAYİ VE TİCARET LİMİTED ŞİRKETİ"
var letters = { "İ": "i", "I": "ı", "Ş": "ş", "Ğ": "ğ", "Ü": "ü", "Ö": "ö", "Ç": "ç" };
str = str.replace(/(([İIŞĞÜÇÖ]))/g, function(letter) {
  return letters[letter];
})
var index = str.toLowerCase().indexOf("şirket".toLowerCase())

Solution 2:

"SANAYİ VE TİCARET LİMİTED ŞİRKETİ".toLocaleLowerCase('tr').indexOf("şirket".toLocaleLowerCase('tr'))

A: As already mentioned in the other answers, localized strings need to be handled differently. If you know for sure which language the strings are in, you can use the locale versions of toLowerCase() and toUpperCase(), i.e. toLocaleLowerCase() and toLocaleUpperCase() respectively. Assuming that the strings are in Turkish, pass the ISO code parameter 'tr' to toLocaleLowerCase().

Therefore

"SANAYİ VE TİCARET LİMİTED ŞİRKETİ".toLowerCase().indexOf("şirket".toLowerCase())

does indeed return -1, but

"SANAYİ VE TİCARET LİMİTED ŞİRKETİ".toLocaleLowerCase('tr').indexOf("şirket".toLocaleLowerCase('tr'))

correctly returns 26.

HTH...

A: Working with localised strings can be tricky due to non-English characters. "SANAYİ VE TİCARET LİMİTED ŞİRKETİ".toLowerCase() returns "sanayi̇ ve ti̇caret li̇mi̇ted şi̇rketi̇", so "SANAYİ VE TİCARET LİMİTED ŞİRKETİ".toLowerCase().indexOf("şi̇rket".toLowerCase()) (note "şi̇rket" instead of "şirket") returns 30.

A: Actually you have two different words: with the default locale, ŞİRKETİ lowercases to şi̇rketi̇, which is not şirket. Take a look at the difference between i̇ and i.

A:

var letters = { "İ": "i", "I": "ı", "Ş": "ş", "Ğ": "ğ", "Ü": "ü", "Ö": "ö", "Ç": "ç" };
str = str.replace(/(([İIŞĞÜÇÖ]))/g, function(letter) {
  return letters[letter];
})

It works for me.
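A small helper wrapping the toLocaleLowerCase('tr') approach from the answers (the function name is mine; this assumes the JavaScript runtime ships full ICU locale data, as recent Node.js versions do by default):

```javascript
// Case-insensitive substring search using Turkish casing rules:
// with the 'tr' locale, 'İ' (U+0130) lowercases to a plain 'i',
// instead of 'i' + combining dot as in the default locale.
function turkishIncludes(haystack, needle) {
  return haystack
    .toLocaleLowerCase('tr')
    .includes(needle.toLocaleLowerCase('tr'));
}
```

With plain toLowerCase(), "ŞİRKETİ" becomes "şi̇rketi̇" (dotted i with a combining mark), so the search fails; with the 'tr' locale both sides normalize consistently and the match succeeds.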
doc_799
When the app loads up, it loads a splash image and a background image (about 100 KB in size). When I remove these images, the app works fine; but when I leave them in, it runs out of memory at random after the app loads.

I checked the resource directory, and the app only has images in the drawable folder, not in any of the drawable-hdpi, drawable-mdpi or drawable-ldpi folders. So is the device trying to convert these images to fit the phone's resolution, and using all the memory in the process? Do we have standard image sizes for the hdpi, mdpi and ldpi folders?

Any help will be greatly appreciated.

A: The reason my app failed on the Galaxy S3 is that the Galaxy S3 has a much higher resolution than any other phone I tested my app with. Hence the memory it allocated for the bitmaps was much higher, and that caused the app to crash as it ran out of memory. To solve the issue, I simply removed the bitmaps from the view in the onDestroy method, and that solved the issue.