Directory size calculation difference

I need to get the size of a directory in the terminal for signing purposes. I'm using the following command: du -s /path/to/dir
I multiply the result by the traditional UNIX block size (512 bytes) to get the actual directory size in bytes. However, the Finder's "Get Info" dialog window shows a size slightly smaller than the one calculated with the terminal command, and this seems reproducible on any folder/bundle. What am I missing?

Normally, du shows information about disk usage (which is where its name comes from). Keep in mind that disk usage != sum of file sizes, because each file takes up a whole number of blocks on the filesystem (see man mkfs.ext2 for example). This means that only in rare cases does the disk usage of a file equal its actual size: for that, the size must be an exact multiple of the block size. Think of filesystem blocks as boxes that contain parts of files: each can contain a part of only one file. For the GNU version of du, check out the --apparent-size option. An even more interesting situation can arise when there are sparse files on the file system!

There is no such option (I'm on OS X, not Linux). I probably should have mentioned that in the question, since the tag is not enough. :)

Ah, right... Then have a look at the manpage and try to find references to "actual" or "apparent". (Also see my updated explanation.)

About Mac OS X and the Finder (in Snow Leopard, version 10.6.8), I have noticed the following. I get the byte counts for the Finder's 'quantified' figures of a path (file or folder) with the code (in bash(1)) below. The Finder's "Info" window and pane show the 'quantified' figures (e.g. kilo in KB) in decimal (base 10, 1000) bytes as opposed to binary (base 2, 1024) bytes, so I 'quantify' by dividing by 1000, increasing the unit (byte) prefix 'quantifier' (magnitude), and doing some odd "off key" rounding.
(My full code is full of commented-out development code and divided into several files (and languages), so it's hard to share.) As far as I have seen, my 'quantified' figures are the same as the 'quantified' figures in the Finder. Also, along with the code, I want to say that I have no (and have never had any) environment variable BLOCKSIZE set in my shell, but I tested (now, a little) both versions, and the default values for $BLOCKSIZE give the same values.

#!/usr/bin/env bash #tab-width:4
du -s "${@:-.}" |awk '{u+=$1}END{ print u*'${BLOCKSIZE:-0512}' }'||exit $? #macosx (xnu)
# gdu -sB${BLOCKSIZE:-4096} "${@:-.}" |awk '{u+=$1}END{ print u*'${BLOCKSIZE:-4096}' }'||exit $? #macports gnu

The unquantified number I have not managed to match. The only thing I can say is that I get closer by only counting files (thus excluding directory ~'file-system meta index/header'~ data), and that the closest I get is with the following.

#!/usr/bin/env bash #tab-width:4
for a;do find "$a" -type f -print0|xargs -0 stat -f %z |awk '{u+=$1}END{ print u }'||exit $?;done #macosx (xnu)
# for a;do find "$a" -type f -print0|xargs -0 gstat -c %s |awk '{u+=$1}END{ print u }'||exit $?;done #macports gnu

Neither (xnu) du(1) nor (gnu) gdu(1) seems to count extended attributes (xattr). And then I must just pun: 'Run the path and do the math.' Peace out and goodnight fo'real this time.

Sum all files in a directory:
OSX: find dir ! -type d -print0 | xargs -0 stat -f '%z' | awk '{sum += $1} END{print sum}'
Linux: find dir ! -type d -printf "%s\n" | awk '{sum += $1} END{print sum}'

find: unrecognized: -printf (on an Alpine image; BusyBox find does not support -printf).

Add | numfmt --to iec --format "%8.4f" at the end to see a human-readable format (e.g. KB/MB/GB).

On my Ubuntu system, using ext4, du -b file gives the size in bytes of an actual file, and du -b dir gives the size in bytes of the file(s) + directory overhead. The overhead is, in my case, a multiple of 4096 bytes. This overhead increases as the number of files increases.
Note: even if files are deleted, the directory overhead remains at the higher level it was at before the files were deleted. I haven't tried rebooting to see if it reverts, but in either case, this means that the directory size varies depending on historic circumstances. Tallying each file's size may be the best option for an accurate value of the total file sizes. The following script totals all file sizes (in bytes). For OS X, if you don't have the -b option for du, you can use stat instead (if you have it :). The commented line shows the Ubuntu stat alternative to du -b:

unset total
while IFS= read -r -d $'\0' rf; do
  # (( total += $(stat "$rf" | sed -nre 's/^ Size: ([0-9]+).*/\1/p') ))
  (( total += $(du -b "$rf" | cut -f 1) ))
done < <(find . -type f -name '*' -print0)
echo $total

OSX doesn't have du -b and has a different stat. Your script is not portable outside Linux either way.

With MacPorts on OS X you can install coreutils to get the GNU version of du as gdu. So it's not exactly portable, but it may be useful for people on OS X to get the GNU versions of a few core utils.
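The block-rounding explanation above can be sketched numerically. This is a hypothetical illustration (the 4096-byte filesystem block is an assumption; du's reporting unit here is the traditional 512 bytes), not the exact arithmetic of any particular filesystem:

```python
import math

def disk_usage(apparent_size, fs_block=4096):
    """Round a file's apparent size up to whole filesystem blocks."""
    if apparent_size == 0:
        return 0
    return math.ceil(apparent_size / fs_block) * fs_block

# Three files whose apparent sizes sum to 10000 bytes...
sizes = [1, 4096, 5903]
apparent = sum(sizes)                      # sum of file sizes: 10000
usage = sum(disk_usage(s) for s in sizes)  # 4096 + 4096 + 8192 = 16384

# ...occupy 16384 bytes of disk, which du -s would report in 512-byte units:
du_units = usage // 512                    # 32
```

This is why multiplying du's output by 512 overestimates the sum of file sizes: every file except an exact block multiple gets rounded up.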
common-pile/stackexchange_filtered
show Done button on number pad for multiple UITextFields

@Luda's answer is a great answer, but I got stuck when I needed to use it for multiple text fields, so I edited it as follows. First, I get IBOutlets for each one of my text fields, say textField1 and textField2. Then I edited the code as:

- (void)viewDidLoad {
    [super viewDidLoad];
    UIToolbar* numberToolbar = [[UIToolbar alloc] initWithFrame:CGRectMake(0, 0, 320, 50)];
    numberToolbar.barStyle = UIBarStyleBlackTranslucent;
    numberToolbar.items = [NSArray arrayWithObjects:
        [[UIBarButtonItem alloc] initWithTitle:@"Cancel" style:UIBarButtonItemStyleBordered target:self action:@selector(cancelNumberPad:)],
        [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFlexibleSpace target:nil action:nil],
        [[UIBarButtonItem alloc] initWithTitle:@"Apply" style:UIBarButtonItemStyleDone target:self action:@selector(sendToServer:)],
        nil];
    [numberToolbar sizeToFit];
    textField1.inputAccessoryView = numberToolbar;
    textField2.inputAccessoryView = numberToolbar;
}

- (void)cancelNumberPad:(UITextField *)textField {
    // here I use if/else to determine which textField was tapped
    if (textField == self.textField1) {
        // do some stuff
    } else {
        // ...
    }
}

- (void)sendToServer:(UITextField *)textField {
    // here I use if/else to determine which textField was tapped
    if (textField == self.textField1) {
        // do some stuff
    } else {
        // ...
    }
}

Notice how I had to add the colon to each @selector, e.g. @selector(sendToServer:), so that (I assumed) the correct text field would be passed as a parameter. BUT it's not working: the test if (textField == self.textField1) fails. So does anyone know how to do this right? The question is: how do I know which text field is being edited? I am hoping for a more direct answer than http://stackoverflow.com/questions/1823317/get-the-current-first-responder-without-using-a-private-api. By direct, I mean a parameter to the selector.
The reason your check if (textField == self.textField1) is failing is that the argument passed by default to a UIBarButtonItem selector is actually the UIBarButtonItem itself. You can verify this by placing a breakpoint inside either selector method and examining the textField argument. One possible solution, without introducing any code to loop through subviews, would be to declare your view controller as a UITextFieldDelegate and then modify your selectors as follows:

- (void)sendToServer:(UIBarButtonItem *)barButtonItem {
    [self.view endEditing:NO];
}

Then implement the delegate method textFieldShouldEndEditing:

- (BOOL)textFieldShouldEndEditing:(UITextField *)textField {
    // here you would determine which text field the UIBarButtonItem was associated with
    if (textField == self.textField1) {
        // your code here
        // and then, if you wanted to dismiss the keyboard
        return YES;
    } else {
        return NO;
    }
}

Try using tags, like this:

textField1.tag = 1;
textField2.tag = 2;

// when your text field begins editing, this method will be called
- (void)textFieldDidBeginEditing:(UITextField *)textField {
    NSLog(@"tag ==%d", textField.tag);
    if (textField.tag == 1) {
        // do something
    }
}

By the way, you should declare UITextFieldDelegate in your .h file.
Controlling the number of edges in an "object in curve" (Sverchok)

How might I control the number of edges in the "X" direction for this surface? The edges along the "Y" axis are controlled by the "Float series count".

Everything nodes :D Yes! I'd do it this way; it gives you separate control over the 0..1 range between U and V by using two Vector Interpolation nodes.

You can use the List Item node with an Int Range, which will delete some points along the X axis depending on the step you set in the Int Range node (change the step to get different results).
Creating list from predicate

I have a predicate as:

delta(q1,a,q2).
delta(q1,b,q3).
delta(q2,a,q4).
delta(q2,a,q3).
delta(q3,a,q1).

and I want to convert it to a list like this:

nfatodfa([(q1,a,q2),(q1,b,q3),(q2,a,q4),(q2,a,q3),(q3,a,q1)],L)

How can I do that?

I think what you might want to use is findall/3 to construct this list:

?- findall(d(X,Y,Z), delta(X,Y,Z), L).
L = [d(q1,a,q2),d(q1,b,q3),d(q2,a,q4),d(q2,a,q3),d(q3,a,q1)]

Note that this makes a list of compound terms using the functor d to hold things, which is a little different from what you outlined. However, the parentheses alone, e.g. (q1,a,q2), can be a little tricky to work with. If you just want the bare triplets, do this:

?- findall((X,Y,Z), delta(X,Y,Z), L).
L = [(q1,a,q2), (q1,b,q3), (q2,a,q4), (q2,a,q3), (q3,a,q1)]
What does Jerry mean here by this line, and to whom?

It's a clip from the cartoon Rick & Morty, with the link down below. After Jerry (Morty's father) stroke the monster (I don't know exactly what it's called) down with a poker, he said this:

Well, look where being smart got you.

What does it mean? To whom did he say it? Could you please help me understand more about lines like this one? Link: https://getyarn.io/yarn-clip/b1a0802b-1bb4-46b7-b6d2-062d60fd9836

Being 'smart' (clever) didn't help the monster. Your link doesn't show enough of the cartoon for me to know in what way the monster had been smart and why it was a disadvantage. (BTW, he struck the monster down - that's the past tense of strike.)

"Look where [something] got you" is an idiomatic phrase, usually said disparagingly, and it serves to highlight the (bad) results or consequences of something you did. In your example, the inference is that 'being smart' resulted in something bad. Perhaps it was said in response to a suggestion that they had done something 'stupid', to point out that their way was no better.

It could just be something you say when you are clubbing something to death, like 'put that in your pipe and smoke it' or 'here's something to think about', etc.
The team should be able to send an email to the Underwriter with related recommendations

Create a custom workflow assembly for the Audit entity:
Look for the Coverholder associated with the Audit record.
Find all Binder Sections where the Expiry Date is not greater than today's date plus 6 months.
Retrieve all Underwriter Contacts recorded in the Binder Sections' Underwriter field.
Create an email listing the Audit Recommendations associated with the Audit where the Sign Off status equals Open.
Send the email to the email address associated with the Underwriter.
Please suggest how to proceed, with sample code if possible.

The ways to proceed with such tasks can be wide-ranging. Please provide something you've tried to narrow down the approaches.

I have tried a query expression to retrieve the Coverholder associated with the Audit record.
What is the equivalent of taskset in PowerShell?

What is the equivalent of taskset in PowerShell? In case you don't know taskset: it is the command to start a process/application on a dedicated pool of CPU cores. I didn't find anything similar; does that possibility exist? Sorry for such a basic question, but I'm out of my depth in Windows knowledge. Regards.

In PowerShell you can assign dedicated CPUs to an application or process by setting the affinity value of the process. To read the current value:

Get-Process <ApplicationName> | select -Property ProcessorAffinity

The ProcessorAffinity property is also writable; for example, (Get-Process <ApplicationName>).ProcessorAffinity = 3 pins the process to cores 0 and 1 (the value is a bitmask). For more info on setting the value on a process: https://newbedev.com/change-affinity-of-process-with-windows-script

Thanks for that link, this is exactly what I failed to find :)
Selecting rows where datetime column is a certain day

I'm trying to write a SQLite query in Android where I select all rows from a table where the day component of the datetime column (stored as ISO 8601 strings) equals a given day. Basically, I want to select all rows with a certain date, disregarding the time. How can this be achieved?

Use this query: select * from table_name where day=desired_day;

So this code:

database.query("table_name", null, "day like ?", new String[]{"____-__-daynumber%"}, null, null, null, null);

Check out this documentation to see how I figured out the LIKE pattern to use. This pattern allows for any year and any month. If you want only a specific day in a specific month in a specific year, do:

database.query("table_name", null, "day like ?", new String[]{"1983-03-22%"}, null, null, null, null);

This would look for the day 03/22/1983.

The column is a datetime, i.e. YYYY-MM-DD HH:MM:SS.SSS, so using a day, YYYY-MM-DD, will not be equal and will return no results -- isn't that correct?

Edited my response to address your comment.

SQLite has some date and time functions. In your case the select could look as follows:

select * from table where date(your_date_time_column) = iso_8601_date_only_value;

Right, but date(your_date_time_column, 'localtime') might be needed depending on how timezones are being handled.

SELECT * FROM test WHERE strftime('%Y-%m-%d', timecol)=strftime('%Y-%m-%d', '2008-04-12')

You can always use 00:00:00 and 23:59:59 as start and end times and simply use <= and >= in a raw query such as...

SELECT * FROM the_table WHERE the_datetime_column >= '2011-11-15 00:00:00' AND the_datetime_column <= '2011-11-15 23:59:59'

(Note that because the column can store fractional seconds, an exclusive upper bound such as the_datetime_column < '2011-11-16 00:00:00' is safer.)
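The date() approach from the answers above can be checked end to end with SQLite directly. This Python sqlite3 snippet is a stand-alone sketch (the table and column names are made up, and Python stands in for the Android API), not the question's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, created TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [
    (1, "2011-11-15 09:30:00.000"),
    (2, "2011-11-15 23:59:59.500"),   # fractional seconds in the datetime
    (3, "2011-11-16 00:00:01.000"),
])

# date() strips the time component, so a plain equality on the day works:
rows = conn.execute(
    "SELECT id FROM t WHERE date(created) = ?", ("2011-11-15",)
).fetchall()
ids = [r[0] for r in rows]   # rows 1 and 2, including the 23:59:59.500 one
```

Note that row 2 would be missed by a `<= '2011-11-15 23:59:59'` upper bound, which is the fractional-seconds caveat mentioned above.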
Does prefetching work with Service Bus Trigger?

I have a Durable orchestration client which is Service Bus topic triggered.

[FunctionName("ServiceBusTrigger")]
public static async Task ServiceBusTrigger(
    [ServiceBusTrigger("topicname", "subscriptionname", Connection = "MyServiceBusKey")] string mySbMsg,
    [OrchestrationClient] DurableOrchestrationClient starter,
    ILogger log)
{
    string instanceId = await starter.StartNewAsync("Orchestrator", mySbMsg);
    log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
}

Does enabling the prefetch count under extensions in host.json cause messages to be prefetched in the Service Bus trigger?

host.json:

{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 100
    }
  }
}

Does enabling the prefetch count under extensions in host.json cause messages to be prefetched in the Service Bus trigger? The prefetch count affects the maximum number of messages prefetched by the underlying MessageReceiver used by the Azure Functions SDK. You will have to ensure that prefetchCount is configured properly relative to the value defined for maxConcurrentCalls, so that not too many messages are prefetched and locks are lost while waiting for processing.

Does this mean that if prefetchCount is set to 200 and maxConcurrentCalls is set to, say, 16, then 200 messages will be prefetched to a specific instance, but only 16 messages will be processed at a time? Correct. The MessageReceiver gets the messages prefetched (200), but the client code will only process up to MaxConcurrency (16).

Are there any limits to maxConcurrentCalls? Is there anything that prevents us from setting prefetchCount equal to maxConcurrentCalls? MaxConcurrentCalls is how many messages will be processed concurrently by a single MessageReceiver. PrefetchCount is up to how many messages a single MessageReceiver will retrieve when making a call to receive a message. Setting those two to the same value is counterproductive.
PrefetchCount should be larger than the number of messages processed concurrently.

@SeanFeldman If we have multiple Azure Functions, is there a way to have different prefetch and concurrency settings for different triggers on the same function app? And if there are two functions and the concurrency is kept at 15, will both functions have a concurrency of 15, or will it be divided between the functions?

Prefetch count and concurrency are per message receiver. A message receiver is per entity (queue or subscription). If you have more than one function and those work with different entities, then the settings are per entity/function.
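The prefetch-versus-concurrency distinction discussed above can be illustrated with a toy model. This is purely conceptual Python, not the Service Bus SDK: the receiver keeps up to prefetchCount messages buffered locally, while at most maxConcurrentCalls of them are handed to handlers at once.

```python
from collections import deque

def simulate(total, prefetch=200, max_concurrent=16):
    """Toy model of a prefetching receiver draining a broker queue."""
    broker = deque(range(total))
    local = deque()   # the client-side prefetch buffer
    peak_buffered = peak_inflight = processed = 0
    while broker or local:
        # the receiver tops the local buffer up to the prefetch count
        while broker and len(local) < prefetch:
            local.append(broker.popleft())
        peak_buffered = max(peak_buffered, len(local))
        # but only max_concurrent messages are processed per round
        batch = [local.popleft() for _ in range(min(max_concurrent, len(local)))]
        peak_inflight = max(peak_inflight, len(batch))
        processed += len(batch)
    return processed, peak_buffered, peak_inflight

p, buf, infl = simulate(500)   # buffer peaks at 200, in-flight at 16
```

The point of the model: the 200 buffered messages are all lock-holding while they wait, which is why prefetchCount should not be wildly larger than what the handlers can drain before locks expire.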
Linq Expression works on Desktop but errors on Server

This is the strangest thing I've ever seen, but hopefully someone else has, because I am clueless. I have the following code:

DataTable dt = (DataTable)dataGridView1.DataSource;
List<InvoiceItem> itemList = new List<InvoiceItem>();
int listSize = 30;
int listIndex = 0;
try
{
    itemList = (from DataRow dr in dt.Rows
                select new InvoiceItem()
                {
                    CustomerRef = dr["CustomerRef"].ToString(),
                    Description = dr["Description"].ToString(),
                    ItemRef = dr["ItemRef"].ToString(),
                    Rate = Convert.ToDouble(dr["Rate"].ToString()),
                    Quantity = Convert.ToDouble(dr["Quantity"].ToString()),
                    PONumber = dr["PONumber"].ToString(),
                    UnitOfMeasure = dr["UnitOfMeasure"].ToString(),
                    RefNumber = dr["RefNumber"].ToString(),
                    Total = Convert.ToDouble(dr["Total"].ToString()),
                    Address1 = dr["Address1"].ToString(),
                    Address2 = dr["Address2"].ToString(),
                    Address3 = dr["Address3"].ToString(),
                    Address4 = dr["Address4"].ToString(),
                    City = dr["City"].ToString(),
                    State = dr["State"].ToString(),
                    PostalCode = dr["PostalCode"].ToString(),
                    ServiceDate = string.IsNullOrEmpty(dr["ServiceDate"].ToString()) ? (DateTime?)null : DateTime.Parse(dr["ServiceDate"].ToString()),
                    TxnDate = string.IsNullOrEmpty(dr["TxnDate"].ToString()) ? DateTime.Now : DateTime.Parse(dr["TxnDate"].ToString()),
                    Note = dr["Note"].ToString()
                }).ToList();

    List<string> list = new List<string>();
    list = loadItems();
    List<InvoiceItem> createNewItemsList = new List<InvoiceItem>();
    foreach (var importing in itemList)
    {
        var matchingvalues = list.Where(l => l.Contains(importing.ItemRef));
        // If there is no match in Quickbooks already...
        if (matchingvalues.Count() < 1)
        {
            createNewItemsList.Add(new InvoiceItem
            {
                ItemRef = importing.ItemRef,
                UnitOfMeasure = importing.UnitOfMeasure
            });
        }
    }

Here is the code for loadItems():

private List<string> loadItems()
{
    string request = "ItemQueryRq";
    connectToQB();
    int count = getCount(request);
    IMsgSetResponse responseMsgSet = sessionManager.processRequestFromQB(BuildItemQuery());
    string[] itemList = parseItemQueryRs(responseMsgSet, count);
    disconnectFromQB();
    List<string> list = new List<string>(itemList);
    return list;
}

Here is a view of the error (screenshot), and one showing the list count.

When I run this code on my desktop, if matchingvalues.Count() = 0 it executes the code correctly. However, when I run the exact same code in debug on the server, that line of code errors out with "Object reference not set to an instance of an object." Can anybody explain why this might happen and whether there is any workaround for it?

I don't think there's anything wrong with your code. Are you sure that you are using the exact same data source locally and on the server? What does loadItems() do? Can you include the code for that, just in case?

EXACTLY the same. And the thing is, list = loadItems() returns 28 records. So I'm baffled here. It errors out on Count(). I put in int test = matchingvalues.Count() and get the error.

Which object is null, matchingvalues or something else? matchingvalues is lazy-loaded when you call .Count() on it, so it's not doing the .Where filtering until you try to .Count() them. The source of the exception you got will be the biggest clue.

I'm importing the exact same Excel spreadsheet and comparing with the exact same company file. The only difference is the versions of Quickbooks. One is Quickbooks Pro, and one is Quickbooks Premier; otherwise, same data files.

I see your new image. The only thing I can think of is that importing is null and it blew up during iteration. Perhaps check that and paste the stack trace?
Actually, ItemRef might be null, as Contains will throw an ArgumentNullException when you pass in null: https://msdn.microsoft.com/en-us/library/dy85x1sa(v=vs.110).aspx

You would think that, but importing is not null. I can hover and read the string.

matchingvalues clearly shows as null there, and you're doing a .Count() on that. That's what throws.

@FredKleuver We get that, but we're trying to figure out how that is even possible and what is causing it. Based on the code it should just be an IEnumerable that iterates 0 or more items.

It seems OK in my opinion as well. If the server is not Windows (.NET), it could be a bug in Mono.

The underlying query that is called by loadItems() is choking somewhere. The .Where expression doesn't throw because it doesn't immediately execute the query, but the .Count() will. Can you show the code in loadItems()? You're also talking about different versions of QuickBooks. It could be a configuration or state problem in the installation or user profile that doesn't allow you to load the file.

loadItems() returns 27 records. The Quickbooks file is loaded and returns 27 records. By the time it gets to the code above, Quickbooks should be out of the mix.

Have you tried writing out the .Where expression as a foreach loop? var matchingvalues = new List<string>(), then loop through the list, check for .Contains, and add the value to the list if it matches. That might expose what's going on.

I'm working on iterating both lists with a foreach loop now.

Side note: you could rewrite your if check as if (!list.Any(l => l.Contains(importing.ItemRef))), but that's for after this issue is solved, I suppose.
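The deferred-execution point in the comments (the Where filter not running until Count()) has the same shape in other languages. Here is a small Python analogue using a generator, purely to illustrate why the exception surfaces at the count rather than at the filter; the names are invented for the sketch:

```python
items = ["widget-a", "widget-b"]
item_ref = None   # stands in for a null ItemRef

# Building the filter raises nothing -- like LINQ's Where, it is lazy:
matching = (s for s in items if item_ref in s)

# The failure only appears when the sequence is actually consumed,
# i.e. the equivalent of calling .Count():
try:
    count = sum(1 for _ in matching)
except TypeError:
    count = -1   # 'in' rejects a None operand, much as Contains(null) throws
```

Python's `None in s` raises TypeError, so the construction line succeeds and the consuming line fails, which mirrors the behavior debated in the thread.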
Xamarin iOS app with WKWebView not allowed by Apple

I have already done everything that Google turns up, even some Stack Overflow posts... but Apple does not allow my app. I created a custom renderer, but it still ends up with a reference to UIWebView anyway, so the app is not allowed. Is there some way to use WKWebView without any references to UIWebView?

Did you follow ALL of the instructions here? https://devblogs.microsoft.com/xamarin/uiwebview-deprecation-xamarin-forms/

Yes. In this article he says to put in a text that would help the app stop being rejected in the App Store, but this article is from February; Apple's rule started in April. Before that it was just a warning; now it's no longer allowed to have UIWebView in new apps. I have no idea how to implement WKWebView in Xamarin.iOS without making any reference to UIWebView; I think this is not possible...

There is a set of very explicit steps given to fix this. It is not just "put in a text". Did you follow all of the steps? A custom renderer is NOT part of the recommended solution.

Yes, I already followed this article. The custom renderer I made was to try to replace the WebView with the WKWebView, but with no success.

You do not need a renderer. If you are using the correct version of Forms it will automatically use WKWebView. Unless you have some code that explicitly references UIWebView you should be fine.

I have a class that inherits from WebView. In my app I need to use the WebView to show a webpage that the user will navigate. But I can't use the WebView, so I tried it with the renderer.

Why can't you use WebView? WebView is NOT the problem; UIWebView is what Apple no longer allows. Xamarin has updated XF so that WebView is rendered as a WKWebView.

That's weird then. So why does Apple keep sending the email saying that I need to change it? I already changed what they said. I do not know what else to do.
Either you have not followed the instructions from Xamarin, or you have a direct reference to UIWebView in your code or in some third-party NuGet.

And I have the latest version installed. I found the solution: I had to add this to the mtouch arguments: --optimize=force-rejected-types-removal

@GabrielJuren Welcome to SO! If you have solved it, remember to share the solution in an answer when you have time. It will help others who have a similar issue. :-)
Timer is not counting down in Fixed position

I am developing a game app in HTML5 for Android devices. In the game I am using a simple countdown method with setTimeout, and I display the counter in a timer div. When I make the div element's position fixed, the countdown does not run; only when I touch the screen does the countdown start and stop. BUT it runs fine with absolute positioning. I don't want to make that div absolute. I have tried all the possibilities with no luck. Please, can anyone help me resolve this problem? Here is my code.

HTML:

<div id="time">
  <p id="txt2"></p>
</div>

CSS:

#time {
  right: 150px;
  top: 11px;
  z-index: 999;
  position: fixed;
  font-size: 30px;
  font-weight: bold;
  color: #FFFFFF;
}

JS:

function timedCount() {
  sec = 119 - c;
  document.getElementById("txt2").innerHTML = sec;
  c = c + 1;
  tt = setTimeout(function(){ timedCount() }, 10000);
}

function doTimer() {
  //alert("Timer starts");
  if (!timer_is_on) {
    timer_is_on = 1;
    timedCount();
  }
}

function stopCount() {
  //alert("Timer stops");
  clearTimeout(tt);
  timer_is_on = 0;
  //c = 0;
}

Please check this one; I referred to http://www.w3schools.com/js/tryit.asp?filename=tryjs_timing_stop

Please create a fiddle: jsfiddle.net

It looks like c is never set. Try adding c = 0; as a global before the functions.

I don't think it's a positioning problem. @Paul The code is correct; I have some other code as well, so I copied and pasted. Only after touching the screen does the counter run; otherwise the counter stays idle.
@IamDesai I am running it on mobile devices. @C-link If I change position fixed to absolute then it runs correctly, BUT the div keeps moving, and I want the div to stay in a fixed position. @IamDesai please check the jsfiddle link here: http://jsfiddle.net/tpdqv/

Try this one with jQuery:

<!DOCTYPE html>
<html>
<head>
<style>
#time {
  right: 150px;
  top: 11px;
  z-index: 999;
  position: fixed;
  font-size: 30px;
  font-weight: bold;
  color: red;
}
</style>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>
var c = 1, timer_is_on = 0, tt;
$(document).ready(function(){
  doTimer();
  function doTimer() {
    if (!timer_is_on) {
      timer_is_on = 1;
      timedCount();
    }
  }
  function timedCount() {
    var sec = 119 - c;
    $("#txt2").html(sec);
    //alert(c);
    c = c + 1;
    tt = setTimeout(function () { timedCount() }, 10000);
  }
  function stopCount() {
    //alert("Timer stops");
    clearTimeout(tt);
    timer_is_on = 0;
    //c = 0;
  }
});
</script>
</head>
<body>
<div id="time">
  <p id="txt2"></p>
</div>
</body>
</html>

Check this link. I am using the same code, sir, BUT the problem is that the countdown does not run while the screen is idle. If I touch the screen, the countdown runs and stops; if I keep touching my screen, it keeps running, otherwise NOT.
rmarkdown encrypted output loses plotly and leaflet plots

I want to generate an R Markdown document with a password. One possible solution is to encrypt the HTML output with encryptedRmd. I tried this:

---
title: "test"
author: "John Doe"
date: "18/11/2020"
output:
  encryptedRmd::encrypted_html_document:
    css: theme.css
---

```{r setup, include=FALSE}
library(leaflet)
library(plotly)
knitr::opts_chunk$set(echo = FALSE)
```

## R Markdown

This is an R Markdown document. Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents. For more details on using R Markdown see <http://rmarkdown.rstudio.com>. When you click the **Knit** button a document will be generated that includes both content as well as the output of any embedded R code chunks within the document. You can embed an R code chunk like this:

```{r cars}
fig <- plot_ly(data = iris, x = ~Sepal.Length, y = ~Petal.Length)
fig
```

## Map

You can also embed plots, for example:

```{r pressure, echo=FALSE}
m <- leaflet() %>%
  addTiles() %>%  # Add default OpenStreetMap map tiles
  addMarkers(lng=174.768, lat=-36.852, popup="The birthplace of R")
m  # Print the map
```

Note that the `echo = FALSE` parameter was added to the code chunk to prevent printing of the R code that generated the plot.

The problem is that this solution appears to be designed for simpler reports than the one I need to protect with a password. As a result, the encrypted version of the HTML doesn't properly render the plotly and leaflet outputs. What can I do?
Is it possible to talk about the distance of two random variables?

Suppose that $X, X^\prime$ are two random variables with the same distribution $\mathcal{N}(\mu, \sigma^2)$. Is there anything that we can say about the "distance" between the two random variables, e.g., $\|X - X^\prime\|$? The thing is that I don't know whether the space of random variables is a metric or normed space (we never talked about it in class), or what types of norms are typically placed on these functions. More generally, what does it mean for a function of a random variable to be a Lipschitz function? Let $f$ be a function of $X$; when I say that it is Lipschitz, does it mean $$\|f(X) - f(X^\prime)\| \leq L \|X - X^\prime\|$$ or $$\|f(x) - f(x^\prime)\| \leq L \|x - x^\prime\|,$$ where the lower case letters are realizations?

There are lots of examples: https://en.wikipedia.org/wiki/Statistical_distance

Note that $X-X'$ will be a random variable with mean $0$ and variance somewhere between $0$ and $4\sigma^2$, depending on the correlation between $X$ and $X'$.
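The variance bound in that comment can be made precise. As a sketch, writing $\rho$ for the correlation between $X$ and $X'$ (both with variance $\sigma^2$):

```latex
\begin{aligned}
\operatorname{Var}(X - X') &= \operatorname{Var}(X) + \operatorname{Var}(X') - 2\operatorname{Cov}(X, X') \\
                           &= 2\sigma^2(1 - \rho),
\end{aligned}
```

which ranges from $0$ (when $\rho = 1$, e.g. $X' = X$) to $4\sigma^2$ (when $\rho = -1$), matching the stated bound. When both means are $\mu$, the $L^2$ norm then gives one natural "distance": $\|X - X'\|_2 = \sqrt{\mathbb{E}[(X - X')^2]} = \sigma\sqrt{2(1-\rho)}$.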
datatable loop performance

I have a question about looping over a DataTable in C#. I have a DataTable with about 20,000 rows and 60 columns, and I want to write a SQL query to insert all of the data in the DataTable into the database. I'm using MySQL. This is my SQL syntax: "Insert into a values (...), (...), (...), ..." where each "(...)" is all the data in one row. This is the C# code:

DataTable myDataTable; // myDataTable has 20,000 rows and 60 columns
string mySql = "insert into a values ";
for (int iRow = 0; iRow < myDataTable.Rows.Count; iRow++)
{
    mySql += "(";
    for (int iColumn = 0; iColumn < myDataTable.Columns.Count; iColumn++)
    {
        string value = myDataTable.Rows[iRow][iColumn].ToString();
        string parameter = "@" + iRows.ToString() + iColumns.ToString();
        // here is some code to add "parameter" and "value" to the MySqlCommand
        mySql += ",";
    }
    // remove "," at the end of the string mySql
    mySql = mySql.Substring(0, mySql.Length - 1);
    mySql += "),";
}
// remove "," at the end of the string mySql
// after that, I will execute the mySql command to insert the data into the database

When I run that code, it takes a lot of time to finish. I tried changing from "string" to "StringBuilder", but it is only a little faster. How can I make the code run faster? Thanks for all your support.

What is "here is some code to add 'parameter' and 'value' to the MySqlCommand"? Can't you use SqlBulkCopy or LINQ to SQL? @Debug Your suggestion of SqlBulkCopy might need some review: I'm using MySQL.

comm.Parameters.AddWithValue(parameterName, value).DbType = DbType.String; // this is the Parameter code I used. Thanks.

It's probably not productive to hammer your MySQL server with hundreds of KB of SQL. Consider sending smaller batches, and in parallel. To start with, I would suggest measuring the time to build the SQL string and the time to execute the statement, so you know whether it's DB-related or code-related.
I would have expected StringBuilder to have more of an impact on performance, but we have no indication of the actual times you are experiencing.

string parameter = "@" + iRows.ToString() + iColumns.ToString(); What does this produce for row 1 and column 11, and then for row 11 and column 1?

I know, thanks. If I send too large a chunk of data to insert, MySQL will crash, so I execute the insert for every 500 rows. But I worry that my code's performance is not good, because the time to execute the two for loops is so long.

@ta.speot.is: Thanks, but that code was just used to explain my idea; it makes the question clearer. In the real code, I use columnName + iRow + iColumn for the parameter. My code has no runtime error, but it takes a lot of time. I think I will change my approach to looping over the data in the DataTable.

I have asked for some timing information in a comment, but I then noticed this line of code: mySql = mySql.Substring(0, mySql.Length - 1); I would suggest removing that from your loop and having slightly smarter code that only appends the commas when needed. For example, in the inner loop, add the comma to the mySql variable before each parameter instead of after, except where iColumn == 0. That would stop the unnecessary comma from being at the end of the string each time.
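The "smaller batches" advice from the comments can be sketched like this. This is a hypothetical Python illustration of chunking rows into multi-row parameterized INSERTs (sqlite3 stands in for MySQL, and the two-column table layout is invented), not the poster's C# code:

```python
import sqlite3

def insert_batched(conn, rows, batch_size=300):
    """Insert rows in chunks, one multi-row parameterized statement per chunk."""
    statements = 0
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        # one "(?, ?)" group per row, all bound through parameters
        placeholders = ",".join(["(?, ?)"] * len(chunk))
        params = [v for row in chunk for v in row]
        conn.execute(f"INSERT INTO a VALUES {placeholders}", params)
        statements += 1
    conn.commit()
    return statements

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INTEGER, y TEXT)")
rows = [(i, f"r{i}") for i in range(1000)]
n = insert_batched(conn, rows)   # 1000 rows in 4 statements of <= 300 rows
```

Besides keeping each statement a manageable size, this avoids rebuilding one giant SQL string in a quadratic loop, which is the main cost in the original code.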
common-pile/stackexchange_filtered
List distinct values based on a condition List the distinct values in the Alphabet column, where every Alphabet that has RID 101 shall be eliminated from the list.
RID Alphabet
100 A
101 B
102 B
101 C
104 D
101 D
106 E
102 F
108 C
108 A
104 E
The output column shall be Alphabet A E F, since B, C, & D have RID 101 and should be eliminated. How shall I achieve this? I have tried to use GROUP BY with HAVING, and WHERE EXISTS, but they did not work for me. I appreciate your help. Try this: SELECT Alphabet FROM `mytable` WHERE Alphabet NOT IN ( SELECT Alphabet FROM `mytable` WHERE RID = 101 ) Based on your new info, your solution might be a performance killer because of using HAVING MAX(CASE WHEN ...). But with Rating.rID as an index, the solution below will work very well: Select Movie.title From Movie Left Join Rating on Rating.mID = Movie.mID Left Join Reviewer on Rating.rID = Reviewer.rID WHERE Movie.mID NOT IN ( SELECT Rating.mID FROM Rating WHERE Rating.rID = 205 ) GROUP BY Movie.mID Did not work for me, mate. Maybe because the actual structure of my database is different. Let me share the full query that worked for me on the actual data, so I learn the alternative way as well. Select Movie.title From Movie Left Join Rating on Rating.mID = Movie.mID Left Join Reviewer on Rating.rID = Reviewer.rID Group by Movie.mID having Max(case when Reviewer.rID = 205 then 1 else 0 end) = 0; @Saleem updated the answer, check it out! Vote up and accept if it was helpful! Thanks bro. Using the above, it worked for me with a small adjustment. I was still missing two movies from the list, since not all movies have been rated by reviewers, so I changed Where Rating.mID Not IN to Movie.mID Not IN. @Saleem you're right. Accept an answer as your working answer by clicking the tick on the side of the answer! How shall I achieve this? I have tried to use group by with having Yes.
You can just GROUP BY alphabet letter (which satisfies the distinct requirement), and exclude those that have the forbidden RID with a HAVING clause: select alphabet from mytable group by alphabet having max(case when rid = 101 then 1 else 0 end) = 0 Depending on your database, there may be other ways to express the HAVING predicate; in Postgres, for example, which supports boolean aggregation: having bool_and(rid != 101) Thanks for your support! It worked perfectly. I was close, but since I just started learning SQL I was not able to figure out the right syntax for HAVING.
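Since the question includes sample data, the grouping rule is easy to sanity-check outside the database. A small Python sketch of the same logic, with the (RID, Alphabet) pairs copied from the question's table:

```python
# (RID, Alphabet) pairs from the question's table.
rows = [(100, "A"), (101, "B"), (102, "B"), (101, "C"), (104, "D"),
        (101, "D"), (106, "E"), (102, "F"), (108, "C"), (108, "A"),
        (104, "E")]

# Equivalent of GROUP BY alphabet HAVING max(rid = 101) = 0:
# keep a letter only if none of its rows carries the forbidden RID.
forbidden = {letter for rid, letter in rows if rid == 101}
result = sorted({letter for _, letter in rows} - forbidden)
```

This reproduces the expected output from the question: A, E, F.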
Google chrome does not prompt authentication box inside an iFrame I have a website where iFrames are auto-generated inside a webpage to show the video from an IP cam, but the IP camera is secured. Google Chrome does not prompt the authentication box inside the iFrame; I googled it but could not find a solution. See here Is there any way to hard-code the authentication details using JavaScript inside the iFrame? Because once the session is made, Chrome shows the video. Thanks Have you tried including the username and password directly in the URL of the iframe, e.g<EMAIL_ADDRESS>See http://serverfault.com/questions/371907/can-you-pass-user-pass-for-http-basic-authentication-in-url-parameters for more details. Thanks for your comment, it helped. I didn't try what you said before, but I have now. It works fine in Google Chrome, but not in IE; it asks for authentication again in Firefox and shows a security alert in Safari (after that it refreshes the page and the iframe goes away). Congrats on solving the issue. You should leave an answer explaining what helped you, so that the next person with the same problem can benefit from your experience. I cannot say it solved my issue, but it definitely helped: it works fine on Google Chrome but does not work in IE and Safari. Also, in Safari it gives a "Possible phishing site" warning, which is not good for my site.
Include Images with Reply in Mail? In Mail (connected to my work Exchange server, if that matters), when I forward an email which contains inline images, they are preserved. However, if I reply to the email, they are not preserved. They are instead replaced with something like this: <image001.jpg> Is there a way to preserve the images in a reply? I suppose that is because it usually makes no sense to include attachments again in a reply. I have no solution to your question, though. @leberwurstsaft: In the simplest cases, I would agree. In longer email chains where people get added to the conversation late in the game it's very handy. If someone comes in late you can always forward them the original mail for reference. Otherwise you'd have to send the images back and forth all the time. Not exactly space-saving ;) In Mail.app go to the Edit menu, go down to Attachments, then select Include Original Attachments with Reply. Great, it would've never occurred to me to look for what are essentially preferences in the Edit menu :o Me either. Not very intuitive. I always expect to find it in Preferences > Composing and end up back here every time :) That is a good solution. But a nicer solution would be: "don't include attachments in a reply unless it is an inline image". But I don't think that is possible.
title on the list activity My question is how I can add a title to a ListActivity. I used the following code final boolean customTitleSupported = requestWindowFeature(Window.FEATURE_CUSTOM_TITLE); if ( customTitleSupported ) { getWindow().setFeatureInt(Window.FEATURE_CUSTOM_TITLE, R.layout.titlebar); } final TextView myTitleText = (TextView) findViewById(R.id.myTitle); if ( myTitleText != null ) { myTitleText.setText("Select Service Provider"); } But it does not work on a ListActivity. If nothing else works, you can extend Activity instead of ListActivity, and in your layout you can place a ListView that you bind your data to. How can I add a ListView in an Activity? I'm assuming you've got some experience with creating an Activity, and in that you should have experience working with the layout of that activity (often called main.xml). A ListView is a view that you can include within your main.xml (or any other layout). Once your ListView is available you need to configure it for your purpose -- something that ListActivity already does for you. From here, you might want to start by examining the ListActivity source code; you can find that at [http://netmite.com/android/mydroid/frameworks/base/core/java/android/app/ListActivity.java].
How to rename a file on the SFMC FTP? I am looking for a way to rename a file on the SFMC FTP using an automation. I have tried using the substitution strings %%FILENAME_FROM_TRIGGER_BASE%% and %%BASEFILENAME_FROM_TRIGGER%% with the "enhanced move and copy" data extract. Sadly it didn't do anything. Is there a way to do this with Automation Studio or with the API? The API and SSJS cannot manipulate the SFTP, unfortunately, to my knowledge. Thanks, @JonasLamberty, I have found that out as well. I had to do the extract, then import it, and then extract it again to be able to change the file name. It's too bad you can't specify file names for the individual tracking extracts.
python call function, when I click on button I want to call the function "function3" when I click on button b1 (in __init__). Thank you. class myClass(): def __init__(self): b1 = Button(self.ram2C, text="Aaa", command=???function3???) def function1(self, event=None): command1 def function2(event): command2 def function3 (event=None): command3 You probably shouldn't be defining functions within functions. Generally, it is only a good idea to define a function within another if the inner function is only used by the outer. Find another way of solving your problem, without having nested functions that need to be called from the outside. function3 is nested inside function1 and function2. You mean something like this? b1 = Button(self.ram2C, text="Aaa", command=function1) def function1(self, event=None): command1 def function2(event): command2 def function3 (event=None): command3 return function3(event) return function2(event) In short, function1 calls function2, and function2 just calls function3. The value returned by function1 would be the value returned by function3.
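As the comments suggest, the cleanest fix is to define function3 at class level and pass the bound method itself (no parentheses) as the command. The sketch below shows that pattern without needing a display: the on_click attribute stands in for Tkinter's command= option, and the Tkinter line it replaces is shown as a comment. The return value "command3 executed" is a placeholder for the question's command3 body.

```python
class MyClass:
    def __init__(self):
        # In Tkinter this would be:
        #   b1 = Button(self.ram2C, text="Aaa", command=self.function3)
        # Key points: function3 is a method of the class (not nested
        # inside another method), and it is passed WITHOUT parentheses,
        # so the button can call it later when clicked.
        self.on_click = self.function3

    def function3(self, event=None):
        return "command3 executed"
```

The stored reference is invoked later, exactly as a button would invoke it on a click.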
My keyboard on Latitude D520 stopped working I had Win 7 on the machine. The keyboard stopped working after being unused for 3 months. I installed Win 10 without a problem. Still the same problem. I have an external wireless keyboard. Same problem. I tried the Fn keys and the Num Lock key. No change. What is next? Since the problem occurs on both keyboards, it cannot be something in the physical built-in keyboard.
Ionic routing breaks when navigating to a url twice When navigating through a page stack I've discovered that when I visit the same page twice, the <ion-back-button> does not behave as expected. Here is a graphic of the issue. It seems that Ionic does not navigate back from the last data of Page 1 but rather from the first occurrence (red arrow). I've found a similar Github issue, 16516, which should be fixed, but it doesn't work for me. Has anyone encountered this or can provide a fix/workaround? My Versions "@ionic-native/core": "^5.0.0", "@ionic/angular": "^5.0.0", update: I created an issue on Github as suggested in the comments. Is there a question here? This looks more like a bug report, which should go on the Ionic Github issues page. The thing that bugs me is that it should already be fixed. Also, maybe someone has a fix for this and I'm simply doing something wrong. But maybe you're right, I'll write a bug report if nothing comes up here. Well, I do get that, but in the end, this is just a question / answer medium and without any clear question in a post the post shouldn't be here. I wouldn't remark on it if you had asked a specific question, such as if others have the same problem and if there are any known workarounds, but since you're just stating something without asking anything it doesn't really fit into StackOverflow. Call me picky, but it makes it easier to keep things organized! :) Updated to provide a proper question. I simply forgot to specifically ask it. I agree, there should be a question. Have you defined defaultHref on your ion-back-button? This issue occurred whether I set a defaultHref or not. I've come across the exact same issue. I think the issue is based on the page url and not on the page component itself. If you navigate to an url that already exists in the navigation stack history, Ionic will redirect you to that existing page and will also remove all pages in between.
The workaround I'm using for now is to add a timestamp to each page's url that is likely to be added several times in the navigation stack. This will make sure that a new instance of a page component will be created and pushed into the stack instead of reusing an old one. Component import {NavController} from '@ionic/angular'; export class MyPage { constructor(private navController: NavController) {} navigate() { // You can use an NavigationOptions object if needed const options = { queryParams: { parameter: "value" } }; this.navController.navigateForward(['/url', new Date().getTime()], options); } } Template <button (click)="navigate()">Navigate</button> Let me know if it works for you. If you navigate to an url that already exists in the navigation stack history, Ionic will redirect you to that existing page and will also remove all pages in between This is exactly what my impression was I'll research more and keep you posted on my findings.
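The behavior described in the answer — navigating to a URL already in the history returns to its existing entry instead of pushing a new one — can be modeled in a few lines. This also shows why the timestamp workaround helps: it makes every URL unique, so the pop-back branch is never taken. (A simplified model in Python for illustration, not Ionic's actual router code.)

```python
def navigate(stack, url):
    """Simplified model of the observed routing behavior: pushing a
    URL that already exists in the history returns to that entry,
    dropping everything after it."""
    if url in stack:
        return stack[:stack.index(url) + 1]
    return stack + [url]

history = ["/page1/a", "/page2/x"]
history = navigate(history, "/page1/b")  # unique URL: pushed normally
history = navigate(history, "/page1/a")  # existing URL: pops back to it
```

With a timestamp appended, each navigation produces a unique URL, so only the push branch ever runs and the back stack grows as expected.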
Problems accessing the Tomcat directory I have problems accessing the Tomcat directory: mkdir says the file exists, but when I try to change into the directory it says no such file. Seems like a weird contradiction. Please help me with this. Spoorthys-MacBook-Pro:Library spoorthy$ mkdir Tomcat mkdir: Tomcat: File exists Spoorthys-MacBook-Pro:Library spoorthy$ cd Tomcat -bash: cd: Tomcat: No such file or directory Spoorthys-MacBook-Pro:Library spoorthy$ What does ls -la show you? Is there perhaps something else named Tomcat? lrwxr-xr-x 1 spoorthy admin 31 15 Oct 15:00 Tomcat -> /usr/local/apache-tomcat-7.0.47 I tried searching for the Tomcat directory in Spotlight but unfortunately it does not show anything. Looks like Tomcat is a link to /usr/local/apache-tomcat-7.0.47; check whether the target directory /usr/local/apache-tomcat-7.0.47 exists. If not, then Tomcat is a broken link. It does not exist; can you please let me know how I rectify the problem with this broken link? If you just want to create a new directory named Tomcat then you can simply delete the broken link and create the directory by using this command: rm -rf Tomcat; mkdir Tomcat but I am not sure how to get /usr/local/apache-tomcat-7.0.47 back. If you want to point Tomcat to any other directory then see mcoolive's answer below. From the comments, RBH gave you the right answer: Tomcat is a broken link. That's why you get these error messages that seem contradictory. You can remove the broken link and create a new one: rm -f Tomcat ln -s <newPath> Tomcat
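The apparent contradiction (mkdir says "File exists", cd says "No such file or directory") is exactly how a dangling symlink behaves: the link entry itself exists, but following it leads nowhere. A small Python sketch that classifies a path this way:

```python
import os

def classify(path):
    """Distinguish a dangling symlink from a real directory or file.

    os.path.islink() inspects the link entry itself (why mkdir fails),
    while os.path.exists() follows the link (why cd fails).
    """
    if os.path.islink(path) and not os.path.exists(path):
        return "dangling link -> " + os.readlink(path)
    if os.path.isdir(path):
        return "directory"
    return "file" if os.path.exists(path) else "missing"
```

Once the link's target is recreated (or the link is repointed with ln -s), the same path classifies as a directory again.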
Using a variable inside the Math.random function I am working on a piece of code which fills an array with as many numbers as I want, but instead of being static with 16 numbers, I tried to change 16 to a variable; Math.floor/Math.random then, weirdly enough, only spits out NaN. EDIT: with 16 put in, it works, but I can't use a variable (declared in the same function, of course). After I console.log the variable it shows as a number, but then my browser freezes. Does anyone know how to change this? while(arr.length < pictures.length) { var randomenumber = Math.floor((Math.random()* 16)); if(arr.indexOf(randomenumber) > -1) { continue; } arr[arr.length] = randomenumber; } //cheat sheet for(var i = 0; i < arr.length ; i++) { document.write(arr[i]); document.write("<br/>"); } I don't see any issues if you use var length = 16 and Math.floor((Math.random() * length)). Working snippet: var arr = [], length = 16; while(arr.length < length) { var randomenumber = Math.floor((Math.random() * length)); if(arr.indexOf(randomenumber) > -1) { continue; } arr[arr.length] = randomenumber; } //cheat sheet for(var i = 0; i < arr.length ; i++) { document.write(arr[i]); document.write("<br/>"); } You will need to parse it to an integer using parseInt. var num = 16; var randomenumber = Math.floor((Math.random() * parseInt(num)));
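Beyond the variable fix, it is worth noting that the retry loop itself can hang the browser: if pictures.length is larger than the number of distinct values (16 here), no new number can ever be found and the while loop never terminates, which matches the freeze described in the edit. Drawing without replacement avoids both the retries and the hang; a sketch of that idea in Python, where random.sample plays the role of the indexOf retry loop:

```python
import random

length = 16  # the variable the question wanted instead of a literal 16

# One unique random number per slot, no retry loop needed.
# Asking for more numbers than there are distinct values raises
# an error immediately instead of looping forever.
arr = random.sample(range(length), length)
```

In JavaScript the equivalent approach is to build the range and shuffle it, rather than retrying random draws until a new value appears.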
Windows Service will not start as Local User EDIT 2: I now believe this is an issue with the machine I'm attempting to run the service on. I tried moving the service to a different machine that is setup similarly and the service was able to start successfully even as a Local User. Now I just need to figure out what's different between the two machines... I have a Windows Service project (written in VB.net) that is installed and configured with a Startup Type of Automatic and the Log On As set to a Local User account. This service will start when the computer first starts up. However, if I stop the service and try to start it again, I get "Error 1053: The service did not respond to the start or control request in a timely fashion." immediately. However, if I change the Log On As to "Local System account" then the service will start. Summary: Service will run as Local User when computer first starts Service will not run as Local User if started manually Service will run as Local System when computer first starts Service will run as Local System if started manually I have read that Error 1053 is caused by the OnStart method not returning quickly enough. The fact that the service has started previously, and that I get the error message immediately, leads me to believe a timeout is not what's going on. To verify this, I created a completely new Windows Service Project and without changing anything I built and installed it. I get the same behavior. I am at a loss as to what's happening. As far as I can tell, the Local User has all of the correct privileges to run a service (as is evident by the fact that it will start with those credentials when it the computer is first starting up), and the OnStart method isn't actually timing out (as is evident by the completely blank dumb service exhibiting the same behavior). 
Any ideas as to what's preventing the service from starting, or where I can look for better error messages (I have looked in the Application Event Log, but nothing shows up there)? EDIT: Here is the code from the dumb service I created (using the EventLogger from here as a module). Protected Overrides Sub OnStart(ByVal args() As String) ' Add code here to start your service. This method should set things ' in motion so your service can do its work. EventLogger.WriteToEventLog("On Start") End Sub Protected Overrides Sub OnStop() ' Add code here to perform any tear-down necessary to stop your service. EventLogger.WriteToEventLog("On Stop") End Sub And the Main method of the same project. ' The main entry point for the process <MTAThread()> _ <System.Diagnostics.DebuggerNonUserCode()> _ Shared Sub Main() EventLogger.WriteToEventLog("Starting Main Method") Dim ServicesToRun() As System.ServiceProcess.ServiceBase ServicesToRun = New System.ServiceProcess.ServiceBase() {New Service1} System.ServiceProcess.ServiceBase.Run(ServicesToRun) EventLogger.WriteToEventLog("Leaving Main Method") End Sub When I try to run the Service as the Local User, none of the messages show in the Event Log and I get Error 1053. When I run the Service as the Local System, the messages show in the Event Log. The reason I need to run the actual service as the Local User is so that it can access a network share. I am currently looking into using Windows User Impersonation, but I still think I should be able to start a simple service as a Local User. Use this resource to create an event logger. Then wrap your code in each sub in a try/catch b/c most likely something is happening in your OnStart sub that is preventing the service from starting. Post some sample code of your onstart, onstop, and/or your service main subs and clarify why you need to use the local user vs the local system. The original program creates and starts a FileSystemWatcher in the OnStart method. 
However, I am also getting the same behavior in a Windows Service project that is completely blank. I do not think it's getting to the OnStart method, but I will add event logger calls and update my question.
Sort array of objects by nested property with lodash I have an array of objects like this [{ 'a': { id: 4 }}, { 'b': { id: 3 }}, { 'c': { id: 2 }}, { 'd': { id: 5 }}] I want to sort the objects by id with lodash. I would expect to receive something like ['d', 'a', 'b', 'c']. I tried searching for this but I didn't find an answer that works with different keys in each object. I tried to do this with many of lodash's functions, so I thought there may be a good and maybe short way to solve this. In the first step, transform your array to [{key: 'a', id: 4}, {key: 'b', id: 3}, …]. Here is a plain JavaScript solution. Since you want to get the sorted keys as the result, this solution does not modify the original array, thanks to the first map before sort: const data = [{ 'a': { id: 4 }}, { 'b': { id: 3 }}, { 'c': { id: 2 }}, { 'd': { id: 5 }}]; const sorted = data .map(x => Object.entries(x)[0]) .sort((a, b) => b[1].id - a[1].id) .map(x => x[0]); console.log(sorted); If you don't care about mutating the original array, you can directly sort using Object.values() and then return the first key with Object.keys(): const data = [{ 'a': { id: 4 }}, { 'b': { id: 3 }}, { 'c': { id: 2 }}, { 'd': { id: 5 }}]; const sorted = data .sort((a, b) => Object.values(b)[0].id - Object.values(a)[0].id) .map(x => Object.keys(x)[0]); console.log(sorted); You can simply do it with JavaScript by using the sort method and mapping over the returned array, returning the keys var data = [{ 'a': { id: 4 }}, { 'b': { id: 3 }}, { 'c': { id: 2 }}, { 'd': { id: 5 }}] const res = data.sort((a, b) => Object.values(b)[0].id - Object.values(a)[0].id).map(obj => Object.keys(obj)[0]); console.log(res); Or you can use lodash with the orderBy and map methods var data = [{ 'a': { id: 4 }}, { 'b': { id: 3 }}, { 'c': { id: 2 }}, { 'd': { id: 5 }}] const res = _.orderBy(data, function(a) { return Object.values(a)[0].id }, ['desc']).map(o => Object.keys(o)[0]) console.log(res); <script
src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.11/lodash.min.js"></script> This is exactly the same solution as what @CodeManiac already posted @Bergi The posting times don't differ much, if you look, and it's entirely possible for two people to be working on the snippet simultaneously and posting the same answer, because that is a very standard way of solving it You can first sort based on id and then map the key. let obj = [{ 'a': { id: 4 }}, { 'b': { id: 3 }}, { 'c': { id: 2 }}, { 'd': { id: 5 }}] let op = obj .sort((a,b) => Object.values(b)[0].id - Object.values(a)[0].id) .map(e => Object.keys(e)[0]) console.log(op) This is a lodash/fp version of jo_va's solution. Use _.flow() to create a function that maps each object to a [key, { id }] pair. Sort with _.orderBy() according to id. Iterate with _.map() and extract the _.head() (the key): const { flow, flatMap, toPairs, orderBy, map, head } = _ const fn = flow( flatMap(toPairs), // convert each object to pair of [key, { id }] orderBy(['[1].id'], ['desc']), // order descending by the id map(head) // extract the original key ) const data = [{ 'a': { id: 4 }}, { 'b': { id: 3 }}, { 'c': { id: 2 }}, { 'd': { id: 5 }}] const result = fn(data) console.log(result) <script src='https://cdn.jsdelivr.net/g/lodash@4(lodash.min.js+lodash.fp.min.js)'></script> const data = [{ a: { id: 4 }}, { b: { id: 3 }}, { c: { id: 2 }}, { d: { id: 5 }}]; data.sort((a, b) => Object.values(b)[0].id - Object.values(a)[0].id); console.log(data); The above snippet shows how to do it with Array.sort. Lodash should work similarly.
How do I ask LaTeX to exactly fill up a page? I understand that I can control interline spacing in LaTeX in a number of ways. Commands like \baselineskip, \baselinestretch, and \linespread come in handy for controlling the interline spacing. Each has its own speciality and circumstances of usage, as we can find in these discussions (A, B, C). If I want a ready-made solution for controlling line spacing, packages like setspace come in quite handy (built-in commands like \onehalfspacing, or \setstretch for finer control). Anyway, my problem is a bit different. I have some text (both in paragraph and list mode) which I want to fill up exactly one page. To complicate the scenario, it may even contain equations and graphics. (Let us leave aside floats.) If I wanted to solve the problem statically, all I would have to do is play with some value of \baselineskip (or \setstretch) until satisfied. While this works well in most cases, I would have to go through the process again whenever I delete or add some text. Would it be possible to have a dynamic value for \baselineskip or \setstretch so that my text always fills up one page? (Definitely within reasonable limits.) Another idea would be to use a number of \vfills between some pieces of text. But I think that this technique is usable for cover pages only. I am not putting an MWE here; I think one is not applicable. The problem is not a theoretical one: I am preparing some kind of handout which I want to fill up exactly one page. Do you want to adjust the line spacing within paragraphs? Or will adjusting space between paragraphs, and between text and equations etc. be enough? @IanThompson Adjusting space only between paragraphs (and text and equations) may produce disproportionate results. I am looking for something which adjusts spaces all over the page, you know what I mean. Please, in a real document, adjust the inter-line space as little as possible, if at all.
There are people who seriously hate it. I must admit I do it myself, because I'm too lazy to deal with pages underfull by <3pt or so. Is it an option to adjust the margins instead? E.g. you could automatically choose margins that precisely center your material on the page, with the dimensions of the text block determined by its length. (The TeXbook has several sample macros in this spirit, if I recall correctly.) in addition to what's been suggested, if you always have exactly one page, and the amount of text decreases, you might want to consider using a larger type size so that the blank background doesn't overwhelm the type. but i can't think of a way to do that automatically. @barbarabeeton I understand what you are suggesting and I do follow this in most cases. But the scenario (which I cannot divulge fully due to confidentiality reasons) does require me to prepare the document in the manner I have asked about. You can add stretchability to \baselineskip: \documentclass{article} \usepackage[pass,showframe]{geometry} % just to show the page is filled up \usepackage{kantlipsum} \newcommand{\addstretch}[1]{\addtolength{#1}{\fill}} \flushbottom \begin{document} \addstretch{\baselineskip} \addstretch{\abovedisplayskip} \addstretch{\abovedisplayshortskip} \addstretch{\belowdisplayskip} \addstretch{\belowdisplayshortskip} \setlength{\parskip}{0pt} \kant*[1] \begin{equation} E=mc^2 \end{equation} \kant[2-3] \newpage \kant*[1] \begin{equation} E=mc^2 \end{equation} \kant[2] \newpage \end{document} Using \fill overrides the \vfil inserted by \newpage.
It's easy to make a onepage environment: \newenvironment{onepage} {\newpage\flushbottom \addstretch{\baselineskip} \addstretch{\abovedisplayskip} \addstretch{\abovedisplayshortskip} \addstretch{\belowdisplayskip} \addstretch{\belowdisplayshortskip} \setlength{\parskip}{0pt}} {\newpage} Here's the complete code with the environment: \documentclass{article} \usepackage[pass,showframe]{geometry} % just to show the page is filled up \usepackage{kantlipsum} \newcommand{\addstretch}[1]{\addtolength{#1}{\fill}} \newenvironment{onepage} {\newpage\flushbottom \addstretch{\baselineskip} \addstretch{\abovedisplayskip} \addstretch{\abovedisplayshortskip} \addstretch{\belowdisplayskip} \addstretch{\belowdisplayshortskip} \setlength{\parskip}{0pt}} {\newpage} \begin{document} \begin{onepage} \kant*[1] \begin{equation} E=mc^2 \end{equation} \kant[2-3] \end{onepage} \begin{onepage} \kant*[1] \begin{equation} E=mc^2 \end{equation} \kant[2] \end{onepage} \end{document} Works like a charm. Nothing to do with your solution, but your file as it is produces this error message, `(/usr/share/texmf-texlive/tex/latex/expl3/l3file.sty) ! Undefined control sequence. \ExplFileName l.30 ...e}{\ExplFileVersion}{\ExplFileDescription}`. Anything from my version of LaTeX? Tested with lipsum. It is like a new toy for a child to see how the spaces adjust. @MMA Since kantlipsum uses expl3, the error message probably means you have to update your TeX distribution. Just hope @egreg never sees this result \documentclass{article} \def\a{one two three four five six} \def\b{\a\a\a\a\a\par\a\a\a\a\a\a\a\a\a\a\a} \flushbottom \begin{document} \setlength\baselineskip{\fill} \setlength\lineskip{\fill} \setlength\parskip{\fill} \b\b \end{document} Are you sure you even need \flushbottom? 
raggedbottom only adds some fil which is infinitely smaller than your fill anyway ;-) Well I started off timid (with finite stretch) but I got brave before posting :-) @DavidCarlisle Spent some time watching (playing, actually) how the space adjusts when I add or delete text. What do you do when you get two acceptable solutions? @MMA If you get two acceptable solutions, and one is mine, then the obvious course of action is surely to accept mine :-) @DavidCarlisle Give me some time to savor this sweet dilemma. @MMA oh no you gave it to egreg :( @DavidCarlisle You know, an accepted solution means the one which best suited the OP... :-) Try this example: \documentclass[a4paper]{article} \usepackage{etoolbox,lipsum} \patchcmd\newpage{\vfil}{}{}{} \flushbottom \begin{document} \baselineskip \the\baselineskip plus .1pt \lipsum[1] \lipsum[2] \end{document} \flushbottom makes LaTeX fill the page to the bottom on a page break. The patch to \newpage is necessary to make it work also for a one-page document. The modification of \baselineskip is probably not as elegant as it could be and will vanish on font switches. You probably know more about modifying it properly than I do. Note that \parskip has 1pt of stretchability, so interline space stretches 1/10 the amount of inter-paragraph space. only 1pt?, so timid:-) @DavidCarlisle Not my decision, it's in the LaTeX sources. So I kept it for compatibility reasons ;-) @StephanLehmke Are not the inter-paragraph spaces a bit out of proportion? Well of course this depends on the amount of material on the page. If you require the page to be filled up completely, the space has to go somewhere. Of course, if you are using equations and suchlike, there will be more "stretch points". For instance, \displayskip has 3pt of stretchability, so the space around equations will stretch three times as much as the inter-paragraph space. The plus .1pt component I put in \baselineskip only fixes the relation of stretchability to 1:10.
You get a different relation by using a different value. @MMA If you use \baselineskip \the\baselineskip plus .5pt the inter-paragraph space is "only" double the inter-line space. Making \baselineskip stretchable is overkill, and it is typographically poor practice unless you're only generating a single page. If you have multiple pages, a flexible baselineskip will change the amount of black on the page which looks unpleasant, even for small changes. You can use \vfill in any context, but I would recommend using as light a touch as possible: \flushbottom will ensure bottom-aligned text, suppressing the \vfil that is otherwise inserted at the bottom of the page and leaving you free to use \vfil for your own vertical space (or to exercise finer control with expressions like 10pt plus 1fil). I suggest you identify all places in your content where it's ok to add a bit of stretch, and arrange to have a \vfil in each one. That way the regular interlinear space (\baselineskip) can stay constant. Lists and the like already introduce stretchable space, but you may need to tweak that in order to get the whitespace to be evenly distributed.
Cool White LED bulbs: Are they "full-spectrum"? I don't have much knowledge of LED lights and find the technology quite fascinating (yes, I know, it is not that new!). I have read that in order to create "white" light, LEDs actually need to emit light across the full spectrum. My question(s): How do LED light bulbs (cool white or warm) generate white light? What would be the light spectrum for cool white LED light bulbs? What would be the light spectrum for warm white LED light bulbs? Thanks! P.S. This is my first question here. If it should be forwarded to another SE site, please let me know! P.P.S. I have done my research BEFORE asking but could not find a technically accurate answer. "I have done my research BEFORE asking but could not find a technically accurate answer." What is a technically accurate answer? Your first question has two answers at Wikipedia. This search seems to find several candidates for your second question: http://www.google.com/search?q=light+spectrum+for+cool+white+LED+light+bulbs&client=safari&rls=en&tbm=isch&tbo=u&source=univ&sa=X&ei=osrwU9CbOurG7Aas1oCYAQ&ved=0CEIQsAQ&biw=1216&bih=760 In what way does that not meet your needs? gbulmer, first, those spectra are for a few brands' products. Second, those advertised light spectra do not add up: an LED pushing light of a very narrow spectrum, with half of its energy converted to another wavelength and pushed through P filtering, should not be able to have that kind of broad spectrum. Please add that information to your question. I should add that to me, it makes sense to require a spectrum for each product, and if the manufacturers and labs do not publish, "that's just the way it is". What does "LED, pushing light of a very narrow spectrum" mean? What is very narrow? I do not understand any reason why a phosphor excited by a narrow band can't emit a broad spectrum. Fluorescent light works that way, doesn't it? That's exactly my point, gbulmer.
I do not find it to be "narrow spectrum" but that's what the 'net says. So I wanted to ask the people here and requested (kindly) some technical information that was legit and valid. P.S. I would personally classify very narrow as covering 1/6-1/8 of the visible spectrum, if it is even possible at all. It's been mentioned in many comments: the fact that an LED appears a certain shade of white to a direct observer just means that this light source has the same relative power intensities on your color receptors as would white sunlight or warm-white incandescent light. When considering color reproduction, these sources can still be terrible, as frequently observed for cheap LEDs or low-pressure gas lamps. The color rendering index (CRI) summarizes how close a light source behaves to a continuous-spectrum source. Good LED lamps use many phosphors to smooth the spectrum. gbulmer puts you on the right track. For the most part, "white" LEDs are nothing more than a single-color LED with a phosphor on them. The phosphor takes roughly half of the light from the LED and converts it to a second frequency of light. The two frequencies of light combine in our eyes and look to be some variation of white. A power LED I have emits yellow and purple to attain a "cool white". Warm white has more red in it. In short though, the color spectrum of white LEDs is generally horrible unless you get a really expensive one designed for full-spectrum use. In general, white LEDs consist of two-ish spikes of color in the spectrum with everything else very low in comparison. The result of a poor color spectrum is that some reflected colors won't even be present even though the light appears to us as white. You need 3 separate bands of color to be capable of simulating all visible colors due to human physiology. Note that even then, surface reflections on colors outside of the spectrum peaks will result in gray or dull looking colors. Thank you very much horta.
Why is this kind of curve called "very narrow", given that it appears to cover a broad area (especially the warm white ones)? @Phil compared to an incandescent lightbulb, the two peaks here are very narrow. Obviously, the blue/purple is a narrower peak. "You need 3 separate bands of color to be capable of producing all of the variations in between. With only two bands of color, you could be shining the light at something green and it'll come back looking dark grey." - That can easily happen with 3 bands too. If a bulb has distinct peaks in red, green, and blue (which mix to white in our eyes), true yellow or magenta surfaces may appear grey because there's no true yellow or magenta in the bulb output. @marcelm Good catch. I've clarified the difference of reflected light vs simulating colors. To extend horta's answer, you might want to take a look at the CREE guide to LED color mixing. As previously said, the two colors (blue and yellow) mix to create a white. This is shown below on the CIE 1931 color space: The mixed color (white) will be on a line between the two components (blue and yellow). The ratio of the intensities of blue:yellow determines the final color. Theoretically, you could achieve a white by mixing other colors (e.g. cyan and red). One of the advantages of a blue and yellow mix is that many standard "color temperatures" can be achieved. As you can see, the blue-yellow line is quite close to the line of standard color temperatures (the "Planckian Locus") tehwalris, this is brilliant AND amazing, thanks! Do you have a good source (preferably not 1000 pages long) where I can understand about how LED colours are set and how mixing is done on a single diode? 
(please keep in mind I'm not a scientist so I can not understand a complex white paper; something in simpler terms, if possible) This article could be interesting: http://www.ledsmagazine.com/articles/print/volume-10/issue-6/features/understand-rgb-led-mixing-ratios-to-realize-optimal-color-in-signs-and-displays-magazine.html It's important to note that the "averaging" of mixed colors only works when light is being observed directly. Illuminating a green object with a mixture of blue and yellow light will yield very different results from illuminating it with white light. This is wrong. Any point of the CIE map is a mix of three well-defined monochromatic sources. But you don't need to have three primary colors for an eye to see white, due to the fact that our eyes are only sensitive to three limited areas of the full spectrum (L, M and S cone cells; this is called "trichromacy"). It's sufficient to stimulate the cones in the right proportion to obtain a sense of white. This is the trick which makes a dual-peak LED appear fully white to our eye (but not to an animal eye, or to a measuring instrument). @supercat: A case of metamerism failure, which is very important for fabrics or paints. This door was likely the exact color in the workshop, but appears different in the sun. A headache which actually makes it obvious that our eyes do not see some portions of the visible spectrum well, and that our sense of color (and white) is based on only 3 narrow areas. See also. White LEDs are coated with phosphors that glow with the desired color temperature. "Cool-white" LEDs (more blue) have a color temperature above about 5000K, while "warm-white" (less blue) ones have a color temperature below about 5000K. Thank you TDHofstetter. Excellent answer. Can you tell me about the light spectrum though, and the effects of the phosphor coating? Much appreciated! The spectral performance of any given LED is very dependent upon the specific LED...
some emit over very narrow spectra and some emit over very broad spectra. The spec sheet for a given LED will usually show its spectral performance curves (similar to horta's picture above). A Google image search for "light spectrum for cool white LED light bulbs" gave e.g. http://www.enkonn-lighting.com/product/dimmable-led-light-bulbs-ceramic-cool-white/ and a Google image search for "philips light spectrum for cool white LED light bulbs" gave e.g. http://www.philipslumileds.com/technology/quality-white-light Without further information, I think the technique of searching the web and looking at the images works. So is this question complete? Both links are broken, so this link-only answer is incomplete.
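The blue-plus-yellow mixing geometry discussed above (the straight line on the CIE 1931 diagram) can be sketched numerically. The chromaticity values below are assumed round numbers for illustration, not measured LED data; the key point is that additive mixing happens in XYZ tristimulus space, and the resulting chromaticity falls on the line between the two sources:

```python
def xyY_to_XYZ(x, y, Y):
    """Convert chromaticity (x, y) plus luminance Y to tristimulus XYZ."""
    return x * Y / y, Y, (1.0 - x - y) * Y / y

def mix_chromaticity(c1, c2):
    """Additively mix two (x, y, Y) sources; return the blend's (x, y)."""
    X1, Y1, Z1 = xyY_to_XYZ(*c1)
    X2, Y2, Z2 = xyY_to_XYZ(*c2)
    X, Y, Z = X1 + X2, Y1 + Y2, Z1 + Z2
    s = X + Y + Z
    return X / s, Y / s

# Assumed round-number chromaticities for a blue pump and a yellow phosphor:
blue = (0.15, 0.06, 10.0)
yellow = (0.45, 0.52, 90.0)

x, y = mix_chromaticity(blue, yellow)
print(round(x, 3), round(y, 3))  # -> 0.303 0.294, on the blue-yellow line
```

Changing the luminance ratio of the two components slides the mix along that line, which is how a manufacturer can hit different color temperatures with the same two emitters.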
Backbone.js model not passing to view I'm starting out with backbone.js, building my first project with backbone-boilerplate. I have a module named Navitem with a view called Sidebar:

Navitem.Views.Sidebar = Navitem.Views.Layout.extend({
    template: "navitem/sidebar",
    tagName: 'ul',
    beforeRender: function() {
        var me = this;
        this.options.navitems.each(function(navitem) {
            // insertView from Layout datatype:
            // select the 'ul' in the sidebar view and append an Item with model navitem
            me.$el.append(new Navitem.Views.Item({
                model: navitem
            }).render().el);
        });
        return this;
    }
});

When the sidebar is constructed, a collection containing many Navitem.Model instances is passed into it. After debugging, model: navitem seems to be working correctly and passes the right navitem model to the new Navitem.Views.Item({...}). That class looks like:

Navitem.Views.Item = Navitem.Views.Layout.extend({
    tagName: 'li',
    template: 'navitem/default',
    events: {
        click: "navRoute"
    },
    navRoute: function() {
        app.router.go(this.model.get('target'));
        return this;
    }
});

The template looks like <a href="#"><%= model.get('label') %></a>. For some reason, when I call Item.render() in the first code block, it complains that model is undefined in the view. I can't seem to figure out why this is happening. Any thoughts? Might be related to what was answered here: Backbone.js: Template variable in element attribute doesn't work. You need to pass the model as plain JSON to your template (unless maybe you're using another version?) Hope this helps! Unfortunately it does not. I found that this is related to backbone.layoutmanager. It does not seem like others are having this problem; does my code look correct? Did you try debugging into the render function? Your code looks correct, but maybe if you post the Navitem.Views.Layout class we'd have a better idea. I'm doing something similar in a program that I wrote using Backbone Boilerplate and Backbone LayoutManager.
Try adding a serialize function to your Navitem.Views.Item view:

// provide data to the template
serialize: function() {
    return this.model.toJSON();
}

and then in the beforeRender function of Navitem.Views.Sidebar:

beforeRender: function() {
    _.each(this.options.navitems.models, function(model) {
        var view = new Navitem.Views.Item({ model: model });
        this.insertView(view);
    }, this);
}

and the navitem/default template could look like this:

<%= label %>

This is untested code (using your views and collections) but doing this has been working for me.
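To see why the original template blows up, note that underscore-style templates are rendered with a plain data object, and the keys of that object become the variables available inside the template. A minimal stand-in (renderTemplate is a hypothetical helper for illustration, not the Backbone or LayoutManager API) makes the contract visible:

```javascript
// Tiny stand-in for _.template to illustrate the data contract
// (hypothetical helper, not Backbone/LayoutManager code).
function renderTemplate(tmpl, data) {
  return tmpl.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
    return data[key];
  });
}

// serialize() hands the template a flat object like this...
var navitem = { label: "Home", target: "home" };

// ...so the template refers to `label` directly, not `model.get('label')`:
console.log(renderTemplate('<a href="#"><%= label %></a>', navitem));
// -> <a href="#">Home</a>
```

If no serialize() (or equivalent data object) is provided, the template sees no `model` variable at all, which matches the "model is undefined" error in the question.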
Empty SDL2 window taking lots (40%+) GPU to render? I've been trying to learn how to use SDL2, and am trying to follow the tutorials here for that. From example 7 (texture loading / hardware rendering), I've put together this stripped down example. As far as I can tell, the code is only calling SDL_RenderClear() and SDL_RenderPresent() in the render loop, VSYNC'd to my monitor (100 Hz in this case). However, I'm seeing Windows report 40% GPU utilization when doing this, which seems high for doing nothing. My GPU sits at 1-2% utilization when the SDL2 program is not running. Removing just the SDL_RenderPresent() call brings the usage all the way down to 1-2% again, but then nothing gets drawn, which makes me think that the issue is either in SDL2's rendering, or (much more likely), how I've configured it. Is this high GPU utilization (when doing nothing) expected? Is there anything I can do to minimize the GPU utilization (while still using hardware rendering)?

===================================

Machine details:
Windows 10
Threadripper 3970x CPU (3.7 GHz, 32 core)
256 GB DDR4 3200 MHz RAM
1x RTX 3090 GPU
3x 1440p 100 Hz monitors

GPU utilization (shown via Task Manager and Xbox Game Bar):

Code:

#define SDL_MAIN_HANDLED

/* This source code copyrighted by Lazy Foo' Productions (2004-2020)
   and may not be redistributed without written permission. */

// Using SDL, SDL_image, standard IO, and strings
#include <SDL.h>
#include <stdio.h>
#include <string>

// Screen dimension constants
const int SCREEN_WIDTH = 1920;
const int SCREEN_HEIGHT = 1080;

// Starts up SDL and creates window
bool init();

// Frees media and shuts down SDL
void close();

// The window we'll be rendering to
SDL_Window* gWindow = NULL;

// The window renderer
SDL_Renderer* gRenderer = NULL;

bool init()
{
    // Initialization flag
    bool success = true;

    // Initialize SDL
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
    {
        printf("SDL could not initialize! SDL Error: %s\n", SDL_GetError());
        success = false;
    }
    else
    {
        // Create window
        gWindow = SDL_CreateWindow(
            "SDL Tutorial",
            SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
            SCREEN_WIDTH, SCREEN_HEIGHT,
            SDL_WINDOW_SHOWN);
        if (gWindow == NULL)
        {
            printf("Window could not be created! SDL Error: %s\n", SDL_GetError());
            success = false;
        }
        else
        {
            // Create renderer for window
            gRenderer = SDL_CreateRenderer(gWindow, -1,
                SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
            if (gRenderer == NULL)
            {
                printf("Renderer could not be created! SDL Error: %s\n", SDL_GetError());
                success = false;
            }
            else
            {
                // Initialize renderer color
                SDL_SetRenderDrawColor(gRenderer, 0xFF, 0x00, 0x00, 0x00);
            }
        }
    }

    return success;
}

void close()
{
    // Destroy window
    SDL_DestroyRenderer(gRenderer);
    SDL_DestroyWindow(gWindow);
    gWindow = NULL;
    gRenderer = NULL;

    // Quit SDL subsystems
    SDL_Quit();
}

int main(int argc, char* args[])
{
    // Start up SDL and create window
    if (!init())
    {
        printf("Failed to initialize!\n");
    }
    else
    {
        // Main loop flag
        bool quit = false;

        // Event handler
        SDL_Event e;

        // While application is running
        while (!quit)
        {
            // Handle events on queue
            while (SDL_PollEvent(&e) != 0)
            {
                // User requests quit
                if (e.type == SDL_QUIT)
                {
                    quit = true;
                }
            }

            // Clear screen
            SDL_RenderClear(gRenderer);

            // Update screen
            SDL_RenderPresent(gRenderer);
        }
    }

    // Free resources and close SDL
    close();

    return 0;
}

Writing to ~830 MiB of VRAM every second is a bit more than "doing nothing" :) I just tried removing the SDL_RenderClear() call, keeping only the SDL_RenderPresent() call, which should remove the internal framebuffer getting overwritten, but even that still takes the same 40% or so GPU to run. I wonder what else could be causing the slowdown? I've had SDL_RenderPresent waste 100% CPU while waiting to render, when vsync is enabled. (Actually, it was SDL_GL_SwapWindow, but it should be the same thing.) Try predicting how long SDL_RenderPresent is going to wait, then sleep, say, 1ms less before calling it.
@HolyBlackCat OP's complaining about GPU usage; their CPU usage is 1%.
A UI/UX developer with color-blindness. Good or bad idea, or maybe a challenge? I'm working as a front-end developer and have mostly worked with JavaScript technologies. But when it comes to UX, and choosing colors in particular, my color blindness means I can't judge them well. I want to create good UI designs with great UX. How can I do this? Is this possible for me? P.S. Are there any UX designers with color-blindness? Even if you couldn't create the color palettes for the design, which I'm sure you could depending on the type of color blindness, there are still many other aspects of UI and UX development you could work on! A decent part of UI/UX is making sure interfaces work well for people just like yourself. I would think that would be a valuable addition to a team. Work for a company that requires 508 compliance. Your colorblindness will be an asset. Agree with @Doyle ... though the key word here is team. Don't try to do it all by yourself. @jpmc26: OP didn't say they are American. @BarryTheHatchet Fair enough, but I think you get the point. I used to work on a scrum team with a developer who was totally blind. He was an amazing asset. He wasn't the only front-end developer on the team, but his contribution was very important. Without him we didn't do accessibility very well. In short - use your colour blindness as a strength. If you become an expert in accessible websites then you have a great skill that is both rare and in demand. Did you know that Mark Zuckerberg is red-green color blind, and that's why he chose blue as the dominant color of Facebook? http://flatuicolors.com/ -- pick a good palette and research color emotion. (This is what helped me. I'm partially colorblind.) There is no problem with working as a UX/UI designer, as choosing color is just a minor part of the usability process. There are lots of other activities that the UXer should do, like usability testing, checking analytics, conducting A/B tests, and writing reports.
Choosing color is more like a visual designer's work. People often confuse the two professions. Visual designers create logos, website designs and color schemes, and rarely test which one works better. UX specialists, on the other hand, create designs based on user research and "best practices". Generally, UX designers make much more informed choices about an interface compared to visual designers. If you have a passion for making an interface easy to use for inexperienced users, give this profession a go. Not to mention that it is well paid and the demand for UX designers is increasing. I like this answer, but it's at least partially inaccurate. Visual designers creating logos and color schemes often do so based on research. In fact, there are companies that literally do nothing but research logo and website designs and the impact that those designs have on marketing and user interaction. Color schemes go hand-in-hand with that. Yes, you are right, but that's a minority case. These companies are an exception. I have to agree with Jesse on this, great answer +1 except for the part about visual designers not basing their work on research. Sure, they can just freehand what they think looks nice, or they can study the market/end user and see what emotions/thoughts it evokes. What do you think graphic design professors teach their students, "just do whatever you think's pretty"? No, they're taught color theory, drawing attention using contrast, etc., which is all found through research. Changed the answer based on your comments. Does it sound good now? Is colour theory, drawing attention using contrast, etc. found through research, though? It seems to me that those are established rules.
They're not just going out and blindly making pretty pictures; they are basing what they are doing on knowledge (much of which comes from research that somebody has done), but they're not usually actually going out there and doing user research. The quality of the sentiment and message here is reduced, I think, by trying to draw an "us vs them". It likely feels supportive, but none of these jobs are really so cleanly defined as to allow us to compare them like that. Designers and artists both may be incredibly mathematical or structured in their approach. Cutting the "designers just guess" bit would be a good move, IMO. I've been doing front-end work for a decade, and I have deuteranopia or deuteranomaly (red-green color blindness). It has never been a problem. I largely rely on color codes and location/proximity in color picker UIs to identify colors. When doing a design from scratch, I will often look at pre-existing palettes for inspiration. I will also use an eyedropper tool on the organization's logo for further inspiration. If the organization has a style guide that includes specific organizational colors, then that's even better. If I am handed a pre-existing design, I can re-use the existing colors. If I need to add new colors, I can go by the color codes and my experience working with various palettes to determine what works well. One benefit of being color blind is that when choosing colors to represent data, I automatically pick colors that are easy to comprehend for color blind and color sighted users alike! For instance, the particular shades of red and green I would pick for a stop/go status indicator would be more perceptible to other color blind users. Or, the colors I choose for a graph or map would be easy to correlate with the key. Jeff, would you be so nice as to tell me a good color pair for red and green (false/okay) indicators? Jeff hit on one of the things I thought about when considering this question.
A color blind designer, while taking cues from existing or provided colors, is more likely to make a more accessible site because those of us with "normal" color vision wouldn't notice how similar two colors might be in some circumstances. FrankL - You could ask an entirely new question on that, since so many factors are involved. There are several types of colorblindness. Deuteranomaly (confusing red vs green) is the most common. And then those with deuteranomaly have varying degrees of severity. There's really no right answer to the red/green indicator question, especially if the visitor has deuteranopia (total red/green blindness). The best approach is to ensure that there are other indicators that do not rely on color (e.g., "Pass" vs "Fail" text overlaid on the red and green). This automatically makes it accessible for those with total blindness, too. It helps me when the lightness and saturation numbers differ greatly between the two (e.g., light green and dark red), and when the colored area is as large as possible (e.g., apply colors to background, not text). For me, #00ff00 and #990000 work together well. That's an ugly neon green, but #00cc00 would work with that red too. But that's just for my colorblindness. This site helps in identifying appropriate colors to use when representing information: http://www.iamcal.com/toys/colors/ Note that there's a distinction between decorative colors and colors that are used to represent information. A colorblind visitor would not likely be adversely affected if your site uses a general color theme that they have difficulty seeing. What would affect them is if you use inappropriate colors to represent important information, because that would be like failing to use alt tags on images for blind users. So you don't have to be careful with every color you choose. You only have to be careful with the colors that directly affect functionality or understanding of information. 
Also consider using shapes: red X vs green check; red octagon vs green circle; red thumbs down vs green thumbs up. Limitations are limiting Everyone here is very nice, but they're dodging one important point: Being a color-blind UXD will limit your ability to be an all-in-one product designer. Everyone has their limits. Unlike you, I do not have a solid engineering background. I work closely with a software architect throughout the discovery phase of a product or feature and with the engineers throughout development. This compensates for my limitations, ensuring I understand our technical constraints and opportunities. Color-blindness will not hinder you in areas like info architecture, application flow, and user journeys. But you should work with someone else when it comes time for the polished side of the product. Like any of us, you just have to find a role that allows you to make the most impactful contribution with the skills you have. In response to the disgruntled commenters ... There are tools to validate various accessibility issues; there's no need to employ an "expert" for every possible pitfall. I use an application to check designs against all forms of color-blindness. I can use that tool because I do not have any form of color-blindness myself which would corrupt my view of other forms. I also know several color-blind people that can provide feedback. Be judicious with the idea that you can test design. You can comply with known accessibility requirements. You can check against notable heuristic principles. But there is also an intangible, unquantifiable aspect to product design. Ask Google about the transition from "scientific" design to the process that resulted in Material. They didn't get design when they treated it like a thing to be tested down to the finest detail. Now they've beaten Apple at their own game.
If I worked at a company that let me make all the decisions without passing them past anyone before going live then I don't think that's the best sort of company to work for. A good UXer will run their design decisions through testing (QA and usability) before going live. Any colour issues should be detected long before they become a problem. It shouldn't be an issue to the end product. @JonW So the executives, QA, engineering, etc should be the experts on final product design? I hope to never work in such a place. My point is, he should work in an environment where there are other roles to handle final product design concerns like color. Then he can focus on higher-level concerns.. This is reasonable. In a nutshell, you can still be an excellent UX designer, but there are certain jobs you will be less qualified for (the all-in-one guy, for example). I think it's unrealistic not to admit this (I also have a color deficiency). @JonW there are an AWFUL lot of companies out there that don't have a separate QA team, AND an engieering team, AND a production team... while some job requirements are pretty ridiculous when it comes to expecting expertise in many different areas, there are just a lot of small companies out there. WTF is a production team? :D -1: This misses the point that colour schemes that don't work for the colour blind is more of a UX fail than colour schemes that look slightly odd to the non-colour blind. You're actually adding more value to the company because they don't have to worry about accessibility issues. This doesn't diminish the usefulness of this answer, but "the all-in-one product design guy that many companies are looking for these days" sounds years out of date to me -- the trend across the board (except in the smallest of early phase startups) is toward larger teams with greater specialization. 
@plainclothes - you are making another gross assumption - that someone versed in UI design is also artistically talented enough to make the best palette choices. That's simply not true. Go to any mid- to large-sized software company, find the team responsible for UI/UX design, and look at their wardrobe. I give 50/50 odds that you wouldn't want them dressing you. If that's the case, they may not be cut out for making color choices. The product that I perform QA for fell victim to this a few years ago with some REALLY bad color choices. They've since been fixed, but, yeah... not good. @JonW - the point this post is making is that you don't necessarily want to employ someone whose work is always picked up by QA because of colour issues, and of course you may not have a QA team. I'm not sure I agree with it, but it is a valid viewpoint. @MattWilko Not exactly. I was trying to say that a good UXer will ensure their designs get tested - it's part of the UX process itself, so if you skip that for whatever reason then you're doing the designs a disservice. Even people with 20/20 vision and no impairments can make bad design decisions, so you'd want to ensure everything is tested to one extent or another. @deworde UX designers who do not have color blindness are still capable of ensuring that the colors chosen do not pose accessibility issues. A UX designer with color blindness will have an easier time noticing issues for their particular type of blindness, but they will also be unable to do as well matching some colors that don't pose an accessibility issue, but which may add aesthetic appeal. It's a very minor point, but, imo, still valid. And "they don't have to worry about accessibility issues"? No. They'll just have an easier time spotting one specific subset.
Color blindness may hinder your ability to produce some visual designs and maybe some parts of a 'pretty' UI, as color goes a long way to aesthetic appeal, BUT, as a UX designer I would go so far as to say that you can use color blindness to your advantage. Around 8% of men and 0.5% of women are color blind, and as a UX designer, it is our job to make sure that we do not rely solely on color as a visual cue or indicator. Move into the UX professional world and ensure that decisions like this (based solely on color) aren't being made, and lean on your own experiences to come up with better solutions that don't rely on color only. Many of us can only use research and foresight to gauge the effectiveness of some of the color cues that we provide, and you have first-hand experience of what does and doesn't work. I once met a blind guy who was one of the most informed UX/Accessibility professionals I'd ever spoken to. So in answer to your question, 'Yes'! You most certainly can become a UI/UX professional. This answer seems to be assuming that there's exactly one variant of color blindness. Even at the coarsest level there are three different variants of colorblindness, so while the OP certainly would have an easier time with one and only one form, it wouldn't help for the others. There are better ways to handle such accessibility issues (it's after all just a question of contrast between colors; tools can check for that) @Voo, very true, but it doesn't change the fact that someone who is color blind can still become a UI/UX professional. I'd like to think that this answer works at varying levels for any kind of color blindness. Oh, as long as someone else takes care of the colors - which is a pretty easy arrangement in larger teams - I don't see any reason why not. I just wanted to make it clear that there are different kinds of color blindness, so you'd still need tools to check for all the other variants; it's not that big an advantage imo.
In the end this shouldn't be a big deal in the hiring process, assuming a larger team. I worked with a front-end developer with color blindness in the past. It never was a problem. You may have to check if the colors used work for the larger group of users, but every UI/UX professional should check how a design looks and works for all kinds of users. No difference in my opinion. Why does a front-end engineer care about color? It should all be spec'd by that point. @plainclothes: Because a front-ender works with color, should be involved in the design process, and can be a designer as well, especially in smaller companies. Interesting perspective. In my experience, front-end devs who dabble in product design are about as useful as product designers who dabble in dev ;-) You will be the most precious designer in your company! In my software project, I always struggle to find solid advice about color choices. There are many guidelines and tools to measure how accessible a certain palette is, but applying them is so tedious and explaining the results so difficult. Typically, what works well is to find a color-blind user and have them report the problems they find. Your physical situation gives you the gift of automatically fixing a wealth of accessibility issues in your products without even thinking. Make sure to put this to good use. Of course, there are different kinds of color-blindness, and what works for one (kind of) color-blind person may not work for another. However, the worst accessibility issues are found by any of them. To check color-blindness issues, it's better to run your product through a simulator where you can check all anomaly / deficiency conditions. I worked with a UX designer with serious color blindness. We would decide the color palette together, then wireframe the product in grayscale. Then using a color palette with lighter/darker colors from the palette works pretty harmoniously. I'm mildly red/green colorblind.
As you're probably aware, about 9% of males have some degree of deuteranomaly. If that's your flavor of color blindness, you are absolutely an asset. Your first task is to become the local expert on accessibility, because that's part of UX design. Google 'WCAG 2.0'. https://www.google.com/search?q=wcag+2.0 Within that search, the following titles have been particularly useful for me: https://www.w3.org/WAI/WCAG20/quickref/ http://webaim.org/standards/wcag/checklist You'll also want to bookmark this tool. It can give you detailed WCAG reports on text/background contrast http://webaim.org/resources/contrastchecker/ Last Fall we inherited a very stylish website with 10pt light green hairline text on medium blue. Horrendous readability for anyone over 40, let alone the target audience, well-heeled older tourists. It took me 2 weeks to tweak the contrast and general readability, but the immediate wins came from Contrast Checker. Because visual designers usually are quite territorial about their inaccessible designs, I recommend that you try to find more contrasty colors for text/background, using the original color as the base. I find http://www.colorhexa.com's 'shades and tints' feature invaluable to find darker/lighter shades of colors, for example http://www.colorhexa.com/66aa33. Switching between ColorHexa and Contrast Checker will help you keep the text color scheme WCAG2-compliant without too many artistic arguments. I just noticed that ColorHexa.com has added simulators for the major color-anomalous visions. WOOT! • This means all the lucky people who don't have color-blindness can just goddam do a good color scheme on their own -- with you as the local test subject. I have been color deficient (trouble seeing shades within colors) and worked extensively in both newspaper production (working with million dollar ad spreads that had to print EXACTLY per our 4-color specs) as well as a web design/UX professional for over 25 years and this has NEVER limited me. 
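The checks that Contrast Checker performs come straight from the WCAG 2.0 formulas (relative luminance of sRGB, then (L1 + 0.05) / (L2 + 0.05)). A small sketch, using the #66aa33 green mentioned above:

```python
def _linearize(c8):
    """Linearize one 8-bit sRGB channel per the WCAG 2.0 definition."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # -> 21.0 (maximum)
# #66aa33 on a white background fails the 4.5:1 AA threshold for body text:
print(contrast_ratio((0x66, 0xAA, 0x33), (255, 255, 255)) >= 4.5)  # -> False
```

This is exactly the kind of check that catches combinations like the light-green-on-blue hairline text described above before they ship.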
Let's split UI and UX into separate answers: UI: As far as being a Picasso with colors, you might not be best suited for that, but if you have a solid understanding of color theory, and can see the basic color groups, you can be successful in making decisions that will complement or enhance an established brand. UX: Your color blindness will have ZERO impact. Your ability to design simple, effective flows is more about critical thinking and designing simplicity than it is about being able to see colors. I've used my color deficiency to a huge advantage numerous times in my career, because I know I can always solve the UX challenge, keeping (and often saving up for) the hand-off to my visual design colleagues and teams. An argument I've often used is to keep UX black and white; that way it stops conversations about what specific color something should be. Keep the focus on killer experience design, and defer to a great creative, and you will have an amazing career! I think it's extremely important to note that this should give you an advantage over most for accessibility and usability: product designers should take more care in understanding how certain physical limitations change the way a product looks. Color isn't just about picking 'pretty colors' that make things 'pop.' It is as much an art as it is a science. Good color palettes are typically understandable to color blind people. Invest the time into learning the math behind color; leverage it when you need to work with color and make sure to get feedback from peers who aren't color blind, and you and your colleagues will not look at this as a hindrance but a beneficial perspective. I'd also like to re-iterate what's been mentioned above: great UX design does not require a mastery of visual design, but you won't go far without working with others who focus on visual design.
Some quick links on it: https://vis4.net/blog/posts/mastering-multi-hued-color-scales/ http://colorbrewer2.org/ UX is not just about visual design, it's about the flow. You can majorly contribute to flow and experience for that matter and make use of a decent designer to help with colors and other stuff What an interesting question. Most developers given the freedom, have used color cues. We have several options. We can make the color cue redundant or we can ensure that there are no more than about three hue transitions on the element (think shades of grey). In my experience, this should even be visible to most forms of color blindness. The 'Get a girlfriend' answer was good advice, too. A nice screen shot of each of the Windows Color Selector for each color and you would be g2g. You are absolutely qualified to make sure your software is ready for someone with this physical challenge. We are ALL supposed to work toward making our software accessible to many different users and their particular needs. It depends on how well you can adapt and how many people are willing to support you. From a "color-seeing perspective", the colors that we see can be described as "arbitrary" and "following patterns". Arbitrary in the sense that people have come to prefer certain color over others - in the case of text, our "advisors" are always insisting on gray text color. If it is dark black, they get annoyed. (As a programmer, you could consider me color-blind in the sense that I don't really care a whole lot, I just change the color when it is asked for). "Following a pattern" in the sense that they are choosing gray in order to prevent the text from appearing too prominent in the page. So as a programmer, I just change the color and to me it is arbitrary. If I wanted to improve UX, I would follow the above rule with graduations. That is a colorblind example, but colors are the same. 
They follow certain rules: certain colors go with certain colors and not with others, and people like certain colors and not others. There are 256^3 colors that are possible on the web, but I would speculate that most of the colors used are in a very small range. Which means that there are a number of "acceptable" blues, "acceptable" yellows and "acceptable" reds which you can learn and which may even be specified for you. You could learn maybe 3 of each and learn rules of color from others, either online or by your own research. Do you know that "red" is a very "dangerous" color and has no place except for alerts, warnings, negativity, etc.? That is a complex science, but you might be surprised to find that color-seeing is predictable. If you are good enough at UX, then you will just have someone else around you who fixes the colors, and some companies will support you in this. But ideally, you can just learn when to use dark blue, when to use light blue, the different shades of blue, etc., and the colors that nobody wants to see, like turquoise. I'm not going to write that for you, but with difficulty, you could find that and begin producing designs, pretending like you know the colors. Take a look at this page. How many colors? I count: 2 yellows (answer box and tooltip), 2 blues, 1 blue so dark it looks like black (the "X" in UX), 1 dark yellow (brownish - the ask question button), 2 greens and 1 red. Difficult, yes, but not rocket science. Additionally, the answer by "Miles McCrocklin" has a rule for you. Check it out and learn it.
Trigger click after time delay with jQuery I would need to trigger a pop by replicating a click event after 10 sec using the following HTML with jQuery, only once for the user. Is there a click-trigger snippet that might do this? And if possible, can I set a cookie in the code so the pop is not shown again to the same user? Thanks. <a href="https://example.com" class="register" data-target="MyRegister" data-tooltip="Register"></a> Note: I cannot use the class, as that will trigger the link, not the pop. It needs to be done using the data-target and I don't seem able to make it work. Also, please note I cannot use another pop; it needs to be the one above! This is to avoid suggestions of using third-party pops etc. If someone can help with a jQuery snippet for this, that would be great, thanks. Use fancybox for your pop-up, and for cookies read this: https://www.w3schools.com/js/js_cookies.asp Never mind, I solved it very simply and actually the selector method works! For anyone interested, here is the code: setTimeout(function() { $('.register').trigger('click'); }, 60000);
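The cookie part of the question went unanswered. As a hedged sketch (the cookie name `popupShown` and the helper names are made up for illustration), the accepted setTimeout snippet could be gated behind a cookie check so it only ever fires once per user. The cookie-parsing helpers are plain JavaScript so they work outside a browser; the jQuery wiring is shown in comments:

```javascript
// Parse a raw cookie string ("a=b; c=d") into an object.
function parseCookies(cookieString) {
  const out = {};
  for (const part of cookieString.split(";")) {
    const [k, ...v] = part.trim().split("=");
    if (k) out[k] = v.join("=");
  }
  return out;
}

// Show the popup only if our (hypothetical) flag cookie is absent.
function shouldShowPopup(cookieString) {
  return !("popupShown" in parseCookies(cookieString));
}

// In the browser (assumes jQuery and the .register anchor from the question):
// if (shouldShowPopup(document.cookie)) {
//   setTimeout(function () {
//     $('.register').trigger('click');
//     // Remember for ~1 year so the same user is not shown the pop again.
//     document.cookie = 'popupShown=1; max-age=31536000; path=/';
//   }, 10000); // 10 seconds, per the question
// }
```

The delay and cookie lifetime are arbitrary choices here; localStorage would work just as well as the flag store.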
Parsing a string representing a float *with an exponent* in Python I have a large file with numbers in the form of 6,52353753563E-7. So there's an exponent in that string. float() dies on this. While I could write custom code to pre-process the string into something float() can eat, I'm looking for the pythonic way of converting these into a float (something like a format string passed somewhere). I must say I'm surprised float() can't handle strings with such an exponent, this is pretty common stuff. I'm using python 2.6, but 3.1 is an option if need be. Nothing to do with exponent. Problem is comma instead of decimal point. >>> float("6,52353753563E-7") Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: invalid literal for float(): 6,52353753563E-7 >>> float("6.52353753563E-7") 6.5235375356299998e-07 For a general approach, see locale.atof() Correct. Incidentally, if you type "6,52353753563E-7" into the Python prompt, it gets parsed as the tuple (6, 5235.3753563) -- fairly obvious why, it just looks odd. On a related note, can I somehow make python recognize the comma? C# and Java can do this. Many places in the world use a comma as the decimal separator. Your problem is not in the exponent but in the comma. with python 3.1: >>> a = "6.52353753563E-7" >>> float(a) 6.52353753563e-07
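To make the accepted fix concrete, here is a minimal sketch (the helper name is mine) of the comma swap, with `locale.atof()` noted as the more general alternative when the whole file follows one locale's conventions:

```python
def parse_decimal_comma(s):
    """Parse a float string that uses a comma as the decimal separator."""
    # float() only accepts "." as the decimal point, so swap the comma first.
    return float(s.replace(",", "."))

# The general alternative is locale.atof(), e.g. for German-style numbers:
#   import locale
#   locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")  # locale availability varies by system
#   locale.atof("6,52353753563E-7")
```

The simple replace is fine as long as the data never uses "," as a thousands separator; `locale.atof()` handles that case too.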
UIStackView resizing my views I'm making a custom keyboard, and wanting a label and a textField centered in the space left over the keyboard, I figured the proper way would be to incorporate them in a UIStackView, and then center the stack. However, I'm having some issues with the stack resizing my textField, causing whatever text I enter to be clipped. I tried adding compression resistance and a number of different solutions, but it's clear to me that I'm missing some information about how UIStackViews work. Usually I make them work, but the whole resizing part I don't understand. The first two screens are without a stack, adding them as subviews to the view controller's view, and the second screenshot shows the stackView resizing my textField every time the keyboard is resigned. How can I stop the stackView from resizing its contained views? Apple doesn't like dynamic StackViews: "In general, you should not use dynamic stack views to simply implement a scratch-built table view clone." Read the Apple Documentation on UIStackView for details. It seems UIStackView is inherently an Auto Layout component, and when it adds your arranged subviews, it automatically positions them using constraints. Since the UIStackView adds constraints of its own, the best way I found to stop it was to simply set the subviews' constraints explicitly: textField.widthAnchor.constraint(equalToConstant: 300).isActive = true
How can I keep the visitor on an empty custom post-type archive page in WordPress and prevent archive.php from loading? My WordPress post-type archive page renders archive.php when there are no posts. When my custom post-type has a published post, the archive-post-typeX.php is loaded correctly. However, when there are no posts the rendering hierarchy takes over and archive.php is loaded instead. The custom post-type is registered in a plugin and I use this code to register the archive template: function caw_appr_vac_archive_template($caw_appr_vac_template_archive) { global $post; $plugin_template_dir = WP_PLUGIN_DIR . '/CAW Apprenticeship Vacancies/templates/'; if (is_archive() && get_post_type($post) == 'caw_appr_vac') { if (file_exists($plugin_template_dir . 'archive-caw-appr-vac-CAW.php')) { return $plugin_template_dir . 'archive-caw-appr-vac-CAW.php'; } } return $caw_appr_vac_template_archive; } add_filter('archive_template', 'caw_appr_vac_archive_template'); How do I prevent this and keep the visitor on the empty archive-post-typeX.php template? I would like to keep the standard functionality for other post-type archive pages and just change it for this specific template. I tried the add_action('template_redirect', 'function'), but this results in an endless redirect loop and now I am out of ideas.
Is there an example of a group with cardinality $\aleph_1$? There are many simple examples of groups with infinite cardinality which are known outside of set theory, such as $\mathbb Z$, which has cardinality $\aleph_0$, and $\mathbb R$, which has cardinality $2^{\aleph_0}$. Assuming the continuum hypothesis, $\mathbb R$ is also a familiar example of a group with cardinality $\aleph_1$. However, is there a relatively simple example of a group where we can prove, in $\mathsf{ZFC}$ alone, that it has cardinality $\aleph_1$? The free group on $\aleph_1$-many generators? @NoahSchweber, :) :) But, yes, very true. :) @NoahSchweber: I'm not overly familiar with the free group construction, since I haven't yet got to that part of the algebra book I'm reading. However, I suppose that example is about as simple as one could hope for, so you could post it as an answer, if you wish. Sure, of any infinite cardinality. See https://math.stackexchange.com/questions/1296889/fields-of-arbitrary-cardinality for a stronger statement. See also https://math.stackexchange.com/questions/105433/does-every-set-have-a-group-structure. So the question here is a duplicate. @Joe -- a possibly more familiar version of Noah's answer is any $\mathbb{Q}$-vector space with $\aleph_1$ many basis vectors. In fact, an $\aleph_1$-dimensional vector space over any finite field would also work. @MartinBrandenburg: Hmm. On the one hand, the examples you have provided are very useful. On the other hand, I'm curious to see if there are any non-abelian examples. (But perhaps Noah's example suffices.) Sure, take any abelian example and take the product with $S_3$ or any finite non-abelian group. @MartinBrandenburg: Ah, I see. I've marked my post as a duplicate. Thanks to all for your help. @MartinBrandenburg: Would it be possible for you to add your second linked post to the duplicates list, too? 
By the way none of the above constructions require choice - $\aleph_1$ is nice and well-orderable so facts about cardinal arithmetic are true "for free". The example Wikipedia gives is the set of finite subsets of a set of cardinality $\aleph_1$ under symmetric difference, which is secretly an instance of HallaSurvivor's remark "an $\aleph_1$-dimensional vector space over any finite field". I tried, but for some reason I can't this time. The option "Edit" is not available. @Joe @IzaakvanDongen: Thank you for your added remarks. That seems like the simplest example to my mind, but perhaps I am just showing my ignorance of both group theory and set theory :)
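A hedged sketch of the counting behind the free-group answer and the comments above (standard ZFC cardinal arithmetic; as noted, no choice is needed since $\aleph_1$ is well-orderable):

```latex
% Elements of the free group F(S) on a set S with |S| = \aleph_1 are finite
% reduced words over the alphabet S \cup S^{-1}, so
%   |F(S)| \le \sum_{n<\omega} (2\cdot\aleph_1)^n
%          = \aleph_0 \cdot \aleph_1 = \aleph_1,
% while |F(S)| \ge |S| = \aleph_1 since each generator is a word of length 1.
% The same finite-word count gives cardinality \aleph_1 for an
% \aleph_1-dimensional vector space over a finite or countable field,
% covering the symmetric-difference example as well.
```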
Swiftui Button image not changing on clicking until i click on another button I know someone has asked a similar question already, but I still haven't solved the problem. I have a HStack with 2 buttons that changes image upon click. The first button does not change on clicking until I click the other button, the second button updates correctly. The difference between the 2 buttons is that the first one’s action depends on the variable ‘museum.favorite’, which is a variable of the class Museum. The second one updates depending on the variable b’s value, which is a state variable declared in the same view. I have been stuck on this for the past few days and would really appreciate if any one knows the reason behind & a way around this. The reason I want the button to update depending on the value of ‘museum.favorite’ is because I found that if I update according to a state variable declared in the same view (passing the variable as a parameter does not work either), then the image changes back to the default one (“heart” instead of “heart.fill”) soon after I change to another view. Here is the code, thank you in advance!: var museum:Museum @State var a = false @State var b = false var body: some View { HStack { Button { museum.favorite.toggle() } label: { Image(systemName: museum.favorite == true ? "heart.fill" : "heart") } Button { b.toggle() } label: { Image(systemName: b == true ? "star.fill" : "star") } } } } struct Test_Previews: PreviewProvider { static var previews: some View { let model = MuseumModel() Test(museum: model.museums[0]) } } Does this answer your question? swiftui: problem with adding a property to data struct If museum is a class, like you say, it needs to be an ObservableObject and favorite needs to be a @Published property. I have tried this already, but adding @Published to favourite gets the error: "Type 'Museum' does not conform to protocol 'Decodable'". 
The class Museum is declared as --> class Museum : Identifiable, Decodable, ObservableObject {}. How can I solve this? Welcome to Stack Overflow! Please take the tour and see: How do I ask a good question? and How to create a Minimal, Reproducible Example. This is a classic example of needing a minimal, reproducible example. In the end, I found an easier approach. At the end of the HStack, add .onAppear{} and do a condition check so that the image updates itself across different views. import SwiftUI struct buttons: View { var museum:Museum @State var visitIcon = "figure.walk.circle" @State var heart = "heart" var body: some View { HStack(spacing:60){ Button(action: { museum.visited.toggle() if(museum.visited){ visitIcon = "figure.walk.circle.fill" } else{ visitIcon = "figure.walk.circle" } }, label: { VStack{ Image(systemName:visitIcon).resizable() .frame(width: 20, height: 20).foregroundColor(.black) Text("Visited").foregroundColor(.black) } }) Button(action: { museum.favorite.toggle() if(museum.favorite){ heart = "heart.fill" } else{ heart = "heart" } }, label: { VStack{ Image(systemName:heart).resizable() .frame(width: 20, height: 20).foregroundColor(.black) Text("Favorite").foregroundColor(.black) } }) }.onAppear { if(museum.visited){ visitIcon = "figure.walk.circle.fill" } else{ visitIcon = "figure.walk.circle" } if(museum.favorite){ heart = "heart.fill" } else{ heart = "heart" } } } }
How to implement text auto-completion of commands using dbgeng.h? I am working on a front end for windbg which uses the pretty well documented dbgeng.h API. However there is some functionality that I don't know how to implement, such as tab auto-completion. I believe what I need are the following APIs: IDebugAdvanced3::Request DEBUG_GET_TEXT_COMPLETIONS_OUT DEBUG_GET_TEXT_COMPLETIONS_IN DEBUG_REQUEST_GET_TEXT_COMPLETIONS_ANSI (undocumented) There are also some comments inside dbgeng.h where these structs are defined: typedef struct _DEBUG_GET_TEXT_COMPLETIONS_IN { ULONG Flags; ULONG MatchCountLimit; ULONG64 Reserved[3]; // Input text string follows. } DEBUG_GET_TEXT_COMPLETIONS_IN, *PDEBUG_GET_TEXT_COMPLETIONS_IN; typedef struct _DEBUG_GET_TEXT_COMPLETIONS_OUT { ULONG Flags; // Char index in input string where completions start. ULONG ReplaceIndex; ULONG MatchCount; ULONG Reserved1; ULONG64 Reserved2[2]; // Completions follow. // Completion data is zero-terminated strings ended // by a final zero double-terminator. } DEBUG_GET_TEXT_COMPLETIONS_OUT, *PDEBUG_GET_TEXT_COMPLETIONS_OUT; I don't understand how I'm supposed to use this API. edit: I tried the following as suggested in the comments: struct in_wrap { DEBUG_GET_TEXT_COMPLETIONS_IN in; char src[4]; }; in_wrap wrp; DEBUG_GET_TEXT_COMPLETIONS_IN in; in.Flags = DEBUG_GET_TEXT_COMPLETIONS_NO_SYMBOLS; in.MatchCountLimit = 5; wrp.in = in; strcpy(wrp.src, ".ec"); // I'm expecting to receive ".echo" back. DEBUG_GET_TEXT_COMPLETIONS_OUT out = {}; ULONG outsize = 0; hr = advanced->Request(DEBUG_REQUEST_GET_TEXT_COMPLETIONS_ANSI, (void *)&wrp, sizeof(in_wrap), (void *)&out, sizeof(DEBUG_GET_TEXT_COMPLETIONS_OUT), &outsize); Which returns E_INVALIDARGS. I can't provide much help here, other than translating the source comments: It reads like the _IN and _OUT structures only carry part of the data. Either one is to be immediately followed by a list of strings, terminated by an empty string. 
What's awkward is that structures following this pattern commonly end with an array of size 1. I cannot infer who is responsible for allocating or releasing those blocks of memory, though. Typedef struct _FOO { debugin blah; char bar[somesize]}foo,*pfoo; foo inbuff; strcpy_s (inbuff.bar,".dot/bang partial input"); Request get inbuff sizeof foo............. Out completion memory is dbgengs work @blabb I just posted what you suggested but unfortunately it still doesnt work. Maybe I did it wrong? The list of strings needs to be terminated by an empty string. In other words, the final 2 characters need to be NUL characters. You'll probably also want to populate (or at least initialize) the other members of the struct. I added an answer take a look @blabb Thank you so much! How did you figure out that you have to add an asterisk after your input string? debug the debugger with .dbgdbg set bp yourextensionname!Extension::yourextensionCommand in code below it would be autocomp!Extension::autocomp and step around the partial input is a wildcard so you need an asterisk following your input like ".db*" the out needs two iterations one for getting the size with S_FALSE and size of completions available the second with allocated memory after OUT as a typedeffed structure as IN the code below has a fixed size buffer in OUT as poc if you pass just OUT without a buffer you should Receive an S_FALSE(01) as HRESULT the opsize must indicate the amount of memorysize needed for completion characters code as follows #include <engextcpp.cpp> typedef struct _AUTOCOMPIN { DEBUG_GET_TEXT_COMPLETIONS_IN auin; char instr[8]; } Autocompin, *PAutocompin; typedef struct _AUTOCOMPOUT { DEBUG_GET_TEXT_COMPLETIONS_OUT auout; char ostr[0x1000]; } Autocompout, *Pautocompout; class EXT_CLASS : public ExtExtension { public: EXT_COMMAND_METHOD(autocomp); }; EXT_DECLARE_GLOBALS(); EXT_COMMAND(autocomp, "", "") { Autocompin ibuff = {0}; Autocompout obuff = {0}; ULONG opsize = 0; HRESULT hr = E_FAIL; 
strcpy_s(ibuff.instr, ".d*"); hr = m_Advanced2->Request( DEBUG_REQUEST_GET_TEXT_COMPLETIONS_ANSI, &ibuff, sizeof(ibuff), &obuff, sizeof(obuff), &opsize); Out("hr = %x opsize = %x IDebugAdvancedCheckPointer = %p\n", hr, opsize, m_Advanced2); for (ULONG i = 0; i < opsize; i++) { Out("%c ", obuff.ostr[i]); } } compiled and linked with :\>cl Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27045 for x64 Copyright (C) Microsoft Corporation. All rights reserved. usage: cl [ option... ] filename... [ /link linkoption... ] :\>type complink.bat (must be in one line ) cl /LD /nologo /W4 /Od /Zi /EHsc /I"C:\Program Files (x86)\Windows Kits\10\Debuggers\inc" %1.cpp /link /EXPORT:DebugExtensionInitialize /Export:%1 /Export:help /RELEASE :\> compiled and executed :\>complink.bat autocomp :\>cl /LD /nologo /W4 /Od /Zi /EHsc /I"C:\Program Files (x86)\Windows Kits\10\Debuggers\inc" autocomp.cpp /link /EXPORT:DebugExtensionInitialize /Export:autocomp /Export:help /RELEASE autocomp.cpp C:\Program Files (x86)\Windows Kits\10\Debuggers\inc\engextcpp.cpp(1849): warning C4245: 'argument': conversion from 'int' to 'ULONG64', signed/unsigned mismatch Creating library autocomp.lib and object autocomp.exp :\>cdb -c ".load .\autocomp;!autocomp;q" cdb Microsoft (R) Windows Debugger Version 10.0.17763.132 AMD64 0:000> cdb: Reading initial command '.load .\autocomp;!autocomp;q' hr = 0 opsize = 8c IDebugAdvancedCheckPointer = 00000057edfface0 d b g d b g d e b u g _ s w _ w o w d e t a c h d m l _ f i l e d m l _ f l o w d m l _ s t a r t d o d r i v e r s d u m p d v a l l o c d v f r e e d i s a b l e p a c k a g e d e b u g quit:
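To make the buffer layout concrete outside of Windows, here is a hedged, dbgeng-free sketch of the pattern the accepted answer uses: a fixed header struct immediately followed by the input text (with the '*' wildcard appended), passed as one contiguous block. The struct and field names below are stand-ins; only the member layout mirrors DEBUG_GET_TEXT_COMPLETIONS_IN from dbgeng.h:

```cpp
#include <cstring>
#include <cstdint>
#include <cstddef>

// Stand-in for DEBUG_GET_TEXT_COMPLETIONS_IN (same member layout; the real
// struct lives in dbgeng.h).
struct CompletionsIn {
    uint32_t Flags;
    uint32_t MatchCountLimit;
    uint64_t Reserved[3];
    // Input text string follows the header in memory.
};

// Header plus a fixed-size tail for the input text, as in the accepted answer.
struct InBuffer {
    CompletionsIn header;
    char text[16];
};

// Fill the buffer with the partial input. The caller is expected to include
// the trailing '*' (e.g. ".d*"), which the accepted answer found necessary.
// Returns the size to pass as InBufferSize to IDebugAdvanced2::Request().
size_t build_input(InBuffer& in, const char* partial) {
    std::memset(&in, 0, sizeof(in));   // zero Flags/Reserved and the tail
    in.header.MatchCountLimit = 5;
    std::strncpy(in.text, partial, sizeof(in.text) - 1);
    return sizeof(in);
}
```

The output side follows the same idea in reverse: a DEBUG_GET_TEXT_COMPLETIONS_OUT header followed by a caller-supplied byte tail that the engine fills with zero-terminated completion strings.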
How do I create a Stored Procedure if it doesn't exist in TSQL I have tried this: if object_id('a_proc22') is not null CREATE PROCEDURE a_proc22 AS SELECT 1 go but it gives me a syntax error. But this seemed to compile: if object_id('a_proc22') is not null EXEC('CREATE PROCEDURE a_proc22 AS SELECT 1') go Why is the first one incorrect? I don't know SQL-Server but I doubt that it compiles if you have a syntax error. Is this the same problem as answered here? http://stackoverflow.com/questions/2072086/t-sql-check-if-a-procedure-exists-before-creating-it Possible duplicate of http://stackoverflow.com/questions/937908/how-to-detect-if-a-stored-procedure-already-exists I'm guessing the error is something like "CREATE/ALTER PROCEDURE must be the first statement in a query", so, well, that means that CREATE PROCEDURE must be the first statement in a query. If you wrapped it up on an EXEC, then when its executed, it is the first statement on that query, so that's why it works. Came here to say this. Sayeth BOL: "The CREATE PROCEDURE statement cannot be combined with other Transact-SQL statements in a single batch." So it must be the first and last statement. IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[a_proc22]') AND TYPE IN (N'P', N'PC')) DROP PROCEDURE [dbo].[a_proc22]; GO CREATE PROCEDURE [dbo].[a_proc22] AS BEGIN -- Code here END GO if object_id('a_proc22') is not null drop procedure a_proc22 go create procedure a_proc22 AS SELECT 1 The GO is the important thing here after the drop, you can't have create first, some SQL validation I guess for security purposes. Your first statement is giving error because after if condition you can not place a create/alter procedure statement. 
Try this if Exists(select * from sys.procedures -- if exists then drop it where name = 'a_proc22') Drop procedure a_proc22 GO CREATE PROCEDURE a_proc22 -- create the new procedure AS SELECT 1 go IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[a_proc22]') AND type in (N'P', N'PC')) DROP PROCEDURE [dbo].[a_proc22] --and create here... Or you can remove the drop and create with if not exists GO The question is why did the first SQL statement fail. The heading is "How do I create a Stored Procedure if it doesn't exist in TSQL". I should have said that create should be the first in a query batch to answer his question. But I think the answer is correct for the heading.
How to determine the longest time period and exclude other rows in Excel or R? In my dataset I have information on the ZIPCODE of 600K+ ID's. If ID's move to a different address, I want to determine at which zipcode they lived the longest and put a '1' for that specific year in that row (no need to combine rows, as I want to know where they lived in which year). That way an ID only has a '1' for a certain year at one row (if there are multiple rows for that ID). The yellow highlight is what I don't want; in that case there is a '1' in two rows for the same year. In the preferred dataset there is only one '1' per year per ID possible. For example: ID 4 lived in 2013 in 2 places (NY and LA), therefore there are 2 rows. At this point there is a 1 in each row for 2013 and I only want a 1 in the row the ID lived the longest between 1-1-2013 and 31-12-2018. ID 4 lived in 2013 longer in LA than in NY, and so the 1 should only be at the row for LA (so in this case the row of NY will be removed because only '0's remain). I can also put this file in RStudio. Thank you!
structure(v1) ID CITY ZIPCODE DATE_START DATE_END DATE_END.1 X2013 X2014 X2015 X2016 X2017 X2018 1 1 NY 1234EF 1-12-2003 31-12-2018 1 1 1 1 1 1 2 2 NY 1234CD 1-12-2003 14-1-2019 14-1-2019 1 1 1 1 1 1 3 2 NY 1234AB 15-1-2019 31-12-2018 0 0 0 0 0 0 4 3 NY 1234AB 15-1-2019 31-12-2018 0 0 0 0 0 0 5 3 NY 1234CD 1-12-2003 14-1-2019 14-1-2019 1 1 1 1 1 1 6 4 LA 1111AB 4-5-2013 31-12-2018 1 1 1 1 1 1 7 4 NY 2222AB 1-12-2003 3-5-2013 3-5-2013 1 0 0 0 0 0 8 5 MIAMI 5555CD 6-2-2015 20-6-2016 20-6-2016 0 0 1 1 0 0 9 5 VEGAS 3333AB 1-1-2004 31-12-2018 1 1 1 1 1 1 10 5 ORLANDO 4444AB 26-2-2004 5-2-2015 5-2-2015 1 1 1 0 0 0 11 5 MIAMI 5555AB 21-6-2016 31-12-2018 31-12-2018 0 0 0 1 1 1 12 5 MIAMI 5555AB 1-1-2019 31-12-2018 0 0 0 0 0 0 13 6 AUSTIN 6666AB 28-2-2017 3-11-2017 3-11-2017 0 0 0 0 1 0 14 6 AUSTIN 6666AB 4-11-2017 31-12-2018 0 0 0 0 1 1 15 6 AUSTIN 7777AB 20-1-2017 27-2-2017 27-2-2017 0 0 0 0 1 0 16 6 AUSTIN 8888AB 1-12-2003 19-1-2017 19-1-2017 1 1 1 1 1 0 > structure(list(ID = c(1L, 2L, 2L, 3L, 3L, 4L, 4L, 5L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L), CITY = structure(c(4L, 4L, 4L, 4L, 4L, 2L, 4L, 3L, 6L, 5L, 3L, 3L, 1L, 1L, 1L, 1L), .Label = c("AUSTIN", "LA", "MIAMI", "NY", "ORLANDO", "VEGAS"), class = "factor"), ZIPCODE = structure(c(4L, 3L, 2L, 2L, 3L, 1L, 5L, 9L, 6L, 7L, 8L, 8L, 10L, 10L, 11L, 12L), .Label = c("1111AB", "1234AB", "1234CD", "1234EF", "2222AB", "3333AB", "4444AB", "5555AB", "5555CD", "6666AB", "7777AB", "8888AB"), class = "factor"), DATE_START = structure(c(3L, 3L, 4L, 4L, 3L, 10L, 3L, 11L, 1L, 7L, 6L, 2L, 8L, 9L, 5L, 3L), .Label = c("1-1-2004", "1-1-2019", "1-12-2003", "15-1-2019", "20-1-2017", "21-6-2016", "26-2-2004", "28-2-2017", "4-11-2017", "4-5-2013", "6-2-2015"), class = "factor"), DATE_END = structure(c(1L, 2L, 1L, 1L, 2L, 1L, 7L, 4L, 1L, 9L, 8L, 1L, 6L, 1L, 5L, 3L), .Label = c("", "14-1-2019", "19-1-2017", "20-6-2016", "27-2-2017", "3-11-2017", "3-5-2013", "31-12-2018", "5-2-2015"), class = "factor"), DATE_END.1 = structure(c(7L, 1L, 7L, 7L, 1L, 7L, 
6L, 3L, 7L, 8L, 7L, 7L, 5L, 7L, 4L, 2L ), .Label = c("14-1-2019", "19-1-2017", "20-6-2016", "27-2-2017", "3-11-2017", "3-5-2013", "31-12-2018", "5-2-2015"), class = "factor"), X2013 = c(1L, 1L, 0L, 0L, 1L, 1L, 1L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 1L), X2014 = c(1L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 1L), X2015 = c(1L, 1L, 0L, 0L, 1L, 1L, 0L, 1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 1L), X2016 = c(1L, 1L, 0L, 0L, 1L, 1L, 0L, 1L, 1L, 0L, 1L, 0L, 0L, 0L, 0L, 1L ), X2017 = c(1L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 0L, 1L, 1L, 1L, 1L), X2018 = c(1L, 1L, 0L, 0L, 1L, 1L, 0L, 0L, 1L, 0L, 1L, 0L, 0L, 1L, 0L, 0L)), class = "data.frame", row.names = c(NA, -16L)) HI Student0172, this is an easy operation in R, but if you want to increase the likelihood of a response, you will be more likely to get an answer if you provide minimal reproducible data. See How to make a great R reproducible example for more info. Hi Ian, thank you for your anwser. I added R structure to my question. Hi Ian, I tried your previous code from the other command (not sure where it went?). Thank you for your anwser. I've load the dplyr package, but I still get this error: Error in dmy(DATE_END.1) : could not find function "dmy".. Do you have any idea? You can use a little help from the lubridate package to calculate how many days are spent at each location. Then we can group_by ID and use case_when to assign 1 when the time is the max or 0 otherwise. 
library(lubridate) library(dplyr) v1 %>% dplyr::select(ID,CITY,ZIPCODE,DATE_START,DATE_END.1) %>% rowwise() %>% mutate("X2013" = max(0, min(dmy("31-12-2013"),dmy(DATE_END.1)) - max(dmy("1-1-2013"),dmy(DATE_START))), "X2014" = max(0, min(dmy("31-12-2014"),dmy(DATE_END.1)) - max(dmy("1-1-2014"),dmy(DATE_START))), "X2015" = max(0, min(dmy("31-12-2015"),dmy(DATE_END.1)) - max(dmy("1-1-2015"),dmy(DATE_START))), "X2016" = max(0, min(dmy("31-12-2016"),dmy(DATE_END.1)) - max(dmy("1-1-2016"),dmy(DATE_START))), "X2017" = max(0, min(dmy("31-12-2017"),dmy(DATE_END.1)) - max(dmy("1-1-2017"),dmy(DATE_START))), "X2018" = max(0, min(dmy("31-12-2018"),dmy(DATE_END.1)) - max(dmy("1-1-2018"),dmy(DATE_START)))) %>% ungroup %>% group_by(ID) %>% mutate_at(vars(starts_with("X")),list(~ case_when(. == max(.) ~ 1, TRUE ~ 0))) # A tibble: 16 x 11 # Groups: ID [6] ID CITY ZIPCODE DATE_START DATE_END.1 X2013 X2014 X2015 X2016 X2017 X2018 <int> <fct> <fct> <fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 1 NY 1234EF 1-12-2003 31-12-2018 1 1 1 1 1 1 2 2 NY 1234CD 1-12-2003 14-1-2019 1 1 1 1 1 1 3 2 NY 1234AB 15-1-2019 31-12-2018 0 0 0 0 0 0 4 3 NY 1234AB 15-1-2019 31-12-2018 0 0 0 0 0 0 5 3 NY 1234CD 1-12-2003 14-1-2019 1 1 1 1 1 1 6 4 LA 1111AB 4-5-2013 31-12-2018 1 1 1 1 1 1 7 4 NY 2222AB 1-12-2003 3-5-2013 0 0 0 0 0 0 8 5 MIAMI 5555CD 6-2-2015 20-6-2016 0 0 0 0 0 0 9 5 VEGAS 3333AB 1-1-2004 31-12-2018 1 1 1 1 1 1 10 5 ORLANDO 4444AB 26-2-2004 5-2-2015 1 1 0 0 0 0 11 5 MIAMI 5555AB 21-6-2016 31-12-2018 0 0 0 0 1 1 12 5 MIAMI 5555AB 1-1-2019 31-12-2018 0 0 0 0 0 0 13 6 AUSTIN 6666AB 28-2-2017 3-11-2017 0 0 0 0 1 0 14 6 AUSTIN 6666AB 4-11-2017 31-12-2018 0 0 0 0 0 1 15 6 AUSTIN 7777AB 20-1-2017 27-2-2017 0 0 0 0 0 0 16 6 AUSTIN 8888AB 1-12-2003 19-1-2017 1 1 1 1 0 0 There is certainly a way that one could implement the first mutate call to not require manually writing each year, but would take much more work than just typing it out. Hi Ian, thank you for your help! 
Unfortunately I got this error: Grouping rowwise data frame strips rowwise nature. Online I found something about ungrouping the data again. Are you familiar with this error? About mutate: I'm grateful for this answer because I can understand it better this way! Sorry about that, I am using a development version of dplyr and did not realize that would be a problem. I edited my answer. Thank you so much Ian, it worked! I added v2 <- v1 at the beginning (otherwise it didn't change my dataframe). Hopefully it will work on my huge dataframe as well. Thank you! Hi Ian, I tried your code on my original dataset (990K rows). However, I left it running all night and nothing happened. I'm currently running it again. Do you have any idea why this is? Thank you!
Can I map IP addresses to allow 2 site to site VPNs with same IP range using Microsoft TMG I have a Microsoft TMG server with a number of site-2-site VPNs which all have different IP address ranges. Now a client has asked for another site-2-site VPN but their IP range clashes with an existing range. Is it possible to set up some form of IP mapping so the two ranges can be used at the same time. e.g. both clients use 192.168.0.x can I set it up so for one client we use the "real" ip addresses but for the other we use something like 192.168.1.x and TMG server changes it to 192.168.0.x and pushes it down the correct VPN. Thanks. In your example, how would TMG be able to tell which one the "real" IP was? If you can answer that, you can do it.
:after pseudo element is not showing up I have simplified my question to this: HTML <div class="dropbox"> </div> CSS .dropbox { border:2px solid #ff0000; height:200px; width:50%; margin:auto; } .dropbox:after { content:''; width:10px; height:10px; background:#ff0000; top:100%; left:50%; margin-left:5px; } Why can't I get the :after pseudo element to show up? I need it at the bottom middle of the div. To be clear, it should be under the div in the middle. It's worth noting that the pseudo element is inline by default - you therefore can't add margins or change the dimensions of it. If you were to change the display to block you will notice that it shows up as you would expect (example). In your case you need to absolutely position it relative to the parent element. Example Here .dropbox { border:2px solid #ff0000; height:200px; width:50%; margin:auto; position:relative; } .dropbox:after { content:''; position:absolute; width:10px; height:10px; background:#ff0000; top:100%; left:50%; margin-left:5px; } I'm guessing that the default display for the pseudo :after element is inline. Because the :after element has no content it shrinks to nothing. Set display:block then your width & height apply and you can see your element. .dropbox:after { content:''; width:10px; height:10px; background:#ff0000; top:100%; left:50%; margin-left:5px; display: block }
future got attached to different loop Task <Task pending name='Task-11' coro=<Queue.get() running at C:\Users\Administrator\anaconda3\lib\asyncio\queues.py:166> cb=[_release_waiter(<Future pendi...15C1F7220>()]>)() at C:\Users\Administrator\anaconda3\lib\asyncio\tasks.py:416]> got Future <Future pending> attached to a different loop The above error is what I received when deploying code in the command prompt on an EC2 instance, but this kind of error does not occur on my personal computer. The code works fine in my personal computer's command prompt. Please provide enough code so others can better understand or reproduce the problem. Thanks for enquiring; the problem is solved. The code was totally fine; the problem was with the package. My EC2 instance had the latest version of the library, and downgrading it solved the issue.
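The accepted fix here was a library downgrade, but it may help to note the usual cause of this message on older asyncio versions: a Queue (or lock, or event) created outside the event loop that later awaits it. A minimal sketch of the safe pattern, with hypothetical names of my own:

```python
import asyncio

# Create loop-bound primitives *inside* the running coroutine, not at
# import time. On older Python versions, asyncio.Queue() binds itself
# to whatever loop is current when it is constructed; creating it
# elsewhere is what triggers "got Future attached to a different loop".
async def worker(queue):
    await queue.put("done")
    return await queue.get()

async def main():
    queue = asyncio.Queue()  # bound to the currently running loop
    return await worker(queue)

print(asyncio.run(main()))  # done
```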
Identify waterside building / skyline Among a friend's photos on Facebook I spotted an interesting skyline with a distinctive waterside building that looks like it's just being completed: I've travelled to over fifty countries and I can't tell where this is. Can somebody identify the city and/or the building? (Yes I know I could just ask him but this way I get to share my curiosity.) normally, just go to http://images.google.com -- sadly it had no good results, I saw, in this case. (As hippie says below.) http://tineye.com has an excellent tool that helps identify images. In this case, it has no result which indicates the photo couldn't be matched to any other photos; however many times the site works wonders. @MaxVernon: I tried Google image search and TinEye. TinEye seems to manage only for edited versions of the very same image and not different photos of the same subject. Google image search didn't work with my image combined with "Denmark" or "Aarhus" but it does work combined with "Isbjerget" now that we know what it's called (-: It is an apartment complex built on the waterfront of Aarhus in Denmark. Isbjerget was created in a collaboration between four architectural firms: JDS, CEBRA, SeARCH, and Louis Paillard. It took 5 years for the project to be completed, and is one of the first projects to be completed within De Bynære Havnearealer, the new docklands quarter of the city. The area was once a container port but is now being transformed into a sprawling development designed to house 7,000 inhabitants and provide 12,000 jobs Aha! I had a feeling it was Denmark but doing Google image searches with "Denmark" and other hints didn't uncover it. Two of my friends looked at apartments there when it opened. They looked really nice and at a reasonable price – but they had weird angled walls and corners, plus the immediate location is not fantastic (right now). 
In fact, the entire harbour and waterfront area in Aarhus is getting a revamp at the moment, with many new projects. Here is the crown jewel, which is being finished right now: http://www.urbanmediaspace.dk/en What a nice church you have. 1500s?
Setting window.innerHeight Can someone please explain to me why setting the height viewport property: <meta name="viewport" content="width=480px, height=120px"> doesn't work? window.innerWidth reports 480px BUT window.innerHeight reports 210, not just 120px! So what's the point in setting the height property? What is the width and height of the device that you're displaying this on? I don't think you're supposed to specify px. Just write an integer. Well, because there is no height; this should be used for device support, not for defining a specific size; you have media queries for that. <meta name="viewport" content="width=device-width, initial-scale=1.0"> http://www.w3schools.com/css/css_rwd_viewport.asp What do you mean "there is no height"? According to: link there is a height property and it can be set just like the width. @Ahmad Bamieh But only theoretically. As I wrote, changing height has no effect. I just checked it out! @Mulligun81 maybe, resource please? As mentioned in the comment above, you're supposed to add the width / height without the px - example below <meta name="viewport" content="width=500, initial-scale=1"> But be aware that a pixel is not a pixel depending on the end user's device. For a quick read take a look at this article: https://developer.mozilla.org/en-US/docs/Mozilla/Mobile/Viewport_meta_tag Also a ref about the pixel is not a pixel: http://www.quirksmode.org/blog/archives/2010/04/a_pixel_is_not.html EDIT: OK - Maybe I got your problem now. While playing around with the viewport props I noticed that maybe the cache of your browser played a trick on you while testing a lot - of course I'm not sure whether you cleared it while testing. -- !!!Please read the NOTE at the END if you're testing on a non-mobile device!!!
But to give you a quick example that the following changes something: <meta name="viewport" content="height=50"> You should check out the following 3 websites I set up for you, all with the following div: div{ width: 60px; height: 60px; background-color: #000; } http://testcode.de/test/1.html <meta name="viewport" content="width=device-width, initial-scale=1.0"> http://testcode.de/test/2.html <meta name="viewport" content="width=device-width, initial-scale=2.0"> http://testcode.de/test/3.html <meta name="viewport" content="height=50"> NOTE: viewport, which gives hints about the initial size of the viewport, is used by several mobile devices only. Of course, you should use a mobile device to test this! If you've used a normal browser - here is your answer! The px isn't required - I've checked it out. With or without the px, changing the height has no effect. Please see the UPDATE of the post; maybe it helps now. @Mulligun81 Did you test the sites I created for you, and could you figure out what the problem is / did this solve your problem?
Random Presentation Topic Picker returns null Today my teacher asked me to program a random presentation topic picker for him. The idea was that the student goes to the PC and clicks on the message dialog, which then randomly generates a number between 1 and the max index of the topics and prints the corresponding topic. I tried it with a HashMap, putting in a key that stays together with the String so that I can then (after the output) remove that entry, so that no other student can get the same topic. But it always returns at least one empty reference (null). Here is the code: static HashMap<Integer, String> map = new HashMap<>(); public static void main(String[] args){ int anzahlEintraege = Integer.parseInt(JOptionPane.showInputDialog("Wie viele Themen gibt es?")); for(int i = 0; i < anzahlEintraege; i++){ map.put((i+1),JOptionPane.showInputDialog("Geben Sie das Thema Nummer " + (i+1) + " ein!")); } JOptionPane.showMessageDialog(null, "Jetzt geht's Los!"); int max = map.size(); int removed = 0; for(int i = 0; i < max; i++){ Random r = new Random(); int random = r.nextInt(max-1)+1; JOptionPane.showMessageDialog(null, "Sie haben das Thema "+ map.get(random) + " gezogen!"); map.remove(random); removed++; } } Do not remove the entry: when the same random number is picked again, you have no item for it in your map, so you get null. The problem you're running into is that you can pick the same random number more than once, even if you've already removed the element with that key. Instead of trying to pick non-duplicating random numbers, you would be better off simply creating a list of your keys, randomizing their order, and then iterating over them.
Here's a simple example using strings that you should be able to adapt: import java.util.ArrayList; import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; public class Scratch { public static void main(String[] args) throws Exception { Map<Integer, String> map = new HashMap<>(); map.put(1, "foo"); map.put(2, "bar"); map.put(3, "baz"); List<Integer> keys = new ArrayList<>(map.keySet()); Collections.shuffle(keys); for (Integer key : keys) { String randomValue = map.get(key); System.out.println(randomValue); } } }
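The same shuffle-the-keys idea can be sketched outside Java as well; here it is in Python, with illustrative names rather than the original assignment's code:

```python
import random

# Shuffle the keys once, then walk them in order: each topic is handed
# out exactly once, and no random re-draws (or removals) are needed.
topics = {1: "foo", 2: "bar", 3: "baz"}
keys = list(topics)
random.shuffle(keys)
drawn = [topics[k] for k in keys]
print(drawn)  # all three topics, in some random order
```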
Add header to graphql response I'm building a graphql server with Spring and graphql-spring-boot-starter. To solve a very specific issue, I'd like to add an HTTP header to the graphql HTTP response. How can I achieve that? You can add the headers in a ResponseEntity. public ResponseEntity<Object> callGraphQLService(@RequestBody String query) { ExecutionInput input = ExecutionInput.newExecutionInput() .query(query) .build(); ExecutionResult result = graphService.getGraphQL().execute(input); HttpHeaders headers = new HttpHeaders(); headers.add("Response-Code", "ABCD"); return new ResponseEntity<>(result, headers, HttpStatus.OK); }
how to add custom callback URL to Coinbase I'm using the Coinbase service as a Bitcoin payment gateway. I know I can put a callback URL in the merchant settings page of my account, but I need to update it for each order. It can be done using the API, but that is tedious. I need something like this: adding data-callback="custom_callback_url" in the anchor tag. <a class="coinbase-button" data-code="code_here" data-callback="custom_callback_url" href="https://coinbase.com/checkouts/code_here">Pay With Bitcoin</a> <script src="https://coinbase.com/assets/button.js" type="text/javascript"></script> With coinbase's API, you will have to set the callback URL while generating the code! There is more info on how to generate the code here: https://coinbase.com/api/doc/1.0/buttons/create.html
Use default parameter for templated function I have two functions which look very similar to me: template <typename Predicate> vector<int> FindTopItems(const string& query, const Predicate& predicate) const; vector<int> FindTopItems(const string& query, ItemStatus status = ItemStatus::Initial) const; Is it possible to use only the templated version of this function? How do I set its predicate to a default value given by an enum class? So I want the following code to be compilable: int main() { FindTopItems(""s, [](){}); /* predicate version */ FindTopItems(""s); /* default parameter as enum class should be used */ return 0; } Please post compilable code. Please post an [MCVE]. Are the vector<int> and const string& really relevant to the question? What have you tried? How did it fail? I don't think this is doable. While you can provide a default type, template <class T = ItemStatus>, all you have for the function parameters is T, so you can default-construct a parameter, FindTopItems(const T& arg = T()), but this doesn't work for a scoped enum. Presumably, the definitions of the two functions look different depending on whether you get a Predicate or an ItemStatus. So what would be the gain of merging the two? You still need two different implementations, no? Added a minimal example. Well, the implementations would differ for sure...but I have a feeling that I could simplify my code
#include <iostream> #include <vector> #include <string> enum class ItemStatus { Initial }; template <typename T> auto defaultValue() { return T{}; } template <> auto defaultValue<ItemStatus>() { return ItemStatus::Initial; } template <typename Predicate = ItemStatus> std::vector<int> FindTopItems(const std::string& query, const Predicate& predicate = defaultValue<Predicate>()) { if constexpr (std::is_same_v<Predicate, ItemStatus>) { std::cout << "ItemStatus " << static_cast<int>(predicate) << '\n'; } else { std::cout << "Predicate " << predicate() << '\n'; } return {}; } int main() { FindTopItems(std::string{}, [](){ return 1; }); /* predicate version */ FindTopItems(std::string{}); /* default parameter as enum class should be used */ return 0; }
What is the binary operation in $\mathbb{D}(n)$? I am using Joseph A. Gallian for group theory. I came across something named the dihedral group $\mathbb{D}(n)$. The author uses only a Cayley table and does not describe the binary operation of this group algebraically. I searched on the internet and came across different definitions of binary operations, and this confused me a bit. So I request that someone please tell me the binary operation. Are you happy with group presentations? $D_n$ is generated by elements $a$ and $b$, subject to the conditions $a^n=1$, $b^2=1$, and $ba=a^{-1}b$. A Cayley table does define the binary operation algebraically; it's just not always as illuminating as other characterizations. You can think of the dihedral group $D_{2n}$ (or sometimes $D_n$) as the group of symmetries of the regular $n$-gon, and so viewed it consists of suitable reflections and rotations. You can derive the relations given in @GerryMyerson 's comment from the definition of $a$ as the "smallest" rotation of the regular $n$-gon (that preserves symmetry) and $b$ as a reflection. Imagine a regular $n$-gon with vertices labeled $0, 1, \ldots, n-1$ in the obvious order. Then $a$ maps each vertex $k$ to $k + 1$ (addition modulo $n$), and $b$ fixes $0$; interchanges $1$ and $n - 1$; $2$ and $n - 2$, and so on (figure out where that "so on" ends, depending on the parity of $n$). You can also consider the $n$ vertices as the $n$ roots of unity in the complex plane. Then $a$ maps each $e^{i\frac{2k\pi}{n}}$ to $e^{i\frac{2(k+1)\pi}{n}}$, and $b$ maps each root to its conjugate. One particular algebraic representation for $D_n$ is the following: picture a regular $n$-gon in the plane. Then $D_n$ is the set of symmetries of the $n$-gon. In particular, let $r$ be rotation by $\frac{360^\circ}{n}$, and $s$ be reflection across the $x$ axis. Then clearly $r^n = 1$ and $s^2 = 1$. Also, you should convince yourself that $srs = r^{-1}$. It turns out that these rules completely describe $D_n$.
Isn't that a geometric rather than an algebraic definition? The only way I can interpret "algebraic definition" is group presentation. Perhaps I should be more clear that the presentation is the point of the answer.
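To bridge the geometric picture and the algebraic presentation, here is a small sketch (my own illustration, not Gallian's notation) that models the rotation and reflection as functions on the vertex labels $0,\dots,n-1$ and checks the defining relations:

```python
# Concrete check of the presentation a^n = 1, b^2 = 1, b a = a^(-1) b,
# modelling D_n as permutations of the vertex labels 0..n-1.
n = 5

def a(k):
    return (k + 1) % n      # rotation: k -> k + 1 (mod n)

def a_inv(k):
    return (k - 1) % n      # inverse rotation

def b(k):
    return (-k) % n         # reflection fixing vertex 0

def compose(f, g):
    return lambda k: f(g(k))

def table(f):
    # a map on {0, ..., n-1} written out as its tuple of images
    return tuple(f(k) for k in range(n))

identity = tuple(range(n))

a_n = lambda k: k
for _ in range(n):
    a_n = compose(a, a_n)   # build a^n by repeated composition

assert table(a_n) == identity                            # a^n = 1
assert table(compose(b, b)) == identity                  # b^2 = 1
assert table(compose(b, a)) == table(compose(a_inv, b))  # b a = a^(-1) b
print("relations hold for n =", n)
```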
How to launch PHP on Atom? After changing a file's extension from .html to .php, it shows me a message that there is no PHP interpreter found at php. And it is not about script support; it should launch in the browser. Clicking the "set PHP" button shows me the ide-php package, and I see no way to set the path. Atom settings. PHP settings You can't run PHP without a server; you need to install XAMPP (on Windows you could use WAMP, or on Mac you could use MAMP). You can download XAMPP from the link below: https://www.apachefriends.org/download.html Thanks a lot, but I prefer MAMP anyway
Binding Gridview With Different Names Regardless of Property names public class Emp { public int ID { get; set; } public string Name { get; set; } } List<Emp> lstEmp=new List<Emp>(); lstEmp.Add(new Emp{ID=1,Name="ABC"}); gridview1.DataSource=lstEmp; gridview1.DataBind(); This will show the property names as column names when bound to a GridView. But can I bind the GridView with the Emp object and different column names? I mean, the column names should not be ID and Name. You mean the columns' headers? Yes you can. In your gridview1 DataBound event write this: protected void gridview1_DataBound(object sender, EventArgs e) { gridview1.Columns[0].HeaderText = "Employee No."; gridview1.Columns[1].HeaderText = "Employee Name"; } Change the headers accordingly. Hope that helps. My GridView ID is GridViewResults GridViewResults.HeaderRow.Cells[0].Text = "Employee no"; GridViewResults.HeaderRow.Cells[1].Text = "Employee Name"; GridViewResults.HeaderRow.Cells[2].Text = "Registration Date";
Common mistakes: subject-verb agreement The counselor recommended that he go to a community college. I found the above example here. What is the context and why not goes after he? Certain verbs in English work like that in the third person. Those verbs of advice are: recommend, suggest and advise. In the simple present for the third person singular, they take the bare infinitive (people used to call this the subjunctive). There are also others,not discussed here. Demand and order also work like this. For example: 1) We recommend he stay for another week. [instead of the usual s on stay: stay] 2) She advises he leave immediately. [same as above] 3) I suggest she look this up if he doesn't believe me. [same as above] This is standard English, in both speaking and writing. And if an individual wants to pass an English test, I suggest he or she familiarize themselves with this usage. Not using it correctly will lead to losing points or getting a lower grade. Also, please note: people say all sorts of things and speak in all sorts of ways. None of those usages are relevant here. All of them are OK. All speech is what it is. It's all okay. This answer incorrectly suggests there are exactly three verbs (recommend,suggest,advice). But as the BBC link in Alex_ander's link shows, the same form (subjunctive) is used with "insist" and a host of other verbs. And it can even appear without any of those verbs: "t is necessary that he see a doctor". I meant to write: verbs of advice, and have now amended my answer. @MSalters indeed, the subjunctive can be triggered by things other than verbs, and by considerations other than advice: It's important that the project be finished promptly. It's important that he speak with her before tomorrow. There is no such thing as subjunctive. That is a complete misnomer. @phoog I was not dealing with every single instance of the "subjunctive". 
"It is necessary that he see a doctor" is not the specific usage I was addressing, as can be seen from my examples. "There is no such thing as the subjunctive" is a baffling claim. @phoog The use of the term subjunctive came from linguists in the 18th (or earlier?) century who used Latin structure to explain English. Of course, it is impossible because English doesn't even really have "conjugation" of verbs. It has verb forms and tenses, including an s morpheme in the third person singular which, in cases such as the one at hand, is dropped. "Subjunctive" therefore is considered a misnomer by linguists even though "the general public" seems to still use the term. Technically, the first version with Present Simple goes is not a mistake, it's just less formal (and not as widely used) than the second version, where the subjunctive is used in its classic form (with bare infinitive go). In modern British English the slightly less formal version with should go (in the place of go) is preferred: http://www.bbc.co.uk/worldservice/learningenglish/grammar/learnit/learnitv105.shtml @Lambie: Considering the answer is backed by a link to the BBC, I'm inclined to trust them on the subject of British English. What's your source? Possibly it's considered wrong in AE, but numerous British book sources (mainly with official documents from Parliament) can be googled (for 'recommend-that') using that phraseology with the 3rd person (both singular and plural 'are'). @Lambie In my experience, the "recommended that he goes" construction (including "recommend he goes") is very common in British usage. It is rather uncommon in the US and sounds hopelessly wrong to my (US) ear. I don't have enough exposure to other varieties of English to comment on those. @Lambie perhaps I didn't make myself clear. My principal exposure to British English these days is written sources produced by journalists and the UK government.
The frequency with which I see third-person singular indicative mood in contexts that call for subjunctive mood is so high that I cannot reconcile it with the assertion that it is not standard. From a practical point of view, the fact that it appears with that frequency in those sources suggests that it has become standard. There are two possible answers. The one I prefer is that the author is using an ellipsis, which is permitted, indeed common, but very confusing to learners. What is meant in both sentences is "The counselor recommended that he should go to a community college." The modal verb "should" is to be followed by an infinitive without "to." So "he should goes" is absolutely wrong whereas "he should go" is perfectly proper. But it is permissible after "recommend" to drop the "should." There are numerous cases, particularly in speech or informal writing, where certain words with a purely grammatical function can be omitted but are to be added back mentally by the listener or reader. These are called ellipses, and native speakers process them without even being aware of it. They are, however, very confusing to people trying to learn grammatical English, and, in my opinion, teachers should avoid them. What is the answer you don't prefer? That it is the subjunctive? Why don't you prefer that? Yes. The alternative answer is that verbs such as "recommend" require that the verb in the subordinate clause be in the subjunctive. I do not prefer it because, at least in modern American English, use of the subjunctive is so rare that I at least am not willing to make the blanket assertion that failure to use the subjunctive is an error. Moreover, explaining the few cases when failure to use the subjunctive is arguably a mistake involves difficulties that I try to avoid. The problem with your position is that historical evidence makes it clear that it is indeed a vestige of a much more robust subjunctive.
Nobody in the 18th or 19th century was saying "recommend that he should go," but they certainly were saying "if he go" and "if he were." @phoog Your linguistic history is impeccable. English did once have a "robust" subjunctive mood. Donne wrote "if thou be'st born to see strange sights." Pure subjunctive. But we are helping people to learn 21st century English, where the subjunctive is itself a rare sight. As you deduced, I was aware of the historical approach, but I prefer to explain 21st century English through currently robust traits, such as modals and ellipses, rather than vestigial ones. Despite my preference, I shall not declare it a problem if you write an answer based on the remnants of the subjunctive.
Merge dataframes including extreme values I have 2 data frames, df1 and df2: df1 Out[66]: A B 0 1 11 1 1 2 2 1 32 3 1 42 4 1 54 5 1 66 6 2 16 7 2 23 8 3 13 9 3 24 10 3 35 11 3 46 12 3 51 13 4 12 14 4 28 15 4 39 16 4 49 df2 Out[80]: B 0 32 1 42 2 13 3 24 4 35 5 39 6 49 I want to merge the dataframes while at the same time including the first and/or last value of each set in column A. This is an example of the desired outcome: df3 Out[93]: A B 0 1 2 1 1 32 2 1 42 3 1 54 4 3 13 5 3 24 6 3 35 7 3 46 8 4 28 9 4 39 10 4 49 I'm trying to use merge, but that only slices the portion of the data frames that coincides. Does someone have an idea how to deal with this? Thanks! By first / last you mean max and min? I don't get why 2 and 54 would be in df3 otherwise. I refer to the rows immediately above and below a matching row with the same number in column A. Ok, I get it, I will try some things... What have you tried so far? Should index 7 A=2 B=23 be part of the result, since index 8 B=13 is in the list? I started by merging both dataframes, df1.merge(df2), then I tried to get the numbers of column A that I will need to slice by means of unique(). Here I can only think of locating df2.B beside df1 and trying to chop using the positions +1 and -1. However, that doesn't keep the rows restricted to the same number in column A. The index 7 row does not have to be processed because it belongs to a different set number in column A. Gotcha, check solution. Here's one way to do it using merge with indicator, groupby, and rolling: df1[df1.merge(df2, on='B', how='left', indicator='Ind').eval('Found=Ind == "both"') .groupby('A')['Found'] .apply(lambda x: x.rolling(3, center=True, min_periods=2).max()).astype(bool)] Output: A B 1 1 2 2 1 32 3 1 42 4 1 54 8 3 13 9 3 24 10 3 35 11 3 46 14 4 28 15 4 39 16 4 49 @JonathanPacheco Here's an interesting way of doing it.
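To see what the rolling-window answer is doing, here is the same "keep matches plus their immediate neighbours within each A-group" logic written out in plain Python (standard library only, on the question's example data, purely for illustration):

```python
from itertools import groupby

# rows of df1 as (A, B) pairs, and the B values of df2
rows = [(1, 11), (1, 2), (1, 32), (1, 42), (1, 54), (1, 66),
        (2, 16), (2, 23),
        (3, 13), (3, 24), (3, 35), (3, 46), (3, 51),
        (4, 12), (4, 28), (4, 39), (4, 49)]
targets = {32, 42, 13, 24, 35, 39, 49}

result = []
for _, group in groupby(rows, key=lambda r: r[0]):
    group = list(group)
    found = [b in targets for _, b in group]
    for i, row in enumerate(group):
        # keep the row if it, or an immediate neighbour in the same
        # group, matched: the analogue of rolling(3, center=True)
        if any(found[max(i - 1, 0):i + 2]):
            result.append(row)
print(result)
```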
pd.concat([df1.groupby('A').min().reset_index(), pd.merge(df1,df2, on="B"), df1.groupby('A').max().reset_index()]).reset_index(drop=True).drop_duplicates().sort_values(['A','B']) A B 0 1 2 4 1 32 5 1 42 1 2 16 2 3 13 7 3 24 8 3 35 3 4 12 9 4 39 10 4 49 Breaking down each part #Get Minimum df1.groupby('A').min().reset_index() # Merge on B pd.merge(df1,df2, on="B") # Get Maximum df1.groupby('A').max().reset_index() # Reset the Index and drop duplicated rows since there may be similarities between the Merge and Min/Max. Sort values by 'A' then by 'B' .reset_index(drop=True).drop_duplicates().sort_values(['A','B'])
How to show only a certain part of the entry title? Working on a project that I'm running into a snag about... I have a page title using the_title() ... that has a location in the title such as this: "Adventure Dental - Colorado Springs, CO" However, further down on the same page, that title needs to be used again, but this time just with the "Colorado Springs, CO" aspect of the title. Does anyone know of the best way that I can do this? Hiding part of the title with CSS clipping (as the OP requested in the original unedited title) is almost certainly not the best way to accomplish this. Here is how to do this in the template file. If all of your titles will be formatted the same way (with a space, a hyphen, and a space), you can do something like this to get just the second part of the title further down in the page: // Echoes the second part of the post title $title = get_the_title(); $parts = explode( ' - ', $title ); echo $parts[1]; However, depending on your use case it may be better to have just the business name as the post title, and store the location as a custom field. Then, assuming the location is stored in the custom field named location, you can do something like this in the main title section of your template file: <h1 class="entry-title"> <?php the_title(); ?> <?php if ( $location = get_post_meta( get_the_ID(), 'location', true ) ) { echo ' - ' . $location; } ?> </h1> Thanks for the reply, unfortunately though the first suggestion didn't work and doesn't display anything for the titles at all in my template. Maybe there's something else that's needed? The second suggestion won't really be possible as there's too many pages to make a custom field for these since they've all already been formatted in the same way where "business name - location" is the same exact way that all the pages are formatted. hmm. Really puzzled on how I can get this to work... if you have any ideas please let me know. Thanks again for the reply though. Fixed this! 
You were close in your first suggestion, just needed to have $post->post_title; instead of get_the_title();. Thanks, you're a rockstar.
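For readers following the splitting logic outside PHP, the same split-on-" - " idea can be sketched in Python (using the hypothetical title string from the question):

```python
title = "Adventure Dental - Colorado Springs, CO"
# Split on the first " - " only, so a location that itself contains
# " - " would stay intact in the second part.
business, location = title.split(" - ", 1)
print(business)   # Adventure Dental
print(location)   # Colorado Springs, CO
```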
'cv2.face' has no attribute 'LBPHFaceRecognizer' I am creating a face recognition system using Python and IDLE on these versions: Python 3.6.1 :: Anaconda custom (64-bit), Anaconda 4.4, IDLE. When I try to train the face recognizer I am getting an error like: AttributeError: module 'cv2.face' has no attribute 'LBPHFaceRecognizer' Here I have attached the code: import cv2 import os import numpy as np from PIL import Image # Path for face image database path = 'dataset' recognizer = cv2.face.LBPHFaceRecognizer() def getImagesWithID(path): imagePaths=[os.path.join(path,f) for f in os.listdir(path)] faceSamples=[] Ids=[] for imagePath in imagePaths: faceImg=Image.open(imagePath).convert('L') faceNp=np.array(faceImg,'uint8') ID=int(os.path.split(imagePath)[-1].split('.')[1]) faceSamples.append(faceNp) Ids.append(ID) cv2.imshow("training",faceNp) cv2.waitKey(10) return np.array(Ids), faceSamples Ids,faces=getImagesWithID(path) recognizer.train(faces,Ids) recognizer.save('recognizer/trainningData.yml') cv2.destroyAllWindows() There is no need for the code after the recognizer line. See https://stackoverflow.com/help/mcve. The editor/IDE you use is also not a factor. Look in the cv2.face doc and see what attributes are available. What OpenCV version are you using? @eshirima We are using OpenCV version 2.4.13. You have to use a newer version of OpenCV; you can easily install OpenCV 3.4.0 using conda install -c conda-forge opencv That is such an old version of OpenCV that they didn't even have Python docs for it. Now going off of their C++ documentation, I would say the Python equivalent was cv2.createLBPHFaceRecognizer(). It was not yet moved to face by then. I would highly recommend you update to at least OpenCV 3.X or else you'll keep running into these issues.
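Since the factory's name and location moved between releases, one defensive option is to look it up at runtime. This is my own sketch (names per the answers above: cv2.face.LBPHFaceRecognizer_create in OpenCV 3.3+, cv2.face.createLBPHFaceRecognizer in 3.0 to 3.2, cv2.createLBPHFaceRecognizer in 2.4.x; exact availability depends on the contrib build). It is exercised here with a stand-in object rather than a real cv2 module:

```python
from types import SimpleNamespace

def make_lbph(cv2_mod):
    """Return an LBPH recognizer, trying the factory names used by
    different OpenCV releases, newest first."""
    face = getattr(cv2_mod, "face", None)
    for owner, name in ((face, "LBPHFaceRecognizer_create"),
                        (face, "createLBPHFaceRecognizer"),
                        (cv2_mod, "createLBPHFaceRecognizer")):
        if owner is not None and hasattr(owner, name):
            return getattr(owner, name)()
    raise AttributeError("no LBPH face recognizer in this OpenCV build")

# demo with a fake module object standing in for cv2
fake = SimpleNamespace(
    face=SimpleNamespace(LBPHFaceRecognizer_create=lambda: "lbph"))
print(make_lbph(fake))  # lbph
```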
Blackjack Probability: Theoretical vs Simulation I am comparing the probability of winning blackjack by theoretically calculating it with probability equations and by running realistic simulations in MATLAB. If I assume 3 decks are used, the dealer sits at 18, and only the player's cards are considered in the calculations, I calculate every possible combination of getting 19, 20, and 21 and run the probability equation for it. Example: 10 and ace would get you (16*4)/(factorial(156)/(factorial(2)*factorial(156-2))) = 0.0476. After all of the probabilities are added, the total probability of getting 19, 20, or 21 is 0.6552. Then I create a simulation where 3 decks are shuffled and cards are drawn from the top. If the cards drawn do not total 19, 20, or 21, more cards are drawn. After running 1000s of these, the probability of winning I get is 0.4625. Why might these probabilities be so different? I know I am shortcutting a lot of how the game is played, but I feel like I am modeling them the same way. Is it so much less in real life because the simulation isn't considering every possible little way of winning even though a lot of simulations are run? I think you'll have to add more detail about what you do. I don't understand what you mean by "$10$ and ace would get you", and I don't understand where $16\cdot4$ comes from. The answer to your question "Why might these probabilities be so different?" could be either "because there's an error in your calculations", "because there's an error in your simulation code" or "because you've made some simplifying assumptions in the calculation", and we have no way of knowing which it is without knowing exactly what you did.
I should've multiplied each of these by 3 (for the three decks) above, but I did that in my code so that's not the issue. Since apparently you have code both for the calculation and for the simulation, I'd suggest that you show it to us. Without that, there's really not much we can say, except that there must be an error in one of them. What you write in the last paragraph of the question isn't the explanation. If the winning probability is $0.6552$, you should be getting something much closer than $0.4625$ if you run thousands of simulations. One way of showing the code would be to post it on GitHub Gist.
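One way to localize the discrepancy is to compare the two methods on a case where both are easy to get right: the probability that the first two cards (ace counted as 11) total 19, 20, or 21 when drawn from three decks. A sketch of that cross-check (my own code, not the asker's MATLAB); if the exact enumeration and the simulation disagree even here, the bug is in one of the two models rather than in the multi-card game logic:

```python
import itertools
import random

# one blackjack deck by card value: 2-9 four times each,
# sixteen 10-valued cards, four aces counted as 11; times 3 decks
deck = ([2, 3, 4, 5, 6, 7, 8, 9] * 4 + [10] * 16 + [11] * 4) * 3  # 156 cards

# exact probability over all C(156, 2) unordered two-card draws
hits = sum(1 for i, j in itertools.combinations(range(len(deck)), 2)
           if deck[i] + deck[j] in (19, 20, 21))
exact = hits / (len(deck) * (len(deck) - 1) / 2)

# Monte Carlo estimate of the same event
random.seed(1)
trials = 20000
mc = sum(sum(random.sample(deck, 2)) in (19, 20, 21)
         for _ in range(trials)) / trials
print(exact, mc)  # the two figures should agree to within about 1%
```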
Problem burning mp3s to CDs I am having a problem writing mp3s to CDs that will play in my car CD player. (It is mp3 capable.) I have used Verbatim CD-RWs successfully in the past to write mp3s that play in my car. I have used Brasero, XfBurn, and K3b, and am writing it as a data disc because I can get more songs on it. I have also tried writing at the lowest speed. I also tried using CD-R discs. Is there anything else I can try? Using Ubuntu MATE 18.04. In Brasero, are you burning them as a data project or as an audio project? Is your project's length less than 80 minutes? IIRC, 80 minutes is the max one can put on a traditional CD. I don't really know, I'm spitballing at this point. You may also want to edit your question to tell folks which version of Ubuntu you're using. Has anything changed from the time you could write till now, when you can't? Nothing has changed. I wrote them as an audio project and it does work. However, my discs went from storing around 165 songs to 25. :-( @crip659 Odd; I could see a CD player not playing data files from the beginning, but not stopping playing them afterwards. It sounds almost like the CD player's laser is getting weak or dirty, but usually they work or they don't (no halfway).
JASIG External Redirect in AuthenticationHandler I'm at the end of my rope here and figure someone here might be able to help me. I'm working with the newest versions of JASIG CAS and Java and I've hit a wall. I have an implementation of an AuthenticationHandler and I'm trying to redirect to an external URL on authentication failure. I can't figure out how to get a RequestContext (to use WebUtils.getHttpServletResponse(context)) in the handler, or how to set a flow-scope variable in login-webflow.xml, because I'm not calling the handler from the webflow. I need to craft the URL for each specific failure because it includes an error code in the redirect, so I'm genuinely stumped. Does anyone have any places to point me to, or ideas on how I can accomplish my redirection? Thanks to anyone who can give some tips!
php + sql server login & sessions Hello, I am trying to create a login and session script in PHP to use with SQL Server, and I cannot get it to work. It seems like no matter what I put into the login form, as long as it validates, it will work. I cannot figure out what is wrong with the code, however; I've just recently started using PHP and SQL Server and have not gained the knowledge to figure out the problem myself. If someone could help that would be great. Also, if you know any good tutorial sites that use SQL Server and PHP, could you please share them, as there don't seem to be that many good tutorial sites for this combination, sadly. Any help is much welcomed at this stage. My main problem is that it isn't checking whether the information posted in the HTML form exists in the database. (I have taken out the JS validation as it doesn't seem necessary here; that part works.) Login.html <form name="log" action="log_action.php" method="post"> Username: <input class="form" type="text" name="uNm"><br /> Password: <input class="form" type="password" name="uPw"><br /> <input name="submit" type="submit" value="Submit"> </form> log_action.php session_start(); $serverName = "(local)"; $connectionInfo = array("Database"=>"mydatabase","UID"=>"myusername", "PWD"=>"mypassword"); $conn = sqlsrv_connect( $serverName, $connectionInfo); if( $conn === false){ echo "Error in connection.\n"; die( print_r( sqlsrv_errors(), true)); } $username = $_REQUEST['uNm']; $password = $_REQUEST['uPw']; $tsql = "SELECT * FROM li WHERE uNm='$username' AND uPw='$password'"; $stmt = sqlsrv_query( $conn, $tsql, array(), array( "Scrollable" => SQLSRV_CURSOR_KEYSET )); if($stmt == true){ $_SESSION['valid_user'] = true; $_SESSION['uNm'] = $username; header('Location: index.php'); die(); }else{ header('Location: error.html'); die(); } index.php <?php session_start(); if($_SESSION['valid_user']!=true){ header('Location: error.html'); die(); } ?> Thank you for any help you guys might be able to bring As an aside, your code is
vulnerable to SQL Injection attacks (http://en.wikipedia.org/wiki/SQL_injection); it is extremely dangerous to directly concatenate user input into a query. Learn about parameterized queries to defend against this. Also, it isn't entirely clear what your question is. First you say "I cannot get it to work", then you say, "no matter what I put into the login form, as long as it validates it will work". What, exactly, is the problem you are running into? It's not checking whether the username and password exist in the database; regardless of what I put in, it will take me to the index page, whereas if they don't exist it should take me to an error page. And yes, I plan to add SQL injection prevention once I get a few scripts up and running and am comfortable enough working with SQL Server. The problem is that you never actually check the results of the query. if($stmt == true){ only checks that the query executed without errors - it says nothing about the results returned by the query. Therefore, you need to use the sqlsrv_fetch function (or one of the related functions) to actually examine the result of the query. In your particular case, simply checking if the result set has rows with sqlsrv_has_rows should be sufficient. Okay, I had a look at the has-rows link, and using it I've adjusted my script and it says it has rows, so that's good. It also only works when I'm using the right username and password, so now just to get the session to work. Thanks a lot!
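The point the answer makes — a truthy statement handle only means the query *ran*, not that it matched anything — holds in any database API. A minimal illustration using Python's built-in sqlite3 module (parameterized, which also addresses the injection warning above; the table and column names are just placeholders mirroring the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE li (uNm TEXT, uPw TEXT)")
conn.execute("INSERT INTO li VALUES ('alice', 'secret')")

# Wrong credentials: the query still *executes* without error...
cur = conn.execute("SELECT * FROM li WHERE uNm=? AND uPw=?", ("bob", "oops"))
bad = cur.fetchone()   # ...but returns no row, so the login must be rejected

# Right credentials: now a row actually comes back.
cur = conn.execute("SELECT * FROM li WHERE uNm=? AND uPw=?", ("alice", "secret"))
good = cur.fetchone()
print(bad, good)
```

The decision must be made on the fetched row (here `fetchone()` returning `None` or a tuple), which is the same role `sqlsrv_has_rows` plays in the PHP code.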
Is there a way to programmatically determine how long a conference room resource has been available for a given time slot? For example, let's say a conference room was booked for a 12-1pm meeting. At 9am that same morning, a user cancelled that meeting, freeing up the conference room. Is there any way to programmatically run a script which would indicate, if run at 10am, that the room had become available one hour ago? If you retrieve the event that was cancelled via Events.get, you can get the updated time field as a response, which, in case the event got cancelled, equals the time the event was cancelled. Then, the script can calculate the difference between current time and the time it got cancelled. You could also use Freebusy to make sure that no one created another event after the previous one got cancelled and that the resource is free for that time. Update If you want to know for how long a certain conference room has been free for a certain time, you can: Get the list of events related to this resource via Events.list, including the ones that were cancelled (set showDeleted to true) to achieve that. Check if there are any events whose scheduled time matches the time you want to look for (fields start and end). If any of these events matches, you can calculate, for that event (and in case the event got cancelled and the resource is indeed free - event status is cancelled), the difference between current time and the time the event got cancelled (by checking the field updated). I hope this is what you wanted. Great idea -- just not sure if it would work at a scale where there are hundreds of conference rooms and thousands of users. In other words, how would one be able to determine, for a given conference room, which event was canceled that had previously booked it? @Ryan I updated my response based on your comment. Let me know if that works for you.
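Assuming the Events.list approach from the answer, the bookkeeping itself is straightforward once the events are in hand. A sketch of the calculation in Python — the event dicts mimic the Calendar API v3 fields named above (status, updated, start, end), and the sample data is hypothetical:

```python
from datetime import datetime, timezone

def ts(s):
    """Parse an RFC 3339 UTC timestamp of the form used below."""
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def minutes_free(events, slot_start, slot_end, now):
    """Minutes since the room became free for the slot, or None if still booked."""
    freed_at = None
    for ev in events:
        start, end = ts(ev["start"]["dateTime"]), ts(ev["end"]["dateTime"])
        if start < slot_end and end > slot_start:        # event overlaps the slot
            if ev["status"] != "cancelled":
                return None                              # an active booking remains
            updated = ts(ev["updated"])                  # cancellation time
            freed_at = max(freed_at, updated) if freed_at else updated
    return (now - freed_at).total_seconds() / 60 if freed_at else None

# The scenario from the question: a 12-1pm booking cancelled at 9am, checked at 10am.
events = [{"status": "cancelled", "updated": "2024-03-01T09:00:00Z",
           "start": {"dateTime": "2024-03-01T12:00:00Z"},
           "end":   {"dateTime": "2024-03-01T13:00:00Z"}}]
mins = minutes_free(events, ts("2024-03-01T12:00:00Z"),
                    ts("2024-03-01T13:00:00Z"), ts("2024-03-01T10:00:00Z"))
print(mins)   # 60.0
```

Taking the latest `updated` among overlapping cancelled events approximates the Freebusy cross-check the answer suggests: any non-cancelled overlap short-circuits to "not free".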
Dreamweaver Regex - Unmatched ) in regular expression The Issue The issue I am tackling is that past developers of some code I am working on set variables like $_GET[name] or sometimes like $_GET['name']. Trying to make the code uniform, I want to make all of them like $_GET['name'] Attempted Solution In a custom DreamWeaver script, I have used the following. dreamweaver.setUpFindReplace({ searchString: "\$(_POST|_SESSION|_GET)\[([^\'][0-9a-zA-Z _]+?[^\'])\]", replaceString: "$$1['$2']", searchWhat: "document", searchSource: true, useRegularExpressions: true }); dreamweaver.replaceAll(); Extra Information While I am getting the error running it from a custom script, I do not get it when running the same "searchString" and "replaceString" within the Find prompt (CTRL + F). The Find prompt will happily find and replace the instances where it occurs. Before someone potentially points it out - yes, I could just run the Find prompt and do it from there, but I still have to run the custom script to run the other 20 or so find-and-replace options. Do you have example end results somewhere? Sure do. I have the regex used on Regex101 - https://regex101.com/r/bE9kN6/1 Finally... Anybody know how to stop the unmatched parenthesis issue? I've been trying for a while now and I can't find a resolution, since there are no unmatched parentheses. Solution Thanks to Bob for figuring this out. Dreamweaver uses JS regex (which I did not believe was different from PHP, but it turns out one is POSIX and one is perl-regex [or something...]) and literals need escaping with \\ not \. This made the final, working function: dreamweaver.setUpFindReplace({ searchString: "\\$(_POST|_SESSION|_GET)\\[([^\'][0-9a-zA-Z _]+?[^\'])\\]", replaceString: "$$1['$2']", searchWhat: "document", searchSource: true, useRegularExpressions: true }); dreamweaver.replaceAll(); I can't see any error (nor can Regex101), and I am led to the conclusion that you have found a bug in DreamWeaver.
I don't think the single quote would have any special meaning in the searchString parameter, so you could try it without the preceding back-slash and see if that works round it. If not, try simplifying the second marked expression until you lose the error: it won't do what you want, but that doesn't matter for now. Thanks for the reply AFH. Just got to work and in front of a computer, so shall try that now - I'll post back here shortly. That would be a negative - stripping all the way back to \$(_POST|_SESSION|_GET)\[ gives "Unterminated character class" but works on regex101 - and making the search greedy using \$(_POST|_SESSION|_GET)\[(.+?)\] doesn't replace with the ReplaceWith parameter. Starting to dislike DW Regex. Further testing - even searchString: "\$(_POST|_SESSION|_GET)", replaceString: "\$_GET", does not replace $_POST and $_SESSION with $_GET. @SlackBadger Have you tried escaping the backslashes? I've not used DreamWeaver, but that looks a lot like JS and in a JS string literal "\[" becomes [ where you probably want "\\[" to become \[. Try using double backslashes in the string literal everywhere you want a backslash in the actual regex. Bob, I love you right now. You were correct, it is JS Regex and that did solve the issue. If you could throw that into an answer - you can take the accepted answer and the 100 point bounty :) Great to hear! I'll write a proper answer when I get home. Mobile typing sucks. Your escaping is a bit off. You seem to be using JavaScript, and the string literal "\$(_POST|_SESSION|_GET)\[([^\'][0-9a-zA-Z _]+?[^\'])\]" evaluates to $(_POST|_SESSION|_GET)[([^'][0-9a-zA-Z _]+?[^'])]. Instead, you should use "\\$(_POST|_SESSION|_GET)\\[([^'][0-9a-zA-Z _]+?[^'])\\]", which evaluates to \$(_POST|_SESSION|_GET)\[([^'][0-9a-zA-Z _]+?[^'])\]. The reason here is because you actually have two levels of parsing going on, each with its own escaping rules. 
First, you have the JavaScript string literal, which allows escaping things like \n for a new line. However, unrecognised escape sequences like "\[" are silently swallowed and produce [. The regex engine sees [, indicating the start of a character class. You want the regex engine to receive literal backslashes in the pattern. To do so, you must first produce a JS string containing literal backslashes. Which means you must escape the backslashes themselves in the string literal, so "\\" produces \, e.g. "\\[" produces the string \[. This way the regex engine sees \[, indicating an escaped (literal) bracket. The other thing is that the single quotes do not need to be escaped at all, since they hold no special meaning in regex and single quotes inside a double-quoted string are treated as normal characters by JS. There is another option, but I'm not sure if DreamWeaver accepts it. JavaScript has a special regex literal syntax, so you don't have to create a string first. By skipping that extra parsing step, you actually avoid the need to double-escape. A JS regex literal is of the form /pattern/options (forward-slashes need to be escaped, but you don't have any in this pattern). So your pattern can be expressed as /\$(_POST|_SESSION|_GET)\[([^'][0-9a-zA-Z _]+?[^'])\]/. Once again, the single quotes do not need to be escaped at all. If DreamWeaver supports the regex literal syntax, this is actually the preferred option. You may award your bounty in 21 hours - I'll give this in the morning :) Well spotted - I completely missed that. Now selected as best answer - I would also vote you up, but apparently I need 15 reputation for that and giving 100 away means I am now at 13 - bad times!
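The same two-layer escaping applies in any language where the regex is built from an ordinary string literal: one level of backslashes is consumed by the string parser, the other by the regex engine. A quick check of the doubled form in Python (the pattern is a simplified version of the one above — the `[^\']` guards are dropped for brevity):

```python
import re

# Two escaping layers: "\\$" in the source becomes the two characters \$
# in the string, which the regex engine then reads as a literal dollar sign.
pat = "\\$(_POST|_SESSION|_GET)\\[([0-9a-zA-Z _]+?)\\]"
out = re.sub(pat, r"$\1['\2']", "echo $_GET[name];")
print(out)   # echo $_GET['name'];
```

Python's raw-string syntax (`r"..."`) plays the same role as the JS regex literal: it skips the string-literal escaping layer, so only one level of backslashes is needed.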
Git clone fails with "SSLRead() return error -980641" I'm trying to clone a project from an open git repo. I got this error. I was trying to find out the reason and found that running brew install git --with-brewed-curl --with-brewed-openssl got this: curl: (56) SSLRead() return error -9806 Error: Failed to download resource "git" git --version: 2.4.9 (Apple Git-60) curl -V: curl 7.43.0 That error message is from a curl powered by darwinssl, not the OpenSSL you make it sound like you're trying to use? Successfully cloned the project using Tunnel Bear, but still have no idea what the reason for this error is... You may also need to install curl with the --with-openssl option via brew. It helped me on OS X 10.11 (fresh install). You could use $ brew remove git $ brew remove curl $ brew install openssl $ brew install --with-openssl curl $ brew install --with-brewed-curl --with-brewed-openssl git
Angular Variable undefined result I've been trying to solve this myself but I think I need help. It's been hours of searching for a solution but I can't seem to solve it on my own. I'm learning ASP.NET Core 2.2 Web API, and I'm following a tutorial I found on YouTube. The Web API works using Postman, and I was able to save to the database. But the problem is when I call it from Angular, and I think the problem is not the way I call it, but the binding of the input text data. Here is the HTML code <div class="input-group"> <div class="input-group-prepend"> <div class="input-group-text bg-white"> <i class="far fa-credit-card" [class.green.icon]="CardNumber.valid" [class.red.icon]="CardNumber.invalid && CardNumber.touched"></i> </div> <input maxlength="16" minlength="16" required name="16 Digit CardNumber" #CardNumber="ngModel" [(ngModel)]="service.formData.CardNumber" class="form-control" placeholder="Card Number"> </div> </div> I have other input texts which are working fine, and in my eyes they are the same, so I don't know the reason why. In my Web API, all of the data is required, so I'm receiving a 400 (Bad Request) error. I used console.log to view the input data in the browser, but when it comes to CardNumber it is undefined, so I'm expecting this to be the problem. The other input texts are fine. This is my model export class PaymentDetail { PMId: number; CardOwnerName: string; CardNumber: string; ExpirationDate: string; CVV: string; } And this is the method that is called by the submit button, where I'm receiving undefined postPaymentDetail(formData: PaymentDetail) { console.log(formData.CardOwnerName); console.log(formData.CVV); console.log(formData.ExpirationDate); console.log(formData.CardNumber); console.log(formData.PMId); return this.http.post(this.rootUrl + '/PaymentDetail', formData); } Update: This is the result of the Network tab Header. The data I typed in the input text is there.
And this is the result of the Network tab Preview. I checked the C# code and added a breakpoint. The breakpoint is not hit when calling from Angular, but it works using Postman - I mean, the code is called. Am I doing the breakpoint wrong? Put a breakpoint in the postPaymentDetail method. What is the value of formData? Is this a type that can be converted to JSON? Now move on to the Network tab in the browser debugging tools: what is actually being sent? Now move on to the .NET code: how is the incoming data being bound? You need a formGroup, I guess, because you can only get the value with ngModel. What do you send in postPaymentDetail()? Is this your object typed, service.formData.CardNumber? @Igor I updated the question and added a screenshot. Msu Arven - I have a formGroup. This is the code <form #form="ngForm" autocomplete="off" (submit)="onSubmit(form)"> @shadowman_93 I'm sorry, I didn't understand the question. You don't need to send any data to the postPaymentDetail() method, because you are already binding it with ngModel. Make some changes like below in your component: formData = new PaymentDetail(); postPaymentDetail() { console.log(this.formData.CardNumber); return this.http.post(this.rootUrl + '/PaymentDetail', this.formData); } I hope this will solve your issue :) This solved my problem. But why does simply adding 'this' solve the problem? Can you please elaborate in the answer? Thank you. You are using the formal parameter in console.log, which doesn't have the value in it, but with the this keyword we are using the service class variable, which is actually the binding variable. Create a constructor for the PaymentDetail class and set the variables' default values to blank. Like: export class PaymentDetail { PMId: number; CardOwnerName: string; CardNumber: string; ExpirationDate: string; CVV: string; constructor () { this.PMId = 0; this.CardOwnerName = ""; this.CardNumber = ""; this.ExpirationDate = ""; this.CVV = ""; } }
RTC initialisation in an MCU - why use a global callback The code below is related to the initialization of an RTC in an MCU. Would anybody know the rationale for passing NULL to rtc_init() and then setting a global callback global_rtc_cb equal to it? Why would you use a global callback at all when there is another function called rtc_callback defined and used as the callback in the struct? int main() { rtc_init(NULL); } //----------------------------------------------------------------- void ( * global_rtc_cb)(void *); int rtc_init(void (*cb)(void *)) { rtc_config_t cfg; cfg.init_val = 0; cfg.alarm_en = true; cfg.alarm_val = ALARM; cfg.callback = rtc_callback; cfg.callback_data = NULL; global_rtc_cb = cb; irq_request(IRQ_RTC_0, rtc_isr_0); clk_periph_enable(CLK_PERIPH_RTC_REGISTER | CLK_PERIPH_CLK); rtc_set_config(QM_RTC_0, &cfg); return 0; } //--------------------------------------------------------------------- /** * RTC configuration type. */ typedef struct { uint32_t init_val; /**< Initial value in RTC clocks. */ bool alarm_en; /**< Alarm enable. */ uint32_t alarm_val; /**< Alarm value in RTC clocks. */ /** * User callback. * * @param[in] data User defined data. */ void (*callback)(void *data); void *callback_data; /**< Callback user data. */ } rtc_config_t; Why don't you search for global_rtc_cb and rtc_callback, and track down exactly where each one of them is invoked? (in the case of rtc_callback, you will have to go into the code of function rtc_set_config, and see where the value of cfg->callback is stored). The rtc_ functions are part of the RTC driver. The RTC driver has something driver-specific to do when the event that prompts the callback occurs. This driver-specific stuff happens in rtc_callback. But there may also be other application-specific stuff that the application must do when the event occurs. The application-specific stuff should be done at the application layer, not within the driver.
So if the application has something to do in response to the event, it can provide a callback to rtc_init. Surely rtc_callback calls global_rtc_cb so that both the driver-specific stuff and the application-specific stuff are performed when the event occurs. Apparently your particular application doesn't need to do anything for this event, so it passes NULL to rtc_init. But a different application that uses the same driver may provide a callback function. Thanks, that is a very helpful description.
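The layering described above — driver work always runs, then an optional application hook — is a common pattern. A sketch of the control flow (in Python for brevity; the real code is C, and the names mirror the snippet in the question):

```python
app_cb = None            # plays the role of global_rtc_cb
calls = []               # records the order in which the layers run

def rtc_init(cb):
    global app_cb
    app_cb = cb          # may be None: the app has nothing extra to do

def rtc_callback(data):
    # What the driver registers with the hardware: its own handling
    # always runs, then it forwards to the application hook, if any.
    calls.append("driver")
    if app_cb is not None:
        app_cb(data)

rtc_init(lambda data: calls.append("app"))
rtc_callback(None)
print(calls)             # ['driver', 'app']
```

Passing `None` (NULL) simply means the forwarding step is skipped while the driver-specific handling still happens.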
face tracking or "dynamic recognition" What is the best approach for face detection/tracking considering the following scenario: when a person enters the scene/frame, they should be detected and recognized in every subsequent frame until they leave the scene. It should also be able to do this for multiple users at once. I have experience with Viola-Jones detection and Fisherface recognition, but I've used Fisherface recognition only on a previously prepared learning set, and now I need something for any user that enters the scene. I am also interested in different solutions. I used OpenCV face detection for multiple faces and the Rekognition API (http://rekognition.com), pushed the faces, and retrained the dataset frequently. Lightweight on our side, but I am sure there are more robust solutions for this. Have you tried VideoSurveillance? Also known as the OpenCV blob tracker. It's a motion-based tracker with across-frames data association(1), and if you want to replace motion with face detection, you must adjust the code by replacing the foreground mask with detection responses. This approach is called track-by-detect in the literature. (1) "Appearance Models for Occlusion Handling", Andrew Senior et al.
Contact functionality for Mailings I am evaluating code in my Salesforce instance. I stumbled across some code which is on Contact. With that said, I am unsure of what MailingStreet, MailingCity, MailingPostalCode, MailingState and MailingCountry are... I found that these fields can be found in the Developer Console; however, what are they used for and how are they populated? for(Contact i : Trigger.New){ //If the old mailing street does not equal the new mailing street and the current mailing street is not null, then assign the mailing street to the address. //Else if the contact is updated and the address doesn't match the old address, then assign the address to the mailing street. if( (Trigger.oldMap.get(i.id).MailingStreet != i.MailingStreet) && i.MailingStreet != '') i.Address__c = i.MailingStreet; else if(Trigger.oldMap.get(i.id).Address__c != i.Address__c) i.MailingStreet = i.Address__c; if( (Trigger.oldMap.get(i.id).MailingCity != i.MailingCity) && i.MailingCity != '') i.City__c = i.MailingCity; else if (Trigger.oldMap.get(i.id).City__c != i.City__c) i.MailingCity = i.City__c; if( (Trigger.oldMap.get(i.id).MailingPostalCode != i.MailingPostalCode) && i.MailingPostalCode != '') i.Zip_Postal_Code__c = i.MailingPostalCode; else if (Trigger.oldMap.get(i.id).Zip_Postal_Code__c != i.Zip_Postal_Code__c) i.MailingPostalCode = i.Zip_Postal_Code__c; if( (Trigger.oldMap.get(i.id).MailingState != i.MailingState) && i.MailingState != '') i.State_Province__c = i.MailingState; else if (Trigger.oldMap.get(i.id).State_Province__c != i.State_Province__c) i.MailingState = i.State_Province__c; if( (Trigger.oldMap.get(i.id).MailingCountry != i.MailingCountry) && i.MailingCountry != '') i.Country__c = i.MailingCountry; else if (Trigger.oldMap.get(i.id).Country__c != i.Country__c) i.MailingCountry = i.Country__c; } } These are standard fields on Contact. They store address details and are part of the compound MailingAddress field. Different orgs will use them in different ways.
Oh, I didn't see that as a field on the page layout, and I didn't think to look since it was not a field on the object. Thanks.
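The trigger's intent is a two-way sync between each standard Mailing* field and a matching custom field, with the standard field winning when it changed and is non-empty. The per-pair rule can be sketched like this (Python, with field names as plain dict keys; this mirrors the Apex logic rather than replacing it):

```python
def sync_pair(old, new, std, custom):
    # If the standard field changed and is non-empty, push it to the
    # custom field; otherwise, if the custom field changed, push it back.
    if old[std] != new[std] and new[std] != "":
        new[custom] = new[std]
    elif old[custom] != new[custom]:
        new[std] = new[custom]

old = {"MailingCity": "Boston", "City__c": "Boston"}
new = {"MailingCity": "Denver", "City__c": "Boston"}   # user edited MailingCity
sync_pair(old, new, "MailingCity", "City__c")
print(new)   # {'MailingCity': 'Denver', 'City__c': 'Denver'}
```

The trigger simply applies this rule five times, once per address component.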
How to test React ErrorBoundary New to React, but not to testing applications. I'd like to make sure that every time a component throws an error, the ErrorBoundary message is displayed. If you don't know what I mean by ErrorBoundary, here is a link. I'm using Mocha + Chai + Enzyme. Let's say we need to test the React counter example using the following test configuration. Test Configuration // DOM import jsdom from 'jsdom'; const {JSDOM} = jsdom; const {document} = (new JSDOM('<!doctype html><html><body></body></html>')).window; global.document = document; global.window = document.defaultView; global.navigator = global.window.navigator; // Enzyme import { configure } from 'enzyme'; import Adapter from 'enzyme-adapter-react-16'; configure({ adapter: new Adapter() }); // Chai import chai from 'chai'; import chaiEnzyme from 'chai-enzyme'; chai.use(chaiEnzyme()); UPDATE 1 - Some later thoughts After reading this conversation about the best testing approach for connected components (which touches similar issues), I know I don't have to worry about componentDidCatch catching the error. React is tested enough, and that ensures that whenever an error is thrown it will be caught. Therefore there are only two tests: 1: Make sure ErrorBoundary displays the message if there's any error // error_boundary_test.js import React from 'react'; import { expect } from 'chai'; import { shallow } from 'enzyme'; import ErrorBoundary from './some/path/error_boundary'; describe('Error Boundary', ()=>{ it('generates an error message when an error is caught', ()=>{ const component = shallow(<ErrorBoundary />); component.setState({ error: 'error name', errorInfo: 'error info' }); expect(component).to.contain.text('Something went wrong.'); }); }); 2: Make sure the component is wrapped inside the ErrorBoundary (in the React counter example it is <App />, which is misleading; the idea is to do that on the closest parent component).
Notes: 1) it needs to be done on the parent component, 2) I'm assuming children are simple components, not containers, as those might need more config. Further thoughts: this test could be better written using parent instead of descendants... // error_boundary_test.js import React from 'react'; import { expect } from 'chai'; import { shallow } from 'enzyme'; import App from './some/path/app'; describe('App', ()=>{ it('wraps children in ErrorBoundary', ()=>{ const component = mount(<App />); expect(component).to.have.descendants(ErrorBoundary); }); To test an ErrorBoundary component using React Testing Library const Child = () => { throw new Error() } describe('Error Boundary', () => { it(`should render error boundary component when there is an error`, () => { const { getByText } = renderProviders( <ErrorBoundary> <Child /> </ErrorBoundary> ) const errorMessage = getByText('something went wrong') expect(errorMessage).toBeDefined() }) }) renderProviders import { render } from '@testing-library/react' const renderProviders = (ui: React.ReactElement) => render(ui, {}) It's interesting that you mention the "hasError state" but none of the code you posted is actually using it. What is renderProviders and why do you need it? This was my attempt without setting component state: ErrorBoundary: import React, { Component } from 'react'; import ErroredContentPresentation from './ErroredContentPresentation'; class ContentPresentationErrorBoundary extends Component { constructor(props) { super(props); this.state = { hasError: false }; } componentDidCatch(error, info) { this.setState({ hasError: true }); } render() { return this.state.hasError ?
<ErroredContentPresentation /> : this.props.children; } } export const withErrorBoundary = WrappedComponent => props => <ContentPresentationErrorBoundary> <WrappedComponent {...props}/> </ContentPresentationErrorBoundary>; And the test: it('Renders ErroredContentPresentation Fallback if error ', ()=>{ const wrappedComponent = props => { throw new Error('Errored!'); }; const component = withErrorBoundary( wrappedComponent )(props); expect(mount(component).html()).toEqual(shallow(<ErroredContentPresentation/>).html()); });
Console Application return value I am new to console applications. I need to pass two command-line arguments to the console application from a web application and get the returned result from the console app. Here is what I tried in the web app: Protected Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click Dim proc = New Process() With { _ .StartInfo = New ProcessStartInfo() With { _ .FileName = "C:\Users\Arun\Documents\visual studio 2012\Projects\ConsoleApplication1\ConsoleApplication1\bin\Debug\ConsoleApplication1.exe", _ .Arguments = TextBox1.Text & " " & TextBox2.Text, _ .UseShellExecute = False, _ .RedirectStandardOutput = True, _ .CreateNoWindow = True _ } _ } proc.Start() proc.WaitForExit() Response.Write(proc.ExitCode.ToString()) End Sub my console application code is Public Function Main(sArgs As String()) As Integer Return sArgs(0) End Function but I can't get the returned value from the console app. What is the problem? Can anyone help? You can't natively return two separate values; you are limited to a 32-bit signed integer (the process exit code). The only way I can think of doing this is if you have two numeric values that you can guarantee are less than 16 bits each; then you could combine these into one 32-bit value by bit-shifting one of them.
This code should get you started: Public Shared Function CombineValues(val1 As Int16, val2 As Int16) As Int32 Return val1 + (CInt(val2) << 16) End Function Public Shared Sub ExtractValues(code As Int32, ByRef val1 As Int16, ByRef val2 As Int16) val2 = CShort(code >> 16) val1 = CShort(code - (CInt(val2) << 16)) End Sub Usage (console): 'in your console app combine two int16 values into one Int32 to use as the exit code Dim exitCode As Int32 = CombineValues(100, 275) Debug.WriteLine(exitCode) 'Output: 18022500 Usage (Calling code): 'In the calling app split the exit code back into the original values Dim arg1 As Int16 Dim arg2 As Int16 ExtractValues(exitCode, arg1, arg2) Debug.WriteLine(arg1.ToString + "; " + arg2.ToString) 'Output: 100; 275 Actually my requirement is... I have a web form with textboxes (textbox1=100, textbox2=275) and a button. When the button is clicked, the two values pass to a console application as command-line arguments. The console app takes these two values in the Main function, the calculation is done there, and the output is returned to the web app; then the web app shows the result in a message box. This is my need. Then just combine my answer with the one given by @varocarbas - that is your solution.
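The same packing works in any language with integer bit operations; a quick check of the 18022500 figure from the answer (Python, using unsigned-style masking, which sidesteps the signed-shift subtleties of the VB version):

```python
def combine(val1, val2):
    # Pack two 16-bit values into one 32-bit exit code:
    # val1 in the low half, val2 in the high half.
    return (val1 & 0xFFFF) | ((val2 & 0xFFFF) << 16)

def extract(code):
    # Recover the two halves.
    return code & 0xFFFF, (code >> 16) & 0xFFFF

code = combine(100, 275)
print(code, extract(code))   # 18022500 (100, 275)
```

This only works when both values genuinely fit in 16 bits; anything larger silently collides with the other half.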
How many translation units in one module? Does a module with multiple source files (.cpp) have one or multiple translation units? My understanding is that every single source file (.cpp) will be its own translation unit unless it is included and #pragma onced (which I guess is bad practice), but I don't know how that is done in a modular program. If there's any difference, I am particularly interested in Visual Studio C++ development (post-C++20). To summarize (and simplify): A translation unit is a single preprocessed source file. It's the unit that the compiler itself is working with. You can then take multiple translation units to create a library, an executable, or a module. A module consists of one or more translation units. A translation unit that starts with a module declaration is termed a module unit, and if there are multiple module units in a program that have the same module name (ignoring any module partition) then they belong to the same module.
WPF application difference in behavior for touch versus mouse/stylus I have an application that shows the user a message through a messagebox. After the user clicks okay, it restores the focus to an ItemsControl above in the visual tree. The weird behavior I'm running into: using your finger, you have to press twice on any button after the message box is closed. The first finger press can be literally anywhere on the screen. It seems like a focus issue but we restored the focus already. Using the mouse or a stylus only requires 1 click for a button to work after a message box is closed. Has anyone run into a problem like this before? I don't have much experience with WPF. Here's the message box being shown and how the focus is set afterwards. It'll go into the first if statement. // Save the element with focus. UIElement uiElement = Keyboard.FocusedElement as UIElement; // show message box using (new DisposableCursor(Cursors.Arrow)) { result = window.Show(type, title, message, buttons); } // Check if element still has focus after displaying the message box. if (uiElement != null && uiElement != Keyboard.FocusedElement) { // Go up the visual tree to try to set the focus to a parent element. for (int i = 0; i < 100; i++) { uiElement = VisualTreeHelper.GetParent(uiElement) as UIElement; if (uiElement == null) { break; } if (uiElement.Focus()) { break; } } } Turns out that the focus was a red herring. There was no event handler for TouchDown, even though there were handlers for StylusUp and Click. As to why that caused the first finger press to not register, I don't know. From what I read, Touch events should be promoted to Click events, so it should have worked fine.
Multithreaded quicksort or mergesort How can I implement a concurrent quicksort or mergesort algorithm for Java? We've had issues on a 16-(virtual)-core Mac where only one core (!) was working using the default Java sorting algorithm, and it was, well, not good to see that very fine machine be completely underused. So we wrote our own (I wrote it) and we did indeed gain good speedups (I wrote a multithreaded quicksort, and due to its partitioning nature it parallelizes very well, but I could have written a mergesort too)... But my implementation only scales up to 4 threads, it's proprietary code, and I'd rather use one coming from a reputable source instead of using my re-invented wheel. The only one I found on the Web is an example of how not to write a multi-threaded quicksort in Java; it is busy-looping (which is really terrible) using a: while (helpRequested) { } http://broadcast.oreilly.com/2009/06/may-column-multithreaded-algor.html So in addition to losing one thread for no reason, it's making sure to kill performance by busy-looping in that while loop (which is mind-boggling). Hence my question: do you know of any correctly multithreaded quicksort or mergesort implementation in Java that comes from a reputable source? I put the emphasis on the fact that I know that the complexity stays O(n log n), but I'd still very much enjoy seeing all these cores start working instead of idling. Note that for other tasks, on that same 16-virtual-core Mac, I saw speedups of up to x7 by parallelizing the code (and I'm by no means an expert in concurrency). So even though the complexity stays O(n log n), I'd really appreciate a x7 or x8 or even x16 speedup. Ideally it would be configurable: you could pass a min/max number of threads you want to allow to your multithreaded sort. Do you really need a multithreaded version of quicksort?
If the number of threads you want to use is k, do a quick partition into k arrays (selecting k-1 pivots) and call whatever sort you need on each independently. @Moron: But wouldn't the independently sorted partitions have to be merged then? @Fabian: No, as you know that the resultant will be sorted. Isn't it the same way quicksort works, but with two partitions? @Moron: But if you move elements between the k partitions as in the normal quicksort, you're no longer sorting the partitions independently. @Fabian: One partition step. Followed by parallel sorts. Won't that save time? We could try and parallelize the partition; in fact, it is achievable to some extent. I would expect the individual sort time to be much more than the initial partition time, though. @Moron: If we sort the partitions in parallel, how do we avoid a multithreaded version of quicksort? I was assuming your initial comment suggested to avoid concurrent access to the data being sorted by sorting partitions independently (which does not work without swapping elements between them), but perhaps I got your initial comment wrong. I meant, you don't need any 'Multithreaded quicksort which works in parallel on the same array and is configurable based on number of threads'. I meant, you just need a quicksort which works on one thread on one array, with no multithreading in mind, i.e. any common implementation of quicksort will work. So the code will look like: 1) Partition. 2) Create threads. 3) Run quicksort on each thread on the relevant subarrays. @Moron: But say we do that, on an initial array partitioned into 3 subarrays like this: 6 4 | 3 2 | 2 1. After sorting the partitions independently we get 4 6 | 2 3 | 1 2, or am I missing something here? @Fabian: You probably misunderstood what I mean by partition. I said 'select k-1 pivots and partition based on those', similar to Quicksort. You do know what the partition step of quicksort is, right?
After the partition we have k arrays, such that all elements of array 1 < all elements of array 2 < ... < all elements of array k. @Moron: Oh, I think I now see what you mean! Partition without concurrency and then concurrently sort the partitions independently... Thanks for explaining :-) @Moron: we're already working with 16-core machines, today. There are also, today, 64- and 128-CPU machines. Tomorrow more than that. "Manually partitioning" into 128 threads is unlikely to be very convenient and unlikely to be very efficient. What I want is a really multithreaded quicksort (or mergesort), where as soon as the first partitioning is done subsequent partitionings are handled by different threads in a concurrent way, like explained in Doug Lea's fork/join framework. @WizardOfOdds: No matter what, you will have to do some kind of synched partition/merge (the split/compose step of the fork/join). If you can, e.g., do the partition first (parallel or not), then you can do the sort on the different partitions concurrently. I don't see how that is any different from "fork/join". The thread creation cost is a one-time cost (if you have a pool, for instance), which you will encounter in your concurrent quicksort too. It seems to me that fork/join is the same thing and you could use that to do what I suggest. IMO, it will be easier than a multithreaded qsort. Check my code answered in some other thread here: https://stackoverflow.com/questions/3466242/multithreaded-merge-sort/31276759#31276759 Give the fork/join framework by Doug Lea a try:

public class MergeSort extends RecursiveAction {
    final int[] numbers;
    final int startPos, endPos;
    final int[] result;

    private void merge(MergeSort left, MergeSort right) {
        int i = 0, leftPos = 0, rightPos = 0, leftSize = left.size(), rightSize = right.size();
        while (leftPos < leftSize && rightPos < rightSize)
            result[i++] = (left.result[leftPos] <= right.result[rightPos])
                    ? left.result[leftPos++] : right.result[rightPos++];
        while (leftPos < leftSize)
            result[i++] = left.result[leftPos++];
        while (rightPos < rightSize)
            result[i++] = right.result[rightPos++];
    }

    public int size() {
        return endPos - startPos;
    }

    protected void compute() {
        if (size() < SEQUENTIAL_THRESHOLD) {
            System.arraycopy(numbers, startPos, result, 0, size());
            Arrays.sort(result, 0, size());
        } else {
            int midpoint = size() / 2;
            MergeSort left = new MergeSort(numbers, startPos, startPos + midpoint);
            MergeSort right = new MergeSort(numbers, startPos + midpoint, endPos);
            coInvoke(left, right);
            merge(left, right);
        }
    }
}

(source: http://www.ibm.com/developerworks/java/library/j-jtp03048.html?S_TACT=105AGX01&S_CMP=LP) @dfa: +1, a wonderful paper that I didn't know about and a great article, excellent! Java 8 provides java.util.Arrays.parallelSort, which sorts arrays in parallel using the fork-join framework. The documentation provides some details about the current implementation (but these are non-normative notes): The sorting algorithm is a parallel sort-merge that breaks the array into sub-arrays that are themselves sorted and then merged. When the sub-array length reaches a minimum granularity, the sub-array is sorted using the appropriate Arrays.sort method. If the length of the specified array is less than the minimum granularity, then it is sorted using the appropriate Arrays.sort method. The algorithm requires a working space no greater than the size of the original array. The ForkJoin common pool is used to execute any parallel tasks. There does not seem to be a corresponding parallel sort method for lists (even though RandomAccess lists should play nice with sorting), so you'll need to use toArray, sort that array, and store the result back into the list. (I've asked a question about this here.) Sorry about this, but what you are asking for isn't possible. I believe someone else mentioned that sorting is IO bound and they are most likely correct.
The code from IBM by Doug Lea is a nice piece of work, but I believe it is intended mostly as an example of how to write code. If you notice, in his article he never posted the benchmarks for it, and instead posted benchmarks for other working code, such as calculating averages and finding the min and max in parallel. Here is what the benchmarks are if you use a generic Merge Sort, Quick Sort, Doug's Merge Sort using a fork/join pool, and one that I wrote up using a Quick Sort with a fork/join pool. You'll see that Merge Sort is the best for an N of 100 or less, Quick Sort for 1000 to 10000, and the Quick Sort using a fork/join pool beats the rest if you have 100000 and higher. These tests were of arrays of random numbers, run 30 times to create an average for each data point, on a quad core with about 2 gigs of RAM. And below I have the code for the Quick Sort. This mostly shows that unless you're trying to sort a very large array you should back away from trying to improve your code's sort algorithm, since the parallel ones run very slow on small N's.

N        Merge Sort   Quick Sort   Merge TP     Quick TP
10       7.51E-06     5.13E-05     1.87E-04     2.28E-04
100      1.34E-04     1.60E-04     6.41E-04     4.40E-04
1000     0.003286269  7.20E-04     0.003704411  0.002716065
10000    0.023988694  9.61E-04     0.014830678  0.003115251
100000   0.022994328  0.01949271   0.019474009  0.014046681
1000000  0.329776132  0.32528383   0.19581768   0.157845389

import jsr166y.ForkJoinPool;
import jsr166y.RecursiveAction;

// derived from
// http://www.cs.princeton.edu/introcs/42sort/QuickSort.java.html
// Copyright © 2007, Robert Sedgewick and Kevin Wayne.
// Modified for fork/join by me hastily.
public class QuickSort {
    Comparable array[];
    static int limiter = 10000;

    public QuickSort(Comparable array[]) {
        this.array = array;
    }

    public void sort(ForkJoinPool pool) {
        RecursiveAction start = new Partition(0, array.length - 1);
        pool.invoke(start);
    }

    class Partition extends RecursiveAction {
        int left;
        int right;

        Partition(int left, int right) {
            this.left = left;
            this.right = right;
        }

        public int size() {
            return right - left;
        }

        @SuppressWarnings("empty-statement")
        //void partitionTask(int left, int right) {
        protected void compute() {
            int i = left, j = right;
            Comparable tmp;
            Comparable pivot = array[(left + right) / 2];
            while (i <= j) {
                while (array[i].compareTo(pivot) < 0) { i++; }
                while (array[j].compareTo(pivot) > 0) { j--; }
                if (i <= j) {
                    tmp = array[i];
                    array[i] = array[j];
                    array[j] = tmp;
                    i++;
                    j--;
                }
            }
            Partition leftTask = null;
            Partition rightTask = null;
            if (left < i - 1) { leftTask = new Partition(left, i - 1); }
            if (i < right) { rightTask = new Partition(i, right); }
            if (size() > limiter) {
                if (leftTask != null && rightTask != null) {
                    invokeAll(leftTask, rightTask);
                } else if (leftTask != null) {
                    invokeAll(leftTask);
                } else if (rightTask != null) {
                    invokeAll(rightTask);
                }
            } else {
                if (leftTask != null) { leftTask.compute(); }
                if (rightTask != null) { rightTask.compute(); }
            }
        }
    }
}

It is possible (assuming a CPU-bound problem and enough cores/hw threads for the affinity) :-) (I corrected the down-vote). The reason it is possible is that the sort can and should take the current operation's "size" into account to decide if a parallel operation should actually occur. This is similar to switching to a "simple sort" near the leaves. The exact sizes at which the change-over should occur can be gathered through profiling and analysis. Just coded up the above MergeSort and performance was very poor.
The code block refers to "coInvoke(left, right);" but there was no reference to this, so I replaced it with invokeAll(left, right); Test code is:

MergeSort mysort = new MyMergeSort(array, 0, array.length);
ForkJoinPool threadPool = new ForkJoinPool();
threadPool.invoke(mysort);

but I had to stop it due to poor performance. I see that the article above is almost a year old, and maybe things have changed now. I have found the code in the alternative article to work: http://blog.quibb.org/2010/03/jsr-166-the-java-forkjoin-framework/ You probably did consider this, but it might help to look at the concrete problem from a higher level: for example, if you don't sort just one array or list, it might be much easier to sort individual collections concurrently using the traditional algorithm instead of trying to concurrently sort a single collection. I've been facing the multithreaded sort problem myself the last couple of days. As explained on this Caltech slide, the best you can do by simply multithreading each step of the divide-and-conquer approaches over the obvious number of threads (the number of divisions) is limited. I guess this is because while you can run 64 divisions on 64 threads using all 64 cores of your machine, the 4 divisions can only be run on 4 threads, the 2 on 2, and the 1 on 1, etc. So for many levels of the recursion your machine is under-utilized. A solution occurred to me last night which might be useful in my own work, so I'll post it here. Iff the first criterion of your sorting function is based on an integer of maximum size s, be it an actual integer or a char in a string, such that this integer or char fully defines the highest level of your sort, then I think there's a very fast (and easy) solution. Simply use that initial integer to divide your sorting problem into s smaller sorting problems, and sort those using the standard single-threaded sort algo of your choice. The division into s classes can be done in a single pass, I think.
There is no merging problem after doing the s independent sorts, because you already know that everything in class 1 sorts before class 2, and so on. Example: if you wish to do a sort based on strcmp(), then use the first char in your string to break your data into 256 classes, then sort each class on the next available thread until they're all done. This method fully utilizes all available cores until the problem is solved, and I think it's easy to implement. I haven't implemented it yet though, so there may be problems with it that I have yet to find. It clearly can't work for floating point sorts, and would be inefficient for large s. Its performance would also be heavily dependent on the entropy of the integer/char used to define the classes. This may be what Fabian Steeg was suggesting in fewer words, but I'm making it explicit that you can create multiple smaller sorts from a larger sort in some circumstances.

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class IQ1 {

    public static void main(String[] args) {
        // Get number of available processors
        int numberOfProcessors = Runtime.getRuntime().availableProcessors();
        System.out.println("Number of processors : " + numberOfProcessors);

        // Input data, it can be anything e.g. log records, file records etc
        long[][] input = new long[][]{
            { 5, 8, 9, 14, 20 },
            { 17, 56, 59, 80, 102 },
            { 2, 4, 7, 11, 15 },
            { 34, 37, 39, 45, 50 } };

        /* A special thread pool designed to work with fork-and-join task splitting
         * The pool size is going to be based on number of cores available */
        ForkJoinPool pool = new ForkJoinPool(numberOfProcessors);

        long[] result = pool.invoke(new Merger(input, 0, input.length));
        System.out.println(Arrays.toString(result));
    }

    /* Recursive task which returns the result
     * An instance of this will be used by the ForkJoinPool to start working on the problem
     * Each thread from the pool will call the compute and the problem size will reduce in each call */
    static class Merger extends RecursiveTask<long[]> {
        long[][] input;
        int low;
        int high;

        Merger(long[][] input, int low, int high) {
            this.input = input;
            this.low = low;
            this.high = high;
        }

        @Override
        protected long[] compute() {
            long[] result = merge();
            return result;
        }

        // Merge
        private long[] merge() {
            long[] result = new long[input.length * input[0].length];

            if (high - low < 2) {
                return input[0];
            }

            // base case
            if (high - low == 2) {
                long[] a = input[low];
                long[] b = input[high - 1];
                result = mergeTwoSortedArrays(a, b);
            } else {
                // divide the problem into smaller problems
                int mid = low + (high - low) / 2;

                Merger first = new Merger(input, low, mid);
                Merger second = new Merger(input, mid, high);

                first.fork();
                long[] secondResult = second.compute();
                long[] firstResult = first.join();

                result = mergeTwoSortedArrays(firstResult, secondResult);
            }
            return result;
        }

        // method to merge two sorted arrays
        private long[] mergeTwoSortedArrays(long[] a, long[] b) {
            long[] result = new long[a.length + b.length];
            int i = 0;
            int j = 0;
            int k = 0;

            while (i < a.length && j < b.length) {
                if (a[i] < b[j]) {
                    result[k] = a[i];
                    i++;
                } else {
                    result[k] = b[j];
                    j++;
                }
                k++;
            }
            while (i < a.length) {
                result[k] = a[i];
                i++;
                k++;
            }
            while (j < b.length) {
                result[k] = b[j];
                j++;
                k++;
            }
            return result;
        }
    }
}

The most
convenient multi-threading paradigm for a Merge Sort is the fork-join paradigm. This is provided from Java 8 and later. The following code demonstrates a Merge Sort using fork-join.

import java.util.*;
import java.util.concurrent.*;

public class MergeSort<N extends Comparable<N>> extends RecursiveTask<List<N>> {
    private List<N> elements;

    public MergeSort(List<N> elements) {
        this.elements = new ArrayList<>(elements);
    }

    @Override
    protected List<N> compute() {
        if (this.elements.size() <= 1)
            return this.elements;
        else {
            final int pivot = this.elements.size() / 2;
            MergeSort<N> leftTask = new MergeSort<N>(this.elements.subList(0, pivot));
            MergeSort<N> rightTask = new MergeSort<N>(this.elements.subList(pivot, this.elements.size()));

            leftTask.fork();
            rightTask.fork();

            List<N> left = leftTask.join();
            List<N> right = rightTask.join();

            return merge(left, right);
        }
    }

    private List<N> merge(List<N> left, List<N> right) {
        List<N> sorted = new ArrayList<>();
        while (!left.isEmpty() || !right.isEmpty()) {
            if (left.isEmpty())
                sorted.add(right.remove(0));
            else if (right.isEmpty())
                sorted.add(left.remove(0));
            else {
                if (left.get(0).compareTo(right.get(0)) < 0)
                    sorted.add(left.remove(0));
                else
                    sorted.add(right.remove(0));
            }
        }
        return sorted;
    }

    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();
        List<Integer> result = forkJoinPool.invoke(new MergeSort<Integer>(Arrays.asList(7, 2, 9, 10, 1)));
        System.out.println("result: " + result);
    }
}

While much less straightforward, the following variant of the code eliminates the excessive copying of the ArrayList. The initial unsorted list is only created once, and the calls to subList do not need to perform any copying themselves. Before, we would copy the array list each time the algorithm forked. Also, now, when merging lists, instead of creating a new list and copying values into it each time, we reuse the left list and insert our values into there. By avoiding the extra copy step we improve performance.
We use a LinkedList here because inserts are rather cheap compared to an ArrayList. We also eliminate the call to remove, which can be expensive on an ArrayList as well.

import java.util.*;
import java.util.concurrent.*;

public class MergeSort<N extends Comparable<N>> extends RecursiveTask<List<N>> {
    private List<N> elements;

    public MergeSort(List<N> elements) {
        this.elements = elements;
    }

    @Override
    protected List<N> compute() {
        if (this.elements.size() <= 1)
            return new LinkedList<>(this.elements);
        else {
            final int pivot = this.elements.size() / 2;
            MergeSort<N> leftTask = new MergeSort<N>(this.elements.subList(0, pivot));
            MergeSort<N> rightTask = new MergeSort<N>(this.elements.subList(pivot, this.elements.size()));

            leftTask.fork();
            rightTask.fork();

            List<N> left = leftTask.join();
            List<N> right = rightTask.join();

            return merge(left, right);
        }
    }

    private List<N> merge(List<N> left, List<N> right) {
        int leftIndex = 0;
        int rightIndex = 0;
        while (leftIndex < left.size() || rightIndex < right.size()) {
            if (leftIndex >= left.size())
                left.add(leftIndex++, right.get(rightIndex++));
            else if (rightIndex >= right.size())
                return left;
            else {
                if (left.get(leftIndex).compareTo(right.get(rightIndex)) < 0)
                    leftIndex++;
                else
                    left.add(leftIndex++, right.get(rightIndex++));
            }
        }
        return left;
    }

    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();
        List<Integer> result = forkJoinPool.invoke(new MergeSort<Integer>(Arrays.asList(7, 2, 9, -7, 777777, 10, 1)));
        System.out.println("result: " + result);
    }
}

We can also improve the code one step further by using iterators instead of calling get directly when performing the merge. The reason for this is that get on a LinkedList by index has poor time performance (linear), so by using an iterator we eliminate the slow-down caused by internally iterating the linked list on each get. The call to next on an iterator is constant time, as opposed to linear time for the call to get.
The following code is modified to use iterators instead.

import java.util.*;
import java.util.concurrent.*;

public class MergeSort<N extends Comparable<N>> extends RecursiveTask<List<N>> {
    private List<N> elements;

    public MergeSort(List<N> elements) {
        this.elements = elements;
    }

    @Override
    protected List<N> compute() {
        if (this.elements.size() <= 1)
            return new LinkedList<>(this.elements);
        else {
            final int pivot = this.elements.size() / 2;
            MergeSort<N> leftTask = new MergeSort<N>(this.elements.subList(0, pivot));
            MergeSort<N> rightTask = new MergeSort<N>(this.elements.subList(pivot, this.elements.size()));

            leftTask.fork();
            rightTask.fork();

            List<N> left = leftTask.join();
            List<N> right = rightTask.join();

            return merge(left, right);
        }
    }

    private List<N> merge(List<N> left, List<N> right) {
        ListIterator<N> leftIter = left.listIterator();
        ListIterator<N> rightIter = right.listIterator();

        while (leftIter.hasNext() || rightIter.hasNext()) {
            if (!leftIter.hasNext()) {
                leftIter.add(rightIter.next());
                rightIter.remove();
            } else if (!rightIter.hasNext())
                return left;
            else {
                N rightElement = rightIter.next();
                if (leftIter.next().compareTo(rightElement) < 0)
                    rightIter.previous();
                else {
                    leftIter.previous();
                    leftIter.add(rightElement);
                }
            }
        }
        return left;
    }

    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();
        List<Integer> result = forkJoinPool.invoke(new MergeSort<Integer>(Arrays.asList(7, 2, 9, -7, 777777, 10, 1)));
        System.out.println("result: " + result);
    }
}

Finally, the most complex version of the code: this iteration uses an entirely in-place operation. Only the initial ArrayList is created and no additional collections are ever created. As such the logic is particularly difficult to follow (so I saved it for last), but it should be as close to an ideal implementation as we can get.
import java.util.*;
import java.util.concurrent.*;

public class MergeSort<N extends Comparable<N>> extends RecursiveTask<List<N>> {
    private List<N> elements;

    public MergeSort(List<N> elements) {
        this.elements = elements;
    }

    @Override
    protected List<N> compute() {
        if (this.elements.size() <= 1)
            return this.elements;
        else {
            final int pivot = this.elements.size() / 2;
            MergeSort<N> leftTask = new MergeSort<N>(this.elements.subList(0, pivot));
            MergeSort<N> rightTask = new MergeSort<N>(this.elements.subList(pivot, this.elements.size()));

            leftTask.fork();
            rightTask.fork();

            List<N> left = leftTask.join();
            List<N> right = rightTask.join();

            merge(left, right);
            return this.elements;
        }
    }

    private void merge(List<N> left, List<N> right) {
        int leftIndex = 0;
        int rightIndex = 0;

        while (leftIndex < left.size()) {
            if (rightIndex == 0) {
                if (left.get(leftIndex).compareTo(right.get(rightIndex)) > 0) {
                    swap(left, leftIndex++, right, rightIndex++);
                } else {
                    leftIndex++;
                }
            } else {
                if (rightIndex >= right.size()) {
                    if (right.get(0).compareTo(left.get(left.size() - 1)) < 0)
                        merge(left, right);
                    else
                        return;
                } else if (right.get(0).compareTo(right.get(rightIndex)) < 0) {
                    swap(left, leftIndex++, right, 0);
                } else {
                    swap(left, leftIndex++, right, rightIndex++);
                }
            }
        }

        if (rightIndex < right.size() && rightIndex != 0)
            merge(right.subList(0, rightIndex), right.subList(rightIndex, right.size()));
    }

    private void swap(List<N> left, int leftIndex, List<N> right, int rightIndex) {
        left.set(leftIndex, right.set(rightIndex, left.get(leftIndex)));
    }

    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();
        List<Integer> result = forkJoinPool.invoke(new MergeSort<Integer>(new ArrayList<>(Arrays.asList(5, 9, 8, 7, 6, 1, 2, 3, 4))));
        System.out.println("result: " + result);
    }
}

Why do you think a parallel sort would help? I'd think most sorting is i/o bound, not processing.
Unless your compare does a lot of calculations, a speedup is unlikely.
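For anyone landing on this thread today, the library route mentioned in one of the answers is worth a concrete look before writing a custom RecursiveAction. The sketch below (array size and seed are arbitrary choices of mine, not from the thread) simply confirms that Arrays.sort and Arrays.parallelSort agree on the result while distributing the work differently:

```java
import java.util.Arrays;
import java.util.Random;

public class ParallelSortCheck {
    public static void main(String[] args) {
        // Identical pseudo-random input for both sorts (seed chosen arbitrarily).
        int[] sequential = new Random(42).ints(1_000_000).toArray();
        int[] parallel = sequential.clone();

        Arrays.sort(sequential);        // single-threaded dual-pivot quicksort
        Arrays.parallelSort(parallel);  // fork/join sort-merge over the common pool (Java 8+)

        // Different work distribution, same ordering.
        System.out.println(Arrays.equals(sequential, parallel)); // prints "true"
    }
}
```

Whether parallelSort actually wins depends on array size: below the internal minimum granularity it falls back to the sequential sort, which matches the benchmark observation above that parallel variants only pay off for large N.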
Area under curve and its volume as solid of revolution I would like to know if there is some more "elegant" way to write these 3 functions. Any tip or idea is welcome.

open System

module Calculus =
    let area f a b =
        let dx = 0.001
        seq { for x in a .. dx .. b -> (f x) * dx } |> Seq.sum
    let rotate f x =
        let y = f x
        y * y * Math.PI
    let volume f = area (rotate f)

[<EntryPoint>]
let main args =
    let f x = x * 2.0
    printfn "Running..."
    printfn "%f" (Calculus.area f 1.0 10.0)
    printfn "%f" (Calculus.volume f 1.0 10.0)
    0

Note I understand the method for integration I am using here is not the best around, but it is not the point here. Let's focus on its implementation. Looks good to me. Not much to review! One small possible improvement: f |> rotate |> area Here's how I would rewrite this: First, pipe everything that you can. It's easier to follow the linear transition of a |> b |> c rather than c (b a). Second, compose where possible. Compositions are powerful, and can allow you to abstract things away much more effectively. Third, order function parameters by most -> least generic. We see f a b, but F# sees let f = f a; f b, so make sure your parameters are ordered by least-significant first.

module Calculus =
    let area f a b =
        let dx = 0.001
        { a .. dx .. b } |> Seq.sumBy (f >> (*) dx)
    let rotate f x =
        let multX = x |> f |> (*)
        Math.PI |> multX |> multX
    let volume = rotate >> area

[<EntryPoint>]
let main args =
    let shape = (2.0 |> (*), 1.0, 10.0)
    printfn "%f" (shape |||> Calculus.area)
    printfn "%f" (shape |||> Calculus.volume)

This ends up being three lines longer, but it's clearer. It's completely obvious what is happening, and the piping makes it easier to follow. We replaced the custom sequence generator with a basic one, and used sumBy to make our intent clearer. We want to sum f(x) * dx. This also ends up one line shorter, and that's only because we rewrote volume as a pure composition, and pushed it to a one-line definition.
I recommend doing this for things that are 2-3 steps. (So even let volume f = f |> rotate |> area would be acceptable.) Instead of building the f in the main, we build the shape itself, and then pipe all three arguments of that shape to our area and volume methods. (Since the arguments are the same for each call.) This means our shape is reusable, and we could build a function:

let printShape s =
    printfn "%f" (s |||> area)
    printfn "%f" (s |||> volume)

let shapes =
    let f = 2.0 |> (*)
    [(f, 1.0, 10.0); (f, 2.0, 10.0)]

shapes |> List.iter printShape

Now we can build more test-cases pretty easily and functionally. Because printShape takes a single parameter of a (float -> float) * float * float, we can pass the entire shape to it and then pass each portion of the tuple as individual parameters. I really like your version of volume, but I find your version of rotate overly complicated. Anyway, thank you for showing me some different ways to think functionally.
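As a quick sanity check on either implementation, the test function f(x) = 2x on [1, 10] has closed forms that the Riemann sums with dx = 0.001 should closely approximate:

```latex
\text{area} = \int_1^{10} 2x\,dx = \left[x^2\right]_1^{10} = 99,
\qquad
\text{volume} = \pi \int_1^{10} (2x)^2\,dx
             = \frac{4\pi}{3}\left[x^3\right]_1^{10}
             = \frac{4\pi \cdot 999}{3} = 1332\pi \approx 4184.60
```

Any refactoring that still prints values near 99 and 4184.6 has preserved the numerics.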
Error in R: x must be numeric I try to write a program in R to "generate a random sample from any distribution using a function", but it shows "Error in hist.default(xbars) : 'x' must be numeric". My program is here:

sim.clt <- function(n, ran.func,..., simsize,...) {
    xbars<-vector()
    for(i in 1:simsize=simsize) {
        x<-function(ran.func)
        xbars[i]<-mean(x)
    }
    par(mfrow=c(2,1))
    hist(xbars)
    qqnorm(xbars)
    return(xbars)
}

sim.out<-sim.clt(n=20,ran.func="rexp",simsize=5000)
shapiro.test(sim.out)

# I am new to R programming, so I can't figure out how to solve the problem. Thanks...

There are lots of things wrong here. for(i in 1:simsize=simsize) should be throwing an error:

> for(i in 1:simsize=simsize) { print(i)}
Error: unexpected '=' in "for(i in 1:simsize="

Better is for(i in seq_len(simsize)). Then x <- function(ran.func) is not doing what you thought it was; it is returning a function with xbars[i]<-mean(x) as its body, as in:

> x <- function(ran.func)
+   xbars[i]<-mean(x)
> x
function(ran.func)
xbars[i]<-mean(x)
> is.function(x)
[1] TRUE

I think you wanted to call ran.func, so you may need

FUN <- match.fun(ran.func)
x <- FUN()

But that will fail because you don't seem to be passing any argument for ran.func to work, even just n in the example using rexp. The error message stems from this last point. You defined xbars to be the empty vector(), which by default created an empty logical vector:

> xbars <- vector()
> xbars
logical(0)
> is.numeric(xbars)
[1] FALSE

Now, this wouldn't have been a problem if you hadn't made the error in defining x (recall xbars[i]<-mean(x) is now in the body of the function x and has never been explicitly called), which means xbars remains an empty logical vector. As that is not numeric, hist throws the error you are seeing. Another error is that you can't use ... in the function definition twice. Are you trying to have the first ... contain arguments to pass to ran.func and the second ... to be for something else? You just can't do that in R.
Is this what you wanted?

sim.clt <- function(n, ran.func, simsize, ...) {
    ## ... passed to ran.func for other parameters of distribution
    xbars <- numeric(simsize)
    for(i in seq_len(simsize)) {
        FUN <- match.fun(ran.func)
        x <- FUN(n = n, ...)
        xbars[i] <- mean(x)
    }
    ## plot, restoring graphics parameters on exit
    op <- par(mfrow = c(2,1))
    on.exit(par(op))
    hist(xbars)
    qqnorm(xbars)
    xbars
}

> sim.out <- sim.clt(n=20, ran.func="rexp", simsize=5000)
> shapiro.test(sim.out)

        Shapiro-Wilk normality test

data:  sim.out
W = 0.9867, p-value < 2.2e-16

Thanks a lot, that's exactly what I wanted, and your explanation helps me a lot. Can you give me a suggestion: if I want to modify the function further to study the sampling distribution not just of the mean, but of any statistic, by using a second function, what do I have to do? @moniruzzaman if you have another question, feel free to post it as a separate question. Remember to give a reproducible example and show your code where it fails.
mac term + tmux, send left ctrl + cmd + up/down arrow keys I recently started using a Mac at work and I migrated over my dotfiles, which work great on a Linux setup. I have my tmux shortcuts to resize the current pane, which work fine for the left/right arrows, but the up/down arrows are not working. Seems like macOS is swallowing the commands. How can I get all of the following to work?

# tmux.conf
bind -n C-Left resize-pane -L 10
bind -n C-Right resize-pane -R 10
bind -n C-Down resize-pane -D 5
bind -n C-Up resize-pane -U 5

Cheers Ironically, C-up and C-down work fine for me in iTerm2 under macOS, but C-left and C-right conflict with the system-level keyboard shortcut for switching spaces (i.e., virtual desktops). (I get different behavior between iTerm2 and Terminal, which I'm not really inclined to track down at the moment, since I never use Terminal. Suffice to say, your terminal emulator may also affect how the Control keys are interpreted.) I read somewhere that installing iTerm2 might fix this, but it looks like it has its own issues.
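One workaround (an assumption on my part, not something confirmed in the thread): macOS maps Control plus the arrow keys to Mission Control and Spaces by default, so either remove those shortcuts under System Preferences > Keyboard > Shortcuts > Mission Control, or choose bindings the terminal can actually deliver, e.g. prefix-based ones:

```
# tmux.conf -- alternative bindings that avoid the macOS Control+arrow shortcuts
# -r lets the key repeat within repeat-time after a single prefix press
bind -r Left  resize-pane -L 10
bind -r Right resize-pane -R 10
bind -r Down  resize-pane -D 5
bind -r Up    resize-pane -U 5
```

With -r you press the prefix once and then tap the arrow repeatedly to keep resizing.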
Keyboard and Bluetooth killed when establishing FTDI USB Serial connection Okay, when I establish a connection using LabVIEW or Python to my Arduino Duemilanove (ATMega328), either my Bluetooth or keyboard is killed. I am simply trying to establish a connection to /dev/cu.usbserial-A9007UZh (or tty.usbserial-A9007UZh, but that seems to kill the keyboard or Bluetooth even after Terminal is shut down when using Python). I am on a MacBook Pro, and I have found that the keyboard is on the same USB hub as one of the two USB slots, and the Bluetooth is on the same USB hub as the other of the two USB slots, which explains the association between those two devices - but what is causing the loss of the other devices? When I use Arduino (programming IDE) or Cornflake to open a serial connection there are no errors -- and everything works as expected. This just seems to happen with LabVIEW and Python. I am on Snow Leopard 10.6.2, have the latest FTDI USB drivers, and am running in 32-bit mode. Hi, I'm seeing the same issue - did you find a solution to this? So, it's been four years and three MacBook Pros since the original post, and the problem has never left my side. I feel like something is overflowing when it happens; it seems to happen more often when I have some device programmed to transmit in an infinite loop without any delay taking place in-between iterations. The problem has occurred with every version of OS X between 10.6 and 10.10.2, and on every MacBook Pro (the model in this post was an early-2010 I believe, then mid-2011, currently early-2013). It is fascinating how common yet silent the issue is. Adding is hard. After nearly three years of study, I've concluded that it was, in fact, five years since the original post when I left that last comment. Not four years. Based on correspondence with FTDI, there seems to be a known problem with OS X drivers that can result in "complete system crash" from which "there is no way to recover".
They recommend architecting software to use a dedicated thread for serial communications to ensure input data (aka, data transmitted by the device and received by OS X) is read promptly. They mention that new "certified" (signed?) drivers should be available for OS X in Spring 2015, but don't mention if this particular problem will be solved in this new release. Anecdotally, I have not experienced Bluetooth/keyboard crashes using OS X 10.10.2 with the built-in AppleUSBFTDI.kext drivers.
Why is VPN not a protocol? Is VPN a layer 3 concept? says "VPN is not a protocol". Is VPN an application instead? Why is VPN not a protocol? How can I tell if something is a protocol or not? Thanks. A protocol is a set of rules for how to accomplish something, and network protocols are sets of rules for how to communicate on a network. VPN is a concept, and a VPN uses protocols to accomplish the concept, but VPN itself is not a protocol. For example, a VPN could use SSH (a protocol), which uses TCP (a protocol), which uses IPv4 (a protocol), etc., but the VPN concept is not a protocol. Think about it this way: there are many kinds of VPNs and tunnels, and each uses one or more specific protocols to accomplish the VPN, but the VPNs themselves are not protocols.
They are defined by their purpose: "provide the private network functionality of dedicated links in a virtual way". There is no document you can read and start coding. But if you want GRE as your protocol, you can read RFC 1701 and start programming.
Knowing one root of a quadratic trinomial, can I determine some necessary (or sufficient) condition regarding the second root, in terms of the first one? Suppose I am given this equation: $x^2 - 12x - 28 = 0$. Using the rational root theorem, I find that the possible rational roots are (if I am correct): $$1/1, 2/1, 7/1, 14/1, 28/1$$ or their additive inverses. I try the number $(-2)$ and I observe it works. At this stage, knowing one solution of the quadratic trinomial, is there any rule allowing me to determine the second root in terms of the first, or, at least, a rule allowing me to eliminate some candidates in the list above? I mean, is there some general law stating a relation between the roots of a quadratic trinomial? Sum of roots is $-(-12)$, and their product is $-28$ (Vieta's formulas). Also you can divide $x^2-12x-28$ by $x+2$. Why is the sum the opposite of coefficient $b$? See https://en.wikipedia.org/wiki/Vieta%27s_formulas Thanks for this link! Other possible rational roots, although they don't work in this case, are 4/1, -4/1 Why is something in mathematics true? Because there’s a proof. The reason why Vieta's law holds is very simple: just note $$(x-a)(x-b)=x^2-(a+b)x+ab$$ $\ a\ $ and $\ b\ $ are obviously the roots and the coefficients are $\ -(a+b)\ $ and $\ ab\ $ @Peter. Thanks for this explanation.
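Vieta's relations are easy to verify numerically; here is a minimal sketch in plain Python for the example $x^2 - 12x - 28 = 0$, showing how the second root follows from the first via the product of roots:

```python
# Vieta's formulas for a monic quadratic x^2 + bx + c:
#   sum of roots = -b, product of roots = c.
b, c = -12, -28

# Knowing one root (here -2), the other follows from the product relation:
known_root = -2
other_root = c / known_root        # -28 / -2 = 14

# Both Vieta relations hold for the pair (-2, 14):
assert known_root + other_root == -b   # -2 + 14 == 12
assert known_root * other_root == c    # -2 * 14 == -28
print(other_root)  # 14.0
```

This also shows why the product relation prunes the candidate list: the second root must be $c$ divided by the known root.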
DB::raw('MAX(...)') gives only values from 1 to 9 This function gives only values from 1 to 9 and I don't know why. I've got a column proformnumber with numbers from 1 to 10 but this function gives 9. When I deleted some rows it worked correctly for numbers less than 10. $autoyear = date('Y'); $automonth = date('m'); $autonumber = DB::table("proforms as proforms") ->select(\DB::raw('MAX(proformnumber) as proformnumber')) ->where('automonth', '=', $automonth) ->where('autoyear', '=', $autoyear) ->get(); This is my db. This is the rest of the function code. I am using it to count the proform number $autonumber[0]->proformnumber++; $number = $autonumber[0]->proformnumber; $number = "$number/$automonth/$autoyear/proforma"; post the records you have on your db please Ok. I edited the post and pasted a screen from the db. Your proformnumber values are left aligned on the screenshot, indicating those are strings and not numbers. This would happen if proformnumber was stored as a string instead of a number. You can get the numeric maximum by converting. I think the simplest method in MySQL is to use implicit conversion by adding 0. MAX(proformnumber + 0) Let this be a lesson in choosing the right data types.
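The string-vs-number behaviour is easy to reproduce outside MySQL. The sketch below uses Python's built-in sqlite3, which applies a similar implicit numeric coercion for `+ 0`; the table and column names mirror the question, but the data is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE proforms (proformnumber TEXT)")
conn.executemany("INSERT INTO proforms VALUES (?)",
                 [(str(n),) for n in range(1, 11)])   # '1' .. '10' stored as strings

# Lexicographic comparison on strings: '9' > '10', so MAX returns '9'.
(wrong,) = conn.execute("SELECT MAX(proformnumber) FROM proforms").fetchone()

# Adding 0 forces a numeric comparison, giving the intended result.
(right,) = conn.execute("SELECT MAX(proformnumber + 0) FROM proforms").fetchone()

print(repr(wrong), right)  # '9' 10
```

The cleaner long-term fix, as the answer says, is storing the column as a numeric type in the first place.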
Google App Engine and SAML (Okta) We're trying to setup a web app (django) in Google App Engine connected via SAML to our idP, Okta. It has to be done as a Custom Flexible App because of a binary requirement, making it basically a container deployment. Running it locally with gunicorn (including SSL configuration) works flawlessly, but deploying it to Google, not that much. The problem is that the idP to sP redirection fails with Traceback: File "/env/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner 34. response = get_response(request) File "/env/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response 115. response = self.process_exception_by_middleware(e, request) File "/env/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response 113. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/env/lib/python3.6/site-packages/django/views/decorators/csrf.py" in wrapped_view 54. return view_func(*args, **kwargs) File "/env/lib/python3.6/site-packages/django_saml2_auth/views.py" in acs 159. resp, entity.BINDING_HTTP_POST) File "/env/lib/python3.6/site-packages/saml2/client_base.py" in parse_authn_request_response 714. binding, **kwargs) File "/env/lib/python3.6/site-packages/saml2/entity.py" in _parse_response 1213. response.require_signature = require_signature Exception Type: AttributeError at /sso/acs/ Exception Value: 'NoneType' object has no attribute 'require_signature' The current theory is that the Nginx proxy in front of the app is somehow messing with the POST request and breaking the SAML assertion but such settings or its documentation are yet to be found. Some fresh ideas would be greatly appreciated. The problem was simple enough: the reverse proxy configuration changes the HTTP request (HTTPS scheme to HTTP) which makes the Okta plugin (https://github.com/fangli/django-saml2-auth) fail with the obscure error. 
Adding the ASSERTION_URL entry to the SAML2_AUTH dict in your settings.py Django file does the trick. As suggested above, adding the ASSERTION_URL entry to the SAML2_AUTH dict in your settings.py Django file will solve the Exception Value: 'NoneType' object has no attribute 'require_signature' error. I would like to add some details that helped me solve it. In my case, using a dockerized Django running with AWS Fargate integrated with Okta, the config looks like: 'ASSERTION_URL' : f"https://{ENV_VAR('YOUR_APP_URL')}" if "YOUR_APP_URL" in os.environ else 'http://<IP_ADDRESS>:8000', Using the condition to have it running on localhost for testing too. Also, please note the preceding https protocol placed there for clarity of what we're trying to solve. In the absence of this setting, besides the error mentioned above, another one from the logs output is probably more helpful to understand why the 'ASSERTION_URL' config is needed: "https://your-domain/saml2_auth/acs/ not in ['http://your-domain/saml2_auth/acs/']" Of course, after reading this topic I read the documentation again at https://github.com/fangli/django-saml2-auth/ where it says: "By default, django-saml2-auth will validate the SAML response's Service Provider address against the actual HTTP request's host and scheme. If this value is set, it will validate against ASSERTION_URL instead - perfect for when django running behind a reverse proxy."
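A minimal sketch of what that fragment of settings.py might look like (the environment-variable name APP_HOST and the fallback port are illustrative assumptions, not values from the thread):

```python
import os

# Hypothetical environment variable holding the public hostname.
app_host = os.environ.get("APP_HOST")

SAML2_AUTH = {
    # Pin the Service Provider address to the public HTTPS URL. Behind a
    # TLS-terminating proxy, Django otherwise sees plain HTTP and the
    # SAML response validation fails with the 'require_signature' error.
    "ASSERTION_URL": f"https://{app_host}" if app_host else "http://localhost:8000",
    # ... the rest of the django-saml2-auth configuration goes here
}

print(SAML2_AUTH["ASSERTION_URL"])
```

The conditional keeps local HTTP testing working while production gets the pinned HTTPS address.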
Answered: With what probability do $4$ points placed uniformly randomly in the unit square of $\mathbb{R}^2$ form a convex/concave quadrilateral? I have this problem that I've struggled with for a while. If you place $4$ points randomly into a unit square (uniform distribution in both $x$ and $y$), with what probability will this shape be convex if the $4$ points are connected in some order? Equivalently, with what probability will there be a point inside the triangle with the largest area with vertices at the other $3$ points? In particular I am interested in the answer for when this area of support is $\mathbb{R}^2$ and is uniform. I ran a simulation and found that on a unit square the answer is about $71\%$ concave. On a unit circle picking polar co-ordinates r and theta from uniform random distributions results in a probability of concavity of $68\%$. When the distribution for r is altered so that each point in the circle is equally likely then this falls to $51\%$. Any advice or links for a possible answer or whether this is even possible would be appreciated. EDIT: It turns out this problem is the same as Sylvester's 4 point problem. Alas I am 150 years too late. Thanks to all who helped. Only one person gave an answer, not quite correct but I award the bounty to them anyway for their efforts. Also if you know how to fix my title then I'd appreciate that too. I'm not sure that your "equivalently" is really an equivalent probability. Perhaps the right question is "what's the probability that the convex hull is a quadrilateral?" The number 71% makes me think of $\frac{\sqrt{2}}{2}$; the last case (uniform points on a disk) giving 51% sure makes me think it's 0.5. I thought so too. But I ran over 100,000 simulations so I know it's not 0.5. It's slightly greater. Good point about the root 2 though. This is pretty equivalent to "what is the expected triangle area formed by 3 uniformly random points inside a unit square".
I can't think how to integrate an absolute value (Abs[(ax - bx) (ay - cy) - (ax - cx) (ay - by)]/2), though. Heron seems relevant here. For triangle sides of length a, b, c and semiperimeter s, area squared is s(s-a)(s-b)(s-c). Should be easier to integrate to find area in expectation. The thing about calculating the area from 3 points is that the 4th point can either be inside that triangle, outside that triangle making a convex quadrilateral, or outside that triangle forming a concave quadrilateral, so it's not really about the area per se. I have found a partial solution here. It turns out the unit circle case is called Sylvester's 4 point problem. The values in the link confirm my simulation. The puzzle is still open for the infinite plane problem but perhaps this helps. @stuartstevenson So then you'll have to calculate this area and then integrate. Exactly. But it's not as easy as it seems. I'm 99% sure this has an answer somewhere. Have you read this? http://mathworld.wolfram.com/SylvestersFour-PointProblem.html Yes, as you can see, a few points up when I posted the same link :p "In particular I am interested in the answer for when this area of support is $\mathbb{R}^2$ and is uniform" - What do you mean? There is no uniform distribution on $\mathbb{R}^2$. Perhaps as a limiting case. Something like a normal distribution and then make the tails heavier and heavier as a limit? The difficulty is in defining such things. @stuartstevenson You can choose any probability distribution on $\mathbb{R}^2$, for example $x$ and $y$ are independently normally distributed. Or the angle with the $x$-axis is uniformly distributed and the distance to the origin has the half-normal distribution (or any other distribution on $\mathbb{R}^+$). There is no obvious choice. It depends on the problem you're trying to solve. Well what I want is so that every point is as likely as every other.
I know that defining the uniform probability density over the infinite plane sounds impossible but if you think about it, the chances that a point lands in 1 particular spot for the uniform distribution over a finite space is zero, and yet it manages to land somewhere. If anyone can give me some exact solution to the unit square or circle then I'll accept that but the infinite plane would be ideal. The problem with the uniform density on the plane is not that it sounds impossible, but that it provably is impossible, see this question. I fail to see how your problem is different than Sylvester's Four Point Problem for the unit square, which is already solved on the Mathworld page. @orlp It is the same problem, but the Mathworld page doesn't give a proof. Maybe there is a proof somewhere online, but I didn't find it after a quick search. I see now. Thank you very much. @Paul and stuart. I believe that the concern raised by this picture does not apply to Sylvester's 4 point problem. After all, a convex quadrilateral $\iff$ the convex hull contains 4 points. Perhaps Efron 1965 is of help. Pages 5 and 6 look interesting. Take $N=4$ and subtract $3$ from the expected number of vertices to get your probability. Here is a different approach to the problem. Consider a random triangle illustrated below. Regions are labelled near ($N_i$) and far ($F_i$) from point $i$. $T$ is the triangle. If the 4th point falls in regions $F_1$, $F_2$, or $F_3$ a convex quadrilateral will result. The probability that 4 points produce a convex shape is equivalent to the probability that the 4th point will be in one of those regions. That is, $p=E[F_1+F_2+F_3]$ where $F_i$ is the random variable expressing the area of that region. To simplify the calculations, consider the shaded region labelled $A_1$ and the counterparts $A_2$ and $A_3$. 
$$F_1+N_2+N_3=A_1$$ $$N_1+F_2+F_3+T=1-A_1$$ Combining these one finds, $$F_1+F_2+F_3 = 2 - 2T - A_1 - A_2 - A_3$$ and therefore, $$p=E[F_1+F_2+F_3]=2 - 2E[T] - 3E[A]$$ Fortunately others have worked out $E[T]={11\over144}$ To work out $E[A]$, I considered separately the case where the unit square is bisected across opposite sides (as is the case for $A_1$). You can show that this occurs with probability ${2\over3}$, given two uniform random points. The smallest of the two areas $A_\min$ follows the distribution $$f(a_\min)=4a_\min$$ for $0<a_\min<{1\over2}$. Given the two points $2$ and $3$, point $1$ will fall in the smaller of the two areas with probability $a_\min$, in which case $A_1=1-a_\min$. $$ E[A_{opp}]=16\int_0^{1\over2}a_\min^2(1-a_\min)\,da_\min = {5\over12} $$ For the case where the unit square is cut across a corner (as is the case for $A_2$), $A_\min={1\over2}X_0Y_0$ where $X_0$ and $Y_0$ are independent random variables with the identical distribution, $f(x)=2x$ for $0<x<1$. This gives the distribution for $A_\min$, $$f(a_\min)=-16 a_\min \log 2a_\min$$ for $0<a_\min<{1\over2}$. $$ E[A_{corner}]=-32\int_0^{1\over2}a_\min^2(1-a_\min)\log 2a_\min\,da_\min = {23\over72} $$ And so, overall $$E[A] = P(A_{opp})E[A_{opp}]+P(A_{corner})E[A_{corner}]={2\over3}{5\over12}+{1\over3}{23\over72} = {83\over216}$$ Finally, $$ p = 2 - 2{11\over144} - 3{83\over216}={25\over36}=\left({5\over6}\right)^2=0.69\bar{4} $$ which is close to your simulation result.
Normalization included due to unit square, and $$p(a)=\frac{1}{2} \int_0^1 dx_1 \int_0^1 dx_2 \int_0^1 dx_3 \int_0^1 dy_1 \int_0^1 dy_2 \int_0^1 dy_3 \delta \left( a - \sqrt{\left( (x_2 - x_1)^2 (y_3 - y_1)^2 + (y_2 - y_1)^2 (x_3 - x_1)^2 \right) } \right)$$ an integral which might be difficult to evaluate in closed form, where $\delta(x)$ is the Dirac delta function. The $p(a)$ probability density function for the area of the triangle formed by three uniformly random points in the unit square seems to fit quite well to the beta distribution with $\alpha = 1.18$ and $\beta = 5.1$ and an upper bound of $\frac{1}{2}$ The percentage of fourth points that fall inside the triangle formed by the first three is then the expectation of the area $E\left[a\right]=\frac{\alpha / 2}{\alpha+\beta}$ or ~10%, corresponding to a concave fraction of 90%. Seems off ... Also looked at the distribution of the maximum area of the 4 triangles formed by a set of 4 points each picked from $(U[0,1],U[0,1])$ Mean moves from 9% to 15% Can you evaluate the integral for me? Not in closed form, no. I appreciate your effort. I think what you have is an approximation to Sylvester's 4 point problem which has a probability of $\frac{11}{144}$. Then considering that I don't mind which point is not part of the convex hull, we multiply by 4 for the 4 possible points on the inside. Unfortunately, this is not the answer to the question as there are additional spaces outside the triangle formed by the first 3 points that would also give a convex hull with 3 points. A good effort and the current front runner for the bounty as we now know that $\frac{11}{36}$ is a lower bound. Perhaps find the expected area of the largest triangle formed from randomly placing 4 points? Appended the amax distribution to the answer. Looks good. I think if you multiply that by 4 then you get Sylvester's 4 point problem solution discussed above. Thanks for your help.
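The closed form $p = 25/36 \approx 0.694$ derived in the answer is easy to sanity-check with a Monte Carlo sketch in plain Python. Here "convex" is taken to mean that no point lies inside the triangle formed by the other three, i.e. all four points are vertices of the convex hull:

```python
import random

def inside(p, a, b, c):
    """True if point p lies strictly inside triangle abc (sign of cross products)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (d1 > 0) == (d2 > 0) == (d3 > 0)   # all same orientation

random.seed(1)
trials = 200_000
convex = 0
for _ in range(trials):
    pts = [(random.random(), random.random()) for _ in range(4)]
    # Convex quadrilateral iff no point falls inside the other three's triangle.
    if not any(inside(pts[i], *[p for j, p in enumerate(pts) if j != i])
               for i in range(4)):
        convex += 1

print(convex / trials)  # ~0.694, matching 25/36
```

With 200,000 trials the standard error is about 0.001, so the estimate lands comfortably near $25/36$.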
Prove that Petersen's graph is non-planar using Euler's formula Prove that Petersen's graph is non-planar using Euler's formula. I know that $n - m + f = 2$. But should I count $f$ and prove that the sum does not equal two, or solve to get $f = 7$ and argue that it is impossible? We know the Petersen graph has $15$ edges and $10$ vertices. In a planar graph, $V+F-E=2$. In Petersen, that would be $10+F-15 = 2$, so it would have $7$ faces in its planar embedding. The shortest cycle in Petersen has length $5$, so every face would need to be a pentagon, hexagon, or larger. $7$ faces of at least $5$ edges each give at least $35$ edge-face incidences. Each edge can border at most two faces, so at least $\lceil 35/2 \rceil = 18$ edges would be needed, but there are only $15$: a contradiction. Is that a count of edges or faces? If you can use the definition of a planar graph, you can "contract" edges to get a $K_5$ or $K_{3,3}$ minor, and that proves that Petersen's graph is non-planar.
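The face-counting argument can be restated as quick arithmetic; a sketch in plain Python that just replays the inequality:

```python
# Counting argument: Euler's formula plus the girth bound.
V, E = 10, 15   # Petersen graph: 10 vertices, 15 edges
girth = 5       # its shortest cycle has length 5

# If Petersen were planar and connected, Euler's formula V - E + F = 2
# would force the number of faces:
F = 2 - V + E
print(F)  # 7

# Every face is bounded by at least `girth` edges, and every edge borders
# at most two faces, so a planar embedding would require 2E >= girth * F.
print(2 * E, girth * F)  # 30 35 -- the inequality fails, hence non-planar
assert 2 * E < girth * F
```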
How can I edit a stored procedure in SQL*Plus so that I can correct it? create or replace procedure minvalue(x in number,y in number,z in number) as begin if x< y then z := x; else z:= y; end if; end; / compilation error. In this code, x and y are parameters which take values when the user runs this procedure, and z returns the answer. This code finds the minimum of x and y and stores the minimum value in z. I created this procedure at the SQL> prompt. Now how can I put this in the buffer again so that I can modify/enhance it? I don't know how to open this procedure through a SQL command. Please help me. In SQL*Plus the ed command will open your last statement in the default editor (unless you've defined something else in your login.sql script). It's good practice to write code as scripts in a decent text editor or IDE and run those scripts at the SQL*Plus command line. So, the actual error is this: create or replace minvalue That should be create or replace procedure minvalue. Once you clear the ORA-00922 missing or invalid option exception the next issue is: c := x; You haven't declared a variable c so this will hurl an ORA-00904 invalid identifier exception. Compile the PL/SQL code. Use SHOW ERROR to validate. If you see any compilation errors, use ed to modify the code. Save the afiedt.buf file. You will see the modified code loaded; just use / to recompile the code. For example, SQL> set serveroutput on; SQL> SQL> BEGIN 2 NULL; 3 END; 4 / PL/SQL procedure successfully completed. SQL> SHOW ERROR No errors. SQL> ed Wrote file afiedt.buf 1 BEGIN 2 DBMS_OUTPUT.pUT_LINE('code modified'); 3* END; SQL> / code modified PL/SQL procedure successfully completed. SQL> You just have to execute your statement again; it does as it says: create or replace the stored procedure. If you want to know which errors occurred, enter show errors after executing. PS: You don't use parameter z, so why is it there?
You'll need a function with a return value, not a procedure, if you want to return something.
Ajax refreshing page in MVC I'm using Ajax to pass some data to a Controller and save it to the database, and it works; the issue is that it refreshes the page with every POST and I need to prevent that. Ajax: function AddComment(commet, auto) { $.ajax({ type: "POST", url: '/bff/SaveComment', data: { id: idParte, commenta: commet, autoriza: auto }, dataType: 'json', success: function (correct) { $("#win1").show().kendoWindow({ width: "300px", height: "100px", modal: true, title: "Added" }); }, error: function(inc) { } }); } Controller: [HttpPost] public JsonResult SaveComment(int id, string commenta, string autoriza) { // Some logic here return Json(""); } I tried this way with correct.preventDefault(); but it didn't work. Is there a way to do it? EDITED: This is my HTML: <form role="form"> <div class="form-group"> @Html.Label("Commentario:") <textarea id="Comment" style="resize:none;" class="form-control"></textarea> </div> <div class="form-group"> <input type="submit" value="Guardar" onclick="AddComment(Comment.value,'Comentario')" /> </div> </form> EDITED 2: Fixed by changing type="submit" to type="button". Thanks to Hasta Pasta How is this method called from the user action? Is it bound in a jQuery click? JavaScript within the href? This is the place where the page behavior is being interrupted. Could you explain the code flow? When is the AddComment function called? Also add this after url in the ajax call; it is true by default anyhow, but worth a try: async : true, It's called from a button that reads a textarea: @JoelEtherton Add preventDefault(); to your click handler, not your ajax function. It should be something like btn.on("click", function(e){ e.preventDefault(); addComment(comment,auto); }); Remove type="submit" and make it type="button". Submit will try to post the form to the controller and hence refresh the page. Refer to this to understand the difference with the Submit action. It will help you resolve your issue as well.
http://stackoverflow.com/questions/27759380/how-to-stop-refreshing-page-after-ajax-call Also make sure you don't have any ajaxSetup done on the application. Hasta Pasta is right; replacing submit by button will be an easy fix. Worked with button instead of submit. Awesome!!! Thanks I think you should put return false; after the ajax request function AddComment(commet, auto) { $.ajax({ type: "POST", url: '/bff/SaveComment', data: { id: idParte, commenta: commet, autoriza: auto }, dataType: 'json', success: function (correct) { $("#win1").show().kendoWindow({ width: "300px", height: "100px", modal: true, title: "Added" }); }, error: function(inc) { } }); return false; } Based on your edit, you need to return false as you said :) <form role="form"> <div class="form-group"> @Html.Label("Commentario:") <textarea id="Comment" style="resize:none;" class="form-control"></textarea> </div> <div class="form-group"> <input type="submit" value="Guardar" onclick="return AddComment(Comment.value,'Comentario')" /> </div> </form> Could you please post your html also? Based on the comment by OP, this is the correct answer, but it is still missing an element. The onclick also needs to return false this way: onclick="return AddComment(Comment.value,'Comentario')" I just edited the answer to be complete with your edit :)
Can every positive rational number be written as the finite sum of distinct reciprocal primes? Inspiration for this question: Can every number be represented as a sum of different reciprocal numbers? My question: Can every positive rational number $\frac{m}{n}\in\mathbb{Q}$ be written as the finite sum of distinct (i.e. all different) reciprocal primes: $$\frac{m}{n} = \sum_k \frac{1}{k},\quad k \text{ is a prime number } ?$$ Well, how would you write $\frac 14$? Note that, if $p$ is one of your primes, then $p$ divides the denominator of $\sum \frac 1k$. I gave the reason. If $p$ were one of your primes we'd need to have $p \mid 4$, hence $p=2$. To elaborate: if $\{p_i\}$ is a collection of distinct primes then the denominator of $\sum \frac 1{p_i}$ (in least terms) is $\prod p_i$. Indeed, if $p$ is one of the $p_i$ then every term in the numerator is divisible by $p$, except for one. Ok - so $\sum \frac{1}{k}$ cannot be equal to a rational number whose denominator is even. But can the denominator of the sum be odd? The principle I gave is very general. $\frac 1{p^2}$ is unrepresentable for any $p$. Similarly $\frac 2p$ is unrepresentable for any odd $p$. And so on. Indeed, the principle shows you exactly which fractions are representable (not that the classification is terribly enlightening). There is exactly one representable rational for each set of distinct primes. Note that your claim is false, by the way. $\frac 12+\frac 13=\frac 56$ is representable and $6$ is even. All you need is for $2\in \{p_i\}$. I think this is one of those questions which is obvious to a lot of people, but which I need time to absorb, as I'm making lots of basic errors. Will return to this tomorrow with fresh eyes I think. When you return to it: as far as your original question goes, note that you don't actually need to show that the denominator of $\sum \frac 1{p_i}$ is $\prod p_i$ (though it is not hard to show that).
It is entirely clear that the denominator is at least a divisor of that, hence any representable fraction in least terms has a square free denominator. (To be sure, that condition is necessary for representability, but it is not sufficient.) The denominator of the sum of reciprocals of distinct primes is the product of those primes, so the only possible sums are non-integers with square free denominators. However, given a square free denominator, we know the primes by factoring it, so for every square free denominator there is only one possible numerator. This lets us check whether a/b is the sum of reciprocals of primes quite efficiently, especially if it isn’t. So there are fewer than k numbers m/n with n <= k that are such a sum.
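The classification above (a fraction in lowest terms is representable iff its denominator is square free and its numerator is the one forced by the primes dividing it) can be checked mechanically; a sketch using Python's fractions module:

```python
from fractions import Fraction

def prime_factors(n):
    """Map each prime factor of n to its multiplicity (trial division)."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def as_sum_of_reciprocal_primes(q):
    """Return the primes whose reciprocals sum to q, or None if impossible."""
    fac = prime_factors(q.denominator)
    if any(e > 1 for e in fac.values()):   # denominator must be square free
        return None
    primes = sorted(fac)
    # For a square free denominator, the primes are forced, so there is
    # exactly one candidate sum to compare against.
    return primes if sum(Fraction(1, p) for p in primes) == q else None

print(as_sum_of_reciprocal_primes(Fraction(5, 6)))   # [2, 3]
print(as_sum_of_reciprocal_primes(Fraction(1, 4)))   # None (4 is not square free)
print(as_sum_of_reciprocal_primes(Fraction(2, 7)))   # None (only 1/7 has denominator 7)
```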
Difference between "Training Data Set", "Testing Data Set" and "Validation Data Set" I have 250 human face images and with those I am going to train the model. For the sake of convenience, what I am going to do is pick the first 10 images and use leave-one-image-out cross validation to train the model so that each image gets the chance to be the test image. What I understand is that in that case the size of my training data set is 9 and the size of my testing data set is 1. After that I'm going to get the next 10 images and then use them as well to train the model. In that case, the size of my training data set would be 19 and my testing data set would be 1 (this takes place 20 times so that every image gets the chance to be in the testing set). Likewise, this goes on until I've used all the 250 images to train the model. What I don't understand is the "Validation Data Set". Am I doing it the wrong way? There was one answer on Stack Overflow but it wasn't clear to me. That's why I posted this question. You should split your data into training, validation and testing sets in the ratio of about 6:2:2. For training your model you use the training set. Comparing results on the training and validation sets gives you information about bias and variance. And finally the test set shows how well your model predicts. Your model shouldn't see any of your test examples during training. You mean to say that I shouldn't use all the 250 images for training and testing? Like I have described in my question above? Is that what you mean by "your model shouldn't see any of your test examples during training"? Yes, after you finish your training there must be new examples for testing the model's accuracy.
You do this tuning by using as a feedback signal the performance of the model on the validation data. In essence, this tuning is a form of learning: a search for a good configuration in some parameter space. Splitting your data into training, validation, and test sets may seem straightforward, but there are a few advanced ways to do it that can come in handy when little data is available, as in your case with just 250 data points. You can look into three classic evaluation recipes: simple hold-out validation, K-fold validation, and iterated K-fold validation with shuffling.
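The 6:2:2 split suggested above can be sketched with the standard library alone; no ML framework is assumed, and the asker's 250 images are represented here by plain indices:

```python
import random

def split_dataset(items, seed=0, ratios=(0.6, 0.2, 0.2)):
    """Shuffle once, then carve off train/validation/test slices."""
    items = list(items)
    random.Random(seed).shuffle(items)        # reproducible shuffle
    n = len(items)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return (items[:n_train],                  # seen during training
            items[n_train:n_train + n_val],   # used to tune hyper-parameters
            items[n_train + n_val:])          # held back until the very end

train, val, test = split_dataset(range(250))
print(len(train), len(val), len(test))  # 150 50 50
```

The key property, matching the answer's advice, is that the test slice never overlaps the data used for training or tuning.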
Cast from '[Any]?' to unrelated type '[String : String?]' always fails issue I use this code to store app settings via UserDefaults: var appSettings: [String: String?] { set { // On attempt to set new value for the dictionary UserDefaults.standard.set(newValue, forKey: "AppSettings") UserDefaults.standard.synchronize() } get { // On attempt to get something from dictionary if let settings = UserDefaults.standard.array(forKey: "AppSettings") as? [String: String?] { return settings } else { return [:] } }} But the line if let settings = UserDefaults.standard.array(forKey: "AppSettings") as? [String: String?] causes the warning: Cast from '[Any]?' to unrelated type '[String : String?]' always fails Any ideas how to save this dictionary in UserDefaults? Why are the values of your dictionary optional strings? You would normally just omit the key and therefore have nil returned when accessing that key, so you would use [String:String] UserDefaults is widely Objective-C based. In ObjC dictionaries nil values are not supported anyway. Got it, thank you :) Replace this UserDefaults.standard.array(forKey: "AppSettings") as? [String: String?] with UserDefaults.standard.dictionary(forKey: "AppSettings") as? [String: String?] as you store a dictionary, so don't use .array(forKey:) which returns [Any]? that for sure can't be cast to [String:String?] Also from the docs: synchronize() waits for any pending asynchronous updates to the defaults database and returns; this method is unnecessary and shouldn't be used. So comment out UserDefaults.standard.synchronize()
How do I use a Branch Template in Sitecore? I have a "Default Template" consisting of a bunch of sublayouts, most importantly the "ContactForm" sublayout. I want to create a Branch Template of the Default Template so that I can change the ContactForm sublayout to a "PressForm" sublayout. What I'm struggling with is exactly how I should do this. At the moment I have gone into my Branches folder and created a new Branch Template using my Default Template; I've named it Default Template Branch. I now have an item underneath it called $name which states that it is a Default Template. I can edit this to change the ContactForm sublayout to "PressForm". Where do I go from here to get the desired result? $name is the item instance that you'd want to edit. You should make a Contact branch and a separate Press branch, then edit each $name below according to your changes (e.g. different presentation). So, I've created my branch based on the Default Template, renamed it Press Branch and now I've edited the $name to reflect the changes needed. How do I now add this to my content item? I think you're misunderstanding what a branch is: it's a "creator". A content item is an instance of a template. A branch is a creator of an item of a specific template. Your content item should already be on the correct template. Branches are nice as you can predefine subitems below the instances, so when you create a folder it will automatically create subitems as well. It can also be used, as in your case, to act like a template but apply different values. Once you create the branch you need to update your assign options to no longer point to the template, but instead point to the branch. I think I finally get it! If I understand you correctly, you set the Assign Options on the template above, so that if I have a content item with my branch assigned within that template I can then right-click, click insert and then use the branch?
As this seems to work, how would I now do this for existing content items, so that I can get them to be "created from" my Press branch? Correct. The branch creates, so you can't make it affect existing items. The branch is just a convenient creator, almost like standard values, with different values. So if you have "Template A" but two separate scenarios to use it with different values, you can create two separates branches for it, each with their different values. Then your assign options can use the two branches instead of "Template A" itself Excellent. Thanks for helping me through this! Does this solution actually enable the layout goal described in the question, where the branch template is used to swap a certain sublayout? Documentation, this answer, and a local test seem to indicate otherwise. That said, there does seem to be some confusion about this. Yes, the branch is a creator. There can be a separate branch for each scenario. Each $name below it will deviate from standard values
Is the python keyword "is" the same as the function id(), in the context of unittesting? I have tried reading the docs but couldn't get a clear answer. Is id(a) == id(b) the same as a is b Likewise is import unittest unittest.TestCase.assertNotEqual(id(a), id(b)) therefore the same as import unittest unittest.TestCase.assertIsNot(a, b) @Makoto Little mention of id in that question or its answers... (I'd say none, but two answers mention it without addressing this question) That question is the olde "is seems to work for strings", this question is the more novel "is is equivalent to id() comparison". @delnan: The answer contains the fact that is tests for identity, and the id() function returns the identity of an object. I'd say there's sufficient mention in the answer. @Makoto Putting these two together is question and answer IMHO. Yes "id" in CPython gives you the memory address of the object referred to. The address uniquely identifies an object in the same python process. Therefore, the meaning of id(a) == id(b) is "Are the memory addresses of instance a and b the same?" which is equivalent to "Do a and b refer to the same object?": a is b From "id"'s docstring: id(object) -> integer Return the identity of an object. This is guaranteed to be unique among simultaneously existing objects. (Hint: it's the object's memory address.) How about non-CPython implementations such as PyPy? @delnan: at least in PyPy, id(a) == id(b) <-> a is b (ref). @delnan If id(a)==id(b) is true in CPython then it is true in PyPy. This is not always the case the other way around. Try:<PHONE_NUMBER> + 1) is<PHONE_NUMBER>+1). The latter is always true in PyPy, but not in CPython. Great. I didn't ask because I don't know that (I even know why it's like that), I asked because the answer could use this information. According to the Python Language Reference at the end of section 5.9. Comparisons the 'is' operator tests for object identity. 
So assertIsNot(a,b) can always be replaced with assertNotEqual(id(a), id(b)). Depending on implementation internals you could get different results when it comes to value types, so avoid using 'is' when you want to compare values, as you don't know if it will work on other implementations, or even future implementations.
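A minimal sketch of the equivalence discussed above, under CPython semantics (as the PyPy comments note, the reverse direction can differ on other implementations, so this only demonstrates the CPython case):

```python
import unittest

a = [1, 2, 3]
b = a          # second name for the same object
c = [1, 2, 3]  # equal value, but a distinct object

# While both objects are alive, identity and id-equality agree:
assert (a is b) and id(a) == id(b)
assert (a is not c) and id(a) != id(c)
assert a == c  # value equality is a separate question entirely

class Demo(unittest.TestCase):
    def runTest(self):
        # Equivalent outcomes on CPython; assertIsNot states the
        # intent directly and is the idiomatic choice.
        self.assertIsNot(a, c)
        self.assertNotEqual(id(a), id(c))

Demo().debug()  # runs runTest outside a test runner, raising on failure
```

Beyond readability, assertIsNot also produces a far more useful failure message than comparing two opaque integers returned by id().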
Can BeEF also work in public? Is it possible to run the tool called BeEF to do penetration testing on real domains like example.com? As far as I know, BeEF can only be used within localhost. I can only test my site for XSS on the real domain because of the database running. Yes, it's absolutely possible to use this tool in public... However, BeEF is a browser exploitation tool, not a server exploitation tool. If you use BeEF to attack browsers of other people, who have not given explicit consent to being attacked, then you are likely committing an unlawful act, depending on your jurisdiction. So yes, technically it is possible. But no, it is likely not lawful and I would recommend against doing it. @schroeder I wish I could at least say goodbye to my friends, but this will have to do. I can tell I am no longer welcome here by the powers that be, so I'll exit stage left. It's been lovely. I'll miss you all. Well, then I'm sorry that things didn't work out. You always did a great job here. I'll miss it. Sorry, then. See you and thanks for all your contributions. I'll miss you and your answers and comments as well, @MechMK1.
List of lists with various lengths and items - change numbers So my problem, to which I have been searching for a solution for hours now, is that I have a list of lists with sublists of various lengths and different items: list_1=[[160,137,99,81,78,60],[132,131,131,127,124,123],'none',[99,95,80,78]] Now I want to change the fifth number of every list and add +1. My problem is that I keep getting 'out of range' or other problems, because list 3 doesn't contain numbers and lists 3+4 don't contain a fifth element. I have so far found no answer to this. My first guess was adding zeros to the lists, but I'm not supposed to do that. It would also falsify the results, since then it would add +1 to any zero I have created. What language is it? Try to fix the previous post, make a minor correction, try it and ask any questions: Any function should return the modified list, otherwise, you will not get it (unless it's supposed to do in-place changes) You may try a list comprehension as well, but that's more involved. L =[[160,137,99,81,78,60],[132,131,131,127,124,123], None,[99,95,80,78]] def addOne(L): for lst in L: if isinstance(lst, list) and len(lst) >= 5: lst[4] += 1 return L print(addOne(L)) Output: [[160, 137, 99, 81, 79, 60], [132, 131, 131, 127, 125, 123], None, [99, 95, 80, 78]] Thank you very much, Daniel, this worked. Now I just have an additional question. I ran this several times, just to see, and it added another +1 every time I ran it, so I had to reset the values. How would I change this so it would always use the defined L as input, meaning it would always add +1 to the same original input and not get increasingly higher? Glad it can help. You could use copy.deepcopy(L) to make a new list copy, so that the change will not affect the original list.
Assuming this is in Python, list_1=[[160,137,99,81,78,60],[132,131,131,127,124,123],'none',[99,95,80,78]] def addOneToFifthElement(theList): for list_ in theList: if type(list_)=="<class 'list'>" and len(list_)>=5: list_[4] += 1 addOneToFifthElement(list_1) Thank you for your answer. I ran this, but the output still remains the same. I tried various different methods, but every time I still got the original list - can you tell me how to get the correct output? list_1=[[160,137,99,81,78,60],[132,131,131,127,124,123],'none',[99,95,80,78]] ​ def addOneToFifthElement(theList): for list_ in theList: if type(list_)=="<class 'list'>" and len(list_)>=5: list_[4] += 1 ​ addOneToFifthElement(list_1) list_1 [[160, 137, 99, 81, 78, 60], [132, 131, 131, 127, 124, 123], 'none', [99, 95, 80, 78]] Just make the function to return the list.
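For reference, the reason the second answer appears to do nothing: type(list_)=="<class 'list'>" compares a type object against a string, which is always False, so the if body never runs. A sketch combining the working isinstance check from the first answer with the copy.deepcopy suggestion from its comments, so repeated calls don't keep incrementing the original:

```python
import copy

list_1 = [[160, 137, 99, 81, 78, 60],
          [132, 131, 131, 127, 124, 123],
          'none',
          [99, 95, 80, 78]]

def add_one_to_fifth(the_list):
    """Return a new list with +1 added to the fifth element of every
    sublist that has at least five items; the input stays unchanged."""
    result = copy.deepcopy(the_list)  # work on a copy, not the original
    for sub in result:
        # NOT type(sub) == "<class 'list'>": that compares a type object
        # to a string and is always False. isinstance also accepts
        # list subclasses, and type(sub) is list would work for exact lists.
        if isinstance(sub, list) and len(sub) >= 5:
            sub[4] += 1
    return result

print(add_one_to_fifth(list_1))
# → [[160, 137, 99, 81, 79, 60], [132, 131, 131, 127, 125, 123], 'none', [99, 95, 80, 78]]
print(list_1[0][4])  # → 78: the original is untouched, so re-running is safe
```

Because the function copies before mutating, every call starts from the same original input, which answers the follow-up question in the accepted answer's comments.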
Windows Task Scheduler Batch File - Whose permissions do commands in a batch file run with? I'm running Windows Datacenter, and I'm setting up a scheduled task to create a folder on another server on the network using the md command. Here is my setup. I have 'user A' who has access to log into the server, but does not have permissions to create tasks in Task Scheduler. I also have 'user B' who does not have permissions to log into the server, but does have permissions to create scheduled tasks. I've created a task with 'User A' as the author with security settings of When running the task, use the following user account: User B. The action looks like this: C:\windows\system32\cmd.exe /c "C:\test.bat" with the Program/script as C:\windows\system32\cmd.exe and the arguments of /c "C:\test.bat" It doesn't look like the batch file is working. When this batch file is being called by Task Scheduler, who is actually performing the md command: User A (login but no task) or User B (no login but task)? I assume if it's User B then that could be the problem, that User B may not have permissions to write to the other server. Any insight you all could provide is greatly appreciated. The task will run as User B which unfortunately will cause issues regarding permissions. This doesn't answer my question at all. I have the scenario I have entered, and there isn't an option to deviate from that. I can't run as SYSTEM. Oh sorry, I thought you wanted an alternative, apologies! The answer to your question is: yes, it will run as User B, and yes, that will cause issues if he doesn't have permissions on the server. If you would like to edit your answer above to reflect this comment, I'd be happy to mark it as accepted!
How to get value from node? SQL Server XML DML: DECLARE @x xml = N'<a number="1"> <b>1</b> </a>'; I want to use query() to get the value (can't use value()) @x.query('string(/a[1]/b[1])') is ok. @x.query('string(/a[@number="1"]/b)') throws an error. Do you have any solution? I want to use [@number="1"] to get the value. You can try this. @x.query('string( (/a[@number="1"]/b)[1])') In most cases one wants to read more than one value from a given node. You can use a combination of .nodes() (very related to .query(), but returning a derived table) and .query() or .value() like here: DECLARE @x xml= N'<a number="1"> <b>1</b> </a>'; SELECT MyA.value(N'(b/text())[1]','int') AS ReadTheValue ,MyA.query(N'b') QueryTheValue FROM @x.nodes(N'/a[@number="1"]') AS A(MyA); You can pass in the search value as a variable like here: DECLARE @SearchNumber INT=1; SELECT MyA.value(N'(b/text())[1]','int') AS ReadTheValue ,MyA.query(N'b') QueryTheValue FROM @x.nodes(N'/a[@number=sql:variable("@SearchNumber")]') AS A(MyA); If you need nothing else than the value through .query(), go with the solution provided by Serkan Aslan. You were just missing ensuring that the inner expression is a singleton. But I must admit that I have no idea why one should need this...
Directive in pop up window angularjs I've a simple single page application. I want to implement tear-off-window-like functionality, like if I've a directive with a button in the tool bar that when clicked hides the directive in the main app, opens a pop up window (not a modal) and opens that directive in the popup window. Similarly when the button is clicked in the popup window it closes the popup window and the directive reappears in the main app. Is it even possible? If so, can you please provide me with some pointers? So far I've looked into https://github.com/bulkan/angular-popup Couldn't even get the thing running :( Any help would be highly appreciated. Cheers Hi pixelbits, like I said I'm not looking for any overlays or modals, I want it to be a standalone pop up window New window means new scope/application, so you'd have to find a clever way to implement some external storage combined with $watch to make something like that. window.postMessage() can be used for communicating between the two windows. But as Shomz mentioned, keeping data in sync between the two instances will be an issue. what @Shomz said. A new window will mean a completely new execution context.
Calculating FLOPs of adaptive_avg_pool2d I found several definitions of FLOPs when counting the FLOPs for adaptive_avg_pool2d: fvcore defines the FLOPs as 1 * prod(input), which is 1 x N x C_in x H_in x W_in. Another definition is from the output perspective. I found one from here: It first calculates the kernel size, say, (kx, ky), then computes the FLOPs as (kx*ky + 1) * prod(output), which is (k_x x k_y + 1) x (N x C_out x H_out x W_out). Which definition is correct? Is there any material on calculating FLOPs? Please refer to this question and this answer for how torch.nn.Adaptive{Avg, Max}Pool{1, 2, 3}d works. Essentially, it tries to reduce overlapping of pooling kernels (which is not the case for torch.nn.{Avg, Max}Pool{1, 2, 3}d), trying to go over each input element only once (not sure if succeeding, but probably yes). FLOPS refers to floating-point operations per second; hence, if each input float value is "touched" (by max or mean per grouped part of the input) only once, it would be equal to: 1 * prod(input) To get FLOPS from this value you would have to divide it by the time it took to get through 1 * prod(input) floating-point operations. The second formula seems correct for pooling, not the adaptive one.
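The fvcore-style count from the answer is easy to state in code: one flop per input element, regardless of output size. A pure-Python sketch (no torch needed; tools that count from the output side, like the second formula in the question, will give different numbers):

```python
from math import prod  # Python 3.8+

def adaptive_avg_pool2d_flops(input_shape):
    """fvcore-style FLOP count for adaptive_avg_pool2d.

    Each input element is read and accumulated into its output bin once,
    so the count is 1 * N * C_in * H_in * W_in -- the output shape does
    not appear in the formula at all.
    """
    n, c_in, h_in, w_in = input_shape
    return 1 * n * c_in * h_in * w_in

# A (1, 64, 56, 56) feature map costs the same whatever the target size:
assert adaptive_avg_pool2d_flops((1, 64, 56, 56)) == prod((1, 64, 56, 56))
print(adaptive_avg_pool2d_flops((1, 64, 56, 56)))  # → 200704
```

This also makes the FLOPs-vs-FLOPS distinction in the answer concrete: the function returns an operation count; dividing it by a measured wall-clock time would give a rate.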
Run nightly jobs on multibranch pipeline with declarative Jenkinsfile without deprecated feature 'Suppress automatic SCM triggering' Jenkins ver. 2.150.3 I have a multi-branch pipeline set up. I am using a declarative Jenkinsfile. I have a set of jobs which take a long time to run. I want these to run overnight for any branches which have changes. In the past, one could use the 'Suppress automatic SCM triggering' option along with a cron trigger to achieve the nightly build for branches with changes. (See Run nightly jobs on multibranch pipeline with declarative Jenkinsfile) I no longer have access to the 'Suppress automatic SCM triggering' option. The following trigger will run even if there are no changes to the code in the branch. triggers { cron('H 0 * * * *') } The following code runs if there are changes in the branch. However, the Jenkins multibranch project seems to trigger from the push rather than the pollSCM. This doesn't seem to achieve my desired outcome of running once nightly per branch if there are changes. triggers { pollSCM('H 0 * * * *') } How do I configure Jenkins to achieve the nightly jobs per branch only if changes exist in that branch? Do you have a post-commit hook configured? If so, can you try to add ignorePostCommitHooks in your pollSCM? I'm not sure how you can access it via Declarative pipeline, but we use it in a scripted pipeline like this: [$class: "SCMTrigger", scmpoll_spec: "H 0 * * * *", ignorePostCommitHooks: true] I'm currently using the GitHub Enterprise "Jenkins (GitHub plugin)" Service, which is also being deprecated ... yikes. I'm fully committed to the declarative pipeline at this point. When I use the pipeline-syntax for declarative Directive Generator the syntax looks almost the same: triggers { pollSCM ignorePostCommitHooks: true, scmpoll_spec: 'H H * * * ' }. Can you try that? More options available at <yourJenkinsurl>/directive-generator/. @Unforgettable631, That seemed to do the trick.
Thanks for the tip on using the directive-generator. I didn't know about that tool. Adding the answer from the comment here. You can achieve this by using the following script: triggers { pollSCM ignorePostCommitHooks: true, scmpoll_spec: 'H H * * *' } With the directive generator (available at <yourJenkinsUrl>/directive-generator/) you can generate scripts available in your instance and see some documentation, e.g.: To allow periodically scheduled tasks to produce even load on the system, the symbol H (for “hash”) should be used wherever possible. For example, using 0 0 * * * for a dozen daily jobs will cause a large spike at midnight. In contrast, using H H * * * would still execute each job once a day, but not all at the same time, better using limited resources.
Is there a way to show disk i/o read/writes per process in real time? I am struggling to find out what process is eating away at my Mac's performance, and disk i/o is the culprit. Activity Monitor frequently shows megabytes of read/write, but unfortunately, it does not show the per-process value (only the total written by a given process, which is useless). Is there a way to find out? iotop does not seem to work, because I am on Sierra, and SIP is enabled. Thank You, Zsolt iStat Menus shows, among many many other things, disk activity by process. It is not free, but you can do a 14-day trial. Wasn't this free at some point?
Inserting HTML tags using PHP I want to be able to dynamically insert an HTML tag, by first copying it to a textarea and then submitting it. This is my first attempt. <!DOCTYPE html> <html> <head> <title>Tag Parser</title> </head> <body> <form name="tagInput" method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>"> Input Tags:<br> <textarea name="tag" id="tag"> </textarea> <br><br> <input type="submit" name="send" id="send" value="Submit"> </form> <?php foreach ($_POST as $key => $value) { $value = str_replace('"', "'", $value); echo "field " . $key . " " . "value " . htmlentities($value) . "<br>"; echo "$value"; } ?> </body> </html> When I output the tag with htmlentities(), I see it is formatted properly and just like I'd like it to show, but rendered. But then, the second echo is adding the tag, but broken. This was my input: <IFRAME SRC="http://ad.doubleclick.net/adi/N7480.147698OMGBLOG1/B8174590.109702939;sz=300x250;ord=[timestamp]?" WIDTH=300 HEIGHT=250 MARGINWIDTH=0 MARGINHEIGHT=0 HSPACE=0 VSPACE=0 FRAMEBORDER=0 SCROLLING=no BORDERCOLOR='#000000'> <SCRIPT language='JavaScript1.1' SRC="http://ad.doubleclick.net/adj/N7480.147698OMGBLOG1/B8174590.109702939;abr=!ie;sz=300x250;ord=[timestamp]?"> </SCRIPT> <NOSCRIPT> <A HREF="http://ad.doubleclick.net/jump/N7480.147698OMGBLOG1/B8174590.109702939;abr=!ie4;abr=!ie5;sz=300x250;ord=[timestamp]?"> <IMG SRC="http://ad.doubleclick.net/ad/N7480.147698OMGBLOG1/B8174590.109702939;abr=!ie4;abr=!ie5;sz=300x250;ord=[timestamp]?"
BORDER=0 WIDTH=300 HEIGHT=250 ALT="Advertisement"></A> </NOSCRIPT> </IFRAME> This is what's being generated in the site: <iframe src="" width="300" height="250" marginwidth="0" marginheight="0" hspace="0" vspace="0" frameborder="0" scrolling="no" bordercolor="#000000"> &lt;SCRIPT language='JavaScript1.1' SRC='http://ad.doubleclick.net/adj/N7480.147698OMGBLOG1/B8174590.109702939;abr=!ie;sz=300x250;ord=[timestamp]?'&gt; &lt;/SCRIPT&gt; &lt;NOSCRIPT&gt; &lt;A HREF='http://ad.doubleclick.net/jump/N7480.147698OMGBLOG1/B8174590.109702939;abr=!ie4;abr=!ie5;sz=300x250;ord=[timestamp]?'&gt; &lt;IMG SRC='http://ad.doubleclick.net/ad/N7480.147698OMGBLOG1/B8174590.109702939;abr=!ie4;abr=!ie5;sz=300x250;ord=[timestamp]?' BORDER=0 WIDTH=300 HEIGHT=250 ALT='Advertisement'&gt;&lt;/A&gt; &lt;/NOSCRIPT&gt; </iframe> The src= is empty and the tag is clearly broken (not rendering anything) Why don't you just add the link itself in the textbox and use the iframe + src="" in the foreach loop? @JimSundqvist Thanks for your reply! I wanted to preserve the original tag structure since I am doing this to test the tag implementation. My idea was to extract the tag from a website, paste it there and see if it works in a clean environment, just the way it was on the previous website. Makes sense? Thanks again! Yea suppose i understand what you mean. Do you try to add several items to the form at once or one by one? Also, the thing with htmlentities($value) is that it strips all possible hurtful tags, So where it read script it changes the < and > to it's "text" value. Just one by one, I would like to add a single tag, render it, and also parse it to provide some insights by analyzing it, so it should always be on a one by one basis. Regarding the htmlentities() it's there just for testing purposes, I reached that stage when you start trying everything out. have you tried the same thing without htmlentities() and it still doesn't work? can you print_r the post variables? 
@JimSundqvist, Yup, it adds the iFrame to the DOM, but broken, as shown in my question, with the empty src= and that. @Demodave, Hey thanks for writing, this is the result of print_r on the $_POST array: SubmitArray ( [tag] => [send] => Submit ), tag is empty since it printed the broken iFrame tag there. Just so I understand, you put the whole iframe tag within the text area and submit it, correct? Because my example renders correctly using your code. It's some picture of a dinoworld free stomper pass Yup, that is correct. Whole iframe tag, and submit. That's crazy, could it be something related to my server configuration?
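The entity-encoded inner tags in the broken output (&lt;SCRIPT&gt;, &lt;NOSCRIPT&gt;, ...) are exactly what one pass of HTML escaping produces, which a browser then displays as literal text instead of parsing. The thread's code is PHP (htmlentities()/html_entity_decode()), but the round trip is easy to illustrate with Python's stdlib html module; the URL below is a placeholder, not one from the thread:

```python
import html

raw = '<SCRIPT language=\'JavaScript1.1\' SRC="http://example.com/ad.js"></SCRIPT>'

escaped = html.escape(raw)  # rough Python analogue of PHP's htmlentities()
assert '<' not in escaped and '&lt;SCRIPT' in escaped

# A browser renders `escaped` as visible text -- matching the
# &lt;SCRIPT&gt;... debris seen inside the broken iframe -- whereas
# decoding restores a parseable tag:
assert html.unescape(escaped) == raw
print(escaped[:40])
```

So if escaped entities show up in the rendered page, some layer (template, CMS, or a second call to an escaping function) encoded the markup before it reached the browser.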
Ideal way to identify encoding of content in an HTTP request? I have a need to develop a REST-based API that can accept either binary or base64-encoded content, and possibly other contents. It has to require that the encoding of the file is identified; otherwise it is assumed to be binary. I don't want to auto-guess based on the contents of the request. My first inclination was to use Content-Type but base64 does not appear as one of the well-known content types - makes sense since it's not really a type but rather an encoding. Reading various RFC specs, one would think the Content-Transfer-Encoding header is an appropriate place to indicate whether the content of the request body is encoded as base64 or as binary. However, I do not think it is appropriate because it's primarily for SMTP protocols, where they are limited to 7 bits. Then there's Content-Encoding or Transfer-Encoding but I don't see base64 as a well-known value for either header because both headers have more to do with compressing the content rather than indicating whether base64 encoding has been applied. I'm inclined to think that using custom headers is probably safest so as not to breach any existing specs but wanted to see if nice folks at SO can come up with a good & definitive answer that will be compliant with the RFCs. What's the point in supporting base64, when HTTP is perfectly capable of doing binary??? Because I have clients who want to send base64-encoded content? Not saying HTTP can't do it; only that I need to not have to guess whether I have base64-encoded content or whatever. OK, so why do the clients want to base64-encode?
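A sketch of the custom-header route the question leans toward. The header name X-Content-Encoding is invented purely for illustration (it is not a registered HTTP header), and absent any header the body is treated as raw binary, per the stated requirement:

```python
import base64

def decode_body(body: bytes, headers: dict) -> bytes:
    """Return the raw payload bytes for a request body.

    'X-Content-Encoding' is a hypothetical header name; a real API
    would document whatever name it settles on.
    """
    encoding = headers.get('X-Content-Encoding', 'binary').lower()
    if encoding == 'binary':
        return body
    if encoding == 'base64':
        # validate=True makes non-alphabet characters raise
        # binascii.Error instead of being silently discarded
        return base64.b64decode(body, validate=True)
    raise ValueError(f'unsupported content encoding: {encoding!r}')

payload = b'\x00\x01binary bytes\xff'
wire = base64.b64encode(payload)
assert decode_body(wire, {'X-Content-Encoding': 'base64'}) == payload
assert decode_body(payload, {}) == payload  # no header: assume binary
```

Keeping the decoding in one function also makes it easy to add further encodings later without the API ever having to guess from the body's contents.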