doc_23534500
int a = 0; int b = 0; for (int i = 0; i < 10; i++) { a += 10; b += 10; } printf("%d", a); Is the variable b ever present in memory or even operated on after compilation? Would there be any assembly logic storing and handling b? Just not positive whether this is something that's counted under dead-code elimination. Thanks. A: Yes, absolutely. This is a very common optimization. The best way to answer such questions for yourself is to learn a little bit of assembly language and read the code generated by the compiler. In this case, you can see that not only does GCC optimize b entirely out of existence, but also a, and it compiles the whole function into just the equivalent of printf("%d", 100);.
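To make the optimization concrete, here is an illustrative sketch in Python (not the compiler's actual mechanism; the function name is invented) of what constant folding plus dead-code elimination achieves: since every value in the loop is known ahead of time, the whole computation collapses to its result.

```python
# Toy illustration of the optimization described above: because every
# value in the loop is a compile-time constant, the whole computation
# can be evaluated ahead of time and replaced by its result.
def source_program():
    a = 0
    b = 0
    for _ in range(10):
        a += 10
        b += 10  # b is never read afterwards: dead code
    return a     # only a escapes (it is printed)

# What the optimizer effectively emits: printf("%d", 100);
FOLDED_RESULT = 100

print(source_program())  # → 100
```

The point is that `b` never needs a register or memory slot at all, and `a` survives only as the literal 100 baked into the call.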
doc_23534501
Here's my javascript snippet : $('#day_wrapper').append('' + '<div class="holder-itenary">' + '<label class="control-label" style="font-size: large">Day ' + i + '</label><' + 'a href="#" class="remove-itenary"><i class="fa fa-trash pull-right"></i>' + '</a>' + '<textarea rows="2" cols="50" name="descriptions[]" class="form-control" ' + 'placeholder="Add Description" id="tinyContent">' + '</textarea></div>'); tinyMCE.execCommand('mceAddControl', false, 'tinyContent'); Also this is my tinyMCE function: tinymce.init({ selector: 'textarea', /* plugins: [ "advlist autolink link image lists charmap print preview hr anchor pagebreak", "searchreplace wordcount visualblocks visualchars code fullscreen insertdatetime media nonbreaking", "save table contextmenu emoticons template paste textcolor jbimages directionality", "placeholder" ] */ plugins: [ "table", 'advlist autolink lists link charmap print preview anchor', 'searchreplace visualblocks code fullscreen', 'insertdatetime media table contextmenu paste code' ] }); This is my html part where I have displayed already stored contents from database in the editor. <div id="day_wrapper" class="col-sm-10"> {{--parent div holder--}} <div id="limitDescErrorMessage"></div> <?php $i = 1; ?> @forelse($tour->itenaries as $key=>$day) <div class="holder-itenary"> <a href="#" class="remove-itenary"><i class="fa fa-trash pull-right"></i></a> <label class="control-label" style="font-size: large">Day {{$i}}</label> <textarea rows="2" cols="50" name="descriptions[]" class="form-control" id="tinyContent">{{$day->desc}}</textarea> </div> <?php $i++ ?> @empty <div><p>No Itenaries Available</p></div> @endforelse @if($errors->first('descriptions')) <div class="alert alert-danger">{{$errors->first('descriptions')}}</div> @endif </div> I've tried looking for the solution but couldn't figure out exactly..
doc_23534502
I spent many hours trying to find a solution to this problem - but without any success... I'm writing a very simple application (on iPad) which will send a few TCP commands to my server. The server is already configured and working properly. I can connect to it with pTerm on my iPad, and after connecting successfully over raw TCP or telnet I can send the request to my server looking like this: #100 enter and it works.. But when I'm trying to do it from my application - it doesn't work; it seems to be a problem with the end-of-line information to the server (normally produced by pushing the Enter key). The server is configured on 192.168.1.220, port 2000. After clicking the reset button it should send the command to the server - but I can't see anything on the server side... My .h file looks like: #import <UIKit/UIKit.h> NSInputStream *inputStream; NSOutputStream *outputStream; @interface ViewController : UIViewController <NSStreamDelegate> @property (weak, nonatomic) IBOutlet UIButton *resetbutton; @property (strong, nonatomic) IBOutlet UIView *viewController; - (IBAction)resetbuttonclick:(id)sender; @end my .m file: #import "ViewController.h" @interface ViewController () @end @implementation ViewController @synthesize resetbutton; @synthesize viewController; - (void) viewDidLoad { [self initNetworkCommunication]; [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. } - (void) viewDidUnload { [self setViewController:nil]; [self setResetbutton:nil]; [super viewDidUnload]; // Release any retained subviews of the main view. 
} - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } - (void)initNetworkCommunication { CFReadStreamRef readStream; CFWriteStreamRef writeStream; CFStreamCreatePairWithSocketToHost(NULL, (CFStringRef)@"192.168.1.220", 2000, &readStream, &writeStream); inputStream = objc_unretainedObject(readStream); outputStream = objc_unretainedObject(writeStream); [inputStream setDelegate:self]; [outputStream setDelegate:self]; [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [inputStream open]; [outputStream open]; } - (void)sendMessage { NSString *response = [NSString stringWithFormat:@"#100\n"]; NSData *data = [[NSData alloc] initWithData:[response dataUsingEncoding:NSASCIIStringEncoding]]; [outputStream write:[data bytes] maxLength:[data length]]; } - (IBAction)resetbuttonclick:(id)sender { [self initNetworkCommunication]; [self sendMessage]; } @end Thank you very much for your help. A: You are initializing the connection twice; remove the line [self initNetworkCommunication]; from the resetbuttonclick method. A: One thing one immediately notices is you don't have any stream event handlers - which is strange. How do you know that the connection has been established? - (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode The code in iPhone Network Programming is tested and working, if you're willing to read through the articles. A: Maybe you can avoid such problems by using an additional library? (abstraction on top) I use CocoaAsyncSocket. It is very simple and comfortable.
doc_23534503
1: Icon for the app 2: Vertically stacked LinearLayout with Session title and Room to be held in 3: The start time of the session The problem I am facing is that #2 has a variable width depending on the size of the title. Hardcoding a width would look terrible if the user turns the handset sideways and goes into Landscape mode. Therefore here is my question: How can I lay out three components horizontally in Android such that #1 and #3 are left and right aligned respectively, and #2 simply takes the space remaining? A: Look at RelativeLayout. You can make your middle LinearLayout use the android:layout_toLeftOf and android:layout_toRightOf to force it to be in the middle. I usually refer to Android's Common Layout Object page for a quick example to get started. There might still be a small caveat that I am missing, but I know that's the general trick I used once and I am pretty sure it worked. A: Something like this might be a reasonable basis: <RelativeLayout android:id="@+id/RelativeLayout01" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="horizontal" xmlns:android="http://schemas.android.com/apk/res/android"> <ImageView android:id="@+id/ImageView01" android:src="@drawable/droid" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentLeft="true"></ImageView> <LinearLayout android:id="@+id/LinearLayout01" android:layout_below="@id/ImageView01" android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="vertical" android:layout_alignTop="@+id/ImageView01" android:layout_toRightOf="@+id/ImageView01" android:background="#ffff00"> <TextView android:text="This is wide, wide, title text" android:id="@+id/TextView01" android:layout_width="wrap_content" android:layout_height="wrap_content"></TextView> <TextView android:text="Short text" android:id="@+id/TextView02" android:layout_width="wrap_content" android:layout_height="wrap_content"></TextView> 
<TextView android:text="Middling wide text" android:id="@+id/TextView03" android:layout_width="wrap_content" android:layout_height="wrap_content"></TextView> </LinearLayout> <TextView android:text="Right hand text" android:id="@+id/TextView04" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentRight="true" android:layout_alignTop="@+id/ImageView01" android:background="#00ffff"></TextView> </RelativeLayout> You might like to hard-code some left margin on the LinearLayout to give it a bit of spacing. A: After fighting with this for a few hours, I came to understand the use of layout_weight. It seems this was the appropriate means to fix the problem. A: Try something like this: <RelativeLayout android:layout_width="fill_parent" android:layout_height="wrap_content"> <ImageView android:id="@+id/img1" android:background="#00F" android:layout_width="100dip" android:layout_height="100dip" android:layout_alignParentLeft="true"/> <ImageView android:id="@+id/img2" android:background="#0F0" android:layout_width="0dip" android:layout_height="100dip" android:layout_toRightOf="@id/img1" android:layout_toLeftOf="@+id/img3"/> <ImageView android:id="@+id/img3" android:background="#F00" android:layout_width="150dip" android:layout_height="100dip" android:layout_alignParentRight="true"/> </RelativeLayout>
doc_23534504
var node = svg.selectAll(".node") .data(graph.nodes) .enter().append("circle") .attr("class", "node") .attr("r", function(d) { return d.group * 3; }) .style("fill", function(d) { return color(d.group); }) .call(force.drag) .on('mouseover', connectedNodes) .on("click", function(d) { getprofile(d); }); A: You need to define a mouseout event handler. So your code will be like this: var node = svg.selectAll(".node") .data(graph.nodes) .enter().append("circle") .attr("class", "node") .attr("r", function(d) { return d.group * 3; }) .style("fill", function(d) { return color(d.group); }) .call(force.drag) .on('mouseover', connectedNodes) .on('mouseout', doSomethingCallback) .on("click", function(d) { getprofile(d); }); function doSomethingCallback(){ /* fill your circle with the original color */ } A: You're looking for mouseleave. Here's a D3 demo of it: http://bl.ocks.org/mbostock/5247027 A: You can use .on('mouseout', function(){}); to stop the function started with mouseover.
doc_23534505
$myFile = 'C:/Users/Carl/Downloads/'. date("y-m-d") . '/test*.txt'; $myNewFile = 'C:/Users/Carl/Downloads/'. date("y-m-d").'/text.xml'; if(preg_match("([0-9]+)", $myFile)) { echo 'ok'; copy($myFile, $myNewFile); } I am getting an error because of the * in $myFile. Any help is much appreciated. A: $myFile= 'C:/Users/Carl/Downloads/'. date("y-m-d") . '/test*.txt'; $myNyFile = 'C:/Users/Carl/Downloads/'.date("y-m-d").'/test.txt'; foreach (glob($myFile) as $fileName) { copy($fileName, $myNyFile); } A: For a complete answer, if you want to copy only the *.txt files into NewFolder: $myFiles = 'C:/Users/Carl/Downloads/*.txt'; $myFolderDest = 'C:/Users/Carl/NewFolder/'; foreach (glob($myFiles) as $file) { copy($file, $myFolderDest . basename($file)); }
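For comparison, the same wildcard-expand-then-copy idea can be sketched in Python with the standard glob and shutil modules (the temporary directories here are stand-ins for the Downloads paths in the question):

```python
import glob
import os
import shutil
import tempfile

# Sketch of the glob-then-copy approach from the answers above, in
# Python; the directories are temporary stand-ins for real paths.
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()

# Create a couple of files matching the test*.txt pattern.
for name in ("test1.txt", "test2.txt"):
    with open(os.path.join(src_dir, name), "w") as f:
        f.write("demo")

# Expand the wildcard and copy each match into the destination,
# keeping the original file name (like the second PHP answer).
for path in glob.glob(os.path.join(src_dir, "test*.txt")):
    shutil.copy(path, os.path.join(dst_dir, os.path.basename(path)))

print(sorted(os.listdir(dst_dir)))  # → ['test1.txt', 'test2.txt']
```

Note that, as in the PHP version, the wildcard is never handed to the copy call itself; it is expanded first and each concrete path is copied.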
doc_23534506
http://www.domain.com/thePage.aspx?Param1=1&Param2=23&Param7=22 We want to rewrite the URL as http://www.domain.com/Param1/1/Param2/23/Param7/22 At the same time we need the filter to allow the application to see it as the old URL so it is processed correctly. Again, we don't know what combination of parameters may be used. Can this be done only using the filter? A: As the official forums say, you can turn RewriteCompatibility2 on in Helicon 3.0, so you can use an LP directive (looping with the replace). I use this regex to parse the QueryString (it makes two groups: name with all parameter names, and value with all parameter values) [\?&](?<name>[^&=]+)=(?<value>[^&=]+) Based on the mentioned forum topic, I think you should try something like this: //for replacing '=' RewriteRule ^[\?&]([^&=]+)=([^&=]+)$ $1/$2 [LP,R=301,L] //for replacing '&' RewriteRule ^\?([^&]+)&([^&]+)$ $1/$2 [LP,R=301,L]
doc_23534507
A: Achieving this reliably is not trivial. The protocol specs for SMTP in RFC 821 specify a number of return codes. Notably, 550 is what an SMTP server should return when attempting to send an email to a nonexistent address. I say should because most public-facing SMTP servers won't do this - they either quietly accept the message and then drop it or, if they are a little better-mannered, accept the message but send a "delivery failed" notice back to the sender ("from" address). Public services like MSN and Gmail will also blacklist senders if they send enough emails to non-existing addresses, to prevent spam. The reason for this is to prevent email phishing and spamming. So what you can do is: * *Check for SendFailedException in your code. This will only work for servers that follow the SMTP specifications and actually send an error code back. Like I said, very few public servers actually do this. *Set up a proper mailbox for the address you use as sender and monitor that inbox for delivery-failed notices. Note though that these need not follow any common pattern, which is why this is non-trivial. *For the email servers that don't give any notice, you really have no way of knowing. This is one of the reasons why companies buy mass emailing services from dedicated providers, since they have all these things already built to measure bounce rate etc. But even with those, it's never going to be 100% accurate. A: These FAQ entries might help as well: * *If I send a message to a bad address, why don't I get a SendFailedException or TransportEvent indicating that the address is bad? *When a message can't be delivered, a failure message is returned. How can I detect these "bounced" messages?
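The 550-versus-silent-acceptance distinction above can be sketched as a tiny reply-code classifier (the function name is invented for illustration; per the RFC, 5xx replies are permanent failures and 4xx are transient):

```python
# Sketch: classify SMTP reply codes as described in the answer above.
# The function name is made up; per RFC 821/5321 conventions, 2xx means
# accepted, 4xx is a transient failure, 5xx is a permanent failure.
def classify_smtp_reply(code: int) -> str:
    if 200 <= code < 300:
        return "accepted"           # may still be dropped or bounced later
    if 400 <= code < 500:
        return "transient-failure"  # worth retrying later
    if 500 <= code < 600:
        return "permanent-failure"  # e.g. 550: mailbox unavailable
    return "unknown"

print(classify_smtp_reply(550))  # → permanent-failure
print(classify_smtp_reply(250))  # → accepted
```

The catch described in the answer is exactly that an "accepted" (2xx) reply proves nothing: the server may still drop the message or bounce it asynchronously.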
doc_23534508
When I try to merge it, I get the matrix C = [01012011 1 3 5; 01022011 2 3 5]. The problem is that the resulting matrix C rounds off the values. I want the final matrix C = [01012011 1.2 3.1 5.1; 01022011 2.2 3.3 5.1]. A: I don't know how you are merging your matrices, but you can concatenate them using C = cat(2,A,B) or simply C = [A,B], in your 2D case. Even though Matlab may display rounded values, depending on how your output format is configured (type help format for more information on that), the values of matrix C will be correct. A= [01012011; 01022011]; B =[1.2 3.1 5.1;2.2 3.3 5.1]; C = cat(2,A,B); isequal(B,C(:,2:end)) % will return 1.
doc_23534509
doc_23534510
I have a Node.js file which loads user data via SQL and sends the result to the main file (server). I call this function from an async function to get the value of the row. var sql = require('./db'); let h = /^[a-zA-Z-_.\d ]{2,180}$/; async function getdata(hash){ if (!hash || !hash.match(h)) { return {type: "error", msg:"Verarbeitungsfehler, bitte laden Sie die Seite neu.", admin: false} }else{ const response = await sql.query('SELECT * FROM bnutzer WHERE hash = ?',[hash], async function(err, result, fields){ //console.log(result) return await result }) } } module.exports = { getdata }; async function getvalue(h){ try{ var result = await admin.getdata(h); console.log('1'+result); if(result){ console.log('2'+result) } }catch(err){ console.log(err); console.log('Could not process request due to an error'); return; } } getvalue(user.ident) The data from the SQL query is correct and also shows the correct result when I output it in the console; however, it is not passed back with return to the calling function - only undefined comes back. I have the feeling that await here somehow does not wait for the result. What am I missing here? I have already tried several constellations that I could find on the internet; unfortunately I have not yet reached the goal. I already tried not writing the SQL query with async/await and only writing the calling function with async/await - no result. I have tried various SQL queries with and without an SQL callback. I hope someone can point me to what I am twisting or missing here. A: In the best case, your async callback function will just return into the response variable. In any case, using return await in a nested async function is not the best way. Your result becomes the response variable; you're not doing anything with it and it just stays as it is, without a return value. So you end up with a plain function and undefined as the result of its execution, since you're not returning anything from there. 
I suggest: const sql = require('./db'); const h = /^[a-zA-Z-_.\d ]{2,180}$/; async function getdata(hash){ if (!hash || !hash.match(h)) { return {type: "error", msg:"Verarbeitungsfehler, bitte laden Sie die Seite neu.", admin: false} }else{ try { const response = await sql.query('SELECT * FROM bnutzer WHERE hash = ?',[hash]); return response; } catch (err) { throw new Error('Could not proceed with query'); // Whatever message you want here } } } module.exports = { getdata }; And then: async function getvalue(h){ try { const result = await admin.getdata(h); console.log(result, 'Here will be your successful result') } catch(err) { console.log(err); // Do anything you need: console.log, transaction rollback, or whatever. throw err; // No need to return it. You can also make a custom Error here via the "new" operator } } const value = await getvalue(user.ident); // It's async, so you should await it. A: The method in which you are executing the query is incorrect; you should do something like this: async function getdata(hash){ if (!hash || !hash.match(h)) { return {type: "error", msg:"Verarbeitungsfehler, bitte laden Sie die Seite neu.", admin: false} }else{ try{ const response = await sql.query('SELECT * FROM bnutzer WHERE hash = ?',[hash]) return response } catch(error){ throw Error(error.message + "error") } } } Try this; you can't mix callbacks and await the way you're doing it. Just await the result, and in case of an error it will be handled by the catch block.
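The underlying pitfall - returning a value to a callback that the caller never sees, instead of awaiting and returning it - is language-independent. A minimal Python sketch of the corrected shape (all names invented; query() stands in for the SQL call):

```python
import asyncio

# Sketch of the fix described above: await the result and return it
# directly instead of handing it to a callback the caller never sees.
# query() stands in for the SQL call; all names are invented.
async def query(hash_value):
    await asyncio.sleep(0)  # pretend to hit the database
    return {"hash": hash_value, "name": "demo"}

async def getdata(hash_value):
    if not hash_value:
        return {"type": "error"}
    response = await query(hash_value)  # await, then return the value
    return response

async def getvalue(h):
    try:
        return await getdata(h)
    except Exception as err:
        print("Could not process request:", err)
        raise

result = asyncio.run(getvalue("abc"))
print(result["name"])  # → demo
```

Because every function in the chain returns the awaited value, the caller receives the row instead of undefined/None.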
doc_23534511
the result does not meet my expectations : http://localhost/site/action I want the result like this : http://localhost/site/genre/action I don't know where the error is, I've been looking but haven't found a solution. this is my code genre.php <?php /* Template Name: Genre */ get_header(); ?> <div class="content"> <div class="main-content"> <div class="main-container"> <div id="list_categories_categories_list"> <?php get_template_part( 'template-parts/ads-bottom' ); ?> <div class="headline"> <h1> Genre </h1> </div> <div class="box"> <div class="list-categories"> <div class="margin-fix" id="list_categories_categories_list_items"> <?php $terms = get_terms( array( 'taxonomy' => 'genre', 'hide_empty' => false, 'number' => 20 ) ); foreach ($terms as $term){ ?> <?php $image = get_term_meta( $term->term_id, 'image', true ); ?> <a class="item" href="<?php echo $term->slug; ?>" title="<?php echo $term->name; ?>"> <div class="img"> <?php if ( $image != '' ) { echo wp_get_attachment_image( $image, "", ["class" => "thumb"]); } ?> </div> <strong class="title"><?php echo $term->name; ?></strong> <div class="wrap"> <div class="videos">0 videos</div> <div class="rating positive"> 81% </div> </div> </a> <?php } ?> </div> </div> </div> </div> </div> </div> </div> <?php get_footer(); A: You can use this code it will give results as per your requirement. 
<?php /* Template Name: Genre */ get_header(); ?> <div class="content"> <div class="main-content"> <div class="main-container"> <div id="list_categories_categories_list"> <?php get_template_part( 'template-parts/ads-bottom' ); ?> <div class="headline"> <h1> Genre </h1> </div> <div class="box"> <div class="list-categories"> <div class="margin-fix" id="list_categories_categories_list_items"> <?php $terms = get_terms( array( 'taxonomy' => 'genre', 'hide_empty' => false, 'number' => 20 ) ); foreach ($terms as $term){ ?> <?php $image = get_term_meta( $term->term_id, 'image', true ); ?> <a class="item" href="<?php echo get_term_link($term->term_id); ?>" title="<?php echo $term->name; ?>"> <div class="img"> <?php if ( $image != '' ) { echo wp_get_attachment_image( $image, "", ["class" => "thumb"]); } ?> </div> <strong class="title"><?php echo $term->name; ?></strong> <div class="wrap"> <div class="videos">0 videos</div> <div class="rating positive"> 81% </div> </div> </a> <?php } ?> </div> </div> </div> </div> </div> </div> </div> <?php get_footer();?>
doc_23534512
Nowadays, property-based testing based on Haskell's QuickCheck is popular. There are a number of ports to Java including: * *quickcheck *jcheck *junit-quickcheck My question is: What is the difference between Agitar and QuickCheck property-based testing? A: It's worth noting that as of version 0.6, junit-quickcheck now supports shrinking: http://pholser.github.io/junit-quickcheck/site/0.6-alpha-3-SNAPSHOT/usage/shrinking.html quickcheck doesn't look to have had any new releases since 2011: https://bitbucket.org/blob79/quickcheck A: To me, the key features of Haskell QuickCheck are: * *It generates random data for testing *If a test fails, it repeatedly "shrinks" the data (e.g., changing numbers to zero, reducing the size of a list) until it finds the simplest test case that still fails. This is very useful, because when you see the simplest test case, you often know exactly where the bug is and how to fix it. *It starts testing with simple data, and gradually moves on to more complex data. This is useful because it means that tests fail more quickly. Also, it ensures that edge cases (e.g., empty lists, zeroes) are properly tested. Quickcheck for Java supports (1), but not (2) or (3). I don't know what features are supported by Agitar, but it would be useful to check. Additionally, you might look into ScalaCheck. Since Scala is interoperable with Java, you could use it to test your Java code. I haven't used it, so I don't know which features it has, but I suspect it has more features than Java Quickcheck.
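The generate-and-shrink behavior described in the answer can be sketched in a few lines of Python (a toy harness, all names invented, not any of the libraries above): it feeds random integers to a property and, on failure, repeatedly moves to a simpler failing value.

```python
import random

# Toy property-based tester illustrating features (1) and (2) above:
# random generation, then shrinking a failing integer toward zero.
def shrink_int(n):
    # Candidate simpler values: zero, the half, and one step toward zero.
    candidates = {0, n // 2, n - (1 if n > 0 else -1)}
    return [c for c in candidates if abs(c) < abs(n)]

def quickcheck(prop, trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-1000, 1000)
        if not prop(x):
            # Shrink: keep moving to a simpler value that still fails.
            while True:
                smaller = [c for c in shrink_int(x) if not prop(c)]
                if not smaller:
                    return x  # minimal counterexample
                x = min(smaller, key=abs)
    return None  # no counterexample found

# A deliberately false property: "every integer is below 100".
counterexample = quickcheck(lambda x: x < 100)
print(counterexample)  # → 100 (the minimal failing value)
```

Real libraries add typed generators, richer shrinkers, and the simple-to-complex generation order mentioned as feature (3), but the shape is the same.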
doc_23534513
In MVC5, are variables in the Global.asax accessible via all sessions, or does MVC create an instance of Global for each session? Example public class Global : System.Web.HttpApplication { public static string Current_UserName = ""; protected void Session_Start(object sender, EventArgs e) { Current_UserName = User.Identity.Name; } } So would user A's Current_UserName change when user B loads the application? A: Current_UserName would essentially be the last user that initialized their session. So user B, who accesses the app after user A, would show "B" in the static variable. A: As Current_UserName is static, the last assigned user will remain in that variable - that is, the user of the last session initiated.
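The last-writer-wins behavior in the answers can be demonstrated with a class-level ("static") attribute in Python (names invented for illustration):

```python
# Sketch of the behavior described in the answers: a class-level
# ("static") attribute is shared by every instance, so the last
# session to write it wins. Names are invented for illustration.
class GlobalApp:
    current_user_name = ""  # one slot shared by all sessions

    def session_start(self, user):
        GlobalApp.current_user_name = user  # overwrites for everyone

a_session = GlobalApp()
a_session.session_start("A")
b_session = GlobalApp()
b_session.session_start("B")

# User A's session now also sees "B": the field is application-wide.
print(GlobalApp.current_user_name)  # → B
```

This is exactly why per-user data belongs in session state, not in a static field on the application class.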
doc_23534514
I can think of one way to achieve this. I could install the aws cli on the EC2 itself and run something like aws ec2 stop-instances --instance-ids i-07c1849fe7abcdef. Doing so feels weird. Is there a better way? Note For complete clarity, I don't want to simply set a timer on the instance. If I set a 45-minute timer, I would pay for 45 minutes even when the task only takes 3 minutes. I would like the instance to stop immediately after the script has run, no matter how long the script took to run. A: Your instance can use the operating system command to power off. On Linux, it would be: sudo shutdown now -h (The -h means Halt, which will trigger the power-off. If not included, the operating system will stop but the instance will keep running.) This can be used in conjunction with the Instance Initiated Shutdown Behavior, which can either: * *Stop the instance (sounds like what you seek) *Terminate the instance (good if a new instance is launched each time) No IAM permissions are required for this method.
doc_23534515
Error: Overload resolution failed because no accessible 'New' accepts this number of arguments. I've followed this example in MSDN. Dim s As String = "Test string" Dim b As New Binding("Description of bind") b.Mode = BindingMode.OneTime b.Source = s A: I'm not familiar with VB but it would appear the line should read: Dim b As New Binding() b.Path = New PropertyPath("Description of bind") I referred to this link.
doc_23534516
A: OK, I have found an answer. In the Joomla administration it is possible to clone com_content the same way as in the frontend. Main menu -> Extensions -> Template manager (now choose the administrator template) -> Create overrides -> com_content -> Article. Now you can see the path (picture below) and edit what you desire.
doc_23534517
HttpCookie myCookie = new HttpCookie("XYZ"); myCookie.Value = newCookie; myCookie.SameSite = SameSiteMode.None; myCookie.Secure = true; myCookie.Expires = DateTime.Now.AddYears(2); context.Response.Cookies.Add(myCookie); But these properties are not working. Even after upgrading the .NET Framework version to 4.8, this flag does not set the desired SameSite=None for the cookie. Also, one strange issue which I am facing: when I use the following lines in web.config, the flag correctly gets set in my cookie when requesting it from the server: <sessionState cookieSameSite="None" /> <httpCookies sameSite="None" requireSSL="true" /> But when I do the same configuration and release on another server behind load balancing, it is not working. Does anyone have any clue on how to set this in a straightforward way in C#? Thanks for your help in advance. A: You must install the December 10, 2019 (or later) cumulative update for the .NET Framework. Search the Microsoft Support site to locate the appropriate update for your version of Windows and the .NET Framework. Without the update, ASP.NET will never add the SameSite=None attribute to response cookies. With the update, ASP.NET will add the SameSite=None attribute to response cookies if you explicitly set HttpCookie.SameSite to SameSiteMode.None or if you set the <httpCookies sameSite="None"> attribute in Web.config. The update also changes the default SameSite mode to Lax for ASP.NET forms authentication and session cookies. To change them back to None, set the <forms cookieSameSite="None"> and <sessionState cookieSameSite="None"> attributes in Web.config. A: Solved this by adding the header directly: Response.Headers.Add("set-cookie", "mysessioncookie=theValue; path=/; SameSite=Strict")
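Whatever the framework patch level, the header the server must ultimately emit is the same everywhere; as a sketch, here is the equivalent built with Python's standard http.cookies module (3.8+, which knows the samesite attribute; cookie name and value are placeholders):

```python
from http.cookies import SimpleCookie

# Sketch of the Set-Cookie header the patched framework should emit,
# built with Python's standard http.cookies module (3.8+ supports the
# samesite attribute). Cookie name and value are placeholders.
cookie = SimpleCookie()
cookie["XYZ"] = "someValue"
cookie["XYZ"]["samesite"] = "None"  # must be paired with Secure
cookie["XYZ"]["secure"] = True
cookie["XYZ"]["path"] = "/"

header = cookie.output()
print(header)
```

Browsers only honor SameSite=None when the Secure flag is also present, which is why the question's combination of SameSiteMode.None and Secure = true is the right pairing once the server actually emits it.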
doc_23534518
I have a Cordova app for Android and iOS and I want to add a new plugin without using the command cordova build [insert your platform here], because that command rebuilds the whole application, removing the current code and resetting the project to its initial state. How can I add plugins without having to back up the entire existing project, build the platform, and copy the code back with the plugins properly added?
doc_23534519
Now I put a test script in the CGI-Executables directory and tried calling it from an HTML file inside the Documents directory. It results in this error: The requested URL /CGI-Executables/SM.php was not found on this server. The file is there, so I'm assuming it's a configuration issue. Looking around, I played with the /etc/apache2/extra/httpd-vhosts.conf file but it seems like overkill. I don't need a virtual host, and I'm not going to serve anything from my development machine. I just want to develop my site. A: You should either put your scripts in /Library/WebServer/Documents/ or call them like /cgi-bin/SM.php Also, are you sure you want CGI scripts or do you just need your PHP to work? If it's the latter, then putting them in the document root is a better option than using them as CGI scripts.
doc_23534520
What I need to do is have that cell highlighted in red. But, once the number is input into the cell, I need the cell color to change back to 'no fill', unless that row is highlighted from the previous macro that was run. Expected result sample I am not opposed to combining these macros if that is easier. Here is the first macro listed above: ' This part highlights all rows that are Disputed ' Keyboard Shortcut: CTRL+SHIFT+L Dim row As Range For Each row In ActiveSheet.UsedRange.Rows If row.Cells(1, "F").Value = "After Dispute For SBU" Then row.Interior.ColorIndex = 6 Else row.Interior.ColorIndex = xlNone End If Next row ' This part clears the Disputed worksheet and copies all disputed rows to the sheet With ThisWorkbook.Worksheets("Disputed") Range(.Range("A2"), .UsedRange.Offset(1, 0)).EntireRow.Delete End With Dim lr As Long, lr2 As Long, r As Long lr = Sheets("Master").Cells(Rows.Count, "A").End(xlUp).row lr2 = Sheets("Disputed").Cells(Rows.Count, "A").End(xlUp).row For r = lr To 2 Step -1 If Range("F" & r).Value = "After Dispute For SBU" Then Rows(r).Copy Destination:=Sheets("Disputed").Range("A" & lr2 + 1) lr2 = Sheets("Disputed").Cells(Rows.Count, "A").End(xlUp).row End If Range("A2").Select Next r Range("C" & Rows.Count).End(xlUp).Offset(1).Select End Sub A: How about just using conditional formatting on the data. You would use a formula like =$A2="RCA Pending" which assumes that the data starts in A2 and the column in question is A. You would need to select all of the columns in all of the rows, starting at A2, and then apply the CF
doc_23534521
A: It means that your script will use up to that amount of RAM before the server just aborts it and you get a memory error ("script tried to allocate 81 bytes that exceeded the limit - exploding") A: Some applications require more memory than the regular PHP limit (16 MB, I think). Programs like Drupal recommend increasing it, and I have seen some large web applications run with 100 MB. Keep in mind this is not necessarily the best way to deal with a heavy program; instead, look at where your memory usage is with memory_get_usage in various places to better understand the heavy usage. Also consider that you may want to control it from php.ini and set it universally for all PHP instances, as opposed to maintaining it on a per-directory basis.
doc_23534522
I'm using Mac OSX Yosemite 10.10.1. CONSOLE LOG Machida-no-MacBook-Air:caffe machidahiroaki$ /usr/bin/clang++ -shared -o .build_release/lib/libcaffe.so .build_release/src/caffe/proto/caffe.pb.o .build_release/src/caffe/proto/caffe_pretty_print.pb.o .build_release/src/caffe/blob.o .build_release/src/caffe/common.o .build_release/src/caffe/data_transformer.o .build_release/src/caffe/dataset_factory.o .build_release/src/caffe/internal_thread.o .build_release/src/caffe/layer_factory.o .build_release/src/caffe/layers/absval_layer.o .build_release/src/caffe/layers/accuracy_layer.o .build_release/src/caffe/layers/argmax_layer.o .build_release/src/caffe/layers/base_data_layer.o .build_release/src/caffe/layers/bnll_layer.o .build_release/src/caffe/layers/concat_layer.o .build_release/src/caffe/layers/contrastive_loss_layer.o .build_release/src/caffe/layers/conv_layer.o .build_release/src/caffe/layers/cudnn_conv_layer.o .build_release/src/caffe/layers/cudnn_pooling_layer.o .build_release/src/caffe/layers/cudnn_relu_layer.o .build_release/src/caffe/layers/cudnn_sigmoid_layer.o .build_release/src/caffe/layers/cudnn_softmax_layer.o .build_release/src/caffe/layers/cudnn_tanh_layer.o .build_release/src/caffe/layers/data_layer.o .build_release/src/caffe/layers/dropout_layer.o .build_release/src/caffe/layers/dummy_data_layer.o .build_release/src/caffe/layers/eltwise_layer.o .build_release/src/caffe/layers/euclidean_loss_layer.o .build_release/src/caffe/layers/exp_layer.o .build_release/src/caffe/layers/flatten_layer.o .build_release/src/caffe/layers/hdf5_data_layer.o .build_release/src/caffe/layers/hdf5_output_layer.o .build_release/src/caffe/layers/hinge_loss_layer.o .build_release/src/caffe/layers/im2col_layer.o .build_release/src/caffe/layers/image_data_layer.o .build_release/src/caffe/layers/infogain_loss_layer.o .build_release/src/caffe/layers/inner_product_layer.o .build_release/src/caffe/layers/loss_layer.o .build_release/src/caffe/layers/lrn_layer.o 
.build_release/src/caffe/layers/memory_data_layer.o .build_release/src/caffe/layers/multinomial_logistic_loss_layer.o .build_release/src/caffe/layers/mvn_layer.o .build_release/src/caffe/layers/neuron_layer.o .build_release/src/caffe/layers/pooling_layer.o .build_release/src/caffe/layers/power_layer.o .build_release/src/caffe/layers/relu_layer.o .build_release/src/caffe/layers/sigmoid_cross_entropy_loss_layer.o .build_release/src/caffe/layers/sigmoid_layer.o .build_release/src/caffe/layers/silence_layer.o .build_release/src/caffe/layers/slice_layer.o .build_release/src/caffe/layers/softmax_layer.o .build_release/src/caffe/layers/softmax_loss_layer.o .build_release/src/caffe/layers/split_layer.o .build_release/src/caffe/layers/tanh_layer.o .build_release/src/caffe/layers/threshold_layer.o .build_release/src/caffe/layers/window_data_layer.o .build_release/src/caffe/leveldb_dataset.o .build_release/src/caffe/lmdb_dataset.o .build_release/src/caffe/net.o .build_release/src/caffe/solver.o .build_release/src/caffe/syncedmem.o .build_release/src/caffe/util/benchmark.o .build_release/src/caffe/util/im2col.o .build_release/src/caffe/util/insert_splits.o .build_release/src/caffe/util/io.o .build_release/src/caffe/util/math_functions.o .build_release/src/caffe/util/upgrade_proto.o -stdlib=libstdc++ -pthread -fPIC -DNDEBUG -O2 -DCPU_ONLY -I/usr/include/python2.7 -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/local/include -I.build_release/src -I./src -I./include -I/usr/local/atlas/include -Wall -Wno-sign-compare -Wno-unneeded-internal-declaration --verbose -framework Accelerate -L/usr/lib -L/usr/local/include -L/usr/local/lib -L/usr/lib -L/usr/local/atlas/lib -L.build_release/lib -lglog -lgflags -lprotobuf -lleveldb -lsnappy -llmdb -lboost_system -lhdf5_hl -lhdf5 -lm -lopencv_core -lopencv_highgui -lopencv_imgproc -lboost_thread-mt -lcblas Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn) Target: x86_64-apple-darwin14.0.0 Thread model: posix clang: 
warning: argument unused during compilation: '-pthread' "/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld" -demangle -dynamic -dylib -arch x86_64 -macosx_version_min 10.10.0 -o .build_release/lib/libcaffe.so -L/usr/lib -L/usr/local/include -L/usr/local/lib -L/usr/lib -L/usr/local/atlas/lib -L.build_release/lib .build_release/src/caffe/proto/caffe.pb.o .build_release/src/caffe/proto/caffe_pretty_print.pb.o .build_release/src/caffe/blob.o .build_release/src/caffe/common.o .build_release/src/caffe/data_transformer.o .build_release/src/caffe/dataset_factory.o .build_release/src/caffe/internal_thread.o .build_release/src/caffe/layer_factory.o .build_release/src/caffe/layers/absval_layer.o .build_release/src/caffe/layers/accuracy_layer.o .build_release/src/caffe/layers/argmax_layer.o .build_release/src/caffe/layers/base_data_layer.o .build_release/src/caffe/layers/bnll_layer.o .build_release/src/caffe/layers/concat_layer.o .build_release/src/caffe/layers/contrastive_loss_layer.o .build_release/src/caffe/layers/conv_layer.o .build_release/src/caffe/layers/cudnn_conv_layer.o .build_release/src/caffe/layers/cudnn_pooling_layer.o .build_release/src/caffe/layers/cudnn_relu_layer.o .build_release/src/caffe/layers/cudnn_sigmoid_layer.o .build_release/src/caffe/layers/cudnn_softmax_layer.o .build_release/src/caffe/layers/cudnn_tanh_layer.o .build_release/src/caffe/layers/data_layer.o .build_release/src/caffe/layers/dropout_layer.o .build_release/src/caffe/layers/dummy_data_layer.o .build_release/src/caffe/layers/eltwise_layer.o .build_release/src/caffe/layers/euclidean_loss_layer.o .build_release/src/caffe/layers/exp_layer.o .build_release/src/caffe/layers/flatten_layer.o .build_release/src/caffe/layers/hdf5_data_layer.o .build_release/src/caffe/layers/hdf5_output_layer.o .build_release/src/caffe/layers/hinge_loss_layer.o .build_release/src/caffe/layers/im2col_layer.o .build_release/src/caffe/layers/image_data_layer.o 
.build_release/src/caffe/layers/infogain_loss_layer.o .build_release/src/caffe/layers/inner_product_layer.o .build_release/src/caffe/layers/loss_layer.o .build_release/src/caffe/layers/lrn_layer.o .build_release/src/caffe/layers/memory_data_layer.o .build_release/src/caffe/layers/multinomial_logistic_loss_layer.o .build_release/src/caffe/layers/mvn_layer.o .build_release/src/caffe/layers/neuron_layer.o .build_release/src/caffe/layers/pooling_layer.o .build_release/src/caffe/layers/power_layer.o .build_release/src/caffe/layers/relu_layer.o .build_release/src/caffe/layers/sigmoid_cross_entropy_loss_layer.o .build_release/src/caffe/layers/sigmoid_layer.o .build_release/src/caffe/layers/silence_layer.o .build_release/src/caffe/layers/slice_layer.o .build_release/src/caffe/layers/softmax_layer.o .build_release/src/caffe/layers/softmax_loss_layer.o .build_release/src/caffe/layers/split_layer.o .build_release/src/caffe/layers/tanh_layer.o .build_release/src/caffe/layers/threshold_layer.o .build_release/src/caffe/layers/window_data_layer.o .build_release/src/caffe/leveldb_dataset.o .build_release/src/caffe/lmdb_dataset.o .build_release/src/caffe/net.o .build_release/src/caffe/solver.o .build_release/src/caffe/syncedmem.o .build_release/src/caffe/util/benchmark.o .build_release/src/caffe/util/im2col.o .build_release/src/caffe/util/insert_splits.o .build_release/src/caffe/util/io.o .build_release/src/caffe/util/math_functions.o .build_release/src/caffe/util/upgrade_proto.o -framework Accelerate -lglog -lgflags -lprotobuf -lleveldb -lsnappy -llmdb -lboost_system -lhdf5_hl -lhdf5 -lm -lopencv_core -lopencv_highgui -lopencv_imgproc -lboost_thread-mt -lcblas -L. 
-L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -lstdc++ -lSystem /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin/libclang_rt.osx.a Undefined symbols for architecture x86_64: "cv::imread(cv::String const&, int)", referenced from: caffe::WindowDataLayer<float>::InternalThreadEntry() in window_data_layer.o caffe::WindowDataLayer<double>::InternalThreadEntry() in window_data_layer.o caffe::ReadImageToCVMat(std::string const&, int, int, bool) in io.o "cv::imdecode(cv::_InputArray const&, int)", referenced from: caffe::DecodeDatumToCVMat(caffe::Datum const&, int, int, bool) in io.o ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) src/caffe/layers/window_data_layer.cpp includes #include "opencv2/core/core.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" It seems I have correct libraries. (Is this right thing to check??) 
Machida-no-MacBook-Air:caffe machidahiroaki$ find /usr/local/lib -name libopencv_*.dylib | grep 'core\|highgui\|imgproc' | xargs ls -ltr lrwxr-xr-x 1 machidahiroaki admin 49 1 10 10:23 /usr/local/lib/libopencv_imgproc.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_imgproc.dylib lrwxr-xr-x 1 machidahiroaki admin 53 1 10 10:23 /usr/local/lib/libopencv_imgproc.3.0.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_imgproc.3.0.dylib lrwxr-xr-x 1 machidahiroaki admin 55 1 10 10:23 /usr/local/lib/libopencv_imgproc.3.0.0.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_imgproc.3.0.0.dylib lrwxr-xr-x 1 machidahiroaki admin 49 1 10 10:23 /usr/local/lib/libopencv_highgui.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_highgui.dylib lrwxr-xr-x 1 machidahiroaki admin 53 1 10 10:23 /usr/local/lib/libopencv_highgui.3.0.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_highgui.3.0.dylib lrwxr-xr-x 1 machidahiroaki admin 55 1 10 10:23 /usr/local/lib/libopencv_highgui.3.0.0.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_highgui.3.0.0.dylib lrwxr-xr-x 1 machidahiroaki admin 46 1 10 10:23 /usr/local/lib/libopencv_core.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_core.dylib lrwxr-xr-x 1 machidahiroaki admin 50 1 10 10:23 /usr/local/lib/libopencv_core.3.0.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_core.3.0.dylib lrwxr-xr-x 1 machidahiroaki admin 52 1 10 10:23 /usr/local/lib/libopencv_core.3.0.0.dylib -> ../Cellar/opencv/HEAD/lib/libopencv_core.3.0.0.dylib Machida-no-MacBook-Air:caffe machidahiroaki$ ls -ltr /usr/local/Cellar/opencv/HEAD/lib/libopencv_*.dylib | grep 'core\|highgui\|imgproc' lrwxr-xr-x 1 machidahiroaki admin 24 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_core.dylib -> libopencv_core.3.0.dylib lrwxr-xr-x 1 machidahiroaki admin 26 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_core.3.0.dylib -> libopencv_core.3.0.0.dylib lrwxr-xr-x 1 machidahiroaki admin 27 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_imgproc.dylib -> libopencv_imgproc.3.0.dylib lrwxr-xr-x 1 
machidahiroaki admin 29 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_imgproc.3.0.dylib -> libopencv_imgproc.3.0.0.dylib lrwxr-xr-x 1 machidahiroaki admin 27 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_highgui.dylib -> libopencv_highgui.3.0.dylib lrwxr-xr-x 1 machidahiroaki admin 29 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_highgui.3.0.dylib -> libopencv_highgui.3.0.0.dylib -r--r--r-- 1 machidahiroaki admin 4305392 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_imgproc.3.0.0.dylib -r--r--r-- 1 machidahiroaki admin 75352 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_highgui.3.0.0.dylib -r--r--r-- 1 machidahiroaki admin 3437740 1 10 10:23 /usr/local/Cellar/opencv/HEAD/lib/libopencv_core.3.0.0.dylib OpenCV libraries seem to be built with libstdc++, because the official install guide suggests so. Machida-no-MacBook-Air:caffe machidahiroaki$ find /usr/local/lib -name libopencv_*.dylib | grep 'core\|highgui\|imgproc' | grep -v '3.0' | xargs otool -L /usr/local/lib/libopencv_core.dylib: /usr/local/lib/libopencv_core.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 104.1.0) /System/Library/Frameworks/OpenCL.framework/Versions/A/OpenCL (compatibility version 1.0.0, current version 1.0.0) /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.5) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0) /usr/local/lib/libopencv_highgui.dylib: /usr/local/lib/libopencv_highgui.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 104.1.0) /usr/local/Cellar/opencv/HEAD/lib/libopencv_videoio.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.5) /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa (compatibility version 1.0.0, current 
version 21.0.0) /usr/local/Cellar/opencv/HEAD/lib/libopencv_imgcodecs.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/local/Cellar/opencv/HEAD/lib/libopencv_imgproc.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/local/Cellar/opencv/HEAD/lib/libopencv_core.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0) /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit (compatibility version 45.0.0, current version 1343.16.0) /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1151.16.0) /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation (compatibility version 300.0.0, current version 1151.16.0) /usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 228.0.0) /usr/local/lib/libopencv_imgproc.dylib: /usr/local/lib/libopencv_imgproc.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 104.1.0) /usr/local/Cellar/opencv/HEAD/lib/libopencv_core.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1213.0.0) All the build commands are correctly with -stdlib=libstdc++ option. Machida-no-MacBook-Air:caffe machidahiroaki$ make -n | grep clang | grep -v stdlib=libstdc++ Thank for your help in advance! Now I find I can use ld with -v option. I keep investigating. 
Machida-no-MacBook-Air:caffe machidahiroaki$ "/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld" -demangle -dynamic -dylib -arch x86_64 -macosx_version_min 10.10.0 -o .build_release/lib/libcaffe.so -L/usr/lib -L/usr/local/include -L/usr/local/lib -L/usr/lib -L/usr/local/atlas/lib -L.build_release/lib .build_release/src/caffe/proto/caffe.pb.o .build_release/src/caffe/proto/caffe_pretty_print.pb.o .build_release/src/caffe/blob.o .build_release/src/caffe/common.o .build_release/src/caffe/data_transformer.o .build_release/src/caffe/dataset_factory.o .build_release/src/caffe/internal_thread.o .build_release/src/caffe/layer_factory.o .build_release/src/caffe/layers/absval_layer.o .build_release/src/caffe/layers/accuracy_layer.o .build_release/src/caffe/layers/argmax_layer.o .build_release/src/caffe/layers/base_data_layer.o .build_release/src/caffe/layers/bnll_layer.o .build_release/src/caffe/layers/concat_layer.o .build_release/src/caffe/layers/contrastive_loss_layer.o .build_release/src/caffe/layers/conv_layer.o .build_release/src/caffe/layers/cudnn_conv_layer.o .build_release/src/caffe/layers/cudnn_pooling_layer.o .build_release/src/caffe/layers/cudnn_relu_layer.o .build_release/src/caffe/layers/cudnn_sigmoid_layer.o .build_release/src/caffe/layers/cudnn_softmax_layer.o .build_release/src/caffe/layers/cudnn_tanh_layer.o .build_release/src/caffe/layers/data_layer.o .build_release/src/caffe/layers/dropout_layer.o .build_release/src/caffe/layers/dummy_data_layer.o .build_release/src/caffe/layers/eltwise_layer.o .build_release/src/caffe/layers/euclidean_loss_layer.o .build_release/src/caffe/layers/exp_layer.o .build_release/src/caffe/layers/flatten_layer.o .build_release/src/caffe/layers/hdf5_data_layer.o .build_release/src/caffe/layers/hdf5_output_layer.o .build_release/src/caffe/layers/hinge_loss_layer.o .build_release/src/caffe/layers/im2col_layer.o .build_release/src/caffe/layers/image_data_layer.o 
.build_release/src/caffe/layers/infogain_loss_layer.o .build_release/src/caffe/layers/inner_product_layer.o .build_release/src/caffe/layers/loss_layer.o .build_release/src/caffe/layers/lrn_layer.o .build_release/src/caffe/layers/memory_data_layer.o .build_release/src/caffe/layers/multinomial_logistic_loss_layer.o .build_release/src/caffe/layers/mvn_layer.o .build_release/src/caffe/layers/neuron_layer.o .build_release/src/caffe/layers/pooling_layer.o .build_release/src/caffe/layers/power_layer.o .build_release/src/caffe/layers/relu_layer.o .build_release/src/caffe/layers/sigmoid_cross_entropy_loss_layer.o .build_release/src/caffe/layers/sigmoid_layer.o .build_release/src/caffe/layers/silence_layer.o .build_release/src/caffe/layers/slice_layer.o .build_release/src/caffe/layers/softmax_layer.o .build_release/src/caffe/layers/softmax_loss_layer.o .build_release/src/caffe/layers/split_layer.o .build_release/src/caffe/layers/tanh_layer.o .build_release/src/caffe/layers/threshold_layer.o .build_release/src/caffe/layers/window_data_layer.o .build_release/src/caffe/leveldb_dataset.o .build_release/src/caffe/lmdb_dataset.o .build_release/src/caffe/net.o .build_release/src/caffe/solver.o .build_release/src/caffe/syncedmem.o .build_release/src/caffe/util/benchmark.o .build_release/src/caffe/util/im2col.o .build_release/src/caffe/util/insert_splits.o .build_release/src/caffe/util/io.o .build_release/src/caffe/util/math_functions.o .build_release/src/caffe/util/upgrade_proto.o -framework Accelerate -lglog -lgflags -lprotobuf -lleveldb -lsnappy -llmdb -lboost_system -lhdf5_hl -lhdf5 -lm -lopencv_core -lopencv_highgui -lopencv_imgproc -lboost_thread-mt -lcblas -L. 
-L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -lstdc++ -lSystem /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin/libclang_rt.osx.a -v @(#)PROGRAM:ld PROJECT:ld64-241.9 configured to support archs: armv6 armv7 armv7s arm64 i386 x86_64 x86_64h armv6m armv7m armv7em Library search paths: /usr/lib /usr/local/include /usr/local/lib /usr/lib /usr/local/atlas/lib .build_release/lib . /usr/local/lib /usr/local/lib /usr/local/lib /usr/local/lib /usr/lib /usr/local/lib Framework search paths: /Library/Frameworks/ /System/Library/Frameworks/ Undefined symbols for architecture x86_64: "cv::imread(cv::String const&, int)", referenced from: caffe::WindowDataLayer<float>::InternalThreadEntry() in window_data_layer.o caffe::WindowDataLayer<double>::InternalThreadEntry() in window_data_layer.o caffe::ReadImageToCVMat(std::string const&, int, int, bool) in io.o "cv::imdecode(cv::_InputArray const&, int)", referenced from: caffe::DecodeDatumToCVMat(caffe::Datum const&, int, int, bool) in io.o ld: symbol(s) not found for architecture x86_64 Dylibs seems to contain appropriate symbols. Machida-no-MacBook-Air:opencv2 machidahiroaki$ nm -g /usr/local/lib/libopencv_* | grep 'imread\|dylib' | grep -B 1 imread /usr/local/lib/libopencv_imgcodecs.3.0.0.dylib: 0000000000004eb0 T __ZN2cv6imreadERKNS_6StringEi -- /usr/local/lib/libopencv_imgcodecs.3.0.dylib: 0000000000004eb0 T __ZN2cv6imreadERKNS_6StringEi -- /usr/local/lib/libopencv_imgcodecs.dylib: 0000000000004eb0 T __ZN2cv6imreadERKNS_6StringEi -- /usr/local/lib/libopencv_superres.dylib: U __ZN2cv6imreadERKNS_6StringEi U __ZN2cv6imreadERKNS_6StringEi U __ZN2cv6imreadERKNS_6StringEi Machida-no-MacBook-Air:opencv2 machidahiroaki$ otool -L /usr/local/lib/libopencv_highgui.dylib | grep imgcodec /usr/local/Cellar/opencv/HEAD/lib/libopencv_imgcodecs.3.0.dylib (compatibility version 3.0.0, current version 3.0.0) Dylibs seems to have the appropriate architecture. 
mmm... Machida-no-MacBook-Air:opencv2 machidahiroaki$ nm -arch x86_64 /usr/local/lib/libopencv_imgcodecs.dylib | grep imread 0000000000004eb0 T __ZN2cv6imreadERKNS_6StringEi 0000000000004f50 t __ZN2cvL7imread_ERKNS_6StringEiiPNS_3MatE Found a nice option for ld, which logs each file the ld loads. Machida-no-MacBook-Air:caffe machidahiroaki$ "/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld" -t -demangle -dynamic -dylib -arch x86_64 -macosx_version_min 10.10.0 ~ @(#)PROGRAM:ld PROJECT:ld64-241.9 configured to support archs: armv6 armv7 armv7s arm64 i386 x86_64 x86_64h armv6m armv7m armv7em Library search paths: /usr/lib /usr/local/include /usr/local/lib /usr/lib /usr/local/atlas/lib .build_release/lib . /usr/local/lib /usr/local/lib /usr/local/lib /usr/local/lib /usr/lib /usr/local/lib Framework search paths: /Library/Frameworks/ /System/Library/Frameworks/ .build_release/src/caffe/proto/caffe.pb.o .build_release/src/caffe/proto/caffe_pretty_print.pb.o .build_release/src/caffe/blob.o .build_release/src/caffe/common.o .build_release/src/caffe/data_transformer.o .build_release/src/caffe/dataset_factory.o .build_release/src/caffe/internal_thread.o .build_release/src/caffe/layer_factory.o .build_release/src/caffe/layers/absval_layer.o .build_release/src/caffe/layers/accuracy_layer.o .build_release/src/caffe/layers/argmax_layer.o .build_release/src/caffe/layers/base_data_layer.o .build_release/src/caffe/layers/bnll_layer.o .build_release/src/caffe/layers/concat_layer.o .build_release/src/caffe/layers/contrastive_loss_layer.o .build_release/src/caffe/layers/conv_layer.o .build_release/src/caffe/layers/cudnn_conv_layer.o .build_release/src/caffe/layers/cudnn_pooling_layer.o .build_release/src/caffe/layers/cudnn_relu_layer.o .build_release/src/caffe/layers/cudnn_sigmoid_layer.o .build_release/src/caffe/layers/cudnn_softmax_layer.o .build_release/src/caffe/layers/cudnn_tanh_layer.o .build_release/src/caffe/layers/data_layer.o 
.build_release/src/caffe/layers/dropout_layer.o .build_release/src/caffe/layers/dummy_data_layer.o .build_release/src/caffe/layers/eltwise_layer.o .build_release/src/caffe/layers/euclidean_loss_layer.o .build_release/src/caffe/layers/exp_layer.o .build_release/src/caffe/layers/flatten_layer.o .build_release/src/caffe/layers/hdf5_data_layer.o .build_release/src/caffe/layers/hdf5_output_layer.o .build_release/src/caffe/layers/hinge_loss_layer.o .build_release/src/caffe/layers/im2col_layer.o .build_release/src/caffe/layers/image_data_layer.o .build_release/src/caffe/layers/infogain_loss_layer.o .build_release/src/caffe/layers/inner_product_layer.o .build_release/src/caffe/layers/loss_layer.o .build_release/src/caffe/layers/lrn_layer.o .build_release/src/caffe/layers/memory_data_layer.o .build_release/src/caffe/layers/multinomial_logistic_loss_layer.o .build_release/src/caffe/layers/neuron_layer.o .build_release/src/caffe/layers/pooling_layer.o .build_release/src/caffe/layers/power_layer.o .build_release/src/caffe/layers/relu_layer.o .build_release/src/caffe/layers/sigmoid_cross_entropy_loss_layer.o .build_release/src/caffe/layers/sigmoid_layer.o .build_release/src/caffe/layers/silence_layer.o .build_release/src/caffe/layers/slice_layer.o .build_release/src/caffe/layers/softmax_layer.o .build_release/src/caffe/layers/softmax_loss_layer.o .build_release/src/caffe/layers/split_layer.o .build_release/src/caffe/layers/tanh_layer.o .build_release/src/caffe/layers/threshold_layer.o .build_release/src/caffe/layers/window_data_layer.o .build_release/src/caffe/leveldb_dataset.o .build_release/src/caffe/lmdb_dataset.o .build_release/src/caffe/layers/mvn_layer.o .build_release/src/caffe/net.o .build_release/src/caffe/solver.o .build_release/src/caffe/syncedmem.o .build_release/src/caffe/util/benchmark.o .build_release/src/caffe/util/im2col.o .build_release/src/caffe/util/insert_splits.o .build_release/src/caffe/util/io.o .build_release/src/caffe/util/math_functions.o 
.build_release/src/caffe/util/upgrade_proto.o /usr/local/lib/libglog.dylib /usr/local/lib/libgflags.dylib /usr/local/lib/libprotobuf.dylib /usr/local/lib/libleveldb.dylib /usr/local/lib/libsnappy.dylib /usr/local/lib/liblmdb.dylib /usr/local/lib/libboost_system.dylib /usr/local/lib/libhdf5_hl.dylib /usr/local/lib/libhdf5.dylib /System/Library/Frameworks//Accelerate.framework/Accelerate /usr/lib/libm.dylib /usr/local/lib/libopencv_core.dylib /usr/local/lib/libopencv_highgui.dylib /usr/local/lib/libopencv_imgproc.dylib /usr/local/lib/libboost_thread-mt.dylib /usr/lib/libcblas.dylib /usr/lib/libstdc++.dylib /usr/lib/libSystem.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib /usr/lib/system/libcache.dylib /usr/lib/system/libcommonCrypto.dylib /usr/lib/system/libcompiler_rt.dylib /usr/lib/system/libcopyfile.dylib /usr/lib/system/libcorecrypto.dylib /usr/lib/system/libdispatch.dylib /usr/lib/system/libdyld.dylib /usr/lib/system/libkeymgr.dylib /usr/lib/system/liblaunch.dylib /usr/lib/system/libmacho.dylib /usr/lib/system/libquarantine.dylib /usr/lib/system/libremovefile.dylib /usr/lib/system/libsystem_asl.dylib /usr/lib/system/libsystem_blocks.dylib /usr/lib/system/libsystem_c.dylib /usr/lib/system/libsystem_configuration.dylib /usr/lib/system/libsystem_coreservices.dylib /usr/lib/system/libsystem_coretls.dylib /usr/lib/system/libsystem_dnssd.dylib /usr/lib/system/libsystem_info.dylib /usr/lib/system/libsystem_kernel.dylib /usr/lib/system/libsystem_m.dylib /usr/lib/system/libsystem_malloc.dylib /usr/lib/system/libsystem_network.dylib /usr/lib/system/libsystem_networkextension.dylib /usr/lib/system/libsystem_notify.dylib /usr/lib/system/libsystem_platform.dylib 
/usr/lib/system/libsystem_pthread.dylib /usr/lib/system/libsystem_sandbox.dylib /usr/lib/system/libsystem_secinit.dylib /usr/lib/system/libsystem_stats.dylib /usr/lib/system/libsystem_trace.dylib /usr/lib/system/libunc.dylib /usr/lib/system/libunwind.dylib /usr/lib/system/libxpc.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLinearAlgebra.dylib Undefined symbols for architecture x86_64: "cv::imread(cv::String const&, int)", referenced from: caffe::WindowDataLayer<float>::InternalThreadEntry() in window_data_layer.o caffe::WindowDataLayer<double>::InternalThreadEntry() in window_data_layer.o caffe::ReadImageToCVMat(std::string const&, int, int, bool) in io.o "cv::imdecode(cv::_InputArray const&, int)", referenced from: caffe::DecodeDatumToCVMat(caffe::Datum const&, int, int, bool) in io.o ld: symbol(s) not found for architecture x86_64 Machida-no-MacBook-Air:caffe machidahiroaki$ A: Solved! cv::imread(cv::String const&, int) is defined on libopencv_imgcodecs.dylib and Makefile is missing it. So, I added opencv_imgcodecs to Makefile. LIBRARIES += glog gflags protobuf leveldb snappy \ lmdb \ boost_system \ hdf5_hl hdf5 \ opencv_imgcodecs opencv_highgui opencv_imgproc opencv_core pthread
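The failure pattern above (ld listing each missing symbol in double quotes) is easy to harvest programmatically when there are many of them. A minimal Python sketch, using an abbreviated copy of the error text from this session, pulls out the symbol names that then need to be located with nm:

```python
import re

# Abbreviated excerpt of the ld error output shown above (assumed shape;
# the real log includes more "referenced from" lines).
ld_error = '''Undefined symbols for architecture x86_64:
  "cv::imread(cv::String const&, int)", referenced from:
      caffe::ReadImageToCVMat(std::string const&, int, int, bool) in io.o
  "cv::imdecode(cv::_InputArray const&, int)", referenced from:
      caffe::DecodeDatumToCVMat(caffe::Datum const&, int, int, bool) in io.o
ld: symbol(s) not found for architecture x86_64'''

# ld prints each missing symbol in double quotes at the start of a line.
missing = re.findall(r'^\s*"([^"]+)", referenced from:', ld_error, re.MULTILINE)
print(missing)
```

Each extracted signature can then be searched for in the candidate dylibs (e.g. `nm -g <lib> | c++filt`), which is the manual step that eventually pointed at libopencv_imgcodecs.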
doc_23534523
The autocomplete suggestions appear when editing code - this is wanted, it is good. They also appear when editing text files though - this is not wanted, it is not good. Specifically, if I create a new text file in FlashDevelop (so the file is called "readme.txt" or something), after typing a few words, FD tries to "guess" what I'm typing and pops up the suggestions list. This, of course, makes no sense when trying to type out things that aren't code. Yes, I've seen this question: Flashdevelop - Disable autocomplete for txt files, and no, it doesn't work - even after a restart. Here is a screenshot showing it not working...

A: Can confirm, setting that option doesn't prevent completion in .txt files. Consider opening an issue on the FlashDevelop repository.

A: Looks like a bug. For now you can simply disable the BasicCompletion plugin (check Disable and restart FlashDevelop).
doc_23534524
I have the following setup in my public_html directory: 2 folders in /var/www/html:
* prod
* test
The objective is to serve requests received for test.domain.com from the test directory and the ones received for domain.com from the prod directory. The setup is working fine with acme SSL certificates, i.e., for production (domain.com) we are using an acme SSL certificate, DNS is pointing to an Elastic IP, and it works fine. Even test.domain.com was working fine with the acme SSL setup. However, I'm trying to switch to ACM. As it works only with CF and ELB (AWS Elastic Load Balancer), I created a CF distribution:
* Created one CloudFront (CF) distribution pointing to the AWS EC2 endpoint with Origin path /test.
* Redirected test.domain.com to the CF distribution in Google DNS, as the domain is registered with them.
With this setup, test.domain.com is also presenting domain.com and not the test server as anticipated. https.conf has the correct DocumentRoot for each ServerName, but the request is not hitting the test server's virtual host. What's missing? Please suggest.

A: You can try the following:
* Add both domain.com and test.domain.com to the CloudFront CNAME (alternate domain names) list.
* Use an ACM certificate whose common name/SAN covers domain.com and *.domain.com (or test.domain.com).
* In the CloudFront cache behavior, whitelist the Host header; this will make sure that when a client accesses domain.com, CloudFront sends the same value in the Host header when contacting the origin. Link: Forward host header
doc_23534525
List<var> someVariable = new List<var>();
someVariable.Add( new{Name="Krishna", Phones = new[] {"555-555-5555", "666-666-6666"}} );
This is because I need to create a collection at runtime.

A: I spent quite a lot of time trying to find a way to save myself some time using a list of anonymous types, then realised it was probably quicker just to use a private class inside the current class...

private class Lookup
{
    public int Index;
    public string DocType;
    public string Text;
}

private void MyMethod()
{
    List<Lookup> all_lookups = new List<Lookup>
    {
        new Lookup() {Index=4, DocType="SuperView", Text="SuperView XML File"},
        new Lookup() {Index=2, DocType="Word", Text="Microsoft Word Document"}
    };
    // Use my all_lookups variable here...
}

A: How about dynamic?

List<dynamic> dynamicList = new List<dynamic>();
dynamicList.Add(new { Name = "Krishna", Phones = new[] { "555-555-5555", "666-666-6666" } });

A: It involves a bit of hackery but it can be done.

static List<T> CreateListFromSingle<T>(T value)
{
    var list = new List<T>();
    list.Add(value);
    return list;
}

var list = CreateListFromSingle(
    new{Name="Krishna", Phones = new[] {"555-555-5555", "666-666-6666"}}
);

A: You can make a list like this, but you'll again have to use some serious hackery, and you'll have to use some "type by example" situations. For example:

// create the first list by using a specific "template" type.
var list = new [] { new { Name="", Phones=new[] { "" } } }.ToList();
// clear the list. The first element was just an example.
list.Clear();
// start adding "actual" values.
list.Add(new { Name = "Krishna", Phones = new[] { "555-555-5555", "666-666-6666" } });

A: In general you can use the (arguably bad-smelling) cast by example trick others have mentioned to create instances of any generic type parameterized with an anonymous type for the type argument.
However, for List<T> there is a slightly less gross way to do it:

var array = new[] { new { Name="Krishna", Phones = new[] {"555-555-5555", "666-666-6666"} } };
var list = array.ToList();

Your sketch of a proposed syntax is similar to a feature we did not implement for C# 3 or 4, but we considered. We call the feature "mumble types", and it would go something like this:

List<?> myList = new List<?>() { new { Name="Krishna", Phones = new[] {"555-555-5555", "666-666-6666"} } };

We call it "mumble types" because of course you'd read it "myList is a new list of hrmmf". :-) The idea is that the compiler would look at the initializers and do its best to figure out what the type could possibly be, just the same way as how "var" means "look at the initializer and figure out what the type of the variable is". Whether we'd use "var" as the "mumble" or "?" (which is similar to what Java does in a related feature), or something else is an open question. In any event, I wouldn't hold my breath waiting for this feature if I were you. It hasn't made the cut for several language versions so far, but it will stay on the list of possibilities for a while longer I think. If, hypothetically speaking, we were to be designing future versions of the language. Which we might or might not be. Remember, Eric's musings about future versions of C# are for entertainment purposes only.

A: I don't think this is possible. Maybe in C# 4 using the dynamic keyword?

A: Here's an approach that is somewhat cleaner than many of the other suggestions:

var list = Enumerable.Repeat(new { Name = "", Phones = new[] { "" } }, 0)
                     .ToList();
// ...
list.Add(new { Name = "Krishna", Phones = new[] { "555-555-5555", "666-666-6666" } });

A: You can't make a collection of an anonymous type like this. If you need to do this, you'll need to either use List<object>, or make a custom class or struct for your type.
Edit: I'll rephrase this: Although, technically, it's possible to make a list of an anonymous type, I would strongly recommend never doing this. There is pretty much always a better approach, as doing this is just making code that is nearly unmaintainable. I highly recommend making a custom type to hold your values instead of using anonymous types. A custom type will have all of the same capabilities (since anonymous types are defined, by the compiler, at compile time), but will be much more understandable by the developer who follows you... And just to play, too, here's my entry for "code I'd never actually want to use in the real world":

var customer = new { Name = "Krishna", Phones = new[] { "555-555-5555", "666-666-6666" } };
var someVariable = new[]{1}.Select(i => customer).ToList();
doc_23534526
I have a DB table called 'entity_aliases' which holds many variations of band/artist names in my system. The table looks like this:

entity_aliases
int(11) auto inc. PK
entity_type enum(artist, band)
entity_id int(11)
entity_alias varchar(100) + full text search index

Example entity_alias (field) values:
Beyoncé
Beyoncé
Giselle Knowles
Giselle Knowles
...

General explanation of the type of queries I'd like to perform: my service needs to provide information about artists/bands. In order to do so, my clients need to provide me with the entity name. My clients (sometimes) provide an entity name with typos, or a name that is not found exactly in the DB (in our case "Beyonce Knowles"; also note the European "é"). So the demands are:
* I'm using sharded MySQL, so 'entity_aliases' is also sharded. It needs to index more than 1 MySQL server.
* It needs to support 80M names.
* Nice to have: ignore/overcome minor typos or European characters (fuzzy search).
* Needs to be supported by PHP (CakePHP).
* Entity names probably won't exceed 20-25 chars.
* The query itself is very simple: I provide a "name" and in return I'd like to get a list of similar entities (entity_id and entity_type) and, if possible, a score.
* I need to index entities on the fly and the index should take effect immediately.
Things I'd like to know:
* Is this doable using Lucene/Solr?
* Is there any better solution that I need to consider?
* How should my schema look?
Thanks!

A: Sounds doable with Solr. Without going into the details, you could convert accented characters into their ASCII counterparts using an ASCIIFoldingFilterFactory http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.ASCIIFoldingFilterFactory To search against words that sound similar but are misspelled you could use a PhoneticFilter http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PhoneticFilterFactory You would need to play around with the different filters to see what works best for you.
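For a quick feel of what the ASCIIFoldingFilterFactory step would do to indexed tokens, the accent folding can be approximated in plain Python with unicodedata; this is only an illustration of the transform, not how Solr itself is configured:

```python
import unicodedata

def ascii_fold(text: str) -> str:
    # Decompose accented characters (e.g. 'é' -> 'e' + combining acute),
    # then drop the combining marks, leaving the ASCII base letters.
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(ascii_fold("Beyoncé"))  # Beyonce
```

Applying the same fold at index time and at query time is what lets a user typing "Beyonce" match the stored "Beyoncé".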
doc_23534527
On click of a button, I need to add specific dynamic controls, enter values, and pass those values on clicking a submit button, using jQuery. For example: in the Create Event screen of Google Calendar, we can add multiple options for the reminder. Like that, I need to add controls dynamically and pass their values to the database using jQuery. Any suggestions on this? A: First step is to add the controls dynamically: $(function() { $('#someButton').click(function() { // Add input control to the form: $('#someDivInsideTheFormThatWillHoldTheControl').append( '<input type="text" name="dynamicControl" id="dynamicControl" />' ); }); }); And in the action that will handle the form submission you can read the value: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Index(string dynamicControl) { // dynamicControl should hold the value entered // in the dynamically added input. } UPDATE: To get the value of the dynamically added control you can use the val function: var value = $('#dynamicControl').val(); A: You can add the dynamic controls as specified in the previous answer. Just make sure that you append these input elements within a "form" (you don't have to do this if you are planning to make an ajax "post", though). And, if you want to submit multiple values in a single form submit, make sure that you follow the conventions specified in this blog post => Model binding to a List
doc_23534528
It would look like this: Cat1: 1 Cat2: 4 Cat3: 2 How can I do this? A: Tables: blog_posts(id, title, catid, .... ) blog_categories(id, title, ... ) PDO: $sql=$dbh->query("SELECT blog_categories.*, COUNT(blog_posts.CatID) AS count FROM blog_categories LEFT JOIN blog_posts ON blog_posts.CatID=blog_categories.ID GROUP BY blog_categories.ID ORDER BY count DESC"); while($row=$sql->fetch(PDO::FETCH_OBJ)) { echo $row->title.': '.$row->count; } Result: Cat 2: 5 Cat 1: 3 Cat 3: 1
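The same LEFT JOIN / GROUP BY pattern works on any SQL engine, not just MySQL via PDO. A self-contained sketch using Python's sqlite3 (table contents invented to mirror the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE blog_categories (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE blog_posts (id INTEGER PRIMARY KEY, title TEXT, catid INTEGER);
    INSERT INTO blog_categories VALUES (1, 'Cat1'), (2, 'Cat2'), (3, 'Cat3');
    INSERT INTO blog_posts (title, catid) VALUES
        ('p1', 1), ('p2', 2), ('p3', 2), ('p4', 2), ('p5', 2), ('p6', 3);
""")

# LEFT JOIN keeps categories even when they have no posts, and
# COUNT(p.catid) ignores the NULLs those rows produce, so an empty
# category correctly reports 0 instead of disappearing.
rows = conn.execute("""
    SELECT c.title, COUNT(p.catid) AS count
    FROM blog_categories c
    LEFT JOIN blog_posts p ON p.catid = c.id
    GROUP BY c.id
    ORDER BY count DESC
""").fetchall()

for title, count in rows:
    print(f"{title}: {count}")
```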
doc_23534529
dframe = pd.DataFrame({"A":list("aaaabbbbccc"), "C":range(1,12)}, index=range(1,12)) Out[9]: A C 1 a 1 2 a 2 3 a 3 4 a 4 5 b 5 6 b 6 7 b 7 8 b 8 9 c 9 10 c 10 11 c 11 to subset based on column value: In[11]: first = dframe.loc[dframe["A"] == 'a'] In[12]: first Out[12]: A C 1 a 1 2 a 2 3 a 3 4 a 4 To drop based on column value: In[16]: dframe = dframe[dframe["A"] != 'a'] In[17]: dframe Out[16]: A C 5 b 5 6 b 6 7 b 7 8 b 8 9 c 9 10 c 10 11 c 11 Is there any way to do both in one shot? Like subsetting rows based on a column value and deleting same rows in the original df. A: It's not really in one shot, but typically the way to do this is reuse a boolean mask, like this: In [28]: mask = dframe['A'] == 'a' In [29]: first, dframe = dframe[mask], dframe[~mask] In [30]: first Out[30]: A C 1 a 1 2 a 2 3 a 3 4 a 4 In [31]: dframe Out[31]: A C 5 b 5 6 b 6 7 b 7 8 b 8 9 c 9 10 c 10 11 c 11 A: You can also use drop() dframe = dframe.drop(dframe.index[dframe.A == 'a']) Output: A C 5 b 5 6 b 6 7 b 7 8 b 8 9 c 9 10 c 10 11 c 11 If you want to fix the index, you can do this. dframe.index = range(len(dframe)) Output: A C 0 b 5 1 b 6 2 b 7 3 b 8 4 c 9 5 c 10 6 c 11 A: An alternate way to think about it. gb = dframe.groupby(dframe.A == 'a') isa, nota = gb.get_group(True), gb.get_group(False)
doc_23534530
ControlPaint.DrawReversibleLine( PointToScreen(new Point(x, y1)), PointToScreen(new Point(x, y2)), Color.Black); If high-dpi support is not turned on in the app and the app is launched on a hi-res screen, the x, y1 and y2 coordinates come from the mouse events as if the app worked on a 96 dpi screen because of the Windows dpi virtualization. But when I pass these numbers to the Control.PointToScreen() function, it processes them taking into account the real resolution of the screen. As a result, the points I get from PointToScreen are shifted to the left-top corner on the screen on 4K screens. Is there a simple way to overcome this problem? Please, take into account modern multi-monitor configurations in which every monitor may have its own resolution.
doc_23534531
For example, does a Print Document or Send to Printer task exist? I can't seem to find one, but you would expect such a basic task to be available in SSIS. By Print I mean sending the document to a printer, not the SQL PRINT statement. I would like to create an SSRS report, save it in a folder and then print the document automatically. Thanks! A: No, there is no "print file" task, or anything like it. SSIS tasks tend to be very general, yet highly configurable. You could use an Execute Process Task to print the file via the Windows command line. Put it after a File System Task that moves your report to the desired location.
doc_23534532
I'll spare the full code but it gets down to m_VersionControlServer.GetItem(source).IsBranch For whatever reason this always returns false. Am I missing something, or is it just broken? A: You need to call one of the overloads of GetItem() that has a GetItemsOptions parameter and pass in GetItemsOptions.IncludeBranchInfo. For example: var isBranch = m_VersionControlServer.GetItem( path: source, version: VersionSpec.Latest, deletedState: DeletedState.NonDeleted, options: GetItemsOptions.IncludeBranchInfo).IsBranch;
doc_23534533
api.twitter.com/oauth2/token?oauth_token=xxxxxxxxxxxxxxxxxxxxxx:1 GET https://api.twitter.com/oauth2/token?oauth_token=xxxxxxxxxxxxxxxxxxxxxx 400 () /oauth2/token?oauth_token=xxxxxxxxxxxxxxxxxxxxxx:1 Refused to apply inline style because it violates the following Content Security Policy directive: "style-src https://abs.twimg.com https://abs-0.twimg.com". Either the 'unsafe-inline' keyword, a hash ('sha256-4Su6mBWzEIFnH4pAGMOuaeBrstwJN4Z3pq/s1Kn4/KQ='), or a nonce ('nonce-...') is required to enable inline execution. My "connect twitter" page has a meta CSP: <meta http-equiv="Content-Security-Policy" content="default-src *; style-src 'self' http://* https://* 'unsafe-inline'; script-src 'self' http://* 'unsafe-inline' 'unsafe-eval'; img-src * data:" /> What's going wrong? A: That's complaining about Twitter's CSP directive and not your own. You can see their directive here, and it contains the following: style-src https://abs.twimg.com https://abs-0.twimg.com Which directly matches the error message. Why Twitter are apparently blocking their own api call I don't know. Btw, on a separate note, I think your syntax is wrong, as I don't think https://* is allowed; it should be: <meta http-equiv="Content-Security-Policy" content="default-src *; style-src 'self' http: https: 'unsafe-inline'; script-src 'self' http: 'unsafe-inline' 'unsafe-eval'; img-src * data:" />
doc_23534534
template <class T> void foo() { if (T == int) { // Sadly, this sort of comparison doesn't work printf("Template parameter was int\n"); } else if (T == char) { printf("Template parameter was char\n"); } } Is this possible? A: This is the purpose of template specialization, a search for that term gives tons of examples. #include <iostream> template <typename T> void foo() { std::cout << "Unknown type " << typeid(T).name() << "\n"; } template<typename T> void fooT(T const& x) { foo<T>(); } template<> void foo<int>() { printf("Template parameter was int\n"); } template<> void foo<char>() { printf("Template parameter was char\n"); } int main() { fooT(std::cout); fooT(5); fooT('a'); fooT("Look Here"); } A: By using the power of partial specialization, this can be done at compile time: template<class T, class U> struct is_same_type { static const bool value = false; }; template<class T> struct is_same_type<T, T> { static const bool value = true; }; template <class T> void foo() { if (is_same_type<T, int>::value) { printf("Template parameter was int\n"); } else if (is_same_type<T, char>::value) { printf("Template parameter was char\n"); } } Compiled in my head, but should work nonetheless. A: Using template specialization or typeid would probably work for you, although you might prefer template specialization as it won't incur the runtime cost of typeid. For example: #include <iostream> #include <typeinfo> template <typename T> void foo(T arg) { if (typeid(arg) == typeid(int)) std::cout << "foo<T> where T is int\n"; else if (typeid(arg) == typeid(double)) std::cout << "foo<T> where T is double\n"; else if (typeid(arg) == typeid(char)) std::cout << "foo<T> where T is char\n"; } template <> void foo<int>(int arg) { std::cout << "foo<int>\n"; } int main() { foo(3); // foo<int> foo(3.0); // foo<T> where T is double foo('c'); // foo<T> where T is char } A: Use type_info directly, or better still typeid operator to do that. 
#include <typeinfo> template < typename T > T max( T arg1, T arg2 ) { cout << typeid( T ).name() << "s compared." << endl; return ( arg1 > arg2 ? arg1 : arg2 ); }
doc_23534535
POST http://somesite.com/api/v2/stuff {"cool":"stuff"} and there are < 25 things in stuff: 200 OK if > 25 things in stuff: ??? DELETE http://somesite.com/api/v2/stuff/:id POST http://somesite.com/api/v2/stuff {"cool":"stuff"} 200 OK What is the best code for this? Straight 400? 409 CONFLICT? 429? None seem quite right.. A: Use 409. From httpbis section 7.5.8: "The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The payload SHOULD include enough information for the user to recognize the source of the conflict." In your case, the resource is the one identified by http://somesite.com/api/v2/stuff, and the POST request cannot be completed due to a conflict with its current state (which is that it is already maxed out). In your response, give the user enough info (preferably links) to delete one of the existing members, up the limit, or take some other action. Then they can resubmit the original request.
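The decision itself is tiny; for illustration, here is a hypothetical sketch of the handler's status selection (all names are mine, not from the question), returning 409 with a payload that tells the caller how to resolve the conflict, per the httpbis wording quoted above:

```python
MAX_STUFF = 25  # assumed collection limit from the question

def post_stuff(collection, item):
    """Return (status, body) for a POST against the collection resource.

    409 signals 'conflict with the current state of the resource'; the
    body gives the caller enough information to resolve it and resubmit.
    """
    if len(collection) >= MAX_STUFF:
        return 409, {
            "error": f"collection is full (max {MAX_STUFF} items)",
            "hint": "DELETE an existing item, then resubmit the POST",
        }
    collection.append(item)
    return 200, {"cool": "stuff"}
```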
doc_23534536
var photo = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions() { PhotoSize= PhotoSize.Small, CompressionQuality = 100 }); if (photo != null) ProductPic.Source = ImageSource.FromStream(() => { return photo.GetStream(); }); Then I send the photo in a POST method: HttpClient httpClient = new HttpClient(); MultipartFormDataContent mt = new MultipartFormDataContent(); photo.GetStream().Position = 0; StreamContent imagePart = new StreamContent(photo.GetStream()); imagePart.Headers.Add("files", "jpg"); mt.Add(imagePart, string.Format("image"), string.Format("bsk.jpeg")); var response = await httpClient.PostAsync("http://111.111.111.111:2222/upload", mt); The problem I'm facing is this error: "{\"statusCode\":400,\"error\":\"Bad Request\",\"message\":\"Bad Request\",\"data\":{\"errors\":[{\"id\":\"Upload.status.empty\",\"message\":\"Files are empty\"}]}} A: The error indicates the image part is empty; try replacing StreamContent with ByteArrayContent and sending a byte array instead of a stream. HttpClient httpClient = new HttpClient(); MultipartFormDataContent mt = new MultipartFormDataContent(); mt.Headers.ContentType.MediaType = "multipart/form-data"; var upfilebytes = File.ReadAllBytes(photo.Path); mt.Add(new ByteArrayContent(upfilebytes, 0, upfilebytes.Count()), string.Format("image"), string.Format("bsk.jpeg")); var response = await httpClient.PostAsync("http://111.111.111.111:2222/upload", mt); Refer to https://stackoverflow.com/a/61095848/8187800.
doc_23534537
I call loadFragment to switch fragments, as below: private boolean loadFragment(Fragment fragment) { //switching fragment if (fragment != null) { getSupportFragmentManager() .beginTransaction() .replace(R.id.fragment_container, fragment) .commit(); return true; } return false; } The problem is that my app crashes if I start clicking randomly and change fragments quickly; when I check Logcat it shows an NPE while setting some data. My Fragment consists of the following methods: @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { // Inflate the layout for this fragment View view = inflater.inflate(R.layout.fragment_current, container, false); unbinder = ButterKnife.bind(this, view); context = getActivity(); callCategoryAPI(); return view; } callCategoryAPI() is a Retrofit call in a separate controller class; it brings back the response via an interface and then sets the data. So what I suspect is that my API returns its response (as it's asynchronous) but the views are no longer available because the user has changed the fragment (quickly), so the views are null. I already tried setUserVisibleHint, tried blocking clicks for 1200 ms, and also checked whether my fragment's view was created. How do I stop this crash and make the Retrofit call lifecycle-dependent? 
Logcat at com.example.CurrentFragment$1.onApiSuccess(CurrentFragment.java:82) at com.example.services.current_statement.CurrentStatementController$2.onResponse(CurrentStatementController.java:91) at retrofit2.ExecutorCallAdapterFactory$ExecutorCallbackCall$1$1.run(ExecutorCallAdapterFactory.java:70) at android.os.Handler.handleCallback(Handler.java:790) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:171) at android.app.ActivityThread.main(ActivityThread.java:6606) at java.lang.reflect.Method.invoke(Method.java) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:518) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:823) -- Fatal Exception: java.lang.NullPointerException: Attempt to invoke virtual method 'void android.widget.TextView.setText(java.lang.CharSequence)' on a null object reference at com.example.fragment.statements.CurrentFragment$1.onApiSuccess(CurrentFragment.java:82) at com.example.services.current_statement.CurrentStatementController$2.onResponse(CurrentStatementController.java:91) at retrofit2.ExecutorCallAdapterFactory$ExecutorCallbackCall$1$1.run(ExecutorCallAdapterFactory.java:70) at android.os.Handler.handleCallback(Handler.java:790) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:171) at android.app.ActivityThread.main(ActivityThread.java:6606) at java.lang.reflect.Method.invoke(Method.java) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:518) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:823) A: set in your fragment if(activity != null && isAdded) { // perform your task } A: Check in fragment if(activity != null && isAdded){ Perform your operation } A: set in your fragment if(getActivity() != null && isAdded) { //do your operation } A: This is the problem: Attempt to invoke virtual method 'void android.widget.TextView.setText(java.lang.CharSequence)' on a null object 
reference You are trying to set text on a TextView which is null. You should first check that your TextView is not null, like this: if (textview != null){ textview.setText("mytext"); } A: Use the library below to block the activity until the network operation is finished. Spot Dialog This is what I used for my app in the image which I showed you earlier. A: I may be late, but I hope I can help other developers as well, for I have encountered this problem too. Just use isAdded() in your condition in the onResponse callback, e.g.: if (isAdded()) { // handle success response } and likewise check isAdded() before handling an error response. A: In Kotlin on Android, use the safe-call operator: textView?.text. The ? means the call is skipped (instead of crashing) when the reference is null; use it as shown above. A: If you change navigation drawer menu items quickly, the current fragment is replaced by a new one just as quickly. Android can cancel ongoing work running on the main thread, but a heavy task, such as a background network response or a delayed Handler task, that holds a reference to the fragment can find it null because the fragment has already been detached or replaced. So you just have to check that your fragment is still added to its activity before processing response data that arrives late: if(frag.isAdded()){ //lets process your delayed response data }
doc_23534538
def tetetetestae() = Action.async { implicit request => request.session.get("email") match { case Some(email) => for { t <- test("","","","","") } yield { if(t){ Ok("") } else { Ok("") } } } } } def test(email: String, firstName: String, lastName: String, c_password: String, n_password: String) = { WS.url("http://hire.monster.com:81/authenticateUser") .post(Map("email" -> Seq(email), "password" -> Seq(c_password))) .map(response => if(response.body.contains("true")){ for(result <- WS.url("http://localhost:9001/settings/updateUser") .post(Map("email" -> Seq(email), "firstName" -> Seq(firstName), "lastName" -> Seq(lastName), "n_password" -> Seq(n_password)))) yield result.body.contains("true") } else { false }) } A: You can try this: def test(email: String, firstName: String, lastName: String, c_password: String, n_password: String): Future[Boolean] = { WS.url("http://hire.monster.com:81/authenticateUser") .post(Map("email" -> Seq(email), "password" -> Seq(c_password))) .flatMap(response => if(response.body.contains("true")){ for(result <- WS.url("http://localhost:9001/settings/updateUser") .post(Map("email" -> Seq(email), "firstName" -> Seq(firstName), "lastName" -> Seq(lastName), "n_password" -> Seq(n_password)))) yield result.body.contains("true") } else { Future(false) }) } This will return the Future[Boolean] you need.
doc_23534539
I'm pretty much a n00b, so nothing too advanced, and somewhat well explained ;). I'm writing into a file with echo %File% > filer.txt where File should contain the file name and path. Edit: I'm sorry, I was perhaps a bit unclear in my description of the task it should perform. What I meant was: loop through the files and subfolders of a given folder, and write to the document the files with a path longer than 50 characters, including drive and file type. A: As Dennis already noted, batch has no built-in solution for getting the length of the path. But if you only want to know whether a string is longer than n, just check whether there is a character at 0-based index n (i.e. an (n+1)th character): if "%string:~50,1%" neq "" echo %string% is longer than 50 characters Much faster than counting characters. Combined with Dennis' logic: @echo off setlocal EnableDelayedExpansion ( for %%g in ("*") do ( set "File=%%~dpnxg" if "!File:~50,1!" neq "" echo !File! ) )>filer.txt I put the whole for loop into another block to redirect only once to the output file. This is much faster (>> would open the file, write one line and close the file, just to open it again etc...). A: You could use this: @echo off setlocal EnableDelayedExpansion for %%g in ("*") do ( set "File=%%~dpnxg" >x echo !File!&for %%? in (x) do set /a strlength=%%~z? - 2&del x if !strlength! gtr 50 echo !File! >> filer.txt ) This loops through all files in the current folder, and puts their drive, path, name and extension in the variable %File%. It then writes %File% to a temporary file x, gets the string length from that, and deletes x. If the string length is greater than 50 it writes the name of the file to filer.txt. You need the delayed expansion to work with variables inside loops. Note, this code currently uses drive, path, filename and extension. Change the line set "File=%%~dpnxg" to change this behaviour ([d]rive, [p]ath, [n]ame, e[x]tension; the g is necessary)
doc_23534540
Last question for this little bugger; it's time for deployment. In my solution, there are 3 projects. One is the Windows service, another is the imported WCF, and finally a setup project I added for install. I can install/uninstall the service on my machine by going into the solution directory and finding "Setup.exe" or "Setup.msi". Executing either from Explorer will install the service on my development box. Now there are a few directories associated with this solution. I'm betting that simply copying Setup.exe or Setup.msi to my target server and trying to run it will bomb out. How can I find out exactly which files/folders I will need to copy over for deployment? Or should I just copy the entire solution directory? That will be a little difficult for my coworkers, as the setup routines are nested in directories 5 deep. A: Have you even tried?? Basically, if your NT Service is self-sufficient, it shouldn't need anything more than its accompanying config file (YourService.exe.config). And of course, .NET 3.0 (or preferably 3.5 SP1, or 4.0) needs to be installed on the target machine, to have WCF available. There's nothing more, really, that you need - unless you've defined it to be part of your install. But if it is important for your app, you should be putting it into your setup, anyway! The setup should really be able to create everything (files, services, directories) that's needed. A: There appears to be nobody on this planet that knows the answer to this question. So I will write this, so I can close the question. I would rather delete my question, or mark it as 'closed', but there is no way to do that :-( A: This should not be that complex. The Bin\Debug folder of your Windows service is supposed to contain all the dependencies of your Windows service. Here you have one set of assemblies that you need. Now you have the WCF service. I am assuming that you are self-hosting, so you don't have an .svc file. 
All you might have is your service implementation and its dependencies in another bin\debug folder of your WCF project. Here you have a second set of assemblies that you need to deploy. When you are deploying, you can either merge both sets of assemblies into one folder or keep them separate. Your choice. Are you facing a problem when you deploy it to the server, or are you just worried about the complexity beforehand?
doc_23534541
I've been planning on using the same view file for both since all of the elements are shared. The only difference would be the form is blank when creating and it's populated when being edited. Is this the right way to go? I was thinking about having a method for each, so post/create and post/edit($id). In the create method in the post controller I have all the form data like this (for errors): $this->data['item_title'] = array( 'name' => 'item_title', 'id' => 'item_title', 'type' => 'text', 'value' => $this->form_validation->set_value('item_title'), ); I'm thinking about just altering the value to hold the database value instead of set_value(), so something like: public function edit($id) { $post_data = $this->post_model->get_post_data($id); $this->data['item_title'] = array( 'name' => 'item_title', 'id' => 'item_title', 'type' => 'text', 'value' => $post_data['post_title'], ); } Am I on the right track or is there a better way to approach this? Should I just use 2 views? A: I use a partial _form.php that is shared by the new and edit controller actions. On both actions I have the same validations, so I moved those to the controller constructor; then for each input I just use a ternary operator that says: if the existing value $title is provided, populate the <input> value using it, otherwise use the CodeIgniter set_value() helper to populate it with the validation value. <input type="text" name="title" value="<?php echo isset($title) ? set_value("title", $title) : set_value("title"); ?>" /> A: I usually use one view with a few variables in it. The values of the fields can either be set from the data from the server or they can be left blank. Depending on whether data is being provided or not, I change which action the form will use, because it may be adding or editing. This should be the most efficient method since it uses the idea of reusability :) A quick example <form action="<?php echo !$data ?
"admin/add" : "admin/edit" ?> method="post"> <input type="text name="test" value="<?php echo $data['test'] ? $data['test'] : "" ?>" /> </form> A: I'm not pro at CodeIgniter (much better at CakePHP) but in the heart of MVC is that one action has one view. You have no reason to put it in one view. :) A: It's certainly possible, as I do it all of the time. Normally, I would have: Action function edit($PageID = -1) { $Page = new stdClass(); if($PageID === -1) { $Page->Title = $Page->Description = $Page->Keywords = ''; $Page->PageID = -1; } else { $this->load->model('page_model'); $Page = $this->page_model->GetByPageID($PageID); if(empty($Page)) { show_404(); return; } } if($this->input->post('Save', true) !== false) { // perform validation if($PageID === -1) { // insert } else { // update } } $data = array ( 'Page' => $Page ); $this->load->view('edit_page', $data); } View <?= form_open(); ?> <fieldset> <label for="title">Title: </label> <input type="text" name="title" id="title" value="<?= Form::Get('title', $Page->Title); ?>" /> <br /> <label for="description">Description: </label> <input type="text" name="description" id="description" value="<?= Form::Get('description', $Page->Description); ?>" /> <br /> <label for="keywords">Keywords: </label> <input type="text" name="keywords" id="keywords" value="<?= Form::Get('keywords', $Page->Keywords); ?>" /> <br /> <input type="submit" name="Save" value="Save" /> </fieldset> </form> Edit Sorry, I should have mentioned, Form::Get is not a CodeIgniter function, but one I have created. Simply, it takes the path to the Post value you need to read. If it doesn't exist, i.e. you haven't posted, then it will simply display the value from the second parameter. If I can dig the code out for you, I will post it.
doc_23534542
I'm hoping to implement barcode scanning (which is another question entirely) which will then return a product ID or description, for which I can then call a web service to get details on the product. I'm not interested in product pricing but in the actual description or type of product. Does anyone know of a web service for this kind of information? It does not necessarily have to be a web service; I'm happy to call a website if I can read information from it. Any other suggestions would be helpful also. A: This is looking like a good place to start (thanks to emaillenin) http://www.webservicelist.com/webservices/c.asp?cid=30&web+services A: I'm a representative of http://aerse.com. You could have a look at Aerse. It provides: * *product specifications (CPU, display diagonal, RAM, etc.) *product images (raw full-scale and thumbnails) *it's free *no limit on requests
doc_23534543
I'm trying to parse this XML. The challenge is that the number of 'Events' changes from day to day. How would I be able to iterate over the Events when I don't know how many there are? My code so far: import xmltodict fileptr = open("xml/dayplan from 03-07-2022 to 31-07-2022.xml", "r") xml_content = fileptr.read() my_dict = xmltodict.parse(xml_content) event_eng_biler = dict(my_dict['Result']['DayPlans']['DayPlan'][0]['DayPlanNodes']['DayPlanNode'][0]['DayPlanNode'][0]) for element in event_eng_biler: print(element) Which gives this output: nodeType nodeID nodeName position Event DayPlanNode Here it ends for me. I can of course see the first 'Event' because it is Event[0], but is it possible to find out how many Events there are?
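One thing worth knowing about xmltodict: an element that occurs once is parsed as a single dict, while a repeated element becomes a list, so a day with one Event has a different shape from a day with several. The parse call accepts a force_list argument to pin the shape, or you can normalise defensively; a sketch (the dicts below mimic xmltodict's output shapes, they are not the questioner's data):

```python
def as_list(node):
    """Normalise xmltodict output: a missing element is None, a single
    child is a dict, repeats are a list -- return a list in every case."""
    if node is None:
        return []
    return node if isinstance(node, list) else [node]

# Shapes xmltodict produces for zero, one, and several <Event> children:
assert as_list(None) == []
assert len(as_list({"@name": "breakfast"})) == 1
assert len(as_list([{"@name": "breakfast"}, {"@name": "standup"}])) == 2
```

With this, len(as_list(day_plan_node["Event"])) gives the event count no matter how many Events the day has; alternatively, xmltodict.parse(xml_content, force_list=("Event",)) makes Event always parse as a list.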
doc_23534544
wage_data_files <- list.files("WageData", full.names=TRUE, recursive=TRUE) all_files <- grep(paste(toMatch, collapse = "|"), wage_data_files, value=TRUE) files_vector <- vector() for (i in seq_along(all_files)) { files_vector <- c(files_vector, getAbsolutePath(all_files[i])) } Thanks again for all the help. # I'm trying to extract a subset of .csv files within a collection of folders. I want to put them all into a vector, then extract a specific value from each file and place that value into a vector. My question is about how to get all the files I wish to extract values from into a vector, which I can then run a for loop over to extract my desired values and place them into vectors. This is the folder structure: Desktop -> WageData -> 21 folders (sic.1980.annual.by.area; sic.1981.annual.by.area, up to sic.2000.annual.by.area) -> In each of the above folders are roughly 1000 .csv files. I'm trying to extract six of those .csv files in each folder: "Idaho -- Statewide", "Indiana -- Statewide", "Michigan -- Statewide", "Oklahoma -- Statewide", "Texas -- Statewide" and "Wisconsin -- Statewide" So there are a total of 126 files: 6 for each year, for 21 years. Here are a couple of examples of what the specific files are named: sic.1980.annual 40000 (Oklahoma -- Statewide) sic.1980.annual 55000 (Wisconsin -- Statewide) Here is my code: setwd("~/Desktop") wage_data_files <- list.files("WageData", full.names=TRUE) for (i in seq_along(wage_data_files)){ year_files <- list.files(wage_data_files[i]) toMatch <- c("Idaho -- Statewide", "Indiana -- Statewide", "Michigan -- Statewide", "Oklahoma -- Statewide", "Texas -- Statewide", "Wisconsin -- Statewide") dat <- data.frame() states_vector1 <- c(dat, grep(paste(toMatch, collapse = "|"), year_files, value=TRUE)) print(states_vector1)} One problem I'm running into right away, when I try to debug, is that I can't get my results to print out correctly. 
When I put the curly brackets after the print statement, I get a list like this: [[1]] [1] "sic.1980.annual 16000 (Idaho -- Statewide).csv" [[2]] [1] "sic.1980.annual 18000 (Indiana -- Statewide).csv" [[3]] [1] "sic.1980.annual 26000 (Michigan -- Statewide).csv" [[4]] [1] "sic.1980.annual 40000 (Oklahoma -- Statewide).csv" [[5]] [1] "sic.1980.annual 48000 (Texas -- Statewide).csv" [[6]] [1] "sic.1980.annual 55000 (Wisconsin -- Statewide).csv" [[1]] [1] "sic.1981.annual 16000 (Idaho -- Statewide).csv" As you can see, it repeats after 6, even though wage_data_files is a vector of length 21. So my first problem is getting the desired files into a vector. My second problem is how to run a for loop reading those files in and then extracting my desired value. The issue I'm running into is how to set the working directory. Because, for the above function, the working directory is desktop. But to get the read.csv function to work, I'd have to set the working directory to each individual folder (e.g. "WageData/sic.1980.annual.by_area", "WageData/sic.1981.annual.by_area", etc...) Does anyone have any suggestions? Thank you. A: The reason it 'repeats after 6' is that you are creating a new data frame in each loop which causes any existing data to be deleted. You need to initialize the data frame (or vector) before the loop. Here is a possible implementation which also answers your second question: root_directory <- "~/Desktop/WageData" toMatch <- c("Idaho -- Statewide", "Indiana -- Statewide", "Michigan -- Statewide", "Oklahoma -- Statewide", "Texas -- Statewide", "Wisconsin -- Statewide") folders <- list.files(root_directory, full.names = TRUE) # initialize state_vector1 as an empty vector states_vector1 <- c() # loop over folders and get the full path of each file matching a pattern in the toMatch vector for (folder in folders){ year_files <- list.files(folder) # get the names of matching files, e.g. 
"Indiana -- Statewide.csv" matches <- grep(paste(toMatch, collapse = "|"), year_files, value=TRUE) # prepend the path to the directory to get the full path to each file # to get e.g. "~/Desktop/WageData/sic.1980.annual.by.area/Wisconsin -- Statewide.csv" matches <- vapply(matches, function(x) {file.path(folder, x)}, "", USE.NAMES = FALSE) # append the new matches to states_vector1 states_vector1 <- c(states_vector1, matches) } # now you can loop over the vector containing the full path to each file n_files <- length(states_vector1) extracted_values <- rep(NA, n_files) for (i in 1:n_files) { file_content <- read.csv(states_vector1[i]) # create a function `extract_value()` which extracts the information you need from each file extracted_values[i] <- extract_value(file_content) } Testing this by setting up the following directory structure: ~/Desktop/WageData/sic.1980.annual.by.area/ ~/Desktop/WageData/sic.1981.annual.by.area/ where each directory has all six csv files, I get the following output: > states_vector1 [1] "/Users/bene/Desktop/WageData/sic.1980.annual.by.area/Idaho -- Statewide.csv" [2] "/Users/bene/Desktop/WageData/sic.1980.annual.by.area/Indiana -- Statewide.csv" [3] "/Users/bene/Desktop/WageData/sic.1980.annual.by.area/Michigan -- Statewide.csv" [4] "/Users/bene/Desktop/WageData/sic.1980.annual.by.area/Oklahoma -- Statewide.csv" [5] "/Users/bene/Desktop/WageData/sic.1980.annual.by.area/Texas -- Statewide.csv" [6] "/Users/bene/Desktop/WageData/sic.1980.annual.by.area/Wisconsin -- Statewide.csv" [7] "/Users/bene/Desktop/WageData/sic.1981.annual.by.area/Idaho -- Statewide.csv" [8] "/Users/bene/Desktop/WageData/sic.1981.annual.by.area/Indiana -- Statewide.csv" [9] "/Users/bene/Desktop/WageData/sic.1981.annual.by.area/Michigan -- Statewide.csv" [10] "/Users/bene/Desktop/WageData/sic.1981.annual.by.area/Oklahoma -- Statewide.csv" [11] "/Users/bene/Desktop/WageData/sic.1981.annual.by.area/Texas -- Statewide.csv" [12] 
"/Users/bene/Desktop/WageData/sic.1981.annual.by.area/Wisconsin -- Statewide.csv" A: You could try this (hard to test if this will work). You can get the full path name from list.files so you can just use that as the file name for read.csv. I converted the for loop into a couple of apply loops ## Doesn't need to be in loop toMatch <- c("Idaho -- Statewide", "Indiana -- Statewide", "Michigan -- Statewide", "Oklahoma -- Statewide", "Texas -- Statewide", "Wisconsin -- Statewide") results <- lapply(wage_data_files, function(folder) { year_files <- list.files(folder, full.names=T) # get full file names (w/ path) states_vector1 <- grep(paste(toMatch, collapse = "|"), year_files, value=TRUE) ## Get a value from these files sapply(states_vector1, function(fname) { val <- read.csv(fname)[1,1] # get the first value }) }) Here, the return value (stored in results) should be a list of vectors. Each element of the list would contain the results extracted from one of the year folders.
doc_23534545
{ "ok": 0, "code": 8000, "codeName": "AtlasError" } Here is my post model: const postSchema = mongoose.Schema({ title: { type: String, required: true }, message: { type: String, required: true }, //replace creator with name name: String, creator: String, tags: [String], size: String, selectedFile: String, likes: { type: [String], default: [], }, comments: { type: [String], default: [] }, createdAt: { type: Date, default: new Date(), }, dogTreats: { type: Number, default: 0, required: false, } }); and here is my controller/post.js export const getPopular = async (req, res) => { //get current time let currentTime = new Date() //get from 7 days ago currentTime.setDate(currentTime.getDate()-7) console.log(currentTime) // -> output 2022-09-04T19:29:39.612Z try { //sort posts by most likes and within 7 days ago, but with a max of 50 posts const mostPopular = await PostMessage.aggregate([{"$sort": { likes: -1}}, { "$limit": 50}, { "$gt": currentTime }]) res.status(200).json(mostPopular) } catch (error) { res.status(500).json(error) } } A: You can use the find method, which is a better fit here. Aggregation is worth reaching for when you need to populate values from another collection; for a plain filter-sort-limit query like this one, find is the most direct way to fetch the data. const mostPopular = await PostMessage.find({createdAt: {$gt : currentTime}}).sort({likes: -1}).limit(50) A: Try this aggregation. The date comparison has to live inside a $match stage; { "$gt": currentTime } on its own is not a valid pipeline stage: export const getPopular = async (req, res) => { //get current time let currentTime = new Date() //get from 7 days ago currentTime.setDate(currentTime.getDate() - 7) console.log(currentTime) // -> output 2022-09-04T19:29:39.612Z try { //sort posts by most likes and within 7 days ago, but with a max of 50 posts const mostPopular = await PostMessage.aggregate([ { $match: { createdAt: { $gt: currentTime } } }, { $sort: { likes: -1 } }, { $limit: 50 } ]) res.status(200).json(mostPopular) } catch (error) { res.status(500).json(error) } }
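A detail both answers depend on is that currentTime really is the instant seven days before now, computed before the query runs, since MongoDB compares createdAt against a concrete Date value. A small framework-free sketch of that cutoff computation (the fixed reference date below is only for illustration):

```javascript
// Compute the "7 days ago" cutoff used in the $match / find filter.
const DAY_MS = 24 * 60 * 60 * 1000;

function cutoffDaysAgo(days, now) {
  // Millisecond arithmetic avoids setDate's in-place mutation of `now`.
  return new Date(now.getTime() - days * DAY_MS);
}

const now = new Date('2022-09-11T19:29:39.612Z');
const cutoff = cutoffDaysAgo(7, now);
// cutoff.toISOString() -> '2022-09-04T19:29:39.612Z', matching the log above
```

Either this or the setDate form works; the one subtlety is that setDate mutates the Date object it is called on, while the sketch above leaves `now` untouched.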
doc_23534546
I was run my app and is not show anything, I got some bug in this case this is my MainActivity.kt : import android.app.Notification import android.app.NotificationChannel import android.app.NotificationManager import android.app.PendingIntent import android.content.Context import android.content.Intent import android.graphics.Color import android.os.Build import androidx.appcompat.app.AppCompatActivity import android.os.Bundle import android.widget.Button import kotlinx.android.synthetic.main.activity_main.* class MainActivity : AppCompatActivity() { lateinit var notificationManager: NotificationManager lateinit var notificationChannel: NotificationChannel lateinit var builder: Notification.Builder val channelId = "com.example.notiffication" val description = "My Notification" override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) val show = findViewById<Button>(R.id.btn_show) notificationManager = getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager show.setOnClickListener { val intent = Intent() val pendingIntent = PendingIntent.getActivity(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT) if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { notificationChannel = NotificationChannel(channelId, description, NotificationManager.IMPORTANCE_HIGH) notificationChannel.enableLights(true) notificationChannel.lightColor = Color.RED notificationChannel.enableVibration(true) notificationManager.notify(0, builder.build()) notificationManager.createNotificationChannel(notificationChannel) builder = Notification.Builder(this, channelId) .setContentTitle("Android") .setContentText("New Message") .setSmallIcon(R.mipmap.ic_launcher) .setContentIntent(pendingIntent) } else { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { builder = Notification.Builder(this) .setContentTitle("Android") .setContentText("New Message") .setSmallIcon(R.mipmap.ic_launcher) .setContentIntent(pendingIntent) 
} } } } } This is my main_activity.xml: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <Button android:id="@+id/btn_show" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginTop="250dp" android:layout_centerHorizontal="true" android:text="Show Notification" /> </RelativeLayout> This is my logcat: kotlin.UninitializedPropertyAccessException: lateinit property builder has not been initialized at com.example.notiffication.MainActivity.getBuilder(MainActivity.kt:20) at com.example.notiffication.MainActivity$onCreate$1.onClick(MainActivity.kt:53) at android.view.View.performClick(View.java:7448) at android.view.View.performClickInternal(View.java:7425) at android.view.View.access$3600(View.java:810) at android.view.View$PerformClick.run(View.java:28296) at android.os.Handler.handleCallback(Handler.java:938) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:223) at android.app.ActivityThread.main(ActivityThread.java:7656) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947) For Notification.Builder I am using the old API. A: The problem is that you are calling the notify method before building the notification: builder is only assigned after notificationManager.notify(0, builder.build()) runs, so the lateinit property is still uninitialized at that point.
Your activity should look like this class MainActivity : AppCompatActivity() { companion object { private const val CHANNEL_ID = "default_channel" private const val CHANNEL_NAME = "My Notification" private const val NOTIFICATION_ID = 123 } private val notificationManager by lazy { getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager } override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) val btnShow = findViewById<Button>(R.id.btn_show) btnShow.setOnClickListener { createChannel() val notification = buildNotification() notificationManager.notify(NOTIFICATION_ID, notification) } } private fun buildNotification(): Notification { val intent = Intent() val pendingIntent = PendingIntent .getActivity(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT) return NotificationCompat.Builder(this, CHANNEL_ID) .setContentTitle("Android") .setContentText("New Message") .setSmallIcon(R.mipmap.ic_launcher) .setContentIntent(pendingIntent) .build() } private fun createChannel() { if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { val channel = NotificationChannel(CHANNEL_ID, CHANNEL_NAME, NotificationManager.IMPORTANCE_HIGH) channel.enableLights(true) channel.lightColor = Color.RED channel.enableVibration(true) notificationManager.createNotificationChannel(channel) } } } You can learn more about notifications here: * *Official documentation *Official codelab *Official course
doc_23534547
>> find/any "here is a string" "s?r" == "string" I use this extensively in tight loops that need to perform well. But the refinement was removed in Rebol3. What's the most efficient way of doing this in Rebol3? (I'm guessing a parse solution of some sort.) A: Here's a stab at handling the "*" case: like: funct [ series [series!] search [series!] ][ rule: copy [] remove-each s b: parse/all search "*" [empty? s] foreach s b [ append rule reduce ['to s] ] append rule [to end] all [ parse series rule find series first b ] ] used as follows: >> like "abcde" "b*d" == "bcde" A: I had edited your question for "clarity" and changed it to say 'was removed'. That made it sound like it was a deliberate decision. Yet it actually turns out it may just not have been implemented. BUT if anyone asks me, I don't think it should be in the box...and not just because it's a lousy use of the word "ALL". Here's why: You're looking for patterns in strings...so if you're constrained to using a string to specify that pattern you get into "meta" problems. Let's say I want to extract the word *Rebol* or ?Red?, now there has to be escaping and things get ugly all over again. Back to RegEx. :-/ So what you might actually want isn't a STRING! pattern like s?r but a BLOCK! pattern like ["s" ? "r"]. This would permit constructs like ["?" ? "?"] or [{?} ? {?}]. That's better than rehashing the string hackery that every other language uses. And that's what PARSE does, albeit in a slightly-less-declarative way. It also uses words instead of symbols, as Rebol likes to do. [{?} skip {?}] is a match rule where skip is an instruction that moves the parse position past any single element of the parse series between the question marks. It could also do so if it were parsing a block as input, and would match [{?} 12-Dec-2012 {?}]. I don't know entirely what the behavior of /ALL would-or-should be with something like "ab??cd e?*f"... if it provided alternate pattern logic or what. 
I'm assuming the Rebol2 implementation is brief? So likely it only matches one pattern. To set a baseline, here's a possibly-lame PARSE solution for the s?r intent: >> parse "here is a string" [ some [ ; match rule repeatedly to "s" ; advance to *before* "s" pos: ; save position as potential match skip ; now skip the "s" [ ; [sub-rule] skip ; ignore any single character (the "?") "r" ; match the "r", and if we do... return pos ; return the position we saved | ; | (otherwise) none ; no-op, keep trying to match ] ] fail ; have PARSE return NONE ] == "string" If you wanted it to be s*r you would change the skip "r" return pos into a to "r" return pos. On an efficiency note, I'll mention that it is indeed the case that characters are matched against characters faster than strings. So to #"s" and #"r" to end make a measurable difference in the speed when parsing strings in general. Beyond that, I'm sure others can do better. The rule is certainly longer than "s?r". But it's not that long when comments are taken out: [some [to #"s" pos: skip [skip #"r" return pos | none]] fail] (Note: It does leak pos: as written. Is there a USE in PARSE, implemented or planned?) Yet a nice thing about it is that it offers hook points at all the moments of decision, and without the escaping defects a naive string solution has. (I'm tempted to give my usual "Bad LEGO alligator vs. Good LEGO alligator" speech.) But if you don't want to code in PARSE directly, it seems the real answer would be some kind of "Glob Expression"-to-PARSE compiler. It might be the best interpretation of glob Rebol would have, because you could do a one-off: >> parse "here is a string" glob "s?r" == "string" Or if you are going to be doing the match often, cache the compiled expression. 
Also, let's imagine our block form uses words for literacy: s?r-rule: glob ["s" one "r"] pos-1: parse "here is a string" s?r-rule pos-2: parse "reuse compiled RegEx string" s?r-rule It might be interesting to see such a compiler for regex as well. These also might accept not only string input but also block input, so that both "s.r" and ["s" . "r"] were legal...and if you used the block form you wouldn't need escaping and could write ["." . "."] to match ".A." Fairly interesting things would be possible. Given that in RegEx: (abc|def)=\g{1} matches abc=abc or def=def but not abc=def or def=abc Rebol could be modified to take either the string form or compile into a PARSE rule with a form like: regex [("abc" | "def") "=" (1)] Then you get a dialect variation that doesn't need escaping. Designing and writing such compilers is left as an exercise for the reader. :-) A: I've broken this into two functions: one that creates a rule to match the given search value, and the other to perform the search. Separating the two allows you to reuse the same generated parse block where one search value is applied over multiple iterations: expand-wildcards: use [literal][ literal: complement charset "*?" func [ {Creates a PARSE rule matching VALUE expanding * (any characters) and ? (any one character)} value [any-string!] "Value to expand" /local part ][ collect [ parse value [ ; empty search string FAIL end (keep [return (none)]) | ; only wildcard return HEAD some #"*" end (keep [to end]) | ; everything else... some [ ; single char matches #"?" (keep 'skip) | ; textual match copy part some literal (keep part) | ; indicates the use of THRU for the next string some #"*" ; but first we're going to match single chars any [#"?" 
(keep 'skip)] ; it's optional in case there's a "*?*" sequence ; in which case, we're going to ignore the first "*" opt [ copy part some literal ( keep 'thru keep part ) ] ] ] ] ] ] like: func [ {Finds a value in a series and returns the series at the start of it.} series [any-string!] "Series to search" value [any-string! block!] "Value to find" /local skips result ][ ; shortens the search a little where the search starts with a regular char skips: switch/default first value [ #[none] #"*" #"?" ['skip] ][ reduce ['skip 'to first value] ] any [ block? value value: expand-wildcards value ] parse series [ some [ ; we have our match result: value ; and return it return (result) | ; step through the string until we get a match skips ] ; at the end of the string, no matches fail ] ] Splitting the function also gives you a base to optimize the two different concerns: finding the start and matching the value. I went with PARSE as even though *? are seemingly simple rules, there is nothing quite as expressive and quick as PARSE to effectively implementing such a search. It might yet as per @HostileFork to consider a dialect instead of strings with wildcards—indeed to the point where Regex is replaced by a compile-to-parse dialect, but is perhaps beyond the scope of the question.
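As an outside reference point (Python here, purely for comparison), the glob-to-pattern compilation discussed above is what Python's stdlib fnmatch module does; a FIND/ANY-style substring search can be sketched on top of it:

```python
# Sketch of a find/any-style search: compile the glob pattern to a regex,
# then scan for the first position where it matches. fnmatch.translate
# anchors the match at the end with \Z, so that anchor is stripped to get
# a substring search instead of a whole-string match.
import fnmatch
import re

def like(series, pattern):
    """Return `series` from the first match of glob `pattern`, else None."""
    rx = re.compile(fnmatch.translate(pattern).replace(r'\Z', ''))
    for i in range(len(series)):
        if rx.match(series, i):
            return series[i:]
    return None

# like("here is a string", "s?r") mirrors find/any "here is a string" "s?r"
```

As with the PARSE rules above, the pattern is compiled once and can be reused across many searches.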
doc_23534548
The idea is that I would be able to set up an alias alias="python -m mymodule" and call files normally with python somefile.py some_arg some_arg. In the file mymodule/__main__.py, what is the best way to load somefile.py and pass it the argument list? * *I am looking for a generic solution that would be Python 2 and 3 compatible. *It would be great to be as little intrusive as possible. If somefile.py were to raise an exception, mymodule should barely be seen in the traceback. *What the module does is not interesting here in detail, but it sets up some Python things (traceback hooks etc.), so somefile.py should be run pythonically in the same process. os.system or subprocess.Popen do not fit. A: Ok, I found something good for Python 3.5, and satisfying enough for Python 2.7. mymodule/__main__.py import sys import traceback # The following block of code removes the part of # the traceback related to this very module, and runpy # Negative limit support came with python 3.5, so it will not work # with previous versions.
# https://docs.python.org/3.5/library/traceback.html#traceback.print_tb def myexcepthook(type, value, tb): nb_noise_lines = 3 traceback_size = len(traceback.extract_tb(tb)) traceback.print_tb(tb, nb_noise_lines - traceback_size) if sys.version_info >= (3, 5): sys.excepthook = myexcepthook if len(sys.argv) > 1: file = sys.argv[1] sys.argv = sys.argv[1:] with open(file) as f: code = compile(f.read(), file, 'exec') exec(code) somefile.py import sys print(sys.argv) raise Exception() in the terminal $ python3 -m mymodule somefile.py some_arg some_arg ['somefile.py', 'some_arg', 'some_arg'] Traceback (most recent call last): File "somefile.py", line 3, in <module> raise Exception() $ python2 -m mymodule somefile.py some_arg some_arg ['somefile.py', 'some_arg', 'some_arg'] Traceback (most recent call last): File "/usr/lib64/python3.5/runpy.py", line 184, in _run_module_as_main "__main__", mod_spec) File "/usr/lib64/python3.5/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/azmeuk/dev/testpy/mymodule/__main__.py", line 16, in <module> exec(code) File "somefile.py", line 3, in <module> raise Exception() $ python somefile.py some_arg some_arg ['somefile.py', 'some_arg', 'some_arg'] Traceback (most recent call last): File "somefile.py", line 3, in <module> raise Exception() Exception Still, if someone has a better proposition, it would be great! A: I think a negative value of limit does not work in the traceback module before Python 3.5.
Here is an ugly hack that works with Python 2.7: import sys import traceback class ExcFile(object): def __init__(self, file): self.topline = True self.file = file def write(self, s): if self.topline: u, s = s.split('\n', 1) self.file.write(u +'\n') self.topline = False if '#---\n' in s: u, s = s.split('#---\n', 1) self.file.write(s) self.write = self.file.write ExcFile._instance = ExcFile(sys.stdout) # The following block of code removes the part of # the traceback related to this very module, and runpy def myexcepthook(type, value, tb): traceback.print_exception(type, value, tb, file=ExcFile._instance) sys.excepthook = myexcepthook if len(sys.argv) > 1: file = sys.argv[1] sys.argv = sys.argv[1:] with open(file) as f: code = compile(f.read(), file, 'exec') exec(code) #--- All this should be written in a separate file, to avoid cluttering __main__.py.
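For what it's worth, the standard library's runpy module (the same machinery that powers python -m) can replace the manual open/compile/exec dance. A sketch under the assumption that only Python 3 needs to be supported:

```python
# Sketch: run a script file as __main__ with a substituted argv, using
# runpy.run_path instead of compile()/exec(). Returns the script's globals.
import runpy
import sys

def run_script(path, args):
    old_argv = sys.argv
    sys.argv = [path] + list(args)
    try:
        return runpy.run_path(path, run_name="__main__")
    finally:
        sys.argv = old_argv
```

Exceptions raised by the script still carry runpy frames in their traceback, so the excepthook trimming shown above remains useful.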
doc_23534549
Segmentation fault. The URL that I was parsing was https; when I replaced it with an http URL, it worked! A: The problem is caused by a conflict between two versions of libssl. You can check the installed versions: $ ls /usr/lib/x86_64-linux-gnu/libssl.so.* /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2 /usr/lib/x86_64-linux-gnu/libssl.so.1.1 By removing version 1.0, my problem was resolved: apt-get remove ssl1.0.0.0
doc_23534550
Here the code I'm using: wishlist.m - (void)viewDidLoad { NSString *voirexpo = @"viensdeexpo"; [[NSUserDefaults standardUserDefaults] setObject:voirexpo forKey:@"voirexpo"]; [[NSUserDefaults standardUserDefaults] synchronize]; [super viewDidLoad]; self.navigationController.navigationBar.barStyle = UIStatusBarStyleLightContent; refait=0; refait2=0; wishlist * car2=[NSKeyedUnarchiver unarchiveObjectWithFile:@"myWishlist1.bin"]; NSUserDefaults * defaults = [NSUserDefaults standardUserDefaults]; NSData * objectData = [defaults objectForKey:@"myWishlist1.bin"]; if(objectData != nil) { wishlist * car3 = [NSKeyedUnarchiver unarchiveObjectWithData:objectData]; NSLog(@"YOOOOOOUUUUHOOOOO %@", [car3 description]); } NSLog(@"YOOOOOOUUUUHOOOOO %@", [car2 description]); rent=@"0"; self.navigationItem.backBarButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"Back" style:UIBarButtonItemStyleBordered target:nil action:nil]; } -(NSString*)pathForCacheFile:(NSString*)fileName { NSArray*documentDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES); NSString*path = nil; if (documentDir) { path = [documentDir objectAtIndex:0]; } return [NSString stringWithFormat:@"%@/%@", path, fileName]; } -(void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; if(_dataSource) { _dataSource=nil; } _dataSource=[NSKeyedUnarchiver unarchiveObjectWithFile:[self pathForCacheFile:@"myWishlist1.bin"]]; tableview.scrollEnabled=YES; [tableview reloadData]; } cardescription.m (the view where I choose the car to put in the wishlist) -(NSString*)pathForCacheFile:(NSString*)fileName { NSArray* documentDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES); NSString* path = nil; if (documentDir) { path = [documentDir objectAtIndex:0]; } return [NSString stringWithFormat:@"%@/%@", path, fileName]; } static wishlist *mywishlist; -(void)saveInWishlist { if(mywishlist) mywishlist=nil; mywishlist=[wishlist new]; mywishlist.exponom=wishexpo; 
mywishlist.idvoiture=TelephoneCellulaire; mywishlist.imvoiture=dataimage; mywishlist.objid=objectId; { NSMutableArray *temp=[NSKeyedUnarchiver unarchiveObjectWithFile:[self pathForCacheFile:@"myWishlist1.bin"]]; // mywishlist.imageSrc=filePath; if(temp.count==0 || temp==nil) { NSMutableArray*newArr=[NSMutableArray array]; [newArr addObject:mywishlist]; [NSKeyedArchiver archiveRootObject:newArr toFile:[self pathForCacheFile:@"myWishlist1.bin"]]; NSUserDefaults * defaults = [NSUserDefaults standardUserDefaults]; // userObject is the object we want to save... [defaults setObject:[NSKeyedArchiver archivedDataWithRootObject:newArr] forKey:@"myWishlist1.bin"]; [defaults synchronize]; } else { [temp addObject:mywishlist]; [NSKeyedArchiver archiveRootObject:temp toFile:[self pathForCacheFile:@"myWishlist1.bin"]]; NSUserDefaults * defaults = [NSUserDefaults standardUserDefaults]; // userObject is the object we want to save... [defaults setObject:[NSKeyedArchiver archivedDataWithRootObject:temp] forKey:@"myWishlist1.bin"]; [defaults synchronize]; } } } - (IBAction)BACKO:(id)sender { [self saveInWishlist]; NSLog(@"ICIII %@",wishexpo); wishlist * car2=[[wishlist alloc]init]; car2.exponom=wishexpo; car2.idvoiture=TelephoneFix; car2.imvoiture=dataimage; car2.objid=objectId; [NSKeyedArchiver archiveRootObject:car2 toFile:@" "]; UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Wishlist" message:@"This car has been successfully added to your wishlist" delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil]; [alert show]; } -(NSString*)pathForCacheFile1:(NSString*)fileName { NSArray*documentDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES); NSString*path = nil; if (documentDir) { path = [documentDir objectAtIndex:0]; } return [NSString stringWithFormat:@"%@/%@", path, fileName]; } A: I got the same error when using NSKeyedArchiver and only since iOS8. 
In my case, I was doing this: [defaultManager changeCurrentDirectoryPath:customDirectory]; [NSKeyedArchiver archiveRootObject:blankDictionary toFile:filename]; But it worked when I did this: NSString *fullPath = [NSString stringWithFormat:@"%@/%@", customDirectory, filename]; [NSKeyedArchiver archiveRootObject:blankDictionary toFile:fullPath]; My guess is that there's a bug in NSKeyedArchiver and relative pathnames don't work. If you are passing relative pathnames, try replacing them with absolute ones.
doc_23534551
The operation couldn’t be completed. Unable to locate a Java Runtime. Please visit http://www.java.com for information on installing Java. Command PhaseScriptExecution failed with a nonzero exit code So I tried to install a jdk during the ci_post_clone.sh phase. The output of java -version after the installation on the Xcode Cloud is the following: openjdk version "17.0.2" 2022-01-18 OpenJDK Runtime Environment (build 17.0.2+8-86) OpenJDK 64-Bit Server VM (build 17.0.2+8-86, mixed mode, sharing) Installed java also ./gradlew -v output: Showing All Messages ------------------------------------------------------------ Gradle 7.4.2 ------------------------------------------------------------ Kotlin: 1.5.31 Groovy: 3.0.9 Ant: Apache Ant(TM) version 1.10.11 compiled on July 10 2021 JVM: 17.0.2 (Oracle Corporation 17.0.2+8-86) OS: Mac OS X 12.4 x86_64 Nevertheless, I still get the same error. Is this maybe a restriction by Apple? Any ideas?
doc_23534552
E.g.: z = peaks(50); surf(z); %->plot z along some defined plane in x,y,z... This has been asked before, e.g. here, but the answer given there is for reducing 3D data to 2D data, and there is no obvious answer on googling. Thanks. A: If the normal vector of the plane you want to slice your surface with always lies in the xy plane, then you can interpolate the data over your surface along the x,y coordinates that are in the slicing line. For example, let the plane be defined as going from the point (0,15) to the point (50,35): % Create Data z=peaks(50); % Create x,y coordinates of the data [x,y]=meshgrid(1:50); % Plot Data and the slicing plane surf(z); hold on patch([0,0,50,50],[15,15,35,35],[10,-10,-10,10],'w','FaceAlpha',0.7); % Plot an arbitrary origin axis for the slicing plane, this will be relevant later plot3([0,0],[15,15],[-10,10],'r','linewidth',3); Since it is a plane, it is relatively easy to obtain the x,y coordinates along the slicing plane with linspace; I'll get 100 points, and then interpolate those 100 points into the original data. % Create x and y over the slicing plane xq=linspace(0,50,100); yq=linspace(15,35,100); % Interpolate over the surface zq=interp2(x,y,z,xq,yq); Now that we have the values of z, we need something to plot them against. That's where you need to define an arbitrary origin axis for your slicing plane; I defined mine at (0,15) for convenience's sake. Then calculate the distance of every x,y pair to this axis, and we can plot the obtained z against this distance. dq=sqrt((xq-0).^2 + (yq-15).^2); plot(dq,zq) axis([min(dq),max(dq),-10,10]) % to maintain a good perspective
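The distance-from-origin step at the end is the only slightly subtle part, and it is language-agnostic. Here is the same computation sketched in plain Python (an illustration only, using the same plane endpoints (0,15) and (50,35) and the same 100 sample points as the MATLAB answer):

```python
# Distance of each sample point along the slicing line from the chosen
# origin (0, 15); this becomes the horizontal axis of the final 2-D plot.
import math

n = 100
xq = [50 * i / (n - 1) for i in range(n)]        # linspace(0, 50, 100)
yq = [15 + 20 * i / (n - 1) for i in range(n)]   # linspace(15, 35, 100)
dq = [math.hypot(x - 0, y - 15) for x, y in zip(xq, yq)]
# dq runs from 0 at the origin to the full length of the slice,
# i.e. sqrt(50^2 + 20^2)
```

Because the sample points are evenly spaced along a straight line, dq is strictly increasing, which is what makes it a valid x-axis for plotting the interpolated heights.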
doc_23534553
<input v-model="message" @keyup="log" placeholder="Edit"> <p>Edited: {{ message }}</p> How can I fix it? I need to get the input value while typing (@keyup @input) A: I tried all solutions I could find on the internet, and nothing worked for me. In the end I came up with this, which finally works on Android! The trick is to use the compositionupdate event: <input type="text" ... v-model="myinputbox" @compositionupdate="compositionUpdate($event)"> ...... ...... methods: { compositionUpdate: function(event) { this.myinputbox = event.data; }, } A: Update: After a lot of discussion, I've come to understand that this is a feature, not a bug. v-model is more complicated than you might at first think, and a mobile 'keyboard' is more complicated than a keyboard. This behaviour can surprise, but it's not wrong. Code your @input separately if you want something else. Houston we might have a problem. Vue does not seem to be doing what it says on the tin. V-model is supposed to update on input, but if we decompose the v-model and code the @input explicitly, it works fine on mobile. (both inputs behave normally in chrome desktop) For display on mobiles, the issue can be seen at... https://jsbin.com/juzakis/1 See this github issue.
function doIt(){ var vm = new Vue({ el : '#vueRoot', data : {message : '',message1 : ''} }) } doIt(); <script src="https://cdn.jsdelivr.net/npm/vue@2.5.16/dist/vue.js"></script> <div id='vueRoot'> <h1>v-model</h1> <div> <input type='text' v-model='message' > {{message}} </div> <h1>Decomposed</h1> <div> <input type='text' :value='message1' @input='evt=>message1=evt.target.value' > {{message1}} </div> </div> A: Ok, I dont know if there is another solution for this issue, but it can be solved with a simple directive: Vue.directive('$model', { bind: function (el, binding, vnode) { el.oninput = () => (vnode.context[binding.expression] = el.value) } }) using it just like <input v-$model="{toBind}"> There is an issue on the oficial repo, and they say this is the normal behavior (because the composition mode), but I still need the functionality A: EDIT: A simpler solution for me was to just use @input.native. Also, the this event has (now?) a isComposing attribute which we can use to either take $event.data into account, or $event.target.value In my case, the only scheme that worked was handling @keydown to save the value before the user action, and handling @keyup to process the event if the value had changed. NOTE: the disadvantage of this is that any non-keyboard input (like copy/paste with a mouse) will not work. <md-input v-else :value="myValue" ref="input" @keydown="keyDownValue = $event.target.value" @keyup="handleKeyUp($event)" @blur="handleBlur()" /> With handleKeyUp in my case being: handleKeyUp(evt){ if(evt.target.value !== this.keyDownValue){ this.$emit('edited', evt); } } My use case was the following: I requested a search endpoint in the backend to get suggestions as the user typed. Solutions like handling @compositionupdate lead to sending several several requests to the backend (I also needed @input for non-mobile devices). 
I reduced the number of requests sent by correctly handling @compositionStarted, but there were still cases where 2 requests were sent for just 1 character typed (when composition was left, e.g. with the space character, and then re-entered, e.g. with the backspace character).
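The compositionupdate workaround from the first answer can be exercised without a DOM: the handler simply copies event.data into the bound field. A minimal framework-free sketch (the state object stands in for the Vue component's data; all names are illustrative):

```javascript
// Handler factory mirroring the @compositionupdate workaround: on each
// composition update, mirror the composed text into the bound field.
function makeCompositionHandler(state, key) {
  return function onCompositionUpdate(event) {
    state[key] = event.data; // event.data holds the text being composed
  };
}

const state = { myinputbox: '' };
const handler = makeCompositionHandler(state, 'myinputbox');
handler({ data: 'hel' });   // IME composition in progress
handler({ data: 'hello' }); // composition updated again
// state.myinputbox is now 'hello'
```

This is why the workaround sidesteps the v-model issue: it never waits for the composition to end before updating state.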
doc_23534554
> dput(wi.fvs.hog.matrix) structure(list(Year = c("2008", "2009", "2010", "2011", "2012", "2013", "2014", "2015", "2016", "2017", "2018"), Age0 = c(1.85714285714286, 0.4, 0.485714285714286, 1.1, 2.42857142857143, 0.257142857142857, 0.0428571428571429, 0.314285714285714, 0.716666666666667, 0.833333333333333, 2.51666666666667), Age1 = c(1.41463963164237, 1.02555123757, 0.848368924551809, 1.0129081429117, 1.34174221299874, 1.73844699293102, 1.13150227778049, 1.04021644273328, 1.58517508190915, 0.816172211616916, NA), Age2 = c(0.697482814458681, 0.884021354731086, 0.572217414946522, 0.747321961250137, 0.414954234638407, 1.15324140821528, 0.795970290332159, 0.937855311313068, 0.964409048099429, NA, NA), Age3 = c(0.387697040724315, 0.469038457221031, 0.361764248224063, 1.04480498090706, 0.488540659420917, 0.352297506342294, 0.870303410790715, 0.375040960193853, NA, NA, NA), Age4 = c(0.615800626709934, 0.483981693363844, 0.421893433414089, 1.09969988854403, 0.589655172413793, 0.548020964506191, 0.346473672965025, NA, NA, NA, NA), Age5 = c(0.453089244851259, 0.56020477727594, 2.04363876414779, 1.63160785116988, 0.378917378917379, 0.698236836482513, NA, NA, NA, NA, NA), Age6 = c(0.238805970149254, 0.537267080745342, 0.920689655172414, 0.369420702754036, 0.382474226804124, NA, NA, NA, NA, NA, NA), Age7 = c(0.779503105590062, 0.303448275862069, 0.369420702754036, 0.230927835051546, NA, NA, NA, NA, NA, NA, NA), Age8 = c(0.43448275862069, 0.138651471984805, 0.0309278350515464, NA, NA, NA, NA, NA, NA, NA, NA), Age9 = c(0.0123456790123457, 0.0412371134020619, NA, NA, NA, NA, NA, NA, NA, NA, NA)), row.names = c(NA, -11L), class = c("tbl_df", "tbl", "data.frame")) I created this code to make correlation plots... 
fvs.prop.curve1 <- ggplot(wi.fvs.hog.matrix, aes(x = Age0, y = Age1)) + geom_point(aes()) + ylim(0,2.2) + stat_smooth(method = "lm", se = FALSE, color="red") + labs(y = "Age 1", x = "YOY") + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.line = element_line(colour = "black")) fvs.prop.curve2 <- ggplot(wi.fvs.hog.matrix, aes(x = Age0, y = Age2)) + geom_point(aes()) + ylim(0,2.2) + stat_smooth(method = "lm", se = FALSE, color="red") + labs(y = "Age 2", x = "YOY") + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.line = element_line(colour = "black")) fvs.prop.curve3 <- ggplot(wi.fvs.hog.matrix, aes(x = Age0, y = Age3)) + geom_point(aes()) + ylim(0,2.2) + stat_smooth(method = "lm", se = FALSE, color="red") + labs(y = "Age 3", x = "YOY") + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.line = element_line(colour = "black")) fvs.prop.curve4 <- ggplot(wi.fvs.hog.matrix, aes(x = Age0, y = Age4)) + geom_point(aes()) + ylim(0,2.2) + stat_smooth(method = "lm", se = FALSE, color="red") + labs(y = "Age 4", x = "YOY") + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.line = element_line(colour = "black")) fvs.prop.curve5 <- ggplot(wi.fvs.hog.matrix, aes(x = Age0, y = Age5)) + geom_point(aes()) + ylim(0,2.2) + stat_smooth(method = "lm", se = FALSE, color="red") + labs(y = "Age 5", x = "YOY") + theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(), panel.grid.minor = element_blank(), axis.line = element_line(colour = "black")) fvs.prop.curve6 <- ggplot(wi.fvs.hog.matrix, aes(x = Age0, y = Age6)) + geom_point(aes()) + ylim(0,2.2) + stat_smooth(method = "lm", se = FALSE, color="red") + labs(y = "Age 6", x = "YOY") + theme_bw() + 
  theme(panel.border = element_blank(), panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"))

fvs.prop.curve7 <- ggplot(wi.fvs.hog.matrix, aes(x = Age0, y = Age7)) +
  geom_point() +
  ylim(0, 2.2) +
  stat_smooth(method = "lm", se = FALSE, color = "red") +
  labs(y = "Age 7", x = "YOY") +
  theme_bw() +
  theme(panel.border = element_blank(), panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"))

#create multiple panel plot
fvs.prop.gg <- ggarrange(fvs.prop.curve1, fvs.prop.curve2, fvs.prop.curve3,
                         fvs.prop.curve4, fvs.prop.curve5, fvs.prop.curve6,
                         ncol = 3, nrow = 2)

#annotate multiple panel plot
annotate_figure(fvs.prop.gg,
                top = text_grob("Correlation plots \n FDM Proportional \n", color = "Black", face = "bold", size = 14),
                bottom = text_grob("Data source: \n FDM", color = "blue", hjust = 1, x = 1, face = "italic", size = 10),
                left = text_grob("", color = "green", rot = 90),
                right = "",
                fig.lab = "Figure 1", fig.lab.face = "bold")

Which outputs:

Now, there has got to be an easier, much more concise way to do this using one of the apply functions. I tried to modify code I used to do something similar with cross-correlation plots (ccf()), but I can't figure out how to set it up. One issue is that the data for the code below was in long form and then split using split(). Maybe someone knows an even easier way?

lapply(seq_along(wi.fvs.hog.matrix),
       function(x) ccf(wi.fvs.hog.matrix[[x]]$Year, wi.fvs.hog.matrix[[x]]$Age,
                       lag.max = 5, ylab = "", main = names(wi.fvs.hog.matrix)[x]))

A: Have you considered using facets? Facets will avoid repeating the code as well as avoid lapply. They need the data to be in long format.
library(tidyverse)

wi.fvs.hog.matrix %>%
  pivot_longer(cols = Age1:Age9) %>%
  ggplot(aes(x = Age0, y = value)) +
  geom_point() +
  ylim(0, 2.2) +
  stat_smooth(method = "lm", se = FALSE, color = "red") +
  facet_wrap(~name) +
  labs(y = "Value", x = "YOY") +
  theme_bw() +
  theme(panel.border = element_blank(), panel.grid.major = element_blank(),
        panel.grid.minor = element_blank(), axis.line = element_line(colour = "black"))

Running the code from the OP I get fvs.prop.gg as:
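If you do want the lapply route the question asks about, one sketch (my own, not from the answer; plot_age is a hypothetical helper name) is to loop over the age column names rather than over a split data frame, and then hand the resulting list of plots to ggarrange via plotlist:

```r
library(ggplot2)

# Hypothetical helper: one scatter plot of the given age column against Age0
plot_age <- function(col) {
  ggplot(wi.fvs.hog.matrix, aes(x = Age0, y = .data[[col]])) +
    geom_point() +
    ylim(0, 2.2) +
    stat_smooth(method = "lm", se = FALSE, color = "red") +
    labs(y = col, x = "YOY") +
    theme_bw()
}

# Build one plot per age column, then arrange them as before
age_cols <- paste0("Age", 1:7)
plots <- lapply(age_cols, plot_age)
ggpubr::ggarrange(plotlist = plots, ncol = 3, nrow = 3)
```

The `.data[[col]]` pronoun lets aes() look up a column by its name as a string, which is what makes the lapply over column names work.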
doc_23534555
I'm using the libraries found here: https://github.com/BtbN/FFmpeg-Builds/releases/tag/latest. I've tried using both the master builds and the 4.4 builds. My CMake config is set up as such:

add_executable(CicadaVid
    FFMPEGTest/CicadaVid.c
)
target_link_libraries(CicadaVid avutil)

Just making a call to any function, such as

int main()
{
    avutil_license();
}

results in an immediate return failure. For reference, I'm using the win64 builds with a MinGW compiler.
doc_23534556
I have about 3,000 records. Each record has a Product Name and a Price column. All my app does is display these records in a DataGrid and allow the user to search for products by name. Meaning, if a user types 'chair' in an edit control, then only the Product Names that contain 'chair', and their prices, are displayed in the DataGrid. That's it. Nothing fancy.

* Should I use Silverlight instead of WPF? I understand Silverlight allows my app to run on any OS, while WPF requires Windows? Unless I create a WPF browser project, I think.
* Should I use XML or Access or SQL? Which database would make the SEARCH feature easiest, fastest, etc.?
* Finally, should/does LINQ have any involvement in this project?

Thanks.

A: If it's going to be a stand-alone product, i.e. installed on each computer and not networked in any way, then personally I would go for a SQL CE database with a WPF client. It's up to you whether you use LINQ; I quite like it, but it would maybe be overkill for this app.

The next stage up is a small app used by a handful of users on a network. In this instance I would use an Access backend (assuming a file server is in place) with a WPF client application.

Right at the "top" of the scale would be something used by a lot of people and/or something on the web. In this case I would use SQL Server as the backend and again a WPF client app, or if you want it to run on other platforms then go for Silverlight.
doc_23534557
Both applications use 1 code fragment in func connectionDidFinishLoading(connection: NSURLConnection!) Why different results? self.data is good MacOS func connectionDidFinishLoading(connection: NSURLConnection!) { var err: NSError? let str = NSString(data: data, encoding: NSUTF8StringEncoding) var jsonResult: Dictionary<String, AnyObject> = NSJSONSerialization.JSONObjectWithData(data, options: NSJSONReadingOptions.MutableContainers, error: &err) as Dictionary<String, AnyObject> ParserCreater.createWithJSON(jsonResult, pathDir:pathDir.path!, jsonUrl: tfURL.stringValue) self.data = NSMutableData() } result: Printing description of str: {"title":"Заголовок","margin": [1, 2, 3, 4],"rect1": [0, 0, 100, 50],"color1": "#00FF44","color2": "#FFF","point": [0, 100],"size": [500, 600]} Printing description of jsonResult: ([String : AnyObject]) jsonResult = { [0] = { key = "color1" value = (instance_type = Builtin.RawPointer = 0x3434464630302375) } [1] = { key = "title" value = (instance_type = Builtin.RawPointer = 0x00006000000455e0 -> 0x00007fff767c7538 (void *)0x00007fff767c74e8: __NSCFString) } [2] = { key = "rect1" value = (instance_type = Builtin.RawPointer = 0x00006000004430f0 -> 0x00007fff767c7dd0 (void *)0x00007fff767c7f10: __NSArrayM) } [3] = { key = "size" value = (instance_type = Builtin.RawPointer = 0x0000600000646570 -> 0x00007fff767c7dd0 (void *)0x00007fff767c7f10: __NSArrayM) } [4] = { key = "margin" value = (instance_type = Builtin.RawPointer = 0x0000600000044fe0 -> 0x00007fff767c7dd0 (void *)0x00007fff767c7f10: __NSArrayM) } [5] = { key = "point" value = (instance_type = Builtin.RawPointer = 0x000060000024b2b0 -> 0x00007fff767c7dd0 (void *)0x00007fff767c7f10: __NSArrayM) } [6] = { key = "color2" value = (instance_type = Builtin.RawPointer = 0x0000004646462345) } } iOS func connectionDidFinishLoading(connection: NSURLConnection!) { var err: NSError? 
    let str = NSString(data: data, encoding: NSUTF8StringEncoding)
    var jsonResult: Dictionary<String, AnyObject> = NSJSONSerialization.JSONObjectWithData(data, options: NSJSONReadingOptions.MutableContainers, error: &err) as Dictionary<String, AnyObject>
    self.data = NSMutableData()
}

result:

Printing description of str: {"title":"Заголовок","margin": [1, 2, 3, 4],"rect1": [0, 0, 100, 50],"color1": "#00FF44","color2": "#FFF","point": [0, 100],"size": [500, 600]}
Printing description of jsonResult: ([String : AnyObject]) jsonResult = {}

A: There is no error in the parsing; the error is only in how the contents are displayed. As it turned out, all the values are present in the jsonResult variable: you can access them by key, but for some reason they are not shown in the debugger description.
doc_23534558
Here I have added my code. I have two mobile brands, and models with submodels:

$scope.phones = [
  { id: "986745", brandname: "Nokia", modelname: "Lumia",
    "Submodel": [{ "name": "Lumia 735 TS" }, { "name": "Lumia 510" }, { "name": "Lumia 830" }, { "name": "Lumia New" }] },
  { id: "896785", brandname: "Nokia", modelname: "Asha",
    "Submodel": [{ "name": "Asha 230" }, { "name": "Asha Asn01" }, { "name": "Nokia Asha Dual sim" }, { "name": "Asha 540" }] },
  { id: "144745", brandname: "Samsung", modelname: "Galaxy ",
    "Submodel": [{ "name": "Trend 840" }, { "name": "A5" }, { "name": "Note 4 Duos" }, { "name": "Galaxy Note Duos" }, { "name": "Galaxy A5" }] },
  { id: "986980", brandname: "Samsung", modelname: "Asha",
    "Submodel": [{ "name": "Asha 230" }, { "name": "Asha Asn01" }, { "name": "Asha Dual sim" }, { "name": "Asha 540" }] },
];

When I click the Nokia checkbox, the Lumia and Asha checkboxes appear. The same thing works for Samsung.

My expectation: when I click Nokia it should show

1. Lumia (checkbox)
2. Asha (checkbox)

Then, for example, when I click Lumia it should show one more checkbox list, each item with an input text box (a text box to enter the mobile model price):

1. Lumia 735 TS (checkbox with user input textbox)
2. Lumia 510 (checkbox with user input textbox)
3. Lumia 830 (checkbox with user input textbox)
4. Lumia New (checkbox with user input textbox)

The same applies when I choose Asha under Nokia, or if I select Samsung Galaxy and Asha.
when i submit controller i should get selected 1.brandname 2.modelname 3.Submodel 4.user entered value in that text box already i am getting brandname,modelname var myApp = angular.module('myApp',[]); myApp.controller('MyCtrl', function($scope) { $scope.selectedBrands = []; $scope.selectBrand = function(selectedPhone) { // If we deselect the brand if ($scope.selectedBrands.indexOf(selectedPhone.brandname) === -1) { // Deselect all phones of that brand angular.forEach($scope.phones, function(phone) { if (phone.brandname === selectedPhone.brandname) { phone.selected = false; } }); } } $scope.checkSelectedPhones = function() { var modelNames = []; var aletrMsg= ''; angular.forEach($scope.phones, function(phone) { if (phone.selected) { modelNames.push(phone); aletrMsg += 'Brand : '+ phone.brandname + 'Phone Name: '+ phone.modelname + ' : Price: '+ phone.price +', '; } }); alert(modelNames.length ? aletrMsg : 'No phones selected!'); } $scope.phones = [{ id: "986745", brandname: "Nokia", modelname: "Lumia", "Submodel": [{ "name": "Lumia 735 TS" }, { "name": "Lumia 510" }, { "name": "Lumia 830" }, { "name": "Lumia New" }] }, { id: "896785", brandname: "Nokia", modelname: "Asha", "Submodel": [{ "name": "Asha 230" }, { "name": "Asha Asn01" }, { "name": "Nokia Asha Dual sim" }, { "name": "Asha 540" }] }, { id: "144745", brandname: "Samsung", modelname: "Galaxy ", "Submodel": [{ "name": "Trend 840" }, { "name": "A5" }, { "name": "Note 4 Duos" }, { "name": "Galaxy Note Duos" }, { "name": "Galaxy A5" }] }, { id: "986980", brandname: "Samsung", modelname: "Asha", "Submodel": [{ "name": "Asha 230" }, { "name": "Asha Asn01" }, { "name": "Asha Dual sim" }, { "name": "Asha 540" }] }, ]; }); myApp.filter('unique', function() { return function(collection, keyname) { var output = [], keys = []; angular.forEach(collection, function(item) { var key = item[keyname]; if(keys.indexOf(key) === -1) { keys.push(key); output.push(item); } }); return output; }; }); <div ng-controller="MyCtrl"> 
<button ng-click="checkSelectedPhones()"> Check selected phones </button> <div ng-repeat="phone in phones | unique:'brandname'"> <label> <input type="checkbox" ng-true-value="'{{phone.brandname}}'" ng-false-value="''" ng-model="selectedBrands[$index]" ng-change="selectBrand(phone)"> {{phone.brandname}} </label> </div> <br> <div ng-repeat="brand in selectedBrands track by $index" ng-if="brand"> {{brand}} <div ng-repeat="phone in phones" ng-if="phone.brandname === brand"> <label> <input type="checkbox" ng-model="phone.selected" > {{phone.modelname}} </label> </div> </div> </div> A: If yes you could try do something like this. var myApp = angular.module('myApp',[]); myApp.controller('MyCtrl', function($scope) { $scope.selectedBrands = []; $scope.selectBrand = function(selectedPhone) { // If we deselect the brand if ($scope.selectedBrands.indexOf(selectedPhone.brandname) === -1) { // Deselect all phones of that brand angular.forEach($scope.phones, function(phone) { if (phone.brandname === selectedPhone.brandname) { phone.selected = false; } }); } } $scope.checkSelectedPhones = function() { var modelNames = []; var aletrMsg= ''; angular.forEach($scope.phones, function(phone) { if (phone.selected) { modelNames.push(phone); aletrMsg += 'Brand : '+ phone.brandname + 'Phone Name: '+ phone.modelname + ' : Price: '+ phone.price +', '; } }); alert(modelNames.length ? 
aletrMsg : 'No phones selected!'); } $scope.phones = [{ id: "986745", brandname: "Nokia", modelname: "Lumia", "Submodel": [{ "name": "Lumia 735 TS", selected:false }, { "name": "Lumia 510", selected:false }, { "name": "Lumia 830", selected:false }, { "name": "Lumia New", selected:false }] }, { id: "896785", brandname: "Nokia", modelname: "Asha", "Submodel": [{ "name": "Asha 230", selected:false }, { "name": "Asha Asn01", selected:false }, { "name": "Nokia Asha Dual sim", selected:false }, { "name": "Asha 540", selected:false }] }, { id: "144745", brandname: "Samsung", modelname: "Galaxy ", "Submodel": [{ "name": "Trend 840", selected:false }, { "name": "A5", selected:false }, { "name": "Note 4 Duos", selected:false }, { "name": "Galaxy Note Duos", selected:false }, { "name": "Galaxy A5", selected:false }] }, { id: "986980", brandname: "Samsung", modelname: "Asha", "Submodel": [{ "name": "Asha 230", selected:false }, { "name": "Asha Asn01", selected:false }, { "name": "Asha Dual sim", selected:false }, { "name": "Asha 540", selected:false }] }, ]; }); myApp.filter('unique', function() { return function(collection, keyname) { var output = [], keys = []; angular.forEach(collection, function(item) { var key = item[keyname]; if(keys.indexOf(key) === -1) { keys.push(key); output.push(item); } }); return output; }; }); <div ng-controller="MyCtrl"> <button ng-click="checkSelectedPhones()"> Check selected phones </button> <div ng-repeat="phone in phones | unique:'brandname'"> <label> <input type="checkbox" ng-true-value="'{{phone.brandname}}'" ng-false-value="''" ng-model="selectedBrands[$index]" ng-change="selectBrand(phone)"> {{phone.brandname}} </label> </div> <br> <div ng-repeat="brand in selectedBrands track by $index" ng-if="brand"> {{brand}} <div ng-repeat="phone in phones" ng-if="phone.brandname === brand"> <label for="demo-{{phone.modelname}}">{{phone.modelname}}</label> <input id="demo-{{phone.modelname}}" type="checkbox" ng-model="phone.selected" > <div 
ng-repeat="subm in phone.Submodel" ng-if="phone.selected"> <label for="demo-{{subm.name}}">{{subm.name}}</label> <input id="demo-{{subm.name}}" type="checkbox" ng-model="subm.selected"> </div> </div> </div> </div>
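On submit, collecting the selected brand/model/submodel/price can be done by walking the same $scope.phones structure. A plain-JavaScript sketch (independent of Angular; the sub.price field is an assumption, standing in for whatever ng-model you bind the price text box to):

```javascript
// Walk the phones array and collect every selected submodel,
// together with its brand, model and the user-entered price.
function collectSelections(phones) {
  var result = [];
  phones.forEach(function (phone) {
    if (!phone.selected) { return; } // model checkbox not ticked
    (phone.Submodel || []).forEach(function (sub) {
      if (sub.selected) {
        result.push({
          brandname: phone.brandname,
          modelname: phone.modelname,
          submodel: sub.name,
          price: sub.price // hypothetical: bound from the submodel's text box
        });
      }
    });
  });
  return result;
}

// Example:
var phones = [{
  brandname: "Nokia", modelname: "Lumia", selected: true,
  Submodel: [
    { name: "Lumia 510", selected: true, price: "120" },
    { name: "Lumia 830", selected: false }
  ]
}];

console.log(collectSelections(phones));
// [ { brandname: 'Nokia', modelname: 'Lumia', submodel: 'Lumia 510', price: '120' } ]
```

Inside the controller you would call this from checkSelectedPhones() instead of the flat loop over phones, so that submodel selections and prices are included.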
doc_23534559
id | date | other_ids ===|============|========= 1 | 1489495210 | {3} 2 | 1489495520 | {} 3 | 1489495560 | {5,9} 4 | 1489496588 | {4} 5 | 1489496948 | {} 6 | 1489497022 | {1,3,8} 7 | 1489497035 | {3} 8 | 1489497318 | {2,4} 9 | 1489507260 | {} I am attempting to write a query to output a list of Object ids with the latest date for each of a specified array of other_ids. For example: specified_other_ids = [1, 2, 4] ids = //... # => ids = [6, 8] # Note that the max date occurs at id 8 for other_id 2 AND 4, so no duplicates returned My understanding is that UNNEST is used to break the other_ids arrays into rows, and then I should be able to use DISTINCT ON with the un-nested rows. So this is my attempt: ids = Object.from("objects, UNNEST(objects.other_ids) AS other_id") .where("other_id IN (?)", specified_other_ids) .order("other_id, date DESC") .pluck("DISTINCT ON(other_id) id") This works as I would expect when running in development, staging and production consoles. However, when running rspec in the test environment (via Codeship) every test running that code fails when that query is run, with the following error: ActiveRecord::StatementInvalid: PG::InvalidColumnReference: ERROR: function expression in FROM cannot refer to other relations of same query level So my first thought is that Postgres is somehow configured differently in the test environment, though I also have a sense that I'm somehow not using the Postgres UNNEST function properly. Where should I start looking for a solution to this? P.S. I'm not an experienced Rails/SQL dev, so detailed explanations would be much appreciated to help me learn! A: As pointed out by @pozs, the error appears to be the result of the test environment being set up with the wrong version of PostgreSQL. At the time of writing Codeship uses version 9.2 by default. 
This Codeship documentation page details how to change that default, by changing the port in the database.yml file: PostgreSQL version 9.6 is running on port 5436 and configured (almost) identical to the others. Make sure to specify the correct port in your project configuration if you want to test against this version. Similar to the other versions, you need to work around our auto-configuration for Rails based projects by adding the following command to your Setup Commands. sed -i "s|5432|5436|" "config/database.yml"
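For reference, the SQL that the ActiveRecord chain in the question generates is roughly the following (a sketch using the question's table and column names):

```sql
-- Latest object id per other_id, for a specified set of other_ids.
SELECT DISTINCT ON (other_id) id
FROM objects,
     UNNEST(objects.other_ids) AS other_id  -- implicit LATERAL reference
WHERE other_id IN (1, 2, 4)
ORDER BY other_id, date DESC;
```

The UNNEST call in FROM refers to a column of another FROM item (objects.other_ids), which is an implicit LATERAL join. LATERAL was only added in PostgreSQL 9.3, which is why a 9.2 server rejects this query with "function expression in FROM cannot refer to other relations of same query level" while newer servers run it fine.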
doc_23534560
In this way, it is more efficient than the original design (without using a combiner), although with a combiner the efficiency should be equal. Any advice?

A: Yes, you can use a HashMap as well. But you need to consider worst-case scenarios while designing your solution. Normally, the block size is 128 MB; consider that there may be many short words with few or no repetitions. In that case you will have many distinct words, so the number of entries in the HashMap will grow, consuming much more memory. You also need to take into account that many different jobs could be operating on the same data node, so a HashMap consuming more RAM will eventually slow down other jobs as well. Also, as the HashMap grows it has to perform rehashing, which adds time to your job execution.

A: I know this is an old post, but for people who are looking for Hadoop help in the future, maybe check out this question for another reference: Hadoop Word count: receive the total number of words that start with the letter "c"
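The in-mapper-combining idea under discussion, aggregating counts in a HashMap before emitting them, can be sketched in plain Java outside Hadoop (class and method names here are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

public class InMapperCombiner {
    // Aggregate word counts locally before "emitting" them,
    // mimicking what an in-mapper combiner does with a HashMap.
    static Map<String, Integer> countWords(String[] words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            counts.merge(w, 1, Integer::sum); // insert 1 or add 1 to existing count
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] words = {"a", "b", "a", "c", "a"};
        System.out.println(countWords(words).get("a")); // 3
    }
}
```

In a real mapper this map would be flushed in cleanup(); the memory concern in the answer above is exactly this map growing with the number of distinct words in the block.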
doc_23534561
I'm new to both Flask and Celery, so I'm probably doing something wrong. If I use this code everything goes fine, but as soon as I split the code into different files I get that error. I'm missing something, but what? I'm using Flask-Script to run the app with "manage.py runserver"; the file structure is:

.
├── app
│   ├── core.py
│   ├── extensions.py
│   └── __init__.py
├── manage.py
└── settings.py

settings.py - just settings:

# -*- coding: utf-8 -*-
DEBUG = True
SECRET_KEY = 'not_a_secret'
CELERY_BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp'

manage.py - runs the application:

# -*- coding: utf-8 -*-
from __future__ import absolute_import
import os
from app import create_app
from flask.ext.script import Manager, Shell, Server

app = create_app()
manager = Manager(app)
manager.add_command("runserver", Server(host="0.0.0.0", port=5032))

def make_shell_context():
    return dict(app=app)

manager.add_command('shell', Shell(make_context=make_shell_context))

if __name__ == '__main__':
    manager.run()

app/__init__.py - creates the app and calls make_celery as the docs say:

from __future__ import absolute_import
from flask import Flask
from .extensions import celery, make_celery
import settings

def create_app():
    app = Flask(__name__)
    app.config.from_object(settings)
    make_celery(app, celery)
    from .core import core as core_blueprint
    app.register_blueprint(core_blueprint)
    return app

app/extensions.py - instantiates celery and defines a simple task:

# -*- coding: utf-8 -*-
from __future__ import absolute_import
from celery import Celery

def make_celery(app, celery):
    celery = Celery(app.import_name,
                    backend=app.config['CELERY_RESULT_BACKEND'],
                    broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    print 'passo da make_celery'
    return celery

celery = Celery()

@celery.task(name="tasks.add")
def add(x, y):
    return x + y

app/core.py - defines two routes: one to run the task and one to get the result:

# -*- coding: utf-8 -*-
from __future__ import absolute_import
from flask import Flask, Blueprint, abort, jsonify, request, session
from app.extensions import add

core = Blueprint('core', __name__)

@core.route("/test")
def hello_world(x=16, y=16):
    x = int(request.args.get("x", x))
    y = int(request.args.get("y", y))
    res = add.apply_async((x, y))
    context = {"id": res.task_id, "x": x, "y": y}
    result = "add((x){}, (y){})".format(context['x'], context['y'])
    goto = "{}".format(context['id'])
    return jsonify(result=result, goto=goto)

@core.route("/test/result/<task_id>")
def show_result(task_id):
    retval = add.AsyncResult(task_id).get(timeout=1.0)
    return repr(retval)

After launching

celery -A app.extensions.celery worker -l debug

I can go to /test, but then if I try to go to /test/result/ I get the following error:

File "/virt/biscelery/lib/python2.7/site-packages/celery/result.py", line 169, in get
    no_ack=no_ack,
File "/virt/biscelery/lib/python2.7/site-packages/celery/backends/base.py", line 597, in _is_disabled
    'No result backend configured. '
NotImplementedError: No result backend configured. Please see the documentation for more information.

Why does Celery lose the backend configuration?
If I modify the get_backend_cls function in celery/backends/__init__.py from

def get_backend_cls(backend=None, loader=None):
    """Get backend class by name/alias"""
    backend = backend or 'disabled'
    loader = loader or current_app.loader
    aliases = dict(BACKEND_ALIASES, **loader.override_backends)
    try:
        return symbol_by_name(backend, aliases)
    except ValueError as exc:
        reraise(ValueError, ValueError(UNKNOWN_BACKEND.format(
            backend, exc)), sys.exc_info()[2])

to

def get_backend_cls(backend=None, loader=None):
    """Get backend class by name/alias"""
    backend = backend or 'amqp'
    loader = loader or current_app.loader
    aliases = dict(BACKEND_ALIASES, **loader.override_backends)
    try:
        return symbol_by_name(backend, aliases)
    except ValueError as exc:
        reraise(ValueError, ValueError(UNKNOWN_BACKEND.format(
            backend, exc)), sys.exc_info()[2])

everything starts working again. I can't figure out what I'm doing wrong. Any help really appreciated.
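One thing worth noting about the code in the question (my reading, not a confirmed diagnosis): make_celery(app, celery) rebinds its local celery name to a brand-new Celery(...) and its return value is discarded in create_app, so the module-level celery instance that the worker and the @celery.task decorator use never receives the result-backend configuration. A minimal sketch of the difference between rebinding a parameter and mutating the passed object (FakeCelery is a stand-in, not the real Celery class):

```python
class FakeCelery:
    """Stand-in for a Celery app holding a config dict."""
    def __init__(self):
        self.conf = {}

def make_bad(app_config, celery):
    celery = FakeCelery()           # rebinds the local name only
    celery.conf.update(app_config)  # configures the throwaway instance
    return celery

def make_good(app_config, celery):
    celery.conf.update(app_config)  # mutates the instance the caller holds
    return celery

celery = FakeCelery()
make_bad({"CELERY_RESULT_BACKEND": "amqp"}, celery)  # return value discarded
print(celery.conf)  # {}  -> backend still unconfigured

make_good({"CELERY_RESULT_BACKEND": "amqp"}, celery)
print(celery.conf)  # {'CELERY_RESULT_BACKEND': 'amqp'}
```

If this reading is right, updating the existing instance's conf inside make_celery (or assigning its return value back to the module-level name) would explain why the monkey-patched default is currently the only thing making the backend work.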
doc_23534562
// Dummy function foo
double foo(double a){return a+1;}

// Some wrapping here ... which generates (maybe a pointer to)
double Foo(double a, ...){return a+1;} // i.e. ignore any other arguments, just accept them

It's mainly because I have a struct member that will store a pointer to a function, say double (*)(double, double) normally, but in some cases it has to store a pointer to a unary function (and I don't really want to use boost.variant).

EDIT: Sorry if I didn't say it clearly, but it should work (somehow like a converter) for arbitrarily many unary functions of the same kind.

A: Since you're in a specific situation where you need a double x double -> double function, you can wrap foo as follows:

double foo_w(double a, double) { return foo(a); }

If you are interested in a more general solution, you can look into variadic templates for the number of arguments (and their respective types), as Mohammadreza Panahi shows in his answer. But try not to over-engineer your problem. ;-)

A: Here are two general solutions.

Compile time

This can be accomplished using variadic templates:

double func(double x) { return x + 1; }

template<typename... Ts>
double wrappedFunc(double x, const Ts&...) { return func(x); }

wrappedFunc() can be invoked with any number of arguments of any types; only the first one will actually be used and passed to func():

wrappedFunc(5.0, "Hello!", false, -23); // returns 6.0

Run time

As far as I understand the edit you made, you are looking for a way to do this at run time. Here's one way to do so in C++14, using a function template returning a lambda with a variable number of parameters, which calls the original function, passing its first parameter and ignoring the rest:

template<typename Func>
auto getVariadicWrapper(Func func)
{
    return [func](auto argument, auto...) { return func(argument); };
}

This function template can be used to construct a wrapper function in the following manner:

// Assuming func() is the same as before
// Create a wrapper function, which passes its first param to func and ignores the rest.
auto wrappedFunc = getVariadicWrapper(func);

// Use it
wrappedFunc(5.0, false, "bananas", 72233); // returns 6.0

A: Instead of a function pointer, if you use C++11, you can use std::function to store the functions:

double foo(double, double) {}
double Foo(double, ...) {} // You could also use a template for this one

// Note that the number of parameters of 'func' should be the maximum possible
// arguments of the functions you want to store (here: 1 vs 2)
std::function<double(double, double)> func;
func = foo; // Ok
func = Foo; // Ok

// Now you can call 'func'
func(1, 2); // if 'func' == 'Foo', ignores the 2

A: Another nice and easy way is with a lambda. If you have

double func(double a) { return a; }

struct st {
    double(*foo)(double,double);
};

you can do

st a;
a.foo = [](double a, double b){ return func(a); };
doc_23534563
Here are my code snippets- Server- This class is called CLIENTConnection and takes care of all the connections from server to client import java.net.*; import java.util.logging.Level; import java.util.logging.Logger; import java.io.*; public class CLIENTConnection implements Runnable { private Socket clientSocket; private BufferedReader in = null; public CLIENTConnection(Socket client) { this.clientSocket = client; } @Override public void run() { try { in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream())); String clientSelection=in.readLine(); while (clientSelection != null) { switch (clientSelection) { case "1": receiveFile(); break; case "2": System.out.println("inside case 2"); String outGoingFileName = in.readLine(); System.out.println(outGoingFileName); while (outGoingFileName != null) { System.out.println("Inside while loop"); sendFile(outGoingFileName); } System.out.println("Out of while"); break; case "3": receiveFile(); break; default: System.out.println("Incorrect command received."); break; } in.close(); break; } } catch (IOException ex) { Logger.getLogger(CLIENTConnection.class.getName()).log(Level.SEVERE, null, ex); } } public void receiveFile() { try { int bytesRead; DataInputStream clientData = new DataInputStream(clientSocket.getInputStream()); String filename = clientData.readUTF(); System.out.println(filename+" is received on server side"); OutputStream output = new FileOutputStream(("C://Users/Personal/workspace/ClientServer/src/dir/"+filename)); long size = clientData.readLong(); byte[] buffer = new byte[1024]; while (size > 0 && (bytesRead = clientData.read(buffer, 0, (int) Math.min(buffer.length, size))) != -1) { output.write(buffer, 0, bytesRead); size -= bytesRead; } output.close(); clientData.close(); System.out.println("File "+filename+" received from client."); } catch (IOException ex) { System.err.println("Client error. 
Connection closed."); } } public void sendFile(String fileName) { try { //handle file read File myFile = new File("C://Users/Personal/workspace/ClientServer/src/dir/"+fileName); byte[] mybytearray = new byte[(int) myFile.length()]; FileInputStream fis = new FileInputStream(myFile); BufferedInputStream bis = new BufferedInputStream(fis); //bis.read(mybytearray, 0, mybytearray.length); DataInputStream dis = new DataInputStream(bis); dis.readFully(mybytearray, 0, mybytearray.length); //handle file send over socket OutputStream os = clientSocket.getOutputStream(); //Sending file name and file size to the server DataOutputStream dos = new DataOutputStream(os); dos.writeUTF(myFile.getName()); dos.writeLong(mybytearray.length); dos.write(mybytearray, 0, mybytearray.length); dos.flush(); System.out.println("File "+fileName+" sent to client."); } catch (Exception e) { System.err.println("File does not exist!"); } } } Client Side (Receive File) public class FileClient { private static Socket sock; private static String fileName; private static BufferedReader stdin; private static PrintStream os; public static void main(String[] args) throws IOException, ClassNotFoundException { ObjectInputStream inFromServer; try { sock = new Socket("localhost", 7777); stdin = new BufferedReader(new InputStreamReader(System.in)); } catch (Exception e) { System.err.println("Cannot connect to the server, try again later."); System.exit(1); } inFromServer= new ObjectInputStream(sock.getInputStream()); os = new PrintStream(sock.getOutputStream()); try { switch (Integer.parseInt(selectAction())) { case 1: os.println("1"); sendFile(); break; case 2: os.println("2"); System.err.print("Enter file name: "); fileName = stdin.readLine(); os.println(fileName); receiveFile(fileName); break; case 3: os.println("3"); Synchronise(); } } catch (Exception e) { System.err.println("not valid input"); } sock.close(); } private static void Synchronise() { HashMap<String, Calendar> 
ClientFileList=getTimeStamp("C://Users/Personal/workspace/ClientServer/Client/");//getting the filename and timestamp of all the files present in client folder. /*System.out.println("Client File List : \n"); for(String s : ClientFileList.keySet()) System.out.println(s);*/ HashMap<String, Calendar> ServerFileList=getTimeStamp("C://Users/Personal/workspace/ClientServer/src/dir/");//(HashMap<String, Calendar>) inFromServer.readObject(); /*System.out.println("\nServer File List : \n"); for(String s : ClientFileList.keySet()) System.out.println(s);*/ System.out.println("File comparision output"); compareTimestamp(ClientFileList,ServerFileList); } private static void compareTimestamp(HashMap<String, Calendar> ClientFileList, HashMap<String, Calendar> serverFileList) { LinkedList<String> fileToUpload=new LinkedList<String>(); LinkedList<String> fileToDownload=new LinkedList<String>(); LinkedList<String> fileToDeleteFromClient=new LinkedList<String>(); LinkedList<String> fileToDeleteFromServer=new LinkedList<String>(); Calendar clientCalender = null,serverCalendar=null; for (String filename : serverFileList.keySet()) { serverCalendar=serverFileList.get(filename); if(ClientFileList.containsKey(filename)) { clientCalender=ClientFileList.get(filename); if(clientCalender.before(serverCalendar)) { fileToDownload.add(filename); } else { fileToUpload.add(filename); } } else { fileToDeleteFromClient.add(filename); } } for (String filename : ClientFileList.keySet()) { clientCalender=ClientFileList.get(filename); if(!serverFileList.containsKey(filename)) { fileToDeleteFromServer.add(filename); } } System.out.println("Files to download to client: "+fileToDownload); System.out.println("Files to upload to Server: "+fileToUpload); System.out.println("Files to delete from client: "+fileToDeleteFromClient); System.out.println("Files to delete from Server: "+fileToDeleteFromServer); sendFile(fileToDeleteFromServer); } private static HashMap<String, Calendar> getTimeStamp(String location) { 
HashMap<String,Calendar> fileList = new HashMap<String,Calendar>(); File dir = new File(location); File[] files = dir.listFiles(); if (files.length == 0) { System.out.println("No file found"); //System.exit(1); } else { for (int i = 0; i < files.length; i++) { Calendar calendar = Calendar.getInstance(); calendar.setTimeInMillis(files[i].lastModified()); fileList.put(files[i].getName(), calendar); } } return fileList; } public static String selectAction() throws IOException { System.out.println("1. Send file."); System.out.println("2. Recieve file."); System.out.println("3. Synchronize"); System.out.print("\nMake selection: "); return stdin.readLine(); } public static void sendFile() { try { System.err.print("Enter file name: "); fileName = stdin.readLine(); File myFile = new File("C:/Users/Personal/workspace/ClientServer/Client/"+fileName); byte[] mybytearray = new byte[(int) myFile.length()]; FileInputStream fis = new FileInputStream(myFile); BufferedInputStream bis = new BufferedInputStream(fis); //bis.read(mybytearray, 0, mybytearray.length); DataInputStream dis = new DataInputStream(bis); dis.readFully(mybytearray, 0, mybytearray.length); OutputStream os = sock.getOutputStream(); //Sending file name and file size to the server DataOutputStream dos = new DataOutputStream(os); dos.writeUTF(myFile.getName()); dos.writeLong(mybytearray.length); dos.write(mybytearray, 0, mybytearray.length); dos.flush(); dis.close(); System.out.println("File "+fileName+" sent to Server."); } catch (Exception e) { System.err.println("File does not exist!"); } } //receive a list of file to upload to server from client. 
static void sendFile(LinkedList<String> fileList) {
    for (String file : fileList)
        sendFile(file);
}

public static void sendFile(String filename) {
    File file = new File("C:/Users/Personal/workspace/ClientServer/Client/" + filename);
    byte[] mybytearray = new byte[(int) file.length()];
    FileInputStream fis;
    try {
        fis = new FileInputStream(file);
        BufferedInputStream bis = new BufferedInputStream(fis);
        //bis.read(mybytearray, 0, mybytearray.length);
        DataInputStream dis = new DataInputStream(bis);
        dis.readFully(mybytearray, 0, mybytearray.length);
        OutputStream os = sock.getOutputStream();
        DataOutputStream dos = new DataOutputStream(os);
        dos.writeUTF(file.getName());
        dos.writeLong(mybytearray.length);
        dos.write(mybytearray, 0, mybytearray.length);
        dos.flush();
        dis.close();
        System.out.println("File " + filename + " sent to Server.");
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}

public static void receiveFile(String fileName) {
    try {
        int bytesRead;
        InputStream in = sock.getInputStream();
        DataInputStream clientData = new DataInputStream(in);
        fileName = clientData.readUTF();
        OutputStream output = new FileOutputStream(("received_from_server_"));
        long size = clientData.readLong();
        byte[] buffer = new byte[1024];
        while (size > 0 && (bytesRead = clientData.read(buffer, 0, (int) Math.min(buffer.length, size))) != -1) {
            output.write(buffer, 0, bytesRead);
            size -= bytesRead;
        }
        output.close();
        in.close();
        System.out.println("File " + fileName + " received from Server.");
    } catch (IOException ex) {
        Logger.getLogger(CLIENTConnection.class.getName()).log(Level.SEVERE, null, ex);
    }
}
}

It is showing me an error at filename = clientData.readUTF();. Please let me know if there are any possible solutions.

A: DataInputStreams use this exception to signal end of stream, so the exception might not be an error, and may just mean that the data has been sent. Before reading, you will have to check that you have actually received something, and then read it.
Check this: This exception is mainly used by data input streams to signal end of stream. Note that many other input operations return a special value on end of stream rather than throwing an exception. A: Are you getting the file? public static void receiveFile(String fileName) { boolean receiving = true; //new while(receiving){ //new try { int bytesRead; InputStream in = sock.getInputStream(); DataInputStream clientData = new DataInputStream(in); fileName = clientData.readUTF(); OutputStream output = new FileOutputStream(("received_from_server_")); long size = clientData.readLong(); byte[] buffer = new byte[1024]; while (size > 0 && (bytesRead = clientData.read(buffer, 0, (int) Math.min(buffer.length, size))) != -1) { output.write(buffer, 0, bytesRead); size -= bytesRead; } output.close(); in.close(); System.out.println("File "+fileName+" received from Server."); } catch (EOFException e){ // new Logger.getLogger(CLIENTConnection.class.getName()).log(Level.SEVERE, null, e); receiving = false; } catch (IOException ex) { Logger.getLogger(CLIENTConnection.class.getName()).log(Level.SEVERE, null, ex); } } //new }
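As an aside, the loop in receiveFile implements a simple length-prefixed protocol: read a UTF name, an 8-byte size, then exactly size bytes. The read-exactly-N-bytes part is language-agnostic; here is a minimal Python sketch of the same loop (read_exact is an invented helper name, and io.BytesIO stands in for the socket stream):

```python
import io

def read_exact(stream, size):
    """Read exactly `size` bytes, looping like the Java while-loop:
    keep reading until the requested count arrives or the stream ends."""
    chunks = []
    remaining = size
    while remaining > 0:
        chunk = stream.read(min(1024, remaining))
        if not chunk:  # stream ended before `size` bytes arrived
            raise EOFError("stream ended with %d bytes missing" % remaining)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

payload = b"x" * 3000  # pretend this is the file body after name and size
data = read_exact(io.BytesIO(payload), 3000)
```

The same guard (stop when the source reports end of stream, and track how many bytes are still owed) is what prevents both an endless loop and a truncated file.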
doc_23534564
cmake_minimum_required(VERSION 3.15) project(X11 C) set(CMAKE_C_STANDARD 11) add_compile_options(-Wall -lX11) add_executable(X11 main.c) And the main.c file: #include <stdio.h> #include <stdlib.h> #include <X11/Xlib.h> #include <X11/Xutil.h> #include <X11/Xos.h> int main(int argc, char **argv) { Display *dpy = XOpenDisplay(NULL); return EXIT_SUCCESS; } The error is undefined reference to 'XOpenDisplay'. The -Wall flag works fine and I get the unused variable warning for this snippet. Lastly, I am able to compile main.c from the command line without any problem: gcc main.c -lX11 A: You are correct that XOpenDisplay is in libX11.a and linked with -lX11. But maybe your libX11.a is not in the standard directories searched by the compiler. On my system (FreeBSD) it is in /usr/local/lib and is linked with -L/usr/local/lib -lX11. So, find the directory your libX11.a lives in and add -L/path/to/directory to the link command. I don't know much about CMake, but adding link options to something named add_compile_options does not sound right. Is there something like an add_link_options directive? -Wall is for compilation, but -L and -l are for linking.
doc_23534565
I ran the object debugger. From the results, in the og:image field, I can see the image correctly, however in the preview itself the image is broken. Any ideas? Edit: Since this is a steam page, my ability to influence it is limited. I can only change some images, but I cannot add meta tags. I did find, however, in this post, that a <link rel="image_src" href="/myimage.jpg"/> tag should also be identifiable by facebook as a preview image. For some reason in my page facebook either ignores this or treats the image as invalid (although it is larger than 200x200) A: og:image could not be downloaded or is too small og:image was not defined, could not be downloaded or was not big enough. Please define a chosen image using the og:image metatag, and use an image that's at least 200x200px and is accessible from Facebook. 'http://store.akamai.steamstatic.com/public/images/game/game_highlight_image_spacer.gif' will be used instead. Consult http://developers.facebook.com/docs/sharing/webmasters/crawler for more troubleshooting tips. So we do have the reason why it is blank (spacer.gif is blank) This is the same for other pages as well... http://cdn.akamai.steamstatic.com/steam/apps/455980/header.jpg?t=1459222551 <-- Can be found in the header... So lets have a closer look at the image itself... The images are creatd ondemand (so it seems) and you (a.jpg) is the same as Cities Skylines (b.jpg) So no idea at all... Maybe a problem with Steam's CDN (FB is unable to find the image. Did you update the image recently?) Sorry not to be able to help. The image itself cannot be faulty IMHO as stream is processing the image every time... Cannot say more :/ A: When content is shared for the first time, the Facebook crawler will scrape and cache the metadata from the URL shared. The crawler has to see an image at least once before it can be rendered. 
This means that the first person who shares a piece of content won't see a rendered image. There are two ways to avoid this: * *Pre-cache the image with the URL Debugger Run the URL through the URL debugger to pre-fetch metadata for the page. You should also do this if you update the image for a piece of content. *Use og:image:width and og:image:height Open Graph tags Using these tags will specify the image to the crawler so that it can render it immediately without having to download and process it asynchronously first. Source: https://developers.facebook.com/docs/sharing/best-practices#images
doc_23534566
I added my current directory to PATH. But could not run Composer. Then created a directory /usr/local/composer and did "sudo mv composer.phar .." into that directory, which did move the file. Then created a .zshrc file in my current directory and added both my current directory and /usr/local/composer on permanent PATH. But I still can't invoke composer. zsh gives me "command not found: composer". I am very new to this. Am I making noob mistakes? Thank you very much for any pointers.
doc_23534567
void store(String x, String y) async { Map<String, dynamic> map = { 'x': x, 'y': y, }; var jsonString = json.encode(map); SharedPreferences prefs = await SharedPreferences.getInstance(); prefs.setString('fileName', jsonString); } I saw that I can populate the shared preferences with const MethodChannel('plugins.flutter.io/shared_preferences') .setMockMethodCallHandler((MethodCall methodCall) async { if (methodCall.method == 'getAll') { return <String, dynamic>{}; // set initial values here if desired } return null; }); But I didn't understand how to use it, especially in my case. A: You can use SharedPreferences.setMockInitialValues for your test: test('Can Create Preferences', () async{ SharedPreferences.setMockInitialValues({}); //set values here SharedPreferences pref = await SharedPreferences.getInstance(); bool working = false; String name = 'john'; pref.setBool('working', working); pref.setString('name', name); expect(pref.getBool('working'), false); expect(pref.getString('name'), 'john'); }); A: Thanks to nonybrighto for the helpful answer. I ran into trouble trying to set initial values in shared preferences using: SharedPreferences.setMockInitialValues({ "key": "value" }); It appears that the shared_preferences plugin expects keys to have the prefix flutter.. This therefore needs adding to your own keys if mocking using the above method. See line 20 here for evidence of this: https://github.com/flutter/plugins/blob/2ea4bc8f8b5ae652f02e3db91b4b0adbdd499357/packages/shared_preferences/shared_preferences/lib/shared_preferences.dart A: I don't know if that helps you, but I also lost a lot of time before finding this solution. LocalDataSourceImp.test.dart void main(){ SharedPreferences? preference; LocalDataSourceImp? localStorage; setUp(() async{ preference = await SharedPreferences.getInstance(); localStorage = LocalDataSourceImp(preference!); SharedPreferences.setMockInitialValues({}); }); final token = TokenModel(data: "babakoto"); test("cache Token ", ()async{ localStorage!.cacheToken(token); final result = preference!.getString(TOKEN); expect(result, json.encode(token.toJson())); }); } LocalDataSourceImp.dart class LocalDataSourceImp implements LocalDataSource{ SharedPreferences pref ; LocalDataSourceImp(this.pref); @override Future<void> cacheToken(TokenModel token)async { await pref.setString(TOKEN,json.encode(token)); } }
doc_23534568
CREATE TABLE my_table (col1 VARCHAR(100), col2 VARCHAR(100), col3 INT) I tried to insert a few records to that table using Python 3.5 and the pypyodbc library as shown below: import pypyodbc connection = pypyodbc.connect('Driver={SQL Server};' 'Server=12.3.456.789.0123;' 'Database=mydb;' 'uid=me;pwd=mypwd') cursor= connection.cursor() data = [{'p1': "I'm hungry", 'p2': "text for col2", 'p3': '1234'}] for dd in data: insert_sql = "INSERT INTO my_table (col1,col2,col3) VALUES (%s, %s, %s)" cursor.execute(insert_sql, (dd['p1'], dd['p2'], dd['p3'])) But when the above code is run, it returns: pypyodbc.ProgrammingError: ('HY000', 'The SQL contains 0 parameter markers, but 3 parameters were supplied') I don't know what I'm doing wrong. If anyone could help me resolve this, I'd greatly appreciate the help! A: The issue here is that you are using the %s placeholder, as in Python string formatting. However, pypyodbc uses ? as the placeholder (parameter marker). Simply replace %s with ? and your code should work fine. import pypyodbc connection = pypyodbc.connect('Driver={SQL Server};' 'Server=12.3.456.789.0123;' 'Database=mydb;' 'uid=me;pwd=mypwd') cursor= connection.cursor() data = [{'p1': "I'm hungry", 'p2': "text for col2", 'p3': 1234}] for dd in data: insert_sql = "INSERT INTO my_table (col1,col2,col3) VALUES (?, ?, ?)" cursor.execute(insert_sql, (dd['p1'], dd['p2'], dd['p3'])) cursor.commit() Note: You are setting the value of p3 as the string '1234', which won't cause an error here, but it's better to change it to the int 1234 just to prevent data type errors.
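The qmark style is not specific to pypyodbc: it is one of the standard DB-API paramstyles, and Python's built-in sqlite3 driver uses it too, so the pattern can be tried without a SQL Server instance (a sketch against an in-memory database; table and column names mirror the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (col1 TEXT, col2 TEXT, col3 INTEGER)")

data = [{'p1': "I'm hungry", 'p2': "text for col2", 'p3': 1234}]
insert_sql = "INSERT INTO my_table (col1, col2, col3) VALUES (?, ?, ?)"
for dd in data:
    # ? markers, exactly as in the pypyodbc answer above -- not %s
    conn.execute(insert_sql, (dd['p1'], dd['p2'], dd['p3']))
conn.commit()

rows = conn.execute("SELECT col1, col3 FROM my_table").fetchall()
```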
doc_23534569
SQLite version 3.7.9 2011-11-01 00:52:41 sqlite> pragma foreign_keys=on; sqlite> sqlite> CREATE TABLE t1(id int primary key, uuid varchar(36)); sqlite> CREATE TABLE t2(id int primary key, t1_uuid varchar(36), FOREIGN KEY (t1_uuid) REFERENCES t1(uuid)); sqlite> sqlite> INSERT INTO t1(uuid) values ("uuid-1"); sqlite> INSERT INTO t2(t1_uuid) values ("uuid-1"); Error: foreign key mismatch But if I make t1(uuid) the primary key, all works as expected: sqlite> pragma foreign_keys=off; sqlite> DROP TABLE t1; sqlite> DROP TABLE t2; sqlite> pragma foreign_keys=on; sqlite> CREATE TABLE t1(id int, uuid varchar(36) primary key); sqlite> CREATE TABLE t2(id int primary key, t1_uuid varchar(36), FOREIGN KEY (t1_uuid) REFERENCES t1(uuid)); sqlite> INSERT INTO t1(uuid) values ("uuid-1"); sqlite> INSERT INTO t2(t1_uuid) values ("uuid-1"); Creating a plain (non-unique) index doesn't help either: sqlite> pragma foreign_keys=on; sqlite> CREATE TABLE t1(id int primary key, uuid varchar(36)); sqlite> CREATE INDEX uuindex ON t1(uuid); sqlite> CREATE TABLE t2(id int primary key, t1_uuid varchar(36), FOREIGN KEY (t1_uuid) REFERENCES t1(uuid)); sqlite> INSERT INTO t1(uuid) values ("uuid-1"); sqlite> INSERT INTO t2(t1_uuid) values ("uuid-1"); Error: foreign key mismatch A: The documentation says: Usually, the parent key of a foreign key constraint is the primary key of the parent table. If they are not the primary key, then the parent key columns must be collectively subject to a UNIQUE constraint or have a UNIQUE index.
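The quoted rule can be reproduced with Python's built-in sqlite3 module (a sketch with in-memory databases; the exact exception class for the mismatch may vary, so the except clause is kept broad):

```python
import sqlite3

# Parent key without a UNIQUE constraint -> "foreign key mismatch"
bad = sqlite3.connect(":memory:", isolation_level=None)
bad.execute("PRAGMA foreign_keys = ON")
bad.execute("CREATE TABLE t1(id INTEGER PRIMARY KEY, uuid TEXT)")
bad.execute("CREATE TABLE t2(id INTEGER PRIMARY KEY, t1_uuid TEXT, "
            "FOREIGN KEY (t1_uuid) REFERENCES t1(uuid))")
bad.execute("INSERT INTO t1(uuid) VALUES ('uuid-1')")
try:
    bad.execute("INSERT INTO t2(t1_uuid) VALUES ('uuid-1')")
    mismatch = False
except sqlite3.DatabaseError:  # SQLite reports "foreign key mismatch"
    mismatch = True

# Same schema, but with a UNIQUE parent key -> the child insert succeeds
good = sqlite3.connect(":memory:", isolation_level=None)
good.execute("PRAGMA foreign_keys = ON")
good.execute("CREATE TABLE t1(id INTEGER, uuid TEXT PRIMARY KEY)")
good.execute("CREATE TABLE t2(id INTEGER PRIMARY KEY, t1_uuid TEXT, "
             "FOREIGN KEY (t1_uuid) REFERENCES t1(uuid))")
good.execute("INSERT INTO t1(uuid) VALUES ('uuid-1')")
good.execute("INSERT INTO t2(t1_uuid) VALUES ('uuid-1')")
ok = good.execute("SELECT t1_uuid FROM t2").fetchone()
```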
doc_23534570
i include pg on my project but electron-builder say need pg-native to build, and i have installed pg-native with ps2017 installed, now electron-builder showing this error Error: C:\Program Files\nodejs\node.exe exited with code 1 Error output: npm ERR! code 1 npm ERR! path C:\Unfinished Project\PBLauncher\PBLauncher Debug\node_modules\libpq npm ERR! command failed npm ERR! command C:\Windows\system32\cmd.exe /d /s /c node-gyp rebuild npm ERR! gyp info it worked if it ends with ok npm ERR! gyp info using node-gyp@9.0.0 npm ERR! gyp info using node@16.15.0 | win32 | x64 npm ERR! gyp info find Python using Python version 3.10.4 found at "C:\Users\AnonyX\AppData\Local\Programs\Python\Python310\python.exe" npm ERR! gyp http GET https://atom.io/download/electron/v18.3.0/node-v18.3.0-headers.tar.gz npm ERR! gyp http 403 https://gh-contractor-zcbenz.s3.amazonaws.com/atom-shell/dist/v18.3.0/node-v18.3.0-headers.tar.gz npm ERR! gyp WARN install got an error, rolling back install npm ERR! gyp ERR! configure error npm ERR! gyp ERR! stack Error: 403 response downloading https://atom.io/download/electron/v18.3.0/node-v18.3.0-headers.tar.gz npm ERR! gyp ERR! stack at go (C:\Unfinished Project\PBLauncher\PBLauncher Debug\node_modules\node-gyp\lib\install.js:153:17) npm ERR! gyp ERR! stack at processTicksAndRejections (node:internal/process/task_queues:96:5) npm ERR! gyp ERR! stack at async install (C:\Unfinished Project\PBLauncher\PBLauncher Debug\node_modules\node-gyp\lib\install.js:62:18) npm ERR! gyp ERR! System Windows_NT 10.0.19043 npm ERR! gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\Unfinished Project\\PBLauncher\\PBLauncher Debug\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" npm ERR! gyp ERR! cwd C:\Unfinished Project\PBLauncher\PBLauncher Debug\node_modules\libpq npm ERR! gyp ERR! node -v v16.15.0 npm ERR! gyp ERR! node-gyp -v v9.0.0 npm ERR! gyp ERR! not ok npm ERR! A complete log of this run can be found in: npm ERR! 
C:\Users\AnonyX\AppData\Local\npm-cache\_logs\2022-05-25T08_38_05_586Z-debug-0.log Compatibility * *WINDOWS10 *node@16.15.0 *npm@8.10.0 *node-gyp@9.0.0 *Python 3.10.4 *VS2017
doc_23534571
Checking if compiler (g++) supports -std=c++11 flag... (cached) no C++ compiler does not support C++11 standard, which is required. Please use Mapnik 2.x instead of 3.x as an alternative # more /proc/version Linux version 3.7.10-1.24-desktop (geeko@buildhost) (gcc version 4.7.2 20130108 [gcc-4_7-branch revision 195012] (SUSE Linux) ) #1 SMP PREEMPT Wed Oct 2 11:15:18 UTC 2013 (375b8b4) All dependencies are installed.
doc_23534572
A: Chart Control for .NET Framework enables you to add robust charting abilities to your applications with little effort. It is a fully managed .NET Framework component and has been specifically designed for use with Microsoft Visual Studio 2008. For examples of how to use Chart Control for .NET Framework, download the samples on Codeplex. Also, to access community content, go to the Chart Control Forum. http://go.microsoft.com/fwlink/?LinkId=128713 A: I've used Google Chart a few times with success. Use the google machine, I think there are even videos out there on how to use it. Good luck. A: A really simple charting component is to use the System.Windows.Forms.DataVisualization.Charting Chart http://msdn.microsoft.com/en-us/library/system.windows.forms.datavisualization.charting.chart.aspx It comes standard in the .NET 4.0 framework -- otherwise you can download the assemblies here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=130f7986-bf49-4fe5-9ca8-910ae6ea442c&displaylang=en I use this chart control frequently and have built other components on top of it. It's very straightforward to use and supports databinding.
doc_23534573
Name of the file is the Serial Number Column1 Name=Value Column2 Name=Value Column3 Name=Value def main(): for serial in serialList: df = df.append(processSerialNumber(cs4Dir, fileName, serial.strip())) return df def processSerialNumber(dir, partNumber, serial): printEvent('processSerialNumber, ' + partNumber + ', ' + serial) with open(os.path.join(dir, serial + '.csv'), 'r') as s: cs4Lines = s.readlines() lineInfo = {'PartNumber' : partNumber, 'SerialNumber' : serial} for l in cs4Lines: line = l.strip() lineSplit = line.split('=') lineInfo = lineInfo.append(lineSplit[0] : [lineSplit[1]]) return lineInfo My goal is to end up with a series (lineInfo), or any other variable containing key-value pairs, that I can easily append to a dataframe. The above code is returning the error: lineInfo = lineInfo.append(lineSplit[0] : lineSplit[1]) ^ SyntaxError: invalid syntax I have had success in the past building series, but I cannot figure out how to add key-value pairs to an existing series in a loop. A: The solution was to add curly brackets around the key-value pair lineInfo = lineInfo.append({lineSplit[0] : lineSplit[1]}) ^ ^
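One thing worth noting: a plain Python dict (which is what lineInfo starts out as here) has no append method at all; key/value pairs are added by subscript assignment. A minimal sketch of the parsing loop with that change, using invented sample data in place of the CSV file:

```python
# Stand-in for s.readlines() on the "Name=Value" file
cs4_lines = ["Column1 Name=Value1\n", "Column2 Name=Value2\n"]

line_info = {'PartNumber': 'part-1', 'SerialNumber': 'serial-1'}
for l in cs4_lines:
    key, _, value = l.strip().partition('=')  # split on the first '='
    line_info[key] = value  # add the pair by assignment, not append
```

The resulting dict can then be handed to the DataFrame-building step as a single row.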
doc_23534574
What I cannot discover in the standard library docs, however, is how to easily get input from less structured text files. In particular, what would be the Julia equivalent of the C++ idiom some_input_stream >> a_variable_int_perhaps; Given this is such a common usage scenario I am surprised something like this does not feature prominently in the standard library... A: You can use readuntil http://docs.julialang.org/en/latest/stdlib/io-network/#Base.readuntil shell> cat test.txt 1 2 3 4 julia> i,j = open("test.txt") do f parse(Int, readuntil(f," ")), parse(Int, readuntil(f," ")) end (1,2) EDIT: To address comments To get the last integer in an irregularly formatted ASCII file you could use split if you know the character preceding the integer (I've used a blank space here) shell> cat test.txt 1.0, two five:$#!() + 4 last line 3 julia> i = open("test.txt") do f parse(Int, split(readline(f), " ")[end]) end 4 As far as code length is concerned, the above examples are completely self contained and the file is opened and closed in an exception safe manner (i.e. wrapped in a try-finally block). To do the same in C++ would be quite verbose.
doc_23534575
A: Not really. Normally you just set the height and width to whatever you want. Is there a particular reason why you want a certain number of characters on each line? [EDIT] I found some code here that splits a string into equal chunks: Splitting a string into chunks of a certain size Using that, I came up with the following. It works ok but needs some tweaking. private void TextBox_KeyUp(object sender, KeyEventArgs e) { var text = (sender as TextBox).Text.Replace(Environment.NewLine, "").Chunk(8).ToList(); (sender as TextBox).Text = String.Join(Environment.NewLine, text.ToArray()); (sender as TextBox).SelectionStart = (sender as TextBox).Text.Length; } And the extension method: public static class Extensions { public static IEnumerable<string> Chunk(this string str, int chunkSize) { for (int i = 0; i < str.Length; i += chunkSize) yield return str.Substring(i, Math.Min(chunkSize, str.Length - i)); } }
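For comparison, the same fixed-width chunking maps directly onto slicing in Python (a sketch of the Chunk idea only, not the TextBox event handling):

```python
def chunk(s, chunk_size):
    # Equivalent of the C# Chunk extension: one slice per chunk_size chars
    return [s[i:i + chunk_size] for i in range(0, len(s), chunk_size)]

lines = chunk("abcdefghijABCDEFGHIJxyz", 8)
text = "\n".join(lines)  # re-join with newlines, as the KeyUp handler does
```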
doc_23534576
My code will display the information if I query in this format course(managing, A, B, C, D, E) but won't work when I try to have it simplified. Could someone please tell me how I should edit my code so that it will ask what course I would like information on and only require a one word answer before displaying all information related to that course? course( accouting, acc10707, day(tuesday), time(1100, 1250), prof(ayesha, mujib), b228 ). course( managing, mng10247, day(thursday), time(1000, 1150), prof(brian, morris), b228 ). course( communication, com00207, day(monday), time(1000, 1250), prof(ali, bec), b727 ). details :- write('Please enter unit keyword.'), nl, read(Name), course(Name,Code,Day,Time,Prof,Room), write(Name,Code,Day,Time,Prof,Room). A: Using the code you gave I get the following errors with gnu-prolog version 1.4.4: | ?- details. Please enter unit keyword. communication. uncaught exception: error(existence_error(procedure,write/6),details/0) What's the problem? * *The last goal in details/0 is write(Name,Code,Day,Time,Prof,Room). *You inadvertently tried using write/6, which does not exist. Solution(s)? * *Use the prolog-toplevel instead of performing side-effects in details/0! Let's define details_of/2. Its first argument is a structure c/6. details_of(Details,Course_name) :- Details = c(Course_name,Code,Day,Time,Prof,Room), course(Course_name,Code,Day,Time,Prof,Room). Sample use: | ?- details_of(X,communication). X = c(communication,com00207,day(monday),time(1000,1250),prof(ali,bec),b727) yes *Quick fix: Instead of write/6 use write/1 and a structure c/6. Replace write(Name,Code,Day,Time,Prof,Room) by write(c(Name,Code,Day,Time,Prof,Room)). | ?- details. Please enter unit keyword. communication. c(communication,com00207,day(monday),time(1000,1250),prof(ali,bec),b727) yes
doc_23534577
I restarted Visual Studio, removed my old and new .suo files, and confirmed that a teammate sees the same behavior for an entirely different solution. These solutions all worked fine under 2013 and earlier. How can I get this back? It's preventing me from setting breakpoints on external code. Update I did a repair of Visual Studio 2015, and the required system restart, and it didn't make a difference. Update 2 I did a complete uninstall (which took multiple attempts, and eventually uninstalling from a system account), and reinstall, as well as installing all important (and most optional) Windows updates. There is still no difference. A: After working with Microsoft support, we landed on the solution (which I asked them to pass along as a bug): you need to install the Common Tools for Visual C++ 2015. You can modify your installation to add this, and (for me) it didn't require a machine restart.
doc_23534578
Possible Duplicate: Format date in C# I want to display the date in the format May 16, 2011. I have used the format String.Format("{0:D}"), by which I got output like Monday, May 16, 2011. So please, can anyone tell me how I can show the date in the format May 16, 2011? Thanks in advance. A: "{0:MMMM d, yyyy}" Here is the documentation. Custom Date and Time Format Strings A: DateTime dt = DateTime.Now; String.Format("{0:MMMM d, yyyy}", dt); This should do the trick. EDIT: Here are some handy examples! http://www.csharp-examples.net/string-format-datetime/ A: DateTime.Now.ToString("MMM dd, yyyy"); A: Try this String.Format("{0:MMMM dd, yyyy}", someDate) A: If you use D for a date format the format will be taken from the Regional settings in the control panel of the machine. You could change the date format of the machine the software is running on. (Or hand code a format). A: There doesn't appear to be a Standard Date and Time Format String that matches your desired format, so you need to construct a Custom Date and Time Format String. This should do the trick: string.Format("{0:MMMM dd, yyyy}", value) Or alternatively value.ToString("MMMM dd, yyyy") * *MMMM - The full name of the month, e.g. June (en-US) *dd - The day of the month, from 01 through 31 *yyyy - The year as a four-digit number.
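For readers outside .NET, the same "May 16, 2011" shape in Python's strftime, for comparison (note that %B, the full month name, is locale-dependent):

```python
from datetime import datetime

# %B = full month name, %d = day of month, %Y = four-digit year
d = datetime(2011, 5, 16)
formatted = d.strftime("%B %d, %Y")
```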
doc_23534579
I have a form for updating a User object; the form posts to a controller and the controller calls my service class, which updates/adds the entity using a UserDAO which extends CrudRepository. I think this is very basic test code. The issue is that the form doesn't contain all the fields in my User class, so when the object is posted to the controller, only the fields that are contained in the form are populated and the entity fields that are not in the form are null. For example my user has 4 fields: firstname, lastname, age, password. The form only contains the first 3 fields, so when the object is posted to the controller, the password field is null; that is as I expect. My issue is that if I then call save(entity) on my DAO it will update the user, but that would include nulling the password field, which of course is undesirable. Of course I could simply retrieve the original entity from the repository and merge it with only the fields that have been updated, but it seems to me this is a lot of work that would be very commonly required, and typically Spring has a solution to these types of generic tasks. Did I miss something? TIA
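The retrieve-then-merge step the question describes is mechanical enough to sketch language-neutrally; a hypothetical helper in Python that copies the stored record and overwrites only the fields the form actually submitted:

```python
def merge_partial_update(stored, submitted):
    """Return a copy of `stored` with only non-None submitted fields applied."""
    merged = dict(stored)
    for field, value in submitted.items():
        if value is not None:
            merged[field] = value
    return merged

stored = {"firstname": "Ada", "lastname": "Lovelace", "age": 36,
          "password": "secret-hash"}
form = {"firstname": "Ada", "lastname": "King", "age": 37,
        "password": None}  # the form never carried a password field
updated = merge_partial_update(stored, form)
```

The merge is exactly what keeps the final save from nulling the fields the form omitted.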
doc_23534580
The size of this file is about 2400273 bytes (calculated by fseek, SEEK_END). This file has lots of lines like 'apple = 사과' (similar to a dictionary). The main problem is that reading the file takes a very long time, and I couldn't find any solution to this problem on Google. The reason, I guessed, is associated with using fgets(), but I don't know exactly. Please help me. Here is my code, written in C: #define _CRT_SECURE_NO_WARNINGS #include <stdio.h> #include <stdlib.h> #include <string.h> int main() { int line = 0; char txt_str[50]; FILE* pFile; pFile = fopen("dict_test.txt", "r"); if (pFile == NULL) { printf("file doesn't exist or there is problem to open your file\n"); } else { do{ fgets(txt_str, 50, pFile);; line++; } while (txt_str != EOF); } printf("%d", line); } Output: couldn't see the result because the program was continuously running. Expected: the number of lines of this txt file. A: Major * *OP's code fails to test the return value of fgets(). Code needs to check the return value of fgets() to know when to stop. @A4L do{ fgets(txt_str, 50, pFile);; // fgets() return value not used. Other * *Line count should not get incremented when fgets() returns NULL. *Line count should not get incremented when fgets() read a partial line. (I.e.) the line was 50 or longer. Reasonable to use a wider than 50 buffer. *Line count may exceed INT_MAX. There is always some upper bound, yet trivial to use a wider type. *Good practice to close the stream. *Another approach to count lines would use fread() to read chunks of memory and then look for start of lines. (Not shown) *Recommend to print a '\n' after the line count. int main(void) { FILE* pFile = fopen("dict_test.txt", "r"); if (pFile == NULL) { printf("File doesn't exist or there is problem to open your file.\n"); return EXIT_FAILURE; } unsigned long long line = 0; char txt_str[4096]; while (fgets(txt_str, sizeof txt_str, pFile)) { if (strlen(txt_str) == sizeof txt_str - 1) { // Buffer full? if (txt_str[sizeof txt_str - 2] != '\n') { // Last stored char not \n?
continue; } } line++; } fclose(pFile); printf("%llu\n", line); } A: fgets returns NULL on EOF. You never use the return value of fgets(txt_str, 50, pFile); and the loop condition txt_str != EOF compares an array to EOF, which is always true, so your program never sees the end of the file and thus enters an endless loop. Try something like this: char* p_str; do{ p_str = fgets(txt_str, 50, pFile); } while (p_str != NULL);
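The answers above boil down to "count a line each time a read succeeds, stop when the read reports end of file". For contrast, here is how little code the same count takes in Python, where iterating a text stream yields one line per iteration (a sketch with in-memory data standing in for dict_test.txt):

```python
import io

def count_lines(stream):
    count = 0
    for _ in stream:  # each iteration consumes one '\n'-terminated line
        count += 1
    return count

sample = io.StringIO("apple = one\nbanana = two\ncherry = three\n")
n = count_lines(sample)
```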
doc_23534581
A: You can use: #include <thrust/device_ptr.h> #include <thrust/device_vector.h> template <typename T> class my_thrust_class { public: thrust::device_ptr<T> my_dptr; } to declare a device pointer that can then be initialized to the start of whatever device_vector you want it to refer to: thrust::device_vector<float> my_vec(3); my_thrust_class<float> A; A.my_dptr = my_vec.data();
doc_23534582
I am using a file upload control which can be found here: https://github.com/blueimp/jQuery-File-Upload The following is a snippet of code showing the file upload control: <div class="modal-body"> <form id="imageUpload" action="@Url.Content("~/Listing/ReturnBase64Data")" method="POST" enctype="multipart/form-data"> @Html.HiddenFor(m => m.ListingGuid) <div class="fileupload-buttonbar"> <div class="progressbar fileupload-progressbar" id="progressbar"> </div> <div class="fileinput-button">Upload Image <input type="file" id="fileupload" name="image"/> </div> </div> @if (Model.SelectedImage == null) { <div id="show_image"> </div> } else { <div id="show_image"> <img style="height:200px ! important;" src="data:image/png;base64, @Model.SelectedImage.Content"/> </div> } <div id="show_error" > </div> </form> There is more code after the </form> tag but that's the main part. The following shows the file upload control and the letter 'N' next to it. The letter N is actually a string that says "No file chosen". But for some reason it seems like there is something overlapping the entire control. Does anyone know where I am going wrong? A: It seems you have not jquery.fileupload-ui.css on your view, because you must not have 'choose file' and 'no file chosen'. There must be just 'Upload Image' button. First make sure you have link to jquery.fileupload-ui.css on your view then add btn class to the div which contains input. If you have the style sheet, the fileinput-button class must have been override by other style sheet. check it by firebug or Chrome developer tools. <div class="btn fileinput-button">Upload Image <input type="file" id="fileupload" name="image"/> </div>
doc_23534583
But it isn't supported. I tried adding Babel in karma.conf.js to transform the ES6 code to ES5, then ran coverage. It runs fine, but the result is not what I want: it tests the ES5 code. preprocessors: { 'webapp/controller/*.js': ['babel', 'coverage'] // './!(test|localService)/**/*.js': ['coverage'] }, babelPreprocessor: { options: { presets: ['@babel/preset-env'], sourceMap: 'inline' }, sourceFileName: function(file) { return file.originalPath; } } I want to get the coverage of the ES6 code, but actually it gets the coverage of the ES5 code.
doc_23534584
Example: http://localhost/school/backend/web/index.php?r=user%2Fview&id=20 The 20 must be encrypted. What's the simplest way in Yii2 to achieve this? A: The problem with trying to encrypt part of the URL like that is that the client browser must have the key to use for the encryption. You can supply that over HTTPS but it would mean that anyone could also obtain the key. Alternatively you could have one key per browsing session, but that will impact performance and may be overkill. What's the reason for encrypting the id parameter? If it's just to avoid an insecure direct object reference then you could create a hash for the user based on random data (you'd need a unique hash for each user object). The hash would make it difficult, but not impossible, to correctly guess another object's hash. Essentially this is security by obscurity. A better approach is to securely handle viewing other IDs. For example, it may be that I'm allowed to view my own objects / users but not yours. To achieve this you should programmatically check the user is authorised to view the object in question. This does mean writing more code but is a significantly better way of doing things. Submitting the request by HTTP POST would only protect you from a casual user. A more skilled user (or attacker) would just intercept the POST request, modify the value and send it on.
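The "unique hash per object" idea from the answer can be sketched in a few lines of Python (names invented): give each record a random, unguessable public token and look records up by token instead of by sequential id. As the answer stresses, this is obscurity, not a substitute for a real authorization check:

```python
import secrets

records = {}  # public token -> record

def store(record):
    token = secrets.token_urlsafe(16)  # ~128 bits of randomness
    records[token] = record
    return token  # this token, not the numeric id, goes into the URL

t = store({"id": 20, "name": "student"})
found = records[t]
```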
doc_23534585
When looking around, I've seen that a solution for this can be: * *bind IShellItem2 to the given IShellItem *retrieve the IShellItem property store *with this function (as seen in the example in the page), find the file's size I don't fully understand the Win32 API, so maybe I got this all wrong, but if I am right I just find it difficult to get past the 1st step - How can I bind those two? A: You don't need to use IPropertyStore if you have an IShellItem2 reference, you can directly use IShellItem2::GetUInt64 . Here is some sample code: CoInitialize(NULL); ... IShellItem2* item; if (SUCCEEDED(SHCreateItemFromParsingName(L"c:\\myPath\\myFile.ext", NULL, IID_PPV_ARGS(&item)))) { ULONGLONG size; if (SUCCEEDED(item->GetUInt64(PKEY_Size, &size))) // include propkey.h { ... use size ... } item->Release(); } ... CoUninitialize(); If you already have an IShellItem reference (in general you want to get an IShellItem2 directly) and want a IShellItem2, you can do this: IShellItem2* item2; if (SUCCEEDED(item->QueryInterface(&item2))) { ... use IShellItem2 ... } Another way of doing it, w/o using IShellItem2, is this: IShellItem* item; if (SUCCEEDED(SHCreateItemFromParsingName(L"c:\\myPath\\myFile.ext", NULL, IID_PPV_ARGS(&item)))) { IPropertyStore* ps; if (SUCCEEDED(item->BindToHandler(NULL, BHID_PropertyStore, IID_PPV_ARGS(&ps)))) { PROPVARIANT pv; PropVariantInit(&pv); if (SUCCEEDED(ps->GetValue(PKEY_Size, &pv))) // include propkey.h { ULONGLONG size; if (SUCCEEDED(PropVariantToUInt64(pv, &size))) // include propvarutil.h { ... use size ... } PropVariantClear(&pv); } ps->Release(); } item->Release(); }
doc_23534586
In my XML, this AdMob tutorial says I add this to my activity XML to hold the banner ad: <com.google.android.gms.ads.AdView android:id="@+id/admob_adview" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentBottom="true" android:layout_centerHorizontal="true" ads:adSize="BANNER" ads:adUnitId="@string/banner_footer" /> And then in the code, the tutorial says: public class MainActivity extends AppCompatActivity { private AdView mAdMobAdView; Button button; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); mAdMobAdView = (AdView) findViewById(R.id.admob_adview); AdRequest adRequest = new AdRequest.Builder() .addTestDevice(AdRequest.DEVICE_ID_EMULATOR) .addTestDevice("4DD0986B8BB49093161F4F00CF61B887")// Add your real device id here .build(); mAdMobAdView.loadAd(adRequest); button = (Button) findViewById(R.id.button); button.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { startActivity(new Intent(getApplicationContext(), ActivityTwo.class)); } }); } } But if I only want this ad to display on one flavor vs. the other, what's the correct way to handle this without violating any rules? Normally flavors are checked via something like if (BuildConfig.FLAVOR.equals("adfree")) { //no ads } else { //show ads } Would I simply put the adRequest code in the above if-statement within my Activity? For the ad-free version, does it matter that the AdView is still in my XML, or do I need to use a separate copy of the XML with the AdView removed? A: This answer is not specific to AdMob, and so there may be options or limitations tied to AdMob that render this analysis moot. However, IMHO, in the ad-free app, you want AdMob totally gone. Why would you bother having it in there, taking up space, possibly crashing with uninitialized views, etc.? It just seems like an unnecessary cost to you and your users. 
To do that (referring to your two flavors ad and adfree): Step #1: For AdMob-specific dependencies, use adCompile instead of compile Step #2: Have the layout resource(s) with the AdMob AdView in ad/src/res/layout/, with equivalents with the same names in adfree/src/res/layout (so you can redesign around having no ad taking up space) Step #3: Define an AdStrategy strategy class in both the ad/ and adfree/ source sets, with the same public API, where the ad/ one has method(s) tied to AdMob (such as that initialization code) and the adfree/ one just has empty methods Step #4: In MainActivity and anywhere else you interact with AdMob from Java, call the appropriate AdStrategy method, so in ad builds you get the AdMob stuff and in adfree builds it does nothing (or alternative logic, if appropriate) Now your adfree builds are truly AdMob-free, not just AdMob-uninitialized. A: For me the simplest and most effective way to achieve this would be: if (BuildConfig.FLAVOR.equals("adfree")) { mAdMobAdView.setVisibility(View.GONE); } else { mAdMobAdView.setVisibility(View.VISIBLE); } This will make the adView completely disappear in the ad-free version.
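A sketch of the Gradle side of step #1 (flavor names from the question; the dimension name and artifact version are illustrative, and recent Android Gradle plugin versions spell the flavor-prefixed configuration adImplementation rather than adCompile):

```groovy
// build.gradle (module) — only the "ad" flavor pulls in the AdMob dependency
android {
    flavorDimensions "monetization"
    productFlavors {
        ad { dimension "monetization" }
        adfree { dimension "monetization" }
    }
}

dependencies {
    // flavor-prefixed configuration: resolved only for "ad" builds,
    // so "adfree" APKs never contain the AdMob classes at all
    adImplementation 'com.google.android.gms:play-services-ads:20.6.0'
}
```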
doc_23534587
I'm defining the $color variable as global, and its value is set inside a mixin. When I call this mixin, I want $color to have its mixin value, but when passing the variable through the @content directive, I always get the global value: $color: #000 !default; @mixin generate-fruits() { @each $fruit, $fruit-color in $fruits { $color: $fruit-color; .#{$fruit}{ @content; // Expected result // background: $color; } } } @include generate-fruits(){ background: $color; }; result: background gets black; expected: background gets $fruit-color It seems @content is evaluated in the caller's scope rather than inside the mixin. How can I force @content to use the mixin scope? Please find a simplified version of the problem on CodePen.
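@content is evaluated in the scope where the block was written, so the $color assignment inside the loop creates a local that the passed block never sees. Two possible sketches (the !global flag matches Sass of that era; the @content(...)/using form requires Dart Sass):

```scss
// Option 1: overwrite the global so the caller's $color picks up each value
@mixin generate-fruits() {
  @each $fruit, $fruit-color in $fruits {
    $color: $fruit-color !global;
    .#{$fruit} { @content; }
  }
}

// Option 2 (Dart Sass): pass the value to the content block explicitly
@mixin generate-fruits-v2() {
  @each $fruit, $fruit-color in $fruits {
    .#{$fruit} { @content($fruit-color); }
  }
}

@include generate-fruits-v2() using ($color) {
  background: $color;
}
```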
doc_23534588
maxim = max(listScore)
i = 0
keep = [-1] * num
for x in range(0, num):
    if listScore[x] == maxim:
        keep[i] = x
        i += 1
        listScore[x] = -100
    maxim = max(listScore)

The result I want is: listScore=[0.25, 0.5, 0.5, -0.25, -0.25, 0.25] positions=[1,2,5,-1,-1,-1] A: Here's one approach using a list comprehension. The idea is to sort a range of the same length as the list, specifying that we want to fetch elements from the list in the key argument. What we get is the resulting sorted range to return us the indices that would sort the list: l= [0.25, 0.5, 0.5, -0.25, -0.25, 0.25] out = [i for i in sorted(range(len(l)), key=l.__getitem__, reverse=True)][:3] # [1, 2, 0] If you want the additional -1s: out + [-1] * (len(l) - len(out)) # [1, 2, 0, -1, -1, -1] A: Here is one slight tweak on @yatu's answer, a little shorter: print([i for i in sorted(range(len(l)), key=l.__getitem__)][-3:]) Which gives: [5, 1, 2]
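Combining the two answers above into a runnable sketch (the function name is mine) that returns the top-k indices padded with -1 up to the list's length:

```python
def top_positions(scores, k):
    """Indices of the k largest values (stable order), padded with -1."""
    # sorted() is stable, so equal scores keep their original left-to-right order
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
    return order + [-1] * (len(scores) - len(order))

list_score = [0.25, 0.5, 0.5, -0.25, -0.25, 0.25]
positions = top_positions(list_score, 3)  # -> [1, 2, 0, -1, -1, -1]
```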
doc_23534589
* *Unified Documentation (That would make my job a lot easier.) *Testing capability built in to the language. *Debug code support in the language. *Forward Declarations. (I always thought it was stupid to declare the same function twice.) *Built in features to replace the Preprocessor. *Modules *Typedef used for proper type checking instead of aliasing. *Nested functions. (Cough PASCAL Cough) *In and Out Parameters. (How obvious is that!) *Supports low level programming - Embedded systems, oh yeah! However: * *Can D support an embedded system that is not going to be running an OS? *Does the outright declaration that it doesn't support 16 bit processors preclude it entirely from embedded applications running on such machines? Sometimes you don't need a hammer to solve your problem. *Garbage collection is great on Windows or Linux, but unfortunately embedded applications sometimes must do explicit memory management. *Array bounds checking, you love it, you hate it. Great for design assurance, but not always permissible for performance issues. *What are the implications on an embedded system, not running an OS, for multithreading support? We have a customer that doesn't even like interrupts. Much less OS/multithreading. *Is there a D-Lite for embedded systems? So basically is D suitable for embedded systems with only a few megabytes (sometimes less than a megabyte), not running an OS, where max memory usage must be known at compile time (Per requirements.) and possibly on something smaller than a 32 bit processor? I'm very interested in some of the features, but I get the impression it's aimed at desktop application developers. What specifically is it that makes it unsuitable for a 16-bit implementation? (Assuming the 16 bit architecture could address sufficient amounts of memory to hold the runtimes, either in flash memory or RAM.) 32 bit values could still be calculated, albeit slower than 16 bit and requiring more operations, using library code. 
A: First and foremost read larsivi's answer. He's worked on the D runtime and knows what he's talking about. I just wanted to add: Some of what you asked about is already possible. It won't get you all the way, and a miss is as good as a mile here but still, FYI: Garbage collection is great on Windows or Linux, but unfortunately embedded apps sometimes must do explicit memory management. You can turn garbage collection off. The various experimental D OSes out there do it. See the std.gc module, in particular std.gc.disable. Note also that you do not need to allocate memory with new: you can use malloc and free. Even arrays can be allocated with it, you just need to attach a D array around the allocated memory using a slice. Array bounds checking, you love it, you hate it. Great for design assurance, but not always permissible for performance issues. The specification for arrays specifically requires that compilers allow for bounds checking to be turned off (see the "Implementation Note"). gdc provides -fno-bounds-check, and in dmd using -release should disable it. What are the implications on an embedded system, not running an OS, for multithreading support? We have a customer that doesn't even like interrupts. Much less OS/multithreading. This I'm less clear on, but given that most C runtimes allow turning off multithreading, it seems likely one could get the D runtime to disable it as well. Whether that's easy or possible right now though I can't tell you. A: The answers to this question are outdated: Can D support an embedded system that is not going to be running an OS? D can be cross-compiled for ARM Linux and for ARM Cortex-M. Some projects aim at creating libraries for Cortex-M architectures like MiniLibD for the STM32 or this project which uses a generic library for the STM32. (You could implement your own minimalistic OS in D on ARM Cortex-M.) 
Does the outright declaration that it doesn't support 16 bit processors preclude it entirely from embedded applications running on such machines? Sometimes you don't need a hammer to solve your problem. No, see answer above... (But I would not expect that "smaller" architectures than Cortex-M will be supported in the near future.) Garbage collection is great on Windows or Linux, but unfortunately embedded applications sometimes must do explicit memory management. You can write Garbage Collection free code. (The D foundation seems to aim at a "GC free compliant" standard library Phobos but that is work in progress.) Array bounds checking, you love it, you hate it. Great for design assurance, but not always permissible for performance issues. (As you said this depends on your "personal taste" and design decisions. But I would assume an acceptable performance overhead for bound checking due to the background of the D compiler developers and D's design aims.) What are the implications on an embedded system, not running an OS, for multithreading support? We have a customer that doesn't even like interrupts. Much less OS/multithreading. (What is the question? One could implement multithreading using D's language capabilities e.g. like explained in this question. BTW: If you want to use interrupts consider this "hello world" project for a Cortex-M3.) Is there a D-Lite for embedded systems? The SafeD subset of D targets the embedded domain. A: I have to say that the short answer to this question is "No". * *If your machines are 16 bit, you'll have big problems fitting D into them - it is explicitly not designed for it. *D is not a light language in itself, it generates a lot of runtime type info that normally is linked into your app, and that also is needed for typesafe variadics (and thus the standard formatting features be it Tango or Phobos). 
This means that even the smallest applications are surprisingly large in size, and may thus disqualify D from the systems with low RAM. Also D with a runtime as a shared lib (which could alleviate some of these issues) has been little tested. *All current D libraries require a C standard library below them, and thus typically also an OS, so even that works against using D. However, there do exist experimental kernels in D, so it is not impossible per se. There just wouldn't be any libraries for it, as of today. I would personally like to see you succeed, but doubt that it will be easy work.
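For reference, the GC-off and malloc-backed-slice techniques mentioned above look roughly like this in present-day D (the old std.gc module has since become core.memory; this is a sketch, not embedded-ready code):

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

void main()
{
    GC.disable();                        // no collections from here on

    // Allocate with the C heap and view it through a D slice,
    // keeping array syntax (and optional bounds checks) without the GC.
    auto p = cast(int*) malloc(10 * int.sizeof);
    int[] arr = p[0 .. 10];
    arr[0] = 42;

    free(p);
    GC.enable();
}
```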
doc_23534590
$(document).ready(function () { var audioElement = document.createElement('audio'); audioElement.setAttribute('src', 'audio/01T.wav'); audioElement.setAttribute('autoplay:false', 'autoplay'); $.get(); audioElement.addEventListener("load", function () { audioElement.play(); }, false); $('.letter1').mouseenter(function () { $(this).addClass('shake shake-constant').stop().delay(3000).queue(function(){ $(this).removeClass('shake shake-constant'); }); audioElement.play(); }); var audioElementb = document.createElement('audio'); audioElementb.setAttribute('src', 'audio/01T.wav'); audioElementb.setAttribute('autoplay:false', 'autoplay'); audioElementb.loop=true; $.get(); audioElementb.addEventListener("load", function () { audioElementb.play(); }, false); $('.letter1').on('click', function(){ $( this ).toggleClass( "serif shake2 shake-constant2" ); if (audioElementb.paused) { audioElementb.play(); } else { audioElementb.pause(); } }); });
doc_23534591
if ($_SERVER["REQUEST_METHOD"] == "POST") { then proceeds to use the superglobal $_POST to get the variable which is then used in an SQL query. Below is my PHPUnit code that I am trying to use to test the procedural file above: <?php use PHPUnit\Framework\TestCase; require '../public_html/PHP/dbconfig.php'; //require '../public_html/PHP/getters/getBranchHeaderPhoto'; class getBranchHeaderPhotoTest extends TestCase { protected function setUp() { parent::setUp(); $_POST = array(); } /** * @test */ public function checkThatGetBranchHeaderPhotoReturnsTheDirectoryAndImage() { $_SERVER["REQUEST_METHOD"] == "POST"; $_POST = array("all"); ob_start(); include('../public_html/PHP/getters/getBranchHeaderPhoto.php'); $contents = ob_get_contents(); $this->assertNotNull($contents); } } ?> However, when I try and run the test in the command line, running the following cmd phpunit getBranchHeaderPhotoTest.php, the following error occurs; There was 1 error: 1) getBranchHeaderPhotoTest::checkThatGetBranchHeaderPhotoReturnsTheDirectoryAndImage Undefined index: REQUEST_METHOD C:\Users\User\Documents\project\project\test\getBranchHeaderPhotoTest.php:21 ERRORS! Tests: 1, Assertions: 0, Errors: 1. I have tried following previous SO answers on this topic but can not pass a request method into the procedural file. Is this possible? A: You're using == (comparison) when you want = (assignment). Change: $_SERVER["REQUEST_METHOD"] == "POST"; to: $_SERVER["REQUEST_METHOD"] = "POST"; If possible, fix the offending code to check the key exists, like with: if (($_SERVER["REQUEST_METHOD"] ?? 'GET') == 'POST')
doc_23534592
// put router.put('/', async (ctx, next) => { let body = ctx.request.body || {} if (body._id === undefined) { // Throw the error. ctx.throw(400, '_id is required.') } }) When _id is not provided, I get: _id is required. But I don't want it returned as plain text like that. I would prefer catching it at the top level and then formatting it, e.g.: { status: 400, message: '_id is required.' } According to the doc: app.use(async (ctx, next) => { try { await next() } catch (err) { ctx.status = err.status || 500 console.log(ctx.status) ctx.body = err.message ctx.app.emit('error', err, ctx) } }) But even with that try catch in my middleware, I still get _id is required. Any ideas? A: Throw an error with the needed status code: ctx.throw(400, '_id is required.'); And use a default error handler to format the error response: app.use(async (ctx, next) => { try { await next(); } catch (err) { ctx.status = err.statusCode || err.status || 500; ctx.body = { status: ctx.status, message: err.message }; } });
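The formatting step in that handler is just this mapping (a plain-JS sketch with a hypothetical helper name, not part of Koa itself):

```javascript
// Map a thrown error to the JSON body the middleware sends back.
// ctx.throw(400, msg) produces an Error with .status (or .statusCode) set.
function formatError(err) {
  const status = err.statusCode || err.status || 500;
  return { status: status, message: err.message };
}

const e = new Error('_id is required.');
e.status = 400;
const body = formatError(e); // -> { status: 400, message: '_id is required.' }
```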
doc_23534593
// ... class stuff public: set<int> s; // ... set<int> ord(){ set<int, Order> aux; set<int> res; set<int>::iterator it = s.begin(); while(it != s.end()){ aux.insert(*it); it++; } // res = aux; return res; } Thank you! :) A: Without changing the order, you can't. – Quentin
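Since the comparator is baked into a set's type, the only way to get a different order is to copy the elements into a differently-typed set — which is what the loop above does, and why the commented-out res = aux cannot compile. A sketch using std::greater as a stand-in for the question's Order type:

```cpp
#include <cassert>
#include <functional>
#include <set>

// Copy a std::set<int> into a set ordered by a different comparator.
// (std::greater<int> stands in for the question's Order type.)
std::set<int, std::greater<int>> descending(const std::set<int>& s) {
    // The range constructor replaces the manual iterator loop.
    return std::set<int, std::greater<int>>(s.begin(), s.end());
}
```

Note that the return type is not std::set<int>, so the result cannot be assigned back to a plain set<int> either — exactly the point of the answer's comment.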
doc_23534594
For example, I write a phrase in French -> Nous découvrir en vidéo <h1 class="primaryTitle"> Nous découvrir en vidéo </h1> In the display, I get this garbled message: Nous dÃ©couvrir en vidÃ©o I don't understand where the problem comes from. I think the problem is perhaps here? <!DOCTYPE html><html> <meta http-equiv="content-type" content="text/html;charset=ISO-8859-1" /> <head> <title>title</title> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <link rel="icon" href="../favicon.gif" > <script language="JavaScript" type="text/javascript"> </script> </head> Thanks A: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Document</title> </head> <body> <h1>Nous découvrir en vidéo</h1> </body> </html> Change the charset in the meta tag to "UTF-8"; this worked when I tried it. A: Can you try this? <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Document</title> </head> <body> <h1 class="primaryTitle"> Nous découvrir en vidéo </h1> </body> </html> English: <html lang="en"> French: <html lang="fr">
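What happens here can be reproduced outside the browser: the file is saved as UTF-8 but the page declares ISO-8859-1, so each accented character's two UTF-8 bytes are rendered as two Latin-1 characters. A small Python sketch of the mismatch:

```python
text = "Nous découvrir en vidéo"

# Encode as UTF-8 (how the file is saved on disk), then decode as
# ISO-8859-1 (how the meta tag tells the browser to read it).
mojibake = text.encode("utf-8").decode("iso-8859-1")
# mojibake == "Nous dÃ©couvrir en vidÃ©o"

# Declaring the correct charset is the reverse operation:
restored = mojibake.encode("iso-8859-1").decode("utf-8")
```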
doc_23534595
I am trying to build a regex pattern that will pull in any matches that have # around them. For example, #match# or #mt# would both come back. This works fine for that. #.*?# However I don't want matches on ## to show up. Basically, if there is nothing between the pound signs, don't match. Hope this makes sense. Thanks. A: Please use + to match 1 or more symbols: #+.+#+ UPDATE: If you want to only match substrings that are enclosed with single hash symbols, use: (?<!#)#(?!#)[^#]+#(?!#) See regex demo Explanation: * *(?<!#)#(?!#) - a # symbol that is not preceded with a # (due to the negative lookbehind (?<!#)) and not followed by a # (due to the negative lookahead (?!#)) *[^#]+ - one or more symbols other than # (due to the negated character class [^#]) *#(?!#) - a # symbol not followed with another # symbol. A: Instead of using * to match between zero and unlimited characters, replace it with +, which will only match if there is at least one character between the #'s. The edited regex should look like this: #.+?#. Hope this helps! Edit Sorry for the incorrect regex, I had not expected multiple hash signs. This should work for your sentence: #+.+?#+ Edit 2 I am pretty sure I got it. Try this: (?<!#)#[^#].*?#. It might not work as expected with triple hashes though. A: Try: [^#]?#.+#[^#]? The [^ character_group] construction matches any single character not included in the character group. Using the ? after it will let you match at the beginning/end of a string (since it matches the preceding character zero or one time). Check out the documentation here
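The UPDATE pattern from the first answer can be checked quickly (Python here purely for illustration; the question names no language):

```python
import re

# Single-#-delimited tokens with at least one non-# character inside;
# the lookarounds reject "##" and longer runs of hashes.
pattern = re.compile(r'(?<!#)#(?!#)[^#]+#(?!#)')

found = pattern.findall('#match# then ## then #mt#')  # -> ['#match#', '#mt#']
```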
doc_23534596
void readfile(char usrinput[]) // opens text file { char temp; ifstream myfile (usrinput); int il = 0; if (myfile.is_open()) { while (!myfile.eof()) { temp = myfile.get(); if (myfile.eof()) { break; } team[il] = temp; il++; } myfile.close(); } else { cout << "Unable to open file. (Either the file does not exist or is formatted incorrectly)" << endl; exit (EXIT_FAILURE); } cout << endl; } The user is required to create an input file that is formatted where the first column is a name, second column is a double, and third column is also a double. Something like this: Trojans, 0.60, 0.10 Bruins, 0.20, 0.30 Bears, 0.10, 0.10 Trees, 0.10, 0.10 Ducks, 0.10, 0.10 Beavers, 0.30, 0.10 Huskies, 0.20, 0.40 Cougars, 0.10, 0.90 I want to add a check currently, where if the user only enters 7 teams, it will exit out of the program, or if the user enters more than 8 teams, or double numbers. I've tried creating an if statement using a counter (counter != 8 and you break out of the loop/program) in another function where I split this into three different arrays but that did not work. I am now trying to accomplish this check within this function and if it's possible could someone guide me in the right direction? I appreciate all the help, and please let me know if I can provide more information to make things less vague. EDIT: we are not allowed to use vectors or strings A: I would recommend switching to a vector instead of an array, and getting a line at a time using getline. Also I'm not sure how you're returning the data from the file in your code. pseudocode: void readfile(char usrinput[], std::vector<string>& lines) // opens text file { ifstream myfile (usrinput); if (!myfile.good()) { cout << "Unable to open file. 
(Either the file does not exist or is formatted incorrectly)" << endl; exit (EXIT_FAILURE); } std::string line; while (myfile.good()) { getline(myfile, line); lines.push_back(line); } myfile.close(); // it would be safer to use a counter in the loop, but this is probably ok if (lines.size() != 8) { cout << "You need to enter exactly 8 teams in the file, with no blank lines" << endl; exit(1); } } Call it like this: std::vector<string> lines; char usrinput[] = "path/to/file.txt"; readfile(usrinput, lines); // lines contains the text from the file, one element per line Also, check this out: How can I read and parse CSV files in C++?
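Given the edit (no vectors or strings allowed), the 8-team check can also be done with plain C I/O by counting lines first — a sketch with an illustrative helper name, separate from the answer's vector-based version:

```cpp
#include <cassert>
#include <cstdio>

// Count newline-terminated lines in a file; returns -1 if it can't be opened.
// A final line without a trailing newline still counts as a line.
int count_lines(const char* path) {
    std::FILE* f = std::fopen(path, "r");
    if (!f) return -1;
    int lines = 0, c, last = '\n';
    while ((c = std::fgetc(f)) != EOF) {
        if (c == '\n') ++lines;
        last = c;
    }
    if (last != '\n') ++lines;
    std::fclose(f);
    return lines;
}
```

The caller can then bail out with an error message when count_lines(usrinput) != 8 before parsing any teams.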
doc_23534597
I know this is not a good way to search. for i in 1...255 { var str = "163.289.2." + "\(i)" var tempIP = Ping.getIPAddress(str) if(tempIP == true) { break; } } Now my problem is my custom class Ping.getIPAddress() takes 3 seconds to get the result for a given IP value. So for 255 searches it takes approx 765 seconds (12.75 minutes). I have a restriction that the search should complete in at most 2 minutes. So is there any way I can achieve this on iPhone using Swift? I must use only this custom function Ping.getIPAddress(), which gives true if the given IP address exists, else false. Please provide an example, reference, or approach to solve this issue. Would using NSOperationQueue with maxConcurrentOperationCount set to 10 be good? A: Synchronous approach If we perform each call to Ping.getIPAddress(str) only after the previous one has completed of course we need to wait for (3 seconds * 256) = 768 seconds. Asynchronous approach On the other hand we can perform several concurrent calls to Ping.getIPAddress(str). The fake Ping class This is a class I created to test your function. class Ping { class func getIPAddress(str:String) -> Bool { sleep(3) return str == "163.289.2.255" } } As you see the class does wait for 3 seconds (to simulate your scenario) and then returns true only if the passed ip is 163.289.2.255. This allows me to replicate the worst case scenario. Solution This is the class I prepared class QuantumComputer { func search(completion:(existingIP:String?) -> ()) { var resultFound = false var numProcessed = 0 let serialQueue = dispatch_queue_create("myQueue", DISPATCH_QUEUE_SERIAL) for i in 0...255 { dispatch_async(dispatch_get_global_queue(Int(QOS_CLASS_UTILITY.value), 0)) { var ip = "163.289.2." 
+ "\(i)" let foundThisOne = Ping.getIPAddress(ip) dispatch_async(serialQueue) { if !resultFound { resultFound = foundThisOne numProcessed++ if resultFound { completion(existingIP:ip) } else if numProcessed == 256 { completion(existingIP: nil) } } } } } } } The class performs 256 asynchronous calls to Ping.getIPAddress(...). The results from the 256 async closures is processed by this code: dispatch_async(serialQueue) { if !resultFound { resultFound = foundThisOne numProcessed++ if resultFound { completion(existingIP:ip) } else if numProcessed == 256 { completion(existingIP: nil) } } } The previous block of code (from line #2 to #9) is executed in my queue serialQueue. Here the 256 distinct closures run synchronously. * *this is crucial to ensure a consistent access to the variables resultFound and numProcessed; *on the other hand this is not a problem by a performance point of view since this code is pretty fast (just a bunch of arithmetic operations) Test And this is how I call it from a standard ViewController. class ViewController: UIViewController { var computer = QuantumComputer() override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. debugPrintln(NSDate()) computer.search { (existingIP) -> () in debugPrintln("existingIP: \(existingIP)") debugPrintln(NSDate()) } } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } } Conclusions Finally this is the output when I test it on my iOS simulator. Please remind that this is the worst case scenario (since the last checked number is a valid IP). 2015-09-04 20:56:17 +0000 "existingIP: Optional(\"163.289.2.255\")" 2015-09-04 20:56:29 +0000 It's only 12 seconds! Hope this helps.
doc_23534598
Replacing C:\DevTfs2010\Apps\Messaging\Subscribers\AutoTrackerProcessor\Main\AutoTrackerProcessor\SolutionItems\SolutionInfo.cs AutoTrackerProcessor.vssscc, AutoTrackerProcessor.sln have been automatically checked out for editing. Not only does it check out the solution, but it removes the SolutionInfo.cs file from the solution file. Here is the compare of the solution file: Server: Local: Why would it do this? Note: Our build process alters the SolutionInfo.cs file, but only that file. It doesn't alter the solution to remove SolutionInfo.cs.
doc_23534599
@GetMapping("/dashboard") public String Date() { Connection conn = null; List<Map<String, Object>> listOfDates = null; try{ Class.forName("com.mysql.jdbc.Driver"); System.out.println("Connecting to database...To retrieve DATE"); conn = DriverManager.getConnection(DB_URL,USER,PASS); String countQuery= "SELECT migrations.mignum, migration_states.state, migrations.projectleader, migrations.productiondate," + " migrations.installationtiers, migrations.targetplatform, migrations.apprelated, migrations.appversion FROM migrations, migration_states WHERE migrations.productiondate='2018-07-07'"; QueryRunner queryRunner = new QueryRunner(); listOfDates = queryRunner.query(conn, countQuery, new MapListHandler()); conn.close(); } catch (SQLException se) { se.printStackTrace(); }catch(Exception e){ //Handle errors for Class.forName e.printStackTrace(); }finally { DbUtils.closeQuietly(conn); } return new Gson().toJson(listOfDates); } which will return me a JSON object like below [ {"state":"Approval in Staging","mignum":146384,"projectleader":"James Rice","productiondate":"Jul 7, 2018","installationtiers":"Linux Web WL10","targetplatform":"Production","apprelated":"Content Only","appversion":""}, {"state":"Approval by QA in Staging","mignum":146451,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"UPS Pickup Point Attribute Injector","appversion":"18.7.1"}, {"state":"Approval by QA in Staging","mignum":146453,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"UPS Pickup Point DB Web Services","appversion":"18.7.1"}, {"state":"Migration to Mahwah","mignum":146485,"projectleader":"Keith Lucas","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Account Invoice Authorization","appversion":"18.07.01"}, {"state":"Migration to 
Mahwah","mignum":146487,"projectleader":"Keith Lucas","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL10","targetplatform":"Production","apprelated":"My Choice Enrollment Component","appversion":"18.07.03"}, {"state":"Migration to Mahwah","mignum":146489,"projectleader":"Keith Lucas","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"My Choice Enrollment WebApp","appversion":"18.07.01"}, {"state":"Migration to Mahwah","mignum":146492,"projectleader":"Keith Lucas","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL81","targetplatform":"Production","apprelated":"LASSO","appversion":"UTA_18.07.03"},{"state":"Approval by QA in Staging","mignum":146495,"projectleader":"Keith Lucas","productiondate":"Jul 7, 2018","installationtiers":"Linux Web WL10","targetplatform":"Production","apprelated":"LASSO","appversion":"18.07.03"},{"state":"Approval by QA in Staging","mignum":146496,"projectleader":"Keith Lucas","productiondate":"Jul 7, 2018","installationtiers":"Linux Web WL10","targetplatform":"Production","apprelated":"LASSO","appversion":"18.07.03"},{"state":"Approval by QA in Staging","mignum":146497,"projectleader":"Keith Lucas","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Precommissioning Authorization ","appversion":"18.07.09"},{"state":"Approval by QA in Staging","mignum":146498,"projectleader":"Keith Lucas","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Precommissioning Authorization ","appversion":"18.07.06"},{"state":"Approval by Dev Staging","mignum":146547,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Account Validation Component ","appversion":""},{"state":"Migration to Mahwah","mignum":146549,"projectleader":"Amardeep Grewal","productiondate":"Jul 7, 
2018","installationtiers":"Linux Web WL10","targetplatform":"Production","apprelated":"URL Alias","appversion":""},{"state":"Approval by QA in Staging","mignum":146565,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Quantum View SubServer","appversion":"9.3.0"}, {"state":"Approval by Dev Staging","mignum":146566,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Web Email Preference App","appversion":"v3.8.19"}, {"state":"Approval by QA in Staging","mignum":146569,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"View Bill","appversion":"4.0.2"}, {"state":"Migration to Mahwah","mignum":146578,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Claims History Component","appversion":"NA"}, {"state":"Approval by QA in Staging","mignum":146579,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Address Search Component ","appversion":"2.1.0"}, {"state":"Approval by Dev Staging","mignum":146581,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"DCOWS","appversion":" 8.03.01"}, .... ] Can anyone let me know,how to perform the following manipulations in Spring side or Angular side * *Find the count of each state so that I get Json array as follows [ {"state":"Approval by Dev Staging", "count": 12}, {"state":"Approval by QA in Staging", "count": 12}, ... 
] *Get the data present for each state E.g.: if state = "Approval by Dev Staging" get its details as a JSON object to display in front-end Angular if state = "Approval by QA in Staging" get its details as a JSON object to display in front-end Angular E.g.: [{ "name":"Approval by QA in Staging", "value":[ {"mignum":146547,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Account Validation Component ","appversion":"xxx"}, {"mignum":146547,"projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Account Validation Component ","appversion":"xxx"}, .... }] }, { "name":"Migrations to Staging", "value":[{ {"mignum":146547, "projectleader":"Eric Lok","productiondate":"Jul 7, 2018","installationtiers":"Linux BEA WL12","targetplatform":"Production","apprelated":"Account Validation Component ","appversion":"xxx"} .... }] }] .... etc A: For the second question, I was able to achieve it on the Angular side using the following code in dashboard.service.ts, calling it as below: getDateDashboard() { return this._http.get(this.baseUrl + '/dashboard', this.options).pipe(map((response: Response) => response.json())); } In dashboard.component.ts, I make a call to the above service as follows: for (let index = 0; index < chartdataa.length; index++) { if(chartdataa[index].state=="Approval by Dev. in Mahwah"){ this.displayDevMawah.data = chartdataa.filter(function(data:any){ return data.state == "Approval by Dev. in Mahwah";}); this.displayDevMawah.count = this.displayDevMawah.data.length; console.log("this.displayDevMawah.count-->", this.displayDevMawah.count); } if(chartdataa[index].state=="Approval by Dev. in Windward"){ this.displayDevWindward.data = chartdataa.filter(function(data:any){ return data.state == "Approval by Dev. 
in Windward";}); this.displayDevWindward.count = this.displayDevWindward.data.length; console.log("this.displayDevWindward.count-->", this.displayDevWindward.count); } ... } Now I need help to get the JSON data for my first question from the above result, i.e. getting the JSON data with the state and the count for each state, on the Angular side or the Spring side, as follows [ {"state":"Completed","count":240}, {"state":"Pending Approval by Development in Windward","count":2}, {"state":"Pending Approval by QA in Windward","count":1}, {"state":"Pending Migration to Mahwah","count":1}, {"state":"Pending Migration to Production","count":3}, ... ] A: public String url(@RequestBody String jsonData) { String messageId="0"; try { ObjectMapper objectMapper = new ObjectMapper(); Message ms = new Message(); //write your class name that uses database entities Message message = objectMapper.readValue(jsonData, Message.class); if (message != null) { String text= message.getMessage(); messageRepository.save(ms);//enter the repository used in the controller ms.getMessageId().toString(); } } catch (Exception ex) { String errorMsg = ex.getMessage(); if (errorMsg != null && !errorMsg.isEmpty()) return messageId + " did not save Error "; } return messageId; }
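For question 1 (and the grouping in question 2), a plain JavaScript sketch that works once the flat array from /dashboard is in hand, on either side of the wire (function names are mine):

```javascript
// Derive [{state, count}] from the flat rows (question 1).
function countByState(rows) {
  const counts = new Map();
  for (const row of rows) {
    counts.set(row.state, (counts.get(row.state) || 0) + 1);
  }
  return [...counts.entries()].map(([state, count]) => ({ state, count }));
}

// Derive [{name, value: [...rows]}] grouped by state (question 2).
function groupByState(rows) {
  const groups = new Map();
  for (const row of rows) {
    const { state, ...rest } = row;  // drop "state" from each detail row
    if (!groups.has(state)) groups.set(state, []);
    groups.get(state).push(rest);
  }
  return [...groups.entries()].map(([name, value]) => ({ name, value }));
}
```

Map preserves insertion order, so the output follows the order in which states first appear in the data.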