d10401
Yes, it shows only once because you load the ad only once, in onCreate(), so it can only be displayed once. After the ad has been shown, you have to load it again before you can display it after the next 7 views. So please create one method for that and call it every time before your ad is going to be displayed/shown. This is the example code:

    public void loadInterstitial() {
        // Instantiate an InterstitialAd object
        AdSettings.addTestDevice("350cf676a5848059b96313bdddc21a35");
        interstitialAd = new InterstitialAd(MainActivity.this, getString(R.string.ins_ads_id));
        interstitialAd.loadAd();
        // Set listeners for the Interstitial Ad
        interstitialAd.setAdListener(new InterstitialAdListener() {
            @Override
            public void onInterstitialDisplayed(Ad ad) {
                Log.v("OkHttp", ad.toString());
            }

            @Override
            public void onInterstitialDismissed(Ad ad) {
                Log.v("OkHttp", ad.toString());
            }

            @Override
            public void onError(Ad ad, AdError adError) {
                Log.v("OkHttp", ad.toString() + " " + adError.getErrorCode() + " " + adError.getErrorMessage());
            }

            @Override
            public void onAdLoaded(Ad ad) {
                Log.v("OkHttp", ad.toString());
                showInterstitial();
            }

            @Override
            public void onAdClicked(Ad ad) {
                Log.v("OkHttp", ad.toString());
            }

            @Override
            public void onLoggingImpression(Ad ad) {
                Log.v("OkHttp", ad.toString());
            }
        });
    }

    public void showInterstitial() {
        interstitialAd.show();
    }

And put this ad ID into your project's strings.xml:

    <string name="ins_ads_id">222591425151579_222592145151XXX</string>

Change your onPageScrolled code to this:

    @Override
    public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
        // NO-OP
        if (position % 7 == 0) {
            loadInterstitial();
        }
    }
d10402
Step by step:

Install gcc:

    sudo yum install gcc

Install LLVM & Clang:

    sudo yum install clang

Check the installed versions, and see their locations.

    clang --version

May say: clang version 3.4.2 (tags/RELEASE_34/dot2-final)

    which clang

/usr/bin/clang

    gcc --version

May say: gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11)

    g++ --version

May say: g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11)

    which gcc

/usr/bin/gcc

    which g++

/usr/bin/g++
d10403
I have created a behavior that sets the focus in the Loaded event, which guarantees the control is loaded.

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Controls.Primitives;
    using System.Windows.Input;
    using System.Windows.Interactivity;

    namespace xxx.Behaviors
    {
        public class SetControlFocusBehavior : Behavior<Control>
        {
            protected override void OnAttached()
            {
                base.OnAttached();
                if (AssociatedObject is Control)
                {
                    ((Control)AssociatedObject).Loaded += new RoutedEventHandler(SetControlFocusBehavior_Loaded);
                }
            }

            void SetControlFocusBehavior_Loaded(object sender, RoutedEventArgs e)
            {
                var control = sender as Control;
                if (control == null)
                {
                    return;
                }
                System.Windows.Browser.HtmlPage.Plugin.Focus();
                control.Focus();
            }

            protected override void OnDetaching()
            {
                base.OnDetaching();
                // Cast to Control (not Button) to match OnAttached
                ((Control)AssociatedObject).Loaded -= SetControlFocusBehavior_Loaded;
            }
        }
    }

To use it, simply drag and drop it onto the control using Blend.

    <TextBox x:Name="MyTextBox">
        <i:Interaction.Behaviors>
            <sg:SetControlFocusBehavior/>
        </i:Interaction.Behaviors>
    </TextBox>
d10404
Edited my previous answer for some obvious errors :) List.clone() doesn't work because it returns a shallow copy of the list; basically it just returns the reference to the original list. We don't want that here. What we want is a brand new array, without the same reference, to be returned. For that the best idea will always be to create a new array and run a for loop like this:

    String[] newArr = new String[list.length];
    int i = 0;
    for (String s : list) {
        newArr[i++] = s;
    }

If you want to know why this is the issue, refer to this answer :)
d10405
When types have to be specified

If the compiler cannot infer the type by itself, it must be specified:

    let numbers: Vec<_> = (0..10).collect();

Types also cannot be omitted from items. In particular, consts and statics look very much like let statements, but the type must be specified:

    const PI_SQUARED: i32 = 10;
    // Not valid syntax:
    const HALF_PI = 1.5;

When types cannot be specified

When the type is anonymous, it cannot be specified:

    fn displayable() -> impl std::fmt::Display {
        ""
    }

    fn main() {
        let foo = displayable();
        // Not valid syntax:
        let foo: impl std::fmt::Display = displayable();
    }

When types can be specified, but do not have to be

Most of the time, the type can be specified but doesn't have to be: the compiler can infer it from usage. In Rust, it is usually considered good practice to omit simple types when they can be inferred. Where exactly people decide that a type is no longer "simple" and must be specified is, however, very opinion-based and out of scope for StackOverflow.
d10406
It's considered functional if your if statement has a return value and does not have side effects. Suppose you wanted to write the equivalent of:

    if (x > 3) n = 3; else n = x;

Instead of doing that, you use the return value of the if expression:

    let n = (if x > 3 then 3 else x)

This hypothetical if is suddenly functional because it has no side effects; it only returns a value. Think of it as if it were the ternary operator in some languages:

    int n = x > 3 ? 3 : x;

A: An important point that hasn't been mentioned so far is the difference between if .. then .. else and if .. then without the else branch.

If in functional languages

The functional interpretation of if is that it is an expression that evaluates to some value. To evaluate the value of if c then e1 else e2 you evaluate the condition c and then evaluate either e1 or e2, depending on the condition. This gives you the result of the if .. then .. else.

If you have just if c then e, then you don't know what the result of the evaluation should be if c is false, because there is no else branch! The following clearly does not make sense:

    let num = if input > 0 then 10

In F#, expressions that have side effects, like printf "hi", return a special value of type unit. The type has only a single value (written as ()), so you can write an if which performs an effect in just a single case:

    let u = if input > 0 then printf "hi" else ()

This always evaluates to unit, but in the true branch, it also performs the side effect. In the false branch, it just returns a unit value. In F#, you don't have to write the else () bit by hand, but conceptually, it is still there. You can write:

    let u = if input > 0 then printfn "hi"

Regarding your additional example

The code looks perfectly fine to me. When you have to deal with an API that is imperative (like lots of the .NET libraries), then the best option is to use the imperative features like if with a unit-returning branch.
You can use various tweaks, like representing your data using option<string> (instead of just string with null or an empty string). That way, you can use None to represent missing data and anything else would be valid input. Then you can use some higher-order functions for working with options, such as Option.iter, which calls a given function if there is a value:

    maybeData |> Option.iter (fun data ->
        let byteData = System.Text.Encoding.Unicode.GetBytes(data)
        req.ContentLength <- int64 byteData.Length
        use postStream = req.GetRequestStream()
        postStream.Write(byteData, 0, byteData.Length)
    )

This is not really less imperative, but it is more declarative, because you don't have to write the if yourself. BTW: I also recommend using use if you want to Dispose of objects automatically.

A: There are two observations that can assist in the transition from imperative to functional ("everything is an expression") programming:

* unit is a value, whereas an expression returning void in C# is treated as a statement. That is, C# makes a distinction between statements and expressions. In F# everything is an expression.
* In C# values can be ignored; in F# they cannot, therefore providing a higher level of type safety. This explicitness makes F# programs easier to reason about and provides greater guarantees.

A: There's nothing wrong with if-then in the functional world. Your example is actually similar to let _ = expr since expr has side effects and we ignore its return value. A more interesting example is:

    if cond then expr

which is equivalent to:

    match cond with
    | true -> expr
    | false -> ()

if we use pattern matching. When the condition is simple or there is only one conditional expression, if-then is more readable than pattern matching. Moreover, it is worth noting that everything in functional programming is an expression. So if cond then expr is actually a shortcut for if cond then expr else (). If-then itself is not imperative; using if-then as a statement is an imperative way of thinking.
From my experience, functional programming is more about the way of thinking than concrete control flows in programming languages.

EDIT: Your code is totally readable. Some minor points are getting rid of the redundant do keyword, the type annotation and postStream.Dispose() (by using the use keyword):

    if not <| System.String.IsNullOrWhiteSpace(data) then
        let byteData = System.Text.Encoding.Unicode.GetBytes(data)
        req.ContentLength <- int64 byteData.Length
        use postStream = req.GetRequestStream()
        postStream.Write(byteData, 0, byteData.Length)
        postStream.Flush()

A: It's not the if-expression that's imperative, it's what goes in the if-expression. For example,

    let abs num =
        if num < 0 then -num
        else num

is a totally functional way to write the abs function. It takes an argument and returns a transformation of that argument with no side effects. But when you have "code that only does something, not return a value," then you're writing something that isn't purely functional. The goal of functional programming is to minimize the part of your program that can be described that way. How you write your conditionals is tangential.

A: In order to write complex code, you need to branch at some point. There's a very limited number of ways you can do that, and all of them require a logical flow through a section of code. If you want to avoid using if/then/else it's possible to cheat with loop/while/repeat, but that will make your code far less reasonable to maintain and read. Functional programming doesn't mean you shouldn't execute statements one after the other; it simply means that you shouldn't have mutable state. Each function needs to reliably behave the same way each time it is called. Any differences in how data is handled by it need to be accounted for by the data that is passed in, instead of some trigger that is hidden from whatever is calling the function.
For example, if we have a function foo(int, bool) that returns something different depending on whether bool is true or false, there will almost certainly be an if statement somewhere in foo(). That's perfectly legitimate. What is NOT legitimate is to have a function foo(int) that returns something different depending on whether or not it is the first time it is called in the program. That is a 'stateful' function and it makes life difficult for anyone maintaining the program.

A: My apologies for not knowing F#, but here is one possible solution in JavaScript:

    function $if(param) {
        return new Condition(param)
    }

    function Condition(IF, THEN, ELSE) {
        this.call = function(seq) {
            if (this.lastCond != undefined)
                return this.lastCond.call(
                    sequence(
                        this.if,
                        this.then,
                        this.else,
                        (this.elsif ? this.elsif.if : undefined),
                        seq || undefined
                    )
                );
            else
                return sequence(
                    this.if,
                    this.then,
                    this.else,
                    (this.elsif ? this.elsif.if : undefined),
                    seq || undefined
                )
        }
        this.if = IF ? IF : f => { this.if = f; return this };
        this.then = THEN ? THEN : f => { this.then = f; return this };
        this.else = ELSE ? ELSE : f => { this.else = f; return this };
        this.elsif = f => { this.elsif = $if(f); this.elsif.lastCond = this; return this.elsif };
    }

    function sequence(IF, THEN, ELSE, ELSIF, FINALLY) {
        return function(val) {
            if (IF(val))
                return THEN();
            else if (ELSIF && ELSIF(val))
                return FINALLY(val);
            else if (ELSE)
                return ELSE();
            else
                return undefined
        }
    }

The $if function returns an object with the if..then..else..elsif construct, using the Condition constructor. Once you call Condition.elsif() you'll create another Condition object, essentially creating a linked list that can be traversed recursively using sequence(). You could use it like:

    var eq = val => x => x == val;

    $if( eq(128) ).then( doStuff ).else( doStuff )
        .elsif( eq(255) ).then( doStuff ).else( doStuff ).call();

However, I realize that using an object is not a purely functional approach.
So, in that case you could forgo the object altogether:

    sequence(f, f, f, f,
        sequence(f, f, f, f,
            sequence(f, f, f)
        )
    );

You can see that the magic is really in the sequence() function. I won't try to answer your specific question in javascript. But I think the main point is that you should make a function that will run arbitrary functions under an if..then statement, then nest multiples of that function using recursion to create a complex logical chain. At least this way you will not have to repeat yourself ;) This code is just a prototype, so please take it as a proof of concept and let me know if you find any bugs.
d10407
The Get rows action of the Google Sheets connector doesn't support filtering. You can only specify how many rows you want returned, and how many rows you want to skip. E.g. if you have 100,000 rows in your sheet, you can easily get the rows between 90,001 and 90,200, should you wish to do so. While you can't use this connector to retrieve filtered data from Google Sheets, you can use the Filter array action to filter the retrieved data as you wish. You might still need to use the For each loop to retrieve and filter data in chunks.
d10408
Please check if you have included <stdlib.h> and <malloc.h>.
d10409
You only update translateY and dudeYSpeed if dudeJumping is true. You should update these variables in every frame, no matter what the input was.
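The idea above can be sketched in a small, self-contained way. This is not the asker's code; it is a hypothetical Python version of the same physics loop, with made-up names (update_frame, dude_y, dude_y_speed) mirroring the variables in the question: the input only starts a jump, while gravity and position update every frame regardless of input.

```python
# Hypothetical sketch of a per-frame update: gravity and position are
# applied every frame, not only while the jump key is held.
def update_frame(dude_y, dude_y_speed, jump_pressed,
                 gravity=1.0, jump_speed=-10.0, ground_y=0.0):
    """Advance the jump physics by one frame and return the new state.

    Screen coordinates: y grows downward, so "up" is negative.
    """
    if jump_pressed and dude_y == ground_y:
        dude_y_speed = jump_speed          # input only *starts* the jump
    dude_y_speed += gravity                # gravity acts every frame
    dude_y += dude_y_speed                 # position updates every frame
    if dude_y > ground_y:                  # landed: clamp to the ground
        dude_y, dude_y_speed = ground_y, 0.0
    return dude_y, dude_y_speed

# One jump: press once, then let the physics run with no further input.
y, v = update_frame(0.0, 0.0, True)        # frame 1: jump starts
for _ in range(30):                        # later frames: no input needed
    y, v = update_frame(y, v, False)
print(y, v)                                # back on the ground: 0.0 0.0
```

If the last three lines only ran while jump_pressed was true, the character would hang in mid-air as soon as the key was released, which is the symptom described in the question.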
d10410
You can use popup.focus_force(), but probably first check if root is in focus. That seems similar to cheating, though.

A: OK, I have managed to solve my problem by changing popup = Tk() to popup = Toplevel(), and popup.grab_set() works on the popup window. The main window can't be touched till the popup window is closed.
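A minimal sketch of the Toplevel + grab_set() approach described above; the widget names and labels are illustrative, not taken from the question:

```python
import tkinter as tk

def open_popup(root):
    """Open a modal popup over `root` (a tk.Tk instance)."""
    # A second Tk() would create an independent interpreter; Toplevel
    # shares the main window's event loop, so grab_set() can make it modal.
    popup = tk.Toplevel(root)
    popup.title("Popup")
    tk.Label(popup, text="Main window is blocked until I close.").pack(padx=20, pady=10)
    tk.Button(popup, text="Close", command=popup.destroy).pack(pady=10)
    popup.grab_set()   # route all events to the popup (modal behaviour)
    return popup
```

Calling open_popup(root) from a button's command leaves the main window unresponsive until the popup is destroyed, which releases the grab automatically.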
d10411
After working on it, we got the solution for this. Kindly try it like this; hope it will work:

    .ToolBar(commands => commands.Template(pp => GridCommands(pp)))
d10412
Try this: http://jsfiddle.net/b82x7rr2/2/

    $().ready(function(){
        $(".slider").slick();
        $(".close").on("click", function(){
            if( !$("body").hasClass("active") ){
                $("body").addClass("active");
                $(".slider").slick('slickRemove');
            }else{
                $("body").removeClass("active");
                $(".slider").slick('slickRemove');
            }
        });
    });

The $(".slider").slick('slickRemove'); removes the following slide from the screen, without reloading the current slide.

An even more concise fiddle: http://jsfiddle.net/b82x7rr2/3/

    $().ready(function(){
        $(".slider").slick();
        $(".close").click(function(){
            $("body").toggleClass("active");
            $(".slider").slick('slickRemove');
        });
    });

A: The only way I found to bypass this issue is

    $(".slider").slick('getSlick').slickGoTo(slick.slickCurrentSlide());

every time you toggle navigation. Here is the fiddle.
d10413
From a data.frame you can create an sf object:

    library(sf)

    df <- data.frame(
      name = c("a","b","c")
      , lon = 1:3
      , lat = 3:1
    )

    sf <- sf::st_as_sf( df, coords = c("lon","lat" ) )

    sf
    # Simple feature collection with 3 features and 1 field
    # geometry type:  POINT
    # dimension:      XY
    # bbox:           xmin: 1 ymin: 1 xmax: 3 ymax: 3
    # CRS:            NA
    #   name    geometry
    # 1    a POINT (1 3)
    # 2    b POINT (2 2)
    # 3    c POINT (3 1)

Then the list of POINTs is just the geometry column:

    sf$geometry
    # Geometry set for 3 features
    # geometry type:  POINT
    # dimension:      XY
    # bbox:           xmin: 1 ymin: 1 xmax: 3 ymax: 3
    # CRS:            NA
    # POINT (1 3)
    # POINT (2 2)
    # POINT (3 1)

    str( sf$geometry )
    # sfc_POINT of length 3; first list element:  'XY' num [1:2] 1 3

And if you truly want a list of POINT objects you can remove the sfc class:

    unclass( sf$geometry )
    # [[1]]
    # POINT (1 3)
    #
    # [[2]]
    # POINT (2 2)
    #
    # [[3]]
    # POINT (3 1)
d10414
Without delving into the why, you can simply format the cell in the loop:

    for (int i = 0; i < dataGridView1.Rows.Count - 1; i++)
    {
        for (int k = 0; k < dataGridView1.Columns.Count; k++)
        {
            worksheet.Cells[i + 2, k + 1] = dataGridView1.Rows[i].Cells[k].Value.ToString();
            worksheet.Cells[i + 2, k + 1].NumberFormat = "dd/mm/yyyy";
        }
    }
    workbook.Save();
d10415
As @dirk said, ABAP implicitly converts compared or assigned variables/literals if they have different types.

First, ABAP decides that the C-type literal is to be converted into type I so that it can be compared to the other I literal, and not the opposite, because there's a priority rule when you compare types C and I: https://help.sap.com/http.svc/rc/abapdocu_752_index_htm/7.52/en-US/abenlogexp_numeric.htm#@@ITOC@@ABENLOGEXP_NUMERIC_2

                   | decfloat16, decfloat34 | f | p | int8 | i, s, b |
    ---------------|------------------------|---|---|------|---------|
    string, c, n   | decfloat34             | f | p | int8 | i       |

(intersection of "c" and "i" -> bottom rightmost "i")

Then, ABAP converts the C-type variable into I type for doing the comparison, using the adequate rules given at https://help.sap.com/http.svc/rc/abapdocu_752_index_htm/7.52/en-US/abenconversion_type_c.htm#@@ITOC@@ABENCONVERSION_TYPE_C_1 : Source Field Type c -> Numeric Target Fields -> Target: "The source field must contain a number in mathematical or commercial notation. [...] Decimal places are rounded commercially to integer values. [...]"

Workarounds so that 23579235.43 is not implicitly rounded to 23579235, so that the comparison works as expected:

* either IF +'23579235.43' = 23579235. (the + makes it an expression, i.e. it corresponds to 0 + '23579235.43', which becomes a big Packed type with decimals, because of another rule named "calculation type")
* or IF conv decfloat16( '23579235.43' ) = 23579235. (decfloat16 and decfloat34 are big numbers with decimals)
d10416
There are several leaders in the ALM space, including VersionOne, Rally, ThoughtWorks Studios Mingle + Go + Twist, Jira Greenhopper, and others. These are all fundamentally iterative in nature, supporting Scrum+XP. There is a new crop of tools coming up that support Kanban, if that's your preference. The key, though, is to decide what approach you plan to take. Iterative or flow?

Beyond that, if you're using a continuous integration server - whether open source like Jenkins or commercial like Go - then that combined with an SCM system (git, for instance) gives you visibility into what's been produced and the ability to distribute that work.

Back to your specific challenge, it seems to me that iterative wouldn't suit you, since you have people coming and going as they enter and leave the bench. Mingle supports this quite nicely, as do a few of the others. In fact, I'd suggest that the leading methodologies don't really lend themselves to your situation, as you will have neither iterations nor flow, most likely.

A: Here's my TFS pitch.

Your developers/consultants need to be able to work in house, remotely or offline. That means local workspaces. TFS 2012 has that.

With the high rate of turnover, you need managers to be able to easily and quickly assign specific tasks to developers. With TFS you can create work items, break them into subtasks and easily assign them to any team member. When the developer begins working, you'll be able to see it and any check ins can be quickly associated (semi-automatically) with the subtask. So you'll know who did what task, and be able to see the exact code they implemented to accomplish it. If you have managers maintaining the product backlog, all a developer needs to do is select one of the tasks assigned to him, get the latest from the source and start developing. Minimal overhead for him/her.
With Web Access you can see/edit your entire product backlog, get burndown charts (and other reports), see who completed what and when, assign tasks to team members, change team members, etc. All without VS installed, so no need to have a license for a manager if they don't develop.

Finally, fully integrated automated builds will allow you to ensure that consultants don't break your source. Gated check-ins are great for this kind of team. The changeset is stored, and a build is run. If the changeset would break the build, the check-in is denied. You can also automate builds on the other side, post check-in.

Any file created outside of VS can be easily added to source control. Once the file is added, TFS monitors the file for changes and you can easily add the changes to a changeset. Once in source control, it's fully in source control and available to everyone.

You never really mentioned any database requirements, but the new SSDT is awesome for declarative SQL development. So far, I've not had to write a single ALTER script, which makes me very happy.

There is also fully integrated support for code reviews, build verification tests, automated deployments, architecture tools (with rules that can be enforced) and more. The rabbit hole goes pretty deep, but if you don't need it, none of it is forced on you.

So, the methodology I would suggest is a Kanban-style setup, with managers pushing tasks rather than developers pulling them. This way you can reduce the impact of your high turnover rates without overly micromanaging your consultants. You'll be able to easily give them a task, let them accomplish it, and have complete visibility of the work they perform.

I'm not sure how you gather your requirements, and how much input in the dev process your customers have, so it's hard to go into more detail. TFS supports storyboards associated with work items, so you can give detailed specs to your developers.
Also, the Feedback Manager can facilitate getting feedback on working software from product owners. You could go Scrum with defined sprints, but I think a lot of the overhead of sprint reviews and sprint planning may be a waste for you if your consultant turnover rate is high, and/or you don't need/want a lot of input from your consultants on user story breakdowns/requirements gathering.
d10417
All that can be done here is to output the expected context where the problem took place. Considering that the problem was caused by three:

    const three = null;
    `one${two}${three}four`

tag function arguments can be concatenated in an error message to the point where they start to make sense, e.g.

    Expected a number as an expression at position 2, got `null`,
    `one${...}${...}four`
              ^^^

A stack trace can also be retrieved if needed with new Error().stack. If more precision is required, a template engine should be used instead of template literals, because all necessary data is available during template compilation.

The options for a tag function are the same as for any other function. If a foo function was called with a bar variable as an argument that equals 1 (like foo(bar)), it may be impossible to figure out from inside foo that it was called with bar, because all we've got is the value 1. The fact that it was called like foo(bar) can only be found out if we have a stack trace and access to the source file, which we don't have under normal circumstances.

This method can be used in cases where feedback should be provided on the context, e.g. a test runner, because it is responsible for script loading and has access to source files.
d10418
Having 2 spaces in file names has nothing to do with failing to run the command. You are using an interpreted string literal:

    "file://C:\Users\1. Sample\2. Sample2"

In interpreted string literals the backslash character \ is special, and if you want your string to contain a backslash, you have to escape it; the escape sequence is 2 backslashes, \\:

    "file://C:\\Users\\1. Sample\\2. Sample2"

Or even better: use a raw string literal (and then you don't need to escape it):

    `file://C:\Users\1. Sample\2. Sample2`

That weird 2 character in your file name may also cause an issue. If you include it in Go source code, it will be interpreted and encoded as a UTF-8 string. You pass this as a parameter, but your OS (Windows) may use a different character encoding when checking file names, so it may not find the file. Remove that weird 2 character from the file name if correcting the string literal is not sufficient to make it work.

When you run the command from PowerShell, it works because PowerShell does not expect an interpreted string, and it uses the encoding that is also used to store file names.

Also, Cmd.CombinedOutput() (just like Cmd.Run() and Cmd.Start()) returns an error; be sure to check it, as it may provide additional info.
d10419
The model visualization seems incorrect: the main branch and skip connection are encapsulated inside your Res_Block definition, so they should not appear outside of the red Res_Block[0] box, but inside it.

A: I solved the problem by removing nn.Sequential in Res_Block's __init__ and adding self.l1, self.l2, ... instead. (I also removed some layers and added maxpool, but only after I solved the problem.)

    class Res_Block(nn.Module):
        def __init__(self, in_shape, out_ch, ks, stride, activation):
            super(Res_Block, self).__init__()
            self.l1 = nn.Conv1d(in_shape, out_ch, ks, stride, padding='same')
            self.l2 = deepcopy(activation)
            self.l3 = nn.BatchNorm1d(out_ch)
            self.l4 = nn.Conv1d(out_ch, out_ch, ks, stride, padding='same')
            self.l5 = nn.BatchNorm1d(out_ch)
            self.shortcut = nn.Conv1d(in_shape, out_ch, kernel_size=1, stride=1, padding='same')

        def forward(self, X):
            return self.l5(self.l4(self.l3(self.l2(self.l1(X))))) + self.shortcut(X)

The corresponding tensorboard structure is

The only remaining question is why that helped me solve the problem.
d10420
I would use :before to create a pseudo-element that you can style, as it is only used for presentation, so having an empty element would be unnecessarily verbose. Here's an example of how this could be done:

    .splitter {
        border: 1px solid #ddd;
        border-top: 0;
    }

    .splitter:before {
        content: ' ';
        display: block;
        position: relative;
        top: -5px;
        width: 100%;
        height: 8px;
        background-color: red;
        border-radius: 100px / 70px;
    }

    .myContent {
        padding: 0 20px;
    }

    <div class="splitter">
        <div class="myContent">
            <h1>React or Angular</h1>
            <p>Lorem ipsum Mollit qui sunt consequat deserunt exercitation veniam.</p>
        </div>
    </div>

Which can also be seen working on JS Bin: http://jsbin.com/hoqizajada/edit?html,css,output

A: I think it's not possible with a single div. However, you can place a div above it and trick with border-radius.

    .inbox {
        width: 200px;
    }

    #top-border {
        border: red 4px solid;
        border-radius: 4px;
    }

    #test {
        padding: 4px;
        height: 200px;
        background: #EEEEEE;
    }

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width">
        <title>JS Bin</title>
    </head>
    <body>
        <div id="top-border" class="inbox"></div>
        <div id="test" class="inbox"></div>
    </body>
    </html>

JSBIN: http://jsbin.com/pofolanuje/edit?html,css,output

A: Here is the complete code, hope this helps you:

    .container {
        width: 320px;
        height: 520px;
        background: #fff;
        -webkit-border-radius: 5px;
        -moz-border-radius: 5px;
        border-radius: 5px;
        border: 1px solid #e4e4e4;
    }

    .border-red {
        background: red;
        width: 100%;
        height: 10px;
        -webkit-border-radius: 5px;
        -moz-border-radius: 5px;
        border-radius: 5px;
    }

    <div class="container">
        <div class="border-red">
        </div>
    </div>
d10421
I am using it in a Yeoman scaffold with

* Sass 3.3.4 (Maptastic Maple)
* Compass 0.12.5 (Alnilam)

It is being compiled by "grunt-contrib-compass": "~0.7.0" and so far I have had no errors.

Edit: Also, those settings appear to be Stylus, not SCSS.
d10422
Comma-separated multiple-row VALUES inserts were only introduced in SQLite 3.7.11, and chances are you are running on a device with an older SQLite version.
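On engines older than 3.7.11, the portable workaround is one parameterized INSERT per row, ideally inside a single transaction. Here is a sketch of that idea using Python's sqlite3 module (the table and column names are made up, and an in-memory database stands in for the app's real one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the application's database
conn.execute("CREATE TABLE items (name TEXT, qty INTEGER)")

rows = [("apple", 3), ("pear", 5), ("plum", 2)]

# Instead of one multi-row "INSERT ... VALUES (..), (..), (..)" statement
# (which needs SQLite >= 3.7.11), run one parameterized INSERT per row.
# executemany does exactly that, and the "with conn:" block wraps all the
# inserts in a single transaction so they stay fast.
with conn:
    conn.executemany("INSERT INTO items (name, qty) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 3
```

The same pattern (loop of single-row inserts inside one transaction) applies when issuing SQL through Android's SQLiteDatabase as well.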
d10423
It is probably because you are using hyphens in your format string instead of the slashes that are in your data. Try:

    df['Birthyear'] = pd.to_datetime(df['Birthyear'], format='%m/%d/%y')
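A small runnable sketch of the point above, with made-up column values: the format string must match the separators actually present in the data.

```python
import pandas as pd

# Slash-separated dates, as described in the question
df = pd.DataFrame({"Birthyear": ["03/15/93", "11/02/87"]})

# A hyphenated format such as '%m-%d-%y' would fail to parse these strings;
# a format that matches the slashes in the data works:
df["Birthyear"] = pd.to_datetime(df["Birthyear"], format="%m/%d/%y")

print(df["Birthyear"].dt.year.tolist())  # [1993, 1987]
```

Note that with %y, two-digit years are pivoted (here 93 becomes 1993 and 87 becomes 1987), so %Y with four-digit years is less ambiguous when the data allows it.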
d10424
I managed to solve it by creating a measure which is a ratio. In my case, Table 1 had values in, say, rupees, and Table 2 had values in, say, litres. So I created a measure which divides the sum total of rupees by the sum total of litres, and when I added the measure to Values in the pivot matrix, it automatically divided the two tables.
d10425
Basically your loop works correctly, but it happens immediately, so you only see the 9 and not a sequence or anything. What you can do is use kivy.clock to schedule an interval to a callback function, specify the interval time and stop the schedule at a certain point. If you skipped the kivy.clock part, your GUI would be frozen until the function finished. Example:

    from kivy.app import App
    from kivy.core.window import Window
    from kivy.lang import Builder
    from kivy.uix.widget import Widget
    from kivy.clock import Clock
    from timeit import time

    Window.size = (300, 300)
    Builder.load_file('loop.kv')

    class MyLayout(Widget):
        loop_thread = None

        def callback_to_loop(self, dt):
            # dt is the interval-time
            # this is required because initially the text is an empty string
            try:
                current = int(self.ids.label_print.text)
            except:
                current = 0
            # simply add up the numbers
            self.ids.label_print.text = str(current + 1)
            # stop at a certain point and unschedule the thread
            if current == 11:
                Clock.unschedule(self.loop_thread)

        def loop(self):
            # schedule the thread with an interval-time = 1
            self.loop_thread = Clock.schedule_interval(self.callback_to_loop, 1)

    class LoopApp(App):
        def build(self):
            return MyLayout()

    if __name__ == '__main__':
        LoopApp().run()
d10426
If it is a top-level constraint, how about passing it as the additional query (an option for search:search)
d10427
You cannot assign to grades.get(i). I suggest you simplify your code as follows:

    public static List<Integer> gradingStudents(List<Integer> grades) {
        for (int i = 0; i < grades.size(); i++) {
            int grade = grades.get(i);
            if (grade >= 40 && grade % 5 < 3) {
                grades.set(i, grade + Math.abs(grade % 5 - 5));
            }
        }
        return grades;
    }

A: It is because you are trying to modify the list the way you would modify a list or dict in Python, as shown here. You have to use the built-in methods .set() and .get(). set replaces the element at the given index; add inserts an element at the given index by shifting the rest of the elements. You can understand more about the set and get methods here.

For this particular problem, as @Eran mentioned here, you can use the .set() method.
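For contrast, here is what the Python-style in-place modification the second answer alludes to looks like (the grade values are illustrative). Where Python allows subscript assignment directly, the Java List API needs list.set(index, value):

```python
# Python lists support subscript assignment directly...
grades = [38, 41, 73]

for i, grade in enumerate(grades):
    # same rounding rule as the Java snippet above: only grades >= 40 whose
    # remainder mod 5 is under 3 get bumped to the next multiple of 5
    if grade >= 40 and grade % 5 < 3:
        grades[i] = grade + abs(grade % 5 - 5)  # ...where Java needs grades.set(i, ...)

print(grades)  # [38, 45, 73]
```

The confusion in the question comes from expecting grades.get(i) to behave like a Python subscript, i.e. to be assignable; in Java it is just a method call that returns a value.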
d10428
To switch between installed JDKs:

* List Java alternatives:

    update-java-alternatives -l

* Find the line with the Java version that you want to select. The output of update-java-alternatives -l will look something like this if Java 8 was installed by sudo apt install openjdk-8-jdk, which also matches the 2nd screenshot in your question:

    java-1.11.0-openjdk-amd64      1111       /usr/lib/jvm/java-1.11.0-openjdk-amd64
    java-1.8.0-openjdk-amd64       1081       /usr/lib/jvm/java-1.8.0-openjdk-amd64

The first part of the second line there is java-1.8.0-openjdk-amd64.

* Set the first part of the line you want as the Java alternative:

    sudo update-java-alternatives -s java-1.8.0-openjdk-amd64

To sum it all up, the key thing to watch for here is that you have installed both of the Java versions that you want to switch between with apt, so that both Java versions will be recognized.
d10429
You can use cross apply() to unpivot your data first, then use a case expression with when exists() in your where clause to return 0 or 1 when it exists and compare that to @InclusionFlag.

    ;with FoodItem as (
        select x.Item
        from food as f
        cross apply (values (Fruit),(Vegetable),(Dairy),(Color)) as x (Item)
        where x.Item is not null
          and f.Date = @Date
    )
    select lt.Item
    from ListTable as lt
    where case when exists (
              select 1
              from FoodItem fi
              where lt.Item = fi.Item
          ) then 1 else 0 end = @InclusionFlag

A: If I got it right, Food rows (may be repeating) matching #ListTable:

    SELECT F.*
    FROM Food F
    INNER JOIN #ListTable LT1
        ON LT1.Item IN (F.Fruit, F.Vegetable, F.Dairy, F.Color)
    WHERE Date = @Date
d10430
The best option here is to deploy all prometheus servers with the option replicaExternalLabelName: "" More info here: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#prometheusspec
d10431
Considering you said that you can see the user location on first load, it's safe to assume you have setShowsUserLocation: set to YES. Have you implemented - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation in your view controller? If so, be sure to avoid providing an annotation view for the built in user location marker. - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation { //check annotation is not user location if([annotation isEqual:[mapView userLocation]]) { //bail return nil; } static NSString *annotationViewReuseIdentifer = @"map_view_annotation"; //dequeue annotation view MKPinAnnotationView *annotationView = (MKPinAnnotationView *)[mapView dequeueReusableAnnotationViewWithIdentifier:annotationViewReuseIdentifer]; if(!annotationView) { annotationView = [[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:annotationViewReuseIdentifer]; } //set annotation view properties [annotationView setAnnotation:annotation]; return annotationView; } A: Have you implemented the viewForAnnotation function? are you checking that you're not drawing (or failing to draw) a different sort of pin for the MKUserLocation? A: To show your current location with blue dot - (void)viewDidLoad { yourMapView.showsUserLocation=YES; } Navigate to your current location on button click -(IBAction)currntLctnClick:(id)sender { MKCoordinateRegion location = MKCoordinateRegionMakeWithDistance(yourMapView.userLocation.coordinate, 10000.0, 10000.0); [yourMapView setRegion:location animated:YES]; } EDIT Or if still your blue dot is not visible then just remove your MapView from xib and drag and drop it again. A: For all those who came here by searching for GoogleMaps solution - This is the property you need to set true. 
Swift 3.0 self.mapView.isMyLocationEnabled=true A: [map setShowsUserLocation:YES]; A: Just check that annotation is MKUserLocation while iterating in viewForAnnotation method and return nil if it is true. It works that way because MKUserLocation is also part of mapView.annotations array and you just need to pass it through without any modification. func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? { guard !(annotation is MKUserLocation) else { return nil } // your viewForAnnotation code... } A: Just to add, if you are using Interface Builder, select the Map View and in the Attributes section, make sure User Location is checked: Map View Attributes
d10432
You have not shown a complete program, so we have to guess at what your program actually is. It appears you declared int arr[9][9]; inside a function. In that case, it is not initialized, and the values of its elements are indeterminate. They might be zeros. They might not. They might even change from use to use. To initialize the array, change the definition to int arr[9][9] = { 0 };. If the array is defined as a variable-length array, then write a loop to set the entire array to zeros immediately after the definition: … int arr[n][n]; for (int i = 0; i < n; ++i) for (int j = 0; j < n; ++j) arr[i][j] = 0; Or write a loop to both set chosen elements to 1 and others to 0: for (int i = 0; i < n; ++i) for (int j = 0; j < n; ++j) arr[i][j] = i == j && i < n/2 ? 1 : 0; In the existing loop test, i<(n/2) & j>n/2 happens to work, but the conventional way to express this in C would be i < n/2 && j > n/2, because && is for the Boolean AND of two expressions, while & is for the bitwise AND of two integers. Additionally, there seems to be little point in testing both i and j. As the loop is written, testing just one of them will suffice to control the loop, unless you are intending the loop to handle non-square matrices in future code revisions.
d10433
That's not possible, COM doesn't have a mechanism to pass arguments to a constructor. This is most visible in your C++ snippet: you specified the GUID of the class with the __uuidof keyword as required, but you didn't pass a NumberClass argument. You can't. What goes wrong next is that you didn't check for an error; CreateInstance() returns an HRESULT, which would have told you that the method failed. The embedded interface pointer is still NULL, and that's going to blow your program with an access violation when you just keep motoring on. Start fixing this by first getting rid of that constructor in your C# class; it must have a default constructor to be usable by COM. Add a property of type NumberClass so you can set that value after the object is created. And of course improve the error handling in your C++ code; these kinds of failures become completely undiagnosable if you don't have any. You must check the return value of CreateInstance() and you must add try/catch blocks in the code that uses the object so you can catch the _com_error exceptions that will be thrown when a method call fails.
d10434
The docs you linked have a space between the colon and the values. signature_string = 'date' + ':' + formated_time + '\n' + 'x-mod-nonce' + ':' + nonce should be: signature_string = 'date' + ': ' + formated_time + '\n' + 'x-mod-nonce' + ': ' + nonce or (simpler): signature_string = 'date: ' + formated_time + '\n' + 'x-mod-nonce: ' + nonce Update I registered to see what is going on. I also ran your code on the example given in the documentation and saw that the signature is not entirely correct. In addition to the change I suggested above, a further change was necessary. After changing the line b64 = codecs.encode(codecs.decode(hex_code, 'hex'), 'base64').decode() to b64 = codecs.encode(codecs.decode(hex_code, 'hex'), 'base64').decode().strip() the signature of the example matched. After this I was able to connect to the API with my own keys. Here is the complete working code: import codecs import hashlib import hmac import secrets import urllib.parse from datetime import datetime from time import mktime from wsgiref.handlers import format_date_time import requests key = '<key>' secret = '<secret>' account_id = '<account id>' url = f'https://api-sandbox.modulrfinance.com/api-sandbox/accounts/{account_id}' # Getting current time now = datetime.now() stamp = mktime(now.timetuple()) # Formats time into this format --> Mon, 25 Jul 2016 16:36:07 GMT formatted_time = format_date_time(stamp) # Generates a secure random string for the nonce nonce = secrets.token_urlsafe(30) # Combines date and nonce into a single string that will be signed signature_string = 'date' + ': ' + formatted_time + '\n' + 'x-mod-nonce' + ': ' + nonce # Encodes secret and message into a format that can be signed secret = bytes(secret, encoding='utf-8') message = bytes(signature_string, encoding='utf-8') # Signing process digester = hmac.new(secret, message, hashlib.sha1) # Converts to hex hex_code = digester.hexdigest() # Decodes the signed string in hex into base64 b64 = 
codecs.encode(codecs.decode(hex_code, 'hex'), 'base64').decode().strip() # Encodes the string so it is safe for URL url_safe_code = urllib.parse.quote(b64, safe='') # Adds the key and signed response authorization = f'Signature keyId="{key}",algorithm="hmac-sha1",headers="date x-mod-nonce",signature="{url_safe_code}"' headers = { 'Authorization': authorization, # Authorisation header 'Date': formatted_time, # Date header 'x-mod-nonce': nonce, # Adds nonce 'accept': 'application/json', } response = requests.get(url, headers=headers) print(response.text)
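As an aside, the hex-decode/base64 round trip above can be collapsed into a single base64.b64encode call on the raw digest. A minimal sketch of the equivalence, using a made-up key and signing string (not real Modulr credentials):

```python
import base64
import codecs
import hashlib
import hmac

# Hypothetical key and signing string, purely for illustration
secret = b"example-secret"
message = b"date: Mon, 25 Jul 2016 16:36:07 GMT\nx-mod-nonce: abc123"

digester = hmac.new(secret, message, hashlib.sha1)

# Route used in the answer: digest -> hex -> bytes -> base64 -> strip newline
hex_code = digester.hexdigest()
via_hex = codecs.encode(codecs.decode(hex_code, 'hex'), 'base64').decode().strip()

# Direct route: base64-encode the raw digest
direct = base64.b64encode(digester.digest()).decode()

assert via_hex == direct
print(direct)
```

Both routes produce the same URL-safe-quotable signature string, so the strip() fix above is only needed because the codecs base64 codec appends a trailing newline.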
d10435
Add use Auth; at the top of the file; it may help you. Please show the complete error.
d10436
No other browser behaves like WebKit does here. Searching WebKit Bugzilla for "block formatting context margin" yields this very similar result: https://bugs.webkit.org/show_bug.cgi?id=19123 As a workaround, you can use the fix I proposed in a comment: removing the left margin on div.right sorts it out: http://jsfiddle.net/BJuYR/13/
d10437
My suspicion falls on the | combinator. The type of (element ~ ";" ~ list) is ~[~[Element, String], Array[Element]] and the type of element ~ ";" is ~[Element, String]. Thus when applying the | combinator to these parsers, it returns a Parser[U] where U is a supertype of T ([U >: T]). Here the type of T is ~[~[Element, String], Array[Element]] and the type of U is ~[Element, String]. So the most specific common supertype of Array[Element] and String is Serializable. Between ~[Element, String] and Element it's Object. That's why the type of | is ~[Serializable, Object]. So when applying the map operation, you need to provide a function ~[Serializable, Object] => U where U is Array[Element] in your case, since the return type of your function is PackratParser[Array[Element]]. Now the only possible match is: case obj ~ ser => //do what you want Now you see that the pattern you're trying to match in your map is fundamentally wrong. Even if you return an empty array (just so that it compiles), you'll see that it leads to a match error at runtime. That said, what I suggest is first to map each alternative separately: lazy val list: PackratParser[Array[Element]] = (element ~ ";" ~ list) ^^ {case a ~ ";" ~ b => Array(a) ++ b} | (element ~ ";") ^^ {case a ~ ";" => Array(a)} But the pattern you are looking for is already implemented using the rep combinator (you could also take a look at repsep but you'd need to handle the last ; separately): lazy val list: PackratParser[Array[Element]] = rep(element <~ ";") ^^ (_.toArray)
d10438
You can access all of an element's attributes document.getElementsByTagName("td")[0].id // returns the id attribute document.getElementsByTagName("td")[0].style // returns the style attribute You can access the id directly with: document.getElementById("myIdentifier") // returns the entire object A: You can use .id. For example if I had the HTML: <p id="test"></p> You can get the id attribute by doing: document.getElementsByTagName("p")[0].id; A: Here is an example: <html> <head> <script> function getElements() { var x=document.getElementsByTagName("input"); alert(x[0].id); } </script> </head> <body> <input id="hi" type="text" size="20"><br> <input type="text" size="20"><br> <input type="text" size="20"><br><br> <input type="button" onclick="getElements()" value="What is the ID for the first element?"> </body> </html>
d10439
If the ifconfig package is present in your image then you can always do kubectl exec <pod_name> ifconfig and the output might look like the following: eth0 Link encap:Ethernet HWaddr DA:6E:42:4F:87:EE inet addr:10.8.1.9 Bcast:0.0.0.0 Mask:255.255.255.0 inet6 addr: fe80::d86e:42ff:fe4f:87ee/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1460 Metric:1 RX packets:1282 errors:0 dropped:0 overruns:0 frame:0 TX packets:1296 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:122059 (119.1 KiB) TX bytes:122960 (120.0 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Now, to explain why docker stats won't work: Kubernetes doesn't use Docker networking, it uses CNI - Container Network Interface. A CNI plugin is responsible for inserting a network interface into the container network namespace (e.g. one end of a veth pair) and making any necessary changes on the host (e.g. attaching the other end of the veth into a bridge). It should then assign the IP to the interface and set up the routes consistent with the IP Address Management section by invoking the appropriate IPAM plugin. I strongly advise you to read @Ian Lewis's blog What are Kubernetes Pods Anyway? If you would like to know more about networking I recommend the great read Kubernetes Networking: Behind the scenes by @Nicolas Leiva. I hope this sheds some light on the subject.
d10440
You can see for yourself: the code is on CodePlex. A: You can read about what it is and how it works in the wiki docs. BlogEngine uses reflection to find and instantiate types attributed as "extension", and the extensions themselves use event listeners to communicate with the core library. The extension manager is basically an API and admin front-end for all extensions running on the blog. A: Maybe the Plugin Pattern; have a look here at how this works.
d10441
Change: tableStyle.MappingName = InventoryItems.ToString(); to tableStyle.MappingName = InventoryItems.GetType().Name;
d10442
---------- Revised The determination of Parent, Sub, Leaf is relatively simple: * *Row child id is null then Parent. *Row child id is not null and row has a child row then Sub. *Otherwise leaf. The only difficulty is 2 above. Now this can be fleshed out a couple of ways: * *With the same basic recursive CTE apply both lead and lag functions in the main query then check for nulls. But with a hierarchy this gets messy in a hurry. So let's reject that idea. *With a slight modification to the CTE the non-recursive select is Parents. In the recursive select, see if the current row has a child row, then Sub Category. Modifying the CTE is not terribly bad, but the select for it being a parent is still somewhat nasty looking. So the following abstracts this into a simple SQL function that returns the appropriate type (given that the id is known to be a child). create or replace function category_sub_or_leaf(id_in uuid) returns text language sql strict as $$ select case when exists (select null from category where parent_id = id_in ) then '(Sub Category)' else '(leaf level category)' end ; $$; And the resulting query as: with recursive heir (id, parent_id, name, short_description, sort_order, root_path, lev, cat_level) as ( select id, parent_id, name, short_description, sort_order,(name)::text, 1, '(Parent Category)' from category where parent_id is null union all select c.id, c.parent_id, c.name, c.short_description, c.sort_order, root_path || '>'||(c.name)::text, lev+1 , category_sub_or_leaf(c.id) from category c join heir h on c.parent_id = h.id ) --select * from heir; select (rpad(' ', 4*(lev-1),' ') || short_description) || ' ' || cat_level description from heir order by root_path,lev,sort_order; See revised demo: The demo also fleshes out Women's Foot Ware to provide an extra layer in Sub Category, i.e. 2 layers of sub category. But as I did not want that much typing, I modified the table to not require anything but the necessary columns.
---------- Orig What you are looking for is a recursive CTE. Here the first (non-recursive term) select gets the rows which do not have parent_id defined. Then the recursive term (after union) looks back to the previous row and retrieves the child rows. (A poor description at best.) The query builds the path back to the first parent and the current depth. These generated columns are then used to provide the order and indentation of the results. See Demo. with recursive heir as ( select id, parent_id, name, short_description, (name)::text path, 1 lev from category where parent_id is null union all select c.id, c.parent_id, c.name, c.short_description, path ||'>'||(c.name)::text, lev+1 from category c join heir h on c.parent_id = h.id ) select (rpad(' ', 4*(lev-1),' ') || short_description) short_description from heir order by path, lev ; Note: Demo built with Postgres 13, so the function gen_random_uuid() is substituted for uuid_generate_v4().
d10443
public class FindDuplicateNumberInArray { public static void main(String[] args) { int arr[] = { 11, 24, 65, 1, 111, 25, 58, 95, 24, 37 }; Arrays.sort(arr); String sortedArray = Arrays.toString(arr); System.out.println(sortedArray); // for (int i = 1; i < arr.length; i++) { // if (arr[i] == arr[i - 1]) { for (int i = 0; i < arr.length-1; i++) { if (arr[i] == arr[i + 1]) { System.out.println("Duplicate element from the given array is = " + arr[i]); } } } } To check for a duplicate number, you are running the for loop to the last element (say the nth position), but your if condition compares the last element with the (n+1)th element, which doesn't exist. You also need to check the 1st element, so start with i = 0. Alternatively, change the if (arr[i] == arr[i + 1]) condition to if (arr[i] == arr[i - 1]) and start the loop at i = 1. A: You have a very basic IndexOutOfBounds-Exception there. When accessing arrays, you have to provide an index. If that index is greater than array.length - 1, which is the last accessible index, you get an out of bounds exception. The same is true for lists. Because you compare the current (i) value to the next one (i + 1), you run out of bounds, because you count to i < arr.length. This means when i == arr.length - 1 you still add 1 to i, which is equal to arr.length, which is more than arr.length - 1.
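The sort-then-compare-neighbours idea above is language-agnostic; a short Python sketch of the same logic (illustrative, not the OP's Java):

```python
arr = [11, 24, 65, 1, 111, 25, 58, 95, 24, 37]
arr.sort()

# Stop at len(arr) - 1 so arr[i + 1] never goes out of bounds,
# mirroring the corrected loop condition above.
duplicates = [arr[i] for i in range(len(arr) - 1) if arr[i] == arr[i + 1]]
print(duplicates)  # -> [24]
```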
d10444
library(rvest) library(dplyr) base <- "http://www.encyclopedia-titanica.org/titanic-passengers-crew-lived/country-17/england.html" # I use the older rvest package...`html` might be `read_html` now.Link to git repo below: # https://github.com/hadley/rvest/blob/7d65d84e013b1bb3827ae0a2e05ddaed4875c112/R/parse.R data_df <- (html(base) %>% html_table)[[1]] knitr::kable(summary(data_df)) | | Name | Age | Class/Dept | Ticket | Joined | Job |Boat [Body] | | |:--|:----------------|:----------------|:----------------|:----------------|:----------------|:----------------|:----------------|:------------| | |Length:1190 |Length:1190 |Length:1190 |Length:1190 |Length:1190 |Length:1190 |Length:1190 |Mode:logical | | |Class :character |Class :character |Class :character |Class :character |Class :character |Class :character |Class :character |NA's:1190 | | |Mode :character |Mode :character |Mode :character |Mode :character |Mode :character |Mode :character |Mode :character |NA |
d10445
You can fetch data through the most recent checkin or checkout. For checkin: p_checkins = CheckIn.objects.all().order_by('-checkin')[0] For checkout: p_checkins = CheckIn.objects.all().order_by('-checkout')[0] Then get the participant name with: name = p_checkins.adult.first_name A: When you use (-), the results are returned in descending order, so the latest record comes first: p_checkins = CheckIn.objects.all().order_by('-checkin') or p_checkins = CheckIn.objects.all().order_by('-checkout') A: you can annotate the latest value via a subquery to the participant from django.db.models import OuterRef, Subquery checkin_q = CheckIn.objects.filter(adult=OuterRef('pk')).order_by('-checkin') queryset = Participant.objects.annotate(last_checkin=Subquery(checkin_q.values('checkin')[:1])) see https://docs.djangoproject.com/en/4.0/ref/models/expressions/#subquery-expressions A: Most of the answers so far are correct in several aspects. One thing to note is that if your check_in or check_out values (whichever you use) aren't chronological (and by "most recent", you mean the last added), you'll want to add a created_at datetime field with the auto_now option set to True, or order by the pk. In addition to the other answers provided and my comment above, you can also get the most recent check in by using the related manager on the participant object.
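For illustration, the descending-sort-then-take-first idea behind order_by('-checkin')[0] can be sketched in plain Python with hypothetical rows:

```python
from datetime import datetime

# Hypothetical check-in rows, just to show the ordering semantics
checkins = [
    {"adult": "Alice", "checkin": datetime(2022, 1, 1, 9, 0)},
    {"adult": "Bob", "checkin": datetime(2022, 1, 3, 8, 30)},
    {"adult": "Cara", "checkin": datetime(2022, 1, 2, 10, 15)},
]

# Descending sort on checkin, then take the first row -> most recent check-in
latest = sorted(checkins, key=lambda row: row["checkin"], reverse=True)[0]
print(latest["adult"])  # -> Bob
```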
d10446
a.popup.click will throw an error because a is not defined. You are trying to use the jQuery click method, so you need to create a jQuery object that references the element you are trying to select. jQuery("a.popup").click(your_function) A: You can achieve opening in a different tab in your case by simply specifying target="_blank" for your anchor tag: <a href="b.html" target="_blank" class="popup" > Holiday </a> A: Please try the following code; it works, and you can choose your own title and set whatever parameters suit you: $(document).ready(function(event) { $('a.popup').on('click', function(event) { var params = "menubar=yes,location=yes,resizable=yes,scrollbars=yes,status=yes"; event.preventDefault(); window.open($(this).attr('href'), "Title", params); }); }); A: Just change this part <a href="b.html" target="_blank" class="popup" > jsFiddle
d10447
The problem is that barplot automatically adds a space of 0.2 before each bar. This means you have two options. One is to get rid of the spaces, then subtract 0.5 from the value of each abline to make it match up to the centre of the bar: Rating <- factor(c(1, 2, 2, 2, 2, 2, 3, 3, 4, 5), levels = 1:7) barplot(table(Rating), main = "E1 - Ärger", xlab = "Rating", ylab = "Häufigkeit", space = 0) abline(v = seq(7) - 0.5, col = "red") The other option is to leave the spaces by starting at 0.7 and incrementing by 1.2 for each bar: barplot(table(Rating), main = "E1 - Ärger", xlab = "Rating", ylab = "Häufigkeit") abline(v = seq(0.7, 8, 1.2), col = "red") To make this easier, you could define a function that does the calculation for you: add_vlines <- function(x) abline(v = (x * 1.2) - 0.5, col = "red") So you could just do: add_vlines(1:7) to get the lines you are looking for.
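To see where 0.7 and 1.2 come from: with barplot's default bar width of 1 and space of 0.2, the k-th bar's centre sits at 0.2*k + (k - 1) + 0.5. A quick sanity check of that arithmetic (in Python, though any language works):

```python
# Centre of the k-th bar: k spaces of 0.2, k - 1 full-width bars, plus half a bar
centers = [round(0.2 * k + (k - 1) + 0.5, 1) for k in range(1, 8)]
print(centers)  # -> [0.7, 1.9, 3.1, 4.3, 5.5, 6.7, 7.9]
```

These are exactly the values produced by seq(0.7, 8, 1.2) in the second option above.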
d10448
Those two logged dates are being logged in the UTC timezone. But if your current timezone is west of that, then both dates are on March 8, 2016 in your timezone and the comparison is done in your timezone. Or, if you live east of that by at least 2.5 hours, then both dates are March 9, 2016. Example. If you live in the eastern USA (GMT-4 currently but GMT-5 on March 8), then those two dates are actually 2016-03-08 16:43:53 -0500 and 2016-03-08 19:00:00 -0500. Example. If you live in eastern Europe or Asia (say GMT+5), then those two dates would be 2016-03-09 02:43:53 +0500 and 2016-03-09 05:00:00 +0500. As you can see, those are on the same day so the comparison is correct.
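The effect is easy to reproduce with timezone-aware dates; a small Python sketch using the two UTC instants from this example (the same reasoning applies to NSDate/NSCalendar):

```python
from datetime import datetime, timedelta, timezone

# The two logged instants, explicitly marked as UTC
a = datetime(2016, 3, 8, 21, 43, 53, tzinfo=timezone.utc)
b = datetime(2016, 3, 9, 0, 0, 0, tzinfo=timezone.utc)

eastern = timezone(timedelta(hours=-5))  # eastern USA in early March (GMT-5)
plus5 = timezone(timedelta(hours=5))     # a GMT+5 locale

# Both instants fall on March 8 at GMT-5...
print(a.astimezone(eastern).day, b.astimezone(eastern).day)  # -> 8 8

# ...and both fall on March 9 at GMT+5
print(a.astimezone(plus5).day, b.astimezone(plus5).day)      # -> 9 9
```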
d10449
I think this is a neat solution: make the input wide enough and align its right edge to the right of the screen, so the cursor and content sit outside the visible area while the input is still clickable. A: The color of the cursor matches the color of the text. <input type="text" style="color: transparent"> If you actually want to show text in the input box (my, some people are needy), you can use a text shadow. <style> body, div, input { color: #afa; text-shadow: 0px 0px 1px #fff; } </style> (tested Firefox and Safari). A: Try this code, it worked for me: <div class="input-parent" style="margin-left: 30px;"> <input type="text" /> </div> .input-parent { width: 100px; height: 30px; overflow: hidden; } .input-parent input { width: 120px; height: 30px; margin-left: -20px; text-align: left; caret-color: transparent; } A: Just add this to the "inputselect" class: caret-color: transparent; A: I just used a blur to get rid of the cursor. $('input').mousedown(function(e){ e.preventDefault(); $(this).blur(); return false; }); A: I finally found a trick solution. <div class="wrapper"> <input type="text" /> </div> .wrapper { width: 100px; height: 30px; overflow: hidden; } .wrapper input { width: 120px; height: 30px; margin-left: -20px; text-align: left; } and in my situation, it works! A: This may be helpful (though not a complete response) - I used it to disable input in a recent project: document.getElementById('Object').readOnly = true; document.getElementById('Object').style.backgroundColor = '#e6e6e6'; document.getElementById('Object').onfocus = function(){ document.getElementById('Object').blur(); }; The user focuses on the input when they click in it; the blur() function then removes the focus. Between this, setting readOnly to true and setting the background color to grey (#e6e6e6), your user shouldn't get confused.
d10450
The compareTo and equals method implementations seem to be inconsistent; the error is telling you that for the same two objects equals gives true while compareTo does not produce zero, which is incorrect. I suggest you invoke compareTo from equals to ensure consistency, or otherwise define a custom Comparator<T>. Simply do: public abstract class ComparablePerson extends IDValueItem implements Comparable<ComparablePerson> { private int score; private String itemID,itemName; //setters and getters public int compareTo(ComparablePerson another) { if (score == another.getScore()) return this.getItemName().compareToIgnoreCase(another.getItemName()); else if ((score) > another.getScore()) return 1; else return -1; } @Override public boolean equals(Object o) { return o instanceof ComparablePerson && compareTo((ComparablePerson) o) == 0; } } A: ComparablePerson is abstract; the comparison method is probably overridden elsewhere... Can you post the client (which owns the collection) and the concrete classes? This code works well: public class ComparablePerson implements Comparable< ComparablePerson > { public ComparablePerson( int score, String name ) { _score = score; _itemName = name; } @Override public int compareTo( ComparablePerson another ) { int delta = _score - another._score; if( delta != 0 ) return delta; return _itemName.compareToIgnoreCase( another._itemName ); } @Override public boolean equals( Object o ) { return 0 == compareTo((ComparablePerson)o); } @Override public int hashCode() { return super.hashCode(); } private final int _score; private final String _itemName; public static void main( String[] args ) { List< ComparablePerson > oSet = new LinkedList<>(); oSet.add( new ComparablePerson( 5, "x" )); oSet.add( new ComparablePerson( 5, "y" )); oSet.add( new ComparablePerson( 5, "z" )); oSet.add( new ComparablePerson( 6, "x" )); oSet.add( new ComparablePerson( 6, "y" )); oSet.add( new ComparablePerson( 6, "z" )); Collections.sort( oSet ); System.err.println( "Ok" ); } }
d10451
The Where needs to be before the GroupBy var reqtwo = list1 .Where(c => c.country =="Chiana" || c.country == "Hongkong" || c.country == "Malaysia" || c.country =="Indonesia" || c.country =="India") .GroupBy(p => p.salesPerson) .Select(s => new { salesperson= s.Key, totalSale = s.Sum(p => p.qty*p.price) }) .OrderByDescending(p => p.totalSale) .ToList(); I've also changed your && to || as a country cannot be China & India at the same time (for example)
d10452
When libstdc++ is built its configure script tests your system to see what features are supported, and based on the results it defines (or undefines) various macros in c++config.h In your case configure determined that the POSIX nanosleep() function is not available and the macro is not defined. However, as you say, nanosleep() is available on your system. The reason it's not enabled by configure is that the checks for it don't even run unless you use the --enable-libstdcxx-time option (documented in the Configuration chapter of the libstdc++ manual, not the GCC configure docs) * *Is there a configuration option that should be used when building GCC to activate this macro by default, as suggested by this post? (I couldn't find any in the online documentation of the build process.) Yes, --enable-libstdcxx-time * *Is there really a relation between the nanosleep() function and the macro? The declaration of nanosleep() in ctime/time.h does not seem to depend on, or define, the macro. The declaration of glibc's function doesn't depend on libstdc++'s macro, no. But the macro tells libstdc++ whether to use the function or not. * *Is there any specific risk involved in defining the macro in my own header files, or as a -D option on the command line (as suggested in this related question)? What if I do this on a system where nanosleep() is not available, and how can I actually find out? It's naughty and is unsupported, but will work. The macro is an internal implementation detail that should be set by configure and not by users and changing the definition of the implementation's internal macros can break things. But in this case it won't because the only code that depends on it is in a header, no library code in libstdc++.so is affected. But it would be better to reinstall GCC and use the --enable-libstdcxx-time option, or if that's not possible edit your c++config.h to define the macro to true. 
If you define it on a different system where nanosleep() isn't available you'll get a compilation error when you #include <thread>. I have some ideas for improving that configuration, so nanosleep() and sched_yield() will be checked for by default, but I haven't had time to work on them yet. Update: I've committed some changes so that building GCC 4.8 without --enable-libstdcxx-time will still define std::this_thread::yield() (as a no-op) and will implement std::this_thread::sleep_for() and std::this_thread::sleep_until() using the lower resolution ::sleep() and ::usleep() functions instead of ::nanosleep(). It's still better to define --enable-libstdcxx-time though. Another update: GCC 4.9.0 is out and now defaults to automatically enabling nanosleep and sched_yield on platforms that are known to support them. There is no longer any need to use --enable-libstdcxx-time.
d10453
Well, I'm not sure you completely specified your problem, but luckily, even a very general variation of it can be solved fairly easily, considering that grep allows you to match both digit and non-digit characters. So to match "the last 3 consecutive digits that are not succeeded by a digit" in any text (even if it looks like "234_blablabla_lololol_343123_blablabla_abc.ext" or "blabla_987123", rather than "555-123.ext"), you could literally translate the quoted definition to a regular expression, and get "123", by using [0-9] to match a digit and [^0-9] to match a non-digit. The latter serves the purpose of narrowing your digits down to the last ones present in the text, by stating that only non-digits may (optionally) succeed them. E.g. (using grep -o to print only the matched part of the line): echo 234_blablabla_lololol_343999_blablabla_abc.txt | grep -o '[0-9][0-9][0-9][^0-9]*$' | grep -o '^...' 999 Of course, there are many other ways to do this. For instance, grep has a -P flag to enable the most powerful kind of regular expression syntax it supports, namely Perl regex. With this, you can avoid a lot of redundant code. E.g. with Perl regex, you can shorten repeats of the same regex unit ("atom"): [0-9][0-9][0-9] -> [0-9]{3} It even provides shorthands for common concepts such as "character classes". One of these is "decimal digit", a shorthand for [0-9], denoted as \d: [0-9]{3} -> \d{3} You could also use lookaheads and lookbehinds to fetch your 3 digits in one pass, alleviating the need of grepping for the first 3 characters afterwards (the grep -o '^...' part), but I can't be bothered to look up the particular syntax for that in grep right now.
Now sadly, I would have to think hard about how to generalize the above definition of "the last 3 consecutive digits that are not succeeded by a digit" into "the last 3 consecutive digits", meaning the above regular expression would not match file names where the last run of 3 digits is succeeded by a digit anywhere later in the file name, such as "blabla_12_blabla_123_blabla_56.ext", but I am optimistic that your naming convention does not allow that. A: You can use bash primitives to separate out the desired portion of the name. There's probably a slicker way to get the binary conversion of the decimal number, but I like dc: $ name=023-124.grf $ base=${name%.*} $ echo "$base" 023-124 $ suffix=${base##*-} $ echo $suffix 124 $ echo "$suffix" 2 o p | dc 1111100 $ new_name="${base%%-*}-$(echo $suffix 2 o p | dc).${name##*.}" $ echo "$new_name" 023-1111100.grf
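Going back to the regex above: the same pattern can be checked outside grep, for example with Python's re module, which uses the Perl-style syntax mentioned earlier:

```python
import re

# Last run of 3 digits followed only by non-digits to the end of the name
pattern = re.compile(r'(\d{3})\D*$')

print(pattern.search("234_blablabla_lololol_343999_blablabla_abc.txt").group(1))  # -> 999
print(pattern.search("555-123.ext").group(1))  # -> 123
```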
d10454
Try this: dataGridInputEmails.ItemsSource = parts; You shouldn't have to do dataGridInputEmails.Columns.Clear(); first.
d10455
Try this ngAfterViewInit(): void { window.addEventListener('keyboardDidShow', this.customScroll); } private customScroll() { this.content.scrollTo(0, 100); // 100 replaced by your value }
d10456
You forgot to close the quoted string: mysqli_query($con,"UPDATE users SET gold = gold + 10 WHERE username= '".$_SESSION['user']['username']."'"); A: mysqli_query($con,"UPDATE users SET gold = gold + 10 WHERE username= '".$_SESSION['user']['username']."'); You have forgotten to close the quote " It should be like this: mysqli_query($con,"UPDATE users SET gold = gold + 10 WHERE username= '".$_SESSION['user']['username']."'"); A: You're missing a closing " on your mysqli_query line: ."'); needs to be ."'"); If your editor doesn't make this very obvious with syntax highlighting, you should use a different editor. A: Instead of mysqli_query($con,"UPDATE users SET gold = gold + 10 WHERE username= '".$_SESSION['user']['username']."'); use mysqli_query($con,"UPDATE users SET gold = gold + 10 WHERE username= '".$_SESSION['user']['username']."'");
d10457
If you are working with a binary matrix, you can use colMeans: as.numeric(colMeans(m) > 0.5) # [1] 0 0 1 since colMeans(m) gives you the proportion of 1's in each column. A: A simple solution is to use indexing and which.max, as you had proposed. To make things simpler, it can be done using apply with a function that indexes via which.max. Thus, following your example matrix: apply(m,2,function (X) as.numeric(names(table(X)[which.max(table(X))])))
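For readers outside R, the same per-column majority idea can be sketched in plain Python; the sample matrix below is made up for illustration:

```python
from collections import Counter

def column_majority(m):
    # For each column, pick the most common value — the analogue of
    # table(X)[which.max(table(X))] in the R answer. With 0/1 data this is
    # equivalent to checking whether the column mean exceeds 0.5.
    return [Counter(col).most_common(1)[0][0] for col in zip(*m)]

m = [[1, 0, 1],
     [1, 0, 1],
     [0, 1, 1]]
print(column_majority(m))  # [1, 0, 1]
```

Note that on an exact tie Counter.most_common picks one of the tied values (first encountered), much as which.max returns the first maximum in R.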
d10458
Yes, Cloud Build keeps images between steps. You can imagine Cloud Build like a simple VM or your local computer, so when you build an image it is stored locally (like when you run docker build -t TAG .). All the steps run in the same instance, so you can re-use images built in previous steps in later steps. Your sample steps do not show this, but the following do: steps: - name: 'gcr.io/cloud-builders/docker' args: - build - -t - MY_TAG - . - name: 'gcr.io/cloud-builders/docker' args: - run - MY_TAG - COMMAND - ARG You can also use the previously built image as the builder of a step: steps: - name: 'gcr.io/cloud-builders/docker' args: - build - -t - MY_TAG - . - name: 'MY_TAG' args: - COMMAND - ARG All the images built are available in your workspace until the build is done (success or fail). P.S. The reason I asked where you read that the images are discarded after every step is that I've not read that in the docs (unless I missed something), so if you have the link to that, please share it with us.
d10459
It's a little bit tricky but by using String_Split it can be handled like this: SELECT ph.Id, ph.Phrase FROM ( SELECT Id, Phrase, value FROM Phrases CROSS APPLY STRING_SPLIT(Phrase, ' ') ) ph LEFT JOIN Keywords keys ON keys.Keyword = ph.value GROUP BY ph.Id, ph.Phrase HAVING SUM( CASE WHEN keys.Keyword IS NULL THEN 1 ELSE 0 END ) = 0; A: You may try the following to get the phrase that contains all of the keywords listed in keyword table: SELECT T.ID, T.Phrase FROM table_name T JOIN keyword K ON T.Phrase LIKE CONCAT('%',K.Keywords,'%') GROUP BY T.ID, T.Phrase HAVING COUNT(*) = (SELECT COUNT(*) FROM keyword) See a demo.
d10460
It sounds like your problem is not Git: your problem is people. Since these people are paying you, presumably, this seems like the right problem to have! What I would do is this: | * tag:r1 |\______________________ | \ \ \ | * feature1 WIP2 WIP3 | __/ | |/ | * tag:r2 * feature2 | ________________/ |/ * tag:r3 | master In words, I would do all my releases from master, and all my work on feature branches. When a feature is complete, tested, and a client wants it, only then would I merge to master, retest, and make another release. In this way, master is never in a development state; it's always "just released", or "just about to release" (or idle). If WIP3 ("Work in progress 3") takes a long time to develop, the graph would evolve like this: | * tag:r1 |\__________ | \ | WIP3 * tag:r2 | |\__________ | | \| | * merge * tag:r3 | |\__________ | | \| | * merge | | | * feature3 | __________/ |/ * tag:r4 | master (I've deleted the feature1 and feature2 branches, now they're merged, but you'd still see the multiple paths in the history.) If you discover that a client wants a bug fix release to some old version (maybe they paid for support, but didn't pay for new features?), then you can always make a release branch from a tag: | * tag:r1 | * tag:r2 |\_________ * tag:r3 \ | * bugfix * tag:r4 | | * tag:r2.1 master | release2.x_branch
d10461
I changed my Dao method to return Flow instead of LiveData: @Dao interface ShowDao { /** * Select all shows from the shows table. * * @return all shows. */ @Query("SELECT * FROM databaseshow") fun getShows(): Flow<List<DatabaseShow>> } And I could successfully run my tests such as: @Test fun givenServerResponse200_whenFetch_shouldReturnSuccess() { mockkStatic("com.android.sample.tvmaze.util.ContextExtKt") every { context.isNetworkAvailable() } returns true `when`(api.fetchShowList()).thenReturn(Calls.response(Response.success(emptyList()))) `when`(dao.getShows()).thenReturn(flowOf(emptyList())) val repository = ShowRepository(dao, api, context, TestContextProvider()) val viewModel = MainViewModel(repository).apply { shows.observeForever(resource) } try { verify(resource).onChanged(Resource.loading()) verify(resource).onChanged(Resource.success(emptyList())) } finally { viewModel.shows.removeObserver(resource) } } A: I would mock the repository and instruct Mockito through Mockito.when().doReturn() to return some data, and verify that the LiveData output is correct. Of course you could use an instance of ShowRepository; you will still need to instruct Mockito on how to return when the execution hits the mocked object. As before, you can change the behaviour in the same way. This line is wrong: verify(viewModel).shows. Verify can be called only on mocks. viewModel is an instance, hence, the moment the execution hits that line, your test will fail. For unit testing LiveData, you might need the following rule: @get:Rule var rule: TestRule = InstantTaskExecutorRule()
d10462
Try this: df = df[df['Properties'].str.endswith('stock')] If you want to try what you were trying, this would work: df = df[df['Properties'].str[-5:]=='stock']
d10463
The second link can be placed in different div-s. from selenium import webdriver from selenium.webdriver.common.keys import Keys import os browser = webdriver.Chrome(executable_path=os.path.abspath(os.getcwd()) + "/chromedriver") link = 'http://www.google.com' browser.get(link) # search keys search = browser.find_element_by_name('q') search.send_keys("python") search.send_keys(Keys.RETURN) # click second link for i in range(10): try: browser.find_element_by_xpath('//*[@id="rso"]/div['+str(i)+']/div/div[2]/div/div/div[1]/a').click() break except: pass
d10464
Intent intent = new Intent(this, DisplayMessageActivity.class); // sets target activity EditText editText = (EditText) findViewById(R.id.edit_message); // finds edit text String message = editText.getText().toString(); // pull edit text content intent.putExtra(EXTRA_MESSAGE, message); // shoves it in intent object Let's say this = Activity A, and your DisplayMessageActivity = Activity B. In this scenario, you're getting the edit text content from Activity A and you're communicating its content over to Activity B using the Intent object. Activity B, being interested in the value, must pull it out of the intent object, so it would do the following, usually in its onCreate(): Intent intent = getIntent(); String message = intent.getStringExtra(MainActivity.EXTRA_MESSAGE); This is generally the happy and common case. However, if Activity B had existed in the backstack, and Activity A wanted to clear the backstack to reach Activity B again, Activity B would have a new intent delivered in its onNewIntent(Intent theNewIntent) method, which you would have to override in Activity B to see this new intent. Or else you would be stuck dealing with the original intent Activity B had first received. UPDATED Sounds like you're interested in the internals of intents as well as how you get the "EXTRA_MESSAGE" part of the intent. Intents store key-value pairs, so if you want to get the key part, something like the following would work: for (String key : bundle.keySet()) { Object value = bundle.get(key); Log.d(TAG, String.format("%s %s (%s)", key, value.toString(), value.getClass().getName())); } A quick overview of the internals is that Intents use Android's IPC (Inter-process communication). Essentially, the only data types that are OS-friendly are primitive types (int, long, float, boolean, etc...), which is why putExtra() allows you to store primitives only.
However, putExtra() also tolerates parcelables, and any object defining itself as a Parcelable basically defines how the Java object trickles down to its primitives, allowing the intent to deal with those friendly data types once more, so no magic there. This matters because Intents act as wrappers for the Binder layer. The Binder layer is the underlying structure of an Intent object, and its implementation lives in the native layer of Android (the C/C++ parts). Effectively, the native layer handles the marshalling/unmarshalling back up to the Java layer, where your Activity B gets the data. I realize this simplification might be skipping too many details, so reference this pdf for better understanding. A: If I'm not mistaken, it's just a key-value pair https://searchenterprisedesktop.techtarget.com/definition/key-value-pair . It just indicates that this id (key) is 2 (value). From the other activity you can get the value by finding it by the key (id), i.e. in Activity B: Intent intent = getIntent(); String id = intent.getStringExtra("id"); REFER TO How do I get extra data from intent on Android?
d10465
There aren't many places you can store it though. If the data never changes, there's no downside in storing it in state, no changes means no re-renders. It sounds like there's quite a bit of it, so you can't store it in local storage due to limits. If you build a data set inside React by doing a bunch of API calls, revisit the API itself - it should return the data you care for, rarely a bunch of data segments you have to splice together yourself.
d10466
First you will want to create a batch file that successfully restores the database. There are plenty of examples when you search Google (MySQL database restore script). Then your button will call your batch file using the Process class, like in the example found HERE. You will of course use a .bat, not an .exe.
d10467
I think you have the right answer, Jay. From the question, I would draw the tree to look like this:

               o
              / \
feature 1:   o   o
             /\  /\
feature 2:  o o o o
...

So you start with a root value. Then you ask if the feature has been successfully met by the email or not, so it breaks into 2 nodes, Y or N. For Y (the left subtree), you ask if the email has met the 2nd feature, Y or N, and this breaks off into 2 more nodes; the same is repeated on the N side (the right subtree). Repeat for all features. We know that the worst-case depth of a perfect binary tree is log(n) [base 2]. So log(255) [base 2] is approximately 8, and that must be the max number of steps required. A: If your tree is a balanced binary tree, then the answer would be 8. The wording of the problem doesn't seem to call that out though. So that being said, we could make a tree that is a chain (only right children), which would make 255 the worst case.
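The arithmetic behind both answers can be checked directly; here is a quick Python sketch (255 distinguishable outcomes assumed from the question):

```python
import math

n = 255  # number of distinguishable outcomes assumed from the question

# A balanced binary tree over n outcomes needs ceil(log2(n)) yes/no questions.
steps = math.ceil(math.log2(n))
print(steps)  # 8

# A degenerate chain (only right children) needs up to n steps instead.
worst_chain = n
print(worst_chain)  # 255
```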
d10468
Here is an example using jQuery: $(document).ready(function() { var user_agent_header = navigator.userAgent; if(user_agent_header.indexOf('iPhone')!=-1 || user_agent_header.indexOf('iPod')!=-1 || user_agent_header.indexOf('iPad')!=-1){ setTimeout(function() { window.location="myApp://www.myapp.com";}, 25); } });
d10469
React assumes that objects set in state are immutable, which means that if you want to add or remove some element inside your array you should create a new one with the added element, keeping the previous array untouched: var a = [1, 2, 3]; var b = React.addons.update(a, {'$push': [4] }); console.log(a); // [1, 2, 3]; console.log(b); // [1, 2, 3, 4]; By using immutable objects you can easily check if the content of an object has changed: React.createClass({ getInitialState: function () { return { elements: [1, 2, 3] }; }, handleClick: function() { var newVal = this.state.elements.length + 1; this.setState({ elements: React.addons.update(this.state.elements, { '$push': [ newVal ] }) }) }, shouldComponentUpdate: function (nextProps, nextState) { return this.state.elements !== nextState.elements; }, render: function () { return ( <div onClick={this.handleClick}>{ this.state.elements.join(', ') }</div> ); } }); A: ReactJS state should preferably be immutable. Meaning that, every time render() is called, this.state should be a different object. That is: oldState == newState is false and oldState.someProp == newState.someProp is also false. So, for simple state objects, there's no doubt they should be cloned. However, if your state object is very complex and deep, cloning the entire state might impact performance. Because of that, React's immutability helper is smart and only clones the objects that it thinks it should clone. This is how you do it when you clone the state by yourself: onTextChange: function(event) { let updatedState = _.extend({}, this.state); // this will CLONE the state. I'm using underscore just for simplicity.
updatedState.text = event.text; this.setState(updatedState); } This is how you do it when you let React's immutability helpers determine which objects they should actually clone: onTextChange: function(event) { let updatedState = React.addons.update(this.state, { text: {$set: event.text} }); this.setState(updatedState); } The above example will perform better than the first one when the state is too complex and deep. A: React applications prefer immutability. There are two ways (from Facebook) to support immutability: one is to use immutable.js, which is a complete immutability library; the other is the immutability helper, which is a lightweight helper. You only need to choose one to use in one project. The only disadvantage of immutable.js is that it leaks itself throughout your entire application, including the Stores and View Components, e.g., // Stores props = props.updateIn(['value', 'count'], count => count + 1); // View Components render: function() { return <div>{this.props.getIn("value", "count")}</div>; } If you use the immutability helper, you can encapsulate the immutability operations at the place where updates happen (such as Stores and Redux Reducers). Therefore, your View Components can be more reusable. // Stores or Reducers props = update(props, { value: {count: {$set: 7}} }); // View Components can continue to use Plain Old JavaScript Objects render: function() { return <div>{this.props.value.count}</div>; }
d10470
The Host sFlow agent runs on Ubuntu and can generate sFlow using iptables and ulog/nflog. Alternatively, you could use Mininet to simulate a network and enable sFlow in the virtual switches. sflowtool is a command line sFlow decoder that you can use to check your parser.
d10471
"fields don't need to be indexed to enable doc values" means you can have "index": "no", for example: "my_field": { "type": "string", "index": "no", "fielddata": { "format": "doc_values" } } If you want to change format to doc_values, you need to update mapping and reindex your data.
d10472
So, there are a number of possible ways to enable the popup under certain conditions, like the zoom level of the view. In the example that I made for you popup only opens if zoom is greatest than 10. <!DOCTYPE html> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta name="viewport" content="initial-scale=1, maximum-scale=1,user-scalable=no" /> <title>PopupTemplate - Auto Open False</title> <link rel="stylesheet" href="https://js.arcgis.com/4.15/esri/themes/light/main.css" /> <script src="https://js.arcgis.com/4.15/"></script> <style> html, body, #viewDiv { padding: 0; margin: 0; height: 100%; width: 100%; } </style> <script> var populationChange; require(["esri/Map", "esri/views/MapView", "esri/layers/Layer"], function ( Map, MapView, Layer ) { var map = new Map({ basemap: "dark-gray" }); var view = new MapView({ container: "viewDiv", map: map, zoom: 7, center: [-87, 34] }); var highlightSelect = null; Layer.fromPortalItem({ portalItem: { id: "e8f85b4982a24210b9c8aa20ba4e1bf7" } }).then(function (layer) { map.add(layer); var popupTemplate = { title: "Population in {NAME}", outFields: ["*"], content: [{ type: 'fields', fieldInfos: [ { fieldName: "POP2010", format: { digitSeparator: true, places: 0 }, visible: false }, { fieldName: "POP10_SQMI", format: { digitSeparator: true, places: 2 } }, { fieldName: "POP2013", format: { digitSeparator: true, places: 0 } }, { fieldName: "POP13_SQMI", format: { digitSeparator: true, places: 2 } } ] }], }; layer.popupTemplate = popupTemplate; function populationChange(feature) { var div = document.createElement("div"); var upArrow = '<svg width="16" height="16" ><polygon points="14.14 7.07 7.07 0 0 7.07 4.07 7.07 4.07 16 10.07 16 10.07 7.07 14.14 7.07" style="fill:green"/></svg>'; var downArrow = '<svg width="16" height="16"><polygon points="0 8.93 7.07 16 14.14 8.93 10.07 8.93 10.07 0 4.07 0 4.07 8.93 0 8.93" style="fill:red"/></svg>'; var diff = feature.graphic.attributes.POP2013 - 
feature.graphic.attributes.POP2010; var pctChange = (diff * 100) / feature.graphic.attributes.POP2010; var arrow = diff > 0 ? upArrow : downArrow; div.innerHTML = "As of 2010, the total population in this area was <b>" + feature.graphic.attributes.POP2010 + "</b> and the density was <b>" + feature.graphic.attributes.POP10_SQMI + "</b> sq mi. As of 2013, the total population was <b>" + feature.graphic.attributes.POP2013 + "</b> and the density was <b>" + feature.graphic.attributes.POP13_SQMI + "</b> sq mi. <br/> <br/>" + "Percent change is " + arrow + "<span style='color: " + (pctChange < 0 ? "red" : "green") + ";'>" + pctChange.toFixed(3) + "%</span>"; return div; } view.popup.autoOpenEnabled = false; // <- disable view popup auto open view.on("click", function (event) { // <- listen to view click event if (event.button === 0) { // <- check that was left button or touch console.log(view.zoom); if (view.zoom > 10) { // <- zoom related condition to open popup view.popup.open({ // <- open popup location: event.mapPoint, // <- use map point of the click event fetchFeatures: true // <- fetch the selected features (if any) }); } else { window.alert(`Popup display zoom lower than 10 .. Zoom in buddy! .. (Current zoom ${view.zoom})`); } } }); }); }); </script> </head> <body> <div id="viewDiv"></div> </body> </html> Related to displaying only one result in the popup, you could hide the navigation like this: view.popup.visibleElements.featureNavigation = false; Now, if what you actually want is to get only one result, then I suggest using the view method hitTest, which only gets the topmost result of the layers. You could do this inside the click handler and only open the popup if there is a result from the desired layer. For this you need to set fetchFeatures: false, and set the features with the hit one. Just as a comment, it could be weird or confusing to the user to retrieve just one of all the possible features; I think that may be the problem you have with the content.
d10473
You can use the map method to render a component for each element of your array. As for the horizontal scrolling, there are several ways to do this, but one way I've found that works well is to place the items in a flex-box container and put that container inside another container that scrolls horizontally. Also with this approach you'll need to set width: fit-content on the row component so that it expands outside of the parent component. Let's say your data is stored in an array called items and the component for each is GridItem. Then we can do this: function App() { const items = [{name: "One"}, {name: "Two"}, {name: "Three"}, {name: "Four"}, {name: "Five"}, {name: "Six"}]; return ( <div className="scroll-wrapper"> <div className="row"> {items.map(e => <GridItem name={e.name}/>)} </div> </div> ) } function GridItem({ name }) { return ( <div className="grid-item"> {name} </div> ) } ReactDOM.render(<App/>, document.querySelector("#root")); .scroll-wrapper { overflow-x: scroll; padding-bottom: 1rem; /* for the scroll bar */ } .row { display: flex; flex-direction: row; width: fit-content; } .grid-item { width: 30vw; padding: 0.25rem 0.5rem; border: 1px solid black; margin-right: 1rem; } <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script> <div id="root"/>
d10474
Docker version 1.10 was not out when Nexus 3.0m7 was released. We are working on adding support for it now. This specific issue is being tracked here: https://issues.sonatype.org/browse/NEXUS-9766 UPDATE: This issue/ticket is resolved now in Nexus Repository Manager 3.0.0-03. For upgrade instructions see https://support.sonatype.com/hc/en-us/articles/217967608-How-to-Upgrade-Nexus-3-Milestone-m7-to-3-0-0-Final.
d10475
The other way is that your app can send email as anything@appid.appspotmail.com, where 'appid' is its App ID. As you say, you can also send email as the logged in user - but only on requests made by that user - so sending mail as them from the Task Queue is out. A: You might want to look into a 3rd party E-Mail provider. We use http://postmarkapp.com/ for our AppEngine projects (via huTools.postmark) and we love it.
d10476
In my experience, you cannot. If you try to (e.g., using the method George describes), you get the following error: error: must specify a matching key with non-equal value in Selector for api see 'kubectl rolling-update -h' for help. The above is with kubernetes v1.1. A: Sure you can. Try this command: $ kubectl rolling-update <rc name> --image=<image-name>:<tag> If your image:tag has been used before, you may like to do the following to make sure you get the latest image on kubernetes. $ docker pull <image-name>:<tag>
d10477
To me, it sounds like you have set up Gatekeeper to only protect your backend resources? Otherwise, the redirect would happen when you try to access your frontend. If you are running your frontend as a separate application you need to obtain a Bearer token from Keycloak and pass it along in your ajax request. You can use the JS adapter to do that: https://www.keycloak.org/docs/latest/securing_apps/#_javascript_adapter In that case, you should also configure Gatekeeper with the --no-redirect option, so that it denies any unauthorized request.
d10478
You can get the values set through a class only after their computation. var oElm = document.getElementById ( "myStyle" ); var strValue = ""; if(document.defaultView && document.defaultView.getComputedStyle) { strValue = document.defaultView.getComputedStyle(oElm, null).getPropertyValue("-moz-opacity"); } else if(oElm.currentStyle) // For IE { strValue = oElm.currentStyle["opacity"]; } alert ( strValue ); A: The problem is that element.style.opacity only stores values that are set inside the element's style attribute. If you want to access style values that come from other stylesheets, take a look at quirksmode. Cheers, A: I suggest you take a look at jQuery and some of the posts at Learning jQuery; it will make doing things like this very easy. A: Opacity should be a number rather than a boolean. Is it working in any other browser? A: this link helps: http://www.quirksmode.org/js/opacity.html function setOpacity(value) { testObj.style.opacity = value/10; testObj.style.filter = 'alpha(opacity=' + value*10 + ')'; } opacity is for Mozilla and Safari, filter is for Explorer. value ranges from 0 to 10.
d10479
When the Application is compiled, all *.java files in referenced Library Projects are compiled into the Application's bin/classes folder, and the obfuscation step is done using the .class files in this folder. This means that all referenced Library Projects are automatically obfuscated when you obfuscate your application.
d10480
I built the project with npm run build to find out how large the output is with minimization enabled. Then I edited node_modules/react-scripts/config/webpack.config.js and changed line 189 (the line with the minimize property) from: ... optimization: { minimize: isEnvProduction, minimizer: [ ... to: ... optimization: { minimize: false, minimizer: [ ... to disable minimization. Then I built the project again to find out its size without minimization. You can compare the built file sizes manually to find out their difference, but you will also get a nice output to the terminal when building the second time: File sizes after gzip: 613.36 KB (+504.48 KB) build\static\js\2.0ddf8239.chunk.js 60.24 KB (+20.01 KB) build\static\js\main.8e9dd59c.chunk.js 4.73 KB (+457 B) build\static\css\main.aaaa4d7d.chunk.css 1.66 KB (+933 B) build\static\js\runtime~main.7f8cc4df.js Notice the differences stated in parentheses.
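If you want to check those "file sizes after gzip" numbers yourself, the measurement is easy to reproduce by hand; here is a small Python sketch (the commented-out paths are hypothetical placeholders for your own build artifacts):

```python
import gzip

def gzipped_size(path):
    # Report a file's size after gzip compression — roughly what
    # create-react-app prints as "File sizes after gzip".
    with open(path, "rb") as f:
        return len(gzip.compress(f.read()))

# Example usage: compare a minified and an unminified build artifact.
# (paths are hypothetical)
# print(gzipped_size("build/static/js/main.minified.js"))
# print(gzipped_size("build/static/js/main.unminified.js"))
```

The numbers will not match CRA's output byte-for-byte (compression level and framing differ slightly), but the relative difference between the two builds is what matters here.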
d10481
Not sure you are asking the right question. You need the current value not to be changed and no inserts to happen. With SERIALIZABLE you may not need (updlock). I am not positive about this answer; you should do your own testing. SET TRANSACTION ISOLATION LEVEL SERIALIZABLE Begin Tran DECLARE @i int = (SELECT MAX(ID) FROM D with (updlock)) Print @i ; Insert into D values ((@i + 1), 'ANAS') SELECT * FROM D --COMMIT Rollback An identity or sequence number is a better approach.
d10482
try this: Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load Dim g As Graphics = Me.CreateGraphics() MessageBox.Show("ScreenWidth:" & Screen.PrimaryScreen.Bounds.Width & " ScreenHeight:" & Screen.PrimaryScreen.Bounds.Height & vbCrLf & " DpiX:" & g.DpiX & " DpiY:" & g.DpiY) End Sub
d10483
I eventually solved this by changing my query to rely on directed relationships only. This brought the performance down to sub-second for very large data sets. The query ended up looking like: MATCH (p:Person {id: 100}) - [h:HAS_SKILL] -> (s:Skill) - [r:IS_IN_CAT*..] -> (parentSkill:Skill) <- [r:IS_IN_CAT*..] - (s2:Skill) <- [h2:HAS_SKILL] - (p2:Person) The introduction of the parentSkill allowed the relationships to stay directional.
d10484
Just needed to give the modal a unique id by adding ['. $row["mapid"].'] next to #assign. <td> <button class="btn btn-primary" data-toggle="modal" data-target="#assign['. $row["mapid"].']" value"">Assign</button> </td> <div class="modal fade" id="assign['. $row["mapid"].']" tabindex="-1" role="dialog" aria-labelledby="assignLabel" aria-hidden="true"> <div class="modal-dialog" role="document"> <div class="modal-content">
d10485
Call the repaint() method to force the JTextField to repaint itself using the current options set.
d10486
You can add a class: <th class='notexport'>yourColumn</th> then exclude by class: $('#reservation').DataTable({ dom: 'Bfrtip', buttons: [ { extend: 'excel', text: 'Export Search Results', className: 'btn btn-default', exportOptions: { columns: ':not(.notexport)' } }] }); A: Try using CSS selector that excludes last column for columns option. $('#reservation').DataTable({ dom: 'Bfrtip', buttons: [ { extend: 'excel', text: 'Export Search Results', className: 'btn btn-default', exportOptions: { columns: 'th:not(:last-child)' } } ] }); A: I just thought I'd add this in because the accepted answer only works to exclude if you are not already including something else (such as visible columns). In order to include only visible columns except for the last column, so that you can use this in conjunction with the Column Visibility Button, use $('#reservation').DataTable({ dom: 'Bfrtip', buttons: [ { extend: 'excel', text: 'Export Search Results', className: 'btn btn-default', exportOptions: { columns: ':visible:not(:last-child)' } }] }); And if you want to explicitly add your own class: $('#reservation').DataTable({ dom: 'Bfrtip', buttons: [ { extend: 'excel', text: 'Export Search Results', className: 'btn btn-default', exportOptions: { columns: ':visible:not(.notexport)' } }] }); A: for Excel, csv, and pdf dom: 'lBfrtip', buttons: [ { extend: 'excelHtml5', text: '<i class="fa fa-file-excel-o"></i> Excel', titleAttr: 'Export to Excel', title: 'Insurance Companies', exportOptions: { columns: ':not(:last-child)', } }, { extend: 'csvHtml5', text: '<i class="fa fa-file-text-o"></i> CSV', titleAttr: 'CSV', title: 'Insurance Companies', exportOptions: { columns: ':not(:last-child)', } }, { extend: 'pdfHtml5', text: '<i class="fa fa-file-pdf-o"></i> PDF', titleAttr: 'PDF', title: 'Insurance Companies', exportOptions: { columns: ':not(:last-child)', }, }, ] A: You can try this code, I copied it from PDF button. E.g columns: [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ]. 
Check this documentation: https://datatables.net/extensions/buttons/examples/html5/columns.html buttons: [ { extend: 'excelHtml5', exportOptions: { columns: [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ] } }, { extend: 'pdfHtml5', exportOptions: { columns: [ 0, 1, 2, 3, 4, 5, 6, 7, 8] } }, 'colvis' ] A: Javascript Part: $(document).ready(function() { $('#example').DataTable( { dom: 'Bfrtip', buttons: [ { extend: 'print', exportOptions: { // columns: ':visible' or columns: 'th:not(:last-child)' } }, 'colvis' ], columnDefs: [ { targets: -1, visible: false } ] } ); } ); And the js files to be included: https://code.jquery.com/jquery-3.3.1.js https://cdn.datatables.net/1.10.19/js/jquery.dataTables.min.js https://cdn.datatables.net/buttons/1.5.2/js/dataTables.buttons.min.js https://cdn.datatables.net/buttons/1.5.2/js/buttons.print.min.js https://cdn.datatables.net/buttons/1.5.2/js/buttons.colVis.min.js Hope this was helpful for you. Thanks.
d10487
Select2 is a jQuery plugin which implements a dropdown with CSS & JavaScript; it is not a native dropdown implemented purely with select. For such a CSS dropdown, the visible options do not come from the select, and the select is either invisible or visible but with a very small size (like 1 * 1), to prevent the user from operating it. The code example below was tested on the demo from the Select2 site: describe('handsontable', function(){ it('input text into cell', function(){ browser.ignoreSynchronization = true; browser.get('https://select2.org/selections'); browser.sleep(3000); // click to make the input box and options display out element(by.css('select.js-example-basic-multiple-limit + span' + ' .select2-selection--multiple')).click(); browser.sleep(1000); element(by.css("select.js-example-basic-multiple-limit + span input")) .sendKeys('Hawaii'); element(by.xpath("//li[@role='treeitem'][text()='Hawaii']")).click(); browser.sleep(3000); }); })
d10488
One way would be to keep a counter that's incremented on every call, and call the recursive setTimeout only when that counter is below 60: let count = 0; function doWork() { $('#more').load('exp1.php'); count++; if (count < 60) setTimeout(doWork, 1000); } doWork(); Note that there's no need for the repeater variable since you aren't using clearTimeout anywhere. Or, using setInterval and clearInterval: const interval = setInterval(doWork, 1000); setTimeout(() => clearInterval(interval), 59500); doWork(); A: I would pass in a value to the setTimeout duration as a multiplier. Assuming the lifecycle of the function exists beyond this duration, it would simply increment the value passed in. So, you would multiply 1000 by an integer that would increment inside the function itself. Starting from 1 and going to 60. Then wrap the instructions inside an if statement. If the value of the increment integer is less than 60, do work. A: Just try this one, var count = 60; function doWork() { var interval = setInterval(function(){ $('#more').load('exp1.php'); count--; if (count ==0) clearInterval(interval); },1000); } doWork();
d10489
I think I figured this out. Using the "afterAdd" and "beforeRemove" callbacks I can trigger "reArrangeTiles" at the appropriate moment. An important note here is that when the "beforeRemove" binding is used, the corresponding node needs to be explicitly removed. Once the node is removed I can invoke the "reArrangeTiles" function: this.hideTile = function (elem) { if (elem.nodeType === 1) { $(elem).fadeOut(function () { $(elem).remove(); self.reArrangeTiles(); }); } }; Updated example
d10490
You can always access the $_SESSION variable. However, I'm not sure if it's compatible with CodeIgniter's Session library. session_start(); if(isset($_SESSION['lang'])) { // define your routing here } A: Up to the level I understand, you're using CodeIgniter's session data as an identity to indicate lang and its associated data. CodeIgniter uses a similar way of superglobal session maintenance, using session_start() and $_SESSION. But it is advised not to use session data anywhere other than in controllers. Try writing a super-controller which all of your controllers extend: class SuperController extends MY_Controller { public function __construct() { // Ensure you run parent constructor parent::__construct(); $this->checkSess(); } public function checkSess() { //Your session check and its associated redirects //eg. if $this->session->en==1 redirect to eng lang controller } } Class YourController extends SuperController{ //Your code } Or you can use multilang support in CodeIgniter as in Codexworld. Or, if you still want to use session in routes.php, you can try the standard PHP way as Alexander said, but I doubt whether it works properly. A: Initialize the session in autoload.php like: $autoload['libraries'] = array('session'); and then in routes.php you can access the sessions as: session_start(); print_r($_SESSION);
d10491
This is only a guess, but is this what you were after?

$("li").each(function () {
    var category = $.makeArray(
        $(this).attr("class").slice(
            $(this).attr("class").indexOf("-") + 1,
            $(this).attr("class").length
        )
    );
    var product = $.makeArray(
        $(this).attr("class").slice(0, $(this).attr("class").indexOf("-"))
    );
    var completeList = $.merge(product, category);
    console.log(completeList);
});

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<ul>
    <li class="product_cat-category-1"></li>
    <li class="product_cat-category-3"></li>
    <li class="product_cat-category-2"></li>
    <li class="product_cat-category-5"></li>
    <li class="product_cat-category-1"></li>
    <li class="product_cat-category-2"></li>
    <li class="product_cat-category-3"></li>
    <li class="product_cat-category-5"></li>
    <li class="product_cat-category-4"></li>
    <li class="product_cat-category-1"></li>
    <li class="product_cat-category-2"></li>
    <li class="product_cat-category-3"></li>
    <li class="product_cat-category-4"></li>
</ul>

https://jsfiddle.net/7xcpjr59/

I hope it helps.
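The slicing above just splits each class name at its first hyphen into a product part and a category part. The same extraction is easy to sanity-check in Python (the class names here are the ones from the markup above):

```python
classes = [
    "product_cat-category-1",
    "product_cat-category-3",
    "product_cat-category-2",
]

pairs = []
for name in classes:
    # split only at the FIRST hyphen, matching slice(0, indexOf("-"))
    # and slice(indexOf("-") + 1, length) in the jQuery version
    product, category = name.split("-", 1)
    pairs.append([product, category])

print(pairs[0])  # ['product_cat', 'category-1']
```

Limiting the split to one occurrence is what keeps "category-1" intact even though it contains a hyphen itself.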
d10492
Assuming you're using a somewhat recent (3.24 or newer) version of SQLite, you can use what's known as UPSERT:

INSERT INTO AssyData (AID, MaxMag, MaxDefNID)
VALUES (?, ?, ?)
ON CONFLICT(AID) DO UPDATE
SET MaxMag = excluded.MaxMag,
    MaxDefNID = excluded.MaxDefNID
WHERE excluded.MaxMag > MaxMag;

A: In case your version of SQLite does not allow the use of UPSERT, you can achieve what you need in 2 steps:

INSERT OR IGNORE INTO AssyData (AID, MaxDef, MaxDefLC, MaxDefNID, Comps, SubAssys)
VALUES (100, 111, 202, 203, '', '');

This INSERT OR IGNORE will fail without an error if you try to insert a row with an AID that already exists in the table. Then:

UPDATE AssyData
SET MaxDef = 111, MaxDefLC = 202, MaxDefNID = 203, Comps = '', SubAssys = ''
WHERE AID = 100 AND MaxDef < 111;

This will fail if the row to be updated contains a MaxDef equal to or greater than the value 111.

See the demo.

In general such code needs special care when implemented, because as you can see the value of MaxDef (111 in this example) must be set 3 times.
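The conditional upsert behaviour — only updating when the incoming MaxMag is larger than the stored one — can be verified in a quick in-memory SQLite session. The table and column names follow the first answer; the values are made up, and the underlying SQLite must be 3.24+ as noted:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE AssyData (AID INTEGER PRIMARY KEY, MaxMag REAL, MaxDefNID INTEGER)"
)

upsert = """
INSERT INTO AssyData (AID, MaxMag, MaxDefNID) VALUES (?, ?, ?)
ON CONFLICT(AID) DO UPDATE SET
    MaxMag = excluded.MaxMag,
    MaxDefNID = excluded.MaxDefNID
WHERE excluded.MaxMag > MaxMag
"""

con.execute(upsert, (1, 5.0, 100))  # fresh row: plain insert
con.execute(upsert, (1, 3.0, 200))  # smaller MaxMag: the DO UPDATE is skipped
con.execute(upsert, (1, 9.0, 300))  # larger MaxMag: the row is updated

final = con.execute(
    "SELECT MaxMag, MaxDefNID FROM AssyData WHERE AID = 1"
).fetchone()
print(final)  # (9.0, 300)
```

Inside the DO UPDATE clause, the bare column name MaxMag refers to the existing row while excluded.MaxMag refers to the values the failed INSERT proposed, which is what makes the comparison work.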
d10493
Maybe you should use "X-Accel-Redirect" for Nginx, and "X-Sendfile" for Apache.
d10494
-- Group data on datekey, then sum the hours without lunch
SELECT datekey, SUM(hours) AS total
FROM newtbl
WHERE segment != 'lunch'
GROUP BY datekey;

A: To find the total hours per day, I would use something like this:

SELECT Key, SUM(hours)
FROM Shifts
WHERE DateKey = 20210101;

To exclude the lunchtime from the total work:

SELECT Key, SUM(hours)
FROM Shifts
WHERE seg <> 'lunch'
GROUP BY Key;
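The filter-then-aggregate idea can be checked against an in-memory SQLite table. The table and column names mimic the first query; the rows are invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE newtbl (datekey INTEGER, segment TEXT, hours REAL)")
con.executemany(
    "INSERT INTO newtbl VALUES (?, ?, ?)",
    [
        (20210101, "morning", 4.0),
        (20210101, "lunch",   1.0),   # should not count toward the total
        (20210101, "evening", 4.0),
        (20210102, "morning", 3.5),
        (20210102, "lunch",   0.5),   # should not count either
    ],
)

# WHERE removes lunch rows BEFORE the per-day aggregation happens
rows = con.execute(
    "SELECT datekey, SUM(hours) AS total FROM newtbl "
    "WHERE segment != 'lunch' GROUP BY datekey ORDER BY datekey"
).fetchall()
print(rows)  # [(20210101, 8.0), (20210102, 3.5)]
```

Note the order of operations: the WHERE clause filters individual rows first, so the lunch hours never enter the SUM at all.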
d10495
If I understand your question correctly, the easiest way would be to create a page that takes all these parameters as a query string. You can then open the new page in a window from a hyperlink. The new page can process your input by reading the parameters from the query string.

<script language="javascript">
function submitForm() {
    var url = "http://www.newpage.php?subnm1=" + value1 + "&chpno1=" + value2; // and so on
    window.open(url, "acb", "status=no, width=960, height=700");
}
</script>
<a href="#" onclick="submitForm()">Submit</a>
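Building the query string by hand is easy to get wrong — each parameter after the first needs a & separator, and values with spaces or special characters must be URL-encoded. For comparison, this is how the same assembly looks with a standard encoding helper in Python (the parameter names come from the snippet above; the host and values are placeholders):

```python
from urllib.parse import urlencode

# placeholder values standing in for value1 and value2
params = {"subnm1": "maths", "chpno1": "3"}

# urlencode inserts the & separators and percent-encodes each value
url = "http://www.example.com/newpage.php?" + urlencode(params)
print(url)  # http://www.example.com/newpage.php?subnm1=maths&chpno1=3
```

In JavaScript the equivalent helper is encodeURIComponent (or URLSearchParams); either way, letting a library do the joining avoids the missing-ampersand class of bug.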
d10496
If I understand what you want correctly, this should work:

import numpy as np

a = np.array([[2, 2], [3, 3]])
b = np.zeros((len(a) * 2, len(a) * 2))
b[::2, ::2] = a

This 'inserts' the values from your array (here called a) into every 2nd row and every 2nd column.

Edit: based on your recent edits, I hope this addition will help:

x:y:z means you start from element x and go up to y (not including y itself) using z as a stride (e.g. 2, so every 2nd element: x, x+2, x+4 and so on, up to the x+2n closest to y). So ::z means ALL elements with stride z (::2 is every 2nd element, starting from 0). You do that for each dimension of your array, so for 2D you'd have [::z1, ::z2] to go through your entire data, striding z1 on the rows and z2 on the columns. If that is still unclear, please specify what is not clear in a comment.

One final clarification: when you type only : you implicitly tell Python 0:len(array), and the same holds for ::z, which implies 0:len(array):z. And if you just type ::, it appears to imply the same as : (though I haven't delved deep into this specific example).
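For readers without NumPy handy, the strided assignment b[::2, ::2] = a can be spelled out with plain Python loops — this is what it does element by element, using the same 2x2 input as above:

```python
# Pure-Python equivalent of: b = np.zeros((4, 4)); b[::2, ::2] = a
a = [[2, 2],
     [3, 3]]
n = len(a)
b = [[0] * (2 * n) for _ in range(2 * n)]  # 4x4 grid of zeros

for i in range(n):
    for j in range(n):
        b[2 * i][2 * j] = a[i][j]  # every 2nd row, every 2nd column

for row in b:
    print(row)
# [2, 0, 2, 0]
# [0, 0, 0, 0]
# [3, 0, 3, 0]
# [0, 0, 0, 0]

# The same x:y:z stride syntax works on plain lists too:
print(list(range(8))[1:7:2])  # [1, 3, 5]
```

NumPy performs exactly this placement, just without the explicit loops and in a single vectorized operation.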
d10497
You can use the route's model method to handle such parameters: separate them from the model parameter and set the appropriate controller state. Another approach would be to use nested routes that render un-nested views (and controllers), as explained towards the bottom here.
d10498
You are calling post_params with an argument, whereas it does not expect one. Since post_params returns a hash, you don't need parentheses to access a value:

post_params[:wdier_id]

However, you don't permit wdier_id in your strong params, so that would return nil. My guess is that you want this behavior:

redirect_to wdier_path(params[:wdier_id])

A: Look at your error description. On line 32 of posts_controllers.rb, in the create method, you call post_params with the argument [:wdier_id]. That method doesn't take any arguments, hence the error. Instead of redirect_to post_params([:wdier_id]) you should have something like redirect_to photo_path(@post.id).

A: The resolution was:

redirect_to wdier_path(@wdier)
d10499
Is this what you are trying to do?

from plotnine import ggplot, aes, geom_bar, facet_wrap
import pandas as pd

a_churn = pd.read_csv('Churn_Modeling.csv')
a_churn['c_rating'] = pd.cut(
    a_churn['CreditScore'],
    bins=[0, 500, 600, 660, 780, 1000],
    labels=['very poor', 'poor', 'fair', 'good', 'excellent']
)

p = (
    ggplot(a_churn)
    + aes(x='c_rating', y='EstimatedSalary')
    + geom_bar(stat='identity', alpha=0.8)
    + facet_wrap('Gender')
)
print(p)
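Under the hood, pd.cut is essentially a binary search into the bin edges. A rough stdlib-only equivalent using bisect (same edges and labels as above, default (left, right] intervals assumed) shows which bucket any single score lands in:

```python
import bisect

edges = [0, 500, 600, 660, 780, 1000]
labels = ["very poor", "poor", "fair", "good", "excellent"]

def rate(score):
    # pd.cut's default intervals are (left, right], so searching the
    # RIGHT edges with bisect_left puts boundary values like 600 in
    # the same bucket pd.cut would choose
    idx = bisect.bisect_left(edges[1:], score)
    return labels[idx]

print(rate(480), rate(600), rate(750))  # very poor poor good
```

This mirrors the intent of the bins/labels arguments one value at a time; pd.cut does the same thing vectorized over the whole CreditScore column.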
d10500
You are passing only one argument, i.e. the hash :search_term => @isbn, :search_type => @search_type, to Generalsearch.new(). Use:

Generalsearch.new(@isbn, @search_type)

A: You have to use that form, as you're accepting 2 params in the initialize function, not a hash of params:

@prices = Generalsearch.new(@isbn, @search_type)

A: If you want to use

Generalsearch.new(:search_term => @isbn, :search_type => @search_type)

then you can have this in the initialize method:

def initialize(options)
  # You can also use options[:search_term],
  # but fetch lets you know if the key doesn't exist
  self.search_term = options.fetch(:search_term)
  self.search_type = options.fetch(:search_type)
end