MatDialog is undefined in entry component I use the latest Angular version and I modify a template. I have a module called toolbar.module.ts which contains an entryComponent: @NgModule({ declarations: [ToolbarComponent, ToolbarDropdownComponent], imports: [ CommonModule, MatDialogModule, [...] ], exports: [ToolbarComponent], entryComponents: [ToolbarDropdownComponent] }) export class ToolbarModule { } Inside ToolbarDropdownComponent I use it as follows: [...] export class ToolbarDropdownComponent implements OnInit { constructor(public dialog: MatDialog) { } foo() { this.dialog.open(...); <--- this.dialog is undefined } } I use it in the same way I use MatDialog anywhere else. Is there something different here? What could be causing this.dialog to be undefined? Thanks a lot! It's public. Make it private and you can use this. I use this, and private or public shouldn't make a difference. Edit: confirmed, unfortunately it doesn't. Can you add the import section of MatDialog in your component file? Ref: https://stackblitz.com/edit/angular-wvfr7z?file=src%2Fapp%2Fmaterial-module.ts export class ToolbarDropdownComponent implements OnInit { constructor(private dialog: MatDialog) { } foo() { this.dialog.open(...); } }
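As the comments conclude, the access modifier cannot be the cause. A minimal TypeScript sketch outside Angular, with a hypothetical FakeDialog standing in for MatDialog, shows that a constructor parameter with either modifier becomes an instance property:

```typescript
// FakeDialog is a hypothetical stand-in for MatDialog; no Angular involved.
class FakeDialog {
  open(id: string): string {
    return `opened: ${id}`;
  }
}

// A constructor parameter with an access modifier is automatically
// assigned to `this`, whether it is public or private.
class PublicHolder {
  constructor(public dialog: FakeDialog) {}
  foo(): string {
    return this.dialog.open("a");
  }
}

class PrivateHolder {
  constructor(private dialog: FakeDialog) {}
  foo(): string {
    return this.dialog.open("b");
  }
}

console.log(new PublicHolder(new FakeDialog()).foo()); // opened: a
console.log(new PrivateHolder(new FakeDialog()).foo()); // opened: b
```

Both forms behave identically, matching the asker's observation that switching between public and private makes no difference.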
common-pile/stackexchange_filtered
How to post and get asmx web service access using an iPhone app? I am new to iPhone app development. How can I post data and get data by accessing the asmx web service from an iPhone app? I think asmx webservices are SOAP webservices. You should read my blog entry here - http://www.makebetterthings.com/iphone/call-soap-web-service-from-iphone/ To call a SOAP service, first I create a string with the SOAP request as follows. NSString *soapMessage = @"<?xml version=\"1.0\" encoding=\"utf-8\"?>\n" "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\n" "<soap:Body>\n" "<CelsiusToFahrenheit xmlns=\"http://tempuri.org/\">\n" "<Celsius>50</Celsius>\n" "</CelsiusToFahrenheit>\n" "</soap:Body>\n" "</soap:Envelope>\n"; After creating the SOAP request I create an NSMutableURLRequest to send this request to the server. NSURL *url = [NSURL URLWithString:@"http://w3schools.com/webservices/tempconvert.asmx"]; NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url]; NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]]; [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [theRequest addValue: @"http://tempuri.org/CelsiusToFahrenheit" forHTTPHeaderField:@"SOAPAction"]; [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"]; [theRequest setHTTPMethod:@"POST"]; [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]]; NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self]; if( theConnection ) { webData = [[NSMutableData data] retain]; } else { NSLog(@"theConnection is NULL"); } After firing the request we can collect the XML response in the NSURLConnection’s delegate methods. 
-(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { [webData setLength: 0]; } -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { [webData appendData:data]; } -(void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { NSLog(@"ERROR with theConnection"); [connection release]; [webData release]; } -(void)connectionDidFinishLoading:(NSURLConnection *)connection { NSLog(@"DONE. Received Bytes: %d", [webData length]); NSString *theXML = [[NSString alloc] initWithBytes: [webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding]; NSLog(@"%@",theXML); [theXML release]; } After collecting the XML response in the theXML string in -(void)connectionDidFinishLoading:(NSURLConnection *)connection, we can parse this string using TBXML or any other XML parser you like. I have added this code and am able to run it too, but I am also getting an array as the return param and I need to capture the array list. Can you please post the response here? I am not sure how you are getting an array in the webservice response; are you getting that in XML format? No, I mean I want to process an array list which is returned from the web service; I need to handle it in the iPhone code. How do I work on that? I am not sure how you are getting an array in the response (sorry, I don't have any .net exp). But if you are talking about getting this data from the XML response, you can look at this very good library TBXML - http://www.tbxml.co.uk/
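For comparison, the same request can be assembled (but not sent) with Python's standard library; the endpoint, SOAPAction and body are the ones from the Objective-C example above, and nothing here touches the network:

```python
import urllib.request

# Build, but do not send, the same SOAP request. The URL and SOAPAction
# header are taken from the Objective-C example above.
soap_message = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" '
    'xmlns:xsd="http://www.w3.org/2001/XMLSchema" '
    'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body>'
    '<CelsiusToFahrenheit xmlns="http://tempuri.org/">'
    '<Celsius>50</Celsius>'
    '</CelsiusToFahrenheit>'
    '</soap:Body>'
    '</soap:Envelope>'
)

request = urllib.request.Request(
    "http://w3schools.com/webservices/tempconvert.asmx",
    data=soap_message.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://tempuri.org/CelsiusToFahrenheit",
    },
)

# A Request with a body defaults to POST, mirroring setHTTPMethod:@"POST".
print(request.get_method())  # POST
```

Sending it would be `urllib.request.urlopen(request)`, which plays the role of NSURLConnection here.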
common-pile/stackexchange_filtered
Where to start integration of trigonometric function? Possible Duplicate: Evaluating $\int P(\sin x, \cos x) \text{d}x$ I am stuck with this integral: \begin{equation} \int \frac{dx}{(3+\cos^2x) \cdot \tan x} \end{equation} I tried playing with trigonometric functions and the most promising variant I managed to produce was this: \begin{equation} \frac{1}{2}\int \frac{dx}{\sin2x} \end{equation} I am guessing that a substitution is required to integrate this, but of what... I need some good instruction on how to deal with this. Another approach is to use $\cos^2 x=1-\sin^2 x$. Write the $\tan x$ as $(\sin x)/(\cos x)$. We end up with $$\int \frac{\cos x}{(4-\sin^2 x)\sin x}\,dx.$$ Make the substitution $u=\sin x$. We end up with $$\int \frac{du}{(4-u^2)u}.$$ This is a rational function, use partial fractions. I think that our integrand is $$\frac{1}{8}\frac{1}{2-u}-\frac{1}{8}\frac{1}{2+u}+\frac{1}{4}\frac{1}{u}.$$ Now the integration is easy. We get $$-\frac{1}{8}\log(4-u^2)+\frac{1}{4}\log u +C.$$ I have used it but missed one variable while rewriting it. Thanks! @Povylas: Had, as pretty often, a minus sign problem. Fixed. Noticed while checking it :) Integrating $\int\frac{dx}{\sin 2x}$ is fairly simple. Use $u=2x$. Then $du = 2\,dx$. You'll be left with $\frac{1}{4}\int \csc u\, du$. This is $\frac{1}{4}(\ln(\sin x)- \ln(\cos x))$. However, I don't think this is equivalent to your initial integral. Try calling the denominator of your initial integral $(4\cos^2x + 3\sin^2x) \tan x$. Try to see where it goes from there. We can also make it $\int\frac{\cos x}{(4-\sin^2x)\sin x}\,dx$ and use $u=\sin x$ to get $\int \frac{du}{(4-u^2)u}$. Completely factorizing the denominator using difference of squares, we can then use partial fractions. Sorry, I can't use csc... I am limited to sin, cos, tan and ctan only. csc is just 1/sin.
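The decomposition and the antiderivative above are easy to sanity-check numerically; this is just a verification of the answer's algebra, not part of the original thread:

```python
import math
import random

def integrand(u):
    # 1 / ((4 - u^2) * u), the integrand after the substitution u = sin x
    return 1.0 / ((4 - u**2) * u)

def partial_fractions(u):
    # The claimed decomposition: 1/8 * 1/(2-u) - 1/8 * 1/(2+u) + 1/4 * 1/u
    return (1 / 8) / (2 - u) - (1 / 8) / (2 + u) + (1 / 4) / u

def antiderivative(u):
    # The claimed result: -log(4 - u^2)/8 + log(u)/4
    return -math.log(4 - u**2) / 8 + math.log(u) / 4

random.seed(0)
for _ in range(5):
    u = random.uniform(0.1, 1.9)
    # the decomposition agrees with the integrand
    assert abs(integrand(u) - partial_fractions(u)) < 1e-12
    # F'(u) == integrand(u), checked by a central difference
    h = 1e-6
    deriv = (antiderivative(u + h) - antiderivative(u - h)) / (2 * h)
    assert abs(deriv - integrand(u)) < 1e-6
print("checks passed")
```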
common-pile/stackexchange_filtered
Cylinder sets on $C[0,1]$ Suppose $\mathscr{B}$ is the cylindrical $\sigma$-algebra on $\mathbb{R}^{[0,1]}$. Let $C:=C[0,1]$ and $\mathscr{A}:=\{B\cap C: B\in\mathscr{B}\}$. I am trying to show that $$ A:=\{f\in C[0,1]: \int_{[0,1]} f<1\}$$ is inside $\mathscr{A}$. We know that $\mathscr{A}$ is a $\sigma$-algebra but apart from this we have nothing. We need to construct the cylinder set $B$ such that $A=B\cap C$. $$B := \bigcup_{k \ge 1} \bigcap_{N \ge 1}\bigcup_{n \ge N} \bigcup_{\substack{\alpha_0,\dots,\alpha_{n-1} \in \mathbb{Q} \\ \frac{1}{n}(\alpha_0+\dots+\alpha_{n-1}) \le 1-\frac{1}{k}}} \left\{f : [0,1] \to \mathbb{R} \hspace{1mm} | \hspace{1mm} \left|f\left(\frac{j}{n}\right)-\alpha_j\right| \le \frac{1}{2k} \hspace{2mm} \forall \hspace{1mm} 0 \le j \le n-1\right\}$$ Okay, so I see that $B$ is the set of all functions with integral 1. However I do not see why $B_{n,k}$ is a cylinder set in $\mathbb{R}^{[0,1]}$. @user593295 I hope you mean "with integral strictly less than $1$". See the updated answer Yes, less than 1, as required by the definition of A. @user593295 do you understand the updated answer? Yes, I think so; so the reason why this is a cylinder is that f is specified by at most countably many points, right? @user593295 Each of these $\left\{f : [0,1] \to \mathbb{R} \mid f\left(\frac{j}{n}\right) = \alpha_j \hspace{2mm} \forall \hspace{1mm} 0 \le j \le n-1\right\}$ is a cylinder set, since it specifies $f$ at finitely many points (countably many points would also be fine). Then we take unions and intersections, which obviously keeps everything in the same $\sigma$-algebra. Where do you use $N$ here? @user593295 idk what that means. Do you mean to say "when proving $A = B \cap C$, how does $N$ enter the picture"? Yeah, but I get it now; it's basically just convergence of the Riemann sum, right? @user593295 yea Hints: Recall (or show) that $\mathcal{A}$ is the Borel $\sigma$-algebra generated by the uniform topology on $C[0,1]$. 
Show that the set $A$ is open in $(C[0,1],\|\cdot\|_{\infty})$. Since the Borel $\sigma$-algebra contains all open sets, this implies, by Step 1, that $A \in \mathcal{A}$. Part 2 is easy, but I do not understand part 1. Is the construction of the Borel cylindrical algebra independent of a specified topology? Can I see a proof of this? @user593295 The $\sigma$-algebra $\mathcal{A}$ equals the Borel $\sigma$-algebra associated with the uniform norm on $C[0,1]$. To prove this, a) show that the projections $\pi_t: C[0,1] \to \mathbb{R}, f \mapsto f(t)$ are (Lipschitz) continuous, implying Borel measurability. b) Prove that the open balls $\{g \in C[0,1]: \|f-g\|_{\infty}<r\}$ are in $\mathcal{A}$ (use the continuity of $f,g$ to write $\|f-g\|_{\infty} = \sup_{t \in \mathbb{Q} \cap [0,1]} |f(t)-g(t)|$) @saz how's the vacation?
common-pile/stackexchange_filtered
Application_Error does not handle exception I am working on an MVC application. I have a FilterConfig class: public class FilterConfig { public static void RegisterGlobalFilters(GlobalFilterCollection filters) { filters.Add(new HandleErrorAttribute()); } } I am using it in Global.asax.cs protected void Application_Start() { AreaRegistration.RegisterAllAreas(); FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters); RouteConfig.RegisterRoutes(RouteTable.Routes); BundleConfig.RegisterBundles(BundleTable.Bundles); } If I use this approach my Application_Error does not get fired when any exception occurs in a controller. protected void Application_Error() { var exception = Server.GetLastError(); AppLogging.WriteError(exception); var httpException = exception as HttpException; Response.Clear(); Server.ClearError(); var routeData = new RouteData(); routeData.Values["controller"] = "Error"; routeData.Values["action"] = "Error"; routeData.Values["exception"] = exception; Response.StatusCode = 500; if (httpException != null) { Response.StatusCode = httpException.GetHttpCode(); switch (Response.StatusCode) { case 403: routeData.Values["action"] = "UnauthorizedAccess"; break; case 503: routeData.Values["action"] = "SiteUnderMaintenance"; break; case 404: routeData.Values["action"] = "PageNotFound"; break; } } // Avoid IIS7 getting in the middle Response.TrySkipIisCustomErrors = true; IController errorsController = new ErrorController(); HttpContextWrapper wrapper = new HttpContextWrapper(Context); var rc = new RequestContext(wrapper, routeData); errorsController.Execute(rc); } public ActionResult Error() { return View("Error"); } Now when I set customErrors mode="Off" it goes to the Application_Error event, but HandleErrorInfo comes back as null. 
Error.cshtml @model HandleErrorInfo @{ Layout = "~/Views/Shared/_LayoutAnonymous.cshtml"; ViewBag.Title = "Error"; } <div class="error_wrapper"> <div class="col-sm-12 col-xs-12"> <div class=" error_col"> <div style="display: none;"> @if (Model != null) { <h3>@Model.Exception.GetType().Name</h3> <pre> @Model.Exception.ToString() </pre> <p> thrown in @Model.ControllerName @Model.ActionName </p> } </div> <h1 class="error_h1">503 </h1> <h2 class="error_h2">Looks like we're having some server issues. </h2> <h3 class="error_h3"> Go back to the previous page and try again.<br> If you think something is broken, report a problem. </h3> <div class="col-sm-12 col-xs-12 padding_none"> <button class="btn btn-primary btn_box error_btn" id="btnReport">Report A Problem</button> <button class="btn btn-primary btn_box error_btn pull-left" onclick="location.href='@Url.Action("Requests", "Pricing", new RouteValueDictionary { { "area", "" } })'"> Go To Homepage</button> </div> </div> </div> </div> @Scripts.Render("~/bundles/Error") Do you have your custom error handling settings in your web config? This web config setting is handling the errors and only unhandled errors will bubble up to this event. You need to disable this setting for your Application_Error to handle the errors. This is because your web config settings <customErrors mode="On" /> are overriding the default behavior. You will need to disable this setting. If you disable this, you can handle the error in the Application_Error event, then redirect the user or display a message from this event. This web config setting is handling the errors and only unhandled errors will bubble up to the Application_Error event. Ok, I made the change; now it is hitting the Application_Error event, but when it redirects to my Error.cshtml page the HandleErrorInfo model is null. I have posted the code of Error.cshtml above. 
Not sure about it, never used such an approach; shouldn't it be something like adding the model data into routeData, similar to how you are adding the action name? Just a guess.
common-pile/stackexchange_filtered
Writing a Protocol in a Swift Framework? I'm trying to write a Swift framework with a protocol used for a delegate method. I've made the protocol public, but when I add my framework to my project, the project spits out the following error: ViewController.swift:15:83: No type named 'MyControllerDelegate' in module 'MyModule' import MyModule class ViewController: UIViewController, MyModule.MyControllerDelegate { Any help in this regard would be appreciated. Update: not sure if this makes any difference, but some of the object types in the protocol are defined by another framework... Show the code for defining protocol MyControllerDelegate. It seems like the compiler cannot infer that you are not defining inheritance, but conformance to a protocol. Have you actually defined all functions/properties needed for the protocol conformance? Btw, there's no need for the explicit namespacing unless MyControllerDelegate has the same name as an already existing protocol. @DávidPásztor yes: I've implemented all the functions needed for the conformance -> I previously had the project as a source submodule... but I'm trying to convert to a compiled framework to save build time on the CI. The delegate name is indeed unique, but removing the namespace changes the error to "use of undeclared type 'MyControllerDelegate'". What if you put all delegate methods into an extension for ViewController, do you still get the same error? Is the protocol file added to the correct target?
common-pile/stackexchange_filtered
Full Tannakian subcategories and surjection of fundamental groups Let $(\mathcal{T},w)$ be a neutral Tannakian category over a field $k$, with fundamental group $G$, and $w$ a fibre functor. Let $(\mathcal{S},w|_{\mathcal{S}})$ be a full Tannakian sub-category (i.e. closed under tensor products, direct sums, duals and quotients) as well as taking subobjects/subquotients. Denote its fundamental group by $H$. In "Deligne, Pierre, and J. S. Milne. "Tannakian categories." Lecture Notes in Mathematics (2012)", Proposition 2.21, it is said that the natural morphism $G\longrightarrow H$ is faithfully flat (and in particular surjective). Their proof is tautological, and I could not understand the tautology. The way the authors deduce the claim is by arguing that the induced map on the underlying algebras (taking the duals of $G = \text{Aut}^{\otimes}(w) \longrightarrow \text{Aut}^{\otimes}(w') = H$), the map $$\text{Aut}^{\otimes}(w')^{\vee}\longrightarrow \text{Aut}^{\otimes}(w)^{\vee}$$ is clearly injective. While I understand how the conclusion follows from this implication, I would love an explanation of why this is the case, or an alternative argument. A reference would also be appreciated. I lost you in the second last paragraph. What exactly are you asking? I want to understand why the fundamental group of a neutral Tannakian category surjects onto the fundamental groups of its full Tannakian subcategories. Theorem: A homomorphism $G\rightarrow H$ of affine group schemes over $k$ is faithfully flat if and only if the functor $Rep(H)\rightarrow Rep(G)$ is fully faithful and the essential image is closed under taking subobjects. Recall that a homomorphism of affine group schemes over a field is faithfully flat if and only if the map on coalgebras is injective. For a $k$-algebra $A$ (not necessarily commutative), we let $M(A,k)$ denote the category of left $A$-modules finite-dimensional over $k$. Let $f\colon A\rightarrow B$ be a homomorphism of $k$-algebras. 
Using $f$, we can regard a $B$-module as an $A$-module and $M(B,k)$ as a subcategory of $M(A,k)$. Lemma: Assume that $B$ is finite-dimensional over $k$. The homomorphism $f\colon A\rightarrow B$ is surjective if and only if $M(B,k)$ is a full subcategory of $M(A,k)$ closed under taking submodules. Proof: If $f$ is surjective, then the subcategory $M(B,k)$ certainly has the claimed properties. For the converse, let $\bar{A}$ denote the image of $A$ in $B$. Then $\bar{A}$ is an $A$-submodule of $B$, and hence also a $B$-submodule. As it contains the identity element $1$ of $B$, it equals $B$. Let $f\colon C\rightarrow D$ be a homomorphism of $k$-coalgebras. Using $f$, we can regard a $C$-comodule as a $D$-comodule and $Comodf(C)$ as a subcategory of $Comodf(D)$. Lemma: The homomorphism $f\colon C\rightarrow D$ is injective if and only if $Comodf(C)$ is a full subcategory of $Comodf(D)$ closed under taking subobjects. Proof: If $C$ is finite-dimensional over $k$, this follows from the previous lemma applied to $f^{\vee}\colon D^{\vee}\rightarrow C^{\vee}$. In the general case, we can write $C$ as a union $C=\bigcup C_{i}$ of finite-dimensional $k$-subcoalgebras, and correspondingly $Comodf(C)=\bigcup_{i}Comodf(C_{i})$. Now the statement for $C$ follows from the statement for the $C_{i}$. I believe $Comodf(D)$ should be $Comod(D)$. Many thanks, this totally clarifies everything. Your answer suggests that there is a non-canonical filtration of the $k$-coalgebra of a neutral Tannakian group over $k$. I wonder if this can be used to compute Tannakian fundamental groups. Does your argument also imply that every dense morphism of affine algebraic group schemes is faithfully flat?
common-pile/stackexchange_filtered
Cucumber test - could not find class cucumber.api.cli.Main For a couple of days, when I try to execute my cucumber tests, I always receive the error "could not find class cucumber.api.cli.Main". Nothing has been changed in my config (as far as I know). [INFO] --- exec-maven-plugin:3.1.0:exec (default-cli) @acceptance-tests --- Erreur : impossible de trouver ou charger la classe principale cucumber.api.cli.Main [ERROR] Command execution failed. Strange behaviour: I'm not able to execute the test via the run command (Execute with run), but I'm able to execute the test with the run with coverage command (Execute with run with coverage). But as I have to debug my tests, run with coverage is not very helpful. Do you have any idea how to solve this issue? I've found the class cucumber.api.cli.Main in my dependencies. Thanks in advance. Try checking the differences in the run configuration between both. Sounds like your IDE is confused. Make sure you only have a single cucumber version in your (transitive) dependencies. Finally, I found the root cause of my issue; it was due to a wrong option in my IntelliJ settings, as described here: https://youtrack.jetbrains.com/issue/IDEA-261546/Run-specific-Cucumber-scenario-builds-the-whole-project-running-tests-in-all-suite-files-in-the-project I guess I may have found the cause of your issue. Do you enable the Delegate IDE build/run actions to maven option in Preferences | Build, Execution, Deployment | Build Tools | Maven | Runner? Please try disabling it to see if it helps.
common-pile/stackexchange_filtered
ggplot error bar not showing on graph because it falls outside y axis limits I am trying to create a plot with pre-defined Y-axis limits. I noticed that when error bars exceed these Y-axis bounds, the entire error bar is not plotted, rather than just being clipped to the Y-axis bound. An example, using the ToothGrowth data set (see also http://www.cookbook-r.com/Graphs/Plotting_means_and_error_bars_(ggplot2)/): df <- ToothGrowth dfc <- summarySE(df, measurevar="len", groupvars=c("supp","dose")) ggplot(dfc, aes(x=dose, y=len, colour=supp)) + geom_errorbar(aes(ymin=len-se, ymax=len+se), width=.1) + geom_line() + geom_point() + scale_y_continuous(limits=c(7.5,26.5)) I want the error bars at the dose=2.0 point to start at their bottom point (note that the horizontal marker line is there), and run all the way up to the border of the plot (i.e., the Y-axis upper limit). How can I achieve this? (Similarly for the dose=0.5 point in the VC series.) Setting the limits in the scale function drops data outside the range; setting limits with coord_cartesian, e.g. coord_cartesian(ylim=c(7.5,26.5)) in place of scale_y_continuous(limits=c(7.5,26.5)), does a simple zoom, so out-of-range error bars are clipped at the panel edge rather than dropped.
common-pile/stackexchange_filtered
jQuery append href I'm trying to append a link with jQuery, but it's not working. Here is my code. $(document).ready(function() { $( ".class" ).append( "<p><a href="http://www.google.com">Google</a></p>" ); }); Your code is right, but please fix your quotes: inside a double-quoted string you can only use single quotes (or escaped double quotes). Fix your quotes: $(document).ready(function() { $( ".class" ).append( "<p><a href='http://www.google.com'>Google</a></p>" ); }); You have nested double quotes, which is why it doesn't work. This will work (use outer single quotes): $(document).ready(function() { $( ".class" ).append('<p><a href="http://www.google.com">Google</a></p>'); });
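The root cause is quote nesting, not jQuery itself; the three usual ways to write the same markup string are shown below (the template literal assumes ES6):

```javascript
// Outer single quotes: inner double quotes need no escaping.
const single = '<p><a href="http://www.google.com">Google</a></p>';

// Outer double quotes: inner double quotes must be escaped.
const escaped = "<p><a href=\"http://www.google.com\">Google</a></p>";

// Template literal: both quote styles are fine inside backticks.
const template = `<p><a href="http://www.google.com">Google</a></p>`;

console.log(single === escaped && single === template); // true
```

Any of the three produces the same string and can be passed to .append().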
common-pile/stackexchange_filtered
Adding my SSH key to a new user with only SSH access through a key I'm quite new to this, and I've looked through questions but haven't found one that quite matches my problem, or rather I didn't sufficiently understand them to be able to solve this issue. I have a new server which I can only access using a pre-generated SSH RSA key pair, which allows me to connect only to the default ubuntu user: ssh -i .ssh/mykey ubuntu@ipaddr On the server I created a new user, newuser, and I'm trying to set it up so I can use the key I use for ubuntu on that account as well, so I can then delete the ubuntu user. ssh-copy-id -i .ssh/mykey.pub -o "IdentityFile .ssh/mykey" newuser@ipaddr That, though, returns permission denied. How can I do this effectively? Any password login is disabled and can't be enabled in this instance. ssh to your old user ubuntu, use sudo -i -u newuser to switch to the new user, open ~newuser/.ssh/authorized_keys with your favourite editor and copy&paste the content of your new public key into it, then set proper permissions on the .ssh directory and the files inside it: chmod 700 ~newuser/.ssh chmod 600 ~newuser/.ssh/authorized_keys You may need to create the directory ~newuser/.ssh if it doesn't exist yet. Don't forget to give sudo access to your new user before you delete the old one.
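The answer's steps can be sketched as a script. This is an illustration only: HOME_DIR stands in for /home/newuser, and the key line is a placeholder; on the real server you would run the mkdir/chmod steps as the new user (or with sudo):

```shell
# HOME_DIR is a stand-in for the new user's home directory,
# and PUBKEY is a placeholder for the contents of mykey.pub.
HOME_DIR=$(mktemp -d)
PUBKEY="ssh-rsa AAAAexamplekey you@client"

# Create ~/.ssh if it does not exist yet, then append the key.
mkdir -p "$HOME_DIR/.ssh"
printf '%s\n' "$PUBKEY" >> "$HOME_DIR/.ssh/authorized_keys"

# sshd refuses keys whose directory or file permissions are too open.
chmod 700 "$HOME_DIR/.ssh"
chmod 600 "$HOME_DIR/.ssh/authorized_keys"
```

On the server, the ownership step from the answer still applies (chown -R newuser:newuser ~newuser/.ssh) if you create the files as another user.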
common-pile/stackexchange_filtered
Smooth animated Collapsing Toolbar with Android Design Support Library Is there any way to make the Android Design Support Library's collapsing animation smoother while scrolling? When I release scrolling, it stops suddenly. What I want is for the collapsing animation to continue smoothly even if you stop scrolling. Android-ObservableScrollView and Scrollable are libraries that collapse smoothly. A similar question has been asked, which links to this issue. It seems therefore that this is a bug which will be solved when version 23 of the library is released. Try to understand the code of this Smooth App Bar Library. You can use the new layout_scrollFlag snap for smooth scroll within the AppBarLayout states. But what I have experienced is that, when the RecyclerView reaches the top, scrolling stops, i.e. the CollapsingToolbarLayout won't get expanded without another scroll. For the RecyclerView to scroll smoothly up and expand the CollapsingToolbarLayout I have used a ScrollListener on the recyclerview. recyclerView.addOnScrollListener(new RecyclerView.OnScrollListener() { int scrollDy = 0; @Override public void onScrolled(RecyclerView recyclerView, int dx, int dy) { scrollDy += dy; } @Override public void onScrollStateChanged(RecyclerView recyclerView, int newState) { super.onScrollStateChanged(recyclerView, newState); if(scrollDy==0&&(newState == AbsListView.OnScrollListener.SCROLL_STATE_IDLE)) { AppBarLayout appBarLayout = ((AppBarLayout) view.findViewById(R.id.app_bar)); appBarLayout.setExpanded(true); } } }); I used "scroll|exitUntilCollapsed" as layout_scrollFlags. 
<android.support.design.widget.CollapsingToolbarLayout android:id="@+id/collapsing_toolbar_layout" android:layout_width="match_parent" android:layout_height="match_parent" android:fitsSystemWindows="true" android:minHeight="80dp" app:layout_collapseMode="none" app:layout_scrollFlags="scroll|exitUntilCollapsed"> Good solution but there is a speed difference between the scroll and expand :) This one is fairly new, but the AppBarLayout has been recently updated to handle exactly what you're looking for with a new layout_scrollFlag called snap. Usage: app:layout_scrollFlags="scroll|snap" I'll try to look for my source and update my answer when I do. Edit: Of course, it's from the android developer blog. Thanks, this helps to auto push up and down when swiping halfway. I'm doing it through AppBarLayout, by overriding onNestedFling and onNestedPreScroll. The trick is to reconsume the fling event if the ScrollingView's top child is close to the beginning of the data in the Adapter. Source: Flinging with RecyclerView + AppBarLayout public final class FlingBehavior extends AppBarLayout.Behavior { private static final int TOP_CHILD_FLING_THRESHOLD = 3; private boolean isPositive; public FlingBehavior() { } public FlingBehavior(Context context, AttributeSet attrs) { super(context, attrs); } @Override public boolean onNestedFling(CoordinatorLayout coordinatorLayout, AppBarLayout child, View target, float velocityX, float velocityY, boolean consumed) { if (velocityY > 0 && !isPositive || velocityY < 0 && isPositive) { velocityY = velocityY * -1; } if (target instanceof RecyclerView && velocityY < 0) { final RecyclerView recyclerView = (RecyclerView) target; final View firstChild = recyclerView.getChildAt(0); final int childAdapterPosition = recyclerView.getChildAdapterPosition(firstChild); consumed = childAdapterPosition > TOP_CHILD_FLING_THRESHOLD; } return super.onNestedFling(coordinatorLayout, child, target, velocityX, velocityY, consumed); } @Override public void onNestedPreScroll(CoordinatorLayout 
coordinatorLayout, AppBarLayout child, View target, int dx, int dy, int[] consumed) { super.onNestedPreScroll(coordinatorLayout, child, target, dx, dy, consumed); isPositive = dy > 0; } } Then set the layout behavior as the FlingBehavior class <android.support.design.widget.AppBarLayout app:layout_behavior="package.FlingBehavior" android:id="@+id/appbar" android:layout_width="match_parent" android:layout_height="250dip" android:fitsSystemWindows="true"> Add the code app:layout_scrollFlags="scroll|enterAlways" in the view inside the AppBarLayout. This is my demo code collapsing the Toolbar with the Android Design Support Library. <android.support.design.widget.AppBarLayout android:layout_width="match_parent" android:layout_height="wrap_content" <EMAIL_ADDRESS> <android.support.v7.widget.Toolbar android:id="@+id/toolbar" android:layout_width="match_parent" android:layout_height="?attr/actionBarSize" app:layout_scrollFlags="scroll|enterAlways"> <bubee.inews.Items.ItemMenu android:id="@+id/itemMenu" android:layout_width="match_parent" android:layout_height="wrap_content" /> </android.support.v7.widget.Toolbar> </android.support.design.widget.AppBarLayout> Perfect, finally have a smooth scroll. This is not a collapsing Toolbar. Try adding the following code: app:layout_scrollFlags="scroll|snap" I am working on this problem too and have come up with a (maybe not very optimized) solution, but you can improve it. Once I improve it I will definitely edit the answer; till then, have a look at this. 
public abstract class AppBarStateChangeListener implements AppBarLayout.OnOffsetChangedListener { private static final String TAG = "app_AppBarStateChange"; public enum State { EXPANDED, COLLAPSED, IDLE } private State mCurrentState = State.IDLE; private int mInitialPosition = 0; private boolean mWasExpanded; private boolean isAnimating; @Override public final void onOffsetChanged(AppBarLayout appBarLayout, int i) { if (i == 0) { if (mCurrentState != State.EXPANDED) { onStateChanged(appBarLayout, State.EXPANDED); } mCurrentState = State.EXPANDED; mInitialPosition = 0; mWasExpanded = true; Log.d(TAG, "onOffsetChanged 1"); isAnimating = false; appBarLayout.setEnabled(true); } else if (Math.abs(i) >= appBarLayout.getTotalScrollRange()) { if (mCurrentState != State.COLLAPSED) { onStateChanged(appBarLayout, State.COLLAPSED); } mCurrentState = State.COLLAPSED; mInitialPosition = appBarLayout.getTotalScrollRange(); mWasExpanded = false; Log.d(TAG, "onOffsetChanged 2"); isAnimating = false; appBarLayout.setEnabled(true); } else { Log.d(TAG, "onOffsetChanged 3"); int diff = Math.abs(Math.abs(i) - mInitialPosition); if(diff >= appBarLayout.getTotalScrollRange()/3 && !isAnimating) { Log.d(TAG, "onOffsetChanged 4"); isAnimating = true; appBarLayout.setEnabled(false); appBarLayout.setExpanded(!mWasExpanded,true); } if (mCurrentState != State.IDLE) { onStateChanged(appBarLayout, State.IDLE); } mCurrentState = State.IDLE; } } public abstract void onStateChanged(AppBarLayout appBarLayout, State state); public State getCurrentState() { return mCurrentState; } } Create this class and call following code private AppBarStateChangeListener mAppBarStateChangeListener = new AppBarStateChangeListener() { @Override public void onStateChanged(AppBarLayout appBarLayout, State state) { Log.d(TAG, "ToBeDeletedActivity.onStateChanged :: " + state); if(state == State.EXPANDED || state == State.IDLE) { getSupportActionBar().setTitle(""); } else { getSupportActionBar().setTitle("Hello World"); if 
(Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) { mAppBarLayout.setElevation(0); } } } }; mAppBarLayout.addOnOffsetChangedListener(mAppBarStateChangeListener); Note: do not set an anonymous class as the OffsetChangedListener, as it is kept as a weak reference and will be collected by the GC. I found this out the hard way. Kindly explore this code, improve it (anyone) and re-share it. Thanks
common-pile/stackexchange_filtered
equivalent to 'cpusets' for GPUs I work with clusters of computers and manage the nodes with Torque and Moab. A user is able to submit a job to a node and request the amount of resources they need. #The following submits the job foo.sh to 1 node, requesting 8 cores, and 1 gpu qsub foo.sh -l nodes=1:ppn=8:gpus=1 Because it is possible for a user to take more resources than requested, I've enabled the hwloc library (cpusets) to keep users in check. From what I have found, there is no way to prevent a user from taking more GPUs than they have requested. Is there a 'cpuset' equivalent for GPUs? Resources Moab Documentation Torque Documentation hwloc documentation
common-pile/stackexchange_filtered
How to configure jsdom to preserve tag name case? Look at these lines: var doc = jsdom.jsdom("<moshe></moshe>"); console.log(doc.childNodes[0].tagName); The second line writes "MOSHE" to the console in uppercase, which means jsdom recognized my string as HTML and not XML. How can I force jsdom to preserve the tag name's original case? Thanks in advance. If you need to parse XML, why don't you use an XML parser instead? I don't use other XML parsers because I need the nodes created by the parser to be adopted by my main HTML document created by jsdom, so I can append these nodes into the main HTML hierarchy. Using another XML parser creates XML nodes incompatible with jsdom. On the other hand, I want to preserve the case in order to be able to execute case-sensitive XPath expressions on these nodes. According to the HTML standard, tagName is supposed to be uppercase in an HTML document. The tagName attribute must run these steps: If the context object's namespace prefix is not null, let qualified name be its namespace prefix, followed by a ":" (U+003A), followed by its local name. Otherwise, let qualified name be its local name. If the context object is in the HTML namespace and its node document is an HTML document, let qualified name be converted to ASCII uppercase. Return qualified name. Jsdom currently does not support XML documents (officially), since there's no differentiation internally between an HTML and an XML document. To parse as XML in v1.0+ you have to provide htmlparser2 as the parser; jsdom then implies parsing as XML based on an <?xml directive. This might become unnecessary if #883 gets merged, in which case a parsingMode option will be introduced, which accepts "xml" and switches to an XML parser. Ultimately, work is underway on this problem, but an immediate solution to parsing XML with jsdom is not in sight.
How to package the Cassandra source code into a Debian package? I have the latest source code of Cassandra and have made some changes to it to meet my needs. Now I want to package this Cassandra into a Debian package so that I can easily install it. I do not have much knowledge about Debian packaging. Can anyone explain the step-by-step procedure for this? Assuming that you already have all the build dependencies, all you need to do is run this command from the Cassandra root directory: $ dpkg-buildpackage -uc -us It is strongly recommended that you build it with the Oracle JDK instead of OpenJDK. There is a bug with certain versions of OpenJDK that will cause the build to fail. All of this information is available on the Cassandra Wiki.
How to find the inverse Laplace transform of this function? $F(s) = \dfrac{s}{s+3}$ I can't find this term in any Laplace transform table. Hint: $F(s) = \dfrac{s}{s+3} = \dfrac{(s+3)-3}{s+3} = 1-\dfrac{3}{s+3}$. Can you find $1$ and $\dfrac{1}{s+3}$ in your Laplace table? What is $F(s) = 1$ converted back to the original function? "Dirac delta function" doesn't ring a bell. The inverse Laplace transform of $1$ is the Dirac delta function $\delta(t)$. What's the problem with that?
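Putting the hint together with the two table entries mentioned in the thread, $\mathcal{L}^{-1}\{1\}=\delta(t)$ and $\mathcal{L}^{-1}\{1/(s+3)\}=e^{-3t}$, the full inverse transform works out by linearity as:

```latex
\mathcal{L}^{-1}\left\{\frac{s}{s+3}\right\}
  = \mathcal{L}^{-1}\left\{1-\frac{3}{s+3}\right\}
  = \mathcal{L}^{-1}\{1\} - 3\,\mathcal{L}^{-1}\left\{\frac{1}{s+3}\right\}
  = \delta(t) - 3e^{-3t}, \qquad t \ge 0.
```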
Perfect numbers adding up to 10 In aliquot sequences, am I right in saying that for all perfect numbers, when you add up the digits, the result always adds up to $10$ (apart from the perfect number $6$)? If so, is this just a coincidence? Example: "$496\mapsto 4+9+6=19\mapsto 1+9=10$". Hi, Jonas, I did not explain myself very well at all. $4+9+6=19$, $1+9=10$; it seems to be the same with them all, unless I have made an error. Thank you so much for looking at this. Donna Noticing something special about the second, third, and fourth number in the sequence is not enough to make a conjecture about the sequence as a whole. Here are a few more terms in the sequence: $6,28,496,8128,33550336,8589869056,137438691328,\dots$ Perhaps you should try to reword your statement further. Maybe you mean to suggest that perfect numbers are congruent to $1\pmod{9}$ @JMoravitz: I guess it would be stronger than just congruence to $1\bmod 9$, because it would imply you can't get to $100$ for example. Googling digit sums of perfect numbers led to https://primes.utm.edu/notes/proofs/Theorem3.html Perfect numbers are, as @JMoravitz suggests, congruent to $1$ modulo $9$, but I would be mildly surprised if they all passed through $10$ in their successive digital sums. It is worth pointing out that the proofs talked about so far have been specifically for even perfect numbers and are not valid for odd perfect numbers (if any actually exist; it remains an open problem to prove the existence or nonexistence of odd perfect numbers). @BrianTung: As long as the sum of digits exceeds $10$, all numbers equivalent to $1 \bmod 9$ will pass through some power of $10$, because those are the only numbers with sum of digits $1$. As OP seems not to have tried numbers large enough to have a sum of digits exceeding $100$, they will all go through $10$. I do not see evidence OP didn't try larger numbers.
It's true for the first 20 perfect numbers, and at that point the digit sums exceed 10,000, but this is still not good evidence. I don't have an idea how to prove or disprove this, other than finding a counterexample Mathematica can handle, if there is one. Plus, I don't have a calculator and added them up in my head (which now hurts), so I did get a bit tired by the 15th. Sadly, I am a lawyer, not a mathematician, and I think that you chaps are great. I have just always been fascinated by number patterns. Donna @DonnaSkelly: You are on the internet, so you have calculators! How about Wolfram Alpha for example? Will do, Jonas. This iPad is ancient and the signal is rubbish in the wilds of Scotland. I have to wave it out the window. @JonasMeyer How interesting that such a simple observation is so much harder to prove than just congruence to 1 mod 9. I wonder what else can be said about certain sets of numbers congruent modulo $c$, and whether they have a seemingly constant second-to-last digit sum as well. With a little research and programming, a separate question might be in order. I'll look into this later. Checking all known perfect numbers in Mathematica, they all come to 10. E.g., the digit sum for the largest known perfect number is 201071323. Not surprising for such a small number of examples, but if there are infinitely many Mersenne primes I would (naively) be surprised if this always happens. @JonasMeyer Wow, Jonas, isn't nature brilliant. Maybe we have found something here. I'm always looking out for patterns. Prime numbers drive me nuts. Again, I'm sorry I'm a lawyer and not a mathematician. Thank you so much for all of your help. Keep me updated if anyone else finds anything. Donna Skelly. Hello, Gentlemen. I understand that all perfect numbers are also nonagonal numbers. A centred nonagonal number is a centred figurate number that represents a nonagon with a dot in the centre and all other dots surrounding the centre dot in successive nonagonal layers.
This is fascinating. Their digits all eventually add up to 10 as well. Thank you. https://en.wikipedia.org/wiki/Centered_nonagonal_number For any odd $k$, $(2^k-1)2^{k-1}$ is congruent to $1 \bmod 9$, and all even perfect numbers are of this form (although in one case, $6$, $k$ is even). Unless it happens that, in forming the ultimate digit sum for some perfect number, an intermediate total hits on a higher power of $10$, say $100$, it's likely that $10$ will be reached in the digit-sum sequence. Since the order of $2\bmod 9$ is $6$, you have essentially demonstrated the truth of the above proposition by testing $k=3,5,7$ already. It's speculated that there are an infinite, if widely-spaced, number of even perfect numbers. If so, then it seems inevitable that one such will in fact reach some higher power of $10$ in the digit sum sequence and render the OP claim false. It is certainly true that even perfect numbers $>6$ are all congruent to $1\pmod 9$. Thus if you keep on adding the digits together until you get a single digit, that digit will always be $1$. To see this, note that, as per Euclid, even perfect numbers have the form $n_p=2^{p-1}(2^p-1)$ for some prime $p$ such that $2^p-1$ is also prime. Excluding $p=2$ (which gives us $6$) we may assume $p$ is odd. We note that $n_3=28$ which is indeed $1\pmod 9$. From now on we'll assume that $p>3$. It is easy to verify that $2$ has order $6 \pmod 9$. $p$ odd and $>3$ implies that $p\equiv \pm 1\pmod 6$ and it is now a straightforward exercise to confirm that in both cases $n_p\equiv 1 \pmod 9$.
Euclid observed that if $2^p-1$ is prime, then $2^{p-1}(2^p-1)$ is perfect, but it was Euler who proved that even perfect numbers always have this form. An even number is perfect if and only if it is of the form $N=2^n\cdot(2^{n+1}-1)$ and $2^{n+1}-1$ is prime (Euler). This implies that $n+1$ is also prime (although this is not a sufficient condition for $2^{n+1}-1$ to be prime). On the other hand, if $S(n)$ is the sum of the digits of $n$, then $S(n)\equiv n\pmod 9$. So we need to see if $2^n\cdot (2^{n+1}-1)\equiv 1\pmod 9$. We have that $2$ is a primitive root modulo $9$, that is, the first power of $2$ that is $1$ in $\Bbb Z_9$ is $2^6$. Now write $n=6q+r$. Since $n+1$ must be prime, if we assume that $n+1$ is not $2$ or $3$, $r+1$ must be $1$ or $5$, so $r$ is $0$ or $4$. Putting all together we get $$N\equiv2^r(2^{r+1}-1)=2^{2r+1}-2^r\equiv1\pmod 9$$ in both cases.
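The empirical side of the thread (every even perfect number after 6 passes through 10 in its successive digit sums) can be checked mechanically. This is only evidence for small cases, not a proof; the Mersenne exponents below are the known small primes $p$ with $2^p-1$ prime.

```python
# Check the repeated-digit-sum claim for the first few even perfect numbers.
def digit_sum(n):
    return sum(int(d) for d in str(n))

def repeated_sums(n):
    """Successive digit sums of n, down to a single digit (empty if n < 10)."""
    sums = []
    while n >= 10:
        n = digit_sum(n)
        sums.append(n)
    return sums

mersenne_exponents = [2, 3, 5, 7, 13, 17, 19]          # 2**p - 1 prime for these p
perfects = [2 ** (p - 1) * (2 ** p - 1) for p in mersenne_exponents]
for n in perfects:
    print(n, repeated_sums(n))   # every entry after 6 passes through 10, ending at 1
```

Running this reproduces the observation: e.g. 496 gives [19, 10, 1] and 137438691328 gives [55, 10, 1], with 10 appearing each time.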
R data.table: names of .SD not available for assignment Often, I want to manipulate several variables in a DT and I need to select the column names based on their names or class. d <- data.table(x = 1:10, y= letters[1:10]) # My usual approach col <- str_subset(names(d), '^x') d[, (col) := 2:11] However, it would be very useful and less verbose to do this: d[, (names(.SD)) := 2:11, .SDcols = patterns('^x')] But this throws an error: Error in `[.data.table`(d, , `:=`((names(.SD)), 2:11), .SDcols = patterns("^x")) : LHS of := isn't column names ('character') or positions ('integer' or 'numeric') > The column names of .SD are available, though: > d[, names(.SD), .SDcols = patterns('^x')] [1] "x" Why aren't the names of .SD available for assignment on the LHS of :=? this is a long-standing feature request: https://github.com/Rdatatable/data.table/issues/795; if you're feeling adventurous, there's an unmerged pull request implementing the feature, early testing would be helpful https://github.com/Rdatatable/data.table/pull/4163 Thanks Michael for the information. I'm not up to it, I'm afraid. Let me just say that I would very much welcome to see this new feature go through. As noted this is not yet possible. The workaround only adds one line of code: cols = grep('^x', names(d)) d[ , (cols) := 2:11, .SDcols = cols] Do you know of any progress on this feature? It is still not possible in the current version, but the github logs seem auspicious, and it would be very convenient.
Convert somebody's local time to UTC time I'm a little lost in the time zones :) I have data stored with the time in UTC. The server is in the Netherlands, so we live in UTC+1 (now, with daylight saving time, in UTC+2). Now a client says: give me the data from August 5th. So I have to calculate the UTC time from 'his time'. For that I have to know: what is your UTC offset (we stored that in his profile; let's say UTC-6), and are you in daylight saving time (because then we have to add +1 and make the UTC offset -5). Then my questions: Can I ask the .NET framework: does country XX have daylight saving time? Can I ask the .NET framework: is 08-05-2010T00:00:00 in country XXX in daylight saving time at that moment? I've been trying .ToLocalTime(), but this only gives me the local time at the server, and that's not what I want; I want to calculate with the time zone of the user, and also with the fact that at that particular point in time he/she is or is not in daylight saving time. I've also seen this VB example: Dim TimeZone As TimeZoneInfo = TimeZoneInfo.FindSystemTimeZoneById("W. Europe Standard Time") Dim Dated As DateTime = TimeZoneInfo.ConvertTimeToUtc(TempDate, TimeZone) but IMHO this doesn't take into account whether the user in this time zone is in daylight saving time (DST). For instance, one user in this time zone is in the Netherlands, which has DST, and another is in a country which has no DST. Related question - http://stackoverflow.com/questions/2532729/daylight-saving-time-and-timezone-best-practices You can't ask the framework about a particular country - but you can ask about a particular time zone. TimeZoneInfo does take DST into account. (Heck, it wouldn't have the IsDaylightSavingTime method otherwise.) If you've got two users, one of whom is currently observing DST and the other of whom isn't, then they aren't in the same time zone. If you could specify which locations you're talking about, I could try to find out which time zones are involved.
(It's generally easier to find out the Olson names, but it shouldn't be impossible to find out the Windows IDs.) Ok, thanks. But are you saying that users have to change their time zone when they enter DST? For example, normally I'm in GMT+1 Amsterdam, but when DST is entered, am I then in GMT+2? Or am I in GMT+1 during DST? In the former case, the user has to change his time zone in my app. (In that case it will work as you say, because the user who lives in a DST country does change his time zone to +2, and the user who lives in a non-DST country keeps his time zone at +1.) @Michel: No, the time zone describes when it enters and exits DST. For example, I'm in the time zone with the Olson name "Europe/London" - that's UTC for half the year, and UTC+1 for half the year. Your user in a DST country doesn't change time zone - their time zone changes offset from UTC. It's a very important distinction. @Jon: OK, I didn't know that the time zone has the knowledge of when DST starts and ends. But then I also have to know where you live, because I can check whether your zone is in DST right now, but I also have to know whether your country (as there are multiple countries in your time zone) works with DST or not? @Jon: There is also the problem that when you change from DST to normal time, you have multiple 02:15 times. So when you say "it happened in my house at 02:15", I have to ask you "Was it the first 02:15 or the second 02:15?" to determine the right UTC time. @Michel: No, you don't need to know the country. If your country doesn't observe DST, then you're not in the same time zone as a country which does observe DST. You just need to know the time zone. (And be aware that many countries operate in multiple time zones.) And yes, a given local time may have 0, 1 or 2 corresponding UTC times. It's just a natural consequence of the mess that humans have made of calendaring :( TimeZoneInfo has methods to handle this - IsAmbiguousTime and GetAmbiguousTimeOffsets.
Let's say that the USA uses DST and Canada does not. Then those two countries could not be in the same time zone? And yet they are one "above" (to the north of) the other. Strange. @Jon: Thanks very much for your help. I've always seen the multiple entries for +1, but never thought it was for determining whether you are in a DST zone or not. And IsAmbiguousTime is a big help. Thanks again. @Petar: There are several time zones in the US, not just one. And yes, there can be different time zones for the same longitude. I promote my comment to an answer. It appears to be a complicated problem. Look at http://www.timeanddate.com/time/dst2010.html; there is a table of DST in various countries of the world. You cannot get this with static code because DST changes not only from country to country, but also from year to year. I guess you have to maintain a table with DST information and get information from it. Or you can query a web service to get such info. Look at http://www.earthtools.org/webservices.htm#timezone for a web service that you can query to get the time in various countries, taking DST into account (it works only for Western European time zones). Also look at http://www.geonames.org/export/web-services.html. You have to get your users' geolocation to use these services. I've found this: 'timeZone.GetAdjustmentRules()[0].DaylightTransitionStart' which tells me when DST starts in a particular zone; is that the same? I looked in MSDN. It's about the same, but according to the info on http://www.timeanddate.com/time/dst2010.html DST information is dynamic: it changes over time according to countries' law modifications, so it cannot be "embedded" in a static framework.
Anyway, I guess modifications in laws don't happen that frequently, so if you don't mind a mistake in the calculation once in a while, you can go with DaylightTransitionStart. The TimeZone class has a method IsDaylightSavingTime that takes a date as a parameter and returns a boolean indicating whether that date is in daylight saving time for that time zone. Thanks, but that only works for local times; in my case, in an ASP.NET server app, that will be the time of the location where the server is located. In that case I can check if the SERVER is in DST, but not if the user in a particular time zone is.
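The same "the zone, not the country, knows about DST" idea can be illustrated outside .NET with Python's stdlib zoneinfo (3.9+), including the ambiguous fall-back hour the thread discusses. The dates match the thread's examples for the Netherlands in 2010.

```python
# The IANA zone object carries the DST rules, so converting a user's
# local time to UTC needs only their zone name, not their country.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ams = ZoneInfo("Europe/Amsterdam")

summer = datetime(2010, 8, 5, 0, 0, tzinfo=ams)    # DST in effect, UTC+2
winter = datetime(2010, 1, 5, 0, 0, tzinfo=ams)    # standard time, UTC+1
print(summer.astimezone(timezone.utc).isoformat())  # 2010-08-04T22:00:00+00:00
print(winter.astimezone(timezone.utc).isoformat())  # 2010-01-04T23:00:00+00:00

# The ambiguous hour when clocks fall back (31 Oct 2010): the fold flag
# selects whether "02:15" means the first or the second occurrence.
first = datetime(2010, 10, 31, 2, 15, tzinfo=ams, fold=0)
second = datetime(2010, 10, 31, 2, 15, tzinfo=ams, fold=1)
print(first.utcoffset() != second.utcoffset())      # True
```

This mirrors TimeZoneInfo's IsAmbiguousTime/GetAmbiguousTimeOffsets: one wall-clock time, two possible UTC instants.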
Using VBA to change field names in Access 2007 I receive data monthly from an external company and need to change the field names to sequential numbers, for example contract fields 11, 15, 17 to 1, 2, 3. I am trying to use the following code but get an error that I cannot define the field more than once at "fld.Name = (n) + 1". How can I correct this? Function ChangeFieldName() Dim db As DAO.Database Dim tbl As DAO.TableDef Dim fld As DAO.Field Dim n As Integer Set db = CurrentDb Set tbl = db.TableDefs("tdf1") On Error Resume Next n = 0 For Each fld In tbl.Fields fld.Name = (n) + 1 Next fld Set fld = Nothing Set tbl = Nothing Set db = Nothing End Function That code attempts to rename each field to n + 1, but since n is never incremented, it actually attempts to rename every field to 1. The following change may do what you want. n = 1 For Each fld In tbl.Fields fld.Name = n n = n + 1 Next fld However there are some other issues you should consider with that approach. The For Each loops through the fields based on fld.OrdinalPosition. If your numbered field names were not defined in the order you expect, you will have a problem. For example, these fields in OrdinalPosition order: 11; 15; 2. In that case 11 would be renamed to 1, but the code would throw an error when attempting to rename 15 to 2. Also that code will attempt to rename every field to a number. If the table only contains numbered field names, that may not be a problem. But if the table also contains other field names you wish to preserve, you've got more work to do. A minor point is that fld.Name is text type. When you attempt to rename a field to a number, Access actually uses the number's string equivalent. That may be fine, but I would prefer to explicitly cast the number to a string myself. fld.Name = CStr(n) Finally, please reconsider this: On Error Resume Next That instructs Access to silently ignore all errors. I think you should get rid of that and add a proper error handler code block instead.
Thanks HansUp, it did work, but you are right, it does change all the field names to numbers. The probability of an error is just too high with this code. So I will have to go back to the drawing board and think of some other way to change the field names. Thanks again
RubyMine not finding gems with Docker rvm I use rvm inside Docker. I would like to set the remote Ruby interpreter, but after choosing the Docker server and image and setting the Ruby interpreter path to .rvm/gems/, RubyMine does not download any gems. Somehow this question doesn't feel very clear to me. Could you at least paste your input, your output, and your expected output? Problem solved. I used SSH.
The trio examined crystallographic data on their tablets, scattered coffee cups evidence of their extended afternoon session at the materials consulting firm. "This density variation is remarkable - we're seeing a threefold change just by controlling which isomer forms during polycondensation. The dia topology gives us one density, but switch to qtz and everything changes." "But that's exactly the problem, isn't it? We can design the building blocks perfectly - triptycene derivatives with twelve connection points, planar trigonal units - yet we still can't reliably predict which topology will emerge." "The aea topology we synthesized last month proves your point, Eric. Those large cavities with small windows create exactly the selectivity we need for gas separation, but getting reproducible crystallization remains hit-or-miss." "Consider the structural mechanics though. The pto framework achieves that massive 46 angstrom pore size precisely because the topology allows such low density. Compare that to mhq-z where face-enclosed polyhedra create uniform 1.0 nanometer micropores." "Which brings us back to the fundamental question - can we predict topology from building block geometry alone? The conformational strain in our rectangular-planar precursors clearly influences the final network, but the relationship isn't linear." "The high-connectivity approach might offer more control. When we move beyond simple tetrahedral nodes to these twelve-connected systems, we're essentially programming the network topology through geometric constraints." "True, but we're trading predictability for complexity. Those intricate networks show incredible stability once formed, but the synthesis becomes exponentially more challenging. Each additional connection point multiplies the possible structural outcomes." "What if we think about it differently? 
Instead of fighting the crystallization uncertainty, we could design building blocks that channel the condensation toward specific topologies. Use the molecular geometry as a template
toggleClass and execution order in a jQuery file The jQuery handler toggles the "entered" status when the mouse moves in. $(".test").bind("mouseenter mouseout", function(event) { $(this).toggleClass("entered"); alert("mouse position (" + event.pageX + "," + event.pageY + ")"); }); .entered { font-size: 36px; width: 200px; height: 100px; border: 2px solid black; } .test { border: 2px solid red; background: #fdd; width: 60px; height: 60px; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="test">please move in</div> I found two issues in my code snippet. For the entered status, only font-size:36px; takes effect; why do width:200px;height:100px;border:2px solid black; have no effect? The alert pops up before $(this).toggleClass("entered"); is completed; how can I make the alert execute after $(this).toggleClass("entered"); has fully completed? Your fiddle was broken as it did not include jQuery, so I added it for you. You might want to look at this: https://stackoverflow.com/questions/38391178/jquery-toggleclass-callback-how-to The problem seems pretty moot anyway, as you really shouldn't be using alert() for this. Use console.log instead. For the first problem, just switch the position of the two CSS rules. "alert pop up before $(this).toggleClass("entered"); is completed," No, it doesn't. What happens is that the JS execution has been blocked by alert before the browser entered the paint phase. So some browsers (Chrome?) will not enter this paint phase. Some others (e.g. Firefox) will anyway. But in all cases, toggleClass, which is synchronous, will have been executed. This is because of selector precedence. .test is the last rule in the stylesheet which affects that element and it overrides the previous styles. To fix this, put the .entered rule last, and prefix it with .test. Here's a good guide on how CSS selector specificity works from MDN. This is because alert() blocks the UI thread from updating until it's dismissed.
This is one of the reasons you shouldn't use it. You could use a setTimeout() call to delay the alert(), but a much better and more appropriate solution is to just use console.log() instead. Also note that bind() is now deprecated and will be removed from the latest versions of jQuery. You should be using on() instead. $(".test").on("mouseenter mouseout", function(event) { $(this).toggleClass("entered"); console.log("mouse position (" + event.pageX + "," + event.pageY + ")"); }); .test { border: 2px solid red; background-color: #fdd; width: 60px; height: 60px; } .test.entered { font-size: 36px; width: 200px; height: 100px; border: 2px solid black; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="test">please move in</div> Note that it is not specified anywhere that alert should "block the UI thread". It should block the JS thread, but some browsers don't block the UI thread.
How to optionally process code in node_modules with babel-loader in webpack? This is a follow-up from this answer. I have some 3rd-party code (React components) that I bundle as ES modules (using the pkg.module entry point). This works great (you get module concatenation and tree shaking), but the included code isn't transpiled with Babel because, following most config examples, I exclude node_modules in the babel-loader section of my webpack config like this: { ... module: { rules: [ { exclude: /(node_modules)/, use: { loader: 'babel-loader', ... } } ] }, ... } So I get unexpected token errors when I run webpack. Based on the linked answer, I switched from using an exclude to an include to optionally bring in some packages from node_modules like this: { ... module: { rules: [ { include: [/node_modules\/@my-scope/, /src/], use: { loader: 'babel-loader', ... } } ] }, ... } This seems to be working for me (no more unexpected token errors when I run webpack), but I'm not 100% sure it's doing what I think it is. Does this solution look right? Is there a better way? The solution looks ok to me. If the include begins to get complex, you could replace it with a function and use logic to filter there. Thanks @JuhoVepsäläinen, I appreciate that. Feel free to post your comment as an answer so I can accept it. Done now. Thanks. :)
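The "replace it with a function" suggestion from the answer can be sketched like this. webpack's include condition accepts a predicate that receives each file's absolute path; the "@my-scope" name is taken from the question and is illustrative.

```javascript
// Predicate form of the include condition: return true to run the
// file through babel-loader.
function shouldTranspile(absPath) {
  if (!absPath.includes('node_modules')) return true;       // our own source
  return /node_modules[\\/]@my-scope[\\/]/.test(absPath);   // whitelisted scope only
}

// In webpack.config.js this would appear as:
//   { include: shouldTranspile, use: { loader: 'babel-loader' } }
console.log(shouldTranspile('/app/src/index.js'));                        // true
console.log(shouldTranspile('/app/node_modules/@my-scope/ui/index.js'));  // true
console.log(shouldTranspile('/app/node_modules/lodash/index.js'));        // false
```

The `[\\/]` character class keeps the check working on both POSIX and Windows path separators.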
Auto-remove unused tags Drupal 7. We want to remove tags (of Tags vocabulary) automatically when all pages referring to this tag are removed. Is there a module for this? There are a couple of modules that might help: Taxonomy Orphanage (specifically the cron interface) This module provides interfaces (drush, cron, admin form) for removing orphaned taxonomy term references from entities. Field reference delete (sandbox only) This module removes references to a deleted entity from fields stored in an SQL database. It exists to prevent stale references to non-existent content from causing unexpected problems (for example, when the referencing content is being displayed).
Google Geocoding API not returning correct latitudes/longitudes for an address as Google Maps does I am facing a strange problem with the Google Geocoding API. My requirement is that whenever a user types any address, all the corresponding locations should show in my map view (I am using MKMapView), like it happens in Google Maps. So what I do is use the Google Geocoding API, which returns me a list of latitudes and longitudes for the corresponding location. But for some locations, e.g. caribou coffee, chapel hil (when I search for this location in Google Maps I get a number of annotations showing that address, but when I type this address into the geocoder I get nothing). request url: http://maps.googleapis.com/maps/api/geocode/xml?address=caribou%20coffee,%20chapel%20hil%20&sensor=false response: <GeocodeResponse> <status>ZERO_RESULTS</status> </GeocodeResponse> Can anybody tell me why I am not getting any latitude and longitude corresponding to this address like is shown in Google Maps, or is there some other way to integrate the behaviour of Google Maps in my application (MKMapView)? Please help me as I am stuck here. Any suggestions will be highly appreciated. Thanks in advance! Note that the address is incomplete - e.g. searching for caribou coffee, chapel hill, nc does produce a result. I would think that the Geocoding API can't do anything with such addresses, and you may need to use some extra API - e.g. the Places API (https://code.google.com/apis/maps/documentation/places/ ), which can return some results for queries like "coffee". You'd need to supply the user's location to use as a base, however. Thanks, but when I type this address into the Google Maps app on iPhone I get a number of results for the same location. So Google Maps obviously uses other APIs - like the Places API (and possibly some private ones, but you can't do anything about that).
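As a side note on building the request itself, the percent-encoding in the question's URL can be produced with a standard library helper rather than by hand. This sketch uses Python's urllib for illustration only (the app itself is Objective-C) and plugs in the more complete "chapel hill, nc" address the answer suggests:

```python
# Build the geocoding request URL; urlencode handles spaces and commas.
from urllib.parse import urlencode

base = "http://maps.googleapis.com/maps/api/geocode/xml"
params = {"address": "caribou coffee, chapel hill, nc", "sensor": "false"}
url = base + "?" + urlencode(params)
print(url)
```

urlencode renders spaces as "+" (equivalent to "%20" for query strings), so either form reaches the service correctly.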
Calculating hashes of raw image data and writing it to the image file I am trying to write hashes to the metadata part of my image files. In the Exiftool Forum I saw this exiftool FILE -rawimagedigest=`exiftool FILE -all= -o - | md5` However, I would rather not run it manually for each file, and I do prefer SHA. I tried this find . -name "*" -exec sh -c ' md5hash=$(exiftool "$1" -all= -m -o - | md5) sha256hash=$(exiftool "$1" -all= -m -o - | shasum -a 256) exiftool -overwrite_original "$1" -FileImageMd5=$md5hash; exiftool -overwrite_original "$1" -FileImageSha256=$sha256hash ' _ {} \; Using the example file I created a config making it possible to write to FileImageMd5 and FileImageSha256. However, the script only works without the line exiftool -overwrite_original "$1" -FileImageSha256=$sha256hash If I substitute the variable at the end with $md5hash it runs as expected. The config file is named .ExifTool_config and placed in $HOME. It consists of the following %Image::ExifTool::UserDefined = ( 'Image::ExifTool::XMP::Main' => { rlp => { SubDirectory => { TagTable => 'Image::ExifTool::UserDefined::rlp', }, }, }, ); %Image::ExifTool::UserDefined::rlp = ( GROUPS => { 0 => 'XMP', 1 => 'XMP-rlp', 2 => 'Image' }, NAMESPACE => { 'rlp' => 'http://ns.ladekjaer.org/rlp/1.0/' }, WRITABLE => 'string', FileUniqueId => { Writable => 'lang-alt' }, FileImageSha256 => { Writable => 'lang-alt' }, FileImageMd5 => { Writable => 'lang-alt' }, ); 1; #end Can you share your config file? I assume that you created writable tags for FileImageSha256 and FileImageMd5. Apparently the script failed due to shasum -a 256 ending its output with " -" (shasum's stdin filename marker). Since a SHA-256 written in hex is always 64 characters, can this be solved by adding | head -c 64 Thus making the script find .
-name "*" -exec sh -c ' md5hash=$(exiftool "$1" -q -all= -m -o - | md5) sha256hash=$(exiftool "$1" -q -all= -m -o - | shasum -a 256 | head -c 64) exiftool -overwrite_original -q "$1" -FileImageMd5=$md5hash; exiftool -overwrite_original -q "$1" -FileImageSha256=$sha256hash ' _ {} \;
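Why the `head -c 64` trim is safe can be sanity-checked independently of the exiftool pipeline: a SHA-256 digest rendered in hex is always exactly 64 characters (and MD5 is always 32), so cutting at 64 characters drops only shasum's trailing " -" field and never part of the hash.

```python
# Confirm the fixed hex-digest lengths that make the head -c 64 trim safe.
import hashlib

data = b"example image bytes"
sha_hex = hashlib.sha256(data).hexdigest()
md5_hex = hashlib.md5(data).hexdigest()
print(len(sha_hex), len(md5_hex))  # 64 32
```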
Using regex in python for a dynamic string I have a pandas column with strings which don't have the same pattern, something like this: {'iso_2': 'FR', 'iso_3': 'FRA', 'name': 'France'} {'iso': 'FR', 'iso_2': 'USA', 'name': 'United States of America'} {'iso_3': 'FR', 'iso_4': 'FRA', 'name': 'France'} How do I only keep the name of the country for every row? I would only like to keep "France", "United States of America", "France". I tried building the regex pattern: something like this r"^\W+[a-z]+_[0-9]\W+" But this turns out to be very specific, and if there is a slight change in the string the pattern won't work. How do we resolve this? As you have dictionaries in the column, you can get the values of the name keys: import pandas as pd df = pd.DataFrame({'col':[{'iso_2': 'FR', 'iso_3': 'FRA', 'name': 'France'}, {'iso': 'FR', 'iso_2': 'USA', 'name': 'United States of America'}, {'iso_3': 'FR', 'iso_4': 'FRA', 'name': 'France'}]}) df['col'] = df['col'].apply(lambda x: x["name"]) Output of df['col']: 0 France 1 United States of America 2 France Name: col, dtype: object If the column contains stringified dictionaries, you can use ast.literal_eval before accessing the name key value: import pandas as pd import ast df = pd.DataFrame({'col':["{'iso_2': 'FR', 'iso_3': 'FRA', 'name': 'France'}", "{'iso': 'FR', 'iso_2': 'USA', 'name': 'United States of America'}", "{'iso_3': 'FR', 'iso_4': 'FRA', 'name': 'France'}"]}) df['col'] = df['col'].apply(lambda x: ast.literal_eval(x)["name"]) And in case your column is totally messed up, yes, you can resort to regex: df['col'] = df['col'].str.extract(r"""['"]name['"]\s*:\s*['"]([^"']+)""") # or to support escaped " and ': df['col'] = df['col'].str.extract(r"""['"]name['"]\s*:\s*['"]([^"'\\]+(?:\\.[^'"\\]*)*)""") >>> df['col'] 0 France 1 United States of America 2 France See the regex demo. You may need to compose it with a json.loads as he is saying the column type is str.
@Learningisamess Your username is so telling :) Right, that might need a bit more work, I suggested some more workarounds. thanks for the comment =) I tend to prefer json.loads to ast.literal_eval because I prefer speed to coverage. I also see you like pushing the heavier weights with regex!
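The core of the accepted answer can be demonstrated without pandas at all, which also shows why ast.literal_eval rather than json.loads is needed for these particular strings: they use single quotes, which JSON rejects.

```python
# Stdlib-only sketch: parse each stringified dict and read its "name" key.
import ast

rows = [
    "{'iso_2': 'FR', 'iso_3': 'FRA', 'name': 'France'}",
    "{'iso': 'FR', 'iso_2': 'USA', 'name': 'United States of America'}",
    "{'iso_3': 'FR', 'iso_4': 'FRA', 'name': 'France'}",
]
names = [ast.literal_eval(r)["name"] for r in rows]
print(names)  # ['France', 'United States of America', 'France']
```

With pandas, the same function is simply applied per row via df['col'].apply, exactly as in the answer.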
How to pass a bitmap from C++ DLL to a VB6 app I have some C++ code that captures a raw image and builds a BMP which is saved to a file xyz.bmp. This works fine. However, the objective is to get that image data directly across to a VB6 app so no intermediary file is written out. Searching around I can't really find much at all by way of an example or clear explanation and my C++ is sketchy at best. C++ DLL code: extern "C" _declspec(dllexport) HRESULT CaptureSample(BYTE *pbByteArray, int *pnSize) { .... .... std::string bmpFile = "data/fingerPrint_" + s.str() + ".bmp"; MessageBox(0, L"Saving", L"title", MB_OK); BmpSetImageData(&bmp, data, width, height); BmpSave(&bmp, bmpFile); const int nSizeOfData = sizeof(bmp); *pnSize = min(nSizeOfData, *pnSize); ::memcpy(pbByteArray, bmp, *pnSize); In the above I have pbByteArray and pnSize to be returned to the VB6 app. The function BmpSave works fine. It is the next few lines of code I am coming unstuck on. ::memcpy .. bmp is invalid - "no suitable conversion function from SBmpImage". On the VB6 side: Public Declare Function CaptureSample Lib "ImageLibrary.dll" (ByVal FScan As Byte, ByVal FScanSize As Integer) As Long Private Sub Command_Click() Dim FScan As Byte Dim FScanSize As Integer Call CaptureSample(FScan, FScanSize) Again, not sure if this is the correct approach. Any help would be appreciated. So bmp is declared as SBmpImage, could we see the definition of that? Hard to answer the question without that important information. Also sizeof(bmp) is certainly wrong. You seem to think that it will give you the size of your image in bytes, but it won't. sizeof is a C++ language construct which has no knowledge of bitmaps. CaptureSample must be declared as (ByRef FScan As Byte, ByRef FScanSize As Long) As Long, and you must pass an allocated buffer. You have to guess the size of the buffer though because the function will not tell you.
AngularJs code organizing nuget package I am trying to start a new angularjs application with visual studio. I heard that there is a specific folder structure to organize angular code. Is there any nuget package which creates the folder structure and adds the required basic js files? Depending on the app you are making you can choose a different approach for how to build your folder structure. There are, as far as I know, no nuget packages that create that specific structure, but of course there are all the AngularJS nuget packages that add every .js file you need to run an Angular application. Just search for AngularJS in your Nuget Package Manager. You could also read an article on best practices for the folder structure in AngularJS.
index is out of bounds Here is my program, I don't understand why this error appears on line 41: "index 60 is out of bounds for axis 0 in size 60". It's the Game of Life and I tried to do it by myself because, as I'm new to Python, it was hard to understand the ones on the internet... import numpy as np rows=60 cols=60 size=rows*cols nb_steps=300 array=np.random.randint(0,high=2, size=size,dtype=int).reshape(rows,cols) next_state=np.zeros((rows,cols),dtype=int) alive=1 dead=0 grid=np.int(size) def nb_neighbors(grid): neighbors=np.zeros((rows,cols),dtype=int) for x in range (rows): for y in range (cols): if x==0 and y==0: neighbors[x][y]=grid[x-1][y]+grid[x-1][y+1]+grid[x][y+1] if x==0 and y==59: neighbors[x][y]=grid[x+1][y]+grid[x+1][y-1]+grid[x][y-1] if x==59 and y==0: neighbors[x][y]=grid[x][y+1]+grid[x+1][y+1]+grid[x+1][y] if x==59 and y==59: neighbors[x][y]=grid[x][y-1]+grid[x-1][y-1]+grid[x-1][y] if y-1==0: neighbors[x][y]=grid[x-1][y]+grid[x+1][y]+grid[x-1][y+1]+grid[x][y+1]+grid[x+1][y+1] if y+1==0: neighbors[x][y]=grid[x-1][y-1]+grid[x][y-1]+grid[x+1][y-1]+grid[x-1][y]+grid[x+1][y] if x-1==0: neighbors[x][y]=grid[x][y-1]+grid[x+1][y-1]+grid[x+1][y]+grid[x][y+1]+grid[x+1][y+1] if x+1==0: neighbors[x][y]=grid[x-1][y-1]+grid[x][y-1]+grid[x-1][y]+grid[x-1][y+1]+grid[x][y+1] else : neighbors[x][y]=grid[x-1][y-1]+grid[x][y-1]+grid[x+1][y-1]+grid[x-1][y]+grid[x+1][y]+grid[x-1][y+1]+grid[x][y+1]+grid[x+1][y+1] return neighbors def next_state(old): next_state=np.zeros((rows,cols),dtype=int) neighbors=nb_neighbors(old) for x in range (rows): for y in range(cols): if old[x][y] == alive and (neighbors[x][y] == 2 or neighbors[x][y] == 3): next_state[x][y] = alive if old[x][y] == dead and neighbors[x][y] == 3: next_state[x][y] = alive else : next_state[x][y]= dead return next following=next_state(array) for i in range (nb_steps): following=next_state(following) print (following) grid[x][y + 1] when you’re at the bottom of the grid is…? grid[x + 1] when you’re on the right? 
(Swap “right” and “bottom” if x represents a row.) Typical access into a 2D array is (row, column) or (y, x) Thank you very much Ryan and cricket_007 for your answers !! I'm gonna try this and I'll let you know ! But I don't really understand why do I have to change x and y ?
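Following the hint in the comments about edge access, here is one hedged way to rewrite the neighbour count so it never indexes past the array bounds: clamp a 3x3 window to the grid with slicing and subtract the cell itself. This is an illustrative sketch (shown on a small 3x3 grid), not the asker's original structure.

```python
import numpy as np

def nb_neighbors(grid):
    # Count live neighbours without ever indexing past the edges:
    # clamp the 3x3 window to the grid, then subtract the cell itself.
    rows, cols = grid.shape
    neighbors = np.zeros((rows, cols), dtype=int)
    for x in range(rows):
        for y in range(cols):
            window = grid[max(x - 1, 0):min(x + 2, rows),
                          max(y - 1, 0):min(y + 2, cols)]
            neighbors[x, y] = window.sum() - grid[x, y]
    return neighbors

grid = np.array([[0, 1, 0],
                 [0, 1, 0],
                 [0, 1, 0]])
print(nb_neighbors(grid))
```

The same clamping works for the 60x60 grid in the question, and it removes the need for every edge and corner special case (which is where the out-of-bounds accesses like grid[x+1] at x==59 come from).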
Try to find out if my (use of google maps API services) calculations are correct? I am a nontechnical member (and idea leader) of a team that develops a certain web application, so I am not very familiar with this area. I am asking this question here because one or two responders here :) have probably come across this "problem" before. The problem for me is that I don't know how to calculate the right number of requests, or in other words the "google maps API request and map load usages of their services", when we would like to find out IF A COORDINATE (which represents an address) IS IN A CERTAIN POLYGON OR NOT (polygons represent local areas, based on which we determine if the address is in the area or not). My first findings were (I don't know if they are right?) that we will probably need (to find out if a coordinate is in the polygon or not) to use the GOOGLE MAPS JAVASCRIPT API (because we have a web app and because we will get the address that represents the coordinate from the user, that is, a user who uses our app client-side) and the containsLocation() FUNCTION from the Geometry Library. Now with all that, I assume that only "1 request" and "1 map load" through the Google Maps Javascript API is taken? Thank you. Google Maps JavaScript API counts only map loads. So your quota is limited to 25K daily map loads without Billing enabled and 100K daily map loads with Billing enabled. https://developers.google.com/maps/documentation/javascript/usage One map load corresponds to a call to new google.maps.Map(). If you call a function from the geometry library it doesn't generate additional requests. Zooming and panning don't generate additional requests either. You will generate additional requests only if you use client-side services like geocoding, directions, distance matrix or places (autocomplete, search or detail). In this case requests will be counted against the corresponding Web Service quota. https://googlegeodevelopers.blogspot.com.es/2016/06/building-for-scale-updates-to-google.html Hope it helps! 
Thank you for your time and willingness to give me this answer. It helps.
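For reference, the kind of test containsLocation() performs can be sketched with the classic ray-casting algorithm. This is an illustrative pure-Python version (the square polygon and coordinates are made up, and it ignores the spherical geometry that the Maps geometry library handles), not the Maps API itself.

```python
def point_in_polygon(lat, lng, polygon):
    # Ray-casting: count how many polygon edges a horizontal ray from the
    # point crosses; an odd count means the point is inside.
    inside = False
    n = len(polygon)
    for i in range(n):
        (y1, x1), (y2, x2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            # Longitude at which this edge crosses the ray's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lng < x_cross:
                inside = not inside
    return inside

# A 10x10 degree square as a toy "local area"
square = [(0, 0), (0, 10), (10, 10), (10, 0)]
print(point_in_polygon(5, 5, square))
print(point_in_polygon(5, 15, square))
```

Running this check yourself (client- or server-side) avoids consuming any Maps API quota at all, which matches the answer's point that geometry-library calls are free of extra requests.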
LyX: Citation shows up as [#] rather than (Author, Year) I've been having the same problem as this person. My citations show up as [#], and not in the desired format (Author, Year). I have checked natbib as advised, and also in the citation menu itself. However, the PDF output still only gives me [#]. Any ideas? Edit: My BibTeX is an export file from Mendeley, and looks quite normal. Thank you so much in advance. What document class are you using? Would the problem persist if you switch to a different document class, say article? I'm using Article (Komascript) Which bibliography style are you using? Try plainnat. At least for the class "elsarticle" (Elsevier), the document class is loaded with a numbered citation style by default. For elsarticle, one adds the year by adding the "authoryear" option to the class by doing this in LyX: Document -> Settings -> Document Class -> Class options -> Custom: "authoryear" Source: elsarticle documentation In case you are using the Elsevier template elsarticle-template-1-num.tex in a raw editor without these options you have to add the option \biboptions{authoryear}
Angular filter Observable array I have an Observable array and I want to filter/find the Project by name. When I try to use the filter option it is saying ProjectService.ts import { Injectable } from '@angular/core'; import { Project } from "../classes/project"; import { Observable } from 'rxjs/Observable'; import 'rxjs/add/observable/of'; import { Http } from '@angular/http'; @Injectable() export class ProjectService { private projects: Observable<Project[]>; constructor(private http: Http) { this.loadFromServer(); } getProjects(): Observable<Project[]> { return this.projects; } private loadFromServer() { this.projects = this.http.get('/api/projects').map(res => res.json()); } getProjectByName(name: String) { return this.projects.filter(proj => proj.name === name); } } Project Class export class Project { public name: String; public miniDesc: String; public description: String; public category: String[]; public images: any[]; } it should be: getProjectByName(name: String) { return this.projects .map(projects => projects.filter(proj => proj.name === name)); } you misunderstood the filter operator. The operator is used to filter data returned from the stream. Your stream returns an array of objects, so you need to filter the array to get the value you need. The solution above will return an array after filtering; if you want to get only one value, use the following solution getProjectByName(name: String) { return this.projects .map(projects => { let fl = projects.filter(proj => proj.name === name); return (fl.length > 0) ? fl[0] : null; }); } You can also just write a for-loop inside the map arrow function which returns the first value that satisfies the condition: .map(xs => { for (let x of xs) if (/*cond*/) return x; return null; }) etc. Very nice solution provided by @Tiep Phan, only small note. 
The solution above works for arrays with primitive types; if you want to filter objects you can do something like: tasks.filter((item) => this.tasks.map((task) => task.id).indexOf(item.id) < 0) In your Service you can define the <any> type or Project[] type for the response value, and the same can be continued with filter, e.g. <any>res.json() or <Project[]>res.json(), and also update your class as suggested by @Sajeetharan ProjectService.ts import { Injectable } from '@angular/core'; import { Project } from "../classes/project"; import { Observable } from 'rxjs/Observable'; import 'rxjs/add/observable/of'; import { Http } from '@angular/http'; @Injectable() export class ProjectService { private projects: Observable<Project[]>; constructor(private http: Http) { this.loadFromServer(); } getProjects(): Observable<Project[]> { return this.projects; } private loadFromServer(): Observable<any> { this.projects = this.http.get('/api/projects').map((res: Response)=> <any>res.json()); } getProjectByName(name: string) { return this.projects.filter(proj => proj.name === name); } } *Always write your filters, conditions or manipulations in the component, not in services.
How can I trigger global onError handler via native Promise when runtime error occurs? With Q.js I can trigger window.onerror using .done(): window.onerror = function() { console.log('error from handler'); } var q = Q('initValue').then(function (data) { throw new Error("Can't do something and use promise.catch method."); }); q.catch(function(error){ throw new Error("This last exception will trigger window.onerror handler via done method."); }) .done(); In native Promise (ES6) we have no .done(), and the last ".catch" is the end of the chain: var p = new Promise(function () { throw new Error("fail"); }); p.catch(function (error) { throw new Error("Last exception in last chain link"); }); "throw new Error(...)" in ".catch" is one of the simplest ways to reproduce a runtime error. In reality, it may be another analog of a runtime error (EvalError, SyntaxError, TypeError, etc.), e.g.: var a = []; a(); // Uncaught TypeError: a is not a function The .done usage is just an example to explain my goal in more detail. My goal is not to duplicate the .done API. My task is: I have a promise chain and a handler on window.onerror. All errors inside the chain I can handle with .catch, except a runtime error at the end of the chain. When any runtime exception occurs at the end of the promise method chain, I need the handler that was hooked to window.onerror to be triggered. The restrictions: only native JS, must use window.onerror. What is the best way to trigger this global handler via native Promise? .then() handlers catch exceptions within them and turn them into rejected promises. So, you can't throw an exception and expect it to go all the way up to the top level in ES6 standard promises. You don't need to trigger any global error event yourself. Just register your error handler for the unhandledrejection event! Not sure, maybe Catch all unhandled javascript promise rejections is a better dupe target. "must use window.onerror." - why? Can't you just use window.onunhandledrejection= window.onerror? 
@Bergi, Great! It's simple, but not obvious to me. It works exactly the way I was looking for. Thanks! I will investigate process more deeply. One more way with the same results - add final .catch with window.onerror call .catch(function (error) { window.onerror(); throw error; }) @A.Mikhailov Better make that .catch(window.onerror) - passes in the error, and doesn't lead to an extra unhandled rejection. @Bergi You're right. Thanks for the correction! Throw the error asynchronously: window.addEventListener('error', function() { console.log('error from handler'); }); new Promise(function () { throw new Error("fail"); }).catch(function (error) { setTimeout(() => {throw new Error("Last exception in last chain link")}, 0); }); Why not just drop the catch and simply register the error handler for the appropriate event? @Bergi I just copied the catch from the question, I would remove it too. About unhandled promise rejection events, are they supported already? Oriol, Bergi: thanks a lot for your help! I detailed the question and description. I've looked through all referral links and found no answers for this question. Could you look at updated question again, please @Oriol, you're right too! Applicable to my request, add catch with asynchronous throwing: .catch(function (error) { var a = []; a(); // error in last logic chain link }).catch(function (error) { setTimeout(function () {throw error}, 0); });
Solving Trig Problem - A Level Maths Question: An equation of a curve is $y = \cos 2x + 2\sin x$. Find $dy/dx$ and the coordinates of the stationary points from $0 < x < \pi$. I got as far as $-2\cos x = -4\sin x\cos x$ Surely now I can divide by $\cos x$ to get: $-2 = -4\sin x$, hence $0.5 = \sin x$ and so on... But apparently this is wrong? The answer is $\cos x(-2\sin x + 1) = 0$ meaning $\sin x$ is $0.5$ AND $\cos x = 0$. What is wrong with my method? Why does my method not also get the $\cos x = 0$ part? Thanks! Meaning $\sin x=0.5$ OR $\cos x=0$. Please read this tutorial on how to typeset mathematics on this site. The derivative of $y=\cos (2x) + 2\sin (x)$ is just $$\frac{dy}{dx}=-2\sin(2x)+2\cos(x)$$ and the stationary points are at $\frac{dy}{dx}=0$, which gives $\sin(2x)=\cos(x)$, i.e., $2\sin(x)\cos(x)=\cos(x)$. Note that you can't cancel the $\cos(x)$ since it could be zero for $x\in (0, \pi)$. Thus $\cos(x)(2\sin(x)-1)=0$. This is the case when $\cos(x)=0$ or $\sin(x)=\frac{1}{2}$, so when $x=\frac{\pi}{6}, \frac{\pi}{2}, \frac{5\pi}{6}$. The coordinates are then easy to find by substituting these $x$ values back into $y$: $(\frac{\pi}{6}, \frac32), (\frac{\pi}{2}, 1), (\frac{5\pi}{6}, \frac32)$ The derivative is simply $$\frac{dy}{dx}=-2\sin(2x)+2\cos (x)$$ Setting this to zero gives $$2\sin x\cos x=\cos x$$ $$\cos(x)(1-2\sin (x))=0$$ Now you can't cancel $\cos x$ as it can be zero, and division by zero is not defined. So the answers are... $\frac{\pi}{6},\frac{5\pi}{6},\frac{\pi}{2}$ You haven't taken into consideration the possibility that $\cos(x) =0$, because if it were zero you would be dividing by zero when going from $-2\cos(x) = -4\sin(x)\cos(x)$ to $-2 = -4\sin(x)$. Remember dividing by zero is ILLEGAL! Fundamentally, you can't divide by $\cos{x}$ if it is zero. Rearranging $ -2\cos{x}=-4\sin{x}\cos{x} $ gives $\cos{x}(2\sin{x}-1)=0$. 
I think you have a related issue in your assertion that this now means $\cos{x}=0$ and $\sin{x}=1/2$ (or at least, are being confusingly imprecise): if $ab=0$, this means one of the following is true: $a=0$ (and $b \neq 0$), $b=0$ (and $a \neq 0$), $a=0$ and $b=0$. (since "any number times zero is zero") Hence the condition $\cos{x}(2\sin{x}-1)=0$ includes: $x$ where $\cos{x}=0$ $x$ where $\sin{x}=1/2$ (and it happens in this case that these cannot both be true for the same $x$).
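As a quick numerical sanity check of the derivation above (my addition, not part of the answers), the derivative $-2\sin(2x)+2\cos(x)$ can be evaluated at the three claimed stationary points, and $y$ at each point compared with the stated coordinates:

```python
import math

def dydx(x):
    # Derivative of y = cos(2x) + 2 sin(x)
    return -2 * math.sin(2 * x) + 2 * math.cos(x)

def y(x):
    return math.cos(2 * x) + 2 * math.sin(x)

# The three stationary points found by factoring cos(x)(2 sin(x) - 1) = 0
for x in (math.pi / 6, math.pi / 2, 5 * math.pi / 6):
    print(round(x, 4), round(dydx(x), 12), round(y(x), 4))
```

The derivative vanishes (to floating-point precision) at all three points, and the $y$ values come out as $3/2$, $1$, $3/2$, matching the first answer's coordinates.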
How can missionary arguments about free will be refuted? In light of the fact that the majority of Christians including Catholics and Orthodox believe in predestination, as does Islam (in fact, one of Antony Flew's ("the world's most famous atheist") biggest reasons for being an atheist was that he correctly realized that if God exists He would be controlling everyone): I recently came across the following on chabad.org: QUESTION: "If everything is predetermined by G‑d, then how should we live?" ANSWER: As it is said, "Everything is in the hand of G-d except the fear of G‑d." There is much that is pre-determined - but not our moral choices. You would have realized this for yourself if you had pushed your questioning one step further, that is: If everything is predetermined by G‑d, then how can He reward us or punish us? The only possible answer is that those aspects in our lives which incur reward or punishment are not under the control of destiny; they are truly free choices. And this idea has serious support. For example, Rashi commenting on Sotah 2a says: “Do they in fact pair them up according to wickedness and merit? But before their formation, when his wickedness and meritoriousness are not known, they announce his match. And if you'll ask that everything is revealed before God, [the answer is that] everything is in the hands of Heaven except for fear of Heaven. As is stated in Tractate Niddah: "the angel appointed over pregnancy takes a drop and brings it before the Omnipresent and says before Him, 'what will be with this drop? Strong or weak? Wise or foolish? Rich or poor?'" But he does not say to Him 'righteous or wicked', since that is not in the hands of Heaven.” And Maimonides clearly held this view as well: “However, this is known without any doubt: That man's actions are in his [own] hands and The Holy One, blessed be He, does not lead him [in a particular direction] or decree that he do anything” (Hilchot Teshuva 5:5). 
“if we should say that God had destined that this sum should pass into the hands of the one and out of the possession of the other, God would be preordaining an act of iniquity" (Eight Chapters 8:6). "In making this assertion that obedience or disobedience to the Law of God does not depend upon the power or will of God, but solely upon that of man himself, the sages followed the dictum of Jeremiah, who said, "Out of the mouth of God there cometh neither the bad nor the good". By the words "the bad" he meant vice, and by "the good", virtue; and, accordingly, he maintains that God does not preordain that any man should be vicious or virtuous" (Eight Chapters 8:7). "It now remains for us to explain another phase of this problem, which arises from the fact that there are several Scriptural passages in which some think they find proof that God preordains and forces man to disobedience" (Eight Chapters 8:11). The Ra'avad responding to Maimonides said: “If the righteousness and wickedness of a person were dependent on the decree of God, we would say that His knowledge is His decree and we would have the very difficult question. But now that God has removed this dominion from His hand and given it to the hand of man himself, His knowledge is not a decree; rather, it is like the knowledge of the astrologers who know from another source what the ways of this person will be” (Hilchot Teshuva 5:5). Gersonides held this view and consequently denied that God knows the future: “[However,] human choice rules over this arrangement of their actions based on the celestial causes. And it is therefore possible that what people actually do is different from what God knew [they would do] based on the arrangement of their actions” (Commentary on Genesis 18). This view is also strongly implied by the idea that rulers do not have free will, which seems to imply that if they had free will then God would not be in control of what they do. 
This view is also strongly implied by the idea that you shouldn’t pray to be matched with a specific person. This seems to be based on the notion that if you end up marrying someone because you chose to pray about it using free will then God couldn’t have predestined it. This also seems to imply that free will is outside of God’s control. However God is the only First Cause which means that He willed everything about the universe including every decision made with free will. Any other view is illogical (and polytheist). So is it OK to say that these commentators acting out of a misguided desire to protect God’s goodness were simply wrong, but it doesn’t matter because Judaism doesn’t have dogmatic teachings about this sort of thing? Free will is not an invention of the Rabbis, but a pasuk in the Torah. Free will as a concept is notoriously paradoxical, and an unresolved one at that. It's not really fair to use one conclusion about the topic as a weapon against Judaism, which holds a different conclusion. We just accept it on faith, and so to do those who hold free will is impossible/polytheistic etc. @RabbiKaii All I'm talking about here is the claim that God doesn't control people's moral decisions, which is objectively false. Free will is a whole other discussion. Why do you keep bringing up missionaries? You know we don't do comparative religion. If it's an on-topic question about Judaism, ask the question on its own merit. Presenting it as a missionary argument doesn't give it any weight and only raises questions about your motives. @wmasse You said “God doesn’t control people’s moral decisions”. You then immediately said “free will is a whole other discussion”. Now take a second to think why we have a problem here. (Hint: focus on the terms “decision” and “free will”). @wmasse additionally, you can’t make an unsupported claim that something is objectively false in an attempt to disprove an argument that that thing is true. 
In this case, it is claimed that the freedom of moral choices does exist. Saying “Well no it doesn’t” isn’t a valid argument. I have upvoted @shmosel's question, because I think it would be very good if it were answered. Particular bizzare because this question seems much more likely to come from atheists than missionaries @shmosel I don't think it's comparative religion. If a claim is made about Judaism (in this case that Judaism teaches that God doesn't decide what people do) I want to know whether that claim is true or not. @Qwertrl No, I'm saying it's a fact that God does control people's moral decisions. That doesn't mean that we don't have free will, it just means you have to incorporate that in your idea of free will (i.e. from our perspective we have free will but not from God's) @Qwertrl I know I didn't give evidence to back that claim up, but the question is whether that claim is compatible with orthodox Judaism or if there's room for differing opinions. @RabbiKaii Not so. The majority of Christians including Catholics and Orthodox believe in predestination, as does Islam. In fact, one of Antony Flew's ("the world's most famous atheist") biggest reasons for being an atheist was that he correctly realized that if God exists He would be controlling everyone. @wmasse Free will is the freedom to want something as opposed to another. In other words, it is the ability to will independently. Free will exists. So does השגחה פרטית (complete Divine control). It’s always been known that the two apparently contradict each other, despite both being required by scripture. Wanna know how? Sucks, because God created knowledge, and He doesn’t have to make one of His creations conform to your beliefs about logic. Your inability to comprehend the existence of a paradox does not mean that every Rabbi who lived before you is wrong. 
[cont’d] It’s known that paradoxes exist, and the fact that this one arises from scripture instead of from pure mathematics doesn’t mean it can’t exist. @Qwertrl The quotes I gave don't sound like they believe in complete divine control. @wmasse the quotes you gave represent attempts at logically reconciling free will with the concept of complete divine control. If anything, those quotes can be taken as proof of the fact that their authors held that both concepts are completely true. The only one that even seems like it might disagree somewhat with complete divine control, is the last one, by Gersonides. Do some research into Izhbitzer philosophy: https://en.wikipedia.org/wiki/Mordechai_Yosef_Leiner?wprov=sfti1#Thought @Yø-cRo I'm confused, the Izhbitzer certainly held that ultimately God controls people's choices, so why does everyone on here seem to think this is heresy? @wmasse The Izhbitzer never truly says that there is no free will; in fact, he says the opposite several times (see e.g. Parshas Haazinu s.v. ה לה׳ תגמלו זאת). He does generally take a very nuanced view on free will, but the fact that it exists is indisputable in mainstream Judaism. @Yø-cRo This question was not meant to be about the nature of free will, but only whether or not the belief that moral decisions were predestined by God is part of Judaism. It sounds like the Izhbitzer as well as Crescas agree with this, so I am confused by some of the answers given. Is there room for differing opinion about this? @wmasse The problem with your question is your list of questionable premises, with the final question being “is it ok to say that anyone who disagrees with these premises is wrong?” I’m sure the reaction would be very different if you would ask “what is the Jewish view on predeterminism”. @Yø-cRo I was expecting someone to say that Judaism does believe in predeterminism and that the sources I quoted actually don't contradict predeterminism. 
As far as I can tell the common belief is that moral actions aren't predetermined but there are some who disagree. The premise of this question is unfounded. You write "However God is the only First Cause which means that He willed everything about the universe including every decision made with free will. Any other view is illogical (and polytheist)." This claim does not withstand scrutiny. G-d willed everything in the universe into existence, including a being that can make free choices, Man. That is what "created in in the image of G-d" means. G-d provides Man with multiple options and allows him/her to choose among them. G-d provides Man with the space to make these choices (G-d's will is necessary to effectuate those choices but the choices are Man's). The Torah expressly states this idea in Deuteronomy 30:15-20. רְאֵ֨ה נָתַ֤תִּי לְפָנֶ֙יךָ֙ הַיּ֔וֹם אֶת־הַֽחַיִּ֖ים וְאֶת־הַטּ֑וֹב וְאֶת־הַמָּ֖וֶת וְאֶת־הָרָֽע ... וּבָֽחַרְתָּ֙ בַּחַיִּ֔ים לְמַ֥עַן תִּֽחְיֶ֖ה אַתָּ֥ה וְזַרְעֶֽךָ See I have placed before you live and good and death and evil ... And you should choose life in order that you and your offspring shall live. You've asserted that this position is illogical and polytheist but have provided no argument to support that position. Moreover, everyone (or nearly so) alive experiences free will on a regular basis. Isn't it more likely that a universally experienced phenomenon exists and you just don't understand how it works, than it doesn't exist and G-d's just creating an illusion for his automatons to go through life deluding themselves that they can make choices? Appreciated, but my question was more about whether or not the view that God controls everyone is compatible with orthodox Judaism, because I know there are differing opinions about some of these things. @wmasse the view that humans cannot make choices is not compatible with orthodox Judaism as far as I know. Nor is it compatible with the Torah. 
I don't think the fact that God controls everyone necessarily prevents us from having a genuine experience of free will from our perspective. @wmasse our “experience” is irrelevant. If I hypnotize you (movie-style), without your consent, to rob a bank, then I—not you—should be the one punished. You had no choice in the matter, and thus, punishing you would be both illogical and futile, since you weren’t the perpetrator, but the non-consenting tool. If anything, you should be compensated: Had you not been hypnotized, you surely would never have done anything bad, and you’d likely regret having stolen. Your emotions and beliefs during hypnosis don’t logically bring about reward or punishment. It’s completely inane to think otherwise. @Qwertrl But earlier you said "their authors held that both concepts are completely true." If you meant free will and complete divine control then that makes sense (although they really dont sound like they believe in complete divine control). But if by free will you mean that God didn't decide from eternity what sins people would commit, then that's simply impossible because it's just affirming and negating the same statement. @Qwertrl Also being able to create beings with free will is a power that only God has because of things like divine simplicity and omnipotence. God doesn't need to use instruments to bring about His will. Unlike the hypnotist God can cause an effect with no inherent connection to a created cause. @wmasse We can make free choices but whether those free choices manifest in the world is up to G-d, not us. If Bob choose to murder Tim, G-d may deprive Bob of the ability to take action on that decision but Bob still made that choice and will receive divine judgment for it. The choice is free, the action in the world is under G-d's control. @conceptualinertia That explanation also sounds like it's implying that there are things that God doesn't control. @wmasse Yes. 
G-d doesn't control our choices (he can but chooses not to). @wmasse What’s happening here is very simple: You, understandably, have a problem with the idea that there’s something Hashem doesn’t control, because Hashem is omnipotent. But you fail to recognize that Hashem, being omnipotent, has the ability to not control something if He so desires. Your stated beliefs, not those of every Rabbi in the past few millennia, are internally dissonant. @Qwertrl In any case, if my view is unorthodox then why is this on Sefaria? https://www.sefaria.org/Mei_HaShiloach?tab=contents @wmasse Rav Leiner’s view on free will was extremely unorthodox. Your line of thought is not unique, nor is it accepted by a large percentage of Orthodoxy. Finding one single source that agrees with you does not validate your reasoning. @wmasse Not all of the works on Sefaria are Orthodox. That being said, the Ishbitzer is generally viewed as being Orthodox despite this theoretically antinomian view.This is because, despite this view, the Ishbitzer and his followers continued to adhere to strict religious practices of Hasidic Judaism. I suspect, very strongly, that the Ishbitzer did not actually believe the opinion attributed to him but used it as a way of explaining how to sinners should not despair from their past deeds. That claim is totally incompatible with Judaism. Reward and punishment are the result of moral choices. If those choices weren’t ours to make, we wouldn’t get reward or punishment, and thus a fundamental part of Judaism—referenced explicitly a multitude of times in scripture¹—would be impossible. See רמב״ם in משנֶה תורה, הִלְכוֹת תשובה, פרק ה׳ (from ״אִלּו הַאֵל היה גוזר…״). 
¹e.g: Genesis 4:7, קַיְן can make the moral choice to repent; Leviticus 26:2–45, blessings and curses depend on the explicitly-conditional moral choices of the nation; Deuteronomy 11:13-28, where the Jews are given a moral decision with its consequences laid out in detailed form, and then (in Parashat Re’eh) in abstract form; Deuteronomy 30:15–20, where the Jews are given another explicit moral choice in order to mentally prepare them before entering the land; Joshua 24:14–15, where Joshua tells the Jews to make a moral decision regarding their loyalty to G-D. +1 Surprised you didn't bring Rambam Hilchot Teshuva who brings your exact point :) @RabbiKaii Good point; I’ll add that. When I first wrote the answer, I thought it was such a logical concept that it didn’t need a source. But then again, if that were the case, this question wouldn’t have been asked. (And thanks for the +1!) @Qwertrl You're assuming that God controlling people's actions is mutually exclusive with free will. Also in your comment to my question you said that all of the quoted sources do believe in complete divine control. Complete divine control is not mutually exclusive with free will unless you think free will means that God had not control over it. @wmasse Right here, in a comment, define free will for us. Define it precisely, just as you understand it. @Qwertrl Free will means you genuinely could have chosen differently (from your perspective). That means if A > B where A is everything that led up to that decision and B is the decision, there is no necessary connection between them, so it could've been A > C and been just as intelligible. So acts of free will are almost like bare instances of God's will like the number of atoms in the universe or something like that. That's how we can have the "illusion" that we really could have chosen differently and hence why we can be justly punished. @wmasse Thank you. 
Please elaborate on the logical connection between the second-to-last sentence and what comes before it, and between that and the last sentence. I don’t see how they follow each other. @Qwertrl I don't want to go off on tangents here. The question is just about the statement "God decided from eternity what sins I would commit." I want to know what Judaism's view of this claim is. @wmasse Judaism’s view of that claim is that it’s entirely, utterly, and absolutely false, because of A) direct scriptural proof, and B) a very simple logical derivation from the fact of free will. I’m attempting to understand why you don’t agree with reason B, which, in my mind, can only arise from a non-traditional understanding of the term “free will”. @Qwertrl I don't think the statement in question is really debatable due to first-cause arguments and the like. However I'm not sure that this view is outside the pale of Judaism; see for example Hasdai Crescas's view in this question https://judaism.stackexchange.com/questions/856/do-we-really-have-free-choice @wmasse Again, you can’t disprove an argument by claiming that you’re right. You can’t just dismiss the traditional approach, and claim that your opinion is correct for the sole reason that you “don’t think [it] is really debatable”. That’s not how it works. Jews find the truth by asking a hundred questions, figuring out all possible answers, and logically debating which ones are correct until the truth is agreed upon. We never just dismiss an opinion because we don’t understand it. That’s not okay. If that’s what you want to do, Mi Yodeya isn’t for you. Anyone bothered by how man's free will can coexist with God has bigger questions he should be asking, such as why the existence of the finite world isn’t a direct refutation of an infinite God. The answer of course is that God can do anything. If that answer's not good enough for you, then good: you’re a healthy human who doesn’t fully grasp the infinite God. 
One line answer: the Jewish picture is that we only have free will when it comes to our relationship with God/morality. In a world where it feels like we have absolutely no control, so many people have been struggling with the concept of free choice. When we think about it, most of the choices that people make on a daily basis are not FREE at all. They are pragmatic decisions, based on a risk-benefit analysis, influenced by one’s personality and nature. This is not what we are referring to when we talk about free choice from a Torah perspective. G-d does not want to control us; He wants us to have independence. But the only area where our choice really matters is in our relationship with Him. What He wants is for us to choose Him. And this is where our free choice lies. Hence, "all is in the hands of heaven, except fear of heaven", i.e. except when it comes to our relationship with Hashem. See this lecture, "Is G-d Controlling You? The Truth About Our Freedom Of Choice", for more information. Rabbi Friedman's team recently sent this in an email, with the above blurb (which I have edited partially).
common-pile/stackexchange_filtered
add interfaces to a second OSPF process in cisco I am trying to create two OSPF processes on the same router. I configure both using the same commands, but only the one with the lower process ID takes the interfaces and works; the second one has no interfaces! The OSPF configuration takes place after assigning IP addresses to the involved interfaces. The commands I used: router ospf 1 network [my network range] area 1 and the same for ospf 2. Any idea how I can add interfaces to the 2nd OSPF process? Running two OSPF processes on the same interface is not supported and does not really make any sense to me from a protocol perspective. You'll need to configure the networks such that the two processes are running on different interfaces.
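Following that answer, a minimal sketch of such a configuration could look like this. The addresses, wildcard masks, and areas below are hypothetical (not from the question); the point is that each process's network statement matches a different interface range:

```
! Hypothetical addressing: each process claims a different interface.
router ospf 1
 network 10.0.1.0 0.0.0.255 area 1
!
router ospf 2
 network 10.0.2.0 0.0.0.255 area 0
```

Each network statement's range should cover only the interfaces meant for that process, so that no interface matches both processes.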
Center page in IE7 & page moves after clicking menu items My page seems to be centered in all modern browsers except IE7. In CSS I have simply: html, body { width: 1000px; margin: auto auto; } and it doesn't work. Another issue, for all browsers, is that the whole page slightly moves after clicking menu items. E.g. choosing the second menu item causes the page to be shifted to the right compared to the third page. Could you help me solve these problems? TIA To fix the first issue, remove html from the selector: body { width: 1000px; margin: auto auto; } The second issue is caused by there not always being a vertical scrollbar, which changes the width of the page and so causes a slight horizontal shift. Fix it by adding this, which forces there to always be a vertical scrollbar: html { overflow-y: scroll } Thanks for your help. I'm a very beginner front-end developer, so my questions may be simple and primitive. You're welcome. Beginner-level questions are welcome here, provided that they make sense.
flutter's AutomaticKeepAliveClientMixin doesn't keep the page state after navigator.push was testing AutomaticKeepAliveClientMixin and run into an issue, page loses state after navigator.push anyone knows this issue? any workarounds? be glad for any info, cheers my goal is to keep the page state steps to reproduce: open app click PageOne's push-button then go back swipe right and left and the page loses state image import 'package:flutter/material.dart'; void main() => runApp(MaterialApp(home: MyApp())); class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( home: DefaultTabController( initialIndex: 0, length: 2, child: Scaffold( body: TabBarView( children: <Widget>[Page1(), Page2()], ), bottomNavigationBar: Material( child: TabBar( labelColor: Colors.black, tabs: <Widget>[ Tab( icon: Icon(Icons.check), ), Tab( icon: Icon(Icons.check), ), ], ), ), ), ), ); } } class Page1 extends StatefulWidget { @override Page1State createState() { return new Page1State(); } } class Page1State extends State<Page1> with AutomaticKeepAliveClientMixin { @override Widget build(BuildContext context) { return ListView( children: <Widget>[ Container( height: 300, color: Colors.orange, ), Container( height: 300, color: Colors.pink, ), Container( height: 300, color: Colors.yellow, child: Center( child: Container(height: 26, child: MaterialButton( color: Colors.blue, child: Text('clicking this and back then swipe => page loses state'), onPressed: () { Navigator.push( context, MaterialPageRoute(builder: (context) => PushedPage()), ); }), ), ), ), ], ); } @override bool get wantKeepAlive => true; } class Page2 extends StatelessWidget { @override Widget build(BuildContext context) { return Container(height: 300, color: Colors.orange); } } class PushedPage extends StatelessWidget { @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar(), body: Container( color: Colors.blue, ), ); } } why you say it lose the state? 
Because the listview position is reset to the beginning after push, back, swipe https://stackoverflow.com/a/53702330/10269042 Sorry @anmol.majhail I didn't quite understand, you mean I should add super.build(context); to the build? yes - before return ListView( children: <Widget>[ add - super.build(context); indeed it works @anmol.majhail thumbs up! added it as answer From the documentation on AutomaticKeepAliveClientMixin: A mixin with convenience methods for clients of [AutomaticKeepAlive]. Used with [State] subclasses. Subclasses must implement [wantKeepAlive], and their [build] methods must call super.build (the return value will always return null, and should be ignored). So in your code, before you return the ListView just call super.build: Widget build(BuildContext context) { super.build(context); return ListView(... }
Disable enter fullscreen mode automatically based on the device's width in vidstack react player I am using this player in one of my React projects. I am showing the video player in a modal. So when the screen width of the device is below 1024px then I want the fullscreen mode should not apply. I read the official documentation but didn't get the official way to disable entering fullscreen mode automatically for a specific device's width. <MediaPlayer className="w-full h-[540px] bg-slate-200 text-white font-sans overflow-hidden rounded-md ring-media-focus data-[focus]:ring-4" title={title} src={src} crossorigin ref={player} autoplay > <MediaProvider> <Poster className="absolute inset-0 block h-full w-full rounded-md opacity-0 transition-opacity data-[visible]:opacity-100 object-cover" src={thumbnail_url} alt={title || ""} /> </MediaProvider> <VideoLayout /> </MediaPlayer> After discussing with the author of the player Rahim Alwer there is a property available for the MediaPlayer component named playsinline so this is what I did import { useRef, useState, useEffect } from 'react'; const VideoPlayer: React.FunctionComponent<VideoPlayerProps> = ({ src, title, thumbnail_url }) => { let player = useRef<MediaPlayerInstance>(null); const [windowWidth, setWindowWidth] = useState(window.innerWidth) useEffect(() => { function reportWindowSize() { setWindowWidth(window.innerWidth) } // Trigger this function on resize window.addEventListener('resize', reportWindowSize) // Cleanup for componentWillUnmount return () => window.removeEventListener('resize', reportWindowSize) }, []) return ( <MediaPlayer playsinline={windowWidth >= 1024} > ... </MediaPlayer> ) } PS:- There was a small bug with it too that got resolved today. so we should be on the latest version(1.5.3).
R pagedown/shinyapps error - Google Chrome cannot be found I've been trying to use the pagedown package to create a PDF from an HTML file which is better formatted and more presentable from a shinyapp. It tested fine locally on my machine. After deployment, the document failed to download with the following error from the logs: Warning: Error in find_chrome: Cannot find Chromium or Google Chrome [No stack trace available] I am using Google Chrome to view the app but I am guessing there's more to it than this? The code for the download is as follows: output$downloadDrug <- downloadHandler( filename = function() {"drug-instructions.pdf"}, content = function(file) { src <- normalizePath('report_drugHTML.Rmd') src2 <- normalizePath("printout.css") label <- normalizePath("pxLabel.png") logo <- normalizePath("logo.png") fasting <- normalizePath("fasting.png") owd <- setwd(tempdir()) on.exit(setwd(owd)) file.copy(src, 'report_drugHTML.Rmd', overwrite = TRUE) file.copy(src2, "printout.css", overwrite = TRUE) file.copy(label, "pxLabel.png", overwrite = TRUE) file.copy(logo, "logo.png", overwrite = TRUE) file.copy(fasting, "fasting.png", overwrite = TRUE) library(rmarkdown) out <- render('report_drugHTML.Rmd', params = list(name = input$px_name, dob = input$dob), 'html_document') library(pagedown) out <- pagedown::chrome_print(out, "drug-instructions.pdf") file.rename(out, file) } ) Any suggestions would be greatly appreciated. I am still a novice so please feel free to point out the obvious. A complete, minimal example that reproduces the error would go a long way to get people's help. I am not sure how to produce this as it will need code on my app, my R markdown and data. The problem seems to also happen when it is deployed, as it works fine on my machine. If I am not mistaken, the page is "printed" on the server side. So I guess Chrome is not available on the machine where your app is deployed.
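Picking up on that last comment: chrome_print() runs Chrome on the server, so the deployment machine itself needs Chrome or Chromium. If it is installed but not on a standard path, pagedown's find_chrome() honours the PAGEDOWN_CHROME environment variable; the path below is an assumption for illustration:

```r
# Run this before chrome_print() is called, e.g. near the top of app.R.
# The path is hypothetical -- point it at wherever Chromium actually
# lives on your server. Hosts that ship no Chrome at all (such as
# shinyapps.io) cannot be fixed this way.
Sys.setenv(PAGEDOWN_CHROME = "/usr/bin/chromium-browser")
```

If the variable is unset and no browser is found in the usual locations, find_chrome() fails with exactly the "Cannot find Chromium or Google Chrome" error shown in the logs.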
Prevent the creation of routes from components and public folders in the pages folder. And deny access to see files in public folders. NextJs I have NextJs application with a custom server where I have customized a route. Where I show the template depending on the hostname and pathname. My server.js const next = require('next'); const { createServer } = require('http'); const { parse } = require('url'); const dev = process.env.ENV === 'local'; const app = next({ dev }); const handle = app.getRequestHandler(); const port = 3000; app.prepare().then(() => { createServer(async (req, res) => { const parsedUrl = parse(req.url, true); const { pathname, query } = parsedUrl; const pathnameFromServer = 'test_path_name'; const templateFromServer = 'template1'; if (pathname === `/${pathnameFromServer}`) { await app.render(req, res, `/${templateFromServer}`, query); } else { await handle(req, res, parsedUrl); } }).listen(port, (err) => { if (err) throw err; console.log('> Ready on http://localhost:3000'); }); }); I am creating a template in the pages folder. And I want to create the components, public folder in it. Like this. But NextJs build everything in the pages folder as routes. My next.config.js const withTM = require('next-transpile-modules')(['@nfs-react/components']); module.exports = withTM({ useFileSystemPublicRoutes: false, }); I need to prevent the creation of routes from components and public folders in the pages folder. I also need to deny access to see files in public folders. You can name the /pages folder something else, so Next.js will ignore it. A good idea! But then the routes are not created at all :( 404... Error: > Couldn't find a pages directory. Please create one under the project root @Grigri, you can leave /pages empty. Add your routes to some other directory. Keep useFileSystemPublicRoutes as false in the config file. If the custom routes are not created check the custom server script whether it defines routes correctly. This is exactly what I did. 
But besides 404 nothing more was created. https://user-images.githubusercontent.com/65594153/85734445-b9507a00-b705-11ea-9540-30a314dfd6e1.png @nikolai-kiselev This is what misleads me. But everything works as I want. What do you think? Does this add extra code to the build? @Grigri, even if useFileSystemPublicRoutes is set to false, the pages inside /pages will still be generated. So you need to remove all files from this folder. I can’t find how to specify the pathname folder outside the pages folder, because app.render()'s pathname is resolved at the pagesDir level. @Grigri, see example https://github.com/vercel/next.js/tree/canary/examples/custom-server app.render(req, res, '/b', query) - where '/b' is the path in the pages folder. But this, unfortunately, does not fit. @nikolai-kiselev
C# ServiceStack JsonSerializer Deserialize How can I deserialize a string to a JSON object where the JSON object can be a single item or an array? Right now I have this, which works but it's a hack (pseudo): class MyObject{ public string prop1; public string prop2; } class MyList{ List<MyObject> objects {get; set; } } class Test{ MyList list = JsonSerializer.Deserialize<MyList>(str); //if list is null - it can be single if(list == null){ MyObject myObject = JsonSerializer.Deserialize<MyObject>(str); if(myObject != null) list.add(myObject); } } As shown above, the problem is that the JSON string I am receiving from another service can be either single or a list. How to handle this elegantly? The problem lies more in your JSON message: an attribute should always be either 1 object or an array. If it's an array and the result set only has one result then it should still be encapsulated in an array, so you can't run into the problem you're having. Here's another highly-voted SO answer which could help you http://stackoverflow.com/questions/7895105/deserialize-json-with-c-sharp Isn't the link you sent me the same exact thing I have? Maybe, I didn't compare. But the answer under here is the correct answer to your SO. I would strongly advise against accepting different structures in the same argument, it makes your software highly brittle and unpredictable. 
But if it could be a list you can just check whether the first char is a [, e.g: if (str.TrimStart().StartsWith("[")) { MyList list = JsonSerializer.Deserialize<MyList>(str); } else { MyObject myObject = JsonSerializer.Deserialize<MyObject>(str); } Also please note that by default all ServiceStack text serializers only serialize public properties, so you need to add getters/setters to each property you want serialized, e.g: class MyObject { public string prop1 { get; set; } public string prop2 { get; set; } } class MyList { List<MyObject> objects { get; set; } } Otherwise you can configure ServiceStack.Text to also serialize public fields with: JsConfig.IncludePublicFields = true;
Display error message to user (node.js) So I've been trying to research online for solutions, and none of them seem to work, either because I'm doing it wrong, or they don't work in my situation. I have a webpage which gives the user a place to enter email/pass etc. When they press submit, it calls a post handler, which has all the validation contained within it. Like so: app.post('/check', function(req, res){ function emailCheck(email){ //when there's an error: console.log(error), and return false } function passCheck(password){ //when there's an error: console.log(error), and return false } if (passCheck(password) && emailCheck(email)){ //enter user into database } }); When there is an error, I want it to be displayed to the user, either by use of a popup box, or just text positioned under the sign-up box. Any suggestions would be great. what view engine are you using? @Alex All I'm using at the moment is HTML and node.js. Not sure if that's what you wanted? Are you using some framework (Express, etc.)? What error could you catch? With Express, it's possible to catch some HTTP errors, like 404 or 502, in each request... @LucasCosta Oh, yes, I'm using express. I would make an ajax request from the client to the server. If the validation fails you can send data specifying the problem to the client and do what you'd like with it. Would this then require me making all the validation on the client side, rather than using the server? No, an example would have the server do the validation logic and return what fields are incorrect to the client, and the client can process that information. Sorry for the late reply, can you point me in the right direction? I've been researching how to do this through AJAX but can't find anything. 
//register User on post request from register form router.post("/register", function(req, res){ var newUser = new User({username: req.body.username}); User.register(newUser, req.body.password, function(err, user){ if(err){ // console.log(err.message); req.flash("error", err.message); return res.render("register"); } passport.authenticate("local")(req, res, function(){ req.flash("success","You created a new User Account " +user.username); res.redirect("/campgrounds"); }); }); }); You can install the connect-flash package: npm install connect-flash and use err.message, as above, to display the errors
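Expanding the AJAX suggestion from the comments into a runnable sketch (the field rules, route, and messages here are illustrative assumptions, not from the original post): keep the validation on the server, return the failures as JSON, and let the client render them under the sign-up box.

```javascript
// Hypothetical server-side validation helper: instead of logging and
// returning false, collect field-level error messages to send back.
function validate(body) {
  const errors = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(body.email || '')) {
    errors.push({ field: 'email', message: 'Invalid email address' });
  }
  if (!body.password || body.password.length < 8) {
    errors.push({ field: 'password', message: 'Password must be at least 8 characters' });
  }
  return errors;
}

// In the Express route, something like:
//   app.post('/check', (req, res) => {
//     const errors = validate(req.body);
//     if (errors.length) return res.status(400).json({ errors });
//     // ...enter user into database...
//     res.json({ ok: true });
//   });
//
// On the client, POST the form with fetch(); on a 400 response, render
// each errors[i].message as text under the matching input (or a popup).
```

Keeping the checks in one helper means the same error objects can drive a popup, inline text, or anything else the client prefers.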
Error message is returned only for 200 or 500 error codes I'm trying to return an error response to the client, but it returns only when I set the status code to either 200 or 500. When I set it to 304 it does not return the message. What am I missing? Please clarify: res.send(304, { message: 'cannot update config with empty value' }); // update configure options exports.update = function(req, res) { if (req.body.items) { // if items exists do something .... .... } else { // returns error message when set to 200 or 500 but does not when 304 // i.e., response body is empty res.send(304, { message: 'cannot update config with empty value' }); } }; Can you make a minimal test case? Is this using Express.js? Simplified the code. Yes, it uses ExpressJS. Try this: res.status(304).send({ message: 'cannot update config with empty value' }); Did not help. Same as before. In my app I use the 404 status code and return JSON; it works fine for me. Indeed it works for 404 for me too. I wonder what's special about 304. Is there any significance to this code that I should know? While this answer is probably correct and useful, it is preferred if you include some explanation along with it to explain how it helps to solve the problem. This becomes especially useful in the future, if there is a change (possibly unrelated) that causes it to stop working and users need to understand how it once worked. The HTTP spec on w3.org explains that "The 304 response MUST NOT contain a message-body": 304 only indicates that the content was not modified, so any body you attach is dropped; see here
NSNotification center may not respond to -object? I'm trying to make simple use of the NSNotificationCenter inside my iPhone application, but I seem to be doing something wrong in this case. I was under the impression that it was possible to retrieve an object associated with a particular message, or at least a reference to the object, but using the following example code I'm getting a warning, "NSNotification center may not respond to -object": - (void)addNewBookmark:(NSNotificationCenter *)notification { Bookmark *newBookMark = (Bookmark *)[notification object]; //Do some stuff with the bookmark object } Indeed, when I compile and run the code, basically nothing I try to do with the contents of the object actually gets carried out - it's simply ignored. The post code is as follows: - (IBAction)save:(id) sender{ //Sending the message with the related object [[NSNotificationCenter defaultCenter] postNotificationName:@"addNewBookmark" object:bookmark]; } and the bookmark object itself is just a dictionary. I also tried using the "userInfo" argument and passing the bookmark object through that, but the result was the same. How should I be doing this? What am I doing wrong? Your addNewBookmark: method should accept an NSNotification, not an NSNotificationCenter. NSNotification should respond to -object as expected. A notification center is the object in charge of keeping track of who is listening and sending notifications (not centers) to them. Oh dear, I feel silly. Too quick with the autocomplete. I'll accept this as soon as the timer allows it.
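Putting the accepted answer's fix in place, the corrected handler looks like this (Bookmark is the asker's own class):

```objc
// The parameter is the posted NSNotification itself, not the center.
- (void)addNewBookmark:(NSNotification *)notification {
    Bookmark *newBookMark = (Bookmark *)[notification object];
    // Do some stuff with the bookmark object
}
```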
receiving either value for the navigation menu in php I have two navigation one is in the header and the other is in the same page, now what I want is to do is be able to pull all the data from the table product a specific product from the sub category when I click the sub category. what I have now, is when I click on the menu that is on the header page that I have included in every page I get the product listed but the menu that is on product page produce an error that the foreach is empty. when I use my other function to pull all the sub category it works fine but I want to pull only the sub category that belong to that particular main category. the code below will make more sense. header Page: <?php error_reporting(E_ALL); ini_set('display_errors','1');?> <?php require_once("initi.php");?> <html> <head> <title></title> <link rel="stylesheet" href="css/reset.css" /> <link rel="stylesheet" href="css/main.css" /> <link rel="stylesheet" href="css/footer.css" /> <script type="text/javascript" src="/ghaniya.com/javascript/jqu.js"></script> <script type="text/javascript"> $(document).ready(function(){ $(".logbtn").click(function(){ $("#login2").hide("show"); $("#login1").slideToggle("slow"); $(this).toggleClass("active"); }); $(".logbtn2").click(function(){ $("#login1").hide("show"); $("#login2").slideToggle("slow"); $(this).toggleClass("active"); }); }); </script> <style type="text/css"> div#login1{ background-color:#efebee; margin:0 auto; width:100%; display:none; height:100px; } div#login2{ background-color:#f5dce4; margin:0 auto; width:100%; display:none; height:100px; } .size{width:960px; } .flotl{float:left; font-size:11px; text-decoration:none; height:30px; margin-left:20px;} div#centerl{ width:960px; margin:0 auto; } </style> </head> <body> <div id="head"> <div class="size center"> <img src="img/logosmall.png"> <ul id="mainLink"> <li><a class="logbtn">Women</a></li><li><a class="logbtn2" href="#">man</a></li><li><a href="#">Beauty</a></li><li><a 
href="">Gifts</a></li><li><a href="">Perfumes</a></li> </ul> <div id="loginBox"> <h3>REGISTER | SIGN IN</h3> </div> <div id="search"> <input type="submit"/><input type="text" value="Search"/> </div> <div id="bag"></div> </div> </div> <div id="login1" > <div id="centerl"> <ul class="flotl" > <?php $cata = Catagory::find_all(); foreach($cata as $catag){?> <li><a href="/ghaniya.com/product.php?catid=<?php echo $catag->catagory_id;?>"><?php echo $catag->name; ?></a></li> <?php }?> </ul> </div> </div> Product Page: <?php require_once("includes/head.php"); ?> <?php $cata_id = ""; $subcat =""; if(isset($_GET['catid'])) { $cata_id = $_GET['catid']; $product = Product::find_by_cata_id($cata_id); } elseif(isset($_GET['subcat'])) { $subcat =$_GET['subcat']; $subcat = SubCata::find_by_cata_id(1); print_r($subcat); } ?> <div class="cBoth"></div> <div id="contentEveryWhere" class="center"> <div id="cata"> <div id="leftNoticeBoard"></div> <ul id="catagories"> <h1>CLOTHING</h1> <?php foreach($subcat as $sub){ ?> <li><a href="product.php?subcat=<?php echo $sub->subcata_id; ?>"><?php echo $sub->name; ?></a></li> <?php } ?> </ul> </div> <div id="products"> <div id="catatitle"> <ul> <li>Home ></li><li>Women ></li><li>Coat<li> </ul> <h1>Coat</h1> <p class="floatl">Sort items by Price: High | Low View as: <img src="#" alt="icon png"/> |<img src="#" alt="icon png"/></p> </div> <ul id="product"> <?php $product = Product::find_by_cata_id($cata_id);?> <?php foreach($product as $pro){?> <?php $image = Image::find_by_product_id( $pro->product_id);?> <li> <?php if(isset($image->filename)){ ?> <img src="/ghaniya.com/img/products/<?php echo $image->filename;?>"/> <?php } ?> <h2><?php echo $pro->name;?></h2> <h3> <?php echo $pro->product_desc;?></h3> <h4><?php echo $pro->price;?></h4> </li> <?php } ?> </ul><!--end of product list--> </div> <!--end of products div--> </div><!--end of contentEveryWhere div--> <div class="cBoth"></div> <?php require_once("includes/footer.php"); ?> Warning: Invalid 
argument supplied for foreach() in /Users/mughery/Website/ghaniya.com/product.php on line 25 The problem is quite simple: if(isset($_GET['catid'])) { $cata_id = $_GET['catid']; $product = Product::find_by_cata_id($cata_id); } elseif(isset($_GET['subcat'])) { $subcat =$_GET['subcat']; $subcat = SubCata::find_by_cata_id(1); } That means you're only setting $subcat if, and only if, $_GET['catid'] isn't set and $_GET['subcat'] is. In all other cases $subcat is an empty string, not an array. Just add a line like: $subcat = (is_array($subcat) ? $subcat : SubCata::find_by_cata_id(1)); //or: $subcat = (is_array($subcat) ? $subcat : array()); And you'll be good to go Also, this: $subcat =$_GET['subcat']; $subcat = SubCata::find_by_cata_id(1); Doesn't really make sense to me, I think you need something like: elseif(isset($_GET['subcat'])) { $subcat = SubCata::find_by_cata_id($_GET['subcat']); } $subcat = (is_array($subcat) ? $subcat : SubCata::find_by_cata_id(1)); If not, regardless of the subcat parameter's actual value, you're fetching all data for id 1... I used the 1 to test if I am able to pull the value, but thanks for your support. I am now able to pull both of the values, but when I click either link then both the menu and the product list do not show. I will try to solve this myself; if I can't, then I will be back. Or maybe I will go ahead and display the list in another page. But thanks again for the kick start. @BashaMan: If this answer was helpful, or enabled you to solve your issue, an up-vote or accepting it would be greatly appreciated ;) I am sorry mate, actually it works, I just misspelled something, but yes your suggestion was perfect. Now it works with no problem at all. Thanks, thanks, thanks a lot. :) You're welcome, happy coding ;) - and no need to apologize... you did nothing wrong. I didn't mean to be blunt, but lately it seems new users don't get the voting system, so a lot of answers (and effort) aren't getting rewarded. 
That's why I'm going to point that out to ppl in future :) When using a foreach, the first argument has to be an array. You could change the code to: <?php if( is_array( $subcat ) ) foreach($subcat as $sub){ ?> Or initialize it at line 4: <?php $subcat = array(); ?>
Entity Framework get a select command object from a stored procedure I got a simple stored procedure which returns a few columns from a much bigger table with a small term. Something like this: CREATE PROCEDURE spTemp @Type nvarchar(25) AS SELECT DISTINCT gType, gYear, gModel FROM MyTableOnDB WHERE gType = @Type GO I also got an object to fill the data into. Something like: public class info { public string Type {get; set;} public string Model {get; set;} public int year {get; set;} } What would be the right / best way to connect the object to the returned value of the procedure with Entity Framework in mind? P.S. I am using SQL Server, EF, C#. As you are using EF, your entity will get created in your solution. You can call your stored procedure and convert the result to the entity type. Here is a sample way: return ((IObjectContextAdapter)this).ObjectContext.ExecuteFunction<info>("spTemp", new ObjectParameter("Type", type)); As the linked Stored-Procedure tutorial shows, you can import the stored procedures in VS and use them to generate data. You need to import the stored procedure into the model. It will create a new complex type with the name {sp name}_Result. You will not need the complex type for your query, so remove this by right-clicking on the function import in the project explorer; in this dialog, change the return type to return the appropriate entities. When complete, you will see the function in the class. You can then run the procedure by something like this: using (var context = new tempDBEntities()) { var results = context.spTemp("SomeType"); foreach (var cs in results) Console.WriteLine(cs.gType); } I have adapted this from the tutorial here, please let me know if you get it working. I am trying to upskill in entity framework. http://www.entityframeworktutorial.net/stored-procedure-in-entity-framework.aspx
common-pile/stackexchange_filtered
Can't figure out why simple method is infinitely looping in Java It's just a simple little method that gets user input to convert an integer from decimal to binary. It uses do-while loops to restart and verify valid input. When it catches an InputMismatchException, it starts to infinitely loop this: Must enter a positive integer, try again. Enter positive integer for binary conversion: I don't know why the Scanner isn't causing the program to wait for new input when I call nextInt(). Here's the code for the method: public static void main (String[] theArgs) { final Scanner inputScanner = new Scanner(System.in); boolean invalidInput = false; boolean running = true; int input = 0; do { do { System.out.println("Enter positive integer for binary conversion:"); try { input = inputScanner.nextInt(); if (input < 1) { System.out.println("Must be a positive integer, try again."); invalidInput = true; } else { invalidInput = false; } } catch (final InputMismatchException e) { System.out.println("Must enter a positive integer, try again."); invalidInput = true; } } while (invalidInput); System.out.println(StackUtilities.decimalToBinary(input)); System.out.println("Again? Enter 'n' for no, or anything else for yes:"); if (inputScanner.next().equals("n")) { running = false; } } while (running); } Have you tried printing input? Do you know which version of "Must be a positive integer, try again." is printed (are you sure it is the one in the exception?). Look up "how to debug small programs" for more hints. If you enter an invalid value, nextInt() throws InputMismatchException without consuming the invalid value, so when you loop back, it's still there. Add e.g. next() inside the catch block to consume the invalid value.
You need to clear the buffer when the user enters the wrong type of input. Just use inputScanner.next() or inputScanner.nextLine() in the catch block to clear the buffer: catch (final InputMismatchException e) { inputScanner.nextLine(); System.out.println("Must enter a positive integer, try again."); invalidInput = true; }
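For comparison, the same validated-input loop can be sketched in Python (an illustrative sketch only, not part of the original question). Python's input() always consumes the whole line, which is exactly the behavior that calling nextLine() in the Java catch block restores:

```python
def read_positive_int(read=input):
    """Keep asking until we get an int >= 1.  Unlike Java's Scanner.nextInt(),
    each call to read()/input() consumes the whole line, so a bad token can
    never be left behind in the buffer to be re-read forever."""
    while True:
        line = read("Enter positive integer for binary conversion: ")
        try:
            n = int(line)
        except ValueError:
            print("Must enter a positive integer, try again.")
            continue  # the bad line is already consumed, so no infinite loop
        if n < 1:
            print("Must be a positive integer, try again.")
            continue
        return n

# scripted demo instead of interactive input
scripted = iter(["oops", "0", "42"])
print(read_positive_int(lambda _prompt: next(scripted)))  # 42
```

The scripted reader is just a stand-in for input() so the retry path can be exercised without a keyboard.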
common-pile/stackexchange_filtered
Select values between from and to columns SELECT Q.mem_id FROM tb_mem_share Q, tb_member Mb WHERE Mb.mem_id = Q.mem_id AND Q.share_num_from BETWEEN '42368' AND '42378' SELECT * FROM tb_mem_share WHERE share_num_from >= 42368 AND share_num_from <= 42378 Running this I get only the second record: mem_id | share_num_from | share_num_to | no_of_shares | share_amt -----------+------------------+----------------+----------------+-------------- KA003871 | 42360 | 42369 | 10 | 10000 KA000401 | 42370 | 42379 | 10 | 10000 What am I doing wrong? You have two queries there, trying to do what? Add sample table data, and the expected result (as formatted text). Don't compare numbers to strings. '42368' is a character value, not a number; 42368 is a number. share_num_from in your first record is 42360 and is not between '42368' and '42378'. Not sure what you are trying to do, but if you want to select based on values from your from & to columns you should at least include both of them in your query: SELECT * FROM tb_mem_share WHERE share_num_from >= 42368 AND share_num_to <= 42378; Currently you are filtering on from twice.
common-pile/stackexchange_filtered
Looking for name of a priority queue data structure based on bit-boundary buckets I'm looking for the name of a data structure. I came up with this idea but suspect it should already be known, and don't know what to search to find it. This priority queue is based on putting items into buckets based on the position of the largest bit that differs between the largest dequeued priority so far, and the item's priority. Its advantage, over for example a heap, is that it tends to move events in big contiguous groups instead of one by one which gives more memory locality. Note that this priority queue requires that dequeued priorities must increase monotonically. So for example it could be used for an advancing timeline of potential events, but it couldn't be used in all situations. Suppose priorities are 32 bit integers. Then the priority queue has 33 buckets, each of which is just a vector of prioritized items where the priorities in bucket k differ from the largest dequeued priority at bit position k-1 (with "-1" meaning no difference). To enqueue an item, append it into bucket std::bit_width(largest_dequeued_priority ^ priority_of_item). If the item's priority is less than largest_dequeued_priority, throw a "not monotonic" exception instead of enqueueing. To dequeue an item, find the first non-empty bucket. If there is none, fail. If the first non-empty bucket isn't bucket 0, then advance largest_dequeued_priority to the minimum item priority in that bucket and re-enqueue every item in the bucket to redistribute the items to lower buckets. Now pop an item out of bucket 0 and return it as the result. Cross-posted: https://cs.stackexchange.com/q/153604/755. Please do not post the same question on multiple sites. Which site do you think is more appropriate for this question? This data structure is called a radix heap.
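A minimal, unoptimized Python sketch of the structure described above (the class and method names are my own invention, not an established API; priorities are assumed to be nonnegative integers):

```python
class MonotonePQ:
    """Bucket priority queue keyed by the highest bit that differs from the
    largest dequeued priority, as described above.  32-bit priorities give
    33 buckets; bucket 0 holds items equal to the current floor."""

    def __init__(self):
        self.floor = 0                        # largest dequeued priority so far
        self.buckets = [[] for _ in range(33)]

    def push(self, prio, item):
        if prio < self.floor:
            raise ValueError("not monotonic")
        # bucket index = bit_width(floor XOR prio); Python's int.bit_length()
        # plays the role of std::bit_width here
        self.buckets[(self.floor ^ prio).bit_length()].append((prio, item))

    def pop(self):
        k = next(i for i, b in enumerate(self.buckets) if b)  # first non-empty
        if k != 0:
            # advance the floor to the bucket's minimum priority, then
            # redistribute the whole bucket into strictly lower buckets
            moved, self.buckets[k] = self.buckets[k], []
            self.floor = min(p for p, _ in moved)
            for p, it in moved:
                self.push(p, it)
            return self.pop()
        return self.buckets[0].pop()

pq = MonotonePQ()
for p in (5, 3, 9, 3, 7):
    pq.push(p, "event%d" % p)
print([pq.pop()[0] for _ in range(5)])  # [3, 3, 5, 7, 9]
```

The key invariant is that redistributing bucket k can only move items into strictly lower buckets: after the floor advances to the bucket's minimum, every item in that bucket agrees with the new floor on all bits at or above position k-1, so the recursion terminates.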
common-pile/stackexchange_filtered
How do I print the current NixOS' nix.package version? A NixOS configuration is built using the /etc/nixos/configuration.nix file. This configuration has a nix.package property. In a NixOS instance, I want to print the version/hash (i.e., unique identifier) of the nix.package object that has been used in building the current instance. Ideally, this should be stored inside a lockfile, but I don't believe the current version of nixos-rebuild uses those. Should this not be possible, can I explicitly store this hash somewhere during the build process by modifying my /etc/nixos/configuration.nix? For what purpose would you like to use a lockfile? We generally lock sources before evaluation (pinning), or we can work with the system "top-level" store path for binary deployment. Yes, you can access this attribute via NixOS' config parameter and use it in your configuration, or as part of a package. For example, this module causes the version and the store path to be written to files in /etc upon activation. { config, lib, ... }: { config = { environment.etc."x-nix-version".text = config.nix.package.version; environment.etc."x-nix-path".text = "${config.nix.package}"; }; } Alternatively, you can extract it from a potentially not yet built configuration using the nixos-option command or nix repl '<nixpkgs/nixos>'.
common-pile/stackexchange_filtered
How can I find out which tables and stored procedures a database user has accessed? Is there a way to find out which tables and stored procedures a user has accessed? Note that this is what the user has actually accessed and not what they can access. For tables, is it also possible to know whether the access was read or write? I would like this information to determine which database privileges can be removed for this user. Please, before down voting, can you leave a comment with some constructive feedback? I've tried to be clear and concise with my requirements but do let me know if I can expand on anything and I will edit the question. I Googled around for some time and could not find anyone asking the same question. Check out ApexSQL Audit. I'm a developer and don't use the audit tools myself, but my DBA says that specific tool (Apex has a ton) allows auditing of all changes, and can log individual SELECT and EXEC statements too. We aren't actually doing that so I can't say how well it works, but he said it was an option. @RobertSheahan thanks for this. I'm going to first investigate what SQLGeorge has posted as an answer as that's built-in so there are no costs involved. It is possible, but not for the past. On SQL Server 2014, you can set up specific monitoring tasks (called "auditing"), so that from now on you can collect the access data you want. Thanks for this. It looks like it will do the trick. Is it this that you're referring to? It suggests this was available from SQL Server 2008 so just making sure it's what you had in mind. Yes, exactly this is it. You can also find documentation on Microsoft sites. https://learn.microsoft.com/en-us/sql/relational-databases/security/auditing/sql-server-audit-database-engine?view=sql-server-2017
common-pile/stackexchange_filtered
SQL Injection Attack through form input Hi, I want to show an SQL injection vulnerability through form input using PHP and MySQL. Any suggestion how to go about it? Thanks Not exactly sure what you're asking, but it sounds like this fits: http://stackoverflow.com/questions/9624292/is-this-a-secure-method-to-insert-form-data-into-a-mysql-database?lq=1 Possible duplicate of How can I prevent SQL-injection in PHP? In your input that connects to a mysql function, try SQL statements like DELETE table_name; or TRUNCATE table_name You might also want to tag it with mysql and php rather than sql so the right people come here. The SQL tag is a bit too ambiguous. Thanks for replying, excuse me for being vague. I know how to prevent SQL injection by adding stripslashes and real_escape_string. I want to demonstrate a vulnerable login which is not working by simply removing the stripslashes. I even tried; admin' 1-- and it's giving me wrong username or password. mysql_query("INSERT INTO `table` (`column`) VALUES ('$inject_variable')"); If you have a query like this you can insert something like value'); DROP TABLE table;-- into the $inject_variable to test the injection. Hence, your SQL query will become this: INSERT INTO `table` (`column`) VALUES('value'); DROP TABLE table;--') This will allow other users to drop the table. You can use Kali Linux to hack into the php website. Here is a tutorial on how to do that.
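To demonstrate the vulnerability in a self-contained way, here is a hedged sketch in Python with SQLite instead of PHP/MySQL (the table, names, and payload are made up for illustration); the same principle applies to mysql_query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'secret')")

def login_vulnerable(name, pw):
    # String interpolation: attacker-controlled text becomes part of the SQL.
    q = f"SELECT * FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return conn.execute(q).fetchall()

def login_safe(name, pw):
    # Placeholders: the driver treats the inputs as data, never as SQL.
    q = "SELECT * FROM users WHERE name = ? AND pw = ?"
    return conn.execute(q, (name, pw)).fetchall()

payload = "' OR '1'='1"  # classic tautology injection
print(login_vulnerable("admin", payload))  # returns the admin row: no password needed
print(login_safe("admin", payload))        # returns no rows
```

The injected password turns the WHERE clause into name = 'admin' AND pw = '' OR '1'='1', which is true for every row; the parameterized version compares against the literal string and matches nothing.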
common-pile/stackexchange_filtered
JSON deserialize is coming back null When I test this code, the JSON is not being assigned to the variables; every field comes back null. String requestString='{"terms":{"customerCode":"sadasd","customerName":"sadasd","incoterm":"EXW","paymentTerms":"T30JFDM","paymentTermsDescriptionEn":"Transfer 30 days end of month the 20th","paymentTermsDescriptionFr":"Virement 30 jours Fin de Mois le 20","creditLimit":"asdasdsa"}}'; system.debug('string json'+requestString); System.debug( system.JSON.deserialize(requestString,terms.class)); terms quoweb=(terms)JSON.deserialize(requestString,terms.class); //quoteData quoweb=(quoteData)JSON.deserialize(json,quoteData.class); integer i=0; Try{ System.debug('web service size JSON'+quoweb); // for(i=0;i<quoweb.terms.size();i++){ // System.debug('web serivice JSON checking size and detailes'+quoweb.terms.size()+'Detailes'+quoweb.terms[i].paymentTerms); //} }catch(exception e) { System.debug('excdsdf'+e); } public class terms{ public String customerCode{get;set;} public String customerName{get;set;} public String incoterm{get;set;} public String paymentTerms{get;set;} public String paymentTermsDescriptionEn{get;set;} public String paymentTermsDescriptionFr{get;set;} public String creditLimit{get;set;} } What I'm Getting: USER_DEBUG [5]|DEBUG|terms:[creditLimit=null, customerCode=null, customerName=null, incoterm=null, paymentTerms=null, paymentTermsDescriptionEn=null, paymentTermsDescriptionFr=null]
It's really a {<property name> : <terms object>}. Removing the outer layer of object, as Phil W suggests, is one way to going about fixing this. That approach isn't always practical (especially if you're getting the JSON from some external third party). You could simply do an untyped deserialization, but probably the easiest way to handle this in most real applications would be to add another class so that you have an accurate representation of your JSON's structure. In this case, the extra class is pretty easy. public class OuterObject{ // Using the name of a property as the class name isn't ideal. // This will, however, work Terms terms; } You'd then be using OuterObject in the deserialization in place of Terms, and accessing the data inside of the Terms object would be done in the normal way (e.g. myOuterObject.terms.customerCode)
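The wrapper problem is independent of Apex; a short Python sketch over the same JSON (illustrative only) shows that the fields exist one level down, under the "terms" property:

```python
import json

request_string = '{"terms":{"customerCode":"sadasd","customerName":"sadasd","paymentTerms":"T30JFDM"}}'

data = json.loads(request_string)
# the top level holds a single "terms" property; the fields live one level down
print("customerCode" in data)         # False -> a flat deserialization finds nothing
print(data["terms"]["customerCode"])  # sadasd
```

That is exactly why deserializing straight into the inner Terms shape yields all-null fields, while adding an outer class (or stripping the wrapper) works.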
common-pile/stackexchange_filtered
DragBox change color while drawing In order to help users draw a box area with a minimum of 100 px in width and height, I thought to start drawing in red (fill and border of the box) and then change automatically to green when it reaches the 100 px mentioned while the user is drawing the feature. Any idea how to do this? I got something like that working when the user has finished drawing, but in my opinion that behavior is not comfortable enough. Thanks in advance UPDATE: http://jsfiddle.net/jonataswalker/41j800kv/ Found a better solution. Put these conditions inside a ol.interaction.Draw#StyleFunction: var draw = new ol.interaction.Draw({ source: vectorSource, type: 'LineString', maxPoints: 2, style: function(feature){ var style; var geometry = feature.getGeometry(); var extent = geometry.getExtent(); var topLeft = map.getPixelFromCoordinate(ol.extent.getTopLeft(extent)); var bottomLeft = map.getPixelFromCoordinate(ol.extent.getBottomLeft(extent)); var topRight = map.getPixelFromCoordinate(ol.extent.getTopRight(extent)); var width = topRight[0] - topLeft[0]; var height = bottomLeft[1] - topLeft[1]; coords_element.innerHTML = 'width: ' + width + '<br>height: ' + height; if (width > 100 && height > 100) { style = new ol.style.Style({ fill: new ol.style.Fill({ color: 'rgba(255, 255, 255, 0.2)' }), stroke: new ol.style.Stroke({ color: 'red', width: 2 }) }); } else { style = new ol.style.Style({ fill: new ol.style.Fill({ color: 'rgba(255, 255, 255, 0.2)' }), stroke: new ol.style.Stroke({ color: '#ffcc33', width: 2 }) }); } return [style]; }, geometryFunction: function(coordinates, geometry) { if (!geometry) { geometry = new ol.geom.Polygon(null); } var start = coordinates[0]; var end = coordinates[1]; geometry.setCoordinates([ [start, [start[0], end[1]], end, [end[0], start[1]], start] ]); return geometry; } }); Take this piece of code and put some conditions on it: I guess I missed something, but anyway it was a good exercise.
Don't worry, at least he has two solutions to choose from. Instead of nothing .....:))))))) Really I use ol.interaction.Draw like your example's code. I've added pavlos's suggestion to change the style inside your geometryFunction: if (width>100 && height>100){ $('.ol-control').removeClass('ol-control').addClass('ol-mydragbox'); $('.ol-control').removeClass('ol-control').addClass('ol-mydragbox'); } But it doesn't work; no errors, but no color change... :-( @ToniBCN which solution would you like to use? I like your solution @JonatasWalker, but as you mentioned, it is incomplete, and I didn't get the missing part of the code to make the color change. I didn't say it's incomplete, it's just not a solution for ol.interaction.DragBox, it's a solution for ol.interaction.Draw. See updated fiddle. Great !! It works in my app. Thanks a lot @JonatasWalker Using the latest version of ol3 (3.13.1) you may do the following to achieve your goal. Create a map with a layer and add a dragbox interaction var raster = new ol.layer.Tile({ source: new ol.source.OSM() }); var map = new ol.Map({ layers: [raster], target: 'map', view: new ol.View({ center: [0, 0], zoom: 2 }) }); var selectOnBoxInt = new ol.interaction.DragBox({ condition : ol.events.condition.always }); map.addInteraction(selectOnBoxInt); //set it active on start up selectOnBoxInt.setActive(true); Create two css classes holding styles for your dragbox //this is the default .ol-dragbox { background-color: rgba(255,0,0,0.4); border-color: rgba(255,0,0,1); border-width:2; } //this is when width,height>100 .ol-mydragbox { background-color: rgba(0,255,0,0.4); border-color: rgba(0,255,0,1); border-width:2; } Assign a boxdrag event to your dragbox interaction so you can track down its width and height and make the style changes. For this action, and for the sake of time, I use jQuery. You may use your imagination to do it without jQuery.
selectOnBoxInt.on('boxdrag',function(e){ var width = Math.abs(e.target.box_.endPixel_[0] - e.target.box_.startPixel_[0]); var height = Math.abs(e.target.box_.endPixel_[1] - e.target.box_.startPixel_[1]); if (width>100 && height>100){ $('.ol-dragbox').removeClass('ol-dragbox').addClass('ol-mydragbox'); $('.ol-box').removeClass('ol-box').addClass('ol-mydragbox'); } else { $('.ol-mydragbox').removeClass('ol-mydragbox').addClass('ol-dragbox'); } }); And a fiddle to see it in action. This solution sounds good !! I have to upgrade the OL version of my webapp and try to adapt it because I have already made some changes. I will tell you how it goes. Thanks a lot. Really I use ol.interaction.Draw instead of ol.interaction.DragBox but the boxdrag event, as you posted in your example, doesn't work. No errors, and no tracking info in the console... :-( If you use ol.interaction.Draw you have to do it the other way around. My example is working fine; check the fiddle (http://jsfiddle.net/p_tsagkis/q5yns26k/) to verify yourself. The box is changing color because the boxdrag event is fired. True @pavlos your example works fine. But due to some changes, now, I need to use ol.interaction.Draw and then, it doesn't work. Thanks for your support. No probs amigo. boxdrag won't work for the draw interaction, that's true. You'd better stick with @jonatas' solution. Glad to help.
common-pile/stackexchange_filtered
Count all existing combinations of groupings of records I have these db tables questions: id, text answers: id, text, question_id answer_tags: id, answer_id, tag_id tags: id, text question has many answers answer has many tags through answer_tags, belongs to question tag has many answers through answer_tags An answer has an unlimited number of tags I would like to show all combinations of groupings of tags that exist ordered by count Example data Question 1, Answer 1, tag1, tag2, tag3, tag4 Question 2, Answer 2, tag2, tag3, tag4 Question 3, Answer 3, tag3, tag4 Question 4, Answer 4, tag4 Question 5, Answer 5, tag3, tag4, tag5 Question 1, Answer 6, <no tags> How can I solve this using SQL? I'm not sure if this is possible with SQL, but if it is, I think it would need a RECURSIVE method. Expected results: tag3, tag4 occur 4 times tag2, tag3, tag4 occur 2 times tag2, tag3 occur 2 times We would only return results with groupings greater than 1. No single tag is ever returned, it must be at least 2 tags together to be counted. In what sense is "tag2, tag3, tag4" a pair? Why are "tag3, tag4, tag5" and "tag3, tag5" not pairs in your expected results? Sorry about that. I updated the question. The goal is to identify all groupings that occur more than once. tag2, tag3, tag4 are one grouping and they occur for Question1 and Question2. tag3, tag4, tag5 (Question5) only occur once, no other question has those same three tags. That also goes for tag3 tag5 which only occurs for Question5. Sorry if I'm not clear. I don't understand "question has many answers" and "A question only has one answer"? Shouldn't it be tag3, tag4 occur 4 times (in questions 1, 2, 3, and 5)? @RyanSparks wow, you're correct. fixed @xavier fixed. sorry about that. a question has many answers You can indeed use a recursive CTE to produce the possible combinations. First select all tag IDs as an array of one element. 
Then UNION ALL a JOIN of the CTE and the tag IDs, appending the tag ID to the array if it is larger than the largest ID in the array. To the CTE join an aggregation getting the tag IDs for every answer as an array. In the ON clause check that the answer's array contains the array from the CTE with the array contains operator @>. Exclude the combinations from the CTE with only one tag in a WHERE clause as you're not interested in those. Now GROUP BY the combination of tags and exclude all the combinations which occur less than twice in a HAVING clause -- you're not interested in them either. If you want, you can also "translate" the IDs to the names of the tags in the SELECT list. WITH RECURSIVE "cte" AS ( SELECT ARRAY["t"."id"] "id" FROM "tags" "t" UNION ALL SELECT "c"."id" || "t"."id" "id" FROM "cte" "c" INNER JOIN "tags" "t" ON "t"."id" > (SELECT max("un"."e") FROM unnest("c"."id") "un" ("e")) ) SELECT "c"."id" "id", (SELECT array_agg("t"."text") FROM unnest("c"."id") "un" ("e") INNER JOIN "tags" "t" ON "t"."id" = "un"."e") "text", count(*) "count" FROM "cte" "c" INNER JOIN (SELECT array_agg("at"."tag_id" ORDER BY "at"."tag_id") "id" FROM "answer_tags" "at" GROUP BY at.answer_id) "x" ON "x"."id" @> "c"."id" WHERE array_length("c"."id", 1) > 1 GROUP BY "c"."id" HAVING count(*) > 1; Result: id | text | count ---------+------------------+------- {2,3} | {tag2,tag3} | 2 {3,4} | {tag3,tag4} | 4 {2,4} | {tag2,tag4} | 2 {2,3,4} | {tag2,tag3,tag4} | 2 db<>fiddle Building on @filiprem's answer and using a slightly modified function from the answer here you get: --test data create table questions (id int, text varchar(100)); create table answers (id int, text varchar(100), question_id int); create table answer_tags (id int, answer_id int, tag_id int); create table tags (id int, text varchar(100)); insert into questions values (1, 'question1'), (2, 'question2'), (3, 'question3'), (4, 'question4'), (5, 'question5'); insert into answers values (1, 'answer1', 1), (2, 'answer2', 2), (3,
'answer3', 3), (4, 'answer4', 4), (5, 'answer5', 5), (6, 'answer6', 1); insert into tags values (1, 'tag1'), (2, 'tag2'), (3, 'tag3'), (4, 'tag4'), (5, 'tag5'); insert into answer_tags values (1,1,1), (2,1,2), (3,1,3), (4,1,4), (5,2,2), (6,2,3), (7,2,4), (8,3,3), (9,3,4), (10,4,4), (11,5,3), (12,5,4), (13,5,5); --end test data --function to get all possible combinations from an array with at least 2 elements create or replace function get_combinations(source anyarray) returns setof anyarray as $$ with recursive combinations(combination, indices) as ( select source[i:i], array[i] from generate_subscripts(source, 1) i union all select c.combination || source[j], c.indices || j from combinations c, generate_subscripts(source, 1) j where j > all(c.indices) and array_length(c.combination, 1) <= 2 ) select combination from combinations where array_length(combination, 1) >= 2 $$ language sql; --expected results SELECT tags, count(*) FROM ( SELECT q.id, get_combinations(array_agg(DISTINCT t.text)) AS tags FROM questions q JOIN answers a ON a.question_id = q.id JOIN answer_tags at ON at.answer_id = a.id JOIN tags t ON t.id = at.tag_id GROUP BY q.id ) t1 GROUP BY tags HAVING count(*)>1; Note: this gives tag2,tag4 occurs 2 times which was missed in the expected results (from questions 1 and 2) Try this: SELECT tags, count(*) FROM ( SELECT q.id, array_agg(DISTINCT t.text) AS tags FROM questions q JOIN answers a ON a.question_id = q.id JOIN answer_tags at ON at.answer_id = a.id JOIN tags t ON t.id = at.tag_id GROUP BY q.id ) t1 GROUP BY tags HAVING count(*)>1; Thank you for responding. Your answer kind of works. It works when the questions have the exact same tags but if one question has two tags, and a second question has three tags then it doesn't return any results.
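The counting logic itself, independent of SQL, can be sketched in Python over the question's sample data (an illustration of what the queries above compute, not a replacement for them):

```python
from collections import Counter
from itertools import combinations

# answer -> set of tags, mirroring the question's sample data
answers = {
    1: {"tag1", "tag2", "tag3", "tag4"},
    2: {"tag2", "tag3", "tag4"},
    3: {"tag3", "tag4"},
    4: {"tag4"},
    5: {"tag3", "tag4", "tag5"},
    6: set(),
}

counts = Counter()
for tags in answers.values():
    for size in range(2, len(tags) + 1):   # groupings of at least 2 tags
        counts.update(combinations(sorted(tags), size))

for combo, n in counts.most_common():
    if n > 1:                              # keep only groupings that repeat
        print(", ".join(combo), "occur", n, "times")
```

It reproduces the SQL answers' output, including the tag2, tag4 pair (count 2) that the question's expected results missed.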
common-pile/stackexchange_filtered
Trigger updatedAt=NOW() for hundred-thousand rows is not the actual timestamp I am using an SQL trigger to update the updatedAt column when data is inserted/updated in PG 9.6. Here is the function that will be called by the trigger CREATE FUNCTION sp_updatedatstamp() RETURNS trigger AS $updatedat_stamp$ BEGIN NEW."updatedAt" := now(); RETURN NEW; END; $updatedat_stamp$ LANGUAGE plpgsql; Here is the trigger script create trigger tg_de_installment_cdcupdatedat before insert or update on public.table for each row execute procedure sp_updatedatstamp() Here is the Table Definition CREATE TABLE table ( id serial NOT NULL, "type" text NULL, "transactionDate" timestamptz NULL, remark text NULL, "createdAt" timestamptz NULL, "updatedAt" timestamptz NULL, "deletedAt" timestamptz NULL, CONSTRAINT table_pkey PRIMARY KEY (id) ); This code works fine with small data, until an application executes a bulk update query (100K+ rows updated) that takes 30 minutes. The trigger makes all 100K+ rows use the updatedAt of the first row, which leaves the other rows' updatedAt incorrect. In the end, our data pipeline, scheduled every 5 minutes, will lose this data. Is there a way to make the SQL trigger use the actual timestamp for huge data?
common-pile/stackexchange_filtered
How to know the process of the terminal which is running a Linux command? In Linux, how can I find, under the /proc folder, the process ID of the shell running in the currently open terminal? Thanks. If you want the PID of the shell, then it is the $$ variable, assuming that your shell is bash or similar. Therefore you could use $ ls /proc/$$ which would list the contents of that folder for the running shell, or just $ echo $$ to see the PID on the screen. Can you please explain how this is working? In the shell, when you write $variable it will substitute in the value of that variable. There is a special variable named $$ which is the PID of the shell itself. Do you mean this? $ ls /proc/`echo $$`
common-pile/stackexchange_filtered
Beamer changing part numbers to letters Hi, I'd like to change the Roman numerals of parts into letters, for instance: \documentclass{beamer} \usetheme{Madrid} \usecolortheme{whale} \usecolortheme{orchid} \setbeamertemplate{blocks}[rounded][shadow=true] \title{This is a title} \author{Author} \date{ } \begin{document} \part{Review of Previous Lecture} \frame{\partpage} \frame{...} \part{Today's Lecture} \frame{\partpage} \frame{...} \end{document} And I want to have Part A instead of Part I, etc. Do you know how to do it? \makeatletter \renewrobustcmd*\insertromanpartnumber{\@Alph\c@part} \makeatother Maybe you can redefine the content of \insertromanpartnumber, because the default part number on the part page is \insertromanpartnumber.
common-pile/stackexchange_filtered
Sum of an array in row that says "Yes" =SUM(3, IF(B8:B129="Yes",1,0)) The formula should start at 3 and add 1 every time a column in the row says "Yes". Also included in the row are "No" and "Maybe". The current formula above provides the correct result however it outputs "#VALUE!" in the column. Anybody know the issue with the formula? Thanks in advance Please show the issue with a screenshot. It does seem that =3+COUNTIF(B8:B129,"Yes") should work. Apologies folks, had to get rid of "SUM" - stupid mistake. Appreciate the help! Instead of IF(B8:B129="Yes",1,0) you may want to use COUNTIF(B8:B129,"Yes") I have tried this and the error does disappear; however it doesn't count all the Yes's, it just takes the starting 3 and displays that. Bit of a tricky one!
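The intent of the corrected formula — start at 3 and add 1 per "Yes" — can be sketched in Python for clarity (the sample responses are made up for illustration):

```python
# responses as they might appear in B8:B129 (made-up sample data)
responses = ["Yes", "No", "Maybe", "Yes", "Yes", "No"]

# start at 3, add 1 for every "Yes" -- what =3+COUNTIF(range,"Yes") computes
total = 3 + sum(1 for r in responses if r == "Yes")
print(total)  # 6
```

Note that the formula quoted in the last answer uses curly ("smart") quotation marks around ”Yes”; pasting those into a spreadsheet is itself enough to make COUNTIF fail, so straight quotes should be used.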
common-pile/stackexchange_filtered
How to connect three traces in Altium as shown in the image? I am making a PCB layout for a board and I want to make a trace similar to the one in the image. I have tried changing the angle options but it still doesn't work. How do I do this? Check out the teardrop generation function. This link will take you, even without a valid license, to an excellent (and official) Altium forum: forum.live.altium.com (where all you need to do is register for an account). You should play with the teardrops configuration, available in the "Tools" menu. In your case it's the T-Junction option that is interesting.
common-pile/stackexchange_filtered
What is the meaning of the following? int sampleArray[] = {1,2,3,4,5}; I understand that the sampleArray now points to the first element of the array. However, what does it mean when I say &sampleArray? Does it mean I am getting the address of the sampleArray variable? Or does it mean a two-dimensional array variable? So, can I do this: int (*p)[5] = &sampleArray? "I understand that the sampleArray now points to the first element of the array." This is wrong. sampleArray is an array, not a pointer; they are not the same thing. No, sampleArray does not really point to the first element of the array. sampleArray is the array. The confusion arises because in most places where you use sampleArray, it will be replaced with a pointer to the first element of the array. "Most places" means "anywhere that it isn't the operand of the sizeof or unary-& operators". Since sampleArray is the array itself, and being the operand of unary-& is one of the places where it maintains that personality, this means that &sampleArray is a pointer to the whole array. The name of an array evaluates to its address (which is the address of its first element), so sampleArray and &sampleArray have the same value. They do not, however, have the same type: sampleArray has a type of int* (that is, a pointer-to-int) &sampleArray has a type of int (*)[5] (that is, a pointer to an array of five ints). int (*p)[5] declares a pointer p to an array of five ints. Ergo, int (*p)[5] = &sampleArray; // well-formed :) int (*p)[5] = sampleArray; // not well-formed :( No, sampleArray and &sampleArray have different values. &sampleArray is the address of sampleArray. @WhirlWind actually if you try it (not sure if it's guaranteed) they have the same value. they are of different types though. @WhirlWind - No...sampleArray and &sampleArray have the same value. write a simple program and try this yourself. 
Minor nit: the type of the expression sampleArray is int [5], which in most contexts will be converted implicitly (decay) to int *. As usual, the comp.lang.c FAQ already explains this stuff: So what is meant by the ``equivalence of pointers and arrays'' in C? Since array references decay into pointers, if arr is an array, what's the difference between arr and &arr? You probably should read the whole section on arrays and pointers. Some sample code: #include <stdio.h> int main() { int a[] = {1,2,3,4,5}; int (*p)[5] = &a; //Note //int (*p)[5] = a; // builds (with warnings) and in this case appears to give the same result printf( "a = 0x%x\n" , (int *)a); printf( "&a = 0x%x\n", (int *)(&a)); printf( "p = 0x%x", (int *)p); return; } The output - same for both ways. $ ./a.exe a = 0x22cd10 &a = 0x22cd10 p = 0x22cd10 Compiler $ gcc --version gcc (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) Copyright (C) 2004 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. that is called insilization of an array. array is collection of similar type of data like int,char,float number like that.
common-pile/stackexchange_filtered
Run NodeJS app in azure via container web app or normal web app in production What's the best way to run a nodeJS application in production on Azure? Using PM2 inside the docker image is no option here as it results in two layers of load balancing and monitoring thus more complexity. Options: Use normal web app Pro: Can use PM2 The application can use more than one process thus more than one core per AppService Instance thus one AppServicePlan Instance can be better used to capacity. Use container web app Pro: The application can easily be used somewhere else because of the docker image Better control over the environment Cons: Only one process per AppServicePlan Instance Possible downtime if application crashes till the new container is ready Either of these options is completely valid. Your approach of weighing the pros and cons of each option is the correct one- these are going to be somewhat unique to your situation, which is why it is not possible to provide an overall "best" way to deploy an app. One thing I will note is that Azure offers load balancing with apps scaled out to multiple instances, either manually or through rules that you set up. That would help mitigate downtime if an individual instance goes down.
I'm attempting to render responsive images and control art direction. Could somebody see why I can't get breakpoints.yml to be respected? I'm creating a title/hero-image section that consists of two 50/50 columns at desktop but then stacks at about tablet size (e.g. 1024px). I’m using a custom twig template for this content type. I want to serve WebP images if the user's browser will tolerate that; I also want to serve responsive derivatives and control art direction (i.e. different dimensions and cropping depending on screen width). To that end I’m trying to use the WebP module together with the Responsive Image module and image styles. I’ve created two responsive image styles (PMP Hero Col One & PMP Hero Col Two) and six image styles (PMP Hero C1 Desktop, PMP Hero C1 Tablet, PMP Hero C1 Phone, PMP Hero C2 Desktop, PMP Hero C2 Tablet, PMP Hero C2 Phone), each with a scale and crop effect. I’ve created a breakpoints.yml in my theme’s root directory and I can see its breakpoints within the Responsive Image configuration interface. 
my_theme.pmp_hero_col_1.phone: label: pmp_hero_col_1_phone mediaquery: '(max-width: 900px)' weight: 0 multipliers: - 1x group: neiu_main.pmp_hero_col_1 my_theme.pmp_hero_col_1.tablet: label: pmp_hero_col_1_tablet mediaquery: '(max-width: 1024px)' weight: 1 multipliers: - 1x group: neiu_main.pmp_hero_col_1 my_theme.pmp_hero_col_1.desktop: label: pmp_hero_col_1_desktop mediaquery: '(min-width: 1025px)' weight: 2 multipliers: - 1x group: neiu_main.pmp_hero_col_1 my_theme.pmp_hero_col_2.phone: label: pmp_hero_col_2_phone mediaquery: '(max-width: 900px)' weight: 0 multipliers: - 1x group: neiu_main.pmp_hero_col_2 my_theme.pmp_hero_col_2.tablet: label: pmp_hero_col_2_tablet mediaquery: '(max-width: 1024px)' weight: 1 multipliers: - 1x group: neiu_main.pmp_hero_col_2 my_theme.pmp_hero_col_2.desktop: label: pmp_hero_col_2_desktop mediaquery: '(min-width: 1025px)' weight: 2 multipliers: - 1x group: neiu_main.pmp_hero_col_2 I’ve selected the appropriate breakpoint group in each of my responsive image styles. Within each responsive image style I’ve selected “Select a single image style.” for each breakpoint and associated the image style I want with it. I’ve tried setting the fallback image style to “None (original image)” & “empty image”. I’ve tried rendering the responsive image style in my template like: {{ drupal_image(node.field_pmp_hero_image.entity.uri.value, 'pmp_hero_col_two', responsive=true) }} and like: {% set heroCol2ImagePath = node.field_pmp_hero_image.entity.uri.value %} {% set heroCol2ResponsiveImageStyle = { '#theme': 'responsive_image', '#responsive_image_style_id': 'pmp_hero_col_two', '#uri': heroCol2ImagePath, '#attributes': { class: 'img-responsive', alt: 'MBA Students' }, } %} {{ heroCol2ResponsiveImageStyle }} In both cases I do get the picture tag and the appropriate srcsets:  Also the derivatives are written to the file system in the appropriate directories and are available. 
The problem I'm experiencing is that the breakpoints.yml is never respected, i.e. the image never changes to the appropriate derivative. It's always either the default, or empty if the fallback is set to empty. This makes me think there's something wrong with my breakpoints.yml, e.g. syntax, media queries, etc. So, I've tried about a zillion permutations without success. I am able to get this working if, instead of using the Responsive Image module at all, I manually do: <picture> <source media="(max-width:900px)" srcset="{{ heroCol2ImagePath|image_style('pmp_hero_c2_phone') }}"> <source media="(max-width:1024px)" srcset="{{ heroCol2ImagePath|image_style('pmp_hero_c2_tablet') }}"> <source media="(min-width:1025px)" srcset="{{ heroCol2ImagePath|image_style('pmp_hero_c2_desktop') }}"> <img class="pmp-vtwo-hero-image" src="{{ file_url(heroCol2ImagePath) }}" alt="Flowers" style="width:auto;"> </picture> But, I don't know how to then use the WebP module with this approach. If somebody out there is able to point out what bone-head mistake I'm making, you'd forever have my gratitude. You may have a typo - in this example, it looks like the correct key for the media query is mediaQuery, not mediaquery. This may be why your media queries never make it to the rendered html. Ahhh,.. that was a bone-head mistake indeed. Thank you so much for catching it. I spent an embarrassing amount of time on that. If you post your comment as an answer I'll select it. It looks like you have a typo in your breakpoints.yml. See the breakpoints.yml documentation. An example as defined in bartik theme: bartik.narrow: label: narrow mediaQuery: 'all and (min-width: 560px) and (max-width: 850px)' weight: 1 multipliers: - 1x The correct key is mediaQuery, not mediaquery. The result is your media queries aren't recognized so you only get default images on all devices. Update that and you'll be all set.
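For reference, this is the first entry of the breakpoints.yml from the question with the single fix from the accepted answer applied (mediaquery renamed to mediaQuery); the other five entries need the same change:

```yaml
my_theme.pmp_hero_col_1.phone:
  label: pmp_hero_col_1_phone
  mediaQuery: '(max-width: 900px)'
  weight: 0
  multipliers:
    - 1x
  group: neiu_main.pmp_hero_col_1
```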
Creating functions without parentheses in PHP like 'echo' I was wondering if there is any nice way of writing functions in PHP so that they don't require ( ) around the parameters. Example: function sayThis($str) { echo $str; } sayThis "hi!!"; Thanks, Matt Mueller see this answer possible duplicate of Can I create a PHP function that I can call without parentheses? There simply isn't. "echo" is more of an operator than a function, so you'd actually need to rewrite the PHP interpreter source in order to introduce new "functions" like those. Edit: Actually, the more accurate term for "echo" is, as eyze has correctly pointed out, language construct rather than operator. http://php.net/manual/de/function.echo.php provides some more information. Simple answer, no. echo is a language construct not a function, hence it doesn't need the parentheses.
Entity Framework does not return object Hi, I found a problem with EF. Here is my Model. I loaded an asset: POCO.Asset asset = _context.Assets.Where(a => a.UID == assetUid).First(); then I go through all properties foreach (POCO.Property p in asset.Properties) /* request to db */ { /*...*/ } categories: foreach (POCO.Category p in asset.Categories) /* request to db */ { /*...*/ } related assets: foreach (POCO.Relation relatedAsset in entityAsset.Relations) /* request to db */ { /*...*/ } all navigation properties work fine. I can see the requests to the db through the profiler. Everything is good. But if I go through Relations and try to load RelatedAssetProperties, then I have a problem. Basically my Asset has 4 relations and each relation has 2-3 properties. foreach (POCO.Relation relatedAsset in entityAsset.Relations) /* request to db */ { /**/ ICollection<RelatedAssetProperty> rap = relatedAsset.RelatedAssetProperties; foreach (RelatedAssetProperty relatedAssetProperty in rap) /* request to db */ { /**/ } } During RelatedAssetProperties execution I see all 4 requests to the db to get properties. I run all requests in SQL manager and each returns data. But for some reason rap has items (RelatedAssetProperty) only for the first relation. For the other Relations it is empty. And I do not know why. Can you make sure that your entities have a primary key which makes sense? I had a similar problem in a view where only the first recordset in the DB was returned.
Play sound through earpiece speaker iphone How can I make my iPhone 7 play all sound through the earpiece? I want to listen to music privately without headphones; I thought this would be an easy way to do it. Pointing the bottom of the phone at my head at low volume is awkward and doesn't work very well. This is currently not possible, as the earpiece speaker is only for phone calls. Your choice is the main speaker or a headset. From this discussion: As far as I know this is not an option. The earpiece receiver is for phone calls, it's not a "private speaker". Your choices are the iPhone's main speaker or a headset. You can request Apple to add this functionality here. How about as an app developer? Can I send audio through the earpiece? @theonlygusti I don't think so. Might be useful - also check out this SO post and this one
Why does my java guessing game not work after the first loop I'm working on this guessing game where the user needs to guess the word in under 6 tries. They have the ability to try and guess the whole word, but if guessed incorrectly the game ends. When the game ends it gives them the option to play again. My problem is that when I try to guess the word for the second time, it gives me an error when I enter a character. I'm later going to add an array instead of the static BRAIN word and randomize it, but I want to figure this out. Here is the code: /* * WordGuess.java */ import java.util.Scanner; /** * Plays a word guessing game with one player. */ public class WordGuess { //Main method public static void main(String[] args) { final String SECRET_WORD = "BRAIN"; final String FLAG = "!"; String wordSoFar = "", updatedWord = ""; String letterGuess, wordGuess = ""; int numGuesses = 0; String repeat = ""; Scanner input = new Scanner(System.in); /* begin game */ System.out.println("Word Guess game. \n"); do { numGuesses = 0; wordSoFar = ""; updatedWord = ""; for (int i = 0; i < SECRET_WORD.length(); i++) { wordSoFar += "-"; //word, as dashes } System.out.println("Your word is "); System.out.println(wordSoFar + "\n"); //displays dashes /* allow player to make guesses */ do { System.out.print("Enter a letter (" + FLAG + " to guess entire word): "); letterGuess = input.nextLine(); letterGuess = letterGuess.toUpperCase(); /* increment number of guesses */ //numGuesses += 1; /* player correctly guessed a letter--extract string in wordSoFar * up to the letter guessed and then append guessed letter to that * string. Next, extract rest of wordSoFar and append after the guessed * letter */ if (SECRET_WORD.indexOf(letterGuess) >= 0) { updatedWord = wordSoFar.substring(0, SECRET_WORD.indexOf(letterGuess)); updatedWord += letterGuess; updatedWord += wordSoFar.substring(SECRET_WORD.indexOf(letterGuess)+1, wordSoFar.length()); wordSoFar = updatedWord; } else { numGuesses += 1; } /* display guessed letter instead of dash */ System.out.println(wordSoFar + "\n"); } while (!letterGuess.equals(FLAG) && !wordSoFar.equals(SECRET_WORD) && numGuesses < 6); /* finish game and display message and number of guesses */ if (letterGuess.equals(FLAG)) { System.out.println("What is your guess? "); wordGuess = input.nextLine(); wordGuess = wordGuess.toUpperCase(); } if (wordGuess.equals(SECRET_WORD) || wordSoFar.equals(SECRET_WORD)) { System.out.println("You won!"); } else { System.out.println("Sorry. You lose."); } System.out.println("The secret word is " + SECRET_WORD); System.out.println("You made " + numGuesses + " mistakes."); System.out.println("Would you like to play again?"); repeat = input.next(); } while (repeat.equalsIgnoreCase("Y")); System.out.println("GOOD BYE THANKS FOR PLAYING!"); } }//end of WordGuess class What error message do you get? Can you format the code so that it's more readable? (Indentations) Solution 1 - You can move Scanner input = new Scanner(System.in); into the first do block: do { //Has moved to the first do block Scanner input = new Scanner(System.in); //Rest of your code } Solution 2 - You can use input.next() instead of input.nextLine() in the second do block: do { System.out.print("Enter a letter (" + FLAG + " to guess entire word): "); // input.nextLine() has changed to input.next() letterGuess = input.next(); //.. Rest of code } The Y input for playing again is being read in as the first character of the word. You read in the full line of input System.out.print("Enter a letter (" + FLAG + " to guess entire word): "); letterGuess = input.nextLine(); // <---------- HERE letterGuess = letterGuess.toUpperCase(); but when you ask if they want to play again you only do input.next(), meaning that there is still some input in the buffer when this code gets called. 
If you replace input.next() with input.nextLine() like so System.out.println("You made " + numGuesses + " mistake."); System.out.println("Would you like to play again?"); repeat = input.next(); // <---------- CHANGE this to repeat = input.nextLine(); then your program works as expected. @ARYA8B No worries! If my answer solved your problem, don't forget to accept it by clicking the checkmark to the left :)
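The buffer behaviour described here can be reproduced in isolation: after next() reads the "Y", the rest of that line (the newline) is still in the buffer, so a following nextLine() returns an empty string instead of waiting for the next guess. A small sketch (the class name is mine):

```java
import java.util.Scanner;

public class ScannerBufferDemo {
    // Reads a token with next(), then a line with nextLine(),
    // mirroring the bug in the game loop.
    public static String[] readTokenThenLine(String input) {
        Scanner sc = new Scanner(input);
        String token = sc.next();    // reads "Y", leaves the newline behind
        String rest = sc.nextLine(); // consumes the leftover of that line -> ""
        return new String[] { token, rest };
    }
}
```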
TAPI call from web page I've got a customer service team who'd like to click on a telephone number in a service request ticket and have it dial the call automatically for them, rather than manually dialling the number shown on the screen from the phone. I've got a PHP 7.1 web application and all the users use Windows desktops. I was looking into the TAPI interface by Microsoft, which seems to make this possible after the users install the TAPI exe on their computers. Now what I'm trying to do is call the exe from the web application with the phone number, similarly to <EMAIL_ADDRESS> Is it possible to do this? This is not a duplicate of Open an exe file through a link in a HTML file? because I'm asking about calling an exe with a parameter (the telephone number) and I'm looking for a similar format as 'mailto'. Possible duplicate of Open an exe file through a link in a HTML file? I think it's not possible. @SatishSaini I already had a look into that question before. That's not what I'm looking for, I'm afraid. @NarayanSharma, why do you think that? We are able to open email clients and browsers from html (which are exe files); also Apple's website is able to open the iTunes application from their webpage. If you could do that, wouldn't this mean that people can open any software on your desktop using php/html? Which would be considered a very huge security risk? idk One simple hint: when you click on (contact number) the request goes to the server (PHP file). Create one .sh file and type the required command (which will open your required application), and from your PHP file execute that .sh file. It might help you. The telephone equivalent of "mailto" is "tel": <a href="tel:+491234567">Dial</a> This is a bit of a quick answer. I might come back to edit it later, but what you want is not that difficult: you need the exe to register itself as a protocol handler.
Here is the MSDN page with all the info: https://learn.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/aa767914(v=vs.85)
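The MSDN page describes registering the handler under HKEY_CLASSES_ROOT. A sketch of what that could look like for a hypothetical dial: scheme handled by a hypothetical dialer.exe (the scheme name and executable path below are made up for illustration; the handler receives the full URI, including the scheme, as %1):

```reg
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\dial]
@="URL:Dial Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\dial\shell\open\command]
@="\"C:\\Program Files\\Dialer\\dialer.exe\" \"%1\""
```

The web page could then link to the number with something like <a href="dial:+491234567">Dial</a>, analogous to mailto.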
How to reference SQL connection I'm just learning asp.net/C# and am building my first application. In my application, I am rendering data from SQL on almost every view. My controllers are getting large, because every time I make a request, I'm using something like: try { sqlConnection = new SqlConnection(dbConnectionString); SqlCommand command = new SqlCommand("sp_Test", sqlConnection); command.CommandType = CommandType.StoredProcedure; sqlConnection.Open(); return command.ExecuteNonQuery(); sqlConnection.Close(); } catch (SqlException ex) { Console.WriteLine("SQL Error" + ex.Message.ToString()); return 0; } Is there a way to turn the SQL into a simple using block? Maybe something like: using(myConnection){ SqlCommand command = new SqlCommand("sp_Test", sqlConnection); command.CommandType = CommandType.StoredProcedure; } You could make a function that returns a SqlConnection and then just do using (var conn = GetConnection()) { ... Just by the way, you're closing the connection after your return statement, so that will never happen. Have you considered an object/relational mapping tool like Entity Framework? There's a bit of a learning curve and it's not without headaches, but it abstracts a lot of the tedious work of data access away. There are many better approaches to do it. You can create a SqlHelper class that can be used to execute stored procedures and SQL queries and also return DataReader and DataTable/DataSets. 
public class SqlHelper { public SqlHelper(string connString) { } public DataSet GetDatasetByCommand(string Command); public SqlDataReader GetReaderBySQL(string strSQL); public SqlDataReader GetReaderByCmd(string Command); public SqlConnection GetSqlConnection(); public void CloseConnection(); } You can see one such sample here: http://www.nullskull.com/a/1295/sql-helper-class-in-c.aspx If you want a more advanced approach you can go for the Enterprise Library Data Access Block http://msdn.microsoft.com/en-us/magazine/cc163766.aspx The best thing to do is refactor that statement into a separate method. It looks like the only thing that could vary is the name of the procedure. So create an object with two properties, a boolean Success and an error message. Call the function and pass in the name of the SQL command. Your function should run your repeated code in the try block based on the given procedure name, then return an object with true/false and an error message if the call failed. This should make your controllers much smaller. Example code for the controller: var result = MyNewMethod("sp_Test"); if(!result.Success) { Console.WriteLine(result.ErrorMessage); return 0; }
Schedule mysql event to run every chosen day of the week I want to read my table logs after rotating my logs using an event, and I want my event to run on any day of the week I choose. After doing some research, I have come up with this CREATE EVENT read_rotated_logs ON SCHEDULE EVERY 1 WEEK STARTS CURRENT_DATE + INTERVAL 7 - WEEKDAY(CURRENT_DATE) DAY DO BEGIN END */$$ DELIMITER ; It's not clear how I might arrive at a specific day of the week, for example Monday. How may I structure my code to make the event run on any specific day of the week (Monday or Tuesday or Wednesday or Thursday or Friday or Saturday or Sunday)? http://www.java2s.com/Code/SQL/Event/Eventscheduleeveryweek.htm I have just written this select dayofweek(curdate()); from http://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_dayofweek looks promising. Let me study the link, looks interesting. @Mihai In the link, it's as if MySQL has a notion of days of the week like in the ordinary English Gregorian calendar. Here is how you do it for the other days of the week Monday STARTS CURRENT_DATE + INTERVAL 0 - WEEKDAY(CURRENT_DATE) DAY Tuesday STARTS CURRENT_DATE + INTERVAL 1 - WEEKDAY(CURRENT_DATE) DAY and so on Hi, how do I add a specific time on STARTS CURRENT_DATE + INTERVAL 3 - WEEKDAY(CURRENT_DATE) DAY you can do something like this CURRENT_TIMESTAMP + INTERVAL 7 - WEEKDAY(CURRENT_TIMESTAMP) DAY + INTERVAL 1 HOUR and play with + INTERVAL 1 HOUR to be on your desired hour Isn't there any way to set the time explicitly like 11:00 a.m.? you can try like this CONCAT(CURRENT_DATE + INTERVAL 7 - WEEKDAY(CURRENT_DATE) DAY,' 11:00:00') Is it mandatory for time in an event to be in 24 hour format?
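Putting the answer and the comments together, a complete definition for, say, every Wednesday at 11:00 could look like the sketch below. WEEKDAY() counts 0 = Monday through 6 = Sunday, so 2 targets Wednesday; note that when the computed date has already passed this week, you would add a further INTERVAL 7 DAY. The event body is left as a placeholder:

```sql
DELIMITER $$

CREATE EVENT read_rotated_logs
ON SCHEDULE EVERY 1 WEEK
-- WEEKDAY(): 0 = Monday ... 6 = Sunday, so 2 - WEEKDAY(...) lands on Wednesday
STARTS CONCAT(CURRENT_DATE + INTERVAL 2 - WEEKDAY(CURRENT_DATE) DAY, ' 11:00:00')
DO
BEGIN
  -- read the rotated log table here
END $$

DELIMITER ;
```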
Run taskmgr.exe without administrator rights? I NEED to run Task Manager with the very specific code that I have, but it is appearing with an access denied error. I have attempted to run in administrator mode before. FileStream fs = new FileStream(System.IO.Path.Combine(Environment.SystemDirectory, "taskmgr.exe"), FileMode.Open, FileAccess.ReadWrite, FileShare.None); The expected result I want is that Task Manager opens using the code above, without administrator rights! (Is there any way around this?) That opens a file for reading and writing, it doesn't actually run the executable. To actually start it you'd probably need to use Process.Start . Yes, use the System.Diagnostics.Process class. Use this: using System.Diagnostics; ProcessStartInfo startInfo = new ProcessStartInfo(); //a ProcessStartInfo object startInfo.CreateNoWindow = false; //just hides the window if set to true startInfo.UseShellExecute = true; //use shell (current program's privilege) startInfo.FileName = System.IO.Path.Combine(Environment.SystemDirectory, "taskmgr.exe"); //The file path and file name startInfo.Arguments = ""; //Add your arguments here Process.Start(startInfo); Resources: ProcessStartInfo - MSDN Purrfect! Thank you!! This is a start process function I have using System.Diagnostics; private static void StartProcess(string exeName, string parameter) { using (Process process = new Process()) { process.StartInfo.FileName = exeName; process.StartInfo.Arguments = parameter; process.EnableRaisingEvents = true; process.Start(); } } Then call it like StartProcess("exename.exe", fileParameter); Process Class
how to change firebase account in a flutter project? I created a Flutter project with my personal Firebase, but now the client has already created his Firebase. How do I change the account in a project? I've searched the documentation but without success. It's very simple, just type this firebase login --reauth Unfortunately there is no easy way to switch. The information you configured when you first created the Flutter project on Firebase has to be configured again with the client's Firebase. For example -> replace the old GoogleService-Info.plist, Android package name and iOS bundleID with the new one. // Change Firebase project. // May require `firebase login` to access the project. firebase use MYPROJECT // Update Flutter configuration based on project. // --yes will overwrite the configuration. flutterfire config \ --project=MYPROJECT \ --out=lib/firebase_options_dev.dart \ --android-package-name=com.myandroid.id \ --platforms=web,android --yes // In case of using gcloud, switch the project, too. // May require `gcloud login` to access the project. gcloud config set project MYPROJECT Note Some parameters of FlutterFire are optional; Flutterfire will try figuring out the correct values. Check all parameters with flutterfire config --help. Recently I had to change the Firebase project of an app to a project created in some other account. This method will work even if the new project belongs to the same user too. Note: The path to firebase.exe should be added to the Environment Variables PATH, and flutterfire should be able to call it within the project. Or firebase.exe could be placed in the project root (but do not commit it to git). If using the latter you can call .\firebase.exe login --reauth. If added to PATH, run the following command to reauthenticate with the user to which the new project belongs. firebase login --reauth Reconfigure flutterfire flutterfire config The following question will be asked. 
Select 'no' You have an existing firebase.json file and possibly already configured your project for Firebase. Would you prefer to reuse the values in your existing firebase.json file to configure your project? no Next Question. For this question select 'yes': ? Generated FirebaseOptions file ...\firebase_options.dart already exists, do you want to override it? (y/n) yes It will load the projects of the logged in user. Select the project you want to link or create a new project and link it.
How is this "connected dots" pattern done? I was wondering, if anyone could help me reproduce this "effect": I've noticed that this effect is often made with compound path, but I don't understand the process of it - do I have to make polygons at first and then make outline of them? FYI: this background is downloaded from freepik edit: Sorry, for wrong section and breaking few rules.. )= I didn't want something specific, just random effect for company I've been working for- it's science related. Im a big fan of poly-art (that's fairly easy to make for me), but I was really curious about this one, because it seemed to be more complicated than it should be. However, question is answered in comments below. thank you and may the force be with you.. (= I'm going to venture out on a limb and go with the most obvious answer that I see - by hand. Draw the lines, place some circles of various sizes on the intersections. lightly shade a few random polygons. Voila. Maybe I'll make this a more in-depth answer in a second, I just got to work. Compound path is often a symptom of being imported from another source. Jonas Dralle: Thank you, that's exactly I've been looking for. Im fairly familiar with poly-art, so I have no problem reproduced this with hand, but I was looking for something less "time-consuming". if you search for Plexus effect you'll find a plugin that generates it in After Effects and it can be exported as SVG http://aescripts.com/plexus/ @Luciano that plugin is $200 though! Probably too much to generate a background effect. One might be able to make a triangle and then use a script to copy+translate+scale+rotate+set-opacity an arbitrary number of times within defined parameter limits. @CAI OP didn't ask for prices... @Luciano Fair enough, but I can't see a $200 After Effects plugin ever being a reasonable solution to creating a relatively simple vector image. This effect is called "Plexus". This page seems to offer brushes to do this effect. 
Maybe you can find a good plugin for that. There's a good Plexus-Plugin for Adobe After Effects [This was originally a comment]
How to make spotless format your code as per custom config file which is followed by checkstyle in Gradle My Checkstyle setup follows a custom config file defined in the root folder of the application (a config.xml file). I am trying to figure out how to configure Spotless in Gradle so that Spotless will auto-format the code following the rules defined in that same XML file. I am trying to get some help with this problem. Please provide enough code so others can better understand or reproduce the problem.
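As far as I know, Spotless cannot read a Checkstyle XML directly; its Java steps take their own configuration, for example an Eclipse JDT formatter profile that you would create to mirror the Checkstyle rules. A sketch of what that could look like in build.gradle (the file path is hypothetical):

```groovy
spotless {
    java {
        // Eclipse JDT formatter profile approximating the Checkstyle rules.
        // This XML is an Eclipse formatter export, NOT the Checkstyle config itself.
        eclipse().configFile('config/eclipse-formatter.xml')
        removeUnusedImports()
    }
}
```

Running the spotlessApply task would then reformat the code, while Checkstyle keeps doing the checking against its own config.xml.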
How to update git on RHEL without access to root? I am trying to get my IDE set up at work for a project I am working on. I am coding in Python and working in PyCharm because of its awesome git support. However, whenever I try to configure git on PyCharm to clone my project, it tells me that my version of git, <IP_ADDRESS>, is too old and needs to be updated to at least <IP_ADDRESS>. I have searched around dozens of times and have only ever found solutions that require root access to achieve. Is there any simple way to update git on this machine? Compile it and then install it locally. Step-by-step:

1. Go to https://github.com/git/git and download the zip file. Extract it to a convenient place and cd into it. (If you are fine with git installing to ~/bin, you can skip 2 and 3 (source).)
2. Run make configure
3. Run ./configure --prefix=/some/absolute/path/to/your/private/bin where the path can be e.g.: /home/YOUR_USERNAME/.local - make sure the directory exists!
4. Run make && make install
5. Prepend the new bin directory to your $PATH environment variable, i.e.: export PATH="$HOME/.local/bin:$PATH", or export PATH="$HOME/bin:$PATH" in case you did not use configure to change the defaults. (Note that a tilde inside double quotes is not expanded, so use $HOME.)
6. You should now be able to run 'git'.

Optional: Add the export PATH="$HOME/YOURFOLDER:$PATH" statement to your ~/.profile (if it does not exist, create it and paste the line into it) so PATH is set each time you log in. More on this in the INSTALL file in the downloaded git source. Thank you very much, I understand. The only issue now is that I get an error on make install, saying: cannot change permissions of /usr/local/libexec/git-core: No such file or directory. I'll figure it out. @JHollowed weird. What have you set in --prefix= ? Actually, I just read that you can leave out steps 2 and 3. Then git will be installed to ~/bin (and not to the ~/.local I used in my example). @JHollowed Also make sure to flag the question as answered ;-) Installing to ~/bin will still require PREFIX to be set during configure or make install will fail.
@ssnobody No, by default git will install to ~/bin if no prefix is set. @larkey Installing to ~/bin was far enough outside of what I was expecting that I tried it. make configure and then ./configure --help shows: By default, 'make install' will install all the files in '/usr/local/bin', '/usr/local/lib' etc. You can specify an installation prefix other than '/usr/local' using '--prefix', for instance '--prefix=$HOME'. To be clear, I still think your answer is useful and I've upvoted it, but unless you can cite a source, I wouldn't mention the skipping parts 2 and 3 as the OP clearly wants to install to non-root locations and these steps appear to be necessary to do so. @ssnobody No offense taken, I should have more precisely mentioned where I took this from. It's from the INSTALL file in the root of the sources, 5l. I will add that to the answer. -- Regarding the 'make configure --help': Well that's weird, as when you run make install without root it can't install to /usr/local/bin ^^ (and it stands in clear contradiction with the INSTALL file) So I went ahead and actually did the ./configure and examined the Makefile. Sure enough, as you said, prefix = $(HOME) and bindir_relative = bin , bindir = $(prefix)/$(bindir_relative). Thanks for teaching me something new! @ssnobody No problem ;-) Thanks for reminding me to give a source :P Assuming you have the necessary C development tools installed, you can compile your own version of git from sources, and install it in $HOME/bin/ then ensure that's at the front of your PATH (assuming PyCharm just looks for git in your PATH). But how can I compile from source if I cannot use sudo? @JHollowed You do not need root to compile - root is only needed for access to system stuff - anything in HOME is ok. Some programs might want root on compilation though, so you need to set up a "fakeroot" environment. Otherwise just compile it and use it locally. @larkey Then how should I do that?
I cannot use sudo at all, and if I try yum install git, it denies me access and says the incident will be reported. I would be fine to compile and use it locally on this machine. Where should I look for instructions? @JHollowed We did tell you to COMPILE IT LOCALLY. This means: download the source code of git from the website and then build it locally (see the documentation on how to do that). @larkey Well I have never used Linux until last week, and this is my first professional programming experience, so bear with me. You mean to clone this repository: https://github.com/git/git, correct? From there I would not be sure what to do. I have read the INSTALL file, but it clearly assumes prior knowledge. Surely I can't just type make, make install into the terminal and it will work. Where are those commands supposed to be carried out? I have never compiled code before. @JHollowed Well, actually it's not more complicated. Don't assume code and Linux is complicated just because some ppl say so :P - At least on my machine this worked. -- actually you should do configure first. More instructions in the answer (I'm writing) @JHollowed added answer. This is still relevant, so based on my recent experience, I wanted to add some additional information to @larkey 's response above: If the make && make install fails, run yum install zlib-devel (or whatever your *nix distro uses to get the zlib packages installed). Once the make install completed, CentOS7 was still saying that 1.8.3 was the current version of git. I just moved the current git out of the way and created a symbolic link to the newly installed version: cd /usr/bin sudo mv git git_<IP_ADDRESS> sudo ln -s /home/<user name>/.local/bin/git git Not the most elegant solution, but it worked and I could move on to more pressing issues. I suppose I could have used the alternatives install to fix this, but whatevs.
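The PATH mechanics behind the compile-locally approach can be seen without actually building git: a private bin directory prepended to $PATH shadows the system binary. In this sketch a one-line fake script just stands in for a locally compiled git (and note the use of $HOME rather than a quoted ~, which the shell would not expand):

```shell
# Create a private bin directory and put a stand-in "git" there.
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho local-git\n' > "$HOME/bin/git"
chmod +x "$HOME/bin/git"

# Prepend it to PATH -- $HOME, not "~", because tilde is not
# expanded inside double quotes.
export PATH="$HOME/bin:$PATH"

command -v git   # now resolves to $HOME/bin/git
```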
The intervals $(2,4)$ and $(-1,17)$ have the same cardinality I have to prove that $(2,4)$ and $(-1,17)$ have the same cardinality. I have the definition of cardinality but my prof words things in the most confusing way possible. Help! Try to find a linear function, i.e. a line from $(2,4)$ to $(-1,17)$. Strictly speaking, an affine function, $x\mapsto ax+b$. ($b$ will be nonzero.) You need to find a bijective function $f\colon (2,4)\rightarrow (-1,17)$. As suggested in the comments, try with a function of the form $f(x) = ax+b$. You know that the interval $(2,4)$ of length $2$ should be stretched out to fit onto the interval $(-1,17)$, which has length $18$. In other words, $$a = 18/2 = 9.$$ But $f(x) = 9x$ doesn't work. It maps $(2,4)$ to $(18,36)$, so you need to adjust $f$ by adding some number $b$, i.e. you need to find $b$, such that $f(x) = 9x+b$ maps $(2,4)$ to the correct interval. When you have found a function that maps to the right place, you need to check that it is indeed a bijection. The general way to show two sets $X,Y$ have the same cardinality is to show that there is a function $f:X\rightarrow Y$ that is both 1) injective and 2) surjective. That is 1) for all $a\neq b\in X$ we must have $f(a)\neq f(b)$ and 2) for all $y\in Y$ there must exist $x\in X$ such that $f(x)=y$.
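Carrying out the hint explicitly, stretching by $9$ and then shifting gives the bijection:

```latex
f\colon (2,4)\to(-1,17),\qquad f(x) = 9x - 19,
\qquad f(2) = 18 - 19 = -1,\qquad f(4) = 36 - 19 = 17.
```

It is injective because it is strictly increasing (slope $9 > 0$), and surjective because $g(y) = (y+19)/9$ maps $(-1,17)$ back into $(2,4)$ with $f(g(y)) = y$.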
Append "\n" to all elements except last in Ruby array I have an Array of Hashes that look like this:

texts = [{:text => 'my text', :width => 123}, {:text => 'my other text', :width => 200}]

I want to have a final text that would look like this:

my_final_text = "my text\nmy other text"

I have tried doing this:

def concat_breakline(texts)
  final_text = ""
  texts.each do |text|
    final_text << text[:text] + "\n"
  end
end

But this would add "\n" to the last element, and I want to avoid that. How could I solve that? It's pretty easy:

texts.collect { |text| text[:text] }.join("\n")

The join method adds things in the middle but not the end.
Yeoman Workflow and Integration with Backend Scripts So, I've been anticipating Yeoman and it's already out for a week or so now. But after successfully installing it, I've been confused about the workflow and the integration with backend script (API). Scenario 1 So let's say I don't need all those shiny BBB/Ember/Angular stuff and use Yeoman just for jQuery/H5BP/Modernizr backed with CodeIgniter or Sinatra/Rails. Since yeoman server doesn't natively support PHP (I haven't tried Sinatra/Rails), I figure that the workflow is: Front End Development with Yeoman After it's finished, do yeoman build and then use the built dist folder as a base to develop backend (and probably copy the dist folder to another folder for backend implementation (let's say public folder) If I should change CSS/JS, use yeoman again, build and copy the dist folder to public again. So on and on... But using that workflow, that means the directory structure will be something like

cool-app/
  app/          (yeoman development stuff)
  test/         (yeoman development stuff)
  dist/         (yeoman built stuff)
  .dotfiles
  package.json
  Gruntfile.js

It's nice and all, but quite a bit different from the CodeIgniter / Rails directory structure. Not to mention there are name differences (is this configurable in Yeoman?), so it's kinda hard to imagine a good workflow developing both Front End and Back End in one go, except using the built result as a base for the backend. Scenario 2 BBB/Ember/Angular. Frankly I've been just testing those stuff, so any tips to implement with backend code is welcome! Though for all I know, yeoman can generate the necessary files for those frameworks inside the app folder, so I figure the solution of the first scenario will kinda solve the problem for scenario 2 Thanks a lot!
I like using this structure:

rails-app/
  app/
    views/
      js/
        app/
        test/
        Gruntfile.js
  public/

Here's how I set it up:

rails new rails-app
cd rails-app/app/views
mkdir js
cd js
yeoman init ember

Then edit Gruntfile.js to change "output: 'dist'" to "output: '../../../public'" After that, "yeoman build" or "yeoman build:dist" will output to the Rails /public folder. During dev, you can still use "yeoman server" to run yeoman in development mode, so any change you make will automatically be visible in the browser. Yeoman is great! Sanford's answer will work for Sinatra too of course, but there's a slightly different solution that can be used so that you don't have to issue "yeoman build" to run in development mode. In Sinatra, the public folder is configurable, so you can have a configure block that looks like this:

configure do
  set :public_folder, ENV['RACK_ENV'] == 'production' ? 'dist' : 'app'
end

Then use your routes like this:

get '/' do
  send_file File.join(settings.public_folder, 'index.html')
end

This is assuming that "yeoman init" was run in the root folder of the Sinatra application. All you do then is ensure that you've run "yeoman build" before deploying to a production environment, and the yeoman-optimised content will be used.
Row data var not found if view has no document I have a view panel where I want to style the color of the text in the row based on a value in the document.

<xp:viewPanel id="dataView1" var="rowData" rows="5"
    rowClasses="#{javascript:rowData.getColumnValue('objectStatus') == 'Inactive' ? 'status-inactive' : ''}">

This works perfectly fine if the view has at least one document, but if the view is empty I get the following error:

com.ibm.xsp.exception.EvaluationExceptionEx: Error while executing JavaScript computed expression
Error while executing JavaScript computed expression
Script interpreter error, line=1, col=10: [ReferenceError] 'rowData' not found

I'm guessing it has something to do with rowData not being created unless a document exists, but I can't figure out how to check for it. I tried if (rowData != null) and !@IsNull(rowData) but I still get the same error. How do I solve this problem? (Note that I am late to the XPages game.) EDIT: Thanks to all for the input but I was able to solve the issue by simply checking the view count:

if (getComponent('dataView1').getRowCount() > 0) {
    'Inactive'.equals(rowData.getColumnValue('objectStatus')) ? 'status-inactive' : ''
}

EDIT 2: Knut has a slightly quicker solution so I gave him credit. You can test it with if (typeof rowData !== 'undefined') .... If the view is empty then rowData is 'undefined'.

<xp:viewPanel id="viewPanel1" var="rowData" rows="5"
    rowClasses="#{javascript:
        if (typeof rowData !== 'undefined')
            rowData.getColumnValue('objectStatus') == 'Inactive' ? 'status-inactive' : ''
    }">

(This solution is probably some ms faster than .getRowCount() :-) ) Ahh thanks. I solved it a different way by checking the view's row count (see my edited post) but I didn't know you could check for 'undefined'. This information will be quite useful! Knut correctly explains the solution to what you're trying to do. The answer to why it doesn't work is slightly different.
rowData was an apt name to choose, it's going to be the current row's data. But you're setting a property for the DataView as a whole. What is the current row for the whole DataView? The answer is, there isn't one because you're not dealing with an individual row. @Paul Stephen Withers Thanks Paul. However, please see my response to Knut's comment--calculating classes based on rowData does work. Turns out I was able to get around the error simply by checking the view count. I added the solution to my post. @Paul: Jorey is right, it is possible to use the row variable rowData in rowClasses calculation. Therefore I deleted my first answer and replaced it with the 'undefined' solution.
Can't get Backbone routes without hashes? I want to have bookmarkable URLs that the browser can capture and handle. If I just use Backbone.history.start(), then I can use hash URLs, like /#accounts. But I want URLs without the hashes, a la /accounts. But I can't get this to work using Backbone.history.start( { pushState: true } ) (as others have described it). My routes are straightforward, and taken directly from the documentation.

MyRouter = Backbone.Router.extend({
  routes: {
    '/accounts': 'accounts',
  }
});

I'm using Chrome (also tried with FF), and the behaviour is that an /accounts request goes straight to the server, not being intercepted by Backbone first. Has anyone run into this? How do I get hash-less URL handling with Backbone? Thanks in advance The # is used for internal linking in HTML; all URLs without # will go to the server. You can still add routes, but all links without # will be handled by the server first. You would navigate to that URL with JS using router.navigate( "/accounts", true ), not by links or entering the URL yourself. To use links, you must bind a click event to them, prevent the default action and call navigate with the link's href. router is an instance of Router
Migrate Django app from Ubuntu to Red Hat Is there a best practice for migrating a Django app from an Ubuntu server to Red Hat? What could be the challenges? In the same vein, will PostgreSQL migration pose any challenges? Personally I've had my very basic Django-based app switching between Ubuntu/RedHat/Debian/Windows. I've not had any OS platform specific problems (only using Django and Django-pagination). I built and installed Django from the sources at www.djangoproject.com each time (Ubuntu had 0.99 for a long time and I wanted 1.0), I only used deb and rpm packages for Python. My DB has been MySQL not PostgreSQL but I've not had any problems with it. My advice would be to keep a very close eye on version numbers for things like wsgi, the Python interpreter etc., and if you have a test suite or the time to write one then that can be very reassuring when you migrate to a different platform to see the same results. Get some benchmarks before you move the site and compare them.
Using async/await and yield return with TPL Dataflow I am trying to implement a data processing pipeline using TPL Dataflow. However, I am relatively new to dataflow and not completely sure how to use it properly for the problem I am trying to solve. Problem: I am trying to iterate through the list of files and process each file to read some data and then further process that data. Each file is roughly 700MB to 1GB in size. Each file contains JSON data. In order to process these files in parallel and not run out of memory, I am trying to use IEnumerable<> with yield return and then further process the data. Once I get the list of files, I want to process a maximum of 4-5 files at a time in parallel. My confusion comes from: How to use IEnumerable<> and yield return with async/await and dataflow. Came across this answer by svick, but still not sure how to convert IEnumerable<> to ISourceBlock and then link all blocks together and track completion. In my case, the producer will be really fast (going through the list of files), but the consumer will be very slow (processing each file - read data, deserialize JSON). In this case, how to track completion. Should I use the LinkTo feature of datablocks to connect various blocks? Or use methods such as OutputAvailableAsync() and ReceiveAsync() to propagate data from one block to another. Code:

private const int ProcessingSize = 4;
private BufferBlock<string> _fileBufferBlock;
private ActionBlock<string> _processingBlock;
private BufferBlock<DataType> _messageBufferBlock;

public Task ProduceAsync()
{
    PrepareDataflow(token);
    var bufferTask = ListFilesAsync(_fileBufferBlock, token);
    var tasks = new List<Task> { bufferTask, _processingBlock.Completion };
    return Task.WhenAll(tasks);
}

private async Task ListFilesAsync(ITargetBlock<string> targetBlock, CancellationToken token)
{
    ... // Get list of file Uris ...
    foreach (var fileNameUri in fileNameUris)
        await targetBlock.SendAsync(fileNameUri, token);
    targetBlock.Complete();
}

private async Task ProcessFileAsync(string fileNameUri, CancellationToken token)
{
    var httpClient = new HttpClient();
    try
    {
        using (var stream = await httpClient.GetStreamAsync(fileNameUri))
        using (var sr = new StreamReader(stream))
        using (var jsonTextReader = new JsonTextReader(sr))
        {
            while (jsonTextReader.Read())
            {
                if (jsonTextReader.TokenType == JsonToken.StartObject)
                {
                    try
                    {
                        var data = _jsonSerializer.Deserialize<DataType>(jsonTextReader);
                        await _messageBufferBlock.SendAsync(data, token);
                    }
                    catch (Exception ex)
                    {
                        _logger.Error(ex, $"JSON deserialization failed - {fileNameUri}");
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        // Should throw?
        // Or if converted to block then report using Fault() method?
    }
    finally
    {
        httpClient.Dispose();
        buffer.Complete();
    }
}

private void PrepareDataflow(CancellationToken token)
{
    _fileBufferBlock = new BufferBlock<string>(new DataflowBlockOptions
    {
        CancellationToken = token
    });

    var actionExecuteOptions = new ExecutionDataflowBlockOptions
    {
        CancellationToken = token,
        BoundedCapacity = ProcessingSize,
        MaxMessagesPerTask = 1,
        MaxDegreeOfParallelism = ProcessingSize
    };

    _processingBlock = new ActionBlock<string>(async fileName =>
    {
        try
        {
            await ProcessFileAsync(fileName, token);
        }
        catch (Exception ex)
        {
            _logger.Fatal(ex, $"Failed to process file: {fileName}, Error: {ex.Message}");
            // Should fault the block?
        }
    }, actionExecuteOptions);

    _fileBufferBlock.LinkTo(_processingBlock, new DataflowLinkOptions { PropagateCompletion = true });

    _messageBufferBlock = new BufferBlock<DataType>(new ExecutionDataflowBlockOptions
    {
        CancellationToken = token,
        BoundedCapacity = 50000
    });
    _messageBufferBlock.LinkTo(DataflowBlock.NullTarget<DataType>());
}

In the above code, I am not using IEnumerable<DataType> and yield return as I cannot use it with async/await. So I am linking the input buffer to an ActionBlock<DataType> which in turn posts to another queue.
However by using ActionBlock<>, I cannot link it to the next block for processing and have to manually Post/SendAsync from ActionBlock<> to BufferBlock<>. Also, in this case, I am not sure how to track completion. This code works, but I am sure there could be a better solution than this where I can just link all the blocks (instead of ActionBlock<DataType> and then sending messages from it to BufferBlock<DataType>). Another option could be to convert IEnumerable<> to IObservable<> using Rx, but again I am not very familiar with Rx and don't know exactly how to mix TPL Dataflow and Rx. Your processing is CPU bound. Therefore, async IO is pointless. It does not save you one millisecond of processing time. Delete everything async and the problem becomes easy. @usr: I haven't looked closely at this specific scenario; the question is too broadly stated, and doesn't provide a good [mcve] with which one would actually fully understand the context. It may well be that async operations here are not useful. However, it is IMHO a fallacy to think that just because processing is CPU bound, async I/O is "pointless". Async operations provide architectural benefits independent of possible performance benefits, and the lack of the latter does not preclude the possibility of the former. @PeterDuniho What architectural benefits are there? You can always simulate any form of concurrency or parallelism using threads. The only point of async IO is being threadless (and in case of async IO plus await being awesome with GUI scenarios). The code quality detriments are significant, however. Going to refrain from closing this. Somehow I think there is a good core to this question. Since it is novel material, as opposed to the 100 rote async questions per day ("Oh my app locked up because I called Result or Wait!"), I'll give this the benefit of the doubt. @PeterDuniho "The only point of async IO is being threadless" -- I guess we'll have to agree to disagree.
First, async I/O isn't even "threadless"; it just happens to use the IOCP thread pool instead of requiring additional explicitly-created threads. Second, the async idiom in C# provides a very good, clean way to implement asynchronous code in a virtually non-asynchronous form, which is useful regardless of any performance benefits. YMMV. Actually, there is not even an IOCP thread. http://blog.stephencleary.com/2013/11/there-is-no-thread.html If there are 10m sockets open and being read, there are not 10m IOCP threads. Maybe a few dozen. I love await, though :) There will be IO-bound work here. The part which reads the file - either from disk or using HttpClient to open a stream - is fed into StreamReader and JsonTextReader. So yes, to answer your comments (@usr) - there will be IO work, and async/await also provides a cleaner way to implement the code, though JSON.NET does not support async/await yet. Think of a scenario where I don't have a file on disk, but one hosted on a web server. So, I would have something like stream = await httpClient.GetStreamAsync(uri) and then pass that stream to StreamReader and in turn JsonTextReader. Also, the file would be really large, so instead of deserializing the whole file at once, I would like to process it record by record. That way I would not hit an OutOfMemoryException. @TejasVora, the _fileBufferBlock implementation feels a bit... dirty. All you're doing is dumping your file names into the BufferBlock, which does not have a capacity limit, incurring async/await overheads for zero benefit. You could reduce the number of moving parts and just post or send each of your filenames directly to your ActionBlock instead. That ActionBlock also has a BoundedCapacity, and so will throttle the producer for you, thereby managing the back-pressure (which may be important as you stated that your producer is much faster than the consumer).
You would like to introduce a cache or queue in between when the producer and consumer are running at different paces (usually the producer is faster than the consumer). Depending on your requirements, the queue could be as simple as MSMQ or a high-performance one like RabbitMQ, Redis and so on. @Saleem, respectfully, TPL Dataflow has perfectly functional controls for synchronising producer and consumer blocks, so bringing in a monster like MSMQ into this is absolutely, totally unnecessary. @KirillShlenskiy You are correct. Instead of posting to BufferBlock<>, I can directly post to ActionBlock<> or TransformBlock<,> or IPropagateBlock<,> and set BoundedCapacity and MaxMessagesPerTask and MaxDegreeOfParallelism on that block. The code above is sort of sample code. @TejasVora are there fragments of a file that could be processed in parallel, or is it strictly sequential? @alexm it's strictly sequential, as it contains a JSON list. So I can't break it down and have different tasks process different parts. ok, another question: How complex is it to process a single DataType structure? @alexm It's a fairly complex structure. With lots of nested classes and arrays. I should have clarified the question: after a DataType instance is constructed, is there still some expensive processing to be done? @alexm yes, need to push that data to various datastores/databases @TejasVora: If it is acceptable to read all DataType instances for a file at the same time (into an array), then you can use TransformManyBlock. If it's not, then what you have is as good as dataflow can do (Rx can do more). @StephenCleary that's the problem. Cannot read everything from the file into an array or list. This will cause an out-of-memory exception. So the next choice I have is to use IEnumerable<> or IObservable<> and process the records from the file. @TejasVora: See my answer. Since the JSON readers force synchrony, that's the best you can do.
Question 1

You plug an IEnumerable<T> producer into your TPL Dataflow chain by using Post or SendAsync directly on the consumer block, as follows:

foreach (string fileNameUri in fileNameUris)
{
    await _processingBlock.SendAsync(fileNameUri).ConfigureAwait(false);
}

You can also use a BufferBlock<TInput>, but in your case it actually seems rather unnecessary (or even harmful - see the next part).

Question 2

When would you prefer SendAsync instead of Post? If your producer runs faster than the URIs can be processed (and you have indicated this to be the case), and you choose to give your _processingBlock a BoundedCapacity, then when the block's internal buffer reaches the specified capacity, your SendAsync will "hang" until a buffer slot frees up, and your foreach loop will be throttled. This feedback mechanism creates back pressure and ensures that you don't run out of memory.

Question 3

You should definitely use the LinkTo method to link your blocks in most cases. Unfortunately yours is a corner case due to the interplay of IDisposable and very large (potentially) sequences. So your completion will flow automatically between the buffer and processing blocks (due to LinkTo), but after that - you need to propagate it manually. This is tricky, but doable. I'll illustrate this with a "Hello World" example where the producer iterates over each character and the consumer (which is really slow) outputs each character to the Debug window. Note: LinkTo is not present.

// REALLY slow consumer.
var consumer = new ActionBlock<char>(async c =>
{
    await Task.Delay(100);
    Debug.Print(c.ToString());
}, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });

var producer = new ActionBlock<string>(async s =>
{
    foreach (char c in s)
    {
        await consumer.SendAsync(c);
        Debug.Print($"Yielded {c}");
    }
});

try
{
    producer.Post("Hello world");
    producer.Complete();
    await producer.Completion;
}
finally
{
    consumer.Complete();
}

// Observe combined producer and consumer completion/exceptions/cancellation.
await Task.WhenAll(producer.Completion, consumer.Completion);

This outputs:

Yielded H
H
Yielded e
e
Yielded l
l
Yielded l
l
Yielded o
o
Yielded 
 
Yielded w
w
Yielded o
o
Yielded r
r
Yielded l
l
Yielded d
d

As you can see from the output above, the producer is throttled and the handover buffer between the blocks never grows too large. EDIT You might find it cleaner to propagate completion via

producer.Completion.ContinueWith(
    _ => consumer.Complete(),
    TaskContinuationOptions.ExecuteSynchronously
);

... right after the producer definition. This allows you to slightly reduce producer/consumer coupling - but at the end you still have to remember to observe Task.WhenAll(producer.Completion, consumer.Completion). In order to process these files in parallel and not run out of memory, I am trying to use IEnumerable<> with yield return and then further process the data. I don't believe this step is necessary. What you're actually avoiding here is just a list of filenames. Even if you had millions of files, the list of filenames is just not going to take up a significant amount of memory. I am linking the input buffer to ActionBlock which in turn posts to another queue. However by using ActionBlock<>, I cannot link it to the next block for processing and have to manually Post/SendAsync from ActionBlock<> to BufferBlock<>. Also, in this case, I am not sure how to track completion. ActionBlock<TInput> is an "end of the line" block. It only accepts input and does not produce any output.
In your case, you don't want ActionBlock<TInput>; you want TransformManyBlock<TInput, TOutput>, which takes input, runs a function on it, and produces output (with any number of output items for each input item). Another point to keep in mind is that all buffer blocks have an input buffer. So the extra BufferBlock is unnecessary. Finally, if you're already in "dataflow land", it's usually best to end with a dataflow block that actually does something (e.g., ActionBlock instead of BufferBlock). In this case, you could use the BufferBlock as a bounded producer/consumer queue, where some other code is consuming the results. Personally, I would consider that it may be cleaner to rewrite the consuming code as the action of an ActionBlock, but it may also be cleaner to keep the consumer independent of the dataflow. For the code below, I left in the final bounded BufferBlock, but if you use this solution, consider changing that final block to a bounded ActionBlock instead.

private const int ProcessingSize = 4;
private static readonly HttpClient HttpClient = new HttpClient();
private TransformManyBlock<string, DataType> _processingBlock;
private BufferBlock<DataType> _messageBufferBlock;

public Task ProduceAsync()
{
    PrepareDataflow(token);
    ListFiles(_processingBlock, token);
    _processingBlock.Complete();
    return _processingBlock.Completion;
}

private void ListFiles(ITargetBlock<string> targetBlock, CancellationToken token)
{
    ...
    // Get list of file Uris, occasionally calling token.ThrowIfCancellationRequested()
    foreach (var fileNameUri in fileNameUris)
        _processingBlock.Post(fileNameUri);
}

private async Task<IEnumerable<DataType>> ProcessFileAsync(string fileNameUri, CancellationToken token)
{
    return Process(await HttpClient.GetStreamAsync(fileNameUri), token);
}

private IEnumerable<DataType> Process(Stream stream, CancellationToken token)
{
    using (stream)
    using (var sr = new StreamReader(stream))
    using (var jsonTextReader = new JsonTextReader(sr))
    {
        while (jsonTextReader.Read())
        {
            token.ThrowIfCancellationRequested();
            if (jsonTextReader.TokenType == JsonToken.StartObject)
            {
                // C# does not allow yield return inside a try block that has a
                // catch clause, so deserialize first and yield afterwards.
                DataType data = null;
                try
                {
                    data = _jsonSerializer.Deserialize<DataType>(jsonTextReader);
                }
                catch (Exception ex)
                {
                    _logger.Error(ex, "JSON deserialization failed");
                }
                if (data != null)
                    yield return data;
            }
        }
    }
}

private void PrepareDataflow(CancellationToken token)
{
    var executeOptions = new ExecutionDataflowBlockOptions
    {
        CancellationToken = token,
        MaxDegreeOfParallelism = ProcessingSize
    };
    _processingBlock = new TransformManyBlock<string, DataType>(fileName =>
        ProcessFileAsync(fileName, token), executeOptions);
    _messageBufferBlock = new BufferBlock<DataType>(new DataflowBlockOptions
    {
        CancellationToken = token,
        BoundedCapacity = 50000
    });
}

Alternatively, you could use Rx. Learning Rx can be pretty difficult though, especially for mixed asynchronous and parallel dataflow situations, which you have here. As for your other questions: How to use IEnumerable<> and yield return with async/await and dataflow. async and yield are not compatible at all. At least in today's language. In your situation, the JSON readers have to read from the stream synchronously anyway (they don't support asynchronous reading), so the actual stream processing is synchronous and can be used with yield. Doing the initial back-and-forth to get the stream itself can still be asynchronous and can be used with async.
This is as good as we can get today, until the JSON readers support asynchronous reading and the language supports async yield. (Rx could do an "async yield" today, but the JSON reader still doesn't support async reading, so it won't help in this particular situation). In this case, how to track completion. If the JSON readers did support asynchronous reading, then the solution above would not be the best one. In that case, you would want to use a manual SendAsync call, and would need to link just the completion of these blocks, which can be done as such:

_processingBlock.Completion.ContinueWith(
    task =>
    {
        if (task.IsFaulted)
            ((IDataflowBlock)_messageBufferBlock).Fault(task.Exception);
        else if (!task.IsCanceled)
            _messageBufferBlock.Complete();
    },
    CancellationToken.None,
    TaskContinuationOptions.DenyChildAttach | TaskContinuationOptions.ExecuteSynchronously,
    TaskScheduler.Default);

Should I use the LinkTo feature of datablocks to connect various blocks? Or use methods such as OutputAvailableAsync() and ReceiveAsync() to propagate data from one block to another. Use LinkTo whenever you can. It handles all the corner cases for you. // Should throw? // Should fault the block? That's entirely up to you. By default, when any processing of any item fails, the block faults, and if you are propagating completion, the entire chain of blocks would fault. Faulting blocks are rather drastic; they throw away any work in progress and refuse to continue processing. You have to build a new dataflow mesh if you want to retry. If you prefer a "softer" error strategy, you can either catch the exceptions and do something like log them (which your code currently does), or you can change the nature of your dataflow block to pass along the exceptions as data items. Question: Why cannot I use SendAsync and have to use Post here? @TejasVora: Let me turn that around. What benefit would SendAsync have over Post in this case? Nothing in particular. Just an informative question.
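The bounded-capacity back-pressure discussed in these answers is not C#-specific. As a rough cross-language sketch only — Python's asyncio.Queue(maxsize=1) standing in for a block with BoundedCapacity = 1, with invented item values and event labels — the suspend-until-a-slot-frees behaviour looks like this:

```python
# Illustration only: queue.put() suspends the producer while the bounded
# buffer is full, which is the same back-pressure effect as a bounded
# dataflow block throttling SendAsync.
import asyncio

async def main():
    queue = asyncio.Queue(maxsize=1)   # bounded handover buffer
    events = []

    async def producer():
        for c in "Hi":
            await queue.put(c)         # suspends here while the queue is full
            events.append(f"yielded {c}")
        await queue.put(None)          # sentinel: manual completion signal

    async def consumer():
        while True:
            c = await queue.get()
            if c is None:
                break
            await asyncio.sleep(0.01)  # REALLY slow consumer
            events.append(f"consumed {c}")

    await asyncio.gather(producer(), consumer())
    return events

events = asyncio.run(main())
print(events)
```

As in the Hello World example, the producer can never run more than one item ahead of the consumer, so the handover buffer stays tiny.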
It would be worth looking at Rx. Unless I'm missing something, the entire code that you need (apart from your existing ProcessFileAsync method) would look like this:

var query = fileNameUris
    .Select(fileNameUri => Observable
        .FromAsync(ct => ProcessFileAsync(fileNameUri, ct)))
    .Merge(maxConcurrent: 4);

var subscription = query
    .Subscribe(
        u => { },
        () => { Console.WriteLine("Done."); });

Done. It runs asynchronously. It's cancellable by calling subscription.Dispose();. And you can specify the maximum parallelism.
How to open a file whose name is stored in a pandas cell, manipulate the contents and store in a new column Dataframe Example

index  fileName      startline  endline
0      293104.java   30         40
1      288951.java   183        247
2      2378709.java  98         117

Goal I want to open and read the contents of the file in fileName, and extract the lines in the range created by the values in the startline and endline columns. I then want to store that in a new column called snippet. Example of snippet creation logic

def snippetMaker(fileName, startLine, endLine):
    file = open(fileName,'r').read()
    snippet = file.split('\n')[startLine:endLine]
    cleanSnippet = str(snippet).replace('[','').replace(']','').replace(',',' ')
    return cleanSnippet

Current approach I have seen that map() is often used in functions like that shown above (given that the function can accept iterable arguments and returns a list) then set equal to a dataframe column like below.

df['snippet'] = snippetMaker(df['fileName'], df['startline'], df['endline'])

I am having trouble reconfiguring the above snippetMaker function to work in such a way. Other details I do not want to use iterrows(), the dataframe contains over 8m rows. If you use apply, which will apply the function to each row, you write the function to take a single row of the dataframe and then you can use dot notation to access the columns in the function.

def snippetMaker(row):
    file = open(row.fileName, 'r').read()
    snippet = file.split('\n')[row.startline:row.endline]
    cleanSnippet = str(snippet).replace('[','').replace(']','').replace(',',' ')
    return cleanSnippet

df['snippet'] = df.apply(snippetMaker, axis=1)
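To make the apply approach concrete, here is a self-contained sketch. The file names, contents, and line ranges below are invented for the demo, and a with block is used so each file handle is closed — which matters when the function runs once per row over 8m rows:

```python
# Demo of the row-wise apply approach; all file names and contents are made up.
import os
import tempfile

import pandas as pd

def snippet_maker(row):
    """Join lines [row.startline, row.endline) of row.fileName with spaces."""
    with open(row.fileName, "r") as fh:  # context manager closes the handle
        lines = fh.read().split("\n")[row.startline:row.endline]
    return " ".join(lines)

# Two tiny stand-in "source files" in a temp directory.
tmp = tempfile.mkdtemp()
for name, body in [("a.java", "l0\nl1\nl2\nl3"), ("b.java", "x0\nx1\nx2")]:
    with open(os.path.join(tmp, name), "w") as fh:
        fh.write(body)

df = pd.DataFrame({
    "fileName": [os.path.join(tmp, "a.java"), os.path.join(tmp, "b.java")],
    "startline": [1, 0],
    "endline": [3, 2],
})
df["snippet"] = df.apply(snippet_maker, axis=1)
print(df["snippet"].tolist())  # ['l1 l2', 'x0 x1']
```

Note that apply still runs the function once per row in Python, so at 8m rows the per-file I/O, not pandas, will dominate the runtime.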
KVM LVM-based guest... kernel: Buffer I/O error on device. Failing drive? Currently setting up a small KVM host to run a few VMs for a small business. The server has 2 drives in software md RAID 1, then I have it set as a PV in an LVM setup. Guests and hosts are all CentOS 6.4 64bit. The / partitions of the KVM guests are disk images, but with one particular guest that will have higher i/o requirements, I've added a 2nd HDD to the VM which is a logical volume from the host storage pool. I was running some pretty intense i/o on this LV this evening within the guest, extracting a 60GB multi-volume 7z archive of data. 7z barfed about 1/5th of the way through with E_FAIL. I tried to move some files around on this LV disk and was greeted with "cannot move ... read-only file system". All devices are mounted rw. I looked in /var/log/messages and saw the following errors...

Nov 22 21:47:52 mail kernel: Buffer I/O error on device vdb1, logical block 47307631
Nov 22 21:47:52 mail kernel: lost page write due to I/O error on vdb1
Nov 22 21:47:52 mail kernel: Buffer I/O error on device vdb1, logical block 47307632
Nov 22 21:47:52 mail kernel: lost page write due to I/O error on vdb1
Nov 22 21:47:52 mail kernel: Buffer I/O error on device vdb1, logical block 47307633
Nov 22 21:47:55 mail kernel: end_request: I/O error, dev vdb, sector 378473448
Nov 22 21:47:55 mail kernel: end_request: I/O error, dev vdb, sector 378474456
Nov 22 21:47:55 mail kernel: end_request: I/O error, dev vdb, sector 378475464
Nov 22 21:47:55 mail kernel: JBD: Detected IO errors while flushing file data on vdb1
Nov 22 21:47:55 mail kernel: end_request: I/O error, dev vdb, sector 255779688
Nov 22 21:47:55 mail kernel: Aborting journal on device vdb1.
Nov 22 21:47:55 mail kernel: end_request: I/O error, dev vdb, sector 255596560
Nov 22 21:47:55 mail kernel: JBD: I/O error detected when updating journal superblock for vdb1.
Nov 22 21:48:06 mail kernel: __ratelimit: 20 callbacks suppressed Nov 22 21:48:06 mail kernel: __ratelimit: 2295 callbacks suppressed Nov 22 21:48:06 mail kernel: Buffer I/O error on device vdb1, logical block 47270479 Nov 22 21:48:06 mail kernel: lost page write due to I/O error on vdb1 Nov 22 21:48:06 mail kernel: Buffer I/O error on device vdb1, logical block 47271504 Nov 22 21:48:06 mail kernel: end_request: I/O error, dev vdb, sector 378116680 Nov 22 21:48:06 mail kernel: end_request: I/O error, dev vdb, sector 378157680 Nov 22 21:48:06 mail kernel: end_request: I/O error, dev vdb, sector 378432440 Nov 22 21:51:25 mail kernel: EXT3-fs (vdb1): error: ext3_journal_start_sb: Detected aborted journal Nov 22 21:51:25 mail kernel: EXT3-fs (vdb1): error: remounting filesystem read-only Nov 22 21:51:55 mail kernel: __ratelimit: 35 callbacks suppressed Nov 22 21:51:55 mail kernel: __ratelimit: 35 callbacks suppressed Nov 22 21:51:55 mail kernel: Buffer I/O error on device vdb1, logical block 64003824 Nov 22 21:51:55 mail kernel: Buffer I/O error on device vdb1, logical block 64003839 Nov 22 21:51:55 mail kernel: Buffer I/O error on device vdb1, logical block 256 Nov 22 21:51:55 mail kernel: Buffer I/O error on device vdb1, logical block 32 Nov 22 21:51:55 mail kernel: Buffer I/O error on device vdb1, logical block 64 Nov 22 21:51:55 mail kernel: end_request: I/O error, dev vdb, sector 6144 Nov 22 21:55:06 mail yum[19139]: Installed: lsof-4.82-4.el6.x86_64 Nov 22 21:59:47 mail kernel: __ratelimit: 1 callbacks suppressed Nov 22 22:00:01 mail kernel: __ratelimit: 1 callbacks suppressed Nov 22 22:00:01 mail kernel: Buffer I/O error on device vdb1, logical block 64003824 Nov 22 22:00:01 mail kernel: Buffer I/O error on device vdb1, logical block 512 There were plenty more than that, full excerpt here http://pastebin.com/vH8SDrCg Note the point where there's an i/o error when "updating journal superblock" then later the volume is re-mounted as read-only because of an aborted 
journal. So time to look at the host. cat /proc/mdstat returns UU for both RAID 1 arrays (boot and main PV), and mdadm --detail shows state: clean and state: active respectively. The typical LVM commands pvs, vgs and lvs all return the following error:
/dev/VolGroup00/lv_mail: read failed after 0 of 4096 at<PHONE_NUMBER>80: Input/output error
/dev/VolGroup00/lv_mail: read failed after 0 of 4096 at<PHONE_NUMBER>24: Input/output error
/dev/VolGroup00/lv_mail: read failed after 0 of 4096 at 0: Input/output error
/dev/VolGroup00/lv_mail: read failed after 0 of 4096 at 4096: Input/output error
VG         #PV #LV #SN Attr   VSize   VFree
VolGroup00   1   4   1 wz--n- 930.75g 656.38g
/var/log/messages on the host shows this:
Nov 22 21:47:53 localhost kernel: device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.
Nov 22 22:11:04 localhost kernel: Buffer I/O error on device dm-3, logical block 0
Nov 22 22:11:04 localhost kernel: Buffer I/O error on device dm-3, logical block 1
Nov 22 22:11:04 localhost kernel: Buffer I/O error on device dm-3, logical block 2
Nov 22 22:11:04 localhost kernel: Buffer I/O error on device dm-3, logical block 3
Nov 22 22:11:04 localhost kernel: Buffer I/O error on device dm-3, logical block 0
Nov 22 22:11:04 localhost kernel: Buffer I/O error on device dm-3, logical block 64004095
Nov 22 22:11:04 localhost kernel: Buffer I/O error on device dm-3, logical block 64004095
Nov 22 22:11:04 localhost kernel: Buffer I/O error on device dm-3, logical block 0
A short self-test with smartctl revealed nothing on either physical disk. There were no worrying error counters in the SMART data either; almost all are at 0 apart from power-on hours, spin-up time and temperature. Even the power-on hours were relatively low, about 150 days or so. I currently have long self-tests in progress. So based on all that, what's the likelihood of this being the start of a drive failure? Worth running an fsck or badblocks in the host?
I don't want to cause a full kernel panic at this stage. I would have thought mdstat would have shown a failed array member by now, about 1hr after the event. This machine is a dedicated server so I don't have physical access. I'll check the console through the DRAC shortly but I'm expecting to see a bunch of i/o errors on the console. I don't have virtual media access so can't load systemrescuecd to do repairs, so I'm a bit wary of rebooting at this stage. Any chance on this? I have the same error. I copied all the physical devices to a file, mounted them with a loop device and it still happens. So it's not the backing drive.
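One low-risk first step, sketched here with a stand-in log sample rather than this machine's real /var/log/messages, is to pull the distinct failing sector numbers out of the saved kernel messages so they can later be mapped to files or fed to the drive tooling:

```shell
# Collect the unique failing sectors from the end_request I/O error lines.
# A saved sample is used here; point grep at /var/log/messages for real use.
log='Nov 22 21:48:06 mail kernel: end_request: I/O error, dev vdb, sector 378116680
Nov 22 21:48:06 mail kernel: end_request: I/O error, dev vdb, sector 378157680
Nov 22 21:48:06 mail kernel: end_request: I/O error, dev vdb, sector 378116680'
printf '%s\n' "$log" | grep -oE 'sector [0-9]+' | awk '{print $2}' | sort -un
```

Duplicate entries collapse to one line per sector, which keeps the list short even when the kernel repeats the same failing sector many times.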
Detecting Each Curve in OpenCV I have an image, and the image has lots of curves. I want to detect each curve and save the coordinates of each one. First I used contours, but I can't separate the individual curves. I want to detect each curve like this: This is the way I would proceed. Using a grid, I would sample the image over small regions, collect the points that are part of the white lines, and use a collinearity function to test and group points that belong to the same line. With some additional tolerance, the line can also be a slightly curved line.
Initialization of inner private class within public constructor call of outer class - What's the standard's intention for this? Recently, I came across code like this:

class NeedsFactoryForPublicCreation {
private:
    struct Accessor {
        // Enable in order to make the compilation fail (expected behavior)
        // explicit Accessor() = default;
    };

public:
    explicit NeedsFactoryForPublicCreation(Accessor) {}

    // This was intended to be the only public construction way
    static NeedsFactoryForPublicCreation create() {
        NeedsFactoryForPublicCreation result{Accessor()};
        // ... Do something (complex parametrization) further on
        return result;
    }
};

int main() {
    //NeedsFactoryForPublicCreation::Accessor publicAccessor {}; // Does not compile, as expected
    NeedsFactoryForPublicCreation nonDefaultConstructible {{}}; // Compiles even with Accessor being an internal
}

At first, I was a bit shocked that this compiles. After some minutes of analysis (and occurring self-doubts...), I found out that this is fully explainable due to the definition of access specifiers and the way the compiler decides which actual way of initialization is used (aggregate vs. public constructor look-up). To extend the first confusion, even this access class compiles this way:

class Accessor {
    Accessor() = default; // private, but we are an aggregate...
    friend class NeedsFactoryForPublicCreation;
};

which is again fully explainable since Accessor is still an aggregate (another confusing topic...). But in order to emphasize that this question has nothing to do with aggregates in detail (thanks to Jarod42 in the comments for pointing out the upcoming C++20 changes here!), this compiles too:

class Accessor {
public:
    Accessor() {}
    virtual ~Accessor() {}
    virtual void enSureNonAggregate() const {}
    friend class NeedsFactoryForPublicCreation;
};

and does not as soon as the constructor becomes private.
My question: what's the actual background for the standard deciding the effect of access specifiers this counterintuitively? By counterintuitively I especially mean the inconsistency of the effective look-up (the compiler doesn't care about the internal as long as we stay unnamed, but then does care when it comes to the actual constructor look-up, still in a fully unnamed context). Or maybe I'm mixing categories too much here. I know access specifiers' background is quite strictly naming-based, and there are also other ways to achieve the "publication" of this internal, but as far as I know, they are far more explicit. Legally, implicitly creating an unnamed temporary of a private internal within an outer scope via this extremely implicit way looks horrible and might even be quite error-prone, at the latest when multiple arguments are involved (uniform-initialized empty std containers...). This has been fixed (private: Accessor() = default;) in C++20 IIRC. Rules have changed for aggregate initialization (probably to fix issues like that; it's not easy to see all possible interactions). @Jarod42 thanks! Maybe I should emphasize the non-aggregate case within my question further, to focus more on the core question of the look-up behavior itself. That won't answer the question, but if // This was intended to be the only public construction way, why do you need explicit NeedsFactoryForPublicCreation(Accessor) {} to be public then? @t.niese, for the concrete scenario, I still want to refer to std::make_unique and/or std::make_shared. @t.niese: to use with std::make_unique, for example; private only forbids "naming" it. So with the C++20 fixes, and providing a (default) private constructor, I think there are no issues. (Or did you want different rules for generated constructors of private/protected/public classes?) @t.niese: and the passkey idiom in the generic case.
@Jarod42 you're right, with C++20 there should not be any remaining issues here for the actual intention of the code I used (either I use structs with explicit default constructors or a class with a private (default) constructor). But it's still confusing as hell that you can bypass obvious 'intuitive' scopes that easily. I'm aware that you are trying to find a way to prevent NeedsFactoryForPublicCreation nonDefaultConstructible {{}}; from being done by accident. But FYI, generally, if you expose types (as return types or parameters), then those types can be retrieved and used, so with explicit NeedsFactoryForPublicCreation(Accessor) {} the type Accessor is exposed. And for the shown example you could retrieve Accessor and instantiate it anywhere. @t.niese Yes I know, but exposing the accessor as a return type/object is quite explicit about the underlying intention. When only referring to it as a function argument, I do not see this degree of exposure a priori; maybe a question of personal preference. It's worth mentioning that the whole issue here arises from the make_shared/make_unique core issue for me, actually. Public constructors of classes, whether nested or not, tell me at first glance that they are meant to be constructed "from the public" a priori. The accessor scheme might be some kind of a design attack then. What's the actual background: "Fixing" access modifiers - which currently only protect access to the name - without breaking something looks like a problematic task. Having a public explicit NeedsFactoryForPublicCreation(Accessor) and a library function like make_unique (that is treated like any other function) that passes an instance of Accessor to the constructor (requiring it to make a copy and calling the copy constructor) is not that much of a difference from NeedsFactoryForPublicCreation nonDefaultConstructible {{}} with respect to (required) access rights.
LibreOffice Calc / OpenOffice Calc / Excel: How to display a negative time duration? I use a LibreOffice/OpenOffice spreadsheet to track my sleep. Column A contains the time I fell asleep, and column B contains the time I woke up. Column C contains an "adjustment"; if I woke up early, and then went back to sleep, I'll subtract the duration I was awake. Similarly, if I take a nap, I'll add in the duration I napped. Column D computes the duration of sleep: =(B1-A1)+C1 The trouble I'm having is visually representing column C effectively. I want it to show "-1:15" if I woke up early, was awake for 1 hour and 15 minutes, and then went back to bed. Conversely, I want it to show "+1:45" if I took a nap for an hour and 45 minutes. How can I format column C to do this? I've tried several custom formatting options, and none of my attempts have succeeded thus far. Everything I have tried winds up displaying "22:45" instead of "-1:15". There is a hack that you can use in Excel: go into Options | Advanced | When calculating this workbook and tick 'Use 1904 date system'. Then it will let you set the format to +hh:mm;-hh:mm and everything should work OK. In OpenOffice it will let you use +hh:mm;-hh:mm. In LibreOffice Calc you should format the cells as [HH]:MM to use negative times and hours. Thanks. Upvoted. This appears to be the new default in LibreOffice when entering negative times (well, technically it's [HH]:MM:SS, but same idea). So either someone at LO read this thread and improved the functionality, or others had the same issue. This formatting works well for negative times (you get the minus sign before the hour), but is there a way to get a plus sign before the hour for positive times? You may now also try [BLUE][HH]:MM;[RED]-[HH]:MM to get the different colors.
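The display being asked for is just sign-aware hours:minutes formatting of a signed duration. As a plain-code illustration of what the cell format does (not something run inside Calc; the helper name is made up), it can be sketched like this:

```python
def signed_hhmm(minutes: int) -> str:
    """Render a signed duration in minutes as '+H:MM' / '-H:MM',
    the way the spreadsheet's adjustment column should display."""
    sign = "-" if minutes < 0 else "+"
    m = abs(minutes)
    return f"{sign}{m // 60}:{m % 60:02d}"

# 1h15m awake during the night, then a 1h45m nap
print(signed_hhmm(-75))   # -1:15
print(signed_hhmm(105))   # +1:45
```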
WPF Radiobutton property binding I am writing a wizard-based application where I load a different UserControl for each step. The UserControl that is loaded has a radio button bound to two properties. When I try to load an already existing UserControl, the radio button status is not restored: the value of the property to which the radio button is bound is set to false. Below are the view and the model code snippets:

public bool Yes
{
    get { return _yes; }
    set
    {
        _yes = value; // Value is set to false whenever the view is reloaded.
        NotifyPropertyChanged(value.ToString());
    }
}

public bool No
{
    get { return _no; }
    set
    {
        _no = value;
        Yes = !value;
        //if (value)
        //{
        //    Yes = !_no;
        //}
    }
}

View:

<RadioButton x:Name="Yes" GroupName="Check" Content="Yes" Margin="24,0,0,0" IsChecked="{Binding Yes, Mode=TwoWay}"/>
<RadioButton x:Name="No" GroupName="Check" Content="No" Margin="24,0,0,0" IsChecked="{Binding No, Mode=TwoWay}"/>

Would like to know why and how the value gets set to false? It is probably because of the TwoWay binding, or because of another part of your code. Also, it's wrong to pass value.ToString(); NotifyPropertyChanged expects the property name, not the value. Where is the sense in having two dependent properties for one boolean state, yes or no? Where are you "restoring" the values? It should work if you set either the Yes or No property to true initially. Please refer to the following sample code where "Yes" is selected initially:

private bool _yes = true;
public bool Yes
{
    get { return _yes; }
    set
    {
        _yes = value; // Value is set to false whenever the view is reloaded.
        _no = !value;
        NotifyPropertyChanged("Yes");
    }
}

private bool _no;
public bool No
{
    get { return _no; }
    set
    {
        _no = value;
        _yes = !value;
        NotifyPropertyChanged("No");
    }
}

You could use a single property and a converter that you can use as an inverter:

private bool _IsYes;
public bool IsYes
{
    get { return _IsYes; }
    set
    {
        _IsYes = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs("IsYes"));
    }
}

Here the BooleanInverter:

public class BooleanInverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value == null) return false;
        return !(bool)value; // 'value' arrives boxed as object, so cast before negating
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        if (value == null) return false;
        return !(bool)value;
    }
}

The XAML could look like this. Add the inverter as a resource:

<Window.Resources>
    <local:BooleanInverter x:Key="Inverter"/>
</Window.Resources>

And use it:

<RadioButton Content="Yes" IsChecked="{Binding IsYes}"/>
<RadioButton Content="No" IsChecked="{Binding IsYes, Converter={StaticResource Inverter}}"/>
How can I get my table to populate using a trigger function? I have a function:

CREATE OR REPLACE FUNCTION delete_student() RETURNS TRIGGER AS
$BODY$
BEGIN
    IF (TG_OP = 'DELETE') THEN
        INSERT INTO cancel(eno, excode, sno, cdate, cuser)
        VALUES ((SELECT entry.eno FROM entry JOIN student ON (entry.sno = student.sno) WHERE entry.sno = OLD.sno),
                (SELECT entry.excode FROM entry JOIN student ON (entry.sno = student.sno) WHERE entry.sno = OLD.sno),
                OLD.sno, current_timestamp, current_user);
    END IF;
    RETURN OLD;
END;
$BODY$
LANGUAGE plpgsql;

and I also have the trigger:

CREATE TRIGGER delete_student
BEFORE DELETE ON student
FOR EACH ROW EXECUTE PROCEDURE delete_student();

The idea is that when I delete a student from the student relation, the corresponding entry in the entry relation is also deleted and my cancel relation is updated. This is what I put into my student relation:

INSERT INTO student(sno, sname, semail) VALUES (1, 'a. adedeji', 'ayooladedeji@live.com');

and this is what I put into my entry relation:

INSERT INTO entry(excode, sno, egrade) VALUES (1, 1, 98.56);

When I execute the command DELETE FROM student WHERE sno = 1; it deletes the student and also the corresponding entry, and the query returns with no errors. However, when I run a SELECT on my cancel table, the table shows up empty. Can you not use a 'cascade' drop constraint? Triggers seem like overkill. You didn't post your DB schema and it's not very clear what your problem is, but it looks like a cascade delete is interfering somewhere. Specifically: before deleting the student, you insert something into cancel that references it. Postgres proceeds to delete the row in student. Postgres proceeds to honor all applicable cascade deletes. Postgres deletes rows in entry and ... cancel (including the one you just inserted). A few remarks: firstly, and as a rule of thumb, before triggers should never have side effects on anything but the row itself.
Inserting a row in a before-delete trigger is a big no-no: besides introducing potential problems such as Postgres reporting an incorrect FOUND value or incorrect row counts upon completing the query, consider the case where a separate before trigger cancels the delete altogether by returning NULL. As such, your trigger function should be running on an after trigger -- only at that point can you be sure that the row is indeed deleted. Secondly, you don't need these inefficient, redundant, and ugly-as-sin sub-select statements. Use the insert ... select ... variety of inserts instead:

INSERT INTO cancel(eno, excode, sno, cdate, cuser)
SELECT entry.eno,
       entry.excode,
       OLD.sno,
       current_timestamp,
       current_user
FROM entry
WHERE entry.sno = OLD.sno;

Thirdly, your trigger should probably be running on the entry table, like so:

INSERT INTO cancel(eno, excode, sno, cdate, cuser)
SELECT OLD.eno,
       OLD.excode,
       OLD.sno,
       current_timestamp,
       current_user;

Lastly, there might be a few problems in your schema. If there is a unique row in entry for each row in student, and you need information in entry to make your trigger work in order to fill in cancel, it probably means the two tables (student and entry) ought to be merged. Whether you merge them or not, you might also need to remove (or manually manage) some cascade deletes where applicable, in order to enforce the business logic in the order you need it to run. You do not show how the corresponding entry is deleted. If the entry is deleted before the student record, then that causes the problem, because the INSERT in the trigger will fail: the SELECT statement will not provide any values to insert. Is the corresponding entry deleted through a CASCADING delete on student?
Also, your trigger can be much simpler:

CREATE OR REPLACE FUNCTION delete_student() RETURNS trigger AS
$BODY$
BEGIN
    INSERT INTO cancel(eno, excode, sno, cdate, cuser)
    SELECT eno, excode, sno, current_timestamp, current_user
    FROM entry
    WHERE sno = OLD.sno;
    RETURN OLD;
END;
$BODY$
LANGUAGE plpgsql;

First of all, the function only fires on a DELETE trigger, so you do not have to test for TG_OP. Secondly, in the INSERT statement you never access any data from the student relation, so there is no need to JOIN to that relation; the sno does come from the student relation, but through the OLD implicit parameter.
Using an algorithm for searching a string in a vector Is it possible to count how many strings are equal to one given as a parameter using an algorithm method?

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> vectorPeople; // assume that vectorPeople isn't empty
    std::string name;
    std::cin >> name;
    int total = std::count(vectorPeople.begin(), vectorPeople.end(), name);
}

Loop through each string in the vector using a vector<string>::iterator, and compare the strings to the parameter one by one. @willywonka_dailyblah there's a method for that, and apart from that the OP explicitly asked for a method from the algorithm API. I marked this as "unclear what you're asking". There is an answer to the question already; but it contains essentially the same code you have in the question; and furthermore the question does not appear to have been edited, so I think the code was there when the answer was written. Since the only question I can understand you to ask has already been answered in the question body, I can't imagine what you're actually asking.
Rename column names in Excel I have columns with names such as SSS-01\01\2015, SSS-01\02\2015 and so on. I want to rename these columns as 01\01\2015, 01\02\2015 and so on, by removing "SSS-". Below is the code that I am using. I have all the required values in the list; now how do I save those as column names from cell E to H?

chartRange = ws.get_Range("e7", "h7");
int colCount = chartRange.Columns.Count;
List<string> colNames = new List<string>();
for (int c = 1; c <= colCount; c++) // Excel ranges are 1-based and inclusive; c < colCount would skip the last column
{
    if (chartRange != null)
    {
        string colName = chartRange.Columns[c].Value2;
        string colVal = colName.Remove(0, 10);
        colNames.Add(colVal);
    }
}

Is it showing the column content as ###? I am able to fetch 01\01\2015 as the value and it is being stored in the list colNames. How can I add these list values to the cell text?
How to make an ellipse as from the car package in ggplot2? I need to build a plot showing the difference in variance between the ordinary least squares estimator and the ridge estimator. For this I want to plot contour ellipses for the variance. The data I'm working with is this:

set.seed(10)
sigma = 10
U = rnorm(50,0,sigma) # The errors
X = scale(matrix(rnorm(50*2),ncol=2))
Y = scale(U + X%*%c(5,-2))

I then find the ridge (r) and OLS (m) estimates and their covariance matrices (EDIT: I've realised the covariance matrix for the ridge estimator is wrong; however, that doesn't really matter, it will just change the ellipses, not how to plot them):

lbda = 100
r = solve(t(X)%*%X + lbda * diag(2))%*%t(X)%*%Y
m = solve(t(X)%*%X)%*%t(X)%*%Y
covmatr = sigma^2 * solve(t(X)%*%X + lbda * diag(2))
covmatm = sigma^2 * solve(t(X)%*%X)

I then want to build the plot. In standard R I would do something like this, using the ellipse function from the car package:

library(car)
plot(r[1],r[2],xlim=c(-5,5),ylim=c(-5,5), xlab = TeX("$\\beta_1$"), col="red")
points(m[1],m[2], col="blue")
abline(h=0)
abline(v=0)
ellipse(center=c(r),shape=covmatr,center.cex=0,radius=sqrt(qchisq(0.5,2)), col = "red")
ellipse(center=c(r),shape=covmatr,center.cex=0,radius=sqrt(qchisq(0.9,2)), col = "red")
ellipse(center=c(m),shape=covmatm,center.cex=0,radius=sqrt(qchisq(0.5,2)))
ellipse(center=c(m),shape=covmatm,center.cex=0,radius=sqrt(qchisq(0.9,2)))

I realize I forgot to rename the y-label. My problem is that I can't figure out how to do the same in ggplot2.
I have tried:

# Building the plot with dots
rd = data.frame(r[1],r[2])
plot1 = ggplot(rd, aes(x=rd$r.1, y = rd$r.2, color = 'r')) +
  geom_point(size = 3) +
  geom_point(aes(x=m[1], y=m[2]), colour="blue", size = 3) +
  geom_vline(xintercept = 0) +
  geom_hline(yintercept = 0) +
  xlim(-5,5) + ylim(-5,5) +
  ggtitle(TeX("Varians for ridge")) +
  xlab(TeX("$\\beta_1$")) + ylab("$\\beta_2$") +
  theme_bw()
plot1

# Trying to add ellipses:
ellipse(center=c(r),shape=covmatr,center.cex=0,radius=sqrt(qchisq(0.5,2)), col = "red")
ellipse(center=c(r),shape=covmatr,center.cex=0,radius=sqrt(qchisq(0.9,2)), col = "red")
ellipse(center=c(m),shape=covmatm,center.cex=0,radius=sqrt(qchisq(0.5,2)))
ellipse(center=c(m),shape=covmatm,center.cex=0,radius=sqrt(qchisq(0.9,2)))

However this gives me the following error:

Error in plot.xy(xy.coords(x, y), type = type, ...) : plot.new has not been called yet

I have also tried to simply add the ellipses within the ggplot call like this, using the ellipse function from the car package:

plot1 = ... + ellipse(center=c(m),shape=covmatm,center.cex=0,radius=sqrt(qchisq(0.9,2)))

where "..." indicates that there are lines missing. I know both my attempts are quite naive; however, I hoped one would work. I have found the command stat_ellipse, but this is not useful in my case because, as far as I can see, it cannot take the covariance matrix of the ridge estimator. You cannot use base graphics functions (the ones in car) with grid graphics functions (the ones in ggplot). Look at this: ggplot2 draw individual ellipses but color by group. Thank you for your answer. I guess I am looking for a ggplot version of the ellipse function then. I already read the post you link to, but unfortunately it is of no use to me, since they use the stat_ellipse function, which is incompatible with my input. stat_ellipse is a ggplot version of the car function dataEllipse. I was hoping there also is a ggplot version of ellipse. I had to do something like this recently.
It's a bit of a pain, but the way to handle this (without hacking stat_geom(), or getting someone else to do it, which would be the best solution in the long run) is to use car::ellipse to generate the x and y coordinates of the ellipses, then plot them with geom_path, as follows:

Set up the base plot (using the data setup from the OP, dropping a few details):

dat <- rbind(data.frame(method = "ridge", t(r)),
             data.frame(method = "OLS", t(m))) |>
  setNames(c("method", "x", "y"))
plot1 <- ggplot(dat, aes(x, y, colour = method)) +
  geom_point(size = 3) +
  scale_colour_manual(values = c("red", "blue")) +
  xlim(-5,5) + ylim(-5,5)

Compute the ellipses. We want to end up with everything in one data frame, identified by method and confidence level:

efun <- function(ctr, shp, lvl, nm) {
  e <- car::ellipse(c(ctr), shp, radius = sqrt(qchisq(lvl,2)), draw = FALSE)
  dplyr::tibble(method = nm, lvl, x = e[,"x"], y = e[,"y"])
}
elist <- purrr::pmap(list(
    list(r, r, m, m),
    list(covmatr, covmatr, covmatm, covmatm),
    list(0.5, 0.9, 0.5, 0.9),
    list("ridge", "ridge", "OLS", "OLS")),
  efun
)
e_df <- dplyr::bind_rows(elist)

Plot:

plot1 + geom_path(data = e_df, aes(lty = factor(lvl)))
Upload APK Using TestFairy Plugin issues I am uploading an APK file using the TestFairy plugin, but I am getting an error that the APK does not contain the TestFairy SDK. How do I integrate the TestFairy SDK with an Android application? I am following this article url but I am getting two exceptions: a java.lang.NoClassDefFoundError and a NullPointerException. Please help! Adding the TestFairy SDK is easy, here are the details: http://docs.testfairy.com/Android/Integrating_Android_SDK.html If you have Proguard enabled, please add this snippet to your proguard rules file:

-keep class com.testfairy.** { *; }
-dontwarn com.testfairy.**
-keepattributes Exceptions, Signature, LineNumberTable
How to undo the action of rails db:migrate in Rails <IP_ADDRESS> When migrating the database, I made a spelling mistake. I wanted to generate a scaffold by running:

rails generate scaffold Micropost context:text user_id:integer
rails db:migrate

but I made a mistake by leaving out the colon and ran:

rails generate scaffold Micropost context:text user_id integer
rails db:migrate

I want to undo this migration; how do I do it? (I'm using Rails <IP_ADDRESS>.) When I run rails db:migrate, I get an error of:

SQLite3::SQLException: table "microposts" already exists:

When I run rails db:migrate:status, I get the following output:

Status  Migration ID          Migration Name
up<PHONE_NUMBER>1157   Create users
up<PHONE_NUMBER>5545   ********** NO FILE **********
down<PHONE_NUMBER>5805 Create microposts

I tried to use rails db:migrate:down VERSION=20161024025805. There wasn't any message showing in the command line. Then I ran rails db:migrate. The error is the same. Possible duplicate of How to rollback just one step using rake db:migrate. I'm using Rails <IP_ADDRESS>, not the former versions. rails db:rollback will simply roll back one migration, which I believe is what you are looking for. For a more specific rollback, you can run rails db:migrate:down VERSION=numberofversion. Replace numberofversion with the version number of the generated migration file, for example: rails db:migrate:down VERSION=1843652238 Edit: Since you are getting the error that the microposts table already exists, you must follow these steps to remove the table:

Run rails generate migration DropMicroposts
Go to the /db/migrate folder and find the latest migration file you just created
In that file paste this:

class DropMicroposts < ActiveRecord::Migration
  def up
    drop_table :microposts
  end
end

Run rails db:migrate

After this, run rails generate scaffold Micropost context:text user_id:integer and then run rails db:migrate The message of running rails db:migrate:down VERSION=1843652238 tells me to use db:rollback.
But running db:rollback shows nothing. It still doesn't work. Did you actually replace the version number with the version number of the migration you created? Is Migration ID the version number? In the db/migrate folder, you have files like 241294283_add_something_to_something. The version number is the number at the front of the file that you want to roll back. After doing rails db:migrate:down VERSION=20161024025805, it still says ActiveRecord::StatementInvalid: SQLite3::SQLException: table "microposts" already exists. After running rails db:migrate:down VERSION=20161024025805 and then rails db:migrate, it still says ActiveRecord::StatementInvalid: SQLite3::SQLException: table "microposts" already exists. I think you need to read more on Rails migrations. After rolling back a migration, you need to delete the migration file, then create a new migration that is correct, and then run rails db:migrate. If you just roll back then migrate again, nothing has changed. Let us continue this discussion in chat.
Java META-INF/services What is the purpose of META-INF/services in Java? Related question: https://stackoverflow.com/q/70216/2987755 It's intended to store service provider configuration files. A service provider is an implementation of a Service Provider Interface, packaged as a JAR. A service loader discovers and loads all implementations declared in the service provider configuration file. A configuration file is a file named after the fully qualified name of the interface, and its content is a list of fully qualified names of implementations. Following is an example of the provider configuration file for javax.servlet.ServletContainerInitializer that is used by Servlet 3.0 at webapp startup:

org.apache.jasper.servlet.JasperInitializer
org.springframework.web.SpringServletContainerInitializer

In this example:
Tomcat is the service loader;
javax.servlet.ServletContainerInitializer is the Service Provider Interface;
the file named javax.servlet.ServletContainerInitializer is the service provider configuration file;
org.apache.jasper.servlet.JasperInitializer and org.springframework.web.SpringServletContainerInitializer are service providers.

When Tomcat starts up the webapp, it calls the onStartup(java.util.Set<java.lang.Class<?>> types, ServletContext context) method on both the JasperInitializer and SpringServletContainerInitializer classes. Take a look at the ServiceLoader docs. Whilst this may theoretically answer the question, it would be preferable to include the essential parts of the answer here, and provide the link for reference. Before Java 9, ServiceLoader finds the implementations for a service from the file in META-INF/services which has the same fully qualified name as the service interface. It contains a list of fully qualified names of the implementations. From Java 9 on, there are modules, and modules have module descriptors. Those modules can define the services and their implementations that a ServiceLoader can load. I think this is wrong. META-INF does not contain modules?
Can you point to an example where they do? What Yuresh was saying is that the module-info.java file can define classes/interfaces in the module that are services, and you can define in module-info.java which classes in the module implement those services. If you are using Java 9 or later with a module path, you do not need to specify files in META-INF/services, but if you are using classpath loading (or Java 8), you must have a service configuration file in META-INF/services.
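As a minimal, self-contained sketch of the ServiceLoader API itself (the interface name is made up; since no META-INF/services entry or module declaration ships with this snippet, the loader simply finds zero providers):

```java
import java.util.ServiceLoader;

public class ServiceLoaderDemo {

    /** The service provider interface a provider JAR would implement. */
    public interface GreetingService {
        String greet();
    }

    /** Count the providers that ServiceLoader can discover. */
    public static int providerCount() {
        int n = 0;
        // Scans META-INF/services/ServiceLoaderDemo$GreetingService on the
        // classpath, or 'provides ... with ...' clauses on the module path.
        for (GreetingService ignored : ServiceLoader.load(GreetingService.class)) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // No provider configuration file exists here, so discovery yields 0.
        System.out.println("providers found: " + providerCount());
    }
}
```

Packaging a provider would mean adding a resource file whose name is the interface's fully qualified (binary) name and whose content is the implementation class name, exactly as in the Servlet example above.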
phonegap - is ios or android device? How can I tell if I'm running on iOS 7 (or any other environment)? In iOS 7 the top bar overlaps with the app and messes up the view. I want to add a CSS class that will push the top bar 20px down, only on iOS 7. How should I do it? A nice solution would be to use a native plugin. If you're using Cordova and Node.js, run the following in your command line tool:

cordova plugin add org.apache.cordova.statusbar

Finally you need to invoke the function. Vanilla JavaScript:

document.addEventListener("deviceready", function(){
    StatusBar.overlaysWebView(false);
}, false);

jQuery:

$(document).on('deviceready', function(){
    StatusBar.overlaysWebView(false);
});

The StatusBar object has been added to the window namespace via the PhoneGap plugin system, and you can access it via JavaScript. This way you can prevent the overlay on iOS. Try using the Cordova API: device.platform to check the OS and device.version to get the version of the OS. Check the PhoneGap docs for reference: http://docs.phonegap.com/en/edge/cordova_device_device.md.html#device.platform You can check which CSS hacks work in the iOS browser at http://www.paulirish.com/demo/css-hacks and then provide a custom solution for the top bar.
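Building on the device.platform / device.version suggestion, the decision logic can be sketched as a plain function (names are illustrative; in a real app you would pass Cordova's device.platform and device.version inside the deviceready handler and toggle the CSS class when it returns true):

```javascript
// Decide whether the iOS 7 status-bar offset class should be applied.
function needsStatusBarFix(platform, version) {
  if (!platform || !version) {
    return false; // not running on a device, or the device plugin isn't ready yet
  }
  var major = parseInt(String(version).split('.')[0], 10);
  return /^(ios|iphone|ipad)/i.test(platform) && major >= 7;
}

// e.g. inside the 'deviceready' handler:
//   if (needsStatusBarFix(device.platform, device.version)) {
//     document.body.classList.add('ios7-statusbar-offset');
//   }
```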
Using State Hook - (state variable) is not defined no-undef

function Header() {
  const [keys, setKeys] = useState([]); //added
  const first = (e) => {
    var result = new Map()
    axios.post('http://localhost:8000/' + query)
      .then(function(response){
        var content = JSON.parse(JSON.stringify(response)).data
        for (var i = 0; i < content.data.length; i++){
          result.set(content.data[i].image_id, content.data[i].caption)
        }
        var key = Array.from(result.keys());
        setKeys(key); //added
        var value = Array.from(result.values());
      }).catch(err => {
        console.log("error");
        alert(err);
      });
  };
  const second = () => {}
  return (
    <div>
      <button onClick={(e)=> { first(); second(); }}/>
      <img src={require(`img/img${key[0]}.jpg`)}/>
    </div>
  );
}
export default Header;

I asked a question here: Using state with array,map / How to pass variable, and from the answers I got, I edited the code like this. However I still get this error --> 'key' is not defined no-undef. I thought maybe it has something to do with https://reactjs.org/docs/hooks-rules.html (Only Call Hooks at the Top Level). But I can't figure it out. Is it keys instead of key? Your state variable is keys, not key. If you access key you will get key is not defined.

return (
  <div>
    <button onClick={(e)=> { first(); second(); }}/>
    <img src={require(`img/img${keys[0]}.jpg`)}/>
  </div>
);

The problem is a typo. You wrote key instead of keys. Your code should be like:

const [keys, setKeys] = useState([])
<img src={require(`img/img${keys[0]}.jpg`)}/>

The problem has nothing to do with what you suspected:

const [keys, setKeys] = useState([]); //added
<img src={require(`img/img${key[0]}.jpg`)}/>

You've defined the state to be keys, but in your img tag you are accessing key[0], hence it's saying key is undefined.