Angular - how to pass ng-model from ng-repeat to a function and to scope I have the following code: <form ng-submit="addRow($index)"> <input type="text" ng-repeat="column in table.columns" ng-model="column"> <input type="submit" value="add row"> </form> Live code here: http://jsfiddle.net/muehvxx1/ What I'm trying to do is that whenever you click on the "Add Row" button, a new object is added to the scope (to the form's parent object) passing all values from the form's fields. So far you can add an object to the correct parent (that works), but it doesn't pass the fields' values. You could do something like this http://jsfiddle.net/9g8vpnaa/ HTML: Form <form ng-submit="addRow(table)"> <input type="text" ng-repeat="column in table.columns" ng-model="table.newItem[column]"> <input type="submit" value="add row"> </form> JS: addRow() $scope.addRow = function(table){ table.items.push(table.newItem); table.newItem = {}; }; What I did was add an additional object newRow to each of your tables $scope.tables = [ {name: 'tweets', columns: ['id', 'message', 'user'], items: [ {id: 1, message: 'hello', user: 'mike'}, {id: 2, message: 'hi', user: 'bob'}, {id: 3, message: 'whatup', user: 'bob'} ], newRow: {} }, {name: 'users', columns: ['id', 'username'], items: [ {id: 1, username: 'mike'}, {id: 2, username: 'bob'} ], newRow: {} } ]; I then use that object to bind to the inputs <form ng-submit="addRow(table)"> <input type="text" ng-repeat="column in table.columns" ng-model="table.newRow[column]" /> <input type="submit" value="add row" /> </form> Then you just have to add that object to the items when the user clicks "Add Row" $scope.addRow = function(table){ console.log($scope.id); table.items.push(table.newRow); }; JSFiddle I like the way you think ;) One of us must have been thinking too loudly :) Thanks, but that only works the first time you submit.
After that, every time you change the values inside the "input" fields they change the values of the already existing object, which shouldn't happen. rob's answer solves that by creating a new newItem after adding it.
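The accepted fix works because of JavaScript reference semantics, which the thread only hints at. Here is a minimal sketch in plain JavaScript (no Angular; the table shape mirrors the fiddle, the function names are mine):

```javascript
// Sketch of why the bound object must be replaced after each push:
// ng-model keeps writing into the same object reference.
function addRowBuggy(table) {
  table.items.push(table.newItem); // pushes a *reference*, not a copy
}

function addRowFixed(table) {
  table.items.push(table.newItem);
  table.newItem = {}; // future edits go into a fresh object
}

const table = { items: [], newItem: { id: 1 } };
addRowBuggy(table);
table.newItem.id = 2;           // "typing into the form again"
console.log(table.items[0].id); // 2 -- the already-added row changed

const table2 = { items: [], newItem: { id: 1 } };
addRowFixed(table2);
table2.newItem.id = 2;
console.log(table2.items[0].id); // 1 -- the earlier row is untouched
```

Resetting `newItem` to `{}` after the push is what stops later keystrokes from mutating rows that were already added.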
common-pile/stackexchange_filtered
Visualize frequency of 5 Boolean variables together I have a data set with 5 variables, a b c d e 1 0 0 1 0 0 1 0 1 1 0 1 1 0 0 0 0 0 1 0 1 1 1 0 0 0 1 1 0 1 1 0 1 0 0 1 0 0 1 1 0 1 0 1 1 0 0 1 1 0 I am only interested in the percentages of occurrence, | a | b | c | d | e | .4 | .5 | .5 | .6 | .4 BUT, I would like to visualize in such a way that I can see the overlap, or not, among all the different groups. Any idea? so something like a frequency graph for all possible combinations? like a, b, c, d, e, ab, ac.....? Well, yes. But that's the problem, how to visualize 32 combinations. If you have richer data (i.e. more than 10 rows), you will want an upset plot. Upset plots are a way to view information in an intuitive way like a Venn diagram, but are more useful for 4+ categories. Some references which may give you some ideas and implementation in R: https://cran.r-project.org/web/packages/UpSetR/vignettes/basic.usage.html (attached image from r-project.org). https://www.littlemissdata.com/blog/set-analysis With Wolfram Language you may use AbsoluteCorrelation. With t = { {1, 0, 0, 1, 0}, {0, 1, 0, 1, 1}, {0, 1, 1, 0, 0}, {0, 0, 0, 1, 0}, {1, 1, 1, 0, 0}, {0, 1, 1, 0, 1}, {1, 0, 1, 0, 0}, {1, 0, 0, 1, 1}, {0, 1, 0, 1, 1}, {0, 0, 1, 1, 0} } Then MatrixForm[ac = AbsoluteCorrelation[t]] Where the diagonals are the marginal column frequencies and the off-diagonals the joint frequencies. That is, for ac[[1,1]] variable a occurs with frequency 0.4, and for ac[[1,2]] (row 1, column 2) variable a occurs jointly with variable b with frequency 0.1 This can be visualised with MatrixPlot or ArrayPlot. MatrixPlot[ ac , FrameTicks -> {Transpose@{Range@5, CharacterRange["a", "e"]}} , PlotLegends -> Automatic] Hope this helps. But isn't this a pairwise correlation? So, the matrix has 25 elements which are pair combinations of occurrence but has no information beyond pairwise. @myradio Correct. I took your "overlap" to mean joint frequency.
You only need the upper or lower triangle of the matrix since it is symmetric. Since the combinations are known, we can use some knowledge of binary numbers to come up with a frequency plot. Basically - convert the binary string to integer and get a frequency plot based on the integer values import numpy as np import pandas as pd from itertools import product import matplotlib.pyplot as plt # test data, 1 of every 32 combinations combs = np.array(list(map(list, product([0, 1], repeat=5)))) # store in dataframe df = pd.DataFrame(data={'a': combs[:, 0], 'b': combs[:, 1], 'c': combs[:, 2], 'd': combs[:, 3], 'e': combs[:, 4]}) # concatenate the binary sequences to strings df['concatenate'] = df[list('abcde')].astype(str).apply(''.join, axis=1) # to convert binary strings to integers def int2(x): return int(x, 2) # every combination has a unique value df['unique_values'] = df['concatenate'].apply(int2) # prepare labels for the frequency plot variables = list('abcde') labels = [] for combination in df.concatenate: tmp = ''.join([variables[i] for i, x in enumerate(combination) if x != '0']) labels.append(tmp) fig, ax = plt.subplots() counts, bins, patches = ax.hist(df.unique_values, bins=32, rwidth=0.8) # turn off the x tick labels plt.tick_params( axis='x', # changes apply to the x-axis which='both', # both major and minor ticks are affected top=False, # ticks along the top edge are off labelbottom=False) # calculate the bin centers bin_centers = 0.5 * np.diff(bins) + bins[:-1] ax.set_xticks(bin_centers) for label, x in zip(labels, bin_centers): # replace integer mapping with the labels ax.annotate(str(label), xy=(x, 0), xycoords=('data', 'axes fraction'), xytext=(0, -5), textcoords='offset points', va='top', ha='center', rotation='30') plt.show() +1 Indeed this is a possibility but I was looking for something simpler to visualize.
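For readers without Wolfram Language, the joint-frequency matrix that AbsoluteCorrelation produces can be sketched in pure Python. This is a hedged sketch, not the Wolfram implementation; the rows reproduce the t matrix above and everything else is my own naming:

```python
# Diagonal entries are the marginal frequency of each column;
# off-diagonal entries are the frequency with which two columns are 1 together.
rows = [
    (1, 0, 0, 1, 0), (0, 1, 0, 1, 1), (0, 1, 1, 0, 0), (0, 0, 0, 1, 0),
    (1, 1, 1, 0, 0), (0, 1, 1, 0, 1), (1, 0, 1, 0, 0), (1, 0, 0, 1, 1),
    (0, 1, 0, 1, 1), (0, 0, 1, 1, 0),
]

n = len(rows)
k = len(rows[0])
# joint[i][j] = fraction of rows where columns i and j are both 1
joint = [[sum(r[i] * r[j] for r in rows) / n for j in range(k)]
         for i in range(k)]

print(joint[0][0])  # 0.4 -- marginal frequency of a, matches ac[[1,1]]
print(joint[0][1])  # 0.1 -- a and b jointly 1, matches ac[[1,2]]
```

The matrix is symmetric, which is why (as noted above) only one triangle of it carries information.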
common-pile/stackexchange_filtered
Passing hash from perl CGI::Application::Plugin::JSON to jquery form plugin I need to pass a hash from server side to client side. I am using jquery and perl CGI::Application respectively on the front-end and back-end. I am a starter when it comes to using jquery, so I modified the jquery form plug-in example which shows how to handle JSON data returned from the server http://jquery.malsup.com/form/#json. I tried to use the given code with my favourite perl web framework CGI::Application. The CGI::Application::Plugin::JSON works well when passing scalar values but due to lack of documentation I can't figure out how to pass arrays or hashes or for that matter complex data structures. When passing a hash I am using the following code snippet: foreach my $k (sort keys %hash) { return $self->add_json_header ( { message => $hash{$k}} ); } This is the error I am getting in the apache error log: ajaxtest.pl: Odd number of elements in hash assignment at /usr/local/share/perl/5.10.0/CGI/Application/Plugin/JSON.pm line 98., referer: http://localhost/echo.html While passing scalar I am using CGI::Application::Plugin::JSON json_body function. Kindly let me know where I am going wrong. Following is the jQuery code in the html file that is also given on the form plugin site (link given above): // prepare the form when the DOM is ready $(document).ready(function() { // bind form using ajaxForm $('#jsonForm').ajaxForm({ // dataType identifies the expected content type of the server response dataType: 'json', // success identifies the function to invoke when the server response // has been received success: processJson }); }); function processJson(data) { // 'data' is the json object returned from the server alert(data.message); } Any advice on using CGI::Application::Plugin::JSON with complex data structures like hashes of hashes and arrays of arrays is most welcome as I will be needing it in the future. Here's a possible solution.
You will only need the JSON library and in your code you can do the following: my %data_struct = ( a => 1, b => 2 ); # note: parentheses, not braces - a hash is initialized from a list my $json = to_json( \%data_struct, {utf8 => 1} ); $json =~ s/"(\d+?)"/$1/g; # to_json puts quotes around numbers, we take them off # here $self is the CGI::App object, it's probably called like that $self->header_add( -type => 'application/json' ); return $json; (As Raoul pointed out, you cannot return more than once in a CGI::App block.) NOTE: I don't use CGI::Application::Plugin::JSON because I just didn't need it. This way I achieved the same result. Of course, TMTOWTDI. :) I don't think you understand the CGI::APP return method. You can only return once per runmode.
common-pile/stackexchange_filtered
drf-yasg: How can we use the doc-string of methods other than the standard HTTP methods for Swagger API documentation I am using drf-yasg and django-rest-framework. I have one common parent class that defines all the standard HTTP methods; it is inherited by subclasses, where the business-logic methods are overridden. I want the subclass's overridden method docstring to appear in the Swagger API documentation class BaseAPIView(APIView): """ Base API View. """ def get(self, request): """ Base API get method description. """ result = self.process_get() # Call the overridden method return Response(result) def process_get(self): raise NotImplementedError class DerivedAPIView(BaseAPIView): """ Derived API View. """ def process_get(self): """ List all items. """ items = [ {"name": "Item 1", "description": "Description of Item 1", "price": 10.99}, {"name": "Item 2", "description": "Description of Item 2", "price": 20.99} ] return items Here, in the Swagger document for the DerivedAPIView get method, I want the DerivedAPIView process_get method docstring as the API documentation.
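drf-yasg reads the view method's docstring (or an explicit swagger_auto_schema(operation_description=...) override), so one common workaround is to copy the subclass's process_get docstring onto get. Below is a minimal sketch of that idea in plain Python, with no DRF or drf-yasg imports; the CopyDocMeta metaclass is my assumption, not a documented drf-yasg feature:

```python
# Copy the subclass's process_get docstring onto get(), since schema
# generators such as drf-yasg read the HTTP method's __doc__.
class CopyDocMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        proc = namespace.get("process_get")
        if proc is not None and proc.__doc__:
            # re-declare get() on this subclass so it carries its own __doc__
            def get(self):
                return self.process_get()
            get.__doc__ = proc.__doc__
            cls.get = get
        return cls

class BaseAPIView(metaclass=CopyDocMeta):
    """Base API View."""
    def get(self):
        """Base API get method description."""
        return self.process_get()
    def process_get(self):
        raise NotImplementedError

class DerivedAPIView(BaseAPIView):
    """Derived API View."""
    def process_get(self):
        """List all items."""
        return [{"name": "Item 1"}, {"name": "Item 2"}]

print(DerivedAPIView.get.__doc__)  # List all items.
```

In a real DRF project the same effect could also be achieved by decorating each subclass's get with drf-yasg's swagger_auto_schema and passing the docstring as operation_description.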
common-pile/stackexchange_filtered
Somehow my Login Program isn't taking or saving or comparing my login strings correctly I want to write a login program. However when I type in a user, The Program skips all my IFs. I think it might be a problem with the comparing of the strings. I am pretty new to using strings in more complex programs and don't quite know what to regard. This is my code: #include <stdio.h> #include <stdlib.h> #include <math.h> #include <time.h> int main() { char user[17] = ""; char password[17] = ""; char user1[17] = "strfad23"; char user2[17] = "zmufad23"; char user3[17] = "admin"; char user4[17] = "WURZINGER"; char pw1[20] = "WhyDoYouWantToKnow?"; char pw2[17] = "HowShouldIKnow?"; char pw3[17] = "2long2remember"; char pw4[17] = "shortAndInsecure"; int wrongLogins = 0; char repeat = 'A'; login: printf("Enter your username: "); fflush(stdin); scanf("%s", user); if(user == user1) { printf("Please enter your password, %s\n", user1); fflush(stdin); scanf("%s", password); if(password == pw1) printf("Hello Fabian!\n"); else wrongLogins++; } else if(user == user2) { printf("Please enter your password, %s\n", user2); fflush(stdin); scanf("%s", password); if(password == pw2) printf("Hello Fabio CMugg!\n"); else wrongLogins++; } else if(user == user4) { printf("Please enter your password, %s\n", user2); fflush(stdin); scanf("%s", password); if(password == pw4) printf("Hello Fabian Wutte!\n"); else wrongLogins++; }else if(user == user3) { printf("Please enter your password, %s\n", user3); fflush(stdin); scanf("%s", password); if(password == pw3) { printf("Hello Administrator!\n"); printf("There have been %d failed attempts of login on other accounts\n", wrongLogins); } else wrongLogins++; } else printf("user not found\n"); printf("Do you want to login again?\n"); printf("Y / N : "); fflush(stdin); scanf("%c", &repeat); if(repeat == 'Y')goto login; return 0; } But as Output I only get this: Enter your username: admin user not found Do you want to login again? 
Y / N : I want to get something like this: Enter your username: WURZINGER Please enter your password, WURZINGER : shortAndInsecure Hello Fabian! Do you want to login again? Y / N : I added the error of "User not found" and made my program repeatable because of the function I added when you login as admin. Be cautious about Using fflush(stdin)? How to debug small programs
common-pile/stackexchange_filtered
How to direct a data-toggle modal to the controller? I have a list of tables for survey forms and each one of them has a button/asp-action to view the answers of at least 3 competitors. But I need to select the competitors of a survey form using a modal. Inside that modal, I should populate the body with a checkbox of competitors who answered that survey form. How can I push through to direct the data-toggle modal to the controller? Here is my View: @for (int i = 0; i < Model.SurveyNames.Count; i++) { <tr> <td> @Model.SurveyNames[i].SurveyName </td> <td> @Model.SurveyNames[i].SurveyFor </td> <td> @Model.SurveyNames[i].Description </td> <td> @Model.SurveyNames[i].CreatedBy </td> <td> @Model.SurveyNames[i].Status </td> <td> <!-- Button trigger modal --> <a asp-action="ViewCompetitors" <EMAIL_ADDRESS> data-toggle="modal" data-target="#ChooseCompetitors">View Competitors</a> </td> </tr> } And this is my Controller, it should return the values to the index modal: public IActionResult Index(int? id) { var model = new CompetitorAnswerViewModel(); var SurveyList = _context.MainSurvey.ToList(); foreach (var zsurvey in SurveyList) { model.SurveyNames.Add(new survey { Id = zsurvey.Id, SurveyName = zsurvey.SurveyName, SurveyFor = zsurvey.SurveyFor, Description = zsurvey.Description, CreatedBy = zsurvey.CreatedBy, Status = zsurvey.Status}); } var competitorList = _context.SurveyCompetitor.ToList().Where(x => x.MainSurveyId == id); foreach (var competitors in competitorList) { model.CompetitorNames.Add(new Competitor { CompetitorName = competitors.CompetitorName}); } return View(model); } I should populate the body with a checkbox of competitors who answered that survey form. But it doesn't forward to the controller whenever I click "View Competitors". Just a suggestion, won't solve the issue: the code could be a lot cleaner if you used a foreach instead of for in the first line. Hi @NevilleNazerane, it shows an error whenever I use a foreach statement.
and I am using one model for two database tables. foreach (var sname in Model.SurveyNames) when used with @sname.SurveyName etc. must work fine. Is your _context a DbContext of EF Core? Yes. It worked as you said :) Thank you! The error is because I did not put Model. in Model.SurveyNames. Lol. If you are talking about a Bootstrap modal, it would be in your HTML/Razor code; you wouldn't have to push anything to the controller to create it. I want the table data id to pass it to the controller to filter the survey competitors and then return the filtered data to the modal in the same index. I want the table data id to pass it to the controller to filter the survey competitors and then return the filtered data to the modal in the same index You could put the Competitors view in a partial view and render it to the Index view using AJAX. 1. Create the partial view in the Shared folder /Views/Shared/_CompetitorsPartialView.cshtml @model YourViewModel <div class="col-md-12"> Your View </div> 2. Return the filtered data to this partial view public IActionResult ViewCompetitors(int id) { // filter logic return PartialView("_CompetitorsPartialView",YourViewModel); } 3. Use AJAX in the Index view.
Modify <a> to <button>: <button id = "ViewCompetitors" onclick="viewCompetitor(@item.CustomerId)">View Competitors</button> Modal and ajax: <div class="modal fade" id="ChooseCompetitors" role="dialog"> <div class="modal-dialog"> <!-- Modal content--> <div class="modal-content"> <div class="modal-header"> <h4 class="modal-title">Modal Header</h4> <button type="button" class="close" data-dismiss="modal">&times;</button> </div> <div class="modal-body"> <div id="showresults"></div> </div> <div class="modal-footer"> <button type="button" class="btn btn-default" data-dismiss="modal">Close</button> </div> </div> </div> </div> @section Scripts{ <script> function viewCompetitor(id) { $.ajax({ type: 'Get', url: '/Home/ViewCompetitors/' + id,//your url success: function (result) { $("#ChooseCompetitors").modal();//open modal $('#showresults').html(result);//populate view to modal } }) } </script> }
common-pile/stackexchange_filtered
Make a link inactive after 30 minutes or once it is used I am new to CodeIgniter so I couldn't find a proper way to make a link valid for only 30 minutes. For example, when a user forgets their password, they ask for a link to generate a new password. I want that link to be valid for only 30 minutes, and once they have changed their password they won't be able to use that link again. You say you have not found a way to do it; does that mean you have tried? What have you tried? We need to know what you have tried to answer your question I searched the web and tried to find a proper logic for how to make a link valid in CodeIgniter for a limited time, but I couldn't find any. When you send the forgot-password link, store the datetime in your DB. When the user clicks it, check the time range (send time vs. click time) against your DB. For one-time use, set a flag (a column such as is_password_set) and then check that flag in your DB. Thank you so much, I will try to implement this. Thanks Shubham Azad Sorry, but I couldn't get the flag thing. How do I check that flag? When the user clicks the link, check a column in your DB named is_password_set. Its default value is 0; when the user sets their password, set the column value to 1. I would create a JWT token (a JWT with a unique user id and an expiration time). Then use this token as a parameter for the URL, for example www.yoursite.com/changepassword/?token=eyJ0eXAiOiJKV1Qi.eyJrZXkiOi.eUiabuiKv Now, in the controller for that page, depending on the JWT token's expiration time and user id, either generate your password-change page, or show them an expiration error etc... If you need more info just ask. How do I create a JWT token using PHP? So by this can I generate a JWT token? Follow this blog: JWT token generation in PHP for more info. Thank you again!!
common-pile/stackexchange_filtered
How to see and navigate through functions list in rails, using vscode? I have visual studio code. How do I see and navigate through the functions list in rails? I don't see any methods in the "outline view". Also, when I use ctrl + shift + o, I get the following message: "the active text editor does not provide symbol information". Is there an extension that needs to be installed? Or is it something in settings? See discussion thread about possible options here: https://github.com/rubyide/vscode-ruby/issues/40 At the time of writing, I think the best option is to add the ruby-symbols extension here: https://github.com/MiguelSavignano/vscode-ruby-symbols Note also that for Windows, it's Ctrl + Shift + O. And for Macs, it's Cmd + Shift + O.
common-pile/stackexchange_filtered
Classifier accuracy decreases as n of n-gram models increases. Is this expected? I am trying to tackle a multi-classification problem that requires text processing. The data contains a lot of samples (approximately 100,000 samples) and one of the features I need to work with is a short (sometimes longer) written message. Dividing the dataset into a 70/30 scheme and cross-validating on the 70% for tuning, the model currently reaches 90% accuracy. I am using scikit-learn's TfidfVectorizer to extract features from the messages and RandomForestClassifier for the machine learning per se. What I find peculiar is that as I increase the maximum n for the n-gram models considered, its accuracy decreases slightly: n in [1,2[ -> Accuracy: 91.0% Dimensionality: 126,756 n in [1,2] -> Accuracy: 90.8% Dimensionality: 605,010 n in [1,3] -> Accuracy: 90.5% Dimensionality: 1,408,346 ... -> ... I initially thought that increasing the maximum n could only increase the model's accuracy but that is not reflected in my results. Am I doing something wrong in my approach? Or is this decrease in accuracy something that could potentially happen? It would be helpful to know what is the dimensionality of data for these ranges. I've added them to my question I've updated my answer accordingly. The problem with spurious variables in Random Forest is that each tree selects only a random subset of features, and if their number gets high and most of them are bogus (which is most likely the case here, since most of the higher n-grams occur only a handful of times), some trees won't learn anything useful. Random Forests are used because a single decision tree is highly likely to overfit, and averaging predictions decreases variance compared to using a single estimator. It doesn't seem likely that your single decision trees are overfitting, since they are using only a tiny random subset of a huge, mostly irrelevant feature space.
You didn't say anything about the trees - did you try using more of them and simultaneously limiting their depth? Scikit-learn's Random Forests have a couple of parameters that you can tune. Another question would be if you really need decision trees - I for example would at least try using logistic regression (with Lasso/ElasticNet) for such a problem, as these methods naturally fit such sparse problems - they consider all the features, and do feature selection themselves. The forest is made of 30 trees each having default parameter values except for max_features which is set at 0.1 (Tuned using cross-validation) And I did explore other learning methods (Multinomial Naive Bayes, Linear SVM, ...) but I didn't give logistic regression a shot. I'll try that and see if my results become more coherent. These results are the same for character n-grams as well: until a certain n value, accuracy will increase, and then it starts to drop for both word n-grams and character n-grams. The reason is that if the individual accuracy of a given n value is lower for the selected corpus size, it has a slightly negative effect on the combined feature-selection accuracy (the larger the n, the larger the corpus size needed to improve accuracy by a unit).
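The point above that most higher-order n-grams occur only a handful of times is easy to verify: counting distinct word n-grams (the columns TfidfVectorizer would create) on even a toy corpus shows the feature count growing while most new features are seen only once. A small pure-Python sketch (the corpus is made up, not from the question):

```python
# Count distinct word n-grams for growing n-gram ranges and report how
# many of them are singletons -- the "bogus features" effect.
from collections import Counter

corpus = [
    "short written message about an order",
    "another short message about a refund",
    "message about an order and a refund",
]

def ngram_counts(texts, n):
    c = Counter()
    for t in texts:
        words = t.split()
        for i in range(len(words) - n + 1):
            c[tuple(words[i:i + n])] += 1
    return c

for hi in (1, 2, 3):
    vocab = Counter()
    for n in range(1, hi + 1):
        vocab.update(ngram_counts(corpus, n))
    singletons = sum(1 for v in vocab.values() if v == 1)
    print(f"ngram_range=(1,{hi}): {len(vocab)} features, {singletons} seen once")
```

On real data the effect is far more dramatic, which is consistent with the dimensionality jump from 126,756 to 1,408,346 features reported in the question.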
common-pile/stackexchange_filtered
Triangulation of Polytope It seems to me that, in two dimensions, a triangulation of a polytope with an odd number of vertices gives us an odd number of simplices, while a triangulation of a polytope with an even number of vertices gives us an even number of simplices. (We can add vertices to the boundary of $P$ during the triangulation, but then they must be counted as vertices of $P$.) See the following images for example. Just count the vertices on the boundary of the polytope and the number of simplices that it is composed of. Does anyone know if this claim is true? If yes, where can I find a proof for it? Otherwise, a counterexample would be nice. Thanks. (I think I found a proof for this claim, but it is somewhat convoluted, so a simple proof would still be welcomed.) This is a simple proof that, in two dimensions, a triangulation of a polytope with an odd number of vertices gives us an odd number of simplices, while a triangulation of a polytope with an even number of vertices gives us an even number of simplices. You have a plane graph, bounded by a cycle $C$, in which every internal region has three edges. Let $C$ have length $k$, and let there be $n-k$ vertices in the interior, so $n$ vertices altogether. Let this graph be $G$. If we add another vertex and make it adjacent to every vertex of $C$ we obtain a maximal planar graph $H$. It is known (an easy consequence of Euler's formula) that every maximal planar graph with $p>3$ vertices has exactly $3p-6$ edges and $2p-4$ regions. $H$ has $n+1$ vertices, so it has $3n-3$ edges and $2n-2$ regions. But $H$ has $k$ regions more than $G$, so $G$ has $2n-2-k$ regions (not counting the infinite region). This is even if and only if $k$ is even.
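The counting argument can be sanity-checked computationally: a triangulation with $n$ vertices in total, $k$ of them on the boundary cycle, has $2n-2-k$ triangles, and the two elementary ways of adding a vertex both preserve this. A small sketch (a numerical check, not a substitute for the proof):

```python
# Verify that triangle count = 2n - 2 - k is preserved by elementary moves,
# and that its parity always matches the parity of k (boundary vertices).
import random

def triangles(n, k):
    return 2 * n - 2 - k

# start from a single triangle: n = k = 3, 1 triangle
n, k, t = 3, 3, 1
assert triangles(n, k) == t

random.seed(0)
for _ in range(1000):
    if random.random() < 0.5:
        # insert a vertex inside some triangle: splits it into 3 triangles
        n, t = n + 1, t + 2
    else:
        # insert a vertex on a boundary edge: splits the adjacent triangle in 2
        n, k, t = n + 1, k + 1, t + 1
    assert triangles(n, k) == t
    # the parity claim: the number of triangles is odd iff k is odd
    assert (t % 2) == (k % 2)
```

The parity claim drops out directly: $2n-2$ is even, so $2n-2-k \equiv k \pmod 2$, matching the conclusion of the proof.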
common-pile/stackexchange_filtered
How to throw an exception if a value is missing? I tried this: const a = null || throw new Error("value missing"); But I got an error: Uncaught SyntaxError: expected expression, got keyword 'throw' Does any syntactic sugar exist for this or is it necessary to use an if clause? Yes, use an if. throw is a statement and cannot be mixed into an expression. @deceze annoying @deceze How exactly do I scope the constant a correctly when using if? If you want to be daring you can make it an IIFE: const a = null || (()=> {throw new Error("value missing")})(); which is kinda weird but you can do it just for fun @AritraChakraborty annoying @Teemu const a = null || ... doesn't make much sense to begin with… Your line of code does not make sense to me... You are declaring a constant as null (meaning, it won't change and stay null), OR maybe if it has no value, throw an error?? But you are declaring it as null... What is this code supposed to achieve? @deceze null is just the most simple example. In the real world the r-value is a slightly more complex evaluation. Well, then: const a = ...; if (!a) throw new Error;. You can even write it on a single line, as shown here, if you have a shortage of lines and insist on it; you just can't make it a single expression.
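The IIFE trick from the comments can be made less cryptic by naming the throwing function: a function call is an expression, so it can legally sit on the right-hand side of || or ??. A sketch (required and readConfig are my own names):

```javascript
// throw is a statement, but a call to a function that throws is an expression
function required(message) {
  throw new Error(message);
}

function readConfig(value) {
  // ?? only falls through on null/undefined, unlike || which also
  // rejects 0 and "" -- pick whichever matches "missing" for your data
  const a = value ?? required("value missing");
  return a;
}

console.log(readConfig(42)); // 42
try {
  readConfig(null);
} catch (e) {
  console.log(e.message); // value missing
}
```

This keeps the one-liner shape the question asks for without the inline arrow-function noise.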
common-pile/stackexchange_filtered
Mark / Unmarked another option from a set of enum I have a set of enum in a component like this: type TOption = (clVisible, clVisibleAlways, clRenderable, clEditable); TOptions = set of TOption; const defaultOptions = [clVisible, clRenderable]; type TMyComp = class(TComponent) private FOptions: TOptions; procedure SetOptions(const Value: TOptions); public property Options: TOptions read FOptions write SetOptions default defaultOptions; ... procedure TMyComp.SetOptions(const Value: TOptions); var ToBeSet, ToBeCleared: TOptions; begin if FOptions <> Value then begin ToBeCleared:= FOptions - Value; ToBeSet:= Value - FOptions; FOptions:= Value; //clVisible -> clRenderable if (clVisible in ToBeSet) and (clRenderable in ToBeCleared) then begin Include(FOptions, clRenderable); Include(ToBeSet, clRenderable); end; //not clRenderable -> not clVisible if (clRenderable in ToBeCleared) and (clVisible in ToBeSet) then begin Exclude(FOptions, clVisible); Exclude(ToBeSet, clVisible); end; //not clVisible -> not clVisibleAlways if (clVisible in ToBeCleared) and (clVisibleAlways in ToBeSet) then begin Exclude(FOptions, clVisibleAlways); Exclude(ToBeSet, clVisibleAlways); end; //clVisibleAlways -> clVisible if (clVisibleAlways in ToBeSet) and (clVisible in ToBeCleared) then begin Include(FOptions, clVisible); Include(ToBeSet, clVisible); end; end; end; What I would like to do and it doesn't work, is to: check also clVisible if clVisibleAlways was checked check also clRenderable if clVisible was checked un-check also clVisible if clRenderable was unchecked un-check also clVisibleAlways if clVisible was unchecked Please some support about this topic. You'll have to take care of that in code, whether you wrap this into a class or record type, or do it inline, In other words, there is nothing in the language that lets you do this automatically. The way you do it seems quite right to me. Like i indicate, it doesn't work! In the evening i will check again the code. What "doesn't work"? 
Please be more exact. It is not so hard to exclude or include certain other members when a member is set or removed. Personally, I would not make it so hard for myself. if clVisibleAlways is set, then that is a separate option. In your logic that queries those values, you can simply assume clVisible set if clVislbleAlways is set (and likewise for the other values). But I would not set it in the actual set. FWIW, I don't think your logic WRT ToBeSet and ToBeCleared makes sense. I would get rid of those. I don't have the time right now, but I will look into this later on. I think your approach is unnecessarily complex. Update One must obviously distinguish between adding or removing an option (which I didn't do before). The updated code does this and works nicely in Delphi 10.1 Berlin. My test component is called TNewButton and is based on TButton. procedure TNewButton.SetOptions(const Value: TOptions); var Opts: TOptions; { Because of "const"; you can also remove "const" and use Value instead of Opts. } begin if FOptions <> Value then begin Opts := Value; { Find out if we are adding or removing an option. } if Opts - FOptions <> [] then begin { We are adding an option. } if clVisibleAlways in Opts then Include(Opts, clVisible); if clVisible in Opts then Include(Opts, clRenderable) end else begin { We are removing an option. } if not (clRenderable in Opts) then Exclude(Opts, clVisible); if not (clVisible in Opts) then Exclude(Opts, clVisibleAlways); end; FOptions := Opts; end; end; I tested it several times, and, at least in my Object Inspector, it does exactly what you wanted. in design time, when i checked the clVisibleAlways, i should see clVisible getting checked, but I didn't. I apply this method with boolean properties and it worked. @REALSOFO; ah now I get it. It does not update the object inspector at design time. The options are being set as you wanted, though. They just don't show up in the OI. I'll look into this. 
You might need to have the property setter call the parent Form's Designer.Modified() method if Designer is not nil. No, @Remy, that is not it. When I set clVisibleAlways, then the other two are set too, no need to call Designer.Modified. But I can't unset clRenderable, because of the first two clauses. So I must first detect if an option is removed or if one is being added. it works now! thank you Rudy! Remy example is also good, but I think still have some chain reaction. Anyhow, thank you all for your time! You could wrap your TOption set into the helper class, which would provide additional logic on assignment: TOption = (clVisible, clVisibleAlways, clRenderable, clEditable); TOptions = set of TOption; TOptionHelper = class public constructor Create(); procedure Include(const AOption : TOption); procedure Exclude(const AOption : TOption); function GetOptions() : TOptions; property Options : TOptions read GetOptions; strict private FOptions : TOptions; end; constructor TOptionHelper.Create; begin FOptions := [clVisible, clRenderable]; end; procedure TOptionHelper.Exclude(const AOption: TOption); begin end; function TOptionHelper.GetOptions: TOptions; begin Result := FOptions; end; procedure TOptionHelper.Include(const AOption: TOption); begin case AOption of clVisibleAlways : FOptions := FOptions + [clVisible]; //and so on... 
end; end; I would use something more like this: procedure TMyComp.SetOptions(const Value: TOptions); var ToBeSet, ToBeCleared, LNewOptions: TOptions; begin ToBeCleared := FOptions - Value; ToBeSet := Value - FOptions; LNewOptions := FOptions - ToBeCleared + ToBeSet; if (clVisibleAlways in LNewOptions) then Include(LNewOptions, clVisible); if (clVisible in LNewOptions) then Include(LNewOptions, clRenderable); if not (clRenderable in LNewOptions) then Exclude(LNewOptions, clVisible); if not (clVisible in LNewOptions) then Exclude(LNewOptions, clVisibleAlways); if FOptions <> LNewOptions then begin FOptions := LNewOptions; // update the rest of your component as needed... end; end; Your logic is flawed. The lines if (clVisible in ToBeSet) and (clRenderable in ToBeCleared) then and if (clRenderable in ToBeCleared) and (clVisible in ToBeSet) then are equivalent - yet this is clearly not what you intend. I don't think Rudy's solution is quite there either because of the potential contradiction between set and clear. Here is my suggested solution. procedure TMyComp.SetOptions(const Value: TOptions); var ToBeSet, ToBeCleared: TOptions; begin if FOptions <> Value then begin ToBeCleared:= FOptions - Value; ToBeSet:= Value - FOptions; FOptions:= Value; //clVisible -> clRenderable if (clVisible in ToBeSet) then begin Include(FOptions, clRenderable); Exclude( ToBeCleared, clRenderable); // avoid contradiction! end; //not clRenderable -> not clVisible if (clRenderable in ToBeCleared) then begin Exclude(FOptions, clVisible); Exclude(ToBeSet, clVisible); Include(ToBeCleared, clVisible ); // Chain reaction - Clearing clRederable clears clVisible and therefore by implication clVisibleAlways! 
end; //not clVisible -> not clVisibleAlways if (clVisible in ToBeCleared) then begin Exclude(FOptions, clVisibleAlways); Exclude(ToBeSet, clVisibleAlways); end; //clVisibleAlways -> clVisible if (clVisibleAlways in ToBeSet) then begin Include(FOptions, clVisible); Include(ToBeSet, clVisible); end; end; end; you are right with the extra lines, but in design-time I see no effect... Shouldn't FNewVal by FOptions instead? In any case, wouldn't it be easier to simply setup ToBeSet and ToBeCleared as needed, and then use FOptions := (FOptions - ToBeCleared) + ToBeSet;? Yes, fNewVal should be FOptions, Remy. I was going to do it differently but changed my mind. Have edited source. You commented on extra lines REALSOFO, but did you notice all the 'and' statements that have been removed? I will add the full code I used to solve my question. This will remove some constraints about the order I check or unchecked the options. procedure TMyComp.SetOptions(Value: TOptions); var ToBeSet, ToBeCleared: TOptions; clRenderableChanged, clVisibleChanged, clVisibleAlwaysChanged: Boolean; begin if FOptions <> Value then begin ToBeCleared:= FOptions - Value; ToBeSet:= Value - FOptions; clRenderableChanged:= (clRenderable in ToBeSet) and (not (clRenderable in FOptions)) or ((clRenderable in ToBeCleared) and (clRenderable in FOptions)); clVisibleChanged:= (clVisible in ToBeSet) and (not (clVisible in FOptions)) or ((clVisible in ToBeCleared) and (clVisible in FOptions)); clVisibleAlwaysChanged:= (clVisibleAlways in ToBeSet) and (not (clVisibleAlways in FOptions)) or ((clVisibleAlways in ToBeCleared) and (clVisibleAlways in FOptions)); FOptions:= Value; if clRenderableChanged then begin if clRenderable in ToBeSet then Include(FOptions, clVisible); if clRenderable in ToBeCleared then begin Exclude(FOptions, clVisible); Exclude(FOptions, clVisibleAlways); end; end; if clVisibleChanged then begin if clVisible in ToBeSet then Include(FOptions, clRenderable); if clVisible in ToBeCleared then 
Exclude(FOptions, clVisibleAlways); end; if clVisibleAlwaysChanged then begin if clVisibleAlways in ToBeSet then begin Include(FOptions, clVisible); Include(FOptions, clRenderable); end; end; end; end; Anyhow, this was possible thanks to the help of everyone who contributed answers to this question. UPDATE: I think some of my problems came from the fact that I didn't restart Delphi after installing component changes.
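Outside Delphi, the change-driven rules that emerged in this thread can be stated compactly. Below is a hypothetical Python sketch (the names mirror the Delphi enum, but this is an illustration of the dependency logic, not the component's code):

```python
# Hypothetical Python model of the option-dependency rules worked out above:
# clVisibleAlways implies clVisible, clVisible implies clRenderable, and
# clearing an option cascades the implications in the other direction.

VISIBLE = "clVisible"
VISIBLE_ALWAYS = "clVisibleAlways"
RENDERABLE = "clRenderable"
EDITABLE = "clEditable"

def apply_change(old: set, new: set) -> set:
    """Compute the effective option set after a requested change."""
    cleared = old - new   # options the caller removed (ToBeCleared)
    added = new - old     # options the caller added (ToBeSet)
    opts = set(new)
    # not clRenderable -> not clVisible -> not clVisibleAlways (chain reaction)
    if RENDERABLE in cleared:
        opts -= {VISIBLE, VISIBLE_ALWAYS}
    # not clVisible -> not clVisibleAlways
    if VISIBLE in cleared:
        opts.discard(VISIBLE_ALWAYS)
    # clVisible -> clRenderable
    if VISIBLE in added:
        opts.add(RENDERABLE)
    # clVisibleAlways -> clVisible -> clRenderable
    if VISIBLE_ALWAYS in added:
        opts |= {VISIBLE, RENDERABLE}
    return opts
```

Note the priority choice: a cleared clRenderable wins over a kept clVisible, which is the "chain reaction" the change-driven version above aims for.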
common-pile/stackexchange_filtered
Ejecting disk from non-booting Macbook My Macbook does not power up for more than a few seconds, and I have declared it dead. The only issue is that there is still a Snow Leopard disk inside it. The MB doesn't power on long enough to push the trackpad button, or do the PRAM reset, or press the Eject button on the keyboard before shutting down again. Any ideas how to MANUALLY remove the disk? Get a standard USB mouse, attach it, and press and hold the primary mouse button while starting. This invokes the eject before the Bluetooth drivers are even loaded. Tried, but it wasn't a Bluetooth mouse; it was the built-in trackpad. Kind of a shot in the dark, but one reason a MacBook won't start up at all is that the battery is so dead that it tells the Mac not to run. Some models allow you to still run the computer with the power adapter attached and the battery removed. Have you tried that? If it works, you can keep things running long enough to eject the disk. You can also (kind of a desperation move) work a thin bit of plastic (thinner than a credit card) around in the slot to try to get it under or on top of the disc and work it back and forth until the disc pops out. Battery was out, PSU was plugged in and was showing green. It only starts up for about 1-5 seconds, depending on how long I leave it UNPLUGGED for. I know there's something wrong on the motherboard, probably a capacitor or two, but I need a mechanical way to eject the disk
bootstrap glyphicons inside asp:buttons disapear after gridview sorting What should I do to fix this? html inside a asp:button need a fix to be displayed, I use this method (it works unless I do click on the header of any column to sort the gridview): private void FixGlyph(PlaceHolder ph, Button btn, string iconClass, string customLabelStye = "") { if (btn.Visible) { var g = new HtmlGenericControl(); g.ID = "labelFor_" + btn.ID; g.TagName = "label"; g.Attributes.Add("for", btn.ClientID); g.Attributes.Add("class", "" + btn.CssClass + ""); if (customLabelStye != "") { g.Attributes.Add("style", customLabelStye); } g.InnerHtml = "<i class='" + iconClass + "'></i> " + btn.Text; ph.Controls.Add(g); btn.Attributes.Add("style", "display:none;"); } } my gridview is this: <asp:UpdatePanel ID="UpdatePanel2" runat="server"> <ContentTemplate> <asp:GridView ID="GridView1" AllowPaging="True" AutoGenerateColumns="false" DataSourceID="AccessDataSource1" DataKeyNames="FICHA" runat="server" GridLines="None" OnSorted="GridViewFix" AllowSorting="true" CssClass="table table-hover table-striped" ShowFooter="true" PagerStyle-CssClass="bs-pagination text-center"> <Columns> <%-- Ficha del empleado --%> <asp:TemplateField HeaderText="Ficha" SortExpression="FICHA"> <ItemTemplate> <asp:Label ID="lblFicha" Text='<%# Eval("FICHA") %>' runat="server"></asp:Label> </ItemTemplate> </asp:TemplateField> <%-- columna de eliminar --%> <asp:TemplateField HeaderText="<span class='glyphicon glyphicon-remove'></span>"> <ItemStyle Width="20" /> <ItemTemplate> <asp:PlaceHolder ID="delPh" runat="server"></asp:PlaceHolder> <asp:Button ID="delBtn" CssClass="btn btn-link " data-toggle="modal" ToolTip="Modify" data-target="#deleteModal" runat="server" autopostback="false" OnClick="delBtn_Click" /> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> </ContentTemplate> <Triggers> </Triggers> </asp:UpdatePanel> my Page_Load is like this protected void Page_Load(object sender, EventArgs e) { foreach 
(GridViewRow row in GridView1.Rows) { Button dbtn = row.FindControl("delBtn") as Button; PlaceHolder dPH = row.FindControl("delPh") as PlaceHolder; FixGlyph(dPH, dbtn, "glyphicon glyphicon-remove", "padding:0 !important;"); } } and I even tried to use onSorted event for the gridview: protected void GridViewFix(object sender, EventArgs e) { foreach (GridViewRow row in GridView1.Rows) { Button dbtn = row.FindControl("delBtn") as Button; PlaceHolder dPH = row.FindControl("delPh") as PlaceHolder; FixGlyph(dPH, dbtn, "glyphicon glyphicon-remove", "padding:0 !important;"); } } which gets trigged, aparently finds the controls and fix the labels but... then something happend after GridViewFix and reloads the components, again without the glyphicons Is there any particular reason you are doing this in code? It seems like you could add one tiny bit of markup and remove ALL the code your have posted. Hello Ben first I need glyphicons , they don't apear if you use it inside asp:button, I need asp:button for whole other reasons. yes i could use asp:Linkbutton but then code behind won't work You seem to be dynamically adding a <label<i></i><label> for the glyph icons. I can see no reason why you are doing this in code, just add it to the markup. Also there is nothing that you can do with an asp:Button that you can't do with a asp:LinkButton. you'r right.. already found a solution check it out well, the question wasn't completly clear.. buttons have data-toggle and data-target atributes which LinkButtons don't and that's why wanted them so bad. I Needed to open a modal at the same time I triggered an event on code behind. so this fix the issue: Just trigger the modal yourself with javascript. 
<asp:LinkButton ID="AddAreaBtn" runat="server" autopostback="false" OnClientClick="javascript:$('#BorrarAreaModal').modal('show');" OnClick="BorraArea_Click"> <i aria-hidden="true" class="glyphicon glyphicon-remove"></i> </asp:LinkButton> I would put your code to add the glyphs in the Page_PreRender method. It's possible that after page_load the Gridview has an event that is firing and recreating the button controls.
Converting movie primitives from NetLogo 5 to 6 I am trying to convert the following procedure from NetLogo 5 to 6. NL5 procedure:

to make-movie
  user-message "Enter name ending with .mov"
  let path user-new-file
  if not is-string? path [ stop ]
  reset
  movie-start path
  while [ ticks <= 300 ] [
    run-model
    movie-grab-view
  ]
  movie-close
end

My conversion to NL6:

to make-movie
  user-message "Enter name ending with .mov"
  let path user-new-file
  if not is-string? path [ stop ]
  reset
  vid:start-recorder path
  while [ ticks <= 300 ] [
    run-model
    vid:record-view
  ]
  vid:save-recording path
end

The system is flagging an error in my conversion on 'path' on the following line, suggesting that a command is expected instead of the file-to-open as represented by 'path':

reset
vid:start-recorder path

I have read through the vid extension docs as well as the transitions for the movie primitives, but just cannot figure out how to get around this. Any suggestions or pointers? You probably just forgot to update the reset to vid:reset-recorder; it's also from the vid extension. vid:start-recorder doesn't take a path as input. You only need the path for vid:save-recording. In the video extension docs, in the section for vid:save-recording, they say: Note that at present the recording will always be saved in the "mp4" format. So you probably want to change the user message. When I tried it with the following code the file extension was written automatically.

extensions [ vid ]

to setup
  ca
  reset-ticks
  crt 10
end

to make-movie
  setup
  user-message "Enter name ending with .mov"
  let path user-new-file
  if not is-string? path [ stop ]
  vid:reset-recorder
  vid:start-recorder
  while [ ticks <= 10 ] [
    go
    vid:record-view
  ]
  vid:save-recording path
end

to go
  ask turtles [ fd 1 ]
  tick
end

Awesome, and indeed you were correct. Thanks a million. I didn't forget about converting 'reset' to the new format; I just was unaware of it (very new to NetLogo, and the vid extension). Thanks again. You are a star.
How can I display parent and child taxonomies in separate drop downs? The requirement: I have a taxonomy State, which is the parent, and it has child sub-taxonomies, the Cities. I need to display the states as a drop down, and when I select a particular state I have to display the corresponding child cities of that state in the second drop down. Kindly help me. I found a solution for the above problem, but it only works partially; anyone with a full solution, kindly help me. Add the below code at the top of the template page after get_header(); as you can see, I added a custom taxonomy 'state' in wp_dropdown_categories:

<?php get_header(); ?>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
<script type="text/javascript">
$(function() {
  $('#main_cat').change(function() {
    var $mainCat = $('#main_cat').val();
    // call ajax
    $("#sub_cat").empty();
    $.ajax({
      url: "<?php bloginfo('wpurl'); ?>/wp-admin/admin-ajax.php",
      type: 'POST',
      data: 'action=my_special_ajax_call&main_catid=' + $mainCat,
      success: function(results) {
        // alert(results);
        $("#sub_cat").removeAttr("disabled");
        $("#sub_cat").append(results);
      }
    });
  });
});
</script>
<style type="text/css">
#content { width: auto; height: 400px; margin: 50px; }
</style>
<div id="content">
<?php wp_dropdown_categories('show_count=0&selected=-1&hierarchical=1&depth=1&hide_empty=0&exclude=1&show_option_none=Main Categories&name=main_cat&taxonomy=state'); ?>
<select name="sub_cat" id="sub_cat" disabled="disabled"></select>
</div>
<?php get_footer(); ?>

Add the below code in the functions.php file:
function implement_ajax() {
  if (isset($_POST['main_catid'])) {
    $categories = get_categories('child_of=' . $_POST['main_catid'] . 'hide_empty=0');
    foreach ($categories as $cat) {
      $option .= '<option value="' . $cat->term_id . '">';
      $option .= $cat->cat_name;
      $option .= ' (' . $cat->category_count . ')';
      $option .= '</option>';
    }
    echo '<option value="-1" selected="selected">Sub Categories</option>' . $option;
    die();
  } // end if
}
add_action('wp_ajax_my_special_ajax_call', 'implement_ajax');
add_action('wp_ajax_nopriv_my_special_ajax_call', 'implement_ajax'); // for users that are not logged in

So now this is my structure:

State1
  CT1-1
  CT2-1
  CT3-1
State2
  CT1-2
  CT2-2
  CT3-2
State3
  CT1-3
  CT2-3
  CT3-3

I'm able to display the parent taxonomies, and when I select a parent taxonomy the second drop down gets activated, but it does not populate the child items; it only shows the static output "Select". I'm not a pro, so kindly bear with me. Can anyone kindly resolve this issue? Hello everyone out there who is facing difficulty displaying parent and child taxonomies in drop downs: I found the solution to the above problem. Edit the code in the functions.php file. Before, the code was:

$categories = get_categories('child_of=' . $_POST['main_catid'] . 'hide_empty=0');

Now edit the code like this:

$categories = get_categories('child_of=' . $_POST['main_catid'] . '&hide_empty=0' . '&taxonomy=state');

and this works beautifully, try it for sure. All I did was add the missing '&' separator and concatenate the taxonomy, that's all. Cheers everyone, happy coding! :)
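The root cause above is easy to miss: without the '&', the concatenated argument string runs two parameters together. A quick Python illustration (hypothetical, just mimicking the PHP-style string building) shows what WordPress would actually receive:

```python
# Simulate the argument strings passed to get_categories() above.
# parse_qs stands in for WordPress's own query-string parsing.
from urllib.parse import parse_qs

broken = "child_of=" + "7" + "hide_empty=0"                      # missing '&'
fixed = "child_of=" + "7" + "&hide_empty=0" + "&taxonomy=state"  # working version

def args(query_string: str) -> dict:
    """Flatten parse_qs's list values for easy comparison."""
    return {key: values[0] for key, values in parse_qs(query_string).items()}
```

With the broken string, everything after the ID folds into one mangled child_of value, so the second parameter never arrives on its own.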
ideavim (mac): On doing a paste in normal mode (press `p`), cursor jumps to previous location in editor after pasting On enabling ideavim plugin (on Mac), while doing a paste operation in normal mode of vim by pressing p, the paste happens but the cursor jumps to the previous location in the editor. This previous location could even be in another file (something similar to Intellij command Cmd+[). I tried to see if there is a keymap clash with ideavim vs. intellij, but could not find one. Can this be fixed by changing some keymap entry.. or some option in the Preferences? Don't you have conflicting mappings in ~/.ideavimrc? All it contains is set showmode
Unexpected 401 error when connecting to a web service (possibly Kerberos / double-hop related) We have a client that connects to a web service (service1.svc) with the URL https://destination.domain/Service1.svc. This web service connects to a second web service (service2.asmx) with the URL https://localhost.domain/service2.asmx. Both services are hosted on the same web server. The DNS on the domain controller sets destination.domain to point to the IP of the web server and localhost.domain to <IP_ADDRESS>. The application pool account is a global service managed account that is configured to allow delegation and is called webserveraccount. The application pool is configured to use the appPoolIdentity. We see a 401 authorisation error connecting to the second web service (service2.asmx). I have also seen a KDC_ERR_BADOPTION error, which makes me think our SPN configuration is incorrect. What would the correct format for the SPN be in the above scenario? Or is this not a Kerberos-related issue? 401 means you lack an authorization header in your request. Does your web service expect basic or token authorization? To be honest I'm not sure. The existing code passes the default credentials when it creates the web service object. I will investigate - this isn't code I wrote, nor am I very knowledgeable in this area. Where are you seeing KDC_ERR_BADOPTION? How are the two app pools configured? Same account, or different? The expected format of the SPN is http/exactmatch.of.domain.com[:port] I must apologise - I failed to reply with a solution. We set the AllowedImpersonationLevel on the client credentials of the WCF client to Delegation. All started working after that. SPNs for Kerberos were set correctly and the web service machine was set for Kerberos delegation.
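As a rough illustration of the SPN shape mentioned in the comments (service class, a '/', the exact FQDN clients use, and an optional port), here is a hypothetical Python checker; real Kerberos/AD rules (single-label hosts, duplicate SPN detection) go beyond this:

```python
import re

# Loose sanity check for an SPN like "http/destination.domain.com[:443]".
# This is illustrative only; it deliberately rejects URL-style strings,
# which are a common mistake when registering SPNs.
SPN_PATTERN = re.compile(
    r"^[A-Za-z]+"              # service class, e.g. http, HOST
    r"/[A-Za-z0-9-]+"          # host name
    r"(?:\.[A-Za-z0-9-]+)+"    # rest of the fully qualified domain name
    r"(?::\d+)?$"              # optional port
)

def looks_like_spn(spn: str) -> bool:
    return SPN_PATTERN.match(spn) is not None
```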
Do I need to update my BIOS? I'm wanting to try Ubuntu on my PC. (Windows is getting on my nerves.) Burnt Live Disk - check. Insert into Drive - check. Restart PC - check. Click try Ubuntu without installing - check. Then I get this:

[0.036714] [Firmware Bug]: AMD-Vi: IOAPIC[0] not in IVRS table
[0.036792] [Firmware Bug]: AMD-Vi: No southbridge IOAPIC found
[0.036822] AMD-Vi: Disabling interrupt remapping

Tried googling what I should do, but just about everywhere I look it says I need to update my BIOS. But they also say not to do so if you're not sure what you're doing. That's why I'm here. I need a bit of help. Thank you in advance.
TabView navigation stuck in tvOS I’m implementing a carousel view for tvOS. In its simplest version, it would be: struct HorizontalCarouselComponent: View { let contents: [ContentViewModel] var body: some View { TabView { ForEach(contents) { content in Text(content.title) .focusable(true) } } .tabViewStyle(.page) } } With this version, I can see the content correctly and also the pagination. However, when I navigate using the arrow keys on the Apple TV simulator, I notice that some elements of the carousel become unreachable. For example, I can move two positions to the right, and even though there are two more positions, I cannot continue moving forward. I have tried using TabView with selection or setting the focus manually with @FocusState and focused, but none of them seem to work. What I expect is to reach every position of the pagination using the arrow keys or the remote. Do you have any ideas on what might be happening? Should I go for ScrollView and forget about TabView? It seems to be a problem with the simulator. The same code works perfectly on a physical device.
How can I control the number of events a namespace can save? I want to control the number of events my tenants can create. My TTL is 1 hour, and that's OK, but sometimes we have spikes in several namespaces and I am worried about etcd getting flooded if it happens in several namespaces. I have tried to use resource quotas like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  creationTimestamp: "2024-09-16T04:54:44Z"
  name: compute-resources
  namespace: test-dev
  resourceVersion: "35872662"
  uid: 78d46d9b-9b1d-435d-88c2-0cfd6f61c64c
spec:
  hard:
    count/events: 1k
status:
  hard:
    count/events: 1k
  used: # I don't see it here

But it doesn't seem to work. Is there any way to make it work with a ResourceQuota, or do I need to create a webhook? Thanks. I'd avoid throttling events. I'd also avoid trying to control your tenants. An admission webhook will easily get complicated and consume more resources. Sounds like you need to harden your control plane - your distributed store, etcd, can probably handle more event abuse than you think. Do you have any supporting telemetry? https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ There is no direct way to limit event resources using Resource Quotas because events are not considered quota-managed resources. A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace. See the Medium blog by Anvesh Muppeda and the kodekloud blog about Kubernetes resource quotas and limit ranges, and how to collect Kubernetes events. Resource Quotas in Kubernetes allow administrators to set constraints on the total resource usage per namespace. By using Resource Quotas, you can limit the number of resources such as Pods, CPU, and memory that can be consumed within a namespace.
You can use an admission webhook to capture event creation requests. You can also check the number of events already present in a namespace. Admission webhooks are HTTP callbacks that receive admission requests and do something with them. You can define two types of admission webhooks, validating admission webhook and mutating admission webhook. Mutating admission webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults. After all object modifications are complete, and after the incoming object is validated by the API server, validating admission webhooks are invoked and can reject requests to enforce custom policies.
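To make the webhook idea concrete, here is a hypothetical sketch (plain Python rather than a deployable webhook server) of the decision a validating admission webhook could return for Event CREATE requests; the limit and the way the current count is obtained are assumptions:

```python
# Hypothetical admission decision for Event objects. A real webhook would
# receive an AdmissionReview JSON body over HTTPS and would query the API
# server for the namespace's current Event count.

EVENT_LIMIT_PER_NAMESPACE = 1000  # mirrors the 1k quota attempted above

def admit_event(namespace: str, current_event_count: int,
                limit: int = EVENT_LIMIT_PER_NAMESPACE) -> dict:
    """Build the 'response' part of an AdmissionReview reply."""
    allowed = current_event_count < limit
    response = {"allowed": allowed}
    if not allowed:
        response["status"] = {
            "code": 429,
            "message": (f"namespace {namespace!r} already has "
                        f"{current_event_count} events (limit {limit})"),
        }
    return response
```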
Python docstring about the type of object being returned

class Foo:
    def __init__(self, bar):
        self.bar = bar

    def get_new_foo(self, new_bar):
        return type(self)([self.bar, new_bar])  # How should it be documented?

If get_new_foo gets called from a derived class, then it would return an instance of the derived class. If multiple classes use Foo as a base class, then get_new_foo will return an instance of the derived class it was called from. I want to document what type of object get_new_foo returns, and I don't understand what/how to document it. I can't say Returns an instance of Foo because this will not always be the case. I'll note that Returns a new Foo is technically accurate, since all subclasses should obey the "is-a" rule; that is, it might not be returning Foo exactly, but what it's returning is still a Foo. @wim: It's building off instance attributes though; a @classmethod could be made to work (requiring the caller to explicitly pass the existing bar too), but this doesn't seem to be a case where it's an alternate constructor that makes sense to call on the class itself. @ShadowRanger ah, right you are. weird code, hard to offer a less-weird alternative without more context. The example in my question is definitely a @classmethod candidate, but my real code is a lot different from the example. I can't @classmethod, otherwise I would have. @wim the question is pretty straightforward, that is, how would one document the type of object such a method returns. Don't worry about the code, it's just an example and I don't think it needs more context than that. What's wrong with "Returns an instance of type(self)"? You can provide type hints in the docstring. Since you're writing the documentation on a class that would be the super-class for other classes, it makes sense that you document the base type it returns.
Even if the actual type returned will differ once the base class is inherited, the results will still be instances of the base type, and at this level you cannot document anything beyond that anyway. If you feel sub-classes need more specific documentation for whatever they return, you can simply provide the documentation there. Personally, I wouldn't be overly concerned about this. Since any subclass "is-a" Foo anyway, you're at worst mildly misleading in your chosen wording. If you want to be pedantically correct, you can always expand it to "Returns an instance of Foo (a Foo subclass when called on child class instances)".
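Putting this advice into the question's own example, a docstring along these lines documents the base type while noting the type(self) behaviour (the exact wording is of course a suggestion):

```python
class Foo:
    def __init__(self, bar):
        self.bar = bar

    def get_new_foo(self, new_bar):
        """Build a new instance combining this bar with ``new_bar``.

        Returns:
            Foo: an instance of ``type(self)`` -- i.e. of the class the
            method was called on, which is ``Foo`` or a subclass of it.
        """
        return type(self)([self.bar, new_bar])


class DerivedFoo(Foo):
    """A subclass; get_new_foo called on it returns DerivedFoo instances."""
```

On Python 3.11+ the same contract can also be expressed as a type hint with typing.Self, i.e. def get_new_foo(self, new_bar) -> Self.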
Refinable string not searchable I am using SharePoint Online. I created a site column "Model" which is of CHOICE type. Two crawled properties, ows_Model and ows_q_CHCS_Model, are created automatically by SharePoint. I used the default managed property RefinableString01 to map ows_Model. In the search centre, if I type the keyword model:GL, the search result contains the images tagged with GL, and the refinement also works. If I type the keyword GL, the search result does not contain the images I expected to see. From the article What makes a SharePoint column searchable, I found that the crawled property ows_Model is marked with Included in full-text index; however, the managed property RefinableString01 is marked Not searchable. That is the default value, which cannot be changed. Is there any way to make this column free-text searchable? You can modify the managed property; if that doesn't help, I suggest you create a new managed property and map it to any crawled property that you need (instead of using the pre-defined RefinableString01), with your own Searchable, Queryable, Retrievable, Refinable, and Sortable settings - that will help in most cases. Solved by creating a new managed property marked as Searchable, Queryable, and Retrievable. Please upvote if any comments helped you; I mentioned the same thing as the proposed solution.
I am getting an error when I try to set a webhook for Telegram I want to set a webhook for Telegram for certain user input so that Telegram replies automatically to predefined questions. I am able to write the program, which runs perfectly fine, but when I try to set a webhook for that program it shows an error:

{ "ok": false, "error_code": 400, "description": "Bad Request: bad webhook: Ip is reserved" }

I tried to set the webhook like this:

https://api.telegram.org/bot<token>/setwebhook?url=https://localhost/Manisha/bot.php

May be helpful for you: http://stackoverflow.com/questions/33378216/how-to-use-setwebhook-in-telegram-with-self-certificates-on-windows-7-and-php You need to use a publicly available URL that the Telegram servers can reach. http://localhost/ is not public. You are trying to make an HTTP request to the computer the bot is running on. This isn't your server, it is Telegram's server. It doesn't work because: Your PHP isn't installed on it. Telegram won't let your bot go poking at their internal servers. Then how can I do it, and is there any publicly available URL/server which I can use for my work? Rent a VPS or something.
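The answers boil down to this: the webhook URL must be HTTPS and reachable from Telegram's side, so localhost and private ("reserved") addresses are rejected. A hypothetical Python pre-check along those lines (it skips DNS resolution for simplicity):

```python
import ipaddress
from urllib.parse import urlparse

def is_plausible_webhook_url(url: str) -> bool:
    """Rough pre-check for setWebhook URLs; not Telegram's actual logic."""
    parsed = urlparse(url)
    if parsed.scheme != "https":      # Telegram requires an HTTPS webhook URL
        return False
    host = parsed.hostname or ""
    if host == "localhost":
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return True                   # a DNS name: may resolve publicly
    return ip.is_global               # literal IPs must be publicly routable
```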
IMEI number formatting in Excel I'm trying to format IMEI numbers in Excel according to RIM (Blackberry) conventions: XXXXXX.XX.XXXXXX.X I enter them in excel without any periods and use the "special format" cell properties so it displays right. This works kind of: ######-##-######-# However, when I replace the dashes with periods (######.##.######.#) Excel automatically changes it to ###.###.###.###.###, 5 groups of 3. It does this too when I prepend it with a space. Does anyone know how to disable this behaviour or a workaround? Try ######"."##"."######"."#. Works in Excel 2010. Places literal .s - anything within quotes is literal rather than whatever special meaning it may hold. An interesting note, Excel 2010, ######.##.######.# displays the 15 digit number as XXXXXXXXXXXXXXX... (yes, three literal dots at the end).
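If the numbers ever need the same 6-2-6-1 grouping outside Excel (say, in a script preparing the spreadsheet data), the format is simple to reproduce; a small hypothetical Python helper:

```python
def format_imei_rim(raw: str) -> str:
    """Format a 15-digit IMEI as XXXXXX.XX.XXXXXX.X (the RIM convention)."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) != 15:
        raise ValueError("an IMEI must contain exactly 15 digits")
    return ".".join([digits[0:6], digits[6:8], digits[8:14], digits[14:]])
```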
Excel files on a Samba network drive don't open I have a Microsoft Server 2019 Standard with MS Office installed. I also have a Linux server that shares each user's ~/windows/ directory through Samba with their Windows account when they're logged in. The rights to the folder are always set to 777, and it is owned by the user and the group users, to which the user belongs. However, when they open an .xlsx file located in the said directory, the following error appears: "Sorry, we couldn't find \AppData\Local\Microsoft\INetCache\Content.MSO\111191F2.xlsx. It is possible it was moved, renamed or deleted?" Opening up the location reveals that the file is indeed there, which hints at some problems with privileges. Opening the file on the network drive as Administrator works, which again hints at problems with privileges, but I haven't been able to debug this. I did no Active Directory setup; the disk is connected using net use (with the user's password) in the following way:

net use S: \\<address>\<user> <password>

As for the Samba config, here is the relevant part:

[homes]
browseable = no
path = /home/%S/windows
read only = no
valid users = %S

Saving the file locally and opening it after works without any problems. Editing a simple text file in Notepad works without any issues.
Samba logs the following (it logs much more, but this should hopefully be the relevant part): [2021/11/01 11:27:49.692907, 3] ../source3/smbd/dir.c:1225(smbd_dirptr_get_entry) smbd_dirptr_get_entry mask=[*] found test.txt fname=test.txt (test.txt) [2021/11/01 11:27:49.692966, 3] ../source3/smbd/smb2_server.c:3195(smbd_smb2_request_error_ex) smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[5] status[STATUS_NO_MORE_FILES] || at ../source3/smbd/smb2_query_directory.c:158 [2021/11/01 11:27:49.694329, 2] ../source3/smbd/open.c:1447(open_file) tom opened file test.txt read=Yes write=No (numopen=4) [2021/11/01 11:27:49.694999, 3] ../source3/smbd/smb2_server.c:3195(smbd_smb2_request_error_ex) smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_ACCESS_DENIED] || at ../source3/smbd/smb2_getinfo.c:159 [2021/11/01 11:27:49.696187, 3] ../source3/smbd/smb2_read.c:421(smb2_read_complete) smbd_smb2_read: fnum<PHONE_NUMBER>, file test.txt, length=24 offset=0 read=24 Running accesschk.exe -ld S: (by the problem user) yields the following: S:\ DESCRIPTOR FLAGS: [SE_DACL_PRESENT] [SE_DACL_PROTECTED] [SE_SELF_RELATIVE] OWNER: S-1-5-21-2603346316-3644132649-3033319823-1025 [0] ACCESS_ALLOWED_ACE_TYPE: S-1-5-21-2603346316-3644132649-3033319823-1025 [OBJECT_INHERIT_ACE] [CONTAINER_INHERIT_ACE] FILE_ALL_ACCESS [1] ACCESS_ALLOWED_ACE_TYPE: S-1-22-2-1000 [OBJECT_INHERIT_ACE] [CONTAINER_INHERIT_ACE] FILE_ALL_ACCESS [2] ACCESS_ALLOWED_ACE_TYPE: Everyone [OBJECT_INHERIT_ACE] [CONTAINER_INHERIT_ACE] FILE_ALL_ACCESS [3] ACCESS_ALLOWED_ACE_TYPE: S-1-22-2-1000 FILE_ALL_ACCESS [4] ACCESS_ALLOWED_ACE_TYPE: S-1-5-21-2603346316-3644132649-3033319823-1025 FILE_ALL_ACCESS [5] ACCESS_ALLOWED_ACE_TYPE: CREATOR OWNER [OBJECT_INHERIT_ACE] [CONTAINER_INHERIT_ACE] [INHERIT_ONLY_ACE] FILE_ALL_ACCESS [6] ACCESS_ALLOWED_ACE_TYPE: CREATOR GROUP [OBJECT_INHERIT_ACE] [CONTAINER_INHERIT_ACE] [INHERIT_ONLY_ACE] FILE_ALL_ACCESS Any help would be much appreciated! 
In order to help you we would need to know if you have AD in place, and the relevant portion of your smb.conf. What is the user under which you share the user's directory? For next time - this question is better suited for Super User. I edited the question to (hopefully) contain this information. This is much better. Did you try writing a simple text file, editing it and saving it (via e.g. Notepad)? Did it work for you? What does the smbd.log file say (don't forget to raise your logging level)? What are the rights on the ~/windows/ directory? Is that user part of any group? I again edited the question to include this information. Thanks. Could you post the result of Sysinternals (https://learn.microsoft.com/en-us/sysinternals/downloads/accesschk) accesschk.exe -ld \\<address>\<user>? (or S:) I again did so. You may be suffering from the SYNCHRONIZE bit issue. Could you try to set it, to see if it helps? https://learn.microsoft.com/en-us/troubleshoot/windows-client/networking/access-denied-access-smb-file-share Running the command described in the linked page (changing via icacls) on the entire share changes nothing about the permissions. However, a newly created folder on the share contains the SYNCHRONIZE bit and the issue still persists when opening an Excel file in the folder. Let us continue this discussion in chat. Go to Control Panel, set it to small icons, and go to Sync Center. Check that Offline Files is disabled. In Control Panel, go to Internet Options, the Security tab, Trusted Sites - try listing your Samba server as a trusted site. The directory your error comes from is where Office caches things - it may be treating your file as unsafe, as coming from an insecure zone. I tried replicating your setup but with Windows 10 - everything worked, but my Samba server is on a local LAN, trusted by default. Tried it, sadly without success.
Go to File -> Options -> Trust Center -> click on Trust Center Settings -> Trusted Locations -> select "Allow Trusted Locations on my network(not recommended" Then Click on "File Block Settings" under Trust Center, Uncheck the following or others that pertain to your excel version. "Excel 4 Workbooks” "Excel 4 Worksheets” "Excel 3 Worksheets” "Excel 2 Worksheets” "Excel 4 Macrosheets and Add-in files" "Excel 3 Macrosheets and Add-in files" "Excel 2 Macrosheets and Add-in files" Click Ok. Close all your excel files and try to open that excel file from the network share. Allow Trusted Locations on my network were already checked, nothing to do there. As for the File Block Settings, unchecking sadly didn't seem to work (same error). I just remembered that you are probably missing samba user map file, which maps windows user to *nix user. In the [global] section you need to add [global] user map = /<path>/mapusers.txt The details can be found at man pages. In the text file mapusers.txt you need to have one line per mapping. In your case it would be all windows users are mapped to one linux user or group: linux_user = windows_user or @linux_group = @windows_group
common-pile/stackexchange_filtered
ASP.NET Core application is saying SSL certificate has expired, but it has not expired I have an ASP.NET Zero application connected to a single sign-on (SSO) application using OpenIdConnect. The SSO application is using Identity Server and the ASP.NET Zero application is validating against the SSL certificate on the SSO. The SSL certificate is still valid till 2022, but the ASP.NET Zero application just stops working when it is validating a login from the SSO application, saying the certificate has expired. Below are the errors: **Mvc.ExceptionHandling.AbpExceptionFilter - IDX10249: X509SecurityKey validation failed. The associated certificate has expired. ValidTo (UTC): 'System.DateTime', Current time (UTC): 'System.DateTime'. Microsoft.IdentityModel.Tokens.SecurityTokenInvalidSigningKeyException: IDX10249: X509SecurityKey validation failed. The associated certificate has expired. ValidTo (UTC): 'System.DateTime', Current time (UTC): 'System.DateTime'. at Microsoft.IdentityModel.Tokens.Validators.ValidateIssuerSecurityKey(SecurityKey securityKey, SecurityToken securityToken, TokenValidationParameters validationParameters)**.
Below is some of the validation parameters code: private async Task<JwtSecurityToken> ValidateToken( string token, string issuer, IConfigurationManager<OpenIdConnectConfiguration> configurationManager, ExternalLoginProviderInfo providerInfo, CancellationToken ct = default(CancellationToken)) { if (string.IsNullOrEmpty(token)) { throw new ArgumentNullException(nameof(token)); } if (string.IsNullOrEmpty(issuer)) { throw new ArgumentNullException(nameof(issuer)); } var discoveryDocument = await configurationManager.GetConfigurationAsync(ct); var signingKeys = discoveryDocument.SigningKeys; var validationParameters = new TokenValidationParameters { ValidateIssuer = true, ValidIssuer = issuer, ValidateIssuerSigningKey = true, IssuerSigningKeys = signingKeys, ValidateLifetime = true, ClockSkew = TimeSpan.FromMinutes(5), ValidateAudience = false }; var principal = new JwtSecurityTokenHandler().ValidateToken(token, validationParameters, out var rawValidatedToken); return (JwtSecurityToken)rawValidatedToken; } What could be the problem? Obvious question but: have you confirmed the date and time on the machine running the above code is correct? Yes, I have confirmed that the date and time are correct. Have you managed to resolve this issue? @ckeedee
Maven -U option update NON-SNAPSHOT release I have 2 questions about the -U option. 1- I saw a lot of questions and articles that say the -U option only updates snapshots. However, from the documentation of Maven 3.0.1: "Forces a check for updated releases and snapshots on remote repositories" So does this mean that in the latest Maven it will check for updated NON-SNAPSHOT dependencies? 2- Let's say I have a dependency on a local project that doesn't exist in the remote repository; what will be the behavior of the -U option? The jar won't be in the remote repository, so it can't compare it, as the project is not on the remote repo.
Export Neo4j to GraphML with node/edge labels? Has anyone been able to export GraphML, readable by Gephi or yEd, which properly displays labels in Gephi/yEd? I'm using the latest Neo4j community and APOC; I can export GraphML from APOC just fine, and import into Gephi/yEd, but there doesn't seem to be a way to load the attributes/key/data elements, so that useful labels can be displayed. Add this line to the top of the GraphML file, along with the other keys. <key id="labels" for="node" attr.name="labels"/> You can then copy the "labels" property to the Label column in Gephi. For those using Neo4j / APOC 4.x.x, pay attention to add {readLabels: true} to your CALL or the nodes will appear just as grey dots. MATCH (n) DETACH DELETE n; CALL apoc.import.graphml("file:///mypgraph.graphml", {readLabels: true}); MATCH (n) RETURN n; Reference: https://neo4j.com/labs/apoc/4.1/import/graphml/
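As a sketch of where that key line sits, here is a minimal GraphML fragment. The node id and label value are made up for illustration, and the exact attributes APOC writes may differ by version; only the `<key>` declaration is taken from the answer above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <!-- the suggested key declaration, placed next to the other <key> elements -->
  <key id="labels" for="node" attr.name="labels"/>
  <graph id="G" edgedefault="directed">
    <node id="n0">
      <!-- hypothetical node: its Neo4j labels serialized as a string -->
      <data key="labels">:Person</data>
    </node>
  </graph>
</graphml>
```

With the key declared, Gephi recognizes "labels" as a node attribute, which is what lets you copy it into the Label column.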
CCAI platform webhook Is there a way to get the request when a message is sent and received in CCAI Platform by the agent and user? I need to translate the user's and agent's languages, so is there a way to get the details through a webhook?
How to stop SSH pipe from dying due to inactivity in Konsole I have a super efficient Konsole profile set up just the way I like it with my preferred shortcuts. Lately I have been spending a lot of time in the terminal connected to a remote server through SSH. I spend a lot of time looking at logs and even doing development on the server using Vim and nano. Sometimes, after a long period I come back to my session and it is dead, and I have to use SSH to connect again, type my password which is relatively long, navigate to a location and do whatever I need to do. I end up losing a minute or two doing this, mostly due to long input delay and the login delay. I talked to a previous coworker who now works as a sysadmin and he suggested that I use tmux. I have used tmux before and I liked it, but I really like my current setup. Is there a way to set up an eternal SSH connection in Konsole? You can configure the ssh client/server to send keepalive packets https://stackoverflow.com/questions/25084288/keep-ssh-session-alive I don't know anything about Konsole, but could you run Konsole inside of a tmux session? @mattb I don't think that is a solution. A solution would be the opposite - running tmux inside of Konsole, but then I'd have to use the tmux shortcuts for the most part. I will try Robert's solution later, I am at work right now. I am pretty sure it will work, I will post an update later.
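The keepalive suggestion from the comments can be set entirely client-side, without touching the server. A minimal sketch for `~/.ssh/config` (the host alias, hostname, user, and interval values here are just examples):

```
Host myserver
    HostName server.example.com
    User myuser
    # send a protocol-level keepalive probe after 60 s of inactivity
    ServerAliveInterval 60
    # give up only after 3 unanswered probes (~3 minutes of dead link)
    ServerAliveCountMax 3
```

These keepalives usually stop NAT gateways and firewalls from silently dropping an idle connection, which is the most common cause of the "dead session" described in the question; setting up SSH keys would additionally remove the long-password step on reconnect.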
Latitude Longitude JSON C# Serialization I would like to take this class and convert it to JSON. public class Location { public string city {get; set;} public double population {get; set;} public double Latitude {get; set;} public double Longitude {get; set;} } In this format by JSON.NET [ { "city ": "Atlanta, GA", "Value": 520, "Location": [ 42.7, 23.33 ] } ] Please read [ask]. Key phrases: "Search, and research" and "Explain ... any difficulties that have prevented you from solving it yourself". It's a bit of a strange format for this class. So try: public class Location { [JsonProperty("City")] public string City {get; set;} [JsonProperty("Value")] public double Population {get; set;} [JsonIgnore] public double Latitude {get; set;} [JsonIgnore] public double Longitude {get; set;} [JsonProperty("Location")] public double[] Coordinates { get { return new double[] { Latitude, Longitude }; } } } (The array property is renamed here because a C# member cannot share the name of its enclosing class; the JsonProperty attribute keeps the "Location" key in the output.) Although a custom JsonConverter may be a cleaner solution.
Dynamically sizing a container view based on the view controller inside it in iOS Within my application, I have a UIScrollView with a containerView inside it. Based on buttons pressed by the user, I swap a different UIViewController into the containerView. The problem is, I need to figure out the height of this new UIViewController so that I can adjust the size of the containerView's height constraint. What I am currently doing is @IBOutlet weak var content: UIView! @IBAction func indexChanged(_ sender: UISegmentedControl) { if let vc = getViewController(sender.selectedSegmentIndex) { self.addChildViewController(vc) self.transition(from: self.currentViewController!, to: vc, duration: 0.5, options: UIViewAnimationOptions.transitionCrossDissolve, animations: { self.currentViewController!.view.removeFromSuperview() vc.view.frame = self.content.bounds self.content.addSubview(vc.view) }, completion: { finished in self.currentViewController!.removeFromParentViewController() self.currentViewController = vc }) } } Is there anything I'm doing wrong here? Did you get an answer to this? @SAHM It's been a while, but I think this method solved it for me: https://gist.github.com/BrandonSlaght/000a8735220384c6d398cdd46bf3f1f3 Remember to call self.addedViewController.view.translatesAutoresizingMaskIntoConstraints = false and self.addChildViewController(addedViewController) before using the addSubview method. If you're trying to adjust the contentSize property on a scrollview, why not set content.contentSize = vc.view.frame.size? This doesn't work for me: there is no property contentSize for a UIView. The scrollview has a contentSize. //assuming vc.view has its frame set to {{0,0},{goodWidth,goodHeight}} then content.bounds = vc.view.bounds then scrollView.contentSize = content.frame You need a subview property, added to the controller in the container, that all your content goes in, pinned to the top of the view and the sides in the view controller with no bottom constraint.
You need to make sure it is laid out with all the content, get the height of that view, and update your height constraint on the view controller holding this container, preferably with a delegate method. If any of the views this content view holds does not have an intrinsic size, assign one in Auto Layout, and it will need to stay the same.
Cast type with range limit Is there an elegant way to cast a bigger datatype to a smaller one without causing the result to overflow? E.g. casting 260 to uint8_t should result in 255 instead of 4. A possible solution would be: #include <limits.h> #include <stdint.h> inline static uint8_t convert_I32ToU8(int32_t i32) { if(i32 < 0) return 0; if(i32 > UINT8_MAX) return UINT8_MAX; return (uint8_t)i32; } Although this solution works, I wonder if there is a better way (without having to create lots of conversion functions). Solution should be in C (optionally with GCC compiler extensions). What you seem to want is to clamp the value, not do a straight conversion (which will essentially lead to truncation of the upper bits). This is not supported in standard C, and you have no other way to solve it than through a function similar to the one you show. In C the only choice is macros. Your case is a bit painful to do with macros. The biggest problem is the different cases of the constants and the associated types. From the previous comments (and C++ std::clamp): #define CLAMP(v,lo,hi) (v)<(lo)?(lo):(v)>(hi)?(hi):(v). From here, you can define specific macros for each destination type you target.
clamping with the type's limits like this is also called saturation: is there a function in C or C++ to do "saturation" on an integer, Convert from int to char in C with cutoff, Clamping short to unsigned char Since C11 you can use the new _Generic selection feature #define GET_MIN(VALUE) _Generic((VALUE), \
 char : CHAR_MIN, \
 signed char : SCHAR_MIN, \
 short : SHRT_MIN, \
 int : INT_MIN, \
 long : LONG_MIN, \
 long long : LLONG_MIN, \
 default : 0 /* unsigned types */) #define GET_MAX(VALUE) _Generic((VALUE), \
 char : CHAR_MAX, \
 unsigned char : UCHAR_MAX, \
 signed char : SCHAR_MAX, \
 short : SHRT_MAX, \
 unsigned short : USHRT_MAX, \
 int : INT_MAX, \
 unsigned int : UINT_MAX, \
 long : LONG_MAX, \
 unsigned long : ULONG_MAX, \
 long long : LLONG_MAX, \
 unsigned long long : ULLONG_MAX) #define CLAMP(TO, X) ((X) < GET_MIN((TO)(X)) \
 ? GET_MIN((TO)(X)) \
 : ((X) > GET_MAX((TO)(X)) ? GET_MAX((TO)(X)) : (TO)(X))) You can remove the unnecessary types to make it shorter. After that just call it as CLAMP(type, value) like this int main(void) { printf("%d\n", CLAMP(char, 1234)); printf("%d\n", CLAMP(char, -1234)); printf("%d\n", CLAMP(int8_t, 12)); printf("%d\n", CLAMP(int8_t, -34)); printf("%d\n", CLAMP(unsigned char, 1234)); printf("%d\n", CLAMP(unsigned char, -1234)); printf("%d\n", CLAMP(uint8_t, 12)); printf("%d\n", CLAMP(uint8_t, -34)); } This way you can clamp to almost any type, including floating-point types or _Bool if you add more types to the support list. Beware of the type width and signedness issues when using it Demo on Godbolt You can also use the GNU typeof or __auto_type extensions to make the CLAMP macro cleaner and safer.
These extensions also work in older C versions, so you can use them if you don't have access to C11 Another simple way to do this in older C versions is to specify the destination bitwidth // Note: Won't work for (unsigned) long long and needs some additional changes #define CLAMP_SIGN(DST_BITWIDTH, X) \
 ((X) < -(1LL << ((DST_BITWIDTH) - 1)) \
 ? -(1LL << ((DST_BITWIDTH) - 1)) \
 : ((X) > ((1LL << ((DST_BITWIDTH) - 1)) - 1) \
 ? ((1LL << ((DST_BITWIDTH) - 1)) - 1) \
 : (X))) #define CLAMP_UNSIGN(DST_BITWIDTH, X) \
 ((X) < 0 ? 0 : \
 ((X) > ((1LL << (DST_BITWIDTH)) - 1) ? \
 ((1LL << (DST_BITWIDTH)) - 1) : (X))) // DST_BITWIDTH < 0 for signed types, > 0 for unsigned types #define CLAMP(DST_BITWIDTH, X) (DST_BITWIDTH) < 0 \
 ? CLAMP_SIGN(-(DST_BITWIDTH), (X)) \
 : CLAMP_UNSIGN((DST_BITWIDTH), (X)) Besides the fact that it doesn't work for long long and unsigned long long without some changes, this also implies the use of 2's complement. You can call CLAMP with a negative bit width to indicate a signed type, or call CLAMP_SIGN/CLAMP_UNSIGN directly Another disadvantage is that it just clamps the values and doesn't cast to the expected type (but you can use typeof or __auto_type as above to return the correct type) Demo on Godbolt CLAMP_SIGN(8, 300) CLAMP_SIGN(8, -300) CLAMP_UNSIGN(8, 1234) CLAMP_UNSIGN(8, -1234) CLAMP(-8, 1234) CLAMP(-8, -1234) CLAMP(8, 12) CLAMP(8, -34) cool I didn't know that C11 gained such functionality. IMO the best way to do it is to first map the constants describing the limits into constants with the desired case, then define a macro which will use these new constants. So the basic idea looks like this: const int8_t min_of_int8 = INT8_MIN; const int8_t max_of_int8 = INT8_MAX; const uint8_t min_of_uint8 = 0; const uint8_t max_of_uint8 = UINT8_MAX; .... #define DEFINE_CONVERTER(SRC, DST) \
 inline static DST ## _t convert_ ## SRC ## _to_ ## DST (SRC ## _t src) \
 { \
 return src < min_of_ ## DST ? min_of_ ## DST : (src > max_of_ ## DST ?
max_of_ ## DST : (DST ## _t)src); \
 } DEFINE_CONVERTER(int32, uint8) DEFINE_CONVERTER(int32, int8) .... Here is a test written using C++. Test it carefully since some implicit conversion may be lurking and breaking this macro for a specific pair of types. If you wish to have a different pattern for the function names (like I8, U32) then do the same trick as for the constants and define a respective typedef whose name will contain the desired short versions of the types. Note that a similar approach is used in OpenSSL to provide the same functions for different types. Thanks Marek. Learnt about a new unit-testing framework, catch2
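For completeness, a self-contained sketch of the clamping (saturating) behaviour the thread is discussing, in the style of the question's own per-type functions. `convert_I32ToU8` is the question's function; `convert_I32ToI8` is an illustrative signed counterpart added here, not taken from the thread:

```c
#include <limits.h>
#include <stdint.h>

/* Saturating conversion: out-of-range values stick to the destination
   type's limits instead of wrapping around as a plain cast would
   (a plain (uint8_t)260 yields 4, i.e. 260 mod 256). */
static uint8_t convert_I32ToU8(int32_t i32)
{
    if (i32 < 0) return 0;
    if (i32 > UINT8_MAX) return UINT8_MAX;
    return (uint8_t)i32;
}

/* Hypothetical signed variant: clamp to [INT8_MIN, INT8_MAX]. */
static int8_t convert_I32ToI8(int32_t i32)
{
    if (i32 < INT8_MIN) return INT8_MIN;
    if (i32 > INT8_MAX) return INT8_MAX;
    return (int8_t)i32;
}
```

Either the `_Generic`-based `CLAMP` or the `DEFINE_CONVERTER` macro above generates exactly this shape of function; the sketch just shows the expected results in the plainest form.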
How do I organize the source code of a CUDA C project using subfolders? I develop some parallel computing code using CUDA C. The system is running a Ubuntu based Linux, the IDE of choice is Eclipse Indigo. I set up the project using the template which is delivered with CUDA. I fail to set up subfolders containing portions of code (say "gui", "io", "net") in such a way that the compiler (nvcc and/or g++) and/or linker recognizes those. The goal would be to just type "make" and everything is put together. Perhaps somebody knows a project template or sample makefile which works with directory structures? I don't feel the status quo with its many files (like gui_myclass.c, net_myotherclass.c, ...) in the project's root directory is the way to go. This question has nothing to do with CUDA. There is a good answer here to the question "how do I generate a makefile with source in subdirectories using just one makefile". This applies to any command line compiler, nvcc, g++, whatever.
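There is no single canonical layout, but as a rough sketch of the "one makefile, sources in subdirectories" idea the comment points at, a GNU make fragment could look like the following. The directory names, variables, and flags here are assumptions for illustration, not taken from the CUDA project template:

```make
# Hypothetical layout: .cu/.cpp files under src/gui, src/io, src/net
NVCC     := nvcc
SRC_DIRS := src/gui src/io src/net
CU_SRCS  := $(wildcard $(addsuffix /*.cu,$(SRC_DIRS)))
CPP_SRCS := $(wildcard $(addsuffix /*.cpp,$(SRC_DIRS)))
OBJS     := $(CU_SRCS:.cu=.o) $(CPP_SRCS:.cpp=.o)

app: $(OBJS)
	$(NVCC) -o $@ $^

# pattern rules: objects are built next to their sources
%.o: %.cu
	$(NVCC) -c $< -o $@

%.o: %.cpp
	$(NVCC) -c $< -o $@

clean:
	rm -f app $(OBJS)
```

Since nvcc forwards host code to the host compiler (g++) internally, letting it drive both compilations keeps a mixed CUDA/C++ makefile simple.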
Why do authors claim that Euler gave no proof to his "$\sin(\pi x)= \pi x\prod\limits_{k=1}^{\infty}\left(1-\frac{x^2}{k^2} \right )$" when... When he proved the relation between $\pi \cot(\pi x)$ and the harmonic series in "Introductio in analysin infinitorum", which states that $$\pi \cot(\pi x)=\sum_{k=-\infty}^{\infty} \frac{1}{x+k}=\frac{1}{x}+\sum_{k=1}^{\infty}\left(\frac{1}{x+k}+\frac{1}{x-k} \right) \text{ for } x\in \mathbb{R} \setminus \mathbb{Z}.$$ It doesn't take a genius to transform this into an infinite product just by knowing the fact that $$\int \pi \cot(\pi x) \, dx = \log\left(\sin(\pi x) \right)+C.$$ So my question is, why does every historian/author claim that Euler's first proof of $\displaystyle \zeta (2)=\frac{\pi^2}{6}$ was not rigorous at all because Euler didn't prove his famous infinite product in his lifetime, when the proof of the relation between the cotangent and the harmonic series directly implies his infinite product? EDIT: I'm sorry for doing this but shameless self bump. I got no answers and once again, I'm sorry. I am afraid that in Euler's days integrating those infinite series would be an even more striking example of an argument that is not rigorous. Even Cauchy got it wrong the first time round. :) J.H.: I agree, but the difference between no rigorous justification of the infinite product and not considering uniform convergence when integrating term by term is a big one. Authors don't even say he proved the product in any way. Do we have a definition for a "rigorous proof"? Mhenni Benghorbal: A proof where every step is completely justified. That's just shifting the semantic burden from the words "rigorous proof" to the words "completely justified". Do you have a definition for "completely justified"? "Authors don't even say he proved the product in any way." Which authors? @GregMartin: I really like your comment.
@MhenniBenghorbal A sequence of statements where each is either an axiom, a tautology, or of the form $Q$ where both $P \implies Q$ and $P$ were previously written. Presumably the problem with the proof suggested in the question is that it omits a number of justifications of manipulations of infinite series, whether or not those justifications could in principle be given. I am answering the question in the content; the title is asking something about the infinite product. why does every historian/author claim that Euler's first proof of $\displaystyle \zeta (2)=\frac{\pi^2}{6}$ was not rigorous Euler's first derivation was done by factoring the infinite series for sin. Even Euler himself was not satisfied with that method, and although that method was correct in finite cases it needed to be justified for the infinite case; due to that fact, Euler later gave alternative (more) rigorous proofs ("rigorous" being what was deemed acceptable by other mathematicians of the time). After the first proof was deemed not rigorous by the master himself, no other fact/rigorous proof can change the amount of rigor of that proof. The infinite product was made rigorous by Weierstrass and his treatment of entire function theory. PS: Whatever the first proof lacked in rigor it made up in ingenuity.
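For what it's worth, the integration step the question alludes to can be written out in modern notation, still sweeping the termwise-integration and convergence issues under the rug, which is exactly the rigor being debated:

```latex
% Integrating one symmetric pair of terms (up to an additive constant):
\int \left( \frac{1}{x+n} + \frac{1}{x-n} \right) dx
  = \log\left| 1 - \frac{x^{2}}{n^{2}} \right| + \text{const},
% so termwise integration of the cotangent expansion gives
\log\bigl(\sin(\pi x)\bigr)
  = \log x + \sum_{n=1}^{\infty} \log\left( 1 - \frac{x^{2}}{n^{2}} \right) + C,
% and letting x \to 0, where \sin(\pi x)/x \to \pi, fixes e^{C} = \pi; hence
\sin(\pi x) = \pi x \prod_{n=1}^{\infty} \left( 1 - \frac{x^{2}}{n^{2}} \right).
```

This is precisely the "doesn't take a genius" derivation from the question; the comments above point out that its gap is the justification of integrating the series term by term.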
Can I use HTML to define the GUI for my desktop application? Specifically, can I use it for a JTabbedPane? I'm pretty competent with HTML, so I want to define how my tabbed pane looks with it. Can it be done, or is HTML only for labels? If you have any resources on this, please give them. I'm not quite sure what you mean by using HTML in tabbed panes. DYM in the tab of the tabbed pane? If so, yes. But why not try it before asking? Specifically, can I use it for a JTabbedPane? - I have no idea what that means. Are you talking about the text in the tab of the tabbed pane? In any case, try it and see what happens. If it doesn't work as expected then post your [mcve] demonstrating the problem. If it does work then answer your own question to benefit others. "I'm pretty competent with HTML" As an aside, if you are competent with current day HTML, you're way ahead of Swing's support to render the same. The Swing HTML rendering engine supports 1) a subset of HTML 3.2, 2) very limited CSS, 3) no scripting (JavaScript). I was meaning to use HTML + CSS to define the presentation of it, saving me the trouble of putting effort into it, but apparently the answer is no. Maybe you need to use JavaFX. See: Fancy Forms With JavaFX CSS or maybe Using FXML to Create a User Interface.
Converting strange data.frame to matrix in R I have the following data.frame and want to convert it into a matrix object after deleting each delimiter. > data ID COL1 COL2 COL3 COL4 COL5 1 1 1,2,3,4 5,6,7,8 9,10,11,12 13,14,15,16 17,18,19,20 2 2 11,12,13,14 15,16,17,18 19,20,21,22 23,24,25,26 27,28,29,30 3 3 21,22,23,24 25,26,27,28 29,30,31,32 33,34,35,36 37,38,39,40 4 4 31,32,33,34 35,36,37,38 39,40,41,42 43,44,45,46 47,48,49,50 5 5 41,42,43,44 45,46,47,48 49,50,51,52 53,54,55,56 57,58,59,60 6 6 51,52,53,54 55,56,57,58 59,60,61,62 63,64,65,66 67,68,69,70 7 7 61,62,63,64 65,66,67,68 69,70,71,72 73,74,75,76 77,78,79,80 8 8 71,72,73,74 75,76,77,78 79,80,81,82 83,84,85,86 87,88,89,90 9 9 81,82,83,84 85,86,87,88 89,90,91,92 93,94,95,96 97,98,99,100 ===> > data.new [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16] [,17] [,18] [,19] [,20] [,21] 1 1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 2 2 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 3 3 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 4 4 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 5 5 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 6 6 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 7 7 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 8 8 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 9 9 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 To do this, which apply() functions should I use? Thanks in advance Sean You don't really need apply at all. You can re-read the data. Try any of these three possibilities.
In base R, (1) you could paste the columns together by row then read that text with read.csv dc <- do.call(paste, c(data, list(sep = ","))) unname(as.matrix(read.csv(text = dc, header = FALSE))) Or, (2) using scan directly matrix(scan(text = dc, what = integer(), sep = ","), length(dc), byrow = TRUE) Or, (3) you could use cSplit from splitstackshape library(splitstackshape) unname(as.matrix(cSplit(data, 2:6))) I've already given this my +1, but also wanted to share that using cSplit_f should be a little bit faster than cSplit when it is known that the data would be rectangular. I'm not sure if/how that would change when I transition these functions to use "stringi". Also, in case you wanted to add an fread alternative, x <- tempfile(); writeLines(do.call(paste, c(mydf, sep = ",")), x); fread(x). A solution based on apply: t(apply(data, 1, function(x) as.numeric(unlist(strsplit(x, ","))))) How does it work? The function apply is used to apply a function to each row of the data frame. The character vectors are split at the commas (strsplit). This returns a list. This list is converted to a vector with unlist. Next, as.numeric is used to transform the character vector to a numeric vector. The function apply returns a matrix in which a column corresponds to a row in the original data frame. Finally, the function t is used to transpose the matrix.
Dynamically determine the type of a function parameter in C? In JavaScript I can have a function with an untyped argument, and then customize the behavior depending on the type: function customize(a) { if (a instanceof Uint8Array) { doArray() } else if (typeof a == 'function') { doFunction() } else { doOther() } } How can I do this same thing in C, either with or without macros (preferably without macros)? void customize(void *a) { // ??? } Here it says: Your void pointer loses all its type information, so by that alone, you cannot check if it can be cast safely. It's up to the programmer to know if a void* can be cast safely to a type. Can I somehow do a comparison against the key functions or structs in my application to see if it is a match? void customize(void *a) { if (a == fn_a) return do_function(); if (a == fn_b) return do_function(); // somehow iterate dynamically through the functions. } void fn_a() { } How about a second parameter, which tells the function what it should do with a? I want the API to be exactly like this. No, you can't do it directly. There is absolutely no way to find out what a points to. However, if it is guaranteed that a points to some struct where the first field contains some type information, then it could be done. Nothing prevents you from checking what the pointer points to, but note that casting a function pointer to void* is UB. I.e. there is no guarantee that a function pointer will fit into a data pointer and vice versa. You can cast to a different function pointer before returning to the original one. So you would at least need a union and a flag to tell you which of the two pointers to use. But then you get to what @mch proposed above (with an added caveat that you need the union). I always wanted to give an answer like this (which I always hated): you are doing it wrong. C is a statically typed language. Losing and recovering type information is not part of the design of the language.
BTW your a == fn_a comparisons are bogus, because a will never point to a function but only to some data. Don't try to mimic JavaScript concepts with C; the two languages are totally different. There's no dynamic type system; you'll have to implement it manually with enums. You can however get static type checks at compile-time with _Generic, which is good enough for sane program designs. As it was stated in the thread you've found, it is not possible through void *. You can do something similar with unions, but it's not very flexible and I'm not sure it is a good solution (might be better to have separate functions for every type): typedef enum { TYPE_1 = 0, TYPE_2, TYPE_3 } type_e; typedef union { int t1; char t2[5]; double t3; } types_u; typedef struct { type_e type; types_u types; } my_types_t; void func(my_types_t *some_variable) { switch (some_variable->type) { case TYPE_1: // do something break; case TYPE_2: // do something else break; } } There is an old saying that you can write FORTRAN in any language. I'm not sure if it is true or not, but I'm sure that you can't write Javascript in any language and especially not in C. C is statically typed and all type information is lost at runtime. You'll have to rethink your design, perhaps avoiding completely the need of carrying the information, perhaps passing an additional parameter with that information, perhaps bundling it with the void* in a struct, perhaps assuming that the void* points to something you have more information about and you can reextract the type information. There are several techniques possible, all with various trade-offs and caveats, but expanding on them seems like writing a book and I don't have time for that. Some of the techniques were made language features in C++; that would avoid you having to reinvent them and reimplement them, but you'd need to learn another language.
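A minimal runnable version of the tagged-union idea from the answer above. The type and field names are shortened for illustration, and `customize` returns the name of the branch it would take (mirroring the `doArray`/`doFunction`/`doOther` names from the question) so the dispatch can be observed:

```c
#include <stdio.h>
#include <string.h>

/* The tag tells the function which member of the union is live. */
typedef enum { ARG_INT, ARG_STRING, ARG_FUNC } arg_type_e;

typedef struct {
    arg_type_e type;
    union {
        int         i;
        const char *s;
        void      (*fn)(void);
    } as;
} arg_t;

/* Dispatch on the tag, like the switch in the answer above. */
static const char *customize(const arg_t *a)
{
    switch (a->type) {
    case ARG_INT:    return "doOther";     /* plain data payload */
    case ARG_STRING: return "doArray";     /* string payload */
    case ARG_FUNC:   return "doFunction";  /* callable payload */
    default:         return "unknown";
    }
}

static void greet(void) { puts("hello"); }

/* Small helpers so callers can't forget to set the tag. */
static arg_t make_int(int v)          { arg_t a = { ARG_INT,    { .i  = v } }; return a; }
static arg_t make_str(const char *v)  { arg_t a = { ARG_STRING, { .s  = v } }; return a; }
static arg_t make_fn(void (*v)(void)) { arg_t a = { ARG_FUNC,   { .fn = v } }; return a; }
```

The price, as the answers note, is that every call site must construct the tagged value honestly; nothing stops a caller from setting the tag to `ARG_FUNC` while storing an `int`, which is exactly the type safety JavaScript gives you for free and C does not.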
What happens if the gravitational force mg is greater than the normal force? We know that the earth exerts a normal force equal to our weight on us. Now if somehow we harness the normal force and use it to do some other work (for example to produce electricity), then what would happen? I mean, what effect would it have on us? Would we get deformed due to the extra force, as we can't get accelerated downwards since there is the Earth's surface? If mg and the normal force don't exactly cancel, you accelerate in some direction. What happens if you step into quicksand and the normal force is less than mg? By normal force we usually mean a contact force perpendicular ("normal") to a surface. By that conventional definition, the Earth does not exert a gravitational normal force. (If the object is lying on the surface of the Earth, then yes, the Earth exerts a normal force upwards.) I think you meant that the surface on which we stand exerts an equal and opposite force to balance the gravitational pull, which we call a normal force. If you have this understanding then you are correct about this statement, as it is important to distinguish that it is not the Earth that exerts a normal force, but the surface on which we stand. For example, if you stand on a table that is strong enough to hold you, it prevents you from falling further down, as it is strong enough to provide the normal force that will balance your weight. Instead, if you try standing on water, you know what will happen! That's because the surface of the water is not hard enough to provide you with the necessary normal force. So, here you need to understand that the normal force always exists as a response to weight. If you place a 2 g weight on a table, the normal force will be less as compared to placing a 2 kg weight.
As you increase the weight of an object placed (consider a bucket kept on the table which is getting filled with water), the normal force will also increase until a point where the surface is not strong enough to hold the object. Just think about this: can you harness any energy from a force whose mere existence is a consequence of the existence of another force? You can't. I hope I have understood your question; if not, please elaborate. We would break the ground. The example is clearer if you try to put a heavy object (lead ball) on a sheet of paper, and on a table. They both exert a normal force (which, by the way, isn't even a real force: like friction, it is an effect of the electromagnetic force between the molecules of the different objects) on the ball, but the one exerted by the table is enough to "stop" the ball, while the one of the sheet of paper is not. Therefore, the ball continues its path towards the center of the earth, until it's stopped by something else, the ground for example. The examples you give aren't the case of the normal force becoming larger than $mg$. What you describe is the material exerting the normal force not being strong enough to exert the normal force needed to support the weight. The normal force is pretty much just an interaction between two things that are "touching". So, I guess you could use this to harness energy? For example, a water wheel spinning relies on the normal force between the water and the water wheel in order to spin. In this case the normal force is large enough to spin the water wheel. But it isn't like we have some secret untapped energy just waiting to be harnessed that is hidden within normal forces. They are forces that just arise from objects being close enough together so that there is a repulsion.
Product load by entity_id or row_id if both are different? In my Magento 2.3.0, my product catalog has 100k products. Of these, 10k products have a different entity_id and row_id for the same sku. I have the below questions. If I load a product collection using id, should it be entity_id or row_id? What could be the reason for a sku having different entity_id and row_id? Is it mandatory to have the same values for both entity_id and row_id? If yes, how can I make the values the same? Thanks & Regards Palani row_id is the column that is used to link your product tables together. For instance, catalog_product_entity and catalog_product_entity_varchar will be joined with row_id. It is an enterprise database enhancement meant to optimise your database and make it scalable to a big catalog. --> In other words, given your catalog is big, you do need this row_id, and no, it is not always the same as entity_id. In practice, your custom product query will often be complex because of this row_id. Just remember: row_id is the joining one, entity_id is your product id as you see it in the backend. And if they happen to be the same, don't count on your luck. https://magento.stackexchange.com/q/339814/86901 I am just trying to answer this question: entity_id, because $_product->getId(); is the same as $_product->getData()['entity_id']; row_id is used to facilitate the Time Dimension for entities that support it. By having this additional identifier, it allows an entity (product, category, sales rule, etc..) to be stored more than once in its respective MySQL table while expressing different values for the same entity. These altered versions of the same entity are created using Magento's "Content Staging" feature. For more: LINK Based on the second answer: BIG NO. Happy Coding
How do I tell a UITableView which array to load depending on what the date is

I am making an iPhone app which has a table view. The table view needs to display different information on different days of the year. I can't seem to figure out how to tell the table view which data to display. For example, a sports fixture will load different information depending on which week of the year it is. Please help.

Modify the data source array accordingly, and keep your year/date array separately.

You can have an array of NSDictionary items and have a key 'date' that will store the event date. Somewhere in init you can filter your data source array using an NSPredicate. Something like this:

    - (id)initWithNibName: (NSString*)nibNameOrNil bundle: (NSBundle*)nibBundleOrNil
    {
        self = [super initWithNibName: nibNameOrNil bundle: nibBundleOrNil];
        if (self) {
            // Custom initialization
            NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
            formatter.dateFormat = @"yyyy-MM-dd";

            _eventsArray = [[NSArray alloc] initWithObjects:
                [NSDictionary dictionaryWithObjectsAndKeys:
                    @"Christmas event", @"name",
                    [formatter dateFromString: @"2013-12-25"], @"date", nil],
                [NSDictionary dictionaryWithObjectsAndKeys:
                    @"New Year event", @"name",
                    [formatter dateFromString: @"2014-01-01"], @"date", nil],
                nil];

            __block typeof(formatter) blockFormatter = formatter;
            NSPredicate *predicate = [NSPredicate predicateWithBlock:
                ^BOOL(NSDictionary *evaluatedObject, NSDictionary *bindings) {
                    NSDate *date = [evaluatedObject objectForKey: @"date"];
                    if ([date isEqualToDate: [blockFormatter dateFromString: @"2013-12-25"]])
                        return YES;
                    return NO;
                }];
            _dataSource = [_eventsArray filteredArrayUsingPredicate: predicate];
            [formatter release];
        }
        return self;
    }

You can modify the predicate to check if the date is in the current week, or whatever you need.

P.S. Don't forget to release _eventsArray in dealloc.
Shopware 6: Module parse failed: Unexpected token after running `./psh.phar storefront:dev`

I want to add some custom SCSS code to my Shopware 6 plugin. Therefore I created a main.js file as entry point and imported my custom SCSS file. After running ./psh.phar storefront:dev I got the following error from webpack:

    ERROR in /app/custom/plugins/MyTheme/src/Resources/storefront/styles/custom.scss 2:0
    Module parse failed: Unexpected token (2:0)
    You may need an appropriate loader to handle this file type.
    > .test {
    |
    | }
    npm ERR! code ELIFECYCLE
    npm ERR! errno 2
    npm ERR<EMAIL_ADDRESS>development: `NODE_ENV=development webpack --config webpack.config.js`
    npm ERR! Exit status 2

With the EarlyAccess release of Shopware 6 we added the theme manager and with that an easy way to add custom styles. Previously you had to bring your own webpack config; that explains the error. Now Shopware automatically imports and compiles all *.scss files located in your <plugin root>/src/Resources/storefront/style. As your custom.scss file is located in this folder, it should work out of the box with the current EA1 version of Shopware. Just delete the import of the .scss file from your main.js, update your Shopware instance, and you should be ready to go. Detailed information on how to add custom JS or custom styles can be found in the docs.
Why Vite proxying does not work with POST?

I'm creating an API to handle an image uploader for practice purposes. I'm using Express as the backend and React+Vite for the frontend. I'm running my server on port 3000, so I configure my Vite proxy accordingly:

    export default defineConfig({
      server: {
        proxy: {
          '/upload': 'http://localhost:3000',
        }
      },
      plugins: [react()],
    })

As you can see, my endpoint is /upload, and when I make a GET request it works perfectly, but for a POST request I get this error:

    POST http://localhost:5173/upload net::ERR_CONNECTION_ABORTED

I checked the documentation, and I cannot see any way to handle this issue.

The port in your POST info is :5173 but you show the proxy to be :3000?

I'm running the frontend on port 5173, which is why you see that port in the error, but my API is running on port 3000.

Are there any errors on your server console? If not errors, are there any logs to your server console? Is SOMETHING actually trying to connect to :3000?
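None of the comments above pin down the cause. One hedged thing to try is switching the proxy entry from the string shorthand to the object form, which exposes standard options such as changeOrigin; the snippet below is only a sketch of that configuration and not a confirmed fix for the aborted POST:

```javascript
// Object-form proxy entry; '/upload' and the target port come from the question.
// `changeOrigin` rewrites the Host header to the target; `secure` would only be
// needed for self-signed HTTPS targets (left commented out here).
const proxyConfig = {
  '/upload': {
    target: 'http://localhost:3000',
    changeOrigin: true,
    // secure: false,
  },
};

console.log(proxyConfig['/upload'].target); // http://localhost:3000
```

In vite.config.js this object would replace the value of server.proxy inside defineConfig. If the abort persists with it, the next place to look is the Express side, e.g. whether a POST handler and body parsing are actually set up for /upload.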
How to make an Image Hyperlinked using Aspose.Words DOM approach?

I am trying to create a Word document using Aspose.Words for .NET using the DOM approach. How would I make an image hyperlinked?

Thanks, Vijay

You can use the Shape.HRef property to set the URL.

    Aspose.Words.Drawing.Shape shape = createOrFindShape();
    if (shape.HasImage)
    {
        // Set hyperlink using Shape.HRef
        shape.HRef = "http://www.google.com";
    }

I work with Aspose as Developer Evangelist.

Thank you. I was initially wondering how to do this using the FieldHyperlink class.
Non-blocking / Asynchronous Execution in Perl

Is there a way to implement non-blocking / asynchronous execution (without fork()'ing) in Perl? I used to be a Python developer for many years... Python has the really great 'Twisted' framework that allows you to do so (using DEFERREDs). When I ran a search to see if there is anything in Perl to do the same, I came across the POE framework, which seemed "close" enough to what I was searching for. But... after spending some time reading the documentation and "playing" with the code, I came up against "the wall", which is the following limitation (from the POE::Session documentation):

Callbacks are not preemptive. As long as one is running, no others will be dispatched. This is known as cooperative multitasking. Each session must cooperate by returning to the central dispatching kernel.

This limitation essentially defeats the purpose of asynchronous/parallel/non-blocking execution, by restricting execution to only one callback (block of code) at any given moment. No other callback can start running while another is already running! So... is there any way in Perl to implement multi-tasking (parallel, non-blocking, asynchronous execution of code) without fork()'ing, similar to DEFERREDs in Python?

Coro is a mix between POE and threads. From reading its CPAN documentation, I think that IO::Async does real asynchronous execution. threads can be used too - at least the Padre IDE successfully uses them.

Coro is also co-operative multitasking.

@ikegami being co-operative means that it doesn't work for Val, right?

@dlamblin, not necessarily, but Val seems to think that's the case.

IO::Async is multiplexed IO, the same way as POE and AnyEvent and anything else on Perl. They're all just modern variations on the same basic idea, first started with a select loop.

I'm not very familiar with Twisted or POE, but basic parallel execution is pretty simple with threads. Interpreters are generally not compiled with threading support, so you would need to check for that. The forks package is a drop-in replacement for threading (it implements the full API) but uses processes seamlessly. Then you can do stuff like this:

    my $thread = async {
        print "you can pass a block of code as an arg unlike Python :p";
        return some_func();
    };
    my $result = $thread->join();

I've definitely implemented callbacks from an event loop in an async process using forks, and I don't see why it wouldn't work with threads.

Twisted also uses cooperative multi-tasking just like POE & Coro. However, it looks like Twisted Deferred does (or can) make use of threads. NB. See this answer from the SO question Twisted: Making code non-blocking. So you would need to go the same route with POE (though using fork is probably preferable). One POE solution would be to use:

POE::Wheel::Run - portably run blocking code and programs in subprocesses.

For alternatives to POE take a look at AnyEvent and Reflex.

I believe you use select for that kind of thing. More similarly to forking, there's threading.

POE is fine if you want asynchronous processing while using only a single CPU (core). For example, if the app is I/O limited, a single process will be enough most of the time.

"No other callback can start running while another is already running!" As far as I can tell, this is the same in all languages (per CPU thread, of course; modern web servers usually spawn at least one process or thread per CPU core, so it will look (to users) like stuff is working in parallel, but the long-running callback didn't get interrupted, some other core just did that work). You can't interrupt an interrupt, unless the interrupted interrupt has been programmed specifically to accommodate it. Imagine code that takes 1 min to run, and a PC with 16 cores. Now imagine a million people try to load that page: you can deliver working results to 16 people and "time out" all the rest, or you can crash your web server and give no results to anyone. Folks choose not to crash their web server, which is why they never permit callbacks to interrupt other callbacks (not that they could even if they tried - the caller never gets control back to make a new call before the prior one has ended anyhow...)
Sort map based on values in decreasing order

    Map<Character, Integer> map = new HashMap<>();
    for (char c : ch) {
        if (map.get(c) == null) {
            map.put(c, 1);
        } else {
            map.put(c, map.get(c) + 1);
        }
    }
    for (Map.Entry entry : map.entrySet()) {
        System.out.println("Key is " + entry.getKey() + " Value is " + entry.getValue());
    }

I have a String str = "PriyankaTaneja". I want to count the frequency of characters in the String using a map, and I want to sort the map in decreasing order of values, meaning the character repeating the maximum number of times should be the first to print.

Does this answer your question? Sort ArrayList of custom Objects by property

To sort a map by values in descending order, you can use the Java Stream API and the Comparator interface. For example:

    String str = "PriyankaTaneja";
    char[] ch = str.toCharArray();

    Map<Character, Integer> map = new HashMap<>();
    for (char c : ch) {
        if (map.get(c) == null) {
            map.put(c, 1);
        } else {
            map.put(c, map.get(c) + 1);
        }
    }

    Map<Character, Integer> sortedMap = map.entrySet()
        .stream()
        .sorted(Map.Entry.comparingByValue(Comparator.reverseOrder()))
        .collect(Collectors.toMap(
            Map.Entry::getKey,
            Map.Entry::getValue,
            (e1, e2) -> e1,
            LinkedHashMap::new
        ));

    for (Map.Entry entry : sortedMap.entrySet()) {
        System.out.println("Key is " + entry.getKey() + " Value is " + entry.getValue());
    }

You're welcome. If you have any other questions, please let me know.
How to redirect tar unzipped files to a specific directory without creating a subfolder?

I'm using the following command to archive a folder:

    tar zcf portal.tar.gz portal

Then I'm untarring it using the following command:

    tar xvzf portal.tar.gz -C /var/www/html/portal

But it seems like the "-C" switch creates another subfolder, and then the path looks like /var/www/html/portal/portal/files... while I need it to be /var/www/html/portal/files... What am I doing wrong?

My best guess is that the following should work:

    tar xvzf portal.tar.gz -C /var/www/html

by simply going up one level in the directory structure.

The problem is that the filename of the portal directory is unknown, as it's portal$GIT_TAG.tar.gz, and since -C expects the destination folder to exist, I need to make sure it exists before unarchiving into it.

Yes, make sure '/var/www/html' exists before extracting. Or am I not understanding you correctly? Another way would probably be to omit the -C option and cd there before extraction (as that's what the -C option does anyway), and cd makes sure the dir exists.

Yes, that's what I did. I changed it so I cd to the dir first and then extract.
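Another way to avoid the nested portal/portal path, when the files must land exactly in /var/www/html/portal, is GNU tar's --strip-components option. The sketch below is only a hedged illustration: it builds a throwaway archive under a mktemp -d directory so the idea can be tried without the real portal$GIT_TAG.tar.gz, and all paths in it are stand-ins.

```shell
# Build a demo archive with a top-level "portal/" directory, then extract it
# so the files land directly in $DEST with no extra "portal" level.
set -e
WORK=$(mktemp -d)
mkdir -p "$WORK/portal"
echo "hello" > "$WORK/portal/index.html"
tar -C "$WORK" -zcf "$WORK/portal.tar.gz" portal

DEST="$WORK/html/portal"
mkdir -p "$DEST"                                    # -C needs an existing dir
tar -xzf "$WORK/portal.tar.gz" --strip-components=1 -C "$DEST"

ls "$DEST"    # prints: index.html
```

With the real archive this would be a mkdir -p on the destination followed by tar xzf "portal$GIT_TAG.tar.gz" --strip-components=1 -C /var/www/html/portal. Note that --strip-components is a GNU tar option and may behave differently on other tar implementations.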
Array inside list

I'm really confused trying to solve this problem. I'm trying to use sklearn's MinMaxScaler, but I'm getting an error because it seems that I'm setting an array element with a sequence. The code is:

    raw_values = series.values
    # transform data to be stationary
    diff_series = difference(raw_values, 1)
    diff_values = diff_series.values
    diff_values = diff_values.reshape(len(diff_values), 1)
    # rescale values to 0,1
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_values = scaler.fit_transform(diff_values)
    print(scaled_values)
    scaled_values = scaled_values.reshape(len(scaled_values), 1)

"series" is a differenced time series that I'm trying to rescale between [0,1] with MinMaxScaler; the time series was previously differenced in pandas. I get the following error when running the code:

    ValueError: setting an array element with a sequence.

What I don't understand is the fact that if there is just one feature or variable in one column, the code runs all right, but in this case I have 2 features, each one in a different column. Traceback:

    File "C:/....py", line 88, in prepare_data
      scaled_values = scaler.fit_transform(diff_values); print(scaled_values)
    File "C:\Users\name\AppData\Roaming\Python\Python35\site-packages\sklearn\base.py", line 494, in fit_transform
      return self.fit(X, **fit_params).transform(X)
    File "C:\Users\name\AppData\Roaming\Python\Python35\site-packages\sklearn\preprocessing\data.py", line 292, in fit
      return self.partial_fit(X, y)
    File "C:\Users\name\AppData\Roaming\Python\Python35\site-packages\sklearn\preprocessing\data.py", line 318, in partial_fit
      estimator=self, dtype=FLOAT_DTYPES)
    File "C:\Users\name\AppData\Roaming\Python\Python35\site-packages\sklearn\utils\validation.py", line 382, in check_array
      array = np.array(array, dtype=dtype, order=order, copy=copy)
    ValueError: setting an array element with a sequence.

And this is what I obtain if I print diff_values:

    [[array([  -1.3,  119. ])]
     [array([  0.5,   -9. ])]
     [array([  0.8,   17. ])]
     ...,
     [array([   2.8,  742. ])]
     [array([  1.50000000e+00,  -1.65900000e+03])]
     [array([  -2.,  856.])]]

The full code is not mine; it was obtained from here.

EDIT: Here is my dataset. Just switch the name 'shampoo-sales.csv' to 'datos2.csv' and change this sentence:

    return datetime.strptime('190'+x, '%Y-%m')

to this one:

    return datetime.strptime(''+x, '%Y-%m-%d')

Could you include the full traceback and/or give an example assignment for series (so we could reproduce the error)?

Of course, and I'm also going to include the full code in the post.

Can you post your data also (or some part of it), so that we can reproduce the error?

I forgot it. This is the link for the dataset: https://mega.nz/#!XI1ixboS!u-K7vu18370L_6SG-c8f25iPRDnoLJZ57K9m_RDVqKc

In the tutorial you linked to, the object series is actually a Pandas Series. It's a vector of information, with a named index. Your dataset, however, contains two fields of information, in addition to the time series index, which makes it a DataFrame. This is the reason why the tutorial code breaks with your data. Here's a sample from your data:

    import pandas as pd
    from datetime import datetime

    def parser(x):
        return datetime.strptime(''+x, '%Y-%m-%d')

    df = pd.read_csv("datos2.csv", header=None, parse_dates=[0],
                     index_col=0, squeeze=True, date_parser=parser)
    df.head()

                   1     2
    0
    2012-01-01  10.9  3736
    2012-01-02  10.3  3570
    2012-01-03   9.0  3689
    2012-01-04   9.5  3680
    2012-01-05  10.3  3697

And the equivalent section from the tutorial: "Running the example loads the dataset as a Pandas Series and prints the first 5 rows."

    Month
    1901-01-01    266.0
    1901-02-01    145.9
    1901-03-01    183.1
    1901-04-01    119.3
    1901-05-01    180.3
    Name: Sales, dtype: float64

To verify this, select one of your fields and store it as series, and then try running the MinMaxScaler. You'll see that it runs without error:

    series = df[1]
    # ... compute difference and do scaling ...
    print(scaled_values)

    [[ 0.58653846]
     [ 0.55288462]
     [ 0.63942308]
     ...,
     [ 0.75      ]
     [ 0.6875    ]
     [ 0.51923077]]

Note: One other minor difference in your dataset compared to the tutorial data is that there's no header in your data. Set header=None to avoid assigning your first row of data as column headers.

UPDATE: To pass your entire dataset to MinMaxScaler, just run difference() on both columns and pass in the transformed vectors for scaling. MinMaxScaler accepts an n-dimensional DataFrame object:

    ncol = 2
    diff_df = pd.concat([difference(df[i], 1) for i in range(1, ncol+1)], axis=1)
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_values = scaler.fit_transform(diff_df)

Thank you very much. It works as you said, but I'm trying to use both variables to make a prediction. By doing this it isn't possible to consider all the variables; series = df[1] is only considering the first column. Is it possible to use all the variables?

I've updated my answer to demonstrate how to scale multiple vectors at once.

This is just what I needed. Thank you.
increase query speed

Okay, I have this table:

    ID  Name   Description  Picture
    1   Alex   Alex desc..  2
    2   Maria  NULL         3
    3   John   John desc..  NULL

The picture table has an ID and a varbinary image. I need to select: if the description exists, then the description, else the picture. I do this:

    select Id, Name,
           case when Description is null then pic.Image else Description end
    from person per
    join picture pic on per.Picture = pic.Id

So it looks like an unnecessary join if the description is not null. Anyway, any suggestions on improving this simple query? Also, what are good, easy-to-use tools for performance comparison between two versions of a query?

Why are you worried about this JOIN? Have you experienced performance problems? This might be premature optimization.

It's just a join on 2 integers. If your indexing is in place, it should be at the bottom of the worry list.

I'm just doing things this way, and want the community to suggest if there's a better approach to doing things. If not, then fine.

As far as I know there is no way to conditionally join within a single query. I would probably rewrite what you have as:

    SELECT per.ID, per.Name, COALESCE(per.Description, pic.Image) as desc
    FROM Person as per
    LEFT JOIN Picture as pic ON per.picture = pic.ID

I'm not sure how the performance compares, but it seems a little cleaner. As far as comparing queries goes, I would recommend looking at the execution plan for both and seeing if either is doing any scans. Here is some info on using execution plans for performance tuning:

http://www.sql-server-performance.com/2006/query-execution-plan-analysis/
http://beyondrelational.com/blogs/nakul/archive/2011/07/28/ssms-performance-tuning-using-graphical-execution-plans-missing-indexes-hints.aspx

You can also use the Database Tuning Advisor as a quick way to check whether you have the necessary indexes. Be careful not to spend too much time over-optimizing, either. While it's good to keep an eye on both current performance and performance as your database grows, it's easy to spend a lot of time on this without getting much in return. If you have a good database structure and well-written queries, then future optimizations shouldn't be too much of a pain.

Okay, thanks, that's what I wanted to know. COALESCE makes it much easier to read. I just want to produce high quality results from the beginning.

But the join IS necessary if the description isn't null... and that's what your code is supposed to do, right?

You can roughly time your queries by wrapping them with a pair of 'select getdate()' statements and doing the math (or write the SQL to do the math for you).

You could also do something like this:

    select Id, Name, Description
    from person
    where description is not null
    union all
    select Id, Name, pic.image
    from person per
    join picture pic on per.Picture = pic.Id
    where description is null

Not sure how performance would be, but you are only doing the join if needed, at the cost of a UNION. UNION ALL will eliminate the need for sorting out distinct rows, which should help performance.
C++ coroutines: Is it valid to call `handle.destroy` from the final suspend point?

Is it valid to call handle.destroy() from within the final suspension of a C++ coroutine? From my understanding, this should be fine because the coroutine is currently suspended and it won't be resumed again. Still, AddressSanitizer reports a heap-use-after-free for the following code snippet:

    #include <experimental/coroutine>
    #include <iostream>

    using namespace std;

    struct final_awaitable {
        bool await_ready() noexcept { return false; }
        void await_resume() noexcept {}

        template<typename PROMISE>
        std::experimental::coroutine_handle<>
        await_suspend(std::experimental::coroutine_handle<PROMISE> coro) noexcept {
            coro.destroy(); // Is this valid?
            return std::experimental::noop_coroutine();
        }
    };

    struct task {
        struct promise_type;
        using coro_handle = std::experimental::coroutine_handle<promise_type>;

        struct promise_type {
            task get_return_object() { return {}; }
            auto initial_suspend() { return std::experimental::suspend_never(); }
            auto final_suspend() noexcept { return final_awaitable(); }
            void unhandled_exception() { std::terminate(); }
            void return_void() {}
        };
    };

    task foo() {
        cerr << "foo\n";
        co_return;
    }

    int main() {
        auto x = foo();
    }

when compiled with clang 11.0.1 and the compilation flags -stdlib=libc++ --std=c++17 -fcoroutines-ts -fno-exceptions -fsanitize=address (see https://godbolt.org/z/eq6eoc). This is a simplified version of my actual code; you can find the complete code at https://godbolt.org/z/8Yadv1. Is this an issue in my code or a false positive in AddressSanitizer?

It's perfectly valid if you're 100% sure nobody will use the coroutine promise afterwards. Calling coroutine_handle::destroy is equivalent to calling the coroutine promise destructor. If this is the case, then why do it like this to begin with? Just return std::suspend_never from final_suspend:

    std::suspend_never final_suspend() const noexcept { return {}; }

It's equivalent to your code. We want to suspend the coroutine in final_suspend if we want to do something meaningful with the coroutine promise after the coroutine is finished, like returning the stored result of the coroutine. Since your task object doesn't store or return anything, I don't see why to final-suspend it. Do note that if you use third-party libraries, like my concurrencpp, you need to make sure it's OK to destroy promises that are not yours. A coroutine promise might be suspended, but still referenced by its coroutine_handle somewhere else. This goes back to point #1. In the case of my library, it is not safe, because it could be that a result object still references it.

In conclusion, it's OK to call coroutine_handle::destroy if:

the coroutine is suspended (when you reach final_suspend it is)
nobody will use that coroutine promise after destruction (make extra sure there is no future-like object that references that coroutine!)
destroy has not been called before (double-delete)

Thanks for your answer. So do you agree that this is a false positive in AddressSanitizer, then?

To your question "If this is the case, then why do it like this to begin with? Just return std::suspend_never from final_suspend": the snippet above is only a simplified version. You can find a more complete example at https://godbolt.org/z/8Yadv1

It's extremely strange. I agree I don't understand what TSAN wants. Funnily, altering the code a bit fixes the problem: https://godbolt.org/z/sjW6zs . I have a lot of experience by now using coroutines and I can tell you that ALL the compilers currently have MAJOR problems compiling coroutines correctly. I guess it's kind of fine considering this feature is still considered "experimental" by clang?

Unfortunately, I do need await_suspend to return a coroutine handle to get symmetric transfer. I am currently using a similar work-around to your edit (see https://godbolt.org/z/zW3jTG). I guess what is happening is that clang stores the coroutine handle returned by await_suspend inside the coroutine frame itself... In that case, ASAN would be correct to report an error, given that the coroutine frame was destroyed. clang should use the stack instead of the coroutine frame for this value. Sounds like a clang-frontend bug...

C'est la vie. I remember that until Visual Studio 2017, using std::tuple simply made the compiler crash. You could not use a feature from 2011 until 2016~17. Looking at this, I can only smile compared to other atrocities I've seen done by C++ compilers. Be patient.

Turns out this is already fixed in clang as part of https://reviews.llvm.org/D90990. Counting the days until the next clang release...

I know this thread is old. I only discovered it recently, but the last few messages from @Vogelsgesang make me wonder if it could be linked to the bug I reported here: https://stackoverflow.com/questions/76085876/handle-destroy-crashes-with-clang-14-0-2-on-mac-os-when-called-in-await-s The LLVM team says they cannot reproduce it on non-Apple compilers. The Apple LLVM team has not addressed my bug report yet.

"It is recommended that you structure your coroutines so that they do suspend at final_suspend where possible. This is because this forces you to call .destroy() on the coroutine from outside of the coroutine (typically from some RAII object destructor) and this makes it much easier for the compiler to determine when the scope of the lifetime of the coroutine-frame is nested inside the caller. This in turn makes it much more likely that the compiler can elide the memory allocation of the coroutine frame." Source: https://lewissbaker.github.io/2018/09/05/understanding-the-promise-type
CRM2015: Validate a field on the first time a form is updated

I've added a custom button on the form of some custom entity that, when clicked, duplicates the record and opens the newly created record in a new window, i.e., the FormType of the newly created record is Update. In that opened window, I need to know whether the save button has been clicked. As long as it hasn't been clicked, some fields shall be open for edit; once it has been clicked, those fields shall be disabled. Currently, I have a (hidden) bit field that indicates whether the record is a duplicate, and its initial value is set to true. On the first click of the save button, in my onSave function, I set it to false. In addition, I have an onLoad function checking this field for true (which may happen only once a record was duplicated) or false. My problem is a logical one: in order to set this field to false on the first save click, I actually need to do a validation every time the save button is clicked (and on non-duplicated records, too). I thought maybe someone can offer another, more logically correct way of doing the validation on a save event only once. Here is the relevant snippet:

    function OnLoad() {
        // some code...
        if (Xrm.Page.getAttribute("sln_isduplicate").getValue() == true) {
            // open for edit relevant fields
        } else {
            // close for edit relevant fields
        }
        // some code...
    }

    function OnSave() {
        // some code...
        if (Xrm.Page.getAttribute("sln_isduplicate").getValue() == true) {
            Xrm.Page.getAttribute("sln_isduplicate").setValue(false);
        }
        // some code...
    }

Simply call this:

    Xrm.Page.getAttribute("sln_isduplicate").setValue(false);

in your OnLoad function, after you have done all the form modifications, so:

    function OnLoad() {
        // some code...
        if (Xrm.Page.getAttribute("sln_isduplicate").getValue() == true) {
            // open for edit relevant fields
        } else {
            // close for edit relevant fields
        }
        Xrm.Page.getAttribute("sln_isduplicate").setValue(false);
    }

It does not matter if it's a duplicate or not: you want the user to set this flag to false when he saves the record. The other approach would be to dynamically add the onsave event (because I assume that you've added it on the form level). Basically it would look like this:

    if (Xrm.Page.getAttribute("sln_isduplicate").getValue() == true) {
        // open for edit relevant fields
        Xrm.Page.data.entity.addOnSave(OnSave);
    } else {
        // close for edit relevant fields
    }

This approach would add your OnSave function only for records which are duplicates. But on the other hand, I would not use a custom field for that; rather, pass some query string parameter to indicate that this is a duplicate. You are probably opening your newly created record like this:

    Xrm.Utility.openEntityForm("entityname", "A85C0252-DF8B-E111-997C-00155D8A8410");

But when you check the documentation for this function, you will see that it has more useful options for you:

    Xrm.Utility.openEntityForm(name, id, parameters, windowOptions)

where parameters can be default field ids or some completely custom values (but you have to configure that). So of course the obvious solution would be: open your duplicate record with some custom query string parameter, check for that parameter in the onload function, and if it's there, simply do your specific logic.

Solutions 2 & 3 are exactly what I was looking for, thank you!
Joining to MAX record by date and joining to another table

I have the category and product tables. I want to get, in one line, the record of the product with the max date according to the category. If there is no product at all, I want at least to show the category; the ID of the product will be NULL, as ProductDate will also be NULL when there are no records. I tried this script and I don't get anything (I don't have a product of the category passed as @ID). If I change the INNER JOIN to a LEFT JOIN, I will get all the categories including all the max products. I should get just one record, because I'm filtering by an ID.

    SELECT c.ID AS CategoryID,
           p.ID AS ProductID,
           p.Date AS ProductDate
    FROM Category c
    LEFT JOIN Product p ON c.ID = p.CategoryID
    INNER JOIN (
        SELECT CategoryID, MAX(Date) AS MaxDate
        FROM Product
        WHERE CategoryID = @ID
        GROUP BY CategoryID
    ) p2 ON p.CategoryID = p2.CategoryID
    WHERE c.ID = @ID
    ORDER BY p.CategoryID, p.Date

How can I get the one record that matches the category rather than the product (because I don't have any product of the category)? E.g.:

    Category
    C1  Cat1
    C2  Cat2

    Product
    P1  Cat1  Prod1  2015-01-01  ...
    P2  Cat1  Prod2  2015-10-01  ...
    P3  Cat1  Prod1  2015-10-14  ...

Result for @ID = C2 (Category.ID):

    CategoryID, ProductID, Date
    C2, NULL, NULL

Result for @ID = C1:

    CategoryID, ProductID, Date
    C1, P3, 2015-10-14

UPDATE: I found the error; I didn't post the issue completely. I was additionally doing this:

    WHERE c.ID = @ID AND c.Inactive IS NULL OR (c.Inactive = 0)
    ORDER BY p.CategoryID, p.Date

I changed it to this to fix the problem:

    WHERE c.ID = @ID AND (c.Inactive IS NULL OR c.Inactive = 0)
    ORDER BY p.CategoryID, p.Date

But the answer from Giorgi Nakeuri simplifies my script.

Here is OUTER APPLY to help you. It is designed for exactly this type of work:

    select c.id as categoryid, o.id as productid, o.date as productdate
    from categories c
    outer apply (
        select top 1 p.id, p.date
        from products p
        where p.categoryid = c.id
        order by p.date desc
    ) o
Cisco AnyConnect dropped and can't reconnect without reboot

I am using Cisco AnyConnect on Ubuntu 18.04. The VPN sometimes gets disconnected unexpectedly. I try to reconnect, even quit the GUI, kill all processes for programs under /opt/cisco, and restart the NetworkManager service. No use; it still can't initiate the connection (no login dialog). What am I missing? I guess there is something with the route that is not cleaned up.

Update: I also got this message from the VPN GUI after a while: "The VPN connection failed due to unsuccessful domain name resolution". This article explains something about external DNS. I cleaned that up, and checked the DNS information in the Ethernet settings. Still couldn't get rid of the error.

Update 2: This is the command that worked for me in the end:

    systemctl restart systemd-resolved.service

Please don't put the answer (Update 2) inside the question. You are encouraged to write your own solution as your own answer. Whether you write your answer or not, accept the correct answer by clicking on the gray check mark next to the answer and turning it green. This will help others and mark your problem as solved.

@user68186 whatever, his update solved my problem so who cares.
Is 4-momentum conservation used in this answer? In Dale's answer to the collision rest mass problem, how is he concluding that the momentum 4-vector of the resulting particle is equal to the sum of the 4-momentums of the incoming particles. I am specifically wondering about this step in his analysis: $$R^\mu = P^\mu + Q^\mu$$ I agree with the final answer, it just seems to me that in regular collision problems we always use some form of momentum conservation (and maybe energy if collision is elastic) to find final velocities. How is Dale getting away without using conservation here? What is different to regular mechanics? Related fact: Momentum 4-vector is a conserved quantity in special relativity. While total 4-momentum is conserved (an equation of 4-momentum vectors), the collision need not be elastic. How is Dale getting away without using conservation here? I am not using the conservation of the four momentum in that question because that question is not about conservation. That question is about the fact that the mass of the whole is not the sum of the masses of the parts. In that answer $R^\mu$ is the four-momentum of the system. Because four-momentum is conserved it is the same before and after the collision. Let me repeat that concept for emphasis: $R^\mu$ is the four momentum of the system which is the same before the collision and after. So the mass of the system, even before the collision, is given by the expression in the answer, $m^2 c^2 = 2(1+\gamma) \ m_0{}^2 c^2$ even before the collision and therefore without invoking the conservation law before and after the collision. Note, I am not claiming that the conservation of four-momentum is violated in my answer. The four-momentum is indeed conserved. However its conservation is simply not part of the answer. It was not needed and not used for this question, although it would have been a valid property to use if it had been needed. Thanks for going at such a length to clarify this. 
You are saying the collision didn't even need to happen for us to consider the total momentum of the system R. Whether that is used to calculate the total mass of 2 particles or 1, that depends on whether we are thinking before or after the collision. *of the system (not of 1 or 2 particles) @JohnA. Yes, exactly. In a collision, the total 4-momentum is conserved. This is true whether or not the collision is elastic. For the decay, the conservation of total 4-momentum is the equation $$R^\mu = P^\mu + Q^\mu,$$ which says (in component form) that the total relativistic energy is conserved and the total relativistic spatial momentum is conserved. However, the decay is not elastic. In special relativity, an elastic collision is characterized by the particles retaining their rest masses; this implies that the total relativistic kinetic energy is also conserved.
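For reference, the invariant-mass formula quoted above drops out of squaring the system 4-momentum. A short derivation (my addition, assuming as in the linked problem two particles of equal rest mass $m_0$, one at rest and one moving with Lorentz factor $\gamma$, with metric signature $(+,-,-,-)$):

```latex
m^2 c^2 = R^\mu R_\mu
        = (P^\mu + Q^\mu)(P_\mu + Q_\mu)
        = P^\mu P_\mu + Q^\mu Q_\mu + 2\,P^\mu Q_\mu
        = m_0^2 c^2 + m_0^2 c^2 + 2\gamma\, m_0^2 c^2
        = 2(1+\gamma)\, m_0^2 c^2 .
```

The cross term is easiest in the rest frame of the first particle, where $P^\mu = (m_0 c, \vec{0})$ and $Q^\mu = (\gamma m_0 c, \gamma m_0 \vec{v})$, giving $P^\mu Q_\mu = \gamma m_0^2 c^2$. Note that nothing here compares "before" with "after"; it is just the norm of $R^\mu$ at one instant.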
Returning the class pointer through function How can I return a pointer (or reference) to the foo class from its own member functions? The reason why I ask is because I want to make this code work: foo fo; fo.MakeA(34.5777).MakeY(73.8843); Thank you very much in advance. This is called "Method Chaining," and in my opinion, it is an abomination. It saves you nothing, makes the code confusing, and gives you many opportunities for bugs. Assuming that you need a reference return type: class foo { public: foo& MakeA(float a) { // MakeA code logic here... return *this; } foo& MakeY(float y) { // MakeY code logic here... return *this; } }; Otherwise, you can just return a copy (foo instead of foo&). Make the return type of the chained methods (MakeA, MakeY) a reference to the class itself (foo&), and at the end of every method put return *this;
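To see the pattern compile and run, here is a minimal self-contained sketch; the stored fields and accessors are my additions for illustration, not part of the original post:

```cpp
#include <cassert>

// Sketch of the chaining pattern: each setter mutates the object,
// then returns a reference to it, which is what lets calls be chained.
class foo {
public:
    foo& MakeA(float a) { a_ = a; return *this; }
    foo& MakeY(float y) { y_ = y; return *this; }
    float a() const { return a_; }
    float y() const { return y_; }
private:
    float a_ = 0.0f;
    float y_ = 0.0f;
};
```

With this in place, `foo fo; fo.MakeA(34.5777f).MakeY(73.8843f);` compiles, and both calls mutate the same object.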
Integrate OSExcelBundle I tried to install OSExcelBundle (https://github.com/ouardisoft/OSExcelBundle) on Symfony 2.1 by following the README: Add this line to the require option in your composer.json file: "os/excel-bundle": "dev-master" OK Add the autoloader for PHPExcel in app/autoloader.php: require __DIR__.'/../vendor/os/php-excel/PHPExcel/PHPExcel.php'; There is no app/autoloader.php in Symfony 2.1 - I tried to add the line in app/autoload.php - without success. This works: php composer.phar install But without step 2 this does not work: I get this error message: You have requested a non-existent service "os.excel" If someone could give me a hint, that would be nice ... Edit: I added this line in AppKernel: new OS\ExcelBundle\OSExcelBundle(), and so the bundle seems to be taken into account. Nevertheless, now I get this error: Fatal error: Class 'PHPExcel_IOFactory' not found in ..\vendor\os\excel-bundle\OS\ExcelBundle\Excel\Excel.php on line 29 So I really need to know where I can declare PHPExcel! I installed another bundle: ExcelBundle This one installed with no problem (???) And!!! It resolves my problem with the other bundle, because it made PHPExcel available. So I can go on ...
Variable assignment within .env file I have one .env file that looks like: NODE_ENV = local PORT = 4220 BASE_URL = "http://198.**.**.**:4220/" PROFILE_UPLOAD = http://198.**.**.**:4220/uploads/profile/ POST_UPLOAD = http://198.**.**.**:4220/uploads/discussion/ COMPANY_UPLOAD = http://198.**.**.**:4220/uploads/company/ ITEM_UPLOAD = http://198.**.**.**/uploads/item/ GROUP_UPLOAD = http://198.**.**.**/uploads/group/ I want to do something like this: NODE_ENV = local IP = 198.**.**.** PORT = 5000 BASE_URL = http://$IP:$PORT/ PROFILE_UPLOAD = $BASE_URL/uploads/profile/ POST_UPLOAD = $BASE_URL/uploads/discussion/ COMPANY_UPLOAD = $BASE_URL/uploads/company/ ITEM_UPLOAD = $BASE_URL/uploads/item/ GROUP_UPLOAD = $BASE_URL/uploads/group/ Expected result of BASE_URL is http://198.**.**.**:4220/ I have tried several syntaxes but am not getting computed values. Tried syntax: "${IP}", ${IP}, $IP. I have used the dotenv package for accessing env variables. dotenv-expand was built on top of dotenv to solve this specific problem. dotenv-expand is the solution, as @maxbeatty answered. Here are the steps to follow: First, install: npm install dotenv --save npm install dotenv-expand --save Then change the .env file like: NODE_ENV = local PORT = 4220 IP = 192.***.**.** BASE_URL = http://${IP}:${PORT}/ PROFILE_UPLOAD = ${BASE_URL}/uploads/profile/ POST_UPLOAD = ${BASE_URL}/uploads/discussion/ COMPANY_UPLOAD = ${BASE_URL}/uploads/company/ ITEM_UPLOAD = ${BASE_URL}/uploads/item/ GROUP_UPLOAD = ${BASE_URL}/uploads/group/ Last step: var dotenv = require('dotenv'); var dotenvExpand = require('dotenv-expand'); var myEnv = dotenv.config(); dotenvExpand(myEnv); process.env.PROFILE_UPLOAD; // to access the .env variable OR (shorter way): require('dotenv-expand')(require('dotenv').config()); // in just a single line process.env.PROFILE_UPLOAD; // to access the .env variable In newer versions of dotenv-expand the call was changed to: var dotenv = require('dotenv'); var dotenvExpand = require('dotenv-expand'); var myEnv = dotenv.config(); dotenvExpand.expand(myEnv); As already stated, you can't assign variables in .env files. You could move your *_UPLOAD values to a config.js file and add that to .gitignore, then you could do: //config.js const BASE_URL = `http://${process.env.IP}:${process.env.PORT}/` module.exports = { PROFILE_UPLOAD: BASE_URL+"uploads/profile/", POST_UPLOAD: BASE_URL+"uploads/discussion/", .... } Unfortunately dotenv can't combine variables. Check this issue to see more and find a solution for your project.
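To make the interpolation concrete, here is a dependency-free sketch of roughly what dotenv-expand does after dotenv has parsed the file. This is not the library's actual implementation, and the IP value is a placeholder:

```javascript
// Minimal sketch of ${VAR} interpolation: walk the parsed key/value pairs
// in order and substitute each ${NAME} reference, preferring values that
// have already been expanded.
function expand(env) {
  const out = {};
  for (const [key, value] of Object.entries(env)) {
    out[key] = value.replace(/\$\{(\w+)\}/g, (_, name) => out[name] ?? env[name] ?? '');
  }
  return out;
}

const parsed = {
  IP: '198.51.100.7', // placeholder IP for illustration
  PORT: '4220',
  BASE_URL: 'http://${IP}:${PORT}/',
  PROFILE_UPLOAD: '${BASE_URL}uploads/profile/',
};

const env = expand(parsed);
console.log(env.BASE_URL);       // http://198.51.100.7:4220/
console.log(env.PROFILE_UPLOAD); // http://198.51.100.7:4220/uploads/profile/
```

Note that in this sketch a key should be defined before it is referenced, which is why IP and PORT come first, just as in the .env file above.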
How to give a name to an object for debug purposes? I have this situation: var Dog = function(name) { this.name = name; } var myDog = new Dog("Lucky"); myDog.bark(); I'm wondering if there is a way to give a name to an object and see it in the console when such errors are thrown: Object [object Object] has no method 'bark' I would love to see something like: Object [Dog] has no method 'bark' Is it by any means possible to do it in JavaScript? Disclaimer: As Cedric Reichenbach commented, this is a browser specific behavior. Don't declare your class using an anonymous function. Instead you should use a function declaration: function Dog(name) { this.name = name; } var myDog = new Dog("Lucky"); myDog.bark(); // TypeError: Object #<Dog> has no method 'bark' Or using a named function: var Dog = function Dog(name) { this.name = name; } var myDog = new Dog("Lucky"); myDog.bark(); // TypeError: Object #<Dog> has no method 'bark' This is very browser-specific and not really a correct answer. Technically, I guess both implementations should resolve to the same, which they seem to do in Firefox, but not in Chrome. Nice and simple! But what if I want to keep that notation for my classes? Is there any property that I can just update to reach the goal? @Alessandro I don't understand your question, sorry. I mean, what if I still want to define my classes with an anonymous function? Isn't it possible to set the class type as a property? I imagine something like: var Dog = function() {}; Dog.type = 'Dog';
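A related trick, if you want to read the name at runtime rather than only see it in error messages: look at the constructor. A small sketch (behaviour for the anonymous case varies by engine, as noted in the comments above):

```javascript
// Naming the function expression makes the name available everywhere,
// including error messages and the constructor's .name property.
var Dog = function Dog(name) {
  this.name = name;
};

var myDog = new Dog('Lucky');
console.log(myDog.constructor.name); // "Dog"

// With an anonymous function expression there is no declared name:
var Cat = function (name) { this.name = name; };
var myCat = new Cat('Tom');
console.log(myCat.constructor.name); // "" in old engines; modern engines infer "Cat"
```

Function `.name` is read-only in older engines, so assigning `Dog.type = 'Dog'` would work as a manual convention, but it would not change what the engine prints in its error messages.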
HTML5: Green screen effect on JPGs Since JPGs don't support transparency and in my case, PNGs are far too large, I'm trying to replace a colour (my example is white) by setting the alpha channel. My code is roughly based on this: http://tech.pro/tutorial/1281/chroma-key-video-effects-using-javascript-and-the-html5-canvas-element The problem is, it doesn't do anything! Also, setting the frame's colour to anything else doesn't seem to update. I know it is something simple, I just can't crack it. HTML <div id="poo"> <div id="container"> <img id="source" src="http://i.imgur.com/RylDs1h.jpg" /> </div> </div> CSS #greend{ background:black; width:600px; } JS var img = $("img").get(0); var $canvasbg = $("<div id='greend' />"); var canvas = $("<canvas></canvas>").get(0); $canvasbg.append(canvas); $('#container').append($canvasbg); var $container = $('#container'); var context = canvas.getContext('2d'); canvas.width = img.clientWidth; canvas.height = img.clientHeight; $container.width(img.clientWidth); $container.height(img.clientHeight); context.drawImage(img, 0, 0); var frame = context.getImageData(0, 0, canvas.width, canvas.height); var length = frame.data.length; // iterate per pixel (4 bytes each), not per byte for (var i = 0; i < length / 4; i++){ var r = frame.data[i * 4 + 0]; var g = frame.data[i * 4 + 1]; var b = frame.data[i * 4 + 2]; // replace white-ish colours. if(g > 240 && g < 256 && r > 240 && r < 256 && b > 240 && b < 256){ frame.data[i * 4 + 3] = 0; } } context.putImageData(frame, 0, 0); jsFiddle here: http://jsfiddle.net/Bhqq8/2/ If you were to do some debugging, you would realize this is an XY problem and you really need to ask http://stackoverflow.com/questions/17035106/context-getimagedata-operation-is-insecure Thanks. I actually did some debugging and got a security warning. IE Dev tools didn't explain though that it stopped image manipulation as a result, hence I thought there was something else that was wrong.
In general once you get an unhandled exception, script execution stops, which explains the behavior you were seeing. By the way: JPEGs are quite bad for color-keying, because they have problems with encoding high-contrast edges. Even when you use a high level of quality, you will always have some grain near the borders of objects. A better image format for this would be PNG, but PNG already has alpha channel support, so this is rarely necessary.
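For reference, the keying pass itself can be written as a standalone function over a raw RGBA byte array (the same layout `getImageData` returns). This is a sketch of the corrected loop, not the fiddle's code:

```javascript
// Set alpha to 0 for every "white-ish" pixel in an RGBA byte array.
// The loop runs over pixels (length / 4), not over every byte, which
// was the indexing bug in the original snippet.
function keyOutWhite(data, threshold = 240) {
  for (let i = 0; i < data.length / 4; i++) {
    const r = data[i * 4];
    const g = data[i * 4 + 1];
    const b = data[i * 4 + 2];
    if (r > threshold && g > threshold && b > threshold) {
      data[i * 4 + 3] = 0; // fully transparent
    }
  }
  return data;
}

// Two pixels: one white-ish, one red.
const pixels = keyOutWhite(Uint8ClampedArray.from([250, 250, 250, 255, 200, 0, 0, 255]));
console.log(Array.from(pixels)); // [ 250, 250, 250, 0, 200, 0, 0, 255 ]
```

In the page itself you would call this on `frame.data` between `getImageData` and `putImageData`; the security restriction discussed above still applies whenever the image comes from another origin.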
Is it possible to have multiple valid access tokens with the same client credentials? I have an API set up with OAuth2 authentication. A client has subscribed to my API using WSO2. We don't use refresh tokens. All access tokens expire in 1 hour. What happens if my client requests 2 access tokens with the same client credentials? Will the first token be revoked or will both tokens live 1 hour? When you request a token with client credentials, it will give an access token which is valid for 1 hour. If you again request a token from the token API within that 1 hour period, then it will give the same token. Basically, if there is a valid token, then it will return that. This is the default behavior. But if you are using the API Store and click on token re-generation, then it will first revoke the token and get you a new access token. If you want to get two different access tokens for the same client credentials at the same time, then you can use a scope. When there are different scopes in the token request, it will return two different access tokens. Thank you for your answer. The problem I have: the second access token is different from the first one, and around 10 min after I request the second access token, the first one is revoked. The solution I want is: if I have 2 requests, I want the same access token, exactly what you describe. I will check tomorrow how my WSO2 is set up. According to WSO2 docs, you can't have more than one access token. What you can do instead is change the token expiration time to longer than one hour. In WSO2 API-M the access token must be unique for the following combination - CONSUMER_KEY, AUTHZ_USER, USER_TYPE, TOKEN_STATE, TOKEN_STATE_ID and TOKEN_SCOPE. This uniqueness constraint is defined in the IDN_OAUTH2_ACCESS_TOKEN table. Therefore, it is not possible to have more than one access token for any of the above combinations. Thank you for your answer!
It is actually possible to have different access tokens simultaneously for the same client, given they have different scopes.
Computing the structure of the group completion of an abelian monoid, how hard can it be? Cherry Kearton, Bayer-Fluckiger and others have results that say the monoid of isotopy classes of smooth oriented embeddings of $S^n$ in $S^{n+2}$ is not a free commutative monoid provided $n \geq 3$. The monoid structure I'm referring to is the connect sum of knots. Bayer-Fluckiger has a result in particular that says you can satisfy these equations $$a+b=a+c, \ \ \ \ b \neq c$$ where $a,b,c$ are isotopy classes of knots and $+$ is connect sum. When $n=1$ it's an old result of Horst Schubert's that the monoid of knots is free commutative on countably-infinite many generators. What I'm wondering is, does anyone have an idea of how difficult it might be to compute the structure of the group completion of the monoid of knots, say, for $n \geq 3$? That's not really my question for the forum, though. It's this: Do people have good examples where it's "easy" to compute the group-completion of a commutative monoid, but for which the monoid itself is still rather mysterious? Meaning, one where rather minimal amounts of information are required to compute the group completion? Presumably there are examples where it's painfully difficult to say anything about the group completion? For example, can it be hard to say if there's torsion in the group completion? Abelian monoid =(. Do people have good examples where it's "easy" to compute the group-completion of a commutative monoid, but for which the monoid itself is still rather mysterious? This happens all the time in K-theory $K^0(X)$, both algebraic and topological. Perhaps it is even the reason that K-theory is a useful tool. For a striking algebraic example, take $X = \mathbb{A}^n_k$ where $k$ is a field. Then $K^0(X)$ is the group completion of the commutative monoid $M$ of isomorphism classes of finitely generated projective modules over $R = k[x_1, \ldots, x_n]$. 
In 1955 Serre asked whether every such module was free, i.e., whether $M = \mathbb{N}$. This question became known as Serre's conjecture. Serre proved in 1957 that every finitely generated projective $R$-module is stably free, i.e., $K^0(X) = \mathbb{Z}$. However, it was not until 1976 that Quillen and Suslin independently proved Serre's original conjecture. So between 1957 and 1976, $M$ was an example of a commutative monoid whose group completion was known but which itself was not known. This is only a historical example, because $M = \mathbb{N}$ turns out to be very simple; however, it illustrates the difficulty of the question in general. A topological example where the commutative monoid is not so simple is given by $KO^0(S^n)$. Let us take $n$ congruent to 3, 5, 6, or 7 modulo 8, so that $KO^0(S^n) = \mathbb{Z}$ by Bott periodicity (the generator being given by the trivial one-dimensional real vector bundle). Let $T$ be the tangent bundle to $S^n$. In $KO^0(S^n)$, of course, the class of $T$ is equal to its dimension $n$. But if we let $M$ be the commutative monoid of isomorphism classes of finite-dimensional real vector bundles on $S^n$ (so that $KO^0(S^n)$ is the group completion of $M$) then the class of $T$ is not equal to the class of the trivial $n$-dimensional vector bundle unless $S^n$ is parallelizable, which only happens when $n$ is equal to (0 or 1 or) 3 or 7. So for all other values of $n$, $M$ is not simply $\mathbb{N}$; there are extra vector bundles which get killed by the group completion process. Understanding these monoids $M$ for all $n$ amounts to understanding the homotopy groups of all the groups $O(m)$, which I expect is not much easier than understanding unstable homotopy groups of spheres. 
Finally, Pete's example of the monoid of cardinalities of at most countable sets and its absorbing element also makes an appearance in K-theory; here it is called the Eilenberg swindle and it explains why we restrict ourselves to finitely-generated projective modules. +1: very nice answer. The wikipedia page you gave is also relevant to the recent MO question about isomorphic group rings. This example is similar to Reid's: If $G$ is a finite group, then $K^0(G-\mathrm{rep})$ is just the class functions on $G$. But the question of whether a specific class function is the character of a representation, or only of a virtual representation, can be very hard. In a sense, Mark Haiman got tenure at Berkeley for proving that certain class functions on $S_n$ were characters. I know Pete has tongue in cheek, but I disagree about the grammatical parsing :) @DS and YC: I deleted the comment. An ultra-classical example: the failure of unique factorization in algebraic number fields. Here one looks at the multiplicative monoid of nonzero algebraic integers in a finite extension field of $\mathbb Q$. Factoring out units gives a quotient monoid $M$, and this is free (abelian) on the irreducible elements exactly when the unique factorization property holds. The monoid $M$ embeds in the ideal group, the free abelian group generated by the prime ideals, and the group completion of $M$ is the finite-index subgroup of the ideal group generated by the principal ideals. This subgroup is also free, but without a canonical basis. One can think of this subgroup as a somewhat skewed lattice in the ideal group, and $M$ is the intersection of this lattice with the positive "orthant" of the ideal group. I'm not an expert on this stuff, so please correct any inaccuracies in what I said above. I gather that the structure of $M$ as a monoid can be rather complicated, even though it's a submonoid of a free monoid. 
Perhaps this complexity is why the monoid structure seems rarely to be discussed explicitly. Does anyone know any references describing the monoid structure? (Warning: the examples that I give here are all quite trivial compared to the motivating example.) In general the group completion of a commutative monoid can have a much simpler structure than the monoid itself. An extreme example is the case of a monoid $M$ with an absorbing element, i.e., an element $z$ with $z*x = x*z = z$ for all $x \in M$. Then the group completion will just be the trivial group. There are natural examples of monoids with absorbing elements. For instance, on p.5 http://alpha.math.uga.edu/~pete/settheorypart2.pdf I give the example of the commutative monoid of cardinalities of at most countable sets. This is the usual natural numbers together with an additional (absorbing) point at infinity. It has the natural structure of a semiring, so it is somewhat disappointing that its ring completion is trivial. More generally, if you take a submonoid $M$ (in particular, a subset!) of the cardinal numbers under addition such that $M$ contains infinite cardinals, then you need not have an absorbing element but nevertheless the group completion will be trivial. This essentially amounts to the example of a totally ordered set $(X,\leq)$ with a least element $0$ made into a commutative monoid via $x+y = \max(x,y)$. Addendum: As Yemon Choi has pointed out below, a yet weaker condition for a commutative monoid to have trivial group completion is that $x+x = x$ for all $x \in M$. There are very rich classes of monoids satisfying this property! Further to Pete's comments: consider a commutative semigroup/monoid in which every element is idempotent (a.k.a. a semilattice). Then these can have quite rich structure, but since a group has only one idempotent the group completion of such an object is rather easily guessed... 
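The addendum's claim is a one-line computation: writing $[x]$ for the image of $x \in M$ in the group completion, idempotence gives

```latex
[x] = [x + x] = [x] + [x] \quad\Longrightarrow\quad [x] = 0 ,
```

so every element maps to the identity and the completion is trivial, however rich the semilattice $M$ itself may be.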
Here's a strange result that can help in computing the group completion of a commutative monoid. Let $M$ be a commutative monoid. Call an element $h \in M$ high if for all $x \in M$, there exists $y \in M$ such that $h = x + y$. Write $H(M)$ for the set of high elements of $M$. Examples: If $M$ is a group then $H(M) = M$ (and conversely). Any join-semilattice (i.e. a poset in which every finite subset has a least upper bound) can be viewed as a commutative monoid $M$, with the least upper bound of two elements as $+$ and the least element as $0$. Then $H(M)$ has at most one element, which is the greatest element if such exists. If $M = \mathbb{N}$, with the usual addition, then $H(M) = \emptyset$. Proposition If $H(M) \neq \emptyset$ then $H(M)$ is a group, under the same binary operation $+$ as $M$, but not necessarily the same zero. For a rather trivial example of why the zero might not be the same, consider a nontrivial join-semilattice with a greatest element. For a proof and nontrivial examples, see this paper by Marcelo Fiore and me. (The proof's in section 3.) Now: Theorem $H(M)$ is, if not empty, the group completion of $M$. How does this work? Write $z$ for the zero element of $H(M)$. Then there is a monoid homomorphism $\pi = z + (\ ): M \to H(M)$. It's not too hard to show that every homomorphism from $M$ to a group factors uniquely through $\pi$. Indeed, given a map $\phi: M \to A$, with $A$ a group, the corresponding map $\bar{\phi}: H(M) \to A$ is simply the restriction of $\phi$. The theorem only helps when there's at least one high element, though. There are nontrivial situations when there are no high elements, as the example above of $\mathbb{N}$ illustrates. In a paper (here) with Soren Galatius, we compute the topological group completions of certain topological monoids made up of moduli spaces of surfaces with various structures. Taking $\pi_0$ of these statements leads to examples of such group completions of discrete monoids. 
As a particular example, take the discrete monoid $\mathcal{M} := \coprod_{g \geq 0} [\Sigma_{g, 1}, \partial; Y, *] / \Gamma(\Sigma_{g, 1})$. Here the square brackets denotes the set of homotopy classes of maps from the genus $g$ surface with one boundary component $\Sigma_{g,1}$ to a path connected space $Y$ (sending the boundary of the surface to a basepoint $* \in Y$), and we quotient this set by the action of the mapping class group of the surface (rel boundary). The monoid structure is by pair-of-pants gluing of surfaces. This is a fairly complicated monoid, but its group completion turns out to be $$MTSO(2)_0(Y),$$ the degree zero part of a certain homology theory applied to the space $Y$. (It is the homology theory associated to the spectrum occurring in the Madsen--Weiss theorem.) In particular it only sees $Y$ "stably", so if you do something drastic like plus-construct $Y$, the group completion does not change (but the monoid certainly does!). Thus for example if you take $Y$ to be the Poincare sphere (which plus-constructs to $S^3$), one finds that the group completion is simply $\mathbb{Z}$ (which is always in there: there is a surjection $\mathcal{M} \to \mathbb{N}$ that sends a surface to its genus). I don't know if one can see this directly from the monoid. This is in spirit very closely related to my underlying motivation -- spaces of knots have a natural space-level group completion as they can be turned into topological monoids by considering the corresponding "long knot" space. At present I'm not sure what the group completion should be other than the knowledge that it has a certain iterated loop-space structure so I'm hoping that knowledge of $\pi_0$ would give some insights.
Cordova: Open a link to a pdf in the app or browser I'm trying to open a link to a pdf in a cordova app, whether it be in the cordova app or in the native browser, to allow the customer to download it. The app is a website loaded in the cordova app: cordova index.html : <html> ... <script>window.location.href="https://www.website.url";</script> <script src="cordova.js"></script> </html> The link to the pdf is in a website; I don't have access to the cordova.js script in the website. In android, when I click the link (<a href="document url" target="_blank"></a>), nothing happens. I also tried to open a window (window.open(url)). Nothing happens either. The reason for this redirection is that we made several apps of the same website for several clients, and we don't want to embed the sources in the app, so as not to have to update each app every time we resolve a bug or add a feature. Check if this can help you: https://stackoverflow.com/questions/17887348/phonegap-open-link-in-browser Checked, there is no app in navigator in the app (cf first response) <script>window.location.href="www.website.url";</script> A protocol is missing. Does this answer your question? phonegap open link in browser No (it has already been linked in the first comment)
How to persist cookies from WebViewClient to URLConnection, browser, or other file download technique in android We have a .net forms auth enabled site that the user visits via a WebViewClient in our android app. One of the features of the site is the ability to log in and download some PDF files; however, you need to be logged in to download the PDFs. We are currently implementing shouldOverrideUrlLoading and are downloading the pdf via the following code when the correct condition is met. URL u = new URL(url); URLConnection conn = u.openConnection(); int contentLength = conn.getContentLength(); DataInputStream stream = new DataInputStream(u.openStream()); byte[] buffer = new byte[contentLength]; stream.readFully(buffer); stream.close(); DataOutputStream fos = new DataOutputStream(new FileOutputStream("/sdcard/download/file.pdf")); fos.write(buffer); fos.flush(); fos.close(); From the IIS logs, it's apparent that IIS does not consider this request to be logged in and redirects it to the login page. What we need is a way to download the file with the auth cookie persisted in the file download request, but we are at a loss as to how to persist the cookie. Another viable solution for us is to persist the auth cookie between the WebViewClient and the android browser. If we could do that, we'd just open the PDF file via the default action in the browser. Edit: It looks like I can set the auth cookie manually via conn.setRequestProperty("Cookie", ""); Now I just need to figure out how to read the auth cookie out of the WebViewClient Since you're using ASP.NET Forms authentication, you'll need to copy the forms auth cookie from the WebView to the URLConnection. Luckily this is pretty straightforward.
This code lives in an implementation of shouldOverrideUrlLoading String url = "http://site/generatePdfBehindFormsAuth"; // get an instance of a cookie manager since it has access to our auth cookie CookieManager cookieManager = CookieManager.getInstance(); // get the cookie string for the site. This looks something like ".ASPXAUTH=data" String auth = cookieManager.getCookie(url); URLConnection conn = (URLConnection)new URL(url).openConnection(); // Set the cookie string to be sent for download. In our case we're just copying the // entire cookie string from the previous connection, so all values stored in // cookies are persisted to this new connection. This includes the aspx auth // cookie, otherwise it would not be authenticated // when downloading the file. conn.setRequestProperty("Cookie", auth); conn.setDoOutput(true); conn.connect(); // get the filename from the server's response; its typical value is something like: // attachment; filename="GeneratedPDFFilename.pdf" String filename = conn.getHeaderField("Content-Disposition").split("\"")[1]; // by default, we'll store the pdf in the external storage directory String fileRoot = "/sdcard/"; // Complete the download FileOutputStream f = new FileOutputStream(new File(fileRoot, filename)); InputStream in = conn.getInputStream(); byte[] buffer = new byte[1024]; int len1 = 0; while ((len1 = in.read(buffer)) > 0) { f.write(buffer, 0, len1); } f.close(); in.close(); NOTE: One thing to be aware of is that you should NOT make a call to getContentLength on your URLConnection. After 4 hours of debugging, Wireshark finally showed that, if you call getContentLength, the cookie would be sent for the request that gets the content length, but the cookie will not be sent for subsequent requests, even on the same instance of URLConnection.
Maybe I am naive and this is by design (the documentation does not indicate that it is by design), but I was unable to manually set the cookie for the subsequent file request by calling setRequestProperty after calling getContentLength. If I attempted to do that, I'd get a force close. Very useful code, but where is this code setting the cookies? Is it possible to set cookies when browsing to URLs using URLConnection? It's setting the cookie on this line conn.setRequestProperty("Cookie", auth);, I'll edit the post to illustrate that more clearly. You have 100% control over the cookie string, both getting it from the WebView and setting it in the URLConnection, so you can persist any values you want. I recommend using Fiddler or something to observe the cookie string sent by the WebView, that way you know what to persist in the URLConnection. In the above case, I just persisted ALL of the cookies. I could have selectively persisted some and not others, but I opted for the easier option. Have you looked at the CookieSyncManager class? I believe this is what is needed to persist cookies received from the server and re-use them. http://developer.android.com/reference/android/webkit/CookieSyncManager.html Yeah I've seen that but I haven't been able to figure out how to use it to do what I need to do. I will look into it more though! Thanks! That seems to be a class to transfer cookies from RAM to storage, not webkit to code, unless I'm mistaken? @user778234 you are right, see the documentation on the class: The CookieSyncManager is used to synchronize the browser cookie store between RAM and permanent storage. To get the best performance, browser cookies are saved in RAM. A separate thread saves the cookies between, driven by a timer.
Active Directory Authentication in .NET Core Web API I am working on an intranet application which consists of a .NET Core Web API back end and a completely separate AngularJS SPA that queries it. A requirement of the project is authentication via AD (LDAP possible). However, after studying for a while, I cannot seem to find a way for my back-end API to receive a username/password and ask AD if it's correct, and what groups they are in, then store that locally and hand off a token to the SPA to use in future requests. Any ideas would be appreciated, thank you. Note: The web app hands the password to the API in an encrypted format, the API decrypts it, and that's where I'm at. The only out-of-the-box authentication for this is IIS + Windows Authentication (you see it as an option when creating a new ASP.NET Core project). It's not the same as you describe, though: the currently logged-in user's authentication information is sent on each request to an enabled service. However, by default Firefox doesn't do that and you need to toy with its options to allow it. There is no on-premises AD auth middleware (and no plans to add one, for security reasons), just AzureAD, which is something different though.
Istio concepts - inter-service communication and tracing EDIT: I am rewriting this question to narrow the scope as suggested in comments. Under Deploying the application, the documentation says, To run the sample with Istio requires no changes to the application itself. Instead, you simply need to configure and run the services in an Istio-enabled environment, with Envoy sidecars injected alongside each service. I have a NodeJS back-end API that writes logs with the winston package. I would presume that the application will have to be changed so that the logs from the winston package can participate in distributed tracing. Is this correct? I think this is far out of scope for a single question but a few short notes: gRPC is over HTTP; it does provide automatic TLS between mesh nodes; it would have no impact on a service that only talks to Kafka; and rather than change app code, you repoint all remote HTTP endpoints through the sidecar proxy. Distributed tracing systems in general require adding headers to outbound requests to tell the tracing system and downstream tasks which trace a given request belongs to. While this isn't Istio-specific, Istio does document a list of OpenTracing headers that need to be passed along. If you don't do this, then each call between services will show up as a separate trace instead of them being stitched together into a single unified end-to-end trace. This is separate from your logging system. Unless you're sending logs via HTTP to something like Logstash or directly into Elasticsearch, logs won't show up in traces at all. The flip side of this is that you don't need to change anything in your logging setup to "work with" Istio, but mostly because there's not a lot of direct interaction. So, basically, I have to undo all my winston logs and replace them with something that Istio can understand - correct? You should absolutely keep your winston logs; they have valuable data and it's complementary to what's in the tracing.
But consider setting up something like a cluster-wide fluentd collector to get the logs into one place, and consider including a value like the OpenTracing X-Request-Id: header in the log content to be able to pair things up. If I have X-Request-Id set up (I already do, actually) and an ELK/EFK cluster set up outside of OpenShift, then what value does OpenTracing add? Seeing the visual graph of which services called which other services, and how long a request spent in each, is really useful. If an inbound request takes 10s when it should take 1s, but it traversed six different services in its lifetime, the trace graph can pretty clearly tell you which service to start debugging. No, your assumption is not correct. Istio tracing has nothing to do with logs. It's all about custom headers managed by Istio and modified automatically by the sidecars, allowing each sidecar processing traffic to put a timestamp on it when it enters (request) and leaves (response). This gives you a fairly useful picture of the actual delays between the containers participating in a network call. On top of that, you are free to modify your app's code to include even more detailed method-level tracing using some OpenTracing-compatible lib for your app's language. Basically, you add some lines beside your winston logging in order to include checkpoints of your code's execution pipeline too. While you could parse your logs and measure the same thing by doing math with log timestamps, that is far more work than what OpenTracing already offers you.
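Propagating those headers is mechanical enough to sketch. Below is a minimal, framework-agnostic Python sketch: the header list is the one Istio's distributed-tracing docs ask services to forward; `incoming_headers` stands in for whatever your web framework gives you, and the function name is my own invention.

```python
# Headers Istio's tracing docs ask each service to propagate (b3/OpenTracing).
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
]

def tracing_headers(incoming_headers):
    """Copy only the tracing headers out of an incoming request so they
    can be attached to any outbound call made while handling it."""
    lowered = {k.lower(): v for k, v in incoming_headers.items()}
    return {h: lowered[h] for h in TRACE_HEADERS if h in lowered}
```

Attach the returned dict to every outbound HTTP call made while handling the request, e.g. `requests.get(url, headers=tracing_headers(request.headers))`; that is what stitches the per-service spans into one trace.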
Superposition of two wave functions of different Hilbert spaces I have been trying to think about this problem for quite some time. Let's say we have two sets of wave functions $\lbrace|\psi\rangle\rbrace$ and $\lbrace|\phi \rangle\rbrace$ and they belong to two different Hilbert spaces. That is, $$\hat{H_1}|\psi\rangle=E_1|\psi\rangle$$ and $$\hat{H_2}|\phi\rangle=E_2|\phi\rangle.$$ In the real space $\bf{R}$, their functional domains are disjoint. That is, if $\psi(x)$ is defined for $x\le0$, $\phi(x)$ is defined for $x>0$. In this case, is it possible to conceive some kind of superposition between the two waves? If so, how? I mean, how do we define the superposed wave function, and what can be said about the energy? This paper introduces such a concept: http://dx.doi.org/10.1119/1.18854 In quantum field theory, there is a field in spacetime. But in nonrelativistic quantum mechanics, a wave function is not a function from $\mathbb R^3$ into the complex numbers; a wave function is a function from the configuration space of the system into the joint spin state of the system. Where in the cited paper does it mention having two distinct Hilbert spaces? It's a method of images solution to reflection off an infinite potential barrier. @Timaeus Isn't the wave function in quantum mechanics defined as $f_{\psi}\colon x\to \langle x|\psi\rangle$, and thus exactly a function from $\mathbb{R}$ to the complex numbers? @GennaroTedesco Most definitely not. It is a function from configuration space into a joint spin state. The joint spin state is a tensor product of single-particle spin states (one for each particle), and configuration space is $\mathbb R^{3n}$ when there are $n$ particles. How would you then define the wave function for a free spinless particle, according to that definition? Can you point me to some literature to check that? I have somehow never encountered such a definition before. @Timaeus: right...it's a method of images solution.
But the image wave should be confined to $x>0$ (as depicted in the paper), because it is subject to the inverted potential $V(x)=0\ \forall x>0$ and $V(x) \rightarrow\infty\ \forall x\le0$. Notice that the potential seen by the image state is different from the potential seen by the physical state $\psi(x)$. Strictly speaking, their Hamiltonians are different, and that means the Hilbert spaces of the physical and the image states are also different (though $\frac {d^2}{dx^2}$ remains the same under $x\rightarrow-x$). The potential seen by the physical state is $V(x)=0\ \forall x\le0$ and $V(x)\rightarrow\infty\ \forall x>0$. Therefore, simple addition of these two states seems like a bit of an oversimplification to me. Well, it may happen that if we extend the Hilbert space (as said by user3257624), the final result may be the simple addition. But I want to understand that clearly. It seems to me that the two states are also correlated (entangled? I do not know the definition of quantum entanglement, so cannot really say). Is my point clear? @GennaroTedesco A spin $n/2$ particle generally has a single-particle spin state of $\mathbb C^{n+1}$, so spin 0 gives a spin state of $\mathbb C^1$, which when tensored with other spin states doesn't really change the state; if you have one particle, or all the particles are spin zero, it just gives $\mathbb C.$ Surely any text says a single particle has $\mathbb C$ for spin 0, $\mathbb C^2$ for spin 1/2 and $\mathbb C^3$ for spin 1, and that you tensor them together for a joint spin state. And surely they all say the domain is configuration space. And don't forget the @ @kolahalb You do the method of images by replacing the potential with a free particle and making the function odd, which then enforces that it is zero at the origin for all time under the free-particle Hamiltonian.
The only trick then, as in an EM image problem, is that you only integrate outside the zero-field region to find the actual physical energy; but you get the solution and the dynamics from the combination of the real and image solutions. So you use the free-particle Hamiltonian to evolve, but when you compute the actual energy you only integrate over $x<0.$ This is entirely like a charge above a grounded conducting slab that fills half of space (where $x>0$). Instead of having a fixed potential in that half space, you make an image solution in that half space, then take the actual and image solutions and evolve away, keeping in mind that the actual energy only exists on half the space. Every objection you raise could be equally applied to that EM example. And entanglement isn't going to fix it. And tensoring isn't going to fix it. Because those assume you already solved the original problem, which defeats the purpose of the method of images. But don't you agree that the two wave functions belong to different Hilbert spaces? In electrostatics, we can simply use the principle of superposition. If $V_1({\bf r})$ and $V_2({\bf r})$ individually satisfy Laplace's equation, then $a\ V_1({\bf r})+b\ V_2({\bf r})$ will also do the same. But in this problem, the equations satisfied by the real and the image states are intrinsically different. How can we possibly add them? Perhaps the final result in the paper is correct, but I guess some internal steps are missing. I am getting a message that we should better resort to a chat window instead of extending the discussion here. So, can we go that way? Two different Hilbert spaces correspond to two different physical systems. Superposition of wave functions makes sense for one system (for one Hilbert space), since addition of vectors (quantum states) is defined within a particular vector space (Hilbert space). What you can do is create a new Hilbert space by forming the tensor product of the two Hilbert spaces.
And if you work in the coordinate representation, then you should expand the domain of each wave function to the entire real line, by multiplying $\psi \left( x \right)$ with the characteristic function of $\left( -\infty ,0 \right]$ and $\varphi \left( x \right)$ with the characteristic function of $\left( 0,+\infty \right)$. In such a case, what will happen to the inner product of the new Hilbert space? If by saying "what will happen?" you mean "how is it defined?", then check this article: https://en.wikipedia.org/wiki/Tensor_product_of_Hilbert_spaces Thank you very much for the reply. I shall go through this concept. One query: in the extended Hilbert space, do the two wave functions add normally? Or do we need to construct some direct product or direct sum? In the tensor product space, the states are products of the wave functions; what is added there is such products of them, not the individual wave functions. You are very welcome! Hi, I started reading the concept of the tensor product of spaces from Le Bellac's book. However, can you tell me how to find the characteristic function of the domain of a function? Is this function related to the metric of the space in some manner? In this case they would be Heaviside step functions. But, on second thought, you may not need them. Actually, after taking a look at the paper, you don't need to consider tensor products, since two different Hilbert spaces haven't been introduced; you have only one Hilbert space. It is the geometrical space (the 1-D space, the real line) that is split into two disjoint parts (not the space of quantum states), and your physical entity lies on the left one.
What this paper actually explains is that you can solve the problem of a bouncing wave-packet by considering the interference of the actual packet with its hypothetical image propagating in the opposite direction. But how can the virtual image state (whose domain is restricted to $x>0$) be 'seen' from the space of the physical state (at $x<0$)? I am saying this because the image state is a solution to a ${\it different}$ TISE. You could superpose them at $x<0$ if both the real state and the image state were solutions to the same TISE. Isn't that so? In this specific case, $\frac{d^2}{d x^2}$ remains invariant under $x\rightarrow -x$, and the general solution can be written as a linear sum at $x\le0$. But what if we were to deal with an infinite spherical barrier? Then, perhaps, we cannot skip the tensor product. The tensor product is useful when you have two different physical systems and you want to find a new Hilbert space describing both; it is not the form of your geometrical space that dictates its use. You could also apply the same method there, I suppose. The method is that you forget about the barrier and see what happens when the two waves (the actual one and its image) collide and interfere at $x<0$. The author claims that the result is the same as if you had just one wave, which bounces back at $x=0$ and then interferes with itself, since the bouncing one will be like the image crossing $x=0$. Regarding this problem, I can see the result found by the author is absolutely fine. Yes, we 'forget' the barrier and let the physical wave and the image wave (two different physical systems - not because they are geometrically separated, but because they satisfy $\it different$ TISEs) interfere near $x<0$. Doesn't that mean we are "extending" the spaces of the wave functions? If we denote the Hilbert space by $D$, don't we have $D_{new}=D_{real}\otimes{D_{image}}$? Not really; they are not geometrically separated.
While the physical problem does indeed restrict the actual wave-packet to $x<0$, with this method you consider that both the actual packet and the image live on the whole real line, and therefore they satisfy the same TISE. Since (by reflection symmetry) both waves are solutions of the same TISE, what you do next is consider a superposition of the actual wave-packet with its image and let them propagate (in opposite directions) until they collide and interfere. Then you focus on what happens for $x<0$. Your confusion may be due to the following: in eq. (1) the statement $x<0$ does not mean that $\Psi(x,t)$ lives on the space $x<0$, but that this wave function (which is defined on the whole real line) reproduces the actual physical problem when we consider it for $x<0$; what happens for $x>0$ we don't care about, since our actual physical system lives at $x<0$. On statement b), the fact that $\psi(-x,0)=0$ comes from the fact that the actual wave-packet is originally ($t=0$) non-zero only for $x<0$ ($x<0$ implies that $-x>0$, so $\psi(-x,0)$ is calculated for $x>0$, where it is zero by assumption). Hi user3257624, I appreciate that you took very good care and time explaining the point. But I think I understood the content of eq. (1) and statement b) in the paper, and I agree with what you said on this. I was not confused there. Of course, once the method is implemented, we have the same TISE, and both the real state and the image wave live in $(-\infty,\infty)$. But to start with, how do we know that the method can be used? My guess is that using the method of images here effectively means creating $D_{new}=D_{real} \otimes D_{image}$. I wanted to clarify this point only. Am I clear?
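The construction the answer and comments converge on can be stated compactly, all within the single Hilbert space $L^2(\mathbb R)$ and with the free-particle Hamiltonian assumed throughout (as in the paper):

$$\hat H=-\frac{\hbar^2}{2m}\frac{d^2}{dx^2},\qquad \Psi(x,t)=\psi(x,t)-\psi(-x,t).$$

Since $\hat H$ commutes with parity, $\psi(-x,t)$ solves the same time-dependent Schrödinger equation as $\psi(x,t)$, hence so does $\Psi$, and $\Psi(0,t)=0$ holds automatically for all $t$. The hard wall's boundary condition is thus enforced without a second Hamiltonian ever appearing; restricting attention (and the energy integral) to $x\le 0$ recovers the physical problem.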
Why are some variables in PHP written in uppercase? Why does some code in PHP have to be written in uppercase? For instance: if(isSet($_GET['lang'])) $lang = $_GET['lang']; $_SESSION['lang'] = $lang; Do they work if I write them in lowercase? "does this work?" - Why don't you just try it? If you refer to the function names: Yes, those are case-insensitive. You can use IsSET(), IsSeT(), isSET() to your heart's content. If you refer to the variables $_GET etc.: No, variable names in PHP are case-sensitive: Variables in PHP are represented by a dollar sign followed by the name of the variable. The variable name is case-sensitive. Variable names are case-sensitive. Some of the ones you mention have to be written uppercase because that is how they are defined: the superglobal arrays like $_GET and $_SESSION et al. Since variable names are case-sensitive, $_get would simply be a different, undefined variable.
Can we use stores from different libraries together in a React project? Right now we are using easy-peasy in our project; now we want to remove the easy-peasy store gradually and replace it with Redux Toolkit. Can we use stores from different libraries in React? Or is there an alternative way to deal with this situation? I tried the below, which is not working: **Creating store out of reduxtoolkit** import { configureStore,combineReducers } from "@reduxjs/toolkit"; import appReducer from "./slice/appReducer"; const rootReducer= combineReducers({ app: appReducer, }); const store = configureStore({ reducer: rootReducer, }); **For easy peasy** import models from './models'; import { createStore } from 'easy-peasy'; const store = createStore(models); **In main file** <Provider store={store }> <Provider store={reduxStore }> <App/> </Provider> </Provider> **It is failing with Error:** easy-peasy.esm.js:93 Uncaught TypeError: store.getActions is not a function Basically, the missing step on my side was importing each Provider from the correct package. import { Provider as ReduxProvider } from "react-redux"; import { StoreProvider as Provider } from 'easy-peasy'; <ReduxProvider store={reduxStore }> <Provider store={easyPeasystore }> Changing my code with the above lines solved the problem.
First to flip N heads wins? Everyone knows the classic problem of A and B playing a game, with the first to flip heads winning the game. I understand that we can easily solve for the probability of A winning if she flips first, which is $2/3$, since $p_A=\frac{1}{2} + \frac{1}{2}(1-p_A)$. Now what if I generalise the problem to $N$ heads, such that the first to flip a total of $N$ heads wins? I'm unsure of how to construct the recursion relation here. My gut tells me the probability of either player winning tends to $1/2$ as $N\rightarrow\infty$. Could someone guide me on this? Thank you! We can solve this question through states. Suppose $p_{i,j}$ denotes the probability that $A$ wins if he starts first with $i$ heads, and $B$ starts with $j$ heads. Note that $p_{N,j}=1$ and $p_{i,N}=0$ for $i<N$. (Here, I will let $p_{N,N}=1$ for convenience.) Now, consider the first turn on which at least one person flips a head. Clearly, the probability that it is only $A$, only $B$, or both is equal, so we have the recurrence $p_{i,j}=\frac{1}{3}\left(p_{i+1,j}+p_{i,j+1}+p_{i+1,j+1}\right)$. This is very similar to Pascal's triangle, and the closed form for this can be computed explicitly through finite differences. Use the Negative Binomial Distribution. The probability that $X$ trials are needed to obtain $N$ successes is $f(x;N,p_A)=\binom{x-1}{N-1}p_A^N(1-p_A)^{x-N}$, for $x=N,N+1,...$ Since this is likely a fair coin, $f(x;N,0.5)=\binom{x-1}{N-1}\times 0.5^x$
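Taking the game to be strictly alternating single flips of a fair coin (one reading of the original problem; the state-based answer above instead lets both players flip each round), the exact win probability can be computed by solving the one-round equations. Below is a sketch: `win_prob` is my own name, and the closed form (2*h_me + 1 - h_op)/3 comes from eliminating the tails/tails loop, which returns the game to the same state and so gives a linear equation rather than a direct recursion.

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def win_prob(i, j):
    """P(the player about to flip collects i more heads before the
    opponent, who flips next, collects j more) -- fair coin, alternating."""
    # h_me: my winning probability in the sub-game reached when my flip is heads
    h_me = Fraction(1) if i == 1 else 1 - win_prob(j, i - 1)
    # h_op: the opponent's analogue; needed because tails/tails repeats the
    # state, so the two players' equations must be solved simultaneously
    h_op = Fraction(1) if j == 1 else 1 - win_prob(i, j - 1)
    return (2 * h_me + 1 - h_op) / 3
```

`win_prob(1, 1)` recovers the classic 2/3, and `win_prob(N, N)` falls toward 1/2 as N grows, matching the guess in the question.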
Query Security Events in Windows 10 I enabled Object Access auditing on my Windows 10 laptop using the instructions on this page. I then enabled auditing on a file (i.e. Sleep Times.csv). I accessed the file and manually looked through the Windows Logs > Security events in Event Viewer. I found that an event was generated. I would like to execute a query that shows me all of the events where Object Name == ...\Sleep Times.csv. How do I do this? You can try the following using command prompt as an administrator: wevtutil qe Security | findstr /C:"Sleep Times" I think /C: is causing the error message FIND: Invalid switch @Nathan Need to use findstr not find.
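Event Viewer can also run this query directly. Below is a sketch of an XPath filter you could paste into Filter Current Log > XML (or pass via `wevtutil qe Security /q:"..."` or PowerShell's `Get-WinEvent -FilterXPath`). Event ID 4663 is the "an attempt was made to access an object" audit event; the full path below is a placeholder to replace with the real one, since the event-log XPath subset has no substring matching and must match ObjectName exactly:

```xml
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">
      *[System[(EventID=4663)]]
      and
      *[EventData[Data[@Name='ObjectName']='C:\full\path\to\Sleep Times.csv']]
    </Select>
  </Query>
</QueryList>
```

Saving this as a Custom View keeps the filtered results available without re-running the query each time.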
How are C_CreateObject, C_GenerateKey and C_GenerateKeyPair different? Our PKCS#11 library is missing the implementation of C_CreateObject. Before jumping into its implementation, I want to know the instances where C_CreateObject should be used instead of C_GenerateKey/C_GenerateKeyPair. I'm not sure what you're asking here. C_CreateObject imports existing data objects, certificates and/or externally generated keys into the device. C_GenerateKey generates a new symmetric key (DES/AES/...) inside the device. C_GenerateKeyPair generates a new asymmetric key pair (RSA/ECC/...) inside the device. PKCS#11 treats key pairs (public key and private key), certificates and secret keys as objects on the token. There is also a Data Object, which just holds some data. C_GenerateKey creates a Secret Key Object. C_GenerateKeyPair creates a Public Key Object and a Private Key Object (the public key and private key combined constitute the key pair). When you say C_GenerateKey, PKCS#11 knows that it has to generate a Secret Key Object and expects a Secret Key Object template. Similarly with C_GenerateKeyPair: it knows that it has to create a Public Key Object and a Private Key Object and expects their respective object templates. But when you say C_CreateObject, you have to specify which type of object you want to create, and pass in the right object template as well. Consider this an abstract version of the C_GenerateXXX methods.
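To make the contrast concrete, here is a non-runnable C fragment (a sketch only: `hSession` would come from an earlier C_OpenSession/C_Login, and error handling is omitted) importing an externally generated AES key via C_CreateObject. The point is that the caller must state the object class and supply CKA_VALUE, whereas with C_GenerateKey the device fills CKA_VALUE in itself:

```c
CK_OBJECT_CLASS cls  = CKO_SECRET_KEY;   /* caller must say what kind of object */
CK_KEY_TYPE     type = CKK_AES;
CK_BYTE         value[32] = { 0 };       /* externally generated key bytes */
CK_BBOOL        yes  = CK_TRUE;
CK_ATTRIBUTE tmpl[] = {
    {CKA_CLASS,    &cls,  sizeof cls},
    {CKA_KEY_TYPE, &type, sizeof type},
    {CKA_TOKEN,    &yes,  sizeof yes},
    {CKA_VALUE,    value, sizeof value}, /* key material supplied by the caller */
};
CK_OBJECT_HANDLE hKey;
CK_RV rv = C_CreateObject(hSession, tmpl,
                          sizeof tmpl / sizeof tmpl[0], &hKey);
/* With C_GenerateKey you would instead pass a CK_MECHANISM
   (e.g. CKM_AES_KEY_GEN) and leave CKA_VALUE out of the template. */
```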
Why is the twitter cursor null? I have some tweet id. I want to get the retweeters of this tweet. Therefore, I use this API: https://dev.twitter.com/rest/reference/get/statuses/retweeters/ids. Here is an example of code written with Python 3 and TwitterAPI: credentials = "credentials.txt" o = TwitterOAuth.read_file(credentials) api = TwitterAPI(o.consumer_key, o.consumer_secret, auth_type='oAuth2') data = api.request('statuses/retweeters/ids', {'id': "370134917713494016", 'count': 100}) My result is: {"ids":[id1,id2,..id100],"next_cursor":0,"next_cursor_str":"0","previous_cursor":0,"previous_cursor_str":"0"} I don't understand why my cursors are null. @TerryJanReedy, thank you for correcting my stupid errors! That's how the API for retweeters works. While this method supports the cursor parameter, the entire result set can be returned in a single cursored collection. Using the count parameter with this method will not provide segmented cursors for use with this parameter. You can only get a maximum of 100 users out of it. So there's no need for a cursor. So why does twitter return a zero cursor? Cursors are used to move between pages of results. You have all the results. Therefore, the cursor is 0. There are no more results. I am sorry, but why did you give me GET statuses/retweets/:id? My question is about GET statuses/retweeters/ids. Sorry @Denis, I've updated my answer with the correct link & text. The answer is still the same though - you can only get a maximum of 100 IDs out of the API. It is okay. But this limit makes me sad. @TerenceEden But shouldn't there be more pages with max 100 each? The documentation says "Causes the list of IDs to be broken into pages of no more than 100 IDs at a time." @vloryan this answer is 3 years old - the API may have changed in that time. Please open a new question.
How to make Geological Cross Sections in Adobe Illustrator, and any suggestions for GIS software? I have been tasked with producing a geological cross section in Adobe Illustrator. It is a lengthy process, as I am new to the software. I never learned it in my GIS studies. I know that there is plenty of GIS software out there to produce x sections automatically (we have Petrel for example), but they want it done in Adobe Illustrator as it's not a data driven x section. Does anyone have any familiarity with doing such a thing? I am having to basically use the cutting tool to divide up a box into the sedimentary layers. Then I fill the cut shapes with the correct colour. Complications arise, however: I have to show wavy lines between some sedimentary layers to indicate a gap, or layers which aren't shown. I am using the distortion tool to automatically produce the wavy line, which means I need to: a. copy/paste the original area b. change the appearance so there is no fill, delete all strokes except the one which will turn into the wavy line, apply distortion c. copy/paste the original area d. change the appearance so there is no fill, delete the stroke that will turn into the wavy line, change stroke to black line This means I have 3 elements whenever there is a part of the cross section showing a wavy line: the area which has a fill but no stroke, the wavy line (derived from the area by deleting the fill and strokes which aren't wavy), the rest of the stroke around the area. Does anyone know of a faster way to do such a thing? I wish I could just draw a shape using some sort of line tool and AI would detect when there is a closed off area so I could then select a colour fill. Another problem is the cutting tool does not make it easy to produce matching curves: In this image you can see that there is a wrinkle across many sedimentary layers, but my cutting skills in Illustrator mean that the lines do not follow a nice uniform shape. 
Of course this is the case in the real world as well, although at the moment the image has a bit too much of a 'hand drawn' look to it. Part of the problem is that the cutting tool, usually helpfully, smooths out the cut line as one goes (digitising a smooth curve with a mouse is impossible!). Sometimes this smoothing has undesired effects. It would be good to know if I can apply an offset to another line when cutting. Is it right that I should be using the cutting tool as my primary way of setting up a x section, anyway? I'm thinking a better way to do all of this would be to somehow have a tool that enables me to select vertices across multiple lines in sedimentary layers, then drag them up or down. This would then apply the same amount of curve across the layers. What is the underlying source of the data? You can use a mesh to set out your initial layers. This will allow you to move multiple points in a uniform direction. Data source: It is a pencil sketch from a geologist. Where he gets his data from I don't know. I'd prefer to make a data driven x section in Petrel or something, but it seems that's not available. I found x section tutes but in French!!! https://www.youtube.com/watch?v=3c-5so0omRI&list=PL8B4728ADF550F5BD&index=5 and https://www.youtube.com/watch?v=iII8WGbA1fI I had a similar task to do a while ago using Illustrator CS2. Names and menus etc. may have changed in the newer versions, but I'm sure you can still do the same thing. My workflow went something like this: Add the image to be traced. Scale, rotate it etc. until it's approximately the right size and shape. Trace the various geological contacts using the Pen tool. With each click you add a node, and by clicking and dragging you can create smooth segments. This allows you to create smooth lines with sharp discontinuities. An alternative is to use the Pen tool to click roughly along the desired line, then use the Direct Selection tool to select some or all of the line's nodes. 
You can then use Object > Path > Simplify to smooth just the selected part of the line. Again, this allows you to create smooth lines with sharp breaks for faults etc. Apply any distortion tools to create your wavy lines as necessary. Once your contacts have been traced, select them all and copy them to a new layer positioned below your original lines layer (ctrl+f will paste them into the same location as the originals). Add some additional lines to mark the boundary of your tracing (i.e. lines for the left, right and bottom edges of your section), then select all the lines in the new layer and create a Live Paint group. This is a bit like the Feature to Polygon tool in ArcGIS: it automatically identifies the polygons defined by your lines and lets you fill them using the Live Paint tool (a bucket symbol). Once you've finished adding colours etc. you can Expand the Live Paint group. You should end up with two layers: one on top which has your contact lines and another below it which has coloured polygons representing your stratigraphic units. Apologies if the above is a little vague - I don't have Illustrator on this computer, so I can't check the details at the moment. It may also be worth looking at Illustrator's Live Trace function. If your hand-drawn sections are reasonably neat, then you might be able to use this to digitize the lines for you. However, if most of the sections are rough pencil sketches like in your example, it probably won't work very well. Make pattern tiles for the various fills. Also better to draw with the pen tool and adjust curves. Once you get good curved lines, copy to new layers and connect the ends to make perfectly matching shapes.
Getting a nice form from a series of PDEs I had the left form and I believe I identified it to be the thing on the right: $$\sum_{i,j=1}^n \frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_j}\overset{?}{=}\nabla u \cdot \nabla v$$ But then I had: $$\sum_{i,j=1}^n a_{ij}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_j}$$ where $a_{ij}$ are just entries of an $n\times n$ matrix. Is there some way to get this into a nice $\nabla$-filled form? $\nabla u A \nabla v$? @Thomas So then my first line is wrong, and I believe you are right. $\nabla u \cdot \nabla v$ is the same as \begin{equation} \frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i} \end{equation} The expression \begin{equation} \frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_j} \end{equation} is a second-order tensor and therefore should be just $\nabla u \nabla v$, with no dot between the two. Further, \begin{equation} a_{ij}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_j} \end{equation} can be written as $\nabla u \cdot A \cdot \nabla v$, where $A$ is a second-order tensor, represented as a matrix. Are there meant to be sums? Yes, I have used the Einstein summation convention. Oh crêpe, I didn't see the repeated term, thanks. I don't understand the last part though; are you saying that $\nabla u \nabla v = \nabla u \cdot A \cdot \nabla v$? @PDE, you are right, I forgot to add $a_{ij}$ in the last formula. Thanks for pointing it out.
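Writing out the $n=2$ case makes the matrix form transparent (with $\partial_i$ short for $\partial/\partial x_i$):

$$\sum_{i,j=1}^{2} a_{ij}\,\partial_i u\,\partial_j v
= \begin{pmatrix}\partial_1 u & \partial_2 u\end{pmatrix}
\begin{pmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{pmatrix}
\begin{pmatrix}\partial_1 v\\ \partial_2 v\end{pmatrix}
= (\nabla u)^{\mathsf T} A\,\nabla v,$$

and taking $A=I$ collapses this to $\nabla u \cdot \nabla v$, i.e. the single-sum expression $\sum_i \partial_i u\,\partial_i v$.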
Extra parentheses on function Possible Duplicate: What do parentheses surrounding a JavaScript object/function/class declaration mean? What does this “(function(){});”, a function inside brackets, mean in javascript? A Javascript function I encountered markup similar to this: var something = (function(){ //do stuff return stuff; })() document.ondblclick = function(e) { alert(something(e)) }; I don't understand the opening ( and closing )() in the something variable. Could you explain the difference to writing it like this? var something = function(){ //do stuff return stuff; }; Thanks! () invokes the function, the other parens are redundant. So in the second case something would be assigned to the function and in first case something would be assigned to stuff go to google, type 'JavaScript closure' and start reading :) Both are function expressions. The first looks like an attempt to explicitly make it a function expression. For instance, remove var something =, and the first will still run. However, when the parentheses at the edges are removed, an error will be thrown: "function statement requires a name" (assuming that the writer has thought about this, of course!). The first is immediately invoked, so something references the value of the stuff variable. The latter something is a reference to a function. When called, it refers to the value of the stuff variable at that moment. @EliasVanOotegem: That depends on the returned stuff, see Is the following JavaScript construct called a Closure? @Esailija @RobW meaning that in the first case the function is executed every time I use the something variable (which could therefore change every time), but in the second case the function is executed only at the variable declaration, and something always returns the same value? @Sean in the first case, something is stuff, whatever that is. In the second case something is just that function. @Sean In the first case, the something variable does not hold a function reference. 
The parentheses after the function immediately call the function, so the return value is stored in something. In the second case, something is a function. The return value depends on an external variable, so it might be different. @EliasVanOotegem: well, stuff is a function here, because the snippet goes on to do this: something(e). @RobW: in the first case, stuff is a function reference though... as I said in my comment to Bergi @EliasVanOotegem It is expected to be a function reference (as seen in the ondblclick event). In the second function, stuff is expected to be a string though. @RobW, no it isn't: the event object (e) is being passed to it: I'm yet to see working code where a string constant accepts an argument... @EliasVanOotegem @RobW Yes, in the original code stuff is a function... Thank you everyone for your quick and very good answers and comments! It's not easy but I think I understood it, very useful! Thanks again! It's probably easier to understand if you leave the redundant parens out, because they serve no purpose: var something = function() { return 3; }(); // a function, immediately invoked: the result (3, the return value) is assigned to a variable called something console.log(something) //3 because the function returned 3 var something = function() { return 3; }; // a function is assigned to a variable called something console.log(something) //logs the function body because it was assigned a function console.log(something()) //invoke the function assigned to something, resulting in 3 being logged to the console because that's what the function returns Accepted this answer because it was the simplest and clearest to me, but really all answers were correct and extremely useful! Thank you!! @Sean thanks a lot, I was about to delete it because it was not getting upvotes :P I'd rather add those redundant parens.
JSLint says: "Wrap an immediate function invocation in parentheses to assist the reader in understanding that the expression is the result of a function, and not the function itself." @VainFellowman yes, I was afraid someone who agrees with Crockford would come here and comment that. I disagree, because I would just make a function declaration (function something(){}) if the result would just be the function itself anyway. The declaration is easier to write and you get a named function for free as well. But this is just an opinion, as is what JSLint says. @Esailija good point. Having a named function is a good thing anyway, for example in debugging. Adding those parens is a good habit I think; it costs no extra time and is way easier to understand. @VainFellowman @Esailija think I got it now... So I could write: var tag = document.createElement("script"); tag.src = fileName; tag.onload = function(arg1, arg2) { return function() { alert (arg1 + arg2); }; }('Script ', 'loaded!'); document.body.appendChild(tag); and it would alert Script loaded! when the script loads, correct? Not that this example itself has much sense :) Ok - all clear now. Thanks a LOT guys! (function(){ ... }) is an (anonymous) function expression; you could e.g. assign that value to a variable. The brackets behind it will immediately execute the function expression, resulting in the return value of the function (here: stuff). The construct is called an IIFE. When stuff is a function (which I assume, because you invoke something later on), this is called a closure - the returned function (stuff, assigned to something) still has access to the variables in the execution context of that anonymous function. On the question of what it does, read all the comments and other answers. They are absolutely right. Why would you want to use it? You find this pattern very often when using closures.
The intent of the following code snippet is to add an event handler to 10 different DOM elements and each one should alert it’s ID attribute (e.g. “You’ve clicked 3″). You should know that if this was your actual intent, then there is a much easier way to do this, but for academic reasons let’s stick with this implementation. var unorderedList = $( "ul" ); for (var i = 0; i < 10; i++) { $("<li />", { id: i, text: "Link " + i, click: function() { console.log("You've clicked " + i); } }).appendTo( unorderedList ); } The output of the above code may not be what you first expect. The result of every click handler will be “You’ve clicked 9″ because the value of i at the point the event handler was fired is “9″. What the developer really wanted is for the value of i to be displayed at the point in time the event handler was defined. In order to fix the above bug we can introduce a closure. var unorderedList = $( "ul" ), i; for (i = 0; i < 10; i++) { $("<li />", { id: i, text: "Link " + i, click: function(index) { return function() { console.log("You've clicked " + index); } }(i) }).appendTo( unorderedList ); } You can execute and modify the above code from jsFiddle. One way to fix the above code is to utilize a self-executing anonymous function. That is a fancy term that means we are going to create a nameless function and then immediately call it. The value of this technique is that the scope of the variable stays within the function. So, first we will surround the event handler content in a function and then immediately call the function and pass in the value of i. By doing that, when the event handler is triggered it will contain the value of i that existed when the event handler was defined. Further reading on closures: Use Cases for JavaScript Closures Many words to say a simple thing. Do you really need to throw in unrelated jQuery syntax to confuse the OP? Your answer in the comments is perfectly right of course (+1 btw). It's an example of a simple use case. 
And I guess jQuery is widely known, so it makes sense to use it in an example one could actually come accross. You said "In order to fix the above bug we can introduce a closure.", but the function you assign to click in the first example is a already a closure (it closes over i). Using a closure is not the way to solve that problem, making a function call and creating a new scope is the solution. That's what you have done in the second example, but you have not introduced a new closure (at least you are not using it as such). @Felix Kling yes, you are right. It is a closure too. By using a self executing anonymous function, we create a new scope, as you state in your comment. All of the answers were good, but I think the simplest answer has been skimmed over: var something = (function(){ //do stuff return stuff; })() After this code executes, something becomes stuff. The function that returned stuff is executed before something is assigned. var something = function(){ //do stuff return stuff; }; After this code executes, something is a function which returns stuff. The function that returns stuff was never executed. That's because this was already discussed in the comments :P Check the JavaScript FAQ section, too: Here are some pretty good explanations and examples Ok, why should you use this: Suppose my script is running, and there are a couple of things (I'm, for instance, looping through a nodes list) I might be needing later on. 
That's why I might choose to do something like this:

for (var i = 0; i < nodesList.length; i++) {
    if (nodesList[i].id === "theOneINeed") {
        aClosure = (function(node, indexInNodesList) { // assign here
            return function() {
                node.style.display = 'none'; // usable after the parent function returns
                alert(indexInNodesList + ' is now invisible');
            }
        })(nodesList[i], i); // pass the element and its index as arguments here
        break;
    }
}
i = 99999;
aClosure(); // no arguments, but it'll still work: say i was 15, even though I've just
            // assigned another value to i, it'll alert '15 is now invisible'

What this enables me to do is prevent function arguments from being garbage collected. Normally, after a function returns, all its vars and arguments are GC'd. But in this case, the function returned another function that has a link to those arguments (it needs them), so they're not GC'd for as long as aClosure exists. As I said in my comment: Google closures, practice a bit, and it'll dawn on you... they really are quite powerful You are mixing the concept of closures and self-invoking functions, which can be very confusing. you're right, but I merely wanted to point out why you might want to use them. A closure was the first thing that sprung to mind. Especially because in the snippets the OP provided, the return value/object is a function. Looked like a closure to me
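Pulling the two ideas from this thread together — an IIFE yields its return value rather than the function itself, and an immediately invoked wrapper can freeze a loop variable — here is a jQuery-free sketch (the variable names are illustrative, not from the thread):

```javascript
// 1) Invoked vs. not invoked: `invoked` holds the return value,
//    `notInvoked` holds the function itself.
var invoked = (function () {
  return 3;
})();
var notInvoked = function () {
  return 3;
};
console.log(invoked);           // 3
console.log(typeof notInvoked); // "function"
console.log(notInvoked());      // 3

// 2) Freezing a loop variable. With a shared `var i`, every handler
//    would report the final value; the self-executing wrapper captures
//    the value of `i` at the moment each handler is created.
var handlers = [];
for (var i = 0; i < 3; i++) {
  handlers.push((function (index) {
    return function () { return "You've clicked " + index; };
  })(i));
}
console.log(handlers[0]()); // "You've clicked 0"
console.log(handlers[2]()); // "You've clicked 2"
```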
common-pile/stackexchange_filtered
Dash - Setting nav bar for the dashboard Application I am building a dashboard using Python Dash. I am using the Materialize.css framework for CSS. I want to create a navigation bar whose unordered list is shown on any device, either mobile or desktop. I am referring to this website: Extended Nav Bar with Tabs. How do I make the webpage more responsive? The sample code is posted here

layout = html.Div(id='main-page-content', children=[
    # nav wrapper starts here
    html.Div(
        children=[
            # nav bar
            html.Nav(
                # inside div
                html.Div(
                    children=[
                        html.A(
                            'Dashboard Analytics',
                            className='brand-logo',
                            href='/'
                        ),
                        # ul list components
                        html.Ul(
                            children=[
                                html.Li(html.A('Configuration', href='/apps/config')),
                                html.Li(html.A('Segmentation', href='/apps/segmentation')),
                                html.Li(html.A('Main Page', href='/apps/users')),
                            ],
                            id='nav-mobile',
                            className='right hide-on-med-and-down'
                        ),
                    ],
                    className='nav-wrapper'
                ), style={'background-color': '#008EFE'}),
        ],
        className='navbar-fixed'
    ),
])

# define the external urls
external_css = ['https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/css/materialize.min.css']
for css in external_css:
    app.css.append_css({'external_url': css})

external_js = ['https://cdnjs.cloudflare.com/ajax/libs/materialize/0.100.2/js/materialize.min.js']
for js in external_js:
    app.scripts.append_script({'external_url': js})

When I view the webpage on mobile the options must be shown in the side navigation, so I can navigate through different pages without actually going back on mobile devices.
In addition, you forgot to give the nav element the class name 'nav-extended', but I don't know how essential that is.
How to display second form by clicking first lightning component form? Here I'm displaying two forms on a single page: 1. customer login form and 2. customer details form. Using the phone number I am able to get data from Salesforce by clicking a button and display it in the customer details form. Requirement is: I don't want to display two forms on the same page. How can I display the login form first, and after clicking a button display the customer details form?

Component:

<aura:component controller="RPAProcess" implements="force:lightningQuickAction,flexipage:availableForAllPageTypes" access="global">
    <aura:attribute name="RPABot" type="RPABot__c" default="{'sobjectType':'RPABot__c'}" />
    <aura:attribute name="PhoneNumber" type="String" default="" />
    <div class="slds-page-header">
        <div class="slds-align_absolute-center">
            <div class="slds-text-heading_large">
                <div class="slds-m-top_xx-large">
                    <b> Customer Login Form </b>
                </div>
            </div>
        </div>
    </div>
    <div class="slds-size_3-of-12">
        <lightning:input label="Phone Number" name="phonenumber" value="{!v.PhoneNumber}" required="true" />
        <br/>
        <lightning:button variant="brand" label="Get Data" onclick="{!c.getData}" />
    </div>
    <div class="slds-m-top_xx-large">
        <b>Cutomer Details </b>
    </div>
    <div class="slds-size_3-of-12">
        <lightning:input label="CustomerID" name="rpaId" value="{!v.RPABot.Customer_ID__c}" />
        <br/>
        <lightning:input label="Customer Name" name="customername" value="{!v.RPABot.Customer_Name__c}" />
        <br/>
        <lightning:input label="Company" name="dob" value="{!v.RPABot.Company__c}" />
        <br/>
    </div>
</aura:component>

component.js

({
    getData : function(component, event, helper) {
        var action = component.get("c.getDetails");
        action.setParams({
            phonenumber : component.get("v.PhoneNumber")
        });
        action.setCallback(this, function(response){
            component.set("v.RPABot", response.getReturnValue());
        });
        $A.enqueueAction(action);
    },
})

Add Customer_Name__c in the default of attribute RPABot.
Put 1st form and 2nd form in aura:if, such that 1st form is shown when Customer_Name__c is empty. When you get back the data Customer_Name__c is not empty and so 1st form is removed and 2nd form is rendered. Below is the code of .cmp: <aura:component controller="RPAProcess" implements="force:lightningQuickAction,flexipage:availableForAllPageTypes" access="global"> <aura:attribute name="RPABot" type="RPABot__c" default="{'sobjectType':'RPABot__c','Customer_Name__c':''}" /> <aura:attribute name="PhoneNumber" type="String" default="" /> <aura:if isTrue="{!empty(v.RPABot.Customer_Name__c)}"> <div class="slds-page-header"> <div class="slds-align_absolute-center"> <div class="slds-text-heading_large"> <div class="slds-m-top_xx-large"> <b> Customer Login Form </b> </div> </div> </div> </div> <div class="slds-size_3-of-12"> <lightning:input label="Phone Number" name="phonenumber" value="{!v.PhoneNumber}" required="true" /> <br /> <lightning:button variant="brand" label="Get Data" onclick="{!c.getData}" /> </div> </aura:if> <aura:if isTrue="{!not(empty(v.RPABot.Customer_Name__c))}"> <div class="slds-m-top_xx-large"> <b>Cutomer Details </b> </div> <div class="slds-size_3-of-12"> <lightning:input label="CustomerID" name="rpaId" value="{!v.RPABot.Customer_ID__c}" /> <br /> <lightning:input label="Customer Name" name="customername" value="{!v.RPABot.Customer_Name__c}" /> <br /> <lightning:input label="Company" name="dob" value="{!v.RPABot.Company__c}" /> <br /> </div> </aura:if> </aura:component> Thats great.It works perfectly.Thanks Without going in much of the code detail, you can use aura:if to decide at run time which form to display. Now when user completes form1, fire a component event to the parent component to convey form1 is completed and show form 2. 
Code will be somewhat like:

Parent Component:

<aura:component>
    <aura:attribute type="Boolean" default="true" name="showForm1"/>
    <aura:handler name="completedForm1Event" event="c:compEvent" action="{!c.handleForm1Completion}"/>
    <aura:if isTrue="{!v.showForm1}">
        <c:form1 />
    </aura:if>
    <aura:if isTrue="{!NOT(v.showForm1)}">
        <c:form2 />
    </aura:if>
</aura:component>

Parent JS:

({
    handleForm1Completion : function(component) {
        component.set("v.showForm1", false);
    }
})

Thanks for giving the info. Here my requirement is: based on the phone number I want to get the Salesforce record details, but I want to display the details on another page by clicking the "details" button on the first page. How do I achieve this? Please help me on this. please check the image added at the top. After clicking the Get Data button I want to display the details on another page. How do I achieve this? @Raghu As I mentioned, fire the event from form1, capture it in the parent, and make the parent display form2. thanks i will try and let you know any issues
How to use multiple CSS classes with MUI 5 SX prop? Does anyone know how to use multiple CSS classes with MUI 5 SX prop? I created a base class that I want to use with my Box components but use a second class specifically for the text inside of the Box. Applying base class, such as sx={styles.baseBoxStyle} works but sx={styles.baseBoxStyle styles.customBoxFontStyle} returns an error. Full code snippet and sandbox provided below. Any assistance is greatly appreciated! Sandbox: https://codesandbox.io/s/mui-5-styling-uqt9m?file=/pages/index.js import * as React from "react"; import Box from "@mui/material/Box"; const styles = { baseBoxStyle: { backgroundColor: "red", borderRadius: "20px 20px 20px 20px", border: "1px solid black", maxWidth: "150px", margin: "20px", padding: "10px" }, customBoxFontStyle: { color: "yellow", fontWeight: "bold" } }; export default function Index() { return <Box sx={styles.baseBoxStyle styles.customBoxFontStyle}>This is a test</Box>; } Sandbox link: https://codesandbox.io/s/mui-5-styling-uqt9m?file=/pages/index.js Hi, if I want to use theme in styles and pass it, how can I use it? You can try to use classnames as its commonly used library, or you can just make string from these styles that you pass into sx sx={styles.baseBoxStyle+' '+styles.customBoxFontStyle} I just now tried the suggested on sx prop but it doesn't work as they are objects. https://codesandbox.io/s/mui-5-styling-uqt9m?file=/pages/index.js oh sorry, i dont know why i thought, these are classes... but there you can try to combine these two objects: sx={{...styles.baseBoxStyle,...styles.customBoxFontStyle}} Sorry Wraithy, that doesn't work either. my apologizes. Your solution does indeed work. I confirmed this after consulting with another React developer who suggested the same. Not sure why it didn't work initially tested but I suspect I neglected to wrap it in additional curly brackets. I had a similar problem and came up with this solution. 
<Box sx={[styles.baseBoxStyle, styles.customBoxFontStyle]}> This is a test </Box> https://codesandbox.io/s/sweet-blackwell-oqp9ph?file=/src/App.js:416-517 If you want combine 2 calsses, and you get one of them as props you should work like this const componnent = (styles) => { return ( <ListItem sx={[ { width: 'auto', textDecoration: 'underline', }, ...(Array.isArray(styles) ? styles : [styles]), ]} /> ) } You cannot spread sx directly because SxProps (typeof sx) can be an array For a simple solution, can deconstruct the style objects and compile into one object. <Box sx={{...styles.baseBoxStyle,...styles.customBoxFontStyle}}>This is a test</Box> With the following functions import { SxProps, Theme } from "@mui/material/styles"; export const withSxProp = ( sxProp: SxProps<Theme> | undefined, other: SxProps<Theme> ): SxProps<Theme> => { if (!sxProp) { return other; } else { return mergeSx(sxProp, other); } }; export const mergeSx = (...sxProps: SxProps<Theme>[]): SxProps<Theme> => { return sxProps.reduce((prev, currentValue) => { return [ ...(Array.isArray(prev) ? prev : [prev]), ...(Array.isArray(currentValue) ? currentValue : [currentValue]), ]; }, [] as SxProps<Theme>); }; Usage class MyComponent = (parentSxProp: SxProps<Theme>) => { const sxChildDefault = { display: "flex" } const innerSx1 = { display: "flex" } const innerSx2 = { flexDirection: "column" } return <Box sx={withSxProp(parentSxProp, sxChildDefault)} > <Box sx={mergeSx(innerSx1, innerSx2)} /> </Box> } To build on Aleksander's answer, there is a section about merging sx in MUI official documentation. 
import * as React from 'react'; import ListItem from '@mui/material/ListItem'; import FormLabel from '@mui/material/FormLabel'; import { SxProps, Theme } from '@mui/material/styles'; interface ListHeaderProps { children: React.ReactNode; sx?: SxProps<Theme>; } function ListHeader({ sx = [], children }: ListHeaderProps) { return ( <ListItem sx={[ { width: 'auto', textDecoration: 'underline', }, // You cannot spread `sx` directly because `SxProps` (typeof sx) can be an array. ...(Array.isArray(sx) ? sx : [sx]), ]} > <FormLabel sx={{ color: 'inherit' }}>{children}</FormLabel> </ListItem> ); } export default function PassingSxProp() { return ( <ListHeader sx={(theme) => ({ color: 'info.main', ...theme.typography.overline, })} > Header </ListHeader> ); } I think you cant use sx with classes. I have not seen any example in documentation. In this context I'm referring to JSS as classes, not traditional CSS class. In the example I provided in this post, sx={styles.baseBoxStyle} works without issue. I see. styles.baseBoxStyle is js object. It pretends just temporary constant. In these cases making a new object by destruction will work. I was confused you try to do similar way as makeStyles does. I'm migrating from v4 to v5 and because I used makeStyles in v4, which is deprecated in v5, my goal was to not have to reformat all of my styling. Yeap. I am new to MUI. I have the same problem. All examples I find a references point are made with makeStyles in v4. I can use the legacy method for my works, it works but I am trying to figure out how should styles components in v5 way.
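A note on why the spread-based fix in this thread works: sx={{...a, ...b}} is ordinary JavaScript object spread, independent of MUI itself — keys from the later object win on conflict. A minimal sketch using the style objects from the question:

```javascript
const baseBoxStyle = {
  backgroundColor: "red",
  padding: "10px",
  color: "black"
};
const customBoxFontStyle = {
  color: "yellow",
  fontWeight: "bold"
};

// What sx={{...baseBoxStyle, ...customBoxFontStyle}} hands to MUI:
// one merged object, with keys from the later object winning on conflict.
const merged = { ...baseBoxStyle, ...customBoxFontStyle };

console.log(merged.color);           // "yellow" (custom wins)
console.log(merged.backgroundColor); // "red"    (base preserved)
console.log(merged.fontWeight);      // "bold"
```

The array form sx={[a, b]} from the other answers leaves the merging to MUI instead, which is why it also accepts theme callbacks.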
ggvis fill attribute is not working for certain layers or variables Pretty straightforward: This does not work iris %>% ggvis(x= ~Sepal.Length, y = ~Sepal.Width, fill=~Sepal.Length) %>% layer_bars() This it does iris %>% ggvis(x= ~Sepal.Length, y = ~Sepal.Width, fill=~Sepal.Length) %>% layer_points() Why? I actually managed to use the fill aesthetic with another dataset that I am not sharing, but that's just to point out that the fill should definitely work in my replicable example, right? From ?layer_bars If grouping var is continuous, you need to manually specify grouping iris %>% group_by(Sepal.Length) %>% ggvis(x= ~Sepal.Length, y = ~Sepal.Width, fill = ~Sepal.Length) %>% layer_bars() Which gives:
Creating a conditional ROW_NUMBER() Partition clause based on previous row value I have a table that looks like this:

+----------------+--------+
| EvidenceNumber | ID     |
+----------------+--------+
| 001            | 8      |
| 001.A          | 8      |
| 001.A.01       | 8      |
| 001.A.02       | 8      |
| 001.B          | 8      |
| 001.C          | 8      |
| 001.D          | 8      |
| 001.E          | 8      |
| 001.F          | 8      |
| 001.G          | 8      |
| 001.G.01       | 8      |
+----------------+--------+

If 001 were a bag, inside of it were 001.A, 001.B, and so on through to 001.G. In the output above, 001.A was another bag, and that bag contained 001.A.01 and 001.A.02. The same thing can be seen with 001.G.01. Every entry in this table is either a bag or an item. I am only interested in counting the number of items per ID. Since 001.A.01 and 001.A.02 are the last we see of the "001.A's", we know A.01 and A.02 were items. Since we see 001.B only once, that was an item as well. 001.G was a bag, but 001.G.01 was an item. The above output is showing 8 items and 3 bags. I feel like ROW_NUMBER and the PARTITION clause is the perfect tool for the job, but I can't find a way to partition based on a clause that uses a previous row's value. Maybe something like that isn't even necessary here, but I pictured it like:

{001} -- variable
{001}.A -- variable seen again, obviously 001 was a bag. Create new variable {001.A} and move on.
{001.A}.01 -- same thing.
{001.A.01} -- Unique variable. This is a final step. This is an item and should be Row number 1.

Obviously, the below code is just making "ItemNum" 1 for each item since there are no duplicates.
SELECT ROW_NUMBER() OVER(Partition BY EvidenceNumber ORDER BY EvidenceNumber) AS ItemNum, EvidenceNumber, ID FROM EVIDENCE WHERE ID = '18' ORDER BY EvidenceNumber +---------+----------------+--------+ | ItemNum | EvidenceNumber | ID | +---------+----------------+--------+ | 1 | 001 | 8 | | 1 | 001.A | 8 | | 1 | 001.A.01 | 8 | | 1 | 001.A.02 | 8 | | 1 | 001.B | 8 | | 1 | 001.C | 8 | | 1 | 001.D | 8 | | 1 | 001.E | 8 | | 1 | 001.F | 8 | | 1 | 001.G | 8 | | 1 | 001.G.01 | 8 | +---------+----------------+--------+ Ideally, it would partition on the items only, so in this case: +---------+----------------+----+ | ItemNum | EvidenceNumber | ID | +---------+----------------+----+ | 0 | 001 | 8 | | 0 | 001.A | 8 | | 1 | 001.A.01 | 8 | | 2 | 001.A.02 | 8 | | 3 | 001.B | 8 | | 4 | 001.C | 8 | | 5 | 001.D | 8 | | 6 | 001.E | 8 | | 7 | 001.F | 8 | | 0 | 001.G | 8 | | 8 | 001.G.01 | 8 | +---------+----------------+----+ Please edit your question and show the results you want. @GordonLinoff edited. I don't think window functions alone are the best approach. Instead: select t.*, (case when exists (select 1 from evidence t2 where t2.caseid = t.caseid and t2.EvidenceNumber like t.EvidenceNumber + '.%' ) then 0 else 1 end) as is_item from evidence t ; Then sum these up using another subquery: select t.*, sum(is_item) over (partition by caseid order by EvidenceNumber) as item_counter from (select t.*, (case when exists (select 1 from evidence t2 where t2.caseid = t.caseid and t2.EvidenceNumber like t.EvidenceNumber + '.%' ) then 0 else 1 end) as is_item from evidence t ) t; @myaccountname . . . The first query is a subquery in the second. I missed that you had named the table, so t is the alias for evidence. 
trick with Lead and Row_Number: DECLARE @Table TABLE ( EvidenceNumber varchar(64), Id int ) INSERT INTO @Table VALUES ('001',8), ('001.A',8), ('001.A.01',8), ('001.A.02',8), ('001.B',8), ('001.C',8), ('001.D',8), ('001.E',8), ('001.F',8), ('001.G',8), ('001.G.01',8); WITH CTE AS ( SELECT [IsBag] = PATINDEX(EvidenceNumber+'%', IsNull(LEAD(EvidenceNumber) OVER (ORDER BY EvidenceNumber),0) ), [EvidenceNumber], [Id] FROM @Table ) SELECT [NumItem] = IIF(IsBag = 0,ROW_NUMBER() OVER (PARTITION BY [ISBag] order by [IsBag]),0), [EvidenceNumber], [Id] FROM CTE ORDER BY EvidenceNumber
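To make the EXISTS idea from the first answer concrete, here is a hedged, self-contained sketch using SQLite through Python's standard library — not SQL Server as in the thread, so string concatenation uses || and a correlated COUNT stands in for the window SUM (which also keeps it runnable on older SQLite builds):

```python
import sqlite3

# In-memory stand-in for the EVIDENCE table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE evidence (EvidenceNumber TEXT, ID INTEGER)")
conn.executemany(
    "INSERT INTO evidence VALUES (?, ?)",
    [("001", 8), ("001.A", 8), ("001.A.01", 8), ("001.A.02", 8),
     ("001.B", 8), ("001.C", 8), ("001.D", 8), ("001.E", 8),
     ("001.F", 8), ("001.G", 8), ("001.G.01", 8)],
)

# A row is a bag if some other row extends it with '.something';
# otherwise it is an item. Items get a running number, bags get 0.
query = """
WITH flagged AS (
    SELECT t.EvidenceNumber,
           CASE WHEN EXISTS (SELECT 1 FROM evidence t2
                             WHERE t2.EvidenceNumber LIKE t.EvidenceNumber || '.%')
                THEN 0 ELSE 1 END AS is_item
    FROM evidence t
)
SELECT f.EvidenceNumber,
       CASE WHEN f.is_item = 0 THEN 0
            ELSE (SELECT COUNT(*) FROM flagged f2
                  WHERE f2.is_item = 1
                    AND f2.EvidenceNumber <= f.EvidenceNumber)
       END AS ItemNum
FROM flagged f
ORDER BY f.EvidenceNumber
"""
item_numbers = dict(conn.execute(query).fetchall())
print(item_numbers["001"])       # 0 -> a bag
print(item_numbers["001.A.01"])  # 1 -> first item
print(item_numbers["001.G.01"])  # 8 -> eighth item
```

This reproduces the ideal output table from the question: the three bags get 0 and the eight items are numbered 1 through 8.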
What should be the return type of WCF Service for large amount of data and wth different clients? I have created a WCF service whose return type is Dataset which is .NET framework compatible clients. But now my requirement gets changed and the clients can be platform independent i.e. service can be consumed by JAVA, Android phones and .NET application. My questions are: Which data type should I use which is compatible to all clients? i.e. JAVA don't have dataset as type(not much knowledge on JAVA) service that I've created is default one provided by .NET framework(NOT REST, not using SOAP manually) Data will be of thousand lines which return type will be better ? DO I have to use REST,SOAP for large amount of data ? how can I achieve this? please don't mark this question as DUPLICATE! And if it was a duplicate, why should we not mark it as such? @MartinKonecny why should one avoid SOAP, other than buzzword fanatistm "lolol REZTT LOL", please? @ThorstenDittmar you can but my question is complicated not much general @MartinKonecny by using REST I can send tabular data ?(i.e. currently I am sending the dataset to .NET client) but how to JAVA client and also ANDROID ? I mean how they consume if I won't change return type as dataset? I want to make service like that any client can consume it @highscore Java is a bit more than just Android - he could want to write a desktop application... @dirtydeveloper questions are not marked as duplicates based on their level of complexity, but when they've been answered before. @ThorstenDittmar I am new to stack overflow and don't know much about it but my title seems to be common and like it was answered before but not the actual question. anyway Thanks! I've just learned something. all of you please stop fighting and give clarifications of my question. please @DirtyDeveloper - you may want to consider using JSON + REST. 
JSON is very elegant if a little inefficient in sending data, however for small amounts of data (less than 1MB) it's fine. If you are sending multiple MB's of data, you can either paginate your JSON response, or use a binary serialization protocol. More here: http://theburningmonk.com/2013/09/binary-and-json-serializer-benchmarks-updated/ @MartinKonecny I agree with your above statement about JSON + REST, that's the immediate first choice. My advice still stands to use Xamarin because A: Xamarin is really cross platform, java is not, and B: java is a horrible language. .Net/Xamarin make things easy, while java is very painful to work with. @MartinKonecny thanks for the suggestion ! I am looking into that. @HighCore agree with you ,JAVA is very painful.:) Breaking a leg is painful. Using Java when used to C#/.net is simply a matter of willingness to learn something new. Language wars aren't helpful. Your requirement is to deliver data to non-.net clients, so deal with it. Martin gave you some good resource on how to do it. This is not meant to be offensive - just tired of that old "Language XY is painful" - wining. For large amount of data you should consider the followings: Try to use some kind of compression. I usually use 7zip's open source compression to reduce the data transfer. Use and share well defined DTO (Data Transfer Objects) on both server and clients. Use streams to transfer your data and parse them correctly. If you have less DTOs and large data in the objects, you can use SOAP otherwise stick to REST. Thanks for the response. I cannot use Compression and streams as of now. consider I m having 1000 product data rows (previously I was using datset as return type) how can I transfer the data using WCF then? return type? and if by using SOAP then how? any pointers please I do not think its a good idea to send datarows or datasets just like that over WCF. That is what I have explained in my answer that you need to make proper DTOs. 
can I use List as return type. I have read some of the threads which says it's not good to use list. what will be the return type? using default WCF will return SOAP so do I have to change anything else? @DirtyDeveloper ... Please understand when I said that you need to make your own DTOs. Imagine if you are consuming this web service on PHP, how would you convert List ? Its better to create your own list/array or objects for DT. I want to support for all clients including JAVA,android and NET applications. if I return strongly typed List will it be consumable by other clients? for NET I am sure it will work but for others I don't have much idea! that's why I mentioned List If its convert-able to a soap message, its really going to be used in any language. Down the road, you are just sending and receiving XMLs
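On the "DTOs plus compression" advice above: the mechanics are language-agnostic, so here is an illustrative Python sketch (the product fields are invented for the example, and gzip stands in for the 7zip-style compression the answer mentions) showing a thousand DTO-style rows serialized to a cross-platform format and compressed before transfer:

```python
import gzip
import json

# Hypothetical DTOs standing in for 1000 product rows pulled from a DataSet.
products = [
    {"id": i, "name": "Product %d" % i, "price": round(i * 0.37, 2)}
    for i in range(1000)
]

# Serialize to a cross-platform wire format instead of a .NET DataSet...
payload = json.dumps(products).encode("utf-8")

# ...and compress before sending, as suggested for large responses.
compressed = gzip.compress(payload)

print(len(payload))                    # raw JSON size in bytes
print(len(compressed) < len(payload))  # repetitive JSON compresses well
```

Any client — Java, Android, or .NET — can decompress and parse this without sharing .NET-specific types, which is the point of using plain DTOs on the wire.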
Custom Widget stays invisible in PyQt5 I'm trying to create a custom QWidget with PyQt5, but it seems like I'm missing some crucial information. The code itself doesn't produce any errors, but whenever I try to add one of my custom widgets to the Layout of the MainWindow it stays invisible. Interestingly enough a QWidget, that is placed inside my custom widget, is shown in the MainWindow. I haven't found anyone else who had the exact same problem, but I hope someone can explain to me what's wrong with my code or understanding of PyQt5.

import sys
from PyQt5.QtWidgets import *

class CustomWidget(QWidget):
    def __init__(self):
        super().__init__()
        self.layout = QVBoxLayout(self)
        self.setLayout(self.layout)
        self.innerwidget = QWidget()
        self.layout.addWidget(self.innerwidget)
        self.innerwidget.setFixedSize(50, 50)
        self.setFixedSize(100, 100)
        self.setStyleSheet("background-color:blue;")

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.CentralWidget = QWidget()
        self.setCentralWidget(self.CentralWidget)
        self.CentralWidget.setStyleSheet("background-color:green;")
        self.CentralWidget.resize(1000, 600)
        self.Layout = QHBoxLayout()
        self.CentralWidget.setLayout(self.Layout)

# ----Script---------
App = QApplication(sys.argv)
TestWindow = MainWindow()
# This one is there, since I can't reduce the size of the MainWindow further than 100x100,
# but it doesn't get drawn
TestWidget = CustomWidget()
TestWindow.Layout.addWidget(TestWidget)
# This one is shown correctly
TestWidget2 = CustomWidget()
TestWidget2.show()
TestWindow.show()
App.exec()

Your example works fine for me. What exactly were you expecting to see? And what platform are you testing this on? is this related to the size of the custom widget, compared to the child innerwidget size ? I added a picture of the executed code on Windows 10. The left one is the MainWindow with a custom widget inside, the right one is the custom widget on its own.
I'm expecting to see the blue square (representing the CustomWidget) in the MainWindow as well. However I only see the InnerWidget (red square). I use the setFixedSize function only to make sure that the QWidget I created inside the CustomWidget constructor isn't as big as the CustomWidget itself. Furthermore I'm able to verify that the CustomWidget exists inside the MainWindow by trying to minimize it. Since I can't reduce the size of my MainWindow to less than 100x100, I'm assuming that the CustomWidget just doesn't get drawn for some reason. @eNceo1423. The code you posted cannot possibly produce the result in your screenshot. None of the widgets are given a red background. This answer might come a little bit late, but eventually I found a workaround for my problem. It seems like there is a bug in the function "setStyleSheet" in objects that inherit QWidget. This causes the object to never change its background color and therefore stay invisible. The following overload of "setStyleSheet" solves this problem:

def setStyleSheet(self, p_str):
    super(CustomWidget, self).setStyleSheet(p_str)
    self.show()
    self.setAutoFillBackground(True)
How to type Firebase getDoc when using vue-concurrency? I'm trying to create a TypeScript Vue composable using Firebase and vue-concurrency. The docs say I have to explicitly type any intermediate values that were yielded. And I am trying to do that for a Firebase getDoc call, but Eslint is giving me an error saying Unsafe assignment of an 'any' value which I would like to solve: import { db } from 'src/firebase/config'; import { doc, getDoc, // DocumentData, DocumentSnapshot, } from 'firebase/firestore'; import { useTask } from 'vue-concurrency'; const getDocumentTask = useTask(function* ( signal, collectionName: string, documentId: string ) { const documentReference = doc(db, collectionName, documentId); // eslint-disable-next-line @typescript-eslint/no-unsafe-assignment const response: DocumentSnapshot = yield getDoc(documentReference); // <-- Eslint error here const document = { ...response.data(), id: response.id, }; return document; }); export default getDocumentTask; Alright, just found the answer (at least in my case). Replace 'firebase/firestore' with 'firebase/firestore/lite'
How to create perpendicular edges along one direction? I made path in Inkscape, import it to Blender and make it 3d. Now I need to bend it in one direction but wires connected to non-perpendicular points. Please look at image to understant what i mean. I don't think there's any automatic way to do that, for example you could create a grid above your mesh and use the Knife Project tool? Should I make mesh from bezier curve at first? what do you mean? It's already a mesh When I import it from svg - it's bezier curve. But then I convert it to mesh. could you please share your file? https://blend-exchange.giantcowfilms.com/ @moonboots added to post it makes my Blender crash, I'm not sure what you mean, what you show in the screenshot is a mesh, you were right to convert to mesh, now you could do what you want to do with a tool like Knife Project tool, I can make a full answer if needed Hmm. Okay. I will try it, but maybe I making something wrong and it can be made much easy? Here is SVG image. I need to import it to blender and bend in single dimension like i described in preview. https://svgur.com/s/Fpc @br. as moonboots said, this can be done easily enough using "Knife Project" <= That's your search keyword How about MeshDeformModifier ... you put a cage around your mesh (a simple box will do) and deform the cage. Make sure you have enough divisions in you cage to do what you want. I can write this up as an answer if this is of interest. Will try both and will say how solved, thanks actually what's the reason why you want to keep parallel edges? You can achieve that by creating a plane and subdividing it using Ctrl + R into parallel equally spaced edges, then using Knife Project tool, you can project your object to the plane, all you need to do later is to remove all the faces that fall outside the projected faces. 
But looking at your file, you need first to turn your object into a mesh using Alt+C and choose Mesh from Curve, then in edit mode, select all, press X and choose Limited Dissolve, in order to reduce the vertices in your object. Here's your edited file: Ctrl + R may not work because they will stay parallel to the mesh. Ctrl + R is to be used on the plane, not the object @Yohello1 Okay then it should work fine.
Adding a "Report the post" button/form? I have a user-generated-content website with WP. I would like to allow the readers to notify me of posts that should be taken down. I need to: add a button to the end of each post, that when clicked, will send the reader to another form, while taking with it the URL from which it came I need a form that will hold several dropdown options to choose from, and a "send this report" ability (coupled with some CAPTCHA) I would appreciate any idea/plugin on how to do this. (this older question didn't solve the problem for me, at my current level of understanding) To use the plugin in the question you linked to: Copy the code into Notepad or an equivalent program Save the file using the extension .php (make sure your operating system has file extensions visible so it doesn't save as filename.php.txt) Put the file into a compressed (or zipped) folder. How you do this depends on your operating system. Upload the zipped folder to your WordPress blog. (Plugins -> Add New -> Upload) Activate the plugin.
Worker Thread in Java I need to read the data from a table every minute through a thread & then perform a certain action. Should I just start a thread & put it in sleep mode for 1 minute once the task is done, and then again check if the table has any data, perform the task again & go to sleep for 1 minute... Is this the right approach? Can anyone provide me some sample code in Java for doing the same? Thanks! As so often, the Java 5 extensions from the java.util.concurrent package are a huge help here. You should use the ScheduledThreadPoolExecutor. Here is a small (untested) example: class ToSomethingRunnable implements Runnable { public void run() { // Add your customized code here to update the contents in the database. } } ScheduledExecutorService executor = Executors.newScheduledThreadPool(1); ScheduledFuture<?> future = executor.scheduleAtFixedRate(new ToSomethingRunnable(), 0, 1, TimeUnit.MINUTES); // at some point at the end future.cancel(false); executor.shutdown(); Update: A benefit of using an Executor is you can add many repeating tasks of various intervals, all sharing the same thread pool, and the simple but controlled shutdown. +1: newScheduledThreadPool(1) = newSingleThreadScheduledExecutor() and executor.shutdown() will cancel all the active tasks. A benefit of using an Executor is you can add many repeating tasks of various intervals all sharing the same thread(s).
(And the simple shutdown) An alternative to creating a thread yourself is to use the ExecutorService, where Executors.newScheduledThreadPool( 1 ) creates a thread pool of size 1 and scheduleAtFixedRate has the signature: scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit); public class ScheduledDBPoll { public static void main( String[] args ) { ScheduledExecutorService scheduler = Executors.newScheduledThreadPool( 1 ); ScheduledFuture<?> sf = scheduler.scheduleAtFixedRate( new Runnable() { public void run() { pollDB(); } }, 1, 60, TimeUnit.SECONDS ); } static void pollDB() { // read the table and perform your action here } } "Should I just start a thread & put it in sleep mode for 1 minute" Even if you want to go with a traditional thread (don't want to use the Executor framework), DO NOT USE SLEEP for waiting purposes, as it will not release the lock and it's a blocking operation. DO USE wait(timeout) here. Wrong Approach - public synchronized void doSomething(long time) throws InterruptedException { // ... Thread.sleep(time); } Correct Approach public synchronized void doSomething(long timeout) throws InterruptedException { // ... while (<condition does not hold>) { wait(timeout); // Immediately releases the current monitor } } You may want to check this link: https://www.securecoding.cert.org/confluence/display/java/LCK09-J.+Do+not+perform+operations+that+can+block+while+holding+a+lock Your approach is good. You can proceed with your approach. This sample will help you to do your task: class SimpleThread extends Thread { public SimpleThread(String str) { super(str); } public void run() { for (int i = 0; i < 10; i++) { System.out.println(i + " " + getName()); try { // Add your customized code here to update the contents in the database. sleep((int)(Math.random() * 1000)); } catch (InterruptedException e) {} } System.out.println("DONE! " + getName()); } } As someone suggested, the Java 5 extensions from the java.util.concurrent package contain better solutions to this problem. What is the purpose of Math.random() here?
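A self-contained variant of the ScheduledExecutorService approach from the answers above can be sketched as follows. The `Poller` class name, the `CountDownLatch`, and the millisecond demo period are illustrative choices for this sketch, not part of the original answers (a real poller would pass a 60-second period):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class Poller {

    // Runs `task` at a fixed rate until it has executed `times` times,
    // then cancels it and shuts the scheduler down cleanly.
    static void runFixedRate(Runnable task, long periodMillis, int times)
            throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(times);
        ScheduledFuture<?> future = scheduler.scheduleAtFixedRate(() -> {
            task.run();
            done.countDown();
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
        done.await();          // block without busy-waiting or Thread.sleep
        future.cancel(false);  // stop further runs; let a current run finish
        scheduler.shutdown();
    }

    public static void main(String[] args) throws InterruptedException {
        // The question's use case would use a 60_000 ms period; a short
        // period keeps the demo quick.
        runFixedRate(() -> System.out.println("poll the table"), 50, 3);
    }
}
```

Blocking on the latch instead of sleeping sidesteps the lock-holding problem the last answer warns about, since no monitor is held while waiting.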
How to hide/show a button in a tablerow depending on the data in that row in Angular? I am trying to make a user table in which all users from my MongoDB are shown. The user can follow or unfollow other users by clicking on a button inside the row of the table. If the current user already follows one of the users I want to show the unfollow button and vice versa. Here is an image of the UI to give you a better understanding, "volgen" meaning follow. Image: table of users The problem I have is that I use a method named "isFollowing(id: string)" inside an *ngIf expression which is called infinitely. I am not sure why. I have read about Angular re-evaluating ngIf expressions many times and that using methods inside ngIf expressions isn't best practice. Most suggest declaring a boolean variable instead of using a method. In my case this won't work, since the (un)follow button depends on the data that is provided in the ngFor. Does anyone know how I should approach this problem? Here is the isFollowing method: //check if the current user already follows the other user isFollowing(otherUser: string | undefined): boolean { console.log('isFollowing called from user-list.component.ts'); if (otherUser === undefined) { SweetAlert.showErrorAlert('Er gaat iets mis, probeer het opnieuw..'); return false; } if (this.currentUser.followingUsers?.includes(otherUser)) { SweetAlert.showSuccessAlert('Je hebt deze gebruiker ontvolgt.'); return true; } SweetAlert.showErrorAlert('Er gaat iets mis, probeer het opnieuw..'); return false; } Here is the HTML: <tbody *ngIf="users.length > 0"> <tr *ngFor="let user of filteredUsers; let i = index"> <th scope="row">{{ i + 1 }}</th> <td>{{ user.firstName | titlecase }}</td> <td>{{ user.lastName | titlecase }}</td> <td>{{ user.city | titlecase }}</td> <td *ngIf="isUser && !isFollowing(user._id)"> <a (click)="followUser(user._id)" class="btn customfriendbutton" > Volgen </a> </td> <td *ngIf="isUser && isFollowing(user._id)"> <a
(click)="unfollowUser(user._id)" class="btn customfriendbutton" > Ontvolgen </a> </td> <td *ngIf="isAdmin"> <a (click)="sweetAlertDeleteConfirmation(user._id)" class="btn customdeletebutton" > Verwijderen </a> </td> </tr> </tbody> The right way would be to use a pure pipe (read more on this topic here in the official Angular documentation). This means the value will only be re-evaluated when the array instance changes (so on each change you would need to define a new array for the followingUsers property). In your case you could make an IncludesPipe which could look like this: @Pipe({ name: 'includes' }) export class IncludesPipe<T> implements PipeTransform { transform(array: T[], item: T): boolean { return array.includes(item); } } You can then use it like this in your view (note that it is the current user's followingUsers array that should be checked, matching the question's isFollowing method): <td *ngIf="currentUser.followingUsers | includes : user._id; then unfollowTemplate else followTemplate"></td> <ng-template #followTemplate> <a (click)="followUser(user._id)" class="btn customfriendbutton">Volgen</a> </ng-template> <ng-template #unfollowTemplate> <a (click)="unfollowUser(user._id)" class="btn customfriendbutton">Ontvolgen</a> </ng-template> In this template example I use shorthand then and else blocks, which I think is more readable, and the condition only needs to be evaluated once instead of twice. If you don't want to change followingUsers to a new array on each change you would need to manually trigger change detection for this example to work properly.
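Stripped of the `@Pipe` decorator so it runs outside Angular, the pipe's transform logic can be sanity-checked on its own. The sample ids below are made up, and `indexOf` replaces `includes` only to keep the sketch runnable under older compile targets; a guard for an undefined array is added since `followingUsers` is optional in the question:

```typescript
// Same logic as the IncludesPipe above, minus the Angular decorator.
class IncludesPipe<T> {
  transform(array: T[] | undefined, item: T): boolean {
    // indexOf keeps the sketch ES5-friendly; Array.includes works the same.
    return !!array && array.indexOf(item) !== -1;
  }
}

const pipe = new IncludesPipe<string>();
const followingUsers = ["5f1a", "5f1b"]; // hypothetical user ids

console.log(pipe.transform(followingUsers, "5f1a")); // true  -> show "Ontvolgen"
console.log(pipe.transform(followingUsers, "9e99")); // false -> show "Volgen"
console.log(pipe.transform(undefined, "5f1a"));      // false -> no crash
```

Because the pipe is pure, Angular only re-runs `transform` when one of its inputs changes identity, which is what stops the infinite re-evaluation the question describes.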
Batch processing (offset of font marker) in QGIS I have a group of layers of type linestring. Every line consists of a simple line and a marker line (font marker). I have a batch file (CSV) with offsets for the font markers. Do you know if there is any function which performs this batch processing (offset of font markers) so I don't have to do it manually? Or does it have to be done manually for every line separately? I have QGIS 2.6.1. Using Python, find the symbol layer (type: QgsFontMarkerSymbolLayer), then you can use: symbol.setOffset(QPointF(#offsetX,#offsetY)) Find more in the QGIS docs
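A sketch of how the CSV-driven batch could look. The CSV column layout (`layer_name,offset_x,offset_y`) and the `FakeFontMarkerLayer` stand-in are assumptions made for illustration so the sketch runs outside QGIS; in the QGIS Python console you would iterate the real symbol layers and call `setOffset` with a `QPointF`:

```python
import csv
import io

# Assumed CSV layout: layer_name,offset_x,offset_y (adjust to your file).
sample_csv = "roads,2.0,-1.5\nrivers,0.5,0.5\n"

def load_offsets(fh):
    """Map layer name -> (x, y) offset from a CSV file object."""
    return {name: (float(x), float(y)) for name, x, y in csv.reader(fh)}

class FakeFontMarkerLayer:
    """Stand-in for QgsFontMarkerSymbolLayer; it just records the offset
    it was given, so the sketch is testable without QGIS installed."""
    def __init__(self):
        self.offset = None

    def setOffset(self, point):
        self.offset = point

offsets = load_offsets(io.StringIO(sample_csv))
layers = {"roads": FakeFontMarkerLayer(), "rivers": FakeFontMarkerLayer()}
for name, layer in layers.items():
    # In QGIS this would be layer.setOffset(QPointF(*offsets[name]))
    layer.setOffset(offsets[name])

print(layers["roads"].offset)  # (2.0, -1.5)
```

With the real layers the loop body is the only part that changes; the CSV parsing stays the same.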
I have a Fujitsu Siemens AMILO Pro V3515 laptop. Where can I get the proper drivers? I have been using Ubuntu for several weeks now. (With some prior Linux usage.) Several programs I have used have given me this error message: Xlib: extension "GLX" missing on display ":0.0". I think the error may be caused by not having the proper drivers. How can I fix this issue? Edit: lspci | grep VGA gave me this output: 01:00.0 VGA compatible controller: VIA Technologies, Inc. CN896/VN896/P4M900 [Chrome 9 HC] (rev 01) There are no additional drivers installed. I am using Ubuntu 11.04. What is your graphics card (lspci | grep VGA) and have you installed any graphics drivers (Additional Drivers window)? What version of Ubuntu are you using? @fossfreedom sorry, I didn't notice your comment. I added the info above. Thanks! VIA graphics are supported by the "OpenChrome" open-source graphics drivers. Unfortunately, these graphics cards are not supported by the manufacturer for Linux. The open-source graphics drivers are not actively developed. I've checked out the x-edgers PPA and x-stable PPA - the version has not changed between Natty and Oneiric. Three Ubuntu choices: Accept the issues Trade in your laptop for an Intel/NVIDIA/ATI graphics laptop Join the VIA graphics team and help out development, which looks very dead at the moment... So... Does this mean I should do video stuff on another computer? Depends upon what you mean by "video stuff" - if you want to play games, probably not. Day-to-day browsing, LibreOffice work etc. should work OK. I tried Blender... It didn't work... cocos2d... Didn't work... Voodoo match mover... didn't work. Other than that, Firefox has worked, and so has MuseScore. I think GIMP has worked as well. Plus, I developed a website using PHP, MySQL, and Apache (which obviously didn't use graphics.) Thanks! Yep - graphics-intensive stuff like those apps you will have issues with. Blender in particular also has some issues with ATI graphics.
Apache Flume ../flume.log permission denied using spooldir as source but ok with other sources I'm pretty new to using Flume, I just started testing it on a CDH 4.7.0 distribution. I'm configuring Flume through Cloudera Manager. I've set up an agent using a sequence generator as source and everything went fine, but I got an error when I configured the source as a spooling directory: log4j:ERROR setFile(null,true) call failed. java.io.FileNotFoundException: /var/log/flume-ng/flume-cmf-flume1-AGENT-xxxxx.log (Permission denied) I didn't change anything else, any ideas? My .conf: # SOURCE agent_pbe2.sources.spldir-src1.type = spooldir agent_pbe2.sources.spldir-src1.spoolDir = /tmp/Flume-PoC # CHANNEL agent_pbe2.channels.mem-chn1.type = memory agent_pbe2.channels.mem-chn1.capacity = 1000 agent_pbe2.channels.mem-chn1.transactionCapacity = 100 # SINK agent_pbe2.sinks.hdfs-snk1.type = hdfs agent_pbe2.sinks.hdfs-snk1.hdfs.path = hdfs://martehadoop/user/hduser/Flume-PoC/pbe2 # BIND SOURCES agent_pbe2.sources.spldir-src1.channels = mem-chn1 # BIND SINKS agent_pbe2.sinks.hdfs-snk1.channel = mem-chn1 Thanks! I'm answering my own question. The problem was related to user rights. The user the Flume service runs as must be able to access all resources involved in the agent(s). Cheers!
Does it make sense to reuse microservices in a desktop application with an "offline" mode? Lets say we have a microservice architecture consisting of: multiple frontend clients, e.g. an Android App, an iOS App, and a Javascript Web Frontend a Java Spring (HTTP Rest) microservice to access user data a Java Spring (HTTP Rest) microservice to access application data a keycloak authentication microservice Lets assume that all of this works fine with the backend microservices deployed in the cloud. Now, we want to create a desktop application. This desktop application should also work when the user does not have access to the Internet, given that he had Internet access before at some point. Reusing the Javascript Web Frontend in the Desktop application is quite easy with Electron, that should not be a problem. In theory, we could ship the Electron app with the jar files of the Java Spring backend services, and let the Electron app start these in a background process (given that the user has Java installed). Then, after authenticating with the authentication microservice in the cloud, the app would have to download the user and application data and store it in a local database, e.g. a file-based SQLite DB. The backend microservices running locally would use this local DB as a source, therefore they are able to work without Internet access. Is this a sensible approach for converting an existing web/mobile application to a desktop application with reasonable effort? My thoughts on this are: Reusing a lot of code is super good, especially because the tasks of the Java Spring microservices would be exactly the same locally as in the cloud Requiring the user to have Java installed is not so good, it kind of ruins the nice Electron experience where everything is self-contained In reality, there might be more microservices, including stuff like Consul, maybe requiring the user to have more stuff installed. 
Running all of that on the user's machine might be overkill, and might also be quite bad for performance. This is a question to ask the application's stakeholders - the issue about whether or not to deploy something on a user's own device is something which affects users and whoever is tasked with supporting those users, so it needs to be driven by business need and stakeholders understanding the implications of the choices they have available @BenCottrell "This is a question" - which of these questions do you mean? "Does this make sense" or "Is it common"? Are you saying that this approach makes sense, given that there is no problem whatsoever in requiring the user to have all the prerequisites installed? Yes, the question as to whether it makes sense is a business question rather than a technical one. I don't know what your stakeholders' specific needs/expectations are so it's impossible for me to say whether it makes sense. @BenCottrell Okay, thanks for the clarification, I see that this question is more of a business one. What about the "Is it common" question? Do you happen to have any experience/knowledge about it? How does "is it common" help you? The most interesting and useful solutions in computer science are the ones that are not "common." @BenCottrell: "the issue about whether or not to deploy something ... needs to be driven by business need and stakeholders" - that is true, but if I understand this correctly, the OP is not asking if those requirements make sense, but if the suggested architecture makes sense to realize these requirements. And the architecture of a system is usually not what the business side is going to decide on their own. I took the liberty of rewording your question a little bit (which is IMHO a very valid one), to keep the nitpickers satisfied - hope this helps. DocBrown thank you for the help. @RobertHarvey Yes, the most interesting solution is the one that is not common. But the most useful solution?
I strongly believe that there is a good reason for common solutions. Not that common solutions are the best ones, but one should definitely consider them. For what it's worth, popularity is not the best way to make decisions. Case in point. @RobertHarvey: you may have noted, I edited the "popularity" part out of the question. For me, this post looks like a valid question about software architecture, asking more for expertise than opinion on this topic. If you have any suggestions on how this can be more streamlined to this site's format, feel free to suggest (or make some edits on your own). @DocBrown: You already know my stance. Questions that include words like "sensible," "good," "best practice" or "most likely to win a beauty contest" are unanswerable unless the OP can state specific criteria or objectives. @DocBrown: I think the criteria here would be "there is a way simpler approach to achieve the same goal of bringing this Web application to the desktop" or "there is an approach which avoids or mitigates the mentioned drawbacks", or "there can be no simpler approach because of the following reasons". Maybe just a matter of wording? We should not close questions just because they contain some buzzwords. @DocBrown: Yeah, but what drawbacks? All I see in the OP is "Code Reuse," "Requires Java," "Requires [other]". It would take the better part of a book chapter to do the question justice, and after you write it, the OP will reply, "Well, that's great, but it's not really what I was after." So at the end of the day it still comes down to "do your analysis. Research, write small prototypes, address the pros and cons, and then make the best decisions you can based on what you have learned." @RobertHarvey I admire your strictness on this topic. I do want to do my analysis and research. And this post here on stackexchange is part of my research. I do not want other people to do my work or decide for me, but rather ask for experience and expertise on the matter.
And based on this, I need to address the pros and cons anyway. My thought was that I alone cannot work out all the pros and cons in a reasonable time, whereas asking for the experience of others is never a bad thing. Architecture Architecturally speaking, this is in fact a good system. When you zoom out a microservice architecture to the platform level what you have is (hopefully) a very modular and resilient monolith connected by a communication medium. Taking all of that and wrapping it up with a bow for a desktop machine is definitely possible, and will probably make for a decently architected desktop app. That being said... Constraints The first set of issues to handle are scale and resourcing - A desktop does not a server-farm make. You can address this in part by: not shipping the entire platform to the user desktop substituting in lighter weight components Not shipping the entire platform to the user desktop is obvious if the main thrust of the app relies on: Terabytes of data, hundreds of specialised processors, a super-secret trade secret, and/or expensive to licence software components. This is where it makes sense to ship a middle man which operates a cache and some smarts for managing this offline state. The problem is how you synchronise this data when you go back online. Does the cache win? Does the server? Is it a mix of both? Does the user get a say, and if so how? This is an interface that's not part of the normal UI you are shipping. Substituting in lighter weight components can make a real performance difference. Obviously that enterprise grade sharded sql server farm is overkill for most users. Substituting in SQLite, or another more portable engine, is not just a good idea, but necessary. This will be much harder if you've bought into proprietary sql dialect™. Perhaps you need to provide different "repository" implementations within your services to abstract this away.
Privacy & Security There is also the difference of the desktop being Not Your Machine, which opens up the can of worms about user privacy. As a website your app operated in a sandbox, with heavy sets of restrictions on what it could and couldn't do. Even what it could and couldn't see. The servers ran on Your Machine where monitoring and data collection are just good sense. That isn't going to work for you as a desktop app. You have access to a lot more information. Information about the user's computer, and about the user themselves like: their files, OS, contacts, internet access, CPU load, and stored credentials. Worse, you have access to more capabilities like: their printer, monitors, file system, and a network connection that isn't locked down. What before you had for free by being sandboxed, you are now going to have to enforce yourself, like being a good citizen in the file system, and not spamming the printer. You are going to need to reconfigure, if not comb out all of these data feeds, ensuring that you aren't inadvertently snooping. Along with enforcing those good desktop citizen policies. On Your Machine, this is just good practice. On Not Your Machine, it's malware. Authentication and Authorisation Keep Authentication on your own servers, don't even try to localise it. Any attempt necessarily means leaking information about the user's password/certificate/magic string to the local machine, which, if it's like most user machines, is very insecure. Authorisation though, now that's a different picture. You can wrap the authorisations up inside a serialised token, and sign it with the server's private key. You probably are already doing this for at least the web app to the API, perhaps even through the microservices to the back-end services. Leverage that, or put it in place. If you add a public key to the token (the server keeping the private key), data returned by the API can be encrypted on a record level.
This way you can allow a stream to the local app to ship data for two or more users. You can be certain that the specific user has access to the specific data because only that user has the key. Keep that data in this encrypted state at rest too. You can additionally use the stored key to encrypt locally generated data for shipping to the server. The server can then prove that the specific user created that data. And locally your own app can prove what it made, what it received, and by whom. Any injected data will simply not be correct. If encrypting the data isn't that important, use a signed secure hash instead. This way changes to the data can be noticed, as can data inserted by other processes. To make this as foolproof as possible the token needs to be encrypted at rest on the user's machine using the local encryption primitives, leveraging the local user management of the machine. This ensures the database cannot simply be copied away and trivially decrypted. Unfortunately this does not make it impossible for someone to manipulate the local database. The local machine security, the certificate, and the application logic are all stored on the local machine. A user can decrypt their certificate manually A user can create a new local account and encrypt their own certificate locally and place it in the database A user can delete hashes and replace them with their own signed hashes A user can manipulate the program logic itself, by editing the binary, or debugging it. So the short of it is: Don't ship data to the local machine/web app that you don't want anyone else on that machine accessing. Verify any and all data coming back from that machine/web app when it hits your servers, don't presume the microservices in front have verified anything. Note: I'm not a security expert, they will be much more aware of the pitfalls, and strategies for encrypting/hashing/signing data.
The above is a broad stroke description of a multi-user financially orientated application with a local database and cloud service. Of course if the data isn't your data, and isn't sensitive from your own perspective. There isn't any reason to not just keep it in the clear and leverage the local OS permissions and user management. Each OS is a little different on how to do this, but you can usually make it only read/writeable from the current user account. If it needs encryption, the local OS usually offers a key management, or a service for encrypting data with a local secure key. Thanks for the detailed answer. "Architecturally speaking," was what I needed to hear. You address some key points. In fact, I do not plan on shipping the whole microservice architecture, but only a subset of it. Also, I already thought about this middle man, managing online/offline state, synchronising data. For additional features in the UI, I already have a concept, too: the main web frontend only will only be a subset of the desktop application. What are your thoughts on user authentication and data security with a local offline file-based db?
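The record-level signing scheme described in the answer above can be sketched in a few lines of Python. Two loudly-flagged simplifications: a symmetric HMAC with a placeholder key stands in for the asymmetric server-side signature the answer describes (so the sketch is self-contained), and the record contents are made up:

```python
import hashlib
import hmac
import json

SERVER_KEY = b"placeholder-server-secret"  # in reality: a managed private key

def sign_record(record: dict) -> dict:
    """Wrap a record in an envelope carrying a signature over its contents."""
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "sig": sig}

def verify_record(envelope: dict) -> bool:
    """True only if the record still matches the signature it shipped with."""
    payload = json.dumps(envelope["record"], sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

envelope = sign_record({"user": "bob", "balance": 12})
print(verify_record(envelope))          # True

envelope["record"]["balance"] = 9999    # local tampering, as the answer warns
print(verify_record(envelope))          # False: the server can reject this
```

With a real asymmetric scheme the client holds only the public key, so it can verify records without being able to forge them; HMAC cannot give that separation, which is why it is only a stand-in here.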
MongoDB Realm has [error exchanging access code with OAuth2 provider] I am currently creating a simple app whose purpose is: Sign In with Google through the GoogleSignInSDK, then get the token from Google, and use that for io.realm.mongodb.Credentials.google(token). But I always get the error E/SignInActivity: AUTH_ERROR(realm::app::ServiceError:47): error exchanging access code with OAuth2 provider. I tried: Followed the unclear guidelines from https://docs.mongodb.com/realm/authentication/google/#set-up-a-project-in-the-google-api-console. I could not find my Client Secret when I declared the app as Android on the Google Cloud Platform Console. I found the Client Secret and Client ID when I declared the app as Android on the Google Cloud Platform Console. Nothing works. Error on realm.mongodb.com: https://i.sstatic.net/Esgi5.png Thanks so much for reading this, I'm hopelessly waiting for anyone who has a solution for the same problems. Tran Thanh Trong I encountered this bug too and it seems that it's being treated as a known issue by the developers. https://github.com/realm/realm-js/issues/3116
Cooling for big layers There is an option in Slic3r to disable cooling for layers that take more than n seconds. What would be the disadvantages of having cooling on big layers? Warping. Especially with materials like ABS, you want the plastic to cool down as gradually (and slowly) as possible, to prevent the print from warping as the cooling plastic contracts. On small layers, cooling is usually mandatory: with really small layers, you just end up with a big glob of molten plastic if the previous layer hasn't solidified enough before the next layer is put on top. You want just enough cooling that the plastic holds its shape, but no more than that. On a large layer, the plastic cools enough naturally without help from a fan.
Regex extract substring after '=' sign I saw many examples, but for some reason it still does not work for me. This is the command I'm executing: NUMBER=$(docker logs vault | grep Token) NUMBER=${NUMBER##*": "} NUMBER=$(echo $NUMBER | sed 's/^token=(.*)$//g') echo $NUMBER I want to get the value after '=', which is a string basically. I tried using grep and other regexes, but I either get nothing, or just the original string. Please advise. Which OS are you using? You're a) using () capture groups with Basic Regular Expressions, where they have to be \(\) (or use ERE instead, sed -E/sed -r) and b) you're removing the complete line, but you want to re-insert the captured group again. Better to use cut here echo 'token=dsa32e3' | cut -d= -f2 Or, if supported, number=$(echo token=dsa32e3);number=${number#*=} in bash: NUMBER=${tk/*=/} ; echo $NUMBER, assuming tk="token=dsa32e3" @anubhava, thank you! please post an answer so I'll accept it. To get text after a delimiter it is better to use cut instead of sed, as in this example: echo 'token=dsa32e3' | cut -d= -f2 dsa32e3 -d= sets the delimiter as = for cut -f2 makes cut print the second field I encountered another issue just now: I get back the string I wanted but with an additional ANSI character: "48372d06-03a5-f2a0-8428-ecd58cf57424\u001b[0m", what is this '\u001b[0m' string? and how do I get rid of it? Is this \u001b[0m part of your original string as well? Nope, it seems to be added after I use the grep command. But there is no grep here.. Just a cut command. If your script has more than it then I suggest you provide a sample input and your script by editing your question so that I can investigate and suggest you a fix. I've added the additional command I perform. ok can you tell me output of this command: docker logs vault | awk -F= '/Token/{print $2}' | cat -A Let us continue this discussion in chat. Solved it using sed -r "s/\x1B\[([0-9]{1,2}(;[0-9]{1,2})?)?[mK]//g" The string at the end was an ANSI colour code.
Oh ok glad you made it work. cheers With sed you can simply remove the token=, with NUMBER=$(echo token=dsa32e3 | sed 's/^token=//g') echo $NUMBER Other non-regexp based alternatives are possible, as other users pointed out. Another fun possibility is using the negative lookbehind, not supported by sed, so I used perl. NUMBER=$(echo token=dsa32e3 | perl -pe 's/.*(?<=token=)([a-z0-9]*)/$1/g') echo $NUMBER
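The thread's suggestions can be put together in one runnable sketch. The token value and the ANSI escape are made up, and the `\x1b` escape in the sed expression is GNU sed syntax (BSD sed would need the literal escape byte):

```shell
token_line='token=dsa32e3'

# 1) cut on the '=' delimiter, as suggested in the accepted answer
value_cut=$(printf '%s\n' "$token_line" | cut -d= -f2)

# 2) pure-shell parameter expansion: strip everything up to the first '='
value_param=${token_line#*=}

# 3) strip a trailing ANSI colour reset like the \u001b[0m the asker saw
colored=$(printf 'dsa32e3\033[0m')
clean=$(printf '%s' "$colored" | sed 's/\x1b\[[0-9;]*m//g')

echo "$value_cut / $value_param / $clean"
```

The parameter-expansion form avoids spawning any external process, which matters if the extraction runs in a tight loop.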
URL routing not working when user clicks a link I am trying to modify a web application we have and I'm not sure if I can do what is being requested. My boss wants to be able to click a link from an email and have our internal company web application go straight to a page identified at the end of a provided URL. If I click on the link below the first time, it goes to the index page of our web application. If I leave the web application open and click on the link again, it goes to the correct page identified at the end of the URL. http://mycompanyweb.com/handbook/mycompanyprocess/#/rt/software/9.13_UpdateSDP I've tried adding an init(), thinking that is where the application goes first in the lifecycle, and I only see this part of the URL at that point (http://mycompanyweb.com/handbook/mycompanyprocess/). This leads me to believe that the browser is stripping everything off after the # when it first opens. Is that correct? Is there something I can do to get our web application to go directly to the document the first time a user clicks on the link, without the web application open? http://mycompanyweb.com/handbook/mycompanyprocess/ - Base URL #/rt - Used by our JavaScript engine to determine which path to take (dev or production). /software/9.13_UpdateSDP - Logical path to a web page named 9.13_UpdateSDP.htm Below is our engine that determines where to route based on the URL. I assume that the second time a link is clicked it goes to the correct page because the engine has been loaded (provided the browser is left open when clicked a second time). $(document).ready(function () { // Define the routes.
Path.map("#/:program").to(function () { var program = this.params['program']; if (program == "search") { $("#mainContent").load("search.html"); } else { $("#mainContent").load("views/onepageprocess.html"); } $("#subheader").html(""); $("#headerLevelTwoBreadcrumbLink").html(""); }).enter(setPageActions); Path.map("#/:program/:swimlane").to(function () { localStorage.removeItem("SearchString"); var swimlane = this.params['swimlane']; var view = "views/" + swimlane + ".html"; $("#mainContent").load(view); }).enter(setPageActions); // Sends all links to the level three view and updates the breadcrumb. Path.map("#/:program/:swimlane/:page").to(function () { var page = this.params['page']; var url = "views/levelthree/" + page.replace("", "") + ".htm"; var levelThreeTitle = ""; $.get(url) .done(function () { // Nothing here at this time... }).fail(function () { url = "views/levelthree/badurlpage.htm"; levelThreeTitle = "Page Unavailable"; }).always(function (data) { $("#subheader").html(""); level_three_breadcrumb = "views/breadcrumbs/breadcrumb_link.html"; $("#headerLevelTwoBreadcrumbLink").load(level_three_breadcrumb); $("#headerLevelThreeBreadcrumb").html(""); $('#headerLevelThreeBreadcrumb').append('<img src="images/Chevron.gif" />'); if (data.status != "404") { $("#headerLevelThreeBreadcrumb").append(retrieveStorageItem("LevelThreeSubheader")); } $("#mainContent").load(url); }); }).enter(setPageActions); // Set a "root route". User will be automatically re-directed here. The definition // below tells PathJS to load this route automatically if one isn't provided. Path.root("#/rt"); // Start the path.js listener. Path.listen(); }); Is there something I can do to get our web application to go directly to the document the first time a user clicks on the link, without the web application open? the Browser will not strip anything from the URL. must have something todo with your application Sooo, how can I find out where? 
If anyone runs into something like this: I found out that anything after the # in the URL was being dropped during my company's authentication redirect (the fragment is never sent to the server, so it cannot survive a redirect unless the client restores it). I will be modifying my app to not use hash fragments in the URL to fix it.
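For reference, the hash route in the URL above carries all of the routing state on the client side, which is exactly the part a redirect can lose, since browsers never send the fragment to the server. A minimal sketch of pulling the pieces back out of location.hash; parseHashRoute is a hypothetical helper for illustration, not part of PathJS:

```javascript
// Split "#/rt/software/9.13_UpdateSDP" into the parts the router maps
// with "#/:program/:swimlane/:page". The fragment lives only in the
// browser (location.hash); the server never receives it, so an auth
// redirect cannot preserve it for you.
function parseHashRoute(hash) {
    var parts = hash.replace(/^#\//, "").split("/");
    return {
        program: parts[0] || null,   // e.g. "rt" (dev vs production)
        swimlane: parts[1] || null,  // e.g. "software"
        page: parts[2] || null       // e.g. "9.13_UpdateSDP"
    };
}

console.log(parseHashRoute("#/rt/software/9.13_UpdateSDP"));
// { program: 'rt', swimlane: 'software', page: '9.13_UpdateSDP' }
```

Switching to path-based URLs (no #) means the full route reaches the server, at the cost of the server having to serve the application for every route.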
Unable to Update DataTable, No Primary Key

I am updating a DataTable, and when I do it says the table doesn't have a primary key. I changed how I created the table and added a PrimaryKey, so what's going on? I create the table, add columns, and add rows with data, some with blank values. The DEBUG code right after the table is populated works fine. I then want to fill in some of those blank values, but when I do I get the error. Here is my code:

DataRow workrow;
DataTable ReasonCodeTable = new();

DataColumn dc1 = new();
dc1.ColumnName = "Reason Code";
dc1.DataType = typeof(string);
DataColumn dc2 = new();
dc2.ColumnName = "LocationID";
dc2.DataType = typeof(string);
DataColumn dc3 = new();
dc3.ColumnName = "Location";
dc3.DataType = typeof(string);
DataColumn dc4 = new();
dc4.ColumnName = "Issue";
dc4.DataType = typeof(string);
DataColumn dc5 = new();
dc5.ColumnName = "Description";
dc5.DataType = typeof(string);
DataColumn dc6 = new();
dc6.ColumnName = "Who Edited";
dc6.DataType = typeof(string);
DataColumn dc7 = new();
dc7.ColumnName = "Last Edit (UTC)";
dc7.DataType = typeof(string);

ReasonCodeTable.Columns.Add(dc1);
ReasonCodeTable.Columns.Add(dc2);
ReasonCodeTable.Columns.Add(dc3);
ReasonCodeTable.Columns.Add(dc4);
ReasonCodeTable.Columns.Add(dc5);
ReasonCodeTable.Columns.Add(dc6);
ReasonCodeTable.Columns.Add(dc7);

ReasonCodeTable.PrimaryKey = new DataColumn[] { dc1, dc2 };

// Builds ReasonCodeTable
foreach (string code in codeList)
{
    foreach (KeyValuePair<String, String> loc in Locations)
    {
        workrow = ReasonCodeTable.NewRow();
        workrow["Reason Code"] = code;
        workrow["LocationID"] = loc.Key;
        workrow["Location"] = loc.Value;
        workrow["Issue"] = "";
        workrow["Description"] = "";
        workrow["Who Edited"] = "";
        workrow["Last Edit (UTC)"] = "";
        ReasonCodeTable.Rows.Add(workrow);
    }
}

/*** DEBUG BEGIN ***/
foreach (DataRow dr in ReasonCodeTable.Rows)
{
    for (int cnt = 0; cnt < ReasonCodeTable.Columns.Count; cnt++)
        Console.WriteLine($"{ReasonCodeTable.Columns[cnt].ColumnName}: {dr[cnt]}");
    Console.WriteLine($"");
}
/*** DEBUG END ***/

foreach (DataRow _rcdr in ReasonCodeTable.Rows)
{
    foreach (DataRow _dr in p21CodeTable.Rows)
    {
        if ((p21CodeTable.Rows.Find(_rcdr["Reason Code"]) != null) &&
            (p21CodeTable.Rows.Find("P21") != null))
        {
            _rcdr["Desciption"] = _dr["description"];
            _rcdr["Who Edited"] = _dr["editwho"];
            _rcdr["Last Edit (UTC)"] = _dr["editdate"];
        }
    }
}

/*** DEBUG BEGIN ***/
foreach (DataRow dr in ReasonCodeTable.Rows)
{
    for (int cnt = 0; cnt < ReasonCodeTable.Columns.Count; cnt++)
        Console.WriteLine($"{ReasonCodeTable.Columns[cnt].ColumnName}: {dr[cnt]}");
    Console.WriteLine($"");
}
/*** DEBUG END ***/

R64CodeTable:

company: IBM
code: 10
description: Inventory Count
editdate: 2020-05-19
editwho: User12

company: IBM
code: 11
description: Data Error
editdate: 2021-06-28
editwho: User12

company: IBM
code: 12
description: Warehouse Error
editdate: 2021-08-19
editwho: User06

ReasonCodeTable after initial population:

Reason Code: 10
LocationID: R64
Location: R64
Issue:
Description:
Who Edited:
Last Edit (UTC):

Reason Code: 10
LocationID: WH1
Location: Chicago
Issue:
Description:
Who Edited:
Last Edit (UTC):

Reason Code: 11
LocationID: R64
Location: R64
Issue:
Description:
Who Edited:
Last Edit (UTC):

ReasonCodeTable how it should look after the update:

Reason Code: 10
LocationID: R64
Location: R64
Issue:
Description: Inventory Count
Who Edited: User12
Last Edit (UTC): 2020-05-19

Reason Code: 10
LocationID: WH1
Location: Chicago
Issue:
Description:
Who Edited:
Last Edit (UTC):

Reason Code: 11
LocationID: R64
Location: R64
Issue:
Description: Data Error
Who Edited: User12
Last Edit (UTC): 2021-06-28

I don't understand what's going on with the primary key.
The following may be helpful: https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/dataset-datatable-dataview/navigating-datarelations , https://learn.microsoft.com/en-us/dotnet/api/system.data.datarelation?view=net-6.0 , and https://learn.microsoft.com/en-us/dotnet/api/system.data.datacolumncollection.add?view=net-6.0#system-data-datacolumncollection-add(system-string-system-type)

@user9938 Ok, but why did the first debug code work, and then the next foreach cause an error? Looks like I can do ReasonCodeTable.PrimaryKey = new DataColumn[] { "Reason Code", "LocationID" };

Please add some sample data to your post. Also, the following statement doesn't make sense: "What I'm trying to do is to update the ReasonCodeTable from the R64CodeTable by matching the 'code' to the 'Reason Code' and if it has R64 in it". The rest of the paragraph could use a re-write as well, to make things more clear. A DataTable is usually used when a database is the source for the data (although this isn't a requirement). It seems that you aren't using any syntax that would benefit from a DataTable - you could very well have used a List of a class (ex: List<MyClass1>).

@user9938 Updated the question. I changed how I built the DataTable so I could create a PrimaryKey. I don't know why it still says it isn't there.

If all columns are going to be String, specifying the data type is unnecessary. It seems that one of the columns could be created as int, and another could be created as DateTime - but one needs to refer to the data source to find that information. From looking at one of your previous posts, it seems that your data source may be JSON.

@user9938 The data source is SQL and a JSON file. I read them into the Locations Dictionary and the R64CodeTable DataTable. I don't want a DataSet, I want a DataTable (not a DataSet with a single table either). I don't understand why I have to have relationships just to set a value in a DataTable. That seems seriously convoluted. It also doesn't explain why I am still getting the Primary Key error when I have clearly set one.

I'm having difficulty following what you're attempting to do. Also, the code you posted is missing information on codeList, Locations, and p21CodeTable. For the part that's not working, you may want to try using for instead of foreach to see if it resolves your issue.
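One detail worth noting: DataRowCollection.Find looks for the key in the table it is called on, and it throws MissingPrimaryKeyException ("Table doesn't have a primary key") when that table's PrimaryKey was never set. In the code above, the key was set on ReasonCodeTable, but Find is called on p21CodeTable. A minimal sketch of that behavior (the column names here mirror the question's p21CodeTable but are illustrative):

```csharp
// Sketch: Rows.Find requires a PrimaryKey on the table being searched.
using System;
using System.Data;

class Demo
{
    static void Main()
    {
        var p21 = new DataTable();
        p21.Columns.Add("code", typeof(string));
        p21.Columns.Add("description", typeof(string));
        p21.Rows.Add("10", "Inventory Count");

        // p21.Rows.Find("10");   // would throw MissingPrimaryKeyException here

        p21.PrimaryKey = new[] { p21.Columns["code"] };
        DataRow hit = p21.Rows.Find("10"); // now succeeds
        Console.WriteLine(hit["description"]); // Inventory Count
    }
}
```

So the first debug loop works because it never calls Find; the error appears only when the update loop calls p21CodeTable.Rows.Find(...).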
AsyncTask has errors - beginner

Eventually I want this method to look up some values in a text document and return true if the username and password are present there. However, I am having some problems with implementing the AsyncTask. I have tried to follow the guide at http://developer.android.com/reference/android/os/AsyncTask.html but have had no success. The error I am getting at the return type of the doInBackground method is "The return type is incompatible with AsyncTask.doInBackground(String[])".

private class AuthenticateUser extends AsyncTask<String, Integer, Boolean> {
    String user;
    String pass;

    protected void onPreExecute(String uname, String passwd) {
        user = uname;
        pass = passwd;
    }

    protected boolean doInBackground(String... strings) {
        return true;
    }

    protected boolean onPostExecute(boolean v) {
        return v;
    }
}

I know that this is not a good way to authenticate a user at all. I am just trying to figure this out. Thanks.

Are you trying to override the doInBackground method of AsyncTask? Just put @Override before doInBackground; this will automatically give you the default structure of the doInBackground method according to the params passed in AsyncTask<String, Integer, Boolean>.

Try changing protected boolean doInBackground(String... strings) to protected Boolean doInBackground(String... strings), using the class Boolean instead of the primitive boolean.

The problem here is that AsyncTask extensions are generic and need three types, AsyncTask<Params, Progress, Result>, each of which may be Void or a class, but not a primitive data type. So what happened is that you told the compiler doInBackground returns a primitive boolean, but it was expecting an instance of the class Boolean, and thus you get the "The return type is incompatible" error. Just change protected boolean doInBackground(String... strings) to protected Boolean doInBackground(String... strings) and you should be fine.

Thanks Oren, that has sorted it. However, I have another error that I forgot to mention.
I am getting "AuthenticateUser cannot be resolved as a type" when I do: new AuthenticateUser.execute("http://path/to/file.txt"); Do I have to do this even if AuthenticateUser is a subclass?

The private part of private class AuthenticateUser may have something to do with that. This usage example may help you; note the AsyncTask's class is private, as it resides in the scope of another, public, class. You can follow that design or just make your AuthenticateUser class public (depending on your expected usage). Side note: if an answer on Stack Overflow helped you, you should accept it so it can help others as well.

new AuthenticateUser().execute(username, password);

private class AuthenticateUser extends AsyncTask<String, Void, Boolean> {
    String user;
    String pass;

    protected Boolean doInBackground(String... strings) {
        this.user = strings[0];
        this.pass = strings[1];
        // authenticate
        return true;
    }

    protected void onPostExecute(Boolean result) {
        // do stuff when successfully authenticated
    }
}
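To see why the override has to use the wrapper type, here is a plain-Java sketch (no Android needed) with a hypothetical Task class standing in for AsyncTask: a generic type argument can only be a class, never a primitive, so Result is Boolean, and the overriding method's declared return type must match it exactly.

```java
// Stand-in for AsyncTask<Params, Progress, Result>: generic type
// arguments must be classes (Boolean), not primitives (boolean).
abstract class Task<P, R> {
    protected abstract R doInBackground(P... params);
}

public class Main {
    static Boolean authenticate(String user, String pass) {
        Task<String, Boolean> t = new Task<String, Boolean>() {
            @Override
            protected Boolean doInBackground(String... strings) {
                // Returning a primitive boolean expression still works,
                // because autoboxing converts it to a Boolean; only the
                // DECLARED return type has to be the wrapper class.
                return "admin".equals(strings[0]) && "secret".equals(strings[1]);
            }
        };
        return t.doInBackground(user, pass);
    }

    public static void main(String[] args) {
        System.out.println(authenticate("admin", "secret"));
    }
}
```

Declaring doInBackground with a primitive boolean return type here would fail to compile for the same reason the Android compiler rejects it: it no longer overrides the inherited method.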
JSLint with multiple files

JSLint works fine for just one JavaScript file. Recently, I've started breaking my program into several pieces. I don't want to be stringing the pieces together each time I use JSLint to check my code. What is the standard solution for dealing with multiple files with JSLint?

We use an automated build with NAnt scripts, and in this build process we run a task for JSLint that accepts a list of files. I don't know if this would work for you, but anyway.

There is a command-line version of JSLint (node-jslint) that allows you to check multiple files at once. Follow this link to download it from GitHub: https://github.com/reid/node-jslint

The following are examples of calling it via the command line:

jslint app.js
jslint lib/worker.js lib/server.js  # Multiple files
jslint --white --onevar --regexp app.js  # All options supported
jslint --bitwise false app.js  # Defaults to true, but you can specify false
jslint --goodparts --undef false app.js  # The Good Parts, except undef
jslint -gp app.js  # Shorthand for --goodparts: -gp
find . -name "*.js" -print0 | xargs -0 jslint  # JSLint your entire project

Note: This application was developed for NodeJS.

Thanks. So is this for node users only? Do I have to have node installed?

Randomblue, apparently it is not necessary, but you must have two js modules that are required by the program (nopt.js and fs.js).

What's wrong with just running the command? jslint . will check all js files in the current directory and recurse down the tree.

... which will include everything in your node_modules directory.

Note that if you use JSHint, it is possible to exclude directories in several ways (e.g. the option --exclude dir). See http://jshint.com/docs/cli/ , section "ignoring files".

You can run JSLint against HTML files, not just against JavaScript files (which makes sense, because of the <SCRIPT> tag). And JSLint is smart about external scripts - if it can locate them, it will load them as part of the processing.
So try something like this:

<html>
<head>
<script src="file1.js"></script>
<script src="file2.js"></script>
...
<script>mainRoutine();</script>
</head>
</html>

Run JSLint on that, instead of on each of your files.

The command-line app JavaScript Lint (http://www.javascriptlint.com/) works with multiple files and can recurse directories. E.g.

%JSLPATH%\jsl.exe +recurse -process src\scripts\*.js

You can also have a look here: https://github.com/mikewest/jslint-utils It should work with either Rhino or NodeJS. You can also pass multiple files for checking.

NB: if you have a command-line script which doesn't take multiple files as arguments, you can always do something like:

ls public/javascripts/**/*.js | jslint

If you don't need a running count of errors, open a terminal (in OS X) and paste this:

for i in $(find . -iname "*.js"); do jshint $i; done

You can replace find . with find /path/ (this is taken from this link and formatted).

Rhino is a JavaScript engine that is written entirely in Java. It can be used to run JSLint from the command line on systems with Java interpreters. The file jslint.js can be run by Rhino. It will read a JavaScript program from a file. If there are no problems, it terminates quietly. If there is a problem, it outputs a message. The WSH edition of JSLint does not produce a Function Report. One way to run JSLint is with this command:

C:\> java org.mozilla.javascript.tools.shell.Main jslint.js myprogram.js

It runs java, which loads and runs Rhino, which loads and runs jslint.js, which will read the file myprogram.js and report an error if found. Download jslint.js.

This doesn't seem to answer the question.
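The find/xargs pattern from the answers above generalizes to any per-file command. Here is a sketch you can try even without JSLint installed; wc -l stands in for the linter binary, so substitute jslint or jshint on a real setup:

```shell
# Build a tiny throwaway project tree, then run a command over every
# .js file in it. Swap "wc -l" for jslint/jshint in practice.
demo=$(mktemp -d)
mkdir -p "$demo/lib"
printf 'var a = 1;\n' > "$demo/app.js"
printf 'var b = 2;\n' > "$demo/lib/util.js"

# -print0 / -0 keeps filenames containing spaces intact
find "$demo" -name '*.js' -print0 | xargs -0 wc -l
```

Unlike the `ls ... | jslint` form, xargs passes the filenames as command-line arguments rather than on stdin, which is what most linters expect.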
How to automate (from command-line) the installation of a Visual Studio Build Tools build environment, for C++, .NET, C#, etc

Note: I have already read "How can I install the VS2017 version of msbuild on a build server without installing the IDE?" but this does not answer with a totally GUI-less, script-only install.

Over the years, here is what I have noticed:

- I download a project from GitHub, or open an old project of mine (say from 4 years ago)
- I run msbuild.exe theproject.sln
- oops, I don't have the right Visual Studio version / .NET Framework version / the right SDK is missing (example situation here, among many others)
- I then spend X hours browsing websites like https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019, install a package, then notice it's not the right one (msbuild still fails), download another package, install it, etc.
- at the end I have downloaded 8 GB of packages, waited for the download, waited for the install, waited for the reboot, and I'm still not sure it works
- my computer is now a mess, with 5 different versions of SDKs installed at the same time that probably collide with each other (did version Z overwrite/uninstall version Y or not?)

This might be a solution to avoid this problem: how can I install the required MS build tools from the command line? (I don't want to use any IDE; I want to script everything.)

If it were possible, I would just, once and for all, create a build.bat file for every project, something like:

msbuildget --package=VC14 --installdir=c:\buildtools\vc14   rem automatically download and install
C:\buildtools\vc14\bin\msbuild.exe myproject.sln

or

msbuildget --package=.NET-35 --installdir=c:\buildtools\net35
C:\buildtools\net35\bin\msbuild.exe myproject.sln

How can I do this? With this method, even if you open a 6-year-old project, you should be able to build it.
How to automate (from command-line) the installation of a Visual Studio Build Tools build environment, for C++ version X, .NET C# version Z, etc

First, you should note that all the workloads or packages need to be installed, and they will be integrated into the Build Tools; what you need is the workload Component ID of each of them. You can refer to this document to obtain the related Component IDs of the Build Tools. Besides, this document also lists the command-line install instructions, and the order is the same as in the Build Tools.

Suggestion

You can try the below script:

rem This is for desktop development and also adds the .NET Framework 3.5
vs_buildtool_xxx.exe --add Microsoft.VisualStudio.Workload.ManagedDesktopBuildTools^
 --add Microsoft.Net.Component.3.5.DeveloperTools^
 --add Microsoft.Net.Component.4.5.2.TargetingPack^
 --installPath C:\BuildTools

C:\BuildTools\MSBuild\Current\Bin\MSBuild.exe myproject.sln

And you can add any more workloads or packages with the --add option and the related Component ID.

If you want to build a C++ project, you can try the following example:

vs_buildtool_xxx.exe --add Microsoft.VisualStudio.Workload.VCTools^
 --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64^
 --add Microsoft.VisualStudio.Component.VC.140^
 --installPath C:\BuildTools

C:\BuildTools\MSBuild\Current\Bin\MSBuild.exe myproject.sln

Microsoft.VisualStudio.Component.VC.140 means the VS2015 build tools for C++.

Important note: using the command line is quite different from the vs_installer UI. When you click the C++ build tools in the vs_installer UI, you will see that it installs related components automatically. These components are listed under the Microsoft.VisualStudio.Workload.VCTools workload, and you can choose whether or not to install them; however, that does not mean the workload will install all of them. When you use the command line, it will not install any related components automatically, so you should add them one by one manually.

For C++ projects, you could use these commands to install it:

vs_buildtool.exe --add Microsoft.VisualStudio.Workload.MSBuildTools^
 --add Microsoft.VisualStudio.Workload.VCTools^
 --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64^
 --add Microsoft.VisualStudio.Component.Windows10SDK.18362^
 --add Microsoft.VisualStudio.Component.VC.CMake.Project^
 --add Microsoft.VisualStudio.Component.TestTools.BuildTools^
 --add Microsoft.VisualStudio.Component.VC.ASAN^
 --add Microsoft.VisualStudio.Component.VC.140

Thanks! I'll retry this the next time I have to reinstall MSVS.

For an unknown reason, I ran vs_buildtools.exe --add Microsoft.VisualStudio.Workload.MSBuildTools --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.ManagedDesktopBuildTools successfully, but it only installed < 1 GB of software (which is surprising). Then I was unable to compile anything; some not-installed files were still needed... Do you see a reason @PerryQianMSFT why Microsoft.VisualStudio.Workload.MSBuildTools, Microsoft.VisualStudio.Workload.VCTools, Microsoft.VisualStudio.Workload.ManagedDesktopBuildTools are not enough to be able to simply compile a Hello World with cl.exe test.cpp? What is missing? (NB: it was not a PATH / vcvarsall problem.)

The command line is different from the vs_installer UI. If you want to use the C++ workload, you should install the C++ workload and the related components. If you use vs_installer to install the C++ build workload, when you click on it, it will also install the related tools containing cl.exe automatically. However, if you use the command line to install the Build Tools, it will not install the related components, and you should add them manually. That is one of their differences.
In your situation, for a C++ project, you should also add this: Microsoft.VisualStudio.Component.VC.Tools.x86.x64.

For a C++ project, use these: Microsoft.VisualStudio.Workload.MSBuildTools, Microsoft.VisualStudio.Workload.VCTools, Microsoft.VisualStudio.Component.VC.Tools.x86.x64, Microsoft.VisualStudio.Component.Windows10SDK.18362, Microsoft.VisualStudio.Component.VC.CMake.Project, Microsoft.VisualStudio.Component.TestTools.BuildTools, Microsoft.VisualStudio.Component.VC.ASAN, Microsoft.VisualStudio.Component.VC.140.

You can open vs_installer.exe and click on the C++ build tools, and you will see all the related workloads and components in the right-hand menu. You can then find their Component IDs to add them. You can use the command which I listed above to install it; I have listed all of them, and it is almost 1 GB. For C# desktop projects, you should use a similar approach. In a word, the command line will not install the whole related C++ workload; you should add the components one by one.

@PerryQianMSFT: see this link: Microsoft.VisualStudio.Workload.VCTools already includes Microsoft.VisualStudio.Component.VC.Tools.x86.x64 and Microsoft.VisualStudio.Component.VC.140, so I don't understand why it didn't work with vs_buildtools.exe --add Microsoft.VisualStudio.Workload.MSBuildTools --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.ManagedDesktopBuildTools. It should be automatically included, don't you think so?

Actually, these are components that belong to Microsoft.VisualStudio.Workload.VCTools. You can choose whether or not to install them, but that does not mean the workload will install all of them. Through the command line, they will not be installed, so you need to specify them.

This might be the source of my problem indeed @PerryQianMSFT. I thought that doing --add Microsoft.VisualStudio.Workload.VCTools from the command line would automatically install ALL the packages listed in "Components included by this workload". Is that wrong? If so, the title "Components included by this workload" is misleading... Maybe you could add a note about this in your answer? It could save hours for future readers!

Sure. It is really a bit annoying and not convenient enough. The command-line method itself is tedious, so if you still want this feature, I suggest you post a feature request on our User Voice Forum.

Ok, sure, will do that. I just edited and added a Note at the end @PerryQianMSFT, is it ok?

Thank you for your help too @PerryQianMSFT. I re-edited and used ^ line breaks (it's the right way to do it in batch files) for readability; I hope that's ok with you.

Sorry, I didn't understand how I should download the "vs_buildtool_xxx.exe" file in my script?

I've just spent a good while trying to figure this out. I ended up with this:

.\vs_buildtool.exe --passive --wait --add Microsoft.VisualStudio.Workload.VCTools;includeRecommended

Note the use of includeRecommended to avoid adding components individually. includeOptional is also available. You can also use e.g. the --includeRecommended flag to include the recommended components for all workloads (see here). Or, to install using winget:

winget install -e --id Microsoft.VisualStudio.2022.BuildTools --override "--passive --wait --add Microsoft.VisualStudio.Workload.VCTools;includeRecommended"

Hope this might be of use to someone else.
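Putting the pieces above together, the per-project build.bat the question asks for could look roughly like the following. This is a sketch only: it assumes the vs_buildtools.exe bootstrapper has already been downloaded from the Visual Studio downloads page, the component IDs are examples to adjust per project, and --quiet/--wait/--norestart are the documented installer flags for an unattended run:

```bat
@echo off
rem Sketch of an unattended Build Tools install followed by a build.
rem Adjust the workload/component IDs to what the project needs.
vs_buildtools.exe --quiet --wait --norestart ^
 --add Microsoft.VisualStudio.Workload.VCTools;includeRecommended ^
 --add Microsoft.VisualStudio.Component.VC.Tools.x86.x64 ^
 --installPath C:\BuildTools
C:\BuildTools\MSBuild\Current\Bin\MSBuild.exe myproject.sln
```

Since the install is idempotent when the components are already present, the same script can serve as the one-shot "msbuildget"-style entry point the question imagines.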