Fish completion script

I'm working on a completion script for a command, and I'm stuck. The docs and various websites I find don't fit what I need. The main command is pacstall and it has the flags: -I -S -R -C -U -V -L -Up -Qd -Qi. For most of the flags, I need the completions to be the output of a command (if I ran pacstall -I, then tabbed, it would show the output of the command curl -s $(cat /usr/share/pacstall/repo/pacstallrepo.txt)/packagelist). This is what I have so far:

set -l pacstall_commands "-I -S -R -C -U -V -L -Up -Qd -Qi"
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -I -d 'Install package'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -S -d 'Search for package'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -R -d 'Remove package'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -C -d 'Change repository'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -U -d 'Update pacstall scripts'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -V -d 'Print pacstall version'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -L -d 'List packages installed'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -Up -d 'Upgrade packages'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -Qd -d 'Query the dependencies of a package'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -Qi -d 'Get package info'

Also, the script keeps tab-completing even after typing in the flag.

Please edit your question and give us some details. What do you have so far? What works? What doesn't work? How does it fail?
Also, this is probably not very relevant, but you never know, so please also tell us what operating system you are running. What is the question? Please edit your question and tell us what the problem is. I did: I need the completions to be the output of a command (if I ran pacstall -I, then tabbed, it would show the output of the command curl -s $(cat /usr/share/pacstall/repo/pacstallrepo.txt)/packagelist).

The -n flag of the complete command allows you to specify a condition for the completion to occur. In this case, you can use the __fish_seen_subcommand_from function to check whether the subcommand -I has been seen already. After this, you can specify with the -a flag the command you want to run inside ():

complete -f --command pacstall -n "__fish_seen_subcommand_from -I" -a "(curl -s (cat /usr/share/pacstall/repo/pacstallrepo.txt)/packagelist)"

As a note, in fish you don't use the $ symbol when capturing the result of a command, as you posted in your question.

As for your final point at the end: if you remove the quotes from the set, that should solve the issue of the duplicates. Using quotes in that fashion specifies one long string rather than separate words.
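Putting both fixes together, the completion file could look like the sketch below (flag descriptions copied from the question; the curl source is as described in the question and untested here against a real pacstall install):

```fish
# ~/.config/fish/completions/pacstall.fish (sketch)
# Unquoted, so fish stores ten separate list elements, not one long string:
set -l pacstall_commands -I -S -R -C -U -V -L -Up -Qd -Qi

complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -I -d 'Install package'
complete -f --command pacstall -n "not __fish_seen_subcommand_from $pacstall_commands" -a -S -d 'Search for package'
# ... the remaining flags follow the same pattern ...

# After -I has been typed, offer the package list instead; note that fish
# command substitution uses (), not $():
complete -f --command pacstall -n "__fish_seen_subcommand_from -I" \
    -a "(curl -s (cat /usr/share/pacstall/repo/pacstallrepo.txt)/packagelist)"
```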
common-pile/stackexchange_filtered
"Yet at the end of the day, Mr. Guaidó fell short of the prize he sought" meaning

Does the sentence "Yet at the end of the day, Mr. Guaidó fell short of the prize he sought." mean that "Even though a whole day went by and now it's the evening, Mr. Guaidó failed to reach his goal."?

As in many languages, in English a "day" can be metaphorically applied to any specified length of time, for example, "At the end of the day we are all of us older but not necessarily wiser."

It could mean that, yes. However, "at the end of the day" is also a saying that can mean at the end of any given period of time in this context. For example: "He fought hard during a two-year-long campaign to win the election. Yet at the end of the day, he fell short of the prize he sought." Conceptually, there are two different ways to interpret this usage: the day being referred to is the last day of the time period, or the time period is being condensed into a single day as a metaphor. To help understand the second interpretation, there is a well-known ancient riddle which in English reads: "It walks on four legs in the morning, two legs at noon, and three legs in the evening. What is it?" The answer to the riddle is a human. Morning refers to early age, noon to middle age, and evening to old age. Babies crawl on their hands and feet, teens and adults walk on two feet, and seniors are known to use canes to walk around (hence "three legs"). The riddle condenses the entire lifetime of a person into a single day to serve as a metaphor.

If it's the second meaning that's being used, then "However, at the end, Mr. Guaidó failed to reach his goal" would be a better interpretation, wouldn't it? @Norbert Yes, that's right.

The phrase "at the end of the day" is one of those English clichés. It's often overused and misapplied. It means something like "at the end of some sequence", "after everything was settled", "at the conclusion of some contest", or various similar things.
It isn't usually referring to a time of day. It's a common saying, but I don't know if I'd call it "cliché" or "overused". https://www.urbandictionary.com/define.php?term=at%20the%20end%20of%20the%20day -1 I wouldn't recommend using Urban Dictionary to support an argument, since it's based entirely on anecdote, hearsay, and conjecture. It's not bad for adding possible alternate meanings or helping to define slang terms, but it's frequently, laughably parochial, out of date, and just plain wrong. @Andrew So there is enough "anecdote, hearsay, and conjecture" about the phrase to produce a UD entry. And that's not evidence it's cliché. OK.
common-pile/stackexchange_filtered
loading AND saving to txt/csv file?

I am trying to set up Tabulator, with all its data validation goodness and simple-to-use UI, in order to help a colleague with CRUD operations he has to do daily on a .txt file. I found that Tabulator can load data using AJAX, but my question is: is it possible to load the data from a .csv/.txt file and then save back to the same file? I know you can export to .csv, but without overwriting the loaded data, all his work would be lost next time.

If you are referring to a file on a user's local computer, then I'm afraid there is no import-from-file functionality built into Tabulator, but there is nothing to stop you implementing that bit yourself. The link below is an article that explains how to load a CSV file from an input element in JavaScript. In the example it loads it into an HTML table, but you could easily alter that to dump it into an array of objects to pass into Tabulator's setData function: http://codeanalyze.com/Articles/Details/20174/Read-CSV-file-at-client-side-and-display-on-html-table-using-jquery-and-html5

In terms of saving the data back to the user's computer, you would need to use the built-in download function; there is no way to save it back without the file popup, due to browser safety constraints.

But I will add that the above approach is a bit unorthodox. The usual way to handle data persistence would be to save the data back to your server into a database, and then load it back to the client with an AJAX request, giving the user the option to download the data when they want the final copy.
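The array-of-objects step the answer describes can be sketched in plain JavaScript. This is a minimal parser that assumes no quoted or escaped commas in the CSV; `table` in the usage comment stands for a hypothetical Tabulator instance created elsewhere:

```javascript
// Minimal CSV -> array-of-objects converter (assumes no quoted/escaped commas).
// The first line is treated as the header row.
function csvToObjects(csvText) {
  const lines = csvText.trim().split(/\r?\n/);
  const headers = lines[0].split(",").map((h) => h.trim());
  return lines.slice(1).map((line) => {
    const cells = line.split(",");
    const row = {};
    headers.forEach((h, i) => {
      row[h] = (cells[i] ?? "").trim(); // missing cells become empty strings
    });
    return row;
  });
}

// Hypothetical usage with a Tabulator instance:
// table.setData(csvToObjects(fileContents));
```

A real-world version would need to handle quoted fields and embedded commas, but this shape is all `setData` needs: an array of plain objects keyed by column name.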
common-pile/stackexchange_filtered
Error opening file using avconv conversion from mp4 to mov

I'm trying to convert an .mp4 file to a .mov file. The command is as below:

avconv -i test1.mp4 kkk.mov

The .mov file is created, but there is an error opening the file: "Stream contains no data". I tried using ffmpeg as well, but that didn't work either. Anyway, as ffmpeg seems to no longer be maintained, I would prefer to go with avconv. Please suggest a solution.

I found the solution. The correct command is:

avconv -i test1.mp4 -strict experimental kkk.mov

If we don't add -strict experimental, the conversion is not done properly. The same flag can be used with ffmpeg.
common-pile/stackexchange_filtered
How do I know which field is selected in Access?

I have a subform from a table on a form. I need to know which field is selected. First I select a field on the subform, and then I click a button which calls a subroutine. That subroutine has to know which field is selected. I tried Screen.ActiveControl.Name, but its value was the button's name. Thanks.

Define "selected". @Erik von Asmuth, this is not a duplicate of the mentioned question. ActiveControl cannot be used to identify the field when focus has already moved to the button. @SergeyS. then he needs to be clear in the question that he wants the previously selected control. That can be achieved by writing the control (or control name) to a variable in the LostFocus event of every control. I thought he wanted the active control of the subform while the button was on the main form, in which case the duplicate is entirely valid. Me.mySubformControl.Form.ActiveControl might work. (And if it does, it is a duplicate :) ) I wanted the previously selected control. Sorry for the confusion.
common-pile/stackexchange_filtered
How to unit test a UIViewController - TDD/BDD

Unit testing is just something I never seem to be able to get my head around, but I can see why it's important and can be a huge time saver (if you know what you're doing). I am hoping that someone can point me in the right direction. I have the following UIViewController:

QBElectricityBaseVC.h

@interface QBElectricityBaseVC : QBStateVC
@property (nonatomic, strong) QBElectricityUsage *electricityUsage;
@property (nonatomic, assign) CGFloat tabBarHeight;
- (void)updateElectricityUsage;
@end

QBElectricityBaseVC.m

@implementation QBElectricityBaseVC
- (instancetype)init {
    self = [super init];
    if (self) {
        self.tabBarItem = [[UITabBarItem alloc] initWithTitle:NSLocalizedString(@"electricity_title", nil) image:nil tag:0];
    }
    return self;
}
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    [self.notificationCenter addObserver:self selector:@selector(updateElectricityUsage) name:kUpdatedElectricityUsageKey object:nil];
}
- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    [self.notificationCenter removeObserver:self];
}
- (void)updateElectricityUsage {
    self.electricityUsage = [self.stateManager electricityUsage];
}
- (CGFloat)tabBarHeight {
    return self.tabBarController.tabBar.frame.size.height;
}
@end

What should I test?

An observer for kUpdatedElectricityUsageKey is added
self.electricityUsage becomes an instance of QBElectricityUsage
A tabBarHeight is returned
An observer for kUpdatedElectricityUsageKey is removed

Am I missing anything I should test, or testing something I really shouldn't?

How do I test? I am trying to do this using Specta and Expecta. If I need to mock anything I would be using OCMockito. I really don't know how to test that the observer is added/removed.
I see the following in the Expecta documentation, but am not sure if it's relevant or how to use it:

expect(^{ /* code */ }).to.notify(@"NotificationName"); — passes if a given block of code generates an NSNotification named NotificationName.
expect(^{ /* code */ }).to.notify(notification); — passes if a given block of code generates an NSNotification equal to the passed notification.

To test that self.electricityUsage becomes an instance of QBElectricityUsage, I could create a category with a method that just pretends the notification fired and calls the updateElectricityUsage method, but is this the best way? And as for tabBarHeight, should I just test that it returns a valid CGFloat and not worry what the value is?

UPDATE

I changed my viewWillAppear method to look like below:

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    [self addNotificationObservers];
}
- (void)addNotificationObservers {
    [self.notificationCenter addObserver:self selector:@selector(updateElectricityUsage) name:kUpdatedElectricityUsageKey object:nil];
}

And then I created the following test:

#import "Specs.h"
#import "QBElectricityBaseVC.h"
#import "ElectricityConstants.h"

SpecBegin(QBElectricityBaseVCSpec)

describe(@"QBElectricityBaseVC", ^{
    __block QBElectricityBaseVC *electricityBaseVC;
    __block NSNotificationCenter *mockNotificationCenter;

    beforeEach(^{
        electricityBaseVC = [QBElectricityBaseVC new];
        mockNotificationCenter = mock([NSNotificationCenter class]);
        electricityBaseVC.notificationCenter = mockNotificationCenter;
    });

    afterEach(^{
        electricityBaseVC = nil;
        mockNotificationCenter = nil;
    });

    it(@"should have a notification observer for updated electricity usage", ^{
        [electricityBaseVC addNotificationObservers];
        [verify(mockNotificationCenter) addObserver:electricityBaseVC selector:@selector(updateElectricityUsage) name:kUpdatedElectricityUsageKey object:nil];
    });
});

SpecEnd

That test now passes, but is this the correct/best way to test this?
I have the exact same questions - wish there were more documentation out there. Also, why didn't you go with XCTest?

You've just felt one big con of iOS ViewControllers: they suck at testability.

ViewControllers mix the logic of managing the view and the model
This leads to massive ViewControllers
This violates the Single Responsibility Principle
This makes code not reusable

"Another big problem with MVC is that it discourages developers from writing unit tests. Since view controllers mix view manipulation logic with business logic, separating out those components for the sake of unit testing becomes a herculean task. A task that many ignore in favour of… just not testing anything." Article - source

Maybe you should think about using MVVM instead. This is a great article explaining the difference between iOS MVC and MVVM. The great thing about using MVVM is that you can use data binding with ReactiveCocoa. Here's a tutorial that will explain data binding with MVVM and reactive programming in iOS.

I follow 2 practices for testing the pieces of a UIViewController. MVVM - with an MVVM pattern you can very easily unit test the content of your views in the unit tests for your ViewModel classes. This also keeps your ViewController logic very light, so you don't have to write as many UI tests to cover all of those scenarios. KIF - then for UI testing I use KIF, because its test actor helps handle async and view-loading delays. With KIF I can post a notification from my code and my test will wait to see the effects of my notification handler in the view. Between those 2 systems I'm able to unit test pretty much everything and then very easily write the UI tests to cover the final parts.

Also, a quick note on your code: I wouldn't add your observers in viewWillAppear, because it is called more than once. However, it may not be an issue, since you probably won't get redundant calls to your handler because of notification coalescing.
common-pile/stackexchange_filtered
Deadline for Electing S-Corp Status -- What does it mean?

Got a new gig as a contractor, and I am considering forming an S-Corp for various reasons. I have seen a few references to the deadline for electing S-Corp status being March 15th. One example: http://smallbiztrends.com/2012/02/s-corp-deadline-approaches.html — "If your business is a corporation, you're already aware that March 15th is the most critical tax deadline of the year. But March 15th is an important deadline for another reason… it's the deadline for electing S Corporation status." What I don't understand is, what does this mean? Since that date is past for 2013, must I operate as a sole proprietor until next year?

No, the deadline is for an existing LLC/C-Corp to choose to be taxed as an S-Corp for the tax year 2013. You haven't formed a corporation at all, so it's irrelevant for you. Once you do form your corporation (i.e., file the relevant documents with the responsible State agency - Secretary of State/Department of Corporations, etc.), you have 75 days to make the election (by filing IRS Form 2553). Consider having a consultation with a CPA/EA and a lawyer (I'm neither) about whether you really need it. For most cases an LLC will suffice. Some professionals aren't allowed to operate under an LLC in some states, though.
common-pile/stackexchange_filtered
Selecting strongest SIFT features for face recognition

I'm trying to build Python code that recognizes a human face. I extracted SIFT features of the training face and the tested face and matched them as in the following code:

img1 = cv2.imread("path\of\tested\image")
img2 = cv2.imread("path\of\trained\image")
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
# Brute Force Matching
bf = cv2.BFMatcher(cv2.NORM_L1, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)
matching_result = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None, flags=2)

I want to select the strongest features among them, to compare the two faces and decide whether they are of the same person or not. How can I recognize faces based on SIFT features? Can anyone please help me? Any hint may be useful; I'm a beginner. Thanks.

Using SIFT features, one popular way is to create a Bag of Visual Words framework, where you take all of the features detected from all of the faces and create a dictionary, usually with k-means. Once you find these clusters, for each face you figure out which feature maps to which cluster, then build a histogram. You take these histograms and train a classification model. This is a good place to start: https://towardsdatascience.com/bag-of-visual-words-in-a-nutshell-9ceea97ce0fb. I'd write a complete answer for you, but I don't have access to your data. Good luck!

@rayryeng it's a good idea, I will try to implement it. Thanks a lot.

As @rayryeng said, a great solution would be to work with the Bag of Visual Words/Features approach. The Bag of Visual Features (BoVF) is inspired by the Bag of Words (BoW) used in the areas of Natural Language Processing (NLP) and Information Retrieval (IR), where it is applied, for example, in text categorization, where the classification of a document is given by the frequency of words.
The Bag of Visual Features, on the other hand, is characterized by image categorization, where the classification of an image is given by the frequency of visual features, making it a simple and computationally cheap approach. Below I leave my code on GitHub for the Bag of Visual Features approach with Multilayer Perceptron (MLP) and Support Vector Machine (SVM) classifiers, for the MNIST, CIFAR-10, and FER-2013 visual datasets. The cool thing about these repositories is that you can test the feature detection and description task not only with the local descriptor SIFT, but also with other local descriptors (e.g., SURF, KAZE) and local binary descriptors (e.g., BRIEF, ORB, BRISK, AKAZE, FREAK). BoVF with MLP Classifier; BoVF with SVM Classifier. I didn't put the code directly here, as it would make for too long an answer. If the answer solves your problem or you believe it is the best solution, please mark it as accepted! Thank you!
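The histogram step of that pipeline can be sketched with nothing but the standard library. The toy 2-D "descriptors" and the two fixed centers below are stand-ins (assumptions): real SIFT descriptors are 128-D, and the centers would come from k-means over all training descriptors.

```python
import math

def nearest_center(desc, centers):
    # Index of the closest visual word, by Euclidean distance.
    return min(range(len(centers)), key=lambda i: math.dist(desc, centers[i]))

def bovf_histogram(descriptors, centers):
    # Count how often each visual word occurs, then normalize so that images
    # with different numbers of keypoints produce comparable histograms.
    counts = [0] * len(centers)
    for d in descriptors:
        counts[nearest_center(d, centers)] += 1
    total = sum(counts) or 1  # avoid division by zero for an empty image
    return [c / total for c in counts]

# Toy example: 2 "visual words" and 4 "descriptors".
centers = [(0.0, 0.0), (10.0, 10.0)]
descs = [(0.1, 0.2), (9.8, 10.1), (0.0, 0.4), (10.2, 9.9)]
print(bovf_histogram(descs, centers))  # → [0.5, 0.5]
```

These normalized histograms are the fixed-length feature vectors you would then feed into an SVM or MLP classifier, one histogram per face image.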
common-pile/stackexchange_filtered
Parse and TryParse

I wanted to know what others are doing with Parse/TryParse for decimals, as an example. In PHP, I would write a method called GetValueOrDefault(value, default) that takes two parameters - the first is a string or value you'd LIKE to try to parse as an integer or decimal, and the second is the default should that fail. I want something like this in .NET, but I'm seeing this:

decimal salary = 0.0m; // initialize salary
if (!Decimal.TryParse(myStringValue, out salary))
    salary = 0;

So for me, the TryParse boolean return type is not necessary - I'm not taking any special action, and don't care. All I want is: "If the conversion fails, then default back to what I initialized the value to; otherwise don't do anything." Also, I don't want to use Parse, as my understanding is that it will throw an exception. I don't need to try...catch because I've already initialized the variable (what more should I do in the catch? I don't need to log anywhere). Should I really write an extension method or helper utility method to do the above? I feel like there is an easy solution to this that I'm clearly missing. Thanks all!

Yes, if you want a method that does something different from the existing methods, you should write your own method. Clearly, implementing the method you want is easy enough thanks to the TryParse methods. If your default value is always 0, you don't need to write another method: if TryParse() fails to parse your string, it sets the value of the number to 0. I would consider avoiding naming your method GetValueOrDefault() - Nullable<T> types already have that method hanging off of them. Why not TryParseOrDefault()? @GoldenDragon is right.
For your scenario (if you don't need to know whether it failed or not), removing the if gives you the same result. Out parameters don't have to be initialized, so you could use this:

decimal salary;
if (!Decimal.TryParse(myStringValue, out salary))
    salary = 0.0m;

If you don't care whether or not it parsed, there's no real need to wrap it in an if. If it fails, however, your salary out parameter will be zero. So in a situation where you've initialized salary to something besides zero, you should care whether it parsed. If you don't check the return value, you can't tell the difference between an unparsed value and a value that happened to parse to zero. This is only an option if the desired default value is zero, which is not the case for this question.

var salary = Decimal.TryParse(myStringValue, out var value) ? value : 0;

I always use an extension method, very similar to what usr suggested, with a slight variance:

public static decimal GetDecimalOrDefault(this string value, decimal defaultDecimal)
{
    try
    {
        return Convert.ToDecimal(value);
    }
    catch (Exception ex)
    {
        return defaultDecimal;
    }
}

Then you can call it like this:

decimal finalValue = 0.0m;
string myStringValue = "123.456";
finalValue = myStringValue.GetDecimalOrDefault(finalValue);

Exceptions should not be used for control flow. @Servy What's the reasoning behind that? I feel my solution satisfies the requirements of the question, and unless there's something performance-impacting about it, I don't think it's necessarily a wrong answer. There's a huge performance impact on every exception; that's a bad solution.
common-pile/stackexchange_filtered
Access violation error while attempting to transfer a large file using HTTP / REST server

Using a REST library, I am trying to set it up as a file sharing server, but am running into issues when transferring large files. As I understand it, the file transfer should mean opening a stream to the file, getting its buffer into a stringstream, then writing it within a response body. This seems to work with small files of only a few bytes or KB, but anything larger fails.

std::string filePath = "some_accessible_file";
struct stat st;
if (stat(filePath.c_str(), &st) != 0)
{
    // handle it
}
size_t fileSize = st.st_size;
std::streamsize sstreamSize = fileSize;
std::fstream str;
str.open(filePath.c_str(), std::ios::in);
std::ostringstream sstream;
sstream << str.rdbuf();
const std::string str1(sstream.str());
const char* ptr = str1.c_str();
response.headers().add(("Content-Type"), ("application/octet-stream"));
response.headers().add(("Content-Length"), fileSize);
if (auto resp = request.respond(std::move(response))) // respond returns shared pointer to respond type
{
    resp->write(ptr, sstreamSize); // Access violation for large files
}

Not quite sure why large files would fail. Does file type make a difference? I was able to transfer small text files etc., but a small pdf failed...

What is a here? Also add std::ios::binary to the constructor of str, if only to indicate that it might return binary data. Oops, that was the streamsize. Corrected now. Just for paranoia's sake, can you check if sstreamSize == str1.size()? Also, are you on Windows by any chance? @Botje hmm... they are not the same. E.g., sstreamSize is the actual file size (around 1 MB), whereas str1.size() gives only 600... in the code, if I use resp->write(ptr, str1.size()), there is no error, but the file being downloaded is only 600 bytes, so it is written incorrectly. Not sure where the issue is. Both client and server are on Windows. And you are reading in binary mode, right? I am.
std::fstream str;
str.open(filePath.c_str(), std::ios::in, std::ios::binary);

No, that should be str.open(filePath.c_str(), std::ios::in | std::ios::binary). If you read a binary file in text mode, Windows will stop as soon as it sees a 0x1A character. @Botje Ah, yes. That was indeed the issue. Thanks, that seems to have solved the problem. If you write it out as the answer, I'll accept it.

The root cause of this error was std::fstream not reading the entire file because it was opened in text mode. On Windows, this makes reading stop at an end-of-file (0x1A) character. The fix is to open the file in std::ios::binary mode.
common-pile/stackexchange_filtered
ionic 3 - UnhandledPromiseRejectionWarning when generating android build

I am using Ionic 3.19, and when I try to create an Android build I get the following error. With iOS it's fine. What is this problem and how do I solve it? I am using node v8.9.1.

(node:49779) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: spawn EACCES

Here is the execution and the problem output:

$ ionic cordova build android --release
Running app-scripts build: --platform android --target cordova
[18:01:13] build dev started ...
[18:01:13] clean started ...
[18:01:13] clean finished in 6 ms
[18:01:13] copy started ...
[18:01:13] deeplinks started ...
[18:01:13] deeplinks finished in 97 ms
[18:01:13] transpile started ...
[18:01:17] transpile finished in 4.06 s
[18:01:17] preprocess started ...
[18:01:17] preprocess finished in less than 1 ms
[18:01:17] webpack started ...
[18:01:17] copy finished in 4.30 s
[18:01:25] webpack finished in 7.47 s
[18:01:25] sass started ...
Without `from` option PostCSS could generate wrong source map and will not find Browserslist config. Set it to CSS file path or to `undefined` to prevent this warning.
[18:01:26] sass finished in 1.26 s
[18:01:26] postprocess started ...
[18:01:26] postprocess finished in 13 ms
[18:01:26] lint started ...
[18:01:26] build dev finished in 13.04 s
> cordova build android --release
ANDROID_HOME=/Users/megasap/Library/Android/sdk
JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_71.jdk/Contents/Home
(node:49779) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: spawn EACCES
(node:49779) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
[18:01:29] tslint: src/pages/attandance-detail/attandance-detail.ts, line: 175
'marker' is declared but never used.
L174: console.log(location)
L175: let marker = new google.maps.Marker({
L176: position: location,
[18:01:29] tslint: src/pages/home/home.ts, line: 260
'marker' is declared but never used.
L259: addMarker(location, map) {
L260: let marker = new google.maps.Marker({
L261: position: location,
[18:01:29] lint finished in 3.00 s

UPDATE: This is the execution and error with node v9.4.0. It has a bit more detail than the previous run on node v8.9.1: https://gist.github.com/axilaris/4cc7094c7dae28477eb2f348e53fad91

Have you resolved it? Yes! Wait a sec, this did the trick: sudo chmod 777 /Applications/Android\ Studio.app/Contents/gradle/gradle-4.1/bin/gradle Thanx! It's a horrible error message for this simple problem :/ chmod +x .../bin/gradle would be cleaner. And if anyone uses Windows, then what? I just had this problem; after updating Android Studio, it updated gradle from 4.1 to 4.4, so I had to update my environment variable to the new gradle location /opt/android-studio/gradle/gradle-4.4/bin and do chmod 777 gradle.
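The accepted fix boils down to restoring the execute bit on the bundled gradle launcher. A stand-in demonstration (the real target would be a path like the gradle-4.x/bin/gradle mentioned in the comments; the stub file here is purely illustrative):

```shell
# Create a stand-in for a gradle launcher that lost its execute bit.
printf '#!/bin/sh\necho gradle-stub\n' > gradle-stub
chmod -x gradle-stub            # simulate the broken state that causes spawn EACCES

# The fix: chmod +x is tighter than the chmod 777 used in the comments,
# since it only adds the execute bit instead of opening all permissions.
chmod +x gradle-stub
test -x gradle-stub && echo "gradle-stub is executable"
./gradle-stub                   # now runs instead of failing with EACCES
rm gradle-stub
```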
common-pile/stackexchange_filtered
How to assign ancestor to descendant type in PHP - class type casting

I am trying to assign an ancestor (A) to a descendant (B) type variable and call a descendant method on the ancestor in PHP, but I can't get it working.

class B extends A {
    public function c() {
        echo 'B.c';
    }
}

I have tried:

Rewriting B's constructor:

class B extends A {
    public function __construct($parent) {
        $this = $parent;
    }
    ...
}

$this cannot be rewritten, so it's a no-go.

This thing (I forget what it is called):

function d(B $b) {
    $b = new A();
    $b->c();
}

Explicit type casting:

function d() {
    $b = (B) new A();
    $b->c();
}

Or:

function d() {
    $b = settype(new A(), 'B');
    $b->c();
}

This doesn't work either, as settype only allows certain types, not other user-defined object types.

http://php.net/manual/en/keyword.extends.php see examples :) "call descendant method on ancestor in PHP" ????

All the information I've seen suggests that there is no clean way to modify an object's type. (PHP merely fakes certain kinds of casting; e.g., there is an (int) operator, but type int is unknown by itself.) The approaches I've found are of two types: serialize your object, hack the string to modify its type, and deserialize (see here or here - equivalent). Or: instantiate a new object and copy over all its properties (see here). I'd go with the second approach, since you seem to be dealing with a known object hierarchy:

class B extends A {
    public function __construct(A $object) {
        foreach ($object as $property => $value) {
            $this->$property = $value;
        }
    }
}

If you have some control over the creation of the original object, consider implementing a factory pattern so that your object is created with type B from the start.
common-pile/stackexchange_filtered
What does 'erotic in nature' mean?

He turned the page of his book. '"Each wheel is divided into eight thick and thin spokes, dividing the day into eight equal parts. The rims are carved with designs of birds and animals, whereas the medallions in the spokes are carved with women in luxurious poses, largely erotic in nature."' I am wondering what the bold part could mean. What is more, would you please give some examples, or a vivid example, to make it clear? Many thanks.

What exactly has you wondering? Try googling "classic erotic art" and click on Images. If that's not vivid enough, I'd be happy to answer any question that you still may have... The nature of the poses was erotic. The poses were erotic in nature. Which part do you not understand: "nature" or "erotic"? If it's "nature" you stumbled upon, see my comment here: http://ell.stackexchange.com/questions/51194/nature-of-the-data-nature-implication "Erotic in nature" has a connection with excitement.

"Erotic" means having to do with sex. When we say that something is "X in nature", we mean that it is characterized by X. For example, if I say, "This writing is technical in nature", I mean that an important characteristic of this writing is that it is technical. Often "in nature" is an unnecessary extra phrase: "This book is technical" versus "This book is technical in nature" mean pretty much the same thing; the phrase just adds some emphasis. Sorry, I will not post "vivid examples" of women in erotic poses. You'll have to find that sort of picture on your own.

Thanks. Yes, that was the specific answer I needed. Thanks all.
common-pile/stackexchange_filtered
Importance of singularities within the process of Contour Integration?

From Stein and Shakarchi's Complex Analysis. I'm having trouble understanding the process of contour integration, specifically the notion of "singularities". I observed that if $\zeta \in \mathbb{R}$, then the following holds in 1.):

$$ 1.) \, \, \, \, \, \, \, \, e^{-\pi\zeta^2}=\int_{-\infty}^{\infty}e^{-\pi x^2}e^{-2 \pi i x \zeta}dx$$

Initially I also made the observation that if $\zeta=0$, then we have in 2.):

$$2.) \, \, \, \, \, \, \, e^{-\pi(0)^2}=\int_{-\infty}^{\infty}e^{-\pi x^2}e^{-2\pi ix(0)}dx$$

Then obviously we arrive at the following integral in 3.):

$$ \, 3.) \, \, \, \, \, \, \, 1 = \int_{-\infty}^{\infty}e^{-\pi x^2}dx$$

Initially the authors had us suppose that $\zeta > 0$ and consider the function $f(z)= e^{-\pi z^2}$, which is holomorphic within the interior of the toy contour pictured in Figure 1.). By Cauchy's Theorem, I was able to make the observation that the following conclusion in 4.) holds:

$$4.) \, \, \, \int_{\gamma_R}f(z)\,dz=0$$

Next, the function considered earlier was integrated over the real segment as follows in 5.):

$$5.) \, \, \, \, \int_{-R}^{R}e^{-\pi z^2}\,dz$$

I'm having trouble justifying the initial step in 5.); from what I understand, the function being integrated is initially called a "singularity". In summary, my question is: What are singularities, why are they important within complex analysis, and what kind of singularity does the function that we're integrating have?

Perhaps this will help: http://mathworld.wolfram.com/Singularity.html How would we determine what kind of singularity our function has? Often, based on the Laurent series. Interesting - how would you go about determining what kind of singularity it is through the Laurent series? I understand how it would be used to show a function f(z) is analytic, but how does this tool work in terms of singularities?
Well, it tells you the order of the pole, or if it is an essential singularity. The main result of contour integration is the residue theorem. This concerns a function $f$ which is holomorphic (i.e. complex differentiable) at all but finitely many points in its domain. It says that if you integrate such a function around a simple closed path $C$ with counterclockwise orientation, then the contour integral of $f$ around $C$ is $2 \pi i$ times the sum of all the residues of $f$ which are enclosed by the contour.* The residue of $f$ at a point $z_0$ is the coefficient of $(z-z_0)^{-1}$ in the Laurent expansion of $f$ about $z_0$; it will be nonzero only at a singularity of $f$ (singularities being just points where the function fails to be holomorphic). The idea of the proof is simple enough; you can gain intuition by just computing the contour integral of $z^k$ around a loop for each integer $k$. It will vanish whenever $k \neq -1$ (because such functions have an unambiguous antiderivative, namely $\frac{z^{k+1}}{k+1}$), but it will not vanish when $k=-1$ (essentially because the complex logarithm is multivalued). When we apply contour integration, we generally write down a family of closed curves in $\mathbb{C}$ which either exactly contain the region we want to integrate over or contain a region that approaches the region we want to integrate over. We then exactly compute these contour integrals, and then we take a limit that allows us to write our desired quantity in terms of the contour integral. Often the integration over the parts of the contour that we are not interested in will go to zero; other times we must actually compute them and subtract them from the integral over the whole contour to get what we want. The most elementary example I can think of is $\int_{-\infty}^\infty \frac{1}{1+x^2} dx$. (This example is a bit boring, because we can do it using elementary calculus, but it illustrates the main concept.)
To compute this, we use a semicircular contour in the upper half plane (or lower half plane, it doesn't matter in this case) centered at the origin and with radius $R$, and send $R \to \infty$ at the end of the calculation. For $R>1$, this contour will enclose the pole $i$ and no other poles. So our desired integral is $2 \pi i \operatorname{Res}(1/(1+z^2),i)$ minus the integral over the upper contour, which we then have to separately show goes to zero. * Technically this can be generalized to non-simple closed paths, but I'll avoid this generalization. Just to clarify what is a meromorphic function ? @ZION I realized that I was actually being unnecessarily restrictive, so I changed it.
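To make the residue computation in this example concrete, here is a sketch (the arc estimate is stated without full proof). Factoring the denominator,

$$\frac{1}{1+z^2}=\frac{1}{(z-i)(z+i)}, \qquad \operatorname{Res}\left(\frac{1}{1+z^2},\, i\right)=\frac{1}{2i},$$

so for the semicircular contour $C_R$ of radius $R>1$ the residue theorem gives

$$\oint_{C_R}\frac{dz}{1+z^2} = 2\pi i\cdot\frac{1}{2i} = \pi.$$

On the arc, $|1+z^2|\ge R^2-1$, so the arc contribution is bounded by $\frac{\pi R}{R^2-1}\to 0$ as $R\to\infty$, leaving

$$\int_{-\infty}^{\infty}\frac{dx}{1+x^2}=\pi,$$

which agrees with the elementary antiderivative $\arctan x$ evaluated at $\pm\infty$.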
What is the path for the hive beeline command history file? I am using beeline to connect to Hive and running multiple commands. I am not able to find the path where I can view the history of all executed Hive commands. You can view all executed beeline command details at the following path: $HOME/.beeline/ For example, if you are starting beeline as the root user, then the file path will be: /root/.beeline/history Environment details: OS: RHEL6 and RHEL7; Apache Hive (version 1.1.0-cdh5.12.2); Driver: Hive JDBC (version 1.1.0-cdh5.12.2); Beeline version 1.1.0-cdh5.12.2 by Apache Hive
Why I don't get any response? I am learning NodeJS with express now. This is my server: const express = require('express'); const helmet = require('helmet'); const router = express.Router(); const response = require('./network/response') var app = express(); app.use(helmet()); app.use(express.json()); app.use(express.urlencoded({ extended: false })) app.use(router); router.get('/message', (req, res) => { response.success(req, res, `Lista de mensajes 1000`); }) router.post('/message', (req, res) => { if (req.query.error == 'ok') { response.error(req, res, `Error simulado`, 401) } else { response.success(req, res, `Creado correctamente`, 200); } }) router.delete('/message', (req, res) => { res.send(`Mensaje eliminado`); }) app.listen(3000, () => { console.log(`La aplicacion se esta escuchando en puerto 3000`); }) and this is my network module: exports.success = function (req, res, message, status) { res.status(status || 200).send({ error: '', body: message }); } exports.error = function (req, res, message, status) { res.status(status || 500).send({ error: message, body: '' }); } network module help me to have a better control of HTTP request. The problem is that when I make a POST request I never get the response, is just loading and loading. I am trying to get the error but nothing. This is the request: http://localhost:3000/message?error=ok How are you actually submitting the POST request? (ie. are you using curl?) @dave I am using postman and Insomnia too, I get the same error in both @AmirPopovich I want to recieve the response.error no the response.success. To get the error I need to pass a query called error with value 'ok', I am doing it, but the petitions just loading loading and I never get the response. as @AmirPopovich pointed out, the code seems to work fine. I got the same response he posted and HTTP/1.1 401 Unauthorized response as expected. I issued the API call through curl -i -X POST http://localhost:3000/message\?error\=ok. 
The code looked good to me so I've copied it and ran in on my machine. The POST gives me a 401 response: {"error":"Error simulado","body":""} Thanks @dave, I will restart postman to check I still getting the same problem. I will publish a image. I never restarted the server, because for some reason the nodemon did not do it automatically. I am sorry for wasting your time. It is working mate. check these curl requests : curl -XPOST -data '{"data":"data"}' http://localhost:3000/message curl: (3) Port number ended with '"' {"error":"","body":"Creado correctamente"} POSTMAN : curl -XPOST -data '{"data":"data"}' http://localhost:3000/message?error=ok curl: (3) Port number ended with '"' {"error":"Error simulado","body":""}
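Since the helpers turned out to be correct once the server was restarted, they can also be sanity-checked in isolation with a hand-rolled stand-in for Express's res object (the makeRes mock below is purely illustrative and not part of Express):

```javascript
// Response helpers from the question, copied verbatim
function success(req, res, message, status) {
  res.status(status || 200).send({ error: '', body: message });
}
function error(req, res, message, status) {
  res.status(status || 500).send({ error: message, body: '' });
}

// Hypothetical minimal stand-in for Express's res object, just enough
// to observe which status code and payload the helpers produce
function makeRes() {
  const out = {};
  return {
    status(code) { out.code = code; return this; },
    send(payload) { out.payload = payload; },
    out,
  };
}

const ok = makeRes();
success(null, ok, 'Creado correctamente');        // no status given, defaults to 200
console.log(ok.out.code, ok.out.payload.body);    // prints: 200 Creado correctamente

const err = makeRes();
error(null, err, 'Error simulado', 401);
console.log(err.out.code, err.out.payload.error); // prints: 401 Error simulado
```

This confirms the status-defaulting behavior (200 for success, 500 for error) without needing a running server or a client such as Postman.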
Finding a way between points given start and end coordinates in a matrix in C

In the code below, I tried to write a program that checks whether there is a path consisting of 0s from the starting coordinate (sx,sy) to (dx,dy). For instance, from (0,0) to (3,3) there seems to be a path of 0s, and the output should be true. But I am not getting the correct result; it doesn't work the way I want. Can you help me find my mistake?

#include <stdio.h>
#include <stdbool.h>

#define N 5

void dfs(int adj[][N], int i, int j, bool visited[][N]);
bool hasPathDfs(int adj[][N], int sx, int sy, int dx, int dy);

int main()
{
    int matrix[N][N] = {
        {1, 0, 0, 0, 0},
        {2, 3, 0, 3, 1},
        {0, 4, 0, 0, 0},
        {0, 0, 0, 2, 4},
        {5, 0, 0, 2, 5}};

    // Find path
    int sx = 0, sy = 0, dx = 3, dy = 3;
    printf("Find path from (%d,%d) to (%d,%d):\n", sx, sy, dx, dy);
    printf("DFS: %s\n", hasPathDfs(matrix, sx, sy, dx, dy) ? "true" : "false");

    return 0;
}

// Function definitions
void dfs(int adj[][N], int i, int j, bool visited[][N])
{
    if (i < 0 || i >= N || j < 0 || j >= N || adj[i][j] != 0 || visited[i][j]) {
        return;
    }
    visited[i][j] = true;
    dfs(adj, i - 1, j, visited); // Move up
    dfs(adj, i + 1, j, visited); // Move down
    dfs(adj, i, j - 1, visited); // Move left
    dfs(adj, i, j + 1, visited); // Move right
}

bool hasPathDfs(int adj[][N], int sx, int sy, int dx, int dy)
{
    bool visited[N][N];
    int i, j;
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            visited[i][j] = false;
        }
    }
    dfs(adj, sx, sy, visited);
    if (!visited[dx][dy]) {
        return false;
    }
    return true;
}

Have you tried running your code line by line in a debugger while monitoring the control flow and the values of all variables, in order to determine in which line your program stops behaving as intended? If you did not try this, then you may want to read this: What is a debugger and how can it help me diagnose problems? You may also want to read this: How to debug small programs?
I suspect your algorithm doesn't even really start, because matrix[sx][sy] != 0, so it just immediately returns. Ditto matrix[dx][dy] will never be visited, for the same reason. Aside: if (!visited[dx][dy]) return false; else return true; is a really complicated way of saying return visited[dx][dy]; Yes, you're right, I found where the error is. dfs(adj, sx, sy, visited)'; here the dfs function will not start at all due to the 'adj[sx][sy]!=0' condition. So I need to change the condition for the starting point to be different from 0. Do you have any idea about this? You would probably need to remove the test for adj[i][j] != 0 in the bit that returns, and then only visit neighbours which are 0, i.e. for each of your recursions into neighbours, you only call them if they are 0. Your final test would be that a neighbour of [dx][dy] was visited, not that the actual destination was visited. Or, and this might "just work"... just set your [sx][sy]and [dx][dy] to 0 before you start the search. Your algorithm might then work as-is. Actually, what I want to do is create a way that combines 2 repeating numbers using 0 value cells and replace the 0 values in between with that number. The last situation you said came to my mind, but if I set the value (sx,sy) to 0, I cannot replace the 0s in the path with the number at the starting point. I've posted an answer to the question posed. I think if your requirements are different, you should post a new question. Thank you for your responses. You have been very helpful in finding the error and generating ideas about solutions. In fact, my aim was to ensure that this program finds a path between the two given coordinates and then develop it for my purpose. Once I fix this bug, if I come across any new bugs, I'll post it as a new question. Your void dfs() function looks almost correct. However, it immediately returns if the matrix entry at [i][j] is not zero. Your code has a start position of [0][0], and the matrix entry there is 1. 
Therefore your search doesn't even begin; it immediately fails because the start location does not contain 0. For the same reason, your destination position [3][3] will never be visited, because it contains the value 2. The simplest solution would be to manually set the start and end positions to the value 0 before starting the search:

bool hasPathDfs(int adj[][N], int sx, int sy, int dx, int dy)
{
    bool visited[N][N];
    int i, j;
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            visited[i][j] = false;
        }
    }

    // Mark the start and end positions as part of the '0' trail
    adj[sx][sy] = 0;
    adj[dx][dy] = 0;

    dfs(adj, sx, sy, visited);
    return visited[dx][dy];
}

Thank you very much for your reply. You helped me fix my mistake and now my program is working. For example, let the value at the starting point be 2. How can I replace all the 0 values in between with the number at the starting point after finding the path? @alperone12 You should post a new question with your new code to ask that question. Sir, I need your help. I am a bit weak in algorithms.
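A compilable sketch of the corrected search, pulled together in one place. One small variation from the answer above: the start and end cells are saved and restored, so the caller's matrix is left unchanged (that detail is my own addition, not from the original post):

```c
#include <stdbool.h>

#define N 5

/* Visit every 0-cell reachable from (i, j), marking it in visited. */
void dfs(int adj[][N], int i, int j, bool visited[][N])
{
    if (i < 0 || i >= N || j < 0 || j >= N || adj[i][j] != 0 || visited[i][j])
        return;
    visited[i][j] = true;
    dfs(adj, i - 1, j, visited); /* up    */
    dfs(adj, i + 1, j, visited); /* down  */
    dfs(adj, i, j - 1, visited); /* left  */
    dfs(adj, i, j + 1, visited); /* right */
}

/* True when a trail of 0s connects (sx, sy) to (dx, dy). The start and
   end cells are temporarily opened (set to 0) and then restored. */
bool hasPathDfs(int adj[][N], int sx, int sy, int dx, int dy)
{
    bool visited[N][N] = {{false}};
    int s = adj[sx][sy], d = adj[dx][dy];

    adj[sx][sy] = 0;
    adj[dx][dy] = 0;
    dfs(adj, sx, sy, visited);
    adj[sx][sy] = s;
    adj[dx][dy] = d;

    return visited[dx][dy];
}
```

With the matrix from the question, hasPathDfs(matrix, 0, 0, 3, 3) returns true, while an unreachable target such as (4, 4) returns false.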
Trying to test the onChange function of a React component form

So I have a Signup component that renders a form with a simple text field and a submit function. On entering text in the field, the 'address' attribute should be updated. In my test I'm trying to assert that the onChange function is called, stubbing the function using jest. However, when I try to simulate the change, I get the error:

TypeError: result.simulate(...) is not a function

If I remove the .bind(this), it gets to the point where it's setting the state in the function, but this is undefined. Here is my code:

import React, { Component } from 'react';

class Signup extends Component {
    constructor(props) {
        super(props);
        this.state = {};
    }

    onSubmit(e) {
        let {address} = this.state;
        this.setState({ address: "" });
        this.props.addFeed(address);
        e.preventDefault();
    }

    onChange(e) {
        this.setState({ address: e.target.value });
    }

    render() {
        return (
            <div>
                <form onSubmit={this.onSubmit.bind(this)}>
                    Please enter your address:
                    <input id='address' type="text" onChange={this.onChange.bind(this)} value={this.state.address}>
                    </input>
                    <input type="submit" value="Submit">
                    </input>
                </form>
            </div>
        );
    }
}

export default Signup;

And my test:

test("onChange() is called upon changing the text field", () => {
    const value = "Makers Academy"
    const onChange = jest.fn()
    const wrapper = shallow(<Signup onChange={onChange} />)
    const result = wrapper.find('#address')
    result.simulate('change', { target: { value: {value} } })('change');
    expect(onChange.called).toBe.true
})

You are trying to spy on onChange from props, but your component never uses that prop. A component's methods and a component's props are different things. You need to call this.props.onChange inside this.onChange.
import React, { Component } from 'react';

class Signup extends Component {
    constructor(props) {
        super(props);
        this.state = {};
    }

    onSubmit(e) {
        let {address} = this.state;
        this.setState({ address: "" });
        this.props.addFeed(address);
        e.preventDefault();
    }

    onChange(e) {
        this.setState({ address: e.target.value });
        // Call the onChange callback
        this.props.onChange();
    }

    render() {
        return (
            <div>
                <form onSubmit={this.onSubmit.bind(this)}>
                    Please enter your address:
                    <input id='address' type="text" onChange={this.onChange.bind(this)} value={this.state.address}>
                    </input>
                    <input type="submit" value="Submit">
                    </input>
                </form>
            </div>
        );
    }
}

And some fixes in your test (note that a jest.fn() mock has no .called property; that is sinon's API, so the assertion uses jest's toHaveBeenCalled instead):

test("onChange() is called upon changing the text field", () => {
    const value = "Makers Academy";
    const onChange = jest.fn();
    const wrapper = shallow(<Signup onChange={onChange} />);
    const result = wrapper.find('#address');
    result.simulate('change', {target: {value}});
    expect(onChange).toHaveBeenCalled();
});
How to instantiate a UITableViewCell from a nib One can add a tableviewcell in the nib of its tableviewcontroller, but how do you get to it? Apple has a great explanation on how to do this in the table view programming guide The link return: 404 HTTP_NOT_FOUND @Beppe probably because it's a 3 1/2 year old answer. I just updated the link though, try again. Thanks, googling "table view programming guide" is not that difficult, my comment was just to advise users that the link was broken and no actual update was needed, but I really appreciate your following up such an old answer. Two good articles on this (From Bill Dudney and Jeff LaMarche): http://bill.dudney.net/roller/objc/entry/uitableview_from_a_nib_file http://iphonedevelopment.blogspot.com/2009/09/table-view-cells-in-interface-builder.html
Mocking out parameters with JustMock I am writing unit tests and I need to mock the out parameter of the one of the target method dependencies with the following signature: bool TryProcessRequest(out string) I am using JustMock and I have tried to use DoInstead arrangement clause, but it seems that it is not so obvious. Please advise me how to achieve this, many thanks in advance. Show what you have tried so far and what you are actually trying to do. This option will probably suit you: var mock = Mock.Create<IYourInterface>(); string expectedResult = "result"; Mock.Arrange(() => mock.TryProcessRequest(out expectedResult)).Returns(true); string actualResult; bool isCallSuccessful = mock.TryProcessRequest(out actualResult); So for this you need to create a local variable with the desired value and use that in the out position.
tag search library (ASP).NET I've seen some awful forums with horrible searching. It's highly important to be able to find things in my db/app. I am considering writing my own, but before I do, what do you think are good tag search libraries? C# .NET or possibly ASP.NET. NOTE: I do not want text searching, only tags. Use Lucene.NET. I'm sure it can cover all that you want: http://incubator.apache.org/lucene.net/
Tag version check between two pipelines in Azure DevOps I have two CI pipelines in Azure DevOps: a CI pipeline to train models, and a CI pipeline to score/predict/inference new data. Both of these pipelines are triggered when a PR is created on a specific branch. I have enabled "Tag Builds" on success with the $(Build.BuildNumber) format; I believe that if the builds are successful, they are given these tags. I have a release pipeline, and what I want to do is check whether the tag/BuildNumber for the 1st and 2nd CI pipelines are the same or not. If not, the release pipeline should fail. The problem is I can't find any tag information for the CI pipelines; here is what I see after a build succeeds. I found out that it is not possible to check whether two or more tags are valid based on some logic in DevOps, so we ended up using a bash task and git commands to check whether the tags are valid (using a regex).
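For illustration, the kind of bash-task check described above might look like the sketch below. The tag value and the regex are assumptions (the real BuildNumber format is whatever the pipelines were configured with), so both would need to be adjusted to the actual build number scheme:

```shell
# Hypothetical tag check: accept tags shaped like a default BuildNumber,
# e.g. 20240101.3, and fail the step otherwise.
tag="20240101.3"   # in a real pipeline this would come from the build's tags
if echo "$tag" | grep -Eq '^[0-9]{8}\.[0-9]+$'; then
  echo "valid tag: $tag"
else
  echo "invalid tag: $tag" >&2
  exit 1
fi
```

The same pattern extends to the cross-pipeline comparison: fetch one tag per pipeline into two variables and exit 1 when they differ, which fails the release stage.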
How to post/create database rows [SQL Server] into the Salesforce API How do I post database rows into the Salesforce API and create records? I have all the login credentials, i.e. access token and client ID, and I want to create/post SQL Server database rows into the Salesforce API through C#. Can anyone kindly help me with this? You need to load the data from the SQL database, transform it to JSON format, and load it into Salesforce. You can use the REST API: curl https://yourInstance.salesforce.com/services/data/v20.0/sobjects/Account/ -H "Authorization: Bearer token" -H "Content-Type: application/json" -d<EMAIL_ADDRESS> Example request: { "Name" : "Express Logistics and Transport" } Example response: { "id" : "001D000000IqhSLIAZ", "errors" : [ ], "success" : true } You can also use the SOAP API. If you are working from .NET with the Salesforce APIs, you might find the Salesforce Toolkits for .NET to be a good starting point. They will handle most of the REST API calls for you. You mention moving SQL Server data directly into Salesforce. You might also consider a third-party tool like DBAmp that allows Salesforce to appear as a linked server in the database. Having your own C# code do the integration will give you more flexibility, but something like DBAmp could speed up a simple integration.
Configuring Livy with Cloudera 5.14 and Spark2: Livy can't find its own JAR files I'm new to Cloudera, and am attempting to move workloads from a HDP server running Ambari with Livy and Spark 2.2.x to a CDH 5 server with a similar setup. As Livy is not a component of Cloudera, I'm using version 0.5.0-incubating from their website, running it on one of the same servers as the YARN, Spark and HDFS masters. To keep a very, very long story short, when I try to submit to Livy, I get this error message: Diagnostics: File file:/home/livy/livy-0.5.0-incubating-bin/rsc-jars/livy-rsc-0.5.0-incubating.jar does not exist java.io.FileNotFoundException: File file:/home/livy/livy-0.5.0-incubating-bin/rsc-jars/livy-rsc-0.5.0-incubating.jar does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:598) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:811) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:588) at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:432) at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251) at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61) at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:364) at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:362) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920) at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:361) at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Failing this attempt. Failing the application. The jar it's referencing is part of the Livy installation, and obviously exists. It looks like at some point in the process, Hadoop is looking for a file with the URL file:/home... instead of just /home... or file:///home..., but I'm not sure that that's even relevant, as this may be a valid path for HDFS. I've gone as far as building multiple versions of Livy from source, modifying the launch script and remote debugging it, but this error seems to be occurring somewhere in Spark. Here is my livy.conf file: # What spark master Livy sessions should use. livy.spark.master = yarn # What spark deploy mode Livy sessions should use. livy.spark.deploy-mode = cluster livy.file.upload.max.size 300000000 And livy-env.sh: export HADOOP_CONF_DIR=/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/etc/hadoop export SPARK_HOME=/opt/cloudera/parcels/SPARK2-2.2.0.cloudera2-1.cdh5.12.0.p0.232957/lib/spark2 export HADOOP_HOME=/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hadoop The old cluster used Hadoop <IP_ADDRESS>.6.5.0-141 and Spark 2.2.1. The new cluster is running Hadoop 2.6.0-cdh5.14.2 and Spark 2.2.0.cloudera2. Using the old cluster's Livy distro as well as Cloudera's own Livy distribution all gave the same basic error. Again, all this stuff worked just fine on the previous HDP/Ambari cluster. All of those jar files exist on that path on every node, and I've also tried this with the jars in HDFS--Livy extracts them and then gives the same error message for the extracted jars. I also tried a bunch of stuff with permissions but none of it seems to work. 
For example, I get: 18/06/09 00:13:12 INFO util.LineBufferedStream: (stdout: ,18/06/09 00:13:11 INFO yarn.Client: Uploading resource hdfs://some-server:8020/user/livy/jars/livy-examples-0.4.0-SNAPSHOT.jar -> file:/home/livy/.spar kStaging/application_1528398117244_0054/livy-examples-0.4.0-SNAPSHOT.jar) from Livy's output, followed by... Diagnostics: File file:/home/livy/.sparkStaging/application_1528398117244_0054/livy-examples-0.4.0-SNAPSHOT.jar does not exist java.io.FileNotFoundException: File file:/home/livy/.sparkStaging/application_1528398117244_0054/livy-examples-0.4.0-SNAPSHOT.jar does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:598) ... from YARN's inevitable failure. Anyone have any thoughts? Would be happy to even just hear alternatives to Livy, if there are any... I fixed this by building Livy from the Cloudera repo with the string mvn clean package -DskipTests -Dspark-2.2.0.cloudera2 -Dscala-2.10. This version is outdated, has a broken UI, some of the Scala tests fail so they have to be skipped, and I didn't bother looking into how or why specifying 2.2.0.cloudera2 works. I also had to install Hue and its dependent services on the cluster. No other distribution of Livy, binary or source, worked.
Error: LogonUser failed with error code 1909 I'm facing a very strange issue with the LogonUser function of the Win32 API (documentation: https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-logonusera). A program contains this function, and when it is called, sometimes it succeeds and sometimes it fails. When it fails, it throws this log: System.ComponentModel.Win32Exception (0x80004005): The referenced account is currently locked out and may not be logged on to at ImpersonationHelper The thing I don't understand is that every 4 hours this function fails once, while the username, password and domain values do not change. I don't know why it can sometimes execute and sometimes cannot. Does anyone have an idea about this issue? What is hard to understand about "The referenced account is currently locked out"? That is happening on the domain side; it has nothing to do with your code. Why it is happening, who knows; ask the domain admin. For example, I work for a company whose network locks out my account several times a day for no apparent reason. I log in to the company VPN just fine and start doing work, and a few minutes later I'm locked out; then I wait a few minutes and log back in, and everything is fine for a while, and then I get locked out again. It is really annoying and counter-productive to my work at times when I need to access remote resources on the network. @RemyLebeau thank you for the comment. Do you think the domain admin can check why the account is locked out? I need to confirm, because to escalate to the domain admin I need to open a ticket and it will cost me a fee.
Take 20 random words, order them; what are the chances that the next word will come first in the order? I don't even know where to begin. Does the language even matter, or does it make no difference? The question is as follows: open a random English book and pick $x$ random words. Then pick another word. What are the chances that the last word you picked will come before all the other words you picked in alphabetical order? I think that you need to know the word count of the book in order to answer this. @FranklinP.Dyer can you prove that you need this information? Maybe... there are other things that you definitely need, though. For example, what if the book has duplicate words, and you pick all of the same word? @FranklinP.Dyer I can't prove that restricting the pickup to all-different words will change the result, because the same rules apply to the last word as well. The chance of picking a word that is lexicographically before the other words depends on the book. If you (randomly) pick a book that has fewer than 20 unique words (maybe a children's book), what do you do? Aside from that, you'd need to know the distribution of words in the book. Some words will almost always occur more frequently than other words in general, and in specific books some words will likely occur more often than they do when all of printed English is considered. E.g., in novels, the names of the main characters will occur more frequently than they do in general. Are names even words? @Χpẘ Names are words. What you are asking will help you answer the question of the chance of picking a specific word, but that's not my question. Can you prove that this information is necessary for predicting the lexicographical order of the last word? No serious thoughts, but you may be interested in this: https://en.wikipedia.org/wiki/Order_statistic. In particular, you probably should assume that the words are i.i.d. according to the distribution of the given book, which simplifies the theory.
Actually, is there any chance that the answer is just $1/21$? Some word has to be the first one, and if it's all iid from the same source... this might work but I lack confidence in probability. @EricStucky This is my prime gues as well. But how do you prove it? @Ilya_Gazman Suggest looking up the discrete Zipf distribution which was popularized by a linguist to describe distribution of words in English. The Pareto distribution is continuous analog to Zipf. The wikipedia article on Zipf says that "the" occurs 7% of the time. Since "the" is towards the end of the lexicographic order it probably won't be at the head of the list. Similarly "a" is top of order and is likely to be head of list. If "a" is third most frequently used word, Zipf says that it occurs $1/3$ as frequently as "the". As I said, tho, some words are more frequent in certain books. Assuming no words can be repeated in the list, the answer is $\frac{1}{21}$. Each of the $21$ chosen words is just as likely as any other to be the first lexicographically. If you want to be technical, there is a very, very small chance that two of the chosen 21 words are the same word, and furthermore are (depending on how you look at it) both first. This would make the chance ever so slightly higher than $\frac{1}{21}$, by an amount dependent on the word frequencies in the source. No, the distribution doesn't matter. It doesn't matter if some word is more prevalent than another. The key concept is that the order of choosing random words has no bearing on their lexicographical ordering. I think that duplicates does not matter, as there can be only one alphabetically min word, even if it has duplicates in the selected set. So this problem is equal to picking 21 apples from 21 apples where one of them is green and asking what are the chances that you picked green as the last apple.
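The symmetry argument behind the $\frac{1}{21}$ answer can be written out explicitly (assuming the draws are i.i.d. and almost surely distinct): if $W_1,\dots,W_{n+1}$ are i.i.d. with no ties, then by exchangeability every index is equally likely to hold the alphabetical minimum, so

$$P\Big(W_{n+1}=\min_{1\le i\le n+1}W_i\Big)=\frac{1}{n+1},$$

and with $n=20$ this equals $\frac{1}{21}$, independent of the word distribution of the book.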
Close any running application through another application programmatically Can we close any other application running in the background from our own application? I want functionality similar to Task Manager. Is it possible? Any hint? Thanks in advance.
angularjs - load dependencies whenever required instead of loading everything at startup I am using an AngularJS application with many dependency files. All the file paths are added in the index.html page and all dependencies in the app.js file, so whenever I start my application, all files get loaded at initialization. Instead of loading all JS files at startup, I want to load JS files only when they are required. For example: when I click on the home button, the app should load home.js, home.html, and the home-page-related dependencies only; similarly, when I redirect to the about-us screen, the app should load the about-us dependencies only. So what should I do to load only the respective files instead of loading all files at startup? I have to load more than 50 JS files, including controllers, directives, services, etc., and all the JS files carry heavily loaded data, so if I try to load all files at startup the application becomes slow. I have tried it with RequireJS, but is there any other way to do it? Have you looked into lazy loading techniques? It seems to me that's more or less what you want to do. First tell us: what is the size of all your JS and HTML (not including libraries), and what is the size of the libraries? @GabrielLovetro - I tried the lazy loading technique using RequireJS, but I didn't find a better way to do this. @Petraveryanov I have to load more than 50 JS files including controllers, directives, services, etc., and all the JS files have heavily loaded data. @ojuskulkarni - I myself haven't used lazy loading except for loading hundreds of pictures in a page. For that specific application it worked wonders! I was using Ionic/Angular (v1), though, so as far as I remember we didn't use RequireJS (we used a lazy loading directive/plugin). Can you answer the question: 50 JS files with a size of? Heavy load data (?) - how on Earth is this related to the question...
Adding an Itemid on the tag component page I just migrated my bilingual website to Joomla 3.3.1 and I am discovering the tag feature, which is awesome! When I click on a tag displayed near the title of an article, I am redirected to a page listing all the articles with this tag. So far so good. But the problem is that there isn't any menu displayed on that component/tags page! So I have been looking for a way to attach an Itemid to a specific component page, but I didn't come close to a solution. You can see what I mean here. Are you sure you have the menu module assigned to all pages? Thanks @Lodder! This does solve the issue, but it won't work, because one of my menu items redirects to a part of the website where the menu is not the same. So I have menu modules A and B, and all the A menu items redirect toward pages with the A menu, except menu item A-5, which redirects toward a page with the B menu module. The same goes for menu B. This means I can't choose the option "assigned to all pages". Is there a way to assign an Itemid to a component page? This sounds like there is no menu item for the tag component. Then you get something like /index.php/component/tags/tag/3-yellow as your URL. Try adding a menu item, for example of the type Tags » Compact list of tagged items. If you don't want this menu item visible, you can add it to a new menu. If you don't link this menu to a module, it will never show up on the site. But it will still be used for the SEF URLs. This way you can create nice URLs and even customise the view. Thanks Bakual, this is the solution! You're right: when I click on a tag I'm redirected to an address like component/tags/tag/science. So I did as you said: I created a new menu, added a new menu item (type tagged items), added the tag I want and assigned the menu module to this menu item... and voila! I preferred to choose "Tagged items" rather than "Compact list of tagged items" because I prefer to get the intro text.
The only downside is that I need to create one menu item per tag, which is a lot of work. But anyway, this is a better solution than the one selecting "on all pages", so thanks so much!
Haskell: implement a filter list with a fold function filterList :: (Eq a) => a -> [(a, b)] -> [(a, b)] > filterList "foo" [("foo", 1), ("bar", 2), ("foo", 3)] [("foo", 1), ("foo", 3)] I have figured out two ways to solve this problem. First way, with a list comprehension: filterList a ((x,y):xs) = [(b,c) | (b,c) <- ((x,y):xs), a==b] Second way, with a recursive function: filterList2 a [] = [] filterList2 a ((x,y):xs) | a==x = (x,y) : filterList2 a xs | otherwise = filterList2 a xs But I want to solve it with the foldr function, and I am stuck. filterList a ((x,y):xs) = foldr filter1 a ((x,y):xs) filter1 b ((x,y):xs) | b==x = (x,y) | otherwise = filter1 b xs which is not working. A little help is really appreciated. Note that the function you pass to foldr, so here filter1, is not given the full list. It is given an element, and the result of the foldr on the remaining list. The pattern filterList a ((x, y):xs) does not make much sense either. Since it should work on any list (including an empty one), you implement this as filterList a ls = foldr (filter1 a) ls. Your foldr also should have a "base case", the one used if the list is exhausted, as its second parameter. There are problems with both the filterList and the filter1 function. The filterList function has as pattern: filterList a ((x, y): xs) = … but that does not make much sense; the type signature and type inference will guarantee that it is a list of 2-tuples. Your pattern here will not work for an empty list, but filtering an empty list is likely still necessary. You thus should simplify it to: filterList a ls = … The foldr :: (a -> b -> b) -> b -> [a] -> b function is given three parameters: the fold function; the "base case", which is used when folding an empty list; and a list of elements. But you can not use a as the base case, since that base case also determines the type of the result. The base case here is the empty list. 
We also need to pass a to the filter1 function, so we can implement this as: filterList :: Eq a => a -> [(a, b)] -> [(a, b)] filterList a ls = foldr (filter1 a) [] ls Your filter1 function works on a list, but that is not how a foldr function works. The function you pass to foldr will be given an element, and the result of folding the rest of the list. Your function thus looks like: filter1 :: Eq a => a -> (a, b) -> [(a, b)] -> [(a, b)] filter1 a (x, y) rs = … Here a is the element we passed in and have to look for, (x, y) is the 2-tuple we are "folding in", and rs is the result of folding the rest of the list. So this is a list that is already filtered. I leave implementing filter1 as an exercise.
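The answer deliberately leaves `filter1` as an exercise, so here is only a language-neutral illustration of the same right fold, sketched in Python (the function and parameter names below are mine, not from the thread): `foldr f z xs` can be emulated with `functools.reduce` over the reversed list, which keeps exactly the shape described above — an element plus the already-folded rest.

```python
from functools import reduce

def filter_list(a, pairs):
    """foldr-style filter: keep the 2-tuples whose first component equals a."""
    def step(pair, acc):
        # acc is the already-folded result of the rest of the list,
        # playing the role of the `rs` parameter described above.
        return [pair] + acc if pair[0] == a else acc
    # foldr f z xs  ==  reduce(lambda acc, x: f(x, acc), reversed(xs), z)
    return reduce(lambda acc, p: step(p, acc), reversed(pairs), [])
```

For example, `filter_list("foo", [("foo", 1), ("bar", 2), ("foo", 3)])` yields `[("foo", 1), ("foo", 3)]`, matching the example at the top of the question.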
Google App Engine, Python: Google, Facebook, Twitter, OpenID accounts Does anyone know if there are alternatives to Django-SocialAuth which support Google, Facebook, Twitter and OpenID accounts? I prefer a webapp version instead of Django. Or if you have done it once, would you mind sharing it? Thanks a million. Try checking out http://code.google.com/p/gaema/ From the gaema introduction: gaema is a library that provides various authentication systems for Google App Engine. It is basically the tornado.auth module extracted to work on App Engine and independently of any framework. It supports login using: OpenId OAuth Google Accounts Facebook FriendFeed Twitter You can use one, all or a mix of these auth methods. This is done with minimal overhead: gaema is small and doesn't have any dependencies, thanks to the awesome work done by the Tornado crew. gaema only authenticates a user, and doesn't provide persistence such as sessions or secure cookies to keep the user logged in. Because each framework does these things in a different way, it is up to the framework to implement these mechanisms. You can get gaema from http://pypi.python.org/pypi/gaema. Does not seem to exist anymore.
matching fewer than all fields with MongoDB Let's say I have a MongoDB query that looks like this: result = db.collection.find( { 'fruit_type': 'apple', 'fruit_name': 'macintosh', 'primary_color': 'red', 'sheen': 'glossy', 'origin_label': 'true', 'stem_present': 'true', 'stem_leaves_present': 'true', 'blemish': 'none', 'firmness': 'moderate' } ) When I have exact matches, I want only the exact matches. When I don't have exact matches, then (and only then) I want other apples, with the only mandatory fields and values being 'fruit_type': 'apple' and 'primary_color': 'red'. Note: this question has been edited multiple times for clarity. Drive-by downvotes are not useful. Please explain why this is a bad question. I can only guess, but some people tend to see JavaScript as no real programming language. ;) Personally, I think it is an interesting question. The closest you can get to ensuring that you at least satisfy the mandatory criteria is to put all your optional query fields, together with one of the mandatory fields, in the $or operator, since it selects the documents that satisfy at least one of the expressions in the $or expression: result = db.collection.find( { 'fruit_type': 'apple', "$or": [ { 'primary_color': 'red' }, { 'fruit_name': 'macintosh' }, { 'sheen': 'glossy' }, { 'origin_label': 'true' }, { 'stem_present': 'true' }, { 'stem_leaves_present': 'true' }, { 'blemish': 'none' }, { 'firmness': 'moderate' } ] } ) The above query will select all documents in the collection where the fruit_type field value is apple and at least one of the $or clauses matches, for example where the primary_color field value equals red. If in your collection there is no document matching any of the $or clauses, then the above will not return any documents. Performance-wise, consider creating a compound index on the two mandatory fields if they are the commonly issued queries, since scanning an index is much faster than scanning a collection. 
For more details, read the docs sections on Optimize Query Performance and Behaviors - $or Clauses and Indexes @bahmait Ok, have you tested the query above? @bahmait Cheers for the clarification. In your first edit your question was rather unclear and I answered based on the premise that you need to include two mandatory fields in your query, the above just does that; it returns a match if at least the two mandatory fields match. I've accepted the answer finally, and tried to clarify again. I think what I wanted was to say basically: exact match OR inexact match.
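The asker's closing summary ("exact match OR inexact match") can also be implemented as two queries: run the exact filter first, and only when it returns nothing, rerun with just the mandatory fields. Below is a minimal sketch of that fallback logic over plain Python dicts (an in-memory stand-in; with pymongo you would simply issue `find` twice with the two filter documents — the function name here is mine):

```python
def find_with_fallback(docs, exact, mandatory):
    """Return exact matches if any exist; otherwise fall back to
    matching only the mandatory fields (fruit_type and primary_color)."""
    def matches(doc, criteria):
        return all(doc.get(k) == v for k, v in criteria.items())

    hits = [d for d in docs if matches(d, exact)]
    return hits if hits else [d for d in docs if matches(d, mandatory)]
```

This keeps the semantics the question asks for: the fallback documents are returned only when no document satisfies every field of the exact query.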
Is it reasonable in Python to check for a specific type of exception using isinstance? Is it reasonable in Python to catch a generic exception, then use isinstance() to detect the specific type of exception in order to handle it appropriately? I'm playing around with the dnspython toolkit at the moment, which has a range of exceptions for things like a timeout, an NXDOMAIN response, etc. These exceptions are subclasses of dns.exception.DNSException, so I am wondering if it's reasonable, or pythonic, to catch DNSException then check for a specific exception with isinstance(). e.g. try: answers = dns.resolver.query(args.host) except dns.exception.DNSException as e: if isinstance(e, dns.resolver.NXDOMAIN): print "No such domain %s" % args.host elif isinstance(e, dns.resolver.Timeout): print "Timed out while resolving %s" % args.host else: print "Unhandled exception" I'm new to Python so be gentle! That's what multiple except clauses are for: try: answers = dns.resolver.query(args.host) except dns.resolver.NXDOMAIN: print "No such domain %s" % args.host except dns.resolver.Timeout: print "Timed out while resolving %s" % args.host except dns.exception.DNSException: print "Unhandled exception" Be careful about the order of the clauses: The first matching clause will be taken, so move the check for the superclass to the end. Thanks Sven... that looks much nicer. From dns.resolver you can import some exceptions. 
(untested code) from dns.resolver import Resolver, NXDOMAIN, NoNameservers, Timeout, NoAnswer try: host_record = self.resolver.query(self.host, "A") if len(host_record) > 0: Mylist['ERROR'] = False # Do something except NXDOMAIN: Mylist['ERROR'] = True Mylist['ERRORTYPE'] = NXDOMAIN except NoNameservers: Mylist['ERROR'] = True Mylist['ERRORTYPE'] = NoNameservers except Timeout: Mylist['ERROR'] = True Mylist['ERRORTYPE'] = Timeout except NameError: Mylist['ERROR'] = True Mylist['ERRORTYPE'] = NameError +1 with the answer: if the exceptions are known, it's better to use different except blocks. But a last except dns.exception.DNSException would be wise, for handling of sub-exceptions without specific treatment (or to be sure to catch all DNS errors).
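The ordering rule from the accepted answer (the first matching `except` clause wins, so subclasses must come before their superclass) can be demonstrated without dnspython installed; the classes below are stand-ins that only mimic the real hierarchy:

```python
class DNSException(Exception):
    """Stand-in for dns.exception.DNSException."""

class NXDOMAIN(DNSException):
    """Stand-in for dns.resolver.NXDOMAIN, a subclass of DNSException."""

def classify(exc):
    try:
        raise exc
    except NXDOMAIN:
        return "no such domain"
    except DNSException:
        # Catch-all clause; if it came first, it would shadow NXDOMAIN.
        return "unhandled DNS exception"
```

Swapping the two clauses makes every DNS error hit the generic branch, which is exactly the pitfall the answer warns about.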
Is watching cartoons/anime for entertainment Haram? If it is Haram, as long as I don't believe what they include or say, am I committing shirk or am I going to get punished like the sketchers of the series? Is watching cartoons/anime for entertainment Haram? In my opinion it is not Haram. The prophet forbade drawings (of people); however, anime characters are usually unrealistic in origin. There are other reasons as well; if you are interested, check the following questions. If it is Haram, as long as I don't believe what they include or say, am I committing shirk or am I going to get punished like the sketchers of the series? If you believe something to be Haram, then doing it is a sin for which Allah might punish you or forgive you. That isn't something a person can know. The best thing is to stay away from things you believe are Haram and ask Allah for forgiveness. You think very differently from other higher-ranking people from this site, or the internet in general. You seem to have views that are more suitable for today's life. Citations needed. Your opinions don't matter. And anime characters are usually very realistic. First of all, I would say what counts in any matter is the intention. Then, when it comes to drawings and pictures etc., I think our dear brother @American-Muslim gave you a few helpful links and an answer, but I'd like to explain my point of view: My opinion is that one must take some facts into account. For example, when our Messenger died there were still many pagans in the Arabian peninsula, and there are still many in some of the "Muslim countries" nowadays. But the most relevant fact to me is that our Messenger (peace be upon him) forbade them because many people had just converted to Islam and rejected their idols (after a long time worshiping them), so there was a more or less slight chance that they would fall back into their former idolatry. So forbidding them, in this case, makes sense! 
But today you'll find, even in many Muslim countries, some (more or less modern) statues, and nobody even thinks of worshiping them, even if in some cases placing these statues in public places and admiring the person they represent could be regarded at least as something near to worship! Finally, I come to cartoons: First, watching TV and anything which could be regarded as a waste of time has, if we take it very strictly, always a touch of makruh. But we shouldn't forget that we are not angels and we need to relieve stress from time to time, so watching TV etc. could be a way to do so. So I can't see any harm in this unless what you are watching is somehow against the rules of Islam or regarded as a sinful act; in that case at least you will commit a sin! Now, as you said you don't believe in what you watch, and I personally can't see any link to shirk in cartoons (in general), I can't imagine why that should even lead to shirk or that you'll be punished for watching. But you still have to be aware that whether the act of watching would be considered sinful or not depends on what you are watching, on your intention and true belief, and on how you interact with or react to that (see also this Hadith, which has some relevance)! So as a conclusion I'd give you the same advice mentioned by @American-Muslim: if you believe that an act would be sinful, you should stay away and turn away from it! And here are some more or less related fatwas 1 2 in English and one in Arabic/Turkish for some more details and a more severe (salafi) point of view. And Allah knows best! One of the terms would be that no religious discrimination would be allowed. Also, not to mention that I only use IslamQA if there is no other site which would give me a proper answer. @HüdaverdiAlperenDemirok well, IslamQA is one of the rare sites that provide fatwas in English; that's why I do refer to it, as many people here don't know Arabic and the official language of this site is English! 
It is not haram, but watching inappropriate anime with girls in swimsuits and big breasts, even if it's unrealistic, is haram for all genders. Assalamualaikum. I do not know about cartoons and such, but I used to watch a lot of anime a few months ago, and based on this I recommend that you should not get yourself into anime. Trust me. Even anime which seem to be for a younger audience turn out to have sexual content. I have some examples, but I do not want to share them as I don't want people to start googling these. So, even if animations are halal (to which I do not know the rulings of scholars), I would recommend that you stay away from anime, and I think that the rest of the questions have been answered by others above (maybe lol). In the end, Allah knows best. In my opinion, something is haram if it takes you away from Allah (SWT). If you are missing your salahs by watching anime, then it is clearly haram; or if you are watching certain categories of anime like hentai, harem or ecchi, that is haram because it will lead you to doing something which is clearly haram and sinful. There are other types of anime, like shonen and shojo. Shonen anime are based on adventure, fighting, comedy etc. Shojo, on the other hand, focuses on romantic and funny anime. Shojo might lead you to committing a sin, so just be careful; there is a small chance. Both types of anime have beautiful anime girls, and you are not drawing these, so you should be fine. Shonen examples: One Piece, the Dragon Ball franchise, Naruto, AOT. Shojo examples: Kaichou wa Maid-sama, Tonari no Kaibutsu-kun, Kimi ni Todoke. There are also family-friendly anime like Doraemon, Shin-chan, Ninja Hattori etc. I DON'T HAVE A DIRECT ANSWER FOR THIS, BUT I DO KNOW THAT IT IS HARAM to draw anything that is depicted as living (having a soul). It is honestly hard to find an answer to this question, as there are many different opinions. ikeepmynamehidden, I have a question. 
So, I watch Naruto, right, and it has chakra and powers which come from Buddhism, but I ignore that and just pay attention to the plot and just think that chakra and all those powers are just for entertainment. They use the chakra and powers to do jutsus, such as shadow clone jutsus, etc. When I watch Naruto, it kind of helps me in Islam because it is about never giving up and the soundtracks are inspiring, so is it still haram to watch Naruto? Assalam o Allaikum. First of all, let me tell you I do not know whether this is Haram or not. But I surely do know that this can not be considered 'Shirk'. Because shirk is the act of worshiping someone else other than 'Allah', or including any other being in worship along with 'Allah'. Hello, I am probably late, but still: when it comes to the talk of anime, anime that fall into the categories ecchi, harem and hentai are all haram because they are related to sexual things. Before you watch an anime, check the genres and try to get info on those genres. The others, like Pokemon and Doraemon, are not haram. Statues are haram -> referring to Medi1Saif's answer. And regarding haram, intention is not of importance. You cannot do something haram like drinking alcohol and say "I have a good intention". Regarding watching cartoons: It is not unproblematic: music and drawing are problematic; the story is important, and it must not lead you to something haram or contain something haram. I've wondered this for a while, about Naruto, when they present themselves in god-like ways. Also involved are a lot of Buddhist ideologies, and sometimes unconsciously you might say the names of their goddesses or gods... I don't know the correct answer, but I think it's best to keep a certain limit. The intention is what really matters; if you're watching it because you find it interesting and fun, then you must regard it as it is; don't get too involved or obsessed with it. I like to watch stuff at night, after I'm done with everything. 
Again, I grew up in a Muslim country and that show was literally my childhood; everyone, from adults to kids, watched it. Even really good Muslims I know watch it. So it's best to say: don't get too involved in it. It's just a show made from someone's imagination, so treat it that way. And if you feel uncertain, talk to a close one. Hope this helped.
Python3 and permutations / combinations I am trying to solve one small conversion problem in Python 3. I have an input file with "blocks" of data and need to create an output file with all non-repeating combinations of it. My math knowledge is too limited for it; maybe when I explain it, someone can give me advice. The input file contains blocks separated by a line starting with a semicolon, followed by an ID-string, followed by pairs of values (each on a new line). The output file should contain lines in a fixed format, where each line always starts with the ID-string and then 2 pairs of coordinates. For each ID-string from the input, the output should contain all possible non-repeating combinations of the pairs from the input (corresponding to the ID block). Example: input file ; ID1 X1 Y1 X2 Y2 X3 Y3 ; ID2 X1 Y1 X2 Y2 output file ID1;X1;Y1;X2;Y2 ID1;X1;Y1;X3;Y3 ID1;X2;Y2;X1;Y1 ID1;X2;Y2;X3;Y3 ID1;X3;Y3;X1;Y1 ID1;X3;Y3;X2;Y2 ID2;X1;Y1;X2;Y2 ID2;X2;Y2;X1;Y1 The output is always fixed to a pattern: ID;pair1;pair2. All values are strings, not numbers, and can contain any character or number. I tried a couple of ways and here is my latest result, but it doesn't work as expected. 
from itertools import permutations, combinations def generate_combinations(input_file, output_file): with open(input_file, 'r') as infile, open(output_file, 'w') as outfile: lines = infile.readlines() id_value = None x_values = [] y_values = [] for line in lines: line = line.strip() if line == ';': if id_value is not None: # Generate combinations for the previous block combinations_list = [] for x_combination in combinations(x_values, len(y_values)): for y_combination in permutations(y_values): combination = [id_value] + [f"{x};{y}" for x, y in zip(x_combination, y_combination)] combinations_list.append(';'.join(combination) + '\n') # Write combinations to the output file outfile.writelines(combinations_list) # Reset values for the new block id_value = None x_values = [] y_values = [] else: if not r_value: id_value = line elif len(x_values) < len(y_values): x_values.append(line) else: y_values.append(line) # Generate combinations for the last block combinations_list = [] for x_combination in combinations(x_values, len(y_values)): for y_combination in permutations(y_values): combination = [id_value] + [f"{x};{y}" for x, y in zip(x_combination, y_combination)] combinations_list.append(';'.join(combination) + '\n') # Write combinations to the output file for the last block outfile.writelines(combinations_list)
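As posted, the attempt also raises a NameError (`r_value` is never defined), so it never reaches the combination logic. Under the block layout described above (an ID line, then alternating X and Y lines), the desired output is simply all ordered pairs of distinct coordinate pairs, which is `permutations(pairs, 2)`. A sketch of that idea (the function name and string-in/list-out interface are mine; the question reads from and writes to files instead):

```python
from itertools import permutations

def convert(text):
    """Parse ';'-separated blocks and emit ID;pair1;pair2 lines for all
    ordered, non-repeating pairs of coordinate pairs in each block."""
    out, block = [], []
    for raw in text.splitlines() + [";"]:   # sentinel flushes the last block
        line = raw.strip()
        if line == ";":
            if block:
                ident = block[0]
                # Values alternate X, Y after the ID line.
                pairs = list(zip(block[1::2], block[2::2]))
                for (x1, y1), (x2, y2) in permutations(pairs, 2):
                    out.append(f"{ident};{x1};{y1};{x2};{y2}")
            block = []
        elif line:
            block.append(line)
    return out
```

With the example input, this produces the eight lines shown in the question, starting with ID1;X1;Y1;X2;Y2.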
How to run IntelliJ IDEA as a dedicated system user? the problem I installed IntelliJ IDEA on my system (to /opt/jetbrains), and want it to be able to modify its files (do self-updating) without enabling any other process to modify them. the configuration I've come up with the idea that I will need a system user, to whom the /opt/jetbrains directory will belong, and I will run IDEA as that user: # Set up things sudo mkdir /opt/jetbrains sudo adduser --system --home /opt/jetbrains jetbrains sudo chown jetbrains /opt/jetbrains # Install IDEA wget https://download.jetbrains.com/idea/ideaIU-2017.1.2.tar.gz -O idea.tar.gz sudo -u jetbrains tar -xzf idea.tar.gz -C /opt/jetbrains/ rm idea.tar.gz # Configure idea.desktop # set the 'Exec' line, to run as user jetbrains what I tried I have read Run a shell script as another user that has no password. I tried the following, but I got an error, as well as a password prompt. sudo su -c "/opt/jetbrains/idea-ultimate/bin/idea.sh" -s /bin/sh jetbrains sudo -u jetbrains /opt/jetbrains/idea-ultimate/bin/idea.sh Both output: No protocol specified Start Failed: Failed to initialize graphics environment java.awt.AWTError: Can't connect to X11 window server using ':0' as the value of the DISPLAY variable. at sun.awt.X11GraphicsEnvironment.initDisplay(Native Method) at sun.awt.X11GraphicsEnvironment.access$200(X11GraphicsEnvironment.java:65) at sun.awt.X11GraphicsEnvironment$1.run(X11GraphicsEnvironment.java:115) at java.security.AccessController.doPrivileged(Native Method) at sun.awt.X11GraphicsEnvironment.<clinit>(X11GraphicsEnvironment.java:74) at ... I also tried gksu -w -u jetbrains gksu /opt/jetbrains/idea-ultimate/bin/idea.sh, but that prompted me for a password, the password of user jetbrains, who obviously has no password. Probably you want to install the IDE into the home of that new user, instead of /opt. However, running it with that user will give you problems when you try to modify projects in your home directory. 
It's late, but hopefully it helps someone. I'm doing this: xhost +SI:localuser:foxx1337 && sudo -u foxx1337 /opt/jetbrains/idea-ultimate/bin/idea.sh This is based on the wonderful reply sim gave here - https://unix.stackexchange.com/questions/108784/running-gui-application-as-another-non-root-user As an extra, do not forget to allow your user to sudo with no password prompt to the target user (otherwise the command can't be used in an X session).
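The passwordless-sudo prerequisite mentioned at the end can be granted with a sudoers drop-in. A sketch, assuming the install path from the question; `alice` is a placeholder for your login user, and the file should always be edited through `visudo`:

```
# /etc/sudoers.d/jetbrains  (create with: sudo visudo -f /etc/sudoers.d/jetbrains)
# Let alice run the IDEA launcher as the jetbrains user without a password.
alice ALL=(jetbrains) NOPASSWD: /opt/jetbrains/idea-ultimate/bin/idea.sh
```

Restricting the rule to the single launcher script keeps the grant narrow, rather than allowing arbitrary commands as the jetbrains user.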
Implementing Multiplayer OS with AppKit I'm trying to implement a multiplayer OS where for now, cursor1 is controlled by the user's mouse movements and cursor2 is simulated by simple brownian motion. Since I'm new to the AppKit and Quartz API, I'd appreciate some direction. Following is the code I've already written: import Quartz import AppKit import random import ctypes import time import objc def create_cursor_window(position): rect = AppKit.NSMakeRect(position[0], position[1], 24, 24) window = AppKit.NSWindow.alloc().initWithContentRect_styleMask_backing_defer_( rect, AppKit.NSBorderlessWindowMask, AppKit.NSBackingStoreBuffered, False) window.setOpaque_(False) window.setBackgroundColor_(AppKit.NSColor.clearColor()) window.setLevel_(AppKit.NSMainMenuWindowLevel + 1) window.setIgnoresMouseEvents_(True) return window def load_custom_cursor_image(image_path): return AppKit.NSImage.alloc().initWithContentsOfFile_(image_path) def main(): cursor1_image_path = "./assets/cursor_1.png" cursor2_image_path = "./assets/cursor_2.png" cursor1_image = load_custom_cursor_image(cursor1_image_path) cursor2_image = load_custom_cursor_image(cursor2_image_path) cursor1_position = (0, 0) cursor2_position = (0, 0) cursor1_window = create_cursor_window(cursor1_position) cursor2_window = create_cursor_window(cursor2_position) def update_cursor_position(event, cursor_position): cursor_position = (event.locationInWindow().x, event.locationInWindow().y) return cursor_position def update_cursor_position_brownian(cursor_position): dx = random.uniform(-5, 5) dy = random.uniform(-5, 5) cursor_position = (cursor_position[0] + dx, cursor_position[1] + dy) return cursor_position def draw_cursor(window, cursor_image, cursor_position): if cursor_image is not None: content_view = AppKit.NSView.alloc().initWithFrame_(window.frame()) window.setContentView_(content_view) def draw(_self, _cmd, _rect): cursor_image.drawAtPoint_fromRect_operation_fraction_( cursor_position, AppKit.NSZeroRect, 
AppKit.NSCompositeSourceOver, 1.0 ) view_draw = objc.selector(draw, signature=b"v@:@{NSRect=dd{NSPoint=dd}{NSSize=dd}}") objc.classAddMethod(AppKit.NSView, b"drawRect:", view_draw) content_view.setNeedsDisplay_(True) else: print("Error: Cursor image not loaded.") def update_virtual_screen(): AppKit.NSScreen.screens()[0].display() app = AppKit.NSApplication.sharedApplication() app.setActivationPolicy_(AppKit.NSApplicationActivationPolicyRegular) while True: event = app.nextEventMatchingMask_untilDate_inMode_dequeue_(AppKit.NSEventMaskAny, AppKit.NSDate.distantPast(), AppKit.NSDefaultRunLoopMode, True) if event: if event.type() == AppKit.NSEventTypeMouseMoved: cursor1_position = update_cursor_position(event, cursor1_position) cursor2_position = update_cursor_position_brownian(cursor2_position) draw_cursor(cursor1_window, cursor1_image, cursor1_position) draw_cursor(cursor2_window, cursor2_image, cursor2_position) # update_virtual_screen() time.sleep(0.01) if __name__ == '__main__': main() I'm getting the following error:
(venv) shawn@shawns-Mac-mini ai_os % python3 multiplayer.py
Traceback (most recent call last):
  File "/Users/shawn/Documents/Projects/ai_os/multiplayer.py", line 81, in <module>
    main()
  File "/Users/shawn/Documents/Projects/ai_os/multiplayer.py", line 74, in main
    draw_cursor(cursor1_window, cursor1_image, cursor1_position)
  File "/Users/shawn/Documents/Projects/ai_os/multiplayer.py", line 55, in draw_cursor
    objc.classAddMethod(AppKit.NSView, b"drawRect:", view_draw)
  File "/Users/shawn/Documents/Projects/ai_os/venv/lib/python3.11/site-packages/objc/_category.py", line 25, in classAddMethod
    return classAddMethods(cls, [sel])
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
objc.BadPrototypeError: Python signature doesn't match implied Objective-C signature for <function main.<locals>.draw_cursor.<locals>.draw at 0x108523380>
I don't know anything about Python but (why) are you trying to create a new NSView and replace drawRect: in a loop?
Determine if modal dialog is open for specific unit - Delphi I have an application in Delphi 7, which pops up modal dialogs for several conditions. I am trying to determine from another unit whether the dialog from a specific unit is open, and close it. So far, I've tried the following code: Wnd := GetLastActivePopup(Application.Handle); if (Wnd <> 0) and (Wnd <> Application.Handle) then PostMessage(Wnd, wm_close,0,0); But it closes all the opened dialogs. When I tried to target a specific form, such as: if (Wnd <> 0) and (Wnd <> FormTest.Handle) then it throws an Access Violation error. How can I determine whether the dialog from a specific unit has been popped up? There is a simple solution that may work. You could use: procedure TForm1.Button2Click(Sender: TObject); var h: hwnd; begin h := FindWindow(PChar('TForm1'), PChar('Form1')); if h <> 0 then PostMessage(h, WM_CLOSE, 0,0); end; It works well for identifying whether there is a handle to the TForm1 window. The obvious thing here is that FindWindow will search for windows across the entire OS. Now, if you want something faster, you could use @Remy's solution, which will only search the Application's forms. From MSDN: FindWindow function: Retrieves a handle to the top-level window whose class name and window name match the specified strings. This function does not search child windows. This function does not perform a case-sensitive search. To search for child windows, use the following function: FindWindowEx function: Retrieves a handle to a window whose class name and window name match the specified strings. The function searches child windows, beginning with the one following the specified child window. This function does not perform a case-sensitive search. These are links to both functions, respectively: FindWindow and FindWindowEx Oh my, this is surely not the solution to any problem! Then I think they should remove FindWindow from the WinAPI No, I misspoke. I meant this is not the solution to this problem. 
Even so, you'd normally use EnumWindows to get all top-level windows rather than FindWindow to get the first. In any case, a desktop-wide solution is surely not what was requested here. This is pointed out in my answer: "The obvious thing here is that FindWindow will search for windows across the entire OS." That's also why I suggested @Remy's answer. Sure. I never understand why people accept answers that don't address the question that they asked. I believe that in this situation, it's because with FindWindow the OP doesn't need to iterate over all of Screen.Forms, leading to less code. I believe right or wrong is a matter of perspective; if this solves the problem, why, I ask you, would it be wrong? I wouldn't bother if I were you. If the asker wants to accept this answer, then that's their choice. You did point out the flaw in the answer. Anyone reading carefully would realise the peril of running this code when you have multiple instances of the app running. No problem will happen; I think FindWindow is somehow optimized to search first in the same thread and same process. In all the tests that I did, the OS closed the right form. Obviously, this is a guess; I don't have any documentation to prove it. Try looping through the Screen.Forms list looking for the desired modal form, and if found then close it: var I: Integer; Frm: TForm; begin for I := 0 to Screen.FormCount-1 do begin Frm := Screen.Forms[I]; if fsModal in Frm.FormState then begin if Frm is TDesiredFormClass then // or: if Frm.ClassName = 'TDesiredFormClass' then // or: GetTypeData(PTypeInfo(Frm.ClassInfo))^.UnitName = 'DesiredUnitName' then // or: if (whatever other criteria you need) then begin Frm.Close; // sets ModalResult to mrCancel Break; end; end; end; end; if (Wnd <> 0) and (Wnd <> FormTest.Handle) then This leads to an access violation if FormTest is not a valid instance reference. Either: FormTest is nil, or FormTest is not nil, but refers to an object that has been destroyed. 
You can check the class name of the window with the GetClassName function
Order of generic parameters in TypeScript causes the parameter to be inferred as {} I'm trying to write typings for a compose function for the following scenario. interface ReducerBuilder<InS, OutS> { } interface State { hue2: number, hue3: string, hue: string; } declare function createBaseReducer <K>(initialState: K): ReducerBuilder<K, K> ; declare function createReducerTestI <K>(builder: ReducerBuilder<K, K>): ReducerBuilder<K, K>; declare function compose<TArg,TResult, TResult1>(f2: (arg: TResult1) => TResult, f1: (arg: TArg) => TResult1): (arg: TArg) => TResult; declare function composeRev<TResult, TArg, TResult1>(f1: (arg: TArg) => TResult1, f2: (arg: TResult1) => TResult): (arg: TArg) => TResult; const state : State = { hue2: 5, hue3: "aa", hue: "aa" }; const built = createReducerTestI(createBaseReducer(state)); const x0 = compose(createReducerTestI, createBaseReducer)(state); const x1 = compose(createReducerTestI, (arg: State) => createBaseReducer(arg))(state); const x2 = composeRev((arg: State) => createBaseReducer(arg), createReducerTestI)(state); The type of x0 is ReducerBuilder<{},{}>. I understand that the args cannot be inferred here because the function returned from compose has no information about the type of the arguments. The type of x1 is ReducerBuilder<{},{}>. I don't get why the type here is {}. I explicitly say TArg is of type State. I suspect that TypeScript is trying to infer TResult1 from left to right and it can't get it from the f2 argument. The type of x2 is ReducerBuilder<State,State>: success. All I did was just reverse the order of the parameters so TResult1 could be inferred from left to right. I don't really want to reverse the order of the arguments. Is there a better way to solve this? I want to say thank you for this question; I got caught up in the "what is going on here" and realized that I was having a lot of fun. I'll just list out what I have discovered so far. 
First off, you need to add something to ReducerBuilder<InS, OutS>; the type is being inferred to be {}, which is causing other issues, since TArg is also being inferred to be {}. As far as I can tell, the root of the problem is not the order of the parameters. It's a failure of inferred types. I'm not actually sure how it should infer them so I won't go into my speculation of what is going on, but enumerating through the various ways of calling is a little more enlightening: const x0 = compose(createReducerTestI, createBaseReducer)(state); const x1 = compose(createReducerTestI, (arg: State) => createBaseReducer(arg))(state); const x2 = compose((arg: ReducerBuilder<State, State>) => createReducerTestI(arg), createBaseReducer)(state); const x3 = compose((arg: ReducerBuilder<State, State>) => createReducerTestI(arg), (arg: State) => createBaseReducer(arg))(state); const x4 = composeRev(createBaseReducer, createReducerTestI)(state); const x5 = composeRev(createBaseReducer, (arg: ReducerBuilder<State, State>) => createReducerTestI(arg))(state); const x6 = composeRev((arg: State) => createBaseReducer(arg), createReducerTestI)(state); const x7 = composeRev((arg: State) => createBaseReducer(arg), (arg: ReducerBuilder<State, State>) => createReducerTestI(arg))(state); Only x3, x6, and x7 show the correct return type. x3 and x7 show the correct return type because the arguments don't need to be inferred, but I have no idea why x6 works at all. Based on x6 working I would assume that either x2, or everything except x0 and x4, would work. Instead, besides the ones that return correctly, what I see is everything returns TResult, except x4 actually returning ReducerBuilder<{}, {}>? This is odd behavior since it's just the order of the parameters that is causing it. The new TypeScript 2.8 gives you ReturnType<T>, which would save you from having to specify the type between the two functions. But I don't know if that would actually "fix" the inference problem.
If you just need a type definition, you can do it by just specifying the return types. Example 1: const composedA = compose<State, ReducerBuilder<State, State>, ReducerBuilder<State, State>>(createReducerTestI, createBaseReducer); const x0 = composedA(state); or, as you already pointed out with your built variable: Example 2: const composedB = (arg: State) => createReducerTestI(createBaseReducer(arg)); const x0 = composedB(state); But that's not exactly helpful in a description file. I feel like the way of composing the functions is doing what is in example 2. However, it would probably be more accurate to create overloaded versions of compose, so that you can control through the types what is and isn't possible to compose. In the following code I created one version of your compose, and another of a ridiculous one, to show how you would do this. It also means that you can reduce the types that you need in order to be explicit about the functions it's possible to compose. Function Overloading: interface State1 { hue2: number; hue3: string; hue: string; } interface State2 { hue4: number; hue3: string; hue: string; } interface ReducerBuilder<InS, OutS> { in: InS; out: OutS; } interface RediculousObject<InS, OutS> { a: InS; b: OutS; } type TA_1<T> = ReducerBuilder<T, T>; type CTA_1<T> = (a: T) => TA_1<T>; type CTB_1<T> = (a: TA_1<T>) => TA_1<T>; type TA_2<T, Y> = RediculousObject<T, Y>; type CTA_2<T, Y> = (a: T) => TA_2<T, Y>; type CTB_2<T, Y> = (b: Y) => CTA_2<T, Y>; type CTC_2<T, Y> = (a: T, b: Y) => TA_2<T, Y>; declare function _compose <T>(a: CTB_1<T>, b: CTA_1<T> | CTB_1<T>): CTA_1<T>; declare function _compose <T, Y>(a: CTB_2<T, Y>, b: CTA_2<T, Y> | CTB_2<T, Y>): CTC_2<T, Y>; const state1: State1 = { hue2: 5, hue3: "aa", hue: "aa" }; const state2: State2 = { hue4: 5, hue3: "aa", hue: "aa" }; declare function createBaseReducer <K>(initialState: K): ReducerBuilder<K, K>; declare function createReducerTestI <K>(builder: ReducerBuilder<K, K>): ReducerBuilder<K, K>; const
f0 = _compose<State1>(createReducerTestI, createBaseReducer)(state1); declare function _rediculous(a: State2): (b: State1) => RediculousObject<State1, State2>; const f1 = _compose<State1, State2>(_rediculous, _rediculous)(state1, state2); The main problem here involves an open issue in TypeScript where the compiler doesn't really know how to do generic type inference over higher-order functions where the functions are themselves generic. According to a comment by Anders Hejlsberg, This is a consequence of inference working left-to-right for contextually typed arguments. To solve it in the general case would require some form of unification, but that might in turn uncover other issues... which is in line with your discovery that the order of parameters matters. It looks like contextual inference of higher-order generic functions may be quite difficult to implement properly (according to comments on that GitHub issue and linked issues). In the absence of such an implementation, you probably need to pick a workaround. These include @Camron's suggestions to specify type parameters explicitly when calling compose(), or to wait for TypeScript 2.8 and use conditional types to reduce the number of type parameters you need. The simplest workaround is to leave the parameters in the "reverse" order. Hope that was at least of some use. Good luck! Edit: I tried using conditional types with something like ReturnType<> and ArgumentType<>, but it still doesn't work. The problem where the generic type parameters of the higher-order function become {} remains.
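To make the explicit-type-argument workaround concrete, here is a hedged, runnable sketch. The dummy implementations below stand in for the original declare statements (they are assumptions for illustration, not the asker's real reducers); with explicit type arguments nothing needs to be inferred, so the result type is ReducerBuilder<State, State> rather than ReducerBuilder<{}, {}>:

```typescript
interface ReducerBuilder<InS, OutS> {
  initial: InS;
  state: OutS;
}

// Dummy bodies standing in for the question's `declare function`s.
function createBaseReducer<K>(initialState: K): ReducerBuilder<K, K> {
  return { initial: initialState, state: initialState };
}

function createReducerTestI<K>(builder: ReducerBuilder<K, K>): ReducerBuilder<K, K> {
  return builder;
}

function compose<TArg, TResult, TResult1>(
  f2: (arg: TResult1) => TResult,
  f1: (arg: TArg) => TResult1
): (arg: TArg) => TResult {
  return (arg) => f2(f1(arg));
}

interface State {
  hue: string;
}

const state: State = { hue: "aa" };

// Explicit type arguments: no inference needed, so nothing collapses to {}.
const composed = compose<State, ReducerBuilder<State, State>, ReducerBuilder<State, State>>(
  createReducerTestI,
  createBaseReducer
);

// The annotation compiles, confirming the result type is what we wanted.
const x0: ReducerBuilder<State, State> = composed(state);
```

The runtime behavior is of course identical either way; only the statically inferred type changes.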
common-pile/stackexchange_filtered
How to understand a range in hexadecimal I have this range: U+F0000..U+FFFFD It's for UTF private-use characters. I understand F0000 to FFFFD means a range, but why is the U+ added at the beginning? What does it mean? It means it's a Unicode character. It's Unicode's way of indicating it's a codepoint. The "U+" means it's a Unicode codepoint, just like "0x" means what follows is a hexadecimal number. The "U+" implies hexadecimal, so what follows is in hexadecimal notation, but represents a codepoint in Unicode. In UTF-8, U+F0000 would be encoded as 0xF3 0xB0 0x80 0x80. U+FFFFD would be encoded as 0xF3 0xBF 0xBF 0xBD. This is called a Unicode code point and the U+ prefix is how you write it.
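The relationship between a U+ code point and its byte encoding can be checked directly, for example in Python, where chr()/ord() work on code points:

```python
# "U+" marks a hexadecimal Unicode code point; 0xF0000..0xFFFFD is the
# Plane 15 private-use range mentioned above.
lo, hi = 0xF0000, 0xFFFFD

# The UTF-8 encodings quoted in the answer:
assert chr(lo).encode("utf-8") == b"\xf3\xb0\x80\x80"
assert chr(hi).encode("utf-8") == b"\xf3\xbf\xbf\xbd"

# Checking whether some character falls inside the range is a plain
# integer comparison on its code point:
assert lo <= ord("\U000F1234") <= hi
```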
common-pile/stackexchange_filtered
Complex Contour Integration using Magnitude and triangle inequality I have some questions and I have been asked to use the magnitude and triangle inequality. Here is the first question: Let $C$ be the arc of the circle $|z| = 2$ from $z = 2i$ to $z = 2$. Show that $$\left|\int_C\frac{dz}{z^2-1}\right|\leq\frac{\pi}{3}$$ I'm given the magnitude inequality as the following theorem: If on a contour $C$, $|f(z)|\leq M$ and $L$ is the length of $C$, then $$\left|\int_Cf(z)\,dz\right|\leq\int_C|f(z)||dz|\leq ML$$ I would like to have a clear process for this question so I can attempt the few after it on my own. I just don't see what I should be looking for. What to do first? HINTS: $$\left|\int_C f(z)\,dz\right|\le \int_C |f(z)|\,|dz|$$ And $|z_1+z_2|\ge ||z_1|-|z_2||$. So would it be true to say: $$|z^2+(-1)|\geq ||z|^2 - |-1|| = |2^2 -1|$$ $$\Rightarrow |f(z)| = \left|\frac{1}{z^2 -1}\right|\leq\frac{1}{3}.\quad L = \pi d = \frac{4\pi}{4} = \pi.$$ $$\Rightarrow \left|\int_C\frac{dz}{z^2 -1}\right|\leq ML = \frac{\pi}{3}$$ Yes, well done! You have it now. Although, $L\ne \pi d$ here. Rather, it is $\frac14 \pi d=\frac14\times \pi\times 4=\pi$. Is that what your calculation was? Oh yes. I saw the circumference of a circle is $\pi d$ and the contour is a quarter of that so I'll be more accurate and say $L = \frac{1}{4}\pi d$ But thank you! I wasn't sure about the triangle inequality part but this has helped a lot! You're quite welcome. My pleasure and very pleased to see that you really understand it well. -Mark
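As a numerical sanity check (not a proof, and not part of the exercise), one can approximate the contour integral and confirm the ML bound derived above; the quarter circle is parametrized as $z = 2e^{i\theta}$ with $\theta$ running from $\pi/2$ down to $0$:

```python
import cmath
import math

# Midpoint-rule approximation of the integral of 1/(z^2 - 1) over the
# quarter circle |z| = 2 from z = 2i (theta = pi/2) to z = 2 (theta = 0).
N = 50_000
total = 0j
for k in range(N):
    t0 = (math.pi / 2) * (1 - k / N)
    t1 = (math.pi / 2) * (1 - (k + 1) / N)
    tm = (t0 + t1) / 2
    z = 2 * cmath.exp(1j * tm)
    dz = 2j * cmath.exp(1j * tm) * (t1 - t0)   # dz = 2i e^{i theta} d(theta)
    total += dz / (z * z - 1)

# The ML estimate: |f| <= 1/3 on the arc (equality only at z = 2) and L = pi.
assert abs(total) <= math.pi / 3
```

The computed magnitude comes out comfortably below $\pi/3 \approx 1.047$, as expected, since $|f(z)| = 1/3$ only at the endpoint $z = 2$ and the integrand also partially cancels.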
common-pile/stackexchange_filtered
Is it possible that two triangles satisfy these conditions? Are there two triangles with equal angles and a pair of equal sides which are not congruent? If yes, please give an example. You're looking at the ASA and AAS rules of the congruency test. http://www.onlinemathlearning.com/prove-triangles-congruent.html Do you mean that each of the three angles is the same on both triangles plus there is one side that is equal in both? Yes, but the equal angles may not belong to the side. So we cannot apply the rule for congruent triangles. For two triangles to be congruent, they must satisfy one of SSS, SAS, ASA, AAS, or HL. Therefore, on any two triangles with the same angles and a pair of equal sides, the "AAS" rule comes into play and says no, you cannot. Correct me if I'm wrong, but this is what I think. I think you are not right, because the angles that belong to the side which is equal in both may not be equal?! But you said "two triangles with equal angles", I'm not sure what you're trying to say here.. @chenh. I mean that if we have triangles ABC and MNP with angles <M=<A, <N=<B and <P=<C, the equal sides may be for example AB=NP. @chenh. Ok, but you assume all 3 angles to be equal. Then how do you say in your first comment that "the angles that belong... may not be equal". Are all 3 angles equal or not? I drew both triangles out, still don't see how it's possible? If you do figure it out please enlighten me! @chenh. Yes, all three angles are equal. Someone already answered. There are such triangles. Can you add a diagram to your answer to help us further understand? @chenh. I don't know how to do it. Just take a right triangle and draw the altitude. You have 3 pairs of such triangles. Still doesn't make sense, anyway, glad you found your answer.. @chenh. See this http://ceemrr.com/Geometry1/RightTriangleAltitudes/paste_image6.gif ahh I see, makes more sense now, thanks. @chenh. Take a right triangle, draw the altitude from the right angle.
You then have several pairs of examples. Oh, yes - three pairs!!!
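Besides the altitude construction, another family of examples comes from similar triangles whose sides form a geometric progression; the numbers below are chosen purely for illustration. A quick numeric check that the two triangles are similar (so all corresponding angles are equal), both valid, share two equal side lengths, yet are not congruent:

```python
# Two similar but non-congruent triangles sharing equal side lengths.
t1 = (8, 12, 18)
t2 = (12, 18, 27)   # each side is 1.5x the corresponding side of t1

# Corresponding sides are proportional, so by SSS similarity the two
# triangles have equal angles.
ratios = {b / a for a, b in zip(t1, t2)}
assert ratios == {1.5}

# Both satisfy the triangle inequality, so both triangles exist.
for a, b, c in (t1, t2):
    assert a + b > c and a + c > b and b + c > a

# They share a pair of equal sides (12 and 18) but are not congruent.
assert set(t1) & set(t2) == {12, 18}
assert sorted(t1) != sorted(t2)
```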
common-pile/stackexchange_filtered
Accordion background-image's size I've made a horizontal accordions with background-images, but the images are quite big compared to the width of the accordions. I want the accordions to downsize the background-images, right now I'm only seeing the center of each background images. Here's my code: (sample) .accordion { width: 100%; max-width: 1080px; height: 400px; overflow: hidden; margin: 50px auto; } .accordion ul { width: 100%; display: table; table-layout: fixed; margin: 0; padding: 0; } .accordion ul li { display: table-cell; vertical-align: bottom; position: relative; width: 16.666%; height: 400px; background-repeat: no-repeat; background-position: center center; transition: all 500ms ease; } .accordion ul li:nth-child(1) { background-image: url("https://images.unsplash.com/photo-1460500063983-994d4c27756c?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=27c2758e7f3aa5b8b3a4a1d1f1812310"); } .accordion ul li:nth-child(2) { background-image: url("https://images.unsplash.com/photo-1460378150801-e2c95cb65a50?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=1b5934b990c027763ff67c4115b6f32c"); } .accordion ul li:nth-child(3) { background-image: url("https://images.unsplash.com/photo-1458400411386-5ae465c4e57e?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=47756f965e991bf72aa756b410929b04"); } .accordion ul li:nth-child(4) { background-image: url("https://images.unsplash.com/photo-1452827073306-6e6e661baf57?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=c28fd5ea58ed2262a83557fea10a6e87"); } .accordion ul li:nth-child(5) { background-image: url("https://images.unsplash.com/photo-1452215199360-c16ba37005fe?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=408c70a6e88b50949c51e26424ff64f3"); } .accordion ul li:nth-child(6) { background-image: url("https://images.unsplash.com/photo-1442551382982-e59a4efb3c86?format=auto&auto=compress&dpr=1&crop=entropy&fit=crop&w=1920&h=1280&q=80"); } .accordion ul:hover li { width: 8%; } .accordion ul:hover li:hover { width: 100%; } .accordion ul:hover li:hover a { 
background: rgba(0, 0, 0, 0.4); } .accordion ul:hover li:hover a * { opacity: 1; -webkit-transform: translateX(0); transform: translateX(0); } @media screen and (max-width: 200px) { body { margin: 0; } .accordion { height: auto; } .accordion ul li, .accordion ul li:hover, .accordion ul:hover li, .accordion ul:hover li:hover { position: relative; display: table; table-layout: fixed; width: 100%; -webkit-transition: none; transition: none; } } .about { text-align: center; font-family: 'Open Sans', sans-serif; font-size: 12px; color: #666; } .about a { color: blue; text-decoration: none; } .about a:hover { text-decoration: underline; } <body> <div class="accordion"> <ul> <li> <div> <a href="#"> </a> </div> </li> <li> <div> <a href="#"> </a> </div> </li> <li> <div> <a href="#"> </a> </div> </li> <li> <div> <a href="#"> </a> </div> </li> <li> <div> <a href="#"> </a> </div> </li> <li> <div> <a href="#"> </a> </div> </li> </ul> </div> </body> Try setting a fixed width for your images to see the full image instead of just the center of it. See fiddle. If you will be using huge images, it would be better to use background-size:cover instead. The "100% 100%" did the job. It creates a new problem though. I don't know if you can see this page (it's not mine, but it's where I got the idea from). http://www.t-randeris.dk/cases.aspx When holding the mouse over the image the size is perfect, but when the mouse is moved away, it "squeezes" the image together instead of just "hiding" the rest of the picture. Yes, it squeezes, since its width is based on the width of the accordions and since it's a background image, unlike the accordion from the link you gave. You can try setting a fixed width for your image. I made a fiddle, please check if this is right. But yours still doesn't show the entire picture (or does it)? I tried using "background-size: cover" and this seems to be the best option so far. It zooms in a tiny bit, but it's much better than before.
It does show the entire picture, I guess. You said you will be using huge images, so I think it would be better to use background-size:cover. Hope I helped. Should I update my answer so you can accept it? Yes please, then I'll accept it. Thank you for your help! As you're currently using the images as background, you have a handy CSS property available to you right now. MDN - Background Size: Contain: This keyword specifies that the background image should be scaled to be as large as possible while ensuring both its dimensions are less than or equal to the corresponding dimensions of the background positioning area. I don't want it to be as large as possible though? I want it to fit. I tried using "background-size: cover" and this seems to work okay (it zooms in a tiny bit, but it's much better than before)
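Summing up the accepted fix in one place, a minimal sketch (only the relevant rule from the accordion stylesheet, with the alternatives discussed above noted in comments):

```css
.accordion ul li {
  background-repeat: no-repeat;
  background-position: center center;
  /* Scale each background image down to fill its cell instead of
     showing only the center of the full-size image. */
  background-size: cover;
  /* background-size: 100% 100% also fits the image, but distorts
     ("squeezes") it while the cell width animates on hover.
     background-size: contain letterboxes instead of filling. */
}
```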
common-pile/stackexchange_filtered
Get current timestamp from specific date using UTC/GMT+0 as epoch ignoring local time zone epoch I've tried a lot from this community but it seems like there's no hyper-specific scenario for my case. So basically I have a string in the format of yyyy-mm-dd. I use date methods to adjust it and add time on the date to make it more specific. I want to convert it to a timestamp while ignoring the client computer's current timezone (or using the UTC timezone). I have this code: function getTimestampRange(sparams, eparams){ sparams = "2018-11-12", eparams = "2018-11-13"; //sample param values const start = sparams.split("-"); const end = eparams.split("-"); const startDate = new Date(start[0], start[1] - 1, start[2]); const endDate = new Date(end[0], end[1] - 1, end[2]); endDate.setHours(23); endDate.setMinutes(59); endDate.setSeconds(59); //startDate is 2018-11-12 00:00:00 and endDate is 2018-11-13 23:59:59 const startTS = startDate.getTime() / 1000; const endTS = endDate.getTime() / 1000; return [startTS, endTS] } This is all fine and dandy but the problem is, I'm getting the timestamp relative to my computer's timezone (GMT+9). So my epoch is the 9th hour of 1970-01-01, which is not what I need. I need the GMT+0 UTC timestamp. In this scenario, I'd get<PHONE_NUMBER> and<PHONE_NUMBER>, start and end respectively; where I should be getting<PHONE_NUMBER> and<PHONE_NUMBER>. Please help! " I want to convert it to a timestamp while ignoring the client computer's current timezone" timestamps are timezone agnostic. They reflect the number of seconds (or milliseconds) since 1970-01-01T00:00:00Z @Phil I don't know how to properly phrase it but I get a different timestamp on this method, which is a very simple algorithm. When I convert the timestamp that I get in this method, it's always 9 hours ahead of UTC You have two options here...
Use Date.UTC to construct timestamps in UTC const startDate = Date.UTC(start[0], start[1] - 1, start[2]) // a timestamp const endDate = Date.UTC(end[0], end[1] - 1, end[2], 23, 59, 59) // a timestamp Note: Date.UTC() produces a timestamp in milliseconds, not a Date instance. Since you're able to set the hours, minutes and seconds as above, you no longer need to manipulate those. Use your existing date strings, which adhere to the ISO 8601 standard, as the sole argument to the Date constructor. This benefits from this particular nuance... Support for ISO 8601 formats differs in that date-only strings (e.g. "1970-01-01") are treated as UTC, not local. const startDate = new Date(sparams) const endDate = new Date(eparams) Parsing ISO 8601 is supposedly supported in all decent browsers and IE from v9. Since this relies on a particular "feature" that may or may not be implemented in a client, there is an element of risk to this method. For your end date, if parsing, you can easily append a time and zone portion to the date string rather than manipulate the date object with hour, minute and second values. For example const endDate = new Date(`${eparams}T23:59:59Z`) Alternatively, use Date.prototype.setUTCHours()... const endDate = new Date(eparams) endDate.setUTCHours(23, 59, 59) Are both of these working in IE11? I'd get random problems in IE11 whenever I pass strings in the Date object. @AbanaClara from memory, strings in ISO 8601 format work fine in IE Okay this seems to be working for my start date object. But for my end date object, I get the same timestamp adjusted for my local time zone. Thanks for the help @Phil. But I think new Date(Date.UTC) is what works. I get an error on .getTime() without that @AbanaClara Date.UTC() produces a timestamp, not a Date instance so you don't need to use getTime() Oh sh-- really? Thanks for the answer! Amazing I would be very hesitant to ever recommend using the built-in parser.
Manually parsing ISO dates is 2 lines of code (or 1 if you want to get funky) and for that you get certainty vs. maybe… maybe not… @RobG you mean you don't trust new Date(someISO8601DateString)? I've certainly been burnt by IE and non-ISO strings but never encountered a problem when the strings are formatted correctly @Phil—not in a general web application where the device may be anything. I just think the risk mitigation is trivial. Also, parsing YYYY-MM-DD as UTC often confuses whereas parseAsLocal() and parseAsUTC make things clearer. ;-) @RobG that's a very fair call. Glad I put both examples in then @RobG I'm keen to see that one-liner dealing with the zero-based month ;-) @Phil— new Date(Date.UTC(...('2018-01-01'.split('-').map((x,i)=>x-(i%2))))), but any browser that supports rest parameters that will also parse that date as UTC anyway. ;-) @RobG ooh, that's better than what I was trying with a regex, but then I was also trying to include the time portion. Very cool trick with the modulus
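Putting Phil's first option together with the original function, a sketch of getTimestampRange rewritten around Date.UTC (names follow the question):

```javascript
function getTimestampRange(sparams, eparams) {
  const start = sparams.split("-");
  const end = eparams.split("-");
  // Date.UTC returns a millisecond timestamp directly: no Date instance,
  // and no dependence on the client machine's time zone.
  const startTS = Date.UTC(start[0], start[1] - 1, start[2]) / 1000;
  const endTS = Date.UTC(end[0], end[1] - 1, end[2], 23, 59, 59) / 1000;
  return [startTS, endTS];
}
```

Calling getTimestampRange("2018-11-12", "2018-11-13") then yields the pair for 2018-11-12T00:00:00Z and 2018-11-13T23:59:59Z regardless of the local zone, because Date.UTC interprets its arguments as UTC by definition.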
common-pile/stackexchange_filtered
How do I mimic a SQL Outer Apply in Linq using Entity Framework? I would like to mimic a SQL OUTER APPLY using LINQ. I have 2 tables: Main and Sub. The SQL looks something like this: select M.Id, M.Number, M.Stuff, SD.SubName from Main as M outer apply ( select top 1 SubName from Sub S where M.Id = S.Id and M.Number = S.Number ) as SD Based on answers here and elsewhere, like this one, I've tried too many iterations of LINQ to put in here, but here's one: var query1 = from m in dbContext.Main join s in dbContext.Sub on new {m.Id, m.Number} equals new {s.Id, s.Number} into subs select new { m, SubName = subs.FirstOrDefault().SubName } This compiles fine, but when I run it I get this exception: Processing of the LINQ expression 'DbSet<Main> // EF's attempt to translate my query 'NavigationExpandingExpressionVisitor' failed. This may indicate either a bug or a limitation in EF Core. See https://go.microsoft.com/fwlink/?linkid=2101433 for more detailed information. and a stack trace. Does anyone have any suggestions on how to go about coding this the correct way? I'm running .NET Core 3.1 against SQL Server 2017. Try the following queries. EF Core 3.1 should translate this to OUTER APPLY, but higher versions may use JOIN and ROW_NUMBER: var query1 = from m in dbContext.Main from s in dbContext.Sub .Where(s => m.Id == s.Id && m.Number == s.Number) .Take(1) .DefaultIfEmpty() select new { m, SubName = s.SubName } Or this variant: var query1 = from m in dbContext.Main select new { m, SubName = dbContext.Sub .Where(s => m.Id == s.Id && m.Number == s.Number) .Select(s => s.SubName) .FirstOrDefault() }
common-pile/stackexchange_filtered
IdentityServer4 Authorize always gets "The signature key was not found" on Azure AppService I have an IdentityServer4 app based on the IS4 Identity sample, and an API using bearer tokens for its authorization via IS4.AccessTokenValidation. This works fine on localhost via Visual Studio, and when I deploy to a Windows 2012 VM hosted via IIS. When I deploy the identity server to Azure as an App Service website, all is fine too. However, when the API is deployed as an App Service using the same domain and certificate as the VM, any method with an Authorize attribute (with a policy or none, it doesn't matter) always returns a 401 with the header message: Www-Authenticate: Bearer error="invalid_token", error_description="The signature key was not found" We're using .NET 4.5.2, with the latest releases of the IdentityServer4 and IdentityServer4.AccessTokenValidation packages. I've also pulled the latest of these packages from GitHub from 30/08/16 with no change. I don't think it's a bug in the IS4 validator anyway, but I don't know what might cause this. Any suggestions? Is it an Azure host bug? I'd love to be able to debug this, but I can't get Remote Debug working to this app even when I rebuilt from scratch, and app logs tell me nothing. I've had a rummage in the ASP.NET Security repo, but without more logging or debug access, I'm pretty clueless how to fix this problem. API Configure is very basic: var jwtBearerOptions = new JwtBearerOptions() { Authority = Configuration["Authentication:IdentityServer:Server"], Audience = Configuration["Authentication:IdentityServer:Server"]+"/resources", RequireHttpsMetadata = false, AutomaticAuthenticate = true, AutomaticChallenge = true, }; app.UseJwtBearerAuthentication(jwtBearerOptions); and the identity server is straight out of the samples, using a purchased certificate for signing. Has anyone else got this configuration fully working as 2 Azure App Services?
Or what might possibly cause this error given the same bearer token sent to the VM-hosted API is acceptable. A similar case for idsrv3 was discussed at https://gitter.im/IdentityServer/IdentityServer3/archives/2015/04/13 . What I understand is the problem occurs from not getting the identity server metadata (.well-known/openid-configuration). Also, for more logging, configure ASP.NET Core logging in your API (you can enable it to write logs to a text file with Serilog) and inspect the error logs. Interesting possibility in that thread about size of token. Mine is fairly large, but I'd expect that to be a problem on all hosts. Will have a play with it. You're right about logging, I'll add Serilog. See what comes out. Getting more logging was what I really needed, along with looking through the Security repo code. It would seem there must be a difference in the SigningKey validation options on App Service for some reason. It turned out you need to explicitly set the IssuerSigningKey in TokenValidationParameters. So I get the certificate from the App Service store, and add it via JwtBearerOptions.TokenValidationParameters. So Startup config looks like this: public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory) { ... JwtSecurityTokenHandler.DefaultInboundClaimTypeMap = new Dictionary<string, string>(); var tokenValidationParameters = new TokenValidationParameters { // The signing key must match!
ValidateIssuerSigningKey = true, IssuerSigningKey = new X509SecurityKey(GetSigningCertificate()), // Validate the JWT Issuer (iss) claim ValidateIssuer = false, //ValidIssuer = "local", // Validate the JWT Audience (aud) claim ValidateAudience = false, //ValidAudience = "ExampleAudience", // Validate the token expiry ValidateLifetime = true, // If you want to allow a certain amount of clock drift, set that here: ClockSkew = TimeSpan.Zero }; var jwtBearerOptions = new JwtBearerOptions() { Authority = Configuration["Authentication:IdentityServer:Server"], Audience = Configuration["Authentication:IdentityServer:Server"]+"/resources", RequireHttpsMetadata = false, AutomaticAuthenticate = true, AutomaticChallenge = true, TokenValidationParameters = tokenValidationParameters }; app.UseJwtBearerAuthentication(jwtBearerOptions); app.UseMvc(); ... } No idea why this is only needed on the Azure App Service and not on a server or development machine. Can anyone else explain it? It would suggest ValidateIssuerSigningKey defaults to true for App Service and false anywhere else.
common-pile/stackexchange_filtered
How to maintain the Browser state in windows phone 7? How do I maintain the browser state (zoom level, currently displayed content, etc.) in Windows Phone 7? I want to display the same content and zoom level when the app returns from tombstoning. Thanks, Balaram. There is no zoom level property on the WebBrowser control for you to be able to persist and then restore. You can store the URL that the user has navigated to and restore that, though. Jeff Prosise's Real-World Tombstoning in Silverlight for Windows Phone should tell you pretty much everything you need to know about tombstoning.
common-pile/stackexchange_filtered
How to return a value in a function when Option<T> is Some(), or else exit with None when it is None? I'm trying to implement an iterator for a struct: struct A { x: Option<String>, y: Option<String>, z: String, index: usize, } impl Iterator for A { type Item = String; fn next(&mut self) -> Option<Self::Item> { let result = match self.index { 0 => /* **magic deciding thing for self.x** */, 1 => /* **magic deciding thing for self.y** */, 2 => self.z.clone(), _ => return None, }; self.index += 1; Some(result) } } Is there any way to always increment index, and then exit the function if the value in x or y is None, otherwise unwrap Some() and carry on as usual? There are no default values I can return. The duplicates applied to your case Do you mean this or this? You might also want .take() instead of .clone() for x and y, although I'm not sure what you're going to do with z. @trentcl in that case, you can use mem::take :-) I think this is exactly what I needed! Thank you so much! Or use combinators.
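Putting the comments together, a sketch of one way to fill in the "magic deciding things" with Option::take, mem::take, and the `?` operator. The index is bumped first, so a None in x or y ends that call but later arms still run on subsequent calls; whether you instead want the iterator to stay exhausted is a design choice left open by the question:

```rust
struct A {
    x: Option<String>,
    y: Option<String>,
    z: String,
    index: usize,
}

impl Iterator for A {
    type Item = String;

    fn next(&mut self) -> Option<Self::Item> {
        self.index += 1; // always increment, even if we return None below
        let result = match self.index - 1 {
            0 => self.x.take()?, // `?` exits next() with None when x is None
            1 => self.y.take()?,
            2 => std::mem::take(&mut self.z), // leaves an empty String behind
            _ => return None,
        };
        Some(result)
    }
}
```

With x = None, the first call yields None but the index still advances, so the second call goes on to y, which matches the "always increment, then exit on None" behavior asked for.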
common-pile/stackexchange_filtered
Bootstrap 5.0.2 breaks when I try to compile using Dart Sass I'm trying to change the primary color in bootstrap with SCSS. I followed all the steps installing Dart Sass and running it on a custom.scss, but when I try to do @import "bootstrap" //with absolute path to bootstrap ofc in custom.scss, it gives me the following error: 150 │ @return mix(rgba($foreground, 1), $background, opacity($foreground) * 100); │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ╵ path\to\webapp\webroot\bootstrap-5.0.2\scss\_functions.scss 150:11 opaque() path\to\webapp\webroot\bootstrap-5.0.2\scss\mixins\_table-variants.scss 4:28 table-variant() path\to\webapp\webroot\bootstrap-5.0.2\scss\_tables.scss 134:3 @import path\to\webapp\webroot\bootstrap-5.0.2\scss\bootstrap.scss 22:9 @import path\to\webapp\webroot\scss\custom.scss 11:9 root stylesheet I did not change anything to the uncompiled bootstrap scss. It seems to randomly throw errors because if I remove the line that is throwing the error, the error gets thrown on the command above it (on line 144). I don't know why it is breaking on a seemingly random command, and the error report doesn't give any feedback either. Downgrade to a previous version of dart-sass that is compatible with Bootstrap 5.0.2 npm uninstall -g sass npm install -g<EMAIL_ADDRESS>
common-pile/stackexchange_filtered
Lightning connect - Salesforce adapter I want to try Lightning Connect - Salesforce Adapter. But when I try to configure it, I don't see the option "Lightning Connect: Salesforce" for "Type" when creating a new External Data Source. The only option I see is "Simple URL". Do I have to call Salesforce to enable this feature, or is this a paid feature? Lightning Connect (unlike Salesforce-to-Salesforce) is a paid feature. You can reach out to your AE about enabling a demo, or try it out in a Dev Org. Furthermore, it is estimated to be around USD 4,000 per month per data source.
common-pile/stackexchange_filtered
Which is the best fingerprint-reader program available? Being new to Ubuntu and curious, I installed Fingerprint GUI from the web some days back, but it turned out to be a mess with too many password puzzles. E.g., when I switch on my laptop, the default login screen would ask for a password, but at the same time the fingerprint-reading window would also pop up from nowhere, making it difficult to enter the login password; meanwhile, because no finger was scanned, the fingerprint-detecting program would declare 'AUTHENTICATION FAILED'. Later I had to re-install Ubuntu itself to start things anew. I found Fingerprint GUI too shabby to be reinstalled. So is there any other fingerprint-scanning program for Ubuntu 12.10? @Vitor Can you help here please?
common-pile/stackexchange_filtered
SQLite Int to Hex and Hex to Int in query I have some numeric data that is given to me in INTEGER form. I insert it along with other data into SQLite, but when I write it out, the INTEGER numbers need to be 8-digit hex numbers with leading zeros. Ex. Input 400 800 25 76 Output 00000190 00000320 00000019 0000004C Originally I was converting them as I read them in and storing them as TEXT like this: stringstream temp; temp << right << setw(8) << setfill('0') << hex << uppercase << VALUE; But life is never easy, and now I have to create a second output in INTEGER form, not HEX. Is there a way to convert INTEGER numbers to HEX or HEX numbers to INTEGER in SQLite? I'd like to avoid using C++ to change data after it is in SQLite, because I've written a few convenient export functions that take a query's result and print it to a file. If I needed to touch the data as the query returns, I couldn't use them. I've looked at the HEX() function in SQLite but that didn't have the desired results. Could I make a function, or would that be very inefficient? I'm doing this over a really big data set, so anything expensive should be avoided. Note: I'm using SQLite's C/C++ interface with Visual Studio 2010. Well, not my favorite answer, but I've decided to add a column in the table that holds the INTEGER value. If someone finds a better way to do this, I'm all ears. EDIT: After implementing this answer and looking at its effect on the program, this appears to be a really good way to get the data without adding much to the run-time load. This answer does add a little to the size of the database, but it doesn't require any extra processing to get the values from SQLite, because it just grabs a different column in the query. Also, because I had these values to start with, this answer had a HUGE cost savings by adding them to the table versus processing later to recover values I threw away earlier in the program.
You can use the SQLite printf() function: CREATE TABLE T (V integer); insert into T(V) values(400), (800), (25), (76); select printf('%08X', V) from T; You can use sqlite3_create_function. See an example
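For anyone wanting to verify this from application code, here is a quick sketch using Python's built-in sqlite3 module with the sample values above; it also covers the hex-to-int direction via a user-defined SQL function, which is the sqlite3_create_function approach mentioned in the answer (the function name from_hex is just an illustrative choice):

```python
# Sketch: INTEGER -> hex with SQLite's printf(), and hex -> INTEGER
# via a user-defined function (Python's create_function wraps
# sqlite3_create_function under the hood).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (V INTEGER)")
conn.executemany("INSERT INTO T(V) VALUES (?)", [(400,), (800,), (25,), (76,)])

# INTEGER -> 8-digit uppercase hex with leading zeros
rows = [r[0] for r in conn.execute("SELECT printf('%08X', V) FROM T ORDER BY V")]
print(rows)  # ['00000019', '0000004C', '00000190', '00000320']

# HEX -> INTEGER: SQLite has no built-in for this, but a tiny UDF works
conn.create_function("from_hex", 1, lambda h: int(h, 16))
print(conn.execute("SELECT from_hex('00000190')").fetchone()[0])  # 400
```

Because the conversion happens inside the query, the existing export functions that just print a query's result can stay untouched.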
difflib cannot correctly find the opcodes I ran into a very odd issue with the difflib library in Python. I have two strings as follows and I run get_opcodes on them like this: import difflib str1 = "MatrixElement(MatrixSymbol('Btd', Integer(11), Integer(11)), Integer(0), Integer(9))), Mul(Float('1.0', precision=24), MatrixElement(MatrixSymbol('Btd', Integer(11), Integer(11)), Integer(0), Integer(10))))" str2 = "MatrixElement(MatrixSymbol('Btd', Integer(11), Integer(11)), Integer(1), Integer(9))), Mul(Float('1.0', precision=24), MatrixElement(MatrixSymbol('Btd', Integer(11), Integer(11)), Integer(1), Integer(10))))" difflib.SequenceMatcher(None, str1,str2).get_opcodes() Only in this specific example, the output of the diff is like the following, which is obviously wrong. [('equal', 0, 69, 0, 69), ('replace', 69, 70, 69, 70), ('equal', 70, 188, 70, 188), ('insert', 188, 188, 188, 201), ('equal', 188, 190, 201, 203), ('replace', 190, 206, 203, 206)] The correct output should not contain an insert opcode, as nothing new is added. Is this potentially a bug in difflib? This is not a bug. There are multiple ways to transform one sequence into another, and the one difflib outputs here is correct. Although, you are right to wonder why difflib chose that odd transformation instead of this one: [('equal', 0, 69, 0, 69), ('replace', 69, 70, 69, 70), ('equal', 70, 188, 70, 188), ('replace', 188, 189, 188, 189), ('equal', 189, 206, 189, 206)] It comes down to one thing: autojunk=True Prepare to learn about junk! The main algorithm behind generating the opcodes comes from SequenceMatcher.get_matching_blocks; this method breaks down the provided sequences into matching subsequences. To do so efficiently, it first parses str2 and builds a dict where keys are characters of the sequence and values are lists of indices of the corresponding character.
However, this can be very memory-consuming, and thus, by default, difflib.SequenceMatcher will consider some recurring characters as junk and not store their indices. From the difflib doc: Automatic junk heuristic: [...] If an item’s duplicates (after the first one) account for more than 1% of the sequence and the sequence is at least 200 items long, this item is marked as “popular” and is treated as junk for the purpose of sequence matching. [...] In your specific case, the culprit is the character ( which is treated as junk. The SequenceMatcher object is unable to see a matching sequence starting at index 189 because it is a (. Handling junk The simplest way to get the output you expected is to set autojunk=False. difflib.SequenceMatcher(None, str1, str2, autojunk=False).get_opcodes() This outputs what you expected: [('equal', 0, 69, 0, 69), ('replace', 69, 70, 69, 70), ('equal', 70, 188, 70, 188), ('replace', 188, 189, 188, 189), ('equal', 189, 206, 189, 206)] Note, though, that turning autojunk off completely might not always be the best option, since it will likely consume more memory and time. A better approach would be to specify what is considered junk. [...] these “junk” elements are ones that are uninteresting in some sense, such as blank lines or whitespace [...] This is especially true when you are using difflib.ratio to get the measure of similarity between sequences. In that case you might want to ignore whitespace, as it is generally uninteresting in terms of text comparison. Thus, if you turn off autojunk, you can still provide an isjunk function that indicates what to ignore, say, whitespace. This argument is the one you set as None in your example. import difflib from string import whitespace ... difflib.SequenceMatcher(lambda x: x in whitespace, str1, str2, autojunk=False) Nice explanation! Thanks a lot. Python's difflib does not aim to find the minimal edit distance.
You might even get different results depending on whether you compare Sequence1 to Sequence2 or the other way round. For example: S1 = 'ju1234567' S2 = 'a2bc5d6j7' print(SequenceMatcher(None, S1, S2, autojunk=False).get_opcodes()) # only 2 equal matches # [('insert', 0, 0, 0, 7), ('equal', 0, 1, 7, 8), # ('delete', 1, 8, 8, 8), ('equal', 8, 9, 8, 9)] print(SequenceMatcher(None, S2, S1, autojunk=False).get_opcodes()) # much better alignment with 4 equal matches # [('replace', 0, 1, 0, 3), ('equal', 1, 2, 3, 4), # ('replace', 2, 4, 4, 6), ('equal', 4, 5, 6, 7), # ('delete', 5, 6, 7, 7), ('equal', 6, 7, 7, 8), # ('delete', 7, 8, 8, 8), ('equal', 8, 9, 8, 9)]
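The "popular element" heuristic quoted earlier can also be checked directly. The sketch below approximates what SequenceMatcher does internally when autojunk=True; the cutoff used here (an element is popular when it occurs more than n // 100 + 1 times in a sequence of at least 200 items) follows the documented 1% rule, so treat it as an approximation of the internals rather than a guaranteed reimplementation:

```python
# Rough sketch of difflib's autojunk "popular element" heuristic.
from collections import Counter

def popular_elements(seq):
    n = len(seq)
    if n < 200:          # the heuristic only applies to sequences >= 200 items
        return set()
    ntest = n // 100 + 1
    return {ch for ch, cnt in Counter(seq).items() if cnt > ntest}

str2 = ("MatrixElement(MatrixSymbol('Btd', Integer(11), Integer(11)), "
        "Integer(1), Integer(9))), Mul(Float('1.0', precision=24), "
        "MatrixElement(MatrixSymbol('Btd', Integer(11), Integer(11)), "
        "Integer(1), Integer(10))))")
print("(" in popular_elements(str2))  # True: '(' is junk for this input
```

Running this confirms that '(' (along with several other frequent characters in the string) is ignored for matching purposes, which is why the match starting at index 189 is invisible to the default SequenceMatcher.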
.NET How to change Folder/File permission or Access rights? I want to prevent the user from moving/renaming/deleting folders while still allowing navigation inside the folder to see its subfolders and files; also, the code should be able to create subfolders/files inside the folder after setting these access rights. For files, I also want to prevent the user from updating/moving/renaming/deleting, but the app can do this. I played with DirectorySecurity but I can't produce the desired results. var directoryInfo = new DirectoryInfo(path); var directorySecurity = directoryInfo.GetAccessControl(); var windowsIdentity = System.Security.Principal.WindowsIdentity.GetCurrent(); if (windowsIdentity != null) { var userName = windowsIdentity.Name; directorySecurity.AddAccessRule(new FileSystemAccessRule(userName, FileSystemRights.FullControl, AccessControlType.Deny)); directorySecurity.AddAccessRule(new FileSystemAccessRule(userName, FileSystemRights.Read | FileSystemRights.ListDirectory | FileSystemRights.CreateDirectories | FileSystemRights.CreateFiles , AccessControlType.Allow)); directoryInfo.SetAccessControl(directorySecurity); } have a look at this SO answer http://stackoverflow.com/a/5398398/1298308 and use SecurityIdentifier to set the correct access level for your directories Thanks for your help, but it doesn't work: I want to prevent any user from deleting/renaming/moving the folder, while he or she can still navigate it and create subfolders and files. Hi, any help or comment
Probability of Event So I am working on a probability question and I think I am overthinking it. I did my coding in R, btw. Question: The annual rainfall in Seattle is normally distributed with a mean of $30$ inches and a standard deviation of $4$. What is the probability that it takes more than $6$ years before having a rainfall over $55$ inches? My attempt: year1 <- pnorm(55, mean = 30, sd=4, lower.tail = FALSE) $(1-year1)^{6}$ but my TA said my answer didn't look right. Any assistance would be helpful. Is the mean 30 or 40? Welcome to MSE. For some basic information about writing mathematics at this site see, e.g., basic help on mathjax notation, mathjax tutorial and quick reference, main meta site math tutorial and equation editing how-to. For a mean of 40 I got approximately $99.22\%$. Please give a reply if you want further help. damn! I copied code from a different question! The mean is 30. Is my approach correct? I take the probability of the event (leading to year1), then subtract that from 1 and take it to the 6th power (in this case because we want to see whether it takes more than 6 years). In general yes. But what do you have for $P(X\leq 55)$ if $X\sim \mathcal N(30,4^2)$? We need more information about what you have done. When I use the R code stated above, I get 2.052263e-10. @GavinRienne Yes, I got $2.052263\cdot 10^{-10}$ as well for $P(X\geq 55)$. That means we have to calculate $(1-2.052263\cdot 10^{-10})^6$, which is effectively $1$. I didn't finish this response. I had it btw! So I wanted to say thanks for the help!
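For cross-checking outside of R, the same computation can be sketched in Python with only the standard library; math.erfc gives the normal upper-tail probability that pnorm(..., lower.tail = FALSE) computes:

```python
# P(X > 55) for X ~ N(30, 4^2), then the probability that the first
# six years all stay below 55 inches: (1 - p)^6.
from math import erfc, sqrt

def norm_sf(x, mean, sd):
    # survival function, same as R's pnorm(x, mean, sd, lower.tail = FALSE)
    return 0.5 * erfc((x - mean) / (sd * sqrt(2)))

p = norm_sf(55, mean=30, sd=4)
print(p)              # about 2.05e-10, matching the R value above
print((1 - p) ** 6)   # effectively 1, as concluded in the thread
```

Since the yearly rainfalls are independent, "more than 6 years before a rainfall over 55 inches" means the first 6 years all stay below 55, which is exactly the $(1-p)^6$ the OP computed.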
How to log the JSON response body in Playwright after a button click? (Error Network.getResponseBody: No resource with given identifier found) I am trying to log the response body from an API call that is initiated by a button click in Playwright. I have been at this for a while, with no results. I can log the response status, request headers, and response headers, but not the response body. However, running a headed browser, I can see the JSON response in the Network tab of the inspect window. await Promise.all([ page.waitForResponse(resp => resp.url().includes('https://example.com/user-check') && resp.status() === 200 && resp.request().method() === 'POST') .then(async resp => { console.log("Response URL:", resp.url()); // this works and logs the URL of the response console.log("Response status:", resp.status()); // this works and logs 200 console.log("Response body:", resp.body()); // this fails }), page.click('#clickButton'), ]); I tried resp.body(), resp.json(), and resp.text(); all failed with the same error below. node:internal/process/promises:288 triggerUncaughtException(err, true /* fromPromise */); ^ response.json: Protocol error (Network.getResponseBody): No resource with given identifier found I hope someone out there can help. UPDATE: Based on the response headers, the content is gzip encoded. Therefore, I incorporated the solution provided by ggorlen as below.
const responsePromise = page.waitForResponse(resp => resp.url().includes("https://example.com/user-check") && resp.status() === 200 && resp.request().method() === "POST" ); await page.click("#buttonClick"); const resp = await responsePromise; console.log("Response URL:", resp.url()); console.log("Response status:", resp.status()); console.log("Response body:", zlib.gunzipSync(resp.body())); I am guessing there is a specific way to decode the response body in playwright, because I got this error: Response status: 200 TypeError [ERR_INVALID_ARG_TYPE]: The "buffer" argument must be of type string or an instance of Buffer, TypedArray, DataView, or ArrayBuffer. Received an instance of Promise Re: "Update", please open a new question rather than changing the original problem to something else. Please provide a [mcve] when you do, including the actual site or a simplified reproduction of the important behavior on it. Thanks. Why are you using zlib.gunzipSync instead of the correct JSON.parse(await resp.body()) as shown in the answer? If you do need to use something other than JSON.parse, that may be OK, but at least await the promise as the error message says: zlib.gunzipSync(await resp.body()). 
It's difficult to help without the site, since there could be some additional behavior making the situation more complex than you assume, but the code should probably be arranged like: const responsePromise = page.waitForResponse(resp => resp.url().includes("https://example.com/user-check") && resp.status() === 200 && resp.request().method() === "POST" ); await page.click("#clickButton"); const resp = await responsePromise; console.log("Response URL:", resp.url()); console.log("Response status:", resp.status()); console.log("Response body:", resp.body()); Or, if you prefer Promise.all(): const [resp] = await Promise.all([ page.waitForResponse(resp => resp.url().includes("https://example.com/user-check") && resp.status() === 200 && resp.request().method() === "POST" ), page.click("#clickButton") ]); console.log("Response URL:", resp.url()); console.log("Response status:", resp.status()); console.log("Response body:", resp.body()); The rule of thumb is never to mix await and then. Also, remove the async keyword on functions that don't use await. 
Here's a minimal, complete example: const playwright = require("playwright"); // ^1.30.1 const html = `<!DOCTYPE html><html><body> <button id="clickButton">Go</button> <script> document.querySelector("button").addEventListener("click", e => { fetch("https://httpbin.org/post", { method: "POST", headers: { "Accept": "application/json", "Content-Type": "application/json" }, body: JSON.stringify({foo: 42}) }) .then(res => res.json()) .then(d => console.log(d)); }); </script> </body></html>`; let browser; (async () => { browser = await playwright.chromium.launch(); const page = await browser.newPage(); await page.setContent(html); const responsePromise = page.waitForResponse(resp => resp.url().includes("https://httpbin.org/post") && resp.status() === 200 && resp.request().method() === "POST" ); await page.click("#clickButton"); const resp = await responsePromise; console.log("Response URL:", resp.url()); console.log("Response status:", resp.status()); console.log("Response body:", JSON.parse(await resp.body())); console.log("Request body:", JSON.parse(await resp.request().postData())); console.log("Response data:", (await resp.json()).json); })() .catch(err => console.error(err)) .finally(() => browser?.close()); Output: Response URL: https://httpbin.org/post Response status: 200 Response body: { args: {}, data: '{"foo":42}', files: {}, form: {}, headers: { Accept: 'application/json', 'Accept-Encoding': 'gzip, deflate, br', 'Content-Length': '10', 'Content-Type': 'application/json', Host: 'httpbin.org', Origin: 'null', 'Sec-Ch-Ua': '"HeadlessChrome";v="117", "Not;A=Brand";v="8", "Chromium";v="117"', 'Sec-Ch-Ua-Mobile': '?0', 'Sec-Ch-Ua-Platform': '"Linux"', 'Sec-Fetch-Dest': 'empty', 'Sec-Fetch-Mode': 'cors', 'Sec-Fetch-Site': 'cross-site', 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/117.0.5938.62 Safari/537.36', 'X-Amzn-Trace-Id': 'Root=1-6536996a-0272562a24ed9f3645d0440a' }, json: { foo: 42 }, origin: 
'<IP_ADDRESS>', url: 'https://httpbin.org/post' } Request body: { foo: 42 } Response data: { foo: 42 } If this doesn't work when adapted to your use case, please provide enough information to reproduce the problem. This comment in Playwright issue #3617 suggests that a navigation can cause this, so make sure you're properly handling such an event. Thank you, and you were right, it is more complex. I found that the content is gzip encoded. Is there a way to decode the response before logging it? Here is a chunk of the response headers. Response headers: { expires: '0', date: 'Sun, 19 Mar 2023 19:00:39 GMT', 'strict-transport-security': 'max-age=31536000; includeSubDomains', 'x-content-type-options': 'nosniff', 'content-encoding': 'gzip', 'x-cdn': 'Imperva, Imperva', 'transfer-encoding': 'chunked', connection: 'keep-alive', } I don't know much about that header, but my understanding is that it's standard and handled by the request library. Are you expecting a JSON response or some other format?
I can't help much further than this without seeing the site, a clear description of what you're trying to accomplish, an error message (if there is one and it's different than the one before) and a [mcve] of the failing code that I can run and experiment with. Thanks. 'transfer-encoding': 'chunked' seems relevant... are you expecting a stream of data? Yes, I am expecting a json response. I also think gzip decoding is handled by the library. The json response in this structure { "user": "external", "status": [ { "@Code": "available", "description": [ "" ], "effect": "[]", "permissions": {} } ], "id": "35901033", "status": "available", "phone": "1234567890", "support": "Unknown", "portal": "remote" } Thanks, but I'm still missing the critical reproducibility and error diagnostic information necessary to help further. If you use my code and plug in your URL, you should see that response. If you don't, please share the URL, the exact error you're seeing and the steps I can take (i.e. runnable code) to reproduce that error on my machine so I can offer a solution.
Skipping alternate data points in gnuplot I am using the following code to get a graph: set term jpeg size "600,600" set output "test2.jpeg" unset key set xtic 500 set ytic 100 set title "DD-ME2" plot "nkBDDME2.out" us 2:1 lc -1 lw 2 with lines , "nkDDME2.out" us 2:1 lc rgb "#FF4433" pt 5 ps 0.5 In the plot, the points are very close together, making the other line less visible. Is there any way to plot alternate data points to space out the points? Is there any way of doing this without directly manipulating the data file by deleting the alternate data values? Did you have a look at help every? (Adding every 2 to a plot clause, e.g. "nkDDME2.out" every 2 us 2:1 ..., draws only every second data point.) @Eldrad No. I did not know about that. After looking at it, my problem is solved. Thanks.
Understanding the point estimation of the expected value I am trying to understand this problem; however, I can't get past some of the definitions used when estimating the expected value. What I would need is to confirm or disprove my conclusions - I read my school materials and tried to find the answers on the internet. Let's say we measure how toxic sea fish are and we want to know the expected value of toxicity when we catch some fish. From my understanding we should do something like this: If we knew the random variable of toxicity $X$, we could compute the expected value, but we don't know that. If we want to estimate the expected value, the estimation is: $E(\bar x)= \mu$, where $\bar x$ is the sample mean. So we catch, for example, $100$ fish and measure the toxicity - now is the part where I am lost: In my scripts it says that this is a random sample, from which we get the random vector $(X_1, X_2, X_3, \dots, X_{100})$. How can we get a random vector? A random vector should consist of random variables - does that mean that each of the fish gets its own random variable? Or does it mean that the random vector gives us values of random events, and those random events mean: "I will catch $100$ fish from the Atlantic" is one event and "I will catch $100$ fish from the Pacific" is a second event? The second one seems right to me. Ok, let's say we have this random vector. To compute the estimation: $E(\bar x)=\frac 1n \sum_{i = 1}^{100}x_i$ $N$ = all fish in the sea. Now does $x_i$ represent the numerical values of toxicity of each fish caught from our random vector $(X_1, X_2, \dots)$? So if we were able to catch all the fish in the sea, wouldn't this be just an average? Maybe the best thing to understand this would be if someone could estimate the expected value when, for example, we caught $10$ fish: toxicity of each fish: $5,2,7,8,9,1,1,1,2,1$ - from a pool of $100$ fish. Thanks for replies! The vector $(x_1,x_2,...,x_{100})$ is our realization of the random vector $(X_1,X_2,...,X_n)$.
The fact that $(X_1,X_2,...,X_n)$ is a random vector is simply the fact that every sample of 100 fish is going to give you a different realization of the vector of values $(x_1,x_2,...,x_{100})$. Each $X_i$ is a random variable which represents the value of a random fish. Every time you take a sample of $100$ fish, the toxicity $x_i$ of the $i_{th}$ fish in your sample is random. Thus, the $i_{th}$ fish's toxicity is a random variable $X_i$. $\bar{X}$ (the sample mean) is a random variable because the sample is itself random. It is our estimator of $\mu$, the average population toxicity. The expression $E(\bar{X})=\mu$ tells us that on average, our sample mean will be equal to the true population toxicity. Every time we grab 100 fish, we get a different value for the sample mean; this is $\bar{x}$. This is our estimate of $\mu$. An estimate is not a random variable, it is just a number. Thus, the expression $E(\bar{x})$ is not very meaningful; the expected value of a number is just the number: $E(\bar{x})=\bar{x}$. $\bar{x}$ is our estimate of the mean value of toxicity. It is an unbiased estimate, since the sample mean as an estimator is on average equal to the true population mean. We have no reason to expect that the sample mean is consistently above or below the true mean.
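Since the question asks for a concrete estimate: for the $10$ measured fish, the point estimate of $\mu$ is just the sample mean $\bar x = (5+2+\dots+1)/10 = 3.7$. A small simulation (a sketch with made-up population parameters $\mu=5$, $\sigma=2$) also illustrates why $\bar X$ is a random variable whose realizations average out to the true mean:

```python
# The point estimate for the 10 measured fish is just their average.
toxicity = [5, 2, 7, 8, 9, 1, 1, 1, 2, 1]
xbar = sum(toxicity) / len(toxicity)
print(xbar)  # 3.7 -- our estimate of the expected toxicity

# Simulation: every sample of n fish gives a different realization
# (x1, ..., xn) of the random vector, hence a different sample mean;
# on average those sample means equal the true mu (unbiasedness).
import random
random.seed(0)
mu, sigma, n = 5.0, 2.0, 100
means = [sum(random.gauss(mu, sigma) for _ in range(n)) / n
         for _ in range(1000)]
print(sum(means) / len(means))  # close to mu = 5.0
```

Each entry of means is one realization $\bar{x}$; the spread of that list is exactly the randomness of the estimator $\bar{X}$ being described above.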
Use global variable in ASP.NET MVC First look at the sample controller code. There you'll find two statements which are repetitive- public class DashboardController : Controller { //the following line is repetitive for every controller. String ConnectionString = WebConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString; public ActionResult Index() { try { //codes } catch (Exception Ex) { //The following line is repetitive for every action method. var path = HttpContext.Server.MapPath("~/App_Data"); ExceptionLog.Create(Ex, path); } return View("AdminDashboard"); } } I would like to avoid such repetition. Is there any way to do it which can work for the entire application as a global variable? Create a BaseController and inherit all your controllers from it @StephenMuecke I was going to suggest that WebConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString; as this connection string itself is global in your app lifecycle, why don't you directly use it? Thanks Stephen Muecke. Would you please provide a sample snippet so that I can grasp the concept in detail? Especially with the "App_Data" path. Here is an example using a base controller approach: public abstract class BaseController : Controller { protected string ConnectionString = WebConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString; protected void LogException(Exception ex) { var path = HttpContext.Server.MapPath("~/App_Data"); ExceptionLog.Create(ex, path); } } public class DashboardController : BaseController { public ActionResult Index() { try { string conn = base.ConnectionString; //codes } catch (Exception ex) { base.LogException(ex); } return View("AdminDashboard"); } } You still have a redundant try-catch block that does exactly the same work in every action of every controller.
It's just one approach - one benefit is that a view can be returned within the try and if an exception is raised, once the catch block has exited, a different view can be returned @Ric Thanks. I think this is the simplest way out. If it were me, and there were frequent configuration options I needed to access, I would create/expose some kind of IAppConfiguration which I could drop in to my controllers (using dependency injection). Something like: public interface IAppConfiguration { String MyConnectionString { get; } String ServerPath { get; } } public class AppConfiguration : IAppConfiguration { private readonly HttpContext context; public AppConfiguration(HttpContext context) { this.context = context; } public String MyConnectionString { get { return ConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString; } } public String ServerPath { get { return context.Server.MapPath("~/"); } } } Then you can extend it for different circumstances, sub-sites/areas, etc. Then, bring it in via the controller's constructor: public class MyController : Controller { private readonly IAppConfiguration config; public MyController() : this(new AppConfiguration(HttpContext.Current)) { } public MyController(IAppConfiguration config) { this.config = config; } // reference config.ServerPath or config.MyConnectionString; } Taking it a step further, you could add a factory atop this, bring that in as a reference instead, and allow yourself to get configuration settings for other environments. e.g. IAppConfiguration config = configFactory.GetConfig(/* environment */); The thing I like about this approach is it allows me to substitute out settings for local [unit/integration] testing. Using base controllers/static classes would make it very difficult to substitute out these values at a later time. Thanks Brad. Can I use this technique in a base controller approach? @s.k.paul - combine the approach i've given with this example and you're on your way.
@s.k.paul: You can, but base controllers become more difficult to emulate in tests (if that's of concern to you). I avoid controller inheritance because it's (typically) a pain to be consistent. However, you could create a static extension (e.g. public static void LogException(this Controller controller, Exception ex) in a LogExceptions class.) But I'd keep things broken out by module and bring them in. @Brad, : this(new AppConfiguration(HttpContext.Current)) gives error - object reference is required. Any help? If you use DI, you can either use that as your config, or throw that into a static method and pass that method in the c'tor. I just used that to show an example of building the class. To avoid exception catching you can log it in the Application_Error event handler in global.asax.cs: protected void Application_Error(object sender, EventArgs e) { HttpContext ctx = HttpContext.Current; Exception ex = ctx.Server.GetLastError(); var path = ctx.Server.MapPath("~/App_Data"); ExceptionLog.Create(ex, path); } you can use the connection string name "MyConnectionString" instead of WebConfigurationManager.ConnectionStrings["MyConnectionString"].ConnectionString when you pass it to a DbContext or ADO connection. But a better way is to inject (with Dependency Injection) into the controller a class that will be responsible for working with the database, instead of creating it in the controller and passing the connection string there. You can inherit a controller and then use it for all other controllers, have a look at the similar question How can I inherit an ASP.NET MVC controller and change only the view?. But this will be only accessible in controllers; you can also create a class with a static method or property returning the connection string. This can be used in any path of code where you have access to the class. And the bad design is? Having a central class returning the value? That's what the example by Brad is doing in principle.
I would not create an object every time for the given case, though I would make it configurable if needed. Anyway, the OP got his answer; happy for him :). Thanks for your response.
Can the gravitational field be considered conservative despite the existence of singularities? Assuming singularities are physical objects as opposed to mathematical artifacts, can the gravitational field still be considered conservative? And if not, does this open a possibility of breaking the law of conservation of energy? Open to discussion on this one: I was a bit distracted in my calc class, and it was mentioned that if a vector field's domain isn't simply connected, the field isn't necessarily conservative. I'm probably missing something, but a conversation about this would be incredibly interesting. The space $\mathbb R^3$ with isolated point singularities removed is, however, still simply connected, so the problem actually does not exist. In three-dimensional space, to produce a multiply connected domain you should remove more than isolated points. You should remove lines at least (closed or infinitely extended). However, even if the domain is not simply connected, an irrotational vector field may still admit a potential. Think of the static electric field, for instance, in the presence of a source given by an infinitely long uniformly charged line or a charged ring: it always admits a potential, no matter the topological problems of its spatial domain, in view of the integral Maxwell laws.
How to make Rails 4.2 work with Postgres Jsonb? I've seen a few blog posts claiming that Rails 4.2 added support for the new jsonb data type in Postgres 9.4. However, googling gets me zero results on how to actually use the datatype. Since I'm not depending on key order and I would like my application to be fast, I would very much like to use jsonb instead of json in one of my models. Was it actually added in 4.2, and, if so, how do you use it? It's part of the not yet released version of Rails 4.2 (currently 4.2.0.rc3). To use the datatype, specify the jsonb type when creating a table: create_table :users do |t| t.jsonb :extra_info end or add to an existing table add_column :users, :extra_info, :jsonb Since jsonb is virtually the same as json except for the internal storage, the way you work with the column is the same as well. There are a number of operators made possible in Postgres 9.4. Check out this article on some of those in the context of a Rails 4.2 app. http://robertbeene.com/rails-4-2-and-postgresql-9-4/
How to set PrimeUI Datatable column width How do you set a PrimeUI Datatable column width? I have tried this code ("http://www.primefaces.org/primeui/datatable.html"); it is working fine, but I want to set the width of a column. Please help me. Thanks in advance. Just add width to the .pui-datatable thead th, .pui-datatable tbody td, .pui-datatable tfoot td class? Edit: .pui-datatable thead th:nth-child(1), .pui-datatable tbody td:nth-child(1), .pui-datatable tfoot td:nth-child(1) { width: 50px; } .pui-datatable thead th:nth-child(2), .pui-datatable tbody td:nth-child(2), .pui-datatable tfoot td:nth-child(2) { width: 100px; } It is working, but I want to set the width of a particular column. How can I set it? For example: 1st column width: 50px, 2nd column width: 100px. Thanks in advance. I've edited my answer. Remove the width you've added now and add these classes. Another option is to use the headerStyle option in the columns array. {field: 'view', headerText: 'View', content: createEditButton, headerStyle: "width:8%"}, I haven't been able to find the complete options list for the columns array, so if anyone knows where it is please post it! http://primefaces.org/primeui/#datatable scroll down to the discussion on the Column array. It's hidden in the "Datasource" subtopic. Sometimes I wish the PrimeFaces documentation was as stellar as their products :-)
Can I pipe /dev/video over ssh I have two computers, a desktop in my office (with a webcam attached) and a laptop somewhere else on the network. Usually I take a look at my office through my webcam by running ssh Office -Y "mplayer tv://device=/dev/video0" from my laptop. I don't like X-forwarding mplayer, so why can't I tunnel /dev/video0 to my PC by running this on my laptop? sudo mkfifo /dev/video1 ssh Office 'dd if=/dev/video0' | sudo dd of=/dev/video1 and then to watch the webcam (on my laptop) mplayer tv://device=/dev/video1 Something like: dd if=/dev/video0 | mplayer tv://device=/dev/stdin works for me (SOA#1) locally. So does: ssh localhost dd if=/dev/video0 | mplayer tv://device=/dev/stdin As well as mkfifo test dd if=/dev/video0 of=test & mplayer tv://device=test Hence: Try without a named pipe Check bandwidth Also - how does it not work (displays a black screen, complains about an unknown device, etc.)? I think something is wrong with my mplayer. If I run dd if=/dev/video0 | mplayer tv://device=/dev/stdin it tells me the resource is busy. Otherwise it works (I see video) even when I run mplayer tv://device=/dev/null This answer is quite misleading. The "correct" invocation of mplayer would be mplayer tv:// -tv device=/dev/stdin or similar, but this does not work (character devices are more special than dd can handle properly). When you run mplayer tv://device=/dev/stdin it is not seeing a device specification and so falling back to /dev/video0 directly, giving the illusion of "working". But it won't work at all when the webcam and mplayer process are separated by the network. Yes, this looks like it's working right because you're SSH'ing to localhost, but in reality it's failing and mplayer is falling back to /dev/video0 on localhost. If you try these commands SSHing to a different computer (i.e. not localhost), you'll see your local webcam, not the remote one.
You tried in local host but how to run the command ssh localhost dd if=/dev/video0 | mplayer tv://device=/dev/stdin when using two different computers? What is an alternative of mplayer tv://device=/dev/stdin on OSX? I recommend using HNP for this purpose: https://www.psc.edu/hpn-ssh If you have a low bandwidth I recommend compression of the video stream (still works in 2020). with ffmpeg and mplayer ssh USERNAME@REMOTEHOST ffmpeg -an -f video4linux2 -s 640x480 -i /dev/video0 -r 10 -b:v 500k -f matroska - | mplayer - -idle -demuxer matroska where -an turns off audio encoding. If you want audio, replace -an with -f alsa -ac 1 -i hw:3 (where hw:3 could also be hw:0 or hw:1, … See arecord -l for your device). If you want audio only (no video), use this) -s 640x480 is the size of your video in x and y dimension -r 10 is the framerate you want to receive (lower makes better images at low bitrates, but looks more bumby) -b:v 500k is a bitrate of 500 kilobit/s You need ffmpeg on the remote host and mplayer on the local machine installed. with ffmpeg and mpv ssh USERNAME@REMOTEHOST ffmpeg -an -f video4linux2 -s 640x480 -i /dev/video0 -r 10 -b:v 500k -f matroska - | mpv --demuxer=mkv /dev/stdin with ffmpeg and ffplay ssh USERNAME@REMOTEHOST ffmpeg -an -f video4linux2 -s 640x480 -i /dev/video0 -r 10 -b:v 500k -f matroska - | ffplay -f matroska /dev/stdin Thank you for this solution. Trying to tunnel RTSP over SSH using ffmpeg/ffplay or some other solution involving http servers, etc, is so much more complicated than encoding to matroska and using stdout/stdin for the connection! For anyone getting the error message Unable to find a suitable output format for 'pipe:' pipe:: Invalid argument: when outputting to a Unix pipe, ffmpeg requires to have an output format explicitly specified, just as erik does in his answer: ... -f matroska - | ffplay ... This one worked for me, but with a 10-second delay. Both computers are hard wired with cat5e to the same 1G switch. 
@JayRugMan The delay comes from buffers being filled before playing. If you find a solution with smaller or zero buffers that have less delay you can comment here. The mpv version doesn't work for me but the mplayer version does. mpv will complain about invalid input and exit while mplayer will wait a while and then play something with a huge delay. The accepted answer does not work for me. dd simply won't read it. nc is bad if you cant spare another port (I didn't get that to work at all either anyway). cat didn't work for me either. What ended up working for me was this on the receiving end: ssh user@host "ffmpeg -r 14 -s 640x480 -f video4linux2 -i /dev/video0 -f matroska -" | mplayer - -idle This has the benefit of it being encoded, so you save bandwidth as a bonus. Nothing else on any forum/website was working for me on a debian machine. Combine with tee and you can watch and record at the same time: ssh user@host "ffmpeg -r 14 -s 640x480 -f video4linux2 -i /dev/video0 -f matroska -" | tee $(date +%Y-%m-%d_%H-%M-%S)_recording.mkv | mplayer - -idle This will open mplayer for live streaming and save it to a file containing the current datetime at the same time (example filename: 2018-11-22_01-22-10_recording.mkv). Seems like less of a delay than the similar answer above. Good show. Thanks. The VideoLAN Project exists in large part to do just what you desire. I've not used its streaming capabilities but in its single machine use it has shown to be rock solid for me. And so, could you elaborate for this scenario, please ...? -1, the question is about streaming over SSH. VLC's unencrypted stream is useless by itself I don't know if there's any reason you can't do it, but one problem I see with your implementation is that the remote system will look for /dev/video1 on its system, but won't be able to find it because you created it on your local system. 
What I'd do is something along the following: nc -l 12345 | sudo tee /dev/video > /dev/null & ssh Office and then try something by telling it to go to your local system's TCP port 12345. I tried clarifying my question. Please see the updated version. This isn't the best option as far as quality, but cool, nonetheless. If you have VLC installed on the remote computer and ssh to it from a terminal, you can run the following command and get video streamed from the remote webcam to your terminal in ASCII art. I stumbled across this cool feature when trying to do what the OP is doing with a command that works locally to open the camera with VLC. Locally, this will open a window with video streaming normally from your web camera: [me@myComp /some/dir]$ cvlc v4l2:///dev/video0 Remotely, first ssh just for a remote terminal, then run the same command: [me@myComp /some/dir]$ ssh person@otherComputer ... [person@otherComputer /some/dir]$ cvlc v4l2:///dev/video0 Voilà, ASCII video from the remote webcam. If you zoom the terminal out, you get better resolution, but a worse frame rate. There's a happy medium somewhere in there - just fiddle with it some. Anyway, not high quality, but still awesome!! Note, to stop the feed, I've had to ssh to the remote machine in another terminal and kill VLC from the command line. [me@myComp /some/dir]$ ssh person@otherComputer ... [person@otherComputer /some/dir]$ killall vlc
SSRS aggregate expression: get sum of column from values as expressed in each row I have to prepare a report that lists the % of enquiries turned into sales. We want the % to be <= 100%: =IIF( Sum(Fields!Sold.Value)/SUM(Fields!Enquired.Value)>1,1, IIF(Sum(Fields!Sold.Value)=0,"", Sum(Fields!Sold.Value)/SUM(Fields!Enquired.Value)) ) and the Sold value to be <= the Enquired value: =IIF( Sum(Fields!Sold.Value) > SUM(Fields!Enquired.Value), SUM(Fields!Enquired.Value), IIF(Sum(Fields!Sold.Value)=0,"",Sum(Fields!Sold.Value)) ) So I have used the expressions included here to achieve that objective. The issue is that the total still shows the actual sum of Sold, by which I mean we would like the total to appear as 50 instead of 51 and the % to appear as 96% instead of 98%. Any suggestion on whether and how this can be done? Thanks. I'm not sure how to title this, so please pardon if it isn't clear. I think you're making life harder for yourself than is entirely necessary with those equations you have worked out currently. First, if we re-think your requirements, they are that if Sold is greater than Enquired, use the Enquired value, otherwise the Sold value.
This can be represented as =iif(Fields!Sold.Value > Fields!Enquired.Value, Fields!Enquired.Value, Fields!Sold.Value) <-- This will be the GREEN <Expr> below To then calculate the percentage for this product you should then use this calculation in the Sold/Enquired column as follows =iif(Fields!Sold.Value > Fields!Enquired.Value, Fields!Enquired.Value, Fields!Sold.Value) /Fields!Enquired.Value <-- This will be the PURPLE <Expr> below To determine the sum for the Sold items we need to determine the sum of either the Sold value, or the Enquired value, as follows =Sum(iif(Fields!Sold.Value>Fields!Enquired.Value, Fields!Enquired.Value, Fields!Sold.Value)) <-- This will be the RED <Expr> below Finally, you want to determine the percentage by dividing this value by the sum of the Enquired values, so the final expression needs to be =Sum(iif(Fields!Sold.Value>Fields!Enquired.Value, Fields!Enquired.Value, Fields!Sold.Value)) / sum(Fields!Enquired.Value) <-- This will be the BLUE <Expr> below This will give a design such as this And when run will give this output Hopefully that is the output you require. Please let me know if I can help further. Thank You very much. I see the difference in approach. I was so focused on getting the 0 or >100% out of the way i think that got me thinking differently.
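Outside SSRS syntax, the capping logic the answer describes can be sketched in a few lines. This is only an illustration of the arithmetic; the function names are made up, and Python stands in for the report expressions.

```python
def capped_sold(sold, enquired):
    # If Sold exceeds Enquired, use the Enquired value instead
    return enquired if sold > enquired else sold

def conversion(rows):
    # rows is a list of (sold, enquired) pairs; the total sums the
    # capped per-row values, so the percentage can never exceed 100%
    total_sold = sum(capped_sold(s, e) for s, e in rows)
    total_enquired = sum(e for _, e in rows)
    return total_sold, total_sold / total_enquired

# e.g. one row over-reports (2 sold vs 1 enquired):
# conversion([(2, 1), (48, 51)]) caps the total at 49 of 52
```

The key point, as in the answer, is that the cap is applied per row before summing, not after.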
Div not visible in IE8/7... sometimes I'm having a strange issue that I haven't encountered before. I'm working on the front-end code for this page: REDACTED When I view this in IE, the dropdown div doesn't show up (I currently have it set to show always while I troubleshoot, so it's that big block that has lists of Men's and Women's categories). I can click that div in IE Dev Tools and see its outline, but the div itself is nowhere to be found. In the process of troubleshooting, I dumped my code into jsFiddle to mess around a little and there it works fine, while in IE. It's the exact same HTML and I just loaded my external CSS and JS files as resources, so it's all completely identical. Here's the jsFiddle link: REDACTED Anybody have any idea what gives? Here's the CSS for the affected div: #productDrop { background: url(../../images/global/bgProductDrop.png) no-repeat; left: -646px; height: 166px; padding: 40px; top: 37px; width: 675px; } Thanks much in advance, Marcus from ie.css delete the overflow-x:hidden; for the #headerWrapper
Python flask and how to show code in a web page My web framework is Flask. I would like to create a section of a page that will show JavaScript and HTML code fragments. I don't know the lingo for doing this. Are there any Python Flask plugins for this? Any other type of JS libraries? Again, I need JS and HTML code snippets. Thanks Take a look at Prettify - Stack Overflow uses this to format code. It will format code on your page and make it look as it does here on this site. You just include the JavaScript and CSS on your page. <link rel="stylesheet" type="text/css" href="css/colorschemes/prettify.css" /> <script type="text/javascript" src="js/prettify/prettify.js"></script> See the setup instructions here It worked for JS, but for HTML it showed a button rather than a code snippet.... Have a look at pygments. It can handle a large number of formats.
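The HTML snippet probably rendered (hence the button) because the markup wasn't escaped before being placed in the page. A minimal sketch of the fix, in plain Python usable from a Flask view — the helper name is made up:

```python
from html import escape

def render_snippet(code):
    # Escape < > & so the browser displays the markup as text instead
    # of rendering it, then wrap it in the class Prettify looks for.
    return '<pre class="prettyprint">{}</pre>'.format(escape(code))

# render_snippet('<button>Go</button>') produces
# '<pre class="prettyprint">&lt;button&gt;Go&lt;/button&gt;</pre>'
```

Note that Jinja2 autoescaping normally does this for you inside templates; the raw-markup problem usually appears when snippets are inserted with the |safe filter or built outside the template engine.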
Autosubmit form input with Siri I'm developing a simple web app for iPhone. I have a form with a text input and I'd like to use voice to search. It works, but I have to press the return button on the keyboard. Is there a way to auto-submit the query generated by Siri? In JavaScript you can listen for the onChange event and then call form.submit()
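A minimal sketch of that suggestion (the form id and action URL are placeholders): the change event fires once dictation fills the field and focus leaves it, and the handler submits the form without the return key.

```html
<form id="searchForm" action="/search" method="get">
  <input type="text" name="q"
         onchange="document.getElementById('searchForm').submit();">
</form>
```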
How to vertically separate Material UI Gridded Paper components? Goal components are not being spaced like I would like them to be. This is currently what they look like. So far, I've tried setting the Paper selector display to flex. I've also tried increasing the spacing properties on the different Grid components. Increasing the space just made the padding expand out. I would like them to be spaced out vertically from one another instead of overlapping. How can I accomplish this responsively? I want it to look like the diagram below where the red boxes are the goal components and the black box represents the web page. import React, { useEffect } from "react"; import Moment from "react-moment"; import PropTypes from "prop-types"; import { Link } from "react-router-dom"; import { connect } from "react-redux"; import { getGoals } from "../../actions/goal"; import Spinner from "../layout/Spinner"; import Navbar from "../dashboard/Navbar"; import ThumbUpAltIcon from "@material-ui/icons/ThumbUpAlt"; import ThumbDownAltIcon from "@material-ui/icons/ThumbDownAlt"; import ChatIcon from "@material-ui/icons/Chat"; import DeleteIcon from "@material-ui/icons/Delete"; import DoneIcon from "@material-ui/icons/Done"; import { Typography, Container, CssBaseline, makeStyles, Grid, Avatar, Paper } from "@material-ui/core"; const useStyles = makeStyles(theme => ({ paper: { height: "auto" }, actionButtons: { marginTop: "3vh" }, profileHeader: { textAlign: "center", marginBottom: 20 }, avatar: { width: theme.spacing(7), height: theme.spacing(7) } })); const Goals = ({ getGoals, auth, goal: { goals, user, loading } }) => { useEffect(() => { getGoals(); }, [getGoals]); const classes = useStyles(); return loading ? 
( <> <Navbar /> <Container component="main" maxWidth="xs"> <CssBaseline /> <div className={classes.paper}> <Spinner /> </div> </Container> </> ) : ( <> <CssBaseline /> <Navbar /> <Container> <Typography variant="h2" className={classes.profileHeader}> Goals </Typography> {/* parent grid */} <Grid container spacing={4}> {goals.map(singleGoal => ( <Grid className={classes.paper} key={singleGoal._id} spacing={1} container item direction="row" alignItems="center" component={Paper} > <Grid item container direction="column" justify="center" alignItems="center" xs={3} > <Avatar className={classes.avatar} src={singleGoal.avatar} /> <Typography variant="caption"> {singleGoal.first_name} {singleGoal.last_name} </Typography> <Typography variant="caption" className={classes.postedOn}> Posted on{" "} <Moment format="MM/DD/YYYY">{singleGoal.date}</Moment> </Typography> </Grid> <Grid container item direction="column" xs={9}> <Typography variant="body1">{singleGoal.text}</Typography> <Grid item className={classes.actionButtons}> <ThumbUpAltIcon /> <ThumbDownAltIcon /> <ChatIcon /> <DoneIcon /> <DeleteIcon /> </Grid> </Grid> </Grid> ))} </Grid> </Container> </> ); }; Goals.propTypes = { getGoals: PropTypes.func.isRequired, goal: PropTypes.object.isRequired }; const mapStateToProps = state => ({ goal: state.goal, auth: state.auth }); export default connect(mapStateToProps, { getGoals })(Goals); It will be nice if you can provide a minimum workable example here. 
To space <Paper /> components vertically with styles, do it like this: import React from "react"; import "./styles.css"; import { makeStyles } from "@material-ui/core/styles"; import Paper from "@material-ui/core/Paper"; const useStyles = makeStyles(theme => ({ root: { display: "flex", //flexWrap: 'wrap', "& > *": { margin: theme.spacing(1), width: theme.spacing(46), height: theme.spacing(16) }, padding: theme.spacing(5, 5), height: "100%", //display: "flex", flexDirection: "column", justifyContent: "center" }, paper: { //margin: theme.spacing(10), marginBottom: theme.spacing(5) // Change this line for more spacing } })); export default function SimplePaper() { const classes = useStyles(); return ( <div className={classes.root}> <Paper elevation={4} className={classes.paper} /> <Paper elevation={4} className={classes.paper} /> <Paper elevation={4} className={classes.paper} /> </div> ); } Answer output: HERE Turns out I just needed to add the marginBottom to my Paper selector: paper: { height: "auto", marginBottom: theme.spacing(3) } I've created an example of responsive layout using the Grid with Paper components (similar to yours). Hope that helped. import React from "react"; import "./styles.css"; import { makeStyles } from "@material-ui/core/styles"; import { Avatar, Grid, Paper, Typography } from "@material-ui/core"; const useStyles = makeStyles(theme => ({ root: { padding: theme.spacing(2) }, paper: { minHeight: theme.spacing(10), padding: theme.spacing(2) }, avatar: { marginBottom: theme.spacing(1.5) }, [theme.breakpoints.down("xs")]: { description: { marginTop: theme.spacing(1.5) } } })); export default function SimplePaper() { const classes = useStyles(); const users = [ { name: "Jason", desc: `Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.` }, { name: "Jonathan", desc: `Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.` }, { name: "Joshua", desc: `Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.` } ]; const renderPaper = ({ name, desc }) => ( <Grid className={classes.paper} component={Paper} container alignItems="center" > <Grid item xs={12} sm={3} md={2}> <Grid container direction="column" alignItems="center"> <Avatar className={classes.avatar} /> <Typography variant="subtitle2">{name}</Typography> </Grid> </Grid> <Grid item xs={12} sm={9} md={10}> <Typography variant="body1" align="left" className={classes.description} > {desc} </Typography> </Grid> </Grid> ); return ( <Grid className={classes.root} container direction="column" spacing={4}> {users.map(user => ( <Grid item>{renderPaper(user)}</Grid> ))} </Grid> ); } Working Demo: https://codesandbox.io/s/broken-monad-jv6pv?fontsize=14&hidenavigation=1&theme=dark
How can I display a random set of slides with the Twitter Bootstrap carousel? How can I create an array of slides and then only display a certain number of them randomly? For example, if I set up an array like this: var slides = []; slides[0] = '<div class="item"><img alt="" src="/Slides/gallery.jpg"></div>'; slides[1] = '<div class="item"><img alt="" src="/Slides/item1.jpg"></div>'; slides[2] = '<div class="item"><img alt="" src="/Slides/forums.jpg"></div>'; slides[3] = '<div class="item"><img alt="" src="/Slides/featured.jpg"></div>'; slides[4] = '<div class="item"><img alt="" src="/Slides/sale.jpg"></div>'; slides[5] = '<div class="item"><img alt="" src="/Slides/discount.jpg"></div>'; slides[6] = '<div class="item"><img alt="" src="/Slides/gallery.jpg"></div>'; slides[7] = '<div class="item"><img alt="" src="/Slides/a.jpg"></div>'; slides[8] = '<div class="item"><img alt="" src="/Slides/b.jpg"></div>'; slides[9] = '<div class="item"><img alt="" src="/Slides/c.jpg"></div>'; slides[10] = '<div class="item"><img alt="" src="/Slides/d.jpg"></div>'; How can I choose 5 of the 10 at random and then output it into the Twitter Bootstrap carousel? <div id="myCarousel" class="carousel"> [Items go here, first one needs to get the class 'active'] </div> I would fill the "Items go here" with server-side code; forget the JS array. For example, in PHP, you could use array_rand to get a random subset. Otherwise... if you really want to use JS, then pick a few elements at random and append them to your div. You may have to initialize the carousal afterwords using the API. I was going to post an answer for you but as it turns out, it's much harder to shuffle an array properly with Javascript (without a plugin) than it is in PHP etc as @Mark mentions above. 
Now the other reason you should be doing this on the server side is that you'll be saving load time/resources by not bringing back all the possibilities before they are randomly selected, and if you place slides with JS, clients with JavaScript disabled will see nothing. At least if you place them with HTML & PHP (etc.) they will be seen. In addition to what the others have said, I'd only put the filename in your array, not a big gob of HTML that's identical for each item. Loop through them to render the HTML in the page, or do it server-side. The site isn't PHP-driven though. Also, the HTML for the slides will be different (text, etc.), but I just wanted to show a basic example. Any recommendations on how to do this with JavaScript, jQuery, or a specific plugin? Surely you're using some server-side language? It doesn't have to be PHP. I don't have any code for you, there's really not much to it. Write a standard for loop, and append each slide: $('#myCarousel').append(slides[i]). Then initialize it: $('#myCarousel').carousel(). Once you've got that working, work on picking a random subset of elements.
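Putting those suggestions together, a client-side sketch might shuffle a copy of the array with a Fisher–Yates pass and take the first n entries. The selector and slide markup follow the question; treat this as one way to do it, not the only correct one.

```javascript
// Pick `count` slides at random: shuffle a copy of the array
// (Fisher–Yates), then take the first `count` entries.
function pickRandomSlides(slides, count) {
    var pool = slides.slice();
    for (var i = pool.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = pool[i]; pool[i] = pool[j]; pool[j] = tmp;
    }
    return pool.slice(0, count);
}

// Usage with jQuery (sketch): mark the first pick as active,
// append each picked slide, then initialize the plugin.
// var picked = pickRandomSlides(slides, 5);
// picked[0] = picked[0].replace('class="item"', 'class="item active"');
// picked.forEach(function (s) { $('#myCarousel').append(s); });
// $('#myCarousel').carousel();
```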
Nifi GetDynamoDB - get all values of range I have a dynamoDB table that I'm populating with PutDynamoDB, no problems, it has a hash key of "userid" and a range key of "timestamp", the epoch time of the event. Is it possible, using the GetDynamoDB processor, to get back all or part of the set of range keys associated with the hash ID? That is, if user 1 posts 5 times, I want to be able to issue a GetDynamoDB call with the userid value of 1, and get back an array of all 5 post times by user 1. If I only want the 3 posts made yesterday, I want to supply a greater than and less than value to the lookup query, so that I only recover those 3 records. I've tried simply querying without a range key value, and have not had any luck. In dynamoDB's interface, I can issue a hash key with a "between" range for the sort key. Is it possible to do the same in nifi? I do not believe the GetDynamoDB processor allows this type of query. GetDynamoDB uses Dynamo's BatchGetItem API, which requires a complete set of keys for each item retrieved. In contrast, I believe the Query API is what you used in the DynamoDB console UI to request a set of multiple range key items for the same hash key. The GetDynamoDB processor code certainly would be a good place to start if you wish to develop a processor that performs the query operation. Darn, not the answer I was ultimately hoping for, but an answer nevertheless! I'll see what I can do about putting a QueryDynamoDB processor together sometime.
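For reference, the Query request that the DynamoDB console issues for a hash key plus a range condition looks roughly like the sketch below. The table name and attribute values are hypothetical; #ts is required because timestamp is a reserved word in DynamoDB expressions.

```json
{
  "TableName": "posts",
  "KeyConditionExpression": "userid = :u AND #ts BETWEEN :start AND :end",
  "ExpressionAttributeNames": { "#ts": "timestamp" },
  "ExpressionAttributeValues": {
    ":u":     { "N": "1" },
    ":start": { "N": "1493596800" },
    ":end":   { "N": "1493683200" }
  }
}
```

A custom processor built on the Query API would issue requests of this shape, in contrast to the complete-key lookups BatchGetItem requires.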
What's the best thing to do about rudeness in questions / comments? I'm thinking of this exchange here Finding the best two predictor variables used conjointly, and levels of each - both in the comments on the original question (and now in the edits to the question) and to my answer. Obviously a good approach as individuals is to stay calm and keep away, but how should we act as a community? I'd like to see a clear message that this sort of behavior is not welcome on the site. But having said that the question is clearly within scope and so should not be closed on any of the usual grounds. Also, I was surprised to find not much discussion on rudeness in the meta, am I maybe looking in the wrong place. I've not much to add to @Matt's reply. I am grateful to you, Peter, for showing such a positive attitude. This user has been kindly notified by the system that constructive and respectful exchanges are expected on this community site. On behalf of the community I apologize to you, Peter, for any grief your valiant attempts at helping with this thread may have brought you. It was left open while there was some hope for a constructive resolution. To my view, that hope is largely gone now, but not for lack of any effort on your part (or on the part of other interlocutors). One way or another, the entire exchange will be cleaned up soon. No problem, thanks. @All By means of deletion and judicious editing of comments to the referenced thread, I have retained the content relevant to the question and removed the other remarks contained therein. I hope everyone involved views this as constructive, but if any of these changes have (inadvertently) changed the meaning, please feel free to make suitable modifications or additions. Please just make sure your comments address the substance of the question. I have no problems with your edits, @whuber, but his final comment on that thread - "You people are geniuses, providing a great deal of insight into answering the question. 
Problem solved! If you're ever in town be sure to look me up so that I can buy you a beer." was clearly sarcasm though I refrained from flagging it. Re: the thread where you ask us to vote on the merits of the question - I admit to being a downvoter. My rationale was that his third question was a thinly veiled re-phrasing of the previous question. I almost close voted but decided to simply downvote. Why don't we just take the final comment at face value and leave it at that? :-) @whuber, I had another run in with this guy. See the edit history at http://stats.stackexchange.com/questions/28847/how-to-estimate-the-extrema-of-an-unknown-function-relating-two-predictors-to-a ... I deleted his solicitation for contact outside of the site a few times in a row (the same material you deleted a few days ago). Eventually he had an outburst that I reverted by editing. At that point I flagged the post. For future reference, should I have just left it alone after he was resisting the edit and flagged it at that point? Or...? @Macro Thank you for your efforts. It's hard to say what's the right course of action in such circumstances. It's probably a good idea to flag any situation where a fight seems to be emerging, such as an editing/rollback war (as in this circumstance) or when language becomes uncivil. Even the most tactful and reputable user has only limited capabilities to stop a determined individual from harming a thread or abusing the site. Moderators can lock posts to prevent additional changes (both temporarily and permanently); they can contact users behind the scenes; and they can suspend user accounts. The question is heavily downvoted, and I think anybody who has seen it would think twice before putting time into answering additional questions from the OP (assuming he ever comes back to ask all of us idiots another question). That's probably enough of a message. 
That kind of asinine behavior is in violation of our etiquette, so I'd say it is indeed fair game for downvotes or closure (whether or not that's a listed reason - the software guides how we manage the community but shouldn't dictate). You unfortunately caught the brunt of the nastiness, but it looks to me like the system is working basically as planned. Yes, I think you're probably right on all counts, including that the system is basically working. I would like to second what others have said here, which is that Macro and you behaved with admirable maturity in the face of this situation. I do think it's perfectly acceptable to close or delete a question [answer, comment] if someone's behavior is obnoxious. I think this may be sufficient sometimes to chastise someone and get them to follow simple courtesy. (I actually loosely remember an example in which I flagged someone's comment, it was deleted, and the poster was more appropriate thereafter, but I wouldn't be able to find the thread again now.) However, that won't always work. The individual in question seems dedicated to behaving this way (you can check his other question, and his website). Should these posts have been closed, I suspect more nastiness, possibly via alternate accounts registered under pseudonyms would follow. Unfortunately, the nature of the internet is such that this is destined to occur occasionally, and there's no ultimate solution besides walking away and ignoring it. Thus, I think @MattParker is right that the system worked basically as well as can be hoped.
C# How to lock in abstract class with third parties Introduction I have a public abstract class, with an abstract method, which I want to call from a worker thread. When the method is called, the respective instance should be locked down in order to prevent state changes during calculation. I only want to work with the abstract class, as the implementation of the inheritors is done by third parties. public abstract class MyClass { public abstract MyResult GetData(); } The problem My library is used by third parties, I have to assume that they know nothing about the internal implementation of the library. I don't want to force them to study the documentation of my class, before they are able to implement their own inheritor as I consider this bad form. My approach My first idea was to add a protected lock object to the class and lock on it when calling the method. However, in order for this to be useful, the third party would have to lock on it as well, and thus know about it. As I don't want to force the third party to know about the internals, I don't like this option. public abstract class MyClass { protected readonly object myLock = new object(); public MyResult GetData() { MyResult result; lock(myLock) { result = GetDataInternal(); } return result; } protected abstract MyResult GetDataInternal(); } Background I'm working on a data pipeline, which runs on a separate thread. This pipeline requests data in a specific format and processes it in the background. Providing the data can take some time and the provided data relies on properties of the objects. In this case, its a preparation pipeline for 3D models. The question How can I lock a whole object without knowing its implementation? If there is no such way, then is there an agreed upon pattern or something like that for this problem? Just a comment. Forcing people that inherit from your types to read the documentation and follow the rules you've set forth is not bad form, it is absolutely required. 
Granted, you should make it easy for them to fall into the "pit of success" but you simply cannot make a type foolproof if you intend to let them inherit from your type. For all you know they could implement methods returning the negative of what you expect, throwing a big spanner into the works of any code using these descendant types. There is no generalized lock that will "lock out code" from executing without that code also caring about the lock. In other words, if some code accesses the object without locking, and some code with, you can't magically make the first piece of code care about the lock without changing it. You can close down your object so as to only provide thread-safe access, such as only providing snapshots of its internals, but this too requires changes to any 3rd party code that uses your type. You are right, reading the documentation is a must. But here I dislike the requirement, as intuitively most people would fail to recognize the myLock property. I dislike implementations which are (in most cases) only properly used when the documentation was read. One more note - you can make myLock private. It's only used by your public method. No one inheriting the class needs to know it's there. They need to know about it, as right now the only thing it does is prevent multiple simultaneous calls to "GetDataInternal". However, right now it doesn't prevent instance access by other threads.
Consider changing your abstract class to something like: public interface ILockable { void FreezeDataForCalculations(); void ThawAfterCalculations(); } public abstract class MyBaseClass<T> where T:ILockable { public abstract T GetData(); } Usage: public class MyThingie : MyBaseClass<TheActualData> { } public class TheActualData : ILockable { public string Foo {get;set;} public void FreezeDataForCalculations() { ...???...} public void ThawAfterCalculations() { ....???.... } } Now, you have effectively ensured that: whoever wants to implement it has to provide his own type that implements the extra interface; whoever implements that extra interface will notice these two methods, and they will at least think "wtf" and will either understand immediately or consult the documentation; you do no locking for the data, the creator of the class is responsible for it; the implementor can now choose whether to actually implement the freeze/thaw pair, or leave them empty and simply write their own code to not modify the data in the meantime; your code now has to call 'Freeze' and 'Thaw' appropriately, and can assume the implementor did what he was expected to. On the contrary, if you can't assume that he did what he was expected to, then change the API of your library and don't allow user-defined types; restrict the API to only your own types that you can ensure will play nice. This is beautiful and simple. If someone ignores this, then it's clearly their own fault. Looks like a great solution to me. Thank you! I think that this is a step back from the original design. The question was about how to hide the locking behavior. The OP's code did just that. This does the opposite - exposing the need for a lock and making the consumer responsible. There may still be a way to improve on the original but it's accomplishing the intent. Someone inheriting the class can fill in the needed functionality but doesn't even need to know that there's locking going on.
Another way of looking at it - this requires someone to call methods that don't have any apparent relation to anything the class does. Any class, even a string, may need to be protected from changes in state from various threads. I don't recommend creating a pattern where we explicitly "mark" classes that need to be locked, implying that everything else is implicitly thread safe. I wouldn't say it's a step back. My original design would still need the consumer to implement the locking behaviour himself, as otherwise the lock wouldn't do anything useful. The problem was that I had a property which could easily be overlooked, and thus lead to incorrect implementations from the consumer. The solution proposed by quetzalcoatl solves this problem. Unless there's a way to lock the whole object, there's no other way than to make the consumer responsible. @ScottHannen: I would agree with you, but you assume a bit too much. Say, what if this one data structure is the only thread-unsafe thing in the whole process? We don't know. This 'downgraded' piece of code solves a certain problem. This problem was not about enforcing protection. The problem was about making the users of this code (who apparently don't fancy reading the docs) notice that there are concurrency and data mutation problems involved. This piece of code did it in a naive and flashy way, mostly to get attention. I actually don't like this pattern. I am always tempted to roll out some cool and fancy management layer or transparent snapshotting of the data or (..). However, complexity and implementation cost is always a factor. When the point of contact between threaded and nonthreaded areas is simple and tiny, bloating the API with such things is .. disputable at least. Btw. I am a bit surprised by your idea that someone may assume something to be implicitly thread safe.
I probably won't be mistaken much if I say that in .Net, the tradition is that everything is implicitly thread unsafe unless specified otherwise. As for the pattern, take a look at Freezable in WPF for example. It denotes thread-safe things while others stay unsafe. It is a very clear design. Adding such a piece of "synchronization" will not make anyone think that everything else is safe. Just the opposite. (yes, I know that Freezable works in a different way than here. But the idea is similar) You're correct. Everything should be assumed to be non-thread-safe unless otherwise specified. That's the problem with explicitly indicating that a class isn't. That being said, my answer was really just a stab at it based on what I could get from the question. I don't think that I really understand the scenario at all. Perhaps if there was code illustrating how a 3rd party library inherits from the class and then where and how that derived class is used. Without that complete picture I'm really just guessing.
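To make the freeze/thaw contract from the answer above concrete, here is a minimal sketch — in Python rather than C#, purely for illustration; the names mirror the proposed interface and are otherwise hypothetical:

```python
import threading
from contextlib import contextmanager

class LockableData:
    """Data object that exposes the lock protocol itself, so an
    implementor cannot overlook it (mirrors ILockable above)."""
    def __init__(self, payload=None):
        self.payload = payload
        # plain (non-reentrant) lock: nested freezes from one thread would deadlock
        self._lock = threading.Lock()

    def freeze_for_calculations(self):
        self._lock.acquire()

    def thaw_after_calculations(self):
        self._lock.release()

@contextmanager
def frozen(data):
    # The library must call Freeze/Thaw in pairs; a context manager keeps them balanced.
    data.freeze_for_calculations()
    try:
        yield data
    finally:
        data.thaw_after_calculations()
```

The library-side calculation would then run inside `with frozen(data): ...`, and a consumer who leaves the two methods empty has made an explicit, visible choice rather than overlooking a hidden lock property.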
common-pile/stackexchange_filtered
Angular 2 how to compile a template and save the result html Ok, right now I have to create a body for an e-mail. I need to take a template, interpolate it with the data I already have (a js Object) and have the resulting html string in a variable, so I can send this body to a server that will actually send the e-mail. I am not sure about using another template engine, handlebars for instance, to generate this html. Is there a way to use the angular 2 template engine? Keep in mind that I don't need to show it, I want to keep it in memory only. If I can't use it, what should I use? Is there any best-known or recommended approach? Not sure why you need it but here are some ideas: import {Component, NgModule, Input, VERSION, ChangeDetectorRef} from '@angular/core' import {BrowserModule} from '@angular/platform-browser' import {Injectable, ComponentFactoryResolver, ApplicationRef, Injector} from '@angular/core'; @Component({ selector:'my-cmp', template: 'here is {{title}}' }) export class MyComponent { @Input() title: string; } @Component({ selector: 'my-app', template: ` <input #inp> <button (click)="create(inp.value)">Create</button> `, }) export class App { constructor(private componentFactoryResolver: ComponentFactoryResolver, private appRef: ApplicationRef, private injector: Injector) { } create(title) { let factory = this.componentFactoryResolver.resolveComponentFactory(MyComponent); let ref = factory.create(this.injector); ref.instance.title = title; this.appRef.attachView(ref.hostView); console.log(ref.location.nativeElement); this.appRef.detachView(ref.hostView); } } @NgModule({ imports: [ BrowserModule ], declarations: [ App , MyComponent], entryComponents: [MyComponent], bootstrap: [ App ] }) export class AppModule {}
Is Fractional Differencing still important when using a LSTM model? In his seminal book "Advances in Financial Machine Learning", Dr. Marcos Lopez de Prado describes the importance of using fractional differencing to preserve memory while keeping stationary. But in deep neural networks, some models like LSTM, Transformers, and even CNN, can retain memory. In these models, when providing a sufficiently large interval to the network, is it still useful to apply Fractional Differencing? Can I get away with just using price returns and a long interval, say 128 days?
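For readers who want to compare the two preprocessing choices empirically, the fixed-width fractional-differencing weights de Prado describes follow a one-line recursion. This is a minimal sketch, not a substitute for his full implementation; the window size and `d` are things you would tune:

```python
def frac_diff_weights(d, size):
    # Binomial expansion of (1 - B)^d: w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k
    w = [1.0]
    for k in range(1, size):
        w.append(-w[-1] * (d - k + 1) / k)
    return w

def frac_diff(series, d, size):
    # Fixed-width window fractional differencing; w[0] multiplies the newest value.
    w = frac_diff_weights(d, size)
    return [
        sum(wk * x for wk, x in zip(w, series[i - size + 1 : i + 1][::-1]))
        for i in range(size - 1, len(series))
    ]
```

With d = 1 this reduces to plain first differences (returns); for 0 < d < 1 the slowly decaying weights are what preserves long memory while still approximately achieving stationarity — exactly the trade-off the question is about.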
How can the Feynman rules be read off the Lagrangian? I am reading Peskin. In his functional methods chapter he says that (i) "Once the quadratic terms in the Lagrangian are properly understood" and (ii) "The propagators of the theory are computed" then "the vertices can be read directly from the Lagrangian as the coefficients of the cubic and higher order terms." What does this mean? In particular: (1) What does it mean that the quadratic terms are properly understood? How can one improperly understand a quadratic? What does this mean? (2) What does it mean that the vertices can be read directly from the Lagrangian as a coefficient? For example, (2a) how can one determine what the vertex itself looks like? And (2b) in $\phi^4$ theory, the coefficient is $- \lambda/4!$, whilst the Feynman rule for the vertex is $-i\lambda \neq - \lambda/4!$. Keep in mind that it is nearly impossible to explain how perturbative QFT calculations follow from Lagrangians such that the answer is both relatively short and detailed. So I am going to write an introductory answer. If you want more details on any of its parts, you can look up textbooks, or you can let me know in the comments, in which case I will consider updating this answer. Suppose your model has $n$ quantum fields (they can be organized as Poincare multiplets or all be scalars, for what follows it doesn't matter). The generic expression for the quadratic term in the Lagrangian is thus $$ \mathcal{L}_2 = \frac{1}{2} \left( K_{ab} \partial_{\mu} \phi^{a} \partial^{\mu} \phi^{b} - M_{ab} \phi^{a} \phi^{b} \right). $$ (Actually, if some of the fields have spacetime indices, there could be additional terms like $N_{\alpha a} \psi_{\mu}^{\alpha} \partial^{\mu}\phi^{a}$, but they can be treated in the same manner, so we won't lose generality if we just ignore this issue here.)
First we would like to re-express this Lagrangian, using integration by parts (remember that the Lagrangian is integrated over spacetime to give the action of the system describing its dynamics), as follows: $$ \mathcal{L}_2 = \frac{1}{2} \phi^{a} \hat{Q}_{a b} \phi^{b}, $$ where $\hat{Q}$ is the second-order linear differential operator acting on fields. It is called the Euler-Lagrange operator because it generates the classical equations of motion through $$ \hat{Q}_{ab} \phi^{b}_{\text{classical}} = 0. $$ For example, for the multiplet of Klein-Gordon fields it turns out to be $$ \hat{Q}_{ab} = -\left( \delta_{ab} \Box + M_{ab} \right), $$ where $M_{ab}$ is called the mass matrix. The basis in which $M_{ab}$ is diagonal is a proper basis for expressing fields associated to elementary particles, the diagonal values being the masses squared of the elementary particles. The d'Alembert operator is $\Box = \partial_{\mu} \partial^{\mu}$. In the quantum theory we want to calculate the propagator, or the time-ordered product of two field operators: $$ \Delta^{ab} (x, y) = \left< \phi^{a}(x) \phi^{b}(y) \right>. $$ It turns out that the propagator is equal to the Feynman Green's function of the differential operator $\hat{Q}$, which can be derived in the path integral formalism: $$ \hat{Q}_{ab}(x) \Delta^{bc} (x, y) = i \delta_a^c \delta^{(4)} (x - y). $$ This is what is meant by treating the quadratic term in the Lagrangian properly. At this point it is worth mentioning that sometimes the operator $\hat{Q}_{ab}$ is singular, that is, doesn't have an inverse in the class of functions with radiation boundary conditions. This is because of gauge invariance. The simplest case where this shows up is the free Maxwell Lagrangian. The modern way of dealing with this is through the formal manipulation with path integrals called the Faddeev-Popov procedure, which introduces additional terms in the Lagrangian (gauge-fixing term and maybe ghost fields).
The resulting Lagrangian is still applicable to the same physical model (which is guaranteed by the Faddeev-Popov procedure), but its differential operator is not singular and the propagator can be calculated. This propagator turns out to be unphysical and depends on the unphysical gauge fixing parameter, but when used to calculate S-matrix elements between physical states, the dependence on the unphysical parameter disappears and gauge invariance is restored. (In fact, gauge invariance is still present in the modified Lagrangian in the form of BRST supersymmetry. Do not confuse it with SUSY.) Now consider a perturbation of the Lagrangian, i.e. a higher-order term. We deal with such perturbations using, rather unimaginatively, perturbation theory. In the path integral formalism it can be done by Taylor-expanding the exponential of the interaction Lagrangian and making it a part of the correlation functional, keeping the quadratic term as the effective action functional. Then we can apply Wick's theorem (which is only valid for quadratic actions, but hey, that's what is left after we expanded the interaction term) and that would lead us to Feynman rules. This part is usually the same in all theories, and the final Feynman rules can be easily predicted by simply looking at the structure of the interaction term in the Lagrangian. That is what is meant by "reading off Feynman rules". For example, consider a single Klein-Gordon field with a 4-th order interaction term $$ \mathcal{L}_4 = - \frac{\lambda}{4!} \phi^4.
$$ We would like to Taylor-expand it in any expression for the correlation function of any functional $F$: $$ \left< F[\phi] \right> = \int D\phi \exp \left[ i \int (\mathcal{L}_2 + \mathcal{L}_4) \right] F[\phi] = $$ $$ \left< F[\phi] \left( 1 + i \int \mathcal{L}_4 + \frac{i^2}{2} \intop_x \intop_y \mathcal{L}_4 (x) \mathcal{L}_4 (y) + \dots \right) \right>_0, $$ where the subscript $<>_0$ means that we use the free theory action, which is $\int \mathcal{L}_2$, for which Wick's theorem is applicable. Each integral in the series above then corresponds to the addition of an interaction vertex to the Feynman diagram. The expression for the vertex is easy to deduce: it is equal to $$ - \frac{i \lambda}{4!} \int d^4 x, $$ with the integral being over the position of the vertex. The factor of $4!$ also appears in the numerator because we have exactly $4!$ ways of contracting 4 operators at the same point with 4 other operators by propagators. Thus the factors nicely cancel out (in fact, it was the reason for choosing $\lambda$ such that $4!$ enters the denominator of $\mathcal{L}_4$ in the first place). So, we could either keep $4!$ in the vertex expression and consider the $4!$ different contractions which appear after using Wick's theorem inequivalent, or we can consider them equivalent and cancel the factors of $4!$, which is what is usually done in the literature. I hope this answers your question.
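As a concrete illustration of the "propagator = Green's function" statement for a single Klein-Gordon field, Fourier transforming the defining equation gives the familiar momentum-space Feynman rules (quoted here in Peskin's conventions; the overall signs depend on the normalization chosen for $\hat{Q}$):

```latex
(\Box + m^2)\,\Delta_F(x-y) = -\,i\,\delta^{(4)}(x-y)
\quad\Longrightarrow\quad
\tilde{\Delta}_F(p) = \frac{i}{p^2 - m^2 + i\epsilon}\,,
\qquad
\text{vertex: } -\,i\lambda\,(2\pi)^4\,\delta^{(4)}\!\Big(\sum_i p_i\Big)
```

which is exactly the $-i\lambda$ quoted in point (2b) of the question: the $4!$ has been absorbed into the contraction combinatorics, and the position-space integral $\int d^4 x$ over the vertex has become the momentum-conserving delta function.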
What is the cause of "Trust anchor for certification path not found." This may be a duplicate of this thread (Anchor not found), but apparently I can't make it run. I have very limited knowledge of Android Studio and this is my first time developing an application using Android Studio. Can anyone here lead me to the right track to solve this problem? I have attached my logcat below.
Same seed in two different editors gives me different results (Pycharm and Jupyter Notebook) I have the following code: import json import pandas as pd import numpy as np import random pd.set_option('expand_frame_repr', False) # To view all the variables in the console # read data records = [] with open('./data/data_file.txt', 'r') as file: for line in file: record = json.loads(line) records.append(record) # construct list of ids ids = set() for record in records: for w in record['A']: ids.add(w['NAME']) random.seed(1234); sampled_ids = random.sample(ids,50) When I run this code one time in Pycharm IDE and then immediately after in a Jupyter Notebook - I get different ids sampled in each one. What's going on? P.S. I used the semicolon on the last line because I found out that if I try to set the seed on one line and then sample on the next line - even in the same IDE I get different results each run. This is truly mysterious to me. I use Python 3.7 @Carcigenicate it's the same version, in the same virtual env (which I created for this project) @Carcigenicate I checked again - they are both set to use 3.7.3 The cause of such behaviour lies in set. A set is constructed from objects based on their hash values (the elements of a set must be hashable, i.e. must have a __hash__ method), and hash values differ when starting another console. (Not always, but that's another story.) For example, here are results from two consoles in the same IDE: 1/A: arr1 = set('skevboa;gj[pvemoeprnjpdbr ]p') random.seed(1234) random.sample(arr1, 3) Out[47]: ['p', 'k', ']'] random.seed(1234) random.sample(arr1, 3) Out[48]: ['p', 'k', ']'] hash('s') Out[49]:<PHONE_NUMBER>552045688 2/A: arr1 = set('skevboa;gj[pvemoeprnjpdbr ]p') random.seed(1234) random.sample(arr1, 3) Out[29]: [';', 'a', 'b'] random.seed(1234) random.sample(arr1, 3) Out[30]: [';', 'a', 'b'] hash('s') Out[31]: -2409441490032867064 Knowing the source of the problem you can choose a method to solve the issue.
For example, using sorted: 1/A: random.seed(1234) random.sample(sorted(arr1), 3) Out[50]: ['p', ']', ' '] 2/A: random.seed(1234) random.sample(sorted(arr1), 3) Out[32]: ['p', ']', ' '] Thanks! That was indeed the problem! Can you explain why sorted() fixes it? I didn't quite understand. @Corel random.sample returns k (3 in your case) elements based on randomly generated indices. By initializing random.seed you fix the index generation. Since sorted returns a sorted list, the same values are always returned. The set is implicitly turned into a tuple inside sample, and since a set is unordered, the returned values are the values at the same indices but according to the current order of that tuple.
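A self-contained way to see the fix (a sketch; note that in recent Python versions, 3.11 and later, random.sample rejects sets outright, so sorting the set first is needed there anyway):

```python
import random

ids = set("skevboa;gj[pvemoeprnjpdbr ]p")  # same sample set as in the answer

def pick(seed, k=3):
    # sorted() gives a list whose order no longer depends on hash values,
    # so the same seed yields the same sample in every interpreter session
    random.seed(seed)
    return random.sample(sorted(ids), k)

a = pick(1234)
b = pick(1234)
```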
How to get the validity date range of a price from individual daily prices in SQL I have some prices for the month of January. Date,Price 1,100 2,100 3,115 4,120 5,120 6,100 7,100 8,120 9,120 10,120 Now, the o/p I need is a non-overlapping date range for each price. price,from,To 100,1,2 115,3,3 120,4,5 100,6,7 120,8,10 I need to do this using SQL only. For now, if I simply group by and take min and max dates, I get the below, which is an overlapping range: price,from,to 100,1,7 115,3,3 120,4,10 I removed the inconsistent database tags from the question. Please tag only with the database you are really using. "I need to do this using SQL only" Every RDBMS uses a different dialect of SQL, there is no "SQL only" solution, as it could well be specific to the RDBMS you are using. What have you tried so far as well, and why didn't it work? This is a gaps-and-islands problem. The simplest solution is the difference of row numbers: select price, min(date), max(date) from (select t.*, row_number() over (order by date) as seqnum, row_number() over (partition by price order by date) as seqnum2 from t ) t group by price, (seqnum - seqnum2) order by min(date); Why this works is a little hard to explain. But if you look at the results of the subquery, you will see how the adjacent rows are identified by the difference in the two values. Thank you Gordon. The problem got solved. Understood the logic on how it worked.
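One way to see why the difference of row numbers works is to trace it outside the database. This is a sketch in Python over the sample data, purely for illustration of the trick, not part of the SQL solution:

```python
dates  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
prices = [100, 100, 115, 120, 120, 100, 100, 120, 120, 120]

rows = sorted(zip(dates, prices))
seqnum = {d: i for i, (d, _) in enumerate(rows, start=1)}  # row_number() over (order by date)

seqnum2, seen = {}, {}                                     # row_number() per price
for d, p in rows:
    seen[p] = seen.get(p, 0) + 1
    seqnum2[d] = seen[p]

groups = {}
for d, p in rows:
    # within one island both counters advance together, so their difference
    # is constant; it jumps whenever the price changes
    groups.setdefault((p, seqnum[d] - seqnum2[d]), []).append(d)

islands = sorted(
    ((p, min(ds), max(ds)) for (p, _), ds in groups.items()),
    key=lambda t: t[1],
)
```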
SELECT Lag.price,Lag.[date] AS [From], MIN(Lead.[date]-Lag.[date])+Lag.[date] AS [to] FROM ( SELECT [date],[Price] FROM ( SELECT [date],[Price],LAG(Price) OVER (ORDER BY DATE,Price) AS LagID FROM #table1 A )B WHERE CASE WHEN Price <> ISNULL(LagID,1) THEN 1 ELSE 0 END = 1 )Lag JOIN ( SELECT [date],[Price] FROM ( SELECT [date],Price,LEAD(Price) OVER (ORDER BY DATE,Price) AS LeadID FROM [#table1] A )B WHERE CASE WHEN Price <> ISNULL(LeadID,1) THEN 1 ELSE 0 END = 1 )Lead ON Lag.[Price] = Lead.[Price] WHERE Lead.[date]-Lag.[date] >= 0 GROUP BY Lag.[date],Lag.[price] ORDER BY Lag.[date] Another method using ROWS UNBOUNDED PRECEDING SELECT price, MIN([date]) AS [from], [end_date] AS [To] FROM ( SELECT *, MIN([abc]) OVER (ORDER BY DATE DESC ROWS UNBOUNDED PRECEDING ) end_date FROM ( SELECT *, CASE WHEN price = next_price THEN NULL ELSE DATE END AS abc FROM ( SELECT a.* , b.[date] AS next_date, b.price AS next_price FROM #table1 a LEFT JOIN #table1 b ON a.[date] = b.[date]-1 )AA )BB )CC GROUP BY price, end_date
Trying to run jitsi/web image on EKS but facing the "failed to pull and unpack image" issue Although the jitsi/web image is a public image and I was able to run the nginx image, I was expecting to run the jitsi/web image similarly, but faced the above issue. There is no latest tag for jitsi/web. You can use the stable tag.
How to Limit YouTube OAuth Partner Scope to One CMS I'm building a server-side web app for OAuth that requires access to a YouTube CMS via the YouTube Partner scope. If that person has access to more than one CMS, the token generated by the OAuth process gives us access to every CMS that they have access to. I'd like to limit the token's access to a single CMS that the user chooses, similar to them choosing a channel if they have multiple channels. But there does not appear to be a way to have Google's OAuth screen ask for a single CMS. We always end up with a token that gives us access to every CMS they have access to, which is more of a liability than I want. Is it possible to either alter or influence the OAuth process so Google will ask the user which CMS they want to grant us access to? It seems the only levers that can be pulled in influencing the behavior of the OAuth process are the authorization parameters. Our access type is offline, granular consent is enabled, and prompt is set to "consent." Virtually every combination of parameters has been tried, and it doesn't affect the selection screen for granting access. It is generally not possible to ask for a specific resource through the Google OAuth consent page, so if you want to add further restrictions beyond what the OAuth scope you selected offers, you will have to do the check on your own by calling the YouTube API and maintaining some ACL on your end.
iPhone scroll view / dynamic text in label then table view and images What is the right way to place a table view after a label that is populated dynamically (so its height can't be guessed), when they are all placed in a scroll view? Thanks The UITableView has a header title.. so why don't you just use a table view instead of a scroll view and set its header title dynamically? Thanks for the reply, but what if there is a photo after the table, or I even need to place another table view?
Configuring SpatiaLite database access for Python I'm working on a GIS project, and I would like to implement and test some geo-spatial algorithms in Python. For this purpose, I will not only need SQLite, but also SpatiaLite, in order to store and query the location data. Now I've tried to install the pyspatialite package, but no matter what Python version I tried (I tried all versions from 2.6 to 3.3), pip keeps insisting that none of the existing pyspatialite packages are compatible with my version of Python, which is 2.6.6. If I try to do this using easy_install, I get a traceback and an error: AttributeError: MSVCCompiler instance has no attribute 'compiler' And that also occurs if I try to install the package manually, by executing the setup.py file. From what I've already searched, some people suggest connecting to a SpatiaLite database using SQLite and loading an extension, but frankly I have no idea how to do it, and couldn't understand any of these answers. Are you able to propose a solution in a clear, step-by-step way? I'm not a very experienced Python programmer yet. Another attempt, this time with Python 3.3.5. The following code: import sqlite3 conn = sqlite3.connect(":memory:") conn.enable_load_extension(True) conn.execute('SELECT load_extension("libspatialite-2.dll")') yields: Traceback (most recent call last): File "<pyshell#10>", line 1, in <module> conn.execute('SELECT load_extension("libspatialite-2.dll")') sqlite3.OperationalError: %1 is not a valid Win32 application. And again, I don't seem able to resolve this on my own. The error above was due to something else, I reinstalled Python and pysqlite, and we are back with the old error. There are two options now: I import using from pysqlite2 import dbapi2 as sqlite3.
In this case the code is the following: from pysqlite2 import dbapi2 as sqlite3 conn = sqlite3.connect(":memory:") conn.enable_load_extension(True) conn.execute('SELECT load_extension("DLLs\libspatialite-4.dll")') curs = conn.cursor() In that case, the error is: Traceback (most recent call last): File "C:\Users\mszydlowski\Desktop\Project\sqlite.py", line 3, in <module> conn.enable_load_extension(True) AttributeError: 'pysqlite2.dbapi2.Connection' object has no attribute 'enable_load_extension' I import using import sqlite3. In this case the code is the following: import sqlite3 conn = sqlite3.connect(":memory:") conn.enable_load_extension(True) conn.execute('SELECT load_extension("DLLs\libspatialite-4.dll")') curs = conn.cursor() And the error: Traceback (most recent call last): File "C:\Users\mszydlowski\Desktop\Project\sqlite.py", line 4, in <module> conn.execute('SELECT load_extension("DLLs\\libspatialite-4.dll")') OperationalError: The specified module could not be found. even though the file is certainly in there, and I did use the double backslashes, like I was already advised. You need a Unix-like C compiler and the sources of SQLite to build and install the dependency libraries (AttributeError: MSVCCompiler instance has no attribute 'compiler') and Windows doesn't ship native compilers like Linux or Mac OS X do, but you can try; look at Build pyspatialite on Windows, or Installation of Pyspatialite on Windows, for example. You don't need Pyspatialite to connect to Spatialite via Python. You can use the latest version of Pysqlite, look at special python library needed for spatialite? or even the sqlite3 standard module of Python. You can download a Pysqlite version for Windows at Christoph Gohlke's Unofficial Windows Binaries for Python Extension Packages The version of sqlite3.dll included with Python doesn't seem to want to play nice with Spatialite.
The only thing I could get to work (short of compiling everything from source) was: Download SQLite (or cyqlite - a recompile of SQLite for Windows with some handy features enabled, such as R-Tree so you can do spatial indexes) i.e. sqlite-dll-win32-x86-[version].zip Download mod_spatialite (Windows binaries are in the pink box at the bottom of the page) i.e. mod_spatialite-[version]-win-x86.7z Unzip first SQLite/cyqlite then mod_spatialite into the same folder (overwrite if there are any conflicts) Add this folder to your system Path Rename the sqlite3.dll that is in your Python DLLs directory, so that Python will use the new one on your path See this blog post for more info.
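Once the DLLs are on the PATH as described above, the Python side reduces to loading the extension on a plain sqlite3 connection. This is a sketch; the extension name and whether your interpreter was built with loadable-extension support are the two things that vary by platform:

```python
import sqlite3

def open_spatialite(db_path, ext="mod_spatialite"):
    # Assumes the mod_spatialite DLL/shared library is on the system PATH.
    conn = sqlite3.connect(db_path)
    try:
        conn.enable_load_extension(True)
    except AttributeError:
        conn.close()
        return None  # this Python was built without loadable-extension support
    try:
        conn.load_extension(ext)
    except sqlite3.Error:
        conn.close()
        return None  # extension not found on PATH (or loading not authorized)
    conn.enable_load_extension(False)
    return conn
```

On success you can then run e.g. `conn.execute("SELECT spatialite_version()")`; a None return tells you whether the interpreter build or the PATH is the problem — the two failure modes seen in the question.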
Codeigniter pagination links cannot link to a page number I am a beginner in using Codeigniter pagination. I don't understand what I did wrong in the configuration of the Codeigniter pagination. When I inspect the elements inside the page number and click the href of my Codeigniter pagination directly, it successfully works. But when I wrapped it with an <li> tag it doesn't link. Here is my config for the codeigniter pagination: $config = array(); $config['base_url'] = ''.base_url().'hire-workers/'; $config['total_rows'] = $count_all->num_rows(); $config['per_page'] = 5; $config['uri_segment'] = 2; $config['full_tag_open'] = '<ul class="pagination">'; $config['full_tag_close'] = '</ul>'; $config['next_link'] = 'Next'; $config['next_tag_open'] = '<li class="next page">'; $config['next_tag_close'] = '</li>'; $config['prev_link'] = ' Previous'; $config['prev_tag_open'] = '<li class="prev page">'; $config['prev_tag_close'] = '</li>'; $config['cur_tag_open'] = '<li class="active"><a href="">'; $config['cur_tag_close'] = '</a></li>'; $config['num_tag_open'] = '<li class="page">'; $config['num_tag_close'] = '</li>'; $this->pagination->initialize($config); I tried removing the: $config['num_tag_open'] = '<li class="page">'; $config['num_tag_close'] = '</li>'; and my link works, but my view was destroyed. I don't really know why, since I am only a beginner in using Codeigniter. It looks like it's a styling issue @sauhardnc I already figured it out. It is a script issue
What do the 4 buttons in the new navigation do? I am not a tester for the new navigation. I applied to be a tester about two weeks ago but I am still not one. I am curious about what the 4 buttons do. You can see them in the screenshot. What do the 4 buttons do? I am not a tester for the new navigation. I applied to be a tester about two weeks ago but I am still not one. I am curious about what the 4 buttons mean. So I asked here. :) Thanks. I have edited my question. Well, first I'll talk about the two top buttons. When you hover over them, it tells you what they do. The top left button says "Expanded layout toggle". When it is selected, you will see an expanded version of each question, meaning the title, beginning of question, tags, minutes/hours ago asked, and user profile with picture and reputation and badges are shown. Also, the votes and views are in an up and down fashion. The top right button says "Collapsed layout toggle" when it is hovered over. When it is selected, only the question title, tags, time ago asked, and user profile with rep are shown. The votes and views are shown in a side by side fashion. The bottom buttons are pretty self-explanatory based on what they show when they are hovered over. They will only be shown if two or more tags are being looked at. The bottom left button, labelled "any", says "Show questions with any of the tags". This means that it will show questions that have at least one of the tags specified, in your situation your favorite tags. The bottom right button, labelled "all", says "Show questions with all the tags" (this should be "all of the tags") so when it is selected, it will show questions that have all of the tags being looked for.
Simulation of Brownian Motion If I want to simulate Brownian motion in the Euclidean space I can simulate it by a point that moves a distance $\epsilon$ in an arbitrary direction, then randomly chooses a new direction and moves a distance $\epsilon$ again, and so on. The smaller the $\epsilon$ the closer the simulation will be to the real Brownian motion. How can I simulate Brownian motion in the hyperbolic space (Poincare Disk model for instance)? Does the same work here where I replace the Euclidean distance by the hyperbolic distance? My intuition is yes, but when I did the simulation the random walk does not seem to be transient, and it should be! The same method should work. It's a little tricky to find the points which are distance $\epsilon$ away from a given point in the disk model. One way is to translate a circle from the center. Yes, that's correct. And it is transient, but how can you tell this from running a simulation for finite time? Maybe you need to run it longer? I do not have the reputation to comment but shouldn't the steps be Gaussian random variables of zero mean and variance $\epsilon$ instead of steps of constant length? Does it make a difference? It makes no difference in the limit as the step size goes to zero (and the number of steps goes to infinity). @user9126: please do not use answers to comment. This can be generalized quite a bit. See, for instance, Ming Liao, Lévy Processes in Lie Groups, Cambridge University Press, Cambridge Tracts in Mathematics, 162. Hyperbolic spaces have nice presentations as $SO^+(1,n)/SO(n)$ so can be handled with essentially the same machinery.
Taking the Poincaré disk model for hyperbolic space as being formed by all complex numbers with modulus smaller than 1, the distance between two points can be calculated as $$d(z_1,z_2)=\tanh^{-1}\left|\frac{z_1-z_2}{1-z_1\bar{z_2}}\right| \; .$$ The set of points at equal distance $\varepsilon=\tanh(d(z,z_0))$ from a fixed point $z_0$ can be shown to be represented by a circle in the Poincaré disk model with center $$\frac{(1-\varepsilon^2)z_0}{1-\varepsilon^2|z_0|^2}$$ and radius $$\frac{\varepsilon(1-|z_0|^2)}{1-\varepsilon^2|z_0|^2} \; .$$ But this is not the end of the story, you can't just pick points uniformly on this circle. What you want is to pick points such that you can arrive from $z_0$ to them such that the direction of departure from $z_0$ was chosen uniformly. This will induce a particular distribution on that circle. To figure out this distribution, it is necessary to compute the intersection of an h-line* through $z_0$ for an arbitrary direction and the circle we just computed. From that, we can get the sought distribution. I'll update this later if I get to computing that part. EDIT: Following the suggestion by @Douglas Zare, this can also be achieved by picking a point at a distance $\varepsilon$ from the origin uniformly, where the deformation of the Poincaré disk model won't be a problem since h-lines passing through the origin are all just diameters. All we have to do then is translate the point with the translation that takes the origin to the point $z_0$ which can be done with a Möbius transformation $$z \mapsto w=\frac{z+z_0}{\bar{z}_0z+1} \; .$$ This is also easily implemented computationally. 
Here's how I coded it in R: # Brownian motion on Poincaré disk (no comments about crappy code plz kthx) epsilon=0.01; z<-0; path<-c(z); for (t in 1:10000) { jitter=runif(1,0,1); dz=epsilon*complex(1,cos(2*pi*jitter),sin(2*pi*jitter)); z<-(z+dz)/(Conj(z)*dz+1); path<-c(path,z); } plot(Re(path),Im(path),type="l",col="blue",asp=1,xlim=c(-1,1),ylim=c(-1,1)); curve(sqrt(1-x^2),-1,1,col="red",add=TRUE); curve(-sqrt(1-x^2),-1,1,col="red",add=TRUE); And here is a picture of a run of the code *By h-line, I mean a line in the hyperbolic plane, which will be either a diameter or a circular arc orthogonal to the boundary of the Poincaré disk. Rather than choosing from the appropriate distribution, wouldn't it be better to choose a point on a circle of distance $\epsilon$ around the origin (which can conveniently be done uniformly) and then 'translate' the chosen point by $z_0$? (As Douglas Zare suggests in his comment) Sure, but where's the fun in that? Mind you, I'll edit my post to reflect that. IIRC, there is a discussion of Brownian motion in Feller (An Introduction To Probability Theory and its Applications). Is it a discussion about Brownian motion in non-Euclidean spaces? Euclidean only, I believe, but it's been 30 years since I read it. I think that the start will be to define what Brownian motion is on a certain subset of $\mathbb{R}^3$ in this case. The key property of Brownian motion $B_t$ is that it has independent increments, i.e. $B_t-B_s$ is independent of $B_s$. Hence one way of simulating Brownian motion is by summing up independent random variables. For two-dimensional Brownian motion this is achieved by generating $n$ random vectors $X_i$ and then taking $$\xi_t=\frac{1}{\sqrt{n}}\left(\sum_{i=1}^{[nt]}X_i+(nt-[nt])X_{[nt]+1}\right),$$ where $[\cdot]$ is the whole part of a real number. The functional central limit theorem ensures that $\xi_t$ converges to $B_t$ as $n$ goes to infinity, so take large $n$ and you will get a simulation of $B_t$.
It is also possible to take only the partial sums $\sum_{i=1}^{[nt]}X_i$; the end result will be the same. Now, to apply this approach to a certain space, you need, as I said before, a definition of Brownian motion in said space and the corresponding functional central limit theorem. My own impression when writing my thesis (which was about multi-parameter Wiener processes) was that functional central limit theorems go hand in hand with Brownian motion, i.e. if you can define Brownian motion, you can prove the FCLT. Before delving into the literature, though, I would advise finding a way to generate a random sample from the space you intend to work in. This might be the most difficult part of the simulation.
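For what it's worth, the Möbius-translation step described in the edit above can also be sketched and sanity-checked in Python (the function names here are my own; this mirrors the idea of the R code rather than reproducing it):

```python
import cmath
import random

def mobius_translate(z, z0):
    """Mobius transformation taking the origin to z0 (a disk isometry)."""
    return (z + z0) / (z0.conjugate() * z + 1)

def brownian_step(z, epsilon):
    """One step: pick a uniform direction at the origin, step epsilon,
    then translate by the map taking the origin to the current point z."""
    theta = random.uniform(0, 2 * cmath.pi)
    dz = epsilon * cmath.exp(1j * theta)
    return mobius_translate(dz, z)

z0 = 0.3 + 0.4j
# the translation really does send the origin to z0
assert abs(mobius_translate(0, z0) - z0) < 1e-12

# the walk stays inside the unit disk, since each step is a disk isometry
z = 0j
for _ in range(1000):
    z = brownian_step(z, 0.01)
assert abs(z) < 1
```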
How can I remap Alt+Space to Alt+Shift in AutoHotkey? I want to remap Alt+Space to Alt+Shift -- Left Alt+Right Alt+Space to Left Alt+Left Shift. How can I do this in AutoHotkey?

I don't have a good way to test this, but maybe something like:

!Space::!+
<!>!Space::<!<+

It does not work. What application are you using to test this?
Multiple tags separated by commas don't work in textarea fields

I have an issue using Tag with Channel Form on the front end. I have Tag v4.2.8, it's set up in the CP, and the code is:

{exp:channel:form channel="spots" class="form-horizontal" return="spot/ENTRY_ID/URL_TITLE"}
  <div class="form-group">
    <label class="control-label col-xs-1">Titolo</label>
    <div class="col-xs-11"><input class="form-control" placeholder="Inserisci il titolo della tua domanda" name="title" type="text"></div>
  </div>
  <div class="form-group">
    {field:spot_desc}
  </div>
  <div class="form-group">
    <label class="control-label col-xs-1">Tags</label>
    <div class="col-md-11">
      <textarea class="form-control" rows="1" name="tags"></textarea>
      <p class="help-block">Separa ogni tag con la virgola</p>
    </div>
  </div>
  <div class="form-group">
    <input type="submit" value="Submit">
  </div>
{/exp:channel:form}

But when I try to submit the tags "example, taga, commas", the result is only one tag containing the commas: <a href="">example, taga, commas</a>. Where am I going wrong?

You could always just use the {field:tag_fieldtype_short_name} tag, which Solspace recommends in their docs. If that doesn't work you can always look at the page source after it parses and see the way Solspace requires the field. Docs for using the Tag module in channel form.

As mentioned in another answer, you would need to use the Tag Widget, i.e. {field:my_tag_custom_field_name}, to add tags from a Channel Form in which you're creating a new channel entry. What happens in the background is that tags are added to a textarea field with name="my_tag_custom_field_name", with each tag on a separate line (newline-separated). If you have a regular textarea field with your Tag field's name in the name="" attribute, you can also enter new tags as long as each tag is on its own line. It happens to be a coincidence that this works, since Channel Form should use the Tag Widget.
Alternatively, you can use {exp:tag:form} to add tags to a pre-existing entry: http://www.solspace.com/docs/tag/form/. In this case, you would have to create an entry first, then add tags to the newly created entry.
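Since the field expects newline-separated values, one client-side workaround (a sketch of my own, not something from the Solspace docs) is to convert the comma-separated textarea value to newlines before the form submits:

```javascript
// Convert a comma-separated tag string into the newline-separated
// format the Tag field's textarea expects.
function commasToNewlines(value) {
  return value
    .split(",")
    .map(function (tag) { return tag.trim(); })
    .filter(function (tag) { return tag.length > 0; })
    .join("\n");
}

// Hypothetical hookup on submit (the selectors here are assumptions):
// document.querySelector("form").addEventListener("submit", function () {
//   var field = document.querySelector("textarea[name='tags']");
//   field.value = commasToNewlines(field.value);
// });

console.log(commasToNewlines("example, taga, commas")); // "example\ntaga\ncommas"
```

This keeps the comma-friendly UI for users while sending the format the module actually parses.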
$2^{\mathrm{nd}}$ order nonlinear ODE: $4y''\sqrt{y}=1, y(0)=1, y'(0)=1$

I am solving the second order nonlinear equation in the title. My solution is: $$ \frac{4}{3}(y^{1/2}+c)^{3/2}-4c(y^{1/2}+c)^{1/2}+a=x $$ where $c$ and $a$ are constants that come from integration. However, I cannot see how to find these integration constants when I apply $y(0)=1$ and $y'(0)=1$. Any help is appreciated!

$$ 4y''\sqrt{y}=1\quad\Longrightarrow\quad 4y''y'=\frac{y'}{\sqrt{y}}\quad\Longrightarrow\quad 2\big(y'\big)^2=2\big(\sqrt{y}\big)+c, $$ for some constant $c$, which is equal to zero due to the initial data. Hence $$ \big(y'\big)^2=\sqrt{y}\quad\Longrightarrow^*\quad y'=y^{1/4} \quad\Longrightarrow\quad y^{-1/4}y'=1 \quad\Longrightarrow\quad\frac{y^{3/4}}{3/4}=t+c'. $$ *$y'=y^{1/4}$ and not $y'=-y^{1/4}$, due to the initial data. Also, $$ c'=\frac{4}{3}, $$ again due to the initial condition, and hence $$ y(t)=\left(\frac{3t}{4}+1\right)^{4/3}. $$

Very nice solution! It's just that I was solving by substituting $y'=a$ and arrived at that gigantic answer. I was wondering if I could somehow find the constants from my answer, but your solution is much more rational, thanks.

$2(y')^2=2(y)^{1/2}+c$: how do you know that $c$ here is 0? It may be a silly question, but I don't see how to get this from the initial conditions. (Plugging the initial data into that relation at $t=0$ gives $2\cdot 1^2 = 2\sqrt{1}+c$, i.e. $2=2+c$, so $c=0$.)
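The closed-form solution is easy to sanity-check numerically; here is a quick sketch using the exact derivatives of $y(t)=(3t/4+1)^{4/3}$:

```python
# Verify y(t) = (3t/4 + 1)^(4/3) satisfies 4*y''*sqrt(y) = 1
# with y(0) = 1 and y'(0) = 1.
def y(t):
    return (3 * t / 4 + 1) ** (4 / 3)

def y_prime(t):
    # d/dt (3t/4 + 1)^(4/3) = (3t/4 + 1)^(1/3)
    return (3 * t / 4 + 1) ** (1 / 3)

def y_double_prime(t):
    # d/dt (3t/4 + 1)^(1/3) = (1/4) * (3t/4 + 1)^(-2/3)
    return 0.25 * (3 * t / 4 + 1) ** (-2 / 3)

assert abs(y(0) - 1) < 1e-12            # y(0) = 1
assert abs(y_prime(0) - 1) < 1e-12      # y'(0) = 1
for t in [0.0, 0.5, 1.0, 5.0]:
    # 4 * y'' * sqrt(y) = (3t/4+1)^(-2/3) * (3t/4+1)^(2/3) = 1
    assert abs(4 * y_double_prime(t) * y(t) ** 0.5 - 1) < 1e-9
```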
How to convert only one (out of several) numeric columns to one decimal point

I have a dataframe with 3 columns. The column 'machinery' represents word counts, 'cum_sum' is a rolling total count of the column titled 'machinery', and 'cum_pct' is the rolling cumulative percent total. I have tried pd.options.display.float_format = '{:,.1f}'.format but this shows 1 decimal place for all columns. How do I show the column 'cum_pct' to one decimal place without changing the decimal places of the other 2 columns?

Please tag a language so the right people can find your question.

Solution:

# Generate a rolling total of the column 'machinery' in the data frame 'tokens_test' as above:
tokens_test['cum_sum'] = tokens_test['machinery'].cumsum()
tokens_test['cum_pct'] = 100.0 * tokens_test['cum_sum'] / tokens_test['machinery'].sum()

# Apply formatting to just the one column:
cols = ['cum_pct']
tokens_test[cols] = tokens_test[cols].applymap('{:,.1f}'.format)
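Note that applying a format string this way converts the column's floats to strings, so do it only for display. The format spec itself can be illustrated without pandas (plain Python):

```python
# '{:,.1f}' rounds to one decimal place and inserts thousands separators.
values = [0.46, 12.34, 1234.56]
formatted = ['{:,.1f}'.format(v) for v in values]
print(formatted)  # ['0.5', '12.3', '1,234.6']
```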
XSD XML Analysing the xs:sequence

The xs:sequence says that the elements should be in sequence. Suppose I have the XSD shown below.

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="personinfo">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="firstname" type="xs:string"/>
        <xs:element name="country" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

Which of the two XMLs below would be correct?

<?xml version="1.0" encoding="UTF-8"?>
<personinfo>
  <firstname>Abc</firstname>
  <firstname>Xyz</firstname>
  <country>CountryOfAbc</country>
  <country>CountryOfXyz</country>
</personinfo>

or

<?xml version="1.0" encoding="UTF-8"?>
<personinfo>
  <firstname>Abc</firstname>
  <country>CountryOfAbc</country>
  <firstname>Xyz</firstname>
  <country>CountryOfXyz</country>
</personinfo>

This is a reasonable question, but what it lacks is any indication that you tried to validate your two sample XML documents against your schema (e.g. using the online validator to which I link in my answer) and the finer point(s) underlying your doubt about what is valid according to your schema. Regardless, you should have all you need across the answers offered.

Neither. What would conform with the schema is...

<?xml version="1.0" encoding="UTF-8"?>
<personinfo>
  <firstname>Abc</firstname>
  <country>CountryOfAbc</country>
</personinfo>

...or:

<?xml version="1.0" encoding="UTF-8"?>
<personinfo>
  <firstname>Xyz</firstname>
  <country>CountryOfXyz</country>
</personinfo>

You can validate an XML document against your XSD schema using an online XML validator if you don't have one locally. If what you really want is 1 or more personinfo elements in succession - for example...
<?xml version="1.0" encoding="UTF-8"?>
<people>
  <personinfo>
    <firstname>Abc</firstname>
    <country>CountryOfAbc</country>
  </personinfo>
  <personinfo>
    <firstname>Xyz</firstname>
    <country>CountryOfXyz</country>
  </personinfo>
</people>

...try a schema like this instead:

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="people">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="personinfo" minOccurs="1" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="firstname" type="xs:string"/>
              <xs:element name="country" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

Otherwise see @kjhughes' answer for schemas that describe the two sample documents you offered in your question.

Neither of those two XML document instances would be valid against that XSD. This XML document instance would be valid for your XSD:

<personinfo>
  <firstname>Abc</firstname>
  <country>CountryOfAbc</country>
</personinfo>

Or, you could adapt the XSD to make your given two XML document instances valid via the use of the maxOccurs occurrence constraint.
For your first example: <xs:element name="personinfo"> <xs:complexType> <xs:sequence> <xs:element name="firstname" type="xs:string" maxOccurs="2"/> <xs:element name="country" type="xs:string" maxOccurs="2"/> </xs:sequence> </xs:complexType> </xs:element> For your second example: <xs:element name="personinfo"> <xs:complexType> <xs:sequence maxOccurs="2"> <xs:element name="firstname" type="xs:string"/> <xs:element name="country" type="xs:string"/> </xs:sequence> </xs:complexType> </xs:element> Personally, I prefer it this way: <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="personinfo"> <xs:complexType> <xs:sequence> <xs:element name="record" maxOccurs="unbounded"> <xs:complexType> <xs:sequence> <xs:element name="firstname" type="xs:string" /> <xs:element name="country" type="xs:string" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> Thus the XML will be: <personinfo> <record> <firstname>Abc</firstname> <country>CountryOfAbc</country> </record> <record> <firstname>Xyz</firstname> <country>CountryOfXyz</country> </record> </personinfo> I stand corrected. Adding maxOccurs="unbounded" attribute to record element. This is another option centered around what I think (per my answer) @cprasad may really want based on experience - and given multiple firstname and country elements in his example XML documents. Note that I corrected the quotes around some of the attribute values in your schema; maybe a tool in the copy/paste chain to get the schema into your answer mucked with them; but it was clear what you meant and easier I thought to just clean them up than ping you with another comment to do so. Thanks for making the quotes point. Yes, it came from a series of copy-and-paste and find-and-replace.
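To see what xs:sequence is constraining, it can help to inspect the child-element order of an instance document directly. A stdlib-only sketch (note: this only checks the order, it is not real XSD validation):

```python
import xml.etree.ElementTree as ET

doc = """<personinfo>
  <firstname>Abc</firstname>
  <country>CountryOfAbc</country>
</personinfo>"""

root = ET.fromstring(doc)
children = [child.tag for child in root]
print(children)  # ['firstname', 'country']

# With the default minOccurs/maxOccurs of 1, the schema's xs:sequence
# allows exactly one firstname followed by exactly one country:
assert children == ['firstname', 'country']
```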
how to solve the frame ID setting warning?

Do you have any idea how to correct the following warnings?

[WARN] [1340092854.777455509]: Message from [/hokuyo] has a non-fully-qualified frame_id [laser]. Resolved locally to [/laser]. This is will likely not work in multi-robot systems. This message will only print once.
[WARN] [1340092854.968655384]: Message from [/hector_mapping] has a non-fully-qualified frame_id [laser]. Resolved locally to [/laser]. This is will likely not work in multi-robot systems. This message will only print once.
[ERROR] [1340123076.274228430]: Trajectory Server: Transform from /map to /base_link failed: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map]
[ERROR] [1340123076.523439543]: Trajectory Server: Transform from /map to /base_link failed: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map]

Originally posted by jas on ROS Answers with karma: 33 on 2012-06-19. Post score: 0

Set the frame_id to /laser and the warnings will go away.

Originally posted by tfoote with karma: 58457 on 2012-07-23. Post score: 1

Comment by kedarm on 2013-10-03: Although I agree that this solution will work for single robot settings, I think a better and more robust solution is given here: https://code.ros.org/trac/ros-pkg/ticket/5511

Frames get defined in a hierarchy like parent_frame/child_frame. If a frame does not have any parent, it is defined as /frame. The first warning is: Message from [/hokuyo] has a non-fully-qualified frame_id [laser]. Resolved locally to [/laser]. It is advisable to fully specify the hierarchy when using frames, as using just a name could lead to ambiguity in choosing the correct frame. This will definitely cause problems in a multi-robot scenario.
About the following errors:

[ERROR] [1340123076.274228430]: Trajectory Server: Transform from /map to /base_link failed: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map]
[ERROR] [1340123076.523439543]: Trajectory Server: Transform from /map to /base_link failed: Unable to lookup transform, cache is empty, when looking up transform from frame [/base_link] to frame [/map]

You are not publishing the tf for map --> base_link. Point no. 17 at http://www.ros.org/wiki/tf/FAQ will be helpful.

Originally posted by prince with karma: 660 on 2012-06-21. This answer was NOT ACCEPTED on the original site. Post score: 1
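For readers unfamiliar with the warning's wording: "resolving" a frame id just means turning a bare name into a fully-qualified one. The gist can be sketched in plain Python (this is an illustration of the rule, not the ROS tf API):

```python
def fully_qualify(frame_id, tf_prefix=""):
    """Resolve a frame id: names without a leading slash get the
    node's tf_prefix (or just '/') prepended."""
    if frame_id.startswith("/"):
        return frame_id            # already fully qualified
    if tf_prefix:
        return "/" + tf_prefix.strip("/") + "/" + frame_id
    return "/" + frame_id          # "resolved locally", as in the warning

print(fully_qualify("laser"))             # /laser
print(fully_qualify("laser", "robot1"))   # /robot1/laser
print(fully_qualify("/laser", "robot1"))  # /laser
```

The multi-robot concern is the middle case: a bare "laser" resolves differently depending on which robot's prefix is in effect, so publishing "/laser" explicitly avoids the ambiguity.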
Uninstall removes installation folder as well as parent folder

The user selects Default Web Site to install his own site to a subfolder under it. All is OK, but on the Uninstall step this folder + subfolder are deleted, and naturally the Default Web Site is also deleted from IIS. How do I avoid deleting the Default Web Site folder?

If the WebSite element is outside of a Component it should be used as a locator only and not installed or uninstalled, which sounds like what you want for Default Web Site. If you have the WebSite element inside of a Component then you are telling WiX that you really do want to control the installation and uninstall of that WebSite. WiX WebSite Element (see bottom of page)

It may be my mistake: the installer adds a web app (virtual directory) to the existing Default Web Site. And uninstall removes the virtual dir, user files AND this Default Web Site(!), which was not created by the installer but existed before.

+1. @Oleg, follow Rob's suggestion and move the WebSite element out of the component definition. This is a proper solution to your case, as you install just the virtual dir, not the site itself. However, there are cases when you need to put the site in the component even if it resolves to the Default Web Site. In this case, it makes sense to mark the component as 'Permanent'. But, just to emphasize once again, the solution to your case is 99% likely to move the WebSite element out of the Component definition.

An uninstall shouldn't leave resources on the target machine. This is why IIS elements are always removed by the uninstall process. If you want to install your web site and leave it during uninstall, you can try using custom actions instead of WiX support.

I mean: IIS contains the Default Web Site, and the installer adds the user's web app to this site. But Uninstall removes the user's virtual directory + files and the parent Default Web Site.

One way is to have the WebSite element inside a Fragment, not inside a Component; the website would then be searched for, not installed or uninstalled.
<!-- Because this is under a fragment (rather than a component) it merely locates the specified site rather than install or uninstall it --> <iis:WebSite Id="WS_DEFAULT" Description="Default Web Site"> <iis:WebAddress Id="AllUnassigned" Port="80" /> </iis:WebSite> So when uninstalling, only the app/files installed would be removed.
PHP | Python map() equivalent

Does PHP have an equivalent to Python's map() function? If not, is it possible to build it on your own? Thanks in advance!

Yes, use array_map(). How "equivalent" does it have to be? There's a bit of difference between map and array_map, but that's mostly due to how these languages work…

To expand on vivek_23's comment.

Python:

items = [1, 2, 3, 4, 5]
squared = list(map(lambda x: x**2, items))
print(squared)  # [1, 4, 9, 16, 25]

PHP (< 7.4):

$items = [1, 2, 3, 4, 5];
$squared = array_map(function($x) { return $x ** 2; }, $items);
var_dump($squared); // [1, 4, 9, 16, 25]

PHP (7.4+): Arrow functions have been introduced into PHP since version 7.4.

$items = [1, 2, 3, 4, 5];
$squared = array_map(fn($x) => $x ** 2, $items);
var_dump($squared); // [1, 4, 9, 16, 25]
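One parallel worth knowing: both functions accept more than one input sequence. PHP's array_map(f, $a, $b) walks the arrays in lockstep, and Python's map does the same with extra iterables:

```python
# map over two sequences in parallel, like array_map with two arrays
a = [1, 2, 3]
b = [10, 20, 30]
sums = list(map(lambda x, y: x + y, a, b))
print(sums)  # [11, 22, 33]

# PHP's array_map(null, $a, $b) "zips" the arrays into pairs;
# Python's zip() is the counterpart:
pairs = list(zip(a, b))
print(pairs)  # [(1, 10), (2, 20), (3, 30)]
```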
Using UIImagePNGRepresentation

I'm trying to save a UIImageView image to a PNG file, and then use that file as a GL texture. I'm having trouble with this: when I try to run my code in the simulator, Xcode says the build succeeded, but the simulator crashes with no error messages. Here is my code:

NSString *dataFilePath = [NSHomeDirectory() stringByAppendingPathComponent:@"Picture.png"];
NSData *imageData = UIImagePNGRepresentation(imageView.image);
[imageData writeToFile:dataFilePath atomically:YES];

// Load image into texture
loadTexture("Picture.png", &Input, &renderer);

Make sure the build configuration in the upper left is set to Debug (e.g., "Simulator -- 3.0 | Debug"), and do Build > Build and Debug. Then do Run > Console to view your console. That should show you your error messages.

Also check dataFilePath and imageData... is either of them NULL? Does the path hold a valid string?

When I run Build & Debug and view the Console, I find that the problem is that my "Picture.png" file is not found. How can I get my UIImageView to save to a PNG file that I can use within my app? I've also tried this code, and my app still does not respond to using the Picture.png in my texture. Help?

// Save the image
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *imagePath = [paths objectAtIndex:0];
NSString *filename = @"Picture.png";
NSString *filepath = [NSString stringWithFormat:@"%@/%@", imagePath, filename];
NSData *imageData = [NSData dataWithData:UIImagePNGRepresentation(imageView.image)];
[imageData writeToFile:filepath atomically:YES];

// Load image into texture
loadTexture("Picture.png", &Input, &renderer);

Judging by your reply to Metalmi, it looks like you're not giving us the actual code you're compiling. If you're actually doing what it looks like you're doing, then you're trying to write data to your application's bundle, which is going to fail because you're not allowed to do that. Can you provide us with the exact code you're trying to execute?
Try this:

NSString *dataFilePath = [[NSBundle mainBundle] pathForResource:@"Picture" ofType:@"png"];

I tried that, but I am getting this error message: "Assertion failed: (CGImage), function loadTexture, file Program received signal: "SIGABRT"." Here's my code:

// Save the image
NSString *dataFilePath = [[NSBundle mainBundle] pathForResource:@"Picture" ofType:@"png"];
NSData *imageData = UIImagePNGRepresentation(imageView.image);
[imageData writeToFile:dataFilePath atomically:YES];

// Load image into texture
loadTexture("Picture.png", &Input, &renderer);

And the line it crashes at:

CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);

iOS apps cannot write inside their own bundles.
jQuery prop("disabled", true) not working

I am doing a simple disabling and enabling of a checkbox based on the status of another checkbox. I am having trouble disabling the checkbox by using prop("disabled", true); it is not working. I tried using prop("checked", true) and it works well. I don't know why prop("disabled", true) is not working.

HTML:

<div class="form-group question-option-block form-group">
  <label for="option[]" class="control-label">Options</label>
  <div class="row" style="">
    <div class="col-xs-5 col-md-8"><input class="form-control question-option" name="option[]" type="text" value="0" id="option[]"></div>
    <div class="col-xs-2 col-md-1"><input type="checkbox" class="open_ended_answers" name="open_ended_answers[]" value="1"> Open-ended </div>
    <div class=" col-xs-2 col-md-1"> <input type="checkbox" class="required_open_answers" name="required_open_answers[]" value="1"> Required</div>
  </div>
  <div class="row" style="margin-top:20px;">
    <div class="col-xs-5 col-md-8"><input class="form-control question-option" name="option[]" type="text" value="1" id="option[]"></div>
    <div class="col-xs-2 col-md-1"><input type="checkbox" class="open_ended_answers" name="open_ended_answers[]" value="1"> Open-ended </div>
    <div class=" col-xs-2 col-md-1"> <input type="checkbox" class="required_open_answers" name="required_open_answers[]" value="1"> Required</div>
  </div>
  <div class="row" style="margin-top:20px;">
    <div class="col-xs-5 col-md-8"><input class="form-control question-option" name="option[]" type="text" value="2" id="option[]"></div>
    <div class="col-xs-2 col-md-1"><input type="checkbox" class="open_ended_answers" name="open_ended_answers[]" value="1"> Open-ended </div>
    <div class=" col-xs-2 col-md-1"> <input type="checkbox" class="required_open_answers" name="required_open_answers[]" value="1"> Required</div>
    <div class="col-xs-2 col-md-1"><button class="btn btn-danger btn-flat remove-option">Remove</button></div>
  </div>
</div>

Javascript:
$('.open_ended_answers').change(function() {
  if ($(this).is(":checked")) {
    $(this).closest('.row').find(".required_open_answers").prop('disabled', false);
  } else {
    $(this).closest('.row').find(".required_open_answers").prop('disabled', true);
    $(this).closest('.row').find(".required_open_answers").prop('checked', false);
  }
});

if (required_open_answers != null) {
  required_open_answers_input.each(function(i, val) {
    if (required_open_answers[i] !== undefined) {
      if (open_ended[i] == "1") {
        $(this).prop("disabled", true);
      } else {
        $(this).prop("disabled", false);
      }
      if (required_open_answers[i] == "1") {
        $(this).prop("checked", true);
      }
    }
  });
}

Drop your parsed HTML here. Is it the code within the .change() handler that you're asking about? What is the second block of code? I am sure $(this).prop("disabled", true); works just fine. Are you sure the flow runs to that statement?

@nnnnnn The first block is the on-change handler, while the second block is included in the on-ready handler for the first run, checking existing values. @vothaison Yes I am; I tried changing the code to change some text instead and it runs well. And I believe $(this).prop("disabled", true) is working fine, because I used it on other input elements and it works just fine. But for some reason it is not working in this specific area. I am not sure why.

Please provide more context with HTML. At the moment we are guessing what your HTML structure is. If you can, please edit your question and add the HTML in a snippet, the <> button in the editor. That will provide an interactive example of the problem you have. Also check for any console errors and report those.
@JonP I included the html code in the question.

For jQuery 1.6+ you can use the .prop() function:

$("input").prop("disabled", true);
$("input").prop("disabled", false);

For jQuery 1.5 and below you need to use .attr() to disable the checkbox:

$("input").attr('disabled', 'disabled');

and to re-enable the checkbox (remove the attribute entirely):

$("input").removeAttr('disabled');

Assuming you have an event handler on a checkbox, in any version of jQuery you can use the following property to check whether your checkbox is enabled or not:

if (this.disabled) {
  // your logic here
}

More information at: http://api.jquery.com/prop/#entry-longdesc-1 http://api.jquery.com/attr/

Used both removeAttr and attr with no luck. @banri16 please edit your question, adding a jsfiddle (html included) where you reproduce the error; I would be glad to help you fix it. Thanks. I included the html code in my question. I tried to reproduce the error in jsfiddle but it is hard to reproduce since I am using the Laravel framework. I tried to code it but it was working fine. I think it is a problem in the code flow.

I'm already using prop. Isn't the OP using .prop...? If you are not getting the desired output using prop("disabled", true), try $(this).attr("disabled", true). I also tried this one but unfortunately it is not working either. Did you find the solution? Yes, it was in a change handler. Thanks for your help. Okay, that's great.

I found the issue. There was a change handler on the document which enabled all inputs, overriding my script. I changed this code:

$(document).on("change", questionType, function() {
  if (questionType.val() == 0) {
    changeOption(0);
  } else if (questionType.val() == 2) {
    changeOption(2);
  } else {
    changeOption(1);
  }
});

to:

$(questionType).on("change", function() {
  if (questionType.val() == 0) {
    changeOption(0);
  } else if (questionType.val() == 2) {
    changeOption(2);
  } else {
    changeOption(1);
  }
});

changeOption() includes the process of enabling all the input elements. Thank you so much for all of your help.
Try this: replace this:

if ($(this).is(":checked")) {
  $(this).closest('.row').find(".required_open_answers").prop('disabled', false);
}

with:

if ($(this).is(":checked")) {
  $(this).closest('div[class*="row"]').find(".required_open_answers").prop('disabled', false);
}

You can use plain JavaScript instead. Note that jQuery's $.each callback receives (index, element), so:

$.each(document.getElementsByTagName('input'), function(index, element) {
  element.disabled = true;
});

Or, in your case:

this.disabled = true;

Thanks, but I found the issue.
Beamer title page extra line space in author

I'm getting some extra line space between Prof. DEF GHI and Prof. Jklmno Pqrst in the following beamer LaTeX code. How do I solve it?

\documentclass[mathserif,10pt,graphics]{beamer}
\title[Title]{Title}
\subtitle{\vspace*{0.5cm}{Subtitle}}
\author[ABC]{ABC\\[5mm]{\footnotesize \textbf{Supervisors:}\\Prof. QWERTY\\Prof. DEF GHI\\Prof. Jklmno Pqrst}}
\institute[UniversityXYZ]{}
\date[Today]{Today's date}
\begin{document}
\begin{frame}
\titlepage
\end{frame}
\end{document}

The problem is that you are using the font switch \footnotesize for multiple lines of material inside a group and didn't end the paragraph before leaving the group, so the wrong value of \baselineskip gets applied. You need:

\author[ABC]{ABC\\[5mm]{\footnotesize \textbf{Supervisors:}\\Prof. QWERTY\\Prof. DEF GHI\\Prof. Jklmno Pqrst\endgraf}}

I have no idea why, but if you add \\ to the end of the last professor it fixes the problem.

\author[ABC]{ABC\\[5mm]{\footnotesize \textbf{Supervisors:}\\Prof. QWERTY\\Prof. DEF GHI\\Prof. Jklmno Pqrst\\}}

It doesn't really solve it, but hides it; as soon as you add some text in the new line the problem reappears. The reason for the problem as well as a possible solution can be found in my comment to the question. If you want to, you can add that to your answer.

There is an unnecessary pair of curly braces around the names of the professors. Removing these braces resolves the problem. Those are required for enforcing the footnotesize.

I solved it using \hphantom to add space between names.

\author[ABC]{ABC\\[5mm]{\footnotesize \textbf{Supervisors:}\\Prof. QWERTY\\Prof. DEF GHI\\ \hphantom\\ Prof. Jklmno Pqrst\\}}
.getJSON() returns null

I'm using jQuery to get and parse JSON using the example flickr service: http://api.jquery.com/jQuery.getJSON/ Everything works fine if I use the flickr URL; the problem starts when I try to get my own service. It works when I save the output and read it from a file, but it doesn't work from the server URL. Firebug shows that the object is null, which is obvious, but it also shows something like this:

GET http://myurl.xx 200 OK 107ms
Date Sat, 26 Jun 2010 10:24:08 GMT
Server Apache/2
X-Powered-By PHP/5.2.10
Set-Cookie SESS1faf247ad9843d9dd296b3673ae414f3=1e90f24286485ce326718211ed258733; expires=Mon, 19-Jul-2010 13:57:28 GMT; path=/; domain=.myrul.xx
Expires Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified Sat, 26 Jun 2010 10:24:08 GMT
Cache-Control store, no-cache, must-revalidate, post-check=0, pre-check=0
Vary Accept-Encoding,User-Agent
Content-Encoding gzip
Content-Length 1167
Keep-Alive timeout=1, max=100
Connection Keep-Alive
Content-Type text/javascript; charset=utf-8

Request headers
Host myurl.xx
User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; pl; rv:<IP_ADDRESS>) Gecko/20100611 Firefox/3.6.4 ( .NET CLR 3.5.30729; .NET4.0E)
Accept application/json, text/javascript, */*
Accept-Language pl,en-us;q=0.7,en;q=0.3
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-2,utf-8;q=0.7,*;q=0.7
Keep-Alive 115
Connection keep-alive
Origin null

I've changed the URL to myurl.xx here, but I use a valid URL. What can be a possible reason for that result? For example, I've tried this (I don't use any query strings):

$.getJSON('http://mysite.com', function (jd) {
  $.each(jd, function (index, value) {
    $('#stage').append('<p>' + value.title + '</p>');
    $('#stage').append('<img src=\'' + value.picture + '\'></p>');
    $('#stage').append('<p><a href=\'' + value.url + '\'>url</a></p>');
    $('#stage').append('<hr/>');
  });
});

Is your own service definitely sending back well-formed JSON-encoded data? Can you post the code you're using, specifically the URL/querystring?
It's ok to replace the domain with "mysite.net"; we need to know if it's the same domain as the page doing the request, and if not, what your querystring looks like. Yes, my service sends valid JSON data; I validated it using two different sites with the same result. Those sites use jQuery too, so I'm doing something wrong on my end. I've updated my post with one of the sample code pieces I used. @Jacob - Are you sure it's sending data, e.g. you're seeing it in the console/net panel when jQuery makes the call? Visiting it in your browser manually isn't the same... we need to see the response when jQuery makes the request. You may want to try just browsing to the URI and see if you get valid JSON back (to help isolate the problem).

You need to add a callback in the URL; i.e. Flickr has jsoncallback:

http://api.flickr.com/services/feeds/photos_public.gne?tagmode=any&format=json&jsoncallback=?
3 wire load sensor connection to INA125P

I am trying to use a 3-wire load sensor, connect it to an INA125 for voltage amplification, and then feed the amplified output to the ADC of an Arduino. I used the configuration below with the INA125, where S+ and S- are Sense+ and Sense-. I tried all the configurations, i.e. single load cell, half Wheatstone bridge, and full Wheatstone bridge, but nothing worked. I only keep getting random ADC values of 14-16, and even on pressing the load sensor upside down, nothing changed. Basically I followed the configuration below while trying with a single load cell, together with a simple Arduino sketch just to read the ADC value, in order to check whether I am getting things right. As per the code, the ADC value should change, but it didn't. Below is the sample code.

#include <stdio.h>
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
int sensorValue = 0;
int sensorPin = A0;

void setup() {
  Serial.begin(9600);
  Serial.println("Weight sensor reading");
  lcd.begin(16, 2);
}

void loop() {
  lcd.begin(16, 2);
  lcd.setCursor(0, 0);
  lcd.print("Weight measurement");
  sensorValue = analogRead(sensorPin);
  Serial.print(sensorValue);
  lcd.setCursor(0, 1);
  lcd.print(sensorValue);
  delay(200);
}

Now with respect to the single load cell, I connected black to GND, white to +5V, and red to S+ at pin 6 of the INA125, and I connected pin 7, that is S-, to GND. But this didn't work. When I used 2 load sensors, I connected the white wires of both load sensors to +5V, the black wires to GND, the red wire of the 1st load sensor to S+, and the red wire of the other load sensor to S-. But even that didn't work, and the readings remained the same, i.e. 14-16, and didn't change even on applying a sufficient amount of pressure. Now with respect to the full Wheatstone bridge configuration, I used the picture-based configuration given at this link. But even that didn't work.
I am not able to understand what I am doing wrong. Can anyone advise me? Important: I am using a 10k resistor between pin 8 and pin 9, which gives me a gain of 10. Is that sufficient for the Arduino to read? Or must I use some other resistance that gives a higher gain? But I think even with the 10k resistor I should see some change in the value of the ADC, and I am not even getting that. In my earlier question I asked a similar type of question, but at that time I didn't have an INA125 with me. Below is the picture of the configuration I am using. I even tried replacing the 10k resistor for Rg with 68 ohms, but it didn't work.

I am quite a noob (I'm actually a programmer, not an electrical engineer!) - but I'm doing something similar and maybe my discoveries will help you out. Firstly, I suggest you read this: http://www.instructables.com/id/Arduino-Load-Cell-Scale/ Yes - it's for a 4-wire load cell, but it's very similar. Also, read this: http://airtripper.com/1626/arduino-load-cell-circuit-sketch-for-calibration-test/

FIRSTLY: The big difference between these two articles is that the latter shows exciting the load cell from the INA125 voltage reference... NOT the Arduino supply. I would strongly suggest doing this - my readings stabilised significantly (improving from a 50g fluctuation to only 5g!).

SECONDLY: In your particular circuit, you cannot use pin 15 for your voltage reference (5V) - page 11 (section "Precision Voltage Reference") of the specification says "Positive supply voltage must be 1.25V above the desired reference voltage." http://www.ti.com/lit/ds/symlink/ina125.pdf This means that because your circuit supply is 5V, you can only use a voltage reference pin that is less than 5V - 1.25V = 3.75V. (Why? It appears that the IC uses 1.25V to generate those reference voltages, meaning that the 5V and 10V pins will not actually be producing 5V and 10V for you!) That leaves only the 2.5V reference pin as a candidate.
Unfortunately, that also means that if you use the same voltage reference as E+, you will be running your load sensor at 2.5V, which may not be enough excitation. You will need to read your load cell's spec, but they usually want around 10V to really work well. I originally made the same mistake and used the 5V reference pin with a circuit supply of 5V, but then I saw this on my scope: a 100mV pulse every 200ms. With my calibration, it resulted in 200g worth of error!! When I switched to the 2.5V reference, that spike went away.

THIRDLY: Why is your VrefOUT (pin 4) connected to your 5V supply? This pin should ONLY be connected to your VrefIN (pin 14 for 2.5V, pin 15 for 5V, pin 16 for 10V) AND your load cell's E+. Here is my understanding of what it's for: the amplifier needs a consistent voltage reference, as the circuit supply may fluctuate throughout its life (e.g. a depleting battery), so you need to give the INA125 a known voltage reference, and luckily the INA125 produces three of them (2.5V, 5V, and 10V).

FOURTHLY: Your amplifier gain. I don't use Arduinos, but my analog inputs are referenced against 3.3V. My load cell produces about 4.1mV when loaded with 5kg, and I needed to amplify that to near 3.3V, so my required gain was around 800!! If your cell output and Arduino requirements are anywhere near mine, then your gain resistor is FAR too big. Mine was 75 Ohm. With such a huge resistor, I would expect you to see no change on your analog input.

So, to summarise:
Feed your load cell's E+ from your INA125P pin 4, not your circuit supply. Pin 4 will be much smoother and more consistent.
Don't connect your pin 4 to your circuit supply (marked as 5V in your diagram). I don't know why you did this.
Your amplifier gain is probably too small, as a result of your gain resistor being far too large. If you can't be bothered calculating what resistor you need, grab a potentiometer in the range of 200 Ohm and play with it.
Do rolling release distros like Arch Linux include kernel upgrades as part of their rolling upgrades? Do fully rolling-release distros like Arch Linux, openSUSE, Alpine, and so forth include kernel upgrades as part of their rolling upgrades, or are kernel upgrades a separate issue even within the rolling-release paradigm? The kernel is just another package in Arch. When upstream pushes a stable release, the maintainer will package it for Arch. The only special treatment the kernel, like every other package in the [core] repository, gets is that releases go to [testing] first, so that developers and experienced users with that repository enabled can report any issues before they are introduced to the general population of users. Once a package, including the kernel, has sufficient sign-offs, it will be pushed to the standard repositories.
PCR efficiency or DNA yield with a single primer How do I calculate the number of DNA molecules synthesized after n cycles with a single primer (only the forward primer, or only the reverse primer)? Is there a formula for the DNA yield of such a PCR? Suppose I set up a PCR reaction with 100 template DNA molecules for 30 cycles, with only one primer, the forward primer for the target gene. How many DNA molecules will be generated at the end of 30 cycles, all other conditions remaining the same? Thanks. Technically, if you use only one primer, it is not PCR anymore. But here is what will happen if you have 100 template DNA molecules (each made of strand A and complementary strand B) and an abundance of primer (say 10x, or 1000 molecules): In the first cycle, primers bind to the template DNA, but only to one strand (for example, A). DNA polymerase extends those primers, creating complementary strands B'. Some B' strands will be shorter than the template, because DNA polymerase is not perfect. The DNA-primer complexes melt at the beginning of cycle 2. At the beginning of cycle 2, primer molecules again bind to the template, again to strands A, and the process continues. Since there is no step where strand A itself is copied, this process is linear, not a "chain reaction". Each cycle produces the same number of B' strands. It is hard to say exactly how many B' molecules you will get per cycle, but at the end of 30 cycles it will be at best 30x the yield of the first cycle. Remember, though, that DNA polymerase degrades, so cycle 30 will be less efficient than cycle 1. Assuming a perfect polymerase and a long enough extension time, you will get 100 B' strands in cycle 1, 100 B' strands in cycle 2, and so on. At the end of 30 cycles you will have 30 x 100 = 3000 B' strands, plus the original 100 strands A and 100 strands B.