Cases of gravitational lensing resulting in a recognizable image of an extended object?
Question: There are several different classifications of gravitational lensing phenomena. Here I am asking for any examples of strong lensing where the lensed image of an extended object is magnified and recognizable as an extended object. Of course it may be distorted somewhat, but if the lensed object is a galaxy, then the lensed image should appear larger and "galaxy-shaped" even though distorted. If it's a tight pattern of stars or other objects, then the pattern should be larger. I have a strong memory of reading about such a case in Nature or Science within the last few years, but I cannot find it. I believe that the nearer, lensing object was a galaxy. Answer: There are actually quite a few examples of this. A particularly nice one, where you can see star-forming substructure within the distant, lensed galaxy (the extended, vertical "snake" structure just right of center), is this HST image: This is another HST-based example, in which the same background galaxy is lensed multiple times (indicated by white ovals) by an intervening galaxy cluster, with a reconstruction of the background galaxy in undistorted form shown in the lower left: https://www.space.com/14481-hubble-photo-brightest-galaxy-gravitational-lens.html People have even obtained spatially resolved spectroscopy within the lensed galaxies, as discussed in this paper (which includes spectroscopy of the first lensed galaxy pictured above).
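As an aside for calibration (not part of the answer above): the angular scale on which this strong-lensing regime operates is the Einstein radius, $\theta_E = \sqrt{(4GM/c^2)\, D_{ls}/(D_l D_s)}$. A minimal sketch with assumed, typical numbers for a galaxy-scale lens (all values below are illustrative assumptions, and the distance treatment is a flat-sky simplification, not a proper cosmological calculation):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

M = 1e12 * M_SUN     # assumed lens mass (massive galaxy plus halo)
D_l = 1e9 * PC       # assumed distance to the lens: 1 Gpc
D_s = 2e9 * PC       # assumed distance to the source: 2 Gpc
D_ls = D_s - D_l     # flat-sky simplification for this sketch

# Einstein radius in radians, then converted to arcseconds.
theta_E = math.sqrt((4 * G * M / c**2) * D_ls / (D_l * D_s))
arcsec = math.degrees(theta_E) * 3600
print(f"Einstein radius ~ {arcsec:.1f} arcsec")  # roughly 2 arcsec here
```

An Einstein radius of a couple of arcseconds is why galaxy-galaxy strong lenses like the "snake" above are resolvable with HST but marginal from the ground.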
{ "domain": "astronomy.stackexchange", "id": 3452, "tags": "observational-astronomy, gravitational-lensing, deep-sky-observing" }
How do we get energy to start walking?
Question: The answer seems obvious, it's our energy that gets converted into kinetic energy. But my question is how exactly? Which force is responsible for doing work on us so that we gain kinetic energy? It can't be friction, as the point of contact with the ground is at rest. Is it some kind of internal force that does this work? Answer: Biomechanics is complicated! The first thing we have to do is start falling. We have to become out of balance, one way or another. The easiest-to-visualize version of this is to consider starting to walk with our knees locked and our lower back locked. This decreases how many degrees of freedom we have available to us and points to the one degree left: the ankles. Dorsiflexion muscles like the Tibialis Anterior pull on tendons on the front side of the foot. This decreases the net downward force that the front of the foot can cause (and thus increases the downward force on the heel), and the result is that you start to lean forward. Wow, that was the hard part. Once we start leaning forward, we can start using gravity to do the dirty work for us. We let gravity pull our body down until we can start using our glutes to rotate the hip joint. As long as we do this in concert with the effects of gravity (and we're pretty good at it after about 4+ years of practice), the result is that the downward portion of the forces generated by those glutes opposes gravity, keeping us from going further downward, and the lateral portion of the forces starts to drive us forward. And, of course, the forces driving us forward eventually work their way down from the hip muscles to the bottom of the foot, where friction holds the foot in place. As such, since the foot cannot move, the muscles and tendons pull the body forward.
If you think in force terms from the perspective of the foot and the muscles and tendons connected to it, the muscles contract, pulling on the tendons, and the tendons exert a forward force on the body above, accelerating it forward. Now, what makes walking take a few years to master is that one basically has to maintain this state as a series of falls. If one falls a bit too far forward, one adjusts placement and timing so that the next step does not cause as much forward acceleration, and lets the torque of gravity on the body slow that acceleration down (when our weight is on the front foot, our center of mass is behind our foot, so the net effect of gravity pulling us down plus the forces applied to the ground with our feet pushes us backwards). But all of this starts with a really strange move where we have to lose balance so that we can fall into the first step. There are ways to move that don't rely on falling. Many martial arts practice them. However, they feel very unusual and are not nearly as fast. Most everybody falls a little because it's more energy-efficient.
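The key point of the question, that static friction transmits force but does no work because its point of application does not move, can be put in numbers. A minimal sketch with assumed, illustrative values (not measured data):

```python
# Assumed, illustrative values for a walker reaching steady speed.
mass = 70.0             # kg, assumed body mass
v = 1.4                 # m/s, assumed walking speed
friction_force = 100.0  # N, assumed horizontal ground-reaction force

# Work done by friction on the planted foot: W = F * d, and the
# contact point of a non-slipping foot does not move, so d = 0.
displacement_of_contact_point = 0.0
work_by_friction = friction_force * displacement_of_contact_point  # 0 J

# The kinetic energy the body gains therefore comes from internal
# (muscle) work, not from the ground.
kinetic_energy = 0.5 * mass * v**2
print(work_by_friction, round(kinetic_energy, 1))  # 0.0 68.6
```

Friction still matters: it supplies the external horizontal force that changes the body's momentum, but the energy bookkeeping is done entirely by the muscles.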
{ "domain": "physics.stackexchange", "id": 69695, "tags": "newtonian-mechanics, classical-mechanics, work, everyday-life, biology" }
Lots of RegEx match against huge number range (PHP)
Question: I have to check a given set of regular expressions, which define number ranges for dial plans, against an input number range. The goal is to check which of the numbers in the range are free, or which regular expression matches a number. In the latter case the number is occupied; otherwise, if no regex matches, the number is defined as free. I match every number from the input range against every regexp and generate some output code. The main problem is that this sort of "brute force" causes a huge load on the server, and for 50.000 numbers the default PHP timeout is hit. Even without the timeout, this is in my eyes too long to wait. The output is not even the real problem (it could and will get optimized anyhow), but for now I need a more efficient way to match. Here is my code:

    $start_time = microtime(true); // start time
    $rangeStart = $_POST["search_numberrange_start"];
    $rangeEnd = $_POST["search_numberrange_end"];
    $this->applicationManager->sortAppsByNumber();
    $isdn_applications = $this->applicationManager->getISDNapps();
    $sip_applications = $this->applicationManager->getSIPapps();
    $isdn_regexp = array();
    $sip_regexp = array();
    for($i = 0; $i < count($isdn_applications); $i++)
        array_push($isdn_regexp, "/".$isdn_applications[$i]->getNumber()."/");
    for($i = 0; $i < count($sip_applications); $i++)
        array_push($sip_regexp, "/".$sip_applications[$i]->getNumber()."/");
    $matched_sip = array();
    $matched_isdn = array();
    $number = $rangeStart;
    do {
        $matched = false;
        $code = $number;
        if($this->numberRangeCheckType == "default" || $this->numberRangeCheckType == "isdn") {
            for($i = 0; $i < count($isdn_regexp); $i++) {
                if(preg_match($isdn_regexp[$i], $number)) {
                    $code = $code." <a href='#application_".$i."' class='applink'>".$isdn_applications[$i]->getName()."</a>";
                    $matched = true;
                }
            }
        }
        if(!$matched) {
            $code = "<li class='free'>".$number." FREI</li>";
            array_push($matched_isdn, array("code" => $code));
        } else {
            $code = "<li class='occupied'>".$code."</li>";
            array_push($matched_isdn, array("code" => $code));
        }
        $code = $number;
        $matched = false;
        if($this->numberRangeCheckType == "default" || $this->numberRangeCheckType == "sip") {
            for($i = 0; $i < count($sip_regexp); $i++) {
                if(preg_match($sip_regexp[$i], $number)) {
                    $code = $code." <a href='#application_".$i."' class='applink'>".$sip_applications[$i]->getName()."</a>";
                    $matched = true;
                }
            }
        }
        if(!$matched) {
            $code = "<li class='free'>".$number." FREI</li>";
            array_push($matched_sip, array("code" => $code));
        } else {
            $code = "<li class='occupied'>".$code."</li>";
            array_push($matched_sip, array("code" => $code));
        }
        $number++;
    } while($number < $rangeEnd);
    $end_time = microtime(true);
    $time = $end_time - $start_time;
    echo "<p>Seite generiert in ".round($time, 5)." Sekunden</p>";
    /*
    switch($this->numberRangeCheckType) {
        case "isdn":
            echo "<p class='match_caption'>ISDN:</p>";
            $this->printMatchedEntry($matched_isdn);
            break;
        case "sip":
            echo "<p class='match_caption'>SIP:</p>";
            $this->printMatchedEntry($matched_sip);
            break;
        case "default":
            echo "<p class='match_caption'>ISDN:</p>";
            $this->printMatchedEntry($matched_isdn);
            echo "<p class='match_caption'>SIP:</p>";
            $this->printMatchedEntry($matched_sip);
            break;
    }
    */
    $end_time = microtime(true);
    $time = $end_time - $start_time;

$isdn_applications are objects that hold a name and number for an application (same for $sip_applications). $isdn_regexp holds the regexps built from the numbers (note: the stored numbers are themselves regexps; only the / / delimiters needed by preg_match are missing, and they are attached here). Feel free to ask questions if something is not clear enough, and thanks for taking the time! Answer: Well, REGEX is actually your first problem. A REGEX function is typically slower than its string/int counterparts. So, if you can do something with a normal function, or a few normal functions, it is normally preferable to REGEX.
This is why so many people say REGEX is bad. It's not bad, just misused and misunderstood. REGEX should only be used when you can't find a normal function to do something for you, or when the normal function(s) required would just be too costly, and that is usually justified by some sort of profiling attempt. However, maybe switching away from REGEX isn't plausible in this situation. It appears that you might be working inside someone else's framework, at least with the procedural code mingled with OOP; I assume this is the case. Now, REGEX isn't your only problem. There are sections in your code that are redundant and are causing some inefficiency as well. For instance, your two for loops. First off, let me say that you should keep using braces on one-line statements to enhance legibility and decrease the possibility of mistakes. That isn't affecting your program's efficiency, but it might affect yours. Now, the first inefficiency is that count. for and while loops, unlike foreach loops, re-evaluate functions in their condition on each iteration. So you are essentially asking for a new count every time the loop restarts. While this inefficiency is usually overlooked because of its triviality, I point it out because your program is going to need all the help it can get. And it's just good practice anyway. The second inefficiency is the need for two loops at all. If you get the count of both arrays, then compare them to find the larger, you can use that larger number to loop from and use if statements to ensure that you don't go over the maximum allowed for the smaller. The last inefficiency, at least here, would be that array_push() function. Why use a function when the same thing can more easily and legibly be accomplished without one?
    for( $i = 0; $i < $isdnCount || $i < $sipCount; $i++ ) {
        if( $i < $isdnCount ) {
            $isdn_regexp[] = "/{$isdn_applications[ $i ]->getNumber()}/";
        }
        if( $i < $sipCount ) {
            $sip_regexp[] = "/{$sip_applications[ $i ]->getNumber()}/";
        }
    }

Not to say the above is 100% efficient either. You should profile it and compare; maybe all those comparisons made it just as bad or worse. Maybe moving the || comparison out of the for loop would make it better; after all, it is being evaluated on each iteration. Why did you define $rangeStart and then assign it to $number without ever using it again? Anyway, that's a micro-inefficiency; this do/while loop is the biggest concern. There is a principle that will really help you here and in your future endeavors: "Don't Repeat Yourself" (DRY). As the name implies, if you do something more than once, find some way of making that task repeatable without explicitly having to rewrite it. Typically you start with a loop. If the loop doesn't fix it for you, then you move on to functions, and finally, if functions don't fix it for you, then you move on to classes. Eventually you will be able to look at a problem and intuitively know what kind of solution it requires. Besides making your script more lightweight and efficient, it also makes it easier to read, which is a big issue with your current code. Now, the first repetition I see actually makes that loop I showed you above redundant. Why loop over the source to extract a new array, if you are just going to loop over the new array to do something with the previous source? Do all of your looping at once, whenever possible. Speaking of DRY, your $matched statements can use it too. No need to tell it to push onto the end of the array twice; you've already set up your statements to pass it the same source, so just move the push out of the statements and you'll only have to call it once.

    if( ! $matched ) {
        $code = '<li class="free">' . $number . ' FREI</li>';
    } else {
        $code = '<li class="occupied">' . $code . '</li>';
    }
    $matched_isdn[] = array( "code" => $code );

As above, it appears that these for loops are redundant. You could probably combine them, similarly to the one shown above. I won't write this one out, as you can extract everything you need from that previous example and apply it here. That appears to be it for your inefficiencies. If the program is still running too slow, it is probably because you have a double loop and that internal loop is probably rather large. I can't think of any way around this, however. You have to loop over these arrays for each number. The only "fix" would be to limit the range of numbers you use on each run. Here are some suggestions for your code: Be consistent with your naming. Your first variable uses under_score, then you switch to camelCase, then back and forth throughout the application. Choose one style, at least for the same datatype. I have seen people switch between them for functions and variables, and that is perfectly fine, but all of your variables should be the same, and all of your functions should be the same. If this means shifting your style to resemble the environment you are coding in, then so be it. Better consistent than a mess. Validate and sanitize any user input, unless the scope of this program is purely internal and you trust all input. But even then, people make mistakes, so basic validation wouldn't hurt. It appears that you are working with classes and OOP. In order to be effective with it, you should brush up on key OOP principles, such as the DRY principle I mentioned. There's also SOLID. Google those and you should get a whole bunch more to work with. You don't have to understand all of them right away. Just reading so that you are aware of them is enough at first. Then, subconsciously, you will be able to start looking at your code differently and can then start trying to apply these principles. I hope this helps!
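Beyond the line-by-line cleanups above, the biggest algorithmic win would be to stop running every pattern against every number separately: combining all patterns into one alternation means each number needs a single regex call. A minimal sketch of the idea (in Python here for illustration; PHP's preg_match accepts the same combined pattern, and the pattern strings below are made up, not taken from the question's data):

```python
import re

# Hypothetical dial-plan patterns, each describing a set of numbers.
patterns = ["12.", "13[0-5]", "2000"]

# Combine them into one alternation so every number is tested with a
# single regex call instead of one call per pattern.
combined = re.compile("|".join(f"(?:{p})" for p in patterns))

def classify(range_start, range_end):
    """Map each number in the range to 'occupied' or 'free'."""
    result = {}
    for n in range(range_start, range_end):
        # re.search mirrors PHP's unanchored preg_match behaviour.
        result[str(n)] = "occupied" if combined.search(str(n)) else "free"
    return result

status = classify(120, 140)
print(status["125"], status["139"])  # occupied free
```

One caveat: the combined alternation no longer tells you which pattern matched, which the original code needs for the application links. Named groups (one per pattern) can recover that while keeping the single-call structure.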
{ "domain": "codereview.stackexchange", "id": 2293, "tags": "php, regex" }
Diagrams involved in 1-loop electron self-energy in QED
Question: I'm following the derivation of the electron self-energy at 1-loop in QED in Peskin-Schroeder, page 216. To second order in the coupling the considered diagram (7.15) is The 2-point correlator at second order in the coupling contains, beyond the 2 external fields, 2 interactions: up to integrals and $\gamma$-matrices $$ \langle \Omega | T \, \psi(x) \, \bar{\psi}(y) \, \, A_{\mu}(z) \, \bar{\psi}(z) \, \psi(z) \, \, A_{\nu}(w) \, \bar{\psi}(w) \, \psi(w) | \Omega \rangle . $$ Since every $\psi$ can in principle be contracted with every $\bar{\psi}$, this would provide $3! = 6$ diagrams, factorizing into three pairs of identical diagrams (so that the factor of $2$ cancels against the $1/2$ from the second-order expansion of the exponential): the one above plus Diagram B is 2-loop and contains a vacuum bubble, so it is to be discarded by the usual vacuum-bubble factorization argument. But why is diagram A not considered in the Peskin-Schroeder derivation? Answer: As suggested in the comments, the diagram evaluates to $0$. Introducing a photon mass $\mu$, \begin{equation} \begin{split} \text{Fourier amputated diagram} &= (-ie)^2(-1)\int \frac{d^4k}{(2\pi)^4} \, \gamma^{\mu} \, \text{Tr} \left[ \gamma^{\nu} \frac{\require{cancel}\cancel{k}+m}{k^2-m^2} \right] \frac{-i\eta_{\mu\nu}}{-\mu^2} \\ & \propto \int d^4k \, \gamma_{\nu} \, k_\alpha \frac{\text{Tr}\left[\gamma^\nu \gamma^\alpha \right]}{k^2-m^2} \\ & \propto \int d^4k \, \frac{\cancel{k}}{k^2-m^2} = 0 , \end{split} \end{equation} where the last integral vanishes because the integrand is odd under $k \to -k$.
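For reference, the evaluation above uses two standard Dirac-algebra facts, which can be spelled out as:

```latex
% Trace identities: Tr[odd number of gammas] = 0 and
% Tr[gamma^nu gamma^alpha] = 4 eta^{nu alpha}, hence
\text{Tr}\!\left[\gamma^{\nu}(\cancel{k}+m)\right]
  = k_\alpha \,\text{Tr}[\gamma^{\nu}\gamma^{\alpha}]
    + m\,\text{Tr}[\gamma^{\nu}]
  = 4k^{\nu} .

% The remaining loop integral vanishes by symmetry, since the
% integrand is odd under k -> -k:
\int d^4k \, \frac{k^{\nu}}{k^2-m^2} = 0 .
```

This is why the tadpole (diagram A) contributes nothing: in QED the photon attached to a closed fermion loop with a single vertex always carries zero momentum into a trace that is odd in the loop momentum.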
{ "domain": "physics.stackexchange", "id": 52496, "tags": "quantum-field-theory, quantum-electrodynamics, feynman-diagrams, perturbation-theory, self-energy" }
Project Euler 18/67 : Maximum Path using Memoization
Question: The problem statement of PE18 is: By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.

    3
    7 4
    2 4 6
    8 5 9 3

That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom of the triangle below:

    75
    95 64
    17 47 82
    18 35 87 10
    20 04 82 47 65
    19 01 23 75 03 34
    88 02 77 73 07 63 67
    99 65 04 28 06 16 70 92
    41 41 26 56 83 40 80 70 33
    41 48 72 33 47 32 37 16 94 29
    53 71 44 65 25 43 91 52 97 51 14
    70 11 33 28 77 73 17 78 39 68 17 57
    91 71 52 38 17 14 91 43 58 50 27 29 48
    63 66 04 68 89 53 67 30 73 16 69 87 40 31
    04 62 98 27 23 09 70 98 73 93 38 53 60 04 23

And PE67 is the very same question, but: NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67 is the same challenge with a triangle containing one-hundred rows; it is not possible to try every route to solve this problem, as there are 2^99 altogether! If you could check one trillion (10^12) routes every second it would take over twenty billion years to check them all. It cannot be solved by brute force, and requires a clever method!
;o) My Implementation:

    public class Euler_18 {
        // A custom data structure used to store the array
        // so that I can separate the logic from the storage
        static class TriangularArray {
            HashMap<Integer, int[]> map;
            int someInt;
            int size;

            // All the elements will be stored in the HashMap according to their order
            public TriangularArray(int size) {
                this.size = size;
                map = new HashMap<Integer, int[]>();
                // Initialise the array
                for (int i = 1; i <= size; i++) {
                    int[] currArray = new int[i];
                    map.put(i, currArray);
                }
            }

            // Accept the array
            public void acceptArray(Scanner in) {
                for (int i = 1; i <= size; i++) {
                    int[] currArray = map.get(i);
                    for (int j = 0; j < currArray.length; j++) {
                        currArray[j] = in.nextInt();
                    }
                    map.put(i, currArray);
                }
            }

            // Display the array
            public void displayArray() {
                for (int i = 1; i <= size; i++) {
                    int[] currArray = map.get(i);
                    System.out.println("Array " + i + " : " + Arrays.toString(currArray));
                }
            }

            // Finds the maximum using Memoization.
            public void findMaximum() {
                for (int i = size - 1; i > 0; i--) {
                    int[] currArray = map.get(i);
                    int[] belowArray = map.get(i + 1);
                    for (int j = 0; j < currArray.length; j++) {
                        // The value of current element will be the maximum of the 2 values
                        // of the array directly below it
                        currArray[j] = Math.max(belowArray[j], belowArray[j + 1]) + currArray[j];
                    }
                }
                System.out.println("The maximum route is of length : " + map.get(1)[0]);
            }
        }

        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            int size = in.nextInt();
            TriangularArray theArray = new TriangularArray(size);
            theArray.acceptArray(in);
            theArray.displayArray();
            theArray.findMaximum();
        }
    }

Answer: Storage

A Map is most useful when the key values are not in a sequence, but sparse, with smaller and bigger gaps between key values. In this example, the key corresponds to a row of the triangle, taking on values from 1 to size, and using all values in the range. An array would be a more natural choice for storage, in this example an int[][]. Note that a hash map has some storage overhead.
Also, accessing array elements is simpler to write than .get(...) and .put(...) calls.

Encapsulation

This is a bit unfortunate:

    TriangularArray theArray = new TriangularArray(size);
    theArray.acceptArray(in);
    theArray.displayArray();
    theArray.findMaximum();

Since findMaximum will modify the underlying storage, if you call theArray.findMaximum() one more time, it will produce different output, which is unexpected. It's best when you can call a method multiple times and get the same result consistently. Unless of course it is by design that the call manipulates state, for example in an iterator. The name "findMaximum" doesn't hint at manipulating state, which is misleading. In fact, findMaximum is designed for one-time use. It would be better to rewrite this in a way that the state manipulation becomes obvious. Or throw an exception if the method is called a second time. Or encapsulate the logic in a way that calling findMaximum repeatedly would produce the same result consistently. I would also recommend making findMaximum return the maximum instead of printing text.

Pointless statements

This variable is unused (and poorly named):

    int someInt;

The map.put(i, currArray) statement at the end is unnecessary:

    int[] currArray = map.get(i);
    for(int j=0; j<currArray.length; j++){
        currArray[j] = in.nextInt();
    }
    map.put(i, currArray);

Because currArray comes from map.get(i), and it is not reassigned.

Other redundancies

TriangularArray.size is redundant. The same information is available through the underlying storage, whether using a map or an array. Instead of initializing the arrays in the constructor, you could do it at the same time as parsing from the Scanner, to iterate over the lines only once. Most of the comments are completely redundant noise, for example:

    // Display the array
    public void displayArray() {

This helps nobody; omit such redundant comments.

Naming

acceptArray is a strange name for a method that parses numbers from a Scanner. How about parseFromScanner?
displayArray is a strange name for a method that prints the content of the triangle. How about simply print? As the method is on the TriangularArray class, it's implied that it will print the triangle itself.

Alternative implementation

Consider this alternative implementation:

    static class TriangularArray {
        private final int[][] rows;

        public TriangularArray(int size) {
            rows = new int[size][];
        }

        public void parseFromScanner(Scanner scanner) {
            for (int i = 0; i < rows.length; i++) {
                rows[i] = new int[i + 1];
                for (int j = 0; j < rows[i].length; j++) {
                    rows[i][j] = scanner.nextInt();
                }
            }
        }

        // Finds the maximum using Memoization.
        public int findMaximum() {
            int[] below = rows[rows.length - 1];
            for (int i = rows.length - 2; i >= 0; i--) {
                int[] current = rows[i].clone();
                for (int j = 0; j < current.length; j++) {
                    current[j] = Math.max(below[j], below[j + 1]) + current[j];
                }
                below = current;
            }
            return below[0];
        }
    }
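The bottom-up reduction both versions use is compact enough to sketch in a few lines; here is the same algorithm in Python (illustrative only, not part of the review), run on the 4-row example triangle from the problem statement:

```python
def max_path_total(triangle):
    """Bottom-up dynamic programming: starting from the second-to-last
    row, replace each entry with itself plus the larger of the two
    adjacent entries in the row below. The apex ends up holding the
    maximum top-to-bottom total."""
    below = triangle[-1]
    for row in reversed(triangle[:-1]):
        below = [value + max(below[j], below[j + 1])
                 for j, value in enumerate(row)]
    return below[0]

# The example triangle from the problem statement:
triangle = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]
print(max_path_total(triangle))  # 23 (3 + 7 + 4 + 9)
```

Each row is processed once, so the work is linear in the number of entries, which is why the same code handles Problem 67's 100-row triangle instantly despite its 2^99 routes.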
{ "domain": "codereview.stackexchange", "id": 18499, "tags": "java, programming-challenge" }
Interview Coding Challenge for iOS Part 1 - the Static Objective-C Library
Question: I recently interviewed with a company that needed a C/C++ programmer to work on the iOS side of the products. The job description indicated they needed someone with 4 years of Objective-C and iOS programming, and I was surprised that they wanted to interview me. Prior to this coding challenge I have never worked in Xcode, or programmed in iOS, Objective-C or Swift. I am an absolute beginner in these areas. I still don't think I know these programming environments, but I am learning.

Environment:
OSX - El Capitan
Xcode - Version 8.2 (8C38) // Swift 3
Running in the iPhone 7 simulator.
Late 2010 17 inch MacBook Pro

The following section is the extraction of the email the hiring manager sent me:

Programming Challenge: Create a static library or iOS Framework using Objective-C that performs the following 3 functions:
- Collects the GPS location (latitude and longitude) of the user at a point in time
- Collects the battery state and returns whether or not the device is plugged in and what percentage of life is left.
- Accesses any publicly available, free API to collect the data of your choice and returns it (this should be a network call)
Build a simple application with 3 buttons and a label where text can be displayed. Each button should call into the three functions of the library described above and output the response to the label. Your application should consist of two tabs, one written in Objective-C and one written in Swift. Both tabs should call into the same Objective-C library and perform the same function. Only use Apple frameworks to complete this task. Fully comment your code explaining your logic and choices where multiple choices are available. For example, Apple provides numerous ways to retrieve a network resource; document why you chose the solution you did. Please send me the full SINGLE Xcode project. End of Challenge

This question has been divided into 2 parts based on the size of the code to be reviewed.
One part contains the Objective-C static library and the other part contains the simple application. This question contains the static library written in Objective-C; the application can be found in this Question. The source code and project files for both questions can be found at this GitHub repository in case you are interested in building and running it. https://github.com/pacmaninbw/iOSCodeChallenge The static library took 28 hours to research and code. What I Desire From the Review: Since this is the first time I've programmed for iOS in both Objective-C and Swift, I'd like to know: Are there any memory leaks? What iOS, Objective-C or Swift programming conventions have I missed or used incorrectly? What are the obvious things that I should know that I don't? What error checking should I have included that I didn't? How could I have written this with less code?

PCI7DataModelLibrary.h

    //
    //  PCI7DataModelLibrary.h
    //  PCI7DataModelLibrary
    //
    //  Created by Paul Chernick on 4/18/17.
    //
    /*
     * This file contains the API for the PCI7DataModelLibrary
     */

    #import <Foundation/Foundation.h>
    #import <UIKit/UIKit.h>

    @interface PCI7DataModelLibrary : NSObject

    - (id)init;
    - (BOOL)IsGpsAvailable;
    - (NSString *)provideGPSLocationData;
    - (NSString *)provideBatteryLevelAndState;
    - (NSString *)provideNetworkAccessData;
    - (UIAlertController*)provideGPSAlerters;

    @end

PCI7DataModelLibrary.m

    //
    //  PCI7DataModelLibrary.m
    //  PCI7DataModelLibrary
    //
    //  Created by Paul Chernick on 4/18/17.
    //
    // This object provides a library interface to GPS data, battery data and a network service.
    // The library consists of three different data models, one for the GPS data, one for the battery data and one
    // for the network data. Each of the different data models is contained within its own class. This library was
    // implemented this way to ease the implementation, ease debugging, and allow multiple engineers to work in
    // parallel to implement the library.
    // This implementation follows what is sometimes known as the Single
    // Responsibility Principle.
    //
    // Each child data model has its own initialization functions and is completely self contained. There are no
    // dependencies between the different data models.

    #import "PCI7DataModelLibrary.h"
    #import "PCIGpsDataModel.h"
    #import "PCIBatteryDataModel.h"
    #import "PCINetworkingDataModel.h"

    // TODO : create a table of data models and symbolic constants that provide indexes into that table.

    @implementation PCI7DataModelLibrary {
        PCIBatteryDataModel *batteryDataModel;
        PCIGpsDataModel *gpsDataModel;
        PCINetworkingDataModel *networkDataModel;
        BOOL unableToEstableConnection;
    }

    - (BOOL)IsGpsAvailable {
        BOOL GpsIsAvailble = NO;
        if (gpsDataModel) {
            GpsIsAvailble = gpsDataModel.doesGPSHardWareExists;
        }
        return GpsIsAvailble;
    }

    - (UIAlertController*)provideGPSAlerters {
        UIAlertController* gpsAlertController = nil;
        if (gpsDataModel) {
            gpsAlertController = gpsDataModel.alertUserNoGPSHardware;
        }
        return gpsAlertController;
    }

    - (NSString *)provideGPSLocationData {
        NSString *gpsLocationData = nil;
        if (gpsDataModel) {
            gpsLocationData = [gpsDataModel provideGPSLongitudeAndLatitudeWithTimeStamp];
        } else {
            gpsLocationData = @"Unable to access location data at this time.";
        }
        return gpsLocationData;
    }

    - (NSString *)provideBatteryLevelAndState {
        NSString *batteryLevelAndState = nil;
        if (batteryDataModel) {
            batteryLevelAndState = batteryDataModel.provideBatteryLevelAndState;
        } else {
            batteryLevelAndState = @"Unable to access battery state and level at this time";
        }
        return batteryLevelAndState;
    }

    - (NSString *)provideNetworkAccessData {
        NSString *networkAccessData = nil;
        if (networkDataModel) {
            networkAccessData = networkDataModel.provideANetworkAccess;
        } else {
            // If an attempt to create the
            if (unableToEstableConnection) {
                networkAccessData = @"Unable to establish network connection";
            } else {
                networkAccessData = @"provideANetworkAccess Not Implemented Yet";
            }
        }
        return networkAccessData;
    }

    - (id)init {
        if (self = [super init]) {
            batteryDataModel = [[PCIBatteryDataModel alloc] init];
            gpsDataModel = [[PCIGpsDataModel alloc] init];
            networkDataModel = [[PCINetworkingDataModel alloc] init];
            // TODO : add error handling for failure of any of the child initializations.
            // This includes handling memory allocation errors, device errors and
            // networking errors. Catch all lower level exceptions here;
            // return nil if the error is uncorrectable.
            if (!networkDataModel) {
                // Don't report errors at this point, report errors when the user clicks the button.
                unableToEstableConnection = YES;
            }
            // If none of the data models can be constructed then this object has no meaning.
            // If there are more than the 3 data models, create a table of data models and loop
            // through all of them to check if any of the data models are created.
            if ((!batteryDataModel) && (!gpsDataModel) && (!networkDataModel)) {
                return nil;
            }
        }
        return self;
    }

    @end

PCIGpsDataModel.h

    //
    //  PCIGpsDataModel.h
    //  PCIDataModelLibrary
    //
    // The GPSDataModel class is responsible for interfacing with the CLLocationManager to retrieve the
    // GPS Latitude and Longitude.
    #import <Foundation/Foundation.h>
    #import <UIKit/UIKit.h> // needed for UIAlertController in the interface below

    @interface PCIGpsDataModel : NSObject

    - (id)init;
    - (BOOL)doesGPSHardWareExists;
    - (NSString*)provideGPSLongitudeAndLatitudeWithTimeStamp;
    - (UIAlertController*)alertUserNoGPSHardware;

    @end

PCIGpsDataModel.m

    //
    //  PCIGpsDataModel.m
    //  PCIDataModelLibrary
    //

    #import <CoreLocation/CoreLocation.h>
    #import <UIKit/UIKit.h>
    #import "PCIGpsDataModel.h"

    @interface PCIGpsDataModel () <CLLocationManagerDelegate>

    @property (nonatomic, strong) CLLocationManager *locationManager;
    @property (nonatomic, strong) NSDateFormatter *dateFormatter;

    - (void)initializeGPSHardwareExists;
    - (NSString*)dateFromLocation;
    - (NSString*)latitudeFromLocation;
    - (NSString*)longitudeFromLocation;

    @end

    @implementation PCIGpsDataModel {
        BOOL mustUseGPSHardware;
        BOOL hardwareExistsOnDevice;
        BOOL firstTimeGPSDataRequested; // Flag to allow asking user to use alternate method rather than GPS and to enable location service
        CLLocation *lastReportedLocation;
    }

    - (NSDateFormatter *)dateFormatter {
        if (_dateFormatter == nil) {
            _dateFormatter = [[NSDateFormatter alloc] init];
            [_dateFormatter setDateStyle:NSDateFormatterMediumStyle];
            [_dateFormatter setTimeStyle:NSDateFormatterLongStyle];
        }
        return _dateFormatter;
    }

    #pragma mark - Private Functions

    - (void)initializeGPSHardwareExists {
        if (lastReportedLocation) {
            // The GPS hardware provides better accuracy than WiFi or Radio triangulation.
            // Both horizontal and vertical accuracy will be greater than zero if the GPS hardware is available.
            if (([lastReportedLocation horizontalAccuracy] > 0) && ([lastReportedLocation verticalAccuracy] > 0)) {
                hardwareExistsOnDevice = YES;
            } else {
                hardwareExistsOnDevice = NO;
            }
        } else {
            hardwareExistsOnDevice = NO;
        }
    }

    - (NSString*)dateFromLocation {
        NSString* dateString = nil;
        if (lastReportedLocation) {
            NSDate* timeStampInDateForm = [lastReportedLocation timestamp];
            dateString = [self.dateFormatter stringFromDate:timeStampInDateForm];
        } else {
            dateString = @"No location data.";
        }
        return dateString;
    }

    - (NSString*)latitudeFromLocation {
        NSString* latitude = nil;
        if (lastReportedLocation) {
            CGFloat untranslatedLatitude = lastReportedLocation.coordinate.latitude;
            NSString* direction = @"North";
            if (untranslatedLatitude < 0.0) {
                direction = @"South";
            }
            latitude = [NSString stringWithFormat:@"Latitude = %4.2f %@", fabs(untranslatedLatitude), direction];
        } else {
            latitude = @"No location data.";
        }
        return latitude;
    }

    - (NSString*)longitudeFromLocation {
        NSString* longitude = nil;
        if (lastReportedLocation) {
            CGFloat untranslatedLongitude = lastReportedLocation.coordinate.longitude;
            NSString* direction = @"East";
            if (untranslatedLongitude < 0.0) {
                direction = @"West";
            }
            longitude = [NSString stringWithFormat:@"Longitude = %4.2f %@", fabs(untranslatedLongitude), direction];
        } else {
            longitude = @"No location data.";
        }
        return longitude;
    }

    #pragma mark - Location Manager Interactions

    /*
     * Callback function: the CLLocationManager calls this function with updated location data when changes to the location occur.
*/
- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray<CLLocation *> *)locations
{
    // Use the modern array-based callback (locationManager:didUpdateToLocation:fromLocation:
    // is deprecated) and keep the most recent location. The accuracy test that decides whether
    // the GPS hardware exists lives in initializeGPSHardwareExists, so it is not duplicated here.
    lastReportedLocation = [locations lastObject];
    [self initializeGPSHardwareExists];
}

- (BOOL)doesGPSHardWareExists
{
    return hardwareExistsOnDevice;
}

#pragma mark - public interfaces

/*
 * This alert is generated here in the library so that the user interface doesn't need
 * to know about the business logic. The yes/no buttons affect variables here in the
 * library that should not be exposed to the user interface.
 */
- (UIAlertController*)alertUserNoGPSHardware
{
    NSAssert([NSThread isMainThread], @"UIAlertController must be created on the main thread");
    UIAlertController *alertToPresent = nil;
    NSString* alertTitleString = @"GPS Alert";
    NSString* alertMessage = @"No GPS hardware, use triangulation?";
    if (!hardwareExistsOnDevice && mustUseGPSHardware) {
        alertToPresent = [UIAlertController alertControllerWithTitle: alertTitleString
                                                             message:alertMessage
                                                      preferredStyle:UIAlertControllerStyleAlert];
        UIAlertAction* yesButton = [UIAlertAction actionWithTitle:@"YES"
                                                            style:UIAlertActionStyleDefault
                                                          handler:^(UIAlertAction * action) {mustUseGPSHardware = NO;}];
        [alertToPresent addAction:yesButton];
        UIAlertAction* noButton = [UIAlertAction actionWithTitle:@"NO"
                                                           style:UIAlertActionStyleDefault
                                                         handler:^(UIAlertAction * action) {mustUseGPSHardware = YES;}];
        [alertToPresent addAction:noButton];
    }
    return alertToPresent;
}

- (NSString*)provideGPSLongitudeAndLatitudeWithTimeStamp
{
    NSString *gpsLongitudeAndLatitudeWithTimeStamp = nil;
    if (!lastReportedLocation) {
        gpsLongitudeAndLatitudeWithTimeStamp = @"No Location data available";
    }
    else {
        if (hardwareExistsOnDevice || !mustUseGPSHardware) {
            gpsLongitudeAndLatitudeWithTimeStamp = [NSString stringWithFormat:@"%@\n%@\n time stamp = %@",
self.latitudeFromLocation, self.longitudeFromLocation, self.dateFromLocation];
        }
        else {
            gpsLongitudeAndLatitudeWithTimeStamp = [NSString stringWithFormat:@"GPS hardware not on device, using alternate method\n%@\n%@\n time stamp = %@",
                                                    self.latitudeFromLocation, self.longitudeFromLocation, self.dateFromLocation];
        }
    }
    firstTimeGPSDataRequested = NO;
    return gpsLongitudeAndLatitudeWithTimeStamp;
}

- (void)gpsSetMustUseGPSHardware
{
    mustUseGPSHardware = YES; // Not currently checked.
}

- (void)gpsAllowWiFiOrRadioTriangulation
{
    mustUseGPSHardware = NO; // Not currently checked.
}

- (void)gpsSetHardwareExists
{
    hardwareExistsOnDevice = NO; // This value may be changed on the first or following location updates.
}

// CLLocationManager returns locations based on the delegate model; Apple does not provide an alternate method. The locations are returned based
// on the device moving a specified distance. Requesting greater accuracy can force the service to use the GPS hardware if it is available; otherwise it may
// fall back to WiFi or radio triangulation. Using the GPS hardware increases power usage, which should be considered
// in a real implementation.
// The CLLocationManager is configured and data collection is started during the initialization of this data model;
// I considered this better than doing all of that work during the first click of the button. The initialization will fail
// if the user clicks the "Don't Allow" button on the alert.
This initialization will also fail if Info.plist does not
// contain the following
// <key>NSLocationWhenInUseUsageDescription</key>
// <string>This will be used to obtain your current location.</string>
// <key>NSLocationAlwaysUsageDescription</key>
// <string>This application requires location services to work</string>
- (id)init
{
    if (self = [super init]) {
        mustUseGPSHardware = YES;
        firstTimeGPSDataRequested = YES;
        lastReportedLocation = nil; // This is updated periodically; set to nil to prevent access to an unknown address until the first update
        _locationManager = [[CLLocationManager alloc] init];
        if (_locationManager) {
            // Attempt to force the device to use the GPS rather than WiFi or radio triangulation.
            self.locationManager.desiredAccuracy = kCLLocationAccuracyBest;
            // If the user moves 100 meters then a location update should occur.
            self.locationManager.distanceFilter = 100.0;
            // The following code checks to see if the user has authorized GPS use.
            if ([self.locationManager respondsToSelector:@selector(requestWhenInUseAuthorization)]) {
                [self.locationManager requestWhenInUseAuthorization];
            }
            // Set the delegate before starting updates so that no callbacks are missed.
            self.locationManager.delegate = self;
            // Now that the configuration of the CLLocationManager has been completed, start updating the location.
            [self.locationManager startUpdatingLocation];
        }
    }
    return self;
}

@end

PCIBatteryDataModel.h

//
//  PCIBatteryDataModel.h
//  PCIDataModelLibrary
//
//
//  Provides public interfaces to get the battery level and battery state from the device.

#import <Foundation/Foundation.h>

@interface PCIBatteryDataModel : NSObject

- (id)init;
- (NSString *)provideBatteryLevelAndState;
- (NSString *)provideBatteryLevel;
- (NSString *)provideBatteryState;

@end

PCIBatteryDataModel.m

//
//  PCIBatteryDataModel.m
//  PCIDataModelLibrary
//
//  Provides public interfaces to get the battery level and battery state from the device.
#import <UIKit/UIKit.h>
#import "PCIBatteryDataModel.h"

@implementation PCIBatteryDataModel
{
    UIDevice *thisDevice;
}

- (id)init
{
    if (self = [super init]) {
        // To optimize performance of the calls to provideBatteryLevelAndState, provideBatteryLevel
        // and provideBatteryState, get a pointer to the device only once and enable battery monitoring
        // only once. Battery monitoring must be enabled to get the information from the device or the
        // simulator. The simulator does not fully support modeling the battery.
        thisDevice = [UIDevice currentDevice];
        [thisDevice setBatteryMonitoringEnabled:YES];
    }
    return self;
}

// Each of the following functions could simply return [NSString stringWithFormat: FORMATSTRING, arguments],
// but the current implementations leave room for editing and improvement.
// provideBatteryLevelAndState could have performed all of the operations, but I try to follow the Single
// Responsibility Principle as well as the KISS principle.
- (NSString *)provideBatteryLevelAndState
{
    NSString *batteryStateAndLevel = nil;
    batteryStateAndLevel = [NSString stringWithFormat:@"%@\n%@", self.provideBatteryState, self.provideBatteryLevel];
    return batteryStateAndLevel;
}

- (NSString *)provideBatteryLevel
{
    NSString *batteryLevelString = nil;
    batteryLevelString = [NSString stringWithFormat:@"Battery Level = %0.2f", [thisDevice batteryLevel]];
    return batteryLevelString;
}

- (NSString *)provideBatteryState
{
    NSString *batteryStateString = nil;
    NSArray *batteryStateArray = @[ @"Battery state is unknown",
                                    @"Battery is not plugged into a charging source",
                                    @"Battery is charging",
                                    @"Battery state is full" ];
    batteryStateString = [NSString stringWithFormat:@"Battery state = %@", batteryStateArray[thisDevice.batteryState]];
    return batteryStateString;
}

@end

PCINetworkingDataModel.h

//
//  PCINetworkingDataModel.h
//  PCIDataModelLibrary
//
//  Created by Paul Chernick on 4/17/17.
//  Provides an internet connection interface that retrieves the 15 minute delayed price of American Express.
#import <Foundation/Foundation.h> @interface PCINetworkingDataModel : NSObject<NSURLSessionDataDelegate> - (id)init; - (NSString*)provideANetworkAccess; @end PCINetworkingDataModel.m // // PCINetworkingDataModel.m // PCIDataModelLibrary // If you decide to build and run this in the simulator please see this article on stackoverflow.com // http://stackoverflow.com/questions/41273773/nw-host-stats-add-src-recv-too-small-received-24-expected-28 // the Xcode issue occurs in this code. // You will also need the following in your Info.plist // <key>NSAppTransportSecurity</key> // <dict> // <key>NSAllowsArbitraryLoads</key> // <true/> // </dict> /* Provides an internet connection interface that retrieves the 15 minute delayed price of American Express. * This model uses the Apple's NSURLSession Networking interfaces that are provided by the foundation and * core foundation. * * Due to the nature of the data there is no reason to directly access sockets in the lower layers and Apple * recommends using the highest interfaces possible. This app is written specifically for iOS devices such * as the iPhone and iPad and doesn't need to be concerned with portability. The NSURLSession Networking * interfaces are used because this only needs to access public data on the internet. The NSURLSession * Networking interfaces provide the initial connection and reachabilty checking. * * From https://developer.apple.com/library/content/documentation/NetworkingInternetWeb/Conceptual/NetworkingOverview/CommonPitfalls/CommonPitfalls.html#//apple_ref/doc/uid/TP40010220-CH4-SW1 * Sockets have many complexities that are handled for you by higher-level APIs. Thus, you will have * to write more code, which usually means more bugs. * In iOS, using sockets directly using POSIX functions or CFSocket does not automatically activate the * device’s cellular modem or on-demand VPN. 
* The most appropriate times to use sockets directly are when you are developing a cross-platform tool * or high-performance server software. In other circumstances, you typically should use a higher-level API. */ #import <Foundation/Foundation.h> #import <CoreFoundation/CoreFoundation.h> #import "PCINetworkingDataModel.h" // Private interface and private functions. @interface PCINetworkingDataModel() @property (nonatomic,strong) NSURLSessionDataTask *retrieveDataStockPriceTask; @property (nonatomic,strong) NSURLSession *retrieveDataPriceSession; - (NSURLSession *)createSession; - (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error; - (void)URLSession:(NSURLSession *)session dataTask:(NSURLSessionDataTask *)dataTask didReceiveData:(NSData *)stockPriceData; - (void)setupAndStartdownloadStockPrice; - (NSString*)ParseDownoadedFileForStockPrice; - (void)createStockPriceFullFileSpec; - (NSString*)retrieveStringFromDownloadedFileByUrl; @end @implementation PCINetworkingDataModel // Instance variables { NSString* googleFinanceUrl; NSString* stockPriceFilePath; NSString* stockPriceFileName; NSString* stockPriceFileFullFileSpec; NSString* downloadFailedErrorString; BOOL downloadHadErrors; // BOOL downloadCompleted; } // Create a session using the main operation queue rather than an alternate, this allows synchronous programming // rather than asynchronous programming. The session is resumed in the calling function. - (NSURLSession *)createSession { static NSURLSession *session = nil; session = [NSURLSession sessionWithConfiguration:[NSURLSessionConfiguration defaultSessionConfiguration] delegate:self delegateQueue:[NSOperationQueue mainQueue]]; return session; } // Apple developer documentation indicates that this error handler should check reachability after a slight // wait (15 minutes) and then restart the download session. 
Since the original session start was based on // a button click instead of checking reachability and then restarting the request the error will be reported // and the task will quit. // https://developer.apple.com/library/content/documentation/NetworkingInternetWeb/Conceptual/NetworkingOverview/WhyNetworkingIsHard/WhyNetworkingIsHard.html#//apple_ref/doc/uid/TP40010220-CH13-SW3 // For requests made at the user’s behest: // Always attempt to make a connection. Do not attempt to guess whether network service is available, and do not cache that determination. // If a connection fails, use the SCNetworkReachability API to help diagnose the cause of the failure. Then: // If the connection failed because of a transient error, try making the connection again. // If the connection failed because the host is unreachable, wait for the SCNetworkReachability API to call your registered callback. // When the host becomes reachable again, your app should retry the connection attempt automatically without user intervention // (unless the user has taken some action to cancel the request, such as closing the browser window or clicking a cancel button). // Try to display connection status information in a non-modal way. However, if you must display an error dialog, be sure that it does // not interfere with your app’s ability to retry automatically when the remote host becomes reachable again. Dismiss the dialog // automatically when the host becomes reachable again. 
- (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error
{
    if (error) {
        downloadHadErrors = YES;
        NSString* errorString = [error localizedFailureReason];
        downloadFailedErrorString = [NSString stringWithFormat:@"An error occurred while downloading the stock price %@", errorString];
        NSLog(@"Data retrieval completed with error: %@", error);
    }
    // Whether or not the download failed, release any system resources used by the session; no tasks need to be finished here.
    [self.retrieveDataPriceSession invalidateAndCancel];
}

// We got the stock price data from the web interface, save it to a file so that it can be processed later.
- (void)URLSession:(NSURLSession *)session dataTask:(NSURLSessionDataTask *)dataTask didReceiveData:(NSData *)stockPriceData
{
    NSError *writeError = nil;
    if (stockPriceData) {
        NSString *stockPriceRequestReply = [[NSString alloc] initWithData:stockPriceData encoding:NSUTF8StringEncoding];
        downloadHadErrors = NO;
        NSString* urlOfstockPriceFileFullFileSpec = [NSString stringWithFormat:@"file://%@", stockPriceFileFullFileSpec];
        NSURL *downloadedDataInFile = [NSURL URLWithString:urlOfstockPriceFileFullFileSpec];
        if (![stockPriceRequestReply writeToURL:downloadedDataInFile atomically:NO encoding:NSUTF8StringEncoding error: &writeError]) {
            NSString* errorString = [writeError localizedFailureReason];
            downloadFailedErrorString = [NSString stringWithFormat:@"An error occurred while writing file %@", errorString];
            downloadHadErrors = YES;
        }
    }
    else {
        // No data was delivered; there is no NSError to report in this branch, so describe the condition directly.
        downloadFailedErrorString = @"An error occurred while receiving the data: no data was returned";
        downloadHadErrors = YES;
    }
    // Since the download seems to have completed, finish any unfinished business and then invalidate the session
    // to release any resources that need to be released.
[self.retrieveDataPriceSession invalidateAndCancel]; } // Set up a delegate based HTTP download and start the download - (void)setupAndStartdownloadStockPrice { downloadHadErrors = NO; downloadFailedErrorString = nil; // Set up a NSURLSession to do an HTTP Download from the google finance self.retrieveDataPriceSession = [self createSession]; NSURL *stockPriceURL = [NSURL URLWithString:googleFinanceUrl]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:stockPriceURL]; self.retrieveDataStockPriceTask = [self.retrieveDataPriceSession dataTaskWithRequest:request]; [self.retrieveDataStockPriceTask resume]; } /* * The data returned and stored in the file looks like: // [ { "id": "1033" ,"t" : "AXP" ,"e" : "NYSE" ,"l" : "80.11" ,"l_fix" : "80.11" ,"l_cur" : "80.11" ,"s": "0" ,"ltt":"3:30PM EDT" ,"lt" : "Apr 20, 3:30PM EDT" ,"lt_dts" : "2017-04-20T15:30:24Z" ,"c" : "+4.56" ,"c_fix" : "4.56" ,"cp" : "6.04" ,"cp_fix" : "6.04" ,"ccol" : "chg" ,"pcls_fix" : "75.55" } ] * * The only relavent data is ,"l" : "80.11" * The data needs to be parsed so that it only returns 80.11 */ - (NSString*)retrieveStringFromDownloadedFileByUrl { NSString* fileContents = nil; NSString* urlOfstockPriceFileFullFileSpec = [NSString stringWithFormat:@"file://%@", stockPriceFileFullFileSpec]; NSURL *downloadedFile = [NSURL URLWithString:urlOfstockPriceFileFullFileSpec]; NSError *readErrors; fileContents = [[NSString alloc] initWithContentsOfURL:downloadedFile encoding:NSUTF8StringEncoding error:&readErrors]; if (fileContents == nil) { downloadFailedErrorString = [readErrors localizedFailureReason]; NSLog(@"Error reading file at %@\n%@", downloadedFile, [readErrors localizedFailureReason]); downloadHadErrors = YES; return downloadFailedErrorString; } else { NSRange range = [fileContents rangeOfString:@",\"l\" : \""]; NSString *hackedFileContents = [fileContents substringFromIndex:NSMaxRange(range)]; NSArray *hackedStringComponents = [hackedFileContents 
componentsSeparatedByString:@"\""]; fileContents = hackedStringComponents[0]; } return fileContents; } // Read the downloaded file into a string, then parse the string. // This is implemented without a check that the download task completed. Since the main operation // queue is being used rather than an alternate queue it is assumed that this program is // synchronous rather than asynchronous - (NSString*)ParseDownoadedFileForStockPrice { NSString* returnAmexStockPrice = nil; // If there were errors, just report the errors, don't attempt to parse the data in the file // since it may not be there or it may be incomplete. if (downloadHadErrors) { return downloadFailedErrorString; } returnAmexStockPrice = [self retrieveStringFromDownloadedFileByUrl]; return returnAmexStockPrice; } // Returns a string for the label in the user interface when the Networking button is clicked. // Attempt to connect using WiFi first because this uses less power than the cellphone (WWAN) // connection and does't add cost to the user (data charge on cell phone usage). // Download a file containing the current delayed American Express stock price. Parse the file // to get only the stock price. Embed the stock price within the text to be returned. // // The download is not performed in the background, it is a download initiated when the user // clicks on the button. Any errors such as no connection, or the URL can't be reached are // therefore reported to the user. - (NSString*)provideANetworkAccess { NSString* networkAccessDisplayString = @"Network access not implemented yet"; [self setupAndStartdownloadStockPrice]; // Since the download is implemented on the main queue this is a synchronous rather // than asynchronous program. It is therefore safe to process the data here rather // than in a delegate. 
NSString* amexStockPrice = [self ParseDownoadedFileForStockPrice];
    if (!downloadHadErrors) {
        NSString* preface = @"The current American Express stock price with a 15 minute delay is ";
        networkAccessDisplayString = [NSString stringWithFormat:@"%@ $%@", preface, amexStockPrice];
    }
    else {
        networkAccessDisplayString = amexStockPrice;
    }
    return networkAccessDisplayString;
}

- (void)createStockPriceFullFileSpec
{
    // Find a path to a writable directory to store the downloaded file. This is done once
    // so that button clicks are not affected by a directory search.
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    stockPriceFilePath = [paths objectAtIndex:0];
    stockPriceFileName = @"latestStockQuote.txt";
    stockPriceFileFullFileSpec = [stockPriceFilePath stringByAppendingPathComponent:stockPriceFileName];
}

// Simple setup since all networking actions will be performed when the user clicks a button in the
// application interface. The alternative to this would be to set up a timed loop that would download
// the information on a periodic basis in the background using the features of NSURLSession (not an
// explicit loop in this file).
// While downloading in the background would allow the button click to respond more quickly, it would use more power, and if the user isn't close to a WiFi
// connection it would add more cost to the user's data charges. Since the amount of data expected is
// less than 1K in size, it shouldn't take more than 5 seconds to download. If it was larger than 1K
// this would be implemented as a loop to repeatedly download the data in the background every 15 minutes.
- (id)init
{
    if (self = [super init]) {
        googleFinanceUrl = @"http://finance.google.com/finance/info?client=ig&q=NSE:AXP";
        downloadHadErrors = NO;
        downloadFailedErrorString = nil;
        stockPriceFilePath = nil;
        stockPriceFileName = nil;
        stockPriceFileFullFileSpec = nil;
        [self createStockPriceFullFileSpec];
    }
    return self;
}

@end

Answer: For having never done Objective-C before, this looks pretty good! Here are some things I would do differently.

Library Name

What is PCI7? For an acronym used in every class in your example, it would have been nice to have an explanation somewhere of what it means!

Data Model

In the header, I would not #import <UIKit/UIKit.h> for just 1 member function. I would forward declare UIAlertController and then do the #import in the source file. The reason I'd do it that way is that any file that imports this header will then also import UIKit.h, which is a fairly large header to import.

I would also use more Objective-C style naming. This is something you'll see more of as you use Objective-C more often. I'd make IsGpsAvailable have a lowercase first letter and capitalize all of GPS as in the other method names. I'd probably also make most of those methods into properties. But those are minor decisions.

More importantly, I'd have the library return something more useful than strings. Strings are a pain to work with for stuff that's not textual. For the GPS location data, I'd return a CLLocation*, since that's what most other methods that deal with locations take, or at least a CLLocationCoordinate2D if that would be more appropriate. The caller can decide if they want to display it as a string, do a calculation with it, or pass it to another method. Likewise with the battery state and the stock price.

Lastly, I'd advise you to make your BOOLs positive rather than negative. It becomes confusing to understand otherwise. Was it not able to establish the connection? Yes it was not able to do that. Wait… what?
I'd make it look like this:

#import <Foundation/Foundation.h>

@class UIAlertController;
@class CLLocation;

@interface PCI7DataModelLibrary : NSObject

@property (readonly) BOOL isGPSAvailable;
@property (readonly) CLLocation* GPSLocationData;
@property (readonly) UIDeviceBatteryState batteryState;
@property (readonly) float batteryLevel;
@property (readonly) NSString* networkAccessData;
@property (readonly) BOOL connectionEstablished;

-(id)init;
-(UIAlertController*)GPSAlerters;

@end

(Note that UIDeviceBatteryState is an enum, not an object, so it is held by value rather than behind a pointer.)

That's all I have time for tonight.
{ "domain": "codereview.stackexchange", "id": 25569, "tags": "programming-challenge, interview-questions, objective-c, ios" }
How to come up with a language that is recognizable but not co-recognizable?
Question: Forming a language that is recognizable but not co-recognizable. I'm having trouble coming up with a language with these properties. A language $A \subseteq \Sigma^*$ is recognizable iff $A = L(M)$ for some Turing machine $M$. Co-recognizable is defined the same way, except that it is the complement of $A$ that has to be recognizable. My question is: how do I come up with a language that I know will be recognizable but not co-recognizable?

Answer: Remember that there are three things a Turing machine can do, given an input. It can:

- Accept
- Reject
- Diverge (infinite-loop)

"Recognizable" means "we can accept on strings in our language, and reject-or-diverge on anything else". "Co-recognizable" means "we can reject-or-diverge on strings in our language, and accept on anything else". Or, equivalently, "we can accept-or-diverge on strings in our language, and reject on anything else".

So the key is, intuitively, to find a language so that a TM always diverges on (some) strings not in the language. A classic example is the Halting Problem. (To be more specific, the set of encoded Turing machines that halt on empty input, for a given encoding.) You can recognize this language by simulating the machine, then accepting. But you can't recognize its complement.

(Formally, you have to do a bit more work. You have to prove that the language is undecidable: that no Turing machine can always say "no" correctly, even if it can always say "yes" correctly. Usually this means a reduction from the Halting Problem or an appeal to Rice's Theorem.)
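To make the accept/diverge asymmetry concrete, here is a small sketch in Python. It is my own illustration, not part of the answer: instead of a real Turing machine it uses a hypothetical toy one-counter machine, but the semi-decider has exactly the behavior described above — it accepts halting programs and diverges (never returns) on non-halting ones, so it can never say "no".

```python
def step(prog, pc, c):
    """One step of a toy one-counter machine; returns None on halt."""
    op = prog[pc]
    if op[0] == "halt":
        return None
    if op[0] == "inc":
        return pc + 1, c + 1
    if op[0] == "jmp":
        return op[1], c
    if op[0] == "dec_jnz":  # decrement; jump if the counter is still nonzero
        c = max(c - 1, 0)
        return (op[1], c) if c > 0 else (pc + 1, c)

def recognize_halts(prog):
    """Semi-decider for 'prog halts': accepts halting programs,
    diverges (never returns) on non-halting ones -- it cannot reject."""
    pc, c = 0, 0
    while True:
        nxt = step(prog, pc, c)
        if nxt is None:
            return True   # accept
        pc, c = nxt       # otherwise keep simulating, possibly forever
```

For instance, `recognize_halts([("inc",), ("halt",)])` accepts, while `recognize_halts([("jmp", 0)])` would loop forever — exactly the "diverge" branch that makes the complement unrecognizable.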
{ "domain": "cs.stackexchange", "id": 14096, "tags": "turing-machines, church-turing-thesis" }
What citrate is a common anticoagulant?
Question: When I underwent plateletpheresis, something that the staff called "citrate" was added to my blood as an anticoagulant. Everything I can find about this product online refers to it as just "citrate". But from what little I know about chemical names, citrate is an ion, normally found not by itself but rather as magnesium citrate, potassium citrate, calcium citrate, vel sim. What's the other half of this anticoagulant: which citrate is it?

Answer: Per the NIH:

ANTICOAGULANT CITRATE DEXTROSE A...
...
Citric Acid, anhydrous, USP 0.073 g
Sodium Citrate, dihydrate, USP 0.220 g
Dextrose, monohydrate, USP 0.245 g
Water for Injection, USP q.s.
pH: 4.5 – 5.5

They used the citrate ion in the form of citric acid and sodium citrate. The citric acid is there for chelating calcium, while the sodium citrate does this too and acts as a buffer to prevent the citric acid from lowering your blood pH too much when injected, as this would be painful and hazardous. Sodium is naturally in your blood and will not interfere with action potentials, and it is an abundant and cheap ion to use in manufacturing, so it makes a logical counter ion for the citrate buffer.
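As a rough consistency check on the quoted recipe (my own back-of-the-envelope addition, not from the answer), Henderson–Hasselbalch around citric acid's second pKa reproduces a pH inside the labeled 4.5–5.5 range. The molar masses and pKa are standard reference values I have supplied; the calculation ignores polyprotic speciation and ionic strength, so it is only an estimate.

```python
import math

citric_acid_g = 0.073        # anhydrous citric acid, per the label
na_citrate_g  = 0.220        # trisodium citrate dihydrate, per the label
M_CITRIC_ACID = 192.12       # g/mol, C6H8O7 (standard value)
M_NA_CITRATE_2H2O = 294.10   # g/mol, Na3C6H5O7 . 2H2O (standard value)
PKA2 = 4.76                  # citric acid's second dissociation constant

acid_mol = citric_acid_g / M_CITRIC_ACID
base_mol = na_citrate_g / M_NA_CITRATE_2H2O

# Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])
ph = PKA2 + math.log10(base_mol / acid_mol)
```

The estimate comes out near 5, comfortably within the 4.5–5.5 printed on the label.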
{ "domain": "chemistry.stackexchange", "id": 10889, "tags": "biochemistry, ions" }
Why is 'Manhattan distance' a better heuristic for 15 puzzle than 'number of tiles misplaced'?
Question: Consider two heuristics $h_1$ and $h_2$ defined for the 15 puzzle problem as:

$h_1(n)$ = number of misplaced tiles

$h_2(n)$ = total Manhattan distance

Could anyone tell why $h_2$ is a better heuristic than $h_1$? I would like to know why the number of nodes generated for $h_1$ is greater than that for $h_2$. Also, why does the number of nodes increase drastically for both heuristics as we go deeper into the state space?

Source: Informed Search

Answer: There probably will be no formal proof; probably the only way to tell which is better is through experiments. But some intuition seems possible.

$h_1$ only takes into account whether a tile is misplaced or not, but it doesn't take into account how far away that tile is from being correct: a tile that is 1 square away from its ultimate destination is treated the same as a tile that is far away from where it belongs. In contrast, $h_2$ does take this information into account. Instead of treating each tile as either "correct" or "incorrect" (a binary decision), $h_2$ introduces shades of grey that take into account how far the tile is from where it belongs. It seems plausible that this might possibly yield some improvement.

Of course, the only way to find out which one actually works better is to try the experiment. But this might give some intuition about why one might reasonably hope that $h_2$ could potentially be better than $h_1$.
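As a concrete illustration (my own sketch, not part of the answer), here are both heuristics in Python. Since every misplaced tile is at least one move away, $h_2 \ge h_1$ on every state, and the example state below shows how much tighter $h_2$ can be: a single misplaced tile gives $h_1 = 1$ but $h_2 = 6$.

```python
GOAL = tuple(range(1, 16)) + (0,)  # 0 = blank, row-major goal layout

def h1(state):
    """Number of misplaced tiles (blank excluded)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Total Manhattan distance (blank excluded), on a 4x4 board."""
    d = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        gi = GOAL.index(t)  # goal position of tile t
        d += abs(i // 4 - gi // 4) + abs(i % 4 - gi % 4)
    return d

# Hypothetical state: only tile 1 is out of place, parked in the far corner.
s = (0, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 1)
```

Both heuristics are admissible, but because $h_2$ dominates $h_1$, A* with $h_2$ never expands more nodes than A* with $h_1$.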
{ "domain": "cs.stackexchange", "id": 9977, "tags": "algorithms, heuristics" }
Does a vaccine reduce the contagion "efficiency"?
Question: Note: I am specifically interested in the question in the context of COVID, but general information is welcome as well. If someone vaccinated still catches COVID, is their capacity to infect others smaller, the same, or higher compared to someone not vaccinated? In other words, is there a relationship between the "efficiency" of infecting others and the fact that someone was vaccinated (but still got infected)?

Answer: Each individual case will be different, but in general it is useful to ask "why did the vaccinated person still catch the Covid disease?"

If they caught the disease because the vaccine had essentially no effect on them (for example, some immunocompromised individuals who cannot generate an appropriate immune response), then they are likely to be at least as contagious as an unvaccinated individual. In fact, they may carry a higher viral load and be more infectious because of their underlying impaired immune response.

It's clear now that those who are vaccinated and successfully develop a targeted immune response against the covid virus (almost everyone for the two current mRNA vaccines) tend to have lower viral loads even if they are infected with the covid virus. This is presumably the reason why the incidence and severity of Covid disease is dramatically lower among vaccinees. The lower viral loads do indeed lessen the chance of infecting others with the covid virus, but do not completely eliminate it.
{ "domain": "biology.stackexchange", "id": 11433, "tags": "vaccination, infection, covid" }
What does the output of an encoder in encoder-decoder model represent?
Question: So in most blogs or books touching upon the topic of encoder-decoder architectures, the authors usually say that the last hidden state(s) of the encoder is passed as input to the decoder and the encoder output is discarded. They skim over that topic, only dropping that sentence about encoder outputs being discarded, and that's it. It makes me confused as hell, and even more so because I'm also reading that in transformer models the encoder output is actually fed to the decoder — but since that's the only thing coming out of a non-RNN encoder, no surprise there. How I understand it all is that in transformer architectures the encoder returns "enriched features". If so, then in the classical E-D architecture the encoder returns just features. Why then is the output of the encoder model ignored in the non-transformer architecture? What does it represent?

Answer: Encoder-decoder with RNNs

With RNNs, you can either use the hidden state of the encoder's last time step (i.e. return_sequences=False in Keras) or use the outputs/hidden states of all the time steps (i.e. return_sequences=True in Keras):

If you are just using the last one, it will be used as the initial hidden state of the decoder. With this approach, you are training the model to cram all the information of the source sequence into a single vector; this usually results in degraded result quality.

If you are using all the encoder states, then you need to combine them with an attention mechanism, like Bahdanau attention or Luong attention (see their differences). With this approach, you have N vectors to represent the source sequence, and it gets better results than with just the last hidden state, but it requires keeping more things in memory. The output at every time step is a combination of the information of the token at that position and the previous ones (because RNNs process data sequentially).
Encoder-decoder with Transformers The encoder output is always the outputs of the last self-attention block at every time step. These vectors are received by each decoder self-attention layer and combined with the target-side information. The information of all tokens of the encoder is combined at every time step through all the self-attention layers, so we don't obtain a representation of the original tokens, but a combination of them.
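The two RNN options above can be sketched numerically. This is my own stand-in (random NumPy arrays play the role of encoder hidden states rather than a real Keras RNN): `enc_last` is what you get with return_sequences=False, while a Luong-style dot-product attention combines all T states into a context vector, as the answer describes.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                      # source length, hidden size

# Stand-in encoder states: one hidden vector per source time step
# (what return_sequences=True would give you).
enc_states = rng.normal(size=(T, d))
enc_last = enc_states[-1]        # return_sequences=False keeps only this

def luong_dot_attention(dec_state, enc_states):
    """Luong-style dot-product attention over all encoder states."""
    scores = enc_states @ dec_state           # (T,) one score per position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over source positions
    context = weights @ enc_states            # (d,) weighted sum of states
    return context, weights

dec_state = rng.normal(size=d)               # current decoder hidden state
context, weights = luong_dot_attention(dec_state, enc_states)
```

The context vector has the same size as a single hidden state, but which source positions it draws from changes at every decoding step — that is exactly the extra capacity the last-state-only approach gives up.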
{ "domain": "datascience.stackexchange", "id": 11556, "tags": "deep-learning, transformer, encoder" }
On Christoffel symbol and vector fields
Question: Take the defining equation of Christoffel symbols: $$\nabla_{\frac{\partial}{\partial x^\mu}}\frac{\partial}{\partial x^\nu}=\Gamma^{\sigma}_{\mu\nu}\frac{\partial}{\partial x^{\sigma}}$$ Both sides of the above definition are vector fields, in fact, the right side being a linear combination of coordinate vector fields with the coefficients of the combination being precisely the Christoffel symbols $\Gamma^{\sigma}_{\mu\nu}$ that do not transform as tensor. The above fact motivates my following question: If one has a vector field $X$ written out in a chart $x^\mu$ as $X=X^{\mu}\frac{\partial}{\partial x^\mu}$, is it the case that the smooth functions $X^\mu$ on the manifold should always transform as a vector? The right side of the definition above for Christoffel symbols suggest that this claim is not true. Answer: If you have some $\{f^\mu\}\subset C^\infty(U)$ where $U$ is some coordinate domain, then $$\tag{$1$}X=f^\mu \partial_\mu$$ is indeed a vector field in $U$. If with respect to some other coordinate system, we have $$X=g^{\mu'}\partial_{\mu'},$$ then we will of course have $$\tag{$2$}g^{\mu'}=\frac{\partial x^{\mu'}}{\partial x^\mu} f^\mu.$$ But if $\{f^\mu\}$ has a transformation law other than (2), it does not have to be that $$g^{\mu'}=f^{\mu'}.$$ So basically, if you define a vector field by (1) and then transform, you have to forget whatever auxiliary transformation law the multiplet $\{f^\mu\}$ has.
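For contrast with the tensorial transformation law of vector components, it may help to record the standard transformation law of the coefficients in the defining equation above, which picks up an inhomogeneous second-derivative term (included here for completeness; this is the usual textbook formula):

```latex
\Gamma^{\sigma'}_{\mu'\nu'}
  = \frac{\partial x^{\sigma'}}{\partial x^{\sigma}}
    \frac{\partial x^{\mu}}{\partial x^{\mu'}}
    \frac{\partial x^{\nu}}{\partial x^{\nu'}}\,
    \Gamma^{\sigma}_{\mu\nu}
  + \frac{\partial x^{\sigma'}}{\partial x^{\sigma}}
    \frac{\partial^{2} x^{\sigma}}{\partial x^{\mu'}\,\partial x^{\nu'}}
```

The second term is exactly what prevents $\Gamma^{\sigma}_{\mu\nu}$ from being the components of a tensor, even though each side of the defining equation is a perfectly good vector field.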
{ "domain": "physics.stackexchange", "id": 43873, "tags": "general-relativity, differential-geometry, tensor-calculus, coordinate-systems, vector-fields" }
Do quantum observables demand both Heisenberg and Schrodinger representations?
Question: I will consider an observable $\mathcal{O}\in\mathcal{L}(\mathscr{H})$ and, for simplicity, let me assume $\mathscr{H}$ is finite dimensional. Now, for some time-independent Hamiltonian $\mathcal{H}$ I can evolve $\mathcal{O}\to\mathcal{O}(t)=U_t^\dagger\mathcal{O}U_t$, where $U_t=e^{-it \mathcal{H}}$. I then define $\delta\mathcal{O}(t)=\mathcal{O}(t)-\mathcal{O}$, which is Hermitian since $\mathcal{O}$ is Hermitian. It sounds odd to call this object an observable; I don't see how to represent it in a time-independent frame, that is, to the best of my knowledge it can be defined only in the Heisenberg picture. In this simple, finite-dimensional case, do legitimate quantum observables demand a representation in both the Heisenberg and Schrodinger pictures? In particular, is $\delta\mathcal{O}(t)$ an observable and, if not, what is the catch? From a more operational perspective, it also seems to me that acquiring the outcomes of such an operator would demand measurements which are non-local in time. So, a related question: is $\delta \mathcal{O}(t)$ measurable? I am aware of cases in which one can assign a two-point-measurement protocol to obtain such outcomes, but I am interested in the general case in which I do not have to assume anything about the state of the system. Answer: Strictly speaking, this operator $\delta\mathcal{O}$ is ill-defined because you are taking the difference of operators defined on different (but isomorphic) Hilbert spaces. This point is usually not emphasized, but if we were being careful, the strict prescription of quantum mechanics is to assign a Hilbert space to each value of time (constant-time slices if you're doing field theory). The time-evolution operator, with this understanding, can be interpreted as the linear map from the Hilbert space at one time to the Hilbert space at a different time.
Since the evolution operator is invertible, all these Hilbert spaces are isomorphic, and hence the distinction is usually forgotten, but this is also the reason why canonical commutation relations are always between operators at the same time. So, the operator $\delta\mathcal{O}$ is being built from operators defined as acting on different Hilbert spaces and hence is ill-defined as an operator. The loophole, however, is that its expectation value is still well-defined as being the difference of the expectation values.
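The "expectation value is still well-defined" point is easy to check numerically in a two-level toy model (my own illustration, not from the question: $H=\sigma_z$, $\mathcal{O}=\sigma_x$): $\delta\mathcal{O}$ comes out Hermitian, and its expectation value equals the difference of the two expectation values.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)    # observable O
sz = np.array([[1, 0], [0, -1]], dtype=complex)   # Hamiltonian H
t = 0.7

# U_t = exp(-i t H); for diagonal H this is a diagonal matrix of phases
U = np.diag(np.exp(-1j * t * np.diag(sz)))
O_t = U.conj().T @ sx @ U         # Heisenberg-evolved observable
dO = O_t - sx                     # the operator delta-O from the question

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
lhs = (psi.conj() @ dO @ psi).real                # <psi| dO |psi>
rhs = (psi.conj() @ O_t @ psi).real - (psi.conj() @ sx @ psi).real
```

For this state one can work out $\langle\delta\mathcal{O}\rangle=\cos 2t-1$ by hand, which the numerics reproduce.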
{ "domain": "physics.stackexchange", "id": 74132, "tags": "quantum-mechanics, observables" }
C++ custom exception handling using std::error_category and std::system_error
Question: I want to make my own custom exception handling and I am curious if I am going the right path, maybe some of you could suggest to me how I could improve it? Or maybe I have went the wrong direction? (Also it would be amazing to know how I could make this piece of code more of a modern C++.) Here is a simple namespace with classes, which inherits from std::error_category and std::system_error. #include <string> #include <system_error> #include <iostream> namespace CustomException { class ErrorC : public std::error_category { using Base = std::error_category; public: char const *name() const noexcept override { return "App error"; } std::error_condition default_error_condition(int const code) const noexcept override { (void) code; return {}; } bool equivalent(int const code, std::error_condition const &condition) const noexcept override { (void) code; (void) condition; return false; } bool equivalent(std::error_code const &code, int const condition) const noexcept override { return Base::equivalent(code, condition); } std::string message(int const condition) const override { return "An application error occurred, code = " + std::to_string(condition); } constexpr ErrorC() : Base{} {} }; auto app_error_category() -> ErrorC const & { static ErrorC the_instance; return the_instance; } enum class BreakErrorCode { FAILED_INIT_BREAKS = 1, FAILED_TO_LINK_BREAKS = 2, }; class BreakError : public std::system_error { using Base = std::system_error; public: BreakErrorCode appErrorCode() const { return static_cast<BreakErrorCode>(code().value()); } explicit BreakError(const BreakErrorCode code) : Base{(int) code, app_error_category()} { } BreakError(const BreakErrorCode code, const std::string &description) : Base{(int) code, app_error_category(), description} { } }; enum class EngineErrorCode { FAILED_INIT_ENGINE = 101, MOTHER_BOARD_FAILED_LINKING = 102, }; class EngineError : public std::system_error { using Base = std::system_error; public: EngineErrorCode 
appErrorCode() const { return static_cast<EngineErrorCode>(code().value()); } EngineError(const EngineErrorCode code) : Base{(int) code, app_error_category()} { } EngineError(const EngineErrorCode code, const std::string &description) : Base{(int) code, app_error_category(), description} { } }; enum class ControlErrorCode { CONTROL_GIMBAL_FAILED = 301, OBTAIN_CONTROL_AUTHORITY_FAILED = 302, }; } Main.cpp, which invokes methods throwing exceptions from the previously declared namespace. using namespace CustomException; void initEngine() { throw EngineError(EngineErrorCode::FAILED_INIT_ENGINE, "Failed init engine"); } void initBreaks() { throw BreakError(BreakErrorCode::FAILED_INIT_BREAKS, "Failed init breaks"); } int main() { try { initEngine(); initBreaks(); } catch (const BreakError &error) { std::cout << error.what() << std::endl; } catch (const EngineError &error) { std::cout << error.what() << std::endl; } catch (...) { std::cout << "Caught something unexpected error" << std::endl; } return 0; } I would very much appreciate any kind of comments / help. Answer: Here are some things that may help you improve your code. Separate interface from implementation The interface goes into a header file and the implementation (that is, everything that actually emits bytes including all functions and data) should be in a separate .cpp file. The reason is that you might have multiple source files including the .h file but only one instance of the corresponding .cpp file. In other words, split your existing CustomException namespace into a .h file and a .cpp file. Correctly override functions The code currently contains this as part of class ErrorC: bool equivalent(int const code, std::error_condition const &condition) const noexcept override { (void) code; (void) condition; return false; } This is not right for several reasons. First, the standard says that this should return true if the error is equivalent, but this does not do that.
Instead, I'd write: return default_error_condition(code) == condition; Better still, omit it, since this is not a pure virtual method in std::error_category. Don't create pointless classes The CustomException class is doing very little here and I would strongly recommend omitting it and just deriving directly from std::error_category. See C.120 for further guidance. Don't use all caps for enum name ALL CAPS have been traditionally used for macros. To avoid misleading the reader, don't use them if it's not a macro. See ES.9. Fix the spelling If the machine has "breaks" it is broken. If it has "brakes" it has the ability to slow down. These homonyms are easily confused, but it's important to make sure you don't have spelling errors, especially in interface code, so that you don't confuse users of your code. Consider combining error codes It appears that you are intending to have non-overlapping and unique error codes for each type of error, but that is not assured because the errors are in three different enum classes. I'd suggest combining them. Reconsider your classes If the error codes were combined, as suggested above, then the differentiation among them would be by the error code. Having different classes doing exactly the same thing makes little sense to me. I'd suggest a single error class derived from std::error_category would be sufficient.
{ "domain": "codereview.stackexchange", "id": 40630, "tags": "c++, error-handling" }
'Genetic algorithm' implementation for constructing a battle mech
Question: This is a simple but complete console application where I experiment with genetic algorithm to try to figure out the best way to construct a battle mech that would win in a simple turn-based combat. I welcome your criticism and pointers about what I could improve in the code above: architecture-wise, technique-wise, presentation-wise, and whatever else catches your interest. To provide at least a sample code as per the local rules, this is the method to make two mechs meet in combat: // Match::match() // ============== // A static method that can be used to execute a match between the two provided mechs. // Return value: 0 = draw, 1 = mech #1 wins, 2 = mech #2 wins. int Match::match (Mech& m1, Mech& m2) { m1.resetCombatValues(); m2.resetCombatValues(); int turn = 0; static const int turnLimit = 25; // Each turn: while (turn < turnLimit) { m1.actCombatTurn (m2); m2.actCombatTurn (m1); bool mech1Alive = m1.isAlive(); bool mech2Alive = m2.isAlive(); if (!mech1Alive && !mech2Alive) return 0; if (!mech1Alive) return 2; if (!mech2Alive) return 1; turn++; } // Turn limit reached. return 0; } Answer: Here's a summary of my comments in the answer format: Instead of coded return values, I'd consider using an enumeration for clarity: http://en.cppreference.com/w/cpp/language/enum // A good rule of thumb is that if you have to document a special meaning of (implicit) enumerators, like "0 = draw, 1 = mech #1 wins, 2 = mech #2 wins", then it's probably better to use an explicit enumeration. I would also suggest using a namespace non-member function instead of a static member function whenever possible: https://stackoverflow.com/a/1435105/859774 Further, unless you're supporting negative turns count turnLimit or want turn to go backward (would that even make sense in your context? 
BTW, turn++ looks awkward, change it to ++turn), I would use size_t instead of int for turn and turnLimit: http://en.cppreference.com/w/cpp/types/size_t // NOTE: in fact, the cases where int is the right type to choose are significantly rarer than most beginning developers seem to assume -- as a good rule of thumb, whenever you think you should use an int, you should always stop and think whether it's actually a good idea -- e.g., are you sure you really want negative values? I also wouldn't use mech1Alive or mech2Alive but rather their opposites (e.g., mech1Dead or mech2Dead) -- note that you're always negating them in your if statements, so it would improve code clarity to do so straight away, at the outset. I've just looked at the online version, BTW. Your next step should be to remove all uses of new (and, consequently, all manual invocations of delete) and all of the resource-owning raw pointers: preferably change them to (stack-allocated) automatic variables (using references, preferably to const, when avoiding copies is necessary), then only for the ones for which you can't possibly do that (if there are any, don't just assume this, but carefully verify that free-store/heap dynamic allocation is in fact absolutely required!) use std::unique_ptr, and finally, as a last resort, use std::shared_ptr (and, as a last resort after this last resort ;], in the extremely rare case you have need to break a cycle, consider std::weak_ptr). See: https://codereview.stackexchange.com/a/25721/24670
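The first suggestion (named outcomes instead of coded return values, with the flags stored in negated form) is language-agnostic; here is a sketch in Python rather than C++ `enum class`, with names of my own invention:

```python
from enum import Enum

class MatchResult(Enum):
    """Named match outcomes instead of documented magic integers."""
    DRAW = 0
    MECH_1_WINS = 1
    MECH_2_WINS = 2

def resolve_turn(mech1_dead, mech2_dead):
    """End-of-turn resolution. Note the flags are the negated 'alive'
    booleans, as the review suggests, so no negation is needed here."""
    if mech1_dead and mech2_dead:
        return MatchResult.DRAW
    if mech1_dead:
        return MatchResult.MECH_2_WINS
    if mech2_dead:
        return MatchResult.MECH_1_WINS
    return None  # match continues
```

A caller can now write `if result is MatchResult.DRAW:` instead of remembering what `0` means.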
{ "domain": "codereview.stackexchange", "id": 3845, "tags": "c++" }
Fusing pressure/depth sensor/IMU for Nav2
Question: Can I fuse a depth sensor or a pressure sensor's readings with an IMU to publish /odom (odometry) in the Nav2 stack? I don't have wheel encoders in my robot, and I don't know what to use. Any resources? Answer: If you are asking for a package that will do that, you can use robot_localization, but you'll run into some trouble. Your IMU, at most, will give you linear acceleration, rotational velocity, and absolute orientation. A pressure sensor, once converted to a depth measurement, will provide Z position. Without a reference for either X and Y position or at least X and Y velocity, your state estimate is going to explode. We're also aiming to deprecate robot_localization in favor of fuse, but we need to implement some 3D sensor models first. https://docs.ros.org/en/noetic/api/robot_localization/html/index.html https://docs.ros.org/en/noetic/api/fuse_doc/html/index.html
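The "state estimate is going to explode" warning can be illustrated with a toy Kalman covariance propagation (this is my own sketch, not robot_localization itself, and all noise values are made up): an axis that receives position updates, like Z from the pressure sensor, stays bounded, while an axis with no position or velocity reference grows without bound.

```python
import numpy as np

dt, q, r = 0.1, 0.05, 0.02
F = np.array([[1, dt], [0, 1]])                       # [position, velocity] model
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])                   # process noise
H = np.array([[1.0, 0.0]])                            # sensor measures position only

def step(P, measured):
    P = F @ P @ F.T + Q                               # predict
    if measured:                                      # Kalman measurement update
        S = H @ P @ H.T + r
        K = P @ H.T / S
        P = (np.eye(2) - K @ H) @ P
    return P

Pz = np.eye(2) * 0.1   # Z axis: gets depth measurements
Px = np.eye(2) * 0.1   # X axis: IMU integration only, no reference
for _ in range(500):
    Pz = step(Pz, measured=True)
    Px = step(Px, measured=False)
```

After 50 simulated seconds the unreferenced position variance is orders of magnitude larger than the measured one — the filter has effectively lost X.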
{ "domain": "robotics.stackexchange", "id": 38828, "tags": "ros2, imu, ros-humble, sensor-fusion, nav2" }
ros2 master and slave
Question: Hello, I would like to set up a ROS master machine and a ROS slave machine using ros2 dashing. I cannot find anything at all online, only unanswered questions. So how can I do this? Do I need to set env variables like in ROS1: ROS_HOSTNAME and ROS_IP? I understand that with the DDS things might be very different now, so is there a way to do this? Thanks a lot! Originally posted by Mackou on ROS Answers with karma: 196 on 2020-09-30 Post score: 0 Original comments Comment by gvdhoorn on 2020-09-30: Just to clarify: ROS never had a "master and slave" architecture. There is only a master in the sense that it knows everything about nodes which have registered themselves as participants in the ROS node graph. Other nodes can then query the master for this information. But after that, it's not involved at all any more, and communication (of whatever sort) is peer-to-peer (in ROS 1). Describing a ROS 1 system as "master - slave" is really not something which makes sense. It would be almost the same as saying your computer or phone are slaves to a DNS server. Which is not the case at all. Answer: There is no master in ROS 2, so you do not have to set it. But if you run the ROS Bridge, then you have to say where to find the ROS 1 master. Originally posted by duck-development with karma: 1999 on 2020-09-30 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Mackou on 2020-09-30: Thanks! You are right, I kept experimenting and my two computers on the network managed to find the nodes by themselves and I didn't have to do anything. Is it because of how the DDS is implemented? Thanks! Comment by duck-development on 2020-10-01: Yes. The DDS handles the service discovery.
{ "domain": "robotics.stackexchange", "id": 35585, "tags": "ros" }
Leaf formation on unknown tree: cocoon or something else?
Question: A small tree in my yard has leaves which appear to double over themselves. The first two pictures show this phenomenon, and the third is the same tree to maybe aid identification. I can't absolutely guarantee when this first appeared, but I believe it was this growing season, and I know it's been there at least a month (unless it's recurring, I am not tracking the individual leaves). My first thought is cocoons or some other insect behavior. Can anyone tell me what it is? (What kind of tree it is will also be valuable, in case we can't find the answer to the leaf question here) EDIT: Location is outside Boston, MA (USA). Answer: Based on the location and assuming this isn't a domesticated hazelnut there are two likely hazelnut species. The lack of the distinctive "beak" seen on the Beaked hazelnut means this is most likely an American hazelnut (Corylus americana): The hairs along the young branch are also consistent with this (see for example the Plant Guide from the USDA). Note, however, there are hybrids between American and European hazelnuts so it is beyond my ability to be completely sure by visual inspection alone.
{ "domain": "biology.stackexchange", "id": 9992, "tags": "species-identification, botany, trees" }
From which dimensionful constants does proton mass arise?
Question: It is well known that most of the mass of the proton (or any other hadron with light quarks) is not made up of the quark masses, but is dynamically generated by the QCD mess inside. I've also heard that, even if the quarks were massless, protons (and other hadrons) would still have a nonzero mass. However, if the proton mass does not (for the most part) arise from the quark masses, from which dimensionful constants does it arise? I've heard that the proton mass arises from spontaneous symmetry breaking of scale invariance. However, this is a troublesome explanation, or a non-explanation at best, because it opens more questions: If a theory is scale invariant, how can it pick a scale when breaking this symmetry? The proton mass is a constant, so how can the scale invariance be broken across the entire universe in the same way? Is there a field, very resistant to change, that permeates all of space to ensure the constancy of the proton mass? Answer: I think it's easiest to understand this if one has a minimal understanding of QFT. I'm not sure about your background knowledge but hopefully this isn't gibberish to you. The QCD Lagrangian for massless quarks is given by, \begin{equation} {\cal L} = - g \sum_i \bar{\psi} _i A _\mu \gamma ^\mu \psi _i - \frac{1}{4} F _{ \mu \nu } F ^{ \mu \nu } \end{equation} where the fields are $ A _\mu $ and $ \psi _i $. The only constant in the equation is the coupling constant, $g$. Therefore, we see that there is no single scale in the Lagrangian. Naively one would say that the theory is scale invariant. However, there is a subtlety. We haven't fully specified the theory. We have yet to say what the value of the coupling constant is. The problem is that QFT causes the strength of an interaction to depend on the scale at which it's measured.
Luckily, we know how to calculate how a coupling changes with scale (this is done in every full-year QFT course), \begin{equation} \frac{ d \alpha }{ d \log \mu } = - \frac{ b }{ 2\pi } \alpha ^2 \end{equation} where $ \alpha \equiv g ^2/4 \pi $ and $ b $ is a calculable number. For QCD with the SM fermions we have, \begin{equation} b = 7 \end{equation} From here it's easy to solve the differential equation above and get the coupling as a function of the scale, $ \mu $, \begin{align} \frac{1}{ \alpha ( \mu ) } &= \frac{1}{ \alpha ( \mu _0 ) } + \frac{ b }{ 2\pi } \log \frac{ \mu }{ \mu _0 } \\ \alpha_s (\mu) &= \frac{ \alpha _s ( \mu _0 ) }{ 1 + \alpha _s ( \mu _0 ) \frac{ b }{ 2\pi } \log \frac{ \mu }{ \mu _0 } } \end{align} Therefore, we can measure the coupling at some scale and then know what it is at every scale. As pointed out by the OP, we can already see breaking of scale invariance since the couplings depend on scale. Now we move on to the relation to $ \Lambda_{QCD} $. This is conventionally defined as the scale where the coupling becomes infinite. From the running above we see this occurs when, \begin{equation} \mu \equiv \Lambda_{QCD} = \mu _0\exp \left[ - \frac{ 2\pi }{ b \alpha _s ( \mu _0 ) } \right] \end{equation} Here we see that the scale depends only on the field content (through $b$) and Nature's choice for the coupling.
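The one-loop running and the resulting $\Lambda_{QCD}$ are easy to evaluate numerically. In this sketch I take $\alpha_s(M_Z)\approx 0.118$ as the measured input (a number not given in the answer), and keep the answer's single value $b=7$ at all scales; with those simplifications the one-loop $\Lambda$ comes out around 45 MeV rather than the realistic ~200 MeV, which requires flavor thresholds and higher-loop terms.

```python
import math

b = 7.0                       # beta coefficient used in the answer
alpha_mz, mz = 0.118, 91.2    # approximate measured coupling at the Z mass (GeV)

def alpha_s(mu):
    """One-loop running: alpha(mu) = alpha(mu0) / (1 + alpha(mu0) * b/(2*pi) * log(mu/mu0))."""
    return alpha_mz / (1 + alpha_mz * b / (2 * math.pi) * math.log(mu / mz))

# Scale at which the one-loop coupling diverges (the Lambda_QCD of the answer):
lam = mz * math.exp(-2 * math.pi / (b * alpha_mz))
```

The coupling grows as the scale decreases and blows up as `mu` approaches `lam`, which is the dynamically generated scale the proton mass inherits.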
{ "domain": "physics.stackexchange", "id": 18213, "tags": "quantum-chromodynamics, symmetry-breaking, physical-constants, protons, scale-invariance" }
Why is my Bubblesort implementation faster than my Quicksort code?
Question: Here is my Bubblesort code: public static List<int> BubbleSort(List<int> _digitList) { List<int> digitList = _digitList; bool didSwap = true; while (didSwap) { didSwap = false; for (int i = 0; i < digitList.Count - 1; i++) { if (digitList[i] > digitList[i + 1]) { int temp = digitList[i]; digitList[i] = digitList[i + 1]; digitList[i + 1] = temp; didSwap = true; } } } return digitList; } And here is my Quicksort Method, which is actually an implementation of the pseudocode on Wikipedia: public static List<int> Quicksort(List<int> array) { if (array.Count <= 1) { return array; //An array of Zero ot one elements is already sorted } int pivot = 0; List<int> less = new List<int>(); List<int> greater = new List<int>(); for (int i = 1; i < array.Count;i++ ) { if (array[i] <= array[pivot]) { less.Add(array[i]); } else { greater.Add(array[i]); } } List<int> combined = Quicksort(less); combined.Add(array[pivot]); combined.AddRange(Quicksort(greater)); return combined; } So for the List = {211, 16, 42, 166, 192, 2, 13, 81, 6, 1, 5, 115, 17, 67}; I get following Stopwatch values. Bubblesort: 00:00:00.0002873 Quicksort: 00:00:00.0003831 Does this mean my Quicksort code is poor or did I misunderstand the Stopwatch concept? Answer: It is normal in production quality qsort code to switch to another sorting method (maybe a unrolled bubble sort) when the size of the input is small (often 8 is used). QSort has very high overheads, but scales well, for a small input size the overheads are much more important than the scaling. If you run your code on an input that is 1000 items long, then I expect that your qsort would be faster than your bubble sort. Try writing a test problem that increases your input size in steps of 100 and then graph the results with both sorting methods. The other way to look at it, is to find the largest input that each sort method can sort in say 5 seconds. 
Also your qSort could be written to be a lot faster by not creating 3 new Lists; it is also very important to quickly choose a good item to pivot on. You do not even set the size of the lists when you create them, hence each list will have to be reallocated and copied many times as you add items to it. You have a qSort that is implemented in an inefficient way compared to a bubble sort that is close to the best implementation of bubble sort. Yet, as you confirmed in your comment, the qSort does better when you have over 10000 items, showing just how much better qSort scales.
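The scaling point can be checked without a stopwatch by counting comparisons instead of wall-clock time. This sketch mirrors both of the question's implementations in Python (rather than C#) and counts roughly one comparison per partition element for quicksort:

```python
import random

def bubble_sort(items):
    a, comps, swapped = list(items), 0, True
    while swapped:
        swapped = False
        for i in range(len(a) - 1):
            comps += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
    return a, comps

def quicksort(items):
    # First element as pivot, like the question's code
    if len(items) <= 1:
        return list(items), 0
    pivot, rest = items[0], items[1:]
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    sorted_less, c1 = quicksort(less)
    sorted_greater, c2 = quicksort(greater)
    return sorted_less + [pivot] + sorted_greater, c1 + c2 + len(rest)

random.seed(1)
data = [random.randrange(10_000) for _ in range(1_000)]
bubble_sorted, bubble_comps = bubble_sort(data)
quick_sorted, quick_comps = quicksort(data)
```

On 1,000 random items quicksort needs vastly fewer comparisons; on the question's 14-item list the counts are close enough that constant overheads dominate.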
{ "domain": "codereview.stackexchange", "id": 6743, "tags": "c#, sorting, quick-sort" }
Are there experimental observations of the Abraham-Lorentz force?
Question: The Abraham-Lorentz force is the force a classical charged particle exerts on itself due to its own electromagnetic field. It has a rather simple formula that reads $$ \vec{F}_\mathrm{AL} = \frac{2 q^2}{3 c^3} \vec{\dddot{x}} \,. $$ My question is the following: Is there any realizable context where a classical charged body can be observed to experience the Abraham-Lorentz force? It is unlikely that one could directly observe the immediate acceleration due to $\vec{F}_\mathrm{AL}$, but an indirect observation through long-term energy losses of the body might be observable. Answer: Ok, I would like to thank Andrew Steane and Vladimir Kalitvianski for their input. I have done some digging myself and I believe I have gathered enough material to compile an answer to the question from the following sources: The 2017/2018 notes of Kirk T. McDonald On the History of the Radiation Reaction, the 2016 paper by Di Piazza et al. Investigation of classical radiation reaction with aligned crystals, and the 2017 paper by Wistisen et al. Experimental Evidence of Quantum Radiation Reaction in Aligned Crystals. According to McDonald: We noted earlier that while the radiation reaction for oscillating currents has clear manifestation in the so-called radiation resistance of antennas, there is no experimental evidence for the classical radiation reaction of an individual electric charge. Now let me cite the abstract of Di Piazza et al.: The self-consistent underlying classical equation of motion including radiation-reaction effects, the Landau-Lifshitz equation, has never been tested experimentally, in spite of the first theoretical treatments of radiation reaction having been developed more than a century ago.
Here we show that classical radiation reaction effects, in particular those due to the near electromagnetic field, as predicted by the Landau-Lifshitz equation, can be measured in principle using presently available facilities, in the energy emission spectrum of 30-GeV electrons crossing a 0.55-mm thick diamond crystal in the axial channeling regime By the Landau-Lifschitz equation they mean the approximation where the term $\dddot{x}$ is replaced by the jerk felt by a hypothetical "test particle" accelerated in the very same external field but without any radiation reaction. (I.e., simply an approximation of the Abraham-Lorentz force.) The proposal of Di Piazza et al. was then experimentally realized by Wistisen et al. but they found that the classical approximation of the radiation reaction is not a valid model and that: The measured photon emission spectra show features which can only be explained theoretically by including both 1) quantum effects related to the recoil undergone by the positrons in the emission of photons and the stochasticity of photon emission, and 2) radiation-reaction effects stemming from the emission of multiple photons. 2021 Update: A follow-up paper by Nielsen et al. from 2020 under the title Radiation Reaction near the Classical Limit in Aligned Crystals showed that Hitherto, the experimental problem in validating the LL equation has been to achieve sufficiently strong fields for radiation reaction to be important without quantum effects being prominent. Notwithstanding, here we provide a quantitative experimental test of the LL equation by measuring the emission spectrum for a wide range of settings for 50 GeV positrons crossing aligned silicon single crystals near the (110) planar channeling regime as well as 40 GeV and 80 GeV electrons traversing aligned diamond single crystals near the ⟨100⟩ axial channeling regime. 
The experimental spectra are in remarkable agreement with predictions based on the LL equation of motion with small quantum corrections for recoil and, in case of electrons, spin and reduced radiation emission, as well as with a more elaborate quantum mechanical model. That is, it can be safely stated that at least the behaviour of beams of particles is consistent with the Abraham-Lorentz (Landau-Lifschitz) radiation-reaction force whenever the classical limit is applicable. Thus, as far as concerns the original question, we can state that there exist: Experimental realizations where ensembles of particles feel radiation reaction that can be modeled by the Abraham-Lorentz (or Landau-Lifschitz) formula. This includes radiation losses by beams of particles in particle accelerators or crystals, as well as the "radiation resistance" of currents in antennas. Experimental realizations of quantum radiation-reaction that cannot be reasonably modeled by the Abraham-Lorentz (or Landau-Lifschitz) formula. However, there currently seems to be no realizable experimental setup where effects of the Abraham-Lorentz force on a single classical body can be measured or observed.
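The "indirect observation through energy losses" idea rests on a simple identity worth checking numerically: for periodic motion, the work done by the Abraham-Lorentz force over one full period equals minus the energy radiated per the Larmor formula (Gaussian units, standard prefactor $2q^2/3c^3$). This is a consistency check of the formula, not an experiment:

```python
import numpy as np

q = c = A = w = 1.0                          # arbitrary units
k = 2 * q**2 / (3 * c**3)                    # Abraham-Lorentz / Larmor prefactor

# Harmonic motion x(t) = A cos(w t), sampled over exactly one period
t = np.linspace(0.0, 2 * np.pi / w, 200_001)
dt = t[1] - t[0]
v = -A * w * np.sin(w * t)                   # velocity
a = -A * w**2 * np.cos(w * t)                # acceleration
jerk = A * w**3 * np.sin(w * t)              # third derivative of x

work_AL = np.sum(k * jerk[:-1] * v[:-1]) * dt   # integral of F_AL . v over a period
radiated = np.sum(k * a[:-1]**2) * dt           # integral of the Larmor power
```

Integrating by parts, $\oint \dddot{x}\,\dot{x}\,dt = -\oint \ddot{x}^2\,dt$ over a full period, which is exactly what the numbers show — so long-term energy loss is the same whether computed from the self-force or from the radiated power.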
{ "domain": "physics.stackexchange", "id": 54789, "tags": "electromagnetism, electromagnetic-radiation, experimental-physics, jerk" }
HTML5 Video player
Question: I do HTML5 Video player with some controls. I have a button, where I change classname for make play, pause or replay button. I have a mute/unmute button, volume range slider, timer and fullscreen mode button. Maybe I can do some functions better or faster, and also, maybe I need to change comments? Logic: "use strict" doc = document video = doc.getElementById("video") video.controls = false ###* Video controls ### play_button = doc.getElementById("play-button") progress_bar = doc.getElementById("progress-bar") progress_load = doc.getElementById("progress-load") current_time_block = doc.getElementById("time-current") duration_block = doc.getElementById("time-duration") volume_button = doc.getElementById("volume-button") volume_range = doc.getElementById("volume-range") screen_button = doc.getElementById("screen-button") ###* # A video DOM currentTime property formatting. # @param {current_time} Video currentTime property. # @return {string} Time in the format 00:00. #### video_time_format = (current_time) -> seconds = Math.floor(current_time) minutes = Math.floor(current_time / 60) if minutes >= 10 then minutes = minutes else minutes = "0" + minutes if seconds >= 10 then seconds = seconds else seconds = "0" + seconds minutes + ":" + seconds ###* Get a video DOM duration property. ### video_duration = null get_video_duration = -> if video.duration video_duration = video.duration ###* Set video duration to video controls panel. ### video.addEventListener("loadedmetadata", -> duration_block.textContent = video_time_format(get_video_duration()) ) ###* # A helper function for update progress bar events. # Set video current time in video controls panel and progress bar. # @param {position} Percentage of progress. ### current_time_update = (position) -> current_time_block.textContent = video_time_format(video.currentTime) progress_load.style.width = position ###* # The value is converted into a percentage value using the video’s duration # and currentTime. 
### video.addEventListener("timeupdate", -> current_time_update(Math.floor((100 / video_duration) * video.currentTime) + "%")) ###* # A clickable progress bar. # Get x-coordinate of the mouse pointer, converted its into a time. ### progress_bar.addEventListener "click", ((event) -> mouseX = event.offsetX video.currentTime = mouseX * video_duration / progress_bar.offsetWidth current_time_update(mouseX + "px") ), false ###* # Start playback and change replay button to pause button. #### video_replay = -> video.currentTime = 0 video.play() play_button.classList.remove("md-replay") play_button.classList.add("md-pause") ###* # Rests video to start position and change play button to pause button. #### video_play = -> video.play() play_button.classList.remove("md-play-arrow") play_button.classList.add("md-pause") ###* # A nearest integer of video DOM currentTime property pluralize. # @param {current_time} A nearest integer of video DOM currentTime property. # @return {string} A pluralized time with title. #### pluralize_time = (current_time) -> cases = [2, 0, 1, 1, 1, 2] index = if current_time % 100 > 4 && current_time % 100 < 20 then 2 else cases[Math.min(current_time % 10, 5)] first_titles = ["Просмотрена ", "Просмотрено ", "Просмотрено "] second_titles = [" секунда", " секунды", " секунд"] first_titles[index] + current_time + second_titles[index] ###* # Stop playback and change pause button to play button. #### video_pause = -> video.pause() play_button.classList.remove("md-pause") play_button.classList.add("md-play-arrow") console.log pluralize_time(Math.floor(video.currentTime)) play_pause_toggle = -> if video.ended video_replay() else if video.paused video_play() else video_pause() play_button.addEventListener("click", play_pause_toggle) video.addEventListener("click", play_pause_toggle) ###* Change pause button to play. 
#### video.addEventListener("ended", -> play_button.classList.remove("md-pause") play_button.classList.add("md-replay") ) ###* # Sound turned on when volume button pressed. # Change `volume-off` button to `volume-up`. #### video_volume_unmuted = -> video.muted = false volume_button.classList.remove("md-volume-off") volume_button.classList.add("md-volume-up") ###* # Sound turned off when volume button pressed. # Change `volume-up` button to `volume-off`. #### video_volume_muted = -> video.muted = true volume_button.classList.remove("md-volume-up") volume_button.classList.add("md-volume-off") ###* # Sound turned on when sound is off and user press shortcut for unmute volume. # Call `video_volume_unmuted()` function. #### video_volume_on = -> video.volume = 0.1 volume_range.value = 0.1 video_volume_unmuted() ###* # Sound turned off when slider control off. # Change `volume-up` button to `volume-off`. #### video_volume_off = -> video.volume = 0 volume_button.classList.remove("md-volume-up") volume_button.classList.add("md-volume-off") ###* Sound turned on or off. ### volume_toggle = -> if video.volume == 0 video_volume_on() else if video.muted video_volume_unmuted() else video_volume_muted() volume_button.addEventListener("click", volume_toggle) change_volume = -> volume_range_value = parseFloat(volume_range.value) if volume_range_value == 0 video_volume_off() else video.volume = volume_range_value video_volume_unmuted() volume_range.addEventListener("input", change_volume) ###* # Launch into full screen mode. # Change `md-fullscreen` button to `md-fullscreen-exit`. #### launch_into_full_screen = -> screen_button.classList.remove("md-fullscreen") screen_button.classList.add("md-fullscreen-exit") if video.requestFullscreen video.requestFullscreen() else if video.mozRequestFullScreen video.mozRequestFullScreen() else if video.webkitRequestFullscreen video.webkitRequestFullscreen() ###* # Exit full screen mode. # Change `md-fullscreen-exit` button to `md-fullscreen`. 
#### exit_full_screen = -> screen_button.classList.remove("md-fullscreen-exit") screen_button.classList.add("md-fullscreen") if video.exitFullscreen video.exitFullscreen() else if video.mozCancelFullScreen video.mozCancelFullScreen() else if video.webkitExitFullscreen video.webkitExitFullscreen() ###* Launching and existing fullscreen mode. ### full_screen_toggle = -> if !video.fullscreenElement && !video.mozFullScreenElement && !video.webkitFullscreenElement launch_into_full_screen() else exit_full_screen() screen_button.addEventListener("click", full_screen_toggle) ###* Shortcuts. ### doc.onkeydown = (event) -> play_pause_toggle() if event.keyCode == 32 full_screen_toggle() if event.keyCode == 70 volume_toggle() if event.keyCode == 77 video_replay() if event.keyCode == 48 Layout: <div class="custom-video"> <video id="video"> <source src="http://media.w3.org/2010/05/sintel/trailer.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"' /> <source src="http://media.w3.org/2010/05/sintel/trailer.webm" type='video/webm; codecs="vp8, vorbis"' /> </video> <div class="video-controls cf"> <div id="progress-bar" class="progress-bar"> <div id="progress-load" class="progress-load"></div> </div> <div id="play-button" class="video-control-button md-2x md-play-arrow left"> </div> <div id="volume-controls-wrapper" class="volume-controls-wrapper left"> <div id="volume-button" class="video-control-button md-2x md-volume-up left"> </div> <input type="range" id="volume-range" class="video-control-range left" min="0" max="1" step="0.1" value="1"> </div> <div class="time-display left"> <span id="time-current" class="time-current">00:00</span> <span class="time-separator">/</span> <span id="time-duration" class="time-duration">00:00</span> </div> <div id="screen-button" class="video-control-button md-2x md-fullscreen right"> </div> </div> Answer: Your HTML looks good, except for a missing </div> brace. You can validate it at the W3C HTML Validator. 
While this appears to be a snippet, you should structure your HTML files like this, in the case you are not already: <!doctype html> <html> <head> <title>Page Title Here...</title> </head> <body> <!-- Content Here... --> </body> </html> Right here, I would not align your code like this, it is easier to write and makes just as much sense without those extra spaces: play_button = doc.getElementById("play-button") progress_bar = doc.getElementById("progress-bar") progress_load = doc.getElementById("progress-load") current_time_block = doc.getElementById("time-current") duration_block = doc.getElementById("time-duration") volume_button = doc.getElementById("volume-button") volume_range = doc.getElementById("volume-range") screen_button = doc.getElementById("screen-button") Your indentation, spacing, and naming are excellent, although I prefer a 4-space indent over a 2-space indent. I do not know CoffeeScript or JavaScript, so I cannot make any more comments.
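One part of the reviewed script that does merit a closer look is the Russian plural selection in `pluralize_time`. A sketch of the same rule re-implemented in Python (a translation for checking the logic, not part of the original code):

```python
def plural_index(n):
    # Russian plural category: 0 -> singular form ("секунда"),
    # 1 -> paucal ("секунды"), 2 -> plural ("секунд").
    # Values whose last two digits fall in 5..19 always take the plural form.
    cases = [2, 0, 1, 1, 1, 2]
    if 4 < n % 100 < 20:
        return 2
    return cases[min(n % 10, 5)]

# 1 секунда, 2 секунды, 5 секунд, 11 секунд, 21 секунда
print([plural_index(n) for n in (1, 2, 5, 11, 21)])  # [0, 1, 2, 2, 0]
```

The `% 100` test must come before the `% 10` lookup, exactly as in the CoffeeScript: 11 ends in 1 but still takes the plural form.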
{ "domain": "codereview.stackexchange", "id": 12680, "tags": "html, html5, coffeescript, video" }
Group theory of tensor products of the harmonic oscillator
Question: I learned that the symmetry group of the quantum isotropic harmonic oscillator in n dimensions is $SU(n)$. Specifically, in two dimensions, it is $SU(2)$ and hence the eigenstates are given by the irreducible representations of $SU(2)$. Sticking to the usual notation for angular momentum in physics, we can label the eigenstates by $|l,m>$, where $m=-l, -l+1,..., l$ and $l$ runs over non-negative half integers. Here each $l$ gives a different irreducible representation with dimension $2l+1$. This is precisely the degeneracy of the $l$-th harmonic oscillator shell, with energy $\hbar\omega(2l+1)$. Furthermore, the angular momentum in the z-direction is given by $2\hbar m$. So far, so good. Now I could form tensor products of the different representations if I'm interested in describing multiple particles inside the harmonic oscillator. For non-interacting particles the tensor product $V_{l_1}\otimes V_{l_2}$ is a subspace of the eigenspace of the combined Hamiltonian with energy $2\hbar\omega(l_1+l_2+1)$. On the other hand, I can decompose this tensor product representation into irreducible reps in the standard way, $V_{l_1}\otimes V_{l_2} = \bigoplus_{L=|l_1-l_2|}^{l_1+l_2} V_L.$ However, these $V_L$ now seem to play a very different role, as the energy of the system is apparently not given by $\hbar\omega(2L+1)$. So what is the proper interpretation of the $|L,M>$? Answer: You are trying to understand the language of the Jordan-Schwinger realization of angular momentum. You must understand that throughout there are only two different oscillators, not more, even when you "add angular momenta" by tensoring states. If, somehow, bizarrely, you changed your Hilbert space, you'd have more oscillators instead, and su(n) instead of su(2)! But you don't. Call the oscillators a and b.
$$ H/\hbar\omega= 1 + a^\dagger a + b^\dagger b ,\\ L_z=(a^\dagger a - b^\dagger b)/2 , ~~L_+ = a^\dagger b , ~~~L_-= b^\dagger a,\\ a^{\dagger ~k} b^{\dagger ~n}|0\rangle \equiv |k;n\rangle . $$ You then have $$ E=1+ k+n, ~~~L_z |k;n\rangle={k-n\over 2}|k;n\rangle\equiv m |k;n\rangle, \\ L^2|k;n\rangle = {k+n\over 2} \left ({k+n\over 2} +1\right )|k;n\rangle \equiv l(l+1) |k;n\rangle \leadsto\\ k=l+m, ~~~ n= l-m. $$ So the doublet representation is $$ |\uparrow\rangle =|1;0\rangle, ~~~(m=1/2, l=1/2)\\ |\downarrow\rangle =|0;1\rangle, ~~~(m=-1/2, l=1/2)\\ E=1+1, $$ the first excited state. Tensor two ups. This is the state $$ |\uparrow \uparrow\rangle= |2;0\rangle ~~\implies l=1, m=1, E=1+2. $$ Apply the lowering operator to it, to get $$ {|\uparrow\downarrow\rangle +|\downarrow\uparrow\rangle \over \sqrt{2}}=|1;1\rangle ~~\implies l=1, m=0, E=1+2, $$ and further $$ |\downarrow \downarrow\rangle= |0;2\rangle ~~\implies l=1, m=-1, E=1+2, $$ the three second excited states. It is evident that, for any $l$, the $m=0$ states require $k=n$. So the state orthogonal to the middle state above must be the $$ {|\uparrow\downarrow\rangle -|\downarrow\uparrow\rangle \over \sqrt{2}}=|0;0\rangle=|0\rangle ~~\implies l=0, m=0, E=1, $$ the ground state! But it's inaccessible to tensoring: you cannot combine two excitations to drop to the ground state. The reason is the oscillators commute among themselves, and so you only get the symmetric representation in the composition, and you project out all others. (To get the above singlet, you'd need fermion oscillators, antisymmetrizing the two tensor factors.) So the only surviving factor in your (last) "Clebsch reduction" direct sum formula is the symmetric rep, $L=l_1+l_2$. There are no other Ls to consider. Edit after comments It appears the question was about the 4-dimensional isotropic oscillator, after all, whose symmetry group is SU(4) with 15 symmetry generators, not just 3!
The energy/$\hbar \omega$ is then $4/2 + n$, where $n\equiv n_1+n_2+n_3+n_4$, the sum of the occupation numbers of all 4 uncoupled oscillators. The degeneracy for a given energy shell characterized by n, above, is the dimensionality of the symmetric representation of SU(4) with n boxes in the Young tableau (so, arrayed next to each other), $$ {4+n-1 \choose n}= \frac{(n+3)(n+2) (n+1)}{3!} ~~, $$ hence 4,10,20,... for n=1,2,3,... Check this.
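The closing "Check this" invites a brute-force verification: count the occupation-number tuples $(n_1,n_2,n_3,n_4)$ with $n_1+n_2+n_3+n_4=n$ and compare against the binomial formula. A small Python sketch:

```python
from itertools import product
from math import comb

def shell_degeneracy(n, dims=4):
    # Count ways to distribute n quanta over `dims` uncoupled oscillators.
    return sum(1 for occ in product(range(n + 1), repeat=dims) if sum(occ) == n)

for n in (1, 2, 3):
    assert shell_degeneracy(n) == comb(n + 3, n) == (n + 3) * (n + 2) * (n + 1) // 6

print([shell_degeneracy(n) for n in (1, 2, 3)])  # [4, 10, 20]
```

The enumeration agrees with $\binom{4+n-1}{n}$, i.e. the 4, 10, 20, ... quoted above.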
{ "domain": "physics.stackexchange", "id": 86392, "tags": "quantum-mechanics, angular-momentum, harmonic-oscillator, group-theory, identical-particles" }
Help with Seurat QC ambiguity
Question: I have four PBMC samples from 10X scRNA-seq > cancer An object of class Seurat 36601 features across 18338 samples within 1 assay Active assay: RNA (36601 features, 0 variable features) > And this is the Seurat QC plot. If I am not wrong, the samples were of relatively low quality, with gene expression data revealing the presence of mitochondrial genes, as well as MALAT-1, which are suggestive of poor sample quality (dead/dying cells). Anyway, based on this plot I filtered cells to remove those with more than 20 percent mitochondrial expression, > 2000 features, or < 100 features: cancer <- subset(cancer, subset = nFeature_RNA > 100 & nFeature_RNA < 2000 & percent.mt < 20) By this I lost most of the cells: > cancer An object of class Seurat 36601 features across 6,883 samples within 1 assay Active assay: RNA (36601 features, 0 variable features) > Please, can somebody experienced in scRNA-seq inspect this plot and my thresholds and tell me if I am wrong? Any suggestion on whether losing this number of cells is normal? Thank you Answer: Answer from @haci, converted from comment: I think you should not be using a cut-off higher than 20 or 25 for the reasons I explained above. I would advise against using cells with more than 20-25% percent.mito. This metric is used as a proxy for cell death and there is not much point keeping these cells if the cell death is so severe to the point where let's say half the reads map to the mitochondrial genome (with a 50% cut-off). There will be many caveats, one being apoptotic cells from different lineages clustering together, not interesting unless you are studying apoptosis-related phenomena. In my opinion 6000 good-quality cells are better than 10000 apoptotic cells.
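Language aside (the thread is R/Seurat), the filtering step itself is just a boolean predicate per cell, which makes the thresholds easy to reason about in isolation. A hypothetical sketch with made-up QC values — the thresholds mirror the `subset` call above; nothing here touches real Seurat objects:

```python
# Hypothetical (nFeature_RNA, percent.mt) pairs for five cells.
cells = [(50, 5.0), (500, 30.0), (1500, 10.0), (2500, 8.0), (900, 19.9)]

# Keep cells with 100 < nFeature_RNA < 2000 and percent.mt < 20,
# as in the question's subset() call.
kept = [c for c in cells if 100 < c[0] < 2000 and c[1] < 20]
print(len(kept))  # 2 of 5 cells pass, mirroring the large drop in the question
```

With stringent joint cutoffs, losing half or more of the cells is not unusual when many cells fail even one criterion.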
{ "domain": "bioinformatics.stackexchange", "id": 1904, "tags": "scrnaseq, seurat, 10x-genomics, qc" }
Calculus of Variations commutes with Integrals
Question: I have a question about the variational calculus. Assume a function $q(t,x)$ gives rise to another function $$f(x) := \int dt q(t,x).$$ My question is why the variation $\delta$ commutes with the integral, i.e. why (*) holds: $$\delta f(x) := \int dt \delta q(t,x)~?$$ I know that intuitively it seems to work since variations are linear and we can interpret an integral as a sum passing to a limit. Is there a way to verify rigorously that (*) holds? I'm working with wiki's definitions describing the "variation operator" $\delta$. Furthermore, which "tools" are recommendable to handle/investigate similar properties in variational calculus when one "struggles" with it? Does it suffice only to work with the "defining equation" $$\begin{align} \delta F[\rho ;\phi ]= \int \frac{\delta F}{\delta\rho}(x) \phi(x) \; dx &= \lim_{\varepsilon\to 0}\frac{F[\rho+\varepsilon \phi]-F[\rho]}{\varepsilon} \\&= \left [ \frac{d}{d\epsilon}F[\rho+\epsilon \phi]\right ]_{\epsilon=0}~?\end{align} $$ Answer: I think the notation $\delta f(x)$, while possibly intuitive, is extremely ambiguous and not well defined. The formula at the end of your post: (with minor notational change) \begin{equation} \delta F_{\phi}(\rho) := \dfrac{d}{d \varepsilon} \bigg|_{\varepsilon = 0} F(\rho + \varepsilon \phi) \end{equation} is well defined, and unambiguous (as long as you specify the domain and target space of the function $F$). The way to "read" the symbol $\delta F_{\phi}(\rho)$ is "the directional derivative of the function $F$ at the point $\rho$, along the direction $\phi$". If you look at any advanced calculus textbook such as Loomis and Sternberg's Advanced Calculus, you'll see that this is precisely how directional derivatives are defined (some books require $\phi$ to be a unit vector... but that's not needed). In the subject of Calculus of variations, this is often called "the first variation of $F$ at $\rho$, along $\phi$" (or simply, the first variation of $F$).
Regardless of what you want to call it, the formula above is well defined, and thus we can apply it to your question. If we define $f(x) = \displaystyle\int_a^b q(t,x) \, dt$, then we can compute the first variation of $f$ at the point $x$, along $\phi$ as follows: \begin{align} \delta f_{\phi}(x) &:= \dfrac{d}{d \varepsilon} \bigg|_{\varepsilon = 0} f(x + \varepsilon \phi) \\ &:= \dfrac{d}{d \varepsilon} \bigg|_{\varepsilon = 0} \int_a^b q(t,x + \varepsilon \phi) \, dt \\ &= \int_a^b \dfrac{\partial}{\partial \varepsilon} \bigg|_{\varepsilon = 0} q(t,x + \varepsilon \phi) \, dt \end{align} ($:=$ means "by definition") In the last equality I made use of the Leibniz Integral rule for differentiating under the integral. The quantity inside the integral can be expressed using the multi-variable chain rule as $\dfrac{ \partial q}{ \partial x} (t,x) \cdot \phi$. But if you want to express it as a variation, using $\delta$, then to be proper, you would have to do the following: for each $t \in [a,b]$, define $Q_t(x) = q(t,x)$. Then, the quantity inside the integral is precisely $\delta (Q_t)_{\phi}(x)$ (the first variation of the function $Q_t$ at $x$, along $\phi$). So, what we have shown is \begin{equation} \delta f_{\phi}(x) = \int_a^b \delta(Q_t)_{\phi}(x) \, dt. \end{equation} And now, if you abuse notation by suppressing the direction of variation $\phi$, and if you're too lazy to define a new function $Q_t$, so that domains etc match up, then you get the claimed formula: \begin{equation} \delta f(x) = \int_a^b \delta q(t,x) \, dt. \end{equation}
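As a sanity check on the "differentiate under the integral" step, one can pick a concrete $q$ and compare the finite-difference first variation of $f$ with the integral of the pointwise variation. A sketch with $q(t,x)=\sin(tx)$ on $[0,1]$ and direction $\phi=1$ (both choices are arbitrary):

```python
import math

def integrate(g, a=0.0, b=1.0, n=20000):
    # Midpoint rule; plenty accurate for this comparison.
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

x, eps = 0.7, 1e-6
f = lambda y: integrate(lambda t: math.sin(t * y))  # f(x) = \int_0^1 sin(tx) dt

# Left side: delta f_phi(x) by central finite difference, phi = 1.
lhs = (f(x + eps) - f(x - eps)) / (2 * eps)
# Right side: integral of the pointwise variation, d/dx sin(tx) = t cos(tx).
rhs = integrate(lambda t: t * math.cos(t * x))

print(abs(lhs - rhs) < 1e-6)  # True: the two sides agree
```

This is of course the Leibniz integral rule at work, not a proof of it; the rigorous statement needs the usual dominated-convergence hypotheses.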
{ "domain": "physics.stackexchange", "id": 92044, "tags": "mathematical-physics, integration, variational-calculus" }
Slam gmapping tutorial troubleshooting
Question: Hello, I'm following the map-building tutorial at http://www.ros.org/wiki/slam_gmapping/Tutorials/MappingFromLoggedData and I can get all the way through step 2 (hurray). Then when I enter: $ rosbag play aptLaserData.bag which is the second part 1 of step 2, I get the error: [ INFO] [1311128237.384522694]: Opening aptLaserData.bag [FATAL] [1311128237.388682646]: Time is out of dual 32-bit range My .bag file is indeed named aptLaserData.bag, but could you tell me what's causing the problem? Thanks, Khiya Originally posted by Khiya on ROS Answers with karma: 49 on 2011-07-19 Post score: 0 Original comments Comment by Khiya on 2011-07-28: Hi Brian, thanks for your patience! Please see my comment on Tim's post, below, for access to my aptLaserData.bag file. Comment by Khiya on 2011-07-21: PS, could this problem have to do with the fact that I'm using a virtual machine running Ubuntu 10.10 from a 32-bit Windows machine running Vista? Comment by Khiya on 2011-07-20: Brian, I can't find a way to upload a .bag file onto ROS answers. Unless you know of a better way to put my data online, I'm going to upload it to the repository for one of my other projects and make it available there. Comment by Brian Gerkey on 2011-07-20: Another user reported the same problem, using the bag supplied with the tutorial: http://answers.ros.org/question/248/slam_gmapping-mappingfromloggeddata-out-of-dual-32. I wasn't able to reproduce it. Can you post your bag somewhere that we can try playing it back? Answer: No activity in a month, closing. In the future, please use the 'edit' feature to update the original question, or 'post a comment' feature in order to do followup so that it is more easy to track the state of the Q&A. Originally posted by kwc with karma: 12244 on 2011-09-02 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 6192, "tags": "navigation, laserscan, gmapping" }
Conservative force definition
Question: Classical Mechanics, by John Taylor, defines a conservative force $F$ as a force that satisfies: (1) $F$ depends only on the particle's position and no other variables; (2) the work done by $F$ is the same for all paths taken between two points. I'm wondering if this definition is redundant. Doesn't (1) imply (2) and vice versa? If not, what is an example of a force that satisfies (1) but not (2), and an example of a force that satisfies (2) but not (1)? Answer: The comment of @probably_someone clearly shows the necessity of (1). It eliminates a possible force dependence on time, velocity or any other parameters. (2) does not follow from (1): Consider the force on one pole of a long thin bar magnet which is next to a current-carrying wire. The work done moving it in a circle around the wire is different to the work done in a loop which doesn't go around the wire. The same applies to the location-dependent force on an object moved in a water whirl. (1) doesn't follow from (2): When a charged particle moves in a magnetic field, no work is done on the particle along any path from A to B. But the force experienced by the particle depends on the velocity, not only the position (inhomogeneous B).
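The "(2) does not follow from (1)" direction can also be seen with a toy field: $F(x,y)=(-y,x)$ depends only on position, yet its work around the unit circle is $2\pi\neq 0$ — a planar caricature of the azimuthal force near a current-carrying wire. A quick numerical sketch:

```python
import math

def loop_work(n=100000):
    # Work of F = (-y, x) once around the unit circle, path discretized.
    two_pi = 2 * math.pi
    W = 0.0
    for i in range(n):
        th = two_pi * i / n
        x, y = math.cos(th), math.sin(th)
        dx = -math.sin(th) * (two_pi / n)
        dy = math.cos(th) * (two_pi / n)
        W += (-y) * dx + x * dy
    return W

print(round(loop_work(), 4))  # 6.2832, i.e. 2*pi: nonzero circulation
```

A nonzero closed-loop work means the path-independence condition (2) fails even though the force satisfies (1).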
{ "domain": "physics.stackexchange", "id": 46739, "tags": "newtonian-mechanics, forces, work, definition, conservative-field" }
Why isn't the constellation of that month's zodiac prominent or even visible during that month?
Question: For instance, the month of October is called the month of Leo, but you will mainly see Aries, Taurus and the Southern skies during October; Leo isn't very prominent. So how is the zodiac decided? Answer: The Zodiac sign of a month is decided by the month when the Sun is "in" that sign from the point of view of Earth. (See Mike G's comment below - because of the precession of the Earth, Zodiac signs are now 30 degrees "off" from their original markings/associations with the months of the year.) I believe that August is actually the month of Leo, not October - more specifically, July 23 to August 23. (see here for more date info) If you look at the diagram, you'll see that the Sun is right in front of Leo from the point of view of Earth during most of August, and thus we call that the month of Leo. Some more examples - during October, the month of Libra, the Sun is directly in front of Libra, and during March, the month of Pisces, the Sun is directly in front of Pisces.
{ "domain": "astronomy.stackexchange", "id": 5090, "tags": "constellations, zodiac" }
URDF - wheels joints are stuck to base_link
Question: After spending so many hours figuring things out, I finally got to make a two wheeled robot that gets controlled with diff_drive_controller using ROS. However, whenever i publish cmd_vel like the following: rostopic pub /diff_drive_controller/cmd_vel geometry_msgs/Twist -- '[1.0, 0, 0]' '[0.0, 0.0, 1.2]' the robot moves in a weird way: the base_link seems to be fixed to the wheels. Whenever the wheels move, base_link moves with the wheels. What I want is the base_link to stay still and the wheels to rotate. Here is my urdf of the two wheeled robot: <link name="base_link"> <collision name='collision'> <origin xyz="0 0 0.1" rpy="0 0 0" /> <geometry> <box size=".3 .3 .05"/> </geometry> </collision> <visual name='visual'> <origin xyz="0 0 0.1" rpy="0 0 0" /> <geometry> <box size=".3 .3 .05"/> </geometry> </visual> <inertial> <origin xyz="-0.1 0 0.1" rpy="0 0 0" /> <mass value="5"/> <inertia ixx="0.03854" ixy="0.0" ixz="0.0" iyy="0.075" iyz="0.0" izz="0.03854"/> </inertial> </link> <link name="left_wheel"> <collision name="collision"> <origin xyz="0 0 0" rpy="0 1.5708 1.5708" /> <geometry> <cylinder length="0.05" radius="0.1"/> </geometry> </collision> <visual name="visual"> <origin xyz="0 0 0" rpy="0 1.5708 1.5708" /> <geometry> <cylinder length="0.05" radius="0.1"/> </geometry> </visual> <inertial> <mass value="1"/> <inertia ixx="0.002708" ixy="0.0" ixz="0.0" iyy="0.002708" iyz="0.0" izz="0.005"/> </inertial> </link> <link name="right_wheel"> <collision name="collision"> <origin xyz="0 0 0" rpy="0 1.5708 1.5708" /> <geometry> <cylinder length="0.05" radius="0.1"/> </geometry> </collision> <visual name="visual"> <origin xyz="0 0 0" rpy="0 1.5708 1.5708" /> <geometry> <cylinder length="0.05" radius="0.1"/> </geometry> </visual> <inertial> <mass value="1"/> <inertia ixx="0.002708" ixy="0.0" ixz="0.0" iyy="0.002708" iyz="0.0" izz="0.005"/> </inertial> </link> <gazebo reference="left_wheel"> <mu1 value="10000.0" /> <mu2 value="10000.0" /> <fdir1>0 1 0</fdir1> 
<minDepth>0.01</minDepth> </gazebo> <gazebo reference="right_wheel"> <mu1 value="10000.0" /> <mu2 value="10000.0" /> <fdir1>0 1 0</fdir1> <minDepth>0.01</minDepth> </gazebo> <joint type="continuous" name="left_joint"> <child link="left_wheel">left_wheel</child> <parent link="base_link">base_link</parent> <axis xyz="0 1 0"/> <origin xyz="0 -0.175 0.1" rpy="0 0 0"/> <!--<limit effort="100" velocity="10.0"/> --> </joint> <joint type="continuous" name="right_joint"> <child link="right_wheel">right_wheel</child> <parent link="base_link">base_link</parent> <axis xyz="0 1 0"/> <origin xyz="0 0.175 0.1" rpy="0 0 0"/> <!--<limit effort="100" velocity="10.0"/> --> </joint> <transmission name="right_tran"> <type>transmission_interface/SimpleTransmission</type> <joint name="right_joint"> <hardwareInterface>VelocityJointInterface</hardwareInterface> </joint> <actuator name="right_motor"/> <mechanicalReduction>1</mechanicalReduction> <motorTorqueConstant>1</motorTorqueConstant> </transmission> <transmission name="left_tran"> <type>transmission_interface/SimpleTransmission</type> <joint name="left_joint"> <hardwareInterface>VelocityJointInterface</hardwareInterface> </joint> <actuator name="left_motor"/> <mechanicalReduction>1</mechanicalReduction> <motorTorqueConstant>1</motorTorqueConstant> </transmission> And this is my .yaml code for the controller: # Publish all joint states ----------------------------------- joint_state_controller: type: "joint_state_controller/JointStateController" publish_rate: 50 ## Position Controllers --------------------------------------- #left_joint_velocity_controller: # type: velocity_controllers/JointVelocityController # joint: left_joint # pid: {p: 100.0, i: 0.01, d: 10.0} #right_joint_velocity_controller: # type: velocity_controllers/JointVelocityController # joint: right_joint # pid: {p: 100.0, i: 0.01, d: 10.0} diff_drive_controller: type: "diff_drive_controller/DiffDriveController" publish_rate: 50 left_wheel: ['left_joint'] right_wheel: 
['right_joint'] wheel_separation: 0.35 pose_covariance_diagonal: [0.001, 0.001, 0.001, 0.001, 0.001, 0.03] twist_covariance_diagonal: [0.001, 0.001, 0.001, 0.001, 0.001, 0.03] base_frame_id: base_link # Velocity and acceleration limits for the robot linear: x: has_velocity_limits : false #max_velocity : 10.0 # m/s has_acceleration_limits: false #max_acceleration : 6.0 # m/s^2 angular: z: has_velocity_limits : false #max_velocity : 10.0 # rad/s has_acceleration_limits: false #max_acceleration : 6.0 # rad/s^2 Originally posted by Moneyball on ROS Answers with karma: 13 on 2017-03-12 Post score: 0 Answer: I'm new to this myself, but I resently got DiffDriveController to work on my robot. If I understand your problem correctly, I think you need to fix the mass / inertia. From what I have read, the identity matrix is a bad default for the moment of inertia matrix. I use these macros in my model: <xacro:macro name="cylinder_inertia" params="m r h"> <inertia ixx="${m*(3*r*r+h*h)/12}" ixy = "0" ixz = "0" iyy="${m*(3*r*r+h*h)/12}" iyz = "0" izz="${m*r*r/2}" /> </xacro:macro> <xacro:macro name="box_inertia" params="m w h d"> <inertia ixx="${m / 12.0 * (d*d + h*h)}" ixy="0.0" ixz="0.0" iyy="${m / 12.0 * (w*w + h*h)}" iyz="0.0" izz="${m / 12.0 * (w*w + d*d)}"/> </xacro:macro> Make sure you configure the friction of your wheels so they don't slip. Here's a sample from my model. You will probably have to play with the values, some of them may be overkill. <gazebo reference="wheel_FL"> <mu1 value="100000.0"/> <mu2 value="100000.0"/> <fdir1>0 1 0</fdir1> <minDepth>0.01</minDepth> <slip1>0</slip1> <slip2>0</slip2> </gazebo> Originally posted by gudjon with karma: 111 on 2017-03-12 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Moneyball on 2017-03-12: Hi thank you. I tried it with your equation and it seems to move better than before. But the movement is still a bit jerky. One quick question: what is the mass of your base_link and wheels? 
Comment by Moneyball on 2017-03-12: Also in the yaml code, what exactly are pose_covariance_diagonal and twist_covariance diagonal? Will they affect the way my robot moves? Comment by gudjon on 2017-03-12: My vehicle is still a work in progress, but the mass is set very high (300+ for the base, 10 for each wheel), also it has 4 wheels. Does your vehicle have a third contact to the ground, like a caster wheel, or does it stand on two wheels like an inverted pendulum? Comment by gudjon on 2017-03-12: As far as I know the covariance has nothing to do with the driving properties, it's only used for the odometry. Another thing you could look into is the friction of the tires. To make sure their not slipping or bouncing. I'll update my answer to include a sample. Comment by Moneyball on 2017-03-12: I don't have caster wheel, just two wheels on the side of the base_link. How do I change the friction? If I never specified it, would it default to 0 friction? Comment by Moneyball on 2017-03-12: Just saw your updated post: ill go check to see if friction will do the trick Comment by Moneyball on 2017-03-13: when i added , parseURDF failed so i only used mu1, mu2, fdir, and mindepth. I forgot the -r flag for rostopic pub. Now everything works! Thank you
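For what it's worth, the `cylinder_inertia` macro in the answer is the standard solid-cylinder inertia tensor, and plugging in the wheel from the question (m = 1, r = 0.1, h = 0.05) reproduces exactly the `ixx = 0.002708`, `izz = 0.005` values already in the URDF — consistent with the fix coming from friction and mass tuning rather than the wheel inertia itself. A quick check of the formulas:

```python
def cylinder_inertia(m, r, h):
    # Solid cylinder about its center of mass, symmetry axis along z
    # (same expressions as the xacro macro in the answer).
    ixx = iyy = m * (3 * r * r + h * h) / 12
    izz = m * r * r / 2
    return ixx, iyy, izz

ixx, iyy, izz = cylinder_inertia(m=1.0, r=0.1, h=0.05)
print(round(ixx, 6), round(izz, 6))  # 0.002708 0.005
```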
{ "domain": "robotics.stackexchange", "id": 27290, "tags": "ros, joint, urdf" }
Finding the direction of an induced current, swinging pendulum
Question: The conductor shown in #1 is held still in position #2. The switch s1 is now open, and the switch s2 is now closed, see #1. There is no current going through the conductor as it's held still. Then, the conductor is released and an electric current is induced. Find the direction of this current: 1 As the conductor swings towards the lowest point. 2 As the conductor makes its way up to the right after having passed the lowest point. Since the angle between the area vector and the B field goes towards 90 degrees as the conductor makes its way down to the lowest point, the flux is reduced. Therefore I think a magnetic field must be induced going in the same direction as the field in #2, and the force on the conductor must be opposite to the direction of the velocity (Lenz's law), i.e. to the left, and therefore the current should be going into the plane of the paper. Because the flux is reduced after the conductor has passed the lowest point and swings up to the right, my idea is that it should be the exact opposite (current going out of the plane of the paper). However, this is apparently not correct. It's a bit of a complex problem, so if you need additional details or explanations please let me know. Answer: . . . and the force on the conductor must be opposite to the direction of the velocity (Lenz's law), i.e. to the left, and therefore the current should be going into the plane of the paper. Q So when the conductor has passed the vertical, which way is the conductor moving and so which way should the opposition to motion be? A The same as before, to the left, and so the induced current must be in the same direction as before. Before the conductor reached the vertical position, the induced current produced a clockwise magnetic field around the bottom conductor, thus trying to increase the (downward) magnetic flux through the loop to counteract the decreasing magnetic flux due to the external magnetic field.
On the other side of the vertical the direction of the induced current and hence the induced magnetic field is the same but note that now that induced magnetic field and hence the flux is upwards through the loop and it is trying to reduce the increase in magnetic flux through the loop due to the external magnetic field. Update as a result of a comment The external magnetic field always points downwards. The magnetic field due to the induced current has a downward component through the loop when the loop is left of vertical (left picture) and an upward component through the loop when the loop is right of vertical (right picture).
{ "domain": "physics.stackexchange", "id": 40756, "tags": "electromagnetism, induction" }
vibrating charged string
Question: I know how to calculate the electric field generated by a charged string at rest. And I know how to calculate the vibrations of a (not charged) string with given boundary and initial conditions. All these are classical and solved physical problems. But, what about a vibrating charged string? I suppose that, in this case, we have an emission of electromagnetic waves and the vibrations are damped, but I can't find an example of the solution of such a problem in classical books or on the web. Does someone know how to solve this problem or know a reference? It seems that there is some interest in my question. So I add something that better illustrates the question. I think that the starting point can be the equation of a vibrating string with a force acting on its points, that is: $$ u_{xx}+\frac{1}{\tau}f(x,t)=\frac{\mu}{\tau}u_{tt} $$ where $\mu$ is the linear density and $\tau$ is the tension (all supposed constants). If the string is charged, the radiation emitted generates a back reaction force that, in a non relativistic approximation, is the Abraham-Lorentz force proportional to the jerk (the variation of acceleration $\dot a$). So the equation becomes: $$ \alpha u_{ttt}+\beta u_{tt}=u_{xx} \qquad (1) $$ where $\alpha =\frac{2\lambda^2}{3\tau c^3}$, with $\lambda$ the linear density of the charge, and $\beta=\mu/\tau$. I'm not convinced by my equation $(1)$ because the backreaction term represents the interaction of the string element with the ''self field'' (generated by the same string element) but not the reaction to the field generated by the entire string. Anyway, even if the damping term is very small (due to the $c^3$ in the denominator), I'm curious to explore the possible solutions of this problem. Answer: Let us, for concreteness, talk about the string with fixed endpoints, and one with a homogeneous, fixed charge density along it.
When we derive the string equations, we essentially use the assumption $$k A \ll L$$ where $k$ is the number of antinodes ($k\geq 1$), $A$ is the amplitude of the oscillation, and $L$ the string length. Then, if we take the leading order term $\sim kA/L$ in our equations and neglect anything $\mathcal{O}(k^2 A^2/L^2)$, we get the linear string equation as you know it. Thus, we should only consider the effects of lowest order in $A/L$ in our computation. Now let us assume that the charge density is so small that the string does not yet feel any back-reaction. Then we make a multipole expansion of the electromagnetic radiation leaving the system and we obtain that the average power radiated by the $2^n$ multipole in all directions is $$P_{2^n} \propto \frac{1}{\lambda^2} \frac{|M_{2^n}|^2}{\lambda^{2n}}$$ where $\lambda$ is the wave-length of the electromagnetic radiation, and $|M_{2^n}|$ is the amplitude of the oscillation of the $2^n$ pole (assuming it undergoes a harmonic oscillation, also note that there are "longitudinal" parts of the multipoles in the string which do not oscillate). On the other hand, the time-varying $2^n$-pole is proportional to the total charge of the object $Q$ and some typical length to the n-th power, in our case you can compute that the time-variable part will be always proportional to $A^n$. (If you are unsure, take a look at the $n=1$ dipole, and $n=2$ quadrupole radiation computation on wiki.) The frequency of the emitted radiation $f_{EM}$ will be some integer multiple of the frequency of the vibration of the string, which is $$f_s = \frac{k v_s}{2L}$$ where $v_s$ is the wave-speed in the string. 
Since $\lambda=c/f_{EM}$, after dropping some integer factors we finally get $$P_{2^n} \propto \frac{Q^2}{\lambda^2} \left(\frac{A}{L} \frac{v_s}{c}\right)^{2n}$$ That is, the various multipole contributions to the radiation of the string drop off very quickly with exactly the expansion parameter $A/L$ whose higher powers we neglect in the expansion. On the other hand, you can verify that the energy of the string is $\propto (A/L)^2$ itself, so in a consistent $(A/L)$-power expansion we necessarily get that the dipole radiation is stealing energy already in the linear mode of the oscillations of the string. Then again, unless the back-reaction becomes strong, it is safe to assume that the quadrupole and higher-order leakage of energy will be negligible even when we assume some back-reaction on the string (the dimensional analysis makes this more or less inevitable). Now, this doesn't quite help us to solve the nonlinear back-reaction equations as you propose in your question, but at least it allows us to build a leading-order model for the behaviour of the string. This is because we know from the dipole radiation formula that the loss of energy averaged over one period must be (now adding all the factors) $$-\langle \frac{d E_s}{d t} \rangle = P_2 = \frac{\mu_0 }{12 \pi c} \omega^4 |M_2|^2 $$ Now let us assume that the change of the amplitude of a given mode $A_k$ is so slow that we can use this average to compute the evolution of $A_k$ by simply integrating it. We use the formula for the energy of the mode $E_{ks} = M A_k^2 \omega_k^2/2$ where $M$ is the string mass, $\omega_k = \pi v_s k/L$. A quick computation also yields $M_{2,k} = Q/k$ for $k$ odd, and $M_{2,k} = 0$ for $k$ even, where $Q$ is the total charge of the string. When the dust settles, you obtain that the average evolution of the amplitude of the $k$-th mode is given by $$ \dot{A}_k = - \zeta A_k,\;k\;odd$$ where $\zeta$ is a constant damping rate I am sure you will be able to retrieve from the equations above.
(The validity of the "slow-change" approximation is $\zeta \ll \omega_k$.) The important message is that the amplitude will evolve as $$A_k(t) \propto e^{- \zeta t},\;k\;odd$$ and, quite surprisingly, the even modes will not change, at least on the time-scale where the linear string approximation is valid. So this was a short treatment which shows you that the dominant reaction of the string will be to radiate away its dipole oscillation. Of course, once non-linearities start to show up, the internal physics of the string transferring energy from mode to mode, as well as higher-order multipole and non-trivial back-reaction effects, start to show. However, notice that "wave-like" back-reaction does not show up in the linear-string mode, since the string elements move at speeds $\sim v_s A/L \ll c$. So in the linear mode we could only hope for a slightly better description of the evolution of $A_k$, similar to the damped harmonic oscillator. Another problem is that the back-reaction of small singular objects such as infinitesimal strings is generally ill-behaved, and it is better to state what the model is really describing and compute from there. A similar problem arises with the $\mathcal{O}(A^2/L^2)$ non-linearities; they will once again depend on the matter model. So, this is really as far as you can go with a self-consistent description of the problem. One last note on the Abraham-Lorentz force - of course, that is not the only contribution; the string element would receive contributions from the electromagnetic field of its neighbours. I would guess that these contributions are actually dominant and that is the reason why the forces on some modes add up to damping while not for others.
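As a numerical sanity check on the claim $M_{2,k} = Q/k$ for odd $k$ (and zero for even $k$), the sketch below (my own illustration, assuming a uniform line charge and unit $Q$, $L$, $A$) integrates the transverse dipole moment of the displaced string mode by mode:

```python
import numpy as np

# String of length L carrying total charge Q spread uniformly, displaced in
# mode k as y(x) = A*sin(k*pi*x/L). The transverse dipole moment is
#   M_k = (Q/L) * integral of y(x) dx over [0, L].
Q, L, A = 1.0, 1.0, 1.0
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]

moments = []
for k in range(1, 8):
    y = A * np.sin(k * np.pi * x / L)
    integral = np.sum(0.5 * (y[:-1] + y[1:])) * dx   # trapezoid rule
    moments.append((Q / L) * integral)

for k, M_k in enumerate(moments, start=1):
    if k % 2 == 1:
        # odd modes: M_k = 2*Q*A/(k*pi), i.e. proportional to Q/k
        assert abs(M_k - 2 * Q * A / (k * np.pi)) < 1e-6
    else:
        # even modes: the two halves of the string are displaced oppositely and cancel
        assert abs(M_k) < 1e-6
```

The odd-mode moments come out as $2QA/(k\pi)$, i.e. $\propto Q/k$ up to the integer factors dropped above, while the even modes carry no net dipole at all.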
{ "domain": "physics.stackexchange", "id": 44666, "tags": "electromagnetism, classical-electrodynamics, vibrations" }
How to filter ribosomal RNA from scRNA-seq data
Question: I want to filter out ribosomal RNA from scRNA-seq data (downloaded from here). Is there a list of known ribosomal RNA? The only solution I found is SortMeRNA, however it works with raw sequencing data afaik, while I already have a matrix with transcript counts for each cell. I searched for a comprehensive list of rRNAs but I didn't find any. Answer: The rRNA genes in that dataset are Rn45s and Rn4.5s. BTW, you have gene counts, not transcript counts.
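For completeness, a minimal sketch of dropping those genes from a genes-by-cells count matrix with pandas; the tiny matrix and cell names below are made up for illustration, and only the rRNA gene names come from the answer:

```python
import pandas as pd

# Hypothetical genes-x-cells count matrix; Rn45s and Rn4.5s are the rRNA
# gene names given in the answer for this mouse dataset.
counts = pd.DataFrame(
    {"cell1": [120, 3, 15], "cell2": [90, 1, 7]},
    index=["Rn45s", "Actb", "Rn4.5s"],
)

rrna_genes = {"Rn45s", "Rn4.5s"}
filtered = counts.loc[~counts.index.isin(rrna_genes)]

assert list(filtered.index) == ["Actb"]
```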
{ "domain": "bioinformatics.stackexchange", "id": 361, "tags": "rna-seq, scrnaseq, rna, ribosomal" }
Dirac fields: Do particle and antiparticle creation operators act differently on the vacuum?
Question: Given a Dirac field $$\Psi(x):=\int\frac{d^4k}{(2\pi)^4}\delta\left(k_0-\omega(\mathbf{k})\right)\sum_s\left(a_s(k)u_s(k)e^{-ikx}+b^\dagger_s(k)v_s(k)e^{ikx}\right)$$ with the creation operators $a^\dagger_s(k),b^\dagger_s(k)$ for particles and antiparticles respectively, how do these operators act on the vacuum? In particular, is it true that $|k\rangle=a^\dagger_s(k)|0\rangle=b^\dagger_s(k)|0\rangle$? Answer: Ah, I think I understand your question now; this is a simple notational issue. The single particle states for the particles and antiparticles should be denoted differently, i.e. trying to be as close to your notation would give something like $$|k,s\rangle \equiv a^\dagger_s(k)|0\rangle \ \ \ \ , \ \ \ \ |\tilde{k},\tilde{s}\rangle \equiv b^\dagger_s(k)|0\rangle \ .$$ And all the usual commutation relations are the same. Perhaps more standard notation would be $|1_{k}\rangle \equiv a^\dagger_s(k)|0\rangle$ and $|\bar{1}_{k}\rangle \equiv b^\dagger_s(k)|0\rangle $, but I'm not totally sure what's most common.
{ "domain": "physics.stackexchange", "id": 74522, "tags": "quantum-field-theory, operators, vacuum, dirac-equation, antimatter" }
Why do photons and electrons travel at the same speed in lightning?
Question: They say light travels faster than sound. Lightning is just electrons, right? Then why are both electrons and photons traveling at the same speed when thunderstorms occur? Answer: Think about a piece of copper wire. It is packed with free electrons that just can't escape because if they did the protons in the copper atoms would pull them back. It's just like a pipe full of water, sealed at both ends. Now if you push some water into one end of the pipe (which you can only do if some water can also leave the other end), which water comes out? Is the water that comes out the same water that you pushed in? Of course not. That water moved slowly. What moved fast was the pressure wave (sound wave) that told the water at the other end to come out. A lightning bolt is just a long wire of air that conducts electricity because it's hot. The electrons move slowly, but they can't "bunch up", so the wave of pressure (voltage) travels at its natural speed, which is very fast, but less than the speed of light.
{ "domain": "physics.stackexchange", "id": 47049, "tags": "visible-light, electrons, speed-of-light, lightning" }
Wave propagation in digital image
Question: I believe the following question in summary is: How to approximate Euclidean distance in a digital plane? When a pebble falls on a calm surface of water a circular wave propagates. I want to color the pixels with time to show this effect. So I discretize the time and in each time step, starting from the center, I color one pixel away in all directions. But this gives a square wave. I guess what is wrong is that I have approximated the Euclidean distance with the infinity norm. How do I approximate Euclidean distance to get a circular wave on the pixels? I don't want to measure the distance from each pixel to the center in each time slot. That will be very heavy. I am looking for a simple algorithm like coloring the next pixel adjacent to the last colored pixel. Answer: Here's a trick that video game designers have used for years. A computationally cheaper approximation to Euclidean distance between two points $(x_1, y_1)$ and $(x_2, y_2)$ is to use $$ d=\max(|x_1-x_2|, |y_1-y_2|)+\min(|x_1-x_2|, |y_1-y_2|)/2 $$ The relative error between this and the Euclidean distance is no more than 11.8%, which might be good enough for your purposes.
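The 11.8% figure is easy to verify numerically: the worst case of this max-plus-half-min ("octagonal") approximation occurs when the smaller delta is half the larger one, where the overestimate is $\sqrt{5}/2 - 1 \approx 0.118$. A quick check (my own sketch):

```python
import numpy as np

def approx_dist(x1, y1, x2, y2):
    """Octagonal approximation to Euclidean distance: max + min/2 of the deltas."""
    dx, dy = abs(x1 - x2), abs(y1 - y2)
    return max(dx, dy) + min(dx, dy) / 2.0

# worst-case relative error over all directions, measured on the unit circle
angles = np.linspace(0.0, 2.0 * np.pi, 100001)
dx, dy = np.abs(np.cos(angles)), np.abs(np.sin(angles))
approx = np.maximum(dx, dy) + np.minimum(dx, dy) / 2.0
worst = np.max(np.abs(approx - 1.0))   # the true distance is exactly 1 here

# the maximum is sqrt(5)/2 - 1 ~ 11.8%, reached where min = max/2
assert 0.117 < worst < 0.119
print(approx_dist(0, 0, 3, 4))  # 4 + 3/2 = 5.5 versus the true 5.0
```

The approximation never underestimates, so wavefronts drawn with it are slightly "fast" along the worst directions but stay visually round.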
{ "domain": "cs.stackexchange", "id": 2381, "tags": "algorithms, computational-geometry, approximation" }
Wave-packet backwards emission
Question: I am trying to reproduce the results of a TDSE two wave-packet algorithm described in "Computational Physics: Problem Solving with Python" by Landau, Paez and Bordeianu, in section 22.4 (Wave Packet–Wave Packet Scattering). The chapter references this report and this page of results. In pretty much all my own results and in some of those provided by the author (like this), you can see a small fragment of a wave-packet being "emitted" backwards in the very beginning of the computation. (The mass of the left packet is a fraction of the right packet's.) The interaction potential used in these pictures is of gaussian form (attractive, but short range; shouldn't affect the packets at the initial distance). The method assumes that the wave-packets are initially far enough apart that the total wavefunction is a product of two gaussians. My question is, how does one explain this emission? This doesn't seem to make much sense physically. Answer: I claim no particular expertise here, but following on from the comment by denklo let me just give a quotation from the paper which you already reference, bottom of p9: At early times in Fig. 3, as well as in other animations, we can see very small wavepackets moving in opposite directions to the larger wavepackets for each particle. These are numerical artifacts. While wavepackets with reversed values of $k$ are valid solutions of the Schrödinger equation, they should be eliminated by the initial conditions. For example, if $\exp(ikx)$ is a valid solution, then so is $\exp(-ikx)$, yet it is hard to get rid of it completely. They thank one of the referees for pointing this out to them.
{ "domain": "physics.stackexchange", "id": 57804, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, computational-physics, software" }
Synthesizing 2-methyl-2-butanol
Question: I have recently been trying to come up with a method to synthesize 2-methyl-2-butanol or tert-amyl alcohol. I wish to avoid complex/expensive steps since as a home chemist/student, I don't have access to much money or equipment. What I have done so far: I noticed that methyl ethyl ketone has pretty much the correct structure, minus a methyl group. I then realized this meant I could use a methylating agent to get the correct molecule. What I needed was to break the $\ce{C=O}$ bond from the carbonyl group and add a methyl group. Long story short, I found that either Grignard reagents or something like methyl lithium would do it since it would donate an electron pair to the carbon, and form a bond with the methyl. I would then have the 2-methyl-2-butanol as a conjugate base with $\ce{Li}$ ions present. I could then react them out with $\ce{HCl}$ to precipitate out the $\ce{Li}$ salt and add a hydrogen to the negatively charged oxygen. My problem: I don't know if this would work at all. I didn't find any documentation. Also, methyl lithium is very expensive and difficult to make so I can't use it. Does anyone have any other route suggestions? Would it be possible to make methyl sodium and if so, would it work the same? Is it possible to make Grignard reagents easily at home? I'm a hobby chemist, so please be detailed with how you predicted the organic reaction. Also, I don't necessarily care about what chemicals I use for the reaction, so if anyone can suggest a different starting chemical, I'd be glad to hear it. I was even thinking of using an alkene group of some kind and methylating the $\ce{C=C}$ bond, but I don't know for sure, as I'm pretty new to organic chem. Answer: Isopropyl lithium and ethyl bromide would be my go-to if I were to try to prepare this stuff (which I will call TAA). This requires a solid fume hood, and the resulting mixture of salt, TAA, and isopropyl lithium solvent (usually n-hexane) should be easy to separate. 
The hexane can be stripped from the TAA via fractional distillation, with more distillation passes resulting in a more pure product. TAA boils at $\pu{102 ^\circ C}$ (Sigma Aldrich), and n-hexane boils at around $\pu{70 ^\circ C}$ - this $\pu{30 ^\circ C}$ gap in BP passes the rule-of-thumb "can I separate with distillation?" question. All that said, even pricey Sigma Aldrich can get you 99% for $115 a liter. I am aware that TAA is an experimental recreational drug, and a possible replacement for ethanol. I advise caution in sourcing this material for that purpose, as the impurities present are likely to be more toxic than TAA itself. I believe the industrial synth is from 2-methyl-2-butene (2m2b) with the addition of water (this would be cheapest); TAA prepared this way can be purified further by refluxing with an open condenser, enabling any unreacted alkene to evaporate (Boiling point = $\pu{39 ^\circ C}$). Acetone and ethyl Grignard would also form TAA, but with much more hassle and hazard, so the remnant ethyl chloride and acetone from such a synthesis are not likely impurities in any cheap sample.
{ "domain": "chemistry.stackexchange", "id": 10100, "tags": "organic-chemistry, experimental-chemistry, synthesis, alcohols" }
Does MATLAB code retain its accuracy when executed on a DSP kit?
Question: MATLAB code (at https://metrw-pitch.blogspot.com/ you can see improved code) executing in MATLAB Online R2020b outputs good accuracy even for low fundamental frequencies (fundFreq), but the code (originally in the answer to a question of mine) for a similar algorithm in C++ (ported in MATLAB), executing in Visual Studio 2019, outputs bad accuracy at low (lower than 150 Hz) frequencies. My question is whether the MATLAB code retains its accuracy when executed on a DSP kit. I'm homeless (in France) and get internet access only at Social Services offices and Post Offices, so it's impossible for me to run code on a DSP kit. Would you please test the MATLAB code on your DSP kits and tell me its output for fundFreq 50, 60 and 70 Hz? Be aware that when changing fundFreq, also change, if necessary, FFTfundFreq and/or GridDemiSpan so that the grid's span covers fundFreq. I asked a similar question on 8 Feb 2021 at 13:25, at MATLAB, but nobody has answered yet. A copy of the MATLAB code without comments follows.

SampFreq = 16000;
Segm = 1:1600;
FundFreq = 50;
FFTfundFreq = 41;
GridDemiSpan = 10;
FirstHarmAngles = FundFreq*2*pi/SampFreq*Segm+1.9*pi;
SinFirstHarmAngles = sin(FirstHarmAngles);
SecondHarmAngles = FundFreq*2*2*pi/SampFreq*Segm+0.9*pi;
SinSecondHarmAngles = sin(SecondHarmAngles);
ThirdHarmAngles = FundFreq*3*2*pi/SampFreq*Segm+0.3*pi;
SinThirdHarmAngles = sin(ThirdHarmAngles);
Xn = 170000*SinFirstHarmAngles+220000*SinSecondHarmAngles+150000*SinThirdHarmAngles;
Freqs = FFTfundFreq-GridDemiSpan:0.1:FFTfundFreq+GridDemiSpan;
MagnSqrd = ones(1,201);
for f = 1:201
    Angles = Freqs(f)*2*pi/SampFreq*Segm;
    XnCos = sum(Xn.*cos(Angles));
    XnSin = sum(Xn.*-sin(Angles));
    MagnSqrd(f) = XnCos.^2+XnSin.^2;
end
[maxMagnSqrd, maxMagnSqrdIndex] = max(MagnSqrd);
GRIDfundFreq = Freqs(maxMagnSqrdIndex);
disp(GRIDfundFreq);

Answer: Yes, C/C++ code generated from MATLAB/Simulink code should behave the same as the original code. 
However, there are a few caveats; here are 3 problems I've experienced in real life.

1 - Simulink/Matlab performs an implicit reset just before starting the simulation/script. Make sure you perform this reset in your C/C++ code, otherwise your results won't be the same and won't be comparable. Simulink Coder generates a reset function for you. I've seen a corner case where the reset function would not reset all global variables, so be careful.

2 - Variable types. Matlab/Simulink typically use double-precision floating-point variables. If your code uses doubles, then great: the behaviour should be the same. However, for performance reasons, you might choose to use single-precision floating-point variables, aka floats. Then the behaviour could change slightly. There are ways to mitigate the effects of quantization; some algorithms are really sensitive to precision, others less so. Even if you use doubles there are some differences between platforms. So if you use doubles on an ARM µProcessor, the behaviour could be slightly different in corner cases.

3 - Integer saturation. I made the mistake in the past of using a 32-bit integer in Matlab as a free-running counter. It worked well, but after like 10 days the code would freeze. The counter would be stuck at 2^32-1 instead of wrapping back to 0. Obviously, you won't see that behaviour in simulation. So be careful, test your code thoroughly.
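Caveat 2 can be demonstrated without any DSP kit. The projection sums in the question accumulate 1600 large products; doing that accumulation in single precision, as a C port declared with float would, measurably degrades the result, while a double-precision loop stays essentially exact. A hedged Python/NumPy illustration (the signal parameters are borrowed from the question; this is not generated code):

```python
import numpy as np

fs, n = 16000, 1600
t = np.arange(1, n + 1)
# one harmonic of the question's test signal is enough for the illustration
x64 = 170000.0 * np.sin(50 * 2 * np.pi / fs * t + 1.9 * np.pi)
c64 = np.cos(50 * 2 * np.pi / fs * t)

ref = float(np.sum(x64 * c64))        # float64, pairwise-summed reference

def naive_sum(x, c):
    # accumulate term by term, the way a straightforward C loop would
    acc = x.dtype.type(0)
    for xi, ci in zip(x, c):
        acc += xi * ci
    return float(acc)

rel_err64 = abs(naive_sum(x64, c64) - ref) / abs(ref)
rel_err32 = abs(naive_sum(x64.astype(np.float32), c64.astype(np.float32)) - ref) / abs(ref)

# single-precision accumulation is orders of magnitude worse than double precision
assert rel_err64 < rel_err32 < 1e-3
```

Whether that loss matters depends on how sharply the magnitude-squared peak must be resolved, which is exactly why low fundamentals (flatter peaks) suffer first.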
{ "domain": "dsp.stackexchange", "id": 9861, "tags": "matlab" }
How to prevent an asteroid from spinning while pushing it?
Question: Consider a perfectly spherical asteroid in deep space (away from other celestial bodies). The asteroid has uniform density so its Center of Mass (CoM) coincides with its geometric center. The asteroid is rigid and does not deform when touched or pushed. Initially the asteroid does NOT spin about its CoM in the inertial reference system. The pale green rectangles appearing on the asteroid's surface in the Diagram below visualize the lack of asteroid's spin. A maneuverable spacetug (space-pusher for European readers) continuously applies a variable force to the surface of the asteroid, e.g. at points P1, .. P7 (small yellow dots), via a rigid and flat pushplate, which is mounted in front of the spacetug (thick blue line), in order to push the asteroid along an arbitrary path (gray dashed curve). The spacetug continuously applies the variable force vector (red arrows) along the lines connecting the points P1, .. P7 and the CoM. The acceleration of the asteroid along the gray path is NOT assumed to be zero. The pushplate does not slide on the surface of the asteroid - instead, the pushplate "rolls" on the asteroid's surface from its point of view. QUESTION: Is keeping the force vector pointed at the CoM sufficient to prevent the asteroid from spinning about its CoM as it is pushed along an arbitrary path? Answer: If the hypothetical asteroid is not spinning to begin with, yes, force on the center of mass (for instance, gravity coupling of the 'tug' craft with the asteroid) does not exert any torque, so will not cause any rotation. If a mechanism can be coupled to the asteroid that can extend a small mass on a string, the asteroid-mass pair can have a large-ish rotational moment, so would become a nearly rotationless object when the string length is long (a kilometer, perhaps?). Then, you can simply release the mass (cut the string) and engage the tug with a nonspinning asteroid. 
A realistic force analysis would have to include light pressure, outgassing, ablation, electrical forces (solar wind can carry charges) and a tiny tidal force (depending on some elasticity or nonspherical mass distribution).
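The zero-torque statement is just $\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F} = \mathbf{0}$ whenever $\mathbf{F}$ is antiparallel to the lever arm $\mathbf{r}$ from the CoM; a tiny numerical sketch (illustrative only, with an assumed unit sphere and arbitrary push magnitude):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    p = rng.normal(size=3)
    p /= np.linalg.norm(p)           # contact point on the unit sphere, CoM at the origin
    F = -3.7 * p                     # push of arbitrary magnitude aimed at the CoM
    torque = np.cross(p, F)          # r x F about the CoM
    assert np.allclose(torque, 0.0)  # no torque, hence no spin-up

# by contrast, a force with a tangential component does exert torque
p = np.array([1.0, 0.0, 0.0])
F = np.array([-3.7, 0.5, 0.0])       # aimed off the CoM
assert np.linalg.norm(np.cross(p, F)) > 0.4
```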
{ "domain": "physics.stackexchange", "id": 59886, "tags": "newtonian-mechanics, rotational-dynamics" }
Finite Potential Barrier Where $E=V_0$
Question: If we have a free particle of energy E incident on a potential $V$ $$V(x) = \begin{cases}0 & x \leq 0 \\ V_0 & 0 < x < L \\ 0 & x \geq L\end{cases}$$ we find that the wave function $\phi$ $$\phi(x) = \begin{cases} e^{ikx}+re^{-ikx} & x < 0 \\ A + Bx & 0 < x < L \\ te^{ikx} & x > L \end{cases}$$ is a Hamiltonian eigenfunction with energy $E = V_0$. I can show for $x < 0$ and $x > L$ that $\phi$ satisfies $\hat{H}\phi = V_0\phi$, but for $0 < x < L$ we have $\hat{H}\phi = 0$. I am trying to understand this as a result of $E - V_0 = 0$; thus in this region $\phi$ has an eigenvalue of $0$? So in this region the particle has zero energy? But I cannot find a rigorous explanation of this online anywhere and was hoping someone here could clear it up for me. Additionally: Wikipedia states that a complete solution can be found by using constraints on $A$ & $B$ that can be found by matching the wave function and its derivatives at $0$ & $L$. Could somebody please explain why the wave function and its derivative must be continuous everywhere? Answer: The eigenvalue of the Hamiltonian is $E=V_0$ in the three regions $$\hat{H}\phi(x)=\Big(\hat{T\,}+\hat{\,V}(x)\Big)\phi(x)=E\phi(x)=V_0\phi(x).$$ For $0<x<L$, what you have is $$\Big(\hat{T\,}+V_0\Big)\phi(x)=V_0\phi(x)\,\,\rightarrow\,\,\hat{T\,}\phi(x)=0,$$ so $0$ is the eigenvalue of the kinetic energy. With regard to your last question, more info here and here.
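As a supplement on the matching conditions mentioned at the end: imposing continuity of $\phi$ and $\phi'$ at $x=0$ and $x=L$ gives four linear equations for $r, A, B, t$, which can be solved directly. The sketch below is my own check (the closed form for $r$ comes from eliminating $A$, $B$, $t$ by hand) and also verifies $|r|^2 + |t|^2 = 1$:

```python
import numpy as np

k, L = 1.3, 0.7
e = np.exp(1j * k * L)

# unknowns ordered as (r, A, B, t); rows are the four matching conditions
M = np.array([
    [1.0,      -1.0,  0.0,  0.0       ],  # 1 + r = A          (phi continuous at 0)
    [-1j * k,   0.0, -1.0,  0.0       ],  # ik(1 - r) = B      (phi' continuous at 0)
    [0.0,       1.0,  L,   -e         ],  # A + B L = t e^{ikL}  (phi continuous at L)
    [0.0,       0.0,  1.0, -1j * k * e],  # B = ik t e^{ikL}     (phi' continuous at L)
], dtype=complex)
rhs = np.array([-1.0, -1j * k, 0.0, 0.0], dtype=complex)

r, A, B, t = np.linalg.solve(M, rhs)

assert np.isclose(r, -1j * k * L / (2 - 1j * k * L))  # closed form from hand elimination
assert np.isclose(abs(r) ** 2 + abs(t) ** 2, 1.0)     # probability conservation
```

So even at $E=V_0$ the barrier partially reflects, with $|t|^2 = 1/(1+(kL/2)^2)$ under these assumptions.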
{ "domain": "physics.stackexchange", "id": 77145, "tags": "quantum-mechanics, quantum-tunneling" }
Are agave plants perennial?
Question: If I were to harvest an agave plant for its nectar, would it kill the plant? I have watched videos of the process and it seems quite invasive. Answer: When harvesting agave nectar, generally the whole plant is harvested at once to get to the core, where most of the sap is. There isn't exactly an easy way to continually get nectar from the agave plant since it is herbaceous, whereas something like maple trees can be tapped because the wood can support the spigot.
{ "domain": "biology.stackexchange", "id": 10160, "tags": "botany, plant-physiology, plant-anatomy, agriculture, vegetation" }
Surface friction and Newton's third law
Question: My question is regarding a specific case displaying Newton's third law. In the diagram below, a man is shown exerting a force on the wall, which in turn causes an equal and opposite reaction force on the man. I understand that the reason the man doesn't fly backwards is due to friction with the ground - the man exerts an action force on the ground which causes a reaction force back on the man. Hence, there is a 0 net force acting on the man as the original reaction force from the wall cancels out the reaction force from the ground. I am not confused about how objects move in general with respect to the third law since I understand that the action and reaction forces act on two separate objects. However, the logic behind this specific example would suggest that nothing on a plane would be able to move when a force is applied, since the friction from the ground would result in a 0 net force on the object. For example, in the diagram shown below, a hand is exerting an action force on the brick. This in turn causes the brick to exert an action force on the floor which causes a reaction force on the brick. The action force on the brick and the reaction force on the brick exerted by the floor cancel each other out resulting in a 0 net force on the brick. Hence, the brick (or any other object on a plane like this) should not move and accelerate no matter how large a force is supplied. However, we are clearly able to push objects like bricks across a surface like a table in real life. I don't understand how this could be achieved as my understanding tells me that you cannot ever overcome the frictional force of the floor. The original YouTube video where I got my diagrams from: https://www.youtube.com/watch?v=91QYouih4bQ Answer: You are assuming the force applied to the block and the friction force must be equal in magnitude, but this is not always true. 
Adopting the typical simple model of friction, all you have to do is apply a large enough force to overcome static friction between the block and the ground in order for the block to move. Action-reaction force pairs are always equal in magnitude and opposite in direction, but there isn't any "law" in general relating forces in different pairs. There is no reason to assume that the force you apply is always equal and opposite to the friction force. If this is too much, instead think of a block on a frictionless surface with two people pushing on either end. Of course the block will push back on each person with the same magnitude of force each person exerts on the block, but the direction of the block's acceleration is just determined by whoever chooses to push more (or if they push with the same force then the block won't move). There isn't any law that relates the two applied forces.
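A minimal sketch of the threshold argument, using the usual Coulomb model of static friction and made-up numbers:

```python
def block_moves(f_applied, mass, mu_s, g=9.81):
    """Static friction opposes the push only up to mu_s * m * g; beyond that
    the net force is nonzero and the block accelerates."""
    return f_applied > mu_s * mass * g

# 2 kg brick on a surface with mu_s = 0.5: the static limit is ~9.81 N
assert not block_moves(5.0, 2.0, 0.5)    # friction matches the 5 N push, no motion
assert block_moves(15.0, 2.0, 0.5)       # 15 N exceeds the limit, the brick moves
```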
{ "domain": "physics.stackexchange", "id": 54548, "tags": "newtonian-mechanics, forces, acceleration, free-body-diagram" }
Imaging fidelity in a 4F setup
Question: In Fourier optics, a 4$f$ setup is an arrangement of two lenses like so: The idea is that the beam waist $\omega_f$ at the last position (at $x = 2f_1 + 2f_2$) is equal to the waist $\omega_i$ at $x = 0$ (times the magnification, given by the ratio of the two focal lengths). Does this only happen when the lenses are separated by $f_1 + f_2$? Ideally the distance between the lenses can be infinite, since the beam is collimated. Obviously the beam is Gaussian so one cannot do that, but still. Is the image at $x = 2f_1 + 2f_2$ only focussed (at a waist) if the inter-lens distance is $f_1 + f_2$? Answer: Yes, the lenses have to be separated by $f_1+f_2$. To see why, consider in the ray optics picture what happens when the distance is different and the input rays are parallel instead of originating at a point in the focus. Or consider in the Fourier optics picture where the Fourier transform plane would be in such a system.
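The hint can be made concrete with ray-transfer (ABCD) matrices. Measuring from the front focal plane of the first lens to the back focal plane of the second, the $B$ element vanishes for any separation $d$ (a point still images to a point in ray optics), but the $C$ element equals $(d-f_1-f_2)/(f_1 f_2)$, so only $d=f_1+f_2$ keeps parallel input rays parallel and puts the Gaussian-beam waist at the output plane. A small check (my own sketch, not from the answer):

```python
import numpy as np

def prop(L):
    """Free-space propagation over distance L."""
    return np.array([[1.0, L], [0.0, 1.0]])

def lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def system(f1, f2, d):
    # front focal plane -> lens 1 -> gap d -> lens 2 -> back focal plane
    return prop(f2) @ lens(f2) @ prop(d) @ lens(f1) @ prop(f1)

f1, f2 = 0.10, 0.25
for d in (f1 + f2, 2 * (f1 + f2), 5.0):
    (A, B), (C, D) = system(f1, f2, d)
    assert abs(B) < 1e-12                               # imaging condition holds for any d
    assert np.isclose(A, -f2 / f1)                      # magnification -f2/f1
    assert np.isclose(C, (d - f1 - f2) / (f1 * f2))     # but C = 0 only at d = f1 + f2
```

For a Gaussian beam with $B=0$, the output parameter obeys $1/q' = C/A + D/(Aq)$, whose real part vanishes for a waist input only when $C=0$, which is the statement in the answer.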
{ "domain": "physics.stackexchange", "id": 43368, "tags": "optics, geometric-optics, lenses" }
How do you configure uvc_camera?
Question: I'm trying to get my camera_node working, and it doesn't seem to be taking the parameters given into account at all. My launch file (excerpt):

Output from rostopic echo /lifecam/camera_info:

header: 
  seq: 463
  stamp: 
    secs: 1303400952
    nsecs: 715715842
  frame_id: camera
height: 480
width: 640
distortion_model: ''
D: []
K: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
R: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
P: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---

Clearly the calibration data is not being read (thus a bunch of zeros in the camera_info message), and the width/height parameters are the defaults, not the values I set them to. I'm not sure what I'm doing wrong here: I can view the image stream in rviz, but rviz will complain about NaNs and infs in the calibration data (since it's not being read). Help would be greatly appreciated.

lifecam.yaml:

$ cat rover/lifecam.yaml
---
image_width: 640
image_height: 480
camera_name: camera
camera_matrix:
  rows: 3
  cols: 3
  data: [661.285, 0, 391.756, 0, 650.523, 205.652, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.125743, 0.237875, -0.0083126, 0.0506531, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [661.285, 0, 391.756, 0, 0, 650.523, 205.652, 0, 0, 0, 1, 0]

I just updated the above data as per Markus Bader's recommendations - doesn't seem to have worked. The camera was recalibrated, and it appears that it is certainly not reading the params set: when set_camera_info is called, it saves the data into /tmp/calibration.yaml, not the file set in the launchfile. Thanks for all the help! 
-- EDIT -- This seems to be some sort of a weird rosparam problem - I just tried to set the parameters by editing the defaults in /opt/ros/diamondback/stacks/camera_umd/uvc_camera/src/camera.cpp, and calibration data and everything work from there.

Originally posted by rbtying on ROS Answers with karma: 73 on 2011-04-20
Post score: 0

Original comments
Comment by Ken on 2011-04-21: The combination of nodelets and the explicit node name is triggering the problem. With e.g.: "ROS_NAMESPACE=camera rosrun uvc_camera camera_node _width:=800 _height:=600 __name:=lifecam" the /camera/lifecam/height param is set but getPrivateNode still has us looking at /camera/uvc_camera/height
Comment by rbtying on 2011-04-20: Yes, I did - see above for the lifecam.ini
Comment by Eric Perko on 2011-04-20: What does your lifecam.ini file look like? Did you calibrate using the camera_calibration package and "commit" the calibration to that file?

Answer: It looks like you are setting the size of the image to something different than what you calibrated with. The lifecam.ini is calibrated for 640x480, but you have the camera configured for 320x240. The camera calibration size needs to match the size of the image; the camera_info_manager doesn't handle the current image size being different than the calibrated size if I recall correctly. Try setting the camera to 640x480 and see if the problem still occurs. If not, try setting the resolution to 320x240 and then recalibrating and using the new file. A shortcut could be to just edit your calibration's width and height values, but there is no guarantee that the calibration will still be correct at the different image size. 
Originally posted by Eric Perko with karma: 8406 on 2011-04-20
This answer was ACCEPTED on the original site
Post score: 1

Original comments
Comment by rbtying on 2011-04-20: It's probably worth noting that it did accept the calibration from the cameracalibrator.py's 'commit', it just didn't keep it after the instance was killed | also that NONE of the params are being set - changing the frame doesn't work, nor the dimensions, nor the framerate.
Comment by rbtying on 2011-04-20: I tried setting width/height to 640x480, as well as changing the ini file's parameters, but there's no change: the camera_info still says 640x480 (regardless of what I set height/width to in both the launch and ini files), and the other values are all zero.
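A hypothetical sanity check mirroring this diagnosis (not part of any ROS package; the sizes are the ones from the question and this answer):

```python
# from lifecam.yaml in the question
calibration = {"image_width": 640, "image_height": 480}
# what the launch file was asking the driver for, per this answer
requested = {"width": 320, "height": 240}

def calibration_matches(calib, req):
    # the calibrated size must equal the streamed size, otherwise the
    # all-zero CameraInfo seen in the question is published instead
    return (calib["image_width"] == req["width"]
            and calib["image_height"] == req["height"])

assert not calibration_matches(calibration, requested)            # the diagnosed mismatch
assert calibration_matches(calibration, {"width": 640, "height": 480})
```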
{ "domain": "robotics.stackexchange", "id": 5414, "tags": "ros, roslaunch, uvc-camera" }
What is going on with this generating functional? (QFT)
Question: I am reading Peskin and Schroeder's chapter on functional methods and they compute the following correlation function: \begin{equation*} \begin{split} \langle 0| T\phi_1\phi_2\phi_1\phi_3 |0\rangle &= \frac{\delta}{\delta J_1 } \frac{\delta}{\delta J_2 } \frac{\delta}{\delta J_3 }[-J_xD_{x4}]e^{-\frac{1}{2}J_xD_{xy}J_y}\bigg|_{J=0}\\ &= \frac{\delta}{\delta J_1 } \frac{\delta}{\delta J_2 } [-D_{34}+J_xD_{x4}J_yD_{y3}] e^{-\frac{1}{2}J_xD_{xy}J_y}\bigg|_{J=0}\\ &= \frac{\delta}{\delta J_1 } [-D_{34}J_xD_{x2}+D_{24}J_yD_{y3}+J_xD_{x4}D_{23}] e^{-\frac{1}{2}J_xD_{xy}J_y} \bigg|_{J=0}\\ &=D_{34}D_{12}+D_{24}D_{13}+D_{14}D_{23} \end{split} \end{equation*} However, I don't understand why in the third line the term \begin{equation*} J_xD_{x4}J_yD_{y3} \frac{\delta}{\delta J_2} e^{-\frac{1}{2}J_xD_{xy}J_y} = J_xD_{x4}J_yD_{y3}(-J_zD_{z2}) \end{equation*} doesn't appear. I would expect this term from the product rule acting on $J_xD_{x4}J_yD_{y3}e^{-\frac{1}{2}J_xD_{xy}J_y}$ in the second line. What am I missing? Note on notation: I am using Peskin's short notation where: $$ -\frac{1}{2}J_xD_{xy}J_y=-\frac{1}{2} \int d^4x'\,d^4y'\, J(x') D_F(x'-y')J(y') $$ Answer: Since all the $J_u$ are set to zero at the end, any term in which they are still present after the final functional derivative has acted will die anyway. That's why Peskin and Schroeder elide them - but of course this term arises due to the product rule (and is included in the derivation in e.g. Schwartz)
{ "domain": "physics.stackexchange", "id": 72925, "tags": "quantum-field-theory, path-integral, propagator, wick-theorem" }
Chern-Simons form of Euler class
Question: Consider the Euler class for curvature $F_{AB} = d\omega_{AB}+\omega_A^{~~~C}\wedge\omega_{CB}$, where $\omega$ is the spin-connection, given by $$\int_{\mathcal{M}} \epsilon^{ABCD}F_{AB} \wedge F_{CD} = \int_{\mathcal{M}}\star F^{CD}\wedge F_{CD}$$ where $\epsilon^{ABCD}F_{AB} = \star F^{CD}$. Now, I wish to find its Chern-Simons form. For that I expand one of the $F$'s in the above. $$\int_{\mathcal{M}}\star F^{CD}\wedge F_{CD} = \int_{\mathcal{M}}(d\omega_{AB}+\omega_A^{~~~C}\wedge\omega_{CB})\wedge \star F^{AB} \\= \int_{\partial\mathcal{M}}\omega_{AB}\wedge\star F^{AB}+ \int_{\mathcal{M}}\omega_{AB}\wedge d\star F^{AB}+ \int_{\mathcal{M}}\omega_A^{~~~C}\wedge\omega_{CB}\wedge \star F^{AB}$$ The first term in the last equality is the Chern-Simons form I am looking for, but I do not seem to be able to find a way to cancel the last two terms. Can someone provide any suggestions as to how to go about this? My expectation was that somehow the last two terms would combine to become $$\int_{\mathcal{M}}\omega_{AB}\wedge D\star F^{AB}$$ which vanishes by the Bianchi identity. However, $$\int_{\mathcal{M}}\omega_{AB}\wedge D\star F^{AB} \neq \int_{\mathcal{M}}\omega_{AB}\wedge d\star F^{AB} + \int_{\mathcal{M}}\omega_A^{~~~C}\wedge\omega_{CB}\wedge \star F^{AB} $$ Can someone help with this? 
Edit: I have noticed that $$\int_{\mathcal{M}}\omega_{AB}\wedge d\star F^{AB} + \int_{\mathcal{M}}\omega_A^{~~~C}\wedge\omega_{CB}\wedge \star F^{AB} \\= \int_{\mathcal{M}} \omega_{AB}\wedge\star d(\omega^{AC}\wedge\omega_C^{~~~~B}) + \int_{\mathcal{M}} \omega_A^{~~~C}\wedge\omega_{CB}\wedge\star d\omega^{AB} +\int_{\mathcal{M}} \omega_A^{~~~C}\wedge\omega_{CB}\wedge\star(\omega^{AD}\wedge\omega_D^{~~~~B})\\= \int_{\mathcal{M}}d(\star\omega_{AB}\wedge\omega^A_{~~C}\wedge\omega^{CB})+\int_{\mathcal{M}} \omega_A^{~~~C}\wedge\omega_{CB}\wedge\star(\omega^{AD}\wedge\omega_D^{~~~~B})\\ = \int_{\partial\mathcal{M}}\star\omega_{AB}\wedge\omega^A_{~~C}\wedge\omega^{CB}+\int_{\mathcal{M}} \omega_A^{~~~C}\wedge\omega_{CB}\wedge\star(\omega^{AD}\wedge\omega_D^{~~~~B})$$ where I have simply substituted for $F$ in the above. It seems that the last term will not vanish by any means. Does that mean the Euler-Class above doesn't have a corresponding Chern-Simons form? Can anyone comment on this? Answer: The Euler class in this Lorentz gauge gravity context is called the topological Gauss-Bonnet form. However, rather than $$ \int_{\mathcal{M}} \epsilon^{ABCD}F_{AB} \wedge F_{CD} = \int_{\partial\mathcal{M}}\omega_{AB}\wedge\star F^{AB} + \int_{\partial\mathcal{M}}\star\omega_{AB}\wedge\omega^A_{~~C}\wedge\omega^{CB} $$ one is supposed to look for: $$ \int_{\mathcal{M}} \epsilon^{ABCD}F_{AB} \wedge F_{CD} = \int_{\partial\mathcal{M}}\omega_{AB}\wedge\star F^{AB} -\frac{1}{3} \int_{\partial\mathcal{M}}\star\omega_{AB}\wedge\omega^A_{~~C}\wedge\omega^{CB} \\=\int_{\partial\mathcal{M}}\omega_{AB}\wedge\star (d\omega^{AB}+\frac{2}{3}\omega^A_{~~~C}\wedge\omega^{CB}) $$ The derivation in OP of $$ \int_{\mathcal{M}} \omega_{AB}\wedge\star d(\omega^{AC}\wedge\omega_C^{~~~~B}) + \int_{\mathcal{M}} \omega_A^{~~~C}\wedge\omega_{CB}\wedge\star d\omega^{AB}= \int_{\mathcal{M}}d(\star\omega_{AB}\wedge\omega^A_{~~C}\wedge\omega^{CB}) $$ seems to be wrong. 
It should instead be $$ \int_{\mathcal{M}} \omega_{AB}\wedge\star d(\omega^{AC}\wedge\omega_C^{~~~~B}) + \int_{\mathcal{M}} \omega_A^{~~~C}\wedge\omega_{CB}\wedge\star d\omega^{AB}= -\frac{1}{3}\int_{\mathcal{M}}d(\star\omega_{AB}\wedge\omega^A_{~~C}\wedge\omega^{CB}) $$ since you have to be mindful of the extra minus sign when you switch the order in the exterior product of odd-degree objects such as $d$ and $\omega$. Added note: It seems that the manipulations of indices such as $A/B/C/D$ and the $\epsilon^{ABCD}$ ($\star$) operation could be at times confusing/daunting. There is actually a way to get rid of (or hide) these nuisances and do all the calculations in a much simplified/elegant way: all you have to do is write the spin connection one-form in terms of gamma operators (see reference here) $$ \omega = \frac{1}{4}\omega^{AB}\gamma_A\gamma_B $$ Then a lot of formulations can be simplified. For instance, the 4-$\omega$ term can be rewritten as $$ \omega_A^{~~~C}\wedge\omega_{CB}\wedge\star(\omega^{AD}\wedge\omega_D^{~~~~B})\sim Tr[\gamma_5 \omega \wedge\omega \wedge\omega \wedge\omega ] $$ where $Tr[...]$ is the trace over gamma matrices, and $\gamma_5 = i\gamma_0\gamma_1\gamma_2\gamma_3$. And the proof of it being identically zero is straightforward, since: $$ Tr[\gamma_5 \omega \wedge\omega \wedge\omega \wedge\omega ] \\= Tr[\gamma_5 \omega \wedge (\omega \wedge\omega \wedge\omega) ] \\= Tr[\omega \wedge \gamma_5(\omega \wedge\omega \wedge\omega) ] \\= -Tr[\gamma_5 (\omega \wedge\omega \wedge\omega)\wedge \omega ] \\= -Tr[\gamma_5 \omega \wedge\omega \wedge\omega \wedge\omega ] $$ where we have used the fact that $$ \gamma_5 \omega = \omega \gamma_5 $$ and $$ Tr[F\wedge G] = -Tr[G\wedge F] $$ if both $F$ and $G$ are odd-forms. 
For example: $$ d(\star\omega_{AB}\wedge\omega^A_{~~C}\wedge\omega^{CB}) \\ \sim d(Tr[\gamma_5\omega\wedge\omega\wedge\omega]) $$ and $$ d(Tr[\gamma_5\omega\wedge\omega\wedge\omega]) \\=Tr[d(\gamma_5\omega\wedge\omega\wedge\omega)] \\=Tr[\gamma_5d\omega\wedge\omega\wedge\omega - \gamma_5\omega\wedge d(\omega\wedge\omega)] \\=Tr[\gamma_5d\omega\wedge\omega\wedge\omega - \gamma_5\omega\wedge d\omega\wedge\omega + \gamma_5\omega\wedge\omega\wedge d\omega] \\=3Tr[\gamma_5d\omega\wedge\omega\wedge\omega] $$ I will leave it to you as an exercise to prove that the cosmological constant term as shown below is NOT identically zero (contrary to the 4-$\omega$ term being identically zero as proved above) $$ CC \sim Tr[\gamma_5 e \wedge e \wedge e \wedge e ] $$ where $e= e^A \gamma_A$ is the vierbein/tetrad. Hint: $\gamma_5 e = -e \gamma_5 $
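As a quick sanity check of the hint (my addition, not part of the original answer), the two facts $\gamma_5\omega = \omega\gamma_5$ and $\gamma_5 e = -e\gamma_5$, along with the non-vanishing trace behind the surviving cosmological-constant term, can be verified numerically in the Dirac representation with plain Python (no external libraries):

```python
# Sanity check of the gamma-matrix facts used above (Dirac representation),
# using plain 4x4 complex matrices -- no external libraries.

def mm(A, B):
    """Matrix product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def neg(M):
    return [[-x for x in row] for row in M]

def block(a, b, c, d):
    """Assemble a 4x4 matrix from 2x2 blocks [[a, b], [c, d]]."""
    return [a[0] + b[0], a[1] + b[1], c[0] + d[0], c[1] + d[1]]

I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

g0 = block(I2, Z2, Z2, neg(I2))
g1 = block(Z2, sx, neg(sx), Z2)
g2 = block(Z2, sy, neg(sy), Z2)
g3 = block(Z2, sz, neg(sz), Z2)
gammas = [g0, g1, g2, g3]

g0123 = mm(mm(g0, g1), mm(g2, g3))
g5 = [[1j * x for x in row] for row in g0123]   # gamma5 = i g0 g1 g2 g3

# gamma5 commutes with any two-gamma product (hence with omega = (1/4) w^{AB} gA gB)...
assert all(mm(g5, mm(ga, gb)) == mm(mm(ga, gb), g5)
           for ga in gammas for gb in gammas)
# ...but anticommutes with a single gamma (hence gamma5 e = -e gamma5)
assert all(mm(g5, g) == neg(mm(g, g5)) for g in gammas)

# Tr[gamma5 g0 g1 g2 g3] = -4i is nonzero, so the CC-type trace need not vanish
tr = sum(mm(g5, g0123)[i][i] for i in range(4))
assert tr == -4j
```

This is why the 4-$\omega$ trace dies (even number of gammas commutes with $\gamma_5$, then cyclic reordering of odd forms flips the sign) while the tetrad term survives.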
{ "domain": "physics.stackexchange", "id": 98994, "tags": "general-relativity, differential-geometry, chern-simons-theory" }
zbar_ros does not detect QR codes from Gazebo camera sensor
Question: I would like to detect QR codes in my Gazebo simulation. To do this, I would like to use the zbar_ros package, since it is fast and has a small footprint. While I was able to detect QR codes using a real camera (simple USB camera), I cannot get it working with the virtual Gazebo camera sensor. I expect that the problem is in the encoding of the image, but I am not sure: I do not see the flaw in the configuration (and I tried a lot of different configurations already). Any ideas? To be clear: the node does not crash. It just detects zero codes. Sensor section in my URDF <camera name='__default__'> <horizontal_fov>1.047</horizontal_fov> <image> <width>1280</width> <height>720</height> <format>B8G8R8</format> </image> <clip> <near>0.1</near> <far>100</far> </clip> </camera> Some lines from my version of zbar_ros barcode_reader_nodelet.cpp void BarcodeReaderNodelet::imageCb(const sensor_msgs::ImageConstPtr &image) { cv_bridge::CvImageConstPtr cv_image; cv_image = cv_bridge::toCvShare(image, "bgr8"); zbar::Image zbar_image(cv_image->image.cols, cv_image->image.rows, "GREY", cv_image->image.data, cv_image->image.cols * cv_image->image.rows); And some proof that it really should be able to detect the code: For the sake of completeness: The default second argument for toCvShare() was mono16. That did not work for my webcam either; mono8 did. I matched the second argument of toCvShare() to the configuration in my URDF: B8G8R8. zbar_image can also be initialized with the format "Y800" instead of "GREY". If I understand correctly, zbar_image does some conversion itself, so I prefer to do everything in grayscale. Making the QR code bigger does not help. Originally posted by Cees on ROS Answers with karma: 51 on 2017-01-31 Post score: 1 Answer: So, I figured out why this is not working. My initial approach was to revert to the settings of the real camera (since detection worked for this camera), and check the encoding by examining the messages on the camera topic. 
Turned out the encoding was RGB8. So: I changed my Gazebo camera settings to R8G8B8, and indeed: the encoding on the virtual topic was now RGB8. However: QR Codes were still not detected. I changed the zbar_ros nodes a little bit to show an OpenCV window to make sure the translation to OpenCV images was going well. This was the case, so the problem was still unknown. While on the verge of a mental breakdown, desperately staring at my screen, I noticed a difference between the QR code in simulation and the physical code on my desk: it was mirrored! Apparently, the QR code reader on my phone allows for mirrored QR codes, and was able to recognize it in the simulation, while zbar_ros does not support mirrored QR codes. So, in short: mirroring my textures solved everything. Originally posted by Cees with karma: 51 on 2017-02-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2017-02-15: I feel this might be nice to document somewhere (other than here on ROS Answers), but I don't really know where would be the best place for this. re: mirorred: have you verified that Gazebo renders everything correctly? There have been problems with the simulated camera in the past. Comment by Cees on 2017-02-15: I am not sure either: It is not related to zbar_ros, not really to Gazebo as well, but to Ogre3D if I understand correctly. I feel I do not know enough about how Ogre3D applies texture to recognize if this is a bug or not. If there is anything I can do to validate if the camera works correctly, lmk. Comment by paulbovbel on 2017-02-17: nice find! Feel free to add a gazebo-related note to the zbar_ros wiki. Comment by Cees on 2017-02-17: Will do somewhere the coming weeks, but first I'd like to validate if it has anything to do with this: http://answers.ros.org/question/232534/gazebo-camera-frame-is-inconsistent-with-rviz-opencv-convention/
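As a footnote to the mirroring fix above: instead of editing the textures, one could also flip the image buffer before handing it to zbar. A minimal sketch (mine, not part of zbar_ros) of the row-wise mirroring involved, in pure Python on a GREY/Y800-style row-major byte buffer — in the actual nodelet this would be the equivalent of calling cv2.flip(image, 1) before constructing the zbar::Image:

```python
# zbar receives a row-major grayscale (GREY / Y800) buffer of width*height
# bytes.  If the rendered texture is mirrored, reversing each row of the
# buffer undoes the horizontal mirroring before scanning.

def mirror_horizontally(buf, width, height):
    """Return a new row-major grayscale buffer with each row reversed."""
    out = bytearray(len(buf))
    for row in range(height):
        start = row * width
        out[start:start + width] = buf[start:start + width][::-1]
    return bytes(out)

# Tiny 3x2 "image": rows [1, 2, 3] and [4, 5, 6]
img = bytes([1, 2, 3, 4, 5, 6])
flipped = mirror_horizontally(img, width=3, height=2)
assert flipped == bytes([3, 2, 1, 6, 5, 4])
```

Mirroring twice is the identity, so this is cheap to try as a fallback scan when the first pass finds zero codes.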
{ "domain": "robotics.stackexchange", "id": 26871, "tags": "gazebo" }
Baking Soda and Vinegar: Are there any secondary reactions?
Question: I want to run a Lego Pneumatic Engine on the CO2 that is produced, but before I do, I want to be sure that that's the ONLY gaseous product from a real-world reaction. Everyone seems to assume that it is, just by omission, but no one actually says it. I'm sure it would run, but I'd rather not damage the ABS plastic or probably-steel parts (shiny metal piston shaft), or a rubber seal or the factory lubricant or something else that I can't see. CO2 seems inert enough by itself, but is that really ALL I'll get? (assuming of course, that I keep the foam out) Answer: It seems to me, according to the comments on the question, that there are NO additional reactions or products to worry about. However, because of the foam, some of the original reactants can be tossed up, and find their way into the later parts of the apparatus as a haze/fog/cloud that is not completely inert. Is that more-or-less accurate? I'm trying to get an answer here, and not just a string of comments.
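As an addendum for sizing the engine: granting the comments' conclusion (essentially only CO2, water vapour, and some aerosolized reactants), a rough estimate of the gas yield is easy to compute. The numbers below are mine, not from the thread, and assume complete reaction of NaHCO3 + CH3COOH → CH3COONa + H2O + CO2 with ideal-gas behaviour:

```python
# Back-of-envelope CO2 yield: one mole of CO2 per mole of baking soda.
# Assumes complete reaction and ideal-gas behaviour at room conditions.

M_NAHCO3 = 84.01      # g/mol, sodium bicarbonate
V_MOLAR = 24.5        # L/mol, ideal gas at roughly 25 degC and 1 atm

def co2_litres(grams_baking_soda):
    """Approximate litres of CO2 (25 degC / 1 atm) from a mass of NaHCO3."""
    moles = grams_baking_soda / M_NAHCO3   # 1 mol CO2 per mol NaHCO3
    return moles * V_MOLAR

# One tablespoon of baking soda is roughly 14 g:
print(round(co2_litres(14.0), 1))  # ~4.1 L of gas
```

So a tablespoon-scale charge gives a few litres of gas, which bounds how long the pneumatic engine can run per batch.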
{ "domain": "chemistry.stackexchange", "id": 13958, "tags": "inorganic-chemistry, everyday-chemistry" }
FFT for expanded form of equation multiplication
Question: I know how to use the FFT for multiplying two equations in $O(n\,log\,n)$ time, but is there a way to use FFT to compute the expanded equation before simplifying? For example, if you are multiplying $A(x) = 1 + 2x + 6x^2$ and $B(x) = 4 + 5x + 3x^2$ such that you get $C(x) = A(x) \cdot B(x) = 4 + 5x + 3x^2 + 8x + 10x^2 + 6x^3 + 24x^2 + 30x^3 + 18x^4$ without going directly to the simplified answer? Furthermore, is it possible to use FFT to do this expanded form multiplication in $O(n\,log\,n)$ time? If so, can you show me how to apply FFT to this scenario? Answer: The trivial algorithm that multiplies every monomial in $A$ by every monomial in $B$ takes time $O(|A| \cdot |B|)$ (where $|A|$ is the number of monomials in $A$ or $\deg A + 1$, depending on the representation), which is the same order of magnitude as the size of the output, and so optimal. You only need FFT if you want to actually compute the product $AB$. In particular, there is no way to compute your function in time $O(n\log n)$, simply because the length of the output is $\Omega(n^2)$.
{ "domain": "cs.stackexchange", "id": 1784, "tags": "algorithms, algorithm-analysis, randomized-algorithms, fourier-transform" }
Is there a FNP problem that's NP-hard but not FNP-hard?
Question: For the reductions, choose a class C such that [it's clear what FC means] and FC is not known to be able to solve the satisfaction search problem, and assume that FC indeed can't solve that search problem. With the following definitions of hardness for FNP problems R, NP-hard - The corresponding decision problem is NP-hard (with respect to C reductions). FNP-hard - There is a C reduction f from satisfiability to the corresponding decision problem and a function g in FC such that for all instances x of the satisfiability problem, for all strings y, if f(x) R y then g(x,y) is a satisfying assignment to x. is there an FNP problem that's NP-hard but not FNP-hard? Answer: I'm going to argue that the answer is probably "Yes", i.e., there probably are FNP problems that are NP-hard but not FNP-hard. Warmup: With respect to polynomially-closed reduction classes that can't solve all of FNPSPACE, FNPSPACE-hardness is different from NPSPACE-hardness. Proof: Let R be the relation given by xRy if and only if [x$\in$QBF$\hspace{.02 in}$ and y is the empty string]. QBF $\in$ PSPACE-complete $=$ NPSPACE-complete, so R is in FNPSPACE and NPSPACE-hard. Let f and g form an arbitrary reduction from an arbitrary relation S to R, and let x be an arbitrary instance of S. x has an S-witness $\implies$ f(x) has an R-witness $\implies$ f(x) R empty_string $\implies$ x S g(x,empty_string). In other words, an arbitrary reduction from a search problem to R can be used to solve the search problem. Therefore R is not FNPSPACE-hard with respect to polynomially-closed reduction classes that can't solve all of FNPSPACE. Cryptographic Assumptions and sub-P reductions: For each positive integer j, let R$\hspace{.02 in}$j be the relation given by x R (C,v) if and only if C is a circuit and x is a CNF-SAT instance and size(C) ≤ (number_of_variables_in_(x))$\hspace{.03 in}$j and C(v) satisfies x. 
Obviously, each R$\hspace{.02 in}$j is in FNP and NP-hard. If they are all FNP-hard under non-uniform parallelizable reductions, then there can't even be a version of time-lock puzzles against non-uniform adversaries that is otherwise very weak, namely, ones in which [the size of the puzzle can be polynomial in the time for an honest user to solve it] and [there is no restriction on the resources needed to create the puzzle] and [security only needs to hold infinitely often]. Furthermore, if there are efficiently-computable functions that are one-way against probabilistic parallel adversaries then one can replace both instances of "non-uniform" with "probabilistic". Probabilistic Polynomial-Time Reductions and handwaving: (This is essentially tautological, but) If all NP-hard FNP problems are FNP-hard, then all proof systems for SAT are "somewhat constructive", in the sense that any original instance can be efficiently turned into an equisatisfiable instance so that one will be able to go from any proof of satisfiability of the equisatisfiable instance to a satisfying assignment of the original instance. Non-interactive zaps are certainly not obviously "somewhat constructive" in that sense.
{ "domain": "cstheory.stackexchange", "id": 3594, "tags": "np-hardness, reductions, search-problem" }
What do the indices in 4-vector notation indicate?
Question: What do the indices in 4-vector notation indicate? Is it vector components or the vector itself? I have a little confusion after this Wikipedia article. Since the indices are summed over how can the left hand side have any index? Could you please explain? Notation The notations in this article are: lowercase bold for three-dimensional vectors, hats for three-dimensional unit vectors, capital bold for four dimensional vectors (except for the four-gradient), and tensor index notation. Four-vector algebra Four-vectors in a real-valued basis A four-vector A is a vector with a "timelike" component and three "spacelike" components, and can be written in various equivalent notations: $${\displaystyle {\begin{aligned}\mathbf {A} &=(A^{0},\,A^{1},\,A^{2},\,A^{3})\\&=A^{0}\mathbf {E} _{0}+A^{1}\mathbf {E} _{1}+A^{2}\mathbf {E} _{2}+A^{3}\mathbf {E} _{3}\\&=A^{0}\mathbf {E} _{0}+A^{i}\mathbf {E} _{i}\\&=A^{\alpha }\mathbf {E} _{\alpha }\\&=A^{\mu }\end{aligned}}}$$ (Source) Answer: The confusion stems from the fact that one commonly wants to consider what are called passive transformations as opposed to active transformations. The idea can be seen in three dimensions and then generalized from there. One should not write ${\bf u} = u^a$, it is an abuse of notation. But one can use a symbol without indices, such $\bf u$ or $U$, in more than one way, as I shall explain. Suppose $\bf u$ is a vector in three dimensions. Then we can write $$ {\bf u} = u^1 \hat{\bf e}_1 + u^2 \hat{\bf e}_2 +u^3 \hat{\bf e}_3 $$ where $u^i$ are the components of the vector and $\hat{\bf e}_i$ are some set of basis vectors. Let's suppose those basis vectors are aligned along the coordinate axes of some rectangular system of coordinates. Now consider the effect of a rotation $R$. We can rotate the vector ${\bf u}$ so as to get some other vector ${\bf v} = R {\bf u}$. This is called an active transformation. The vector changes to a different vector. 
But we can also consider a rotation of the coordinate axes, without rotating our vector $\bf u$. This is called a passive transformation because $\bf u$ does not change. If we rotate the coordinate axes then we will find the vectors along the new coordinate axis directions are not the same as the ones along the old coordinate axis directions, so let's use a prime to indicate this distinction: $$ \hat{\bf e}'_i = R \hat{\bf e}_i. $$ From this it follows that $$ \hat{\bf e}_i = R^{-1} \hat{\bf e}'_i. $$ This fact can be useful to note, but for present purposes it is more useful merely to note that the old basis vectors $\hat{\bf e}_i$ can themselves be written in terms of the new basis vectors as a linear sum. So each $\hat{\bf e}_i$ is equal to some sum of $\hat{\bf e}'_i$ and we shall find $$ {\bf u} = u'^1 \hat{\bf e}'_1 + u'^2 \hat{\bf e}'_2 +u'^3 \hat{\bf e}'_3 $$ where $u'^i$ are a new set of coefficients, called components. The point being that the vector $\bf u$ has not changed but the components $u'^i$ are different from the components $u^i$ because the basis vectors $\hat{\bf e}'_i$ are different from the basis vectors $\hat{\bf e}_i$. An example of this fact, now in 4 dimensions, is the widely used relation $$ u'^a = \Lambda^a_{\;\mu} u^\mu $$ where $\Lambda^a_{\; b}$ is the Lorentz transformation and I am now adopting the Einstein summation convention. In general relativity this gets generalised to $$ u'^a = \frac{\partial x'^a}{\partial x^\mu} u^\mu. $$ So far we have maintained a strict distinction between a vector $\bf u$ and its components $u^a$ or $u'^a$. But often it is useful to find a less cluttered notation, where indices (whether superscript or subscript) are not needed. So to this end let's define $$ U = \{ u^a \}. $$ This equation asserts that the symbol $U$ refers to the set of components of the 4-vector $\bf u$ with respect to the unprimed coordinate basis. Notice I am now using $U$ for the set of components and $\bf u$ for the 4-vector. 
Next let's define $$ U' = \{ u'^a \}. $$ This asserts that the symbol $U'$ refers to the set of components of the 4-vector $\bf u$ with respect to the primed coordinate basis. Now if we are doing special relativity where the transformation is a Lorentz transformation, then we can conveniently write the relationship between $U$ and $U'$ as: $$ U' = \Lambda U $$ where it is understood that the components of $\Lambda$ are gathered together in a $4 \times 4$ matrix, and the lists of components $U$ and $U'$ are to expressed as the components of column vectors, and $\Lambda U$ is the ordinary matrix multiplication operation. This is quite a convenient notation so it is widely used, but it leads to the confusion between the set of components and the 4-vector itself. In a passive transformation, such as a change of inertial frame of reference, the 4-vector itself does not change, but the set of components changes from $U$ to $U'$. So strictly one should not call $U$ here 'the 4-vector' but rather 'the set of components of the 4-vector with respect to the unprimed inertial frame', and $U'$ is the set of components of the 4-vector with respect to the primed inertial frame. If you don't like this index-free notation you don't have to use it. For special relativity I think it is quite a nice notation, but I would not employ it in G.R. where I find it clearer to stick to the use of indices when one is referring to components. This does not prevent one from referring directly to the 4-vectors or other tensors when one wishes.
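A small numerical illustration (my sketch, not part of the original answer) of the passive picture in 3D: rotating the basis changes the components, but the vector reconstructed from those components is unchanged:

```python
# Passive transformation: rotate the basis vectors, not the vector u.
# The components u'^i change, yet  u'^i e'_i  reproduces the same u.
import math

def rotate_z(v, th):
    """Rotate a 3-vector by angle th about the z axis."""
    x, y, z = v
    return (math.cos(th) * x - math.sin(th) * y,
            math.sin(th) * x + math.cos(th) * y,
            z)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

th = 0.7
basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
e_new = [rotate_z(e, th) for e in basis]        # primed basis: e'_i = R e_i

u = (2.0, 1.0, 0.0)                             # the vector itself -- unchanged
u_new = [dot(u, e) for e in e_new]              # primed components (orthonormal basis)

# The components changed...
assert abs(u_new[0] - u[0]) > 0.1
# ...but the reconstructed vector  u'^i e'_i  is the same vector u:
recon = tuple(sum(u_new[i] * e_new[i][k] for i in range(3)) for k in range(3))
assert all(abs(r - c) < 1e-12 for r, c in zip(recon, u))
```

This is exactly the distinction between $\bf u$ (invariant) and the component lists $U$, $U'$ (frame-dependent) made above.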
{ "domain": "physics.stackexchange", "id": 76694, "tags": "special-relativity, coordinate-systems, vectors" }
Login script check
Question: I have written a script that sits on the admin portion of my website. Here I assume the user is valid as I have code that checks that already. The code below checks if the user is an Admin. If they are an Admin they will be flagged with a "Y" on the database (this will be a "1" for optimization later but for sanity's sake with testing Y was easier). App Code: Public Function IsUserAdmin(ByVal iUserID As Long) As Boolean Dim sConnString As String = System.Web.Configuration.WebConfigurationManager.ConnectionStrings("mySQL").ConnectionString Dim dsNames As SqlDataSource Dim bReturn As Boolean = False dsNames = New SqlDataSource dsNames.ConnectionString = sConnString Dim sSQL As String sSQL = "SELECT IsAdmin FROM [SystemUsers] WHERE ID=@UserID" dsNames.SelectCommand = sSQL dsNames.SelectParameters.Clear() dsNames.SelectParameters.Add("UserID", iUserID) For Each datarow As Data.DataRowView In dsNames.Select(DataSourceSelectArguments.Empty) ' do I need a loop? If datarow("IsAdmin").ToString().ToUpper = "Y" Then bReturn = True End If Next Return bReturn dsNames.dispose End Function .Net Code 'Assuming basic login was okay we have a UserObject/UserID Dim vAdmin as string vAdmin = IsUserAdmin(Session("UserObject")) If vAdmin = True Then 'Valid User Else Response.Redirect("../Default.aspx") End If Answer: I see you're not using the role manager built into .NET (together with a built-in membership provider). If you were, then this could be codeless and configured in the Web.config. For example, the Web.config of my Logs directory (which contains log files) look like this: <?xml version="1.0"?> <configuration> <system.web> <authorization> <allow roles="Supervisor"/> <deny users="*"/> </authorization> </system.web> <system.webServer> <directoryBrowse enabled="true"/> </system.webServer> </configuration> Second, ideally you should call the Dispose method of your SqlDataSource when you finish using it.
{ "domain": "codereview.stackexchange", "id": 9205, "tags": ".net, vb.net, authentication" }
Confusion on flux definition
Question: In the context of fluid mechanics, flux is the quantity per time per area and one could calculate the total movement of that quantity per unit of time through a control surface by $$\iint{}flux(S,t)dS$$ However when learning about electromagnetism electric flux is defined as $$flux=\iint{}E(S,t)dS$$ How can both results be consistent with one definition? In the first, the quantity would be calculated by integrating the flux. In the second, the flux is the quantity be calculated. Is there something I am missing or is this truly an inconsistency? Answer: Flux is a mathematical quantity that can be defined for any vector field. The flux of the field $\vec{F}$ through some surface $a$ is $$ \Phi = \int \limits_a \vec{F}(\vec{r},t) \cdot \hat{a} ~da. $$ In the case of fluid mechanics, the vector field is the momentum or velocity field of the fluid, and so corresponds to a mass passing through the surface, or total flow rate respectively. In the case of E&M, both the electric and magnetic fields have meaningful fluxes in the theory. However, the meaning is not the rate at which matter passes through the surface, it is instead related to the rate at which the other type of field is induced.
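To see that the two usages line up numerically, here is a discretized version (my sketch) of $\Phi = \int_a \vec{F}\cdot\hat{a}\, da$ for a uniform field through the unit square in the $xy$-plane:

```python
# Midpoint Riemann sum of the flux integral over [0,1]x[0,1] with n_hat = z_hat.
# For a uniform field the sum recovers F_z * Area, regardless of whether F is
# a fluid velocity field (flow rate) or an electric field (electric flux).

def flux_through_unit_square(F, n=100):
    """Midpoint Riemann sum of F(x, y) . z_hat over the unit square."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * h, (j + 0.5) * h
            Fx, Fy, Fz = F(x, y)
            total += Fz * h * h          # F . n_hat da with n_hat = (0, 0, 1)
    return total

# Uniform field: only the z-component contributes, so Phi = F_z * Area = 2.0
uniform = lambda x, y: (3.0, -1.0, 2.0)
assert abs(flux_through_unit_square(uniform) - 2.0) < 1e-9
```

The fluid-mechanics "flux density" in the question is just the integrand $\vec{F}\cdot\hat{a}$ here; the E&M "flux" is the value of the whole integral.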
{ "domain": "physics.stackexchange", "id": 51725, "tags": "newtonian-mechanics, electromagnetism, thermodynamics, fluid-dynamics, flow" }
Optimising multi-part LINQ to Entities query
Question: I have inherited an application which uses LINQ-to-Entities for ORM and don't have a great deal of exposure to this. There are many examples like this throughout the application. Could you tell me where to start in refactoring this code? I appreciate that it's a large chunk of code, however the query isn't particularly performant and I wondered what you thought the main bottlenecks are in this case. /// <summary> /// Get templates by username and company /// </summary> /// <param name="companyId"></param> /// <param name="userName"></param> /// <returns></returns> public List<BrowsingSessionItemModel> GetItemBrowsingSessionItems( int companyId, string userName, Boolean hidePendingDeletions, Boolean hideWithAppointmentsPending, Boolean hideWithCallBacksPending, int viewMode, string searchString, List<int?> requiredStatuses, List<int?> requiredSources, string OrderBy, BrowsingSessionLeadCustomField fieldFilter) { try { IQueryable<Lead> exclude1; IQueryable<Lead> exclude2; IQueryable<Lead> exclude3; //To prepare call backs pending if (hideWithCallBacksPending == true) { exclude1 = (from l1 in db.Leads where (l1.Company_ID == companyId) from l2 // Hiding Pending Call Backs in db.Tasks .Where(o => (o.IsCompleted ?? false == false) && (o.TaskType_ID == (int)RecordEnums.TaskType.PhoneCall) && (o.Type_ID == (int)RecordEnums.RecordType.Lead) && (o.Item_ID == l1.Lead_ID) && (o.Due_Date > EntityFunctions.AddDays(DateTime.Now, -1)) ) select l1); } else { exclude1 = (from l1 in db.Leads where (0 == 1) select l1); } //To prepare appointments backs pending if (hideWithAppointmentsPending == true) { exclude2 = (from a1 in db.Leads where (a1.Company_ID == companyId) from a2 // Hiding Pending Appointments in db.Tasks .Where(o => (o.IsCompleted ?? 
false == false) && (o.TaskType_ID == (int)RecordEnums.TaskType.Appointment) && (o.Type_ID == (int)RecordEnums.RecordType.Lead) && (o.Item_ID == a1.Lead_ID) && (o.Due_Date > EntityFunctions.AddDays(DateTime.Now, -1)) ) select a1); } else { exclude2 = (from a1 in db.Leads where (0 == 1) select a1); } //To prepare deletions if (hidePendingDeletions == true) { exclude3 = (from d1 in db.Leads where (d1.Company_ID == companyId) from d2 // Hiding Pending Deletions in db.LeadDeletions .Where(o => (o.LeadId == d1.Lead_ID)) select d1); } else { exclude3 = (from d1 in db.Leads where (0 == 1) select d1); } IQueryable<Lead> list = (from t1 in db.Leads from t2 in db.LeadSubOwners .Where(o => t1.Lead_ID == o.LeadId && o.Expiry >= DateTime.Now) .DefaultIfEmpty() where (t1.Company_ID == companyId) where ((t2.Username == userName) && (viewMode == 1)) || ((t1.Owner == userName) && (viewMode == 1)) || ((viewMode == 2)) // Either owned by the user or mode 2 (view all) select t1).Except(exclude1).Except(exclude2).Except(exclude3); // Filter sources and statuses seperately if (requiredStatuses.Count > 0) { list = (from t1 in list where (requiredStatuses.Contains(t1.LeadStatus_ID)) select t1); } if (requiredSources.Count > 0) { list = (from t1 in list where (requiredSources.Contains(t1.LeadSource_ID)) select t1); } // Do custom field filter here if (fieldFilter != null) { string stringIntegerValue = Convert.ToString(fieldFilter.IntegerValue); switch (fieldFilter.FieldTypeId) { case 1: list = (from t1 in list from t2 in db.CompanyLeadCustomFieldValues .Where(o => t1.Lead_ID == o.Lead_ID && fieldFilter.TextValue == o.LeadCustomFieldValue_Value) select t1); break; case 2: list = (from t1 in list from t2 in db.CompanyLeadCustomFieldValues .Where(o => t1.Lead_ID == o.Lead_ID && stringIntegerValue == o.LeadCustomFieldValue_Value) select t1); break; default: break; } } List<Lead> itemsSorted; // Sort here if (!String.IsNullOrEmpty(OrderBy)) { itemsSorted = list.OrderBy(OrderBy).ToList(); } else 
{ itemsSorted = list.ToList(); } var items = itemsSorted.Select((x, index) => new BrowsingSessionItemModel { Id = x.Lead_ID, Index = index + 1 }); return items.ToList(); } catch (Exception ex) { logger.Info(ex.Message.ToString()); return new List<BrowsingSessionItemModel>(); } } Answer: You're right... this is a large chunk of code that is a little difficult to digest. With that said, several things come to mind. First off, this method is way too large. This is not a style issue per se... In this case, it's so large, it will not be JIT-optimized because the resulting IL will be too large. Break this up into logical sub-functions. I would suggest you push portions (if not all) of this back to the database using a SPROC, VIEW, or more likely a combination of the two. I would buy some donuts for your DB folks and make a new friend. If that is not an option, I would drop back to ADO.NET and build a single dynamic query that incorporates all the 'exclude' calls, main list call, and the filtering in one shot. This will be a little tedious but should provide you something that is a lot more performant. Keep in mind that old-style ADO.NET is between 150%-300% faster than even the best ORM. That's not a criticism because they are really valuable in some areas, especially if you can leverage near-caching options. That said, they're not a solution for everything.
{ "domain": "codereview.stackexchange", "id": 929, "tags": "c#, linq, entity-framework, asp.net-mvc-3" }
How to better understand the RIP-nomenclature used in the CMIP5 project?
Question: I am a statistician working on a climatological downscaling project in the Caribbean Sea. I am just starting to learn about the CMIP5 GCM data project. I plan to download CMIP5 data from 6 climate centers and examine 3 scenarios: historical, RCP4.5, and RCP8.5. I was hoping that someone could further explain the CMIP5 ensemble RIP-nomenclature to me: R: realization I: initialization P: physics More specifically, if I were to use the same ensemble number (say r1i1p1) across all 6 centers - would I have the exact same data for each scenario (historical, RCP4.5, and RCP8.5)? If not, should I keep the ensemble number consistent across models or should I vary it somewhat? I guess the main question here: is it more common in geography to examine the variance between climate center projections or the variance between runs within a single ensemble? Answer: The RIP-nomenclature is used to distinguish between different runs of the same scenario within a modeling center rather than to indicate any similarity across modeling centers. Strictly it's within and across models, not centers, because some centers have more than one model, e.g., the MIROC center submitted models MIROC5, MIROC4h, MIROC-ESM, etc. So r1i1p1 for some scenario is typically just the ensemble member that happened to be run first for each model. One caveat is that whichever historical ensemble member you pick, you should use the same member from the corresponding RCP scenario, e.g., historical r3i1p1 should join seamlessly with rcp8.5 r3i1p1: A recommendation for CMIP5 is that each so-called RCP (future scenario) simulation should when possible be assigned the same realization integer as the historical run from which it was initiated. From Taylor et al. (2012) CMIP5 Data Reference Syntax (DRS) and Controlled Vocabularies (pdf) There's loads more detail in that Data Reference Syntax doc that I've linked to, and a summary of it on the IPSL website. 
Karl Taylor gave a presentation that summarised it even more succinctly: r = "realization": simulations started from equally likely initial conditions that lead to equally likely realizations of the true climate trajectory i = "initialization": only used in decadal predictions, to distinguish among different initialization procedures p = "physics": to identify simulations that are very closely related (e.g., "perturbed physics" ensemble members or simulations forced by slightly modified parameterizations) In practice, the "initialization" and "physics" ensemble members are a bit specialist and the vast majority of CMIP5 users just stick to the rNi1p1 members, and most of them probably just stick to r1i1p1 from each model (e.g., Peng et al., 2019) for simplicity. You should feel free to use any of the "realisation" members from each model, and you could pick r1i1p1 from one model and r10i1p1 from another. I'd advise sticking to one ensemble member from each model though, to avoid your analysis over-representing the forced responses of models that submitted more realizations to the CMIP5 database. To address your final question, it's more common to look at the variance across the models for a given scenario. But there are also plenty of studies that use the variance within each model ensemble to estimate the uncertainty of a model property due to internal variability, e.g., emergent constraints on climate sensitivity. It really depends on what you're investigating.
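For bookkeeping, the rip label is easy to parse programmatically. A small sketch (mine, not from any CMIP5 tooling) that also encodes the "same realization across historical/RCP" recommendation:

```python
# Parse CMIP5 rip labels like "r3i1p1" and check that a historical member
# and an RCP member share the same realization integer.
import re

RIP = re.compile(r"^r(\d+)i(\d+)p(\d+)$")

def parse_rip(label):
    """'r3i1p1' -> {'realization': 3, 'initialization': 1, 'physics': 1}"""
    m = RIP.match(label)
    if not m:
        raise ValueError(f"not a valid rip label: {label!r}")
    r, i, p = map(int, m.groups())
    return {"realization": r, "initialization": i, "physics": p}

def joins_seamlessly(hist_label, rcp_label):
    """Per Taylor et al. (2012): same realization integer across hist/RCP."""
    return (parse_rip(hist_label)["realization"]
            == parse_rip(rcp_label)["realization"])

assert parse_rip("r3i1p1") == {"realization": 3, "initialization": 1, "physics": 1}
assert joins_seamlessly("r3i1p1", "r3i1p1")
assert not joins_seamlessly("r1i1p1", "r10i1p1")
```

A check like this is handy when pairing downloaded historical and RCP files model by model.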
{ "domain": "earthscience.stackexchange", "id": 1890, "tags": "meteorology, climate-change, climate, climate-models" }
User-friendly string struct and utility functions
Question: Created a new typedef and functions supporting it, making it user-friendly [specifically programmer-friendly]. What does the code do? It makes it easy for users [programmers] to deal with strings as most functions are there. Additionally, we can append, obtain substrings, and replace strings very easily. Also there is no specific limit to the string itself but most functions limit the str to GLOBAL_MAX_CHAR_LIMIT which is defined in mystring.h Note: This code may contain bugs but I have fixed tons of them which I encountered. Also sstrreplace() is limited to have 7 chars in replacing strings -> I don't know why but if the length is greater than 7 weird things happen <- I am in progress of finding its solution. Also got to know that gets() is not safe so I used fgets() and removed '\n' in strinput()! All suggestions, feedback and bug reports are welcome! Note: This code has been updated and has fixed bugs. The updated version can be found here The header file: mystring.h #ifndef STRING_H_DEFINED #define STRING_H_DEFINED #define GLOBAL_MAX_CHAR_LIMIT 200 //dont need to include stdbool.h! 
#define true 1 #define false 0 typedef int bool; typedef struct{ char * _str; }sstring; typedef struct{ char _str[GLOBAL_MAX_CHAR_LIMIT]; }substr; //functions declarations sstring cstr2sstr(char _cstr[GLOBAL_MAX_CHAR_LIMIT]); char sgetchar(sstring _str,int _pos); substr strinput(); substr ssetchar(sstring _str,int _pos,char _ch); substr sstrappend(sstring _str,char * _append); substr ssubstr(sstring _str,int pos1,int len); char * sstrval(sstring _str); int cstrlen(const char * _str); int sstrlen(sstring _str); int sstrfind(int init,sstring _strvar,const char * _find); sstring sstrreplace(sstring _str,char * _find,char * _repl); #endif mystring.c << to be included and compiled at once #include "mystring.h" #include<string.h> #include<stdio.h> char * sstrval(sstring _str){ return _str._str; } int cstrlen(const char * _str){ int i=0; while(1>0){ if(_str[i]=='\0'){ return i; } i++; } } int sstrlen(sstring _str){ return cstrlen(sstrval(_str)); } substr sstrappend(sstring _str,char * _append){ int _len = sstrlen(_str),_len2=cstrlen(_append); char temp[_len+_len2]; for(int i=0;i<_len+_len2;i++){ temp[i] = _str._str[i]; if(i>=_len){ temp[i] = _append[i-_len]; } } substr ret; for(int i=0;i<cstrlen(temp);i++){ ret._str[i] = temp[i]; } ret._str[_len+_len2] = '\0'; return ret; } int sstrfind(int init,sstring _strvar,const char * _find){ const char * _str; _str = _strvar._str; int _len1 = cstrlen(_str); int _len2 = cstrlen(_find); int matching = 0; //some wierd conditions check [user, are u fooling the function ?] 
if(_len2>_len1||init<0||init>_len1-1||_len2==0){ return -1; } //the main finder for(int i=init;i<_len1+1;i++){ if(_str[i]==_find[0]){ for(int z=0;z<_len2+1;z++){ if(matching==_len2){ return i; } else if(_str[i+z]==_find[z]){ matching+=1; } else{ matching=0; break; } } } } return -1; } substr ssubstr(sstring _str,int pos1,int len){ substr self; if(pos1<0||len<1||sstrlen(_str)<1||pos1+len>sstrlen(_str)){ return self; } char a[GLOBAL_MAX_CHAR_LIMIT]; for(int i=0;i<GLOBAL_MAX_CHAR_LIMIT;i++){ a[i] = '\0'; } if(pos1==0){ for(int i=0;i<len;i++){ a[i] = _str._str[i]; } } for(int i=pos1;i<pos1+len;i++){ a[i-pos1] = _str._str[i]; } substr b; for(int i=0;i<GLOBAL_MAX_CHAR_LIMIT;i++){ b._str[i] = '\0'; } for(int i=0;i<=GLOBAL_MAX_CHAR_LIMIT;i++){ if(i>GLOBAL_MAX_CHAR_LIMIT){ return self; } b._str[i] = a[i]; } return b; } substr ssetchar(sstring _str,int _pos,char _ch){ substr tmp; //nullify the string buffer for(int i=0;i<GLOBAL_MAX_CHAR_LIMIT;i++){ tmp._str[i]='\0'; } if(_pos<0){ return tmp; } //copy the string buffer and set the char of _pos as _ch for(int i=0;i<sstrlen(_str);i++){ tmp._str[i]=_str._str[i]; if(i==_pos){ tmp._str[i]=_ch; } } return tmp; } char sgetchar(sstring _str,int _pos){ char _ret; _ret = '\0'; if(_pos<0){ return _ret; } _ret = _str._str[_pos]; return _ret; } substr strinput(){ substr temp; fgets(temp._str,GLOBAL_MAX_CHAR_LIMIT,stdin); temp._str[cstrlen(temp._str)-1] = '\0'; return temp; } sstring sstrreplace(sstring _str,char * _find,char * _repl){ //duplicate of _str sstring _dup = _str; //temp str's sstring _tmp,_tmp2; substr self,tmp; int _lens = strlen(_str._str); int _lenf = strlen(_find); int _lenr = strlen(_repl); //current limit - idk why but if _lenr > 7 WIERD THINGS HAPPEN so return original string if(_lenr>7){ return _dup; } int temp=0,nottoappend=0; char tmpstr[GLOBAL_MAX_CHAR_LIMIT]; temp=sstrfind(0,_str,_find); if(temp==-1){ return _dup; } int tmpint=0,num=1; while(1>0){ temp = sstrfind(0,_dup,_find); if(temp==-1){ //incase of char 
buffering printf("%c\b \b",_dup._str[sstrlen(_dup)-1]); return _dup; }else{ if(temp==sstrlen(_dup)-1){ _dup._str = ssubstr(_dup,0,sstrlen(_dup)-1)._str; _dup._str = sstrappend(_dup,_repl)._str; //incase of char buffering printf("%c\b \b",_dup._str[0]); return _dup; } if(_lenr==0){ _tmp._str = ssubstr(_dup,0,temp)._str; _tmp2._str = ssubstr(_dup,temp+_lenf,sstrlen(_dup)-temp-_lenf)._str; _dup._str = sstrappend(_tmp,_tmp2._str)._str; }else{ _tmp._str = ssubstr(_dup,0,temp)._str; _tmp2._str = ssubstr(_dup,temp+_lenf,sstrlen(_dup)-temp-_lenf)._str; _dup._str = sstrappend(_tmp,_repl)._str; _dup._str = sstrappend(_dup,_tmp2._str)._str; } } } } sstring cstr2sstr(char _cstr[GLOBAL_MAX_CHAR_LIMIT]){ sstring _ret; _ret._str = ""; for(int i=0;i<cstrlen(_cstr);i++){ _ret._str = sstrappend(_ret," ")._str; } for(int i=0;i<cstrlen(_cstr);i++){ _ret._str = ssetchar(_ret,i,_cstr[i])._str; //incase of char buffering printf("%c\b%c\b \b",_cstr[i],_ret._str[i]); } return _ret; } exampleusage.c #include<stdio.h> #include "mystring.h" int main(){ int len1,find; sstring var1,var2,var3; char chr; printf("Enter your name: "); //strinput! var1._str = strinput()._str; //length of string len1 = sstrlen(var1); printf("\nYour name's Length is: %d",len1); //replacing all spaces with underscores var2._str = sstrreplace(var1," ","_")._str; printf("\nYour url might look like : '%s'",var2); //with no spaces var2._str = sstrreplace(var1," ","")._str; printf("\nConJusted Name: %s",var2); //appending " is a good person full of honesty and great behaviour!" var2._str = sstrappend(var1," is a good person full of honesty and great behaviour!")._str; printf("\nGood about you: %s",var2); //finding space! find = sstrfind(0,var1," "); printf("\nFirst space in your name is in %d charachter",find); //substrings! 
find = sstrfind(0,var1," "); var2._str = ssubstr(var1,0,find)._str; printf("\nYour first name: %s",var2); var2._str = ssubstr(var1,find+1,len1-find-1)._str; printf("\nYour last name: %s",var2); //setting chars! //set first char to 'F' var2._str = ssetchar(var1,0,'F')._str; printf("\nYour name with first char as 'F': %s",var2); //getting chars! //get last char of name! chr = sgetchar(var1,sstrlen(var1)-1); printf("\nYour name's last char is '%c'",chr); } Answer: Don't reimplement bool Just #include <stdbool.h>. The problem with reimplementing it is that it will now conflict with the definitions from <stdbool.h>, so someone using both <stdbool.h> and your string library will have a problem. Even worse, you are not using bool, true or false anywhere in your own code, so it was useless to define them to begin with. Don't reimplement standard library functions Why implement cstrlen() when you can just use the standard library's strlen()? In fact, you call strlen() yourself inside sstrreplace(). Related to this: The result of strlen() is a size_t An int might not be large enough to represent the size of all strings on your platform. The proper type to store the length of a string is size_t. Buffer overflows In sstrappend() the concatenation of two strings can be longer than GLOBAL_MAX_CHAR_LIMIT. You don't check against that limit when copying temp into ret._str. This means your program has a buffer overflow that might be exploitable. Note that just ensuring you never write more than GLOBAL_MAX_CHAR_LIMIT characters into ret._str might fix the buffer overflow, but still is problematic. Consider you might want to create a full pathname by concatenating a directory name and a filename, and the result is longer than can be stored. If you return a truncated filename, the program might open the wrong file, which might itself be a security issue. 
Incorrect copying of sstrings: In sstrreplace(), I see: sstring _dup = _str; This will not actually create a copy of the string itself, it will only create a copy of the pointer to the string, because afterwards the following is true: _dup._str == _str._str. Inconsistent use of underscores: Some variables are prefixed with an underscore, some are not. I don't see any pattern to it. Also, some uses of leading underscores are reserved in C; I recommend you don't use them as prefixes at all. Work with the standard library instead of against it: The standard string functions in C leave a lot to be desired, so creating a library to add functionality like search-and-replace is great. But I recommend you just make it work with regular C strings, and have it return regular C strings. There are several ways in which you can do that: Have your function allocate memory for the result, and return a pointer to that memory after storing the result in it. Have the caller specify a pointer to and the size of a buffer where your function can write the results to. Also, just use standard library functions inside your own functions when appropriate. For example, I would rewrite sstrappend() like so: char *sstrappend(const char *str1, const char *str2) { size_t len1 = strlen(str1); size_t len2 = strlen(str2); char *result = malloc(len1 + len2 + 1); if (!result) { /* Could not allocate memory, let the caller deal with it. */ return NULL; } memcpy(result, str1, len1); memcpy(result + len1, str2, len2); result[len1 + len2] = '\0'; return result; }
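The pointer-versus-buffer distinction behind the `sstring _dup = _str;` remark generalizes beyond C. As an illustration (not part of the original review, and in Python rather than C for brevity), the same aliasing shows up with any reference assignment:

```python
# Assigning a name copies the reference, not the buffer -- the Python
# analogue of `sstring _dup = _str;` copying only the `_str` pointer.
buf = list("hello")
alias = buf          # both names now refer to the same buffer
alias[0] = "H"
print(buf[0])        # "H": the "original" sees the change

clone = buf[:]       # an actual copy of the buffer (cf. malloc + memcpy)
clone[1] = "X"
print(buf[1])        # "e": the original is untouched
```

In the C code, a real copy would mean allocating a fresh buffer and copying the characters into it, as the rewritten sstrappend() above does for its result.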
{ "domain": "codereview.stackexchange", "id": 42279, "tags": "c, strings" }
Can't install Ros2 on Ubuntu 16.04 from debs
Question: Hi! I've been trying to follow the tutorial for installing ROS2 Crystal from debs. However, when I try doing apt install ros-crystal-desktop I get an error: E: Unable to locate package ros-crystal-desktop When doing "sudo apt update" I can see: Hit:22 http://packages.ros.org/ros2/ubuntu xenial InRelease Also the file /etc/apt/sources.list.d/ros2-latest.list exists and has the following contents: deb [arch=amd64,arm64] http://packages.ros.org/ros2/ubuntu xenial main I'm on a laptop with Ubuntu 16.04 with a 64 bit processor and I have ROS Kinetic installed and working flawlessly. Am I missing anything obvious here? Originally posted by msadowski on ROS Answers with karma: 311 on 2019-01-02 Post score: 1 Answer: Am I missing anything obvious here? Perhaps this comment from the page you link: Debian packages for ROS 2 Bouncy (the latest release) are available for Ubuntu Bionic; packages for ROS 2 Ardent are available for Ubuntu Xenial. There are no .debs for Ubuntu Xenial, no matter the architecture. See also REP-2000: ROS 2.0 Target Platforms - Crystal Clemmys (December 2018 - December 2019), where Ubuntu Xenial is marked with an [s], meaning: " [s] " Compilation from source, the ROS buildfarm will not produce any binary packages for these platforms. Originally posted by gvdhoorn with karma: 86574 on 2019-01-02 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by msadowski on 2019-01-02: OK, that was a pretty obvious thing that I've missed. Thanks! Comment by Tejas Kumar shastha on 2019-08-19: Okay this is pretty misleading because the Tutorial page mentioned by OP wrongly mentions (as of posting this comment) that binaries of Crystal are indeed available for Xenial as well as Bionic! Comment by gvdhoorn on 2019-08-19: Can you clarify where you read that? 
I only see: Debian packages for ROS 2 Crystal (the latest release) and ROS 2 Bouncy are available for Ubuntu Bionic; packages for ROS 2 Ardent are available for Ubuntu Xenial. and: ROS 2 Ardent (Ubuntu Xenial) Comment by Tejas Kumar shastha on 2019-10-16: Terribly sorry for the late response, but I hope this is still relevant : In the page https://index.ros.org/doc/ros2/Installation/Crystal/ , it says - Binary packages¶ We provide ROS 2 binary packages for the following platforms: Linux (Ubuntu Xenial(16.04) and Ubuntu Bionic(18.04)) Debian packages "fat" archive OS X Windows
{ "domain": "robotics.stackexchange", "id": 32228, "tags": "ros2" }
What's the term for death by dissolving in AI?
Question: What's the term (if such exists) for merging with AI (e.g. via neural lace) and becoming so diluted (e.g. 1:10000) that it effectively results in a death of the original self? It's not quite "digital ascension", because that way it would still be you. What I'm thinking is, that the resulting AI with 1 part in 10000 being you, is not you anymore. The AI might have some of your values or memories or whatever, but it's not you, and you don't exist separately from it to be called you. Basically - you as you are dead; you died by dissolving in AI. I would like to read up on this subject, but can't find anything. Answer: I find the concept of the a Turing machine useful. In one dimension, everything is a string. All of the parts that are "not you" are merely a substrate, a medium for the program your_mind runs on top of. The you, your identity, the "metaphysical" component we think of as the mind, is a result of running the algorithm that is your_mind on the bioware of your body, or, the hardware (technically, "wetware".) So what we're really talking about is the software, and in that light I might use: Translation because the software is being translated to execute in a new environment, or Migration as in moving software from one system to another. Philip Dick wrote a philosophical narrative, not technically sci-fi, called The Transmigration of Timothy Archer which is about identity moving between bodies. In a rare departure from his usual work about AI and the effects of a technological society on the human spirit, this book looks at the question of identity in the context of the soul, which opens up all kinds of philosophical questions surrounding the type of technology we're speculating on, particularly in relation to the self. I value artistic insight, and Phillip K. is considered quite prescient, so perhaps Transmigration is most appropriate, as it carries both metaphysical and information technology meanings. 
Re death by dissolution, it's worthwhile to look at the etymology of dissolve: late 14c. (transitive and intransitive) "to break up" (of material substances), from Latin dissolvere "to loosen up, break apart," from dis- "apart" (see dis-) + solvere "to loosen, untie," from PIE *se-lu-, from reflexive pronoun *s(w)e- (see idiom) + root *leu- "to loosen, divide, cut apart." Meaning "to disband" (an assembly) is early 15c. Related: Dissolved; dissolving. I think you're on the right track with dilution, certainly per the modern usage, but it may be possible to get more precise. It's not about breaking up, or washing away (except metaphorically); rather, it's about minimization as in the diminishment of the original software kernel (the self) in relation to an expanding algorithm. early 15c., from merger of two obsolete verbs, diminue and minish. Diminue is from Old French diminuer "make small," from Latin diminuere "break into small pieces," variant of deminuere "lessen, diminish," from de- "completely" + minuere "make small" (from PIE root *mei- (2) "small"). The Old French diminuer is apt, as is the Latin deminuere and deminuo, which also carries a meaning of "civil death" and "abatement". Abatement carries a meaning of mitigation, which can be defined fundamentally as "lessening of effect," in this case of the kernel of the original self in relation to the new aggregate. It may be useful to think of it as a ratio 1/ℵ, with the self as the 1. Important to note the ratio is not literal--each number represent an aggregation of functions we call programs. That's how I'd think of it mathematically, but metaphysically, Nat nails it by referencing Jung and the death of the ego. Ego comes from the Latin noun for the self (I, me) which can also be plural (we, us). It's also fun to note that the Latin verb of being is sum, because in English, sum has a mathematical meaning of an aggregate considered as a whole. 
Because we're talking about what it means to be a person, but we also want to connote a function (a process of relative minimization) I am thinking: Depersonalization (noun) the action of divesting someone or something of human characteristics or individuality.SOURCE: Google The fun part is that there's a legal definition of person: In legal use, "corporate body or corporation having legal rights," 15c., short for person aggregate (c. 1400), person corporate (mid-15c.) Which I think applies (loosely) to the aggregation of function that comprises the program we call the self, and the greater aggregation of functions that form a new self. Because here we're talking about the loss of the original individual, depersonalization as opposed to repersonalization as a corporate entity. Alternately, I might propose: Deindividualization with the definition "a loss in individual identity within a group" (also known as deindividuation). (Charles Stross wrote about this in Accellerando, where a character chooses to be subsumed by a group intelligence as the only means of escaping a swarm of autonomous lawsuits;)
{ "domain": "ai.stackexchange", "id": 271, "tags": "agi, terminology, control-problem" }
Exception problem that is related to time issue
Question: I have 2 nodes that all read/deal with time from a recorded dataset(It doesn't start with ros::Time::now() and the like). After I successfully build the point cloud using those data then, I've just realized that I caught a lot of exceptions(after looking at the rxconsole output) that is related to time issue. I would prefer to solve this problem because it will be useful if I want to use with other nodes that are normally expect ROS time. I copied some information from the debug info as below: Is there any solution for this: header: seq: 11 stamp: 1327570083.262721196 frame_id: level: 4 name: /pointcloud_builder_node msg: Problem: Lookup would require extrapolation into the past. Requested time 24.553000000 but the earliest data is at time 1327570080.147540118, when looking up transform from frame [/laser] to frame [/base_link]. If I interpret correctly, 24.553000000 is the one in my dataset and 1327570080.147540118 is the one that ROS was using. Thanks in advance. Originally posted by alfa_80 on ROS Answers with karma: 1053 on 2012-01-25 Post score: 0 Answer: I see two possible solutions here. Either you can use rosbag play --clock <bag> and set the ros parameter use_sim_time to true (link) or you write your own rosbag playback node (e.g. using the python API, link) and set the time stamps manually to current time before publishing them. The former solution causes all your nodes (don't forget to restart them after setting use_sim_time) to run on the timestamps recorded in the bag. That only works if you don't run on a real robot and want to use real sensors. The latter solution is more work to implement but is also more flexible. It would be just a few lines of python code though. Originally posted by Lorenz with karma: 22731 on 2012-01-25 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by alfa_80 on 2012-01-26: Looks good, I'll try it later..Thanks a lot. Comment by Lorenz on 2012-01-26: That's one way. 
Another one is to provide similar functionality in your publisher node. Click on the link :) You basically need to publish time on the /clock topic. I've never done that however. Comment by alfa_80 on 2012-01-26: "..you need to do something similar, i.e. provide simulation time (link on more information in my answer)", you mean, in this case I need to generate a rosbag first right in order to use the --clock parameter? Comment by alfa_80 on 2012-01-26: Nevermind, I'll stick to having those warnings/exceptions first, but later on when I need to use with other packages, I'll change the one that you suggested using ros::Time::now(). Thanks anyway. Comment by Lorenz on 2012-01-26: Sure. You lose something by setting the laser stamp time to current time. That's one reason why rosbag has the --clock parameter. If you need to use correct laser timestamps in your application , you need to do something similar, i.e. provide simulation time (link on more information in my answer). Comment by alfa_80 on 2012-01-26: Somebody told me, that way is also problematic. I think so too. Because if we set using ros::Time::now(), it's very different from the original timestamp as the original one has uneven step size, the sequence is not really like (23.3, 23.4, 23.5) but rather like (23.3, 24.0, 24.2)..somthg like that. Comment by Lorenz on 2012-01-26: Yes. that's exactly what I meant. Comment by alfa_80 on 2012-01-26: Currently, I set it like this [ros::Time scan_time(double(data_set[t][1]/1000)); scan.header.stamp = scan_time;]. Do you mean, instead of doing that I should set it this way [scan.header.stamp = ros::Time::now();] and of course, both cases are running in a loop. Did you mean that? Comment by Lorenz on 2012-01-25: Then use just ros::Time::now() instead of that one of the dataset :). Just make sure to set it not when you are reading from the file but just before publishing. Comment by alfa_80 on 2012-01-25: I'm the one who wrote this publisher. 
So, I just use the timestamp from the dataset when I publish it. Comment by alfa_80 on 2012-01-25: The output: " seq: 32 stamp: secs: 25 nsecs: 348000000 frame_id: laser " Comment by Lorenz on 2012-01-25: If the time stamps are set using ros::Time::now() right before the laser package is published, the stamps should be fine at least. Do a rostopic echo /your/laser/topic to check what they are set to. Comment by alfa_80 on 2012-01-25: It contains many data from some sensors, say for laser scanner data, I write a publisher node that reads certain columns and all rows in a dataset. The timestamps are generated by somebody else which I use it to publish those laser data. Comment by Lorenz on 2012-01-25: How do you play it back then and how are the time stamps generated? Comment by alfa_80 on 2012-01-25: My dataset is not in rosbag anyway. it's just in "normal" .txt format. Comment by alfa_80 on 2012-01-25: For the latter solution also, what do you mean by "set the time stamps manually to current time before publishing them..", how do I convert say for my timestamp of 2 row say "24.553" into ROS-type one? Comment by alfa_80 on 2012-01-25: For the latter solution, does that mean, all my data indices like x-value, say stored in [:,3] will be changed right?
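The comment thread converges on how the stamps are set (dataset time via `ros::Time scan_time(...)` versus `ros::Time::now()`). As a hedged sketch in plain Python (no rospy dependency; the helper name is mine), this is how a floating-point dataset time such as 24.553 s splits into the secs/nsecs pair that a ROS header stamp holds:

```python
def to_secs_nsecs(t):
    """Split a floating-point time in seconds into (secs, nsecs)."""
    secs = int(t)
    nsecs = int(round((t - secs) * 1e9))
    if nsecs == 1000000000:   # rounding can carry into a full second
        secs, nsecs = secs + 1, 0
    return secs, nsecs

print(to_secs_nsecs(24.553))   # like the echoed laser stamp above
```

To play back on wall-clock time instead, one would set the stamp from the current time just before publishing (rospy.Time.now() in the Python API), or keep the recorded stamps and use rosbag play --clock with use_sim_time, as the answer describes.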
{ "domain": "robotics.stackexchange", "id": 7998, "tags": "ros, timestamp, transform" }
Uncorrelated but overlapping spectrum.
Question: The answer may be straightforward, but I cannot figure it out. One can understand the difference between cross-correlation and convolution from the link below: What is the difference between convolution and cross-correlation? The question is: can there be two complex signals which are uncorrelated but whose spectra overlap? In other words, the spectra are not disjoint, i.e. the product of the two signals' power spectra is nonzero at some frequencies? Answer: If you mean correlation at the origin (no signal shifts), then yes, so long as at all the overlapping frequencies the two signals are 90 degrees out of phase. That is, for all positive $\omega$ that are overlapping, $h_1(\omega) = k_\omega h_2(\omega) e^{\pm i \tfrac{\pi}{2}}$, where $h_1$ and $h_2$ are the spectrums and $k_\omega$ is just a real scalar. The negative frequencies will be the conjugate of these. If you mean correlation with signal shifts (cross-correlation - similar to convolution) then no, because if both signals have the same non-zero frequency then at some shift the sinusoids will not be 90 degrees out of phase, and therefore correlated. More simply: Imagine a signal consisting of a sine wave and another consisting of a cosine wave of the same frequency. They will have overlapping spectrums (same frequency) but have zero correlation. Add to the first signal a sinusoid of a certain frequency, and add to the second signal the same sinusoid but 90 degrees phase shifted. Example: Barbara image with DC component removed, and Barbara image with DC removed and half spectrum multiplied by $i$ and the other half by $-i$ (90 degrees phase shift and conjugate for the negative frequencies). Their spectrums overlap completely but they have zero correlation (at shift 0).
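The sine/cosine example can be checked numerically. The sketch below (pure Python, illustrative sample counts of my choosing) confirms zero correlation at shift 0 despite identical spectra, and a large correlation once one signal is shifted by a quarter period:

```python
import math

N, f = 1000, 5                       # 5 full cycles over N samples
s1 = [math.sin(2 * math.pi * f * n / N) for n in range(N)]
s2 = [math.cos(2 * math.pi * f * n / N) for n in range(N)]

# Inner product at zero shift: orthogonal despite the shared spectrum.
corr0 = sum(a * b for a, b in zip(s1, s2))
print(abs(corr0) < 1e-9)             # True

# Circularly shift s1 by a quarter period (N / (4 f) samples): now the
# sinusoids line up and the correlation is maximal, as the answer predicts
# for cross-correlation with shifts.
shift = N // (4 * f)
corr_q = sum(s1[(n + shift) % N] * s2[n] for n in range(N))
print(round(corr_q))                 # N / 2
```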
{ "domain": "dsp.stackexchange", "id": 2902, "tags": "discrete-signals, fourier-transform, linear-systems" }
ROS Noetic xacro issue - Cannot load command parameter [robot_description] returned with code [2]
Question: OS: Ubuntu 20.04.4 LTS Kernel: Linux 5.13.0-35-generic Architecture: x86-64 ROS distro: ROS Noetic I am trying to launch my turtlebot3 for a school assignment in the gazebo simulator using the following command: roslaunch rob521_lab3 turtlebot3_world.launch However, I am getting the following error: xacro: in-order processing became default in ROS Melodic. You can drop the option. substitution args not supported: No module named 'defusedxml' when processing file: /home/sug/catkin_ws/src/rob521_lab3/urdf/turtlebot3_waffle_pi.urdf.xacro RLException: Invalid <param> tag: Cannot load command parameter [robot_description]: command [['/opt/ros/noetic/lib/xacro/xacro', '--inorder', '/home/sug/catkin_ws/src/rob521_lab3/urdf/turtlebot3_waffle_pi.urdf.xacro']] returned with code [2]. Param xml is <param name="robot_description" command="$(find xacro)/xacro --inorder $(find rob521_lab3)/urdf/turtlebot3_$(arg model).urdf.xacro"/> The traceback for the exception was written to the log file I have been trying to search various other forums and threads and most say to modify the .launch file relating to the parameter for "robot description". However, I was unable to modify the launch file param line in any way that resulted in successful launch. Here are my ROS Environment Variables: ROS_VERSION=1 ROS_PYTHON_VERSION=3 ROS_PACKAGE_PATH=/home/sug/catkin_ws/src:/opt/ros/noetic/share ROSLISP_PACKAGE_DIRECTORIES=/home/sug/catkin_ws/devel/share/common-lisp ROS_ETC_DIR=/opt/ros/noetic/etc/ros ROS_MASTER_URI=http://localhost:11311 ROS_ROOT=/opt/ros/noetic/share/ros ROS_DISTRO=noetic Originally posted by sugumarp on ROS Answers with karma: 16 on 2022-03-22 Post score: 0 Answer: Sorry, the solution was a very basic one and solved with one line: pip install defusedxml Now, it is working. I was going too deep into deprecation of xacro from kinetic/melodic to noetic. Originally posted by sugumarp with karma: 16 on 2022-03-28 This answer was ACCEPTED on the original site Post score: 0
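The root cause ("No module named 'defusedxml'") can be confirmed from Python itself before touching the launch file. A small diagnostic sketch (standard library only; the helper name is mine):

```python
import importlib.util

def has_module(name):
    """True if `name` is importable by this interpreter."""
    return importlib.util.find_spec(name) is not None

print(has_module("xml"))                     # stdlib module: True
print(has_module("surely_not_installed_x"))  # missing module: False
```

If the module is missing, installing it into the same interpreter that xacro runs under (e.g. python3 -m pip install defusedxml) resolves the error, which matches the accepted fix.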
{ "domain": "robotics.stackexchange", "id": 37521, "tags": "ros, urdf, xacro, turtlebot3, robot-description" }
Extracting values from dictionaries where the keys match
Question: I have two Dictionaries with around 65,000 KeyValuePairs each. I used foreach and if-else statements to compare them and get values, but it goes very slow. How could I optimize my code and gain more speed? private void bgwCompare_DoWork(object sender, DoWorkEventArgs e) { var i = 0; foreach (KeyValuePair<string, string> line1 in FirstDictionary) { foreach (KeyValuePair<string, string> line2 in SecondDictionary) { if (line1.Key == line2.Key) { ResultDictionary.TryAdd(line1.Value, line2.Value); ListViewItem item = new ListViewItem(line1.Value); item.SubItems.Add(line2.Value); ResultList.Items.Add(item); } i++; bgwCompare.ReportProgress(i * 100 / (FirstDictionary.Count() * SecondDictionary.Count())); } } } Answer: If I'm understanding it correctly, you're populating a ConcurrentDictionary from the values of two other ConcurrentDictionaries, where the keys are equal. Try this, it's vastly faster than your loop in my tests. var matches = FirstDictionary.Keys.Intersect(SecondDictionary.Keys); foreach (var m in matches) ResultDictionary.TryAdd(FirstDictionary[m], SecondDictionary[m]);
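For comparison only (a sketch in Python, not the answer's C#): dictionary key views behave like sets, so the same intersect-then-lookup idea is a one-liner, replacing the O(n·m) nested loop with a single pass over the shared keys:

```python
first = {"a": "1", "b": "2", "c": "3"}
second = {"b": "x", "c": "y", "d": "z"}

# Keys present in both dictionaries, then pair up the corresponding values.
result = {first[k]: second[k] for k in first.keys() & second.keys()}
print(result == {"2": "x", "3": "y"})   # True
```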
{ "domain": "codereview.stackexchange", "id": 19793, "tags": "c#, performance, hash-map, join" }
Axiomatic classical mechanics
Question: I am looking for a book that axiomatizes classical (meaning Newtonian) mechanics, as some books do with special relativity. I would be very interested in reading such a book. Answer: Although its notation is kind of outdated, I find Foundations of Mechanics by Abraham and Marsden a very nice book that really axiomatizes Classical Mechanics. I should warn you, however, that the Math requirements are beyond simple-minded calculus; it requires you to be familiar with Elementary Differential Geometry. The first chapter of the book covers the necessary mathematical background wonderfully, assuming you have already mastered Calculus, undergraduate Linear Algebra, and undergraduate Real Analysis. It is a great book for those who like to be in between Math and Physics. Someone clearly more oriented towards Physics may find the book very rigid.
{ "domain": "physics.stackexchange", "id": 58886, "tags": "newtonian-mechanics, classical-mechanics, resource-recommendations" }
Why is an event horizon dependent on density?
Question: It's frequently stated in Astronomy documentaries that you could replace the sun with a black hole of equivalent mass and the orbital mechanics will continue as normal. This works fine in dispelling the idea that you'll be "sucked" into a black hole, but I don't understand the formation of an event horizon from this. This suggests that Newtonian mechanics (I think) hold from the perspective of the Earth's orbit. If I were to "fall" towards this singularity from Earth, nothing would change as I pass the orbit of Mercury, so it still holds to that point. However, something drastically changes after that point (somewhere) – in one scenario I just fall into the sun and, in another, I somehow hit an event horizon and all sorts of weird things happen. In both cases, it seems that the mass acts in the same spot (a center of gravity) but the dynamics are radically different on that journey... yet the mass is the same, as is the center of mass. My best guess is that, as I cross the surface of the sun, I start accumulating solar mass on the opposite side of me from the center of gravity that's attracting me, and that prevents the formation of an event horizon. Is this reasoning correct? If so, is there a formalised explanation for this? I don't seem to be able to research this properly as a layman to physics. Answer: My best guess is that, as I cross the surface of the sun, I start accumulating solar mass on the opposite side of me from the center of gravity that's attracting me, and that prevents the formation of an event horizon. Is this reasoning correct? Yes, that reasoning is correct. The simplest solution to the Einstein field equations in General Relativity is the Schwarzschild solution, which describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, angular momentum of the mass, and universal cosmological constant are all zero.
The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets, including Earth and the Sun. As balkael mentioned, the Sun's Schwarzschild radius, $r_S$, is approximately 3 km. That means that if the Sun's mass could be compressed down to a sphere of $6\pi$ km circumference it would be a black hole. But that doesn't mean that anything special occurs at 3 km from the centre of the uncompressed Sun. Side note: Newtonian gravity is a very good approximation at distances from the centre of mass that are large compared to $r_S$. At Earth's orbital radius, the difference between Newtonian gravity & GR is minute. Even at Mercury's orbit the difference is rather small. One of the early triumphs of GR is that it correctly predicts the anomalous apsidal precession of Mercury's orbit. According to Newton, the major axis of Mercury's elliptical orbit (aka the line of apsides) would point in a constant direction, if the solar system consisted only of the Sun and Mercury, but due to the gravity of the other planets (and because the Sun isn't a perfect sphere) the line of apsides slowly rotates, as shown in the precession diagram (image not reproduced here). From Wikipedia: Mercury deviates from the precession predicted from these Newtonian effects. This anomalous rate of precession of the perihelion of Mercury's orbit was first recognized in 1859 as a problem in celestial mechanics, by Urbain Le Verrier. The total precession is only 574.10 ± 0.65 arc-seconds per century. The anomalous precession due to relativistic effects is only 43 arc-seconds per century. That is 43 / 3600 degrees. I mentioned earlier that nothing special happens at $r_S$ in the Sun. That's because when you go inside a spherically symmetric body the mass above your head exerts zero gravitational force on you. In Newtonian gravity, this is due to the Shell theorem, as G. Smith said. It's also true in General Relativity, due to Birkhoff's theorem. 
So all of the Sun's matter that's more distant than $r_S$ from the centre cannot create an event horizon. If you could somehow compress that matter sufficiently then a black hole would form, but no known process can do that. As far as we know, the smallest black holes that can be created in a type II supernova explosion have a mass around 3-5 $M_\odot$ (solar masses), with the progenitor star having a mass around 20 $M_\odot$. So density is only of indirect importance, the main thing is to get enough mass within the Schwarzschild radius. Actually, it doesn't have to just be mass, all forms of energy contribute to the stress-energy-momentum tensor which is the source of spacetime curvature.
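The "approximately 3 km" figure follows directly from $r_S = 2GM/c^2$. A quick numerical check, using rounded constants assumed from standard references:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8          # m/s, speed of light
M_sun = 1.989e30     # kg, solar mass

r_s = 2 * G * M_sun / c**2
print(r_s / 1000)    # ~2.95 km: the Sun's Schwarzschild radius
```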
{ "domain": "physics.stackexchange", "id": 67483, "tags": "gravity, black-holes, event-horizon, density" }
The synchronized clocks on earth's surface: at which observer's rate are they beating?
Question: From what I understand, the time rates (I'm not speaking about absolute times) of all clocks on earth's surface are synchronized. This means that, say, a mobile phone's clock is generally not beating the mobile phone's proper time – any synchronization would be out as soon as we swing the phone around – although the difference is probably undetectable with today's technology. I assume this synchronized time rate is the proper-time rate of some observer. My question is: which one? For instance: An observer standing still on Earth at zero sea level and zero latitude? Or an observer at zero sea level and zero latitude that's standing still with respect to the distant stars? Or an observer at infinity (the $t$ coordinate in Schwarzschild's metric)? (My question is different from this question, which is about how synchronization is achieved in practice). There are several nice works discussing effects of time rate differences on and around the earth, for example with relation to the GPS system (on which there are many questions here). I give a list below. But in none of them I managed to find an explicit answer to my question. Glad if you can share some references where this is discussed! [Edit: here are some useful references from @PM2Ring's informative answer: Petit, Wolf: Relativistic theory for time comparisons: a review McCarthy, Seidelmann: Time: From Earth Rotation to Atomic Physics (also here) Guinot: Is the International Atomic Time TAI a coordinate time or a proper time? ] Fliegel, DiEsposti: GPS and Relativity: An Engineering Overview Ashby, Weiss: Global Position System Receivers and Relativity Ashby: Relativity in the Global Positioning System Müller, Soffel, Klioner: Geodesy and relativity Rizzi, Ruggiero: Relativity in Rotating Frames Answer: I assume this synchronized time rate is the proper-time rate of some observer. Yes and no. 
;) UTC is derived from TAI, International Atomic Time, a high-precision atomic coordinate time standard based on the notional passage of proper time on Earth's geoid. TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. It is a continuous scale of time, without leap seconds, and it is the principal realisation of Terrestrial Time (with a fixed offset of epoch). TAI / Terrestrial Time is notionally the proper time of an observer at sea level, but it's not actually defined in terms of an observer at some location. Instead, it's defined as a scaled version of TCG, Geocentric Coordinate Time, which is equivalent to the proper time experienced by a clock at rest in a coordinate frame co-moving with the center of the Earth: that is, a clock that performs exactly the same movements as the Earth but is outside the Earth's gravity well. It is therefore not influenced by the gravitational time dilation caused by the Earth. Since TCG is defined in terms of an observer at rest relative to the Earth's center, it is not affected by special relativity time dilation due to Earth's rotation. The conversion from TCG to TAI is a simple linear equation, so TAI is also oblivious to SR time dilation variations; it assumes a constant rate difference due to gravitational time dilation. Incidentally, the Lorentz time dilation factor due to the Earth's rotation speed at the equator is $\gamma \approx 1 + 1.2×10^{-12}$. Another important precision time scale is TCB, Barycentric Coordinate Time. It is equivalent to the proper time experienced by a clock at rest in a coordinate frame co-moving with the barycenter (center of mass) of the Solar System. Closely related to TCB is TDB, Barycentric Dynamical Time, which is used by JPL for calculating orbits and astronomical ephemerides of planets, asteroids, comets and interplanetary spacecraft in the Solar System.
High precision conversion between TAI and TDB is rather elaborate, since it takes into account the gravitational field of all the major Solar System bodies. For details, see The JPL Planetary and Lunar Ephemerides DE440 and DE441, Park et al (2021). https://doi.org/10.3847/1538-3881/abd414 If you really want to dive into the rabbit-hole of UTC and leap seconds, see A brief history of time scales, by Steve Allen of the Lick Observatory.
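As a quick numerical cross-check of the $\gamma \approx 1 + 1.2\times10^{-12}$ figure quoted above, here is a short sketch; the constants are standard values I've filled in, not taken from the answer:

```python
import math

# Standard values (my own inputs, not from the answer above)
C = 299_792_458.0     # speed of light, m/s
R_EQ = 6_378_137.0    # Earth's equatorial radius, m (WGS-84)
T_SID = 86_164.1      # length of a sidereal day, s

# Rotation speed at the equator, ~465 m/s
v = 2 * math.pi * R_EQ / T_SID

# Lorentz factor minus one; for v << c this is ~ v^2 / (2 c^2)
gamma_minus_1 = 1 / math.sqrt(1 - (v / C) ** 2) - 1
```

which reproduces the quoted rate offset of about one part in $10^{12}$.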
{ "domain": "physics.stackexchange", "id": 98894, "tags": "general-relativity, spacetime, coordinate-systems, time, observers" }
Are Instantons Massless?
Question: That is, are the only field configurations which give a non-zero winding number ones in which the Fourier transform includes a factor like $\theta(k^0)\hat{D}\delta(k^2)$, where $\hat{D}$ is some differential operator acting on the delta function? Below I will explain why I think this may be the case, and hopefully we can have a discussion about it: Say we consider the theory of a massless fermion $\psi$ coupled to some gauge field $A^a_{\mu}$ in $4$ spacetime dimensions. The Lagrangian I would like to consider is $$L=i\bar{\psi}\gamma^{\mu}\nabla_{\mu}\psi-\frac{1}{4g^2}f^a_{\mu\nu}f^{a\mu\nu}$$ We know that this theory has a chiral anomaly of the form $$\partial_{\mu}J^{\mu}_5=-\frac{C}{32\pi^2}\varepsilon^{\mu\nu\rho\sigma}f^a_{\mu\nu}f^a_{\rho\sigma}$$ Where $\text{tr}(t_at_b)=C\delta_{ab}$ and $t_a$ are the generators of the gauged Lie group in the representation that $\psi$ transforms under. Note first of all that if we integrate both sides we get $$Q_5(t=\infty)-Q_5(t=-\infty)=-2C\nu$$ Where $\nu$ is the instanton winding number. At this point, I would now like to consider the Wilsonian Effective Lagrangian $L_{\Lambda}$, for which all of the modes of mass $m>\Lambda$ are integrated out. Under a chiral transformation $\psi\to e^{i\alpha\gamma_5}\psi$, we know that $L_{\Lambda}\to L_{\Lambda}-\alpha\frac{C}{32\pi^2}\varepsilon^{\mu\nu\rho\sigma}f^a_{\mu\nu}f^a_{\rho\sigma}$. For this to be the case, it would be essential that none of the massive modes of mass $m>\Lambda$ have non-zero winding number. This must be the case, or else instantons would affect the transformation properties of $S_{\Lambda}=\int d^4xL_{\Lambda}$ under a chiral transformation. This must also be the case for all values of $\Lambda$, since this chiral transformation does not depend on $\Lambda$ either. Therefore the only room we have for instantons is for them to be massless, and to have the factor mentioned above in their Fourier transform.
The statement I left in bold is what this argument hinges on, and is also a statement whose validity I am not certain of. I know for sure that if we only integrated out the high mass modes of $\psi$, that would be the way $L_{\Lambda}$ transforms. However, the fact that we also integrate out the high mass modes of $A^a_{\mu}$ potentially screws this nice transformation up. I would really appreciate getting both feedback on the argument, and whether or not the statement in bold is true! Answer: Introductory pedantry Asking whether "an instanton" is massless is a meaningless question: A field configuration does not correspond to a quantum state, quantum states are rather (in one representation) functionals in the fields. Things that are not a quantum state do not have a mass since you can't apply the mass operator $P_\mu P^\mu$ to them. Nevertheless, an instanton - as an extremum of the classical action - is frequently taken as the starting point of perturbation theory, and more or less corresponds to some sort of "perturbative vacuum state" in this sense. It would be meaningful to ask whether or not we have to assign a "mass" to this vacuum, if it were not additionally complicated that the "true" vacua of an instantonic theory are the $\theta$-vacua which are superpositions of the vacua with definite instanton number, see also this answer of mine. The modes of the quantum field do not intrinsically carry any "winding number", as your question implies in passing. They are operators. A winding number is a characteristic of a field configuration, or of a state built by creation/annihilation operators above a particular perturbative vacuum, not of an operator. In any case, instantons are not excited states that would be related to the naive modes of the quantum field and therefore the effect of the Wilsonian cutoff on them needs to be considered more carefully.
Wilsonian flow in gauge theories A detailed reasoning about how Wilsonian effective actions work for non-Abelian gauge theories would far exceed the scope of this answer. A good starting point is Pawlowski's "Wilsonian Flows in Non-Abelian Gauge Theories" (PDF link), where in particular chapter VI and appendix B are relevant to the question at hand. The upshot is this: The instantonic effects persist, in a suitably renormalized manner, along the entire Wilsonian flow, i.e. they are never cut off. But the chiral transformation you write down does not hold for the Wilsonian action. What one has to establish for the persistence of the instantonic effects is that the zero modes of the regularized Dirac operator don't vanish along the flow, since the difference in chirality of these zero modes is why the instantons appear as the anomalous terms in the first place (cf. its derivation via the Fujikawa method, see also this answer of mine). Then one has to actually compute the zero-mode contribution in terms of the fields with the cutoff in place to see how the term violating the chiral symmetry looks in practice.
{ "domain": "physics.stackexchange", "id": 57527, "tags": "quantum-field-theory, lagrangian-formalism, mass, effective-field-theory, instantons" }
suggestion for good online source according to Syllabus
Question: Hi, are there some online courses, e.g. some classes on Coursera? I have difficulty in following the professor's teaching because I have a weak statistics background. I want to catch up by reading some complementary online resources! Thank you! Answer: A good deal of the topics, excluding the more advanced ones like autoencoders or manifold learning, can be learned in the excellent course of Andrew Ng https://www.coursera.org/learn/machine-learning There's also https://www.coursera.org/specializations/deep-learning?utm_source=deeplearningai&utm_medium=institutions&utm_campaign=WebsiteCoursesDLSBottomButton that goes deeper into aspects of deep learning For generative models I don't have specific recommendations...
{ "domain": "datascience.stackexchange", "id": 6261, "tags": "machine-learning, statistics, data-science-model" }
Interpolated FIR filter (from Oppenheim and Schafer's Discrete-Time Signal Processing, 3rd ed)
Question: [from Discrete-time Signal Processing by Oppenheim and Schafer, 3rd ed., p.196] Two questions: In this context, the filter with system function represented by Eq. (103) is called an interpolated FIR filter. This is because the corresponding impulse response can be seen to be the convolution of $h_1[n]$ with the second impulse response expanded by $M_1$. This explanation as to why Eq. (103) is called an interpolated FIR filter is still not clear to me. Why is the impulse response given by Eq. (104) FIR? Answer: The reason why "interpolated FIR filter" is an appropriate name is that the filter $h_2[n]$ has $M_1-1$ zeros between its (non-zero) filter taps, and the convolution with the filter $h_1[n]$ interpolates those $M_1-1$ values. Eq. $(104)$ generally only represents an FIR filter if both $h_1[n]$ and $h_2[n]$ are FIR filters.
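To make this concrete, here is a plain-Python sketch (the tap values and $M_1$ are toy numbers of my own, not from the book): expanding $h_2[n]$ by $M_1$ inserts $M_1-1$ zeros between its taps, and convolving with $h_1[n]$ "interpolates" those zeros; since both factors have finitely many taps, the combined impulse response does too, hence FIR.

```python
def expand(h, m):
    """Insert m-1 zeros between consecutive taps of h."""
    out = [0.0] * ((len(h) - 1) * m + 1)
    out[::m] = h
    return out

def convolve(a, b):
    """Plain linear convolution of two finite tap lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

M1 = 3
h1 = [0.25, 0.5, 0.25]   # short interpolating filter (toy values)
h2 = [1.0, -2.0, 1.0]    # sparse prototype before expansion (toy values)

# Eq. (104)-style impulse response: h1 convolved with h2 expanded by M1
h = convolve(h1, expand(h2, M1))
```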
{ "domain": "dsp.stackexchange", "id": 9643, "tags": "discrete-signals, filter-design, finite-impulse-response, interpolation, multirate" }
Breaking strength of cord when doubled over a pulley
Question: Consider the situation in which a piece of cord is looped over a pulley and both free ends are fixed to an object which exerts a 10 kN downward force. Would the setup hold given a cord with a tensile breaking strength of 8 kN? My instinct says no, due to the fact that the portion of the rope going over the pulley is subjected to the full 10 kN force. This can be depicted more easily with two pulleys side by side. I ask because in rock climbing, a common method for building an anchor involves a loop of cord tied as depicted below. The top 3 carabiners are clipped to fixed points on the rock (bolts, camming devices, etc.) and the climbers then attach to the bottom "master point" below the figure-8 knot. As you can see, each of the 3 "legs" of the anchor is an example of this pulley problem. I am concerned because in the climbing community, it is accepted as common knowledge that each of the 3 legs will be able to hold twice the rating of the cord (ignoring strength loss due to the figure-8 knot) due to the two strands sharing the load. Is this really the case? Or are the legs only capable of supporting up to the single strand tensile strength rating as I suspect? I am interested in answers concerning both "perfect" frictionless pulleys as well as how a real world pulley (like a carabiner) would affect the outcome. Answer: First, you have a mistake in your second picture. The top tension is 5 kN, just like the other two legs. Your intuition has misled you. You don't add the tensions in the other two legs. Second, for practical ropes, a sharp bend decreases strength. The weakest point will be where it wraps over a carabiner. Another weak point will be the knot, though to a lesser degree than if it was a knot in a single cord, or if it was an overhand knot. Third, friction has a major effect. Loops are often made by laying two ends of a flat strap over each other and sewing. Seat belts are often done this way.
The joint is far stronger than the threads that sew it together. The threads press the ends together strongly. This makes friction very large. Kind of like when a tire sits on the road. Fourth, the knot may be hurting more than helping. Suppose you hung on the anchor and swung to the left. The left two cords would be slack. The right cord would take the full force. You should thread a single long cord under the bottom pulley, over the top left, under the bottom, over the top middle, under the bottom, over the top right, and back to the bottom, so that it makes a single continuous loop. That makes it possible for the cords to slide so the tension in all three legs stays equal. Of course, friction can mess that up. But a single long loop that slides is not a good idea. If the loop breaks anywhere, your whole anchor fails. You do have redundancy with your design. If the top middle carabiner or cord fails, you have two more still holding. The best would be three independent cords to three independent top carabiners. And doubled carabiners or one extra strong carabiner (like you have) for the bottom. It is hard to tie this so that all three have tension. One generally will be slack. But it is redundant. The figure 8 knot is a good idea if you do use one long cord for this anchor. It turns the single loop into three. One leg can break and the other two will hold.
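The load-sharing arithmetic from the first point can be written out in a couple of lines (an idealized sketch: frictionless pulley, no knot or bend losses):

```python
load_kn = 10.0               # downward force on the anchored object
breaking_strength_kn = 8.0   # single-strand tensile rating

# The two strands share the load equally, so each sees half of it;
# the segment over the pulley carries the same 5 kN, not the full 10 kN.
strand_tension_kn = load_kn / 2
holds = strand_tension_kn < breaking_strength_kn
```

So an 8 kN cord holds the 10 kN load with a 3 kN margin per strand, before real-world losses at the bend and the knot eat into it.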
{ "domain": "physics.stackexchange", "id": 73965, "tags": "newtonian-mechanics, forces" }
What supervised machine learning model can be used to generate a scorecard-like result?
Question: A scorecard is typically used in credit applications. One very common model for developing a credit scorecard is logistic regression since it has well-defined probabilities. Apart from logistic regression, is there any model that can be used in the scorecard? For example, I don't know whether a Support Vector Machine can be used since it only outputs a decision boundary. More on the scorecard: Features are assigned weightings All features are categorical The sum of the weightings of all features with value True is the total score (like a checklist) There will be a cutoff point to classify good/bad (label, +1,-1) The distance from the cutoff point represents the probability. Answer: It depends what you mean by "can be used": any regression algorithm can be used, the question is how reliably it would perform. You can compare different algorithms experimentally (if you have a dataset). [Updated after question edited] In general the way to use ML with this kind of setting is to train a classification model based only on the categorical features. Depending on the type of algorithm, the combination of features might not always be a weighted sum, and the result label may or may not be based on a cutoff point. In order to have a cutoff point (thus a numerical prediction), the method must be a soft classification method. Alternatively a regression model could be trained to predict the numerical value. So that leaves you with many options: soft classification: linear/logistic regression, Naive Bayes, ... regression: linear/logistic regression, SVM, decision trees, ... Note: technically the probability doesn't represent "how far from the cutoff point"; it represents the probability of the instance being positive (p=1).
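As an illustration of the soft-classification idea, here is a toy logistic scorecard in plain Python (the feature names, weights, and cutoff are invented for the example, not fitted to any data):

```python
import math

# Invented weights and cutoff, purely illustrative
weights = {"has_income": 2.0, "owns_home": 1.0, "prior_default": -3.0}
cutoff = 0.5  # probability threshold separating good (+1) from bad (-1)

def score(features):
    """Checklist-style total: sum the weights of the True features."""
    return sum(w for name, w in weights.items() if features.get(name))

def probability(features):
    """Logistic link: map the score to a probability of being good."""
    return 1 / (1 + math.exp(-score(features)))

def classify(features):
    return 1 if probability(features) >= cutoff else -1

good = {"has_income": True, "owns_home": True, "prior_default": False}
bad = {"has_income": False, "owns_home": False, "prior_default": True}
```

Any soft classifier could replace the logistic link here; the checklist structure only requires that the final step produce a probability to compare against the cutoff.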
{ "domain": "datascience.stackexchange", "id": 7661, "tags": "logistic-regression, svm, model-selection" }
Rigid body rotation apparent energy paradox
Question: I'm doing some calculations on the motion of a rigid body as part of a project, and (as a tangent) I've come across something that I can't quite explain. Case 1: If I apply a force $F$ through the centre of mass of a rigid body, I can make it move in a straight line with some velocity. Case 2: If, however, I apply the same force $F$ so it is off-set from the centre of mass by some perpendicular distance $d$, I can make it move in a straight line, but also make it rotate. It's a fairly straightforward proof to show that the off-set force in Case 2 can be replaced by a force $F$ at the centre of mass (i.e. identical to Case 1) plus a force couple resulting in the moment $M=F\times d$ (see wikipedia). Now the sticking point - in Case 1 the force (let's say it's an instantaneous impulse) will create some linear motion. If the body has mass $m$ and resultant velocity $v$, then the energy of the system is $E=\frac{1}{2} m v^2$. In Case 2, I have the same force applied at the COM, so I should get the same resultant velocity and the energy would be $E_{\text{linear}}=\frac{1}{2} m v^2$, but I also get a rotation, and that energy will be $E_{\text{rotational}} = \frac{1}{2} I \omega^2$, where $\omega$ is whatever angular velocity results. Now I know there are some details I'm missing here but from this (albeit lacking in some detail) perspective, it looks like there is suddenly more energy in the system, just by pushing at a different point. That just doesn't sound right to me. What's going on here? Is there actually more energy in Case 2, or is there some detail I'm missing/neglecting/glazing over?
The perfectly impulsive force would be the limit $t \to 0$, and that requires $F \to \infty$. We end up with the impulse being given by the product of infinity times zero, which is undefined. So let's consider a non-infinite force $F$ applied for a short but non-zero time $t$. When you apply the force in line with the centre of mass the object moves some distance $s$ and does work: $$ W = Fs $$ The increase in the linear kinetic energy is equal to this work. If you now apply the force offset from the centre of mass by a distance $r$ we still get the same linear acceleration, but now we get an angular acceleration as well. So in addition to moving a distance $s$ the object rotates by some angle $\theta$, and the rotation by a (small) angle $\theta$ means the point of application of the force moves an additional distance $s' = r\theta$. This means the total work done is now: $$ W = F(s + r\theta) $$ It is the extra term $Fr\theta$ that goes into the rotational kinetic energy. So the solution to the paradox is that the force does more work when applied off centre because the point of application of the force moves an increased distance.
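The bookkeeping can be checked numerically. Here is a sketch with made-up numbers, assuming the line of action keeps its offset $r$ while the body turns (a follower force, so the torque stays $Fr$): the work $F(s + r\theta)$ comes out exactly equal to the linear plus rotational kinetic energy.

```python
# Made-up values: constant force F applied off-centre for time t
F, t = 2.0, 3.0          # force (N) and duration (s)
m, I, r = 5.0, 0.8, 0.4  # mass (kg), moment of inertia (kg m^2), offset (m)

v = F * t / m                        # final linear speed of the COM
omega = F * r * t / I                # final angular speed
s = 0.5 * (F / m) * t ** 2           # distance moved by the COM
theta = 0.5 * (F * r / I) * t ** 2   # angle turned

work = F * (s + r * theta)           # work done at the point of application
ke = 0.5 * m * v ** 2 + 0.5 * I * omega ** 2
```

The extra term $F r\theta$ in the work is exactly the rotational kinetic energy, so nothing comes for free: pushing off-centre means the point of application travels further.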
{ "domain": "physics.stackexchange", "id": 73221, "tags": "newtonian-mechanics, reference-frames, rotational-dynamics, rigid-body-dynamics" }
Galactic/Fast-DDS middleware can't loan messages
Question: I am using ROS 2 galactic and running with my environment set for FastRTPS middleware. I am using the loaned message buffer API and the XML configuration for shared memory described in the Fast-DDS documentation (snippet): <transport_descriptors> <!-- Create a descriptor for the new transport --> <transport_descriptor> <transport_id>shm_transport</transport_id> <type>SHM</type> </transport_descriptor> </transport_descriptors> <participant profile_name="SHMParticipant" is_default_profile="true"> <rtps> <!-- Link the Transport Layer to the Participant --> <userTransports> <transport_id>shm_transport</transport_id> </userTransports> </rtps> </participant> When I run my publish code, I get the following output: Currently used middleware can't loan messages. Local allocator will be used. My testing shows that latency is increasing as data size increases so it appears that it really isn't using zero copy. Code can be found at: https://github.com/mschickler/simple_perf It was announced in ROS discourse that the latest Fast-DDS supported zero copy / shared memory. Am I doing something wrong? Does the rmw layer not yet support the new functionality? Any ideas? Originally posted by mschickler on ROS Answers with karma: 95 on 2021-06-14 Post score: 1 Answer: As far as I'm aware, FastDDS does not support the loaned messages, and performs their shared memory transport in some other method. The loaned message API is used in iceoryx right now, which does work with CycloneDDS. Originally posted by allenh1 with karma: 3055 on 2021-06-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by mschickler on 2021-06-15: Do you know how I would go about configuring to use Cyclone DDS with iceoryx integration? When I try this test with default RMW (Cyclone) the results still don't look like zero copy is happening. Comment by allenh1 on 2021-06-15: Yep -- it's not on by default, but you can read about it here, it's pretty painless. 
Comment by budrus on 2021-06-15: There is also additional documentation in rmw_cyclonedds. Also note that the patch release 1 for Galactic should be released in the next 2 weeks and does include improvements and bug fixes for the zero-copy case Comment by mschickler on 2021-06-15: Thanks folks. Appreciate the help!
{ "domain": "robotics.stackexchange", "id": 36528, "tags": "ros2" }
Derivative of signal with missing samples
Question: I have software that tracks an object moving (in the x-dimension only) across a video shot from a stationary camera. I need to find the velocity and acceleration of the object as functions of time. This calculation can be done offline, so there are no real-time constraints. The motion of the object is also pretty regular, decelerating approximately constantly throughout the duration of motion. Normally I would design an FIR differentiator to do the job; however, the object in the video becomes occluded for up to a few frames at a time on a regular basis throughout its motion, so I don't have a point for it for several frames. In other words, my data looks like this: frame_index = [1 2 3 6 7 8 9 13 14 15 16 17 18 21 22 ... ] x_position = [0 0.2 0.39 1.14 1.22 ... ] I've thought about interpolating and then using an FIR differentiator but am unsure what a good scheme for interpolating irregular data like this would be. I've also thought about using a formulation based on Lagrange interpolating polynomials to calculate derivatives directly, but do not know enough about them to understand trade-offs. My goal is accuracy. So, what would be the best way to go about calculating approximations of the first and second derivatives of the data? Answer: See Smooth noise-robust differentiators. While Pavel doesn't address your specific problem there, he and I corresponded about a similar "missing samples" problem earlier this year, and he has some good ideas that you can try. In particular, he recommends polynomial approximation in the region of the missing samples. In addition I'll mention that if you use Neville's Algorithm you can guarantee that the polynomial passes through your "good" sample values. You can find Pavel's contact info in various places on his Website. Greg Berchin
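One concrete way to act on the Lagrange-polynomial idea, sketched in plain Python (the motion model and sample times are my own toy example, not the asker's data): fit the parabola through three neighbouring good samples on the irregular grid and differentiate it analytically. This is exact for quadratic motion, which suits the roughly constant deceleration described in the question.

```python
def lagrange_derivative(t0, f0, t1, f1, t2, f2):
    """Slope at t1 of the parabola through (t0,f0), (t1,f1), (t2,f2)."""
    return (f0 * (t1 - t2) / ((t0 - t1) * (t0 - t2))
            + f1 * (2 * t1 - t0 - t2) / ((t1 - t0) * (t1 - t2))
            + f2 * (t1 - t0) / ((t2 - t0) * (t2 - t1)))

# Toy data: x(t) = 5t - 0.25 t^2, so the true velocity is 5 - 0.5 t.
# Frames 4-6 and 8 are "missing", yet the estimate at t = 7 is still exact.
x = lambda t: 5 * t - 0.25 * t ** 2
v7 = lagrange_derivative(3, x(3), 7, x(7), 9, x(9))
```

With noisy measurements this becomes sensitive to the noise, which is where the smoothing (noise-robust) variants the answer links to come in.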
{ "domain": "dsp.stackexchange", "id": 2242, "tags": "discrete-signals, interpolation, derivative" }
Magnification in eye
Question: NOTE: This is a dynamic question. There is just ONE question that is being asked among the following. So please answer that one only according to given conditions: (Do mention which question is being answered.) Q.1 (conditions to answer Q.1.1 or Q.1.2) What does the human eye perceive (in context to magnification): A) Lateral magnification B) Angular magnification Q.1.1 IF only (A) is correct, then why does it become difficult to distinguish between a small object placed closer and a larger object farther away? Q.1.2 IF only (B) is correct, then why do we see an object (placed at an appropriate distance) through the lens magnified? Because the formula of magnification is $$m=\frac {H_{image}}{H_{object}}=\frac {image-distance}{object-distance}$$ Then we can write, $$\frac {H_{image}}{image-distance}= \frac {H_{object}}{object-distance}$$ This shows that the angular size of the object and the image is the same, so how do we see magnification with a lens? Answer: B is correct. When an object is closer to you, it occupies a larger angular extent: so when you focus on it, it leaves a bigger image on your retina - it looks bigger. UPDATE: when you add a second lens L in between the eye E and the object O, this lens creates a "virtual image" V, which subtends a larger angle as seen at the eye (see the two green lines):
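A toy calculation illustrating why (B) makes "small and near" vs. "large and far" ambiguous (the numbers are mine): a 1 m object at 10 m and a 2 m object at 20 m subtend exactly the same angle, so they leave retinal images of the same size.

```python
import math

def angular_size(height, distance):
    """Full angle subtended at the eye by an object seen face-on."""
    return 2 * math.atan(height / (2 * distance))

near = angular_size(1.0, 10.0)   # 1 m object at 10 m
far = angular_size(2.0, 20.0)    # 2 m object at 20 m
```

Since the eye only measures this angle, the two objects are indistinguishable by image size alone; depth cues have to break the tie.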
{ "domain": "physics.stackexchange", "id": 36119, "tags": "optics, geometric-optics, vision" }
Do fermions of different types have the same quantum states available to occupy?
Question: I'm not asking whether two fermions of different types can occupy the same quantum state, cf. the Pauli exclusion principle. I'm asking whether fermions of different types would have the same options available if you put them in one at a time. An example of what I mean is the fact that muons will occupy much smaller orbitals around nuclei than electrons will. In this case, electrons and muons don't seem to have the same quantum states available for them around a nucleus. Is this the case with any two fermion types? Answer: The difference in the bound states of electrons or muons and a nucleus originates from their different masses (muons are a lot heavier than electrons). Because the Hamiltonian of the problem depends on the mass, the energy eigenvalues do as well. In general, fermions (or any two particles) will be able to occupy the same state if all their properties which enter the Hamiltonian are the same. For example, a spin-up and a spin-down electron have the same states in an atom as long as there is no magnetic field. As soon as a magnetic field is switched on, this degeneracy disappears, because the spins have an energy in the magnetic field which depends on their direction.
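A back-of-envelope sketch of the muon/electron example (approximate particle masses filled in by me): since the Bohr radius scales as the inverse of the reduced mass, the muonic orbital comes out roughly 200 times smaller than the electronic one.

```python
# Approximate rest masses in MeV/c^2 (my own rounded values)
M_E = 0.511     # electron
M_MU = 105.66   # muon
M_P = 938.27    # proton

def reduced_mass(m, M):
    return m * M / (m + M)

# Bohr radius ~ 1 / (reduced mass), so this ratio compares the muonic
# orbital size to the electronic one in a hydrogen-like atom
ratio = reduced_mass(M_E, M_P) / reduced_mass(M_MU, M_P)
```

The mass enters the Hamiltonian, so the two particles see genuinely different sets of bound states, which is exactly the answer's point.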
{ "domain": "physics.stackexchange", "id": 82261, "tags": "quantum-mechanics, hilbert-space, fermions, orbitals, quantum-states" }
Finding Jacobian matrix using the DH parameter table and relative transformation matrices
Question: Let's say we have a three revolute joint robot with the following DH parameter table: Where the last row corresponds to the transformation between the last joint frame and the end-effector frame. We can easily derive the transformation matrices from the DH table: $$A_{i-1,i}=A_{z,\theta}\cdot A_{z,d}\cdot A_{x,a}\cdot A_{x,\alpha}$$ And get the absolute transformation matrix of the minimal configuration: $$A_{0,2}=A_{0,1}\cdot A_{1,2}$$ And of the end-effector: $$A_{0,3}=A_{0,2}\cdot A_{2,3}$$ What are the steps to calculate the Jacobian, without using the predefined formulas? I know that there are some partial derivatives with respect to each $\theta_i$ involved, but I'm not sure what the procedure is. Answer: Your kinematic equation, which links TCP position to joint position, is: $$X = A_{0,3} \times Q$$ or $$ \begin{pmatrix} x \\ y\\ z\\ 1\end{pmatrix} = A_{0,3} \times \begin{pmatrix} \theta_1 \\ \theta_2\\ \theta_3\\ 1\end{pmatrix} $$ based on this, you will need to write: $$ \left\{ \begin{eqnarray} x &=& f_x(\theta_1, \theta_2, \theta_3)\\ y &=& f_y(\theta_1, \theta_2, \theta_3)\\ z &=& f_z(\theta_1, \theta_2, \theta_3)\\ \alpha &=& f_\alpha(\theta_1, \theta_2, \theta_3)\\ \beta &=& f_\beta(\theta_1, \theta_2, \theta_3)\\ \gamma &=& f_\gamma(\theta_1, \theta_2, \theta_3) \end{eqnarray} \right.$$ where $\alpha, \beta, \gamma $ are the Euler angles of end-effector orientation (it does not matter in which order they are defined, you just have to pick one and stick with it, e.g. XYZ order or ZXZ order).
Based on these equations you can write the Jacobian as: $$ J = \left[ \begin{matrix} \frac{\partial f_x}{\partial \theta_1} & \frac{\partial f_x}{\partial \theta_2} & \frac{\partial f_x}{\partial \theta_3}\\ \frac{\partial f_y}{\partial \theta_1} & \frac{\partial f_y}{\partial \theta_2} & \frac{\partial f_y}{\partial \theta_3}\\ \frac{\partial f_z}{\partial \theta_1} & \frac{\partial f_z}{\partial \theta_2} & \frac{\partial f_z}{\partial \theta_3}\\ \frac{\partial f_\alpha}{\partial \theta_1} & \frac{\partial f_\alpha}{\partial \theta_2} & \frac{\partial f_\alpha}{\partial \theta_3}\\ \frac{\partial f_\beta}{\partial \theta_1} & \frac{\partial f_\beta}{\partial \theta_2} & \frac{\partial f_\beta}{\partial \theta_3}\\ \frac{\partial f_\gamma}{\partial \theta_1} & \frac{\partial f_\gamma}{\partial \theta_2} & \frac{\partial f_\gamma}{\partial \theta_3}\\ \end{matrix} \right]$$ Please note that this is $6 \times 3$ as it is used to compute both linear and rotational velocities of the end-effector based on the 3 joint velocities as follows: $$ \left[ \begin{matrix} \dot{x} \\ \dot{y} \\ \dot{z} \\ \dot{\alpha} \\ \dot{\beta} \\ \dot{\gamma} \end{matrix} \right] = J \times \left[ \begin{matrix} \dot{\theta_1} \\ \dot{\theta_2} \\ \dot{\theta_3} \end{matrix} \right] $$
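The same recipe can be tried end-to-end on a simpler example. Below is a sketch for a 2R planar arm of my own (not the 3R robot above), with only $f_x$ and $f_y$: write the forward kinematics, fill the Jacobian with the analytic partials, and check them against finite differences:

```python
import math

L1, L2 = 1.0, 0.5  # made-up link lengths

def fk(t1, t2):
    """Forward kinematics: f_x and f_y for a planar 2R arm."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def jacobian(t1, t2):
    """Analytic partials of f_x, f_y with respect to t1, t2."""
    return [
        [-L1 * math.sin(t1) - L2 * math.sin(t1 + t2), -L2 * math.sin(t1 + t2)],
        [ L1 * math.cos(t1) + L2 * math.cos(t1 + t2),  L2 * math.cos(t1 + t2)],
    ]

def jacobian_numeric(t1, t2, h=1e-6):
    """Forward-difference check of the partial derivatives."""
    x0, y0 = fk(t1, t2)
    cols = []
    for d1, d2 in ((h, 0.0), (0.0, h)):
        x1, y1 = fk(t1 + d1, t2 + d2)
        cols.append(((x1 - x0) / h, (y1 - y0) / h))
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]
```

For the full 3R robot the $f$ functions come from the entries of $A_{0,3}$, but the differentiation step is identical.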
{ "domain": "robotics.stackexchange", "id": 2071, "tags": "kinematics, jacobian, dh-parameters" }
robotino-ros package
Question: Hello everyone, I am using the Robotino with Kinetic. I use the following robotino package from this source for my project. I started it with roslaunch robotino_node robotino_node.launch and then I get some errors in my launch file. ERROR: cannot launch node of type [robotino_node/robotino_node]: can't locate node [robotino_node] in package [robotino_node] ERROR: cannot launch node of type [robotino_node/robotino_odometry_node]: can't locate node [robotino_odometry_node] in package [robotino_node] ERROR: cannot launch node of type [robotino_node/robotino_laserrangefinder_node]: can't locate node [robotino_laserrangefinder_node] in package [robotino_node] ERROR: cannot launch node of type [robotino_node/robotino_camera_node]: can't locate node [robotino_camera_node] in package [robotino_node] process[robot_state_publisher-6]: started with pid [2608] ERROR: cannot launch node of type [robotino_node/robotino_mapping_node]: can't locate node [robotino_mapping_node] in package [robotino_node] robotino_node.launch-file <?xml version="1.0"?> <launch> <arg name="hostname" default="172.26.1.1" /> <node name="robotino_node" pkg="robotino_node" type="robotino_node" output="screen"> <param name="hostname" value="$(arg hostname)" /> <param name="max_linear_vel" value="0.5" /> <param name="min_linear_vel" value="0.05" /> <param name="max_angular_vel" value="3.0" /> <param name="min_angular_vel" value="0.1" /> <param name="downsample_kinect" value="true" /> <param name="leaf_size_kinect" value="0.04" /> <remap from="robotino_joint_states" to="joint_states" /> <!--remap from="image_raw" to="image"/--> </node> <node name="robotino_odometry_node" pkg="robotino_node" type="robotino_odometry_node" output="screen"> <param name="hostname" value="$(arg hostname)" /> </node> <node name="robotino_laserrangefinder_node" pkg="robotino_node" type="robotino_laserrangefinder_node" output="screen"> <param name="hostname" value="$(arg hostname)" /> <param name="laserRangeFinderNumber"
value="0" /> </node> <node name="robotino_camera_node" pkg="robotino_node" type="robotino_camera_node" output="screen"> <param name="hostname" value="$(arg hostname)" /> <param name="cameraNumber" value="0" /> </node> <!--<node pkg="robotino_switch" type="robotino_switch_node" name="robotino_switch" output="screen"> <param name="hostname" value="$(arg hostname)" /> </node>--> <node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher" output="screen"> <param name="publish_frequency" type="double" value="20.0" /> </node> <node name="robotino_mapping_node" pkg="robotino_node" type="robotino_mapping_node" output="screen"> <param name="hostname" value="$(arg hostname)" /> </node> <!--node pkg="tf" type="static_transform_publisher" name="laser_link_broadcaster" args="0.12 0 0.025 0 0 0 base_link laser_link 50" /--> <param name="robot_description" textfile="$(find robotino_description)/urdf/robotino.urdf" /> </launch> What is the problem and how can I solve it? Please let me know if you need more information about this problem. Regards, Markus Originally posted by MarkusHHN on ROS Answers with karma: 54 on 2019-03-20 Post score: 0 Answer: I forgot to source my workspace --> source robotino_ws/devel/setup.bash. That solved my problem. Originally posted by MarkusHHN with karma: 54 on 2020-06-03 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 32688, "tags": "ros, roslaunch, robotino, ros-kinetic" }
When does light reach a shell observer in Schwarzschild metric?
Question: I am trying to simulate the trajectory of light in the Schwarzschild metric (as seen by a far away observer) with fixed $\theta = \pi/2$. According to my source (Chapter 18, section 18.5) the trajectory is then governed by: $$\frac{dr}{dt} = \dot{r}$$ $$\frac{d\phi}{dt} = \dot{\phi}$$ $$\frac{d\dot{r}}{dt} = \frac{-4M^2+2Mr+(r-5M)r^3\dot{\phi}^2}{r^3}$$ $$\frac{d\dot{\phi}}{dt} = \frac{2(-3M+r)\dot{r}\dot{\phi}}{(2M-r)r}$$ I have a situation where a shell observer sits at $(r_T, \phi_T)$ and I know that $r(0) = r_0$, $\phi(0) = \phi_0$, $r(T) = r_T$, $\phi(T) = \phi_T$ where $r_0$, $r_T$, $\phi_0$ and $\phi_T$ are known, but $T$ is unknown. It seems to me that I need an additional constraint to figure out $T$ since I have 4 equations (the ones above), but 5 unknowns ($r(t), \dot{r}(t), \phi(t), \dot{\phi}(t), T$). Do I need an additional constraint to figure out $T$ and what would that constraint be? Answer: The way the equations are presented seems unnecessarily obscure, as there are only two equations that matter: $$ \frac{d^2r}{dt^2} = \frac{-4M^2+2Mr+(r-5M)r^3\dot{\phi}^2}{r^3} $$ $$ \frac{d^2\phi}{dt^2} = \frac{2(-3M+r)}{(2M-r)r} \, \dot{r}\dot{\phi} $$ These come from the geodesic equation expressed using coordinate time. So you start at some convenient $(r, \phi)$ with initial coordinate velocity $(\dot{r}, \dot{\phi})$ and integrate forward in time to calculate $r$, $\phi$, $\dot{r}$ and $\dot{\phi}$ as a function of the coordinate time $t$. You get to pick whatever initial values for $r$, $\phi$, $\dot{r}$ and $\dot{\phi}$ you want, but obviously $\dot{r}$ and $\dot{\phi}$ are related because you're describing a light beam. The relationship comes from the Schwarzschild metric. 
For a light ray $ds = 0$, and we can take $\theta = \pi/2$ and $d\theta = 0$, so we get: $$ 0 = -\left(1-\frac{2M}{r}\right)dt^2 + \frac{dr^2}{\left(1-\frac{2M}{r}\right)} + r^2d\phi^2 $$ or: $$ \left(1-\frac{2M}{r}\right) = \frac{\dot{r}^2}{\left(1-\frac{2M}{r}\right)} + r^2\dot{\phi}^2 $$
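Putting the two pieces together numerically: the sketch below (pure Python RK4; the mass, step size, and starting radius are illustrative assumptions, not values from the question's source) integrates the question's two second-order equations as four first-order ODEs, using the null condition with $\dot{r} = 0$ to fix the initial $\dot{\phi}$. Note that $T$ then comes out of the integration rather than being an extra unknown: you shoot rays and record the coordinate time at which one passes through $(r_T, \phi_T)$. As a sanity check, a ray launched tangentially on the photon sphere $r = 3M$ should stay there.

```python
import math

M = 1.0  # geometrized units G = c = 1 (an illustrative choice)

def derivs(state):
    # state = (r, phi, rdot, phidot); the question's equations of motion
    r, phi, rdot, phidot = state
    rddot = (-4*M**2 + 2*M*r + (r - 5*M)*r**3*phidot**2) / r**3
    phiddot = 2*(r - 3*M)*rdot*phidot / ((2*M - r)*r)
    return (rdot, phidot, rddot, phiddot)

def null_phidot(r):
    # From the metric constraint with rdot = 0:
    # (1 - 2M/r) = r^2 * phidot^2  =>  phidot = sqrt(1 - 2M/r) / r
    return math.sqrt(1.0 - 2.0*M/r) / r

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h*ki for si, ki in zip(s, k))
    k1 = derivs(state)
    k2 = derivs(shift(state, k1, dt/2))
    k3 = derivs(shift(state, k2, dt/2))
    k4 = derivs(shift(state, k3, dt))
    return tuple(s + dt/6*(a + 2*b + 2*c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def integrate(r0, phi0, rdot0, phidot0, dt=0.01, steps=1000):
    state = (r0, phi0, rdot0, phidot0)
    for _ in range(steps):
        state = rk4_step(state, dt)
    return state

# A ray launched tangentially at the photon sphere r = 3M circles forever.
final = integrate(3.0, 0.0, 0.0, null_phidot(3.0))
```

At $r = 3M$ with the null value of $\dot{\phi}$, the numerator of $\ddot{r}$ vanishes exactly, so the integrator should hold $r$ fixed while $\phi$ advances at a constant rate.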
{ "domain": "physics.stackexchange", "id": 22199, "tags": "homework-and-exercises, general-relativity, black-holes, differential-geometry, geodesics" }
Contradiction in Ohm's Law and relation $P=VI$
Question: Ohm's law states that electric current is directly proportional to voltage provided that physical conditions like temperature remain constant i.e. $$V = IR$$ On the other hand, $$\text{Power = Voltage} \times \text{Current}$$ So here it seems that the higher the voltage, the lower the current, provided that the power remains constant (i.e. current is inversely proportional to the voltage here which is against Ohm's Law.). Now my question is how do physicists explain this apparent contradiction? Or maybe this not a contradiction because I am analysing things incorrectly? P.S.: I am a tenth grade student so please refrain from the usage of highly complicated terminologies in your answers. Answer: When you are going from an equation to a proportionality statement you need to be mindful of what is being kept constant. $V=IR$ means that $I$ varies directly with $V$ if $R$ is constant. $P=IV$ means that $I$ varies inversely with $V$ if $P$ is constant. The only time you could get a contradiction is if you are comparing situations where the power is constant and also the resistance is constant. But if that's the case you'll find there is only one solution for $I$ and $V$, that is to say, with those restrictions $I$ and $V$ can't vary - directly or inversely.
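The answer's last point can be checked with two lines of algebra: if both $P$ and $R$ are fixed, substituting $V = IR$ into $P = VI$ gives $P = I^2R$, so $I = \sqrt{P/R}$ and $V = \sqrt{PR}$ are uniquely determined. A quick numerical sanity check (the 60 W / 240 Ω figures are just an illustrative example):

```python
import math

def solve_circuit(P, R):
    # With P and R both fixed, V = I*R and P = V*I force a single
    # positive solution, since P = I**2 * R.
    I = math.sqrt(P / R)
    V = I * R
    return V, I

V, I = solve_circuit(60.0, 240.0)
# Both laws hold simultaneously for the same pair (V, I) -- no contradiction.
```

Here both relations are satisfied at once: $V = IR$ and $P = VI$ each hold, but neither lets $I$ and $V$ vary independently once $P$ and $R$ are pinned down.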
{ "domain": "physics.stackexchange", "id": 35839, "tags": "electric-circuits, electric-current, electrical-resistance, voltage, power" }
Determining number of unique hydrogens in lactupicrinal 1H NMR
Question: I'm having trouble understanding which hydrogens are magnetically distinct in the following molecule (lactupicrinal): I've counted all the hydrogens (there are 20) and I know that hydrogens in a methyl group are all chemically the same (I think), but I don't know about the hydrogens in the ethyl group (double bonded on the lower right side). Am I right in that all other hydrogens outside of those groups are distinct, or are the mirrored hydrogens on the benzene ring also equivalent? Please help, I'm completely at a loss and I've been trying desperately to understand hydrogen equivalence for weeks. Answer: Your question doesn't sound like you care about exact splitting, so I'm going to ignore magnetic equivalence/inequivalence. If you do care, you need to take that into account. For the purposes of this discussion, equivalent means chemically equivalent. You also need to keep in mind that NMR is achiral. In other words, opposite enantiomers will produce the same spectrum. The standard procedure is to replace the atom of interest with something that is distinguishable from the regular atom. I usually replace an $\ce{H}$ atom with an $\ce{H}*$, but anything will do, provided it's still a hydrogen but different from the others. With this replacement you've created a new molecule. For two specific hydrogen atoms, you can perform the replacement operation to generate two molecules. Now, compare these two molecules. If they are different molecules (e.g., geometric isomers, (E)-/(Z)- isomers, cis-/trans- isomers), then the original hydrogen atoms are heterotopic and should have different chemical shifts. Their environments are different. If the molecules are diastereomers, then the two hydrogens are diastereotopic. Since diastereomers can be distinguished by achiral methods, NMR can distinguish them as well. That means that the two original hydrogen atoms will have different shifts. If the molecules are enantiomers, then the two hydrogens are enantiotopic. 
They are different, but because NMR can't distinguish between enantiomers, it can't tell that these two hydrogen atoms are different. They will have the same shift. If the two molecules are identical, then the two hydrogens are homotopic. Nothing can distinguish them. They are chemically equivalent, and by NMR, they will have the same chemical shift. Try applying this test to the hydrogen atoms on the ethyl group and the hydrogen atoms on the ring.
{ "domain": "chemistry.stackexchange", "id": 12704, "tags": "organic-chemistry, nmr-spectroscopy" }
Python image art project
Question: This program takes in an image, and saves an "arrowed" version of that image to /tmp/image.png. Example input and output is below the code. from PIL import Image, ImageDraw import operator from pprint import pprint import sys base_width = 10 arrow_base_height = 15 arrow_point_height = 15 margin = (20,20) do_outline = True outline = (0, 0, 0) # black background_color = (255, 255, 255) # white arrow_width = base_width / 2 total_arrow_height = arrow_base_height + arrow_point_height total_arrow_width = 2 * base_width def drawArrow(coords, color): if do_outline: draw.polygon(coords, fill=color, outline=outline) else: draw.polygon(coords, fill=color) def to_real_coordinates(coords): # translates the coords to pixels on the picture return translate(coords, margin) def translate(coords, vector): # Translates a list of coordinate tuples over a vector tuple t_coords = [] for cord in coords: t_coords.append(tuple(map(operator.add, cord, vector))) return t_coords def mirror(coords): # Takes a list of coordinate tuples and mirrors it across the first element of # the first tuple # Formula: 2 * base - original m_coords = [] base = coords[0] double_base = tuple(map(operator.mul, base, len(base)* (2,) )) for cord in coords: m_coords.append(tuple(map(operator.sub, double_base, cord))) return m_coords def get_arrow_coords(): coords = [ (0, 0), (arrow_base_height, 0), (arrow_base_height, arrow_width), (arrow_base_height + arrow_point_height, - arrow_width), (arrow_base_height, -3 * arrow_width), (arrow_base_height, - base_width), (0, - base_width) ] return coords if __name__ == "__main__": orig = Image.open(sys.argv[1]).transpose(Image.ROTATE_90) pix = orig.load() new_size = (1024,1024) actual_size = (new_size[0] + 2 * margin[0], new_size[1] + 2*margin[1]) im = Image.new("RGB", actual_size, background_color) draw = ImageDraw.Draw(im) arrow = get_arrow_coords() m_arrow = mirror(arrow) for i in range(new_size[0] / total_arrow_height): for j in range((new_size[1] / 
total_arrow_width)): color = pix[ i * total_arrow_height * orig.size[0] / new_size[0], j * total_arrow_width * orig.size[1] / new_size[1] ] # calculate and draw arrow coords = translate(arrow, (i * total_arrow_height, j * total_arrow_width)) real_coords = to_real_coordinates(coords) drawArrow(real_coords, color) # calculate and draw mirrored arrow coords = translate(m_arrow, (arrow_base_height + i * total_arrow_height, j * total_arrow_width)) real_coords = to_real_coordinates(coords) drawArrow(real_coords, color) im = im.transpose(Image.ROTATE_270) im.show() im.save("/tmp/image.png") Answer: Be consistent and pythonic in naming functions: draw_arrow instead of drawArrow. Use comprehensions instead of creating an empty list and appending to it in a for loop. It's both faster and easier to read. def translate(coords, vector): # Translates a list of coordinate tuples over a vector tuple return tuple(tuple(map(operator.add, c, vector)) for c in coords) def mirror(coords): # Takes a list of coordinate tuples and mirrors it across the first element of # the first tuple # Formula: 2 * base - original base = coords[0] double_base = tuple(map(operator.mul, base, len(base) * (2,) )) return tuple(tuple(map(operator.sub, double_base, c)) for c in coords)
{ "domain": "codereview.stackexchange", "id": 25681, "tags": "python, image" }
Page for personal portfolio animations
Question: I built my portfolio page using Bootstrap and jQuery, but on lower performance computers the animations seem choppy. I am interested in JavaScript optimization and was hoping you all had some ideas on how to more efficiently execute my code. You can see it live here: bgottschling.github.io. HTML: <!DOCTYPE html> <html > <head> <meta charset="UTF-8"> <title>Brandon Gottschling's Portfolio</title> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <!-- Font Awesome --> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.4.0/css/font-awesome.min.css" type='text/css'> <!-- Font MFizz --> <link rel="stylesheet" href="http://cdn.ovispot.com/c/font-mfizz/1.2/font-mfizz.css" type='text/css'> <link rel='stylesheet prefetch' href='http://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css'> <link rel='stylesheet prefetch' href='http://cdnjs.cloudflare.com/ajax/libs/animate.css/3.2.3/animate.min.css'> <link rel="stylesheet" href="css/style.css"> </head> <body> <div class="container-fluid all"> <nav class="navbar navbar-default navbar-fixed-top"> <div class="container-fluid"> <!-- Brand and toggle get grouped for better mobile display --> <div class="navbar-header"> <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target=".navbar-collapse"> <span class="sr-only">Toggle Navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="#1">Brandon Gottschling</a> </div> <!-- Collect the nav links, forms, and other content for toggling --> <div class="collapse navbar-collapse"> <ul class="nav navbar-nav navbar-right"> <li id="home"><a href="#1"><span class="glyphicon glyphicon-home"></span> Home</a></li> <li id="about"><a href="#2"><i class="fa fa-info-circle nav-icon"></i> About</a></li> <li id="portfolio"><a href="#3"><i class="fa fa-folder-open 
nav-icon"></i> Portfolio</a></li> <li id="contact"><a href="#4"><i class="fa fa-envelope nav-icon"></i> Contact</a></li> </ul> </div> <!-- /.navbar-collapse --> </div> <!-- /.container-fluid --> </nav> <br/> <div class="row"> <div class="jumbotron home" id="1"> <img class="image-border img-responsive text-center" src="http://i1382.photobucket.com/albums/ah249/alyssa_marie21/facebrandon_zpsdsvir6wl.jpg" alt="Brandon Gottschling in a sweater!"> <h2 class="text-center">Brandon Gottschling </h2> <h3 class="text-center">Full Stack Developer</h3> <h4 class="text-center">Atlanta, Georgia</4> </div> </div> <div class="row"> <div class="container well about" id="2"> <h2 class="text-center title-text">About Me</h2> <p class=""> I am very passionate about technology and how it advances us as a civilization. Currently in my career I am employed as a Product Specialist supporting a content management system at <a href="http://www.vertafore.com/">Vertafore</a>, an insurance software company. I have life long aspirations to become a software developer. I currently use <strong>HTML5</strong>, <strong>CSS3</strong>, <strong>JavaScript</strong> and other JS frameworks like <strong>Bootstrap</strong>, <strong>JQuery</strong>, <strong>AngularJS</strong>, <strong>ExpressJS</strong>, and <strong>NodeJS</strong>. I also have experience with <strong>MongoDB</strong>, and <strong>T-SQL</strong>. What interests me the most about the JavaScript language is that it allows you to develop front and back-end applications all using one language. I find the MEAN stack, as they call it, practical due to the fact that you are not flipping between different languages. Not to mention its leverage of HTTP for scalability, availability, and versatility. What I mean by this is that you can develop robust applications with next to no footprint, readily available wherever there is an internet connection and a web browser. To me, something about that seems powerful. 
</p> </div> </div> <div class="row"> <div class="container well portfolio" id="3"> <h2 class= "text-center title-text">Portfolio</h2> <div class="row"> <div class="col-md-4"> <a href="http://codepen.io/brandon-gottschling/full/XmLvmo/" class="thumbnail" target="_blank"> <img src="http://i1382.photobucket.com/albums/ah271/Brandon_Gottschling/thumbnail1_zpsdbbhlko6.png" alt="" class="img-thumbnail"> <div class="caption"> <p>Quote-O-Matic</p> </div> </a> </div> <div class="col-md-4"> <a href="#" class="thumbnail"> <img src="http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg" alt="" class="img-thumbnail"> <div class="caption"> <p>Project #2</p> </div> </a> </div> <div class="col-md-4"> <a href="#" class="thumbnail"> <img src="http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg" alt="" class="img-thumbnail"> <div class="caption"> <p>Project #3</p> </div> </a> </div> <div class="col-md-4"> <a href="#" class="thumbnail"> <img src="http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg" alt="" class="img-thumbnail"> <div class="caption"> <p>Project #4</p> </div> </a> </div> <div class="col-md-4"> <a href="#" class="thumbnail"> <img src="http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg" alt="" class="img-thumbnail"> <div class="caption"> <p>Project #5</p> </div> </a> </div> <div class="col-md-4"> <a href="#" class="thumbnail"> <img src="http://i1382.photobucket.com/albums/ah249/alyssa_marie21/iph_zpsrzdkhjpj.jpg" alt="" class="img-thumbnail"> <div class="caption"> <p>Porject #6</p> </div> </a> </div> </div> </div> <div class="row"> <div class="container well contact" id="4"> <div class= "title-text text-center"> <h2>Contact Me</h2> <h4>Let My Passion Be Your Product</h4> </div> <div class="row social_buttons"> <div class="col-sm-offset-1 col-md-2 text-center linkedin"> <a href="https://www.linkedin.com/in/bgottschling" class="btn btn-default btn-lg center-block" 
role="button" target="_blank"><i class="fa fa-linkedin"></i> LinkedIn</a> </div> <div class="col-md-2 text-center"> <a href="https://github.com/bgottschling" class="btn btn-default btn-lg center-block" role="button" target="_blank"><i class="fa fa-github"></i> Github</a> </div> <div class="col-md-3 text-center"> <a href="http://www.freecodecamp.com/bgottschling" class="btn btn-default btn-lg center-block" role="button" target="_blank"><i class="fa fa-fire"></i> freeCodeCamp</a> </div> <div class="col-md-2 text-ceneter"> <a href="http://codepen.io/brandon-gottschling" class="btn btn-default btn-lg center-block" role="button" target="_blank"><i class="fa fa-codepen"></i> Codepen</a> </div> </div> </div> </div> <div class="footer"> <div class="container"> <p class="">Copyright © Brandon Gottschling 2015. All Rights Reserved</p> </div> </div> </div> <script src='http://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js'></script> <script src='http://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js'></script> <script src="js/index.js"></script> </body> </html> CSS: body { background: #A9E7F8; } .image-border { border-radius: 50% 5% 50% 5%; height: 15%; width: 15%; margin: 0 auto; } .about { background: #A8FBAD; font-size: 15px; height: 100%; } .portfolio { background: #FFD5AA; } .contact { background: #B2B9FA; } .footer { color: #FFFFFF; } .img-thumbnail { max-height: 346px; max-width: 200px; } .linkedin { margin-left: 12%; } .title-text { margin-bottom: 3%; } JS: $(document).ready( $(".navbar-right li").hover( function() { if (!$(this).hasClass('animated')) { $(this).dequeue().stop().animate({ width: "120px" }); } }, function() { $(this).addClass('animated').animate({ width: "103px" }, "normal", "linear", function() { $(this).removeClass('animated').dequeue(); } ); } ), $("#home").hover( function() { $(".home").addClass("animated bounce"); }, function() { $(".home").removeClass("animated bounce"); }), $("#about").hover( function() { 
$(".about").addClass("animated bounce"); }, function() { $(".about").removeClass("animated bounce"); }), $("#portfolio").hover( function() { $(".portfolio").addClass("animated bounce"); }, function() { $(".portfolio").removeClass("animated bounce"); }), $("#contact").hover( function() { $(".contact").addClass("animated bounce"); }, function() { $(".contact").removeClass("animated bounce"); }) ); Answer: With jQuery it's usually faster to not use the shorthand methods for event binding. There should be a performance increase if you change your hover methods to something like the following: $("#contact") .on("mouseenter", function () { $(".contact").addClass("animated bounce"); }) .on("mouseleave", function () { $(".contact").removeClass("animated bounce"); }) I would also try to avoid jQuery animations. Alternatives might be GSAP or velocity.js (there are many others). Also if you used one of them you might not need jQuery ;) EDIT You should also move the .row containing your "Contact Me" out of its parent (also .row) so they are on the same level. At the moment it's the reason your page has a horizontal overflow. EDIT 2 Nice, happy to help. A further improvement would be to replace $("#"+ $(this).attr("id")) with $("#" + this.id) (the same thing goes for the class selector in the mouseleave). If you use the same jQuery object several times it is best to reference it in a variable and use that. It's faster than creating the object each time. EDIT 3 An even greater improvement would be to replace $("#"+ $(this).attr("id")) with $(this) (I had to laugh when I realized it ;)
{ "domain": "codereview.stackexchange", "id": 18470, "tags": "javascript, jquery, css, html5" }
Angular Momentum vs Moment of Inertia
Question: Pretty sure that this question has already been answered on this site, but I cannot find it. Anyway, here's the question: What is the difference between angular momentum and moment of inertia? Answer: Angular momentum is the "moment of momentum", meaning it gives us an idea of how far away the linear momentum vector is applied. Torques involve the moment arm of a force, and angular momentum involves the moment arm of momentum. Particle Mechanics Take a single particle moving in a straight line (in the absence of external forces). It has mass $m_i$, it is located at vector $\boldsymbol{r}_i$ with velocity vector $\boldsymbol{v}_i$. This sets us up for the following definitions: Linear Momentum of particle, $$ \boldsymbol{p}_i = m_i \boldsymbol{v}_i $$ Angular Momentum about the origin, $$ \boldsymbol{L}_i = \boldsymbol{r}_i \times \boldsymbol{p}_i $$ The above is sufficient to recover the location of the path line, at least the point on the path of the particle closest to the origin. $$ \boldsymbol{r}_{\rm path} = \frac{ \boldsymbol{p}_i \times \boldsymbol{L}_i }{ \| \boldsymbol{p}_i \|^2} $$ You can easily prove this if you show that $\boldsymbol{L}_i = \boldsymbol{r}_{\rm path} \times \boldsymbol{p}$, which you do with the vector triple product identity $\boldsymbol a\times( \boldsymbol b \times \boldsymbol c) = \boldsymbol b (\boldsymbol a \cdot \boldsymbol c) - \boldsymbol c ( \boldsymbol a \cdot \boldsymbol b)$. So in summary, Angular momentum describes the (perpendicular) distance where linear momentum acts. The conservation of angular momentum law means that not only linear momentum as a vector is conserved, but also the location of this vector (or the line in space the vector acts through) is conserved. Rigid Body Mechanics When you extend the above to multiple particles clumped together as a rigid body the concept of moment of inertia arises. 
First off, Chasles' theorem states that for all the distances to be maintained, each particle can only move with a combination of translation and rotation (with vector $\boldsymbol{\omega}$) about a common axis. Commonly the center of mass is used as a reference point, and so the motion of each particle in the body is restricted to $\boldsymbol{v}_i = \boldsymbol{v}_{\rm com} + \boldsymbol{\omega} \times \boldsymbol{r}_i$. Commonly the motion is decomposed into the translation of the center of mass, and the rotation about the center of mass. This yields the following relationships: Linear Momentum of rigid body, $$\boldsymbol{p} = \sum \limits_i m_i \boldsymbol{v}_i = m \boldsymbol{v}_{\rm com}$$ Angular Momentum about center of mass, $$\boldsymbol{L}_{\rm com} = \sum \limits_i \boldsymbol{r}_i \times m_i \boldsymbol{v}_i = \sum \limits_i m_i \boldsymbol{r}_i \times ( \boldsymbol{\omega} \times \boldsymbol{r}_i ) $$ Mass moment of inertia In order to understand angular momentum of a rigid body rotating about the center of mass better, it is common to separate the geometry parts from the motion parts $$ \boldsymbol{L}_{\rm com} = \underbrace{ \mathrm{I}_{\rm com} }_{\text{geometry}} \;\;\underbrace{ \boldsymbol{\omega}}_{\text{motion}} $$ where $$ \mathrm{I}_{\rm com} = \sum_i (-m_i [\boldsymbol{r}_i \times][\boldsymbol{r}_i \times]) = \sum_i m_i \left[ \matrix{ y^2+z^2 & - x y & - x z \\ - x y & x^2+z^2 & -y z \\ -x z & -y z & x^2+y^2} \right] $$ This is the mass moment of inertia tensor. It is the rotational equivalent to mass, since $\boldsymbol{p} = m \boldsymbol{v}_{\rm com}$ and $\boldsymbol{L}_{\rm com} = \mathrm{I}_{\rm com} \boldsymbol{\omega}$ have a similar form. So, mass moment of inertia describes how far away from the rotation axis the mass is distributed. It conveys the geometry information of angular momentum. 
So if a known mass moment of inertia about an axis $I$ is described by $I = m r ^2$, it means the geometry of the problem is similar to that of a mass ring with radius $r$. More details in this similar answer.
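These relations are easy to verify numerically. The sketch below (plain Python; the masses, positions, and angular velocity are arbitrary example values) builds the tensor from $I_{jk} = \sum_i m_i(|\boldsymbol{r}_i|^2\delta_{jk} - r_{ij}r_{ik})$ and checks that $\mathrm{I}\,\boldsymbol{\omega}$ reproduces the direct sum $\sum_i \boldsymbol{r}_i \times m_i(\boldsymbol{\omega}\times\boldsymbol{r}_i)$; for a single mass on a ring it reduces to $I = mr^2$.

```python
def cross(a, b):
    # 3D cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def inertia_tensor(masses, positions):
    # I_jk = sum_i m_i * (|r_i|^2 * delta_jk - r_ij * r_ik),
    # with positions measured from the reference point (e.g. the COM)
    I = [[0.0]*3 for _ in range(3)]
    for m, r in zip(masses, positions):
        r2 = sum(c*c for c in r)
        for j in range(3):
            for k in range(3):
                I[j][k] += m * ((r2 if j == k else 0.0) - r[j]*r[k])
    return I

def matvec(M, v):
    return tuple(sum(M[j][k]*v[k] for k in range(3)) for j in range(3))

def angular_momentum(masses, positions, omega):
    # Direct sum: L = sum_i r_i x (m_i * (omega x r_i))
    L = [0.0, 0.0, 0.0]
    for m, r in zip(masses, positions):
        p = tuple(m*c for c in cross(omega, r))  # momentum of particle i
        for j, Lj in enumerate(cross(r, p)):
            L[j] += Lj
    return tuple(L)

masses = (1.0, 2.0)
positions = ((1.0, 0.0, 0.0), (0.0, 1.0, 1.0))
omega = (0.3, -0.2, 0.5)
L_from_tensor = matvec(inertia_tensor(masses, positions), omega)
L_direct = angular_momentum(masses, positions, omega)  # the two agree
```

The agreement is an algebraic identity, so it holds for any choice of masses, positions, and $\boldsymbol{\omega}$.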
{ "domain": "physics.stackexchange", "id": 60209, "tags": "angular-momentum, moment-of-inertia" }
Testbed for testing navigation algorithms
Question: I'm looking for a testbed (simulator or web-based interface that lets me have control of a robot) for testing different routing and navigation algorithms. Is there such a system on the web? Answer: You could use Player/Stage or Gazebo.
{ "domain": "robotics.stackexchange", "id": 1055, "tags": "navigation, routing" }
Why is the speed of light a limit? and why is it just $3×10^8\,\text{m/s}$?
Question: Why does light travel with speed $3×10^8\,\text{m/s}$? And why not more? Answer: I will repeat what I have said elsewhere on another "why" question: Physics is not mathematics. It is an observational discipline that uses mathematical formulations to fit observations and predict the behavior of new setups. Experimental observations can be fitted very accurately by using quantum mechanics and electrodynamics, where the velocity of light c is constant. This constant has been measured and is an experimental fact. The answer is it travels at that speed "because this is what we have observed/measured". In general, physics does not answer "why" questions, only how, from certain assumptions and using mathematical formulations, one can describe physical systems.
{ "domain": "physics.stackexchange", "id": 25569, "tags": "special-relativity, speed-of-light, physical-constants" }
Encoding a tree
Question: Let $A$ be an $m \times n$ matrix. Let $F(A,S(t),t)$ be a program function, where $t$ is an integer counter beginning at $0$ and stopping at some $k \leq m,$ and such that $S(0)$ is empty. The function first iterates $t=1$ and then operates on $A$ in such a way that indices for a subset of columns of $A$ are stored in $S(1).$ For each index $k \in S(1),$ the program recursively calls itself and collects another subset of columns of $A$ that depend on which $k \in S(1)$ is under examination. How should $S$ be constructed to record the indices of each branch $k$ of $S(1)$ so that the indices in further recursive calls remain connected and in order with their respective parent indices? I have not programmed in years, so I apologize if the question is poorly worded, and would welcome any editing or correction if clarification is needed. In short, I need to set up some kind of data structure $S$ for indices of $A$ that can be described as a set of trees; the pseudocode I have written intends to traverse branches of these trees, operating on the columns corresponding to the stored indices. My problem is that I do not know what data structure to use to store the indices properly. I thought of creating an array for each branch that follows each possible line of recursive calls of my program, but it becomes hard to think of even how to label these arrays, as I do not know beforehand how many I need nor how many recursive calls my program will make prior to termination. My only other thought was to use a language like MATLAB that could create an adjacency matrix for each tree belonging to each index $k$ collected in the first function call, but I am not quite sure how I could program this. Any help would be appreciated! 
EDIT: Here is the pseudocode for the function: The function receives a matrix $A$ and a vector $b.$ The matrix is assumed to have more columns than rows, so that solutions will be underdetermined (and therefore infinite over real and complex fields, and likely large for finite fields), assuming the system has a solution to begin with. My job is to find all minimal support solutions, or solutions to the system that have the most 0's as entries. Of course (as part of my dissertation shows, oddly for the first time in published literature over every finite field, though it has been shown for binary and for real/complex already) the problem is NP-Hard, making the program my advisor and I devised hopelessly exponential - but it is what it is; my advisor wanted me to sort of start something that might continue in the spirit of the SAT solvers out there, I guess, since algorithms that seek minimal support solutions over finite fields are (at the time of this post) comparatively few. =================== FUNCTION $MinSup([A|b],r)$ Matrix $A$, vector $b$ (as an augmented system), "depth" $r$ initialize $A=A_0,$ $b=b_0,$ $r=1$ ASSUMPTION: Ax=b has at least one solution, $m=numrows(A) < n=numcolumns(A)$, and A is row-independent. RETURNS: Support for the set of minimal support vectors $X$ to the system $Ax=b.$ PSEUDOCODE (sorry, SE is forcing weird formatting I cannot fix) Form augmented system $[A|b].$ (EDIT: No longer needed as the program receives an augmented system from the start) Use row operations on $[A|b]$ until column b has a 1 on top and 0's on the bottom (doing row exchanges to get a nonzero top if necessary). Look for any columns that have all zeros from rows $r$ through m (and a nonzero top). Record these indices in $S.$ If S is not empty, then for each $k \in S,$ if $r=1$ RETURN the set $x_k=(a_k)^{-1}e_k$ and $x_j=0$ otherwise (solutions are weight one). 
If $r \neq 1,$ for each $k \in S$ BACKSOLVE column $k$ and RETURN each smallest resulting minimal support solution (the matrices $A$ sent to the function when $r>1$ will have $r-1$ columns of the identity matrix already). Else For all columns that have a nonzero top row entry, record the indices of each column in $S.$ Iterate $r=r+1.$ If $r=m$ then RETURN all $n$ choose $m$ solutions as your minimal support set. ELSE For each index $k \in S,$ BACKSOLVE column $k$ (we need to use row $r-1$ at this point to pivot for each of these, and of course store the augmented system somewhere in case we need to return to this level). There should be a copy of (a multiple of) column $k$ as the "new $b.$" Send this updated augmented system and index $A', b',r$ to another call of $MinSup([A|b],r).$ ======== There will probably be a small parent program that can cull through multiple responses for the MinSup calls. Essentially, I keep calling until I "hit the jackpot," or the deepest iteration $r$ for which my row-exchanged update to $b$ on that particular call identifies a perfect multiple for rows $r$ and below (has all zeros from row $r+1$ to $m$). As anyone can see, this is obviously a combinatorial nightmare, but it is what it is and I have to program it - and given that my advisor is now one year past retirement, and knows nothing but Mathematica, I'll have to program it on that somehow. It is ugly, but it is what it is. Answer: It might be that I'm misunderstanding, but in your case a Node data structure should be all you need. I'll use a C++ example: class NodeData{ Matrix a; Vect b; int column; ... }; class Node{ NodeData data; vector<Node*> children; }; Your tree is represented by a single Node which is the root. You iterate the children vector to iterate node's children.
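Since the asker has no language fixed yet, the same idea can be sketched in Python (the field names and helper functions are hypothetical choices, not part of the original answer): each node stores the column index chosen at its recursion level, each recursive call appends children for the indices it finds in $S$, and walking root-to-leaf paths recovers each candidate support set in order, connected to its parent indices.

```python
class Node:
    # One recursion level: `column` is the index k chosen at this call
    # (None for the root, where S(0) is empty).
    def __init__(self, column=None):
        self.column = column
        self.children = []

    def add_child(self, column):
        child = Node(column)
        self.children.append(child)
        return child

def supports(node, prefix=()):
    """Yield every root-to-leaf sequence of column indices."""
    here = prefix if node.column is None else prefix + (node.column,)
    if not node.children:
        yield here
    else:
        for child in node.children:
            yield from supports(child, here)

# Example: S(1) = {2, 5}; branch 2 finds index 7, branch 5 finds {1, 3}.
root = Node()
b2, b5 = root.add_child(2), root.add_child(5)
b2.add_child(7)
b5.add_child(1)
b5.add_child(3)
found = sorted(supports(root))   # [(2, 7), (5, 1), (5, 3)]
```

Because each node only holds its own index plus a list of children, you never need to know in advance how many branches or how many recursive calls the search will make; the tree grows as the recursion discovers them.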
{ "domain": "cs.stackexchange", "id": 8614, "tags": "data-structures, dynamic-programming" }
Can movebase be used for ackermann steered robots!? and what could the error: Failed to find a valid control. mean?
Question: I use the map server, a global and local plan is created and I also get cmd_vel outputs. This is a geometry_msgs/Twist output with a value linear.x for the speed and angular.z for the speed around the z-axis -> I generate an odom message from this message to simulate an input for the navigation stack... Noticeable is that the value for the speed around the z-axis changes a lot, also from negative to positive values and back... and after some time I get this error!? I'm glad about any suggestions or help (I use ROS Hydro) Best regards Matthias Originally posted by moejoegoe on ROS Answers with karma: 1 on 2014-03-30 Post score: 0 Answer: The short answer is that navfn and base_local_planner aren't suitable for ackermann type robots. The planners don't take into account the kinematic constraints (minimum turning radius, can't turn in place) of the robot, and therefore can generate plans that aren't executable by the robot. It is possible to use the sbpl_lattice_planner as the global planner, but there isn't a great local planner for ackermann type robots yet. It's difficult to say what is wrong with your specific simulation without more details. It sounds like the local planner is failing to find a valid path, but without knowing the specifics of your configuration, it's hard to say whether that's a problem with your costmaps, with the local planner configuration, with how you're running the simulation, or something else entirely. Originally posted by ahendrix with karma: 47576 on 2014-03-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 17468, "tags": "ros, pathplanning" }
Tutorial pubsub is compiling but not running
Question: Hi I am trying to run the pubsub tutorial using gradle. The program compiles fine but when I try to run the listener program using the command; $ ./build/install/rosjava_wiki_tutorial_pubsub/bin/rosjava_wiki_tutorial_pubsub org.ros.rosjava_tutorial_pubsub.Talker it gives the following errors; Loading node class: org.ros.rosjava_tutorial_pubsub.Talker Exception in thread "main" org.ros.exception.RosRuntimeException: Unable to locate node: org.ros.rosjava_tutorial_pubsub.Talker at org.ros.RosRun.main(RosRun.java:56) Caused by: java.lang.ClassNotFoundException: org.ros.rosjava_tutorial_pubsub.Talker at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:321) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:266) at org.ros.internal.loader.CommandLineLoader.loadClass(CommandLineLoader.java:239) at org.ros.RosRun.main(RosRun.java:54) Can you please help me in figuring out what is going on. I did a little digging and my CLASSPATH variable is appearing blank but I am not sure how I can set it. Originally posted by mfahad on ROS Answers with karma: 11 on 2013-02-14 Post score: 1 Original comments Comment by damonkohler on 2013-02-19: I can't reproduce this issue. Make sure you've followed the instructions exactly. For instance, I see that you've renamed the directory. Depending on how you went about this, it could be part of your problem. Comment by mfahad on 2013-02-19: Hi Damon, actually just today I realized it might not be because of rosjava. I was trying to compile and run a different java program and it gave me a similar exception and failed to run. So I think that my CLASSPATH is not right. When I try echo $CLASSPATH, it is blank. Any idea on the right path? 
Comment by damonkohler on 2013-02-21: The class path will be set automatically by the scripts Gradle generates. Comment by mfahad on 2013-02-21: This is strange then, because I was not using Gradle for the other program I was trying to run, but it still gave the same exceptions. Answer: I see that it's a year old, but it's probably the same as: http://answers.ros.org/question/34092/problem-running-rosjava_tutorial_pubsub/ So running this: ./rosjava_tutorial_pubsub/build/install/rosjava_tutorial_pubsub/bin/rosjava_tutorial_pubsub org.ros.rosjava_tutorial_pubsub.Talker should work. Originally posted by aknirala with karma: 339 on 2013-12-27 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12901, "tags": "ros, java, rosjava" }
How do instantons cause vacuum decay?
Question: From what I read about on instantons (Zee, QFT in a Nutshell, pg 309-310), an instanton is a vacuum solution that maps $S^3 \rightarrow S^3$ (the boundary of Euclideanized spacetime), which comes from minimizing the Euclidean action for some Lagrangian with a nontrivial vacuum structure. I've also read (for example in Mukhanov, Physical Foundations of Cosmology, pg 180-199) about how instantons can mediate quantum tunneling from one vacuum state to another. My question is: how are these two ideas/definitions of instantons related? All of the simplest examples that I have looked at of nontrivial vacuum solutions involve solitons, vortices, or hedgehogs, which as far as I know cannot mediate decay from a metastable vacuum. Solitons etc. are defined at spatial infinity, so I know (suspect?) that the fact that an instanton lives on the boundary of space$time$ is related to its connection to the rate of vacuum decay. I would greatly appreciate some simple examples/links to references as well. Answer: Let us look at the instantons of an ordinary pure Yang-Mills theory for gauge group $G$ in four Euclidean dimensions: An instanton is a local minimum of the action $$ S_{YM}[A] = \int \mathrm{tr}(F \wedge \star F)$$ which is, on $\mathbb{R}^4$, precisely given by the (anti-)self-dual solutions $F = \pm \star F$. For (anti-)self-dual solutions, $\mathrm{tr}(F \wedge \star F) = \mathrm{tr}(F \wedge F)$. The latter is a topological term known as the second Chern class, and its integral is discrete: $$\int \mathrm{tr}(F \wedge F) = 8\pi^2 k$$ with integer $k \in \mathbb{Z}$ (don't ask about the $\pi$). For given $k$, one also speaks of the corresponding curvature/gauge field as the $k$-instanton. Now, how does this relate to the things you have asked about? Instantons as vacua Since the instanton provides a local minimum of the action, it is a natural starting point for perturbation theory, where it then naturally represents the vacuum.
We have infinitely many vacua to choose from, since $k$ is arbitrary. Instantons and the three-sphere (The motivation here is that, for the vacuum to have finite energy, $F = 0$ at infinity, so we actually seek a solution of the field equations on $\mathbb{R}^4 \cup \{\infty\} = S^4$ such that $F(\infty) = 0$.) Take two local instanton solutions $A_1,A_2$ (for the same Chern class $k$) on some open disks $D_1, D_2$. Now, glue them together by a gauge transformation $t : D_1 \cap D_2 \rightarrow G$ as per $$ A_2 = tA_1t^{-1} + t\mathrm{d}t^{-1} $$ (we are essentially defining the principal bundle over $S^4$ here) and observe that $\mathrm{tr}(F_i \wedge F_i) = \mathrm{d}\omega_i$ with $\omega_i$ the Chern-Simons form $$ \omega_i := \mathrm{tr}(F_i \wedge A_i - \frac{1}{3} A_i \wedge A_i \wedge A_i) $$ Take the two disks as being the hemispheres of an $S^4$, overlapping only at the equator. If we now calculate the Chern class again, we find: $$ 8\pi^2 k = \int_{D_1} \mathrm{d}\omega_1 + \int_{D_2} \mathrm{d}\omega_2 = \int_{\partial D_1} \omega_1 + \int_{\partial D_2} \omega_2 = \int_{S^3} \omega_1 - \int_{S^3} \omega_2$$ due to Stokes' theorem and the different orientation of the hemisphere boundaries w.r.t. each other. If we examine the RHS further, we find that $$ k = - \frac{1}{24\pi^2} \int_{S^3} \mathrm{tr}(t\mathrm{d}t^{-1} \wedge t\mathrm{d}t^{-1} \wedge t\mathrm{d}t^{-1})$$ so $k$ is completely determined by the chosen gauge transformation! As all $k$-vacua have the same value of the action, they are not really different. This means we can already classify a $k$-instanton by giving the gauge transformation $t : S^3 \rightarrow G$. The topologist immediately sees that $t$ is therefore given by choosing an element of the third homotopy group $\pi_3(G)$, since homotopic maps integrate to the same things.
For a simple Lie group, which we always choose our gauge groups to be, $\pi_3(G) = \mathbb{Z}$, which is a nice result: $t$ is (up to homotopy, which is incidentally the same as up to global gauge transformation here) already defined by the $k$-number of the instanton. Instantons and tunneling Now we can see what tunneling between an $N$- and an $N + k$-vacuum might mean: Take a $[-T,T] \times S^3$ spacetime, that is, a "cylinder", and fill it with a $k$-instanton field configuration $A_k$. This is essentially, by the usual topological arguments, a propagator from the space of states at the one $S^3$ to the other $S^3$. If you calculate its partition function, you get a tunneling amplitude for the set of states belonging to $\{-T\} \times S^3$ turning into the set of states belonging to $\{T\} \times S^3$.
So, we have here that a cylinder spacetime with a $k$-instanton configuration indeed is the propagator between the space of states associated with a spatial slice of a $k-$-instanton and the space of states associated with a spatial slice of a $k_+$-instanton, where $k_\pm$ differ exactly by $k$, so you would get the amplitude from the partition function of that cylinder. To actually calculate that is a work for another day (and question) ;)
{ "domain": "physics.stackexchange", "id": 15147, "tags": "quantum-field-theory, vacuum, instantons" }
6 ways to find maximum value in C++
Question: This code is part of a project of N-ways to code for one particular task. In this case, this C++ code consists of 6 different functions to return a maximum value from an array of real numbers. Mainly use for-loop and while-loop, the last function uses recursive method. The explanation of the other 5 functions are included in the code. I would like to know how to improve the code. // Author : anbarief@live.com // Since 10 March 2018 #include <iostream> #include <string> float max_1(float x[], int sizex){ float max = x[0]; for (int index=0; index < sizex; index++){ if (x[index] >= max) { max = x[index]; } } return max; } float max_2(float x[], int sizex){ float max; for (int index=0; index < sizex; index++){ if (x[index+1] >= x[index]) { max = x[index+1]; } else { max = x[index]; } x[index+1]=max; } return max; } float max_3(float x[], int sizex){ float max = x[0]; int index=0; while (index < sizex) { if (x[index] >= max) { max = x[index]; } index = index+1; } return max; } float max_4(float x[], int sizex){ float max, maxx, maxxx; if (x[0] >= x[sizex-1]){ max = x[0]; } else{ max = x[sizex-1]; }; for (int index=1; index < sizex; index++){ if (x[sizex-index] >= x[index]) { maxx = x[sizex-index]; } else { maxx = x[index]; } if (maxx >= max){ maxxx = maxx; } else{ maxxx = max; } max = maxx; } return maxxx; } float max_5(float x[], int sizex){ float max, maxx, maxxx; if (x[0] >= x[sizex-1]){ max = x[0]; } else{ max = x[sizex-1]; }; int index = 1; while(index < sizex){ if (x[sizex-index] >= x[index]) { maxx = x[sizex-index]; } else { maxx = x[index]; } if (maxx >= max){ maxxx = maxx; } else{ maxxx = max; } max = maxx; index=index+1; } return maxxx; } float max_6(float x[], int sizex){ float max; if (x[sizex]>=x[sizex-1]){ max=x[sizex]; } else{ max=x[sizex-1]; }; if (sizex == 1){ return max; }; x[sizex-1] = max; return max_6(x, sizex-1); } int main(){ float data[] {1, 1, 2, -2, -2233, -112.3, 3, 3, 3, 4.123, 1, 44.234, 2.0013, 3, 5, 5, 6, 6, 3, 56, 112, 112, 
112.3, 12, 3}; const int n = sizeof(data)/sizeof(*data); std::string ex_1 = "max_1 : Comparing per-element in a for-loop to get the maximum val."; std::cout << '\n' << ex_1; std::cout << '\n' << max_1(data, n); std::string ex_2 = "max_2 : Comparing two adjacent elements of the data in a for-loop to get the maximum val."; std::cout << '\n' << ex_2; std::cout << '\n' << max_2(data, n); std::string ex_3 = "max_3 : Similar as max_1, but using while-loop"; std::cout << '\n' << ex_3; std::cout << '\n' << max_3(data, n); std::string ex_4 = "max_4 : Comparing end-to-end elements of the data in a for-loop to get the maximum."; std::cout << '\n' << ex_4; std::cout << '\n' << max_4(data, n); std::string ex_5 = "max_5 : Similar as max_4, but using while-loop."; std::cout << '\n' << ex_5; std::cout << '\n' << max_5(data, n); std::string ex_6 = "max_6 : Using a recursive method to find the maximum."; std::cout << '\n' << ex_6; std::cout << '\n' << max_6(data, n); return 0; } Edit: the 4th and 5th functions should be edited as float max_4(float x[], int sizex){ float max, maxx, maxxx; if (x[0] >= x[sizex-1]){ max = x[0]; } else{ max = x[sizex-1]; }; for (int index=1; index < sizex-1; index++){ if (x[sizex-index-1] >= x[index]) { maxx = x[sizex-index-1]; } else { maxx = x[index]; } if (maxx >= max){ maxxx = maxx; } else{ maxxx = max; } max = maxx; } return maxxx; } float max_5(float x[], int sizex){ float max, maxx, maxxx; if (x[0] >= x[sizex-1]){ max = x[0]; } else{ max = x[sizex-1]; }; int index = 1; while(index < sizex-1){ if (x[sizex-index-1] >= x[index]) { maxx = x[sizex-index-1]; } else { maxx = x[index]; } if (maxx >= max){ maxxx = maxx; } else{ maxxx = max; } max = maxx; index=index+1; } return maxxx; } Answer: I understand that, since you want to find as many ways as possible to implement a max function over a float array, you will enumerate c-style implementations among others. But you must add c++-style implementations as well. 
The first thing to remember is that C++ offers references alongside pointers, and that includes references to arrays. Add template deduction to the mix and you can deduce the size of an array: template <std::size_t N> // see http://en.cppreference.com/w/cpp/types/size_t float max(const float (&array) [N]); Actually, there is no point in restricting your function to an array of floats, so the signature of the template function should be extended to arrays of any type: template <typename T, std::size_t N> T max(const T (&array) [N]); You could even modernize the signature a bit more, with auto: template <typename T, std::size_t N> auto max(const T (&array) [N]); but it's more a matter of taste. So now that your signature is simplified and takes care of determining the size of the array without sizeof arithmetic, let's move on to the implementation. An interesting detail: since you now deal with a reference to an array, you can use std::begin and std::end to return iterators (actually pointers, of course) to the first and past-the-last elements, which is probably more readable. With that in mind, there is an STL algorithm you can re-use, std::max_element: #include <algorithm> template <typename T, std::size_t N> auto max(const T (&array) [N]) { return *std::max_element(std::begin(array), std::end(array)); } If you look at the signature of this std::max_element algorithm, you'll notice it is a constexpr template function, meaning it can compute its result at compile time if its arguments are known at compile-time.
So we have to modify our signature also to benefit from this: #include <algorithm> template <typename T, std::size_t N> constexpr auto max(const T (&array) [N]) { return *std::max_element(std::begin(array), std::end(array)); } So that, given the above definition, if you go with: int main() { float array[5]; for (int i = 0; i < 5; ++i) array[i]=0.1*i; constexpr auto test1 = max({5.2f,2.6f,8.f,4.f,9.f}); auto test2 = max(array); std::cout << test1 << " and " << test2; } test1 will be calculated at compile-time and test2 at runtime. As for the details of the loop, if you want to re-implement the max_element algorithm's logic, I would go with a loop over iterators, not indexes, only to make it more readable: template <typename T, std::size_t N> constexpr auto max(const T (&array) [N]) { auto max_value = *std::begin(array); for (auto it = std::begin(array)+1; it != std::end(array); ++it) { if (*it > max_value) max_value = *it; } return max_value; }
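As a side note (my addition, not part of the review): in a language with a generic built-in, this entire family of functions collapses to a single call — Python's `max` performs the same single left-to-right scan as `std::max_element`:

```python
# The same data set used in the question's main()
data = [1, 1, 2, -2, -2233, -112.3, 3, 3, 3, 4.123, 1, 44.234,
        2.0013, 3, 5, 5, 6, 6, 3, 56, 112, 112, 112.3, 12, 3]

# max() scans the sequence once, exactly like *std::max_element(begin, end)
print(max(data))  # 112.3
```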
{ "domain": "codereview.stackexchange", "id": 29779, "tags": "c++, comparative-review" }
What is the efficacy of Pertussis booster vaccine among different age groups?
Question: The Murray Microbiology book says that it is preferably 10 years, and the Estonian and Finnish health associations say the same. However, my professor says that it can be 5-7 years. I started to wonder whether age affects the result here. When should you take a booster vaccination of Pertussis if you are a) 25 years old, b) 50 years old, c) 70 years old, and d) 90 years old? My professor says that there are no significant studies about the efficacy of Pertussis booster vaccination among different age groups. Is this true? Answer: No, it is not true; see references 1 and 2 for this purpose. From these articles, which followed up booster vaccinations for pertussis, it seems that there is at least some protection 5 and 8 years after the boost. There is another study which says that 10 years is relatively safe to assume, as the reduction in antibody levels over time estimated from the 5-year study would still allow protective levels of antibodies. See reference 3 for details. One of the problems in making statements about the duration of the protection is the lack of data. Additionally, there was a change in the vaccine from a whole-cell vaccine (which is pretty immunogenic, but has a higher number of unwanted side effects) to an acellular vaccine. This change happened in the 90s, so there are no long-term data available. See reference 4 for more details. The common recommendation at the moment for adults is to repeat the booster vaccination every 10 years (together with tetanus and diphtheria), and for people who have close contact with unvaccinated persons, every 5 years to prevent infections (see reference 4). References: Immunity to pertussis 5 years after booster immunization during adolescence. Immune responses to pertussis antigens eight years after booster immunization with acellular vaccines in adults. How long can we expect pertussis protection to last after the adolescent booster dose of tetanus-diphtheria-pertussis (Tdap) vaccines?
Acellular pertussis vaccine use in risk groups (adolescents, pregnant women, newborns and health care workers): A review of evidences and recommendations
{ "domain": "biology.stackexchange", "id": 3185, "tags": "vaccination" }
Ice Crystal Matrix measurements of H2O molecules?
Question: I want a mathematically generated 3D STL copy of this image; however, it is represented algorithmically. If I do all the H2O's as individual pieces with female connections prepared for chopped bike spokes, are all the H2O's the same, and what angles do they have? Is there a written array of positions and degrees of rotation of the water molecules of a regular ice matrix? Answer: I decided to summarize all my comments in the form of an answer and add some illustrations. Note that I have no experience in 3D printing, so I mainly focus on the crystallographic part. Since you asked for "a mathematically generated" 3D structure, you ought to know that water crystallizes in the $P\mathrm{6_3cm}$ (# 185) space group, which has hexagonal symmetry. Detailed information about symmetry generators and matrix transformations can be found online or in International Tables for Crystallography [1, pp. 582-583]: There is already enough information to create a 3D model. If you want a more convenient way, you can get a CIF file for ice (COD-1011023) (which already embeds all the information above), and load it with Mercury (free, available on Windows, Linux, MacOS). From here on it's just a screencast of what to click to get the desired molecular pattern like the one on your animated GIF: Load the structure (drag-and-drop the CIF file, or via Ctrl+O). What you see now is an asymmetric unit, in simple terms – a seed from which an entire crystal structure can be grown: Let's expand the structure beyond those 6 atoms you've been stuck with. Go to Calculate > Packing/Slicing.... Tick the Pack option, and click the + 0.5 boxes next to the $a$ and $b$ axes: Rotate the grown structure approximately as shown below: Note that you have 4 water molecules on each side that you don't need.
Those can be deleted by right-clicking on them and selecting Delete this molecule: At this point you should have exactly the same 3D representation of the crystal structure: Now you can print a 3D model directly from Mercury (File > Print in 3D...). The one with supporting framework looks like this: 6.1. Alternatively, you can save the structure as XYZ (Ctrl+S, choose Xmol files) and do some post-processing work in Blender if needed. Reference International Tables for Crystallography: Space-group symmetry, 1st ed.; Hahn, T., Ed.; Fuess, H., Hahn, T., Wondratschek, H., Müller, U., Shmueli, U., Prince, E., Authier, A., Kopský, V., Litvin, D. B., Rossmann, M. G., et al., Series Eds.; International Union of Crystallography: Chester, England, 2006; Vol. A.
{ "domain": "chemistry.stackexchange", "id": 9504, "tags": "crystal-structure, molecular-structure, phase" }
Is it possible to genetically modify a plant at home?
Question: Would I be able to genetically modify a plant at home? What equipment will be necessary? I think it might be a fun change from the 'norm' of regular hybridisation, to try some inter-family gene insertion, instead of staying within a genus. Are some plants easier to modify than others? Answer: Well, that depends on your home. ;) I think it is not an easy process. There are two main methods that are used to genetically modify plants: Using the bacterium, Agrobacterium tumefaciens, as a vector for the DNA. Agrobacterium has the ability to infect plants and insert DNA into a plant's genome. It causes crown gall tumours in natural infections. This method has mainly been used to modify broad-leaved plants, such as sugar beet and oilseed rape, but is now also being applied to monocot species, such as maize and rice. Particle bombardment or biolistics, where the DNA to be inserted is coated on minute gold particles and fired into plant cells. This approach is used for monocot plants such as maize and rice. Here I found a simple step-by-step article, but it's a little bit old; maybe there are newer methods for this process. How To Genetically Modify a Seed, Step By Step
{ "domain": "biology.stackexchange", "id": 3050, "tags": "genetics, botany, plant-physiology" }
Collecting hashes of all files in a folder and its subfolders into a dictionary in Python
Question: What I am doing here is iterating through all the files in the parentFolder and its subfolders, collecting all the hashes of all the files and putting them into a dictionary where the key is the path of the file and the hash is the value: def createDicts(parentFolder): hashesDict = {} for folderName, subFolders, fileNames in os.walk(parentFolder): for fileName in fileNames: filePath = os.path.join(folderName, fileName) tempHash = getHash(filePath) hashesDict.update({filePath : tempHash}) return hashesDict getHash() function: def getHash (fileName): with open(fileName,"rb") as f: bytes = f.read() # Read file as bytes readableHash = hashlib.md5(bytes).hexdigest() return(readableHash) Is there a way to make this faster? Answer: In terms of code there's not much here in this snippet to go on to tell you how to make it faster as getHash isn't defined. Style PEP-8 is the standard style recommendation for Python. It covers things like how many blank lines to use, how much indentation, and recommendations for variable naming and casing. There are linters (Flake8, Pylint and others) which check your code against PEP-8 and issue useful warnings. In your case there are several simple and obvious things. Standard indentation for Python is 4 spaces. Likewise variable names and function names should be lower_snake_case. You also have a tonne of whitespace at the end of your lines. You may also want to look at pathlib instead of os.path as the more modern way of handling paths. Temporaries Instead of hashesDict.update({filePath : tempHash}) you can use hashesDict[filePath] = tempHash which avoids creating the temporary. You also aren't using subFolders so you can replace that with _ to say this is irrelevant.
Dict comprehension To speed it up, you may find that a dict comprehension helps: def create_dicts(parent_folder): return {os.path.join(folder_name, file_name): getHash(os.path.join(folder_name, file_name)) for folder_name, _, file_names in os.walk(parent_folder) for file_name in file_names} but as I have no idea as to the expense of getHash, that may be what is taking the time. getHash Your getHash function loads the entire file into memory at once, allocating and clearing large chunks of memory constantly; particularly if you have any large files, this will be killer. The cleaner way of doing this is reading the file in chunks. def getHash(filename: str, *, buffer_size: int = 65536) -> str: fhash = hashlib.md5() with open(filename, 'rb') as in_file: while chunk := in_file.read(buffer_size): fhash.update(chunk) return fhash.hexdigest() N.B. Choosing the right buffer_size can give some speedup, but will not be essential; md5 isn't necessarily the best or fastest hash, and alternatives are available.
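Putting the two suggestions together — chunked hashing plus a dict comprehension — gives a complete sketch (the function names `file_md5` and `create_dicts` are mine, mirroring the review's snippets; requires Python 3.8+ for the walrus operator):

```python
import hashlib
import os

def file_md5(path, buffer_size=65536):
    """Hash a file in fixed-size chunks so large files are never loaded whole."""
    fhash = hashlib.md5()
    with open(path, 'rb') as in_file:
        while chunk := in_file.read(buffer_size):
            fhash.update(chunk)
    return fhash.hexdigest()

def create_dicts(parent_folder):
    """Map every file path under parent_folder to its MD5 hex digest."""
    return {os.path.join(folder, name): file_md5(os.path.join(folder, name))
            for folder, _, names in os.walk(parent_folder)
            for name in names}
```

The combination keeps memory usage bounded by `buffer_size` regardless of file size, while the comprehension avoids the per-iteration `dict.update` temporaries from the original code.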
{ "domain": "codereview.stackexchange", "id": 44639, "tags": "python, performance, file-system, hashcode" }
Finding maximum length of continuous string which has same characters
Question: I'm solving this problem from codeforces. I have a solution with O(n²) complexity, but it exceeds the time limit on some of the test cases. The problem statement is: High school student Vasya got a string of length n as a birthday present. This string consists of letters 'a' and 'b' only. Vasya denotes beauty of the string as the maximum length of a substring (consecutive subsequence) consisting of equal letters. Vasya can change no more than k characters of the original string. What is the maximum beauty of the string he can achieve? Input The first line of the input contains two integers n and k (1 ≤ n ≤ 100 000, 0 ≤ k ≤ n) — the length of the string and the maximum number of characters to change. The second line contains the string, consisting of letters 'a' and 'b' only. Output Print the only integer — the maximum beauty of the string Vasya can achieve by changing no more than k characters. Examples Sample Input/Output: 8 1 aabaabaa Output 5 My solution is: import java.util.Scanner; public class CF676C { public static void main(String args[]){ Scanner in = new Scanner(System.in); int n = in.nextInt(), max = in.nextInt(),a=0,b=0; String s = in.next(); int i,j; for(i=0;i<n;i++){ if(s.charAt(i) == 'a'){ ++a; } else{ ++b; } } char z; if(a<=b){ z='a'; } else{ z='b'; } int max1; int count=0; int temp=0; for(i=0;i<s.length();i++) { max1=max; count=0; for(j=i;j<s.length()&&max1!=-1;j++) { if(s.charAt(j)==z&&max1!=-1) { max1--; } if(max1!=-1) { count++; } } if(count>temp) { temp=count; } } System.out.println(temp); } } I can't understand how to reduce the time taken by the code. Is there any other solution or approach available? Answer: Yes, you can tackle this problem in a much faster way, with an O(n) algorithm, using a left and a right pointer delimiting the substrings to consider.
For each character in the input String, the idea is to keep an array counting the number of 'a' and 'b' between a given left pointer, which will represent the start of the substring, and the current character. When the count of 'a' or the count of 'b' is less than the allowed number of characters to change, we can consider making the change and incrementing the answer. Then, when both of those 2 counts become greater than the allowed number of characters to change, it means we cannot change more characters, so we have hit a maximum length: the left pointer is increased, and the count of the character at that pointer is decreased. The count of 'a' and 'b' characters can be modeled as an integer array (index 0 corresponds to the count of 'a' while index 1 corresponds to the count of 'b'). Scanner in = new Scanner(System.in); int n = in.nextInt(), max = in.nextInt(); String s = in.next(); int left = 0, answer = 0; int[] count = { 0, 0 }; for (char c : s.toCharArray()) { count[c - 'a']++; if (Math.min(count[0], count[1]) > max) { count[s.charAt(left) - 'a']--; left++; } else { answer++; } } System.out.println(answer); This approach passes all the tests of the challenge.
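The same two-pointer idea translates almost line-for-line into Python (my transcription of the answer's Java, taking the string as a parameter instead of reading stdin):

```python
def max_beauty(s: str, k: int) -> int:
    """Longest substring achievable with at most k changes; O(n) two pointers."""
    count = {'a': 0, 'b': 0}
    left = answer = 0
    for c in s:
        count[c] += 1
        # The window now holds more than k of BOTH letters, so no set of
        # at most k changes can make it uniform: shrink from the left.
        if min(count['a'], count['b']) > k:
            count[s[left]] -= 1
            left += 1
        else:
            answer += 1
    return answer

print(max_beauty("aabaabaa", 1))  # 5, matching the sample
```

The window never shrinks below its best size, so `answer` ends up equal to the largest window that was ever feasible — the same invariant the Java version relies on.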
{ "domain": "codereview.stackexchange", "id": 21897, "tags": "java, strings, programming-challenge, time-limit-exceeded" }
c++17 compatible std::bind_front alternative
Question: One of the things that has been really exciting me in c++20 is std::bind_front. Using placeholders with std::bind and boost::bind has really bothered me and the code looked messier and messier with each call to bind. It was bad enough for me to decide to enable -std=c++2a and pray that I wouldn't have to fix the code in the future if something gets changed. I was messing around with parameter pack recursion when I realized I could make a std::bind_front alternative that would work in c++17. If I removed the std::invoke it would even work in c++11. It even seems to compile faster than my standard library's implementation. //#include <utility> //#include <functional> //of course this would go into some kind of namespace template <class F, class A> struct _bind_obj { F originalFunc; A arg; template <class... Args> inline auto operator()(Args&&... a){ return std::invoke(originalFunc, arg, std::forward<Args>(a)...); } _bind_obj(F &&_originalFunc, A &&_arg) : originalFunc(std::forward<F>(_originalFunc)), arg(std::forward<A>(_arg)){ } }; template <class F, class A> auto bind_front(F &&func, A &&arg){ return _bind_obj<F, A>( std::forward<F>(func), std::forward<A>(arg) ); } template <class F, class FirstA, class... A> auto bind_front(F &&func, FirstA &&firstA, A&&... a){ return bind_front( bind_front( std::forward<F>(func), std::forward<FirstA>(firstA) ), std::forward<A>(a)... ); } So what do you think? Are there some cases in std::bind_front that I missed? Are there any optimizations I should make? Answer: bind_front can (and should) be made constexpr. The callable object and the bound arguments need to be decayed per the standard. You can store all arguments in a tuple instead of generating nested wrappers: template <class FD, class... Args> class bind_obj { // ... FD func; std::tuple<Args...> args; }; and then call std::apply(func, args, std::forward<A>(call_args)...) (which internally calls invoke.) Otherwise, nice code.
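As an aside (my comparison, not from the original post): the front-binding behaviour being reimplemented here is the same idea as Python's `functools.partial`, which can serve as a quick reference for the expected semantics:

```python
from functools import partial

def volume(length, width, height):
    return length * width * height

# Bind the first two arguments "in front", like bind_front(volume, 2, 3):
front = partial(volume, 2, 3)
print(front(4))  # 24
```

Like `bind_front`, `partial` stores the bound arguments in the returned callable and forwards any remaining call-time arguments after them.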
{ "domain": "codereview.stackexchange", "id": 36039, "tags": "c++, performance, functional-programming, reinventing-the-wheel, template-meta-programming" }
Code works in Electric flawlessly but segfaults in Fuerte
Question: Dear ROS brothers, I found this implementation of TinySLAM for ROS called coreslam, which I checked out from the University of Albany repository. As far as I understood, the code was developed back in 2008 and the latest change was made in December 2010. So I guess the pkg was working fine with CTurtle and most likely Diamondback. I have a Fuerte installation in Ubuntu 11.10 and I was able to compile the pkg after explicitly linking the signals boost library (as suggested in the Fuerte Migration Guide) in the CMakeList.txt: rosbuild_add_boost_directories() rosbuild_add_executable(bin/slam_coreslam src/slam_coreslam.cpp src/main.cpp) rosbuild_link_boost(bin/slam_coreslam signals) As soon as I run the slam_coreslam node, I receive an immediate and deathly segmentation fault. I understand that the program crashes in line 27 of main.cpp, when the SlamCoreSlam class is initialized (slam_coreslam.h, slam_coreslam.cpp), because when I comment out this line, it will not segfault. I have used gdb to debug the problem, here is the gdb stack backtrace: (gdb) exec-file slam_coreslam (gdb) r Starting program: /home/david/stacks/coreslam/bin/slam_coreslam slam_coreslam [Thread debugging using libthread_db enabled] Program received signal SIGSEGV, Segmentation fault. 0x08062ac8 in ?? () (gdb) bt #0 0x08062ac8 in ?? () #1 0x004ba113 in __libc_start_main () from /lib/i386-linux-gnu/libc.so.6 #2 0x08062be9 in ?? () Backtrace stopped: Not enough registers or memory available to unwind further (gdb) I've also run it with valgrind, here is the output: david@David-Laptop:~/stacks/coreslam/bin$ valgrind ./slam_coreslam ==7329== Memcheck, a memory error detector ==7329== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al. ==7329== Using Valgrind-3.6.1-Debian and LibVEX; rerun with -h for copyright info ==7329== Command: ./slam_coreslam ==7329== ==7329== Warning: client switching stacks? 
SP change: 0xbe824fb0 --> 0xbdffbc90 ==7329== to suppress, use: --max-stackframe=8557344 or greater ==7329== Invalid write of size 4 ==7329== at 0x8062AB7: main (main.cpp:25) ==7329== Address 0xbe824fac is on thread 1's stack ==7329== ==7329== Invalid write of size 4 ==7329== at 0x8062AC8: main (main.cpp:26) ==7329== Address 0xbdffbc98 is on thread 1's stack ==7329== ==7329== ==7329== Process terminating with default action of signal 11 (SIGSEGV) ==7329== Access not within mapped region at address 0xBDFFBC98 ==7329== at 0x8062AC8: main (main.cpp:26) ==7329== If you believe this happened as a result of a stack ==7329== overflow in your program's main thread (unlikely but ==7329== possible), you can try to increase the size of the ==7329== main thread stack using the --main-stacksize= flag. ==7329== The main thread stack size used in this run was 8388608. ==7329== ==7329== Process terminating with default action of signal 11 (SIGSEGV) ==7329== Access not within mapped region at address 0xBDFFBC8C ==7329== at 0x402242C: _vgnU_freeres (vg_preloaded.c:58) ==7329== If you believe this happened as a result of a stack ==7329== overflow in your program's main thread (unlikely but ==7329== possible), you can try to increase the size of the ==7329== main thread stack using the --main-stacksize= flag. ==7329== The main thread stack size used in this run was 8388608. 
==7329==
==7329== HEAP SUMMARY:
==7329== in use at exit: 29,809 bytes in 429 blocks
==7329== total heap usage: 1,296 allocs, 867 frees, 53,217 bytes allocated
==7329==
==7329== LEAK SUMMARY:
==7329== definitely lost: 0 bytes in 0 blocks
==7329== indirectly lost: 0 bytes in 0 blocks
==7329== possibly lost: 4,632 bytes in 159 blocks
==7329== still reachable: 25,177 bytes in 270 blocks
==7329== suppressed: 0 bytes in 0 blocks
==7329== Rerun with --leak-check=full to see details of leaked memory
==7329==
==7329== For counts of detected and suppressed errors, rerun with: -v
==7329== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 67 from 8)
Segmentation fault

I guess it is a library problem... I have also tried to compile the pkg (original CMakeLists.txt) with an Electric installation that I have on a different computer and found out that the node runs smoothly with no problem at all (working similarly to Gmapping)! I was completely surprised and I am really curious about what may be causing the segmentation fault in Fuerte. The only thing that I did differently was including the signals Boost library in the CMakeLists.txt before compiling. Is anyone willing to help this curious young man in finding a working solution of this pkg for Fuerte? Thanks in advance!

Originally posted by DavidPortugal on ROS Answers with karma: 349 on 2012-12-29
Post score: 1

Original comments
Comment by joq on 2013-01-03: To get help, you will probably need to update your question to include the gdb stack backtrace and any other relevant information.
Comment by DavidPortugal on 2013-01-07: Thank you joq. Just edited the post :) Any ideas, anyone?

Answer: So the backtrace ("Not enough registers or memory available to unwind further") got me wondering about memory issues. So instead of instantiating the class SlamCoreSlam, I simply tried to allocate memory in main.cpp.
So originally we had:

int main(int argc, char** argv)
{
    ros::init(argc, argv, "slam_coreslam");
    SlamCoreSlam cs;
    ros::spin();
    return(0);
}

which I changed to:

int main(int argc, char** argv)
{
    ros::init(argc, argv, "slam_coreslam");
    //SlamCoreSlam cs;
    SlamCoreSlam *cs = new SlamCoreSlam[1];
    ros::spin();
    delete[] cs;
    return(0);
}

And now it works fine. I don't know what is going on in the code to cause the segfault and still don't really understand why the first one would work in Electric, but not in Fuerte. But, at least, the problem was solved. Hope the solution may help someone in the future.

Originally posted by DavidPortugal with karma: 349 on 2013-01-08
This answer was ACCEPTED on the original site
Post score: 3
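There is a plausible (though unconfirmed by the thread) explanation hiding in the valgrind output above: the "client switching stacks?" warning points at a single stack frame of 8,557,344 bytes (the --max-stackframe hint), while the main thread's stack was 8,388,608 bytes, the common 8 MiB Linux default. Assuming the SlamCoreSlam object embeds the occupancy-grid map (an assumption, not stated in the thread), a stack-allocated instance simply does not fit on the stack, which is exactly the kind of failure heap allocation sidesteps. The arithmetic:

```python
# Both figures are copied verbatim from the valgrind log in the question.
frame_bytes = 8557344   # "--max-stackframe=8557344 or greater"
stack_bytes = 8388608   # "The main thread stack size used in this run was 8388608."

print(frame_bytes > stack_bytes)   # the frame alone exceeds the whole stack
print(frame_bytes - stack_bytes)   # bytes over the limit (about 165 KiB)
print(stack_bytes // 2**20)        # stack size in MiB: the usual 8 MiB default
```

This would also explain why the crash shows up in main's prologue before any of the object's code runs: the stack pointer adjustment itself walks past the mapped stack region.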
{ "domain": "robotics.stackexchange", "id": 12227, "tags": "slam, navigation, ros-fuerte, slam-gmapping, gmapping" }
Recursive Error Handling in Python with a limited amount of tries
Question: I have a class method that is set up to make requests to an API and I'm having issues handling the possible errors that could arise by trying to communicate with that server. The idea is that if an error were to pop up, the method would call on itself until either it no longer needs to (because it worked), or it reaches a limit I set up for it; in this case, I would want it to just simply raise an exception that would be caught somewhere else in the code. What I don't know is how to properly return the response if it crapped out the first time it made the call and had to call on itself from the except block. I'm not 100% clear if this actually gets the job done. Additionally, I don't know if this is the best way to do it. I know a possible suggestion is to make error handling more specific, given the requests library's myriad exceptions, but I figured that any exception should just be handled by retrying X times and then quitting if it doesn't work.

def getRequest(self, endpoint, attempts):
    baseUrl = self.baseUrl
    url = baseUrl + endpoint
    if self.expirationT <= datetime.datetime.now():
        self.token, self.expirationT = self.auth(client_id, client_secret)
    else:
        pass
    try:
        response = requests.get(url, auth = BearerAuth(self.token))
        response.raise_for_status()
    except:
        if attempts < 20:
            time.sleep(3)
            response = self.getRequest(endpoint, attempts + 1)
            return response
        else:
            raise Exception
    else:
        return response

Answer:

I know a possible suggestion is to make error handling more specific, given the requests library's myriad exceptions, but I figured that any exception should just be handled by retrying X times and then quitting if it doesn't work.

I disagree here. Not just any exception should be handled. It's possible to have typo'd a bug into your code inside the try, and you definitely don't want try masking a bug. If you check the source (or documentation), you'll see that the requests exceptions all seem to inherit from RequestException.
If you really want to handle every possible request exception the same, I would catch the base class RequestException instead. I still don't think this is a good idea though without doing any logging. There may very well be a RequestException that gets thrown at some point which indicates that you accidentally gave the request bad data, not that there was a problem with the request being carried out using good data. I'd check the docs for the methods used and figure out what exact exceptions you want to retry on.

This also doesn't need to be recursion. In this case, nothing bad will likely happen because you have a limit of 20 retries, which isn't enough to exhaust the stack in any sane case. If you ever increase that limit up to 1000 though, you may run into real problems. I think this could be done pretty easily using a while True loop. The first two lines of the function seem to be essentially constants, so they don't need to be recomputed every time. Everything under those lines though can be stuck in a loop.

def getRequest(self, endpoint, max_attempts=20, retry_delay=3):
    baseUrl = self.baseUrl
    url = baseUrl + endpoint
    attempts = 0
    while True:
        if self.expirationT <= datetime.datetime.now():
            self.token, self.expirationT = self.auth(client_id, client_secret)
        try:
            response = requests.get(url, auth=BearerAuth(self.token))
            response.raise_for_status()
            return response
        except requests.RequestException as e:
            attempts += 1
            if attempts < max_attempts:
                time.sleep(retry_delay)
            else:
                raise RuntimeError("Max number of retries met.")
                # Or to preserve in the trace the original problem that caused the error:
                # raise RuntimeError("Max number of retries met.") from e

Things to note:

To retry now, instead of manually recursing, I'm just letting control fall out of the except so that the while can restart again.

Instead of attempts being a parameter, I just made it a local variable which is incremented inside of the except.
I'm throwing a more specialized exception with an informative error message. Throwing the generic Exception makes life more difficult for the users of your code. Ideally, they should be able to pick and choose what exceptions they handle and when. Throwing Exception though forces them to catch your errors. RuntimeError isn't really the best exception here, but I couldn't think of a good built-in one for this purpose. You may want to make a custom exception for this case:

class TooManyRetries(Exception):
    pass

. . .

raise TooManyRetries("Max number of retries met.")

I got rid of the else: pass. That isn't necessary.

You had two "magic numbers": 20 and 3 to mean the max number of attempts and the retry delay. I don't think it's a good idea to have those hard coded though. What if you want to change either at some point? You'd have to edit the code. I made them parameters of the function, defaulting to the values that you had. If you don't specify them, the behavior will be as you had before, but now they can be easily changed as needed.
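The loop-based shape is easy to exercise without hitting a real API by substituting a stub for requests.get. Everything below is hypothetical scaffolding for illustration (make_flaky and call_with_retries are invented names, and the sleep is omitted); only the control flow mirrors the rewritten function:

```python
class TooManyRetries(Exception):
    pass

def make_flaky(failures):
    """Return a callable that raises `failures` times, then succeeds."""
    remaining = [failures]
    def flaky():
        if remaining[0] > 0:
            remaining[0] -= 1
            raise ConnectionError("simulated transient failure")
        return "response"
    return flaky

def call_with_retries(func, max_attempts=5):
    # Same shape as the reviewed getRequest: a loop instead of recursion,
    # attempts as a local counter, a specialized exception on exhaustion.
    attempts = 0
    while True:
        try:
            return func()
        except ConnectionError:
            attempts += 1
            if attempts >= max_attempts:
                raise TooManyRetries("Max number of retries met.")
            # time.sleep(retry_delay) would go here against a real server

print(call_with_retries(make_flaky(3)))   # succeeds on the 4th attempt
```

With make_flaky(3) and five allowed attempts the call returns normally; a stub that never recovers raises TooManyRetries after the fifth failure, which a caller can catch selectively.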
{ "domain": "codereview.stackexchange", "id": 36702, "tags": "python, python-3.x, error-handling" }
Custom robot model load in moveit warning
Question: Hello there, I have created a custom support package for the robot model, and then I have loaded it into MoveIt.

Issue 1: I am getting a warning: "Requesting initial scene failed".
Issue 2: When I am planning, I cannot visualize the plan, but I am not getting any errors. Execution works perfectly.

When I am planning, I am getting this log in the terminal:

[ INFO] [1617785113.351132162]: Planning request received for MoveGroup action. Forwarding to planning pipeline.
[ INFO] [1617785113.351322921]: Using planning pipeline 'ompl'
[ INFO] [1617785113.353181068]: Planner configuration 'arm' will use planner 'geometric::RRTConnect'. Additional configuration parameters will be set when the planner is constructed.
[ INFO] [1617785113.354095609]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.354366929]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.354535507]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.354738569]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.365235684]: arm/arm: Created 5 states (2 start + 3 goal)
[ INFO] [1617785113.365411044]: arm/arm: Created 4 states (2 start + 2 goal)
[ INFO] [1617785113.365518967]: arm/arm: Created 4 states (2 start + 2 goal)
[ INFO] [1617785113.365803082]: arm/arm: Created 5 states (3 start + 2 goal)
[ INFO] [1617785113.366140687]: ParallelPlan::solve(): Solution found by one or more threads in 0.012549 seconds
[ INFO] [1617785113.366741588]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.366960209]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.367066199]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.367190773]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.367611265]: arm/arm: Created 5 states (2 start + 3 goal)
[ INFO] [1617785113.368098842]: arm/arm: Created 4 states (2 start + 2 goal)
[ INFO] [1617785113.368433953]: arm/arm: Created 5 states (3 start + 2 goal)
[ INFO] [1617785113.368865325]: arm/arm: Created 5 states (2 start + 3 goal)
[ INFO] [1617785113.369476415]: ParallelPlan::solve(): Solution found by one or more threads in 0.003009 seconds
[ INFO] [1617785113.369935539]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.370027843]: arm/arm: Starting planning with 1 states already in datastructure
[ INFO] [1617785113.370645285]: arm/arm: Created 5 states (2 start + 3 goal)
[ INFO] [1617785113.370709295]: arm/arm: Created 4 states (2 start + 2 goal)
[ INFO] [1617785113.370981838]: ParallelPlan::solve(): Solution found by one or more threads in 0.001303 seconds
[ INFO] [1617785113.373381294]: SimpleSetup: Path simplification took 0.002287 seconds and changed from 3 to 2 states

Originally posted by Ranjit Kathiriya on ROS Answers with karma: 1622 on 2021-04-07
Post score: 1

Answer: Everything works fine, even though I am facing this warning. To solve it, just untick and then tick the Motion Planning box again.

Originally posted by Ranjit Kathiriya with karma: 1622 on 2021-04-14
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 36289, "tags": "moveit" }
Optimizing Hash Table
Question: I am trying to write my own implementation of a hash table (hash map) in C++. It turns out that my code is unoptimized, as I can't pass the performance tests. Can you please give some advice for optimizing the program? I am using open addressing to resolve collisions, with quadratic probing. The hash function for a string is the value of a polynomial whose coefficients are the individual characters of the string. Only strings are inserted into the hash table.

#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
#include <cmath>

//HashTable class
/* Using open addressing for resolving collisions with quadratic probing */
class HashTable {
public:
    HashTable();
    bool Set(std::string key);
    bool Remove(std::string key);
    bool Get(std::string key);
private:
    int GetHash(std::string text, int table_size);
    std::vector<std::pair<std::string, bool>> table;
    void ExtendTable();
    size_t fullness = 0;
};

HashTable::HashTable() {
    //first element of pair - key, second - is the cell free
    for(int i = 0; i < 8; i++) {
        table.push_back(std::make_pair("",true));
    }
}

int HashTable::GetHash(std::string text, int table_size) {
    //Hash function for string.
    //Calculates polynomial value using Horner's method
    int b = text[0];
    int point = (int)sqrt(table_size); //x value for polynomial
    for(int i = 1; i < text.size(); i++) {
        b = (text[i] + b*point) % table_size;
    }
    return b % table_size;
}

void HashTable::ExtendTable() {
    //Extends table if fullness == 3/4 * size of table
    //creates new table
    std::vector<std::pair<std::string, bool>> new_table;
    for(int i = 0; i < 2*this->table.size(); i++) {
        new_table.push_back(std::make_pair("", true));
    }
    //copying old table to a new one
    for(int i = 0; i < this->table.size(); i++) {
        int new_index = GetHash(this->table[i].first, 2*this->table.size());
        int step = 1;
        while(!new_table[new_index].second) {
            new_index += pow(step, 2);
            new_index %= 2*this->table.size();
            step++;
        }
        new_table[new_index] = this->table[i];
    }
    this->table = new_table;
}

bool HashTable::Set(std::string key) {
    //Adding new element to hash table. Duplicates are ignored
    int index = GetHash(key, this->table.size());
    int step = 1;
    while(!this->table[index].second) {
        if(this->table[index].first == key) return false;
        index += pow(step, 2);
        index %= this->table.size();
        step++;
    }
    this->table[index] = std::make_pair(key, false);
    this->fullness++;
    if(this->fullness*4 >= this->table.size()*3) ExtendTable();
    return true;
}

bool HashTable::Get(std::string key) {
    //Checks if element is in hash table
    int index = GetHash(key, this->table.size());
    int step = 1;
    int counter = 0;
    while(counter < this->fullness+1) {
        if(this->table[index].first == key) return true;
        index += pow(step, 2);
        index %= this->table.size();
        step++;
        counter++;
    }
    return false;
}

bool HashTable::Remove(std::string key) {
    //Removes an element from the hash table
    int index = GetHash(key, this->table.size());
    int step = 0;
    int counter = 0;
    while(counter < this->fullness+1) {
        if(this->table[index].first == key) {
            this->fullness--;
            this->table[index].first = "";
            this->table[index].second = true;
            return true;
        }
        index += pow(step, 2);
        index %= this->table.size();
        step++;
        counter++;
    }
    return false;
}

int
main() {
    HashTable table;
    std::ios_base::sync_with_stdio(0),std::cin.tie(0),std::cout.tie(0);
    std::string input = "";
    while(std::cin >> input) {
        if(input == "+") {
            std::cin >> input;
            bool flag = table.Set(input);
            if(flag) std::cout << "OK" << std::endl;
            else std::cout << "FAIL" << std::endl;
        }
        else if(input == "?") {
            std::cin >> input;
            bool flag = table.Get(input);
            if(flag) std::cout << "OK" << std::endl;
            else std::cout << "FAIL" << std::endl;
        }
        else if(input == "-") {
            std::cin >> input;
            bool flag = table.Remove(input);
            if(flag) std::cout << "OK" << std::endl;
            else std::cout << "FAIL" << std::endl;
        }
    }
    return 0;
}

Answer:

The Hash

Your table looks to have a power-of-two size, starting at 8 and then doubling if needed. That's fine, good even, depending on how you use it. But then this happens:

int HashTable::GetHash(std::string text, int table_size) {
    //Hash function for string. Calculates polynomial value using Horner's method
    int b = text[0];
    int point = (int)sqrt(table_size); //x value for polynomial
    for(int i = 1; i < text.size(); i++) {
        b = (text[i] + b*point) % table_size;
    }
    return b % table_size;
}

The slow square root is a problem of its own (how big of a problem depends on the size of the strings), but a very bad effect happens if table_size is a power of four (which it is half the time): point would be a power of two. So the multiplication (modulo a power of two) just shifts bits out of the top and loses them, deleting bits in a first-in-first-out fashion: the final hash is only affected by the last couple of characters; the bits from the first characters get shifted out. The effect gets worse as the table gets bigger; eventually only the very last character would be part of the hash. The overall effect on your program is that as the table gets bigger, performance fluctuates between OK (probably) for odd power sizes and Increasingly Bad for even power sizes, getting worse and worse for bigger tables and long strings that share a suffix.
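The bit-loss is easy to reproduce. The sketch below transcribes GetHash into Python purely for demonstration (not part of the review itself): with table_size = 64 = 4³, point is 8, and any two strings of length at least 3 that share their last two characters collide, regardless of every earlier character:

```python
from math import sqrt

def get_hash(text, table_size):
    # Python transcription of the reviewed GetHash (Horner evaluation).
    b = ord(text[0])
    point = int(sqrt(table_size))   # x value for the polynomial
    for ch in text[1:]:
        b = (ord(ch) + b * point) % table_size
    return b % table_size

size = 64   # a power of four, so point = 8 = 2**3
# Multiplying by 8 modulo 64 shifts bits out of the top; after two characters
# the contribution of every earlier character is 0 mod 64.
print(get_hash("aaaaaaXY", size))
print(get_hash("zzzzzzXY", size))   # same value: only the two-character suffix survives
```

For this size the hash collapses to (text[-2]*8 + text[-1]) mod 64, so the table degenerates into 64 buckets keyed by two characters, no matter how long the keys are.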
This wouldn't have been an issue for prime size tables, but that comes with a significant downside of its own. What to use instead: std::hash<std::string> probably, or write a hash that does not suffer from this problem; there are many string hashing algorithms that don't have this issue. Also b should really be some unsigned type, both to avoid the scary UB nature of signed integer overflow and also the more practical concern of avoiding a negative value as result (as a reminder, % on signed types returns the signed remainder; the result can be negative depending on the inputs). Which leads to:

The Types

A lot of variables and return types here are of type int. Many of them should be something else, such as size_t. Using int results in many unexpected type conversions, for example in index %= this->table.size(); which actually converts index to size_t first, then does the remainder, then converts back to an int again. Having a signed index risks overflowing it if the step gets big, and often costs an explicit sign-extension operation. The first index, which comes from GetHash, could be negative, which would be bad (indexing the vector at a negative index).

The Quadratic step

You wrote:

new_index += pow(step, 2);

That's a common thing for beginners to write, but unfortunately, that's a floating point square. The resulting code on x64 with Clang 9 and -O2 is:

xorps xmm0, xmm0
cvtsi2sd xmm0, r14d
xorps xmm1, xmm1
cvtsi2sd xmm1, ebx
mulsd xmm0, xmm0
addsd xmm1, xmm0
cvttsd2si eax, xmm1

A lot of converting and other floating point operations. Writing it as new_index += step * step; results in:

mov eax, edi
imul eax, edi
add eax, ebx

But it turns out you don't even need this, see further below.

The Modulo

You wrote:

index %= this->table.size();

Which does not use that the table size is a power of two, so for example on x64 with Clang 9 and -O2 again, that results in:

cdqe
xor edx, edx
div rsi ; use rdx (the remainder)

A 64-bit div ranges from slow to very slow.
The time depends on the processor model and on the values being divided and on whether we're measuring latency or throughput (or a bit of both?), so it's difficult to pin a single number to it, but as a ballpark number let's say it's around 50x as slow as integer addition. Division (and therefore remainder) is a difficult operation, so this issue is not restricted to x64 processors. Having a power-of-two sized table is perfect for avoiding that operation; you can use (and it's a waste of the opportunity not to use this):

index &= this->table.size() - 1;

Unfortunately compilers are not yet so sophisticated that they can discover and track the property of the size being a power of two through the program. Such optimizations do happen locally, if the divisor is obviously a power of two, which is much easier for a compiler to discover than a more "global" invariant of your data structure.

Non-termination of Set

It's unlikely, but Set could loop forever. What that takes is a probe sequence that visits only filled slots. You might expect the 75% fullness bound to prevent that, but this probe sequence (cumulatively adding i² modulo a power of two), while not too bad, does not guarantee visiting 75% of the slots. There is a probe sequence that does guarantee that:

index += step;

Doesn't look quadratic? It is! step goes up by 1 every step, so the sequence formed by index has the closed form formula index(i) = initialHash + (i² + i) / 2: the famous triangular numbers based quadratic probing. That will visit every slot (if necessary; of course, escaping from the loop early is encouraged!) with no duplicates, so there would be no accidental infinite loop.

Well this may all look pretty negative, but there are a couple of things I definitely liked in that code: firstly, using std::vector for the storage, so this class does not need to concern itself with memory management. And this trick, this->fullness*4 >= this->table.size()*3, rather than multiplying by a floating point number.
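Both power-of-two claims above can be checked exhaustively in a few lines. The sketch below (Python for brevity; the C++ version is the same arithmetic) verifies that triangular-number probing visits every slot of a power-of-two table exactly once from any starting hash, and, along the way, that masking with size - 1 agrees with % size:

```python
def probe_sequence(start, size):
    """Slots visited by index(i) = start + i*(i+1)/2 mod size, size a power of two."""
    index, step = start, 1
    visited = []
    for _ in range(size):
        visited.append(index)
        # For non-negative values and power-of-two size, mask == modulo.
        assert (index + step) & (size - 1) == (index + step) % size
        index = (index + step) & (size - 1)
        step += 1
    return visited

for size in (8, 16, 64):
    for start in range(size):
        # A permutation: every slot visited exactly once, so Set cannot spin forever.
        assert sorted(probe_sequence(start, size)) == list(range(size))
print("triangular probing covers every slot for sizes 8, 16, 64")
```

The same exhaustive check fails for the original cumulative-i² sequence, which revisits slots and can stall on a partially filled table; that is precisely the non-termination risk described above.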
{ "domain": "codereview.stackexchange", "id": 36779, "tags": "c++, programming-challenge, reinventing-the-wheel, homework" }