Do plasma B cells express the BCR or only produce soluble antibodies?
Question: Is the B cell receptor still expressed on a B cell once it has begun to produce soluble antibodies? Is there a gene change that prevents the membrane-bound form from being produced anymore? Answer: B cells can secrete antibody before they are terminally differentiated into plasma cells, so there is a phase during which both membrane-bound and secretory antibodies can be produced by the same cell: When a naïve or memory B cell is activated by antigen (with the aid of a helper T cell), it proliferates and differentiates into an antibody-secreting effector cell. Such cells make and secrete large amounts of soluble (rather than membrane-bound) antibody, which has the same unique antigen-binding site as the cell-surface antibody that served earlier as the antigen receptor (Figure 24-17). Effector B cells can begin secreting antibody while they are still small lymphocytes, but the end stage of their maturation pathway is a large plasma cell --Molecular Biology of the Cell. 4th edition. Plasma cells, generally speaking, have little or no surface immunoglobulin, but it's been claimed that some subsets of plasma cells do have surface Ig: Surprisingly, although IgG PCs downregulated surface IgG expression, IgA and IgM PCs expressed their respective isotype both intracellularly and on the plasma membrane (Figure 1A, lower panel). Importantly, concordant Ig’s were also detected on the plasma membrane of IgA and IgM, but not IgG, PCs isolated from BM or colon LP indicating that this property is a characteristic of PCs present in their physiological niches --A functional BCR in human IgA and IgM plasma cells
{ "domain": "biology.stackexchange", "id": 10133, "tags": "immunology" }
mavros with serial connection
Question: Hi! I try get imu_pub topics. Connection via USB. I launch: roslaunch mavros apm2.launch and get [ INFO] [1423223050.794198448]: FCU URL: /dev/ttyACM0:115200 [ INFO] [1423223050.794370090]: device: /dev/ttyACM0 @ 115200 bps [ INFO] [1423223050.794872555]: GCS bridge disabled [ INFO] [1423223050.799750201]: Plugin [alias 3dr_radio] blacklisted [ INFO] [1423223050.858428577]: Plugin Command [alias command] loaded and initialized [ INFO] [1423223050.858484604]: Plugin [alias ftp] blacklisted [ INFO] [1423223050.858500013]: Plugin [alias global_position] blacklisted [ INFO] [1423223050.861491572]: Plugin GPS [alias gps] loaded and initialized [ INFO] [1423223050.870787476]: Plugin IMUPub [alias imu_pub] loaded and initialized [ INFO] [1423223050.870830161]: Plugin [alias local_position] blacklisted [ INFO] [1423223050.873211089]: Plugin Param [alias param] loaded and initialized [ INFO] [1423223050.877987176]: Plugin RCIO [alias rc_io] loaded and initialized [ INFO] [1423223050.878022845]: Plugin [alias safety_area] blacklisted [ INFO] [1423223050.878043033]: Plugin [alias setpoint_accel] blacklisted [ INFO] [1423223050.878060355]: Plugin [alias setpoint_attitude] blacklisted [ INFO] [1423223050.878076733]: Plugin [alias setpoint_position] blacklisted [ INFO] [1423223050.878093765]: Plugin [alias setpoint_velocity] blacklisted [ INFO] [1423223050.885327432]: Plugin SystemStatus [alias sys_status] loaded and initialized [ INFO] [1423223050.888867617]: Plugin SystemTime [alias sys_time] loaded and initialized [ INFO] [1423223050.890337403]: Plugin VFRHUD [alias vfr_hud] loaded and initialized [ INFO] [1423223050.895379793]: Plugin Waypoint [alias waypoint] loaded and initialized [ INFO] [1423223050.895428569]: MAVROS started. MY ID [1, 240], TARGET ID [1, 1] [ERROR] [1423223052.700703044]: FCU: Calibrating barometer [ INFO] [1423223052.704659082]: CON: Got HEARTBEAT, connected. 
[ INFO] [1423223054.245342256]: FCU: barometer calibration complete [ INFO] [1423223054.253319191]: FCU: GROUND START [ INFO] [1423223062.721007722]: FCU: ArduCopter V3.2 (c8e0f3e1) [ INFO] [1423223062.724986869]: FCU: Frame: Y6 [ INFO] [1423223067.726841955]: WP: mission received [ INFO] [1423223074.878456321]: PR: parameters list received I receive messages from /mavros/state, but not from /mavros/imu/data ADDED: I can receive messages after ~set_stream_rate with max rate 13 for /mavros/imu/data Originally posted by tuuzdu on ROS Answers with karma: 85 on 2015-02-06 Post score: 0 Answer: Mavros doesn't change the stream setup at startup, but usually that is not needed. There is also the mavsys script; try rosrun mavros mavsys rate --raw-sensors 10. Originally posted by vooon with karma: 404 on 2015-02-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tuuzdu on 2015-02-07: I think that rosrun mavros mavsys rate --all 30 is equivalent to rosservice call /mavros/set_stream_rate 0 30 1, isn't it? But when I use the first one, I get a maximum rate of about 20 Hz for mavros/imu/data... I don't understand why that happens. Comment by vooon on 2015-02-07: Equal. Strange :) Comment by tuuzdu on 2015-02-08: What is the maximum rate of mavros/imu/data? Comment by vooon on 2015-02-08: The rate is limited by the FCU. With PX4 I got a maximum of ~150 Hz, and on some branches up to 200. Comment by tuuzdu on 2015-02-08: With apm2 I get a maximum of 25 Hz after rosrun mavros mavsys rate --extra1 50 ...
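The comment thread above hinges on the mapping between the mavsys rate flags and the /mavros/set_stream_rate service arguments. As an illustrative sketch (plain Python, not part of mavros; the helper name is made up), the correspondence can be written down explicitly using the MAVLink MAV_DATA_STREAM enum values:

```python
# Illustrative only: map mavsys "rate" options to the equivalent
# /mavros/set_stream_rate call. Stream IDs follow the MAVLink
# MAV_DATA_STREAM enum (0 = ALL, 1 = RAW_SENSORS, 10 = EXTRA1, ...).
STREAM_IDS = {
    "--all": 0,
    "--raw-sensors": 1,
    "--ext-status": 2,
    "--rc-channels": 3,
    "--raw-controller": 4,
    "--position": 6,
    "--extra1": 10,
    "--extra2": 11,
    "--extra3": 12,
}

def set_stream_rate_cmd(option, rate_hz, on=True):
    """Build the rosservice command equivalent to `mavsys rate <option> <rate>`."""
    stream_id = STREAM_IDS[option]
    return "rosservice call /mavros/set_stream_rate {} {} {}".format(
        stream_id, rate_hz, 1 if on else 0)

# `mavsys rate --all 30` corresponds to:
print(set_stream_rate_cmd("--all", 30))  # rosservice call /mavros/set_stream_rate 0 30 1
```

This matches the equivalence tuuzdu asks about in the comments; the actual achievable rate is still capped by the FCU, as vooon notes.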
{ "domain": "robotics.stackexchange", "id": 20804, "tags": "mavros" }
How does one determine the number of eigenstates of a system with a given spin?
Question: I have had a true/false question in a practice exam stating: For a spin 3/2 system (S=3/2), there are only four spin eigenstates. which is true. (solutions) I do not understand how one can determine how many eigenstates exist for a given spin system. All I know is that an s=1/2 system has two eigenstates. Answer: The states may take any value between $S$ and $-S$ in steps of 1, so there are $2S+1$ of them. I.e. for $S = \tfrac{3}{2}$ the valid states are $\lbrace\tfrac{3}{2},\tfrac{1}{2},-\tfrac{1}{2},-\tfrac{3}{2}\rbrace$, for $S = \tfrac{1}{2}$ we get $\lbrace\tfrac{1}{2},-\tfrac{1}{2}\rbrace$ and for $S = 2$ we get the five states $\lbrace2,1,0,-1,-2\rbrace$. Notice that it then necessarily follows that $S$ may take only integer or half-integer values.
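The counting rule from the answer can be sketched in a few lines of Python (the helper name is mine, not from the exam): the allowed $m$ values run from $S$ down to $-S$ in steps of 1, giving $2S+1$ eigenstates.

```python
from fractions import Fraction

def spin_states(S):
    """m values from S down to -S in integer steps; there are 2S + 1 of them."""
    S = Fraction(S)
    if S < 0 or (2 * S).denominator != 1:
        raise ValueError("S must be a non-negative integer or half-integer")
    return [S - k for k in range(int(2 * S) + 1)]

print([str(m) for m in spin_states(Fraction(3, 2))])  # ['3/2', '1/2', '-1/2', '-3/2']
print(len(spin_states(2)))                            # 5
```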
{ "domain": "physics.stackexchange", "id": 59678, "tags": "quantum-mechanics, quantum-spin" }
Are EM fields made of photons or are they fundamental?
Question: I have read these questions: Virtual photon description of B and E fields How do virtual photon cloud produce discrete magnetic field lines in bar magnet? How virtual photons give rise to electric and/or magnetic field? What are electromagnetic fields made of? Where Alfred Centauri says: The electromagnetic field is one such fundamental entity. It's not made of anything else, it just is what it is. And where DJBunk says: Electromagnetic fields, which include static electric and magnetic fields, are indeed made of photons. So the question is what are they made of: photons, or are they fundamental? Question: Which one is right, are EM fields made of photons or are they fundamental? Answer: Which one is right, are EM fields made of photons or are they fundamental? A classical "electromagnetic field" cannot be defined the way an electric field or a magnetic field is defined classically (or a gravitational one). One needs a test particle to measure the strength of the field, and I cannot define a test particle that will measure the strength of electromagnetic radiation. One can define an electric field and a magnetic field at a point in space-time, and Maxwell's equations connect them as one entity that behaves differently given the velocity of the observer. Two different kinds of test particles are necessary to test electric and magnetic fields, and that is part of the confusion. To start with, since from our present knowledge everything classical emerges from the underlying quantum mechanical level, what is fundamental is the photon particles, represented in quantum field theory by a photon field on which creation and annihilation operators generate the real photons. Both the classical Maxwell equations and the quantum mechanical ones are mathematical models. It has been demonstrated mathematically that the classical fields emerge from the quantum mechanical ones, and there is smooth continuity going from particles (photons) to classical electromagnetic waves.
It can also be shown that in the limiting case of static behavior this continuity exists, and virtual photons can mathematically model static fields. As physicists we accept what the rigorous mathematical models predict and describe, as long as there is no experimental falsification. So at this point in time, what is fundamental is the photon field with its creation and annihilation operators (an operator field). Virtual photons are a price we pay for using this mathematics, as they cannot be measured, but as far as the theory goes, the whole thing hangs together with no experimental falsifications.
{ "domain": "physics.stackexchange", "id": 50331, "tags": "electromagnetic-radiation, photons, quantum-electrodynamics, wave-particle-duality, virtual-particles" }
Why does the Heisenberg uncertainty principle apply to particles?
Question: This might be a slightly naive question, and if so I apologize, but I am currently a little confused as to why the Heisenberg uncertainty principle should apply to particles, i.e. our system (say an electron) after we observe it and collapse its wave function. From what I understand, the Heisenberg uncertainty principle just comes from the fact that momentum is the Fourier transform of position (wave number technically, I think, but all the same since momentum is related to wavelength which is related to wave number). The more localized one is, the less localized the other will be, because 'localized' things require a larger distribution of frequencies to localize them. Nonetheless, it seems as though this should only hold if our object is treated as a wave; if we treat it like a particle, it feels like this should just go away. Even if you represent a particle as a wave by using something like the Dirac delta function or whatnot, you would get essentially an infinite number of corresponding wave numbers, in other words total uncertainty in the momentum, which seems strange if we think of things like particles classically. It just feels like in order for Heisenberg to hold, things always need to be 'wave-like' in some sense. I apologize for the long-winded question, but any help would be appreciated. Edit: Thank you all for your responses. I think my confusion has been cleared up. Answer: I understand your confusion. It is due to an old-fashioned way of introducing the uncertainty relations based on wave formalism, dating back to Heisenberg, but probably quite misleading. Quantum mechanics (QM) does not say that particles are waves. That was de Broglie's original point of view, but today it is untenable. Particle dynamics may be described using waves. But this is not the same as saying particles are waves. There are many reasons for that.
I mention a couple of them: quantum wavefunctions for more than one particle are not functions of a single space point; in measurements, nobody has ever measured a fraction of the charge, spin, or any other property of a particle, as would happen if the physical properties were spread over an extended field. QM is a probabilistic theory from which we can extract consequences about the statistical behavior of many measurements on equally prepared systems. However, in most cases, the outcome of an individual measurement is a random variable. Moreover, QM can be formulated differently, and wavefunctions in a Hilbert space are just one of the possibilities. The real issue is the calculation of probabilities. The actual content of the Heisenberg relations is captured by the Robertson-Schrödinger theorem: $\Delta x \Delta p_x \geq \frac{\hbar}{2}$ is a statement about the variances of the random variables corresponding to independently measured position and momentum in an ensemble of equally prepared particles. As such, it is neither a statement about measuring both momentum and position of a single particle nor an effect of the interaction with a measurement device. Limits for combined measurements on the same system do exist, but that is a different story, and there are strong indications that such limits differ from the usual Robertson-Schrödinger result.
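As a numerical complement (not part of the original answer), the variance statement can be checked for one concrete preparation: a Gaussian wavefunction, which saturates the bound. A minimal sketch with hbar = 1 and a crude finite-difference grid:

```python
import math

# Check Delta_x * Delta_p for a Gaussian state with hbar = 1; the
# minimum-uncertainty value is 1/2.
hbar, sigma = 1.0, 1.0
dx = 0.001
xs = [i * dx for i in range(-8000, 8001)]  # grid from -8 to 8 (tails negligible)
psi = [(2 * math.pi * sigma**2) ** -0.25 * math.exp(-x**2 / (4 * sigma**2))
       for x in xs]

# <x^2> = integral of x^2 |psi|^2 dx  (the mean is zero by symmetry)
var_x = sum(x * x * p * p for x, p in zip(xs, psi)) * dx

# <p^2> = hbar^2 * integral of |dpsi/dx|^2 dx, via central differences
dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * dx) for i in range(1, len(psi) - 1)]
var_p = hbar**2 * sum(d * d for d in dpsi) * dx

product = math.sqrt(var_x) * math.sqrt(var_p)
print(round(product, 4))  # 0.5, i.e. hbar/2
```

Note this only illustrates the ensemble statement: the two variances here are properties of the state, computed separately, not of a joint measurement on one particle.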
{ "domain": "physics.stackexchange", "id": 93560, "tags": "quantum-mechanics, wavefunction, heisenberg-uncertainty-principle, wave-particle-duality, observables" }
What is the difference between Lumped and Distributed systems?
Question: What are the salient differences between lumped and distributed systems? In what contexts are distributed systems the appropriate model, and in what contexts are lumped systems the appropriate model? Also, lumped systems are said to be described by ordinary differential equations, while distributed systems are said to be described by partial differential equations. Can someone explain why? Answer: The elements building a lumped system are thought of as being concentrated at singular points in space. The classical example is an electrical circuit with passive elements like a resistor, an inductor and a capacitor. The physical quantities current and voltage are functions of time (only). E.g. the current at a capacitor with capacitance $C$ is given by $$ i(t) = C\frac{\mathrm d v(t)}{\mathrm d t} $$ where $C$ is a constant (and so are $R$ and $L$). This leads to ordinary differential equations. In contrast, the elements in distributed systems are thought of as being distributed in space, so that physical quantities depend on both time and space. The classical example is the electrical transmission line, where inductance, capacitance and resistance are not lumped constants but are distributed along the length $x$. This leads to partial derivatives of $i(t,x)$ and $v(t,x)$ in $t$ and $x$.
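A minimal sketch of the lumped case (illustrative component values; forward-Euler integration) shows the ODE character: the state is a function of time only, and $i(t) = C\,\mathrm{d}v/\mathrm{d}t$ is an ordinary derivative. In the distributed case $v$ would become $v(t,x)$ and the update would also have to step in $x$.

```python
import math

# Lumped RC discharge: C dv/dt = -v/R  =>  v(t) = v0 * exp(-t/(R*C))
R, C, v0 = 1000.0, 1e-6, 5.0   # ohms, farads, volts (illustrative values)
dt, steps = 1e-6, 1000          # forward-Euler time step and step count

v = v0
for _ in range(steps):
    dv_dt = -v / (R * C)        # the single lumped-element law i = C dv/dt
    v += dv_dt * dt

t = steps * dt
# Euler result vs. the analytic exponential decay (close agreement):
print(round(v, 4), round(v0 * math.exp(-t / (R * C)), 4))
```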
{ "domain": "dsp.stackexchange", "id": 955, "tags": "linear-systems, system-identification" }
Overlay multiple workspace in ROS2 Foxy
Question: Hi guys! I'm less than average in ROS and pretty noob in ROS2 :) I want to play a little bit with some packages in Foxy, and after several reinstalls due to some n00b errors I want to build more workspaces in order to limit the damage. I have these workspaces: ros2_ws turtlebot3_ws dev_ws How can I make a package from 'dev_ws' use a package from 'turtlebot3_ws'? I've tried several ways to make the overlay but no success! Thanks in advance! Originally posted by Ktysai on ROS Answers with karma: 112 on 2021-07-23 Post score: 0 Answer: If you source (e.g. . ~/turtlebot3_ws/install/setup.bash) the workspaces where the dependency is prior to building/running things from your dev_ws, it should have the desired effect. Note that new paths are added to the front of the various path variables, so whatever you've sourced most recently will have priority (e.g. if you have a package that you are building from source in one directory, but have also installed using apt, sourcing the directory will ensure that the source version will be used). Originally posted by shonigmann with karma: 1567 on 2021-07-23 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ruffsl on 2021-07-23: A concrete example of this kind of daisy chaining of workspaces can also be seen used in practice in the Dockerfile for the Nav2 project, where both an underlay and an overlay workspace are built one after the other on top of the installed ros folder: https://github.com/ros-planning/navigation2/blob/5c61644651c4eab882b042e073d0f5964f03a501/Dockerfile#L111 Comment by Ktysai on 2021-07-24: Thank you! @shonigmann, after bashing my head against the keyboard I've seen a similar path. The explanations are a cool bonus! :) @ruffsl I have to look more carefully at that code; it is not that readable for me at the moment.
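A toy model of the path-prepending behavior the answer describes (plain Python, not ROS tooling; the workspace names and package contents below are hypothetical): each sourced workspace is prepended to the search path, and lookup takes the first hit, so the most recently sourced workspace wins.

```python
# Illustrative model only: mimics how sourcing setup.bash prepends a
# workspace to the search path and how the first match wins.
def source_workspace(path_var, workspace):
    """Prepend `workspace` to the path list, like sourcing its setup.bash."""
    return [workspace] + path_var

def find_package(path_var, package, contents):
    """Return the first workspace on the path that provides `package`."""
    for ws in path_var:
        if package in contents.get(ws, set()):
            return ws
    return None

# Hypothetical layout: both the system install and turtlebot3_ws provide nav2.
contents = {
    "/opt/ros/foxy": {"rclcpp", "nav2"},
    "~/turtlebot3_ws/install": {"turtlebot3_gazebo", "nav2"},
    "~/dev_ws/install": {"my_package"},
}

path = []
path = source_workspace(path, "/opt/ros/foxy")            # underlay first
path = source_workspace(path, "~/turtlebot3_ws/install")  # middle workspace
path = source_workspace(path, "~/dev_ws/install")         # overlay last

print(find_package(path, "nav2", contents))  # ~/turtlebot3_ws/install
```

This is why sourcing turtlebot3_ws before building/running dev_ws makes its packages visible: turtlebot3_ws sits ahead of the system install on the path.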
{ "domain": "robotics.stackexchange", "id": 36747, "tags": "ros, ros2, overlay" }
Enumerating an alphabet
Question: Since train rides can be long and boring, I've decided to make use of that time and fiddle around with some good ol' C. What I did was to create a way to enumerate words of a certain length of an arbitrary alphabet. Basically, suppose you have an alphabet { '0', '1' } (ooh, binary!) and want to get all words of length 7, you'd call enumerate("01", 7); and have a nice array of pointers containing the strings "0000000" to "1111111". What for, you ask? Dunno. Stuff. I've written it on my phone, so it may be a tad slimmer than the usual 80 characters. It's also not split into files for the very same reason. #include <math.h> #include <stddef.h> #include <stdio.h> #include <stdlib.h> #include <string.h> char *translate( unsigned long value, const char *alphabet, size_t length ) { size_t base; char *result; if (!alphabet || (base = strlen(alphabet)) < 2) { return NULL; } if (length == 0) { length = 1 + floor(log(value) / log(base)); } result = malloc(length + 1); if (!result) { return NULL; } memset(result, alphabet[0], length); result[length] = 0; for (int i = length - 1; i >= 0 && value > 0; --i) { unsigned long mod = value % base; result[i] = alphabet[mod]; value -= mod; value /= base; } return result; } char **enumerate( const char *alphabet, const size_t length ) { size_t base; if (!alphabet || (base = strlen(alphabet)) < 2) { return NULL; } if (length == 0) { return NULL; } unsigned long end = pow(base, length); char **array = malloc((end + 1) * sizeof(char*)); if (!array) { return NULL; } array[end] = NULL; for (int i = 0; i != end; ++i) { char *storage = malloc(length + 1); char *translated = translate(i, alphabet, length); if (!storage || !translated) { if (storage) { free(storage); } if (translated) { free(translated); } array[i] = NULL; goto error_cleanup; } strcpy(storage, translated); free(translated); array[i] = storage; } return array; error_cleanup: for (int i = 0; array[i] != NULL; ++i) { free(array[i]); } free(array); return NULL; } Example 
usage; simply concatenate the listings. int main(int argc, char *argv[]) { if (argc < 2) { printf("Usage: %s <length> <alphabet>", argv[0]); exit(0); } size_t length = atoi(argv[1]); char *alphabet = argv[2]; char **array = enumerate(alphabet, length); if (!array) { printf("Error while enumerating."); exit(1); } for (int i = 0; array[i] != NULL; ++i) { printf("%s\n", array[i]); free(array[i]); } free(array); exit(0); } What I'm most concerned about is idiomaticality (is that even a word?) of the code, since my C is rusty and self-taught, especially concerning the use of that string array. Also, is my goto used well and with good reason? Answer: If you want to improve usability, you should introduce some kind of type for your array (typedef the pointer, or introduce some type of struct that will contain useful metadata) and provide the appropriate free function for it. This way, the user doesn't have to free all the elements themselves and can just call my_free(&my_type_ptr) when he's done. Your error_cleanup should just call this function as well. This bit of code does unnecessary work: char *storage = malloc(length + 1); char *translated = translate(i, alphabet, length); if (!storage || !translated) { if (storage) { free(storage); } if (translated) { free(translated); } array[i] = NULL; goto error_cleanup; } strcpy(storage, translated); free(translated); array[i] = storage; in the form of allocating an extra block of memory and copying things there from an already allocated block of memory. Simplify to: array[i] = translate(i, alphabet, length); if (array[i] == NULL) { // Error in translating, abort goto error_cleanup; } is my goto used well and with good reason? No. You only have one user of the goto. If you look above at the more concise version of your function, it's completely reasonable just to say "no more goto" and replace it with free_word_array(array); return NULL;. 
Any time you have a need to clean multiple resources on any failure, goto can be a good choice. (argc < 2) needs to be (argc < 3). As it is written, running your program with only one extra argument ./main 6 results in Error while enumerating. instead of printing the help message. Remember, the program name itself is the 0th argument, and it is included in argc as well. I prefer return from main rather than exit. translate could use more comments within the algorithm, for quicker reading/understanding. It looks like you're just enumerating all possible words within a given set, and this recovers the word. Maybe it makes sense not to have an array of arrays, but rather an array of numbers that you then convert to the string representation if the user wants to print it? But then the logical conclusion of that is that you don't really need to allocate memory at all: you can just as easily return the number of permutations, and work out the appropriate requested permutation when it is requested. Your code will only work with single-wide characters, and produce gibberish for any alphabet that contains characters that aren't (think UTF-8, for instance, where the character can be wider than a byte, meaning that you'll shuffle the character incorrectly). This might not be in your requirements, but thought I'd mention it since this always plagues non-ASCII character stuff in C.
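The reviewer's closing idea, returning the count and computing each word only on request, can be sketched like this (Python for brevity; the function names are made up). It mirrors the C translate() function: treat the index as a number in base len(alphabet).

```python
# Sketch of on-demand enumeration: compute the i-th word of length n over
# `alphabet` instead of materializing every word up front.
def nth_word(i, alphabet, n):
    base = len(alphabet)
    out = []
    for _ in range(n):
        i, digit = divmod(i, base)   # peel off the least significant digit
        out.append(alphabet[digit])
    return "".join(reversed(out))

def word_count(alphabet, n):
    return len(alphabet) ** n

print(nth_word(0, "01", 7))    # 0000000
print(nth_word(127, "01", 7))  # 1111111
print(word_count("01", 7))     # 128
```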
{ "domain": "codereview.stackexchange", "id": 25114, "tags": "c, strings, pointers" }
Filling a rectangle using smaller ones / Slicing a rectangle into smaller ones
Question: Preface I am currently trying to translate an algorithm for the procedural generation of dungeons into code. For this matter I have divided the algorithm into phases and have come up with a solution for the first step and in this first post concerning the first phase I would appreciate feedback on the general implementation and specific questions that came up. I hope you don't mind that I have chosen a title that describes two different approaches which arguably yield the same result I want to achieve here. Explanation I want to write a function GenerateGrid(mapWidth, mapHeight, maxRoomWidth, maxRoomHeight) that creates a two-dimensional grid of size mapWidth x mapHeight and fills it with smaller rectangles (rooms) that adhere to the following rules: All rooms have to be rectangular. Hence no L-shapes or other fancy stuff. All rooms have random sizes [1, maxRoomWidth] x [1, maxRoomHeight]. My naive approach revolves around traversing the grid in right-down direction and attempting to fit in a room of random size if possible until everything is claimed. Code private static void GenerateGrid(byte mapWidth = 10, byte mapHeight = 10, byte maxRoomWidth = 3, byte maxRoomHeight = 3) { var grid = new byte[mapHeight, mapWidth]; var random = new Random(); byte roomNumber = 1; for (var y = 0; y < mapHeight; y++) for (var x = 0; x < mapWidth; x++) { // Tile is already claimed. if (grid[y, x] != 0) continue; int roomWidth; int roomHeight; bool isColliding; do { roomWidth = random.Next(0, maxRoomWidth) + 1; roomHeight = random.Next(0, maxRoomHeight) + 1; // Check whether there is enough space to fit the room by accessing // all respective tiles. isColliding = mapWidth - x < roomWidth || mapHeight - y < roomHeight; if (isColliding) continue; // Check whether the (seemingly free) space isn't claimed though. 
for (var yOffset = 0; yOffset < roomHeight; yOffset++) for (var xOffset = 0; xOffset < roomWidth; xOffset++) isColliding |= grid[y + yOffset, x + xOffset] != 0; } while (isColliding); // Assign the room number to tiles and claim them. for (var yOffset = 0; yOffset < roomHeight; yOffset++) for (var xOffset = 0; xOffset < roomWidth; xOffset++) grid[y + yOffset, x + xOffset] = roomNumber; roomNumber++; } for (var y = 0; y < grid.GetLength(0); ++y) { for (var x = 0; x < grid.GetLength(1); ++x) Console.Write($"{grid[y, x],4}"); Console.WriteLine(); } } Sample A sample call to GenerateGrid(20, 20, 4, 4) will yield the following output: Questions In general this part works very well and has some aesthetic issues I will be covering in the next question, but what is your opinion about the current code in general? Is there anything that can be simplified or combined? In my sample I was quite "lucky" since the resulting map wasn't as degenerate as usual. With the way the algorithm is designed, bigger rooms are pretty rare, since it happens fairly often that a bigger room cannot be placed starting at (x,y) as there is often another room starting at (x+1, y-?) that is dangling and obstructing the way down. In my particular example room 36 could not expand to the right side since room 27 was already there. The only way that I was able to "enforce" more bigger rooms is by adjusting the roomWidth and roomHeight variables with a deviation using roomWidth = random.Next(0, maxRoomWidth + 1) + 1; as well as roomHeight = random.Next(0, maxRoomHeight - 1) + 1;. Is there any better approach to allow slightly more big rooms? Ultimately the last question follows from the same issue that was described in question 2. Most rooms end up being narrow and aligned to go from NE to SW, while rooms that are aligned from NW to SE are pretty rare. I assume this will adjust itself once the problem in question 2 is fixed, but if not, how can I balance the number of rooms that are horizontal or vertical?
Remarks The algorithm will have other steps which may imply that the code from this part could be optimized for later stages. In the next stages the algorithm will proceed roughly like this: The algorithm will create a graph from the grid with nodes being the rooms and edges between the nodes if they are neighbours. After applying random weightings to the edges a shortest path between two specified rooms (called start and end) will be computed. All rooms along the path will be added to the final map as THE solution path that will always exist. Eventually, using some branching, random rooms will be added to the map to make it non-linear, with some dead ends and making the map feel more "natural". No alternative paths will be allowed here so the basic idea is to add random edges (and corresponding rooms) to the final map which connect rooms that are already part of the final map and ones that are not (to prevent alternative routes and cycles). This will happen with an increasing chance to stop prematurely. Answer: private static void GenerateGrid(byte mapWidth = 10, byte mapHeight = 10, byte maxRoomWidth = 3, byte maxRoomHeight = 3) It's a private function, do you ever call it without parameters? With just 3 parameters? I'd guess not, remove those default values (and if you really need a function to quickly generate a map using all the default values just add an overloaded version). Also, you do not validate function arguments. It may be OK for a private function but at least add Debug.Assert() as appropriate (for example Debug.Assert(maxRoomWidth <= mapWidth) or whichever other rules you have). var grid = new byte[mapHeight, mapWidth]; Here you're using a byte just to store the room number. You, however, repeat the same value in each cell of the grid and it's definitely a waste. Why don't you introduce Grid and Room classes? Room will have its ID property and the Grid is, to begin with, just a list of rooms.
Code will be slightly slower when searching but much easier to read: foreach (var location in grid.GetFreeLocations()) { } Or, simpler to implement: while (!grid.IsFull) AllocateRoomAtLocation(grid.GetNextFreeGridLocation()); I guess you understand the point. I know this is an annoying change because you have to rewrite the existing algorithm, but you're using this data structure only because it's somehow handy for this specific task (not that much, IMO), and it'll be a pain when you need to use it in your actual game code. Note that if you need to simply keep track of used cells (still producing Room objects, of course!) you can use a simpler data structure like a bit map (not an image but a grid of bits!) What if in the future you want to generate rooms with a different algorithm? You may consider implementing a Strategy pattern to inject the allocation algorithm. This will also make testing much easier (without relying on randomly generated data) because you can prepare well-defined layouts to complete (something you can't do if your only interface is GenerateMap().) It doesn't make much sense to comment the other code given that I'd radically change it because of the above, but a few notes. for (var y = 0; y < mapHeight; y++) for (var x = 0; x < mapWidth; x++) Nesting is ugly, I agree, but do not try to hide it in this way. If you feel that a code snippet smells or it's hard to read then it's time to extract a method for it! For example: for (var y=0; y < mapHeight; ++y) { for (var x=0; x < mapWidth; ++x) { AllocateRoomAtLocation(x, y); } } Or, if you love LINQ: IEnumerable<(byte X, byte Y)> GetAllLocations(byte width, byte height) { return Enumerable.Range(0, width) .SelectMany(x => Enumerable.Range(0, height).Select(y => (x, y))); } Just for fun the misused JOIN version: Enumerable.Range(0, width).Join(Enumerable.Range(0, height), _ => true, _ => true, (x, y) => new (x, y));.
And then: foreach (var location in GetAllLocations(mapWidth, mapHeight)) AllocateRoomAtLocation(location); Or (assuming you have a ForEach() extension method on IEnumerable<T>): GetAllLocations(mapWidth, mapHeight).ForEach(AllocateRoomAtLocation); The same should be applied all around to other parts of your code. Pretty often loops are a good indicator that it's time to introduce a separate method. Your while loop will greatly benefit from a refactoring: move it to a separate method and it'll be easier to read. Note that you use isColliding only to exit the loop, so imagine this: while (true) { // ... bool isColliding = mapWidth - x < roomWidth || mapHeight - y < roomHeight; if (isColliding) break; // ... } Of course we should have a separate IsColliding() method but we already introduced a Grid class... Why do you first use mapHeight and mapWidth in your loop and at the end use grid.GetLength(0)? The difference is eye-catching and a future reader will stop and think... to finally deduce that the matrix size is still what was specified in those parameters and mapHeight could have been used. Not directly related to your code, but you may want to start allocating big rooms first (yes, you need parameters to define, for example, the expected average room size and deviation from that). If you start allocating big rooms (in random locations) then you have a better chance of a less dispersed map (and small rooms may be used to simply fill the gaps).
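For reference while refactoring, here is a compact port of the question's fill loop (Python for brevity; a seeded RNG stands in for C#'s Random). It makes the loop-exit condition the reviewer discusses explicit, and shows the invariant the later graph stage relies on: every tile ends up claimed by exactly one rectangular room.

```python
import random

# Sketch of the naive fill: scan row-major, and at each unclaimed cell retry
# random sizes until a room fits without overlap or sticking out of the map.
def generate_grid(width, height, max_w, max_h, rng):
    grid = [[0] * width for _ in range(height)]
    room = 1
    for y in range(height):
        for x in range(width):
            if grid[y][x]:
                continue                      # tile already claimed
            while True:
                rw, rh = rng.randint(1, max_w), rng.randint(1, max_h)
                if rw > width - x or rh > height - y:
                    continue                  # would stick out of the map
                if any(grid[y + dy][x + dx]
                       for dy in range(rh) for dx in range(rw)):
                    continue                  # would overlap an existing room
                break                         # a 1x1 room always fits, so this terminates
            for dy in range(rh):
                for dx in range(rw):
                    grid[y + dy][x + dx] = room
            room += 1
    return grid

grid = generate_grid(20, 20, 4, 4, random.Random(42))
print(all(cell != 0 for row in grid for cell in row))  # True: no unclaimed tiles
```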
{ "domain": "codereview.stackexchange", "id": 27534, "tags": "c#, performance, algorithm, game" }
Streaming hacker news posts
Question: Hacker News recently released an official API. Unfortunately, it cannot be used to get the newest posts submitted. I wrote a script to provide a faux streaming-API-like method (like praw's submission_stream for reddit) so that I can simply put it in a loop and do something like make a gtk notification when a title contains the string 'python'.

    """
    Usage:

    from hn import HNStream
    h = HNStream()
    for item in h.stream():
        print "Title: ", item.title
        print "By user: ", item.by
    """
    from lxml import html as lh
    import re
    import requests
    import threading
    import time
    import ujson as json

    __all__ = ['HNStream']


    class Item:
        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)

        def __str__(self):
            return "{}: {}".format(self.type, self.title)

        __repr__ = __str__


    class HackerNews:
        def __init__(self):
            self.base_url = 'https://hacker-news.firebaseio.com/v0/{api}'

        def get(self, uri):
            response = requests.get(uri)
            if response.status_code == requests.codes.ok:
                return json.loads(response.text)
            else:
                raise Exception('HTTP Error {}'.format(response.status_code))

        def item(self, item_id):
            uri = self.base_url.format(**{'api': 'item/{}.json'.format(item_id)})
            return Item(**self.get(uri))


    class SimpleFIFO:
        def __init__(self, length):
            self.length = length
            self.values = length * [None]

        def contains(self, iid):
            return iid in self.values

        def append(self, value):
            assert isinstance(value, str), "Only append strings"
            self.values.append(value)
            while len(self.values) > self.length:
                self.values.pop(0)
            assert len(self.values) == self.length

        def __str__(self):
            return str(self.values)


    class HNStream(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)

        def stream(self):
            itembuffer = SimpleFIFO(length=30)
            while True:
                r = requests.get('https://news.ycombinator.com/newest')
                tree = lh.fromstring(r.text)
                # http://stackoverflow.com/a/2756994
                links = tree.xpath("//a[re:match(@href, 'item\?id=\d+')]/@href",
                                   namespaces={"re": "http://exslt.org/regular-expressions"})
                links = set(links)
                for link in links:
                    iid = re.match(r'item\?id=(\d+)', str(link)).groups()[0]
                    if not itembuffer.contains(iid):
                        itembuffer.append(iid)
                        yield HackerNews().item(iid)
                time.sleep(30)

Answer: I have a number of suggestions:

Don't use self.__dict__

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

It is pretty unclear which attributes the class will have after calling the constructor, as it is basically up to the caller to decide this. This is not optimal, as you want to be in control of what data is set in an item. I'd suggest using named arguments instead:

    def __init__(self, id, type, title):
        self.type = type
        self.title = title
        self.id = id
        # Add other attributes that I don't know of?

While this may seem like blowing up the code, it makes it a lot easier to understand for an uninformed reader, and therefore should be preferred IMHO.

Append instead of format

Since you are basically just appending something to self.base_url, you might as well append using + instead of using format. I find it a lot easier to read, but that depends a lot on what you prefer. Consequently:

    self.base_url = 'https://hacker-news.firebaseio.com/v0/'

Then

    uri = self.base_url.format(**{'api': 'item/{}.json'.format(item_id)})

would look like this if using append:

    uri = self.base_url + "item/{}.json".format(item_id)

HackerNews.get()

The get() method's name does not explain what the method does. I'd suggest something offering a bit more documentation, for example: download_item() or download_entry(). On a little side note: you can remove the else: statement, as the raise Exception(..) command will only be executed if the if statement evaluates to False. This depends a bit on personal taste though. I'd also change the ordering like this:

    if response.status_code != requests.codes.ok:
        raise Exception('HTTP Error {}'.format(response.status_code))
    return json.loads(response.text)

I'd prefer this ordering as it has the return statement at the end, which is where I'd look first when searching for it.
SimpleFIFO

Maybe you can use Python's Queue class? See here for documentation: https://docs.python.org/2/library/queue.html It also provides a max size; the only thing missing is the check for adding only strings.

HNStream.__init__()

You can use the super keyword (see also: https://stackoverflow.com/questions/576169/understanding-python-super-and-init-methods) instead of explicitly calling the constructor like this:

    def __init__(self):
        super(HNStream, self).__init__()

Or in Python 3:

    def __init__(self):
        super().__init__()

HNStream.stream()

Just a few naming suggestions:

    r -> result
    iid -> item_id

That was it, hope it helps!
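Beyond the Queue class mentioned in the answer, a close stdlib fit for SimpleFIFO's drop-oldest behaviour is collections.deque with maxlen. A minimal sketch (not part of the original answer; the helper name is mine):

```python
from collections import deque

# Drop-oldest buffer: a full deque with maxlen set evicts its
# oldest entry automatically on append.
seen = deque(maxlen=30)

def is_new(item_id):
    """Record item_id; return True only the first time it is seen."""
    if item_id in seen:      # O(n) scan, fine for n == 30
        return False
    seen.append(item_id)     # oldest entry is evicted when full
    return True
```

The membership test replaces SimpleFIFO.contains(), and the string-only assertion from the original class is dropped here for brevity.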
{ "domain": "codereview.stackexchange", "id": 10737, "tags": "python, multithreading" }
Is a compiler for dependent types much harder than an interpreter?
Question: I have been learning something about implementing dependent types, such as this tutorial, but most of the material implements interpreters. My question is: it seems that implementing a compiler for dependent types is much harder than an interpreter, because you really need to evaluate the dependent-type arguments for type checking. So is my naive impression right? If it is, are there any examples / resources about implementing a statically checked language supporting dependent types? Answer: This is an interesting question! As Anthony's answer suggests, one can use the usual approaches to compiling a non-dependent functional language, provided you already have an interpreter to evaluate terms for type-checking. This is the approach taken by Edwin Brady. Now this is conceptually simpler, but it does lose the speed advantages of compilation when performing type checking. This has been addressed in several manners. First, one can implement a virtual machine which compiles terms to byte-code on the fly to perform the conversion check. This is the idea behind vm_compute implemented in Coq by Benjamin Gregoire. Apparently there is also this thesis by Dirk Kleeblatt on this exact subject, but compiling down to actual machine code rather than a virtual machine. Second, one may generate code in a more conventional language which, upon execution, checks all the conversions necessary to type-check a dependently typed program. This means we can use Haskell, say, to type-check an Agda module. The code can be compiled and run, and if it accepts, then the code in the dependently typed language can be assumed to be well-typed (barring implementation and compiler errors). I first heard this approach suggested by Mathieu Boesflug. Finally, one may require that the terms appearing in types and the terms intended to be run be part of two distinct languages. 
If the terms appearing at the type level do not themselves have dependent types, then one may compile in two stages: first, compile the "type-level" code, and then execute it when checking the types of the "term-level" code. I'm not aware of any system that proceeds in this manner, but it is potentially possible for many systems, like Microsoft's F$^*$ language, which has distinct type-level and program-level terms.
{ "domain": "cstheory.stackexchange", "id": 2229, "tags": "type-systems, compilers, dependent-type" }
How can I accurately measure the actuation force of small buttons (<1cm diameter) using home equipment?
Question: I'm trying to measure the actuation force of buttons on the Playstation Vita, in preparation of building an external device that presses these buttons automatically. Specifically, I'm trying to measure the force required to push the buttons on the D-Pad on the left, and the action buttons on the right. Googling "how to measure actuation force" will give you pages of results of mechanical keyboard enthusiast forums, and they often mention measuring the actuation force by stacking pennies / whatever coins you have until the key is depressed and the computer recognises the keypress. I can't do that normally here, since the buttons are really small (<1cm for the action buttons) and even the smallest coin my currency has cannot be stacked onto it without toppling over. In an attempt to mitigate the problem, I made the following setup. It still uses coins as weights to push down the button, but I rested the coins on the ruler that in turn rests on another object of somewhat equal height/thickness to the game console, in this photo, the remote control seen in the background. I conducted the experiment by continuously adding coins with a game running on the console until it registers a keypress, and then summing up the weights of the coins. However, the results are rather inconsistent and questionable. I repeated my experiments several times (by knocking down the coin tower and stacking it up again) as standard practice, and I got values ranging from 100g to 180g, which feels extremely high for buttons on a handheld game console, and that large spread seems to be telling me that my experimental technique is just plain wrong. 
Some possible causes of inconsistencies from the setup:

- The coins are not the same - I'm stacking 5 different coins at once
- The coins are not stacked in the same order each time I redo the experiment
- The center of the coin tower is not aligned with the center of the button on the game console - I don't know how to do this without the coin tower making contact with the neighboring buttons or it toppling over

Some steps I have taken to possibly improve accuracy:

- The ruler used is a stiff metal ruler and does not bend significantly
- The ruler is kept horizontal
- The only points of contact of the ruler are the remote control seen in the background and a single button on the D-Pad.

TL;DR: How can I improve the accuracy of my setup, or what are better ways to measure the actuation force of such small buttons? Update & Final Results: I used the weighing scale method, and the results are 80g for the D-Pad and about 120g for the action buttons on the right. Thanks everyone! Answer: If you want a quick and somewhat inaccurate hill-billy engineering answer, I would take a digital kitchen scale and place the PSV on top of it, tare it, then load up a program to test the buttons (game, menu, etc.). With the scale device zeroed out, slowly depress the button until it is read by the program. A full actuation can and will differ from the triggering of the circuit. Wait for the program to read the triggering rather than the button's actuation, and add a few extra grams to it. If you want a precise measurement... I have no idea what you have available for building as far as "home equipment" is concerned.
{ "domain": "engineering.stackexchange", "id": 1644, "tags": "mechanical-engineering, measurements, torque, experimental-physics" }
How to determine stable electron states in ionic and covalent bonds?
Question: I'm working on a program that needs to determine if a bond between two or more elements will result in a stable state. I understand at a high level how to fill electron subshells using the Aufbau principle, but I also read that in some cases, electrons will jump from a lower energy shell or orbital to a higher one in order to maintain a stable state (Filling Electron Shells). For example, if I want to determine if Hydrogen and Nitrogen will form a stable bond I would fill the shells for Nitrogen like so: When adding Hydrogen, would the two electrons in the 2s shell jump to the 2p shell because with the one electron from Hydrogen the HN would then have a full 2p shell? Like this: Or is this not a stable state? Would it require a large amount of energy to excite or promote those two 2s electrons to the 2p shell? I'm trying to understand if there are rules or heuristics I can use to estimate if two or more elements will bond (on their own, without adding a large amount of energy to the system) using their valence electron configuration like this, or if there are too many exceptions, making it not a simple programming task to estimate this. Answer: "I'm trying to understand if there are rules or heuristics I can use to estimate if two or more elements will bond" You should try to find a good textbook/course about Molecular Orbital theory. As I recall, good university-level textbooks on general/inorganic chemistry dive into this aspect and analyse dimers of second-row elements. These heuristics, however, are of limited usability: they can be used to explain differences within a family of compounds with similar structure, but are virtually useless for predicting geometry in complicated cases. And yet, this is the best you can get without diving into quantum chemistry. Here is a quickly found link on the subject. http://chemed.chem.purdue.edu/genchem/topicreview/bp/ch8/mo.html#valence
{ "domain": "chemistry.stackexchange", "id": 138, "tags": "electrons, bond, covalent-compounds, ionic-compounds" }
Thermal wave function of the harmonic oscillator - proving that it's a gaussian?
Question: I'm a bit stumped trying to prove this. I've computed the probability density for a thermal density matrix for the quantum harmonic oscillator, namely $$ \rho(x) = \frac{\sum_n^\infty e^{-\frac{\hbar\omega}{2kT}(2n+1)}\frac{1}{2^nn!}\left(\frac{m\omega}{\pi\hbar}\right)^{1/2}e^{-\frac{m\omega}{\hbar}x^2}H_n^2\left(\sqrt{\frac{m\omega}{\hbar}}x\right)}{\sum_n^\infty e^{-\frac{\hbar\omega}{2kT}(2n+1)}} $$ Now, I can compute the expectation value of $\langle x^2 \rangle$ for this distribution making use of the properties of Hermite polynomials. It turns out to be $\langle x^2 \rangle = \frac{\hbar}{m\omega}\frac{1+\xi^2}{1-\xi^2} $ with $\xi = e^{-\frac{\hbar\omega}{2kT}}$. I have the strong impression the overall function is really just a Gaussian with the corresponding variance. I tried calculating it numerically for a number of temperatures, and it always fits to a very high precision. However, I can't prove it theoretically. I've tried multiple lines of attack - trying to prove that all moments are equivalent to the normal distribution's by expanding the $x^l$ term in Hermite polynomials and making use of the triple product symbol, trying to express the polynomials as a Taylor series, Fourier transforms... the problem remains too hard to bring back to a simple analytical form. Basically the core of it is proving that: $$ e^{-z^2}\sum_n^{\infty}\xi^{2n+1}\frac{1}{2^nn!}H_n^2(z) $$ is still Gaussian in $z$, albeit with different width. Any ideas? Thanks! Answer: According to Feynman's Statistical Mechanics, equation 2.84, it is indeed a Gaussian: $$ \rho(x)=\sqrt{\frac{m\omega}{2\pi\hbar\sinh{2f}}}\exp(-\frac{m\omega}{\hbar}x^2\tanh{f}) $$ where $$ f=\frac{\hbar\omega}{2kT} $$ However, Feynman derives this by solving a differential equation, not by doing the sum you're trying to do. 
It looks like this page http://functions.wolfram.com/Polynomials/HermiteH/23/02/ of formulas for infinite summations of Hermite polynomials has what you are looking for as the 10th formula if you set $z=z_1$.
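The claimed Gaussian form can also be checked numerically by truncating the sum directly. The sketch below is an illustration added here, not part of the original answer: it evaluates $e^{-z^2}\sum_n \xi^{2n} H_n^2(z)/(2^n n!)$ (the common overall factor $\xi$ is dropped) via the Hermite recurrence $H_{n+1}(z) = 2zH_n(z) - 2nH_{n-1}(z)$, and compares it with the closed form $\exp(-z^2\tanh f)/\sqrt{1-\xi^4}$ implied by Feynman's result with $\xi = e^{-f}$.

```python
import math

def hermite_sum(z, xi, nmax=60):
    """e^{-z^2} * sum_{n=0}^{nmax} xi^{2n} H_n(z)^2 / (2^n n!)."""
    h_prev, h = 1.0, 2.0 * z        # H_0(z), H_1(z)
    term = 1.0                      # xi^{2n} / (2^n n!) at n = 0
    total = term * h_prev**2
    for n in range(1, nmax + 1):
        term *= xi * xi / (2.0 * n)
        total += term * h * h
        h_prev, h = h, 2.0 * z * h - 2.0 * n * h_prev   # H_{n+1}
    return math.exp(-z * z) * total

def gaussian_form(z, xi):
    """Closed form for the same (unnormalized) sum, with xi = e^{-f}."""
    f = -math.log(xi)
    return math.exp(-z * z * math.tanh(f)) / math.sqrt(1.0 - xi**4)
```

For $\xi < 1$ the series converges geometrically, so the truncation at nmax = 60 is already far below floating-point noise for, say, $\xi = 0.5$.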
{ "domain": "physics.stackexchange", "id": 50198, "tags": "quantum-mechanics, harmonic-oscillator" }
Prove that if $\hat H | a_n\rangle=a_n|a_n\rangle$, then $f(\hat H)| a_n\rangle=f(a_n)|a_n\rangle$
Question: In Quantum Mechanics you have the eigenvalue equation: $$\hat H | a_n\rangle=a_n|a_n\rangle \tag{1}$$ where $\hat H$ is the Hamiltonian operator, $\{|a_n\rangle\}$ is a complete set of eigenstates in Hilbert space and $\{a_n\}$ is the set of the eigenvalues (suppose there is no degeneracy). So, how would you show that if $f:\mathbb{R} \to \mathbb{R}$ (with certain properties to specify later), then it follows that $$f(\hat H)| a_n\rangle=f(a_n)|a_n\rangle \tag{2}$$ Some books have this as part of the definition of the function of an operator: $f(\hat H)$, but can you derive (2) from (1) using whatever you need (spectral theory, calculus or whatever)? Answer: Functions of an operator are (or, can be) defined by their power series: $$f(\hat{A}) = f_0 + f_1 \hat{A} + f_2 \hat{A}^2 + \cdots$$ It's easy to prove that $$\hat{H}\lvert a_n\rangle = a_n\lvert a_n\rangle \quad\implies\quad \hat{H}^k\lvert a_n\rangle = a_n^k\lvert a_n\rangle$$ and if you plug that into the power series definition of the function, it will show that $f(\hat{H})\lvert a_n\rangle = f(a_n)\lvert a_n\rangle$.
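The power-series argument can be illustrated numerically for $f = \exp$. This sketch is my own addition, not from the answer: it approximates $\exp(H)$ by a truncated power series on a random Hermitian matrix and checks $f(H)\lvert a_n\rangle = f(a_n)\lvert a_n\rangle$ eigenvector by eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2                 # a random Hermitian "Hamiltonian"

def exp_series(M, terms=40):
    """exp(M) = sum_k M^k / k!, truncated after `terms` terms."""
    out = np.eye(len(M))
    P = np.eye(len(M))
    for k in range(1, terms):
        P = P @ M / k             # M^k / k!
        out = out + P
    return out

fH = exp_series(H)
evals, evecs = np.linalg.eigh(H)  # a_n and |a_n> of the Hermitian H
for n in range(len(evals)):
    v = evecs[:, n]
    assert np.allclose(fH @ v, np.exp(evals[n]) * v)
```

Forty terms are far more than enough here, since the factorial in the series dominates any fixed matrix norm.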
{ "domain": "physics.stackexchange", "id": 24635, "tags": "quantum-mechanics, operators" }
Is there some quantum potential producing exponential eigenvalues?
Question: Usual central potentials produce quantum spectra with energy levels going as $n$, $n^2$, $n^3$ and so on, being $n$ the quantum number of the orbit. In the other extreme we have "dirac-delta" potentials which have only a single discrete eigenvalue. I was wondering, what kind of potential do we need for producing an exponential $e^n$ set of discrete eigenvalues? Answer: For 1D potentials, the sequence of bound state energy eigenvalues $E_n$ cannot grow faster than what happens in the case of an infinite well, i.e. $E_n$ cannot grow faster than $n^2$.
{ "domain": "physics.stackexchange", "id": 14743, "tags": "quantum-mechanics, mathematical-physics, schroedinger-equation, potential, eigenvalue" }
What to do with bloom when dependency unavailable on Debian
Question: From ROS Kinetic it looks like Debian (Jessie for Kinetic) will be an officially supported OS. I had an issue when I made a release of a package into Kinetic where one of the dependencies was not yet available on Jessie at that time, so the build on the ROS buildfarm failed only for Jessie. Looks like @tfoote fixed it by re-running bloom, but what magic with bloom could we use for this kind of problem? Originally posted by 130s on ROS Answers with karma: 10937 on 2016-05-18 Post score: 0 Answer: If the dependencies are not available you'll have to do like you did and tell bloom to skip that action. If you skip an action for a target like debian it needs to be added to the blacklist on the buildfarm so that the buildfarm will not try to repeatedly build. We are refactoring the buildfarm config to allow a higher level of granularity on the blacklisting: https://github.com/ros-infrastructure/ros_buildfarm_config/pull/41 Without that landed we could not easily blacklist a package for debian only. Once that's merged we will be able to do so. Originally posted by tfoote with karma: 58457 on 2016-05-18 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24688, "tags": "ros, ros-kinetic, bloom-release, jessie" }
Is viscosity proportional to the number of hydroxyl groups?
Question: Is the following statement always true? The more $\ce{-OH}$ functional groups in the molecule, the more is its viscosity? I think it is true, because it is known that weak intermolecular forces lead to lower viscosities and strong intermolecular forces lead to higher viscosities, and because more $\ce{OH}$ functional groups in a molecule give rise to stronger intermolecular forces, as is the case in glycerol ($\ce{C3H8O3}$) versus water ($\ce{H2O}$). Therefore the statement must be true. What do you think? Answer: Is the following statement always true? The more −OH functional groups in the molecule, the more is its viscosity? It's hard to find anything that is "always" true in chemistry, but I'd bet that within a series of molecules where the only variable is the number of $\ce{OH}$ groups, your statement is generally true, so I basically agree with your position. Your analysis and examples are also good. As you pointed out, it is about intermolecular forces. Specifically, molecules with hydroxyl groups can form intermolecular hydrogen bonds (see first picture below). These hydrogen bonds cause the molecules to "stick" together and act as if they had a higher molecular weight. Sugars have many hydroxyl groups that give rise to many intermolecular hydrogen bonds and cause sugars to flow in a slow, syrupy manner. 
{ "domain": "chemistry.stackexchange", "id": 1572, "tags": "organic-chemistry, intermolecular-forces" }
Special case of the $MST-$ Problem
Question: I am working on the following exercise: Consider an undirected complete graph $G(V,E)$ and positive real numbers $a_1,a_2,\ldots,a_n$. The task is to find an MST with respect to the edge weights $w_e = a_ia_j$ for each $e = \{i,j\}$. Provide an algorithm that solves this problem that is as efficient as possible. It is safe to say that just using Kruskal's or Prim's algorithm will not be sufficient here. So we should use the special structure of $w$. I came up with the following idea: Sort the edges in increasing order according to their edge weights. This can be done in $\mathcal{O}(\lvert E \rvert \log(\lvert E \rvert))$ time. Traverse the edges, starting from the smallest, and add an edge to $T$ if it contains an unvisited vertex. Mark the newly visited vertices. This can be done in $\mathcal{O}(\lvert E \rvert)$ time. But this algorithm is not as efficient as possible, for it has $\mathcal{O}(\lvert E \rvert \log(\lvert E \rvert))$ time complexity. The problem here is the procedure in step 1. Could we somehow leave it out? Answer: Since the graph is complete, you can connect each vertex with any other vertex. Note that each vertex must be connected to the tree and hence has an incident edge in the tree. Assuming all weights are positive, choose a vertex $v$ with the minimum weight and construct the tree from all edges incident to this vertex. The tree is hence a star graph with $v$ the center of the star. Assuming the numbers can be negative (which is not specified in your question), then distinguish two cases. In the first case, all weights are negative. Hence, it turns into the case above, since all products have to be positive. The second case is where we have at least one negative weight and one positive weight. Let $u$ be a vertex with the maximum positive weight. Let $v$ be a vertex of negative weight with maximum absolute value among all negative weights. Connect $u$ to all vertices with negative weights and $v$ to all vertices with positive weights. 
Be aware of not adding the edge between $u$ and $v$ twice. Try to prove the correctness as an exercise. It should be clear using elementary math and an exchange argument.
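For the positive-weight case, the star construction can be cross-checked against a generic $\mathcal{O}(n^2)$ Prim's algorithm on the complete graph. This is a sketch added for illustration (function names are mine), not part of the original answer:

```python
def product_mst_weight(a):
    """Star MST for positive a_i: connect the smallest a_i to all others."""
    m = min(a)
    return m * (sum(a) - m)

def prim_weight(a):
    """Generic O(n^2) Prim on the complete graph with w(i,j) = a[i]*a[j]."""
    n = len(a)
    in_tree = [False] * n
    best = [float("inf")] * n     # cheapest edge from each vertex into the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], a[u] * a[v])
    return total
```

Both return the same total weight on positive inputs, which is consistent with the star-graph argument above.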
{ "domain": "cs.stackexchange", "id": 14996, "tags": "graphs, time-complexity, minimum-spanning-tree" }
How to create a package for new gazebo plugins?
Question: Hi, I have created plugins for IR Sensor and Sonar Sensor. I want to create a package for both so that they can be tested on another computer and then submitted to ROS. Further, I would like to mention here that I have created the Sonar Sensor (IRSensor and its controller are already present in gazebo) by the name of SonarSensor, and its controller named "sonar_array" as well. I have also made some changes to files called MultiRayShape and gz.h (I have added sonarIface in gz.h). Similarly, xacro files are also created for IRSensor and Sonar Sensor. I would like to know how to create packages for the sonar sensor, IRSensor plugin and SonarSensor plugin. Any help in this regard will be appreciated. Regards, Saeed Anwar Originally posted by SAK on ROS Answers with karma: 94 on 2011-09-04 Post score: 1 Answer: Do you mean you want to create a stack to regroup your different packages? You can use roscreate-stack and roscreate-pkg to create a stack or a package (see this wiki page) Originally posted by Ugo with karma: 1620 on 2011-09-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by SAK on 2011-09-05: these are only source files like the one present in /opt/ros/diamondback/stacks/simulator_gazebo/gazebo_plugins (gazebo_ros_ir.cpp) and the xacro files present /opt/ros/diamondback/stacks/pr2_common/pr2_description/urdf/sensors (xx.gazebo.xacro and xx.urdf.xacro) Comment by Ugo on 2011-09-04: Are those binaries or sources? Comment by SAK on 2011-09-04: I have two kinds of files: some files are newly created (GazeboRosIr, GazeboRosSonar, Sonarsensor, sonar_array) and some are modified files (MultiRayShape, gz). Now how can I deal with all these? Comment by Ugo on 2011-09-04: If you depend on roscpp for example, you'll have a src and an include directory where you can put your sources. If your work contains files that should be copied to the gazebo package, maybe you should propose a patch for this package? 
Comment by SAK on 2011-09-04: Thanks for your reply. For the time being I am only creating packages, not regrouping. I have seen this tutorial, but it doesn't specify how to put files in the package. Further, how will I specify the path for the files to be put into gazebo during package creation?
{ "domain": "robotics.stackexchange", "id": 6604, "tags": "ros, gazebo, gazebo-plugins, packages, package" }
Finding simpler equivalent regular expressions
Question: I'm doing an exercise from my book that says: Let $r$ and $s$ be arbitrary regular expressions over the alphabet $\Sigma$. Find a simpler equivalent regular expression: a. $r(r^*r + r^*) + r^*$ b. $(r + \Lambda)^*$ c. $(r + s)^*rs(r + s)^* + s^*r^*$ The book doesn't cover how to simplify regular expressions, so I searched online and I presumed you would use the algebraic laws for regular expressions. I was able to use these laws to come up with something for part a. only: a. $r(r^*r + r^*) + r^*$ $r(r^+ + r^*)+r^*$ $r(r^+ + r^+ + \Lambda) + r^*$ $r(r^++\Lambda)+r^*$ $rr^+ + r\Lambda + r^*$ $rr^+ + r + r^*$ I don't know how to approach b. or c., because b. has $(r + \Lambda)^*$ and c. has $(r+s)^*$, and I couldn't find how to deal with these. Any hints? Answer: A better approach would be to understand what words do these regular expressions represent. For example, what words are in $(r+\Lambda)^*$? They look something like $r_1r_2 \Lambda r_3 \Lambda = r_1r_2r_3$, where $r_1,r_2,r_3$ are generated by $r$. It seems that $\Lambda$ isn't making much of a difference. This should help you simplify $(r+\Lambda)^*$. The same approach works for the other expressions (including the first one, which you haven't simplified completely).
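For part b with a concrete choice of $r$, the expected equivalence $(r + \Lambda)^* = r^*$ can be sanity-checked by brute force over all short strings. This sketch is my addition, not from the answer; it uses Python's re, where an empty alternation branch plays the role of $\Lambda$ and $r = ab$ is an arbitrary example:

```python
import re
from itertools import product

lhs = re.compile(r"(?:ab|)*")   # (r + Lambda)*  with r = ab
rhs = re.compile(r"(?:ab)*")    # r*

# Compare the two languages on every string over {a, b} up to length 6.
for n in range(7):
    for chars in product("ab", repeat=n):
        w = "".join(chars)
        assert bool(lhs.fullmatch(w)) == bool(rhs.fullmatch(w))
```

This is of course no proof, only a check for one instance of $r$; the word-based argument in the answer is what generalizes.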
{ "domain": "cs.stackexchange", "id": 1096, "tags": "regular-expressions" }
Not sure about why I can't consider the free pulley, hanging masses as a system
Question: Okay, so for the record, I've solved this system considering each object as an individual system, but what I can't comprehend is why I can't consider the hanging pulley and the 2 hanging masses as a system with a combined mass of (m1+m2) and total force acting as [(m1+m2)g-T] and then solve for the tension by equating accelerations. This always yields an incorrect answer and I have serious trouble justifying that to myself. Some help would really do me good and help me gain a better perspective. The picture encloses what I want to consider as a system just so it's clear. Thanks. (Friction is absent, strings and pulleys are ideal) Answer: You can consider the hanging pulley and the masses hanging from it as a system of mass $m_1+m_2$. However, analysing the forces on this system does not tell you about the motion of individual parts of the system. It only tells you about the acceleration of its centre of mass. Often this is not useful. If the system is rigid, then the acceleration of the centre of mass is the same as the acceleration of each part of the system. (This is also the case if $m_1= m_2$, even if the two masses are in relative motion.) If the system is not rigid, these accelerations are not the same. To find out how the separate parts of the system move, you have to consider internal forces. Ultimately this is the same as analysing the forces on each part of the system separately.
{ "domain": "physics.stackexchange", "id": 42243, "tags": "newtonian-mechanics" }
Finding the remnants of recent supernova explosions in the solar system's neighbourhood
Question: I just found an article, Long-Ago Supernovae Littered Earth, which reviews evidence presented in The locations of recent supernovae near the Sun from modelling 60Fe transport; D. Breitschwerdt et al., Nature 532, 73–76 (2016), that there were supernova explosions in the neighbourhood of the Sun (a few tens of parsecs, if I'm reading it correctly) in the comparatively recent past (about 2 million years). What are our chances of finding the remnants of those explosions? That is: given reasonable assumptions for the parents' velocities with respect to the solar system, and for the kick given by the supernova to the remnant, how far are these remnants likely to be from the solar system now? In addition to that, is there any chance of finding them there using current or future astronomical surveys? I imagine the answer to the latter is "definitely not", but I might as well ask. (Note, however, that the main question is where the remnants are likely to be now, and only then how detectable they're likely to be.) Answer: Slim, unless the remnants are still in a close binary system. Almost all massive stars are born in binary systems and some fraction of these survive after a supernova to form neutron star or black hole binary systems. Typical kick velocities appear to be 200-500 km/s (e.g. Janka 2013 http://arxiv.org/abs/1306.0007), which translates (useful fact) to 200-500 pc/Myr, so they would be long gone from their birth location. If they are single, or ejected from a binary they will be invisible (for a black hole) or practically invisible (for a neutron star, which would have cooled by many orders of magnitude in a million years). So it is possible I suppose, especially when we have Gaia data, that a binarity survey of isolated high mass stars might uncover an invisible compact companion (if they were accreting we would surely already know about them; no examples are nearby), and then the space motions might trace it back to a position in Sco Cen. 
NB By "remnant" I assumed you meant the compact object. There are of course large, expanding supernova remnants produced by the explosion. These fade and are mixed into the ISM on timescales of 100,000 years. There is little chance of seeing one from a 2-million year old supernova.
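The "useful fact" that km/s and pc/Myr are nearly the same unit follows directly from the definitions; a quick check (added here for illustration, with rounded constants):

```python
PC_KM = 3.0857e13          # kilometres per parsec
MYR_S = 1e6 * 3.1557e7     # seconds per megayear (Julian year)

# A star moving at 1 km/s covers MYR_S km per Myr, i.e. this many parsecs:
kms_to_pc_per_myr = MYR_S / PC_KM   # about 1.02
```

So kick velocities of 200-500 km/s do translate to roughly 200-500 pc per million years, as stated.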
{ "domain": "physics.stackexchange", "id": 29975, "tags": "astronomy, supernova" }
Intuition of Impulse Formula $J = \sum F \Delta t$
Question: I understand that $$\begin{align} J = \sum F \Delta t &= \Delta p \\ \sum F &= \frac{\Delta p}{\Delta t} \\ &= \frac{mv_2 - mv_1}{\Delta t} \\ &= m \cdot \frac{\Delta v}{\Delta t} \\ &= ma \end{align}$$ ($J$ is impulse, $p$ is momentum.) However, by just looking at the equation $$J = \sum F \Delta t$$ I seem to think that impulse is higher if, for the same force, the time is longer. But I think that's wrong? I think for a certain force, if I exert it in a short time there's a larger impulse? How can I then understand the above equation more intuitively without expanding it out? Answer: "I seem to think that impulse is higher if, for the same force, the time is longer." This is correct. To develop an intuitive understanding, we must just realize that impulse simply refers to a change in momentum. So let's say you apply a force of 1 N to an object. Is the object going to have a larger change in momentum if you apply 1 N for 1 s or 1 N for 5 s? It should seem clear then that a longer exertion time leads to a larger impulse.
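The answer's 1 N for 1 s versus 1 N for 5 s comparison in numbers (a trivial sketch added here; the mass is an arbitrary choice):

```python
def impulse(force, dt):
    """J = F * dt for a constant force."""
    return force * dt

m = 2.0                             # kg, arbitrary example mass
dv_short = impulse(1.0, 1.0) / m    # change in speed after 1 s
dv_long = impulse(1.0, 5.0) / m     # change in speed after 5 s: 5x larger
```

Same force, five times the duration, five times the impulse and hence five times the change in velocity.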
{ "domain": "physics.stackexchange", "id": 93521, "tags": "momentum, kinematics" }
Electron equations of motion for plasma
Question: I'm reading through an Introduction to Plasma Physics by Francis F. Chen, and in the simplified derivation for plasma oscillation in 1D, the book quotes the electron equations of motion as: $$mn_e\left[\frac{\partial\mathbf{v}_e}{\partial t}+(\mathbf{v}_e\cdot\mathbf{\nabla})\mathbf{v}_e\right]=-en_e\mathbf{E}$$ This looks like the usual form of $m\frac{d\mathbf{v}_e}{dt}=\mathbf{F}=-e\mathbf{E}$, but with an additional $(\mathbf{v}_e\cdot\mathbf{\nabla})\mathbf{v}_e$ term. In this case the term is just $v_e^{(x)}\frac{\partial\mathbf{v}_e}{\partial x}$. It looks familiar, but I can't recall where this originates from. Other derivations I've read do not include it, and it is later ignored when considering the case when $\left|\mathbf{v}\right|\ll 1$. From this assumption, I would think special relativity, but I haven't found a mention of it yet. Does anyone know where this extra term comes from? Answer: In one dimension, we have: $\frac{dv_e}{dt}=\frac{\partial v_e}{\partial t}+\frac{\partial v_e}{\partial x}\frac{dx}{dt}=\frac{\partial v_e}{\partial t}+v_e\frac{\partial v_e}{\partial x}$ In 3 dimensions, the x derivative "becomes" a gradient. Thus we get: $ \frac{d\vec{v_e}}{dt}=\frac{\partial \vec{v_e}}{\partial t}+(\vec{v_e}\cdot\vec{\nabla})\vec{v_e}$
{ "domain": "physics.stackexchange", "id": 58651, "tags": "newtonian-mechanics, oscillators, plasma-physics" }
Does ROS run on Windows?
Question: Considering that 90% of desktops are (still) Windows-based, are there plans on making at least some parts of ROS run on that OS? Originally posted by Alex Bravo on ROS Answers with karma: 901 on 2011-02-15 Post score: 1 Answer: Update It is possible to install and use ROS Lunar on Windows 10 thanks to WSL: http://wiki.ros.org/Installation/Windows This is the best way to use ROS on Windows right now, though it is not perfect. Efforts are underway to bring ROS to Windows. There is already the custom port done by REC and there should (soon) be mingw compatibility coming from an external contributor. We are working on a longer-term overhaul of the build system to make Windows compatibility possible. It's a large enough effort that we can't make any promises as to when, but we think there's a large enough community now to make this finally happen. Originally posted by kwc with karma: 12244 on 2011-02-15 This answer was ACCEPTED on the original site Post score: 5 Original comments Comment by Pablo Iñigo Blasco on 2013-12-30: Any progress on this? Comment by VictorLamoine on 2017-08-07: @Pablo Iñigo Blasco yes: http://wiki.ros.org/Installation/Windows Comment by gvdhoorn on 2017-08-07: I believe it's important to recognise that ROSonWSL is not the same as having a native Win32/64 ROS. WSL does not have the same level of access to Windows resources, so 'driver' components wanting to access hw (usb, gpus for gpgpu fi) will not work (easily).
{ "domain": "robotics.stackexchange", "id": 4747, "tags": "windows" }
Simultaneous diagonalization of Hamiltonian and momentum operator
Question: I'm looking at a translationally invariant problem with 3 atoms arranged in a circle, each with one valence electron capable of tunnelling to either of its two neighbors. With a tunnelling rate of $-|A|/\hbar$, we have the Hamiltonian $$H = \begin{pmatrix} E_a & - |A| & - |A|\\ - |A| & E_a & - |A|\\ - |A| & - |A| & E_a \end{pmatrix}$$ which can be shifted by $+|A|$: $$\begin{pmatrix} E_a + |A| & 0 & 0\\ 0 & E_a + |A| & 0\\ 0 & 0 & E_a + |A| \end{pmatrix}.$$ Because of translational symmetry, $p$ is a good quantum number and $[H,\hat{p}]=0$. This means that we can diagonalize $H$ as well as $\hat{p}$ simultaneously and construct momentum eigenstates out of the Hamiltonian eigenstates. How do I do this? EDIT: Okay so I've shifted the matrix as follows (simply $-E_a$ on the diagonal): $$\begin{pmatrix} 0 & -|A| & -|A|\\ -|A| & 0 & -|A|\\ -|A| & -|A| & 0 \end{pmatrix}.$$ Eigenvalues are $-2|A|$ corresponding to the eigenstate $(1,1,1)$, $|A|$ corresponding to $(-1,0,1)$ and $|A|$ corresponding to $(-1,1,0)$. So $|A|$ is degenerate. Now I need to diagonalize $p$. I guess what my problem is, is that I'm not quite sure of the matrix representation of $p$ here. How do I then diagonalize it, if I can't find its matrix representation? I know each matrix element is $\langle \psi_i | p | \psi_j \rangle$, but I don't know what the wave functions look like. How do I proceed? Answer: First of all, your shifting is done wrong! It is true that you can shift the energy around, but that implies adding a multiple of the identity matrix. In other words, you can only add/subtract stuff from the diagonal of your Hamiltonian! So, there is no way around diagonalizing the Hamiltonian as is. That will give you eigenvalues that may or may not be degenerate. If they turn out to be not degenerate (i.e.
you have 3 distinct eigenvalues), then you're done: You know that $H$ and $p$ are simultaneously diagonalizable, but that there is only one way to diagonalize $H$, so that one way must already make $p$ diagonal as well. If the eigenvalues turn out to be degenerate, you'd then have to find a linear combination of the eigenvectors that makes $p$ diagonal. I'd try something that looks like a plane wave. UPDATE re your EDIT: Well, yes, you do know what the wavefunctions look like: $(1,1,1)$, $(-1, 0, 1)$ and $(-1,1,0)$. Your model is discrete (also called a tight-binding model) and so the wavefunctions are just finite-sized vectors. So, what you really are after are eigenstates with discrete cyclic symmetry, and you can construct them from "plane waves", $\psi_n \sim e^{i k n}$ where $n = 1,2,3$ is the number of the atom. The allowed "momenta" are given by the condition that we want periodic boundary conditions, i.e., site "4" would be the same as site "1", so $3k = 2\pi N$ for some integer $N$. The three lowest (in absolute value) allowed $k$-values then are $0$ and $\pm \frac{2}{3}\pi$. The eigenstate with $k = 0$ would just have constant amplitude on each lattice site. This is just what you found for the energy eigenvalue $-2|A|$. For $k = 2/3 \pi$ and $k = -2/3 \pi$ you get two vectors each for which you'd then have to find the linear combination of eigenvectors for energy eigenvalue $|A|$, which I leave as an exercise :)
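As a quick numerical illustration of this construction (a numpy sketch; setting $E_a = 0$ and $|A| = 1$ are arbitrary choices, since $E_a$ only shifts every eigenvalue by the same amount): the discrete plane waves are simultaneously eigenvectors of $H$ and of the one-site cyclic translation operator, which is exactly what "momentum is a good quantum number" means here.

```python
import numpy as np

A = 1.0                                   # |A|; E_a set to 0 (pure shift)
H = -A * (np.ones((3, 3)) - np.eye(3))    # -|A| off-diagonal, 0 on diagonal
T = np.roll(np.eye(3), 1, axis=0)         # cyclic translation by one site

# Discrete plane waves psi_n ~ exp(i k n), with 3k = 2*pi*N, N = 0, 1, 2
for k in 2 * np.pi * np.arange(3) / 3:
    psi = np.exp(1j * k * np.arange(3)) / np.sqrt(3)
    E = -2 * A * np.cos(k)                              # expected energy
    assert np.allclose(H @ psi, E * psi)                # eigenstate of H
    assert np.allclose(T @ psi, np.exp(-1j * k) * psi)  # and of T
    print(f"k = {k:.3f}: E = {E:+.1f}")
```

The $k = \pm\frac{2}{3}\pi$ states are precisely the linear combinations $(1, e^{\pm 2\pi i/3}, e^{\pm 4\pi i/3})$ of the two degenerate $|A|$ eigenvectors that the answer leaves as an exercise.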
{ "domain": "physics.stackexchange", "id": 9508, "tags": "solid-state-physics" }
Can SETI certify whether or not Proxima b is inhabited by beings using electromagnetic communication?
Question: Can SETI certify whether or not Proxima b is inhabited by beings using electromagnetic communication? By certify I mean by past, present or future observations using technologies available. Answer: Using current technology (and by that I mean experiments and telescopes that are available now) we would probably be unable to detect life on Earth even if observed from a distance of 4 light years, which is the distance to Proxima Centauri. A "blind" search could look for radio signatures and of course this is what SETI has been doing for lots of different stars, including, I imagine, Proxima Centauri. If we are talking about detecting "Earth-like" signals, then we must assume that we are not talking about deliberate beamed attempts at communication, and so must rely on detecting random radio "chatter" and accidental signals generated by our civilisation. As I have remarked in the linked question: The SETI Phoenix project was the most advanced search for radio signals from other intelligent life. Quoting from Cullers et al. (2000): "Typical signals, as opposed to our strongest signals fall below the detection threshold of most surveys, even if the signal were to originate from the nearest star". Quoting from Tarter (2001): "At current levels of sensitivity, targeted microwave searches could detect the equivalent power of strong TV transmitters at a distance of 1 light year (within which there are no other stars)...". The equivocation in these statements is due to the fact that we do emit stronger beamed signals in certain well-defined directions, for example to conduct metrology in the solar system using radar. Such signals have been calculated to be observable over a thousand light years or more. But these signals are brief, beamed into an extremely narrow angle and unlikely to be repeated. You would have to be very lucky to be observing in the right direction at the right time if you were performing targeted searches. 
It has been suggested that new radio telescope projects and technology like the Square Kilometre Array may be capable of serendipitously detecting radio "chatter" out to distances of 50 pc ($\sim 150$ light years) - see Loeb & Zaldarriaga (2007) - and so Proxima Centauri would be well within this range. This array, due to begin full operation some time after 2025, could also monitor a multitude of directions at once for beamed signals.
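To put rough numbers on "detecting leakage", here is a back-of-envelope inverse-square estimate. The transmitter power (1 MW effective isotropic radiated power) and the 1 Hz channel width are illustrative assumptions of mine, not figures from the papers quoted above.

```python
import math

LY = 9.4607e15        # metres per light year
JY = 1e-26            # 1 jansky in W m^-2 Hz^-1

def flux_density_jy(eirp_w, distance_ly, bandwidth_hz):
    """Flux density at the receiver from an isotropic-equivalent transmitter."""
    d_m = distance_ly * LY
    return eirp_w / (4 * math.pi * d_m**2 * bandwidth_hz) / JY

# A ~1 MW EIRP narrowband carrier seen from Proxima Centauri (4.24 ly),
# with all the power concentrated into a 1 Hz channel:
s_jy = flux_density_jy(1e6, 4.24, 1.0)
print(f"{s_jy * 1e3:.1f} mJy")
```

This lands at the millijansky level, within reach of targeted narrowband searches; the same power spread over a ~MHz-wide band is weaker per hertz by a factor of $10^6$ (60 dB), which is why ordinary broadband "chatter" is so much harder to detect than a deliberate carrier.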
{ "domain": "astronomy.stackexchange", "id": 1778, "tags": "exoplanet, radio-astronomy" }
Difficulty moving from electric to groovy - bullet issues
Question: Hi, I'm trying to upgrade a system from electric to groovy. Unfortunately, I'm getting stuck since the codebase uses btQuaternion. The short version, though, is that I can't find the migration script mentioned here: http://ros.org/wiki/geometry/bullet_migration My tf folder has no scripts with that name. All I have are: static_transform_publisher tf_change_notifier tf_echo tf_empty_listener tf_monitor What happened to the conversion script? Was it removed? Originally posted by aespielberg on ROS Answers with karma: 113 on 2013-07-01 Post score: 1 Answer: It's not being installed into groovy. You can get it directly from the source https://github.com/ros/geometry/blob/groovy-devel/tf/scripts/bullet_migration_sed.py Originally posted by tfoote with karma: 58457 on 2013-07-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by aespielberg on 2013-07-01: Is there anything I'll have to do set-up wise other than just running this script? Comment by tfoote on 2013-07-01: Change to the directory you want to modify. It's good practice to make sure you don't have any local changes in your source control so you can check the diff and rollback if it doesn't work as you want. Comment by aespielberg on 2013-07-01: Okay...it messed up a little bit but with tiny changes it then worked. Thanks!
{ "domain": "robotics.stackexchange", "id": 14773, "tags": "ros, bullet, ros-electric, ros-fuerte, ros-groovy" }
Dot product between relative velocity and relative position in orbital mechanics
Question: Reading Curtis' book Orbital Mechanics I found this relation that confused me. $$ \underline{r}\cdot\underline{\dot{r}}=r\dot{r} $$ What happens, say, for a circular orbit where $\underline{r}$ and $\dot{\underline{r}}$ are orthogonal? Shouldn't the scalar product be zero? Thank you! Answer: Yes, it should be, and it is. On the right side of the equation, $\dot r$ is zero for a circular orbit, because $r$ is constant.
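The identity $\underline{r}\cdot\underline{\dot r} = r\dot r$ (where $\dot r = d|\underline{r}|/dt$, not $|\underline{\dot r}|$) and its circular-orbit special case are easy to check numerically. A numpy sketch, using an arbitrary parametrized ellipse rather than a propagated orbit, with time derivatives taken by central differences:

```python
import numpy as np

def r_vec(t, a=2.0, b=1.0):
    """A sample parametrized path (an ellipse; not a propagated orbit)."""
    return np.array([a * np.cos(t), b * np.sin(t)])

t, h = 0.7, 1e-6
r = r_vec(t)
v = (r_vec(t + h) - r_vec(t - h)) / (2 * h)    # dr_vec/dt, central difference
rdot = (np.linalg.norm(r_vec(t + h)) - np.linalg.norm(r_vec(t - h))) / (2 * h)

lhs = r @ v                         # r . dr/dt
rhs = np.linalg.norm(r) * rdot      # r * dr/dt
assert abs(lhs - rhs) < 1e-6        # the identity holds

# Circular case (a == b): |r| is constant, so rdot = 0 and r . v = 0 too.
rc = r_vec(t, 1.0, 1.0)
vc = (r_vec(t + h, 1.0, 1.0) - r_vec(t - h, 1.0, 1.0)) / (2 * h)
print(rc @ vc)   # ~0, both sides vanish together
```

On the ellipse both sides are nonzero and agree; on the circle both are zero, exactly as the answer says.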
{ "domain": "physics.stackexchange", "id": 72229, "tags": "newtonian-mechanics, orbital-motion, satellites" }
Extensions for setting members via expressions and reflection
Question: I'd like to make the usage of my configuration framework easier so I created a few extensions that after getting a value from a source automatically assign it to a property or field. They should make retrieving and assigning setting values as convenient as possible. I'm not concerned about the performance as most of the assignments are one time operations when the application starts. Introduction Let me provide a little bit context first, before I show the code for a review. The configuration framework is based on a simple interface that provides just two methods: public interface IConfiguration { [CanBeNull] T GetValue<T>([NotNull] CaseInsensitiveString name); void Save([NotNull] CaseInsensitiveString name, [NotNull] object value); } I use it with dependency injection and set the values either inside a constructor or while composing an Autofac container (depends on what is more convenient and whether the properties are read-only). To avoid creating keys manually I use a couple extensions that do that for me. They use expressions to generate the names. public static class ConfigurationExtensions { public static IConfiguration SetValue<T>(this IConfiguration configuration, Expression<Func<T>> expression) { var value = configuration.GetValue(expression); expression.SetValue(value); return configuration; } public static T GetValue<T>(this IConfiguration config, Expression<Func<T>> expression) { var memberExpr = expression.Body as MemberExpression ?? throw new ArgumentException("Expression must be a member expression."); var name = $"{memberExpr.Member.DeclaringType.Namespace}+{memberExpr.Member.DeclaringType.Name}.{memberExpr.Member.Name}"; return config.GetValue<T>(name); } } Review Now that I have a name and a value it's time to assign it to a property or field. For this I use another extension that uses a little bit of reflection to make this happen. 
It evaluates the expression and, according to its type, sets either a property or a field, and it can do this for static and instance classes. The extension can be used outside and inside a class. public static class MemberSetter { public static void SetValue<T>([NotNull] this Expression<Func<T>> expression, object value) { if (expression == null) throw new ArgumentNullException(nameof(expression)); if (expression.Body is MemberExpression memberExpression) { var obj = GetObject(memberExpression.Expression); switch (memberExpression.Member.MemberType) { case MemberTypes.Property: var property = (PropertyInfo)memberExpression.Member; if (property.CanWrite) { ((PropertyInfo)memberExpression.Member).SetValue(obj, value); } else { var bindingFlags = BindingFlags.NonPublic | (obj == null ? BindingFlags.Static : BindingFlags.Instance); var backingField = (obj?.GetType() ?? property.DeclaringType).GetField($"<{property.Name}>k__BackingField", bindingFlags); if (backingField == null) { throw new BackingFieldNotFoundException(property.Name); } backingField.SetValue(obj, value); } break; case MemberTypes.Field: ((FieldInfo)memberExpression.Member).SetValue(obj, value); break; default: throw new ArgumentException($"Member must be either a {nameof(MemberTypes.Property)} or a {nameof(MemberTypes.Field)}."); } } else { throw new ArgumentException($"Expression must be a {nameof(MemberExpression)}."); } } private static object GetObject(Expression expression) { // This is a static class. if (expression == null) { return null; } if (expression is MemberExpression anonymousMemberExpression) { // Extract constant value from the anonymous wrapper var container = ((ConstantExpression)anonymousMemberExpression.Expression).Value; return ((FieldInfo)anonymousMemberExpression.Member).GetValue(container); } else { return ((ConstantExpression)expression).Value; } } } Tests Here are a couple of tests I wrote.
[TestMethod] public void Load_InstanceMembers_OnTheType_Loaded() { var config = new Configuration(new Memory { { "PublicProperty", "a" }, { "PrivateProperty", "b" }, { "PublicField", "c" }, { "PrivateField", "d" }, { "PrivateReadOnlyField", "e" }, }); var x = new InstanceClass(config); config.SetValue(() => x.PublicProperty); config.SetValue(() => x.PublicField); config.SetValue(() => x.PublicReadOnlyProperty); CollectionAssert.AreEqual(new[] { "a", null, "c", null, null, "f" }, x.GetValues().ToList()); } [TestMethod] public void Load_InstanceMembers_InsideConstructor_Loaded() { var config = new Configuration(new Memory { { "PublicProperty", "a" }, { "PrivateProperty", "b" }, { "PublicField", "c" }, { "PrivateField", "d" }, { "PrivateReadOnlyField", "e" }, { "PublicReadOnlyProperty", "f" }, }); var x = new InstanceClass(config); CollectionAssert.AreEqual(new[] { "a", "b", "c", "d", "e", "f" }, x.GetValues().ToList()); } [TestMethod] public void Load_StaticMembers_Loaded() { var config = new Configuration(new Memory { { "PublicProperty", "a" }, { "PrivateProperty", "b" }, { "PublicField", "c" }, { "PrivateField", "d" }, { "PrivateReadOnlyField", "e" }, { "PublicReadOnlyProperty", "f" }, }); config.SetValue(() => StaticClass.PublicProperty); config.SetValue(() => StaticClass.PublicField); config.SetValue(() => StaticClass.PublicReadOnlyProperty); CollectionAssert.AreEqual(new[] { "a", null, "c", null, null, "f" }, StaticClass.GetValues().ToList()); } public class InstanceClass { public InstanceClass() { } public InstanceClass(IConfiguration config) { config.SetValue(() => PublicProperty); config.SetValue(() => PrivateProperty); config.SetValue(() => PublicField); config.SetValue(() => PrivateField); config.SetValue(() => PrivateReadOnlyField); config.SetValue(() => PublicReadOnlyProperty); } public string PublicProperty { get; set; } private string PrivateProperty { get; set; } public string PublicField; private string PrivateField; private readonly string 
PrivateReadOnlyField; public string PublicReadOnlyProperty { get; } public IEnumerable<object> GetValues() { yield return PublicProperty; yield return PrivateProperty; yield return PublicField; yield return PrivateField; yield return PrivateReadOnlyField; yield return PublicReadOnlyProperty; } } public static class StaticClass { public static string PublicProperty { get; set; } private static string PrivateProperty { get; set; } public static string PublicField; private static string PrivateField; private static readonly string PrivateReadOnlyField; public static string PublicReadOnlyProperty { get; } public static IEnumerable<object> GetValues() { yield return PublicProperty; yield return PrivateProperty; yield return PublicField; yield return PrivateField; yield return PrivateReadOnlyField; yield return PublicReadOnlyProperty; } } I'm mostly interested in: What do you think of this design? Can it be more convenient? Is it easy to use and understand? Is the code clean enough? Answer: This still looks fairly tedious: config.SetValue(() => x.PublicProperty); config.SetValue(() => x.PublicField); config.SetValue(() => x.PublicReadOnlyProperty); If you are going with reflection, I'd go all the way and implement automatic serialization/deserialization. //pseudocode for property deserialization var targetObject = ...; foreach(var property in targetObject.GetType().GetProperties()) { if (!property.CanWrite) continue; var key = ... ; //generate key if (!config.HasKey(key)) continue; var value = config.GetValue(key); property.SetValue(targetObject, value); } A few problems that arise: You can no longer specifically select which members to serialize/deserialize. But this problem can be solved by attributes (similar to [XmlIgnore]). You either need a non-generic IConfiguration.GetValue, or you will have to use reflection in order to call it. You need a way to check whether the key is present in configuration. P.S.
I'm not sure I like the idea of messing with private fields/properties. It straight up breaks encapsulation, it is no longer just about convenience.
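The auto-binding idea in the answer is language-agnostic. Here is a Python analogue of the same pattern (a sketch with made-up `Config`/`Settings` names, not part of the reviewed C# code): iterate over an object's members and assign every one that has a matching configuration key, instead of listing each member by hand.

```python
class Config:
    """A toy key/value source standing in for IConfiguration."""
    def __init__(self, values):
        self._values = values

    def has_key(self, key):
        return key in self._values

    def get_value(self, key):
        return self._values[key]

def bind(config, obj):
    """Assign every public attribute of obj that has a matching config key."""
    for name in list(vars(obj)):
        if name.startswith("_"):
            continue  # analogue of skipping non-serialized members
        if config.has_key(name):
            setattr(obj, name, config.get_value(name))

class Settings:
    def __init__(self):
        self.host = "localhost"
        self.port = 0

cfg = Config({"host": "example.org", "port": 8080, "unused": True})
s = Settings()
bind(cfg, s)
print(s.host, s.port)   # example.org 8080
```

Note the same trade-off the answer raises: keys absent from the target object are silently ignored, and you lose fine-grained control over which members participate unless you add an opt-out marker (here, a leading underscore plays the role of `[XmlIgnore]`).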
{ "domain": "codereview.stackexchange", "id": 26685, "tags": "c#, reflection, expression-trees" }
Trouble with creating an image_transport nodelet
Question: I'm working in ROS noetic on ubuntu 20.04. I'm pretty new to the concepts of nodelets but I felt I needed them to more efficiently go through a process of taking compressed images and then decompressing them with the image_transport package without it having to go through TCP. I went through the pluginlib tutorial and then this tutorial for nodelets, which I based my code off of. I slightly altered it to first take normal images, which worked, but once I started working with the image_transport package, I've been encountering an error related to the callback signature. Here is my main code where the commented out lines are the lines for the normal publisher/subscriber that worked with sensor_msgs/Image types. #include <ros/ros.h> #include <std_msgs/String.h> #include <sensor_msgs/CompressedImage.h> #include <sensor_msgs/Image.h> #include <nodelet/nodelet.h> #include <opencv2/core.hpp> #include <opencv2/imgcodecs.hpp> #include <cv_bridge/cv_bridge.h> #include <image_transport/image_transport.h> #include <pluginlib/class_list_macros.h> #include <stdio.h> #include <boost/bind.hpp> namespace image_nodelet { class Transport_Nodelet: public nodelet::Nodelet { public: Transport_Nodelet() { } private: virtual void onInit() { ros::NodeHandle nh = getPrivateNodeHandle(); ros::NodeHandle im_nh = getNodeHandle(); NODELET_DEBUG("Initializing nodelet..."); image_transport::ImageTransport it(im_nh); sub_ = it.subscribe("/acl_jackal/forward/color/image_raw/compressed", 1, boost::bind(&Transport_Nodelet::callback, this, _1)); pub_ = it.advertise("img_out", 1); //image_transport::Subscriber sub = it.subscribe("/acl_jackal/forward/color/image_raw/compressed", 1, &Transport_Nodelet::callback, this); //image_transport::Publisher pub = it.advertise("img_out", 1); // Create a publisher topic //pub = nh.advertise<sensor_msgs::Image>("img_out",10); // Create a subscriber topic //sub = nh.subscribe("/acl_jackal/forward/color/image_raw/compressed",10, &Transport_Nodelet::callback,
this); } // ros::Publisher pub; // ros::Subscriber sub; image_transport::Publisher pub_; image_transport::Subscriber sub_; void callback(const sensor_msgs::CompressedImageConstPtr& input) { try { // Convert the compressed image ROS message to a cv::Mat using cv_bridge cv_bridge::CvImagePtr cv_ptr = cv_bridge::toCvCopy(input, sensor_msgs::image_encodings::BGR8); // Here cv_ptr->image contains the uncompressed image as cv::Mat // You can now publish the uncompressed image if needed pub_.publish(cv_ptr->toImageMsg()); } catch (cv_bridge::Exception& e) { ROS_ERROR("Could not convert from '%s' to 'bgr8'.", input->format.c_str()); } // sensor_msgs::Image output; // output.data = input->data; // NODELET_DEBUG("msg data = %d",output.data); // ROS_INFO("msg data = %d",output.data); // pub.publish(output); } }; // Export the Hello_World class as a plugin using the // PLUGINLIB_EXPORT_CLASS macro. PLUGINLIB_EXPORT_CLASS(image_nodelet::Transport_Nodelet, nodelet::Nodelet); } However, I get an error when I catkin_make: /home/glenn/catkin_make_ws/src/my_workspace/src/my_nodelet.cpp:31:141: required from here /usr/include/boost/bind/bind.hpp:319:35: error: no match for call to ‘(boost::_mfi::mf1<void, image_nodelet::Transport_Nodelet, const boost::shared_ptr<const sensor_msgs::CompressedImage_<std::allocator<void> > >&>) (image_nodelet::Transport_Nodelet*&, const boost::shared_ptr<const sensor_msgs::Image_<std::allocator<void> > >&)’ 319 | unwrapper<F>::unwrap(f, 0)(a[base_type::a1_], a[base_type::a2_]); This is referring to line 31 of my code which is: sub_ = it.subscribe("/acl_jackal/forward/color/image_raw/compressed", 1, boost::bind(&Transport_Nodelet::callback, this, _1)); I followed the image_transport API documentation for the callback signature, where the only difference is I used sensor_msgs::CompressedImageConstPtr& instead of sensor_msgs::ImageConstPtr& which you can see in my callback function. Any help would be appreciated. 
I've tried looking at all the documentation and forums but found no hints at a solution to my problem. Update: I tried the obvious and just followed exactly as the document says for the callback, so I changed the argument type in the callback from void callback(const sensor_msgs::CompressedImageConstPtr& input) to void callback(const sensor_msgs::ImageConstPtr& input) With everything else the same, and it compiled. Does this mean I can't use the compressed image type with image_transport? I managed to already read uncompressed images so would I better off manually uncompressing without the image_transport package? Answer: About your build error: if you look at image_transport.h, you'll see that the advertise() and subscribe() are defined only for ImageConstPtr objects. That's why it won't build if you pass a CompressedImageConstPtr object. Not sure if you are aware of it, but the image_transport::ImageTransport class has a feature to automatically decompress a CompressedImage msg, then pass an ImageConstPtr object to your subscribe() callback. Take a look at this page: http://wiki.ros.org/image_transport/Tutorials/ExaminingImagePublisherSubscriber
{ "domain": "robotics.stackexchange", "id": 38755, "tags": "callback, ros-noetic, nodelet, image-transport, compressed-image-transport" }
Find the cheapest order placed out of all of the stores visited - follow-up #1
Question: Made some adjustments to my original program. Here's a link to it: original. Also, here's a link to the problem statement: problem statement. The assignment is past due; the code I turned in is stupid simple, which is great, but I didn't learn anything, so I am testing the many different ways to do things in C, hoping some of you will have hints, tips, tricks, advice, suggestions, etc. #include <stdio.h> #include <string.h> #include <stdlib.h> #include <stdbool.h> int read_positive_int(const char* prompt) { printf("%s\n\nInput Specifications: Please type a positive integer and press the 'Enter' or the 'return' key when finished.\n", prompt); int n; while (true) { char line[1024]; /* the input line */ fgets(line, sizeof(line), stdin); int sscanf_result = sscanf(line, "%d", &n); if (sscanf_result == 1 && n > 0) { return n; } puts("\nInput Error: Please carefully read the input specifications that are provided after each question prompt and then try again.\n"); } } float read_real_positive_float(const char* prompt) { printf("%s\n\nInput Specifications: Please type a real number in currency format, i.e., XXX.XX, and press the 'Enter' or the 'return' key when finished.\n", prompt); float n; while (true) { char line[1024]; fgets(line, sizeof(line), stdin); int sscanf_result = sscanf(line, "%f", &n); if (sscanf_result == 1 && n > 0 ) { return n; } puts("\nInput Error: Please carefully read the input specifications that are provided after each question prompt and then try again.\n"); } } int main(void) { int total_shops = read_positive_int("How many shops will be visited?"); float **cost_ingredients_ptr = malloc(total_shops * sizeof(float*)); /* allocate an array of pointers */ if (!cost_ingredients_ptr) { fprintf(stderr, "Memory allocation failure!\n"); exit(1); } float **total_cost_ingredients_ptr = malloc(total_shops * sizeof(float*)); memset(total_cost_ingredients_ptr, 0, total_shops * sizeof(float)); if (!total_cost_ingredients_ptr) { fprintf(stderr,
"Memory allocation failure!\n"); exit(1); } for (int i = 0; i < total_shops; i++) { printf("\nYou are at shop #%d.\n\n", i+1); int quantity_ingredients = read_positive_int("How many ingredients are needed?"); cost_ingredients_ptr[i] = malloc(quantity_ingredients * sizeof(float)); if (!cost_ingredients_ptr[i]) { fprintf(stderr, "Memory allocation failure!\n"); exit(1); } total_cost_ingredients_ptr[i] = malloc(sizeof(float)); if (!total_cost_ingredients_ptr) { fprintf(stderr, "Memory allocation failure!\n"); exit(1); } for (int j = 0; j < quantity_ingredients; j++) { printf("\nWhat is the cost of ingredient #%d", j+1); cost_ingredients_ptr[i][j] = read_real_positive_float("?"); *total_cost_ingredients_ptr[i] += cost_ingredients_ptr[i][j]; } printf("\nThe total cost at shop #%d is $%0.2f.\n", i+1, *total_cost_ingredients_ptr[i]); if (i == total_shops-1) { float cheapest_order = *total_cost_ingredients_ptr[0]; int location_cheapest_order = 1; for (int k = 1; k < total_shops; k++) { if (*total_cost_ingredients_ptr[k] < cheapest_order) { cheapest_order = *total_cost_ingredients_ptr[k]; location_cheapest_order = k + 1; } printf("\nThe cheapest order placed was at shop #%d, and the total cost of the order placed was $%0.2f.\n", location_cheapest_order, cheapest_order); } } } free(cost_ingredients_ptr); free(total_cost_ingredients_ptr); cost_ingredients_ptr = NULL; total_cost_ingredients_ptr = NULL; return 0; } Answer: Don't be overly verbose Your prompts are very verbose, and you use a lot of newlines. Keep it simple. If you ask "how many", then people naturally expect to enter a natural number. If you ask for "cost", then people will naturally assume they can enter some value with two digits after the comma. If you prompt something, replace the newline after the prompt with a space, then the cursor will be after the question. 
You also don't need to read a whole line into a buffer, and then use sscanf() on the buffer, you can directly use scanf() to read a value from the standard input. I would just write: printf("How many shops will be visited? "); int total_shops; if (scanf("%d", &total_shops) != 1) { fprintf(stderr, "Invalid input!\n"); return 1; } Also, try to use \n only at one end of format strings, instead of both at the start and the end. Don't use the _ptr postfix I would avoid using prefixes or postfixes that describe the type of a variable. It is usually not necessary. Use structs to organize your data You have a number of shops, and each shop has a number of ingredients. Instead of making total_cost_ingredients_ptr a pointer to pointer, it is better to define a struct that represents a shop, and then to create an array out of the shops, like so: struct shop { int num_ingredients; float *ingredient_costs; }; struct shop *shops = calloc(num_shops, sizeof(*shops)); for (int i = 0; i < num_shops; i++) { int num_ingredients = ...; /* Read number of ingredients */ shops[i].num_ingredients = num_ingredients; shops[i].ingredient_costs = calloc(num_ingredients, sizeof(*shops[i].ingredient_costs)); ... } Later on you can then refer to ingredient number j in shop i as: shops[i].ingredient_costs[j]. Note that I used calloc() here instead of malloc(). It is slightly easier to use the former to allocate arrays, and it will also pre-zero the allocated memory for you. Move actions to be done after the last shop data is read to after the loop Whenever you see this pattern: for (int i = 0; i < num; i++) { ... if (i == num - 1) do_something(); } Just move that last part out of the loop: for (int i = 0; i < num; i++) { ... } do_something(); Move long sections of code into functions You created read_positive_int() and read_real_positive_float() for relatively trivial code, but you forgot to put the code to read all ingredients for a shop into its own function.
You can structure the code like so: void read_shop_ingredients(struct shop *shop) { ... } int main() { struct shop *shops = ...; ... for (int i = 0; i < num_shops; i++) { read_shop_ingredients(&shops[i]); } } Don't store temporary data longer than necessary You are storing the cost of each individual ingredient in cost_ingredients_ptr, but you are actually only interested in the sum of the ingredients for each store. So don't store the individual costs, but while reading the costs, add them immediately to the total_cost_ingredients_ptr. Don't use a 2D array for 1D data You basically copy&pasted the code for cost_ingredients_ptr to total_cost_ingredients_ptr, creating a 2D array where one of the dimensions is just 1 item big. That's not very efficient. Also, if you use the struct approach, it would've been obvious right away. So with this in mind, the struct can be changed to: struct shop { float total_ingredient_costs; }; And a struct with just one element is a bit overkill; you could have gotten away with allocating just a single 1D array in your code: float *total_ingredient_costs = calloc(num_shops, sizeof(*total_ingredient_costs)); Use return instead of exit() if possible There is no need to call exit() inside main(), just use return 1 to exit with an error code. Don't clear local variables at the end of a function While it is certainly good to call free() for every piece of memory you allocated with malloc() and calloc(), there is no need to set the pointers to NULL here.
{ "domain": "codereview.stackexchange", "id": 32202, "tags": "c" }
A* Search with Array Lists
Question: I implemented A* Search with Array Lists following the pseudocode here. I have been working with different algorithms on my own, and now, I am trying to optimize them so that I can run them with less code. The actual algorithm I wrote does not use a lot of code, but the setup does (creating the grid and setting the values for each successor). Is there a more efficient way to create the grid and set the successor values? If you have any other suggestions regarding regarding refactoring, I'd be happy to hear those as well. package AStar; import java.util.ArrayList; public class AStar { private static int[][] grid = new int[3][3]; public static void main(String[] args) { ArrayList<AStarNode> path = new ArrayList<>(); path = runAStar(6); printPath(path); } private static void printPath(ArrayList<AStarNode> path) { for (AStarNode node : path) { System.out.print(node.value + " "); } System.out.println(); } public static ArrayList runAStar(int goal) { ArrayList<AStarNode> path = new ArrayList<>(); //2 3 4 //5 0 9 //0 8 6 //Test #2 grid[0][0] = 2; grid[0][1] = 3; grid[0][2] = 4; //row #2 grid[1][0] = 5; grid[1][1] = 0; grid[1][2] = 9; //row #3 grid[2][0] = 0; grid[2][1] = 8; grid[2][2] = 6; //initilialize the open list ArrayList<AStarNode> openList = new ArrayList<>(); //initialize the closed list ArrayList<AStarNode> closedList = new ArrayList<>(); //put the starting node on the open list AStarNode startingNode = new AStarNode(); startingNode.parent = null; startingNode.x = 0; startingNode.y = 0; startingNode.f = 0; startingNode.value = grid[0][0]; openList.add(startingNode); //while the open list is not empty AStarNode q = new AStarNode(); AStarNode nextQ = new AStarNode(); ArrayList<AStarNode> openListTemp = new ArrayList(); while (!openList.isEmpty()) { openListTemp = new ArrayList(); //find the node with the least f on the open list, call it "q" float smallestF = Float.MAX_VALUE; for (AStarNode node : openList) { if (node.f < smallestF) { smallestF = node.f; 
q = node; } } //pop q off of the open list for (AStarNode node : openList) { if (node != q) { openListTemp.add(node); } } openList = openListTemp; //generate q's 4 successors and set their parents to q ArrayList<AStarNode> successors = new ArrayList<>(); //North Node - node above the current one AStarNode north = new AStarNode(); north.x = q.x - 1; north.y = q.y; north.parent = q; successors.add(north); //South Node - node below the current one AStarNode south = new AStarNode(); south.x = q.x + 1; south.y = q.y; south.parent = q; successors.add(south); //East Node - node to the right of the current one AStarNode east = new AStarNode(); east.x = q.x; east.y = q.y - 1; east.parent = q; successors.add(east); //West Node - node to the left of the current one AStarNode west = new AStarNode(); west.x = q.x; west.y = q.y + 1; west.parent = q; successors.add(west); int min = 0; int max = 2; //remove nodes that are outside of the grid ArrayList<AStarNode> temp = new ArrayList<>(); for (AStarNode node : successors) { int x = node.x; int y = node.y; if (node.x < min || node.x > max || node.y < min || node.y > max || grid[x][y] == 0) { //do nothing } else { node.value = grid[x][y]; temp.add(node); //add all items except the invalid one to a new temp list } } System.out.println("Q: " + q.value); for (AStarNode inTemp : temp) { System.out.println("Successor Node: " + grid[inTemp.x][inTemp.y]); } System.out.println("___________________________"); ArrayList<AStarNode> tempFinal1 = new ArrayList<>(); ArrayList<AStarNode> tempFinal2 = new ArrayList<>(); //for each successor for (AStarNode successor : temp) { //if successor is the goal, stop the search int x = successor.x; int y = successor.y; //if successor is the goal, stop the search if (grid[x][y] == goal) { path.add(q); path.add(successor); return path; } //successor.g = q.g + distance between successor and q successor.g = q.g + 1; //successor.h = distance from goal to successor successor.h = 1; //successor.f = successor.g + 
successor.h successor.f = successor.g + successor.h; //if a node with the same position as successor is in the OPEN list //which has a lower f than successor, skip this successor for (AStarNode checkOpenList : openList) { if ((checkOpenList.x == x && checkOpenList.y == y) && checkOpenList.f < successor.f) { } else { tempFinal1.add(checkOpenList); } } //if a node with the same position as successor is in the CLOSED list //which has a lower f than successor, skip this successor for (AStarNode checkClosedList : closedList) { if ((checkClosedList.x == x && checkClosedList.y == y) && checkClosedList.f < successor.f) { } else { tempFinal2.add(checkClosedList); } } //otherwise, add the node to the open list if (successors.contains(successor)) {//if the current successor has not been removed nextQ = successor; openList.add(successor); } } //add the node and its parent to the path if the parent is not null if (q.parent != null) { path.add(q.parent); path.add(q); } //add q to the closed list closedList.add(q); q = nextQ; } return path; } } Answer: Declare with an interface ArrayList<AStarNode> path = new ArrayList<>(); path = runAStar(6); You don't need the new ArrayList. ArrayList<AStarNode> path; path = runAStar(6); This would work fine and saves an unnecessary object creation. List<AStarNode> path = runAStar(6); This is even better though. It declares and initializes path in the same statement. It also changes the type of path from a rigid implementation to the flexible interface. Currently if you wanted to change from ArrayList to a different implementation, you'd have to change it in many places. If we instead make the change to use the interface, future changes to the implementation can change just one place. Another advantage is that it makes the methods more reusable. If you have two methods that take a List, you can use them with any kind of list. If they take a specific implementation, then you have to rewrite them for each implementation. 
As a general rule, it is often easier if you use the interface anywhere you can. Only specify the implementation if you have to do so. Either because the implementation has functionality that is not accessible through the interface or because you are producing a new object (e.g. new ArrayList). Shorter initialization //2 3 4 //5 0 9 //0 8 6 //Test #2 grid[0][0] = 2; grid[0][1] = 3; grid[0][2] = 4; //row #2 grid[1][0] = 5; grid[1][1] = 0; grid[1][2] = 9; //row #3 grid[2][0] = 0; grid[2][1] = 8; grid[2][2] = 6; You can replace this with grid = new int[][] {{2, 3, 4}, {5, 0, 9}, {0, 8, 6}}; More discussion. Avoid static private static int[][] grid = new int[3][3]; Why is this static? That means that you can only have one AStar. private int[][] grid = new int[3][3]; This way we can search multiple grids. Of course that wouldn't work if the grid is hardcoded into the runAStar method. Either pass it into the method or set it prior to the method. Pick your data type //initilialize the open list ArrayList<AStarNode> openList = new ArrayList<>(); You put the currently open nodes in a list. As a result, insertion is quick (constant time) but finding is slow (linear time). It is often quicker to keep something in order than to put it in order. You're putting things in order on the fly. Consider using a data structure that maintains order, e.g. a TreeSet. NavigableSet<AStarNode> openNodes = new TreeSet<>(new AStarNodeComparator()); Personally, I'd prefer openNodes to openList as I find it more descriptive. It also works regardless of the underlying type. Note that TreeSet may not be the best data structure either. I like it because it supports quick inserts and lookups. Maybe there's a better one though. You might have to try several to find the best one. Compare runtimes based on real data sets. Repeat the search on the same data with different data structures. Note that you need to implement AStarNodeComparator to make this work. 
//initialize the closed list ArrayList<AStarNode> closedList = new ArrayList<>(); Here you do a lot of searches for particular nodes. Each node should only appear once. So we want a Set with a quick lookup. HashSet has a constant time lookup. Set<AStarNode> closedNodes = new HashSet<>(); Use the data structure openListTemp = new ArrayList(); //find the node with the least f on the open list, call it "q" float smallestF = Float.MAX_VALUE; for (AStarNode node : openList) { if (node.f < smallestF) { smallestF = node.f; q = node; } } //pop q off of the open list for (AStarNode node : openList) { if (node != q) { openListTemp.add(node); } } openList = openListTemp; But we changed to a NavigableSet. q = openNodes.pollFirst(); We don't need the rest of the code. That's the power of using the right data structure. Use helper methods private void addIfValid(List<AStarNode> validNodes, AStarNode node) { if (node.x >= min && node.x <= max && node.y >= min && node.y <= max && grid[node.x][node.y] != 0) { node.value = grid[node.x][node.y]; validNodes.add(node); } } Now rather than saying successors.add(north); You could say addIfValid(successors, north); Or perhaps even better private void addIfValid(List<AStarNode> validNodes, int x, int y, AStarNode parent) { if (x < 0 || x >= grid.length || y < 0 || y >= grid[x].length || grid[x][y] == 0) { return; } AStarNode node = new AStarNode(); node.x = x; node.y = y; node.parent = parent; node.value = grid[x][y]; validNodes.add(node); } Which would allow you to replace //West Node - node to the left of the current one AStarNode west = new AStarNode(); west.x = q.x; west.y = q.y + 1; west.parent = q; successors.add(west); with addIfValid(successors, q.x, q.y + 1, q); and would eliminate int min = 0; int max = 2; //remove nodes that are outside of the grid ArrayList<AStarNode> temp = new ArrayList<>(); for (AStarNode node : successors) { int x = node.x; int y = node.y; if (node.x < min || node.x > max || node.y < min || node.y > max ||
grid[x][y] == 0) { //do nothing } else { node.value = grid[x][y]; temp.add(node); //add all items except the invalid one to a new temp list } }
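One loose end above: the TreeSet approach needs the AStarNodeComparator to be written. Here is a possible sketch (the AStarNode stub below is a minimal stand-in with only the fields the comparator needs, not the question's full class). The coordinate tie-breakers matter: a TreeSet treats compare() == 0 as equality, so comparing on f alone would silently drop a second, distinct node that happens to have the same f.

```java
import java.util.Comparator;
import java.util.TreeSet;

// Minimal stand-in for the question's AStarNode: just the fields the
// comparator needs (the real class also has parent, value, g and h).
class AStarNode {
    int x, y;
    float f;
    AStarNode(int x, int y, float f) { this.x = x; this.y = y; this.f = f; }
}

class AStarNodeComparator implements Comparator<AStarNode> {
    @Override
    public int compare(AStarNode a, AStarNode b) {
        int byF = Float.compare(a.f, b.f);   // primary key: smallest f first
        if (byF != 0) return byF;
        int byX = Integer.compare(a.x, b.x); // tie-breakers keep distinct
        return byX != 0 ? byX : Integer.compare(a.y, b.y); // equal-f nodes
    }
}

public class AStarNodeComparatorDemo {
    public static void main(String[] args) {
        TreeSet<AStarNode> openNodes = new TreeSet<>(new AStarNodeComparator());
        openNodes.add(new AStarNode(0, 0, 5f));
        openNodes.add(new AStarNode(1, 1, 5f)); // same f, still kept
        openNodes.add(new AStarNode(2, 2, 3f));
        System.out.println(openNodes.pollFirst().f); // prints 3.0
    }
}
```

With that in place, openNodes.pollFirst() returns and removes the open node with the smallest f, replacing both scan loops in the original code.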
{ "domain": "codereview.stackexchange", "id": 19731, "tags": "java, algorithm, a-star" }
Why does a small puddle of water evaporate faster at the edges than the center?
Question: I have read that ceiling tile stains and coffee ring stains are darker on the edges than the center because the puddles evaporate fastest at the point of contact between the surface, air, and water, and water that is evaporated leaves behind its sediments. My question is: why does water evaporate faster at this boundary than in the center or any other part of the water puddle? Answer: A liquid puddle is not evenly shaped like a rectangular block, but rather like an irregular ellipsoid with a bulge in the center. It's impossible to discern this bulge with the naked eye in water, but it is very visible in mercury. The reason this bulge is created in the first place is surface tension: the liquid tries to have the least surface area possible so as to minimize its surface energy, and ideally the least surface area is achieved by a sphere. Liquids like water don't have enough surface tension to hold themselves into droplets the way mercury does, but they try, and form a very eccentric (squashed-down) ellipsoid. Because of the bulge, proportionally more water molecules are exposed to air at the thin edges than under the bulge, which facilitates evaporation there. I hope it helps.
{ "domain": "physics.stackexchange", "id": 92244, "tags": "thermodynamics, everyday-life, water, surface-tension, evaporation" }
Steam from a cup of coffee
Question: I observed that in winter there is more visible steam from a cup of coffee than in summer. Is there some phenomenon taking place here? Answer: The amount of water that air can take up before the water creates fog or visible steam depends on temperature. The colder the air, the less water it takes to create fog/steam. It is the same principle as when hot air rises, for example when pushed up a mountain: it starts to cool down drastically and it rains. For more, have a look at relative humidity: https://en.wikipedia.org/wiki/Humidity
{ "domain": "physics.stackexchange", "id": 98685, "tags": "thermodynamics, temperature, everyday-life, phase-transition, humidity" }
Print greatest factor if it is same for both numbers
Question: Take input n1 and n2 and check if they have the same greatest factor. If they do, print that factor. If they don't, print "No". example: input: 6 9 output: 3 input: 15 27 output: No n1=int(input()) n2=int(input()) l1=[] l2=[] for i in range(1,n1): if n1%i==0: l1.append(i) l1.sort(reverse=True) cf1=l1[0] for j in range(1,n2): if n2%j==0: l2.append(j) l2.sort(reverse=True) cf2=l2[0] if cf1==cf2: print(cf1) else: print("No") Answer: 1 is always a common factor of any two numbers. Your code works perfectly. But if you want to write shorter code, check whether both numbers are divisible by a number using a for loop running from 2 (1 is always a factor) up to and including the smaller of the two numbers. Take a new variable f which is False initially. If the condition is satisfied, i.e. if both numbers are divisible by a number, change f to True. Then, outside the loop, write an if condition: if f==True, print the number; otherwise the condition was never satisfied, f is still False, and print "no". Your code: n1=int(input("Enter n1")) n2=int(input("Enter n2")) f=False for i in range(2,min(n1,n2)+1): if n1%i==0 and n2%i==0: g=i f=True if f: print(g) else: print("no")
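One caveat worth flagging: the rewrite in the answer finds the greatest common factor, but the stated examples (15 and 27 should print "No" even though they share the common factor 3) suggest the task is really about each number's largest proper divisor. A minimal sketch matching the examples, using the fact that the largest proper divisor of n is n divided by its smallest prime factor:

```python
def greatest_proper_factor(n):
    """Largest divisor of n other than n itself."""
    # n // (smallest prime factor) is the largest proper divisor;
    # trial division up to sqrt(n) is enough to find it.
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return n // d
    return 1  # n is prime (or 1): its only proper divisor is 1

def same_greatest_factor(n1, n2):
    f1, f2 = greatest_proper_factor(n1), greatest_proper_factor(n2)
    return f1 if f1 == f2 else "No"

print(same_greatest_factor(6, 9))    # -> 3
print(same_greatest_factor(15, 27))  # -> No
```

This also avoids building and sorting the full divisor lists, since only the single largest proper divisor is needed.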
{ "domain": "codereview.stackexchange", "id": 42316, "tags": "python, python-3.x, mathematics, factors" }
I want to know whether the Create2 can interface a LIDAR?
Question: I want to know if the Create 2 can interface a laser radar? For example an RPLidar. Answer: If you are willing to work at it, adding LIDAR to the Create 2 should be possible. If you read the iRobot Create 2 OI Specs, you will see that you can communicate with the Create via serial communication. If you look at the datasheet for the RPLidar, you will see that you can communicate with the sensor using serial communication. Because you cannot program the Create directly, you will need to use a microcontroller to interface the LIDAR and the robot.
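To make the serial side concrete, here is a small Python sketch of packing a Create 2 Drive command as described in the Open Interface spec (opcode 137, followed by velocity and radius as signed 16-bit big-endian values). The port name and the idea of forwarding LIDAR-derived decisions are illustrative assumptions, not part of the spec:

```python
import struct

# Opcodes from the iRobot Create 2 Open Interface spec
START, SAFE, DRIVE = 128, 131, 137

def drive_command(velocity_mm_s: int, radius_mm: int) -> bytes:
    """Opcode 137 followed by velocity and radius as signed
    16-bit big-endian integers (radius 32767 means 'drive straight')."""
    return struct.pack('>Bhh', DRIVE, velocity_mm_s, radius_mm)

# With pyserial (not imported here) you would open e.g.
# serial.Serial('/dev/ttyUSB0', 115200) to the Create 2, write
# bytes([START, SAFE]) to enter Safe mode, and then stream
# drive_command(...) bytes based on what your microcontroller reads
# from the LIDAR's own serial port.
print(drive_command(200, 32767).hex())  # -> 8900c87fff
```

The microcontroller in the middle simply speaks both serial protocols: it parses the LIDAR's scan packets on one UART and emits Open Interface byte strings like the one above on the other.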
{ "domain": "robotics.stackexchange", "id": 1190, "tags": "irobot-create, laser, lidar" }
Is it possible to convert a normal force into a vertical force of a moving object into the direction of travel?
Question: I have read some questions here about normal forces applied to moving objects, but I still can't find an answer to the following scenario: there is an object with a constant velocity, and on its way the object is impacted by varying normal forces. Inside the object is a damper which brings the object back on its original path. Can I extract energy from the damper without slowing down the object? If not, is it even possible to convert the normal force into a force in the direction of travel to increase its velocity? If yes, what would happen in the "black box" to make it possible in a real technical system? My approach so far: On one hand, this system seems to behave like an object with friction, and it's not possible to increase speed with friction. On the other hand, kinetic energy depends on velocity, so if the impact is faster than the damper, there would be an energy difference which can be extracted. Thanks in advance! Answer: You can imagine some kind of "regenerative damping" whereby the damping mechanism would store the energy of the shocks rather than dissipate it (actuating a magnet along a coil, etc.). However, since the energy of the shock is by definition a consequence of (and hence borrowed from) the kinetic energy of the body or vehicle considered, then in the best-case scenario, feeding said energy back to said body would merely help to maintain its initial speed and not actually speed up the moving object.
{ "domain": "physics.stackexchange", "id": 80301, "tags": "newtonian-mechanics, forces, energy, energy-conservation, friction" }
Rails: create a relationship between comment and user
Question: I'm trying to connect the create of a comment to the current_user. I have a model comment which is a polymorphic association. class CreateComments < ActiveRecord::Migration[6.0] def change create_table :comments do |t| t.string :content t.references :commentable, polymorphic: true t.timestamps end end end In the controller I'm trying to connect the @comment.user = current_user. controllers/comments_controller.rb def create @comment = @commentable.comments.new comment_params @comment.user = current_user @comment.save redirect_to @commentable, notice: 'Comment Was Created' end But I get the following error: NoMethodError: undefined method `user=' for # Did you mean? user_id= I would rather set up a relationship from which I can get to a user (The creator of comment) from the comment itself. For example @comment.user instead of just having the id. I have the following models: class Question < ApplicationRecord belongs_to :user has_many :comments, as: :commentable has_rich_text :content end class User < ApplicationRecord devise :database_authenticatable, :registerable, :recoverable, :rememberable, :validatable has_many :posts has_many :questions has_one_attached :avatar before_create :set_defaults def set_defaults self.avatar = 'assets/images/astronaut.svg' if self.new_record? end end class Comment < ApplicationRecord belongs_to :commentable, polymorphic: true end Since comment is a polymorphic association, I'm not sure if it should be connected to user via a belongs_to to user. [5] pry(#<Questions::CommentsController>)> @comment => #<Comment:0x00007fb3e6df72e8 id: nil, content: "some test content", commentable_type: "Question", commentable_id: 1, created_at: nil, updated_at: nil, user_id: nil> [6] pry(#<Questions::CommentsController>)> Does this mean, I should create a foreign_key in the comments migration to a user? How can I go about getting the creator of the comment from the comment itself? Something like @comment.creator or @comment.user? 
Answer: I don't think your question belongs in this code review section; you would have gotten help much faster if you had posted on Stack Overflow instead. Anyway, there are a couple of points you need to check. The CreateComments migration must have been updated (or at least there was another migration that changed the table), because the migration shown says nothing about a user_id column, yet your model has one. "Since comment is a polymorphic association, I'm not sure if it should be connected to user via a belongs_to to user." You are confusing the polymorphic association with the comment's relation to its user. They are different: a comment belongs to a Question (or Post, Thread... that's why you have polymorphic here) and also belongs to the user who created it. Even better, you could use another polymorphic association for the commenter (I will skip that to avoid adding noise to your question). "Does this mean, I should create a foreign_key in the comments migration to a user?" No. "How can I go about getting the creator of the comment from the comment itself? Something like @comment.creator or @comment.user?" Those are just methods Rails generates for you once you set up the association, so you can name it whatever you want. Here is what I would do to get comment.commenter and comment.commenter= to work: class Comment < ApplicationRecord belongs_to :commentable, polymorphic: true belongs_to :commenter, class_name: 'User', foreign_key: 'user_id' end
{ "domain": "codereview.stackexchange", "id": 37090, "tags": "ruby, ruby-on-rails, active-record" }
ROS Wiki Indexer
Question: Is there a public interface to all the information that the ROS Wiki indexes, other than these pages Originally posted by David Lu on ROS Answers with karma: 10932 on 2012-07-17 Post score: 0 Answer: You can check the information on the wiki macros: http://www.ros.org/wiki/WikiMacros They depend on static files that have to be generated regularly to not become outdated. Also see the packages (well, the source as of now): http://www.ros.org/wiki/rosdoc http://www.ros.org/wiki/rosorg Both packages contribute to the wiki result, but the code and structure need refactoring. I might do that soon, Tully also has it on his todo list. Originally posted by KruseT with karma: 7848 on 2012-07-20 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by David Lu on 2012-07-20: +1 for the existence of the rosorg stacks. I'd like to play in a roswiki sandbox.
{ "domain": "robotics.stackexchange", "id": 10240, "tags": "ros, meta" }
Time dilation in the center of the earth
Question: Time dilation seems to slow time in stronger gravitational fields. Let's say we put a person with a clock at the center of the Earth. They will feel no gravity, since the pull comes from every direction and effectively cancels to zero. Will the time-slowing effect still be observable? If so, one could "theoretically" build a time-slowing machine. Answer: At the center of the Earth there would be no net gravitational force. That is not the same as no gravity. The center is affected equally by all of Earth's mass, so it is pulled equally in all directions. So there would still be gravitational time dilation even though there is no net force to pull it more in one direction than another.
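To put a number on this (a back-of-the-envelope estimate assuming a uniform-density, non-rotating Earth and the weak-field approximation, so the figures are illustrative only): the gravitational potential inside such a sphere is

$$\Phi(r) = -\frac{GM}{2R}\left(3 - \frac{r^2}{R^2}\right), \qquad r \le R,$$

so $\Phi(0) = -\tfrac{3GM}{2R}$ while $\Phi(R) = -\tfrac{GM}{R}$. The rate of a clock at the center relative to one on the surface is then

$$\frac{d\tau_{\text{center}}}{d\tau_{\text{surface}}} \approx 1 + \frac{\Phi(0) - \Phi(R)}{c^2} = 1 - \frac{GM}{2Rc^2} \approx 1 - 3.5\times 10^{-10},$$

i.e. the central clock runs slow by roughly 10 milliseconds per year: a real effect, but a hopeless basis for a time-slowing machine.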
{ "domain": "physics.stackexchange", "id": 81531, "tags": "special-relativity, gravity, time, time-dilation" }
Does sky luminance really decrease in steps as the Sun goes deeper under the horizon?
Question: Playing with some atmospheric scattering simulations, I've come across a fact that, as the Sun goes lower under the horizon, sky luminance (neglecting sources of light other than the Sun) appears to not decrease very smoothly – instead it decreases in a sort of steps. See e.g. the plot of zenith luminance with Sun elevation (all plots here assume observer on the ground): or horizon luminance (at the Sun's azimuth, so the peak here corresponds to the solar disk): After some experiments I've concluded that each new step appears after I add a new scattering order taken into account. If I e.g. take only 3 or 2 scattering orders into account, zenith luminance appears to look like this: and horizon luminance: These steps correspond to the events when each order of scattering gets fully into the Earth's shadow, so that the next order becomes dominant. Now, what I first thought is that this jumpiness is an artifact of some discretization in the simulation. Indeed, until I increased resolution in solar elevation to insane levels, I got some bogus steps not even related to any physical events. But even after increasing resolutions in various variables (Sun elevation, view zenith angle, sun-camera angle, camera altitude) and taking really many (150) orders of scattering into account these smooth steps still remain, and begin to seem physical. The good part of this result is that it actually is practically falsifiable. Namely, it predicts fast decrease of zenith luminance till about -7.5° Sun elevation and of horizon luminance till about -15°, after which luminance decreases considerably slower until about -18° for zenith and -25° for horizon. Although the -25° seems too deep for practical measurements (due to airglow, zodiacal light etc.), the others appear to be within the astronomical (or brighter) twilight. So my question is, has this steppy luminance change actually been measured in real Earth atmosphere? 
Answer: Such pronounced steps appeared to have been an artifact of the calculations in the model. Namely, integrals of radiance over all directions were done numerically, with 16 steps in $\theta$ and in $\varphi$ spherical coordinates. This led to poor resolution (in $\theta$ direction) of the horizon, where the red sky of the sunset/sunrise is a relatively thin strip, which shrinks with Sun setting deeper. After I increased number of samples, I got very different radiances: The 64 and 96 sample curves are nearly indistinguishable here, meaning that the computation has converged to some useful precision. Here's the 96-sample version alone, to better see its smoothness: Some very small and smooth steps are still present after the first step, but I think they'll never be seen in reality due to the noise and other, higher-intensity light effects of the Earth's atmosphere and space. The first step doesn't get any smoother after increasing number of samples, but it isn't much surprise either: it's the border of the normal Earth's shadow coming into zenith, which should be expected to affect further evolution of zenith radiance as the Sun sets deeper.
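The resolution effect described above can be reproduced with a toy integrand (this is only an illustration of the numerical issue, not the author's radiative-transfer code): a feature of angular width ~0.01 rad near the horizon is essentially invisible to a 16-point rule over the quadrant, but reasonably captured by a 96-point one.

```python
import math

def midpoint_integral(f, a, b, n):
    """Midpoint rule with n equal subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Narrow "bright horizon strip": Gaussian of width 0.01 rad near theta ~ 83 deg
center, width = 1.45, 0.01
f = lambda t: math.exp(-((t - center) / width) ** 2)
exact = width * math.sqrt(math.pi)  # the peak lies well inside [0, pi/2]

err16 = abs(midpoint_integral(f, 0.0, math.pi / 2, 16) - exact)
err96 = abs(midpoint_integral(f, 0.0, math.pi / 2, 96) - exact)
print(err16, err96)  # the 16-point rule misses almost the whole strip
```

With 16 nodes the spacing (~0.098 rad) is an order of magnitude wider than the strip, so whether the strip contributes at all depends on where the nodes happen to fall, which is exactly the kind of discretization artifact described above.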
{ "domain": "physics.stackexchange", "id": 56871, "tags": "visible-light, scattering, atmospheric-science" }
Shortest path in divisors graph
Question: There is a graph with N vertices numbered from 1 to N. Edge between a and b exists if and only if a|b or b|a. If a|b then the weight of the edge is b/a. If b|a then the weight of the edge is a/b. Given two vertices u and v, I want to find the length of the shortest path between them. I think that the shortest path should pass through gcd(u, v) or lcm(u, v) (lowest common multiple), but I can't prove it. Can someone tell if my theorem is correct? If it is not correct, then how to solve this? It is not my homework. Answer: Given two vertices $u$ and $v$, I want to find the length of the shortest path between them. I think that the shortest path should pass through $\gcd(u, v)$ or $lcm(u, v)$ (lowest common multiple), but I can't prove it. Can someone tell if my theorem is correct? I think the claim is not strictly true as written, and I will provide a counter-example. First a claim: For any two vertices in this graph, there is a shortest path between them which uses only edges whose weight is a prime. Pf: If the weight is not a prime, it can be factored, and this yields a different path between the same two vertices. The sum of any sequence of positive numbers all at least 2 is less than or equal to their product (trivial, by induction), so that is a shorter path, which can be substituted into the original in place of the non-prime edge. (Note that a slightly stronger thing we could say is "shortest paths can only contain edges whose weight is a prime or is 4", proof left to reader.) Now take two arbitrary non-prime coprime numbers, say, 6 = 2 * 3, and 35 = 5 * 7. Clearly, a shortest path using only prime edges has at least 4 steps. 
Also, we can ignore the possibility of a shortest path using edges corresponding to primes which are not {2, 3, 5, 7} -- by commutativity of multiplication, any such edges can be reordered until the end, without changing the weights of any of the edges, and the total effect of any such edges for primes which were not originally involved must be a loop, so in fact they could be skipped. It follows (from this argument) that the shortest paths from 6 to 35 are in bijection with orderings of {2, 3, 5, 7}. They all have the same cost, which is 2 + 3 + 5 + 7. Some of them pass through the lcm or gcd, but some do not. For instance, 6 -> 2 -> 10 -> 70 -> 35. I think you can prove that there is a shortest path passing through the gcd? And there is a shortest path passing through the lcm? I think the idea to prove that would be, look at the edges in the path, and look at which ones make the number larger, and which ones make the number smaller. Then rearrange the edges using commutativity of multiplication as before. If you sort it so that those that make the number larger come first, and then are followed by those that make the number smaller, then in the middle I think you pass through the lcm. If you sort it so that those that make the number smaller come first, then I think you pass through the gcd.
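The counter-example can also be checked mechanically. A hedged Python sketch (limiting the graph to vertices 1..210, which is enough to contain the lcm and every intermediate vertex of the path 6 -> 2 -> 10 -> 70 -> 35) confirms that the shortest distance from 6 to 35 is 2 + 3 + 5 + 7 = 17:

```python
import heapq

def shortest_divisor_path(u, v, n):
    """Dijkstra on the graph with vertices 1..n and an edge a-b of
    weight max(a, b) // min(a, b) whenever one of a, b divides the other."""
    dist = {u: 0}
    pq = [(0, u)]
    while pq:
        d, a = heapq.heappop(pq)
        if a == v:
            return d
        if d > dist.get(a, float('inf')):
            continue  # stale queue entry
        # neighbours: multiples of a (weight m // a) and divisors of a (weight a // b)
        nbrs = [(m, m // a) for m in range(2 * a, n + 1, a)]
        nbrs += [(b, a // b) for b in range(1, a) if a % b == 0]
        for b, w in nbrs:
            if d + w < dist.get(b, float('inf')):
                dist[b] = d + w
                heapq.heappush(pq, (d + w, b))
    return float('inf')

print(shortest_divisor_path(6, 35, 210))  # -> 17
```

17 matches the lower bound from the prime-edge argument: any path from 6 to 35 must change the exponents of 2, 3, 5 and 7, so it needs at least one edge per prime, costing at least 2 + 3 + 5 + 7.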
{ "domain": "cs.stackexchange", "id": 5909, "tags": "graphs, combinatorics, number-theory" }
What technology is behind a deep freezer door
Question: I dont know if the title or question makes any sense but when i close my deep freezer i find it difficult to open immediately after closing until after about 2 minutes have passed. How is this done and why? It works all the same with no power source Does it have something to do with the door seal? I dont get it Answer: On a different scale and with different and more severe temperature variations, consider the following: Place a few ounces/tens of milliliters of water in a plastic bottle. Heat it to boiling in a microwave oven. As soon as the water reaches the boiling point, remove it and tighten the cap. As the water vapor, which is less dense than dry air, condenses, the space in which it previously occupied is "vacated" causing a pressure reduction. In this extreme example, the plastic bottle will collapse quite dramatically. In the case of your freezer, when the door is open, some of the cold air within will fall out, due to the increased density when compared to the ambient air. This vacancy causes humid air from outside the freezer to be pulled into the enclosure. When the door is closed, the air is quite quickly chilled, causing water vapor to condense, reducing the pressure. Additionally, one could consider that the warmer air is less dense and when chilled, becomes more dense, also causing a pressure reduction. This pressure reduction, combined with what you have described as a well sealing door, results in the difficulty of opening the door. I have duplicated the collapsing bottle process and also have experienced additional force required to open my refrigerator and freezer doors.
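A rough worked estimate shows why the effect is so strong (illustrative numbers; real doors leak through the seal and only part of the air is exchanged, so this is an upper bound). Cooling sealed air at constant volume from room temperature to freezer temperature gives

$$\frac{P_2}{P_1} = \frac{T_2}{T_1} = \frac{255\ \mathrm{K}}{298\ \mathrm{K}} \approx 0.86,$$

so the pressure deficit is $\Delta P \approx 0.14 \times 101\ \mathrm{kPa} \approx 14\ \mathrm{kPa}$. Acting on a door of area $0.3\ \mathrm{m^2}$,

$$F = \Delta P \cdot A \approx 14\,000\ \mathrm{Pa} \times 0.3\ \mathrm{m^2} \approx 4\ \mathrm{kN},$$

far more than anyone can pull. In practice air seeps back in through the imperfect seal, which is why the door frees up again after a minute or two.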
{ "domain": "engineering.stackexchange", "id": 4841, "tags": "mechanical-engineering, electrical-engineering, machine-design" }
Is time going backwards beyond the event horizon of a black hole?
Question: For an outside observer the time seems to stop at the event horizon. My intuition suggests, that if it stops there, then it must go backwards inside. Is this the case? This question is a followup for the comment I made for this question: Are we inside a black hole? Food for thought: if time stops at the event horizon (for an outside observer), for inside, my intuition suggests, time should go backwards. So for matter, that's already inside when the black hole forms, it won't fall towards a singularity but would fall outwards towards the event horizon due to this time reversal. So inside there would be an outward gravitational force. It would be fascinating if it turns out that all this cosmological redshift, and expansion we observe, is just the effect of an enormous event horizon outside pulling the stuff outwards. So from outside: we see nothing fall in, and see nothing come out. And from inside: we see nothing fall out, and see nothing come in. Hopefully the answers make this clear, and I learn a bit more about the GR. :) Answer: It's easy to forget that, in the context of relativity, there is no universal time. You write: For an outside observer the time seems to stop at the event horizon. My intuition suggests, that if it stops there, then it must go backwards inside. Is this the case? But your intuition doesn't seem to take into account that, for an observer falling into the hole, time doesn't stop at the event horizon. The point is that one must be much more careful in their thinking about time within the framework of general relativity where time is a coordinate and coordinates are arbitrary. In fact, within the event horizon, the radial coordinate becomes time-like and the time coordinate becomes space-like. This simply means that, to "back up" inside the event horizon is as impossible as moving backwards in time outside the event horizon. 
In other words, the essential reason it is impossible to avoid the singularity once within the horizon is precisely that one must move forward through time which, due to the extreme curvature within the horizon, means moving towards the central singularity.
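The coordinate swap can be read directly off the Schwarzschild line element (in Schwarzschild coordinates):

$$ds^2 = -\left(1 - \frac{r_s}{r}\right)c^2\,dt^2 + \left(1 - \frac{r_s}{r}\right)^{-1}dr^2 + r^2\,d\Omega^2, \qquad r_s = \frac{2GM}{c^2}.$$

For $r > r_s$ the $dt^2$ term is negative (timelike) and the $dr^2$ term positive (spacelike). For $r < r_s$ the factor $(1 - r_s/r)$ changes sign, so the two terms swap roles: $r$ becomes the timelike coordinate, and decreasing $r$ is the future. Reaching the singularity is then no more avoidable than reaching next Tuesday; nowhere in this does time "run backwards".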
{ "domain": "physics.stackexchange", "id": 99932, "tags": "general-relativity, black-holes, time, event-horizon, causality" }
Sorting and tagging values in objects
Question: Wherever the biggest number is always use the 'big' value If another number is equal to that number, use the 'big' value Set the next number in line to 'medium' If another number is equal to that number, use the 'medium' value Set the next number in line to 'small' If another number is equal to that number, use the 'small' value entries: any { "FirstNames": 6, "Names": 8, "": 6, "Locations": 3, "Others": 2}; let sorted = this.sortByValues(this.entries); var obj = {}; for (var i = 0; i <= sorted.length; i++) { if (Object.keys(obj).length === 0 && obj.constructor === Object) obj[sorted[i]] = "big"; else if (this.entries[sorted[i]] == this.entries[sorted[i-1]] && obj[sorted[i-1]] == "big") obj[sorted[i]] = obj[sorted[i-1]]; else if(this.entries[sorted[i]] < this.entries[sorted[i-1]] && obj[sorted[i-1]] == "big") obj[sorted[i]] = "medium"; else if(this.entries[sorted[i]] == this.entries[sorted[i-1]] && obj[sorted[i-1]] == "medium") obj[sorted[i]] = obj[sorted[i-1]]; else if(this.entries[sorted[i]] < this.entries[sorted[i-1]] && obj[sorted[i-1]] == "medium") obj[sorted[i]] = "small"; else if(this.entries[sorted[i]] == this.entries[sorted[i-1]] && obj[sorted[i-1]] == "small") obj[sorted[i]] = obj[sorted[i-1]]; else obj[sorted[i]] = obj[sorted[i-1]]; } sortByValues(list : any) { return Object.keys(list).sort(function(a,b){return list[a]-list[b]}).reverse()); } Answer: Only two values of the sorted array are needed to decide the resultant tags, so the algo is: scan all entries and find the maximum and pre-maximum. scan all entries and tag them accordingly in a new object Tests show 10x speed-up, but of course it's noticeable only on large number of repetitions. 
tagValues(data : any) { let keys = Object.keys(data); let len = keys.length; let max = -Infinity, max2 = -Infinity; for (let i = 0; i < len; i++) { let value = data[keys[i]]; if (value > max) { max2 = max; max = value; } else if (value < max && value > max2) { max2 = value; } } let tagged = {}; for (let i = 0; i < len; i++) { let key = keys[i]; let value = data[key]; tagged[key] = value === max ? 'big' : value === max2 ? 'medium' : 'small'; } return tagged; } Note the else-if branch: it keeps max2 equal to the largest value strictly below max, so the 'medium' tag is still found when the runner-up appears before the maximum in the scan (e.g. {a: 3, b: 8, c: 6}). P.S. As for sortByValues, instead of reversing an array after sorting it, invert the comparison.
{ "domain": "codereview.stackexchange", "id": 22229, "tags": "javascript, typescript" }
Why is $e^{i\hat{p}L/\hbar}$ only an operator when it is outside an integral?
Question: Looking at the screenshot provided below, which is an excerpt from this textbook, really nothing more than a derivation of momentum as the generator of translations, would someone be kind enough to explain why "$p$ is an operator only outside the integral" as he says? It's not clear to me that this should be true. What about the act of integration is turning $p$ from an operator into a variable? Answer: What is going on is that you have one operator which is diagonal in one particular basis. So to act with it on some vector, you expand the vector on the basis, and act with it on each term. Very explicitly, and using a finite basis first to make the point. Let us suppose that you have a vector space $V$ and one operator $T$ which has the property that it has one basis of eigenvectors, i.e., one basis $\{e_i\}\subset V$ such that $$Te_i=\lambda_i e_i\tag{1}.$$ In that case if you want to act with $T$ on some generic $v$, the way to go is to expand $v$ in the basis and use linearity of $T$: $$Tv=T\sum_{i=1}^n v^ie_i=\sum_{i=1}^n v^i Te_i=\sum_{i=1}^n v^i\lambda_i e_i\tag{2}.$$ In particular this is good to define functions of operators. If you have a function $F(x)$ and you want to define $F(T)$ it is immediate to do so in the basis of eigenvectors of $T$. You define the action on the basis by $$F(T)e_i=F(\lambda_i)e_i\tag{3}$$ and extend the action by linearity. In other words, $F(T)$ is defined by: $$F(T)v=F(T)\sum_{i=1}^n v^ie_i = \sum_{i=1}^n v^i F(T)e_i = \sum_{i=1}^n v^i F(\lambda_i)e_i.\tag{4}$$ Now, what you have is the "continuous basis" version of that. You are describing the states of your system in the position representation by position space wavefunctions $f(x)$. In that space, the momentum operator $P$ acts by differentiation, $$Pf(x)=-i\hbar \dfrac{d}{dx}f\tag{5}.$$ You want to define a particular function of $P$, namely $e^{i\frac{L}{\hbar}P}$. To do so, you want to use the basis of eigenstates of $P$. 
This means you must solve $$P\psi_p(x)=-i\hbar \dfrac{d}{dx}\psi_p(x)=p\psi_p(x)\tag{6}.$$ This gives you the exponentials $\psi_p(x)=\frac{1}{\sqrt{2\pi\hbar}}e^{i\frac{p}{\hbar}x}$. This is a basis of eigenstates of $P$, the analogue $e_i$ appearing in (1). You can now define your desired exponential using the analogue of (3): $$e^{i\frac{L}{\hbar}P}\psi_p(x)=e^{i\frac{L}{\hbar}p}\psi_p(x)\tag{7}$$ and extension by linearity. But beware that now linear combinations are taken with integrals since we are working with a continuous basis. In that case, expanding $f(x)$ on the basis and using the definition of $e^{i\frac{L}{\hbar}P}$ we have $$e^{i\frac{L}{\hbar}P}f(x)=e^{i\frac{L}{\hbar}P}\int dp \tilde{f}(p)\psi_p(x)=\int dp \tilde{f}(p)e^{i\frac{L}{\hbar}P}\psi_p(x)=\int dp e^{i\frac{L}{\hbar}p}\tilde{f}(p)\psi_p(x)\tag{8}.$$ So you see that in the end this is, in fact, the definition of $e^{i\frac{L}{\hbar}P}$ on the position representation: it acts diagonally on the basis and is extended by linearity. This is the continuous version of (4) and is nothing but a definition.
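This diagonal action is easy to check numerically: expand a test function on the plane-wave basis with an FFT, multiply each Fourier mode by $e^{i\frac{L}{\hbar}p}$ exactly as in (8), and transform back. A minimal sketch (assumptions: $\hbar = 1$, an arbitrary grid, and a Gaussian test function):

```python
import numpy as np

hbar = 1.0
N, dx = 512, 0.1
x = (np.arange(N) - N // 2) * dx
f = np.exp(-x**2)                                # a Gaussian centered at x = 0

# momentum grid conjugate to x: p = hbar * k = hbar * 2*pi * (cycles per unit length)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
L = 5.0

# eq. (8) in discrete form: multiply each mode by exp(i L p / hbar), then resum
g = np.fft.ifft(np.exp(1j * L * p / hbar) * np.fft.fft(f))

print(x[np.argmax(np.abs(g))])  # ≈ -5.0: the peak moved from 0 to -L
```

Since $P = -i\hbar\,d/dx$, the operator $e^{i\frac{L}{\hbar}P} = e^{L\,d/dx}$ Taylor-expands to the translation $f(x)\mapsto f(x+L)$, so the diagonal multiplication in momentum space indeed shifts the function's graph by $-L$.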
{ "domain": "physics.stackexchange", "id": 90378, "tags": "quantum-mechanics, operators, fourier-transform" }
urdf collada difference
Question: Hi, What is the major difference between URDF, STL and Collada? An STL is only composed of a mesh, so it can only move the whole object, right? And a URDF can define several joints to move. But if I use a Collada file, can I move joints as when importing a URDF? Or should I import many Collada files and control each part separately? Originally posted by sam on ROS Answers with karma: 2570 on 2011-03-06 Post score: 1 Answer: The Collada 1.5 spec allows you to also include the kinematic chains. On the last Beta Program Conference Call I recall one group (I think JSK) was expanding the Collada-ROS implementation. I currently have no idea of the current status. Originally posted by KoenBuys with karma: 2314 on 2011-03-07 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 4971, "tags": "ros, joint, urdf, collada, stl" }
Benchmark using floating point math
Question: I wrote some code in Python to benchmark mathematical operations. It does not do anything in particular besides just calculating a lot of floating point numbers. To clarify: I need 128-bit precision floating point numbers. from __future__ import division from numpy import arctan, sin, absolute, log10 from mpmath import * import time imax = 1000001 x = mpf(0) y = mpf(0) z = mpf(0) t = mpf(0) u = mpf(0) i = 1 mp.prec = 128 start_time = time.time() while i < 1000001: i += 1 x = mpf(x + 1) y = mpf(y + x * x) z = mpf(z + sin(y)) t = mpf(t + absolute(z)) u = mpf(u + log10(t)) print("--- %s seconds ---" % (time.time() - start_time)) print x print y print z print t print u This took approximately 87 seconds to print the results. How can I improve this code? A friend of mine wrote a similar program in Fortran and it took him only 3.14 seconds to print the results in 128-bit precision. In general I want to improve the speed of my code: instead of 87 seconds I want to reach times much closer to Fortran's. Answer: Disclaimer: I maintain the gmpy2 library which also provides arbitrary-precision arithmetic. gmpy2 includes fast integer arithmetic based on the GMP library. If gmpy2 is not installed, mpmath uses Python's native integer type(s) for computation. If gmpy2 is installed, mpmath uses the faster gmpy2.mpz integer type. You didn't mention if you have gmpy2 (or gmpy, an older version) installed, so I ran one test with your original code, mpmath, and without gmpy2. The running time was ~91 seconds. With gmpy2 installed, the running time was ~63 seconds. The rest of the examples assume gmpy2 is installed. Your code includes superfluous calls to mpf(). Since x is already an mpf, the result of x+1 will also be an mpf so the mpf() call is not needed. If I remove those calls, the running time drops to ~56 seconds. You import from both numpy and mpmath. mpmath.sin and mpmath.log10 replace the functions imported from numpy. But you are still using numpy.absolute. 
NumPy can be very fast, but only when it uses types it natively supports. If I remove numpy and change absolute to abs, the running time drops to ~48 seconds. Here is the code with all the changes listed above: from __future__ import division from mpmath import * import time imax = 1000001 x = mpf(0) y = mpf(0) z = mpf(0) t = mpf(0) u = mpf(0) i = 1 mp.prec = 128 start_time = time.time() while i < 1000001: i += 1 x = x + 1 y = y + x * x z = z + sin(y) t = t + abs(z) u = u + log10(t) print("--- %s seconds ---" % (time.time() - start_time)) print x print y print z print t print u I don't see any other significant improvements for an mpmath-based approach. gmpy2 also provides arbitrary-precision floating point based on the MPFR library. If I use gmpy2 instead, the running time is reduced to ~18 seconds. Here is the code for a gmpy2-based solution: from __future__ import division import gmpy2 from gmpy2 import mpfr, sin, log10 import time imax = 1000001 x = mpfr(0) y = mpfr(0) z = mpfr(0) t = mpfr(0) u = mpfr(0) i = 1 gmpy2.get_context().precision = 128 start_time = time.time() while i < 1000001: i += 1 x = x + 1 y = y + x * x z = z + sin(y) t = t + abs(z) u = u + log10(t) print("--- %s seconds ---" % (time.time() - start_time)) print x print y print z print t print u
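The cost of the superfluous wrapper calls can be seen even with plain floats. This stdlib-only harness is a stand-in for the mpf case (not a 128-bit benchmark); the `float()` call plays the role of the redundant `mpf()`:

```python
import timeit

def with_wrapper():
    x = 0.0
    for _ in range(10000):
        x = float(x + 1)   # redundant conversion, like mpf(x + 1)
    return x

def without_wrapper():
    x = 0.0
    for _ in range(10000):
        x = x + 1          # x is already a float; no conversion needed
    return x

t1 = timeit.timeit(with_wrapper, number=50)
t2 = timeit.timeit(without_wrapper, number=50)
print("with wrapper: %.4fs, without: %.4fs" % (t1, t2))
```

The same principle applies to mpf and mpfr: arithmetic on them already returns the same type, so the extra constructor call only adds per-iteration overhead.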
{ "domain": "codereview.stackexchange", "id": 16919, "tags": "python, performance, python-2.x, numpy" }
Relativistic Euler-Lagrange equation
Question: I am confused by equation 6: why do we get the Euler-Lagrange equation from equation 8 but not from equation 6? Why do we need to use $\zeta$ as an invariant parameter in equation 8 when we already have the invariant parameter $s$ in equation 6, in order to get the relativistic Euler-Lagrange equation? Reference: Relativistic mechanics, Satya Prakash, page 402 Answer: Assuming that we are talking about a massive point particle, we know that the arclength $$s~=~c\tau\tag{A}$$ is the speed of light $c$ times the proper time $\tau$ (up to an additive constant), and the 4-velocity $$u^{\mu}~:=~\frac{dx^{\mu}}{d\tau}\tag{B}$$ satisfies $$u^{\mu}u_{\mu}~\stackrel{(A)+(B)+(3)}{=}~c^2\tag{7}.$$ [For the overall sign, compare with the Minkowski sign convention (3).] The most important point (which Prakash doesn't seem to explain) is now that in the stationary action/Hamilton's principle (in contrast to e.g. Maupertuis' principle) the integration region $[\zeta_1,\zeta_2]$ for the world-line parameter $\zeta$ is kept fixed and the same for all paths/trajectories. Also note that the 4 position coordinates $x^{\mu}$ are to be varied independently (say, within timelike curves), and that the quantity $$ \dot{x}^{\mu}\dot{x}_{\mu}, \qquad \dot{x}^{\mu}~:=~\frac{dx^{\mu}}{d\zeta}, \tag{C}$$ is not fixed (but say, positive). The main reason that we cannot pick the arclength $s$ (or equivalently the proper time $\tau$) as the world-line parameter $\zeta$ is that the integration region $[s_1,s_2]$ would then have to be fixed, but this contradicts the fact that neighboring paths/trajectories generically have different arclengths. Moreover, Prakash points out that if $\zeta=\tau$ then the 4 position coordinates $x^{\mu}$ cannot be varied independently because of the constraint (7), cf. eqs. (A)+(B)+(3), i.e. there are only 3 independent position variables, so the variational principle (in its current form) also does not work for this reason.
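For concreteness, the standard reparametrization-invariant action behind this construction (written here in a generic textbook form, not copied from Prakash's eq. (8)) and the Euler-Lagrange equations it yields are

```latex
S[x] ~=~ -mc\int_{\zeta_1}^{\zeta_2} d\zeta\, \sqrt{\dot{x}^{\mu}\dot{x}_{\mu}}
\qquad\Longrightarrow\qquad
\frac{d}{d\zeta}\frac{\partial L}{\partial \dot{x}^{\mu}}
~=~ -\,\frac{d}{d\zeta}\!\left(\frac{mc\,\dot{x}_{\mu}}{\sqrt{\dot{x}^{\nu}\dot{x}_{\nu}}}\right)~=~0.
```

Because the integrand is homogeneous of degree one in $\dot{x}^{\mu}$, the action is invariant under reparametrizations $\zeta\to\zeta'(\zeta)$. Only after the variation may one gauge-fix $\zeta=\tau$, upon which $\sqrt{\dot{x}^{\nu}\dot{x}_{\nu}}=c$ by eq. (7) and the equations reduce to $du_{\mu}/d\tau=0$.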
{ "domain": "physics.stackexchange", "id": 90184, "tags": "general-relativity, lagrangian-formalism, variational-principle, geodesics, point-particles" }
Mass convert categorical columns in Pandas (not one-hot encoding)
Question: I have pandas dataframe with tons of categorical columns, which I am planning to use in decision tree with scikit-learn. I need to convert them to numerical values (not one hot vectors). I can do it with LabelEncoder from scikit-learn. The problem is there are too many of them, and I do not want to convert them manually. What would be an easy way to automate this process. Answer: If your categorical columns are currently character/object you can use something like this to do each one: char_cols = df.dtypes.pipe(lambda x: x[x == 'object']).index for c in char_cols: df[c] = pd.factorize(df[c])[0] If you need to be able to get back to the categories I'd create a dictionary to save the encoding; something like: char_cols = df.dtypes.pipe(lambda x: x[x == 'object']).index label_mapping = {} for c in char_cols: df[c], label_mapping[c] = pd.factorize(df[c]) Using Julien's mcve will output: In [3]: print(df) Out[3]: a b c d 0 0 0 0 0.155463 1 1 1 1 0.496427 2 0 0 2 0.168625 3 2 0 1 0.209681 4 0 2 1 0.661857 In [4]: print(label_mapping) Out[4]: {'a': Index(['Var2', 'Var3', 'Var1'], dtype='object'), 'b': Index(['Var2', 'Var1', 'Var3'], dtype='object'), 'c': Index(['Var3', 'Var2', 'Var1'], dtype='object')}
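To reverse the encoding later, the saved Index objects in label_mapping can be used directly as lookup tables. A small sketch with a hypothetical single column, mirroring the answer's label_mapping pattern:

```python
import pandas as pd

# hypothetical example column
df = pd.DataFrame({"color": ["red", "blue", "red", "green"]})

label_mapping = {}
df["color"], label_mapping["color"] = pd.factorize(df["color"])
# df["color"] is now the integer codes [0, 1, 0, 2]

# decode: index the saved Index with the integer codes
decoded = label_mapping["color"][df["color"].to_numpy()]
print(list(decoded))  # ['red', 'blue', 'red', 'green']
```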
{ "domain": "datascience.stackexchange", "id": 3260, "tags": "scikit-learn, pandas, categorical-data, labels" }
Make the robot stop if it finds an obstacle in front
Question: I am using the navigation stack (nav2) without problems, but I would like to understand how I can make the robot stop from its path if it finds a dynamic obstacle in front of it (at a minimum distance of n metres)? Is there a demo or tutorial to take inspiration from? many thanks in advance Answer: You can use the CollisionMonitor package inside the Nav2 Stack. Here is a tutorial: Using Collision Monitor and here are the sources Github
{ "domain": "robotics.stackexchange", "id": 38974, "tags": "navigation, ros2, nav2, avoid-obstacle" }
can I make an optical lattice with a non Ti:Sapphire laser?
Question: I am currently at a small undergraduate institution that does not have access to workhorses like a Ti:Saphire laser, however I am very interested in trying to make an optical lattice with lower power lasers (HeNe, Ar ion) and hopefully do something cool with it (can't really do ultracold stuff either but might be able to use a decent vacuum chamber). Thoughts? Answer: The AC Stark effect, which is describes how atoms feel a potential energy shift from off-resonant light, is for alkali atoms (1): $$U(r)=\frac{3\pi c^2}{2\omega_0^3} \frac{\Gamma}{\Delta}I(r)$$ where $\omega_0$ is the frequency at resonance, $\Delta$ is the frequency detuning of your laser from this resonance, $\Gamma$ is the natural linewidth of the transition, and $I$ is the laser intensity. The usual strategy is to make both $\Delta$ and $I$ large. This lets you have a strong force on the atoms without much heating (which scales as $\frac{1}{\Delta^2}$). The point is, if you want a weaker laser, for the same force you have to get closer to the resonance. This will correspondingly make the heating for a given force stronger, although if your atoms are relatively hot anyway that might not matter to you. So, a few things you need to ask yourself are: How strong an optical lattice would you need, given your expected temperature? Do you have a laser with the right combination of detuning and power to reach that strength? Would you have to be so close to the resonance that heating becomes an insurmountable problem? Good luck!
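Plugging illustrative numbers into the formula above shows the trade-off directly. The sketch below assumes a Rb-like D2 transition ($\lambda_0 \approx 780$ nm, $\Gamma \approx 2\pi\times 6$ MHz) and a hypothetical 1 THz red detuning at a modest focused intensity; all values are examples only, not a recommendation for a specific setup:

```python
import math

# physical constants (SI)
c = 2.998e8
kB = 1.381e-23

# example transition (Rb-like D2 line; illustrative values)
lam0 = 780e-9
omega0 = 2 * math.pi * c / lam0      # resonance angular frequency
Gamma = 2 * math.pi * 6.07e6         # natural linewidth, rad/s

# free choices for the trapping laser (hypothetical)
Delta = -2 * math.pi * 1e12          # 1 THz red detuning (negative => attractive)
I = 1e7                              # W/m^2, e.g. ~1 W focused to ~100 um

U = (3 * math.pi * c**2 / (2 * omega0**3)) * (Gamma / Delta) * I
print("U/kB = %.1f microkelvin" % (U / kB * 1e6))  # roughly -130 uK
```

The sign convention makes the point of the answer: a red-detuned beam ($\Delta<0$) gives a negative, attractive potential, and halving the power at fixed detuning halves the trap depth, so a weak laser must move closer to resonance (smaller $|\Delta|$) to recover it, at the cost of $1/\Delta^2$ heating.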
{ "domain": "physics.stackexchange", "id": 33523, "tags": "condensed-matter, laser, optical-lattices" }
Partitioning a connected polygon into connected pieces of equal area
Question: Armaselu and Daescu (TCS, 2015) present algorithms that, given a convex polygon $P$ and an integer $m$ (which must be a power of $2$), return a partition of $P$ into $m$ convex polygons with the same area and same perimeter. If we only want the area to be equal (and do not care about perimeter), then the problem becomes easy for any $m$: move a "knife" (a straight line) over $P$ from left to right, and make a cut whenever the area covered by the knife is $1/m$. Since $P$ is convex, the resulting pieces are convex too. But what if $P$ is not convex? Then, cutting $P$ by a knife might generate pieces that are not convex and even not connected. What is an algorithm for partitioning a polygon (that is connected but not necessarily convex) into $m$ connected polygons? My guess is that the problem should be much easier for hole-free polygons. But even for this case, I could not find an algorithm. Answer: There must be many ways to do it - here is one way... Compute the medial axis of the polygon using the $L_1$ metric. Any point on the boundary defines a natural segment that goes from this point to a point on the medial axis - lets call the leash of the point. Pick an arbitrary point on the boundary of the polygon, and start moving it counterclockwise. Continue sweeping until the leash sweeps over area $1/m$ of the polygon. The swept area is a connected polygon of the desired area. Now continue in this fashion breaking the polygon into $m$ connected polygons of the same area.
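The easy convex case described in the question (the moving "knife") can be sketched in a few lines: binary-search the x position at which the area left of a vertical knife equals the target fraction, clipping the polygon with a single Sutherland-Hodgman pass. This is only an illustration of the sweep, assuming a convex polygon given counterclockwise:

```python
def shoelace_area(pts):
    """Unsigned area of a polygon given as a list of (x, y) vertices."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def clip_left(pts, x0):
    """Clip a convex polygon to the half-plane x <= x0 (Sutherland-Hodgman, one edge)."""
    out = []
    n = len(pts)
    for i in range(n):
        p, q = pts[i], pts[(i + 1) % n]
        p_in, q_in = p[0] <= x0, q[0] <= x0
        if p_in:
            out.append(p)
        if p_in != q_in:  # edge crosses the knife: add the intersection point
            t = (x0 - p[0]) / (q[0] - p[0])
            out.append((x0, p[1] + t * (q[1] - p[1])))
    return out

def knife_position(pts, fraction, iters=60):
    """x coordinate where the area left of the knife is `fraction` of the total."""
    target = fraction * shoelace_area(pts)
    lo = min(p[0] for p in pts)
    hi = max(p[0] for p in pts)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if shoelace_area(clip_left(pts, mid)) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# unit square cut into thirds: the first knife lands at x = 1/3
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(knife_position(square, 1.0 / 3.0))  # ≈ 0.3333
```

For the non-convex case this fails exactly as the question describes (a vertical cut can disconnect the pieces), which is what the medial-axis leash sweep in the answer avoids.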
{ "domain": "cstheory.stackexchange", "id": 5211, "tags": "cg.comp-geom, partition-problem, polygon" }
Simpler method to find users in child categories
Question: I am using the ancestry gem to organize my post categories tree. I have 8 parent categories, and they all have many child categories. To retrieve an array of child categories, I use: parent_category.children I defined these relations in my category model: has_ancestry has_many :posts has_many :users, through: :posts The last relation allows me to find the users who post in each category. What I am trying to do: on first login, users have to choose their categories of interest. On the next screen, they are suggested users who post in the categories they chose. To find the suggested users, I wrote a method: def recommendations @users = [] (current_user.get_voted Category).each do |category| category.children.each do |cat| @users << cat.users end end @users = @users.flatten.uniq end This works, but it seems overly complicated. Is there any way to do this in a "best practice" way ? Answer: Some notes: ys = [] + xs.each + ys.push + ys.flatten That's an imperative and pretty verbose pattern. A functional approach would use flat_map. @users = @users.flatten.uniq. Don't re-use variables. Different values, different variables. (current_user.get_voted Category). You saved two parens on the function call at the cost of being forced to write them to wrap the expression. It's not worth it. I'd just use parens on calls except in DSL-style statements. Check https://github.com/bbatsov/ruby-style-guide I'd write: def recommendations @users = current_user.get_voted(Category). flat_map { |category| category.children.flat_map(&:users) }. uniq end
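The flat_map pattern is easy to verify in isolation with plain hashes standing in for the ActiveRecord objects (the data here is hypothetical):

```ruby
# Tiny stand-in objects: each "category" has children, each child has users.
categories = [
  { children: [{ users: [:a, :b] }, { users: [:b] }] },
  { children: [{ users: [:c] }] }
]

# Same shape as the refactored method: nested flat_map, then uniq.
users = categories.flat_map { |category|
  category[:children].flat_map { |child| child[:users] }
}.uniq

p users  # [:a, :b, :c]
```

flat_map concatenates the inner arrays as it goes, so the explicit accumulator, push, and final flatten all disappear.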
{ "domain": "codereview.stackexchange", "id": 19045, "tags": "ruby, ruby-on-rails, active-record" }
AES GCM Encryption Code (Secure Enough or not)
Question: I am creating one library for encryption/decryption using AES-256 with GCM Mode(With Random IV/Random Salt) for every request. Code I have written is(reference : AES Encryption/Decryption with key) : public class AESGCMChanges { static String plainText1 = "DEMO text to be encrypted @1234"; static String plainText2 = "999999999999"; public static final int AES_KEY_SIZE = 256; public static final int GCM_IV_LENGTH = 12; public static final int GCM_TAG_LENGTH = 16; public static final int GCM_SALT_LENGTH = 32; private static final String FACTORY_ALGORITHM = "PBKDF2WithHmacSHA256"; private static final String KEY_ALGORITHM = "AES"; private static final int KEYSPEC_ITERATION_COUNT = 65536; private static final int KEYSPEC_LENGTH = 256; private static final String dataKey = "demoKey"; public static void main(String[] args) throws Exception { byte[] cipherText = encrypt(plainText1.getBytes()); String decryptedText = decrypt(cipherText); System.out.println("DeCrypted Text : " + decryptedText); cipherText = encrypt(plainText2.getBytes()); decryptedText = decrypt(cipherText); System.out.println("DeCrypted Text : " + decryptedText); } public static byte[] encrypt(byte[] plaintext) throws Exception { byte[] IV = new byte[GCM_IV_LENGTH]; SecureRandom random = new SecureRandom(); random.nextBytes(IV); // Get Cipher Instance Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); byte[] salt = generateSalt(); // Generate Key SecretKey key = getDefaultSecretKey(dataKey, salt); // Create GCMParameterSpec GCMParameterSpec gcmParameterSpec = new GCMParameterSpec(GCM_TAG_LENGTH * 8, IV); // Initialize Cipher for ENCRYPT_MODE cipher.init(Cipher.ENCRYPT_MODE, key, gcmParameterSpec); // Perform Encryption byte[] cipherText = cipher.doFinal(plaintext); byte[] message = new byte[GCM_SALT_LENGTH + GCM_IV_LENGTH + plaintext.length + GCM_TAG_LENGTH]; System.arraycopy(salt, 0, message, 0, GCM_SALT_LENGTH); System.arraycopy(IV, 0, message, GCM_SALT_LENGTH, GCM_IV_LENGTH); 
System.arraycopy(cipherText, 0, message, GCM_SALT_LENGTH+GCM_IV_LENGTH, cipherText.length); return message; } public static String decrypt(byte[] cipherText) throws Exception { if (cipherText.length < GCM_IV_LENGTH + GCM_TAG_LENGTH + GCM_SALT_LENGTH) throw new IllegalArgumentException(); ByteBuffer buffer = ByteBuffer.wrap(cipherText); // Get Salt from Cipher byte[] salt = new byte[GCM_SALT_LENGTH]; buffer.get(salt, 0, salt.length); // GET IV from cipher byte[] ivBytes1 = new byte[GCM_IV_LENGTH]; buffer.get(ivBytes1, 0, ivBytes1.length); byte[] encryptedTextBytes = new byte[buffer.capacity() - salt.length - ivBytes1.length]; buffer.get(encryptedTextBytes); // Get Cipher Instance Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); // Generate Key SecretKey key = getDefaultSecretKey(dataKey, salt); // Create GCMParameterSpec GCMParameterSpec gcmParameterSpec = new GCMParameterSpec(GCM_TAG_LENGTH * 8, ivBytes1); // Initialize Cipher for DECRYPT_MODE cipher.init(Cipher.DECRYPT_MODE, key, gcmParameterSpec); // Perform Decryption byte[] decryptedText = cipher.doFinal(encryptedTextBytes); return new String(decryptedText); } private static SecretKey getDefaultSecretKey(final String password, final byte[] salt) throws NoSuchAlgorithmException, InvalidKeySpecException{ return getSecretKey(password, salt, FACTORY_ALGORITHM, KEY_ALGORITHM, KEYSPEC_ITERATION_COUNT, KEYSPEC_LENGTH); } private static SecretKey getSecretKey(final String password, final byte[] salt, final String factoryAlgorithm, final String keyAlgorithm, final int iterationCount, final int keyLength) throws NoSuchAlgorithmException, InvalidKeySpecException{ SecretKeyFactory factory = SecretKeyFactory.getInstance(factoryAlgorithm); return new SecretKeySpec(factory.generateSecret(new PBEKeySpec(password.toCharArray(), salt, iterationCount, keyLength)).getEncoded(), keyAlgorithm); //thanks alot for the bug report } private static byte[] generateSalt() { final Random r = new SecureRandom(); byte[] salt = new 
byte[32]; r.nextBytes(salt); return salt; } } Now my question is: Does it have any security flaws? Does it follow best practices? Can I do something to improve it? Are the lengths that I have chosen for the salt, IV and authentication tag OK, or do they need to be changed? Answer: Protocol Well, first the good news. I don't see anything particularly wrong with the algorithms or parameters used. Class design This is a badly designed class with a lot of copy / paste going on (and I've found a clear indication for that at the end, where you copy code directly from this site). I'm not a big fan of static classes, and this one doesn't lend itself particularly well to it. For instance, you don't want to use a separate password for each call, and you certainly don't want to derive a key from the same password multiple times. By line code review public class AESGCMChanges { That's not a good name for a class. I presume this is testing only though. public static final int GCM_IV_LENGTH = 12; Some things are called _SIZE and others _LENGTH. It might be that one is in bits and the other one in bytes, but if you mix them I would indicate it in the name of the constants. Generally crypto-algorithm specifications define IV and tag sizes in bits, so probably best to keep to that (and divide by Byte.SIZE where needed). The constants have the correct sizes assigned to them though, although a salt of 32 bytes / 256 bits may be a bit overkill: 128 bits is plenty. private static final String FACTORY_ALGORITHM = "PBKDF2WithHmacSHA256"; Nope, that name doesn't do it for me. It's not a (generic?) factory you are naming, it is a name - and hash configuration - of the Password Based Key Derivation Algorithm or PBKDF. private static final String dataKey = "demoKey"; For testing purposes it could be an idea to make such a key a constant in a testing class, but here it is really unwanted. The naming is incorrect as well, you'd expect all uppercase for constants. 
Besides that, it's not the "data key" nor is it even a "key", it's a password or passphrase. byte[] cipherText = encrypt(plainText1.getBytes()); Always indicate the character encoding or you may see changes. Generally, I'd default to StandardCharsets.UTF_8 on Java (if you use the string directly then you will get an annoying checked exception to handle, something you can do without). public static byte[] encrypt(byte[] plaintext) throws Exception This is not a good method signature. I'd at least expect a password within it (as long as you keep to the current design anyway). What is good is that the plaintext and ciphertext is specified in bytes. The exception handling is not well worked out; just throwing Exception is as bad as a catch-all. For a good idea of how to handle Java crypto-exceptions take a look here. What about creating a class that has e.g. the number of iterations as configuration option, then initializes using the password and then allows for a set of plaintexts to be encrypted / decrypted? Currently you don't allow any associated data for GCM mode. GCM mode specifies authenticated encryption with associated data or AEAD cipher. Not including associated data is not wrong, but it could be a consideration. byte[] IV = new byte[GCM_IV_LENGTH]; SecureRandom random = new SecureRandom(); random.nextBytes(IV); Normally I would not comment on this as it is not wrong or anything like that. It shows good use of the secure random class. However, I think it is not very symmetric with generateSalt; why not create a method for the IV as well? SecretKey key = getDefaultSecretKey(dataKey, salt); A "default" secret key? What's that? This is the key that's going to encrypt the data, right? Shouldn't this be called the "data key" in that case? We've already established that the other key is really a password. Besides that, I would not call a method that performs a deliberately heavy operation such as password based key derivation a "getter" either. 
It should be named e.g. deriveDataKey or something similar. Missing from the call is the work factor / iteration count. I would certainly make that configurable and possibly store it with the ciphertext. You should use the highest number you can afford for the iteration count, and that number is going to shift to higher values in the future. Or so it should anyway. System.arraycopy(cipherText, 0, message, GCM_SALT_LENGTH+GCM_IV_LENGTH, cipherText.length); The salt and IV are relatively small, so buffering them in a separate array is acceptable. However, Java has specific methods of writing data to an existing array using ByteBuffer. If your data is not that big then I would be OK with not using multiple update calls or streaming the data. I would however not recommend duplicating the ciphertext using System.arraycopy. And it is strange that this has not been implemented using ByteBuffer considering that you have used it for the decrypt method (again, the more symmetry the better). return new SecretKeySpec(factory.generateSecret(new PBEKeySpec(password.toCharArray(), salt, iterationCount, keyLength)).getEncoded(), keyAlgorithm); //thanks alot for the bug report The reason that the password is seen as a character array is that you can destroy its contents after you've handled it, in this case created a key from it. Array contents can be destroyed in Java (with reasonable certainty), String values cannot. So using password.toCharArray() here doesn't let you do this. //thanks alot for the bug report What bug report? What's the point of having a "thank you" here? Why not include a link if you decide to include such a comment? In this case it seems you copied a small method without attribution. This also shows the dangers of having end-of-line comments; they are easily missed. They are even less visible if too much is going on in that single line - as is the case here.
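A minimal sketch of that last point: accept the password as a char[] from the start, derive the key, and wipe both PBEKeySpec's internal copy and the caller's array afterwards. The class name, fixed salt and iteration count here are illustrative only:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.GeneralSecurityException;
import java.util.Arrays;

public class PasswordWipeDemo {

    // Derive raw key bytes from a char[] password, wiping every copy we control.
    static byte[] deriveAndWipe(char[] password, byte[] salt) {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 65536, 256);
        try {
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            return f.generateSecret(spec).getEncoded();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        } finally {
            spec.clearPassword();          // wipes the spec's internal copy
            Arrays.fill(password, '\0');   // wipes the caller's array
        }
    }

    public static void main(String[] args) {
        char[] password = "correct horse".toCharArray();
        byte[] salt = new byte[16];        // fixed zero salt only for the demo
        byte[] key = deriveAndWipe(password, salt);
        System.out.println("key bytes: " + key.length);          // 32
        System.out.println("wiped: " + (password[0] == '\0'));   // true
    }
}
```

A String-based API can never offer this guarantee, because the String's backing array stays in the heap until garbage collection.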
{ "domain": "codereview.stackexchange", "id": 39685, "tags": "java, aes" }
C++ can not build simple test program after update (Ubuntu 22.04/ROS 2 Rolling)
Question: I just updated my Ubuntu 22.04 installation with ROS 2 rolling on it and now when I try to rebuild my ROS packages I get an error message that it can't compile a simple test program: CMake Error at /home/john/.local/lib/python3.10/site-packages/cmake/data/share/cmake-3.27/Modules/CMakeTestCXXCompiler.cmake:60 (message): The C++ compiler "/usr/bin/c++" is not able to compile a simple test program. It fails with the following output: Change Dir: '/home/john/floyd2_ws/build/moveit_msgs/CMakeFiles/CMakeScratch/TryCompile-cr7niU' Run Build Command(s): /home/john/.local/lib/python3.10/site-packages/cmake/data/bin/cmake -E env VERBOSE=1 /usr/bin/gmake -f Makefile cmTC_00a33/fast /usr/bin/gmake -f CMakeFiles/cmTC_00a33.dir/build.make CMakeFiles/cmTC_00a33.dir/build gmake[1]: Entering directory '/home/john/floyd2_ws/build/moveit_msgs/CMakeFiles/CMakeScratch/TryCompile-cr7niU' Building CXX object CMakeFiles/cmTC_00a33.dir/testCXXCompiler.cxx.o /usr/bin/c++ w -o CMakeFiles/cmTC_00a33.dir/testCXXCompiler.cxx.o -c /home/john/floyd2_ws/build/moveit_msgs/CMakeFiles/CMakeScratch/TryCompile-cr7niU/testCXXCompiler.cxx c++: warning: w: linker input file unused because linking not done c++: error: w: linker input file not found: No such file or directory gmake[1]: *** [CMakeFiles/cmTC_00a33.dir/build.make:78: CMakeFiles/cmTC_00a33.dir/testCXXCompiler.cxx.o] Error 1 gmake[1]: *** Deleting file 'CMakeFiles/cmTC_00a33.dir/testCXXCompiler.cxx.o' gmake[1]: Leaving directory '/home/john/floyd2_ws/build/moveit_msgs/CMakeFiles/CMakeScratch/TryCompile-cr7niU' gmake: *** [Makefile:127: cmTC_00a33/fast] Error 2 CMake will not be able to correctly generate this project. Call Stack (most recent call first): CMakeLists.txt:2 (project) I'm at a complete loss as to what to do. I tried removing the build directory but I still get the error. Answer: Make sure your flags are correct. In my case, I used DCMAKE_CXX_FLAGS="w" and not DCMAKE_CXX_FLAGS="-w". 
As Tully pointed out, this could have been resolved by posting the command I used to cause the error. The "w" in the error message kept glaring at me as if it was telling me "hey, dummy, look here."
{ "domain": "robotics.stackexchange", "id": 38625, "tags": "ros, c++, colcon" }
How much has Earth warmed since preindustrial times, after all?
Question: Call it hair splitting, if you want, but I see at least three numbers in the latest IPCC report (e.g. on pages 7-51–7-52). The first one, "the total human forced GSAT change from 1750–2019", is 1.29°C. The second one, "the total (anthropogenic plus natural) emulated GSAT between 1850–1900 and 2010–2019", is 1.14°C. The third one, "the assessed GSAT", is 1.06°C. I suppose the second one is a simulation and the third one is an observation (or so I understood). But why is the first estimate considerably higher than the second one even though the former doesn't include the small natural warming, I take it? Keep in mind, 1750 and 1850 are almost the same baselines and differ by a measly 0.1°C, so that doesn't explain the discrepancy. I'm not a science skeptic, by the way. Answer: Some of the relevant AR6 headline values are: +1.09 °C = Observed GMST and GSAT change between the periods 1850-1900 and 2011-2020 +1.06 °C = Observed GMST and GSAT change between the Chapter 7 attribution assessment periods 1850-1900 and 2010-2019 +1.14 °C = Emulated GSAT change between the Chapter 7 attribution assessment periods 1850-1900 and 2010-2019 +1.29 °C = Emulated GSAT change between end points 1750 and 2019, attributed to anthropogenic forcings The +1.06 °C and +1.09 °C values are the same observed estimates of temperature change, it's just that the latter covers the most recent "current" period that was available at the time the report was being written. Yes, the 0.03 °C difference comes from only one year of difference in the averaging periods, but that just reflects the effect of interannual variability on 10-year means. The +1.06 °C value is only really included in the report for comparison with the work in Chapter 7. The +1.06 °C and +1.14 °C values are two different estimates of the same GSAT metric. They're not independent estimates because the latter is constrained by the GSAT observations, as well as other observations and models. 
When you consider their uncertainty bounds, these estimates are consistent with each other, but, as noted in Chapter 7 p7-52: As the emulated response attempts to constrain to multiple lines of evidence (Supplementary Material 7.SM.2), only one of which is GSAT, they should not necessarily be expected to exactly agree. The GSAT observations tell us what happened, but they don’t tell us why it happened: how much does each forcing agent contribute to the record? In theory we could estimate these contributions by running full ESMs many many times, covering uncertainty in the forcing agents, perhaps also constrained by observations and their uncertainties, in an attempt to see the signals through the noise of internal variability. But in practice, it’s just not computationally possible to do that, which is where the emulators come in. Cross-Chapter Box 7.1 Climate model emulators are simple physically-based models that are used to approximate large-scale climate responses of complex Earth System Models (ESMs). Due to their low computational cost they can populate or span wide uncertainty ranges that ESMs cannot. They need to be calibrated to do this and, once calibrated, they can aid inter-ESM comparisons and act as ESM extrapolation tools to reflect and combine knowledge from ESMs and many other lines of evidence. The emulators don’t (and don’t attempt to) capture the full, noisy temperature record including interannual variability. Think of them as emulating the underlying forced response of temperature without the obscuring effects of noise. The difference between emulated values +1.14 °C and +1.29 °C is a mixture of several things: the 1750 versus 1850-1900 baselines (about 0.1 °C), the non-anthropogenic forcings (about -0.02 °C), and using end points rather than end periods (about 0.2 °C). 
Note that just looking at the end points is normally not recommended in observations and ESMs because a lot of the difference will be internal variability, but that’s not really a problem in these smooth emulations. You just have to be careful what you do with them, i.e., what other values you’re comparing them to. So, the bottom line is that the best estimate of the warming that has been expressed by the system to date is +1.09 [0.95 to 1.20] °C. From the AR6 SPM headline statements (my emphasis): A.1.2 Each of the last four decades has been successively warmer than any decade that preceded it since 1850. Global surface temperature8 in the first two decades of the 21st century (2001-2020) was 0.99 [0.84-1.10] °C higher than 1850-1900. Global surface temperature was 1.09 [0.95 to 1.20] °C higher in 2011–2020 than 1850–1900, with larger increases over land (1.59 [1.34 to 1.83] °C) than over the ocean (0.88 [0.68 to 1.01] °C). The estimated increase in global surface temperature since AR5 is principally due to further warming since 2003–2012 (+0.19 [0.16 to 0.22] °C). Additionally, methodological advances and new datasets contributed approximately 0.1ºC to the updated estimate of warming in AR6.
{ "domain": "earthscience.stackexchange", "id": 2413, "tags": "climate-change" }
Game of Life with NumPy
Question: I started this exercise with NumPy with a goal to find neighbors and return the new matrix. I want to get your feedback. Here's an example from this website. It looks like it's \$O(N^2)\$, and I'm adding a internal loop to look around neighbors. import numpy as np import pprint world = np.array([[0, 0, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 0, 0]]) pprint.pprint(world) size = world.shape[0] def next_state(world): """ :param world: :return: """ size = world.shape[0] neighbors = np.zeros(shape=(size, size), dtype=int) new_world = np.zeros(shape=(size, size), dtype=int) neighbor_count = 0 # Ignore edges: start xrange: in 1 for rows in xrange(1, size - 1): for cols in xrange(1, size - 1): # Check neighbors for i in [-1, 0, 1]: for j in [-1, 0, 1]: # Condition to not count existing cell. if rows + i != rows or cols + j != cols: neighbor_count += world[rows + i][cols + j] neighbors[rows][cols] = neighbor_count if neighbors[rows][cols] == 3 or (world[rows][cols] == 1 and neighbors[rows][cols] == 2): new_world[rows][cols] = 1 else: new_world[rows][cols] = 0 neighbor_count = 0 pprint.pprint(neighbors) return new_world print next_state(world) Answer: That next_state function creates two brand new numpy array. Creating numpy array is slow. Should just update an existing numpy array. Can divide the code into two classes. One for world, the other for the engine. World can have the world array and visualization. Engine can have the neighbor array. Actually the neighbor array can be much smaller than the world if we update the world from left to right. Python loop over each element (the row and col loops) is much slower than numpy's method. Can vectorize counting of neighbor by shifting the world and add to neighbor: . 
neighbor = np.zeros(world.shape, dtype=int) neighbor[1:] += world[:-1] # North neighbor[:-1] += world[1:] # South neighbor[:,1:] += world[:,:-1] # West neighbor[:,:-1] += world[:,1:] # East neighbor[1:,1:] += world[:-1,:-1] # NW neighbor[1:,:-1] += world[:-1,1:] # NE neighbor[:-1,1:] += world[1:,:-1] # SW neighbor[:-1,:-1] += world[1:,1:] # SE Draw animation of world with matplotlib: import numpy as np import matplotlib.pyplot as plt class World(object): def __init__(self, shape, random=True, dtype=np.int8): if random: self.data = np.random.randint(0, 2, size=shape, dtype=dtype) else: self.data = np.zeros(shape, dtype=dtype) self.shape = self.data.shape self.dtype = dtype self._engine = Engine(self) self.step = 0 def animate(self): return Animate(self).animate() def __str__(self): # probably can make a nicer text output here. return self.data.__str__() class Animate(object): def __init__(self, world): self.world = world self.im = None def animate(self): while (True): if self.world.step == 0: plt.ion() self.im = plt.imshow(self.world.data,vmin=0,vmax=2, cmap=plt.cm.gray) else: self.im.set_data(self.world.data) self.world.step += 1 self.world._engine.next_state() plt.pause(0.01) yield self.world class Engine(object): def __init__(self, world, dtype=np.int8): self._world = world self.shape = world.shape self.neighbor = np.zeros(world.shape, dtype=dtype) self._neighbor_id = self._make_neighbor_indices() def _make_neighbor_indices(self): # create a list of 2D indices that represents the neighbors of each # cell such that list[i] and list[7-i] represents the neighbor at # opposite directions. The neighbors are at North, NE, E, SE, S, SW, # W, NW directions. d = [slice(None), slice(1, None), slice(0, -1)] d2 = [ (0, 1), (1, 1), (1, 0), (1, -1) ] out = [None for i in range(8)] for i, idx in enumerate(d2): x, y = idx out[i] = [d[x], d[y]] out[7 - i] = [d[-x], d[-y]] return out def _count_neighbors(self): self.neighbor[:, :] = 0 # reset neighbors # count #neighbors of each cell. 
w = self._world.data n_id = self._neighbor_id n = self.neighbor for i in range(8): n[n_id[i]] += w[n_id[7 - i]] def _update_world(self): w = self._world.data n = self.neighbor # The rules: # cell neighbor cell's next state # --------- -------- ----------------- # 1. live < 2 dead # 2. live 2 or 3 live # 3. live > 3 dead # 4. dead 3 live # Simplified rules: # cell neighbor cell's next state # --------- -------- ----------------- # 1. live 2 live # 2. live/dead 3 live # 3. Otherwise, dead. w &= (n == 2) # alive if it was alive and has 2 neighbors w |= (n == 3) # alive if it has 3 neighbors def next_state(self): self._count_neighbors() self._update_world() def main(): world = World((1000, 1000)) for w in world.animate(): pass if __name__ == '__main__': main()
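As a quick sanity check on the rule simplification in the comments above (`w &= (n == 2)` followed by `w |= (n == 3)`), here is a small dependency-free Python sketch: it verifies that the simplified update agrees with Conway's usual four rules for every (cell state, neighbor count) combination, and steps the asker's 5x5 blinker once.

```python
def classic_rule(alive, n):
    # Conway's rules as usually stated
    if alive:
        return 2 <= n <= 3
    return n == 3

def simplified_rule(alive, n):
    # The answer's update: w &= (n == 2); w |= (n == 3)
    return (alive and n == 2) or n == 3

# The two formulations agree for every cell state and neighbor count
for alive in (False, True):
    for n in range(9):
        assert classic_rule(alive, n) == simplified_rule(alive, n)

def step(world):
    # Plain-Python one generation; edge cells simply see fewer neighbors
    rows, cols = len(world), len(world[0])
    def neighbors(r, c):
        return sum(world[r + i][c + j]
                   for i in (-1, 0, 1) for j in (-1, 0, 1)
                   if (i, j) != (0, 0)
                   and 0 <= r + i < rows and 0 <= c + j < cols)
    return [[int(simplified_rule(bool(world[r][c]), neighbors(r, c)))
             for c in range(cols)] for r in range(rows)]

blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
print(step(blinker)[2])  # [0, 1, 1, 1, 0] -- the vertical blinker flips horizontal
```

The vectorized NumPy version in the answer computes the same `n` for all cells at once via the shifted-slice additions.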
{ "domain": "codereview.stackexchange", "id": 25247, "tags": "python, numpy, game-of-life" }
get_fk works but get_ik doesn't from same service node
Question: Hello all, I am trying to access my rosservice node made by the warehouse arm navigation wizard. The service is running but when I try to execute a simple ik calculation (through the get_ik tutorial) my code crashes here: if(ik_client.call(gpik_req, gpik_res)) with a console error: martin@ubuntu:~# rosrun H20_robot_arm_navigation get_left_ik [ INFO] [1360706997.070270979]: 1 [ INFO] [1360706997.070335100]: 2 [ INFO] [1360706997.070358093]: 3 [ INFO] [1360706997.070378924]: 4 [ERROR] [1360706997.071244382]: Inverse kinematics service call failed I put some ROS_INFO lines for the purposes of debugging: Here is the full code that I used from the tutorials: #include <ros/ros.h> #include <kinematics_msgs/GetKinematicSolverInfo.h> #include <kinematics_msgs/GetPositionIK.h> int main(int argc, char **argv){ ros::init (argc, argv, "get_ik"); ros::NodeHandle rh; ros::service::waitForService("H20_robot_left_arm_kinematics/get_ik_solver_info"); ros::service::waitForService("H20_robot_left_arm_kinematics/get_ik"); ros::ServiceClient query_client = rh.serviceClient<kinematics_msgs::GetKinematicSolverInfo>("H20_robot_left_arm_kinematics/get_ik_solver_info"); ros::ServiceClient ik_client = rh.serviceClient<kinematics_msgs::GetPositionIK>("H20_robot_left_arm_kinematics/get_ik"); // define the service message kinematics_msgs::GetKinematicSolverInfo::Request request; kinematics_msgs::GetKinematicSolverInfo::Response response; if(query_client.call(request,response)) { for(unsigned int i=0; i< response.kinematic_solver_info.joint_names.size(); i++) { ROS_DEBUG("Joint: %d %s",i,response.kinematic_solver_info.joint_names[i].c_str()); } } else { ROS_ERROR("Could not call query service"); ros::shutdown(); exit(1); } // define the service messages kinematics_msgs::GetPositionIK::Request gpik_req; kinematics_msgs::GetPositionIK::Response gpik_res; gpik_req.timeout = ros::Duration(5.0); gpik_req.ik_request.ik_link_name = "left_hand_link"; ROS_INFO(" 1 "); 
gpik_req.ik_request.pose_stamped.header.frame_id = "base_link"; gpik_req.ik_request.pose_stamped.pose.position.x = 0.235; gpik_req.ik_request.pose_stamped.pose.position.y = -0.04445; gpik_req.ik_request.pose_stamped.pose.position.z = -0.609; ROS_INFO(" 2 "); gpik_req.ik_request.pose_stamped.pose.orientation.x = 0.99; gpik_req.ik_request.pose_stamped.pose.orientation.y = -0.0782; gpik_req.ik_request.pose_stamped.pose.orientation.z = 0.0; gpik_req.ik_request.pose_stamped.pose.orientation.w = 1.0; gpik_req.ik_request.ik_seed_state.joint_state.position.resize(response.kinematic_solver_info.joint_names.size()); gpik_req.ik_request.ik_seed_state.joint_state.name = response.kinematic_solver_info.joint_names; ROS_INFO(" 3 "); for(unsigned int i=0; i< response.kinematic_solver_info.joint_names.size(); i++) { gpik_req.ik_request.ik_seed_state.joint_state.position[i] = (response.kinematic_solver_info.limits[i].min_position + response.kinematic_solver_info.limits[i].max_position)/2.0; } ROS_INFO(" 4 "); if(ik_client.call(gpik_req, gpik_res)) { ROS_INFO(" 5 "); if(gpik_res.error_code.val == gpik_res.error_code.SUCCESS) for(unsigned int i=0; i < gpik_res.solution.joint_state.name.size(); i ++) ROS_INFO("Joint: %s %f",gpik_res.solution.joint_state.name[i].c_str(),gpik_res.solution.joint_state.position[i]); else ROS_ERROR("Inverse kinematics failed"); } else ROS_ERROR("Inverse kinematics service call failed"); ros::shutdown(); } Does anyone know what is going wrong? I used the get_fk tutorial. Kind Regards, Martin Originally posted by MartinW on ROS Answers with karma: 464 on 2013-02-12 Post score: 2 Original comments Comment by Carlos on 2013-04-22: I'm dealing with the same issue now Martin and I have no idea about what to do. Did you find a workaround? Regards, Carlos Comment by Carlos on 2013-04-24: It does work that way! Thank you! But it also means that the warehouse viewer is doing something that we're not and I can't figure out what is it! 
Could it be it's not properly instantiating ArmKinematicsConstraintAware with the custom robot parameters? or something about TF? I'll keep looking! Comment by MartinW on 2013-04-25: I believe it is instantiating a planning scene, but I haven't looked into it further yet. I believe you have to set a blank (or saved) planning scene and it's a fairly simple call! If you find out how please let me know, I'm taking a break from coding to write my thesis these days. Best of luck! Answer: Yes Carlos, to get the ik services to work it required that I run the warehouse planner and setup a new motion plan request (or at least click on an old one). Once I did this all the services worked but not until I clicked on the MPRs. Hope this helps, cheers! Originally posted by MartinW with karma: 464 on 2013-04-23 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12859, "tags": "ros" }
Radio vs optical telescope imaging
Question: As I understand, the visible light from an optical telescope is focused on a sensor which correlates light exposure to an electrical voltage, which is then converted to an image. A single antenna radio telescope's signal is focused on some sort of sensor, but this signal represents only a single 'pixel.' To generate a composite image of an object, a time-intensive scan pattern needs to be carried out. Why does this difference exist? Why does a radio telescope see a single pixel, while optical telescopes see an entire image? Are radio telescope sensors only a single pixel, or is a radio telescope itself, regardless of the sensor, only capable of seeing a single pixel? Answer: It's all because of the wavelength of light. In most bands the radio telescope is about the size of the wavelength it's observing - so it can only see a single point at once anyway. It would be like having an optical telescope that was a tiny microscopic pinhole - there wouldn't be much point in having a megapixel camera behind it. Radio telescopes that work in the mm wavelength band now have multi-pixel detectors. To get comparable resolution to an optical telescope with wavelengths that are a million times longer we would need a telescope a million times larger. So we link radio telescopes together - to make a single telescope. Each telescope measures a single part of the incoming signal and we re-create the detailed picture later in a computer.
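The "a million times larger" claim is just the diffraction limit at work. A rough numeric sketch, with illustrative numbers (a 2.4 m optical mirror at 550 nm versus a 100 m dish at the 21 cm hydrogen line; the 1.22 factor is the Rayleigh criterion):

```python
import math

# Rayleigh criterion: smallest resolvable angle ~ 1.22 * wavelength / aperture
def resolution_rad(wavelength_m, diameter_m):
    return 1.22 * wavelength_m / diameter_m

optical = resolution_rad(550e-9, 2.4)   # 2.4 m mirror, green light
radio = resolution_rad(0.21, 100.0)     # 100 m dish, 21 cm hydrogen line

# Despite an aperture ~40x larger, the radio beam is roughly 10^4 times
# coarser, because the wavelength is ~4 * 10^5 times longer.
print(radio / optical)
```

Interferometry closes that gap by making the effective `diameter_m` the distance between linked dishes.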
{ "domain": "physics.stackexchange", "id": 10607, "tags": "astronomy, visible-light, telescopes, radio" }
Copying values based on indicator qubit in Qiskit
Question: Give the following input: $$ A: 1110 \\ B: 0111 \\ \text{indicator}: 0 \text{ or } 1 $$ How do I copy the value of A or B to target qubits if indicator is 0 or 1 respectively? In the image below, the indicator=0 so the value of the copy_of_a_or_b qubits should resemble a_input qubits: How can I do this in Qiskit? Answer: It seems you are looking for a classical operation called multiplexer. It is possible to build such operation with Qiskit's uniformly controlled gates (UCGate). More info in here. However, this way might not be the easier way. After reading Cryoris' answer, here is my version of your example (which piggyback on Cryoris' great answer): Let's set your current circuit: from qiskit import * a_input = QuantumRegister(4, name='a') b_input = QuantumRegister(4, name='b') inputs = QuantumCircuit(a_input, b_input) inputs.initialize('1110', a_input) inputs.initialize('0111', b_input) inputs.draw('mpl') (side note, if your run inputs.decompose().draw('mpl') you will notice that the order of the $X$ gates are going to be "in reverse". That's because Qiskit endianness.) Now, time to create the multiplexer. This scales up the Cryoris' answer: from qiskit.circuit.library import CXGate indicator = QuantumRegister(1, name='indicator') a_or_b = QuantumRegister(4, name='aORb') multiplexer = QuantumCircuit(a_input, b_input, indicator, a_or_b) for i in range(4): # If indicator is 0, copy from A multiplexer.append(CXGate().control(1, ctrl_state='0'), [indicator, a_input[i], a_or_b[i]]) multiplexer.barrier(a_input, b_input) for i in range(4): # If indicator is 1, copy from B multiplexer.append(CXGate().control(1, ctrl_state='1'), [indicator, b_input[i], a_or_b[i]]) multiplexer.draw('mpl') The register a_or_b needs to be measured: output = ClassicalRegister(4, name='output') measure = QuantumCircuit(a_or_b) measure.measure_all(a_or_b) The composed result: circuit = inputs + multiplexer + measure circuit.draw('mpl') Finally, we need to "set" the indicator qubit. 
One circuit for each possibility: indicator0 = QuantumCircuit(indicator) indicator0.initialize('0', indicator) indicator1 = QuantumCircuit(indicator) indicator1.initialize('1', indicator) Time to test how it works: job = execute(indicator0 + circuit, backend=BasicAer.get_backend('qasm_simulator')) job.result().get_counts().keys() dict_keys(['1110']) job = execute(indicator1 + circuit, backend=BasicAer.get_backend('qasm_simulator')) job.result().get_counts().keys() dict_keys(['0111'])
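For computational-basis inputs, the circuit above acts as a classical multiplexer: each anti-controlled CX copies a bit of A onto a |0> target when the indicator is 0, and each controlled CX copies a bit of B when it is 1. A plain-Python sketch of that semantics (no Qiskit required; bit ordering and register names here are illustrative, not Qiskit's little-endian convention):

```python
def multiplex(a_bits, b_bits, indicator):
    # CX onto a zero-initialized target is an XOR, which for basis states
    # behaves like a classical copy of the selected register.
    out = [0] * len(a_bits)
    for i in range(len(a_bits)):
        if indicator == 0:
            out[i] ^= a_bits[i]   # anti-controlled CX (ctrl_state='0')
        else:
            out[i] ^= b_bits[i]   # controlled CX (ctrl_state='1')
    return out

A = [1, 1, 1, 0]
B = [0, 1, 1, 1]
print(multiplex(A, B, 0))  # -> A
print(multiplex(A, B, 1))  # -> B
```

The quantum version does the same thing coherently, which is why the measured counts above come out as the A or B bit string depending on the indicator.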
{ "domain": "quantumcomputing.stackexchange", "id": 2184, "tags": "quantum-gate, programming, qiskit" }
Gravitational Potential of a Sphere vs Gravitational Binding Energy of a Sphere
Question: My question is about two equations regarding uniform spheres that I've run into: $\quad V=\frac{GM}{r},$ and $\quad U = \frac{3}{5}\frac{GM^2}{r}.$ 1) On one hand, $V$ is unknown to me, and is described (in Solved Problems in Geophysics) as "the gravitational potential of a sphere of mass M." I also found it online called "the potential due to a uniform sphere." 2) On the other hand, $U$ is what I've seen before and I know it by the descriptions "sphere gravitational potential energy" or "gravitational binding energy." My understanding is that $U$ is the amount of energy required to build the sphere piece by piece from infinity. I also recognize $GMm/r$ as the gravitational potential between two masses. Can someone explain the difference between these concepts? How can $GM/r$ be the "gravitational potential of a sphere"? Isn't that what $U$ is? Answer: There is a mistake in one of your formulas, $U=\frac{3 G M^2}{5 R}$ with $R$ equal to the sphere radius is the energy required to blow every tiny shred of the sphere apart so that its pieces no longer interact gravitationally, as you said, while $V$ as given above with $r$ equal to distance from the sphere center describes how the sphere interacts with other (celestial) bodies, i.e test particles moving in the sphere's gravitational field feel $V$. To elaborate: the gravitational field around a point mass and around an object that's spherically symmetric is the same outside of the object due to symmetry considerations, which is why $V$ agrees with the formula for the gravitational potential between 2 masses.
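The factor 3/5 in $U$ comes from assembling the sphere shell by shell. A small numeric sketch (unit values for $G$, $M$, $R$) that integrates $dU = G\,m(r)\,dm/r$ and recovers $\frac{3}{5}GM^2/R$:

```python
import math

# Assemble a uniform sphere shell by shell: each new shell dm at radius r
# is brought in from infinity against the gravity of the mass m(r) already
# in place, costing dU = G * m(r) * dm / r.  Unit values are illustrative.
G, M, R = 1.0, 1.0, 1.0
rho = M / (4.0 / 3.0 * math.pi * R**3)

N = 100_000
dr = R / N
U = 0.0
for k in range(1, N + 1):
    r = (k - 0.5) * dr                            # midpoint of the shell
    m_inside = rho * 4.0 / 3.0 * math.pi * r**3   # mass already assembled
    dm = rho * 4.0 * math.pi * r**2 * dr          # mass of this shell
    U += G * m_inside * dm / r

print(U)  # ~0.6 = (3/5) * G * M**2 / R
```

Note the contrast with $V = GM/r$, which involves no self-assembly at all: it describes the field felt by a test particle outside the finished sphere.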
{ "domain": "physics.stackexchange", "id": 6650, "tags": "newtonian-gravity, potential, potential-energy, definition, binding-energy" }
Does the frictional force change if a bowling ball is slipping depending on the relative speed difference?
Question: In the special case where a bowling ball has initial translational velocity but no initial angular velocity, the bowling ball will experience a contact force due to Coulomb friction $\mu mg$. In the special case where the bowling ball has initial translational velocity $v$ and initial angular velocity $\omega$ such that $v$ = $r$ x $\omega$, where $r$ is the radius of the ball, the contact force is $0$ and the ball is rolling without slipping. In the general case where the bowling ball has initial translational velocity $v$ and initial angular velocity $\omega$; such that $\omega \ne 0$ and such that $v \ne r$ x $\omega$, is the contact force still $\mu mg$? Or will it depends on the relative speed between the ball and the surface; eg $v$ - ($r$ x $\omega)$ ? Reference: https://bowlingknowledge.info/images/stories/what_makes_a_bowling_ball_hook.pdf Answer: If you write the equations of motion for this free body diagram you obtain: $$m\,\ddot{x}=F-F_c$$ $$I\,\ddot{\varphi}=F_c\,r$$ you have two situation I) rolling condition this mean that $$x=\,r\,\varphi~,\Rightarrow~\ddot{x}=r\,\ddot{\varphi}$$ thus you obtain for the contact force $F_c$ $$F_c=\frac{F\,I}{I+m\,r^2}$$ II) sliding condition in this case the contact force is a function of : $$F_c=F_c(s~,\mu~,N)$$ where $s=\dot{x}-r\,\dot{\varphi}$ the sliding velocity $\mu$ the friction coefficient $N=m\,g$ the normal force with $$s\mapsto \frac{\dot{x}-\omega\,r}{\omega\,r}$$ $$F_{c,\text{max}}=\mu_{\text{max}}\,N$$ $$F_{c,g}=\mu_{g}\,N$$
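To make the sliding case concrete, here is a numeric sketch with assumed values for $\mu$, $r$, $v_0$ (solid ball, $I=\frac{2}{5}mr^2$, $\omega_0=0$): while the contact point slips, the friction force is the constant kinetic value $\mu m g$ opposing the slip, so $v$ falls and $\omega$ rises until $v=\omega r$, and the ball rolls off at $5/7$ of its launch speed.

```python
import math

# Solid ball launched sliding with speed v0 and no spin.  Kinetic friction
# mu*m*g acts while the contact point slips (v > omega*r): it decelerates
# the center of mass and spins the ball up, then switches off at rolling.
mu, g, r, v0 = 0.2, 9.81, 0.108, 8.0
v, omega, t, dt = v0, 0.0, 0.0, 1e-5

while v > omega * r + 1e-9:
    v -= mu * g * dt
    omega += (5.0 * mu * g) / (2.0 * r) * dt   # alpha = mu*m*g*r / I
    t += dt

print(v / v0)   # ~5/7: independent of mu, g, r
print(t)        # ~2*v0 / (7*mu*g), when rolling without slipping begins
```

The magnitude $\mu m g$ does not depend on how fast the slip is, only its direction does; the slip speed just sets how long friction acts before the rolling condition is reached.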
{ "domain": "physics.stackexchange", "id": 73539, "tags": "newtonian-mechanics, rotational-dynamics, friction" }
Compare counts of string occurrences
Question: Knowing my solution ... Return True if the given string contains an appearance of "xyz" where the xyz is not directly preceeded by a period (.). So "xxyz" counts but "x.xyz" does not. xyz_there('abcxyz') → True xyz_there('abc.xyz') → False xyz_there('xyz.abc') → True def xyz_there(str): return str.count("xyz") > str.count(".xyz") ...evolved from this code: def xyz_there(str): if str.count("xyz") > str.count(".xyz"): return True return False I wonder if there's some rule that describes when you can flatten the IF and write is as a return value like I did? Answer: You can use boolean expressions directly. When you return a boolean expression directly, the code is naturally readable, and shorter too. No need to write out a full if-else condition. I don't think it's a "rule" though. It's a general recommendation. Your code has a bigger problem though: it's inefficient. count will search through the entire string. A better way is to use regular expressions: import re xyz_not_preceded_by_dot = re.compile(r'(?<!\.)xyz') def xyz_there(str): """ >>> xyz_there("abcxyz") True >>> xyz_there("abc.xyz") False >>> xyz_there("xyz.abc") True >>> xyz_there("abc") False """ return xyz_not_preceded_by_dot.search(str) is not None
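Both the count-based one-liner and the regex version pass the codingbat examples and agree on a few extra edge cases; a quick check:

```python
import re

def xyz_there_count(s):
    # the original one-liner: every ".xyz" cancels one "xyz"
    return s.count("xyz") > s.count(".xyz")

_pattern = re.compile(r'(?<!\.)xyz')

def xyz_there_re(s):
    # negative lookbehind: "xyz" not immediately preceded by "."
    return _pattern.search(s) is not None

cases = {
    "abcxyz": True,
    "abc.xyz": False,
    "xyz.abc": True,
    "abc.xyzxyz": True,   # second "xyz" is preceded by "z", not "."
    "abc.xxyz": True,
    "12.xyz": False,
    "": False,
}
for s, expected in cases.items():
    assert xyz_there_count(s) == expected
    assert xyz_there_re(s) == expected
```

The compiled-regex version also has the practical advantage that `search` can return as soon as the first valid match is found.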
{ "domain": "codereview.stackexchange", "id": 14909, "tags": "python, strings" }
What is the possible range of SVR parameters range?
Question: I'm working on a regression problem. While tuning the parameters of SVR I got the following values: c=100, gamma=10 and epsilon=100, for which I got a 95 percent r-square. My question is: what is the theoretical range of these parameter values? Answer: In support vector regression the inverse regularization parameter $C$ can be selected from the interval $[0,\infty)$. Here $C=0$ means that we are regularizing very heavily and $C\to \infty$ means no regularization. The parameter $\varepsilon$ is also from the interval $[0,\infty)$. Here $\varepsilon=0$ forces the regression to penalize every point that is not exactly on the regression line, whereas $\varepsilon > 0$ allows an indifference margin around the regression in which a deviation will not be counted as an error. Additionally, there are slack variables $\xi\geq 0$ and $\hat{\xi}\geq 0$. These are zero if a point is inside the indifference margin. If a data point lies above and outside the indifference margin we will have $\xi>0$, and if a data point lies below and outside the indifference margin we will have $\hat{\xi}>0$. I think you mean the shape parameter of your radial basis function when you talk about $\gamma$. If we have $$\varphi(\boldsymbol{x}_i,\boldsymbol{x}_j|\gamma)=\exp\left[-\gamma||\boldsymbol{x}_i-\boldsymbol{x}_j||^2\right]$$ then $\gamma \in (0,\infty)$ (note the minus sign in front of $\gamma$). For $\gamma \to 0$ we make the kernel flatter, as $\varphi \approx 1$. If $\gamma \to \infty$ we will get a very peaked kernel, which will be 1 when $\boldsymbol{x}_i\approx \boldsymbol{x}_j$ and almost zero everywhere else. You should also have a look at the documentation for the implementation of these parameters. It might happen that these parameters are not implemented as you think (see note on $\gamma$).
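A tiny numeric illustration of the two $\gamma$ limits (pure Python, scalar inputs standing in for $\boldsymbol{x}_i$, $\boldsymbol{x}_j$): small $\gamma$ flattens the kernel toward 1 everywhere, large $\gamma$ makes it a spike that is 1 only when the inputs nearly coincide.

```python
import math

def rbf(x, y, gamma):
    # varphi(x, y | gamma) = exp(-gamma * ||x - y||^2), scalar case
    return math.exp(-gamma * (x - y) ** 2)

x, y = 0.0, 1.0   # two points at distance 1

# gamma -> 0: kernel flattens toward 1
assert abs(rbf(x, y, 1e-6) - 1.0) < 1e-5

# gamma -> infinity: peaked; exactly 1 at x == y, essentially 0 elsewhere
assert rbf(x, x, 1e6) == 1.0
assert rbf(x, y, 1e6) < 1e-12

print(rbf(x, y, 0.1), rbf(x, y, 10.0))  # ~0.905 vs ~4.5e-05
```

This is why a tuned gamma=10 paired with large distances between points can make most kernel entries effectively zero, something worth checking against the feature scaling.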
{ "domain": "datascience.stackexchange", "id": 4859, "tags": "python, regression, svm, hyperparameter" }
Why aren't all actions conformally invariant?
Question: I am very confused about coordinate invariance of actions in classical field theories on arbitrary background spacetime or even with dynamical metric. From this question, we see that if the integrated term, namely the Lagrangian density, of the action is well-defined, i.e. is a 0-tensor (a scalar), then the action is necessarily coordinate invariant. Now, as a conformal transformation is a particular type of coordinate transformation, shouldn't any action be conformally invariant? I know that is certainly false but I am really interested in a rigorous explanation. Answer: Note the words "conformal transformation" can mean slightly different things in different places. But conformal transformations, in the sense in which the term is used in conformal field theories, are not just coordinate transformations. Instead they are a simultaneous coordinate and field transformation such that the metric is left invariant. This often feels rather odd as conformal transformations are usually only talked about as being coordinate transformations, but keeping careful track of what's going on in sources such as the Di Francesco et al. text will reveal that while they perform coordinate transformations, they always employ the flat metric. Polchinski's string book is a little more explicit about this. But if you want the real hard proof that conformal transformations, as they appear in CFTs, are not just coordinate transformations, look no further than the transformation of the stress tensor under such a transformation. This transformation rule can be found in any source on CFTs and is usually one of the very first things written down/derived. You will note that the transformation is very simply not the transformation of a rank 2 tensor under a coordinate transformation. 
Instead there is an additional term (the Schwarzian) in the transformation rule which comes from the fact that we are transforming the metric directly as a field at the same time (it's a simultaneous Weyl transformation). For a couple of specific references, see eq (4.31) in David Tong's string theory notes or eq (2.4.26) in Polchinski's strings book.
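For concreteness, the standard 2D CFT rule in question (as in the references above), written for a holomorphic map $z \to w(z)$: the stress tensor is only quasi-primary and picks up the Schwarzian derivative with the central charge $c$ as coefficient,

```latex
T'(w) = \left(\frac{dw}{dz}\right)^{-2}\left[T(z) - \frac{c}{12}\,\{w;z\}\right],
\qquad
\{w;z\} = \frac{d^3 w/dz^3}{dw/dz} - \frac{3}{2}\left(\frac{d^2 w/dz^2}{dw/dz}\right)^{2}.
```

A genuine rank-2 tensor would have only the first term; the $c$-dependent Schwarzian piece is exactly the extra contribution that signals the transformation is more than a coordinate change.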
{ "domain": "physics.stackexchange", "id": 79984, "tags": "lagrangian-formalism, reference-frames, field-theory, coordinate-systems, conformal-field-theory" }
What is the "male gender rudiment" that women allegedly have?
Question: In The Man without Qualities Robert Musil wrote (my emphasis): "But the nature," he thought, "gives the man nipples and the woman a male gender rudiment. We cannot conclude from this, however, that our ancestors were hermaphrodites." What male gender rudiment does he mean? It must be some "useless" body part in the female body (like a man's nipples). Original German text: "Aber die Natur" dachte er "gibt dem Mann Saugwarzen und der Frau ein männliches Geschlectsrudiment, ohne dass daraus zu schliessen wäre, unsere Vorfahren seien Hermaphroditen gewesen. Answer: First, let's briefly discuss the development of the male and female internal genitalia and sex glands: These include the ejaculatory duct, prostate and seminal vesicles for males, and the uterus, uterine tubes and the proximal part of the vagina for females. They all develop from two tube-like structures called the mesonephric and paramesonephric ducts. These ducts initially exist in both males and females, but normally, only one develops while the other degenerates, depending on the presence of sex-determining factors and hormonal activity. The mesonephric ducts will develop into the male internal genitalia (the organs mentioned above), while the paramesonephric ducts will develop into the female internal genitalia (also mentioned above). Now, to your question: A male rudiment that exists in females is the not-fully-degenerated mesonephric duct, named the Epoophoron and Paroophoron in adults, which later in life may form a Gartner’s cyst.
{ "domain": "biology.stackexchange", "id": 8309, "tags": "human-anatomy" }
Is Centripetal Velocity a Thing?
Question: I'm quite new to physics so this question may sound dumb for many of you. But when I was learning about uniform circular motion, all sources I can find talks about centripetal acceleration, and, when multiplied by mass, the centripetal force. However, when I tried to look up centripetal velocity, I found nothing. According to my understanding, if there's an acceleration and it's not balanced out(which it's not, because it can actually change the direction of the tangential velocity) then it must induce a velocity. If so, why doesn't the body get closer and closer toward the centre of the circle? In projectile motion, we know that X and Y motions are unrelated and do not affect each other, could this also be the case in circular motion? The tangential velocity and the centripetal velocity(if exists) are perpendicular and therefore do not affect each other, but together they affect to direction of the body's motion. It's just that in projectile motion, the Y motion always points to the ground, causing a parabolic motion, whereas in circular motion the centripetal acceleration always changes direction(always points to the centre of circle) which is why a circular path is caused. Am I right? Answer: Centripetal force always points radially inwards. But the object is always moving tangentially. Thus, the force vector is acting at right angles to the velocity vector. In this situation, there is no force acting in the direction of its velocity so its speed does not increase. Instead it accelerates sideways. This causes all the vectors to rotate. Circular motion is a condition of constant radial acceleration. If there were such a thing as centripetal velocity it would accumulate over time and approach infinity. Fortunately, there is no such thing and speed remains constant.
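A quick numeric sketch of the answer's point (assumed unit radius and $\omega = 2$): integrating $\vec{a} = -\omega^2 \vec{r}$ with a semi-implicit Euler step, both the speed and the radius stay constant. The inward acceleration only rotates the velocity vector; it never accumulates into an inward "centripetal velocity."

```python
import math

# Acceleration always points at the center: a = -omega^2 * r.
# Semi-implicit Euler keeps the numerical orbit stable enough to see
# that |v| and |r| are conserved over many revolutions.
omega = 2.0
x, y = 1.0, 0.0          # start on the unit circle
vx, vy = 0.0, omega      # purely tangential velocity, |v| = omega * R
dt, steps = 1e-4, 100_000

for _ in range(steps):
    ax, ay = -omega**2 * x, -omega**2 * y
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

speed = math.hypot(vx, vy)
radius = math.hypot(x, y)
print(speed, radius)   # stay ~2.0 and ~1.0 after ~3 full revolutions
```

Because the force is always perpendicular to the motion, it does no work, which is the physical reason the speed cannot change.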
{ "domain": "physics.stackexchange", "id": 68654, "tags": "newtonian-mechanics, classical-mechanics, kinematics" }
Ltree inverse crossing function using high order functions
Question: I was solving some Haskell exercises as a way to refresh some of the language concepts. I encountered the following exercise: data LTree a = Leaf a | Fork (LTree a) (LTree a) crossing :: LTree a -> [(a,Int)] crossing (Leaf x) = [(x,0)] crossing (Fork e d) = map(\(x,y) -> (x,y+1)) (crossing e ++ crossing d) The above function basically takes an LTree and turns it into a list of pairs (it lists the tree leaves, along with their depth). The exercise is to make a function build :: [(a,Int)] -> LTree a that does the inverse of the crossing function, such that build (crossing a) = a for any tree 'a'. What I have done so far: build :: [(a,Int)] -> LTree a build l = fst (multConvert (map (\(x,n) -> (Leaf x, n)) l)) multConvert :: [(LTree a,Int)] -> (LTree a,Int) multConvert [x] = x multConvert l = multConvert (convert l) convert :: [(LTree a,Int)] -> [(LTree a,Int)] convert [] = [] convert [x] = [x] convert ((a,b):(c,d):xs) | b == d = ((Fork a c),(b-1)):xs | otherwise = (a,b):convert ((c,d):xs) Basically, I turn the list produced by crossing into a list of (LTree a, Int); for example [(3,3),(4,3)..] becomes [(Leaf 3, 3), (Leaf 4, 3)...]. The convert function will take this list of (LTree a, Int) and turn consecutive elements with the same depth into a Fork of both elements. For example [(Leaf 3, 3), (Leaf 4, 3), (Leaf 5, 2)...] becomes [(Fork (Leaf 3) (Leaf 4), 2), (Leaf 5, 2)...]. Note that the newly produced element has depth-1. The multConvert function will apply the above idea to the list, over and over, until the list has only one element. This element will represent the original tree. In other words, my algorithm builds the LTree using a bottom-up approach. I would like to know if there is a more robust way to solve this problem, for example using higher-order functions instead of explicit recursion (possibly using foldr). 
Answer: I think this is a continuation-passing solution (simple parser) for writing build: build :: [(a,Int)] -> LTree a build xs = parseAt 0 xs fst -- The first Int is the depth of the "root" of parseAt parseAt :: Int -> [(a,Int)] -> ( (LTree a, [(a,Int)]) -> r ) -> r parseAt r [] k = error "input too short" parseAt r xs@((a,n):rest) k = case compare r n of GT -> error "depth too small" EQ -> k (Leaf a,rest) LT -> parseAt (succ r) xs (\ (left,rest1) -> parseAt (succ r) rest1 (\ (right,rest2) -> k (Fork left right, rest2)))
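The same bottom-up merging can also be done in a single left-to-right pass with an explicit stack, which is what makes a fold-style formulation possible. A quick executable sketch of that idea in Python, with tuples standing in for the LTree constructors:

```python
# ('leaf', x) and ('fork', left, right) stand in for Leaf and Fork.
def crossing(tree, depth=0):
    if tree[0] == 'leaf':
        return [(tree[1], depth)]
    return crossing(tree[1], depth + 1) + crossing(tree[2], depth + 1)

def build(pairs):
    # Keep a stack of (subtree, depth); whenever the two tops share a
    # depth they are siblings, so fork them at depth - 1, repeatedly.
    stack = []
    for x, n in pairs:
        node, d = ('leaf', x), n
        while stack and stack[-1][1] == d:
            left, _ = stack.pop()
            node, d = ('fork', left, node), d - 1
        stack.append((node, d))
    assert len(stack) == 1 and stack[0][1] == 0, "malformed traversal"
    return stack[0][0]

t = ('fork', ('leaf', 1), ('fork', ('leaf', 2), ('leaf', 3)))
print(build(crossing(t)) == t)  # True: build inverts crossing
```

In Haskell the same pass is a left fold over the pairs carrying the stack as the accumulator, which avoids the repeated whole-list traversals of multConvert.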
{ "domain": "codereview.stackexchange", "id": 3050, "tags": "haskell" }
How 4-vector nature of the value is connected with it's conservation law?
Question: In electrodynamics the Poynting vector and the energy flux of the field don't form a 4-vector. They also aren't conserved independently of matter (the conservation law includes a term connected with the current density). In linearized gravity the mass density and mass current density, as components of the stress-energy tensor, also aren't conserved, and they aren't components of a 4-vector either. Are the facts of non-conservation above connected with the absence of 4-vector nature? Answer: To my knowledge there is no connection between the transformation properties of an object (4-vectorness) and its conservation under time evolution. As you can guess, a conservation law should be related to the dynamics of the system and hence it should be expressed in terms such as a Lagrangian, Hamiltonian, etc. In the most general case conservation of a quantity is governed by Noether's theorem and is related to the symmetries of a system. E.g. if a system is symmetric under shifts in time, energy is conserved. Then you can think of symmetry with respect to rotations, which leads to conservation of angular momentum, which is not a 4-vector either.
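The time-translation case spelled out (a standard classical-mechanics identity): along solutions of the Euler-Lagrange equations,

```latex
\frac{d}{dt}\left(\sum_i \dot{q}_i\,\frac{\partial L}{\partial \dot{q}_i} - L\right)
= -\frac{\partial L}{\partial t},
```

so if the system is symmetric under time shifts ($\partial L/\partial t = 0$), the energy function $h = \sum_i \dot{q}_i\,\partial L/\partial\dot{q}_i - L$ is conserved. Nothing in this argument refers to how $h$ transforms between frames, which is the answer's point: conservation follows from symmetry of the dynamics, not from tensorial character.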
{ "domain": "physics.stackexchange", "id": 11260, "tags": "electromagnetism, general-relativity, conservation-laws" }
P = NP clarification
Question: Let's use Traveling Salesman as the example, unless you think there's a simpler, more understable example. My understanding of P=NP question is that, given the optimal solution of a difficult problem, it's easy to check the answer, but very difficult to find the solution. With the Traveling Salesman, given the shortest route, it's just as hard to determine it's the shortest route, because you have to calculate every route to ensure that solution is optimal. That doesn't make sense. So what am I missing? I imagine lots of other people encounter a similar error in their understanding as they learn about this. Answer: Your version of the TSP is actually NP-hard, exactly for the reasons you state. It is hard to check that it is the correct solution. The version of the TSP that is NP-complete is the decision version of the problem (quoting Wikipedia): The decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour of at most L) belongs to the class of NP-complete problems. In other words, instead of asking "What is the shortest possible route through the TSP graph?", we're asking "Is there a route through the TSP graph that fits within my budget?".
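A small sketch of the asymmetry, on an illustrative 4-city instance (cities on the corners of a unit square): checking a proposed tour against a budget L is a linear-time sum, while finding a tour without further insight means searching factorially many orderings.

```python
import itertools
import math

def tour_length(dist, tour):
    # sum of consecutive edges, wrapping back to the start
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def verify(dist, tour, budget):
    # polynomial-time verifier for the decision version:
    # is 'tour' a permutation of all cities with length <= budget?
    return sorted(tour) == list(range(len(dist))) and tour_length(dist, tour) <= budget

def exists_tour(dist, budget):
    # exponential-time search for a witness (fix city 0 as the start)
    n = len(dist)
    return any(verify(dist, [0] + list(p), budget)
               for p in itertools.permutations(range(1, n)))

# 4 cities at the corners of a unit square: the optimal tour walks the
# perimeter, length 4; any tour using a diagonal is longer.
dist = [[math.hypot((i % 2) - (j % 2), (i // 2) - (j // 2)) for j in range(4)]
        for i in range(4)]
print(exists_tour(dist, 4.0), exists_tour(dist, 3.9))  # True False
```

The `verify` function is the "easy to check" half of NP; the optimization version asked about in the question has no such short certificate, which is why it is NP-hard rather than NP-complete.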
{ "domain": "cs.stackexchange", "id": 15510, "tags": "np-complete, p-vs-np" }
Poisson summation formula and periodic summation of Fourier transforms
Question: One of the forms of the Poisson summation formula is $$ \sum_{n=-\infty}^{\infty} T\cdot x(nT)\ e^{-i 2\pi f T n}\; {=} \; \sum_{k=-\infty}^{\infty} X\left(f - k/T\right),$$ where $x(nT)$ are samples of a continuous function $x(t)$, and $X(f)$ the fourier transform of $x(t)$. The RHS is a periodization of $X(f)$. What I couldn't understand is the following, we can define two functions $x_{1}(t)$ and $x_{2}(t)$ that are equal at the sample points, $x_{1}(nT)=x_{2}(nT)$. We have $$ \sum_{n=-\infty}^{\infty} T\cdot x_{1}(nT)\ e^{-i 2\pi f T n}\; {=} \; \sum_{k=-\infty}^{\infty} X_{1}\left(f - k/T\right),$$ and $$ \sum_{n=-\infty}^{\infty} T\cdot x_{2}(nT)\ e^{-i 2\pi f T n}\; {=} \; \sum_{k=-\infty}^{\infty} X_{2}\left(f - k/T\right),$$ where $X_{1}(f)$ and $X_{2}(f)$ are the Fourier transforms of $x_{1}(t)$ and $x_{2}(t)$ respectively. But since $x_{1}(nT)=x_{2}(nT)$, we then also have $$ \sum_{n=-\infty}^{\infty} T\cdot x_{1}(nT)\ e^{-i 2\pi f T n}=\sum_{n=-\infty}^{\infty} T\cdot x_{2}(nT)\ e^{-i 2\pi f T n},$$ Which means $$\boxed{\sum_{k=-\infty}^{\infty} X_{1}\left(f - k/T\right){=} \; \sum_{k=-\infty}^{\infty} X_{2}\left(f - k/T\right)}$$ But we can choose the functions $x_{1}(t)$ and $x_{2}(t)$, even though they are equal at the sample points, in such a way that their transforms $X_{1}(f)$ and $X_{2}(f)$ are quite different from each other, and, therefore, so will their periodizations. And so why should this last equation be true in general? Answer: $x(t)$ is a continuous-time signal with Fourier transform $X(f)$. There is no restriction whatsoever on the bandwidth of $x(t)$. If the signal is sampled at intervals of $T$ seconds, then the $n$-th sample of $x(t)$ is $x_n = x(nT)$. 
The OP correctly states that $$ \sum_{n=-\infty}^{\infty} T\cdot x(nT)\ e^{-i 2\pi f nT} =\sum_{n=-\infty}^{\infty} T\cdot x_n\ e^{-i 2\pi f nT} = \sum_{k=-\infty}^{\infty} X\left(f - k/T\right).\tag{1}$$ Regardless of whether $X(f)$ is bandlimited or not, the sum on the right is a periodic function of the frequency $f$ with period $\frac 1T$. There is no requirement that $X(f)$ and $X\left(f-\frac 1T\right)$ have non-overlapping support. The OP then wonders: if there is another signal $y(t)$ with different Fourier transform $Y(f)$ but $y(t)$ and $x(t)$ are equal to one another at the sampling instants $nT$, that is, $y(nT) = x(nT) = x_n$ for all $n$ and so $$\sum_{n=-\infty}^{\infty} T\cdot x(nT)\ e^{-i 2\pi f nT} = \sum_{n=-\infty}^{\infty} T\cdot y(nT)\ e^{-i 2\pi f nT} = \sum_{n=-\infty}^{\infty} T\cdot x_n\ e^{-i 2\pi f nT} ,$$ then $(1)$ implies that $$\sum_{k=-\infty}^{\infty} X\left(f - k/T\right)=\sum_{k=-\infty}^{\infty} Y\left(f - k/T\right).\tag{2}$$ Why would such equality hold? Well, there are infinitely many different signals that all have the same set of sample values $\{x_n\}$, not just $x(t)$ and $y(t)$. But among all these signals with the same set of sample values $\{x_n\}$, there is only one signal $x_0(t)$ ---- most well-beloved, perhaps even adored, on dsp.SE ---- that not only has sample values $\{x_n\}$ but also Fourier transform $X_0(f)$ whose support is $\left(-\frac{1}{2T}, \frac{1}{2T}\right)$; that is, $X_0(f) =0$ for $|f| \geq \frac{1}{2T}$. Thus, in the sum $\sum_{k=-\infty}^{\infty} X_0\left(f - k/T\right)$, the term $X_0\left(f - k/T\right)$ occupies the frequency band $\left(\frac{2k-1}{2T}, \frac{2k+1}{2T}\right)$ and doesn't overlap at all with any other term $X_0\left(f - k^\prime/T\right)$: they occupy disjoint frequency bands. 
Put another way, for all real numbers $f$, the sum $$\sum_{k=-\infty}^{\infty} X_0\left(f - k/T\right),\tag{3}$$ equals the sum $$ \sum_{k=-\infty}^{\infty} X\left(f - k/T\right)\tag{4}$$ but is different from the sum in $(4)$ in that for any real number $f$, no more than one of the terms $X_0\left(f - k/T\right)$ can be nonzero. In contrast, in the sum in $(4)$, for any choice of $f$, more than one term is typically nonzero. The special signal $x_0(t)$ is the only one of the myriad signals with sample values $\{x_n\}$ that can be reconstructed (e.g. by ideal low-pass filtering) from those sample values. But what of the other signals with the same sample values? Well, their Fourier transforms are not restricted to have support $\left(-\frac{1}{2T}, \frac{1}{2T}\right)$; the support extends beyond that and might even be the entire frequency axis, and so for any given $f$, more than one term can be nonzero as stated earlier. Thus, all the other signals with sample values $\{x_n\}$ are effectively what is commonly called undersampled; the sampling rate is not high enough, and so their spectra alias into the band $\left(-\frac{1}{2T}, \frac{1}{2T}\right)$ and the result of this aliasing is exactly $X_0(f)$ in the band $\left(-\frac{1}{2T}, \frac{1}{2T}\right)$. In summary, $(2)$ holds because the two different undersampled signals (with identical sample values) have the same spectrum $X_0(f)$ in the band $\left(-\frac{1}{2T}, \frac{1}{2T}\right)$ after aliasing is taken into account.
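A quick numerical illustration of this aliasing argument, as a numpy sketch with an arbitrarily chosen tone and sampling rate: a 12 Hz cosine sampled at 10 Hz produces exactly the same samples as a 2 Hz cosine, so the left-hand side of (1), and hence the periodized spectrum, is the same for both signals, and the DFT of the samples shows the undersampled tone folded to 2 Hz.

```python
import numpy as np

T = 0.1                 # sampling interval: 1/T = 10 Hz, so the band is (-5, 5) Hz
n = np.arange(100)
f0 = 2.0

x1 = np.cos(2 * np.pi * f0 * n * T)            # bandlimited tone at 2 Hz
x2 = np.cos(2 * np.pi * (f0 + 1 / T) * n * T)  # tone at 12 Hz, undersampled

# Equal samples, hence equal left-hand sides of (1) and equal periodizations
print(np.allclose(x1, x2))   # -> True

# The DFT of the 12 Hz samples peaks at the aliased frequency, 2 Hz
X2 = np.fft.rfft(x2)
peak_hz = np.argmax(np.abs(X2)) / (len(n) * T)
print(peak_hz)               # -> 2.0
```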
{ "domain": "dsp.stackexchange", "id": 8332, "tags": "fourier-transform, sampling" }
Why does the value of the escape velocity approach 0?
Question: I am a little confused about escape velocity. Does the escape velocity always approach 0 as we go to an infinitely far distance even if there isn't any friction? If so, why does it approach 0? Shouldn't it be moving with constant speed? Answer: Escape velocity is the minimum velocity a body at a given point must have to escape the gravitational field of some other body. Escape velocity depends on the initial location of the body. If the initial position is very far from this other body you only need to push it a little bit and it will fly away and never return. The escape velocity is very small, almost zero. If the initial velocity of a body is higher than the escape velocity it will fly away, never return, and after a long time, far away from the source of the gravitational field, the velocity of our body will be constant and not zero. But this velocity is not an escape velocity (not sure if there is some special name for it).
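A small Python sketch makes both points concrete, using $v_{\rm esc}(r)=\sqrt{2GM/r}$ for the Earth: the escape velocity falls toward zero as the starting distance grows, while a body launched faster than escape speed keeps a nonzero asymptotic speed $v_\infty=\sqrt{v_0^2-v_{\rm esc}^2}$ by energy conservation (this leftover speed does have a name: the hyperbolic excess velocity).

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # radius of the Earth, m

def v_escape(r):
    """Escape speed at distance r from the centre; tends to 0 as r grows."""
    return math.sqrt(2 * G * M / r)

for r in (R, 100 * R, 1e6 * R):
    print(f"r = {r:.3e} m   v_esc = {v_escape(r):10.2f} m/s")

# A body launched from the surface faster than escape speed keeps a nonzero
# asymptotic speed, by conservation of energy:
v0 = 1.5e4                                 # launch speed, m/s
v_inf = math.sqrt(v0**2 - v_escape(R)**2)  # hyperbolic excess speed
print(f"v_inf = {v_inf:.0f} m/s")          # about 1.0e4 m/s, not zero
```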
{ "domain": "physics.stackexchange", "id": 84757, "tags": "newtonian-mechanics, energy-conservation, projectile, speed, escape-velocity" }
How to calculate the energy freed in the reaction: $^{10}_5Be +\space ^2_1H \rightarrow \space^{11}_5B + \space ^1_1H$?
Question: I have the following reaction: $^{10}_5\mathrm{Be} +\space ^2_1\mathrm{H} \rightarrow \space^{11}_5\mathrm{B} + \space ^1_1\mathrm{H}$ And I know that I have to use the formula: $E = \Delta m\cdot c^2 = \Delta m \cdot \frac{931.5\,\mathrm{MeV}}{u}$. So I just need $\Delta m$ which is equal to: $\Delta m = m_b - m_a$ where $m_b$ represents the mass "before the reaction" and $m_a$ the mass "after the reaction" so we have: $m_b = m(^{10}_5\mathrm{Be}) + m(^2_1\mathrm{H})$ $m_a = m(^{11}_5\mathrm{B}) + m(^1_1\mathrm{H})$ The book which contains this problem contains the following table: https://i.stack.imgur.com/ALKcS.png but from this table, I only know $ m(^1_1\mathrm{H})$ and $m(^2_1\mathrm{H})$ i.e. $m_b = m(^{10}_5\mathrm{Be}) + 2.01410u$ $m_a = m(^{11}_5\mathrm{B}) + 1.00783u$ How do I calculate $m(^{10}_5\mathrm{Be})$ and $m(^{11}_5\mathrm{B})$ ? P.S. I don't know if the tag is correct. The chapter in the book where I found this exercise is called "Basics of nuclear physics". Answer: How do I calculate $m\left(^{10}_4\mathrm{Be}\right)$ and $m\left(^{11}_5 \mathrm{B}\right)$? The masses and various other properties of isotopes are available freely at Wolfram Alpha. They are, $m\left(^{10}_4\mathrm{Be}\right)=10.013533818u$ $m\left(^{11}_5 \mathrm{B}\right)=11.009305406u$ where $u$ denotes unified atomic mass units. Notice you are already given the mass number in the superscript of the isotope. As John Rennie noted, the reaction should probably be with $^{9}_4\mathrm{Be}$, $$^{9}_4\mathrm{Be} + ^{2}_1 \mathrm{H} \to ^{10}_{5}\mathrm{B} + n$$ in which case the mass is $9.012182201u$.
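As a sanity check, here is a short Python version of the computation for the reaction as written in the question, using the isotope masses quoted above for $^{10}\mathrm{Be}$ and $^{11}\mathrm{B}$ and the question's table values for the hydrogen isotopes (so the result inherits their rounding):

```python
U_TO_MEV = 931.5   # MeV released per atomic mass unit of mass defect

masses = {                 # atomic masses in u
    "Be10": 10.013533818,  # from the answer (Wolfram Alpha)
    "B11": 11.009305406,   # from the answer (Wolfram Alpha)
    "H2": 2.01410,         # from the question's table
    "H1": 1.00783,         # from the question's table
}

# mass before minus mass after; a positive value means energy is released
dm = (masses["Be10"] + masses["H2"]) - (masses["B11"] + masses["H1"])
Q = dm * U_TO_MEV
print(f"mass defect = {dm:.6f} u, Q = {Q:.2f} MeV")   # Q is about 9.78 MeV
```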
{ "domain": "physics.stackexchange", "id": 13722, "tags": "homework-and-exercises, nuclear-physics, textbook-erratum" }
What is the age of the Gamburtsev Mountains?
Question: The mechanism for the formation of the Gamburtsev Subglacial Mountains in East Antarctica seems to be a combination of old volcanism and Cretaceous rifting (Ferraccioli et al., 2011). While the modeling results seem to match the available geophysical observations, have there been any dating studies that looked at age through the analysis of rock samples? Answer: The Lamont-Doherty Earth Observatory describes the Gamburtsev Subglacial Mountains (GSM) as being like: opening the door of an Egyptian pyramid and finding an astronaut inside. There is no good reason for an astronaut to be inside an Egyptian pyramid just as there is no good reason for a major mountain range in the middle of East Antarctica. Despite being buried under a considerable amount of ice and observed to have had slow erosion rates since at least the Permian (Cox et al. 2010), accessible detrital material has been determined to have been deposited as fluvio-deltaic deposits in Prydz Bay (Cox et al. 2010; Flierdt et al. 2008). $\ce{U-Pb}$ dating of detrital zircons and $\ce{^{40}Ar/^{39}Ar}$ dating of detrital hornblendes age them as ~530 Ma and ~519 Ma respectively, with no evidence of any younger volcanic contributions (Flierdt et al. 2008). These dates led Cox et al. (2010) to postulate that the GSM formed during the Pan-African Orogeny. Veevers and Saeed (2013) suggest that detrital zircons found in the Ellsworth Land–Antarctic Peninsula–Weddell Sea–Dronning Maud Land sector of ages 1.4-1.7 Ga, 1.9-2.1 Ga and 3-3.35 Ga possibly have their provenance from the GSM. In summary, in the absence of direct measurements of GSM petrology, detrital material from pre-glacial erosion has suggested that the youngest tectonothermal evidence is for a Cambrian age of the GSM. References Cox et al. 2010 Extremely low long-term erosion rates around the Gamburtsev Mountains in interior East Antarctica, Geophysical Research Letters Flierdt et al. 
2008 Evidence against a young volcanic origin of the Gamburtsev Subglacial Mountains, Antarctica Geophysical Research Letters Veevers and Saeed, 2013 Age and composition of Antarctic sub-glacial bedrock reflected by detrital zircons, erratics, and recycled microfossils in the Ellsworth Land–Antarctic Peninsula–Weddell Sea–Dronning Maud Land sector (240°E–0°–015°E) Gondwana Research
{ "domain": "earthscience.stackexchange", "id": 356, "tags": "mountains, geomorphology, antarctica" }
Create a tree from a list of nodes with parent pointers only
Question: I want you to pick my code apart and give me some feedback on how I could make it better or more simple. class TreeNode { private TreeNode left; private TreeNode right; private TreeNode parent; int item; public TreeNode (TreeNode left, TreeNode right, TreeNode parent, int item) { this.left = left; this.right = right; this.parent = parent; this.item = item; } public int getItem() { return item; } public void setItem(int item) { this.item = item; } public TreeNode getLeft() { return left; } public void setLeft(TreeNode left) { this.left = left; } public TreeNode getRight() { return right; } public void setRight(TreeNode right) { this.right = right; } public TreeNode getParent() { return parent; } public void setParent(TreeNode parent) { this.parent = parent; } } public class Ebay { private TreeNode root; public Map<TreeNode, List<TreeNode>> getMap (List<TreeNode> listOfTreeNodes) { final Map<TreeNode, List<TreeNode>> map = new HashMap<TreeNode, List<TreeNode>>(); for (TreeNode treeNode : listOfTreeNodes) { if (map.get(treeNode.getParent()) != null) { map.get(treeNode.getParent()).add(treeNode); } else { List<TreeNode> list = new ArrayList<TreeNode>(); list.add(treeNode); map.put(treeNode.getParent(), list); } } return map; } public void soStuff (List<TreeNode> listOfTreeNode) { final Map<TreeNode, List<TreeNode>> map = getMap (listOfTreeNode); root = map.get(null).get(0); constructTree ( map , root); } public TreeNode constructTree (Map<TreeNode, List<TreeNode>> map, TreeNode node) { if (map.containsKey(node)) { List<TreeNode> list = map.get(node); node.setLeft(constructTree(map, map.get(node).get(0))); if (list.size() == 2) { node.setRight(constructTree(map, map.get(node).get(1))); } } return node; } Answer: Your TreeNode class is mostly all right. But you have to be careful when handling trees with parent pointers, or the tree could go out of sync, and deteriorate to a weird class. A setParent on its own is silly. 
The setLeft and setRight should also make sure to set the parent of the new child node: public void setLeft(TreeNode left) { this.left = left; left.parent = this; } Do not force a user of your class to do this bookkeeping. Your item is weird. It is the only non-private field, and is an int. Use generics, so that your tree can carry any type. A usable tree implementation will implement certain collection interfaces like Iterable. Your Ebay class is very weird. The methods getMap and constructTree are public, but are not very useful outside of that class. Both should also be static. It is not immediately obvious what soStuff does. It seems to be a constructor, or initializer. Why not name it as such? You use a HashMap, but do not provide a custom hashing method for TreeNode. While you inherit one from Object, this isn't terribly advisable. What these three methods do is: You get a list of TreeNodes which only point to their parent. The task is to link these to a full tree. You build a map that maps parent nodes to a list of children. You then assign the first item in each list to parent.left, the 2nd to parent.right, if applicable. Your solution has the advantage that it builds the tree top-down, recursively, and won't touch any nodes that will not be in the final tree. It comes at the cost of building an entirely unnecessary HashMap. The following implementation assumes that no unrelated nodes are in the input list: /** * Expects a TreeNode collection where each node points to its parent. * Builds the full tree and returns the root (which has to have a * null parent). */ public static TreeNode linkToTree(Iterable<TreeNode> nodes) { /* Building the tree. * * At some point, we *will* find the root, thus getting a * handle on the tree. If not, "null" will be returned, which * is a good error case. * * Each node already knows its parent, so we just add the node * as a new child to the parent. We treat a "null" slot as empty. 
* First, we fill the left slot, then overwrite the right slot * without any checks – last one wins out. */ TreeNode root = null; for (TreeNode node : nodes) { final TreeNode parent = node.getParent(); // try to detect the root node if (parent == null) { root = node; } // add this node to the parent's left slot if it's empty else if (parent.getLeft() == null) { parent.setLeft(node); } // … else overwrite right slot else { parent.setRight(node); } } return root; } What did I do differently? This method is public static. It is reusable, and does not modify any instance members. It takes any Iterable, not just a List. Why be unnecessarily specific? It is documented. Note that I have a large comment inside my method explaining why I wrote my code this way, and point out a certain edge case (not finding any root). Inside my if/else, I have smaller comments pointing out what each test means. I did not use recursion. While recursion can be very elegant, it is to be avoided on the JVM. It uses less memory than your solution, is shorter, and more elegant. Due to all of the comments, the implementation shouldn't be harder to understand than your code, even though the underlying algorithm is slightly more difficult. I don't use nondescriptive variable names like map.
{ "domain": "codereview.stackexchange", "id": 16895, "tags": "java, tree" }
Median of numbers stored in array
Question: Problem is to calculate median of numbers. My idea is : Fill array of integer with numbers using Arraymake() function copy the address into a pointer created in main() function pass that array to Median() function and calculate that median Code: #include <stdio.h> #include <stdlib.h> int* Arraymake(size_t);//Function for creating array of numbers. void Median(size_t ,int* );//Function for calculate the median. int main(int argc, char **argv) { size_t count;//Size of the list of numbers. puts("Size of the list:"); scanf("%Iu",&count); int *ArResult=Arraymake(count);//Result of the first function Median(count,ArResult); free(ArResult); return 0; } int* Arraymake(size_t ln) //ln~lenght { int* Array=(int*) malloc(ln*sizeof(int)); int number ,i; //numbers filling the array puts("Please input numbers.\n"); for(i=0;i<(ln);i++) { scanf("%i", &number); Array[i]=number; } return Array; } void Median(size_t ln,int* Array) { float median; if((ln)%2==0) { median= (Array[(ln)/2]+Array[(ln/2 )-1] )/2; printf("%f",median); } else { median= Array[((ln-1)/2)]; printf("%f",median); } } I want to find a way to optimize my variables and make my code more self-documented? How can I improve my code? Answer: 1. 
Arraymake: Change the name to MakeArray as it makes more sense or name it MakeArrayFromInput to make it more descriptive to what it actually does Name the parameter length, as in : int* MakeArray(size_t length) - this is descriptive and you don't have to put the comment explaining what it does Name local variables in camelCase, so array instead of Array Matter of preference but consider declaring array as int *array instead of int* array - you won't forget that the variable itself is a pointer and you have to declare additional pointers in the same statement with *s - int *array, *anotherPointer; int number ,i; //numbers filling the array - comment is redundant and you don't have to declare i here, you can do it in the for - speaking of which: for(i=0;i<(ln);i++) can become for(int i = 0; i < length; i++) - parentheses over ln are redundant, you should use better spacing to make it more readable. 2. Median: The same story with the parameter and naming, also pass the array first, then the length : void Median(int *array, size_t length) You don't have to declare the median variable before the if Parentheses in the if are redundant : if(length % 2 == 0) The calculation of the median in the case of an even number of elements is flawed - you do an integer division as both operands of / are ints - you'd have to make at least one of them a float e.g. like this median= (Array[(ln)/2]+Array[(ln/2 )-1] )/2.0; - see the 2.0 - this is a double literal, a floating-point number, not an int You can shorten the calculation a bit by changing array[(length - 1) / 2]; to array[length / 2]; in the case of an odd number - the integer division will truncate the result into what you need The function should do one thing - you're calculating the median and printing it in the same function. 
Consider making the function Median return the value of the median and print it somewhere else The function then should look more or less like this : float Median(int *array, size_t length) { if(length % 2 == 0) { float median = (array[length / 2] + array[length / 2 - 1]) / 2.0; return median; } else { float median = array[length / 2]; return median; //or just simply return array[length / 2]; } } Something more complex, the function can be greatly simplified by using the ternary operator ?: : float Median(int *array, size_t length) { return length % 2 == 0 ? (array[length / 2] + array[length / 2 - 1]) / 2.0 : array[length / 2]; } However, this is just trivia and you should aim to make your code as readable as possible 3. main I think that all of the comments are redundant here, everything is self-documenting Naming : ArResult should be something descriptive and simple - the most simple way would be to name it just array Spacing in function parameters e.g. scanf("%Iu", &count); instead of scanf("%Iu",&count); 4. General advice Work on the spacing - try to put spaces between operands and operators as in if(length % 2 == 0), in places like for loops : for(int i = 0; i < length; i++) and in function parameters : scanf("%Iu", &count); Put comments before the thing they're describing, not at the end of the line - this way if you want to make it a multi-line comment you won't have problems and it's more readable that way e.g. like this: //Function for creating array of numbers. int* MakeArray(size_t); //Function for calculating the median. int Median(int*, size_t); This is just an example - I think the function names are descriptive enough not to put comments there, they will clutter the view : int* MakeArray(size_t); int Median(int*, size_t); EDIT What I didn't initially remember is that to calculate the median, you have to use a sorted array. 
You should probably do that separately int *array = MakeArray(count); qsort(array, count, sizeof(int), compare); float result = Median(array, count); The qsort is a function from stdlib.h. You have to also declare a function compare to use in it e.g. int compare(const void* a, const void* b) { int int_a = *((int*) a); int int_b = *((int*) b); if(int_a == int_b) return 0; else if (int_a < int_b) return -1; else return 1; } This is taken from https://stackoverflow.com/a/3893967/7931009
{ "domain": "codereview.stackexchange", "id": 27184, "tags": "c, median" }
how to make new class from the test data
Question: I have a list of accounts as a data set and I need to group the accounts that refer to the same user using many features. I'm thinking of using machine learning (but I'm new to this domain), because I know the group of each account in the training data set. ex of training data: account-id Feature1 Feature2 class(Group) 1 T1 P4 Gr1 2 T2 P4 Gr1 3 T3 P2 Gr2 The problem is in the testing of the data, when a new account arrives for a new group not learned before in the training set. ex of testing data: account-id Feature1 Feature2 4 T5 P5 5 T6 P5 6 T3 P2 The groups of the testing data should be as follows: account-id Feature1 Feature2 class(Group) 4 T5 P5 Gr3 5 T6 P5 Gr3 6 T3 P2 Gr2 The accounts 4 and 5 are in a new group (Gr3) which was not learned before from the training data. My question is: how could I group the new data under a new class that was not defined in the learning phase, and which algorithm can I use to solve this issue? Answer: I think you need to read about Online learning; it refers to learning when new data is being constantly added. In these cases you need an algorithm that can update itself as new data arrives (i.e. it doesn't need to recalculate itself from scratch). In other words, incrementally. There are incremental versions for support vector machines (SVMs) and for neural networks. Also, Bayesian networks can be made to work incrementally.
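Since a classifier trained on fixed labels can never emit a group it has not seen, one simple alternative is to treat this as incremental clustering: assign each new account to its nearest existing group, or open a new group when nothing is close enough. The Python sketch below illustrates the idea; the numeric feature vectors, the Euclidean distance, and the threshold tau are all illustrative assumptions, not part of the question.

```python
import numpy as np

class ThresholdGrouper:
    """Incremental grouping sketch: assign each item to the nearest existing
    group's centroid, or open a new group if the nearest centroid is farther
    than tau. (Feature encoding and threshold are illustrative assumptions.)"""

    def __init__(self, tau):
        self.tau = tau
        self.centroids = []   # one running-mean centroid per group
        self.counts = []

    def assign(self, x):
        x = np.asarray(x, dtype=float)
        if self.centroids:
            d = [np.linalg.norm(x - c) for c in self.centroids]
            g = int(np.argmin(d))
            if d[g] <= self.tau:
                # update the running mean of that group's centroid
                self.counts[g] += 1
                self.centroids[g] += (x - self.centroids[g]) / self.counts[g]
                return g
        # nothing close enough: open a brand-new group for this item
        self.centroids.append(x.copy())
        self.counts.append(1)
        return len(self.centroids) - 1

grouper = ThresholdGrouper(tau=1.0)
labels = [grouper.assign(x) for x in [[0, 0], [0.5, 0], [5, 5], [5.2, 5.1], [0.2, 0.3]]]
print(labels)  # -> [0, 0, 1, 1, 0]
```

Picking tau well matters: too small and every account becomes its own group, too large and distinct users get merged.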
{ "domain": "datascience.stackexchange", "id": 517, "tags": "machine-learning, classification, multiclass-classification, multilabel-classification" }
Neglecting second order differentials
Question: I am currently doing some Lorentz invariance exercises considering infinitesimal Lorentz transformations, and have been told to neglect second order differentials. It's not the first time I have come across something like this, and I wonder what the justification behind this is; I mean, is this a justifiable exact procedure or just an approximation? Answer: is this a justifiable exact procedure or just an approximation? Well it all depends on precisely what you are doing. Examples: If you are attempting to determine what the Lie algebra $\mathfrak{so}(3,1)$ of the Lorentz group $\mathrm{SO}(3,1)$ is, then you are doing something exact because the Lie algebra of the Lorentz group is obtained precisely by considering a general one-parameter family $\Lambda(\epsilon)$ of Lorentz transformations that "begin" at the identity, $\Lambda(0) = I$, and then determining the derivative of such a family at $\epsilon = 0$. The result $X$ is an element of the Lie algebra (up to a factor of $i$ depending on your conventions): \begin{align} \Lambda'(0) = X \end{align} How is this related to neglecting second-order terms and higher? Well recall that we can Taylor expand the family $\Lambda(\epsilon)$ as follows: \begin{align} \Lambda(\epsilon) = I + \epsilon\Lambda_1 + O(\epsilon^2) \end{align} And now, if we take the derivative with respect to $\epsilon$ and then set $\epsilon$ to zero, we obtain precisely the coefficient of the term first order in $\epsilon$; $\Lambda'(0) = \Lambda_1$, which is therefore an element of the Lie algebra of the Lorentz group. If you are attempting to determine what happens if you perform a "small" Lorentz transformation, then considering a family $\Lambda(\epsilon)$ as above and only keeping the terms up to first order is an approximation, but it can be viewed as a more precise definition of "small." 
If you are attempting to show that some object, like a term in a Lagrangian, or an action etc., is Lorentz-invariant, then often you only care to show that this invariance holds "infinitesimally," namely to first order. One reason for this is that such infinitesimal invariance is sufficient for the conclusions of Noether's theorem to hold, so in this context you don't so much care if the object being considered is invariant under a full Lorentz transformation. Another important observation (as essentially indicated by Danu in his comment above) is that every element of the proper, orthochronous Lorentz group $\mathrm{SO}^+(3,1)$ can be obtained by performing successive infinitesimal Lorentz transformations. The more precise statement of this fact is that The exponential map $\exp:\mathfrak{so}(3,1)\to\mathrm{SO}^+(3,1)$ is surjective. Concretely, this means that for every proper, orthochronous Lorentz transformation $\Lambda$, there exists an element $X$ of the Lie algebra $\mathfrak{so}(3,1)$ for which \begin{align} \exp X = \Lambda. \end{align} One way of thinking about this result is that it shows that the Lie algebra contains all of the information about the group in this case; every group element can be reconstructed by exponentiating a Lie algebra element. In this sense, you lose no information about proper-orthochronous Lorentz transformations by considering only first-order approximations to them. You may also be interested in the following related physics.SE posts: Rigorous underpinnings of infinitesimals in physics Commutator of Lorentz boost generators : visual interpretation
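The surjectivity claim can be checked numerically; here is a small numpy sketch (with a hand-rolled series exponential and an arbitrarily chosen rapidity) that exponentiates a boost generator from $\mathfrak{so}(3,1)$ and verifies that the result preserves the Minkowski metric:

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via truncated power series (fine for small matrices)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Boost generator along x in the (t, x, y, z) basis: an element of so(3,1)
K_x = np.zeros((4, 4))
K_x[0, 1] = K_x[1, 0] = 1.0

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric
rapidity = 0.7                           # arbitrary boost parameter

L = expm(rapidity * K_x)                 # exp: so(3,1) -> SO+(3,1)

# L is a genuine Lorentz transformation: it preserves the metric ...
print(np.allclose(L.T @ eta @ L, eta))          # -> True
# ... and matches the familiar cosh/sinh closed form for a finite boost
print(np.allclose(L[0, 0], np.cosh(rapidity)))  # -> True
```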
{ "domain": "physics.stackexchange", "id": 10362, "tags": "special-relativity, approximations, differentiation, calculus" }
Real and fictitious forces exerted on a mass hanging from a string
Question: I am trying to figure out if I am right about this (picture at the bottom): Let's say we have mass M hanging from a string above the surface of the earth, creating θ degrees between it and the north pole. The way I see it, all forces applied are: The gravitational force towards the center of the earth which has a component acting as the centripetal force towards where the axis of rotation of the earth passing at that latitude (Mgsinθ). Then we have the suspension force T (please correct me if this is not the way it's called in English) which has a component that together with the centrifugal fictitious force balance out the centrifugal force. The other component of T balances out (Mgcosθ). My problem with this is that if Tcosθ = Mgcosθ ==> T=Mg Which means that F(centrifugal)+Tsinθ>Mgsinθ. So what happens to F(centrifugal)? What am I missing? Answer: For a spherical etc Earth the force diagram for the mass on the end of the string $m$ looks like this with $\vec F_{\rm cp} = \vec F_{\rm g}- \vec T $ where $\vec F_{\rm g}$ is the gravitational attraction on the mass due to the Earth, $\vec F_{\rm cp}$ is the force causing the centripetal acceleration of the mass $mr\omega^2$ and $-\vec T$ is the tension in the string. You can think of $\vec F_{\rm cp}$ and $\vec T$ as the two components of the force $\vec F_{\rm g}$. You will see from this diagram that the string direction is not directly towards the centre of the Earth. If you want to make it a statics problem in the rotating frame of the Earth then all you need to do is add a force $ \vec F _{\rm cf} =-\vec F_{\rm cp} $ to the diagram where $\vec F_{\rm cf}$ is the centrifugal force and then $\vec F_{\rm g} - \vec T + \vec F_{\rm cf}=0$.
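To put a number on how far the string tilts away from the radial direction, here is a rough Python estimate in the rotating frame, treating the Earth as a sphere with uniform $g$ (an idealizing assumption): the tension balances the vector sum of gravity and the centrifugal force $m\omega^2 R\sin\theta$, deflecting the plumb line by at most about $0.1^\circ$ near colatitude $45^\circ$.

```python
import math

omega = 7.292e-5   # Earth's rotation rate, rad/s
R = 6.371e6        # Earth radius, m
g = 9.81           # gravitational acceleration, m/s^2

def plumb_deflection(theta):
    """Angle (rad) between the string and the local radial direction for a
    plumb bob at colatitude theta: in the rotating frame the tension must
    balance gravity plus the centrifugal acceleration omega^2 R sin(theta)."""
    a_cf = omega**2 * R * math.sin(theta)   # centrifugal acceleration
    # components of (gravity + centrifugal) along / perpendicular to the radius
    radial = g - a_cf * math.sin(theta)
    tangential = a_cf * math.cos(theta)
    return math.atan2(tangential, radial)

for deg in (0, 45, 90):
    d = plumb_deflection(math.radians(deg))
    print(f"colatitude {deg:2d} deg: deflection {math.degrees(d):.4f} deg")
# zero at the pole and the equator, maximal (about 0.1 deg) near 45 degrees
```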
{ "domain": "physics.stackexchange", "id": 56133, "tags": "classical-mechanics, centrifugal-force" }
How do I make a debian binary out of my stacks and get it listed as a ros package?
Question: I have a stack that's fairly well-behaved with the standard ros tools and such (rosmake cleanly, etc) and listed on ROS.org so it's known by the ROS indexer. How do I get it from this stage to where someone else can do an 'apt-get install PACKAGE-NAME' at the command-line? Package in question: http://www.ros.org/wiki/rcommander_core Repository: https://code.google.com/p/gt-ros-pkg.rcommander-core/ Originally posted by haidai on ROS Answers with karma: 31 on 2012-05-22 Post score: 1 Answer: Seems that you already have it listed on ros.org as you posted the link. For a release see here: http://www.ros.org/wiki/release given that this is still valid. Originally posted by dornhege with karma: 31395 on 2012-05-22 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by haidai on 2012-05-22: Does listing it on ros.org impact how it'll be released? Comment by tfoote on 2012-05-22: Listing it on ros.org is independent but should usually come first. Comment by tfoote on 2012-05-23: I created a guide to listing packages on ros.org at: http://ros.org/wiki/Repositories/IndexSubmission
{ "domain": "robotics.stackexchange", "id": 9500, "tags": "ros, binary, ubuntu, rospackage, debian" }
Algorithms: Determining Asymptotic Notation from a given execution time
Question: I'm studying for an Algorithms and Data Structure test. There is a type of question that is usually always asked by my professor but I don't know how to answer/solve it. Question 1: An Algorithm with a worst-case execution time of 3n*(log n), being n the number of elements in the input, is: a) An algorithm with an execution time of type Θ(n log n). b) An algorithm with an execution time of type O(n log n). c) An algorithm with an execution time of type O(n^2). d) None of the above. Question 2: An Algorithm with an execution time of 2^100 + (1/3)*n^2 + 100n, being n the number of elements in the input, is: a) An algorithm with an execution time of type Θ(n^2). b) An algorithm with an execution time of type O(2^n). c) An algorithm with an execution time of type Θ(2^n). d) None of the above. I want to know how I can think about these problems in order to solve them. Any help is welcome (even by just giving the solution to these two questions). Thanks. Answer: Assuming that $f(n)$ and $g(n)$ are asymptotically positive (as is usually the case) you can use the following sufficient condition to determine the asymptotic relation of $f(n)$ and $g(n)$. If $\lim_{n \to \infty} \frac{f(n)}{g(n)}$ exists and is $c \in \mathbb{R_0^+} \cup \{+\infty\}$, then: If $c< +\infty$ then $f(n) = O(g(n))$ (and $g(n) = \Omega(f(n))$). In particular: If $0 < c < +\infty$ then $f(n) = \Theta(g(n))$ (and $g(n) = \Theta(f(n))$). If $c=0$ then $f(n) = o(g(n))$ (and $g(n) = \omega(f(n))$). If $c > 0$ then $f(n) = \Omega(g(n))$ (and $g(n) = O(f(n))$). In particular: If $0 < c < +\infty$ then $f(n) = \Theta(g(n))$ (and $g(n) = \Theta(f(n))$). If $c = +\infty$ then $f(n) = \omega(g(n))$ (and $g(n) = o(f(n))$). Moreover, in computing the limit, you can replace $f(n)$ with a function $h(n)$ such that $h(n) \sim f(n)$ (see, e.g., this page on Wikipedia). The same holds for $g(n)$. For polynomials this amounts to taking the monomial of highest degree. 
Moreover, since multiplying $f(n)$ or $g(n)$ by a positive constant only rescales $c$ and does not change the asymptotic relation between $f(n)$ and $g(n)$, you can also drop any multiplicative constants (which will always be positive since $f(n)$ and $g(n)$ are asymptotically positive). For example, instead of comparing $f(n) = 3n^2 + 2n +50$ with $g(n) = 5n^5 + 4n^3 - 2^{10}$, you can compare $n^2$ with $n^5$ instead. This immediately shows that $c$ exists and is $0$, therefore $f(n) = O(g(n))$ and, in particular, $f(n) = o(g(n))$. While the above rules will probably work for the vast majority of the functions you will encounter, they can't always be used. Consider, for example, $f(n) = 2 + \sin(n)$ and $g(n) = 1$. Here $f(n) = \Theta(g(n))$ but $\lim_{n \to \infty} \frac{f(n)}{g(n)}$ does not exist.
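The limit test can also be sanity-checked numerically; here is a small Python sketch for the two quiz questions that simply evaluates $f(n)/g(n)$ at growing $n$ instead of computing the limit symbolically:

```python
import math

def ratio(f, g, ns):
    """Evaluate f(n)/g(n) at growing n; if the limit c exists, the values settle near it."""
    return [f(n) / g(n) for n in ns]

ns = (1e6, 1e15, 1e30)

# Question 1: 3 n log n versus n log n -> c = 3, finite and positive,
# so the running time is Theta(n log n)
print(ratio(lambda n: 3 * n * math.log(n), lambda n: n * math.log(n), ns))
# -> [3.0, 3.0, 3.0]

# Question 2: 2^100 + n^2/3 + 100 n versus n^2 -> c = 1/3, so Theta(n^2)
print(ratio(lambda n: 2**100 + n**2 / 3 + 100 * n, lambda n: n**2, ns))
```

The second list starts enormous (the constant $2^{100}$ dominates for small $n$) but settles near $1/3$, consistent with $\Theta(n^2)$.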
{ "domain": "cs.stackexchange", "id": 15848, "tags": "algorithms, asymptotics, runtime-analysis, big-o-notation" }
Does tapping at the side of a bottle prevent shaken soda from bubbling over?
Question: Anecdotal evidence has it that a bottle of soda that was heavily shaken will not bubble over if tapped at the side multiple times. Yet I wonder: Has the tapping really any effect? Or could it be that the mere time that passes while tapping at the side of the bottle has the carbon dioxide not mixed with the soda anymore? Answer: Well here's an unexpected source: Steve Spangler Science! Apparently after shaking, there are bubbles left on the sides of the can (or bottle), under the liquid. Since the soda is supersaturated with $CO_{2}$ the bubbles become nucleation sites for the conversion of aqueous $CO_{2}$ to gaseous $CO_{2}$. Since there are many bubbles, and since they're under the liquid level, there is a quick increase in the volume of gas inside the can, and the liquid is pushed outwards as the gas tries to escape the liquid. Evidently tapping the can before opening it dislodges the bubbles under the liquid, decreasing the amount of nucleation sites below the liquid level. Therefore when opening the can, there is no sudden overwhelming conversion of aqueous $CO_{2}$ to the gaseous form inside the liquid, and the liquid does not gush out. Of course, if one waits long enough after shaking a can, the bubbles stuck on the side will eventually rise to the surface, and the can becomes safe to open again all by itself. This does require some time though, more than the few seconds that someone would spend flicking their can.
{ "domain": "physics.stackexchange", "id": 7354, "tags": "water, pressure, everyday-life" }
Why does equilibrium favour weak acid or weak base?
Question: For example, in the reaction $\ce{NaOH +HCl->NaCl +H_2O}$, the products are favoured by a factor of approximately $10^{24}$. Is this a general rule? Is it because the products are more stable than the reactants? Answer: Here is a simple intuitive explanation. In your above example, there are actually two reactions occurring, the forward and the reverse reaction. Let us consider the forward reaction. Here, $\ce{NaOH}$ is a strong base and $\ce{HCl}$ is a strong acid. What this means is that HCl has a strong ability to donate a proton and NaOH has a strong ability to accept protons. This means that when NaOH and HCl react together, the reaction will basically go to completion. So you will end up with very little NaOH and HCl and the equilibrium will favour the products. Now let's look at the reverse reaction, which is: $$\ce{Na+ + Cl- + H2O -> NaOH + HCl}$$ Here $\ce{Cl-}$ is the conjugate base of HCl, which is a strong acid. This means that $\ce{Cl-}$ is a really weak base, meaning that it is pretty bad at pulling protons from molecules. Also, water is the conjugate acid of $\ce{OH-}$ (from NaOH, a strong base). This means that water is a really weak acid, meaning that it is also pretty bad at kicking out one of its protons. This means that in this reaction, barely any of the products will react and you will mainly end up with NaCl and water. As you can see, in both reactions the equilibrium will favour the weak acid and base, as they aren't able to react that well together. This effect can also be explained by using equilibrium constants, but I won't go into that.
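To make the last remark about equilibrium constants concrete: for a proton transfer, $K$ can be estimated as $10^{\mathrm{p}K_a(\text{product acid}) - \mathrm{p}K_a(\text{reactant acid})}$. A rough sketch — the pKa values used here are approximate textbook estimates and vary by source:

```python
# Estimate K for HCl + OH- -> Cl- + H2O from approximate pKa values.
# pKa(HCl) ~ -7 and pKa(H2O) ~ 15.7 are rough literature values.
pKa_HCl = -7.0    # reactant acid
pKa_H2O = 15.7    # product acid

K = 10 ** (pKa_H2O - pKa_HCl)  # ~10^22.7: products hugely favoured
```

The enormous value of $K$ is exactly the "equilibrium favours the weak acid and base" statement in quantitative form.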
{ "domain": "chemistry.stackexchange", "id": 4301, "tags": "acid-base, equilibrium" }
Is the fact that 100 kPa equals about 1 atmosphere accidental?
Question: Typical atmospheric pressure near sea level, in ambient conditions, is around 100,000 pascals. But the pascal, as the unit, is not defined through Earth atmospheric pressure. It's defined as one newton per square meter. The newton is $\rm \frac{kg \: m}{s^2}$. So, $\rm[Pa] = [ {kg \over {m \: s^2}} ]$. Nowadays, definitions of units are often fixed to various natural phenomena, but it wasn't quite so when they were being created. The second is an ancient unit, derived from a fraction of the day, 1/86400 of the synodic day on Earth. The meter is derived from the circumference of Earth, $10^{-7}$ of the distance from the north pole to the equator. The kilogram came to be as the mass of a cubic decimeter of water. 100,000 pascals, or 1 bar, though, is about the average atmospheric pressure at sea level. That's an awfully round number - while Earth atmosphere pressure doesn't seem to have anything in common with the rest of the "sources" of the other units. Is this "round" value accidental, or am I missing some hidden relation? Answer: This is a coincidence. There's nothing about the atmosphere that would make it have a nice relationship with the Earth's rotation or diameter, or the fact that water is plentiful on the surface. On the other hand, it's important to note that the coincidence isn't quite as remarkable as you note, because of a version of Benford's law. Given absolutely zero prior knowledge about how much air there is in the atmosphere, our guess about the value of the atmospheric pressure would have to be evenly distributed over many orders of magnitude. This is akin to throwing a dart at a piece of log-scale graph paper: Note that the squares in which the coordinates start with $1.\ldots$ are bigger than the others, so they're rather more likely to catch the dart. A similar (weaker) effect makes the probability of the second digit being 0 about 12% instead of the naive 10%.
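The probabilities the answer invokes are easy to compute: under Benford's law, $P(\text{first digit} = d) = \log_{10}(1 + 1/d)$, and the second-digit distribution follows by summing over first digits. A quick sketch:

```python
import math

# First-digit probabilities: P(d) = log10(1 + 1/d), for d = 1..9.
first_digit = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# P(second digit is 0) = sum over first digits d of log10(1 + 1/(10d)).
p_second_zero = sum(math.log10(1 + 1 / (10 * d)) for d in range(1, 10))

# first_digit[1] ~ 0.301 (not 1/9), p_second_zero ~ 0.12 (not 0.10)
```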
{ "domain": "physics.stackexchange", "id": 42909, "tags": "pressure, si-units, metrology" }
Compilation with a witness / certificate
Question: Sometimes, to install a program, you have a choice between compiling it yourself or downloading a precompiled binary. In theory (using a new programming language and a new compiler designed specifically for this), is it possible to generate a witness / certificate with the compiled binary such that checking the witness / certificate is very easy when compared to compiling things yourself, but ensures that the compilation indeed yields this binary? To avoid trivial answers, I'll specify things a bit more: The source language should contain ML, and the target language should be some realistic assembly language. The compiler should do many optimizations so that speed of the compiled program is comparable to that of OCaml, and in particular the compilation cannot be just concatenating an interpreter and the source. From what I've read, the longest thing in compilation is the optimizations. So my question is more or less: Can optimizations run much faster on a non-deterministic machine (in which case, we can use the witness to know which path to take on the real machine)? Answer: In theory, what you are asking about has been studied, under the name "translation validation". See, e.g., the following classic paper: George C. Necula. "Translation validation for an optimizing compiler." PLDI 2000. In practice, compilation is such a complicated messy process that I doubt there will be any easy-to-verify certificate that proves the entire compilation process was done correctly (including lexing, parsing, front-end, optimization, back-end, assembly, linking, etc.). Academic work typically focuses on just one or two phases of the process (e.g., some of the stages of optimization). One should distinguish two separate issues: can the verifier be simpler than the compiler? can the verifier be faster than the compiler? Simpler is interesting, because it means the verifier can potentially be more trustworthy (e.g., less likely to have bugs).
Classic work on translation validation focuses on that case. Faster is a different question. Some optimizations are deterministic and won't benefit, but there are certainly plenty of optimizations that involve a search over some space of possibilities. An extreme version of that is the concept of "superoptimization", which optimizes a short code sequence by searching over all possible optimized versions and finding the one that is the fastest while remaining semantically equivalent to the original. See, e.g., the following papers: R. Sasnauskas, Y. Chen, P. Collingbourne, J. Ketema, G. Lup, J. Taneja, J. Regehr. "Souper: A Synthesizing Superoptimizer." Sorav Bansal, Alex Aiken. "Automatic Generation of Peephole Superoptimizers." ASPLOS 2006.
{ "domain": "cs.stackexchange", "id": 15714, "tags": "compilers" }
Doubts and some confusion on variance for complex rv
Question: This question is in continuation of the one asked here. Let's say that the measurement noise $w$ or any random variable is circularly Gaussian complex. If the imaginary and real components each have a variance of 0.5, then what would be the total variance? Should I write: $w \sim CN(0,\sigma^2_w)$? If, on the other hand, $w \sim CN(0,2\sigma^2_w)$, then does it mean that the real and imaginary components each have variance 1, so the total variance is 2? Is my understanding correct? Is there a rule of thumb whether the variance should be 0.5 for each component or can it be anything? Answer: If the imaginary and real components each have a variance of 0.5, then what would be the total variance? The variance of the complex RV in that case would be $0.5 + 0.5 = 1$. If, on the other hand, $w \sim CN(0,2\sigma^2_w)$, then does it mean that the real and imaginary components each have variance 1, so the total variance is 2? Is my understanding correct? Each has variance $2\sigma^2_w/2 = \sigma^2_w$. In other words, when you add two independent RVs, the variance of the result is the sum of the variances of each RV. Is there a rule of thumb whether the variance should be 0.5 for each component? When talking about Rayleigh fading coefficients, you want the variance per dimension to be 0.5 because that results in the average transmitted power being equal to the average received power (that is, the channel neither creates nor absorbs energy).
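The additivity of the per-component variances is easy to check numerically. A sketch using only the standard library (sample size and seed are arbitrary):

```python
import random

random.seed(1)
N = 100_000
s = 0.5 ** 0.5  # per-component standard deviation, so variance 0.5 each

re = [random.gauss(0.0, s) for _ in range(N)]
im = [random.gauss(0.0, s) for _ in range(N)]

var_re = sum(x * x for x in re) / N   # ~0.5
var_im = sum(x * x for x in im) / N   # ~0.5
var_total = var_re + var_im           # ~1.0: variances of the parts add up
```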
{ "domain": "dsp.stackexchange", "id": 5467, "tags": "digital-communications, gaussian, self-study, random, complex" }
NCF Recommender- The target encoded within the model input, why doesn't it overfit easily?
Question: In the recommender system NCF, the input is a batch of user-item interactions (one-hot encoded) and the output is a 0-1 score of whether the item has been bought or not: This seems to indicate that the item input vector that the model is trained on already contains y. I understand the purpose of this, but doesn't that lead to a dangerously high chance of overfitting? Answer: The input of the system is a given user and a given item (each respectively one-hot encoded). The output of the system is binary (picked or not). The authors filtered the data to require each user to have picked at least 20 different items. This enables the system to learn the general preferences of a user. The system could overfit. The system could memorize that users would only pick items they have already picked. The system would fail on the chosen evaluation protocol - held-out last interaction. The model performs better than other recommendation systems on this evaluation protocol, which is evidence that this system is not overfitting.
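The "held-out last interaction" protocol mentioned above can be sketched as follows (function and variable names are illustrative, not taken from the paper):

```python
def leave_last_out(interactions):
    """Per user, hold out the last (most recent) item for evaluation
    and keep everything before it for training."""
    train, test = {}, {}
    for user, items in interactions.items():
        train[user] = items[:-1]
        test[user] = items[-1]
    return train, test

# Example: two users with chronologically ordered item ids.
data = {"u1": [3, 7, 9], "u2": [4, 8]}
train, test = leave_last_out(data)
# train == {"u1": [3, 7], "u2": [4]}, test == {"u1": 9, "u2": 8}
```

A model that merely memorized the training interactions would score poorly here, since the held-out item never appears in its training data.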
{ "domain": "datascience.stackexchange", "id": 6995, "tags": "recommender-system" }
Genetic Algorithms: Converge to best solution for one, few or many environments?
Question: Let's say we take someone's garden (A) as the environment. We want a robot to pick up a series of eggs that chickens have laid in the garden, while covering as little ground as possible (those heavy rover tracks damage the turf, you know). So we evolve a solution whereby the genes representing the robot's behaviour (i.e. we are not just evolving some chain of path states/positions) work great in that garden. All freshly laid eggs in (A) are collected in record time. Our algorithm has learned some tricks about navigating gardens; like driving a car for the first time. But there is more. If I now put the same robot in a different garden (B), it may or may not do OK. It depends, I suppose, on how similar (B) is to (A) in certain key factors, i.e., will any of the tricks learned in (A) apply here? The likelihood is it will need new tricks, so I'm going to have to evolve it further in (B) if I want anything like an optimal solution. Yet if I don't, at every iteration, continue to test it in (A) too, then I may be losing fitness for (A) in every new generation, yes? So if I want an algorithm that performs well under a wide variety of circumstances, I am going to have to test it in all of those N circumstances, for all of the M candidates; will I thus need to run MxN tests on each generation? What about circumstances I didn't cover? Answer: Unless you define "wide variety" very precisely and prove that your algorithm is able to abstract over all varieties in the way you want, then yes, you'll have to test it every time. I think for most interesting problems you won't be able to produce such a proof. For a related entertaining read, check Neural Network Follies. The author describes a project where a neural network was trained to spot tanks in images. Instead the network learned how to tell whether it's sunny or not.
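The MxN evaluation scheme described above can be sketched like this (the `evaluate` function is a hypothetical stand-in for whatever fitness test each garden provides):

```python
def mean_fitness(candidate, environments, evaluate):
    # Average a candidate's fitness over all N environments.
    return sum(evaluate(candidate, env) for env in environments) / len(environments)

def score_generation(candidates, environments, evaluate):
    # M candidates x N environments => M*N fitness evaluations per generation.
    return {c: mean_fitness(c, environments, evaluate) for c in candidates}

# Toy example: fitness is just candidate * environment "difficulty".
scores = score_generation([1, 2], [1, 2, 3], lambda c, e: c * e)
# scores == {1: 2.0, 2: 4.0}
```

Averaging is only one aggregation choice; taking the minimum over environments instead would select for worst-case rather than average-case robustness.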
{ "domain": "cs.stackexchange", "id": 7184, "tags": "algorithms, artificial-intelligence, genetic-algorithms" }
Why are there no automated translated subtitles?
Question: This might be a rather naive question. There are programs that can automate subtitles, and also programs that can automate translations. Why, then, is there apparently no program that can automate translated subtitles? That is, the program listens to the audio, and then outputs the subtitles in whatever language desired? I get that it won't be very high-quality subtitles, but it'd still be better than nothing. If there is such a program (I'm unable to find one), a link to that program would answer this question. Answer: OpenAI's Whisper is a widely known automatic speech recognition model that can also translate the input audio to text in the specified target language. There are C++ implementations that can be used reasonably well without a GPU, like whisper.cpp and faster-whisper.
{ "domain": "datascience.stackexchange", "id": 12014, "tags": "machine-translation" }
Fake 3D effect in SFML - follow up
Question: Based on my previous question, I have implemented all the recommendations received. Here is a summary of the improvements: Use meaningful names as much as possible. Removed multiple declarations in one line. Removed the dynamic_casts and applied a new class hierarchy design. Added a Game class to improve code modularity. I would like to know whether or not I have implemented the class hierarchy design for Colors in a suitable way. #include <SFML/Graphics.hpp> #include <vector> #include <memory> #include <cmath> #include <stdexcept> #include <iostream> namespace { float increase(float start, float increment, float max) { auto result = start + increment; while (result >= max) result -= max; while (result < 0) result += max; return result; } float limit(float value, float min, float max) { return std::max(min, std::min(value, max)); } } class Polygon : public sf::Drawable, public sf::Transformable, sf::NonCopyable { public: Polygon() : mVertices(sf::Quads, 4u) {} void setVertices(float x1, float y1, float x2, float y2, float x3, float y3, float x4, float y4, sf::Color color) { mVertices[0].position = sf::Vector2f(x1, y1); mVertices[1].position = sf::Vector2f(x2, y2); mVertices[2].position = sf::Vector2f(x3, y3); mVertices[3].position = sf::Vector2f(x4, y4); mVertices[0].color = mVertices[1].color = mVertices[2].color = mVertices[3].color = color; } private: void draw(sf::RenderTarget& target, sf::RenderStates states) const { states.transform *= getTransform(); target.draw(mVertices, states); } sf::VertexArray mVertices; }; struct Point { struct Screen { float x{}; float y{}; float width{}; }screen{}; sf::Vector3f world{}; sf::Vector3f camera{}; void project(float cameraX, float cameraY, float cameraZ, float cameraDepth, float width, float height, float roadWidth) { camera.x = world.x - cameraX; camera.y = world.y - cameraY; camera.z = world.z - cameraZ; auto scale = cameraDepth / camera.z; screen.x = width / 2.f + scale * camera.x * width / 2.f; screen.y = 
height / 2.f - scale * camera.y * height / 2.f; screen.width = scale * roadWidth * width / 2.f; } }; class Colors { public: Colors() : mSelf(this) {} virtual ~Colors() = default; virtual sf::Color getRoad() { return{}; } virtual sf::Color getGrass() { return{}; } virtual sf::Color getRumble() { return{}; } virtual sf::Color getLane() { return{}; } void setColor(Colors& c) { mSelf = &c; } Colors& getColor() const { return *mSelf; } private: Colors* mSelf; }; class Light : public Colors { sf::Color getRoad() { return{ 100, 100, 100 }; } sf::Color getGrass() { return{ 16, 170, 16 }; } sf::Color getRumble() { return{ 85, 85 , 85 }; } sf::Color getLane() { return{ sf::Color::White }; } }light; class Dark : public Colors { sf::Color getRoad() { return{ 100, 100, 100 }; } sf::Color getGrass() { return{ 0, 154, 0 }; } sf::Color getRumble() { return{ 187,187, 187 }; } sf::Color getLane() { return{ 100, 100, 100 }; } }dark; class Segment : public sf::Drawable, public sf::Transformable, sf::NonCopyable { public: Segment() : mPoint1() , mPoint2() , mRumbleSide1() , mRumbleSide2() , mLanes1() , mLanes2() , mMainRoad() , mLandscape() , mColors() , mIndex() {} Point& getPoint1() { return mPoint1; } Point& getPoint2() { return mPoint2; } void setSegmentColors(Colors& it) { mColors.setColor(it); } void setIndex(std::size_t i) { mIndex = i; } std::size_t getIndex() const { return mIndex; } void setGrounds(float width) { auto lanes = 3u; // Landscape mLandscape.setSize({ width, mPoint1.screen.y - mPoint2.screen.y }); mLandscape.setPosition(0, mPoint2.screen.y); mLandscape.setFillColor(mColors.getColor().getGrass()); // Rumble sides auto rumbleWidth1 = rumbleWidth(mPoint1.screen.width, lanes); auto rumbleWidth2 = rumbleWidth(mPoint2.screen.width, lanes); mRumbleSide1.setVertices(mPoint1.screen.x - mPoint1.screen.width - rumbleWidth1, mPoint1.screen.y, mPoint1.screen.x - mPoint1.screen.width, mPoint1.screen.y, mPoint2.screen.x - mPoint2.screen.width, mPoint2.screen.y, mPoint2.screen.x 
- mPoint2.screen.width - rumbleWidth2, mPoint2.screen.y, mColors.getColor().getRumble()); mRumbleSide2.setVertices(mPoint1.screen.x + mPoint1.screen.width + rumbleWidth1, mPoint1.screen.y, mPoint1.screen.x + mPoint1.screen.width, mPoint1.screen.y, mPoint2.screen.x + mPoint2.screen.width, mPoint2.screen.y, mPoint2.screen.x + mPoint2.screen.width + rumbleWidth2, mPoint2.screen.y, mColors.getColor().getRumble()); // Main Road mMainRoad.setVertices(mPoint1.screen.x - mPoint1.screen.width, mPoint1.screen.y, mPoint1.screen.x + mPoint1.screen.width, mPoint1.screen.y, mPoint2.screen.x + mPoint2.screen.width, mPoint2.screen.y, mPoint2.screen.x - mPoint2.screen.width, mPoint2.screen.y, mColors.getColor().getRoad()); // Lanes auto laneMarkerWidth1 = laneMarkerWidth(mPoint1.screen.width, lanes); auto laneMarkerWidth2 = laneMarkerWidth(mPoint2.screen.width, lanes); auto lanew1 = mPoint1.screen.width * 2 / lanes; auto lanew2 = mPoint2.screen.width * 2 / lanes; auto lanex1 = mPoint1.screen.x - mPoint1.screen.width + lanew1; auto lanex2 = mPoint2.screen.x - mPoint2.screen.width + lanew2; for (auto lane = 1u; lane < lanes; lanex1 += lanew1 + 1, lanex2 += lanew2 + 1, lane++) { if (lane == 1) mLanes1.setVertices(lanex1 - laneMarkerWidth1 / 2, mPoint1.screen.y, lanex1 + laneMarkerWidth1 / 2, mPoint1.screen.y, lanex2 + laneMarkerWidth2 / 2, mPoint2.screen.y, lanex2 - laneMarkerWidth2 / 2, mPoint2.screen.y, mColors.getColor().getLane()); else mLanes2.setVertices(lanex1 - laneMarkerWidth1 / 2, mPoint1.screen.y, lanex1 + laneMarkerWidth1 / 2, mPoint1.screen.y, lanex2 + laneMarkerWidth2 / 2, mPoint2.screen.y, lanex2 - laneMarkerWidth2 / 2, mPoint2.screen.y, mColors.getColor().getLane()); } } private: float rumbleWidth(float projectedRoadWidth, std::size_t lanes) { return projectedRoadWidth / std::max(6u, 2 * lanes); } float laneMarkerWidth(float projectedRoadWidth, std::size_t lanes) { return projectedRoadWidth / std::max(32u, 8 * lanes); } void draw(sf::RenderTarget& target, 
sf::RenderStates states) const { states.transform *= getTransform(); target.draw(mLandscape, states); target.draw(mRumbleSide1, states); target.draw(mRumbleSide2, states); target.draw(mMainRoad, states); target.draw(mLanes1, states); target.draw(mLanes2, states); } private: Point mPoint1; Point mPoint2; Polygon mRumbleSide1; Polygon mRumbleSide2; Polygon mLanes1; Polygon mLanes2; Polygon mMainRoad; sf::RectangleShape mLandscape; Colors mColors; std::size_t mIndex; }; class Game { public: Game() : mWindow(sf::VideoMode(640, 480), "test") , mSegmentLength(200.f) , mPlayerX(0.f) , mCameraDepth(1 / std::atan((100 / 2.f))) , mCameraHeight(1000.f) , mPosition(0.f) , mTrackLength(0.f) , mSegments() , mSpeed(0.f) { addRoad(); } void run() { sf::Clock clock; auto timeSinceLastUpdate = sf::Time::Zero; while (mWindow.isOpen()) { auto elapsedTime = clock.restart(); timeSinceLastUpdate += elapsedTime; while (timeSinceLastUpdate > TimePerFrame) { timeSinceLastUpdate -= TimePerFrame; processEvents(); update(TimePerFrame); } render(); } } private: void processEvents() { sf::Event event; while (mWindow.pollEvent(event)) { if (event.type == sf::Event::Closed) mWindow.close(); } } void update(sf::Time timePerFrame) { auto dt = timePerFrame.asSeconds(); auto maxSpeed = mSegmentLength / dt; auto accel = maxSpeed / 5.f; auto breaking = -maxSpeed; auto decel = -accel; auto offRoadDecel = -maxSpeed / 2.f; auto offRoadLimit = maxSpeed / 4.f; const auto& playerSegment = *mSegments[static_cast<std::size_t>(std::floor(mPosition / mSegmentLength)) % mSegments.size()]; auto speedPercent = mSpeed / maxSpeed; auto dx = dt * speedPercent; if (sf::Keyboard::isKeyPressed(sf::Keyboard::Left)) mPlayerX -= dx; if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right)) mPlayerX += dx; if (sf::Keyboard::isKeyPressed(sf::Keyboard::Up)) mSpeed += accel * dt; else mSpeed += decel * dt; if (sf::Keyboard::isKeyPressed(sf::Keyboard::Down)) mSpeed += breaking * dt; if (((mPlayerX < -1.f) || (mPlayerX > 1.f)) && 
(mSpeed > offRoadLimit)) mSpeed += offRoadDecel * dt; mPlayerX = limit(mPlayerX, -2.f, 2.f); mSpeed = limit(mSpeed, 0, maxSpeed); mPosition = increase(mPosition, dt * mSpeed, mTrackLength); } void render() { auto width = 640.f; auto height = 480.f; auto roadWidth = 2000.f; auto maxy = height; const auto& baseSegment = *mSegments[static_cast<std::size_t>(std::floor(mPosition / mSegmentLength)) % mSegments.size()]; mWindow.clear(); for (auto n = 0u; n < mSegments.size(); n++) { auto& segment = *mSegments[(baseSegment.getIndex() + n) % mSegments.size()]; bool looped = segment.getIndex() < baseSegment.getIndex(); auto camX = mPlayerX * roadWidth; auto camY = mCameraHeight; auto camZ = mPosition - (looped ? mTrackLength : 0.f); auto& point1 = segment.getPoint1(); auto& point2 = segment.getPoint2(); point1.project(camX, camY, camZ, mCameraDepth, width, height, roadWidth); point2.project(camX, camY, camZ, mCameraDepth, width, height, roadWidth); if ((point1.camera.z <= mCameraDepth) || (point2.screen.y >= maxy)) continue; segment.setGrounds(width); mWindow.draw(segment); maxy = point2.screen.y; } mWindow.display(); } void addRoad() { auto rumbleLength = 3u; mSegments.reserve(500); for (auto i = 0u; i < 500; ++i) { auto segment = std::make_unique<Segment>(); segment->setIndex(i); segment->getPoint1().world.z = i * mSegmentLength; segment->getPoint2().world.z = (i + 1) * mSegmentLength; if (static_cast<std::size_t>(std::floor(i / rumbleLength)) % 2) segment->setSegmentColors(light); else segment->setSegmentColors(dark); mSegments.push_back(std::move(segment)); } mTrackLength = mSegments.size() * mSegmentLength; } private: sf::RenderWindow mWindow; float mSegmentLength; float mPlayerX; float mCameraDepth; float mCameraHeight; float mPosition; float mTrackLength; std::vector<std::unique_ptr<Segment>> mSegments; const static sf::Time TimePerFrame; float mSpeed; }; const sf::Time Game::TimePerFrame = sf::seconds(1 / 60.f); int main() { try { Game game; game.run(); } catch 
(std::exception& e) { std::cout << "\nEXCEPTION: " << e.what() << std::endl; } } Answer: Like I commented, it looks pretty clean overall, so I'll only mention a couple things this time. You might consider waiting a little bit before selecting this, in case other reviewers show up. mSelf in Colors is strange. I didn't quite understand its purpose and what you are trying to achieve with it that can't be accomplished with the this pointer. I think you might have done that because of the getColor/setColor methods? That's not a very conventional approach. The usual would be for Segment to have a Colors pointer, then just swap that pointer when you need to setSegmentColors. void setSegmentColors(const Colors * c) { mColors = c; } The methods of Colors could probably be pure virtual (= 0;). I don't suppose you'd need to declare a Colors instance by value, once the mSelf oddity is fixed. When you implement a virtual base class, be sure to annotate the methods in child classes with override. Also take a look into the final specifier. You can add it to classes that are not meant to be inherited from and to leaf nodes in a class hierarchy. All the virtual get* methods in Colors should be const. They are not mutating any member data. The globals light and dark, even though not carrying any member state as of now, should also probably be constants. I'm very picky about const because mutable shared state is a major source of bugs. Knowing a thing is const greatly reduces the number of places you have to look into if you find an inconsistent state somewhere. No need to call the default constructors of all members of Segment in its constructor. This is C++11, so if you'd like to emphasise that they are default initialized, you can just use the {} syntax on declaration, e.g.: Point mPoint1{};. If you do that, you can get rid of Segment's constructor.
rumbleWidth() and laneMarkerWidth() could either be joined with the other helpers inside the unnamed namespace or could be declared as static, since those are pure functions that are not relying on any member state of Segment.
{ "domain": "codereview.stackexchange", "id": 17023, "tags": "c++, c++11, sfml" }
is centrifugal release equal to explosive release
Question: Scenario # 1 I put gunpowder and then a ball bearing in an old musket and fire the bullet. Scenario # 2 Let's imagine I had a motor with a disk on it and there was the same ball bearing stuck on the edge of the disk somewhere. If I spin the motor up fast and then release the ball bearing somehow, it will fly off along the tangent line at the release point. Let's imagine, for the purposes of comparison, I spin up the motor with enough rpm such that the ball bearing’s velocity equaled the velocity of the musket shot in scenario 1. Newton's third law of motion states that "for every action there is an opposite and equal reaction". With that in mind, it shouldn’t really matter how I expelled the hunk of lead: would both systems react in the opposite direction the same at the moment of release? (I’m aware that the spinning system would have a torque while it was spinning, but let's ignore that) The reason I’m asking is that, intuitively, the ball bearing is desperate to escape the faster the disk is being spun. It would seem weird that it would suddenly kick the system backwards when it was already straining to leave the system the whole time. Also, there seems to be an intuitive difference because the expelling force in the spinning system is 90 degrees to the applied force. Answer: If you were actually to do Scenario 2, the motor would indeed be flung backwards. This might seem a bit odd at first glance; if the ball bearing is moving already at the time of release, why would the motor be ejected backwards when it releases the bearing? The way to resolve this confusion is to note that the motor is actually shaking around when it's spinning with the bearing attached. The easiest way to model it is via a simple picture of a dumbbell, with one weight being the motor and the other weight being the bearing: Both the motor and the bearing rotate about a pivot point.
When the bearing is released (ie, the connecting line in the picture above is severed), it flies in one direction, and the motor flies in the opposite direction, in exact analogy to the musket scenario. I avoided math in this answer because I figured an intuitive visual explanation was sufficient; other posters will probably give more details, if you want them.
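Momentum conservation makes the equivalence quantitative: the recoil speed depends only on the masses and the release speed, not on the expulsion mechanism. A sketch with made-up numbers:

```python
# Hypothetical figures: a 10 g ball leaving at 300 m/s, launcher mass 2 kg.
m_ball, v_ball = 0.010, 300.0   # kg, m/s
m_launcher = 2.0                # kg (musket or motor+disk, same either way)

# Total momentum is zero before firing/release, so it stays zero after:
#   m_launcher * v_recoil = m_ball * v_ball
v_recoil = m_ball * v_ball / m_launcher
# v_recoil = 1.5 m/s backwards, in BOTH scenarios
```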
{ "domain": "physics.stackexchange", "id": 12139, "tags": "newtonian-mechanics, centrifugal-force" }
What are the factors controlling the fate of post-adhesed volatile molecules of the olfactory epithelium?
Question: What happens to volatile molecules that reach the main olfactory epithelium in the nasal cavity after they bind and the neural stimulus fades? To what extent do such factors as receptor kinetics and diffusion (or any others) direct where these molecules end up? Answer: Simply, the odorant particles/molecules never actually enter the cell through endocytosis. Instead, these olfactory receptors, belonging to the G-protein-coupled receptor family, are transmembrane receptors whose structure passes through the cell membrane (several times actually) to functional complexes on the other side, and they use simple and reversible ligand binding as their primary activation mechanism. Odorant particles are dissolved in the mucus lining the nasal cavity, and in this form come into contact with the receptors in the cell membrane of the epithelial cells. These bind as ligand complexes to receptor sites, triggering a change in the receptor's structure which in turn causes the coupled G-protein to release its bound guanosine diphosphate, and instead bind guanosine triphosphate from the interior cell environment. When that happens, the G-protein is "activated"; it decouples from the receptor, splits in half into Ga and Gbg subunits, and those two halves bind to effector enzymes which trigger the cell signalling. As a result of the protein decoupling, the charge balance of the receptor changes, releasing the complexed ligand from its binding to the receptor, back into the extracellular environment. To reverse all this, an enzyme called RGS (Regulator of G-protein Signalling) binds to the Ga subunit, triggering hydrolysis of the GTP molecule it still has bound into GDP, causing the protein's subunits to recombine and recouple to the receptor. The released ligand, meanwhile, is washed away again by the mucus flow from the Bowman's glands.
It may complex with other receptors along the way, but eventually it makes its way past the nasal membrane to the pharynx at the back of the throat, where the mucus is swallowed and the odorant broken down in the stomach. This is the final fate of virtually all odorant agonists. A select few, however, bind more strongly, and are not released by the structure changes of the receptor. When this happens, the receptor is stuck in the "on" position. One of two things then happens: either the receptor continues to activate, causing neural impulses which your brain eventually ignores (odor fatigue), or mechanisms inside the cell notice the faulty receptor, bring it into the cell in a process similar to endocytosis, and attempt to break down its components and fix the damage to reuse it. More often than not, the bound chemical ends up poisoning the cell by binding to something more vital than an ordinary G-protein receptor, eventually triggering programmed cell death. That's fine; the basal cells behind the surface of the olfactory epithelium divide fast enough to replenish every cell in the nasal membrane about once every two days.
{ "domain": "chemistry.stackexchange", "id": 436, "tags": "biochemistry, kinetics" }
Change the background color of the console with the help of enum
Question: I've recently learned the concept of enum and method calling with the enum. From what I learned I've done this simple snippet which changes the background color of the console with the help of enum and method. public static void SetColor(RanngDe R){ switch (R){ case RanngDe.Blue: Console.BackgroundColor = ConsoleColor.Blue; break; case RanngDe.Green: Console.BackgroundColor = ConsoleColor.Green; break; default: Console.BackgroundColor = ConsoleColor.Yellow; break; } } // Enum Declaration public enum RanngDe{ Blue=0, Green=1, Yellow=3 } I'm calling the SetColor method from the main method in a menu-driven switch ...case block. I know I've used switch ...case in the void SetColor(RanngDe R) method, which is my major concern, because this makes my code redundant as I'm calling it in the menu-driven program. Is this approach acceptable as good practice? If not, then how should I make this better? Answer: As has been mentioned, using a Dictionary<RanngDe,Color> will make your code much simpler: public enum RanngDe { Blue = 0, Green = 1, Yellow = 3 } static Dictionary<RanngDe, ConsoleColor> colors = new Dictionary<RanngDe, ConsoleColor>() { {RanngDe.Blue,ConsoleColor.Blue }, {RanngDe.Green,ConsoleColor.Green }, {RanngDe.Yellow,ConsoleColor.Yellow } }; public static void SetColor(RanngDe R) { Console.BackgroundColor = colors[R]; } Depending on your implementation you might be able to do away with the method completely and just use the assignment. I came up with another approach. If you change the values of RanngDe to be the same as the ConsoleColor enum, you can cast the RanngDe value to ConsoleColor: public enum RanngDe { Blue = ConsoleColor.Blue, Green = ConsoleColor.Green, Yellow = ConsoleColor.Yellow } public static void SetColor(RanngDe R) { Console.BackgroundColor = (ConsoleColor)R; }
{ "domain": "codereview.stackexchange", "id": 25399, "tags": "c#, beginner, console, enum" }
Find two numbers that sum closest to a given number
Question: I'm working on the problem of finding 2 numbers such that their sum is closest to a specific number. Input numbers are not sorted. My main idea is to sort the numbers and search from the two ends. I'm wondering if there are any better ideas in terms of algorithm time complexity. Any advice on code bugs and code style is appreciated. import sys def find_closest_sum(numbers, target): start = 0 end = len(numbers) - 1 result = sys.maxint result_tuple = None while start < end: if numbers[start] + numbers[end] == target: print 0, (numbers[start], numbers[end]) return elif numbers[start] + numbers[end] > target: if abs(numbers[start] + numbers[end] - target) < result: result = abs(numbers[start] + numbers[end] - target) result_tuple = (numbers[start], numbers[end]) end -= 1 else: if abs(numbers[start] + numbers[end] - target) < result: result = abs(numbers[start] + numbers[end] - target) result_tuple = (numbers[start], numbers[end]) start += 1 return result, result_tuple if __name__ == "__main__": numbers = [2,1,4,7,8,10] target = 16 numbers = sorted(numbers) print find_closest_sum(numbers, target) Answer: Your approach is good. An improvement you can make is to use a better sorting technique; the rest of the approach you are using is already the best one. Sort all the input numbers (use a better sorting technique, e.g. quicksort). Use two index variables l and r to traverse from the left and right ends respectively. Initialize l as 0 and r as n-1. sum = a[l] + a[r] If sum is less than the number, then l++ If sum is greater than the number, then r-- Keep track of the minimum difference Repeat steps 3, 4, 5 and 6 while l < r. Time complexity: complexity of quicksort + complexity of finding the optimum pair = O(nlogn) + O(n) = O(nlogn)
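For comparison, the two-pointer scan the answer outlines can be written in Python 3 roughly as follows (a cleaned-up sketch, not the asker's exact code):

```python
def closest_pair_sum(numbers, target):
    """Return (difference, (a, b)) for the pair whose sum is closest to target.

    Assumes at least two numbers. Sorts first, then walks two indices
    inward from the ends, so the total cost is O(n log n).
    """
    nums = sorted(numbers)
    lo, hi = 0, len(nums) - 1
    best_diff = float("inf")
    best_pair = None
    while lo < hi:
        s = nums[lo] + nums[hi]
        diff = abs(s - target)
        if diff < best_diff:
            best_diff = diff
            best_pair = (nums[lo], nums[hi])
        if s == target:
            break       # exact match: cannot do better
        elif s > target:
            hi -= 1     # sum too large: shrink from the right
        else:
            lo += 1     # sum too small: grow from the left
    return best_diff, best_pair
```

Returning the best difference and pair, rather than printing inside the function, also avoids the early `return` with no value that the asker's version has on an exact match.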
{ "domain": "codereview.stackexchange", "id": 23622, "tags": "python, algorithm, python-2.x" }
Address Duplicate Medical Claims (with SQL Injection GUI)
Question: These are some pieces of a program I wrote to help my team automatically analyze and fix medical claims that have been rejected. I thought some people might be interested to see what I have done to allow my team to modify a base query through what is kind of a SQL injection GUI. I had a few objectives in building this: Pull query logic out of the backend to where the user can see it, learn it, and suggest improvements/additions Make it easy to add new pieces of query logic and user options governing them Make it easy to add new markets that can plug into the existing logic options Determine if the user asked for the default options for their market or chose custom ones: Standard Module: M0310QueryPreparation Option Explicit Sub GetQueryReplacementsFromGlossaryAndQueryLogicSheet(ByVal strQueryType As String) 'Modify the upcoming query based upon the user's choice of logic If IsUserFormLoaded("CustomLogicOptions") = False Then DefaultQueryReplacements strQueryType Else If CustomLogicOptions.Tag = "On" Then CustomQueryReplacements strQueryType Else DefaultQueryReplacements strQueryType End If End If End Sub Find which options apply to their choice (for example, these are for a user choice of the default options): Standard Module: M03110QueryReplacements Option Explicit Sub DefaultQueryReplacements(ByVal strQueryType As String) 'Find the settings for lngMarketID.Value on the State Configuration sheet and match them to query replacements on the Glossary and Query Logic sheet With StateConfiguration 'Find the standard logic columns on the State Configuration sheet Dim lngStateConfigurationSettingNamesRow As Long lngStateConfigurationSettingNamesRow = .Columns(1).Find(What:="Market ID", Lookat:=xlWhole).Row Dim lngStateConfigurationFirstSettingColumn As Long lngStateConfigurationFirstSettingColumn = .Rows(1).Find(What:="Standard State-Specific Logic", Lookat:=xlWhole).Column Dim lngStateConfigurationLastSettingColumn As Long 
lngStateConfigurationLastSettingColumn = .Cells(lngStateConfigurationSettingNamesRow, 1).End(xlToRight).Column Dim rngStateConfigurationSettingNames As Range Set rngStateConfigurationSettingNames = .Range(.Cells(lngStateConfigurationSettingNamesRow, lngStateConfigurationFirstSettingColumn), .Cells(lngStateConfigurationSettingNamesRow, lngStateConfigurationLastSettingColumn)) Dim lngSettingsRowForMarketIDBeingAnalyzed As Long lngSettingsRowForMarketIDBeingAnalyzed = WorksheetFunction.Match(lngMarketID.Value, .Columns(1), 0) 'For the setting-options for lngMarketID.Value from the State Configuration sheet Dim rngStateConfigurationSetting As Range For Each rngStateConfigurationSetting In rngStateConfigurationSettingNames 'If they match to a query replacement on the Glossary and Query Logic sheet 'Store the replacement in the appropriate query replacements dictionary PossibleAdditionToQueryReplacementsDictionary strQueryType, rngStateConfigurationSetting.Value, .Cells(lngSettingsRowForMarketIDBeingAnalyzed, rngStateConfigurationSetting.Column).Value Next rngStateConfigurationSetting End With End Sub Get the matching injections from the user-facing sheet: Standard Module: M03111QueryReplacements Option Explicit Sub PossibleAdditionToQueryReplacementsDictionary(ByVal strQueryType As String, ByVal strSetting As String, ByVal strOption As String) 'If the strSetting-strOption combination matches to a query replacement on the Glossary and Query Logic sheet 'Store the replacement in the appropriate query replacements dictionary With GlossaryandQueryLogic 'Prepare to match strSetting and strOption Dim lngGlossarySettingColumn As Long lngGlossarySettingColumn = .Rows(1).Find(What:="Setting", Lookat:=xlWhole).Column Dim lngGlossaryOptionColumn As Long lngGlossaryOptionColumn = .Rows(1).Find(What:="Option", Lookat:=xlWhole).Column Dim lngGlossaryQueryReplacementColumn As Long If dictQueryLogicColumnByQueryType.Value.Exists(strQueryType) Then lngGlossaryQueryReplacementColumn = 
dictQueryLogicColumnByQueryType.Value(strQueryType) Else MsgBox "Error: A misconfiguration has occurred. The program is looking for a " & strQueryType & "Query Replacement Logic column on the " & .Name & " sheet, but is unable to find one. Please have a new column with this type of logic added to the sheet with appropriate query replacements." Cancel End If 'Find the rows for this setting Dim lngGlossaryFirstRowForThisSetting As Long lngGlossaryFirstRowForThisSetting = .Columns(lngGlossarySettingColumn).Find(What:=strSetting, Lookat:=xlWhole).Row 'Find the first row of the next setting and subtract one Dim lngGlossaryLastRowForThisSetting As Long If lngGlossaryFirstRowForThisSetting <> .Cells(.Rows.Count, lngGlossarySettingColumn).End(xlUp).Row Then lngGlossaryLastRowForThisSetting = .Columns(lngGlossarySettingColumn).Find(What:="*", After:=.Cells(lngGlossaryFirstRowForThisSetting, lngGlossarySettingColumn), SearchDirection:=xlNext, SearchOrder:=xlByRows).Row - 1 Else lngGlossaryLastRowForThisSetting = .Cells(.Rows.Count, 4).End(xlUp).Row End If 'Find the row for this option Dim rngGlossaryPossibleOptionsForThisSetting As Range Set rngGlossaryPossibleOptionsForThisSetting = .Range(.Cells(lngGlossaryFirstRowForThisSetting, lngGlossaryOptionColumn), .Cells(lngGlossaryLastRowForThisSetting, lngGlossaryOptionColumn)) Dim rngGlossaryOptionForThisSetting As Range Set rngGlossaryOptionForThisSetting = rngGlossaryPossibleOptionsForThisSetting.Find(What:=strOption, Lookat:=xlWhole) 'Match it to a Query Replacement If (rngGlossaryOptionForThisSetting Is Nothing) Then 'Something on the State Configuration sheet needs to be fixed MsgBox "Error: The " & strSetting & " option for this state does not match any known options on the Glossary and Query Logic sheet. Please have a new row with this type of logic added to the sheet with appropriate query replacements." 
Cancel Else Dim strQueryReplacementLogic As String strQueryReplacementLogic = .Cells(rngGlossaryOptionForThisSetting.Row, lngGlossaryQueryReplacementColumn).Value If strQueryReplacementLogic <> vbNullString Then dictQueryReplacementDictionariesByQueryType.Value.Item(strQueryType).Add Key:=strSetting, Item:=strQueryReplacementLogic End If End With End Sub Get the base query (which is pasted onto the spreadsheet) and maintain its existing readability: Standard Module: M0310QueryPreparation (same as GetQueryReplacementsFromGlossaryAndQueryLogicSheet above) Option Explicit Function PrepareOriginalQuery(ByVal QuerySheet As Worksheet) As String 'Build the query from the pieces Dim strQuery As String strQuery = vbNullString With QuerySheet Dim lngLastRowOfQuery As Long lngLastRowOfQuery = .Cells.Find(What:="*", SearchDirection:=xlPrevious, SearchOrder:=xlByRows).Row Dim lngRowIndex As Long For lngRowIndex = 2 To lngLastRowOfQuery Dim rngNewLineOfQueryText As Range Set rngNewLineOfQueryText = .Range(.Cells(lngRowIndex, 2), .Cells(lngRowIndex, .Columns.Count)).Find(What:="*") If Not rngNewLineOfQueryText Is Nothing Then Dim lngTabIndex As Long lngTabIndex = 2 'Starts at 2 due to Settings Column Do Until lngTabIndex = rngNewLineOfQueryText.Column strQuery = strQuery & vbTab lngTabIndex = lngTabIndex + 1 Loop strQuery = strQuery & rngNewLineOfQueryText.Value End If strQuery = strQuery & vbCrLf Next lngRowIndex End With PrepareOriginalQuery = strQuery End Function Make the injections and ensure that they are SQL-compliant Standard Module: M0310QueryPreparation (same as two other methods above) Option Explicit Function CleanQuery(ByVal strOriginalQuery As String, ByVal dictQueryReplacements As Scripting.Dictionary) As String Dim strCleanedQuery As String strCleanedQuery = strOriginalQuery 'Make replacements of settings placeholders in query Dim Loop1 As Long For Loop1 = LBound(dictQueryReplacements.Keys) To UBound(dictQueryReplacements.Keys) If 
dictQueryReplacementSettingsThatAreNotWhereClauseConditions.Value.Exists(dictQueryReplacements.Keys(Loop1)) = True Then strCleanedQuery = Replace(strCleanedQuery, "--" & dictQueryReplacements.Keys(Loop1), dictQueryReplacements.Items(Loop1)) Else strCleanedQuery = Replace(strCleanedQuery, "--" & dictQueryReplacements.Keys(Loop1), "and " & dictQueryReplacements.Items(Loop1)) End If Next Loop1 'Remove extraneous "and"s Dim lngAndLocation As Long lngAndLocation = InStr(1, strCleanedQuery, "and ", vbTextCompare) Do Until lngAndLocation = 0 Dim boolBeginningOfPreviousWordFound As Boolean boolBeginningOfPreviousWordFound = False Dim strPreviousWord As String strPreviousWord = vbNullString Dim lngCharactersBackFromAnd As Long lngCharactersBackFromAnd = 0 Do Until boolBeginningOfPreviousWordFound = True lngCharactersBackFromAnd = lngCharactersBackFromAnd + 1 Dim strCurrentCharacter As String strCurrentCharacter = Mid(strCleanedQuery, lngAndLocation - lngCharactersBackFromAnd, 1) Select Case strCurrentCharacter Case vbCrLf, vbTab, " ", vbCr, vbLf If strPreviousWord <> vbNullString Then boolBeginningOfPreviousWordFound = True Case Else strPreviousWord = strCurrentCharacter & strPreviousWord End Select Loop If strPreviousWord = "where" Or strPreviousWord = "on" Then strCleanedQuery = Left(strCleanedQuery, lngAndLocation - 1) & Right(strCleanedQuery, Len(strCleanedQuery) - lngAndLocation - 3) lngAndLocation = lngAndLocation - 1 End If lngAndLocation = InStr(lngAndLocation + 1, strCleanedQuery, "and", vbTextCompare) Loop CleanQuery = strCleanedQuery End Function Answer: First - I think the design of your code, as well as the logic, are good. I'm not sure why you have a bunch of modules, but I'm just testing it all in a single module. Variable Naming You're using some Hungarian notation. Read more. 
What I mean is you're putting the Type prefix on all your variables - strQueryType dictQueryReplacements rngNewLineOfQueryText lngStateConfigurationSettingNamesRow There's no real need for that. The name of your variable should make it clear what Type it is. The lngStateConfigurationSettingNamesRow is a row; it's obviously going to be an integer, so I don't need that prefix letting me know. (Ignoring that this might not be possible) - With rngStateConfigurationSettingNames you can let me know it's a range with a name like configSettingsHeaderRange or, better yet, give these ranges names, as in create named ranges on the worksheet. So instead, it might look like this Set settingHeaders = SettingSheet.Range("ConfigurationHeaders") Right? You can tell me a lot with names, and you can avoid those huge lines. Usually I'm like "Hey give your variables meaningful names! Characters are free!", but here, you've gone above and beyond naming your variables. In fact, the names are a little overwhelming. If that's your style, then that's your style; it's just a bit much if someone were to come after you and try to debug, given that variable names take up half the screen area. Being Explicit You've done an excellent job ensuring that everything is properly qualified e.g. Set rngGlossaryPossibleOptionsForThisSetting = .Range(.Cells(...), .Cells(...)) There's no chance that you will hit the wrong sheet, having wrapped it in a With. Error Handling You've trapped your errors, at least the ones you expect, in If blocks (VBA Guard Clauses) - If (rngGlossaryOptionForThisSetting Is Nothing) Then 'Something on the State Configuration sheet needs to be fixed MsgBox "Error: The " & strSetting & _ " option for this state does not match any known options on " & _ "the Glossary and Query Logic sheet. Please have a new row with this type of logic added to the sheet with appropriate query replacements." Cancel Else The only thing is the use of Cancel.
You haven't declared a Cancel, so I imagine you want to use the built-in Cancel, but your procedures don't have a Cancel argument (so this errors instead). Instead, if you want to stop the procedure, use Exit Function or Exit Sub Your error messages are pretty long too; perhaps put those in a variable so the error block doesn't seem so lopsided. Even put them in a constant. However, the block seems backwards to me. You're testing for the error and relying on the Else to do it all. Instead, test that there isn't an error and guard in the Else If Not rngGlossaryOptionForThisSetting Is Nothing Then Dim strQueryReplacementLogic As String strQueryReplacementLogic = .Cells(rngGlossaryOptionForThisSetting.Row, lngGlossaryQueryReplacementColumn).Value If strQueryReplacementLogic <> vbNullString Then dictQueryReplacementDictionariesByQueryType.Value.Item(strQueryType).Add Key:=strSetting, Item:=strQueryReplacementLogic Else 'Something on the State Configuration sheet needs to be fixed MsgBox "Error: The " & strSetting & ErrorMessage Exit Function End If Boolean If If dictQuery...Value.Exists(dictQueryReplacements.Keys(Loop1)) = True If IsUserFormLoaded("CustomLogicOptions") = False When you test a boolean, you can just use the boolean as the test If dictQuery.Value.Exists() Then If Not IsUserFormLoaded("Custom") Then Strings Since Mid, Left and Right only return strings, you can use the typed functions Mid$, Left$ and Right$ Comments You have a lot of comments. Comments - "code tells you how, comments tell you why". The code should speak for itself; if it needs a comment, it might need to be made more clear. If not, the comment should describe why you're doing something rather than how you're doing it. Here are a few reasons to avoid comments altogether.
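The guard-clause advice is language-independent; a tiny Python sketch of the same restructuring (hypothetical names, not the VBA above):

```python
def query_replacement(option_row, logic_by_row):
    """Guard clause: handle the error case up front and bail out,
    so the happy path is not buried inside an else-block."""
    if option_row is None:
        # Equivalent of the MsgBox-plus-Exit-Function pattern suggested above.
        raise LookupError("option not found on the Glossary sheet")
    # Happy path reads straight down; missing logic yields an empty string.
    return logic_by_row.get(option_row, "")
```

The shape is the same in any language: validate, exit early on failure, then write the main logic unindented.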
{ "domain": "codereview.stackexchange", "id": 29951, "tags": "vba, excel" }
What is the formula for the glug point?
Question: When you pour water out of a bottle, normally you have a smooth stream. However, if you pour it too fast it glugs, which is to say, comes out in quantized bursts. What is the formula for calculating the glug point angle, and what is the formula for calculating the size of the glug quanta and glug quanta frequency? Answer: Glugging is determined by pure geometry--- in your chosen tilt, as the water pours out, will the water completely cover the opening, or not? If the water doesn't cover the opening, air will stream in to equalize the pressure in the bottle to the pressure outside, and there will be no glug. When the water covers the opening, the water will still pour out, expanding the air trapped in the bottle. When the air pressure inside is small enough, the atmospheric pressure outside will push the water up more than gravity pulls it down, and it will not allow the water to leave. But the water's inertia streaming out will overshoot this point, so that the pressure will be less than this magic value, and air will force its way into the water. This air will push a bubble into the water, recompressing the air trapped in the bottle until the pressure inside rises past the magic value. At this point, the water will fall back down through the opening again, allowing the air bubble to rise to join the air inside the bottle. This is an oscillatory process because of the inertia of the water--- both the outflow of water and the inflow of air overshoot the equilibrium. The process is not "quantized", because it depends on the exact amount of air trapped in the bottle--- the more air there is, the further apart and bigger the glugs will be (you can see this effect in an emptying water-cooler bottle), and on the exact amount of extra water released before each glug, which depends on the inertia of the water and its exact exit velocity and opening geometry.
In a good model, it should be possible to calculate the exact amount of glugging from the condition that the air is always in equilibrium, and the water is flowing out according to Bernoulli's law, with inertial effects leading to a certain lag in time of the response to the pressure. An interesting case is when the water is forced to go out through a straw. In this case, the glugging doesn't happen, because the water's viscosity dominates and there is no lag. When you have an oscillatory system with friction, generically there is a critical value of friction which kills oscillations (see critical damping) and leaves only exponential decay. This allows a stable equilibrium, where the pressure of the air inside is just low enough to hold the water up. So the water doesn't leave the upside-down bottle. If you arrange for the bottle to be completely full, you can do this without a straw too, although this is often unstable, because if a certain size bubble enters, it will allow the oscillations to begin with its volume of trapped air expanding and contracting.
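The critical-damping threshold mentioned at the end can be made concrete with the textbook linear oscillator m x'' + c x' + k x = 0, used here only as a stand-in for the real glug dynamics (a sketch under that simplifying assumption):

```python
import math

def damping_ratio(m, c, k):
    # zeta = c / (2 * sqrt(m * k)); zeta >= 1 means the motion
    # decays exponentially with no oscillation (critical/overdamped).
    return c / (2.0 * math.sqrt(m * k))

def can_glug(m, c, k):
    # Glugging is an oscillation, so it survives only below critical
    # damping; a narrow straw raises the effective c past the threshold.
    return damping_ratio(m, c, k) < 1.0
```

With realistic parameters the mass, stiffness and friction would come from the water inertia, the compressibility of the trapped air, and the viscous losses in the opening, none of which are computed here.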
{ "domain": "physics.stackexchange", "id": 1757, "tags": "fluid-dynamics, everyday-life" }
Why can't I properly place the Depth Camera in the Beginner Model Editor tutorial?
Question: I have been trying to go through the Beginner Model Editor tutorial. I am not able to place the "Depth Camera" on the top of the vehicle. I don't seem to be able to move it in "z." Sometimes it disappears altogether as I try to move it around. I am using Gazebo 7.4.0 and Ubuntu 16.04.1 LTS (Xenial). Originally posted by beartrap on Gazebo Answers with karma: 16 on 2016-10-24 Post score: 0 Original comments Comment by chapulina on 2016-10-24: How are you moving it? Make sure you're dragging the blue arrow. The more perpendicular your view angle is to the arrow, the more detailed movements you'll be able to make. You can also try making use of the align or snap tools. Comment by beartrap on 2016-10-24: I was not dragging the blue arrow, rather I was trying to move it as if it were a PowerPoint object. It is working for me now. Thanks for your help. Answer: I needed to drag the arrows -- blue arrow for z movement. I was trying to drag the body of the object instead. Originally posted by beartrap with karma: 16 on 2016-10-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 4006, "tags": "gazebo" }