What are the various methods for speeding up neural networks for inference?
Question: One way to speed up a neural network is to prune the network, reducing the number of neurons in each layer. What are the other methods to speed up inference? Answer: These are some ways to speed up inference: Reduction of float precision: This is done post-training. According to work in this area, very little accuracy is sacrificed for a large reduction in memory usage, and it also speeds up inference (e.g. float32 -> float8). Reference paper: https://arxiv.org/pdf/1502.02551 Using ReLU or similarly cheap activations: The benefit is obvious when you don't need to compute heavy exponentials, as in tanh or sigmoid. Binary neural architectures: This is new. It exploits binary-valued weights and activations (1 bit) in place of their 32-bit float counterparts. Estimation and learning are done through POPCNT and XNOR operations (for matrix products) and the STE (straight-through estimator) for backpropagation. You do have to use a larger number of neurons to learn the same features, but inference is on average 7x faster. Reference papers: https://pjreddie.com/media/files/papers/xnor.pdf https://software.intel.com/en-us/articles/binary-neural-networks Hardware standpoint: Use specialised hardware to compute matrix products; standard examples are GPUs and TPUs.
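The float-precision point can be sketched in a few lines. This is a toy illustration of symmetric linear quantization (my own example, not taken from the referenced paper): weights are mapped to 8-bit integer codes with one scale factor per tensor, then dequantized on the fly at inference time.

```python
# Toy sketch of post-training weight quantization (illustrative only;
# real frameworks such as TensorFlow Lite or PyTorch automate this).

def quantize_int8(weights):
    """Symmetric linear quantization of a list of floats to int8 codes."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-127, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

w = [0.5, -1.27, 0.02, 1.0]
codes, scale = quantize_int8(w)
w_hat = dequantize(codes, scale)

# The reconstruction error is at most half a quantization step:
assert all(abs(a - b) <= scale / 2 for a, b in zip(w, w_hat))
print(codes)   # [50, -127, 2, 100]
```

The speed-up in real deployments comes from doing the matrix products directly on the integer codes, not from the dequantization shown here.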
{ "domain": "ai.stackexchange", "id": 1109, "tags": "neural-networks, deep-learning, convolutional-neural-networks" }
Does gravity keep nearby galaxies from flying apart in space expansion?
Question: On a large scale, the expansion of the universe pulls galaxies apart while gravity keeps galaxies from expanding. So there seems to be a certain scale at which the expansion and gravity roughly cancel each other. Within a galaxy gravity wins, but remote galaxies fly apart. What happens if galaxies are close to each other? I know Andromeda is not a good example, as it is already on a collision course with the Milky Way, but generally, do neighboring galaxies of the same group tend to be kept together by gravity, or do they fly apart with the expansion of the universe? At what scale does the expansion finally overcome gravity? Answer: Yes, galaxy clusters can remain bound together despite the expansion of the universe. Analysing the balance between expansion and gravitational attraction produces an estimate in which test particles remain bound in a circular orbit around a larger mass if $2GM_{obj} \geq \beta^* H_0^2 r_0^3$ where $\beta^*\approx 5.3$, producing the criterion $$\frac{M_{obj}}{10^{12} M_\odot} > 3h_{70}^2 \left( \frac{r_0}{1 \mathrm{Mpc}} \right)^3.$$ The Milky Way has a sphere of influence of 0.7 Mpc, while a typical star of $0.5 M_\odot$ (with no other competition) has a sphere of influence 55 pc across. A less restrictive bound uses 1.18 rather than 3. See this paper for a comparison with actual supercluster data. It also looks at non-circular orbits, where things are more complicated.
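The criterion can be checked numerically. This sketch assumes $h_{70}=1$ and a Milky Way mass of $10^{12} M_\odot$, and reproduces the quoted 0.7 Mpc and 55 pc spheres of influence by inverting the inequality:

```python
def sphere_of_influence_mpc(mass_msun, h70=1.0):
    """Largest circular-orbit radius (Mpc) still bound to the given mass,
    inverting M/(1e12 Msun) > 3 * h70**2 * (r0 / 1 Mpc)**3."""
    return (mass_msun / 1e12 / (3 * h70**2)) ** (1 / 3)

r_mw = sphere_of_influence_mpc(1e12)    # a 1e12 Msun Milky Way (assumed mass)
r_star = sphere_of_influence_mpc(0.5)   # a typical 0.5 Msun star

print(f"{r_mw:.2f} Mpc")                # ~0.7 Mpc
print(f"{r_star * 1e6:.0f} pc")         # ~55 pc
```

The cube-root dependence is why a trillion-fold change in mass only shifts the bound radius by a factor of ten thousand.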
{ "domain": "physics.stackexchange", "id": 44100, "tags": "gravity, space-expansion, galaxies" }
Minimum number of non-coplanar forces required to keep an object in equilibrium
Question: The minimum number of non-coplanar forces that can keep a particle in equilibrium is: (a) 1 (b) 2 (c) 3 (d) 4 The answer given is option $(d)$, i.e. $4$. But can’t it be $(c)$, i.e. $3$, too? Suppose I have three forces: $F_1= 3i +4j$ $F_2= -3i +5k$ $F_3= -4j -5k$ $F_1$ lies in the x-y plane, $F_2$ in the x-z plane and $F_3$ in the z-y plane. So all of them are non-coplanar. But the sum of all of them is $0$. So shouldn’t three forces be enough to keep an object in equilibrium? Answer: Actually, the forces $\vec{F_1}, \vec{F_2}, \vec{F_3}$ defined by the points $(3,4,0)$, $(-3,0,5)$ and $(0,-4,-5)$ are coplanar. Even though $\vec{F_1}, \vec{F_2}, \vec{F_3}$ algebraically sum to $\vec 0$, they are nevertheless coplanar. In fact, if you do a little algebra (solve three simultaneous equations) or calculate the cross product $(\vec{F_2}-\vec{F_1})\times (\vec{F_3}-\vec{F_1})$ (which gives you the vector normal to this plane), you deduce that $(3,4,0)$, $(-3,0,5)$ and $(0,-4,-5)$ lie on the plane defined by the equation $$20x-15y+12z=0$$ So the rule still holds and you would actually need $3+1$ non-coplanar forces for equilibrium, where the fourth force would need to be equal and opposite to the sum of the first three. Note that three coplanar forces, or even two coplanar forces, can sum to zero.
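The answer's claim is easy to verify numerically. A small sketch (helper functions are mine, written out to keep it self-contained): the three forces balance, yet their tips all lie in the single plane $20x-15y+12z=0$.

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

F1, F2, F3 = (3, 4, 0), (-3, 0, 5), (0, -4, -5)

assert tuple(map(sum, zip(F1, F2, F3))) == (0, 0, 0)   # forces do balance

n = cross(sub(F2, F1), sub(F3, F1))   # normal to the plane of the three tips
print(n)                              # (60, -45, 36), i.e. 3*(20, -15, 12)
assert all(dot(n, F) == 0 for F in (F1, F2, F3))       # all three coplanar
```

So the forces cancel, but only because they are secretly confined to one plane; a genuinely non-coplanar triple can never sum to zero.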
{ "domain": "physics.stackexchange", "id": 87368, "tags": "homework-and-exercises, forces, vectors, linear-algebra, equilibrium" }
Running water and still water impact
Question: I jumped from a height $\:\rm h$ into a moving water canal of breadth $\:\rm b$ and infinite length, flowing with velocity $v$; in a different fall, I plummet into a swimming pool of length $l$, breadth $b$ and depth $d$. Assuming both bodies of water have the same density, which impact will take the greater toll on the body? I personally felt the swimming pool impact was harder. But can someone help me with the required equations and a conceptual explanation? Answer: Water is an incompressible fluid, which means that when a force acts on it, the force is transmitted all the way through and across the water body. When a human slams into water from a great height, the fall is considered fatal. Why? As you accelerate through the fall you gain momentum, and your kinetic energy rockets up. You can estimate the force of impact using: $$F\delta t = m\delta v$$ When $\delta t$ is small, the force is large, since they are inversely proportional. Even though water is a fluid, it needs time to get out of the way fast enough to absorb the impact and cushion you to safety (since motion is a function of time). At a high final velocity $v$ just before contacting the water surface, there isn't enough time for the water to flow aside, so the body experiences a normal reaction force approximately equal to the one it would get impacting solid ground. So velocity and stopping time matter a lot in dictating whether you survive the fall, regardless of how colossal the water body is or how fast it's moving.
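The impulse relation can be put into rough numbers. The fall height, body mass and stopping times below are illustrative assumptions, chosen only to show how strongly the stopping time $\delta t$ drives the average force:

```python
import math

g = 9.81    # m/s^2
h = 20.0    # fall height in m (assumed)
m = 70.0    # body mass in kg (assumed)

v = math.sqrt(2 * g * h)   # impact speed from free fall, ~19.8 m/s

for dt in (0.5, 0.05):     # gradual vs. abrupt deceleration in the water (assumed)
    F = m * v / dt         # average force over the stop, from F*dt = m*dv
    print(f"dt = {dt:4.2f} s -> average force ~ {F / 1000:.0f} kN")
# A ten times shorter stop means ten times the average force.
```

The same momentum is always absorbed; only the time over which the water (or the ground) absorbs it differs, and that time sets the force.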
{ "domain": "physics.stackexchange", "id": 56580, "tags": "classical-mechanics" }
frequency of oscillation of the disk
Question: The mass m is suspended by a string from a homogeneous circular disk of mass mc and radius R, as illustrated in figure 24. The disk is restricted from rotating by a spring attached to it at a distance cr from its center of rotation. If mass m undergoes a downward displacement x, determine the frequency of oscillation of the disk. Can someone explain to me why $m\ddot{x}$ and $J\ddot{\theta}$ are in that direction (have positive sign)? Answer: The choice of positive axis is arbitrary (whether up/down or left/right is taken as positive makes no difference in the solution). The direction of unknown vector quantities is also arbitrary, i.e. you draw the unknown quantity as positive or negative, and if the equations are properly constructed then the sign will indicate the true direction. The most common problem in dynamics problems is what happens with derived quantities, e.g. the moment of an unknown force. More specifically, if it is required to determine the moment of an $m\ddot{x}$ term (which describes an "active" force), then depending on the position of the $m\ddot{x}$ relative to the point about which moments are calculated, the resulting moment can have a positive or negative sign. An additional example -- which commonly produces confusion -- is the term $J \cdot \ddot{\theta}$. Although the positive direction for $\theta$ is defined as clockwise (CW), the moment is positive for counterclockwise (CCW). So in this case, I suspect that whoever drew the diagram drew the term $J \cdot \ddot{\theta}$ as positive with respect to the positive moment direction. Apart from that, I am not certain that in this problem you can ignore gravity, which means that the force on the rope will not be equal to $m\cdot \ddot{x}$ (it can actually be derived from $m\cdot g - T = m\cdot \ddot{x}$). Also, I agree with you that $\ddot{x}$ should be pointing downwards if $\theta$ is rotating counterclockwise (unless $\dot{\theta}$ is too large).
However, in general it will be very difficult to decipher the exact methodology that the person used to derive the equation (or his/her intent) without a more detailed explanation of the solution.
{ "domain": "engineering.stackexchange", "id": 4668, "tags": "vibration" }
Equation for finding electrical resistance
Question: I want to know how the flow of electrons will change when I change the temperature from 100 F to 250 F in a silicon semiconductor (e.g. in a computer mouse). How can I find this out? Answer: If you consider a homogeneous piece of silicon, the total flow of electrons through it is: $$ I = \frac{U}{R} = n e \mu \frac{S}{d} U $$ where $R$ is the resistance of the piece, $U$ the external voltage applied to it, and $e$ the elementary charge. The resistance depends on: $n$ - the concentration of electrons (number of electrons per m$^3$), $\mu$ - the mobility of electrons (ratio of the velocity of the electron to the electric field that makes it move), $S$ and $d$ - cross-section and length of the sample. The changes of geometric size with temperature are negligible, so the values that affect the resistance are the concentration $n$ and the mobility $\mu$. The mobility depends on the temperature and also on the concentration of various defects in the sample. At 100 F the temperature dependence dominates. The concentration is the most complicated point. There are the following cases: 1. Pure silicon. All the electrons (and the same number of holes) are thermally generated. Their concentration depends on the temperature exponentially. If you need the total current, don't forget about the holes. 2. Silicon doped with donors. The number of thermally generated electrons is negligible. The concentration does not depend on the temperature. 3. A semiconductor device with a p-n junction and/or heterojunction (a junction of different materials). The laser/LED of an optical computer mouse is this case. The sample is not homogeneous, and the concentration is determined mainly not by temperature but by more interesting things like voltage polarity. This case requires more formulas and exact data concerning the sample structure. (Note: the laser is made of GaAs and similar materials, not silicon. The attempts to make a silicon laser never stop, though.)
Edited (2011/12/15): For the temperature dependence of electron mobility Wikipedia gives $$ \mu(T) \approx \mu_0 T^{-2.4} $$ where $\mu_0 = 9.46 \cdot 10^{6} \text{m}/\text{(V s)}$, hope I've calculated it correctly from first point set (black circles) here This formula takes into account only electron scattering on the oscillations of the ions of the crystal. This effect is dominating at room temperature and higher. The temperature must be in Kelvin degrees here. For the electron $n$ and hole $n_h$ concentration in pure silicon (case 1.) at room temperature and higher one can use the following formula: $$ n = n_h = N_\text{eff} \; T^{\;3/2} \exp \left( -\frac{E_g}{2k_B T} \right) $$ where $N_\text{eff}$ - some constant describing the shape of conduction and valence bands of silicon (I have not found explicit value yet), $T$ - temperature in Kelvin degrees, $E_g = 1.12\;\text{eV} = 1.79 \cdot 10^{-19} J$ - energy gap of silicon, $k_B$ - Boltzmann constant.
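Plugging the asked temperatures into the two formulas above gives concrete ratios; note that $N_\text{eff}$ and $\mu_0$ cancel when only ratios are taken, so their exact values are not needed:

```python
import math

def f_to_k(t_f):
    """Convert Fahrenheit to kelvin."""
    return (t_f - 32) * 5 / 9 + 273.15

T1, T2 = f_to_k(100), f_to_k(250)   # ~311 K and ~394 K

# Mobility: mu(T) ~ T**-2.4, so only the temperature ratio matters here.
mu_ratio = (T2 / T1) ** -2.4

E_g = 1.12       # eV, silicon band gap
k_B = 8.617e-5   # eV/K, Boltzmann constant

def n_intrinsic(T):
    """Intrinsic carrier concentration up to the constant N_eff."""
    return T ** 1.5 * math.exp(-E_g / (2 * k_B * T))

n_ratio = n_intrinsic(T2) / n_intrinsic(T1)
print(f"mobility falls to {mu_ratio:.2f} of its 100 F value")
print(f"intrinsic carrier concentration rises ~{n_ratio:.0f}x")
# For pure silicon the exponential dominates: conductivity rises sharply.
```

So in the pure-silicon case (case 1) the roughly hundredfold rise in carriers easily outweighs the modest drop in mobility.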
{ "domain": "physics.stackexchange", "id": 2055, "tags": "electricity, semiconductor-physics" }
What is this aquatic organism in Fiji?
Question: I found them under reefs in Fiji. The hand-like structures moved slowly. I was not sure if it was an animal or a plant. I was not even sure whether they were several individuals or a single organism. The part that was not hidden by the stone was about 50 cm long. I did not touch them, since I was not sure if they were dangerous. Answer: It is probably Holothuria leucospilota. Holothuria leucospilota is a medium-sized sea cucumber reaching a length of up to 40 centimetres (16 in) when relaxed, but it can stretch to about a metre (yard) when extended. Holothuria leucospilota is found in shallow water along the east coast of Africa and in much of the Indo-Pacific region. It is a common species on the north-east coast of Australia, where it is found on reefs and rocky coasts, often partly concealed under a boulder. References https://en.wikipedia.org/wiki/Holothuria_leucospilota
{ "domain": "biology.stackexchange", "id": 8815, "tags": "species-identification" }
What photoconducting materials/metamaterials reach full conductivity within 1/30th of a picosecond
Question: I'm doing research on photoconducting materials/metamaterials, but I'm having a hard time confirming how fast they reach full conductivity. What photoconductors reach full conductivity and back again within 1/30th of a nanosecond? Any resources (such as websites) where I could hope to find a photoconductor with this reaction time would be useful as well... Answer: I have found the substance I was looking for: http://ieeexplore.ieee.org/document/64523/ It's called low-temperature-grown GaAs. It is an excellent photoconductor at the frequencies I need.
{ "domain": "physics.stackexchange", "id": 45156, "tags": "material-science, conductors, optical-materials" }
Constancy of Coefficients of Additive Integrals Throughout Subsystems of a Closed System
Question: I'm studying Landau and Lifshitz's Statistical Physics, Part 1, 3rd edition and am looking for clarification on the following statement, which appears on page 11 in the section on The Significance of Energy: As we know from mechanics, there exist only seven independent additive integrals of motion: the energy, the three components of the momentum vector and the three components of the angular momentum vector. We shall denote these quantities for the ath subsystem (as functions of the coordinates and momenta of the particles in it) by $E_a$(p,q), $\mathbf{P}_a(p,q)$, $\mathbf{M}_a(p,q)$ respectively. Landau/Lifshitz go on to say that since $\log{\rho_a}$ (that is, the distribution function for the $a$th subsystem) is an additive integral of the motion, it must be expressible as a linear combination: $$\log{\rho_a} = \alpha_a + \beta E_a(p,q) + \mathbf{\gamma \cdot P}_a(p,q) + \mathbf{\delta \cdot M}_a(p,q)$$ "with constant coefficients $\alpha_a, \beta, \mathbf{\gamma}, \mathbf{\delta}$ of which $\beta, \mathbf{\gamma}, \mathbf{\delta}$ must be the same for all subsystems in a given closed system." I do not understand which assumptions were made to justify the statement that these coefficients must be the same for all subsystems. My best guess is that it follows from the assumption that the subsystems are weakly interacting in a sufficiently small time interval such that the distribution function of two subsystems together, $\rho_{12}$ is equal to the product of the distribution function of each subsystem ($\rho_1$, $\rho_2$) separately: $$\rho_{12} = \rho_1 \rho_2.$$ (Landau/Lifshitz call this "statistical independence" for sufficiently small times in "quasi-closed" subsystems.) They also state that the closed system as a whole is in "statistical equilibrium," for which the time scales in question are long compared to the relaxation time. 
I don't know if this assumption is used to justify the claim about the coefficients or if it is in conflict with the time scale for which quasi-closed systems are statistically independent. It is not clear to me how the result follows from the weakly interacting or statistical equilibrium assumptions, but the former is certainly required otherwise $\log{\rho_a}$ wouldn't be an additive integral and hence the linear combination wouldn't necessarily hold. Any help would be greatly appreciated. Also for clarity, I am willing to take for granted Landau/Lifshitz's claim that there exist only seven independent additive integrals of motion. Answer: For the $a$ subsystem: $$ \log \rho_a = \alpha_a + \beta_a E_a + \mathbf{\gamma}_a \cdot \mathbf{P}_a + \delta_a \cdot \mathbf{M}_a . $$ Writing down this equation assumes that Liouville's theorem is applicable, which itself assumes that the subsystem $a$ is closed (or quasi-closed for relatively short time intervals). For another subsystem labelled $b$: $$ \log \rho_b = \alpha_b + \beta_b E_b + \mathbf{\gamma}_b \cdot \mathbf{P}_b + \delta_b \cdot \mathbf{M}_b . $$ For the combined subsystem $ab$: $$ \log \rho_{ab} = \alpha_{ab} + \beta_{ab} (E_{a} + E_{b}) + \mathbf{\gamma}_{ab} \cdot (\mathbf{P}_{a} + \mathbf{P}_{b}) + \delta_{ab} \cdot ( \mathbf{M}_{a} + \mathbf{M}_{b}) . $$ Substituting the three expressions above into the equation $\log \rho_{ab} = \log\rho_a + \log\rho_b$ (which follows from the "statistical independence" assumption) and equating the coefficients of the independent variables $E_a, E_b, \mathbf{P}_a, \mathbf{P}_b, \mathbf{M}_a, \mathbf{M}_b$ gives: $$ \alpha_{ab} = \alpha_a + \alpha_b ,$$ $$ \beta_{ab} = \beta_a = \beta_b , $$ $$ \gamma_{ab} = \gamma_a = \gamma_b , $$ $$ \delta_{ab} = \delta_a = \delta_b . $$ So $\beta$, $\gamma$ and $\delta$ don't depend on the subsystem label. 
Any closed macroscopic system will be in statistical equilibrium if it is observed for a long enough time (longer than the relaxation time), but I think that statistical equilibrium is an instantaneous quality of a system. Landau and Lifshitz' definition is: If a closed macroscopic system is in a state such that in any macroscopic subsystem the macroscopic physical quantities are to a high degree of accuracy equal to their mean values, the state is said to be in a state of statistical equilibrium. So I think that a system can be both in statistical equilibrium and quasi-closed.
{ "domain": "physics.stackexchange", "id": 36554, "tags": "thermodynamics, statistical-mechanics, integrals-of-motion" }
Movement on Labyrinth with Best First Search
Question: I have the following labyrinth, where R is the robot (the parent node), the red blocks are the obstacles that R cannot move through, and GOAL is the node where we want the robot to go. Every node is written in the form R(1,2,4), where 1 is the row, 2 is the column and 4 is the distance from the current node to the goal node GOAL. I am at a point where the solution tree looks like this. Now, I don't know which node I should select, C or D? Both of them have the same distance to the goal. Answer: It doesn't matter which one you take, because they both have exactly the same value. Which one actually gets taken will depend on unimportant details of the implementation, such as the order in which the nodes were added to the tree.
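The "implementation detail" the answer mentions can be made concrete. In a typical best-first frontier built on a binary heap, a tie between C and D is resolved by whatever secondary key the implementation happens to carry, e.g. insertion order (a sketch, not the asker's actual code):

```python
import heapq
import itertools

counter = itertools.count()
frontier = []

def push(node, h):
    # (h, tie-break counter, node): equal h falls back to insertion order
    heapq.heappush(frontier, (h, next(counter), node))

push("C", 4)   # both candidates carry the same distance-to-goal
push("D", 4)

h, _, node = heapq.heappop(frontier)
print(node)    # "C" here, only because it happened to be pushed first
```

Swap the two push calls and D comes out first; either choice leads to a correct search, which is exactly why the tie-break is unimportant.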
{ "domain": "cs.stackexchange", "id": 9704, "tags": "algorithms, graphs, shortest-path, heuristics, searching" }
How much mass must Earth lose to lose its Moon?
Question: Maybe I am wrong about the basic concept here, but for my curiosity: how much mass would need to leave the Earth (like satellites and rockets) to create an imbalance between the Earth and Moon so that the Moon can escape the Earth's gravitational pull and leave its orbit? Answer: Half. The escape velocity for an object at a distance $D$ from an object of mass $M$ is $\sqrt{2GM/D}$. The circular orbital velocity (the Moon is on an orbit that's close enough to circular that I'll just assume this) at the same distance is $\sqrt{GM/D}$. Setting the escape velocity from the Earth with its new reduced mass $M_{\rm new}$ equal to the orbital velocity around the Earth with its usual mass $M_{\rm old}$ gives: $$\sqrt{\frac{2GM_{\rm new}}{D_{\rm Moon}}} = \sqrt{\frac{GM_{\rm old}}{D_{\rm Moon}}}$$ Which immediately gives: $$\frac{M_{\rm new}}{M_{\rm old}} = \frac{1}{2}$$ Note that you'd need to remove all this mass to well outside the Moon's orbit, probably well away in the direction opposite to where you're going to send the Moon. Otherwise, you'll expand the orbit of the Moon, but once it gets back outside the distribution of expelled mass, it will remain bound to the system. This will obviously not be achieved by rockets launching satellites, chiefly because rockets work by throwing mass out of the back end... mass that would get left behind. It's pretty obvious that $0.5\,{\rm M}_{\oplus}$ in rocket fuel (even if half the Earth's mass could be made into rocket fuel!) is not enough to launch $0.5\,{\rm M}_{\oplus}$ in payload to escape velocity. Even if it could, you wouldn't be left with half the Earth, but instead with $0.5\,{\rm M}_{\oplus}$ in rocket exhaust...
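The algebra can be double-checked numerically, assuming standard values for $G$, the Earth's mass and the Earth-Moon distance: reduce the central mass until the Moon's current circular-orbit speed equals the new escape speed.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
D = 3.844e8          # Earth-Moon distance, m

v_orbit = math.sqrt(G * M_earth / D)   # Moon's circular speed now

# Reduced mass whose escape speed matches v_orbit, from v_esc = sqrt(2*G*M/D):
M_new = v_orbit**2 * D / (2 * G)
print(M_new / M_earth)                 # 0.5
```

The distance and constants cancel exactly, which is why the answer is precisely one half regardless of the numbers chosen.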
{ "domain": "physics.stackexchange", "id": 31867, "tags": "homework-and-exercises, newtonian-gravity, earth, estimation, moon" }
What is the driving force behind sigmatropic reactions?
Question: I am studying sigmatropic reactions in the context of pericyclic reactions and I am confused about the overall purpose of this type of reaction. I know that in some sigmatropic reactions, a more substituted alkene in the product is one reason why the reaction takes place. However, in other types of sigmatropic reactions, like hydride shifts, or when a more substituted alkene product is not produced, I am having trouble seeing why these reactions occur in the first place. It's almost as if the electrons and double bonds are simply moving around to new locations, without any purpose. Answer: Most of these reactions are indeed equilibrium reactions, meaning that they go forwards and backwards all the time, and at a given point in time there will be a certain ratio of reactants to products corresponding to the equilibrium constant. Which side is favoured is often determined by minute stabilisations that are indeed very hard to rationalise. Take for example the set of Wagner-Meerwein rearrangements that occur in the synthesis of cholesterol after squalene is cyclised. There appears to be no obvious stabilisation which would explain why one side is favoured over the other. In the end all we know is that the product is more stable, or the reaction would not occur.
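The "minute stabilisations" point can be quantified through $K = e^{-\Delta G^\circ/RT}$; the free-energy differences below are illustrative assumptions, not measured values for any particular rearrangement:

```python
import math

R = 8.314    # J/(mol K)
T = 298.15   # K, room temperature

for dG_kj in (-1.0, -5.0):   # product side lower by 1 or 5 kJ/mol (assumed)
    K = math.exp(-dG_kj * 1000 / (R * T))
    print(f"dG = {dG_kj:4.1f} kJ/mol -> K = {K:.1f}")
# Even -1 kJ/mol already gives ~1.5:1 in favour of the product side;
# -5 kJ/mol gives ~7.5:1.
```

Stabilisations of a kilojoule or two per mole are far below what one can usually rationalise by eye, yet they already produce a noticeable skew in the equilibrium mixture.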
{ "domain": "chemistry.stackexchange", "id": 7710, "tags": "organic-chemistry, reaction-mechanism, pericyclic" }
ROS2 Colcon debug symbols for use with DDD
Question: Is there a way to get Colcon to build ROS2 binaries with debug symbols so that I can run nodes with something like DDD? I'm able to run using DDD with the following command: ros2 run --prefix 'ddd' ros1_bridge dynamic_bridge But without debug symbols, I can't use features like breakpoints or stepping through the source code. Originally posted by mlanting on ROS Answers with karma: 53 on 2019-04-03 Post score: 4 Answer: You should be able to do this with: colcon build <other_args> --cmake-args -DCMAKE_BUILD_TYPE=RelWithDebInfo Instead of RelWithDebInfo you could use Debug if you really don't want any optimisation. This assumes the project you're building is a CMake project, of course (which the bridge is). Edit: "Is there a way to get Colcon to build ROS2 binaries" Pedantic perhaps, but colcon doesn't build anything; it drives other tools. So in this case we're asking colcon to pass on some arguments to CMake, which in the end will cause it to generate build scripts that actually compile things with debug flags enabled. Originally posted by gvdhoorn with karma: 86574 on 2019-04-03 This answer was ACCEPTED on the original site Post score: 11 Original comments Comment by mlanting on 2019-04-03: Exactly what I needed, thanks!
{ "domain": "robotics.stackexchange", "id": 32813, "tags": "ros, ros2, colcon, ros-crystal" }
Are 99% of animals bilaterally symmetric?
Question: The Wikipedia article for "Symmetry in Biology" claims: "Animals with bilateral symmetry are classified into a large group called the Bilateria which contains 99% of all animals (comprising over 32 phyla and 1 million described species)." This claim that almost all animals are Bilateria can be found in other sources too, e.g. in "A new paradigm for animal symmetry" by Gábor Holló. In both cases, however, no source is provided. I'm no biologist, so I don't know exactly how self-evident this claim is. Does someone know a source corroborating these claims? Answer: Well, you could count yourself. Kingdom Animalia encompasses about 1.3 million known species. There is one large phylum where most species lack symmetry - Porifera (5,500 species). Cnidaria (10,000 species) have radial symmetry. All the other broad phyla, from flatworms to chordates, have bilateral symmetry during embryonic development. Echinoderms (7,000 species) are pentaradially symmetrical as adults but bilateral as larvae, and are considered part of Bilateria. Source: Campbell Biology (specifically, chapters 32 and 33) / Lisa A. Urry, Michael L. Cain, Steven A. Wasserman, Peter V. Minorsky, Rebecca B. Orr, Neil A. Campbell. Twelfth edition. New York, NY: Pearson, 2020.
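The counting argument can be written out explicitly, using the rough species numbers quoted from Campbell Biology and treating every phylum outside Porifera and Cnidaria as bilaterian:

```python
total_animals = 1_300_000   # described species in kingdom Animalia
porifera = 5_500            # mostly asymmetric
cnidaria = 10_000           # radially symmetric

# Everything else, echinoderms included, is treated as bilaterian:
bilateria = total_animals - porifera - cnidaria
fraction = bilateria / total_animals
print(f"{fraction:.1%}")    # 98.8%, roughly the "99%" Wikipedia quotes
```

Most of the headline figure comes simply from arthropods (beetles alone run to hundreds of thousands of described species), all of which are bilaterian.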
{ "domain": "biology.stackexchange", "id": 11689, "tags": "zoology, symmetry" }
What makes a constant of nature fundamental?
Question: I came across a video on youtube in which Alexander Unzicker argues that in order to have a scientific revolution, one of the constants of nature should be eliminated(at 23:00). He gives the example of Electromagnetism where by the equation $\epsilon_0\mu_0 = \dfrac{1}{c^2}$ one can eliminate $\mu_0$. According to him, since we can relate 3 constant to each other we can eliminate one of them. He also says in Newton's theory we eliminate the constant $g$ with equation $g = \dfrac{GM}{r^2} $ Finally he says boltzmann constant was eliminated by relating kinetic energy to temperature by $\frac{1}{2}mv^2 = \frac{3}{2}kT $ So, this video made me think: What makes a physical constant truly fundamental? Does relating one constant to another means that we can eliminate one of the constants? For example is fine structure constant $\alpha = \frac{k e^2}{\hbar c}$ not fundamental? On the other hand one can argue that dimensionless constants are more fundamental. And I don't understand anything from boltzmann example, I think there is a huge problem in it, or am I missing something? Answer: What makes a physical constant truly fundamental? Does relating one constant to another means that we can eliminate one of the constants? First of all keep in mind that everything we, humans, describe is all based on our interpretation of the universe and we try to formulate it is such a way that we can best understand and describe how the universe works. In other words allow us to predict as exactly as possible what should happen given a set of initial conditions. A fundamental constant is described as a value that is constant over time and space. A better term is universal constant. Basically if you set up your experiment correctly you should always measure the same value, now, in the past, in the future in any location of the universe. 
(as long as you take space-time warping into account in your experiment/calculations) However, not everyone finds this a satisfying or good description of what a fundamental constant should be. Some say that a fundamental constant should not be expressible in terms of other fundamental constants. In other words, every fundamental constant should only be obtainable by measuring it, even if you know all the other constants. Everything else should then be calculable from the fundamental constants and a complete theory of how the universe works. The problem with that is: how do you decide which constant is fundamental and which is not? Let's look at the case of $\epsilon_0\cdot\mu_0=1/c^2$: how do you determine which two would be fundamental and which one would not be? Objectively there isn't much of a difference between the three, so the choice would be rather arbitrary. Therefore that description is unlikely to be adopted any time soon. So what is a fundamental physical constant, or in better words a universal constant? A constant that is the same in the entire universe and doesn't change over time. If these constants were different, then the universe would look completely different and it's highly likely life would be impossible; our universe/constants are therefore sometimes described as a Goldilocks universe, which is an often-used argument by people who believe in multiverse theories. Also keep in mind that just because an "expert" in a field says something doesn't mean it's true. Experts often disagree, especially the more complex the subject is. He's also talking a lot of "empty air", speaking extremely philosophically and proposing HIS view/theory. Just take a look at the comments below the video and the relative number of downvotes: 108/630 (20-7-2020), i.e. 17% of voters downvoted the video.
However, only a very specific target group will even watch such a video, so this suggests that many of the people who do watch it (strongly) disagree with him. It feels to me more like he is proposing that if you know equation X, then you won't need to know constant Y. However, that's a bit ambiguous, because it also works the other way around. Basically he's criticising many physicists and saying that they are focusing on the wrong things. So take what he says, and how he says it, with an open but questioning mindset.
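The $\epsilon_0\mu_0 = 1/c^2$ example from the video is easy to check: given any two of the three constants, the third follows, which is the sense in which one of them is redundant.

```python
import math

c = 299_792_458.0        # m/s, exact by definition
eps0 = 8.8541878128e-12  # F/m, CODATA 2018 value

mu0 = 1 / (eps0 * c**2)  # recover the vacuum permeability from the other two
print(mu0)               # ~1.2566e-6 H/m, i.e. essentially 4*pi*1e-7
assert abs(mu0 - 4 * math.pi * 1e-7) / mu0 < 1e-8
```

Note this only reduces the count of dimensional constants; a dimensionless combination like the fine-structure constant cannot be eliminated by any choice of units, which is why many physicists reserve "fundamental" for dimensionless constants.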
{ "domain": "physics.stackexchange", "id": 87999, "tags": "physical-constants" }
Why is there no desert in Southeast China?
Question: Arabia, the Sahara, Southeast China and India all lie on the Tropic of Cancer, yet Southeast China and India are not deserts. I know that India is not a desert because of the monsoon winds, which cause heavy rain. But China doesn't have that kind of wind. Also, Tibet, in Southwest China, is very dry because moist air doesn't get over the Himalayas. But east of that you would normally expect another big desert, yet there isn't one. Why is this? Answer: This is a very active region for tropical storms. (Image credit: NASA)
{ "domain": "earthscience.stackexchange", "id": 1508, "tags": "wind, rainfall, rain" }
Why are some housekeeping genes considered better?
Question: Whenever a PCR is done, we have to use housekeeping genes like GAPDH to normalise the values, to account for different amounts of starting cDNA/DNA. There are many different genes used for this, like GAPDH, ubiquitin, actin etc. When dealing with cells, I can understand that due to their immortal nature and the frankly shocking number of generations they have undergone, the expression of the housekeeping gene may change. Assuming that I am not dealing with cells but with animal tissues, does it matter which housekeeping gene I select? In other words, is GAPDH going to give me the same consistency between samples as something like 18S ribosomal RNA, or am I missing something specific? Answer: Yes, it does matter. Gene expression varies a lot throughout tissues and living conditions, so you need to test a variety of genes and choose the ones with the smallest variation in expression between sample and control. If you don't find any, you have to test other genes for this purpose. This is important, as it can severely affect the outcome of your experiment, since you normalise to the housekeeping genes; if these are not constant, it will cause problems. Have a look into the references, they helped me a lot. References: Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. The normalization of gene expression data in melanoma: investigating the use of glyceraldehyde 3-phosphate dehydrogenase and 18S ribosomal RNA as internal reference genes for quantitative real-time PCR. Normalization of qRT-PCR data: the necessity of adopting a systematic, experimental conditions-specific, validation of references.
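The "smallest variation" criterion from the answer can be sketched as a simple stability score. The Cq values below are invented for illustration, and the coefficient of variation used here is far cruder than a validated method such as geNorm from the first reference:

```python
import statistics

candidates = {
    "GAPDH": [18.1, 18.9, 17.6, 19.4],   # drifts between samples
    "ACTB": [16.0, 16.4, 15.8, 16.3],
    "18S_rRNA": [9.0, 9.1, 9.0, 9.1],
}

# Coefficient of variation across samples as a rough stability score:
cv = {gene: statistics.pstdev(v) / statistics.mean(v)
      for gene, v in candidates.items()}

best = min(cv, key=cv.get)
print(best)   # 18S_rRNA: the flattest candidate in these made-up data
```

With real data the ranking can easily flip between tissues, which is exactly why the answer insists on validating the reference genes for each experimental condition.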
{ "domain": "biology.stackexchange", "id": 3920, "tags": "pcr" }
Determinant calculation of a matrix of any degree
Question: My program calculates the determinant of a matrix of any degree. If you really can't understand my comments or identifiers, please let me know.

;; MakeShift: make element number 'ind' the head of the list:
(defun MakeShift (ind L buf)
  (if (= ind 1)
      (cons (car L) (append buf (cdr L)))
      (MakeShift (- ind 1) (cdr L) (append buf (list (car L))))
  )
)

;; Shift: call MakeShift:
(defun Shift (ind L) (MakeShift ind L nil))

;; makeTransp: make the transposition of a two-element list:
(defun makeTransp (L) (cons (cadr L) (cons (car L) nil)))

;; PushForEach: put element elem at the head of all lists of L:
(defun PushForEach (elem L)
  (if (null L)
      nil
      (cons (cons elem (car L)) (PushForEach elem (cdr L)))
  )
)

;; MakeTranspositions: create a list of all transpositions,
;; starting from the first transposition '(1 2 3 ...).
;; transpNum is the transposition number and numOfElem
;; is the number of elements in the transposition:
(defun MakeTranspositions (transp transpNum numOfElem)
  (cond ((> transpNum numOfElem) nil)
        ((= numOfElem 2) (cons transp (cons (makeTransp transp) nil)))
        (T (append (PushForEach (car transp)
                                (MakeTranspositions (cdr transp) 1 (- numOfElem 1)))
                   (MakeTranspositions (Shift (+ transpNum 1) transp)
                                       (+ transpNum 1) numOfElem)))
  )
)

;; MakeFirstTransp: make the first transposition '(1 2 3 ...)
;; whose number of elements equals the matrix degree:
(defun MakeFirstTransp (matrixDegree transp)
  (if (= matrixDegree 0)
      transp
      (MakeFirstTransp (- matrixDegree 1) (cons matrixDegree transp))
  )
)

;; Transpositions: make all transpositions of the matrix using MakeTranspositions:
(defun Transpositions (matrixDegree)
  (MakeTranspositions (MakeFirstTransp matrixDegree nil) 1 matrixDegree)
)

;; GetCol: return element number col in row (row belongs to matrix):
(defun GetCol (col rowVector)
  (if (= col 1)
      (car rowVector)
      (GetCol (- col 1) (cdr rowVector))
  )
)

;; GetElem: return element a[row][col] of matrix:
(defun GetElem (row col matrix)
  (if (= row 1)
      (GetCol col (car matrix))
      (GetElem (- row 1) col (cdr matrix))
  )
)

;; CheckFirst: check the first element in transposition (cons first transp) for parity:
(defun CheckFirst (first transp)
  (cond ((null transp) 1)
        ((< first (car transp)) (CheckFirst first (cdr transp)))
        (T (* -1 (CheckFirst first (cdr transp))))
  )
)

;; Sign: return the sign of a transposition (1 or -1):
(defun Sign (transp)
  (if (null (cdr transp))
      1
      (* (CheckFirst (car transp) (cdr transp)) (Sign (cdr transp)))
  )
)

;; Product: return the product of matrix elements picked by transposition transp:
(defun Product (matrix transp) (GetProduct matrix 1 transp))

;; GetProduct is called by Product:
(defun GetProduct (matrix ind transp)
  (if (null transp)
      1
      (* (GetElem ind (car transp) matrix)
         (GetProduct matrix (+ ind 1) (cdr transp))
      )
  )
)

;; GetSumm: return the sum of all products by transpositions, with their signs:
(defun GetSumm (matrix transps)
  (if (null transps)
      0
      (+ (* (Sign (car transps)) (Product matrix (car transps)))
         (GetSumm matrix (cdr transps))
      )
  )
)

;; Determinant: call GetSumm:
(defun Determinant (matrix matrixDegree)
  (GetSumm matrix (Transpositions matrixDegree))
)

So, the program works fast. Answer: The first thing is to learn proper indentation.
;; PushForEach put element elem into a heads of all lists of L:
(defun PushForEach (elem L)
  (if (null L)
      nil
      (cons (cons elem (car L)) (PushForEach elem (cdr L)))
  )
)

What's wrong with the code layout?

- Common Lisp has built-in documentation features. A documentation string can be placed inside the function and can be retrieved with the DOCUMENTATION function.
- Trailing parentheses are a big DON'T DO THAT. It just wastes space without adding much information. Your editor will count parentheses or will show the corresponding parentheses.
- Put constructs, when they are too long for one line, over several lines.
- Don't use CamelCase, use hyphens.

Let's do the editing step by step:

(defun PushForEach (elem L)
  "put element elem into a heads of all lists of L"
  (if (null L)
      nil
      (cons (cons elem (car L)) (PushForEach elem (cdr L)))
  )
)

(defun PushForEach (elem L)
  "put element elem into a heads of all lists of L"
  (if (null L)
      nil
      (cons (cons elem (car L)) (PushForEach elem (cdr L)))))

Now we get the version which is best readable:

(defun push-for-each (elem L)
  "put element elem into a heads of all lists of L"
  (if (null L)
      nil
      (cons (cons elem (car L)) (push-for-each elem (cdr L)))))

The function above has a documentation string and a useful code layout.

Next: can we write it better? You can bet that basic recursion in Lisp has been provided as a higher-order function. Applying a function over all items in a list is called mapping. The basic mapping function is MAPCAR.

(defun push-for-each (element list)
  "adds element as head of all sublists of list"
  (mapcar #'(lambda (item) (cons element item)) list))
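As a quick sanity check of the final MAPCAR version (the example values here are mine, not from the answer):

```lisp
;; The MAPCAR version conses the element onto every sublist,
;; including the empty one:
(push-for-each 'a '((1 2) (3) ()))
;; => ((A 1 2) (A 3) (A))
```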
{ "domain": "codereview.stackexchange", "id": 45197, "tags": "matrix, common-lisp" }
Are stars really "burning balls of gas"?
Question: Even the NASA website refers to stars as "big ball of gas". It's a common statement that's popularized by movies and popular culture. However, stars are technically not in a gaseous state - they're in a state of plasma. So do stars qualify as being made of "gas" as the proverb would suggest, or is that considered a misconception? Answer: While I agree that it boils down to semantics, I actually disagree on the scientific use of the term in the comments. In astronomy, we know there's a difference between gas and plasma, but we almost always use the term "gas" when talking about what's in stars. E.g. "the fraction of gas locked up in stars" (as opposed to in the interstellar medium). We also talk about the metallicity of gas, meaning both gas and plasma. I think we only use the term plasma when specifically discussing properties that are unique to plasma, like being "frozen" in magnetic fields. And googling ["ionized gas" -plasma] (the '-plasma' is to get rid of articles explaining what plasma is) returns 243,000 hits, of which by far the most seem to be scientific papers and websites. So I think the answer is: "No, stars aren't made of gas, they're made of plasma, but go ahead and call it gas, you won't offend any astronomer." However, I agree with Rob Jeffries that we don't like when you call it burning gas. It's nuclear power! It's a million$^\dagger$ times more powerful than fire! $^\dagger$Why "a million"? Because nuclear reactions are measured in MeV, while chemical reactions are measured in eV.
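The footnote's factor of a million is just the ratio of the typical energy units involved:

```latex
\frac{E_{\text{nuclear}}}{E_{\text{chemical}}} \sim \frac{1\,\text{MeV}}{1\,\text{eV}} = 10^{6}
```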
{ "domain": "astronomy.stackexchange", "id": 1337, "tags": "star" }
Injecting AutoMapper profiles
Question: I'd like to know if I'm doing profile configuration in the wrong place or in the wrong way. I'm following the Onion Architecture, so that restricts the direction of my dependencies towards the center.

Core

My domain model and AutoMapper facade:

namespace Core.Domain
{
    public class MyModel
    {
        // model stuff
    }
}

namespace Core.Services
{
    public interface IMapper
    {
        object Map(object source, Type sourceType, Type destinationType);
    }
}

Infrastructure

AutoMapper facade implementation:

namespace Infrastructure.Mapping
{
    public class Mapper : IMapper
    {
        private readonly IMappingEngine _mappingEngine;

        public Mapper(IMappingEngine mappingEngine)
        {
            _mappingEngine = mappingEngine;
        }

        public object Map(object source, Type sourceType, Type destinationType)
        {
            return _mappingEngine.Map(source, sourceType, destinationType);
        }
    }
}

UI

This is my controller and view model. I'm using AutoMapper via a filter, following this example.

namespace UI.Controllers
{
    public class HomeController : Controller
    {
        [AutoMap(typeof(MyModel), typeof(MyViewModel))]
        public ActionResult Index()
        {
            var myItem = _myRepository.GetById(0);
            return View(myItem);
        }
    }
}

namespace UI.ViewModels
{
    public class MyViewModel
    {
        // view stuff
    }
}

Dependency Resolution

This is where I have my doubts:

namespace DependencyResolution
{
    public class MappingModule : NinjectModule
    {
        public override void Load()
        {
            Mapper.Initialize(cfg => cfg.AddProfile(new MyProfile()));
            Bind<IMappingEngine>().ToMethod(ctx => Mapper.Engine);
            Bind<IMapper>().To<Mapping.Mapper>();
            Kernel.BindFilter<AutoMapFilter>(FilterScope.Controller, 0)
                  .WhenActionMethodHas<AutoMapAttribute>()
                  .WithConstructorArgumentFromActionAttribute<AutoMapAttribute>("sourceType", att => att.SourceType)
                  .WithConstructorArgumentFromActionAttribute<AutoMapAttribute>("destType", att => att.DestType);
        }
    }

    public class MyProfile : Profile
    {
        protected override void Configure()
        {
            Mapper.CreateMap<MyModel, MyViewModel>().ForMember(...);
        }
    }
}

Questions

- Is the way I bind to AutoMapper wrong?
- Is this the wrong place for the profile (keep in mind the dependency restriction)? In the ideal world I would have placed Mapper.CreateMap<MyModel, MyViewModel>().ForMember(...) in Global.asax, but how do I expose CreateMap without referencing AutoMapper?
- Is there anything else you have noticed?

Answer: What is the purpose of the IMapper interface and Mapper class? It looks to me that they are just wrapping the IMappingEngine interface and MappingEngine class. While this is a good method when you have a third-party class that doesn't have an interface, I think it is overkill here. Why don't you just use IMappingEngine where you need that functionality? If you are going to keep your Mapper class, I would rename it; having two Mapper classes is confusing. As for where it is, I don't have a problem with doing it this way. All the wire-up is done in one place, and it's easy to find and add to as needed.
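A minimal sketch of the answer's suggestion to depend on IMappingEngine directly (the consuming class here is illustrative, not from the original post; MyModel/MyViewModel are the question's types):

```csharp
// Hypothetical consumer that takes AutoMapper's IMappingEngine directly
// instead of going through the IMapper/Mapper facade pair.
public class MyModelMappingService
{
    private readonly IMappingEngine _mappingEngine;

    public MyModelMappingService(IMappingEngine mappingEngine)
    {
        _mappingEngine = mappingEngine;
    }

    public MyViewModel ToViewModel(MyModel model)
    {
        // Same call the Mapper facade was forwarding to, one layer fewer.
        return (MyViewModel)_mappingEngine.Map(model, typeof(MyModel), typeof(MyViewModel));
    }
}
```

Ninject can inject IMappingEngine here with the existing `Bind<IMappingEngine>().ToMethod(ctx => Mapper.Engine)` registration; nothing else changes.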
{ "domain": "codereview.stackexchange", "id": 6409, "tags": "c#, dependency-injection, asp.net-mvc-4" }
Negative pressure in general relativity
Question: Is there an intuitive way to understand what negative pressure means in general relativity, in the same way as positive pressure can be thought of as the kinetic energy of gas particles? Dark energy has positive energy but negative pressure; are there other examples with negative pressure? If dark energy were caused by vacuum fluctuations of quantum fields, then why do the vacuum fluctuations cause negative pressure instead of positive? Answer: Are there other examples with negative pressure? Any elastic object under tension has negative pressure. Soap bubbles have surface tension, which is a lower-dimensional equivalent of negative pressure. If dark energy were caused by vacuum fluctuations of quantum fields, then why do the vacuum fluctuations cause negative pressure instead of positive? Suppose the vacuum energy density is positive. This doesn't depend on the total amount of space, so the more space you have, the more total energy there is. That is, intuitively it takes work to expand space, so the vacuum carries negative pressure. It's just the opposite of how you get work when you expand a gas, so gases have positive pressure. A more formal way to put it is that the vacuum energy-momentum is by definition Lorentz invariant, so the only option is that it's proportional to the Minkowski metric $\eta^{\mu\nu}$. So it has to have a relative sign between the timelike component (energy) and the spacelike component (pressure).
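The closing Lorentz-invariance argument can be written out explicitly (a standard textbook manipulation, with metric signature $(-,+,+,+)$ assumed here):

```latex
% Lorentz invariance forces the vacuum stress-energy to be proportional
% to the metric:
T^{\mu\nu}_{\mathrm{vac}} = -\rho_{\mathrm{vac}}\,\eta^{\mu\nu}
% Compare with a perfect fluid in its rest frame, u^\mu = (1,0,0,0):
T^{\mu\nu} = (\rho + p)\,u^{\mu}u^{\nu} + p\,\eta^{\mu\nu}
\;\Rightarrow\; T^{00} = \rho,\qquad T^{ii} = p
% Matching the two gives positive energy density and negative pressure:
\rho = \rho_{\mathrm{vac}},\qquad p = -\rho_{\mathrm{vac}},\qquad
w \equiv p/\rho = -1
```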
{ "domain": "physics.stackexchange", "id": 64576, "tags": "general-relativity, pressure, dark-energy" }
Sediment boundaries produced by the largest meteors? Vredefort, Sudbury and Chicxulub
Question: Chicxulub crater is the third largest crater on Earth, and is said to have produced the KT boundary. What are the sediment layers produced by the 1st and 2nd largest craters? Also, there are many craters from before and after Chicxulub was created, but the KT boundary is the only one commonly mentioned…why? Answer: Vredefort No extraterrestrial iridium anomaly: http://adsabs.harvard.edu/full/1989LPSC...19..733F I'm not sure I fully agree with them, because there is quite a lot of Ir there and their threshold of 200 pg/g seems somewhat arbitrary to me. They claim the Ir was derived from local rocks, but that's a hell of a lot of iridium! Sudbury There is an iridium anomaly. Read all about it here (paywalled) and here (open access). but the KT boundary is the only one commonly mentioned…why? Because it's the first one found, a nice story of serendipity, because it's the last really big one, and the one that killed the dinosaurs. Everything you need to make a good story - even though it's not unique. Iridium (and other PGE) anomalies are a pretty common feature of impacts. This paper shows attempts to link mass extinction events with impacts. This is a controversial issue, because you can also link massive volcanic events with mass extinctions, not only impacts. But that's not for now - they do show iridium anomalies for all impacts. That's another paper showing iridium anomalies for impacts, much older this time.
{ "domain": "earthscience.stackexchange", "id": 1150, "tags": "earth-history, meteorite, natural-disasters, impacts" }
Quasars with differing emission / absorption redshifts
Question: I was reading this (perhaps offbeat) "anti-big-bang" article, http://www.newtonphysics.on.ca/hubble/ and the author comments "a recent catalog of objects having very large redshifts shows that among 109 quasi-stellar objects for which both absorption and emission lines could be measured, the value of the absorption redshift of a given object is always different from the one measured in emission for the same object." So, I had trouble Googling this. What is a particularly good "example object" of this phenomenon (if any), perhaps where I can see the issues arising discussed? Is there a prosaic or generally accepted "conventional" explanation? BTW I've just realised the article is perhaps from 1988 (it's somewhat confusingly also dated 2012) - so this may be completely out of date. Answer: I had a quick look at the paper - it's mostly nonsense. The intrinsic light from a quasar is completely dominated by its emission line spectrum and a mostly featureless continuum. The observed wavelength of the emission lines compared to the rest wavelength give the true redshift of the quasar. Absorption lines in quasar spectra are predominantly due to foreground gas clouds at lower redshifts than the more distant quasar source. Therefore the absorption lines do evidence a lower redshift, but that's because they are due to gas that is unassociated with the quasar. Here's a picture from a Hamburg Sternwarte description of these absorption lines, that illustrates the situation. Absorption lines are formed due to absorption of the quasar continuum by gas clouds (or maybe even much fainter normal galaxies) A, B and C, which are at progressively lower redshifts (often multiple absorption systems can be found and can be used not only to place the distance of the intervening cloud/galaxy but also say something about its chemical composition). 
Spectrum D shows how the measured spectrum at the telescope will contain a highly redshifted quasar's light with superimposed absorption bands due to the lower redshift clouds.
{ "domain": "physics.stackexchange", "id": 16869, "tags": "cosmology, astrophysics, gravitational-redshift" }
Return the lexicographic order
Question: Given a set of words that is lexicographically sorted, return the lexicographic order. E.g:

abc
acd
bcc
bed
bdc
dab

The order of letters for the given example would be a->b->c->e->d.

Time:
Part-1: Complexity of constructing the graph: \$O(n * m)\$, where \$n\$ is the number of words and \$m\$ is the max length of any word.
Part-2: Topological sort: \$O(V + E)\$, where \$V\$ is the number of vertices and \$E\$ is the number of edges.
Space: \$O(V + E)\$ - the entire graph is stored.

Looking for a code review, optimizations and best practices. Also verifying how the final answer for complexity would look. E.g: would it be \$O(n*m + E + V)\$?

class GraphLexico<T> implements Iterable<T> {

    /* A map from nodes in the graph to sets of outgoing edges. Each
     * set of edges is represented by a map from edges to doubles. */
    private final Map<T, List<T>> graph = new HashMap<T, List<T>>();

    /**
     * Adds a new node to the graph. If the node already exists then it's a
     * no-op.
     *
     * @param node Adds to a graph. If node is null then this is a no-op.
     * @return true if node is added, false otherwise.
     */
    public boolean addNode(T node) {
        if (node == null) {
            throw new NullPointerException("The input node cannot be null.");
        }
        if (graph.containsKey(node)) return false;
        graph.put(node, new ArrayList<T>());
        return true;
    }

    /**
     * Given the source and destination node it would add an arc from source
     * to destination node. If an arc already exists then the value would be
     * updated with the new value.
     *
     * @param source the source node.
     * @param destination the destination node.
     * @throws NullPointerException if source or destination is null.
     * @throws NoSuchElementException if either source or destination does not exist.
     */
    public void addEdge(T source, T destination) {
        if (source == null || destination == null) {
            throw new NullPointerException("Source and Destination, both should be non-null.");
        }
        if (!graph.containsKey(source) || !graph.containsKey(destination)) {
            throw new NoSuchElementException("Source and Destination, both should be part of graph");
        }
        /* A node would always be added so no point returning true or false */
        graph.get(source).add(destination);
    }

    /**
     * Given a node, returns the edges going outward that node,
     * as an immutable map.
     *
     * @param node The node whose edges should be queried.
     * @return An immutable view of the edges leaving that node.
     * @throws NullPointerException If input node is null.
     * @throws NoSuchElementException If node is not in graph.
     */
    public List<T> edgesFrom(T node) {
        if (node == null) {
            throw new NullPointerException("The node should not be null.");
        }
        List<T> edges = graph.get(node);
        if (edges == null) {
            throw new NoSuchElementException("Source node does not exist.");
        }
        return Collections.unmodifiableList(graph.get(node));
    }

    /**
     * Returns the iterator that travels the nodes of a graph.
     *
     * @return an iterator that travels the nodes of a graph.
     */
    @Override
    public Iterator<T> iterator() {
        return graph.keySet().iterator();
    }
}

public final class LexicographicalSort {

    private LexicographicalSort() {}

    /**
     * Returns the list of characters in lexicographically sorted order.
     *
     * Note that if the entire information needed to determine lexicographical
     * order is not present then results are unreliable.
     *
     * @param dictionary the list of words ordered in lexicographical order
     */
    public static List<Character> lexigoGraphicOrder(List<String> dictionary) {
        final GraphLexico<Character> graph = new GraphLexico<Character>();
        for (int i = 0; i < dictionary.size() - 1; i++) {
            createGraph(dictionary.get(i), dictionary.get(i + 1), graph);
        }
        return topologicalSort(graph);
    }

    /**
     * Creates a DAG based on the lexicographical order.
     *
     * @param string1 the first string with higher placement/priority in dictionary
     * @param string2 the second string with lesser placement/priority in dictionary
     * @param graph the DAG to be constructed.
     */
    private static void createGraph(String string1, String string2, GraphLexico<Character> graph) {
        char[] ch1 = string1.toCharArray();
        char[] ch2 = string2.toCharArray();
        // pick the smaller length
        int minLength = ch1.length > ch2.length ? ch2.length : ch1.length;
        for (int i = 0; i < minLength; i++) {
            if (ch1[i] != ch2[i]) {
                graph.addNode(ch1[i]);
                graph.addNode(ch2[i]);
                graph.addEdge(ch1[i], ch2[i]);
                return;
            }
        }
    }

    /**
     * Running the topological sort on the constructed graph.
     *
     * @param graph the DAG determining priority of characters
     * @return the characters in lexicographic order
     */
    private static List<Character> topologicalSort(GraphLexico<Character> graph) {
        final GraphLexico<Character> reverseGraph = reverseGraph(graph);
        final List<Character> result = new ArrayList<Character>();
        final Set<Character> visited = new HashSet<Character>();
        final Set<Character> finished = new HashSet<Character>();
        for (Character node : reverseGraph) {
            explore(node, result, visited, finished, reverseGraph);
        }
        return result;
    }

    private static void explore(Character node, List<Character> result, Set<Character> visited,
                                Set<Character> finished, GraphLexico<Character> reverseGraph) {
        if (visited.contains(node)) {
            if (finished.contains(node)) return;
            else throw new IllegalArgumentException("Cycle detected.");
        }
        visited.add(node);
        for (Character currNode : reverseGraph.edgesFrom(node)) {
            explore(currNode, result, visited, finished, reverseGraph);
        }
        finished.add(node);
        result.add(node);
    }

    private static GraphLexico<Character> reverseGraph(GraphLexico<Character> graph) {
        final GraphLexico<Character> graphRev = new GraphLexico<Character>();
        for (Character node : graph) {
            graphRev.addNode(node);
        }
        for (Character node : graph) {
            for (Character neighbors : graph.edgesFrom(node)) {
                graphRev.addEdge(neighbors, node);
            }
        }
        return graphRev;
    }
}

Followed by testing:

public class LexicographicalSortTest {

    @Test
    public void testLexicoGraphicalSort() {
        List<String> list = new ArrayList<String>();
        list.add("abc");
        list.add("acd");
        list.add("bcc");
        list.add("bed");
        list.add("bdc");
        list.add("dab");

        List<Character> expectedList = new ArrayList<Character>();
        expectedList.add('a');
        expectedList.add('b');
        expectedList.add('c');
        expectedList.add('e');
        expectedList.add('d');

        assertEquals(expectedList, LexicographicalSort.lexigoGraphicOrder(list));
    }
}

Answer: Your code has lots of strengths. Its main weakness is the lack of explanation for the algorithm.

- Why are you building the reverse graph? A reverse post order visiting of the graph is a topo sort order. Just get the post order and then reverse it! Saves lots of effort and space.
- I can't grok how your topo sort is finding DAG roots. You can't afford to start searching at nodes that don't have zero parents. Keeping a parent count for each character makes it super-easy to find the roots.
- Your cycle detection is too complex. A cycle occurs if and only if a recursive call to explore encounters a node that's already on the stack. Just keep an "active" set. Add a node before a recursive call to explore and remove it afterward. If you encounter a node that's active, you've found a cycle. The active set will generally be much smaller than the node set, so this is a tiny performance win as well.
Additional things to think about:

- You can use a LinkedHashSet to maintain the post order of visits and keep track of already-visited nodes efficiently with one data structure instead of two.
- Using sets instead of lists of successors in the graph removes a run time dependency on graph degree. This is a small thing.
- Learn how to use JavaDoc format comments. It's worth the time in the long run.
- A few of your names are deceptive. A dictionary in code usually refers to a hash of strings. Etc., etc.
- Not sure why you broke strings into character arrays.
- It's simpler to have the graph edge adder create missing nodes than to deem missing nodes an error condition.

I liked this problem so well I coded my own version to check some of the ideas above:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;

public class LexiDetective {

    /**
     * A little unit test of the lexicographic detective.
     */
    public static void main(String[] args) {
        String[] words = { "abc", "acd", "bcc", "bed", "bdc", "dab" };
        CharacterGraph graph = new CharacterGraph();
        graph.insertWordList(words);
        try {
            List<Character> order = graph.topoSort();
            for (Character ch : order) {
                System.out.print(ch);
            }
            System.out.println();
        } catch (Exception ex) {
            // Topo sort might find cycle and raise this exception.
            System.err.println(ex.getMessage());
        }
    }
}

/**
 * Symbol graph specialized for characters and enhanced to deal
 * with lexicographically ordered word lists.
 */
class CharacterGraph extends SymbolGraph<Character> {

    /**
     * Insert a lexicographically ordered word pair into the
     * character graph using the first non-equal character pair to
     * add an edge.
     *
     * @param x first word in lexicographic order
     * @param y second word
     */
    void insertWordPair(String x, String y) {
        int len = Math.min(x.length(), y.length());
        for (int i = 0; i < len; i++) {
            char cx = x.charAt(i);
            char cy = y.charAt(i);
            if (cx != cy) {
                addEdge(cx, cy);
                break;
            }
        }
    }

    /**
     * Insert a lexicographically ordered word list into the character
     * graph by inserting each adjacent word pair.
     *
     * @param list lexicographically ordered word list
     */
    void insertWordList(String[] list) {
        for (int i = 0; i < list.length - 1; i++) {
            insertWordPair(list[i], list[i + 1]);
        }
    }
}

/**
 * A class for graphs with symbols as nodes. Includes topological sort
 * in symbol order.
 *
 * @param <Symbol> ordered symbol type
 */
class SymbolGraph<Symbol> {

    /**
     * Information about a symbol and its successors in the graph.
     *
     * @param <T> symbol type
     */
    static class NodeData<T> {
        /**
         * Count of symbols with this one as successor.
         */
        int parentCount = 0;
        /**
         * Set of successor symbols.
         */
        final Set<T> successors = new HashSet<>();
    }

    /**
     * Graph adjacencies stored as a map from symbols to node data. The node
     * data stores information about the symbol and its successors.
     */
    Map<Symbol, NodeData> adjacencies = new HashMap<>();

    /**
     * Add a node to the graph unless it's already there.
     *
     * @param a datum for node
     * @return node, either existing or newly created
     */
    NodeData<Symbol> addNode(Symbol a) {
        NodeData<Symbol> data = adjacencies.get(a);
        if (data == null) {
            data = new NodeData<>();
            adjacencies.put(a, data);
        }
        return data;
    }

    /**
     * Add an edge to the graph unless it's already there.
     *
     * @param a edge origin
     * @param b edge destination
     */
    void addEdge(Symbol a, Symbol b) {
        NodeData<Symbol> aData = addNode(a);
        if (!aData.successors.contains(b)) {
            aData.successors.add(b);
            NodeData<Symbol> bData = addNode(b);
            ++bData.parentCount;
        }
    }

    /**
     * Visit the graph rooted at given symbol in post order, accumulating an
     * ordered set of visited symbols.
     *
     * @param a the symbol for the (sub)graph to search
     * @param visited an ordered set that gives the post visit order
     */
    void postOrderVisit(Symbol a, Set<Symbol> ancestors, LinkedHashSet<Symbol> visited) throws Exception {
        if (ancestors.contains(a)) {
            throw new Exception("Cycle detected. No post order exists.");
        }
        if (!visited.contains(a)) {
            NodeData<Symbol> data = adjacencies.get(a);
            if (data != null) {
                ancestors.add(a);
                for (Symbol aSuccessor : data.successors) {
                    postOrderVisit(aSuccessor, ancestors, visited);
                }
                visited.add(a);
                ancestors.remove(a);
            }
        }
    }

    /**
     * Topologically sort the symbol graph and return the result.
     *
     * @return topological sort of symbols
     */
    List<Symbol> topoSort() throws Exception {
        Set<Symbol> ancestors = new HashSet<>();
        LinkedHashSet<Symbol> visited = new LinkedHashSet<>();
        // Loop through all the symbols and their data.
        for (Entry<Symbol, NodeData> pair : adjacencies.entrySet()) {
            // Search each root (symbol with no parents).
            if (pair.getValue().parentCount == 0) {
                postOrderVisit(pair.getKey(), ancestors, visited);
            }
        }
        // Reverse the post order visit to get a topo sort.
        List<Symbol> order = new ArrayList<>(visited);
        Collections.reverse(order);
        return order;
    }
}
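The answer's LinkedHashSet suggestion leans on the set remembering insertion order; here is a standalone sanity check of that property (demo class and values are mine, not part of the review):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class PostOrderDemo {
    // LinkedHashSet remembers first-insertion order and ignores duplicates,
    // so it can serve as both the "visited" set and the post-order list.
    static List<Integer> reversedInsertionOrder(int[] items) {
        Set<Integer> visited = new LinkedHashSet<>();
        for (int i : items) {
            visited.add(i); // duplicates are ignored, first-insertion order kept
        }
        List<Integer> order = new ArrayList<>(visited);
        Collections.reverse(order); // reversed post order = topological order
        return order;
    }

    public static void main(String[] args) {
        // First occurrences appear in order 3, 1, 2; reversed gives [2, 1, 3].
        System.out.println(reversedInsertionOrder(new int[]{3, 1, 3, 2, 1}));
    }
}
```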
{ "domain": "codereview.stackexchange", "id": 7307, "tags": "java, algorithm, strings" }
Files manager API in C++
Question: I wrote a file manager API in C++ to make read/write operations easier in cases that need big structured data to/from multiple files. I want a review of the library: is it readable and easy to understand, and does it serve its purpose?

File.hpp

#ifndef FILESAPI_FILE_H
#define FILESAPI_FILE_H

#include <iostream>
#include <mutex>
#include <fstream>
#include <vector>
#include <boost/assert.hpp>

#include "../utilities/DesignText.hpp"
#include "../utilities/Exceptions.hpp"

namespace FilesApi {

    /// Used in read/write for the non-vector overloaded operators. e.g.: file >> rw_s<T>{val, 1};
    template<typename T>
    struct rw_s {
        T* val;
        size_t val_size;

        rw_s(T &value, size_t arr_size = 1) : val(&value), val_size(arr_size) {
            assert(arr_size > 0);
        }

        rw_s(T *value, size_t arr_size = 1) : val(value), val_size(arr_size) {
            assert(arr_size > 0);
            assert(value != nullptr);
        }
    };

    /// Wrapper function for creation of an rw_s object, without the need to specify the type after the function name.
    /// Instead of calling: f << rw_s<int>(a, size);
    /// Call: f << rw_soft(a, size);
    template<typename T>
    rw_s<T> rw_soft(T &value, size_t arr_size = 1) {
        return rw_s<T>(value, arr_size);
    }

    template<typename T>
    rw_s<T> rw_soft(T *value, size_t arr_size = 1) {
        return rw_soft(*value, arr_size);
    }

    /**
     * >> if file_mode is OPEN_IN_ACTION:
     * SINGLE_AND_DONE - read/write a single time, and then close the file.
     * SINGLE_AND_MORE - read/write a single time, but don't close the file yet. After a single read/write the mode will automatically update to SINGLE_AND_DONE mode.
     * MULTIPLE - close the file only on the programmer's order; until then the file will remain open.
     */
    enum class ReadWriteMode {
        SINGLE_AND_DONE,
        SINGLE_AND_MORE,
        MULTIPLE,
        DONE
    };

    enum class FileAction {
        READ,
        WRITE,
        NONE
    };

    enum class FileMode {
        ALWAYS_OPEN,
        OPEN_IN_ACTION
    };

    class File {
    private:
        bool is_ready;
        std::string name;
        std::string path;
        FileMode file_mode;
        ReadWriteMode read_write_mode;
        int multiple_times_left;
        FileAction file_action;
        std::mutex read_write_mutex;
        std::fstream file_ptr;
        bool is_open;
        std::ios_base::openmode read_flags;
        std::ios_base::openmode write_flags;
        bool use_exceptions;

        /**
         * Open file in a specific format.
         * \param mode_flags - fstream.open() flags.
         * \param new_file_action - Open purpose.
         */
        void open(std::ios_base::openmode mode_flags, const FileAction &new_file_action);

        /**
         * Auto update for the file's mode (READ / WRITE / NONE).
         */
        void update_rwm();

        /**
         * Close file.
         * \param automatic - Close request from the API (true) or from the user (false).
         */
        void close(bool automatic);

        /**
         * Is file ready for read/write actions. Exception if file not ready.
         * \return Is file name not empty.
         */
        bool is_file_ready(int);

    public:
        /**
         * Ctor
         * \param file_name - if file_path == "" => path/to/file/filename.bin, else filename.bin
         * \param exceptions - Throw exceptions on errors, or use bold cout messages.
         * \param file_path - file's path.
         */
        explicit File(const std::string &file_name, bool exceptions = false, const std::string &file_path = "");

        /**
         * Close the file.
         */
        ~File();

        /**
         * Force close the file.
         */
        void close();

        /**
         * Set file's name.
         * \param new_name - New file's name.
         */
        void set_name(const std::string &new_name);

        /**
         * Set file's name.
         * \param new_name - New file's name.
         */
        File &operator=(const std::string &new_name);

        /**
         * Get file's name.
         * \return File's name.
         */
        std::string get_name();

        /**
         * Is file ready for read/write actions. Without exception if file not ready.
         * \return Is file name not empty.
         */
        bool is_file_ready();

        /**
         * Init current file's mode.
         * \param mode - How many reads/writes until the file will close.
         * \param multiple_times - if mode is MULTIPLE, how many times (-1 for unknown - won't close the file without the programmer's order/interrupt).
         */
        void init_read_write_mode(const ReadWriteMode &mode, int multiple_times = -1);

        /**
         * Init read fstream flags.
         * \param read_flags - When the open function in read mode occurs, these flags will be in use.
         */
        void init_read_flags(std::ios_base::openmode read_flags = std::ios_base::in);

        /**
         * Init write fstream flags.
         * \param write_flags - When the open function in write mode occurs, these flags will be in use.
         */
        void init_write_flags(std::ios_base::openmode write_flags = std::ios::out | std::ios::binary | std::ios::in);

        /**
         * Read to non-vector variable.
         * \tparam T - variable type
         * \param val - variable address
         * \param data_size - in case of an array, the array's size.
         * \return this File object.
         */
        template<class T>
        File &read(T *val, size_t data_size = 1);

        /**
         * Read to vector variable.
         * \tparam T - vector type
         * \param val - vector to read into (has to be initialized with the size of the inputs' count).
         * \return this File object.
         */
        template<class T>
        File &read(std::vector<T> &val);

        /**
         * Write non-vector variable.
         * \tparam T - variable type
         * \param val - variable address
         * \param data_size - in case of an array, the array's size.
         * \return this File object.
         */
        template<class T>
        File &write(const T *val, size_t data_size = 1);

        /**
         * Write vector variable.
         * \tparam T - vector type
         * \param val - vector to write.
         * \return this File object.
         */
        template<class T>
        File &write(const std::vector<T> &val);

        /**
         * Read to vector.
         * \tparam T - vector type
         * \param data - vector to read into
         * \return this File object.
         */
        template<class T>
        File &operator>>(std::vector<T> &data);

        /**
         * Read to non-vector.
         * \tparam T - variable type
         * \param info - {
         *     val - non-vector variable to read into
         *     val_size - in case of an array, the array's size (else leave as default 1)
         * }
         * \return this File object
         */
        template<class T>
        File &operator>>(const rw_s<T> &info);

        /**
         * Write vector to file.
         * \tparam T - vector type
         * \param data - vector to write
         * \return this File object
         */
        template<class T>
        File &operator<<(const std::vector<T> &data);

        /**
         * Write non-vector to file.
         * \tparam T - variable type
         * \param info - {
         *     val - non-vector variable to write
         *     val_size - in case of an array, the array's size (else leave as default 1)
         * }
         * \return this File object
         */
        template<class T>
        File &operator<<(const rw_s<T> &info);
    };

    template<class T>
    File &File::read(T *val, const size_t data_size) {
        if (!is_file_ready(0)) {
            return *this;
        }
        open(read_flags, FileAction::READ);
        std::lock_guard<std::mutex> guard(read_write_mutex);
        file_ptr.read((char *) (val), sizeof(T) * data_size);
        update_rwm();
        return *this;
    }

    template<class T>
    File &File::write(const T *val, const size_t data_size) {
        if (!is_file_ready(0)) {
            return *this;
        }
        open(write_flags, FileAction::WRITE);
        std::lock_guard<std::mutex> guard(read_write_mutex);
        file_ptr.write(reinterpret_cast<const char *>(val), sizeof(T) * data_size);
        update_rwm();
        return *this;
    }

    template<class T>
    File &File::read(std::vector<T> &val) {
        if (!is_file_ready(0)) {
            return *this;
        }
        open(read_flags, FileAction::READ);
        std::lock_guard<std::mutex> guard(read_write_mutex);
        file_ptr.read(reinterpret_cast<char *>(val.data()), sizeof(T) * val.size());
        update_rwm();
        return *this;
    }

    template<typename T>
    File &File::write(const std::vector<T> &val) {
        if (!is_file_ready(0)) {
            return *this;
        }
        open(write_flags, FileAction::WRITE);
        std::lock_guard<std::mutex> guard(read_write_mutex);
        file_ptr.write(reinterpret_cast<const char *>(val.data()), sizeof(T) * val.size());
        update_rwm();
        return *this;
    }

    template<class T>
    File &File::operator>>(std::vector<T> &data) {
        return read(data);
    }

    template<class T>
    File &File::operator>>(const rw_s<T> &info) {
        return read(info.val, info.val_size);
    }

    template<class T>
    File &File::operator<<(const std::vector<T> &data) {
        return write(data);
    }

    template<class T>
    File &File::operator<<(const rw_s<T> &info) {
        return write(info.val, info.val_size);
    }
}

#endif //FILESAPI_FILE_H

FilesManager.hpp

#ifndef FILESAPI_FILESMANAGER_H
#define FILESAPI_FILESMANAGER_H

#include <iostream>
#include <vector>
#include <map>
#include <memory>

#include "File.hpp"

namespace FilesApi {

    using add_data = std::tuple<const std::string, const std::string>;

    class FilesManager {
    private:
        std::map<std::string, std::shared_ptr<File>> files;
        size_t max_files; // zero for unlimited
        std::string files_path; // Leave "" if there is no single path for all of the files
        bool use_exceptions;

        void remove_unusable_files();

    public:
        /**
         * Ctor
         * \param exceptions - Throw exceptions on errors, or use bold cout messages.
         * \param max_files - Maximum number of files to manage in this FilesManager object (0 for unlimited).
         * \param files_path - if files_path == "" => when associating a new file you will have to supply the full
         *                     file path, e.g: "path/to/file/filename.bin";
         *                     else supply only the file name, e.g: "filename.bin", if files_path == "path/to/file/"
         */
        explicit FilesManager(bool exceptions = false, size_t max_files = 0, const std::string &files_path = "");

        /**
         * Add new file.
         * \param id - file id (will be used to get this File object).
         * \param file - file's name or path (if files_path == "").
         */
        void add(const std::string &id, const std::string &file);

        /**
         * Remove file.
         * \param id - file's id
         */
        void remove(const std::string &id);

        /**
         * Get file.
         * \param id - file's id
         */
        File &get(const std::string &id);

        /**
         * Add new file.
         * \param data - tuple(0) => file id. tuple(1) => file name or path (if files path is "").
         */
        void operator+=(const add_data &data);

        /**
         * Get file.
         * \param id - file's id
         */
        File &operator[](const std::string &id);

        /**
         * Remove file.
         * \param id - file's id
         */
        void operator-=(const std::string &id);
    };
}

#endif //FILESAPI_FILESMANAGER_H

File.cpp

#include "../headers/File.hpp"

namespace FilesApi {

    File::File(const std::string &file_name, bool exceptions, const std::string &file_path) {
        name = file_name;
        path = file_path;
        is_ready = !name.empty();
        use_exceptions = exceptions;
        is_open = false;
        file_mode = FileMode::OPEN_IN_ACTION;
        read_write_mode = ReadWriteMode::DONE;
        file_action = FileAction::NONE;
        write_flags = std::ios::out | std::ios::binary | std::ios::in;
        read_flags = std::ios_base::in;
    }

    File::~File() {
        std::lock_guard<std::mutex> guard(read_write_mutex);
        close();
    }

    void File::open(std::ios_base::openmode mode_flags, const FileAction &new_file_action) {
        std::lock_guard<std::mutex> guard(read_write_mutex);
        if (!is_file_ready(0)) {
            if (is_open) {
                file_ptr.close();
                is_open = false;
            }
            return;
        }
        if (file_action != new_file_action) {
            file_ptr.close();
            is_open = false;
            if (file_action != FileAction::NONE) {
                std::cout << DesignText::make_colored("Pay attention: file mission replaced by another one.
(file closed)", DesignText::Color::RED, false) << std::endl; } } file_action = new_file_action; if (!is_open) { file_ptr.open(path + name, mode_flags); if (file_ptr.fail()) { is_open = false; if (!use_exceptions) { std::cout << DesignText::make_colored("Error Opening file: " + path + name, DesignText::Color::RED, true) << std::endl; } else { throw FileOpenException(path + name); } } is_open = true; std::cout << DesignText::make_colored("File has safely opened.", DesignText::Color::GREEN, false) << std::endl; } } void File::close(bool automatic) { if ((!automatic) || (file_mode == FileMode::OPEN_IN_ACTION)) { if (is_open) { file_ptr.close(); is_open = false; read_write_mode = ReadWriteMode::DONE; file_action = FileAction::NONE; std::cout << DesignText::make_colored("File has safely closed.", DesignText::Color::GREEN, false) << std::endl; } } } void File::close() { close(false); } void File::update_rwm() { if (!is_file_ready(0)) { return; } switch (read_write_mode) { case ReadWriteMode::SINGLE_AND_DONE: case ReadWriteMode::DONE: close(true); break; case ReadWriteMode::SINGLE_AND_MORE: read_write_mode = ReadWriteMode::SINGLE_AND_DONE; break; case ReadWriteMode::MULTIPLE: if (multiple_times_left > -1 && !--multiple_times_left) { multiple_times_left = -1; close(true); } break; } } void File::init_read_write_mode(const ReadWriteMode &new_mode, const int multiple_times) { read_write_mode = new_mode; multiple_times_left = multiple_times; } void File::init_read_flags(const std::ios_base::openmode new_read_flags) { read_flags = new_read_flags; } void File::init_write_flags(const std::ios_base::openmode new_write_flags) { write_flags = new_write_flags; } void File::set_name(const std::string &new_name) { if (!new_name.empty()) { name = new_name; is_ready = true; return; } if (name.empty()) { is_ready = false; } } std::string File::get_name() { return name; } bool File::is_file_ready(int) { if (!is_ready) { if (!use_exceptions) { std::cout << DesignText::make_colored("Pay 
attention: file name is empty. can't open this file.", DesignText::Color::RED, true) << std::endl; } else { throw FileNotReadyException(); } return false; } return true; } bool File::is_file_ready() { return is_ready; } File &File::operator=(const std::string &new_name) { set_name(new_name); return *this; } } FilesManager.cpp #include "../headers/FilesManager.hpp" namespace FilesApi { FilesManager::FilesManager(bool exceptions, size_t max_files, const std::string &files_path) : max_files(max_files), files_path(files_path), use_exceptions(exceptions) { } void FilesManager::add(const std::string &id, const std::string &file) { remove_unusable_files(); if (max_files == 0 || files.size() + 1 < max_files) { files.insert(std::pair<std::string, std::shared_ptr<File>>(id, std::make_shared<File>(file, use_exceptions, files_path))); } } void FilesManager::remove(const std::string &id) { remove_unusable_files(); files.erase(id); } File &FilesManager::get(const std::string &id) { remove_unusable_files(); File *ret_file = files[id].get(); if (ret_file == nullptr) { files[id] = std::make_shared<File>("", use_exceptions, files_path); ret_file = files[id].get(); } return *ret_file; } void FilesManager::operator+=(const add_data &data) { add(std::get<0>(data), std::get<1>(data)); } File &FilesManager::operator[](const std::string &id) { return get(id); } void FilesManager::operator-=(const std::string &id) { remove(id); } void FilesManager::remove_unusable_files() { for (auto &file : files) { if (file.second && !file.second->is_file_ready()) { files.erase(file.first); } } } } Use example: #include <iostream> #include <vector> #include <complex> #include "../src/headers/FilesManager.hpp" using namespace std; using namespace FilesApi; int mainFilesManagerOperatorsTest() { FilesManager fm(false, 0, "../TestFiles/"); string files[] = {"test_file.bin", "test_file2.bin"}; fm["1"] = files[0]; vector<complex<float>> wdata = {{1, 9}, {3, 75}, {213.34, 21.4}, {153.1, 15.85}}; 
vector<complex<float>> rdata(wdata.size()); fm["1"].init_read_write_mode(ReadWriteMode::SINGLE_AND_DONE); //fm.get("1").write(wdata.data(), wdata.size()); // Use it as complex<float> array. Possible. fm["1"].write(wdata); fm["1"].init_read_write_mode(ReadWriteMode::SINGLE_AND_DONE); fm["1"].read(rdata); fm += add_data("5", files[1]); // Add file to collection int a = 12; int b; fm["5"] << rw_soft(a); // Work fm["5"].write(&a); // Work fm["5"] >> rw_soft(b); // Work cout << b << endl; // Prints 12 fm -= "5"; // Remove the file from collection fm["5"] << rw_soft(a); // Error fm["5"].write(&a); // Error fm["5"] >> rw_soft(b); // Error //fm["2"] = files[1]; fm += add_data("2", files[1]); for (size_t i = 0; i < rdata.size(); i++) { cout << rdata[i] << endl; } fm["2"].init_read_write_mode(ReadWriteMode::MULTIPLE); for (size_t i = 0; i < 100; i++) { fm["2"].write(&i); } //f.init_read_write_mode(ReadWriteMode::MULTIPLE); size_t j; for (size_t i = 0; i < 100; i++) { fm["2"].read(&j); cout << j << " "; } cout << endl; return 0; } You can find more examples/code utilities on GitHub. Answer: File class: rw_s(T &value, size_t arr_size = 1); ... template<typename T> rw_s<T> rw_soft(T &value, size_t arr_size = 1); ... File &read(T *val, size_t data_size = 1); The default size argument is likely to cause problems. I'd suggest separating the interface for reading / writing arrays from single values. Reading / writing a single value can be done by taking a reference argument and no size argument. Reading and writing an array should take both parameters, and not have a default size (because the user will always want to specify it). They can still use the same underlying implementation. template<class T> File &File::operator >> (std::vector<T> &data) { return read(data); } These operators are unnecessary duplication of the read and write functions. 
Note that the C++ standard streams use operator>> and operator<< for formatted (text) input and output, whereas your file class only does binary input and output. This may cause confusion. std::vector is only one type of container, and users are likely to require support for data structures or their own custom classes. It might be better to implement the stream operators as free functions rather than member functions. That would allow users to specify their own custom versions. All these implementations can then use the read() and write() member functions (or other stream operators). file_ptr.read((char *)(val), sizeof(T) * data_size); // note: missing reinterpret_cast? file_ptr.read(reinterpret_cast<char *>(val.data()), sizeof(T) * val.size()); file_ptr.read(reinterpret_cast<char *>(val.data()), sizeof(T) * val.size()); file_ptr.write(reinterpret_cast<const char *>(val), sizeof(T) * data_size); T must be a trivially copyable type for these to work. I suggest adding a static_assert(std::is_trivially_copyable_v<T>, "..."); template<class T> File &File::read(std::vector<T> &val) { if (!is_file_ready(0)) { return *this; } open(read_flags, FileAction::READ); std::lock_guard<std::mutex> guard(read_write_mutex); file_ptr.read(reinterpret_cast<char *>(val.data()), sizeof(T) * val.size()); update_rwm(); return *this; } template<typename T> File &File::write(const std::vector<T> &val) { if (!is_file_ready(0)) { return *this; } open(write_flags, FileAction::WRITE); std::lock_guard<std::mutex> guard(read_write_mutex); file_ptr.write(reinterpret_cast<const char *>(val.data()), sizeof(T) * val.size()); update_rwm(); return *this; } One often uses a dynamic container like std::vector when one does not know the required size in advance. It would be more helpful to store the size of the vector too, instead of forcing users to deal with this themselves.
template<class T> File &read(T *val, size_t data_size = 1); Note that even basic types are different sizes on different platforms, and may be big or little endian. This may not matter to you right now, but it does make your read and write implementations inherently dangerous. A safer interface would ensure that the user specifies the size of the type they wish to write, and convert to a specific endianness before writing (and do the opposite for reading). File::File(const std::string &file_name, bool exceptions, const std::string &file_path) { name = file_name; path = file_path; is_ready = !name.empty(); use_exceptions = exceptions; is_open = false; file_mode = FileMode::OPEN_IN_ACTION; read_write_mode = ReadWriteMode::DONE; file_action = FileAction::NONE; write_flags = std::ios::out | std::ios::binary | std::ios::in; read_flags = std::ios_base::in; } Prefer to use a constructor initializer-list, instead of initializing members in the body of the constructor, so that initialization only happens once: File::File(const std::string &file_name, bool exceptions, const std::string &file_path): is_ready(!file_name.empty()), name(file_name), path(file_path), ... is_ready and is_open are unnecessary duplication. We could instead write them as functions that return !name.empty() and file_ptr.is_open(). File::~File() { std::lock_guard<std::mutex> guard(read_write_mutex); close(); } Shouldn't the guard be inside the close function, since the user can call close themselves? std::string get_name(); bool is_file_ready(); // ... and others Member functions that don't change member data must be const. bool is_file_ready(int); Use a function with a different name (e.g. throw_if_not_ready), instead of an unused int parameter. void init_read_write_mode(const ReadWriteMode &mode, int multiple_times = -1); This function does not really do any "initialization". Perhaps set_... would be better. 
void File::set_name(const std::string &new_name) It's extremely inconsistent and confusing to allow the user to change this while a file is open! The same issue exists with several other functions in the class. File &File::operator=(const std::string &new_name) { set_name(new_name); return *this; } A file name string is not logically a file. This is unnecessary duplication of set_name. void init_write_flags(std::ios_base::openmode write_flags = std::ios::out | std::ios::binary | std::ios::in); Why in for writing? void File::open(std::ios_base::openmode mode_flags, const FileAction &new_file_action) FileAction should probably be determined by checking the flags. As it is, we have duplicate information that could be inconsistent. FilesManager class: std::map<std::string, std::shared_ptr<File>> files; The file ownership is not shared with anything, so we have no reason to use a shared_ptr. std::string files_path; It turns out we are setting the path member of File to this same value for every single file! That's a huge amount of unnecessary duplication. void FilesManager::operator+=(const add_data &data) { add(std::get<0>(data), std::get<1>(data)); } File &FilesManager::operator[](const std::string &id) { return get(id); } void FilesManager::operator-=(const std::string &id) { remove(id); } We're duplicating code here, and also adding complexity with the add_data struct. Generally in C++ we should avoid overloading operators, unless it's for very common and unambiguous mathematical operations. We would expect const versions of get() and operator[]. The behavior of adding a missing file is very surprising for a get() function. void FilesManager::remove_unusable_files() This function seems to be called whenever we access or do anything with the files. It would be simpler to just avoid storing any unusable files in the first place. 
void FilesManager::add(const std::string &id, const std::string &file) File &FilesManager::get(const std::string &id) void FilesManager::remove(const std::string &id) Do we not call these from different threads? If the File class needs to be thread-safe, doesn't the FilesManager too? void FilesManager::add(const std::string &id, const std::string &file) It seems the user has to know the file name to add it to the FilesManager. In that case, they already have the means to get the filename for a given id, and there is probably no need to store the filename in the File. Purpose: Overall it's unclear what the purpose of any of this code is. It seems a mixture of the following: Binary IO - this is useful, but the current implementation isn't platform independent, and has other flaws. Thread-safe reading and writing - but being thread-safe in itself doesn't really help with anything, and the FileManager isn't thread-safe. Associating file paths with ids - but we can do that more clearly with a separate map. Some sort of automatic file opening and closing system - but we can do that better with simple RAII (std::fstream file handles close the file when the handle goes out of scope - the user can simply hold the handle for as long as they need). Writing binary data: Utilities for writing binary data are definitely helpful. But we can implement these around the existing std::ostream and std::istream through a simpler interface: enum class Endian { Big, Little }; void WriteBinary(std::ostream& stream, Endian endian, bool value); void WriteBinary(std::ostream& stream, Endian endian, char value); void WriteBinary(std::ostream& stream, Endian endian, signed char value); void WriteBinary(std::ostream& stream, Endian endian, unsigned char value); // ... (same for other pod types) void ReadBinary(std::istream& stream, Endian endian, bool& value); // ... (similar for reading) Note that correct platform independence is quite difficult. 
Internally these functions would need to: Decide on a fixed number of bits to write for each type (and handle issues with types being different sizes on different platforms). Use std::memcpy to convert to the unsigned type of the corresponding fixed size. Change the byte order from system endianness to the output endianness (if needed). Finally call stream.write(); We (or users of the library) can extend this by defining similar functions for custom types: template<class T> void WriteBinary(std::ostream& stream, Endian endian, std::vector<T> const& value); template<class KeyT, class ValueT, class PredicateT, class AllocatorT> void WriteBinary(std::ostream& stream, Endian endian, std::map<KeyT, ValueT, PredicateT, AllocatorT> const& value);
{ "domain": "codereview.stackexchange", "id": 35915, "tags": "c++, file-system" }
Creating a pendulum simulation in C#
Question: I'm working on a game that involves swinging around a pole, and I want to simulate the physics, not just hard code the rotation. I figure a pendulum is a fairly decent model to start with, but I coded a fairly basic simulation and when I run it it slowly starts to drift. Here's a rough summary of the code I have: //let g be the force due to gravity //let o be a vector from the rotation point to the object’s current position (offset) //let m be the normalized cross product of o and the object’s right hand direction (aka the direction of movement) //get the component vector of gravity in the movement direction $$\vec h = (\vec g \cdot \vec m)\vec m$$ //let v be the object’s current velocity and w be the component in the direction of movement $$w = \vec v \cdot\vec m$$ //then let c be the centripetal force $$\vec c = -w^2 \frac{\vec o }{\lVert \vec o \rVert}$$ //add the acceleration to the current velocity (multiplied by a small time interval dt) $$\vec v = \vec v + (\vec c+\vec h)\ dt$$ //change position based on velocity $$\vec x = \vec x+ \vec v\ dt$$ If I run it for a little while, the pendulum starts slowing and drifts downwards. I can fix the drift by just hardcoding it to stay within a certain range, but I'm not sure why the pendulum is slowing. Can anyone let me know if I'm doing something wrong, or if there's a better way to do this? Answer: Numerical integration of the equations of motion of a pendulum faces two separate but interconnected problems. The first problem is common to every Newtonian dynamical system. It is connected to how the unavoidable inaccuracy introduced by the numerical integration modifies the qualitative and quantitative features of the exact solution. In particular, it is of the utmost importance to understand the effect of the algorithmic error on the conserved quantities. 
In general, if an algorithm based on a fixed time step $\Delta t$ has a global error proportional to $\Delta t^n$, the energy can be conserved only at the same order. However, this is not the whole story because, depending on the algorithm, errors on a periodic motion may or may not compensate. The simplest possible algorithm, the explicit Euler one, advances from time $t$ to time $t+\Delta t$ according to $$ \begin{align} x_{n+1}&=x_{n}+v_{n}\Delta t\\ v_{n+1}&=v_{n}+a_{n}\Delta t, \end{align} $$ but it is known to be unsatisfactory for serious numerical work (in the formulas, $a_n$ is the acceleration evaluated from the position at the time $t$). It is globally a first-order algorithm, but the conservation of energy is very poor, and even with a very small time step, results show a drift in the energy. A much better algorithm is the Euler-Cromer algorithm, which is what has been used here. It is summarized by the evolution steps $$ \begin{align} v_{n+1}&=v_{n}+a_{n}\Delta t\\ x_{n+1}&=x_{n}+v_{n+1}\Delta t \end{align}. $$ It is still first-order but is much more stable than the Euler algorithm. The global deviations from energy conservation remain $O(\Delta t)$, but they oscillate, and there is no systematic drift of the energy. Interestingly, without significant additional work, in the case of forces depending only on the position, one could use a definitely better algorithm, the Störmer-Verlet: $$ \begin{align} x_{n+1}&=x_{n}+v_{n}\Delta t+\frac12 a_n \Delta t^2\\ v_{n+1}&=v_{n}+\frac12 (a_{n}+a_{n+1})\Delta t, \end{align} $$ that is a second-order and symplectic algorithm. However, this first part on the numerical integration algorithms is only half of the story when dealing with the case of a pendulum. The second half, in a way much more important, has to do with the way the constraint of a fixed length of the pendulum is incorporated in the description of the motion. 
The simplest way, and the one I would recommend, is to recast the description and the equation of motion in terms of a single angular coordinate. It requires working in terms of angular velocity, angular acceleration, and torque, expressed as functions of the angle, but the final equation is pretty simple: $$ L \ddot \theta = -g \sin(\theta), $$ and it can be integrated in the same way as the equations of motion written in cartesian coordinates. The main advantage is that the constraint of a fixed pendulum length is built in. Obtaining the cartesian coordinates from the length and the angle is trivial. The alternative is to work with two degrees of freedom (for instance, the two cartesian coordinates of the center of mass of the pendulum). In this case, one has to account for the reaction force due to the constraint of motion on a circle. On the analytical side, it is not difficult to obtain the expression for such a constraining force. However, on the side of numerical integration of the equations of motion, a constraint of this kind poses a fundamental problem. The numerical evolution of the system is accurate only at order $\Delta t^n$ ($n$ depending on the algorithm). This implies that the constraint, too, will be satisfied only at the same order. And this may be a big problem. For instance, in the pendulum case, if the length drifts systematically from its starting value, the period of the pendulum will be affected. Therefore, in the case of constrained dynamics (with this kind of so-called holonomic constraint), the numerical algorithms must be modified. The known solution to the problem (J.P. Ryckaert, G. Ciccotti, H.J.C. Berendsen, J. Comput. Phys., 23 (1977), p. 327) requires adding to the ($n+1$)-step positions and velocities a correction that enforces the constraint. The correction depends on the algorithm and usually requires the numerical solution of a system of equations at every step.
It would be possible to say more about the algorithms for constrained evolution, but I think this information should be enough for the original question. In particular, as already said, I would strongly advise using the angular coordinate description.
{ "domain": "physics.stackexchange", "id": 79952, "tags": "newtonian-mechanics, angular-momentum, rotational-dynamics, computational-physics, software" }
Downloading, decompressing and inserting data into an SQLite database
Question: I have implemented a Service, which executes three operations, in the following order: Downloading a compressed (.zip) CSV file Decompressing the CSV file Reading the CSV file and inserting the data into a SQLite database I'm more or less pleased with the performance of (1) and (2), but (3) is taking quite some time. Depending on which unit I debug on, and the size of the CSV file, it takes 45-120 seconds. I'd like this part to be a bit faster. As stated, the first and second operations currently perform well enough, but I will add the code and appreciate any input that would lead to increased performance. However, the focus is on (3), since it's the slowest as of now. (1) Downloading and (2) Decompressing Below is the code that defines the AsyncTask which downloads and decompresses the CSV file. It's started from a service, and is provided with a URL which points to the location of the file. Also provided is the helper class which handles the decompress logic. private class DownloadSaidAndDoneDataTask extends AsyncTask<String, String, String> { @Override protected void onPreExecute() { super.onPreExecute(); } @Override protected String doInBackground(String...
params) { int count; try { URL url = new URL(params[0]); URLConnection connection = url.openConnection(); connection.connect(); int lenghtOfFile = connection.getContentLength(); InputStream input = new BufferedInputStream(url.openStream(), 10 * 1024); File folder = new File(Environment.getExternalStorageDirectory().getPath() + "/Riksdagskollen"); if (!folder.exists()) { folder.mkdir(); } // Output stream to write file in SD card OutputStream output = new FileOutputStream(Environment.getExternalStorageDirectory().getPath() + "/Riksdagskollen/Sagtochgjort.zip"); byte data[] = new byte[1024]; long total = 0; while ((count = input.read(data)) != -1) { total += count; // Publish the progress which triggers onProgressUpdate method publishProgress("" + (int) ((total * 100) / lenghtOfFile)); // Write data to file output.write(data, 0, count); } // Flush output output.flush(); // Close streams output.close(); input.close(); //Unzip String zipFile = Environment.getExternalStorageDirectory() + "/Riksdagskollen/Sagtochgjort.zip"; String unzipLocation = Environment.getExternalStorageDirectory() + "/Riksdagskollen/"; Decompress d = new Decompress(zipFile, unzipLocation); d.unzip(); } catch (Exception e) { e.printStackTrace(); } return ""; } @Override protected void onProgressUpdate(String... 
values) { super.onProgressUpdate(values); //Update progressbar } @Override protected void onPostExecute(String s) { super.onPostExecute(s); downloadSaidAndDoneDataTask = null; //Initiate database insert when download and decompress are done insertCsvDataIntoDatabase(); } } Decompress.java public class Decompress { private String _zipFile; private String _location; public Decompress(String zipFile, String location) { _zipFile = zipFile; _location = location; hanldeDirectory(""); } public void unzip() { try { FileInputStream inputStream = new FileInputStream(_zipFile); ZipInputStream zipStream = new ZipInputStream(inputStream); ZipEntry zEntry = null; while ((zEntry = zipStream.getNextEntry()) != null) { Log.d("Unzip", "Unzipping " + zEntry.getName() + " at " + _location); if (zEntry.isDirectory()) { hanldeDirectory(zEntry.getName()); } else { FileOutputStream fout = new FileOutputStream( this._location + "/" + zEntry.getName()); BufferedOutputStream bufout = new BufferedOutputStream(fout); byte[] buffer = new byte[1024]; int read = 0; while ((read = zipStream.read(buffer)) != -1) { bufout.write(buffer, 0, read); } zipStream.closeEntry(); bufout.close(); fout.close(); } } zipStream.close(); Log.d("Unzip", "Unzipping complete. path : " + _location); } catch (Exception e) { Log.d("Unzip", "Unzipping failed"); e.printStackTrace(); } } public void hanldeDirectory(String dir) { File f = new File(this._location + dir); if (!f.isDirectory()) { f.mkdirs(); } } } (3) Inserting This operation is my main concern. I've managed to reduce the execution time from 25 minutes to 1-2 minutes by handling the inserts as a single transaction. If possible, I'd like it to be even faster. private class InsertSaidAndDoneDataIntoDataBase extends AsyncTask<String, String, String> { @Override protected void onPreExecute() { super.onPreExecute(); } @Override protected String doInBackground(String... 
params) { String csvFile = Environment.getExternalStorageDirectory().getPath() + "/Riksdagskollen/Sagtochgjort.csv"; BufferedReader br = null; BufferedReader brCount = null; String line; String cvsSplitBy = ","; long total = 0; try { br = new BufferedReader(new FileReader(csvFile)); brCount = new BufferedReader(new FileReader(csvFile)); dataSource.open(); dataSource.beginTransaction(); long lenghtOfFile = 0; while (brCount.readLine() != null) { lenghtOfFile++; } while ((line = br.readLine()) != null && !canceledDatabaseTransaction) { total++; String[] data = line.split(cvsSplitBy); String personId = data[0]; String documentType = data[4]; String subType = data[5]; String session = data[6]; String documentId = data[7]; String term = data[8]; String authority = data[9]; String date = data[10]; String speaker = data[11]; String speakerTime = data[15]; String numOfChars = data[16]; String personActivities = data[17]; dataSource.createSaidAndDoneEntry(personId, documentType, subType, session, documentId, term, authority, date, speaker, speakerTime, numOfChars, personActivities); publishProgress("" + (int) ((total * 100) / lenghtOfFile)); } dataSource.setTransactionSuccessful(); } catch (IOException e) { e.printStackTrace(); } finally { try { dataSource.endTransaction(); dataSource.close(); } catch (SQLException e) { e.printStackTrace(); } if (br != null) { try { br.close(); } catch (IOException e) { e.printStackTrace(); } } if (brCount != null) { try { br.close(); } catch (IOException e) { e.printStackTrace(); } } } return ""; } @Override protected void onProgressUpdate(String... values) { super.onProgressUpdate(values); //Update progress dialog } @Override protected void onPostExecute(String s) { super.onPostExecute(s); insertSaidAndDoneDataIntoDataBase = null; } } Edit Below is the code for the createSaidAndDoneEntry method. database is the reference from getReadableDatabase() which I call in a separate method prior to any database calls. 
public long createSaidAndDoneEntry(String personId, String documentType, String subType, String session, String documentId, String term, String authority, String date, String speaker, String speakerTime, String numberOfChars, String personActivities) { ContentValues values = new ContentValues(); values.put(DBHelper.COLUMN_SAIDANDDONE_PERSON_ID, personId); values.put(DBHelper.COLUMN_SAIDANDDONE_DOCUMENT_TYPE, documentType); values.put(DBHelper.COLUMN_SAIDANDDONE_SUB_TYPE, subType); values.put(DBHelper.COLUMN_SAIDANDDONE_SESSION, session); values.put(DBHelper.COLUMN_SAIDANDDONE_DOCUMENT_ID, documentId); values.put(DBHelper.COLUMN_SAIDANDDONE_TERM, term); values.put(DBHelper.COLUMN_SAIDANDDONE_AUTHORITY, authority); values.put(DBHelper.COLUMN_SAIDANDDONE_DATE, date); values.put(DBHelper.COLUMN_SAIDANDDONE_SPEAKER, speaker); values.put(DBHelper.COLUMN_SAIDANDDONE_SPEAKER_TIME, speakerTime); values.put(DBHelper.COLUMN_SAIDANDDONE_NUMBER_OF_CHARACTERS, numberOfChars); values.put(DBHelper.COLUMN_SAIDANDDONE_PERSON_ACTIVITIES, personActivities); return database.insert(DBHelper.TABLE_NAME_SAIDANDDONE, null, values); } Answer: I guess you could save quite some time by making the operations overlap: Feed the download stream into the ZipInputStream and feed the output into the database. The timing could be something like max(a, b, c) instead of a+b+c, but it could easily lead to messy code, so let's forget it for now. I'll only look at the third part, i.e., InsertSaidAndDoneDataIntoDataBase. @Override protected void onPreExecute() { super.onPreExecute(); } This does nothing, so leave it out. @Override protected String doInBackground(String... params) { String csvFile = Environment.getExternalStorageDirectory().getPath() + "/Riksdagskollen/Sagtochgjort.csv"; The filename surely should be a constant. Moreover, during the decompression you may want to check whether such an entry exists.
BufferedReader br = null; BufferedReader brCount = null; String line; String cvsSplitBy = ","; The first two have to be declared outside of the try-finally block, but the latter two don't. Always minimize the scope. Also consider something smarter than classical try-finally, maybe Lombok's @Cleanup or Guava's Closer. I'd also use only a single BufferedReader variable. After counting the lines, I'd close it and reassign a new BufferedReader to the variable. Re-reading a file is pretty wasteful, but I guess you can't simply read it all into memory (as ArrayList<String>)? long lenghtOfFile = 0; But it's not the file length. dataSource.createSaidAndDoneEntry I'm afraid this is the time-consuming part. And you're keeping the dataSource code secret. Some databases can read CSV directly; I'd check this first. Otherwise, I'm not sure about Android, but normally you should use a PreparedStatement and consider using addBatch().
{ "domain": "codereview.stackexchange", "id": 13216, "tags": "java, performance, android, sqlite" }
Interface for obtaining the bounding box for a collection of elements
Question: How could I improve this code? I used a position interface to avoid code duplication, can it be done better? Can I reduce the amount of code somehow? interface PositionInterface { double getPosition(PageAreaInterface pArea); } private double getMinPosition(Collection<PageAreaInterface> pAreas, PositionInterface pPosition) { double lMinPosition = Double.MAX_VALUE; for (PageAreaInterface lArea : pAreas) { lMinPosition = Math.min(lMinPosition, pPosition.getPosition(lArea)); } return lMinPosition; } private double getTop(Collection<PageAreaInterface> pAreas) { return getMinPosition( pAreas, new PositionInterface() { @Override public double getPosition(PageAreaInterface pArea) { return pArea.getBoundingBox().getTop(); } } ); } private double getLeft(Collection<PageAreaInterface> pAreas) { return getMinPosition( pAreas, new PositionInterface() { @Override public double getPosition(PageAreaInterface pArea) { return pArea.getBoundingBox().getLeft(); } } ); } private double getMaxPosition(Collection<PageAreaInterface> pAreas, PositionInterface pPosition) { double lMaxPosition = Double.MIN_VALUE; for (PageAreaInterface lArea : pAreas) { lMaxPosition = Math.max(lMaxPosition, pPosition.getPosition(lArea)); } return lMaxPosition; } private double getBottom(Collection<PageAreaInterface> pAreas) { return getMaxPosition( pAreas, new PositionInterface() { @Override public double getPosition(PageAreaInterface pArea) { return pArea.getBoundingBox().getBottom(); } } ); } private double getRight(Collection<PageAreaInterface> pAreas) { return getMaxPosition( pAreas, new PositionInterface() { @Override public double getPosition(PageAreaInterface pArea) { return pArea.getBoundingBox().getRight(); } } ); } Answer: Another alternative. Nutshell Don't put type into name (PageAreaInterface etc.) Lose the Hungarian; appropriately-short methods remove its utility. Create an enum and method to get specific coordinates. 
Create getMinimum and getMaximum methods in a PageAreaCollection taking an enum. Justifications (working backwards) PageAreaCollection Static utility methods strike me as un-OO, particularly when there are other options. Create a type, with type-appropriate methods: code shrinks, and reads better: min = PageAreaUtils.findMinimum(pageAreas, TOP); // Contrast with... min = pageAreas.findMinimum(TOP); Shorter is good, but which reads nicer? Which is more communicative? With static imports, you're still left with: min = findMinimum(pageAreas, TOP); // Minimum pageAreas?! No... min = findMinimumTop(pageAreas); // Doesn't read right. min = findMinimumTopIn(pageAreas); // Better? min = findMinimum(TOP).in(pageAreas); // Better? IMO the amount of extra work/code to remove the method from its rightful place (a method of a pageArea collection) isn't worth the effort. Enum in bounding box The enum and utility method could live in the collection, too, if the bounding box class isn't yours to finagle. // In bounding box, collection, or standalone. public enum POSITION { TOP, LEFT, BOTTOM, RIGHT } // In bounding box or collection. public double getPosition(POSITION pos) { switch (pos) { case TOP: return getTop(); case LEFT: return getLeft(); case BOTTOM: return getBottom(); case RIGHT: return getRight(); } throw new RuntimeException("Bad position provided: " + pos); } Min/max position locators in PageAreaCollection class This is essentially the same as the previous suggestion to use Collections.min/max, but I'd still wrap it all up so the mainline code doesn't have to see how it's implemented. This way or that, it's significantly cleaner, with an appropriately-named Comparator. 
Collection<PageArea> pageAreas; public double getMinimumPosition(MyRect.POSITION pos) { double min = Double.MAX_VALUE; for (PageArea area : pageAreas) { min = Math.min(min, area.getBoundingBox().getPosition(pos)); } return min; } public double getMaximumPosition(MyRect.POSITION pos) { double max = Double.MIN_VALUE; for (PageArea area : pageAreas) { max = Math.max(max, area.getBoundingBox().getPosition(pos)); } return max; } Essentially the same if getPosition() needs to be in the collection. My quibble with having the method in the bounding box is that it makes getting the position a bit bulky; I'd actually prefer this: area.getBoundingBox(pos) // or area.getBoundingBoxPosition(pos)? Hungarian Ew. A method that's a dozen lines long doesn't need differentiation between parameters and locals; it's obvious. At most I could see naming member variables, but even that... Meh. Interface naming A PageAreaInterface is just a PageArea. An implementation may deserve a special name, but it'd be a "special" PageArea in that it implements specific functionality likely worth naming. It's the same reason we don't name things IWhatever anymore. The interface is the Whatever, implementations provide specificity and deserve naming.
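For comparison, the same min/max-by-key idea in a short Python sketch (the dict-based areas are stand-ins for bounding boxes; nothing here comes from the original code):

```python
# Stand-ins for bounding boxes; the key string plays the role of the
# POSITION enum, and min/max replace the hand-rolled loops.
areas = [
    {"top": 3.0, "left": 1.0, "bottom": 9.0, "right": 4.0},
    {"top": 1.5, "left": 2.5, "bottom": 7.0, "right": 8.0},
]

def minimum_position(areas, pos):
    return min(area[pos] for area in areas)

def maximum_position(areas, pos):
    return max(area[pos] for area in areas)

# The bounding box of the whole collection falls out of four such calls:
box = (minimum_position(areas, "top"), minimum_position(areas, "left"),
       maximum_position(areas, "bottom"), maximum_position(areas, "right"))
```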
{ "domain": "codereview.stackexchange", "id": 913, "tags": "java" }
Choosing the desired icon when fetching results
Question: I'm creating a Q&A website, something like SO. Now I need to print a "check" icon (to mark the accepted answer) next to every answer. Now there are some conditions: Is the current user logged in? Is the current user the author of the question? Is there any accepted answer? And some other cases. Actually, I implemented that, but I did it using a lot of nested if-statements and it seems really unoptimized to me. Now I want to know: is there any better way to write this code? $results = $stm->fetchAll(PDO::FETCH_ASSOC); // Fetching all answers foreach ($results as $result){ if ($_SESSION['id'] == $author_id){ // User is logged in if($result['id'] == $AcceptedAnswerId){ // There is an accepted answer $AcceptAnswerIcon = "<a href='#'> <i style='color: green;' class='fa fa-check'></i> </a>"; } else { // There isn't an accepted answer $AcceptAnswerIcon = "<a href='#'> <i style='color: gray' class='fa fa-check'></i> </a>"; } } elseif ($_SESSION['id'] !== $author_id) { // User isn't logged in if($result['id'] == $AcceptedAnswerId){ // There is an accepted answer $AcceptAnswerIcon = "<i style='color: green;' class='fa fa-check'></i>"; } else { // There isn't an accepted answer $AcceptAnswerIcon = null; } } else { $AcceptAnswerIcon = null; } // ... I will use $AcceptAnswerIcon in here and mix it with the answer structure } // End of "foreach" block Answer: Extract Duplication to Function You could try to extract some of the duplication: function getCheckIconHTML($color) { return "<i style='color: " . $color . ";' class='fa fa-check'></i>"; } Now all the check icon HTML is in one place, which also makes it easier to change. But the HTML doesn't really depend only on the color; it also matters whether it is clickable or not, so let's add that as well: function getCheckIconHTML($color, $isClickable) { $checkIcon = "<i style='color: " . $color . ";' class='fa fa-check'></i>"; if ($isClickable) { $checkIcon = "<a href='#'>" . $checkIcon .
"</a>"; return $checkIcon; } Improve the ifs Now, the foreach would look like this, which is already better: foreach ($results as $result){ if ($_SESSION['id'] == $author_id) { // User is logged in if ($result['id'] == $AcceptedAnswerId) { $AcceptAnswerIcon = getCheckIconHTML('green', true); } else { $AcceptAnswerIcon = getCheckIconHTML('grey', true); } } else { // User isn't logged in if ($result['id'] == $AcceptedAnswerId) { $AcceptAnswerIcon = getCheckIconHTML('green', false); } else { $AcceptAnswerIcon = null; } } } But now that we have the function, we can also restructure the ifs to get this: $isUserLoggedIn = ($_SESSION['id'] == $author_id); if ($result['id'] == $AcceptedAnswerId) { // There is an accepted answer $AcceptAnswerIcon = getCheckIconHTML('green', $isUserLoggedIn); } else { $AcceptAnswerIcon = $isUserLoggedIn ? getCheckIconHTML('grey', true) : null; } Misc you don't need the $_SESSION['id'] !== $author_id check in the elseif. You already checked it in the if, so you know that the user is not logged in. If the last else is entered, something went wrong. variable names should start with a lower-case letter. your spacing is sometimes off. I would move the code that checks if a user is logged in into a user class or something.
{ "domain": "codereview.stackexchange", "id": 17907, "tags": "php, performance" }
Printing Feature Contributions in a Random Forest algorithm from the Treeinterpreter library leading to errors
Question: I am working on a dataset where I predict the risks of developing pancreatic cancer with respect to a number of variables. I have created a random forest, and want to find the feature contributions. I have already used the "Treeinterpreter" library, resulting in a contributions array that is three-dimensional. I want to display the contributions in the array beside the name of the factor/variable. I have used the code below to do so, however, the code responsible for displaying the contributions does not work. I have tried multiple methods, including converting the dataframe to a numpy array, and other methods such as .all() and .any(). However, none are producing the desired result. What can be the right way to display the feature contributions with respect to each of the feature it represents? # -*- coding: utf-8 -*- """ Created on Mon Apr 15 13:39:19 2019 @author: GoodManMcGee """ import pandas as pd from sklearn.metrics import accuracy_score from sklearn import tree from sklearn.model_selection import train_test_split from sklearn import preprocessing from sklearn.metrics import confusion_matrix from sklearn.ensemble import RandomForestClassifier from IPython.display import Image from sklearn.tree import export_graphviz from treeinterpreter import treeinterpreter as ti import matplotlib.pyplot as plt import numpy as np import itertools data = pd.read_csv("pancreatic_cancer_smokers.csv") target = data['case (1: case, 0: control)'] data.drop('case (1: case, 0: control)', axis=1, inplace=True) x_train, x_test, y_train, y_test = train_test_split(data, target, test_size = 0.2) clf = RandomForestClassifier(n_estimators=100) clf.fit(x_train, y_train) y_pred = clf.predict(x_test) clf_accuracy = accuracy_score(y_test, y_pred) clf_pred, clf_bias, contributions = ti.predict(clf, x_test) #The code below was taken from DataDive's treeinterpreter tutorial. 
#The aforementioned message applies to all code between the underscores #/////////////////////////////////////////// for i in range(len(x_test)): print ("Instance", i) print ("Bias (trainset mean)", clf_bias[i]) print ("Feature contributions:") for c, feature in sorted(zip(contributions[i], data.feature_names), key=lambda x: -abs(x[0])): #An error occurs in the "data.feature_names" method in the code above: AttributeError: 'DataFrame' object has no attribute 'feature_names'. I have tried referencing columns from datasets also, but that also leads to errors: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() print (feature, round(c, 2)) print ("-"*20) #/////////////////////////////////////////// Answer: Try this in the last part of your code: for i in range(len(x_test)): print ("Instance", i) print ("Bias (trainset mean)", clf_bias[i]) print ("Feature contributions:") for c, feature in sorted(zip(contributions[i,:,0], data.columns), key=lambda x: -abs(x[0])): print (feature, round(c, 2)) print ("-"*20) The problem is that you are sorting contributions without taking into account that contributions is a 3D array and the column names are accessible with data.columns, not data.feature_names.
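The shape problem can be reproduced without scikit-learn; the nested lists below mimic treeinterpreter's (n_instances, n_features, n_classes) output with invented numbers, and the fix is to pick one class per feature before zipping with the column names:

```python
# Invented contributions with shape (n_instances, n_features, n_classes).
contributions = [
    [[0.30, -0.30], [-0.05, 0.05], [0.10, -0.10]],  # instance 0
]
columns = ["age", "smoker", "bmi"]  # hypothetical feature names

i, target_class = 0, 0
pairs = [(feat[target_class], name)
         for feat, name in zip(contributions[i], columns)]
# Sort by absolute contribution, largest first, as in the tutorial.
pairs.sort(key=lambda pair: -abs(pair[0]))
ranked = [name for _, name in pairs]
```

Zipping the raw per-feature rows against the names, as the question's code does, compares whole arrays and triggers the ambiguous-truth-value error; selecting a single class first leaves plain floats to sort.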
{ "domain": "datascience.stackexchange", "id": 5067, "tags": "python, random-forest, feature-extraction" }
how to set the rotate axis?
Question: I import two links from SolidWorks as STL files, and I add a coordinate system on a hole of each of the two links. Now I want to make the second link rotate around the set X axis, but it cannot. How can I deal with it? The code is as follows: link: 0 1.0 0.0 0 0 0 .......link1.stl world link 1 0 0 Originally posted by lugd1229 on Gazebo Answers with karma: 75 on 2012-12-03 Post score: 1 Original comments Comment by asomerville on 2012-12-03: Can you clarify? I can't quite tell what you're asking. Comment by lugd1229 on 2012-12-03: I will use a plugin to animate a link rotating around a set axis of a coordinate system attached to another link. Answer: I'm not sure I understand your question exactly, but I'll go by what I think you're asking. The coordinate system transformations are based on the joints only. That is to say that the link transformations have no effect on where the next joint is placed, so your STL files and where you set their origins are irrelevant. Therefore you must place the next joint with respect to the previous one. In the code you pasted you do not set an origin, so it will likely default to 0, 0, 0 which would put it exactly on top of the last one. Originally posted by asomerville with karma: 283 on 2012-12-04 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by lugd1229 on 2012-12-04: thank you!
{ "domain": "robotics.stackexchange", "id": 2840, "tags": "gazebo" }
Einstein's first postulate implies the second?
Question: Einstein's two postulates of special relativity are as follows, 1. The principle of relativity: The laws of physics are the same in all inertial systems. There is no way to detect absolute motion, and no preferred inertial system exists. 2. The constancy of the speed of light: Observers in all inertial systems measure the same value for the speed of light in a vacuum. Now for an exercise in my book, I am asked the following: 'Explain why Einstein argued that the constancy of the speed of light (postulate 2) actually follows from the principle of relativity (postulate 1).' I have been trying to figure this out for a very long time now with no luck. I thought of identifying a physical law that, if not the same in two different inertial systems, leads to a contradiction, but I can't think of anything. Could you push me in the right direction? Answer: Further to Timaeus's Answer, the second postulate follows from the first postulate if we know about light. Otherwise, the second cannot follow from the first in a strict sense. However, even if you don't know about light, there is still a way whereby the second postulate can be strongly motivated by the first, as follows. The first postulate is essentially Galileo's notion of relativity as explained by his Allegory of Salviati's Ship. If you assume: The first relativity postulate; and A concept of absolute time, i.e. that the time delay between two events will be measured to be the same for all inertial observers; and Homogeneity of space and time so that linear transformation laws between inertial frames are implied (see footnote) Then these three assumptions alone uniquely define Galilean Relativity. However, if you ask yourself "what happens to Galileo's relativity if we relax the assumption of absolute time" but we keep 1. and 3. above, then instead we find that a whole family of Lorentz transformations, each parametrised by a parameter $c$, are possible.
Galilean relativity is the limiting member of this family as $c\to\infty$. The study of this question was essentially Einstein's contribution to special relativity. You can think of it as Galileo's relativity with the added possibility of an observer-dependent time. I say more about this approach to special relativity in my answer to the Physics SE Question "What's so special about the speed of light?". It follows from this analysis that if our Universe has a finite value of $c$, then something moving at this speed will be measured to have this speed by all inertial observers. However, there is nothing in the above argument to suggest that there actually is something that moves at this speed, although we could still measure $c$ if we can have two inertial frames moving relative to each other at an appreciable fraction of $c$. It becomes a purely experimental question as to whether there is anything whose speed transforms in this striking way. Of course, the Michelson Morley experiment did find something with this striking transformation law. Footnote: The homogeneity of space postulate implies the transformations act linearly on spacetime co-ordinates, as discussed by Joshphysic's answer to the Physics SE question "Homogeneity of space implies linearity of Lorentz transformations". Another beautiful write-up of the fact of linearity's following from homogeneity assumptions is Mark H's answer to the Physics SE question "Why do we write the lengths in the following way? Question about Lorentz transformation".
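For reference, the boost family the answer describes can be written explicitly for relative velocity $v$ along the $x$ axis (a standard form, added here for concreteness): $$x' = \gamma\,(x - vt),\qquad t' = \gamma\left(t - \frac{v x}{c^{2}}\right),\qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},$$ and in the limit $c\to\infty$ one gets $\gamma\to 1$, recovering the Galilean transformation $x' = x - vt$, $t' = t$ with its absolute time.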
{ "domain": "physics.stackexchange", "id": 89539, "tags": "special-relativity, speed-of-light, inertial-frames, relative-motion" }
Autonomous Mapping help!
Question: I have a robot that uses odometry and Lidar. I'm using both Nav2 and slam_toolbox in order to manually create the map and then navigate through it. My question is: is there a way to make the robot map the room autonomously? And to know when the map is finished and go back home? Answer: I think there is a bit of a misconception about what SLAM does. Just remember that SLAM stands for Simultaneous Localization and Mapping; what you want is to explore the map (autonomously of course) so that while you explore it the map is built up, so the key term here is "exploration" (this is not SLAM's business). On the internet you can find many alternatives for exploring environments; among the most popular you can find rrt_exploration, frontier_exploration and ros2_explorer. Regarding going home when the map is finished, it is a bit difficult to answer, since you (or the library/package you use) must define certain metrics by which you can consider the map complete. I hope this answer sheds some light on your question.
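As a toy illustration of what frontier-based packages such as frontier_exploration do on real occupancy grids (the grid encoding here is invented): a frontier is a free cell touching unknown space, and when no frontiers remain the map can be considered complete, which is a natural trigger for returning home.

```python
# 0 = free, 1 = occupied, -1 = unknown (hypothetical encoding).
def frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    found = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue  # only free cells can be frontiers
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == -1
                   for rr, cc in neighbours):
                found.append((r, c))
    return found

grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
remaining = frontiers(grid)          # cells still worth exploring
done = frontiers([[0, 0], [0, 0]])   # fully known map -> no frontiers
```

A real exploration node would repeatedly send the nearest frontier as a Nav2 goal and, once the list comes back empty, send the start pose as the final goal.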
{ "domain": "robotics.stackexchange", "id": 2736, "tags": "slam, mapping, autonomous-car, ros-humble, nav2" }
Strong AI vs Singularity - which should happen first?
Question: What is supposed to happen first: Strong AI or Technological Singularity? Meaning, which option is more likely: that Strong AI will bring us to the state of technological singularity, or that achieving technological singularity will allow us to construct Strong AI? Answer: The definition of "technological singularity" answers the question: The technological singularity (also, simply, the singularity) is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. (wiki) Note that the order of events is "invention of artificial superintelligence" (AGI) followed by the "unfathomable changes".
{ "domain": "ai.stackexchange", "id": 453, "tags": "agi, singularity, superintelligence" }
Moment of inertia of a cube along one of its diagonals
Question: Yes I've seen the other question posted on the site - Moment of inertia of solid cube about body diagonal - but I don't really understand the solution. The final result is $m l^2/6$, but I'm not able to get it. I assumed it would be the same as the moment of inertia of a square about its diagonal, but that's $ml^2/12$. When we rotate both of them along the axis we will get the same shape, won't we? Thanks in advance. Answer: For calculation of moment of inertia, what actually matters is the mass distribution and not the shape the body makes while rotating. When a square is kept along its diagonal then we get the shape of two isosceles triangles about the axis. But when a cube is kept about its diagonal, the shape is of a pyramid about the axis. These shapes are different and hence the mass distribution about them is different. So your assumption: I assumed it would be the same as the moment of inertia of a square about its diagonal, but that's $ml^2/12$. When we rotate both of them along the axis we will get the same shape, won't we? is fundamentally wrong. Note that the law for moment of inertia does not take into consideration how the body is rotating: $$I={\int r^2 dm}$$ To solve: Divide the pyramid into squares with infinitesimally small depth and find their moment of inertia $dI$ using the Parallel Axis Theorem in a coordinate system. Integrate them to get your answer.
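The $ml^2/6$ claim can also be checked numerically; the Monte Carlo sketch below (unit cube, unit mass) averages the squared distance $r^2$ from the body diagonal over random points, which estimates $I/(ml^2)$:

```python
import random

def diagonal_moment_estimate(n_samples=200_000, seed=0):
    # Uniform points in a unit cube centred at the origin; average the
    # squared distance to the body-diagonal axis n = (1,1,1)/sqrt(3).
    rng = random.Random(seed)
    n = (3 ** -0.5,) * 3
    total = 0.0
    for _ in range(n_samples):
        p = [rng.uniform(-0.5, 0.5) for _ in range(3)]
        along = sum(pi * ni for pi, ni in zip(p, n))
        total += sum(pi * pi for pi in p) - along * along  # r^2 = |p|^2 - (p.n)^2
    return total / n_samples

estimate = diagonal_moment_estimate()  # should be close to 1/6
```

Running the same average over a flat square about its in-plane diagonal gives $1/12$ instead, which is exactly the difference in mass distribution the answer points out.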
{ "domain": "physics.stackexchange", "id": 72096, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics, moment-of-inertia" }
NP-complete proof from Dasgupta problem on Kite
Question: I am trying to understand this problem from Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani, chapter 8, p. 281. Problem 8.19 A kite is a graph on an even number of vertices, say $2n$, in which $n$ of the vertices form a clique and the remaining $n$ vertices are connected in a “tail” that consists of a path joined to one of the vertices of the clique. Given a graph $G$ and a goal $g$, the KITE problem asks for a subgraph which is a kite and which contains $2g$ nodes. Prove that KITE is NP-complete. Any pointers to start with this problem? I am completely lost with it. Answer: You can reduce CLIQUE ($G$ has a clique of size $k$) to KITE: given $G=(V,E)$ and $k$, just build in polynomial time a new graph $G'$ in this way: for each node $v_i$ add a tail of $k$ new nodes. If $G'$ has a kite of size $2k$ then $G$ has a clique of size $k$ (the kite without the tail). The added nodes cannot introduce new cliques in $G'$, so $G$ contains exactly the same cliques as $G'$.
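A small sketch of this construction on adjacency sets (the node names and representation are just for illustration): attach a fresh path of $k$ nodes to every original vertex and count the result.

```python
def add_tails(graph, k):
    # graph: {node: set(neighbours)}.  Attach a fresh path of k nodes
    # to every original vertex, as in the CLIQUE -> KITE reduction.
    g = {v: set(nbrs) for v, nbrs in graph.items()}
    for v in list(graph):
        prev = v
        for j in range(k):
            tail = (v, j)  # fresh node name
            g.setdefault(tail, set())
            g[prev].add(tail)
            g[tail].add(prev)
            prev = tail
    return g

# Triangle (a 3-clique) with k = 3:
triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
g2 = add_tails(triangle, 3)
n_nodes = len(g2)
n_edges = sum(len(nbrs) for nbrs in g2.values()) // 2
```

The blow-up is clearly polynomial ($|V| + k|V|$ nodes), and in the triangle example the clique plus any one of its tails already forms a kite on $2k = 6$ nodes.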
{ "domain": "cs.stackexchange", "id": 467, "tags": "complexity-theory, np-complete, reductions" }
What does Hooke's law have to do with molecular forces?
Question: In The Feynman Lectures, in the chapter Characteristics of Force, in the section entitled Molecular forces, Feynman talks about the molecular forces, and then he states afterwards: If the molecules are pushed only a very small distance closer, or pulled only a very small distance farther than d, the corresponding distance along the curve of Fig. 12–2 is also very small, and can then be approximated by a straight line. Therefore, in many circumstances, if the displacement is not too great the force is proportional to the displacement. This principle is known as Hooke’s law,... I understand that at $x=d$ and in the region nearby, this function can be modeled as $F=-kx$. Although I know that Hooke's law for springs has exactly the same form, why does Feynman bring up Hooke's law when discussing molecular forces? Do these two forces have something to do with each other? Is one the result of the other? Or did he just see an opportunity to mention Hooke's law, since the molecular force function at $x=d$ has the same mathematical form as Hooke's law? Answer: As the picture already shows, this can only be considered an approximation for the force in an interval $x\in (d-\epsilon, d+\epsilon)$ for small $\epsilon$. Already the constant $k$ in your expression $F=-kx$ will be different from the constant $k$ in the picture in $F=\frac{k}{r^7}$ - basically, in Hooke's law you will have additional dependencies on $d$ and the exponent $7$. So, it would be better to call $k$ in Hooke's law differently here, e.g. $\tilde k$. Hooke's law then is a first order approximation at $d$ - think Taylor expanding the force around $d$. Of course, this can be done in a lot of different, even very different, scenarios. Therefore, both the linear force of Hooke's law and the corresponding quadratic energy are ubiquitous in physics.
If you are interested more in the energy than the force, you'll see much more often the term "harmonic oscillator", especially once you go to quantum mechanics where talking about forces turns out to be more cumbersome than about energies. So, to sum it up, as Alex' answer put it: It's mostly to be considered a mnemonic, as you've most likely seen Hooke's law for springs before anything else of this kind. Small addendum: I'm a bit sceptical about that $F=\frac{k}{r^7}$ bit... That will/might hold for negative $k$ and for the asymptotic bit for $r\to \infty$, but lacks local minima. So already that is an approximation to the function shown in the graph. It's a bit asking for confusion to place it there in the first place.
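The Taylor-expansion remark can be made concrete with a standard model force; the sketch below uses a Lennard-Jones force in reduced units (not Feynman's exact curve) and linearizes it numerically around its zero, which is exactly how an effective Hooke constant $\tilde k$ appears:

```python
def force(r, eps=1.0, sigma=1.0):
    # Force from the Lennard-Jones potential U = 4*eps*((sigma/r)**12 - (sigma/r)**6):
    # F(r) = -dU/dr = 24*eps*(2*(sigma/r)**12 - (sigma/r)**6) / r
    s6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * s6 * s6 - s6) / r

# Locate the equilibrium separation d (where the force changes sign) by
# bisection; analytically d = 2**(1/6) * sigma.
lo, hi = 1.0, 1.5
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if force(mid) > 0:
        lo = mid
    else:
        hi = mid
d = 0.5 * (lo + hi)

# Effective spring constant from a central difference: F ~ -k_eff * (r - d).
h = 1e-5
k_eff = -(force(d + h) - force(d - h)) / (2 * h)

dr = 0.001
linear = -k_eff * dr    # Hooke's-law estimate of F at r = d + dr
exact = force(d + dr)   # actual force at the same point
```

For small displacements the linear estimate tracks the exact force closely; the agreement degrades as the displacement grows, which is the answer's point about this being a local approximation.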
{ "domain": "physics.stackexchange", "id": 51088, "tags": "solid-state-physics, spring, elasticity, approximations" }
Why do cells in meristematic tissues lack vacuoles?
Question: Cells located in meristematic tissues in plants lack vacuoles. However, this contradicts the fact that plant cells have large vacuoles to store water and keep the plant in an upright position. I would also like to add that meristematic tissues are generally found in roots and stems of plants. Finally, I would like to ask: why do meristems lack vacuoles? Sources Turgor pressure exerted by the vacuole is also essential in supporting plants in an upright position: Wikipedia Apical meristem is present at the growing tips of stems and roots and increases the length of the stem and the root. The girth of the stem or root increases due to lateral meristem (cambium): Science for Class 9 by National Council of Educational Research & Training, page number 69 Cells of meristematic tissue are very active, they have dense cytoplasm, thin cellulose walls and prominent nuclei. They lack vacuoles: Science for Class 9 by National Council of Educational Research & Training, page number 70 Answer: Meristematic tissues or meristems contain cells which can rapidly divide or reproduce and are responsible for the overall growth of the plant. Apical meristems and lateral meristems are found in the roots and stems of the plants. Please note that the whole stem or root is not loaded with these tissues. Plant cells usually have a large central vacuole which applies turgor pressure to keep the plant upright, as mentioned in one of your sources. The tips do not need to be very stiff and straight. So, a central vacuole is just not required. Also, meristematic cells grow various organelles as they mature.
{ "domain": "biology.stackexchange", "id": 11281, "tags": "cell-biology, botany, tissue" }
Calculating tips
Question: I am enrolled in an online JavaScript course and one of our coding challenges was presented as follows: John and his family went on a holiday and went to 3 different restaurants. The bills were $124, $48, and $268. To tip the waiter a fair amount, John created a simple tip calculator (as a function). He likes to tip 20% of the bill when the bill is less than $50, 15% when the bill is between $50 and $200, and 10% if the bill is more than $200. In the end, John would like to have 2 arrays: Containing all three tips (one for each bill) Containing all three final paid amounts (bill + tip) I have come up with the following solution: var bills = [124, 48, 268]; var totals = []; var pointlessArray = []; function calculateTip(cost) { switch (true) { case cost < 50: return cost * .2; break; case cost > 49 && cost < 201: return cost * .15; break; case cost > 200: return cost * .1; break; default: Error('Unsupported input.'); } } function makePointlessArray(inputArray) { var length = inputArray.length; for (var i = 0; i < length; i++) { pointlessArray[i] = calculateTip(inputArray[i]); } } function calculateTotal(billArray) { var length = billArray.length; for (var i = 0; i < length; i++) { totals[i] = billArray[i] + calculateTip(billArray[i]); } } makePointlessArray(bills); calculateTotal(bills); console.log(`The bills are: ${bills}`); console.log(`The calculated tips are: ${pointlessArray}`); console.log(`The calculated totals are: ${totals}`); I don't think this is practical at all for calculating tips, but have tried to stay within the parameters of the challenge. I am unsure if declaring the arrays as global variables is the best practice or if some other method should be used, but as a JS newbie I would appreciate any input on pitfalls in my code. 
Answer: break statements after return While it is a great habit to include the break statements, there is no need to include break statements following a return, since anything following the return statement within a function is unreachable. switch(true) while this works because each case statement must evaluate to true, it is simpler to just use if statements. Your logic checks if the value is less than 50, then if it is greater than 49 and less than 201. The requirements never specified if the values would be integers, floats, etc. So it would be wise to consider a value like 49.5. That value would be less than 50 so the first condition would be true. However if the value was $200.50 the second condition would evaluate to true, even though the value was more than 200. So update the condition to cost <= 200. Otherwise if neither of those conditions have been met, the value must be more than 200. function calculateTip(cost) { if (cost < 50) { return cost * .2; } if (cost <= 200) { return cost * .15; } return cost * .1; } Error The default case of the switch statement merely calls Error default: Error('Unsupported input.'); This likely won't do what you expect. If you want an error to be thrown, then instantiate an error with the new operator and precede it with the throw operator: throw new Error('Unsupported input.'); It would be wise the check the input before comparing the value against other numbers, perhaps with the parseFloat() function: function calculateTip(cost) { if (!parseFloat(cost)) { throw new Error('Unsupported input.'); } That way anything that cannot be coerced the an integer (e.g. {}) will cause the error to be thrown. Updated code See the code below with the advice above implemented. 
var bills = [124, 48, 268]; var totals = []; var pointlessArray = []; function calculateTip(cost) { if (!parseFloat(cost)) { throw new Error('Unsupported input.'); } if (cost < 50) { return cost * .2; } if (cost <= 200) { return cost * .15; } return cost * .1; } function makePointlessArray(inputArray) { var length = inputArray.length; for (var i = 0; i < length; i++) { pointlessArray[i] = calculateTip(inputArray[i]); } } function calculateTotal(billArray) { var length = billArray.length; for (var i = 0; i < length; i++) { totals[i] = billArray[i] + calculateTip(billArray[i]); } } makePointlessArray(bills); calculateTotal(bills); console.log(`The bills are: ${bills}`); console.log(`The calculated tips are: ${pointlessArray}`); console.log(`The calculated totals are: ${totals}`);
{ "domain": "codereview.stackexchange", "id": 32995, "tags": "javascript, homework" }
Why does snow not stick to the fur of wolves and some breeds of dogs?
Question: This question made me wonder, why for example wolves don't suffer from these "paw snowballs"? (...) but after a few miles on the trails in soft snow, he often develops snowballs between his paw pads - usually one big one right in the middle. (...) I have heard really bad snow buildup can lead to raw/bleeding paw pads. Answer: I couldn't find scientific references, but regular web searches basically indicate that long fur as well as webbed toes promote formation of ice and snowballs in certain breeds. From Hubpages: The snow attaches to the dog's long hair, melts from the body heat, and forms ice balls that grow larger, stretching your dog's toes apart and causing cracking, bleeding, and hair pulling. This is painful and distressing for the dog, who may then try to remove them by licking them, which then causes even more ice to build up. And among the solutions: Clip the fur between your dog's toes. [J]ust trim off the excess fluff. A typical breed thriving in icy conditions is the Siberian husky. A comparison to to an airedale terrier breed shows the difference in fur length quite clearly, especially around the areas most suceptible to come in contact with snow (legs & chest areas): And both breeds in the snow:
{ "domain": "biology.stackexchange", "id": 5132, "tags": "mammals" }
Are carboxylic anhydrides in graphene oxides considered adsorption sites for metals? If so, are they more or less reactive than carboxylic acids?
Question: The literature abundantly addresses the fact that carboxylic acids present in graphene oxides act as efficient adsorption sites for metals in aqueous solutions (e.g., HE et al., 2021; ZHAO et al., 2019; NOVÁCEK et al., 2017; WANG et al., 2018). Wang et al. (2018), for example, showed through DFT measurements that the adsorption energies of the complexes (–COOH)/Co, GO(–OH)/Co, and NGO/Co, where N corresponds to nitrogen-containing functional groups introduced into graphene oxide and Co to cobalt ions, were 44.05 kcal/mol, 16.44 kcal/mol, and 6.33 kcal/mol, in that order. This result was corroborated by the performance of the adsorbents in removing Co(II) from the water (GO (0.74 mmol/g) > NGO (0.46 mmol/g)). However, I have not found anything regarding whether carboxylic anhydrides, which can be identified via potentiometric titration, also capture metals through their complexation and/or via ion exchange. I would like to know if they can and if their reactivity is lower than, equal to, or higher than that of carboxylic acids in the water, present in graphene oxides, for the adsorption of metals. Thanks in advance! HEA, Lei; WANG, Lei; ZHU, Haomiao; WANG, Zhe; ZHANG, Luxia; YANG, Lutao; DAI, Yong; MO, Hong; ZHANG, Jun; SHEN, Jian. A reusable Fe3O4/GO-COOH nanoadsorbent for Ca2+ and Cu2+ removal from oilfield wastewater. Chemical Engineering Research and Design, v. 166, n. 171, p. 248-258, jan. 2021. ZHAO, L.; CHEN, J.; XIONG, N.; BAI, Y.; YILIHAMU, A.; MA, Q.; YANG, S.; WU, D.; YANG, S. Carboxylation as an effective approach to improve the adsorption performance of graphene materials for Cu2+ removal. Science of the Total Environment, v. 682, p. 591-600, set. 2019. Elsevier BV. http://dx.doi.org/10.1016/j.scitotenv.2019.05.190. NOVÁČEK, M.; JANKOVSKÝ, O.; LUXA, J.; SEDMIDUBSKÝ, D.; PUMERA, M.; FILA, V.; LHOTKA, M.; KLÍMOVÁ, K.; MATĚJKOVÁ, S.; SOFER, Z. 
Tuning of graphene oxide composition by multiple oxidations for carbon dioxide storage and capture of toxic metals. Journal of Materials Chemistry A, v. 5, n. 6, p. 2739-2748, 2017. Royal Society of Chemistry (RSC). http://dx.doi.org/10.1039/c6ta03631g. WANG, X.; LIU, Y.; PANG, H.; YU, S.; AI, Y.; MA, X.; SONG, G.; HAYAT, T.; ALSAEDI, A.; WANG, X. Effect of graphene oxide surface modification on the elimination of Co(II) from aqueous solutions. Chemical Engineering Journal, v. 344, p. 380-390, jul. 2018. Elsevier BV. http://dx.doi.org/10.1016/j.cej.2018.03.107. Answer: It is no surprise that the metal adsorption literature is silent about carboxylic anhydrides on carbon surfaces. These groups do exist on carbon surfaces, but as you can guess from the name, "anhydride" and "water" do not get along very well. This is why I had asked you to clarify your medium. These anhydride groups will eventually hydrolyze to carboxylic acid groups. There is a very nice paper, "Formation and chemistry of carboxylic anhydrides at the graphene edge" [RSC Adv., 2015, 5, 104198-104202, DOI https://doi.org/10.1039/C5RA23209K], that shows the hydrolysis of carboxylic anhydride in water and in humid air. This process is relatively fast, as shown in the supporting file. In the field of carbons and surfaces, I would advise reading at least 12 papers before arriving at a conclusion. Surfaces are very, very difficult to analyze chemically, and a lot of speculation exists.
{ "domain": "chemistry.stackexchange", "id": 16727, "tags": "coordination-compounds, titration, adsorption, alkaline-earth-metals, carboxylic-acids" }
Example in which light takes the path of maximum optical length
Question: According to the modern version of Fermat's principle, "A light ray in going from point A to point B must traverse an optical path length that is stationary with respect to variations of that path." Is a maximum optical path length possible? What if we keep adding deviations to the optical path? Answer: You are quoting Wikipedia. There is no maximum length from point A to point B (the path could be arbitrarily long); the more deviation from that minimum path, the more length is added, in such a way that the light's phase is so mixed up that it ends up cancelling itself (no light). EDIT: Feynman's Quantum Electrodynamics (QED) offers a nice, accessible explanation of this phenomenon.
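A side note on "stationary need not mean minimum": while no maximum exists over unconstrained paths, a stationary path constrained to a mirror can be a local maximum. A quick numerical check (the geometry - points A, B inside a circular concave mirror of radius R - is an illustrative assumption, not taken from the answer above): the symmetric reflection point at the top of the mirror obeys the law of reflection, yet it maximizes the path length among nearby mirror points.

```python
import math

# Hypothetical geometry: A and B on the x axis inside a circular mirror of
# radius R centered at the origin; P(theta) is a candidate reflection point.
A, B, R = (-1.0, 0.0), (1.0, 0.0), 2.0

def path_length(theta):
    """Length of the broken path A -> P(theta) -> B for P on the mirror."""
    p = (R * math.cos(theta), R * math.sin(theta))
    return math.dist(A, p) + math.dist(p, B)

# By symmetry, the top of the mirror (theta = pi/2) satisfies the law of
# reflection about the radial normal -- and it is a local MAXIMUM of the path:
top = path_length(math.pi / 2)
assert top > path_length(math.pi / 2 + 0.2)
assert top > path_length(math.pi / 2 - 0.2)
print(top)  # 2 * sqrt(1 + R**2)
```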
{ "domain": "physics.stackexchange", "id": 53147, "tags": "optics, geometric-optics, variational-principle" }
How to finish silanization process correctly?
Question: I have some 2 mm glass beads and I'm doing silanization on them. To do that, I have a small bottle that I fill 1/3 with glass beads; I then pour silane solution on them and shake well. However, after 60 min I drain the solution from the bottle, and as soon as I wash the beads with water, the beads are covered with a white greasy product which is hardly removed by water; finally I wash it with acetone to make them transparent again. Is this a correct method of silanization (and of washing the product) to achieve the maximum hydrophobic effect of the silane, or should I revise the method? Answer: Sol-gel preparation techniques have many variations. The contact time - beads to the solution - determines the thickness of the sol-gel layer. Here you can try to shorten the dipping (coating) time. Glass is usually dried at 60 °C for 30 min and then "cured" at 500-550 °C for 5 s up to 2 hours. I would suggest finding an article and following its preparation method.
{ "domain": "chemistry.stackexchange", "id": 4849, "tags": "organic-chemistry, experimental-chemistry, surface-chemistry" }
3D plot using rxplot
Question: Hi, I have the message /master_position with x, y, z, so I use the command rxplot -M 3d /master_position/x /master_position/y /master_position/z or rxplot -M 3d /master_position/x:y:z but it didn't work and I got this error: rxplot: error: You may only specific 2 or 3 topics with '3d' or 'scatter'. When 2 topics are provided, time is used as the third axis. Thanks Originally posted by embeddedheaven on ROS Answers with karma: 121 on 2011-06-14 Post score: 1 Answer: Update: the bug is now closed and will be included in the next release. This is filed as a bug here: https://code.ros.org/trac/ros/ticket/3376 I'm hoping to get time later this week to close out some rxplot tickets. Originally posted by kwc with karma: 12244 on 2011-06-15 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 5844, "tags": "ros, rxplot" }
RTAB-MAP with xtion, rplidar, turtlebot (how to utilize lidar internally)
Question: Hello, I have a question about building a 3D map with RTAB-Map and a lidar. As far as I know, RTAB-Map is a SLAM approach based on image features and performs loop closure using those features. 3D mapping with RTAB-Map on a TurtleBot with an Xtion works well. But my robot additionally has a lidar, and I want to utilize it. I read these questions about building a 3D map; I think my situation is similar: https://answers.ros.org/question/239646/build-a-map-with-husky-laser-and-kinectrealsense/ https://answers.ros.org/question/239823/build-a-map-with-rtabmap-realsense-sick-and-imu-with-sensor-fusion/ https://answers.ros.org/question/242263/3d-map-with-rtab-sick-imu-and-husky-a200/ My questions are below. How do you use an additional lidar internally in RTAB-Map? 1-1. To improve the quality of the map, how should the lidar be used in RTAB-Map? I came up with the following system for utilizing the lidar and improving the quality of the map, and I wonder whether my method is good or not. I think the system below uses all the sensors, so it can make a good 3D map. This link is the system image I had in mind: link text Thank you for reading my questions. Originally posted by JunJun on ROS Answers with karma: 26 on 2017-11-29 Post score: 0 Answer: Based on this example, here is how to add lidar to rtabmap:

<node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="--delete_db_on_start">
  ...
  <remap from="scan" to="/base_scan"/>
  <param name="subscribe_scan" type="bool" value="true"/>
  <param name="RGBD/NeighborLinkRefining" type="string" value="true"/> <!-- Odometry correction -->
  <param name="Reg/Strategy" type="string" value="1"/> <!-- 1=ICP -->
  <param name="Reg/Force3DoF" type="string" value="true"/> <!-- 2d slam -->
</node>

Make sure the TF of the camera and the lidar are correctly set, so that the 3D point clouds match with the 2D scans. cheers Originally posted by matlabbe with karma: 6409 on 2017-11-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by JunJun on 2017-12-01: Hi Thank you for the answer. I checked the TF and built a map.
I wonder whether the lidar is used for pose correction or not. http://128.134.57.131/wordpress/wp-content/uploads/2017/12/스크린샷-2017-12-01-17-19-12.png http://128.134.57.131/wordpress/wp-content/uploads/2017/12/스크린샷-2017-12-01-17-19-59.png Thank you. Comment by JunJun on 2017-12-01: You said "Make sure TF of camera and lidar are correctly set, so that 3D point clouds match with the 2D scans." Does this mean that the 2D scans are used for matching the 3D point clouds, and the quality of the 3D map is better than without the lidar? I want to know whether the quality of the SLAM depends only on RGB-D image features. Comment by matlabbe on 2017-12-01: With parameter "Reg/Strategy" set to 1 (ICP), scans are used to refine loop closure detections and for odometry correction (when "RGBD/NeighborLinkRefining" is true). In my example, lidar is used to refine the pose, i.e., the 3D clouds are added to poses refined by lidar. Comment by matlabbe on 2017-12-01: In your screenshots, the TF between lidar and camera seems ok, as the lidar scans (cyan points) are aligned with their corresponding clouds created from the camera. Comment by JunJun on 2017-12-03: Thank you for your answer. I got what you said. As you said, I measured and set the TF. The link below is the TF of my robot. link text
{ "domain": "robotics.stackexchange", "id": 29472, "tags": "lidar, turtlebot, cartographer, xtion, rtabmap" }
Ideal surface for a perfect lens
Question: In this physics lecture, on slides 15-16, it is found that the ideal surface for a perfect lens (which maps a plane wavefront into a perfect spherical wavefront, i.e., which focuses all parallel input rays into one point) can be a hyperbola or an ellipsoid, according to whether the refractive index ratio is higher or lower than 1: Now, I don't understand this result quite well. My doubts are: Imagine the rays start from the right (being parallel). In the case of a hyperbola, they are already in glass and then go into air. In the case of an ellipsoid, they are in air and then go into glass. Neither of them is actually the common "thin lens" we usually study in basic optics (air - lens - air). How could we adapt these results to a thin lens? Should it have a hyperbolic/ellipsoidal shape on both sides? I cannot visualize why a spherical surface is not ideal for mapping a plane wavefront into a spherical wavefront. A spherical surface slows down the points of the input plane wavefront into a spherical wave. I find it strange that this does not occur, while the ideal surfaces are a hyperbola and an ellipsoid! Some books propose a different ideal surface for the perfect lens, precisely a Cartesian oval. Other sources say the ideal surface should be parabolic, like for a mirror... which is the truth? Answer: Spherical surfaces are used for lenses because they are much easier to manufacture precisely. The curvature is the same everywhere. You rub a spherical grinding surface all over the glass. Lenses do have to be precise. An error of a wavelength of light matters. For visible light, that is about half a micron. Fortunately, lenses typically use relatively large diameter spheres. In this case, a sphere is extremely close to the ideal shape. For a small diameter lens, the approximation is close enough to perfect. For a larger diameter, the difference in shape is larger. The outer edge is a good match to an ideal shape with a different focal length.
The outer rays come to a focus in a different spot than the center rays. This is called spherical aberration: the aberration, or error, caused by using a spherical surface instead of an ideal surface. To make an ideal lens from your left diagram, put a planar surface on the right side of the lens. Planar wavefronts will pass through a planar surface without changing direction. For the right diagram, put a spherical surface centered on the focal point on the left surface. Spherical wavefronts will not change direction passing through it. Of course this only works for waves parallel to the axis. There are other aberrations to deal with for off-axis rays. And variations in wavelength also cause aberrations, because the index of refraction varies with wavelength. Lens design gets complicated quickly. But for an application like focusing a laser, these simple lenses solve the problem.
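The "ideal surface" condition is just Fermat's equal-optical-path requirement, and it is easy to check numerically. A quick sketch (all numbers - index n = 1.5, semi-axis a = 1, wavefront plane at x0 = 5 - are illustrative assumptions): for parallel rays travelling in glass toward a hyperbolic exit surface with eccentricity e = n, the optical path n·(glass leg) + (air leg to the far focus) is identical for every surface point, while a spherical surface with the same vertex curvature is not - which is the spherical aberration described above.

```python
import math

n = 1.5                  # assumed glass refractive index
a = 1.0                  # hyperbola semi-axis; eccentricity e = c/a = n
c = n * a                # focal distance, so b^2 = c^2 - a^2
b2 = c * c - a * a
x0 = 5.0                 # plane wavefront inside the glass (right of the surface)
F = (-c, 0.0)            # far focus, in air on the left

def opl_hyperbola(x):
    """Optical path from the plane x0 to the focus via a hyperbola point."""
    y = math.sqrt(b2 * (x * x / (a * a) - 1.0))
    return n * (x0 - x) + math.hypot(x - F[0], y)

# Sphere through the same vertex (a, 0) with the same vertex curvature b^2/a:
Rs = b2 / a
cx = a + Rs              # sphere center on the axis

def opl_sphere(x):
    y = math.sqrt(Rs * Rs - (x - cx) ** 2)
    return n * (x0 - x) + math.hypot(x - F[0], y)

xs = [1.0 + 0.1 * i for i in range(11)]
hyp = [opl_hyperbola(x) for x in xs]
sph = [opl_sphere(x) for x in xs]
print(max(hyp) - min(hyp))   # ~0: every ray arrives in phase (perfect focus)
print(max(sph) - min(sph))   # clearly nonzero: spherical aberration
```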
{ "domain": "physics.stackexchange", "id": 91153, "tags": "optics, waves, geometric-optics, lenses, optical-materials" }
Symmetry of an in-phase and an out-of-phase wavefunction "approaching" each other in an MO
Question: Hiho. I have a question concerning this image (Chemistry, Catherine E. Housecroft et al., Pearson Education, 2010). In the text, it states the following: "Each of the new molecular wavefunctions, $\Psi$(in-phase) and $\Psi$(out-of-phase), has an energy associated with it which is dependent on the distance apart of the two hydrogen nuclei. As Figure 4.13 [provided image] shows, the energy of $\Psi$(in-phase) is always lower than $\Psi$(out-of-phase)." The in-phase function has a stabilized minimum, but the out-of-phase function seems stable only at an infinite distance. To my intuition, the two graphs should be symmetric, since the two wavefunctions are alike and approach each other equally. Why do they behave like this? Why are those functions not symmetric? A follow-up question would be: when I combine those two functions, I'll get a minimum at the x-value I marked with "minimum". Would this be the distance at which a nucleus in a stable diatomic molecule would be found, as in this figure? === Thank you for your help and have a nice day :) Answer: Considering this diatomic molecule, the total wavefunction $\Psi$ is formed via a linear combination of the atomic orbitals $\psi_1$ and $\psi_2$ (which is an approximation, but not too bad a one). $$ \Psi^+ = c_1 \psi_1 + c_2 \psi_2 $$ $$ \Psi^- = c_1 \psi_1 - c_2 \psi_2 $$ The (electronic) energy of the system can be described using the following (stationary) Schrödinger equation: $$ H_\text{el} \Psi_\text{el} = E_\text{el} \Psi_\text{el}$$ So somehow this all factors into each other and in the end gives you the curves (for the potential energy of the system vs. internuclear distance) that you posted. Where is the symmetry? Well, it is still there, with respect to the center of one atom. If you imagine rotating the functions around the $y$ axis, you will get a symmetric image. And you're allowed to do this because space expands in all three directions.
So in theory (if you follow through and expand the function everywhere) you get a 3D image of the energy at every point around one atom. But: The wavefunction in itself is not all that interesting to us, because there is no "real" physical information that can be directly extracted from it. But fret not, we can at least get out the electron probability density by complex-squaring it: $$ \rho = |\Psi|^2 = \Psi \Psi^* $$ And this is where those nice small bubbles come into play that we see in your second figure. And we have again found the symmetry that you seek, because both linear combinations have $D_{\infty h}$ symmetry (I think... I'm not all too good with point groups and such).
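The asymmetry of the two curves also falls straight out of the two-orbital LCAO treatment: with a nonzero overlap integral S > 0, the levels are E± = (α ± β)/(1 ± S), so the out-of-phase combination is pushed up by more than the in-phase one is pushed down. A numeric sketch (α, S and the Wolfsberg-Helmholz estimate for β are illustrative assumptions, not values from the book):

```python
# Two-orbital LCAO model with overlap: E(+/-) = (alpha +/- beta) / (1 +/- S).
alpha = -13.6             # H 1s orbital energy in eV (Coulomb integral), assumed
S = 0.6                   # assumed overlap integral near equilibrium separation
beta = 1.75 * S * alpha   # Wolfsberg-Helmholz estimate of the resonance integral

E_bonding = (alpha + beta) / (1 + S)       # Psi(in-phase)
E_antibonding = (alpha - beta) / (1 - S)   # Psi(out-of-phase)

stabilization = alpha - E_bonding          # how far the bonding level drops
destabilization = E_antibonding - alpha    # how far the antibonding level rises
print(stabilization, destabilization)

# The antibonding level rises more than the bonding level drops, which is
# why the two potential curves cannot be mirror images of each other.
assert destabilization > stabilization
```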
{ "domain": "chemistry.stackexchange", "id": 1014, "tags": "physical-chemistry, molecular-orbital-theory" }
Explicit differential equation based model of protein folding
Question: I am an applied mathematician interested in the dynamics of potential systems - i.e., systems with multiple unique energy minima. One of the best examples of such systems is protein folding potentials. From sources like this one I know that the potential, $$ V(x) $$ exists and is a function of the positions of each residue in space and their interactions with each other. Since such a function exists, there should (mathematically) be a set of ordinary differential equations that capture the dynamics described by the potential. However, although I have seen definitions of $V(x)$ in the literature, for the life of me I can't figure out exactly what the set of differential equations associated with this system is - i.e., the system is usually written out in a generalized form that applies to all proteins, but the exact information required to build such a potential for one such protein is unclear. Does anyone have an example of a particular $V(x)$ (for a particular, short, set of amino acids, say) where all the parameters are known and the system can be solved numerically as a set of ordinary differential equations? Answer: What you are looking for is a force field: http://en.wikipedia.org/wiki/Force_field_(chemistry) Standard force fields for proteins include CHARMM and AMBER. These have relatively simple, well-defined expressions for defining the various types of interactions between bonded and non-bonded atoms. Parameters are available for all standard atom types, and are fit to experimental and/or electronic structure calculations. These force fields are considered "classical" in that they do not explicitly treat electronic interactions, but rather try to capture these interactions in a coarse-grained way.
Typically, there are expressions for describing the following interactions:
bonded interactions (bond stretching)
angle interactions (bond bending)
torsion interactions (dihedral bending)
12-6 Lennard-Jones interactions (non-bonded, dispersion forces)
short-range charge-charge interactions (Coulombic interactions between charged or partially charged atoms)
long-range charge-charge interactions (see the Ewald summation method for more details)
Note that in these classical models, partial charges are generally statically assigned to each atom. The functional form of these different terms varies depending on the force field and sometimes the implementation. These interactions are typically pairwise additive, so to get the total potential energy of the system you simply add together all the various interactions between the appropriate pairs of atoms. There are certainly more rigorous models that attempt to capture other electronic structure effects; however, for large molecules such as proteins, relatively simple force fields are generally applied. Please note that finding the global minimum in potential energy of a protein is not a trivial problem. People have been working on this for years; it's extremely difficult to develop algorithms that do not get trapped in local minima.
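To connect this back to the ODE part of the question: once V(x) is written down, the differential equations are just Newton's second law, m x'' = -∇V(x). A minimal toy sketch (three beads, made-up parameters, not real CHARMM/AMBER values; angle, torsion and Coulomb terms omitted for brevity):

```python
import math

# Toy force-field-style pairwise potential for a 3-bead "molecule".
K_BOND, R0 = 100.0, 1.0   # harmonic bond constant and rest length (assumed)
EPS, SIGMA = 0.25, 0.9    # Lennard-Jones parameters for the 1-3 pair (assumed)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def potential(x):
    """x: list of three [x, y, z] bead positions -> total V(x)."""
    v_bond = 0.5 * K_BOND * ((dist(x[0], x[1]) - R0) ** 2 +
                             (dist(x[1], x[2]) - R0) ** 2)
    r02 = dist(x[0], x[2])                      # the one non-bonded pair
    v_lj = 4 * EPS * ((SIGMA / r02) ** 12 - (SIGMA / r02) ** 6)
    return v_bond + v_lj

def forces(x, h=1e-6):
    """Central-difference -grad V: the right-hand side of m x'' = -grad V."""
    f = [[0.0] * 3 for _ in x]
    for i in range(len(x)):
        for d in range(3):
            xp = [p[:] for p in x]; xm = [p[:] for p in x]
            xp[i][d] += h; xm[i][d] -= h
            f[i][d] = -(potential(xp) - potential(xm)) / (2 * h)
    return f

def verlet_step(x, v, dt=1e-3, m=1.0):
    """One velocity-Verlet step of dx/dt = v, m dv/dt = -grad V(x)."""
    a = forces(x)
    x_new = [[x[i][d] + v[i][d] * dt + 0.5 * a[i][d] * dt * dt / m
              for d in range(3)] for i in range(len(x))]
    a_new = forces(x_new)
    v_new = [[v[i][d] + 0.5 * (a[i][d] + a_new[i][d]) * dt / m
              for d in range(3)] for i in range(len(x))]
    return x_new, v_new

x0 = [[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [1.1, 1.1, 0.0]]  # bonds slightly stretched
v0 = [[0.0] * 3 for _ in x0]
x1, v1 = verlet_step(x0, v0)
print(potential(x0), potential(x1))  # energy drops as the beads relax downhill
```

A real protein simulation is the same construction with tens of thousands of coordinates and the full set of force-field terms, which is why molecular dynamics packages rather than generic ODE solvers are used in practice.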
{ "domain": "chemistry.stackexchange", "id": 1978, "tags": "physical-chemistry, biochemistry, energy" }
Why does the water-pressure of my shower fall if I hold the shower head high?
Question: I live in a very old house - built in 1902 - on the 4th floor in the city of Karlsruhe (Germany). I have a shower, and the gas heater for the water is in it: If I hold the shower head above some height (see image), the water pressure is suddenly reduced. What's the reason for that? (I guess the way the pump works might be the reason.) Answer: Your shower is fed by a water tower. The maximum pressure at your shower head is determined by the equation $P_{shower}=\rho g\Delta h$, where $\Delta h$ is the difference in height between your shower head and the water level in the water tower. Because you live on the 4th floor, which is apparently close to the water line in your nearby water tower, as you lift your shower head, $\Delta h$ decreases and so does your water pressure. Other things like pipe lengths, pipe bends and water restrictors impact the water pressure, but those things are (mostly) constant and shouldn't depend on height.
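The formula in the answer makes the effect easy to quantify; a quick sketch (the half-metre raise is an illustrative assumption):

```python
# Hydrostatic pressure change when the shower head is raised by dh:
# delta_P = rho * g * dh
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
dh = 0.5       # raise the shower head by half a metre (assumed)

delta_p = rho * g * dh
print(delta_p)   # about 4.9 kPa (~0.05 bar) lost per half metre of lift
```

If the head is already near the water line of the tower, a loss of a few kPa can be a large fraction of the remaining static pressure, which is why the drop feels so abrupt.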
{ "domain": "physics.stackexchange", "id": 4141, "tags": "water, everyday-life" }
Are there any models for two light Higgs?
Question: The LHC results today seem to have two possible Higgs peaks, one at 119 GeV and the other at 125-126 GeV. All the multi-Higgs supersymmetry models I've seen that have multiple Higgs have only one light Higgs, the others being well over 600 GeV. While I think it's unlikely that there really are two distinct peaks, that would certainly be the most interesting result from ATLAS and CMS. Are there any extant models with two light Higgs that are close in energy but distinct? Answer: The Minimal Supersymmetric Standard Model, or MSSM, uses two Higgs doublets which – after the "eating" of 3 out of 8 real polarizations by the $W^+,W^-,Z^0$ bosons – leave 5 physical polarizations of the Higgs. Two of them are electrically charged, $H^\pm$, and three of them are neutral. One of the three neutral ones is CP-odd, $A$, and the other two are CP-even, $h$ and $H$. The $h$ boson is the lighter one and has to be below 135 GeV or so. The $H$ boson is heavier and may be anywhere above 135 GeV or so. It's true that models where the heavier Higgs is vastly heavier are recently favored in literature, for various reasons, but that doesn't follow from supersymmetry. Supersymmetry could work with two Higgses at 125 and 244 GeV (if I borrow another 2-sigma bump from ATLAS), among many other options. Even without SUSY, one could construct multi-Higgs-doublet models. But if we assume that the couplings are comparable, they "share" the total cross section of a single Higgs. So you couldn't produce two peaks in the gamma-gamma channel, for example, that would be equally strong as the peak of the Standard Model Higgs. They would be smaller. The announced "glimpse" of the Higgs today does agree with the SM Higgs. Of course, it's not measured too accurately but the existence of two light Higgses that share the cross section is already close to excluded, too. Moreover, if you look at all the channels at all the colliders, it's pretty clear that there is only one Higgs. 
Look at Phil Gibbs' galactic tri-collider Yes/No reconstructions: http://vixra.files.wordpress.com/2011/12/globalhiggssignaldec2011.png http://vixra.files.wordpress.com/2011/12/globalhiggssignaldec2011lo.png http://blog.vixra.org/2011/12/10/the-higgs-boson-live-from-cern/#liveblog The "Yes" peak near 125 GeV is really sharp and clean now. Another Higgs well above 200 GeV or so remains viable, especially if it is at many TeVs, and its existence would modify the cross sections and properties of the light 125 GeV Higgs boson just by a few percent. The light Higgs would remain Standard-Model-like.
{ "domain": "physics.stackexchange", "id": 2049, "tags": "supersymmetry, large-hadron-collider, higgs" }
Split string with adding separator to substring
Question: I wrote my Split() extension; its main goal is to save delimiters and add them to the split strings. For example: I have a string "-1-2+3+4-5-6" and separators '+', '-', and I want to get the following: "","-1","-2","+3","+4","-5","-6" or "-","1-","2+","3+","4-","5-","6", depending on the input separator option. My extension method consists of three parts:
1. Check input values;
2. Get all indexes of separators, and the separators themselves, in the source string;
3. Split the source string and add the separators to the left or right substring.
Code:

public enum SeparatorOptions
{
    None = 0,
    AddSeparatorToLeftSubstring = 1,
    AddSeparatorToRightSubstring = 2
}

public static class Extensions
{
    public static string[] Split(this string source, char[] separators, SeparatorOptions separatorOptions)
    {
        if (separatorOptions < SeparatorOptions.None || separatorOptions > SeparatorOptions.AddSeparatorToRightSubstring)
        {
            throw new ArgumentException("Arg_SeparatorEnumIllegalVal");
        }
        if (separators == null || separators.Length == 0)
        {
            return source.Split((char[])null);
        }
        if (separatorOptions == SeparatorOptions.None)
        {
            return source.Split(separators);
        }

        // Getting all indexes of separators, and separators in source string
        int foundCount = 0;
        int[] separatorIndexes = new int[source.Length];
        char[] separatorValues = new char[source.Length];
        for (int i = 0; i < source.Length; i++)
            for (int j = 0; j < separators.Length; j++)
            {
                char separator = separators[j];
                if (source[i] == separator)
                {
                    separatorValues[foundCount] = source[i];
                    separatorIndexes[foundCount++] = i;
                    break;
                }
            }

        string[] splitString = source.Split(separators);

        // Adding separators to left or right substring
        if (separatorOptions == SeparatorOptions.AddSeparatorToLeftSubstring)
            return AddSeparatorToLeftSubstring(splitString, separatorValues, foundCount);
        if (separatorOptions == SeparatorOptions.AddSeparatorToRightSubstring)
            return AddSeparatorToRightSubstring(splitString, separatorValues, foundCount);
        return splitString;
    }

    private static string[] AddSeparatorToLeftSubstring(string[] splitString, char[] separatorValues, int foundCount)
    {
        for (int i = 0; i < foundCount; i++)
        {
            splitString[i] += separatorValues[i];
        }
        return splitString;
    }

    private static string[] AddSeparatorToRightSubstring(string[] splitString, char[] separatorValues, int foundCount)
    {
        for (int i = 1, j = 0; i <= foundCount; i++, j++)
        {
            splitString[i] = separatorValues[j] + splitString[i];
        }
        return splitString;
    }
}

class Program
{
    static void Main(string[] args)
    {
        string str = "-1-2+3+4-5-6";

        Console.WriteLine("Original split()");
        var res2 = str.Split(new char[] { '+', '-' }, SeparatorOptions.None);
        foreach (var el in res2)
            Console.WriteLine(el);

        Console.WriteLine("Add to right substring");
        var res = str.Split(new char[] { '+', '-' }, SeparatorOptions.AddSeparatorToRightSubstring);
        foreach (var elem in res)
            Console.WriteLine(elem);

        Console.WriteLine("Add to left substring");
        var res3 = str.Split(new char[] { '+', '-' }, SeparatorOptions.AddSeparatorToLeftSubstring);
        foreach (var el in res3)
            Console.WriteLine(el);

        Console.ReadKey();
    }
}

What I wrote seems to work, although I am not sure if it is the best way, or the recommended way, to do something like this, so I want to ask what is wrong with my code and how it could be done better. Answer: Review
You use very clean and strong names for all your variables, so it's very easy to understand your code.
You too often go without {}. Especially the double for loops become very confusing.
You don't use good exception messages. In Arg_SeparatorEnumIllegalVal the Arg prefix is not necessary because the type of the exception is already telling me it's about an argument. You also formatted it like it was a variable, with PascalCase, but it's a message, so it should be formatted like a normal sentence.
Alternative solution
There is a much shorter way to get the same results.
You could use two different regexes:

var str = "-1-2+3+4-5-6";
Regex.Matches(str, "([-+]?[0-9]?)").Cast<Match>().Select(m => m.Value).Dump();
Regex.Matches(str, "([0-9]?[-+]?)").Cast<Match>().Select(m => m.Value).Dump();

which will respectively give you:

-1
-2
+3
+4
-5
-6
""

and

-
1-
2+
3+
4-
5-
6
""

Whether you want to keep the empty string or not is up to you.
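The same keep-the-delimiter behaviour can also come from a single split with a capturing group (shown here in Python for brevity; .NET's Regex.Split likewise includes captured groups in its result array, so the same trick applies):

```python
import re

s = "-1-2+3+4-5-6"
parts = re.split(r"([+-])", s)   # the capturing group keeps the separators
# parts == ['', '-', '1', '-', '2', '+', '3', '+', '4', '-', '5', '-', '6']

# Separator attached to the RIGHT substring:
right = [parts[0]] + [sep + sub for sep, sub in zip(parts[1::2], parts[2::2])]
# -> ['', '-1', '-2', '+3', '+4', '-5', '-6']

# Separator attached to the LEFT substring:
left = [sub + sep for sub, sep in zip(parts[0::2], parts[1::2])] + [parts[-1]]
# -> ['-', '1-', '2+', '3+', '4-', '5-', '6']

print(right)
print(left)
```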
{ "domain": "codereview.stackexchange", "id": 30720, "tags": "c#, strings, extension-methods" }
Union of halting-like problem and non-halting-like problem
Question: I came across the following problem: Define languages $L_0$ and $L_1$ as follows: $L_0=\{⟨M,w,0⟩∣M\text{ halts on }w\}$ $L_1=\{⟨M,w,1⟩∣M\text{ does not halt on }w\}$ Here $⟨M,w,i⟩$ is a triplet, whose first component $M$ is an encoding of a Turing Machine, second component $w$ is a string, and third component $i$ is a bit. Let $L=L_0∪L_1$. Which of the following is true? A. $L$ is recursively enumerable, but $\overline{L}$ is not B. $\overline{L}$ is recursively enumerable, but $L$ is not C. Both $L$ and $\overline{L}$ are recursive D. Neither $L$ nor $\overline{L}$ is recursively enumerable I first felt that, despite the bit as a third member of the triple, $L_0$ is still equivalent to the halting problem and $L_1$ to the non-halting problem. The union of the halting and non-halting problems is recursive, as can be seen here. So the same should apply to the languages in the problem and their union should also be recursive, that is, option C. But the answer given was D, so I am wondering whether it is correct. I was not able to see how that extra bit in the triple makes it different from the halting problem. Answer: Let's write formally what $L$ and $\overline{L}$ are: $$L=\{\langle M,w,b \rangle| (b=0 \wedge M(w) \mbox{ halts}) \vee (b=1 \wedge M(w) \mbox{ does not halt}) \}$$ $$\overline{L}=\{\langle M,w,b \rangle| (b=1 \vee M(w) \mbox{ does not halt}) \wedge (b=0 \vee M(w) \mbox{ halts}) \}$$ The latter is confusing, so let's rearrange it: $$\overline{L}=\{\langle M,w,b \rangle| (b=1 \wedge M(w) \mbox{ halts}) \vee (b=0 \wedge M(w) \mbox{ does not halt}) \}$$ (Note that we ignore malformed inputs, but that can be easily circumvented and does not affect the answer). From this, you can easily see that $L$ and $\overline{L}$ are basically the same idea -- the bit at the end determines whether you're looking at the halting problem or at its complement. In either case, neither $L$ nor its complement is recursively enumerable.
One thing that can lead you to suspect this answer, even without formally writing $\overline{L}$, is the fact that both the halting problem and its complement easily reduce to $L$. Thus, $L$ is neither recognizable nor co-recognizable, and so neither is its complement.
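The "easily reduce" remark can be made concrete. Over a finite toy universe of machine/input pairs with known behaviour, the maps ⟨M,w⟩ ↦ ⟨M,w,0⟩ and ⟨M,w⟩ ↦ ⟨M,w,1⟩ are many-one reductions from HALT and from its complement into L (the halting table below is a stand-in for illustration only; no such computable oracle exists in general):

```python
# Toy stand-in: pretend we know, for a few (M, w) pairs, whether M halts on w.
halts = {("M1", "a"): True, ("M2", "b"): False, ("M3", ""): True}

def in_L(triple):
    """Membership in L = L0 ∪ L1, per the definitions in the question."""
    M, w, b = triple
    return (b == 0 and halts[(M, w)]) or (b == 1 and not halts[(M, w)])

f = lambda M, w: (M, w, 0)   # reduction from HALT:     x in HALT    <=> f(x) in L
g = lambda M, w: (M, w, 1)   # reduction from co-HALT:  x in co-HALT <=> g(x) in L

for (M, w), h in halts.items():
    assert in_L(f(M, w)) == h          # HALT reduces to L
    assert in_L(g(M, w)) == (not h)    # co-HALT reduces to L
print("both reductions verified on the toy universe")
```

Since HALT is not co-recognizable and co-HALT is not recognizable, both reductions landing in the same language L rule out L being recursively enumerable or co-recursively enumerable.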
{ "domain": "cs.stackexchange", "id": 15391, "tags": "formal-languages, undecidability, halting-problem, semi-decidability" }
What language should the following DFA recognise?
Question: My question is: what language should the following DFA recognise? It seems that it should contain an odd number of substrings of the form 11*0. However, I am not sure whether there are any other conditions. Answer: Arrive at the answer by following all paths from the start state to the accepting state, identifying the 'tail recursion': 0*1+0+(1+0+1+0+)* = 0*1*100*(1*100*1*100*)* = 0*11*00*(11*00*11*00*)* Intuitively, these are all strings over {0,1}* with an odd number of 10 substrings, which is equivalent to the characterization you gave.
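The three regular-expression forms in the answer can be brute-force checked for agreement over all short strings, and every accepted string does contain an odd number of 10 substrings:

```python
import re
from itertools import product

# The three forms from the answer ((?:...) is just a non-capturing group):
forms = [re.compile(p) for p in (
    r"0*1+0+(?:1+0+1+0+)*",
    r"0*1*100*(?:1*100*1*100*)*",
    r"0*11*00*(?:11*00*11*00*)*",
)]

for n in range(10):
    for bits in product("01", repeat=n):
        s = "".join(bits)
        results = [p.fullmatch(s) is not None for p in forms]
        assert len(set(results)) == 1          # all three forms agree on s
        if results[0]:
            assert s.count("10") % 2 == 1      # accepted => odd number of "10"s
print("all three forms agree on all strings up to length 9")
```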
{ "domain": "cs.stackexchange", "id": 7510, "tags": "automata, finite-automata" }
WinForm Logon Best Practices
Question: I have a working logon for a WinForms project. I use the program.cs file to launch the login form. I am not so sure there isn't a better way of implementing this. Here is my program.cs file:

using System;
using System.Windows.Forms;
using WindowsFormsApp.Presenters;
using WindowsFormsApp.Views;
using Autofac;
using DbContexts;
using Serilog;

namespace WindowsFormsApp
{
    internal static class Program
    {
        public static IContainer Container { get; private set; }
        public static string UserName { get; set; }

        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        private static void Main()
        {
            var builder = new ContainerBuilder();
            builder.Register(c => new MyContext());
            Container = builder.Build();

            Log.Logger = new LoggerConfiguration()
                .MinimumLevel.Debug()
                .WriteTo.Console()
                .WriteTo.RollingFile("log-{Date}.txt")
                .CreateLogger();
            Log.Information("Application Started");

            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);

            var loginForm = new LoginForm();
            var results = loginForm.ShowDialog();
            if (results == DialogResult.Cancel)
                System.Environment.Exit(1);
            while (results != DialogResult.OK)
            {
                results = loginForm.ShowDialog();
                if (results == DialogResult.Cancel)
                    System.Environment.Exit(1);
            }

            var mainFormView = new MainFormView();
            mainFormView.Tag = new MainFormPresenter(mainFormView);
            Application.Run(mainFormView);
        }
    }
}

Any suggestions or comments are welcome. Answer: You have a public static property UserName that can be set by anyone, anywhere in your code. You didn't detail the usage of this property. If it's the logon form that should update it and it will never update again, have your Program class set this field. If it's another class that updates this field, maybe this property doesn't belong here. You have some code that is duplicated, so there must be a way to do better.
From my point of view, you are doing many different things in your Main method; maybe you could put these operations in separate methods: Init, Login, Start. Use using blocks for disposable objects. You put the Presenter for MainForm in the Tag property, which I think belongs in a dedicated property. That way the MainForm can control who has access to it; otherwise any object that has access to the MainForm can recover the Tag and therefore the Presenter. Another problem is that the Tag property is an object, so you will have to convert it every time you want to use it. So, all together that would become: using System; using System.Windows.Forms; using WindowsFormsApp.Presenters; using WindowsFormsApp.Views; using Autofac; using DbContexts; using Serilog; namespace WindowsFormsApp { internal static class Program { public static IContainer Container { get; private set; } public static string UserName { get; private set; } /// <summary> /// The main entry point for the application. /// </summary> [STAThread] private static void Main() { Init(); Login(); Start(); } private static void Init() { var builder = new ContainerBuilder(); builder.Register(c => new MyContext()); Container = builder.Build(); Log.Logger = new LoggerConfiguration() .MinimumLevel.Debug() .WriteTo.Console() .WriteTo.RollingFile("log-{Date}.txt") .CreateLogger(); Log.Information("Application Started"); Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); } private static void Login() { using (var loginForm = new LoginForm()) { var results = DialogResult.None; do { results = loginForm.ShowDialog(); if (results == DialogResult.Cancel) System.Environment.Exit(1); } while (results != DialogResult.OK); //Since we logged on correctly, we can update UserName (I guess) UserName = loginForm.ValidatedUserName; } } private static void Start() { using (var mainFormView = new MainFormView()) { mainFormView.Presenter = new MainFormPresenter(mainFormView); Application.Run(mainFormView); } } } }
{ "domain": "codereview.stackexchange", "id": 39170, "tags": "c#, authentication, winforms" }
South of the tropics, where does the sun's noon shadow point?
Question: If I understand correctly, in countries to the north of the tropics the sun's noon shadow points north. What happens south of the tropics? Is it pointing south? Answer: Is it pointing south? Yes, the shadow points south when you are south of the tropics (south of the Tropic of Capricorn). In the tropics (between the Tropic of Cancer and the Tropic of Capricorn), the direction of the Sun's shadow depends on the location and the date. As the angle between the sun and the earth's axis changes during the year, the apparent path of the sun in the sky changes. Outside the tropics, this change is insufficient to ever bring the sun directly overhead. Regardless of the date, the sun will be seen to be toward the equator at local noon. (Same as saying the shadow points away from the equator). Within the tropics, the changing path sometimes brings the sun directly overhead. The position at local noon depends on the specific location and the date. On the December solstice, the sun will be overhead for locations on the Tropic of Capricorn and will cross to the south at local noon for all locations in the tropics (and those further north). During the June solstice, the situation is reversed for the tropics. All locations south of the Tropic of Cancer would see the sun pass to the north.
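The latitude-versus-declination rule described above can be sketched as a tiny function (signs: north positive; the Sun's declination ranges over roughly ±23.44° during the year):

```python
def noon_shadow(latitude_deg, declination_deg):
    """Direction a vertical stick's shadow points at local noon.
    North latitudes/declinations are positive."""
    if latitude_deg > declination_deg:
        return "north"   # Sun culminates to the south -> shadow points north
    if latitude_deg < declination_deg:
        return "south"   # Sun culminates to the north -> shadow points south
    return "none"        # Sun directly overhead

print(noon_shadow(45.0, 23.44))     # north: north of the tropics, any date
print(noon_shadow(-35.0, -23.44))   # south: south of the tropics, December solstice
print(noon_shadow(10.0, 23.44))     # south: in the tropics near the June solstice
print(noon_shadow(-23.44, -23.44))  # none: Tropic of Capricorn, December solstice
```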
{ "domain": "astronomy.stackexchange", "id": 3687, "tags": "the-sun, solar-system, earth" }
Underwater Property Simulation with ROS2 e.g Particle Plume
Question: I want to develop and test adaptive path planning algorithms for AUVs and also USVs. So far I did this in ROS1 with the UUV Simulator and especially the uuv_plume_simulator. Now I am looking for ways to do this in ROS2. I managed to get a simulation of an AUV working with ROS2 Humble and Gazebo Ignition, but I couldn't find a solution comparable to the UUV Plume Simulator. Are there Gazebo (Ignition/Garden/Harmonic) plugins or projects that already deal with the simulation of the distribution of water properties like salinity/turbidity/temperature, or the simulation of freshwater/hotwater/oil/particle spills or plumes? Ideally these properties would also be influenced by currents. So far I only came across the EnvironmentPreload plugin for Gazebo Harmonic, but it seems to only offer a static distribution of values. Any recommendations are highly appreciated. Answer: We did a port of the UUV Plume Simulator package to ROS2 and it is available here: https://github.com/tiko5000/uuv_plume_simulator_ros2
{ "domain": "robotics.stackexchange", "id": 39060, "tags": "gazebo, ros-humble, underwater, auv, usv" }
Does the frame of reference change the behavior of a particle
Question: The word "behavior" in the title is not quite right. I'm in a train at a railway station, and on the platform there is a glass case (the mass of the case is 5 kg, and if it is hit with a force of 10 N the glass will break); a frame of reference is attached to me. Now my train starts with an acceleration of $2\, m/s^2$. According to me, I'm at rest and the glass case is moving backward, so the acceleration of the glass case is $2\, m/s^2$ (for now I'm not considering the direction). Therefore the force acting on the glass case is $F=ma=5\times 2=10\, N$, at which the glass case should break, but this is not happening. Whereas according to a frame of reference attached to a person on the platform, everything is fine (i.e. there is no horizontal force acting on the glass case). The law cannot be wrong, so that means I'm wrong somewhere. I cannot find my mistake, so please help me. Answer: You are calculating an unreal force by using Newton's second law. Remember that Newton's laws are valid in inertial frames of reference, and Newton defined inertial frames as those frames where an object continues to be at rest or in constant motion unless acted upon by a real physical force. In your example, neither you nor the train is an inertial frame. Hence, calculating $F$ from $F=ma$ for the glass case is invalid, because the glass case is not acted upon by any real force. You cannot say a force of that many newtons is acting on it. Only when a real force acts on a body and causes an acceleration can you calculate the force from the mass and the acceleration, and this happens in inertial frames.
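The bookkeeping can be made explicit with the question's own numbers: in the accelerating train frame, Newton's second law only works after a fictitious pseudo-force is added by hand, and the real horizontal force on the glass case comes out to zero. A minimal arithmetic sketch:

```python
m = 5.0        # kg, mass of the glass case
a_frame = 2.0  # m/s^2, acceleration of the train (the non-inertial frame)

# Seen from the train, the case appears to accelerate backwards at a_frame,
# so the naive F = ma computed on board is:
apparent_force = m * (-a_frame)

# To use Newton's laws in the train frame, one must add the pseudo-force
# -m*a_frame, which is not exerted by any physical agent.
pseudo_force = -m * a_frame

# The real force is whatever remains after removing the fictitious part.
real_force = apparent_force - pseudo_force
print(real_force)  # 0.0 -- no real horizontal force acts on the glass case
```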
{ "domain": "physics.stackexchange", "id": 24520, "tags": "newtonian-mechanics, forces, reference-frames" }
Reading PointCloud2 in C++
Question: How do you read a PointCloud2 in C++? I'm interested in getting the depth(z point). Maybe an equivalent of this: reading pointcloud2 in python. Originally posted by Homer Manalo on ROS Answers with karma: 475 on 2011-03-02 Post score: 1 Answer: If you know what type of points you expect inside the PointCloud2, you can use the information from the pcl_ros wiki page for Subscribing to point clouds. Originally posted by Eric Perko with karma: 8406 on 2011-03-02 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Homer Manalo on 2011-03-03: For some reason, I did not notice that. Thanks. Comment by Eric Perko on 2011-03-03: Note that that is an optical frame, with Z pointing along the camera's viewing axis, not a standard body frame. In that situation, the camera frame should be specified with an _optical suffix, as noted in REP 105: http://www.ros.org/reps/rep-0103.html#coordinate-frame-conventions . Comment by Eric Perko on 2011-03-03: That warning isn't a big deal. The question I had was what sort of reference frame the PointCloud was using. According to the diagram and docs at http://www.ros.org/wiki/stereo_image_proc#stereo_image_proc.2BAC8-common.Nodes , /camera ought to be the frame of the left camera in the pair. Comment by Homer Manalo on 2011-03-03: The frame_id is [camera]. I actually have a warning message on the rviz console window saying [camera] is not a fully qualified frame_id (resolved locally to [/camera]). I have not setup any tf here. Comment by Eric Perko on 2011-03-03: What is the reported frame_id in the PointCloud2 you are reading? Comment by Homer Manalo on 2011-03-03: I'm just using the pointcloud from stereo_image_proc. Where is the origin here? (top-left corner? top-right corner? center of the image?) Comment by dornhege on 2011-03-02: That depends on whoever sends the pointcloud and what frame the data is in. For sensors the origin is usually (0,0,0). A negative y is also not unexpected. 
It could mean a point right or below the sensor (depending on the coordinate system) Comment by Homer Manalo on 2011-03-02: Is the origin(0.0, 0.0) of points x and y at the center of frame? I'm seeing a negative y point here. Comment by surabhi96 on 2018-07-19: @Eric I did not understand your solution. Can you elaborate? Thanks
{ "domain": "robotics.stackexchange", "id": 4927, "tags": "c++, pointcloud" }
Genome assembly from error-prone reads
Question: I understand how to assemble genome from error-free reads. I implemented like this: Construct directed overlap graph with reads as vertices and edges as maximum overlap between two vertices. Edges represent the length of overlapping maximum substring of one read suffix with other read prefix. Find the path that visits all vertices only once in overlap graph. With small graph we can use greedy approach : Starting from first pattern always choose egde of maximum weight and store shortest common superstring spelled by this path in resulting string. Resulting string spelled by path is shortest common superstring of all reads, that is assembled genome. For example AACTAG CTAGAT Gives the AACTAGAT as a result Now let's say that two error-prone reads have at most one error (mismatch): AACTAG CTCGAT As you can see second read has one substitution in third character from 'A' to 'C'. So how assembled genome should look like for the error-prone reads? Do I take A or C to resulting string? Or I'm missing something? Note: I used short reads for example because of simplicity. I work with much larger number of reads which are 100 nucleotides long and have at most one substitution of character. So that's a big overlap graph. I know I can use De Bruijn graph, but can I use some greedy approach (like in error-free example)? Answer: It's common for assemblers to have a coverage threshold to make sure that there are enough reads across the same region to properly assemble a contig. You can't work out an ideal solution from a situation where the consensus sequence is ambiguous. If your overlap were representative (e.g. a heterozygous variant in a diploid genome, based on a few tens of reads of 100bp, rather than 6bp), then the resulting assembly would have a bubble in the assembly graph at the variant point. A / \ A-A-C-T G-A-T \ / C A conservative assembler might produce two assembled contigs corresponding to these two paths, AACTAGAT and AACTCGAT. 
A greedy assembler would just pick one path and use that. Subsequent assembly steps might attempt to search for bubbles and annotate the assembled contigs with variant locations.
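To answer the "can I use some greedy approach?" part concretely: yes, and the only changes from the error-free case are an overlap score that tolerates up to one substitution and a merge rule that picks one read's base at each disagreeing position (here, arbitrarily, the left read's; a real assembler would use coverage and quality scores to vote). A toy Python sketch, using the question's example (function names are illustrative):

```python
def overlap(a, b, min_len=3, max_mismatch=1):
    """Length of the longest suffix of `a` matching a prefix of `b`
    with at most `max_mismatch` substitutions (no indels)."""
    for l in range(min(len(a), len(b)), min_len - 1, -1):
        mismatches = sum(x != y for x, y in zip(a[-l:], b[:l]))
        if mismatches <= max_mismatch:
            return l
    return 0

def greedy_assemble(reads, min_len=3, max_mismatch=1):
    """Repeatedly merge the pair joined by the heaviest overlap edge."""
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    l = overlap(a, b, min_len, max_mismatch)
                    if l > best_len:
                        best_len, best_i, best_j = l, i, j
        if best_len == 0:
            break
        a, b = reads[best_i], reads[best_j]
        merged = a + b[best_len:]   # keep a's bases at mismatching positions
        reads = [r for k, r in enumerate(reads)
                 if k not in (best_i, best_j)] + [merged]
    return reads

# The question's example: the C in read 2 disagrees with the A in read 1,
# but the 4-base overlap with one mismatch is still accepted.
print(greedy_assemble(["AACTAG", "CTCGAT"]))   # ['AACTAGAT']
```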
{ "domain": "bioinformatics.stackexchange", "id": 265, "tags": "genome, assembly, reads" }
Is Artificial intelligence simply taking decisions on the basis of values produced by a machine learning model
Question: I am researching on AI and its working. Whenever I try to search for AI algorithms, ML algorithms come up. Then, I read the differences between ML & AI. One of the key points mentioned was "AI is decision making" & "Machine learning is generating values and learn new things". I come up with a conclusion that ML allows us to take generate some sort of values and using AI we can make decisions with those values. But I am confused with "The weather forecast" problem. Our machine learning model will directly generate the decision that will it rain or not? Is our ML model lies in the AI domain or I am wrong? Help me! Answer: Artificial Intelligence is a very broad area of Computer Science which is intertwined with many other fields, and someone might argue that its definition is the discipline that develops rationally acting systems. When it comes to Machine Learning, the generally accepted definition is programming computers to perform a specific task without specific instructions about the problem. Being that broad of a field, AI comprises Machine Learning problems, but the difference between the two areas relies on how you devise your system: if you make use of problem-specific knowledge in your algorithm: you're outside the ML field; if you formulate your problem in a general-enough manner and use a generic algorithm: most likely you're inside the ML field. I'm sure you have already seen this image from Nvidia a million times already, but it's intuitive to understand what's the relationship between AI and ML: the latter is part of the former.
{ "domain": "cs.stackexchange", "id": 15663, "tags": "machine-learning, artificial-intelligence" }
Neutrino decoupling and units
Question: I have a problem that states: In the early Universe, neutrinos can be created and destroyed by the process: \begin{equation} \nu \bar{\nu} \leftrightarrow e^{+}e^{-} \end{equation} The thermally averaged cross section for this process is given by $\langle \sigma|v|\rangle = K G_{F}^{2}T^{2}$, where $K$ is a constant of order unity. Assume that $K = 1$. Use the condition $\Gamma \equiv n_{\nu}\langle \sigma|v|\rangle = H$ to calculate the decoupling temperature of neutrinos. From the Friedmann equation I have: \begin{equation} H = \sqrt{\frac{8\pi}{3}G\rho}, \end{equation} where: \begin{equation} \rho = \left(\frac{\pi^{2}}{30c^{5}\hbar^{3}} g\left(k_{B}T\right)^{4}\right), \end{equation} and $g$ is the g-factor (degree of freedom) which is 1 for neutrinos and 2 for electrons/positrons. $n_{\nu}$ is given by: \begin{equation} n_{\nu} = \left(\frac{\zeta(3)}{\pi^{2}c^{3}\hbar^{3}} g\left(k_{B}T\right)^{3}\right), \end{equation} with $\zeta(3) = 1.20206$. Now, I've tried to solve for $T$ $\left(n_{\nu}\langle \sigma|v|\rangle = H\right)$ in a bunch of ways, both in hand, and with computer, which for the computer is: $$T_{units} = \frac{\sqrt[3]{s}K}{J\sqrt[3]{kg}m^{5/3}},$$ with $s$ being seconds, $K$ is kelvin, $J$ is joule, $kg$ is, yeah, kilograms, and $m$ is meter. If I do it by hand, I end up with $T$ (In eV): $$T_{units} = \sqrt[5]{\frac{kg}{m^4}}$$ So yeah, not the same (At least what I can see), but either way, it gives an incorrect unit for temperature (Either kelvin or eV). And I can't seem to see what I can do in order to get the rights units in this. Searching on the web for neutrino decoupling, I can see the relation: $$T \sim \left( \frac{\sqrt{G}}{G_F^2} \right) ^{1/3} \sim 1~\textrm{MeV}$$ So I know my temperature should be around the same, but again, I can't see how the units for the calculation can give me MeV or kelvin at least. 
Answer: Correction of typos and clarifications In the Friedmann equation, '$\rho$' is strictly speaking $\rho_m$, the mass density. Hence the presented Friedman equation has to be changed as follows: \begin{equation} H = \sqrt{\frac{8\pi}{3}G\rho_m}=\sqrt{\frac{8\pi}{3}\frac{G\rho}{c^2}}, \end{equation} in order to use the expression given below for the energy density, denoted by $\rho$. In the expression for the energy density, the $c^5 \hbar^3$ factor is wrong. Energy density means energy per unit volume, $EL^{-3}$, where $E$ and $L$ denote units of energy and length, respectively. Since $\left(k_{B}T\right)^{4}$ has units of $E^4$, and $c\hbar$ has units $EL$, we need a $(c\hbar)^3$ factor to get the right dimensions. This means: \begin{equation} \rho = \frac{\pi^{2}}{30c^{3}\hbar^{3}} g\,\left(k_{B}T\right)^{4}, \end{equation} The number of degrees of freedom for each neutrino flavour is $g_\nu=2$ (neutrino plus antineutrino; see, for instance, p.45 of Dodelson's Modern Cosmology), while, for clarity, I just point out that $g_{e^-}=g_{e^+}=2$, implying $g_e \equiv g_{e^-}+g_{e^+}=4$. Since the Universe at the time of neutrino decoupling is presumably populated by photons, electrons, positrons and the three known flavours of neutrinos, the $g$ factor appearing in the energy density $\rho$ in the previous equation is actually a weighted sum of the $g$ factors of these populating species, given by: \begin{equation} g = \sum_{\textrm{bosons } i}g_i + \frac{7}{8}\sum_{\textrm{fermions } j}g_j = g_\gamma +\frac{7}{8}(g_{e}+3\,g_{\nu}) = 2 + \frac{7}{8}(4+3\times2) = \frac{43}{4} \end{equation} For additional clarity, see J. Bernstein, L. Brown, and G. Feinberg, Rev. Mod. Phys. 61 (1989) 25. The $g$ in $n_\nu$ is actually ${3\over4} g_\nu$. 
This is so because we are computing the rate $\Gamma$ of weak thermalizing processes which essentially occur through the interaction of electrons and electron neutrinos, and thus $\Gamma$ depends on the existing density of this species of neutrinos. The muon and tau neutrinos are assumed to be decoupled (yet they are still there, contributing to the energy density). Also, neutrinos are fermions, requiring the numerical $3\over4$ factor. Finally, the thermally averaged cross-section $\langle \sigma|v|\rangle$ has units of area times speed, $L^3 T^{-1}$ (where $T$ denotes units of time). Since we are insisting on bringing out all factors of $c$, $\hbar$, and $k_B$, we change $T\rightarrow k_B T$ and, noticing that $G_F$ has units $EL^3$ (see, for instance, p.313 of Griffiths' Introduction to Elementary Particles, 2nd ed.), we can also change $G_{F}\rightarrow G_{F}/(c\hbar)^3$. Now, the product $(k_B T)^2 \,(G_{F}/(c\hbar)^3)^2$ has units of $E^{-2}$. We multiply this by $c^3\hbar^2$, which has units $E^2L^3T^{-1}$ (since $\hbar$ has units $ET$), to produce the dimensionally correct expression for the thermally averaged cross-section: \begin{equation} \langle \sigma|v|\rangle = \bigg(\frac{G_{F}}{c^3\hbar^3}\bigg)^2\, (k_B T)^2 \, c^3\hbar^2 \end{equation} Putting things together Insisting on keeping the $c$, $\hbar$, and $k_B$ factors, $\Gamma = H$ yields, after much care with all factors: \begin{equation} (k_B T)^3 = \underbrace{\Bigg(\frac{\pi^3}{\zeta(3)}\sqrt{\frac{8\pi}{90}} \frac{\sqrt{g}}{\frac{3}{4}g_\nu} \Bigg )}_{\simeq \,29.8} \underbrace{\Bigg(\sqrt{\frac{G}{c\hbar}}\Bigg)}_{\equiv \,1/M_{Pl}} \,\frac{c^4 \hbar^6}{{G_F}^2} \simeq 29.8 \Bigg[ \big(M_{Pl} c^2\big) \, \bigg(\frac{G_{F}}{c^3\hbar^3}\bigg)^2 \Bigg]^{-1} \end{equation} In the above we have introduced the Planck mass $M_{Pl}$. 
Knowing that $M_{Pl} c^2 \simeq 1.22\times 10^{19}\,\textrm{GeV}$ and $G_{F}/(c\hbar)^3\simeq 1.17\times 10^{-5} \,\textrm{GeV}^{-2}$, one gets: \begin{equation} k_B T \simeq \left ( \frac{29.8}{1.67\times 10^{9} \,\textrm{GeV}^{-3} } \right )^{1/3} \simeq 2.6 \,\textrm{MeV} \end{equation} The (natural) moral It is very cumbersome to carry around all the $c$, $\hbar$, and $k_B$ factors (as you can see by doing the above computation yourself), which is why one very much likes to work in the so-called natural units in the context of particle physics and cosmology. In fact, taking all these factors to be $1$, the first equation of the previous section immediately gives the result for the neutrino freeze-out/decoupling temperature in the form you have found it: \begin{equation} T \,\simeq \,29.8 \,\left( \frac{\sqrt{G}}{{G_F}^2} \right) ^{1/3} \simeq 2.6\,\textrm{MeV} \end{equation} Of course, here, only the order of magnitude can be meaningful since we took $K=1$. The $k_B$ is there but equals $1$ in these natural units, so one just writes $T$ instead of $k_B T$. This and the last expression given above are the same, just written using different units.
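The final numbers are easy to reproduce. A quick numerical check, keeping only the GeV-based inputs $M_{Pl}c^2 \simeq 1.22\times10^{19}\,\textrm{GeV}$ and $G_F/(c\hbar)^3 \simeq 1.17\times10^{-5}\,\textrm{GeV}^{-2}$:

```python
import math

zeta3 = 1.20206
g = 43.0 / 4.0   # effective relativistic degrees of freedom at decoupling
g_nu = 2.0       # one neutrino flavour (nu + nubar)

# Dimensionless prefactor from the freeze-out condition Gamma = H.
prefactor = (math.pi**3 / zeta3) * math.sqrt(8.0 * math.pi / 90.0) \
            * math.sqrt(g) / (0.75 * g_nu)
print(round(prefactor, 1))               # 29.8

M_pl = 1.22e19   # GeV, Planck mass (times c^2)
G_F = 1.17e-5    # GeV^-2, Fermi constant over (c hbar)^3
kT = (prefactor / (M_pl * G_F**2)) ** (1.0 / 3.0)  # GeV
print(round(kT * 1e3, 2), "MeV")         # 2.61 MeV
```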
{ "domain": "physics.stackexchange", "id": 18065, "tags": "homework-and-exercises, cosmology, units, neutrinos" }
RNN using multiple time series
Question: I am trying to create a neural network using time series as input, in order to train it based on the type of each series. I read that using RNNs you can split the input into batches and feed every point of the time series into individual neurons, and eventually train the network. What I am trying to do, though, is use multiple time series as input. So for example you might receive input from two sensors (so two time series), but I want to use both of them in order to get a final result. Also, I am not trying to predict future values of the time series; I am trying to get a classification based on all of them. How should I approach this problem? Is there a way to use multiple time series as input to an RNN? Should I try to aggregate the time series into one? Or should I just use two different neural networks? And if this last approach is correct, wouldn't it become too computationally intensive if the number of time series increases? Answer: Multivariate time series is an active research topic; you will find a lot of recent papers tackling the subject. To answer your questions: you can use a single RNN. You can input one value for each time step, and nothing keeps you from adding another value at each time step (if your sensors are synchronized). Your model will then learn how to classify with a two-dimensional time series. You can check this blog; in your case, only the output is different. As for the last two points, aggregating the time series into one is risky in the sense that you might lose important information during the process. Finally, the main disadvantage of your last approach (separate networks) is that you won't be able to exploit a potential correlation between the two time series for the final classification.
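To make the "one value per sensor per time step" idea concrete, here is a minimal, dependency-free sketch of an Elman-style RNN forward pass over a two-sensor series. The weights are random and untrained; in practice you would use a framework (Keras, PyTorch) where the input simply has shape (timesteps, 2):

```python
import math
import random

random.seed(0)

HIDDEN = 4     # hidden-state size (arbitrary for this sketch)
N_SENSORS = 2  # two synchronized time series

def rnn_classify(series, W_in, W_h, W_out):
    """Minimal Elman-style RNN forward pass.  Each time step feeds one
    2-dimensional vector (one value per sensor) into the same cell."""
    h = [0.0] * HIDDEN
    for x in series:                      # x = (sensor1_t, sensor2_t)
        h = [math.tanh(sum(W_in[i][d] * x[d] for d in range(N_SENSORS))
                       + sum(W_h[i][j] * h[j] for j in range(HIDDEN)))
             for i in range(HIDDEN)]
    logit = sum(W_out[j] * h[j] for j in range(HIDDEN))  # read out final state
    return 1.0 / (1.0 + math.exp(-logit))                # class probability

# Random, untrained weights -- a real setup would learn these.
W_in = [[random.uniform(-1, 1) for _ in range(N_SENSORS)] for _ in range(HIDDEN)]
W_h = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
W_out = [random.uniform(-1, 1) for _ in range(HIDDEN)]

series = [(0.1, 0.9), (0.3, 0.8), (0.2, 0.7)]  # two sensors, three time steps
p = rnn_classify(series, W_in, W_h, W_out)
print(0.0 < p < 1.0)   # True
```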
{ "domain": "datascience.stackexchange", "id": 2101, "tags": "time-series, rnn" }
Electrophorus: Why can't we use a magnet in substitution of a charged body to induce the polarization?
Question: A magnetic field can induce a current in a coil, moving electrons from one side to another. Why isn't it possible to use the same principle in an electrophorus, using a magnet instead of a charged body? What's the difference between induction with magnets and with a negatively charged body? P.S.: I'm not asking for clarification about the difference between a magnetic field and an electric field. Maybe it's related, but my point is that if both can induce charges, why can't one act as a substitute in an electrophorus? Answer: The magnetic field only induces currents when it is changing. In the standard electrophorus, you use a static electric field to induce a charge on one part of the metal, and then you manually drain the charge from another part of the metal. With a static magnetic field, nothing happens. You could make something like an electrophorus by using a coil attached to a pair of plates, quickly pushing a magnet so that it runs by the coil, inducing a current which charges the plates, and then (quickly, while the magnet is still moving) disconnecting the coil from the plates. This would work to charge the plates, but it isn't an electrophorus; it's a miniature dynamo used to charge a capacitor.
{ "domain": "physics.stackexchange", "id": 1996, "tags": "electromagnetism, electrostatics" }
How to effectively wash the precipitate or crystals on the filter and why?
Question: My lab manual says that the precipitate on the filter should be washed with a few small portions of the washing liquid. Why can't I just completely fill the filter funnel with the solvent instead of steadily adding new tiny portions to it? Also, how effective is this washing process? How many times should I wash the precipitate before it's completely free from impurities? A filter funnel with filter paper of which diameter should I use? Isn't the biggest one also the most effective? Finally, why do I have to study limits when I want to become a chemist, not a mathematician? Answer: Every time the precipitate on the filter is washed, there is a certain constant volume $V_\mathrm{ret}$ of washing fluid retained on the filter. If the initial concentration of impurity is $c_0$, then the initial amount of impurity on the filter is $$n_0 = c_0V_\mathrm{ret}$$ and after washing with a new portion of the fluid of volume $V_\mathrm{add}$ the concentration of impurity is reduced to $$c_1 = \frac{V_\mathrm{ret}}{V_\mathrm{ret} + V_\mathrm{add}}c_0.$$ Analogously, after the $n$-th washing act $$ c_n = \left(\frac{V_\mathrm{ret}}{V_\mathrm{ret} + V_\mathrm{add}}\right)^n c_0 \qquad\text{or}\qquad c_n = \left(\frac{1}{1 + \frac{V_\mathrm{add}}{V_\mathrm{ret}}}\right)^n c_0 $$ and, assuming we have a total volume of washing liquid of $V_\mathrm{tot} = nV_\mathrm{add}$, the latter formula becomes $$c_n = \left(\frac{1}{1 + \frac{V_\mathrm{tot}}{V_\mathrm{ret}}\frac{1}{n}}\right)^n c_0$$ Now, let's assume we have a moderate amount of impurities (e.g. $c_0 =\pu{0.1 mol L-1}$) that we are trying to get rid of by washing with differently sized portions of the same total volume of the same solvent (say, $V_\mathrm{tot} = \pu{100 mL}$).
However, we use different filters, for instance: a large one (retains $1/10$ of the washing liquid, or $V_\mathrm{ret} = \pu{10 mL}$); a middle one (retains $1/20$ of the washing liquid, or $V_\mathrm{ret} = \pu{5 mL}$); a small one (retains $1/100$ of the washing liquid, or $V_\mathrm{ret} = \pu{1 mL}$), and, of course, different portions of the liquid (various $n$). Lets plot the concentration of the impurity for each case as a function of washing frequency: First and foremost, it's obvious that it is always better to wash with small portions many times, rather than two or three times with large quantities of washing liquid. Second, the smaller the filter, the less impurity will be left after the same number $n$ of washing acts. Now, can we get rid of the impurity once and for all by washing with an extremely large number of small portions? In fact, lets use an infinite number and take the following limit $$\lim_{n\to\infty}c_n = \lim_{n\to\infty}{\left(\frac{1}{1 + \frac{V_\mathrm{tot}}{V_\mathrm{ret}}\frac{1}{n}}\right)^n c_0},$$ which is, in fact, one of the notable special limits, sometimes called the second remarkable limit: $$\lim_{x\to\infty} \left(\frac{1}{1+\frac{a}{x}}\right)^x = \frac{1}{e^a}$$ so that in our case we can conclude it's impossible to completely get rid of the impurity only by washing: $$\lim_{n\to\infty}{\left(\frac{1}{1 + \frac{V_\mathrm{tot}}{V_\mathrm{ret}}\frac{1}{n}}\right)^n c_0} = c_0\exp{\left(-\frac{V_\mathrm{tot}}{V_\mathrm{ret}}\right)} \neq 0$$ Conclusions It is impossible to completely remove all impurities by washing the crystals/precipitate on the filter even by washing it infinite number of times. It is more efficient to wash with small amounts of liquid many times than only a couple with large ones. Use filter of the right size: not too big to retain the traces of impurities, but big enough to maintain efficient liquid transport through its surface. When possible, use pleated filter paper. 
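The formulas above are easy to check numerically. A quick sketch using the plot's example numbers ($c_0 = 0.1$ mol/L, $V_\mathrm{tot} = 100$ mL, and the large filter with $V_\mathrm{ret} = 10$ mL):

```python
import math

c0 = 0.1       # mol/L, initial impurity concentration
V_tot = 100.0  # mL, total volume of washing liquid
V_ret = 10.0   # mL retained by the (large) filter

def c_after(n):
    """Impurity concentration left after n washes of V_tot/n each."""
    return c0 * (1.0 / (1.0 + (V_tot / V_ret) / n)) ** n

print(c_after(1))    # one big wash: ~9.1e-3 mol/L
print(c_after(10))   # ten small washes: ~9.8e-5 mol/L -- far cleaner
print(c0 * math.exp(-V_tot / V_ret))  # limit of infinitely many washes: ~4.5e-6
```

The three printed values reproduce the two conclusions: splitting the same total volume into more portions always helps, yet even infinitely many washes leave a nonzero residue.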
Appendix The figure has been plotted with pgfplots. Feel free to play around with it or reuse it for your lab manual. Source code: \documentclass{standalone} \usepackage{pgfplots} \usepackage{siunitx} \begin{document} \begin{tikzpicture} \begin{axis}[ title={Effectiveness of washing methods ($c_0 = \SI{0.1}{\mole\per\liter}$)}, xmin=0, xmax=20, ymin=0, ymax=0.001, xlabel = $n$, ylabel = {$c_n$/\si{\mole\per\liter}}, minor tick num=4, grid, legend cell align={left}, ] \addplot [ domain=1:20, samples=200, solid, thick, ] {0.1*(1/(1 + 10/x))^x}; \addlegendentry{$V_\mathrm{tot}:V_\mathrm{ret} = 1:0.10$} \addplot [ domain=1:20, samples=200, dashed, thick, ] {0.1*(1/(1 + 20/x))^x}; \addlegendentry{$V_\mathrm{tot}:V_\mathrm{ret} = 1:0.05$} \addplot [ domain=1:20, samples=200, dotted, thick, ] {0.1*(1/(1 + 100/x))^x}; \addlegendentry{$V_\mathrm{tot}:V_\mathrm{ret} = 1:0.01$} \end{axis} \end{tikzpicture} \end{document}
{ "domain": "chemistry.stackexchange", "id": 11215, "tags": "experimental-chemistry, teaching-lab, filtering" }
SVD vs matched filter
Question: Reading about singular value decomposition (SVD) in the context of signal processing applications, one can separate the signal from the noise into orthogonal subspaces. On the surface this sounds like using the decomposed output of the SVD is more optimal than matched filter (or correlation) for detection, demodulation, timing estimation, etc. Is this true or is there more nuance to this problem? Answer: That's not true, it's not better. The thing is: the matched filter just implements the projection in the signal vector space, onto the signal vector itself (or a multiple thereof). (You'll find correlation is just an inner product in that space.) The line through that vector is the signal subspace, the plane to which that vector is normal is the noise space. Matched filtering is a method of linear algebra, and not different than using the SVD to find the subspaces. It's just that for matched filtering, the signal space is known.
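A tiny numeric illustration of the projection view of matched filtering: with a known signal vector, the matched filter output is just the inner product with (a scaled copy of) that vector, and the residual it discards lies entirely in the orthogonal "noise" subspace. The numbers are made up:

```python
# Known template (signal vector) -- the 1-D "signal subspace" is span{s}.
s = [1.0, 2.0, 2.0]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

noise = [0.5, -0.5, 0.5]
x = [3.0 * si + ni for si, ni in zip(s, noise)]   # received: 3*s plus noise

alpha = dot(s, x) / dot(s, s)          # matched-filter / correlation output
proj = [alpha * si for si in s]        # component in the signal subspace
resid = [xi - pi for xi, pi in zip(x, proj)]      # component in the noise space

print(round(alpha, 3))                 # 3.056 -- close to the true amplitude 3
print(abs(dot(s, resid)) < 1e-9)       # True -- residual is orthogonal to s
```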
{ "domain": "dsp.stackexchange", "id": 9172, "tags": "signal-detection, cross-correlation, matched-filter, eigendecomposition, svd" }
JavaScript shortcut to create elements
Question: I've created a personal API for creating elements for an HTML page. I would like feedback on efficiency, particularly in creating buttons and labels. Any and all feedback is appreciated and considered tag.js /* * Shortcuts for creating different tags */ function createLabel(text) { let label_tag = document.createElement('label'); let label_tag_text = document.createTextNode(text); label_tag.appendChild(label_tag_text); return label_tag; } function createButton(type) { /* need to manually add .onclick when creating button */ let button_tag = document.createElement('button'); button_tag.type = type; return button_tag; } function createInput(type, id) { let input_tag = document.createElement('input'); input_tag.type = type; input_tag.id = id; return input_tag; } function createDiv(id) { let div_tag = document.createElement('div'); div_tag.id = id; return div_tag; } function createTag(tag) { /* * Used for creating basic tags (<p>, <br>, <hr>, etc) * Any tag that doesn't often use an ID, TYPE, NAME, CLASS, etc */ return document.createElement(tag); } Answer: You have a lot of repeated code. If you follow the same design you will end up with a huge list of create???() functions
{ "domain": "codereview.stackexchange", "id": 33648, "tags": "javascript, dom" }
How is Gyro Drift dealt with?
Question: A quick question. My own robot's got a gyro and I'm going to set up it up with a Kinect. The gyro is reasonably accurate, but like all these sensors it will drift over time. How is that delt with re: mapping? Many Thanks Mark Originally posted by MarkyMark2012 on ROS Answers with karma: 1834 on 2012-10-25 Post score: 1 Answer: There is no way to eliminate drift, but there are some things you can always do. Dont know what kind of gyro you are using, but for raw sensor you can: compensate with calibration, collect some amount of samples while gyro is as stady as possible and avarage of measurment use as calibration value (it will change with temperature and/or voltage, depending on the sensor) use magnetometer for Z axis and/or accelerometers for XY, but it makes it INS and gets more complicated (and more expensive) if robot is not moving, you can detect no signals from wheels at all. That makes odometry way more reliable than in motion, so you can disregard gyro readings. It may improve Z accurqacy if robot is in stop for a long time. It will mess things up if you roatate robot without moving the wheels though. SLAM algoritms may help, if you plan to use Kinect, thats the way. Read about rgbdslam and Kinect, materials on it are all over the web (including ros wiki and ros answers) Originally posted by kszonek with karma: 459 on 2012-10-26 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by MarkyMark2012 on 2012-10-26: Yep the gyro auto calibrates at start up, drift is inevitable. I might marry it with a compass and integrate the two readings over time
{ "domain": "robotics.stackexchange", "id": 11519, "tags": "ros, navigation, gyro, gmapping" }
Is the minimum weight perfect matching decoder optimal?
Question: The toric code and other popular codes can be decoded using minimum weight perfect matching. Is this an optimal decoder? Here by optimal, I mean it gives the best logical error rate vs physical error rate performance in a depolarizing channel. "Threshold" is often used to characterize toric codes; does that assume optimal decoding or a particular type? Answer: Is [minimum weight perfect matching] an optimal decoder? No, it's not optimal. For example, it uses the weight of the shortest path between two detection events as an approximation for the contributions of all topologically equivalent paths. An optimal decoder would exactly compute the contributions of all possible errors consistent with the symptoms, instead of focusing on the most likely one. "Threshold" is often used to characterize toric codes; does that assume optimal decoding or a particular type? Thresholds are always relative to a decoder. And ideally that decoder is something that runs in polynomial time, instead of a hypothetical optimal decoder, which likely takes exponential time.
{ "domain": "quantumcomputing.stackexchange", "id": 3503, "tags": "error-correction, stabilizer-code, toric-code" }
Why does a chemical have more than one oxidation number?
Question: For group I metals, it is understood that all the elements in the group have an oxidation number of +1. As far as I know, the oxidation number is how many electrons the element loses/gains. Hence my question is: why does an element like iron, manganese or even oxygen have more than one oxidation state, unlike elements like magnesium or potassium? Edit: If I have stated any terms incorrectly, please let me know, because I'd want to know where I went wrong. Answer: As you said, the oxidation number (oxidation state) is a number assigned to an atom, ion or compound which represents the electrons gained or lost by that atom, ion or compound. Some elements have more than one oxidation number because they act differently when they combine with different elements, e.g. oxygen (which acts differently when combined in a peroxide or a superoxide). Some elements have more than one oxidation number because they can gain as well as lose electrons, so they have negative as well as positive oxidation numbers. Almost all the elements have more than one oxidation state, including magnesium and potassium. You can see the list of oxidation states of all the elements here
{ "domain": "chemistry.stackexchange", "id": 1912, "tags": "redox, oxidation-state" }
Amortized time for a counter with the operations INCREMENT and DECREMENT
Question: Consider a binary counter with the operations INCREMENT and DECREMENT. I need to show that you can't implement this kind of counter with constant amortized time per operation. Hence, I need to show that there's a series of $N$ operations with total time $\omega(N)$. My Try: Let's assume we made $2^k-1$ INCREMENT operations. Hence, our counter is a sequence with $k$ 1's. Now, let's consider a sequence of $N$ operations alternating between INCREMENT and DECREMENT (INC,DEC,INC,DEC,...), so that each operation crosses the $2^k$ boundary and must take $\Theta(k)$ time. Somehow, I need to show that the total time is $\omega(N)$. How? Answer: Your solution is on track. As you comment, if you increment a counter $2^k-1$ times and then do $m$ increment/decrement operations, in total you must have modified at least $km$ array positions (this will serve as our lower bound on the running time). In your case $N = 2^k-1 + m$. If you choose, for example, $m = 2^k+1$, then $N = 2^{k+1}$ while the lower bound is $km \geq (\log_2 N - 1) \cdot (N/2) = \Omega(N\log N)$. This shows that the amortized running time is $\Omega(\log N)$, which is certainly superconstant. We can easily get a matching (non-amortized) upper bound. If we only perform $N$ operations, then the value of the counter is in the range $[-N,N]$, and so takes $O(\log N)$ bits to represent. It is not difficult to implement increment and decrement so that they take time linear in the length of the representation, so $O(\log N)$ in this case.
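To make the lower-bound argument concrete, here is a small numerical sketch (not from the answer; the counter is stored least-significant-bit first, and the alternation starts with an INCREMENT so that every operation crosses the 0111...1 / 1000...0 boundary). It confirms that each of the $m$ alternating operations flips exactly $k+1$ bits:

```python
# Sketch (not from the answer): count bit flips for the adversarial sequence.
# Counter bits are stored least-significant-bit first.

def increment(bits):
    """Flip trailing 1s to 0 and the first 0 to 1; return bits touched."""
    cost = 0
    for i in range(len(bits)):
        cost += 1
        if bits[i] == 0:
            bits[i] = 1
            break
        bits[i] = 0
    return cost

def decrement(bits):
    """Mirror image: flip trailing 0s to 1 and the first 1 to 0."""
    cost = 0
    for i in range(len(bits)):
        cost += 1
        if bits[i] == 1:
            bits[i] = 0
            break
        bits[i] = 1
    return cost

k = 10
bits = [0] * (k + 1)            # one spare bit so INCREMENT never overflows
for _ in range(2 ** k - 1):     # counter now holds k ones: 0111...1
    increment(bits)

m = 2 ** k
total = 0
for j in range(m):              # INC, DEC, INC, DEC, ... at the boundary
    total += increment(bits) if j % 2 == 0 else decrement(bits)

assert total == m * (k + 1)     # every operation costs k+1 bit flips
```

With $k = 10$ this sequence performs $m = 2^k$ operations at a cost of $k+1$ bit flips each, matching the $\Theta(\log N)$ amortized bound derived in the answer.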
{ "domain": "cs.stackexchange", "id": 3645, "tags": "algorithm-analysis, runtime-analysis, amortized-analysis" }
Enabling or disabling input fields
Question: In my Rails app, I have a form. The form has radio buttons for entity type and has a text field for SSN and another for EIN. Based on the radio button selected, one of the text fields will be disabled and the other enabled (disabled = false). If the user selects the radio button for entity_type_i, it should disable the EIN field, and enable SSN. For the other four radio buttons, it should do the opposite - disable SSN and enable EIN. Since I don't know JavaScript, I ended up copying and pasting the same code and just changing the name of the id (i.e. #user_entity_type_c). Basically #user_entity_type_c, #user_entity_type_p, #user_entity_type_l and #user_entity_type_o should behave the same. Without knowing any better, the code seems like it's redundant. Is there a more efficient way to write the JavaScript? Radio buttons for the form: <span class="radio inline"> <label for="user_entity_type_i"> <input class="radio_buttons required" required="required" aria-required="true" type="radio" value="I" name="user[entity_type]" id="user_entity_type_i" /> Individual / Sole Proprietor </label> </span> <span class="radio inline"> <label for="user_entity_type_c"> <input class="radio_buttons required" required="required" aria-required="true" type="radio" value="C" name="user[entity_type]" id="user_entity_type_c" /> Corporation </label> </span> <span class="radio inline"> <label for="user_entity_type_p"> <input class="radio_buttons required" required="required" aria-required="true" type="radio" value="P" name="user[entity_type]" id="user_entity_type_p" /> Partnership </label> </span> <span class="radio inline"> <label for="user_entity_type_l"> <input class="radio_buttons required" required="required" aria-required="true" type="radio" value="L" name="user[entity_type]" id="user_entity_type_l" /> Limited Liability Company </label> </span> <span class="radio inline"> <label for="user_entity_type_o"> <input class="radio_buttons required" required="required" 
aria-required="true" type="radio" value="O" name="user[entity_type]" id="user_entity_type_o" /> Other </label> </span> The JavaScript: <script> $(function(){ $("#user_entity_type_i").change(function(){ if($("#user_entity_type_i").prop("checked") == true){ $("#user_ein").prop("disabled", true); $("#user_ssn").prop("disabled", false); }else{ $("#user_ein").prop("disabled", false); } }); $("#user_entity_type_c").change(function(){ if($("#user_entity_type_c").prop("checked") == true){ $("#user_ssn").prop("disabled", true); $("#user_ein").prop("disabled", false); }else{ $("#user_ssn").prop("disabled", false); } }); $("#user_entity_type_l").change(function(){ if($("#user_entity_type_l").prop("checked") == true){ $("#user_ssn").prop("disabled", true); $("#user_ein").prop("disabled", false); }else{ $("#user_ssn").prop("disabled", false); } }); $("#user_entity_type_p").change(function(){ if($("#user_entity_type_p").prop("checked") == true){ $("#user_ssn").prop("disabled", true); $("#user_ein").prop("disabled", false); }else{ $("#user_ssn").prop("disabled", false); } }); $("#user_entity_type_o").change(function(){ if($("#user_entity_type_o").prop("checked") == true){ $("#user_ssn").prop("disabled", true); $("#user_ein").prop("disabled", false); }else{ $("#user_ssn").prop("disabled", false); } }); }); Answer: From your code it looks like you're changing one of "enabled" options only in one of the five cases. You can do that with a loop, using a class selector. Also, it may be just me, but I don't think you should use value in an input box / radio button to store something different than the default value that you want that item to have. I suggest changing it to something like value_type. 
So, what you can do is add another class to your input elements, select by that and do what you need, something like: $(function() { $('input.entity-option').change(function() { if (this.getAttribute('value_type') === "I") { $("#user_ssn").prop("disabled", false); $("#user_ein").prop("disabled", true); } else { $("#user_ssn").prop("disabled", true); $("#user_ein").prop("disabled", false); } }); }); Note that a custom attribute like value_type is not reflected as a DOM property, so it has to be read with getAttribute. And in your HTML you need something like: <span class="radio inline"> <label for="user_entity_type_i"> <input class="radio_buttons required entity-option" required="required" aria-required="true" type="radio" value_type="I" name="user[entity_type]" id="user_entity_type_i" /> Individual / Sole Proprietor </label> </span> <span class="radio inline"> <label for="user_entity_type_c"> <input class="radio_buttons required entity-option" required="required" aria-required="true" type="radio" value_type="C" name="user[entity_type]" id="user_entity_type_c" /> Corporation </label> </span> <span class="radio inline"> <label for="user_entity_type_p"> <input class="radio_buttons required entity-option" required="required" aria-required="true" type="radio" value_type="P" name="user[entity_type]" id="user_entity_type_p" /> Partnership </label> </span> <span class="radio inline"> <label for="user_entity_type_l"> <input class="radio_buttons required entity-option" required="required" aria-required="true" type="radio" value_type="L" name="user[entity_type]" id="user_entity_type_l" /> Limited Liability Company </label> </span> <span class="radio inline"> <label for="user_entity_type_o"> <input class="radio_buttons required entity-option" required="required" aria-required="true" type="radio" value_type="O" name="user[entity_type]" id="user_entity_type_o" /> Other </label> </span>
{ "domain": "codereview.stackexchange", "id": 24067, "tags": "javascript, beginner, jquery, html, form" }
How does a boomerang behave in microgravity?
Question: Let's say I have perfected a boomerang throwing machine - in Earth surface conditions the boomerang is always thrown the same way, and always returns to the throwing arm. And let's say I've sponsored the building of a kilometre radius 1 atmosphere gym for the ISS. I then mount the machine on the wall of this room and let rip. What sort of trajectory would the boomerang follow? It seems equally possible that since the lift would not be counteracting any gravity it would follow some sort of upward (relative to the original horizontal plane) spiral or alternatively that it would start rotating on all three axes. To be clear, by "weightless" I meant in microgravity, not having zero mass. I'm not asking about magical materials here. Answer: Apparently, a sufficiently sturdy boomerang does return to its thrower in microgravity, as shown in the NASA video "International Toys in Space: Boomerang" in 2013: https://www.youtube.com/watch?v=6Vj-FoJky6w
{ "domain": "physics.stackexchange", "id": 47818, "tags": "newtonian-mechanics, newtonian-gravity, aerodynamics" }
roslint python errors to stdout?
Question: I'm running roslint within travis like this: catkin build $(catkin list --depends-on roslint -u) --no-deps --catkin-make-args roslint When there is a lint error in C++ it is visible in stdout and is easy to read, but python errors are in a log file like this: Errors << foo_pkg:make /home/lucasw/catkin_ws/logs/foo_pkg/build.make.472.log Adding --verbose works but there is a ton of other output also obscuring the actual error- is there a way to just get the python lint error? Originally posted by lucasw on ROS Answers with karma: 8729 on 2019-10-17 Post score: 1 Answer: You can do so by making pep8 redirect stdout to stderr with the following in your CMakeLists.txt set(ROSLINT_PYTHON_OPTS "1>&2") roslint_python() Originally posted by Rufus with karma: 1083 on 2021-03-01 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by lucasw on 2021-03-01: That works, but it seems like there ought to be a standard option to do that, I looked at the roslint pycodestyle.py source and tried out some options but they didn't have any effect when deliberately introducing pep8 violations: set(ROSLINT_PYTHON_OPTS "--show-source") set(ROSLINT_PYTHON_OPTS "--show-pep8") set(ROSLINT_PYTHON_OPTS "--verbose") or in combination set(ROSLINT_PYTHON_OPTS "--show-source" "--show-pep8") Comment by Rufus on 2021-03-01: I think the key is to have outputs print to stderr which is what cpplint does. However, there doesn't seem to be any option to have pep8 output to stderr. Comment by lucasw on 2021-09-08: Example of disabling all the default ignores (E121,E123,E126,E226,E24,E704,W503,W504): set(ROSLINT_PYTHON_OPTS "--ignore=''" "1>&2")
{ "domain": "robotics.stackexchange", "id": 33906, "tags": "ros, ros-melodic" }
Femtosecond laser: Why FROG technique has 3 frequency-doubled outputs?
Question: If the phase matching condition is perfectly satisfied and the two pulse beams arrive at the crystal with the same phase, then according to the superposition principle, the composite result should only be the central pulse with frequency $2\omega$. Why do there still exist two other frequency-doubled pulses in the two original directions? Thank you very much! Answer: Here's a simple picture: you are looking at a $\chi^{(2)}$ process which involves the square of the electric field of both beam 1 and beam 2. So the contributions to the measured sum frequency generation are roughly given by: $$(\mathbf{E}_1(t)+\mathbf{E}_2(t))^2=\vert \mathbf{E}_1(t)\vert^2+\vert \mathbf{E}_2(t)\vert^2 + 2\, \mathbf{E}_1(t)\cdot \mathbf{E}_2(t)$$ I'm leaving out details about the dielectric function of the material, which changes the direction of the E-fields, but you can add those in later. In any case, there are three contributions. The first is SFG of pulse 1 with itself (i.e. your top beam), the second is SFG of the second pulse with itself (bottom beam), and the final one is from pulse 1 and pulse 2 together (middle beam). Notice that in the very best case the third term will be equal to the sum of the other two, and in the worst case it is zero. This is why, generally, the middle beam in your diagram is weaker than the other two beams, which don't require sensitive phase matching conditions.
{ "domain": "physics.stackexchange", "id": 63310, "tags": "laser" }
Rotation About Axis of Diatomic Molecule
Question: While counting the degrees of freedom of a diatomic molecule, we neglect the rotation about the axis of the molecule, stating the reason that its energy is negligible. I agree with this reasoning, and have no questions about it. But if we look at the derivation of the equipartition theorem (Here), then we see that any independent energy which is a quadratic function of momentum or position should have an energy of $\frac{1}{2}kT$. But the above derivation never assumes that the energy should not be negligible, or anything else of that sort. So according to the derivation, even the rotation about the axis of the molecule should have an energy of $\frac{1}{2}kT$. So, why don't we count the rotation about the axis as a degree of freedom? Answer: The real reason that you don't count rotation around the molecule's narrow axis is that that degree of freedom is frozen out. The formula for rotational kinetic energy that is relevant here is given by: $$K = \frac{1}{2} \vec{L} \cdot I^{-1} \vec{L},$$ where $I$ is the moment of inertia tensor. If we choose our coordinates to diagonalize $I$ for a molecule we get (ignoring numerical factors fixed by the geometry): $$I \sim m\left[\begin{array}{ccc} a_0^2 & 0 & 0 \\ 0 & a_0^2 & 0 \\ 0 & 0 & r_{\mathrm{nuc}}^2 \end{array}\right],$$ where $a_0$ is the Bohr radius ($\approx 5\times10^{-11}\operatorname{m}$), $r_{\mathrm{nuc}}$ is the radius of the nucleus ($\approx 1\times 10^{-15}\operatorname{m}$), and $m$ is the mass of the atoms (basically the nuclei).
Because angular momentum is quantized in packets of size $\hbar \approx 10^{-34} \operatorname{m}^2\operatorname{kg} \operatorname{s}^{-1}$ it takes about $K \sim \frac{\hbar^2}{m a_0^2} \approx 7\operatorname{meV}$ (that's milli-electron-Volts, $T\sim 90 \operatorname{K}$) to set an $\mathrm{H}_2$ molecule spinning about its long axes, but about $K \sim \frac{\hbar^2}{m r_{\mathrm{nuc}}^2} \approx 7\operatorname{MeV}$ of energy (mega-electron-Volts, $T\sim 8 \times 10^{10}\operatorname{K}$) to set it spinning about its narrow axis. Needless to say, that much thermal energy would quickly dissociate the molecule and ionize the atoms. Bottom line, it takes about $10^9$ times as much energy to get a diatomic molecule spinning along its narrow axis as about the long axes, making that degree of freedom very effectively frozen out. It is correct that the equipartition theorem would, classically, put the lie to the claim that the energy stored in rotations around the narrow axis contains negligible energy - classically it would have its $kT/2$, too. This is one of those times that quantum mechanics is unavoidable for explaining a phenomenon.
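These two energy scales are easy to reproduce numerically. The sketch below (not from the answer; it uses the bare estimate $\hbar^2/(m r^2)$ with the proton mass and no geometric or reduced-mass factors, so only the orders of magnitude are meaningful) recovers the meV-versus-MeV gap:

```python
# Order-of-magnitude check (not from the answer) of the two rotational
# energy scales for H2, using hbar^2/(m r^2) with no geometric prefactors.
HBAR  = 1.054571817e-34   # J s
M_P   = 1.67262192e-27    # kg, proton mass ~ mass of a hydrogen atom
A0    = 5.29177e-11       # m, Bohr radius
R_NUC = 1.0e-15           # m, nuclear radius
EV    = 1.602176634e-19   # J per eV

def rotational_scale_eV(m, r):
    """Energy to reach one quantum of angular momentum, hbar^2/(m r^2), in eV."""
    return HBAR ** 2 / (m * r ** 2) / EV

long_axis   = rotational_scale_eV(M_P, A0)     # spin about a long axis
narrow_axis = rotational_scale_eV(M_P, R_NUC)  # spin about the internuclear axis

assert 1e-3 < long_axis < 1e-1        # tens of meV: thermally accessible
assert 1e6 < narrow_axis < 1e8        # tens of MeV: frozen out
assert narrow_axis > 1e8 * long_axis  # ratio ~ (a0 / r_nuc)^2
```

The ratio of the two scales is $(a_0/r_{\mathrm{nuc}})^2 \approx 10^9$, which is why the narrow-axis rotation is frozen out at any realistic temperature.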
{ "domain": "physics.stackexchange", "id": 42739, "tags": "thermodynamics, energy, statistical-mechanics, degrees-of-freedom, gas" }
Why is deoxyadenosine monophosphate called "dAMP" and not "DAMP"?
Question: Why is deoxyadenosine monophosphate called "dAMP" and not "DAMP"? After all, deoxyribonucleic acid is called "DNA", and not "dNA" (nor "dRNA", thanks for that insight, @Karsten Theis), and so it looks like "deoxy" or "deoxyribo" is "D" and not "d" in the case of DNA. It seems that "dAMP" is generated using a different rule. I wonder why? "DAMP" has the advantage of being easily recalled, because it is an English word. Answer: While the current answers say what d stands for, they don't really explain why D wasn't chosen instead. We could, for example, equally say that "D shows that it has deoxyribose..." I offer this without any evidence (and without knowing whether such evidence exists). If you are interested in evidence, you will likely have to find the original article / work in which these were named, and hope that the author included some rationale. AMP, adenosine monophosphate, is already an important biomolecule. Prefixing it with a lowercase d in dAMP suggests that it is the same as AMP, but just modified by a little bit (which is true; it just has one oxygen less). This is perhaps partly stylistic, but it also makes it easy to recognise visually as the small letter does not stand out as much. There are also other parallel examples, such as cyclic AMP (written as cAMP).
{ "domain": "chemistry.stackexchange", "id": 16385, "tags": "nomenclature, dna-rna" }
Why is superconductance not observed at regular ambient temperatures?
Question: I have done some reading on superconductance and understand that the reason it happens is due to the formation of Cooper pairs resulting from the attractive momentary charge concentration resulting from a phonon. (As a side-question, is that very different from London dispersion force or a temporary dipole in essence?) What I don't understand is why an electron would cause a phonon to be produced resulting in a Cooper pair at 4 Kelvin but that same electron wouldn't at 273 K. Is this because Cooper pairs have relatively weak bonds? Is it just really energetically unfavorable to have these two electrons hanging out in a net-zero spin state? Does nature abhor electrons violating the Pauli exclusion principle? Answer: So upon looking into this further, it looks like Cooper pair interactions are pretty weak, on the order of 10^-3 eV. As such, I'm guessing the reason that we see superconductivity at low temps and not at higher temps (with a few "hot superconductors" making an exception) is because at higher temperatures there is enough energy in the system that Cooper pair interactions would last only momentarily, such that you wouldn't have all of the electrons flowing in the conductor taking on a Cooper pair formation simultaneously to allow resistance-free conductance. I feel that's a sufficient answer and this question can be closed.
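A quick numerical sanity check of this argument (not from the answer; it takes $10^{-3}$ eV as a representative pair binding energy, which is an assumption) compares the thermal energy scale $k_B T$ at the two temperatures:

```python
# Compare k_B*T at 4 K and 273 K with a ~1 meV Cooper-pair binding energy.
# The 1e-3 eV figure is the order of magnitude quoted in the answer.
K_B = 8.617333262e-5    # Boltzmann constant in eV/K

pair_binding = 1e-3     # eV, assumed representative pairing scale

thermal_4K = K_B * 4.0       # ~0.34 meV: below the binding energy
thermal_273K = K_B * 273.0   # ~23.5 meV: far above it

assert thermal_4K < pair_binding < thermal_273K
assert thermal_273K / pair_binding > 10
```

At 4 K the thermal energy sits below the pairing scale, while at 273 K it exceeds it by more than an order of magnitude, consistent with the pairs being broken up at ambient temperature.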
{ "domain": "physics.stackexchange", "id": 29911, "tags": "superconductivity, phonons" }
The optimized way to find the top k leading candidates from an unsorted hash map in Python
Question: I need to write a method that accepts a timestamp and an array of votes, and returns the k leading candidates at that timestamp. I came up with the following solution. Here is the input to the method: votes = [{'candidate':'a', 'timestamp':2},{'candidate':'c', 'timestamp': 5},{'candidate':'c', 'timestamp': 12}] timestamp = 5 k = 5 And the method to solve the problem: def leading_candidates(votes, timestamp,k): candidates = {} leading_candidates = [] for vote in votes: if vote['timestamp'] <= timestamp: if vote['candidate'] not in candidates: candidates[vote['candidate']] = 1 else: candidates[vote['candidate']] += 1 sorted_votes = sorted(candidates.values(), reverse=True)[:k] for candidate in candidates: if candidates[candidate] in sorted_votes: leading_candidates.append(candidate) return leading_candidates print(leading_candidates(votes, timestamp, 2)) As you can see, this solution has a time complexity of \$O(k\,n)\$, where the factor k comes from looking up each candidate's count in the sorted list. In the worst case, it can be \$O(n^2)\$, and because of the sorting it is at least \$O(n\,\log n)\$. Is there any way we can make it work in \$O(n)\$? Answer: So, you want to count something and afterwards get the top k? That sounds like a job for collections.Counter! from collections import Counter def leading_candidates(votes, timestamp, k): vote_counts = Counter(vote['candidate'] for vote in votes if vote['timestamp'] <= timestamp) return [candidate[0] for candidate in vote_counts.most_common(k)] if __name__ == "__main__": print(leading_candidates(votes, timestamp, 2)) This way you don't need to special-case a candidate not yet having received a vote (something you could have also done with a collections.defaultdict(int)). And it is \$\mathcal{O}(n)\$. Also note that if k is large, the line if candidates[candidate] in sorted_votes will become slow, as it is a linear scan.
Also, you can iterate over the keys and values of a dictionary at the same time with candidates.items(), so you don't need to do candidates[candidate]. Python has an official style-guide, PEP8, which recommends using spaces after commas, which you forgot to do before k in the function signature. You should always put your code behind an if __name__ == "__main__": guard to allow importing from the script without running it.
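As a side note, the same top-k selection that most_common(k) performs internally (in CPython it delegates to heapq.nlargest) can be written directly over a plain dict of counts in O(n log k); this is a sketch, not part of the original answer:

```python
# Sketch: top-k selection with heapq.nlargest over a plain dict of counts,
# using the same hypothetical input shape as the question.
import heapq

def leading_candidates(votes, timestamp, k):
    counts = {}
    for vote in votes:
        if vote['timestamp'] <= timestamp:
            counts[vote['candidate']] = counts.get(vote['candidate'], 0) + 1
    # nlargest keeps a heap of size k while scanning the distinct candidates
    return heapq.nlargest(k, counts, key=counts.get)

votes = [{'candidate': 'a', 'timestamp': 2},
         {'candidate': 'c', 'timestamp': 5},
         {'candidate': 'c', 'timestamp': 12}]
assert leading_candidates(votes, 12, 2) == ['c', 'a']
```

This keeps a heap of at most k entries, so for small k it behaves essentially linearly in the number of distinct candidates.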
{ "domain": "codereview.stackexchange", "id": 42326, "tags": "python, performance, python-3.x, complexity" }
What is the interpretation of the qubit labels when drawing a qiskit circuit with 'with_layout=True'?
Question: When using with_layout=True as an argument to qc.draw(), where qc is a QuantumCircuit object, the resulting text includes the mapping from the qubits in the circuit to the physical qubits, like this: ancilla_0 -> 0 ---- q_3 -> 1 ---- q_2 -> 2 ---- ... q_0 -> 3 ---- meas: 3/==== Which one (e.g. q_3 or 1) is the qubit in the circuit, and which one is the physical qubit? Note: I assume the answer can be found at wire_names() of TextDrawing (qiskit/visualization/text.py lines 782 to 789 in version 0.36.2), but I'm not sure if I understand it correctly. for reg in self.qubits: register = self.bit_locations[reg]["register"] index = self.bit_locations[reg]["index"] qubit_label = get_bit_label("text", register, index, qubit=True, layout=self.layout) qubit_label += ": " if self.layout is None else " " qubit_labels.append(qubit_label + initial_qubit_value) Answer: $a \rightarrow b$ means the virtual qubit $a$ is mapped to the physical qubit $b$. You can easily check that by visualizing the layout of the transpiled circuit using the plot_circuit_layout function. Draw the virtual view: plot_circuit_layout(transpiled_circ, backend = backend, view = 'virtual') Draw the physical view: plot_circuit_layout(transpiled_circ, backend = backend, view = 'physical') Now, compare both views with the layout information in your circuit plot.
{ "domain": "quantumcomputing.stackexchange", "id": 4265, "tags": "qiskit" }
For a given packing (hcp, fcc, bcc), which ions occupy the corners and which occupy the faces or centers?
Question: I recently came across the packing of atoms in unit cells. I wanted to know if there was a way to tell where the atoms in the unit cell are just by knowing the compound and its packing (hcp, fcc, etc.). For example, if given the information that lithium iodide packs in fcc, how would I know where the lithium ions are and where the iodide ions are? Which would occupy the corners, or will both share the corners and faces equally? Is there anything I could use to deduce where the atoms will be? Another example: if NaCl exists in fcc, where will the cations and anions be? Looking at its structure, the chloride anions are on the edge centers. Can such forms be deduced, or is memorizing the only way forward? Answer: The actual answer is that it doesn't matter. For many of the 1:1 solid-state structures, either the cations or the anions may be considered to be at the vertices (i.e. corners) of the unit cell. By symmetry, both representations are entirely equivalent. To see why this is the case, it is helpful to look at a 2D analogue first. The following could be considered to be a 2D version of the cesium chloride (CsCl) structure.* The blue and orange dots represent cations and anions respectively (or the other way round; it doesn't matter, the point is that they're different things). From the way I've drawn the unit cells (i.e. the black lines joining the dots), it appears that the blue dots should be at the vertices and the orange dot in the centre of the unit cell. It's obvious that this is a valid unit cell, as you can repeat it as many times as you wish to generate the full crystal structure. However, I could just as easily have drawn this: Again, this is a valid unit cell; but this time, the orange dots are at the vertices and the blue dot is in the centre. This shows that there is no difference between the two representations of the unit cell. Both of them can be repeated (and translated) to form exactly the same crystal structure.
That should already suggest to you the answer to the 3D case in your question. Let's look at the 3D CsCl structure. You can draw it out in 3D and try to convince yourself again using the argument above. However, I think it's easier to use the 2D depiction on the left, where the numbers indicate the positions of the atoms (lattice points) in the third dimension, as a fraction of the unit cell length in that dimension. So, 0 and 1 indicate that there is a blue atom at the front and at the back, whereas 0.5 indicates the orange atom right in the middle of the unit cell. What we need to do is to extend this unit cell a couple of times in each dimension. The first arrow shows us how to extend the unit cell in the third dimension, which has been collapsed: essentially, we need to add 1 to every number (which corresponds to adding a new unit cell behind the current one). The second arrow is just a tiling in the other two dimensions. I'm now going to redraw the unit cell: Now, all we need to do is to subtract 0.5 from every number (this is OK because it's just shifting the zero, i.e. moving our "point of view" backwards / forwards) and we get the same CsCl unit cell but with the atoms swapped round. A similar exercise may convince you of the NaCl / rock salt case.† Footnotes * That's similar to the body-centred cubic structure, but is not the same: in the bcc structure the atom in the middle is the same as the ones at the corners. Thanks to Karsten Theis for pointing that out. † Similarly, the NaCl structure is similar to the face-centred cubic, but fcc has the same atom at all the lattice points.
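The half-cell-shift argument can also be verified computationally. Here is a quick sketch (not from the answer; it builds a toy $N\times N\times N$ block of CsCl-type cells with a corner sublattice and a body-centre sublattice) showing that translating everything by half a cell in each direction exchanges the two species exactly:

```python
# Toy CsCl-type crystal: one species on the cell corners, the other on the
# body centres.  Shifting by (1/2, 1/2, 1/2) with periodic wrapping should
# map each sublattice onto the other.
from itertools import product

N = 4  # number of unit cells per direction in the toy crystal
corners = {(x, y, z) for x, y, z in product(range(N), repeat=3)}
centers = {(x + 0.5, y + 0.5, z + 0.5) for x, y, z in product(range(N), repeat=3)}

def shift(points, d=0.5):
    """Translate every lattice point by (d, d, d), wrapping into the N-cell box."""
    return {tuple((c + d) % N for c in p) for p in points}

# After the shift the two species have exactly exchanged positions,
# so either one may be drawn at the vertices of the unit cell.
assert shift(corners) == centers
assert shift(centers) == corners
```

Since the shifted corner sublattice coincides with the centre sublattice and vice versa, it is purely a matter of convention which ion is drawn at the vertices.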
{ "domain": "chemistry.stackexchange", "id": 16111, "tags": "crystal-structure, solid-state-chemistry" }
Dictionary populated on demand
Question: I work with an embedded target that contains a list of commands and registers that may change from one device to another. So I need to build my commands list from metadata received from that particular target. Unfortunately, the communication with the target is slow and the number of commands is high. So I cannot build the full command list at the beginning, but only when somebody requests it. Of course, the list of commands should not be modifiable (by mistake) by the user, which I assume is dumb. So I would like to prevent the instance from being easily modified in IPython. For example: device = Device() device.commands.<tab> # the user sees all the available commands device.commands.execute = 'oops' # I want to prevent this from happening device.commands['execute'] = 'oops' # Neither this. So I wrote this design pattern: import time # The two following functions are the ones I use to communicate # with my target. def get_item_names(): return ['item%d' % i for i in range(100000)] def fetch_item(item): # Time consuming function time.sleep(0.1) return item # The dictionary that is populated on demand. class Foo(dict): __locked = False def __init__(self): # Populate elements with None elements. # This is required to be able to see the available commands # in IPython with the auto-completion turned on. for item in get_item_names(): default = None self[item] = default # This is a very ugly workaround. With cProfiler, I can see # the addition of __setattr__ is very costly.
self.__locked = True # The item is fetched only on demand def __getitem__(self, item): if item in self and super(Foo, self).__getitem__(item) == None: fetched = fetch_item(item) super(Foo, self).__setitem__(item, fetched) return fetched else: return super(Foo, self).__getitem__(item) @property def __members__(self): return self.keys() def __getattr__(self, item): return self[item] def __setattr__(self, key, value): if self.__locked: print('Not possible') else: super(Foo, self).__setattr__(key, value) def __setitem__(self, key, value): if self.__locked: print('Not possible') else: super(Foo, self).__setitem__(key, value) Is that pattern good, and what can I do to improve the bad performance of the lock mechanism? I naively tried to add setattr(self, '__setattr__', newsetattr) to __init__ but it does not work. Answer: __missing__ I would have started to propose a simpler design using the __missing__ dunder so that, each time a user tries to access a command that has not been populated, the __missing__ method would be called and you could fetch the information: class Foo(dict): def __missing__(self, key): self[key] = item = fetch_item(key) return item And voilà. I wouldn't try to enforce barriers against user behaviour. After all, we're all consenting adults here. Playing it nice with the interactive shell This behaviour, however, leaves the user clueless about all the possible commands available at the tap of the TAB key. They are still able to call get_item_names if need be. But you could automatically build them using some metaclass logic. The class you want in the end could be: class Foo: _<some_name> = None @property def <some_name>(self): if self._<some_name> is None: self._<some_name> = fetch_item(<some_name>) return self._<some_name> def __getitem__(self, key): return getattr(self, key) with every <some_name> being returned by get_item_names. Read-only access to the attributes is ensured by the property.
So you need to define your base class as: class Foo(metaclass=CommandBuilder): def __getitem__(self, key): return getattr(self, key) And build your metaclass like: class CommandBuilder(type): def __new__(cls, name, bases, dct): for command_name in get_item_names(): _command_name = '_{}'.format(command_name) dct[_command_name] = None def getter(self, command_name=command_name, _command_name=_command_name): # default arguments bind the loop variables early command_value = getattr(self, _command_name) if command_value is None: command_value = fetch_item(command_name) setattr(self, _command_name, command_value) return command_value dct[command_name] = property(getter) return super(CommandBuilder, cls).__new__(cls, name, bases, dct) If you truly need to use a subclass of dict here, it wouldn't be much more difficult: class Foo(dict, metaclass=CommandBuilder): def __missing__(self, key): item = getattr(self, key) # will call the property super(Foo, self).__setitem__(key, item) return item def __setitem__(self, key, value): pass # or raise an exception of your own def __delitem__(self, key): pass # or raise an exception of your own Immutability of the attributes is still guaranteed by property.
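For reference, here is a minimal runnable version of the __missing__ approach (not from the answer verbatim; fetch_item is replaced by a cheap stub standing in for the slow device call), showing that each command is fetched exactly once:

```python
# Minimal sketch of the __missing__ caching pattern.  FETCHES records how
# many times the (stubbed) slow device call is actually made.
FETCHES = []

def fetch_item(key):            # stub for the real, slow target query
    FETCHES.append(key)
    return 'value-of-{}'.format(key)

class Commands(dict):
    """Fetch a command's value the first time it is looked up, then cache it."""
    def __missing__(self, key):
        self[key] = value = fetch_item(key)
        return value

cmds = Commands()
assert cmds['execute'] == 'value-of-execute'
assert cmds['execute'] == 'value-of-execute'  # second access hits the cache
assert FETCHES == ['execute']                 # the device was queried only once
```

Attribute-style access and the locking behaviour from the question can be layered on top; the caching itself needs nothing beyond __missing__.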
{ "domain": "codereview.stackexchange", "id": 22447, "tags": "python, performance, hash-map" }
Questions about Lorentz Matrices and Lorentz Metric
Question: (I use the abstract index notation convention in this post) In $\mathbb{R}^4$, denote the Lorentz Metric as $g_{\mu\nu}=$diag$(-1,1,1,1)$; then we can define the Lorentz Matrices to be all $4\times 4$ matrices $\Lambda^{\mu}_{\;\; \nu}$ such that $$ \Lambda^{a}_{\;\; \mu} g_{ab} \Lambda^b_{\;\; \nu}=g_{\mu\nu} $$ Now, my First Question is: say we get a Symmetric $4\times 4$ matrix $X_{\mu\nu}$ (i.e. $X_{\mu\nu}=X_{\nu\mu}$), such that for any Lorentz Matrix $\Lambda$, one has $\Lambda^{a}_{\;\; \mu} X_{ab} \Lambda^b_{\;\; \nu}=X_{\mu\nu}$. Then can we conclude that there exists some $\alpha\in \mathbb{R}$ with $X_{\mu\nu}=\alpha g_{\mu\nu}$? If the above conjecture is true, then my Second Question will be: say we get a Totally Symmetric tensor $T_{abcd}$ (i.e. $T_{abcd}$ is unchanged under any permutation of indices), such that for any Lorentz Matrix $\Lambda$, $$ \Lambda^{a}_{\;\;\mu}\Lambda^{b}_{\;\;\nu}\Lambda^{c}_{\;\;\sigma}\Lambda^{d}_{\;\;\rho} T_{abcd}=T_{\mu\nu\sigma\rho} $$ Then can we conclude that $T_{abcd}$ can be expressed as $$ T_{abcd}=\alpha(g_{ab}g_{cd}+g_{ac}g_{bd}+g_{ad}g_{bc}) $$ for some $\alpha\in \mathbb{R}$? (Notice the RHS above is already completely symmetric.) If all of the above are true, then my Third Question is: can we extend this result to a $2n$-order tensor $T_{a_1 a_2\cdots a_{2n-1}a_{2n}}$? What I know now is that since the scalar matrix $-I$ is a Lorentz Matrix, for any odd-order tensor $T$ with the above property, $T$ will be zero. And I guess these thoughts can help to illustrate the duality between the Lorentz Metric and the Lorentz Matrix. Answer: Yes, the first conjectured fact is clearly true. Symmetry is not even necessary. $\eta:= diag(-1,1,1,1)$ henceforth. PROPOSITION. Let us assume that $$\Lambda^t X \Lambda = X$$ for all $\Lambda \in SO(1,3)_+$ and some $X \in M(4,\mathbb{R})$. Then $$X = c\eta$$ for some $c\in \mathbb{R}$. PROOF.
It is not difficult to prove that the (orthochronous proper) Lorentz group is closed with respect to the transposition operation, so that $$\Lambda \eta \Lambda^t = \eta$$ if and only if $$\Lambda^t \eta \Lambda = \eta\:.$$ Therefore, from $$\Lambda^t X \Lambda = X$$ we have $$\eta \Lambda^t X \Lambda = \eta X $$ $$\Lambda \eta \Lambda^t X \Lambda = \Lambda \eta X$$ $$\eta X \Lambda = \Lambda \eta X$$ We conclude that $\eta X$ commutes with the fundamental representation of $SO(1,3)_+$. Since it is irreducible, we have that $$\eta X = c I \tag{1}$$ for some $c \in \mathbb{R}$. Multiplying both sides of (1) with $\eta$ we have $$X = c\eta\:.$$ QED Regarding your further conjectures, they are known facts (I do not remember the precise statement of your third conjecture), but the proofs are not so easy to find in the literature. In Goodman R., Wallach N.R.: Symmetry, Representations, and Invariants, Graduate Texts in Mathematics, vol. 255. Springer, New York (2009) and Weyl, H.: The Classical Groups: Their Invariants and Representations. Princeton University Press, Princeton (1997) you should find the proofs or ideas to prove those statements. In this paper of mine and I. Khavkine we used those facts. I referred my statement to $SO(1,3)_+$ instead of $O(1,3)$. One should pay attention to the use of $SO(1,3)_+$ vs $O(1,3)$. That is because $\epsilon_{abcd}$ is also invariant under the action of $SO(1,3)_+$. However it is not symmetric...
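As a quick numerical sanity check of the defining relation $\Lambda^t \eta \Lambda = \eta$ used throughout this proof, a dependency-free sketch (helper names are mine) for a boost composed with a rotation:

```python
# Check Λ^T η Λ = η for a boost along x composed with a rotation about z,
# using plain 4x4 list-of-list matrices.
import math

eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [[A[j][i] for j in range(4)] for i in range(4)]

chi, th = 0.7, 0.3  # arbitrary rapidity and rotation angle
boost = [[math.cosh(chi), math.sinh(chi), 0, 0],
         [math.sinh(chi), math.cosh(chi), 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
rot = [[1, 0, 0, 0],
       [0, math.cos(th), -math.sin(th), 0],
       [0, math.sin(th),  math.cos(th), 0],
       [0, 0, 0, 1]]

Lam = matmul(rot, boost)
lhs = matmul(transpose(Lam), matmul(eta, Lam))
assert all(abs(lhs[i][j] - eta[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```

The same check passes for any product of boosts and rotations, since $SO(1,3)_+$ is closed under composition, while a generic symmetric matrix in place of $\eta$ fails it, consistent with the proposition.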
{ "domain": "physics.stackexchange", "id": 98522, "tags": "special-relativity, metric-tensor, inertial-frames, tensor-calculus" }
Wavelength comparison of two waves
Question: Is there any non-digital (naturally existing) mechanism to compare two or more waves in such a way:

    Input 1      Input 2      ....   Output
    -------      -------      ....   ------------
    Lower        Higher       ....   Lower/Higher
    Wavelength   Wavelength   ....   Wavelength

...some kind of selectively permeable membrane which allows one wave to pass through? Pardon my poor physics knowledge. I don't have a clue; would it be related to wave theory, applied physics or applied optics? Answer: You may want to look up resonance. There are all manner of physical systems that have a natural oscillation frequency, be it mechanical, optical, or whatever. When excited by a multi-frequency signal, they will amplify their natural frequency more than any other. So in a way you can think of them as blocking all other frequencies. EDIT OK, look at this image taken from the Wikipedia article... Let's say you have a system with a resonant frequency of 100 Hz. Any real system also has some amount of attenuation due to friction, which is indicated by the $\delta$ parameter. Let's say that our system follows the curve $\delta = 0.2\omega_0$. So let's now excite this system with a combination of three frequencies: 50, 100 and 200 Hz. The 50 Hz excitation will be amplified to about 133% of the input, the 100 Hz excitation to about 240%, and the 200 Hz to only 33% of the input. As the graph shows, you can actually use any resonant system with strong damping to filter out the higher frequencies, and adjust the cut frequency with the natural frequency of your system.
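The numbers quoted in this example can be approximately reproduced with the standard steady-state amplitude formula for a damped driven oscillator. This is a sketch under my assumption that the quoted curve corresponds to damping ratio $\zeta = \delta/\omega_0 = 0.2$:

```python
# Steady-state amplitude gain of a damped resonator as a function of the
# drive-to-natural frequency ratio r = ω/ω0, at damping ratio ζ = 0.2.
import math

def gain(r, zeta=0.2):
    return 1.0 / math.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

# 100 Hz natural frequency driven at 50, 100 and 200 Hz:
g50, g100, g200 = gain(0.5), gain(1.0), gain(2.0)
assert 1.2 < g50 < 1.4     # ~130% of the input, close to the quoted 133%
assert 2.3 < g100 < 2.6    # ~250% at resonance, close to the quoted 240%
assert 0.3 < g200 < 0.35   # ~32%: the higher frequency is strongly attenuated
```

So a single strongly damped resonator already acts as the rough "wavelength-selective membrane" the question asks about: it passes its own band and attenuates frequencies well above it.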
{ "domain": "physics.stackexchange", "id": 5696, "tags": "optics, wavelength, applied-physics" }
Intrinsic parity of particle and antiparticle with spin zero
Question: I need to prove that the intrinsic parities of a particle and antiparticle with spin zero are the same. Can I prove this by arguing that the $P$-inversion operator commutes with the charge-conjugation operator for a spin-zero particle? $$ \hat {P}\Psi = \pm \Psi , \quad \hat {C} \Psi = \Psi^{*}, \quad \hat {C} \hat {P}\Psi = \pm \Psi^{*} = \hat {P}\hat {C}\Psi = \pm \Psi^{*}. $$ Answer: I slightly deviate from your notation and use $\phi $ to denote the scalar field, as it's more standard. Also, I should point out that quantum fields are operators, and thus under a transformation they get acted on from both the left and the right. The complex scalar field is given by, \begin{equation} \phi (x) = \int \frac{ \,d^3p }{ (2\pi)^3 } \frac{1}{ \sqrt{ 2E _{ {\mathbf{p}} }}} \left( a _{ {\mathbf{p}} } e ^{ - i p \cdot x } + b ^\dagger _{ {\mathbf{p}} } e ^{ i p \cdot x } \right) \end{equation} Under parity we have that $ a _{ {\mathbf{p}} } \rightarrow a _{ - {\mathbf{p}} } $ and $ b _{ {\mathbf{p}} } \rightarrow b _{ - {\mathbf{p}} } $, which results in, \begin{equation} P \phi ( t, {\mathbf{x}} ) P = \phi ( t , - {\mathbf{x}} ) \end{equation} Under complex conjugation we have that $ a _{ {\mathbf{p}} } \leftrightarrow b _{ {\mathbf{p}} } $, which results in \begin{equation} C \phi ( t , {\mathbf{x}} ) C = \phi ^\ast ( t , {\mathbf{x}} ) \end{equation} The commuting nature of $ C $ and $ P $ is then quite trivial. Complex conjugation has nothing to do with what position the field is at. It's easy to see that, \begin{equation} C P \phi (x) P C = C \phi ( t , - {\mathbf{x}} ) C = \phi ^\ast ( t , - {\mathbf{x}} ) \end{equation} \begin{equation} P C \phi (x) C P = P \phi ^\ast ( t , {\mathbf{x}} ) P = \phi ^\ast ( t , - {\mathbf{x}} ) \end{equation} and hence the two operators must commute.
{ "domain": "physics.stackexchange", "id": 12111, "tags": "quantum-field-theory, operators, quantum-spin, parity, cpt-symmetry" }
Does Alzheimer's affect more than day-to-day memory?
Question: I know that Alzheimer's damages a part of the brain called the hippocampus, which has a central role in day-to-day memory. But could it also affect things handled by other memory centers, such as taste and movement? Answer: Oh, yes, though short-term memory loss is often an early sign. "For many, decline in non-memory aspects of cognition, such as word-finding, vision/spatial issues, and impaired reasoning or judgment, may signal the very early stages of Alzheimer’s disease." from https://www.nia.nih.gov/alzheimers/publication/alzheimers-disease-fact-sheet. Warning: grim. Whatever your brain does, Alzheimer's eventually undoes. For more details: http://www.alzheimers.net/stages-of-alzheimers-disease/
{ "domain": "biology.stackexchange", "id": 6076, "tags": "brain, memory" }
What happens when you send a Bell state through depolarizing channel?
Question: For noise parameter $Q$ and a density matrix $\rho$, we know that the depolarization channel $\mathcal{E}$ acts as: $$ \mathcal{E}(\rho) = (1 - Q)\rho +Q\frac{I}{2}, $$ where $I$ is the identity matrix in 2 dimensions. Now, an interesting question is: what happens to the Bell state $|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ when we apply $\mathcal{E}$ to each qubit of the Bell state separately? What would be the resulting state? Meaning, what is: $$ (\mathcal{E} \otimes \mathcal{E})(|\Phi^{+}\rangle \langle\Phi^{+}| ) $$ Remembering that an operation on the first qubit would collapse the state, is it physically possible? Answer: From my understanding of what you are asking, you may apply the depolarization operation to each qubit in turn, using the reduced density matrix of each qubit in the expression. Let's denote our two qubits as $\mathrm{A}$ and $\mathrm{B}$. The Bell state of these two qubits is then: $$ |\beta_{00} \rangle =\frac{|0 \rangle_\mathrm{A} \otimes |0 \rangle_\mathrm{B} + |1 \rangle_\mathrm{A} \otimes |1 \rangle_\mathrm{B}}{\sqrt{2}} = \frac{|00 \rangle + |11 \rangle}{\sqrt{2}} $$ With a density matrix: $$ \rho = | \beta_{00} \rangle \langle \beta_{00} | = \frac{ |00 \rangle \langle 00| + |00 \rangle \langle 11 | + |11 \rangle \langle 00| + |11 \rangle \langle 11 |}{2} $$ Which makes the two reduced density matrices: $$ \rho_\mathrm{A} = \text{tr}_\mathrm{B}(\rho) = \frac{|0 \rangle \langle 0| + |1 \rangle \langle 1|}{2} = \frac{I}{2} $$ $$ \rho_\mathrm{B} = \text{tr}_\mathrm{A}(\rho) = \frac{|0 \rangle \langle 0| + |1 \rangle \langle 1|}{2} = \frac{I}{2} $$ Writing the channel in the affine form $\mathcal{E}(\sigma) = (1-Q)\sigma + Q\,\text{tr}(\sigma)\,\frac{I}{2}$ and applying it to each qubit (assuming the same noise parameter $Q$ on both applications of the operation): $$ (\mathcal{E} \otimes \mathcal{E})(\rho) = (1-Q)^2 \, \rho + Q(1-Q)\left( \frac{I}{2} \otimes \rho_\mathrm{B} + \rho_\mathrm{A} \otimes \frac{I}{2} \right) + Q^2 \; \frac{I}{2} \otimes \frac{I}{2} $$ Which ultimately simplifies to: $$ (\mathcal{E} \otimes \mathcal{E})(\rho) = Q(2-Q) \; \frac{I}{2} \otimes \frac{I}{2} + (1-Q)^2 \; \rho $$ Because both reduced density matrices for the given Bell state equal the density matrix of the completely mixed state, i.e. $\frac{I}{2}$. Hope this helps!
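As a numerical cross-check of the final expression, here is a dependency-free sketch (the helper names are mine, not from the question) that applies the affine form of the channel to each qubit in turn:

```python
# Verify that applying E(σ) = (1-Q)σ + Q tr(σ) I/2 to each qubit of the
# Bell state gives Q(2-Q) I/4 + (1-Q)^2 ρ.  Basis order: |00>,|01>,|10>,|11>.
Q = 0.3  # arbitrary noise parameter in [0, 1]

rho = [[0.5, 0, 0, 0.5],
       [0,   0, 0, 0  ],
       [0,   0, 0, 0  ],
       [0.5, 0, 0, 0.5]]

def partial_trace_A(r):
    # tr_A(ρ)[b][b'] = Σ_a ρ[(a,b),(a,b')]
    return [[sum(r[2*a + b][2*a + bp] for a in range(2)) for bp in range(2)]
            for b in range(2)]

def partial_trace_B(r):
    # tr_B(ρ)[a][a'] = Σ_b ρ[(a,b),(a',b)]
    return [[sum(r[2*a + b][2*ap + b] for b in range(2)) for ap in range(2)]
            for a in range(2)]

def depolarize_A(r, q):
    # channel on qubit A only: (1-q)ρ + q (I/2 ⊗ tr_A ρ)
    tA = partial_trace_A(r)
    return [[(1 - q) * r[2*a + b][2*ap + bp] + q * (a == ap) * 0.5 * tA[b][bp]
             for ap in range(2) for bp in range(2)]
            for a in range(2) for b in range(2)]

def depolarize_B(r, q):
    # channel on qubit B only: (1-q)ρ + q (tr_B ρ ⊗ I/2)
    tB = partial_trace_B(r)
    return [[(1 - q) * r[2*a + b][2*ap + bp] + q * (b == bp) * 0.5 * tB[a][ap]
             for ap in range(2) for bp in range(2)]
            for a in range(2) for b in range(2)]

out = depolarize_B(depolarize_A(rho, Q), Q)
expected = [[Q * (2 - Q) * 0.25 * (i == j) + (1 - Q) ** 2 * rho[i][j]
             for j in range(4)] for i in range(4)]
assert all(abs(out[i][j] - expected[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```

The check passes for any $Q$; the cross terms $\frac{I}{2}\otimes\rho_\mathrm{B}$ and $\rho_\mathrm{A}\otimes\frac{I}{2}$ both collapse to $\frac{I}{4}$ precisely because the Bell state's reduced states are maximally mixed.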
{ "domain": "quantumcomputing.stackexchange", "id": 1629, "tags": "entanglement, quantum-operation, noise, bell-basis, textbook-and-exercises" }
Does the number density of photons $n_\gamma\approx 10^8 \:\mathrm m^{-3}$ refer to CMB photons only?
Question: When we talk about the present-day number density of photons ($n_\gamma\approx 10^8 \:\mathrm m^{-3}$) in the universe, do we mean the number density of CMB photons? I mean, there are other sources of photons, for example stellar or galactic sources. Are these photons included in $n_\gamma$? Is it the same $n_\gamma$ which is used to quantify the baryon asymmetry in the universe? Answer: In principle, the number density of photons includes all photons, both of cosmic origin (e.g. the cosmic microwave background; CMB) and of astrophysical origin (starlight, gamma rays from gamma-ray bursts, radio waves from quasars, etc.). However, CMB photons outnumber all other types of photons by more than 200:1. The cosmic background radiation The figure below, from Hill et al. (2018), shows the brightness of the sky over the full electromagnetic spectrum, from radio to gamma rays: More specifically, the $y$ axis shows the specific intensity $I_\nu$, multiplied by the frequency $\nu$. This is a convenient measure because it gives the contribution per logarithmic scale, so when plotted on a logarithmic scale, if two peaks are equally wide, the one with the highest value of $\nu I_\nu$ has the larger energy density. Thus you see that, by far, the largest contribution comes from the CMB. The second-largest contribution to the energy density is the cosmic infrared and optical background (CIB and COB), which comes from galaxies. At extreme frequencies, you have the even lower X-ray and gamma-ray backgrounds (CXB and CGB), which come from active galactic nuclei (quasars etc.). See also the model fit to these observations by Inoue 2014, fig. 1. Photon number densities However, since photons have different energies, a larger number of photons is needed to produce a given energy for low-energy photons than for high-energy photons.
Dividing by $\nu$ to get $I_\nu$ and by Planck's constant $h$ to get the number flux, and multiplying by $4\pi$, gives the photon flux from all directions, i.e. the number of photons per second. Further dividing by the speed of light $c$ gives the number density. That is, $$ n = \nu I_\nu \times \frac{4\pi}{h \nu} \frac{1}{c} $$ In the plot below, I took the data from the plot above, interpolated a bit, and calculated the number density: For each "family" of photons, I integrated the number densities across the frequency bands, writing the numbers in black. The CMB photons have a total number density of $411\,\mathrm{cm}^{-3}$, which is seen to be a factor of $$ \frac{n_\mathrm{CMB}}{n_\mathrm{CRB}+n_\mathrm{CIB}+\cdots} = \frac{411}{0.63+1.24+\cdots} \simeq 220 $$ more than all of the other photons combined! Note that the UV background is quite uncertain, both because UV observations from the ground are very difficult, so you have to go to space, and because interstellar hydrogen is very efficient at absorbing UV radiation. Analytical expression for the number density Because the CMB is described by a near-perfect blackbody of temperature $T = 2.7255\,\mathrm{K}$, the CMB photon number density $n_\mathrm{CMB}$ can be calculated analytically as $$ \begin{array}{rcl} n_\mathrm{CMB} & = & 16\pi \left( \frac{kT}{hc} \right)^3 \zeta(3) \\ & \simeq & 411\,\mathrm{cm}^{-3}. \end{array} $$ Here, $k$, $h$, $c$, and $\zeta$ are Boltzmann's constant, Planck's constant, the speed of light, and the Riemann zeta function, respectively. Baryon asymmetry As for your second question, when particles and antiparticles annihilate, they emit gamma rays, which become a part of the CGB, but at the very high-frequency end, like $\nu>10^{20}$ Hz. Thus, the CGB can, as you suggest, be used to constrain the baryon asymmetry in the Universe (see e.g. Ballmoos 2014). But if you define $n_\gamma$ as the number density of all photons, these photons contribute negligibly to $n_\gamma$.
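Plugging CODATA values of the constants into the analytic expression above reproduces the quoted number:

```python
# Blackbody photon number density n = 16π (kT/hc)^3 ζ(3) at T = 2.7255 K.
import math

k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
T = 2.7255            # CMB temperature, K
zeta3 = 1.2020569031595943  # Riemann ζ(3) (Apéry's constant)

n = 16 * math.pi * (k * T / (h * c)) ** 3 * zeta3   # photons per m^3
n_cm3 = n * 1e-6                                     # photons per cm^3
assert 409 < n_cm3 < 412   # ≈ 411 cm^-3, as quoted above
```

The same formula with the present-day temperature replaced by $T(z) = T_0(1+z)$ gives the CMB photon density at any redshift, scaling as $(1+z)^3$.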
{ "domain": "physics.stackexchange", "id": 23714, "tags": "cosmology, astrophysics, cosmic-microwave-background, baryogenesis, leptogenesis" }
Sinusoidal Wave & Conservation of energy
Question: I am having a problem reconciling conservation of energy with a sinusoidal wave. Suppose I hit a water surface with a stone; a wave then propagates across the whole surface as a sinusoidal wave. But by hitting a portion of the surface, I only set part of the surface oscillating. Yet after some time the surrounding molecules will oscillate with the same energy, forming a sinusoidal wave. Where do these molecules get their energy from? I am sure I have a misconception somewhere. Answer: "But after some time the surrounding molecules will oscillate with the same energy, forming a sinusoidal wave. Where do these molecules get their energy from?" Conservation of energy means that if you sum up all the oscillation energies of the molecules at time t', they should sum up to the energy of the impact at time t. The energy is spread over a bigger volume, the amplitude of the wave falls, and that is why waves eventually dissipate completely. (There will be radiative etc. losses too, but smaller than the geometrical dissipation, unless the medium has great viscosity.)
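The geometrical-dissipation argument can be made concrete with a small sketch (my own illustration, not from the answer): in 2D, if the impact energy is conserved on an expanding ring of circumference $2\pi r$, the energy per unit length falls as $1/r$ and the amplitude falls as $1/\sqrt{r}$:

```python
# Energy conservation on an expanding circular ripple: the ring carries the
# same total energy at every radius, so the amplitude must decay as 1/sqrt(r).
import math

E0 = 1.0  # total oscillation energy imparted by the stone (arbitrary units)

def amplitude(r):
    energy_per_length = E0 / (2 * math.pi * r)  # energy density ∝ amplitude²
    return math.sqrt(energy_per_length)

# The total energy carried by the ring is the same at every radius...
for r in (0.5, 1.0, 4.0, 25.0):
    ring_energy = amplitude(r) ** 2 * 2 * math.pi * r
    assert abs(ring_energy - E0) < 1e-12

# ...while the amplitude itself keeps falling:
assert abs(amplitude(4.0) - 0.5 * amplitude(1.0)) < 1e-12
```

So no molecule ever gains energy out of nothing: ever more molecules share the same fixed impact energy, each with a smaller amplitude.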
{ "domain": "physics.stackexchange", "id": 33286, "tags": "energy, waves" }
How do I stabilize my oscillating system?
Question: I have two scalars $x$ and $y$ that vary with time $t$ such that \begin{align} \frac{dx}{dt} &= ay + b + dx\\ \frac{dy}{dt} &= cx \end{align} but $a, b, c$ are unknown. If $d$ is too small, the system will be underdamped (and oscillate badly), but if $d$ is too large, the system is overdamped, and doesn't converge fast enough. How do I set $d$ based on statistics taken about historic $x, y$ so that the system is critically damped and moves as quickly as possible to the point where $x=0, y=\bar y = -\frac{b}{a}$? Answer: First, combine your two-variable set of first-order differential equations into a single-variable second-order one (taking $d = 0$ for the moment; the followup below shows how to include such a term). $$ \frac{d^2y}{dt^2} = c \frac{dx}{dt} = acy + bc $$ $$ \frac{d^2y}{dt^2} - acy = bc $$ Solve: $$ y(t) = C_1 e^{\sqrt{ac} \cdot t} + C_2 e^{-\sqrt{ac} \cdot t} - \frac{b}{a} $$ $$ x(t) = \int{ ( ay(t) + b ) dt } $$ $$ x(t) = C_1 \frac{a}{\sqrt{ac}} e^{\sqrt{ac} \cdot t} - C_2 \frac{a}{\sqrt{ac}} e^{-\sqrt{ac} \cdot t} + C_3 $$ Now analyze: Initial conditions have to be set so that $ C_1 = 0 $ or your system will grow exponentially (assuming $ ac > 0 $). Also, $ C_3 $ will need to be zero. If $ ac < 0 $ then the system will oscillate forever, since there is no damping term in the second-order equation. $$ \frac{dy}{dt} = \sqrt{ac} \left( C_1 e^{\sqrt{ac} \cdot t} - C_2 e^{-\sqrt{ac} \cdot t} \right) $$ $$ \frac{dx}{dt} = a \left( C_1 e^{\sqrt{ac} \cdot t} + C_2 e^{-\sqrt{ac} \cdot t} \right) $$ Initial Conditions: $$ y(0) = C_1 + C_2 - \frac{b}{a} $$ $$ \frac{dy}{dt}(0) = \sqrt{ac} \left( C_1 - C_2 \right) $$ $$ x(0) = \frac{a}{\sqrt{ac}} ( C_1 - C_2 ) + C_3 $$ $$ \frac{dx}{dt}(0) = a \left( C_1 + C_2 \right) $$ With $ C_1, C_3 = 0 $: $$ y(0) = C_2 - \frac{b}{a} $$ $$ \frac{dy}{dt}(0) = - \sqrt{ac} \cdot C_2 $$ $$ x(0) = -\frac{a}{\sqrt{ac}} \cdot C_2 $$ $$ \frac{dx}{dt}(0) = a \cdot C_2 $$ It's been a while, I hope I did it right.
Ced ================================== Followup: Yes, you can include an $x$ term in the first equation, or a $y$ term in the second equation, or both. Then repeat the process I went through above to put your system in this form: $$ \ddot y + A \dot y + B y = 0 $$ Whatever constant you get on the right side will be part of the particular solution. Add it in later as I did above. These are the basics of solving a homogeneous second-order linear differential equation: Assume the solution is of the form $ C e^{rt} $ $$ r^2 C e^{rt} + A r C e^{rt} + B C e^{rt} = 0 $$ $$ \left( r^2 + A r + B \right) \left( C e^{rt} \right) = 0 $$ One, or both, of the factors has to be zero. $ C e^{rt} $ won't be, so: $$ r^2 + A r + B = 0 $$ This is a standard quadratic equation. With real-valued coefficients you will either get two real roots, or a conjugate pair of complex roots. $$ r = - \frac{ A }{2} \pm \sqrt{ \left(\frac{ A }{2} \right)^2 - B } $$ This will give you $r_1$ and $r_2$. Since it is a linear differential equation, any linear combination of the two solutions will also be a solution. $$ y(t) = C_1 e^{r_1 t} + C_2 e^{r_2 t} $$ The real part of the root will determine the growth behavior. If it is negative, the system will converge to an equilibrium point; if it is positive, the system will grow exponentially. The imaginary part, if present, will determine the frequency of the oscillations. Therefore, you can calculate the parameters from observations by measuring the rate of decay to get the real part(s), and measuring the frequency to get the imaginary part. Once you have those, you should be able to solve for your $a,b,c$ and $d,e$ if you include them. Of course, you could leave your equation set in matrix form and solve it with eigenvalues and eigenvectors, but that is a whole 'nuther discussion. Ced
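The root-classification step above can be sketched in a few lines (the function name is mine; the mapping back to the original system is my own derivation: substituting $\frac{dx}{dt}=ay+b+dx$ into $\frac{d^2y}{dt^2}=c\frac{dx}{dt}$ gives $\ddot y - d\,\dot y - ac\,y = bc$, i.e. $A = -d$ and $B = -ac$):

```python
# Classify the damping of  y'' + A y' + B y = 0  from the characteristic
# roots r = -A/2 ± sqrt((A/2)^2 - B), as derived above.
import cmath

def damping(A, B, tol=1e-12):
    disc = (A / 2) ** 2 - B
    r1 = -A / 2 + cmath.sqrt(disc)
    r2 = -A / 2 - cmath.sqrt(disc)
    if abs(disc) < tol:
        kind = 'critically damped'
    elif disc > 0:
        kind = 'overdamped'
    else:
        kind = 'underdamped (oscillates)'
    return kind, r1, r2

# With A = -d and B = -ac, critical damping means (d/2)^2 = -ac,
# i.e. d = -2*sqrt(-ac) for a stable system with ac < 0.
assert damping(2.0, 1.0)[0] == 'critically damped'   # A^2/4 == B
assert damping(3.0, 1.0)[0] == 'overdamped'
assert damping(0.5, 1.0)[0] == 'underdamped (oscillates)'
```

Measuring the decay rate and oscillation frequency of historic $x, y$ data gives the real and imaginary parts of the roots, from which $A$ and $B$ (and hence $d$) follow.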
{ "domain": "dsp.stackexchange", "id": 6023, "tags": "control-systems, oscillator" }
Machine learning roadmap not for beginners
Question: To introduce myself: I know what RL is and know some RL algorithms, such as PPO and A2C. I know about offline RL and online RL. I have read many papers about RL, such as MuZero, AlphaZero, Decision Transformer, etc. I also know a lot about supervised learning: I know many architectures, from MLPs to modern Transformers, I can solve many tasks, and I have read many papers about supervised learning, such as DETR, ViT, T5, BERT, GPT-3 and many others. I know some things about unsupervised learning and have read papers such as DINO, UP-DETR, SimCLR, etc. Yesterday I learned about SwAV; I am going to read the paper and learn more about clustering in unsupervised learning. To create neural nets I use PyTorch. I'm a student at university. I want to be a professional AI researcher, and I want to work at FAIR. Question: What should I learn next? Which skills do I have to have to be an AI researcher? Please give a full roadmap from my current level to the level of a researcher. Thank you. Answer: As a professional researcher at a top-3 university in the USA, I will give you my perspective. Please note that everyone's path is different, so listen to lots of opinions. The best hints and advice will be part of those opinions, so you will get an idea of a possible path. Also, note that I won't assume that you are a prodigy student who might drop out from college while companies such as FAIR or Google take you under their wing and train you as a researcher. My advice will be focused mainly on what, on average, you need to increase your chances of success. First of all, becoming a professional researcher and working at FAIR are two different things (that of course can be achieved at the same or at different times). Now, to become a professional researcher you will need professional training = a PhD programme. The PhD years are the years in which you will be trained to carry out original research. So pursuing a PhD should be your number 1 mid-term goal (after graduating).
Bear in mind that as a Research Scientist at FAIR you will be expected to define your own research path, which should align with the objectives of the larger group of researchers to which you will belong. This means that you need to ask the right questions: questions that are interesting and potentially lead to interesting and impactful work. And of course this implies that you know very well the literature of the subject of your research. Also, Research Scientists at FAIR, Google, etc. usually already have some impactful research highlights from their PhD/Postdoc, and the company is interested in incorporating those directions into its own research directions. In other words, they have something of value to "sell", and not just good skills. This leads to my next point: specialization. If your goal is to work as a Research Scientist at FAIR (and not, for example, Google), you need to have specific reasons why FAIR. Do you like something specific about their research? If you want to become an RL expert, for example, this might not necessarily lead you to FAIR, as they are not as heavily invested in RL compared to e.g. Google. These are things you need to think about. Having a bit of experience here and there in ML won't lead you anywhere if you don't start focusing on what you want to do in order to achieve your goals. We are talking about a marathon, not a sprint, so you need to think about long-term goals (a plan for the next 10 years). Knowing about architectures while having zero ML foundations won't lead you far in the field. As you gain experience, start understanding the groups/families of methods and techniques that do X. For example, you might know about PPO and A2C but not know the differences between policy-gradient methods and value-based methods. These are things that you need to think about for the more focused path that you will take. Successful researchers know their field very well and can come up with successful next steps that push the boundary of their field.
You need to build a more general understanding than just knowing about architectures. This leads to my final point: math. Being good at math and stats is extremely valuable as a researcher in ML. Yes, select the tough path and become good in the math of your field. Don't listen to people telling you that you don't need math. A few other things: get involved in research groups early to find out what you like. If needed, attend a Master of Research or a Master of Science, especially if you need more courses. Email people to get advice, and ask questions again and again! Try to publish early and then apply for a PhD. Choose a good supervisor who will also be a mentor (VERY important, as some people supervise without mentoring). You might end up at a very competitive lab where the supervisor is so busy that you get nothing from him/her. Think also about what YOU get from a project/collaboration/apprenticeship etc. Always keep your math and coding skills sharp. And lastly, focus (again): narrow down the research fields that intrigue you. I tried to cover as much as I could. Good luck with everything, and remember to exercise patience and persistence!
{ "domain": "datascience.stackexchange", "id": 10223, "tags": "machine-learning, nlp, reinforcement-learning, computer-vision, unsupervised-learning" }
Why can this term be written as the electric force component of the Lorentz law?
Question: I am reading Special Relativity and Classical Field Theory by Susskind. In chapter 6, he is deriving the Lorentz force law from the action, and he presents the following equation $$m\frac{d}{dt}\frac{\dot{X_p}}{\sqrt{1-\dot{x}^2}} = e\left(\frac{\partial A_0}{\partial X^p} - \frac{\partial A_p}{\partial t}\right) + e\dot{X^n}\left(\frac{\partial A_n}{\partial X^p} - \frac{\partial A_p}{\partial X^n}\right)$$ Then he says we can see easily that the first term on the right is the electric field term $\left(F = q\vec{E}\right)$, and that with some manipulation, we can rewrite the second term on the right as the magnetic component of the force. I actually see the magnetic component easily (through some cross-product manipulation); however, I do not see how the first term on the right can be written as the electric component of the force. Can someone explain this? Answer: $e(\frac{\partial A_0}{\partial X^p} - \frac{\partial A_p}{\partial t}) + e\dot{X^n}(\frac{\partial A_n}{\partial X^p} - \frac{\partial A_p}{\partial X^n})$... ...however, I do not see how the first term on the right can be written as the electric component of the force. Can someone explain this? The $A^0$ component is what you might usually call the "electric potential" $\phi$. The spatial components $A^p$ are the components of what you would usually call the "vector potential" $\vec A$. By definition, the electric field is: $$ \vec E = -\vec \nabla {\phi} - \frac{1}{c}\frac{\partial \vec A}{\partial t} $$ The context is not entirely clear, but it looks like one of the signs of the term you wrote will flip when you raise the index in the numerator (depending on whether you are using the mostly-plus or mostly-minus metric), and you will see that the term is the component of the electric field written above.
{ "domain": "physics.stackexchange", "id": 90775, "tags": "electromagnetism, special-relativity, forces" }
What is the function of dihydrofolate reductase in humans?
Question: According to StatPearls, synthetic folic acid — as an artificial dietary supplement — needs to be converted into the active form tetrahydrofolate (THF) by dihydrofolate reductase. In the cells, folic acid is reduced to THF, a biologically active form, in a two-step process that requires two molecules of NADPH and the enzyme dihydrofolate reductase (DHFR). But naturally occurring folates — as natural constituents of the diet or derived from intestinal flora — are derivatives of tetrahydrofolate (the active form), so the human body doesn’t need to convert them using dihydrofolate reductase. If this is the case, why do humans have the enzyme dihydrofolate reductase? Answer: The poster’s assumption that the sole role of dihydrofolate reductase (DHFR) is to convert folic acid to tetrahydrofolate is incorrect. The enzyme is important in at least two metabolic pathways in mammals — the degradation of phenylalanine and the synthesis of thymidine.
{ "domain": "biology.stackexchange", "id": 11900, "tags": "human-biology, biochemistry, enzymes" }
Why does our voice sound different on inhaling helium?
Question: This question (and answer) is an attempt to clear the air on what appears to be a very simple issue, with conflicting or unclear explanations on the internet. Arguments, negations, etc are invited. I'm classifying this as a physics question, since it has to do with resonances, attenuation, etc. Question: I recently Googled this, and found a ton of articles with confusing explanations: One of the top Google results from a site called Live"Science", says - "....their wavelengths stay the same regardless of whether the tract is filled with helium gas or air....That means the frequencies of the resonant harmonics must increase in a helium-filled cavity instead." The above appears to completely contradict the source university article as well as this other article in Scientific American that say the pitch of the sound (and the actual frequency of oscillations) doesn't change, only the timbre (and the distribution of power between low and high frequencies) changes. Additionally, neither the university article quoted nor Scientific American explain HOW the presence of helium leads to the presence of higher frequencies. For example, the university article simply says - "Inhaling helium changes the frequencies of the resonances, and therefore of the formants they produce" --- Okay, how? There also appears to no consensus on whether a LISTENER in a helium atmosphere would hear the same frequencies as normal (it's helium all the way so it shouldn't make any difference right?) OR hear the squeaky voices associated with helium. This should be explainable by physics, what is the answer? Answer: In order to properly understand this without any unnecessary "controversy", let's break the whole process of sound generation and perception into 5 important, but completely separate parts. 
We'll then proceed to explain each part using a few different examples and pieces of derivative logic: Vibration of the vocal folds Transmission of energy from vocal folds to air in the vocal tract Resonance and Attenuation in the vocal tract Transmission of energy from the end of the vocal tract (mouth) to the surrounding medium Reception and perception of sound by another human. Now: The frequency generated by the vocal folds depends on the tension exerted on them and surrounding muscles. This is a neuromuscular process and is NOT affected by Helium or any other gas (at least in the short term). So our vocal folds continue to vibrate at the same frequency in helium as in normal air. 2. Sound is produced by the transmission of the vibrations produced in the vocal fold, to the air in the vocal tract. This "transmission" doesn't occur by any magic. The vocal folds - as they vibrate - push and pull columns of air in their immediate vicinity, not very different from the way you may push a child on a swing at specific intervals, so as to produce sustained oscillations, and brief enjoyment. (The "pull" in this analogy though, is provided by gravity). The point is, the child oscillates at the same frequency at which you are pushing the swing. i.e. If you are pushing the swing once every N seconds, the child also completes a swing once every N seconds. This is true regardless of the weight of the child, correct? Similarly, the air in the vocal tract, also vibrates at the same frequency as the vocal chords. This fact, is also true regardless of the mass of the air particles. In other words, the frequency of sound does not change, regardless of the medium in which it is transmitted. Time-Out The last one was a doozy. Frequency of sound does not change? Then why on earth does helium sound different from normal air? While the frequency of sound does not change, the SPEED of sound does. Why? 
Consider this old classical physics equation: Kinetic Energy = $\frac{mv^2}{2}$, where m = mass and v = speed (let's not say 'velocity' for now). Now the vocal folds vibrate with the same force at the same frequency, so the energy they convey must be the same in ALL media. In other words, for a given constant value of kinetic energy, $v^2$ is inversely proportional to the mass of the particles. This naturally means sound travels faster in Helium than in air. Now, we know the other old equation: Speed = Wavelength x Frequency. Since we know that the FREQUENCY of sound is the same in Helium and in Air, and the speed of sound is greater in Helium, it follows that the Wavelength of sound is greater in Helium than in Air. This is a very important conclusion that bears directly on our next deduction.

3. Now, we have a very important conclusion in our kitty - "Wavelength of sound is greater in Helium than in Air". Remember that the vocal tract is often modelled (simplistically) as an open or closed tube. To refresh why that's important, see Wikipedia. The vocal tract is actually not really a cylinder, but a fairly complex shape. This means it has areas of constriction and expansion that change depending on the position of your tongue, tension in the tract, and several other factors. So in a sense, in these complex configurations, the vocal tract can be modelled as a series of tubes of varying diameters and varying levels of "closure" at either opening. Now this means that different parts of the vocal tract, depending on their geometrical configuration and their material characteristics, resonate with different WAVELENGTHS of sound. Notice I said WAVELENGTHS and not FREQUENCIES. In common parlance, "frequencies" is often used since wavelength and frequency are directly inter-related in a common medium.
However, even if we change the medium through which sound is being propagated, the interaction of sound waves with open and closed tubes depends strictly on their wavelength and not their frequency. Now would be a good point to return to the marquee conclusion we drew from point 2 - "Wavelength of sound is greater in Helium than in Air". This leads us to the following KEY/FINAL CONCLUSIONS. In a vocal tract filled with Helium:

- The frequencies of sound do not change
- The wavelengths of sound DO change
- Because the wavelengths have changed, the portions of the sound spectrum produced by the vocal folds that are attenuated and resonated by different portions of the vocal tract also change.

This results in the sound spectrum output by the combination of the vocal folds and vocal tract in Helium being different from the sound spectrum output in normal Air. This means the net distribution of energies among high and low frequencies (the timbre) changes with a change in sound medium, whereas the fundamental frequency of the sound (closely related to pitch) does not change. Let's look at the spectrogram of two sample sounds helpfully provided in the NSW article. ![enter image description here][1] Unfortunately, due to the experimental conditions, the two sounds do not have the same content (different sentences are spoken) and therefore the spectrogram cannot be exactly relied upon. However, the fundamental frequency in both is roughly the same and therefore supports our conclusion that the pitch is the same. Since different words are used in either sound, a timbre comparison cannot be made (the difference in energy distributions visible in the spectrogram can be attributed to the different words spoken). Also, for simplicity and ease of understanding, a "Melodic Spectrogram" has been used in favor of the raw, noisier spectrogram. It was generated using Sonic Visualizer.

We are not Done!
We started with the promise of explaining sound transmission and reception/perception in FIVE parts. We are done with only three. Let's get through the remaining parts very quickly.

4. Transmission of sound from mouth to air - As covered by point 2, with a change in medium, the sound frequency does not change, but the wavelength does. This means that the only effect of filling a room with helium as well (rather than just the vocal tract) is to increase the wavelength of the sound.

5. The above has no impact on sound perception. The ear and brain together are primarily a FREQUENCY receiver. The ear translates air pulsations into hair cell oscillations, which then translate to synchronous pulses on attached neurons. Since the timing of the pulses is correlated ONLY to frequency, and the timing of the pulses is what produces notions of pitch, timbre etc., we can safely assume that the ear transcribes sound to the brain faithfully based on frequency. Wavelength has no impact on this process. However, the ear, just like the vocal tract, is non-linear, which means that it, too, is going to attenuate/resonate some sounds (the specific non-linear properties of the cochlea are still being studied). However, UNLIKE the vocal tract, the ear/cochlea is a sealed, fluid-filled chamber. The properties of the cochlea are not affected by surrounding air but only by the fluid, which of course could be affected by blood composition and other biological factors - but NOT the immediate environment. Thus, at the root of all the confusion around production and reception of sound in alternative media like Helium, is the fact that the vocal tract's non-linear characteristics are affected by the surrounding medium, whereas the ear's are not. That's it. [1]: https://i.stack.imgur.com/CDxuf.png
{ "domain": "physics.stackexchange", "id": 97550, "tags": "acoustics, resonance" }
I need help computing the effect of curvature on the FRW metric
Question: Apparently there are different forms of the FLRW metric. I'm focusing on Anti-de Sitter space, so I'll just give the hyperbolic version of the function. $$ds^2=-c^2dt^2+a^2(t)\left[dr^2+R_0\space \sinh\left(\frac{r}{R_0}\right)d\Omega^2\right]\tag 1$$ $$d\Omega^2=d\theta^2+sin^2 \theta\space d\phi^2$$ Here, $d\Omega$ is the angular separation of two points in the sky, but I'm not interested in two points in the sky. I'm analyzing SNe Ia data, so I'm just working in line-of-sight measurements, so as I understand this, the $d\theta$ and $d\phi$ terms go to zero (that is, there's no change in the angle), so the whole $d\Omega$ term is zero. This leaves us with: $$ds^2=-c^2dt^2+a^2(t)dr^2$$ Which doesn't seem right. Is the distance in a hyperbolic plane the same as a flat plane or closed surface if you're not dealing with angular separation? Am I interpreting the metric correctly? EDIT: The other form of the FRW metric seems to suggest the curvature changes the length of a line-of-sight measurement. $$ds^2=-c^2dt^2+a^2(t)\left[\frac{dr^2}{1+k\space r^2}+r^2d\Omega^2\right]\tag 2$$ Where $k$ is either a scalar (1 for closed, 0 for flat, -1 for saddle) or the Gaussian Curvature (still not sure how that's used). Setting the angular separation to zero, you get: $$ds^2=-c^2dt^2+a^2(t)\frac{dr^2}{1+k\space r^2}$$ This seems to fly in the face of the version in (1), so I'm missing some major concept here. Can anyone tell me what that is? Answer: The correct equation should be $$ds^2 = -c^2dt^2+a^2(t)[\frac{dr^2}{1-\kappa r^2} + r^2d\Omega^2]$$ Here $r$ is the usual radial coordinate. 
We may re-define the radial coordinate by taking $$d\chi = \frac{dr}{\sqrt{1-\kappa r^2}}$$ such that $$r = \begin{cases} \sinh(\chi), & \kappa = -1 \\ \chi, & \kappa = 0 \\ \sin(\chi), & \kappa = 1 \end{cases}$$ This implies $$ds^2 = -c^2dt^2 + a^2(t)[d\chi^2 + S_{\kappa}^2(\chi)d\Omega^2]$$ where $$S_{\kappa}(\chi) = \begin{cases} \sinh(\chi), & \kappa = -1 \\ \chi, & \kappa = 0 \\ \sin(\chi), & \kappa = 1 \end{cases}$$ So in both forms of the metric, setting $d\Omega = 0$ gives the same radial line element, $ds^2 = -c^2dt^2 + a^2(t)\,d\chi^2$.
{ "domain": "physics.stackexchange", "id": 67539, "tags": "cosmology, metric-tensor, curvature, anti-de-sitter-spacetime" }
Can a super-positioned human be used to differentiate between the Copenhagen interpretation and many-worlds?
Question: I'm pretty new to quantum physics, so there's a good chance that I have this all backwards. However, by my understanding of the Copenhagen interpretation, the wave function 'collapses' upon observation. In the many worlds interpretation, the observer becomes decohered and collapse never occurs. I understand that it's fairly difficult to super-position big things. However, if many-worlds holds correct, then it should be really easy to super-position a human. For example, they observe the result of an experiment. Upon seeing result A they take a step back. Upon seeing result B they take a step forward. Bam, we have a massive super-positioned object. So the question is, is it possible to super-position a human such that they can interfere with themselves in some way? Or at least such that we can measure whether or not they are super-positioned? For example, perhaps the super-positioned person carries a "photon gun", and shoots it at where his super-positioned self would be. He may not see anything special, but could it be set up such that an outside observer (unaware of where the super-positioned human is standing) could measure the interference between the photons shot by the photon gun? If so, then it seems that there is an experiment which can differentiate between the Copenhagen interpretation and many-worlds. I've been told that this is not the case, so I'm wondering where my logic failed. Answer: If you believe quantum mechanics is a complete and accurate description of "reality", it is all too easy to set up a superposition of humans. As the OP mentioned, measure a superposed qubit, and if A, step forward. Otherwise, step backward. The problem lies with decoherence which prevents us from measuring any interference.
{ "domain": "physics.stackexchange", "id": 4614, "tags": "quantum-mechanics, decoherence" }
Has my program made good use of c++ features and is there anything in my program I've done which I should avoid?
Question: I was wondering if any of you would mind reviewing my program? I'm a beginner / aspiring programmer and I'm looking for tips on how I could make my program more efficient. For example, have I made good use of sub-routines / functions? Am I using classes properly? Is there anything I could do to perhaps shorten the length of the program? // Created by George Austin Bradley on 19/11/2019. // Copyright © 2019 George Austin Bradley. All rights reserved. #include <iostream> #include <iomanip> #include <vector> #include <cctype> using namespace std; class cCar{ private: string _sName; double _dPrice; public: cCar(string s, double d){ _sName = s; _dPrice = d; } string getName(){return _sName;} double getPrice(){return _dPrice;} }; vector<cCar>CarDatabase(vector<cCar>&car_list){ car_list.push_back(cCar("Blue Nissan Skyline",1000)); car_list.push_back(cCar("Red Mini",3000)); car_list.push_back(cCar("Black Land Rover",4000)); car_list.push_back(cCar("Beatle",9000)); car_list.push_back(cCar("Ferrari",300000)); return car_list; } class Finance{ private: string _sUserName; double _dCostOfCar; string _sChosenCar; int _iFinancePlan; double _dDepositedAmount; double _dMonthlyPayments; double _dTotalLeftToPay; public: Finance(string sName, double dCostOfCar, string sChosenCar, int iFinancePlan, double dDepositedAmount, double dDMonthlyPayments, double dTotalLeftToPay){ _sUserName = sName; _dCostOfCar = dCostOfCar; _sChosenCar = sChosenCar; _iFinancePlan = iFinancePlan; _dDepositedAmount = dDepositedAmount; _dMonthlyPayments = dDMonthlyPayments; _dTotalLeftToPay = dTotalLeftToPay; } string getUserName(){return _sUserName;} double getdCostOfCar(){return _dCostOfCar;} string getChosenCar(){return _sChosenCar;} int getFinancePlan(){return _iFinancePlan;} double getDepositAmount(){return _dDepositedAmount;} double getMonthlyAmount(){return _dMonthlyPayments;} double getTotalLeftToPay(){return _dTotalLeftToPay;} }; //START OF PROTOTYPE void ViewPurchases(vector<Finance>&buyers, 
char &cOption, bool &bExit); //END OF PROTOTYPE //1. This displays the car menu items. void display_menu(vector<cCar>&car_list) { cout << "\nMENU"; for (int iCount = 0; iCount != car_list.size(); iCount++) { cout << "\n" << iCount + 1 << ". " << car_list[iCount].getName(); cout << "\n\tPrice: £" << car_list[iCount].getPrice(); cout << "\n"; } } //This procedure proccesses the user's selection and all information regarding price and name of car are then transferred to transaction variables. void selectedCar(vector<cCar>&car_list, string &sNameOfChosenCar, double &dCostOfChosenCar) { int iSelectionFromMenu = -1; do{ cout << "\nChoose a car that you'd wish to buy from the menu (1 - " << car_list.size() << "): "; cin >> iSelectionFromMenu; if(iSelectionFromMenu > 0 && iSelectionFromMenu <= car_list.size()) { sNameOfChosenCar = car_list[iSelectionFromMenu - 1].getName(); dCostOfChosenCar = car_list[iSelectionFromMenu - 1].getPrice(); } else { cout << "\nPlease enter valid number!"; iSelectionFromMenu = -1; } }while(iSelectionFromMenu == -1); } //This procedure gets from the user their preferred finance plan through their input. void FinanceLength(int &iFinanceLength) { do{ cout << "\nHow long do you wish for your finance plan to last? (1 - 4 years): "; cin >> iFinanceLength; if (iFinanceLength < 0 || iFinanceLength > 4) { cout << "\nOops, try again! Please enter between 1 - 4!"; } }while(iFinanceLength < 0 || iFinanceLength > 4); } //This procedure gets the user's deposit. void DepositMoney(double &dDepositAmount) { do{ cout << "\nEnter deposit amount (minimum £500 accepted): £"; cin >> dDepositAmount; if (dDepositAmount < 500) { cout << "\nTry again! 
Deposit an amount greater than or equal to £500."; } }while(dDepositAmount < 500); } //This function calculates the amount of money the user has to pay after deposit, added tax and charge percentage of 10% double TotalLeftToPay(double iFinanceLength, double dDepositAmount, double dCostOfChosenCar) { double dChargePercentage = 0.10; double dTotalLeftToPay = dCostOfChosenCar + (dCostOfChosenCar * dChargePercentage) - dDepositAmount + 135; return dTotalLeftToPay; } //This calculates monthly payments. double MonthlyPayments(double dTotalLeftToPay, int iFinanceLength) { double dMonthlyPayments = (dTotalLeftToPay / iFinanceLength) / 12; return dMonthlyPayments; } void EndOfProgramOptions(vector<Finance>&buyers, char &cOption, bool &bExit) { char cInputSelection = 0; do{ cout << "View your purchases (y/n): "; cin >> cInputSelection; cInputSelection = toupper(cInputSelection); if (cInputSelection == 'Y') { ViewPurchases(buyers, cOption, bExit); } }while(cInputSelection != 'Y' && cInputSelection != 'N'); } //This asks the user whether they'd like to restart the application. void RestartOptions(char &cOption, bool &bExit, vector<Finance>&buyers) { do{ cout << "\nDo you wish to make another purchase? (y/n): "; cin >> cOption; cOption = toupper(cOption); switch(cOption) { case 'Y': bExit = false; break; case 'N': EndOfProgramOptions(buyers, cOption, bExit); bExit = true; break; default: cout << "Sorry, that's an invalid input, please try again!"; continue; } }while(cOption != 'Y' && cOption != 'N'); } //This string function returns either year or years (plural) string YearOrYears(int iFinanceLength) { return (iFinanceLength > 1)? "years" : "year"; } //This displays receipt of the user's transaction. 
void Receipt(const string &sUserName, const int &iFinanceLength, const double &dDepositAmount, char cOption, bool &bExit, const string &sNameOfChosenCar, const double &dCostOfChosenCar, vector<Finance>&buyers) { double dTotalLeftToPay = TotalLeftToPay(iFinanceLength, dDepositAmount, dCostOfChosenCar); double dMonthlyPayments = MonthlyPayments(dTotalLeftToPay, iFinanceLength); buyers.push_back(Finance(sUserName,dCostOfChosenCar,sNameOfChosenCar,iFinanceLength,dDepositAmount, dMonthlyPayments,dTotalLeftToPay)); cout << "\nReceipt for: " << sUserName << ". "; cout << "\nYou have chosen " << sNameOfChosenCar << "."; cout << "\nYour finance plan timescale is " << iFinanceLength << " " << YearOrYears(iFinanceLength) << "."; cout << "\nYou've deposited £" << dDepositAmount << "."; cout << "\nTotal left to pay: £" << dTotalLeftToPay; cout << "\nMonthly Payments: £" << dMonthlyPayments; cout << "\n"; RestartOptions(cOption, bExit, buyers); } //This displays receipt of the user's transaction. void ViewPurchases(vector<Finance>&buyers, char &cOption, bool &bExit) { for (int iCount = 0; iCount != buyers.size(); iCount++) { cout << "\nPurchase " << iCount + 1 << " by " << buyers[iCount].getUserName() << ". "; cout << "\nYou have chosen " << buyers[iCount].getChosenCar() << "."; cout << "\nYour finance plan timescale is " << buyers[iCount].getFinancePlan() << " " << YearOrYears(buyers[iCount].getFinancePlan()) << "."; cout << "\nYou've deposited £" << buyers[iCount].getDepositAmount() << "."; cout << "\nTotal left to pay: £" << buyers[iCount].getTotalLeftToPay() << "."; cout << "\nMonthly Payments: £" << buyers[iCount].getMonthlyAmount() << "."; cout << "\n"; } RestartOptions(cOption, bExit,buyers); } //This asks the user whether they're happy with the options of they've chosen. 
void AcceptDeclineOptions(string &sUserName, int &iFinanceLength, double &dDepositAmount, bool &bExit, string &sNameOfChosenCar, double &dCostOfChosenCar, vector<Finance>&buyers) { char cOption = 0; do { cout << "\nConfirm finance plan (y/n): "; cin >> cOption; cOption = toupper(cOption); if (cOption == 'Y') { Receipt(sUserName, iFinanceLength, dDepositAmount, cOption, bExit, sNameOfChosenCar, dCostOfChosenCar, buyers); } else if (cOption == 'N') { RestartOptions(cOption, bExit, buyers); } else { cout << "\nSorry, that's not a valid command."; } }while(cOption != 'Y' && cOption != 'N'); } int main() { bool bExit = false; int iFinanceLength = 0; double dDepositAmount = 0; string sNameOfChosenCar = ""; double dCostOfChosenCar = 0; vector<cCar>car_list; CarDatabase(car_list); vector<cCar>car_purchases; vector<Finance>buyers; cout << "Welcome!"; string sUserName = ""; cout << "\nEnter your name: "; cin >> sUserName; do{ display_menu(car_list); selectedCar(car_list, sNameOfChosenCar, dCostOfChosenCar); FinanceLength(iFinanceLength); DepositMoney(dDepositAmount); AcceptDeclineOptions(sUserName, iFinanceLength,dDepositAmount,bExit, sNameOfChosenCar, dCostOfChosenCar, buyers); }while(bExit == false); } Answer: A big issue with your code is consistency and there are a lot of duplicated points of pain in your code. I've not repeated them in every case. Just a nitpick, but sorting your headers makes them easier to read. #include <iostream> #include <iomanip> #include <vector> #include <cctype> Avoid declaring using namespace std;, especially in a header file. using namespace std; Don't prefix your types. class cCar { Declare access modifiers in public, protected, private order. private: Systems Hungarian notation is out-dated by many decades and isn't useful anymore with modern inventions like intellisense. 
Avoid starting a variable declaration with an underscore (and never containing two back-to-back), the rules are obscure and you may clash with a reserved-for-implementation declaration. string _sName; From a rounding error perspective, never use floating-point datatypes when dealing with money. Prefer long long and then divide by the appropriate conversion from smallest unit to whole unit amounts at the end. i.e. if units are in dollars and cents, you'd store the values in cents/pennies and divide by 100 at the end; if the units are in pounds and pence (after 1971), you'd store the values in pence and divide by 100 at the end as well. double _dPrice; public: Use constructor initializer lists instead of copying data twice in the body. The argument names aren't helpful. Consider naming them more appropriately such as name and price. cCar(string s, double d) { _sName = s; _dPrice = d; } string getName() { return _sName; } double getPrice() { return _dPrice; } }; You append the argument vector AND return it. Why? vector<cCar>CarDatabase(vector<cCar>& car_list) { You aren't declaring cCar's constructor explicit. Because of this, consider using an initializer list instead of explicitly calling the constructor. i.e. car_list.push_back({"Blue Nissan Skyline", 1000});. car_list.push_back(cCar("Blue Nissan Skyline", 1000)); car_list.push_back(cCar("Red Mini", 3000)); car_list.push_back(cCar("Black Land Rover", 4000)); car_list.push_back(cCar("Beatle", 9000)); car_list.push_back(cCar("Ferrari", 300000)); return car_list; } Be consistent. You prefixed cCar but nothing else. Prefer removing the prefix of cCar. Put unrelated classes in their own header and implementation files. See previous about avoiding floating-point types when dealing with money. class Finance { private: string _sUserName; double _dCostOfCar; string _sChosenCar; int _iFinancePlan; double _dDepositedAmount; double _dMonthlyPayments; double _dTotalLeftToPay; public: Consistency. 
You have well-defined argument names here but not in cCar. Use constructor initializer list instead of copying in the body. Systems Hungarian notation is out-dated. Finance(string sName, double dCostOfCar, string sChosenCar, int iFinancePlan, double dDepositedAmount, double dDMonthlyPayments, double dTotalLeftToPay) { _sUserName = sName; _dCostOfCar = dCostOfCar; _sChosenCar = sChosenCar; _iFinancePlan = iFinancePlan; _dDepositedAmount = dDepositedAmount; _dMonthlyPayments = dDMonthlyPayments; _dTotalLeftToPay = dTotalLeftToPay; } //... }; Useless comments. A professional programmer is going to know what this is. Consistency as well. Why didn't you prototype every function? //START OF PROTOTYPE void ViewPurchases(vector<Finance>& buyers, char& cOption, bool& bExit); //END OF PROTOTYPE Comments that simply repeat information given by the function's name are less than useful and should be removed. //1. This displays the car menu items. void display_menu(vector<cCar>& car_list) { //... Again, consistency. This function and its arguments use camelCase and C-style lower-case with underscores. Other functions use just C-style. Pick one. //This procedure proccesses the user's selection and all information regarding price and name of car are then transferred to transaction variables. void selectedCar(vector<cCar>& car_list, string& sNameOfChosenCar, double& dCostOfChosenCar) { //... The following comments are unhelpful as they add no more information than what the function name already provides. //This procedure gets the user's deposit. void DepositMoney(double& dDepositAmount) { //... //This function calculates the amount of money the user has to pay after deposit, added tax and charge percentage of 10% double TotalLeftToPay(double iFinanceLength, double dDepositAmount, double dCostOfChosenCar) { //... //This calculates monthly payments. double MonthlyPayments(double dTotalLeftToPay, int iFinanceLength) { //...
//This asks the user whether they'd like to restart the application. void RestartOptions(char& cOption, bool& bExit, vector<Finance>& buyers) { //... //This string function returns either year or years (plural) string YearOrYears(int iFinanceLength) { return (iFinanceLength > 1) ? "years" : "year"; } //... //This displays receipt of the user's transaction. void Receipt(const string& sUserName, const int& iFinanceLength, const double& dDepositAmount, char cOption, bool& bExit, const string& sNameOfChosenCar, const double& dCostOfChosenCar, vector<Finance>& buyers) { //... //This displays receipt of the user's transaction. void ViewPurchases(vector<Finance>& buyers, char& cOption, bool& bExit) { //... //This asks the user whether they're happy with the options of they've chosen. void AcceptDeclineOptions(string& sUserName, int& iFinanceLength, double& dDepositAmount, bool& bExit, string& sNameOfChosenCar, double& dCostOfChosenCar, vector<Finance>& buyers) { //... Personally, main should be the first function after the prototype list and every other function should appear after it in logical order as they are first called. int main() { Don't declare all your variables at the top of the function. Declare them closest to first use as possible. Systems Hungarian notation is outdated. bool bExit = false; int iFinanceLength = 0; double dDepositAmount = 0; string sNameOfChosenCar = ""; double dCostOfChosenCar = 0; Modern compilers are smart. It's better to return the car_list than fill it via a pass-by-reference. vector<cCar>car_list; CarDatabase(car_list); car_purchases isn't used at all. Remove it. vector<cCar>car_purchases; buyers is only ever used internally. Consider moving it closer to its intended scope or, for such a small program, making it a static global. vector<Finance>buyers; //...
{ "domain": "codereview.stackexchange", "id": 36734, "tags": "c++" }
If object size increases, shouldn't magnification decrease by formula
Question: According to the magnification formula, magnification is the image size divided by the object size. According to this, if the object comes nearer and its size increases, shouldn't magnification decrease? Answer: Magnification is an inherent property of the lens, independent of the height and distance of the original object. Let's say we are moving a toy car towards the lens: the distance between the object and the lens decreases, but the distance between the image and the lens decreases too. The formula is $M=\frac{d_i}{d_o}$. On the other hand, if your object is larger, then the image becomes proportionally larger (taller). $M=\frac{H_i}{H_o}$ is the formula for magnification. Hence, neither of these two factors changes the magnification on its own: the magnification of a lens is constant.
{ "domain": "physics.stackexchange", "id": 50615, "tags": "optics" }
Does $\mathsf{EXP}=\mathsf{NEXP}$ imply $\mathsf{E}=\mathsf{NE}$?
Question: Does $\mathsf{EXP}=\mathsf{NEXP}$ imply $\mathsf{E}=\mathsf{NE}$? Answer: This is open, as far as I know. It could be provable (because its hypothesis may be false), or it may just be difficult to show that any $2^{n^k}$-time algorithm for Succinct3SAT can be converted into a $2^{O(n)}$-time algorithm for Succinct3SAT. In general, theorems of this kind are called "downward collapses", which say that if two "large" classes are equal then two "smaller" classes are equal. These theorems are rare. Usually you can either prove an "upward collapse" (small classes equal implies larger classes equal, like $P = NP$ implies $NEXP = EXP$) or its contrapositive, a "downward separation". Something along the lines of what you want is the theorem by Hartmanis, Immerman and Sewelson (http://dl.acm.org/citation.cfm?id=808769) that $NE = E$ $\iff$ every sparse set in $NP$ is contained in $P$. This gives a "downward collapse", but only for the sparse sets (those sets that contain only $poly(n)$ strings of length $n$).
{ "domain": "cstheory.stackexchange", "id": 1450, "tags": "cc.complexity-theory, conditional-results, nexp, structural-complexity" }
Projecting a vector to another to detect the sparse values of such vector
Question: Assume we have a sparse vector of length $N$, such as $X = [0,1,0,-1,0,1,1,0]$, which has some non-zero values. The vector $x = iFFT(X)$ is convolved with another vector $h$, resulting in $y = h*x$. Suppose that $h$ and $y$ are known; is it possible to recover the sparse vector $X$ using methods of sparse vector estimation, such as compressive sensing or any other method? NB: I have suggested taking the $FFT$ of $y$, resulting in $Y = FFT(y) = H \circ X$, where $\circ$ is the element-wise multiplication between $H$ and $X$. Then, based on $Y$, I think it's easier to recover the sparse vector $X$. But the issue is that when using element-wise division, I can't get the position of each non-zero element in $X$. On the other hand, I don't know how to implement compressive sensing to recover $X$. Thank you in advance. Answer: When you have the described convolution measurement, you can indeed recover your $X$ using sparse recovery methods. Some examples are, as you suggest, OMP, MP, or for example subspace pursuit (SP), compressive sampling matching pursuit (CoSaMP), iterative hard thresholding (IHT), a plethora of other variants of greedy recovery methods, as well as basis pursuit (BP), basis pursuit denoising (BPDN), and approximate message passing (AMP). The key to this is the measurement matrix. Have a look at the random demodulator architecture to see how they construct their measurement matrix ($\mathbf M$) [1]. This is very similar to what you want. Your IFFT is modelled by their $\mathbf F$; in their case the transform is just the DFT instead of the IDFT, but that does not change the principle. Your measurement matrix (let us call it $\mathbf H$) is a little bit different from [1]'s $\mathbf M$. Assume that we can express your convolution as discrete convolution. I would model your measurement matrix as: $$\mathbf H = \begin{bmatrix} \mathbf h & 0 & & \ldots && 0\\ 0 & \mathbf h & 0 & \ldots && 0\\ \vdots\\ 0 & & \ldots && 0 & \mathbf h \end{bmatrix}$$ This is a convolution matrix. 
The non-zero part $\mathbf h$ of each row of $\mathbf H$ is the vector you convolve $\mathbf x$ with. The non-zero parts of $\mathbf H$ generally overlap across the columns. See for example MATLAB's convmtx for a practical example of the construction of $\mathbf H$. You may want to sub-sample the rows of $\mathbf H$ (i.e. only keep every $m$th row), corresponding to a lower sample rate for your sequence $y$ than for your original sequence $x$.
{ "domain": "dsp.stackexchange", "id": 8452, "tags": "computer-vision, signal-detection, compressive-sensing, detection" }
Where does gravitational waves' energy go?
Question: Following the measurement of gravitational waves, many sources described them and explained they carry energy away. What I don't get is how this energy will get transferred back to anything else. If the fabric of space-time itself is vibrating, it would seem to be impossible for any physical object to gain this energy. What am I missing? How would one hypothetically get energy out of gravitational waves? If impossible, does the universe end up with nothing but GW? Answer: To a first approximation gravitational waves are never dissipated. They just spread out into the universe gradually getting fainter. Gravitational waves are exceedingly difficult to dissipate for the same reason that they are exceedingly hard to generate in the first place. They couple very weakly to matter. The gravitational wave detected by LIGO stretched and then compressed the Earth by a factor of about $10^{-21}$. The Earth is squidgy, meaning that if you stretch it then let it relax you don't get as much energy out as you put in - the rest goes to heating up the Earth. So in principle some of the energy in the gravitational wave was dissipated as it deformed the Earth. However in practice the fraction of its energy that the wave lost is utterly insignificant. Possibly the bits of the wave that hit Jupiter and the Sun lost a bit more energy, but remember that most of the wave passed through the Solar System without hitting any matter at all. However gravitational waves do get fainter with time for two reasons. Firstly the gravitational wave from a black hole merger propagates roughly in a plane so its intensity falls off as a factor of somewhere between $\frac{1}{r}$ and $\frac{1}{r^2}$, where $r$ is distance away from the source. Secondly the energy in the wave is diluted as the universe expands. 
Actually the expansion not only dilutes the energy but it also red shifts it, so if $a$ is the scale factor by which the universe has expanded the energy of the wave falls as $\frac{1}{a^4}$. You ask: If impossible, does the universe end up with nothing but GW? but this isn't going to happen simply because it's so hard to produce gravitational waves. The matter currently lying around in the universe is mostly going to remain lying around in the universe for the foreseeable future. Actually gravitational waves aren't unique in not being dissipated. Light interacts very strongly with matter, but most light emitted by objects in the universe isn't going to hit any matter simply because the universe is mostly empty space. For example most of the photons emitted in the cosmic microwave background (CMB) haven't hit anything in the 13.8 billion years since, which is of course why we can still see the CMB.
{ "domain": "physics.stackexchange", "id": 28548, "tags": "energy, gravitational-waves" }