Can solutions of GR have non-zero genus?
Question: Imagine there is a "cavity" in one's locally Lorentzian manifold (the manifold has non-zero genus). Have these kinds of solutions been considered in general relativity? Is there any experiment one can do to prove we live in a non-zero genus world? Answer: Any manifold, regardless of its properties, is a solution to the Einstein field equations for some stress-energy tensor $T_{\mu\nu}$, provided it admits a metric, $g_{\mu\nu}$. In theory, you are free to pick your favourite algebraic manifold of genus $g$, compute the metric and find its corresponding stress-energy tensor. However, if we restrict the stress-energy, say by requiring that $T_{\mu\nu}$ be infinitely differentiable, then we also restrict the space of allowable metrics. Likewise, if we demand the manifold be globally hyperbolic, which is a frequent requirement due to the implication on causality, then we also restrict possible solutions. Based on a cursory search, I cannot find any classification of genus $g$ spacetime solutions yet. However, in the paper here it is written that, In the current paper we study that equation on closed 2-dimensional surfaces that have genus $>0$. We derive all the solutions assuming the embeddability in 4-dimensional spacetime that satisfies the vacuum Einstein equations... It seems we can have sub-manifolds in spacetime which do have non-zero genus and are sensible solutions to the vacuum Einstein field equations, meaning at least $T_{\mu\nu}$ is physically sensible, being zero.
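The "any manifold is a solution" point can be made explicit by reading the field equations backwards: choose your metric $g_{\mu\nu}$, compute its Einstein tensor, and define the stress-energy to be whatever makes the equations hold (a sketch in geometrized units $G = c = 1$; units are my assumption, not stated in the thread):

```latex
T_{\mu\nu} \;=\; \frac{1}{8\pi}\, G_{\mu\nu}
         \;=\; \frac{1}{8\pi}\left( R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} \right)
```

The physical question is then whether this $T_{\mu\nu}$ is reasonable, e.g. whether it satisfies an energy condition, which is exactly the kind of restriction discussed above.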
{ "domain": "physics.stackexchange", "id": 53174, "tags": "general-relativity, differential-geometry, topology" }
Can I use a household vacuum cleaner with an AquaFilter as a construction/industrial one?
Question: I'm going to buy a Thomas vacuum cleaner with an Aqua filter and use it to sand walls a bit with a grinder, and after finishing the decoration, use it as an ordinary cleaner. Could construction plaster and putty sanding harm it? Or is it OK to use a household vacuum for a small amount of grinding? Answer: I actually bought a household Thomas Multi Clean X10 Parquet vacuum with an Aqua filter and rented an industrial Hilti VC-40UM cleaner with a Hilti DGH-130 concrete grinder. There were about 2 full buckets of dust, and I cleaned the Hilti's filter about 10 times during the work on a single small 3x5 meter room. So I can say that a household cleaner won't survive that kind of job. In addition, the Aqua filter passes about 20% of dry dust through, which is then caught by the sponge and microfiber filters before the engine. I suppose 20% of the dust from 2 buckets would be enough to kill those filters.
{ "domain": "engineering.stackexchange", "id": 3342, "tags": "grinding" }
Is isotropy a fundamental/invariant feature of our universe, or is it merely a convenient, albeit arbitrary, feature of some reference frames?
Question: This is related to a previous post. Assuming that the Cosmological Principle is correct, does this imply that the universe possesses an empirically privileged reference frame? What I am trying to understand is related to the following: From what I understand of general relativity (GR) (which is NOT much), not only is it conceptually different from classical mechanics (CM), but how it is applied is also different. In CM, you solve for a particular system within the universe, whereas in GR, you must solve for an entire spacetime. This seems to create challenges when physicists want to describe individual systems that behave relativistically, requiring the specification of an entire, ad hoc, and hopefully computationally benign universe surrounding the real object of interest. In other words, it appears that theoretically, there is no way to talk JUST about a black hole, or a neutron star, etc. It is always embedded in a completely specified 4-D spacetime (yes, redundant adjective, but I am emphasizing that time is not the independent variable here). OK...I hope that was generally correct, because it pertains to my actual question. Since GR specifies an entire spacetime in an invariant way, is there a sense in which an entire spacetime is isotropic and homogeneous even though different reference frames within the spacetime may see otherwise? It's hard for me to describe without the proper theory, but I am thinking of an abstract sort of homogeneity/isotropy in the tensor equations, where there is no "directionality" or "hereness" in the equations (not in a coordinate sense). In other words, I'm thinking more along the lines of abstract algebra, less differential geometry, if that makes sense to you theoretical types who actually can do this stuff (I'm merely a consulting engineer/applied math troglodyte).
To state this a bit more concretely, I offer this related question: Doesn't the fact that we can describe our universe using comoving coordinates imply that the universe is fundamentally isotropic/homogeneous in the algebraic sense above? I say this because I can imagine it's possible to have spacetime specifications in GR where you cannot make corrections from your reference frame to get to an isotropic frame. In which case, every observer would agree that the universe is not isotropic. Therefore, to my amateur mind, it seems like a very special thing that we can make such simple "inertial" corrections, suggesting that there is something fundamentally correct about the isotropic frame, such that the most accurate way to look at our world is from a comoving frame, since it reflects the underlying symmetry better than a frame with a peculiar velocity. Hence, being at rest relative to this comoving frame seems to show you what the universe really looks like and defines what motion counts as "relativistic" vs. merely being a fixed point in the universal expansion. The only reason I think this relativistic vs. Hubble-flow part is relevant is because Brian Greene, in "The Fabric of the Cosmos", said that all comoving observers would have synchronized clocks, implying that even though they are in motion relative to each other they do not experience time dilation, since they are moving with spacetime, not through spacetime. Sorry for the long post, but I am trying to convey in simple words what may be more succinctly expressed using theory. If my reasoning above appears correct, then why do we act as if all frames are epistemologically/experientially valid? The laws may work equally well in all frames, but it seems that frames with peculiar motion see a distorted "now" and "where" due to their motion in spacetime, somewhat like the distortion of a sound due to motion through a transmission medium. Thanks again for any observations or thoughts and corrections to my thinking.
If I'm not obviously wrong, then I don't know if there is a strict answer to resolving this...I just want to know what more informed minds think of this issue/confusion. Answer: I think you're close to hitting on a key point about the currently accepted mainstream cosmological model -- it is in fact, quite special. In the above, I think you seem to be confusing a few things about the general case of a generic spacetime, and a few details of our current best model. The key feature of our current best model, the Robertson-Walker cosmological model with a $\Lambda$CDM matter distribution is that it DOES, in fact, have a special time coordinate. This arises from the fact that we, in the coordinate system naturally picked out by our galaxy, observe that (given that we zoom out to a sufficiently large distance scale) the universe looks the same in all directions, and that it does not appear that any one point is "special". We can leverage this fact to construct a cosmological model within the framework of general relativity. The end result that you get is that while general relativity itself does not give us a global inertial frame, this model does give us a special coordinate system -- the one that is at rest relative to the galaxies as they expand, and where time is measured "since the big bang" at any given point. You can, of course, still have any number of other coordinate systems, but if you are taking measurements in these coordinate systems, you will always be able to observe that you are moving relative to the galactic fluid, and you will always be able to come up with a number saying that "the dust at this place has existed X years since the big bang", so you can always convert back to this special system. But this "niceness" is not a feature of general relativity -- it is merely a feature of our particular cosmological model. 
And we can only be confident that this model conforms to our universe due to the fact that our direct cosmological observations are consistent with isotropy and homogeneity, and it is, in fact, not valid in the "zoomed in" view, where the geometry is distorted by black holes, and galaxies and everything else that there is.
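For concreteness, the special time coordinate described above is visible in the standard Robertson-Walker line element (with $a(t)$ the scale factor and $k$ the curvature constant): constant-$t$ slices are the homogeneous, isotropic ones, and comoving observers sit at fixed $(r,\theta,\varphi)$ with proper time equal to $t$, which is why their clocks stay synchronized:

```latex
ds^2 \;=\; -c^2\,dt^2 \;+\; a(t)^2\left[\frac{dr^2}{1-kr^2}
        \;+\; r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)\right]
```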
{ "domain": "physics.stackexchange", "id": 10414, "tags": "general-relativity, cosmology, time, reference-frames" }
Is it possible to make a CS:GO Machine Learning AI?
Question: I am not an expert on machine learning, neural networks or NEAT. In fact, I probably have no clue what I'm talking about. My question is whether you can make a learning AI that learns to play complex multiplayer games and possibly outperform humans. If it is possible, could you also recommend a language or languages to make this AI in? (I know I'll probably have to take a VACation for botting, but it's something I feel like we should try.) Answer: The answer is yes: see the example of a neural network outplaying human players in DOTA. I haven't been able to find much regarding what kind of neural network it is, but here is what is on the OpenAI website. If you're a beginner you can learn some basic architecture and design principles of neural networks using Python and Keras (a neural network library).
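To illustrate the "basic architecture" part: a neural network is just layered matrix arithmetic with nonlinearities. Here is a minimal, library-free sketch of a 2-input network with one hidden layer whose hand-picked weights compute XOR (the weights are illustrative, not learned; in practice Keras would learn them from data):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    # Hidden layer: h1 acts like OR(x1, x2), h2 acts like AND(x1, x2)
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)
    # Output layer: XOR = OR and not AND
    return sigmoid(20 * h1 - 20 * h2 - 10)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(forward(a, b)))
```

A game-playing agent is the same idea scaled up enormously (convolutional layers over screen pixels, reinforcement learning instead of fixed weights), which is what the DOTA example does.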
{ "domain": "datascience.stackexchange", "id": 4417, "tags": "machine-learning, neural-network, training, ai" }
Underground temperature record
Question: Max/min thermometer readings from a weather station with a Stevenson screen are subject to noise from many sources and represent the max/min at that one point in space. It's well known that underground temperatures are stable at, I believe, the average year-round temperature. They would then make a much better way of measuring average temperature over years. Do any underground instrumental temperature records exist? Answer: There are many underground temperature measurements. Monitoring of permafrost is one instance where underground temperature has been monitored for many years. In Europe there are many boreholes reaching about 100 m underground that measure temperature. A project called Permafrost and Climate in Europe (PACE) established a network of boreholes that monitor ground temperatures from Svalbard to Spain (Harris et al. 2009). Results from one of these boreholes in Tarfala, Sweden (67.9098° N, 18.6101° E), have shown warming trends in the upper parts of the borehole since its establishment in the year 2000 (Bolin Centre, no year). The upper pane in the figure shows the air temperature (in degrees Celsius) at the PACE borehole at 1650 m a.s.l. (red) as well as the monthly average temperature at the Tarfala Research Station (1130 m a.s.l.; blue) in the northern Swedish mountains. The lower three graphs show the temperature (again in degrees Celsius) at depths of 25, 30 and 40 m below the ground surface. As can be seen, warming occurs at depth. In this particular borehole the seasonal effects of summer and winter reach about 20 m below ground level, so the presented curves show the longer-term trends in ground temperature. There are some spurious deviations, likely caused by water leaking into the borehole and refreezing, causing momentary increases in temperature due to the release of heat as the phase change from liquid to solid occurs.
The extrapolated depth of the transition from negative (permafrost) to positive temperature is about 300 m below the ground at this site. References Bolin Centre for Climate Research: Tarfala Data, Permafrost. (Data base) https://bolin.su.se/data/tarfala/permafrost.php Harris et al 2009. Permafrost and climate in Europe: Monitoring and modelling thermal, geomorphological and geotechnical responses. Earth-Science Reviews. https://www.sciencedirect.com/science/article/pii/S0012825208001311
{ "domain": "earthscience.stackexchange", "id": 2511, "tags": "meteorology, paleoclimatology" }
Simple uint128_t implementation
Question: I made a simple uint128_t implementation for a project that I'm working on. The reason for not using for example boost::uint128_t is that it is not fully constexpr like I would like it to be. For this reason, I only implemented the operators that I'm going to use. I'm for example not interested in operator*, as I will never multiply 2 such numbers. Same for operator/ and others. Any comments will be welcome! :) #ifndef UINT128_HPP #define UINT128_HPP #include <type_traits> #include <cstdint> #include <array> #include <ostream> namespace types { // special class to represent a 128 bit unsigned number // some shortcuts were taken to not make it too unnecessary complex struct uint128_t final { constexpr uint128_t() = default; template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t(T&& num) noexcept : array{ static_cast<std::uint64_t>(num), 0ull } {} template<typename T> friend constexpr uint128_t operator&(uint128_t lhs, T&& rhs) noexcept { return lhs &= rhs; } constexpr uint128_t& operator&=(const uint128_t& rhs) noexcept { array[0] &= rhs.array[0]; array[1] &= rhs.array[1]; return *this; } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator&=(T&& rhs) noexcept { array[0] &= rhs; array[1] = 0; return *this; } template<typename T> friend constexpr uint128_t operator|(uint128_t lhs, T&& rhs) noexcept { return lhs |= rhs; } constexpr uint128_t& operator|=(const uint128_t& rhs) noexcept { array[0] |= rhs.array[0]; array[1] |= rhs.array[1]; return *this; } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator|=(T&& rhs) noexcept { array[0] |= rhs; return *this; } template<typename T> friend constexpr uint128_t operator^(uint128_t lhs, T&& rhs) noexcept { return lhs ^= rhs; } constexpr uint128_t& operator^=(const uint128_t& rhs) noexcept { array[0] ^= rhs.array[0]; array[1] ^= 
rhs.array[1]; return *this; } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator^=(T&& rhs) noexcept { array[0] ^= rhs; return *this; } friend constexpr uint128_t operator~(uint128_t value) noexcept { value.array = { ~value.array[0], ~value.array[1] }; return value; } template<typename T> friend constexpr uint128_t operator<<(uint128_t lhs, T&& rhs) noexcept { return lhs <<= rhs; } constexpr uint128_t& operator<<=(const uint128_t& rhs) noexcept { if (rhs.array[1] > 0) return array = {}, *this; return *this <<= rhs.array[0]; } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator<<=(T&& rhs) noexcept { if (rhs == 0) return *this; const auto backup = array[1]; array[1] = array[0]; if (rhs < 64) { array[0] <<= rhs; array[1] >>= 64 - rhs; array[1] |= backup << rhs; } else { array[0] = 0; array[1] <<= rhs - 64; } return *this; } template<typename T> friend constexpr uint128_t operator>>(uint128_t lhs, T&& rhs) noexcept { return lhs >>= rhs; } constexpr uint128_t& operator>>=(const uint128_t& rhs) noexcept { if (rhs.array[1] > 0) return array = {}, *this; return *this >>= rhs.array[0]; } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator>>=(T&& rhs) noexcept { if (rhs == 0) return *this; const auto backup = array[0]; array[0] = array[1]; if (rhs < 64) { array[1] >>= rhs; array[0] <<= 64 - rhs; array[0] |= backup >> rhs; } else { array[1] = 0; array[0] >>= rhs - 64; } return *this; } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr bool operator==(T&& rhs) const noexcept { return array[0] == static_cast<std::uint64_t>(rhs) && array[1] == 0; } constexpr bool operator==(const uint128_t& rhs) const noexcept { return array[0] == rhs.array[0] && array[1] == rhs.array[1]; } template<typename T> constexpr bool operator!=(T&& rhs) const 
noexcept { return !(*this == rhs); } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr bool operator<(T&& rhs) const noexcept { return array[0] < static_cast<std::uint64_t>(rhs) && array[1] == 0; } constexpr bool operator<(const uint128_t& rhs) const noexcept { if (array[1] == rhs.array[1]) return array[0] < rhs.array[0]; return array[1] < rhs.array[1]; } template<typename T> constexpr bool operator>(T&& rhs) const noexcept { return *this >= rhs && *this != rhs; } template<typename T> constexpr bool operator<=(T&& rhs) const noexcept { return !(*this > rhs); } template<typename T> constexpr bool operator>=(T&& rhs) const noexcept { return !(*this < rhs); } template<typename T> friend constexpr uint128_t operator+(uint128_t lhs, T&& rhs) noexcept { return lhs += rhs; } constexpr uint128_t& operator+=(const uint128_t& rhs) noexcept { *this += rhs.array[0]; array[1] += rhs.array[1]; return *this; } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator+=(T&& rhs) noexcept { // overflow guard if (array[0] + rhs < array[0]) ++array[1]; array[0] += rhs; return *this; } template<typename T> friend constexpr uint128_t operator-(uint128_t lhs, T&& rhs) noexcept { return lhs -= rhs; } constexpr uint128_t& operator-=(const uint128_t& rhs) noexcept { *this -= rhs.array[0]; array[1] -= rhs.array[1]; return *this; } template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator-=(T&& rhs) noexcept { // overflow guard if (array[0] - rhs > array[0]) --array[1]; array[0] -= rhs; return *this; } friend std::ostream& operator<<(std::ostream& stream, const uint128_t& num) noexcept { return stream << '[' << num.array[0] << " + " << "2**64 * " << num.array[1] << ']'; } private: std::array<uint64_t, 2> array{}; }; } #endif Answer: Avoid Simple Header Guard Collisions #ifndef UINT128_HPP #define UINT128_HPP Don't just name the 
guard after the filename. Append differentiators (file path, GUID, date, etc) to minimize the chance of collision. An example <project>_<path_part1>_..._<path_partN>_<file>_<extension>_<guid> // include/pet/project/file.hpp #ifndef PET_PROJECT_FILE_HPP_68B24FD6E49248409028D9814FE4CD515 Negative Values template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t(T&& num) noexcept : array{ static_cast<std::uint64_t>(num), 0ull } {} The integral conversion rules for values with an unsigned destination are defined. From the C++ standard (n4659 §7.8.2): If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo $2^n$ where n is the number of bits used to represent the unsigned type). [Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note] Say we want the maximum value of uint128_t. For the built-in integral types, we can construct them with the value -1. Constructing a uint128_t with a value of -1 only sets the low bits. You should check to see if num is negative and set the high bits appropriately. template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t(T&& num) noexcept : array{ static_cast<std::uint64_t>(num) , static_cast<std::uint64_t>((num >= 0) ? 0 : -1) } {} Your functions that take a templated integral argument need to be corrected. Undefined Behavior template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator<<=(T&& rhs) noexcept { if (rhs == 0) return *this; const auto backup = array[1]; array[1] = array[0]; if (rhs < 64) { array[0] <<= rhs; array[1] >>= 64 - rhs; array[1] |= backup << rhs; } else { array[0] = 0; array[1] <<= rhs - 64; } return *this; } In your final else block, you have undefined behavior if rhs >= 128 (n4659 §8.8.1).
The operands shall be of integral or unscoped enumeration type and integral promotions are performed. The type of the result is that of the promoted left operand. The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand. template<typename T, typename = std::enable_if_t<std::is_integral_v<std::decay_t<T>>>> constexpr uint128_t& operator<<=(T&& rhs) noexcept { if (rhs < 64) { if (rhs != 0) { array[1] = (array[1] << rhs) | (array[0] >> (64 - rhs)); array[0] = array[0] << rhs; } } else if (rhs < 128) { array[0] = 0; array[1] = array[1] << (rhs - 64); } else { array = {}; } return *this; } Don't Abuse the Comma Operator constexpr uint128_t& operator>>=(const uint128_t& rhs) noexcept { if (rhs.array[1] > 0) return array = {}, *this; return *this >>= rhs.array[0]; } Simplifying The Comparators constexpr bool operator==(const uint128_t& rhs) const noexcept { return array[0] == rhs.array[0] && array[1] == rhs.array[1]; } std::tuple (requires <tuple>) provides simple lexicographical comparisons that works at compile-time (unlike std::arrays comparators). We can create std::tuples on the fly with std::tie(). constexpr auto tie() const noexcept { return std::tie(array[1], array[0]); } constexpr bool operator==(const uint128_t& rhs) const noexcept { return tie() == rhs.tie(); } constexpr bool operator<(const uint128_t& rhs) const noexcept { return tie() < rhs.tie(); } // ... You'll notice that many of your functions have the same boilerplate. You can use the pre-processor to generate all of the comparators. #define COMP128(op) \ constexpr bool operator op(const uint128_t& rhs) const noexcept { \ return tie() op rhs.tie(); \ } COMP128(==) COMP128(!=) COMP128(>=) COMP128(<=) COMP128(>) COMP128(<) #undef COMP128 Similarly, the preprocessor could also be used to generate the logic/logic-assignment operators. std::array<uint64_t, 2> array{}; The use of std::array<> feels like unnecessary obfuscation. 
I'd recommend using two variables with descriptive names (like lo and hi).
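The corrected shift logic in the review is easy to sanity-check outside C++. A rough Python model of the two 64-bit limbs, compared against Python's arbitrary-precision integers (the names `shl128` and `check` are mine, for illustration):

```python
MASK64 = (1 << 64) - 1

def shl128(lo, hi, k):
    """Left-shift a 128-bit value stored as (lo, hi) 64-bit limbs by k."""
    if k == 0:
        return lo, hi
    if k < 64:
        hi = ((hi << k) | (lo >> (64 - k))) & MASK64
        lo = (lo << k) & MASK64
    elif k < 128:
        hi = (lo << (k - 64)) & MASK64
        lo = 0
    else:          # shifting by >= width is UB in C++; define it as 0 here
        lo = hi = 0
    return lo, hi

def check(value, k):
    lo, hi = shl128(value & MASK64, (value >> 64) & MASK64, k)
    expected = (value << k) & ((1 << 128) - 1) if k < 128 else 0
    assert (hi << 64) | lo == expected, (value, k)

for v in (0, 1, 0xDEADBEEF, (1 << 128) - 1, 1 << 63):
    for k in (0, 1, 63, 64, 65, 127, 128, 200):
        check(v, k)
```

The same oracle-based testing works for the right shift and the add/subtract carry logic, and is a cheap way to cover the boundary cases (0, 63, 64, 127, >=128) that the original code mishandled.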
{ "domain": "codereview.stackexchange", "id": 25878, "tags": "c++, integer, c++17" }
Convolution between two unequal length vectors
Question: Considering an example, where the model is \begin{align} x[n] = \mathbf{h}^\mathsf{T}\mathbf{u}[n] + w[n]. \label{Eq1} \end{align} $\mathbf{h} = [h_0,h_1,\ldots,h_{M-1}]^\mathsf{T}$ of length $M$ represents the coefficients of the LTI SISO channel and $\mathbf{u}[n] = [u[n], u[n-1], u[n-2],\ldots,u[n-M+1]]^\mathsf{T}$ is the input to the channel. Let $\mathbf{v} = [v_0,v_1,\ldots,v_{M-1}]^\mathsf{T}$ of length $M$ be the coefficients of the equalizer. If the length of the equalizer is different, say the channel length is $M$ and the equalizer length is $Q=2M-1$, then how would the convolution happen, since there will be a mismatch in the number of elements of $\mathbf{h}$ and $\mathbf{v}$? I can do conv(h,w) in Matlab, where h is the coefficients of the channel and w denotes the coefficients of the equalizer. Assume known values of h = [1,0.2,0.6] and the estimated coefficients of the equalizer w = [0.8,0.1,0.2,0.3,0.5]. How would the convolution work? Can somebody please show the first few steps? The first few samples will be zero based on my calculation, but I am not sure. Answer: The discrete convolution sum operation is not restricted to equal length vectors. You can, and most of the time you do, convolve two different signals of arbitrary lengths. Your confusion is probably with something else. The equalizer length can be different from the channel model length. That should not pose a problem, but it would of course affect the equalization performance.
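To address the "first few steps" part directly: conv(h, w) computes $y[n] = \sum_k h[k]\, w[n-k]$ and the output has length $M + Q - 1$. A quick pure-Python check with the values from the question (mirroring Matlab's conv) shows the first sample is $h_0 w_0 = 0.8$, not zero:

```python
def conv(h, w):
    """Full discrete convolution, same convention as Matlab's conv()."""
    y = [0.0] * (len(h) + len(w) - 1)
    for i, hi in enumerate(h):
        for j, wj in enumerate(w):
            y[i + j] += hi * wj
    return y

h = [1, 0.2, 0.6]                 # channel, length M = 3
w = [0.8, 0.1, 0.2, 0.3, 0.5]     # equalizer, length Q = 5
y = conv(h, w)                    # length M + Q - 1 = 7
print([round(v, 2) for v in y])   # [0.8, 0.26, 0.7, 0.4, 0.68, 0.28, 0.3]
```

Worked out by hand, the second sample is $y[1] = h_0 w_1 + h_1 w_0 = 0.1 + 0.16 = 0.26$, and so on; the shorter vector simply "runs out" of taps, which is equivalent to zero-padding it.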
{ "domain": "dsp.stackexchange", "id": 5768, "tags": "convolution, equalizer" }
Is hot air or hot water more effective at melting a frozen pipe?
Question: This question is related to specific heat, and I thought I understood it, but I've managed to confuse myself. My initial reasoning was that since in both cases the pipe is the same, the amount of energy needed to raise the temperature will depend on the specific heat only, and since air has a lower specific heat, it will be more effective. After I started thinking about it more, I thought that the specific heat of the pipe is actually what is important in determining how much energy will be needed to raise its temperature. In this case, the amount of energy needed to unfreeze the pipe should remain constant, and since the specific heat of water is higher, it is more effective at passing on higher amounts of energy than air. Is either of these reasons correct, or am I really misunderstanding something? I appreciate any help or illumination you can offer! Thanks! Answer: Specific heat relates to heat capacity: how much temperature change you get for a specific mass and energy once the heat has finished 'moving'. But you also have to consider how fast the heat is moving. For melting a frozen pipe with a fluid you also need to consider the heat resistivity of the fluid, which is the inverse of thermal conductivity. Some fluids like air are good heat insulators - much more so than water. The resistivity will limit the rate at which energy can be transferred to the mass. Since water has a much lower resistivity than air ($\sim 1.7\ \mathrm{m\cdot K/W}$ vs $\sim 50\ \mathrm{m\cdot K/W}$) you can expect a heat transfer rate into the cold pipe roughly 30 times higher. This all assumes the air and water are at the same temperature.
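The rate comparison follows from Fourier's law: for a layer of thickness $L$ and resistivity $R$, the heat flux scales as $q = \Delta T/(R\,L)$, so at equal temperature difference and geometry the rates scale inversely with resistivity. A back-of-the-envelope check using the resistivity figures quoted above (approximate values; real ones vary with temperature):

```python
# Thermal resistivities (m·K/W), approximate values as quoted in the answer
r_water = 1.7
r_air = 50.0

# At equal dT and geometry, heat flux scales inversely with resistivity
ratio = r_air / r_water
print(f"water transfers heat ~{ratio:.0f}x faster than air")
```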
{ "domain": "chemistry.stackexchange", "id": 3444, "tags": "thermodynamics" }
How to prove that a given super-set of a non-regular language is also non-regular?
Question: Consider the non-regular language $L_1 = \{a^n b a^n : n \geq 0\}$ and its super-set language $L$, the language of strings with an equal number of trailing and leading a's, or in other words, $$L = \{a^n b w b a^n : n \ge 0, w \in \{a,b\}^*\} \cup \{a^n b a^n : n \ge 0\}.$$ How would we prove that $L$ is also non-regular? I have the idea that I should use the help of $L_2 = L - L_1$, but I'm stuck at the following step: $L_1 \cap L_2 = \emptyset$, thus $L_1 \cap L_2$ is a regular language. I was trying to prove it by contradiction. Suppose $L$ is regular, that is, $L_1 \cup L_2$ is regular. I then tried to prove that $L_1$ or $L_2$ is equal to a union, intersection, or complement operation over the regular sets $L_1 \cap L_2$ and $L_1 \cup L_2$, but got stuck at $L_1 = [(L_1 \cup L_2) \cap (L_2)^c] \cup (L_1 \cap L_2)$. With the Myhill-Nerode theorem (I might be incorrect, please correct me if I am wrong): Take $S = a^*(b+b(a+b)^*b)$. Pick $x$ and $y$ from $S$ such that $x \neq y$: $x = a^n b$, $y = a^m b$ with $n \neq m$. Take $z = a^n$. Then $xz \in L$ but $yz \notin L$. Thus, by Myhill-Nerode, one can conclude it to be non-regular. Answer: Since you already know that $L_1$ is nonregular, here is a simple proof. Suppose that $L$ is regular. Then since $a^*ba^*$ is regular, the language $L \cap a^*ba^*$ should be regular. But $L \cap a^*ba^* = L_1$, which is not regular. Hence $L$ is nonregular.
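The closure argument in the answer can be sanity-checked by brute force: enumerate all strings over {a, b} up to some length and verify that $L \cap a^*ba^* = L_1$ (membership tests written from the definitions above; this is evidence for the identity, not a proof):

```python
from itertools import product

def in_L1(s):
    # L1 = { a^n b a^n : n >= 0 }
    n = len(s) - len(s.lstrip('a'))
    return s == 'a' * n + 'b' + 'a' * n

def in_L(s):
    # L = { a^n b w b a^n } union { a^n b a^n }: equal leading/trailing
    # runs of a's, with the middle part starting and ending in 'b'
    if 'b' not in s:
        return False
    n = len(s) - len(s.lstrip('a'))
    m = len(s) - len(s.rstrip('a'))
    if n != m:
        return False
    mid = s[n:len(s) - m]
    return mid[0] == 'b' and mid[-1] == 'b'

def in_regular(s):
    # the regular language a* b a*
    return s.count('b') == 1

for length in range(1, 10):
    for chars in product('ab', repeat=length):
        s = ''.join(chars)
        assert (in_L(s) and in_regular(s)) == in_L1(s), s
print("L intersect a*ba* equals L1 on all strings up to length 9")
```

Once the identity is trusted, the proof is pure closure reasoning: regular languages are closed under intersection, so $L$ regular would force $L_1$ to be regular, a contradiction.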
{ "domain": "cs.stackexchange", "id": 5269, "tags": "formal-languages, regular-languages" }
How to get the position of a robot
Question: Can anybody tell me how to get the robot position (odometry x and y) from a tf message in a C++ file? Originally posted by Xittx on ROS Answers with karma: 9 on 2013-02-25 Post score: 0 Answer: REP 105 defines this. You are looking for the transform /odom -> /base_link. Originally posted by dornhege with karma: 31395 on 2013-02-25 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 13051, "tags": "ros" }
Is nuclear pasta or neutron star crust iron stable outside of neutron stars?
Question: When neutron stars collide, they crash into and kill each other in a kilonova explosion, which blasts a lot of their material into space. A huge amount of neutronium, neutron star stuff, is suddenly no longer in the high pressure of the star, and instantly expands explosively into normal matter. In other words, thou shalt not have isolated teaspoons of neutronium. But is the same true for the crust, made of a form of highly compressed iron called nuclear pasta, 10 billion times stronger than steel? Is the crust material of neutron stars stable outside of neutron stars? Answer: "made of a form of highly compressed iron called nuclear pasta" - this isn't the case. The crust of a neutron star consists of (starting at the surface and going down; picture from Watanabe & Maruyama 2012): 1. Iron-peak nuclei plus degenerate electrons. 2. Increasingly neutron-rich elements with higher atomic numbers (i.e. not iron) plus degenerate electrons. 3. Very, very neutron-rich heavy nuclei (mass numbers of several hundred) plus degenerate electrons and free neutrons. 4. Nuclear pasta, where the identity of individual nuclei disappears and the nucleons (mainly neutrons) form long chains, or sheets, surrounded by degenerate free neutrons and electrons. None of these compositions is stable at low pressures. Specifically, you need to have them surrounded by a high density gas of degenerate electrons to suppress the beta decay of neutrons and neutron-rich nuclei. Putting this stuff into a low density environment would be explosive (though somewhat less so), just as in the case of "neutronium" (neutron matter inside neutron stars contains protons and electrons too). This is not because of the beta decay, which is a slow process, but because the constituents have a huge amount of kinetic energy (that is what pressure is).
{ "domain": "astronomy.stackexchange", "id": 5045, "tags": "neutron-star" }
Localization and Mapping in a Small Crop Field
Question: I have a small crop field inside my house and I want to create a map of it. The goal is to create a map so that I can monitor the growth of the plants and do some post-processing to analyze the details of individual plants. Hardware: Since it is a small crop field, I have two poles on either end of the field with a wire connecting the two poles. The robot/assembly will be hooked to the wire, so the path of the robot is always fixed. I have a 3D lidar, a camera, an IMU and a GPS. I have interfaced all the sensors with my Nvidia TX2 controller using ROS. In short, I have raw data coming into the system from all sensors. I want some advice regarding how to proceed with this project. Should I start with localization? As a first step, should I fuse data from the GPS and IMU to get accurate positioning of the system? With the idea that, if I accurately localize my system in the field and then acquire images for every position, is it possible to join the images somehow? Do I need to implement SLAM here? Answer: If you intend to stitch images together, you don't strictly need to implement SLAM. As long as there is good overlap between images, you can use SIFT/SURF features to extract matching keypoints and stitch the images together. Take a look at this tutorial on panorama creation. However, given your setup, I believe SLAM would be the way to go! Since you already have ROS set up, I'd suggest going forward with the robot_localization package. You can use this package to fuse the GPS and IMU data to get an accurate location of your robot and then use that to transform your images and lidar point clouds to create a map. To create a map with the lidar point cloud you can use iterative closest point or a package like rtabmap/octomap. As a stretch, you can even fuse your lidar point cloud with images to colorize your map.
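As a toy illustration of the stitching idea (a 1-D stand-in for keypoint matching; the real pipeline would use SIFT/SURF features and homographies via something like OpenCV): estimate the offset between two overlapping scans by minimizing the mean squared difference over the overlap, then merge them. All names and data here are hypothetical:

```python
def best_offset(scan1, scan2):
    """Offset of scan2 within scan1 minimizing mean squared error of the overlap."""
    best, best_err = 0, float('inf')
    for off in range(len(scan1)):
        overlap = min(len(scan1) - off, len(scan2))
        err = sum((scan1[off + i] - scan2[i]) ** 2 for i in range(overlap))
        err /= overlap                       # normalize by overlap size
        if err < best_err:
            best, best_err = off, err
    return best

def stitch(scan1, scan2):
    off = best_offset(scan1, scan2)
    return scan1[:off] + scan2

row1 = [0, 1, 2, 3, 4, 5, 6, 7]      # first pass along the wire
row2 = [5, 6, 7, 8, 9, 10]           # second pass, overlapping the end
print(stitch(row1, row2))            # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

In 2-D the alignment has more degrees of freedom (translation, rotation, scale, perspective), which is why feature matching plus a robust homography fit replaces this brute-force search.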
{ "domain": "robotics.stackexchange", "id": 2318, "tags": "ros, localization, imu, mapping, gps" }
Confusion regarding angular momentum of a tilted rod
Question: I have a question regarding angular momentum and torque in the following example. This is not a homework problem, I just believe understanding some parts of this particular problem would help me understand the two concepts better. I have a rod of length $l$ oriented at an angle $\phi$ about an axis that passes through its center of mass. The rod is rotating about this axis, and I'm supposed to find the magnitude of the angular momentum and the torque. In this example, the given angle is $\pi/4$, but I'm looking for a general solution. Using the inertia tensor, I can easily find the moment of inertia of this rod, in the direction of the axis. Let us take the $x$ axis along the rod, and the $y$ axis perpendicular to it. Let us have the axis of rotation along some direction $\vec{a}$. We can see that $\vec{a}=a\cos\phi\hat{i}+a\sin\phi\hat{j}$. If $I$ denotes the inertia tensor, and $I_a$ represents the moment of inertia about this axis $\vec{a}$, then I can easily write: $$I_a=(\cos\phi, \sin\phi)\, I\, (\cos\phi, \sin\phi)^T$$ In the case of our rod, the moment of inertia along the length ($x$ axis) is $0$. So, we can just say: $I_a=I_y \sin^2\phi=\frac{1}{12}Ml^2\sin^2\phi$. So, I think I've managed to find the moment of inertia about the $a$ axis. Now I'm confused about how I should calculate the angular momentum in this case. On one hand, I have $$\vec{L}=I\vec{\omega}$$ So, I can simply multiply the moment of inertia along $\hat{a}$ by $\omega\hat{a}$. Using this, I get $$\vec{L}=\frac{1}{12}Ml^2\omega \sin^2\phi\,\hat{a}$$ However, I could also have done the following: $$\vec{L}=I_x\omega_x \hat{i}+I_y\omega_y\hat{j}$$ Here $I_x=0$ and $\omega_y=\omega\,\hat{a}\cdot\hat{j}=\omega\sin\phi$. In this case, I get: $$\vec{L}=I_y\omega_y\hat{j}=\frac{1}{12}Ml^2\omega\sin\phi\,\hat{j}$$ I'm inclined to believe that the second one is correct, but I have no idea why the first method seems to be wrong here. Any explanation would be highly appreciated.
Answer: Your projection of the inertial moment into the rotational axis, $\hat a$, is not correct. Starting with your inertial moment in your $\hat i$ (along the rod) and $\hat j$ (perpendicular to the rod) coordinate. $$ \bf{I} = \begin{bmatrix} 0 & 0 \\ 0 & \frac{1}{12}Ml^2 \end{bmatrix} $$ The transform to the $\hat a$ and $\hat b$ (perpendicular to $\hat a$) by a rotation matrix $\bf R$: \begin{align} \bf I_a & = \bf R^T \bf I \bf R \\ &= \begin{bmatrix} \cos \phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & \frac{1}{12}Ml^2 \end{bmatrix} \begin{bmatrix} \cos \phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} \\ &= \frac{1}{12}M l^2 \begin{bmatrix} \sin^2 \phi & \sin\phi\cos\phi \\ \sin\phi\cos\phi & \cos^2\phi \end{bmatrix} \end{align} Then angular momentum in $\hat a$ and $\hat b$ coordinate become \begin{align} \vec L_a & = \bf I_a \, \vec \omega_a \\ &= \frac{1}{12}M l^2 \begin{bmatrix} \sin^2 \phi & \sin\phi\cos\phi \\ \sin\phi\cos\phi & \cos^2\phi \end{bmatrix} \begin{bmatrix} \omega \\ 0 \end{bmatrix}\\ &=\frac{1}{12}M l^2 \, \omega \sin\phi \left\{ \sin\phi \,\hat a + \cos\phi \,\hat b \right\} \end{align} You may also check that $\vec L_a = {\bf R^T} \vec L $.
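As a quick numerical sanity check of the matrix algebra in the answer above (mass, length and angular speed are arbitrary illustrative values):

```python
import numpy as np

phi = np.pi / 4          # tilt angle from the example
M, l, w = 1.0, 1.0, 1.0  # illustrative mass, length, angular speed

# rod inertia tensor in the (i, j) body axes: zero along the rod
I_body = np.array([[0.0, 0.0],
                   [0.0, M * l**2 / 12]])

c, s = np.cos(phi), np.sin(phi)
R = np.array([[c, -s],
              [s,  c]])            # rotation relating (a, b) and (i, j) axes

I_a = R.T @ I_body @ R             # inertia tensor in the (a, b) axes
L_a = I_a @ np.array([w, 0.0])     # omega points purely along a-hat

# the closed form from the answer: (1/12) M l^2 w sin(phi) * (sin(phi), cos(phi))
expected = M * l**2 / 12 * w * s * np.array([s, c])
assert np.allclose(L_a, expected)
print(L_a)  # note the nonzero b-hat component: L is not parallel to omega
```

This makes the physical point of the answer explicit: the angular momentum picks up a component perpendicular to the rotation axis, which is why the naive $\vec{L}=I_a\omega\hat{a}$ is incomplete.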
{ "domain": "physics.stackexchange", "id": 86048, "tags": "rotational-dynamics, angular-momentum, torque, rotational-kinematics, moment-of-inertia" }
Why is the standard atomic weight of chlorine not a whole number?
Question: Why is the standard atomic weight of chlorine, 35.5, not a whole number? Like, for example, it could be exactly 35 or exactly 36. Please show, with formulae and a few words of explanation, how you reach 35.5, so I can write the answer in my homework assignment, and please keep it short. Thanks. Answer: The process is explained (for silicon) here: The calculation is exemplified for silicon, whose relative atomic mass is especially important in metrology. Silicon exists in nature as a mixture of three isotopes: $\ce{^28Si}$, $\ce{^29Si}$ and $\ce{^30Si}$. The atomic masses of these nuclides are known to a precision of one part in 14 billion for $\ce{^28Si}$ and about one part in one billion for the others. However the range of natural abundance for the isotopes is such that the standard abundance can only be given to about ±0.001% (see table). The calculation is $$A_r(\ce{Si}) = (27.97693 \times 0.922297) + (28.97649 \times 0.046832) + (29.97377 \times 0.030872) = 28.0854$$ For chlorine, the idea is the same, but you only have two isotopes, $\ce{^35Cl}$ (76%) and $\ce{^37Cl}$ (24%).
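The same abundance-weighted average for chlorine, sketched in Python. The isotopic masses and abundances used here are the commonly tabulated approximate values (slightly more precise than the rounded 76%/24% quoted above):

```python
# Chlorine isotopic masses (in u) and approximate natural abundances
masses = {"35Cl": 34.96885, "37Cl": 36.96590}
abundances = {"35Cl": 0.7576, "37Cl": 0.2424}

# relative atomic mass = sum over isotopes of (mass x abundance)
A_r = sum(masses[iso] * abundances[iso] for iso in masses)

print(round(A_r, 1))  # → 35.5
```

The weighted average lands near 35.45, which rounds to the familiar 35.5: the standard atomic weight is not a whole number because it averages over two isotopes of different mass.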
{ "domain": "chemistry.stackexchange", "id": 10481, "tags": "isotope" }
Giveth me thy easier user input in C++ - follow up
Question: I've decided to take some of the many suggestions for improvement on my previous question, Easier user input in C++, and actually get it to work as expected. This time around, a few things are different, namely: The function will now accept string input with spaces, like "Ethan Bierlein", and not just spit back the first "word" in the input. The function now also accepts a different input stream rather than just std::cin, if, for some reason, you do something like that. The function now allows for a "no-prompt" option, as well, which just means that the argument prompt has a default value of "". There is an optional way to specify the type of the prompt as well now. Its default type is std::string. The function, now named get_input, has its own namespace, easy_input, rather than being patched into std. I do have a few concerns about my code though, and I'd like to hear your opinions on them: Is it a good idea to declare a static variable, just so I can have a default value for prompt? Is it a good idea to have a default value for prompt? How necessary is it to provide an option to get input from a different input stream? Am I writing good C++, or am I doing certain things horribly wrong? Is there anything else that stands out for improvement? easy_input.h #if HAVE_PRAGMA_ONCE #pragma once #endif #ifndef EASY_INPUT_H_ #define EASY_INPUT_H_ #include <iostream> #include <string> #include <boost/lexical_cast.hpp> namespace easy_input { const std::string input_error_message = "an I/O error was encountered."; static std::string default_prompt = ""; template <typename TInput, typename TPrompt = std::string> TInput get_input( const TPrompt& prompt = default_prompt, std::istream& input_stream = std::cin ); } /** * This function serves as a useful wrapper for getting user input. * Rather than forcing the user to type out multiple lines every * time they want to get input, they only have to type one line. * @tparam TInput - The type of the input to obtain. 
* @tparam TPrompt - The type of the prompt to be used. The default type is std::string. * @param {TPrompt} prompt - The prompt to be used. */ template <typename TInput, typename TPrompt = std::string> TInput easy_input::get_input( const TPrompt& prompt = easy_input::default_prompt, std::istream& input_stream = std::cin) { std::cout << prompt; std::string user_input_value { }; if(!std::getline(input_stream, user_input_value)) { throw std::istream::failure { easy_input::input_error_message }; } return boost::lexical_cast<TInput>(user_input_value); } #endif main.cpp (tests) #include <iostream> #include "easy_input.h" int main() { std::string a = easy_input::get_input<std::string>("Enter your name please! "); std::cout << a << "\n"; int b = easy_input::get_input<int>("Enter an integer please! "); int c = easy_input::get_input<int>("Enter an integer please! "); std::cout << b + c << "\n"; std::string d = easy_input::get_input<std::string>(); std::cout << d << "\n"; int e = easy_input::get_input<int>(); int f = easy_input::get_input<int>(); std::cout << e + f << "\n"; } Answer: Redefinition of default arguments First of all, you cannot redefine default arguments. From [dcl.fct.default], slightly abridging the example: A default argument shall not be redefined by a later declaration (not even to the same value). [ Example: void m() { void f(int, int); // has no defaults f(4); // error: wrong number of arguments void f(int, int = 5); // OK f(4); // OK, calls f(4, 5); void f(int, int = 5); // error: cannot redefine, even to // same value } —end example ] So the definition of get_input should just be: template <typename TInput, typename TPrompt> TInput easy_input::get_input( const TPrompt& prompt, std::istream& input_stream) { ... } Otherwise, the code is ill-formed and should not compile (though apparently earlier versions of gcc happily allow this). 
Include guards AND pragma Given that you have include guards, it's unnecessary to additionally have the conditional #pragma once. If you don't want to unconditionally use the pragma, just stick with the include guards. Default Prompt and Error You use the error message in a single place - it'd be better to just write out the string literal there. It'll be easier to find when grepping through code, and there's no advantage I see in having the global constant. With the default prompt, rather than all the additional complexity of default arguments, I think it'd be simpler to just add an overload: template <typename TInput> TInput get_input(std::istream& input_stream = std::cin); template <typename TInput, typename TPrompt> TInput get_input(const TPrompt& prompt, std::istream& input_stream = std::cin) { std::cout << prompt; return get_input<TInput>(input_stream); } Ultimately though... My biggest takeaway from 5gon12eder's excellent answer to your original question was that there really isn't anything easy about this input, and it may just be easier to avoid it and stick to being explicit with the streams. Here's something this solution can't handle. Let's say I want to input two ints. I could write: int a, b; std::cout << "$"; std::cin >> a >> b; And end up doing something like: $42 15 And that works. But if I tried to do: int a = get_input<int>("$"); int b = get_input<int>(); and tried to input both on the same line... I'd get a lexical error since "42 15" isn't an int. Maybe you're OK with that? But it's limiting and sadly not easy. Then again, nothing IO related in C++ is really easy anyway.
{ "domain": "codereview.stackexchange", "id": 16477, "tags": "c++, c++11, io, user-interface" }
Concurrency-safe task scheduler
Question: I'm trying to build a task scheduler and worker. Tasks are added to the queue, from which a worker retrieves and processes them. Tasks from one queue must be executed sequentially. Tasks from different queues are executed in parallel. When the worker has completed all the tasks in the queue, it waits for a while and finishes its work. The queue must then be removed from the schedule. I did something, but I think it isn't the best solution. I think there may be problems when the worker shuts down and closes the channel. Is my solution concurrency-safe? // Schedule struct type Schedule struct { sync.RWMutex queues map[int]chan Task idle time.Duration } // Scheduler pushes a task to the queue func (s *Schedule) Scheduler(t Task, i int) { var queue chan Task var ok bool s.RLock() if queue, ok = s.queues[i]; !ok { s.RUnlock() s.Lock() if queue, ok = s.queues[i]; !ok { queue = make(chan Task) s.queues[i] = queue go s.worker(queue, i) } s.Unlock() } else { s.RUnlock() } queue <- t } // worker retrieves tasks from the queue and processes them func (s *Schedule) worker(c chan Task, i int) { timeout := time.After(s.idle * time.Second) done := false for !done { select { case task := <-c: task.Execute() timeout = time.After(s.idle * time.Second) case <-timeout: s.Lock() close(c) delete(s.queues, i) s.Unlock() done = true default: time.Sleep(10 * time.Millisecond) } } } Answer: There is no need for the default clause in Schedule.worker. Memory-wise, it is better to create a time.Timer object and call Timer.Reset & Timer.Stop on it. No need for the done variable, we can just return when we are done. func (s *Schedule) worker(c chan Task, i int) { t := time.NewTimer(s.idle * time.Second) for { select { case task := <-c: task.Execute() if !t.Stop() { <-t.C } t.Reset(s.idle * time.Second) case <-t.C: s.Lock() close(c) delete(s.queues, i) s.Unlock() return } } } It is not advisable to close a channel from a receiver. 
Better to close the channel when you know that there are no more tasks and have the worker wait for that. func (s *Schedule) worker(c chan Task, i int) { for task := range c { task.Execute() } }
{ "domain": "codereview.stackexchange", "id": 36791, "tags": "go, concurrency" }
Interpretation of field quantization
Question: In the book on Quantum Field Theory by Peskin and Schroeder, it is explained how the field is promoted to an operator, now my question is that in Quantum Mechanics, operators act on kets, what does this field operator act on? Answer: There is a Hilbert space also for quantum fields and the field operator acts on the vectors ("kets") of that Hilbert space. From this perspective there is no difference between quantum mechanics and QFT.
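As an illustration only — this is a single bosonic mode in a truncated Fock space, not anything specific to Peskin and Schroeder — here is a toy showing an operator built from creation/annihilation operators acting on the kets of that Hilbert space:

```python
import numpy as np

N = 5  # truncated Fock-space dimension (toy model)

# annihilation operator a in the number basis: a|n> = sqrt(n) |n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T  # creation operator: a†|n> = sqrt(n+1) |n+1>

# a single-mode "field operator" phi ~ (a + a†)/sqrt(2)
phi = (a + adag) / np.sqrt(2)

vac = np.zeros(N); vac[0] = 1.0  # the vacuum ket |0>
one = np.zeros(N); one[1] = 1.0  # the one-particle ket |1>

# acting on the vacuum produces a one-particle state (up to normalisation)
out = phi @ vac
assert np.allclose(out, one / np.sqrt(2))
```

In full QFT the field operator $\phi(x)$ is a sum of such $a$ and $a^\dagger$ operators over all momentum modes, but it acts on Fock-space kets in exactly this way.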
{ "domain": "physics.stackexchange", "id": 91541, "tags": "quantum-field-theory, hilbert-space, operators, second-quantization, quantum-states" }
Getting accurate real-time x,y coordinates with gmapping?
Question: Hello, I have a Turtlebot 2 (Kobuki base) running in Ubuntu 12.04, ROS Groovy. I want to be able to print out a map of the robot's X,Y coordinates in real time. Using information I've spliced together from various tutorials (still a newbie), I've come up with a node that uses a StampedTranform object to lookup the transform from "/map" to "base_link". It then takes the transform.getOrigin().x() and transform.getOrigin().y() and prints it to a text file. My steps are this: roslaunch turtlebot_bringup turtlebot.launch roslaunch turtlebot_navigation gmapping_demo.launch roslaunch turtlebot_teleop keyboard_teleop.launch I then launch my application and manually drive the robot in a rectangular path around an office building (perhaps 50ft by 20ft). I load the resulting XY text file into MS Excel, and plot the points, but the path looks pretty awful. I don't have enough karma to post pictures here, but suffice to say it looks like a squashed rhombus. My question is, what am I doing wrong? Is there a different transform path I should be listening in on? Am I using the wrong function to extract X and Y? Or is it a bad idea in the first place to try and get the path coordinates in real time while the map is being built? Thanks in advance! I've posted my code below, if anyone is interested. #include "ros/ros.h" #include "std_msgs/String.h" #include <tf/transform_listener.h> #include <iostream> #include <fstream> int main(int argc, char **argv) { ros::init(argc, argv, "PoseUpdate"); ros::NodeHandle n; ros::Publisher chatter_pub = n.advertise<std_msgs::String>("chatter", 1000); tf::TransformListener listener; ros::Rate rate(1.0); std::ofstream myfile; myfile.open("/tmp/OUTPUTXY.txt"); while (n.ok()) { tf::StampedTransform transform; try { //ROS_INFO("Attempting to read pose..."); listener.lookupTransform("/map","/base_link",ros::Time(0), transform); ROS_INFO("Got a transform! 
x = %f, y = %f",transform.getOrigin().x(),transform.getOrigin().y()); myfile << transform.getOrigin().x() << "," << transform.getOrigin().y() << "\n"; } catch (tf::TransformException ex) { ROS_ERROR("Nope! %s", ex.what()); } rate.sleep(); } myfile.close(); ROS_ERROR("I DIED"); return 0; } Originally posted by BlitherPants on ROS Answers with karma: 504 on 2013-07-12 Post score: 5 Answer: If you're using a TurtleBot with the Kinect or Asus Xtion, a space that large is going to be close to impossible to map. The Kinect only has an effective range of 3m, and with its small field of view a lot of the time it will be out of visual range of any obstacle. And the odometry of the Kobuki is much better than the Create's, but it's still not going to allow many meters of dead reckoning. And with the small laser scan width, it is much harder for gmapping to do good scan matching since much of the time it will see featureless walls in its small field of view. This is something that can be significantly improved but may require new tools and algorithms, not just tuning parameters. Or a new sensor such as a longer-range/wider-field-of-view laser scanner will help significantly. Originally posted by tfoote with karma: 58457 on 2013-07-12 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by BlitherPants on 2013-07-12: It's a Kinect. The area I'm driving it around in has hallways that are about 5 feet wide (office with cubicles, etc). Is it really hopeless even with all the doorways and cubes in sight? Comment by BlitherPants on 2013-07-12: Also, if I drive around and do a map-saver to PGM, the output map looks pretty decent, so I had thought that the system was able to manage the office space... Comment by tfoote on 2013-07-12: It really depends on the space and how much you can keep the kinect scan line full. The office space we were testing in had 10' wide corridors. There are techniques such as doing perimeter driving which can help a lot. 
If you're getting a decent PGM of the map, you should be able to make it work. Comment by BlitherPants on 2013-07-12: By "perimeter driving", do you just mean hugging the wall? Comment by tfoote on 2013-07-12: Pretty much. Keep the kinect field of view as full as possible within 3m of distance. Comment by BlitherPants on 2013-07-12: That's encouraging to hear, thanks. Otherwise, do my methods of getting the transform look correct to you? Comment by tfoote on 2013-07-12: Ahh, sorry I didn't look closely at the code. I think you're asking for the inverse of the transform you want. Such that you're plotting the position of the origin of the map in relationship to the position of the robot. Try reversing the argument order. Comment by BlitherPants on 2013-07-12: You mean like this? listener.lookupTransform("/base_link","/map",ros::Time(0), transform); Comment by BlitherPants on 2013-07-12: Hmm. Just tried that one line swap and got some pretty bizarre results that look nothing like a path. Not sure why. Comment by tfoote on 2013-07-12: Sorry maybe you were right originally. The easiest way to know you're doing it right is to create a point at 0,0 of the robot and ask tf to transformPoint into the map. Comment by BlitherPants on 2013-07-12: Not sure how to do that. I'll have to read up on that. Thank you very much for your help! Comment by BlitherPants on 2013-07-30: (In case anyone reading this is curious, I have tried to create a point at 0,0 on base_link, then transformPoint()ed it into /map. I do get the same X,Y values as the method in my code above) Comment by Aarif on 2014-11-18: hi @BlitherPants, your code is very useful. I am new to ROS and I have a question: to get some data from a topic we need to make a subscriber, but in your code the line "ros::Publisher chatter_pub = n.advertise<std_msgs::String>("chatter", 1000);" suggests you made a publisher. How? 
Comment by BlitherPants on 2014-11-18: Hi @Aarif; the line you showed is actually leftover from previous code and is useless for this application. The code that is truly doing the work is the TransformListener in this line: listener.lookupTransform("/map","/base_link",ros::Time(0), transform); Comment by Aarif on 2014-11-21: Thank you very much @BlitherPants. :)
{ "domain": "robotics.stackexchange", "id": 14901, "tags": "navigation, turtlebot, kobuki, tf2, gmapping" }
Why the He core gets hotter and hotter? Is it because it shrinks?
Question: I know that after a star in the main sequence runs out of H in the core, it will start burning H in the shells surrounding the (now) He core. (1) Why are the hydrogen shells now hot enough to burn H when they were not before? What made them hotter? I know too that the burning of H in the shells surrounding the He core will produce new He, which will be deposited in the He core; therefore the He core will increase its mass and will be compressed as a result (since we are talking about a degenerate-matter core, its size/radius will follow the relation $R\propto M^{-1/3}$). (2) Does its shrinkage heat up either the core itself or the surrounding H shells? If so, why? I know that when it comes to an ideal gas and gravitational collapse, the Virial Theorem tells us that half of the released gravitational energy will be used for increasing the internal energy of the system, that is, its temperature. However, I am not quite sure if the same holds for a degenerate core. (If I am not wrong, the Virial Theorem derives from the condition of hydrostatic equilibrium.) I also know that the degenerate He core is isothermal, so it will have the same temperature as the immediately surrounding H-burning shell; so, if you can tell me why the surrounding H shells become hotter and hotter, it may be the same as telling me why the degenerate He core becomes hotter and hotter. Thanks! :) Answer: The answer lies in something called the virial theorem. A He core that has ceased nuclear burning and that is in quasi-static equilibrium will have a relationship between the temperature and pressure in its interior and the gravitational "weight" pressing inwards. This relationship is encapsulated in the virial theorem, which says (ignoring complications like rotation and magnetic fields) that twice the summed internal energy of particles ($U$) in the gas plus the (negative) gravitational potential energy ($\Omega$) equals zero. 
$$ 2U + \Omega = 0$$ This assumes that the pressure of gas outside the core is negligible (which is a reasonable assumption given the fall in temperature and density with radius.) Now you can write down the total energy of the core as $$ E_{tot} = U + \Omega$$ and hence from the virial theorem that $$E_{tot} = \frac{\Omega}{2},$$ which is negative. If we now remove energy from the core, by allowing the gas to radiate away energy (or maybe even some neutrino losses), such that $\Delta E_{tot}$ is negative (we can ignore energy generation since fusion has ceased), then we see that $$\Delta E_{tot} = \frac{1}{2} \Delta \Omega$$ So $\Omega$ becomes more negative - which is another way of saying that the core is attaining a more collapsed configuration. This process will occur unless the gas is completely degenerate, which in practice the core never attains (the assumption of complete degeneracy is essentially saying that the gas has a negligible temperature). The temperatures are always high enough for the degeneracy to be only partial. Oddly, at the same time, we can use the virial theorem to see that $$ \Delta U = -\frac{1}{2} \Delta \Omega = -\Delta E_{tot}$$ is positive. i.e. the internal energies of particles in the gas (and hence their temperatures) actually become hotter. In other words, the gas has a negative heat capacity. Because the temperatures and densities are becoming higher, the interior pressure increases and can support a more condensed configuration. However, if the radiative losses continue, then so does the collapse. This process is ultimately arrested in the core by the onset of He fusion. Another way of thinking about this is that gravitational compression is doing work on the gas, thus raising its internal energy and hence temperature. The pressure does not rise significantly because most of the energy goes into raising the temperature of the non-degenerate ions which have a much larger heat capacity. 
As the core contracts, the H-shell immediately around the core will also contract in response and achieve higher temperatures. As a result the shell burns even more fiercely than hydrogen burned in the core during the main sequence lifetime and the luminosity of the star increases as it ascends the red giant branch.
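The sign bookkeeping in the virial argument above can be made concrete with arbitrary numbers (the values and units are purely illustrative):

```python
# Virial bookkeeping: E_tot = Omega/2 and U = -Omega/2.
# Radiating energy away (Delta E < 0) makes Omega more negative
# (contraction) while U -- and hence the temperature -- goes up.

def virial_state(Omega):
    return {"Omega": Omega, "U": -Omega / 2, "E_tot": Omega / 2}

before = virial_state(-10.0)  # arbitrary units
after = virial_state(-12.0)   # core has radiated away Delta E = -1

dE = after["E_tot"] - before["E_tot"]
dU = after["U"] - before["U"]

assert dE == -1.0        # total energy lost to radiation
assert dU == -dE == 1.0  # ...yet the internal energy rises
```

This is the "negative heat capacity" of self-gravitating gas in miniature: losing total energy raises the internal energy.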
{ "domain": "physics.stackexchange", "id": 41719, "tags": "condensed-matter, stars, stellar-physics, stellar-evolution, virial-theorem" }
How to include my own rule base (.pl) while launching json_prolog
Question: I wanted to include my rule base written in a Prolog file (.pl) and launch it through the json_prolog launch file suggested in json_prolog launch. How can I do it? I can load the Prolog file from a Python node by querying ['CHOPIN.pl']. But when I do the same in the json_prolog launch file, the query returns an error: Traceback (most recent call last): File "/home/rohit/fuerte_workspace/sandbox/knowrob_CHOPIN/scripts/tutorial_test_query.py", line 16, in query = prolog.query("hot(location1).") File "/home/rohit/fuerte_workspace/stacks/knowrob/json_prolog/src/json_prolog/prolog.py", line 69, in query return PrologQuery(query_str) File "/home/rohit/fuerte_workspace/stacks/knowrob/json_prolog/src/json_prolog/prolog.py", line 22, in __init__ raise PrologException('Prolog query failed: %s' % result.message) json_prolog.prolog.PrologException: "Prolog query failed: PrologException: error(syntax_error(operator_expected), string('expand_goal((hot(location1).),_Q), call(_Q) . ', 27))" This is a specific error for my query, but I hope you get the idea. So, how do I load my rule base file (rule_base.pl) while launching json_prolog, to make it global and to be queried directly? Thank you. Originally posted by olchandra on ROS Answers with karma: 7 on 2014-10-20 Post score: 0 Answer: The json_prolog node reads two ROS parameters, the initial package to be loaded and an initialization goal, see here, which you can use to pass the 'consult' or 'use_module' directive. Alternatively, as you have created your own package, you can give that package as the initial_package argument when starting json_prolog, which will then load everything that is listed in your init.pl. You can have a look at other KnowRob packages for examples. The error you list above was however caused by a syntax error: When sending commands via json_prolog, do not add the trailing dot '.'. All commands are passed via the 'expand_goal' predicate, so they should not have the '.' inside which indicates the end of a query. 
Originally posted by moritz with karma: 2673 on 2014-10-21 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by olchandra on 2014-10-21: Hello again. Thanks to your answer I can load the prolog file while starting json_prolog, by giving the package name as argument. But still I cannot pass the prolog file as the parameter "initial_package" in the launch file for json_prolog. Comment by olchandra on 2014-10-21: Ok...I got to load the prolog file from json_prolog launch file. I think the launch file suggested here is not accurate. It works only when I put the param inside node. Thank you again moritz.
{ "domain": "robotics.stackexchange", "id": 19788, "tags": "ros, json-prolog, knowrob" }
Is it always possible to write the state corresponding to a set of stabilizer generators?
Question: Given a set of stabilizer generators, is it always possible to write down the state corresponding to it? Is there a way to write down the quantum state corresponding to a stabilizer generator? Answer: Given an $n$-qubit system and $n$ generators $g_i$ (which commute and square to identity), then the state that you are after satisfies $$ g_i|\psi\rangle=|\psi\rangle. $$ (This defines it uniquely, up to a global phase and normalisation.) One straightforward way to construct this directly is just $$ |\psi\rangle\langle\psi|=\frac{1}{2^n}\prod_i(I+g_i). $$ Thus, you can compute the matrix using the $\{g_i\}$ and find the vector $|\psi\rangle$ by determining the one eigenvector that does not have 0 eigenvalue. In practice, you can do this simply by reading off any non-trivial column of the matrix, and normalising. For example, let $n=2$ and take $$ g_1=X\otimes X,\qquad g_2=Z\otimes Z. $$ We have $$ |\psi\rangle\langle\psi|=\frac14(I+XX)(I+ZZ)=\frac14(I+XX+ZZ-YY). $$ If I read off the first column, this is equivalent to computing $$ |\psi\rangle\langle\psi|00\rangle=\frac14(I+XX+ZZ-YY)|00\rangle=\frac12(|00\rangle+|11\rangle). $$ Thus, $$ |\psi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle). $$
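The worked $n=2$ example can be checked numerically; this sketch just reproduces the answer's construction of the projector $(I+g_1)(I+g_2)/2^n$ and reads off a column:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

g1 = np.kron(X, X)  # X (x) X
g2 = np.kron(Z, Z)  # Z (x) Z

# |psi><psi| = (1/2^n) * prod_i (I + g_i), here n = 2
rho = (np.eye(4) + g1) @ (np.eye(4) + g2) / 4

psi = rho[:, 0]                  # read off a non-trivial column...
psi = psi / np.linalg.norm(psi)  # ...and normalise

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
assert np.allclose(psi, bell)
assert np.allclose(g1 @ psi, psi) and np.allclose(g2 @ psi, psi)
```

The final assertions confirm the defining property $g_i|\psi\rangle = |\psi\rangle$ for both generators.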
{ "domain": "quantumcomputing.stackexchange", "id": 5570, "tags": "quantum-state, stabilizer-code, stabilizer-state" }
Idiomatic way to tell whether code is on simulator or real robot
Question: The simulation (Gazebo) can differ from the actual robot (GoPiGo3) in particular ways, e.g. the number of readings in a scan topic - in my case 360 in Gazebo and 720 on the Ydlidar X4. So I often have either a constant GAZEBO=True or an inline if True: both of which are kind of ugly. Is there a more typical way of making code behave a little differently depending on where it is running? Originally posted by pitosalas on ROS Answers with karma: 628 on 2020-12-23 Post score: 0 Answer: Is there a more typical way of making code behave a little differently depending on where it is running? I would say "no". And I don't believe there should be one, as ideally your nodes (or other nodes) don't have to realise there is a simulation "on the other side" of topics/services/actions. If you need different settings, which I believe is what you are asking for, then I would suggest creating multiple .launch files. For your simulation case, you'd load the .launch file which initialises your application with "simulation settings". For use with real hw, you'd create a similar .launch file, but then it loads settings which work for your real hw. In your specific case, you'd have a setting for the nr of points in a LaserScan to the appropriate value. (but I'm not entirely sure why you can't use len(msg.ranges) or msg.ranges.size() instead) So I often have either a constant GAZEBO=True or an inline if True: both of which are kind of ugly yes, that is indeed really rather ugly, and I would really recommend you avoid doing that. Originally posted by gvdhoorn with karma: 86574 on 2020-12-24 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by gvdhoorn on 2020-12-24: I always tell "my" students to treat Gazebo as "just a piece of hw", and don't think of it as a special simulator or anything like that. 
You really want to avoid embedding special cases / code paths "for this simulator I always use" in your code, as the minute someone else uses something else, those tend to break. They also reduce the reuse potential of your code significantly.
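As a sketch of the multiple-.launch-file suggestion above — all file and parameter names here are hypothetical, not from any real package:

```xml
<!-- sim.launch: settings matching the Gazebo lidar (hypothetical names) -->
<launch>
  <param name="scan_points" value="360" />
  <include file="$(find my_robot)/launch/app.launch" />
</launch>

<!-- robot.launch: same application, settings for the real Ydlidar X4 -->
<launch>
  <param name="scan_points" value="720" />
  <include file="$(find my_robot)/launch/app.launch" />
</launch>
```

The application launch file stays identical in both cases; only the settings loaded around it differ, so no node ever needs a GAZEBO flag.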
{ "domain": "robotics.stackexchange", "id": 35906, "tags": "ros-melodic" }
Print maximum frequency number in an array (with slight modification)
Question: There is an integer array and 2 variables Q and M. The output required is the number which has the maximum frequency in the array, but the catch is that each number in the array can be increased/decreased by M, and that too Q number of times. In case of multiple answers print the first one. E.g. array: 1,2,3,4 and Q=1, M=1. Answer: 3, as the array can be transformed into 2,2,2,4 (here 2 appears 3 times in this array, after adding 1 to 1 and subtracting 1 from 3). A hashtable can be used here, but I can't understand how to handle the addition and subtraction of each number of the array. Can anyone please suggest an approach for that part? Thanks. Answer: Start by sorting the array. Now go over the array using four pointers, left, center left, center right, and right. The two central pointers bracket all the occurrences of some value $x$. The left pointer points at the leftmost value which is at least $x-M$, and the right pointer points at the rightmost value which is at most $x+M$. Using the positions of these pointers you can calculate the maximum frequency of $x$. Traversing the array should take linear time, so the entire algorithm takes time $O(n\log n)$. For a possibly simpler solution, you can sort the array and then use a single pointer to traverse it. Treat this pointer as one of the central pointers from the previous algorithm, and find the locations of all other pointers in logarithmic time using binary search. Traversal now takes time $O(n\log n)$, but this doesn't increase the asymptotic running time of the algorithm.
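A minimal Python sketch of the binary-search variant of the answer. The function name is mine, Q is treated as in the worked example (each element may shift once by at most M), and ties are resolved by sorted order here:

```python
from bisect import bisect_left, bisect_right

def max_adjusted_frequency(arr, M):
    """Best achievable count when every element may shift by +/- M,
    following the sorted sliding-window idea from the answer."""
    a = sorted(arr)
    best_count, best_value = 0, None
    for x in a:
        lo = bisect_left(a, x - M)   # index of leftmost value >= x - M
        hi = bisect_right(a, x + M)  # one past the rightmost value <= x + M
        if hi - lo > best_count:     # strict '>' keeps the first (smallest) winner
            best_count, best_value = hi - lo, x
    return best_value, best_count

value, count = max_adjusted_frequency([1, 2, 3, 4], M=1)
assert (value, count) == (2, 3)  # 1 and 3 can both be shifted onto 2
```

Sorting dominates at $O(n\log n)$; each of the $n$ binary searches costs $O(\log n)$, matching the complexity stated above.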
{ "domain": "cs.stackexchange", "id": 9850, "tags": "algorithms, arrays, hash-tables, c++" }
Starting the Turtlebot on 11.04 as a non-turtlebot user
Question: After having followed the setup instructions: http://www.ros.org/wiki/Robots/TurtleBot/diamondback/Robot%20Setup http://www.ros.org/wiki/turtlebot/Tutorials/Networking%20Setup I'm attempting to start the turtlebot service, as suggested here: http://www.ros.org/wiki/turtlebot_bringup/Tutorials/TurtleBot%20Bringup But no such service can be found. There is a file called /usr/sbin/turtlebot-start and when I run this I get: /usr/sbin/turtlebot-start: line 28: setuidgid: command not found I'm running Ubuntu 11.04, and have installed the electric rather than the diamondback version (ros-electric-turtlebot-robot). Any ideas on what's going wrong? [Supplemental] If I do the following: sudo apt-get install daemontools setuidgid Then edit /usr/sbin/turtlebot-start and change "turtlebot" in the setuidgid command to a valid user then I am able to run this script. However, I then get: Failed to open port /dev/ttyUSB0. Please make sure the Create cable is plugged into the computer. The create base is switched on and the cable is connected. I can see that /dev/ttyUSB0 does exist. Originally posted by JediHamster on ROS Answers with karma: 995 on 2011-10-15 Post score: 0 Original comments Comment by JediHamster on 2011-10-15: That's more successful, although I get the following: [WARN] [WallTime: 1318702992.358059] Invalid OI Mode 4 [WARN] [WallTime: 1318702992.359357] Invalid Charging Source 184, actual value: 184 Comment by Lorenz on 2011-10-15: Can you please post the error message that you get when executing 'roslaunch turtlebot_bringup minimal.launch'? Answer: You need to make sure that the user you have chosen has permissions to the port. We set up the udev rules for the user turtlebot in the official ISO. See this question http://answers.ros.org/question/2145/cant-access-turtlebot-through-ttyusb0 Originally posted by tfoote with karma: 58457 on 2011-10-15 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 6984, "tags": "ros, turtlebot, service" }
How strong can the magnetic field of a black hole be if it accretes matter?
Question: I've read some papers where black holes have magnetic fields on the order of 10^6 Gauss, of course this compared to the magnetic field of neutron stars or magnetars is quite small. So I'm curious whether there have been any observed black holes that have magnetic fields that surpass that of magnetars. I know that black holes can only have magnetic fields if they accrete matter, so my question is, is it possible for one to have such a high accretion rate that its magnetic field is like a magnetar's or even stronger? Answer: Yes, absolutely. The magnetic field of a BH, as you describe, requires there to be active accretion. In particular, the accretion pressure determines how strong of a magnetic field can be confined. The magnetic pressure, which scales as $B^2$ (for magnetic field strength $B$), can at most be equal to the accretion pressure, scaling like $\rho v^2$ (for density of material $\rho$ and in-falling velocity $v$). This is called 'equipartition'. Typically, maximum accretion rates are given by the Eddington Accretion Rate, so you can solve for the typical maximum magnetic field strengths, which are theoretically something like $10^{17} G$ for a roughly solar-mass BH (just slightly larger than that of a NS because of the slightly larger escape velocity---the speed of light). Magnetic-field strengths inferred from BH (relativistic) Jets are consistent with this picture, both for transient jets (e.g. from Gamma Ray Bursts) and steady ones (e.g. blazars).
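Written out, the equipartition argument in the answer is the back-of-the-envelope condition (a sketch, with $\dot M$ the accretion rate and $r$ the radius at which the field is confined):

```latex
\frac{B^2}{8\pi} \le \rho v^2
\quad\Longrightarrow\quad
B_{\max} \sim v\sqrt{8\pi\rho},
\qquad
\rho \approx \frac{\dot{M}}{4\pi r^2 v},
```

so evaluating $\rho$ at the Eddington-limited $\dot M$ near the horizon ($r \sim 2GM/c^2$, $v \sim c$) gives the maximum confined field for a given BH mass.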
{ "domain": "physics.stackexchange", "id": 35533, "tags": "black-holes, magnetic-fields, astrophysics, astronomy, neutron-stars" }
Train error vs. Test error in linear regression by samples analysis
Question: I have run a multivariate linear regression model on a small set of about 3500 samples. While the model's error is as large as expected, I also ran a bias vs. variance analysis by comparing the train set error vs. the test set error using different sample sizes. I was expecting something like this: But instead I found that the train error doesn't plateau at any point. The code that generates this graph is the following:

def loss_by_sample(X, Y, testX, testY, learning_rate):
    samples_list = list()
    train_loss_list = list()
    test_loss_list = list()
    m = X.shape[0]
    for i in range(50, len(X), 50):
        samples_list.append(i)
        weights, loss = gradient_descent(learning_rate, X[:i], Y[:i])
        X2 = tf.concat([tf.ones([X.shape[0], 1]), X], 1)
        train_loss = (1/(2 * m) * tf.tensordot(tf.transpose(h(X2[:i], weights) - Y[:i]), (h(X2[:i], weights) - Y[:i]), axes=1))[0][0]
        train_loss_list.append(train_loss)
        X2 = tf.concat([tf.ones([testX.shape[0], 1]), testX], 1)
        test_loss = (1/(2 * m) * tf.tensordot(tf.transpose(h(X2, weights) - testY), (h(X2, weights) - testY), axes=1))[0][0]
        test_loss_list.append(test_loss)
    # plot train_loss, test_loss

I have tried different learning rates and the one I picked is the one that minimizes the loss using mean squared error: (values to the right of the x axis after the last datapoint are inf.) Is there something I can conclude from this? Any reason why the train error can surpass the test error? Answer: It occurs because your data is noisy. Suppose that your model is $$ y = X\beta + noise. $$ Suppose that $\beta$ is recovered exactly. Since the noise is present, our model will NOT predict accurately; there will always be some noise. In other words, think that any time we predict the values, there is some random perturbation. The amount of perturbation is of the same order each time.
When we compute the error, we square these perturbations, sum them up, and then take the square root, i.e., compute the norm. In your example, the size of the training set is growing, and, therefore, the train error WITHOUT averaging is growing as well -- you sum up squares of the perturbation. The size of the test set is steady, so the norm of error is about the same there. Solution: apply appropriate averaging to both the train and the test set, e.g.

train_loss = (1/(2 * i) * tf.tensordot(tf.transpose(h(X2[:i], weights) - Y[:i]), (h(X2[:i], weights) - Y[:i]), axes=1))[0][0]
test_loss = (1/(2 * X2.shape[0]) * tf.tensordot(tf.transpose(h(X2, weights) - testY), (h(X2, weights) - testY), axes=1))[0][0]

P.S. X2 in my formulas is as defined in your cell. I would suggest choosing a different name for one of the two variables.
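A quick, self-contained illustration of the point (synthetic pure-noise residuals standing in for the irreducible error): the summed squared loss grows with the number of samples, while the averaged loss stays flat.

```python
import random

random.seed(0)
residuals = [random.gauss(0.0, 1.0) for _ in range(4000)]  # irreducible noise

sums, means = [], []
for n in range(500, 4001, 500):
    sq = [r * r for r in residuals[:n]]
    sums.append(sum(sq))        # un-averaged loss: grows roughly linearly in n
    means.append(sum(sq) / n)   # averaged loss: stays near the noise variance
```

With a growing training slice and a fixed test set, comparing an un-averaged train loss against an (effectively) fixed-size test loss produces exactly the diverging curve described in the question.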
{ "domain": "datascience.stackexchange", "id": 8739, "tags": "classification, linear-regression" }
In Transformer's multi-headed attention, how attending "different representation subspaces at different positions" is achieved?
Question: Question partially inspired by this post about the need of a multi-head attention mechanism. For me though it is still not clear how we will be able to initialise those attention heads in a diverse way (so that they potentially can - as stated in the Attention is all you need paper - attend to information from different representation subspaces at different positions) and most importantly preserve this diversity during the training process. Answer: I think what you are looking for is $d_k = d_v = d_{model}/h$ [1], where $h$ is the number of heads and $d_{model}$ is the dimension of keys, values and queries in the single-attention version of the model. In relation to the model architecture and its embedding specifically, the above translates to $Query Size = Embedding Size / h$ Model input According to the above, at the input embedding layer weights for each head are stacked together in the single embedding matrix. Stacking together, hmm.. (stacking, ensembling). Per head scores As in the normal self-attention, the attention score is computed per head, but given the above, these operations also take place as a single matrix operation and not in a loop. The scaled dot product along with other calculations take place here. Multi head merge As a final step, the attention score of each head is merged by simply reshaping the full attention score matrix so that the per-head attention scores are concatenated into a single attention score. Summing it up With a clearer view of what the architecture of the model is and how its computational graph looks, we can go back to your original questions and say: The multi-headed model can capture richer interpretations because the embedding vectors for the input get "segmented" across multiple heads and therefore different sections of the embedding can attend different per-head subspaces that link back to each word.
In a more general sense, one can argue that running through the scaled dot-product attention multiple times in parallel and concatenating is a form of ensembling. I hope it helps. For more details about the several operation taking place in the transformer architecture, I suggest you have a look at this post from which my answer is heavily influenced from: https://towardsdatascience.com/transformers-explained-visually-part-3-multi-head-attention-deep-dive-1c1ff1024853 [1] 3.2.2 https://arxiv.org/pdf/1706.03762.pdf
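A toy numpy sketch of the "segmented embedding" view described above. It uses identity projections only (a real transformer learns separate W_Q, W_K, W_V per head, which is what lets the heads diversify during training) — just enough to show each head attending within its own slice of the embedding:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, h):
    """X: (n_tokens, d_model).  Split d_model into h subspaces of size
    d_model // h, run scaled dot-product attention in each independently,
    then concatenate the per-head outputs back to (n_tokens, d_model)."""
    n, d_model = X.shape
    d_k = d_model // h
    heads = []
    for i in range(h):
        Q = K = V = X[:, i * d_k:(i + 1) * d_k]   # this head's subspace
        scores = softmax(Q @ K.T / np.sqrt(d_k))  # scaled dot-product
        heads.append(scores @ V)                  # (n, d_k)
    return np.concatenate(heads, axis=1)          # (n, d_model)
```

Because each head only ever sees its own d_k-wide slice, the attention patterns it forms are computed in a different representation subspace from the other heads'.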
{ "domain": "datascience.stackexchange", "id": 9613, "tags": "machine-learning, deep-learning, neural-network, transformer, attention-mechanism" }
Overlapping genetic information in eukaryotes
Question: In my research, I look at a lot of gene predictions / annotations. Frequently, I see loci where multiple gene models overlap. I haven't taken a systematic approach to analyzing these cases, but I do remember seeing quite a bit of variation in the direction of the overlapping genes (same vs different directions), the amount of overlap, and even the number of overlapping genes. I know enough about gene prediction to take any computational predictions with a grain of salt--even those supported by transcript and peptide alignments. However, these cases have me thinking--does overlap of genetic information really occur in eukaryotes? I seem to remember learning (or hearing anecdotally) that it can happen in prokaryotes, and that seems to be understandable given the compactness of prokaryotic genomes. But can this happen in eukaryotes? Has this been studied, and are there cases that have been confirmed experimentally? Answer: You might be interested in the INK4A locus (chromosome 9p), encoding both the p19 and p16 genes, very close to p15. You can read a description here. All three proteins are known experimentally to exist. Now, whether these are two different genes or the same gene with alternative splicing and start sites leading to different reading frames is up for discussion. The point is that p19 and p16 share DNA coding sequence but not protein sequence nor function.
{ "domain": "biology.stackexchange", "id": 105, "tags": "genetics, gene-annotation, eukaryotic-cells, genomics" }
Convert a FIR to an equivalent IIR
Question: Is there a way to convert a FIR to an IIR filter with the most similar behavior? Answer: I would say that the answer to your question - if taken literally - is 'no', there is no general way to simply convert an FIR filter to an IIR filter. I agree with RBJ that one way to approach the problem is to look at the FIR filter's impulse response and use a time domain method (such as Prony's method) to approximate that impulse response by an IIR filter. If you start from the frequency response then you have lots of methods for designing IIR filters. Even though it was published about 25 years ago, I believe that the method by Chen and Parks is still one of the better ways to approach the design problem. Another very simple method for the frequency domain design of IIR filters is the equation error method, which is described in the book Digital Filter Design by Parks and Burrus. I've explained it in this answer. If the phase response is of importance to you, then one problem you will be facing when designing IIR filters in the frequency domain is the exact choice of the desired phase response. If the overall shape of the desired phase is given you still have one degree of freedom, which is the delay. E.g., if the desired phase is $\phi_D(\omega)$, and the desired magnitude is $M_D(\omega)$ then your desired frequency response can be chosen as $$H_D(\omega)=M_D(\omega)e^{j(\phi(\omega)-\omega\tau)}\tag{1}$$ where $\tau$ is an unknown delay parameter. Of course you can say that if $\phi_D(\omega)$ is given then you don't want to modify it with an additional (positive or negative) delay. But it turns out that in practice the average delay is not always important, and - more importantly - for certain values of $\tau$ your approximation will be much better for a given filter order than for others. So the delay $\tau$ can become an additional design parameter and should be chosen optimally or at least reasonably. 
I've written a thesis on the design of digital filters with prescribed magnitude and phase responses. One chapter deals with the frequency domain design of IIR filters. That method can be used to design IIR filters with approximately linear phase in the pass-bands, or to approximate any other desired phase (and magnitude) response. The filters are not only guaranteed to be stable, but you can also prescribe a maximum pole radius, i.e., you can define a certain stability margin. You can also find this method in a paper published in the IEEE Transactions on Signal Processing.
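To make the time-domain route concrete, here is a least-squares Prony fit along the lines the answer points to — one textbook formulation (linear equations for the denominator from the tail of the impulse response, then the numerator by direct convolution), not the only one:

```python
import numpy as np

def prony_fit(h, p, q):
    """Fit an IIR model B(z)/A(z) (numerator order q, denominator order p)
    to an impulse response h.  For n > q the convolution a*h must vanish,
    giving linear equations for a[1..p]; b then follows by convolution."""
    h = np.asarray(h, dtype=float)
    # h[n] + sum_{k=1..p} a[k] h[n-k] = 0  for n = q+1 .. len(h)-1
    rows = np.array([[h[n - k] for k in range(1, p + 1)]
                     for n in range(q + 1, len(h))])
    rhs = -h[q + 1:]
    a_tail, *_ = np.linalg.lstsq(rows, rhs, rcond=None)
    a = np.concatenate(([1.0], a_tail))
    # b[n] = sum_{k=0..min(n,p)} a[k] h[n-k]  for n = 0..q
    b = np.array([sum(a[k] * h[n - k] for k in range(0, min(n, p) + 1))
                  for n in range(q + 1)])
    return b, a
```

Fitting the truncated impulse response of a known one-pole system recovers its coefficients exactly; for an arbitrary FIR the fit is only an approximation, which is the whole point of the question.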
{ "domain": "dsp.stackexchange", "id": 5279, "tags": "filters, filter-design, infinite-impulse-response, finite-impulse-response" }
Depth images with undetectable outlines
Question: I'm trying to capture depth images with an RGBD camera, but all of the images have undetectable outlines around objects: Is this expected from RGBD cameras or is mine a dud? It's an Astra Pro. Answer: That's perfectly normal and expected. Most RGBD cameras work by projecting a point pattern into the world and capturing it with an IR-camera. Camera and projector are positioned some cm apart (called baseline) to make triangulation possible. So there is always the chance that some parts of the world cannot be seen by the camera even though the projector puts some pattern on them (and vice versa). The camera also needs to see a small patch of the pattern and at depth jumps, some part of the pattern is missing.
{ "domain": "robotics.stackexchange", "id": 2003, "tags": "slam, computer-vision, cameras" }
How can the panel method be used to find drag and lift if it is for inviscid flow?
Question: Simple CFD software like XFoil and OpenVSP use the panel method to find an estimate of the drag and lift/pressure distributions. How do they find drag and lift if the panel method is based on inviscid flow? I thought that there can be no drag or lift created in a completely inviscid flow, which I think I do understand. (I am aware it's obvious the panel method can't calculate the skin friction purely from itself.) But it is not at all obvious to me regarding any form/induced drag. I'm at university but we mostly just use the software/equations; I recently took the time to actually try to understand the workings/derivations, so apologies if something obvious is missed. Answer: Questions about OpenVSP are best directed to the OpenVSP Google Group. Aviation SE is also a great place to ask aerodynamics questions. The potential flow parts of XFoil and VSPAERO can only calculate inviscid flow. XFoil is a 2D code, so its inviscid mode can not calculate any drag. VSPAERO is a 3D code, so its inviscid capability can estimate the induced drag. Both tools have (very different) ways of estimating viscous sources of drag. Remember, all models are wrong, some models are useful. XFoil uses an integral boundary layer model. This means that the boundary layer equations have been rearranged and manipulated into an integral. The boundary layer equations are parabolic, which means that information only flows in one direction and a space marching algorithm can be used. One of the thin boundary layer assumptions is that the pressure 'across' the boundary layer is constant. So, the 'inviscid' pressure is used as the pressure on the outside of the boundary layer. Integration starts at the stagnation point and proceeds to the trailing edge. XFoil has different modes that couple the inviscid and viscous solution in different ways.
In the fully coupled mode, I believe the boundary layer's momentum thickness is used to perturb the inviscid airfoil's shape (via a transpiration boundary condition) to iteratively get the pressure distribution to better match what the real flow would see. XFoil was developed for relatively low Reynolds numbers, so it works best in those situations. It has models for laminar separation bubbles, laminar to turbulent transition, and other complex phenomena. It is a very good 2D airfoil code whose solutions are widely used. VSPAERO's viscous drag estimate is far more primitive than XFoil's. It uses some empirical models to estimate the local skin friction drag. For wings, it uses models of airfoil drag coefficient as a function of Reynolds Number, Mach Number, lift coefficient, and thickness to chord ratio. For bodies, it uses a simple form factor and Reynolds number lookup for skin friction coefficient. VSPAERO's estimate is better than nothing, but not much. In particular, a panel code model will leave out lots of details (landing gear and other sometimes important chunks of the airframe. Protuberances, antennae, scoops and nozzles for cooling, hinge gaps, surface roughness, etc.) These all contribute to drag and are all left out of the model. So when VSPAERO gives you an unrealistic L/D, don't forget all the D that it doesn't know about. I expect some upgrades to VSPAERO's drag models to be in place by H2 2024 that will allow the user to specify arbitrary additional drag terms to be added to the aircraft -- and also for the user to specify airfoil drag models to be used for lifting surfaces. For now, I typically recommend users ignore VSPAERO's viscous drag term and instead use OpenVSP's parasite drag buildup tool (or their own tool) to estimate the friction drag of the aircraft. 
The main exception to this is if you are using the unsteady propeller/rotor capabilities of VSPAERO, in that case, the airfoil drag model is very useful at estimating the parasite power for the prop/rotor.
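To make the "parasite drag buildup" idea concrete, here is a hedged sketch using one common empirical correlation — Schlichting's turbulent flat-plate skin-friction fit. This is an assumption for illustration, not necessarily the exact model OpenVSP or VSPAERO uses:

```python
import math

def turbulent_cf(re):
    """Schlichting's turbulent flat-plate skin-friction fit:
    Cf = 0.455 / (log10 Re)^2.58."""
    return 0.455 / (math.log10(re) ** 2.58)

def parasite_drag_area(components):
    """Sum Cf * form_factor * wetted_area over (Re, FF, S_wet) tuples,
    giving the drag area f = D / q of the buildup."""
    return sum(turbulent_cf(re) * ff * s_wet for re, ff, s_wet in components)
```

At Re = 10^7 the fit gives Cf of about 0.003; multiplying by each component's form factor and wetted area and summing is the essence of a friction drag buildup — and, per the answer, anything not in the component list (gear, antennae, gaps) is drag the model simply doesn't see.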
{ "domain": "physics.stackexchange", "id": 98108, "tags": "fluid-dynamics, pressure, drag, viscosity, lift" }
What would happen to an unprotected human body on the surface of Venus?
Question: In the tremendous heat and pressure on the surface of Venus, what would happen to an exposed human body? Would it burn up, dissolve, mummify or something else? Presumably the water and fats would boil away. What would become of proteins and bone? Answer: The effect of the pressure is insignificant compared to the effect of the temperature. A pressure of 90 atm could even be survivable for a short time (in an argon atmosphere with 0.2 atm partial pressure of oxygen). A temperature of 400 °C causes mortal burning wounds within seconds (if the whole body is affected). Some seconds later the person is unconscious because of the overheating of the brain. Death occurs within a minute. The Venusian atmosphere is mainly CO2 (with a little nitrogen), thus oxidizing reactions won't happen with the body. Also, any rotting process is prevented. Mummification could happen - Venus is as dry as the Sahara. But proteins can't survive at this temperature for very long. The result would be a dry, charred, carbonized body, with nearly intact bones in it (CaCO3 is still stable at this temperature). If there is very little O2 at the Venusian surface (its high atmosphere has relatively more), then it would slowly oxidize the remains. In this case, the result will be a skeleton.
{ "domain": "physics.stackexchange", "id": 37837, "tags": "thermodynamics, pressure, planets, biophysics" }
How to adjust speed of turtlebot in Gmapping?
Question: I was looking through some of the launch and node files and wasn't sure where to begin with adjusting the turtlebot's speed when executing a goal in gmapping or amcl? I want the robot to move a bit slower to clean up some of the jerkiness. Also I'd like to include that I am using Diamondback on 10.10 with updated and basically stock turtlebot files (with the exception of adjusting the teleop packages to slow speed). I am assuming the gmapping works differently than the teleop. Am I wrong? Any advice appreciated. Originally posted by Atom on ROS Answers with karma: 458 on 2011-11-04 Post score: 3 Answer: Hi Atom, I haven't tried this with my TurtleBot, but I think you can lower the max_vel_x parameter in the file base_local_planner_params.yaml which is found in the config directory of the turtlebot_navigation package. The default value is 0.50 m/s. You might also try tweaking some other parameters in that file such as: min_vel_x: 0.10 max_rotational_vel: 1.5 min_in_place_rotational_vel: 1.0 acc_lim_th: 0.75 acc_lim_x: 0.50 acc_lim_y: 0.50 --patrick Originally posted by Pi Robot with karma: 4046 on 2011-11-05 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by Pi Robot on 2011-11-05: Great! If you are happy with the answer, please click on the little checkmark above (only you can do that) and then the answer will appear as "answered" in the index. Comment by Atom on 2011-11-05: I also reduced all the parameters you listed in half (trying to reduce the original speed by 1/2). Works better for me now, although I will try using turtlebot calibrate to clean things up, or just keep playing with the numbers till I find the sweet spot. Thanks Patrick Comment by Atom on 2011-11-05: I changed values in the params.yaml as you suggested. Seems to do the trick. Although I did notice that my mapping with turtlebot in rviz was a bit faster in rotation than in reality. Which gave me an inaccurate map.
So I ended up matching the values from my teleop.launch and that helped...
{ "domain": "robotics.stackexchange", "id": 7190, "tags": "navigation, turtlebot, gmapping" }
Is the term 'interpretation' in quantum mechanics the same as the term 'interpretation' in probability?
Question: For example, the prefix term 'micro-' in 'microfinance' has a different purpose from the 'micro-' in 'microeconomics'. I heard all 'interpretations' of quantum mechanics give exactly the same answer to every measurement so they are all equally correct. Is that the same use of the term 'interpretations' as in 'interpretations' of probability? Context: In 2011, I learned mathematical (frequentist) statistics as a quant undergrad. In 2014, I encountered the aforementioned. Sheldon: Okay. Um, what is the correct interpretation of quantum mechanics? Howard: Since every interpretation gives exactly the same answer to every measurement, they are all equally correct. However, I know you believe in the Many Worlds Interpretation, so I’ll say that. Now do you think I’m smart enough? In 2015, I discovered the Bayesian interpretation of probability as a quant grad (e.g. Bayesian logit model - intuitive explanation?) and that 99% of my statistics were frequentist. So Bayesians and frequentists interpret probability differently, leading to things like Lindley's paradox, but they both follow Kolmogorov's axioms and Bayes' theorem, so they will never differ on $\mathbb P(A)$ where $A$ is an event in $(\Omega, \mathscr F, \mathbb P)$. Is that the same idea as the use of the term 'interpretation' in quantum mechanics? If no, why exactly? If so, elaborate if you want. Answer: Interpretation in quantum mechanics is explaining how seemingly counter-intuitive quantum mechanical behavior comes about. There exist many interpretations. Interpretations of probability are explaining what probability is (i.e., how it is correctly defined). There are exactly two of them: frequentist and Bayesian interpretations - I do not discuss them here, since I have done it just yesterday in an answer to a different question. In other words, these are two different disciplines that require explaining/interpreting rather different things.
(Moreover, QM is a physical discipline, whereas probability theory is more a mathematical one, although the need for interpretation comes from applying it to real data - not necessarily physics data.) In terms of philosophical standpoints there is some overlap between the interpretations of QM and those of probability, as pointed out in the answer by @EricDavidKramer.
{ "domain": "physics.stackexchange", "id": 70856, "tags": "quantum-mechanics, quantum-interpretations, probability, mathematics" }
Even and Odd Filters and Quadrature
Question: I am a mathematician trying to understand some signal processing terms. My first question regards the terms 'even' and 'odd' filters. What does this mean? Does it reference the fact that upon a Fourier transform, there is a cosine and a sine function, which in turn are even and odd? If not, a link to a reliable source would be appreciated. Second question: What is a quadrature? I have heard the term used for the phase shift between two Fourier transforms, the method for dividing a frame into four parts, and taking a gradient-like subtraction from vertical, horizontal, and both diagonal directions. Is it all of these at once? Also, a link to a reliable source would be great. Answer: An even function is symmetric: $$f(x) = f(-x)$$ An odd function is anti-symmetric: $$-f(x) = f(-x)$$ A signal in quadrature with another has all its Fourier series components phase-shifted by 90 degrees. The two signals are thus orthogonal. To get the quadrature signal we apply the Hilbert transform (or Riesz transform in more than 1 dimension) $$g(x) = \pm \mathcal{H}[f](x)$$ A cosine wave is even and symmetric. A sine wave is odd and anti-symmetric. Therefore if you apply the Hilbert transform to an even signal (cosine Fourier series components) you get an odd signal (sine Fourier series components). If you apply the Hilbert transform again you get the negative original signal. Apply again you get the negative of the original Hilbert transformed signal. Apply again you get the original signal. This is where the "quad" part comes in. This is useful in signal analysis. Since the Hilbert transform only phase-shifts the Fourier series components, the energy of the signal remains constant. We can thus reconstruct the original signal (e.g. wavelet denoising). Also, given an even filter that responds to peaks and troughs, we can create an odd filter that responds to edges. The pair of filters is called a quadrature filter pair and allows the analysis of signal features. 
See the analytic signal for more details.
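A small FFT-based sketch of the relationships described above: the Hilbert transform turns an even cosine into an odd sine, and applying it four times returns the original signal — the "quad" in quadrature:

```python
import numpy as np

def quadrature(x):
    """Hilbert transform via FFT: shift every positive-frequency component
    by -90 degrees (multiply by -1j) and every negative-frequency component
    by +90 degrees (multiply by +1j); DC and Nyquist bins are zeroed."""
    N = len(x)
    X = np.fft.fft(x)
    mult = np.zeros(N, dtype=complex)
    mult[1:N // 2] = -1j
    mult[N // 2 + 1:] = 1j
    return np.real(np.fft.ifft(X * mult))
```

For an exactly periodic cosine the FFT-based transform is exact, which makes the cos -> sin -> -cos -> -sin -> cos cycle easy to check numerically.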
{ "domain": "dsp.stackexchange", "id": 2988, "tags": "image-processing, fourier-transform" }
Dynamic Time Warping is outdated?
Question: At http://www.speech.zone/exercises/dtw-in-python/ it says Although it's not really used anymore, Dynamic Time Warping (DTW) is a nice introduction to the key concept of Dynamic Programming. I am using DTW for signal processing and am a little surprised: What is used instead? Answer: I wouldn't consider DTW to be outdated at all. In 2006 Xi et al. showed that [...] many algorithms have been proposed for the problem of time series classification. However, it is clear that one-nearest-neighbor with Dynamic Time Warping (DTW) distance is exceptionally difficult to beat. The results of this paper are summarized in the book "Temporal Data Mining" by Theophano Mitsa as follows: In [Che05a], a static minimization–maximization approach yields a maximum error of 7.2%. With 1NN-DTW, the error is 0.33% with the same dataset as in the original article. In [Che05b], a multiscale histogram approach yields a maximum error of 6%. With 1NN-DTW, the error (on the same data set) is 0.33%. In [Ead05], a grammar-guided feature extraction algorithm yields a maximum error of 13.22%. With 1NN-DTW, the error was 9.09%. In [Hay05], time series are embedded in a lower dimensional space using a Laplacian eigenmap and DTW distances. The authors achieved an impressive 100% accuracy; however, the 1NN-DTW also achieved 100% accuracy. In [Kim04], Hidden Markov Models achieve 98% accuracy, while 1NN-DTW achieves 100% accuracy. In [Nan01], a multilayer perceptron neural network achieves the best performance of 1.9% error rate. On the same data set, 1NN-DTW’s rate was 0.33%. In [Rod00], first-order logic with boosting gives an error rate of 3.6%. On the same dataset, 1NN-DTW’s error rate was 0.33%. In [Rod04], a DTW-based decision tree gives an error rate of 4.9%. On the same dataset, 1NN-DTW gives 0.0% error. In [Wu04], a super-kernel fusion set gives an error rate of 0.79%, while on the same data set, 1NN-DTW gives 0.33%.
Please see the original book for a list of the mentioned references. An important thing to note here is the fact that Xi et al. even managed to beat the performance of an MLP back in 2006. Even though the situation might look a bit different these days (as we have better and faster Deep learning algorithms at hand), I would still consider DTW a valid option to look into when it comes to signal classifications. Update I would also like to add a link to a more recent paper called "The Great Time Series Classification Bake Off: An Experimental Evaluation of Recently Proposed Algorithms" from 2016. In this paper, the authors "have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other)". The following quotes from the paper stress that DTW is (or at least was in 2016) indeed still relevant: Many of the algorithms are in fact no better than our two benchmark classifiers, 1-NN DTW and Rotation Forest. For those looking to build a predictive model for a new problem we would recommend starting with DTW, RandF and RotF as a basic sanity check and benchmark. Received wisdom is that DTW is hard to beat.
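For reference, the dynamic-programming recurrence behind 1NN-DTW is short enough to sketch in full (no warping window or other speedups here):

```python
def dtw(s, t, dist=lambda a, b: abs(a - b)):
    """Classic O(len(s) * len(t)) dynamic-programming DTW distance."""
    INF = float('inf')
    n, m = len(s), len(t)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(s[i - 1], t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Pairing this distance with a 1-nearest-neighbor classifier gives exactly the 1NN-DTW baseline that the quoted comparisons are measured against.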
{ "domain": "datascience.stackexchange", "id": 3946, "tags": "time-series" }
How to identify low or high pressure area
Question: If rain is due to the flow of humid air from a high pressure area to a low pressure area, without the use of devices, technology or mass media is it possible to detect regions of high or low atmospheric pressure and know their locations? For example: from my terrace or open land - and how do I spot regions of low or high atmospheric pressure? Answer: Because you specifically asked about winds and pressure, there is a fairly applicable rule of thumb. It's called Buys Ballot's Law. Basically, if the wind is to your back (coming behind you), and you're in the Northern Hemisphere, generally low pressure is to your left and high pressure is to your right. This graphic from http://www.maiamarinelli.com/ illustrates it: However, do note that Buys Ballot is really an imperfect estimation. There's three big caveats/adjustments to consider: [in all following description, replace left with right (and vice versa) for Southern Hemisphere] The Buys Ballot Relationship fundamentally relies upon the Earth's rotation, i.e. the always vaunted apparent force termed the Coriolis Effect. Coriolis actually causes wind NOT to flow towards low pressure, but instead along equal pressure contours. If it weren't for the Coriolis Effect, low pressure centers would quickly fill in with air, and we wouldn't have the complex weather that we have. That said, at the equator, where the differences in rotation with latitude are smallest, Buys Ballot no longer works as there's limited/no Coriolis. So the nearer the equator you are, the more low pressure would be more infront left, or even straight in front of you if wind is to your back. Friction also slightly imbalances the relationship near the ground, such that wind actually flows towards low pressure again. So once again, wind to your back may actually indicate the low pressure center is more to your front left than your left at ground level. 
If you were up off the ground, or out in a body of water, this friction adjustment would be minimized, and low pressure is indeed more to your left. Local effects, such as terrain (or buildings), definitely overwhelm the larger pattern at times. And because the Coriolis force isn't significant in smaller, more localized features, Buys Ballot's becomes shakier in those situations. The impetus from the broader wind flow does feed down to guide even small local pressure features. So wind to your back may well be telling you that a smaller area of low pressure is somewhere between to your left and in front of you... but that low pressure could be local: a building... a thunderstorm... even a tornado. That said, as the pressure gradient gets greater, Coriolis is more and more marginalized, and the low pressure can be almost directly in front of you... and there are even rare circumstances where the gradient is so strong that the low can rotate backwards, and low pressure may be to your right. As far as I know, this scenario can only maintain itself at all in strong thunderstorms, with the quite rare antimesocyclones and anticyclonic tornadoes. However, perhaps it could also happen on the microscale, such as around buildings, but those circumstances are not within my area of knowledge. So all that put together: If indeed you can get away from those big buildings, are fairly removed from the regional impacts of mountains and smaller nearby terrain features, and ideally can escape areas with dense foliage... then if the wind is coming from behind you, you can be almost certain that large low pressure is to your left or front left (in the NH; right/front right in the SH)... if you can find the wind direction. And if there isn't much wind? Well then you're probably fairly near a high pressure center! But that will make it fairly tough to diagnose exactly where high pressure centers/low pressure centers are in relation to you. (And if there is a great deal of wind... 
well maybe you should just get inside, and turn on your tv to find out when the big tornado/hurricane/winter storm is going to end!) Other options: You could also use a barometer to diagnose falling/rising pressure trends. Those indicate that a low/high is moving towards you. However, this doesn't tell you what direction the low or high is located [though you can often make an educated guess based upon predominant wind flow; most pressure systems come from the west in the mid-latitudes, which include much of the US/Europe/Asia]. As gansub noted in the comments, you could watch how atmospheric conditions evolve, such as the cloud patterns/types. https://www.youtube.com/watch?v=tsD6zkBMmck is a reasonable description of what goes on around fronts, which could help alert you to incoming storms (both warm and cold fronts are dips of low pressure, and indicate a large low pressure system is coming). The continuing lack of cloudcover might be an indication of high pressure in the vicinity.
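For what it's worth, the rule of thumb above is simple enough to write down in a few lines; this is just a hypothetical helper encoding the answer's statement, not anything from the original post:

```python
# A minimal, hypothetical encoding of Buys Ballot's rule of thumb: with the
# wind at your back, low pressure sits to your left in the Northern
# Hemisphere and to your right in the Southern Hemisphere.
def low_pressure_side(hemisphere):
    h = hemisphere.strip().upper()
    if h.startswith("N"):
        return "left"
    if h.startswith("S"):
        return "right"
    raise ValueError("near the equator the rule breaks down (little/no Coriolis)")

print(low_pressure_side("Northern"))  # left
print(low_pressure_side("southern"))  # right
```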
{ "domain": "earthscience.stackexchange", "id": 977, "tags": "wind, sea-level, rainfall, barometric-pressure, air-currents" }
In special relativity what is the energy of a macroscopic body?
Question: Is the energy of a macroscopic body in special relativity still given by: $E=\gamma m c^2$? If so why do we not need to consider the motion of the individual particles that make it up? Is this because the rest mass $m$ includes this motion? If this is the case and the rest mass includes all the energies of the particles that it is made up from then why is the rest mass invariant? Since the individual energies of these particles are not invariant, so why taken as a whole why do they become invariant? (any formulas of how to get from energies of particles to the rest mass of the whole macroscopic body would be appreciated) Answer: Assuming the body is rigid (i.e. the constituent particles don't move relative to each other - a good approximation when an everyday object is travelling at relativistic speeds), then $\gamma$ will be the same for all particles. Thus: $E = \gamma m_1c^2 + \gamma m_2c^2 = \gamma(m_1+m_2)c^2 = \gamma mc^2$.
{ "domain": "physics.stackexchange", "id": 19093, "tags": "special-relativity, energy" }
"Exactly Equal" and "At Least" in electron excitation
Question: When we examine IR spectra, we see troughs corresponding to absorption at exactly a specific frequency that corresponds to the energy needed to stretch certain bonds (although translational motions and intermolecular forces can broaden the accepted frequency). Coordination compounds absorb light at exactly the frequency corresponding to the crystal-field splitting energy. However, when we talk about energy from photons required to break a bond, we say "at least" the frequency instead of "exactly" in the examples above. Take $\ce{Cl-Cl}$ for example: we say that the photon must have at least a frequency of 607 THz (or wavelength of no more than 496 nm). There is still a definite energy involved here: 242 kJ/mol to promote an electron from a bonding MO in the molecule to an anti-bonding MO. As another example, when we talk about ionizing radiation, we say that there is at least a certain amount of energy needed to ionize a compound. For example, my textbook says that a photon needs "at least 1216 kJ/mol" to ionize water. There is still a definite energy level involved: bringing the bonding electron from its negative potential energy MO to 0. In all of these examples, definite energies were involved. Why is it that sometimes we say that a photon needs to have exactly the energy needed, and other times the minimum energy needed? Answer: When you ionize or break something, the resulting particles (an electron and an ion, or maybe two atoms, or whatever) would fly away from each other with arbitrary speed. Their kinetic energy would absorb any excess energy of the photon. This is the "at least" case. On the other hand, when you excite some particle (molecule or whatever) in such a way that it remains in one piece, then you can't send it flying, because momentum conservation forbids that. This is the "exactly equal" case. 
Surely this is a simplification; in fact, a photon may excite a molecule using only a part of its energy, while keeping the rest of it to itself (Raman scattering), but that's another story.
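As a sanity check on the numbers quoted above (242 kJ/mol, 607 THz, 496 nm), here is a short sketch converting the molar bond energy into a per-photon threshold via $E = h\nu = hc/\lambda$:

```python
# Sanity check of the quoted Cl-Cl numbers: convert the bond energy
# (242 kJ/mol) into the minimum photon frequency/wavelength via
# E = h*nu = h*c/lambda, dividing by Avogadro's number for one photon.
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
N_A = 6.02214076e23     # Avogadro constant, 1/mol

def photon_threshold(bond_energy_kj_per_mol):
    """Return (frequency in THz, wavelength in nm) of the threshold photon."""
    e_photon = bond_energy_kj_per_mol * 1e3 / N_A   # J per photon
    nu = e_photon / H                                # Hz
    return nu / 1e12, C / nu * 1e9

freq_thz, wavelength_nm = photon_threshold(242.0)
print(f"{freq_thz:.0f} THz, {wavelength_nm:.0f} nm")  # ~607 THz, ~494 nm
```

The computed wavelength lands within a couple of nanometres of the 496 nm quoted in the question, consistent with rounding of the bond energy.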
{ "domain": "chemistry.stackexchange", "id": 5443, "tags": "energy, molecular-orbital-theory, photochemistry, ionization-energy" }
Does Low Temperature Attract Negatively Charged Particles?
Question: I'm studying the effects of temperature in producing electricity, as in clouds that produce lightning. Since dark clouds during a thunderstorm are below freezing point, they attract negatively charged particles, producing lightning when positively charged particles meet the negatively charged particles in the cloud. So, the question is: does temperature (like below freezing point) attract negatively charged particles that help produce lightning? Thanks for your help. Answer: Temperature is a statistical measure; it depends on ensembles of particles and provides context for chemical interactions. If the little balls in the picture represent molecules of air, there is no way for "temperature" to interact individually with the particles. Certainly, electromagnetic attraction and repulsion have no meaning attributed to temperature. Offhand one would expect that with smaller kinetic energy fewer electrons will come off in interactions than with higher kinetic energy molecules.
{ "domain": "physics.stackexchange", "id": 62137, "tags": "electricity, temperature, lightning, freezing" }
Sensor compatibility with power source
Question: I have a sensor whose datasheet mentions that it should be connected to 5V Vcc. A green LED glows on the sensor when it is powered from the source at 5V. However, even if I power it with 3.3V, the green LED still glows. Does this affect the working of the sensor in any way? If somebody could please let me know about this, I would be very thankful. Answer: If it says 5V, power it with 5V. The fact that the LED is glowing does not necessarily mean that the sensor is operating as it should. You can only safely say that the LED is receiving enough voltage to operate (nothing more, nothing less), unless you have the manual of the sensor that says otherwise (e.g. if the green light on the LED is indicative of an appropriate power supply).
{ "domain": "engineering.stackexchange", "id": 3694, "tags": "sensors" }
Position eigenstates in curved space
Question: How does one define position eigenstates in curved space (say a manifold $\mathcal{M}$)? Let us say that it is defined as usual $$\hat{x}|x\rangle = x|x\rangle$$ Then how does one define the identity operator in this basis? Is this proposal correct? $$\int_{\mathcal{M}} d^4x~\sqrt{g}|x\rangle\langle x| = \bf{I}$$ where $g$ is the metric on $\mathcal{M}$. Can anyone provide an appropriate reference in this regard? Answer: This is more a question of how to normalize the eigenstates. This is just a convention that one can choose. Myself, I like to write $\langle x|x'\rangle= \delta_g^n(x-x')$ where the delta function is defined by $$ \int_M d^nx \sqrt{g} \,\delta^n_g(x-x')=1, $$ but other choices may be preferable. What is important is that you state your normalization conventions somewhere when attempting to communicate with others.
{ "domain": "physics.stackexchange", "id": 80232, "tags": "quantum-mechanics, differential-geometry, qft-in-curved-spacetime" }
Why does holding a hot object with a cloth make it feel less hot?
Question: Let's say that I held a hot object with a warm cloth. It instantly feels less hot and only warm to the touch. This is because the cloth is an insulator and doesn't allow as large a heat transfer as if I held the object with my bare hands. However, I presume that eventually the cloth will reach the same temperature as the object in question when it reaches steady state. In that scenario, the same amount of heat must be transferred from the object to my hand as before. So why can I perpetually hold, say, a hot pan with oven mitts without burning my hand? Answer: Your body’s circulatory system is removing heat from your hand inside the oven mitt. This makes your whole body act as a radiator to dissipate the slight temperature increase in your hand and keep the temperature from rising too much inside the mitt. The mitt acts as an insulator and slows the heat transfer into your hand to a rate that can be radiated by the body. So the heat goes from “hot object” to “mitt” to “hand” to “the rest of the human body” to “ambient air.” You need an insulator so the transfer from “hot object” to “mitt” is much slower than from “mitt” to “ambient air” (through your hand and body). If the system were closed, then it would eventually heat up inside the mitt, as you suggested. An example of this is firefighters. They wear insulation all over their body and go into burning buildings. But they are super quick because unless the insulator is perfect, which it isn’t, they will eventually cook to death as the inside of the suit warms up. Despite using really good insulators! If there’s nowhere for the heat to escape, it will eventually heat up. So your mitt isn’t even close to what firefighters use, but you can hold a hot pot forever because your body is given enough time to remove the heat without getting burned.
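The series heat path described above ("hot object" → "mitt" → "hand" → "body") can be sketched with a toy thermal-resistance model; all numbers here are hypothetical, chosen only to show how a large mitt resistance throttles the flow:

```python
# Toy steady-state model of the heat path: pot -> mitt -> hand/body core,
# treated as thermal resistances in series, q = deltaT / sum(R).
# ALL numbers are hypothetical, chosen only for illustration.
def heat_flow(t_hot, t_core, resistances):
    """Steady-state heat flow in watts through series resistances (K/W)."""
    return (t_hot - t_core) / sum(resistances)

R_MITT, R_TISSUE = 10.0, 1.0      # hypothetical K/W values
q_with_mitt = heat_flow(200.0, 37.0, [R_MITT, R_TISSUE])
q_bare_hand = heat_flow(200.0, 37.0, [R_TISSUE])
print(round(q_with_mitt, 1), round(q_bare_hand, 1))  # 14.8 163.0
```

With these illustrative values, the mitt cuts the heat flow by an order of magnitude, down to something the circulatory system can plausibly carry away.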
{ "domain": "physics.stackexchange", "id": 71604, "tags": "thermodynamics, temperature, everyday-life, thermal-conductivity" }
Code coverage with pytest-based ROS tests
Question: Hi, I'm trying to add code coverage to my pytests, following the instructions from the code_coverage repo, except that I use python3-coverage. I run my rostest with pytest using the Pytest integration instructions, so I run both prefixed with python3-coverage run -p. The rostest file looks like: <launch> <node name="config_server" pkg="config_server" type="config_server_node.py" launch-prefix="python3-coverage run -p"> </node> <param name="test_module" value="test_config_server.py"/> <test test-name="config_server_test" pkg="config_server" type="pytest_runner.py" time-limit="60.0" launch-prefix="python3-coverage run -p"/> </launch> As explained in the issue I opened on the code_coverage repo, I'm actually getting a coverage report on screen, but all files are empty! nodes/config_server_node.py 7 0 100% src/config_server/config_server.py 138 8 94% (see the whole command line output on the issue for details) Has anyone tried this before and has an example of how to do it? Thanks a lot Originally posted by jorge on ROS Answers with karma: 2284 on 2022-08-09 Post score: 1 Answer: My bad: the Python code coverage goes somewhere else: Python code coverage html-format: /home/jorge/.ros/htmlcov/index.html. Originally posted by jorge with karma: 2284 on 2022-08-26 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 37912, "tags": "rostest" }
Why does ML's Value Restriction stop you from capturing parametric values?
Question: Here the MLton manual explains ML's value restriction. Specifically, it disallows functions to take on multiple parametric instantiations if they have a closure over local variables. Edit: note from the answer that the MLton manual is wrong about the value restriction. See instead Real World Ocaml. Why is this a problem in ML when C# can easily form closures over parametric variables? For example, this code is invalid, because ML will not allow this function to parameterize over int and string in the same program, because it has a let binding. val f = let in fn x => x end val _ = (f "foo"; f 13) However, I can easily make a C# function which captures a parameterized variable, and parametrically instantiate it over both int and string. I've made the example more convincing, because a literal translation of the above ML is a trivial C# program. class Program { static Func<A> foo<A>(Func<A,A,A> fn, A initial, A accumulate) { A cur = initial; return () => { cur = fn(cur,accumulate); return cur;}; } static void Main(string[] args) { var my_bar_a = foo( (x,y) => { return x+y;}, 0, 1); var my_bar_b = foo( (x,y) => { return x+y; }, ":", "-"); Console.WriteLine("a: " + my_bar_a().ToString()); Console.WriteLine("a: " + my_bar_a().ToString()); Console.WriteLine("b: " + my_bar_b().ToString()); Console.WriteLine("b: " + my_bar_b().ToString()); Console.WriteLine("a: " + my_bar_a().ToString()); Console.WriteLine("b: " + my_bar_b().ToString()); } } output as expected: a: 1 a: 2 b: :- b: :-- a: 3 b: :--- Why does ML prevent us from doing this? Answer: However, I can easily make a C# function which captures a parameterized variable, and parametrically instantiate it over both int and string. The equivalent to code forbidden by value restriction would need a generic local variable of type which would be written something like Func<A, A> foo<A> = ..., which isn't legal in C# (though it isn't enough to violate value restriction, it's required). 
In your case my_bar_a and my_bar_b do not themselves have generic parameters. If C# did allow generic local variables then it would need something like the value restriction.
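To see the restriction in action in an ML-family language, here is a short OCaml sketch (OCaml's relaxed value restriction behaves analogously for this example); the toplevel output in the comments is quoted from memory and may read `'_a` instead of `'_weak1` on older compilers:

```ocaml
(* Because the right-hand side is a function application, not a syntactic
   value, f is given a weak (non-generalized) type variable: *)
let f = (fun x -> x) (fun x -> x)
(* toplevel reports something like:  val f : '_weak1 -> '_weak1 *)

let _ = f 13        (* this use pins f's type to int -> int ...        *)
(* let _ = f "foo"     ... so this line would now be a type error      *)
```

This is exactly the situation the question's SML snippet triggers: the binding looks polymorphic, but because its right-hand side is not a syntactic value, it cannot be used at both `int` and `string`.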
{ "domain": "cstheory.stackexchange", "id": 4022, "tags": "ocaml" }
The quantum hall effect and Hofstadter's butterfly spectrum
Question: What is the connection between the quantum Hall effect and Hofstadter's butterfly spectrum? I mean, can I understand something about the quantum Hall effect from Hofstadter's butterfly spectrum? Answer: When you study the quantum Hall effect, and more precisely the Hall conductivity, you learn that the latter is related to a certain quantity called the Chern number, $\; \sigma_{xy}= \frac{-e^{2}}{2\pi\hbar} C \; $, which is also related to the Berry phase. When you consider particles on a lattice in a magnetic field, the resulting Schrodinger equation (under certain conditions) yields the Harper equation, which gives Hofstadter's spectrum (for rational values of the flux). For this lattice model we can compute the Chern number, which leads to the Hall conductivity as mentioned above. This is the only link that I see; hope it helps you a little bit.
{ "domain": "physics.stackexchange", "id": 67219, "tags": "topological-insulators, quantum-hall-effect" }
Would sound intensity weaken or not in 1D space air with perfect elastic collision?
Question: Let's assume 2 things: we have a 1D space of air, and we have perfectly elastic collisions. While I talk, would the intensity created at my mouth be the same at the receiver? If the answer is yes, I don't understand. Let me explain why. In a perfectly elastic collision, in order for the intensity to be the same at the receiver, one thing should happen: each particle should give its whole kinetic energy to the next particle while colliding with it. Otherwise, if a bounce-off happens, the next particle, which was at rest before the collision, wouldn't have the same energy; then the cycle repeats and the 3rd particle wouldn't have the same KE as the 2nd one, gradually decreasing each particle's energy. So why would it be correct that the intensity would still be the same at the receiver under these conditions? Maybe I'm misunderstanding the definition of intensity. If you say that under these conditions a particle would transfer all its KE to the next one and so on, I would ask: why? Answer: There is some confusion about what an elastic collision is. Your confusion comes from a misunderstanding of what an elastic collision means. I understand your confusion. The issue here is that you think that a rubber ball has elastic collisions; it does not, it actually experiences inelastic collisions. (This deserves its own question, and it is extremely stupid... like many things in science.) What occurs in an elastic collision: in an elastic collision between A and B, since this is 1D space, all of A's kinetic energy is directed at B, therefore all of A's kinetic energy is transferred to B, and all of B's kinetic energy is transferred to A. What does this mean? If B was stationary with 0 joules of kinetic energy, and A was moving with 1000 joules of kinetic energy, after the collision B will have 1000 joules of kinetic energy, and A will have 0 joules of kinetic energy. Therefore B will be moving, and A will be stationary. 
Multi-dimensional space: In 2D/3D space, most collisions involve vectors that won't transfer 100% of the energy from A to B... This is because the only energy that is transferred is the energy that is pointed at B. All energy that is perpendicular to B is not transferred. So in non-1D space, when A collides with B, all of A's KE directed at B is transferred to B, and all of B's energy that is directed at A is transferred to A. Any energy perpendicular to A/B remains with A/B and is not transferred.
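The 1D equal-mass case described above can be checked against the standard elastic-collision formulas (momentum plus kinetic-energy conservation); this is a generic sketch, not code from the answer:

```python
# Standard 1D elastic-collision result from momentum + kinetic-energy
# conservation; for equal masses the velocities simply swap, so a moving
# particle hands ALL of its KE to a stationary one (no partial bounce-off).
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a perfectly elastic 1D collision."""
    v1p = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1p, v2p

# Equal masses, B initially at rest: A stops dead, B moves off with A's speed.
v1p, v2p = elastic_1d(1.0, 5.0, 1.0, 0.0)
print(v1p, v2p)  # 0.0 5.0
```

For unequal masses the formulas show the partial bounce-off the asker worries about; equal masses (identical air "particles") are the special case with complete transfer.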
{ "domain": "physics.stackexchange", "id": 95746, "tags": "waves, conservation-laws, acoustics" }
Is this proper usage of "safe publication"?
Question: With regards to Safe Publication, consider this piece of code: public class Test { public static void main(String args[]) { ThreadB t = new ThreadB(); t.start(); t.Grab(new Bag("HI")); } } class Bag { private String item; // <-- no final here public String Item() { return item; } public Bag(String item) { this.item = item; } } class ThreadB extends java.lang.Thread { private Bag bag; public void Grab(Bag bag) { this.bag = bag; } @Override public void run() { try { while (true) { if (bag != null) { System.out.println(bag.Item()); } } } catch (Exception e) { e.printStackTrace(); } } } Is it guaranteed that thread B sees the item ("HI") in bag, Or is it true that thread B may in fact see the item in the bag as null, and will never see the item "HI" in the bag? Answer: Chapter 3.5. Safe Publication in Java Concurrency in Practice contains a very similar example. To cut a long story short: it's not thread safe. ThreadB could see its bag reference as null. Furthermore, ThreadB could see its bag reference as not null, but the referenced Bag's item field could be seen as null by ThreadB. On my machine sometimes it's an endless loop but I can't reproduce the second scenario when just the Bag.item is null. 
Please note that making the Bag.item field final does not make it thread-safe, access to ThreadB.bag also have to be synchronized: public class ThreadB extends java.lang.Thread { private Bag bag; public synchronized void setBag(Bag bag) { this.bag = bag; } @Override public void run() { try { while (true) { synchronized (this) { if (bag != null) { System.out.println(bag.getItem()); break; } } } } catch (Exception e) { e.printStackTrace(); } } } Or you can use an AtomicReference: public class ThreadB extends java.lang.Thread { private final AtomicReference<Bag> bag = new AtomicReference<Bag>(); public void setBag(final Bag bag) { this.bag.set(bag); } @Override public void run() { try { while (true) { Bag bag = this.bag.get(); if (bag != null) { System.out.println(bag.getItem()); break; } } } catch (Exception e) { e.printStackTrace(); } } } Some other notes: 1, public String Item() { return item; } From Code Conventions for the Java Programming Language: Methods should be verbs, in mixed case with the first letter lowercase, with the first letter of each internal word capitalized. So, this method should be called getItem: public String getItem() { return item; } 2, According to the same document, chapter File Organization, methods should appear after constructors: public class Bag { private String item; public Bag(String item) { this.item = item; } public String getItem() { return item; } }
{ "domain": "codereview.stackexchange", "id": 940, "tags": "java, thread-safety" }
Compensate for FIR filter attenuation
Question: Is there a simple way to compensate for the reduction in amplitude of a low pass filter of a noisy sine wave signal that has been smoothed using a FIR filter so that the amplitude of the smoothed sine wave matches that of the approximately known amplitude of the noisy original signal? The approximate amplitude of the original data could be estimated empirically from the RMS value over one full (estimated) period. The FIR filter coefficients I am using are h = [ 1 2 3 3 2 1 ] / 12 Answer: FIR definition: $$ y[n] = \sum_{k=0}^{N} { b_k x[n-k] } $$ Sinusoid signal definition: $$ x[n] = M \cos( \alpha n + \phi ) $$ A whole bunch of math: $$ y[n] = M \sum_{k=0}^{N} { b_k \cos( \alpha (n-k) + \phi ) } $$ $$ y[n] = M \sum_{k=0}^{N} { b_k [ \cos( \alpha n + \phi ) \cos( \alpha k ) + \sin( \alpha n + \phi ) \sin( \alpha k ) ] } $$ $$ y[n] = M \cos( \alpha n + \phi ) \sum_{k=0}^{N} { b_k \cos( \alpha k ) } + M \sin( \alpha n + \phi ) \sum_{k=0}^{N} { b_k \sin( \alpha k ) } $$ $$ y[n] = A \cos( \alpha n + \phi ) + B \sin( \alpha n + \phi ) $$ $$ A = M \sum_{k=0}^{N} { b_k \cos( \alpha k ) } $$ $$ B = M \sum_{k=0}^{N} { b_k \sin( \alpha k ) } $$ $$ y[n] = M_2 \cos( \alpha n + \phi + \theta ) $$ $$ y[n] = M_2 \cos( \theta ) \cos( \alpha n + \phi ) - M_2 \sin( \theta ) \sin( \alpha n + \phi ) $$ $$ A = M_2 \cos( \theta ) $$ $$ B = -M_2 \sin( \theta ) $$ $$ A^2 + B^2 = M_2^2 = M^2 \left[ \left( \sum_{k=0}^{N} { b_k \cos( \alpha k ) } \right)^2 + \left( \sum_{k=0}^{N} { b_k \sin( \alpha k ) } \right)^2 \right] $$ Your desired equation: $$ \frac{M}{M_2} = \frac{ 1 }{ \sqrt{ \left( \sum_{k=0}^{N} { b_k \cos( \alpha k ) } \right)^2 + \left( \sum_{k=0}^{N} { b_k \sin( \alpha k ) } \right)^2 } } $$ I think I've done the math right. The $b_k$s are your FIR coefficients and $\alpha$ is your frequency in radians per sample. $M_2$ is the amplitude of your smoothed sinusoid and $M$ is the original amplitude. So you want multiply your smoothed results by $ \frac{M}{M_2} $. 
Hope this helps. I just did this and haven't tested it. Ced
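A quick numerical check of the final formula with the asker's coefficients h = [1 2 3 3 2 1]/12 (the test frequency below is just an example):

```python
# Numerical check of the derived compensation: for FIR coefficients b_k,
# the amplitude gain at angular frequency alpha (radians/sample) is
# sqrt((sum b_k cos(alpha k))^2 + (sum b_k sin(alpha k))^2), so the
# smoothed sine should be multiplied by 1/gain.
import math

def gain(b, alpha):
    re = sum(bk * math.cos(alpha * k) for k, bk in enumerate(b))
    im = sum(bk * math.sin(alpha * k) for k, bk in enumerate(b))
    return math.hypot(re, im)

b = [v / 12.0 for v in (1, 2, 3, 3, 2, 1)]
alpha = 2 * math.pi * 0.05                 # 20 samples per period

n = 4000                                   # a whole number of periods
x = [math.cos(alpha * i) for i in range(n + len(b))]
y = [sum(bk * x[i - k] for k, bk in enumerate(b))
     for i in range(len(b), len(b) + n)]

# Measure the filtered amplitude via RMS over whole periods: M2 = sqrt(2*mean(y^2)).
m2 = math.sqrt(2.0 * sum(v * v for v in y) / len(y))
print(round(m2, 4), round(gain(b, alpha), 4))  # both ~0.9087
```

The measured amplitude of the filtered unit sine matches the predicted gain, so multiplying by `1/gain(b, alpha)` restores the original amplitude at that frequency.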
{ "domain": "dsp.stackexchange", "id": 5960, "tags": "discrete-signals, lowpass-filter, finite-impulse-response" }
Many World's Hypothesis
Question: According to the Many Worlds interpretation of quantum mechanics by Hugh Everett, taking the double slit experiment as an example, every possible outcome that can happen does happen. And the chances of any specific interaction occuring in our universe is based on its probability. Imagine that we do the double slit experiment a million times. If this interpretation is correct, then shouldn't we get an irregular or different observation in at least one of those experiments? What gives us a perfect observation every time? Answer: Many worlds is an interpretation. It is not a separate theory. It is a story we tell about the theory. The actual predictive part of the theory is in the mathematics. Thus, it does not, and cannot, make any predictions that are different from standard theory.(*) Any controversy is over whether it is the "right" story to tell and whether or not it fully agrees with the picture in the mathematics. Also, in theory, you are right, if you repeat a quantum-probabilistic experiment a huge number of times you will see very unlikely events. But that has nothing to do with what interpretation of quantum mechanics you are using - it's purely how probability works. If some event has a fixed probability $P$ of occurring on any given trial, then probability and statistics alone tell us that if we repeat that trial $N$ times, there is probability $1 - (1 - P)^N$ to observe the event during those $N$ trials, which approaches $1$ as $N \rightarrow \infty$ so long as $P > 0$. For any individual double slit experiment, there is a probability density $p(x)$ for an agent (i.e. you, watching the experiment) to acquire the information "It hit spot $x$ on the target". Actual probability is, of course, the integral to hit within a particular region: $P[R] := \int_R p(x)\ dx$. This comes from the mathematics of quantum mechanics itself, so it holds on ALL interpretations. 
I believe what you're asking is: "is it possible that, if we do it enough, we observe something extremely unlikely, like all the electrons or photons ending up in one narrow area"? The answer is yes - at least insofar as we are willing to bet that quantum mechanics is a totally valid theory, of course. If we fire $N$ particles, the probability to see them all end up in the one small area $R$ is $1 - (1 - P[R])^N$, just as before. If $R$ is very thin and $N$ very large, this will be VERY small (makes the lottery look like fate by comparison). But not zero. (*) There is one caveat here: Some "interpretations" do try to alter the mathematics, esp. with an aim toward simplifying it, like trying to derive the Born rule, which relates wave functions to probability amplitudes, from some more elementary principle. But these really, then, should be called separate "quantum-like" theories, not interpretations.
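The probability bookkeeping used above is easy to reproduce; a tiny sketch:

```python
# The probability arithmetic used in the answer: with per-trial probability p,
# the chance of seeing the event at least once in n trials is
# 1 - (1 - p)**n, which tends to 1 for any p > 0 as n grows.
def at_least_once(p, n):
    return 1.0 - (1.0 - p) ** n

for n in (10, 1_000, 1_000_000):
    print(n, at_least_once(1e-4, n))
# the last value is indistinguishable from 1 in floating point
```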
{ "domain": "physics.stackexchange", "id": 53386, "tags": "quantum-mechanics, double-slit-experiment, quantum-interpretations" }
Is collision theory applicable only for molecules?
Question: $$\ce{Na (s) + Cl_2 (g)->NaCl(s)}$$ Is collision theory applicable for the above reaction, or is collision theory applicable only for molecules? In other words, do chlorine atoms collide with sodium atoms to produce sodium chloride, or does the chlorine atom get near the sodium atom, electron transfer takes place (as it is energetically favorable), and $\ce{Na+}$ and $\ce{Cl-}$ get stuck? Edit after @BuckThorn's comment: or does the chlorine atom get near the sodium atom, electron transfer takes place (as it is energetically favorable), and $\ce{Na+}$ and $\ce{Cl-}$ get stuck? In this process, the Na atom and the Cl atom don't touch, and electron transfer occurs at a distance (due to attractive/repulsive forces). Only after electron transfer takes place do $\ce{Na+}$ and $\ce{Cl-}$ touch and get stuck. That's why I don't think it is akin to collision. Answer: Your example is one of many that have been studied by molecular beam scattering techniques, so this is a rather general answer. Collision theory calculates the rate constant averaged over all geometries and energies at a given temperature. It is the product of the cross section, the average collision velocity and the Arrhenius factor. Thus it depends on the cross section. The simplest model assumes hard spheres, so it will work for atom-atom, atom-molecule etc., but less well as the species become more complex. But actually the cross section is very complicated, especially as the potential between molecules varies with distance and orientation, perhaps as a Coulomb potential or Lennard-Jones or something more complicated. In this case the trajectories of the interaction between species have to be calculated and these averaged to get the collision theory rate constant. In fact the rate constant is not that important; what is important is the potential energy profile between species, as this reflects the electronic properties of the molecules and can be calculated from quantum theory. 
The rate constant can then, in principle, be calculated from the potential, although experiment is always essential. Experiments of this sort are studied using atom/molecular beams, and numerous reactive scattering reactions have been studied, such as H+H2, D+H2, N+O2, F+H2, H+F2, O+CH, O+Cs and many more. You should look at two excellent books on this topic: Levine & Bernstein, 'Molecular Reaction Dynamics and Chemical Reactivity', publ. OUP 1987, and Steinfeld, Francisco & Hase, 'Chemical Kinetics and Dynamics', publ. Prentice Hall 1999. The picture below shows how complicated the trajectories become when there is a Lennard-Jones potential between two atoms as one approaches the other from the left but at different positions vertically. You can see that the species get pulled together at a larger separation than hard sphere (the inner circle) but can orbit and escape if there is enough energy. In this picture no reaction occurs, but you can appreciate that if the species get close enough with enough energy, conditions could be such that they react, for example being within the wide grey ring. The picture below shows the Cl+H2 $\to$ HCl+H reaction. The species approach with the same energy in each case but collide at different points in the H$_2$ vibration, and this leads to different outcomes, i.e. no reaction or more or less vibrational excitation in HCl. The blue colour is low energy and the transition state can be seen at the bend. To calculate the rate constant, the number of successful transition state crossings vs. the total number at a given energy must be calculated. This could then be compared with the collision theory rate calculation and an effective cross section derived.
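Since the Lennard-Jones potential is mentioned above, here is a minimal sketch of the pair potential in its standard form $V(r)=4\varepsilon[(\sigma/r)^{12}-(\sigma/r)^6]$, with purely illustrative reduced units:

```python
# Minimal sketch of the Lennard-Jones pair potential discussed above,
# V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6), in reduced units
# (eps = sigma = 1; illustrative parameters, not fitted to any real pair).
def lennard_jones(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0)             # the well minimum sits at 2^(1/6)*sigma
print(round(lennard_jones(r_min), 6))  # -1.0 (well depth -eps)
```

The long-range attractive tail of this potential is what pulls trajectories together at separations larger than the hard-sphere radius, as described in the answer.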
{ "domain": "chemistry.stackexchange", "id": 16563, "tags": "inorganic-chemistry, physical-chemistry, reaction-mechanism" }
pcl coordinate system orientation
Question: I am aware that ROS uses the coordinate convention in which x=forward, y=left, and z=up, but can anyone tell me what PCL data uses? Originally posted by ee.cornwell on ROS Answers with karma: 108 on 2013-02-21 Post score: 0 Original comments Comment by ee.cornwell on 2013-02-22: joq, that's what I thought until I used the function pcl::fromROSMsg() and a PCL passthrough filter. The PCL z parameter pertained to the forward direction and y pertained to the upward direction. I'm speaking in terms of the pcl namespace. Is this correct? Comment by joq on 2013-02-23: Check the header.frame_id field in the message. It may be using an "optical" frame by mistake. See: http://www.ros.org/reps/rep-0103.html#coordinate-frame-conventions Comment by ee.cornwell on 2013-02-23: Thanks for the help! Answer: The same, in whatever frame of reference the PointCloud2 message specifies. Originally posted by joq with karma: 25443 on 2013-02-22 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 13005, "tags": "ros, coordinate-system, pcl, ros-pcl" }
Integral in $n$−dimensional euclidean space
Question: I've asked this question in Mathematics Stack Exchange, but unfortunately there is no answer yet. I repost it because this integral comes from QFT and maybe someone here did it before or could help me. I merely copy this post. I want to calculate this integral in $n$-dimensional euclidean space. $$I(x)=\int_{\mathbb{R}^n}\frac{d^n k}{(2\pi)^n}\frac{e^{i(k\cdot x)}}{k^2+a^2},$$ where $k^2=(k\cdot k)$, $k=(k_1,\ldots,k_n)\in\mathbb{R}^n$, $x=(x_1,\ldots,x_n)\in\mathbb{R}^n$,$a\in \mathbb{R}$. I've done this integral for $n=3$ by spherical coordinates and residue theorem. I have $$I(r)=\frac{1}{4\pi r}e^{-ar},$$ where $r=|x|$ But in $n$-dimensions I failed in using spherical coordinates, because I have never done it before. Also I see that this integral is Fourier transform of $\frac{1}{k^2+a^2}$, but I failed here too, because I can't find Fourier pair in my reference books. If someone could guide me in this integration it would be great. Answer: WARNING: The function is not absolutely integrable for $n>1$, so the integral strongly depends on how you decide to compute it if you break the integration into iterated integrals. Use instead cylindric coordinates. $k = (z, \vec{r})$, where $\vec{r} \in \mathbb R^{n-1}$ and $z\in \mathbb R$. You have this way, assuming that $x$ is directed along $z$: $$I(x) = \frac{1}{(2\pi)^n}\int_{\mathbb R^{n-1}} d\vec{r} \int_{\mathbb R} dz \frac{e^{i|x|z}}{\vec{r}^2 + z^2 +a^2}=\frac{\omega_{n-1}}{(2\pi)^n}\int_{0}^{+\infty} dr \int_{\mathbb R} dz \frac{e^{i|x|z}r^{n-2} }{r^2 + z^2 +a^2}$$ So: $$I(x) = \frac{2\omega_{n-1}}{(2\pi)^n}\int_{0}^{+\infty} dr \int_0^{+\infty} dz \frac{r^{n-2}\cos(|x|z) }{r^2 + z^2 +a^2}$$ where $\omega_{n-1} = \frac{2\pi^{(n-1)/2}}{\Gamma((n-1)/2)}$ is the measure of the surface of the unit sphere in $\mathbb R^{n-1}$. The internal integral can be found in several books e.g. identity 3.723(2) in Gradshteyn - Ryzhik book (seventh edition). 
Performing it one has: $$I(x) = \frac{\pi\omega_{n-1}}{(2\pi)^n}\int_{0}^{+\infty} dr \frac{r^{n-2}e^{-|x|\sqrt{r^2 +a^2}} }{\sqrt{r^2 +a^2}} $$ The remaining integral, passing to integrate in $d(r^2/a^2)$, can be computed in terms of Bessel functions $K_\nu$ using identity 3.479(1) in Gradshteyn - Ryzhik book (seventh edition). Please check everything since, as usual, I am not confident in my computations!
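As a sanity check, the standard closed form one finds for this Fourier transform is $I(x) = (2\pi)^{-n/2}\,(a/|x|)^{n/2-1}\,K_{n/2-1}(a|x|)$, with $K_\nu$ the modified Bessel function of the second kind. The following sketch (my addition, not part of the original answer) verifies numerically that for $n=3$, where $K_{1/2}(z)=\sqrt{\pi/(2z)}\,e^{-z}$, this reduces to the questioner's result $e^{-ar}/(4\pi r)$:

```python
import math

def K_half(z):
    # Modified Bessel function K_{1/2}(z) = sqrt(pi / (2 z)) * exp(-z)
    return math.sqrt(math.pi / (2 * z)) * math.exp(-z)

def I_bessel(r, a, n=3):
    # General closed form: (2*pi)^(-n/2) * (a/r)^(n/2 - 1) * K_{n/2-1}(a*r),
    # evaluated here for n = 3 where K_{1/2} has an elementary expression
    return (2 * math.pi) ** (-n / 2) * (a / r) ** (n / 2 - 1) * K_half(a * r)

def I_direct(r, a):
    # Closed form derived in the question for n = 3
    return math.exp(-a * r) / (4 * math.pi * r)

for r in (0.5, 1.0, 2.0):
    assert math.isclose(I_bessel(r, 1.3), I_direct(r, 1.3), rel_tol=1e-12)
```

The algebra behind the check: $(2\pi)^{-3/2}\sqrt{a/r}\,\sqrt{\pi/(2ar)} = 1/(4\pi r)$, so the two expressions agree identically.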
{ "domain": "physics.stackexchange", "id": 11733, "tags": "homework-and-exercises, quantum-field-theory, fourier-transform, integration" }
Sum of average of all subarrays
Question: Suppose there is an integer array $a_1,a_2,...,a_n$. Calculate the sum of averages of all subarrays. For example, the sum of averages of all subarrays of the array $[1,3,5]$ is $1+3+5+\frac{1+3}{2}+\frac{3+5}{2}+\frac{1+3+5}{3} = 18$. I'm wondering: does an $O(n)$ algorithm exist? Answer: Hint: This sum is a linear combination of the array elements, with varying weights. In the case of four elements, $$a+b+c+d+\frac{a+b}2+\frac{b+c}2+\frac{c+d}2+\frac{a+b+c}3+\frac{b+c+d}3+\frac{a+b+c+d}4$$ $$a:1+\frac12+\frac13+\frac14$$ $$b:1+\frac22+\frac23+\frac14$$ and symmetrically for $c,d$. If precomputing these coefficients for the required $n$ is allowed, the solution is indeed $O(n)$. Otherwise, the pattern is a sum of fractions whose denominators are the naturals and whose numerators follow a triangle wave. I guess that these can be computed efficiently by computing the prefix sum of the inverses of the naturals, then accumulating the terms incrementally, in total time $O(n)$.
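To make the hint concrete, here is a sketch in Python (the weight formula is my own derivation, so treat it as an illustration rather than the canonical solution). The weight of the element at 1-indexed position $i$ works out to $\sum_{d=1}^{n} \min(m, d, n+1-d)/d$ with $m=\min(i,\,n+1-i)$, which collapses to two harmonic-prefix lookups per element:

```python
from fractions import Fraction

def sum_of_subarray_averages(a):
    """Exact sum of the averages of all subarrays, in O(n) time."""
    n = len(a)
    # Harmonic prefix sums H[k] = 1 + 1/2 + ... + 1/k, kept exact
    H = [Fraction(0)] * (n + 1)
    for k in range(1, n + 1):
        H[k] = H[k - 1] + Fraction(1, k)
    total = Fraction(0)
    for i in range(1, n + 1):              # 1-indexed position
        m = min(i, n + 1 - i)
        # weight of a[i-1]: sum over subarray lengths d of min(m, d, n+1-d)/d,
        # split into the regions d < m, m <= d <= n+1-m, and d > n+1-m
        w = m * (H[n + 1 - m] - H[m - 1]) + (n + 1) * (H[n] - H[n + 1 - m])
        total += w * a[i - 1]
    return total

print(sum_of_subarray_averages([1, 3, 5]))  # 18, matching the example
```

Using `Fraction` keeps the result exact; swap in floats if an approximate answer suffices.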
{ "domain": "cs.stackexchange", "id": 20086, "tags": "algorithms, discrete-mathematics" }
Average Case Analysis for finding max and min value on an array
Question: Given the following algorithm to find the maximum and minimum values of an array (don't mind the language): MaxMin(A[1..n]) max = A[1]; min = A[1]; for (i = 2; i<=n; i++) if (A[i] > max) max = A[i]; else if (A[i] < min) min = A[i]; print(max, min); I need to do a probabilistic analysis of the average number of comparisons made during its execution. So far, my solution is: Given an indicator random variable: $$ X_i = \begin{cases} \text{1, if max $>$ $A[i]$}\\ \text{0, if max $<$ $A[i]$}\\ \end{cases} $$ and assuming a uniform distribution for $A[1..n]$, the expected value is: $$E[x] = \sum\limits_{i=1}^{n} Pr(X_i)$$ where $$Pr(X_i)$$ is the probability of the $i$-th element being the $max$ element in $A[1..n]$. It's possible to determine that: $$Pr(x_1) = 1, Pr(x_2) = 1/2, Pr(x_3) = 1/3 ,..., Pr(x_n) = 1/n$$ and thus for an array of size $n$ the expected value can be calculated as: $$E[x] = 1 + 1/2 + 1/3 + ... + 1/n = \sum\limits_{i=1}^{n}{\frac{1}{i}} \approx \log{n}$$ And the same goes for $min$, which will give the same result $\log{n}$. (1) My questions are: is it correct? Is the average-case complexity of the given algorithm $\Theta(\log{n})$? Can I use the argument pointed by (1), just modifying $X_i$ so: $$ X_i = \begin{cases} \text{1, if min $<$ $A[i]$}\\ \text{0, if min $>$ $A[i]$}\\ \end{cases} $$ I already read this (unfortunately the link to the video isn't available), but it only explains for $max$ and my analysis must be for both $max$ and $min$. Answer: First, note that the first comparison will always be made $n-1$ times, independent of the distribution of the input. So, what you really want to know is how many times the second comparison (the else if) is executed. 
For this, you can use an indicator random variable like this: $$ X_i = \begin{cases} \text{1, if A[i] $\le$ $\max$}\\ \text{0, if A[i] $>$ $\max$}\\ \end{cases} $$ So, just like you did, assuming a uniform distribution for $A[1..n]$, the expected value is: $$E[x] = \sum\limits_{i=1}^{n} Pr(X_i)$$ But we do not want $Pr(i=\max)$, we want $\overline{Pr(i=\max)}$, like we stated before. So, using your probability of the $i$-th element being the $max$ element: $$Pr(i = \max) = 1/i$$ We have this complement: $$\overline{Pr(i=\max)} = (1 - 1/i)$$ From which we can put in the expected value: $$E[x] = \sum\limits_{i=2}^{n} (1 - 1/i) = \sum\limits_{i=2}^{n}1 - \sum\limits_{i=2}^{n}1/i = n - 1 - (\ln|n| - 1 + \Theta(1))$$ So, to get the total number of comparisons, we can just sum the number of comparisons of the two parts. Note that the $\Theta(1)$ can absorb constant values. $$(n - 1) + n - 1 - \ln|n| + 1 + \Theta(1) = 2n - 1 - \ln|n| + \Theta(1)$$ And we get the expected total number of comparisons: $$2n - \ln|n| + \Theta(1)$$
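The closed-form expectation can be cross-checked by brute force over all permutations of small $n$ (a quick sketch of my own; the function names are mine):

```python
from fractions import Fraction
from itertools import permutations

def count_comparisons(a):
    """Comparisons performed by MaxMin on the input sequence a."""
    mx = mn = a[0]
    count = 0
    for x in a[1:]:
        count += 1              # first comparison: x > max?
        if x > mx:
            mx = x
        else:
            count += 1          # second comparison: x < min?
            if x < mn:
                mn = x
    return count

def expected_by_formula(n):
    # (n - 1) first comparisons plus sum_{i=2}^{n} (1 - 1/i) second ones
    return (n - 1) + sum(1 - Fraction(1, i) for i in range(2, n + 1))

# Exact average over all n! equally likely orderings of distinct keys
for n in range(2, 7):
    perms = list(permutations(range(n)))
    avg = Fraction(sum(count_comparisons(p) for p in perms), len(perms))
    assert avg == expected_by_formula(n)
```

For $n=3$, for instance, both sides give $19/6 \approx 3.17$ comparisons, comfortably below the worst case of $2(n-1)=4$.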
{ "domain": "cs.stackexchange", "id": 11084, "tags": "complexity-theory, algorithm-analysis, probabilistic-algorithms" }
As I inflate a balloon at a constant temperature does the pressure of gases inside it on the balloon increase?
Question: I know it's a dumb question, but I am having a little misunderstanding with Boyle's Law. Shouldn't pressure be inversely proportional to volume? Or is that meant for the pressure of the gases outside the balloon? Can you give another example of this relation? Answer: The general relation for an ideal gas is $$PV=nRT$$ where $P$ is the pressure, $V$ is the volume, $n$ is the number of moles of the gas particles, $R\approx 8.314 \mbox{ J/mol}\cdot\mbox{K}$ is the gas constant and $T$ is the temperature in Kelvin. As you inflate a balloon at constant temperature, $T$ remains constant while $n$ increases because you are adding gas with your lungs, and $V$ increases because it is inflating. Your question is: what happens to the pressure $P$? The answer is... the gas equation doesn't give you enough information to figure this out. Since $RT$ is constant you have $PV/n=\mbox{const.}$. But both $P$ and $V$ are allowed to vary so you need more information to answer this than just the gas law alone. However, you know that once you've blown into the balloon, the system will come to equilibrium, which means that forces on the balloon are balanced. You have pressure inside the balloon pushing out, and this must be balanced by the outside air pressure pushing in PLUS the force of the elastic which is also pushing in. The more the balloon is inflated, the more force the elastic will apply (think Hooke's law). Outside air pressure can be taken as constant. Therefore pressure will increase as you inflate the balloon, because the force from the elastic grows as the balloon is inflated. No need to invoke the gas law at all. In fact, the gas law can't save you here!
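To see the force-balance argument in numbers, here is a toy model (entirely my own assumption, not from the answer): take the inside pressure to be atmospheric pressure plus a Hooke-like elastic term proportional to the radius, then use $PV=nRT$ to back out how much gas each inflation state holds:

```python
import math

R = 8.314             # J/(mol*K), gas constant
P_ATM = 101_325.0     # Pa, outside air pressure (taken constant)
K_ELASTIC = 50_000.0  # Pa/m, hypothetical elastic stiffness of the balloon
T = 293.15            # K, constant temperature

def inside_pressure(r):
    """Toy force balance: inside pressure = outside pressure + elastic term."""
    return P_ATM + K_ELASTIC * r

def moles_in_balloon(r):
    """n = P V / (R T) for a spherical balloon of radius r."""
    V = 4 / 3 * math.pi * r ** 3
    return inside_pressure(r) * V / (R * T)

radii = [0.05, 0.10, 0.15]  # metres
pressures = [inside_pressure(r) for r in radii]
moles = [moles_in_balloon(r) for r in radii]

# Both pressure and gas content grow as the balloon inflates:
assert pressures == sorted(pressures)
assert moles == sorted(moles)
```

The stiffness value is made up; the point is only that with any elastic term that grows with radius, $P$ rises as $V$ rises, so Boyle's fixed-$n$ inverse relation simply does not apply here.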
{ "domain": "physics.stackexchange", "id": 52317, "tags": "pressure, ideal-gas" }
Basic Java Swing Calculator
Question: I am new to Java Swing and have decided to create a calculator to learn some of the basic. I would like someone to look over my code to see if they can give me any improvements. I have 6 classes and 2 interfaces. I have chosen to split the communication of both operators and digits. The mainframe is the controller and actually creates the objects. The other classes are the different components. MainFrame (creates all objects): package calculator; import java.awt.BorderLayout; import javax.swing.JFrame; import javax.script.ScriptEngineManager; import javax.script.ScriptException; import javax.script.ScriptEngine; public class MainFrame extends JFrame { private FormArea formpanel; private OutputArea outputText; private WorkoutExpression workingOutMathExpression; private static String mathmaticalExpression = ""; public MainFrame() { super("Calculator"); setVisible(true); setSize(800, 600); setDefaultCloseOperation(EXIT_ON_CLOSE); formpanel = new FormArea(); outputText = new OutputArea(); workingOutMathExpression = new WorkoutExpression(); setLayout(new BorderLayout()); formpanel.setStringListener(new DigitListener() { public void StringEmmiter(String text) { outputText.addText(text); mathmaticalExpression = mathmaticalExpression + text; //System.out.println(mathmaticalExpression); WorkoutExpression.setMathExpression(mathmaticalExpression); } }); formpanel.setOperatorListener(new MathOperatorListener() { public void OperatorEmitter(String text2a) { if (text2a.equals("CLEAR")) { outputText.refreshTextArea(); mathmaticalExpression = ""; workingOutMathExpression.resetMathExpression(); } else if (text2a.equals("CLEAR") == false) { outputText.addText(text2a); if (text2a.equals("=")) { try { outputText.addText(workingOutMathExpression.getCalculationOfExpression()); } catch (ScriptException e) { e.printStackTrace(); } } else { mathmaticalExpression = mathmaticalExpression + text2a; //System.out.println(mathmaticalExpression); 
WorkoutExpression.setMathExpression(mathmaticalExpression); } } } }); add(formpanel, BorderLayout.WEST); add(outputText, BorderLayout.CENTER); } } Formpanel is the panel which contains all the buttons: package calculator; import java.awt.Color; import java.awt.Dimension; import java.awt.GridBagConstraints; import java.awt.GridBagLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.util.ArrayList; import java.util.Arrays; import javax.swing.JButton; import javax.swing.JLabel; import javax.swing.JPanel; public class FormArea extends JPanel implements ActionListener { private ButtonTemplate No1; private ButtonTemplate No2; private ButtonTemplate No3; private ButtonTemplate No4; private ButtonTemplate No5; private ButtonTemplate No6; private ButtonTemplate No7; private ButtonTemplate No8; private ButtonTemplate No9; private ButtonTemplate No0; private ButtonTemplate addition; private ButtonTemplate substraction; private ButtonTemplate division; private ButtonTemplate total; private ButtonTemplate multiplication; private JButton clearButton; private ButtonTemplate decimalPoint; private DigitListener digitListener; private MathOperatorListener operatorListener; public FormArea() { // SetSize setPreferredSize(new Dimension(300, 500)); // Create all Objects No1 = new ButtonTemplate("1"); No2 = new ButtonTemplate("2"); No3 = new ButtonTemplate("3"); No4 = new ButtonTemplate("4"); No5 = new ButtonTemplate("5"); No6 = new ButtonTemplate("6"); No7 = new ButtonTemplate("7"); No8 = new ButtonTemplate("8"); No9 = new ButtonTemplate("9"); No0 = new ButtonTemplate("0"); addition = new ButtonTemplate("+"); substraction = new ButtonTemplate("-"); division = new ButtonTemplate("/"); total = new ButtonTemplate("="); multiplication = new ButtonTemplate("*"); clearButton = new JButton("CLEAR"); decimalPoint = new ButtonTemplate("."); clearButton.setPreferredSize(new Dimension(100, 60)); setLayout(new GridBagLayout()); GridBagConstraints gc = new 
GridBagConstraints(); // Layout // Column 1 gc.gridx = 0; gc.gridy = 0; gc.weightx = 0.1; gc.weighty = 0.1; add(No1, gc); gc.gridx = 0; gc.gridy = 1; gc.weightx = 0.1; gc.weighty = 0.1; add(No4, gc); gc.gridx = 0; gc.gridy = 2; gc.weightx = 0.1; gc.weighty = 0.1; add(No7, gc); // Column 2 gc.gridx = 1; gc.gridy = 0; gc.weightx = 0.1; gc.weighty = 0.1; add(No2, gc); gc.gridx = 1; gc.gridy = 1; gc.weightx = 0.1; gc.weighty = 0.1; add(No5, gc); gc.gridx = 1; gc.gridy = 2; gc.weightx = 0.1; gc.weighty = 0.1; add(No8, gc); // Column3 gc.gridx = 2; gc.gridy = 0; gc.weightx = 0.1; gc.weighty = 0.1; add(No3, gc); gc.gridx = 2; gc.gridy = 1; gc.weightx = 0.1; gc.weighty = 0.1; add(No6, gc); gc.gridx = 2; gc.gridy = 2; gc.weightx = 0.1; gc.weighty = 0.1; add(No9, gc); // Operators gc.gridx = 0; gc.gridy = 3; gc.weightx = 0.1; gc.weighty = 0.1; add(addition, gc); gc.gridx = 1; gc.gridy = 3; gc.weightx = 0.1; gc.weighty = 0.1; add(No0, gc); gc.gridx = 2; gc.gridy = 3; gc.weightx = 0.1; gc.weighty = 0.1; add(substraction, gc); // Operator 2 gc.gridx = 0; gc.gridy = 4; gc.weightx = 0.1; gc.weighty = 0.1; add(division, gc); gc.gridx = 1; gc.gridy = 4; gc.weightx = 0.1; gc.weighty = 0.1; add(multiplication, gc); gc.gridx = 2; gc.gridy = 4; gc.weightx = 0.1; gc.weighty = 0.1; add(total, gc); gc.gridx = 1; gc.gridy = 5; gc.weightx = 0.1; gc.weighty = 5; add(clearButton, gc); gc.gridx = 0; gc.gridy = 5; gc.weightx = 0.1; gc.weighty = 0.1; add(decimalPoint, gc); // ActionButton No1.addActionListener(this); No2.addActionListener(this); No3.addActionListener(this); No4.addActionListener(this); No5.addActionListener(this); No6.addActionListener(this); No7.addActionListener(this); No8.addActionListener(this); No9.addActionListener(this); No0.addActionListener(this); ArrayList<ButtonTemplate> operatorsArrayList = new ArrayList<ButtonTemplate>(Arrays.asList(iterateThroughOperators())); for (ButtonTemplate btn : operatorsArrayList) { btn.addActionListener(new ActionListener() { @Override 
public void actionPerformed(ActionEvent e) { ButtonTemplate source1 = (ButtonTemplate) e.getSource(); if (source1 == btn) { operatorListener.OperatorEmitter(btn.getText()); } } }); } clearButton.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { JButton s1 = (JButton) e.getSource(); operatorListener.OperatorEmitter(s1.getText()); } }); } public void setStringListener(DigitListener digitlistener) { this.digitListener = digitlistener; } public void setOperatorListener(MathOperatorListener operatorListener) { this.operatorListener = operatorListener; } public ButtonTemplate[] iterateThroughButton() { ButtonTemplate btn[] = { No1, No2, No3, No4, No5, No6, No7, No8, No9, No0 }; return btn; } public ButtonTemplate[] iterateThroughOperators() { ButtonTemplate operatorBtn[] = { addition, substraction, division, total, multiplication,decimalPoint }; return operatorBtn; } public void actionPerformed(ActionEvent e) { JButton source = (JButton) e.getSource(); for (ButtonTemplate SingleButton : iterateThroughButton()) { if (source == SingleButton) { digitListener.StringEmmiter(SingleButton.getText()); } } } } Button Template is just a standard button with a specified size: package calculator; import java.awt.Dimension; import javax.swing.JButton; public class ButtonTemplate extends JButton{ private String nameOfTheButton; public ButtonTemplate(String nameOfButton){ this.nameOfTheButton = nameOfButton; this.setText(nameOfTheButton); this.setPreferredSize(new Dimension(70,50)); } } DigitListener listens for the digits and sends them to the mainframe: package calculator; public interface DigitListener { public void StringEmmiter(String text1); } Operator listener listens to the operators and sends them to the mainframe: package calculator; public interface MathOperatorListener { public void OperatorEmitter(String text2); } MathExpression works out the mathematical expression using scriptengine: package calculator; import 
javax.script.ScriptEngineManager; import javax.script.ScriptException; import javax.script.ScriptEngine; public class WorkoutExpression { private static String mathExpression; public static void setMathExpression(String mathExpression) { WorkoutExpression.mathExpression = mathExpression; } public String getCalculationOfExpression() throws ScriptException{ ScriptEngineManager mgr = new ScriptEngineManager(); ScriptEngine engine = mgr.getEngineByName("JavaScript"); return engine.eval(mathExpression).toString(); } public void resetMathExpression(){ mathExpression = "0"; } } OutputArea is where the calculation is displayed: package calculator; import java.awt.BorderLayout; import javax.swing.JPanel; import javax.swing.JScrollPane; import javax.swing.JTextArea; public class OutputArea extends JPanel { private JTextArea textarea; public OutputArea() { textarea = new JTextArea(); setLayout(new BorderLayout()); add(new JScrollPane(textarea)); } public void addText(String text){ textarea.append(text); } public void refreshTextArea(){ textarea.setText(null); } } Answer: Use Arrays At a glance, creating and instantiating the ButtonTemplate objects could be done with an array, e.g. private ButtonTemplate[] buttons = new ButtonTemplate[10]; It would allow you to use a simple loop to instantiate them and add the ActionListener, e.g. for (int i = 0; i < buttons.length; i++) { buttons[i] = new ButtonTemplate(Integer.toString(i)); buttons[i].addActionListener(this); } Employ Lambda Expressions If you have access to Java 8, you can use lambda expressions to simplify the code where you use Functional Interfaces, e.g. 
instead of: btn.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { ButtonTemplate source1 = (ButtonTemplate) e.getSource(); if (source1 == btn) { operatorListener.OperatorEmitter(btn.getText()); } } }); You can simply write: btn.addActionListener(e -> { if ((ButtonTemplate)e.getSource() == btn) { operatorListener.OperatorEmitter(btn.getText()); } }); I also use e.getSource() directly here rather than storing it as a variable since we're only doing one thing with it anyway, but feel free to use it your way if you find it makes it more understandable/readable. Read up on what I linked, but just as an additional example, your MathOperatorListener is a functional interface. So this section where you call setOperatorListener formpanel.setOperatorListener(new MathOperatorListener() { public void OperatorEmitter(String text2a) { Can be replaced with: formpanel.setOperatorListener(e -> { // content }); The e is just a variable used to refer to the text2a that is passed. You can name it anything. Same deal of course for your DigitListener interface. You may start using the @FunctionalInterface annotation so the compiler would check and confirm this for you (albeit functional interfaces are easily identifiable, since they only have a single method).
{ "domain": "codereview.stackexchange", "id": 15432, "tags": "java, beginner, swing, calculator" }
How does Huygens' principle work with a wave pulse?
Question: Huygens' principle says that points on a wave can be thought of as the sources of new waves. This makes a lot of sense to me in the context of a moving wave and diffraction. However, in the context of a wave pulse, like a single stone being dropped into a pond, it feels as if Huygens' principle would predict that the circular wave pulse would reflect back in on itself. When I drop a stone in water, it makes one ring coming out, but Huygens' principle would expect that that ring would send waves back towards the center where the stone hit. What's going on here? Answer: With a compact wave pulse, Huygens’ Principle still holds, and points along the wave front act as sources. On the advancing side of the pulse, the outgoing waves from different parts of the front interfere constructively. However, on the receding side, the waves have different phases and interfere destructively. That accounts for how the wave packet advances into the quiescent region ahead of it but does not rebound back into the region behind. (As a caveat, I should add that dropping a stone into a pond does not produce a clean outgoing pulse. An impulse source at a point produces an outgoing wave with no further disturbance behind it, but this does not occur in two dimensions. Behind the leading ripple produced by a thrown stone, there are still oscillations in the water’s surface afterward, which makes the analysis via Huygens’ Principle quite a bit more complicated.)
{ "domain": "physics.stackexchange", "id": 73275, "tags": "waves, huygens-principle" }
Problem Listener with
Question: Good morning, people. I made a listener for the camera/image_raw node republish (camera/image_test), but the following error appears: Client [/QR_Listener_5598_1492006578702] wants topic /camera/image_test to have datatype/md5sum [std_msgs/String/992ce8a1687cec8c8bd883ec73ca41d1], but our version has [sensor_msgs/Image/060021388200f6f0f447d0fcd9c64743]. Dropping connection. The listener code follows: #!/usr/bin/env python import rospy import cv2 import zbar from std_msgs.msg import String from PIL import Image def callback(data): scanner = zbar.ImageScanner() scanner.parse_config('enable') while not rospy.is_shutdown(): ret, output = data.read() if not ret: continue gray = cv2.cvtColor(output, cv2.COLOR_BGR2GRAY, dstCn=0) pil = Image.fromarray(gray) width, height = pil.size raw = pil.tobytes() image = zbar.Image(width, height, 'Y800', raw) scanner.scan(image) qrc = None for symbol in image: print '"%s"' % symbol.data qrc = symbol.data qrc cv2.imshow("#Qr Code", output) if qrc != None: rospy.loginfo(rospy.get_caller_id() + "Qr Code %s:", data.data) def listener(): rospy.init_node('QR_Listener', anonymous=True) rospy.Subscriber("/camera/image_test", String, callback) rospy.spin() if __name__ == '__main__': try: listener() except rospy.ROSInterruptException: pass Originally posted by icarold on ROS Answers with karma: 5 on 2017-04-12 Post score: 0 Answer: The answer is already in the error message: You create a std_msgs::String subscriber to listen to a sensor_msgs::Image message. So it should be from sensor_msgs.msg import Image (...) rospy.Subscriber("/camera/image_test", Image, callback) Originally posted by NEngelhard with karma: 3519 on 2017-04-12 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by icarold on 2017-04-12: Many thanks, my friend !! Lack of attention on my part. 
I'll test here Comment by icarold on 2017-04-12: The command did not work: from visualization_msgs.msg import Image ImportError: cannot import name Image Comment by jarvisschultz on 2017-04-12: @icarold You should not add an answer that is not really an answer. If you need to add information, please edit your original question. I've moved your answer to be a comment. Comment by jarvisschultz on 2017-04-12: The error described in your comment is because the visualization_msgs package doesn't have a message called Image (it has ImageMarker). My guess is that you are actually looking for the sensor_msgs/Image message type. Comment by NEngelhard on 2017-04-13: sorry, it's of course in sensor_msgs, fixed it Comment by icarold on 2017-04-13: I used the sensor_msgs.msg import image as suggested, but now it is giving no error data.read(). File "/home/icaro/catkin_ws/src/tcc/scripts/listener_qr.py", line 14, in callback Ret, output = data.read () AttributeError: 'Image' object has no attribute 'read' Comment by icarold on 2017-04-13: If I take the read() error in the: ret, output = data TypeError: 'Image' object is not iterable Comment by NEngelhard on 2017-04-13: why do you think that an Image-object has a read-function? http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython Comment by icarold on 2017-04-13: Thanks friends for the help!! NEngelhard, this link that you sent helped me and it worked, I managed to do !! Thank you Comment by NEngelhard on 2017-04-13: great! then please mark the answer as correct so that the question can be closed
{ "domain": "robotics.stackexchange", "id": 27590, "tags": "ros, python, opencv" }
How are jet streaks formed?
Question: "Jet Streak Dynamics I: The four-quadrant model" explains their effects on frontogenesis. But how are they formed to begin with? Answer: In the blog post, "[They] start with a pressure gradient." So I'll discuss how a pressure gradient force develops at the jet streak level and why it develops at the jet streak level. First let's go over some basics. From the meteorological perspective, one can think of pressure as the amount of air mass above a point (gravity provides the acceleration). So high pressure means relatively large amounts of air mass above a point and low pressure means a relatively low amount of air mass above a point. Now, think of two columns of air that start at the surface and go into space. One column is really cold and the other column is really warm. Warm air is less dense than the cold air. If the cold air column and the warm column have the same amount of mass, the warm air column will be taller. If the two columns have the same mass, the surface pressure of the two columns will be the same. If we travel up these two columns we would expect the pressure to decrease as the amount of mass above us decreases. The pressure in the cold column would decrease faster than the pressure in the warm column as we travel up the columns. This means at a certain height, say 10 km, the pressure in the warm column will be much greater than in the cold column (see image below). This is a pressure gradient and causes the jet streak in the link you posted. So how do we get cold air columns next to warm columns? Those are fronts! A northern hemisphere warm front typically will have cold air to the north and warm air to the south. So the pressure gradient force at 10 km above the surface (about jet streak level) will be from south to north (note: the pressure gradient is from north to south in this example). Wind wants to go from south to north but is deflected by the Coriolis force as explained in the blog post. 
Now check out a weather map: you'll find the jet streak will be over a surface warm front! Now, as we keep going up, the maximum pressure gradient will be at the jet streak level. Above the jet streak level is the stratosphere, which contains ozone. Ozone absorbs radiation and warms the air around it. The stratosphere starts at a lower altitude over cold surface air, so our pressure gradient begins to shrink above the jet streak. In other words, the jet streak is at the maximum pressure gradient, which occurs above the boundary of cold and warm air at the surface. From: http://apollo.lsc.vsc.edu/classes/met130/notes/chapter10/graphics/pf_xsect2.free.gif This picture does a good job of showing how cold air will have lower pressure than warm air aloft. Edit: I should mention, fronts and jet streaks are kind of a chicken-and-egg scenario. They each cause each other to some degree.
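The claim that pressure aloft falls off faster in the cold column can be illustrated with the isothermal form of the hypsometric equation, $p(z) = p_0\,e^{-gz/(R_d T)}$ (a simplified sketch with round-number mean temperatures of my own choosing, not values from the answer):

```python
import math

g = 9.81     # m/s^2, gravitational acceleration
R_d = 287.0  # J/(kg*K), specific gas constant for dry air

def pressure_at(z, T_mean, p0=101_325.0):
    """Isothermal hypsometric equation: p = p0 * exp(-g z / (R_d T))."""
    return p0 * math.exp(-g * z / (R_d * T_mean))

z_jet = 10_000.0                    # ~jet streak level, in metres
p_cold = pressure_at(z_jet, 250.0)  # cold column, mean T = 250 K
p_warm = pressure_at(z_jet, 290.0)  # warm column, mean T = 290 K

# Same surface pressure below, but aloft the warm column retains more
# pressure, so a horizontal pressure gradient force points warm -> cold.
assert p_warm > p_cold
```

With these numbers the cold column holds roughly 26 kPa at 10 km versus roughly 31 kPa in the warm column, the horizontal contrast that drives the jet streak.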
{ "domain": "earthscience.stackexchange", "id": 182, "tags": "meteorology, weather" }
Unambiguousness and determinism of CFGs for them to be LR
Question: I came across this statement: Note that there are unambiguous grammars for which every LR parser construction method will produce a parsing action table with parsing action conflicts. I wondered what the characteristics of such grammars could be. Then I came across this question which very well asks: Why does there exist a conflict even though it is not an ambiguous grammar? The answer does not give a straight answer to the above "why", but it definitely says this: A context-free language is LR(k) precisely if it is deterministic. In this context, I have the following guesses that I would like confirmed: All LR grammars are unambiguous, merely because they need to be deterministic and the set of deterministic languages is a proper subset of the unambiguous languages. In other words, the unambiguous languages which are non-LR are those which are non-deterministic. The Dragon book says we can enforce association and precedence to ensure unambiguity of a grammar by resolving conflicts. I also read that we do left factoring to convert a non-deterministic grammar to a deterministic grammar. I am quite confused about what actually converts a non-LR grammar to LR, considering the above two sentences and point 2: is it enforcing association and/or precedence, or left factoring? I feel it should be left factoring, given that determinism is more strictly necessary than unambiguity and there must exist some ambiguous grammars which do not turn LR by enforcing association and/or precedence. Am I correct with this? Left factoring eliminates FIRST-FIRST conflicts in LL grammars. But does it help to eliminate SHIFT-REDUCE and REDUCE-REDUCE conflicts in LR grammars? Answer: Your first and second points are correct, although you need to take more care distinguishing between properties of grammars and properties of languages. A context-free language is context free if (and only if) there exists a context free grammar for it. 
It is also deterministic if (and only if) there exists a deterministic context free grammar for it. That doesn't mean that all grammars for the language will be deterministic or even context-free; there are basically an infinitude of possible grammars for any language, deterministic and not, ambiguous and not, and not restricted to the smallest class in the Chomsky hierarchy. In fact, figuring out whether a language has one of these properties is often not easy. While we can easily see which of Chomsky's classes a grammar belongs to, and we can easily determine whether or not a particular context-free grammar is, for example, $LR(k)$ for any given $k$, similar statements about languages are much harder. In particular, the following questions are undecidable, which means that no algorithm exists which will produce a correct answer for every possible input: Does a context-free grammar exist for a given language? Does an $LR(k)$ grammar exist for a given language? Does a deterministic grammar exist for a given language? Some questions about grammars are also undecidable: Is there a $k$ for which an $LR(k)$ parser can be generated for a given grammar? Do two context-free grammars recognise the same language? Is a given context-free grammar ambiguous? (There are many more, but these ones seemed relevant.) Note that "undecidable" doesn't mean you can't ever figure it out. For certain languages and grammars it is quite possible to answer the above questions. But there is no algorithm which can generate a solution. Finding a solution requires luck and perseverance, and there's no guarantee you'll manage it. (This is somewhat like the problem of proving a given mathematical hypothesis.) One of the consequences of all that undecidability is that there are no 100% reliable algorithms which can "remove ambiguity" from a grammar, make a grammar deterministic, and so on. 
All of the procedures recommended in the various textbooks you refer to (and internet sites) are just heuristics: they might work, but there are no guarantees. If you determine that a given grammar is not $LR(1)$, or not $LL(1)$, etc., you can try applying the various procedures mentioned, but you might not be able to find a transformation which works. And the mere fact that you didn't manage that transformation proves absolutely nothing about the language, although again there are cases in which you can prove that a given language has no deterministic grammar, or even that it has no unambiguous grammar. (Languages for which no unambiguous grammar exists are called "inherently ambiguous" and you can find examples by searching for that phrase, if you're interested.) I understand that this may all seem unsatisfactory. I think you really want there to be some visible attribute of a grammar (or language) which you can point to and say, "because of this feature, this grammar is not $X$" (for some property $X$). But there really isn't a better characterisation of, for example, grammars which are not $LR(1)$ than "the $LR(1)$ parser generation algorithm failed to produce a parser for this language". So, in short: A grammar is deterministic if the $LR(k)$ parser generation algorithm works for some $k$, and non-deterministic if it doesn't work for any $k$. But you can't prove non-determinism that way, because you'd have to try all possible values of $k$ and that would literally take you forever. Furthermore, even if a grammar is non-deterministic, it might or might not be ambiguous. There's no algorithm which can tell you that, either. 
Finally, with respect to my answer which you quoted, I think it does explain why that particular grammar is non-deterministic: it's because you need to do the first reduction when you hit the middle of the sentence, but since you don't know where the middle of the sentence is until you reach the end (and the sentence could be arbitrarily long), you cannot know at which point in the input you need to do the first reduction. However, the language is certainly unambiguous: there is only one derivation which works. (These statements are true of palindrome grammars regardless of how many symbols there are in the alphabet. But if the alphabet contains only a single symbol, then there is a different grammar which happens to be regular. In this particular case it's not hard to figure out what it is.)
{ "domain": "cs.stackexchange", "id": 14134, "tags": "formal-languages, automata, context-free, compilers, parsers" }
Human Evolution in Modern Times
Question: I understand that evolution occurred to form the current hominids from a common ancestor millions of years ago. As evolutionary processes take a long time, is there proof of evolution occurring with humans today? Answer: Yes, there are examples. First: if you want to say that humans and monkeys evolved, then it is better to say that they evolved from a common ancestor. This makes quite a difference. If you are looking for examples of human evolution, then one of the most obvious traits under evolutionary selection is pigmentation. There is a clear correlation with latitude (and thus UV index): the changes in pigmentation occurred - seen from an evolutionary perspective - fairly recently, when humans moved out of Africa and to other regions (see here). See these papers for more details: The evolution of human skin coloration Colloquium paper: human skin pigmentation as an adaptation to UV radiation. Epidermal pigmentation in the human lineage is an adaptation to ultraviolet radiation. Another example would be the tolerance for lactose (the sugar contained in milk) in adults, which arose during the last 5,000-10,000 years. Normally only babies can process this sugar, which is contained in large amounts in the mother's milk. When they are weaned, the enzyme usually is not expressed anymore. However, with humans settling down and the domestication of cattle this changed, so we can still drink milk and eat milk products without getting intestinal problems. See here for more details: Genetic signatures of strong recent positive selection at the lactase gene. Adult-type hypolactasia and regulation of lactase expression. A third example would be sickle-cell anemia, which happens due to a point mutation in the human hemoglobin protein. This causes an aggregation of the hemoglobin molecules in the red blood cells, which then leads to a reduced elasticity (see here for more details).
It also leads to a protection against malaria (most likely through the reduction of the life span of the erythrocytes). The areas with high prevalence of sickle-cell anemia in Africa correlate pretty nicely with the distribution of malaria. An Immune Basis for Malaria Protection by the Sickle Cell Trait Sickle Cell Anaemia and Malaria If you look further, you will find a number of other examples where evolution is apparent after humans went through a genetic bottleneck (meaning the number of humans was drastically reduced).
{ "domain": "biology.stackexchange", "id": 2608, "tags": "evolution" }
Half-life equation for 2nd order kinetics
Question: My friends and I were doing some problems from this year's IChO Preparatory Problems (PDF from the 49th International Chemistry Olympiad (2017)) when we stumbled upon a question which we had some confusion with. Task 8. Decomposition of Nitrous Oxide Nitrous oxide decomposes exothermically into nitrogen and oxygen, at a temperature of approximately $\pu{565 ^\circ C}$. $$\ce{2N2O (g) -> 2N2 (g) + O2 (g)}$$ This reaction follows the second-order kinetics when carried out entirely in the gas phase. 8.1) If the reaction is initiated with $[\ce {N2O}]$ equal to $\pu{0.108 mol dm-3}$, what will its concentration be after $\pu{1250 s}$ have elapsed at $\pu{565 ^\circ C}$? The rate constant for the second order decomposition of $\ce{N2O}$ is $\pu{1.10\times10^-3 dm3 mol-1 s-1}$ at this temperature. Task 8 details the kinetics of the decomposition of nitrous oxide. We were confused with part 8.1 of the task which required us to find the concentration of the reactant after a particular duration of time has elapsed, given the temperature, initial concentration and rate constant of the reaction. We approached the question using two different methods: I approached it using the method, as suggested in the solutions manual, of using the integrated rate law for 2nd order kinetics and substituting the values provided into that equation. My answer was exactly that stated in the solutions manual. However, they approached it by first finding the half-life using the equation for the half-life for 2nd order kinetics and then, finding the concentration after knowing how many half-lives have passed. Their answer was slightly off. After substituting random values into the equations for both methods, I realised that the "half-life method" which my friends used only gives a reasonable approximation when the time elapsed < half-life, gives the exact value when the time elapsed = half-life and goes completely off when the time elapsed > half-life. Why is that so? 
Since the half-life equation for 2nd order kinetics is derived from the integrated rate law (as shown on Chemistry LibreTexts), shouldn't they give the same answer? Answer: Roughly speaking, half-life is not a thing at all for the second order (or any order other than first, for that matter). What would they do after finding the number of half-lives? Raise 2 to that power? Too bad, because that only works for the exponential decay, which is the solution of the first-order kinetics and no other order. Really, what if the number of half-lives equals 2? After spending the first half-life, we now have a different half-life ahead of us, because it is concentration dependent, and the concentration has changed. Following your link, we see the same statement, except they don't put enough emphasis on it, as to my taste: For this reason, the concept of half-life for a second-order reaction is far less useful.
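Numerically the two methods are easy to compare; a quick sketch using the task's values (the variable names are my own):

```python
# Second-order decomposition of N2O: 1/[A] = 1/[A]0 + k*t
A0 = 0.108    # mol dm^-3, initial concentration
k = 1.10e-3   # dm^3 mol^-1 s^-1, rate constant at 565 C
t = 1250.0    # s

# Correct method: integrated second-order rate law
A_exact = 1.0 / (1.0 / A0 + k * t)          # ~0.0940 mol dm^-3

# Naive "half-life" extrapolation: compute t_half = 1/(k*[A]0) and then assume
# exponential decay A0 / 2**(t/t_half). That assumption only holds for FIRST order.
t_half = 1.0 / (k * A0)                     # ~8418 s, valid only for the first halving
A_naive = A0 / 2 ** (t / t_half)            # ~0.0974 mol dm^-3, slightly off

# The divergence grows with time: after n "initial half-lives", the exact law
# gives A0/(n+1), while exponential extrapolation would predict A0/2**n.
A_after_3 = 1.0 / (1.0 / A0 + k * 3 * t_half)   # equals A0/4, not A0/8

print(A_exact, A_naive, A_after_3)
```

This reproduces the asker's observation: for t below the first half-life the two answers are close, at t = t_half they coincide, and beyond it they diverge badly.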
{ "domain": "chemistry.stackexchange", "id": 9316, "tags": "kinetics" }
Algorithm(s) to mix audio signals without clipping
Question: I'd like to mix two or more PCM audio channels (eg recorded samples) digitally in an acoustically-faithful manner, preferably in near-real-time (meaning little or no peek-ahead). The physically "correct" way to do this is summing the samples. However when you add two arbitrary samples, the resulting value could be up to twice the maximum value. For example, if your samples are 16-bit values, the result will be up to 65536*2. This results in clipping. The naive solution here is to divide by N, where N is the number of channels being mixed. However, this results in each sample being 1/Nth as loud, which is completely unrealistic. In the real world, when two instruments play simultaneously, each instrument does not become half as loud. From reading around, a common method of mixing is: result = A + B - AB, where A and B are the two normalized samples being mixed, and AB is a term to ensure louder sounds are increasingly "soft-clipped". However, this introduces a distortion of the signal. Is this level of distortion acceptable in high-quality audio synthesis? What other methods are there to solve this problem? I'm interested in efficient lesser-quality algorithms as well as less-efficient high-quality algorithms. I'm asking my question in the context of digital music synthesis, for the purpose of mixing multiple instrument tracks together. The tracks could be synthesised audio, pre-recorded samples, or real-time microphone input. Answer: The physically "correct" way to do this is summing the samples. However when you add two arbitrary samples, the resulting value could be up to twice the maximum value. ... The naive solution here is to divide by N, where N is the number of channels being mixed. That's not the "naive" solution, its the only solution. That's what every analog and digital mixer does, because it's what the air does, and it's what your brain does. 
Unfortunately, this appears to be a common misconception, as demonstrated by these other incorrect non-linear "mixing" (distortion) algorithms: Mixing digital audio (the wrong way) A quick-and-dirty audio sample mixing technique to avoid clipping (don't do this) The "dividing by N" is called headroom; the extra room for peaks that's allocated above the RMS level of the waveform. The amount of headroom required for a signal is determined by the signal's crest factor. (Misunderstanding of digital signal levels and headroom is probably partially to blame for the Loudness war and Elephunk.) In analog hardware, the headroom is maybe 20 dB. In a hardware DSP, fixed-point is often used, with a fixed headroom; AD's SigmaDSP, for instance, has 24 dB of headroom. In computer software, the audio processing is usually performed in 32 bit floating point, so the headroom is enormous. Ideally, you wouldn't need to divide by N at all, you'd just sum the signals together, because your signals wouldn't be generated at 0 dBFS in the first place. Note that most signals are not correlated to each other, anyway, so it's uncommon for all the channels of a mixer to constructively interfere at the same moment. Yes, mixing 10 identical, in-phase sine waves would increase the peak level by 10 times (20 dB), but mixing 10 non-coherent noise sources will only increase the peak level by 3.2 times (10 dB). For real signals, the value will be between these extremes. In order to get the mixed signal out of a DAC without clipping, you simply reduce the gain of the mix. If you want to keep the RMS level of the mix high without hard clipping, you will need to apply some type of compression to limit the peaks of the waveform, but this is not part of mixing, it's a separate step. You mix first, with plenty of headroom, and then put it through dynamic range compression later, if desired.
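The coherent-versus-incoherent point is easy to verify numerically; a stdlib-only sketch comparing RMS levels (sample counts and frequencies below are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
N = 10       # number of channels mixed
n = 20000    # samples per channel

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# N identical, in-phase sine waves: amplitudes add coherently -> level grows by N (20 dB)
sine = [math.sin(2 * math.pi * 50 * i / n) for i in range(n)]
coherent_gain = rms([N * s for s in sine]) / rms(sine)           # exactly 10

# N independent noise channels: powers add -> level grows only by sqrt(N) (~10 dB)
channels = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(N)]
mix = [sum(ch[i] for ch in channels) for i in range(n)]
incoherent_gain = rms(mix) / rms(channels[0])                    # ~ sqrt(10) = 3.16

print(coherent_gain, incoherent_gain)
```

Real program material sits between these extremes, which is why a fixed headroom budget plus plain summing works in practice.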
{ "domain": "dsp.stackexchange", "id": 4242, "tags": "audio, algorithms, distortion" }
The effect of buildings on frost
Question: This morning I saw this curious garden, where one part was covered with frost while the other was frost-free. Question 1: What are the factors that play a role in this configuration? My guess is radiation from the building and the wind (the frost-free part being more protected from the wind by the building), but then I wonder: will there still be these two zones after a night without any wind? (Question 2) The left house is heated, the middle one isn't but it's inhabited, the shed is closed but not insulated and has no heating source inside. There aren't any other surrounding structures that could create shade. The open side of the garden is facing south. I observed the frost after sunrise, but before the sun's rays were reaching the garden. Answer: I would say the most relevant factor involved would be radiative cooling: ground areas closer to the buildings do not lose as much heat energy, because the buildings radiate some energy back and keep those areas frost-free, while open, exposed areas lose more energy and experience frost.
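The radiative-cooling explanation can be made semi-quantitative with a rough Stefan-Boltzmann sketch; every temperature and view-factor value below is an illustrative assumption of mine, not measured data:

```python
# Rough radiative balance of a ground patch at night (all values are assumptions).
# Net loss = what the patch emits minus what it receives from whatever fills
# its sky view: cold clear sky for the open lawn, partly a warmer wall for the
# sheltered strip next to the building.
SIGMA = 5.67e-8       # W m^-2 K^-4, Stefan-Boltzmann constant

T_ground = 273.0      # ground near 0 C
T_sky = 243.0         # effective clear-sky temperature, ~ -30 C (assumed)
T_wall = 278.0        # building facade, ~ +5 C (assumed)

def net_loss(wall_view_fraction):
    incoming = ((1.0 - wall_view_fraction) * SIGMA * T_sky ** 4
                + wall_view_fraction * SIGMA * T_wall ** 4)
    return SIGMA * T_ground ** 4 - incoming

loss_open = net_loss(0.0)        # patch seeing only sky
loss_sheltered = net_loss(0.5)   # patch with half its view blocked by the wall
print(round(loss_open, 1), round(loss_sheltered, 1))
```

Even with these crude numbers, the open patch loses heat a few times faster than the sheltered strip, so on a calm clear night the two zones would still appear.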
{ "domain": "earthscience.stackexchange", "id": 682, "tags": "temperature, wind, world-building" }
Calculating the connectance of a matrix iteratively
Question: I have a performance issue in R. I have a function that iterates over a dataframe for different levels of "site" and "method". The function samples 1:1000 interactions (rows), converts these to matrices and calculates a value of connectance for each matrix. This is repeated n times. The function runs exactly as I want it, returning a dataframe with connectance values for 1 to 1000 interactions when repeated a small number of times. The problem is when I increase the number of repetitions (say to 100) the function runs progressively slower. df <- read.table(text = "bird_sp plant_sp value site method 1 species_a plant_a 1 a m 2 species_a plant_a 1 a m 3 species_b plant_b 1 a m 4 species_b plant_b 1 a m 5 species_c plant_c 1 a m 6 species_a plant_a 1 b m 7 species_a plant_a 1 b m 8 species_b plant_b 1 b m 9 species_b plant_b 1 b m 10 species_c plant_c 1 b m 11 species_a plant_a 1 a f 12 species_a plant_a 1 a f 13 species_b plant_b 1 a f 14 species_b plant_b 1 a f 15 species_c plant_c 1 a f 16 species_a plant_a 1 b f 17 species_a plant_a 1 b f 18 species_b plant_b 1 b f 19 species_b plant_b 1 b f 20 species_c plant_c 1 b f", header = TRUE) xDegrees <-function(df, size, numRep){ #Loading required library require(bipartite) #Creating vector of unique combinations df <- within(df, {SiteMethod <- paste(site, method, sep = ":")}) #Creating empty dataframe connectMatrix <- as.data.frame(matrix(rep(0,4), ncol = 4)) colnames(connectMatrix) <- c("Site","Method","Size","connectance") #Beginning of matrix k <- 1 #Beginning subsetting loop for(i in 1:length(unique(df$SiteMethod))){ #subsetting dataset dfSub <- subset(df, SiteMethod == unique(df$SiteMethod)[i]) #Storing site for matrix site <- as.character(dfSub[1,]$site) #Storing method for matrix method <- as.character(dfSub[1,]$method) for(l in 1:numRep){ #Beginning calculation loop for(j in 1:length(size)){ #show progress print(paste("S:M", i,j , "completed", sep = " ")) #The size being calculated subSize <- size[j] 
#generate random samples and convert to matrices rows <- sample(1:nrow(dfSub), subSize, replace=T) intlist <- dfSub[rows,] mat <- with(intlist, tapply(value, list(plant_sp, bird_sp), sum)) mat[is.na(mat)] <- 0 #network level function to calculate connectance con <- networklevel(mat, index = c("connectance")) #Stitch matrix together connectMatrix[k,] <- c(site, method, subSize, con) #Update row k <- k + 1 } } } #Return complete matrix return(connectMatrix) } #run the function. 1:1000 interactions, 100 reps stuff <- xDegrees(df, size = 1:1000, numRep = 100) Any ideas on how to speed this up? Answer: Consider by() to slice your dataframe by the site and method factors and then pass the dataframe subsets into an adjusted xDegrees function. Other than the group slicing, the other major change is replacing the nested for loops with rep and passing the result into one sapply call. The code below is of course untested. # Loading required library require(bipartite) xDegrees <- function(dfSub, size, numRep){ # Retrieving current site s <- as.character(dfSub$site[[1]]) # Retrieving current method m <- as.character(dfSub$method[[1]]) # Build large vector of all random runs sample_iter <- rep(seq_along(size), numRep) # Generate random samples and convert to matrices con_vec <- sapply(sample_iter, function(i) { rows <- sample(1:nrow(dfSub), i, replace=T) intlist <- dfSub[rows,] mat <- with(intlist, tapply(value, list(plant_sp, bird_sp), sum)) mat[is.na(mat)] <- 0 # network level function to calculate connectance networklevel(mat, index = c("connectance")) }) # size is recycled to the length of con_vec (length(size) * numRep) return(data.frame(method = m, site = s, subsize = size, con = con_vec)) } df_List <- by(df, df[,c("site", "method")], FUN=function(d) xDegrees(d, size = 1:1000, numRep = 100)) final_df <- do.call(rbind, df_List)
{ "domain": "codereview.stackexchange", "id": 28745, "tags": "performance, graph, matrix, r, simulation" }
How do I use a column with data of different layers for AI?
Question: I am working with real estate data for an ML/DL project. In the csv file there is a column in which each cell contains data like the examples below: Karachi Houses > DHA Defence Houses > DHA Phase 6 Houses Karachi Houses > DHA Defence Houses > DHA Phase 7 Houses Karachi Houses > DHA Defence Houses > DHA Phase 8 Houses Karachi Houses > DHA Defence Houses > DHA Phase 4 Houses Karachi Houses > DHA Defence Houses > DHA Phase 5 Houses Karachi Flats > DHA Defence Flats > DHA Phase 8 Flats > Emaar Crescent Bay Flats > Emaar Pearl Towers Flats Karachi Flats > DHA Defence Flats > DHA Phase 2 Extension Flats Karachi Flats > DHA Defence Flats > DHA Phase 8 Flats > Emaar Crescent Bay Flats Karachi Flats > DHA Defence Flats > DHA Phase 7 Flats > Jami Commercial Area Flats Karachi Houses > DHA Defence Houses > DHA Phase 7 Extension Houses Karachi Flats > DHA Defence Flats > DHA Phase 6 Flats > Nishat Commercial Area Flats It basically tells, level within level, where a particular property is located. The objective is to create a price prediction model. I think I can do something with this particular column (maybe classify it somehow) to be able to use it. But because the data might have 3 layers, or 4 or 5 or maybe more, I can't figure out the right way to make it numerical in order to, for example, implement a decision tree or random forest. Any guidance? Answer: You can create a new column for each layer of location, such as city, area, phase, and tower/building. Then, split the data in the original column on the ">" symbol and assign each layer to its corresponding new column. For the properties with fewer than five layers, you can assign NaN values to the remaining layers. After that, you can encode the categorical data using one-hot encoding or label encoding to make it numerical. This will enable you to use the new columns as features for your ML/DL model.
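A minimal sketch of the splitting step in plain Python, using two rows from the question (with pandas, the equivalent would be df["col"].str.split(" > ", expand=True), which also pads short rows with missing values):

```python
rows = [
    "Karachi Houses > DHA Defence Houses > DHA Phase 6 Houses",
    "Karachi Flats > DHA Defence Flats > DHA Phase 8 Flats > "
    "Emaar Crescent Bay Flats > Emaar Pearl Towers Flats",
]

# Split each cell on the separator and pad to the deepest hierarchy found,
# so every row gets the same number of layer columns (missing layers -> None/NaN).
split = [r.split(" > ") for r in rows]
depth = max(len(s) for s in split)
layer_columns = {
    f"layer_{i + 1}": [s[i] if i < len(s) else None for s in split]
    for i in range(depth)
}

# Each layer_* column is now a plain categorical feature, ready for one-hot
# or label encoding before feeding a decision tree or random forest.
print(depth, layer_columns["layer_1"])
```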
{ "domain": "datascience.stackexchange", "id": 11737, "tags": "machine-learning, classification, random-forest, decision-trees" }
Kinetic Energy And Rotational Motion
Question: The problem is, "A metal can containing condensed mushroom soup has mass 220 g, height 11.0 cm and diameter 6.38 cm. It is placed at rest on its side at the top of a 3.00-m-long incline that is at 30.0° to the horizontal and is then released to roll straight down. It reaches the bottom of the incline after 1.50 s. (a) Assuming mechanical energy conservation, calculate the moment of inertia of the can. (b) Which pieces of data, if any, are unnecessary for calculating the solution? My attempt at solving (a): I figured that I could use the equation $\Sigma W=\Delta K=1/2I\omega_f^2-1/2I\omega_i^2$ Since the force of gravity that is acting along the incline is applied constantly over a distance, $W_g=mg\cos(60^{\circ})(3.00~m)$; and since the can rolls down the incline in 1.50 s, $v=3.00/1.5 \implies 2~m/s$, which means that $\omega_f=2/0.0319 \implies 62.695925~rad/s$ With this, and knowing that $\omega_i=0$, $mg\cos(60^{\circ})(3.00)=1/2I(62.695925)^2 \implies I=\frac{(6.00)mg\cos(60^{\circ})}{(62.695925)^2}$ When I calculated this, I got $I=0.00165~kg\cdot m^2$; however, the true answer is $I=0.000187~kg\cdot m^2$ I've re-worked my solution several times, what am I doing incorrectly? As for (b), the answer is that the height of the can is an irrelevant piece of information, why is that? Answer: 1) The first thing I notice is that you have stated that the velocity at the end of the ramp is $2\textrm{ m/s}$. Remember that the can is accelerating as it rolls down the ramp, so the equation $v=\textrm{d}s/\textrm{d}t$ is not applicable here for finding the instantaneous velocity at the bottom. The can does indeed average $2\textrm{ m/s}$ during its trip, but this is not the final velocity of the can. Use this new corrected value to calculate angular frequency. 2) I find this problem simpler to solve using energy analysis. 
Take the can's initial potential energy: $$ E_\textrm{pot} = m g h = 3.234\textrm{ J}\quad.$$ We also know that the final kinetic energy of the can must equal this due to the conservation of energy, but the final energy of the can must be broken into translational kinetic energy (due to the can's movement) and rotational kinetic energy (due to the rotation). (This is why your solution above was giving incorrect answers, as it didn't take translational kinetic energy into account.) Thus, we also know that: $$ E_\textrm{pot}(t=0) = E_\textrm{kin,trans}(t=1.5\textrm{ s}) + E_\textrm{kin,rot}(t=1.5\textrm{ s})\quad,$$ which, for our case, is $$ 3.234\textrm{ J} = \frac{1}{2} mv^2 + \frac{1}{2}I\omega^2\quad.$$ Plugging the known values into this equation, with the correct angular velocity $\omega = v/r$ for rolling without slipping, gives the accepted answer: $$ I = 0.000187\textrm{ kg m}^2\quad.$$ As for part B, the height of the can is irrelevant: it never enters the calculation, which only uses the can's mass, its radius (through $\omega = v/r$), and the incline geometry and timing.
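Putting the energy balance into numbers (a sketch assuming g = 9.8 m/s², with the final speed equal to twice the average speed for uniform acceleration from rest, and v = ωr for rolling without slipping):

```python
import math

m = 0.220                  # kg, mass of the can
L = 3.00                   # m, length of the incline
theta = math.radians(30.0) # incline angle
r = 0.0638 / 2             # m, radius of the can
t = 1.50                   # s, time to roll down
g = 9.8                    # m s^-2

h = L * math.sin(theta)    # drop height = 1.5 m
v = 2 * L / t              # final speed = 4.0 m/s (twice the 2 m/s average)
omega = v / r              # rolling without slipping

# m*g*h = (1/2) m v^2 + (1/2) I omega^2  ->  solve for I
I = 2 * (m * g * h - 0.5 * m * v ** 2) / omega ** 2
print(I)   # ~1.87e-4 kg m^2
```

Note that the can's height never appears anywhere in the computation, which is the answer to part (b).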
{ "domain": "physics.stackexchange", "id": 5562, "tags": "homework-and-exercises, newtonian-mechanics, forces, rotational-dynamics, moment-of-inertia" }
Comparing columns from two CSV files - follow-up
Question: This is the second part of a question, you can find the first part here: Comparing columns from two CSV files I've made some changes to the script and here is what it looks like now: import csv, sys def get_column(columns, name): count = 0 for column in columns: if column != name: count += 1 else: return(count) def set_up_file(file, variable): columns = next(file) siren_pos = get_column(columns, 'SIREN') nic_pos = get_column(columns, 'NIC') variable_pos = get_column(columns, variable) return(siren_pos, nic_pos, variable_pos) def test_variable(variable): with open('source.csv', 'r') as source: source_r = csv.reader(source, delimiter=';') sir_s, nic_s, comp_s = set_up_file(source_r, variable) line_s = next(source_r) with open('tested.csv', 'r') as tested: tested_r = csv.reader(tested, delimiter=';') sir_t, nic_t, comp_t = set_up_file(tested_r, variable) size = sum(1 for line in tested_r) tested.seek(0, 0) line_t = next(tested_r) line_t = next(tested_r) correct = 0 try: while True: if(line_s[sir_s] == line_t[sir_t] and line_s[nic_s] == line_t[nic_t]): if(line_s[comp_s] == line_t[comp_t]): correct += 1 line_t = next(tested_r) line_s = next(source_r) elif(int(line_s[sir_s]) > int(line_t[sir_t])): line_t = next(tested_r) elif(int(line_s[sir_s]) < int(line_t[sir_t])): line_s = next(source_r) else: if(int(line_s[nic_s]) > int(line_t[nic_t])): line_t = next(tested_r) else: line_s = next(source_r) except StopIteration: return(correct / size * 100) def main(): with open('tested.csv', 'r') as file: file_r = csv.reader(file, delimiter=';') columns = next(file_r) found = test_variable('SIREN') for column in columns: print(column, '%.2f' % (test_variable(column) / found * 100)) if(__name__ == "__main__"): main() There are no real performance issue in the new version but I feel like it could still be improved greatly. Also, would it be possible to reduce the size of the test_variable function? 
I thought about cutting it right before the try statement but I'll end up passing about 7 parameters which is not really a clean solution in my opinion. Answer: There is some duplication of code because you are handling two files. When you need some data converted to int it would be best to do soon as you've read it to avoid sprinkling other logic with int() calls. The two columns SIREN and NIC combined seem to form a sorting key to the file. You could simplify the if elif... part by performing the comparisons on (SIREN, NIC) tuples. To address the above, I propose to organize the code like this: def parse_file(file, variable): reader = csv.reader(file, delimiter=';') sir_s, nic_s, comp_s = set_up_file(reader, variable) for line in reader: key = int(line[sir_s]), int(line[nic_s]) yield key, line[comp_s] def test_variable(variable): with open('source.csv', 'r') as source, open('tested.csv', 'r') as tested: source_r = parse_file(source, variable) tested_r = parse_file(tested, variable) correct = 0 try: line_s = next(source_r) line_t = next(tested_r) while True: key_s, comp_s = line_s key_t, comp_t = line_t if line_s == line_t: correct += 1 if key_s >= key_t: line_t = next(tested_r) if key_s <= key_t: line_s = next(source_r) except StopIteration: return correct Note however that I've omitted the computation of size. This could be done by incrementing a variable after reading each line, but since lines are read at multiple places, and some may be left in the end if the other file ends first, it may be best to count the lines separately like you have done.
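One reason the (SIREN, NIC) tuples work so cleanly is that Python compares tuples lexicographically, which matches the files' SIREN-then-NIC sort order. A small self-contained sketch of the same merge idea (simplified by me, not the reviewed code):

```python
# Tuples compare element by element, so (SIREN, NIC) keys order records the
# same way the nested SIREN-then-NIC comparisons in the original loop did.
assert (100, 5) < (200, 1)   # SIREN decides first...
assert (100, 5) < (100, 9)   # ...NIC breaks ties

def merge_matches(source, tested):
    """Count records whose (key, value) pairs match, walking both sorted streams once."""
    source, tested = iter(source), iter(tested)
    correct = 0
    try:
        s, t = next(source), next(tested)
        while True:
            if s == t:
                correct += 1
            key_s, key_t = s[0], t[0]
            if key_s >= key_t:           # tested is behind (or keys equal): advance it
                t = next(tested)
            if key_s <= key_t:           # source is behind (or keys equal): advance it
                s = next(source)
    except StopIteration:
        return correct

pairs_a = [((1, 1), "x"), ((1, 2), "y"), ((2, 1), "z")]
pairs_b = [((1, 1), "x"), ((1, 2), "q"), ((3, 1), "z")]
print(merge_matches(pairs_a, pairs_b))   # 1: only the (1, 1) record matches
```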
{ "domain": "codereview.stackexchange", "id": 32713, "tags": "python, csv" }
What is the velocity $u^\mu$ in the stress-energy-tensor of a perfect fluid?
Question: I am currently learning about fluid dynamics in special relativity. We defined the stress-energy-tensor of a perfect fluid to be \begin{equation} T^{\mu \nu} = (\rho + P) u^\mu u^\nu + P g^{\mu \nu}. \end{equation} We said that $u^\mu$ is the 4-velocity of the "MCRF" of the fluid, that is the frame in which the bulk velocity is zero, i.e. \begin{equation} \sum_a u_a^\text{MCRF} = 0. \end{equation} Where the summation goes over all particles. It is however possible to derive the Euler-equation (in the non-relativistic limit) from the conservation of energy-momentum $T^{\mu\nu}_{\quad,\nu}= 0$, \begin{equation} \rho (\partial_t \mathbf{v} + (\mathbf{v} \cdot \nabla) \mathbf{v})) = - \nabla P. \end{equation} I have trouble imagining that the bulk velocity should depend on space-time coordinates. I guess you could in principle define the bulk velocity to be zero at spaces other than the fluid and $u^\mu$ at the point (or volume) of the fluid. But then the Euler equation seems to be strange. The Euler equation usually treats the velocity as a vector field of the fluid and gives insight into the internal dynamics of the fluid. How can that happen with the bulk velocity which contains no information about the internal flows? Could someone explain to me how I can interpret $u$ or $\mathbf{v}$ as a vector field in the sense of how the Euler equation treats them? Answer: The MCRF is not a single frame: there is a different frame at each spacetime point. The summation for the bulk velocity doesn't run over all particles: it runs over a small region surrounding the chosen point, much smaller than the typical distances we will be considering in our study of the fluid, but much larger than the distance between molecules or the mean free path or whatever. It's a local bulk velocity. There's nothing relativistic about this: it's exactly the same definition of fluid velocity as the one we use in classical fluid dynamics.
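To connect $u^\mu$ to the Euler equation quoted in the question, here is a sketch of the nonrelativistic limit (assuming metric signature $(-,+,+,+)$, $c=1$, $u^\mu \approx (1,\mathbf v)$ with $|\mathbf v|\ll 1$ and $P \ll \rho$), with $u^\mu(x)$ understood as the local bulk-velocity field described above:

```latex
% Nonrelativistic limit of the perfect-fluid stress-energy tensor:
T^{00} \approx \rho, \qquad
T^{0i} \approx \rho v^i, \qquad
T^{ij} \approx \rho v^i v^j + P\,\delta^{ij}

% mu = 0 component of T^{\mu\nu}{}_{,\nu} = 0: the continuity equation
\partial_t \rho + \partial_j(\rho v^j) = 0

% mu = i component:
\partial_t(\rho v^i) + \partial_j(\rho v^i v^j) + \partial_i P = 0

% Expanding the derivatives and subtracting v^i times the continuity equation:
\rho\,\partial_t v^i + \rho\, v^j \partial_j v^i = -\partial_i P
\quad\Longleftrightarrow\quad
\rho\bigl(\partial_t \mathbf{v} + (\mathbf{v}\cdot\nabla)\mathbf{v}\bigr) = -\nabla P
```

Every quantity here is a field evaluated at a spacetime point, which is exactly why the Euler equation can describe internal flows even though each $u^\mu(x)$ is a (local) bulk velocity.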
{ "domain": "physics.stackexchange", "id": 73648, "tags": "special-relativity, fluid-dynamics, stress-energy-momentum-tensor" }
Approximation algorithm for minimising $x_i+y_i$ for monotonically increasing sequence $x_i$ and monotonically decreasing sequence $y_i$
Question: This is a cross-post from a question I asked on cs.stackexchange 2 weeks ago with no answers. I thought it might find home here. We are given sorted $0\leq x_1 \leq x_2 \leq \dots \leq x_n$ and $y_1 \geq y_2 \geq \dots \geq y_n \geq 0$ non-negative integers accessible through oracles, with the additional constraints $x_{i+1}-x_i \leq 1$ and $y_i - y_{i+1} \leq 1$. Can we approximate the minimum of $x_i + y_i$ with $o(n)$ oracle queries to $x_i$, $y_i$ values, or is $\Omega(n)$ required? For the exact case, a simple adversarial example can prove that you need to check all $n$ indices to find the minimum exactly (described in the link above). A $2$-approximation can be taken by returning the index $i$ that minimizes $|x_i-y_i|$. This can be done using binary search using $O(\log n)$ queries. Can we do better than a $2$-approximation in $o(n)$? Answer: Theorem 1. For any $\epsilon>0$, there is a $(1+\epsilon)$-approximation algorithm that makes $O(\epsilon^{-1}\log n)$ queries. Note that if $\epsilon$ is arbitrarily small but constant, the algorithm makes $O(\log n)$ queries. Before we prove the theorem, we prove the following utility lemma: Lemma 1. Let $P_1, P_2, \ldots, P_m$ be a partition of the indices $[n]$, and for each $\ell\in [m]$ let $a(\ell)$ be a $(1+\epsilon)$-approximate solution to the subproblem induced by $P_\ell$. That is, $$x_{a(\ell)} + y_{a(\ell)} \le (1+\epsilon) \big(\min_{i\in P_\ell} x_i + y_i\big).$$ Let $a(\ell^*)$ be the best of these approximation solutions. That is, $\ell^* = \arg\min_\ell \{x_{a(\ell)} + y_{a(\ell)}\}$. Then $a(\ell^*)$ is a $(1+\epsilon)$-approximate solution to the original problem. Proof. For any $i\in[n]$, we have, for $\ell$ such that $i\in P_\ell$, $$x_i + y_i \ge (x_{a(\ell)} + y_{a(\ell)})/(1+\epsilon) \ge (x_{a(\ell^*)} + y_{a(\ell^*)})/(1+\epsilon).~~~~~\Box$$ Now we prove the theorem. Here is the algorithm. 
First, using a single binary search on $x_i/y_i$, find $h = \max\{i\in [n] : x_i / y_i \le 1\}$. (Note that $x_i/y_i$ is increasing with $i$, so this takes $O(\log n)$ queries.) Now partition the index set $[n]$ into two parts: $\{1, \ldots, h\}$ and $\{h+1, \ldots, n\}$. First compute a $(1+\epsilon)$-approximate solution to the subproblem induced by the first part $\{1, \ldots, h\}$ as follows. Fix integer $k = \lceil 1/\epsilon\rceil$. Using $k-1$ binary searches, partition the index set $[h]$ into $k$ parts, where, for each $\ell\in [k]$, each index $i$ in the $\ell$th part $P_\ell$ satisfies $$(\ell-1)/k \le x_i / y_i \le \ell/k.$$ [For intuition, note that this condition is equivalent to $$y_i (1+(\ell-1)/k) \le x_i + y_i \le y_i(1 + \ell/k),$$ so that within the part we have $$x_i + y_i = (1+O(1/k))(1+(\ell-1)/k) y_i,$$ that is, the value $(1+(\ell-1)/k) y_i$ is a $(1+O(\epsilon))$-approximation to $x_i + y_i$.] Now, for each $\ell\in [k]$, let $a(\ell)$ be the index $i$ in the $\ell$th part $P_\ell$ minimizing $y_i$. Note that $y_i$ is non-increasing with $i$, so we can just take $a(\ell) \gets \min P_\ell$, without doing additional queries. Note that $a(\ell)$ is a $(1+\epsilon)$-approximate solution to the subproblem induced by $P_\ell$, because for any $i\in P_\ell$ we have $$x_i + y_i \ge y_i(1+(\ell-1)/k) \ge y_{a(\ell)}(1+(\ell-1)/k) \ge (x_{a(\ell)} + y_{a(\ell)})\alpha(k, \ell),$$ where $$\alpha(k, \ell) = \frac{1+(\ell-1)/k}{1+\ell/k} = \frac{k+\ell-1}{k+\ell} = 1 - \frac{1}{k+\ell} \ge 1- \frac{1}{k+1} = \frac{1}{1+1/k} \ge \frac{1}{1+\epsilon}.$$ By Lemma 1, this gives us a $(1+\epsilon)$-approximate solution to the first part $[h]$ of $[n]$, using $O(\epsilon^{-1} \log n)$ queries. Similarly (exchanging the roles of $x$ and $y$) we can compute a $(1+\epsilon)$-approximate solution to the second part $[n]\setminus[h]$ using $O(\epsilon^{-1} \log n)$ queries.
By Lemma 1, taking the best of these two solutions gives us a $(1+\epsilon)$-approximate solution to the original problem, in $O(\epsilon^{-1} \log n)$ queries.$~~~\Box$
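For contrast with the $(1+\epsilon)$ algorithm, here is a minimal sketch of the $2$-approximation mentioned in the question: since $x_i - y_i$ is nondecreasing, a binary search for its sign change needs only $O(\log n)$ oracle queries (plain index access stands in for the oracle; the sequences below are illustrative):

```python
def approx_min(x, y):
    """2-approximate argmin of x[i] + y[i] for nondecreasing x, nonincreasing y.

    Index access stands in for an oracle query; only O(log n) entries are touched.
    """
    n = len(x)
    lo, hi = 0, n - 1
    # Binary search for the first index with x[i] - y[i] >= 0
    # (the difference is nondecreasing, so the predicate is monotone).
    while lo < hi:
        mid = (lo + hi) // 2
        if x[mid] - y[mid] >= 0:
            hi = mid
        else:
            lo = mid + 1
    # The crossing index and its left neighbour are the only candidates:
    # to the left, y dominates and y is nonincreasing; to the right, x
    # dominates and x is nondecreasing.
    candidates = [i for i in (lo - 1, lo) if 0 <= i < n]
    return min(candidates, key=lambda i: x[i] + y[i])

# Sequences satisfying the step constraints (x nondecreasing, y nonincreasing)
x = [0, 0, 1, 2, 3, 4, 5, 5, 6]
y = [8, 7, 6, 6, 5, 4, 3, 2, 2]
i = approx_min(x, y)
best = min(a + b for a, b in zip(x, y))
print(i, x[i] + y[i], best)
```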
{ "domain": "cstheory.stackexchange", "id": 5555, "tags": "approximation-algorithms, oracles" }
Is Moment of Inertia, always, independent of angular velocity?
Question: I once heard in a TED talk that elementary particles like the electron and proton can exist in two different positions at the same time. Now I'm trying to understand a rotating rod from this perspective. Consider a rod rotating about an axis perpendicular to the plane of the rod and passing through the centre. Increase the angular velocity of the rod to very high values (tending to infinity, as a hypothetical case). What we (a frame on the ground) would observe, or what conclusions can we draw: we could say that the rotating rod looks pretty much like a disc (and I'm not sure if it would look like a stationary disc or a rotating disc). The above point can be understood easily if we treat the rod like a fast-moving electron, where it has a tendency to exist at two positions. Now that the rod is in rotational motion (not something like the motion of an electron, precisely), it would have a tendency to exist at different positions (something like drawing two diameters of the circle traced by the rod, and saying that the rod exists at both diameters). As the rod is moving with very high (almost infinite) angular velocity, we could say that the rod exists at all the diameters of the circle. If you put your finger at any point on the circle traced by the rod, you would feel the rod because (since the rod has very high angular velocity) it exists at many different positions at the same time. I think we can consider this rod (with high angular velocity) to be a solid object, precisely, a disc. CASE I: I'm assuming the disc (so formed from the observer's frame) to be rotating. So, finally, we have a rotating disc from the observer's frame. Points we know about a rotating disc and a rotating rod: moment of inertia of a rod about an axis through the centre and perpendicular to the plane = M(L^2)/12 = P; moment of inertia of a disc about an axis through the centre and perpendicular to the plane = M((L/2)^2)/2 = 3P/2. How has the moment of inertia of the rod changed like that?
We see that the length of rod hasn't changed(Assuming the rod to be a ,strictly, rigid body) and since moment of inertia has increased, does this mean that mass has increased? Does this mean that moment of inertia is dependent on angular velocity(if it is high enough). CASE II:Now,I'm assuming that the disc is a stationary one: In such a case moment of inertia of the disc(so formed from observers frame) would be zero. How's this possible? How can a body lose moment of inertia due to very high(angular velocity)? IS MY UNDERSTANDING AND CORRELATION CORRECT ? If wrong please explain where I have misunderstood the concepts. Answer: I once heard in a ted talk that the elementary particles like electron and proton can exist in two different positions at the same time. Saying that particles can exist in two places at once is just people trying to explain quantum mechanics to the lay person. This is not actually what is happening. According to traditional interpretations, particles like electrons do not have any defined position until it is measured to be at some position. As rod is moving with very high(almost infinity) angular velocity we could say that the rod exists at all the diameter of the circle.If you put your finger at any point on the circle, traced by the rod, you would feel the rod because (since rod has very high angular velocity) it exists at many different positions at the same time. Classically, if an object is in motion it does not mean it exists in multiple positions. Even something moving at a high velocity only exists at one position in one instant of time. I think we can consider this rod (with high angular velocity) to be a solid object,precisely, a disc. Just because due to our biology a fast rotating rod looks like a disk, it does not mean the spinning rod is actually a disk. It is still just a spinning rod. In other words, a spinning rod is not the same thing as a stationary disk. 
After this point, your other assumptions and cases do not need to be pursued any further.
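For what it's worth, the moment-of-inertia arithmetic quoted in the question does check out, so the issue is the physical premise, not the algebra. A quick sketch of that calculation:

```python
from fractions import Fraction

# Coefficients of M*L^2 for the two bodies in the question:
# rod about a perpendicular axis through its centre: I = M L^2 / 12
# disc of radius L/2 about the same axis:            I = M (L/2)^2 / 2 = M L^2 / 8
i_rod = Fraction(1, 12)
i_disc = Fraction(1, 2) * Fraction(1, 2) ** 2

print(i_disc / i_rod)  # 3/2 -- the "(3P)/2" claimed in the question
```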
{ "domain": "physics.stackexchange", "id": 50353, "tags": "special-relativity, rotational-dynamics, moment-of-inertia, angular-velocity" }
Keeping air in a well
Question: Let's say I've got an Earth-like planet with no atmosphere: it's just a barren ball of rock. I want to live there, but I don't like domes, so instead I'm just going to dig a big hole and let gravity keep the air in. How deep a hole do I need? According to a chart I found, the density of the atmosphere drops to pretty much zero by about 50km, at the top of the stratosphere. But 'pretty much zero' is not zero; the mesosphere beyond that extends up to about 80km and while vanishingly thin is responsible for dealing with most meteors. If my hole is a mere 50km deep, then, some of my air is going to diffuse out of the hole and onto the planet's surface. But the surface of my planet is largely flat; there's nowhere for the air to go, so it's just going to hang around and form a dynamic equilibrium. (Unlike, say, if I built a 50km wall and tried to keep the air inside. Air would leak over the top of the wall, fall down into the vacuum on the other side, and be lost forever. Which is why the Ringworld had walls 1000km high.) So I don't really know how shallow a hole I can get away with. I can replace the air, but I would like it to go without maintenance for at least small geological timescales. Any advice before I start up the earth-moving equipment? (Yes, it's SF worldbuilding.) Answer: Even a normal planet doesn't permanently lock its atmosphere: a little bit of it is creeping out all the time. The air molecules are distributed according to a Maxwell-Boltzmann distribution, which falls off to zero exponentially. A small fraction of that air will always be above escape velocity and will disappear into space. The distribution of air re-thermalizes, and thus another fraction is lost to space. The fraction that is above escape velocity depends on the mass of the molecule: it's appreciable for helium on Earth (popped balloons are gone forever). 
For your deep well, you'd have to consider the shape of the Maxwell-Boltzmann distribution and the variation of pressure with altitude (and include a non-Earth "g"). Frame the problem in terms of the amount of loss that you're comfortable with--- something so small that it won't be missed or can easily be replenished. Someone who's actually engineering this might also want to chill the upper layer of gas with some kind of large-scale air conditioning. That would reduce the loss so that the hole wouldn't need to be as deep. Maybe a greenhouse effect could be useful to keep the upper layer cold and the lower layer warm. After all, who needs to see the sun?
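The loss argument can be made semi-quantitative. The fraction of a Maxwell-Boltzmann speed distribution above a cutoff $V$ has the closed form $\operatorname{erfc}(x) + \frac{2}{\sqrt{\pi}}\,x\,e^{-x^2}$ with $x = V/v_p$, where $v_p = \sqrt{2kT/m}$ is the most probable speed. A rough sketch for Earth-like numbers (the 1000 K upper-atmosphere temperature is an illustrative assumption, not a value for the story's planet):

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def escape_fraction(mass_kg, temp_k, v_escape):
    """Fraction of a Maxwell-Boltzmann speed distribution above v_escape.

    P(v > V) = erfc(x) + (2/sqrt(pi)) * x * exp(-x^2),  x = V / v_p,
    where v_p = sqrt(2 k T / m) is the most probable speed.
    """
    x = v_escape / math.sqrt(2 * K_B * temp_k / mass_kg)
    return math.erfc(x) + (2 / math.sqrt(math.pi)) * x * math.exp(-x * x)

# Illustrative: Earth's escape speed, a rough exospheric temperature.
v_esc, T = 11_200.0, 1000.0
for name, amu in [("He", 4.0), ("N2", 28.0)]:
    # Helium's fraction is many dozens of orders of magnitude larger
    # than nitrogen's -- hence "popped balloons are gone forever".
    print(name, escape_fraction(amu * AMU, T, v_esc))
```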
{ "domain": "physics.stackexchange", "id": 2448, "tags": "space, atmospheric-science, planets" }
Probability of probing $t$ locations in a Cuckoo hash is $O(\frac{1}{2^{t/2}})$ locations in the worst case
Question: I was told this question may be better received here. Prove that the probability that an insertion into a cuckoo hash table probes $t$ array locations is $O(\frac{1}{2^{t/2}})$. Keep in mind that there are two tables, each with size $s \ge 2n$, where $n$ is the number of elements in the set. I'm trying to use induction, but I don't know if this is the best method to go about proving this. The worst case to probe $1$ array location would be when all $n$ elements are stored in the first table, and we get probability $\frac{n}{2n} = \frac{1}{2} = O(\frac{1}{\sqrt{2}})$ for insertion. The worst case to probe $2$ array locations would be when all $n$ elements are stored in the first table, we hit the first table, and we succeed in the second table. This has the same probability as it does to probe $1$ array location. However, I don't know how to continue the analysis. For example, what's the worst case probability to probe $4$ array locations? The question statement implies that the worst case is $O(\frac{1}4)$, but how do we achieve this result? If the first and second tables each have $\frac{1}4$ of their array locations occupied, then the worst case to probe $4$ elements would be hitting in the first table, hitting in the second table, hitting in the first table again, then finally inserting into the second table $\Rightarrow \frac{1}4\cdot\frac{1}4\cdot\frac{1}{4}\cdot\frac{3}4 = \frac{3}{256} \not = \frac{1}4$. Does anyone have a clearer way of thinking about this problem? Answer: Basically, you just write down everything that is necessary to have $t$ evictions and then use the universality of your hash functions to bound the probability. Assume you want to insert $x$ and your hash functions are $f,g$. Then you have $t=1$ probe if $f(x)$ is occupied, that is $$ \begin{align} \mathbb{P}[t=1]& =\sum_{y \neq x} \mathbb{P}[f(x)=f(y)]\\ &\le n \cdot 1/s \le 1/2. \end{align}$$ since your hash functions are $(1,k)$-universal (I assume). 
Now for $t=2$, the place $f(x)$ has to be occupied by some element $y$ (that is, $f(x)=f(y)$), and the alternative location for $y$ has to be blocked by some other element $z$ (that is, $g(y)=g(z)$). Summing up over all possible values for $y$ and $z$ gives $$ \begin{align} \mathbb{P}[t=2]& =\sum_{y,z \neq x} \mathbb{P}[f(x)=f(y) \text{ and } g(y)=g(z)]\\ &\le n^2 \cdot 1/s^2 \le 1/4. \end{align}$$ The summands of the sum are bounded by $1/s^2$ due to the universality of the hash functions, and you have fewer than $n^2$ summands. I think from here you can continue by yourself.
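To make the eviction chain in this analysis concrete, here is a toy cuckoo insertion that counts probed locations; the hash functions below are simple deterministic stand-ins for the universal families assumed in the proof:

```python
def cuckoo_insert(table1, table2, h1, h2, key, max_probes=32):
    """Insert `key`, evicting occupants cuckoo-style.

    Returns the number of array locations probed, or None if the chain
    exceeds max_probes (a real implementation would then rehash).
    """
    probes, current, use_first = 0, key, True
    for _ in range(max_probes):
        table, h = (table1, h1) if use_first else (table2, h2)
        slot = h(current)
        probes += 1
        if table[slot] is None:
            table[slot] = current
            return probes
        table[slot], current = current, table[slot]  # evict the occupant
        use_first = not use_first
    return None

s = 8
t1, t2 = [None] * s, [None] * s
h1 = lambda k: k % s             # toy hash functions, not universal
h2 = lambda k: (k // s) % s
for k in (3, 11, 19):            # all three collide at slot 3 of table 1
    print(k, cuckoo_insert(t1, t2, h1, h2, k))
# 3 probes one location; 11 and 19 each evict the previous occupant
# into table 2, probing two locations.
```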
{ "domain": "cs.stackexchange", "id": 3699, "tags": "algorithms, algorithm-analysis, data-structures, hash-tables" }
Derivation of entropy, I don't understand the relation $ \frac{\partial S_2}{\partial E_1} = -\frac{\partial S_2}{\partial E_2} $
Question: My course guide gives the following derivation for change in entropy w.r.t. energy, where I don't understand a step: \begin{align} E & = E_1 + E_2 \\ S & = S_1 + S_2 \\ S(E,E_1 ) & = S_1 (E_1) + S_2(E-E_1) \end{align} Then the question is asked, which $E_1$ the new S wouldn't change. I don't truly understand what the premise is of this question. But I understand that it is a maximum, thus you can find it with setting a derivative to zero as follows: $$\frac{\partial S}{\partial E_1} = 0 =\frac{\partial S_1}{\partial E_1} + \frac{\partial S_2 }{\partial E_1} \tag{ chain rule}$$ Then the guide just says: "Use the following relation". I don't get how to find this relation? $$ \frac{\partial S_2}{\partial E_1} = -\frac{\partial S_2}{\partial E_2} $$ Where then this follows: $$ \frac{\partial S_1}{\partial E_1} = \frac{\partial S_2}{\partial E_2} $$ Central question: $$ \frac{\partial S_2}{\partial E_1} = -\frac{\partial S_2}{\partial E_2} $$ How to find this. When I differentiate $S$ to $E_2$ I just get: $$\frac{\partial S}{\partial E_2} = 0 =\frac{\partial S_1}{\partial E_2} + \frac{\partial S_2 }{\partial E_2} \tag{ chain rule}$$ Answer: It's not unreasonable to be confused about this. Let's say you have a function $f$ of one variable and $f'$ is its derivative. Then we have $$\frac{d}{dx} f(c-x) = \color{red}{-}f'(c-x)$$ via the chain rule. If we have a function $g$ of two variables, then we might similarly write $$\frac{\partial}{\partial x} g(c-x,y) = \color{red}{-} \big(\partial_1 g\big)(c-x,y)$$ where $\partial_1g$ is the function obtained by differentiating $g$ with respect to its first entry. It is extremely common to simply call $\partial_1 g$ the same thing as $\frac{\partial g}{\partial x}$, but that only works when we assume that $x$ is the thing which we plug into the first slot of $g$. When that isn't the case - e.g. here - that notation is bad in my opinion. 
In our case, we have the following expression: $$S(E,E_1) := S_1(E_1) + S_2(E - E_1)$$ $S_1(\epsilon)$ is the entropy of the first system when it has energy $\epsilon$. $S_2(\epsilon)$ is the entropy of the second system when it has energy $\epsilon$. $S(E,E_1)$ is the entropy of both systems together when they have total energy $E$, and when the first system has energy $E_1$ (so the second system has energy $E_2= E-E_1$). If we wish to maximize this with respect to $E_1$, we would differentiate and set the result to zero: $$\frac{d}{dE_1} \big(S_1(E_1) + S_2(E-E_1)\big) = S_1'(E_1) \color{red}{-} S_2'(E-E_1) = 0$$ $$\implies S_1'(E_1) = S_2'(\underbrace{E-E_1}_{=E_2})$$ This is what your course guide is trying to say.
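The sign flip from the chain rule can also be checked numerically with toy entropy functions (the logarithms below are illustrative choices, not tied to any particular system):

```python
import math

E = 3.0
S1 = math.log                       # toy entropy of subsystem 1
S2 = lambda e: 2.0 * math.log(e)    # toy entropy of subsystem 2

def deriv(f, x, h=1e-6):            # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

# Chain rule: d/dE1 [S2(E - E1)] = -S2'(E - E1)
lhs = deriv(lambda e1: S2(E - e1), 1.2)
rhs = -deriv(S2, E - 1.2)
print(abs(lhs - rhs) < 1e-6)        # True

# Stationarity of S(E1) = S1(E1) + S2(E - E1) happens where
# S1'(E1) = S2'(E - E1); for these toy functions that is E1 = 1.
total = lambda e1: S1(e1) + S2(E - e1)
print(abs(deriv(total, 1.0)) < 1e-6)  # True
```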
{ "domain": "physics.stackexchange", "id": 87904, "tags": "homework-and-exercises, thermodynamics, entropy, differentiation, notation" }
Origin of Church encodings
Question: In which paper did Alonzo Church first describe Church encoding? I can't find any articles that actually cite the paper, but I am interested in reading it. Answer: It should be this one: Church, A. (1941) The Calculi of Lambda-Conversion. Annals of Mathematics Studies No. 6, Princeton University Press.
{ "domain": "cstheory.stackexchange", "id": 1502, "tags": "reference-request, lo.logic, lambda-calculus, ho.history-overview" }
Initializing and manipulating OctoMaps with more than just occupancy
Question: Hi, Sorry if this is too much of a C++ question, but I'm having trouble understanding how to initialize and update an octree in which the nodes can store a data value. There is a nice example online showing how to make an octree such as OcTree tree (0.1); //0.1 is the spatial resolution in m. and then setting the occupancy of the nodes to true, using tree.updateNode(point_in_3d,true) Where point_in_3d is an octomath::Vector3. So far so good. However, reading the documentation, I've come across an octomap::OcTreeDataNode< T > that I'd much rather use. With a data container of arbitrary type T in every leaf, I could presumably store additional information about the geometry there, such as the estimated surface normal, or color, or measurement uncertainty, etc. What I'd like to know (and haven't been able to understand from the docs) is, how to create such an octree (suppose we want T to be a 3-element vector of type float), and how to update the data. The reason I'm not using pcl octrees is because octomap provides a way of doing ray casting into the tree, and also because of octovis (neat!). Thanks Originally posted by Daniel Canelhas on ROS Answers with karma: 465 on 2011-10-06 Post score: 1 Answer: That is indeed possible. What you need to do is define a new node type and a new octree type, as the nodes just store data and the tree handles the data manipulation. OcTree stores just occupancy but you can easily derive from OccupancyOcTreeBase with your own node type as template parameter if you want to add additional properties to the node. Note that this will not only affect leafs but also inner nodes of the tree (as all nodes are identical). As examples to get started, there is an OcTreeStamped implementation which stores timestamps, and in the OctoMap trunk there is a ColorOcTree which stores color (not yet released, thus not yet in the ROS octomap). 
Originally posted by AHornung with karma: 5904 on 2011-10-06 This answer was ACCEPTED on the original site Post score: 4 Original comments Comment by AHornung on 2011-10-09: Oh, I just realized: The ColorOcTree is only in OctoMap trunk (not yet released)! So if you are working in ROS you obviously won't find it. You would need to compile your own version in your copy, but you shouldn't even need it, because you only want it as an example of node extension / derivation. Comment by AHornung on 2011-10-09: Do you have a dependency line for octomap in your manifest? Otherwise the linker won't find octomap from your own package. You should also make sure that you rename both the files and the classes, as they will conflict otherwise. Comment by Daniel Canelhas on 2011-10-07: The previous example is just with an unaltered copy of color_tree_test.cpp. The same thing happens if I take OcTreeStamped and rename/modify the code related to storing/managing the timestamp. Comment by Daniel Canelhas on 2011-10-07: I copied ColorOcTree.h into the include directory of a ros package and included it in my code. rosmake seems to have a problem with the linking because I get: color_tree_test.cpp:23: undefined reference to `octomap::ColorOcTree::ColorOcTree(double)' when compiling. (same happens w/ my custom nodes) Comment by Daniel Canelhas on 2011-10-07: Thanks! For a derivation of OcTreeStamped, I could work on something similar to the LogOdds, it seems. Comment by AHornung on 2011-10-06: Best copy the OcTreeStamped or ColorOcTree files, rename them, and start modifying them to the behavior you want. The stamped classes probably have the least overhead in them. Comment by Daniel Canelhas on 2011-10-06: I've read the answer for a similar question: http://answers.ros.org/question/1094/augmenting-octomap-nodes-with-additional-info But I'm having trouble understanding how to actually make this derivation. Like I said, this is probably a bit of a C++ question. doxygen doesn't say much for a beginner :(
{ "domain": "robotics.stackexchange", "id": 6884, "tags": "ros, octomap, octomap-mapping" }
IUPAC nomenclature of alicyclic compound with an equal number of cyclic and acyclic carbons
Question: In this compound there are three cyclic and three acyclic carbons. Even the substituent comes on the 2nd position in both cases. So, should the main chain be the cyclic one, since there is a rule that cyclic has a priority over acyclic? Answer: According to current IUPAC recommendations (2013), for the preferred IUPAC name, a ring has seniority over a chain. Therefore, the given structure is a propylcyclopropane. (However, the IUPAC recommendations allow that the context may favour the chain.) The two prefixes of cyclopropane (chloro- and 2-bromopropyl-) are arranged alphabetically. Thus, the name is 1-(2-bromopropyl)-2-chlorocyclopropane.
{ "domain": "chemistry.stackexchange", "id": 3024, "tags": "organic-chemistry, nomenclature" }
Can a projectile bullet kill?
Question: Today, I was watching a movie in which a person shot a gun into the air. The bullet went high into the sky, became stationary for a moment, then dived back down, killing a non-innocent person by hitting him directly in the head. And yes, the bullet was shot near vertical. But the question is: is it possible? I know that an ideal projectile will reach the ground with speed equal to its launch speed. But in this case air is also present, which will constantly decrease the bullet's velocity until it reaches its terminal velocity. But is it fatal? Also, there is another movie in which they make a type of bomb with gravity only. They use a long and heavy rod planted in a satellite (no explosives), which, whenever required, is not launched but just dropped. And they showed it powerful enough to wipe out a whole city. Is it actually possible, and do we need to care? Can someone do this in reality? I know that ideally it should hit at $11\ \mathrm{km\,s^{-1}}$, but what about reality? As per WillO's comment, I found this in a Wikipedia comment on G.I. Joe on the kinetic bombardment page: However, the movie misrepresented physics by claiming the rod would not be "launched" or "fired" but merely "dropped". If it were released without force it would orbit the Earth in the same manner as the platform itself. Answer: I think this would be better suited for the Skeptics SE, but here goes... The Mythbusters tested the "bullet fired into the air" myth and concluded it's plausible under certain conditions (bullet fired at an angle so it maintains its spin and thus doesn't tumble). See here. As for the second thing, it's called kinetic bombardment. The US Air Force is doing research on it and claims a tungsten rod 6 m long and 0.3 m in diameter would deliver the energy equivalent of several tons of TNT. So yes, absolutely possible. "Parking" such rods in an orbit around Earth would be quite expensive though.
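The "is it fatal" part hinges on the terminal velocity, where drag balances weight: $mg = \tfrac12 \rho v^2 C_d A$. A rough sketch with assumed, illustrative parameters (the mass, calibre, and drag coefficients below are guesses for a 9 mm pistol round, not measured data):

```python
import math

def terminal_velocity(mass, diameter, c_d, rho_air=1.225, g=9.81):
    """Speed at which drag balances gravity: m g = 0.5 rho v^2 C_d A."""
    area = math.pi * (diameter / 2) ** 2
    return math.sqrt(2 * mass * g / (rho_air * c_d * area))

# Assumed, illustrative parameters for a 9 mm pistol bullet:
m, d = 8.0e-3, 9.0e-3            # kg, m
for c_d in (0.3, 0.5, 1.0):      # nose-first spin ... tumbling
    # Lands in the tens of m/s, far below muzzle velocity.
    print(c_d, terminal_velocity(m, d, c_d))
```

This is consistent with the Mythbusters result quoted above: a tumbling bullet (high $C_d$) falls slowly enough to be unlikely to kill, while one that keeps its spin (low $C_d$) arrives noticeably faster and stays dangerous.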
{ "domain": "physics.stackexchange", "id": 56710, "tags": "newtonian-mechanics, collision, projectile, speed, biology" }
Why isn't pressure a measure of energy?
Question: Hey guys, I'm having a problem understanding the 1st Law of Thermodynamics. If I increase the pressure of a closed system by compressing a gas in a cylinder isothermally, the 1st Law states that all the work I put into the system, I get out as heat. So my 2nd state would be the gas at the same temperature (same internal energy), but compressed (at a higher pressure). But isn't this compressed gas able to do more work than the uncompressed one? If I drilled a hole in the cylinder, my high-pressure gas could do some work. Where does this energy come from, since all the input energy I put in as work came out in the form of heat? Answer: The pressure is higher, but the volume is less. The kinetic energy per particle stays the same when compressing isothermally; there are just more particles per volume to "bounce on the walls", hence the higher pressure. The higher-pressure gas from your drilled cylinder could do more work per volume, but since there is less volume available, the total work stays the same!
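One way to see where the "extra" work comes from: for an ideal gas compressed isothermally, the work recoverable by letting it re-expand equals the work put in, and that energy flows back in as heat from the surroundings, while the internal energy $U(T)$ never changes. A small numeric sketch (one mole at 300 K, halving the volume; the numbers are illustrative):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def isothermal_work_by_gas(n, T, v_from, v_to):
    """Reversible work done BY an ideal gas in an isothermal volume
    change: W = n R T ln(v_to / v_from)."""
    return n * R * T * math.log(v_to / v_from)

n, T = 1.0, 300.0
V1, V2 = 0.0248, 0.0124             # compress to half the volume

w_in = -isothermal_work_by_gas(n, T, V1, V2)    # work done ON the gas
q_out = w_in                                    # first law, with dU = 0
w_back = isothermal_work_by_gas(n, T, V2, V1)   # recovered on re-expansion

print(round(w_in), round(w_back))   # equal: the gas stores no extra U;
                                    # re-expansion draws heat back in
```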
{ "domain": "physics.stackexchange", "id": 513, "tags": "thermodynamics, energy, energy-conservation, pressure" }
HTML form using <table> or <div>
Question: I read that, in HTML, it is better to display non-tabular data using the <div> tag instead of the <table> tag. I have a website that is displaying a lot of things as tables, because I like the look of it. Here's what it looks like: Screenshot Here are the CSS and HTML for the table: table{width:100%; border-collapse:collapse; table-layout:auto; vertical-align:top; margin-bottom:15px; border:1px solid #CCCCCC;} table thead th{color:#FFFFFF; background-color:#666666; border:1px solid #CCCCCC; border-collapse:collapse; text-align:center; table-layout:auto; vertical-align:middle;} table tbody td{vertical-align:middle; border-collapse:collapse; border-left:1px solid #CCCCCC; border-right:1px solid #CCCCCC;} table thead th, table tbody td{padding:5px; border-collapse:collapse;} table tbody tr:nth-child(odd){color:#666666; background-color:#F7F7F7;} table tbody tr:nth-child(even){color:#666666; background-color:#E8E8E8;} .table_sign_up{width:600px;} .table_sign_up th:nth-child(1){width:200px;} .table_sign_up th:nth-child(2){width:400px;} .table_sign_up input[type="text"] {width:380px;} ::-webkit-input-placeholder{color: #B0B0B0;} ::-moz-placeholder{color: #B0B0B0;} :-ms-input-placeholder{color: #B0B0B0;} :-moz-placeholder{color: #B0B0B0;} <!-- *************************************************************************** --> <h1>Your Info</h1> <!-- *************************************************************************** --> <p>Volunteers under 14 should be accompanied by an adult.</p> <table class="table_sign_up"> <thead> <tr> <th>Info Needed</th> <th>Enter It Here</th> </tr> </thead> <tbody> <tr> <td> E-Mail Address </td> <td> <input type="text" name="email" value="" required maxlength="254" /> </td> </tr> <tr> <td> First Name </td> <td> <input type="text" name="firstname" value="" required maxlength="100" /> </td> </tr> <tr> <td> Last Name </td> <td> <input type="text" name="lastname" value="" required maxlength="100" /> </td> </tr> <tr> <td> Cell Phone </td> 
<td> <input type="text" name="phone" value="" required placeholder="111-111-1111" maxlength="30" /> </td> </tr> <tr> <td> Shirt Size </td> <td> <select name="shirt_size" required> <option value="" selected="selected" ></option> <option value="Extra Small" >Extra Small</option> <option value="Small" >Small</option> <option value="Medium" >Medium</option> <option value="Large" >Large</option> <option value="Extra Large" >Extra Large</option> <option value="2XL" >2XL</option> <option value="3XL" >3XL</option> </select> </td> </tr> <tr> <td> How did you hear about this race? </td> <td> <select name="source" required> <option value="" selected="selected" ></option> <option value="The race e-mailed me" >The race e-mailed me</option> <option value="My volunteer group told me" >My volunteer group told me</option> <option value="marathonvolunteers.com" >marathonvolunteers.com</option> <option value="craigslist.org" >craigslist.org</option> <option value="idealist.org" >idealist.org</option> <option value="volunteermatch.org" >volunteermatch.org</option> <option value="createthegood.org" >createthegood.org</option> <option value="allforgood.org" >allforgood.org</option> <option value="eventbrite.com" >eventbrite.com</option> <option value="marathonvolunteers.com" >marathonvolunteers.com</option> <option value="Facebook Ad" >Facebook Ad</option> <option value="Race Facebook" >Race Facebook</option> <option value="Marathon Volunteers Facebook" >Marathon Volunteers Facebook</option> <option value="My friend told me" >My friend told me</option> <option value="google.com" >google.com</option> <option value="bing.com" >bing.com</option> <option value="yahoo.com" >yahoo.com</option> <option value="Other" >Other</option> </select> </td> </tr> </tbody> </table> I experimented with converting my code from <table> to <div>. It renders exactly the same to the user, but the code is a lot different. 
.data-collection {display:table; width:600px; border-collapse:collapse; table-layout:auto; vertical-align:top; margin-bottom:15px; border:1px solid #CCCCCC;} .data-collection div {display:table-row;} .data-collection div:nth-child(1) {color:#FFFFFF; background-color:#666666; border:1px solid #CCCCCC; border-collapse:collapse; text-align:center; table-layout:auto; vertical-align:middle; font-weight:bold;} .data-collection div:nth-child(1) span {vertical-align:middle; border-collapse:collapse; border-left:1px solid #CCCCCC; border-right:1px solid #CCCCCC; padding:5px; border-collapse:collapse;} .data-collection div:nth-child(2n+2){color:#666666; background-color:#F7F7F7;} .data-collection div:nth-child(2n+3){color:#666666; background-color:#E8E8E8;} .data-collection div span {display:table-cell; padding:5px; border-collapse:collapse;} .data-collection div span:nth-child(1){width:200px;} .data-collection div span:nth-child(2){width:400px;} .data-collection input[type="text"] {width:380px;} .data-collection input[type="password"] {width:380px;} ::-webkit-input-placeholder{color: #B0B0B0;} ::-moz-placeholder{color: #B0B0B0;} :-ms-input-placeholder{color: #B0B0B0;} :-moz-placeholder{color: #B0B0B0;} <!-- *************************************************************************** --> <h1>Your Info</h1> <!-- *************************************************************************** --> <p>Volunteers under 14 should be accompanied by an adult.</p> <div class="data-collection"> <div> <span>Info Needed</span> <span>Enter It Here</span> </div> <div> <span> E-Mail Address </span> <span> <input type="text" name="email" value="" required maxlength="254" /> </span> </div> <div> <span> First Name </span> <span> <input type="text" name="firstname" value="" required maxlength="100" /> </span> </div> <div> <span> Last Name </span> <span> <input type="text" name="lastname" value="" required maxlength="100" /> </span> </div> <div> <span> Cell Phone </span> <span> <input type="text" 
name="phone" value="" required placeholder="111-111-1111" maxlength="30" /> </span> </div> <div> <span> Shirt Size </span> <span> <select name="shirt_size" required> <option value="" selected="selected" ></option> <option value="Extra Small" >Extra Small</option> <option value="Small" >Small</option> <option value="Medium" >Medium</option> <option value="Large" >Large</option> <option value="Extra Large" >Extra Large</option> <option value="2XL" >2XL</option> <option value="3XL" >3XL</option> </select> </span> </div> <div> <span> How did you hear about this race? </span> <span> <select name="source" required> <option value="" selected="selected" ></option> <option value="The race e-mailed me" >The race e-mailed me</option> <option value="My volunteer group told me" >My volunteer group told me</option> <option value="marathonvolunteers.com" >marathonvolunteers.com</option> <option value="craigslist.org" >craigslist.org</option> <option value="idealist.org" >idealist.org</option> <option value="volunteermatch.org" >volunteermatch.org</option> <option value="createthegood.org" >createthegood.org</option> <option value="allforgood.org" >allforgood.org</option> <option value="eventbrite.com" >eventbrite.com</option> <option value="marathonvolunteers.com" >marathonvolunteers.com</option> <option value="Facebook Ad" >Facebook Ad</option> <option value="Race Facebook" >Race Facebook</option> <option value="Marathon Volunteers Facebook" >Marathon Volunteers Facebook</option> <option value="My friend told me" >My friend told me</option> <option value="google.com" >google.com</option> <option value="bing.com" >bing.com</option> <option value="yahoo.com" >yahoo.com</option> <option value="Other" >Other</option> </select> </span> </div> </div> Both sets of code feel messy to me. Which approach is better? Is there anything else I could do to make my CSS code more readable? 
Answer: Specification In the HTML specifications you can find the following: The HTML table model allows authors to arrange data -- text, preformatted text, images, links, forms, form fields, other tables, etc. -- into rows and columns of cells. From w3.org "11.1 Introduction to tables" So, technically it's fine to have form elements within a table. Layout Your layout definitely looks tabular. As mentioned in the comments, you even have a heading for each columns. So from that point it's fine, too. Markup Two things that can be improved in your markup: Form element You don't have a <form>-element currently. You can wrap the whole table into a form: <form> <table></table> </form> Or you can make use of the HTML5 option, to link elements to a form, even if they aren't children of the form: <form id="my-form"></form> <table> <tbody><tr><td> <input form="my-form"> </td></tr></tbody> </table> See w3.org: 4.10.18.3 Association of controls and forms Link between label and form-element Currently there's no connection between your labels and the actual input. Use a <label> element to create a link between those two. The advantage is also, that if somebody clicks the label the input will be focussed: <td> <label for="email">E-Mail Address</label> </td> <td> <input type="text" id="email" name="email" value="" required maxlength="254"> </td> Screenreader Using form elements in a table is fine for screenreader as well. However, you should keep those things in mind: Forms should be clear and intuitive. They should be organized in a logical manner. Instructions, cues, required form fields, field formatting requirements, etc. should be clearly identified to users. Provide clear instructions about what information is desired. If any form elements are required, be sure to indicate so. Make sure that the order in which form elements are accessed is logical and easy. This can sometimes be problematic if tables are used to control layout of form items. 
From webaim.org "Creating Accessible Forms: Ensure Forms are Logical and Easy to Use" As your form is very simple, this shouldn't be a problem. If your form grows or becomes more complex, e.g. you use labels to address multiple inputs, here's a good read, where tables are used for the layout: Advanced Form Labeling: Handling Multiple Labels To sum it up, your approach is fine, and it's even better than the other one, where "Info Needed" and "Enter It Here" aren't associated with the columns in any way.
{ "domain": "codereview.stackexchange", "id": 24514, "tags": "html, css, form, layout" }
Real life physics problem: Why is my desktop computer affected by my plasma ball?
Question: Note: this is strictly a physics question, not meant to be an advertisement. I was running my desktop with a plasma ball beside it. The desktop has a touch screen enabled. I started to notice that the screen began to behave erratically, shifting up and down, left and right. At first I thought it was some sort of strange glitch, or perhaps a virus, but the issue persisted and could not be fixed until I decided to turn off the plasma ball, at which point the screen returned to normal. Can someone elaborate on how the plasma ball may have interacted with the touch-enabled screen to produce the erratic behavior I saw? Answer: Short answer: the plasma ball emits RF "noise". The touch screen is sensitive to noise, as it tries to detect your finger via the very tiny currents that appear between transparent electrodes in the presence of your finger (which is dielectrically different from air). Hoping someone else will elaborate for you.
{ "domain": "physics.stackexchange", "id": 22161, "tags": "electromagnetism, electrostatics, everyday-life, plasma-physics" }
If we inject charge carriers into insulators, will they become conductors?
Question: Since there are also a valence band and an empty band in insulators, if we introduce electrons into the empty band or holes into the valence band, will it conduct electric current like semiconductors do? For example, we can expose insulators to ionizing radiation to induce electron-hole production. Answer: A common insulator behaves, in principle, like a semiconductor, only with a larger bandgap. If, e.g., you supply enough photons with high enough energy to lift electrons from the valence band into the conduction band, you can get the insulator pretty much conductive.
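The "only a larger bandgap" point can be made quantitative: the equilibrium density of thermally excited carriers scales roughly as $e^{-E_g/2kT}$, so the semiconductor/insulator distinction is one of degree. A sketch with illustrative room-temperature numbers (the band gaps in eV are textbook approximations):

```python
import math

KT_300 = 0.02585  # k_B * 300 K in eV

def relative_carriers(gap_ev, kt=KT_300):
    """Relative thermal carrier population ~ exp(-E_gap / (2 k T))."""
    return math.exp(-gap_ev / (2.0 * kt))

# From "good" semiconductor to classic insulator -- only this
# exponential factor changes, by dozens of orders of magnitude:
for name, gap in [("Ge", 0.67), ("Si", 1.12), ("GaN", 3.4), ("diamond", 5.5)]:
    print(name, relative_carriers(gap))
```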
{ "domain": "physics.stackexchange", "id": 34175, "tags": "electricity, solid-state-physics, semiconductor-physics" }
Torque vs Moment
Question: I was wondering, why in Newtonian physics torque is called "torque" while in static mechanics they call it "moment"? I prefer by far the term "torque", for not only it sounds strong, but also instead of moment, the correct synonym of torque is moment of force. Answer: Torque is the informal, practical man's way of calling this thing; the moment of force is the more quantitative, scientific term which is better at expressing the formula $$ \vec \tau = \vec r \times \vec F $$ The position $\vec r$ and the cross product, for this specific case, are responsible for the words "moment of" while $\vec F$ is the force. A similar addition of "moment" to other terms is the way to express other quantities, although the terminology isn't quite systematic. For example, the angular momentum $\vec L = \vec r \times \vec p$ is derived from the momentum $\vec p$ (well, the word "moment" was already inside "momentum" so people added "angular" instead because "moment of momentum" sounds awkward). The moment of inertia is a quantity like $I = mr^2$ which differs from the (inertial) mass $m$ by powers of $r$, too.
{ "domain": "physics.stackexchange", "id": 9947, "tags": "classical-mechanics, terminology, torque, moment" }
Why do people say that 'power flows' into or out of a component in an electric circuit?
Question: Currently, I am learning about electric circuits, and how to apply the basic concepts of physics, like work and energy. But I keep running into the same phrase--on Wikipedia, my textbooks, everywhere-- Wikipedia: Electric Power electric power can flow either into or out of a component This doesn't make any sense to me. Power is supposed to be defined as the time rate of change of work, so how can it 'flow' anywhere, its a rate. It makes sense to say that energy 'flows' places, since that's a quantity, but why would you say that the rate at which energy is transferred, i.e. power, flows somewhere? Answer: You're not the first, nor the last, to find the phrase "power flow" somehow wrong. For example, from W J Beaty's article on electrical misconceptions: ELECTRIC POWER FLOWS FROM GENERATOR TO CONSUMER? Wrong. Electric power cannot be made to flow. Power is defined as "flow of energy." Saying that power "flows" is silly. It's as silly as saying that the stuff in a moving river is named "current" rather than named "water." Water is real, water can flow, flows of water are called currents, but we should never make the mistake of believing that water's motion is a type of substance. Talking of "current" which "flows" confuses everyone. The issue with energy is similar. Electrical energy is real, it is sort of like a stuff, and it can flow along. When electric energy flows, the flow is called "electric power." But electric power has no existence of its own. Electric power is the flow rate of another thing; electric power is an energy current. Energy flows, but power never does, just as water flows but "water current" never does.
{ "domain": "physics.stackexchange", "id": 30488, "tags": "electricity, electric-circuits, energy-conservation, power" }
What is the height above ground of NCEP/NCAR Reanalysis 1 variables?
Question: I am confused regarding the height above ground of this data set. http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.html I have to reprocess this data set and need to know this piece of information. Answer: The height coordinate varies and is described for the various variables on the site you link. For example: Levels: Surface or near the surface (.995 sigma level), or entire atmosphere (eatm) and Levels: 17 Pressure levels (mb): 1000,925,850,700,600,500,400,300,250,200,150,100,70,50,30,20,10 Some variables have less: omega (to 100mb) and Humidities (to 300mb). For the pressure levels coordinate the height above the ground is variable and dependent on atmospheric conditions. The sigma levels are terrain following and flatten out with height and will vary with local topography. (see this for more info) You can estimate height using the hypsometric equation, or use the geopotential field from the data to determine the height at specific grid locations (it will vary horizontally). For processing, I would recommend using the geopotential data since it is available for each of the vertical coordinate possibilities in this dataset.
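As a sketch of the hypsometric-equation suggestion in the answer, here is a minimal Python example; the constants are standard, but the chosen layer and mean temperature are invented for illustration:

```python
import math

Rd = 287.05   # specific gas constant for dry air, J/(kg K)
g0 = 9.80665  # standard gravity, m/s^2

def thickness(p_bottom_hpa, p_top_hpa, t_mean_k):
    """Hypsometric equation: geometric thickness of the layer between two
    pressure levels, using the layer-mean (virtual) temperature."""
    return (Rd * t_mean_k / g0) * math.log(p_bottom_hpa / p_top_hpa)

# e.g. the 1000-500 hPa thickness for a layer-mean temperature of 252 K
dz = thickness(1000.0, 500.0, 252.0)
print(round(dz), "m")  # about 5100 m
```

In practice, using the geopotential field from the reanalysis itself, as the answer recommends, avoids having to assume a layer-mean temperature.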
{ "domain": "earthscience.stackexchange", "id": 567, "tags": "atmosphere, atmosphere-modelling, ncep, reanalysis" }
How to check a qubit's state?
Question: Is it possible to write an algorithm that returns a single bit that represents if a qubit is in superposition or not, without compromising the qubit's wavefunction? Example: is_in_superposition($\vert 0 \rangle$) = 0 is_in_superposition($\dfrac{1}{\sqrt{2}} \vert 0 \rangle + \dfrac{1}{\sqrt{2}} \vert 1 \rangle$) = 1 Answer: Not only would this gate not be unitary, it wouldn't even be linear. Let us write this down. Let us call this gate $\mathbf{S}$. $\mathbf{S}$ has the following properties: $$\mathbf{S}\,|x\rangle=\begin{cases}|0\rangle&\text{if } |x\rangle=|0\rangle\text{ or }|1\rangle\\|1\rangle&\text{otherwise}\end{cases}$$ Then we would have: $$\mathbf{S}\,(\alpha\,|0\rangle+\beta\,|1\rangle)=|1\rangle$$ by assumption. But we would also have: $$\mathbf{S}\,(\alpha\,|0\rangle+\beta\,|1\rangle)=\alpha\,\mathbf{S}\,|0\rangle+\beta\,\mathbf{S}\,|1\rangle = \alpha\,|0\rangle+\beta\,|0\rangle$$ which is contradictory. Hence, such a gate doesn't exist. Actually, by the same argument, you cannot implement such a gate with additional qubits either: since the gate makes a distinction between superposed and non-superposed states, it cannot be linear and as such, cannot be a gate. Maybe an algorithm that would do such a task, if you want a classical result rather than a quantum state, could be implemented as such: Create the state you're interested in. Measure it. Repeat. If at least once you get different results (and if we don't consider noise that may affect the state), then you know that this quantum state is in a superposition. Of course, this is a very bad algorithm: Creating the state can be computationally hard. Noise can induce inaccuracies in the measurement. You're never sure that a state really is not in a superposed state. Still, you would have noticed that a measurement is performed. Hence, the wavefunction is affected. Since you want to get a classical result, the only way to obtain it is to perform a measurement.
Our only solution here is to add qubits to get this piece of information. Even then, it is not possible to do such a thing. We cannot get our answer as a quantum state, by the same argument stated above. We cannot even use controlled gates to see whether $\alpha$ or $\beta$ is zero, since the measurement would then affect $|x\rangle$. You can learn more about this by searching for "weak measurement". Unless I'm mistaken, it is thus impossible to perform such an operation.
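The prepare-measure-repeat idea from the answer can be mimicked with a classical simulation; the sketch below (numpy, with invented helper names) draws measurement outcomes from the Born probabilities. Note it consumes a fresh copy of the state per shot, exactly the cost the answer points out:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_many(alpha, beta, shots=1000):
    """Simulate preparing |psi> = alpha|0> + beta|1> `shots` times and
    measuring each fresh copy in the computational basis."""
    p1 = abs(beta) ** 2 / (abs(alpha) ** 2 + abs(beta) ** 2)
    return rng.random(shots) < p1   # True means the outcome was |1>

def looks_superposed(alpha, beta):
    outcomes = measure_many(alpha, beta)
    # both outcomes observed => the state cannot have been |0> or |1>
    return bool(outcomes.any() and not outcomes.all())

print(looks_superposed(1.0, 0.0))                         # False: every shot gives |0>
print(looks_superposed(1 / np.sqrt(2), 1 / np.sqrt(2)))   # True (with overwhelming probability)
```

A state with a tiny amplitude on one branch can still fool any finite number of shots, which is the "you're never sure" caveat above.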
{ "domain": "quantumcomputing.stackexchange", "id": 1532, "tags": "quantum-algorithms, quantum-state" }
Code compares Columns "A" of two workbooks and copies different information to destination workbook with entire selected row. LastRow count slows code
Question: Code explanation: I have a code which performs two tasks - it opens two workbooks, one being the extract info and one the destination, it compares column A of these workbooks, and all matching cells are made vbBlue (Disclaimer: the code is assembled from several pieces of code from the net plus my customisations; I'd add credit, but I lost the links :(). It sets a range, and in the extract file it finds all the vbBlue cells and selects their entire rows; then the selection is pasted into the destination workbook. What is the issue: Now, funny thing is this code works well for me, but only for small numbers of rows. I have files with 70000 rows and 350000 rows, and what I managed to dig up is that the row count (the LastRow function) is making it incredibly slow. Now, I could put my ranges in manually and be done with it, right... Well, I tried, and the part which does: For i = 2 to LastRow does not do what I thought it would. So I need assistance in how to make this code faster, because this is the part where debugging got me stuck. Update: Apparently arrays would make this work faster than Flash himself, but it's out of my scope to arrange them here; I keep getting errors. If I manage, I'll update here..
Sub moduleUpdate()
    Dim wsSource As Worksheet
    Dim wsDest As Worksheet
    Dim recRow As Long
    Dim lastRow As Long
    Dim fCell As Range
    Dim i As Long
    Dim rCell As Range
    Dim LastRows As String
    Dim cell As Range
    Dim rng As Range
    Dim FoundRange As Range

    LastRows = ActiveSheet.Cells(Rows.Count, "A").End(xlUp).Row + 1
    Set DstFile = Workbooks("ExtractFile.xlsx")
    Set wsSource = Workbooks("ExtractFile.xlsx").Worksheets("Sheet1")
    Set wsDest = Workbooks("Workbook.xlsx").Worksheets("Sheet1")
    Application.ScreenUpdating = False
    recRow = 1
    With wsSource
        lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row
        For i = 2 To lastRow
            'See if item is in Master sheet
            Set fCell = wsDest.Range("A:A").Find(what:=.Cells(i, "A").Value, LookAt:=xlWhole, MatchCase:=False)
            If Not fCell Is Nothing Then
                'Record is already in master sheet
                recRow = fCell.Row
            Else
                .Cells(i, "A").Interior.Color = vbBlue
                recRow = recRow + 1
            End If
        Next i
    End With
    Set rng = Range("A1:A90000")
    LastRows = ActiveSheet.Cells(Rows.Count, "A").End(xlUp).Row + 1
    For Each cell In rng.Cells
        If cell.Interior.Color = vbBlue Then
            If FoundRange Is Nothing Then
                Set FoundRange = cell
            Else
                Set FoundRange = Union(FoundRange, cell).EntireRow
            End If
        End If
    Next cell
    If Not FoundRange Is Nothing Then FoundRange.Select
    Selection.Copy
    Workbooks("Workbook.xlsx").Activate
    ActiveWorkbook.Sheets("Sheet1").Activate
    LastRows = ActiveSheet.Cells(Rows.Count, "A").End(xlUp).Row + 1
    Range("A" & LastRows).Select
    Workbooks("Workbook.xlsx").Worksheets("Sheet1").PasteSpecial
    'If Not FoundRange Is Nothing Then FoundRange.Select
    'Clean up
    Application.CutCopyMode = False
    Application.ScreenUpdating = True
    'DstFile.Save
    'DstFile.Close
End Sub

Answer: Before getting to speed improvements, there are some structural and language best practices to address. Best Practice: Always declare Option Explicit at the top of the module to ensure all variables used in your code are declared (more info here). Make it automatic: in the VBIDE, check the 'Tools -> Options...
-> (Editor tab) Require Variable Declaration' option. Best Practice: If Application flags must be set and reset, use error handling to guarantee that it happens. The current code is structured as:

Sub moduleUpdate()
    '{declarations}
    '{set workbook/worksheet variables}
    Application.ScreenUpdating = False
    '{executable code...} <= if an error/exception occurs in any of this code, the 'Clean up' code is not executed
    'Clean up
    Application.CutCopyMode = False
    Application.ScreenUpdating = True
    'DstFile.Save
    'DstFile.Close
End Sub

Instead, use the On Error Goto XXX statement to guarantee that the 'Clean up' code is executed:

Option Explicit

Sub moduleUpdate()
    '{declarations}
    '{set workbook/worksheet references}
    Application.ScreenUpdating = False
    On Error Goto Cleanup '<= guarantees that the 'Clean up' code executes
    '{executable code...} <= if an error/exception occurs in any of this code, execution jumps to the 'Cleanup:' label
Cleanup:
    Application.CutCopyMode = False
    Application.ScreenUpdating = True
    'DstFile.Save
    'DstFile.Close
End Sub

Best Practice: Avoid use of ActiveWorkbook and ActiveSheet. Instead, create dedicated variables that hold references to these documents. Using ActiveWorkbook and ActiveSheet may result in unexpected workbook and sheet references. This is an excellent resource regarding not using Select, ActiveSheet, etc. The code sets up dedicated variables here:

Dim DstFile As Workbook
Set DstFile = Workbooks("ExtractFile.xlsx")
Dim wsSource As Worksheet
Set wsSource = Workbooks("ExtractFile.xlsx").Worksheets("Sheet1")
Dim wsDest As Worksheet
Set wsDest = Workbooks("Workbook.xlsx").Worksheets("Sheet1")

As a new reader of the code, the DstFile and wsDest names are confusing, since 'Dst' and 'Dest' both look like abbreviations of the word 'destination'.
To avoid this situation the code below changes the variable names as follows:

Dim extractWorkbook As Workbook
Set extractWorkbook = Workbooks("ExtractFile.xlsx")
Dim extractWorksheet As Worksheet
Set extractWorksheet = extractWorkbook.Worksheets("Sheet1")
Dim wsDest As Worksheet
Set wsDest = Workbooks("Workbook.xlsx").Worksheets("Sheet1")

The expression:

LastRows = ActiveSheet.Cells(Rows.Count, "A").End(xlUp).Row + 1

is executed 3 times within the code, but the result LastRows is only used once. The first time this expression is executed, it is unclear what the 'ActiveSheet' is. It took some experimenting with fake data files, but it seems that for this code to work, moduleUpdate must be invoked while "ExtractFile.xlsx" is the active workbook. Rather than counting on this condition to pre-exist, simply drop the use of ActiveSheet and use the variable extractWorksheet. The second time it is invoked, it is more apparent that ActiveSheet refers to extractWorksheet. Again, it doesn't really matter since this assignment to LastRows is not used - but it is better to explicitly use the variable extractWorksheet. The third time the expression is invoked, wsDest is activated so it is now the 'ActiveSheet':

Selection.Copy                             '<= copies the selection of the ActiveSheet (extractWorksheet) to the clipboard
Workbooks("Workbook.xlsx").Activate        '<= the workbook containing wsDest becomes the 'ActiveWorkbook'
ActiveWorkbook.Sheets("Sheet1").Activate   '<= wsDest is the ActiveSheet
LastRows = ActiveSheet.Cells(Rows.Count, "A").End(xlUp).Row + 1 'last row of the ActiveSheet (wsDest)
Range("A" & LastRows).Select               '<= this is a Range within the ActiveSheet (wsDest)
'Now, rather than use 'ActiveSheet' OR wsDest, the paste target is explicitly identified !?
Workbooks("Workbook.xlsx").Worksheets("Sheet1").PasteSpecial

So, certainly the first 2 expressions can be deleted.
It may not speed up execution, but removing all the implicit references to the ActiveSheet is an improvement to the 3rd use of LastRows. Using faked data and the original code, the logic seems to copy rows from extractWorksheet to wsDest where a matching value is not found. This behavior does not quite agree with your description, but it makes sense if the goal is to copy 'new' rows from extractWorksheet to the wsDest worksheet. So, the remainder of this answer assumes this observation is correct. One last Best Practice: Within Subroutines and Functions, declare local variables close to their initial use. This makes code easier to read and interpret. Now...speed. There are two loops in the current code, and each loop iterates presumably thousands of times. So, to speed up execution, the code below employs the suggested best practices and does the following:

Declare/initialize objects and reference values from objects once (outside of the loop) as much as possible. These kinds of operations can be expensive when done thousands of times.
Iterate through the datasets using a single loop instead of two.
Evaluate for a match and copy the data, if required, within the same iteration.
Avoid copy/paste operations using the clipboard.

Option Explicit

Sub moduleUpdate()
    Dim extractWorkbook As Workbook
    Set extractWorkbook = Workbooks("ExtractFile.xlsx")
    Dim extractWorksheet As Worksheet
    Set extractWorksheet = extractWorkbook.Worksheets("Sheet1")
    Dim wsDest As Worksheet
    Set wsDest = Workbooks("Workbook.xlsx").Worksheets("Sheet1")

    On Error GoTo Cleanup
    Application.ScreenUpdating = False

    Dim recRow As Long
    recRow = 1
    With extractWorksheet
        Dim lastRow As Long
        lastRow = .Cells(.Rows.Count, "A").End(xlUp).Row

        'Create the Range object to search with Find once...declare and initialize it outside of the loop
        Dim wsDestRange As Range
        Set wsDestRange = wsDest.Range("A:A")

        'Determine where to begin appending rows of data to the wsDest worksheet
        Dim wsDestNextRowNumber As Long
        wsDestNextRowNumber = wsDest.Cells(wsDest.Rows.Count, "A").End(xlUp).Row + 1

        Dim fCell As Range 'this needs a better name, "fCell" is not descriptive
        Dim extractWorksheetCell As Range
        Dim rowNum As Long
        For rowNum = 2 To lastRow
            Set extractWorksheetCell = .Cells(rowNum, "A")
            Set fCell = wsDestRange.Find(what:=extractWorksheetCell.Value, LookAt:=xlWhole, MatchCase:=False)
            If fCell Is Nothing Then
                'Value does not exist in the wsDest worksheet
                'Is setting the color really required, or was the color attribute used as a marker?
                extractWorksheetCell.Interior.Color = vbBlue
                'Append the entire row's content to the wsDest worksheet
                extractWorksheetCell.EntireRow.Copy wsDest.Range("A" & CStr(wsDestNextRowNumber))
                wsDestNextRowNumber = wsDestNextRowNumber + 1
            End If
        Next rowNum
    End With

Cleanup:
    Application.CutCopyMode = False
    Application.ScreenUpdating = True
End Sub

If the above changes do not speed up execution sufficiently, then you may need to investigate manipulations using arrays, collections and dictionaries. Good luck!
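To illustrate why the closing suggestion about dictionaries helps: the per-row Range.Find is a linear scan of column A, while a hash-based lookup costs O(1) per test. A sketch of that idea in Python (not VBA; the names and data are invented for illustration):

```python
# Keys already present in the destination column, loaded once and lower-cased
# (mirrors the case-insensitive Find in the VBA code)
existing = {k.lower() for k in ("Alpha", "Beta", "Gamma")}

# (key, payload) pairs standing in for rows of the extract sheet
extract_rows = [("Beta", 2), ("Delta", 4), ("epsilon", 5), ("DELTA", 99)]

new_rows = []
for key, payload in extract_rows:
    k = key.lower()
    if k not in existing:       # O(1) hash lookup instead of scanning column A
        existing.add(k)         # later duplicates of a new key are skipped too
        new_rows.append((key, payload))

print(new_rows)  # [('Delta', 4), ('epsilon', 5)]
```

In VBA the equivalent container would be a Scripting.Dictionary loaded once from column A before the loop.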
{ "domain": "codereview.stackexchange", "id": 44158, "tags": "vba, excel, time-limit-exceeded, macros" }
Why is it necessary to consider infinitesimal changes in p,V,T for H,U and G given that they're state functions?
Question: State functions such as $G$ only depend on the state of the system and are not dependent on the "path" that took the system to that state (which would be the case for work, for example, which is not a state function). We know that: $$\mathrm{d}G=V\mathrm{d}p-S\mathrm{d}T$$ So… $$\mathrm{d}G=\left(\frac{\partial G}{\partial p}\right)_T\mathrm{d}p+\left(\frac{\partial G}{\partial T}\right)_p\mathrm{d}T$$ Consequently, by comparing coefficients: $$V=\left(\frac{\partial G}{\partial p}\right)_T$$ and $$-S=\left(\frac{\partial G}{\partial T}\right)_p$$ Just taking the equation involving $V$ now to save time and space: $$\int_{p_1}^{p_2}V\mathrm{d}p=\int_{p_1}^{p_2}\left(\frac{\partial G}{\partial p}\right)_T\mathrm{d}p$$ Using the perfect gas equation and integrating leaves the result: $$G(p_2)-G(p_1)=n\mathcal{R}T\ln\left(\frac{p_2}{p_1}\right)$$ But if $G$ is independent of the path taken to get to the final state, why shouldn't I use the equation $$\Delta G=V(p_2-p_1)?$$ Answer: It is important to make clear any requirement that a variable be constant for each equation you write. To go from $$\mathrm{d}G = V\,\mathrm{d}p - S\,\mathrm{d}T$$ to $$\mathrm{d}G = V\,\mathrm{d}p,$$ you would need to assume temperature is constant. And to go from $$\mathrm{d}G = V\,\mathrm{d}p$$ to $$\Delta G = V(p_2 - p_1)$$ you would need to assume volume is constant. If temperature and volume are both constant, the pressure is constant also. So your final equation is only saying: when temperature, pressure and volume are constant, there is no change in Gibbs free energy.
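The answer's point is easy to see numerically; a small Python sketch for one mole of ideal gas compressed isothermally (the pressures are illustrative values):

```python
import numpy as np

R = 8.314        # J/(mol K)
T = 298.15       # K, held constant (isothermal path)
n = 1.0          # mol
p1, p2 = 1.0e5, 2.0e5   # Pa

# Correct isothermal result: integrate V(p) dp with V = nRT/p
dG = n * R * T * np.log(p2 / p1)

# Naive result, pretending V stays frozen at its initial value
V1 = n * R * T / p1
dG_naive = V1 * (p2 - p1)

print(dG, dG_naive)  # the naive value overshoots, since V shrinks as p rises
```

The two disagree precisely because $V$ inside the integral is itself a function of $p$ along an isothermal path.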
{ "domain": "chemistry.stackexchange", "id": 4375, "tags": "physical-chemistry, thermodynamics, free-energy" }
How does sampling rate impact Discrete-Time Kalman Filter state space modeling assumptions?
Question: Consider a very simple, discrete-time constant position-type model for state updating in a Kalman filter: $$ x_{k+1} = x_k + w_k $$ The Kalman filter will be run with update interval $T_s$ such that $x_k$ represents position at time $t=kT_s$. Typically, the process noise $w_k$ is modeled as a zero-mean, white stationary random process of variance $\sigma^2_w$. Now, assume that the position time series is strictly band-limited with bandwidth far lower than the Nyquist frequency $1/(2T_s)$. This corresponds to a high degree of oversampling. In principle (say, with zero measurement noise), the process noise could be recovered by applying a simple differencing high-pass filter with transfer function: $$ H(z) = 1 - z^{-1} $$ to the position time series. However, if the position time series is strictly band-limited, then the recovered process noise is strictly band-limited, which violates the white process noise model. Does this mean that the Kalman filter needs to use a correlated process noise model for highly oversampled systems? In my case, the state space model (which, in fact, is constant velocity, not constant position) describes the time evolution of a certain biological quantity. The discrete-time state space model has not been chosen from a precise description of the dynamics of the underlying physical (physiological) phenomena. (This is because such descriptions are complex and require access to additional biological quantities which are rarely knowable in practice.) Rather, the constant velocity model is a common choice in the literature, motivated primarily by expedience, simplicity, and good tracking results. In the literature, all results seem to be reported for near-Nyquist sampling. Some studies in the literature analyzed real biological data (again, sampled near Nyquist) and claimed that the process noise for this model was white. In my case, for historical reasons, I am oversampling by a good factor of 15.
When I analyzed a very clean sample signal, I found that at the oversampled rate, the process noise was highly correlated. A decimated-to-near-Nyquist version of the signal gave rise to a significantly lower amount of temporal correlation in the process noise. This is what motivated my question. Answer: The answer to your question is: maybe. When one is at the discrete-time model, it is generally assumed that you solved the state transition equations from the continuous-time differential equations where the physics have already been appropriately modeled, which is where your answer lies. Time discretization is based on sampling the continuous-time state transition matrix, thus the sampling intervals aren't related to Nyquist; they correspond to the fidelity of the modeling. Shorter intervals tend to be more accurate approximations. The time intervals don't have to be uniform, but uniform intervals provide a less complicated implementation. While shorter time intervals correspond to a heuristic where shorter means a better approximation, you raise a valid concern: sampling too closely may indeed introduce correlation in the process noise. This can be solved with an augmented state. These are all considerations. Kalman filters need to be tuned. The measurement noise has been neglected in this discussion. So, the answer to your question is that it really depends.
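The effect described in the question is easy to reproduce in a toy experiment; the sketch below (numpy, all parameters invented) low-pass filters white noise to make a smooth, oversampled "position", then compares the lag-1 autocorrelation of its first difference at the fast rate and after decimating by the oversampling factor:

```python
import numpy as np

rng = np.random.default_rng(1)

# Band-limited "position": white noise smoothed with a Hann window twice the
# oversampling factor long, so the signal bandwidth sits far below the fast
# rate's Nyquist frequency. All parameters here are invented for illustration.
M = 15                          # oversampling factor, as in the question
kernel = np.hanning(2 * M)
kernel /= kernel.sum()
x = np.convolve(rng.standard_normal(20000), kernel, mode="valid")

def lag1_autocorr(v):
    v = v - v.mean()
    return float(np.dot(v[:-1], v[1:]) / np.dot(v, v))

w_fast = np.diff(x)        # "process noise" recovered at the oversampled rate
w_slow = np.diff(x[::M])   # the same, after decimating toward Nyquist

print(lag1_autocorr(w_fast))   # close to 1: strongly correlated
print(lag1_autocorr(w_slow))   # much smaller in magnitude
```

This matches the observation in the question: differencing a heavily oversampled smooth signal yields highly correlated increments, and decimation reduces the correlation.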
{ "domain": "dsp.stackexchange", "id": 5535, "tags": "sampling, autocorrelation, kalman-filters, nyquist" }
Non-deterministic Turing machine for $L_1 = \{w\#0^n|w \text{ is a suffix of some $x$ in $L$ with } |x|=n\}$
Question: Show that if L is in NP, then L1 is also in NP $$L_1 = \{w\#0^n|w \text{ is a suffix of some $x$ in $L$ with } |x|=n\}$$ I know that if L is in NP, then there exists an NTM $M_L$ that accepts $x$, with $|x|=n$. I can use $M_L$ to describe an NTM $N$ for $L_1$. I have a problem understanding which is the best non-deterministic operation in $N$ to create an $x$ string for $L$. Answer: Let us describe an NTM $N$ that decides $L_1$ in polynomial time. The machine "guesses" the first $n - |w|$ letters of a word $x'$ (it makes all possible guesses non-deterministically) and appends $w$ to $x'$; then it simulates $M$ on the resulting word. If $M$ accepts on any of the guesses, the machine $N$ halts and accepts. If all reject, $N$ also rejects. Since we have $n$ zeroes as part of the input, the guesses are done in linear time in the size of the input. The simulation also runs in polynomial time on an NTM by assumption; hence, we get a total polynomial running time and therefore $L_1$ is in NP. The correctness of $N$ is straightforward.
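For intuition only, the guess-and-check construction can be mimicked deterministically by enumerating every guess (exponential classically, but each individual branch is exactly the polynomial check the answer describes). A Python sketch with a toy choice of L:

```python
from itertools import product

SIGMA = "01"

def in_L(x):
    # toy language standing in for L (here: equal numbers of 0s and 1s);
    # any decidable L works for the illustration
    return x.count("0") == x.count("1")

def in_L1(w, n):
    """w#0^n is in L1 iff w is a suffix of some x in L with |x| = n."""
    if len(w) > n:
        return False
    # "guess" the first n - |w| letters, then run the decider for L on x' + w
    return any(in_L("".join(prefix) + w)
               for prefix in product(SIGMA, repeat=n - len(w)))

print(in_L1("01", 4))   # True: e.g. x = "0101" is in L
print(in_L1("11", 2))   # False: "11" itself is the only candidate
```

An NTM explores all these branches "at once", which is why the construction stays polynomial nondeterministically even though this classical enumeration is exponential.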
{ "domain": "cs.stackexchange", "id": 15254, "tags": "formal-languages, turing-machines, time-complexity, nondeterminism" }
Every action has an equal and opposite reaction. Is it true for torques as well?
Question: So, we studied in classical mechanics that every action has an equal and opposite reaction. So if we apply a force to some object, that object will exert an equal amount of force on us but in the opposite direction. Can the same also be said about torques? If so, can someone explain how it could be possible? Answer: Something to keep in mind: torques are not as fundamental as forces. I say this for two reasons. First, torques are defined in terms of forces. Second, the torque produced by a force depends on our subjective point of reference. With that being said, if you have confusions about torques, the best place to start is to just think about forces instead. So let's do that. Let's say I apply a tangential force of magnitude $F$ to the edge of a wheel of radius $R$ with my hand. Well then by Newton's third law the wheel applies a force to my hand of equal magnitude and opposite direction as the force I applied to the wheel. Both forces act at the same point in space: the point of contact between my hand and the wheel. Let's look at the torque of these forces about some point, say the center of the wheel. Then the torque of my force is $$\tau_{\text{me}}=FR$$ and by Newton's third law the torque of the wheel's force is $$\tau_{\text w}=-FR=-\tau_{\text {me}}$$ So we do in fact get an "equal but opposite torque". It is worth pointing out that just like for forces this does not mean both torques act on the wheel. My torque acts on the wheel. The wheel's torque acts on me. I'll also point out that just like how Newton's third law for forces gives us linear momentum conservation, this version of the third law for torques gives us angular momentum conservation.
{ "domain": "physics.stackexchange", "id": 60296, "tags": "newtonian-mechanics, rotational-dynamics, torque" }
How to find the inverse Z transform of this function in z domain?
Question: I have a function $$F(z)=\frac{z-0.4}{z^2+z+2}$$ I need to find the inverse z-transform of it. I have tried it with residues, but the roots are very ugly and it involves lots of messy calculations. Is there any other way to proceed? Answer: I think you are using the partial fraction expansion in the wrong way. You have to know that for the $z$-transform we expand it as a function of $z^{-1}$, not of $z$. For your particular problem, factor out $z^2$ from the denominator, then divide the numerator by that. Then you can factor your polynomial over the real field or the complex field. After factorization over the complex field you may obtain polynomials with complex coefficients, whereas factorization over the real field gives polynomials with real coefficients. For your case the denominator is irreducible over the real field, but we can factor it over the complex field, which leads to a pair of conjugate poles (roots). Now you can easily obtain the inverse using the inverse of a single pole (don't worry about the complex numbers; after summing the responses of the two conjugate poles, the complex parts cancel each other out and only a real part remains). Also, I have to say that for your case you will reach two answers, depending on the region of convergence (RoC) in the $z$ plane.
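The conjugate-pole approach can be checked numerically; a numpy sketch for the causal inverse (one of the two RoC choices the answer mentions), expanding $F(z)/z$ in partial fractions and cross-checking against the difference equation implied by $F(z)$:

```python
import numpy as np

# F(z) = (z - 0.4) / (z^2 + z + 2)
p1, p2 = np.roots([1.0, 1.0, 2.0])   # a complex-conjugate pair of poles

# Partial fractions of F(z)/z = A/z + B/(z - p1) + C/(z - p2)
A = (0.0 - 0.4) / ((0.0 - p1) * (0.0 - p2))
B = (p1 - 0.4) / (p1 * (p1 - p2))
C = (p2 - 0.4) / (p2 * (p2 - p1))

# Causal inverse: f[n] = A*delta[n] + B*p1^n + C*p2^n for n >= 0
n = np.arange(8)
f = B * p1**n + C * p2**n
f[0] += A
f = f.real   # imaginary parts cancel: B, C and p1, p2 are conjugate pairs

# Cross-check against the recursion implied by F(z) in powers of z^{-1}:
# f[n] + f[n-1] + 2*f[n-2] = delta[n-1] - 0.4*delta[n-2]
g = np.zeros(8)
g[1] = 1.0
for k in range(2, 8):
    g[k] = -0.4 * (k == 2) - g[k - 1] - 2.0 * g[k - 2]
print(np.allclose(f, g))   # True
```

Since the poles lie outside the unit circle ($|p_{1,2}|=\sqrt{2}$), this causal sequence grows; the anti-causal expansion corresponding to the other RoC would instead decay for $n \to -\infty$.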
{ "domain": "dsp.stackexchange", "id": 5518, "tags": "filters, z-transform" }
Naming of 2-methylpropane
Question: I can't understand why 2-methylpropane has/needs the 2 in front of it? It appears to me that if you were to add the methyl group at the first or last carbon on the chain then you get plain old butane and so defining 2-methylpropane is unnecessary when it could just be methylpropane. Have I missed something or is this just a convention? Apologies if it's a really simple question; it's my first ever time studying organic chemistry. Answer: You are correct. It is unnecessary to include the 2 in order to provide an unambiguous name. This compound is also often called isobutane which is an older name that is still in common usage.
{ "domain": "chemistry.stackexchange", "id": 5727, "tags": "organic-chemistry, nomenclature, hydrocarbons" }
Possible memory issue in linked list program
Question: I am trying to write a program which will count the occurrence of words in a paragraph. The logic I am following: I am using a linked list for the purpose, and I am searching sequentially - if a new word is encountered, the word is added to the list, but if the word already exists in the list, its count flag is increased. Code:

//case insensitive string matching
int strcicmp(char const *a, char const *b)
{
    int d;
    for(;;a++,b++)
    {
        d=tolower(*a)-tolower(*b);
        if(d!=0 || !*a)
            return d;
    }
}

//declare the linked list structure to store distinct words and their count
typedef struct node
{
    char *word;
    int count;
    struct node *next;
} node;

node *ptr, *newnode, *first=NULL, *last=NULL;

void insertnewword(char *ch)
{
    newnode=(node*)malloc(sizeof(node));
    if(newnode == NULL)
    {
        printf("\nMemory is not allocated\n");
        exit(0);
    }
    else
    {
        newnode->word=ch;
        newnode->count=1;
        newnode->next=NULL;
    }
    if(first==last && last==NULL)
    {
        first=last=newnode;
        first->next=NULL;
        last->next=NULL;
    }
    else
    {
        last->next=newnode;
        last=newnode;
        last->next=NULL;
    }
}

void processword(char *ch)
{
    int found=0;
    //if word is already in the list, increase the count
    for(ptr=first;ptr!=NULL;ptr=ptr->next)
        if(strcicmp(ptr->word, ch) == 0)
        {
            ptr->count += 1;
            found=1;
            break;
        }
    //if it's a new word, add the word to the list
    if(!found)
        insertnewword(ch);
}

int main()
{
    const char *delimiters=" ~`!@#$%^&*()_-+={[}]:;<,>.?/|\\\'\"\t\n\r";
    char *ch, *str;
    str=(char*)malloc(sizeof(char));
    ch=(char*)malloc(sizeof(char));
    //get the original string
    scanf("%[^\n]%*c", str);
    //fgets(str, 500, stdin);
    //get the tokenized string
    ch=strtok(str,delimiters);
    while(ch!=NULL)
    {
        //a, an, the should not be counted
        if(strcicmp("a", ch)!=0 && strcicmp("an", ch)!=0 && strcicmp("the", ch)!=0)
            processword(ch);
        ch=strtok(NULL,delimiters);
    }
    //print the word and its occurrence count
    for(ptr=first; ptr!=NULL; ptr=ptr->next)
        printf("%s\t\t%d\n",ptr->word,ptr->count);
    return 0;
}

This seems to be working fine for a small number of words, but if the word count
is more than 6-7, this program encounters some problem. I can always implement other logic for the same problem, but I want to know the issue with this implementation. Thanks in advance. Answer: The well hidden bug. This allocates strings of length 1:

str=(char*)malloc(sizeof(char));
ch=(char*)malloc(sizeof(char));

This is enough to hold the terminating '\0' but nothing else. You probably want to hold a longer string; try:

str = (char*)malloc(sizeof(char) * 500 );
ch = (char*)malloc(sizeof(char) * 500 );

You should also check to see if they actually worked and returned a good value. One of the major problems with C code is forgetting to check the result of a call to make sure it actually worked. Scanning. This fails if the input is an empty line (it will not remove the '\n' on empty lines):

scanf("%[^\n]%*c", str);

Also you should probably put the maximum size of the string encoded into the format specifier:

scanf("%500[^\n]", str);
scanf("%*c"); // Now read and throw away the newline character.
              // Thus you strip the newline even if the read fails.
              // PS you may want to check the read worked.

Code Review: Avoid global variables.

node *ptr, *newnode, *first=NULL, *last=NULL;

There is no need for any of these variables. The variables ptr and newnode should be local variables declared inside functions. The variables first and last describe a list. It would be best if you wrapped these up into a structure (as they represent a single entity (a list)). Your code is currently written to support a single list. If you pass this as a parameter to your functions you can support multiple lists in your application (it also makes your code much easier to read).

typedef struct List
{
    node* first;
    node* last;
} List;

int main()
{
    List wordList = { NULL, NULL};

    // STUFF

    // pass word list as a parameter.
    processword(&wordList, ch);
}

Inside:

void insertnewword(char *ch)

As mentioned above, newnode should be a local function variable (not a global) and you should pass the list you are adding the node onto as the first parameter so that you don't need to keep first and last as global variables. In this condition we have already asserted that first == last:

first->next=NULL;
last->next=NULL;

So this double assignment of NULL is redundant. Also, when you create the node you have already set its next to NULL, so there is no point in setting it to NULL again. On the other side of the else you have the same issue:

last->next=NULL;

At this point last points at the new node, which had its next value set to NULL during construction. Another trick you could look up is using a sentinel node. Basically you always put one fake node into the list. Then the code for handling insert/delete becomes much easier to write, because you don't need to check if the list is NULL (because there is always at least one node).

typedef struct List
{
    node sentinel;
    node* first;
    node* last;
} List;

void initList(List* list)
{
    list->sentinel = (node){NULL, -1, NULL};
    list->first = &list->sentinel;
    list->last = list->first;
}

void destroyList(List* list)
{
    node* loop;
    node* next;
    for(loop = list->first->next; loop != NULL; loop = next)
    {
        next = loop->next;
        free(loop);
    }
}

void insertnewword(List* list, char* ch)
{
    node* newnode = (node*)malloc(sizeof(node));
    if (newnode == NULL)
    {
        /* ERROR */
        exit(0);
    }
    newnode->word = ch;
    newnode->count = 1;
    newnode->next = NULL;

    list->last->next = newnode;
    list->last = newnode;
}

In:

void processword(char *ch)

As mentioned above, ptr should be a local function variable (not a global) and you should pass the list you are searching as the first parameter so that you don't need to keep first as a global variable. This is not wrong:

for(ptr=first;ptr!=NULL;ptr=ptr->next)
    if(strcicmp(ptr->word, ch) == 0)
    {}

But I would always put the {} braces around sub statement groups.
It becomes too easy for people to accidentally add more code to the for loop without thinking about it, and suddenly what you expect to be in the loop is not actually part of the loop. Also, it keeps your style consistent if you always have the {} rather than only sometimes having them. Same comment about {} here:

    if(!found)
        insertnewword(ch);

A lot of people use the uneven style:

    if(!found) {
        insertnewword(ch);
    }

This saves some space. Personally I still use the even style:

    if(!found)
    {
        insertnewword(ch);
    }

But to be honest I am starting to like the uneven style and may change. For now, though, the matching open and close braces above each other appeal to me more.
{ "domain": "codereview.stackexchange", "id": 9376, "tags": "c, linked-list, memory-management" }
Help: my program doesn't compile
Question: Hi, I have a problem when I try to compile my program. I have the files fismain.cpp, fis.h and fis.c. When I compiled without ROS there was no problem, but when I try to integrate it into ROS it doesn't compile: some functions that are defined in fis.c are reported as undefined. I've added this to CMakeLists.txt:

add_library(fis /home/magno/proyecto/fis.c)
rosbuild_add_executable(fismain src/fismain.cpp)
target_link_libraries(fismain fis)

The error is:

Linking CXX executable ../bin/fismain
CMakeFiles/fismain.dir/src/fismain.o: In function `inicializaSistemaDifuso(char*)':
/home/magno/proyecto/prueba5/src/fismain.cpp:40: undefined reference to `returnFismatrix'
/home/magno/proyecto/prueba5/src/fismain.cpp:43: undefined reference to `fisBuildFisNode'
CMakeFiles/fismain.dir/src/fismain.o: In function `finalizaSistemaDifuso()':
/home/magno/proyecto/prueba5/src/fismain.cpp:56: undefined reference to `fisFreeMatrix'
/home/magno/proyecto/prueba5/src/fismain.cpp:57: undefined reference to `fisFreeMatrix'
/home/magno/proyecto/prueba5/src/fismain.cpp:58: undefined reference to `fisFreeMatrix'
CMakeFiles/fismain.dir/src/fismain.o: In function `presentarInformacion':
/home/magno/proyecto/prueba5/src/fismain.cpp:64: undefined reference to `fisPrintData'
CMakeFiles/fismain.dir/src/fismain.o: In function `evaluarFisOnLine':
/home/magno/proyecto/prueba5/src/fismain.cpp:73: undefined reference to `getFisOutput'
collect2: ld returned 1 exit status
make[3]: *** [../bin/fismain] Error 1
make[3]: Leaving directory `/home/magno/proyecto/prueba5/build'
make[2]: *** [CMakeFiles/fismain.dir/all] Error 2
make[2]: Leaving directory `/home/magno/proyecto/prueba5/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/magno/proyecto/prueba5/build'
make: *** [all] Error 2

Sorry for my English.

Originally posted by Bernardo on ROS Answers with karma: 41 on 2013-03-11
Post score: 0

Answer: Try to use relative paths and the ROS package structure: put CMakeLists.txt in the root of the package and the source files in src. Then:

add_library(fis src/fis.c)

You are also mixing C++ and C, which could also cause problems.

Originally posted by davinci with karma: 2573 on 2013-03-12
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 13310, "tags": "ros" }
Is $PSPACE$ believed to be different from $PP$?
Question: From Googling, I couldn't find any discussion about whether $PP=PSPACE$ is more or less likely than $PP\subsetneq PSPACE$. Is it currently believed that $PP\neq PSPACE$? What would be the implications if the two happen to be equal? Answer: I hope someone with more knowledge can supply an additional answer. I don't have a reference or a survey*, but in my experience people expect that $\text{PP}\subsetneq \text{PSPACE}$, mostly because, by default, complexity classes that "look" unequal are usually assumed to be unequal unless there is a very good reason to think otherwise; and because this heuristic has proved useful more often than not. There is not much evidence for either equality or inequality. A weak form of evidence is that $\text{PP}^{O}\subsetneq \text{PSPACE}^{O}$ relative to a random oracle $O$. Once upon a time, the Random Oracle Hypothesis conjectured that such a random oracle separation would imply a separation in the real world, but this was overturned when $\text{IP}=\text{PSPACE}$ was proved (which, incidentally, also overturned the assumption above, that different-looking complexity classes are always different). The class $\text{PP}$ can be generalized to the counting hierarchy, $\text{CH}=\text{PP}\cup \text{PP}^{\text{PP}}\cup \text{PP}^{\text{PP}^{\text{PP}}}\cup\cdots$. If $\text{PP}=\text{PSPACE}$ then also $\text{CH}=\text{PSPACE}$. So, a weak form of evidence that $\text{PP}\subsetneq \text{PSPACE}$ would be to find an oracle relative to which $\text{CH}\subsetneq \text{PSPACE}$, but such an oracle has not yet been found. A good start would be to address the second level of the counting hierarchy and find an oracle relative to which $\text{PP}^{\text{PP}}\subsetneq \text{PSPACE}$, but we don't have this either. *We have a survey with $n=1$ person, namely Ryan Williams [1] gives a 50% probability to $\text{TC}^0=\text{NC}^1$, and provides some reasons. 
Since $\text{TC}^0=\text{NC}^1$ implies $\text{PP}=\text{PSPACE}$, logically he gives at least 50% probability to $\text{PP}=\text{PSPACE}$. (thanks to domotorp for finding this resource) An implication in the other direction is not known. [1] Ryan Williams. Some Estimated Likelihoods For Computational Complexity. https://people.csail.mit.edu/rrw/likelihoods.pdf
{ "domain": "cstheory.stackexchange", "id": 5600, "tags": "cc.complexity-theory, space-complexity, pspace, probabilistic-complexity" }
What are the formulas for the MAE and MSE when the output is a vector?
Question: What are the formulas for the MAE and MSE when the output is a vector? MAE: Mean Absolute Error; MSE: Mean Squared Error. Answer: You'd just sum the losses over each of the dimensions. If your model outputs a vector of size $d$, it's functionally the same as if you had a model that produces $d$ different outputs, with $d$ different labels. If you had $d$ labels, for some loss $\mathcal{L}$: $$\sum_{i\ \in\ [1 .. d]} \frac{1}{n} \sum_{j\ \in\ [1..n]} \mathcal{L}(Y_{j,i}, \hat{Y}_{j,i}) = \frac{1}{n} \sum_{j\ \in\ [1..n]} \sum_{i\ \in\ [1 .. d]} \mathcal{L}(Y_{j,i}, \hat{Y}_{j,i})$$ You can see this, for example, in image generation tasks, where both the target and the prediction are matrices. For each image prediction, the loss is averaged over each of the pixels.
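As a concrete check of the identity above (the numbers are made up), with NumPy:

```python
import numpy as np

# Made-up data: n = 3 samples, each with a d = 2 dimensional output.
Y     = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # targets
Y_hat = np.array([[1.5, 2.0], [2.0, 4.0], [5.0, 7.0]])   # predictions

# Sum the per-dimension losses (inner sum over i), then average over
# samples (outer 1/n sum over j), exactly as in the formula above.
mae = np.abs(Y - Y_hat).sum(axis=1).mean()
mse = ((Y - Y_hat) ** 2).sum(axis=1).mean()

print(mae, mse)  # per-sample abs sums are 0.5, 1, 1 -> MAE = 2.5/3; MSE = 0.75
```

Swapping the two sums (summing over samples first, then dimensions) gives the same numbers, which is the point of the equality.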
{ "domain": "ai.stackexchange", "id": 3996, "tags": "machine-learning, linear-algebra" }
Why does rotation occur?
Question: It sounds like a dumb question, as it is intuitively clear why rotation occurs. However, can you give me a scientific explanation of why, whenever I exert a force on a body, it tends to move, but when it is tied to something or I push it at the edge of an object, it rotates? What is there in a body that lets it rotate instead of simply moving? Answer: The scientific explanation is easy: if the sum of all forces is not directly in line with the center of mass of the object, it will rotate (and note that friction forces are applied at the point of contact). The more interesting question is how to understand that intuitively. Consider the case of a soda can rolled along a table. For the sake of argument, let's say you push right through the center of the can. Now look at all of the forces. There's you pushing on it from one side. There's gravity pulling it down. There's the table pushing it up (the so-called "normal force", which keeps the can from falling through the table). And there's friction. The last one is the key. There's friction between the table and the can. It acts at the point where the can touches the table, resisting the forward motion you are applying to the can. Because this force is not in line with the center of mass of the can, the can starts rotating. We can change this up a bit by pushing on the can a little lower. Now friction still opposes us as before, but the force of our hand is also no longer in line with the center of mass. In fact, if we choose just the right point, we can cancel the rotational force caused by friction holding the can back with the rotational force caused by our finger pushing it forward off-center! This concept is essential in the game of pool. When you strike the cue ball, you strike it in different places to impart rotational motion (spin) to the cue ball. If you strike it dead on center, the friction of the table causes the ball to roll forward.
If you strike it slightly below center, you can cause the ball to just slide forward without rotating very much (eventually the table will win, and the ball will start to roll). In fact, you can even strike it far enough below the center to cause the ball to start spinning backwards. This has the neat effect of causing the ball to roll backwards after hitting something! A few trick shots are built around exactly this effect.
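The "just the right point" can even be computed. A back-of-the-envelope sketch (my own setup, not from the answer's text): for a uniform solid ball struck horizontally with impulse $J$ at height $b$ above its center, the strike imparts linear speed $v = J/m$ and spin $\omega = Jb/I$; rolling without sliding requires $v = \omega R$.

```python
# Sketch of the classic cue-ball "sweet spot" calculation (assumptions mine):
# a uniform solid ball of mass m and radius R is struck horizontally with
# impulse J at height b above its center, giving
#   v = J / m        (linear speed)
#   w = J * b / I    (angular speed), with I = (2/5) m R^2 for a solid sphere.
# Rolling without sliding needs v = w * R, so b = I / (m * R).
from fractions import Fraction

m = Fraction(1)
R = Fraction(1)                  # units cancel in the ratio b/R
I = Fraction(2, 5) * m * R**2    # moment of inertia of a solid sphere
b = I / (m * R)                  # strike height above center for instant rolling
print(b)                         # 2/5 -> strike 2R/5 above center
```

Strike below that height and the table's friction has to spin the ball up; strike below the center, as the answer says, and the ball starts out with backspin.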
{ "domain": "physics.stackexchange", "id": 36491, "tags": "newtonian-mechanics, rotational-dynamics, torque, rotation" }
Logarithmic Fourier transform (LFT) on an audio signal
Question: I am trying to analyze music as precisely as possible. Of course I tried the FFT, but ran into a problem: at low frequencies the FFT has much coarser resolution than human hearing. I tried very long FFTs to solve this, but even with 8192-sample windows at a 44100 Hz sample rate (which already means poor time resolution), I did not get enough resolution at low frequencies. I found a few possible solutions. First, quadratic interpolation on the FFT bins. But it does not seem like a perfect method. Its problems are: 1. If I want to determine frequencies between the bins, which three bins should I select for the interpolation? 2. Even if I do this, there is no actual additional information in the result; interpolation is a somewhat tricky method. Second, extracting each frequency bin individually at the desired frequency, so I can space the bins logarithmically. But this has a critical computational cost problem: (maybe over) N^2. Third, the LFT (Logarithmic Fourier Transform). This requires logarithmically spaced samples and gives me exactly the result I am looking for, incredibly fast: https://stackoverflow.com/questions/1120422/is-there-an-fft-that-uses-a-logarithmic-division-of-frequency But I have no idea how that algorithm works. I tried to understand the paper and implement it, but I couldn't, because of the limits of my English and mathematical skills. So I need help implementing the LFT. Answer: The simplest and most pragmatic solution is to use a normal FFT of a sufficiently large size that you get the required resolution at the lowest frequency of interest. E.g. if you want 1 Hz resolution at the lowest frequency of interest then you will need a 1 second FFT window, i.e. the FFT size would need to equal the sample rate, e.g. 44100.
Note that even if you could implement a logarithmic FFT, it would still be bound by the laws of physics (information theory), and you would still need a similarly long sample window; all you would gain would be convenience (not having to aggregate output bins) at the expense of performance.
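To illustrate the answer's numbers (the 55 Hz test tone is made up for the demo): a 1-second window at fs = 44100 gives bins spaced fs/N = 1 Hz apart, enough to pin down a low note exactly:

```python
import numpy as np

fs = 44100                           # sample rate (Hz)
N = fs                               # 1-second window -> fs/N = 1 Hz bin spacing

t = np.arange(N) / fs
x = np.sin(2 * np.pi * 55.0 * t)     # a low A (55 Hz) test tone

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1 / fs)

peak = freqs[np.argmax(spectrum)]
print(peak)                          # 55.0 -- resolved exactly, bins are 1 Hz apart
```

With an 8192-sample window the bins would be 44100/8192 ≈ 5.4 Hz apart, which is why the question's low-frequency analysis fell short: neighboring bass notes land in the same bin.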
{ "domain": "dsp.stackexchange", "id": 653, "tags": "audio, fft" }