Keyword/phrase extraction from Text using Deep Learning libraries
Question: Perhaps this is too broad, but I am looking for references on how to use deep learning in a text summarization task. I have already implemented text summarization using standard word-frequency and sentence-ranking approaches, but I'd like to explore the possibility of using deep learning techniques for this task. I have also gone through some implementations on wildml.com that use Convolutional Neural Networks (CNNs) for sentiment analysis; I'd like to know how one could use libraries such as TensorFlow or Theano for text summarization and keyword extraction. It's been about a week since I started experimenting with neural nets, and I am really excited to see how the performance of these libraries compares to my previous approaches to this problem. I am particularly looking for interesting papers and GitHub projects related to text summarization using these frameworks. Can anyone provide me with some references? Answer: The Google Research Blog should be helpful in the context of TensorFlow. In the above article, there is a reference to the Annotated English Gigaword dataset, which is routinely used for text summarization. The 2014 paper by Sutskever et al. titled Sequence to Sequence Learning with Neural Networks could be a meaningful start on your journey, as it turns out that for shorter texts, summarization can be learned end-to-end with a deep learning technique. Lastly, here is a great GitHub repository demonstrating text summarization using TensorFlow.
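For context, the word-frequency/sentence-ranking baseline the asker mentions can be sketched in a few lines of plain Python (a minimal illustration, not the asker's actual implementation; `summarize` and its scoring scheme are one common choice):

```python
from collections import Counter
import re

def summarize(text, num_sentences=2):
    """Extractive baseline: rank sentences by mean word frequency."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Keep the selected sentences in their original order.
    return ' '.join(s for s in sentences if s in ranked)
```

A seq2seq model replaces this hand-crafted scoring with an encoder-decoder trained end-to-end on (article, headline) pairs, which is what the Gigaword-based work referenced in the answer does.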
{ "domain": "datascience.stackexchange", "id": 2758, "tags": "neural-network, text-mining, deep-learning, beginner, tensorflow" }
Why is $T^*S^3$ a conifold?
Question: So, I was reading the famous Gopakumar Vafa paper, and they mention that $T^*S^3$ is a conifold. Why is this the case? I would naively expect $T^*S^3$ to be basically the same everywhere ($S^3$ is a maximally symmetric space, so I kind of expected its cotangent bundle to be nice as well). So where are the conical singularities coming from? Answer: The conifold singularity is the quadric hypersurface singularity given in complex coordinates by $x_1^2+x_2^2+x_3^2+x_4^2=0$ It is also known as the 3-fold ordinary double point. You can smooth this singularity by perturbing the equation: $x_1^2+x_2^2+x_3^2+x_4^2=\epsilon$ (for some $\epsilon\neq 0$). This variety is smooth ("looks the same everywhere" in the sense that it's a complex manifold). It is diffeomorphic to the cotangent (or equivalently tangent) bundle of $S^3$: the zero section is just the real locus (assuming $\epsilon$ is real). You can write down a diffeomorphism explicitly, e.g. https://math.stackexchange.com/q/1784898 So $TS^3$ is a smoothing ("the Milnor fibre") of the conifold singularity. This singularity also admits a small resolution, which is the complex manifold given by the total space of the bundle $\mathcal O(-1)\oplus \mathcal O(-1)\to \mathbb{P}^1$. The process of degenerating the quadric to become singular and then resolving is often called a conifold transition.
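The diffeomorphism mentioned in the answer can be sketched explicitly (a standard computation; the normalization below is one common choice, assuming $\epsilon > 0$ real):

```latex
x_j = u_j + i\,v_j,\quad u,v \in \mathbb{R}^4
\;\Longrightarrow\;
\sum_{j=1}^{4} x_j^2 = \epsilon
\;\Longleftrightarrow\;
|u|^2 - |v|^2 = \epsilon, \qquad u \cdot v = 0 .
```

The locus $v = 0$ is the real $S^3$ of radius $\sqrt{\epsilon}$ (the zero section), and since $|u| > 0$ everywhere on the smoothed quadric, the map $(u, v) \mapsto (u/|u|,\, v)$ identifies it with $\{(\hat u, v) : |\hat u| = 1,\ \hat u \cdot v = 0\} \cong TS^3$.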
{ "domain": "physics.stackexchange", "id": 75196, "tags": "differential-geometry, string-theory, topological-field-theory, chern-simons-theory" }
What would reversed sound sound like in real life?
Question: Might be an odd question, but recently I read up on how sound works. From Wikipedia: In physics, sound is a vibration that typically propagates as an audible wave of pressure, through a transmission medium such as air, water or other materials. That got me thinking about how fake reversed sound in audio software must be. Imagine someone knocking on a wooden door. From what I understand, the sound you hear comes from the wooden door vibrating from the impact of the fist. If you record it and reverse the sound in audio software, it can't be an accurate representation of how it would sound if it were reversed in real life: it would have to (somehow) mimic the vibration of the wooden door and play that backwards, which is obviously impossible. So if I'm right, how would such a sound actually sound in real life if it were reversed? Answer: The sound wave that a computer uses is a measure of the amplitude and frequency of the high and low pressure zones of the sound wave detected at a point in space. Think of the audio recording as you sitting still listening to the knock at the door. If you had a sensitive pressure measurement device that could respond at very high frequency, you could measure how the pressure changes over time. That would give you the equation for your sound wave at one point in space over the course of time. The computer then does its magic and powers a speaker to vibrate with those amplitudes and frequencies to recreate that particular sound when you want it. If you somehow reversed time and stood at that particular point, the sound waves should theoretically go through the same amplitudes at the same frequencies, just in the opposite order. This is also what audio software does to the captured wave to reverse it. Theoretically they should sound the same. The issue is that you can't reverse sound waves. The vibrations spread in every direction, not just to the point where you measure them. 
They dissipate, so to reverse them, all the other vibrations, heat, and motion caused by the sound would have to be undone. Reversing sound this way is unphysical; but if we could do it, I see no reason why it wouldn't sound just like the audio software's reversal.
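What the audio software actually does is simple: it plays the recorded samples in the opposite order. A toy sketch (the sample values are made up for illustration):

```python
# A recorded waveform is just a sequence of pressure samples over time.
# "Reversing" it in software means flipping the sample order.
samples = [0.0, 0.9, 0.4, 0.1, 0.0]   # toy "knock": sharp attack, fast decay
reversed_samples = samples[::-1]       # playback now swells up, then cuts off
```

The perceptual oddness of reversed sound (the swelling attack) comes entirely from this reordering, not from any change to the frequencies present.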
{ "domain": "physics.stackexchange", "id": 42561, "tags": "acoustics" }
If a photon strikes a perfectly reflecting mirror, it is essentially at rest at the instant of collision. So why does the photon exist?
Question: Photons can only move at the speed of light. How do we define its existence at the instant when it collides with a perfect mirror? Answer: The photon is never at rest. It always moves at the speed of light. Before the collision, it moves at the speed of light. Then it is absorbed, with no intermediate state of lower velocity. After the absorption, the photon simply doesn't exist anymore. So it is never "essentially at rest", not even at the moment of absorption. One moment it has speed $c$; the next moment it's not there, so it never has the chance to have zero velocity. At the moment of absorption there is no change of velocity, because the next moment there is no photon left to have zero velocity. Only upon reemission does it (instantaneously) acquire a velocity in the reflected direction, and the same argument holds for the reemission.
{ "domain": "physics.stackexchange", "id": 79973, "tags": "special-relativity, optics, photons, speed-of-light, reflection" }
catkin_make fails: Invoking "make cmake_check_build_system" failed
Question: Hello, I just installed ROS and I'm following the beginner tutorial to get familiar with ROS. I've done until step 4 in "Creating a ROS msg and srv", I don't have any problem in the previous steps but when I try "catkin_make" it fails and I can't find the problem. Thank you for your help. Cheers, Carlos This is the message returned in the terminal: carlos@carlos-VirtualBox:~/catkin_ws$ catkin_make Base path: /home/carlos/catkin_ws Source space: /home/carlos/catkin_ws/src Build space: /home/carlos/catkin_ws/build Devel space: /home/carlos/catkin_ws/devel Install space: /home/carlos/catkin_ws/install #### #### Running command: "make cmake_check_build_system" in "/home/carlos/catkin_ws/build" #### -- Using CATKIN_DEVEL_PREFIX: /home/carlos/catkin_ws/devel -- Using CMAKE_PREFIX_PATH: /home/carlos/catkin_ws/devel;/opt/ros/indigo -- This workspace overlays: /home/carlos/catkin_ws/devel;/opt/ros/indigo -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Using Debian Python package layout -- Using empy: /usr/bin/empy -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/carlos/catkin_ws/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built -- Using Python nosetests: /usr/bin/nosetests-2.7 -- catkin 0.6.11 -- BUILD_SHARED_LIBS is on -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- ~~ traversing 1 packages in topological order: -- ~~ - beginner_tutorials -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- +++ processing catkin package: 'beginner_tutorials' -- ==> add_subdirectory(beginner_tutorials) -- Using these message generators: gencpp;genlisp;genpy -- beginner_tutorials: 1 messages, 1 services CMake Error at /opt/ros/indigo/share/catkin/cmake/catkin_package.cmake:104 (message): catkin_package() called with unused arguments: ... 
Call Stack (most recent call first): /opt/ros/indigo/share/catkin/cmake/catkin_package.cmake:98 (_catkin_package) beginner_tutorials/CMakeLists.txt:81 (catkin_package) -- Configuring incomplete, errors occurred! See also "/home/carlos/catkin_ws/build/CMakeFiles/CMakeOutput.log". See also "/home/carlos/catkin_ws/build/CMakeFiles/CMakeError.log". make: *** [cmake_check_build_system] Error 1 Invoking "make cmake_check_build_system" failed while the CMakeError.log is: Determining if the pthread_create exist failed with the following output: Change Dir: /home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp Run Build Command:/usr/bin/make "cmTryCompileExec972305253/fast" /usr/bin/make -f CMakeFiles/cmTryCompileExec972305253.dir/build.make CMakeFiles/cmTryCompileExec972305253.dir/build make[1]: Entering directory `/home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp' /usr/bin/cmake -E cmake_progress_report /home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp/CMakeFiles 1 Building C object CMakeFiles/cmTryCompileExec972305253.dir/CheckSymbolExists.c.o /usr/bin/cc -o CMakeFiles/cmTryCompileExec972305253.dir/CheckSymbolExists.c.o -c /home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c Linking C executable cmTryCompileExec972305253 /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTryCompileExec972305253.dir/link.txt --verbose=1 /usr/bin/cc CMakeFiles/cmTryCompileExec972305253.dir/CheckSymbolExists.c.o -o cmTryCompileExec972305253 -rdynamic CMakeFiles/cmTryCompileExec972305253.dir/CheckSymbolExists.c.o: In function `main': CheckSymbolExists.c:(.text+0xa): undefined reference to `pthread_create' collect2: error: ld returned 1 exit status make[1]: Leaving directory `/home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp' make[1]: *** [cmTryCompileExec972305253] Error 1 make: *** [cmTryCompileExec972305253/fast] Error 2 File /home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c: /* */ #include <pthread.h> int main(int argc, char** argv) { (void)argv; #ifndef 
pthread_create return ((int*)(&pthread_create))[argc]; #else (void)argc; return 0; #endif } Determining if the function pthread_create exists in the pthreads failed with the following output: Change Dir: /home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp Run Build Command:/usr/bin/make "cmTryCompileExec639770778/fast" /usr/bin/make -f CMakeFiles/cmTryCompileExec639770778.dir/build.make CMakeFiles/cmTryCompileExec639770778.dir/build make[1]: Entering directory `/home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp' /usr/bin/cmake -E cmake_progress_report /home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp/CMakeFiles 1 Building C object CMakeFiles/cmTryCompileExec639770778.dir/CheckFunctionExists.c.o /usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create -o CMakeFiles/cmTryCompileExec639770778.dir/CheckFunctionExists.c.o -c /usr/share/cmake-2.8/Modules/CheckFunctionExists.c Linking C executable cmTryCompileExec639770778 /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTryCompileExec639770778.dir/link.txt --verbose=1 /usr/bin/cc -DCHECK_FUNCTION_EXISTS=pthread_create CMakeFiles/cmTryCompileExec639770778.dir/CheckFunctionExists.c.o -o cmTryCompileExec639770778 -rdynamic -lpthreads /usr/bin/ld: cannot find -lpthreads collect2: error: ld returned 1 exit status make[1]: Leaving directory `/home/carlos/catkin_ws/build/CMakeFiles/CMakeTmp' make[1]: *** [cmTryCompileExec639770778] Error 1 make: *** [cmTryCompileExec639770778/fast] Error 2 Originally posted by CarlosRoncal on ROS Answers with karma: 21 on 2015-05-06 Post score: 1 Original comments Comment by gvdhoorn on 2015-05-06: Can you add the contents of your CMakeLists.txt as well to your question? Remove all the boilerplate comments first though. Answer: The posted console output already contains the exact error message: CMake Error at /opt/ros/indigo/share/catkin/cmake/catkin_package.cmake:104 (message): catkin_package() called with unused arguments: ... 
Call Stack (most recent call first): /opt/ros/indigo/share/catkin/cmake/catkin_package.cmake:98 (_catkin_package) beginner_tutorials/CMakeLists.txt:81 (catkin_package) So you are passing invalid arguments to the catkin_package() function. If the posted code is actually what you have in your CMake file: catkin_package( ... CATKIN_DEPENDS message_runtime ... ...) then ... is clearly not a valid argument. If ... is mentioned in the tutorial it is meant as an example and the dots have to be replaced with whatever else you need to pass. Originally posted by Dirk Thomas with karma: 16276 on 2015-05-06 This answer was ACCEPTED on the original site Post score: 6
{ "domain": "robotics.stackexchange", "id": 21622, "tags": "catkin-make" }
Thermostat code for molecular dynamics with angle dependent potential
Question: I'm building molecular dynamics code for the specific purpose of simulating Janus particles, because to my knowledge no package supports the required custom angle-dependent potential. I'm building the thermostat part and I've run into the problem of angular momentum. Since usual molecular dynamics applies the thermostat to the velocity of the center of mass of each particle, I wasn't concerned with angular velocity. However, a Janus particle does carry angular momentum (hence a torque update is performed each time step), so I think I should incorporate a thermostat for that as well. If I'm correct, is there any formalism or reference for building a Langevin thermostat (or any stochastic thermostat) code? Answer: It is not easy to answer this question, as you do not specify what you want from your thermostat. For example, do you want to reproduce a specific kinetics (overdamped, underdamped, etc.) or not? Do you need a deterministic or a stochastic thermostat? If you are only interested in the thermodynamics (and hence you don't care about the kinetics and dynamics), then the simplest (working) thermostat you can use is the Andersen thermostat. With this thermostat, the particles' momenta (velocity and angular velocity) are drawn anew from a Maxwell-Boltzmann distribution. The time interval between two refresh procedures is chosen randomly and is controlled by a parameter $\nu$, which sets the coupling between the thermostat and the system. A slightly more sophisticated thermostat has been introduced here. A slightly more complicated option is the Bussi-Donadio-Parrinello thermostat (first introduced here), which is an upgrade of the Berendsen thermostat and has lately gained a lot of attention. In contrast with the latter, it has been shown to correctly sample the canonical ensemble. It can be straightforwardly extended to the rotational degrees of freedom. If you want a slightly more realistic dynamics you can use a Langevin thermostat. 
The simplest option is to add to the equations of motion for the angular velocity a friction term and a random term, just as you do for the translational degrees of freedom. However, please note that the rotational diffusion coefficient $D_r$ of a sphere with no-slip boundary conditions is related to the translational diffusion coefficient $D_t$ via the relation $D_r = 3 D_t / \sigma^2$, where $\sigma$ is the diameter of the particle. In a Langevin thermostat the diffusion coefficient is, in turn, linked through the fluctuation-dissipation relation to the friction coefficient, $\gamma = k_B T / D$. As a result, the friction coefficients for the rotational and translational degrees of freedom differ by a factor of 3 (in units where $\sigma = 1$). There are also some variants that explicitly take into account rotational degrees of freedom and are optimised to work with rigid bodies. See for example this recent paper and references therein.
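A minimal sketch of such a coupled Langevin update, assuming units with $k_B = 1$ and a plain Euler-Maruyama discretization (the function name, parameters, and unit conventions here are illustrative choices, not from any particular package):

```python
import math
import random

def langevin_step(v, w, dt, T, gamma_t, sigma, m=1.0, I=1.0, rng=random):
    """One Euler-Maruyama Langevin update of velocity v and angular
    velocity w (forces/torques omitted for brevity).  The rotational
    friction follows D_r = 3 D_t / sigma**2, i.e. gamma_r =
    gamma_t * sigma**2 / 3 in these (assumed) reduced units."""
    gamma_r = gamma_t * sigma**2 / 3.0
    # Friction drags each component toward zero; the noise amplitude is
    # fixed by the fluctuation-dissipation relation.
    v_new = [vi - (gamma_t / m) * vi * dt
             + math.sqrt(2.0 * gamma_t * T * dt) / m * rng.gauss(0.0, 1.0)
             for vi in v]
    w_new = [wi - (gamma_r / I) * wi * dt
             + math.sqrt(2.0 * gamma_r * T * dt) / I * rng.gauss(0.0, 1.0)
             for wi in w]
    return v_new, w_new
```

At $T = 0$ the noise vanishes and both velocities simply decay, which is a quick sanity check on the signs of the friction terms.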
{ "domain": "physics.stackexchange", "id": 47186, "tags": "classical-mechanics, statistical-mechanics, angular-momentum, molecular-dynamics, soft-matter" }
Why is this argument for $P\neq NP$ wrong?
Question: I know it's silly, but I managed to confuse myself and I need help settling this. Suppose $P=NP$, then clearly for every oracle $A$ we have $P^A=NP^A$, which contradicts the fact that there exists some oracle $A$ for which $P^A\neq NP^A$, hence $P\neq NP$. What's wrong? Thanks! Answer: Sure, you just have to be careful thinking about what it means to have an oracle. The problem comes from an annoying abuse of notation we use in CS: In the statement $P=NP$, $P$ refers to a set of languages. But in the statement $P^A = NP^A$, $P$ refers to a class of Turing machines (deterministic polytime TMs). You should think of these two $P$s as having completely different types. So even if the two sets of languages $P$ and $NP$ are the same, deterministic polytime TMs still do not work in the same way as nondeterministic ones. In particular, given an oracle, a nondeterministic TM can "ask many questions at once", which is something the regular TM cannot do. So even if they decide the same set of languages when neither type of machine is given additional help, the oracle might help one type of machine more than another.
{ "domain": "cs.stackexchange", "id": 4027, "tags": "complexity-theory, p-vs-np, oracle-machines" }
Language tokens library
Question: This may seem a little small, but it's actually a self-contained crate in my project. This crate contains definitions for the source tokens in the language I'm working on, Nafi. As of current, it's definitely quite minimal, because I learned from an earlier attempt that I want to move in small chunks up the full tree to working language, in order to keep myself motivated with tangible progress. This tokens crate may grow slightly before my "proper" community-challenge submission, but the structure of this library is here so I thought it fit to submit for a review. I actually have a working lexer for most (but not yet all) of these tokens, but that's not quite ready yet. You can view the current auto-generated documentation or see the full project on GitHub. Cargo.toml [package] name = "nafi-tokens" version = "0.0.1" publish = false [dependencies.num] version = "0.1" default-features = false features = [ "bigint" ] lib.rs //! Tokens of Nafi source #![forbid(bad_style, missing_debug_implementations, unconditional_recursion, future_incompatible)] #![deny(missing_docs, unsafe_code, unused)] #![feature(conservative_impl_trait)] extern crate num; mod symbol; mod literal; pub use literal::{BigUint, Literal, StringFragments}; pub use symbol::Symbol; /// A token in the source code. Simply chunking the source into units to then parse. #[derive(Clone, Debug, Eq, PartialEq)] #[allow(missing_docs)] pub enum Token { #[doc(hidden)] _Unknown(usize), Whitespace(usize), Symbol(usize, Symbol), Literal(usize, Literal), Keyword(usize, Keyword), Identifier(usize, String), } impl Token { /// The start location of this token. pub fn position(&self) -> usize { match *self { Token::_Unknown(pos) | Token::Whitespace(pos) | Token::Symbol(pos, _) | Token::Literal(pos, _) | Token::Keyword(pos, _) | Token::Identifier(pos, _) => pos, } } } /// A reserved identifier-like in the source code. 
#[derive(Copy, Clone, Debug, Eq, PartialEq)] #[allow(missing_docs)] pub enum Keyword { Let, Mutable, If, Else, } literal.rs pub use num::bigint::BigUint; use std::borrow::Cow; /// A literal in the source code, e.g. a string or number. #[derive(Clone, Debug, Eq, PartialEq)] #[allow(missing_docs)] pub enum Literal { Integer(BigUint), String(StringFragments), } impl From<BigUint> for Literal { fn from(uint: BigUint) -> Self { Literal::Integer(uint) } } impl From<String> for Literal { fn from(s: String) -> Self { Literal::String(s.into()) } } impl<'a> From<&'a str> for Literal { fn from(s: &'a str) -> Self { Literal::String(s.into()) } } impl From<StringFragments> for Literal { fn from(fragments: StringFragments) -> Self { Literal::String(fragments) } } #[derive(Clone, Debug, Eq, PartialEq)] enum StringFragment { Str(String), InvalidEscape(String), } impl<S: Into<String>> From<S> for StringFragment { fn from(s: S) -> Self { StringFragment::Str(s.into()) } } /// A String that also remembers invalid escapes inside it. #[derive(Clone, Debug, Default, Eq, PartialEq)] pub struct StringFragments { fragments: Vec<StringFragment>, } impl<S: Into<String>> From<S> for StringFragments { fn from(s: S) -> Self { StringFragments { fragments: vec![s.into().into()] } } } impl StringFragments { /// Create a new, empty string. pub fn new() -> StringFragments { Default::default() } /// Push a character onto the end of this string. pub fn push(&mut self, char: char) { let len = self.fragments.len(); if len == 0 { self.fragments.push(StringFragment::Str(char.to_string())); } else { if let StringFragment::Str(_) = self.fragments[len - 1] { if let StringFragment::Str(ref mut string) = self.fragments[len - 1] { string.push(char); } } else { self.fragments.push(StringFragment::Str(char.to_string())); } } } /// Push a string onto the end of this string. 
pub fn push_str<'a, S: Into<Cow<'a, str>>>(&mut self, str: S) { let len = self.fragments.len(); if len == 0 { self.fragments .push(StringFragment::Str(str.into().into_owned())); } else { if let StringFragment::Str(_) = self.fragments[len - 1] { if let StringFragment::Str(ref mut string) = self.fragments[len - 1] { string.push_str(str.into().as_ref()); } } else { self.fragments .push(StringFragment::Str(str.into().into_owned())) } } } /// Push an invalid escape onto the end of this string. pub fn push_invalid_escape<S: Into<String>>(&mut self, escape: S) { self.fragments .push(StringFragment::InvalidEscape(escape.into())) } /// Try to turn this string into a normal string. /// /// Fails if any invalid escapes are present. pub fn try_into_string(self) -> Result<String, InvalidEscapes> { if self.fragments.len() == 1 { if let StringFragment::Str(_) = self.fragments[0] { if let Some(StringFragment::Str(string)) = self.fragments.into_iter().next() { return Ok(string); } else { unreachable!() } } } return Err(InvalidEscapes( self.fragments .into_iter() .filter_map(|fragment| match fragment { StringFragment::InvalidEscape(escape) => Some(escape), StringFragment::Str(_) => None, }) .collect(), )); } } /// The invalid escapes in a string literal. #[derive(Clone, Debug, Eq, PartialEq)] pub struct InvalidEscapes(Vec<String>); impl InvalidEscapes { /// Create an iterator over the invalid escapes. /// /// You get what was attached after the `\`. /// E.g. `\w` gives `w` and `\u{INVALID}` gives `u{INVALID}` pub fn iter<'a>(&'a self) -> impl Iterator<Item = &'a str> { self.0.iter().map(String::as_str) } } symbol.rs /// A symbol in the source code, e.g. 
`+-={}[]<>` (or others) #[derive(Copy, Clone, Debug, Eq, PartialEq)] #[allow(missing_docs)] pub enum Symbol { ExclamationMark, // QuotationMark, // will never happen -- superseded by string literal NumberSign, DollarSign, PercentSign, Ampersand, // Apostrophe, // will never happen -- superseded by quote literal LeftParenthesis, RightParenthesis, Asterisk, PlusSign, Comma, HyphenMinus, FullStop, Solidus, Colon, Semicolon, LessThanSign, EqualsSign, GreaterThanSign, QuestionMark, CommercialAt, LeftSquareBracket, ReverseSolidus, RightSquareBracket, CircumflexAccent, LowLine, GraveAccent, LeftCurlyBracket, VerticalLine, RightCurlyBracket, Tilde, Other(char), } impl Symbol { /// The character in the source pub fn as_char(&self) -> char { use Symbol::*; match *self { ExclamationMark => '!', NumberSign => '#', DollarSign => '$', PercentSign => '%', Ampersand => '&', LeftParenthesis => '(', RightParenthesis => ')', Asterisk => '*', PlusSign => '+', Comma => ',', HyphenMinus => '-', FullStop => '.', Solidus => '/', Colon => ':', Semicolon => ';', LessThanSign => '<', EqualsSign => '=', GreaterThanSign => '>', QuestionMark => '?', CommercialAt => '@', LeftSquareBracket => '[', ReverseSolidus => '\\', RightSquareBracket => ']', CircumflexAccent => '^', LowLine => '_', GraveAccent => '`', LeftCurlyBracket => '{', VerticalLine => '|', RightCurlyBracket => '}', Tilde => '~', Other(char) => char, } } } As the rest of the code is rather simple, I'm most interested in literal.rs and the code supporting the string literal -- StringFragment(s). NOTE: "Code is fine, move on" is a viable answer. But there's always something else you can say as well. Answer: I only see minor things; most of the code appears to be straight-forward shuffling around of data. I would not use char (or any other type) as a variable name. The risk for confusion is too high in my opinion. Your enum variant is called Str, but it holds a String. 
Since enough people are confused by &str vs String, it's worth being consistent. I disagree with the current rustfmt formatting for one-expression blocks, so I'd advocate for placing a separate block inside the match arms:

.filter_map(|fragment| {
    match fragment {
        StringFragment::InvalidEscape(escape) => Some(escape),
        StringFragment::Str(_) => None,
    }
})
{ "domain": "codereview.stackexchange", "id": 27473, "tags": "rust, community-challenge, language-design" }
Can you give an intuitive idea of how the Minimum Weight Perfect Matching (MWPM) decoder works?
Question: The Minimum Weight Perfect Matching (MWPM) decoder seems to be the most popular choice for decoding error syndromes in Surface Code quantum error correction. Can anyone give an intuitive idea of how it works, with an example? Answer: I recommend reading the sparse blossom paper. Strings In the surface code, errors produce pairs of detection events. Chains of errors can cancel out the detection events along the chain, leaving only the detection events at the end. In other words, errors form strings and you can see the endpoints of the strings. The goal of a matching decoder is to infer the shortest possible strings, given the endpoints (the excitations; the detection events). Certificates Matching works by producing a certificate-of-optimality. The certificate works by placing a circle of some radius around each excitation, where if two excitations are matched their circles must touch: In the above picture, how do we know that there's no way to match up the points at the centers of the circles that uses less total line length? Well, if you focus on one detection event, any solution must have a part-of-line that crosses out of its circle. So all solutions must have a cost at least as high as the sum of the radii of these circles. But this solution has exactly that cost. Therefore it is optimal. By not overlapping, and touching if linked, the circles certify that this specific set of lines is optimal. A complication that can show up is sometimes you can't find a way to draw these circles without nesting some of them into a combined shape with a constant-crossing-distance perimeter called a blossom: (Exercise: the above picture doesn't technically require the blossom. You can find a set of normal circles that certifies the solution. Find one.) The same idea as before applies. ANY solution to the problem MUST have cost of at least leave(A)+leave(B)+leave(C)+leave(D)+leave(E)+cross(1)+cross(2). Otherwise it can't possibly connect up everything. 
But this specific solution has exactly that cost, therefore it's optimal. Growth So the goal is to produce this certificate of optimality that looks like nested circles. How do we do that? All you have to do is to start in a state where every excitation has a radius-0 circle around it, and start growing the circles. Never stop growing a circle until it finds a partner. When a pair of circles touches each other, they form a matched pair of partners, and their circles can stop growing. When a circle hits a matched pair, it hasn't found a partner (it's the odd one out). So it must keep growing. To avoid the circles overlapping, and to avoid losing the existing pairing, one of the circles in the pair must start shrinking and the other must start growing. This process can repeat, forming a whole tree of shrinking and growing circles. When a tree hits a different tree, the collision point forms a match which then allows the trees to internally fall apart into matches. When a tree hits itself, a blossom forms to remove the cycle. The circles within the blossom stop growing, and instead a radius starts growing around the blossom as a whole. The blossom is treated as if it is one object, capable of partnering to a circle to stop it growing. In some situations a blossom will start shrinking. If it shrinks too much, it can actually shatter back into the original circles. Call a circle "energetic" if it has never paused its growth due to finding a partner. You can show that, for these rules, there will always be one circle that's energetic until the end. Or, in other words, if no circles are energetic then all detection events are paired. 
Assuming all circles grow and shrink at a shared consistent rate, the algorithm cannot run longer than the maximum distance from one detection event to another or to the boundary, because by that point all the circles will have merged into one big thing, and that thing would contain an even number of excitations (or touch the boundary), which causes circles to stop being energetic. Incidentally, although I've been saying "circle" this whole time, the idea generalizes to any distance metric. I should technically have been saying "shape of constant radius". This generality is what allows the algorithm to work on graphs, where the shape of constant radius is formed by all points that can be reached by traveling along the edges of the graph according to their weights. Here's an animation of the process. It doesn't use the same color scheme as the other pictures, and it doesn't show the edges within the blossoms, but it should give you an idea of how the algorithm progresses:
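The decoder's end goal, stripped of the clever blossom machinery, is just: pair up the detection events so the total string length is minimal. For a handful of events that can be brute-forced directly (a toy illustration only; real decoders use the blossom algorithm described above, and the Manhattan metric here is one assumed lattice distance):

```python
def min_weight_perfect_matching(points):
    """Brute-force MWPM over an even number of detection events,
    with Manhattan distance as the 'string length' on the lattice."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def pairings(idx):
        # Fix the first unmatched event, try every possible partner.
        if not idx:
            yield []
            return
        first, rest = idx[0], idx[1:]
        for i in range(len(rest)):
            for tail in pairings(rest[:i] + rest[i + 1:]):
                yield [(first, rest[i])] + tail

    best_cost, best_match = float('inf'), None
    for m in pairings(list(range(len(points)))):
        cost = sum(dist(points[a], points[b]) for a, b in m)
        if cost < best_cost:
            best_cost, best_match = cost, m
    return best_cost, best_match
```

The blossom algorithm reaches the same optimum without enumerating all pairings, which is what makes it usable at surface-code scales.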
{ "domain": "quantumcomputing.stackexchange", "id": 4837, "tags": "error-correction, surface-code, fault-tolerance, minimum-weight-perfect-matching" }
Simplified Derivation of CHSH/Bell inequalities
Question: Preamble Back in the day when I was still studying, I attended a very interesting lecture by a young professor that focused on the intersection between physics and computer science, with some modules about computability, computational complexity, and quantum computing. In one lesson, we discussed the CHSH/Bell inequalities and saw a derivation on the blackboard that was so strikingly simple that it really fascinated me. Now, years later, when discussing these matters with a friend over a coffee and reminiscing about the good old days, I decided to go back to my old lecture notes and revisit the derivation to increase my understanding a bit. Unfortunately, I haven't been able to trace the line of thought from my old lecture notes and decided to come here for some help. The derivation went somewhat like this: Setup A device emits two objects, and two agents, Alice (A) and Bob (B), each have a detector that can be set to two modes, measuring two different properties of the objects, X and Y, each of which can only yield the values +1 or -1. Alice and Bob choose freely and independently which quantity to measure in each iteration and note down their results. The result is a table like this:

# | X_A Y_A | X_B Y_B
---------------------
1 | +1      |      -1
2 |      -1 | +1
.....................

Now, making the first of the standard three assumptions, Realism (unmeasured quantities exist regardless), we can define a quantity $C$ as $C=X_AX_B + X_AY_B + X_BY_A - Y_AY_B$ which could be written down for each line in the table if we were to know all the inputs (which exist due to realism and are just not known to us). Now, $C \leq 2$ can be proven by simply checking all possibilities. Problem Up to this point, it's pretty uncontroversial, I think. 
But now comes the twist: In my old lecture notes, it says something along the lines of: Using Locality and Causality, we find $\left\langle C\right\rangle=\left\langle X_AX_B + X_AY_B + Y_AX_B - Y_AY_B\right\rangle = \left\langle X_AX_B\right\rangle + \left\langle X_AY_B\right\rangle + \left\langle Y_AX_B\right\rangle - \left\langle Y_AY_B\right\rangle$ where the $\left\langle \dots \right\rangle$ notation marks the expectation value. The lecture notes then go on to state that a measurement of $\left\langle C\right\rangle>2$ obtained by summing up the four individual expectation values, which can easily be measured, can then be used to create a contradiction. Obviously, the second equality relation in the quoted paragraph contains all of the actual "magic". Now, this to me looks like a strong oversimplification. For one, the separation of the components into individual expectation values follows simply from the linearity of the expectation value, so Locality and Causality don't have anything to do with that. By thinking about it, I realized that the crucial point of the whole operation is actually hidden by the notation. Because the individual expectation values are only calculated on those parts of the table where they can be found, one would actually need to write it like this: $\left\langle C\right\rangle=\left\langle X_AX_B + X_AY_B + X_BY_A - Y_AY_B\right\rangle_{\rm all} = \left\langle X_AX_B\right\rangle_{XX} + \left\langle X_AY_B\right\rangle_{XY} + \left\langle Y_AX_B\right\rangle_{YX} - \left\langle Y_AY_B\right\rangle_{YY}$ So the crucial point is actually the restriction of the space over which the expectation value is computed, e.g. the assumption that $\left\langle T\right\rangle_{\rm all} = \left\langle T\right\rangle_{\rm subset}$ I have crawled the web for a significant amount of time, but I haven't been able to find a derivation that follows a pattern similar to the one from my old lecture notes. 
Most derivations prefer to use correlations or conditional probabilities instead of expectation values of products, and I've only seen one example that had a similar argumentation pattern, but did not explicitly state the equality of expectation values as it did in my lecture notes. From the fact that I haven't been able to find an example for this very simple type of derivation, I would usually deduce that there's probably a flaw in this line of reasoning, which makes people resort to more complicated derivations. However, I have some problems simply accepting the fact that the proof presented in the lecture was invalid, and would like to understand better if (and why) this is the case. Question Is it possible to derive the CHSH/Bell inequalities in the way I outlined? Does the equality relation $\left\langle C\right\rangle=\left\langle X_AX_B + X_AY_B + X_BY_A - Y_AY_B\right\rangle_{\rm all} = \left\langle X_AX_B\right\rangle_{XX} + \left\langle X_AY_B\right\rangle_{XY} + \left\langle Y_AX_B\right\rangle_{YX} - \left\langle Y_AY_B\right\rangle_{YY}$ actually hold - or what additional assumptions do you need to make in order to use this relation for a simplified derivation? Answer: Your derivation is pretty standard, but, under the influence of computer science, there has been a tendency over the last decade to present the CHSH inequality as a game with binary output being 0 or 1 instead of ±1, hence a shift in presentation towards conditional probabilities instead of expectation values. 
As you have guessed, the equality $$\begin{multline*}\left\langle C\right\rangle\stackrel{\text{def}}{=}\left\langle X_AX_B + X_AY_B + X_BY_A - Y_AY_B\right\rangle_{\rm all} \\\stackrel{?}{=} \left\langle X_AX_B\right\rangle_{XX} + \left\langle X_AY_B\right\rangle_{XY} + \left\langle Y_AX_B\right\rangle_{YX} - \left\langle Y_AY_B\right\rangle_{YY}\end{multline*}$$ holds because of the linearity of the expectation value, under the condition that $C$ is well defined and the average is taken over the same subset. Both conditions are linked with the local hidden variables (LHV) hypothesis: $C$ is well defined if $X_A$, $X_B$, $Y_A$ and $Y_B$ are well defined even when they are not measured. This is ensured by the LHV hypothesis and is wrong under quantum mechanics. The difficulty in experimental realization was usually to ensure that the second condition is met, in a “loophole-free” way. The idea is to choose the setting on each side randomly in such a way that the other side cannot learn the measurement setting. This ensures that the average is taken properly. This is done by a random choice made simultaneously (i.e. late enough that the information cannot propagate at the speed of light to the other location in time to influence the measurement result).
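As a quick illustration of the "simply checking all possibilities" step (my own sketch, not part of the original answer), the LHV bound can be brute-forced by enumerating every deterministic ±1 assignment:

```python
from itertools import product

def chsh_lhv_bound():
    """Max of C = XA*XB + XA*YB + YA*XB - YA*YB over all deterministic
    +/-1 assignments, i.e. the Realism/LHV case where all four values
    exist in every run."""
    return max(
        xa * xb + xa * yb + ya * xb - ya * yb
        for xa, ya, xb, yb in product([+1, -1], repeat=4)
    )

print(chsh_lhv_bound())  # 2
```

Quantum mechanics instead allows $\left\langle C\right\rangle$ up to $2\sqrt{2}$ (the Tsirelson bound), which is what makes an experimental violation possible.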
{ "domain": "physics.stackexchange", "id": 28589, "tags": "bells-inequality" }
Can we identify a given metric as a black hole solution?
Question: Given a metric $g_{\mu \nu}(x)$, can we identify whether it corresponds to a black hole? To be more precise, can we perform some calculations or define certain parameters of the metric which can help us identify it as a black hole solution? I understand that there are two essential ingredients to identify if a solution to the Einstein field equations is a black hole solution -- (1) presence of a singularity, and (2) presence of a global event horizon. Please correct me if I am wrong -- Penrose-Hawking singularity theorem does not help us differentiate one type of singularity from another, what I mean by this is that from geodesic incompleteness argument I cannot differentiate a naked singularity from a black hole singularity. So to correctly identify whether a given solution corresponds to a black hole I need to be able to describe the presence of a global event horizon. There might be some argument via gravitational path integrals made by Hawking and Gibbons in their 1977 PRD paper titled "Action integrals and partition functions in quantum gravity" (https://journals.aps.org/prd/abstract/10.1103/PhysRevD.15.2752), but I am not yet very comfortable with gravitational path integrals. I would like to know if there is any result which says that if the metric has "certain properties" we can identify it as a black hole solution? The discussion in Identifying Black Hole Horizon from metric tensor makes some remarks relevant to my question, but I do not restrict myself to a spherically symmetric geometry. Answer: A singularity is not required for something to be a black hole (in the literature, people sometimes talk about regular black holes). As you can check on more mathematical books on GR (such as Hawking & Ellis or Wald), the definition of a black hole is Let $M$ be a strongly asymptotically predictable spacetime. If the set $B = M \setminus J^-(\mathscr{I}^+)$ is non-empty, it is said to be a black hole. 
"Strongly asymptotically predictable" means the spacetime is sufficiently well-behaved at infinity so that $\mathscr{I}^+$ (the future null infinity, which you can roughly think of as all of the observers that go infinitely far away in infinite time, or where the light rays go to in infinite time) has some nice properties. $J^-(S)$ is the causal past of the set $S$, i.e., it is a generalization of the past light cone (including the cone's surface) for a set in a general curved spacetime. A less mathematical way of stating the previous definition is Let $M$ be a "well-behaved" spacetime. If there is a region $B$ in the spacetime which does not lie in the causal past of any observers that go infinitely far in infinite time, then $B$ is said to be a black hole. The condition of the observers going to infinity in infinite time is to ensure they don't end up trapped in a black hole, for example. Notice that you have to ask all observers if each event in the spacetime is in their causal past. If some point is in the causal past of a single observer, then it is not in the black hole. Notice then that this means that the definition of a black hole is global. You can't use only local properties to characterize whether a black hole is present. In particular, you can't pinpoint locally where the event horizon of a black hole is. With all this in mind, notice that the answer to your question is no. The metric alone is not enough to characterize the presence of a black hole, since it is a local construction. You need at least some other source of information. For example, you might need to clarify what the topology of the spacetime you are considering is. Notice that the Schwarzschild metric, for example, might or might not lead to a black hole: depending on how you set up the spacetime, the metric might correspond to a black hole or to the exterior region of a star. It ends up depending on the ranges of the parameters that go in the metric (i.e., in the topology, a global property). 
However, there is an interesting addition. If you happen to know you are dealing with a solution to the Einstein equations that has some nice initial properties (for example, it starts as a star and collapses to a singularity), then it is conjectured that no singularity can form without the formation of an event horizon. I.e., there is some belief that many problems of physical interest actually can't present a naked singularity. These are known as the Cosmic Censorship Conjectures. If you assume a conjecture of this form (and its hypothesis), then Penrose's theorem implies the presence of a black hole. Notice the Nobel committee did assume this when saying that "black hole formation is a robust prediction of the general theory of relativity". (Notice global information is already being assumed in the hypothesis to the conjectures and in whatever singularity theorem you use, and therefore this comment doesn't disagree with what I stated previously.) Comments Q: Do thermodynamic properties imply that a spacetime has a black hole? A: No, thermodynamics does not imply a black hole: de Sitter spacetime also has thermodynamic properties, as I discussed in this answer, with some further comments in this other answer. I can't think of any thermodynamic properties that are specific to black holes, especially since their behavior is pretty much dictated by the usual laws of thermodynamics.
{ "domain": "physics.stackexchange", "id": 92064, "tags": "general-relativity, black-holes, metric-tensor, event-horizon, causality" }
Polarization of E-M waves, basic concept question
Question: So I searched around a bit on SE for an answer to this question, didn't find exactly what I was looking for. When it comes to polarization of E-M waves (and subsequently, light), does it suffice to say the following: E-M waves have a particular direction of propagation, and we know that the oscillating electric and magnetic fields must both be perpendicular to the direction of travel I visualize this by holding out my right hand with my fingers straight and my thumb perpendicular The direction I "push" with my palm is the direction of wave travel and my fingers represent the electric field and my thumb the magnetic field Under this premise, could I simply use my left hand to represent the other way this works, i.e. polarization? My logic is that the electric field and magnetic field are still both perpendicular to the wave travel direction but now are "flipped". Is this an overly simplified approach? (Context: Deriving Planck's formula and there's a factor of two in determining the number of modes for a 3D E-M wave in a cavity with perfectly reflective walls, held in thermal equilibrium - the factor of 2 supposedly comes from polarization) Answer: In a linearly polarized plane electromagnetic wave with a wave vector $\vec k$ and the electric field vector $\vec E$ giving the polarization direction, the magnetic field is given by the cross product $$\vec B=\frac{\vec k/|\vec k|×\vec E}{Z_0}$$ where $Z_0=\sqrt{\mu_0/\epsilon_0}$ is the free space wave impedance. The convention of the orientation of the cross product vector $\vec c=\vec a×\vec b$ is the right hand rule: When the forefinger of the right hand points in the direction of $\vec a$ and the middle finger in the direction of $\vec b$, then the vector $\vec c$ is coming out of the thumb. Using a left hand orientation would not give a different linear polarization. 
Circularly polarized waves, however, have a right-handed or left-handed $\vec E$ and $\vec B$ field vector rotation, where $\vec E$ and $\vec B$ are always at right angles according to the above cross product.
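As a small numerical check of the cross-product relation above (my own sketch with standard SI constants, not part of the original answer):

```python
import math

MU0 = 4e-7 * math.pi         # vacuum permeability, H/m (classical exact value)
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
Z0 = math.sqrt(MU0 / EPS0)   # free-space wave impedance, ~376.73 ohm

def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def b_field(k, e):
    """B = (k/|k| x E) / Z0 for a linearly polarized plane wave."""
    n = math.sqrt(sum(c * c for c in k))
    k_hat = tuple(c / n for c in k)
    return tuple(c / Z0 for c in cross(k_hat, e))

# wave along +z, E along +x  =>  B along +y with magnitude |E| / Z0
print(b_field((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))
```

Swapping to a "left-hand" orientation just flips the sign of the cross product, i.e. reverses $\vec B$, which is not a different linear polarization.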
{ "domain": "physics.stackexchange", "id": 34819, "tags": "electromagnetism, visible-light, polarization" }
Convection Heat Transfer
Question: I am working on a problem at the moment and would appreciate some of your expertise regarding this issue. If you see the image below, I am trying to calculate heat transfer from a fluid entering a vessel/tank on the grey cuboid within. To begin with, I am focussing on side C. I would like to understand how we would work out the velocity of the fluid at point C when water is pumped in at A (intake), and, when discharging from point (B), i.e. A flow, B return. The overall problem I am trying to solve is how long it would take to heat the cuboid from convection heat transfer (forced + also natural). At the moment I am calculating (Re) and need to determine velocity. It is not a homework question, it is for general research. There is no actual question I'm working from. I would like to understand how to work out velocity at C when there is a flow into A and out of B. This will allow me to work out Re at C. Answer: Assuming incompressibility, you can work with volume flow to calculate (as a first guess) the velocity $u$ with the following equations. Probably you have some volume or mass flow $\dot{m}$ entering at (A). This mass flow is entering the box through an opening with a certain cross-section $A_\mathrm{A}$. The relation is: $u_\mathrm{A} = \frac{\dot{m}}{\rho\, A_\mathrm{A}}$ The same relation should also be valid for the exit B. $u_\mathrm{B} = \frac{\dot{m}}{\rho\, A_\mathrm{B}}$ For the velocity around the cube we will now assume a constant velocity $u_\mathrm{C}$, which is of course not really the case, but it is a start. Here the equation has the same structure; you just need to put in the correct area ($A_\mathrm{C}$), which is the area of the box minus the area of the cube. $u_\mathrm{C} = \frac{\dot{m}}{\rho\, (A_\mathrm{box} - A_\mathrm{cube})}$
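A minimal numerical sketch of these relations (the flow rate and areas below are made-up values, not from the question):

```python
def velocity(m_dot, rho, area):
    """u = m_dot / (rho * A): mean velocity of an incompressible stream
    of mass flow rate m_dot through a cross-section of area A."""
    return m_dot / (rho * area)

rho = 1000.0    # water, kg/m^3 (assumed)
m_dot = 2.0     # pump mass flow, kg/s (assumed)

u_A = velocity(m_dot, rho, 0.010)        # inlet, A_A = 0.010 m^2
u_B = velocity(m_dot, rho, 0.010)        # outlet, same area assumed here
u_C = velocity(m_dot, rho, 0.25 - 0.05)  # A_box - A_cube, m^2

print(u_A, u_C)  # 0.2 0.01
```

The resulting $u_\mathrm{C}$ then feeds directly into $Re = \rho u_\mathrm{C} L / \mu$ for side C.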
{ "domain": "engineering.stackexchange", "id": 1898, "tags": "mechanical-engineering, heat-transfer, convection" }
What is the theoretical upper limit on the rigidity of a material?
Question: Take a perfectly rigid metal rod of length $2\ell$ and some uniform linear density. Place one end (‘south’) at $(0,-\ell)$ and the other (‘north’) at $(0, \ell)$. Over some reasonably short time interval $t$, perhaps on the order of a fraction of a second, displace the center of the rod eastward from $(0,0)$ to $(1,0)$. In practice it's very easy to do this so that the entire rod moves one unit eastward; in particular the north end moves from $(0, \ell)$ to $(1, \ell)$. But this is actually a classical view of the situation. To see this, make $\ell$ very long, say on the order of ten light-seconds, and large enough to be bigger than $t\cdot c$. Then at time $t$ the center of the rod is at $(1,0)$ but the north end is still at $(0,\ell)$, because the north end can't have noticed yet that the middle has moved. But this has an implication for the material properties of the bar. I claimed in the first paragraph that it was perfectly rigid, but it now appears that it isn't as rigid as all that. Purely from speed-of-light considerations we can conclude that even a perfectly elastic bar must temporarily deform in the process of being translated from $x=0$ to $x=1$. It seems to me that if one assumed that the rod had length $2\ell$ and uniform linear density $\rho$, then one could calculate the amount of force required to translate it from $x=0$ to $x=1$ by pushing on the midpoint. Then supposing that the rest of the rod followed as quickly as speed-of-light propagation allows, one could calculate the stiffness of the rod, and this would be a theoretical upper bound on the maximum stiffness of any material whatever. But I don't have enough expertise or understanding of materials calculations to do actually perform this one. Also I suspect I must have left out something important, for the same reason. My questions are: Can this calculation be done, or is there some reason the whole idea is unsound? 
If it does make sense, what upper limit on material stiffness does this method produce? I suppose that if it does work, the upper bound is vastly greater than the stiffness of any real material, but I don't mind that. (I found the question Extended Rigid Bodies in Special Relativity, which is clearly related to this, but doesn't get at what I want. My earlier question Behavior of shock waves at relativistic speeds started out as an attempt to ask this one, and somehow went in a completely different direction by the time I posted it.) Answer: I believe that one could rephrase the question as "if the limit of the speed of sound in a medium must be the speed of light in vacuum, what does that mean for the limit on rigidity of an object?" Speed of sound is given by $$c=\sqrt{\frac{E}{\rho}}$$ - it depends on both density and Young's modulus. I would consider "rigidity" to be just the modulus, and if there is no theoretical limit on density then there is no theoretical limit on rigidity (following your logic). Of course from a materials science and quantum mechanics perspective there is always going to be a finite force-distance relationship for atoms - this sets realistic limits on elastic modulus that are well below the theoretical one calculated above. At 12,000 m/s, diamond (a very rigid material) is still far away from the limit.
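Following the answer's logic, one can put numbers on the resulting bound (a sketch using approximate handbook values for diamond; the figures are my assumptions, not from the post):

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def sound_speed(E, rho):
    """c = sqrt(E / rho) for Young's modulus E (Pa), density rho (kg/m^3)."""
    return (E / rho) ** 0.5

def modulus_limit(rho):
    """Largest modulus compatible with sound_speed(E, rho) <= C_LIGHT;
    rearranging the speed-of-sound formula gives E_max = rho * c^2."""
    return rho * C_LIGHT ** 2

rho_diamond = 3.5e3   # kg/m^3 (approximate)
E_diamond = 1.1e12    # Pa (approximate)

# diamond sits roughly eight orders of magnitude below its causal bound
print(modulus_limit(rho_diamond) / E_diamond)
```

This makes the answer's point concrete: the interatomic force limits, not causality, are what cap real materials.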
{ "domain": "physics.stackexchange", "id": 18343, "tags": "classical-mechanics, special-relativity, material-science" }
Is there any scientific evidence that a human has ever grown a third set of teeth?
Question: This is about the possibility (or lack thereof) for a person to re-grow a new "permanent tooth" or set of teeth, to replace the teeth that grew after their milk teeth fell out. I had earlier seen some anecdotal evidence on the internet that this had happened to some people. Anyway, what I wanted to ask is if there is any scientific evidence (e.g. properly recorded unusual medical phenomena) that shows that a person has grown a tooth (or a set of teeth) to replace their permanent teeth? Answer: There are legitimate case reports in accredited journals of hyperdontia, or the condition of having supernumerary teeth. Such cases are often associated with congenital syndromes-- cleft lip and palate, trichorhinophalangeal syndrome, cleidocranial dysplasia, and Gardner's syndrome. I included a case report and a comprehensive review for you below. Case report from the American Journal of Orthodontics and Dentofacial Orthopedics, 2011. Comprehensive review from the Journal of Oral Science, 2014.
{ "domain": "biology.stackexchange", "id": 3322, "tags": "human-biology, development, stem-cells, teeth" }
Node weighted Steiner Tree Problem where all Nodes have the same Weight
Question: The node-weighted Steiner tree problem as found in this compendium: $\textbf{Instance}: \text{Graph } G = (V, E)\text{, set of terminals } S \subseteq V \text{ and a node weight function } w:V \to \mathbb{R}^+$ $\textbf{Solution}: \text{A tree } T = (V_T,E_T) \text{ in } G \text{, such that } S\subseteq V_T \subseteq V,~E_T \subseteq E.$ $\textbf{Objective}: \text{Minimize the sum of the vertex weights}$ is NP-hard. But what if all nodes have the same weight? So the weight function $w$ is just a constant function that maps to 1 for all vertices: $\forall v \in V:w(v) = 1$ Is there a name for this problem? Is it NP-hard? If yes, what algorithms can I use for approximation? Please note that this problem is not equal to MST, shortest path, or TSP. Answer: Since all vertex weights are equal, minimizing $\sum_{v \in V_T} w(v)$ is equivalent to minimizing $|V_T|$, and since a tree on $x$ vertices has exactly $x-1$ edges, this is equivalent to minimizing $|E_T|$. Therefore your problem is NP-hard. Moreover, any $\alpha$-approximation algorithm for the classical Steiner tree problem, where $\alpha$ is a constant, can also be used to provide an $\alpha$-approximation for your version. Let $v$ be a terminal and let $\overline{w}$ be the (constant) vertex weight. Add a new vertex $v'$ and the edge $(v, v')$. Then drop all vertex weights, and set all edge weights to $\overline{w}$. You are left with an instance of Steiner tree such that any solution to this instance can be converted to a solution for your version of the problem having the same cost (by simply deleting $(v,v')$ from $T$), and vice-versa (by adding $(v,v')$ to $T$).
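For intuition (my own sketch, not part of the answer), on a graph small enough to brute-force, minimizing the number of tree vertices is the same as finding the smallest vertex set that contains the terminals and induces a connected subgraph, since any spanning tree of that subgraph has $|S|-1$ edges:

```python
from itertools import combinations

def min_steiner_vertices(n, edges, terminals):
    """Brute force over vertex subsets of a graph on vertices 0..n-1;
    returns the vertex count of a minimum uniform-weight Steiner tree."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def induces_connected(S):
        S = set(S)
        stack, seen = [min(S)], set()
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend((adj[v] & S) - seen)
        return seen == S

    terminals = set(terminals)
    for k in range(max(1, len(terminals)), n + 1):
        for S in combinations(range(n), k):
            if terminals <= set(S) and induces_connected(S):
                return k
    return None  # terminals not all in one component

edges = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 3)]
print(min_steiner_vertices(5, edges, {0, 3}))  # 3: the tree 0-4-3 beats 0-1-2-3
```

This exponential search is only for illustration; for real instances you would use one of the approximation algorithms mentioned in the answer via the edge-weight reduction.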
{ "domain": "cs.stackexchange", "id": 20459, "tags": "graphs, trees, np" }
Validation form JavaScript
Question: I am a beginner at JavaScript and I know about jQuery validations; however, I do not want to use a foundation like jQuery just yet. I am wondering if what I have done is good or if there is another way to do it other than jQuery? I won't post the HTML, just the JS. It does work by the way! Just wondering if there is a better way! function validateForm() { var userName = document.forms["register"]["username"].value; var passWord = document.forms["register"]["password"].value; var c_passWord = document.forms["register"]["c_password"].value; if (userName == null || userName == "") { alert("Your username can not be empty!"); return false; } else if (userName.length < 3) { alert("Your username must be at least 3 characters long!"); return false; } if (passWord == null || passWord == "") { alert("Your password can not be empty"); return false; } else if (passWord.length < 5) { alert("Your password must be more than 5 characters"); return false; } } Answer: It mostly looks fine. Some things I would change: var userName = document.forms["register"]["username"].value; var passWord = document.forms["register"]["password"].value; I would extract the repetition here to get the form. This makes the code cleaner and makes it simpler if you change the name of the form. var form = document.forms["register"]; var userName = form["username"].value; var passWord = form["password"].value; Here if (userName == null || userName == "") { You could just use if (!userName) { One last thing is that you probably want to check for and remove leading and trailing spaces in the user name (and possibly in the password). There is a built-in trim method you can use if you're not targeting IE8.
{ "domain": "codereview.stackexchange", "id": 24578, "tags": "javascript, beginner, validation, form" }
ros with multiple distros
Question: I have a 3rd party gadget whose ROS interface is compiled against an older version of ROS. I'm curious, to what extent is it possible to intermix clients and ROS Master from different distributions. Is there any compatibility between them? Thanks, Val Originally posted by vschmidt on ROS Answers with karma: 242 on 2017-01-16 Post score: 0 Answer: Although it is not advisable, mixing ROS versions will work in some cases. Naturally, it depends on which versions you are trying to mix, but for at least a number of the newest releases, the underlying communication protocol hasn't changed, so as long as your message definitions are identical, it should work. Obviously, if the message definition changes for the newer distro, this will no longer work so there's no saying if it will continue to work. For a more elaborate explanation, take a look at the following related questions: #q104086, #q76279 and #q37403. Originally posted by rbbg with karma: 1823 on 2017-01-16 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 26741, "tags": "ros" }
More elegant way to construct the S3 path name with given options
Question: I have a method that generates the S3 key prefix according to the provided args. However, my gut feeling is that this method is not as elegant as it should be, maybe I just don't have an eye for it. Care to instruct me on how to make this little bit less sketchy? private String generateFullQualifiedS3KeyPrefix(DataStream dataStream, Options options) { String environmentName, applicationName, streamName = applicationName = environmentName = null; if (dataStream != null) { environmentName = dataStream.getEnvironmentName(); applicationName = dataStream.getApplicationName(); streamName = dataStream.getStreamName(); } String year, month, date, groupById = date = month = year = null; if (options != null) { year = String.valueOf(options.getYear()); month = String.format("%02d", options.getMonth()); date = String.format("%02d", options.getDate()); groupById = options.getGroupById(); } String[] arr = new String[]{environmentName, applicationName, streamName, groupById, year, month, date}; StringJoiner filePath = new StringJoiner("/"); for (int i = 0; i < arr.length; i++) { if (arr[i] == null) { break; } filePath.add(arr[i]); } return filePath.toString() + "/"; } Answer: The local variables make this very verbose: each variable is declared, initialized, re-assigned, and referenced. (They each appear 3-4 times in the code.) You could instead accumulate values in a list. Creating an array that may contain null values, then filtering out the null values in a second step feels like a waste that's easy to avoid. You could instead accumulate non-null values in a list. Using StringJoiner is efficient to join a string from multiple parts. Then at the end the filePath.toString() + "/" is an inefficient string concatenation that could have been easily avoided by appending an empty string to filePath. And instead of using StringJoiner, you could use String.join. 
Consider this alternative: private String generateFullQualifiedS3KeyPrefix(DataStream dataStream, Options options) { List<String> values = new ArrayList<>(); if (dataStream != null) { values.add(dataStream.getEnvironmentName()); values.add(dataStream.getApplicationName()); values.add(dataStream.getStreamName()); } if (options != null) { values.add(String.valueOf(options.getYear())); values.add(String.format("%02d", options.getMonth())); values.add(String.format("%02d", options.getDate())); values.add(options.getGroupById()); } values.add(""); return String.join("/", values); }
{ "domain": "codereview.stackexchange", "id": 31469, "tags": "java, spring, amazon-web-services" }
Centripetal force on electron moving through curved wire
Question: Suppose we have a curved ohmic conductor in the form of a circular arc with some thickness, say $d$. Now, say we apply a potential difference across the two ends of the conductor. The electrons will start moving through the conductor as a result of the potential difference supplied. However, to move in the curved circular path, we must exert some type of centripetal force on the electron. My question is what exactly produces this centripetal force and how exactly is the electron made to curve through the conductor? Answer: It is the same as a small ball going through a curved tube: the wall provides the force. With electrons, the wall carries a slight negative surface charge that guides them. Also, the drift velocity is very small, so the force needed is small.
{ "domain": "physics.stackexchange", "id": 96783, "tags": "electric-current, centripetal-force" }
How to pull out the momentum operator?
Question: In equation (1.7.17), how does the operator $p$ get out of the bracket without any operation, even though $<a | $, $| x'>$ are functions of $x'$? How can this be proven? Answer: $\newcommand{ket}[1]{\left| #1 \right>}$ $\newcommand{bra}[1]{\left< #1 \right|}$ $\newcommand{bk}[2]{\left< #1 \big| #2 \right> }$ Though I am not 100% sure if what I am going to do is legitimate, I would suggest the following (I am about 90% sure that it is legitimate): The confusion arises because the author has used $x'$ for two distinct things, namely one for the translation and one for the identity operator $\mathbb{1}=\int \ket{x}\bra{x} \mathrm{d}x$. I suggest using $\Delta x$ for the translation so that they would be two distinct things, which leads to the following: $$\int \mathrm{d}x' \ket{x'}\left(\bk{x'}{a} - \Delta x \frac{\partial}{\partial x'} \bk{x'}{a} \right) \tag{1.7.15}$$ Again, comparing both sides yields: $$\hat{p}\ket{a}= \frac{\hbar}{i} \int \mathrm{d}x' \ket{x'} \frac{\partial}{\partial x'} \bk{x'}{a} $$ Acting on it with $\bra{x}$ yields: \begin{align} \bra{x}\hat{p}\ket{a} &= \frac{\hbar}{i} \int \mathrm{d}x' \bk{x}{x'} \frac{\partial}{\partial x'} \bk{x'}{a} \\ &= \frac{\hbar}{i} \int \mathrm{d}x' \delta(x-x') \frac{\partial}{\partial x'} \bk{x'}{a} \\ &= \frac{\hbar}{i} \frac{\partial}{\partial x} \bk{x}{a} \end{align} Notice that this is the same equation with $x$ and $x'$ exchanged, which doesn't affect the final answer.
{ "domain": "physics.stackexchange", "id": 20804, "tags": "quantum-mechanics, operators, momentum, hilbert-space" }
Shape of Weak-Strong Acid-Base Titration
Question: I can understand why in any titration, the pH changes really quickly near the equivalence point. However, in a titration of either weak base-strong acid or weak acid-strong base, the weak acid/base first has a steep rate of change in pH in the beginning, then slows down at the midpoint, then becomes steep again closer to the equivalence point. Why is this the case? (See picture below). Also, it seems that the weaker the acid/base, the more pronounced the initial quick rate of change. Also, why is the "midpoint" (not the equivalence point) so special anyways? You can refer to this picture. The steep drop portion is from 0-10 mL. Answer: Suppose $\ce{NH3(aq)}$ is being titrated with $\ce{HCl(aq)}$. The products are $\ce{NH4+ (aq)}$ and $\ce{Cl- (aq)}$. As the titration proceeds [$\ce{NH3}$] decreases and [$\ce{NH4+}$] increases. The presence of $\ce{NH3}$ and $\ce{NH4+}$ together constitutes a buffer system, in this case a weak base and its conjugate acid. When molarities of both species are relatively large, the buffering capacity is significant. Halfway to the equivalence point, the molarities of both species are equal, which allows for the greatest possible buffering action. In the very early part of the titration, the molarity of $\ce{NH4+}$ is not great enough to allow much of a buffering effect, so the pH falls rapidly with the addition of $\ce{HCl}$. But more toward the point where [$\ce{NH3}$] becomes equal to [$\ce{NH4+}$], the buffering action is increasingly able to "resist" changes in pH so the slope of the titration curve becomes more shallow. Past this point, the [$\ce{NH3}$] steadily diminishes to the point that buffering is no longer possible and the pH dives through the equivalence point as we expect. Past the equivalence point, only $\ce{NH4+}$ is present so no buffering can occur and there are no inflections like the one that occurs at the start of the titration. 
The importance of the "midpoint" as it is labeled in the illustration is that pH vs. titrant volume analyses are sometimes done specifically to determine the K$_a$ or K$_b$ of a weak acid or weak base, respectively. Continuing with the $\ce{NH3}$ and $\ce{HCl}$ example, the K$_b$ for $\ce{NH3}$ is $[\ce{NH4+}][\ce{OH-}]/[\ce{NH3}]$. Halfway to the equivalence point, [$\ce{NH4+}$]=[$\ce{NH3}$] and we have K$_b = [\ce{OH-}$] or pK$_b$ = pOH.
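The midpoint condition can be sketched numerically with the Henderson-Hasselbalch form of the $K_b$ expression (the $pK_b$ below is an approximate literature value, my assumption, not from the original answer):

```python
import math

def buffer_pOH(pKb, base_conc, conj_acid_conc):
    """pOH = pKb + log10([conjugate acid] / [base]) for a base buffer,
    i.e. the Henderson-Hasselbalch rearrangement of the Kb expression."""
    return pKb + math.log10(conj_acid_conc / base_conc)

pKb_NH3 = 4.75                                  # approximate value for NH3
midpoint_pOH = buffer_pOH(pKb_NH3, 0.05, 0.05)  # [NH3] = [NH4+] at midpoint
midpoint_pH = 14.0 - midpoint_pOH               # at 25 C

print(midpoint_pOH, midpoint_pH)  # 4.75 9.25
```

Reading the pH at the midpoint of such a curve therefore hands you $pK_b$ (or $pK_a$ for a weak-acid titration) directly.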
{ "domain": "chemistry.stackexchange", "id": 5325, "tags": "acid-base, titration" }
Parallel axis theorem
Question: For this question we are working in 2-D space. Let's say that we have an arbitrary body. I know the inertia with respect to a perpendicular axis (to the body, that is) that passes through a point (A). However, I'd like to know the inertia with respect to a parallel axis (B) a distance $d$ away, so I use the parallel axis theorem: $$ I_B = I_\mathrm{A} + md^2 $$ I show the diagram of the 2-D body and my value $I_B$ (not $I_A$) to someone else. They, however, would like to know what the inertia is with respect to the axis at A. They use the parallel axis theorem - plug n' chug - as well. $$ I_A = I_\mathrm{B} + md^2 $$ For any $d \neq 0$, $d^2$ is bound to be > 0! So their $I_A$ is not going to equal the initial $I_A$. Obviously there has been some confusion of ideas on my part. But I don't know where I went wrong. Where did I go wrong? Answer: The parallel axis theorem $I_B = I_A + md^2$ only applies when $A$ is the center of mass.
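A quick numerical illustration of why the two-way application fails (my own sketch using a uniform rod; the numbers are illustrative):

```python
def parallel_axis(I_cm, m, d):
    """Moment of inertia about an axis a distance d from the
    center-of-mass axis; d must be measured from the CM."""
    return I_cm + m * d ** 2

m, L = 2.0, 3.0
I_cm = m * L ** 2 / 12                 # rod about its midpoint: 1.5
I_end = parallel_axis(I_cm, m, L / 2)  # about one end, m L^2 / 3: 6.0

# Jumping directly between two non-CM axes overcounts:
I_naive = I_end + m * L ** 2                 # "theorem" end to end: 24.0
I_other_end = parallel_axis(I_cm, m, L / 2)  # correct, via the CM: 6.0

print(I_end, I_naive, I_other_end)  # 6.0 24.0 6.0
```

To relate two arbitrary parallel axes A and B, apply the theorem twice through the CM: $I_B = I_A - m d_A^2 + m d_B^2$.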
{ "domain": "physics.stackexchange", "id": 18599, "tags": "newtonian-mechanics, statics, inertia" }
Force input to harmonic oscillator force as function of time or displacement as function of time?
Question: Question title basically says it. If the governing equations are like: $$m_1 {x_1}'' + c({x_1}'-{x_2}') + k_1(x_1-x_2) = f(t)$$ etc... Since all the terms are force terms, shouldn't the input match the units and be a function of force with respect to time? This is very confusing. The equation is expecting a force but getting a displacement. So for instance, in this matlab tutorial of the quarter car model, the height of the bump in the road is the same as the height of the input step function. A 1 cm height bump is modeled as a 1 cm height step function. The following images are just snippets to show what I am referring to: http://ctms.engin.umich.edu/CTMS/index.php?example=Suspension&section=ControlStateSpace Again, this paper uses a sin function to model a speed hump with the amplitude of the sin function = the height of the bump. https://peer.asee.org/on-the-analysis-and-design-of-vehicle-suspension-systems-going-over-speed-bumps.pdf So my question is, if you make the step input a height of 1 cm, that doesn't seem to take into account the velocity of the vehicle. The suspension will respond differently to the same physical height at different speeds. Therefore, it seems to make more sense to calculate the impulse experienced by the wheel using: $$Ft = mv$$ and with an estimated change in time and change in velocity get the average force on the wheel. Then use that force as the height of the step input. So let's say you get F = 50 kN; then the step height would not be 1 cm but 50 kN. The only thing I could think of is that you would set the velocity initial conditions of the masses to the desired velocity at which you want to analyze. In other words, why this? $$Diff. equation = displacement(t)$$ and not this? $$Diff. equation = force(t)$$ Answer: I am getting a "displacement" because as the wheel rolls over a step-up bump of a given height, the wheel will be displaced. So my question is why would this value be the input to the diff. 
EQs, rather than the magnitude of the force delivered by the bump of the given height? It's true that the input stimulus to this system is the displacement $W$ but it's not true that the forcing functions for the associated differential equations are a displacement. Look at the matrix equations (1) - (3) at the link and see that one of the scalar equations is $$\ddot{Y}_1 = \frac{K_2}{M_2}X_1 - \left(\frac{K_1}{M_1} + \frac{K_1}{M_2} + \frac{K_2}{M_2}\right)Y_1 + U\left(\frac{1}{M_1} + \frac{1}{M_2}\right) - \frac{K_2}{M_2}W$$ which can be rearranged like so $$\ddot{Y}_1 + \left(\frac{K_1}{\mu}+\frac{K_2}{M_2}\right)Y_1 - \frac{K_2}{M_2}X_1 - \frac{U}{\mu} = -\frac{K_2}{M_2}W$$ where $$\frac{1}{\mu} \equiv \frac{1}{M_1} + \frac{1}{M_2}$$ and so the forcing function to this differential equation is a function of $W$ (with dimensions of acceleration) but isn't identical to $W$.
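The same point can be seen in a stripped-down, one-mass sketch (hypothetical parameter values, not the tutorial's quarter-car numbers): the base displacement $w$ enters the equation of motion only through the force-like terms $k(w-x)$ and $c(\dot w-\dot x)$, and a 1 cm step in $w$ produces a 1 cm steady-state displacement regardless of the stiffness.

```python
# Base excitation through a spring-damper: m*x'' = k*(w - x) + c*(w' - x').
# For a step w(t) = h (so w' = 0 afterwards), the forcing term is k*h, and
# the steady-state displacement is x = h.
m, k, c, h = 1.0, 100.0, 20.0, 0.01   # hypothetical values; h = 1 cm bump

x, v = 0.0, 0.0
dt = 1e-4
for _ in range(200_000):              # 20 s of simulated time, semi-implicit Euler
    a = (k * (h - x) + c * (0.0 - v)) / m
    v += a * dt
    x += v * dt

print(x)   # settles near h = 0.01
```

Whether the response overshoots or rings on the way there depends on the damping and on how fast $w$ rises, which is where the vehicle speed enters the full model.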
{ "domain": "physics.stackexchange", "id": 52401, "tags": "classical-mechanics, energy" }
How to estimate taps required for Parks-McClellan filters
Question: I have some code based on Jake Janovetz's Parks-McClellan (Remez) filter generating code. How can I estimate the number of taps required to build a lowpass filter given requirements for pass band ripple and stop band attenuation? I already know how to convert from these requirements back to the filter error deviation. Answer: If you're using MATLAB, the function firpmord exists to help you with that. Like some other MATLAB functions, it doesn't link to any libraries or mex files, it's simply MATLAB code that runs. The only reason I mention it is that when you open this function (open firpmord), it has a subfunction remlpord that was written by (ta-daaaaa!) J. H. McClellan himself. It's using a matrix of hardcoded numbers and references Rabiner & Gold, Theory and Applications of DSP, pp. 156-7. The method therefore must be somewhat empirical, though I won't argue one way or the other. In any case, you can study this function (it's very short) and write your own based on it. I failed to find any specific papers that address the problem, though perhaps there are books.
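If MATLAB isn't available, Kaiser's classic empirical estimate for equiripple lowpass filters is easy to code directly. Note this is the older approximation from the same literature, not the exact hard-coded fit inside remlpord, so treat the result as a starting point to refine by running the Remez design and checking the ripple:

```python
import math

def kaiser_tap_estimate(delta_p, delta_s, delta_f):
    """Kaiser's estimate of the FIR length for an equiripple lowpass.

    delta_p, delta_s: passband/stopband ripples (linear, not dB)
    delta_f: transition width normalized to the sample rate,
             i.e. (f_stop - f_pass) / f_s
    """
    n = (-20.0 * math.log10(math.sqrt(delta_p * delta_s)) - 13.0) / (14.6 * delta_f)
    return int(math.ceil(n)) + 1   # filter order -> number of taps

# Example: 1% ripple in both bands, transition band 5% of the sample rate.
print(kaiser_tap_estimate(0.01, 0.01, 0.05))
```

As expected, halving the transition width or tightening the ripple spec drives the tap count up.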
{ "domain": "dsp.stackexchange", "id": 114, "tags": "filters, filter-design, lowpass-filter" }
Sanitisation function: any holes?
Question: I've come up with this small function to make user submitted strings safe for MySQL. I'd be grateful if someone could point out any security holes in this. I've tested it out, and it happily replaces quotes and the like. The only issue I can see is the lack of escaping ampersands, but this shouldn't matter right? $keywords = array("delete from", "drop table", ";", "="); $safeKeywords = array("delete&nbsp;from", "drop&nbsp;table", "&#59;", "&#61;"); function dbSanitise($field) { global $keywords, $safeKeywords; $sanitised = str_ireplace($keywords, $safeKeywords, $field); $sanitised = htmlentities($sanitised, ENT_QUOTES); $sanitised = mysql_real_escape_string($sanitised); return $sanitised; } Putting this string into the function above: Hello world delete from DELETE FROM ; = " ' '' Yields this: Hello world delete&amp;nbsp&amp;#59;from delete&amp;nbsp&amp;#59;from &amp;#59; &amp;#61; &quot; &#039; &#039;&#039; Which I can only see as being perfectly acceptable for a MySQL insert operation. If I'm wrong, do let me know! Thanks for any help. Answer: You shouldn't try to think of all the bad things that could be in the query, you'll never think of them all. As it stands, your replacement is useless (as some of your commenters have noted). You can have any text you want inside of the string; it's the characters which might cause the string to be escaped that are a problem. mysql_real_escape_string does everything you need. You don't need to implement your own function. Better yet, use prepared statements.
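To make the prepared-statements point concrete, here is a sketch using Python's built-in sqlite3 driver (for illustration only, since the question is about PHP/MySQL — the same idea applies to PDO prepared statements there): the value travels out-of-band via a placeholder, so no keyword blacklist is needed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")

# A hostile input that a blacklist approach would have to anticipate.
evil = "x'); DROP TABLE comments; --"

# The ? placeholder sends the value separately from the SQL text:
# it is stored literally and never interpreted as SQL.
conn.execute("INSERT INTO comments (body) VALUES (?)", (evil,))

stored = conn.execute("SELECT body FROM comments").fetchone()[0]
print(stored == evil)   # the string survives byte-for-byte; the table is intact
```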
{ "domain": "codereview.stackexchange", "id": 428, "tags": "php, mysql, security" }
Why is the $\frac {1} {\beta T}$ a constant (Boltzmann Constant)?
Question: I have been studying Statistical Mechanics from the Book "Statistical Mechanics by R.K. Pathria & Paul D. Beale". So, in chapter 1, it discusses the statistical basis of thermodynamics, starting with defining the number of microstates, $\Omega$ of a system with $(N,V,E)$ [where $N$ = number of particles, $V$ = Volume of the system, $E$ = Total Energy of the system] to get into a condition of equilibrium for the system: Now, from that we found a microscopic entity called $\beta$ (defined below) which must be the same for the two systems A and B in order to be in equilibrium. $$\beta = \left( \frac {\partial \ln \Omega (N,V,E)}{\partial E} \right)_{N,V,E=\bar E}$$ Now, this must be related to Thermodynamic Temperature (since that also ensures Thermodynamic Equilibrium), $T$ defined by $$\frac 1 T = \left( \frac{\partial S} {\partial E} \right)_{N,V}$$ And, the relation was established by multiplying $\frac 1 \beta$ with $\frac 1 T$ and finding: $$ \frac {\Delta S}{\Delta \ln \Omega} = \frac 1 {\beta T} = \text{constant} = k_B $$ Now, at this point, I had no clue as to how this parameter can be a constant (which was named after Boltzmann, and we know that it is a universal constant, but how is that defined here?). There were no convincing remarks about that in the book (or maybe, I am missing an obvious fact here) Answer: The difficulty in answering this question lies in there being so many different ways of presenting the basics of statistical mechanics. If you're prepared to accept that your first two equations are essentially microscopic and macroscopic versions of the same thing, then you can see that they will remain so if we multiply both sides of the first equation by a constant, $k_\text B$. We'll give $k_\text B$ the same units as those of $S$: $$k_\text B \beta =\left(\frac {\partial (k_\text B\ln \Omega)}{\partial E}\right)_{N, V, E=\tilde E}.$$ Why have we done this? 
Because $k_\text B \ln \Omega$ has the units of $S$, so if we were prepared to accept the original first equation as saying essentially the same as the second (macroscopic) equation, we can accept the modified equation, with the right numerical value for $k_\text B$, as exactly matching the second, with $\frac 1T=k_\text B \beta$.
{ "domain": "physics.stackexchange", "id": 81934, "tags": "thermodynamics, statistical-mechanics, temperature, equilibrium, physical-constants" }
Stress tensor in product of 2D CFTs
Question: I was struggling with a question, hoping someone could point me in the right direction. I'm interested in 2D CFTs on a cylinder. I want to take the tensor product of two CFTs. My questions are these: (1) It seems that the total stress tensor will have modes that are the sum of the individual stress tensors $T^{(1)}+T^{(2)}$, like $T(z) = \sum z^{-n-2}(L_n^{(1)}+L_n^{(2)})$. I'm making a mistake in my calculation, but this operator should be in the conformal block of the identity, right? A descendant coming from applying an operator in the "full" algebra, like $(L_{-2}^{(1)} + L_{-2}^{(2)})|0\rangle$? (2) I should get operators like $T^{(1)} - T^{(2)}$, and I can't see from where they would descend. I expect they should be primary operators, but I'm making an error I think. Is this a primary operator? Answer: I found the mistake. (1) The state $(L_{-2}^{(1)} + L_{-2}^{(2)})|0\rangle$ does correspond to the total stress tensor for the product CFT. Acting on this state with the lowering operator $L_{2}^{(1)} + L_{2}^{(2)}$ gives a state proportional to the vacuum (except in the case where the central charge $c=0$, in which case the stress tensor is primary). (2) The state $T^{(1)} - T^{(2)}$ is not the state to consider; it's actually a linear combination of two states with well-defined (though different) properties under conformal transformations. The state built from $T^{(1)}$ and $T^{(2)}$ that has good transformation properties and is linearly independent of $T^{(1)} + T^{(2)}$ is the combination $(c^2 T^{(1)} - c^1 T^{(2)})|0\rangle$, where $c^i$ is the central charge of the $i^{th}$ CFT. Acting with Virasoro raising operators shows that this state is a primary state.
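The last claim can be verified in one line from the Virasoro algebra $[L_m,L_n]=(m-n)L_{m+n}+\frac{c}{12}m(m^2-1)\delta_{m+n,0}$, which gives $L_2 L_{-2}|0\rangle = \frac{c}{2}|0\rangle$ in each factor (the $L_1$ condition is trivial, since $[L_1,L_{-2}]\propto L_{-1}$ annihilates the vacuum):

```latex
\left(L_2^{(1)}+L_2^{(2)}\right)\left(c^2 L_{-2}^{(1)}-c^1 L_{-2}^{(2)}\right)|0\rangle
= c^2\,\frac{c^1}{2}\,|0\rangle \;-\; c^1\,\frac{c^2}{2}\,|0\rangle = 0 .
```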
{ "domain": "physics.stackexchange", "id": 13089, "tags": "conformal-field-theory, tensor-calculus, stress-energy-momentum-tensor" }
Discovery of spin-3 particle at LHCb
Question: I just read a discussion on the CERN website regarding first observation of a heavy flavored spin-3 particle at LHCb. This appears to be a post from last July. Is there anyone knowledgeable enough in this area who would be able to comment on some of the possible theoretical/ hypothetical implications of the existence of spin-3 particles? Is there any thought that their existence could imply additional fundamental forces? Answer: Is there anyone knowledgeable enough in this area who would be able to comment on some of the possible theoretical/ hypothetical implications of the existence of spin 3 particles? Is there any thought that their existence could imply additional fundamental forces? If you look at the presentation linked in the link you gave, in page five, you will see that spin 3 resonances have appeared in the charmed section. This is the first indication of similar spectroscopy in the beauty sector. The spin is a combination of quark spins and the angular momentum of the quarks within the resonance. It is all about how the quarks bind into resonances, which is mainly QCD though QED cannot be ignored in the modeling. The study of this spectroscopy will be useful in evaluating QCD phenomenological models. The fundamental forces still are strong, weak, electromagnetic and gravitational.
{ "domain": "physics.stackexchange", "id": 20767, "tags": "particle-physics, angular-momentum, quantum-spin, standard-model" }
Time travel and nuclear decay
Question: Reading a previous closed question an interesting variation has come to my mind. Suppose that time travel to the past was possible: I wait for an atom to decay and measure the time, $t_{1a}$ I travel back in time at $t_0<t_{1a}$ I wait for the same atom to decay and measure the time, $t_{1b}$ Let's think about the two values, $t_{1a}$ and $t_{1b}$. If they coincide, then from my point of view the measured $t_1$ time would not be governed by chance (the second time I would know $t_1$ a priori). It would therefore prove some form of existence of hidden variables. Would this violate any known laws of quantum mechanics? Would this prove the existence of hidden variables (e.g. Bohm's interpretation)? If they are different, then what to make of $t_{1a}$ and $t_{1b}$? Which would be the correct value? To me this doesn't make any sense, but maybe it could be compatible with the multiverse interpretation (I don't know how though). In other words, does this gedankenexperiment: ...help us select viable QM interpretations and exclude others? Which? ...lead us to conclude that backwards information time travel is incompatible with QM? Answer: There's a prescription by Deutsch for the quantum mechanics of closed timelike curves. It works on the level of density states, instead of Hilbert space states. Given his prescription, he showed that a fixed point solution always exists no matter what the initial conditions are. However, this solution isn't unique in general. Also, pure states can be converted into mixed states. Not only that, the solution isn't even linear in the initial density state. It has been shown by Aaronson and Watrous that this prescription allows time travelling computers to solve PSPACE-complete problems. There's another prescription based upon post-selection, as expounded by Seth Lloyd where you would observe the same decay time. This prescription has the drawback that not every initial condition admits a solution. 
It has been shown that time-travelling computers can only solve PP-complete problems. Both prescriptions violate unitarity. In fact, the density matrix evolves nonlinearly in both prescriptions. This nonlinearity has the potential to wreak havoc upon quantum mechanics. The difference between both prescriptions can be seen most clearly with the grandfather paradox. To simplify matters, consider a qubit which can only take on two states, $| 0 \rangle$ and $| 1 \rangle$. Let's also suppose this qubit is sent around on a closed timelike curve and loops back onto itself. During a cycle around the loop, the qubit is flipped. Now classically, there clearly isn't any consistent solution. However, according to Deutsch's prescription, the mixed density state $\begin{pmatrix} \frac{1}{2} & a\\ a & \frac{1}{2} \end{pmatrix}$ where $a$ is a real number between $-\frac{1}{2}$ and $\frac{1}{2}$ is a fixed-point solution. He chooses to interpret this using the many worlds interpretation as follows; say the qubit starts off with a value of $0$ in one world. After a loop, it ends up with a value of $1$ in a different parallel universe. According to Lloyd's prescription, on the other hand, there is no solution at all! However, for the example you presented, both prescriptions will give the same answer, namely the time traveller will observe the same decay time both times around. This is because the nuclear decay is not an integral part of the closed timelike loop. To see this, suppose we have the original prepared unstable particle, plus some experimental apparatus with a clock and a clock pointer which will be set to the time of the nuclear decay. After waiting for some time much longer than the half-life, the clock pointer will end up in the state $\sqrt{k}\int^\infty_0 dt\, e^{-kt/2} | t \rangle$. 
The whole point is, it doesn't make any difference if the time traveller doesn't have direct access to the unstable particle, but only to the clock pointer, and it doesn't make any difference either if the clock pointer is prepared outside the time machine, i.e. is part of the initial conditions. Of course, it might turn out the correct prescription is none of the above. Or it might also turn out that closed timelike curves are absolutely forbidden in a complete theory of quantum gravity. Yet another possibility might be closed timelike curves are allowed, but a complete theory of quantum gravity somehow manages to preserve unitarity, just as it presumably preserves unitarity in evaporating black holes.
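The Deutsch fixed point for the bit-flip loop discussed above is easy to verify by hand, or with a few lines of code; the sketch below uses plain 2×2 matrix arithmetic (no libraries) to show that the mixed state is fixed while a classical pure state is not.

```python
def matmul(a, b):
    """2x2 real matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[0.0, 1.0], [1.0, 0.0]]          # the bit-flip applied around the loop

def evolve(rho):
    """One trip around the CTC: rho -> X rho X (X is real and self-inverse)."""
    return matmul(matmul(X, rho), X)

# Deutsch's mixed fixed point: rho = (1/2) I + a X, for any -1/2 <= a <= 1/2.
a = 0.3
rho_fixed = [[0.5, a], [a, 0.5]]
print(evolve(rho_fixed))               # unchanged

# A classical (pure) state is NOT fixed -- this is the grandfather paradox.
rho_zero = [[1.0, 0.0], [0.0, 0.0]]
print(evolve(rho_zero))                # flipped to |1><1|
```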
{ "domain": "physics.stackexchange", "id": 213, "tags": "quantum-mechanics, epistemology, time-travel, measurement-problem" }
Arduino-Create 2: Reading Sensor Values
Question: Over the past few weeks, I have been attempting to interface the iRobot Create 2 with an Arduino Uno. As of yet, I have been unable to read sensor values back to the Arduino. I will describe my hardware setup and my Arduino code, then ask several questions; hopefully, answers to these questions will be helpful for future work with the Create 2. Hardware: The iRobot Create 2 is connected to the Arduino Uno according to the suggestions given by iRobot. Instead of the diodes, a DC buck converter is used, and the transistor is not used because a software serial port is used instead of the UART port. Software: The following is the code that I am implementing on the Arduino. The overall function is to stop spinning the robot once the angle of the robot exceeds some threshold. A software serial port is used, which runs at the default Create 2 Baud rate. #include <SoftwareSerial.h> int rxPin=3; int txPin=4; int ddPin=5; //device detect int ledPin=13; //onboard LED int sensorbytes[2]; //array to store encoder counts int angle; const float pi=3.1415926; #define left_encoder (sensorbytes[0]) #define right_encoder (sensorbytes[1]) SoftwareSerial Roomba(rxPin,txPin); void setup() { pinMode(3, INPUT); pinMode(4, OUTPUT); pinMode(5, OUTPUT); pinMode(ledPin, OUTPUT); Roomba.begin(19200); // wake up the robot digitalWrite(ddPin, HIGH); delay(100); digitalWrite(ddPin, LOW); delay(500); digitalWrite(ddPin, HIGH); delay(2000); Roomba.write(byte(128)); //Start Roomba.write(byte(131)); //Safe mode updateSensors(); // Spin slowly Roomba.write(byte(145)); Roomba.write(byte(0x00)); Roomba.write(byte(0x0B)); Roomba.write(byte(0xFF)); Roomba.write(byte(0xF5)); } void loop() { updateSensors(); // stop if angle is greater than 360 degrees if(abs(angle)>2*pi){ Roomba.write(173); delay(100); } } void updateSensors() { // call for the left and right encoder counts Roomba.write(byte(148)); Roomba.write(byte(2)); Roomba.write(byte(43)); Roomba.write(byte(44)); delay(100); // load encoder counts into an array int i = 
0; while(Roomba.available()) { int c = Roomba.read(); sensorbytes[i] = c; i++; } angle=((right_encoder*72*pi/508.8)-(left_encoder*72*pi/508.8))/235; } Questions: Am I loading the sensor values into the array correctly? This same code works when a bump and run program is implemented, but that requires knowing only one bit rather than two bytes. How many bytes can be read over the serial connection at a time? A previous post (Help sending serial command to Roomba) highlights that one byte can be sent at a time. Does this imply that the reverse is true? If so, would a solution be to use a char array to read the values instead and then to append two chars to form a signed int? Is serial communication synchronization a problem? I am assuming that synchronization is not a problem, but is it possible for the bytes to be split on the nibble boundaries? This would present a problem because there is not a nibble datatype. Answer: As it turns out, I was not loading the sensor values into the array properly. It seems that data is transmitted one byte at a time, so the sensorbytes array needed to be an array of 4 elements rather than two. Once this was corrected, the correct encoder counts could be determined by joining the upper and lower bytes of the encoder counts from each wheel using the following scheme: // highbyte is shifted left eight bits, lowbyte is added to highbyte // encoder_count=(highbyte<<8)+lowbyte left_encoder=(sensorbytes[0]<<8)+sensorbytes[1]; This scheme made sure that the resulting left_encoder was a signed int rather than an unsigned int had the word() function been used. The revised updateSensors() code can be found in this post: iRobot Create 2: Encoder Counts. Serial communication synchronization was not a problem. I suspect that this sort of thing is handled behind the scenes and does not need to be programmed at the high level that I am working on.
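The byte-joining scheme is easy to cross-check on a desktop. One pitfall worth remembering when writing it in C/Arduino code: `+` binds tighter than `<<`, so the shift must be parenthesized as `(highbyte<<8)+lowbyte`. A Python sketch with the explicit two's-complement sign fix, checked against the struct module:

```python
import struct

def join_signed16(highbyte, lowbyte):
    """Join two bytes (high first) into a signed 16-bit encoder count."""
    value = (highbyte << 8) + lowbyte       # unsigned 0..65535
    return value - 0x10000 if value >= 0x8000 else value

# 0xFFF5 is the two's-complement encoding of -11 (the same bytes the
# question sends as the "spin slowly" velocity).
print(join_signed16(0xFF, 0xF5))            # -11

# Cross-check against big-endian signed-short unpacking.
print(struct.unpack(">h", bytes([0xFF, 0xF5]))[0])
```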
{ "domain": "robotics.stackexchange", "id": 682, "tags": "arduino, irobot-create, roomba" }
Role of Neuromorphic Computing and Quantum Computing in the field of Robotics and AI
Question: I asked a similar kind of question some time ago (Neuromorphic Engineering and Robotics) Since then, many things have come to the point of revelation. A road-map for neuromorphic computing was revealed recently; It proposes the analog way of computation, to solve advanced computer vision problems. IBM and Qualcomm are also working on the similar project though on the digital side. Memristor technology is slated to come very soon. The question I am asking here is How is the Robotics community working to adopt the technology? This question opens the domain for other pressing questions which have been answered cryptically since the 1980s. Are neuromorphic computers good for mission critical precise robots, like that on Mars? Can we use neuromorphic systems on Avionics systems? How is neuromorphic processing going to help us solve the problems on NLP, and knowledge processing? Aren't quantum computers very similar to neuromorphic computers in ideology? If neuromorphic robots gain traction, will digital hardware still be required? It would be really nice if someone could explain all points, because answers in various but sparsely related research papers are very cryptic. Answer: You've asked more than one question, so I'll try to answer them in order. The Robotics community has not yet hit the limits of current hardware, so very little work is being done on the exotic cutting edge like neuromorphic hardware. The exception to this is software neural nets, which have come in and out of fashion for decades, and the Nv artificial neurons used successfully in BEAM robotics. No. Neuromorphic systems are not ideal for precision. Most probes are remote-controlled, not Artificially Intelligent. To some extent. A bird's brain could not be built in neuromorphic form, and then used to fly a fighter jet, as the control actuators and senses are completely different. 
If the algorithms for good flight are properly understood, they could be implemented on digital hardware more easily than trying to wire up a neuromorphic chip. It's not. The hope was that if we could throw enough neurons at the problem, and put them in a learning circuit, then we would not have to understand NLP and similar problems, because the machine would magically solve them for us. But while that might work for a demo - look up Perceptrons, for instance - it does not get us any closer to the algorithms we need. Not at all. Neuromorphic mimics the neural structures in living creatures. Quantum computers specialize in simultaneously parallel computing. Most likely. Digital systems work very well for measurement and calculation. Consider how microcontrollers are often easier to understand, debug and use than operational amplifiers, for many control systems. I am afraid that this is largely an opinion-based answer, as the questions were very general, and contained enough scope for an entire course.
{ "domain": "robotics.stackexchange", "id": 363, "tags": "microcontroller, computer-vision, machine-learning, research" }
Conceptual problem with action considered as function of endpoints
Question: I am having some trouble with understanding why it makes sense to consider action in classical mechanics as function of endpoints $q_{initial}, \ q_{final}$ and endtimes $t_{initial}, \ t_{final}$. Since we want to define such function as action functional acting on physical trajectory connecting these points, i.e. $S(q_f,q_i;t_f,t_i)= \left. \int _{t_i,q_i}^{t_f,q_f}L \mathrm{d}t \right|_{physical \ trajectory}$ But for such definition to make sense for all $q$ in configuration space, isn't it required that any two points can be joined by a trajectory of any time length we choose AND that such a trajectory is unique? This requirement doesn't seem likely to be true. It is also possible that I completely misunderstood what is going on. For reference, I got it from Landau's mechanics in sections "43. Action as function of coordinates" and "44. Maupertuis principle". In any case I need some tips. I suspect that I am missing something fundamentally important. I have managed to construct an amusing counterexample for existence of such defined action function. I will post it here for it might be useful for somebody asking the same question in the future. Consider a system with configuration space being a circle, i.e. one spatial coordinate which is periodic, $\phi \in (0, 2 \pi )$. Consider the Lagrangian $L=\frac{1}{2} \dot{\phi}^2$ Now fix both endpoints at, say $\phi=0$, $t_{i}=0, \ t_{f}=2 \pi$. It is clear that since we consider physical trajectories, angular velocity needs to be conserved. But we can choose it to be any integer we like and we get a physical trajectory satisfying the demanded conditions. It is easily calculated that the action functional evaluated on such a trajectory is $S=\pi k^2$ for $k$ being an integer. Hence $S$ as a function of coordinates is ill-defined. Several questions arise: To what extent can this "pathological" behavior be attributed to the topology of the configuration space? In this case it is not simply connected. 
In this example we can make it unambiguous by demanding that the shortest trajectory is chosen. Is such a procedure possible in general? Is this connected to the problem of existence of solutions to the HJ equation? What is the "right" way to think about the relation of the function in the HJ equation with the action integral? I will be perfectly delighted with a reference rather than a complete answer! Answer: Comments to the question (v2): Yes, OP is right. The classical path/stationary solution between $(q_i,t_i)$ and $(q_f,t_f)$ does not necessarily exist nor is it necessarily unique. See e.g. this and this Phys.SE posts. However, existence and uniqueness is often true in sufficiently small neighborhoods (if the path is not allowed to leave the neighborhood). The (Dirichlet) on-shell action $S(q_f,t_f;q_i,t_i)$ is mentioned in this Phys.SE post. One possible way to extend the definition of the on-shell action $S(q_f,t_f;q_i,t_i)$ to the case where the classical solution exists but is not necessarily unique, is to pick the minimum of the different on-shell actions. A connection between Hamilton's principal function $S(q,\alpha, t)$ and the on-shell action is outlined in this Phys.SE post. OP's counterexample is also mentioned in my Phys.SE answer here. Using my notation, the Lagrangian reads $$ \tag{1} L ~:=~\frac{I}{2}\dot{\theta}^2.$$ The angular momentum is conserved on-shell and given by $$ \tag{2} L_z ~=~I \frac{\theta_f-\theta_i}{t_f-t_i}. $$ The Hamilton's principal function is $$\tag{3} S(\theta,L_z, t)~=~L_z \theta -\frac{L_z^2}{2I} t ,$$ while the on-shell action is $$\tag{4} S(\theta_f,t_f;\theta_i,t_i)~=~ \frac{I}{2} \frac{(\theta_f-\theta_i)^2}{t_f-t_i}~=~ S(\theta_f,L_z, t_f)-S(\theta_i,L_z, t_i). $$
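As a consistency check on the answer's formulas: (3) solves the Hamilton-Jacobi equation for the free-rotor Hamiltonian $H=p_\theta^2/2I$, and substituting (2) into the difference on the right of (4) reproduces the on-shell action:

```latex
\frac{\partial S}{\partial t}+\frac{1}{2I}\left(\frac{\partial S}{\partial\theta}\right)^2
= -\frac{L_z^2}{2I}+\frac{L_z^2}{2I} = 0,
\qquad
S(\theta_f,L_z,t_f)-S(\theta_i,L_z,t_i)
= L_z(\theta_f-\theta_i)-\frac{L_z^2}{2I}(t_f-t_i)
= \frac{I}{2}\frac{(\theta_f-\theta_i)^2}{t_f-t_i}.
```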
{ "domain": "physics.stackexchange", "id": 27776, "tags": "classical-mechanics, lagrangian-formalism, variational-principle, action" }
Extracting the five most frequent queries from a log file
Question: I'm trying to print the top 5 most queried strings from a text file. I can't use other third-party libraries to make it easier w.r.t hashmap implementations. I need to improve on these if possible: cyclomatic complexity Memory usage Execution time Page faults import java.io.File; import java.io.FileNotFoundException; import java.io.FileReader; import java.util.Comparator; import java.util.HashMap; import java.util.Map; import java.util.Scanner; import java.util.TreeMap; class Topfive { private HashMap<String, Integer> hashmap = new HashMap<String, Integer>(); public HashMap<String, Integer> getHashmap() { return hashmap; } public void putWord(String main) { Integer frequency = getHashmap().get(main); if (frequency == null) { frequency = 0; } hashmap.put(main, frequency + 1); } public TreeMap<String, Integer> process(File fFile) throws FileNotFoundException { Scanner scanner = new Scanner(new FileReader(fFile)); while (scanner.hasNextLine()) { String wordp = scanner.nextLine(); int j = wordp.indexOf("query=") + 6; int k = wordp.length() - 1; String fut = wordp.substring(j, k).trim(); this.putWord(fut); } scanner.close(); ValueComparator bvc = new ValueComparator(getHashmap()); TreeMap<String, Integer> sorted_map = new TreeMap<String, Integer>(bvc); sorted_map.putAll(getHashmap()); return sorted_map; } public static void main(String[] args) { if (args.length > 0) { File fFile = new File(args[0]); Topfive topfive = new Topfive(); try { TreeMap<String, Integer> sorted_map = topfive.process(fFile); int count = 0; for (String key : sorted_map.keySet()) { System.out.println(key); count++; if (count >= 5) { break; } } } catch (FileNotFoundException e) { e.printStackTrace(); } } } } class ValueComparator implements Comparator<Object> { private Map<String, Integer> base; public ValueComparator(Map<String, Integer> base) { this.base = base; } @Override public int compare(Object first_obj, Object second_obj) { int ret = -1; if (base.get(first_obj) < 
base.get(second_obj)) { ret = 1; } return ret; } } Sample text in text file which is given as command line argument: [Fri Jan 07 18:37:54 CET 2011] new query: [ip=60.112.154.0, query=this year] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=116.161.234.129, query=fashion] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=38.214.87.66, query=big lies] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=60.112.154.0, query=this year] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=116.161.234.129, query=fashion] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=38.214.87.66, query=big lies] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=68.175.141.150, query=seven levels] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=114.235.27.231, query=head] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=67.238.116.254, query=special] [Fri Jan 07 18:37:54 CET 2011] new query: [ip=220.153.109.208, query=present] Answer: Your names could be improved, e.g. hm isn't a good name for a member variable. You could replace the HashMap by a multiset implementation, e.g. HashMultiset from Google Guava. Then you get putWord for free. The Comparator (and its Map) should be generified. Further, it seems to be plain wrong, as it never gives 0 as result of compare, but that is the expected outcome if two values are "equal" (whatever this means in the actual context). Without looking too deep in the code, it might even be that a simple priority queue could replace all the Comparator and TreeMap stuff. The main method is too long and should be split in logical parts. Probably it would be better to avoid static methods, but to create a Topfive instance which does most of the work. If you can use Java 7, try out the ARM block feature for your file access. For your question you used the tag "clean code", but it looks like you didn't read the book by Uncle Bob. Check it out! 
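For comparison, the multiset/counting idea can be sketched in a few lines of Python rather than Java/Guava, using the same `query=` parsing rule as the original code on the question's sample log lines:

```python
from collections import Counter

# A few of the sample log lines from the question.
LOG_LINES = [
    "[Fri Jan 07 18:37:54 CET 2011] new query: [ip=60.112.154.0, query=this year]",
    "[Fri Jan 07 18:37:54 CET 2011] new query: [ip=116.161.234.129, query=fashion]",
    "[Fri Jan 07 18:37:54 CET 2011] new query: [ip=38.214.87.66, query=big lies]",
    "[Fri Jan 07 18:37:54 CET 2011] new query: [ip=60.112.154.0, query=this year]",
    "[Fri Jan 07 18:37:54 CET 2011] new query: [ip=114.235.27.231, query=head]",
]

def extract_query(line):
    """Same parsing rule as the Java code: text after 'query=' up to the final ']'."""
    start = line.index("query=") + len("query=")
    return line[start:-1].strip()

counts = Counter(extract_query(line) for line in LOG_LINES)
top_five = [word for word, _ in counts.most_common(5)]
print(top_five)   # "this year" comes first, with a count of 2
```

A Counter plays the role of the multiset, and `most_common(5)` replaces the hand-rolled Comparator plus TreeMap.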
[Edit] Based on your clarification I would write the class as follows: import java.io.BufferedReader; import java.io.File; import java.io.FileReader; import java.io.IOException; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.SortedSet; import java.util.TreeSet; public class TopWords { private final SortedSet<Freq> frequencies = new TreeSet<Freq>(); public TopWords(File file) throws IOException { List<String> wordList = readWords(file); Map<String, Integer> freqMap = getFrequencies(wordList); sortFrequencies(freqMap); } public SortedSet<Freq> getFrequencies() { return frequencies; } private List<String> readWords(File file) throws IOException { List<String> result = new ArrayList<String>(); BufferedReader br = new BufferedReader(new FileReader(file)); for (String line = br.readLine(); line != null; line = br.readLine()) { result.add(line); } return result; } private Map<String, Integer> getFrequencies(List<String> wordList) throws IOException { Map<String, Integer> freqMap = new HashMap<String, Integer>(); for (String line : wordList) { int start = line.indexOf("query=") + 6; int end = line.length() - 1; String word = line.substring(start, end).trim(); Integer frequency = freqMap.get(word); freqMap.put(word, frequency == null ? 
1 : frequency + 1); } return freqMap; } private void sortFrequencies(Map<String, Integer> freqMap) { for (Entry<String, Integer> entry : freqMap.entrySet()) { frequencies.add(new Freq(entry.getKey(), entry.getValue())); } } public List<String> getTop(int count) { List<String> result = new ArrayList<String>(count); for (Freq freq : frequencies) { if (count-- == 0) { break; } result.add(freq.word); } return result; } public static void main(String[] args) { if (args.length > 0) { try { TopWords topFive = new TopWords(new File(args[0])); List<String> topList = topFive.getTop(5); for (String word : topList) { System.out.println(word); } } catch (IOException e) { e.printStackTrace(); } } } public class Freq implements Comparable<Freq> { public final String word; public final int frequency; private Freq(String word, int frequency) { this.word = word; this.frequency = frequency; } public int compareTo(Freq that) { int result = that.frequency - this.frequency; return result != 0 ? result : that.word.compareTo(this.word); } } } Of course this might be suboptimal depending on the intended use, but I think you get the general idea. [Update] After some meditation I came to the conclusion that I thought way too complicated. 
Try the following version: import java.io.BufferedReader; import java.io.File; import java.io.FileReader; import java.io.IOException; import java.util.ArrayList; import java.util.Collections; import java.util.List; public class TopWords { private final List<Freq> frequencies = new ArrayList<Freq>(); public TopWords(File file) throws IOException { List<String> wordList = getSortedWordList(file); sortFrequencies(wordList); } public List<Freq> getFrequencies() { return frequencies; } private List<String> getSortedWordList(File file) throws IOException { List<String> result = new ArrayList<String>(); BufferedReader br = new BufferedReader(new FileReader(file)); for (String line = br.readLine(); line != null; line = br.readLine()) { int start = line.indexOf("query=") + 6; int end = line.length() - 1; String word = line.substring(start, end).trim(); result.add(word); } br.close(); Collections.sort(result); return result; } private void sortFrequencies(List<String> wordList) { String lastWord = null; int count = 0; for (String word : wordList) { if (word.equals(lastWord)) { count++; } else { if (lastWord != null) { frequencies.add(new Freq(lastWord, count)); } lastWord = word; count = 1; } } if (lastWord != null) { frequencies.add(new Freq(lastWord, count)); } Collections.sort(frequencies); } public List<Freq> getTop(int count) { return frequencies.subList(0, Math.min(frequencies.size(), count)); } public static void main(String[] args) { if (args.length > 0) { try { TopWords topFive = new TopWords(new File(args[0])); List<Freq> topList = topFive.getTop(5); for (Freq freq : topList) { System.out.println(freq.word + " " + freq.frequency); } } catch (IOException e) { e.printStackTrace(); } } } public class Freq implements Comparable<Freq> { public final String word; public final int frequency; private Freq(String word, int frequency) { this.word = word; this.frequency = frequency; } public int compareTo(Freq that) { int result = that.frequency - this.frequency; return 
result != 0 ? result : that.word.compareTo(this.word); } } }
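The "count, then take the top N" logic that both Java versions implement — and that the suggested multiset / priority queue would simplify — can be sketched compactly in Python (illustrative only, not the poster's Java; the `query=` regex is inferred from the sample log format):

```python
from collections import Counter
import re

def top_queries(lines, n=5):
    """Count the 'query=...' values in log lines and return the n most frequent."""
    words = []
    for line in lines:
        # Capture everything between 'query=' and the closing bracket
        m = re.search(r"query=([^\]]+)\]", line)
        if m:
            words.append(m.group(1).strip())
    # Counter.most_common sorts by frequency, highest first
    return [word for word, _ in Counter(words).most_common(n)]

log = [
    "[Fri Jan 07 18:37:54 CET 2011] new query: [ip=60.112.154.0, query=this year]",
    "[Fri Jan 07 18:37:54 CET 2011] new query: [ip=116.161.234.129, query=fashion]",
    "[Fri Jan 07 18:37:54 CET 2011] new query: [ip=60.112.154.0, query=this year]",
]
print(top_queries(log, 2))  # ['this year', 'fashion']
```

`Counter` plays the role of Guava's `HashMultiset` here: insertion and counting in one step, with the top-N extraction handled by `most_common`.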
{ "domain": "codereview.stackexchange", "id": 1039, "tags": "java, performance, logging, statistics" }
Why is knowing (or approximating) the hamiltonian of atoms important?
Question: I know this is a qualitative question but I think it's an important one. I'm currently going through quantum physics and quantum chemistry, and a huge part of this (especially in chem) is approximating the Hamiltonians of atoms with methods such as perturbation theory. What important things does this allow you to do that you could not otherwise accomplish? Why is knowing the energy (or other eigenvalues) important in the real physical world? Ideally, what are some applications that require such knowledge? Answer: The hamiltonian determines the time evolution of a system. In other words, given the way our system is now, the hamiltonian tells us how we expect it to behave later. Diagonalizing the hamiltonian to find the energy eigenstates and eigenvalues gives us a convenient way to represent the different possible behaviors of the system and predict how the system will behave in the future. For example, one thing we can find is the eigenstate with the lowest energy eigenvalue — the ground state. This is typically the most stable configuration of a system and the one we are likeliest to find it in most of the time. Knowing the ground state lets us predict things like how we can expect electric charge to be distributed in different parts of an atom or molecule. Knowing the energies of the eigenstates with higher energies — the excited states — tells us what changes in energy our system can undergo. Since energy is conserved, those changes are accompanied by emissions or absorptions of energy. For example, an atom can transition from a higher energy state to a lower energy state, emitting light in the process. If we know the energy difference between the states in question, we can predict the wavelengths of light that can be emitted by the atom. In general, we can write any state of the system as a sum of energy eigenstates. 
Since the rule for how the energy eigenstates change in time is very simple, this gives us a relatively simple way to predict how an arbitrary state will change in time. For complex systems with lots of particles, like a fluid or a crystalline solid, there may be excited states representing, for example, sound waves traveling in the system. Knowing these excited states and their energies lets us predict things like how the system will vibrate if we try to squeeze it or tap on it. I could go on with countless examples, but hopefully this gives you an idea of some of the practical things we can do with the hamiltonian and its spectrum. Writing down the hamiltonian is the main way we make predictions about anything in quantum physics.
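A small numerical sketch of the points above (NumPy; the two-level Hamiltonian is an assumed toy example, ħ = 1):

```python
import numpy as np

# Hypothetical two-level Hamiltonian (toy example, hbar = 1)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

# Diagonalize: eigenvalues are the allowed energies, eigenvectors the stationary states
energies, states = np.linalg.eigh(H)

# Ground state = eigenstate with the lowest eigenvalue
ground = states[:, np.argmin(energies)]

# Transition energy fixes the frequency of light the system can emit/absorb
delta_E = energies.max() - energies.min()

def evolve(psi0, t):
    """Evolve an arbitrary state: expand in eigenstates, each just picks up a phase."""
    coeffs = states.conj().T @ psi0               # amplitudes in the energy basis
    coeffs = coeffs * np.exp(-1j * energies * t)  # e^{-iEt} per eigenstate
    return states @ coeffs

psi_t = evolve(np.array([1.0, 0.0]), t=2.0)
print(delta_E, np.linalg.norm(psi_t))  # norm stays 1: unitary evolution
```

This is exactly the recipe in the answer: diagonalize once, then time evolution of any state reduces to attaching a phase to each eigenstate.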
{ "domain": "physics.stackexchange", "id": 90672, "tags": "quantum-mechanics, hamiltonian, physical-chemistry, perturbation-theory" }
Longest common prefix (Leetcode)
Question: Link here I'm currently learning c++ coming from a python background, so I'll include a solution in python and in c++ for the problem statement below, I'm including both for convenience, if you don't know c++, feel free to review python and vice versa. Write a function to find the longest common prefix string amongst an array of strings. If there is no common prefix, return an empty string "". Example 1: Input: words = ['flower', 'flow', 'flight'] Output: 'fl' Example 2: Input: strs = ['dog', 'racecar', 'car'] Output: '' longest_common_prefix.py def get_longest(words): if not words: return '' common = words[0] for word in words: while not word.startswith(common): common = common[:-1] return common if __name__ == '__main__': print(f"Longest prefix: \n{get_longest(['flower', 'flow', 'fly'])}") Leetcode stats: Runtime: 32 ms, faster than 76.56% of Python3 online submissions for Longest Common Prefix. Memory Usage: 14 MB, less than 100.00% of Python3 online submissions for Longest Common Prefix. longest_common_prefix.h #ifndef LEETCODE_LONGEST_COMMON_PREFIX_H #define LEETCODE_LONGEST_COMMON_PREFIX_H #include <string_view> #include <vector> std::string_view get_common_prefix(const std::vector<std::string_view>& words); #endif //LEETCODE_LONGEST_COMMON_PREFIX_H longest_common_prefix.cpp #include <iostream> #include <string_view> #include <vector> std::string_view get_common_prefix(const std::vector<std::string_view> &words) { if (words.empty()) return ""; std::string_view common = words[0]; for (auto word: words) { while (word.find(common, 0) != 0) { common = common.substr(0, common.size() - 1); } } return common; } int main() { std::vector<std::string_view> xxx{"flow", "flower", "fly"}; std::cout << "Longest prefix:\n" << get_common_prefix(xxx); } Leetcode stats: Runtime: 0 ms, faster than 100.00% of C++ online submissions for Longest Common Prefix. Memory Usage: 9.9 MB, less than 7.29% of C++ online submissions for Longest Common Prefix. 
Answer: I'm only going to review the C++ code here, as all I could suggest for the Python code also applies to the C++, so is included in this review. Firstly, the interface is quite limiting - the inputs need to be converted to vector of string-view objects, which is inconvenient if I have a linked-list of strings, or an input stream yielding QStrings. I recommend changing to accept a pair of iterators, or in sufficiently modern C++, a std::ranges::range object. This test is inefficient: word.find(common, 0) != 0 If we don't find common at position 0, find() will continue searching the rest of the string (the Python code is better here). We need an implementation of starts_with() (which is in C++20's std::string) - or better, we could use std::mismatch() to directly find how much of the strings are common, eliminating the loop where we repeatedly remove a single character. Here's my attempt at that, also with a simple optimisation to return early when the common string becomes empty: #include <algorithm> #include <iterator> #include <string_view> #include <vector> namespace { template<typename String> String common_prefix(const String& a, const String& b) { using std::begin; using std::end; auto end_iter = std::mismatch(begin(a), end(a), begin(b), end(b)); if (end_iter.first == end(a)) { return a; } if (end_iter.second == end(b)) { return b; } return String(begin(a), end_iter.first - begin(a)); } } template<typename Iter, typename IterEnd = Iter> std::string_view get_common_prefix(Iter first, IterEnd last) { if (first==last) { return ""; } std::string_view common = *first; for (auto it = first; it != last; ++it) { common = common_prefix(common, *it); if (common.empty()) { return common; } } return common; } template<typename Container> std::string_view get_common_prefix(const Container& words) { using std::begin; using std::end; return get_common_prefix(begin(words), end(words)); }
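For comparison, the `std::mismatch` idea — stop at the first position where the strings disagree, instead of repeatedly chopping one character — can be sketched in Python as a column-wise scan (illustrative, not the reviewed code):

```python
from itertools import takewhile

def common_prefix(strs):
    """Longest common prefix via a column-wise scan (the std::mismatch idea)."""
    if not strs:
        return ''
    # zip(*strs) yields tuples of i-th characters (stopping at the shortest string);
    # keep columns while all characters agree
    same = takewhile(lambda chars: len(set(chars)) == 1, zip(*strs))
    return ''.join(chars[0] for chars in same)

print(common_prefix(['flower', 'flow', 'fly']))   # fl
print(common_prefix(['dog', 'racecar', 'car']))   # (empty string)
```

Like the `std::mismatch` version, this never rescans the tail of a string after a mismatch and exits as soon as the common prefix is exhausted.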
{ "domain": "codereview.stackexchange", "id": 39860, "tags": "python, c++, programming-challenge" }
If gravitation is due to space-time curvature, how can a body free-fall in a straight line?
Question: According to general relativity, Gravity is due to space-time curvature. Then all paths must be curved. If so, how can there be any straight line motion? The body must follow a curved path. So, there is no possibility of straight-line motion. In a curved space-time, there is no such thing as a straight line. If so, then how can there be a straight-line free fall? Answer: user36790, please don't take this answer the wrong way. It is not meant to be disparaging. Per your user page, you are 17 years old. You have some misunderstandings. You're way ahead of your peers, many of whom will hold similar (or even stronger) misunderstandings throughout their lives. You have asked a number of related questions over the last few hours. They all result from the same misunderstanding. This misunderstanding is that you are looking at things from a Newtonian point of view, where space is Euclidean, where time is an independent parameter, and where everyone agrees one what space and time are. That's not how things work. It is very close to how things work under some special circumstances. Those special circumstances in which space and time locally appear to be distinct and Newtonian -- that's what we ordinarily experience on an everyday basis. This is why Newtonian mechanics has been so successful. That Newtonian mechanics works so well in our ordinary, everyday world does not mean that it is universally correct. In fact, we know it's not universally correct. Then all paths must be curved. If so, how can there be any straight line motion? This is your Newtonian mindset at work. Both special relativity and general relativity are markedly non-Euclidean. The sharp distinction between space and time in Newtonian mechanics becomes blurred in relativity theory; space and time become different aspects of one thing, spacetime. 
Even though geometry in relativity theory is not Euclidean, one can still ask from the perspective of the non-Euclidean geometry of general relativity, "What is straight?" One definition of "straightness" in Euclidean geometry is that a straight line between two points is the path that has the shortest length amongst all the paths that connect the two points in question. This concept of "straightness" extends nicely into the geometry of general relativity. All we need is something to measure "distance," a "metric," and this is something that general relativity provides. This generalization of a Euclidean straight line to a non-Euclidean geometry is called a "geodesic."
{ "domain": "physics.stackexchange", "id": 24244, "tags": "general-relativity, differential-geometry, curvature, geodesics, equivalence-principle" }
What's the definition of ACTL?
Question: I have been looking for the definition of ACTL, but Google has given me very little to go with. So far, I know ACTL is another form of CTL model checking, and CTL includes the following operators: Always Exist Global Finally Next AND / OR NOT So what does ACTL include and how is it different from CTL? Many thanks Answer: ACTL is the universal fragment of CTL. Thus, existential path quantification is not allowed. So a path formula is of the form $AF\psi$, $AG\psi$, or $AX\psi$ (or a conjunction or disjunction of path formulas). Moreover, you are not allowed a general NOT operator, but rather negations have to be on the atomic propositions (otherwise this fragment would be equal to CTL).
{ "domain": "cs.stackexchange", "id": 6762, "tags": "model-checking, temporal-logic" }
Toroid moments tensor decomposition
Question: I am currently working on my bachelor's thesis on the anapole / toroidal moment and it seems that I am stuck with a tensor decomposition problem. I have actually never had a course about tensors, so I am a complete newbie. I need to expand a localized current density, which is done by expressing the current via delta distribution and expanding the latter: $$\vec{j}(\vec{r},t) = \int\vec{j}(\vec{\xi},t) \delta(\vec{\xi}-\vec{r}) d^3\xi$$ $$\delta(\vec{\xi}-\vec{r}) = \sum_{l=0}^{\infty} \frac{(-1)^l}{l!} \xi_i ...\xi_k \nabla_i ... \nabla_k \delta(\vec{r}) $$ So I get some result containing the following tensor: $$B_{ij...k}^{(l)}(t) := \frac{(-1)^{l-1}}{(l-1)!} \int j_i \xi_j ... \xi_k d^3\xi$$ So far, I have understood the math. But now comes the tricky part. In the paper, it says that "we can decompose the tensors $B_{ij...k}^{(l)}$ into irreducible tensors, separating the various multipole moments and radii." and further "...of third rank, $B_{ijk}^{(3)}$ can obviously reduced according to the scheme $1 \times (2+0) = (3+1)+2+1$. It can be seen that the representation of weight $l=1$ is extracted twice from $B_{ijk}^{(3)}$." And then follows what seems like the decomposition and I am hopelessly lost. 
$$j_i\xi_j\xi_k = \frac{1}{3} \left[ j_i\xi_j\xi_k + j_k\xi_i\xi_j + j_j\xi_k\xi_i - \frac{1}{5} \left( \delta_{ij}\theta_k + \delta_{ik}\theta_j + \delta_{jk}\theta_i \right) \right] - \frac{1}{3} \left( \epsilon_{ijl} \mu_{kl} + \epsilon_{ikl}\mu_{jl}\right)$$ $$+ \frac{1}{6} \left( \delta_{ij}\lambda_k + \delta_{ik}\lambda_j - 2 \delta_{jk}\lambda_i \right) + \frac{1}{5} \left( \delta_{ij}\theta_k + \delta_{ik}\theta_j + \delta_{jk}\theta_i \right)$$ with $$\mu_{ik} = \mu_i\xi_k + \mu_k\xi_i \ , \ \mu_i=\frac{1}{2} \epsilon_{ijk}\xi_j j_k$$ $$\theta_i=2\xi_i \vec{\xi}\cdot \vec{j} + \xi^2 j_i$$ $$\lambda_i=\xi_i\vec{\xi}\cdot \vec{j} - \xi^2 j_i$$ This decomposition obviously contains many quantities that later on appear also in the multipole expansion, e.g. the magnetic quadrupole moment $\mu_{ik}$. So on the physics side of things, this makes sense to me. But not on the mathematical side. On this board I found some questions regarding tensor decomposition and in the answers I learned something about symmetric and antisymmetric tensors and that every tensor can be decomposed in several irreducible ones, which better represent physical properties of the system and symmetries. But I still, some questions are still there... 1.) What do the numbers $\frac{1}{3}$, $\frac{1}{5}$, etc. mean? Is this some kind of normalization? 2.) How exactly does one decompose the tensor? How can I reconstruct what exactly has been done, which steps one has to follow to decompose it like this? Answer: This appears to be related to the decomposition of a totally symmetric tensor into traceless parts, which is a fairly involved process. 
The general equation is $$\mathcal{C} Q_{a_1 a_2\cdots a_s} = \sum_{k=0}^{[\frac{s}{2}]} (-1)^k \frac{\binom{s}{k} \binom{s}{2k}}{ \binom{2s}{2k}} \delta_{(a_1 a_2} \cdots \delta_{a_{2k-1} a_{2k}} Q_{a_{2k+1}\cdots a_s)}{}^{c_1} {}_{c_1} {}^{c_2}{}_{c_2} {}^{\cdots c_k}{}_{\cdots c_k},$$ where $[\cdot]$ denotes the integer part, Einstein summation is implied and $Q_{(a_1 a_2 \cdots a_s)} \equiv \frac{1}{s!} \sum_{\sigma\in S_s} Q_{a_{\sigma(1)} a_{\sigma(2)} a_{\sigma(3)} \cdots a_{\sigma(s)}}$. For the quadrupole moment it is $\mathcal{C}Q_{ab} = Q_{ab} - \frac{1}{3} Q^c{}_c \delta_{ab}$, for the octupole $\mathcal{C}Q_{abc} = Q_{abc} - \frac{1}{5} (Q^d{}_{dc}\delta_{ab} + Q^d{}_{da} \delta_{bc}+ Q^d{}_{db}\delta_{ac})$; these yield the factors in your question. An indication (perhaps proof, although I'm not certain about this at the moment) that the traceless part of a totally symmetric tensor is an irreducible representation is easy to see if one uses the hook formula in dimension 3. A totally symmetric tensor of rank $s$ has $\frac{1}{2}(s+1)(s+2)$ degrees of freedom and the traceless one has the latter minus the number of ways to obtain the traces, $\binom{s}{2}$, which yields $2s+1$. This is the dimension of the irreducible representation of the algebra of SO(3) with spin $s$. A full proof of this statement is in Maggiore Gravitational waves - theory and experiments. Reference: F.A.E. Pirani Lectures on General Relativity 1965.
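The $s=2$ and $s=3$ instances of the detracing formula are easy to check numerically — a NumPy sketch with random symmetric tensors (assumed test data):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = np.eye(3)

# Rank 2: CQ_ab = Q_ab - (1/3) delta_ab Q^c_c  (Q symmetric)
Q2 = rng.normal(size=(3, 3))
Q2 = Q2 + Q2.T
C2 = Q2 - delta * np.trace(Q2) / 3
assert abs(np.trace(C2)) < 1e-12           # traceless

# Rank 3: CQ_abc = Q_abc - (1/5)(delta_ab Q^d_dc + delta_bc Q^d_da + delta_ac Q^d_db)
Q3 = rng.normal(size=(3, 3, 3))
# Symmetrize over all 3! index permutations
perms = [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]
Q3 = sum(np.transpose(Q3, p) for p in perms) / 6
tr = np.einsum('dda->a', Q3)               # the trace Q^d_{da}
C3 = Q3 - (np.einsum('ab,c->abc', delta, tr)
           + np.einsum('bc,a->abc', delta, tr)
           + np.einsum('ac,b->abc', delta, tr)) / 5
assert np.allclose(np.einsum('dda->a', C3), 0)  # every trace vanishes
print("traceless parts verified")
```

Contracting the rank-3 correction over any index pair gives $(3+1+1)/5 = 1$ times the trace, which is exactly what cancels — the same bookkeeping that produces the $1/5$ factors in the decomposition above.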
{ "domain": "physics.stackexchange", "id": 13703, "tags": "electromagnetism, particle-physics, tensor-calculus, multipole-expansion" }
How is the depth of a circuit creating "Constant size vector states" $O(\log b)$
Question: In Prakash's thesis - (link to PDF), section 2.2.2 Constant size vector states: We show that the vector state $|x\rangle$ for $x\in R^b$ can be created in time $\widetilde{O}(\log(b))$ using a specialized quantum circuit of size $O(b)$ and pre computed amplitudes. The method is useful for creating constant sized superpositions and is illustrated for a 4 dimensional state $|\phi\rangle$ in figure 2.4. Question Wouldn't a sequence of conditional rotations on all nodes of a binary tree of depth $\log(b)$ result in a circuit with $O(b)$ number of conditional rotations? For example in this circuit we need a rotation on the first qubit, then 2 conditional rotations from the first qubit on the second qubit. And if we had 3 qubits we'd have a 4 conditional rotation from the first and second qubit on the 3rd qubit. I am having trouble understanding how an exponential speedup over a typical QRAM was achieved here. Edit I should clarify that I also think that the number of gates would be the same as the depth of the circuit. Here is an example of a circuit preparing a state of 4 qubits from https://arxiv.org/abs/2010.00831 Answer: As mentioned in this answer, it is possible to perform each succession of rotation in parallel. Let us suppose that we're at depth $k$ in our tree. If we were to implement the successive rotations as depicted in the circuit you've linked, the qubit number $k$ would undergo $2^k$ controlled rotations. More precisely, each of these rotations would be controlled off of a different number. For notation's sake, let us say that we want to apply a $R_Y$ gate with angle $\theta_x$ only if the $k-1$ previous qubits are in state $|x\rangle$. That is, we want to implement this gate: $$|x\rangle|0\rangle\to|x\rangle\left(\cos\left(2\pi\theta_x\right)|0\rangle+\sin\left(2\pi\theta_x\right)|1\rangle\right)=|x\rangle R_Y\left(4\pi\theta_x\right)|0\rangle$$ (Note: using $2\pi\theta_x$ ensures that $\theta_x\in[0\,;\,1)$). 
Note that what you want to do, if I understand correctly, is to load a state from a QRAM structure. In such a case, the coefficients of the vector you load could be negative, in which case some additional gates should be added, as described in Algorithm 1 of this paper. Here, I'll focus on a state with all amplitudes being positive as in your example, since it shows the gist of why we can implement such a gate efficiently. Suppose that we have access to a function $f$ such that: $$f(x)=\theta_x$$ with $\theta_x$ being represented on $p$ qubits, $p$ translating the precision of the angle. Computing $f$ can be done efficiently thanks to the tree structure. As such, we can implement $f$ as a quantum oracle: $$U_f|x\rangle|y\rangle=|x\rangle\left|y\oplus\theta_x\right\rangle$$ We assume that this oracle can be implemented in time $T_f(k)$. Now, let's look at what happens when we apply $P$ gates on $|x\rangle\left|\theta_x\right\rangle$. Let us write $\theta_x$ in binary as: $$\theta_x=\sum_{i=1}^{p}b_{x, i}2^{-i}$$ Suppose now that we apply a $P(2\pi)$ gate on the first qubit of the second register, a $P\left(\pi\right)$ gate on the second qubit of the second register, etc... up to a $P\left(\frac{\pi}{2^{p-2}}\right)$ gate on the last qubit of the second register. As a reminder, the $P(\theta)$ gate leaves $|0\rangle$ untouched and applies a $\theta$ phase on $|1\rangle$: $$P(\theta)=\begin{pmatrix}1&0\\0&\mathrm{e}^{\mathrm{i}\theta}\end{pmatrix}$$ Thus, the $P$ gate that is applied on qubit $i$ doesn't do anything if $b_{x, i}=0$. If $b_{x, i}=1$, then it adds a $2\pi\cdot2^{1-i}$ phase to the state.
Thus, all in all, the phase of this state is now: $$2\pi\sum_{i=1}^{p}b_{x, i}2^{1-i}=4\pi\theta_x$$ Thus, using a single layer of gates we've managed to implement the following transformation: $$\left|\theta_x\right\rangle\to\mathrm{e}^{4\mathrm{i}\pi\theta_x}\left|\theta_x\right\rangle$$ Suppose now that we replace these $P$ gates by controlled-$P$ gates, controlled by the single target qubit. Then this would implement the following transformation: $$\begin{align}\left|\theta_x\right\rangle|0\rangle\to{}&\left|\theta_x\right\rangle|0\rangle\\\left|\theta_x\right\rangle|1\rangle\to{}&\mathrm{e}^{4\mathrm{i}\pi\theta_x}\left|\theta_x\right\rangle|1\rangle\end{align}$$ That is, by replacing the $P$ gates by controlled ones, we've actually implemented a $P\left(4\pi\theta_x\right)$ gate on the last qubit. Let us denote by $V$ the following gate: $$V=\frac{1}{\sqrt{2}}\begin{pmatrix}1&-\mathrm{i}\\\mathrm{i}&-1\end{pmatrix}$$ Note that we have: $$\begin{align} VP(\theta)V^\dagger &= \frac12\begin{pmatrix}1&-\mathrm{i}\\\mathrm{i}&-1\end{pmatrix}\begin{pmatrix}1&0\\0&\mathrm{e}^{\mathrm{i}\theta}\end{pmatrix}\begin{pmatrix}1&-\mathrm{i}\\\mathrm{i}&-1\end{pmatrix} \\&=\mathrm{e}^{\mathrm{i}\frac{\theta}{2}}\begin{pmatrix}\cos\left(\frac{\theta}{2}\right)&-\sin\left(\frac{\theta}{2}\right)\\\sin\left(\frac{\theta}{2}\right)&\cos\left(\frac{\theta}{2}\right)\end{pmatrix}\\&=\mathrm{e}^{\mathrm{i}\frac{\theta}{2}}R_Y(\theta)\end{align}$$ Thus, if we apply a $V$ gate on the single target qubit before and after the $P$ gate, this transforms it into an $R_Y$ rotation (up to an inconvenient phase).
Putting everything together, we start with the following state: $$|x\rangle|0\rangle|0\rangle$$ We apply $U_f$: $$|x\rangle\left|\theta_x\right\rangle|0\rangle$$ We apply $V$ on the target qubit: $$|x\rangle\left|\theta_x\right\rangle V|0\rangle$$ We apply the $P\left(4\pi\theta_x\right)$ gate on the last qubit using the method outlined above: $$|x\rangle\left|\theta_x\right\rangle P\left(4\pi\theta_x\right)V|0\rangle$$ We then apply the $V$ gate once again: $$|x\rangle\left|\theta_x\right\rangle VP\left(4\pi\theta_x\right)V|0\rangle=|x\rangle\left|\theta_x\right\rangle \mathrm{e}^{2\mathrm{i}\pi\theta_x}R_Y\left(4\pi\theta_x\right)|0\rangle$$ We now remove the unwanted phase by applying $P(-\pi),P\left(-\frac{\pi}{2}\right),\cdots$ on the second register: $$|x\rangle\left|\theta_x\right\rangle R_Y\left(4\pi\theta_x\right)|0\rangle$$ And finally, we uncompute the second register by applying $U_f$ once again: $$|x\rangle\left|0\right\rangle R_Y\left(4\pi\theta_x\right)|0\rangle$$ We can now discard the second register as it is not entangled with the others. All in all, the depth of this circuit is $2T_f(k)+2+p+1=2T_f(k)+p+3$, where $2T_f(k)$ comes from computing and uncomputing the second register, $p$ comes from the controlled-$P$ gates on the target qubit, $2$ comes from the two $V$ gates on the target qubit and $1$ comes from the layer of $P$ gates used to uncompute the unwanted relative phase (since they're applied on the same layer). Note that the only term that depends on $k$ is $2T_f(k)$. $p$ has to be chosen according to the desired precision. Thus, all the depth essentially comes from the $2T_f(k)$ term. I'm less confident on that part, but I think that thanks to the tree structure, $f$ can be computed in time $O(\log b)$, which in turn ensures that $U_f$ can be implemented in time $\DeclareMathOperator{\polylog}{polylog}O(\polylog b)$, hence the final complexity.
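Both identities the construction rests on — the phase accumulated by the layer of $P$ gates, and $VP(\theta)V^\dagger=\mathrm{e}^{\mathrm{i}\theta/2}R_Y(\theta)$ — are easy to verify numerically (NumPy sketch; the bit pattern is an arbitrary example):

```python
import numpy as np

theta = 1.234
P = lambda a: np.array([[1, 0], [0, np.exp(1j * a)]])
V = np.array([[1, -1j], [1j, -1]]) / np.sqrt(2)  # V is Hermitian, so V = V^dagger
RY = lambda a: np.array([[np.cos(a / 2), -np.sin(a / 2)],
                         [np.sin(a / 2),  np.cos(a / 2)]])

# Identity 1: V P(theta) V^dagger = e^{i theta/2} R_Y(theta)
lhs = V @ P(theta) @ V.conj().T
rhs = np.exp(1j * theta / 2) * RY(theta)
assert np.allclose(lhs, rhs)

# Identity 2: applying P(2*pi*2^{1-i}) on each set bit of theta_x = sum b_i 2^{-i}
# accumulates the total phase e^{4i*pi*theta_x}
bits = [1, 0, 1, 1, 0, 0, 1, 0]                       # arbitrary example bitstring
theta_x = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))
phase = np.prod([np.exp(1j * 2 * np.pi * 2.0 ** (-i) * b)
                 for i, b in enumerate(bits)])
assert np.isclose(phase, np.exp(4j * np.pi * theta_x))
print("identities check out")
```

This confirms that the controlled-phase layer really implements $P(4\pi\theta_x)$ on the target and that the $V$ conjugation converts it into the desired $R_Y(4\pi\theta_x)$.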
{ "domain": "quantumcomputing.stackexchange", "id": 4866, "tags": "quantum-state, state-preparation" }
Read JSON files, basic calculations and write over another JSON file
Question: Class SectorController calculates weight coefficients for sector performances in equity exchange markets using minute data from an API (for instance, if a group of equities are up in the past 5 minutes, then coefficient is positive, if not, is negative, ranging from -1 to +1). Most of the calculations are based on other scripts, which is not necessary for this review. Would you be so kind as to review this class and help me to make it faster, if possible? Script class SectorController { /** * * @var a string of iextrading base URL */ const BASE_URL = "https://api.iextrading.com/1.0/"; /** * * @var a string of target path and query */ const TARGET_QUERY = "stock/market/batch?symbols="; /** * * @var a string for backend path for every sector */ const EACH_SECTOR_DIR_PREFIX = "/../../dir/dir/dir/dir-"; /** * * @var a string for backend path for index sector */ const INDEX_SECTOR_DIR_PREFIX = "/../../dir/dir/dir/dir/"; /** * * @var a string for live data path */ const LIVE_DATA_DIR = "/../../../public_html/dir/dir/"; function __construct() { echo "YAAAY! " . __METHOD__ . " success \n"; return true; } public static function getSectors(){ $baseUrl=self::BASE_URL.self::TARGET_QUERY; $currentTime = date("Y-m-d-H-i-s"); $permission = 0755; $indexData = array( "Overall" => array("sector_weight" => 1, "sector_coefficient" => 5, "sector_value" => 0)); $sectorInfos=SectorController::iexSectorParams(); foreach ($sectorInfos as $a => $sectorInfo) { $sectorUrl = $baseUrl . implode(",", array_keys($sectorInfo["selected_tickers"])) . "&types=quote&range=1m"; $rawSectorJson = file_get_contents($sectorUrl); $rawSectorArray = json_decode($rawSectorJson, true); // Write the raw file $rawSectorDir = __DIR__ . self::EACH_SECTOR_DIR_PREFIX . $sectorInfo["directory"]; if (!is_dir($rawSectorDir)) { mkdir($rawSectorDir, $permission, true); } $rawSectorFile = $rawSectorDir . "/" . $currentTime . 
".json"; $fp = fopen($rawSectorFile, "a+"); fwrite($fp, $rawSectorJson); fclose($fp); // Calculate the real-time index $indexValue = 0; foreach ($rawSectorArray as $ticker => $tickerStats) { if (isset($sectorInfo["selected_tickers"][$ticker], $tickerStats["quote"], $tickerStats["quote"]["extendedChangePercent"], $tickerStats["quote"]["changePercent"], $tickerStats["quote"]["ytdChange"])) { $changeAmount = ($tickerStats["quote"]["extendedChangePercent"] + $tickerStats["quote"]["changePercent"] + $tickerStats["quote"]["ytdChange"])/200; $indexValue += $sectorInfo["sector_weight"] * $sectorInfo["selected_tickers"][$ticker] * $changeAmount; } } $indexData[$sectorInfo["sector"]] = array("sector_weight" => $sectorInfo["sector_weight"], "sector_coefficient" => 5, "sector_value" => $indexValue); $indexData["Overall"]["sector_value"] += $indexData[$sectorInfo["sector"]]["sector_value"]; } // Calculate the index factor for better visibility between -1 and +1 $frontIndexData = array(); foreach ($indexData as $sectorName => $sectorIndexData) { $indexSign = $sectorIndexData["sector_value"]; if ($indexSign < 0) { $indexSign = - $indexSign; } $indexFactor = 1; for ($i=0; $i <= 10; $i++) { $indexFactor = pow(10, $i); if (($indexFactor * $indexSign) > 1) { $indexFactor = pow(10, $i - 1); break; } } $frontIndexData[$sectorName] = $sectorIndexData["sector_weight"] * $sectorIndexData["sector_coefficient"] * $sectorIndexData["sector_value"] * $indexFactor; } // Write the index file $indexSectorDir = __DIR__ . self::INDEX_SECTOR_DIR_PREFIX; if (!is_dir($indexSectorDir)) {mkdir($indexSectorDir, $permission, true);} $indexSectorFile = $indexSectorDir . $currentTime . ".json"; $indexSectorJson = json_encode($frontIndexData, JSON_FORCE_OBJECT); $fp = fopen($indexSectorFile, "a+"); fwrite($fp, $indexSectorJson); fclose($fp); $sectorDir = __DIR__ . 
self::LIVE_DATA_DIR; if (!is_dir($sectorDir)) {mkdir($sectorDir, $permission, true);} // if data directory did not exist // if text file did not exist if (!file_exists($sectorDir . "text.txt")){ $handle=fopen($sectorDir . "text.txt", "wb"); fwrite($handle, "d"); fclose($handle); } $sectorCoefFile = $sectorDir . "text.txt"; copy($indexSectorFile, $sectorCoefFile); echo "YAAAY! " . __METHOD__ . " updated sector coefficients successfully !\n"; return $frontIndexData; } public static function iexSectorParams(){ $sectorInfos = array( array( "sector" => "IT", "directory" => "information-technology", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "AAPL" => 0.05, "AMZN" => 0.05, "GOOGL" => 0.05, "IBM" => 0.05, "MSFT" => 0.05, ) ), array( "sector" => "Telecommunication", "directory" => "telecommunication-services", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "VZ" => 0.05, "CSCO" => 0.05, "CMCSA" => 0.05, "T" => 0.05, "CTL" => 0.05, ) ), array( "sector" => "Finance", "directory" => "financial-services", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "JPM" => 0.05, "GS" => 0.05, "V" => 0.05, "BAC" => 0.05, "AXP" => 0.05, ) ), array( "sector" => "Energy", "directory" => "energy", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "CVX" => 0.05, "XOM" => 0.05, "APA" => 0.05, "COP" => 0.05, ) ), array( "sector" => "Industrials", "directory" => "industrials", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "CAT" => 0.05, "FLR" => 0.05, "GE" => 0.05, "JEC" => 0.05, ) ), array( "sector" => "Materials and Chemicals", "directory" => "materials-and-chemicals", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "DWDP" => 0.05, "APD" => 0.05, "EMN" => 0.05, "ECL" => 0.05, "FMC" => 0.05, ) ), array( "sector" => "Utilities", "directory" => "utilities", "sector_weight" => 0.05, 
"sector_coefficient" => 5, "selected_tickers" => array( "PPL" => 0.05, "PCG" => 0.05, "SO" => 0.05, "WEC" => 0.05, ) ), array( "sector" => "Consumer Discretionary", "directory" => "consumer-discretionary", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "DIS" => 0.05, "HD" => 0.05, "BBY" => 0.05, "CBS" => 0.05, "CMG" => 0.05, ) ), array( "sector" => "Consumer Staples", "directory" => "consumer-staples", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "PEP" => 0.05, "PM" => 0.05, "PG" => 0.05, "MNST" => 0.05, "TSN" => 0.05, ) ), array( "sector" => "Defense", "directory" => "defense-and-aerospace", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "BA" => 0.05, "LMT" => 0.05, "UTX" => 0.05, "NOC" => 0.05, "HON" => 0.05, ) ), array( "sector" => "Health", "directory" => "health-care-and-pharmaceuticals", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "UNH" => 0.05, "JNJ" => 0.05, "PFE" => 0.05, "UHS" => 0.05, "AET" => 0.05, "RMD" => 0.05, ) ), array( "sector" => "Real Estate", "directory" => "real-estate", "sector_weight" => 0.05, "sector_coefficient" => 5, "selected_tickers" => array( "CCI" => 0.05, "AMT" => 0.05, "AVB" => 0.05, "HCP" => 0.05, "RCL" => 0.05, "HST" => 0.05, ) ) ); return $sectorInfos; } function __destruct() { echo "YAAAY! " . __METHOD__ . " success! \n"; return true; } } Output (text.txt) {"Overall":0.05,"IT":0.05,"Telecommunication":0.05,"Finance":0.05,"Energy":0.05,"Industrials":0.05,"Materials and Chemicals":0.05,"Utilities":0.05,"Consumer Discretionary":0.05,"Consumer Staples":0.05,"Defense":0.05,"Health":0.05,"Real Estate":0.05} Answer: I don't fully understand what your script is doing, but I can offer a few refinements. Regarding $sectorUrl = $baseUrl . implode(",", array_keys($sectorInfo["selected_tickers"])) . 
"&types=quote&range=1m";, because you are building a url, I think it would be better practices to implode with %2C to make the RFC folks happy. It doesn't look like a good idea to append json strings after json strings. For this reason, you should not be fwriting with a+. If you mean to consolidate json data on a single json file, then the pre-written data needs to be extracted, decoded, merged with the next data, then re-encoded before updating the file. Otherwise, you will generate invalid json in your .json file. Rather than manually converting negative values to positive with if ($indexSign < 0) {$indexSign = - $indexSign;}, you should be using abs() to force values to be positive. $indexSign = abs($sectorIndexData["sector_value"]); The $indexFactor can be determined without iterated mathematics, you can treat it as a string and just count the zeros immediately to the right of the decimal place. $indexFactor = 10 ** strlen(preg_match('~\.\K0+~', $float, $zeros) ? $zeros[0] : 0) The \K in the pattern means "restart the fullstring match" on perhaps it would be clearer for this situation to say "forget the previously matched characters (the dot)". pow() can be written as ** from php5.6+ Beyond those few pieces, I don't see much to comment on. As I have stated in recent posts on your questions, always endeavor to minimize total fwrite() calls as much as possible.
{ "domain": "codereview.stackexchange", "id": 33913, "tags": "performance, beginner, php, json, api" }
Job applicant email system
Question: I just wrote this and don't like how bulky it is, also given the fact I will have to add at least another if statement. I was going to switch it to a case statement but wanted to check if there were even better ways to reduce the clutter. string emailLetterPath = Server.MapPath("~/emails/rejected.htm"); if (jobApplicantAndJob.Jobs.Title == "Store Sales Manager" || jobApplicantAndJob.Jobs.Title == "Sales Representative") { emailLetterPath = Server.MapPath("~/emails/TC1RejectedP3.htm"); } if (jobApplicantAndJob.Jobs.Title == "Outside Sales Professional") { emailLetterPath = Server.MapPath("~/emails/TC2RejectedP3.htm"); } The default option should be the rejected.htm. Answer: I was going to switch it to a case statement but wanted to check if there were even better ways to reduce the clutter. You could load the mappings from job titles to emails into a (static) dictionary, and then do a dictionary look-up (using the default value if it's not in the dictionary). The Dictionary<string,string> can be initialized with a collection initializer. Using a (data-driven) dictionary instead of a (hard-coded) switch statement has the additional benefit that users can configure/maintain the behaviour by editing a configuration file (e.g. a tab-delimited 'CSV file' of title/email pairs) or database, instead of editing code.
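The data-driven lookup the answer suggests can be sketched as follows — shown here in Java rather than C# for illustration (the map contents and the `getOrDefault` fallback mirror the poster's if-chain; the class and method names are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class EmailLookup {
    // Hypothetical mapping from job title to email template; in C# this would
    // be a static Dictionary<string, string> built with a collection initializer.
    private static final Map<String, String> TITLE_TO_EMAIL = new HashMap<>();
    static {
        TITLE_TO_EMAIL.put("Store Sales Manager", "TC1RejectedP3.htm");
        TITLE_TO_EMAIL.put("Sales Representative", "TC1RejectedP3.htm");
        TITLE_TO_EMAIL.put("Outside Sales Professional", "TC2RejectedP3.htm");
    }

    // Falls back to the default template when the title is not mapped.
    public static String emailFor(String title) {
        return TITLE_TO_EMAIL.getOrDefault(title, "rejected.htm");
    }

    public static void main(String[] args) {
        System.out.println(emailFor("Sales Representative"));
        System.out.println(emailFor("Cashier"));
    }
}
```

Adding a new job title then becomes one new map entry (or one new row in a configuration file) instead of another if statement.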
{ "domain": "codereview.stackexchange", "id": 5997, "tags": "c#, asp.net" }
Scaling down a raster image
Question: I have a raster image (A) of size 20x20 and I want to scale it down to a size of 10x10 (B). Naturally in the resulting picture B one pixel will represent 4 pixels shaped 2x2 from A. Is it possible to give a canonical answer to how the pixel values of B have to be calculated if no neighbouring pixels (surrounding the 2x2 subset) in A should be taken into account? Mean, maximum or something else? Answer: Bilinear is the most widely used method. The nearest neighbor down-sampling algorithm is the fastest but least accurate. Note that when you are trying to down-scale an image by half, the bilinear sampling algorithm becomes the average (mean as you mentioned): x1[i/2][j/2] = (x[i][j] + x[i+1][j] + x[i][j+1] + x[i+1][j+1]) / 4;
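The half-size averaging in the answer can be sketched in plain Python (no image library assumed; the image is a 2D list of pixel values) — each output pixel is the mean of one non-overlapping 2x2 block:

```python
def downscale_half(img):
    """Average each non-overlapping 2x2 block of a 2D list into one pixel."""
    h, w = len(img), len(img[0])
    return [
        [(img[i][j] + img[i + 1][j] + img[i][j + 1] + img[i + 1][j + 1]) / 4
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]

# 4x4 image -> 2x2 image
a = [
    [0, 0, 4, 4],
    [0, 0, 4, 4],
    [8, 8, 2, 2],
    [8, 8, 2, 2],
]
print(downscale_half(a))  # [[0.0, 4.0], [8.0, 2.0]]
```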
{ "domain": "dsp.stackexchange", "id": 3552, "tags": "image-processing" }
Why is sodium bisulfite needed in syn-dihydroxylation?
Question: On the reaction of syn-dihydroxylation with stoichiometric quantities of OsO4 (not a catalytic amount) the second step is $\ce{NaHSO3/H2O}$ or $\ce{Na2SO3/H2O}$. According to my understanding, $\ce{H2O}$ is the only thing needed to hydrolyze the cyclic osmate ester resulting from the first step of adding $\ce{OsO4}$. So, what is the use of $\ce{NaHSO3}$ or $\ce{Na2SO3}$ here? Answer: You need to reduce remaining oxidizer (OsO4) and maintain pH of the solution. NaHSO3 acts both as the reducing agent and a buffer.
{ "domain": "chemistry.stackexchange", "id": 6007, "tags": "organic-chemistry, redox" }
Are deterministic Turing machines as powerful as probabilistic Turing machines?
Question: I am wondering if it is known whether probabilistic Turing machines are more powerful than deterministic ones, in the sense that they can solve problems faster on average, or can solve problems that deterministic Turing machines cannot? That is, if a Turing machine can "flip coins", does this allow it to do things it could not have done before? I ask because in fields like cryptography, you always assume the adversary is a probabilistic Turing machine (that is they can draw from a random source), so I am guessing that this assumption means it is known that probabilistic Turing machines are stronger? Or is this just conjecture, but we use probabilistic Turing machines anyway because deterministic Turing machines are just a special case? Answer: This is a famous open problem in computer science theory. In particular, it comes down to whether BPP = P. It is widely conjectured and suspected that BPP = P, or in other words that randomness does not significantly increase the power of computer algorithms. However, we have no proof of this conjecture. So, it remains possible that randomness allows some problems to be solved more efficiently (our best guess is, this is probably not the case, but we cannot completely rule it out). In cryptography, there are several reasons we tend to use probabilistic algorithms rather than deterministic algorithms (e.g., probabilistic Turing machines rather than deterministic Turing machines): Much of cryptography needs randomness to do anything. For instance, many techniques need to generate a random key, or generate a random value (nonce, IV, seed, salt, key, etc.). So, we need a probabilistic algorithm, so it has the ability to generate a random value, just to implement the cryptographic algorithm in the first place. Empirically, in practice, it is easy to generate random numbers on real computers. So we might as well pick a theoretical formulation that matches the capabilities that real computers have. 
In cryptography, we want to give the adversary every reasonable advantage. If we can prove security against all probabilistic polynomial-time algorithms, that is a more meaningful result than proving security against all deterministic polynomial-time algorithms (the former implies security against both, whereas the latter does not necessarily guarantee security against probabilistic algorithms). So, our definitions typically allow the adversary/attacker to use a probabilistic algorithm or probabilistic Turing machine, because this gives us a stronger, more meaningful notion of security.
{ "domain": "cs.stackexchange", "id": 20186, "tags": "turing-machines" }
Is it possible to determine the motion of a vehicle moving at a constant speed of $0.8c$?
Question: Suppose you are in a rocket with no windows, traveling in deep space far from other objects. Without looking outside the rocket or making any contact with the outside world, explain how you could determine whether the rocket is (a) moving forward at a constant $80$% of the speed of light and (b) accelerating in the forward direction. This is one of the discussion questions from the book ‘University Physics with Modern Physics’ (Q $38$, Page $124$, Ch $4$: Newton’s Laws of motion. This is a discussion question anyway, not a numerical problem so I hope it does not get discarded here) (b) If the rocket is accelerating forward, I can do a few things to sense the acceleration. For example, if I put a tennis ball on the floor, it should roll backwards without any apparent force acting on it. Similarly, if I am sitting in a chair, I should be pressed backwards, and I should be able to feel the acceleration, and hence the contact force acting on me. Likewise, I can’t play catch with a tennis ball with my friend. He will perceive the ball coming at him faster, and I would perceive it moving slower (assuming he is standing to my right, in the direction of rocket’s acceleration). From these observations, I should be able to say that the rocket is accelerating, in the direction opposite to the direction in which the tennis ball on the floor starts to roll. (a) But what if the rocket is moving at a constant velocity at $0.8c$? Since there are no windows, is it possible to tell if the rocket is at rest or moving at $0.8c$? I googled it, and some of the answers explained that since “Constant velocity means no acceleration, and therefore you are in freefall. You have mass, but not weight, so a bathroom scale would read $0$.” I understand that during freefall, a bathroom scale would always read $0$. But doesn’t freefall mean gravity is the only force acting on us? Since the question says that the rocket is in deep space far from other objects, how can I be in freefall?
The question explicitly says “the rocket is moving forward at $0.8c$”, which means it is moving straight and not orbiting any planet or a star. How am I in free fall then? If I’m not in free fall, is it possible to determine whether the rocket is moving forward at $0.8c$, as the question asks? Answer: The question you quote appears to suggest that you are so far from other objects that their gravitational effects can be overlooked. If that is the case then the answers are: a. You cannot determine whether the ship has any specific speed relative to anything else. Indeed, the implicit assumption that the ship can have an absolute speed of 0.8c without any specified frame of reference is a faulty one. b. If the ship were accelerating in the forward direction then anything not fixed within it would drift to the back.
{ "domain": "physics.stackexchange", "id": 63215, "tags": "homework-and-exercises, newtonian-mechanics, forces, acceleration, inertial-frames" }
Why is the colour of sunlight yellow?
Question: I was going through the preliminary papers of other schools and found a question that I did not know. It was "Why does sunlight appear yellow?". Can anyone answer it? Answer: The color of sunlight as seen on Earth's surface during the day is yellow due to Rayleigh scattering. Our Sun is actually white (a mixture of all wavelengths of the visible spectrum) if we see it from outer space or high-altitude airplanes. Our atmosphere preferentially scatters the shorter wavelengths out of sunlight as the white light travels through it. During the day, it scatters the violet and blue colors, leaving yellowish sunlight (the reason why the sky is blue and sunlight is yellow). During morning and evening, the sun appears reddish because the light rays need to travel a longer distance through the atmosphere, which causes the yellow light to be scattered too.
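The answer leaves the scaling implicit: Rayleigh scattering intensity goes as $\lambda^{-4}$ (a standard result, not stated above), so a quick ratio shows how much more strongly blue is scattered out than red — the wavelengths here are illustrative round numbers:

```python
blue_nm = 450.0  # approximate blue wavelength, nm
red_nm = 700.0   # approximate red wavelength, nm

# Rayleigh scattering intensity ~ 1 / wavelength^4
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered roughly {ratio:.1f}x more strongly than red")
```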
{ "domain": "physics.stackexchange", "id": 42931, "tags": "visible-light, temperature, scattering, thermal-radiation, sun" }
Loading modifications at runtime
Question: I'm working on a game engine among other things in Java, and would like to know how to optimise my code, as it is ridiculously ugly and bloated. I'm terrible with operating the java.util.File and need my code to be reviewed by others. In essence, I have a folder hierarchy setup in the following manner: Custom folder which the user decides on. Variable is folder. More folders. Each one corresponding to a mod. mod.cfg file. I do checks to see if it contains information too. Java class referred from the mod.cfg file, needs to exist AND be of a specific object, namely Modification. If all of this works, then the mod is loaded into memory. public static void loadModifications(String folder) { // TODO: Optimise modification loading. File enclosingFolder = new File(folder); if (enclosingFolder.exists()) { for (File modFolder : enclosingFolder.listFiles()) { if (modFolder.isDirectory()) { File modConfig = new File(modFolder.getPath() + "/mod.cfg"); if (modConfig.exists()) { String modName = (String) ConfigHandler.getPropertyValue(modConfig.getPath(), "mod_name"); String modMain = (String) ConfigHandler.getPropertyValue(modConfig.getPath(), "mod_main"); try { if (modMain != null) { try { String className = modFolder.getPath().replaceFirst(enclosingFolder.getPath(), ""); className = className.substring(1, className.length()); className += "." 
+ modMain; className = className.replaceAll("/", "."); Object modClass = Class.forName(className).newInstance(); if (modName != null) { if (modClass instanceof Modification) { Logger.log(LogLevel.MESSAGE, "Loading modification: '" + modClass.getClass().getName() + "' as '" + modName + "'"); ((Modification) modClass).setConfigFile(modConfig.getPath()); ((Modification) modClass).init(); Logger.log(LogLevel.MESSAGE, ((Modification) modClass).getInfo()); mods.put(modName, (Modification) modClass); } else { Logger.log(LogLevel.WARNING, "Modification: '" + modName + "' does not derive from abstract class: 'Modification', ignoring."); } } else { Logger.log(LogLevel.WARNING, "'mod_name' undefined for: '" + modFolder.getPath() + "'"); } } catch (InstantiationException | IllegalAccessException e) { Logger.log(LogLevel.WARNING, "Error loading main class for modification '" + modName + "', ignoring."); } } else { Logger.log(LogLevel.WARNING, "'mod_main' undefined for: '" + modFolder.getPath() + "', ignoring."); } } catch (ClassNotFoundException e) { e.printStackTrace(); } } else { Logger.log(LogLevel.WARNING, "Configuration file does not exist for modification: '" + modFolder.getPath() + "', ignoring."); } } } } else { Logger.log(LogLevel.ERROR, "Failed to locate modification folder: '" + folder + "'"); } } Footnote: Logger and LogLevel refers to a customised logging method I implemented. ConfigHandler refers to a property loader from files. Albeit named mod.cfg, the file has the structure of a Java Properties file. Answer: 1) Few tweaks on condition checking to improve performance a bit. if (modMain != null) { try { if (modName != null) { String className = modFolder.getPath().replaceFirst(enclosingFolder.getPath(), ""); className = className.substring(1, className.length()); className += "." 
+ modMain; className = className.replaceAll("/", "."); Object modClass = Class.forName(className).newInstance(); if (modClass instanceof Modification) { Logger.log(LogLevel.MESSAGE, "Loading modification: '" + modClass.getClass().getName() + "' as '" + modName + "'"); ((Modification) modClass).setConfigFile(modConfig.getPath()); ((Modification) modClass).init(); Logger.log(LogLevel.MESSAGE, ((Modification) modClass).getInfo()); mods.put(modName, (Modification) modClass); } else { Logger.log(LogLevel.WARNING, "Modification: '" + modName + "' does not derive from abstract class: 'Modification', ignoring."); } } else { Logger.log(LogLevel.WARNING, "'mod_name' undefined for: '" + modFolder.getPath() + "'"); } } catch (InstantiationException | IllegalAccessException e) { Logger.log(LogLevel.WARNING, "Error loading main class for modification '" + modName + "', ignoring."); } } else { Logger.log(LogLevel.WARNING, "'mod_main' undefined for: '" + modFolder.getPath() + "', ignoring."); } If modName is NULL, there is no point in loading the class details and creating its instance. This saves some time. Handle exceptions accordingly. 2) If you have access to Modification class and can create its instance using NEW operator, it is preferred over reflection as reflection takes more time. This is my suggestion. Please try and let me know if you found any difference in performance.
{ "domain": "codereview.stackexchange", "id": 20108, "tags": "java, file-system, dynamic-loading" }
What is the Quantum Transition Time for Photon Emission?
Question: When an electron in an atom changes energy states to emit a photon, how long does the process take? Is this question even meaningful? Answer: Both the ground state and the excited state are eigenfunctions of the time independent Schrodinger equation. That means they are also time independent, so the excited state will never decay to the ground state, i.e. the transition time would be infinite. However the excited state doesn't decay to just the ground state, it decays to the ground state plus a photon, and the oscillating electric field of the photon is not time independent. The electric field of the photon has to be included in the Hamiltonian, and when you do this the Schrodinger equation is no longer time independent and the excited state is no longer an eigenfunction. There is now a non-zero probability for the excited state to decay, so the decay rate will be greater than zero and the (average) transition time will be less than infinite. Strictly speaking the presence of the photon makes this a relativistic system, and we should calculate the decay rate using quantum field theory. There are some comments about this in the answers to the question Why and how, in QED, can excited atoms emit photons?. However for most purposes we can calculate transition rates using Fermi's golden rule. There is a detailed discussion of the calculation in the question Interpretation of "transition rate" in Fermi's golden rule and the answers to it. If you're interested in seeing worked examples, e.g. for the hydrogen atom, a quick Google will find you lots of articles on the subject.
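For reference, the standard textbook statement of Fermi's golden rule that the answer leans on (not quoted from the answer itself): the rate of transitions from an initial state $|i\rangle$ into final states $|f\rangle$ under a perturbation $H'$ is

$$\Gamma_{i \to f} = \frac{2\pi}{\hbar} \left| \langle f | H' | i \rangle \right|^2 \rho(E_f),$$

where $\rho(E_f)$ is the density of final states at the final energy. The mean transition (decay) time is then of order $1/\Gamma$.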
{ "domain": "physics.stackexchange", "id": 17112, "tags": "quantum-field-theory" }
Relativity of Jerk
Question: Popular expositions of general relativity start with a thought experiment showing that it is impossible to distinguish a constantly accelerating frame of reference in a free fall from a free floating frame of reference. Thought Experiment: Person A is in a small closed box, free-falling towards earth. Person B is in a small closed box floating around in space. If they both do the same experiments, they should see the same results. For example, if they have a small ball and toss it inside their box, they would both see that ball travel in a straight line (not curving) towards the wall. They would also both feel themselves floating around as if there was no gravity. The same thought experiment could be applied to a frame undergoing a constant jerk. Does that lead to a new theory of relativity? Answer: Gravity happens to measurably correspond to a constant acceleration, not jerk. So, no, this doesn't lead to a new theory of gravity. At least not immediately. On the other hand, strictly speaking, there is actually a slight jerk involved in free fall if an object falls long enough for the increasing gravitational field to become relevant. Whether a model for our world involving relativity of jerk could be built is something I'd like to know myself...
{ "domain": "physics.stackexchange", "id": 53421, "tags": "general-relativity, reference-frames, acceleration, equivalence-principle, jerk" }
Problem with catkin depends
Question: When I compile my package with catkin_make I see this error: prima@prima-UX32VD:~/catkin_ws$ catkin_make Base path: /home/prima/catkin_ws Source space: /home/prima/catkin_ws/src Build space: /home/prima/catkin_ws/build Devel space: /home/prima/catkin_ws/devel Install space: /home/prima/catkin_ws/install #### #### Running command: "make cmake_check_build_system" in "/home/prima/catkin_ws/build" #### -- Using CATKIN_DEVEL_PREFIX: /home/prima/catkin_ws/devel -- Using CMAKE_PREFIX_PATH: /home/prima/catkin_ws/devel;/opt/ros/hydro -- This workspace overlays: /home/prima/catkin_ws/devel;/opt/ros/hydro -- Using PYTHON_EXECUTABLE: /usr/bin/python -- Python version: 2.7 -- Using Debian Python package layout -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /home/prima/catkin_ws/build/test_results -- Found gtest sources under '/usr/src/gtest': gtests will be built -- catkin 0.5.86 -- BUILD_SHARED_LIBS is on -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- ~~ traversing 1 packages in topological order: -- ~~ - ompl_planner -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- +++ processing catkin package: 'ompl_planner' -- ==> add_subdirectory(ompl_planner) -- Using these message generators: gencpp;genlisp;genpy CMake Error at /opt/ros/hydro/share/catkin/cmake/catkinConfig.cmake:72 (find_package): Could not find a package configuration file provided by "ompl" with any of the following names: omplConfig.cmake ompl-config.cmake Add the installation prefix of "ompl" to CMAKE_PREFIX_PATH or set "ompl_DIR" to a directory containing one of the above files. If "ompl" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): ompl_planner/CMakeLists.txt:7 (find_package) -- Configuring incomplete, errors occurred! 
make: *** [cmake_check_build_system] Errore 1 Invoking "make cmake_check_build_system" failed but in my ompl package in /opt/ros/hydro/share/ompl/ there is the correct ompl-config.cmake file. this is my CmakeLists.txt cmake_minimum_required(VERSION 2.8.3) project(ompl_planner) find_package(catkin REQUIRED COMPONENTS roscpp costmap_2d geometry_msgs nav_core nav_msgs pluginlib tf angles ompl ) catkin_package( INCLUDE_DIRS include LIBRARIES ompl_planner CATKIN_DEPENDS roscpp nav_core pluginlib ) include_directories( include ${catkin_INCLUDE_DIRS} ) add_library(ompl_planner src/ompl_planner.cpp ) install(TARGETS ompl_planner LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} ) install(DIRECTORY include/ompl_planner/ DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} ) install(FILES bgp_plugin.xml DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION} ) What can I do? Originally posted by Stefano Primatesta on ROS Answers with karma: 402 on 2014-04-16 Post score: 0 Original comments Comment by joq on 2014-04-17: Most likely your workspace environment is wrong. Did you remember to source /opt/ros/hydro/setup.bash before first running catkin_make? Did you later source ~/catkin_ws/devel/setup.bash? Comment by Stefano Primatesta on 2014-04-17: I always do it. The problem is only with OMPL package, but not with other packages. Answer: ompl is not a catkin package: http://wiki.ros.org/ompl Therefore you should not find it with find_package(catkin REQUIRED COMPONENTS ompl). 
Instead use find_package(OMPL) to find it and then its custom variables: OMPL_FOUND - OMPL was found OMPL_INCLUDE_DIRS - The OMPL include directory OMPL_LIBRARIES - The OMPL library OMPLAPP_LIBRARIES - The OMPL.app library OMPL_VERSION - The OMPL version in the form <major>.<minor>.<patchlevel> OMPL_MAJOR_VERSION - Major version OMPL_MINOR_VERSION - Minor version OMPL_PATCH_VERSION - Patch version Originally posted by Dirk Thomas with karma: 16276 on 2014-04-18 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Stefano Primatesta on 2014-04-20: Thanks!!! The answer is very useful Comment by Tixiao on 2014-11-09: I also met this problem and your answer solved it. Thank you so much!
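A hedged sketch of how the asker's CMakeLists.txt might look after this change (only the OMPL-related lines are shown; the variable names come from the answer's list above, and the target names are the asker's):

```cmake
# ompl is not a catkin package, so remove it from the catkin COMPONENTS
# list and locate it with its own CMake config instead.
find_package(catkin REQUIRED COMPONENTS
  roscpp costmap_2d geometry_msgs nav_core nav_msgs pluginlib tf angles
)
find_package(OMPL REQUIRED)

include_directories(
  include
  ${catkin_INCLUDE_DIRS}
  ${OMPL_INCLUDE_DIRS}
)

add_library(ompl_planner src/ompl_planner.cpp)
target_link_libraries(ompl_planner ${catkin_LIBRARIES} ${OMPL_LIBRARIES})
```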
{ "domain": "robotics.stackexchange", "id": 17677, "tags": "catkin, ompl, cmake" }
Spin-1 particle polarization direction
Question: For a spin-1 particle, I don't quite understand how the following relationship is derived: $$\left|+1\right>=-\frac{1}{\sqrt2}(\hat e_x+i\hat e_y)$$ $$\left|0\right>=\hat e_z$$ $$\left|-1\right>=\frac{1}{\sqrt2}(\hat e_x-i\hat e_y),$$ where $\left|-1\right>$, $\left|0\right>$, $\left|+1\right>$ are eigenstates of angular momentum in the z direction. It appears that one needs to first make an ad hoc assertion on the middle equation, and then somehow use the raising and lowering operator to obtain the other equations, but I am not sure how the raising and lowering operator will give a relationship involving the unit vectors. I am also not clear on the interpretation of the above relation: naively I would say the middle equation shows that the $\left|0\right>$ state is linearly polarized in the z direction, whereas the $\left|+1\right>$ and $\left|-1\right>$ states are linearly polarized along the $y=x$ and $y=-x$ line respectively. Answer: For a spin-1 particle, I don't quite understand how the following relationship is derived The numbers inside the kets on the left-hand side represent the eigenvalue of $L_z$. Therefore, we can figure out the right-hand side by how they transform under physical rotations $R_z(\theta)$ about the $z$-axis. The $|0 \rangle$ state corresponds to $\hat{e}_z$, because it is unchanged by a rotation about the $z$-axis, $$R_z(\theta) \hat{e}_z = \hat{e}_z.$$ On the other hand, note that $$R_z(\theta) (\hat{e}_x + i \hat{e}_y) = \cos \theta \, \hat{e}_x + \sin \theta \, \hat{e}_y + i \cos \theta \, \hat{e}_y - i \sin \theta \, \hat{e}_x$$ so collecting terms, $$R_z(\theta) (\hat{e}_x + i \hat{e}_y) = (\cos \theta - i \sin \theta)(\hat{e}_x + i \hat{e}_y) = e^{-i \theta} (\hat{e}_x + i \hat{e}_y)$$ which is precisely how a state with $L_z = +1$ transforms under the rotation operator $e^{-i\theta L_z}$, and hence $\hat{e}_x + i \hat{e}_y$ corresponds to $|1 \rangle$. A similar argument goes for $|-1 \rangle$. The overall phases and normalization factors are just by convention.
I am also not clear on the interpretation of the above relation The $|0 \rangle$ state represents polarization along the $z$-axis. The $|\pm 1 \rangle$ states represent circular polarizations, rotating clockwise/counterclockwise about the $z$-axis.
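The rotation algebra in the answer is easy to check numerically. A small sketch (numpy is my addition, nothing here comes from the answer) confirms that $\hat e_z$ is invariant under $R_z(\theta)$ and that $\hat e_x + i\hat e_y$ is only multiplied by a unit-modulus phase, i.e. it is an eigenvector of the rotation (the sign of the phase depends on the rotation convention):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle about the z-axis

# Matrix of R_z(theta) in the (e_x, e_y, e_z) basis, matching the answer's
# expansion R_z(theta) e_x = cos(theta) e_x + sin(theta) e_y, etc.
Rz = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

v = np.array([1.0, 1.0j, 0.0])   # components of e_x + i e_y
rotated = Rz @ v

phase = rotated[0] / v[0]                    # candidate overall phase
assert np.isclose(abs(phase), 1.0)           # a pure phase...
assert np.allclose(rotated, phase * v)       # ...multiplying the whole vector
assert np.allclose(Rz @ np.array([0.0, 0.0, 1.0]), [0.0, 0.0, 1.0])  # e_z fixed
print("e_x + i e_y is an eigenvector of R_z; e_z is invariant")
```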
{ "domain": "physics.stackexchange", "id": 63902, "tags": "quantum-mechanics, quantum-spin, polarization" }
Does the speed of electrons depend on energy?
Question: I would like to know whether the speed of an electron depends on energy. If yes then in a circuit when electrons flow out of a resistor the energy decreases by a considerable amount, leading the charge per electron to decrease and eventually to a decrease in current in a series circuit. How is that possible? Answer: If you have current flowing one way through a resistor, then the electrons flow through it the other way. Since current flows from the high voltage end of a resistor to the low voltage end, the electrons come in at the low voltage end and come out at the high voltage end. When electrons (which are negatively charged) go from low voltage to high voltage, they gain energy from the electric field driving them. However, in a resistor, they lose an equal and opposite amount of energy by repeatedly crashing into other parts of the resistor, thus heating up the resistor. The power loss in a resistor is $IV$, which is exactly what the electrons would have gained in energy had they not lost it due to collisions within the resistor. So two things happened: the electrons were given energy from the electric field, and they gave energy to the resistor by heating it up. Easy come, easy go. Equal numbers of electrons flowed in and out (from no charge buildup) and they leave with the same energy, hence same speed, so same current.
{ "domain": "physics.stackexchange", "id": 20114, "tags": "energy, electric-circuits, electrons, electric-current, electrical-resistance" }
Quick encoding of balanced vectors
Question: It is easy to see that for any $n$ there exists a 1-1 mapping $F$ from {0,1}$^n$ to {0,1}$^{n+O(\log n)}$ such that for any $x$ the vector $F(x)$ is "balanced", i.e., it has an equal number of 1s and 0s. Is it possible to define such $F$ so that given $x$ we can compute $F(x)$ efficiently? Thanks. Answer: Let's consider $n$-bit strings $x$. Definitions: $f(x,i)$ = bit string $x$ with last $i$ bits complemented. $b(x)$ = "imbalance" of $x$: number of 1s in $x$ $-$ number of 0s in $x$. Now fix a string $x$. Consider the function $g(i) = b(f(x,i))$. Observations: $g(0) = b(x)$. $g(n) = -g(0)$. $|g(i) - g(i+1)| = 2$ for all $i$. We either remove one 0 and add one 1 or vice versa. Now it follows that there exists an $i$ such that $-1 \le g(i) \le +1$. Hence we can construct an $(n+O(\log n))$-bit string $y$ as follows: concatenate $f(x,i)$ and the binary encoding of the index $i$. The absolute value of the imbalance of $y$ is $O(\log n)$. Moreover, we can recover $x$ given $y$; the mapping is a bijection. Finally, you can add $O(\log n)$ dummy bits that reduce the imbalance of $y$ from $O(\log n)$ to $0$.
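The construction above translates almost directly into code. A sketch in Python (helper names are mine, not the answer's), restricted to even-length inputs so an index $i$ with $g(i) = 0$ always exists; the fixed-width index bits are followed by a fixed-width pad block chosen to cancel their imbalance:

```python
from itertools import product

def imbalance(s):
    return s.count("1") - s.count("0")

def flip_last(s, i):
    """Complement the last i bits of s (the function f(x, i) in the answer)."""
    if i == 0:
        return s
    flipped = "".join("1" if c == "0" else "0" for c in s[-i:])
    return s[:-i] + flipped

def encode(x):
    """Map an even-length bit string to a balanced one with O(log n) extra bits."""
    n = len(x)
    k = max(n.bit_length(), 1)          # bits needed to store the index i
    i = next(j for j in range(n + 1) if imbalance(flip_last(x, j)) == 0)
    idx = format(i, f"0{k}b")
    # pad block of k bits whose imbalance cancels the index bits' imbalance
    ones = (k - imbalance(idx)) // 2
    pad = "1" * ones + "0" * (k - ones)
    return flip_last(x, i) + idx + pad

def decode(y, n):
    k = max(n.bit_length(), 1)
    body, idx = y[:n], y[n:n + k]
    return flip_last(body, int(idx, 2))  # flipping is an involution

# every 4-bit string round-trips and its encoding is exactly balanced
for bits in product("01", repeat=4):
    x = "".join(bits)
    y = encode(x)
    assert imbalance(y) == 0 and decode(y, 4) == x
print("all 4-bit strings: balanced and invertible")
```

Both encode and decode are clearly polynomial-time (linear scans plus one index lookup), which answers the efficiency question.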
{ "domain": "cstheory.stackexchange", "id": 77, "tags": "ds.algorithms, ds.data-structures, encoding" }
Word Chain Implementation follow up
Question: It's hard for me to implement all those features that were suggested; I did so many things wrong. Can you please review this update to my code, concerning OOP and clean code, and maybe with an eye on the algorithm used. What's note-worthy: the code looks much clearer now, single responsibility seems to be corrected, no more brain killers... Main class (skipping the wrapping class, just posting the main method) public static void main( String[] args ) { WordChainBuilder wordChainBuilder = new WordChainBuilder("src/main/resources/words.txt"); Collection<String> goldChain = wordChainBuilder.build("gold", "lead"); System.out.println("chain: "+goldChain); Collection<String> rubyChain = wordChainBuilder.build("ruby", "code"); System.out.println("chain: "+rubyChain); } WordChainBuilder class WordChainBuilder { private final WordReader wordReader; WordChainBuilder(String wordListFilename) { wordReader = new WordReader(wordListFilename); } Collection<String> build(String start, String destiny) { if (start == null || destiny == null || start.length() != destiny.length()) { throw new IllegalArgumentException(); } return build(new Node(start), new Node(destiny)).asStrings(); } NodeList build(Node start, Node destiny) { List<String> words = wordReader.readAllWordsWithLength(start.getLength()); WordList wordList = new WordList(words); NodeList currentDepth = new NodeList(start); while (!currentDepth.isEmpty()) { NodeList nextDepth = new NodeList(); for (Node node : currentDepth.getNodes()) { if(node.isDestiny(destiny)){ return buildChain(node); } NodeList candidates = findCandidates(node, wordList); nextDepth.addAll(candidates); } wordList.removeAll(nextDepth.asStrings()); currentDepth = nextDepth; } return NodeList.emptyList(); } private NodeList findCandidates(Node node, WordList wordList) { NodeList derivedNodes = new NodeList(); for (String derivedWord : wordList.getOneLetterDifferenceWords(node.getWord())) { Node derivedNode = new Node(derivedWord);
derivedNode.setPredecessor(node); derivedNodes.add(derivedNode); } return derivedNodes; } private NodeList buildChain(Node node) { NodeList chain = new NodeList(); while (node.hasPredecssor()){ chain.addFirst(node); node = node.getPredecessor(); } chain.addFirst(node); return chain; } } Node class Node { private Node predecessor; private final String word; Node(String word) { if (word == null || word.isEmpty()){ throw new IllegalArgumentException(); } this.word= word; } String getWord() { return word; } void setPredecessor(Node node){ predecessor = node; } Node getPredecessor() { return predecessor; } boolean hasPredecssor(){ return predecessor != null; } int getLength(){ return word.length(); } boolean isDestiny(Node destiny) { return word.equals(destiny.word); } @Override public String toString() { return word; } } I'm not sure if I got the idea of a first-class collection properly... NodeList class NodeList extends ArrayList<Node>{ NodeList(Node start) { super(); add(start); } NodeList() { super(); } static NodeList emptyList() { return new NodeList(); } Collection<String> asStrings(){ return stream().map(Node::getWord).collect(Collectors.toList()); } void addFirst(Node node){ add(0, node); } } Same thing here, is that a proper implementation of a first-class collection?
WordList class WordList extends ArrayList<String> { private final List<String> words; WordList(List<String> words){ this.words = words; } List<String> getOneLetterDifferenceWords(String word) { OneLetterDifference oneWordDifference = new OneLetterDifference(word); return words.stream().filter(oneWordDifference::test).collect(Collectors.toList()); } } OneLetterDifference class OneLetterDifference implements Predicate<String> { private final String referenceWord; OneLetterDifference(String referenceWord){ this.referenceWord = referenceWord; } @Override public boolean test(String word) { int difference = 0; for (int i = 0; i < word.length(); i++) { if (word.charAt(i) != referenceWord.charAt(i)) { difference = difference + 1; if (difference > 1){ return false; } } } return difference == 1; } } WordReader class WordReader { private final String filename; public WordReader(String filename) { this.filename = filename; } List<String> readAllWordsWithLength(int length) { try (Stream<String> lines = Files.lines(Paths.get(filename), Charset.defaultCharset())) { return lines.filter(line -> line.length() == length).collect(Collectors.toList()); } catch (IOException e) { e.printStackTrace(); } return Collections.emptyList(); } } Answer: Dependency Injection & Responsibility Currently your constructor delegates the wordListFilename to the WordReader, which you build there with new. public WordChainBuilder(String wordListFilename) { wordReader = new WordReader(wordListFilename); } The relationship between WordChainBuilder and WordReader is called composition in UML. Imagine you want to write a Unit-Test and a Unit-Test has to be fast and has to test only one unit. A test for WordChainBuilder can't be a Unit-Test because it depends on the file system via WordReader and is therefore not fast. Imagine WordChainBuilder should now read files from a database. We have to change the constructor. The definition of Robert C. 
Martin for responsibilities is "There should never be more than one reason for a class to change". So WordChainBuilder still has more than one responsibility: reading words and building the chain. The responsibility should only be to build the chain. If you create an interface called WordProvider, you can easily switch from a file reader to a database reader, assuming they are both of type WordProvider. Let's change the composition to aggregation: public WordChainBuilder(WordProvider provider) { this.provider = provider; } Now it is possible to write unit-tests, since WordChainBuilder doesn't depend directly on the file-system anymore and we could write a Mock and inject it into WordChainBuilder. The Value Object Word a value object is a small object that represents a simple entity whose equality is not based on identity: i.e. two value objects are equal when they have the same value In your code base I often read word, but it is of type String. Why don't you create a class for it? The statement start.length() != destiny.length() could then be written as !start.hasEqualLength(destiny). The implementation OneLetterDifference of a FunctionalInterface is actually a method that belongs to the class Word, because you compare two Strings that represent Words.
{ "domain": "codereview.stackexchange", "id": 33556, "tags": "java" }
Binding energy and energy released
Question: Let's say that we have a nucleus with 7 units of mass. And let's say that the combined mass of its constituents is 10 units. The 3 units contribute toward the "binding energy." However, we do know that when (for example) hydrogen fuses to create helium, energy is also released along with that reaction. Shouldn't we say that: Energy (Mass) of constituents = Energy (Mass) of resulting nucleus + Binding Energy + Energy Released (as radiation) instead of: Energy (Mass) of constituents = Energy (Mass) of resulting nucleus + Binding Energy? Answer: The released energy is exactly that binding energy. Binding things together lowers the total energy in the system (neglecting kinetic energy for the moment). The excess energy present in the system before must go somewhere (because of the conservation of energy), so it is released in some way (mostly emitted as photons or as kinetic energy). So in your example, the system first has an energy equivalent of 10 mass units, and after binding only 7. When the binding reaction occurs, suddenly there is an energy excess equivalent to 3 mass units in the system, so this energy (the binding energy) is released in some way.
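The same bookkeeping can be checked with real numbers. A quick sketch for helium-4 (the mass values are standard tabulated figures quoted here as illustrative constants):

```python
U_TO_MEV = 931.494  # energy equivalent of 1 atomic mass unit, in MeV

m_proton = 1.007276   # u
m_neutron = 1.008665  # u
m_he4 = 4.001506      # u, mass of the bound helium-4 nucleus

# Mass of the free constituents minus mass of the bound nucleus:
mass_defect = 2 * m_proton + 2 * m_neutron - m_he4

# This "missing" mass is exactly the energy released on binding.
binding_energy_mev = mass_defect * U_TO_MEV
print(f"{binding_energy_mev:.1f} MeV released")  # roughly 28 MeV
```

There is no separate "released energy" term: the mass defect and the released energy are the same quantity.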
{ "domain": "physics.stackexchange", "id": 56992, "tags": "energy, energy-conservation, nuclear-physics, binding-energy" }
largest earthquake not in a subduction zone
Question: What's the largest earthquake since 1900 that was not in a subduction zone (i.e., not a megathrust)? Wikipedia says that Since 1900, all earthquakes of magnitude 9.0 or greater have been megathrust earthquakes. No other type of known terrestrial source of tectonic activity has produced earthquakes of this scale. Answer: 08/15/1950 Assam, Tibet, 8.9 magnitude, according to the USGS, https://earthquake.usgs.gov/earthquakes/browse/largest-world.php Note that you are talking about a very small window of time: there are only 5 earthquakes of 9.0 or greater within that time frame.
{ "domain": "earthscience.stackexchange", "id": 1614, "tags": "earthquakes" }
Counting letters in a string
Question: This little program is self-explanatory. I count letters in a string (can be any string), using a for loop to iterate through each letter. The problem is that this method is very slow and I want to avoid loops. Any ideas? I thought that maybe if I remove checked letters from the string after each loop, then in some cases, where many letters repeat, that would make a difference. def count_dict(mystring): d = {} # count occurrences of character for w in mystring: d[w] = mystring.count(w) # print the result for k in sorted(d): print (k + ': ' + str(d[k])) mystring='qwertyqweryyyy' count_dict(mystring) The output: e: 2 q: 2 r: 2 t: 1 w: 2 y: 5 Answer: Use the built-in Counter in the collections module: >>> from collections import Counter >>> Counter('qwertyqweryyyy') Counter({'y': 5, 'e': 2, 'q': 2, 'r': 2, 'w': 2, 't': 1})
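To reproduce the question's sorted `k: count` output with Counter, one possible way:

```python
from collections import Counter

counts = Counter('qwertyqweryyyy')
for letter in sorted(counts):
    print(letter + ': ' + str(counts[letter]))
```

Counter tallies everything in a single O(n) pass, whereas calling `mystring.count(w)` inside the loop rescans the whole string for every character, which is what makes the original O(n²).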
{ "domain": "codereview.stackexchange", "id": 12793, "tags": "python, strings" }
Proton: 2 up, 1 down quark, Neutron: 2 down, 1 up, how can Neutron: proton + electron?
Question: I imagine that there is a pretty simple answer to my question, but I have just never gotten it straight. If a proton is comprised of two up quarks and a down, and neutrons are comprised of two down and an up, how can a neutron be a proton and electron? Answer: A neutron is not "a proton and an electron". A neutron is not composed of a proton and an electron inside of the neutron. In quantum mechanics, particles can appear and disappear or change into other particles. With the neutron, one of the down quarks can change into an up quark by emitting a W boson, turning the neutron into a proton. The W boson quickly decays into an electron and an electron antineutrino. The new up quark didn't exist until the down quark turned into it. The W boson is what is called a virtual particle. It doesn't exist in the classical sense, it's just kind of there in the ambiguous region of spacetime where the decay occurs. The electron and antineutrino didn't exist until the decay. Here is a Feynman diagram of the process, from here:
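The process described above, written out as reaction equations:

```latex
n \;\to\; p + e^- + \bar{\nu}_e
\qquad\text{via, at the quark level:}\qquad
d \;\to\; u + W^-, \qquad W^- \;\to\; e^- + \bar{\nu}_e
```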
{ "domain": "physics.stackexchange", "id": 42151, "tags": "electrons, nuclear-physics, quarks, neutrons, protons" }
X-ray production rate
Question: A 100 MeV proton beam of $10^{14}$ proton/s is perpendicularly incident on a rhodium foil 25 $\mu$m in thickness. Estimate the production rate of K and L x-rays(use the figure below). So far I figured out from above the cross section for L x-rays to be $8\times 10^{2}$ barns and for K x-rays to be $1\times 10^{2}$ barns I have the right answers for this one but I can't seem to figure out the right formula to use in order to arrive the given answer. L $1.3\times10^{13}$ per second K $1.6\times10^{12}$ per second Answer: To do this homework problem, you need to find the number of nuclei per barn in the foil. If you read The Value of the Cross-Section Concept you will understand what to do.
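Following the hint, the estimate is rate = flux × (nuclei per unit area) × cross section. A sketch with assumed standard values for rhodium (density and atomic mass are tabulated figures quoted here from memory; the cross sections are the ones read off the figure in the question):

```python
AVOGADRO = 6.022e23   # atoms per mole

flux = 1e14           # protons per second
thickness = 25e-4     # cm (25 micrometres)
density = 12.41       # g/cm^3, rhodium (assumed tabulated value)
atomic_mass = 102.91  # g/mol, rhodium

# Areal number density of target nuclei in the foil:
n_areal = density / atomic_mass * AVOGADRO * thickness  # nuclei/cm^2

BARN = 1e-24          # cm^2
sigma_L = 800 * BARN  # ~8e2 barn read off the figure for L x-rays
sigma_K = 100 * BARN  # ~1e2 barn for K x-rays

rate_L = flux * n_areal * sigma_L
rate_K = flux * n_areal * sigma_K
print(f"L: {rate_L:.1e}/s, K: {rate_K:.1e}/s")
```

This lands close to the quoted answers of 1.3e13 and 1.6e12 per second; the residual difference comes from reading the cross sections off the figure.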
{ "domain": "physics.stackexchange", "id": 13762, "tags": "homework-and-exercises, radiation, x-rays" }
Can an object from a natural process escape earth gravitation?
Question: I'm no expert, but I once studied basic and advanced physics, and I understand gravity action/reaction, the escape velocity of 11.2 km/s from the earth's surface, that escape velocity changes as the object gets farther away, and the Karman line for space (100 km). What I want to know is: can a natural object, from a natural process, reach the escape velocity of 11.2 km/s and get free of earth gravitation? I have in mind some volcano explosions. Tornados seem weak. Or also a comet impact. The question is not limited in time; maybe it was possible millions of years ago, when activities and gravity might have been different. Answer: Yes, it is not only possible, but has almost certainly happened on Earth. The asteroid that killed the dinosaurs is thought to have produced these high velocity fragments on the order of one-thousandth of the mass of the impactor according to Poveda and Cordero [2008]. In fact, they estimate the volume of escape velocity ejecta from the Chicxulub event as quite substantial in that: the number of fragments with sizes larger than 10 cm and 2 cm is about 4x10^10 and 2x10^12, respectively The authors of the above paper call these "Chicxulubites" and speculate on the possibility of finding them on the Moon and Mars. The larger the impactor radius and velocity, the higher a percentage of its overall mass will reach escape velocity. In an asteroid impact, the fastest moving ejecta comes from near the center of the crater. The ejecta closer to the edge of the crater has much lower velocity: The equation governing the velocity of the ejecta $V_{ej}$ is therefore partly a function of $r$, the distance from the center of the crater. $$V_{ej} = \frac{2\sqrt{Rg}}{1+\epsilon }\left(\frac{r}{R}\right)^{-\epsilon}$$ Where $\epsilon$ is a material coefficient of 1.8 for hardpack soil, $g$ is the surface gravitational acceleration, $R$ is the total radius of the crater, and $r$ is the radius at which the ejecta was ejected.
Notes: I go into more detail in this answer: What percentage of a lunar meteor strike is blown back into space?. Practically, an object near sea level must achieve higher than escape velocity in order to overcome atmospheric drag as it escapes.
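The scaling law above is easy to evaluate directly. A sketch using an illustrative crater radius of the Chicxulub order (90 km here is an assumed round number, not a measured value):

```python
import math

def ejecta_velocity(r, R, g=9.81, eps=1.8):
    """Ejecta speed (m/s) at distance r from the center of a crater of
    radius R, per the scaling law quoted above (eps = 1.8, hardpack soil)."""
    return 2 * math.sqrt(R * g) / (1 + eps) * (r / R) ** (-eps)

R = 90e3  # m, illustrative crater radius

# Ejecta launched near the center is far faster than ejecta near the rim:
v_inner = ejecta_velocity(0.05 * R, R)
v_outer = ejecta_velocity(0.8 * R, R)
print(v_inner, v_outer)
```

With these numbers only the innermost ejecta exceeds Earth's 11.2 km/s escape speed, which matches the answer's point that the fastest material comes from near the crater center.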
{ "domain": "astronomy.stackexchange", "id": 5791, "tags": "earth, volcanism, escape-velocity" }
Why does time-independent Hamiltonian not depend on angle variable?
Question: In Landau and Lifshitz Mechanics, $\S50$ Canonical variables, a time-independent Hamiltonian is considered, and a canonical transformation is done such that the adiabatic invariant $I$ becomes the new momentum. Then the angle variable is found as $$w=\frac{\partial S_0(q,I;\lambda)}{\partial I},$$ where $S_0$ is the abbreviated action (and the generating function for the canonical transformation), $q$ is the old position variable and $\lambda$ is a constant parameter. Now L&L say: Since the generating function $S_0(q,I;\lambda)$ does not depend explicitly on time, the new Hamiltonian $H'$ is just $H$ expressed in terms of the new variables. In other words, $H'$ is the energy $E(I)$, expressed as a function of the action variable. Accordingly, Hamilton's equations in canonical variables are $$\dot I=0,\;\;\;\dot w=\frac{\mathrm dE(I)}{\mathrm dI}.\tag{50.4}$$ Now my question is: why does $H'=E$ not depend also on $w$? $H$ does in general depend on the old position variable $q$ (even if it does not depend explicitly on time), so why shouldn't $H'$ depend on the angle variable? Answer: This is more or less an exercise in chasing definitions. The adiabatic invariant $I$ is defined as $$ I\equiv \oint p \frac{\mathrm{d}q}{2\pi}\tag{49.7}$$ where the integral is taken over the path for given $E$ and $\lambda$. The external parameter $\lambda(t)$ is a slowly varying function of time $t$ in $\S49$, but is assumed to be a constant in $\S50$. Let us suppress the role of $\lambda$ in what follows to keep formulas simple. Then $I=f(E)$ is only a function of the energy $E$. Combined with the fact that the Hamiltonian $H$ does not depend explicitly on time, this shows that the Kamiltonian ($\equiv$ the new Hamiltonian) $$H^{\prime}~\equiv~ K~=~H~=~E~=~f^{-1}(I)$$ is only a function of $I$, which is also the new momentum.
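In short, once $H' = E(I)$ contains no $w$, Hamilton's equations in the new variables immediately give (50.4):

```latex
\dot I \;=\; -\frac{\partial H'}{\partial w} \;=\; 0,
\qquad
\dot w \;=\; \frac{\partial H'}{\partial I} \;=\; \frac{\mathrm{d}E(I)}{\mathrm{d}I}.
```

The absence of $w$ in $H'$ is exactly the statement that $I=f(E)$ inverts to $E=f^{-1}(I)$ with no angle dependence.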
{ "domain": "physics.stackexchange", "id": 31207, "tags": "classical-mechanics, energy, hamiltonian-formalism, hamiltonian, phase-space" }
Does the amount of torque that can be transferred from wheel to rod change based on the distance of the connection to the rim of the wheel
Question: Let's say I have a rod that I need rotating, like in a regular crank mechanism: Now, the motor I use is too weak to generate the required amount of torque to move the mechanism. So I put a gear with 10 teeth on the motor, and replace the wheel with a 40-toothed gear. Now I have a mechanical advantage of 4, so the driven gear can deliver 4 times the amount of torque that the motor can. Does the amount of torque that the driven gear can transfer differ when I connect the rod close to the hub of the gear, compared to when I connect it closer to the rim? Answer: Let's say the radius of your gear is R, and you connect the rod at distance x from the center, with the gear delivering a torque of 20 N·m. The force your rod receives from the gear is $$ F_{rod}=\frac{\tau}{x}=\frac{20}{x}$$ The smaller x (the distance from the center), the bigger the force, but its displacement per revolution is smaller. It works like a lever with a fulcrum at the center of the hub, with which you try to lift an object at a distance of x from the fulcrum, and at the rim you have an action force $$F_{gear}=\frac{\tau}{R}$$ so that $$\frac{F_{rod}}{F_{gear}}=\frac{R}{x}$$
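A quick numeric check of the lever picture (the radius and torque values are purely illustrative):

```python
def rod_force(torque, x):
    """Force (N) available to the rod when attached x metres from the hub."""
    return torque / x

tau = 20.0  # N*m, torque at the driven gear (illustrative)
R = 0.10    # m, gear radius (illustrative)

# Attaching the rod at a quarter of the radius quadruples the force,
# at the cost of a quarter of the travel per revolution:
print(rod_force(tau, R / 4) / rod_force(tau, R))  # ratio R/x = 4
```

So the transferable torque is the same everywhere on the gear; what changes with the attachment radius is the force/displacement trade-off.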
{ "domain": "engineering.stackexchange", "id": 4302, "tags": "gears, torque" }
Partial recursive function with no total recursive extension
Question: We define a partial recursive function $f:\{0,1\}^* \longrightarrow \{0,1\}^*$ to be semi-good if we can define a total recursive function $g:\{0,1\}^* \longrightarrow \{0,1\}^*$ from $f$, such that for all $x \in \{0,1\}^*$ either $f(x) = g(x)$ or $f(x)\uparrow$. Now I want to prove that there exists a partial recursive function that is not semi-good. I must make a partial recursive function that is not semi-good, say $f_1$, but my problem is that I must show that I cannot define any total recursive function $g:\{0,1\}^* \longrightarrow \{0,1\}^*$ from $f_1$. I don't have any idea how I can show this. Can I use the fact that a specific language like $L$ is not recursive, so its characteristic function $\mathcal{X}_L$ is not a total recursive function? How can I make the partial function from this fact? Answer: Take the function that interprets its input as the description of a Turing machine, and outputs the number of steps it takes the machine to halt, if it halts, and is undefined otherwise. This function is clearly partially computable, but it has no computable extension, since you could use any such extension to solve the halting problem (why?). Exercise for the reader: convert this construction to a Boolean function.
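The reduction in the answer can be sketched with toy "machines" modelled as Python generators (one yield per step). This only illustrates the argument; these are not real Turing machines:

```python
import itertools

def steps_to_halt(program):
    """The partial function from the answer: how many steps `program`
    takes to halt.  Loops forever (i.e. is undefined) on non-halting input."""
    gen = program()
    for steps in itertools.count(1):
        try:
            next(gen)
        except StopIteration:
            return steps

def halts(program, g):
    """Given ANY total extension g of steps_to_halt, decide halting:
    a program halts iff it halts within g(program) steps."""
    gen = program()
    for _ in range(g(program)):
        try:
            next(gen)
        except StopIteration:
            return True
    return False

def halter():          # halts after a few steps
    yield; yield; yield

def looper():          # never halts
    while True:
        yield

# A pretend total extension: agrees with steps_to_halt where that is
# defined, and returns an arbitrary value (7) where it is undefined.
g = lambda p: 4 if p is halter else 7
```

If `steps_to_halt` had a computable total extension, `halts` would decide the halting problem, which is impossible; hence no such extension exists.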
{ "domain": "cs.stackexchange", "id": 8221, "tags": "computability, semi-decidability" }
Splitting a Keras model into multiple GPUs
Question: Dear fellow Data Scientists, I'm having a problem with splitting a model across multiple GPUs. I have read something about "towering" in native TensorFlow, but my whole architecture is already written in Keras (TensorFlow backend, of course). Keras, as far as I know, only supports data parallelism, which is useless when operating on images bigger than 1760x1760 in my case (YOLO architecture). I'm asking for advice: how could I achieve this without using native TensorFlow? I must run this model on 4500x4500 images and I can use up to 4 Tesla K40 (11GB) GPUs. EDIT: I'm already using batch = 1 Answer: For device parallelism (aka model parallelism) see this FAQ: Device parallelism. Here is an example of doing this with Horovod.
{ "domain": "datascience.stackexchange", "id": 4470, "tags": "python, keras, tensorflow, gpu" }
Stabilising an inverted pendulum
Question: With the problem of stabilising an inverted pendulum on a cart, it's clear that the cart needs to move toward the side the pendulum leans. But for a given angle $\theta$, how much should the cart move, and how fast? Is there a theory determining the distance and speed of the cart or is it just trial and error? I've seen quite a few videos of inverted pendulums, but it's not clear how the distance and speed are determined. Answer: There are lots of ways to solve this problem, which falls into the category of Control Engineering. There are two standard approaches: Classical Control: The control command is proportional to a linear combination of the error, the rate of change of the error, and the integral over time of the error, a.k.a. a PID controller. This approach requires you to tune three gain parameters, which dictate how much weight is given to each of the three aforementioned measurements. This can be done manually by trial and error, but more sophisticated approaches exist such as Ziegler-Nichols tuning. Optimal Control: construct a cost function which is constrained by the system's dynamics and obtain its minimum. A common choice is a quadratic cost function in terms of the state vector and the control command, leading to a thing called an LQR controller. The minimum is obtained by solving the Riccati equations, for which there are standard numerical techniques. In my personal experience, LQR gives better results, but requires a bit of computational power and mathematical understanding to accomplish. PID is, by far, easier to implement and doesn't require a whole lot of mathematical understanding to do, but may not perform as robustly as you wish. In any case, you will need sensors to detect the pendulum's angle and either its rate of change in angle, or a data recorder to accumulate the error over time, or both.
Also, you may find that your sensors do not always give correct values, due to cheap technology or unexpected disturbances, in which case you also need to account for uncertainty in measurements. A Kalman filter would be needed in such a case, and a sophisticated mathematical understanding of the dynamics will be needed to develop one properly.
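A minimal classical-control sketch: a PD loop (just the P and D terms of the PID described above, with hand-picked gains) balancing the linearized pendulum $\ddot\theta = (g/l)\theta + u$. All numbers here are illustrative, found by trial and error exactly as the answer suggests:

```python
g_over_l = 9.81          # g/l for a 1 m pendulum (linearized dynamics)
Kp, Kd = 40.0, 10.0      # proportional and derivative gains (hand-tuned)
dt = 0.001               # integration step, seconds

theta, omega = 0.1, 0.0  # start leaning 0.1 rad, at rest
for _ in range(5000):    # simulate 5 seconds with explicit Euler steps
    u = -Kp * theta - Kd * omega       # control command
    alpha = g_over_l * theta + u       # angular acceleration
    omega += alpha * dt
    theta += omega * dt

print(theta)  # driven back to (essentially) upright
```

Note that Kp must exceed g/l for the closed loop to be stable at all; Kd then adds the damping that stops the pendulum from oscillating around upright.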
{ "domain": "robotics.stackexchange", "id": 1094, "tags": "mobile-robot, sensors, accelerometer, gyroscope" }
Motivation for the definition of k-distillability
Question: Definition of k-distillability For a bipartite state $\rho$, $H=H_A\otimes H_B$ and for an integer $k\geq 1$, $\rho$ is $k$-distillable if there exists a (non-normalized) state $|\psi\rangle\in H^{\otimes k}$ of Schmidt rank at most $2$ such that, $$\langle \psi|\sigma^{\otimes k}|\psi\rangle < 0, \sigma = \Bbb I\otimes T(\rho).$$ $\rho$ is distillable if it is $k$-distillable for some integer $k\geq 1.$ Source I get the mathematical condition but don't really understand the motivation for $k$-distillability in general, or more specifically the condition $\langle \psi|\sigma^{\otimes k}|\psi\rangle < 0$. Could someone explain where this comes from? Answer: Remember that the partial transpose condition is generally good for detecting entanglement, i.e. a bipartite state $\rho$ is certainly entangled if the partial transpose is not non-negative. In other words, if there exists a state $|\psi\rangle$ such that $$ \langle\psi|I\otimes\text{T}(\rho)|\psi\rangle<0, $$ then the state is certainly entangled. If you want to be able to distil some entanglement from $k$ copies then, crudely, you'd like to look at $k$ copies of the partially transposed state, and if that has a negative eigenvalue, you would be able to extract some entanglement. With that level of explanation, you'd ask why looking at more than one copy is any use -- the eigenvalues of many copies of $\sigma$ are easily related to the eigenvalues of a single copy. However, this is because of the extra condition that $|\psi\rangle$ must be Schmidt rank 2 or less. I presume that this is because you can give an explicit distillation protocol based on the properties of $|\psi\rangle$. Essentially, this is due to the fact that you're trying to project onto a Bell pair which, of course, is Schmidt rank 2. For a better understanding than the very hand-wavy suggestions I've just given, you'd want to work through page 2 of https://arxiv.org/abs/quant-ph/9801069
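The $k=1$ condition can be checked concretely, in exact arithmetic, for a Bell state: take $\rho = |\Phi^+\rangle\langle\Phi^+|$, partially transpose the second qubit, and evaluate the expectation in the (Schmidt-rank-2, unnormalized) singlet $|\psi\rangle = |01\rangle - |10\rangle$. A small sketch:

```python
from fractions import Fraction

H = Fraction(1, 2)
# rho = |Phi+><Phi+| with |Phi+> = (|00> + |11>)/sqrt(2); basis 00,01,10,11
rho = [[H, 0, 0, H],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [H, 0, 0, H]]

def partial_transpose_B(rho):
    """sigma[(a,b),(c,d)] = rho[(a,d),(c,b)]: transpose on the second qubit."""
    sigma = [[Fraction(0)] * 4 for _ in range(4)]
    for a in range(2):
        for b in range(2):
            for c in range(2):
                for d in range(2):
                    sigma[2 * a + b][2 * c + d] = rho[2 * a + d][2 * c + b]
    return sigma

sigma = partial_transpose_B(rho)

psi = [0, 1, -1, 0]  # unnormalized singlet |01> - |10>, Schmidt rank 2
value = sum(psi[i] * sigma[i][j] * psi[j] for i in range(4) for j in range(4))
print(value)  # negative, so the Bell state is 1-distillable
```

With $|\psi\rangle$ normalized this is the familiar $-1/2$ eigenvalue of the partially transposed Bell state.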
{ "domain": "quantumcomputing.stackexchange", "id": 1767, "tags": "entanglement, mathematics, state-distillation" }
Can't move turtle in turtlesim with key inputs
Question: I am trying to go through the tutorial. I am on "Understanding ROS Topics". I can get the turtlesim open, but when I try to move the turtle, nothing happens. No key inputs appear in the terminal. I am getting the following error in the "roscore" terminal. Couldn't find an AF_INET address for [IP_OF_TURTLEBOT] I have looked around a lot and cannot find the answer. Can anyone help me? Originally posted by mercury on ROS Answers with karma: 5 on 2015-01-05 Post score: 0 Answer: Probably a duplicate of http://answers.ros.org/question/163556/how-to-solve-couldnt-find-an-af_inet-address-for-problem/ You might also want to check the Network configuration page. Originally posted by 130s with karma: 10937 on 2015-01-05 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by mercury on 2015-01-05: Thanks, this led me to the correct answer; I had to fix my IP address
{ "domain": "robotics.stackexchange", "id": 20485, "tags": "ros, turtlesim, turtlesim-node, network" }
Making a list of shades and styles for plots
Question: For a section of a much larger program that generates plots, I am creating a list of styles that I can pull from to create a consistent format from plot-to-plot. It works as follows I have a list of colors shade = ['k','m'] and a list of styles style = ['--', ':','-','.'] that I am concatenating to get the desired list Colors= [['k--', 'k:', 'k-', 'k.'], ['m--', 'm:', 'm-', 'm.']]. I am achieving this with the following nested loop (t and phi are not always necessarily the same size but for example sake they are here) import numpy as np t = np.linspace(0,1,24) phi = np.linspace(0,1,24) shade = ['k','m'] style = ['--', ':','-','.'] Colors = [] for i in shade: cs = [] for j in style: for k in range(int(len(t)/len(phi))): cs.append(i+j) Colors.append(cs) print(Colors) This is fine and works, plus it is a small operation so I'm not worried about the run-time of it, I am just tired of looking at such a big loop that could probably be written in one line. Answer: The inner for loop over the range is virtually useless, since it only ever executes once. Hence it can be dropped and your code can be condensed to one nested list comprehension. shades = ['k','m'] styles = ['--', ':','-','.'] colors = [[shade+style for style in styles] for shade in shades] print(colors)
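If a flat list of style strings is ever sufficient, the same cross join can also be spelled with itertools.product (a matter of taste rather than speed at this size):

```python
from itertools import product

shades = ['k', 'm']
styles = ['--', ':', '-', '.']

flat = [shade + style for shade, style in product(shades, styles)]
print(flat)  # ['k--', 'k:', 'k-', 'k.', 'm--', 'm:', 'm-', 'm.']
```

product iterates the last argument fastest, so the ordering matches the nested comprehension's row-by-row order.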
{ "domain": "codereview.stackexchange", "id": 43096, "tags": "python, numpy" }
VBA and SQL to return SQL results to Excel
Question: Now that I finally have time to look back at this code I wrote over a year ago. I need help with streamlining the code I have written. I had to write a significant amount of IF statements to get this to work, but im thinking using functions anbd maybe dictionaries would be a much more efficient way to go about this code. My skills are still not the greatest, so some thoughts and examples with how to set this code up to not only run more efficiently, but also streamline the code itself will be of great use. This code does run and gives the desired results. This code runs a SQL search from a IBM AS/400 server based on the criteria entered in by a user on a UserForm. Dim wsCity As Range, wsState As Range, wsAgeL As Range, wsAgeU As Range, wsGender As Range, wsDOB As Range, wsAge As Range Dim strConn As String, strSQL As String, uName As String, empName As String, lableCap As String, sqlCity As String, sqlState As String, sqlGender As String Dim CS As New ADODB.Connection, RS As New ADODB.Recordset Private Sub Search_Click() Dim wsDD As Worksheet Dim DOBRange As Range, AgeRange As Range Dim CitySQL As String, StateSQL As String, DOBSQL As String, CustSQL As String, sqlAgeLStr As String, sqlAgeUStr As String, sqlAgeBStr As String Dim lastRowDOB As Long, lastRowAge As Long, i As Long, lastx As Long Dim cell Dim x As Long, a As Integer, aLower As Integer, aUpper As Integer Set CS = CreateObject("ADODB.Connection") Set RS = CreateObject("ADODB.Recordset") Set wsCity = DE.Range("City") Set wsState = DE.Range("State") Set wsDOB = DE.Range("DOB") Set wsGender = DE.Range("Gender") Set wsAgeL = DE.Range("AgeLower") Set wsAgeU = DE.Range("AgeUpper") aLower = wsAgeL aUpper = wsAgeU sqlAgeLStr = "TIMESTAMPDIFF(256, CHAR(TIMESTAMP(CURRENT TIMESTAMP) - TIMESTAMP(DATE(DIGITS(DECIMAL(cfdob7 + 0.090000, 7, 0))), CURRENT TIME))) >= " & aLower & "" & "" Debug.Print sqlAgeLStr sqlAgeUStr = "TIMESTAMPDIFF(256, CHAR(TIMESTAMP(CURRENT TIMESTAMP) - 
TIMESTAMP(DATE(DIGITS(DECIMAL(cfdob7 + 0.090000, 7, 0))), CURRENT TIME))) >= " & aUpper & "" & "" Debug.Print sqlAgeUStr sqlAgeBStr = "TIMESTAMPDIFF(256, CHAR(TIMESTAMP(CURRENT TIMESTAMP) - TIMESTAMP(DATE(DIGITS(DECIMAL(cfdob7 + 0.090000, 7, 0))), CURRENT TIME))) BETWEEN " & aLower & " AND " & aUpper & "" Debug.Print sqlAgeBStr Application.ScreenUpdating = False strConn = REDACTED FOR PUBLIC VIEWING sqlCity = wsCity.Value sqlState = wsState.Value sqlGender = wsGender.Value strSQL = "SELECT " & _ "cfna1,CFNA2,CFNA3,CFCITY,CFSTAT,LEFT(CFZIP,5) FROM CNCTTP08.JHADAT842.CFMAST CFMAST " & _ "WHERE cfdob7 != 0 AND cfdob7 != 1800001 AND CFDEAD = 'N' AND " a = 0 'SEARCHES BY CITY ONLY If wsCity.Value <> vbNullString And wsState.Value = vbNullString And wsGender.Value = vbNullString And _ wsAgeL = vbNullString And wsAgeU = vbNullString Then a = 1 'SEARCHES BY CITY AND STATE If wsCity.Value <> vbNullString And wsState.Value <> vbNullString And wsGender.Value = vbNullString And _ wsAgeL = vbNullString And wsAgeU = vbNullString Then a = 2 'SEARCHES BY CITY AND GENDER If wsCity.Value <> vbNullString And wsState.Value = vbNullString And wsGender.Value <> vbNullString And _ wsAgeL = vbNullString And wsAgeU = vbNullString Then a = 3 'SEARCHES BY CITY AND AGE LOWER If wsCity.Value <> vbNullString And wsState.Value = vbNullString And wsGender.Value = vbNullString And _ wsAgeL <> vbNullString And wsAgeU = vbNullString Then a = 4 'SEARCHES BY CITY AND AGE UPPER If wsCity.Value <> vbNullString And wsState.Value = vbNullString And wsGender.Value = vbNullString And _ wsAgeL = vbNullString And wsAgeU <> vbNullString Then a = 5 'SEARCHES BY CITY AND FULL AGE RANGE If wsCity.Value <> vbNullString And wsState.Value = vbNullString And wsGender.Value = vbNullString And _ wsAgeL <> vbNullString And wsAgeU <> vbNullString Then a = 6 'SEARCHES BY CITY, GENDER AND FULL AGE RANGE If wsCity.Value <> vbNullString And wsState.Value = vbNullString And wsGender.Value <> vbNullString And _ wsAgeL <> 
vbNullString And wsAgeU <> vbNullString Then a = 7 'SEARCHES BY CITY, STATE AND GENDER If wsCity.Value <> vbNullString And wsState.Value <> vbNullString And wsGender.Value <> vbNullString And _ wsAgeL = vbNullString And wsAgeU = vbNullString Then a = 8 'SEARCHES BY CITY, STATE, GENDER AND LOWER AGE If wsCity.Value <> vbNullString And wsState.Value <> vbNullString And wsGender.Value <> vbNullString And _ wsAgeL <> vbNullString And wsAgeU = vbNullString Then a = 9 'SEARCHES BY CITY, STATE, GENDER, UPPER AGE RANGE If wsCity.Value <> vbNullString And wsState.Value <> vbNullString And wsGender.Value <> vbNullString And _ wsAgeL = vbNullString And wsAgeU <> vbNullString Then a = 10 'SEARCHES BY CITY, STATE, GENDER, FULL AGE RANGE If wsCity.Value <> vbNullString And wsState.Value <> vbNullString And wsGender.Value <> vbNullString And _ wsAgeL <> vbNullString And wsAgeU <> vbNullString Then a = 11 'SEARCHES BY STATE If wsCity.Value = vbNullString And wsState.Value <> vbNullString And wsGender.Value = vbNullString And _ wsAgeL = vbNullString And wsAgeU = vbNullString Then a = 12 'SEARCHES BY STATE AND GENDER If wsCity.Value = vbNullString And wsState.Value <> vbNullString And wsGender.Value <> vbNullString And _ wsAgeL = vbNullString And wsAgeU = vbNullString Then a = 13 'SEARCHES BY STATE AND AGE LOWER If wsCity.Value = vbNullString And wsState.Value <> vbNullString And wsGender.Value = vbNullString And _ wsAgeL <> vbNullString And wsAgeU = vbNullString Then a = 14 'SEARCHES BY STATE AND AGE UPPER If wsCity.Value = vbNullString And wsState.Value <> vbNullString And wsGender.Value = vbNullString And _ wsAgeL = vbNullString And wsAgeU <> vbNullString Then a = 15 'SEARCHES BY STATE AND FULL AGE RANGE If wsCity.Value = vbNullString And wsState.Value <> vbNullString And wsGender.Value = vbNullString And _ wsAgeL <> vbNullString And wsAgeU <> vbNullString Then a = 16 'SEARCHES BY STATE, GENDER AND AGE LOWER If wsCity.Value = vbNullString And wsState.Value <> vbNullString And 
wsGender.Value <> vbNullString And _ wsAgeL <> vbNullString And wsAgeU = vbNullString Then a = 17 'SEARCHES BY STATE, GENDER AND AGE UPPER If wsCity.Value = vbNullString And wsState.Value <> vbNullString And wsGender.Value <> vbNullString And _ wsAgeL = vbNullString And wsAgeU <> vbNullString Then a = 18 'SEARCHES BY STATE, GENDER AND FULL AGE RANGE If wsCity.Value = vbNullString And wsState.Value <> vbNullString And wsGender.Value <> vbNullString And _ wsAgeL <> vbNullString And wsAgeU <> vbNullString Then a = 19 'SEARCHES BY GENDER If wsCity.Value = vbNullString And wsState.Value = vbNullString And wsGender.Value <> vbNullString And _ wsAgeL = vbNullString And wsAgeU = vbNullString Then a = 20 'SEARCHES BY GENDER AND AGE LOWER If wsCity.Value = vbNullString And wsState.Value = vbNullString And wsGender.Value <> vbNullString And _ wsAgeL <> vbNullString And wsAgeU = vbNullString Then a = 21 'SEARCHES BY GENDER AND AGE UPPER If wsCity.Value = vbNullString And wsState.Value = vbNullString And wsGender.Value <> vbNullString And _ wsAgeL = vbNullString And wsAgeU <> vbNullString Then a = 22 'SEARCHES BY GENDER AND FULL AGE RANGE If wsCity.Value = vbNullString And wsState.Value = vbNullString And wsGender.Value <> vbNullString And _ wsAgeL <> vbNullString And wsAgeU <> vbNullString Then a = 23 'SEARCHES BY LOWER AGE RANGE If wsCity.Value = vbNullString And wsState.Value = vbNullString And wsGender.Value = vbNullString And _ wsAgeL <> vbNullString And wsAgeU = vbNullString Then a = 24 'SEARCHES BY UPPER AGE RANGE If wsCity.Value = vbNullString And wsState.Value = vbNullString And wsGender.Value = vbNullString And _ wsAgeL = vbNullString And wsAgeU <> vbNullString Then a = 25 'SEARCHES BY FULL AGE RANGE If wsCity.Value = vbNullString And wsState.Value = vbNullString And wsGender.Value = vbNullString And _ wsAgeL = vbNullString And wsAgeU = vbNullString Then a = 26 'SEARCHES BY CITY, STATE, FULL AGE RANGE If wsCity.Value <> vbNullString And wsState.Value <> vbNullString 
And wsGender.Value = vbNullString And _ wsAgeL <> vbNullString And wsAgeU <> vbNullString Then a = 27 Select Case a Case Is = 1 'SEARCHES BY CITY ONLY strSQL = strSQL & "CFCITY= '" & UCase(wsCity.Value) & "' AND " & _ "CFSEX != 'O'" Case Is = 2 'SEARCHES BY CITY AND STATE strSQL = strSQL & "CFSEX != 'O' AND " & _ "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSTAT = '" & UCase(wsState.Value) & "'" Case Is = 3 'SEARCHES BY CITY AND GENDER strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSEX = '" & wsGender & "'" Case Is = 4 'SEARCHES BY CITY AND AGE LOWER strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ sqlAgeLStr Case Is = 5 'SEARCHES BY CITY AND AGE UPPER strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ sqlAgeUStr Case Is = 6 'SEARCHES BY CITY AND FULL AGE RANGE strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ sqlAgeBStr Case Is = 7 'SEARCHES BY CITY, GENDER, AND FULL AGE RANGE strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSEX = '" & UCase(wsGender.Value) & "' AND " & _ sqlAgeBStr Case Is = 8 'SEARCHES BY CITY, STATE AND GENDER strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ "CFSEX = '" & wsGender & "'" Case Is = 9 'SEARCHES BY CITY, STATE, GENDER AND LOWER AGE strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ "CFSEX = '" & wsGender & "' AND " & _ sqlAgeLStr Case Is = 10 'SEARCHES BY CITY, STATE, GENDER, UPPER AGE RANGE strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ "CFSEX = '" & wsGender & "' AND " & _ sqlAgeUStr Case Is = 11 'SEARCHES BY CITY, STATE, GENDER, FULL AGE RANGE strSQL = strSQL & "CFCITY = '" & UCase(wsCity) & "' AND " & _ "CFSTAT = '" & UCase(wsState) & "' AND " & _ "CFSEX = '" & UCase(wsGender) & "' AND " & _ sqlAgeBStr Case Is = 12 
'SEARCHES BY STATE strSQL = strSQL & "CFSTAT= '" & UCase(wsState.Value) & "'" Case Is = 13 'SEARCHES BY STATE AND GENDER strSQL = strSQL & "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ "CFSEX = '" & wsGender & "'" Case Is = 14 'SEARCHES BY STATE AND AGE LOWER strSQL = strSQL & "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ sqlAgeLStr Case Is = 15 'SEARCHES BY STATE AND AGE UPPER strSQL = strSQL & "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ sqlAgeUStr Case Is = 16 'SEARCHES BY STATE AND FULL AGE RANGE strSQL = strSQL & "CFSTAT = '" & UCase(wsState.Value) & "') AND " & _ sqlAgeBStr Case Is = 17 'SEARCHES BY STATE, GENDER AND AGE LOWER strSQL = strSQL & "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ "CFSEX = '" & wsGender & "' AND " & _ sqlAgeLStr Case Is = 18 'SEARCHES BY STATE, GENDER AND AGE UPPER strSQL = strSQL & "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ "CFSEX = '" & wsGender & "' AND " & _ sqlAgeUStr Case Is = 19 'SEARCHES BY STATE, GENDER AND FULL AGE RANGE strSQL = strSQL & "CFSTAT = '" & UCase(wsState.Value) & "' AND " & _ "CFSEX = '" & wsGender & "' AND " & _ sqlAgeBStr Case Is = 20 'SEARCHES BY GENDER strSQL = strSQL & "CFSEX = '" & wsGender & "'" Case Is = 21 'SEARCHES BY GENDER AND AGE LOWER strSQL = strSQL & "CFSEX = '" & wsGender & "' AND " & _ sqlAgeLStr Case Is = 22 'SEARCHES BY GENDER AND AGE UPPER strSQL = strSQL & "CFSEX = '" & wsGender & "' AND " & _ sqlAgeUStr Case Is = 23 'SEARCHES BY GENDER AND FULL AGE RANGE strSQL = strSQL & "CFSEX = '" & wsGender & "' AND " & _ sqlAgeBStr Case Is = 24 'SEARCHES BY LOWER AGE RANGE strSQL = strSQL & "CFSEX != 'O' AND " & _ sqlAgeLStr Case Is = 25 'SEARCHES BY UPPER AGE RANGE strSQL = strSQL & "CFSEX != 'O' AND " & _ sqlAgeUStr Case Is = 26 'SEARCHES BY FULL AGE RANGE strSQL = strSQL & "CFSEX != 'O' AND " & _ sqlAgeBStr Case Is = 27 'SEARCHES BY CITY, STATE, FULL AGE RANGE strSQL = strSQL & "CFCITY = '" & UCase(wsCity) & "' AND " & _ "CFSTAT = '" & UCase(wsState.Value) & "' AND " & 
_ sqlAgeBStr End Select strSQL = strSQL & " ORDER BY cfna1 ASC" Debug.Print strSQL DataEntry.Hide CS.Open (strConn) RS.Open strSQL, CS MarketingList.Range("B2").CopyFromRecordset RS RS.Close CS.Close Set RS = Nothing Set CS = Nothing Application.ScreenUpdating = True MarketingList.Activate FormatHeaders SearchComplete.Show End Sub Private Sub AgeLower_AfterUpdate() Set wsAgeL = DE.Range("AgeLower") wsAgeL = Format(DataEntry.AgeLower, "0") End Sub Private Sub AgeUpper_AfterUpdate() Set wsAgeU = DE.Range("AgeUpper") wsAgeU = Format(DataEntry.AgeUpper, "0") End Sub Private Sub City_AfterUpdate() Set wsCity = DE.Range("City") wsCity = DataEntry.City End Sub Private Sub Male_Click() Set wsGender = DE.Range("Gender") Select Case DataEntry.Male Case Is = True wsGender = "M" Case Is = False wsGender = vbNullString End Select End Sub Function OrdDateToDate(OrdDate As String) As Long Dim TheYear As Integer Dim TheDay As Integer Dim TheDate As Long TheYear = CInt(Left(OrdDate, 4)) TheDay = CInt(Right(OrdDate, 3)) TheDate = DateSerial(TheYear, 1, TheDay) OrdDateToDate = TheDate End Function Private Sub Female_Click() Set wsGender = DE.Range("Gender") Select Case DataEntry.Female Case Is = True wsGender = "F" Case Is = False wsGender = vbNullString End Select End Sub Answer: Let's start with the easy stuff. At first glance the code looks horrendous, but after taking a closer look, well, it is horrendous. JK. For the most part you need to learn a few tricks that will greatly simplify the code. Miscellaneous As is, I see no reason for the class members, because every one of these fields is being set at each point of use. In this way, if one of the references changes you will have to update the reference at each point of use. It would make more sense to set the fields one time when the userform is initialized.
Private rCity As Range, rState As Range, rAgeL As Range, rAgeU As Range, rGender As Range, rDOB As Range, rAge As Range Private Sub UserForm_Initialize() Set rCity = DE.Range("City") Set rState = DE.Range("State") Set rDOB = DE.Range("DOB") Set rGender = DE.Range("Gender") Set rAgeL = DE.Range("AgeLower") Set rAgeU = DE.Range("AgeUpper") End Sub Why prefix the ranges with ws? Typically, ws signifies Worksheet. wsCity As Range, wsState As Range, wsAgeL As Range, wsAgeU As Range, wsGender As Range, wsDOB As Range, wsAge As Range Why use the New keyword if you are going to set the instances using CreateObject? There is no reason for Connection and Recordset to be fields. They should be local variables. CS As New ADODB.Connection, RS As New ADODB.Recordset What the heck are you setting a class member field for in a control AfterUpdate event? Private Sub City_AfterUpdate() Set wsCity = DE.Range("City") wsCity = DataEntry.City End Sub Use helper variables to simplify and clarify your code. Unless you want to ensure that the user changed the value, don't bother setting your fields here. Use Me instead of DataEntry. Private Sub City_Change() DE.Range("City") = Me.City.Value End Sub Sub Search_Click() This is a bit of a mess. To begin with, this Search_Click() is doing too much: Setting Class Members Establishing a Connection Building a Query String Executing the Query Transferring the Results The fewer tasks that a method performs, the easier it is to test and modify. By combining all the If statements using If and ElseIf, you could eliminate the Select Case block.
If Len(wsCity.Value) > 0 And Len(wsState.Value) = 0 And Len(wsGender.Value) = 0 And Len(wsAgeL) = 0 And Len(wsAgeU) = 0 Then Rem SEARCHES BY CITY ONLY strSQL = strSQL & "CFCITY= '" & UCase(wsCity.Value) & "' AND CFSEX != 'O'" ElseIf Len(wsCity.Value) > 0 And Len(wsState.Value) > 0 And Len(wsGender.Value) = 0 And Len(wsAgeL) = 0 And Len(wsAgeU) = 0 Then Rem SEARCHES BY CITY AND STATE strSQL = strSQL & "CFSEX != 'O' AND " & _ "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSTAT = '" & UCase(wsState.Value) & "'" ElseIf Len(wsCity.Value) > 0 And Len(wsState.Value) = 0 And Len(wsGender.Value) > 0 And Len(wsAgeL) = 0 And Len(wsAgeU) = 0 Then Rem SEARCHES BY CITY AND GENDER strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSEX = '" & wsGender & "'" Rem More Clauses End If Alternately, you could eliminate the If clause by using Select Case True (note that each Case must combine its conditions with And; comma-separated Case expressions mean Or in VBA). Select Case True Rem SEARCHES BY CITY ONLY Case Len(wsCity.Value) > 0 And Len(wsState.Value) = 0 And Len(wsGender.Value) = 0 And Len(wsAgeL) = 0 And Len(wsAgeU) = 0 strSQL = strSQL & "CFCITY= '" & UCase(wsCity.Value) & "' AND CFSEX != 'O'" Rem SEARCHES BY CITY AND STATE Case Len(wsCity.Value) > 0 And Len(wsState.Value) > 0 And Len(wsGender.Value) = 0 And Len(wsAgeL) = 0 And Len(wsAgeU) = 0 strSQL = strSQL & "CFSEX != 'O' AND " & _ "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSTAT = '" & UCase(wsState.Value) & "'" Rem SEARCHES BY CITY AND GENDER Case Len(wsCity.Value) > 0 And Len(wsState.Value) = 0 And Len(wsGender.Value) > 0 And Len(wsAgeL) = 0 And Len(wsAgeU) = 0 strSQL = strSQL & "CFCITY = '" & UCase(wsCity.Value) & "' AND " & _ "CFSEX = '" & wsGender & "'" Rem More Cases End Select I would write a Function in a public module to return the SQL. This function would take all its arguments through parameters and not rely on global variables or worksheet ranges.
This will break the dependency on the current workbook structure and make it far easier to test your code. Function getCFMASTSQL(City As String, State As String, DOB As Single, Gender As String, AgeLower As String, AgeUpper As String) As String Const BaseSQL As String = "SELECT cfna1, CFNA2, CFNA3, CFCITY, CFSTAT, LEFT(CFZIP,5) FROM CNCTTP08.JHADAT842.CFMAST CFMAST " Dim Wheres As New Collection If DOB > 0 Then Wheres.Add "cfdob7 = " & DOB Else Wheres.Add "cfdob7 != 0" Wheres.Add "cfdob7 != 1800001" Wheres.Add "CFDEAD = 'N'" End If If Len(AgeLower) > 0 And Len(AgeUpper) > 0 Then Wheres.Add "TIMESTAMPDIFF(256, CHAR(TIMESTAMP(CURRENT TIMESTAMP) - TIMESTAMP(DATE(DIGITS(DECIMAL(cfdob7 + 0.090000, 7, 0))), CURRENT TIME))) BETWEEN " & AgeLower & " AND " & AgeUpper ElseIf Len(AgeLower) > 0 Then Wheres.Add "TIMESTAMPDIFF(256, CHAR(TIMESTAMP(CURRENT TIMESTAMP) - TIMESTAMP(DATE(DIGITS(DECIMAL(cfdob7 + 0.090000, 7, 0))), CURRENT TIME))) >= " & AgeLower ElseIf Len(AgeUpper) > 0 Then Wheres.Add "TIMESTAMPDIFF(256, CHAR(TIMESTAMP(CURRENT TIMESTAMP) - TIMESTAMP(DATE(DIGITS(DECIMAL(cfdob7 + 0.090000, 7, 0))), CURRENT TIME))) <= " & AgeUpper End If If Len(Gender) > 0 Then Wheres.Add "CFSEX = '" & Gender & "'" Else Wheres.Add "CFSEX != 'O'" End If If Len(City) > 0 Then Wheres.Add "CFCITY = '" & UCase(City) & "'" If Len(State) > 0 Then Wheres.Add "CFSTAT = '" & UCase(State) & "'" Dim SQL As String If Wheres.Count > 0 Then Dim Values() As String ReDim Values(1 To Wheres.Count) Dim n As Long For n = 1 To Wheres.Count Values(n) = Wheres(n) Next SQL = BaseSQL & vbNewLine & "WHERE " & Join(Values, " AND ") Else SQL = BaseSQL End If getCFMASTSQL = SQL End Function
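The clause-collecting pattern in getCFMASTSQL (gather WHERE conditions in a collection, then join them with AND) is language-agnostic. Below is a rough Python sketch of the same idea; the function name, parameters, and column names are illustrative choices, not taken from the original workbook:

```python
def build_where(city="", state="", gender=""):
    """Collect WHERE clauses in a list, then join them with AND.

    Mirrors the Collection-based pattern from getCFMASTSQL. Note: real
    production code should use parameterized queries instead of string
    interpolation, to avoid SQL injection.
    """
    clauses = []
    if city:
        clauses.append("CFCITY = '%s'" % city.upper())
    if state:
        clauses.append("CFSTAT = '%s'" % state.upper())
    if gender:
        clauses.append("CFSEX = '%s'" % gender)
    else:
        clauses.append("CFSEX != 'O'")  # default filter, as in the VBA version

    base = "SELECT cfna1 FROM CFMAST"
    if not clauses:
        return base
    return base + " WHERE " + " AND ".join(clauses)
```

Usage: `build_where(city="austin")` yields `SELECT cfna1 FROM CFMAST WHERE CFCITY = 'AUSTIN' AND CFSEX != 'O'`, with each new filter simply appending one more clause rather than multiplying Case branches.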
{ "domain": "codereview.stackexchange", "id": 36551, "tags": "sql, vba, excel, db2" }
About Control Unit in CPU and Clock Cycle
Question: I've been studying CPUs and I am trying to implement a small CPU, like MU0. The control unit gets an instruction and generates several control signals for the other parts of the CPU, such as the ALU, PC, ACC, etc. And I know that these work in one clock cycle. But I have some questions. Meaning of Signals (0 or 1) in Clock Cycle Within each clock cycle the signal is 1 first and 0 next. Since a CPU is electronic equipment, 1 would mean current is flowing (so, "turn on") and 0 would mean no current flow (so, "turn off"). So, do all control signals connected to the clock circuit turn off at the same time when the clock is 0, if we ignore delays of the circuit? If not, i.e., if control signals can be 1 although the clock is 0, I can state my hypothesis: the clock signal is only meaningful and only used when it toggles 1→0 or 0→1. The control unit cannot work while its signal stays at the same value, since it cannot recognize that time is passing. Is that true or false? Timing to Control each Control Signal (source: tistory.com) Here is the structure of MU0, one of the simplest CPUs. If we want to store data from memory at the address in PC into IR and increase PC by 1 (i.e., PC = PC + 1), the manual says that setting the control signals as below would do that. 0 : Asel, Bsel, ACCce, ACCoe 1 : PCce, IRce, MEMrq, RnW with ALU function B+1. Note that the value of PC goes into the ALU and the output of the ALU is PC+1, since Asel (A selector) and Bsel are 1. But, since PCce (PC chip enable) is 1, PC changes its value to the input of PC, which is the output of the ALU. It seems good to make PC become PC+1. But my question is: this is theoretically an infinite loop, since PC+1 would go into the ALU and PC+2 would be saved in PC, and then PC+2 would go into the ALU and PC+3 would be saved in PC, and so on until the clock cycle ends. To avoid this, the control timing would have to be different for each control signal. Is this really true, or is there another solution? Thanks. Answer: 1.)
(Almost) every component of the CPU needs to be synchronized by the clock, which means they all do their work only if the clock is high (1), never when it's low (0) (or alternatively only during the rising edge). This could for example be implemented using an AND gate, with the input to the component as the first input and the clock signal as the second. So the component will only get its new input if the clock is currently high; otherwise it does nothing. And because the clock period is set above the longest time any signal needs to propagate through any circuit in the CPU, this system prevents signals from arriving before other signals are ready and thus keeps everything safe and synchronized. The control signals can be set at any time, but they will only be accepted while the clock is high. 2.) This is not a loop, since the PC is actually a register which is edge-triggered. This means, as mentioned above, that it changes its state only once per cycle, for a short period of time. At an earlier time, the last output from the ALU (PC + 1) is just waiting at the input of the PC. Then, at the next cycle, the PC takes its input. This is when PC = PC + 1 actually happens.
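The edge-triggered behaviour described above can be sketched with a toy simulation. This is an illustration only; the class and variable names are my own, not MU0's actual implementation:

```python
class Register:
    """Edge-triggered register: the input is latched only on a clock edge."""

    def __init__(self, value=0):
        self.value = value       # current (visible) state
        self.next_input = value  # whatever is waiting at the input pins

    def clock_edge(self):
        # The register samples its input exactly once per rising edge.
        self.value = self.next_input


pc = Register(0)
alu_out = lambda: pc.value + 1   # combinational logic: always PC + 1

for _ in range(3):               # simulate three clock cycles
    pc.next_input = alu_out()    # ALU output settles at the PC's input
    pc.clock_edge()              # PC updates once; no feedback within a cycle
```

Because the register only latches once per edge, the combinational PC+1 feedback path never "runs away" within a cycle: after three cycles the PC has advanced by exactly three.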
{ "domain": "cs.stackexchange", "id": 4395, "tags": "computer-architecture" }
What is the difference between artificial intelligence and machine learning?
Question: These two terms seem to be related, especially in their application in computer science and software engineering. Is one a subset of another? Is one a tool used to build a system for the other? What are their differences and why are they significant? Answer: Machine learning has been defined by many people in multiple (often similar) ways [1, 2]. One definition says that machine learning (ML) is the field of study that gives computers the ability to learn without being explicitly programmed. Given the above definition, we might say that machine learning is geared towards problems for which we have (lots of) data (experience), from which a program can learn and can get better at a task. Artificial intelligence has many more aspects, where machines may not get better at tasks by learning from data, but may exhibit intelligence through rules (e.g. expert systems like Mycin), logic or algorithms (e.g. path-finding). The book Artificial Intelligence: A Modern Approach shows more research fields of AI, like Constraint Satisfaction Problems, Probabilistic Reasoning or Philosophical Foundations.
{ "domain": "ai.stackexchange", "id": 744, "tags": "machine-learning, comparison, terminology, ai-field" }
Unit Testing for a Complex Game
Question: I have known about unit testing for a while now, but I am just now finally understanding how to implement it. I think that my initial implementation is a little rough so I could use some feedback. I have implemented these tests in order to verify that a new feature of my strategy game is working properly. Previously, I would have output these values to the screen somewhere and simply watched to make sure that the expected behavior was happening. With these tests, I have concrete proof that the feature is working. This is much better! However there are some problems. I needed to implement a Testing category for the class that I am testing because some of the methods I needed to run the tests are private. By creating this category at the top of the testing class, I can use private methods from the class without having to permanently expose them. This works, but I have heard that I should only be unit testing public methods, so I am wondering if what I have done is acceptable. I should note that I also had to add a couple getter methods to the class to get some information that was not public. DTDwarfTests.m #import <XCTest/XCTest.h> #import "DTDwarf.h" #import "DTJobUnit.h" @interface DTDwarf (Testing) -(void) setMoodForTesting:(int)moodAmount; -(void) startJobCountdown:(JobType)jobType; -(BOOL) doJobCountdown; -(void) calculateMoodState; -(void) calculateMoodAmountForJob; -(JobType) getFavoriteJob; -(JobType) getHatedJob; @end @interface DTDwarfTests : XCTestCase @end @implementation DTDwarfTests - (void)setUp { [super setUp]; // Put setup code here. This method is called before the invocation of each test method in the class. } - (void)tearDown { // Put teardown code here. This method is called after the invocation of each test method in the class. 
[super tearDown]; } -(void) testMoodInitialization { DTDwarf *dwarf = [[DTDwarf alloc]initWithWorldSize:CGSizeZero]; XCTAssert(dwarf.moodAmount > 0); } -(void) testFavoriteAndHatedJobsValues { DTDwarf *dwarf = [[DTDwarf alloc]initWithWorldSize:CGSizeZero]; XCTAssert([dwarf getFavoriteJob] != JobTypeNumJobTypes); XCTAssert([dwarf getHatedJob] != JobTypeNumJobTypes); } -(void) testFavoriteJobMoodInfluence { DTDwarf *dwarf = [[DTDwarf alloc]initWithWorldSize:CGSizeZero]; [dwarf setMoodForTesting:0]; dwarf.dwarfJobUnit = [[DTJobUnit alloc]initWithJobType:[dwarf getFavoriteJob]]; [dwarf calculateMoodAmountForJob]; XCTAssert(dwarf.moodAmount > 0); } -(void) testHatedJobMoodInfluence { DTDwarf *dwarf = [[DTDwarf alloc]initWithWorldSize:CGSizeZero]; [dwarf setMoodForTesting:1000]; dwarf.dwarfJobUnit = [[DTJobUnit alloc]initWithJobType:[dwarf getHatedJob]]; [dwarf calculateMoodAmountForJob]; XCTAssert(dwarf.moodAmount < 1000); } -(void) testBadMoodIncreasesCountdown { DTDwarf *happyDwarf = [[DTDwarf alloc]initWithWorldSize:CGSizeZero]; [happyDwarf setMoodForTesting:1000]; [happyDwarf calculateMoodState]; [happyDwarf startJobCountdown:JobTypeMining]; int happyDwarfCountdown = 0; while (![happyDwarf doJobCountdown]) { happyDwarfCountdown++; } DTDwarf *sadDwarf = [[DTDwarf alloc]initWithWorldSize:CGSizeZero]; [sadDwarf setMoodForTesting:0]; [sadDwarf calculateMoodState]; [sadDwarf startJobCountdown:JobTypeMining]; int sadDwarfCountdown = 0; while (![sadDwarf doJobCountdown]) { sadDwarfCountdown++; } XCTAssert(happyDwarfCountdown < sadDwarfCountdown); } @end Answer: Unit testing isn't something I do a whole lot of. So as for a general review of what you've written, I can't attest much to that. But as to the idea of creating a class category just to sorta-kinda expose "private" methods for the sake of testing and testing only? Well... let's talk about Objective-C, shall we? Despite the way we talk about "methods", Objective-C doesn't really do... methods...
they're all messages which invoke underlying C functions. In other languages, if you try to call a non-existent method, your code won't compile. In Objective-C, you'll merely get a warning if the file you're in can't see the method you're trying to call, and if it really doesn't exist, you'll get a runtime crash. But the fact remains that you can still compile the code and you can still pass the message to the object. And if the object implements the method that matches the message, it will execute the code. So even without the class category, we could still call the methods on the object--we'd just have a bunch of warnings. And there are ways around the warnings as well. This blog discusses this in more detail, but here's an example of doing this with one of your methods: Given the method: - (void)setMoodForTesting:(int)moodAmount; We could invoke this method as such: SEL mftSelector = NSSelectorFromString(@"setMoodForTesting:"); ((void (*)(id, SEL, int))objc_msgSend)(dwarf, mftSelector, 1000); It may also be worth looking into this blog entry for some information on cleaning this up and making it more reasonable to use. In the end, this allows you to still test private details of the class, but these details aren't exposed anywhere to anyone. If somehow you were to accidentally import this .m directly, you would then expose these private details and could accidentally use them somewhere.
{ "domain": "codereview.stackexchange", "id": 10583, "tags": "game, unit-testing, objective-c" }
Visualisation of bandwidth of filter with sampling
Question: I'm having some trouble visualising (and understanding) the filtering of a sampled signal. If I understand correctly, after sampling a signal, it is desired to have it go through a lowpass filter to retrieve the original signal again. The literature states the following. $\omega_0 < B < \omega_s - \omega_0$ where $\omega_s$ is the sampling frequency and $B$ is, I assume, the bandwidth of the filter. I do not understand what is meant by the above expression, and the literature is kind of crap at explaining it. I was hoping someone could visualise this for me. Answer: $B$ is the cut-off frequency of the anti-alias filter needed before sampling; $\omega_0$ is the signal bandwidth, and $\omega_s$ is the sampling frequency. $B$ can also be the cut-off frequency of the (ideal) reconstruction filter (band-limited interpolator). If this is the case, this suggests that the reconstruction filter is an ideal low-pass filter with cut-off frequency between the signal bandwidth and the next overlapped frequency component ($\omega_s - \omega_0$). The sampled signal contains repetitions of the original spectrum every $\omega_s$, so when reconstructing, you want to get rid of all these repetitions and keep only the spectrum centered at 0. I don't know what literature you are referring to, or why it is so crappy.
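As a quick numeric illustration of the inequality (the frequencies below are made up): an admissible cutoff $B$ with $\omega_0 < B < \omega_s - \omega_0$ exists precisely when $\omega_s > 2\omega_0$, i.e. when sampling above the Nyquist rate:

```python
def valid_cutoff_range(w0, ws):
    """Return the (low, high) open interval of admissible reconstruction-filter
    cutoffs B satisfying w0 < B < ws - w0, or None if no such B exists."""
    low, high = w0, ws - w0
    return (low, high) if low < high else None

# Sampling above the Nyquist rate leaves a gap between the baseband
# spectrum (ending at w0) and the first replica (starting at ws - w0).
assert valid_cutoff_range(w0=100.0, ws=300.0) == (100.0, 200.0)
# At or below 2*w0 the replicas overlap the baseband: no valid B.
assert valid_cutoff_range(w0=100.0, ws=200.0) is None
```

The cutoff must sit inside that gap so the low-pass filter keeps the whole baseband spectrum while rejecting every repetition.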
{ "domain": "dsp.stackexchange", "id": 2876, "tags": "filters, sampling" }
Reads count in metagenomics
Question: Background: I am developing a pipeline for metagenomic studies of the human gut microbiota. In particular, I am mapping read data originating from shotgun whole-genome sequencing onto a gene catalogue (similar to IGC) and counting reads mapped to each feature/gene. Until now I have been using NGLess, which is a rather convenient tool. However, we have some questions about how exactly it counts reads, and it might not be the best option for scaling up our pipeline. Question: I am looking for alternative tools for gene counting. A brief search has shown that htseq-count and featureCounts are two frequently used tools. However, they mainly appear in an RNA-seq context, and I am not certain whether they could be used in my case. I would also appreciate comments on the possibility of using samtools mpileup. Answer: I think that there is a good argument for using technologies adapted from RNA-seq for metagenomic abundance quantification. The tool that I am most familiar with (that I use for this purpose) is kallisto, which uses pseudoalignment instead and is therefore less compute-intensive. For some background on the conceptual linkages between metagenomic abundance and transcriptomic abundance you can see the kallisto metagenome abundance paper or the explanatory blog post. I think that it is preferable to use a tool that is specifically adapted for metagenomics, as there are always some numerical issues that might have been accounted for by the authors. I think that for this purpose it is maybe best not to naively use tools like mpileup, though I have no specific objection to it. However, if you look around you can see various papers using e.g. Cufflinks and Cuffdiff for effectively the same purpose, and Cufflinks is definitely intended for RNA-seq. Depending on your specific application, you might want to use a different tool like mOTUs2, if you are interested in estimating the abundance of specific genomes or lineages in a metagenome mixture.
This method is probably more accurate, as it is based on averaging across the whole genome, but it is not very useful if what you are interested in is, for example, functional profiling of the metagenome.
{ "domain": "bioinformatics.stackexchange", "id": 1697, "tags": "read-mapping, metagenome, featurecounts" }
What's the state machine of DriverNode?
Question: Hi all, I'm looking at the hokuyo node. As far as I understand, the class HokuyoDriver extends driver_base::Driver, which provides the methods doOpen, doClose, doStart and doStop. The class HokuyoNode instead extends driver_base::DriverNode. However, in the code I cannot find how this node calls the doXYZ methods of the HokuyoDriver. In other words, I would like to understand the (implicit or explicit) state machine of driver_base::DriverNode and when the doXYZ methods are called. Originally posted by LucaGhera on ROS Answers with karma: 128 on 2012-05-10 Post score: 2 Answer: It is a bit hard to follow. As best I can tell, driver_base implements three states: Closed: not connected to the device. Opened: device is open, but not streaming data. Running: device is streaming data. There is some confusion about the names: some comments refer to the Opened state as Stopped (following doStop()), but I think they are the same. The dynamic reconfigure parameters are coded with a bit mask, indicating what state changes are necessary for updating each parameter. That is a useful idea, which can easily be implemented without using driver_base explicitly. Originally posted by joq with karma: 25443 on 2012-05-10 This answer was ACCEPTED on the original site Post score: 2
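The three states and the doXYZ transitions described in the answer can be sketched as a minimal state machine. This is a Python illustration of the pattern only; the guard logic is my guess, not driver_base's actual C++ implementation:

```python
class Driver:
    """Toy model of the Closed -> Opened -> Running lifecycle."""

    CLOSED, OPENED, RUNNING = "closed", "opened", "running"

    def __init__(self):
        self.state = self.CLOSED

    def do_open(self):            # Closed -> Opened: connect to the device
        if self.state == self.CLOSED:
            self.state = self.OPENED

    def do_start(self):           # Opened -> Running: begin streaming data
        if self.state == self.OPENED:
            self.state = self.RUNNING

    def do_stop(self):            # Running -> Opened (a.k.a. "Stopped")
        if self.state == self.RUNNING:
            self.state = self.OPENED

    def do_close(self):           # Opened -> Closed: disconnect
        if self.state == self.OPENED:
            self.state = self.CLOSED


d = Driver()
d.do_open()
d.do_start()   # d.state is now "running"
```

The bit-mask idea then maps naturally onto this: each reconfigurable parameter records the deepest state the driver must drop back to (e.g. Closed for a port change) before the parameter can be applied.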
{ "domain": "robotics.stackexchange", "id": 9335, "tags": "ros, hokuyo-node, hokuyo" }
Can one representation of a projector operator be re-arranged to get another?
Question: I have a vector space $V$ and a subspace of $V$, $W$. Let $P$ be the projection operator for subspace $W$. Also let the dimension of $W$ be $d$. I also have two orthonormal bases $(a_1,a_2,...a_d)$ and $(b_1,b_2,....b_d)$ for subspace $W$, where each $a_i$ and $b_i$ $\in V$. Now I can express $P$ in outer product (bra-ket) form in the following two ways $$P_1=\sum_{i=1}^{i=d}|a_i\rangle \langle a_i|...(1)$$ $$P_2=\sum_{i=1}^{i=d}|b_i\rangle \langle b_i|...(2)$$ What I know is that both $P_1$ and $P_2$ represent the operator $P$ and are related by $P_1=UP_2U^{\dagger}$ for some unitary operator $U$. My question is: can the terms in $P_1$ be re-arranged to get $P_2$? What I mean to say is, if I express each $a_i$ in terms of the basis $\{b_j\}$ and put it in equation $(1)$, will I get $(2)$? I am not blindly asking; I tried taking $V$ as 3-D space and $W$ as 2-D space. I was able to show it for some examples, but in general I am not able to prove it. Answer: Since $\lvert a_i\rangle$ and $\lvert b_i\rangle$ are both bases for the space $W$, there exists a unitary $U=\sum \lvert b_j\rangle \langle a_j\rvert$ which maps $\lvert b_i\rangle=U\lvert a_i\rangle$ for all $i$. This $U$ can be naturally embedded in $V$, i.e., we can think of it as an operator $U:V\rightarrow V$. Then, $$ P_2 = \sum \lvert b_i\rangle\langle b_i\rvert = \sum U \lvert a_i\rangle\langle a_i\rvert U^\dagger = P_1\ . $$ The last equality holds because $U$ maps $W$ onto itself, so $UP_1U^\dagger$ is again the orthogonal projector onto $W$, which is $P_1$ itself; equivalently, expanding each $\lvert b_i\rangle$ in the $\{\lvert a_j\rangle\}$ basis and rearranging the terms turns $P_2$ into $P_1$.
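A quick numerical sanity check of this (the particular bases below are my own arbitrary choice, not from the question): take $W\subset\mathbb{R}^3$ to be the $x$-$y$ plane, once with the standard basis and once with a basis rotated by 45 degrees; both outer-product sums give the same matrix.

```python
def outer(u, v):
    """Outer product |u><v| for real vectors, as a nested list."""
    return [[ui * vj for vj in v] for ui in u]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def projector(basis):
    """Sum of outer products |v><v| over an orthonormal basis (real case)."""
    n = len(basis[0])
    P = [[0.0] * n for _ in range(n)]
    for v in basis:
        P = mat_add(P, outer(v, v))
    return P

r = 2 ** -0.5  # 1/sqrt(2)
a_basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # {a_i}: standard basis of the plane
b_basis = [[r, r, 0.0], [r, -r, 0.0]]          # {b_i}: same plane, rotated 45 degrees

P1 = projector(a_basis)
P2 = projector(b_basis)
# P1 and P2 agree entrywise: both project onto the x-y plane.
```

The cross terms from the rotated basis cancel, leaving exactly the projector onto $W$, independent of which orthonormal basis was used.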
{ "domain": "physics.stackexchange", "id": 19051, "tags": "quantum-information, operators, vectors" }
Is ANF-SAT P or NP?
Question: Given a finite set of equations in ANF, for example: $$ \begin{cases} (x_1 \land x_2) \oplus (x_1 \land x_3 \land x_4) \oplus 1 = 0 \\ x_3 \oplus (x_2 \land x_3 \land x_4) = 0 \\ (x_1 \land x_4) \oplus (x_1 \land x_2) \oplus (x_3 \land x_4) = 0 \end{cases} $$ Is this P or NP? The only assumption is that the number of variables is finite. I know it can be converted to CNF and become NP-Complete, but I can't find an algorithm for converting a general ANF to CNF that runs in polynomial time (so this conversion does not imply it is NP-Complete). This is also different from XOR-SAT, as it is not linear and so Gaussian elimination is not an option. The answer might use Schaefer's dichotomy theorem, but I'm not sure if it applies or not. This is similar to this question, but the OP was not clear about the question and there is also no clear answer, so I'm asking a clear one here. Answer: If you are asking about the problem where the input is a system of equations in ANF (algebraic normal form) and the output is whether the system of equations is satisfiable, this problem is NP-complete. There is a reduction from 3SAT. Suppose we have a 3CNF formula $\varphi = C_1 \land \cdots \land C_m$, where $C_i$ is the $i$th clause in $\varphi$, with variables $x_1,\dots,x_n$. We'll create a system of ANF equations that are satisfiable iff $\varphi$ is satisfiable, as follows. First, let's take care of any negations in $\varphi$. For each variable $x_i$, introduce another variable $y_i$, along with the ANF equation $$1 \oplus x_i \oplus y_i = 0.$$ Then, we can replace each negated literal $\neg x_j$ in $\varphi$ with $y_j$. In this way we obtain a 3CNF formula with only non-negated literals. Next, we will introduce one ANF equation per clause, as follows. Introduce variables $z_1,\dots,z_m$. Suppose clause $C_i$ has the form $a \lor b \lor c$ (after removing negations).
Then we will generate the ANF equation $$z_i + abc + ab + bc + ac + a + b + c = 0,$$ where addition is mod 2 (i.e., $+$ here denotes $\oplus$). Notice that this forces $z_i$ to be true iff clause $C_i$ is satisfied. Finally, add the ANF equation $$1 + z_1 z_2 \cdots z_m = 0.$$ This forces all of the $z_i$ to be true, i.e., all of the clauses $C_i$ to be satisfied. Now the system of all ANF equations generated in this way is satisfiable iff $\varphi$ is satisfiable.
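The clause gadget can be checked by brute force over GF(2), where + is XOR and multiplication is AND (a quick sketch):

```python
from itertools import product

def gadget_holds(z, a, b, c):
    """The ANF equation z + abc + ab + bc + ac + a + b + c = 0 over GF(2)."""
    return (z + a*b*c + a*b + b*c + a*c + a + b + c) % 2 == 0

# For every assignment of (a, b, c), the equation is satisfied exactly
# when z equals the clause value a OR b OR c -- and fails for the other z.
for a, b, c in product([0, 1], repeat=3):
    clause = 1 if (a or b or c) else 0
    assert gadget_holds(clause, a, b, c)
    assert not gadget_holds(1 - clause, a, b, c)
```

This is just the GF(2) polynomial identity $a \lor b \lor c = a \oplus b \oplus c \oplus ab \oplus ac \oplus bc \oplus abc$ with the result moved to the left-hand side.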
{ "domain": "cs.stackexchange", "id": 20362, "tags": "complexity-theory, satisfiability, p-vs-np" }
Efficient 2D convolution/cross-correlation along only one axis (1D output)
Question: I wish to convolve/cross-correlate two images and but, only horizontally, yielding 1D output. So, first output point would be sum(x * h), second sum(x * h_shift1), where h_shift1 is h horizontally shifted by 1 sample, in Python np.roll(h, axis=1). Basically, pass images through each other horizontally. I know that 2D cross-correlation is efficiently done as $$ \texttt{iFFT}_{2d}\bigg( \texttt{FFT}_{2d}\big(x\big) \cdot \overline{\texttt{FFT}_{2d}\big(\texttt{iFFTSHIFT}_{2d}(h)\big)} \bigg) $$ but this yields a 2D output which I don't need. Can my case be handled efficiently? Convention For sake of this post, assume a "backward" (which I think should be standard) definition, where $\overline{f(\cdot)}$ is complex conjugation: $$ (x \star h)(\tau) = \int_{-\infty}^{\infty} x(t) \overline{h(t - \tau)} dt $$ which just flips the order of arguments relative to standard. An answer with the standard definition is also acceptable. Clarification This is not "batched 1D" / "many 1D". The sum and product for every shift is 2D: $$ (\mathbf{a} \star \mathbf{b})(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathbf{a}(y, x) \overline{\mathbf{b}(y, x - \tau)} dxdy $$ Again, out[i] = sum(a * shift(b, i)), where a and b are 2D and sum yields a scalar. Brute force example MATLAB -- Python -- same outputs Answer: We observe that "2D along 1D" is equivalently: first do 1D horizontally (each row independently), then sum vertically. The complete operation for an output point, except for a shift, is sum(a * b), which is a 2D product and 2D sum. 1D convolution for row m does sum(a[m, :] * b[m, :]) for all shifts of b by i, b_i. Summing vertically for a given i is hence summing sum(a[m, :] * b_i[m, :]) for all m. (2) is same as sum(a * b_i), i.e. sum(a[:, :] * b_i[:, :]). So, if we let hf = ifftshift(conj(fft(h, axis=1)), axes=1), and prod = fft(x, axis=1) * hf, then it's just sum(ifft(prod, axis=1), axis=0). 
But we observe, by linearity, we can move sum inside ifft for a great speedup. All together, $$ \texttt{CC}_{2d1d}(x, h) = \texttt{iFFT}_{1d} \left( \sum_{m=0}^{M - 1} \left( \texttt{FFT}_{1d}\big(x\big) \cdot \overline{\texttt{FFT}_{1d}\big(\texttt{iFFTSHIFT}_{1d}(h)\big)} \right)[m] \right) $$ where 2D indexing is $x[m, n]$, and $\texttt{op}_{1d}$ denotes 1D operation along $n$'s axis. Thanks to @CrisLuengo and @Royi for pointers. Example in question Applying in code (extending the code at bottom)... import matplotlib.pyplot as plt from PIL import Image # load image as greyscale x = np.array(Image.open("cim0.png").convert("L")) / 255. h = np.array(Image.open("cim1.png").convert("L")) / 255. # blank regions default to `1`, undo that x[x==1] = 0 h[h==1] = 0 # compute out = cc2d1d(x, h)[0].real # plot plt.plot(out); plt.show() the peak is near center, as expected: Applications I used it to identify abrupt changes in audio, by cross-correlating CWT's impulse response with non-linearly filtered version of SSQ_CWT. So one major use is 2D template matching upon underlying 1D structures. Surely there's plenty others. (Note for those curious in the linked post) But! I by no means did this with images like in this post. An "image" involves up to three major modifications - compression, color-mapping, and clipping (vmin, vmax args in plt.imshow) - which change its numeric representation once loaded from image into array. Instead I operated on the original arrays, and it's clear from the worse results in this post. Convolution? Vertical? Convolution: remove np.conj Vertical: ifftshift(, axes=1) -> ifftshift(, axes=0), and mean(axis=0) -> mean(axis=1) Boundary effects / "time aliasing": pad and unpad exactly same as with 1D convolutions. But note, if $h$ isn't reusable, it's faster to adjust unpad indices instead of doing ifftshift, as shown in Royi's answer on conv2 (ignore vertical unpad). 
Benchmarks (CPU) For reusable $h$: def cc2d1d_hf(x, hf): return ifft((fft(x) * hf).sum(axis=0)) shapes = [(8192, 8192), (256, 262144), (262144, 256)] for shape in shapes: x = np.random.randn(*shape) hf = np.conj(fft(ifftshift(np.random.randn(*shape), axes=1))) %timeit cc2d1d_hf(x, hf) 3.01 s ± 122 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) 4.21 s ± 138 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) 2.3 s ± 78.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) Code import numpy as np from numpy.fft import fft, ifft, ifftshift def cc2d1d(x, h): prod = fft(x) * np.conj(fft(ifftshift(h, axes=1))) return ifft(prod.sum(axis=0)) def cc2d1d_brute(x, h): out = np.zeros(x.shape[-1], dtype=x.dtype) h = ifftshift(np.conj(h), axes=1) for i in range(len(out)): out[i] = np.sum(x * np.roll(h, i, axis=1)) return out for M in (128, 129): for N in (128, 129): x = np.random.randn(M, N) + 1j*np.random.randn(M, N) h = np.random.randn(M, N) + 1j*np.random.randn(M, N) out0 = cc2d1d(x, h) out1 = cc2d1d_brute(x, h) assert np.allclose(out0, out1)
{ "domain": "dsp.stackexchange", "id": 11970, "tags": "image-processing, convolution, cross-correlation" }
Calculating user birth information
Question: Can you please check if I've written the code correctly? The task was: Calculate the user's month of birth as a number, where January = 0 through to December = 11. Take the string entered Get the substring, being the first three characters Convert to uppercase Find the starting location of the three-letter abbreviation in the month abbreviations string, and divide this by 3 (this is not the only way to find the month number, but it allows us to practice searching in a string) var year = prompt('Enter year of birth as a 4 digit integer'); var month = prompt('Enter the name of the month of birth'); // Chop everything after the first 3 characters and make it uppercase month = month.substring(0,3).toUpperCase(); // Store your array in months, differently named than the month input var months = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN", "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"]; // We then use array.indexOf() to locate it in the array var pos = months.indexOf(month); if (pos >= 0) { // valid month, number is pos } var date = prompt('Enter day of birth as an integer'); Answer: Here is the whole code; explanations after it. (function () { var year, month, months, pos, date; year = prompt( "Enter year of birth as a 4 digit integer" ); month = prompt( "Enter the name of the month of birth" ); months = "JANFEBMARAPRMAYJUNJULAUGSEPOCTNOVDEC"; pos = months.indexOf( month.substring( 0, 3 ).toUpperCase() ); if ( pos === -1 ) { alert( "Invalid month name: " + month ); } else { alert( "Month number: " + ( 1 + pos / 3 ) ); } date = prompt( "Enter day of birth as an integer" ); })(); (function () { ... })(); The above code is called an Immediately-Invoked Function Expression (IIFE), and is used to not pollute the global namespace with all of our variables. Variables declared with a var are only accessible inside the IIFE. Other styles of IIFEs are also used (such as with a semicolon before it); however, this is what I prefer.
var year, month, months, pos, date;

It is considered good practice to put all variable declarations at the top. This makes for better minification (when the code is used in production) and helps prevent errors due to misunderstanding of variable hoisting.

year = prompt( "Enter year of birth as a 4 digit integer" );
month = prompt( "Enter the name of the month of birth" );

For a better user experience, it might have been better to use a form, but I don't think that would be worth the effort. Note in addition that I standardized the quotes (" everywhere), and that I used spaces inside parentheses (different coding styles differ; this is my preferred one).

months = "JANFEBMARAPRMAYJUNJULAUGSEPOCTNOVDEC";

These are the names of the months (first three letters) in one string.

pos = months.indexOf( month.substring( 0, 3 ).toUpperCase() );

This gets the (zero-based) location of the searched month. Note that your way is more robust (for example, try an input of Unjvember), but this seems to be what was asked for.

if ( pos === -1 ) {
    alert( "Invalid month name: " + month );
} else {
    alert( "Month number: " + ( 1 + pos / 3 ) );
}

The requirements didn't ask explicitly for an alert telling the number, but I put it in anyway.

date = prompt( "Enter day of birth as an integer" );

This (and the first prompt for a year) was in your original code, so I left it in (formatted).
{ "domain": "codereview.stackexchange", "id": 2139, "tags": "javascript, strings, beginner, datetime" }
My Robot Doesn't Move - What am I doing wrong?
Question: Greetings, I have been working for days, struggling through all the documentation and reading forums. All this to no avail. My Failed Robot Gist Here I can get the robot to load in RVIZ just fine. The joints all rotate as expected. Once I get to Gazebo though, it just sort of rolls a tiny bit back and forth. I have a little window that pops up where I can induce some forward velocity on the left wheel. But NOTHING happens. Would anyone be so kind as to check out the files in the Gist? I played with inertials and PIDs but NOTHING is working. Cheers, Coach Originally posted by Coach Allen on Gazebo Answers with karma: 3 on 2020-01-27 Post score: 0 Original comments Comment by Coach Allen on 2020-01-29: Chirp Chirp Answer: So the first error I see is that you do not load the joint_state_controller in the controller_spawner node. ackerman.launch: line: 13 <node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false" output="screen" ns="/basic_frame_pig" args="joint_state_controller front_left_wheel_controller"/> After that the namespaces might be a little challenging, so if this isn't enough to make it work, I would look into the namespaces. Originally posted by kumpakri with karma: 755 on 2020-01-31 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 4469, "tags": "gazebo, ros-melodic, gazebo-all-versions" }
Does resonant inductive coupling work in the presence of a strong magnetic field?
Question: Does resonant inductive coupling work in the presence of a strong magnetic field? I am unsure because resonant inductive coupling uses magnetic fields to transmit power wirelessly and a strong magnetic field may cause interference. Example scenario: two devices using RIC in an MRI machine. Will they work as expected? Answer: RIC involves magnetic fields oscillating at a high frequency. The system won't pick up any other frequency. A constant-in-time (DC) magnetic field has no direct effect because 0 Hz is the wrong frequency, and it has no indirect effect by the superposition principle. I guess it's possible that there may be a magnetic material in the RIC system (say, an iron-core inductor). I doubt there is, but I don't know for sure. If there is, there could be problems because (1) The DC magnetic field would affect the AC magnetic susceptibility (the superposition principle stops being true with strong fields and magnetic materials), and (2) The component would get ripped out and go flying and damage the MRI :-D
{ "domain": "physics.stackexchange", "id": 24365, "tags": "electromagnetic-radiation, magnetic-fields, induction, resonance" }
Why is marginal reconstruction "more correct" than joint reconstruction in some cases?
Question: When reconstructing the ancestral states on a phylogenetic tree given the states at the tips, there are a number of methods for performing the reconstruction. This question is about marginal and joint maximum likelihood reconstruction. To quote the wiki, marginal reconstruction "is akin to a greedy algorithm that makes the locally optimal choice at each stage of the optimization problem", while joint reconstruction "find(s) the joint combination of ancestral character states throughout the tree which jointly maximizes the likelihood of the data". Pupko et al. (2000) describe an algorithm for performing joint reconstruction. I have a question about a paragraph in the discussion of this paper, where they discuss the differences in the results between the joint and marginal reconstructions. Deciding which is "more correct" depends on the question asked. For instance, if one wishes to count the number of threonine-to-methionine replacements over the entire tree, then the joint reconstruction should be used to obtain this number (2, in our case, on the branch connecting node 24 to node 3 and the branch connecting node 32 to node 33). However, if one wishes to synthesize the hypothetical cytochrome b sequence of the ancestor of Cetartiodactyla, then one should use the marginal reconstruction approach. We emphasize that both methods compute optimal reconstructions by using all of the available data. Discrepancies originate not from misuse of information, but from the difference in the nature of the probabilistic questions asked. I don't understand why you would use marginal reconstruction for estimating the ancestral state at one node, or how the question you ask affects which method you should use. There is only one true ancestral state; I would think that the correct method for any question is the one that most accurately estimates the true ancestral state.
Can someone shed some light on why the marginal reconstruction method is preferable in this situation? Answer: I know this is too late, but since this is something I'm struggling with too, I thought I'd post here in case others also find their way here. I don't have an answer, but here are some quotes that might help (I found them at least partially helpful). Yang (2004): "Marginal reconstruction is more suitable when one wants the sequence at a particular node, as in the molecular restoration studies. Joint reconstruction is more suitable when one counts changes at each site." http://aracnologia.macn.gov.ar/st/biblio/Yang%202006%20Computational%20Molecular%20Evolution.pdf Revell (2014): "Joint reconstruction is finding the set of character states at all nodes that (jointly) maximizes the likelihood." "Marginal reconstruction is finding the state at the current node that maximizes the likelihood integrating over all other states at all nodes, in proportion to their probability" http://www.phytools.org/eqg/Lecture_5.1/Revell.ancestral-state-reconstruction.pdf Joint reconstruction is "not (necessarily) equivalent to picking the state at each node with the highest probability" (Revell 2014), but is the method of picking the tree with the overall highest MLE.
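The distinction in these quotes can be made concrete with a tiny toy model (my own minimal illustration, not from Pupko et al. or the cited sources): with an explicit joint distribution over the states of two nodes, the per-node marginal maxima can combine into an assignment that is jointly impossible, while the joint optimum need not maximize any single node's marginal.

```python
# Explicit joint distribution over the states (X, Y) at two nodes.
P = {
    (0, 0): 0.35,
    (1, 1): 0.33,
    (1, 2): 0.32,   # unlisted combinations have probability 0
}

# Joint reconstruction: the single most probable complete assignment.
joint_best = max(P, key=P.get)

# Marginal reconstruction: pick each node's state from its own marginal.
def marginal_best(axis):
    states = {key[axis] for key in P}
    return max(states, key=lambda s: sum(p for key, p in P.items() if key[axis] == s))

marg_best = (marginal_best(0), marginal_best(1))

print(joint_best)  # → (0, 0): jointly most probable, yet X=0 is marginally suboptimal
print(marg_best)   # → (1, 0): each state marginally best, jointly impossible
```

Here X=1 carries marginal probability 0.65 and Y=0 carries 0.35, so the marginal picks are (1, 0), a combination with probability zero; the joint pick (0, 0) has probability 0.35. This is exactly why counting changes wants the joint answer while "synthesize the ancestral sequence at this node" wants the marginal one.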
{ "domain": "biology.stackexchange", "id": 10169, "tags": "evolution, theoretical-biology, phylogenetics" }
Problem arising when make ROSARIA
Question: The rosaria installation is successful, but when I type catkin_make, the command line shows: ~/src/rosaria/RosAria.cpp:6:25: fatal error: Aria/Aria.h: No such file or directory compilation terminated. make[2]: *** [rosaria/CMakeFiles/RosAria.dir/RosAria.cpp.o] Error 1 make[1]: *** [rosaria/CMakeFiles/RosAria.dir/all] Error 2 How can I solve this? Thanks a lot! Originally posted by yujunzeng on ROS Answers with karma: 13 on 2014-11-04 Post score: 0 Original comments Comment by ReedHedges on 2014-12-03: The ARIA library should have been downloaded and installed when you used rosdep update and rosdep install rosaria (http://wiki.ros.org/ROSARIA/Tutorials/How%20to%20use%20ROSARIA). What was the output of those commands, any errors? Comment by ReedHedges on 2014-12-03: Note that rosdep install rosaria builds and installs ARIA differently than the normal package from Adept MobileRobots. You can skip that if you want, in which case RosAria should, when built, use the normal Adept install location /usr/local/Aria instead... but either way ARIA needs to be installed. Answer: Provide more details about the system and ROS distro you are using. It seems you do not have your ROS workspace properly configured or you are not installing in the ROS workspace. Are you sourcing the setup.bash file in your catkin_ws/devel folder properly? Originally posted by Vegeta with karma: 340 on 2014-12-03 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 19959, "tags": "rosaria, ros-indigo" }
What is the size in m³ of a toxic benzene cloud if 20.000 kg of liquid benzene would vaporize?
Question: Benzene is a carcinogen. The short-term exposure limit for airborne benzene is 5 ppm for 15 minutes; see https://en.wikipedia.org/wiki/Benzene#Benzene_exposure_limits I would like to know which volume of air could contain toxic benzene if a benzene ship with 2000 tons of benzene should vaporise to the air. Benzene is heavier than air so it would be close to the ground. How close is not a chemical subject but a physical phenomenon which is out of the scope of this question. In this calculation I use a lot of assumptions, just because of the lack of info. But at least it gives an idea of what might happen. Hopefully, at a later moment, some assumptions could be replaced by facts, making the calculation more accurate. For now it is meant to get an idea of what would happen in the worst case scenario. Assumption: only 1% of the benzene vaporises. The question of how much benzene could vaporise is out of the scope of this question. Calculation adjusted with info from @maurice: Benzene is not an ideal gas. The vapor pressure of benzene is about 0.04 bar at 0°C. Mol mass benzene: 78,11184 g/mol Mol / kg: ~12,8 mol benzene (1000 g / 78,11184 g/mol) 0.022413969545014... m3/mol at 0 °C, see https://en.wikipedia.org/wiki/Molar_volume using Avogadro's law The vapor pressure of benzene at 0°C: 0.04 bar 1 kg benzene liquid: 0,01148 m³ benzene gas at 1 bar (~12,8 mol benzene * 0.022413969545014... m3/mol * 0,04 bar / 1 bar) Benzene to vaporize: 1 % Mass vaporized benzene: 20.000 kg ( 2000.000 kg * 1%) Total 100% benzene gas: 230 m³ ( 20.000 kg liquid benzene * 0,01148 m³/kg) Assumption: height of the benzene cloud from the ground: 1 meter high: Surface: 230 m² 3 meter high: Surface: 77 m² (230 m³ / 3 m) Assumption: gas will spread as a circle: Radius with height of 3 meter: 8,5 m (230 m³ = 3,14 * r²) Diameter cloud of 3 m high: 17 m Which can move with the speed of wind, which can be 0 m/s (0 Beaufort) up to 28 m/s (10 Beaufort) Benzene will probably mix with air.
But airborne benzene stays toxic down to 5 ppm. So the maximum size of the toxic cloud would be: The short-term exposure limit for airborne benzene is 5 ppm. Maximum amount of gas which could be carcinogenic: ~46 * 10⁶ m³ (230 m³ / 5 × 10⁻⁶) Assumption: height of the benzene cloud from the ground: 1 meter high: Surface: 46 * 10⁶ m² 3 meter high: Surface: 9,2 * 10⁸ m² ( 46 * 10⁶ m³ / 3 m) Assumption: gas will spread as a circle: Radius with height of 3 meter: 3,8 km (46 * 10⁶ m² = 3,14 * r²) Diameter cloud of 3 m high: 7,6 km If the cloud gets larger it would be less than 5 ppm and not toxic anymore. This will be a matter of wind speed and time, which are both beyond the scope of this question. The key question for me is: Is the calculation for a vessel loaded with 2000 tons of benzene where 1% leaks generates a cloud of carcinogenic air between about 230 m³ and 46 * 10⁶ m³ correct? Or am I missing important issues? Answer: According to a vapor pressure online calculator, at $t=\pu{20 ^\circ C}$, the vapor pressure of benzene equates to $p=\pu{10.018 kPa \simeq 10 kPa = 0.1 atm = 100 000 ppmv}$. If we consider benzene vapor as an ideal gas, the saturated vapor density $\rho$ is the maximal concentration of benzene vapor at this given temperature: $$\rho = \frac{pM}{RT} = \frac{10000 \cdot 0.078}{8.314 \cdot 293.15}\ \pu{kg m^-3}\simeq \pu{0.320 kg m^-3}$$ The volume of air, saturated by $m=\pu{20 000 kg}$ of benzene, is: $$V_\mathrm{saturated}=\frac m\rho \simeq \pu{62500 m^3}$$ The air volume for the $\pu{15 min}$ safety threshold at $\pu{5 ppmv}$ is: $$V_\mathrm{threshold}=V_\mathrm{saturated} \frac{100000}{5}\simeq \pu{1.25e9 m^3}$$ Assuming a uniform, pie-shaped cloud of height $h=\pu{3 m}$, the cloud would have a diameter $D$: $$D=\sqrt{\frac{4 \cdot V_\mathrm{threshold}}{\pi h}}=\sqrt{\frac{4 \cdot \pu{1.25e9 m^3}}{3.14 \cdot \pu{3 m}}}\simeq \pu{23 000 m}$$ The diameter for the analogous cloud of the saturated air would be $\pu{163 m}$.
For obvious reasons, the real cloud shape and concentration distribution would be very different and hard to predict. The 3D shape of the cloud depends strongly on the wind speed, the vertical air thermal gradient and the vertical gradient of the wind speed (see the Richardson number). This determines the level of turbulent mixing in the lowest hundreds of meters of the troposphere. But the estimate above provides a ballpark figure.
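For reference, the answer's arithmetic can be reproduced with a short script (same ideal-gas assumptions and rounded constants as above; this is just a check of the numbers, not a dispersion model):

```python
import math

p = 10_000.0    # Pa, benzene vapour pressure at 20 °C (rounded as in the answer)
M = 0.078       # kg/mol, molar mass of benzene
R = 8.314       # J/(mol*K), gas constant
T = 293.15      # K
m = 20_000.0    # kg of benzene assumed to vaporise

rho = p * M / (R * T)                      # saturated vapour density, kg/m^3
V_saturated = m / rho                      # m^3 of benzene-saturated air
V_threshold = V_saturated * 100_000 / 5    # dilute 100 000 ppmv down to 5 ppmv

h = 3.0                                    # assumed uniform cloud height, m
D = math.sqrt(4 * V_threshold / (math.pi * h))

print(f"rho ≈ {rho:.3f} kg/m^3")           # ≈ 0.320
print(f"V_saturated ≈ {V_saturated:.0f} m^3")  # ≈ 62 500
print(f"D ≈ {D:.0f} m")                    # ≈ 23 000
```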
{ "domain": "chemistry.stackexchange", "id": 14447, "tags": "concentration, safety" }
Sort Array By Parity
Question: The task is taken from leetcode Given an array A of non-negative integers, return an array consisting of all the even elements of A, followed by all the odd elements of A. You may return any answer array that satisfies this condition. Example 1: Input: [3,1,2,4] Output: [2,4,3,1] The outputs [4,2,3,1], [2,4,1,3], and [4,2,1,3] would also be accepted. Note: 1 <= A.length <= 5000 0 <= A[i] <= 5000 My functional solution 1 const sortArrayByParity = A => A.reduce((acc, x) => { if (x % 2 === 0) { acc.unshift(x); } else { acc.push(x); } return acc; }, []); My functional solution 2 const sortArrayByParity = A => A.reduce((acc, x) => x % 2 === 0 ? [x, ...acc] : [...acc, x]); My imperative solution function sortArrayByParity2(A) { const odd = []; const even = []; for (const a of A) { if (a % 2 === 0) { even.push(a); } else { odd.push(a); } } return [...even, ...odd]; }; Answer: Bug Your second functional solution does not run. You forgot to add the second argument to A.reduce. I will assume you wanted an array as the last argument. Why functional sucks This example clearly illustrates the problem with some functional solutions that involve data manipulation. The requirement of no side effects forces the solution to copy the whole dataset even when you manipulate only a single item. This is particularly noticeable in your second functional solution. See performance results below. Code and Style Some minor points... function () {}; the trailing semicolon is not needed. Swap the conditions. You have if(a % 2 === 0) { /*even*/ } else { /*odd*/ } ... can be if(a % 2) { /*odd*/ } else { /*even*/ } Compact code. Try to avoid sprawling code. It may not matter for small segments of code, but source code can be very long and reading code that spans pages is not as easy as having it all in one view. Before a newline it is either } or ;. There are two exceptions. The } that closes an object literal should have a closing ; eg const a = {};.
And multi line statements and expressions. Know the language. You do a lot of code examples, many of which are rather trivial. Of late many of your posts contain bugs or incomplete code (may be a sign of boredom? or a lack of challenge (engagement)). I do not believe in the classical closed book assessment examination; it does not reflect the real world. However a good memory of the field makes you a more productive coder. There are many subtle tricks in JavaScript that can catch you out if unaware. Testing your knowledge improves your understanding of the language, making you a better coder. This is an example JavaScript Web Development Quiz picked at random from a web search "javascript quiz". It is good practice to do one of these every now and then. I did not get 100%. Example A Compacting the function. function sortByParity(arr) { const odd = [], even = []; for (const val of arr) { if (val % 2) { odd.push(val) } else { even.push(val) } } return [...even, ...odd]; } Performance The second functional example was so slow I had to push the other best time down to the timer resolution cutoff 0.2ms or it would have taken forever to complete the test. The functions as tested function sortByParity_I1(A) { const odd = [], even = []; for (const a of A) { if (a % 2 === 0) { even.push(a) } else { odd.push(a) } } return [...even, ...odd]; } const sortByParity_F2 = A => A.reduce((acc, x) => x % 2 === 0 ? [x, ...acc] : [...acc, x], []); const sortByParity_F1 = A => A.reduce((acc, x) => { if (x % 2 === 0) { acc.unshift(x) } else { acc.push(x) } return acc; }, []); Benchmarks Mean time per call to the function in 1/1,000,000 second. OPS is operations per second. % is relative performance of best.
For array of 1000 random integers sortByParity_I1: 20.709µs OPS 48,287 100% sortByParity_F1: 133.933µs OPS 7,466 15% sortByParity_F2: 3,514.830µs OPS 284 1% For array of 100 random integers sortByParity_I1: 2.049µs OPS 488,148 100% sortByParity_F1: 10.005µs OPS 99,947 20% sortByParity_F2: 46.679µs OPS 21,422 4% Note that the results for the functional solution do not have a linear relationship with the size of the array. Improving performance I will drop the slow functional and add an improved imperative function that pre allocates the result array. This avoids the overhead of allocation when you grow the arrays. As there is one array the overhead of joining the two arrays is avoided as well. This almost doubles the performance. function sortByParity_I2(A) { const res = new Array(A.length); var top = res.length - 1, bot = 0; for (const a of A) { res[a % 2 ? top-- : bot ++] = a } return res; } 10000 items sortByParity_I2: 119.851µs OPS 8,343 100% sortByParity_I1: 223.468µs OPS 4,474 54% sortByParity_F1: 5,092.816µs OPS 196 2% 1000 items sortByParity_I2: 13.094µs OPS 76,372 100% sortByParity_I1: 23.731µs OPS 42,138 55% sortByParity_F1: 123.381µs OPS 8,104 11% 100 items sortByParity_I2: 0.900µs OPS 1,110,691 100% sortByParity_I1: 2.245µs OPS 445,398 40% sortByParity_F1: 9.520µs OPS 105,039 9% Test on Win10 Chrome 73 CPU i7 1.5Ghz
{ "domain": "codereview.stackexchange", "id": 34315, "tags": "javascript, algorithm, programming-challenge, functional-programming, ecmascript-6" }
How do I save selected sequences in seqkit to a file?
Question: Consider: seqkit grep -nrp ";s__nosocomialis;" ribogrove_11.217_sequences.fasta Using this command lets me select all sequences related to the organism "A. nosocomialis" in the RiboGrove database. How do I save these selected sequences to a new ".fasta" file? Answer: Based on the docs, seqkit grep has a -o "output file" option, so if you get the correct output with: seqkit grep -nrp ";s__nosocomialis;" ribogrove_11.217_sequences.fasta Then this command will (in theory) give you your desired outcome: seqkit grep -nrp ";s__nosocomialis;" ribogrove_11.217_sequences.fasta -o new.fasta
{ "domain": "bioinformatics.stackexchange", "id": 2466, "tags": "seqkit" }
What happens when you replace an identity matrix with a matrix full of ones?
Question: In physics, we often use resolutions of identity $$\sum_n |n\rangle\langle n|=\mathbb{I}$$ to simplify expressions. Sometimes, the "full matrix" (for lack of a better term) $$\sum_{m,n}|m\rangle\langle n|\equiv\mathbb{J}$$ appears instead. This has properties like $$\mathbb{J}^N=\mathbb{J}\,\mathrm{Tr}[\mathbb{J}]^{N-1}$$ instead of the usual $\mathbb{I}^N=\mathbb{I}$. Can we say anything conclusive about the relationship between $$\langle \psi|\mathbb{J}|\phi\rangle\qquad \mathrm{and}\qquad \langle \psi|\mathbb{I}|\phi\rangle=\langle\psi|\phi\rangle,$$ or is there no direct way of simplifying $\langle \psi|\mathbb{J}|\phi\rangle$? This question came up in simplifying a sum of Clebsch-Gordan coefficients \begin{align} \sum_J \langle j_1,k_1;j_2,k_2|J,k_1+k_2\rangle\langle J,k_1^\prime+k_2^\prime|j_1^\prime,k_1^\prime;j_2^\prime,k_2^\prime\rangle&=\sum_{J,l,l^\prime} \langle j_1,k_1;j_2,k_2|J,l\rangle\langle J,l^\prime|j_1^\prime,k_1^\prime;j_2^\prime,k_2^\prime\rangle\\ &= \langle j_1,k_1;j_2,k_2|\mathbb{J}|j_1^\prime,k_1^\prime;j_2^\prime,k_2^\prime\rangle. \end{align} (My scenario has $j_1^\prime=j_1$ and $j_2^\prime=j_2$ so the sum over $J$ is unique.) It would be nice if this constrained the possible relationships between the $k$s. The obvious problem is that $\mathbb{J}$ is basis-dependent, so I doubt any more simplifications can arise. Answer: Never forget to think about statistics when considering quantum mechanics. Your question is related to the three correlations between pairs of three variables. Famously, "is correlated with" isn't as transitive as we'd like to think. Let $|\chi\rangle:=\sum_m|m\rangle$. Write $\sim$ between complex numbers of the same modulus.
We can choose three angles so$$\begin{align}\langle\psi|\chi\rangle&\sim\sqrt{\langle\psi|\psi\rangle\langle\chi|\chi\rangle}\cos\theta_{\psi\chi},\\\langle\chi|\phi\rangle&\sim\sqrt{\langle\phi|\phi\rangle\langle\chi|\chi\rangle}\cos\theta_{\chi\phi},\\\langle\psi|\phi\rangle&\sim\sqrt{\langle\psi|\psi\rangle\langle\phi|\phi\rangle}\cos\theta_{\psi\phi}.\end{align}$$In particular, since $\mathbb{J}=|\chi\rangle\langle\chi|$ and $\langle\chi|\chi\rangle=\mathrm{Tr}(\mathbb{J})$,$$\begin{align}\langle\psi|\mathbb{J}|\phi\rangle&\sim\mathrm{Tr}(\mathbb{J})\sqrt{\langle\psi|\psi\rangle\langle\phi|\phi\rangle}\cos\theta_{\psi\chi}\cos\theta_{\chi\phi},\\|\cos\theta_{\psi\phi}-\cos\theta_{\psi\chi}\cos\theta_{\chi\phi}|&\le|\sin\theta_{\psi\chi}\sin\theta_{\chi\phi}|.\end{align}$$These don't provide much in the way of constraints (but they're the best we can do), because the angles in question could be the internal angles of any triangle.
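Since $\mathbb{J}=\sum_{m,n}|m\rangle\langle n|$ is the rank-one operator $|\chi\rangle\langle\chi|$, the factorisation $\langle\psi|\mathbb{J}|\phi\rangle=\langle\psi|\chi\rangle\langle\chi|\phi\rangle$ underlying these relations is easy to check numerically; a small pure-Python sketch (my addition, not part of the original answer):

```python
import random

N = 5
psi = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
phi = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
chi = [1 + 0j] * N   # components of |chi> = sum_m |m> in the {|m>} basis

def braket(a, b):
    # <a|b> with the physics convention (conjugate the bra)
    return sum(x.conjugate() * y for x, y in zip(a, b))

# <psi|J|phi> from the full matrix J_{mn} = 1 for all m, n ...
lhs = sum(psi[m].conjugate() * phi[n] for m in range(N) for n in range(N))

# ... equals the rank-one factorisation <psi|chi><chi|phi>:
rhs = braket(psi, chi) * braket(chi, phi)

print(abs(lhs - rhs) < 1e-9)   # → True
print(braket(chi, chi).real)   # → 5.0, i.e. <chi|chi> = N = Tr(J)
```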
{ "domain": "physics.stackexchange", "id": 82705, "tags": "quantum-mechanics, operators, angular-momentum, representation-theory" }
Creating a cell, not from another cell. Will it be possible?
Question: If, some time in the future, we can know exactly what a cell (for example, a simple prokaryotic bacterium) contains (I mean, exactly which molecules, their shapes, the density of each, everything), then can we create a new cell (not from another cell)? I mean, if we have such technology, then create a soup just like what that cell contains, and DNA exactly the same as that of the bacterial cell, and then put some of it inside a cell membrane: will it start living? Answer: I think it could be feasible to assemble complete synthetic cells in the future. One of the first successes will probably be a simple bacterium. The synthetic genome is already on the road, so here I'm pointing to other technologies that can be helpful. Feel free to add/edit/complete. Inorganic chemistry. Self-replicating inorganic cells can already be obtained from polymer emulsions; optimistically, one may try in the future to work with organic macromolecules. 3D printing technology may also evolve to the point that it will be possible to assemble cells by printing microscopic 'ink' molecules with the correct cellular organization. 3D printing works by serially printing one layer on top of the other; to prevent molecular layers from mixing, a technology to serially freeze each printed layer would help obtain a frozen synthetic cell.
{ "domain": "biology.stackexchange", "id": 854, "tags": "cell-biology, synthetic-biology" }
checking whether a language is turing recognizable
Question: After reading about it in the textbook and on the web, I was wondering about the "Turing recognizable" concept. So for instance, if I take a simple language like "L = {< M > | M ACCEPTS < M >}", then it should be a Turing-recognizable language since there can be a Turing machine that halts and accepts strings in it, and for strings not in that language it doesn't halt or just skips them. However, how can I build such a Turing machine (I mean the pseudocode for it)? I know it's quite a simple question, but I raised it out of curiosity so I could learn and, from that, adapt to more complicated problems. Thank you very much for helping me. Answer: Let $U$ be a universal Turing machine. Then running $U$ on the input $\langle M \rangle, x$ will produce the same output as running $M$ on the input $x$ -- that's what it means for $U$ to be a universal Turing machine. Others have shown how to build a universal Turing machine. Then, you can build a Turing machine to do what you want by using $U$: namely, on input $\langle M \rangle$, your Turing machine should run $U(\langle M \rangle, \langle M \rangle)$. Or, to put it another way, your Turing machine should make a second copy of the input it receives on the input tape, then run $U$. If you know how to build a Turing machine to copy a bit-string, and if you have a universal Turing machine $U$, you'll have a solution for your problem. This might not sound very enlightening, but it's the easiest way to describe how you can do it. Programming Turing machines is incredibly tedious and usually not enlightening: it's like programming in a particularly nasty and human-unfriendly programming language. Therefore, it's best avoided wherever possible. In this case, my solution lets you take advantage of the fact that someone else has already figured out how to build a universal Turing machine. You might wonder how to do that. Well, the basic idea is that you need to implement an interpreter for Turing machines.
Pick your favorite programming language, like Python; could you write a Python program to interpret a Turing machine, one step at a time? I bet you could -- it's a simple matter of programming. Now you just need to do that, but this time instead of writing a Python program, you need to write a Turing machine -- a nastier and messier task that's far more tedious, but not conceptually different.
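Such an interpreter really is short. A minimal sketch (my own illustration; the dictionary encoding of the transition function and the toy machine below are arbitrary choices, and the step budget is only for demonstration — a true recognizer would run unboundedly):

```python
def run_tm(delta, start, accept, tape_input, max_steps=10_000):
    """Simulate a one-tape Turing machine, one step at a time.

    delta maps (state, symbol) -> (new_state, write_symbol, move),
    with move = -1 (left) or +1 (right); a missing entry means reject.
    Returns True iff the accept state is reached within max_steps.
    """
    tape = dict(enumerate(tape_input))   # sparse tape; "_" is the blank
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, "_"))
        if key not in delta:
            return False                 # implicit reject
        state, tape[head], move = delta[key]
        head += move
    return False                         # step budget exhausted

# A toy machine accepting inputs of the form 0...01 (zero or more 0s, then one 1):
delta = {
    ("scan", "0"): ("scan", "0", +1),
    ("scan", "1"): ("end", "1", +1),
    ("end", "_"): ("acc", "_", +1),
}

print(run_tm(delta, "scan", "acc", "0001"))  # → True
print(run_tm(delta, "scan", "acc", "0010"))  # → False
```

Feeding this interpreter a universal machine's transition table (and the doubled input $\langle M \rangle, \langle M \rangle$) is exactly the construction described above.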
{ "domain": "cs.stackexchange", "id": 13552, "tags": "complexity-theory, turing-machines, computability" }
Difference between partial derivatives and derivatives in physics
Question: Please explain to me the difference between $\lim_{x\to 0}\frac{\partial E}{\partial x}$ and $\lim_{x\to 0}dE/dx$. In physics I encountered something similar while reading about Newton's law of fluids, while in F.M. White it's done using partial derivatives. I want to know the physical difference instead of the highly mathematical one. Having said that, I am well versed with the first principle of derivatives. Answer: For a detailed explanation, search Wikipedia for derivative, partial derivative and total derivative. For a short, non-mathematical summary, see below. The partial derivative of a function of several variables is its derivative with respect to one of those variables, assuming that all other variables are constant. The total derivative does not make this assumption and includes all indirect dependencies to find the overall dependency with respect to the variable of interest. As an example, consider $\frac{df}{dx}$, the total derivative of the function $f(x,y)$ with respect to the variable $x$: $$\frac{\operatorname df}{\operatorname dx}=\frac{\partial f}{\partial x} + \frac{\partial f}{\partial y} \frac{\operatorname dy}{\operatorname dx}$$ which depends on both $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ (i.e. the partial derivatives of $f$ with respect to $x$ and $y$).
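This chain rule can be checked numerically with finite differences (standard library only; the particular choices $f(x,y)=x^2 y$ and $y(x)=\sin x$ are just illustrative, not from the answer):

```python
import math

def f(x, y):          # f(x, y) = x^2 * y
    return x * x * y

def y_of_x(x):        # an illustrative dependence y(x) = sin(x)
    return math.sin(x)

h, x0 = 1e-6, 1.3
y0 = y_of_x(x0)

# Partial derivatives: vary one argument while holding the other fixed.
df_dx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
df_dy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
dy_dx = (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2 * h)

# Total derivative via the chain rule ...
total_chain = df_dx + df_dy * dy_dx

# ... versus differentiating g(x) = f(x, y(x)) directly.
total_direct = (f(x0 + h, y_of_x(x0 + h)) - f(x0 - h, y_of_x(x0 - h))) / (2 * h)

print(abs(total_chain - total_direct) < 1e-6)  # → True
```

The two numbers agree: the total derivative sees the indirect $x$-dependence through $y(x)$, while $\partial f/\partial x$ alone does not.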
{ "domain": "engineering.stackexchange", "id": 1670, "tags": "mechanical-engineering, electrical-engineering, mathematics, experimental-physics" }
Reaction of naphthalene with sodium dichromate/sulfuric acid
Question: I recently came across a question where naphthalene undergoes oxidation with sodium dichromate in the presence of sulfuric acid. The given options were benzoic acid, phthalic acid, decalin, and tetralin. I assumed the reaction to be similar to that with potassium permanganate under acidic conditions, yielding phthalic acid. This reaction was described in Oxidation of naphthalene with KMnO4. Could anyone please clarify the reaction, whether my assumption is correct, and explain the mechanism? Answer: It depends on the reaction conditions. I suspect the answer the question setter wants to see is phthalic acid, but the true answer is none of these. I have found multiple references (such as this US Patent and this Org. Syn prep) that refer to the production of naphthoquinones by oxidation of naphthalenes with high-valent chromium reagents under acid conditions.
{ "domain": "chemistry.stackexchange", "id": 15854, "tags": "organic-chemistry, reaction-mechanism, aromatic-compounds, organic-oxidation" }
Get alpha numeric count from a string
Question: Following is the code I am using to find a separate count for alphabets and numeric characters in a given alphanumeric string: Public Sub alphaNumeric(ByVal input As String) 'alphaNumeric("asd23fdg4556g67gh678zxc3xxx") 'input.Count(Char.IsLetterOrDigit) Dim alphaCount As Integer = 0 '<-- initialize alphabet counter Dim numericCount As Integer = 0 '<-- initialize numeric counter For Each c As Char In input '<-- iterate through each character in the input If IsNumeric(c) = True Then numericCount += 1 '<--- check whether c is numeric? if then increment nunericCounter If Char.IsLetter(c) = True Then alphaCount += 1 '<--- check whether c is letter? if then increment alphaCount Next MsgBox("Number of alphabets : " & alphaCount) '<-- display the result MsgBox("Number of numerics : " & numericCount) End Sub Everything works fine for me. Let me know how I can make this simpler. Answer: In general your code looks good, here are a few smaller remarks. Method name: Capitalize the name of your method and make it more meaningful. Use CountAlphaNumeric or something similar. Comments in code: You can omit the comments in your code. It speaks for itself what the code is doing, certainly because you use clear names for your variables. IsNumeric() - Char.IsDigit(): In the .NET framework, there's the Char.IsDigit method, use this one instead: If Char.IsDigit(c) = True Then numericCount += 1 MsgBox() - MessageBox.Show(): Although MsgBox is valid, it also comes from the VB era. In the .NET framework there's the MessageBox.Show method, use that one instead: MessageBox.Show("Number of alphabets : " & alphaCount) String.Format(): To insert variables in a string, use the String.Format instead of just concatenating the values: Dim result As String = String.Format("Number of alphabets : {0}", alphaCount) "Expression = True": You can leave out the = True part in your if conditions, since the methods return a boolean value. 
This is what the code now looks like: Public Sub CountAlphaNumeric(ByVal input As String) Dim alphaCount As Integer = 0 Dim numericCount As Integer = 0 For Each c As Char In input If Char.IsDigit(c) Then numericCount += 1 If Char.IsLetter(c) Then alphaCount += 1 Next MessageBox.Show(String.Format("Number of alphabets : {0}", alphaCount)) MessageBox.Show(String.Format("Number of numerics : {0}", numericCount)) End Sub Using LinQ: Although not always the best option, you can achieve the same result using LinQ, using the Enumerable.Count method: Dim alphaCount = input.Count(Function(c) Char.IsLetter(c)) Dim numericCount = input.Count(Function(c) Char.IsDigit(c)) Here's the complete code using LinQ: Public Sub CountAlphaNumericUsingLinQ(ByVal input As String) Dim alphaCount = input.Count(Function(c) Char.IsLetter(c)) Dim numericCount = input.Count(Function(c) Char.IsDigit(c)) MessageBox.Show(String.Format("Number of alphabets : {0}", alphaCount)) MessageBox.Show(String.Format("Number of numerics : {0}", numericCount)) End Sub
{ "domain": "codereview.stackexchange", "id": 11897, "tags": "strings, vb.net" }
Redshifting of Light and the expansion of the universe
Question: So I have learned in class that light can get red-shifted as it travels through space. As I understand it, space itself expands and stretches out the wavelength of the light. This results in the light having a lower frequency which equates to lowering its energy. My question is, where does the energy of the light go? Energy must go somewhere! Does the energy the light had before go into the mechanism that's expanding the space? I'm imagining that light is being stretched out when its being red-shifted. So would this mean that the energy is still there and that it is just spread out over more space? Answer: Dear QEntanglement, the photons - e.g. cosmic microwave background photons - are increasing their wavelength proportionally to the linear expansion of the Universe, $a(t)$, and their energy correspondingly drops as $1/a(t)$. Where does the energy go? It just disappears. Energy is not conserved in cosmology. Much more generally, the total energy conservation law becomes either invalid or vacuous in general relativity unless one guarantees that physics occurs in an asymptotically flat - or another asymptotically static - Universe. That's because the energy conservation law arises from the time-translational symmetry, via Noether's theorem, and this symmetry is broken in generic situations in general relativity. See http://motls.blogspot.com/2010/08/why-and-how-energy-is-not-conserved-in.html Why energy is not conserved in cosmology Cosmic inflation is the most extreme example - the energy density stays constant (a version of the cosmological constant with a very high value) but the total volume of the Universe exponentially grows, so the total energy exponentially grows, too. That's why Alan Guth, the main father of inflation, said that "the Universe is the ultimate free lunch". 
This mechanism (inflation), which is able to produce exponentially huge masses in a reasonable time frame, explains why the mass of the visible Universe is so much greater than the Planck mass, a natural microscopic unit of mass.
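As a hedged numerical illustration of the $1/a(t)$ scaling (the redshift and emitted wavelength below are illustrative assumptions, not values from the answer):

```python
# Photon energy is E = h*c/lambda, and lambda stretches with the scale factor
# a(t), so E scales as 1/a(t). Between emission and today a grows by (1 + z).
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s

z = 1100.0            # assumed: roughly the redshift of the CMB
lam_emitted = 1.0e-6  # assumed emitted wavelength, m
lam_today = lam_emitted * (1.0 + z)

E_emitted = h * c / lam_emitted
E_today = h * c / lam_today
ratio = E_emitted / E_today  # equals 1 + z; that factor of energy is simply gone
```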
{ "domain": "physics.stackexchange", "id": 54494, "tags": "general-relativity, energy, electromagnetic-radiation, energy-conservation, space-expansion" }
Parse data from Input file and print results
Question: I have written a script which parses the input file, extracts some values for each node, and prints the data accordingly. Below is my script, and it works as expected: #!/usr/bin/perl use strict; use warnings; use Time::Local 'timelocal'; use List::Util qw(reduce); use POSIX qw( strftime ); my $i = 0; print "*"x20; print "\n"; while(<DATA>){ chomp; next unless ($_); my @data = split / /, $_; $i++; my ($node, $time, $date, $time1, $unit); my %hash; if (scalar @data == 3){ if( $data[0] =~ /FileName=([^_]+(?=_))_(\S+)_file.csv:(\S+),/gm ){ ($node, $time, $unit) = ($2, $1, $3); if( $time =~ /[a-zA-Z](\d+).(\d+)/gm ){ $date = $1; $time1 = $2; } } print "Node_$i:$node\n"; my $datetime = $date.$time1; my ($second,$minute,$hour,$day,$month,$year); my $unix_time; if ($datetime =~ /(....)(..)(..)(..)(..)/){ ($second,$minute,$hour,$day,$month,$year) = (0, $5, $4, $3, $2, $1); $unix_time = timelocal($second,$minute,$hour,$day,$month-1,$year); } my @vol = split /,/, $data[2]; foreach my $element (@vol){ $hash{$unix_time} = $element; $unix_time += 6; } my $key = reduce { $hash{$a} <= $hash{$b} ?
$a : $b } keys %hash; my $val = $hash{$key}; my $dt = strftime("%Y-%m-%d %H:%M:%S", localtime($key)); print "Text_$i:First occurred on $dt on the Unit:$unit and the value is $val\n"; } } print "*"x20; print "\n"; print "TotalCount=$i\n"; __DATA__ Node=01:FileName=A20200804.1815+0530-1816+0530_Network=NODE01_file.csv:Unit=R1,Meter=1 Vol 19,12,17,20,23,15,16,11,13,17 Node=02:FileName=A20200804.1830+0530-1831+0530_Network=NODE02_file.csv:Unit=R5,Meter=3 Vol 12,13,15,16,10,15,15,13,14,11 So, here we have 2 lines of data in the input file, which gives output like below: ******************** Node_1:Network=NODE01 Text_1:First occurred on 2020-08-04 18:15:42 on the Unit:Unit=R1 and the value is 11 Node_2:Network=NODE02 Text_2:First occurred on 2020-08-04 18:30:24 on the Unit:Unit=R5 and the value is 10 ******************** TotalCount=2 So, the logic in the parser is that each line of data belongs to one node (node will be unique in the input file). Here you can see the volume data, which is generated based on time. For example, the NODE01 volume data covers 18:15 to 18:16 (10 volume values, meaning each value is generated at a 6-second interval, and this is fixed throughout all the node volume data). From the list of volumes I should take the least number and its respective time with seconds. I am able to fetch as per the logic explained. Here I need experts' feedback on the regex I am using; also, there are a couple of if conditions which look really odd to me. Is there any possibility to simplify the script? Answer: The code looks fine and it is working for the given input data. However, it can be difficult to assess which inputs will be regarded as valid, and how it will behave in case of unexpected input. One approach to uncertainty about code (will it work?) is to let it pass through a testing framework. This requires splitting your code into smaller units that can easily be tested.
At the end of this post, I will present an example of how the code can be adapted to a testing framework, but before that there are some minor issues I would like to mention. Unnecessary g and m flags Consider this line: if( $data[0] =~ /FileName=([^_]+(?=_))_(\S+)_file.csv:(\S+),/gm ){ Since the code is only processing a single line at a time and there is only one node on each line, global matching is not necessary. Also the m is not needed. It allows ^ and $ to match the start and end of internal lines in a multiline string. Unnecessary use of lookahead regex Consider this line: if( $data[0] =~ /FileName=([^_]+(?=_))_(\S+)_file.csv:(\S+),/gm ){ First, as commented above we can remove the g and m flags. Then /[^_]+(?=_)_/ is more simply written as /[^_]+_/ Make code easier to read This code: ($node, $time, $unit) = ($2, $1, $3); is easier to read (my opinion) if written as: ($time, $node, $unit) = ($1, $2, $3); such that the capture variables are sorted in numerical order. Similarly for this line: my ($second,$minute,$hour,$day,$month,$year) = (0, $5, $4, $3, $2, $1); it can be written as: my ($year, $month, $day, $hour, $minute, $second ) = ( $1, $2, $3, $4, $5, 0); Shebang See this blog for more information. I usually use #!/usr/bin/env perl instead of #!/usr/bin/perl. Most systems have /usr/bin/env, and it allows your script to run if you e.g. have multiple perls on your system. For example if you are using perlbrew. say vs print I prefer to use say instead of print to avoid typing a final newline character for print statements. The say function was introduced in perl 5.10, and is made available by adding use v5.10 or use feature qw(say) to the top of your script. Declare variables as close to their first use as possible Declaring variables in the same scope as they are used, and as close to their first usage point as possible, helps a reader to quickly reason about the code, which helps produce correct code.
For example, in this code my ($second,$minute,$hour,$day,$month,$year); if ($datetime =~ /(....)(..)(..)(..)(..)/){ ($second,$minute,$hour,$day,$month,$year) = (0, $5, $4, $3, $2, $1); the variables are only used within the if clause, so we can write it as: if ($datetime =~ /(....)(..)(..)(..)(..)/){ my ($second,$minute,$hour,$day,$month,$year) = (0, $5, $4, $3, $2, $1); Easier parsing of dates using Time::Piece In the program below, I show how you can use Time::Piece instead of timelocal to simplify the parsing of dates. Example code with unit tests Main script p.pl: #! /usr/bin/env perl package Main; use feature qw(say); use strict; use warnings; use Carp; use Data::Dumper qw(Dumper); # Written as a modulino: See Chapter 17 in "Mastering Perl". Executes main() if # run as script, otherwise, if the file is imported from the test scripts, # main() is not run. main() unless caller; sub main { my $self = Main->new(); $self->run_program(); } # --------------------------------------------- # Methods and subroutines in alphabetical order # --------------------------------------------- sub bad_arguments { die "Bad arguments\n" } sub init_process_line { my ( $self ) = @_; $self->{lineno} = 1; } sub new { my ( $class, %args ) = @_; my $self = bless \%args, $class; } sub process_line { my ($self, $line) = @_; my $proc = ProcessLine->new( $line, $self->{lineno} ); $self->{lineno}++; return $proc->process(); } sub read_data { my ( $self ) = @_; # TODO: Read the data from file instead! 
my $data = [ 'Node=01:FileName=A20200804.1815+0530-1816+0530_Network=NODE01_file.csv:Unit=R1,Meter=1 Vol 19,12,17,20,23,15,16,11,13,17', 'Node=02:FileName=A20200804.1830+0530-1831+0530_Network=NODE02_file.csv:Unit=R5,Meter=3 Vol 12,13,15,16,10,15,15,13,14,11' ]; $self->{data} = $data; } sub run_program { my ( $self ) = @_; $self->read_data(); $self->init_process_line(); for my $line ( @{$self->{data}} ) { my ($node, $dt, $unit, $val) = $self->process_line($line); my $res = { node => $node, dt => $dt, unit => $unit, val => $val, }; # TODO: write the data to STDOUT or to file in correct format print Dumper($res); } } package ProcessLine; use feature qw(say); use strict; use warnings; use Carp; use POSIX qw( strftime ); use Time::Piece; sub convert_date_to_epoch { my ( $self, $date ) = @_; my $unix_time = Time::Piece->strptime( $date, "%Y%m%d.%H%M%z" )->epoch(); return $unix_time; } # INPUT: # - $time_piece : initialized Time::Piece object # # sub convert_epoch_to_date { my ( $self, $time_piece ) = @_; my $dt = $time_piece->strftime("%Y-%m-%d %H:%M:%S"); return $dt; } sub get_volumes { my ( $self, $data ) = @_; $self->parse_error("No volumes") if !defined $data; my @vols = split /,/, $data; $self->parse_error("No volumes") if @vols == 0; for my $vol ( @vols ) { if ( $vol !~ /^\d+$/ ) { $self->parse_error("Volume not positive integer"); } } return \@vols; } # INPUT: # - $volumes : list of volumes (integers). # # RETURNS: - index of smallest item (if there are multiple minimal, the index of # the first is returned. # # ASSUMES: # - Length of list >= 1 # - Each item is a positive integer. # - NOTE: The items do not need to be unique. 
# sub find_min_vol { my ( $self, $volumes) = @_; my $min = $volumes->[0]; my $idx = 0; for my $i (1..$#$volumes) { my $value = $volumes->[$i]; if ( $value < $min) { $min = $value; $idx = $i; } } return $idx; } sub new { my ( $class, $line, $lineno ) = @_; my $self = bless {line => $line, lineno => $lineno}, $class; } sub parse_error { my ( $self, $msg ) = @_; croak ( sprintf( "Line %d: %s : '%s'\n", $self->{lineno}, $msg, $self->{line} // "[undef]" ) ); } sub process { my ($self) = @_; my $line = $self->{line}; chomp $line; $self->parse_error("Empty line") if !$line; my ($field1, $field3) = $self->split_line( $line ); my $date = $field1->get_date(); my $node = $field1->get_node(); my $unit = $field1->get_unit(); my $unix_time = $self->convert_date_to_epoch( $date ); my $volumes = $self->get_volumes( $field3 ); my $idx = $self->find_min_vol($volumes); my $vol = $volumes->[$idx]; my $vol_epoch = $unix_time + $idx*6; my $time_piece = localtime($vol_epoch); # convert to local time zone my $dt = $self->convert_epoch_to_date( $time_piece ); return ($node, $dt, $unit, $vol); } # INPUT: # - $line: defined string # sub split_line { my ( $self, $line ) = @_; my @data = split / /, $line; my $N = scalar @data; $self->parse_error( "Expected 3 fields (space-separated). 
Got $N fields.") if $N !=3; return (Field0->new($self, $data[0]), $data[2]); } package Field0; use feature qw(say); use strict; use warnings; sub get_date { my ( $self ) = @_; my $data = $self->{data}; my $date; if( $data =~ s/FileName=([^_]+)_// ) { my $time = $1; if( $time =~ /[a-zA-Z](\d{8}\.\d{4}[+-]\d{4})-\d{4}[+-]/ ) { $date = $1; } else { $self->{parent}->parse_error("Could not parse time info"); } } else { $self->{parent}->parse_error("Could not parse time info"); } $self->{data} = $data; return $date; } sub get_node { my ( $self ) = @_; my $data = $self->{data}; my $node; if( $data =~ s/(\S+)_// ) { $node = $1; } else { $self->{parent}->parse_error("Could not parse node info"); } $self->{data} = $data; return $node; } sub get_unit { my ( $self ) = @_; my $data = $self->{data}; my $unit; if( $data =~ s/file\.csv:(\S+),// ) { $unit = $1; } else { $self->{parent}->parse_error("Could not parse unit info"); } $self->{data} = $data; return $unit; } sub new { my ( $class, $parent, $data ) = @_; return bless {parent => $parent, data => $data}, $class; } Unit test script t/main.t: use strict; use warnings; use Test2::Tools::Basic qw(diag done_testing note ok); use Test2::Tools::Compare qw(is like); use Test2::Tools::Exception qw(dies lives); use Test2::Tools::Subtest qw(subtest_buffered); use lib '.'; require "p.pl"; { subtest_buffered "split line" => \&split_line; subtest_buffered "get_date" => \&get_date; subtest_buffered "get_node" => \&get_node; # TODO: Complete the test suite.. 
done_testing; } sub get_date { my $proc = ProcessLine->new( "", 1 ); my $fld = Field0->new($proc, "Node=01:FileName=A20200804.1815+0530-1816+0530_N"); is($fld->get_date(), '20200804.1815+0530', 'correct'); $fld = Field0->new($proc, "ileName=A20200804.1815+0530-1816+0530_N"); like(dies { $fld->get_date() }, qr/Could not parse/, "bad input"); $fld = Field0->new($proc, "FileName=A20200804.1815-1816+0530_N"); like(dies { $fld->get_date() }, qr/Could not parse/, "bad input2"); } sub get_node { my $proc = ProcessLine->new( "", 1 ); my $fld = Field0->new($proc, "Node=01:FileName=A20200804.1815+0530-1816+0530_N"); # TODO: complete this sub test.. } sub split_line { my $proc = ProcessLine->new( "", 1 ); like(dies { $proc->split_line( "" ) }, qr/Got 0 fields/, "zero fields"); like(dies { $proc->split_line( " " ) }, qr/Got 0 fields/, "zero fields"); like(dies { $proc->split_line( "1" ) }, qr/Got 1 fields/, "one field"); like(dies { $proc->split_line( "1 2" ) }, qr/Got 2 fields/, "two fields"); my ($f1, $f3); ok(lives { ($f1, $f3) = $proc->split_line( "1 2 3" ) }, "three fields"); is($f1->{data}, "1", "correct value"); is($f3, "3", "correct value"); }
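As a cross-language sanity check of the core logic, here is a Python sketch of my own (not part of the reviewed Perl; the field layout is inferred from the sample data above): extract the start timestamp, node and unit, then find the first minimum volume and its time at 6-second spacing.

```python
import re
from datetime import datetime, timedelta

# Pull the start timestamp, node and unit out of the first space-separated field.
META_RE = re.compile(
    r'FileName=[A-Za-z](\d{8}\.\d{4})[+-]\d{4}-\d{4}[+-]\d{4}_(\S+?)_file\.csv:(\S+?),'
)

def first_min_volume(line):
    meta, _, volumes = line.split(' ')
    stamp, node, unit = META_RE.search(meta).groups()
    start = datetime.strptime(stamp, '%Y%m%d.%H%M')    # zone offset ignored here
    vols = [int(v) for v in volumes.split(',')]
    idx = min(range(len(vols)), key=vols.__getitem__)  # index of first minimum
    return node, unit, start + timedelta(seconds=6 * idx), vols[idx]

line = ('Node=01:FileName=A20200804.1815+0530-1816+0530_Network=NODE01_file.csv:'
        'Unit=R1,Meter=1 Vol 19,12,17,20,23,15,16,11,13,17')
node, unit, when, vol = first_min_volume(line)
```

For the first sample line this finds the minimum volume 11 at 18:15:42, matching the script's output.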
{ "domain": "codereview.stackexchange", "id": 39223, "tags": "parsing, regex, perl" }
What is the chemical composition of the human body?
Question: What is the average percent composition of each element in the average human body? Answer: Percent composition by mass:
oxygen 61.3533
carbon 22.8291
hydrogen 9.9877
nitrogen 2.5683
calcium 1.4268
phosphorus 1.1129
potassium 0.20
sulfur 0.20
sodium 0.14
chlorine 0.14
magnesium 0.03
iron 0.01
fluorine 0.004
zinc 0.003
silicon 0.001
rubidium 0.001
strontium 0.0005
bromine 0.0004
lead 0.0002
copper 0.0001
aluminum 0.0001
cadmium 0.0001
cerium 0.0001
barium 0.00003
iodine 0.00003
tin 0.00003
titanium 0.00003
boron 0.00003
nickel 0.00002
selenium 0.00002
chromium 0.00002
manganese 0.00002
arsenic 0.000010
lithium 0.000010
cesium 0.000009
mercury 0.000009
germanium 0.000007
molybdenum 0.000007
cobalt 0.000004
antimony 0.000003
silver 0.000003
niobium 0.000002
zirconium 0.000001
lanthanum 0.000001
gallium 0.000001
tellurium 0.000001
yttrium 0.0000009
bismuth 0.0000007
thallium 0.0000007
indium 0.0000006
gold 0.0000003
scandium 0.0000003
tantalum 0.0000003
vanadium 0.0000002
thorium 0.0000001
uranium 0.0000001
samarium 0.00000007
beryllium 0.00000005
tungsten 0.00000003
Reference: Emsley, John, The Elements, 3rd ed., Clarendon Press, Oxford, 1998
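As a small worked example of using these percentages (the 70 kg body mass is an assumed round number, not from the answer):

```python
# Percent-by-mass figures for the five most abundant elements, from the list above.
percent = {
    'oxygen': 61.3533,
    'carbon': 22.8291,
    'hydrogen': 9.9877,
    'nitrogen': 2.5683,
    'calcium': 1.4268,
}
body_mass_g = 70.0 * 1000.0  # assumed 70 kg body
grams = {el: body_mass_g * p / 100.0 for el, p in percent.items()}
# A 70 kg body would contain about 42.9 kg of oxygen and 16.0 kg of carbon.
```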
{ "domain": "chemistry.stackexchange", "id": 10513, "tags": "organic-chemistry, elements" }
How is current induced when there is a change in external magnetic field?
Question: If stationary charges in a wire loop do not respond to a (or have their own) magnetic field, then how is current generated by changing a magnetic field? And why only a changing magnetic field? What actually happens at the atomic level? Answer: If stationary charges in a wire loop do not respond to a (or have their own) magnetic field,… Electrons at rest (even in a wire loop) have their own magnetic field. Each electron is also a magnetic dipole. Unfortunately, this is often forgotten. Incidentally, this magnetic dipole is even a constant and therefore an intrinsic (unchangeable under all circumstances) property of the electron. Just between us, the electron should actually be called an “electron-magneton”. …then how is current generated by changing a magnetic field? In electrostatics, electric fields are not influenced by magnetic fields and vice versa. This allows the assumption that the changing external magnetic field affects the magnetic dipoles of the electrons in the conductor loop. And why only a changing magnetic field? While a constant magnetic field only aligns these dipoles in its direction and shifts them sideways as with the Lorentz force, this process is repeated again and again with a changing magnetic field - a current flows. What actually happens at the atomic level? See the explanation above.
{ "domain": "physics.stackexchange", "id": 98650, "tags": "electromagnetism, magnetic-fields, electric-fields, electromagnetic-induction" }
Do Hermite functions also represent the functions of compact support in Schwartz space?
Question: Schwartz space includes wave functions, such as Gaussians (wave function of harmonic oscillator, for instance), that range over all of x, are infinitely differentiable, square integrable from minus infinity to infinity, and vanish faster than any power of x. Schwartz space also includes the smooth functions of compact support (wave function of particle in a box, for instance). My question is can all of the above Schwartz functions be expanded into the same set of Hermite functions (of course, with different coefficients), or do the smooth functions of compact support require some kind of modified Hermite functions (such as setting them equal to zero outside of a specific interval) vs the Hermite functions used for expansion of Schwartz functions that are not of compact support? I have been trying to research this but have not gotten a clear answer. Answer: Assuming you are using the same definition of Hermite functions found in this Wikipedia article, they form an orthonormal basis of $L^2(\mathbb{R})$, so any $L^2$-function can be expanded in them. This includes functions with compact support. (Note though that the partial sums $\sum_{n=0}^N c_n|\psi_n\rangle$ will not have compact support for any finite $N$.)
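A numerical sketch supporting this (a plain-Python illustration of mine; the box function and grid parameters are my own choices): expand the indicator function of $[-1, 1]$, which is compactly supported and in $L^2$, in the first Hermite functions, and check that the $L^2$ residual of the partial sums shrinks as terms are added.

```python
from math import exp, factorial, pi, sqrt

def hermite_functions(x, nmax):
    # Orthonormal Hermite functions psi_0..psi_nmax at the point x, built from
    # physicists' Hermite polynomials via H_{n+1} = 2x H_n - 2n H_{n-1}.
    H = [1.0, 2.0 * x]
    for n in range(1, nmax):
        H.append(2.0 * x * H[n] - 2.0 * n * H[n - 1])
    w = exp(-x * x / 2.0)
    return [H[n] * w / sqrt(2.0 ** n * factorial(n) * sqrt(pi))
            for n in range(nmax + 1)]

# Uniform grid wide enough to contain the first 40 Hermite functions.
M, L = 6001, 12.0
xs = [-L + 2.0 * L * i / (M - 1) for i in range(M)]
dx = 2.0 * L / (M - 1)

# Target: the indicator function of [-1, 1] -- compact support, in L^2.
f = [1.0 if abs(x) <= 1.0 else 0.0 for x in xs]

N = 40
psis = [hermite_functions(x, N - 1) for x in xs]          # psis[i][n]
coeff = [dx * sum(psis[i][n] * f[i] for i in range(M))    # c_n = <psi_n, f>
         for n in range(N)]

def l2_residual(terms):
    # L^2 norm of f minus its partial Hermite expansion with `terms` terms.
    err2 = sum((f[i] - sum(coeff[n] * psis[i][n] for n in range(terms))) ** 2
               for i in range(M))
    return sqrt(dx * err2)
```

Here l2_residual(40) comes out smaller than l2_residual(10): the partial sums converge to the box function in the $L^2$ sense, even though no partial sum has compact support.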
{ "domain": "physics.stackexchange", "id": 34636, "tags": "quantum-mechanics, wavefunction" }
Why do local hidden variable theories predict a triangular pattern for the graph?
Question: My friends and I got into an argument about determinism, and I brought up that quantum events are random. But I couldn't prove it. I found the Wikipedia page on Bell's theorem, which seems to imply what I'm trying to show, because it disqualifies local hidden variable models. But I don't understand how the experiment works. I think I understand the steps taken: An electron-positron pair is produced, with opposite spins. Alice measures the spin of the electron along the x-axis. Bob measures the spin of the positron along some axis, which could be the x-axis. Alice and Bob compare their results, recording a +1 if their spins match, and a -1 if they do not. A graph of "angle between Alice and Bob's axes" vs. "sum of many trials" is created. The part I don't get is: Why would local hidden variable theories predict a triangular pattern for the graph, and likewise, why would entanglement predict a cosine? Answer: Bell's theorem basically states that some predictions of quantum mechanics cannot be obtained from a local hidden variable model of the theory. Some people (like Nielsen and Chuang) refer to this as the fact that there cannot exist a local realist theory that has the same predictions as quantum mechanics. Roughly speaking, a local theory is one in which systems that are space-like separated cannot influence each other. A realist theory is one in which the properties of systems have definite values, independent of measurements of them. Within this terminology, what you are trying to show to your friends is that quantum mechanics is not a realist theory: there is inherent uncertainty about the value of physical properties before they are measured. But you see, Bell's theorem only formally tells us that we cannot have both realism and locality. However, it says nothing about keeping one but dropping the other. So, can there be a non-local realist model that makes the same predictions as quantum mechanics? Well yes there can!
An example is the Bohm-de Broglie interpretation of quantum mechanics, which you can learn more about if you are interested. The bottom line is that we cannot prove that the predictions of quantum mechanics imply that the properties of physical systems, like spin, are not determined before measurement, because we know that there is a theory in which they are determined that makes the same experimental predictions!
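The triangular-vs-cosine part of the question can be illustrated with a toy Monte-Carlo sketch (my own illustration; the sign-of-cosine response rule is just one simple choice of local hidden variable model, not the only possible one):

```python
import math
import random

def lhv_correlation(theta, trials=200_000, seed=1):
    # Hidden variable: a shared angle lam, uniform on the circle.
    # Alice outputs sign(cos(a - lam)); Bob, holding the opposite-spin particle,
    # outputs -sign(cos(b - lam)). Averaging the product over lam gives
    # E(theta) = 2*theta/pi - 1 for theta in [0, pi]: a straight (triangular) line.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        lam = rng.uniform(0.0, 2.0 * math.pi)
        a_out = 1 if math.cos(0.0 - lam) >= 0.0 else -1
        b_out = -1 if math.cos(theta - lam) >= 0.0 else 1
        total += a_out * b_out
    return total / trials

def qm_correlation(theta):
    # Singlet-state prediction for the same setup.
    return -math.cos(theta)
```

At theta = pi/4 the toy LHV model gives -0.5 while the singlet gives about -0.707; the straight line and the cosine only agree at 0, pi/2 and pi, and the mismatch in between is what Bell-type experiments measure.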
{ "domain": "physics.stackexchange", "id": 76253, "tags": "quantum-mechanics, quantum-information, quantum-interpretations, determinism, bells-inequality" }
Basic question about multiplex PCR
Question: Let's say I have a DNA sequence with the following structure: $$ 5' - N_n - S_1 - N_{1000} - S_2 - N_{1000} - S_3 - N_n - 3' $$ Here, the $N$s represent stretches of arbitrary sequence of the indicated length. $S_1$, $S_2$ and $S_3$ are all unique sequences 20 bp in length. I design a forward primer PF that anneals to the positive strand at $S_1$. I design two reverse primers PR1 and PR2 that anneal to the negative strand at $S_2$ and $S_3$ respectively. Their annealing temperatures are reasonably close. Clearly, PF and PR1 together will produce a 1040 bp PCR product, and PF with PR2 will produce a 2060 bp product. Now let's say I add 0.2 μM PF (standard concentration), 0.1 μM PR1 and 0.1 μM PR2 all in the same reaction. I then run a PCR, with extension time sufficient for the 2 kb product. I can see the following possible outcomes: I get a 50/50 mix of 1 kb and 2 kb products. PR1 competes aggressively and I get hardly any 2 kb product and a lot of 1 kb product. After a few cycles, the 1 kb product that has accumulated serves as an efficient forward primer for PR2, and I end up getting mostly 2 kb product and little 1 kb product. Which one will happen? If it depends, what are the major factors on which it depends? Primer concentrations, extension time, annealing temperature, dNTP amount, number of cycles? This question is basically a very simplified multiplex PCR. While much has been written about practical applications of multiplex PCR, I am hoping that answers will help me understand the considerations that go into deducing optimal multiplex PCR conditions without doing any empirical calibration. This isn't to circumvent the calibration, but to understand the process conceptually. Answer: I haven't done this exact experiment. I am just deducing from the known facts about PCR. The product of PF and PR2 will serve as a template for both 1kb product (P1) and 2kb product (P2). 
Let's assume that after the 2nd cycle there is 1 copy each of P1 and P2 (delay of 1 cycle to make a smaller length product. See here). Let's assume that the primer binding affinity is equal (and for just preliminaries consider that only one primer can bind a template — such as in overlapping binding sites). So half of P2 will bind to PR2 and half to PR1. Number of copies at nth cycle is: P2(n) = 1.5 × P2(n-1) P1(n) = 2 × P1(n-1) + 0.5 × P2(n-2) However this is not really the scenario; the primers are not competing for the template. Which means that both primers can bind to the substrate simultaneously. DNA polymerases (except Klenow) will remove any DNA strand bound to the template, ahead of it. In that case P1 will not form. But when PR2 runs out, the unused PR1 can produce P1. Now there are additional factors, such as the distance between PR1 and PR2 binding sites. If the distance between the primer binding sites is less than the smaller product then the abovementioned case occurs. If the inter-primer distance is larger than the smaller product then both products will form (and just like the competing primers case, P2 will serve as a template for both products). In this case the copy numbers will be: P2(n) = 2 × P2(n-1) P1(n) = 2 × P1(n-1) + 1 × P2(n-2) This means that P1 will dominate. Other assumptions: No limiting concentrations of dNTP, primers etc. PCR efficiencies are equal for both primers and equal to 2 (in non-competing cases). Primer annealing temperatures are the same, i.e. binding affinities are the same.
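The recurrences above can be iterated directly; here is a quick sketch of mine under the answer's idealized assumptions, to see which product dominates:

```python
def amplify(cycles, p2_growth, feed):
    # P2(n) = p2_growth * P2(n-1); P1(n) = 2 * P1(n-1) + feed * P2(n-2),
    # starting from one copy of each after cycle 2, as assumed above.
    p1_hist, p2_hist = [1.0], [1.0]
    for _ in range(cycles):
        p2_hist.append(p2_growth * p2_hist[-1])
        lagged_p2 = p2_hist[-3] if len(p2_hist) >= 3 else 0.0  # P2 two cycles back
        p1_hist.append(2.0 * p1_hist[-1] + feed * lagged_p2)
    return p1_hist[-1], p2_hist[-1]

# Competing primers: P2 grows by 1.5x and feeds P1 at half rate.
p1_c, p2_c = amplify(20, p2_growth=1.5, feed=0.5)
# Non-competing sites: both grow by 2x, but P2 still feeds extra copies into P1.
p1_n, p2_n = amplify(20, p2_growth=2.0, feed=1.0)
```

With these numbers the shorter product swamps the longer one in the competing case, and still dominates (more mildly) in the non-competing case, matching the conclusion that P1 will dominate.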
{ "domain": "biology.stackexchange", "id": 3044, "tags": "biochemistry, pcr" }