Displacement current - how to think of it
Question: What is a good way to think of the displacement current? Maxwell imagined it as being movements in the aether, small changes of the electric field producing a magnetic field. I don't even understand that definition, assuming there is an aether. (On the topic of which, has the aether actually been disproved? I read that even with the Michelson-Morley experiment the aether wasn't disproved.)

Answer: Maxwell's equations in a vacuum have induction terms. (1) There is a term saying that a time-varying magnetic field produces an electric field. (2) There is a term saying that a time-varying electric field produces a magnetic field. Among people who insist on giving hard-to-remember names to all the terms in Maxwell's equations, #2 is called the displacement current. The name is a bad one, because it's not a current, i.e., it has nothing to do with the motion of charged, material particles. The only reason it has the misleading name is that it adds to the current term, and Maxwell, who made up the name, wasn't sure what its ultimate origin was. The importance of term #2 is mainly that it allows the existence of electromagnetic waves. In an electromagnetic wave, the changing E field induces the B field, and the changing B field induces the E field. There are elementary reasons that term #2 has to exist. For example, suppose you have a circular, flat Amperian surface $S_1$ and you shoot a single charged particle perpendicularly through its center. In this situation, Maxwell's equations without term #2 predict that the magnetic field at the edge of the surface will be zero, then infinite for an instant, and then zero again after that. But if we construct a similar Amperian surface $S_2$ with the same boundary but an interior surface that is bowed out rather than flat, we get a prediction that the infinite field occurs at a different time. This proves that we can't get away with leaving Maxwell's equations in a form with all the terms except term #2.
The deeper reason for term #2 is that it's required by relativity. Only with term #2 do Maxwell's equations have a form that is the same in all frames of reference.
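For reference, the two induction terms numbered #1 and #2 above can be written out explicitly; this is the standard SI-units form, not quoted from the original exchange:

```latex
% Term #1 (Faraday's law): a time-varying B field produces an E field
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}

% Term #2 (Ampere-Maxwell law): the final term is Maxwell's "displacement
% current"; a time-varying E field produces a B field
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

Dropping the $\mu_0 \varepsilon_0 \, \partial\mathbf{E}/\partial t$ term is exactly what makes the two Amperian surfaces $S_1$ and $S_2$ give contradictory predictions.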
{ "domain": "physics.stackexchange", "id": 7712, "tags": "electromagnetism, maxwell-equations, aether" }
What is the ideal database that allows fast cosine distance?
Question: I'm currently trying to store many feature vectors in a database so that, upon request, I can compare an incoming feature vector against many others (if not all) stored in the db. I would need to compute the cosine distance and only return, for example, the 10 closest matches. Such a vector will be of size ~1000 or so. Every request will have a feature vector and will need to run a comparison against all feature vectors belonging to a subset within the db (which will most likely be in the order of thousands of entries per subset in the worst-case scenario). Which database offers the flexibility to run such a query efficiently? I looked into Postgres but I was wondering if there were alternatives that better fit this problem. Not sure it matters much, but I'm most likely going to be using Python. I found this article about doing it in SQL. EDIT: I am open to alternative solutions for this problem that are not necessarily tied to SQL.

Answer: If it's only a few thousand entries, each with 1,000 features, you may just be able to keep it all in RAM if you are running this on some kind of server. Then when you get a new feature vector, just run a cosine similarity routine. An easy way to do this is to use something standard like pandas and scikit-learn. Alternatively you can keep everything in SQL, load it into something like pandas and use scikit-learn. I'm actually not sure you'll get much of a speed-up, if any, by writing the computation in SQL itself.
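A minimal sketch of the in-RAM approach the answer describes, using scikit-learn's pairwise cosine similarity; the sizes and the random data are illustrative, not from the question:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
stored = rng.normal(size=(5000, 1000))  # thousands of vectors, ~1000 features each
query = rng.normal(size=(1, 1000))      # incoming feature vector

# Cosine similarity of the query against every stored vector at once
sims = cosine_similarity(query, stored).ravel()

# Indices of the 10 closest matches (highest similarity first);
# cosine distance = 1 - cosine similarity, so ordering is the same
top10 = np.argsort(sims)[::-1][:10]
```

On this scale a single vectorized pass is typically fast enough that no special database support is needed.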
{ "domain": "datascience.stackexchange", "id": 11774, "tags": "feature-extraction, databases" }
Schwarzschild Solution
Question: I'm able to derive the Schwarzschild solution under the assumptions that the metric is (1) static (2) spherically symmetric and that the space is the vacuum. However, I have read that the Schwarzschild solution can be found assuming only that the metric is a spherically symmetric vacuum. How would the Schwarzschild solution be derived under these weaker conditions? Answer: There is a theorem which states that any spherically symmetric solution to the vacuum equations is also necessarily static and asymptotically flat. It is known as Birkhoff's theorem. Chapter 4 of Straumann (2013) contains a full proof.
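For concreteness, the metric that Birkhoff's theorem singles out is the Schwarzschild solution, which in Schwarzschild coordinates (units with $G = c = 1$) reads:

```latex
ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
       + r^2 \left(d\theta^2 + \sin^2\theta \, d\varphi^2\right)
```

The absence of any time dependence here is the content of the theorem: spherical symmetry alone forces the vacuum exterior to be static.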
{ "domain": "physics.stackexchange", "id": 19015, "tags": "general-relativity, black-holes, tensor-calculus" }
ROS concert halts on adding remote machine
Question: I am going through the basic tutorial for setting up a distributed chatter concert. However, on step 3.3 the launch halts. EDIT1: similar warnings are received on (successful) launching of local nodes, so I guess that is not the issue. The info (and possible cause of trouble) that I get is: Rapp Manager : disabling apps requiring capabilities [Couldn't find capability server node. Error: Node 'capability_server' not found.] Rapp Manager : 'rocon_apps/chirp' is not unique and has no preferred rapp. 'rocon_apps/moo_chirp' has been selected. The last received info is about the Rapp manager being initialised, and what usually follows (in successful, local launches) is the info about adding a connection to the public interface. Could that be the trouble? How could I inspect it further? I run it on Ubuntu 14.04, ROS Indigo, and installed rocon via the package manager. Originally posted by gavran on ROS Answers with karma: 526 on 2016-03-09 Post score: 0 Original comments Comment by gavran on 2016-03-14: actually, the behaviour is the same as one that can be seen locally when no concert is launched, but dude is launched. That means that the remote machine can't detect existing concerts? What could be possible reasons?

Answer: The problem was not in ROS or the concert, but in the fact that my wireless network had some restrictions on broadcasting. The solution for the problem was to create an ad hoc wireless network (as described here, for example) and then the tutorial worked as described. Originally posted by gavran with karma: 526 on 2016-03-15 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 24050, "tags": "ros, rocon" }
Image resizing class
Question: How does this class to resize an image look? using System; using System.Collections.Generic; using System.Web; using System.Drawing; using System.IO; /* * Resizes an image **/ public static class ImageResizer { // Saves the image to specific location, save location includes filename private static void saveImageToLocation(Image theImage, string saveLocation) { // Strip the file from the end of the dir string saveFolder = Path.GetDirectoryName(saveLocation); if (!Directory.Exists(saveFolder)) { Directory.CreateDirectory(saveFolder); } // Save to disk theImage.Save(saveLocation); } // Resizes the image and saves it to disk. Save as property is full path including file extension public static void resizeImageAndSave(Image ImageToResize, int newWidth, int maxHeight, bool onlyResizeIfWider, string thumbnailSaveAs) { Image thumbnail = resizeImage(ImageToResize, newWidth, maxHeight, onlyResizeIfWider); thumbnail.Save(thumbnailSaveAs); } // Overload if filepath is passed in public static void resizeImageAndSave(string imageLocation, int newWidth, int maxHeight, bool onlyResizeIfWider, string thumbnailSaveAs) { Image loadedImage = Image.FromFile(imageLocation); Image thumbnail = resizeImage(loadedImage, newWidth, maxHeight, onlyResizeIfWider); saveImageToLocation(thumbnail, thumbnailSaveAs); } // Returns the thumbnail image when an image object is passed in public static Image resizeImage(Image ImageToResize, int newWidth, int maxHeight, bool onlyResizeIfWider) { // Prevent using images internal thumbnail ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); // Set new width if in bounds if (onlyResizeIfWider) { if (ImageToResize.Width <= newWidth) { newWidth = ImageToResize.Width; } } // Calculate new height int newHeight = ImageToResize.Height * newWidth / ImageToResize.Width; if (newHeight > maxHeight) { // Resize with height instead newWidth = ImageToResize.Width * 
maxHeight / ImageToResize.Height; newHeight = maxHeight; } // Create the new image Image resizedImage = ImageToResize.GetThumbnailImage(newWidth, newHeight, null, IntPtr.Zero); // Clear handle to original file so that we can overwrite it if necessary ImageToResize.Dispose(); return resizedImage; } // Overload if file path is passed in instead public static Image resizeImage(string imageLocation, int newWidth, int maxHeight, bool onlyResizeIfWider) { Image loadedImage = Image.FromFile(imageLocation); return resizeImage(loadedImage, newWidth, maxHeight, onlyResizeIfWider); } } Answer: PascalCase the method names and method params if you are feeling overly ambitious. // Set new width if in bounds if (onlyResizeIfWider) { if (ImageToResize.Width <= newWidth) { newWidth = ImageToResize.Width; } } FindBugs barks in Java for the above behavior... refactor into a single if since you are not doing anything within the first if anyways... // Set new width if in bounds if (onlyResizeIfWider && ImageToResize.Width <= newWidth) { newWidth = ImageToResize.Width; } Comments here could be a bit more descriptive; while you state what the end result is I am still lost as to why that would resolve the issue. // Prevent using images internal thumbnail ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); Maybe something similar to what is stated on this blog... // Prevent using images internal thumbnail since we scale above 200px; flipping // the image twice we get a new image identical to the original one but without the // embedded thumbnail ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone); ImageToResize.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipNone);
{ "domain": "codereview.stackexchange", "id": 2232, "tags": "c#, asp.net, image" }
Does time move slower at the equator?
Question: While answering the question GPS Satellite - Special Relativity it occurred to me that time would run more slowly at the equator than at the North Pole, because the surface of the Earth is moving at about 464 m/s compared to the North Pole. The difference should be given by: $$ \frac{1}{\gamma} \approx 1 - \frac{1}{2}\frac{v^2}{c^2} $$ and at $v = 464\ \mathrm{m/s}$ we get: $$ \frac{1}{\gamma} \approx 1 - 1.2 \times 10^{-12} $$ This is a tiny difference – about 4 days over the 13.7 billion year lifetime of the universe – but according to Wikipedia the accuracy of current atomic clocks is about $1$ part in $10^{14}$, so the difference should be measurable. However, I have never heard of any measurements of the difference. Is there a flaw in my reasoning or have I simply not been reading the right journals? Answer: The difference would indeed be measurable with state-of-the-art atomic clocks but it's not there: it cancels. The reasons actually boil down to the very first thought experiments that Einstein went through when he realized the importance of the equivalence principle for general relativity – it was in Prague around 1911-1912. See e.g. the end of http://motls.blogspot.com/2012/09/albert-einstein-1911-12-1922-23.html?m=1 to be reminded about Einstein's original derivation of the gravitational red shift involving the carousel. The arguments for John's setup may be seen e.g. in this paper: http://arxiv.org/abs/gr-qc/0501034 There is a sense in which the "geocentric" reference frame rotating along with the Earth every 24 hours is more inertial than the frame in which the Earth is spinning. Consider one liter of water somewhere – near the poles or the equator – at the sea level. Keep its speed relatively to the (rotating) Earth's surface tiny, just like what is easy to get in practice. Now, let's check the energy conservation in the Earth's rotating frame. 
The energy is conserved because this background – even in the "seemingly non-inertial" rotating coordinates – is asymptotically static, invariant under translations in time. The energy is conserved but the potential energy of one static (in this frame) liter of water may be calculated as $$m c^2 \sqrt{|g_{00}|}. $$ Because the $00$-component of the metric tensor is essentially the gravitational potential (which is normally called "gravitational plus centrifugal" in the "naive inertial" frame where the Earth is spinning) and it is constant at the sea level across the globe, $g_{00}$, which encodes the gravitational slowdown as a function of the place in the gravitational field, must be constant everywhere at the sea level, too. In the "normal inertial" frame where the Earth is spinning, the special relativistic time dilation is compensated by the fact that the Earth isn't spherical, and the gravitational potential is therefore less negative i.e. "less bound" at the sea level near the equator. Some calculations involving the ellipsoid shape of the Earth may yield an inaccurate cancellation. (That error may be attributed to not quite correct assumptions that the Earth's mass density is uniform etc., assumptions that are usually made to make the problem tractable.) But a more conceptual argument shows that the non-spherical shape of the Earth is a consequence of the centrifugal force. Quantitatively, this force is derived from the centrifugal potential, and this centrifugal potential must therefore be naturally added to the normal gravitational potential to calculate the full special-relativistic-plus-gravitational time dilation. That makes it clear why this particular calculation is easier to do in the frame that rotates along with the Earth's surface and the effect cancels exactly. Let me mention that the spacetime metric in the frame rotating along with the Earth isn't the flat Minkowski metric.
If we allow the frame to rotate with the Earth, we just "maximally" get rid of the effects linked to the centrifugal force and the corresponding corrections to the red shift. However, in this frame spinning along with the Earth, there is still the Coriolis force. In the language of the general relativistic metric, the Coriolis acceleration adds some nontrivial off-diagonal elements to the metric tensor. These deviations from the flatness are responsible for the geodetic effect as well as frame dragging. Every argument showing the exact cancellation of the special relativistic effect must use the equivalence principle at one point or another; any argument avoiding this principle – or anything else from general relativity – is guaranteed to be incorrect because separately (without gravity and its effects), the special relativistic effect is certainly there.
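The size of the special-relativistic term quoted in the question can be checked with one line of arithmetic (a back-of-the-envelope sketch; the point of the answer is that this term is exactly cancelled by the gravitational potential difference):

```python
v = 464.0          # equatorial surface speed relative to the poles, m/s
c = 299_792_458.0  # speed of light, m/s

# First-order expansion of 1/gamma: 1 - v^2 / (2 c^2)
fractional_slowdown = v**2 / (2 * c**2)
print(fractional_slowdown)  # ~1.2e-12, matching the figure in the question
```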
{ "domain": "physics.stackexchange", "id": 15044, "tags": "special-relativity, time-dilation" }
Django ListView with MySQL query for the queryset
Question: (Note: Even if you don't know Django, it's probably the query that needs work anyway, so you probably don't need to know Django/Python.) I have a ListView that displays all the entries in a mapping table between the pizza table and the topping table. Unfortunately, I can't just use Django's helpfulness and just give it the model to use (I'm skipping why for the sake of brevity), and have to use get_queryset instead. Unfortunately again, the load time is slow - there are almost 700 pizzas in the table and I need to use the pizza table in my SQL query to order by pizza name. Here's what I'm currently doing:

class MapPizzaToppingListView(ListView):
    model = MapPizzaTopping
    paginate_by = 20
    template_name = 'onboardingWebApp/mappizzatopping_list.html'

    def get_queryset(self):
        cursor = connections["default"].cursor()
        cursor.execute("""SELECT map.* FROM map_pizza_topping as map
                          INNER JOIN pizza ON (map.pizza_id = pizza.id)
                          ORDER BY pizza.name""")
        queryset = dictfetchall(cursor)
        for row in queryset:
            pizza = Pizza.objects.get(id=row['pizza_id'])
            topping = Topping.objects.get(id=row['topping_id'])
            row['pizza_name'] = "%s (%s)" % (pizza.name, pizza.merchant.name)
            row['topping_name'] = topping.name
        return queryset

It takes a few seconds to load now, but we're going to be adding tons more pizzas eventually, so this really needs to be sped up. Unfortunately, my MySQL knowledge, though decent, isn't really at a point where I know how to make it better. What can I do to improve it? Some more information that could be useful - the map_pizza_topping table just has a pizza_id column, a topping_id column, and a quantity column. The pizza table just has name, price, customer_id, and merchant_id columns, and the topping table just has name and is_dairy columns.

Answer: This is slow NOT because of your SQL. The SQL in the post is a simple JOIN, on id columns that are indexed by Django by default.
Without doubt, the cause of the slowness is this:

    for row in queryset:
        pizza = Pizza.objects.get(id=row['pizza_id'])
        topping = Topping.objects.get(id=row['topping_id'])

For every row returned by the first query, you load a Pizza and a Topping. Django might be smart enough to not re-fetch the same topping multiple times, but for every unique Pizza and every unique Topping, Django will have to run an additional query. With 700 pizzas, you're looking at at least 700 queries, which is clearly not efficient. "Unfortunately, I can't just use Django's helpfulness and just give it the model to use (I'm skipping why for the sake of brevity)" - it would be best to get to the bottom of this. Take a look at the documentation of the select_related method on querysets; I think the solution for your case should be somewhere around there. I really think there's a solution within the realm of "Django's helpfulness", without resorting to such queries, you just need to figure it out. (You might want to ask some questions on that on stackoverflow.com.) Another workaround might be to stop calling Pizza.objects.get and Topping.objects.get: include in your query all the fields you need for pizzas and toppings, and build simple dictionaries from them. Rest assured, when you stop calling Pizza.objects.get and Topping.objects.get 700 times you will notice a massive speed improvement.
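The "build simple dictionaries" workaround can be sketched without any Django at all; the rows and the pizza/topping data below are made up for illustration, standing in for the results of two bulk queries:

```python
# Rows as the raw SQL query would return them (illustrative data)
rows = [
    {"pizza_id": 1, "topping_id": 10},
    {"pizza_id": 1, "topping_id": 11},
    {"pizza_id": 2, "topping_id": 10},
]

# Instead of one .get() per row (the N+1 pattern), fetch each table once
# and index it by id: two queries total, then O(1) in-memory lookups.
pizzas = {1: {"name": "Margherita", "merchant": "Luigi"},
          2: {"name": "Pepperoni", "merchant": "Mario"}}
toppings = {10: {"name": "Mozzarella"}, 11: {"name": "Basil"}}

for row in rows:
    pizza = pizzas[row["pizza_id"]]
    row["pizza_name"] = "%s (%s)" % (pizza["name"], pizza["merchant"])
    row["topping_name"] = toppings[row["topping_id"]]["name"]
```

The per-row work drops from a database round-trip to a dictionary lookup, which is where the 700-query cost disappears.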
{ "domain": "codereview.stackexchange", "id": 10404, "tags": "python, optimization, sql, mysql, django" }
2 single slit experiments in parallel
Question: I’ve recently been reading a bit on information theory, and quantum mechanics was introduced. Of course the 2 slit experiment came up. I wonder about something but have not been able to find any info from Google or here. So consider the usual 2 slit setup with one difference - an opaque barrier is placed connecting the screen and the midpoint of the barrier between the slits. This to me sounds like a parallel version of the single slit experiment, but with a single shared light source. What would an observer on each side of the new barrier see? My intuition is that the new barrier is effectively a detector, since one of the observers will observe each photon on their side. Is there any account of such an experiment?

Answer: The introduction of an opaque barrier between the slits (if I understand the question correctly; I include an image for reference) corresponds to discarding the wave-function coming from the other slit. Effectively, whatever result you get at the "output" screen will correspond to the output one would obtain if the other slit was shut. Now, as you are studying the effects of having 'something' between the slits, you might be interested in reading about the effects of electromagnetism in the double slit experiment (cf. https://en.wikipedia.org/wiki/Aharonov–Bohm_effect, and Sakurai's Modern Q.M. Ch. 2.6 and 2.7). This effect explains the shift on the output screen due to an interaction of the photon field with the electron field through the electromagnetic potential. I hope this answer brings you new knowledge and interesting questions! HTH!
{ "domain": "physics.stackexchange", "id": 97070, "tags": "quantum-mechanics, double-slit-experiment" }
A simple register VM written in Rust
Question: I'm teaching myself Rust, and to do this I've written a toy register based virtual machine. I hope the code is easy to follow - I just want to know if there are any mistakes I am making with the language philosophy or if there are syntax tricks I could be using. For example, the program counter pc and the register fields reg_field are mutable. I feel like this is the right thing to do in this case as it better reflects processor architecture. But of course, I might be wrong... const NUMBER_OF_REGISTERS: usize = 3; enum Instruction { Halt, Load { reg: usize, value: u16 }, Swap { reg1: usize, reg2: usize, reg3: usize }, Add { reg1: usize, reg2: usize, reg3: usize }, Branch { offset: usize } } fn main() { let encoded_instructions = &[0x1110, 0x2100, 0x3010, 0x0]; cpu(encoded_instructions); } fn cpu(encoded_instructions: &[u16]) { let mut pc = 0; let mut reg_field = [0; NUMBER_OF_REGISTERS]; loop { let encoded_instruction = fetch(pc, encoded_instructions); let decoded_instruction = decode(encoded_instruction); match decoded_instruction { Some(Instruction::Load { reg, value }) => load(reg, value, &mut reg_field, &mut pc), Some(Instruction::Swap { reg1, reg2, reg3 }) => swap(reg1, reg2, reg3, &mut reg_field, &mut pc), Some(Instruction::Add { reg1, reg2, reg3 }) => add(reg1, reg2, reg3, &mut reg_field, &mut pc), Some(Instruction::Branch { offset }) => branch(offset, &mut pc), Some(Instruction::Halt) => { halt(&reg_field); break } None => break, } } } fn fetch(pc: usize, instructions: &[u16]) -> u16 { instructions[pc] } fn halt(register_field: &[u16]) { println!("{:?}", register_field[0]); } fn load(register: usize, value: u16, register_field: &mut [u16], pc: &mut usize) { register_field[register] = value; *pc += 1; } fn swap(reg1: usize, reg2: usize, reg3: usize, register_field: &mut [u16], pc: &mut usize) { register_field[reg3] = register_field[reg1]; register_field[reg1] = register_field[reg2]; register_field[reg2] = register_field[reg3]; *pc += 1; } fn 
add(reg1: usize, reg2: usize, reg3: usize, register_field: &mut [u16], pc: &mut usize) { register_field[reg3] = register_field[reg1] + register_field[reg2]; *pc += 1; } fn branch(offset: usize, pc: &mut usize) { *pc -= offset; } fn decode(encoded_instruction: u16) -> Option<Instruction> { let operator = encoded_instruction >> 12; let reg1 = ((encoded_instruction >> 8) & 0xF) as usize; let reg2 = ((encoded_instruction >> 4) & 0xF) as usize; let reg3 = (encoded_instruction & 0xF) as usize; let offset = (encoded_instruction & 0xFFF) as usize; let value = encoded_instruction & 0xFF; match operator { 0 => Some(Instruction::Halt), 1 => Some(Instruction::Load { reg: reg1, value: value }), 2 => Some(Instruction::Swap { reg1: reg1, reg2: reg2, reg3: reg3 }), 3 => Some(Instruction::Add { reg1: reg1, reg2: reg2, reg3: reg3 }), 4 => Some(Instruction::Branch { offset: offset }), _ => None, } } Answer: Your code was very readable and easy to follow. Most of my suggestions center around trying to showcase some more Rust idioms and features. A collection of registers in normally called a register file. Since the constant for the number of registers is only used to declare the register file, create a type alias and use that. This sets you up nicely to make it a standalone type in the future. I'm glad to see names for registers in Instruction, but maybe give them more meaningful names? Add is a great example - which registers are the inputs and which is the output? You almost always want to derive Debug for a type. Copy and Clone are also very common. Implementing PartialEq will allow easier test writing. There's a distinct lack of types and methods, overall the code feels pretty C-like. I'd associate functions with types, making them methods. For example, decode is the most obvious change to me. Returning an Option from decode is iffy. Failing to decode an instruction doesn't seem like the absence of a value, it feels like a failure, which is normally reserved for Result. 
When splitting match arms onto a different line from the pattern, use braces. Consider inlining some of the instruction implementations into execute as many are only one line long anyway. In a processor, it's very typical to automatically increment the program counter every instruction. Branch offsets take that automatic increment into account. This is nice here because it removes pc from all the non-branching methods. I just subtracted 1 when applying the jump, but it's probably better to fixup the offsets. Create a type for sequences of instructions, I used Program. This will give you somewhere to hang fetch. type RegisterFile = [u16; 3]; #[derive(Debug, Copy, Clone)] enum Instruction { Halt, Load { reg: usize, value: u16 }, Swap { reg1: usize, reg2: usize, reg3: usize }, Add { reg1: usize, reg2: usize, reg3: usize }, Branch { offset: usize } } impl Instruction { fn decode(encoded_instruction: u16) -> Option<Self> { let operator = encoded_instruction >> 12; let reg1 = ((encoded_instruction >> 8) & 0xF) as usize; let reg2 = ((encoded_instruction >> 4) & 0xF) as usize; let reg3 = (encoded_instruction & 0xF) as usize; let offset = (encoded_instruction & 0xFFF) as usize; let value = encoded_instruction & 0xFF; match operator { 0 => Some(Instruction::Halt), 1 => Some(Instruction::Load { reg: reg1, value: value }), 2 => Some(Instruction::Swap { reg1: reg1, reg2: reg2, reg3: reg3 }), 3 => Some(Instruction::Add { reg1: reg1, reg2: reg2, reg3: reg3 }), 4 => Some(Instruction::Branch { offset: offset }), _ => None, } } fn execute(&self, registers: &mut [u16], pc: &mut usize) -> bool { match *self { Instruction::Load { reg, value } => { load(reg, value, registers); }, Instruction::Swap { reg1, reg2, reg3 } => { swap(reg1, reg2, reg3, registers); }, Instruction::Add { reg1, reg2, reg3 } => { add(reg1, reg2, reg3, registers); }, Instruction::Branch { offset } => { branch(offset, pc); }, Instruction::Halt => { halt(registers); return false; }, } true } } fn halt(register_file: 
&[u16]) { println!("{:?}", register_file[0]); } fn load(register: usize, value: u16, register_file: &mut [u16]) { register_file[register] = value; } fn swap(reg1: usize, reg2: usize, reg3: usize, register_file: &mut [u16]) { register_file[reg3] = register_file[reg1]; register_file[reg1] = register_file[reg2]; register_file[reg2] = register_file[reg3]; } fn add(reg1: usize, reg2: usize, reg3: usize, register_file: &mut [u16]) { register_file[reg3] = register_file[reg1] + register_file[reg2]; } fn branch(offset: usize, pc: &mut usize) { *pc -= offset - 1; } struct Program<'a> { instructions: &'a [u16], } impl<'a> Program<'a> { fn fetch(&self, pc: usize) -> u16 { self.instructions[pc] } } fn cpu(program: Program) { let mut pc = 0; let mut registers = RegisterFile::default(); loop { let encoded_instruction = program.fetch(pc); let decoded_instruction = Instruction::decode(encoded_instruction); match decoded_instruction { Some(instr) => { if !instr.execute(&mut registers, &mut pc) { break } } None => break, } pc += 1; } } fn main() { let encoded_instructions = Program { instructions: &[0x1110, 0x2100, 0x3010, 0x0] }; cpu(encoded_instructions); }
{ "domain": "codereview.stackexchange", "id": 20191, "tags": "rust, virtual-machine" }
Sudoku solver in Haskell
Question: I tried to implement a naive brute-force Sudoku solver in Haskell (I know there are loads of good solutions already) and I'd like some reviews from you experts. The solver is very simple and it uses the List monad to try all the possible combinations. It's not optimized at all, but it takes an awful lot of time to solve even the simplest grids. I'm trying to understand if there is a problem with the algorithm itself (too simple) or with my implementation. Anyway, here is the code. module Main where import Data.List (nub, concat, findIndices) import Control.Monad (liftM2, forM, join, guard) import Data.Maybe (catMaybes, fromMaybe) import Debug.Trace type Board = String -- Some boards -- other examples: http://norvig.com/top95.txt boards :: [Board] boards = map parseBoard [ "4.....8.5.3..........7......2.....6.....8.4......1.......6.3.7.5..2.....1.4......", "..3.2.6..9..3.5..1..18.64....81.29..7.......8..67.82....26.95..8..2.3..9..5.1.3..", "483921657967345821251876493548132976729564138136798245372689514814253769695417382", "483...6..967345....51....93548132976..95641381367982453..689514814253769695417..2", "..3.2.6..9..3.5..1..18.64....81.29..7.......8..67.82....26.95..8..2.3..9..5.1.3..", ".2.4.6..76..2.753...5.8.1.2.5..4.8.9.6159...34.28.3..1216...49.......31.9.8...2.." ] -- The idea is to try all the possibilities by substituting '.' with all -- possible chars and verifying the constraint at every step. When there are -- no more dots to try, backtrack. -- This is done in the List monad. 
solve :: Board -> [Board] -- solve board | trace (showBoard board) False = undefined solve board = go dotIdxs where dotIdxs = findIndices (== '.') board go :: [Int] -> [Board] go [] = do -- no dots to try: just check constraints guard $ not $ isObviouslyWrong board return board -- go dotIdxs | trace (show dotIdxs) False = undefined go dotIdxs = do -- in the List monad: try all the possibilities idx <- dotIdxs val <- ['1'..'9'] let newBoard = set board idx val -- guard against invalid boards guard $ not $ isObviouslyWrong board -- carry on with the good ones solve newBoard -- Create a new board setting board[idx] = val set :: Board -> Int -> Char -> Board set board idx val = take idx board ++ [val] ++ drop (idx + 1) board safeHead :: [a] -> Maybe a safeHead [] = Nothing safeHead (x:_) = Just x -- Block of indices where to verify constraints blockIdxs :: [[Int]] blockIdxs = concat [ [[r * 9 + c | c <- [0..8]] | r <- [0..8]] -- rows , [[r * 9 + c | r <- [0..8]] | c <- [0..8]] -- cols , [[r * 9 + c | r <- [rb..rb + 2], c <- [cb..cb + 2]] | rb <- [0,3..8], cb <- [0,3..8]] -- blocks ] -- Check if constrains hold on grid -- This means that block defined in blockIdxs does not contain duplicates, a -- part from '.' isObviouslyWrong :: Board -> Bool isObviouslyWrong board = any (isWrong board) blockIdxs where isWrong board blockIdx = nub blockNoDots /= blockNoDots where blockNoDots = filter (/= '.') block block = map (board !!) blockIdx -- Filter out spurious chars parseBoard :: Board -> Board parseBoard = filter (`elem` "123456789.") -- Pretty output showBoard :: Board -> String showBoard board = unlines $ map (showRow board) [0..8] where showRow board irow = show $ take 9 $ drop (irow * 9) board test :: Maybe Board test = safeHead . solve $ boards !! 2 main :: IO () main = interact $ showBoard . fromMaybe "Solution not found" . safeHead . solve . 
parseBoard Answer: A large part of the inefficiency may stem from the decision to represent partially completed boards as full-blown 81-element lists, vs. a representation that's more easily mutated. For example, if a board were represented as a short list of only the (position,value) pairs selected so far, the selection of one more value could very inexpensively add another pair to the head of the existing list. This instead of "replacing a dummy '.' element" which typically involves expensive copying. This will change the details of how you validate a board, but not by much -- you are just replacing "dense array" positional indexing and dummy '.' entries with "sparse array" key matching and skipped entries. This change is also somewhat independent of the algorithm you use to decide which positions or values to try first, though different representations can influence this choice as they make it easier or harder to check for different patterns of likely/unlikely choices.
{ "domain": "codereview.stackexchange", "id": 1366, "tags": "haskell, sudoku" }
What are the differences between the below feature selection methods?
Question: Do the below codes do the same? If not, what are the differences?

fs = RFE(estimator=RandomForestClassifier(), n_features_to_select=10)
fs.fit(X, y)
print(fs.support_)

fs = SelectFromModel(RandomForestClassifier(), max_features=10)
fs.fit(X, y)
print(fs.support_)

fs = RandomForestClassifier()
fs.fit(X, y)
print(fs.feature_importances_[:10])

Answer: They are not the same. As the name suggests, "recursive feature elimination" (RFE) recursively eliminates features by fitting the model and throwing away the least-important one(s). After removing one feature, the next iteration may find that the remaining features have changed order of importance. This is especially true in the presence of correlated features: they may split importance when included together, so both might be dropped by your second approach; but in RFE, one gets dropped at some point, and then the other one appears more important in the following iterations (since it no longer splits its importance with its now-dropped companion) and so is kept. Your third approach doesn't do any feature selection; it just prints the first (not top) feature importances (according to the model fitted on all features).
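To make the re-ranking effect concrete, here is a small self-contained toy (not scikit-learn; the importance function and the feature numbering are invented purely for illustration) in which two correlated features split their importance when both are present. A one-shot top-k selection drops both of them, while RFE-style recursive elimination re-ranks after each drop and keeps one:

```python
# Toy "model": features 0 and 1 are correlated and split their importance
# when both are present; features 2 and 3 are independent.
def importances(feats):
    both = 0 in feats and 1 in feats
    return {f: (0.3 if both else 0.6) if f in (0, 1) else 0.4 for f in feats}

def one_shot_top_k(feats, k):
    # SelectFromModel-style: rank once on the full model, keep the top k
    imp = importances(feats)
    return set(sorted(feats, key=lambda f: -imp[f])[:k])

def rfe(feats, k):
    # RFE-style: drop the least important feature, then re-rank what remains
    feats = list(feats)
    while len(feats) > k:
        imp = importances(feats)
        feats.remove(min(feats, key=lambda f: imp[f]))
    return set(feats)

print(one_shot_top_k([0, 1, 2, 3], 2))  # {2, 3}: both correlated features dropped
print(rfe([0, 1, 2, 3], 2))             # one of 0/1 survives the re-ranking
```

After the first elimination removes one of the correlated pair, the survivor's importance jumps, which is exactly the behaviour the answer describes for RFE.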
{ "domain": "datascience.stackexchange", "id": 10886, "tags": "machine-learning, scikit-learn, feature-selection" }
Method to achieve a constant force no matter the actual velocity when under load
Question: So, this might be a somewhat strange need, as I don't seem to see many resources on this. I'm trying to create a device where I can set an amount of force for it to generate, and when it would behave as such: If there is no or only a small opposing force, it would drive forward in the direction of the force. If there is a constant force being applied that is equal and opposite to the force being generated, it would stall indefinitely without damage. If there is an opposing force larger than the generated force, it would be pushed back, while still providing the constant force as a resistance, therefore reducing the effective backward force; and it should be able to handle all this without damage. All this only need to occur in the range of no more than 5 cm, and the simpler and smaller the system is, the better. I was wondering if a linear motor, considering that they are also called a force motor, can handle such a demand, and if so, what type? There's also the concern that such a small linear motor can't be found anywhere (or might be prohibitively expensive). Another thought that I had was to use a torque motor (with some gears to translate torque into linear force, of course), but I couldn't find a definitive source saying that those can indeed provide a constant torque even when being pushed back by the load, I just see that they provide high torque at low speed. Also, on the topic of torque motors, do DC torque motors exist, or are they AC only? Of course, if you have a better idea for achieving this need of a constant force no matter what's actually happening in terms of movement and outside load, please let me know! Finally, please do tell me if you think a different community would be a better place for this question. Thank you all for your time! Answer: If the changes are slow enough, a simple pulley with counterweight might do the job. 
Constant force hangers are used very commonly for piping, where you can find interesting designs using combinations of springs and lever arms. One of the simplest designs is here. A similar effect is also achieved in compound bows, which are optimised to resist the middle part of the draw with the maximum force in order to store maximum energy (the force also drops at the end of the draw, so you can hold a fully drawn bow, with a lot of stored energy, with less effort than a traditional bow).
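To put a number on the "slow enough" caveat of the counterweight idea: an ideal counterweight of mass m over a pulley supplies a tension of m·g only while the load end is not accelerating. A rough sketch (the 50 N target is an arbitrary example value):

```python
# Counterweight tension: m*g when static, m*(g - a) when the counterweight
# accelerates downward at a -- so the force is only constant for slow motion.
g = 9.81
target_force = 50.0            # desired constant force in N (example value)
m = target_force / g           # required counterweight mass, kg
for a in (0.0, 0.5, 2.0):      # downward acceleration of the weight, m/s^2
    print(f"a = {a} m/s^2 -> force = {m * (g - a):.1f} N")
```

The deviation grows linearly with the acceleration of the moving end, which is why the counterweight solution suits quasi-static applications best.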
{ "domain": "engineering.stackexchange", "id": 5275, "tags": "motors, linear-motion, linear-motors" }
Algorithms for generating brick walls
Question: I'm not sure which StackExchange community is the correct place to ask this, so I'm trying this one. I want to write code that generates random brick wall type textures, similarly to what, for example, this commercial software does: http://www.vizpark.com/shop/walls-and-tiles/ . So far I've thought of at least three types of brick wall textures I would like to generate:

1) Periodic and user-defined pattern with one or more predefined brick types (bricks can be different shapes and sizes). This is the simplest case.

2) Completely random pattern with predefined brick types.

3) Completely random pattern with random sized bricks.

This is pretty much a mathematical problem that could be rephrased as: what is the way to fill a predefined area with certain shapes (either random or predefined) without leaving gaps? I would assume that people have thought about this type of problem before, but I don't know how to search for it. I'm not looking for a conclusive answer, but would appreciate links to articles, etc. discussing different variations of this or related problems. Answer: I would like to comment this, but reputation... What you are looking for is a Tessellation. A tessellation of a flat surface is the tiling of a plane using one or more geometric shapes, called tiles, with no overlaps and no gaps. For example, the Penrose tiling, the Pythagorean tiling, domino tiling or polyomino tiling.
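As a starting point for case 3, here is a minimal sketch (all dimensions, the seed, and the rendering are arbitrary choices): fill each row left to right with random-width bricks, clamping the final brick so the row ends exactly at the wall edge. This trivially guarantees a gap-free tiling for the rectangular case:

```python
import random

def brick_rows(wall_width, n_rows, min_w=2, max_w=5, seed=0):
    # Returns one list of (x, width) bricks per row; widths are random but
    # the last brick of each row is clamped so the row ends at wall_width.
    rng = random.Random(seed)
    rows = []
    for _ in range(n_rows):
        row, x = [], 0
        while x < wall_width:
            w = min(rng.randint(min_w, max_w), wall_width - x)
            row.append((x, w))
            x += w
        rows.append(row)
    return rows

# crude ASCII rendering: each brick drawn as a '|' followed by '_' filler
for row in brick_rows(24, 4):
    print("".join("|" + "_" * (w - 1) for _, w in row))
```

Because each row is filled independently, vertical joints will occasionally align; a running-bond look (case 1/2) can be had by offsetting alternate rows by half a brick before filling.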
{ "domain": "cs.stackexchange", "id": 8579, "tags": "algorithms" }
Cucumber scenario to test REST interface to scuba diving logbooks
Question: I am looking for ideas how to improve this cucumber scenario (testing a REST interface) and make it more concise.

Feature: Get list of Logbooks
  As a client, I want to get a list of a certain users logbooks
  so that i can present the logbooks name and the number of scuba dives to the user.

  Scenario: Get a list of logbooks
    Given the system knows about the user "tom"
    And the user has the email "tom@tom.com"
    And he owns the logbooks
      | pacific  |
      | atlantic |
      | gulf     |
    When the client asks the system for a list of logbooks owned by user
    Then the client gets a list with 3 logbooks
    And all logbooks of the list have the users name
    And all logbooks of the list have the users email
    And one logbook has the name
      | pacific  |
      | atlantic |
      | gulf     |
    But if the user has no logbook
    And the client requests a logbook from the system
    Then the client gets an error message

Details: A client (browser/other service) is accessing a logbook service via REST. The client can ask the system, with a user name and an email address, about a list of logbooks (named pacific, atlantic and gulf) the user might have stored at the system. In case the user has no logbooks stored, an error message is returned.

Answer: Here is how I would update what has been given, but below there are some points for further improvement.

Feature: Get list of Logbooks
  As a client
  I want to get a list of a certain users logbooks
  So that i can present the logbooks name and the number of scuba dives to the client.

  Background:
    Given the user "tom" has the email "tom@tom.com"

  Scenario: Get a list of logbooks
    Given "tom" owns the logbooks:
      | pacific  |
      | atlantic |
      | gulf     |
    When the client requests for a list of logbooks owned by "tom"
    Then the client gets a list with 3 logbooks
    And the requested logbooks should have the users name and email
    And the requested logbooks should have the names:
      | pacific  |
      | atlantic |
      | gulf     |

  Scenario: The user has no logbooks
    Given "tom" owns no logbooks
    When the client requests for a list of logbooks owned by "tom"
    Then the client should see the message "tom has no logbooks"

WAYS TO IMPROVE

BDD is all about conversations. Cucumber being a BDD tool means that it helps with breaking down the conversational barriers between the business and the development team. You should be agreeing on the language that is used with both the dev team and business before the feature file has been completed.

Is the language stated in the scenarios the same language that the business and the dev team have agreed to use? If not, update the language. ("And the requested logbooks should have the users name and email", for instance)

Are there any extra steps that the business wants to know about? If so, add them in. Are there any steps that aren't needed? If so, remove them.
{ "domain": "codereview.stackexchange", "id": 24427, "tags": "rest, cucumber" }
Updating OpenNI and NITE version
Question: I'm using the Kinect for person tracking, and I have encountered a bug where OpenNI causes a segmentation fault when many objects move in/out of the frame. I did some research and it looks like the bug (below) was fixed in a newer version of OpenNI/NITE.

Program received signal SIGSEGV, Segmentation fault.
0x00007fffe23b374d in Segmentation::checkOcclusion(int, int, int, int) ()
   from /usr/lib/libXnVFeatures.so
(gdb)

My question is, how can I safely update OpenNI to the latest version to work within the ROS framework? Thanks for the help!

Originally posted by bkx on ROS Answers with karma: 145 on 2012-02-05

Post score: 0

Answer: Well, I'm not sure that I fixed the problem, but I haven't seen the segfault since updating to the latest stable OpenNI binaries here.

Originally posted by bkx with karma: 145 on 2012-02-06

This answer was ACCEPTED on the original site

Post score: 1
{ "domain": "robotics.stackexchange", "id": 8117, "tags": "ros, kinect, openni, openni-kinect, nite" }
Do new neurons divide proportionally?
Question: Do new neurons divide proportionally? If I try to improve my reasoning skills, are new neurons only made for the specific region of the brain that controls reasoning? I have heard that one side of the brain controls the other side of the body; if I try to use my non-dominant hand (the left hand in my case), are new neurons only made for the right side of the brain? Does it improve only some brain regions? Answer: It seems you are assuming that your / the adult human brain produces new neurons over time - this is (largely) incorrect: neurons are non-dividing cells and are all formed during embryogenesis and very early childhood / infancy (see also this question). The only known exceptions fall under adult neurogenesis (the generation of new neurons); the most likely or active region in humans would be the hippocampus, however the extent or importance of the process in humans is still not really known. Additionally, the hippocampus is an area of the brain that mostly holds memories and is not really related to reasoning skills or body / motor control.
{ "domain": "biology.stackexchange", "id": 10181, "tags": "brain" }
Does a coin tossing algorithm terminate?
Question: Suppose we have an algorithm like:

n = 0
REPEAT
    c = randomInt(0,1)
    n = n + 1
UNTIL (c == 0)
RETURN n

(Assuming the random number generator produces "good" random numbers in the mathematical sense.) I understand that there is no number $n \in \mathbb{N}$ such that the algorithm is guaranteed to terminate after fewer than $n$ steps. However, the probability of terminating after some finite number of steps is 1. Is there a convention among computer scientists to call an algorithm like this either "terminating" or "non-terminating"? Answer: The formal, unambiguous way to state this is "terminates with probability 1" or "terminates almost surely". In probability theory, "almost" means "with probability 1". For a probabilistic Turing machine, termination is defined as "terminates always" (i.e. whatever the random sequence is), not as "terminates with probability 1". This definition makes decidability by a probabilistic Turing machine equivalent to decidability by a deterministic Turing machine supplied with an infinite tape of random bits — PTMs are mostly interesting in complexity theory. In applied CS, though, computations that always give the correct result but terminate only with probability 1 are a lot more useful than computations that may return an incorrect result.
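A quick simulation illustrates the "terminates almost surely" behaviour: every run finishes in finitely many steps, and the number of iterations follows a geometric distribution with mean 2 for a fair coin. (A sketch; the seed and run count are arbitrary.)

```python
import random

def tosses_until_zero(rng):
    # The algorithm from the question: count tosses until a 0 appears.
    n = 0
    while True:
        n += 1
        if rng.randint(0, 1) == 0:
            return n

rng = random.Random(42)
runs = [tosses_until_zero(rng) for _ in range(100_000)]
print(sum(runs) / len(runs))   # close to the expected value 2
```

No finite bound on the run length exists in principle, yet in practice even 100,000 runs rarely see a run much longer than log2(100000) ≈ 17 tosses.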
{ "domain": "cs.stackexchange", "id": 3588, "tags": "algorithms, randomized-algorithms" }
Land proportions in NASA blue marble photographs
Question: What is the explanation for the apparent size difference of North America in these two photos from NASA? Image source Image source Answer: This is a perspective effect. In essence, the second image is taken from a lower orbit which is closer to Earth, and the Earth only looks spherical because of the use of a fisheye lens that strongly distorts the edges of the image. This means that the field of view is a lot smaller. The Earth still looks like a circle on the page, though from close up the edges can look a bit distorted. In the second image there is no land to be distorted in the edges, and there are effects from the camera lens which can look weird to the human eye (to make the apparent sizes match you're comparing a very wide angle lens with a much narrower one). However, this effect is not photoshop magic. (That said, the first image is, in fact, a very carefully reconstructed mosaic that is made from images taken at much lower altitudes, in a painstaking process that is explained in detail in this Earth Observatory post. It's important to emphasize that, from whatever altitude Simmon simulated, this is indeed the continental layout that you would observe with your naked eye. The original posting of this image clearly identifies it as a mosaic: NASA is always very careful to precisely label every image it publishes in a correct fashion.) I can't find, unfortunately, the altitude that Simmon used to simulate the first image. Any brave takers care to dig through the documentation and source files to see if it's there? The second image, referenced here, was taken by Suomi NPP from an altitude of ~830 km, from where the perspective looks roughly like this, where it is obvious that the wide field of view is only possible because of the fisheye lens, with its associated distortions.
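One way to quantify the perspective effect: the fraction of Earth's surface visible from altitude h is (1 - R/(R+h))/2, the relative area of the spherical cap bounded by the horizon. From Suomi NPP's ~830 km you see only a few percent of the globe, versus over 40% from, say, geostationary distance (a rough sketch ignoring refraction and terrain):

```python
# Visible fraction of a sphere of radius R seen from altitude h:
# the horizon cap has half-angle alpha with cos(alpha) = R/(R+h),
# and its relative area is (1 - cos(alpha))/2.
R = 6371.0                    # km, mean Earth radius
for h in (830.0, 35786.0):    # Suomi NPP orbit vs geostationary altitude, km
    frac = (1.0 - R / (R + h)) / 2.0
    print(f"h = {h:>7.0f} km -> {100 * frac:.1f}% of the surface visible")
```

Seeing so small a patch of the sphere as a full disc is exactly what requires the very wide (fisheye) field of view mentioned above.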
{ "domain": "physics.stackexchange", "id": 88849, "tags": "optics, geometry, camera, nasa, astrophotography" }
An error in launching my custom world into Gazebo
Question: In the tutorial on ROS integration http://gazebosim.org/wiki/Tutorials/1.9/Using_roslaunch_Files_to_Spawn_Models , using the following commands

. ~/catkin_ws/devel/setup.bash
roslaunch MYROBOT_gazebo MYROBOT.launch

I had the following error:

> [MYROBOT.launch] is neither a launch file in package [MYROBOT_gazebo] nor is [MYROBOT_gazebo] a launch file name

My distribution is Hydro and I use Ubuntu 12.10. Any help, please? Thanks in advance.

Originally posted by Eman on Gazebo Answers with karma: 3 on 2014-02-17

Post score: 0

Original comments

Comment by Ben B on 2014-02-17: Have you run catkin_make in that workspace before launching?

Answer: This indicates some kind of problem with your workspace setup. Either your "MYROBOT_gazebo" package could not be found, or your "MYROBOT_gazebo" package could be found, but not the "MYROBOT.launch" launch file inside it. To find out what is happening, you can try

roscd MYROBOT_gazebo

If you get a "no such package/stack" error, the package could not be found.

Originally posted by Stefan Kohlbrecher with karma: 473 on 2014-02-18

This answer was ACCEPTED on the original site

Post score: 0

Original comments

Comment by Eman on 2014-02-18: @Stefan Kohlbrecher Thanks very much for your help. God bless you. I tried the command "roscd MYROBOT_gazebo" & I found the error that you mentioned (roscd: No such package/stack 'MYROBOT_gazebo'). What should I do if the package is not found? Should I remove the catkin_ws & then recreate it & recreate the package?

Comment by Stefan Kohlbrecher on 2014-02-18: This means you either made a mistake when following the tutorial, or there is an error in the tutorial. I'd suggest re-doing the tutorial and taking extra care not to miss anything. Hopefully that solves the problem, otherwise report back. Note that MYROBOT_gazebo should be replaced with your own package name, for example "eman_robot_gazebo". It seems the tutorial also requires some basic knowledge of how catkin and ROS work.
Comment by Eman on 2014-02-18: Thanks a lot for your help. God bless you. Thanks also for your suggestion. I will re-do the tutorial with care of every thing & confirm basic knowledge of how catkin and ROS work.
{ "domain": "robotics.stackexchange", "id": 3553, "tags": "ros, gazebo-tutorial, gazebo-1.9" }
Would a pair of independent quantum coin tosses be perfectly anti-correlated?
Question: Background Suppose we attach a button to an electronic flip flop such that an LED will toggle when we press the button with 50% probability, where the source of the randomness is a quantum event, such as using a Geiger counter to detect whether the next detected arrival was at an even or odd number of milliseconds (I don't think the details really matter). We assume the LED is off initially. If we press the button once, then it is clear that the LED will be on with probability 50%. (Illustrated as experiment 1. Each column represents a different experimental run.) If we look and then press the button again, the LED may or may not change to a different state. Question What happens if we press the button twice (without peeking in the middle)? (Illustrated in experiment 2.) My thoughts Intuitively it feels that this should just make the outputs different, but just as randomly distributed as after one button press. My problem arises when I consider this from a Quantum mechanics framework. My understanding is that the action of pressing a button can be represented as a Unitary matrix acting on the amplitudes of the different possible states. Initially we are certainly in state 0, and after a single button press we are 50% in state 0, and 50% in state 1. Therefore this feels like a rotation of 45 degrees. But, if this is correct, then when I press the button twice I end up with a total rotation of 90 degrees and therefore we are certainly in state 1! Of course, in practice the apparatus is not sufficiently shielded and the system will behave classically - but I am interested in whether this logic is theoretically flawed rather than practical reasons why I can't test this with a real experiment. Request Can anyone help me understand this contradiction? (By the way, this is not a homework question - just something that I was discussing with my son as a thought experiment after reading Quarantine by Greg Egan - highly recommended.) 
Answer: It does not matter whether or not you look. What matters is that it is possible to look in principle, because the quantum particle interacts with the LED and therefore becomes entangled with it (your unitary matrix acts on both the particle and LED). So the outcome will always be the same as experiment 1. The technical term for this is "which-way" information: the state of the LED contains the information about whether or not the particle decayed. As long as which-way information is available in principle, it does not matter whether or not you choose to make use of this information. You can see why this must be so: imagine that you choose not to look, but your son does choose to look at the LED at time 1. It must be the case that you both see the same physical outcome when you both look at time 2.
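The two models in question can be put side by side in a few lines of arithmetic (a toy sketch: the LED is treated as a single two-state system, and the which-way record is modelled simply by letting probabilities, rather than amplitudes, combine between presses):

```python
import math

c = s = math.cos(math.pi / 4)   # cos 45 deg = sin 45 deg

def rotate(v):
    # One press as a pure 45-degree rotation of the amplitude vector (a0, a1)
    a0, a1 = v
    return (c * a0 - s * a1, s * a0 + c * a1)

# Naive model: two presses = 90 degrees total, so the LED is certainly ON.
a = rotate(rotate((1.0, 0.0)))
print([round(x * x, 6) for x in a])    # probabilities [0.0, 1.0]

def press(p):
    # With which-way info recorded elsewhere, the two histories cannot
    # interfere: classical probabilities combine instead of amplitudes.
    p0, p1 = p
    return (p0 * c * c + p1 * s * s, p0 * s * s + p1 * c * c)

p = press(press((1.0, 0.0)))
print([round(x, 6) for x in p])        # [0.5, 0.5], same as a single press
```

The first computation reproduces the "rotation paradox" of the question; the second shows how the entanglement with the LED/detector (here crudely modelled as decoherence) restores the 50/50 outcome the answer describes.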
{ "domain": "physics.stackexchange", "id": 19729, "tags": "quantum-mechanics, quantum-information, measurement-problem" }
Current constraints on Dark Matter self-interaction from galactic profiles
Question: The self-interaction of dark matter may be small, but it cannot be negligible if it is able to dissipate energy to relax into galactic clumps (necessary to explain galaxy rotation curves). According to some answers in this old question: How Does Dark Matter Form Lumps?, the gravitational self-interaction alone is enough to allow dark matter clumping (via n-body interactions), although two answers suggest something other than gravity is needed (one states that considering the weak force is necessary, while another argues that gravity alone doesn't explain how dark matter could clump first in cosmology). I am curious about: Have the measurements of dark matter profiles of galaxies become good enough to provide indirect measurements of dark matter self-interactions? Can this self-interaction be used to say anything about the mass of the dark matter particles? At the very least, can we say with certainty they have mass above some threshold (ruling out very light particles such as axions or neutrinos, and ruling out some kind of unseen massless particles)? Since the strength and radial distribution of the gravitational force vs the weak force differ so strongly, is it possible to determine from the self-interaction whether dark matter interacts via the weak force? Answer: First of all, let me clarify that a non-gravitational interaction is absolutely not required to allow dark matter to clump. There are many attempts to constrain the self-interaction cross section of dark matter. Qualitatively speaking, if dark matter scatters off of itself, this would tend to put an upper limit on the local density of dark matter. Measurements sensitive to this density (e.g. rotation curves, gravitational lensing) can therefore constrain the cross section. There are several examples in the literature of such constraints (this being only a small selection).
As it stands, there is still debate over whether a claim of a self-interaction detection (or really, an interestingly constraining limit) can be made, so learning much about the particle mass or the force mediating the interaction is still rather premature.
{ "domain": "physics.stackexchange", "id": 39761, "tags": "gravity, cosmology, dark-matter, galaxies, weak-interaction" }
Maximum acceleration uphill of a truck - not sure what the equations mean
Question: I'm trying to solve a problem that reads: The coefficient of static friction between a truck's tires and the road is 0.850. What is the maximum acceleration uphill that the truck can have if the road is tilted 12 degrees to the horizontal? I drew a free body diagram and came up with the following equations: X direction: $f = ma + mg\sin\theta = \mu_s F_n$ Y direction: $F_n = mg\cos\theta$ Substituting $mg\cos\theta$ into the x direction equation and solving for $a$ gives: $a = g[\mu_s \cos\theta - \sin\theta]$ At this point I'm not sure what this last equation means or how to find the maximum acceleration from it. It's saying that the acceleration depends only on the force of gravity, the angle of the incline, and the coefficient of static friction. I'm confused because shouldn't the acceleration depend on things like the horsepower of the engine as well as the variables in the equation? Does this equation give the value of some kind of hard cap as to how much friction can contribute to movement given a certain coefficient of friction? Thanks! Answer: It's true that the truck's acceleration does depend on the engine specifications. However, the maximum acceleration the truck can have is set by the maximum friction force the wheels can sustain from static friction. We don't want the truck wheels to slip: if it accelerates beyond what the static friction force allows, the wheels will begin to slip. Without this force of friction the truck would not move at all. $$a=g[\mu_s\cos \theta - \sin \theta]$$ is correct; plugging the values in will give you the maximum acceleration of the truck. As an additional afterthought to the problem: the fact that the maximum acceleration does not depend on mass implies that the maximum acceleration of the truck does not depend on the number of wheels it has.
Assume a truck with 8 wheels (an APC weighing $m_1 = 4$ tons), versus a truck with 4 wheels (an SUV weighing $m_2 = 1$ ton), versus a bike with 2 wheels ($m_3 = 20$ lbs). Each wheel carries only a fraction of the total mass, so the mass parameter per wheel in your equations could be written $m = m_1/8$, or $m = m_2/4$, or $m = m_3/2$. This mass parameter would still cancel out in the end: the APC, the SUV, and the bike would all have the same maximum acceleration.
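Plugging the given numbers into the result (with g taken as 9.81 m/s²):

```python
import math

# a_max = g * (mu_s * cos(theta) - sin(theta)), with the problem's values
mu_s = 0.850
theta = math.radians(12.0)
g = 9.81
a_max = g * (mu_s * math.cos(theta) - math.sin(theta))
print(f"a_max = {a_max:.2f} m/s^2")   # about 6.12 m/s^2 for these values
```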
{ "domain": "physics.stackexchange", "id": 20574, "tags": "homework-and-exercises, forces, free-body-diagram" }
Force induced in a coil by a moving magnet
Question: So a magnet moves towards one end of a coil. A pole is induced in this end of the coil that opposes the pole of the magnet moving towards it (e.g. south pole induced to oppose the north pole of the moving magnet). Now my question is, which direction would the coil move (if it were to)? North-South should result in attraction, right? Answer: First of all, if the north pole of the magnet is brought closer to the coil, anticlockwise current will develop in the coil as seen from the magnet side. This corresponds to North pole on the coil and not South pole. So essentially there will be repulsion.
{ "domain": "physics.stackexchange", "id": 44478, "tags": "electromagnetism" }
Tension at topmost point of a vertical circle
Question: I read that for a body to move in a vertical circle, the minimum tension needed is 0 at the topmost point. Can someone explain to me what it means to have positive or negative tension at the top? I'm not able to visualise it. Answer: First you need to think about what tension is. Tension is the strain created in a rope (or string) when a force tries to elongate it. If there is no force trying to elongate it, there is no tension. Now take the example you asked about. If there is tension in the string at the topmost point, the body keeps revolving, because that tension provides the centripetal force that causes the circular motion. What if the tension is $0$ at the topmost point? No problem: the linear speed (which acts perpendicular to the centripetal force, i.e. the tension in this case) will keep it in motion. What is positive tension? It's the tension in the string that comes into play when, say, you try to elongate the rope. What is negative tension? It would be what arises when you try to compress the rope. But you can't do that: a rope bends rather than compresses, so the tension in the string is simply zero. You may get a negative value in some numerical problems; it tells you that the string has gone slack, i.e. the tension already dropped from positive to zero.
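Setting the tension to zero at the top gives the usual minimum-speed condition: gravity alone supplies the centripetal force, $mg = mv^2/r$, so $v_{\min} = \sqrt{gr}$. A quick numerical sketch (the 1 m radius is an arbitrary example value):

```python
import math

g = 9.81
r = 1.0                        # example radius in metres (assumed value)
v_min = math.sqrt(g * r)       # from m*g = m*v^2/r at the topmost point
print(f"v_min = {v_min:.2f} m/s")   # about 3.13 m/s for r = 1 m
```

Below this speed at the top, the required centripetal force is less than gravity, the string goes slack (zero tension), and the body leaves the circular path.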
{ "domain": "physics.stackexchange", "id": 26941, "tags": "newtonian-mechanics, forces, energy, work, string" }
Are white holes made of exotic matter
Question: I have recently watched Kurzgesagt in a Nutshell's video on wormholes https://www.youtube.com/watch?v=9P6rdqiybaw and I have been thinking about white holes and exotic matter. As I understand it, white holes are the exact opposite of black holes. Black holes are condensed positive mass, so heavy that they attract everything into them. White holes are the opposite: they repel everything around them and give out 'stuff'. Hence, I am wondering: if white holes have this property, would it mean they have negative mass, which is essentially exotic matter? Answer: A Schwarzschild white hole is a time reversed black hole. On the inside it has a singularity that explodes with energy and matter at the moment when the internal time starts. After this initial moment the singularity no longer exists on the internal time scale. The internal time of the white hole is disconnected from the external time by the event horizon. Therefore, the question "When did the singularity explode?" has no meaning for an external observer. Matter and energy created in the explosion cross the event horizon from inside out and fly away at different moments of outside time. With matter and radiation leaving, the mass of the white hole becomes smaller, its event horizon shrinks until it eventually disappears, and the white hole ceases to exist. A white hole does not repel matter. Gravity is always attractive. On the inside, the direction of time is radial from the singularity to the event horizon, so matter simply is moving in time. Past the horizon, the radial coordinate becomes spatial on the outside. Stuff is blown out through the horizon at the speed of light (which is relative) and flies away on inertia. Aside from the initial singularity, which is expected to be described by the future theory of quantum gravity, there is no exotic matter inside or outside the white hole. Its event horizon cannot be crossed from outside to inside.
So researchers on the inside cannot receive any information from outside and have no way of knowing in what universe they will end up when they are thrown out through the horizon. It is believed that white holes do not exist, because they would violate energy conservation by producing matter from nothing. Expert comments are welcome and I will edit the answer accordingly if any part of it is incorrect.
{ "domain": "physics.stackexchange", "id": 51125, "tags": "white-holes, exotic-matter" }
Rotation matrix with deficit angle
Question: I need to find the rotation matrix for a space with a deficit angle. The question is as pictured The following is my answer to the question: If $\theta$ could vary between $0$ and $2 \pi$, $$ R(\theta) = \begin{pmatrix} \cos(\theta) && \sin(\theta) \\ -\sin(\theta) && \cos(\theta) \end{pmatrix} $$ In this space, instead of rotating $2 \pi$ to get to the same point, we rotate $2 \pi - \phi$. So a rotation of $2 \pi$ (full circle) in this funny space is equivalent to a rotation of $2 \pi - \phi$ in ordinary space. So a rotation of $\theta$ in the ordinary space is equivalent to a rotation of $\frac{\theta}{1 - \frac{\phi}{2 \pi}}$ in the funny space. Thus, with the new metric, we let $ \theta \rightarrow \frac{\theta}{1-\frac{\phi}{2 \pi}}$ and we have $$ R(\theta) = \begin{pmatrix} \cos\Big(\frac{\theta}{1-\frac{\phi}{2 \pi}}\Big) && \sin\Big(\frac{\theta}{1-\frac{\phi}{2 \pi}}\Big) \\ -\sin\Big(\frac{\theta}{1-\frac{\phi}{2 \pi}}\Big) && \cos\Big(\frac{\theta}{1-\frac{\phi}{2 \pi}}\Big) \end{pmatrix} $$ $$ \therefore R(0) = \begin{pmatrix} 1 && 0 \\ 0 && 1 \end{pmatrix} $$ and $$ R(2 \pi - \phi ) = \begin{pmatrix} 1 && 0 \\ 0 && 1 \end{pmatrix} $$ This satisfies the requirement that $R(0) = R(2 \pi - \phi) = I_{2} $. Is this the correct rotation matrix and are my steps logical? Thank you. Answer: I think that your metric is not correct. why ? 
your new polar coordinates are $x = r\cos\left(\frac{2\pi\theta}{2\pi-\phi}\right)$, $y = r\sin\left(\frac{2\pi\theta}{2\pi-\phi}\right)$. Writing $k = \frac{2\pi}{2\pi-\phi}$ for brevity, the Jacobi matrix is

$$J = \begin{pmatrix} \cos(k\theta) & -rk\sin(k\theta) \\ \sin(k\theta) & rk\cos(k\theta) \end{pmatrix}$$

and the metric:

$$g = J^{T}J = \begin{pmatrix} 1 & 0 \\ 0 & r^{2}k^{2} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & \frac{4\pi^{2}}{(2\pi-\phi)^{2}}r^{2} \end{pmatrix}$$

If you know the equations for $x$ and $y$, you can calculate the transformation matrix $R$ with the equation $J = R\,H$, where $H$ is the diagonal matrix with $H_{ii} = \sqrt{g_{ii}}$ and $H_{ij} = 0$ for $i \neq j$:

$$H = \begin{pmatrix} 1 & 0 \\ 0 & rk \end{pmatrix}$$

$$R = J\,H^{-1} = \begin{pmatrix} \cos(k\theta) & -\sin(k\theta) \\ \sin(k\theta) & \cos(k\theta) \end{pmatrix}$$

This is your transformation matrix. Remark: I used the symbolic program Maple to do the calculation.
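A quick numerical spot check of this result (pure Python; the test values are arbitrary): with $k = 2\pi/(2\pi-\phi)$, the product $J^T J$ should come out as $\mathrm{diag}(1,\, r^2 k^2)$:

```python
import math

phi, r, theta = 0.7, 1.3, 0.4   # arbitrary test values
k = 2.0 * math.pi / (2.0 * math.pi - phi)
J = [[math.cos(k * theta), -r * k * math.sin(k * theta)],
     [math.sin(k * theta),  r * k * math.cos(k * theta)]]
# g = J^T J, computed entry by entry
g = [[sum(J[a][i] * J[a][j] for a in range(2)) for j in range(2)]
     for i in range(2)]
print(g[0][0], g[0][1], g[1][1], (r * k) ** 2)
```

The off-diagonal entries vanish (to rounding) and the lower-right entry matches $(rk)^2$, confirming that $R = J H^{-1}$ is a pure rotation.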
{ "domain": "physics.stackexchange", "id": 51280, "tags": "homework-and-exercises, differential-geometry, metric-tensor, group-theory, rotation" }
Continuity equation for charge density
Question: Let $\rho$ be the charge density and $M_i$ the momentum density. The article I am reading states that the continuity equations for this system are given by \begin{equation} \frac{\partial \rho}{\partial t} + \nabla \cdot j = 0 \end{equation} and \begin{equation} \frac{\partial M_i}{\partial t} + \nabla_j \tau_{ij} = 0 \end{equation} The second equation makes sense to me, since flux is defined as the rate at which the quantity flows divided by the area which the quantity flows through. Thus, \begin{equation} \phi_M = \frac{\partial(mv)}{\partial t}A^{-1} = \frac{ma}{A}=\frac{F}{A} \end{equation} which gives stress, so that makes sense. However, for the first equation, I do not understand how one obtains current for the flux. It would seem to me that $j$ should be the current density instead, since \begin{equation} \phi_{\rho} = \frac{\partial q}{\partial t}A^{-1} = \frac{j}{A} \end{equation} which corresponds to current density. Answer: The continuity equation in EM is analogous to the hydrodynamical continuity equation: $$ \partial_{t} \rho + \nabla \cdot(\rho {\bf u}) = 0 $$ where the quantity $\rho \mathbf{u}$ represents a kind of "flux" or "flux density". This is exactly the same as the form of the current density $\mathbf{j}$, which is $\mathbf{j} = \rho \mathbf{u}$, where $\rho$ is the charge density and $\mathbf{u}$ the particle drift velocity. Typically, we just call $\mathbf{j}$ current for simplicity.
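The identification $\mathbf{j} = \rho\mathbf{u}$ can be checked numerically in one dimension: for a density profile advected at a constant drift velocity $u$, the continuity equation $\partial_t\rho + \partial_x(\rho u) = 0$ holds identically. A finite-difference sketch (profile, test point, and step size chosen arbitrarily):

```python
import math

u = 2.0                                          # constant drift velocity
rho = lambda x, t: math.exp(-(x - u * t) ** 2)   # advected density profile
j = lambda x, t: rho(x, t) * u                   # current density j = rho * u

h = 1e-5                                         # finite-difference step
x, t = 0.3, 0.1
d_rho_dt = (rho(x, t + h) - rho(x, t - h)) / (2 * h)
d_j_dx = (j(x + h, t) - j(x - h, t)) / (2 * h)
print(abs(d_rho_dt + d_j_dx))                    # ~0 up to discretisation error
```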
{ "domain": "physics.stackexchange", "id": 90199, "tags": "fluid-dynamics, conservation-laws, continuum-mechanics" }
MyTimer based on QTimer
Question: I needed a timer that fires in intervals for a given duration when e.g. a button is pressed. The button can be pressed several times thus I thought it would be easiest to create a new timer for each button press and the timer destroys itself when its duration passed. I wrote this... Interface: // timerhandler.h class TimerHandler { public: virtual void processTimer()=0; // callback void startNewTimer(int interval,int duration); virtual ~TimerHandler(){} }; //timerhandler.cpp #include "timerhandler.h" #include "mytimer.h" void TimerHandler::startNewTimer(int interval,int duration){ new MyTimer(this,interval,duration); } Usage: MainWindow inherits TimerHandler //MainWindow.cpp void MainWindow::on_pushButton_start_clicked() { this->startNewTimer(100,2000); } void MainWindow::processTimer() { std::cout << "TIMER" << std::endl; } Implementation: //mytimer.h #include <QObject> #include <QTimer> class TimerHandler; class MyTimer : public QObject { Q_OBJECT public: friend class TimerHandler; private: TimerHandler* th; QTimer* timer; MyTimer(){}; MyTimer(TimerHandler* th,int interval,int duration); ~MyTimer(); public slots: void commitSuicide(); void on_timer_fired(); }; //mytimer.cpp MyTimer::MyTimer(TimerHandler* th,int interval,int duration) : th(th) { timer = new QTimer(this); connect(timer, SIGNAL(timeout()), this, SLOT(on_timer_fired())); timer->start(interval); QTimer::singleShot(duration,this,SLOT(commitSuicide())); } MyTimer::~MyTimer(){ delete timer; } void MyTimer::commitSuicide(){ delete this; } void MyTimer::on_timer_fired(){ th->processTimer(); } ...it works, but I wonder if there is an easier way (less code) and it is horribly inflexible. I am a bit stuck when I want to change the code to allow different callbacks (that take different parameters). I first tried a virtual slot as interface. I think it would work somehow, but when TimerHandler inherits QOject, then my MainWindow inherits QObject twice and I could not get it running. 
Also, if there is anything else that I can/should fix, please let me know. Answer: That's a lot of code indeed, and for all that code it is not very flexible: the "slot" name is fixed, and you force multiple inheritance on your users, so they can have only one such type of timer - what if I want one timer to call processThing() and another to call processOtherThing()? You should use the signal/slot mechanism to your advantage - that's what it is there for - and get rid of your interface and class entirely: a single (global/static) function is enough. Consider something like this (QTimer::singleShot itself is a good example of how you should be doing it): void limited_timer(int interval, int duration, QObject *target, const char *slot) { QTimer *timer = new QTimer; QObject::connect(timer, SIGNAL(timeout()), target, slot); QTimer::singleShot(duration, timer, SLOT(deleteLater())); timer->start(interval); } Usage (in one of your main window's methods): limited_timer(100, 2000, this, SLOT(processTimer()));
{ "domain": "codereview.stackexchange", "id": 17057, "tags": "c++, timer, qt" }
Generating large Sudoku grid in C#
Question: I'm new to C# and am trying to write a program that randomly generates complete Sudoku grid of variable size. I'm able to generate a 25x25 grid in usually less than 10 seconds but I haven't been able to make a 36x36. This is the method I thought of to create a grid: Instantiate structure classes: cells, rows, columns, boxes, and grid. Each cell is assigned a random integer. A set of functions recurses over each character value (the symbol to be displayed in a cell of the grid) and each box of the grid. If a dead end is hit, the state is reverted to the last branch and the next possible path is taken, going through every possible path until a complete grid is created. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; namespace Sudoku { class Program { // characters used in the grid public const string valueList = "123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ?"; static void Main(string[] args) { Console.SetWindowSize(150, 60); Console.SetCursorPosition(0, 0); // start generating grid for (int i=0; i<4; i++) { Thread t = new Thread(() => { new Grid(4); }); t.Priority = ThreadPriority.Highest; t.Start(); } } } /** * Basic building blocks of the grid. 
*/ class Cell : IComparable<Cell> { // cell coordinates public int X; public int Y; // Containing groups public Group Row; public Group Column; public Group Box; // Possible values public string PossibleValues = Program.valueList; // Display value public char Value = 'x'; // Use this to randomize sort order int I = Grid.Rand.Next(); /** * Constructor */ public Cell (int x, int y, Group row, Group column, Group box, int numValues) { X = x; Y = y; Row = row; Column = column; Box = box; // assign to groups row.AddCell(this); column.AddCell(this); box.AddCell(this); // init possible values PossibleValues = PossibleValues.Substring(0, numValues); } /** * Assign a value to this cell while removing it from possible values of related cells */ public void AssignValue(char value) { if (PossibleValues.Length > 0 && PossibleValues.IndexOf(value) != -1) { RemoveValueFromGroups(value); Value = value; } } /** * Remove a value from possible values */ protected void RemoveValue(char value) { int index = PossibleValues.IndexOf(value); // remove value if exists in possible values if (index != -1) { PossibleValues = PossibleValues.Remove(index, 1); } } /** * Remove a value from all related group members */ protected void RemoveValueFromGroups (char value) { for (int i=0; i<Row.Cells.Length; i++) { Row.Cells[i].RemoveValue(value); Column.Cells[i].RemoveValue(value); Box.Cells[i].RemoveValue(value); } } /** * Used to sort cells randomly using their assigned RNG value I */ public int CompareTo(Cell c) { if (c == null) return 0; return I.CompareTo(c.I); } /** * Regenerate the RNG value */ public void ReseedRng() { I = Grid.Rand.Next(); } } /** * A class that holds the cells. 
Can be rows, columns, or boxes */ class Group { public Cell[] Cells; protected int Index = 0; /** * Constructor */ public Group (int numCells) { Cells = new Cell[numCells]; } /** * Add a cell to the group */ public void AddCell (Cell cell) { Cells[Index++] = cell; } /** * Get a sorted set of all cells that can potential have the given value */ public SortedSet<Cell> GetCandidates (char value) { SortedSet<Cell> candidates = new SortedSet<Cell>(); // Add eligible cells foreach (Cell cell in Cells) { if (cell.Value == 'x' && cell.PossibleValues.Contains(value)) { candidates.Add(cell); } } return candidates; } } /** * Class that represents the sudoku square and all it's parts */ class Grid { //static Grid Instance; string PossibleValues = Program.valueList; public int BoxSideLength; public static Random Rand = new Random(); public Group[] Rows; public Group[] Columns; public Group[] Boxes; public Cell[] Cells; protected static bool CompletedGrid = false; /** * Constructor */ public Grid (int boxSideLength) { int sideLength = boxSideLength * boxSideLength; BoxSideLength = boxSideLength; PossibleValues = PossibleValues.Substring(0, sideLength); Rows = new Group[sideLength]; Columns = new Group[sideLength]; Boxes = new Group[sideLength]; // instantiate the groups for (int i=0; i<sideLength; i++) { Rows[i] = new Group(sideLength); Columns[i] = new Group(sideLength); Boxes[i] = new Group(sideLength); } Cells = new Cell[sideLength * sideLength]; // instantiate the cells for (int y=0; y<sideLength; y++) { for (int x=0; x<sideLength; x++) { int boxIndex = (x / boxSideLength) + (y / boxSideLength) * boxSideLength; Cells[x + y * sideLength] = new Cell(x, y, Rows[y], Columns[x], Boxes[boxIndex], sideLength); } } // start building the grid // Assign the cell values while (!PopulateChar(PossibleValues[0])) { // reset Rng if complete failure occurs foreach (Cell cell in Cells) { cell.ReseedRng(); } } // first completed grid if (!CompletedGrid) { CompletedGrid = true; Draw(); 
Console.ReadLine(); } } /** * Used to recursively feed values into the AssignValues method */ protected bool PopulateChar(char value) { //Console.SetCursorPosition(0, 0); //Draw(); // check for completed grid, end processing if (CompletedGrid) { return true; } return AssignValues(Boxes[0], value); } /** * Used to recursively assign the given value to a cell in each box group */ protected bool AssignValues(Group box, char value) { var candidates = box.GetCandidates(value); if (candidates.Count > 0) { foreach (Cell cell in candidates) { // check for completed grid, end processing if (CompletedGrid) { return true; } // save current state of grid State[] states = new State[Cells.Length]; for (int i=0; i<Cells.Length; i++) { states[i] = new State(Cells[i].Value, Cells[i].PossibleValues); } cell.AssignValue(value); // determine if this cell will cause the next box to error int index = Array.IndexOf(Boxes, box); int gridRowIndex = index / BoxSideLength; int gridColIndex = index % BoxSideLength; bool causesError = false; for (int i = index + 1; i < Boxes.Length; i++) { if (/*i > BoxSideLength * 2 &&*/ gridRowIndex != i / BoxSideLength || gridColIndex != i % BoxSideLength) continue; bool hasFreeCell = false; foreach (Cell testCell in Boxes[i].Cells) { if (testCell.PossibleValues.Contains(value)) { hasFreeCell = true; break; } } if (!hasFreeCell) { causesError = true; break; } } // move on to next box if no error if (!causesError) { int nextBoxIndex = index + 1; if (nextBoxIndex == Boxes.Length) { // start assigning next character int indexOfNextChar = PossibleValues.IndexOf(value) + 1; // Check for grid completion if (indexOfNextChar == PossibleValues.Length) return true; // move on to next char if (PopulateChar(PossibleValues[indexOfNextChar])) return true; } else { // recurse through next box if (AssignValues(Boxes[nextBoxIndex], value)) return true; } } // undo changes made in this recursion layer for (int i = 0; i < Cells.Length; i++) { Cells[i].Value = states[i].Value; 
Cells[i].PossibleValues = states[i].PossibleValues; } } } return false; // no viable options, go back to previous box or previous character } /** * Output the grid to console */ public void Draw() { int rowCounter = 0; foreach (Group row in Rows) { StringBuilder rowString = new StringBuilder(); foreach (Cell cell in row.Cells) { rowString.Append(cell.Value); rowString.Append(' ', 2); } rowCounter++; if (rowCounter == BoxSideLength) { rowCounter = 0; Console.WriteLine(rowString.Append('\n').Append('-', Rows.Length*3)); } else Console.WriteLine(rowString + "\n"); } } } } /** * Used for persisting a cell's state */ struct State { public char Value; public string PossibleValues; public State(char value, string possibleValues) { Value = value; PossibleValues = possibleValues; } } The Grid constructor is passed an integer that is the square root of a side length of the grid, so 5 for a 25x25 grid. Any tips on how to generate 36x36 and larger grids within a reasonable amount of time? Answer: Your description of the algorithm used describes the famous backtracking algorithm. This algorithm, by definition, is guaranteed to find a solution, but is terribly slow because it tries all permutations until a solution is found. In order to speed your algorithm up, you should try to solve the grid the way a human would. The first step is to develop a table of legal values for each cell, rather than just trying any value from the allowed range. This alone will significantly increase the speed of the backtracking solution. The second step is looking at all possible values in each row, column, and subgrid. If a cell can only have one value, based on the limiting values in the row/column/subgrid, then you have a positive value for that cell, which can be used to reduce the number of allowed values in other cells. You also have a positive value if that cell is the only free cell in one of these subsets to support a specific value. 
The third step is slightly more complicated: if any two cells in a row/column/subgrid can each hold only the same two values (both cells can have either 2 or 4, for example), then one of those cells must be 2 and the other 4, and you can remove those two values from all other cells in the subgroup you are examining. The same applies to any N values confined to N cells ([2, 5], [3, 5], and [2, 3] or [2, 3, 5], for example, guarantee that this set of three values will occupy those three cells, so no other cell in the subgroup can hold them). The fourth step rests on the same principle but is harder to spot: it occurs when some N values appear as candidates in only N cells of a subgroup, even though those cells also list other candidates. For example, suppose only two cells in a subgrid allow 2 and 4, but their "penciled" candidate lists are [2, 4, 6, 8] and [2, 4, 8]; since no other cell in the row/column/subgrid can hold 2 or 4, those two cells must take them. Identifying these patterns and limiting such cells to [2, 4] helps you prune other cells' candidates, and reduces the number of values checked in your backtracking solution significantly. Most advanced Sudokus can be solved with just these rules, and even if you encounter a legal Sudoku that cannot be solved with them, your program will still be significantly faster because backtracking will only explore candidates consistent with the current state.
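The "N values in N cells" rule from the third step can be sketched in a few lines. This is my own illustrative Python (the original project is C#), operating on one row/column/box represented as a list of candidate sets:

```python
from itertools import combinations

def eliminate_naked_subsets(group, n):
    """If some n cells in the group together allow only n distinct values,
    those values must occupy exactly those cells, so remove them from the
    candidates of every other cell.  Returns True if anything changed."""
    changed = False
    for cells in combinations(range(len(group)), n):
        union = set().union(*(group[i] for i in cells))
        if len(union) != n:
            continue
        for i, cands in enumerate(group):
            if i not in cells and cands & union:
                cands -= union
                changed = True
    return changed

# Example: two cells pinned to {2, 4} strip 2 and 4 from their neighbours.
row = [{2, 4}, {2, 4}, {1, 2, 3, 4}, {3, 4, 5}]
eliminate_naked_subsets(row, 2)
print(row)  # [{2, 4}, {2, 4}, {1, 3}, {3, 5}]
```

Running this kind of propagation before each backtracking branch shrinks the candidate strings the recursion has to try.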
{ "domain": "codereview.stackexchange", "id": 23655, "tags": "c#, beginner, sudoku" }
Having a problem understanding the execution of I/O in Von Neumann model
Question: I may have gone too deep in my search for an answer that might be much simpler than I figured. Essentially I wanted to figure out how I/O is executed in a Von Neumann machine, but the more I read, the more confused I became. What they are saying in "Introduction to computing systems" (Patt and Patel) and on other websites covering this is that I/O devices in the VN machine have an I/O controller which works as a liaison between the I/O device(s) and the CPU, and which will interrupt the CPU once the I/O operation is finished (1. it receives the write/read request, 2. notifies the CPU and lets it go back to whatever task it was doing, 3. interrupts it once the I/O operation is finished). So this is some kind of interrupt-driven I/O, but is that enough to explain how I/O is executed in a VN machine? How does a program communicate with an I/O device? How would you explain the data transfer between memory and I/O devices? Answer: You are not wrong in stating that you went too deep in your search. To counter these confusions, you might want to study how instructions are executed electronically in a computer. The one thing worth remembering is: in a computer, everything is controlled by the CPU. That includes the I/O devices, their interfaces, the instructions put forth by them and the ones relegated to them. Typically, what happens is that a program or a facility in your machine generates interrupts. Generally, the interrupt itself is an instruction to the CPU coming from the memory, loaded into the memory by the CPU itself at a prior time. When the CPU executes the interrupt, it checks the status of the I/O device (like a key press), registers it and manipulates it as per the instructions that follow. When modern computers turn on, BIOS - the Basic Input Output System - is used for generating interrupts and serves as an interface/liaison between the CPU and I/O devices.
BIOS itself is a program (set of instructions) that resides on ROM and is read by the CPU at time zero. After boot-up is complete and the OS is loaded into memory, this facility is provided by the operating system. How does a program communicate with an I/O device? That depends upon the level of abstraction you are working on. If you view the program as a single, lone set of instructions to the CPU, then the I/O interrupts are a part of the program. They are executed and tell the processor which device is to be contacted and in what manner. A little knowledge of assembly would be helpful for clarification in this context. A small example of instructions to the CPU is as follows: 1. Load integer 8 in Register 1. //Instruction 'Load', Data '8' 2. Check keyboard for key press. //An interrupt to Keyboard 3. Store key press value in Register 2. 4. Add contents of Register 1 and Register 2 in Register 3. 5. Pass contents of Register 3 to Display. //An output interrupt. In the case of an operating system like Windows, the interrupts are a part of the operating system program and are used to call the CPU's attention not only to I/O devices, but also to other programs in the memory. The I/O interrupts here are in the driver domain, present in memory as an interface. The user applications or the other programs in memory make calls to the operating system to issue these interrupts; they cannot make interrupts directly. Following is an example of a simple program in MSIL assembly. .maxstack 2 .entrypoint //begins here. ldc.i4 8 //load integer 8 to stack. ldc.i4 9 //load integer 9 to stack too. add //Add last two integers. call void [mscorlib]System.Console.WriteLine(int32) //A call to windows to //display the answer in console. Windows does the rest. How will you explain the data transfer between memory and I/O devices? An instruction by a program to check the status of the I/O device. An instruction to get the data from the device and store it into a register.
An instruction to store the value of that register into a memory location at a certain address in RAM. That location can be read by the program when required.
{ "domain": "cs.stackexchange", "id": 5973, "tags": "computer-architecture" }
Why do we use the external pressure to calculate the work done by gas
Question: I read in a textbook that in the case when we have a gas in a cylinder fitted with a massless frictionless piston being held with an external pressure $p_1$, and when the external pressure is reduced to the value $p_2$, the gas pushes up against the piston and the work done by the gas for a small change in volume is calculated by: $$\mathrm dW=p_2\,\mathrm dV$$ Here is what I don't conceptually understand. If the gas's molecules were under some pressure $p_1$, equal to the external pressure in the static state, then after the external pressure became lower than the internal pressure, shouldn't the work done by the gas depend on the difference between the two pressures? Answer: If the piston is frictionless and massless, then, if you do a force balance on the piston, you must have that the force per unit area that the gas exerts on the inside face of the piston will always be equal to the external force per unit area that one imposes on the outside face of the piston. The sudden drop in pressure on the outside face of the piston causes the gas to undergo an irreversible expansion. During an irreversible expansion, the local pressure within the cylinder becomes non-uniform, so that the average pressure of the gas differs from the force per unit area at the piston face. As a result, the ideal gas law (or other equation of state) cannot be applied globally to the gas in the cylinder. In addition, during an irreversible expansion, there are viscous stresses present in the gas that allow the force per unit area at the piston face to drop to the new lower value while requiring that force to match the external force on the outer face. So the work done by the gas on the piston is equal to the external force per unit area times the change in volume: $$W = \int{P_{ext}dV}$$ This equation is always satisfied, irrespective of whether the expansion is reversible or irreversible.
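A small numerical illustration of the answer's point (my own sketch, with arbitrary example numbers): for an isothermal ideal-gas expansion against a suddenly lowered external pressure, $W=\int P_{ext}\,dV$ gives less work than the reversible path would:

```python
import math

# One mole of ideal gas, isothermal at T = 300 K, expanding from P1 = 2e5 Pa
# to P2 = 1e5 Pa (volume doubles).  The numbers are arbitrary examples.
n, R, T = 1.0, 8.314, 300.0
P1, P2 = 2.0e5, 1.0e5
V1 = n * R * T / P1
V2 = n * R * T / P2

# Irreversible: external pressure drops to P2 at once, W = P_ext * (V2 - V1)
W_irrev = P2 * (V2 - V1)

# Reversible: P_ext tracks the gas pressure, W = integral of (nRT/V) dV
W_rev = n * R * T * math.log(V2 / V1)

print(round(W_irrev), round(W_rev))  # 1247 1729
```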
{ "domain": "chemistry.stackexchange", "id": 14889, "tags": "thermodynamics, pressure" }
Klenow fragment does not seem to have 5->3 exonuclease activity, but it can still displace nucleotides in front of it - How?
Question: The Klenow fragment does not seem to have 5'->3' exonuclease activity, but it can still displace nucleotides in front of it - how? How would this displacement mechanism differ? My first guess is that exonuclease activity actually digests the upcoming strand into single nucleotides, but how come the displacement activity does not get in the way in lab work? : "The 5' -> 3' exonuclease activity of E. coli's DNA polymerase I makes it unsuitable for many applications." Why doesn't the 5'->3' displacement activity make the applications troublesome then? What would those applications be? I've spent a long time looking for these answers, but unfortunately I've found nothing. Maybe someone has an idea :) Cheers! References: Image - http://www.vivo.colostate.edu/hbooks/genetics/biotech/enzymes/klenow.html Answer: Generally, the New England Biolabs website is a great resource for this sort of information. Maybe you already know this: the Klenow fragment is widely used in molecular biology today. The broad use of the Klenow fragment is because it will remove 3' overhangs, thereby 'polishing' or 'blunting' the DNA. See the famous cloning techniques manual for additional information. It is a polymerase, though, and it does perform 5'->3' elongation in the presence of dNTPs. (sidenote: Klenow was initially used in polymerase chain reaction (PCR) development, and later replaced with heat-stable Taq pol.) The so-called 'strand displacement' activity of the enzyme allows the 5'->3' elongation of the new strand to occur even in the presence of an existing, annealed strand ahead of the advancing polymerase, as you say. This may seem problematic at first glance, but the job of the polymerase is to make a complete, templated copy of DNA -- that is why researchers use it, and that is its job in nature too! Strand displacement is really convenient for scientists when using random primers, for instance. The DNA polymerization can continue unimpeded despite the presence of a 'downstream' barrier.
This is also why not having 5'-->3' exonuclease activity is helpful in this context. (sidenote: pol I is not the main replicative polymerase in E. coli, and its discovery is one of the great stories of post-war biology.) I hope this addresses your question(s).
{ "domain": "biology.stackexchange", "id": 6504, "tags": "biochemistry, molecular-biology, cell-biology" }
How is the oxidation number related to the group number?
Question: If an element $\ce{X}$ forms the highest oxide of the formula $\ce{XO3}$, then it belongs to group: A) 14 B) 15 C) 16 D) 17 How is an oxidation number of an element related to its group? I tried comparing $\ce{X}$ with $\ce{N}$-, $\ce{S}$-, $\ce{As}$-compounds, but all 3 come in the form $\ce{XO3}$ ($\ce{NO3}$, $\ce{SO3}$, $\ce{AsO3}$), so I can't guess the group number. Answer: For main group elements (groups 1, 2, 13-18), the highest possible oxidation state $O_\text{max}$ is given by the group number $G$ (for groups 1 and 2) or by the group number - 10 (for the rest): $$ O_\text{max} = \begin{cases} G &\text{if } G \in \{1,2\} \\ G - 10 &\text{if } 13 \leq G \leq 18 \end{cases} $$ This means that for your compound $\ce{\stackrel{VI}{X}O3}$, so with the oxidation state VI, the corresponding group would be group 16.
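The rule in the answer is easy to encode; a small sketch (function and variable names are mine):

```python
def max_oxidation_state(group):
    """Highest oxidation state for a main-group element, per the rule above
    (groups 1-2 and 13-18 only; transition metals are not covered)."""
    if group in (1, 2):
        return group
    if 13 <= group <= 18:
        return group - 10
    raise ValueError("rule applies only to main groups")

# XO3 puts X in oxidation state +6; find the main group that matches:
group_of_X = next(g for g in (1, 2, *range(13, 19))
                  if max_oxidation_state(g) == 6)
print(group_of_X)  # 16
```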
{ "domain": "chemistry.stackexchange", "id": 2035, "tags": "inorganic-chemistry, periodic-trends, oxidation-state" }
Is $ALL\setminus(RE \cup co-RE)$ empty?
Question: Possible Duplicate: Are there languages that are not in RE nor CO-RE? Let $ALL$ be the class of all languages (decision problems). My question is, is there a language that is neither recognizable nor complement-recognizable? If such a language exists, I believe it would be very interesting to study it (if that is possible!). Answer: I'm posting my comment as an answer, at the request of the OP. The arithmetic hierarchy AH is a class of decision problems defined as follows: Let $Δ_0 = \Sigma_0 = \Pi_0 = R$. Then for $i>0$, let $Δ_i = R^ {\Sigma_{i-1}}$, $\Sigma_i = RE ^{\Sigma_{i-1}}$, $\Pi_i = coRE ^{\Sigma_{i-1}}$. Obviously, AH $\subseteq$ ALL. On the other hand, "each level of AH strictly contains the levels below it." (As pointed out by Robin Kothari in a comment above). Therefore, $ALL\setminus(RE \cup coRE)$ is nonempty.
{ "domain": "cstheory.stackexchange", "id": 772, "tags": "computability" }
What are the practical consequences of a metapackage?
Question: What are the practical consequences of a metapackage? How are they supposed to be used? Are they only a check that you have all the packages you need, or can they help with getting all the packages you need? We now have a lot of packages checked in to Subversion. We would like to define different sub-sets of these packages and let different people check out and update different sub-sets. How do you do that the ros way? Originally posted by TommyP on ROS
{ "domain": "robotics.stackexchange", "id": 15649, "tags": "ros" }
Shouldn't $dU=TdS-PdV$ be true for every closed system?
Question: For a closed system, if there aren't chemical reactions or a phase change inside it, $dU=TdS-PdV$ is the fundamental thermodynamic relation. This expression can be generalized, for a closed system undergoing phase transitions or chemical reactions, as $dU=TdS-PdV+\sum_i \mu_idN_i$ The generalized relation above seems to have a straightforward physical meaning: it takes into account the energy change due to chemical reactions or phase transitions. However, when I think about the system being closed, it seems wrong, because, for a closed system, $dU=\delta Q +\delta W $, and, for a reversible process, we have $(\delta Q)_{rev}=TdS$, $(\delta W)_{rev}=-PdV$. So, substituting, we get $dU=TdS-PdV$, which, since it is an equation between state functions, is true for every transformation. I pondered the derivation above; I'm pretty sure that the equations $dU=\delta Q +\delta W $ and $(\delta Q)_{rev}=TdS$ are always true for a closed system. On the other hand, I'm not sure about $(\delta W)_{rev}=-PdV$, which may be wrong if the system is undergoing a phase transition or chemical reactions. If that is the case, however, I can't figure out why. Answer: It is not sufficient that the system is closed (i.e., does not exchange matter with the outside world) - it should also be in equilibrium, as all the equations you use are for systems in equilibrium. This includes the reversible process, since, by definition, it is a process so slow that at every point the system can be regarded as being in equilibrium. Further, in statistical physics a change of internal energy is usually decomposed into a change caused by macroscopic factors - like changing the volume, change of external field, etc. - and microscopic factors - e.g., due to the energy exchange between the molecules with the adjacent regions. These are called, respectively, work and heat.
Work is not necessarily a result of changing volume - we often define generalized forces and generalized coordinates, so that $$ dU = TdS -PdV+\sum_i F_idX_i $$ One frequently used example is magnetization and the external magnetic field for magnetic systems (e.g., in the Ising model). When including the term $\sum_i\mu_idN_i$, the notion of a closed system becomes somewhat ambiguous and loses its value, since this can be viewed as several systems exchanging particles - and hence manifestly open systems (rather than closed).
{ "domain": "physics.stackexchange", "id": 93707, "tags": "thermodynamics, chemical-potential" }
Given directed connected weighted graph, check if d(v) = delta(s,v)
Question: I'm having a hard time with this problem. Can someone give me some clue/guidance? This is a homework question, so please don't just solve it. Given a weighted directed connected graph $G = (V,E)$ and given another function $d: V \to \mathbb{R}^+$ (including zero), find a linear-time algorithm that checks if $d(v)=\text{delta}(s,v)$ for each $v$, for some fixed vertex $s$. $\text{delta}(s,v)$ is the shortest-path distance from $s$ to $v$, as in Dijkstra's algorithm. My thought is to make use of BST to check if these functions are equal, but I can't avoid running Dijkstra for it. Answer: If $d$ indeed represents the length of the shortest path, we must have $$ d(v)= \begin{cases} 0, &\text{if $v=s$,}\\ \displaystyle\min_{u:(u,v)\in E}\{d(u)+w(u,v)\}, &\text{otherwise,} \end{cases} $$ where $w(u,v)$ is the weight of the edge $(u,v)$. So you can check whether $d$ satisfies this property for all $v$. It takes only linear time. If $d$ does not satisfy this property, it cannot represent the length of the shortest path. However, if $d$ does satisfy this property, does it really represent the length of the shortest path? I'll let you figure out this part. For a further hint: Note that what you need to prove is that, for all $v$, $d(v)$ is the length of the shortest path from $s$ to $v$. If the shortest path from $s$ to $v$ contains $k$ edges, you may try mathematical induction on $k$.
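The necessary condition in the answer can be checked in a single pass over the edges. A sketch (function and variable names are mine; it implements only the stated recurrence check, with the sufficiency argument left open as in the answer):

```python
def satisfies_shortest_path_recurrence(n, edges, s, d):
    """Vertices 0..n-1, edges as (u, v, w) with integer w >= 0.  Checks in
    O(V + E) that d[s] == 0 and, for every other vertex v, that
    d[v] == min over incoming edges (u, v) of d[u] + w(u, v)."""
    if d[s] != 0:
        return False
    best = [float('inf')] * n
    for u, v, w in edges:
        best[v] = min(best[v], d[u] + w)
    return all(d[v] == best[v] for v in range(n) if v != s)

edges = [(0, 1, 2), (0, 2, 5), (1, 2, 1)]
print(satisfies_shortest_path_recurrence(3, edges, 0, [0, 2, 3]))  # True
print(satisfies_shortest_path_recurrence(3, edges, 0, [0, 2, 5]))  # False
```

The second call fails because d[2] = 5 ignores the shorter route through vertex 1 (cost 2 + 1 = 3).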
{ "domain": "cs.stackexchange", "id": 13622, "tags": "graphs, shortest-path, weighted-graphs" }
Do organopolonium compounds exist?
Question: Analogues of alcohols exist for all the heavier Group 16 elements, namely sulfur, selenium, and tellurium. Would polonium also be able to form a "polonol" like $\ce{CH3PoH}$? Answer: As one might expect for an element with no long-lived isotopes, little is known about organopolonium compounds. What is known may be broadly divided into two categories [1,2; cited by Wikipedia]: Compounds where polonium takes the place of a lighter chalcogen, acting more nearly parallel to sulfur, selenium and tellurium than to oxygen. Examples include polonoethers $\ce{R2Po}$ and arylpolonium halides $\ce{Ar2PoX2,Ar3PoX}$. Complexes similar to those formed with metals. WP cites complexes with 2,3-butanediol and thiourea from Ref. [1]. All such compounds are known only in tracer amounts because of both chemical instability and rapid radioactive decay. References 1. Zingaro, Ralph A. (2011). "Polonium: Organometallic Chemistry". Encyclopedia of Inorganic and Bioinorganic Chemistry. John Wiley & Sons. p. 1–3. https://doi.org/10.1002/9781119951438.eibc0182. 2. Murin, A. N.; Nefedov, V. D.; Zaitsev, V. M.; Grachev, S. A. (1960). "Production of organopolonium compounds by using chemical alterations taking place during the β-decay of RaE" (PDF). Dokl. Akad. Nauk SSSR (in Russian). 133 (1): 123–125. Retrieved 12 April 2020.
{ "domain": "chemistry.stackexchange", "id": 15554, "tags": "organic-chemistry, periodic-trends, organometallic-compounds" }
PDO wrapper class
Question: Connection stored in the xml config file. <?xml version='1.0' ?> <database> <connection> <dbtype>mysql</dbtype> <dbname>shoutbox</dbname> <host>localhost</host> <port>3306</port> <user>admin</user> <password>admin</password> </connection> </database> class DBWrapper { /** * Stores the database connection object. * * @access protected * @var database connection object */ protected $dbo = NULL; /** * Stores the class instance, created only once on invocation. * Singleton object instance of the class DBWrapper * * @access protected * @static */ protected static $instance = NULL; /** * Stores the database configuration, from the config.xml file * * @access protected */ protected $xml; /** * When the constructor is called (which is called only once - singleton instance) * the connection to the database is set. * * @access protected */ protected function __construct() { $this->getConnection(); } /** * Grabs the database settings from the config file * * @access private */ private function loadConfig() { $this->xml = simplexml_load_file("Config.xml"); } /** * Instantiates the DBWrapper class. * * @access public * @return object $instance */ public static function getInstance() { if(!self::$instance instanceof DBWrapper) { self::$instance = new DBWrapper(); } return self::$instance; } /** * Sets up the connection to the database. */ protected function getConnection() { if(is_null($this->dbo)) { $this->loadConfig(); list($dsn,$user, $password) = $this->setDSN(); $this->dbo = new PDO($dsn,$user,$password); $this->dbo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); } } /** * Constructs the database source name(dsn) after the config file is read. 
* * @return array */ protected function setDSN() { $dbtype = $this->xml->connection[0]->dbtype; $dbname = $this->xml->connection[0]->dbname; $location = $this->xml->connection[0]->host.":".$this->xml->connection[0]->port; $user = $this->xml->connection[0]->user; $password = $this->xml->connection[0]->password; $dsn = $dbtype.":dbname=".$dbname.";host=".$location; return array($dsn, $user,$password); } /** * Initiates a transaction. */ protected function beginTransaction() { $this->dbo->beginTransaction(); } /** * Commits a transaction. */ protected function commitTransaction() { $this->dbo->commit(); } /** * Roll back a transaction. */ protected function rollbackTransaction() { $this->dbo->rollBack(); } /** * Select rows from the database. * * @param string $table Name of the table from which the row has to be fetched * @param array $columns Name of the columns from the table * @param array $where All the conditions have to be passed as a array * @param array $params For binding the values in the where clause * @param array $orderby Name of the columns on which the data has to be sorted * @param int $start Starting point of the rows to be fetched * @param int $limit Number of rows to be fetched * @exception $ex * @return int $rowcount */ public function select($table, $columns = '*', $where = '', $params = null, $orderby = null, $limit = null, $start = null) { try { $query = 'SELECT '; $query .= is_array($columns) ? implode(",",$columns) : $columns; $query .= " FROM {$table} "; if(!empty($where)) { $query .= " where ".implode(" and ", $where); } if(is_array($orderby)) { $query .= " order by "; $query .= implode(",",$orderby); } $query .= is_numeric($limit) ? " limit ".(is_numeric($start) ? "$start, " : " ").$limit : ""; $sth = $this->dbo->prepare($query); $sth->execute($params); $rows = $sth->fetchAll(PDO::FETCH_ASSOC); return $rows; } catch(Exception $ex) { $this->exceptionThrower($ex,true); exit; } } /** * Insert's a row into the database. 
* * @param string $table Name of the table into which the row has to be inserted * @param array $params For binding the values in the where clause * @exception $ex * @return int $rowcount */ public function insert($table, $params) { try { $bind = "(:".implode(',:', array_keys($params)).")"; $query = "INSERT INTO ".$table. "(" .implode(",", array_keys($params)).") VALUE ".$bind; $this->beginTransaction(); $sth = $this->dbo->prepare($query); $sth->execute($params); $rowcount = $sth->rowCount(); $this->commitTransaction(); return $rowcount; } catch(Exception $ex) { $this->exceptionThrower($ex,false); exit; } } /** * Delete's a row from the database. * * @param string $table Name of the table into which the row has to be deleted * @param array $where All the conditions have to be passed as a array * @param array $params For binding the values in the where clause * @exception $ex * @return int $rowcount */ public function delete($table,$where=null,$params=null) { try { $query = 'DELETE FROM '.$table; if(!is_null($where)) { $query .= ' WHERE '; $query .= implode(" AND ",$where); } $this->beginTransaction(); $sth = $this->dbo->prepare($query); $sth->execute($params); $rowcount = $sth->rowCount(); $this->commitTransaction(); return $rowcount; } catch(Exception $ex) { $this->exceptionThrower($ex,false); exit; } } /** * Update's a row in the database. * * @param string $table Name of the table into which the row has to be updated * @param array $set Values to be changed are set as an associative array * @param array $where All the conditions have to be passed as a array * @param array $params For binding the values in the where clause * @exception $ex * @return int $rowcount */ public function update($table, $set ,$where = null, $params = null) { try { $count = 0; $str = ''; $query = "UPDATE {$table} SET "; foreach($set as $key=>$val) { $count += 1; if($count > 1) { $query .= " , "; } if(is_numeric($val)){ $query .= $key ." = ". $val; } $query .= $key ." = '". 
$val."'";
}
echo $query."<br/>";
if(!is_null($where)) {
    $query .= " where ". implode(" and ", $where);
}
$this->beginTransaction();
$sth = $this->dbo->prepare($query);
$sth->execute($params);
$rowcount = $sth->rowCount();
$this->commitTransaction();
return $rowcount;
}
catch(Exception $ex) {
    $this->exceptionThrower($ex,false);
    exit;
}
}

/**
 * @param object $ex Incoming exception object
 * @param bool $isSelect Useful for instantiating a roll back
 */
private function exceptionThrower($ex, $isSelect = true) {
    if(!$isSelect) {
        $this->rollbackTransaction();
    }
    echo "Exception in the: ".get_class($this).
        " class. <b>Generated at line number:</b> ".$ex->getLine().
        "<br/> <b>Exception:</b> ".$ex->getMessage().
        "<br/><b>Trace:</b>".$ex->getTraceAsString();
}
}

How to use the class.

#Using the file:
$db = DBWrapper::getInstance();

#Selecting data
$table = "shouts";
$columns = array("id","name","post");
$where = array("email like :email");
$params = array('email' => 'dan@harper.net');
$result_set = $db->select($table,$columns,$where, $params);
$result_set = $db->select($table);
foreach($result_set as $result) {
    echo "<b>Post:</b>".$result['post']."<br/>";
}

#Insert
$table = "shouts";
$insert = array('name'=>'chaitanya','email'=>'learner@sfo.net','post'=>'Congratulations! You have successfully created a Hello World application!', 'ipaddress'=>$ipaddress);
echo "<br/>Count: ".$db->insert("shouts", $insert);

#Update
$table = "shouts";
$set = array('name'=>'code learner', 'email'=>'learner@code.com');
$where = array("id = :id");
$values = array('id'=>1);
echo $db->update($table, $set, $where, $values);
//$where = array("id IN (:id0,:id1,:id2,:id3)");
//$where = array("id BETWEEN :id0 and :id1");

#Delete
$table = "shouts";
$where = array("id = :id");
$values = array('id'=>1);
echo $db->delete($table, $where, $values);

I have written a PDO wrapper; I am very new to PHP and this is my first try in OOP-PHP.
I would like suggestions on: changes to the way the class is implemented that would make it better, and features that could be added. Answer: Seeing as there are no other answers, I will give a quick review. You seem to have done a good job with this class. Here are some things I would do: Remove the evil singleton. It is valid to have more than one database connection; don't limit yourself to a single instance. Use Dependency Injection. Provide transaction checking: add an inTransaction property to the class and set it when you start a transaction. Throw an exception if your code tries to start more than one transaction, or to commit or roll back without being in a transaction. Remove the transactions from your methods. An insert or update call may be part of a much wider transaction - it should not be committed without the rest working. Get rid of exceptionThrower. Create an Exception_DB class that extends an exception. If you need to roll back, do so before you throw. There is some really useful information that you could include in this exception class if you pass it the DB: PDO has errorCode and errorInfo, which will give you an idea of why your SQL statement is wrong.
{ "domain": "codereview.stackexchange", "id": 788, "tags": "php, pdo" }
Why does the principle of locality of computation not relativize?
Question: Although I have trouble understanding oracle TMs, I appreciate that non-relativizing techniques will be needed to resolve P vs. NP (as well as most other open problems in TCS). However, one of the techniques I've read about that does not relativize is using the fact that all of a Turing machine's computation is local, and I'm wondering why this is a valid approach. After all, the oracle tape has to be read somehow, and if not locally, then how? As I learned through this post regarding the Time Hierarchy Theorem and relativization, the oracle of a polytime OTM must be accessed via a polytime reduction, so it cannot just act arbitrarily. Is it just that the oracle tape is read only? Or is it that, even though computation is local for an OTM, we regard this aspect as irrelevant since a call to the oracle is seen as taking one step? Thanks. Answer: When you go into a special "oracle" step, the oracle reads the entire contents of the oracle tape and determines its output based on the entire tape. This is the non-local aspect of oracle computation.
{ "domain": "cs.stackexchange", "id": 3762, "tags": "proof-techniques, oracle-machines, relativization" }
I'm having trouble with falling objects with constant mass flow rate
Question: It follows that F = d(mv)/dt, so F = m dv/dt + v dm/dt. We are given that sand flows with a constant mass flow rate M, dropped from a height h, and we are to find the force exerted on the surface it falls on. This means the impact velocity can be found using conservation of energy, where mgh = 1/2 mv^2, so v = sqrt(2gh). What I'm having trouble understanding is why dv/dt in this system should be zero. I thought that dv/dt is the acceleration, so it should be g? However, when I was taught this topic the professors said dv/dt should be zero, and I don't understand why. Any help would be appreciated. Answer: In your equation for conservation of energy, $v=\sqrt{2gh}$ is the final velocity with which the sand hits the ground. The sand is falling from a constant height, so the final velocity $v$ does not change: neither $h$ nor $g$ is changing. Of course the velocity of the sand increases from the point of release, but this variable velocity is not the final velocity $v=\sqrt{2gh}$ which you have calculated from the conservation of energy.
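To spell the answer's point out as a formula (a sketch; it counts only the momentum destroyed at the surface and ignores the weight of any sand already piled up, which the problem may or may not include):

```latex
F \;=\; \frac{d(mv)}{dt}
  \;=\; \underbrace{m\,\frac{dv}{dt}}_{=\,0,\ \text{since } v=\sqrt{2gh}\ \text{is fixed}}
  \;+\; v\,\frac{dm}{dt}
  \;=\; \sqrt{2gh}\,\cdot M
  \;=\; M\sqrt{2gh}.
```

Here $m$ is the sand already brought to rest on the surface, so $dm/dt = M$, and $v$ is the impact speed, which is constant because $h$ and $g$ are constant.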
{ "domain": "physics.stackexchange", "id": 44403, "tags": "newtonian-mechanics, forces" }
pythonic longest repetition
Question: I put together this code for the Udacity cs101 exam. It passes the test, but I feel there must be a more elegant way to manage this problem, perhaps with help from a module (like itertools). Question 8: Longest Repetition Define a procedure, longest_repetition, that takes as input a list, and returns the element in the list that has the most consecutive repetitions. If there are multiple elements that have the same number of longest repetitions, the result should be the one that appears first. If the input list is empty, it should return None.

def longest_repetition(alist):
    alist.reverse()
    largest = []
    contender = []
    if not alist:
        return None
    for element in alist:
        if element in contender:
            contender.append(element)
            if len(contender) >= len(largest):
                largest = contender
        else:
            contender = []
            contender.append(element)
    if not largest:
        return contender[0]
    else:
        return largest[0]

#For example,
print longest_repetition([1, 2, 2, 3, 3, 3, 2, 2, 1]) # 3
print longest_repetition(['a', 'b', 'b', 'b', 'c', 'd', 'd', 'd']) # b
print longest_repetition([1,2,3,4,5]) # 1
print longest_repetition([]) # None

Answer: First, I'll comment on your code, and then I'll discuss a more "Pythonic" way to do it using features in the Python standard library. There's no docstring. What does the function do? You start out by reversing the list. This is a destructive operation that changes the list. The caller might be surprised to find that their list has changed after they called longest_repetition! You could write alist = reversed(alist) instead, but it would be even better if you didn't reverse the list at all. By changing the >= test to > you can ensure that the earliest longest repetition is returned, while still iterating forwards over the input. You build up a list contender that contains the most recent series of repetitions in the input. This is wasteful: as contender gets longer, the operation element in contender takes longer.
Since you know that this list consists only of repetitions of an element, why not just keep one instance of the element and a count of how many repetitions there have been? You only update largest when you found a repetition. That means that if there are no consecutive repetitions in the input, largest will never be updated. You work around this using an if statement at the end to decide what to return. But if you always updated largest (whether you found a repetition or not) then you wouldn't need the if statement. By suitable choice of initial values for your variables, you can avoid the if not alist special case. And that would allow your function to work for any Python iterable, not just for lists. So let's apply all those improvements:

def longest_repetition(iterable):
    """
    Return the item with the most consecutive repetitions in `iterable`.
    If there are multiple such items, return the first one.
    If `iterable` is empty, return `None`.
    """
    longest_element = current_element = None
    longest_repeats = current_repeats = 0
    for element in iterable:
        if current_element == element:
            current_repeats += 1
        else:
            current_element = element
            current_repeats = 1
        if current_repeats > longest_repeats:
            longest_repeats = current_repeats
            longest_element = current_element
    return longest_element

So is there a more "Pythonic" way to implement this? Well, you could use itertools.groupby from the Python standard library. Perhaps like this:

from itertools import groupby

def longest_repetition(iterable):
    """
    Return the item with the most consecutive repetitions in `iterable`.
    If there are multiple such items, return the first one.
    If `iterable` is empty, return `None`.
    """
    try:
        return max(
            (sum(1 for _ in group), -i, item)
            for i, (item, group) in enumerate(groupby(iterable))
        )[2]
    except ValueError:
        return None

However, this code is not exactly clear and easy to understand, so I don't think it's actually an improvement over the plain and simple implementation I gave above.
{ "domain": "codereview.stackexchange", "id": 2291, "tags": "python" }
Why is copper(II) sulfate added drop by drop during the Biuret test?
Question: The Biuret test is a test for proteins. The procedure is to add a strong base, such as NaOH, to dissolve the protein. After that, copper(II) sulfate is added drop by drop into this until a (violet) colour change is observed. I have a few questions. How does NaOH dissolve the protein exactly? What is the significance of adding copper(II) sulfate drop by drop? How would the reaction vary if copper (II) sulfate was added first and NaOH added subsequently, drop by drop? Answer: The Biuret Test is a chemical assay that detects the presence of proteins in a sample. The test is not named after any famous scientist, but after an urea dimer called biuret $(\ce{H2NC(O)NHC(O)NH})$. However, funny thing is biuret is not a part of the test at all. The test is called Biuret Test because biuret gives a positive result to the test (See the image below). The test relies on the characteristic color change (from blue to deep purple or violet; see the image), which confirms the presence of proteins. Biuret isn't a protein, but it gives a positive result to the test, because it has two amide bonds. Your question (1): How does $\ce{NaOH}$ dissolve the protein exactly? Proteins has a lot of amide (peptide) bonds, terminal amine and carboxylic acid groups, and along the $\alpha$-helix, a lot of amino acid side chains including amine groups (e.g., lysine), carboxylc acid groups (e.g., aspartic acid), thiol groups (e.g., cystine), etc. All of them can be reacted with $\ce{NaOH}$ to become ionic, which make it more soluble in water. Specifically, recall most of zwitterionic form of amino acids resist to dissolve in water. Regardless, $\ce{NaOH}$ is essential for the test, not only to make protein more soluble, but also to extract $\ce{H}$ from amide $\ce{N-H}$ bond to make $\ce{N}$ negative (Ref.1). Your question (2): What is the significance of adding copper(II) sulfate drop by drop? 
When you perform the Biuret Test in a qualitative manner, you need to be careful so that you can see the color change clearly. Suppose you have a very dilute protein sample. Adding the blue solution to a colorless solution is tricky: if you are not slow enough, you won't realize that a color change has happened, because both solutions (before and after) are colored but their intensities are low (see the region near the pipette tip in the picture). If you are doing the quantitative test, it doesn't matter how fast you add the solution. After color development, you use a UV-Vis spectrometer to measure the color intensity at a wavelength appropriate to the purple color ($\pu{540 nm}$), where the absorbance of the blue $\ce{CuSO4}$ color is essentially zero, so interference is minimal. Your question (3): How would the reaction vary if copper(II) sulfate was added first and $\ce{NaOH}$ added subsequently, drop by drop? When you search for the Biuret Test, you'd find that the biuret solution is already alkaline with $\ce{NaOH}$ (see the description of the solution in the picture). Thus, I'd say the order does not matter. Yet, if you want to do a crude test with aqueous $\ce{CuSO4}$ solution and $\ce{NaOH}$ solution, you should be careful, because initial precipitation of $\ce{Cu(OH)2}$ might happen. If interested, you may read Ref. 2, which gives a very comprehensive review of the test, to see the effects of all the solutions on the test. References: H. Sigel, R. B. Martin, “Coordinating properties of the amide bond. Stability and structure of metal ion complexes of peptides and related ligands,” Chem. Rev. 1982, 82(4), 385–426 (DOI: 10.1021/cr00050a003). A. G. Gornall, C. J. Bardawill, M. M. David, “Determination of Serum Proteins by Means of the Biuret Reaction,” Journal of Biological Chemistry 1949, 177, 751-766 (http://www.jbc.org/content/177/2/751.short).
{ "domain": "chemistry.stackexchange", "id": 11512, "tags": "reaction-mechanism, aqueous-solution, biochemistry, coordination-compounds, proteins" }
Hokuyo_node single scan mode
Question: Hi, is it possible to configure the hokuyo_node to produce a scan at a specified time, as in a single scan mode? As I understand it, the current default configuration is continuous scan mode, where the node continuously publishes laser scans on the topic. Originally posted by Sentinal_Bias on ROS Answers with karma: 418 on 2013-03-13 Post score: 0 Answer: AFAIK there are no such settings, but you can subscribe to the laser topic and pick out the frame you want. The timestamp is stored in the header of the topic message. See the LaserScan message for details. Originally posted by K Chen with karma: 391 on 2013-03-14 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 13351, "tags": "ros, hokuyo-node" }
Minimum number of interacting particles to form a plasma
Question: After reading this question https://physics.stackexchange.com/questions/82486/what-is-the-minimum-number-of-electrons-necessary-to-propagate-a-plasmon I began to wonder: what is the minimum number of particles needed to have a plasma? I figure that you need at least two to have density oscillations, but I would imagine you would need quite a bit more to have anything that resembles Debye screening. I know that solar wind plasmas have a number density on the order of unity, so I don't think it's an issue of $n_e$. Thanks! Answer: It actually turns out that the number density is an important aspect of defining a plasma. The general definition is: a plasma is a quasi-neutral gas of charged and neutral particles which exhibits collective motions. The three conditions for exhibiting collective motions are temperature- and (number-)density-dependent, stemming from the Saha equation: the time scale of oscillatory motion ought to be less than collisional time scales ($\tau \ll \tau_n\propto1/n_n\sqrt{T}$ with $n_n$ the number of neutrals); the length scale of plasma dynamics needs to be much larger than the Debye length ($\lambda\gg\lambda_D\propto\sqrt{T/n}$ with $n$ the number of charged particles); and the number of particles in the Debye sphere needs to be significant ($N_D\propto\sqrt{T^3/n}\gg1$). Pictorially, one can see it from the following image (which comes from one of Hans Goedbloed's lectures, specifically MHD1.pdf at the bottom of the linked page). A slightly different version of this image appears in his book Principles of Magnetohydrodynamics (co-authored by Stephan Poedts). The caption in the book reads, Conditions for collective plasma behaviour, in terms of the density $n\equiv n_e\approx Zn_i$ and temperature $T\sim T_e\sim T_i$, are satisfied in the shaded1 area for time scales $\tau<\tau_n=1\,{\rm s}$ and length scales $\lambda>\lambda_D=1\,{\rm m}$, where $N_D\gg1$ is also satisfied.
The restrictions on the upper time limit of low density astrophysical plasmas quickly approach the age of the Universe, whereas the restrictions on the lower length limit for high density laboratory fusion experiments approach microscopic dimensions. It seems, then, that the "minimum" density that satisfies this is a $T\sim10^{3.5}\,{\rm K}$ plasma of $n\sim10^5\,{\rm m^{-3}}$ charged particles (corresponding to $n_n\sim10^{-20}\,{\rm m^{-3}}$ neutrals). 1 The shaded region in the book is the white region above
{ "domain": "physics.stackexchange", "id": 10675, "tags": "plasma-physics" }
Fields in QFT: What are the Initial Conditions, $\phi(t=0,x)$?
Question: Solving a differential equation generally goes something like this: One is given a set of initial conditions. There is a large freedom of choice in doing so. For example, in Quantum Mechanics any normalizable $\psi(x,0)$ will do The differential equation then determines how the state evolves, giving $\psi(x,t)$ for all times $t$. Consider now QFT, in particular the free real scalar field as a simple example. The (Klein-Gordon) equation of motion for fields in the Heisenberg picture is solved by $$\phi(\vec{x},t)=e^{itH}\phi(\vec{x},0)e^{-itH}$$ For the real scalar field, the explicit solution is written $$\phi(\vec{x},t)=\int (a(\vec{p})e^{-ip\cdot x}+a^{\dagger}(\vec{p})e^{ip\cdot x}) \frac{d^3\vec{p}}{\sqrt{(2\pi)^3 2E_0}}$$ In QFT lecture, it is often stated that this is the most general $\phi(x,t)$ which solves the equation of motion. Let's investigate that claim. In particular, plugging in $t=0$, we have assumed the initial condition $$\phi(\vec{x},0)=\int (a(\vec{p})e^{i\vec{p}\cdot \vec{x}}+a^{\dagger}(\vec{p})e^{-i\vec{p}\cdot \vec{x}}) \frac{d^3\vec{p}}{\sqrt{(2\pi)^3 2E_0}}$$ But as far as I can tell, the only constraint on $\phi(x,0)$ introduced in lecture is that it should obey the commutation relations. $\phi(x,0)$ is a very important quantity, because in the Schrödinger picture it is the field operator, just as analogously $x$ and $p=i\nabla$ were the position and momentum operators in Quantum Mechanics. Therefore I'd like to ask for one of two possible solutions: Can someone provide a proof that we must have this particular initial condition for the fields? My feeling is that the commutation relations alone are not enough to give this specific form, as operators have a huge amount of degrees of freedom. Just as interesting would be to provide a counterexample: an initial condition $\phi(\vec{x})$ which satisfies the commutation relations but is not equal to $\phi(\vec{x},0)$ above. Either of these answers would be very elucidating for me. 
If not a proof, then a reference to a proof would of course also be just as helpful. EDIT: After more thought, the answer to this question is sort of a mixture of knzhou and ~Cosmos Zachos' answers. As knzhou remarked, the form $\phi(x,0)$ is not uniquely defined by the commutation relations or KG equation, because you could just redefine $\phi(x,0)$ to be the field with $t=1$ and all equations would still hold. This would be equivalent to redefining the operators $a(\vec{p})$ by adding a phase. However, there is a strong restriction on the $a(\vec{p})$ operators, namely* they can only be redefined up to a momentum-dependent phase $a(\vec{p}) \to e^{i \alpha(\vec{p})}a(\vec{p})$. The proof is along the lines of what Cosmos Zachos' was mentioning and the treatment is done in many QFT textbooks. The essential reason this is possible is that redefining $a(\vec{p}) \to e^{i \alpha(\vec{p})}a(\vec{p})$ does not change the commutation relations. *The momentum-dependence is my own realization and not taken from a book, but seeing as the commutation relations remain unchanged under the redefinition, it seems clear that they cannot constrain the phase of $a(\vec{p})$. In addition, redefining $\phi(\vec{x},0) \to \phi(\vec{x},1)$ is a momentum-dependent change in the $a$ operators: $a(\vec{p}) \to e^{i\sqrt{\vec{p}^2+m^2} t} a(\vec{p})$ Answer: First off, we rarely want to solve explicitly for the field operators as a function of time in QFT. We're usually far more interested in things like correlation functions or $S$-matrix elements. Second off, this is usually not the way we think about Heisenberg picture. In Heisenberg picture, $$\langle A \rangle(t) = \langle \psi | A(t) | \psi \rangle.$$ So to evaluate an expectation value $\langle A \rangle(t)$, we need two pieces of information: the initial state $|\psi \rangle$ and the initial operator $A(0)$. 
Usually, we think of $|\psi \rangle$ as specifying the initial condition, and $A(0)$ as specifying what operator we're talking about. So there is no freedom in $A(0)$, if you change it you're talking about a physically different operator. As a simple example, you can show that in Heisenberg picture, the operators $x(t)$ and $p(t)$ rotate into each other, because classically the simple harmonic oscillator rotates in phase space. If you decided to define a new position operator $x'(t) = x(t + \pi /2)$ it would satisfy all the equations, but it won't mean the position anymore, it would in fact be measuring the momentum. As another example, you could define a new field operator $\phi'(\mathbf{x}, t) = \phi(\mathbf{x}, t+1)$. It will satisfy all the required relations, but it just won't mean the same thing: rather than give the field value at time $t$ it will give what the field value will be at time $t+1$.
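The oscillator example in the answer can be made explicit (a sketch in units with $m=\omega=\hbar=1$; Heisenberg's equation $\dot A = i[H,A]$ with $H=\tfrac12(p^2+x^2)$):

```latex
\dot x(t) = i[H, x(t)] = p(t), \qquad \dot p(t) = i[H, p(t)] = -x(t)
\;\;\Longrightarrow\;\;
x(t) = x(0)\cos t + p(0)\sin t, \quad p(t) = p(0)\cos t - x(0)\sin t,
```

so the redefined operator $x'(t)\equiv x(t+\pi/2) = -x(0)\sin t + p(0)\cos t = p(t)$: it satisfies all the same equations, but it is exactly the momentum operator, as the answer says.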
{ "domain": "physics.stackexchange", "id": 53020, "tags": "quantum-field-theory, operators" }
Minimum four-momentum for particles colliding
Question: This comes from Griffiths' Introduction to Elementary Particles. After introducing the reader to four-momentum, he provides us with this example problem: The Bevatron at Berkeley was built with the idea of producing antiprotons by the reaction $p + p \to p + p + p + \bar{p}$. That is, a high-energy proton strikes a proton at rest, creating (in addition to the original particles) a proton-antiproton pair. Question: What is the threshold energy for this reaction (i.e. the minimum energy of the incident proton)? To solve this, he first starts by considering what happens before the collision from the view of the proton at rest. This gives us the total four-momentum vector of $$p_{\text{tot}}^\mu = (E + m_p, \mathbf{p})$$ I am using natural units. The $E$ corresponds to the total energy of the high-energy proton, $\mathbf{p}$ is its momentum, and $m_p$ is the mass of the resting proton. Next, he considers the total four-momentum after the interaction from the centre of mass perspective and says that the four-momentum is $$p_{\text{tot}'}^\mu = (4m_p, \mathbf{0})$$ Obviously, he sums over the four particles' four-vectors, but I'm not sure why he could conclude that every proton had energy $m_p$. Should it not have energy larger than $m_p$ since it is travelling? Answer: $$p_{\text{tot}'}^\mu = (4m_p, \mathbf{0})$$ ... Should it not have energy larger than $m_p$ since it is travelling? ANS: Adding to the earlier comment, these are the threshold-case components in the center of momentum frame, not the lab frame. By the way, to supplement Griffiths' algebraic approach, you may find it enlightening to draw an energy-momentum diagram of the process in the lab frame: What is generally a polygon (an irregular hexagon with Minkowski-congruent sides, two sides for the incoming particles and the remaining 4 for the outgoing particles) will be a Minkowski-isosceles triangle. (Think about why that would be.)
Use the Minkowski Law of [hyperbolic]-Cosines to find the hyperbolic cosine (the time-dilation factor of the incident proton in the lab frame). update: Here's a visualization of "proton-antiproton pair" production in energy-momentum space with Desmos ( https://www.desmos.com/calculator/kckm1cxuom ) The orange hyperbola is the mass-shell of the COM, centered at the origin. The obscured violet hyperbola is the mass-shell of the COM for the threshold case. The green hyperbola is the unit-mass mass-shell of the incident proton (centered at the tip of the 4-momentum of the target proton at rest in the LAB). The threshold case is located in energy-momentum space as the intersection of the green hyperbola and the violet hyperbola. LAB frame COM-threshold frame (This reveals that this problem is, geometrically speaking, essentially the clock effect [as seen in the twin paradox].) The outgoing 4-momenta are chained by a sequence of unit-mass mass-shells (not shown by default)---a linkage. You can tune these 4-momenta to move off of the threshold condition (revealing the distinction between the orange and violet mass-shells). beyond-threshold in the COM-threshold frame
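For comparison with the geometric construction, here is the standard algebraic route (a sketch: equate the invariant $p_{\text{tot}}^\mu p_{\text{tot}\,\mu}$ evaluated in the lab frame before the collision with its value in the COM frame at threshold):

```latex
(E + m_p)^2 - |\mathbf{p}|^2 = (4 m_p)^2
\;\;\xrightarrow{\;E^2 - |\mathbf{p}|^2 \,=\, m_p^2\;}\;\;
2 E m_p + 2 m_p^2 = 16 m_p^2
\;\;\Longrightarrow\;\;
E = 7 m_p .
```

So the threshold kinetic energy of the incident proton is $T = E - m_p = 6m_p \approx 5.6\ \text{GeV}$, which matches the energy scale the Bevatron was designed for.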
{ "domain": "physics.stackexchange", "id": 92506, "tags": "special-relativity, energy, momentum, collision" }
Find dogs who stopped being good boys without ever being good boys - Was HAVING the wrong approach?
Question: The problem I have solved is as follows: Consider a table of dogs, DOGGIES, that records on each row the ID of a doggy, one skill that they have, when they started having it, when they stopped having it (assume a useful placeholder if they never stop), and an extra bool column called IS_NO_LONGER_GOOD_BOY. This bool column is magically a property of the skill and not of the dog or the history. Find every doggy who has a skill with a 0 value for IS_NO_LONGER_GOOD_BOY without, at some point in time, having that same skill with a 1 value for IS_NO_LONGER_GOOD_BOY. For each doggy found, return the IS_NO_LONGER_GOOD_BOY = 0 row(s) that caused them to appear. Doggies can have multiple skills at any given time, but doggies with no skills aren't in the data. There is no need to worry about NULLs, malformed data, or other such traps. This was my solution. It works, but the code comments explain my dislike of it.

SELECT [ID], [SKILL_NAME], [START_DATE], [STOP_DATE], [IS_NO_LONGER_GOOD_BOY]
FROM [DOGGIES] AS [DOGS_OUT] --Not sure if the alias is needed, but it adds clarity?
WHERE EXISTS
(
    SELECT 1 FROM
    --Really don't like this bit.
    --This SELECT 1 part only exists so I can write a WHERE after the indented query's HAVING.
    --You'd really think there'd be a way to put this inside the indented query.
    --But the innermost query needs the HAVING part, so I think it's inescapable?
    (
        SELECT [ID], [SKILL_NAME]
        FROM [DOGGIES]
        GROUP BY [ID], [SKILL_NAME]
        HAVING COUNT(CASE WHEN [IS_NO_LONGER_GOOD_BOY] = 0 THEN 1 END) > 0 --SQL has no COUNTIF
        AND COUNT(CASE WHEN [IS_NO_LONGER_GOOD_BOY] = 1 THEN 1 END) = 0
    ) AS [DOG_SKILL_PAIRS]
    WHERE [DOG_SKILL_PAIRS].[ID] = [DOGS_OUT].[ID]
    AND [DOG_SKILL_PAIRS].[SKILL_NAME] = [DOGS_OUT].[SKILL_NAME]
)

My question is simply if what I've done can be improved upon.
Some suspicions of mine are as follows: I get the feeling that all of this code should only need at most one level of nesting, rather than having a correlated subquery inside the outermost query. Working from the innermost layer outwards, I don't like how I've gone from a HAVING clause without a WHERE clause to a WHERE clause that filters the result of the HAVING clause. You typically go from WHERE to HAVING, not the other way around. Can window functions save the day? COUNT(CASE WHEN [IS_NO_LONGER_GOOD_BOY] = 0 THEN 1 END) OVER (PARTITION BY [ID], [SKILL_NAME]) AS [FALSE_COUNT] sounds like it ought to be a good idea, but I couldn't make it fit. I'm on a 2016 version of SQL Server. Answer: First, some commentary on the question. @200_success is right to point out that IS_NO_LONGER_GOOD_BOY does not make sense as a property on the skill, but that's hardly the only problem: Don't store booleans in negative form! Store is_good_boy instead. Otherwise, double negatives produce headaches when trying to understand logic. Re. assume a useful placeholder if they never stop: a "useful placeholder" is null. So a well-designed schema would ignore "There is no need to worry about NULLs"; this is not a "trap" but a "feature" and a "necessary mechanism for good schema design". doggies should be a separate table from doggy_skills, which should be a separate table from skills. doggy_skills being a join table, it will have foreign key references to both of the other tables. So, making a judgement call, I will assume a recommendation of "what should be done" and not only "what should be done to the letter of the question". I'm sure you can fill out the latter based on the other points I suggest, if you so need. You do not need a group by nor a count nor a having, but you do need a correlated subquery. You only need one level of nesting. In terms of style, delete all of your [] escaping brackets; and (more as a matter of personal taste) there is no reason to SHOUT. 
Suggested create table doggies( id int primary key ); create table skills( id int primary key, name varchar(100) not null ); create table doggie_skills( doggie int not null references doggies(id), skill int not null references skills(id), start datetime not null default current_timestamp, stop datetime, is_good_boy bit not null default 1, primary key(doggie, skill, start), check(start < stop) ); insert into doggies(id) values (1), (2), (3), (4), (5), (6); insert into skills(id, name) values (10, 'roll over'), (11, 'brain surgery'); insert into doggie_skills(doggie, skill, start, stop, is_good_boy) values --- included --- -- Only one skill, current, good boy (1, 10, '2010-01-01T00:00:00', null, 1), -- One current skill with a history, always a good boy (2, 10, '2010-01-01T00:00:00', '2015-01-01T00:00:00', 1), (2, 10, '2016-01-01T00:00:00', null, 1), --- excluded --- -- no skills (3) -- no current skills (4, 11, '2010-01-01T00:00:00', '2015-01-01T00:00:00', 1), -- one current skill but not a good boy (5, 11, '2010-01-01T00:00:00', null, 0), -- one current skill, good boy now, bad boy before (6, 10, '2010-01-01T00:00:00', '2015-01-01T00:00:00', 0), (6, 10, '2016-01-01T00:00:00', null, 1); -- Find every doggie who has a skill with a 1 value for is_good_boy -- ^^^ implying stop is null -- without, at some point in time, having that same skill with a 0 value for is_good_boy select * from doggie_skills as current_skill where current_skill.is_good_boy = 1 and current_skill.stop is null -- current and not exists( select 1 from doggie_skills as old_skill where old_skill.skill = current_skill.skill and old_skill.doggie = current_skill.doggie and old_skill.is_good_boy = 0 ); Also see fiddle.
{ "domain": "codereview.stackexchange", "id": 43590, "tags": "sql, sql-server, t-sql" }
Linear scale transformations for bosons and fermions
Question: In Coleman's book Aspects of Symmetry, p. 70, linear scale transformations or dilations are defined as $$ x \rightarrow e^\alpha x $$ with $\alpha$ being a real number. The fields change as $$ \phi(x) \rightarrow e^{\alpha d} \phi (e^{\alpha} x) $$ which yields an infinitesimal transformation $$ \delta \mathbf{\phi} = (d + x^\lambda \partial_\lambda ) \mathbf \phi $$ where $d$ is a matrix. (It is clearer as $\delta \phi_i = (d_{ij} + x^\lambda \partial_\lambda \delta_{ij} ) \phi_j$, where $\delta_{ij}$ is the Kronecker delta.) Now, there is a statement I have not been able to prove: For a large class of theories (including all renormalizable field theories) these transformations are symmetries, if all non-dimensionless coupling constants (including the masses) are set equal to zero, and if $d$ is chosen to be a matrix that multiplies all Bose fields by one and all Fermi fields by $\frac{3}{2}$. Based on this quote, I have tried using the lagrangian $\mathcal L = \partial_\mu \phi_i \partial^\mu \phi_i$, which is the Klein-Gordon lagrangian with $m=0$. Computing $\mathcal L[\phi + \delta \phi] - \mathcal L[\phi]$ gives the variation of the lagrangian, which should be 0 up to a total derivative when $d_{ij} = 1 \cdot \delta_{ij}$. However I find terms with extra derivatives that do not cancel. I would like to know how to find the 1 and 3/2 for bosons and fermions. I guess this is a general result and there is no need to pick a specific lagrangian. Answer: For a massless scalar field, $$I[\phi] :=\int g^{\mu\nu}\frac{\partial \phi(x)}{\partial x^\mu}\frac{\partial \phi(x)}{\partial x^\nu} dx^0dx^1 dx^2dx^3$$ where $g = \mathrm{diag}(-1,1,1,1)$. 
Replacing $\phi \to \phi_\lambda$, where $$\phi_\lambda(x) := \lambda \phi(\lambda x) \quad \mbox{for $\lambda >0$}$$ (evidently $\lambda = e^\alpha$), we have $$I[\phi_\lambda]=\int g^{\mu\nu}\frac{\partial \phi(\lambda x)}{\partial x^\mu}\frac{\partial \phi(\lambda x)}{\partial x^\nu} \lambda^2 dx^0dx^1dx^2dx^3 \:.$$ That is $$I[\phi_\lambda]=\int g^{\mu\nu}\frac{\partial \phi(\lambda x)}{\partial \lambda x^\mu}\frac{\partial \phi(\lambda x)}{\partial \lambda x^\nu} \lambda^4 dx^0dx^1dx^2 dx^3 = \int g^{\mu\nu}\frac{\partial \phi(\lambda x)}{\partial \lambda x^\mu}\frac{\partial \phi(\lambda x)}{\partial \lambda x^\nu} d \lambda x^0 d \lambda x^1 d\lambda x^2 d \lambda x^3 = \int g^{\mu\nu}\frac{\partial \phi(y)}{\partial y^\mu}\frac{\partial \phi(y)}{\partial y^\nu} dy^0dy^1dy^2dy^3 = I[\phi]\:.$$ Thus $$I[\phi_\lambda]= I[\phi]\:.$$ The difference with spinors is that only one derivative enters the action, and this explains the different power. I leave you the simple computations.
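A sketch of the fermion computation the answer leaves to the reader (my own reconstruction, using the same change-of-variables trick; it is not part of the original answer). For the massless Dirac action $$I[\psi] := \int \bar\psi(x)\, i\gamma^\mu \frac{\partial \psi(x)}{\partial x^\mu}\, dx^0dx^1dx^2dx^3$$ set $\psi_\lambda(x) := \lambda^{3/2}\psi(\lambda x)$. The bilinear $\bar\psi\,\psi$ contributes $\lambda^3$, the single derivative contributes one more power of $\lambda$, and together they cancel the Jacobian of the measure: $$I[\psi_\lambda] = \int \lambda^{3}\,\bar\psi(\lambda x)\, i\gamma^\mu \frac{\partial \psi(\lambda x)}{\partial x^\mu}\, dx^0dx^1dx^2dx^3 = \int \lambda^{4}\,\bar\psi(\lambda x)\, i\gamma^\mu \frac{\partial \psi(\lambda x)}{\partial (\lambda x^\mu)}\, dx^0dx^1dx^2dx^3 = \int \bar\psi(y)\, i\gamma^\mu \frac{\partial \psi(y)}{\partial y^\mu}\, dy^0dy^1dy^2dy^3 = I[\psi]\:.$$ With two derivatives (bosons) the field must carry $\lambda^1$; with one derivative (fermions) it must carry $\lambda^{3/2}$, which is exactly Coleman's $d=1$ and $d=3/2$.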
{ "domain": "physics.stackexchange", "id": 36789, "tags": "homework-and-exercises, field-theory, dimensional-analysis, scale-invariance" }
Finding a gameObject on a screen
Question: I'm working on a warning system for my game. I want to display an arrow on the screen where the object is going to be coming from. Here is an image of what I have in mind: The object will spawn from outside the screen and start making its way to the screen. The code that I have created works, but it looks terrible. How can I fix this to make it more readable? private GameObject warningObject; private float warningBufferFromScreenEdge = 1f; public void ShowWarningOnScreen() { if (warning == false) { if (!NearOffScreenWarning.nearOffScreenWarningSingleton.CheckIfInScreen(gameObject)) { warningObject = ObjectPoolManager.ObjectPoolManagerSingleton.AsteroidWarnings.GetPooledObject(); if (warningObject != null) { warningObject.SetActive(true); warningObject.transform.position = NearOffScreenWarning.nearOffScreenWarningSingleton.left; warning = true; } } } else { if (warningObject.activeInHierarchy == true) { FindAsteroidWarningSide(); switch (warningSides) { case WarningSides.TopLeft: warningObject.transform.position = new Vector2(Boundary.boundarySingleton.XMin + warningBufferFromScreenEdge, Boundary.boundarySingleton.YMax + warningBufferFromScreenEdge); break; case WarningSides.Top: warningObject.transform.position = new Vector2(gameObject.transform.position.x, Boundary.boundarySingleton.YMax - warningBufferFromScreenEdge); break; case WarningSides.TopRight: warningObject.transform.position = new Vector2(Boundary.boundarySingleton.XMax - warningBufferFromScreenEdge, Boundary.boundarySingleton.YMax - warningBufferFromScreenEdge); break; case WarningSides.Right: warningObject.transform.position = new Vector2(Boundary.boundarySingleton.XMin + warningBufferFromScreenEdge, gameObject.transform.position.y); break; case WarningSides.BottomRight: warningObject.transform.position = new Vector2(Boundary.boundarySingleton.XMax - warningBufferFromScreenEdge, Boundary.boundarySingleton.YMin + warningBufferFromScreenEdge); break; case WarningSides.Bottom: 
warningObject.transform.position = new Vector2(gameObject.transform.position.x, Boundary.boundarySingleton.YMin + warningBufferFromScreenEdge); break; case WarningSides.BottomLeft: warningObject.transform.position = new Vector2(Boundary.boundarySingleton.XMin + warningBufferFromScreenEdge, Boundary.boundarySingleton.YMin + warningBufferFromScreenEdge); break; case WarningSides.Left: warningObject.transform.position = new Vector2(Boundary.boundarySingleton.XMin + warningBufferFromScreenEdge, gameObject.transform.position.y); break; default: break; } if (NearOffScreenWarning.nearOffScreenWarningSingleton.CheckIfInScreen(gameObject)) { warningObject.SetActive(false); warning = false; } } } } enum WarningSides { TopLeft, Top, TopRight, Right, BottomRight, Bottom, BottomLeft, Left, } private WarningSides warningSides; private void FindAsteroidWarningSide() { if (transform.position.y > Boundary.boundarySingleton.YMax) { if (transform.position.x < Boundary.boundarySingleton.XMin) { warningSides = WarningSides.TopLeft; } else if (transform.position.x > Boundary.boundarySingleton.XMax) { warningSides = WarningSides.TopRight; } else { warningSides = WarningSides.Top; } } else if (transform.position.x > Boundary.boundarySingleton.XMax) { if (transform.position.y > Boundary.boundarySingleton.YMin) { warningSides = WarningSides.Right; } else if (transform.position.y < Boundary.boundarySingleton.YMin) { warningSides = WarningSides.BottomRight; } } else if (transform.position.y < Boundary.boundarySingleton.YMin) { if (transform.position.x > Boundary.boundarySingleton.XMin) { warningSides = WarningSides.Bottom; } else if (transform.position.x < Boundary.boundarySingleton.XMin) { warningSides = WarningSides.BottomLeft; } } else { warningSides = WarningSides.Left; } } Answer: I suspect your TopLeft case has an error in it (adding the buffer to the YMax instead of subtracting it). 
All the logic going into creating the Vector2 could use a revisit: why should we have to calculate these X and Y offsets for every warning we want to create? It would be much simpler to have a RectangleF (from System.Drawing) with these offsets already calculated for us: RectangleF warningBoundary = new RectangleF ( Boundary.boundarySingleton.XMin + warningBufferFromScreenEdge, Boundary.boundarySingleton.YMax - warningBufferFromScreenEdge, // Invert Y-Axis Boundary.boundarySingleton.XMax - Boundary.boundarySingleton.XMin - warningBufferFromScreenEdge, Boundary.boundarySingleton.YMin - Boundary.boundarySingleton.YMax + warningBufferFromScreenEdge // Invert Y-Axis ); ... case WarningSides.BottomRight: warningObject.transform.position = new Vector2(warningBoundary.Right, warningBoundary.Bottom); break; Unfortunately the RectangleF class treats (0,0) as the Top-left corner, whereas your boundary appears to treat it as the Bottom-left corner, so the creation of the rectangle requires the Y-axis to be inverted. I think you fell into a trap by using an enum for your warning sides, however. By pairing the X and Y coordinates as enumerations, you exponentiate the number of states you are required to check (3*3). Ultimately, you're using FindAsteroidWarningSide to calculate your Vector2, so why not just have that method return the Vector2 directly? 
By treating the X and Y coordinates separately, the logic also simplifies: private Vector2 FindAsteroidWarningSide() { float x = gameObject.transform.position.x; float y = gameObject.transform.position.y; RectangleF warningBoundary = new RectangleF ( Boundary.boundarySingleton.XMin + warningBufferFromScreenEdge, Boundary.boundarySingleton.YMax - warningBufferFromScreenEdge, // Invert Y-Axis Boundary.boundarySingleton.XMax - Boundary.boundarySingleton.XMin - warningBufferFromScreenEdge, Boundary.boundarySingleton.YMin - Boundary.boundarySingleton.YMax + warningBufferFromScreenEdge // Invert Y-Axis ); if (transform.position.x < Boundary.boundarySingleton.XMin) { x = warningBoundary.Left; } else if (transform.position.x > Boundary.boundarySingleton.XMax) { x = warningBoundary.Right; } if (transform.position.y < Boundary.boundarySingleton.YMin) { y = warningBoundary.Bottom; } else if (transform.position.y > Boundary.boundarySingleton.YMax) { y = warningBoundary.Top; } return new Vector2(x, y); } While I'm not particularly fond of nested ternaries, if you were going for brevity you could shorten the above considerably: private Vector2 FindAsteroidWarningSide() { RectangleF warningBoundary = new RectangleF ( Boundary.boundarySingleton.XMin + warningBufferFromScreenEdge, Boundary.boundarySingleton.YMax - warningBufferFromScreenEdge, // Invert Y-Axis Boundary.boundarySingleton.XMax - Boundary.boundarySingleton.XMin - warningBufferFromScreenEdge, Boundary.boundarySingleton.YMin - Boundary.boundarySingleton.YMax + warningBufferFromScreenEdge // Invert Y-Axis ); float x = (transform.position.x < Boundary.boundarySingleton.XMin) ? warningBoundary.Left : (transform.position.x > Boundary.boundarySingleton.XMax) ? warningBoundary.Right : gameObject.transform.position.x; float y = (transform.position.y < Boundary.boundarySingleton.YMin) ? warningBoundary.Bottom : (transform.position.y > Boundary.boundarySingleton.YMax) ? 
warningBoundary.Top : gameObject.transform.position.y; return new Vector2(x, y); } Now we can get rid of that switch statement and enum altogether and just call the method directly: if (warningObject.activeInHierarchy == true) { warningObject.transform.position = FindAsteroidWarningSide(); if (NearOffScreenWarning.nearOffScreenWarningSingleton.CheckIfInScreen(gameObject)) { warningObject.SetActive(false); warning = false; } } The final code: private GameObject warningObject; private float warningBufferFromScreenEdge = 1f; public void ShowWarningOnScreen() { if (warning == false) { if (!NearOffScreenWarning.nearOffScreenWarningSingleton.CheckIfInScreen(gameObject)) { warningObject = ObjectPoolManager.ObjectPoolManagerSingleton.AsteroidWarnings.GetPooledObject(); if (warningObject != null) { warningObject.SetActive(true); warningObject.transform.position = NearOffScreenWarning.nearOffScreenWarningSingleton.left; warning = true; } } } else { if (warningObject.activeInHierarchy == true) { warningObject.transform.position = FindAsteroidWarningSide(); if (NearOffScreenWarning.nearOffScreenWarningSingleton.CheckIfInScreen(gameObject)) { warningObject.SetActive(false); warning = false; } } } } private Vector2 FindAsteroidWarningSide() { float x = gameObject.transform.position.x; float y = gameObject.transform.position.y; RectangleF warningBoundary = new RectangleF ( Boundary.boundarySingleton.XMin + warningBufferFromScreenEdge, Boundary.boundarySingleton.YMax - warningBufferFromScreenEdge, // Invert Y-Axis Boundary.boundarySingleton.XMax - Boundary.boundarySingleton.XMin - warningBufferFromScreenEdge, Boundary.boundarySingleton.YMin - Boundary.boundarySingleton.YMax + warningBufferFromScreenEdge // Invert Y-Axis ); if (transform.position.x < Boundary.boundarySingleton.XMin) { x = warningBoundary.Left; } else if (transform.position.x > Boundary.boundarySingleton.XMax) { x = warningBoundary.Right; } if (transform.position.y < Boundary.boundarySingleton.YMin) { y = 
warningBoundary.Bottom; } else if (transform.position.y > Boundary.boundarySingleton.YMax) { y = warningBoundary.Top; } return new Vector2(x, y); }
{ "domain": "codereview.stackexchange", "id": 14819, "tags": "c#, unity3d" }
Moisture Holding Capacity of Air table or function?
Question: It is strange, but I cannot find a good table of Moisture Holding Capacity of Air in g/kg or lb/lb like the chart here: https://www.engineeringtoolbox.com/moisture-holding-capacity-air-d_281.html I am trying to make calculations in an MS Excel Visual Basic module; I have the absolute humidity and need to get the relative humidity for a given temperature. Answer: If A is the absolute humidity (in mass of water vapor per mass of dry air), the moles of water vapor per mole of dry air is $$m=\frac{29}{18}A$$The mole fraction of water vapor in the air is $$x=\frac{m}{(m+1)}=\frac{29A}{29A+18}$$If $p_{atm}$ is the current atmospheric pressure, then the partial pressure of water vapor in the air is $$p=xp_{atm}=\frac{29A}{29A+18}p_{atm}$$The relative humidity is the partial pressure divided by the equilibrium vapor pressure of water vapor at the temperature T ($p_{vap,eq}$) times 100 %:$$RH=100\left(\frac{29A}{29A+18}\right)\frac{p_{atm}}{p_{vap,eq}}$$
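Since the answer's formula still needs a value for $p_{vap,eq}(T)$, here is a minimal sketch of the whole calculation. The Magnus approximation used for the saturation pressure is my assumption, not something from the answer; the asker wanted VBA, but the arithmetic translates line for line.

```python
import math

def saturation_vapor_pressure_pa(t_celsius):
    # Magnus approximation for water's equilibrium vapor pressure (an
    # assumed empirical fit, good to about 1% near room temperature), Pa.
    return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def relative_humidity_percent(abs_humidity, t_celsius, p_atm_pa=101325.0):
    # abs_humidity A is in kg of water vapor per kg of dry air.
    x = 29.0 * abs_humidity / (29.0 * abs_humidity + 18.0)  # mole fraction
    p_partial = x * p_atm_pa            # partial pressure of water vapor
    return 100.0 * p_partial / saturation_vapor_pressure_pa(t_celsius)

# Example: 10 g of water vapor per kg of dry air at 23 C and 1 atm
rh = relative_humidity_percent(0.010, 23.0)
```

For the example values this gives a relative humidity of roughly 57%.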
{ "domain": "physics.stackexchange", "id": 58134, "tags": "thermodynamics" }
What spider is this? Is it dangerous to humans?
Question: What is the species? Is it dangerous to humans? I chanced upon this beautiful one near our water well. It stood at a height where an adult could accidentally walk through the web and the spider would land on one's face. As a test, I dropped a piece of leaf onto its web, and the spider jumped to action, checking what it was, and it took the leaf out of its web. Size: Web, almost circular, 50 cm diameter. Spider, about 8 cm. Location: Malabar region, Kerala state, south of India Answer: This is most likely a spider from the genus Argiope, which has a few members native to India (see here for a list). I think this is most likely Argiope pulchella; see the image from Wikipedia: Wikipedia also says that these spiders hunt insects, but are not dangerous for humans.
{ "domain": "biology.stackexchange", "id": 5851, "tags": "species-identification" }
How to use data from a topic inside action server?
Question: Hi, I'm using an action server following the tutorial. In my application, I want to implement the action as: run the motor and stop when the torque is greater than some value or the limit is reached. So I need to check the torque and limit switch frequently (published as two topics /torque, /limit) while the action is executing. I checked the tutorial SimpleActionServer(GoalCallbackMethod), but that tutorial only has one topic to subscribe to, and that's why it can use the topic callback function to implement the function of the action. A more general question would be: how can I use topic data inside an action server? Is there any example I could check? Thanks for any hint. Originally posted by Fei Liu on ROS Answers with karma: 47 on 2013-06-22 Post score: 1 Answer: The goal callback method is probably still the skeleton of what you want. I'd make the following changes. Instead of spin()-ing, use a spinOnce() inside of a loop. Instead of doing the computation inside the topic callbacks, just make each callback store the message/data you get. So you'd have class members of, say, current_torque and limit_reached. These would get updated every time spinOnce is called and there's a message available. Then, do your computation in the main loop. You can use both current_torque and limit_reached since they're class members. Be careful about initialization, though. EDIT with additional questions: Q1. Do you mean the spin() in the main function? A1: Yes. Have a look at this page. Q2. You mention the main loop; is the main loop inside the main function or the callback function in the action server class? A2: I meant inside the main function. That really depends more on your needs, though. It sounded like you wanted your action server to start executing a goal, checking two quantities as it goes along, perhaps aborting, succeeding, or continuing execution based on their values. 
In that case, I'd have a callback for each topic you're interested in and callbacks for goal messages and cancel messages. Like I said above, the callbacks for topics just store the value of the messages. Similarly, the goal callback just stores the goal. Then, in the main function, you check to see if you have a goal or not, what type it is, and whether you should abort, succeed or continue based on the inputs you saved when they came in before the current loop iteration. If you only need to check your values once when you get a goal or a cancel request, then it should be fine to put the logic in the callback. The values should be there if something is publishing them. Q3. Will the action server callback function block the topic callback? A3: Yes, callbacks will block. You need to spin to get messages on topics you're subscribed to. That's why I'd make simple callbacks and a more complicated main function. But again, that depends on your goals. Originally posted by thebyohazard with karma: 3562 on 2013-06-24 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Fei Liu on 2013-06-24: Thanks for your answer. I have a few questions. 1. Do you mean the spin() in the main function? 2. You mention about the main loop, is the main loop inside the main function or the callback function in the action server class? 3. Will the action server callback function block the topic callback?
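The "callbacks only store data, the main loop decides" pattern from the answer can be sketched without any ROS dependency. The class name, the torque threshold, and the return values below are hypothetical stand-ins of my own; in a real node each *_cb would be registered with the subscriber API and step() would run once per spin iteration of the main loop.

```python
class TorqueGuardedAction:
    """Sketch of the pattern: callbacks store the latest message,
    the main loop (step) does all the decision-making."""

    def __init__(self, torque_limit=5.0):
        self.torque_limit = torque_limit
        self.current_torque = None   # updated by the /torque callback
        self.limit_reached = False   # updated by the /limit callback
        self.active_goal = None      # updated by the goal callback

    def torque_cb(self, msg):
        self.current_torque = msg    # store only; no computation here

    def limit_cb(self, msg):
        self.limit_reached = msg     # store only

    def goal_cb(self, goal):
        self.active_goal = goal      # store only

    def step(self):
        """One iteration of the main loop, run after spinning once."""
        if self.active_goal is None:
            return "idle"
        if self.current_torque is None:
            return "waiting"         # "be careful about initialization"
        if self.limit_reached or self.current_torque > self.torque_limit:
            self.active_goal = None  # goal finished: stop the motor
            return "stop_motor"
        return "run_motor"
```

The point of the structure is that step() always sees a consistent snapshot of both quantities, no matter which topic published last.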
{ "domain": "robotics.stackexchange", "id": 14662, "tags": "ros, action-server, action, rostopic" }
Is there any difference between tidal locking and synchronous rotation?
Question: I'm trying to understand more about orbital mechanics, and I'm encountering a few terms which I'm not sure if they are exactly the same. The two terms are Tidal Locking and Synchronous Rotation. To my understanding, Tidal Locking is when the period of one body orbiting another matches the period of the body rotating around its own axis (like the earth and the moon); however, I'm not sure if Synchronous Rotation means the exact same thing. Answer: Synchronous rotation is when the orbit of a body has the same period as its spin. If the inclination and obliquity are the same, then the same face of the body will always point towards the barycenter. Here is an animation of Pluto and Charon (credit: Stephanie Hoover, Wikimedia Commons). Tidal forces impart torque to orbiting body rotations. An orbiting body is in tidal lock if this torque lies in a local minimum, keeping the existing spin rate stable. That is, the tidal torque opposes both increasing and decreasing spin rates, and other rotational torques aren't large enough to break into another spin/orbit ratio. Usually, tidal lock occurs in synchronous rotation, like our Earth and Moon (almost), or Pluto and Charon. However, Mercury's spin/orbit ratio is tidally locked and stable at 3:2. Also, forces other than tidal forces can cause torques on an orbiting body. Examples are atmospheric heating torques and torques due to asymmetric mass distributions.
{ "domain": "astronomy.stackexchange", "id": 6027, "tags": "rotation, tidal-forces" }
At what pressure will water boil at room temperature and why?
Question: Is there a specific pressure that is needed to boil water at room temperature? If there is, what is it? Why does water boil at a low pressure at all? Answer: Since boiling, by definition, occurs when a liquid's vapor pressure reaches ambient pressure, your question is identical to asking what the vapor pressure of water is at room temperature. Here's an example of an online table: At 23°C, for example, water would boil at a pressure of about 21.1 torr, or about a fortieth of atmospheric pressure.
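As a numeric check on the answer, the sketch below evaluates water's vapor pressure at room temperature. The Magnus fit is an assumption on my part, not something taken from the answer or its linked table.

```python
import math

def water_vapor_pressure_torr(t_celsius):
    # Magnus approximation (assumed empirical fit) for the equilibrium
    # vapor pressure of water, converted from pascals to torr
    # (1 torr = 133.322 Pa).
    p_pa = 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))
    return p_pa / 133.322

p = water_vapor_pressure_torr(23.0)   # about 21 torr at 23 C
fraction_of_atm = p / 760.0           # roughly 1/36 of an atmosphere
```

So pumping the ambient pressure down to about 21 torr is enough to boil 23°C water, in line with the table value quoted in the answer.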
{ "domain": "physics.stackexchange", "id": 48688, "tags": "pressure, temperature, water" }
Do determinacy of one-step evaluation and uniqueness of normal forms apply to all (or most) languages in TAPL?
Question: In Types and Programming Languages by Pierce, when talking about untyped arithmetic expressions in Chapter 3, there are two theorems: $-→$ is the single-step evaluation relation: 3.5.4 Theorem [Determinacy of one-step evaluation]: If $t -→ t'$ and $t -→ t''$, then $t' = t''$. $-→ ∗$ is the multi-step evaluation relation: 3.5.11 Theorem [Uniqueness of normal forms]: If $t -→ ∗ u$ and $t -→ ∗ u'$, where $u$ and $u'$ are both normal forms, then $u = u'$. Do the two theorems also apply to all (or most of) the other languages/systems in the book, or only to the untyped arithmetic expressions? From my limited experience in a few programming languages, it seems that an expression is always evaluated in exactly one deterministic process, so I wonder if both theorems apply to all the languages. Thanks. Answer: Uniqueness of normal forms definitely applies. This is still a theorem you have to prove, but if it doesn't hold, what you have isn't really a normal form. The whole point of normal forms is that there's a unique form for each value; that's what makes it "normal". For determinacy, I think it holds for most languages in the book. Usually when you're defining a small-step semantics, you try to do it in such a way that it's deterministic. However, there are notable exceptions to this. First, if you have a general rewrite system, like the untyped lambda calculus, then it's totally valid to have non-deterministic reductions. This is what lets us talk about Church-Rosser and the diamond property. Secondly, if you have a language with any sort of concurrency, then you probably want to have non-deterministic rules, to model the different possible orders of evaluation, so that you can show your language is well behaved regardless of which interleaving is taken. That said, I want to be clear that these things do not come for free. They hold for the languages in the book because Pierce has been very careful to ensure they hold. 
It is easy to accidentally define a language where these do not hold.
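A concrete way to see why determinacy holds for the book's arithmetic language is to write one-step evaluation as a function. The encoding below (terms as nested tuples, booleans-and-if fragment only) is my own sketch, not code from TAPL: a function can return at most one result, so $t -→ t'$ is deterministic by construction, and uniqueness of normal forms then follows by iterating it.

```python
# Single-step evaluation for the boolean fragment of TAPL's arithmetic
# language. Terms are "true", "false", or ("if", t1, t2, t3).
def step(t):
    """Return t' such that t -> t', or None if t is a normal form."""
    if isinstance(t, tuple) and t[0] == "if":
        _, t1, t2, t3 = t
        if t1 == "true":
            return t2                      # E-IfTrue
        if t1 == "false":
            return t3                      # E-IfFalse
        t1p = step(t1)
        if t1p is not None:
            return ("if", t1p, t2, t3)     # E-If (reduce the guard)
    return None                            # values are normal forms

def normalize(t):
    """Multi-step evaluation t ->* u, where u is a normal form."""
    while (tp := step(t)) is not None:
        t = tp
    return t
```

With the rules arranged this way there is exactly one applicable rule (or none) for every term, which is precisely what Theorem 3.5.4 asserts.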
{ "domain": "cs.stackexchange", "id": 14576, "tags": "programming-languages, type-theory, types-and-programming-languages" }
What causes a volcano?
Question: Inspired by the question, How does one measure what causes earthquakes?, I'd like to know what causes another great geological phenomenon - volcanoes? Answer: Most volcanism occurs at three major tectonic features: subduction zones, rifting centers (like the East Pacific Rise), and under hotspots. I will start with the most contentious, hotspots. The clearest example of hotspot volcanism is the island chain Hawaii. Underneath the island, there is a chaotic discontinuity caused by the boundary conditions at the core-mantle boundary. This causes highly viscous and hot mantle to upwell rapidly and push through the lithosphere, forming a volcano. The origin of hotspots is not well known, and it is even debated whether they exist (they do, imo). Some papers worth looking at are by Paul Hall (BU) and Christopher Kincaid (URI), and maybe Dave Stegman (Scripps), who publish on melting caused by mantle plumes. As a consequence of a subducting slab, at a distance in front of the trench, arc volcanoes form. The distance in front of the trench is often thought to be where the overriding plate is 100-120 km above the subducting slab. The cold, oceanic plate plunges underneath the continental plate in the image above, adding water to the mantle. This water lowers the melting temperature (solidus) of the mantle inside the wedge of the subduction zone, causing partial melting (4-6%). This melt, less viscous and more buoyant than the solid mantle, floats to the top of the mantle and eventually penetrates the lithosphere and forms a volcano. This question is also relevant. Finally, at divergent plate boundaries, the process is fairly straightforward. A break in the lithosphere caused by rifting plates allows the pressurized mantle to flow towards the opening. This causes an abundance of hot, fertile mantle which can be melted. This melted mantle creeps through the lithosphere and forms ocean floor volcanoes (sometimes called smokers).
{ "domain": "earthscience.stackexchange", "id": 29, "tags": "volcanology" }
Homework Question: Moments
Question: Can someone please explain this question? My work: If we take B to be the pivot, then as B gets closer to mg the torque due to mg gets smaller, but I don't really see how this affects the force F (I can only think about how this might affect the torque due to F). Answer: In order for the beam to stay still, you must have balanced forces and torques. So if the distance between the point where mg acts and the reaction at point B gets smaller, the net force (reaction) at support B gets bigger in order to keep the torques balanced. This way the force at point A obviously gets smaller.
{ "domain": "physics.stackexchange", "id": 28945, "tags": "homework-and-exercises, newtonian-mechanics, forces, torque, statics" }
Can a drink with such composition have 0 caloric value (whether manufacturer lied)?
Question: I bought a drink which has the following statements: The nutritional value, g/100 ml: no proteins, fats, carbohydrates. Energy value (calorific value): kJ / 100 kcal to mg: absent. Can a drink with such a composition have 0 nutritional/calorific value? Composition: prepared drinking water, flavoring base "Cola" dye: caramel, acidity regulator: phosphoric acid, caffeine - no more than 150 ml/g, natural flavor enhancer, sweetener aspartame, acesulfame potassium, sodium saccharin, sodium cyclamate, preservative sodium benzoate. It contains a source of phenylalanine. Answer: You have to keep in mind the bane of all introductory chemistry students: excess precision and rounding. Every measurement has error bars, even if those error bars are implicit. It's senseless to report a value that has (much) more precision than your error bars. A value that's measured as zero isn't necessarily exactly zero. ... it's only "mostly zero". The particular rounding rules that are used in nutritional labels depend on the specific regulatory agency, but in the US, the FDA provides rounding rules for nutritional labels. It counts anything less than 5 (dietary) Calories per serving as being zero. Likewise, anything with less than 0.5 g of protein, fat or carbohydrate can be listed as having zero grams. This is how Tic-Tac breath mints can be advertised as being "zero calorie" despite being made almost entirely of sucrose (table sugar): a serving contains less than 5 Calories, and so can be rounded to zero on the label. In your case, there's a few things which could contain calories. Caramel color is made from sugar, and may contain residual saccharides. Aspartame is a methyl ester of a dipeptide, which is converted in the body to its constituent amino acids, providing a small amount of calories. (Gram for gram, aspartame would contain almost the same amount of calorific energy as sugar does. 
It's about 200 times sweeter than sugar, though, so on an equivalent sweetening basis it has much fewer calories.) Depending on what the "natural flavor enhancer" is, that might also contain carbohydrates, proteins or fats. (Natural flavors are often fats or fat-soluble, so flavorings often contain trace amounts of dietary fats.) For a single serving (and for 100 ml) there's only a small amount of carbohydrate, only a scant amount of amino acids, a negligible amount of fat, and an insignificant number of calories. They're a rounding error. But it is true that if you had enough of the drink, you could indeed have an appreciable number of calories, carbohydrates, protein, etc. It's just that it would come along with enough water to make actually consuming that amount physically unfeasible.
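A back-of-envelope sketch of the aspartame point, with assumed typical values (about 4 kcal per gram, like protein, and roughly 200 times the sweetness of sucrose; neither number comes from the label in the question):

```python
# Assumed typical values, not from the drink's label:
KCAL_PER_G = 4.0            # aspartame metabolizes like protein
SWEETNESS_VS_SUCROSE = 200.0

def aspartame_kcal(sucrose_equiv_g):
    """kcal from enough aspartame to match the sweetness of the
    given mass of sucrose."""
    return (sucrose_equiv_g / SWEETNESS_VS_SUCROSE) * KCAL_PER_G

# Matching the sweetness of 10 g of sugar in one serving takes only
# ~0.05 g of aspartame:
kcal = aspartame_kcal(10.0)
```

That works out to about 0.2 kcal per serving, far below the 5 kcal rounding threshold, which is why the label can honestly say zero.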
{ "domain": "chemistry.stackexchange", "id": 6346, "tags": "organic-chemistry, food-chemistry" }
Electron released between parallel plates
Question: An electron is released between fixed charged parallel plates. The separation between the plates is small compared to the dimensions of each plate, so that we consider the electric field to be uniform. Does it create an electromagnetic wave because of its acceleration? How does energy conservation take place in this situation? Answer: The electron will accelerate, which, roughly speaking, will create an increasing magnetic field and therefore an electromagnetic wave. As the electron is moving, its initial potential energy is being converted to kinetic energy and some of it will be lost to the radiation. Once the electron hits the positively charged plate (because that's where it would obviously end up), its kinetic energy will be transformed, roughly speaking, to heat. Presumably, the electron was initially placed between the plates by some external force, so we can say that it got its initial potential energy (eventually lost to the radiation and heat) as it was moved to this position. The change of the electrostatic energy of the system before the electron was moved to its initial position and after it has landed on the positive plate depends on where this electron came from. For instance, if the electron was taken from the positive plate, the potential energy was added to the system and then lost, so the final electrostatic energy of the system would be the same as it was before the whole experiment had started. If the electron was taken from the negative plate, some energy was lost as the electron was moved into its initial position between the plates and some more energy was lost as the electron completed its trip to the positive plate, moving on its own. In this case the final electrostatic energy of the system would be lower than it was before the experiment. We can say that it was slightly discharged. 
If the electron was moved in from the outside, the final electrostatic energy of the system would also be lowered because, at the end of the experiment, the charge on the positive plate will be reduced. The exact sequence of energy changes, in this case, depends on the initial position of the electron. For illustration purposes, let's look at three basic subcases: A) initial position is in the middle of the gap B) initial position is quarter gap above the neutral line C) initial position is quarter gap below the neutral line Let's assume that the voltage between the plates is 1V. So, it takes 1eV to move an electron from the positive plate to the negative, or 1eV is released (to radiation and heat) when an electron moves from the negative plate to the positive. The diagrams below show that the introduction of the electron into the system could leave its energy unchanged (case A), increase it (case B) or decrease it (case C). In case A, the electron is moved normal to the electric field lines and therefore no work is performed by the system or on the system and, therefore, the energy of the system does not change. In case B, the electron is moved into the field, i.e., it has to be pushed in by an external force, which performs the work, and therefore the energy of the system (which now includes the new electron) increases. Based on the position of the electron between the plates, quarter gap above the neutral line, the energy of the system has increased by 0.25eV. In case C, the electron is moved against the field, i.e., it is pulled by the field, i.e., the system performs some work and, therefore, the system loses some energy. Based on the position of the electron between the plates, quarter gap below the neutral line, the energy of the system has decreased by 0.25eV. 
As illustrated in the diagrams, the energy loss of the system after the electron has been released depends on the initial position of the electron, but in the end, the total loss of energy by the system, in comparison with its energy before the electron has been introduced, is the same 0.5eV in all three cases and could be calculated based on the fact that the system ended up with one unit of charge less on the positive plate. NOTE: In the case when the electron was taken from the negative plate and ended up on the positive plate, the charge on both plates was reduced by one unit or, assuming that the voltage in that case was 1V, the energy loss would be 1eV.
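The case-by-case bookkeeping above reduces to two small formulas, which can be checked directly. A minimal sketch (assuming a 1 V gap, with position measured as a fraction of the gap from the positive plate, the neutral line at the midpoint, and energies in eV):

```python
# Energy bookkeeping for an electron inserted between plates with a 1 V gap.
# Position y is the fraction of the gap measured from the positive plate;
# the neutral (mid-potential) line sits at y = 0.5. Energies are in eV.
V = 1.0  # plate voltage

def insertion_work(y):
    """Work an external agent does to bring the electron in from the
    field-free region (at mid-plane potential) to fractional position y."""
    return V * (y - 0.5)

def energy_released(y):
    """Energy radiated/dissipated as the electron falls from y to the
    positive plate (y = 0)."""
    return V * y

for label, y in [("A (midpoint)", 0.5),
                 ("B (quarter gap above neutral line)", 0.75),
                 ("C (quarter gap below neutral line)", 0.25)]:
    w_in = insertion_work(y)
    released = energy_released(y)
    net_loss = released - w_in  # net energy the system loses overall
    print(f"{label}: insertion {w_in:+.2f} eV, released {released:.2f} eV, "
          f"net system loss {net_loss:.2f} eV")
```

All three cases come out to the same 0.50 eV net loss, matching the conclusion above.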
{ "domain": "physics.stackexchange", "id": 48440, "tags": "electromagnetism, energy-conservation" }
K-fold cross-validation: how do MSE average and variance vary with K?
Question: I'd like to get an intuition about how varying k impacts k-fold validation. Is the following right? Average of the OOS MSEs should generally decrease with k Because a bigger "k" means the training sets are larger, so we have more data to fit the model (assuming we are using the "right" model). Variance of the OOS MSEs should generally increase with k. A bigger "k" means having more validation sets. So we have more individual MSEs to average out. Since the MSEs of many small folds will be more sparse than MSEs of few large folds, variance will be higher. Answer: Average of the OOS MSEs should generally decrease as k increases. This is right, but the difference is much less than on your chart. Suppose we have a dataset where the error will halve if we increase the data 10 times (approximately true for the paper Scaling to Very Very Large Corpora for Natural Language Disambiguation). Then the difference between 5-fold and 20-fold validation will be about 5% (1/(2^log10(0.95/0.8))), not halving like on your graph. And the difference between 20-fold and infinity-fold will be only about 1.5% (1/(2^log10(1/0.95))). For the chart you could use the formula: Average OOS MSE = 1/(2^log10(1-1/k))*MSE_inf. This assumes that you have MSE = MSE_inf at infinity. Variance of the OOS MSEs should generally increase as k increases. MSE is an average, and according to the Central Limit Theorem (if squared errors (SE) are independent and identically distributed, which, in my opinion, is assumed for most machine learning algorithms) the variance should equal Var(SE)/N, where N is the number of data points used to calculate MSE. So for 5-fold you will have variance Var(SE)/(Npop/5), where Npop is the total number of points that you have. For the average MSE between all k-folds the variance will be the same and equal to Var(SE)/Npop. 
So the answer is that the variance of individual MSE numbers of each k-fold increases with k, but the variance of the final average MSE does not depend on the number of folds. To calculate the variance of the final MSE based on MSE of folds: Var(MSE_final) = Mean(Var(MSE_folds))/k = Sum((MSE_folds - Mean(MSE_folds))^2)/k^2 MSE change based on number of folds Here MSE at infinity is assumed 0.05. Variance of individual k-fold MSEs change based on number of folds Here Var(squared error for one point) is assumed 10 and number of observations is 1000.
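The Var(SE)/N argument can be checked with a small simulation. A sketch with made-up i.i.d. squared errors (the distribution choice is arbitrary): the variance of the individual per-fold MSEs grows roughly linearly with k, while the grand average over all folds is the same number no matter how the data are split.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pop, trials = 1000, 500  # population size and Monte Carlo repetitions

def mean_fold_variance(k):
    """Average (over trials) of the sample variance of the k per-fold MSEs."""
    out = []
    for _ in range(trials):
        se = rng.exponential(size=n_pop)           # stand-in squared errors
        fold_mses = se.reshape(k, -1).mean(axis=1)  # MSE of each of the k folds
        out.append(fold_mses.var(ddof=1))
    return float(np.mean(out))

v5, v20 = mean_fold_variance(5), mean_fold_variance(20)
print(v20 / v5)  # close to 20/5 = 4: per-fold MSE variance scales with k
```

Since each fold has Npop/k points, Var(fold MSE) = Var(SE)·k/Npop, so quadrupling k quadruples the per-fold variance while the averaged MSE is unaffected.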
{ "domain": "datascience.stackexchange", "id": 3642, "tags": "cross-validation" }
Special Relativity - travelling close to light speed
Question: When we say something travels close to the speed of light, what is its speed relative to? For example, we have 4 highly advanced spacecraft at rest beside each other, labelled A, B, C and D. We leave A at rest and accelerate B, C and D to .8c. We can now consider B, C and D to be at rest and that A is retreating from them at .8c. Considering B, C and D to be at rest, we can now accelerate C and D to .8c relative to B. Can we then consider C and D to be at rest and further accelerate D to .8c with respect to C, and continue doing so ad infinitum with an endless array of spacecraft? Answer: Yes we can. According to special relativity, all inertial frames of reference are equal. What might seem like a paradox here is that if C is travelling at $0.8c$ with respect to B, C should be travelling at $1.6c > c$ with respect to A, which is a contradiction. However that argument is flawed, because the correct formula for addition of velocities in SR is $$ w = \frac{u + v}{1 + \frac{uv}{c^2}}$$ According to this, the velocity of B with respect to A is $v_{B:A} = 0.8c$, $v_{C:A} = 0.976c$, $v_{D:A} = 0.9973c$ ... It is rather easy to see that if the relative velocity of the n-th spaceship with respect to the first is subluminal, then the (n+1)-th also has a subluminal velocity: $$ w = \frac{u + v}{1 + uv/c^2} = c + \frac{-c - uv/c + u + v}{1 + uv/c^2} = c + \frac{-c(1 - u/c)(1 - v/c)}{1 + uv/c^2} < c $$
{ "domain": "physics.stackexchange", "id": 18787, "tags": "special-relativity, reference-frames, velocity, observers" }
Working rosjava service server/client example?
Question: Hi, Is there any working service server/client example code like the AddTwoInts tutorial for rospy and roscpp? I am able to run publisher/subscriber in rosjava, but struggling making a rosjava service server node works. What I am trying is a simple service server which takes empty request and print something on the screen. import com.google.common.base.Preconditions; import org.ros.node.DefaultNodeFactory; import org.ros.node.Node; import org.ros.node.NodeConfiguration; import org.ros.node.NodeMain; import org.ros.node.service.ServiceServer; import org.ros.internal.node.service.ServiceResponseBuilder; import org.ros.service.std_srvs.Empty; public class TestServiceServer implements NodeMain { private static final String SERVICE_NAME = "/test_service"; private static final String SERVICE_TYPE = "std_srvs/Empty"; private Node node; @Override public void main(NodeConfiguration configuration) { Preconditions.checkState(node == null); Preconditions.checkNotNull(configuration); try { System.out.println("Starting a Testing Service Node........"); node = new DefaultNodeFactory().newNode("test_service_server", configuration); ServiceServer<Empty.Request, Empty.Response> server = node.newServiceServer(SERVICE_NAME, SERVICE_TYPE, new ServiceResponseBuilder<Empty.Request, Empty.Response>() { @Override public Empty.Response build(Empty.Request request) { Empty.Response response = new Empty.Response(); System.out.println("called!!"); return response; } }); //server.awaitRegistration(); } catch (Exception e) { if (node != null) { node.getLog().fatal(e); } else { e.printStackTrace(); } } @Override public void shutdown() { node.shutdown(); node = null; } } When I call the service: rosservice call /test_service These error shows up: Traceback (most recent call last): File "/opt/ros/electric/ros/bin/rosservice", line 46, in <module> rosservice.rosservicemain() File "/opt/ros/electric/stacks/ros_comm/tools/rosservice/src/rosservice.py", line 731, in rosservicemain 
_rosservice_cmd_call(argv) File "/opt/ros/electric/stacks/ros_comm/tools/rosservice/src/rosservice.py", line 586, in _rosservice_cmd_call service_class = get_service_class_by_name(service_name) File "/opt/ros/electric/stacks/ros_comm/tools/rosservice/src/rosservice.py", line 357, in get_service_class_by_name service_type = get_service_type(service_name) File "/opt/ros/electric/stacks/ros_comm/tools/rosservice/src/rosservice.py", line 141, in get_service_type return get_service_headers(service_name, service_uri).get('type', None) File "/opt/ros/electric/stacks/ros_comm/tools/rosservice/src/rosservice.py", line 113, in get_service_headers return roslib.network.read_ros_handshake_header(s, cStringIO.StringIO(), 2048) File "/opt/ros/electric/ros/core/roslib/src/roslib/network.py", line 367, in read_ros_handshake_header raise ROSHandshakeException("connection from sender terminated before handshake header received. %s bytes were received. Please check sender for additional details."%b.tell()) roslib.network.ROSHandshakeException: connection from sender terminated before handshake header received. 0 bytes were received. Please check sender for additional details. Could someone point what's the problem? Thanks. Originally posted by Liang-Ting Jiang on ROS Answers with karma: 21 on 2011-11-01 Post score: 2 Answer: Looks like services seem to be working with a current (11/23) checkout, I just ran the AddTwoInts test with a python service and a java client (code below). I followed the setup howto here. 
import org.apache.commons.logging.Log; import org.ros.exception.RemoteException; import org.ros.node.Node; import org.ros.node.NodeMain; import org.ros.node.service.ServiceClient; import org.ros.node.service.ServiceResponseListener; import org.ros.service.test_ros.AddTwoInts; import com.google.common.base.Preconditions; public class TestService implements NodeMain { private Node node; private static final String SERVICE_NAME = "/add_two_ints"; private static final String SERVICE_TYPE = "test_ros/AddTwoInts"; @Override public void main(Node node) { Preconditions.checkState(this.node == null); this.node = node; try { final Log log = node.getLog(); ServiceClient<AddTwoInts.Request, AddTwoInts.Response> client = node.newServiceClient(SERVICE_NAME, SERVICE_TYPE); // TODO(damonkohler): This is a hack that we should remove once it's // possible to block on a connection being established. Thread.sleep(100); AddTwoInts.Request request = new AddTwoInts.Request(); request.a = 2; request.b = 2; client.call(request, new ServiceResponseListener<AddTwoInts.Response>() { @Override public void onSuccess(AddTwoInts.Response message) { log.info("I added 2 + 2: " + message.sum); } @Override public void onFailure(RemoteException arg0) { log.info("I failed to add 2 + 2"); } }); } catch (Exception e) { if (node != null) { node.getLog().fatal(e); } else { e.printStackTrace(); } } } @Override public void shutdown() { node.shutdown(); node = null; } } Originally posted by JeffRousseau with karma: 1607 on 2011-11-23 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 7156, "tags": "ros, service, rosjava" }
Strategy Pattern Implementation
Question: I thought I would learn a design pattern today and picked this one. I wrote a simple Android test demo to test the pattern. main.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <Spinner android:id="@+id/select" android:layout_height="wrap_content" android:layout_width="fill_parent" /> <EditText android:id="@+id/editOne" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="First" android:editable="false"/> <EditText android:id="@+id/editTwo" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="Second" android:editable="false"/> </LinearLayout> strings.xml <?xml version="1.0" encoding="utf-8"?> <resources> <string name="app_name">TestingStrategyPattern</string> <string-array name="names"> <item>Queued</item> <item>In Progress</item> <item>Started</item> <item>Finished</item> <item>Destroyed</item> <item>Bombarded</item> <item>Ready</item> <item>Paused</item> <item>Stopped</item> <item>Resolved</item> <item>Abandoned</item> </string-array> </resources> MyActivity.java package com.example.TestingStrategyPattern; import android.app.Activity; import android.os.Bundle; import android.view.View; import android.widget.AdapterView; import android.widget.ArrayAdapter; import android.widget.EditText; import android.widget.Spinner; public class MyActivity extends Activity { private boolean editOneEnabled = false; private boolean editTwoEnabled = false; /** * Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Spinner statesSpinner = (Spinner) findViewById(R.id.select); ArrayAdapter<CharSequence> adapter = ArrayAdapter.createFromResource(this, R.array.names, android.R.layout.simple_spinner_dropdown_item); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); statesSpinner.setAdapter(adapter); final EditText editOne = (EditText) findViewById(R.id.editOne); final EditText editTwo = (EditText) findViewById(R.id.editTwo); statesSpinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { String selectedItem = parent.getSelectedItem().toString(); if(selectedItem.equals("Queued")) { editOneEnabled = true; editTwoEnabled = false; } else if(selectedItem.equals("In Progress")){ editOneEnabled = true; editTwoEnabled = false; } else if(selectedItem.equals("Started")) { editOneEnabled = true; editTwoEnabled = false; } else if(selectedItem.equals("Bombarded")) { editOneEnabled = true; editTwoEnabled = false; } else if(selectedItem.equals("Ready")) { editOneEnabled = false; editTwoEnabled = true; } else if(selectedItem.equals("Paused")) { editOneEnabled = false; editTwoEnabled = true; } else if(selectedItem.equals("Stopped")) { editOneEnabled = false; editTwoEnabled = true; } else if(selectedItem.equals("Resolved")) { editOneEnabled = false; editTwoEnabled = true; } else if(selectedItem.equals("Abandoned")) { editOneEnabled = true; editTwoEnabled = false; } else { editOneEnabled = false; editTwoEnabled = false; } if(editOneEnabled == true) { editOne.setEnabled(true); } else { editOne.setEnabled(false); } if(editTwoEnabled == true) { editTwo.setEnabled(true); } else { editTwo.setEnabled(false); } } @Override public void onNothingSelected(AdapterView<?> parent) { } }); } } Now this is the code before I refactored it to use 
the strategy pattern, but effectively what happens is that based on the selection from the drop down/spinner widget, the two text fields will get enabled or disabled accordingly. Here is the refactored code: Strategy.java package com.example.strategy; public interface Strategy { boolean isEnabled(String state); } Context.java package com.example.strategy; public class Context { private Strategy strategy; public Context(Strategy strategy) { this.strategy = strategy; } public boolean executeStrategy(String state) { return this.strategy.isEnabled(state); } } ButtonOneStrategy.java package com.example.strategy; import java.util.HashMap; import java.util.Map; public class ButtonOneStrategy implements Strategy { private Map<String, Boolean> mappings; public ButtonOneStrategy() { mappings = new HashMap<String, Boolean>(); mappings.put("Queued", true); mappings.put("In Progress", true); mappings.put("Started", true); mappings.put("Bombarded", true); mappings.put("Ready", false); mappings.put("Paused", false); mappings.put("Stopped", false); mappings.put("Resolved", false); mappings.put("Abandoned", false); mappings.put("Destroyed", false); mappings.put("Finished", false); } @Override public boolean isEnabled(String state) { return mappings.get(state); } } ButtonTwoStrategy.java package com.example.strategy; import java.util.HashMap; import java.util.Map; public class ButtonTwoStrategy implements Strategy { private Map<String, Boolean> mappings; public ButtonTwoStrategy() { mappings = new HashMap<String, Boolean>(); mappings.put("Queued", false); mappings.put("In Progress", false); mappings.put("Started", false); mappings.put("Bombarded", false); mappings.put("Ready", true); mappings.put("Paused", true); mappings.put("Stopped", true); mappings.put("Resolved", true); mappings.put("Abandoned", true); mappings.put("Destroyed", false); mappings.put("Finished", false); } @Override public boolean isEnabled(String state) { return mappings.get(state); } } MyActivity.java package 
com.example.TestingStrategyPattern; import android.app.Activity; import android.os.Bundle; import android.view.View; import android.widget.AdapterView; import android.widget.ArrayAdapter; import android.widget.EditText; import android.widget.Spinner; import com.example.strategy.ButtonOneStrategy; import com.example.strategy.ButtonTwoStrategy; import com.example.strategy.Context; public class MyActivity extends Activity { private boolean editOneEnabled = false; private boolean editTwoEnabled = false; /** * Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Spinner statesSpinner = (Spinner) findViewById(R.id.select); ArrayAdapter<CharSequence> adapter = ArrayAdapter.createFromResource(this, R.array.names, android.R.layout.simple_spinner_dropdown_item); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); statesSpinner.setAdapter(adapter); final EditText editOne = (EditText) findViewById(R.id.editOne); final EditText editTwo = (EditText) findViewById(R.id.editTwo); final com.example.strategy.Context btnOneContext = new Context(new ButtonOneStrategy()); final com.example.strategy.Context btnTwoContext = new Context(new ButtonTwoStrategy()); statesSpinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { String selectedItem = parent.getSelectedItem().toString(); editOneEnabled = btnOneContext.executeStrategy(selectedItem); editTwoEnabled = btnTwoContext.executeStrategy(selectedItem); if(editOneEnabled == true) { editOne.setEnabled(true); } else { editOne.setEnabled(false); } if(editTwoEnabled == true) { editTwo.setEnabled(true); } else { editTwo.setEnabled(false); } } @Override public void onNothingSelected(AdapterView<?> parent) { } }); } } I added Strategy.java, Context.java, ButtonOneEnabled.java, 
ButtonTwoEnabled.java and changed MyActivity.java as a result of the new pattern. Here are my questions: What do you think about how I went about implementing this pattern? Have I done it in a good way? I felt that this pattern was overkill for the example. Could someone explain the benefits of using this pattern in larger examples or indeed in this example? I really felt that although this pattern could scale really well, it felt a bit over-designed, creating more objects with their own maps etc. I thought it was too heavy. What do you think about this? Answer: The Strategy pattern is more useful when the implementations are significantly different, and it is typically done using multiple classes implementing the same interface. In your case, the implementation is possible using a single class, for example: class ValueMatchingStrategy implements Strategy { private final Set<String> values; public ValueMatchingStrategy(String... values) { this.values = new HashSet<String>(Arrays.asList(values)); } @Override public boolean isEnabled(String value) { return values.contains(value); } } And then in your code you could create two instances of this class, for example: Strategy buttonOneStrategy = new ValueMatchingStrategy("Queued", "In Progress", "Started", "Bombarded"); Strategy buttonTwoStrategy = new ValueMatchingStrategy("Ready", "Paused", "Stopped", "Resolved", "Abandoned"); This part can be vastly simplified: if(editOneEnabled == true) { editOne.setEnabled(true); } else { editOne.setEnabled(false); } if(editTwoEnabled == true) { editTwo.setEnabled(true); } else { editTwo.setEnabled(false); } To this: editOne.setEnabled(editOneEnabled); editTwo.setEnabled(editTwoEnabled); Finally, but very importantly, all the names you used in the strategy pattern logic are strange and unnatural. editOneEnabled and editTwoEnabled clearly sound like hypothetical code and made it a bit difficult to review your code. executeStrategy is really a meaningless name. 
A method name with "execute" in it sounds more like the Command pattern. The Strategy interface shouldn't really be called "Strategy", and its method shouldn't really be called "executeStrategy", but be more meaningful for the purpose that it's supposed to accomplish. In any case, and as I explained above, since a single implementation can handle both of your use cases, the Strategy pattern doesn't seem necessary here. We don't know how you expect your code to evolve. Whether it's worth the investment depends on your use case, requirements, and likely changes you anticipate.
{ "domain": "codereview.stackexchange", "id": 11239, "tags": "java, design-patterns, android, xml" }
Lever rule confusion
Question: At point a3, I know the ratio of the liquid phase to the solid phase, but I don't get the highlighted statement which says b3 gives the composition. I thought a3 already gave the composition? Do I need to apply another lever rule where b3 is the center? If so, where would the endpoints be? Answer: The statement refers to the composition of the liquid phase, which is simply the x-value of $b_3$.
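For reference, a sketch of the lever rule itself at overall composition $x$ (point $a_3$), with $x_l$ the liquid composition (the x-value of $b_3$) and $x_s$ the solid composition at the other end of the tie line (which arm corresponds to which phase depends on the diagram's orientation):

```latex
% Lever rule: phase amounts are inversely proportional to the tie-line arms,
% from the mass balance  n_l (x - x_l) = n_s (x_s - x):
\[
  \frac{n_{\text{liquid}}}{n_{\text{solid}}} = \frac{x_s - x}{x - x_l}
\]
```

So $b_3$ supplies the composition $x_l$, while the lever rule ratio uses the distances measured from $a_3$ to the two tie-line endpoints.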
{ "domain": "chemistry.stackexchange", "id": 9209, "tags": "physical-chemistry, phase" }
Compensator design
Question: I got a plant $G(s)=\left(0.13s+1\right)/s^2$ and need to design a compensator which meets the demands below: Settling time : max 2s Overshoot : max 35% Gain margin : min 10 dB Phase margin : min 30 deg Controller effort (r to u) : max 0.9 Bandwidth : min 10 rad/s The best architecture so far was the one below, but I couldn't reach the demands. Assigning $C1=0.03\cdot\left(\left(s+80\right)\left(s+10\right)\right)/\left(\left(s+0.12\right)\left(s+1\right)\right)$, $C2=17.5\text{m}$ and $H=1$ results as below: Can anyone explain or guide a design approach on how to handle $\left(s+a\right)/s^2$ type plants when designing compensators, or mention some tips/shortcuts for architecture selection? How do we select the order of the controller? Answer: Consider the traditional control diagram below. If you set $C=(2\zeta\omega_ns+\omega_n^2)/(0.13s+1)$, then you'll get the following closed-loop system transfer function: $$ T=\frac{GC}{1+GC}=\frac{2\zeta\omega_ns+\omega_n^2}{s^2+2\zeta\omega_ns+\omega_n^2}. $$ This can be achieved through zero-pole cancelation, which is doable since $G$ has a zero in the LHP. Finally, you can easily attain your design requirements with $\omega_n \approx 10\,\text{rad/s}$ and e.g. $\zeta=\sqrt{2}/2$. The requirement on the maximum controller effort cannot be attained with a pure LTI controller, though. You may thus consider LTV and/or nonlinear approaches to this end for controlling a double integrator (for example relying on optimal control).
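The suggested design is easy to sanity-check numerically. A sketch using scipy (assuming $\omega_n=10$ rad/s and $\zeta=\sqrt{2}/2$; the transfer function is the closed-loop $T$ derived above):

```python
import numpy as np
from scipy import signal

wn, zeta = 10.0, np.sqrt(2) / 2
# Closed-loop transfer function after cancelling the plant zero:
# T(s) = (2*zeta*wn*s + wn^2) / (s^2 + 2*zeta*wn*s + wn^2)
T_cl = signal.TransferFunction([2 * zeta * wn, wn ** 2],
                               [1.0, 2 * zeta * wn, wn ** 2])

t, y = signal.step(T_cl, T=np.linspace(0.0, 3.0, 3000))

overshoot = y.max() - 1.0                       # fractional overshoot
outside = np.where(np.abs(y - 1.0) > 0.02)[0]   # samples outside 2 % band
settling_time = t[outside[-1] + 1]

print(f"overshoot     : {overshoot * 100:.1f} %")   # well under 35 %
print(f"settling time : {settling_time:.2f} s")     # well under 2 s
```

The zero at $-\omega_n/(2\zeta)$ raises the overshoot above the pure second-order value, but it still lands around 21%, inside the specification; as the answer notes, the controller-effort limit has to be checked separately.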
{ "domain": "robotics.stackexchange", "id": 2280, "tags": "control, pid, microcontroller, matlab, design" }
Lagrangian and finding equations of motion
Question: I am given the following Lagrangian: $L=-\frac{1}{2}\phi\Box\phi\color{red}{ +} \frac{1}{2}m^2\phi^2-\frac{\lambda}{4!}\phi^4$ and the question asks: How many constants c can you find for which $\phi(x)=c$ is a solution to the equations of motion? Which solution has the lowest energy (ground state)? My attempt: since the Lagrangian is second order, we have the following for the equations of motion: $$\frac{\partial L}{\partial \phi}-\frac{\partial}{\partial x_\mu}\frac{\partial L}{\partial(\partial^\mu \phi)}+\frac{\partial^2}{\partial x_\mu \partial x_\nu}\frac{\partial^2 L}{\partial(\partial^\mu \phi)\partial(\partial^\nu \phi)}=0 $$ The second term is zero since the Lagrangian is independent of the first-order derivative, so we will end up with: $$\frac{\partial L}{\partial \phi}=-\frac{1}{2} \Box \phi+m^2\phi-\frac{\lambda}{3!}\phi^3$$ and: $$\frac{\partial^2}{\partial x_\mu \partial x_\nu}\frac{\partial^2 L}{\partial(\partial^\mu \phi)\partial(\partial^\nu \phi)}=-\frac{1}{2}\Box\phi$$ so altogether we have for the equations of motion: $$-\frac{1}{2}\Box\phi+m^2\phi-\frac{\lambda}{6}\phi^3-\frac{1}{2}\Box\phi=0$$ and if $\phi=c$ where "c" is a constant, then $\Box\phi=0$ and the equation reduces to $$m^2\phi-\frac{\lambda}{6}\phi^3=0$$ which for $\phi=c$ gives us 3 solutions: $$c=-m\sqrt{\frac{6}{\lambda}}\\c=0\\c=m\sqrt{\frac{6}{\lambda}}$$ My question is: are my method and calculations right, and how do I see which one has the lowest energy (ground state)? Do I find the Hamiltonian for that? Answer: Looks good so far. To find the Hamiltonian you just use that if $L = T - U$ then $H = T + U$ (technically there are some extra assumptions there, but in your case it works out fine). Since $T = 0$ if $\phi$ is constant, you just need to find out which of those values $c$ minimize(s) the potential energy $-1/2 m^2 \phi^2 + \lambda/4! \phi^4$.
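A quick symbolic check of the constant solutions and their energies (the potential is the non-derivative part of $L$ with the sign flipped, $U=-\frac{1}{2}m^2\phi^2+\frac{\lambda}{4!}\phi^4$):

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
m, lam = sp.symbols('m lambda', positive=True)

# Potential energy density: non-derivative terms of L with the sign flipped
U = -sp.Rational(1, 2) * m**2 * phi**2 + lam / sp.factorial(4) * phi**4

# Constant solutions of the equations of motion = critical points of U
roots = sp.solve(sp.diff(U, phi), phi)
print(roots)   # 0 and +/- m*sqrt(6/lambda), matching the three values of c

# The nonzero roots sit below U(0) = 0: they are the degenerate ground states
U_min = sp.simplify(U.subs(phi, m * sp.sqrt(6) / sp.sqrt(lam)))
print(U_min)   # -3*m**4/(2*lambda)
```

Since $U(0)=0$ while $U(\pm m\sqrt{6/\lambda})=-3m^4/(2\lambda)<0$, the two nonzero constants share the lowest energy.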
{ "domain": "physics.stackexchange", "id": 65666, "tags": "homework-and-exercises, lagrangian-formalism, field-theory, symmetry-breaking" }
Why should we not use a simple count instead of a cumulative count in Counting Sort?
Question: I have this piece of code for counting sort, and it is "counting" sort because it actually counts occurrences. It doesn't use a cumulative sum. I want to ask: why is it bad not to use the cumulative sum in the counting sort algorithm? (BTW it has O(n) runtime complexity) def my_counting_sort(lst): counts = (max(lst)+1) * [0] # n output = [] for item in lst: # n counts[item] += 1 for index in range(len(counts)): # n item = counts[index] while item > 0: output.append(index) item -= 1 return output Answer: It is related to the stability of this algorithm's output. Stable sort: for a number appearing more than once in the input array, ties are broken by the rule that whichever number appears first in the input array appears first in the output array. The property of stability is important only when satellite data are carried around with the element being sorted. Counting sort is often used as a subroutine in radix sort, and counting sort's stability is crucial to radix sort's correctness. Using a cumulative sum makes it a stable sort.
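To make the stability point concrete, here is a sketch of the cumulative-sum version applied to records that carry satellite data; the prefix-sum pass plus the backwards walk is exactly what preserves the input order of equal keys, which the count-and-emit version above cannot do once satellite data is attached:

```python
def counting_sort_stable(records, key=lambda r: r):
    """Stable counting sort. `records` may carry satellite data; `key`
    extracts the non-negative integer sort key."""
    k = max(key(r) for r in records) + 1
    counts = [0] * k
    for r in records:                      # histogram of keys
        counts[key(r)] += 1
    for i in range(1, k):                  # cumulative sum: counts[v] is now
        counts[i] += counts[i - 1]         # the number of keys <= v
    output = [None] * len(records)
    for r in reversed(records):            # walk backwards to keep ties stable
        counts[key(r)] -= 1
        output[counts[key(r)]] = r
    return output

pairs = [(1, 'a'), (0, 'b'), (1, 'c'), (0, 'd')]
print(counting_sort_stable(pairs, key=lambda p: p[0]))
# [(0, 'b'), (0, 'd'), (1, 'a'), (1, 'c')]  -- ties keep their input order
```

With a plain per-key count there is nowhere to put `'a'` vs `'c'`: only the key survives, so the satellite data (and hence stability) is lost.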
{ "domain": "cs.stackexchange", "id": 16640, "tags": "algorithms, sorting" }
Calculating the Presence of Anthocyanin in a Solution
Question: I've produced a red cabbage solution (boiling red cabbage in water, basically), and would like to calculate the concentration of the red cabbage/anthocyanin in the solution. How is this accomplished? Thanks Answer: It is not a trivial exercise, because it is not a single compound in red cabbage which causes this color (anthocyanin is a class, not a single compound). There is a whole master's thesis dedicated to isolating red cabbage pigments from The Ohio State University. https://etd.ohiolink.edu/!etd.send_file?accession=osu1437655932&disposition=inline One can determine monomeric anthocyanin using UV-Vis spectrophotometry by measuring absorbances at two wavelengths. It is always good to mention what apparatus your school has. Giusti MM, Wrolstad RE. 1996. Characterization of red radish anthocyanins. J Food Sci 61(2):322-6. Giusti MM, Wrolstad RE. 2005. Characterization and measurement of anthocyanins by UV-visible spectroscopy. In: Handbook of Food Analytical Chemistry. RE Wrolstad, SJ Schwartz (eds). John Wiley & Sons Inc., New York. p. F1.2.1-F1.2.13.
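For the two-wavelength UV-Vis route the answer mentions, the standard recipe is the pH-differential method (AOAC 2005.02): absorbance is read at 520 nm and 700 nm in pH 1.0 and pH 4.5 buffers, and the result is reported as cyanidin-3-glucoside equivalents. A sketch of the arithmetic, assuming the usual constants for cyanidin-3-glucoside; the absorbance readings below are invented for illustration:

```python
# pH-differential estimate of monomeric anthocyanin, expressed as
# cyanidin-3-glucoside equivalents (MW = 449.2 g/mol, eps = 26900 L/(mol*cm)).
MW, EPS = 449.2, 26900.0

def monomeric_anthocyanin_mg_per_L(a520_ph1, a700_ph1, a520_ph45, a700_ph45,
                                   dilution_factor=1.0, path_cm=1.0):
    # The 700 nm reading corrects for haze; the pH 4.5 reading subtracts
    # everything that is not the reversible anthocyanin flavylium form.
    a = (a520_ph1 - a700_ph1) - (a520_ph45 - a700_ph45)
    return a * MW * dilution_factor * 1000.0 / (EPS * path_cm)

# hypothetical readings for a diluted cabbage extract
print(monomeric_anthocyanin_mg_per_L(0.80, 0.05, 0.20, 0.04))  # ~9.9 mg/L
```

This only gives total monomeric anthocyanin as a single-pigment equivalent; resolving the individual red cabbage pigments requires HPLC, as in the thesis cited above.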
{ "domain": "chemistry.stackexchange", "id": 11609, "tags": "experimental-chemistry" }
The philosophy behind the mathematics of quantum mechanics
Question: My field of study is computer science, and I recently had some readings on quantum physics and computation. This is surely a basic question for the physics researcher, but the answer helps me a lot to get a better understanding of the formulas, rather than regarding them "as is." Whenever I read an introductory text on quantum mechanics, it says that the states are demonstrated by vectors, and the operators are Hermitian matrices. It then describes the algebra of vector and matrix spaces, and proceeds. I don't have any problem with the mathematics of quantum mechanics, but I don't understand the philosophy behind this math. To be more clear, I have the following questions (and the like) in my mind (all related to quantum mechanics): Why vector/Hilbert spaces? Why Hermitian matrices? Why tensor products? Why complex numbers? (and a different question): When we talk of an n-dimensional space, what is "n" in nature? For instance, when measuring the spin of an electron, n is 2. Why 2 and not 3? What does it mean? Is the answer just "because nature behaves this way," or is there a more profound explanation? Answer: Vector spaces because we need superposition. Tensor products because this is how one combines smaller systems to obtain a bigger system when the systems are represented by vector spaces. Hermitian operators because this allows for the possibility of having discrete-valued observables. Hilbert spaces because we need scalar products to get probability amplitudes. Complex numbers because we need interference (look up the double slit experiment). The dimension of the vector space corresponds to the size of the phase space, so to speak. The spin of an electron can be either up or down and these are all the possibilities there are, therefore the dimension is 2. 
If you have $k$ electrons then each of them can be up or down and consequently the phase space is $2^k$-dimensional (this relates to the fact that the space of the total system is obtained as a tensor product of the subsystems). If one is instead dealing with a particle whose position can be any $x \in \mathbb R^3$, then the vector space must be infinite-dimensional to encode all the independent possibilities. Edit concerning Hermitian operators and eigenvalues. This is actually where the term quantum comes from: classically, all observables are commutative functions on the phase space, so there is no way to get purely discrete energy levels (i.e. with gaps in-between the neighboring values) that are required to produce e.g. atomic absorption/emission lines. To get this kind of behavior, some kind of generalization of observable is required, and it turns out that representing the energy levels of a system with the spectrum of an operator is the right way to do it. This also falls in neatly with the rest of the story, e.g. the Heisenberg uncertainty principle more or less forces one to have non-commutative observables, and for this again an operator algebra is required. This procedure of replacing the commutative algebra of classical continuous functions with the non-commutative algebra of quantum operators is called quantization. [Note that even on the quantum level operators can still have continuous spectrum, which is e.g. required for an operator representing position. So the word "quantum" doesn't really imply that everything is discrete. It just refers to the fact that the quantum theory is able to incorporate this possibility.]
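The $2^k$ counting can be made concrete with Kronecker (tensor) products of single-spin state vectors; a small numpy sketch:

```python
import numpy as np

up = np.array([1.0, 0.0])       # single spin-1/2 basis states
down = np.array([0.0, 1.0])

# A k-electron state lives in the k-fold tensor (Kronecker) product space:
# each extra spin doubles the dimension.
state = up
for _ in range(2):              # add two more electrons
    state = np.kron(state, down)

print(state.shape)              # (8,) -> dimension 2**3 for k = 3 spins
```

Product states like this one span only a corner of the space; generic vectors in the 8-dimensional space are superpositions, which is exactly why the dimension grows exponentially with the number of subsystems.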
{ "domain": "physics.stackexchange", "id": 1381, "tags": "quantum-mechanics, mathematics" }
Improving efficiency for two sum
Question: I am trying to solve this problem: Given an array of integers, find two numbers such that they add up to a specific target number. And this is my strategy: for each i in the array num, do a binary search for target - i in num.

    class Solution:
        # @return a tuple, (index1, index2)
        def binary_search(self, array, target):
            def _binary_search(a, l, h, t):
                if l == h:
                    return None
                elif a[(h + l) / 2] == t:
                    return (h + l) / 2
                elif a[(h + l) / 2] > t:
                    return _binary_search(a, l, (h + l) / 2, t)
                elif a[(h + l) / 2] < t:
                    return _binary_search(a, (h + l) / 2 + 1, h, t)
                else:
                    return None
            return _binary_search(array, 0, len(array) - 1, target)

        def twoSum(self, num, target):
            s_num = sorted(num)
            z_num = [(i, num.index(i)) for i in s_num]
            for tup in z_num:
                n = tup[0]
                i = tup[1]
                t = self.binary_search(s_num, target - n)
                if t != None:
                    index_2 = num.index(target - n)
                    if i == index_2:
                        index_2 = i + 1 + num[i:].index(target - n)
                    return (i + 1, index_2 + 1)

Time complexity: \$O(n \log(n))\$. This algorithm is not accepted since it exceeds the time limit. How can I improve it? Answer: You are correct that the first step is to sort the array. The second step - the actual search for a solution - also happens to be of n*log(n) complexity. What is important, they are very different n*log(n): sorting is implemented in native code, so it is virtually instantaneous compared to the Python loops. The key to the solution is to realize that the second step can be done in linear time: Given a sorted array data you may build another array target_minus_data in linear time. It is also sorted, descending. Reverse it (linear time). Now there are two sorted arrays; a value they share can be found, also in linear time, by a merge-like algorithm. Of course, you don't have to physically reverse target_minus_data - just iterate it backwards. If you look closely, you do not even have to build it; everything can be done on the fly.
Merge data with itself, from both ends, with the backward iterator returning target - data[bkwd].
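The merge-like idea the answer sketches is the classic two-pointer pass over the sorted values. A possible implementation (function name and details are my own, not from the original post):

```python
def two_sum_indices(num, target):
    # Sort (value, original index) pairs, then walk inward from both ends.
    pairs = sorted((v, i) for i, v in enumerate(num))
    lo, hi = 0, len(pairs) - 1
    while lo < hi:
        s = pairs[lo][0] + pairs[hi][0]
        if s == target:
            i, j = sorted((pairs[lo][1], pairs[hi][1]))
            return (i + 1, j + 1)   # 1-based, as in the original problem
        elif s < target:
            lo += 1                 # sum too small: advance the low pointer
        else:
            hi -= 1                 # sum too large: retreat the high pointer
    return None

print(two_sum_indices([2, 7, 11, 15], 9))   # (1, 2)
```

Sorting dominates at O(n log n); the scan itself is a single linear pass.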
{ "domain": "codereview.stackexchange", "id": 31265, "tags": "python, performance, algorithm, programming-challenge" }
Point forces doing work
Question: So I have a question: In a children's park, there is a slide which has a total length of 10 m and a height of 8 m. A vertical ladder is provided to reach the top. A boy weighing 200 N climbs up the ladder to the top of the slide and slides down to the ground. The average friction offered by the slide is three tenths of his weight. Find the work done by the ladder on the boy as he goes up. Neglect any work done by forces inside the body of the boy. My question is: how does the ladder do any work on the boy? The boy pushes on the ladder with his feet at a point and the ladder pushes back at his feet at that point. But don't point forces do no work (in the words of my teacher) or something? Answer: You are correct. In the rest frame of the ladder, the point of contact between the ladder and the boy's foot never moves, so the ladder does no work. It is precisely the work done by forces inside the body of the boy that would be doing the work here. The fact that the problem says to ignore these forces might indicate they indeed want you to say the answer is $0\,\rm J$. However, physics problems like these can be written sloppily; even ignoring the issue raised by the OP, if the author of the problem is assuming the ladder does a non-zero amount of work, then they have not provided enough information, as you do not know the velocity of the boy at the bottom and at the top of the ladder. In this case, it is likely that whoever made this problem was assuming the boy starts and stops at rest when going up the ladder, and was really asking how much work is needed in order to get the boy to the top of the ladder; then the simple answer is just $mgh=1600\, \rm J$, of course. In this particular case I would recommend just asking your teacher to clarify.
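The number the problem is after is plain arithmetic (assuming, as the answer suggests, that the boy starts and ends at rest):

```python
weight = 200.0   # N (the boy's weight, i.e. m*g)
height = 8.0     # m (height of the slide, and hence of the ladder)

work_up = weight * height   # work needed against gravity to reach the top
print(work_up)              # 1600.0 J, the mgh value quoted in the answer
```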
{ "domain": "physics.stackexchange", "id": 98577, "tags": "homework-and-exercises, newtonian-mechanics, forces, work" }
A d-ary heap problem from CLRS
Question: I got confused while solving the following problem (questions 1–3). Question: A d-ary heap is like a binary heap, but (with one possible exception) non-leaf nodes have d children instead of 2 children.
1. How would you represent a d-ary heap in an array?
2. What is the height of a d-ary heap of n elements in terms of n and d?
3. Give an efficient implementation of EXTRACT-MAX in a d-ary max-heap. Analyze its running time in terms of d and n.
4. Give an efficient implementation of INSERT in a d-ary max-heap. Analyze its running time in terms of d and n.
5. Give an efficient implementation of INCREASE-KEY(A, i, k), which flags an error if k < A[i], but otherwise sets A[i] = k and then updates the d-ary max-heap structure appropriately. Analyze its running time in terms of d and n.

My Solution: Given an array $A[a_1 .. a_n]$ $\qquad \begin{align} \text{root} &: a_1\\ \text{level 1} &: a_{2} \dots a_{2+d-1}\\ \text{level 2} &: a_{2+d} \dots a_{2+d+d^2-1}\\ &\vdots\\ \text{level k} &: a_{2+\sum\limits_{i=1}^{k-1}d^i} \dots a_{2+\sum\limits_{i=1}^{k}d^i-1} \end{align}$ → My notation seems a bit sophisticated. Is there a simpler one? Let h denote the height of the d-ary heap. Suppose that the heap is a complete d-ary tree: $$ 1+d+d^2+\dots+d^h=n\\ \dfrac{d^{h+1}-1}{d-1}=n\\ h=\log_d[n(d-1)+1] - 1 $$ This is my implementation:

    EXTRACT-MAX(A)
    1  if A.heap-size < 1
    2      error "heap underflow"
    3  max = A[1]
    4  A[1] = A[A.heap-size]
    5  A.heap-size = A.heap-size - 1
    6  MAX-HEAPIFY(A, 1)
    7  return max

    MAX-HEAPIFY(A, i)
    1  assign the depth-k children to AUX[1..d]
    2  for k = 1 to d
    3      compare A[i] with AUX[k]
    4      if A[i] <= AUX[k]
    5          exchange A[i] with AUX[k]
    6          k = largest
    7  assign AUX[1..d] back to A[depth-k children]
    8  if largest != i
    9      MAX-HEAPIFY(A, 2 + (1 + d + d^2 + ... + d^{k-1}) + (largest - 1))

The running time of MAX-HEAPIFY: $$T_M = d(c_8 d + (c_9+\dots+c_{13}) d + c_{14} d)$$ where $c_i$ denotes the cost of the i-th line above.
EXTRACT-MAX: $$ T_E = (c_1+\dots+c_7) + T_M \leq C\,d\,h\\ = C\,d\,(\log_d[n(d-1)+1] - 1)\\ = O(d\log_d[n(d-1)]) $$ → Is this an efficient solution? Or is there something wrong in my solution? Answer: Your solution is valid and follows the definition of a d-ary heap. But as you pointed out, your notation is a bit sophisticated. You might use the two following functions to retrieve the parent of the i-th element and the j-th child of the i-th element. $\text{d-ary-parent}({\it i}) \\ \ \ \ \ {\bf return}\ \lfloor (i-2)/d + 1 \rfloor$ $\text{d-ary-child}(i, j) \\ \ \ \ \ {\bf return}\ d(i-1)+j+1$ Obviously $1 \le j \le d$. You can verify those functions by checking that $\text{d-ary-parent}(\text{d-ary-child}(i,j)) = i$. Also easy to see is that a binary heap is a special type of $d$-ary heap where $d=2$; if you substitute $d$ with $2$, you will see that they match the functions PARENT, LEFT and RIGHT mentioned in the book. If I understand your answer correctly, you use a geometric progression. In your case you get $h = \log_d(n(d-1)+1) - 1$, which is approximately $\log_d(n\,d) - 1 = \log_d(n) + \log_d(d) - 1 = \log_d(n) + 1 - 1 = \log_d(n)$, which in fact is a valid and correct solution. But just for the sake of handling constant fluctuations you might want to write $\Theta(\log_d(n))$. The reason for this is that some heaps might not be balanced, so their longest path and shortest path might vary by some constant $c$; by using $\Theta$ notation you eliminate this problem. You don't need to re-implement the procedure given in the textbook, but you must alter it a bit, e.g. assigning all children to the $AUX$ table using the given $\text{d-ary-parent}$ and $\text{d-ary-child}$ functions. Because $\text{EXTRACT-MAX}$ was not altered, it depends on the running time of $\text{MAX-HEAPIFY}$. In your analysis you must now use a worst-case time proportional to the height and the number of children each node must examine (which is at most d).
Once again your analysis is very precise; in the end you got $O(d\ \log_d(n(d-1)))$, which can be transformed to: $O(d\ \log_d(n(d-1))) = O(d(\log_d(n) + \log_d(d-1))) = O(d\ \log_d(n) + d\ \log_d(d-1))$ For practical reasons we can always assume that $d \ll n$, so we can drop the $d \log_d(d-1)$ part of the O notation, and then we get $O(d\log_d(n))$. Which is also a valid solution. But not surprisingly you can also analyse the function's run time using the Master theorem, which will show that $\text{MAX-HEAPIFY}$ is not only $O$ but even $\Theta$. The CLRS book already provides the INSERT procedure, which looks like this: $\text{INSERT}(A, key)\\ \ \ \ \ A.heap\_size = A.heap\_size + 1 \\ \ \ \ \ A[A.heap\_size] = -\infty\\ \ \ \ \ \text{INCREASE-KEY}(A, A.heap\_size, key)$ It can be easily proven, and common sense dictates, that its time complexity is $O(\log_d(n))$. It's because the heap might be traversed all the way to the root. Just like INSERT, INCREASE-KEY is also defined in the textbook as: $\text{INCREASE-KEY}(A, i, key)\\ \ \ \ \ {\bf if}\ key < A[i]\\ \ \ \ \ \ \ \ \ {\bf error} \text{"new key is smaller than current"}\\ \ \ \ \ A[i] = key\\ \ \ \ \ {\bf while}\ i > 1\ and\ A[i] > A[\text{d-ary-parent}(i)]\\ \ \ \ \ \ \ \ \ A[i] \leftrightarrow A[\text{d-ary-parent(i)}]\\ \ \ \ \ \ \ \ \ i = \text{d-ary-parent(i)}$ Complexity is obviously $O(\log_d(n))$ (see the previous point).
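The parent/child index functions from the answer are easy to sanity-check in code (a small sketch of my own, assuming 1-based heap indices as in CLRS):

```python
def d_ary_parent(i, d):
    # parent of the i-th element of a 1-based d-ary heap
    return (i - 2) // d + 1

def d_ary_child(i, j, d):
    # j-th child (1 <= j <= d) of the i-th element
    return d * (i - 1) + j + 1

# parent(child(i, j)) == i must hold for every node and child slot
d = 3
for i in range(1, 50):
    for j in range(1, d + 1):
        assert d_ary_parent(d_ary_child(i, j, d), d) == i
print("identity holds")
```

For $d=2$ the functions reduce to the textbook's LEFT ($2i$) and RIGHT ($2i+1$).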
{ "domain": "cs.stackexchange", "id": 9567, "tags": "data-structures, time-complexity, runtime-analysis" }
When parallel light rays (not parallel to the principal axis) pass through a convex lens, where do they converge on the focal plane?
Question: I know that the rays will converge at the focal plane. But how do we calculate exactly where they will strike? Is there any formula for this? Answer: They converge at a point on the focal plane such that the angle between the refracted ray and the principal axis is the same as the angle between the incident ray and the principal axis (this is the ray that passes undeviated through the optical centre of the lens). Let the height of the point of convergence from the principal axis be $h$, the focal length be $f$ and the angle between the incident ray and the principal axis be $\alpha$; then we get $$\frac h f = \tan \alpha$$
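As a quick numeric illustration of the formula (the focal length and angle below are made-up values):

```python
import math

f = 0.10                  # focal length in metres (assumed value)
alpha = math.radians(30)  # angle of the parallel bundle to the principal axis

h = f * math.tan(alpha)   # height of the convergence point on the focal plane
print(round(h, 4))        # ~0.0577 m above the principal axis
```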
{ "domain": "physics.stackexchange", "id": 67284, "tags": "geometric-optics, lenses" }
What is the difference between LTL and CTL?
Question: I have already read examples of formulas in CTL but not in LTL and vice-versa, but I'm having trouble gaining a mental grasp on LTL formulas and really what, at heart, the difference is. Answer: To really understand the difference between LTL and CTL you have to study the semantics of both languages. LTL formulae denote properties that will be interpreted on each execution of a program. For each possible execution (a run), which can be seen as a sequence of events or states on a line — and this is why it is named "linear time" — satisfiability is checked on the run with no possibility of switching to another run during the checking. On the other hand, CTL semantics checks a formula on all possible runs and will try either all possible runs (A operator) or only one run (E operator) when facing a branch. In practice this means that some formulae of each language cannot be stated in the other language. For example, the reset property (an important reachability property for circuit design) states that there is always a possibility that a state can be reached during a run, even if it is never actually reached (AG EF reset). LTL can only state that the reset state is actually reached and not that it can be reached. On the other hand, the LTL formula $\Diamond\Box s$ cannot be translated into CTL. This formula denotes the property of stability: in each execution of the program, s will finally be true until the end of the program (or forever if the program never stops). CTL can only provide a formula that is too strict (AF AG s) or too permissive (AF EG s). The second one is clearly wrong. It is not so straightforward for the first, but AF AG s is erroneous too. Consider a system that loops on A1, can go from A1 to B, and then will go to A2 on the next move. Then the system will stay in the A2 state forever. Then "the system will finally stay in an A state" is a property of the type $\Diamond\Box s$. It is obvious that this property holds on the system.
However, AF AG s cannot capture this property, since the opposite holds: there is a run (the one that loops on A1 forever) in which the system always remains in a state from which some run eventually reaches a non-A state. I don't know if this answers your question, but I would like to add some comments. There is a lot of discussion of the best logic to express properties for software verification... but the real debate is somewhere else. LTL can express important properties for software system modelling (fairness), whereas CTL must be given a new semantics (a new satisfiability relation) to express them. But CTL algorithms are usually more efficient and can use BDD-based algorithms. So... there is no best solution. Only two different approaches, so far. One of the commenters suggests Vardi's paper "Branching versus Linear Time: Final Showdown".
{ "domain": "cstheory.stackexchange", "id": 928, "tags": "lo.logic, model-checking, temporal-logic" }
Bremsstrahlung radiations
Question: Why is Bremsstrahlung radiation ignored in the case of heavy ions (such as $\alpha$ particles) but not for $\beta$ particles when calculating the rate of energy loss of the heavy ions moving in some medium (Bethe formula)? Answer: The power radiated from charged particles is proportional to the square of their acceleration. If equally charged particles are subject to the same accelerating electromagnetic forces then particles with greater mass (i.e. the ions) will experience a much smaller acceleration and therefore emit much, much less power.
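To put a rough number on this (my own order-of-magnitude sketch; it ignores the alpha particle's doubled charge, which only changes the result by a small factor):

```python
m_e = 9.109e-31      # electron mass, kg
m_alpha = 6.645e-27  # alpha-particle mass, kg

# Equal force => acceleration scales as 1/m, and radiated power as a**2,
# so the suppression factor for the heavy ion is (m_alpha / m_e)**2.
suppression = (m_alpha / m_e) ** 2
print(f"{suppression:.1e}")  # ~5.3e+07: bremsstrahlung ~10^7 times weaker
```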
{ "domain": "physics.stackexchange", "id": 45267, "tags": "nuclear-physics" }
If a $J/\psi$ decays to an electron-positron pair 5% of the time, how often would a $\phi$ meson decay to an electron-positron pair?
Question: I know the mass of the $J/\psi$ to be 3097 MeV and the mass of the $\phi$ to be 1018 MeV. I know that the $J/\psi$ decays to an electron and positron 5% of the time. I also know the full width of the $J/\psi$ to be 0.092 MeV and that the $\phi$ meson lives 50 times longer than the $J/\psi$. My professor claims that if I am given that info and can draw the Feynman diagrams for both interactions then it should be possible to make an estimate of how often the $\phi$ meson decays to an electron-positron pair too. But I don't understand how this can be done. I have drawn both diagrams ($c\bar c$ or $s\bar s$ to electron and positron with a photon in between). I also know the following formulas: the total width is $\Gamma = \hbar / \tau$, where $\tau$ is the decay time of the particle. I also know that the branching fraction is given as: BF = partial width / total width. I'm not sure how one could estimate the branching fraction of $\phi$ to an electron-positron pair. Answer: This is the mate of your previous question, with a small difference: the virtual photon producing the $e^+ e^-$ pair couples twice as strongly to $c\bar c$ as to $s\bar s$, so, squaring the diagram, $$ \Gamma( \phi\to e^+ e^-) = \Gamma( \psi\to e^+ e^-)/4, $$ while (the BF is closer to 6% than to 5%) $$ \Gamma( \psi\to e^+ e^-)\approx 0.06 ~\Gamma_{\psi ~total} \approx 5.6 ~\hbox {keV}. $$ Hence, $$\Gamma( \phi\to e^+ e^-)\approx 1.4 ~\hbox{ keV}, $$ compared to the PDG value of 1.3 keV. The corresponding BF is about $3 \cdot 10^{-4}$. Your professor claims well.
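The arithmetic in the answer can be reproduced directly (numbers as quoted in the answer; 92.9 keV is the PDG total width that the 0.092 MeV in the question rounds from):

```python
gamma_psi_total = 92.9   # keV, full width of the J/psi
bf_psi_ee = 0.06         # branching fraction J/psi -> e+ e- (closer to 6% than 5%)

gamma_psi_ee = bf_psi_ee * gamma_psi_total  # partial width J/psi -> e+ e-
gamma_phi_ee = gamma_psi_ee / 4             # charge factor: (Q_c / Q_s)**2 = 4
print(round(gamma_psi_ee, 1), round(gamma_phi_ee, 1))  # 5.6 1.4 (keV)
```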
{ "domain": "physics.stackexchange", "id": 98852, "tags": "particle-physics, standard-model, mesons" }
TensorFlow MLP loss increasing
Question: When I train my model the loss increases over each epoch. I feel like this is a simple fix and I am missing something obvious, but I cannot figure out what it is. Any help would be greatly appreciated. The neural network:

    def neural_network(data):
        hidden_L1 = {'weights': tf.Variable(tf.random_normal([784, neurons_L1])),
                     'biases': tf.Variable(tf.random_normal([neurons_L1]))}
        hidden_L2 = {'weights': tf.Variable(tf.random_normal([neurons_L1, neurons_L2])),
                     'biases': tf.Variable(tf.random_normal([neurons_L2]))}
        output_L = {'weights': tf.Variable(tf.random_normal([neurons_L2, num_of_classes])),
                    'biases': tf.Variable(tf.random_normal([num_of_classes]))}

        L1 = tf.add(tf.matmul(data, hidden_L1['weights']), hidden_L1['biases'])  # matrix multiplication
        L1 = tf.nn.relu(L1)
        L2 = tf.add(tf.matmul(L1, hidden_L2['weights']), hidden_L2['biases'])  # matrix multiplication
        L2 = tf.nn.relu(L2)
        output = tf.add(tf.matmul(L2, output_L['weights']), output_L['biases'])  # matrix multiplication
        output = tf.nn.softmax(output)
        return output

My loss, optimiser and loop for each epoch:

    output = neural_network(x)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=y)
    )
    optimiser = tf.train.AdamOptimizer().minimize(loss)
    init = tf.global_variables_initializer()

    epochs = 5
    total_batch_count = 60000 // batch_size

    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(epochs):
            avg_loss = 0
            for i in range(total_batch_count):
                batch_x, batch_y = next_batch(batch_size, x_train, y_train)
                _, c = sess.run([optimiser, loss], feed_dict={x: batch_x, y: batch_y})
                avg_loss += c / total_batch_count
            print("epoch = ", epoch + 1, "loss =", avg_loss)
        sess.close()

I have a feeling my problem lies in either the loss function or the loop I wrote for each epoch; however, I am new to TensorFlow and cannot figure this out.
Answer: You are using the function softmax_cross_entropy_with_logits which, according to TensorFlow's documentation, has the following specification for logits: "logits: Per-label activations, typically a linear output. These activation energies are interpreted as unnormalized log probabilities." Hence, you should pass the activations before the non-linearity application (in your case, softmax). You can fix it by doing the following:

    def neural_network(data):
        hidden_L1 = {'weights': tf.Variable(tf.random_normal([784, neurons_L1])),
                     'biases': tf.Variable(tf.random_normal([neurons_L1]))}
        hidden_L2 = {'weights': tf.Variable(tf.random_normal([neurons_L1, neurons_L2])),
                     'biases': tf.Variable(tf.random_normal([neurons_L2]))}
        output_L = {'weights': tf.Variable(tf.random_normal([neurons_L2, num_of_classes])),
                    'biases': tf.Variable(tf.random_normal([num_of_classes]))}

        L1 = tf.add(tf.matmul(data, hidden_L1['weights']), hidden_L1['biases'])  # matrix multiplication
        L1 = tf.nn.relu(L1)
        L2 = tf.add(tf.matmul(L1, hidden_L2['weights']), hidden_L2['biases'])  # matrix multiplication
        L2 = tf.nn.relu(L2)
        logits = tf.add(tf.matmul(L2, output_L['weights']), output_L['biases'])  # matrix multiplication
        output = tf.nn.softmax(logits)
        return output, logits

Then, outside your function, you can retrieve the logits and pass them to your loss function, as in the example below:

    output, logits = neural_network(x)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))

I remark that you may still be interested in the output tensor, for calculating your network's accuracy. If this substitution doesn't work, you should also experiment with the learning rate parameter of your AdamOptimizer (see the documentation here).
{ "domain": "datascience.stackexchange", "id": 6132, "tags": "neural-network, tensorflow" }
Third Kepler law and mass dependence
Question: The third Kepler law states that: \begin{equation} \frac{T^2}{R^3}=\frac{4\pi^2}{G(M+m)} \end{equation} where $T$ is the period of the orbital motion, $R$ is the semimajor axis, $M$ is the mass of the sun and $m$ is the mass of the planet. This is counterintuitive to me because I believed that gravitational motion was independent of the mass of the orbiting planet, since the mass $m$ cancels out from the beginning when one states Newton's law. Furthermore, I thought that this had to do with some fundamental things associated with gravity being a geometrical theory that doesn't depend on your mass but just on the geometry of your trajectory. Why does the period depend on the mass of the planet then? Answer: The $M+m$ in Kepler's third law is a vestige of the reduced mass associated with the two-body problem. Roughly speaking, we map a coupled and complicated system of two interacting particles into an equivalent problem of decoupled differential equations, one of them describing the motion of a particle of reduced mass $\mu$ under a central potential corresponding to the gravitational interaction. By integrating Kepler's second law, $dA/dt=L/2\mu$, over a complete orbit we obtain $$\frac AT=\frac{L}{2\mu},$$ where $A$ is the area of the orbit and $L$ the angular momentum of the particle of mass $\mu$. For simplicity let us consider a circular orbit of radius $R$. Then $$T^2=\frac{4\pi^2\mu^2R^4}{L^2}.\tag1$$ In the circular orbit, the centrifugal force matches gravity, thus $$\frac{GMm}{R^2}=\mu\omega^2R=\mu R\frac{L^2}{\mu^2R^4},$$ since $L=\mu R^2\omega$. Solving for $\mu^2 R^4/L^2$ and plugging back into (1) we obtain $$T^2=\frac{4\pi^2R^3}{G(M+m)}.$$ Note that for the solar system we normally have $M\gg m$ so we normally neglect $m$.
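A quick numerical check for the Earth–Sun system shows both the formula and why the $m$-dependence is usually invisible (standard SI values):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the Sun, kg
m = 5.972e24    # mass of the Earth, kg
R = 1.496e11    # semimajor axis of Earth's orbit, m

T = 2 * math.pi * math.sqrt(R**3 / (G * (M + m)))   # full Kepler III
T_no_m = 2 * math.pi * math.sqrt(R**3 / (G * M))    # with m neglected

print(T / 86400)                  # ~365 days
print((T_no_m - T) / T_no_m)      # ~1.5e-6: a few parts per million
```

The relative error from dropping $m$ is roughly $m/2M$, which for the Earth is far below observational precision of Kepler's era.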
{ "domain": "physics.stackexchange", "id": 43205, "tags": "newtonian-mechanics, newtonian-gravity, mass, orbital-motion, inertial-frames" }
Toggling an array of filters
Question: What follows is a piece of code that essentially toggles an array of filters (if the filter doesn't exist it adds it; if it does, it removes it). What would you suggest is the best way to write the following imperative approach declaratively?

    var selectedFilters = [ {name: "SomeName"}, ... ];
    var inputFilter = {name: "OtherName"};

    var indexFound = -1;
    for (let i = 0; i < selectedFilters.length; i++) {
        if (selectedFilters[i].name === inputFilter.name) {
            indexFound = i;
        }
    }

    if (indexFound != -1) {
        selectedFilters.splice(indexFound, 1);
    } else {
        selectedFilters.push(inputFilter);
    }

An idea would be to use filter first to weed out the item if it exists by name, then if the resulting array is equal to the original, push. But it still doesn't feel right.

Answer: Pure vs. State. There are two ways you can do this.

Pure: The first, functionally pure, method first copies the array, then checks if the item to toggle exists and, depending on that result, adds or removes the item, making sure that the added item is a copy, not a reference. It has no side effects but requires additional memory and CPU cycles.

    const toggleItem = (itemDesc, items, prop = "name") => {
        items = [...items];
        const index = items.findIndex(item => itemDesc[prop] === item[prop]);
        index > -1 ? items.splice(index, 1) : items.push({...itemDesc});
        return items;
    }

State: The second does not create a new array and keeps all references. It is "functionally" impure and ensures that the changed state is available to all references to the original. It is considerably quicker and uses less memory.

    const toggleItem = (itemDesc, items, prop = "name") => {
        const index = items.findIndex(item => itemDesc[prop] === item[prop]);
        index > -1 ? items.splice(index, 1) : items.push(itemDesc);
        return items;
    }
{ "domain": "codereview.stackexchange", "id": 29968, "tags": "javascript" }
Where to download a table with ICD-9-CM codes?
Question: I am looking for a simple table with all ICD9 codes (For example: Diagnosis Code 414.0 -> Coronary atherosclerosis). I can only find weird PDF or other kinds of text documents. Or web forms, where I have to post a certain code to get the description. Here is what I have found so far. http://www.icd9data.com/2015/Volume1/default.html https://www.aapc.com/codes/icd9-codes-vol3-range https://www.cms.gov/Medicare/Coding/ICD9ProviderDiagnosticCodes/codes https://www2.gov.bc.ca/gov/content/health/practitioner-professional-resources/msp/physicians/diagnostic-code-descriptions-icd-9 https://ftp.cdc.gov/pub/Health_Statistics/NCHS/Publications/ Answer: The zip files found within https://www.cms.gov/Medicare/Coding/ICD9ProviderDiagnosticCodes/codes contain text files and Excel spreadsheets that you can open with Excel, R or Python depending on how you want to use this data.
{ "domain": "bioinformatics.stackexchange", "id": 2157, "tags": "icd-codes" }
Root to leaf path with given sum in binary tree
Question: For a given binary tree and a sum, I have written the following function to check whether there is a root-to-leaf path in that tree with the given sum.

    /* A binary tree node
    struct Node
    {
        int data;
        struct Node *left, *right;
    };
    */

    bool hasPathSum(Node *node, int sum)
    {
        if(!node) return sum==0;
        return ( hasPathSum(node->left, sum-node->data) || hasPathSum(node->right, sum-node->data) );
    }

Are there any edge cases in which the code will break? Also, do comment on the code style.

Answer: Give your operators some breathing space.

    if (!node) {
        return sum == 0;
    }
    return hasPathSum(node->left, sum - node->data)
        || hasPathSum(node->right, sum - node->data);

Note that the return expression needs no parentheses. sum - node->data seems more natural expressed once:

    sum -= node->data;
    return hasPathSum(node->left, sum) || hasPathSum(node->right, sum);

I see no edge cases except possible overflows.
{ "domain": "codereview.stackexchange", "id": 22343, "tags": "c++, tree" }
How to act an operator on a two-particle spin state?
Question: I'm doing an assignment for my quantum class at the moment and I'm having trouble figuring out how to act a spin operator on a two-particle state - specifically in finding the eigenvalues. I've spent several hours going through examples but they tend not to be too enlightening, and it'd be appreciated if someone could explain a) whether what I'm doing is correct and b) how operators work over tensor products as well. My operator is $S_z = S_{1z}+ S_{2z}$ where $S_{iz}|\pm\rangle= \pm\frac{\hbar}{2}|\pm\rangle $. I wish to find eigenvalues of $S_z$ of the following state: $|+-\rangle - |-+\rangle $. Knowing $S_z = S_{1z}\otimes \mathbb{1} + \mathbb{1}\otimes S_{2z}$ I do the following: $(S_{1z}+S_{2z})(|+-\rangle - |-+\rangle) = S_{1z} (|+-\rangle - |-+\rangle) +S_{2z}(|+-\rangle - |-+\rangle)$ $=\left(\frac{\hbar}{2}-\frac{-\hbar}{2}\right)(|+-\rangle + |-+\rangle)+\left(\frac{-\hbar}{2}-\frac{\hbar}{2}\right)(|+-\rangle + |-+\rangle)$ $=\hbar (|+-\rangle - |-+\rangle) -\hbar (|+-\rangle + |-+\rangle)$ $=(\hbar-\hbar) (|+-\rangle - |-+\rangle)$ $=0$ I'm confused here now for several reasons. Do I now factorise, getting an eigenvalue of 0? When I say $S_{1z}+S_{2z}$, what does "+" mean? Is it regular addition or is it a direct sum or something? Have I evaluated this correctly? Most of the examples I find are a bit simple and not too enlightening. Answer: $\newcommand{\ket}[1]{\left| #1 \right>}$ $\newcommand{\o}{\mathbf 1}$ Physicists are lazy people, we all are! When you see something like $S_{1z}+S_{2z}$ you should really think of the following: $$S_{1z}+S_{2z} \equiv S_{1z} \otimes \mathbf 1+ \mathbf 1 \otimes S_{2z}$$ Since you get tired of writing it over and over you just shorten it by an addition and remember how to act on the states. That is basically what you are doing wrong in your calculation.
Remember that $$\ket {+-} \equiv \ket + \otimes \ket -$$ Which means the following: \begin{align} \left(S_{1z}+S_{2z}\right) \ket{+-} &= \big( S_{1z} \otimes \mathbf 1+ \mathbf 1 \otimes S_{2z} \big) \big( \ket + \otimes \ket - \big) \tag{1} \\ &= S_{1z}\ket + \otimes \ket - + \ket + \otimes S_{2z}\ket -\\ &=\frac{\hbar}{2}\big( \ket + \otimes \ket- - \ket + \otimes \ket -\big) =0 \\ \end{align} Notice how long the equation ($1$) is, which is, as I said before, usually shortened at the cost of ambiguity. I think you can work out your question by yourself after this explanation.
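The same calculation can be checked mechanically by storing amplitudes over the two-spin basis strings (a pure-Python sketch of my own, with $\hbar$ set to 1):

```python
def S_z(state):
    """Apply S_1z + S_2z to a state given as {basis string: amplitude}."""
    out = {}
    for basis, amp in state.items():
        # each slot contributes +1/2 for '+' and -1/2 for '-' (hbar = 1)
        eig = sum(0.5 if s == '+' else -0.5 for s in basis)
        out[basis] = out.get(basis, 0) + eig * amp
    return out

singlet = {'+-': 1, '-+': -1}    # the state |+-> - |-+>
print(S_z(singlet))              # every amplitude is 0: S_z |psi> = 0

print(S_z({'++': 1}))            # {'++': 1.0}: eigenvalue +1 (i.e. +hbar)
```

The state $|+-\rangle - |-+\rangle$ is annihilated by $S_z$, i.e. it is an eigenstate with eigenvalue 0, in agreement with the hand calculation.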
{ "domain": "physics.stackexchange", "id": 22061, "tags": "operators, quantum-spin, hilbert-space, tensor-calculus, eigenvalue" }
Identification of this species of Toad
Question: I found a short video on the Internet of a toad and I wonder if anybody could identify it? It was posted on a social-media/blogging website called 'Tumblr', so I don't know much about it (e.g., where the video took place). EDIT: The original link is no longer active. Here's an alternate link. Useful for hearing the specimen's "squeak"/scream. EDIT 2: Here's a gif of some frames from the video for permanence on this site: P.s. I think it's a toad. Please, don't hesitate to correct me/edit my question, if i'm mistaken. Answer: This indeed looks like a species of "Pacman Frog." Specifically, this specimen most resembles the terrestrial Ceratophrys cranwelli (Cranwell's horned frog or Chacoan horned frog). © 2014 James H. Harding Facts: 8-13 cm long; can weigh up to 0.5 kg. Origin: endemic to dry Gran Chaco region of Argentina, Bolivia, Paraguay and Brazil. Cranwell's are very popular as pets. Their common name comes from their large mouths. According to Wikipedia: Like most members of the genus Ceratophrys, they are often considered Pacman frogs because of their resemblance to the popular video game character of the same name. Like most reptiles/amphibians traded worldwide as pets, a fair amount of interbreeding between captive species (and even speciation as a result of captivity) result in various hybrids and color schemes. I couldn't find any reputable sources describing the sound the frog makes in the video, but Valetti et al. (2013) studied their calls in the wild. Closely related Ceratophrys ornata is both larger and typically much greener compared to C. cranwelli.
{ "domain": "biology.stackexchange", "id": 6857, "tags": "zoology, species-identification, herpetology" }
Gravity force between two objects with different mass
Question: We know thanks to Newton that: $$F=G\frac{m_1\cdot m_2}{r^2}$$ where $G$ is the gravitational constant, about $6.673\cdot10^{-11}$, $m_1$ and $m_2$ are the masses of two different objects, and $r$ is the distance between them. We also know that $F=ma$, so can we write this: $$ma=G\frac{m_1\cdot m_2}{r^2}$$ If so, what is $m$ in the first term of the equation? $m_1$ or $m_2$? Thanks a lot for help!!! Answer: Yes, this is all correct so far. What you need to remember here is that force is a vector quantity. That is, it has a direction associated with it. A force pushing you into the ground is not the same as one pushing you up into the sky, like the seat of a flying airplane. So here you need to label your force $\vec{F}_1$, say, and this would be the force acting on particle $m_1$. Then if you draw a nice force diagram, of the Sun and Earth or anything in particular, you'll be able to assign the directions correctly since you know that Newtonian gravity is an attractive force. Does this clear it up?
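A numeric sketch may make it concrete: in $ma = G m_1 m_2 / r^2$, the $m$ on the left is the mass of whichever body the force acts on (the values below are standard Earth figures; the 70 kg object is just an example of my own):

```python
G = 6.673e-11   # gravitational constant
m1 = 5.972e24   # kg, the Earth
m2 = 70.0       # kg, an object on its surface
r = 6.371e6     # m, Earth's radius

F = G * m1 * m2 / r**2   # same magnitude of force on BOTH bodies
a2 = F / m2              # acceleration of the object: here m is m2
a1 = F / m1              # acceleration of the Earth: here m is m1
print(F, a2, a1)         # F ~ 687 N, a2 ~ 9.8 m/s^2, a1 ~ 1e-22 m/s^2
```

The same force accelerates each body inversely to its own mass, which is why the Earth's recoil is unmeasurably small.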
{ "domain": "physics.stackexchange", "id": 20371, "tags": "newtonian-mechanics, gravity, forces, newtonian-gravity" }
Can powdered beta-tin be made from raising the temperature of alpha-tin?
Question: When white tin ($\beta$-tin) is cooled to a temperature below $13.2\ \mathrm{^\circ C}$, it converts to the allotrope gray tin ($\alpha$-tin), a gray powder. My question is: once you have the powdered gray tin, does just raising the temperature above the point of stability turn it back to white tin, but in a powdered form? Is there a change in appearance of the gray powder when that happens, or is it mostly unnoticeable? Does the conversion to gray tin change the melting point? Answer: $\alpha$-Sn and $\beta$-Sn are the two solid allotropes of Sn. As you note, below $13\ \mathrm{^\circ C}$ the stable phase is $\alpha$-Sn, which has a diamond cubic crystal structure (like diamond, Si, and Ge) and is a semi-metal. Above $13\ \mathrm{^\circ C}$, the thermodynamically stable phase is $\beta$-Sn, a body-centered tetragonal crystal. So, cycling back and forth around $13\ \mathrm{^\circ C}$ varies the thermodynamically stable phase back and forth, with the only question being the kinetics of the transformation. The video in the comment by @JasonPatterson shows that the transformation indeed takes place reasonably quickly (unlike diamond to graphite for carbon). As for melting, if you rapidly heated $\alpha$-Sn and avoided the phase transformation, you would find that the melting temperature would be lower. Using the SGTE thermodynamic data (A.T. Dinsdale, CALPHAD 15(4) 317-425 (1991)), one finds that the $\alpha$-Sn -> liquid phase transition would occur at about 430.7 K, ~75 K lower than the standard $\beta$-Sn -> liquid melting point at 505.06 K.
{ "domain": "chemistry.stackexchange", "id": 2820, "tags": "physical-chemistry, metallurgy, melting-point, allotropes" }
Can any one reaction in a cell be at equilibrium?
Question: I know that metabolism as a whole can never be at equilibrium (otherwise the cell is dead!) but I wonder whether a few reactions in the cell could be at chemical equilibrium at a given point of time. Is it possible theoretically? Is there any real example? Answer: I think there are a few principles that we need to consider before answering your question. (a) When a reaction is at equilibrium, the rate of any elementary reaction is exactly balanced by that of the reverse process. This is an important one. The above principle follows from transition-state theory, which holds that the activated state for the reaction in one direction is the same as that for the reverse direction. It also follows from the principle of microscopic reversibility at equilibrium or, more correctly, the principle of detailed balance at equilibrium, which states that “in a system at equilibrium each collision has its exact counterpart in the reverse direction, and that the rate of every chemical process is exactly balanced by that of the reverse process” (my emphasis) [Laidler, 1987, p 130]. It is important to realise that these two points of view are equivalent. To again quote Laidler (p 130), if one is working within the framework of TST “the principle of microscopic reversibility presents nothing new”. To put it bluntly: a system where any elementary reaction, either explicitly or implicitly, is not exactly balanced by the reverse process is not at equilibrium. By this criterion, the actin example alluded to above cannot, even loosely, be considered at equilibrium. This principle has many important consequences, even when a system is not at equilibrium. One is that the product of the ratios of rate constants along any path on a cycle equals the equilibrium constant between the forms (species) connected, and is equal to one around a full cycle (closed loop) (see Cornish-Bowden, 2004, p 104). [See Addendum for an example of where this consequence was not adhered to.] 
(b) When a species is in a steady state, the rate of formation equals the rate of breakdown. The steady-state concentration may differ markedly from the equilibrium concentration. (c) As @WYSIWYG has pointed out, an enzyme (catalyst) does not change the equilibrium constant for the uncatalyzed reaction. (d) When we are dealing with equilibria, we are usually interested in Gibbs free energy changes. For example (Silbey & Alberty, 2001, p 277): $$\Delta G^{\circ\prime} = -RT \ln K'$$ (In the above equation $K'$ is the apparent equilibrium constant, that is, the equilibrium constant at specified pH, and $\Delta G^{\circ\prime}$ is the standard transformed Gibbs free energy of a biochemical reaction. More on this distinction, if you are interested, in the Silbey and Alberty reference quoted above.) The important point is that there is a relatively simple relationship between Gibbs free energy and the equilibrium constant. The Gibbs free energy, of course, gives a measure of the spontaneity of a reaction and also its capacity to do work. When the transformed Gibbs free energy of a reaction ($\Delta G'$) is zero, the reaction (at specified pH) is at equilibrium and cannot perform useful work, and reactions with negative $\Delta G'$ may be considered ‘spontaneous’. It is also important to realize that equilibrium is a dynamic state: chemical bonds are continuously being broken and energy is continuously being redistributed (with no loss of energy). [I am dealing here with the transformed free energy, $\Delta G'$, that is, the free energy change at specified pH, rather than $\Delta G$ (the pH-independent value), as this is the most useful when considering a biochemical reaction. However, similar conclusions apply to $\Delta G$.] Having got those out of the way, I can now attempt to answer your question (slightly rephrased). Can a reaction in a metabolic pathway be at equilibrium? To be extremely pedantic, if there is a flux through the pathway (net conversion of first substrate to end product) then the answer is no (Newsholme & Start, 1973, p 11). 
That is, if there is a flux through the pathway, ΔG' cannot be exactly zero for any individual reaction. However, reactions in a metabolic pathway may be very close to equilibrium (Newsholme & Start, 1973, chapter 1). Let’s (once again) rephrase your question. Are there any examples of reactions in metabolic pathways that are close to equilibrium, and how can we determine this? To again quote Newsholme & Start (1973, p11) “In a series of reactions that constitute a metabolic pathway, a few may be displaced far from equilibrium, whereas the majority of reactions may be close to equilibrium”. So how can this be determined? One way would be to measure the ratio of products to substrates (or the ratio of product to substrate pairs) in the cell, and compare this with the equilibrium constant. Note that it is only the ratio of substrate/product pairs we are interested in, not the absolute concentrations. We might be interested in the NAD+/NADH ratio in the cell, for example. That is, we measure the mass action ratio and compare this with the equilibrium constant. Such measurements are fraught with difficulties, but let’s agree that they can be made. We could rapidly freeze the tissue sample to -190°C (using liquid nitrogen), for example, and then measure the ratio of metabolites. Finally, it should be pointed out that comparison of mass-action ratio with the equilibrium constant is not the only way of deducing that a reaction is near equilibrium, and agreement between alternative methods is highly desirable before any firm conclusions are drawn. Let’s consider glycolysis as an example. It is generally agreed that the reactions catalyzed by phosphoglucoisomerase, phosphoglycerate mutase and enolase are all close to equilibrium: the mass action ratios and the equilibrium constants are about the same (see Newsholme & Start, 1973, p 98). 
It is also generally agreed that the reactions catalyzed by phosphofructokinase and pyruvate kinase are far from equilibrium (see Newsholme & Start, 1973, p 98). What is the rationale? In general, it is control reactions catalyzed by regulatory enzymes that are displaced from equilibrium. (A key property of a regulatory enzyme is that the activity is controlled by factors other than substrate concentration). The enzyme phosphofructokinase is a good example. This enzyme plays a key role in the regulation of glycolysis. Measurement of the mass-action ratio for this enzyme gives a value of 0.029, where the equilibrium constant for the reaction is about 1000 (see Newsholme & Start, 1973, p 31). It is clear that physiologically this enzyme catalyzes a reaction that is far from equilibrium. In conclusion, many reactions in metabolic pathways may be close to equilibrium, but regulatory enzymes almost always catalyze reactions that are displaced from equilibrium. Finally, if anyone wishes to provide more recent references or to otherwise improve the answer, feel free to edit. Addendum For a well-known controversy in the enzyme kinetics field where the requirement that the product of the ratios of rate constants around a closed loop equal the equilibrium constant for the cycle was not adhered to, see Selwyn (1993) and Topham & Brocklehurst (1992). These authors were criticizing the work of Varon et al (1992), who made a bit of a hash of things. If you are interested in this area, you may wish to read up on this controversy, which is very informative. (Selwyn’s paper is a great start). [All the above papers are freely available in PubMed Central]. References Cornish-Bowden, A. (2004) Fundamentals of Enzyme Kinetics. 3rd Edn. Portland Press. Laidler, K. J. (1987) Chemical Kinetics. 3rd Edn. Harper & Row. Newsholme, E. A. & Start, C. (1973) Regulation in Metabolism. John Wiley & Sons. Silbey, R. J. & Alberty, R. A. (2001) Physical Chemistry. 3rd Edn. John Wiley. 
Eric Arthur Newsholme (1935–2011) Obituary
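The comparison of mass-action ratio with equilibrium constant described above can be sketched numerically using the relation $\Delta G' = RT \ln(\Gamma/K')$, where $\Gamma$ is the mass-action ratio. The phosphofructokinase figures ($\Gamma \approx 0.029$, $K' \approx 1000$) are taken from the answer; the temperature and everything else below are generic illustrative assumptions:

```python
import math

R = 8.314      # gas constant, J / (mol K)
T = 310.0      # assumed: roughly physiological temperature, K

def delta_g_prime(K_eq, mass_action_ratio):
    """Transformed free-energy change given the apparent equilibrium
    constant K' and the measured mass-action ratio (Gamma).
    When Gamma == K' the reaction is at equilibrium (dG' == 0)."""
    return R * T * math.log(mass_action_ratio / K_eq)

# Phosphofructokinase figures quoted in the answer:
dG = delta_g_prime(K_eq=1000.0, mass_action_ratio=0.029)
print(dG / 1000.0)   # strongly negative (tens of kJ/mol): far from equilibrium

# By contrast, a reaction whose mass-action ratio is close to K'
# gives a dG' near zero, i.e. near equilibrium:
print(delta_g_prime(K_eq=0.4, mass_action_ratio=0.39))
```

This makes the qualitative point quantitative: the sign and size of $\Delta G'$ measure how far the measured ratio sits from the equilibrium constant.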
{ "domain": "biology.stackexchange", "id": 1358, "tags": "biochemistry, bioenergetics" }
Roulette game in C++
Question: I'm kind of new to C++ and was just wondering if anyone could give me tips on how I could make my code more efficient (I added some comments to the code to help you understand it better). Is it a good idea to use switch cases? Is there a way I can use more functions/arrays/pointers? Source code Here is some of the code:

case 1:
    //Asks for number to place bet on
    cout << endl << "Please choose a number to place your bet on: ";
    cin >> betNumber;
    //Checks if number is valid (between 1 and 36)
    while (betNumber < 1 || betNumber > 36) {
        cout << endl << "You must choose a valid number between 1 and 36, inclusive!" << endl;
        cout << "Please choose a number to place your bet on: ";
        cin >> betNumber;
    }
    //Asks for amount to bet on that number
    cout << endl << "How much would you like to bet on the number " << betNumber << "? $";
    cin >> betAmount;
    //Checks if minimum amount is $1 and if the player has enough money in their account
    while (betAmount < 1 || betAmount > bankAccount) {
        cout << endl << "You have $" << bankAccount << " in your bank account: $";
        cin >> betAmount;
    }
    //Seeds random number to the generator
    srand(time(0));
    //Generates a random number
    randomNumber = 1 + (rand() % 36);
    cout << endl << "The ball landed on the number " << randomNumber << ".";
    //Checks if player won or lost their bet
    if (betNumber == randomNumber) {
        bankAccount = win(betAmount, betOdds, bankAccount);
        cout << endl << "Congratulations, you won! You now have $" << bankAccount << " in your account." << endl;
    } else {
        bankAccount = lose(betAmount, bankAccount);
        cout << endl << "Bad luck, you lost! You now have $" << bankAccount << " in your account." << endl;
    }
    break;

Answer: Since there aren't any loops in your code that aren't waiting for the user to provide some input, it's probably way premature to talk about efficiency. Is your code measurably slow? Does it leave you hanging? If not, unless you have an unusual need, save questions on efficiency for later. 
So let's talk about some more important things: readability and maintainability. Here are some tips on ways to improve the readability and maintainability of your code. These aren't hard rules that you should never break (especially the one on comments); they are guidelines on ways to make your life easier that you should learn to bend when the guideline makes things awkward instead. I hope this helps, even though it may be a lot to take in all at once. Feel free to ask follow-up questions and get other opinions. Avoid Redundancy Redundancy can show up in many forms. One example of it is in your declaration of arrays, using code like int betType[11] = {35, 17, 8, 11, 5, 2, 1, 1, 2, 1, 1};. Unless there's something very special about the number 11, there's no reason to call it out. Instead just say int betType[] = {35, 17, 8, 11, 5, 2, 1, 1, 2, 1, 1}; which will automatically determine the size of the array for you. When you later check your bounds with while (betChoice < 1 || betChoice > 11) {, you can instead use a calculated size (_countof(betType) or sizeof(betType)/sizeof(betType[0])), or even use a std::vector<int> instead of an int[], and check against the vector's size(). This will help you avoid magic numbers that don't mean much later. After all, if someone asks you what's special about 11, would you think it's the number of available bet types? But if they asked what betTypes.size() meant, it would be easy to answer. Another way redundancy shows up is in large blocks of repeating code. For instance, case 1 and case 2 have almost the same code. In fact I had to read it a couple times to find the part that was different. Sometimes this can be best handled by refactoring similar parts of code into functions, and passing parameters to them that control how they differ. Sometimes it's better just to extract the parts that are identical into simpler functions, and use them. I'll touch on this more below, but I certainly don't have the answer. 
Avoid Obfuscation In the code commented Displays a large dollar sign, there are a lot of casts from int to char so that cout prints the value as a character. But the characters in question are not that unusual. Just use the actual character you want to show, for example replacing

cout << endl << " " << (char)36 << (char)36 << (char)36 << (char)36 << (char)36;

with

cout << endl << " $$$$$"

This will not only be easier to type or update, it will be easier to read. Avoid Comments This recommendation is somewhat controversial, but it begins to target your question about functions. Instead of commenting what a line of code does, comment how a block of code does something unusual. When you're first starting out, everything seems unusual, but eventually you will see patterns and only need to comment on things that are not common patterns. But then, instead of commenting what a block of code does, give it a name instead by putting it in a function. For example, you have several cases where you ask how much the user wants to bet on a number, then loop until they enter a valid number. You could extract this loop into a helper function like this:

int getBetAmount(int bankAccount)
{
    int betAmount;
    cin >> betAmount;
    while (betAmount < 1 || betAmount > bankAccount) {
        cout << endl << "You have $" << bankAccount << " in your bank account: $";
        cin >> betAmount;
    }
    return betAmount;
}

int _tmain()
{
    : : :
    case 1:
        : : :
        cout << endl << "How much would you like to bet on the number" << betNumber << "? $";
        betAmount = getBetAmount(bankAccount);
        : : :
    : : :
    case 2:
        : : :
        cout << endl << "How much would you like to bet on the numbers" << betNumber << " and " << betNumber + 3 << "? $";
        betAmount = getBetAmount(bankAccount);
        : : :
}

Find some other code that doesn't change much and extract that into functions as well. 
For example, the code commented Checks if player won or lost their bet, I see creating a function you'd call like this:

case 1:
    : : :
    bankAccount = awardWinnings(betNumber == randomNumber, betAmount, betOdds, bankAccount);
    break;
case 2:
    : : :
    bankAccount = awardWinnings(betNumber == randomNumber || betNumber + 3 == randomNumber, betAmount, betOdds, bankAccount);
    break;

After you make these changes, ideally the parts that are different will start to stand out, and the parts that are the same will have good names that tell you what they do even if they don't have a comment. And then you can more easily avoid incorrect comments like case 2's Check if number is valid (between 1 and 36) that actually checks for 33. You can also avoid comments by naming constants. Instead of starting with int bankAccount = 500 and then 500 lines later referencing 500 to figure out your overall winnings, perhaps declare const int StartingBankAccount = 500; and use the name instead of the number in both places. If you decide to change the initial account wealth, this also helps ensure your ending summary remains correct. Avoid Bad Dice While this is a toy program, and a person is unlikely to play long enough for it to matter, rand() % max is a flawed approach to generating random numbers. It's flawed in ways too subtle for me to explain (I understand it, but not well enough to explain it). However Stephan T. Lavavej knows it much better and explains it in a video called rand() Considered Harmful; watch it and use the approach he recommends if you want a more uniformly distributed random number.
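The bias behind "rand() % max considered harmful" can be seen without any randomness at all. If a generator produces the values 0..RAND_MAX uniformly and you reduce them with % 36, the residues cannot all be equally likely unless RAND_MAX+1 is a multiple of 36. A quick sketch (Python used purely for the counting; the RAND_MAX of 32767 is an assumption, the minimum the C standard guarantees):

```python
# Count how often each residue 0..35 appears when the values
# 0..RAND_MAX are reduced with % 36.  Because 32768 = 36 * 910 + 8,
# the residues 0..7 each occur one extra time.
RAND_MAX = 32767          # assumed: minimum guaranteed by the C standard
counts = [0] * 36
for v in range(RAND_MAX + 1):
    counts[v % 36] += 1

print(min(counts), max(counts))   # low residues are slightly favoured
```

The imbalance here is tiny, but with a larger modulus relative to the generator's range it becomes substantial, which is why `std::uniform_int_distribution` (or an equivalent rejection scheme) is preferred.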
{ "domain": "codereview.stackexchange", "id": 18921, "tags": "c++, beginner, random, game" }
How can I improve the filename check part?
Question:

/**
 * Upload has a limit of 10 mb
 * @param string $dir $_SERVER['DOCUMENT_ROOT']
 * @param string $path Path where you want to upload the file
 * @param string $filetype jpg|jpeg|gif|png|doc|docx|txt|rtf|pdf|xls|xlsx|ppt|pptx
 * @param array $_FILES An associative array of items uploaded to the current script via the HTTP POST method.
 * @return string ?$fileName:False
 */
function uploadFiles($dir, $path, $filetype)
{
    $dir_base = "{$dir}{$path}";
    $dateadded = date('ynj_Gis-');
    $rEFileTypes = "/^\.($filetype){1}$/i";
    $MAXIMUM_FILESIZE = 10 * 1024 * 1024;

    // UPLOAD IMAGES
    $isFile = is_uploaded_file($_FILES['file']['tmp_name']);
    if ($isFile) {
        $safe_filename = $dateadded . preg_replace(array('/\s+/', '/[^-\.\w]+/'), array('_', ''), trim($_FILES['file']['name']));
        if ($_FILES['file']['size'] <= $MAXIMUM_FILESIZE && preg_match($rEFileTypes, strrchr($safe_filename, '.'))) {
            $isMove = move_uploaded_file($_FILES['file']['tmp_name'], $dir_base . $safe_filename);
        }
    }
    if ($isMove) {
        return $safe_filename;
    } else {
        return false;
    }
}

How can I improve the name check part, or anything else? Answer: I have two minor comments. First, I prefer having my variables defined in all branches, to avoid the PHP Notice: Undefined variable:... error in logs. So, an initialization $isMove = false; would be nice. Btw, maybe $isMoved would be a slightly more accurate name. This part might also be written like this:

if (!is_uploaded_file($_FILES['file']['tmp_name']))
    return false;
$safe_filename = $dateadded . preg_replace(array('/\s+/', '/[^-\.\w]+/'), array('_', ''), trim($_FILES['file']['name']));
if ($_FILES['file']['size'] > $MAXIMUM_FILESIZE)
    return false;
if (!preg_match($rEFileTypes, strrchr($safe_filename, '.')))
    return false;
if (!move_uploaded_file($_FILES['file']['tmp_name'], $dir_base . $safe_filename))
    return false;
return $safe_filename;

Depending on your taste, this might be easier to read. 
My second comment is related to this: $rEFileTypes = "/^\.($filetype){1}$/i"; What is the purpose of {1} (which means "repeat once")? I'd write that regex like this: $rEFileTypes = "/\.($filetype)\$/i"; and I would replace preg_match($rEFileTypes, strrchr($safe_filename, '.')) with preg_match($rEFileTypes, $safe_filename) It seems to me a bit less cluttered this way, and I'd expect such regex to be faster than doing the old one + strrchr, but I have no proof that it really is. However, I don't think this is a kind of code in which speed is very important (upload of a file and moving it around will eat up much more time). Btw, $filetypes might be a bit better name. A bit more important comment is regarding the possibility that, given a long enough filename, your safe one will grow too long when you prepend it with a date, and you will lose the file extension (or a part of it). You might want to address that issue by checking the length and trimming the filename part if needed. Now, for the general approach, there is a question about what you're doing with this. Usually, it is better to keep the original filenames in a database, and to store the uploaded files under a completely generic name (using, for example, an auto_increment primary key from the filenames table). Also, if you don't need filenames at all, you can then just make them generic. Notice that your $dateadded makes it very likely that your filenames are unique, but it doesn't really guarantee it. Depending on your use and the expected number of users, this might be a potential problem (although I wouldn't expect it).
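The length issue mentioned above — a date prefix pushing a long original name past the filesystem limit and cutting off the extension — can be handled by trimming only the base name. A sketch of the idea (Python here purely for illustration; the 255-character limit and the sanitising regex are assumptions in the spirit of the original PHP, not its actual code):

```python
import re
import time

MAX_LEN = 255  # assumed: typical filesystem limit on a filename

def safe_filename(original, max_len=MAX_LEN):
    """Timestamp-prefixed sanitised filename that trims the base
    name — never the extension — when the result would be too long."""
    base, dot, ext = original.strip().rpartition('.')
    if not dot:                            # no extension at all
        base, ext = original.strip(), ''
    base = re.sub(r'[^-\w]+', '_', base)   # same spirit as the PHP regex
    ext = re.sub(r'[^-\w]+', '', ext)
    prefix = time.strftime('%y%m%d_%H%M%S-')
    suffix = ('.' + ext) if ext else ''
    room = max_len - len(prefix) - len(suffix)
    return prefix + base[:room] + suffix

name = safe_filename('my holiday photo' + 'x' * 300 + '.jpeg')
print(len(name), name.endswith('.jpeg'))   # capped length, extension intact
```

Note this still doesn't guarantee uniqueness — as the answer says, a database-assigned generic name is the more robust approach.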
{ "domain": "codereview.stackexchange", "id": 4557, "tags": "php" }
Recursive equation for complexity: T(n) = log(n) * T(log(n)) + n
Question: For analyzing the running time of an algorithm, I'm stuck with this recursive equation: $$ T(n) = \log(n) \cdot T(\log n) + n $$ Obviously this can't be handled with the use of the Master Theorem, so I was wondering if anybody has any ideas for solving this recursive equation. I'm pretty sure that it should be solved with a change in the parameters, like considering $n$ to be $2^m$, but I couldn't manage to find any good fix. Answer: I suppose you are looking for an asymptotic bound. Notice that the recursion depth is $\log^* n$, that is, the number of times you have to apply the logarithm recursively to get below 2. Also, the function is increasing. Using these two facts, you can plug in the recursion once and then you see that you have at most $\log^* n$ summands, each of them at most $\log^2 n$, and then the additional $n$. This sum is dominated by $n$. $$ \begin{align} T(n)&=\log(n) T(\log n)+n=\log n \log\log n \;T(\log\log n)+\log^2(n)+n \\ &\le \log^*n\log^2n+n=O(n) \end{align} $$ Clearly $T(n)\ge n$ and thus $T(n)=\Theta(n)$.
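The $\Theta(n)$ bound can be sanity-checked numerically. A direct evaluation of the recurrence (with an assumed base case $T(n)=1$ for $n<2$ and logarithms taken base 2 — both choices are mine, and neither affects the asymptotics) shows the ratio $T(n)/n$ tending to 1:

```python
import math

def T(n):
    """Direct evaluation of T(n) = log(n) * T(log n) + n.
    Base case T(n) = 1 for n < 2 is an assumption; the recursion
    depth is log*(n), so this terminates after a handful of calls."""
    if n < 2:
        return 1.0
    return math.log2(n) * T(math.log2(n)) + n

for n in (10.0, 1e3, 1e6, 1e12):
    print(n, T(n) / n)   # the ratio approaches 1 as n grows
```

Already at $n = 10^6$ the ratio is within a fraction of a percent of 1, consistent with the linear-term domination argued above.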
{ "domain": "cs.stackexchange", "id": 1826, "tags": "asymptotics, runtime-analysis, recursion, master-theorem" }
Can a perceptron forget?
Question: I would like to build an online web-based machine learning system, where users can continuously add classified samples, and have the model updated online. I would like to use a perceptron or a similar online-learning algorithm. But users may make mistakes and insert irrelevant examples. In that case, I would like to have the option to delete a specific example, without re-training the perceptron on the entire set of examples (which may be very large). Is this possible? Answer: As I understand the process, altering a perceptron without retraining is impossible. The weight adjustments are not only relative to that specific example but also relative to the other training examples that have gone before. Identifying the incorrectly classified instance and removing it from the training set before retraining the model would seem to be the most effective way of correcting the weights. I think it's worth pointing out that in comparison to other machine learning algorithms, perceptrons are relatively resistant to noise and incorrectly classified instances in the training set. If you're encountering a large number of misclassified instances, it would seem more prudent to have better validation at the point you ingest the data prior to training than to come up with some way to correct for misclassified instances after the perceptron has been trained. If that's not possible and you're able to identify the incorrectly classified instances as such, then removing them and retraining would seem the only way to effectively remove the impact of the misclassified instances.
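The remove-and-retrain approach described above is straightforward to express. A minimal sketch of a plain perceptron on toy data (all the data, hyperparameters, and function names here are illustrative, not a specific library API):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (features, label) pairs with label in {-1, +1}.
    Returns (weights, bias) after standard perceptron updates."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:          # misclassified: update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

data = [([2.0, 1.0], 1), ([1.5, 2.0], 1), ([-1.0, -1.5], -1)]
bad = ([2.0, 1.2], -1)                       # a mislabelled example

w, b = train_perceptron(data + [bad])        # model trained with the mistake
print(predict(w, b, [2.0, 1.0]))             # may be wrong because of `bad`

data_fixed = [s for s in data + [bad] if s is not bad]
w2, b2 = train_perceptron(data_fixed)        # "forget" by retraining without it
print(predict(w2, b2, [2.0, 1.0]))           # clean model classifies correctly
```

The retraining pass over the kept samples is the whole "forgetting" mechanism — there is no incremental way to subtract one example's influence from the final weights.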
{ "domain": "cs.stackexchange", "id": 1459, "tags": "machine-learning, online-algorithms" }
Dynamic reconfigure
Question: this is my client.py:

#!/usr/bin/env python
PACKAGE = 'dynamic_tutorials'
import roslib; roslib.load_manifest(PACKAGE)
import rospy
import dynamic_reconfigure.client
from geometry_msgs.msg import Vector3, Twist

def callback(config):
    rospy.loginfo("Config set to {bool_param}".format(**config))

if __name__ == "__main__":
    rospy.init_node("dynamic_client")
    pub = rospy.Publisher('/turtle1/cmd_vel', Twist)
    rospy.wait_for_service("/dynamic_tutorials/set_parameters")
    tw = Twist(Vector3(1, 2, 0), Vector3(0, 0, 1))
    client = dynamic_reconfigure.client.Client("dynamic_tutorials", timeout=30, config_callback=callback)
    r = rospy.Rate(1)
    x = 0
    b = True
    while not rospy.is_shutdown():
        x = x + 1
        if x > 5:
            pub.publish(tw)
            x = 0
        client.update_configuration({"bool_param": b})
        r.sleep()

When I run my server.py and client.py and turtlesim_node and dynamic_reconfigure, the turtle in the turtlesim node goes in a circle every 5 seconds (regardless of whether the bool_param in dynamic reconfigure is ticked or unticked). If I click the bool_param, it becomes unticked immediately. Intended action: When I click a button (bool_param, in this case, is ticked) in rqt_dynamic_reconfigure, it should publish the command and keep publishing the command until I give further instructions. How can I implement this action? I need to control the turtle movement using dynamic_reconfigure. Thank you! Originally posted by Azhar on ROS Answers with karma: 100 on 2017-02-05 Post score: 0 Original comments Comment by gvdhoorn on 2017-02-06: Can you clarify why you want to do this with dynamic_reconfigure? Comment by Azhar on 2017-02-06: Basically I want an on/off button in my dynamic_reconfigure, to publish my customized data. So if the button is "on", it would publish the data. Current problem: the button in the dynamic reconfigure does not have any effect on the turtle when turned "on" or "off". Comment by gvdhoorn on 2017-02-06: That sounds like a strange thing to do with dynamic_reconfigure. 
More something for a service. Why do you feel dynamic_reconfigure should be used for something like this? re: no effect: well, your callback doesn't do anything besides printing the config. It doesn't update anything. Comment by Azhar on 2017-02-06: I need dynamic_reconfigure to be able to change between modes for an autonomous vehicle, which is the ultimate goal. I am using turtlesim to find out how it can be done first. Is it possible for you to guide me on what I should have in my callback, to get the intended function? Any form of guide! :) Comment by gvdhoorn on 2017-02-06: I still think this is not something you'd want to do with dynamic_reconfigure, but see amcl/src/amcl_node.cpp for how AMCL uses dyn rcfg. Mimic that in Python. Answer: THANKS @gvdhoorn! I managed to get what I want using dynamic_reconfigure! General info: The config is a dictionary which stores the True/False value of the parameter. So by playing with it, you can control what you want to do!

#!/usr/bin/env python
PACKAGE = 'dynamic_tutorials'
import roslib; roslib.load_manifest(PACKAGE)
import rospy
import dynamic_reconfigure.client
from geometry_msgs.msg import Vector3, Twist

def callback(config):
    global pub, tw, cw, global_name, client
    print config
    global_name = config['bool_param']
    pub = rospy.Publisher('/turtle1/cmd_vel', Twist)
    tw = Twist(Vector3(1, 2, 0), Vector3(0, 0, 1))
    cw = Twist(Vector3(1, 2, 0), Vector3(0, 0, -1))

if __name__ == "__main__":
    rospy.init_node("dynamic_client")
    rospy.wait_for_service("/dynamic_tutorials/set_parameters")
    client = dynamic_reconfigure.client.Client("dynamic_tutorials", timeout=30, config_callback=callback)
    r = rospy.Rate(1)
    global_name = rospy.get_param("/dynamic_tutorials/bool_param")
    while not rospy.is_shutdown():
        if global_name:
            pub.publish(tw)
        else:
            pub.publish(cw)
        r.sleep()

Originally posted by Azhar with karma: 100 on 2017-02-13 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 26933, "tags": "dynamic-reconfigure" }
How can motion due to uniform velocity influence the distance-time graph of a particle oscillating sinusoidally?
Question: While reading "An Introduction to Mechanics" by Daniel Kleppner and Robert Kolenkow I came across the following problem statement: "The electron is initially at rest, $x_0 = v_0 = 0$, so we have $x(t) = \frac{a_0}{\omega}t - \frac{a_0}{\omega^2}\sin\omega t$. The result is interesting: the second term oscillates and corresponds to the jiggling motion of the electron that we predicted. The first term, however, corresponds to motion with uniform velocity, so in addition to the jiggling motion the electron starts to drift away." I know that the following equation would form a sine wave: $x(t) = -\frac{a_0}{\omega^2}\sin\omega t$. But I can't visualize how the "first term", i.e. $\frac{a_0}{\omega}t$, would influence the sinusoidal graph. What type of drift would it create? If someone could explain it with a graph, I would be very grateful. Answer: Think about how the two terms act as $t$ increases. At $t=0$, both terms are zero as expected. As you begin to increase $t$, the sinusoidal term will begin to oscillate, but the first term will increase linearly. Thus, your sinusoid will shift upwards at a constant rate. Ignoring the proportionality of the constants, a graph might look something like this:
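The same picture can be checked numerically: the sinusoidal term never exceeds $a_0/\omega^2$ in magnitude, so $x(t)$ stays within a fixed band around the straight drift line $\frac{a_0}{\omega}t$. The constants below are arbitrary illustrative values:

```python
import math

a0, w = 1.0, 2.0   # arbitrary illustrative constants

def x(t):
    """x(t) = (a0/w) t - (a0/w^2) sin(w t): uniform drift plus a bounded jiggle."""
    return (a0 / w) * t - (a0 / w**2) * math.sin(w * t)

for t in (0.0, 1.0, 10.0, 100.0):
    drift = (a0 / w) * t
    # the deviation from the drift line is always bounded by a0/w**2
    print(t, x(t), drift, abs(x(t) - drift) <= a0 / w**2)
```

For large $t$ the bounded oscillation becomes negligible next to the growing linear term, which is exactly the "drifting away while jiggling" the book describes.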
{ "domain": "physics.stackexchange", "id": 44614, "tags": "classical-mechanics, electric-fields" }
Simple text RPG in Python
Question: I am trying to teach myself to code using Python. The following is the first real program I have written from scratch. I feel that it is messy and in need of improvement, but I am either unsure of what to improve next or don't know how to improve it. I am looking for suggestions about: turning my functions into class methods, implementing the inventory{} list, improving my ranmob function to work with a much larger (list?) of monsters, and possibly turning my while True loop into a gameLoop function. Any other general advice would be appreciated. from random import randint class Dice: def die(num): die=randint(1,num) return die class Character: def __init__(self,name,hp,thaco,ac,inventory,exp): self.name=name self.hp=hp self.thaco=thaco self.ac=ac self.inventory=inventory self.exp=exp class Fighter(Character): def __init__(self): super().__init__(name=input("What is your characters name?"),thaco=20,ac=10, hp=10,inventory={},exp=10) prof = "fighter" maxhp=10 level=1 hd=10 level2=20 class Cleric(Character): def __init__(self): super().__init__(name=input("What is your characters name?"),thaco=20,ac=10, hp=8,inventory={},exp=8) prof= "cleric" maxhp=8 level=1 hd=8 level2=15 class Mage(Character): def __init__(self): super().__init__(name=input("What is your characters name?"),thaco=20,ac=10, hp=4,inventory={},exp=4) prof= "mage" mana=1 maxmana=1 maxhp=4 level=1 hd=4 level2=10 class Goblin(Character): def __init__(self): super().__init__(name="goblin", hp=7,thaco=20, ac=6,inventory={}, exp=7) class Orc(Character): def __init__(self): super().__init__(name="orc", hp=8,thaco=18, ac=6,inventory={}, exp=8) def profession(): print("What is your class?",'\n', " press f for Fighter",'\n', " press c for Cleric",'\n', " press m for Mage") pclass=input(">>>") if pclass =="f": Prof = Fighter() elif pclass=="c": Prof = Cleric() elif pclass == "m": Prof = Mage() else: Prof=Fighter() #profession() return Prof def ranmob(): mob = Goblin() if Dice.die(2)<2 else Orc() return 
mob

    def playerAttack():
        roll=Dice.die(20)
        if roll>=hero.thaco-mob.ac:
            print("You hit")
            if hero.prof=="fighter":
                rollD=Dice.die(10)
            if hero.prof=="cleric":
                rollD=Dice.die(6)
            if hero.prof=="mage":
                rollD=Dice.die(4)
            print("for",rollD,"damage")
            mob.hp-=rollD
            print("the",mob.name,"has",mob.hp,"hp left")
        else:
            print("You miss")

    def monsterAttack():
        roll=Dice.die(20)
        if roll>=mob.thaco-hero.ac:
            print("Monster hit")
            if mob.name=="goblin":
                rollD=Dice.die(4)
            elif mob.name=="orc":
                rollD=Dice.die(6)
            print("for",rollD,"damage")
            hero.hp-=rollD
            print(hero.name,"has",hero.hp,"hp left")
        else:
            print("Monster misses")

    def levelUp():
        while hero.exp>=hero.level2:
            levelGain=False
            hero.level+=1
            levelGain=True
            hero.level2=hero.level2*2
            if levelGain==True:
                hero.maxhp+=Dice.die(hero.hd)
                hero.hp=hero.maxhp
                if hero.prof=="mage":
                    hero.maxmana+=1
                    hero.mana=hero.maxmana
                print("You Gained a level","\n",'hp:',hero.hp,"\n",'level:',hero.level)
                levelGain=False
        while hero.level>=3:
            hero.level-=3
            hero.thaco-=1
            print("thaco:",hero.thaco)

    def commands():
        if hero.prof=="fighter":
            print (" press f to fight",'\n',
                   "press enter to pass")
            command=input("~~~~~~~~~Press a key to Continue.~~~~~~~")
            if command=="f":
                playerAttack()
            if command=="":
                pass
        if hero.prof=="cleric":
            print (" press f to fight",'\n',
                   "press h to heal",'\n',
                   "press enter to pass")
            command=input("~~~~~~~~~Press a key to Continue.~~~~~~~")
            if command=="f":
                playerAttack()
            elif command =="h":
                if hero.hp<hero.maxhp:
                    hero.hp+=Dice.die(8)
                    if hero.hp>hero.maxhp:
                        hero.hp=hero.hp-(hero.hp-hero.maxhp)
                    print("You now have:",hero.hp,"hp")
                else:
                    print("Your hit points are full")
                    commands()
            elif command=="":
                pass
        if hero.prof=="mage":
            print (" press f to fight",'\n',
                   "press s for spells",'\n',
                   "press m to generate mana",'\n',
                   "press enter to pass")
            command=input("~~~~~~~~~Press a key to Continue.~~~~~~~")
            if command=="f":
                playerAttack()
            elif command =="s":
                print("You have",hero.mana,"mana")
                if hero.mana>=1 and hero.mana<3:
                    print("press s for sleep",'\n',
                          "press m for magic missile")
                    command=input(">>>")
                    if command =="s":
                        print("You put the monster to sleep it is easy to kill now")
                        mob.hp-=mob.hp
                        hero.mana-=1
                    if command=="m":
                        if hero.mana<hero.maxmana:
                            hero.mana+=Dice.die(4)
                            if hero.mana>hero.maxmana:
                                hero.mana-=(hero.mana-hero.maxmana)
                        dam =Dice.die(4)*hero.mana
                        mob.hp-=dam
                        print("You use all your mana! and do",dam,"damage!")
                        hero.mana-=hero.mana
                elif hero.mana>=3:
                    print("press s for sleep",'\n',
                          "press m for magic missile",'\n',
                          "press f for fireball")
                    command=input(">>>")
                    if command =="s":
                        print("You put the monster to sleep it is easy to kill now")
                        mob.hp-=mob.hp
                        hero.mana-=1
                    if command=="m":
                        dam=Dice.die(4)*hero.mana
                        mob.hp-=dam
                        print("You use all your mana! and do",dam,"damage!")
                        hero.mana-=hero.mana
                    if command=="f":
                        print("You are temporarily blinded by a feiry flash of light.")
                        dam=0
                        dam+=Dice.die(6)
                        dam+=Dice.die(6)
                        dam+=Dice.die(6)
                        mob.hp-=dam
                        print("You did",dam,"points of damage")
                        hero.mana-=3
                else:
                    print("Your mana is empty")
                    commands()
            elif command =="m":
                if hero.mana<hero.maxmana:
                    hero.mana+=1
                    print("You have",hero.mana,"mana")
                elif hero.mana>=hero.maxmana:
                    print("Your mana is full.")
                    print("You have",hero.mana,"mana")
                    commands()
            elif command=="":
                pass

    mob=ranmob()
    hero=profession()
    print("name hp thaco ac inventory xp",'\n',
          hero.name,hero.hp,hero.thaco,hero.ac,hero.inventory,hero.exp)
    while True:
        if mob.hp<=0:
            print('The',mob.name,'is dead!')
            hero.exp+=mob.exp
            print('hero xp',hero.exp)
            mob=ranmob()
        if hero.hp<=0:
            mob.exp+=hero.exp
            print("mob xp:",mob.exp)
            print(hero.name,'died!')
            #name=input("What is your characters name?")
            hero=profession()
            print("name hp thaco ac inventory xp",'\n',
                  hero.name,hero.hp,hero.thaco,hero.ac,hero.inventory,hero.exp)
        levelUp()
        print("You see",mob.name+",",mob.name,"has",mob.hp,"hp.")
        if hero.hp>0:
            commands()
        if mob.hp>0:
            monsterAttack()

Answer:

    class Dice:
        def die(num):
            die=randint(1,num)
            return die

A few points:

- I would call the method roll, and the class Die;
- It's typical to model a die by setting the number of sides in __init__, then calling roll without any arguments;
- Why bother assigning die?

I would have written:

    class Die:
        """Represents a single die."""

        def __init__(self, sides=6):
            """Set the number of sides (defaults to six)."""
            self.sides = sides

        def roll(self):
            """Roll the die."""
            return random.randint(1, self.sides)

Note the use of docstrings to provide information about the class and its methods. Now e.g. Dice.die(2) becomes Die(2).roll(), which I think is much clearer about what's happening, and you can make a single die:

    four_sided_die = Die(4)

and roll it repeatedly:

    four_sided_die.roll()

I would be inclined to make the player classes separate to the monster classes, so you'd have an inheritance structure like:

                Character
               /         \
          Player         Monster
        /    |    \      /     \
    Fighter Mage Cleric Goblin Orc

This lets you factor out more of the duplication. For example:

    class Player(Character):

        def __init__(self, hp, exp):
            super().__init__(input("What is your character's name? "),
                             20, 10, hp, {}, exp)

    class Fighter(Character):
        # Note that constants should be UPPERCASE_WITH_UNDERSCORES
        HD = 10
        LEVEL = 1  # should this really be a class attribute?
        LEVEL_2 = 20
        MAX_HP = 10
        PROF = "fighter"

        def __init__(self):
            super().__init__(10, 10)

On this function:

    def ranmob():
        mob = Goblin() if Dice.die(2)<2 else Orc()
        return mob

A few points:

- random_mob would be a better name;
- This returns a single enemy, which I'm not sure I'd call a "mob";
- There's no need to use the Dice.

I'd use random.choice for this, and allow a size of mob to be specified:

    ENEMIES = (Goblin, Orc)

    def random_mob(size):
        return [random.choice(ENEMIES)() for _ in range(size)]

this uses a list comprehension to create a list of randomly-selected enemies from the tuple.

There is a vast amount of duplication in commands, and it contains logic (what the Players can do) that should be stored with the Player. For example, if each Player subclass had a dictionary of valid commands (each implemented as an instance method):

    class Fighter(Player):
        ...
        def fight(self):
            super().fight()  # all players can fight
        def cast_spell(self):
            ...
        def generate_mana(self):
            ...
        COMMANDS = {
            'f': ('fight', fight),
            's': ('spells', cast_spell),
            'm': ('generate mana', generate_mana),
        }

Then commands starts:

    # Show the valid commands
    for command, action in hero.COMMANDS.items():
        print('press {} to {}'.format(command, action[0]))
    print('press Enter to skip')

    # Get validated user input
    while True:
        command = input("~~~~~~~~~Press a key to Continue.~~~~~~~")
        if command and command not in hero.COMMANDS:
            print('Not a valid command')
            continue
        break

    # Run the appropriate action
    if command:
        hero.COMMANDS[command][1]()  # call the method

See Asking the user for input until they give a valid response for more on input validation. You can extend this to define the appropriate parameters for each action, etc.
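A minimal, self-contained sketch of that dispatch pattern (the class, command keys, and method names here are illustrative stand-ins, not taken from the game above) might look like:

```python
import io
import contextlib

class Player:
    """Toy player used only to demonstrate the COMMANDS dispatch idea."""

    def fight(self):
        print("You attack!")

    def heal(self):
        print("You heal yourself.")

    # Maps a key to (menu label, function); at this point in the class
    # body, fight and heal are plain functions, not bound methods.
    COMMANDS = {
        'f': ('fight', fight),
        'h': ('heal', heal),
    }

hero = Player()

# Show the valid commands
for key, (label, _) in hero.COMMANDS.items():
    print('press {} to {}'.format(key, label))

# Dispatch a hard-coded command (a stand-in for input()).
# The stored values are plain functions, so pass the instance explicitly.
command = 'f'
if command in hero.COMMANDS:
    hero.COMMANDS[command][1](hero)  # prints "You attack!"
```

One design point worth noting: because the dictionary is built inside the class body, it stores plain functions rather than bound methods, so the instance has to be passed explicitly when dispatching.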
{ "domain": "codereview.stackexchange", "id": 13660, "tags": "python, object-oriented, python-3.x, dice, role-playing-game" }
Is mechanical energy conserved when charges accelerate?
Question: Suppose we have two identical charges $q_1$ and $q_2$ and there is some distance $d$ between them. Imagine they are not allowed to move at first, but suddenly "we let them go". Then, they will start to move away from each other with a non-constant acceleration, due to the time-varying force acting on them. Namely, Coulomb's law tells us that each one experiences a force such that its magnitude is: $$F(t)=\frac{q_1q_2}{4\pi \epsilon_0}\frac{1}{d^2(t)}$$ What can we do to find the speed $v(t)$ at which the charges will be moving at any instant $t$? Well, at first I thought of using the conservation of mechanical energy. The electric potential energy of the system at instant $t$ would be $$U_e(t) = \frac{q_1q_2}{4\pi \epsilon_0 d(t)}$$ And therefore the speed could be found by calculating the amount of that potential energy that is transformed to kinetic energy to conserve the mechanical energy of the system. However, this is where my confusion arose. The charges are accelerating, and therefore there is radiation going on. This means that there is a time-varying electric field that generates a time-varying magnetic field, which generates a time-varying electric field, and so on. These contributions to the electric field are not conservative, as their curl is not zero. This would mean that thinking of electric potential energy makes no sense, as the electric field in this case would not be electrostatic and, therefore, would not be conservative (leading to the idea of an "electric potential" being nonsense). So what's going on? Is mechanical energy conserved? If yes, how can it be possible given that the fields are non-conservative? Answer: Namely, Coulomb's law tells us that each one experiences a force such that its magnitude is: $$F(t)=\frac{q_1q_2}{4\pi \epsilon_0}\frac{1}{d^2(t)}$$ This is exactly wrong! :) As you'd recall, Coulomb's law is a law applicable to an electrostatic situation.
When the charges are allowed to move, we have a patently dynamical situation and one cannot apply Coulomb's law. One would have to use the full Maxwell equations to solve for the electric and magnetic fields produced by one charge at the position of the other charge, and then apply the Lorentz force law to ultimately find the force experienced by each of the charges. One can find the electric and magnetic fields produced by a generically moving charged particle using, for example, the Liénard–Wiechert potential. However, you'd get a pretty coupled system of differential equations, given that each of the charges is moving and would be producing a force on the other particle given by the Liénard–Wiechert potential, which requires the velocity and position of the source at a retarded time. I'd assume you will have to solve it numerically. The charges are accelerating, and therefore there is radiation going on. This is precisely correct. The total energy of the system would obviously be conserved, but a part of that energy would be in the form of radiation which travels off to infinity and thus cannot be thought of as contributing to the potential energy between the two particles. So, yes, one cannot use the conservation of mechanical energy in this situation; however, energy conservation still applies. In other words, $$\int dV \bigg( \frac{1}{2}\epsilon_0 E^2+\frac{1}{2\mu_0}B^2\bigg)+\frac{1}{2}m_1v_1^2+\frac{1}{2}m_2v_2^2$$ would still be conserved (assuming non-relativistic speeds for the particles). Just that a part of that energy in the field will be in the form of radiation. You won't really be able to use this effectively, for you'd need to solve for $E$ and $B$ using the full dynamical machinery of Maxwell's equations to actually compute the integral. Finally, as @ThePhoton has pointed out, you can do an approximate calculation so long as the energy you estimate to be lost in radiation is low compared to the total mechanical energy in the initial state.
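The approximate calculation mentioned at the end can be sketched numerically. The snippet below is a rough sketch with made-up charge and mass values (nothing here comes from the question): it ignores radiation entirely, uses the Coulomb force, and checks that the mechanical energy $2\cdot\frac{1}{2}mv^2 + U_e$ stays constant in that approximation, which is exactly the regime where treating $U_e$ as a potential energy is justified.

```python
import math

# Coulomb-approximation sketch: two equal charges released from rest,
# moving apart along a line, symmetric about the midpoint.
# Valid only while radiation losses are negligible.
k = 8.9875517923e9   # 1/(4*pi*eps0) in SI units
q = 1e-6             # charge of each particle (made-up value)
m = 1e-3             # mass of each particle (made-up value)
d = 0.10             # initial separation in metres
v = 0.0              # speed of each charge, away from the midpoint

def mechanical_energy(d, v):
    # Two kinetic terms plus the Coulomb potential energy U_e = k q^2 / d
    return 2 * (0.5 * m * v**2) + k * q**2 / d

E0 = mechanical_energy(d, v)
dt = 1e-6
for _ in range(200_000):           # integrate for 0.2 s
    a = k * q**2 / (m * d**2)      # acceleration of each charge
    v += a * dt                    # semi-implicit (symplectic) Euler
    d += 2 * v * dt                # separation grows at twice each speed

drift = abs(mechanical_energy(d, v) - E0) / E0
print(d, v, drift)                 # energy drift should be tiny
```

With these values almost all of the potential energy has been converted to kinetic energy by the end of the run, and the relative drift in the conserved quantity stays far below a percent, as expected for a symplectic integrator.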
{ "domain": "physics.stackexchange", "id": 67227, "tags": "electromagnetism, energy, electromagnetic-radiation, energy-conservation" }
Happy Number Program
Question: I wrote this program, to determine if any given number is happy, and would like to know of any improvements that can be made to make it better. I have tested it, but whether or not it works perfectly I have no idea; there might be some numbers that get through, although I doubt it.

    import java.util.HashSet;
    import java.util.Scanner;
    import java.util.Set;

    public class HappyNumber {
        public static void main(String[] args) {
            System.out.print("Please enter a number: ");
            int number = new Scanner(System.in).nextInt(), value;
            Set<Integer> unique = new HashSet<Integer>();
            while (unique.add(number)) {
                value = 0;
                for (char c : String.valueOf(number).toCharArray())
                    value += Math.pow(Integer.parseInt(String.valueOf(c)), 2);
                number = value;
            }
            System.out.println(number == 1 ? "Happy" : "Not Happy");
        }
    }

Answer: Apart from what @RubberDuck said, you can also do a bit less work by not converting from/to String, but iterating over the digits one by one directly. I've also moved the value to where it is initialised; that way its scope is better visible. After making those changes I arrive at the following snippet. isHappy (or isHappyNumber) can be reused, and the main function only does interface stuff.

    import java.util.HashSet;
    import java.util.Scanner;
    import java.util.Set;

    public class HappyNumber {

        public static boolean isHappy(int number) {
            Set<Integer> unique = new HashSet<Integer>();
            while (unique.add(number)) {
                int value = 0;
                while (number > 0) {
                    value += Math.pow(number % 10, 2);
                    number /= 10;
                }
                number = value;
            }
            return number == 1;
        }

        public static void main(String[] args) {
            System.out.print("Please enter a number: ");
            int number = new Scanner(System.in).nextInt();
            System.out.println(isHappy(number) ? "Happy" : "Not Happy");
        }
    }
{ "domain": "codereview.stackexchange", "id": 32308, "tags": "java" }
Could this Franco-Belgian comic book woman really have these 4 diseases at the same time?
Question: I'm doing an archive binge of a paper comic "The Kiekeboes". One of the characters is an elderly woman currently at the doctor, and she says she has the following illnesses: Erythema Exsudativum Multiforme; Necrobiosis Lipoïdica; Phlegmasia Alba Dolens; and Metropathia Haemorrhagica. Now, I googled these, and these are apparently real diseases (which actually makes sense, since she said she found them in her medical dictionary), namely 3 skin diseases and 1 menstruation problem. However, I find it hard to believe an elderly woman could have 3 different skin diseases and a menstrual disorder at the same time, especially since she already stopped menstruating. Is it possible for a patient to have all 4 of these illnesses at the same time? Or do the causes interfere with each other too much for that? I don't want to know how to cure it, I just wonder how much they're talking out of their anuses. Image I just took with my camera: Kiekeboe 21: the Piri-Piri Pills, Page 17, strip 30. Author: Merho. Publisher: J.Hoste NV (currently part of Standaard Uitgeverij). Picture taken by Nate Kerkhofs on 16 September 2014. It is in Dutch, but the names of the illnesses are clearly readable (since they're Latin anyway). The doctor asks "What are you here for, madame? A cold, rheumatism, headache?" right before this. The woman answers "No, doctor, I'm struggling with ... and also ... not to mention ... and a really annoying ...". The doctor then replies "But madame, where did you get all those illnesses?" The woman replies "In my medical encyclopaedia!" Answer: Erythema multiforme (minor) can and does occur in a lot of people; while it is usually self-limited, it can recur, especially when the trigger is an unsuspected food.[1] How common is it? It is very common. Necrobiosis Lipoidica is not uncommon in diabetics, less common but still found in non-diabetics. It can occur at any age, including the eighth decade.
It also shows a sex predilection, being 3 times more common in women than in men.[2] How common is it? It is not rare. Phlegmasia Alba and Cerulea Dolens[3] is serious and associated with deep vein thrombosis, which is an acute event. It is not uncommon, but it's not chronic. (More than 600,000 cases of venous thromboembolism are estimated to occur each year in the United States, all of which can lead to Phlegmasia Alba and Cerulea Dolens.) How common is it? Luckily, not very. If the author is referring to garden-variety thrombophlebitis, though, that's a chronic problem, and not uncommon in the elderly. Thrombophlebitis is chronic and common enough. I'm not going to address Metropathia Haemorrhagica, because, as you have noted, it's found in menstruating women. Diseases generally don't really interfere with each other; some diseases can't co-exist (you can't have hyperthyroidism and hypothyroidism at the same time, for example, or have an active pituitary gland tumor and empty sella syndrome at the same time), but diseases don't interfere with each other exactly. Some diseases have common underlying foundations, so can occur co-morbidly; for example, collagen vascular diseases can manifest with skin, joint, and lung findings, which might not be diagnosed as several manifestations of one disorder if it's not obvious. A better example might be diabetes, where similar underlying pathologies cause, say, heart disease, skin disease, eye problems and kidney disease together. Whatever, the likelihood of a little old lady having all four of your diseases is nil. The first three all the time (assuming not garden-variety thrombophlebitis) - nil. Once, not impossible once, but not chronically. Of the first two - not far-fetched. The first - common as dirt (comes and goes, though). I hope this helped. [1] Erythema Multiforme on Medscape [2] Necrobiosis Lipoidica on Medscape [3] Phlegmasia Alba and Cerulea Dolens on Medscape
{ "domain": "biology.stackexchange", "id": 9411, "tags": "medicine" }
Moving a bar magnet towards a freely hanging coil
Question: "A bar magnet is moved towards a freely hanging coil. Determine if the coil stays stationary or not. If it moves, determine if it moves away or towards the magnet." My hypothesis: From Lenz's law, an e.m.f. will be induced in the coil to oppose the change in magnetic flux due to the relative approach of the bar magnet. The resultant current in the coil produces a force on the coil equal to the Lorentz force due to the approach of the bar magnet, and in the opposite direction. As such, the forces should cancel out, causing the coil to remain stationary. I suspect that there are some flaws in my hypothesis. Are my deductions correct, and is there a better explanation? (Note: I'm not sure if the coil will move or not.) Answer: Obviously, the coil will move. When we are moving the bar magnet toward the coil, a current will flow through the coil in a direction to reduce the rate of change of flux through it. Now you can think of the coil as a tiny magnetic dipole of dipole moment $m = IA$ (or as a small bar magnet, for convenience). And this will move in the presence of another magnet if it is freely hanging. You may find this Wikipedia article useful.
{ "domain": "physics.stackexchange", "id": 14746, "tags": "electromagnetism, newtonian-mechanics" }
Can I get the Hamiltonian in the Heisenberg picture by simply plugging in the position and momentum in the Heisenberg picture?
Question: I am learning the relationship between the Schrödinger and Heisenberg pictures. I have a question about how to compute the time-dependent Hamiltonian with the form $$H=\frac{p^2}{2m}+V(x,t)$$ in the Heisenberg picture. To define notation clearly, I use $H_H(t),p_H(t),x_H(t)$ to denote quantities in the Heisenberg picture, while I use $H,p,x$ to denote the quantities in the Schrödinger picture. By definition $$ H_H(t)=U^\dagger(t)H U(t).$$ My question is whether a straightforward plug-in also gives the correct answer, that is $$ H_H(t)= \frac{p_{H}^2(t)}{2m} + V(x_H(t),t).$$ Answer: My understanding is that it is correct, assuming $V$ is smooth enough to do a Taylor series. My calculation is $$ U^\dagger(t)p^2 U(t) = U^\dagger(t)p U(t) U^\dagger(t) p U(t) =p_{H}^2(t) $$ (inserting $U(t)U^\dagger(t)=1$ between the two factors of $p$) and $$ U^\dagger(t)V(x,t) U(t) =U^\dagger(t) \left ( \sum_{n=0}^\infty \frac{V^{(n)}(0,t)}{n!}x^n \right ) U(t) = \sum_{n=0}^\infty \frac{V^{(n)}(0,t)}{n!} x^n_H(t)=V(x_H(t),t), $$ where $V^{(n)}(0,t)$ is the $n$-th partial derivative with respect to $x$.
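The key step, inserting $U U^\dagger = 1$ between factors so that $U^\dagger x^n U = (U^\dagger x U)^n$, is easy to sanity-check numerically on a finite-dimensional toy model, where matrices stand in for the operators and any unitary plays the role of $U(t)$. The sketch below assumes NumPy is available; the matrices are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_hermitian(n):
    """A random Hermitian matrix, standing in for an observable."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

n = 4
A = random_hermitian(n)          # stands in for x (or p)
H = random_hermitian(n)          # stands in for the Hamiltonian

# Unitary U = exp(-i H), built from the eigendecomposition of H
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T
Ud = U.conj().T

A_H = Ud @ A @ U                 # the "Heisenberg-picture" operator

# U^dagger A^n U equals (U^dagger A U)^n because U U^dagger = 1
lhs = Ud @ np.linalg.matrix_power(A, 3) @ U
rhs = np.linalg.matrix_power(A_H, 3)
print(np.allclose(lhs, rhs))     # True
```

Since every term of the Taylor series transforms this way, the same holds for any function of the operator defined by that series.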
{ "domain": "physics.stackexchange", "id": 94504, "tags": "quantum-mechanics" }
How can I tell that circular motion is a solution for a particle confined to the surface of a cone?
Question: I'm working on a problem where a particle of mass $m$ is confined to the surface of an inverted half cone (and is circling downwards due to gravity), with the cone's half angle $\alpha$. I chose to use cylindrical coordinates $(z,\phi,\rho)$ and I used the Lagrangian to solve this problem. After going through some math, I find the equation of motion for $z$, from which I can write that $$\ddot{z}\sec(\alpha) - \frac{p^2_{\phi}}{m^2z^3\tan^2(\alpha)}+ g = 0$$ Here, $p_{\phi}$ is the angular momentum, which is conserved. Only $z$ depends on time, the other expressions are all constants. At this point, I've been told that it 'can be seen' that one solution to this is given by a circular motion at constant height $z_c$. I am then asked to impose a small perturbation $z = z_c + \eta$, and (keeping only first order terms in $\eta$) find the period with which $z$ will oscillate around $z_c$. Now I am pretty clueless how to do this. First of all, how can you see that there is a circular motion at constant height $z_c$? I mean, I can plug in $z = z_c$ and solve for it, but then I don't see how to find the period of something like that with the small perturbation. All the perturbation does is add some terms, but I don't see how they are time dependent and I certainly don't see how to extract a period from it. Could someone perhaps suggest a 'plan of attack'? If I do simply plug in $z = z_c$ I find that $$z_c=\left(\frac{p^2_{\phi}}{gm^2\tan^2(\alpha)}\right)^{\frac{1}{3}}$$ which at least has the right units. Moreover, plugging $z = z_c + \eta$ into the first equation and keeping only first order terms of $\eta$, I find that $$z = \frac{2z_c}{3} - \frac{p^2_{\phi}}{3z_c^2gm^2\tan^2(\alpha)}$$ But I don't see any period in that. Answer: If you want to see whether a particular function $z(t)$ represents an allowed motion of the particle, all you need to do is check whether it satisfies the equation of motion (the differential equation in your question).
If you plug the function in and you get a mathematical contradiction, it is not a solution. Otherwise, it is. (Sometimes you have to be careful about corner cases, but this is not one of those times.) Perhaps it'll help you to think about it this way: when the problem says it 'can be seen' that one solution to this is given by a circular motion at constant height $z_c$ that means there is some constant $z_c$ such that $z(t) = z_c$ is a solution to the differential equation. Now, in theory, you could systematically test every possible height until you found one that works - that is, plug $z(t) = 1\text{ m}$, $z(t) = 2\text{ m}$, etc. into the differential equation and see if it works out to be equal to zero, but of course the smarter way is to use algebra to identify the only value that might work, which you did. You found that $$z_c=\left(\frac{p^2_{\phi}}{gm^2\tan^2(\alpha)}\right)^{\frac{1}{3}}$$ If you're not clear about how this shows that circular motion is a possible solution, I'd suggest plugging $$z(t) = \left(\frac{p^2_{\phi}}{gm^2\tan^2(\alpha)}\right)^{\frac{1}{3}}$$ into the differential equation and checking for yourself that the left side does simplify to zero when you do this. Now to the part about the perturbation. Forget the cone for a moment and think about a ball rolling along the bottom of a valley of some kind (a ditch or channel or tube). One way this can happen, of course, is that the ball rolls straight down the center. But another allowable motion is that the ball is a little off-center and that it moves slightly side-to-side as it rolls, tracing out some kind of oscillatory pattern centered on the bottom of the valley. This is a common pattern for any sort of physical system in a stable equilibrium centered on some coordinate $x_c$: while one allowable motion is just being stuck at $x_c$, another allowable motion is some kind of small oscillation around $x_c$. 
So instead of solving for $x(t)$ directly, you change variables to $\delta(t) = x(t) - x_c$. It's frequently easier to solve for $\delta(t)$ than it is for $x(t)$, because you know that $\delta(t)$ is centered around zero and thus small, and when you write your formulas in terms of $\delta$ instead of $x$ you can expand them in Taylor series and throw away everything but the largest nontrivial terms. In your case, you're doing this with $z(t) = z_c + \eta(t)$, instead of $x(t) = x_c + \delta(t)$. Different names (and meanings) for the variables, but the procedure is the same. You change variables from $z$ to $\eta$. Then you can expand in a Taylor series in $\eta$ and keep only the lowest-order nontrivial terms in $\eta$. Note that I do say nontrivial because you have to keep some terms which actually do involve $\eta$ in order to solve for it. Usually this means keeping up to order $\eta^1$, but in some cases there is a reason to keep higher-order terms as well - say, if all the $O(\eta^1)$ terms cancel out, or if you want a better approximation.
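As a concrete check of this procedure: linearizing the equation of motion above about $z_c$ (substituting $z = z_c + \eta$ and keeping terms of order $\eta^1$) gives $\ddot\eta = -\frac{3g\cos\alpha}{z_c}\,\eta$, so small oscillations should have period $T = 2\pi\sqrt{z_c/(3g\cos\alpha)}$. The sketch below, with made-up parameter values, integrates the full nonlinear equation of motion with a 1% perturbation and compares the measured period against that prediction:

```python
import math

# Made-up parameter values, purely for illustration
g, m, alpha, p_phi = 9.81, 1.0, math.pi / 6, 2.0
tan2 = math.tan(alpha) ** 2
cosa = math.cos(alpha)

# Circular-orbit height, from p^2 / (m^2 z^3 tan^2 a) = g
z_c = (p_phi**2 / (g * m**2 * tan2)) ** (1 / 3)

def zddot(z):
    # From the equation of motion: zddot * sec(a) = p^2/(m^2 z^3 tan^2 a) - g
    return cosa * (p_phi**2 / (m**2 * z**3 * tan2) - g)

# Integrate with a 1% perturbation (semi-implicit Euler) and time
# successive upward crossings of z = z_c.
z, v, dt, t = 1.01 * z_c, 0.0, 1e-5, 0.0
crossings, prev = [], z - z_c
while len(crossings) < 3:
    v += zddot(z) * dt
    z += v * dt
    t += dt
    cur = z - z_c
    if prev < 0 <= cur:
        crossings.append(t)
    prev = cur

measured_T = crossings[-1] - crossings[-2]
predicted_T = 2 * math.pi * math.sqrt(z_c / (3 * g * cosa))
print(measured_T, predicted_T)   # should agree to within ~1%
```

Because the perturbation is small, the nonlinear corrections to the period are tiny, and the measured and predicted values agree closely; increasing the initial perturbation makes the discrepancy grow, which is a nice way to see where the linearization breaks down.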
{ "domain": "physics.stackexchange", "id": 12460, "tags": "classical-mechanics, lagrangian-formalism" }