anchor
stringlengths
0
150
positive
stringlengths
0
96k
source
dict
Monte Carlo AI in 21 game
Question: I am very interested in the Monte Carlo AI. I tried my best, still, this AI plays very badly. This code "works" in the meaning that it does not crash, but the quality of play is extremely low. Have I completly misunderstood the Monte Carlo AI or is there just a nasty bug in my code preventing it from playing correctly? Can you understand every detail of code? Creating an AI is quite complex a task so I would like to have the better style possible in order to keep the code understandable by everyone. """ TITLE: Monte Carlo AI that plays the 21 game AUTHOR: Caridorc Tergilti LICENSE: Creative Commons 3.0 CURRENT_STATE: The AI is currently very weak, I have no clue why. General explanation: This game is very,very easy, an AI can be disigned in much simpler ways than this. I designed a Monte Carlo AI because I want to learn about Monte Carlo AI-s (You can find out more about Monte Carlo here: https://en.wikipedia.org/wiki/Monte_Carlo_method) and I want to discover if such an AI can actually be powerful in a game. (Yes, there are many articles and papers on the internet that say that Monte Carlo AI works but I want to prove it myself). This programme is also Creative Commons so I hope that many people will enjoy reading this programme to understand Monte Carlo AI better. I will now explain my understanding of Monte Carlo AI. 1) The computer, knowing the rules, must decide what move is better. 2) FOREACH move the computer is going to simulate the move. 3) It will than play a big number of random games starting from the position after he made the move. 4) Each move will be awarded a score equals to: wins / total_games. 5) The move with the highest score will be played. """ import random WELCOME = """ 21 game. The total starts at 0. Each player can chose 1, 2, or 3, the number is than added to the total. If when a player adds his number, he makes the total equals or more than 21, he loses. """ DEPTH = 1000 def play_random_game(state): """ This function plays a random game starting from a state. The opponent moves first. Returns 1 if you win, 0 if you lose """ if state >= 21: return 0 # You previously played a move that made you lose while 1: state += random.randint(0,3) # Opponent if state >= 21: return 1 # YOU WIN state += random.randint(0,3) if state >= 21: return 0 # YOU LOSE def play_n_games(state,number): """ Plays a certain number of random games, all starting from the state. Return the wins/total ratio. """ outcomes = [] for _ in range(number): outcomes.append(play_random_game(state)) return sum(outcomes) / len(outcomes) def AI(total): """ This artificial intelligence uses Monte Carlo to make the best move. """ list_of_outcomes = [] possible_moves = [1,2,3] for move in possible_moves: list_of_outcomes.append(play_n_games(total,DEPTH)) # Chose the move with the better score if max(list_of_outcomes) == list_of_outcomes[0]: return 1 elif max(list_of_outcomes) == list_of_outcomes[1]: return 2 elif max(list_of_outcomes) == list_of_outcomes[2]: return 3 def interface(): """ Allows the user to play the game against the AI. """ total = 0 print(WELCOME) while 1: user_number = int(input("Enter your number: ")) assert (user_number in [1,2,3]),"Each player can chose 1, 2, or 3" total += user_number print("The total is " + str(total)) print() if total >= 21: print("You lost") break total += AI(total) print("After the AI added a number") print("The total is " + str(total)) print() if total >= 21: print("You won") break if __name__ == "__main__": interface() Answer: Your simulation assumes the human is playing randomly. This is not realistic. Consider the case when it's the human's turn at 19. A sensible player will certainly play 1 to force a win, but a random player has only 1/3 probability of doing the same. The simulation teaches the AI that the computer has an advantage in that situation, which is wrong. If the rules of the game were changed so that every move must be made at random, your algorithm should be able to predict each player's probability of winning at any point. However, that would be a different game: a game of chance. There's also a simple bug: you generate random moves by randint(0,3). This should of course be randint(1,3) because 0 is not a valid move.
{ "domain": "codereview.stackexchange", "id": 10254, "tags": "python, game, ai" }
Toxicity of phaeomelanin
Question: I have been reading an article discussing a study by Tiffany Slater and Maria McNamara in which they identified molecular evidence of phaeomelanin in the fossil record. Slater is quoted as saying "scientists still don’t know how – or why – phaeomelanin evolved because it is toxic to animals". I am curious what toxicity is being referred to here, since looking into it has only turned up the fact that phaeomelanin is responsible for red haired pigments in humans as well as reddish coloration in other animals, and the only mentions I've seen of toxicity are either phototoxicity that might occur when exposed to certain kinds of light, or increased susceptibility to skin damage from sunlight. I did find a few studies in the search results whose titles were completely impenetrable to a layman such as myself, so maybe there is more information there that I wasn't able to dig out. Is there a little more context around this statement that phaeomelanin is toxic that I am missing? I'm especially interested in information in layman's terms as I am not deeply familiar with the field of biology, but I am always curious! Answer: The paper in which Slater et al. reported preservation of phaeomelanin does not include the word or root ‘toxic’, so it is difficult to know whether the quotation is correct and, if so, what was actually meant. The Wikipedia entry for melanin states that as well as being responsible for the ‘red hair’ in certain people (who seem to survive this), phaeomelanins are concentrated in the lips, nipples, glans penis and vagina (depending on sex) of the rest of us. This suggests that phaeomelanins cannot be toxic, per se. The article does go on to say: “Exposure of the skin to ultraviolet light increases pheomelanin content, as it does for eumelanin; but rather than absorbing light, phaeomelanin within the hair and skin reflect yellow to red light, which may increase damage from UV radiation exposure.” However this is not the same as saying that phaeomelanin is toxic. Furthermore, the work reported was not concerned with human skin, but the colouration of feathers in the fossils of birds and feathered dinosaurs — presumably an advantageous trait, that outweighed any deleterious effects of UV radiation. This all suggests to me that a complex system for producing different variants of melanins arose in animals long before the appearance of mammals, and that there was differentiation in relation to which cells migrated to or were present in which tissues.
{ "domain": "biology.stackexchange", "id": 12406, "tags": "toxicology, melanin" }
Finding the best category of poker hand
Question: I have 16 methods in Java that return int values. I need to find the one that returns the highest value. I initially implemented this: private int best() { int max = playGame(ONES); max = Math.max(max, playGame(TWOS)); max = Math.max(max, playGame(THREES)); max = Math.max(max, playGame(FOURS)); max = Math.max(max, playGame(FIVES)); max = Math.max(max, playGame(SIXES)); max = Math.max(max, playGame(SEVENS)); max = Math.max(max, playGame(EIGHTS)); max = Math.max(max, playGame(THREE_OF_A_KIND)); max = Math.max(max, playGame(FOUR_OF_A_KIND)); max = Math.max(max, playGame(FULL_HOUSE)); max = Math.max(max, playGame(SMALL_STRAIGHT)); max = Math.max(max, playGame(LARGE_STRAIGHT)); max = Math.max(max, playGame(ALL_DIFFERENT)); max = Math.max(max, playGame(CHANCE)); max = Math.max(max, playGame(ALL_SAME)); return max; } To make this slightly better, I could reduce one comparison by comparing two methods simultaneously. private int best() { int max1 = Math.max(playGame(ONES), playGame(TWOS)); int max2 = Math.max(playGame(THREES), playGame(FOURS)); int max3 = Math.max(playGame(FIVES), playGame(SIXES)); int max4 = Math.max(playGame(SEVENS), playGame(EIGHTS)); int max5 = Math.max(playGame(THREE_OF_A_KIND), playGame(FOUR_OF_A_KIND)); int max6 = Math.max(playGame(FULL_HOUSE), playGame(SMALL_STRAIGHT)); int max7 = Math.max(playGame(LARGE_STRAIGHT), playGame(ALL_DIFFERENT)); int max8 = Math.max(playGame(CHANCE), playGame(ALL_SAME)); int max11 = Math.max(max1, max2); int max12 = Math.max(max3, max4); int max13 = Math.max(max5, max6); int max14 = Math.max(max7, max8); int max21 = Math.max(max11, max12); int max22 = Math.max(max13, max14); int max = Math.max(max21, max22); return max; } I mentioned 16 methods as playGame(CONSTANT) callbacks different methods based on the constant. However, this code is still bad. What could be a better way to achieve this? Answer: I think that it would be best if you could put those constants into a enum so that you can use Enum.values to get them all in an array. If so you can do this: enum Play { TWOS, THREES, FOURS, FIVES, SIXES, SEVENS, EIGHTS, THREE_OF_A_KIND, FOUR_OF_A_KIND, FULL_HOUSE, SMALL_STRAIGHT, LARGE_STRAIGHT, ALL_DIFFERENT, CHANCE, ALL_SAME; } class Main { int playGame(Play play) { // you need to implement this method returning something meaningful. return 0; } int best() { return Stream.of(Play.values()) .mapToInt(this::playGame) .max().getAsInt(); } } If you want to keep those constants as independent static final (Strings?) then you will need to create a list of them explicitly: //... static final String TWOS = "TWOS"; static final String THREES = "THREES"; //... static final List<String> PLAYS = Arrays.asList(TWOS, THREES, ...); //... int best() { return PLAYS.stream() .mapToInt(this::playGame) .max().getAsInt(); }
{ "domain": "codereview.stackexchange", "id": 26662, "tags": "java" }
Why is Proof Checker required in Proof Carrying Code
Question: In the classical PLDI'98 paper by Necula, "The design and implementation of a certifying compiler", the high-level verifier uses: VCGen to generate verification conditions (safety predicates) First-order logic theorem prover to prove the conditions LF proof checker to check the proof from step (2) I am a bit confused by step (3). Why is it required at all? Will just (1) and (2) not suffice? Why don't we just trust the proof generated by a theorem prover? Answer: The purpose of the proof checker is to minimise the trusted computing base. By having a proof checker, neither the compiler nor the theorem prover need to be correct. The paper makes this point on Page 3: Neither the compiler nor the prover need to be correct in order to be guaranteed to detect incorrect compiler output. This is a significant advantage since the VCGen and the proof checker are significantly simpler than the compiler and the prover. A proof checker is just a couple of lines of code, and can be hand-inspected for correctness. In contrast, an automated prover that performs well is extremely complex and unlikely to be correct, although with well-tested and widely used provers, the mistakes will be in edge cases that might not be easy to trigger. Have a look at the 30k LOC C code that make up Lingeling, a state-of-the-art SAT solver to see just how complicated automated theorem provers can be. Without a proof checker, you'd have to prove correct that theorem prover. This is beyond whaty we can economically do in 2015.
{ "domain": "cstheory.stackexchange", "id": 3402, "tags": "proof-theory, automated-theorem-proving, program-verification" }
How would waves (in a fluid) behave without intermolecular attraction?
Question: Water molecules, as many people know, are polarized, and so water molecules tend to have an attractive force between them. But how would the waves in water behave if this attraction no longer was present, but everything else was the same? Answer: There are different factors that make water a media for waves. For transversal waves the viscosity and cohesion of molecules are essential. Even if the water is assumed not to evaporate. Try to imagine pushing a boat forward in a liquid with low viscosity. The propeller will just easily skid without generating any wave or traction. However for P waves a liquid with low or no viscosity would still work as long as it can contract and expand under stress. If you in a crude way look at a differential cube of that liquid omega= (k/m)^1/2. K is the water stiffness index and m is its mass.
{ "domain": "earthscience.stackexchange", "id": 818, "tags": "ocean, water, waves, hypothetical, wave-modeling" }
Deep learning - rule generation
Question: I wanted to know if there is any methodology in Deep/Machine learning, where given a set of input/output values, it can derive rules for the same. Lets say I generate training input and output by $y=x^2$ i/p | o/p 0 0 2 4 . . 1000 1000000 It sort of generate rule like, $y=x*x$ Answer: One way of stating what you are looking for is to find a simple mathematical model to explain your data. One thing about neural networks is that (once they have more than 2 layers, and enough neurons total) they can in theory emulate any function, no matter how complex. This is useful for machine learning as often the function we want to predict is complex and cannot be expressed simply with a few operators. However, it is kind of the opposite of what you want - the neural network behaves like a "black box" and you don't get a simple function out, even if there is one driving the data. You can try to fit a model (any model) to your data using very simple forms of regression, such as linear regression. So if you are reasonably sure that your system is a cubic equation $y= ax^3 + bx^2 +cx +d$ then you could create a table like this: bias | x | x*x | x*x*x | y 1 0 0 0 0 1 2 4 8 4 1 3 9 27 9 1 . . . . 1 100 1000000 10000 10000 and then use a linear regression optimiser (sci-kit learn's SGD optimiser linked). With the above data this should quickly tell you $b=1, a,c,d=0$. But what it won't tell you is whether your model is the best possible or somehow "correct". You can scan for more possible formulae by creating more columns - any function of any combination of inputs (if there is more than one) that could be feasible. However, the more columns you add in this way, the more likely it is you will find an incorrect overfit solution that matches all your data using a clever combination of parameters, but which is not a good general predictor. To address this, you will need to add regularisation - a simple L1 or L2 regularisation of the parameters will do (in the link I gave to scikit-learn, the penalty argument can control this), which will penalise large parameters and help you home in on a simple formula if there is one.
{ "domain": "datascience.stackexchange", "id": 977, "tags": "machine-learning, deep-learning, training" }
urg_node timestamp way too far in the future
Question: I am trying to get the urg_node to publish scans to the default topic (laser) and the default frame_id (laser). It all works fine when I couple the commands: rosrun urg_node urg_node _ip_address:="192.168.0.10" _ip_port:-"10940" and rosrun tf static_transform_publisher 0 0 0 0 0 0 map laser 40 in two terminals. But my original intention is to be able to publish it on the laser topic and have the map to laser transform as NOT static. But this means that the transform now has an additional requirement that they their time stamps should be synchronized. But they are not! The laser messages are being published with the time stamp on a scale of 1454425297 seconds as opposed to the rest of the nodes (including the ones publishing the other transforms) which are on a scale of just a few 100 seconds (corresponding to the time passed since I started the roscore). I would also like to note that I am using gazebo to publish these other transforms. My question is: Obviously these time stamps have to match up and there are only two obvious approaches: Somehow get the lidar to publish laser messages with the timestamp of the gazebo nodes. Have all other nodes publish messages with the timestamp of the LIDAR. Unfortunately, I am unable to do either. Things I tried: rosrun urg_node urg_node _ip_address:="192.168.0.10" _ip_port:-"10940" calibrate_time:="true" rosrun urg_node urg_node _ip_address:="192.168.0.10" _ip_port:-"10940" time_offset:-"-1454425297.0" ` ` Originally posted by Reuben John on ROS Answers with karma: 21 on 2016-02-02 Post score: 0 Answer: Pretty sure you accidentally set "use_sim_time" to true on the parameter server. Once this parameter is set, all ROS nodes started listen to the /clock topic for time. Make sure to kill your roscore, start up everything and then run rosparam get use_sim_time If it returns "true" you have some launch file that sets sim time to true, causing the observed problems. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-02-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by Reuben John on 2016-02-03: Your right! rosparam get use_sim_time returns true! But even after tutorial command: roslaunch gazebo_ros empty_world.launch paused:=true use_sim_time:=false gui:=true throttled:=false headless:=false debug:=true, it shows true. rosparam set /use_sim_time false works but maybe the bad approach Comment by Stefan Kohlbrecher on 2016-02-03: Mixing simulation and real hardware drivers can be tricky, as the former uses sim time per default and the latter might expect to not run sim time. Maybe things work better when switching off the time sync for the urg_node (in case it's been true before).
{ "domain": "robotics.stackexchange", "id": 23631, "tags": "ros, urg-node, timestamp, ros-indigo, transform" }
Age-ing due to Time Dilation
Question: Will a person on top of hill will age faster than one at sea level due to Time Dilation? Answer: It is indeed the case that due to gravitational time dilation a person on top of a hill would age slightly faster than a person at sea level. You don't need to climb hills to measure this effect. Gravitational time dilations due to height differences of a few feet have been measured in the laboratory. The effect is tiny though. In the neighborhood of a large spherically-symmetric massive object such as earth, and compared to being infinitely far away from the object, your aging slows down by a factor $$\sqrt{1\ -\ \frac{v_{esc}^2}{c^2}}$$ here $v_{esc}$ is the velocity needed at your particular position to escape from the gravitational pull of the object, and $c$ the speed of light. When comparing two positions near to each other and close to the object, we can derive a simple equation that describes the relative time dilation. For two positions close to earth with height differences much smaller than the radius of earth, the fractional time dilation is given by $g h / c^2$, where $g$ denotes the local gravitational acceleration and $h$ the height difference. For a hill on earth that peaks 90 m (300 ft) high, and using $g=10\ m/s^2$ and $c=3\ 10^8 m/s$, we find $g h / c^2 \approx 10^{-14}$. In other words, over a period of 3 years, a person on top of the hill would age one millionth of a second (one microsecond) more compared to a person at the foot of the hill.
{ "domain": "physics.stackexchange", "id": 8924, "tags": "general-relativity, time, popular-science" }
Function caller (to multiple receivers) interface
Question: This class is designed to call a similar function of multiple objects of multiple classes using a single interface (it is not for calling functions that return a value). A potential usage will be sending same data to multiple writers (log writer, HTML generator and styled printer to console). Tested with Python 3.4.2 I've also included a test/example in the below code: """ caller module: Contains Caller class - call multiple other classes or modules with similar function definitions, (it is not for calling functions that returns a value) Author : Bhathiya Perera """ class Caller(): """call other classes or modules with similar function definitions""" def __init__(self, *receivers): """Initialize Parameters : receivers - va-arg receivers (can be objects, modules, another caller , ....) """ self._names = [] self._receivers = receivers def __getattr__(self, name): """Get attribute of a given name This will return 'self' therefore it can be called later """ self._names.append(name) return self def __call__(self, *args, **kw): """This class is callable with any arguments or key-value arguments It will then be posted to all receivers """ if len(self._names) == 0: raise Exception("Cannot call") method_name = self._names.pop() for receiver in self._receivers: method = getattr(receiver, method_name) method(*args, **kw) return self if __name__ == "__main__": # ------------------------------------------------- # Test its usage class Receiver1(): def a(self, arg): print ("Receiver1 - a", arg) def b(self, arg): print ("Receiver1 - b", arg) class Receiver2(): def a(self, arg): print ("Receiver2 - a", arg) def b(self, arg): print ("Receiver2 - b", arg) class Receiver3(): def a(self, arg): print ("Receiver3 - a", arg) def b(self, arg): print ("Receiver3 - b", arg) c = Caller(Receiver3()) d = Caller(Receiver1(), Receiver2(), c) d.a("hello a") d.b("hello b") print ("-----") d.a.b.a.b.a.b.a("a")("b")("c")("d")("e")("f")("g") If it was executed as a module it will print: Receiver1 - a hello a Receiver2 - a hello a Receiver3 - a hello a Receiver1 - b hello b Receiver2 - b hello b Receiver3 - b hello b ----- Receiver1 - a a Receiver2 - a a Receiver3 - a a Receiver1 - b b Receiver2 - b b Receiver3 - b b Receiver1 - a c Receiver2 - a c Receiver3 - a c Receiver1 - b d Receiver2 - b d Receiver3 - b d Receiver1 - a e Receiver2 - a e Receiver3 - a e Receiver1 - b f Receiver2 - b f Receiver3 - b f Receiver1 - a g Receiver2 - a g Receiver3 - a g Review for Python conventions and anything else. Answer: If I understand correctly, the purpose of Caller seems to be to call a function on multiple objects. It works like this: Construct a Caller x by passing 1 or more objects in the constructor Call any method m on x, and it will be dispatched to method m of all objects Right? The docstring doesn't explain this very well. CallDispatcher might be a better name. My impression is that you're discovering powerful features of Python, and you're trying to use them because you can, not because you have a concrete purpose. Sort of like an exercise, not for real-life use. It seems to me that __getattr__ is seriously abused. I don't have experience overriding this method, but I would guess it's designed to implement getters dynamically. First of all, adding getters dynamically seems like a hack that should be used with extreme care, including a good justification that it's the best option. Secondly, instead of behaving as a getter, this implementation mutates the object and returns self. I see you did that to make chaining and currying possible, it's interesting, but it seems a misuse of the language. Another thing I don't like about the approach is the heavy dependence on duck typing. The objects that can be used with Caller don't have to follow a well-defined interface, they can be anything. The user just has to make sure that when they call a .hello function on a Caller object, all the objects inside have a .hello function defined. It's great that Python let's us do this kind of thing, but it doesn't mean that we should. I prefer to have well-defined and well-documented interfaces, with a list of legitimate methods that I'm allowed to call. Python conventions Instead of: class Caller(): The class should be declared as: class Caller: Instead of: if len(self._names) == 0: The Pythonic way: if not self._names: Instead of: print ("Receiver3 - b", arg) There should be no space before the opening paren: print("Receiver3 - b", arg)
{ "domain": "codereview.stackexchange", "id": 11270, "tags": "python, beginner, python-3.x, reflection, duck-typing" }
Is there any software package for quantum chemistry that includes CAMB3LYP?
Question: I want to do calculations on systems with photoinduced electron transfer, and I've read that the Coloumb Attenuating Method is a modification of the functionals that makes calculations including long-range electron transfer more accurate. Unfortunately I can't seem to find CAM in Gaussian. Is there some other software I should look at? Answer: CAM-B3LYP is present in Gaussian, Q-Chem, GAMESS, NWChem, ORCA, DALTON, DIRAC, and perhaps other major software packages, either as cam-b3lyp or camb3lyp, however keywords are entered. Notably it isn't available in TURBOMOLE as of version 7.1.
{ "domain": "chemistry.stackexchange", "id": 7287, "tags": "quantum-chemistry, computational-chemistry" }
Simulating Turing machines (output included) with circuits
Question: A Turing machine with input alphabet {0,1} computes a partial or total function $f \colon \{0,1\}^* \to \{0,1\}^*$. Is it possible to construct a circuit family $\{C_n\}$ such that for an input $x$ of length $n$, $C_n(x) = f(x)$? I have not seen this kind of circuit family in any of the standard texts, nor have I encountered it after lots of googling. I understand that for decision problems, it is unnecessary to consider such circuit families (which is probably why nobody has bothered to consider them). Or maybe it is just a trivial modification and so is relegated as an exercise to the reader (although I haven't seen such an exercise mentioned anywhere). There are two immediate obstacles, which I see, that such a circuit family would need to overcome: What does $C_n(x)$ do when $f(x)$ diverges? If $|x| = |y|$, but $|f(x)| < |f(y)|$, the output of $C_n(x)$ will contain less bits than $C_n(y)$. How would this work? I personally don't see any method of overcoming these obstacles. Answer: Yes, as long as we allow encoding the the result of $f$ in a basically trivial way, rather than outputting exactly $f(x)$. To get over obstacle (2) (different output lengths for the same input length): for any given input length $n$, there is a maximum output length $m$. $C_n$ will output $2m$ bits. The first $2|f(x)|$ bits output by $C_n(x)$ will be the bits of $f(x)$ doubled, and the remaining $2(m-|f(x)|)$ bits will be the string 01 repeated over and over. For example, if $m = 6$ and for some $x$ of length 10, $f(x)=0010$, then $C_{10}(x) = 00\ 00\ 11\ 00\ 01\ 01$ (spaces added for ease of reading). To get over obstacle (1) ($f(x)$ may not halt): for any given input length $n$, there is a time $t$ such that, for all $x$ of length $n$, either $f(x)$ halts within $t$ steps, or $f(x)$ does not halt ever. If $f(x)$ does not halt, then we make the output of the circuit be a string of 10 repeated over and over again. Of course, such a circuit family can not in general be uniform, since deciding that halting time is in general uncomputable, but the circuit family still exists.
{ "domain": "cstheory.stackexchange", "id": 342, "tags": "cc.complexity-theory, circuit-complexity, turing-machines, circuit-families" }
If massive animals live longer, why do humans with gigantism die younger?
Question: What I mean is humans suffering from gigantism (type 1 neurofibromatosis, Marfan syndrome, X-linked dominant acro-gigantism, et cetera) rarely live more than 5 decades (50 years). The same can be said to wolves: wild ones rarely live more than 9 years, and domesticated wolves (dogs) can live up to 20 years if they are small like chihuahuas, beagles, and toy poodles, and no more than 7 years if they are massive like great Danes, mastiffs, and Rottweilers. But, at the same time, horses can live 3 decades (30 years), but hamsters only live 4 years. Answer: In short many things determine life expectancy and the interplay between different factors will determine an individuals life expectancy and a species' average life expectancy. How life expectancy links to size: Among mammals there is an inverse relation between heart rate and life expectancy (mammals tend to average 7.3+/15.6 x10^8 beats per lifetime which appears to be a symptom of the fact that heart rate is a marker of metabolic rate and so there is a fixed 'metabolic lifetime' for animals (source). This is somewhat substantiate by metabolism variants in C. elegans and the link between metabolism and free radicals and DNA damage (source). Larger animals are thought to have a slower metabolic rate by weight (not overall) due to scaling laws (see Kleiber's law) and how surface area and volume relate and change temperature regulation metabollic requirements. Other factors that change life expectancy: In breeding: a reduction in genetic variation caused by inbreeding (e.g. many dog species though the extent to which they are inbred will depend on the species) can lead to lots of underlying health issues that may reduce the lifespan independent of metabolic rate Stress: Higher stress will reduce life expectancy Nutritional abundance: reducing caloric intake may reduce metabolic rate and extend lifespan, though starvation will decrease lifespan Gigantism: Whilst the life expectancy for a species may be higher in general people with gigantism may live shorter lives than possible because of non-metabolic related reasons. For example many forms of gigantism are linked to hormones which promote cell division and growth which are also linked to forms of cancer.
{ "domain": "biology.stackexchange", "id": 11322, "tags": "human-biology, mammals, lifespan, dogs, rodents" }
error building android core(2)
Question: I asked a question on this http://answers.ros.org/question/33031/error-building-android-core , but after i configuring the sdk path, i came across this kind of problem, i don't what to do then, all files seems to be ok. Could you help me and thank you very much Building > :android_acm_serial:deployLibs > Resolving dependencies ':android_a:android_acm_serial:deployLibs UP-TO-DATE :android_acm_serial:updateProject Updated local.properties Updated file /home/hanbo/my_workspace/android_core/android_acm_serial/proguard-project.txt Building > :android_gingerbread_mr1:deployLibs > Resolving dependencies ':andr:android_gingerbread_mr1:deployLibs UP-TO-DATE :android_gingerbread_mr1:updateProject Error: The project either has no target set or the target is invalid. Please provide a --target to the 'android update' command. :android_gingerbread_mr1:debug Buildfile: build.xml does not exist! Build failed FAILURE: Build failed with an exception. What went wrong: Execution failed for task ':android_gingerbread_mr1:debug'. Command 'ant' finished with (non-zero) exit value 1. Try: Run with --info or --debug option to get more log output. Exception is: org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':android_gingerbread_mr1:debug'. at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:68) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46) at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:34) at org.gradle.api.internal.changedetection.CacheLockHandlingTaskExecuter$1.run(CacheLockHandlingTaskExecuter.java:34) at org.gradle.cache.internal.DefaultCacheAccess$2.create(DefaultCacheAccess.java:200) at org.gradle.cache.internal.DefaultCacheAccess.longRunningOperation(DefaultCacheAccess.java:172) at org.gradle.cache.internal.DefaultCacheAccess.longRunningOperation(DefaultCacheAccess.java:198) at org.gradle.cache.internal.DefaultPersistentDirectoryStore.longRunningOperation(DefaultPersistentDirectoryStore.java:111) at org.gradle.api.internal.changedetection.DefaultTaskArtifactStateCacheAccess.longRunningOperation(DefaultTaskArtifactStateCacheAccess.java:83) at org.gradle.api.internal.changedetection.CacheLockHandlingTaskExecuter.execute(CacheLockHandlingTaskExecuter.java:32) at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:55) at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:57) at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:41) at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:51) at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:52) at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:42) at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure(AbstractTask.java:247) at org.gradle.execution.DefaultTaskGraphExecuter.executeTask(DefaultTaskGraphExecuter.java:192) at org.gradle.execution.DefaultTaskGraphExecuter.doExecute(DefaultTaskGraphExecuter.java:177) at org.gradle.execution.DefaultTaskGraphExecuter.execute(DefaultTaskGraphExecuter.java:83) at org.gradle.execution.SelectedTaskExecutionAction.execute(SelectedTaskExecutionAction.java:36) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61) at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23) at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:67) at org.gradle.api.internal.changedetection.TaskCacheLockHandlingBuildExecuter$1.run(TaskCacheLockHandlingBuildExecuter.java:31) at org.gradle.cache.internal.DefaultCacheAccess$1.create(DefaultCacheAccess.java:111) at org.gradle.cache.internal.DefaultCacheAccess.useCache(DefaultCacheAccess.java:126) at org.gradle.cache.internal.DefaultCacheAccess.useCache(DefaultCacheAccess.java:109) at org.gradle.cache.internal.DefaultPersistentDirectoryStore.useCache(DefaultPersistentDirectoryStore.java:103) at org.gradle.api.internal.changedetection.DefaultTaskArtifactStateCacheAccess.useCache(DefaultTaskArtifactStateCacheAccess.java:79) at org.gradle.api.internal.changedetection.TaskCacheLockHandlingBuildExecuter.execute(TaskCacheLockHandlingBuildExecuter.java:29) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61) at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23) at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:67) at org.gradle.execution.DryRunBuildExecutionAction.execute(DryRunBuildExecutionAction.java:32) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61) at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:54) at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:155) at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:110) at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:78) at org.gradle.launcher.cli.ExecuteBuildAction.run(ExecuteBuildAction.java:45) at org.gradle.launcher.daemon.protocol.Build.run(Build.java:67) at org.gradle.launcher.daemon.protocol.Build.run(Build.java:63) at org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:45) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:34) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:45) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:24) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.ReturnResult.execute(ReturnResult.java:34) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.ForwardClientInput$4.call(ForwardClientInput.java:116) at org.gradle.launcher.daemon.server.exec.ForwardClientInput$4.call(ForwardClientInput.java:114) at org.gradle.util.Swapper.swap(Swapper.java:38) at org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:114) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:60) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:34) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:61) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:34) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy.doBuild(StartBuildOrRespondWithBusy.java:49) at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:34) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.HandleStop.execute(HandleStop.java:36) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.CatchAndForwardDaemonFailure.execute(CatchAndForwardDaemonFailure.java:32) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.HandleClientDisconnectBeforeSendingCommand.execute(HandleClientDisconnectBeforeSendingCommand.java:21) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.StopConnectionAfterExecution.execute(StopConnectionAfterExecution.java:27) at org.gradle.launcher.daemon.server.exec.DaemonCommandExecution.proceed(DaemonCommandExecution.java:122) at org.gradle.launcher.daemon.server.exec.DefaultDaemonCommandExecuter.executeCommand(DefaultDaemonCommandExecuter.java:55) at org.gradle.launcher.daemon.server.Daemon$1$1.run(Daemon.java:123) at org.gradle.messaging.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:66) Caused by: org.gradle.process.internal.ExecException: Command 'ant' finished with (non-zero) exit value 1. at org.gradle.process.internal.DefaultExecHandle$ExecResultImpl.assertNormalExitValue(DefaultExecHandle.java:338) at org.gradle.process.internal.DefaultExecAction.execute(DefaultExecAction.java:39) at org.gradle.api.tasks.Exec.exec(Exec.java:58) at org.gradle.api.internal.BeanDynamicObject$MetaClassAdapter.invokeMethod(BeanDynamicObject.java:196) at org.gradle.api.internal.BeanDynamicObject.invokeMethod(BeanDynamicObject.java:102) at org.gradle.api.internal.CompositeDynamicObject.invokeMethod(CompositeDynamicObject.java:99) at org.gradle.api.tasks.Exec_Decorated.invokeMethod(Unknown Source) at org.gradle.util.ReflectionUtil.invoke(ReflectionUtil.groovy:23) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$4.execute(AnnotationProcessingTaskFactory.java:150) at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$4.execute(AnnotationProcessingTaskFactory.java:145) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:477) at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:466) at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:60) ... 76 more Originally posted by han on ROS Answers with karma: 110 on 2012-05-02 Post score: 0 Answer: hi, i do like this and solve the problem: after you make rosjave_core step 1. cd ~/my_workspace rosws merge http://android.rosjava.googlecode.com/hg/.rosinstall rosws update source setup.bash step 2. before this set your sdk path correctly like this:export PATH=${PATH}:/home/hanbo/android-sdk-linux/tools:/home/hanbo/android-sdk-linux/platform-tools , add it to your .bashrc file roscd android_core ./gradlew debug But attention for this one you may need to install google stack.(http://ros.org/wiki/google, ./gradlew install) and build it. but you should ./gradlew install rosjava_core again. After that you may come across some problems like build.xml missing, then run this-- "android update project --path ./android_tutorial_camera --target android-13" in your terminal for each package that needs build.xml Originally posted by han with karma: 110 on 2012-05-04 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 9219, "tags": "android-core, android" }
Problem in Recommendation for categorical data?
Question: I have been building a recommendation model to recommend certain questions in an interaction platform to users to help each other. I have calculated an affinity score between categories to find which top categories are to be recommended. But each category has questions by users in itself. The amount of the questions increases with every new post a user posts in a certain category. Now how can I choose which of these questions to recommend when I have chosen the category through my affinity score ? Do I make it random ? Do I display the questions which come first in the data base ? Or is there any better alternative ? Answer: Welcome to the site, I'll propose few alternatives below in increasing order of complexities. Simple sorting criteria: Think what the user wants to see and create a score for sorting questions. Ordering of question based on some criteria like number of answers or number of views it has already received. Show top N based on score. Derived sorting criteria: Use a combination of factors like number of answers, freshness of question, popularity of question based on number of views to create a derived score. Show top N based on score. Adding Discovery to (1-2): The options above will penalize new questions and run into typical explore Vs. exploit trade-off in any recommendation system. You can alleviate that by adding a random bump to score of new questions so that they get a chance to participate. Jointly learning categories and questions: You can setup your recommendation algorithm to actually work at question ID level and use categories as side information. Check this worked out example of LightFM library in Python. Coincidentally, it uses stackoverflow questions and categories as examples. The first 3 options are easy to implement but biased. Fourth one will need some data transformations and I'm not sure if your objective is to model directly at question ID level.
{ "domain": "datascience.stackexchange", "id": 3675, "tags": "machine-learning, python, recommender-system" }
FIR design : compute coefficients for the same frequency response at another sampling frequency
Question: I would like to know if there's a standard way to obtain the FIR coefficients for a sampling frequency $F'$ when the coefficients are for a sampling frequency $F$. I would like to implement the filter described in ITU R BS.1770-3 (the oversampling FIR filter, page 18). The filter coefficients are given for a sampling frequency of 48 kHz. What would be the way to derive the coefficients for another sampling frequency? Edit december 28 2012 I wonder if there's a meaning in my request. The described filter is used on a 48 kHz sampling frequency audio signal that has been transformed into a 192 kHz sampling frequency signal by stuffing 3 zeros between each original sample. It is the second step of an oversampling method. The signal $[x_0, x_1, ..., x_n]$ is first transformed into $[x_0, 0,0,0, x_1, 0,0,0, ...,x_n,0,0,0]$. The filter I have mentioned is applied on this zero-stuffed signal. If I understand correctly (which might not be the case), the cut-off frequency of this low pass filter can be expressed relative to the sampling frequency. Given this context, and the desired result which is a sort of evaluation of the reconstruction of the audio signal into the analog domain, wouldn't this relative to sampling frequency cut-off frequency be appropriate for any sampling frequency ? Let's say the cut-off frequency of the filter is $\alpha \times F_s$ ($\alpha$ should be around 0.125 for a 22 kHz analog upper bandwith ?), wouln't this $\alpha$ value be the good one for others sampling frequencies ? If I have an original signal at 8 kHz sampling frequency, a four time oversample would lead to a 32 kHz sampling frequency, and I can suppose that the reconstructed analog signal would have an upper bandwith limit at approximately 4 kHz ? End of Edit december 28 2012 Answer: There is no direct way of converting filter coefficients between two sample rates while maintaining the exact transfer function (at least where it's properly defined). Re-sampling the impulse response can be done but will often result in extra latency, a longer filter, and some change in the frequency response. In this case, however, you can derive the original filter specification from the filter coefficients and than re-design the filter using the same specification at a different sample rate. By visual inspection we can find that the ITU filter is an equiripple filter with pass band edge: 20 kHz stop band edge: 28 kHz pass band ripple: 0.1dB stop band attenuation: 40 dB Let's say you want that filter at 4*44.1kHz instead of 4*48kHz. In Matlab you would do the following: %% match the filter at 44.1 kHz fs = 4*44100; d = fdesign.lowpass('Fp,Fst,Ap,Ast',20000,28000,0.1,40,fs); hd = design(d,'equiripple'); % convert to polyphase h4 = 4*reshape(hd.Numerator',4,12)'; and the coefficients would be 0.0057292 0.001552 -0.0015723 -0.0049864 -0.0056096 -0.0017098 0.0050592 0.0099256 0.0080942 -0.0011323 -0.012505 -0.017267 -0.0093558 0.0088501 0.025828 0.027239 0.0063161 -0.028386 -0.054115 -0.045155 0.011432 0.10497 0.20287 0.26531 0.26531 0.20287 0.10497 0.011432 -0.045155 -0.054115 -0.028386 0.0063161 0.027239 0.025828 0.0088501 -0.0093558 -0.017267 -0.012505 -0.0011323 0.0080942 0.0099256 0.0050592 -0.0017098 -0.0056096 -0.0049864 -0.0015723 0.001552 0.0057292 However, there are a bunch of potential problems: In this case we simply lucked out that the number of coefficients for the new sample rate was 48 as well. In other cases you may have to adjust the stop band frequency to dial in the desired number of coefficients. This particular filter specification has been derived from the original sample rate, i.e. pass band and stop band are symmetrically spaced around the Nyquist frequency of 24 kHz. That is intentional. If your sample rate is different, you need to understand WHY it is different and whether the original specification is still appropriate. In the case of 44.1 kHz you may consider placing pass and stop band symmetrically around 22.050 kHz and maybe also consider lowering the pass band and/or increasing the number of coefficients.
{ "domain": "dsp.stackexchange", "id": 8607, "tags": "filters, finite-impulse-response" }
Checking if a git repository needs to be updated
Question: I made a small Vim distribution that uses vundle. I decided to make a small function that checks if the repository must be updated. """""""""""""""""""""""""""""""""""""""""""""""""" "Update the repository """""""""""""""""""""""""""""""""""""""""""""""""" "A function to update if necessary function! Get_update() "Get the current path and change the directory to update the good "repository let l:current_path = expand("<sfile>:p:h") let l:path = '~/.vim-mahewin-repository' exec 'cd' l:path let l:last_local_commit = system('git rev-parse HEAD') let l:last_remote_comit = system('git ls-remote origin -h refs/heads/master |cut -f1') if l:last_local_commit != l:last_remote_comit echo 'Need to be updated, launches updated' :!git pull `git remote` `git rev-parse --abbrev-ref HEAD` endif exec 'cd' l:current_path endfunction It's not very nice, but I have not found out how to do it better. I'm open to advice. Answer: Your function has the side-effect of switching the directory to the file's path. To be side-effect-free, you should let l:current_path = getcwd() instead. Better yet, use the --git-dir and --work-tree options to git(1) to avoid having to cd at all. You always query the remote named origin, but pull from `git remote`. That inconsistent looks like a bug. I don't see any reason to use --abbrev-ref when the result is not intended for human consumption. Just use the full commit id.
{ "domain": "codereview.stackexchange", "id": 4378, "tags": "shell, git, vimscript" }
Removing unnecessary floats, height, widths
Question: I have coded a webpage and it looks exactly how I want, but I think there could be improvements, possible as unnecessary floats, etc... Could anybody please review my CSS code? Its not difficult or vague, I guess. DEMO: http://jsfiddle.net/dqC8t/2364/ CSS: body{ background: url("http://i.imgur.com/cQhlsYZ.png") repeat-x; } h1{ font-size: 18px; font-weight: bold; color: #444; } h2{ font-size: 16px; font-weight: bold; color: #444; } #wrapper{ width: 980px; margin: 0 auto; } /*************START HEAD CONTENT*************/ #header{ width: 100%; float: left; } #headerLogo{ float: left; margin-top: 20px; margin-left: 20px; margin-right: 70px; } #menu { font-family: 'ProximaNova-Bold'; font-size: 16px; margin-top:40px; } #menu ul { list-style-type: none; } #menu li { display: inline-block; width: 150px; border-bottom-style: solid; border-bottom-width: 4px; margin: -5px ; padding: 0 15px 0 0 ; } .item-1 { border-bottom-color: #0099CC; } .item-2 { border-bottom-color: #FF4444; } .item-3 { border-bottom-color: #669900; } .item-4 { border-bottom-color: #FFBB33; } .item a { text-decoration: none; } #headContent{ float: left; margin-top: 20px } .hnItem{ width: 242px; height: 184px; float: left; margin-right: 4px; } .hnItem img { display: block; } #hnItemLast{ width: 242px; height: 184px; float: left; } #hnItemLast img{ display: block; } .hnTextContainer{ height: 40px; padding: 10px 15px; font-family: 'ProximaNova-Regular'; font-size: 14px; line-height: 21px; color: #c8cbcb; background-image: linear-gradient(#262828,#1c1e1e); } /*************END HEAD CONTENT***************/ /*************START MAIN CONTENT*************/ #mainContent{ width: 980px; height: 800px; float: left; margin-top: 20px; background-color: #FFF; } /*************START NEWS LIST CONTENT********/ #nlContainer{ width:660px; float:left; font-family: 'ProximaNova-Regular'; } #nlContainer p{ font-size:14px; } #nlContainer a{ text-decoration: none; } #nlHeader{ float: left; width:100%; margin-top: 15px; margin-bottom: 5px; margin-left: 20px; } .nlItem{ width: 100%; float: left; margin-top: 10px; margin-left: 20px; } .nlImageContainer{ width: 200px; height: 100px; float: left; padding: 3px; border: 1px solid #e3e3e3; background-color: #efefef; } .nlTextContainer{ float:left; margin-top: 10px; margin-left: 20px; } .rmLink a{ font-size: 14px; color: #069; } .rmLink span{ font-size: 10px; color: #c73f20; } /*************END NEWS LIST CONTENT**********/ /*************START SIDEBAR CONTENT**********/ #sidebar{ width: 300px; height: 400px; float:left; margin-right: 20px; } #scBanner{ margin-top: -10px; } HTML: <div id="wrapper"> <div id="header"> <div id="headerLogo"> <a href="index.html"><img alt="myDr logo" height="50" width="134" src="http://i.imgur.com/w4FYag7.png"></a> </div> <div id="menu"> <ul> <li class="item item-1"><a href="">GEZONDHEID A-Z</a></li> <li class="item item-2"><a href="">MEDICIJNEN</a></li> <li class="item item-3"><a href="">GEZOND LEVEN</a></li> <li class="item item-4"><a href="">NEWS &amp; EXPERTS</a></li> </ul> </div> </div> <div id="headContent"> <div class="hnItem"> <a href="#"></a> <div class="hnImageContainer"><img alt="#" height="124" src="http://i.imgur.com/w4FYag7.png" width="242"></div> <div class="hnTextContainer">Lorem ipsum dolor sit amet, consectetur adipisicing elit</div> </div> <div class="hnItem"> <a href="#"></a><div class="hnImageContainer"><img alt="#" height="124" src="http://i.imgur.com/w4FYag7.png" width="242"></div> <div class="hnTextContainer">Lorem ipsum dolor sit amet, consectetur adipisicing elit</div> </div> <div class="hnItem"> <a href="#"></a><div class="hnImageContainer"><img alt="#" height="124" src="http://i.imgur.com/w4FYag7.png" width="242"></div> <div class="hnTextContainer">Lorem ipsum dolor sit amet, consectetur adipisicing elit</div> </div> <div id="hnItemLast"> <a href="#"></a><div id="hnImageContainer"><img alt="#" height="124" src="http://i.imgur.com/w4FYag7.png" width="242"></div> <div class="hnTextContainer">Lorem ipsum dolor sit amet, consectetur adipisicing elit</div> </div> </div> <div id="mainContent"> <div id="nlContainer"> <div id="nlHeader"> <h1>Laatste nieuws</h1> </div> <div class="nlItem"> <div class="nlImageContainer"> <a href="#"><img alt="#" height="100" src="http://i.imgur.com/w4FYag7.png" width="200"></a> </div> <div class="nlTextContainer"> <h2><a href="#">Lorem ipsum dolor sit amet consectetur</a></h2> <p>Lorem ipsum dolor sit amet consectetur dolor sit<br>amet consectetur amet</p> <p class="rmLink"><a href="#">Lees meer<span>&gt;&gt;</span></a></p> </div> </div> <div class="nlItem"> <div class="nlImageContainer"> <a href="#"><img alt="#" height="100" src="http://i.imgur.com/w4FYag7.png" width="200"></a> </div> <div class="nlTextContainer"> <h2><a href="#">Lorem ipsum dolor sit amet consectetur</a></h2> <p>Lorem ipsum dolor sit amet consectetur dolor sit<br>amet consectetur amet</p> <p class="rmLink"><a href="#">Lees meer<span>&gt;&gt;</span></a></p> </div> </div> <div class="nlItem"> <div class="nlImageContainer"> <a href="#"><img alt="#" height="100" src="http://i.imgur.com/w4FYag7.png" width="200"></a> </div> <div class="nlTextContainer"> <h2><a href="#">Lorem ipsum dolor sit amet consectetur</a></h2> <p>Lorem ipsum dolor sit amet consectetur dolor sit<br>amet consectetur amet</p> <p class="rmLink"><a href="#">Lees meer<span>&gt;&gt;</span></a></p> </div> </div> <div class="nlItem"> <div class="nlImageContainer"> <a href="#"><img alt="#" height="100" src="http://i.imgur.com/w4FYag7.png" width="200"></a> </div> <div class="nlTextContainer"> <h2><a href="#">Lorem ipsum dolor sit amet consectetur</a></h2> <p>Lorem ipsum dolor sit amet consectetur dolor sit<br>amet consectetur amet</p> <p class="rmLink"><a href="#">Lees meer<span>&gt;&gt;</span></a></p> </div> </div> </div> <div id="sidebar"> <div id="scBanner"> <a href="#"><img alt="#" height="140" src="http://i.imgur.com/w4FYag7.png" width="300"></a> </div> </div> </div> </div> Answer: HTML: For prototyping, you should include the # for links to actually trigger link behavior in browsers: <a href="#">Link</a> Instead of placing <span>&gt;&gt;</span> in your HTML, you should use… If you page is not dynamically generated, you may consider using the CSS properties width and height instead of the HTML attributes CSS: …the :after pseudo element: .rmLink:after { /* puts a space and two `&gt;` after rmLink */ content: "\00A0" "\003E" "\003E"; } If you're not using some kind of CSS reset, you don't need to set font-weight: bold; on headings; All browsers should have User Agent Styles for them. As already suggested, you should atleast provide sans-serif in your font-family declaration as a fallback, especially if you're not using web fonts (which you should consider) When you define hex-based color values, keep in mind you can write #fff instead of #ffffff and so on; There are several occurences in your code where you could change that Note: You have a lot of similar margin declarations which can be simplified and stripped down. Think about where you rather should use padding on a parent instead of margin on several child elements. Check out nlItem and nlTextContainer. You're repeating yourself.
{ "domain": "codereview.stackexchange", "id": 5827, "tags": "optimization, html, css" }
Anti-pattern or acceptable way of using Promises?
Question: Whenever I need to use a Promise interface in my NodeJs code and I don't have a promise to start off with, I do this: var someResultPromise = Q.resolve() .then(function () { var something = 'value'; var anotherPromise = callFunctionThatReturnsPromise(something) .then(function (result) { return processing(result); }); return anotherPromise; }); Is this a good way to do it? If no, why not? Answer: The Q.resolve() and the outer then(function() {...}) are not necessary. Your code will simplify to ... var something = 'value'; var someResultPromise = callFunctionThatReturnsPromise(something).then(function (result) { return processing(result); }); ... and in general that is what we would write. However sometimes it's advantageous to start a promise chain with a resolved promise, even if it's not strictly necessary. For example, we might want a chain such as ... var promise = doSomethingAsync() .then(doSomethingAsync) .then(doSomethingAsync) .then(doSomethingAsync); ... but we might choose to write ... var promise = Q.resolve() .then(doSomethingAsync) .then(doSomethingAsync) .then(doSomethingAsync) .then(doSomethingAsync); ... which is functionally identical but easier on the eye because it allows the first doSomethingAsync to be coded the same as all the others; it does not appear to be a special case.
{ "domain": "codereview.stackexchange", "id": 17070, "tags": "javascript, design-patterns, promise" }
ATM (Automated Teller Machine) Terminal Application
Question: I consider myself a beginner in Java. Therefore I want to improve and learn about Java(8) and OOP as much as I can. Feedback about: performance clearn code is very much appreciated. If you could review with regard to these questions, then I'd be very grateful: Is having multiple returns in the function pinIsValidFor a bad pattern? How would you write it differently? Are the classes too long (they nearly got 300 lines of code)? What would be a good size for a Java-class This is "Automated Teller Machine Terminal Application" in Java: You can do typical transaction like deposit, withdraw, and show balance. It persistently saves your transactions in an external file so you can see your changes after restarting the application. It also (tries to) handle all exception cases. GitHub Main file (not much in here) import controller.Atm; public class Main { public static void main(String[] args) { Atm.start(); } } In the model folder I got my AccountCard file: package model; import model.exceptions.DepositNegativeBankTransfer; import model.exceptions.ExceedLimitTransfer; import model.exceptions.OverdrawBankTransfer; import model.exceptions.ZeroBankTransfer; import java.text.DateFormat; import java.text.SimpleDateFormat; import java.util.Date; import java.util.Map; public class AccountCard { private String pin = null; private int amount = 0; DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH"); private Map<String, Integer> transferHistory = new DefaultLinkedHashMap<>(0); public AccountCard(String pin, int amount, String withdrawDate, int timesWithDraw) { this.pin = pin; this.amount = amount; this.transferHistory.put(withdrawDate, timesWithDraw); } public void setPin(String pin) { this.pin = pin; } public String getPin() {return this.pin;} public boolean isPinCorrect(String pinInputByUser) { return this.pin.equals(pinInputByUser); } public int getBalance() { return amount; } public void withdrawAmount(int withdrawAmount) throws OverdrawBankTransfer, ZeroBankTransfer, ExceedLimitTransfer { if (withdrawAmount > this.amount) throw new OverdrawBankTransfer("Withdrawal exceed balance."); if (withdrawAmount == 0) throw new ZeroBankTransfer("Withdrawing of zero amount is forbidden."); Date now = new Date(); if (isTransferExceedLimit(now)) throw new ExceedLimitTransfer("Limit of numbers of withdrawals exceeded"); this.amount -= withdrawAmount; logBankTransfer(); } private boolean isTransferExceedLimit(Date date) { String now = dateFormat.format(date); return transferHistory.get(now) > 3; } private void logBankTransfer() { Date date = new Date(); String now = dateFormat.format(date); int timesWithDrawCurrent = transferHistory.get(now); transferHistory.put(now, ++timesWithDrawCurrent); } public void depositAmount(int depositAmount) throws DepositNegativeBankTransfer, ZeroBankTransfer { if (depositAmount < 0) throw new DepositNegativeBankTransfer("Invalid operation. Deposit amount is negative"); if (depositAmount == 0) throw new ZeroBankTransfer("Depositing of zero amount is forbidden."); this.amount += depositAmount; } public String getWithdrawDate() { String withdrawDate = ""; for(Map.Entry<String, Integer> entry : transferHistory.entrySet()) { withdrawDate = entry.getKey(); } return withdrawDate; } public int getTimesWithDraw() { int timesWithDraw = 0; for (Map.Entry<String, Integer> entry : transferHistory.entrySet()) { timesWithDraw = entry.getValue(); } return timesWithDraw; } } In my controller folder I got my Atm-file: package controller; import controller.helper.Helper; import model.AccountCard; import model.exceptions.*; import java.io.*; import java.text.NumberFormat; import java.util.*; public class Atm { final static String ACCOUNT_PATH = "files/account.txt"; final static String PIN = "PIN"; final static String AMOUNT = "amount"; final static String LASTWITHDRAW = "lastWithdraw"; final static String TIMESWITHDRAW = "timesWithDraw"; final static int MAX_PIN_INPUT_COUNT = 3; final static String IRRECOGNIZABLE_PIN = "XXX"; static boolean atmIsActivated; static boolean cardIsInserted; public static void start() { atmIsActivated = true; while(atmIsActivated) { redrawConsole(); System.out.println("Press ENTER to insert card"); Scanner scanner = new Scanner(System.in); cardIsInserted = (scanner.nextLine() != null); if (cardIsInserted) { AccountCard account = loadCardInformation(); transactionalManipulationOf(account); } } } private static AccountCard loadCardInformation() { String pin = null; int amount = 0; String withdrawDate = null; int timesWithDraw = 0; try ( FileReader fReader = new FileReader(ACCOUNT_PATH); BufferedReader bReader = new BufferedReader(fReader); ){ while(true) { String line = bReader.readLine(); if (line == null) break; if (line.equals(PIN)) { pin = bReader.readLine(); } else if (line.equals(AMOUNT)) { amount = Integer.parseInt(bReader.readLine()); } else if (line.equals(LASTWITHDRAW)) { withdrawDate = bReader.readLine(); } else if (line.equals(TIMESWITHDRAW)) { timesWithDraw = Integer.parseInt(bReader.readLine()); } } } catch (FileNotFoundException e) { try { throw new FileNotFoundException("Can't find card"); } catch (FileNotFoundException e1) { Helper.printErrorMessage(e1); } } catch (IOException e) { try { throw new CardReadFail("Can't read card information"); } catch (CardReadFail cardReadFail) { Helper.printErrorMessage(cardReadFail); } } return new AccountCard(pin, amount, withdrawDate, timesWithDraw); } private static void redrawConsole() { final String ANSI_CLS = "\u001b[2J"; final String ANSI_HOME = "\u001b[H"; System.out.print(ANSI_CLS + ANSI_HOME); System.out.flush(); System.out.println("********* ATM *********"); } private static void transactionalManipulationOf(AccountCard account) { redrawConsole(); while(cardIsInserted) { showMainMenu(); processUserSelection(account); } } private static void processUserSelection(AccountCard account) { String userInput = Helper.getUserInput(); switch (userInput.toLowerCase()) { case "w": withdrawFrom(account); break; case "d": depositTo(account); break; case "s": showBalanceOf(account); break; case "e": returnCardOf(account); break; default: printUnknownOperation(); } } private static void showMainMenu() { System.out.println("=======Main-Menu======="); System.out.println("What do you want to do?"); System.out.println("Withdrawal \t\t(w)"); System.out.println("Deposit \t\t(d)"); System.out.println("Show Account \t(s)"); System.out.println("Exit \t\t\t(e)"); System.out.println("________________________"); } private static void printUnknownOperation() { System.out.println("Unknown operation please try again"); } private static void returnCardOf(AccountCard account) { redrawConsole(); saveCardFor(account); System.out.println("Card returned. Thanks for using our ATM. Have a nice day. :)"); cardIsInserted = false; } private static void saveCardFor(AccountCard account, String pin) { try ( FileWriter fWriter = new FileWriter(ACCOUNT_PATH); ){ fWriter.write(PIN + "\n"); fWriter.write(pin + "\n"); fWriter.write(AMOUNT + "\n"); fWriter.write(account.getBalance() + "\n"); fWriter.write(LASTWITHDRAW + "\n"); fWriter.write(account.getWithdrawDate() + "\n"); fWriter.write(TIMESWITHDRAW + "\n"); fWriter.write(account.getTimesWithDraw() + "\n"); } catch (IOException e) { try { throw new CardSaveFail("Save card failed. Transaction made in this session were not saved!"); } catch (CardSaveFail cardSaveFail) { Helper.printErrorMessage(cardSaveFail); } } } private static void saveCardFor(AccountCard account) { saveCardFor(account, account.getPin()); } private static void showBalanceOf(AccountCard account) { redrawConsole(); Locale currentLocale = Locale.US; int amount = account.getBalance(); NumberFormat currencyFormatter = NumberFormat.getCurrencyInstance(currentLocale); System.out.println("Your current balance is now " + currencyFormatter.format(amount)); } private static void depositTo(AccountCard account) { redrawConsole(); System.out.println("How much do you want to deposit?"); int depositAmount = Helper.getIntegerUserInput(Helper.getUserInput()); if(pinIsValidFor(account)) { try { account.depositAmount(depositAmount); showBalanceOf(account); } catch (DepositNegativeBankTransfer depositNegativeBankTransfer) { Helper.printErrorMessage(depositNegativeBankTransfer); } catch (ZeroBankTransfer zeroBankTransfer) { Helper.printErrorMessage(zeroBankTransfer); } } else { System.out.println("Transaction canceled"); } } private static boolean pinIsValidFor(AccountCard account) { System.out.println("Please enter your PIN to execute transaction"); int trials = 0; while (trials < MAX_PIN_INPUT_COUNT) { String userInput = Helper.getUserInput(); if (account.isPinCorrect(userInput)) return true; if (userInput.equals("c")) return false; printWarningMessage(++trials); } if (trials >= MAX_PIN_INPUT_COUNT) killCardFor(account); return false; } private static void printWarningMessage(int trials) { System.out.print("Incorrect PIN. "); Map<Integer, String> warningMessage = new HashMap<Integer, String>(){ { put(1, "Please try again"); put(2, "You typed in your pin wrong twice already. This is your last try"); put(MAX_PIN_INPUT_COUNT, "You typed in the pin wrong trice. Transaction aborted. \n Your card is now invalid."); } }; System.out.println(warningMessage.get(trials)); } private static void killCardFor(AccountCard account) { saveCardFor(account, IRRECOGNIZABLE_PIN); cardIsInserted = false; System.out.println("Card killed"); } private static void withdrawFrom(AccountCard account) { redrawConsole(); System.out.println("How much do you want to withdraw?"); int depositAmount = Helper.getIntegerUserInput(Helper.getUserInput()); if(pinIsValidFor(account)) { try { account.withdrawAmount(depositAmount); showBalanceOf(account); } catch (ExceedLimitTransfer exceedLimitTransfer) { Helper.printErrorMessage(exceedLimitTransfer); } catch (OverdrawBankTransfer overdrawBankTransfer) { Helper.printErrorMessage(overdrawBankTransfer); } catch (ZeroBankTransfer zeroBankTransfer) { Helper.printErrorMessage(zeroBankTransfer); } } else { System.out.println("Transaction canceled"); } } } In addition to that I created different custom Exception classes to handle exceptions, a custom LinkedHashMap, and a Helper-class for recurring methods. All of them can be inspected here: GitHub Answer: Thanks for sharing the code. Here is what I think about it: Naming Finding good names is the hardest part in programming, so always take your time to think about the names of your identifiers. On the bright side you follow the Java Naming Conventions. But in Java there is an exception for boolean variables and methods: they should start with is or has like instead of atmIsActivated it should be isAtmActivated. Entry class. In general it is a good thing to have a tiny entry class. We usually use it to switch from static context to object context. Your entry class simply delegates to another static method in a different class. Therefore it is almost useless. OOP Doing OOP means that you follow certain principles which are (amongst others): information hiding / encapsulation single responsibility separation of concerns KISS (Keep it simple (and) stupid.) DRY (Don't repeat yourself.) "Tell! Don't ask." Law of demeter ("Don't talk to strangers!") favor polymorphism over branching Interfaces, abstract classes, or inheritance support that principles and should be used as needed. In that sense your code really does implement some of the OO principles. But your entire program is in static context with is bad for a couple of reasons (especially for a "training" program): no polymorphism no dependency injection hard to UnitTest If you would actually use objects instead of just static methods you could make that code simpler and easier to extend. The typical example is the switch in your method processUserSelection: the OOP approach to this is to define a common interface for the operations. interface AtmOperation { void operateOn(AccountCard account); } The we have a separate class for each operation (mot likely in its own class) class WithdrawOperation implements AtmOperation { @Override void operateOn(AccountCard account){ } } class DepositOperation implements AtmOperation { @Override void operateOn(AccountCard account){ } } class BalanceOperation implements AtmOperation { @Override void operateOn(AccountCard account){ } } Some of that operations need resources from the ATM: class BalanceOperation implements AtmOperation { /* pretend we have a class AtmConsole with methods .redraw() and .println() */ private final AtmConsole console; public BalanceOperation( AtmConsole console){ this.console = console; } @Override void operateOn(AccountCard account){ console.redraw(); Locale currentLocale = Locale.US; int amount = account.getBalance(); NumberFormat currencyFormatter = NumberFormat.getCurrencyInstance(currentLocale); console.println("Your current balance is now " + currencyFormatter.format(amount)); } } class InputErrorOperation implements AtmOperation { private final AtmConsole console; public BalanceOperation( AtmConsole console){ this.console = console; } @Override void operateOn(AccountCard account){ console.redraw(); console.println("Unknown operation please try again"); } } There are different ways to access this implementations. The easiest way is to put them in a Map when you initialize the ATM: // ... private final Map<String,AtmOperation> operations = new HashMap<>(); private final AtmConsole console = new AtmConsole(); private final AtmOperation invalidInputOperation = new InvalidInputOperation(console); public static void start() { operations.put("w",new WithdrawOperation(console)); operations.put("d",new DepositOperation(console)); operations.put("s",new BalanceOperation(console)); operations.put("e",new ReturnCardOperation(console)); atmIsActivated = true; // ... then your method processUserSelection would change to: private static void processUserSelection(AccountCard account) { String userInput = Helper.getUserInput(); AtmOperation operation = operations.getOrDefault(userInput.toLowerCase(),invalidInputOperation); operation.operateOn(account); } benefits are: configurations, definition and usage of the operations are clearly separated. you can change the operations without changing the place(s) where they are used. you can put each operation in its own class file you can easily write UnitTests for each operation to verify and document their behavior. I assume the AtmConsole is the console of the Atm where all the transactional information and main menu is displayed. How would the AtmConsole look like? Im my world there could be different console types. Therefore I had an AtmConsole interface: interface AtmConsole{ void redraw(); void println(String message); /* most likely it should also handle the input Because it is the User Interface: */ String aqireInput(); } And This would be my implementation: class AtmConsoleCommandLine implements AtmConsole { private final Scanner input; public AtmConsole(Scanner input){ this.input = input; } @Override public void redraw() { final String ANSI_CLS = "\u001b[2J"; final String ANSI_HOME = "\u001b[H"; System.out.print(ANSI_CLS + ANSI_HOME); System.out.flush(); System.out.println("********* ATM *********"); } @Override public void println(String message){ System.out.println(message); } @Override public String aqireInput(){ return input.nextLine(); } } In the function returnCardOf I set the value of the global variable isCardInserted to false. As an experiences programmer you should know that global variables usually do more harm than good in any language... ;o) How would I do this if the returnCardOf function is in a class of it's own? Would I make the returnCardOf into a reference object and pass it as an argument into returnCardOf? This is a question of separating concerns Although the state "is a card inserted" seems to belong to the ATM itself, There is another place where it could belong: The AccountCard has no meaning outside the ATM (as far as your program is concerned) Therefore it could also hold the information of being rejected. So you could move the global variable to this class and add the methods isInserted() and release(). On the other hand: Although there is a pattern not to use Exception for flow control: I think that releasing the Card from the ATM is an "unhappy" event ultimately finishing the program. So IMHO having the release object throwing an exception that is cough in the loop is a valid option.
{ "domain": "codereview.stackexchange", "id": 25562, "tags": "java" }
A question about the general solution to the infinite square well
Question: I was working through Griffiths' Introduction to Quantum Mechanics, specifically the part about the 1D infinite square well potential (situated between $x = 0$ and $x = a$). To my understanding, this allows for multiple wave functions, each associated with a discrete level of energy: $$\Psi_n(x, t) = \sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right)\,e^{-i\frac{E_n}{\hbar}t}$$ where: $$E_n = \frac{n^2\pi^2\hbar^2}{2a^2m}.$$ This is where it starts to get confusing to me. Does this mean that only one of these wave functions describes the particle state? Or is it that a general solution can be obtained by combining all the above possible wave functions to get the following one: $$\Psi(x, t) = \sqrt{\frac{2}{a}}\sum_{n=1}^{+\infty}C_n\, \sin\left(\frac{n\pi}{a}x\right)\,e^{-i\frac{E_n}{\hbar}t}$$ This is how the textbook says the general solution is determined, but what's confusing me, in this case, are the $C_n$'s (the equation was taken directly from the book). How did they get here, even though they were not present in the first equations? And how are we to determine them? Answer: I have now understood that the general solution is not simply a sum of the stationary states, but rather a LINEAR COMBINATION of the stationary states, thus the presence of the $C_n$'s. I also understood that the way to compute the $C_n$'s is to use the orthogonality of the stationary states to get: $$C_n = \int_0^a\Psi(x,0)\psi_n^*(x)dx$$
{ "domain": "physics.stackexchange", "id": 83136, "tags": "quantum-mechanics, homework-and-exercises, hilbert-space, wavefunction, schroedinger-equation" }
Solution to Project Euler Problem #37 in C++: Truncatable Primes
Question: I was recently looking at some of my old Project Euler solutions, and saw that my implementation for Problem 37 could be improved drastically. The problem reads as follows: The number 3797 has an interesting property. Being prime itself, it is possible to continuously remove digits from left to right, and remain prime at each stage: 3797, 797, 97, and 7. Similarly we can work from right to left: 3797, 379, 37, and 3. Find the sum of the only eleven primes that are both truncatable from left to right and right to left. NOTE: 2, 3, 5, and 7 are not considered to be truncatable primes. Here is my C++ solution to the problem: #include <vector> #include <iostream> #include <cmath> #include <algorithm> #include <cstdlib> #include <string> std::vector<long long int> primesUpto(long long int limit) // Function that implements the Sieve of Eratosthenes { std::vector<bool> primesBoolArray(limit, true); std::vector <long long int> result; primesBoolArray[0] = primesBoolArray[1] = false; long long int sqrtLimit = std::sqrt(limit) + 1; for (size_t i = 0; i < sqrtLimit; ++i) { if (primesBoolArray[i]) { for (size_t j = (2 * i); j < limit; j += i) { primesBoolArray[j] = false; } } } for (size_t i = 0; i < primesBoolArray.size(); ++i) { if (primesBoolArray[i]) { result.push_back(i); } } return result; } bool isTruncPrime(long long int number, const std::vector<long long int>& primeList) { std::string numberString = std::to_string(number); for (int i = 1; i < numberString.size(); ++i) { std::string truncLeft = numberString.substr(0, i); // The truncated prime from the left std::string truncRight = numberString.substr(i, numberString.size() - 1); // The truncated prime from the right if (! ( std::binary_search(primeList.begin(), primeList.end(), std::atol(truncLeft.c_str())) && std::binary_search(primeList.begin(), primeList.end(), std::atol(truncRight.c_str())) ) // If either of the two truncated sides are not prime ) { return false; } } return true; // All truncated parts are prime, so the number is a truncatable prime } int main() { const std::vector<long long int> primesUptoMillion = primesUpto(1'000'000LL); // Represents all the primes up to 1 million int numberTruncatablePrimes = 0; long long int currentNumber = 9; // 2, 3, 5, and 7 are not included in the search for truncatable primes long long int truncatablePrimeSum = 0; while (numberTruncatablePrimes != 11) { if ( std::binary_search(primesUptoMillion.begin(), primesUptoMillion.end(), currentNumber) && // If the number itself is prime isTruncPrime(currentNumber, primesUptoMillion) // If the number is also a truncatable prime ) { ++numberTruncatablePrimes; // Increase amount of truncatable primes truncatablePrimeSum += currentNumber; // Add the number's sum } currentNumber += 2; // Only odd numbers can be prime other than 2, so no need to look at every number } std::cout << truncatablePrimeSum << "\n"; } Here is how I run the code: g++ Problem037.cpp -std=c++14 -O2 Here is code for execution, timing, and output: $ time ./a.out 748317 real 0m0.042s user 0m0.040s sys 0m0.004s Here are the things I want from this Code Review: The function isTruncPrime() does a lot of string manipulation to verify its parameter. Is there a way to improve the algorithm? While the code does run pretty quickly, is there any code that slows down the rest significantly that I could improve on? Are there any better or more idiomatic ways to format/structure my program so it follows C++ style? Answer: Simple things You've misspelt std::size_t in a couple of places. I find I often do that, particularly when I've been writing C as well. Conversion of primes table I don't think there's a good reason to collapse primesBoolArray into a vector when it's perfectly usable as it is (and faster to use than a binary search). In fact, I'd go further, and make it live up to its name (since we always call the function with a constant): // Get an array of bool - containing true at the prime indexes template<std::size_t N> std::array<bool,N> primesUpto() { // Use the Sieve of Eratosthenes std::array<bool,N> primes; primes[0] = primes[1] = false; std::fill(primes.begin()+2, primes.end(), true); static const long long int sqrtLimit = std::sqrt(N) + 1; for (std::size_t i = 2; i < sqrtLimit; ++i) if (primes[i]) for (std::size_t j = i+i; j < N; j += i) primes[j] = false; return primes; } String manipulation We can use division to truncate numbers. Right to left is most obvious: for (; n; n/=10) test(n); Left to right can be done like this: for (std::size_t x = 10; x < n; x*= 10) test(n%x); This becomes template<std::size_t N> bool isTruncPrime(std::size_t n, const std::array<bool,N>& primes) { for (std::size_t x = 10; x < n; x*= 10) if (!primes[n%x]) return false; for (; n; n/=10) if (!primes[n]) return false; return true; } These two changes reduce runtime on my system from 0.022 seconds to 0.007 seconds. We can afford to search the whole problem space (assuming we already know there are no 7-digit truncatable primes): int main() { static const auto primes = primesUpto<1'000'000>(); auto sum = 0ull; // Start at 11, because single-digit numbers aren't truncatable // Advance by 2, because even numbers are non-prime (except 2, but 2<11) for (std::size_t n = 11; n < primes.size(); n += 2) { if (isTruncPrime(n, primes)) sum += n; } std::cout << sum << "\n"; } This form allows us to parallelize the computation to (possibly) eke out a tiny bit more speed: #pragma omp parallel for reduction(+:sum) On my system, this makes it much slower - you'd need a larger problem space to benefit, I think. Reduce the search space Instead of testing all odd numbers, we can reduce the number of calls to isTruncPrime. We know that all such numbers end in a prime, but can't end in 2 or 5 (since they are composite), so the last digit can be only 3 or 7. Similarly, the first digit must be prime, and intermediate digits must be odd (but not 5): static const auto b0 = { 0, 20000, 30000, 50000, 70000 }; static const auto b1 = { 10000, 30000, 70000, 90000 }; static const auto c0 = { 0, 2000, 3000, 5000, 7000 }; static const auto c1 = { 1000, 3000, 7000, 9000 }; static const auto d0 = { 0, 200, 300, 500, 700 }; static const auto d1 = { 100, 300, 700, 900 }; static const auto e0 = { 20, 30, 50, 70 }; static const auto e1 = { 10, 30, 70, 90 }; for (std::size_t a: { 0, 200000, 300000, 500000, 700000 }) { for (std::size_t b: a ? b1 : b0) { for (std::size_t c: a+b ? c1 : c0) { for (std::size_t d: a+b+c ? d1 : d0) { for (std::size_t e: a+b+c+d ? e1 : e0) { for (std::size_t f: { 3, 7 }) { const auto n = a + b + c + d + e + f; if (isTruncPrime(n, primes)) { sum += n; } } } } } } } This knocks another 70% off the runtime on my system. Modified code #include <array> #include <algorithm> #include <cmath> #include <cstddef> // Get an array of bool - containing true at the prime indexes template<std::size_t N> std::array<bool,N> primesUpto() { // Use the Sieve of Eratosthenes std::array<bool,N> primes{}; primes[0] = primes[1] = false; std::fill(primes.begin()+2, primes.end(), true); constexpr std::size_t sqrtLimit = std::sqrt(N) + 1; for (std::size_t i = 2; i < sqrtLimit; ++i) if (primes[i]) for (std::size_t j = i+i; j < N; j += i) primes[j] = false; return primes; } template<std::size_t N> constexpr bool isTruncPrime(std::size_t n, const std::array<bool,N>& primes) { for (std::size_t x = 10; x < n; x*= 10) if (!primes[n%x]) return false; for (; n; n/=10) if (!primes[n]) return false; return true; } #include <iostream> int main() { static const auto primes = primesUpto<1'000'000>(); auto sum = 0ull; static const auto b0 = { 0, 20000, 30000, 50000, 70000 }; static const auto b1 = { 10000, 30000, 70000, 90000 }; static const auto c0 = { 0, 2000, 3000, 5000, 7000 }; static const auto c1 = { 1000, 3000, 7000, 9000 }; static const auto d0 = { 0, 200, 300, 500, 700 }; static const auto d1 = { 100, 300, 700, 900 }; static const auto e0 = { 20, 30, 50, 70 }; static const auto e1 = { 10, 30, 70, 90 }; for (std::size_t a: { 0, 200000, 300000, 500000, 700000 }) { for (std::size_t b: a ? b1 : b0) { for (std::size_t c: a+b ? c1 : c0) { for (std::size_t d: a+b+c ? d1 : d0) { for (std::size_t e: a+b+c+d ? e1 : e0) { for (std::size_t f: { 3, 7 }) { const auto n = a + b + c + d + e + f; if (isTruncPrime(n, primes)) { sum += n; } } } } } } } std::cout << sum << "\n"; }
{ "domain": "codereview.stackexchange", "id": 29597, "tags": "c++, performance, programming-challenge, c++14" }
C inner product function without using array subscripting
Question: As part of a question designed to help us understand the relationship between pointers and arrays in C, I've been asked to write an inner product function that doesn't use any array subscripting. Here's what I came up with, but it looks like the kind of complicated 'clever' coding that we've traditionally been told NOT to write. Any feedback on it, or how it could be done better / more efficiently would be much appreciated. int inner_product(const int *a, const int *b, int n) { const int *p, *q; int result = 0; for(p = a, q = b; p < a + n, q < b + n; p++, q++) { result += *p * *q; } return result; } Edit - Inner product is simply summing the product of the indexes, so a[1,2,3] b[2,3,4] would be 1*2 + 2*3 + 3*4 Answer: I have a few comments on style. (By which I really mean, 100% opinion... :D) One declaration per line A lot of people strongly disagree with this, but I find it much easier to read code that has one declaration per line. const int *p; const int *q; Is a lot clearer to me than: const int *p, *q; Though really, the only time I've vehemently opposed to it is if both variables are not the same in terms of a pointer or value: const int *p, q; Is a harder to read and more prone to errors than: const int *p; const int q; n I would name the n paramter something more meaningful. I would consider calling it size or num. It's fairly apparent that n is meant to be the number of elements, but two or three extra keystrokes seems worth the extra clarity. Possible implementation I would probably write the function something like this: int inner_product(const int *a, const int *b, int num) { const int* end = a + num; int sum = 0; while (a != end) { sum += (*a) * (*b); ++a; ++b; } return sum; } Since the two pointers are incremented at the same time, there's no reason to check both. This should be a tiny bit more efficient (well... probably not once the compiler does it's magic optimisations on the original), and I think it's a bit clearer.
{ "domain": "codereview.stackexchange", "id": 2169, "tags": "c, homework" }
What is the difference between the metric (tensor), $g_{\mu\nu}$, and the invariant interval, ${ds}^2$?
Question: Here is a question from a problem sheet I found which I'm going to use to illustrate a point: The $\color{red}{\text{metric}}$ on a unit sphere is $${ds}^2={d \theta}^2+{\sin}^2\theta\, {d\phi}^2\tag{1}$$ where $\theta$ and $\phi$ are spherical polar coordinates. $u$ and $v$ are vector fields on the sphere with components $$u^\theta=0,\,\,\,u^\phi=1-\cos\theta,\qquad v^\theta =1,\,\,\,v^\phi = 0$$ Evaluate $u\cdot u$, $v\cdot v$ and $u\cdot v$ I am not interested in the answer to this question as I know I can calculate these dot products. I am simply questioning the terminology of the word I marked in red. It was my understanding that the ${ds}^2$ is known as the "invariant interval", and may not even be a distance, for example, it could measure the time between two events. I also know that the ${ds}^2$ may be written in terms of a 'metric (Minkowski) tensor', in other words, $${ds}^2=\eta_{\mu\nu}{dx^\mu}{dx^\nu}$$ where $\eta_{\mu\nu}=\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}.\,$ Eqn $(1)$ can also be re-written in terms of a metric tensor. This is also confirmed by Wikipedia where the metric is denoted by $g_{\mu\nu}$. So if the metric is not ${ds}^2$, then why is ${ds}^2$ being called the metric in the question? I have already looked at this question and this question asked on this site, an answer given to the latter seems to suggest that there is no difference between ${ds}^2$ and $g_{\mu\nu}$, but I don't understand the reasoning. Answer: If you want to be super systematic about language and not overloading the terminology, you can say the following. Fix a smooth $n$-dimensional manifold $M$. A pseudo-Riemannian metric tensor field on $M$ is by definition a smooth $(0,2)$-tensor field on $M$, which is pointwise symmetric, non-degenerate, and has the same signature (number of plus and minus in the Sylvester matrix form) at every point. Such a tensor field is typically denoted by the symbol $g$ (so for each point $p\in M$, $g_p:T_pM\times T_pM\to\Bbb{R}$ is a bilinear, symmetric non-degenerate function which eats two tangent vectors and outputs a real number). Now, in physics, it is common to abbreviate terminology in the following ways. First, the adjective pseudo-Riemannian is often omitted because in GR we care exclusively with Lorentzian signature (1 plus, and $n-1$ minus, or the other way around), and since everyone knows it's about Lorentzian signature, we'd rather not beat the already dead horse. Next, the phrase "tensor field" is often shortened (by abuse of language) to just "tensor" because... well that's just the way things are. So, you may hear $g$ being referred to as "the metric tensor $g$". This is of course incorrect (but standard) terminology since the word field tells us there is one at every point in the manifold. The next abbreviation is to omit the word 'tensor' in this description, and simply speak of "the metric $g$". In Physics, people won't have any trouble understanding what you mean, but in math, a very common source of confusion for students is with the use of 'metric' in 'metric tensor field' in the context of Riemannian manifolds and 'metric space'. Now, one can always introduce a coordinate chart $(U,x=(x^1,\dots, x^n))$, and in this chart, we can write \begin{align} g|_U&=g_{ab}\,dx^a\otimes dx^b, \end{align} where $g_{ab}:U\to\Bbb{R}$ are smooth functions, namely $g_{ab}(p):=g\left(\frac{\partial}{\partial x^a}(p),\frac{\partial}{\partial x^b}(p)\right)$. Ok, now given the object $g$ as above, we can define the following object, called the quadratic form associated to $g$. This I shall denote as $Q_g$, and it is a function $Q_g:TM\to\Bbb{R}$ defined as $Q_g(v)= g(v,v)$, for all $v\in TM$. So, you take any tangent vector $v$, and plug it into $g$ twice. The following is a basic linear algebra fact: we can recover $g$ from $Q_g$, in the following sense. For any $p\in M$ and $v,w\in T_pM$, we have \begin{align} g(v,w)&=\frac{Q_g(v+w)-Q_g(v-w)}{4}\tag{$*$}. \end{align} Think of the analogous statement for multiplication of real numbers. If I have two real numbers $x,y$, then from the sum/difference of squares formula, $xy=\frac{(x+y)^2-(x-y)^2}{4}$. The general form $(*)$ above is called the polarization identity. Thus, given any symmetric $(0,2)$ tensor field, we can define a corresponding quadratic form, and conversely given any quadratic form, we can define a symmetric $(0,2)$ tensor field which has that as the quadratic form. Because of this equivalence (going back and forth between $(0,2)$ tensor (fields) and quadratic form (fields)) some would consider this a reasonable overload of terminology, and start referring to $Q_g$ as "the metric". The above terminology of "quadratic form associated to $g$" is how a mathematician would phrase it. In terms of a coordinate chart $(U,x)$, this would be written as \begin{align} Q_g|_U&=g_{ab}\,dx^a\,dx^b. \end{align} The meaning of the product $dx^a\,dx^b$ on the right is as follows. The object $Q_g$ takes a tangent vector $v\in T_pM$ and outputs the number $Q_g(v)=g_{ab}(p)\,dx^a(v)\cdot dx^b(v)$ (recall that $dx^a$ is a 1-form so it takes a vector $v\in T_pM$ as input and outputs $dx^a(v)\in\Bbb{R}$ as output; this number is often denoted as $v^a$, and called the "$a^{th}$ component of $v$ with respect to the coordinate-induced basis $\left\{\frac{\partial}{\partial x^i}(p)\right\}_{i=1}^n$ of $T_pM$"). Motivated by the coordinate expression on the right and classical terminology, in Physics, we refer to $Q_g$ as the infinitesimal squared distance, or also as the line element (associated to $g$), and in SR/GR also the (infinitesimal) spacetime interval (the adjective 'infinitesimal' referring to the fact that it is at the level of tangent spaces), and instead of the notation $Q_g$, it is much more common to use the notation $ds^2$, (even though it is not the exterior derivative $d(s^2)$ of the square of a function $s^2$, nor is it the product of a 1-form $ds$ with itself in any manner). This is just symbolic and suggestive notation. In coordinates, we thus write \begin{align} Q_g|_U\equiv ds^2|_U=g_{ab}\,dx^a\,dx^b, \end{align} where $\equiv$ means 'same thing in different notation'. See this MSE answer of mine for more details about the notation involving tensor products and symmetrized tensor products and quadratic forms. In your specific example, you're dealing with Riemannian signature (all plus). So, being slightly more systematic with the language, I would say the metric tensor field $g$ on the unit sphere $S^2$ is such that if you restrict it to the domain of the spherical coordinate mapping, then \begin{align} g&=d\theta\otimes d\theta+\sin^2\theta\,d\phi\otimes d\phi. \end{align} (but take a look at the above MSE answer of mine; if you use the symmetrized tensor product notation, you can write this also as $g=d\theta^2+\sin^2\theta\,d\phi^2$). Equivalently, you could say the line element on the sphere is such that when restricted to the sphere, it equals \begin{align} ds^2=d\theta^2+\sin^2\theta\,d\phi^2 \end{align} (the RHS now being interpreted as a quadratic form). At the end of the day, they're giving you the same information. If it were me, I prefer to keep a terminological distinction between $g$ (pseudo-Riemannian metric tensor field, or if I'm working in RIemannian geoemtry, I'd abbreviate this to "Riemannian metric", or if I'm doing GR, I'd say "Lorentzian metric") and $Q_g\equiv ds^2$ (which I'd prefer to call the Quadratic form associated to $g$, or just the line element). Having said this, because of the linear algebra fact mentioned above, it isn't super necessary (once you have learnt the definitions) to be so strict with maintaining the distinction (and for people who know this fact, it is so obvious that they may even blur the distinction between a symmetric $(0,2)$ tensor field and its associated quadratic form, so they may start writing stuff like $g=ds^2=g_{ab}\,dx^a\,dx^b$). One final thing I'll mention is that sometimes you'll see statements like "the metric $g_{ab}$". This can be interpreted in two ways. The first is you have $g$ as above, and you're fixing a coordinate chart $(U,x)$ as above, and considering the component functions $\{g_{ab}\}_{a,b=1}^n$; in this sense identifying a tensor field with its component functions with respect to some coordinates is an abuse/overload of language, and I would strongly caution against it unless you know precisely what you're talking about. Alternatively, it is also common to use the abstract index notation in which the symbol $g_{ab}$ denotes the actual $(0,2)$ tensor field $g$ (I have my reasons for using this notation only occasionally, but it's logically fine).
{ "domain": "physics.stackexchange", "id": 88973, "tags": "general-relativity, metric-tensor, coordinate-systems, terminology, notation" }
Add/Remove class on scroll depending on scroll length
Question: I am a jQuery beginner and would like some pointers on whether the following code could be made shorter, or there is a better way to get the same result. jQuery(document).ready(function(){ var aa=jQuery('#navigation-wrapper'); var bb=jQuery('#top'); jQuery(window).scroll(function(){ if(jQuery("body").hasClass("home")){ if(jQuery(this).scrollTop()>510){ aa.addClass("navbar-fixed-top"); //bb.css('marginTop', aa.height()); } else{ aa.removeClass("navbar-fixed-top"); //bb.css('marginTop', 0); } } else { if(jQuery(this).scrollTop()>293){ aa.addClass("navbar-fixed-top"); //bb.css('marginTop', aa.height()); } else{ aa.removeClass("navbar-fixed-top"); //bb.css('marginTop', 0); } } }); }); Answer: The code can made very short. $(window).scroll(function () { var scrollTop = $(document.body).hasClass('home') ? 510 : 293; $('#navigation-wrapper').toggleClass('navbar-fixed-top', $(this).scrollTop() > scrollTop); }); There is no need of wrapping the code in $(document).ready() for binding events on window. Use ternary operator Use toggleClass() with second parameter to add/remove class based on a condition.
{ "domain": "codereview.stackexchange", "id": 21488, "tags": "javascript, jquery, css" }
"Welcome to Buzzway Subs!"
Question: There were a few recent questions which had a challenge like this: The subway shop provides catering for meetings and other events. All sandwich platters are $40 each. Cookie platters are $20 each. Beverages are not included in catering service. Write a program that prompts the user for the number of sandwich platters and the number of cookie platters. The program should compute the total cost of the order (including a 6% tax). Input statements you should code - How many sandwich platters? - How many cookie platters? Output statements your program should display: - Price of sandwich platter(s): - Price of cookie platter(s): - Price before taxes: - Tax: - Price plus taxes: Code the program using Scanner Console.ReadLine(). This was originally intended for Java, evidently, but I made an implementation in C#. This is the first time I write anything more substantive than 'Hello, World!' in C#, so all recommendations are welcome! Working demo on DotNetFiddle.net using System; using System.Collections.Generic; using System.Text; public class BuzzwaySubs { private static Dictionary<string, decimal> cateringMenu = new Dictionary<string, decimal>() { {"Sandwich Platter", 39.99M}, {"Cookie Platter", 19.99M} }; public const decimal SALES_TAX = 0.06M; public static void processCustomer() { Console.WriteLine("Welcome to Buzzway Subs!"); Console.WriteLine("May I take your order?"); var order = takeCateringOrder(); var subTotal = calculateSubTotal(order); var salesTax = calculateSalesTax(order); var total = calculateTotal(order); Console.WriteLine ( "Subtotal: {0:0.00}\n"+ "Tax: {1:0.00}\n" + "Total: {2:0.00}\n", subTotal, salesTax, total ); decimal payment = getPaymentFromCustomer(total); processPayment(total, payment); printReceipt(order, payment); Console.WriteLine("Thank you for shopping at Buzzway!"); } public static void printCateringMenu() { Console.WriteLine("Catering Menu:"); foreach (var product in cateringMenu) { Console.WriteLine("{0}: ${1:0.00}", product.Key, product.Value); } Console.WriteLine(); } public static Dictionary<string, int> takeCateringOrder() { printCateringMenu(); Dictionary<string, int> order = new Dictionary<string, int>(); foreach (var product in cateringMenu) { string input; int quantity; bool isValidQuantity; bool proceedToNextItem = false; while (!proceedToNextItem) { Console.WriteLine("Purchase how many of {0} for ${1:0.00} each?", product.Key, product.Value); input = Console.ReadLine(); isValidQuantity = int.TryParse(input, out quantity); if (!isValidQuantity) { Console.WriteLine("Invalid number input."); } else if (isValidQuantity && quantity == 0) { //don't add item proceedToNextItem = true; } else { order.Add(product.Key, quantity); proceedToNextItem = true; } } } return order; } public static decimal calculateSubTotal(Dictionary<string, int> order) { decimal subTotal = 0M; int itemQty = 0; foreach (var item in order) { itemQty = item.Value; decimal costOfItems = itemQty * cateringMenu[item.Key]; subTotal += costOfItems; } return subTotal; } public static decimal calculateSalesTax(Dictionary<string, int> order) { return calculateSubTotal(order) * SALES_TAX; } public static decimal calculateTotal(Dictionary<string, int> order) { decimal total = calculateSubTotal(order) + calculateSalesTax(order); return total; } public static decimal getPaymentFromCustomer(decimal total) { decimal payment = 0M; bool isValidPayment = false; while (!isValidPayment) { Console.WriteLine("Your total due is ${0:0.00}. \nPay how much?", total); var input = Console.ReadLine(); isValidPayment = decimal.TryParse(input, out payment); if (!isValidPayment) { Console.WriteLine("Invalid payment amount. Please try again."); } else { isValidPayment = true; } } return payment; } public static decimal processPayment(decimal total, decimal payment) { bool isPaymentEnough = false; decimal change = 0; while (!isPaymentEnough) { if (payment == total) { isPaymentEnough = true; } else if (payment < total) { Console.WriteLine ( "Payment ${0:0.00} is not enough for total ${1:0.00}" ,payment, total ); payment = getPaymentFromCustomer(total); } else { isPaymentEnough = true; change = payment + -total; Console.WriteLine("Your change is ${0:0.00}.", change); } } return change; } public static void printReceipt(Dictionary<string, int> order, decimal payment) { StringBuilder receipt = new StringBuilder(""); Console.WriteLine("--- Receipt ---"); foreach (var item in order) { Console.WriteLine ( "{0} {1} ${2} ea. ${3}", item.Value, item.Key, cateringMenu[item.Key], item.Value * cateringMenu[item.Key] ); } var subTotal = calculateSubTotal(order); var salesTax = calculateSalesTax(order); var total = calculateTotal(order); Console.WriteLine("Subtotal: ${0:0.00}", subTotal); Console.WriteLine("Tax: ${0:0.00}", salesTax); Console.WriteLine("Total: ${0:0.00}", total); Console.WriteLine("Payment: ${0:0.00}", payment); decimal change = processPayment(total, payment); } } public class Program { public static void Main() { new BuzzwaySubs(); BuzzwaySubs.processCustomer(); } } Example run: Welcome to Buzzway Subs! May I take your order? Catering Menu: Sandwich Platter: $39.99 Cookie Platter: $19.99 Purchase how many of Sandwich Platter for $39.99 each? 1 Purchase how many of Cookie Platter for $19.99 each? 2 Subtotal: 79.97 Tax: 4.80 Total: 84.77 Your total due is $84.77. Pay how much? 85 Your change is $0.23. --- Receipt --- 1 Sandwich Platter $39.99 ea. $39.99 2 Cookie Platter $19.99 ea. $39.98 Subtotal: $79.97 Tax: $4.80 Total: $84.77 Payment: $85.00 Your change is $0.23. Thank you for shopping at Buzzway! Answer: First, keep track of your instances of a class: new BuzzwaySubs(); BuzzwaySubs.processCustomer(); That should be: BuzzwaySubs restaurant = new BuzzwaySubs(); // or `var restaurant` restaurant.processCustomer(); As it is, the first line is utterly useless. Additionally, this only works because your methods are all static. Keeping track of your instances is important because what happens when you have two restaurants? You need to know which restaurant is controlled by which class instance so you can manage them appropriately. Second, declare your variables in the tightest scope possible: int itemQty = 0; foreach (var item in order) { itemQty = item.Value; decimal costOfItems = itemQty * cateringMenu[item.Key]; subTotal += costOfItems; } That variable should be declared in the foreach loop. You have this probably in a great many places. foreach (var item in order) { int itemQty = item.Value; decimal costOfItems = itemQty * cateringMenu[item.Key]; subTotal += costOfItems; } This is important for many reasons, including keeping your variables from leaking information to other sections of the program, releasing memory when you aren't using it, and more. Third, your naming does not follow standard C# naming practices: public static void printCateringMenu() Public methods are named with PascalCase. public const decimal SALES_TAX = 0.06M; Public fields are also named with PascalCase. Fourth, you've got a bad case of the static fever. Please, oh please, why is every method in BuzzwaySubs static? You might as well have made the class static! What if you have different BuzzwaySubs restaurants? Static instances are only created once because they belong to the type, not the instance, so you are stuck with only many restaurants, but only one menu, one checkout system, and one of everything else that should be restaurant specific? Also, if you had not made your methods static, that crazy logic in Main() would have been illegal (is that, in fact, what lead to this?). Fifth, R# is warning me about some unnecessary logic: if (!isValidQuantity) { Console.WriteLine("Invalid number input."); } else if (isValidQuantity && quantity == 0) { //don't add item proceedToNextItem = true; } The isValidQuantity check in the else if is unneeded because you check it in the if above. If it is false, your logic will never get past the first if. StringBuilder receipt = new StringBuilder(""); decimal change = processPayment(total, payment); Unused variables. else { isValidPayment = true; } Unnecessary assignment - value is already true. These are all dead code for the programmer to read that the compiler is likely (and hopefully) optimizing away. Sixth, keep your variable declaration consistent: var total = calculateTotal(order); decimal change = processPayment(total, payment); Consistency is next to cleanliness. Seventh, please use appropriate access modifiers: public const decimal SALES_TAX = 0.06M; When are you ever going to have to access that from outside the method? That should be private. Some of your methods should probably be private as well. The access of fields, methods, etc. is very important to control to prevent outsiders from seeing what you are doing. When I create an instance of this class, which should I be able to see your background information and what your methods are all doing? Eighth, if you need to run a loop at least once, consider using a do-while loop: bool isValidPayment = false; while (!isValidPayment) { Console.WriteLine("Your total due is ${0:0.00}. \nPay how much?", total); string input = Console.ReadLine(); isValidPayment = decimal.TryParse(input, out payment); if (!isValidPayment) { Console.WriteLine("Invalid payment amount. Please try again."); } } Becomes: bool isValidPayment; do { Console.WriteLine("Your total due is ${0:0.00}. \nPay how much?", total); string input = Console.ReadLine(); isValidPayment = decimal.TryParse(input, out payment); if (!isValidPayment) { Console.WriteLine("Invalid payment amount. Please try again."); } } while (!isValidPayment); This is not required, but it shows more clearly that the loop must execute at least once by definition, and it prevents someone from breaking your program by changing the bool isValidPayment = false; to bool isValidPayment = true;. Ninth, you can make this dictionary readonly: private static Dictionary<string, decimal> _cateringMenu = new Dictionary<string, decimal>() { {"Sandwich Platter", 39.99M}, {"Cookie Platter", 19.99M} }; private static readonly Dictionary<string, decimal> _cateringMenu = new Dictionary<string, decimal>() { {"Sandwich Platter", 39.99M}, {"Cookie Platter", 19.99M} }; This will prevent you from overwriting it with a new dictionary. Ten, you can use continue and break in loops. break will exit the loop, while continue will evaluate the condition and start at the beginning of the loop. Using these statements can help you remove some of your flags. For example, instead of using proceedToNextItem = true; to exit the loop when the condition is re-evaluated as the last statement in a successful loop, you can use break. Eleven, you have a bug in your logic. You do not check if the customer orders a negative amount of food: isValidQuantity = int.TryParse(input, out quantity); if (!isValidQuantity) { Console.WriteLine("Invalid number input."); } else if (isValidQuantity && quantity == 0) { //don't add item proceedToNextItem = true; } else { order.Add(product.Key, quantity); proceedToNextItem = true; } You can change the first condition to: if (!isValidQuantity || quantity < 0) to fix this. Twelve, you can use string interpolation, if you are using C# 6 (All credit to Jeroen Vannevel. This is signaled by placing a $ in front of the string, such as: Console.WriteLine($"Your total due is ${total:0.00}. \nPay how much?"); Instead of: Console.WriteLine("Your total due is ${0:0.00}. \nPay how much?", total); Thirteen, also credit to Jeroen Vannevel, you can use expression-bodied members when your methods are only one expression long: public decimal CalculateSalesTax(Dictionary<string, int> order) { return CalculateSubTotal(order) * SalesTax; } Becomes: public decimal CalculateSalesTax(Dictionary<string, int> order) => CalculateSubTotal(order) * SalesTax;
{ "domain": "codereview.stackexchange", "id": 17196, "tags": "c#, beginner, io" }
exercise on Huffman encode
Question: I have these 4 symbols with their probabilities: x P(x) -------- 1 0.3 2 0.3 3 0.2 4 0.2 I built the Huffman tree in this way: and I obtainded: x P(x) C(x) ---------------- 1 0.3 0 2 0.3 10 3 0.2 110 4 0.2 111 it's correct? Because according to the solution the results should be: x P(x) C(x) ---------------- 1 0.3 00 2 0.3 01 3 0.2 10 4 0.2 11 Why? Yet I followed the steps shown here. Answer: A quick way to check whether your answer has a chance of being correct is to compute the average code length. Your encoding gives the average length of $2.1$, which is greater than using a code of fixed length $2$, so it can't be correct. If you follow the priority queue algorithm from the source you cite, then you would notice that after merging nodes 3 and 4 you get one supernode of priority 0.4. Now your queue would have three elements of priorities $0.3, 0.3,$ and $0.4$. Thus, you would next merge elements corresponding to priorities $0.3$ and $0.3$ (the algorithm works by merging two nodes with lowest priorities), which happen to be nodes 1 and 2.
{ "domain": "cs.stackexchange", "id": 6796, "tags": "huffman-coding" }
NAV2 cancel bt_action_node
Question: In NAV2, I defined my own on_cancelled for my bt_action_node. Nevertheless, I realized it`s never called if halt (the method that cancels the action) is called. The halt will set the tree to IDLE, but then the next tick will send a new goal, instead of processing the previously cancelled goal. How to make sure the on_cancelled is called in that case? Am I missing something here? Do I need to overrides the halt method as well to avoid the IDLE state before calling the on_cancelled? Thanks Answer: Cancelling in action terms is when an external entity cancels the goal. This is a callback function that the cancelled state can be processed by the BT node in case it wants to populate / change the BT::NodeStatus. This is not called to cancel an action, but called once an action is already cancelled by an external agent (e.g. autonomy system or elsewhere in the BT using a cancel node) to potentially modify the BT status to send on the node's exit. Halt is not cancel. While halt will cancel a goal when a BT is ending, that cancel is not an external cancel but an internal cancel to put the BT back into a neutral state - so there's no need for you to worry about messing with the BT::NodeStatus because the status is already set to idle by the halt process -- so there's no need to trigger this function.
{ "domain": "robotics.stackexchange", "id": 38509, "tags": "ros2, nav2" }
Find signal's maximum peak in window
Question: I have a 9-dimensional signal (as a csv from this Gist) that looks like this: A signal peaks every 30 steps. I want to get the maximum values of the peaks in that 30 second window. Here's what I've hacked together so far, where sig is the signal loaded from the csv file and max_res is the desired result: trial_num = 8 dims = 9 step = 30 max_res = np.zeros(trial_num) tmp = sig.reshape((trial_num, step, dims)) max_dim = np.argmax(np.sum(tmp, axis=1), axis=1) sing_dim = np.zeros((trial_num, step)) for t_i in range(trial_num): sing_dim[t_i] = tmp[t_i, :, max_dim[t_i]] max_res = np.max(sing_dim, axis=1) How can I replace the for-loop with a vectorized operation? Answer: Instead of getting the positions of the maximum value and then retrieving the values thanks to the positions, you can directly ask numpy to retrieve the maximum values: tmp = sig.reshape((trial_num, step, dims)) max_res = np.max(np.max(tmp, axis=1), axis=1) I suggest you, however, to use a function to encapsulate this behaviour. It could take the amount of steps per cycle as a parameter and compute the rest from there: def max_peak_values(data, steps_per_cycle=30): length, dims = data.shape trials = length // steps_per_cycle new_shape = trials, steps_per_cycle, dims return np.max(np.max(data.reshape(new_shape), axis=1), axis=1) and use it like: max_res = max_peak_values(sig, 30)
{ "domain": "codereview.stackexchange", "id": 22346, "tags": "python, numpy, vectorization, signal-processing" }
Using the Lagrangian Method to Solve Bead on Rotating Rod
Question: The following problem appeared on the Princeton University Physics Competition in 2017: A bead of mass $m$ is free to slide along a thin rod of length $L$ tilted at angle $\phi$ to the vertical. The rod has a base point fixed to the ground and is spinning at constant angular velocity $\omega$ about the vertical. Gravity acts in the downwards vertical direction. If the rod rotates faster than a certain $\omega_c$ the bead will start to fly off. What is $\omega_c$? Suppose that the bead is at length $q_0>0$ along the rod from the base point. Express your answer in terms of $g, m, q_0, \phi$. I have been trying to solve this problem using Lagrangian mechanics, however I keep getting a different answer to what appears on the answer key. The following is my attempt: Let $T$ be the kinetic energy of the system and let $V$ be the potential energy of the system. Then, $$T=\frac{1}{2}m(q^2\dot{\phi}^2 + \dot{q}^2)$$ $$V = mg q\cos \phi.$$ Hence, the Lagrangian of the system is $$\mathcal{L} =\frac{1}{2}m(q^2\dot{\phi}^2 + \dot{q}^2) - mg q\cos \phi.$$ Thus, one of the Euler-Lagrange equations is $$m\ddot{q} = mq\dot{\phi}^2 - mg \cos \phi \implies \ddot{q} = q\dot{\phi}^2 - g \cos \phi.$$ At the critical value for $\omega$, we have $q=q_0$ and $\dot{q} = \ddot{q} = 0$. Hence, $$0=q_0\omega_c^2 - g \cos \phi \implies \omega_c^2 = \frac{g\cos\phi}{q_0} $$. The answer that the answer key gets (using $F=ma$) is $w_c^2 = \frac{g}{q_0 \tan \phi}$. I'm not sure where I have gone wrong. Any help would be greatly appreciated. The following are links to the question paper and answer key: Question Paper: http://pupc.princeton.edu/archive/PUPC2017OnsiteExam.pdf Answer Key: https://pupc.princeton.edu/archive/PUPC2017OnsiteSolutions.pdf Answer: It looks like the links in the question both lead to the problem description. The answers can be found here. The answer in the solution manual is incorrect. You can see the mistake when looking at lines (1) and (3) of the solution to this problem: \begin{equation} m\ddot{q}=F_{c}\sin\phi - mg\cos\phi \\ F_{c}=m\omega^{2}R_{0} = m\omega^{2}q_{0}\sin\phi. \end{equation} In the first equation the $\sin\phi$ is because we want to know the force along the rod away from the anchor point of the rod. In the last equation they've just used geometry to convert the distance from the axis of rotation $R_{0}$ to $q_{0}$. The problem with the answer is when they substitute the second result into the first, they lose track of a $\sin\phi$. The bead will start to fly off when the force due to angular motion away from the anchor point and the force due to gravity towards the anchor point are equal, $m\ddot{q}=0$: \begin{equation} m\omega_c^{2}q_{0}sin^{2}\phi = mg\cos\phi \\ \to \omega_c^2 = \frac{g}{q_{0}\sin\phi\tan\phi}. \end{equation} For the Lagrangian formalism: \begin{equation} T = \frac{1}{2}mv^{2} = \frac{1}{2}m (r^2\omega^2 + \dot{q}^2)=\frac{1}{2}m (\omega^{2}q^2\sin^2\phi + \dot{q}^2) \\ U = mgq\cos\phi. \end{equation} The Lagrangian is now \begin{equation} \mathcal{L} = \frac{1}{2}m(\omega^{2}q^2\sin^2\phi + \dot{q}^2) - mgq\cos\phi. \end{equation} The Euler-Lagrange equation is, \begin{equation} m\ddot{q} = m\omega^2q\sin^2\phi - mg\cos\phi = 0\\ \to \omega_c^2 = \frac{g}{q_{0}\sin\phi\tan\phi}. \end{equation} In the last line I used the substitutions you've already stated. With all that said, I'm not sure your equation T makes sense, I don't see why $\phi$ would be changing.
{ "domain": "physics.stackexchange", "id": 53556, "tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism" }
What is the area under the graph for decelerating objects?
Question: For a deceleration in 7 seconds which area do we use? Do we use the area for the triangle or do we include even the area under the triangle which makes it area of a trapezium? Answer: The distance travelled in the first $7$ seconds is the whole area under the velocity/time graph from $0$ to $7$ seconds i.e. option B, the trapezium.
{ "domain": "physics.stackexchange", "id": 98172, "tags": "homework-and-exercises, kinematics, calculus" }
Why does ice reduce swelling?
Question: It is common practice to use ice packs on injuries that cause bruising and swelling. It seems to be an effective method to assist in reducing swelling. Why does ice reduce welling? Answer: Swelling is one of the signs of inflammation. Inflammation involves release of histamine by mast cells present in the tissues. Histamine causes vasodilation and leads to leakage of fluid from the blood, along with which neutrophils and other WBCs also enter the area. They phagocytose microbes that might have entered with the injury. Applying ice would cause vasoconstriction* (i.e. narrowing of blood vessels.) which would reduce the leakage of tissue fluid and hence swelling. *If you wonder why ice(or any cool thing) causes vasoconstriction think why you turn pale in winter ? Vasoconstriction reduces blood flow to the particular area and hence the exchange of heat is reduced which conserves body heat in cold environments.
{ "domain": "biology.stackexchange", "id": 1511, "tags": "medicinal-chemistry, injury" }
costmap_2d observation_sources: 'expected_update_rate'?
Question: Hello, In the sources for the navigation stack, there's a sample costmap_2d config file at navigation/costmap_2d/launch/example_params.yaml. In the part pertaining to the base_scan source, there's an undocumented parameter called 'expected_update_rate': observation_sources: base_scan base_scan: {data_type: LaserScan, expected_update_rate: 0.4, observation_persistence: 0.0, marking: true, clearing: true, max_obstacle_height: 0.4, min_obstacle_height: 0.08} Is this what it sounds like (laser update rate in Hz)? 0.4 is kind of a weird value for a scanning laser so thought I'd check. Thanks, Rick Originally posted by Rick Armstrong on ROS Answers with karma: 567 on 2014-09-06 Post score: 0 Answer: Answered my own question. It's really an "interval" rather than a "rate": costmap_2d/src/observation_buffer.cpp:244: bool ObservationBuffer::isCurrent() const { if (expected_update_rate_ == ros::Duration(0.0)) return true; bool current = (ros::Time::now() - last_updated_).toSec() <= expected_update_rate_.toSec(); if (!current) { ROS_WARN( "The %s observation buffer has not been updated for %.2f seconds, and it should be updated every %.2f seconds.", topic_name_.c_str(), (ros::Time::now() - last_updated_).toSec(), expected_update_rate_.toSec()); } return current; } Originally posted by Rick Armstrong with karma: 567 on 2014-09-06 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 19324, "tags": "navigation, costmap-2d" }
Factors on which brightness of light depends
Question: Does brightness of light depend on number of quanta in the light? It makes sense assuming that the number of photons would affect the brightness of light. Answer: Yes, it does. It also depends on something which is not so clear in that formula, which is the square of the amplitude for each photon to get where it is going. So if we are looking at a detection screen, for example at the back of your eyes you have precisely such a screen that we call your retinas, we can parameterize it by two coordinates, call them $(x, y)$. Then a photon has an amplitude $\phi(x, y)$ to show up at any given place, and then the actual probability for the photon to arrive is $|\phi(x, y)|^2~dx~dy$ for it to be found in the little square $(x, x + dx) \times (y, y+dy).$ (If you're not familiar with the notation, this set is the set of all all points $(a, b)$ such that $x < a < x + dx$ and $y < b < y + dy$, and the $d\bullet$ expression stands for "a little difference in" and is just meant to say that $dy$ is a very small length in the $y$-direction.) Now photons' amplitudes are complex numbers, which means that they are scaled rotation matrices -- they have a scale factor and a phase angle that they rotate through. For photons traveling over a distance $\ell$ at the speed of light $c$, the scale factor is just $1/\ell$ and the phase angle is $\nu~\ell/c$ where $\nu$ is the frequency. All of the wavy interference effects come from these phase angles interfering over many different trajectories, but the biggest effect on the "envelope" of brightness comes from this $1/\ell^2$ factor that comes after you square the amplitude. This says that if you're twice as far away from a source, it is one quarter as intense. Finally, brightness depends on intrinsic response in the screen itself. The screen might be made out of detectors which respond to some frequencies more than others, with some curve $u(\nu)$ or so. For example your eye cannot see infrared or ultraviolet light, but snakes have dimples which can see infrared radiation and I think bees have receptors in the facets of their eyes for ultraviolet light. Furthermore green photons are a little more intense on your eyes than red or blue ones are; they look brighter to you. Putting it all together, you have something which looks like: brightness = (number of photons) * (probability for photon to go from source to detector) * (brightness response of detector per photon). All three of these are important for how bright something looks to us or our detectors.
{ "domain": "physics.stackexchange", "id": 42253, "tags": "visible-light, intensity" }
RosoutPanel not included in Fuerte?
Question: The rxtools libraries do not seem to be included in Fuerte. Is there any way to install them (or have they been moved to a new place maybe)? I have connected a RosoutPanel to my GUI, so it would be nice to be able to use it in the new relase too. To clarify, i'm looking for the C++ source code, not rxconsole. I'm including rxtools/rosout_panel.h in my code, but can not find it in Fuerte. Originally posted by Ola Ringdahl on ROS Answers with karma: 328 on 2012-04-26 Post score: 0 Original comments Comment by joq on 2012-04-26: The rxconsole command works for me. What OS are you using? Did you install from source or binaries? Comment by Ola Ringdahl on 2012-04-26: The rxconsole command works, but i'm using the C++ library functions for rxtools::RosoutPanel. In Electric the code is located in stacks/rx/rxtools/src/ but in Fuerte the stack/rx is empty and the rxtools package has moved to share/ but there is no src directory, only bin/. I installed from binaries Comment by mjcarroll on 2012-04-27: I was under the impression that rxtools was getting deprecated in favor of the new ros_gui package that should be coming along shortly. You may want to investigate that alternative. Comment by Ola Ringdahl on 2012-04-27: Seems a bit odd to remove the old before having an alternative ready... Comment by joq on 2012-04-27: Yes, wx is deprecated in favor of Qt as a long-term goal due to cross-platform portability issues. But, http://ros.org/wiki/rxtools says its interfaces are "stable", with no deprecation indication. So, there is a defect somewhere: either in code or documentation. Comment by mjcarroll on 2012-04-28: Sorry, I was mistaken, the correct phrasing is that it is "End of Life" http://ros.org/wiki/fuerte#Migrating Answer: It looks like those C++ interfaces were left out when rxtools was converted to the new catkin build system. The rxtools::RosoutPanel is a documented interface, and its omission was probably unintentional. Please open a defect ticket for the rx component on the ROS Trac system. Originally posted by joq with karma: 25443 on 2012-04-27 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 9149, "tags": "ros, ros-fuerte, rxtools" }
Are dimensions of physical quantities and 1D/2D/3D/4D... spacetime dimensions the same?
Question: Are the [L]/[M]/[T]... dimensions of length/mass/time... somehow related to 1/2/3/4D spacetime dimensions? If not; why are they called dimensions and what do they actually mean? Answer: No, they have nothing to do with one another. As per the Wiki article on dimensional analysis, Poisson first used the word "dimension" to refer to physical units in 1833.
{ "domain": "physics.stackexchange", "id": 75119, "tags": "dimensional-analysis, spacetime-dimensions, si-units" }
Is it possible to express the Lorentz force in terms of differential forms?
Question: The electromagentism can be formulated in terms of differential forms, defining de electromagnetic four-potential $A$ as a 1-form, the electromagnetic 2-form (to give it a name) $F=dA$, etc. And the Lorentz force is defined as the inner product of the electromagentic tensor, which is the 2-form as a 2-rank tensor (I think you understand what I'm trying to say with that), with the four-velocity(and the charge), but it can be defined directly like some operation using the 2-form $F$ and the four-velocity? Answer: A $p$-form is, in tensor terms, a tensor whose components have exactly $p$ lower indices that are antisymmetric. For example, the electromagnetic 2-form $F=dA$ can be written in terms of its components as follows: $$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu.$$ Meanwhile, a vector is, in tensor terms, a tensor whose components have exactly one upper index. For example, the relativistic 4-velocity has components $u^\mu$. There is a way to take the product of a vector and a $p$-form and get a $p-1$-form from it; it's called the interior product. The equations on the Wikipedia page may look formidable, but in terms of the components, you simply contract the (upper) vector index with the first (lower) index of the $p$-form: $$(\iota_u F)_\nu = u^\mu F_{\mu\nu},$$ where $\iota_u F$ denotes the interior product of the vector $u$ with the 2-form $F$, and $(\iota_u F)_\nu$ are the components of the resulting 1-form. (Note that if you contract one index of a $p$-form you will have $p-1$ lower antisymmetric indices left, and that's why the interior product takes $p$-forms to $p-1$-forms.) Using the interior product, we can now very easily write down the force 1-form $f$ as the interior product of the 4-velocity $u$ with the electromagnetic 2-form $F$, times the electric charge $q$: $$f = q \, \iota_u F,$$ or in terms of components, $$f_\nu = q \, u^\mu F_{\mu\nu}.$$ Note: Sometimes you will see the interior product written as $\iota_u F = u \lrcorner F$. In this notation, the Lorentz force is $f = q \, u \lrcorner F$. Further reading: Geometry, Topology And Physics, 2nd Edition, by Nakahara.
{ "domain": "physics.stackexchange", "id": 42551, "tags": "electromagnetism, special-relativity, forces, tensor-calculus" }
DAM enzyme distances move along the genome
Question: I am fusing a protein with a Dam enzyme (http://en.wikipedia.org/wiki/Dam_(methylase)). The idea is that when the protein binds to the DNA, the Dam enzyme will start methylating nearby GATC sites, thus helping identify the protein binding region (using DpnI later on, and microarray technology). However, I have no idea, in theory, how many (bps) will the enzyme traverse across the genome from its starting location. I.e., how far (in terms of bp) from its starting location will the enzyme methylate the GATC sites. There are methods for finding protein binding regions along the genome that use such technology, such example here: http://www.nature.com/nbt/journal/v18/n4/abs/nbt0400_424.html Answer: Dam methyltransferase methylates large DNA molecules in a processive manner, with an effective range of up to 7 kb on either side of an initial methylation reaction. My answer is taken from this paper: Urig, S. et al. (2002) The Escherichia coli Dam DNA Methyltransferase Modifies DNA in a Highly Processive Reaction J Mol Biol 319 1085-1096 The authors report on a kinetic analysis of E. coli dam methylase. For anyone using this system as described by the OP, this paper is essential reading. Their conclusion is that the methylation process is highly processive, which is to say that a single dam-DNA interaction leads to multiple methylations before the enzyme dissociates from the DNA. I'll restrict myself to describing just two of their experiments: Figure 5 In this experiment they analysed the time course of methylation of an end-labelled 879-mer with 4 GATC sites, using DpnI digestion to detect methylation. In this experiment all DNA was found to be either fully methylated or unmethylated - no evidence for the presence of partially methylated DNA molecules was detected. Figure 6 In this experiment they analysed the methylation of λ DNA (48.5 kb) and were able to detect partially methylated molecules. A kinetic analysis of the data results in the following conclusions (MTase; methyltransferase): From these simulations, we [determine] a processivity of nav=3000. This means that after binding to the λ-DNA and starting the one-dimensional diffusion process, one MTase on average meets 3000 dam sites before it dissociates from the DNA. Since the enzyme performs a random walk, it will hit many sites more often than only once and the effective range of processivity corresponds to the square root of nav. Therefore, the dam MTase is able to methylate approximately 55 GATC sites on λ-DNA in a processive reaction in vitro. In this simulation, 4.6 associations of an MTase molecule to the same molecule of λ-DNA are required to obtain fully methylated DNA. In a random DNA sequence GATC should occur on average every 44 = 256 residues. This suggests an approximate range of (55/2)*256 = 7 kb on each side of an initial methylation reaction.
{ "domain": "biology.stackexchange", "id": 690, "tags": "molecular-biology" }
Does electric charge affect space time fabric?
Question: I am confused with this question. Does electric charge affect the space time fabric? If so, why? Also if electric charge does not affect the space time fabric, how can we interpret the origin of the electric field around a charge? Answer: The The electric field around a charge has energy, and so contributes to the gravitational field around the charge. You can see the effect in the exact solution for charged black holes, where the curvature is nonzero everywhere because of the electric charge. $$ ds^2 = - f(r) dt^2 + {1\over f(r)} dr^2 + r^2 (d\theta^2 + \sin^2\theta d\phi^2) $$ $$ f(r) = 1 - {2m\over r} + {Q^2\over r^2} $$ If you set Q to zero, you get a solution of the vacuum Einstein equations, so that there is no curvature away from the singular region at $r=0$. But if you have a nonzero Q, you get curvature at all points, and this is due to the stress-energy of the electric field. $$ E = {Q\over r^2} \hat{r} $$ leads to a local energy density which is $$ {Q^2\over 2 r^4} $$ outside of the black hole, and this energy density makes a contribution to the total mass. It is a negligible fraction of the mass of the black hole, unless the black hole is extremal. This happens when $m=Q$, so that the solution degenerates, and the horizon becomes AdS as opposed to Rindler. In such a limit, the electric repulsion is equal and opposite to the gravitational attraction, and two such stationary black holes just sit next to each other, neither replling nor attracting. In this limit, you can consider the entire mass of the black hole as contained in the field. But the question is not about the effect of electricity on space, but about the geometric interpretation of electromagnetic fields. What electromagnetism is a change in the "phase space fabric", not the "metric space" fabric. When you take a particle around a loop in space, there is quantum mechanical phase from doing this, and the extra phase $\Delta \Phi$ when the particle is charged is given by: $$ \Delta\Phi = q\oint A \cdot dx $$ This is the definition of the electromagnetic gauge field, and it is the fundamental geometric description of electromagnetism. It has nothing to do with the direct geometry of space time, it has to do with an internal geometry in the shape of a circle, which is the phase of the quantum wavefunction. In Kaluza Klein theory, this circle becomes a geometrical circle, an extra dimension in the shape of a small circle, and the interpretation of the vector potential is that it is the amount of translation you get in the little circle when you go around a loop in our ordinary 4 dimensions. The charge is then interpreted as the momentum along this circle, and in quantum mechanics, the phase acquired from translation is the same as the momentum, so this embeds electromagnetism in geometry. But within modern theories, a gauge field is as fundamental as a metric field, and if you have enough supersymmetry requiring the gauge field to be there, you can consider the gauge field to be just as geometrical. It is not the same geometry as the geometry that explains gravity, but it is geometry nonetheless, it is the geometry of a fibre bundle (a separate space attached to the main space at every point, like the extra circle in Kaluza Klein in the limit that it is infinitesimal). So in the modern understanding, the notion of geometry is just general enough to make any gauge field geometrical, even if it isn't directly a change in the naive thing you would call the "space-time fabric".
{ "domain": "physics.stackexchange", "id": 4397, "tags": "general-relativity, spacetime, quantum-electrodynamics, charge" }
Different time for same amount of kph/mph change at different speeds
Question: First of all excuse me if my english isn't perfect, and if the question is hard to understand, even I am a little confused about what do I even want to ask. So I bought a new wristwatch not too long ago, and it has a scale called "Tachymeter". It is basicaly a speed metering tool, and works like this: You are going with for example a car, start the stopwatch and stop it after you travelled 1 kilometer. Where the stopwatch stops, the tachymeter scale will tell you the average speed. (note that it is going backwards on the kph scale, as it takes you more time to cover a fixed distance when going slower of course) Its pretty simple. Here is a picture of how it looks: Now my problem is that I noticed that at lower speeds it takes more time for the stopwatch to "travel" across the same amount of kph than at higher speeds. For example the time it takes for it to get from 80 to 60 kph is 15 seconds, but to travel the same 20 kph difference from 220 kph to 200 kph is only 2 seconds. Now this was my main concern, and I dont even know how to ask a question about it, but this makes me quite confused. Its like the kph unit is not linear with the amount of the elapsed time or something. Can anyone explain to me what is going on? Thank you for the answers! Answer: https://en.wikipedia.org/wiki/Tachymeter_(watch)#Measuring_speed The watch hand always moves at a constant speed, and you always measure a constant - 1 unit - distance (km in your case). The variable is the amount of seconds you devide the units with. E.g. if you cover 1 unit in 1 sec, and the unit is km your average speed will be 3600 km/h. If you cover 1 unit in 10 sec, and your unit is km, your average speed will be 360 km/h, and so on. So the slower you are the less your average speed changes with any added second. If you add 1 sec to 1 sec you increased the time by 100%. If you add 1 sec to 100 sec, you just increased the time with 1%. So naturally your scale is hyperbolic in nature.
{ "domain": "physics.stackexchange", "id": 45937, "tags": "time, speed, distance" }
Satellite in orbit in front of and behind the Moon
Question: Would it be possible to put a satellite into the same orbit as the Moon, but far enough ahead or behind the Moon to remain in place? Has this ever been done in practice? Answer: Yes. Trojan orbits (60° before or after the Moon) are even stable in most cases. These points are called the L₄ and L₅ points, which are two of the five Lagrangian points. These orbits are mostly impractical (nothing is there). Today the most known is the Chinese Queqiao satellite, which is a communication relay to their probe on the far side of the Moon. Instead of using L₄ or L₅, Queqiao is in a halo orbit around another Lagrangian point; Earth-Moon L₂.
{ "domain": "astronomy.stackexchange", "id": 7074, "tags": "the-moon" }
Is there a theoretical limit to the splitting of atomic energy levels?
Question: We know that the hyper fine interaction is due to interactions between the nucleus and the electron and Zeeman splitting induces further quantization but what I am wondering is, are there any higher order perturbations that may contribute to further level splitting or is there a derivable limit? Answer: Yes, adding external fields or interactions with nearby objects can dramatically alter the electronic structure of an atom, especially the valence band. A magnetic field can cause splitting through the Zeeman effect, there can also be splitting due to an electric field, called the Stark effect or the Autler-Townes effect for AC fields. You might also be interested in the Lamb shift. If you are near other atoms there can be other effects. Chemical bonding is clearly a rearranging of the electronic energy states caused by an external perturbation. But also within a material the nearby atoms will affect the electronic structure.
{ "domain": "physics.stackexchange", "id": 66314, "tags": "quantum-mechanics, atomic-physics, perturbation-theory" }
Factor $f$ of internal energy of a gas
Question: For a $n$-atomic gas in any sort of geometry, The formula for $f$ is $$f = 3n- \text{number of constraints}.$$ The way I was taught this formula was like each $n$ particles< there is $3$ ways it can move so $3n$ now from these ways we need to exclude the number of constraints on it's motion. But now I'm confused, because couldn't the molecule move in any $x$, $y$ and $z$ direction like there are $6$ total directions because for example there is like $-x$ and $+x$ side. And for molecules with more than two particles, does the formula include rotational d.o.f as well? And, how do I know if I should include vibrational nodes or not? I saw this question: Extra vibrational mode in linear molecule But I'm looking for something more general to use like for any shape and kind of molecule. As in, I learned from chemistry that molecules can have different geometery according to vsepr theory based on lonepair and number of bonds Answer: In the case of one point-like particle, one has only 3 independent configurational degrees of freedom because a position in 3D is uniquely identified by three independent displacements from a chosen origin. The adjective independent is the key concept to exclude counting positive and negative displacements along an axis as two different degrees of freedom. The word independent, in the present context has exactly the same meaning as in the case of vector spaces: two displacements are independent if the only way to obtain a zero displacement by the linear combination $$ a {\bf x} + b {\bf y} $$ is when both $a$ and $b$ are zero. If the particle is an m-atom molecule, the configuration of each molecule requires $3m$ independent coordinates. However, if some distances can be treated as fixed, there is a reduction of the independent degrees of freedom, equal to the number of independent constraints. For example, in the case of a rigid di-atomic molecule, we have $n=2$, but the resulting $6$ degrees of freedom are reduced to $5$ by the presence of a single scalar constraint on the distance between the two atoms. Which is consistent with the fact that one configuration is given once we provide three coordinates for the center of mass and two angles to assign the orientation of the molecule. All rigid linear molecules have $5$ degrees of freedom: for each additional atom added to the first two, there are $3$ additional coordinates for its position, but 3 additional constraints originating from the rigid geometry (one distance plus two angles). In the case of a non-linear molecule made by $3$ atoms, we have to subtract $3$ independent scala constraints of fixed distances from the $9$ degrees of freedom of a three-atom system. Here again, addition of more atoms with rigid distances from the first three, corresponds to add $3$ new coordinates but the same time $3$ more scalar constraints. As a result, a rigid non-linear molecule would require ony six numbers to uniquely identify its space configuration. Of course, we could have more than 5 or 6 degrees of freedom, in the case of poly-atomic molecules, if only part of the distances are fixed. So far, it's just matter of counting atoms and constraints. The real physical question is under which conditions we could consider an intramoleclar distance as fixed? The answer requires Quantum Mechanics. It turns out that every motion requiring excitation $\Delta E \gg k_BT$ is dynamically frozen and the system behaves as if there would be a rigid constraint.
{ "domain": "physics.stackexchange", "id": 69774, "tags": "thermodynamics, statistical-mechanics, molecules, degrees-of-freedom" }
Geodesic equation (free particle)
Question: How to find a coordinate system whose geodesic equation does not have the "Christoffel symbol" term? (i.e. free particle - generalized Newton's second law.) Answer: Suppose you're in a coordinate system where the Christoffels don't vanish at some point. To choose a coordinate system where the Christoffel symbols vanish at a given point $p$, you must apply a Christoffel symbol change of variables: $$0={\bar\Gamma}^k{}_{ij} = \frac{\partial x^p}{\partial y^i}\, \frac{\partial x^q}{\partial y^j}\, \Gamma^r{}_{pq}\, \frac{\partial y^k}{\partial x^r} + \frac{\partial y^k}{\partial x^m}\, \frac{\partial^2 x^m}{\partial y^i \partial y^j}$$ For simplicity, maybe $\frac{\partial x^a}{\partial y^b}=\delta^a_b$ (evaluated at point $p$ and point p only, so this says nothing about the second derivatives), in which case the equation becomes: $$0= \Gamma^k_{ij} + \frac{\partial^2 x^k}{\partial y^i \partial y^j}$$ if $x^k=y^k+C^k_{i j} y^i y^j$ and $p$ is the origin, this tells you immediately that if you choose $C^k_{ij}=-\Gamma^k_{ij}$ then you're in a frame where all of the Christoffels vanish.
{ "domain": "physics.stackexchange", "id": 26721, "tags": "general-relativity, spacetime, differential-geometry, metric-tensor, geodesics" }
Isosurfaces from three dimensional column data: methods
Question: I have just been asked the following question, and I somehow felt short of smart answers. You are given a series of $N$ triplets of values ($P_1$, $P_2$, $P_3$), pertaining to physical measurements. $N$ is in the order of $100$ to several thousands. My colleague would like to draw isosurfaces from this dataset. I have not more (yet) information about the quality of space filling, and the uncertainty around the data. So I suggested to interpolate the given point cloud to a regular grid and use standard isosurface extraction. But there might be other, more direct methods, dealing with point cloud issues. Some related papers: Direct Isosurface Extraction from Scattered Volume Data Isosurface Generation for Large-Scale Scattered Data Visualization Would you suggest sounder approaches (references, software implementing them are a plus) Answer: You are given a series of $N$ triplets of values ($P_1$, $P_2$, $P_3$), pertaining to physical measurements. $N$ is in the order of $100$ to several thousands. My colleague would like to draw isosurfaces from this dataset. This is a little bit broad but perhaps still with some chances of accurate answers. To an extent, it is important to know where the data comes from because it affects how the iso-surfaces are reconstructed. A Digital Elevation Model, for example, is essentially a 2D construction but volumetric data are inherently 3D. In the latter case, a trully 3D surface has to be fitted through the points of the contour which is much more challenging. In both cases, the marching cubes algorithm is the standard way to extract the iso-surface but other methods involving a shrinking elastic surface (basically, 3D generalisations of active contours) have also been used. In terms of practically applying it: For relatively small point-clouds (a few hundreds of points), Blender with Point Cloud Skinner, can produce quick results. I have used this to create a surface through Magnetoencephalography sensors (whose locations are known) which ends up looking like a helmet. If your point cloud describes a convex shape, this will have no problems recovering it. I think that it works similarly to QHull (i.e, extracts the convex hull) but it has a threshold parameter that to an extent allows it to account for holes. Paraview. Incredibly useful and with Python scripting capabilities. The software handles loading the data and several standard operations and allows extracting the results. The good thing about Paraview is that it includes a "raw import" functionality which you can use to load data in whatever format you may have it in. Isosurfaces are produced with "Contour". This is straightforward for volumetric data. If you already have the points, there is a "Table to structural grid" function that will bring the data to the expected form prior to applying "Contours". I would consider this the faster option over the Visualisation Toolkit but if you need more control over the result, you might find the VTK more useful. Meshlab. Meshlab is specifically built for handling huge point clouds including fitting surfaces. Meshlab is interesting because there are various scripts you can use around it, depending on what you are trying to achieve. The Point Cloud Library and its Python bindings. This is similar to Meshlab in that it can handle huge volumes of data and it includes a surface fitting capability. It is however more challenging in setting up and operating. Hope this helps.
{ "domain": "dsp.stackexchange", "id": 7082, "tags": "sampling, interpolation, 3d, software-implementation, visualization" }
lookupTransform after publishing transform seems not right
Question: There seems a problem in the lookupTransform function. The code provided below... Please just ignore the other variables.. I just copied a code snippet from my ROS project. My concern is.. After publishing the transform, I tried looking up the transform to check and see if the transformation has indeed applied successfully! But what turns out was the x,y,z,roll and pitch are consistent but the yaw is inconsistent! First, I tried getting the ros::Time(0) latest available transformation but the yaw was inconsistent. Hoping that the yaw will be consistent if I lookupTransform at the exact time as when the transformation was published, it really just showed inconsistency. Kindly go through the code and take special notice to the codes that I will highlight. ros::Time now = ros::Time::now(); try { tf_.lookupTransform("/map", "/map1Origin", ros::Time(0), transform1); map2_.x = transform1.getOrigin().x() - map2Origin_.x; map2_.y = transform1.getOrigin().y() - map2Origin_.y; map2_.z = transform1.getOrigin().z() - map2Origin_.z; map2_.roll = transform1.getRotation().x() - map2Origin_.roll; map2_.pitch = transform1.getRotation().y() - map2Origin_.pitch; map2_.yaw = transform1.getRotation().z() + m.yaw * CV_PI/180 - map2Origin_.yaw; transform_.setOrigin( tf::Vector3 (map2_.x, map2_.y, map2_.z)); tf::Quaternion q; q.setRPY(map2_.roll, map2_.pitch, map2_.yaw); transform_.setRotation(q); br_.sendTransform(tf::StampedTransform(transform_, now, "/map", "/map2")); std::cerr << "(y)Map Merge Successful [" << m.x << "," << m.y << "," << m.yaw << "].." << std::endl; } catch (tf::TransformException ex) { std::cerr << "Catched error" << std::endl; } // HIGHLIGHT: map2_ is the exact tf that was used to broadcast the transform. std::cerr << "map2_="<<map2_.x << "," << map2_.y << "," << map2_.z << "," << map2_.roll << "," << map2_.pitch << "," << map2_.yaw << std::endl; try { tf_.waitForTransform("/map", "/map2", now, ros::Duration(3.0)); tf_.lookupTransform("/map", "/map2", now, transform1); } catch (tf::TransformException ex) { std::cerr << "Catched error" << std::endl; } // HIGHLIGHT: Transform1 is the queried transform of the newly broadcasted transform. std::cerr << transform1.getOrigin().x() << "," << transform1.getOrigin().y() << "," << transform1.getOrigin().z() << "," << transform1.getRotation().x() << "," << transform1.getRotation().y() << "," << transform1.getRotation().z() << std::endl; Here is the output: http://postimg.org/image/bc3k76vub/ Notice these lines: map2_=-0.05,-0.45,0,0,0,1.93732 -0.05,-0.45,0,0,0,0.824126 The lookup transform is different. Why is that so? Originally posted by Xegara on ROS Answers with karma: 52 on 2014-06-21 Post score: 0 Answer: I managed to find a workaround. I researched on other possible answers. I realized that there are two representations of the angles -- quaternion and euler. What I inputted was expressed in euler -- RPY -- but then when we get the rotation, it returns a quaternion. On the steps that I did, I did not use the constructor in tf::Quaternion even though it did accept parameters as yaw, pitch, roll. I tried that but the RPY keeps on changing.. Instead I used the member function of tf::Quaternion in setting the yaw, pitch, roll which totally worked. So here are the steps: 1. To broadcast RPY transformation in radians tf::Quaternion q; q.setRPY(roll,pitch,yaw); transform.setRotation(q); 2. To read the RPY in radians tf.waitForTransform("turtle1", "carrot1", now, ros::Duration(3) ); tf.lookupTransform("turtle1", "carrot1", now, transform_); transform_.getBasis().getRPY(roll,pitch,yaw); #include <ros/ros.h> #include <tf/transform_broadcaster.h> #include <tf/transform_listener.h> #include <tf/LinearMath/Matrix3x3.h> int main(int argc, char**argv){ ros::init(argc, argv, "sample"); ros::NodeHandle node; tf::TransformBroadcaster br; tf::StampedTransform transform; tf::StampedTransform transform_; tf::TransformListener tf; ros::Rate rate(10.0); int i = 0; double roll = 0, pitch = 1, yaw = 3.14/180*90; while(node.ok()){ transform.setOrigin( tf::Vector3(0.0,0.0,0.0)); tf::Quaternion q; q.setRPY(roll,pitch,yaw); transform.setRotation(q); ros::Time now = ros::Time::now(); br.sendTransform(tf::StampedTransform(transform, now, "turtle1", "carrot1")); std::cerr << "One: " << roll << "," << pitch << "," << yaw << std::endl; tf.waitForTransform("turtle1", "carrot1", now, ros::Duration(3) ); tf.lookupTransform("turtle1", "carrot1", now, transform_); transform_.getBasis().getRPY(roll,pitch,yaw); std::cerr << "Two: " << roll << "," << pitch << "," << yaw << std::endl; rate.sleep(); } return 0; } Hope this helps other people too. :-) Originally posted by Xegara with karma: 52 on 2014-06-22 This answer was ACCEPTED on the original site Post score: 3
{ "domain": "robotics.stackexchange", "id": 18336, "tags": "ros, learning-tf" }
How can I make big confusion matrices easier to read?
Question: I have recently published a dataset (link) with 369 classes. I ran a couple of experiments on them to get a feeling for how difficult the classification task is. Usually, I like it if there are confusion matrices to see the type of error being made. However, a $369 \times 369$ matrix is not practical. Is there a way to give the important information of big confusion matrices? For example, usually there are a lot of 0s which are not so interesting. Is it possible to sort the classes so that most non-zero entries are around the diagonal in order to allow showing multiple matrices which are part of the complete confusion matrix? Here is an example for a big confusion matrix. Examples in the Wild Figure 6 of EMNIST looks nice: It is easy to see where many cases are. However, those are only $26$ classes. If the whole page was used instead of only one column this could probably be 3x as many, but that would still only be $3 \cdot 26 = 78$ classes. Not even close to 369 classes of HASY or 1000 of ImageNet. See also My similar question on CS.stackexchange Answer: You can apply a technique I described in my masters thesis (page 48ff) and called Confusion Matrix Ordering (CMO): Order the columns/rows in such a way, that most errors are along the diagonal. Split the confusion matrix into multiple blocks such that the single blocks can easily printed / viewed - and such that you can remove some of the blocks because there are to few data points. Nice side effect: This method also automatically clusters similar classes together. Figure 5.12 of my masters thesis shows that: You can apply confusion matrix ordering with clana
{ "domain": "datascience.stackexchange", "id": 2251, "tags": "visualization, confusion-matrix" }
Do any good theories exist on why the weak interaction is so profoundly chiral?
Question: I find the profound asymmetry in the sensitivity of left and right chiral particles to be one of the most remarkable analytical observations captured in the Standard Model. Yet for some, I've not found much in the way of discussions that worry about why of such as truly remarkable fact is true. I can't help but be reminded a wee bit of views on the motions of planets before Newton... you know, "it be Angels that do push them around, ask ye not why!" Seriously, I know the Standard Model has a lot of givens in it... but surely someone has mulled over why the universe might exhibit such a non-intuitive and thus interesting asymmetry? And perhaps even developed some solid speculations or full theories on why such in-your-face chiral asymmetries exist in nature? Do such theories exist, or is this asymmetry truly just a "given" and nothing more? Answer: surely someone has mulled over why the universe might exhibit such a non-intuitive and thus interesting asymmetry? Oh yes, definitely. I have for one (though I haven't made a significant contribution to the question)! :) There are a number of "left-right symmetric" models out there which usually involve a group like $SU(2)_L \times SU(2)_R$ where the $SU(2)_R$ gets spontaneously broken by a Higgs mechanism. You'll find a number of highly cited papers in InspireHEP. I've always thought these models sounded very interesting but haven't yet had the opportunity to work on them! If you find anything good let me know. :)
{ "domain": "physics.stackexchange", "id": 6946, "tags": "standard-model, higgs, cp-violation, electroweak, cpt-symmetry" }
Laravel controller Single Responsibility principle
Question: I have a method in Laravel controller. How can I improve this code? How can I implement single responsibilty here or maybe there are some other tips I can use in this code? I know that controller methods should be kept as small as possible,but I don't know if it's better idea to split this code in separate methods. public function displayData(CityRequest $request) { if (isset($request->validator) && $request->validator->fails()) { return redirect()->back()->withErrors($request->validator->messages()); } else { $cityName = $request->city; $cityName = str_replace(' ','-',$cityName); $cityName = iconv('UTF-8', 'ISO-8859-1//TRANSLIT//IGNORE', $cityName); $response = $this->guzzle->request('GET', asset("/api/products/recommended/" . $cityName)); $response_body = json_decode($response->getBody()); if(isset($response_body->error)) { return redirect()->back()->withErrors(['city'=>$response_body->error]); } $city = $response_body->city; $condition = $response_body->current_weather; $products = $response_body->recommended_products; return back()->with(['city' => $city, 'condition' => $condition, 'products' => $products]); } } Answer: Here are some simple tips on how you can improve your code. There are still more you can do to improve it, but is a good start and it helps you get closer to the single responsibility principle. Move the API request to a service (App\Services\ProductsApi\Client). Move the formatting of city to a helper (App\Helpers\Format). Consider moving the formatting of city to CityRequest so the controller doesn't have to do it. No need for new variables, use the properties returned by $response instead. No need for else-statement, since the if-statement has return. Your controller: use App\Services\ProductsApi\Client; public function __construct(Client $client) { $this->client = $client; } public function displayData(CityRequest $request) { if (isset($request->validator) && $request->validator->fails()) { return redirect()->back()->withErrors($request->validator->messages()); } $response = $this->client->getRecommended( App\Helpers\Format::slugify($request->city) ); if ($response->error) { return redirect()->back()->withErrors([ 'city' => $response->error ]); } return back()->with([ 'city' => $response->city ?? null, 'condition' => $response->current_weather ?? null, 'products' => $response->recommended_products ?? null, ]); } Helper: public static function slugify($value) { return Illuminate\Support\Str::slug( iconv('UTF-8', 'ISO-8859-1//TRANSLIT//IGNORE', $value), '-' ); } (None of the code above is tested)
{ "domain": "codereview.stackexchange", "id": 37179, "tags": "php, laravel" }
Why does welding produce UV light?
Question: Looking directly at a welder is dangerous because large amounts of UV light is produced. What makes this light? Is it electrons from the current that excites metal atoms, and these atoms sends out UV light? Or does the extreme heat have anything to do with this? Is it dangerous to look directly at a nail being melted (glowing brightly) by hundreds of amperes? Is it dangerous looking at an oxyhydrogen explosion in itself, or could it be dangerous if the explosion touches other substances exerting UV light because of the extreme heat of the explosion? Answer: All materials emit thermal radiation (such as light). The hotter the material, the more the radiation is shifted to high frequencies (shorter wavelengths). The radiation comes from oscillating electrons (regardless of whether there is an electric current). Welding reaches temperatures high enough to cause significant emission of UV light. Oxyacetylene and oxyhydrogen flames can both be over 3000 C degrees and therefore can produce hazardous amounts of UV light. Arc welding is even hotter and produces more UV light. Running hundreds of amps of current through a nail would be similar to arc welding. http://www.mapfre.com/fundacion/html/revistas/seguridad/n124/articulo1En.html
{ "domain": "physics.stackexchange", "id": 12252, "tags": "thermodynamics, electricity, visible-light, electric-current" }
Can I use hyperparameters obtained from tuning in R in the final model training in python?
Question: I am currently creating and evaluating several models for a dataset. Because I am more versed in R and like the tidymodel workflow, I am using tidymodels and tune to find the optimal hyperparameters for e.g. a lightgbm model. However because the dataset is very large and R not as performant as python, I do this using a sample of the full data. Once I've obtained the optimal parameters, could I use those to train the final model on the full data in python? I don't see a reason why not but I am not sure I am overlooking something. Answer: If the model being retrained in Python uses different data, then the "optimal" hyperparameters may be different. All hyperparameters may not be invariant to size of data. The larger data may contain more noise/signal or different noise/signal. My guess - if your sample is a random sample from the larger data and you have trained multiple samples and the hyperparameters are consistent, then you are probably close. Also, depending on your use case, "optimal" may really mean good enough. The business decisions that are derived from the predictions may not improve with a slightly "better" model. You can use the sampled hyperparameters, train with the full data, examine the decisions that are being made from the predictions, and go from there.
{ "domain": "datascience.stackexchange", "id": 11015, "tags": "hyperparameter-tuning" }
False-color wavelength assignments in this "drop-dead gorgeous" image of NGC 2903?
Question: C|Net's Hubble spots drop-dead gorgeous spiral galaxy tucked into Leo links to NASA's Hubble Spots Stunning Spiral Galaxy which shows the image below. The caption on the NASA page doesn't mention the color coding, nor does it mention a reference to a technical page for the image's construction or history. Question: Is this close to a "straight RGB" image of NGC 2903, or were filters used at certain wavelengths, possibly including IR or UV, in order to highlight certain regions and emissions, then assigned false colors. If the latter, is it a standard color coding? The NASA page says: NGC 2903 is located about 30 million light-years away in the constellation of Leo (the Lion), and was studied as part of a Hubble survey of the central regions of roughly 145 nearby disk galaxies. This study aimed to help astronomers better understand the relationship between the black holes that lurk at the cores of galaxies like these, and the rugby-ball-shaped bulge of stars, gas and dust at the galaxy’s center — such as that seen in this image. Text credit: ESA (European Space Agency) Click to view full size! Answer: Spacetelescope.org indicates under "Colours & filters" that the blue channel of the image is from a 658 nm filter (red) and the red channel is from an 814 nm filter (near infrared). The cyan and orange rows of the table also list these filters; presumably the green channel is a blend of the two. The ACS instrument handbook says filter F658N is narrow, passing only NII or slightly redshifted Hα emission (Table 5.1, Figure 5.4). Filter F814W is wide, passing wavelengths between 710 nm and 960 nm (Table 5.2, Figure 5.1). NGC 2903 is one of the brighter galaxies not in the Messier catalog. Here is a wider angle, more natural RGB image by Bob Franke, rotated for easier comparison. The reddish emission nebulae here appear blue in the HST image.
{ "domain": "astronomy.stackexchange", "id": 3709, "tags": "photography, hubble-telescope, image-processing" }
Bond character and bond angles
Question: Somewhere along the line of researching why the $\ce{Cl-C-H}$ bond angle in methyl chloride is less than what is predicted ($\pu{109.5^\circ}$), I ran into a book which said that the electron withdrawing effects of the chlorine atom gave the $\ce{C-Cl}$ bond more "p-character". What does this mean? Is he simply using this term to describe the elongated nature of the bond - i.e. s-orbitals are spherical and p-orbitals not spherical but rather elongated (at least as depicted by cartoon drawings in textbooks)? Is the author saying that the electron-withdrawing effects of chlorine cause the electrons to spend more time further away from the nucleus; hence the "p-character"? I understand that in a spherical shell, electrons are basically equidistant from the nucleus. But in an elongated, "p-character" shell, electrons can go further away from the nucleus. Answer: S orbitals are lower in energy than P orbitals. Electrons prefer to be in as low an energy orbital as possible. Therefor, when we mix S and P orbitals to make hybrid orbitals, the more S character the orbital has the lower the energy of electrons occupying that orbital. Let me use chloroform, $\ce{CHCl3}$, as an example to make my point (I have data for it and the $\ce{C-Cl}$ bonds are analogous to those in methyl chloride). The chlorine atom is electronegative, so electrons in the $\ce{C-Cl}$ bond will spend more of their time closer to chlorine - away from the carbon, then they would in say a $\ce{C-H}$ bond. So if the electron density in the hybrid orbital contributed by carbon to the $\ce{C-Cl}$ bond is going to be reduced, why put as much S character in it? Instead save that S character for other hybrid orbitals emanating from the carbon that have higher electron density in them. It turns out in the case of chloroform, knowing the various bond angles and the $\ce{C_{3V}}$ symmetry of the molecule, that the carbon portion of the $\ce{C-Cl}$ has a hybridization of about $\ce{sp^4}$ (more P character and less S character just as we predicted since it has a lower electron density) and the $\ce{C-H}$ bonds are approximately $\ce{sp^{2.75}}$ (using that S character we saved from the $\ce{C-Cl}$ bond to stabilize the electrons in these 3 orbitals, just as we predicted). The same effects play out in methyl chloride, just not as dramatically (e.g. the hybridization index of the $\ce{C-Cl}$ bond will be greater than 3 but less than 4).
{ "domain": "chemistry.stackexchange", "id": 1242, "tags": "bond, vsepr-theory" }
Dynamically sized C array
Question: I've been working in C for a while and have decided to implement my own dynamically sized array as an exercise and to actually be used in a project. I have also written Doxygen documentation for the library, which I have removed from the code here for readability. You can find the full code with documentation here. I know I should comment inside the actual methods of the library but I haven't gotten around to it yet. example.c - Example dynarray usage #include <stdlib.h> #include <time.h> #include <stdio.h> #include <math.h> #include "dynarray.h" int main(int argc, char **argv) { // The array dynarray_t array; // The current index in the array int index; // Seed the random number generator with the current time srand(time(NULL)); // Initialize the array with 8 elements if (!dynarray_init(&array, 8)) { printf("Failed to initialize array\n"); return EXIT_FAILURE; } // Allocate memory for the value to be stored in the array int *value = malloc(sizeof(int)); // Add 10 elements to the array for (index = 0; index < 10; index++) { // Assign the value to be stored in the array *value = rand() % 100; // Add the element to the array if (!dynarray_add(&array, value)) { printf("Failed to add element %d to array\n", index); // Free all elements in the array dynarray_deep_free(&array); // Free the value free(value); return EXIT_FAILURE; } // Allocate memory for the next value value = malloc(sizeof(int)); } // Assign the value to be stored in the array *value = 9999; // Set the first element if (!dynarray_set(&array, 0, value)) { printf("Failed to set element 0 in array\n"); // Free all elements in the array dynarray_deep_free(&array); // Free the value free(value); return EXIT_FAILURE; } // The size of the array int size = array.size; // Iterate the elements in the array for (index = 0; index < size; index++) { // Retrieve the current element of the array if (!dynarray_get(&array, index, (void **) &value)) { printf("Failed to retrieve element %d from the array\n", index); // Free all elements in the array dynarray_deep_free(&array); return EXIT_FAILURE; } // Output the current element printf("[%2d] = %2d\n", index, *value); } // Output information about the array printf("Current size: %d elements\n", array.size); printf("Current capacity: %d elements\n", array.capacity); printf("Initial capacity: %d elements\n", array.initial_capacity); printf("Resized %d times\n", (int) floor(log(array.capacity / array.initial_capacity) / log(2))); // Free all elements in the array dynarray_deep_free(&array); return EXIT_SUCCESS; } dynarray.h #ifndef _DYNARRAY_H_ #define _DYNARRAY_H_ #define DYNARRAY_SUCCESS 1 #define DYNARRAY_ERROR (!DYNARRAY_SUCCESS) struct dynarray { void **data; int initial_capacity; int capacity; int size; }; typedef struct dynarray dynarray_t; unsigned int dynarray_init(dynarray_t *array, int initial_capacity); unsigned int dynarray_size(dynarray_t *array, int *size); unsigned int dynarray_add(dynarray_t *array, void *item); unsigned int dynarray_set(dynarray_t *array, int index, void *item); unsigned int dynarray_get(dynarray_t *array, int index, void **item); unsigned int dynarray_remove(dynarray_t *array, int index); unsigned int dynarray_free(dynarray_t *array); unsigned int dynarray_deep_free(dynarray_t *array); #endif dynarray.c #include <stdlib.h> #include "dynarray.h" unsigned int dynarray_init(dynarray_t *array, int initial_capacity) { if (array == NULL || initial_capacity < 1 || (initial_capacity % 2) != 0) { return DYNARRAY_ERROR; } void **data = malloc(sizeof(void *) * initial_capacity); if (data == NULL) { return DYNARRAY_ERROR; } array->data = data; array->initial_capacity = initial_capacity; array->capacity = initial_capacity; array->size = 0; return DYNARRAY_SUCCESS; } unsigned int dynarray_size(dynarray_t *array, int *size) { if (array == NULL) { return DYNARRAY_ERROR; } *size = array->size; return DYNARRAY_SUCCESS; } unsigned int dynarray_resize(dynarray_t *array, int capacity) { if (array == NULL || array->data == NULL) { return DYNARRAY_ERROR; } void **data = realloc(array->data, sizeof(void *) * capacity); if (data == NULL) { return DYNARRAY_ERROR; } array->data = data; array->capacity = capacity; return DYNARRAY_SUCCESS; } unsigned int dynarray_add(dynarray_t *array, void *item) { if (array == NULL || array->data == NULL || item == NULL) { return DYNARRAY_ERROR; } int capacity = array->capacity; int size = array->size; if (size >= capacity) { if (dynarray_resize(array, capacity * 2) != DYNARRAY_SUCCESS) { return DYNARRAY_ERROR; } } array->data[array->size] = item; array->size++; return DYNARRAY_SUCCESS; } unsigned int dynarray_set(dynarray_t *array, int index, void *item) { if (array == NULL || array->data == NULL || index < 0 || index >= array->capacity || item == NULL) { return DYNARRAY_ERROR; } array->data[index] = item; return DYNARRAY_SUCCESS; } unsigned int dynarray_get(dynarray_t *array, int index, void **item) { if (array == NULL || array->data == NULL || index < 0 || index >= array->size || item == NULL) { return DYNARRAY_ERROR; } *item = array->data[index]; return DYNARRAY_SUCCESS; } unsigned int dynarray_remove(dynarray_t *array, int index) { if (array == NULL || array->data == NULL || index < 0 || index >= array->size) { return DYNARRAY_ERROR; } void **data = array->data; int initial_capacity = array->initial_capacity; int capacity = array->capacity; int size = array->size; data[index] = NULL; if (index < (size - 1)) { int current; for (current = index; current < (size - 1); current++) { data[current] = data[current + 1]; data[current + 1] = NULL; } } array->size = --size; int new_capacity = capacity / 2; if (size > 0 && size <= new_capacity && new_capacity >= initial_capacity) { if (dynarray_resize(array, new_capacity) != DYNARRAY_SUCCESS) { return DYNARRAY_ERROR; } } return DYNARRAY_SUCCESS; } unsigned int dynarray_free(dynarray_t *array) { if (array == NULL || array->data == NULL) { return DYNARRAY_ERROR; } free(array->data); array->data = NULL; array->capacity = 0; array->size = 0; return DYNARRAY_SUCCESS; } unsigned int dynarray_deep_free(dynarray_t *array) { if (array == NULL || array->data == NULL) { return DYNARRAY_ERROR; } void **data = array->data; int size = array->size; int index; for (index = 0; index < size; index++) { free(data[index]); } dynarray_free(array); return DYNARRAY_SUCCESS; } Answer: Should the dynarray type be opaque? Personally I have my doubts, but I have found it useful to make library types opaque in development and testing, as an aid to identifying missing library functionality. It's common practice in C to have indices and sizes be size_t, you might want to do that (and eliminate all the index<0 checks). A piece of pedantry, perhaps, but I think 'empty' would be better than free as dynarray_free doesn't free the container, just the contents. Why rule out adding NULL data in dynarray_add and dynarray_set? It could be reasonable to want to first create places for items to live, and then in another pass create the items themselves. A user can only find out the index of data added with dynarry_add by using the size field before the call or the size field-1 after the call; this seems a bit ugly. Further to Ratchet Freak's point about free, even if the intention is that only the one heap well ever be used, the items themselves could contain allocated data. You should consider passing a itemfree function to dynarray_deep_free and dynarray_remove. If I was using the library I'd want a function that would call a function for every element of the array, rather than having to code such loops myself. Strictly speaking, identifiers ending in _t are reserved for the OS/standard libraries. It's good practice to not define identifiers like that. It can be irritating to have to remember a different name for 0 and 1 for error returns for all the libraries a program uses. Since you only have two values, I think it would be better to be explicit in the documentation, and code, that these are 0 and 1, and dispense with the names. Why require that initial_capacity be even in dynarray_init?
{ "domain": "codereview.stackexchange", "id": 13456, "tags": "c, array, memory-management" }
Simple string compression in Python
Question: This is my solution to exercise 1.6 in Cracking the Coding Interview. I am interested in receiving feedback on the coding style and time/space complexity. The exercise statement is: Implement a method to perform basic string compression using the counts of repeated characters. For example, the string aabcccccaaa would become a2b1c5a3. If the compressed string would not become smaller than the original string your method should return the original string. You can assume the string has only uppercase and lowercase letters (a-z). import unittest from collections import defaultdict from math import log10, floor def compress_string(data: str) -> str: """A function to perform basic string compression using the counts of repeated characters If the compressed string is not smaller than the original string the function returns the original string. The assumption is that the string has only uppercase and lowercase letters (a-z).""" curr_char_pos = 0 frequencies = defaultdict(int) compressed_string = [] compressed_string_size = 0 for idx in range(len(data)): if compressed_string_size >= len(data): break if data[idx] == data[curr_char_pos]: frequencies[curr_char_pos] += 1 else: compressed_string.append(data[curr_char_pos]) compressed_string.append(frequencies[curr_char_pos]) compressed_string_size += floor(log10(frequencies[curr_char_pos])) + 2 curr_char_pos = idx frequencies[curr_char_pos] = 1 compressed_string.append(data[curr_char_pos]) compressed_string.append(frequencies[curr_char_pos]) compressed_string_size += floor(log10(frequencies[curr_char_pos])) + 2 if compressed_string_size < len(data): compressed_data = ''.join(str(char) for char in compressed_string) return compressed_data else: return data class MyTest(unittest.TestCase): def test_compress_string(self): self.assertEqual(compress_string('aabcccccaaa'), 'a2b1c5a3') self.assertEqual(compress_string('abcd'), 'abcd') self.assertEqual(compress_string('tTTttaAbBBBccDd'), 'tTTttaAbBBBccDd') Answer: This is called Run-length-encoding and there are many implementations of this on the Internet, for example: http://rosettacode.org/wiki/Run-length_encoding#Python I have written a solution myself just now but posting it would not really help (it would be like adding another drop in the ocean) so I will comment your code thoroughly instead: optimization? This was probably added as an optimization: if compressed_string_size >= len(data): break But except for inputs with no consecutive runs whatsoever, the compressed size will exceed the real size only near the end of the compression. Does the overhead of the additional conditional check each loop iteration take less time than the iterations saved? If it does, does it save enough time to justify adding complexity? I suggest you first worry about writing a working program and then apply optimizations with profiling (execution timing). Additional variable not needed compressed_string_size = 0 This was probably added as an optimization too, but is it worth it? No, because len runs in O(1) time complexity that is, len does not need to scan the whole list, because lists remember their length, so calling len is almost instantaneous, in fact using that variable is likely a pessimization as log is a very expensive mathematical operation.
{ "domain": "codereview.stackexchange", "id": 22028, "tags": "python, strings, python-3.x, complexity, compression" }
What force needs to be applied to produce twice as much acceleration as already present in a body?
Question: Let the current body of mass $m$ be moving with acceleration $a$, produced by applying $F = ma$. So, I thought that if I apply force $F$ again (same magnitude and direction as before), I would get acceleration $a$ (as $a = F/m$). So, $a$ added to the already present $a$ would give $2a$. So, I answered to this question: "Same as before force needs to be applied". But, it turned out my answer was wrong. I also saw What does an applied force on an already accelerating object do?, but it seemed to show my answer correct, when it actually isn't. EDIT: Their is no friction between the mass and surface on which it is moving. Answer: An object will not accelerate without an non-zero net force. If you accelerate an object with a constant force, and you keep applying that force, then the object will keep accelerating at the same rate. The answer is twice the force. $$a = \dfrac{F}{m}$$ $$2a = \dfrac{2F}{m}$$
{ "domain": "physics.stackexchange", "id": 22411, "tags": "newtonian-mechanics, forces, acceleration" }
What is wrong with my circuit for the fourth-root of $X$?
Question: For learning purposes I would like to hand-craft my own circuit for the fourth-root of $X$, using $S$, $T$, and $\sqrt X$ gates. Note that $\sqrt[4]X$ is of order eight while $\sqrt X$ is of order four, and we can use two ancillas to temporarily store the respective eigenspace. The recipe that I have been following is to: Hit both ancilla with a Hadamard $H$, Have the lower-order ancilla perform a controlled $\sqrt X$ on the target with the higher-order ancilla perform a controlled $X$ on the target, Perform a QFT on the ancillas, Phase the ancillas, Perform an IQFT on the ancillas, Have the higher-order ancilla perform another controlled $X$ while the lower-order ancilla performs a controlled $X^{-1/2}$, and Hit both ancillas with an $H$ to revert: I try the above circuit in Quirk; although the ancillas properly revert $|0\rangle$, it gives me a different answer on the target than Quirk's native $\sqrt[4] X$. On the Bloch sphere my recipe says the target's $\theta$ is $135^\circ$ while the native $\sqrt[4] X$ should be at $45^\circ$. Did I get my endian-convention wrong on the QFT (red)? Or did I do the uncomputing wrong? Did I not phase properly (purple)? Here's a Quirk snapshot to compare. The hand-crafted circuit is on the first three qubits with the ancilla the first two qubits and the target the third qubit, while the native $\sqrt[4] X$ for comparison is the fourth qubit. The Bloch spheres/amplitudes are different between the third qubit (my circuit) and Quirk's native circuit: Answer: With the gates you allowed, you can conjugate the $T$ so that it rotates around $X$ instead of $Z$. That gives you $\sqrt[4]X$. In your circuit I think you got your QFTs backwards, so the phase estimation is estimating the negation of the phase. This is negating the rotation you apply, and making both bits of the phase estimation register relevant instead of just one as should be the case for this case. Also I think you got your endianness backwards. The S and the T in the center might be in the wrong order. This might be because you didn't include swaps as part of your decomposed QFTs. An endian error and a sign error. The eternal struggle against mundane mistakes continues. Fixing them both makes it work. or, using the built-in QFT (and switching some other endian stuff):
{ "domain": "quantumcomputing.stackexchange", "id": 3571, "tags": "circuit-construction, quirk" }
What is a Keplerian shear
Question: I was reading an article on Gas Giants and how some form close to their host star. One line in the article says: A region of the proto-planetary disk will be susceptible to gravitational instability if the free-fall time due to self gravity is sufficiently rapid to overcome Keplerian shear. I tried to find a wiki on what Keplerian sheer was, not much luck. Does it go by another name, or can some one explain what it is ? Saw a paper that mentioned it about ergodic theory (no idea what that theory is) it's way above my knowledge to get a general idea of what it is from the paper. Answer: Consider a non-rigid rotating system where the rotational velocity is given by the Kepler speed. In such system, adjacent orbits have a different orbital speed of $$ v \approx \sqrt{\frac{GM}{r}}.$$ Thus the orbital velocity drops the further you go outward. Place yourself onto one particle and you see that the elements outward are slower than you, while the elements further inward are faster. A velocity shear is $dv / dr$. Adding "Keplerian" tells you (1) how fast the velocity is changing, and (2) that the shear is in the radial direction. A shear $dv / dz$ would be in a different direction, so it would not make sense to call it Keplerian because the Keplerian velocity depends only on radial distance. In a protoplanetary disk, such shear motion counter-acts self-gravity of the small particles in a way it has mass further inside rotate quicker. With sufficiently dense disk, thus solid mass, and sufficient gas in the disc, this may actually lead to hydrodynamic instabilities forming vortices which locally counter this shear - and essentially only then allow planetesimals to form. In order to assess stability in such rotating disc with shear, one uses the Toomre criterion (which is basically the Jeans stability criterion expanded to a rotating disc). Similarily Keplerian shear can be observed in the Saturnian rings where in the absence of gas and hydrodynamics it basically stops the growth of the so-called moonlets.
{ "domain": "astronomy.stackexchange", "id": 6183, "tags": "exoplanet, planetary-formation" }
Wider sharpening filter
Question: The kernel of a sharpening filter of size 3x3 may look like $$ \frac{1}{4}\begin{bmatrix} 0 & 1 - \alpha & 0\\ 1 - \alpha & 4\alpha& 1 - \alpha\\ 0 & 1 - \alpha & 0 \end{bmatrix} $$ where $\alpha > 1$ What is a natural extension for size $(2n + 1)\times(2n + 1)$? The sum of all elements should be equal to 1, and the matrix should be symmetric. Answer: There are two basic forms of sharpening filter: I - Laplace(I) k I - (k-1) smooth(I) where I is the image, is a positive value, and k is a value larger than 1. The first form is referenced by Laurent in his answer, you can use the Laplacian of Gaussian to construct it. The second one is the classical unsharp masking, and was used in photography before computers existed. Both of these can explain the filter in the OP, as both methods do more or less the same thing. But as you scale them, they become different. The second form is the easiest to control. It has two parameters: the size of the smoothing filter (and its shape), and the k. The larger k, the stronger the sharpening effect. The larger the smoothing, the stronger the sharpening effect as well, but it also selects for the size of the edges that are sharpened.
{ "domain": "dsp.stackexchange", "id": 11329, "tags": "image-processing, highpass-filter, kernel" }
Creating a list of new areas for patches of conserved land within a watershed
Question: I'm writing code to create a list of new areas for patches of conserved land within a watershed based on the current size distribution of conserved land patches that are in the watershed now. I'm generating a new random area based on a normal probability density function fitted to the log-transformed distribution of the sizes of currently conserved land. Then I have a list of the areas of tax parcels in the region, and I want to randomly grab a tax parcel that is within ± 10% of the randomly generated area. The loop will continue until I've generated a total 4,735 hectares of new areas for conservation. The code works, and runs very quickly if I lower the threshold for total new conserved area to ~500 hectares. However, for my current endpoint (4,735 new hectares), it runs so slowly that I haven't actually ever run it all the way through (waited an hour+). Any ideas on how I can streamline this code to get it to run faster are greatly appreciated! # Set endpoint of total new area new_cons_area = 4735.44 # Get new random areas until they total the new_cons_area sum_rand_area = 0 new_rand_area = [] rand_sizes = [] index_list = [] while sum_rand_area < new_cons_area: # Get new random size based on current distribution of sizes of conservation patches new_log_size = np.random.normal(param_log[0],param_log[1],1) # Transform back to actual area new_size = np.e**new_log_size[0] # Set buffer boundaries min_size = new_size * 0.9 max_size = new_size * 1.1 # Get length of new_rand_area list current_length = len(new_rand_area) # Set up loop to grab sizes from the current list of tax parcels suitable for conservation if new_size >= 3.33: # Run this loop until a new size is added to the list of new random areas while len(new_rand_area) == current_length: # Grab a random index from the list of parcel areas rand_index = random.randrange(len(parcel_areas)-1) # See if the parcel area associated with the random index matches the randomly generated size if min_size <= parcel_areas[rand_index] <= max_size: # If criteria is met, add the randomly generated size to a list rand_sizes.append(int(new_size)) # Add the randomly generated index to list so you can grab the object ids in the next step index_list.append(rand_index) # Add the actual tax parcel area to a list new_rand_area.append(parcel_areas[rand_index]) # Add the area to the total area of new land to be conserved sum_rand_area += parcel_areas[rand_index] Answer: The problem is that random.randrange(len(parcel_areas)-1) most of the times returns an area which fails the min_size <= parcel_areas[rand_index] <= max_size test. Therefore it is logical to restrict the random selection to the set of areas which satisfy the requirements beforehand, and eliminate the inner loop completely. Sort the list of areas by size. Once you determined min_size and max_size, find corresponding bounds (with binary search), and select randomly within these bounds.
{ "domain": "codereview.stackexchange", "id": 18153, "tags": "python, random, time-limit-exceeded" }
Can anyone tell me how to control the DC motors using ROS and Arduino
Question: i am fairly new to ROS. I'm trying to control 2 DC motors through ROS and Arduino. I've been using rosserial package and also i've tried ros_arduino_bridge. Originally posted by antonchi on ROS Answers with karma: 1 on 2017-02-12 Post score: 0 Original comments Comment by NEngelhard on 2017-02-12: The title is fort a short 'title', could you please try to edit your question into a title (one line) and a question (as long as needed?) Answer: On the smaller Arduinos with only 2kb RAM I've had trouble using rosserial because of the amount of RAM consumed, so I've abandoned that for ros_arduino_bridge. If you are using a motor driver not already supported by ros_arduino_bridge, you'll have to modify the code in a couple of places. First, in ROSArduinoBridge.ino, look at this code (lines 48-64): //#define USE_BASE // Enable the base controller code #undef USE_BASE // Disable the base controller code /* Define the motor controller and encoder library you are using */ #ifdef USE_BASE /* The Pololu VNH5019 dual motor driver shield */ #define POLOLU_VNH5019 /* The Pololu MC33926 dual motor driver shield */ //#define POLOLU_MC33926 /* The RoboGaia encoder shield */ #define ROBOGAIA /* Encoders directly attached to Arduino board */ //#define ARDUINO_ENC_COUNTER #endif You need to uncomment the definition of USE_BASE, comment out ROBOGAIA, and then add your own define for your motor controller, say #define RKI1340_MOTOR_CONTROLLER. Then, in motor_driver.ino, you'll need to add code to support that motor driver. The bottom of that file looks like this: #else #error A motor driver must be selected! #endif You'll need to add an #elif before the #else for RKI1340_MOTOR_CONTROLLER, and add a definition of void setMotorSpeeds(int leftSpeed, int rightSpeed). There are other changes required to support encoders different from the ones already coded. You might compare with the changes Marco Walther made to support the Pololu A-Star board with onboard motor driver. (I've contributed to his mods.) https://github.com/mw46d/ros_arduino_bridge/tree/indigo-devel/ros_arduino_firmware/src/libraries/ROSArduinoBridge Originally posted by Mark Rose with karma: 1563 on 2017-02-12 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 26994, "tags": "ros, arduino, rosserial" }
17 Joules of Energy From a Mouse Trap
Question: Do you think it would be possible to get 17 joules out of a standard size mouse trap. By my math, it is a torsion coefficient of 3.45 or so out of the spring. Answer: I haven't used a mousetrap for several decades, but as I recall the moving arm is about 5cm long, so the tip moves 0.05$\pi$ or about 0.16m. To get 17J of work the force at the tip of the arm would need to be 100N. I'm fairly sure the force isn't anything like that great. I remember being able to pull the arm back with one finger. I would guess the force is nearer 10N, so you'd only get around 2J out.
{ "domain": "physics.stackexchange", "id": 93008, "tags": "homework-and-exercises, energy, newtonian-mechanics, rotational-dynamics, spring" }
Has there been a big change in 1983 when the definition of the metre changed?
Question: The metre was defined at the end of the $18^{th}$ century as the ten-millionth part of the quarter of the meridian (from the north pole to equator). Then, from $1983$ the definition changed for the distance traveled by light in a rather short elapse of seconds. My question is, did the value of the metre change in absolute in 1983 ? My textbook talks about a difference of $0,229$mm (I couldn't understand if it was "in absolute" or if it was a difference between the Delambre & Mechain measured circumference and the satelite-measured circumference expressed in a fixed metre system). Answer: No, there was not. The meter has been redefined multiple times, and, each time, the intent was to keep the actual length as close as possible to what it was before. Not long after it was originally defined based on the length of the meridian, it was redefined to be the distance between two marks on the official platinum-iridium meter bar in Paris. That definition stuck until 1960, when it was defined to be a certain number of wavelengths of light produced by a certain electron transition in a certain isotope of Krypton. The specified number of wavelengths was 1,650,763.73. Note that, if they were willing to allow the length of the meter to change by almost a quarter of a mm, they would have specified that number to fewer decimal places. Finally, it was redefined to be 1/299,792,458 of a light-second. Again, they specified that many digits because they didn't want to change the length of a meter by more than a tiny fraction. In fact, they didn't want to change it at all - they just wanted to make it more precise. At the end of the History of the meter article in Wikipedia, there is a table of the different definitions. It shows the precision of each definition, but not an absolute difference. That is because each one was designed to be within the range of error of the definition before. It is as if you bought a new meter stick with thinner tick marks than the old one, so you can make more precise measurements. But each tick mark in the new ruler is basically in the middle of the corresponding tick mark in the old ruler.
{ "domain": "physics.stackexchange", "id": 100152, "tags": "definition, history, si-units, metrology, length" }
Rust Console RPN Calculator
Question: I've been trying to teach myself Rust, and I decided that a simple console calculator would be a good project to learn from. use std::io; extern crate regex; #[macro_use] extern crate lazy_static; use regex::Regex; fn main() { loop { println!("Enter input:"); let mut input = String::new(); io::stdin().read_line(&mut input) .expect("Failed to read line"); let tokens = tokenize(input); let stack = shunt(tokens); let res = calculate(stack); println!("{}", res); } } #[derive(Debug)] #[derive(PartialEq)] enum Token { Number (i64), Plus, Sub, Mul, Div, LeftParen, RightParen, } /// Tokenizes the input string into a Vec of Tokens. fn tokenize(mut input: String) -> Vec<Token> { lazy_static! { static ref NUMBER_RE: Regex = Regex::new(r"^[0-9]+").unwrap(); } let mut res = vec![]; while !(input.trim_left().is_empty()) { input = input.trim_left().to_string(); input = if let Some((_, end)) = NUMBER_RE.find(&input) { let (num, rest) = input.split_at_mut(end); res.push(Token::Number(num.parse::<i64>().unwrap())); rest.to_string() } else { res.push(match input.chars().nth(0) { Some('+') => Token::Plus, Some('-') => Token::Sub, Some('*') => Token::Mul, Some('/') => Token::Div, Some('(') => Token::LeftParen, Some(')') => Token::RightParen, _ => panic!("Unknown character!") }); input.trim_left_matches(|c| c == '+' || c == '-' || c == '*' || c == '/' || c == '(' || c == ')').to_string() } } res } /// Transforms the tokens created by `tokenize` into RPN using the /// [Shunting-yard algorithm](https://en.wikipedia.org/wiki/Shunting-yard_algorithm) fn shunt(tokens: Vec<Token>) -> Vec<Token> { let mut queue: Vec<Token> = vec![]; let mut stack: Vec<Token> = vec![]; for token in tokens { match token { n @ Token::Number(_) => queue.push(n), op @ Token::Plus | op @ Token::Sub | op @ Token::Mul | op @ Token::Div => { while let Some(o) = stack.pop() { if precedence(&op) <= precedence(&o) { queue.push(o); } else { stack.push(o); break; } } stack.push(op) }, p @ Token::LeftParen => stack.push(p), Token::RightParen => { let mut found_paren = false; while let Some(op) = stack.pop() { match op { Token::LeftParen => { found_paren = true; break; }, _ => queue.push(op) } } assert!(found_paren) } } } while let Some(op) = stack.pop() { queue.push(op); } queue } /// Takes a Vec of Tokens converted to RPN by `shunt` and calculates the result fn calculate(tokens: Vec<Token>) -> i64 { let mut stack = vec![]; for token in tokens { match token { Token::Number(n) => stack.push(n), Token::Plus => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a + b); }, Token::Sub => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a - b); }, Token::Mul => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a * b); }, Token::Div => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a / b); }, _ => unreachable!() // By the time the token stream gets here, all the LeftParen // and RightParen tokens will have been removed by shunt() } } stack[0] } /// Returns the precedence of op fn precedence(op: &Token) -> usize { match op { &Token::Plus | &Token::Sub => 1, &Token::Mul | &Token::Div => 2, _ => 0, } } Is there anything that can be improved? In particular Is there any better way to pattern match on the first character of a string and then remove it? That part of the code feels very clunky. Is there a better way to associate the precedence of an operator with its definition? It bugs me to have to write a precedence function in order to get the precedence of an operator. Answer: There's no space before the parenthesis on an enum variant derives are combined into one line. Make precedence an inherent method on Token. Inside of precedence, match on the dereference of the value. This avoids the spread of &. There's no need for lazy_static; just create a structure and put the regex in it, the reuse it in the loop. There's no need for the large amount of string allocation. Take in a &str instead of a String and simply slice it up. Instead of trimming the input by the operator characters, skip by the number of bytes the first character was. There's no need to specify the type of parse. The code trims the left multiple times; just do it once. There's no need to specify type of queue as it's inferrable. There's no need to use the @ pattern binding; can just use token. Larger ideas: Create a newtype around Vec<Token> to indicate that data is in RPN order. Types avoid the need for documentation. Create multiple types of enums; one with parens and one without. Then there's one less place to have an unreachable. If there's subsets, you could embed the subset in the superset. The error handling is pretty rough for the end user. Mismatched parenthesis kill the program instead of explaining the error and letting the user continue. There's no (obvious) way to exit the program other than by killing it or closing stdin (which produces another error message). use std::io; extern crate regex; // 0.1.80 #[macro_use] extern crate lazy_static; use regex::Regex; fn main() { let tokenizer = Tokenizer::new(); loop { println!("Enter input:"); let mut input = String::new(); io::stdin() .read_line(&mut input) .expect("Failed to read line"); let tokens = tokenizer.tokenize(&input); let stack = shunt(tokens); let res = calculate(stack); println!("{}", res); } } #[derive(Debug, PartialEq)] enum Token { Number(i64), Plus, Sub, Mul, Div, LeftParen, RightParen, } impl Token { /// Returns the precedence of op fn precedence(&self) -> usize { match *self { Token::Plus | Token::Sub => 1, Token::Mul | Token::Div => 2, _ => 0, } } } struct Tokenizer { number: Regex, } impl Tokenizer { fn new() -> Tokenizer { Tokenizer { number: Regex::new(r"^[0-9]+").expect("Unable to create the regex"), } } /// Tokenizes the input string into a Vec of Tokens. fn tokenize(&self, mut input: &str) -> Vec<Token> { let mut res = vec![]; loop { input = input.trim_left(); if input.is_empty() { break } let (token, rest) = match self.number.find(input) { Some((_, end)) => { let (num, rest) = input.split_at(end); (Token::Number(num.parse().unwrap()), rest) }, _ => { match input.chars().next() { Some(chr) => { (match chr { '+' => Token::Plus, '-' => Token::Sub, '*' => Token::Mul, '/' => Token::Div, '(' => Token::LeftParen, ')' => Token::RightParen, _ => panic!("Unknown character!"), }, &input[chr.len_utf8()..]) } None => panic!("Ran out of input"), } } }; res.push(token); input = rest; } res } } /// Transforms the tokens created by `tokenize` into RPN using the /// [Shunting-yard algorithm](https://en.wikipedia.org/wiki/Shunting-yard_algorithm) fn shunt(tokens: Vec<Token>) -> Vec<Token> { let mut queue = vec![]; let mut stack: Vec<Token> = vec![]; for token in tokens { match token { Token::Number(_) => queue.push(token), Token::Plus | Token::Sub | Token::Mul | Token::Div => { while let Some(o) = stack.pop() { if token.precedence() <= o.precedence() { queue.push(o); } else { stack.push(o); break; } } stack.push(token) }, Token::LeftParen => stack.push(token), Token::RightParen => { let mut found_paren = false; while let Some(op) = stack.pop() { match op { Token::LeftParen => { found_paren = true; break; }, _ => queue.push(op), } } assert!(found_paren) }, } } while let Some(op) = stack.pop() { queue.push(op); } queue } /// Takes a Vec of Tokens converted to RPN by `shunt` and calculates the result fn calculate(tokens: Vec<Token>) -> i64 { let mut stack = vec![]; for token in tokens { match token { Token::Number(n) => stack.push(n), Token::Plus => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a + b); }, Token::Sub => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a - b); }, Token::Mul => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a * b); }, Token::Div => { let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap()); stack.push(a / b); }, _ => { // By the time the token stream gets here, all the LeftParen // and RightParen tokens will have been removed by shunt() unreachable!(); }, } } stack[0] } I'm not really happy with the parsing aspect of the code, but I'm not seeing an obvious better thing at the moment. As pointed out by Francis Gagné, you can call Chars::as_str to get the remainder of the string after pulling off the first character: let mut chars = input.chars(); match chars.next() { Some(chr) => { (match chr { '+' => Token::Plus, '-' => Token::Sub, '*' => Token::Mul, '/' => Token::Div, '(' => Token::LeftParen, ')' => Token::RightParen, _ => panic!("Unknown character!"), }, chars.as_str()) I don't get the point of creating a whole new structure just for the tokenizer function when lazy_static! can do the same thing Likewise, I don't get the point of using lazy_static! when normal language constructs can do the same thing ^_^. lazy_static! currently requires heap allocation and that memory can never be reclaimed until the program exits. Creating a value and using a reference to it is made completely safe by Rust's semantics and lifetimes, so I find myself using stack allocations far more frequently than I would with a language like C. I generally dislike singletons of any kind, for the reasons espoused throughout the Internet. In this case, the singleton is created in the final binary, which ameliorates the problem somewhat.
{ "domain": "codereview.stackexchange", "id": 21911, "tags": "rust, math-expression-eval" }
Adding compile-time format validation to your string.Format calls
Question: I've written another diagnostic for VSDiagnostics that adds compile-time security for your formatting specified in a string.Format call. The exact scenario it guards against is when you have something like this: string s = string.Format("Hello {0}, I'm {1}!", "John"); This will throw a runtime exception because it cannot find an argument for the second placeholder. What my analyzer does is evaluate the format, looks at the eligible placeholders and compares that with the amount of arguments that have been passed in. When it determines that the format is invalid for the given arguments, it will underline the format and report an error-level diagnostic. Note that the purpose of this analyzer should be the above scenario. Other things like placeholders being in a non-lexical order or unused placeholders are handled separately. There are a few restrictions in place: The specified format has to be a literal -- it cannot be retrieved from a field, method call or anything else Interpolated strings as a format specified are not (yet) supported If the arguments are passed in using an explicit array, only inline initialized arrays are accepted. Arrays defined elsewhere are not supported. I am mostly interested in finding a way to make the logic of selecting the format a little more cleanly. Additionally (perhaps more importantly): can you find a scenario that I haven't accounted for? All verified scenarios so far can be found here. Analyzer Github [DiagnosticAnalyzer(LanguageNames.CSharp)] public class StringDotFormatWithDifferentAmountOfArgumentsAnalyzer : DiagnosticAnalyzer { private const DiagnosticSeverity Severity = DiagnosticSeverity.Warning; private static readonly string Category = VSDiagnosticsResources.StringsCategory; private static readonly string Message = VSDiagnosticsResources.StringDotFormatWithDifferentAmountOfArgumentsMessage; private static readonly string Title = VSDiagnosticsResources.StringDotFormatWithDifferentAmountOfArgumentsTitle; internal static DiagnosticDescriptor Rule => new DiagnosticDescriptor(DiagnosticId.StringDotFormatWithDifferentAmountOfArguments, Title, Message, Category, Severity, true); public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics => ImmutableArray.Create(Rule); public override void Initialize(AnalysisContext context) { context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.InvocationExpression); } private void AnalyzeNode(SyntaxNodeAnalysisContext context) { var invocation = context.Node as InvocationExpressionSyntax; if (invocation == null) { return; } // Verify we're dealing with a string.Format() call if (!invocation.IsAnInvocationOf(typeof(string), nameof(string.Format), context.SemanticModel)) { return; } if (invocation.ArgumentList == null) { return; } // Verify the format is a literal expression and not a method invocation or an identifier // The overloads are in the form string.Format(string, object[]) or string.Format(CultureInfo, string, object[]) var allArguments = invocation.ArgumentList.Arguments; var firstArgument = allArguments.ElementAtOrDefault(0, null); var secondArgument = allArguments.ElementAtOrDefault(1, null); if (firstArgument == null) { return; } var firstArgumentIsLiteral = firstArgument.Expression is LiteralExpressionSyntax; var secondArgumentIsLiteral = secondArgument?.Expression is LiteralExpressionSyntax; if (!firstArgumentIsLiteral && !secondArgumentIsLiteral) { return; } // We ignore interpolated strings for now (workitem tracked in https://github.com/Vannevelj/VSDiagnostics/issues/313) if (firstArgument.Expression is InterpolatedStringExpressionSyntax) { return; } // If we got here, it means that the either the first or the second argument is a literal. // If the first is a literal then that is our format var formatString = firstArgumentIsLiteral ? ((LiteralExpressionSyntax) firstArgument.Expression).GetText().ToString() : ((LiteralExpressionSyntax) secondArgument.Expression).GetText().ToString(); // Get the total amount of arguments passed in for the format // If the first one is the literal (aka: the format specified) then every other argument is an argument to the format // If not, it means the first one is the CultureInfo, the second is the format and all others are format arguments // We also have to check whether or not the arguments are passed in through an explicit array or whether they use the params syntax var formatArguments = firstArgumentIsLiteral ? allArguments.Skip(1).ToArray() : allArguments.Skip(2).ToArray(); var amountOfFormatArguments = formatArguments.Length; if (formatArguments.Length == 1) { // Inline array creation à la string.Format("{0}", new object[] { "test" }) var arrayCreation = formatArguments[0].Expression as ArrayCreationExpressionSyntax; if (arrayCreation?.Initializer?.Expressions != null) { amountOfFormatArguments = arrayCreation.Initializer.Expressions.Count; } // We don't handle method calls var invocationExpression = formatArguments[0].Expression as InvocationExpressionSyntax; if (invocationExpression != null) { return; } // If it's an identifier, we don't handle those that provide an array as a single argument // Other types are fine though -- think about string.Format("{0}", name); var referencedIdentifier = formatArguments[0].Expression as IdentifierNameSyntax; if (referencedIdentifier != null) { // This is also hit by any other kind of identifier so we have to differentiate var referencedType = context.SemanticModel.GetTypeInfo(referencedIdentifier); if (referencedType.Type == null || referencedType.Type is IErrorTypeSymbol) { return; } if (referencedType.Type.TypeKind.HasFlag(SyntaxKind.ArrayType)) { // If we got here it means the arguments are passed in through an identifier which resolves to an array // aka: calling a method that returns an array or referencing a variable/field that is of type array // We cannot reliably get the amount of arguments if it's a method // We could get them when it's a field/variable/property but that takes some more work and thinking about it // This is tracked in workitem https://github.com/Vannevelj/VSDiagnostics/issues/330 return; } } } // Get the placeholders we use, stripped off their format specifier, get the highest value // and verify that this value + 1 (to account for 0-based indexing) is not greater than the amount of placeholder arguments var placeholders = PlaceholderHelpers.GetPlaceholders(formatString) .Cast<Match>() .Select(x => x.Value) .Select(PlaceholderHelpers.GetPlaceholderIndex) .Select(int.Parse) .ToList(); if (!placeholders.Any()) { return; } var highestPlaceholder = placeholders.Max(); if (highestPlaceholder + 1 > amountOfFormatArguments) { context.ReportDiagnostic(Diagnostic.Create(Rule, firstArgumentIsLiteral ? firstArgument.GetLocation() : secondArgument.GetLocation())); } } } Extensions Github public static class Extensions { // TODO: tests // NOTE: string.Format() vs Format() (current/external type) public static bool IsAnInvocationOf(this InvocationExpressionSyntax invocation, Type type, string method, SemanticModel semanticModel) { var memberAccessExpression = invocation?.Expression as MemberAccessExpressionSyntax; if (memberAccessExpression == null) { return false; } var invokedType = semanticModel.GetSymbolInfo(memberAccessExpression.Expression); var invokedMethod = semanticModel.GetSymbolInfo(memberAccessExpression.Name); if (invokedType.Symbol == null || invokedMethod.Symbol == null) { return false; } return invokedType.Symbol.MetadataName == type.Name && invokedMethod.Symbol.MetadataName == method; } // TODO: tests public static T ElementAtOrDefault<T>(this IEnumerable<T> list, int index, T @default) { return index >= 0 && index < list.Count() ? list.ElementAt(index) : @default; } } PlaceholderHelpers Github internal static class PlaceholderHelpers { /// <summary> /// Removes all curly braces and formatting definitions from the placeholder /// </summary> /// <param name="input">The placeholder entry to parse.</param> /// <returns>Returns the placeholder index.</returns> internal static string GetPlaceholderIndex(string input) { var temp = input.Trim('{', '}'); var colonIndex = temp.IndexOf(':'); if (colonIndex > 0) { return temp.Remove(colonIndex); } return temp; } /// <summary> /// Get all elements in a string that are enclosed by an uneven amount of curly brackets (to account for escaped /// brackets). /// The result will be elements that are either plain integers or integers with a format appended to it, delimited by a /// colon. /// </summary> /// <param name="input">The format string with placeholders.</param> /// <returns>Returns a collection of matches according to the regex.</returns> internal static MatchCollection GetPlaceholders(string input) { // This regex uses a named group so we can easily access the actual value return Regex.Matches(input, @"(?<!\{)\{(?:\{\{)*((?<index>\d+)(?::.*?)?)\}(?:\}\})*(?!\})"); } /// <summary> /// Returns all elements from the input, split on the placeholders. /// This method is useful if you want to make use of the rest of the string as well. /// </summary> internal static string[] GetPlaceholdersSplit(string input) { return Regex.Split(input, @"(?<!\{)\{(?:\{\{)*(\d+(?::.*?)?)\}(?:\}\})*(?!\})"); } } Unit tests Github Tests have been omitted for brevity. You can take a look at the Github link to see which scenarios I have accounted for. Answer: Instead of looking for a string literal expression, call SemanticModel.GetConstantValue() on the argument node. This will work properly with string concatenation or constant fields. Instead of hard-coding String.Format and looking at argument types, look for all invocations whose parameter lists end with string format followed by one or more object or params object[] parameters. This will let you work with other formatting methods like TextWriter.WriteLine IsAnInvocationOf should not assume MemberAccess (eg, using static); instead, get the symbol for the invocation expression itself.
{ "domain": "codereview.stackexchange", "id": 17632, "tags": "c#, roslyn" }
Where does all the mass created from energy go?
Question: So mass can be created from energy when small protons speed up, 430 times bigger to be exact. I don't know if this is a stupid question, but I'm in middle school so cut me some slack. Where does all that mass go? Is it converted to thermal energy? Say we covered the earth with solar panels, that would produce a lot of energy, also producing a lot of mass. I don't know if that's the right wording, I don't want to sound like I don't know energy can't be created or destroyed but if anyone could answer these questions for me that'd be great. Answer: The notion of "mass" is probably less deeply meaningful as you might think. Science has come a long way since the days when mass was thought to have such deep significance. Nowadays, energy is the primary concept, because there is a law of conservation of energy, and energy is linearly additive: that means that the sum of energies for two separate systems equals the total energy for the system as a whole. These two properties - conservation and linear additivity make energy a useful notion in physics. Mass has neither of these properties. It is not conserved, and it is not additive. The rest mass of a system two photons moving in opposite directions is nonzero, whereas the rest mass of each is nought. In particular, since mass is not conserved, it doesn't have to "go anywhere", unlike energy. It can simply disappear or appear, as in the photon example. So nowadays mass is less useful as a concept in physics. Modern physics simply thinks of the notion of rest mass of a system, and this is a shorthand for the total energy of a system as measured from a reference frame at rest relative to the system (in SI units, we multiply by $c^2$ in the rest frame to get from mass to energy). But the notion is still all about energy. The rest mass $m_0$ can be used in the relativistic version $\mathbf{F}=\frac{\mathrm{d}}{\mathrm{d}\tau}(m_0\,\mathbf{u})$ of Newton's second law, where $\mathbf{F}$ is the Four Force and $\mathbf{u}$ the four velocity. As such, rest mass can also be thought of as measuring a system's inertia. Rest masses are important identifying data for fundamental particles, because the total energy of these particles is always the same when measured from a frame at rest relative to them. This last statement holds for massive particles: massless particles like the photon have no rest frame. Incidentally, though, if you want to express the solar energy incident on Earth as a mass, then it works out to be roughly a kilogram each second. Of course, long term, all of that is radiated back into space. Human energy usage is about five tonnes per year, or about 0.2 grams per second.
{ "domain": "physics.stackexchange", "id": 41308, "tags": "special-relativity, energy, mass, energy-conservation, mass-energy" }
What is most simple eukaryotic genome?
Question: Expressed in number of Base Pairs or Bytes, about how large is the simplest eukaryotic genome? How much of this is 'junk-DNA' (non-coding)? Answer: You asked about eukaryotes. The genome of the yeast Saccharomyces cerevisiae is 12.2 Mb. The genome of the smallest free-living eukaryote, Ostreococcus tauri (a unicellular green alga) is 12.6 Mb There are smaller eukaryotic genomes, but these are not free-living organisms they are intracellular parasites.
{ "domain": "biology.stackexchange", "id": 2193, "tags": "biochemistry, dna, bioinformatics, eukaryotic-cells" }
Why is normal contact force electromagnetic in nature?
Question: I learnt that normal contact force can be interpreted as the restoring force when an object undergoes deformation due to external stress, and it is perpendicular to the surface of contact. I also learnt that normal contact force is electromagnetic in nature, i.e. the repulsion between atoms of two objects on their surfaces of contact. However, I cannot find the connection between the first interpretation to the second one. I do not understand how restoring force due to deformation is the same as the electromagnetic forces between atoms on the surface in contact. So my questions are: How to find a coherent explanation for normal contact force that relates the first interpretation to the second one mentioned above? How to explain, from an electromagnetism standpoint, the greater the extent of deformation, the greater the normal contact force? Is any form of restoring force due to deformation (e.g. a compressed spring) normal contact force? Answer: Your first explanation is based on observations of the stuff of ordinary life. Stuff is, in general, springy. Push on it, it pushes back. If it's really springy stuff, then it moves a lot when you push on it a little. If it's really stiff stuff, then it moves a little even if you push on it a lot. But all stuff is observed to be springy. This interaction between pushing and springing back is the macroscopic behavior of stuff. Your second explanation is based on a deeper understanding of what stuff is made of -- to wit, all normal stuff that we might stuff our finger into is made of really little stuff called atoms which interact with each other primarily through the electric field and the behavior of electrons, which themselves interact with the atomic nuclei through the electric field. Your finger is made of stuff, which is made of atoms, the outermost parts of which is electrons. If you find something to push on, that will, inevitably, be made out of stuff that's made out of atoms, the outermost parts of which are electrons (unless you've stumbled onto some antimatter, in which case your experiment is about to end catastrophically). When you push on the stuff, the electrons on the atoms on the surface of the stuff are repelled by the electrons on the atoms on the surface of your finger -- and that's what causes the normal force. It's the same thing as in the first example, it's just that in the first example we're modeling the stuff as, well, "just stuff", while in the second case we've asked "yes, but why?" enough time that we're down to discussing just how unfriendly electrons can be toward each other when you attempt forced marriage of their favorite atoms. Note that if you do this experiment right after putting a drop of super glue on the stuff, you'll find that the normal force can change sign, much to your consternation if you were expecting it to only be positive, or zero (if you try this, don't freak out when you have stuff glued to your finger -- your skin if constantly exuding oil and shedding its outer layers -- just wait a few hours, and you'll peel right off of the stuff). What you've done is chemically bonded your skin to the stuff, via the action of electrons on the outermost layers of the stuff, your skin, and the glue. Which relates to your next two questions. The reason that stuff occurs as solids (or even liquids, for that matter), is because electrons, in addition to making atoms repel each other, will also make them attract each other over very short distances. Solid stuff is made of atoms that are in deep and committed relationships. At the macro level, we say "stuff deforms and pushes back". At the atoms-and-electrons level, when stuff deforms, atoms are being pulled further away from each other than they'd like (tension), pushed closer together to each other than they'd like (compression), or being asked to slide in relation to each other (shear). All of these movement-vs-force interactions happen because the electrons and atomic nuclei have preferred positions, and when they resist being moved out of these preferred positions it's electric fields that are generating the forces. Your third question is kind of unrelated to the first two -- yes, normal force is due, at the macro level, to deformation (or deformation is due to the normal force -- take your pick). But I believe your confusion is that the two statements that you lead with are both describing the same phenomenon -- just at different scales.
{ "domain": "physics.stackexchange", "id": 92150, "tags": "newtonian-mechanics, electromagnetism, forces" }
What are methods human actors use to imitate robots?
Question: Robot technology is usually thought from an engineering perspective. A human programmer writes a software this executed in a robot who is doing a task. But what would happen, if the project is started with the opposite goal? The idea is, that the human becomes the robot by himself. That means, the human is using makeup to make his face more mechanically, buys special futuristic clothing which mirrors the light and imitates in a roleplay the working of a kitchen robot. What are methods human actors use to imitate robots? Answer: The great acting teacher Stella Adler wrote about mannerisms being a powerful tool for actors. Method acting in general focuses on natural performances based roughly on understanding the mindset of the character portrayed. It's possible actors who have portrayed androids have observed industrial robots to inform their physicality, and many performances convey the idea, via movement, of a mechanical inner structure. (It is often said that an "actor's body is their instrument".) What is more interesting is actors trying to convey the cognitive structure of the androids. With Arnold, and Terminator robots in general, the baseline performance is decidedly robotic, to convey their inhumanity. But the more advanced Terminators are able to mimic naturalistic human mannerisms, and even established human characters, to trick humans. Lieutenant Data often used head motions, such as cocking his head slightly, to convey computation. Here the character arc involved working to become more human, as this character draws heavily on Pinocchio, the wooden puppet that became a boy. Overall Data's performance conveyed a lack of emotion, a definite reference to the logic-oriented Mr. Spock, although I recall episodes where Data experimented with "emotional circuits" and "humor circuits", where the output was intentionally inconsistent with natural human behavior. Blade Runner, where the Tyrell Corporation's motto was "More Human than Human", presented the cutting edge Nexus-6 androids as having emotions, but, due to their artificially short life-spans, were portrayed as childlike in trying to reconcile extremely powerful feelings. The Voight-Kampff Test, a form of Turing Test, used in the film to identify androids, relied on the emotional response to questions. The key plot point of Do Androids Dream of Electric Sheep, the novel the film was based on, utilized what would be formalized as evolutionary game theory to hypothesize that empathy is a natural function of intelligence sufficiently advanced. Deckard, who may or may not have been an android, and Rachel, who definitely was, are both capable of love. This capacity informed their performances, to the extent that the androids came off as more human than the actual humans, due to the depth of their emotion. This is also reflected in Blade Runner 2049 via the girlfriend-bot Joi, who us the most limited android, but the most human character in the film per her capacity to love (or at least simulate it.) In the recent HBO Westworld reboot, the Androids replicate natural human mannerisms when playing their designated roles, but reset to more mechanical mannerisms when acting under their own agency. This is reflected in Ex Machina, where the android mimics human emotions to pass a Turing Test and trick the human subject, only to revert to purely alien mannerisms after the android is free. ("Alien" here used in the sense of non-human--it's possible the android is sentient as it seems to convey some degree of emotion in regarding the simulated human skin it will wear.) The most interesting recent android performance may come from the recent Alien: Covenant where Michael Fassbender plays two identical androids, David and Walter, which have two distinct neural structures. (David has the capacity to be creative, where Walter cannot. In the film it is mentioned that David made people uncomfortable, so the creative functions were removed from subsequent models.) The key difference in the performance seems to be that David demonstrates passion, and even emotions, where Walter is more clearly "robotic". In general, the underlying approach of actors seems to have been to show the androids being distinct from humans, drawing a clear, though sometimes subtle, contrast. Actors portraying androids have typically utilized robotic mannerisms to convey an artificial entity.
{ "domain": "ai.stackexchange", "id": 1495, "tags": "social, human-like, mythology-of-ai" }
Why tautomers are considered to be the same chemical compound?
Question: Because of the rapid interconversion, tautomers are generally considered to be the same chemical compound. (Wikipedia) But looking at the examples provided on that same Wikipedia article, it is obvious that different chemical reactions would take place in each case (for example enol vs. keto). (source) Answer: There are a number of types of rapid chemical equilibria that basically cannot be turned off. Tautomers are probably the best known examples, although fluxional behavior is observed in many inorganic and organometallic systems. The origin for most tautomers is that protons are really quantum-mechanical. They can tunnel just like electrons can delocalize. Obviously different tautomers will have different stability, much like some electric resonance structures are clearly more stable. The comments to your question above indicate that most chemists would just call it "acetone" but of course there's some prop-1-en-2-ol present too. Yes, the reactivities might be different. It's a partly a matter of the balance of the equilibrium (i.e., that for many tautomers, only one form predominates). There are exceptions, e.g. phenols (from Wikipedia): In certain aromatic compounds such as phenol, the enol is important due to the aromatic character of the enol but not the keto form. Melting the naphthalene derivative naphthalene-1,4-diol 2 at 200 °C results in a 2:1 mixture with the keto form. It's also a matter of the time-scale. For example, at room temperature if I take an NMR, do I see two sets of peaks? What about IR or other characterization techniques? If I can't tell the difference because there are one set of peaks, I think many chemists will consider there to be "one compound" even if multiple tautomers are present. For an example of this (from the cheminformatics perspective), Marc Nicklaus of the NIH/National Cancer Institute gave a great talk at the 247th ACS national meeting "The NCI/CADD Group's InChI Usage and Analysis of Tautomerism for InChI V2". The talk focuses on tautomers and what software is trying to do to derive simple rules (as opposed to lots of quantum calculations or experiments) to determine one standard tautomer. But they also measured several tautomers by NMR: These would obviously be chemically quite different, but one form clearly predominates.
{ "domain": "chemistry.stackexchange", "id": 3264, "tags": "isomers, tautomer" }
Is the _error_ in the context of ML always just the difference of predictions and targets?
Question: Simple definitional question: In the context of machine learning, is the error of a model always the difference of predictions $f(x) = \hat{y}$ and targets $y$? Or are there also other definitions of error? I looked into other posts on this, but they are not sufficiently clear. See my comment to the answer in this post: What's the difference between Error, Risk and Loss? Answer: The error can have different forms depending on the application. For example for a simple regression we often use the sum of squared deviations between the actual output $y_n$ for the input $x_n$ and the predicted output $\hat{y}(x_n)$ for the input $x_n$. The total loss $J_\text{Gauss}$ is then given as the sum over all squared errors (also known as Gaussian Loss)for each observation. $$J_\text{Gauss}= \sum_{n=1}^N\left[y_n-\hat{y}(x_n)\right]^2$$ If we use absolute values instead of squares we obtain the Laplacian loss function $J_\text{Laplace}$, which is given by $$J_\text{Laplace}=\sum_{n=1}^N\left|y_n-\hat{y}(x_n)\right|$$ If we rather try to compare two probability distributions $p(x)$ and $q(x)$ we use a unsymmetric distance meassure called Kullback-Leibler divergence $$ {\displaystyle D_{\text{KL}}(P\parallel Q)=\int _{-\infty }^{\infty }p(x)\ln {\frac {p(x)}{q(x)}}\,dx}. $$ For binary classification we can use the hinge-loss $$J_\text{hinge}=\sum_{n=1}^N\max \{0, 1- t_n \hat{y}(x_n)\},$$ in which $t_n=+1$ if observation $x_n$ is from the postive class and $t_n=-1$ from the negative class. For support vector regression the $\varepsilon$-insensitive loss $J_\varepsilon$ is used. It is defined by the following equation. $$J_\varepsilon=\max\{0,|y_n-\hat{y}(x_n)|-\varepsilon\}$$ This loss acts like a threshold. It will only count something as an error if the error is larger then $\varepsilon$. As you can see there are some meassures (see this Wikipedia article for loss functions used for classification) of error for comparing the predicted output and the observed output.
{ "domain": "datascience.stackexchange", "id": 5720, "tags": "machine-learning, machine-learning-model, terminology, definitions" }
What is the bump in the resistivity vs temperature curve of the nichrome 80 wire?
Question: I have measured the resistivity of a nichrome ($80$ Nickel $20$ Chrome) wire as a function of temperature. Because my setup is quite crude (just a power supply and a couple of multimeters to measure voltage and current), I do not know if the bump that I get around $0.6A$ is physical or an artifact of the measuring setup. However, I stumbled across this plot online (https://super-metals.com/wp-content/uploads/2015/04/Nichrome-Alloys-for-Heating.pdf) I can see that the same bump is present more or less at the same temperature. It is not very clear if this temperature factor is the temperature coefficient of resistance or what. So my question is: What is the physical explanation for that bump? I am a particle Physicist so you can assume I understand the basics of thermodynamics and quantum mechanics. If you happen to know the real source of the figure it would help immensely. EDIT$1$: I have substituted the Resistance vs Current plot with the more explicative Resistivity vs Temperature plot. EDIT$2$: I found another site online where the same behavior was observer: http://www.brysonics.com/heating-a-nichrome-wire-with-math/ Answer: The nichrome wire undergoes a phase transition at that mysterious temperature which changes its unit cell structure. The two different unit cells have slightly different bulk resistivities. Superimposed on that shift is the usual increase in resistivity with temperature for metallic solids.
{ "domain": "physics.stackexchange", "id": 78033, "tags": "solid-state-physics, temperature, electrical-resistance, metals" }
Force in an isolated capacitor when a dielectric slab is introduced
Question: my question is about capacitance: a dielectric slab is inserted between the plates of an isolated capacitor. The force between the plates will remain unchanged. how? I mean the knowledge I had gained is that $$E=F/q$$ since a dielectric slab is being introduced in an isolated capacitor therefore $$ E(new)=E(old)/K$$ where $K =$ dielectic constant of slab then $$F(new)=F(old)/K $$ [as $F=Eq$] please correct me Answer: When the dielectric is inserted in between the plates of an isolated capacitor, the electric field E inside the dielectric decrease k times. Outside the dielectric, at the location where plates are situated, electric field is same as before, hence the force between them don't change. Even if you fill the space in between the plates completely with a dielectric, one can imagine the plates to be having some thickness, and so at any interior point of one plate there is no dielectric.
{ "domain": "physics.stackexchange", "id": 60448, "tags": "electrostatics, capacitance, dielectric" }
Conventions for dimensions of input and weight matrices in neural networks?
Question: Im currently learning neural networks and I see conflicting decsriptions of the dimensions of weight and input matrices on the internet. I just wanted to know if there is some convention which more people use than the other. I currently define my input matrix X with the dimensions of: (m x n) Where m is the number of samples and n is the number of features. And I define my weight matrices with the dimensions: (a x b) Where a is the number of neurons in the layer and b is the number of neurons in the last layer. Is that conventional or should I change something? Answer: I would not say there is such a convention for it per se (if anyone has anything to comment on this, I would also like to know). I think to make it clearer how the layer's input x interacts with the weights W, it might better to define the dimensions as the following: x: (m x n) W: (n x k) bias term b: (k) m remains as the number of examples. n represents number of input features and k represents number of neutrons in the layer. As we know, we compute the output of the layer y as Wx + b. Therefore, the resulting output matrix will be (m x k)
{ "domain": "datascience.stackexchange", "id": 7853, "tags": "neural-network, matrix" }
from linux to windows usb camera connection
Question: Hi everybody I have ROS fuerte running on a robot (on which I have a ptz ethernet camera and a usb logitech camera). The robot is running on linux ubuntu oneiric (11.10). What I was wondering and I need your expert help on is this: I can connect over wifi from a windows 7 machine to my robot. I can see from a web broswer the image coming from the PTZ ethernet camera. How can I connect and see the image coming from the usb logitech camera? On linux, when I run the command rosrun image_view image_view image:=/logitech_usb_cam/image_raw I can see the image. But how do I do that from a windows machine? Any ideas? Thank you Originally posted by agrirobot-George on ROS Answers with karma: 1 on 2013-02-10 Post score: 0 Original comments Comment by dornhege on 2013-02-10: Are you running ROS on the windows machine? Answer: You can run the mjpeg_server on the robot, and connect a browser to the url stream that you'd like to view. You have not clarified if the windows machine is running ROS, so there may be other possibilities that could exist for you. Originally posted by SL Remy with karma: 2022 on 2013-02-12 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 12830, "tags": "ros, image, windows" }
Angular acceleration of a rod about different points
Question: Please consider two cases. 1)Suppose a rod is hinged, such that it is free to rotate about one of its edges. Now, the rod rotates with an angular acceleration α under the influence of a force F applied on the other end. We can find out α easily with the torque equation (Given that mass & length of rod is m & L). Now in this condition, if we apply torque equation on the COM of the rod, the angular acceleration that we have here is same. Note: $F_1$ & $F_2$ are the forces provided by the hinges. 2) Now almost the same case, the only difference is that now, the force ‘F’ is acting on the COM, and we’ve to find angular acceleration of the rod about a)hinges b)the other end. Why is the angular acceleration different? Doesn’t it has to be the same? Answer: The hinge is not accelerating. So when considering torques about that axis, the analysis is simple. The center of mass is accelerating. So when looking at torques about that axis, it is at rest in a non-inertial reference frame. Fictitious forces will appear in that frame. But since the forces that appear act through the center of mass, they apply no torque. So the analysis is again simple. Point A is also accelerating. So when looking at the torque about it, you have to consider the fictitious forces that appear due to the acceleration of the axis.
{ "domain": "physics.stackexchange", "id": 50238, "tags": "homework-and-exercises, rotational-dynamics" }
Thermodynamic definition of an adiabatic process
Question: I am posting about this because it seems to be a big issue and misconception in the thermodynamic literature. My issue is about adiabatic processes. As I see it there are two intrinsically different definitions of adiabatic processes: Processes for which $\delta Q_\mathrm{irr}=Td_iS+Td_eS=0$ ($Td_iS$ is the irreversible heat produced and $Td_eS$ the heat due to heat transfer). This means that in these processes there is no heat generation whatsoever. This also means that any adiabatic process is isentropic. Actually, I think this definition is wrong, because every irreversible process will produce entropy $Td_iS$ which cannot be compensated, because the system is thermally isolated ($Td_eS=0$), so that $\delta Q_\mathrm{irr}>0$. (I think the right definition) Processes for which $\delta Q_\mathrm{rev}=Td_eS=0$. This means that no heat transfer is allowed into the system, but still irreversible processes can generate heat. The second one should be in principle correct, as an adiabatic, irreversible expansion of a gas can heat it up due to entropy production. To name an example for definition 2, I could name the expansion of the universe which is adiabatic in the sense of no heat transfer (no environment). Still, the entropy is increasing, since $Td_eS\neq 0$ (while it is assumed to be 0 in definition 1). However, large parts of the literature work with the first definition also. One example for the use of the first definition is https://chemistry.stackexchange.com/questions/16260/derivation-of-the-relation-between-temperature-and-pressure-for-an-irreversible or https://chemistry.stackexchange.com/questions/38127/reversible-and-irreversible-adiabatic-expansion Here, the authors claim that they derived the expression for the volume change of an irreversible, adiabatic process and starts with the equation (I have seen exactly the same equation in Lecture notes of my Thermodynamics class and other books): $\mathrm{d}U=-p_\mathrm{ex}\mathrm{d}V$ However, according to the first law: $\mathrm{d}U=\delta Q_\mathrm{irr}+\delta W_\mathrm{irr}=\delta Q_\mathrm{irr}-p_\mathrm{ex}\mathrm{d}V$ So to reproduce the equation above $\delta Q_\mathrm{irr}=0$ which means that the use of the above equation assumes definition 1, which makes no sense in my opinion. In my opinion, for any process we should have: $\mathrm{d}U=\delta Q_\mathrm{rev}+\delta W_\mathrm{rev}=0-pdV=-pdV$ which means that no matter the reversibility, a specific volume change will always induce the same change in the internal energy. I would appreciate any opinion on this issue. Answer: So I think there are two things going on here. The first thing is that, yes, some people use the term adiabatic to mean any process with no heat transfer, whilst others use it to mean specifically a reversible process in which there is no heat transfer. The use of the same term for both concepts is sometimes inconvenient and something to watch out for, but there is not a great deal we can do at this point. It should be obvious when put in these terms, however, that both concepts are well defined and important. The second issue is what is going on in your definition 1. Now from the Clausius inequality we have that $$ dS \ge \frac{dQ}{T} $$ If we add an extra term to turn it into an equality we get \begin{align} dS &= \frac{dQ}{T} + dS_{irr}\\ dS_{irr} &\ge 0 \end{align} so the extra (positive) term goes on the other side of the equation to what you have in your definition 1. That is in an irreversible process we transfer less heat into the system and do more work on it to compensate. Turning the minus signs around this means that in irreversible processes the system gives out more heat and does less work, i.e. irriversible processes are less efficient.
{ "domain": "physics.stackexchange", "id": 76986, "tags": "thermodynamics, entropy, adiabatic" }
Brainfuck code optimization in Python
Question: I wrote a program to automatically optimize my Brainfuck source files. It works well, but the optimizations are only trivial ones and probably there are still some other points to improve. def main(): print "Enter the path of a brainfuck source file you want to optimize" path = raw_input() if path in ('exit', 'quit', 'close'): exit() f = open(path, 'r') text = f.read() f.close() cmd = '' # removing comments and whitespaces for c in text: if c in ('+', '-', '<', '>', '.', ',', '[', ']'): cmd += c found = True while found: length = len(cmd) found = False for i in range(length-1): if cmd[i:i+2] in ('+-', '-+', '<>', '><'): cmd = cmd[0:i] + cmd[i+2:length] # removing useless instruction pairs found = True ending = '' for i in range(len(path)): if path[i] == '.': ending = path[i:len(path)] path = path[0:i] break path = path + '_opt' + ending f = open(path, 'w') f.write(cmd) f.close() if __name__ == '__main__': while True: main() EDIT: I'm considering about removing whole brackets, if it is obvious that they are never called. This is the case, if an opening bracket is directly after a closing bracket (][), because the first loop only breaks if the current cell value is zero and that means, that either the pointer or the cell value has to change to make the second loop work. However, unlike the other optimizations, I can't just remove ][, it should remove everything between [and the matching ] (including [ and ]). Are there any other cases I could remove unnecessary brainfuck code? Answer: I'd do a couple of things: Remove exit. It's not intended to be use in proper programs. Add more functions, it's hard to tell what your current code is doing, and so adding a new function would make your code more understandable, and functions with good names are easy to understand. Your code that uses the found flag is terribly inefficient, you want to check if the previous character combined with the current character are in the optimize list, as then you can remove the previous character, and otherwise add the new character. You should always use with when you are using an object. This is as even if you come across an error when using the file, it will safely close the file for you and will remove the need to call .close. Change the way that you find the first period in the path. You can use ''.find, to find the first occurrence of a character in a string, rather than looping through the string. Adding all the above to your program can get you: ALLOWED_CHARS = {'+', '-', '<', '>', '.', ',', '[', ']'} USELESS = {'+-', '-+', '<>', '><'} def new_file_location(path): dot = path.find('.') head, ending = path[:dot], path[dot:] return head + '_opt' + ending def remove_useless(cmd): new_cmd = [] for char in cmd: if new_cmd and (new_cmd[-1] + char) in USELESS: new_cmd.pop() else: new_cmd.append(char) return ''.join(new_cmd) def main(cmds): return remove_useless(c for c in cmds if c in ALLOWED_CHARS) if __name__ == '__main__': while True: path = raw_input("Enter the path of a brainfuck source file you want to optimize") if path in ('exit', 'quit', 'close'): break with open(path) as f: text = f.read() cmd = main(text) with open(new_file_location(path), 'w') as f: f.writ(cmd) Finally I'd change the way that new_file_location works, in *nix you use .filename for secret folders, and so your program will do things such as change this, /home/peilonrayz/.python/brainfuck.py, to /home/peilonrayz/._optpython/brainfuck.py... Not really what you want.
{ "domain": "codereview.stackexchange", "id": 23303, "tags": "python, performance, python-2.x, brainfuck" }
Prove big O notation for $\log(n!)$ without applying Stirling's formula
Question: I want to prove that, $$ \log n! \in O(n \log n) \land \log n! \in \Omega(n \log n)$$ The straightforward approach is to apply Stirling's formula but I am looking for a different path to follow. Can somebody please guide me towards it? Answer: A nice method is : $$ \begin{align*} \log n! & = \log \prod_{\psi=1}^n \psi \\ & = \sum_{\psi=1}^n \log \psi \\ & \sim \int_1^n \log \psi \; \mathrm d \psi \\ & = \psi \log \psi - \psi \Biggr|_{1}^{n} \\ & = n \log n - (n - 1) \end{align*} $$ If we had used stirling's approximation instead, we would have received $$ \log n! = n \log n - n + \Theta(\log n) $$ Alternate methods involve The method present in this math.se page. Using $n! = \int_0^\infty x^n e^{-x} \; \mathrm d x$
{ "domain": "cs.stackexchange", "id": 20161, "tags": "big-o-notation, factorial" }
Wave reflection
Question: I have a simple question on wave reflection. I know that if I have a progressive monochromatic EM wave and a mirror, the reflected wave will be opposite in phase on the mirror to assure a total E field equal to 0 on the mirror. But when I think about summation of two progressive waves (in opposite directions), I have : $$\textrm{Im} \left(e^{i(kx-\omega t)}+e^{i(kx+\omega t)}\right)=2\sin(kx)\cos(\omega t)$$ Cool, I have a stationary wave. If I study these two propagative functions and I want to see where they cancel (the place where my mirror would be), I have : $kx-\omega t=kx+\omega t+\pi$ And it gives : $2\omega t=\pi$. So I don't understand, when I study this equation I see that it is not possible to have a mirror because if there would have a mirror, I would have the equation $2\omega t=\pi$. Could anyone help me ? Answer: Your expression tells us that the standing wave amplitude is zero twice per period. But the waves also cancel where $\sin(kx) = 0$ - that is where the nodes of the standing wave are.
{ "domain": "physics.stackexchange", "id": 29828, "tags": "electromagnetism, waves" }
Simple Blackjack game in Python 3.4
Question: I made a simple Blackjack game in Python 3.4.3. I'm a beginner, so it would be awesome if you could rip up my code and tell me everything I did wrong. I know there's no insurance for when the dealer has an ace (yet), and there's some other blackjack rules that I'm going to add in the future, but I wanted to get a base code out there. import random as r import itertools as i suit = 'scdh' rank = '23456789TJQKA' deck = tuple(''.join(card) for card in i.product(rank, suit)) val = () for _ in range(9): val = val + (_+2, _+2, _+2, _+2) if _ == 8: for __ in range(3): val = val + (10, 10, 10, 10) val = val + (1, 1, 1, 1) deckval = dict(zip(deck, val)) def deal(): global hand, dealer_hand, player_hand, counter hand = r.sample(deck, 52) counter = 0 dealer_hand = list(hand[counter:counter + 1]) counter += 2 player_hand = list(hand[counter:counter + 2]) counter += 2 def sum_player_hand(): global hand, player_hand, counter, player_sum, opt_player_sum player_sum = 0 opt_player_sum = 0 for a in range(len(player_hand)): if int(deckval[player_hand[a]]) == 1 and opt_player_sum + int(deckval[player_hand[a]]) <= 21: opt_player_sum = player_sum + int(deckval[player_hand[a]]) + 10 player_sum += int(deckval[player_hand[a]]) elif opt_player_sum > 21: player_sum += int(deckval[player_hand[a]]) opt_player_sum = player_sum else: player_sum += int(deckval[player_hand[a]]) opt_player_sum += int(deckval[player_hand[a]]) def dealer_init(): global hand, dealer_hand, counter, dealer_sum, opt_dealer_sum dealer_sum = 0 opt_dealer_sum = 0 if int(deckval[dealer_hand[0]]) == 1: dealer_sum += int(deckval[dealer_hand[0]]) opt_dealer_sum += dealer_sum + 10 else: dealer_sum = int(deckval[dealer_hand[0]]) opt_dealer_sum = int(deckval[dealer_hand[0]]) dealer_logic() def dealer_logic(): global hand, dealer_hand, counter, dealer_sum, opt_dealer_sum if dealer_sum >= 17 or opt_dealer_sum >= 17: pass else: while opt_dealer_sum <= 16: dealer_sum = 0 opt_dealer_sum = 0 dealer_hand = dealer_hand + list(hand[counter:counter + 1]) counter += 1 for _ in range(len(dealer_hand)): if int(deckval[dealer_hand[_]]) == 1 and (opt_dealer_sum + int(deckval[dealer_hand[_]])) <= 21: opt_dealer_sum += int(deckval[dealer_hand[_]]) dealer_sum += int(deckval[dealer_hand[_]]) else: dealer_sum += int(deckval[dealer_hand[_]]) opt_dealer_sum += int(deckval[dealer_hand[_]]) def main(): global hand, dealer_hand, player_hand, counter, player_sum, dealer_sum, opt_player_sum, opt_dealer_sum sum_player_hand() print('\nDealer has:', dealer_hand[0:2], '--') if player_sum <= 21: if opt_player_sum == player_sum or opt_player_sum > 21: print('Your hand is:', player_hand, '\n', 'Your sum is:', player_sum) else: print('Your hand is:', player_hand, '\n', 'Your sum is:', player_sum, 'or', opt_player_sum) choice = input('Hit or stay? ').lower() if choice == 'hit': player_hand = player_hand + list(hand[counter:counter + 1]) counter += 1 main() elif choice == 'stay': print('') if opt_player_sum <= 21: print('Final Hand: ', player_hand, 'Final Sum:', opt_player_sum) dealer_init() if opt_dealer_sum <= 21: print('Dealer has:', dealer_hand, 'Sum:', opt_dealer_sum) if 21 >= opt_dealer_sum > opt_player_sum: print('DEALER WINS') else: print('YOU WIN') run() else: print('Dealer has:', dealer_hand, 'Sum:', dealer_sum) if 21 >= dealer_sum > opt_player_sum: print('DEALER WINS') else: print('YOU WIN') run() else: print('Final Hand: ', player_hand, '\n', 'Final Sum:', player_sum) dealer_init() if opt_dealer_sum <= 21: print('Dealer has:', dealer_hand, 'Sum:', opt_dealer_sum) if 21 >= opt_dealer_sum > player_sum: print('DEALER WINS') else: print('YOU WIN') run() else: print('Dealer has:', dealer_hand, 'Sum:', dealer_sum) if 21 >= dealer_sum > player_sum: print('DEALER WINS') else: print('YOU WIN') run() else: print('') print('***Please enter hit or stay***') main() else: print('BUST\nYOUR HAND WAS:', player_hand, '\nYOUR SUM WAS:', player_sum, '\n') dealer_init() if opt_dealer_sum < 21: print('Dealer has:', dealer_hand, 'Sum:', opt_dealer_sum) if dealer_sum > 21: print('DEALER BUSTS') run() else: print('Dealer has:', dealer_hand, 'Sum:', dealer_sum) if dealer_sum > 21: print('DEALER BUSTS') run() def run(): play = input('********************\nWould you like to play again?').lower() if play == 'yes': deal() main() elif play == 'no': pass else: print('Please enter yes or no') run() deal() main() run() Answer: I'd like to add few things to previous answer: Constants should be in upper case, even if they constructed dynamically, e.g. SUIT, RANK, DECK. Group you code by responsibility, let Card class cover SUIT, RANK and VALUE constants to handle every static information that belongs to cards. The same relate to deck and hand. global do not use it. It makes you program hard for reading and understanding. There are a lot of other techniques that are lot cleaner. Also you are not just using global variables you modify them, it is really hard to understand full logic. At least, every function should receive all needed data via arguments and return modified tuple of values, but do not modify globals. main function. Well, it is pretty dirty. It should chain other functions with minimal possible logic, no direct input or output. I'm talking not about your main function, but about what everybody think when somebody says: "main function". Main is entry point, this is where your entire module start and finish working. UPD: I'd start with this draft import itertools class Card(object): SUIT = 'scdh' RANK = '23456789TJQKA' def __init__(self, suit, rank): self.suit = suit self.rank = rank def get_value(self): return 2 + self.RANK.find(self.rank) def __repr__(self): return '{}{}'.format(self.suit, self.rank) DECK = [ Card(suit, rank) for suit, rank in itertools.product(Card.SUIT, Card.RANK) ] class Hand(object): def __init__(self): self.cards = [] def add_card(card): self.cards.append(card) def get_value(): return sum([card.get_value() for card in self.cards]) class Match(object): def prompt_continue(self): """Asks user for starting new round""" pass def run(self): """Single round logic""" self.init_deck() self.init_player_hand() self.init_dealer_hand() while not self.all_ready(): self.player_take_card() self.dealer_take_card() self.print_results() def run_loop(self): while self.prompt_continue(): self.run() if __name__ == '__main__': match = Match() match.run_loop()
{ "domain": "codereview.stackexchange", "id": 14186, "tags": "python, beginner, python-3.x, playing-cards" }
Electric dipole moment (EDM) and CP violation
Question: It's well known that a non-zero value for the electric dipole moment (EDM) would imply CP violation. If we consider the interaction Hamiltonian of an EDM $d$ with an electric field $\vec{E}$, $$ H = -\frac{d}{S}\vec{S}\cdot \vec{E}, \quad \mbox{$\vec{S}$ is the spin} $$ we see it is P- and T-odd, but is it C-even? If CPT theorem holds, then it should be C-even but because $\vec{E}$ is C-odd it should imply that the spin changes sign under C. Is this true? My question comes from the answer to this post Does Charge conjugation change the spin momentum?, where it is said that it should be clear the spin does not change under C. Therefore, there is a contradiction, isn't it? Answer: No, there's no contradiction, the issue is that you're mixing up ideas from nonrelativistic quantum mechanics and relativistic quantum field theory. You've written down the Hamiltonian for a single electron in an external field. This automatically puts you in the realm of nonrelativistic quantum mechanics. In the restricted setting you are considering, the operation $\hat{C}$ doesn't even make any sense, because it should map the electron to a positron, but your Hilbert space explicitly includes only the states of a single electron. (That's why we never talk about $\hat{C}$ in undergraduate QM courses.) So in such a setting, you can't even state the CPT theorem, much less apply it! If you were working in relativistic QFT, you would instead have a term like $$\mathcal{L} \supset \bar{\psi} \sigma^{\mu\nu} \gamma^5 \psi F_{\mu\nu}$$ which in the low energy limit reduces to your term. This term is invariant under CPT. Speaking loosely, the "extra" sign flip that seems to be missing in the nonrelativistic analysis comes from the fact that C flips the sign of the electron/positron charge.
{ "domain": "physics.stackexchange", "id": 66771, "tags": "symmetry, quantum-spin, dipole-moment, cp-violation, cpt-symmetry" }
Easiest Beat Tracking Algorithim
Question: I want a relatively easy to implement but relatively reliable algorithm for detecting beat locations in an audio file.What I am aiming to do is get the location of the closest beat to a particular time instant i.e if given current playtime, how can I get the location of next beat.Does anyone know of any algorithms that could help me solve this problem? Answer: The matlab codes that implemented D. Ellis's algorithm are on their website: http://labrosa.ee.columbia.edu/projects/beattrack/
{ "domain": "dsp.stackexchange", "id": 4020, "tags": "audio, discrete-signals, sound, music" }
Why does adding a dropout layer improve deep/machine learning performance, given that dropout suppresses some neurons from the model?
Question: If removing some neurons results in a better performing model, why not use a simpler neural network with fewer layers and fewer neurons in the first place? Why build a bigger, more complicated model in the beginning and suppress parts of it later? Answer: The function of dropout is to increase the robustness of the model and also to remove any simple dependencies between the neurons. Neurons are only removed for a single pass forward and backward through the network - meaning their weights are synthetically set to zero for that pass, and so their errors are as well, meaning that the weights are not updated. Dropout also works as a form of regularisation, as it is penalising the model for its complexity, somewhat. I would recommend having a read of the Dropout section in Michael Nielsen's Deep Learning book (freely available), which gives nice intuition and also has very helpful interactive diagrams/explanations. He explains that: Dropout is a radically different technique for regularization. Unlike L1 and L2 regularization, dropout doesn't rely on modifying the cost function. Instead, in dropout we modify the network itself. Here is a nice summary article. From that article: Some Observations: Dropout forces a neural network to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. Dropout roughly doubles the number of iterations required to converge. However, training time for each epoch is less. With H hidden units, each of which can be dropped, we have 2^H possible models. In testing phase, the entire network is considered and each activation is reduced by a factor p. Example Imagine I ask you to make me a cup of tea - you might always use your right hand to pour the water, your left eye to measure the level of water and then your right hand again to stir the tea with a spoon. This would mean your left hand and right eye serve little purpose. Using dropout would e.g. tie your right hand behind your back - forcing you to use your left hand. Now after making me 20 cups of tea, with either one eye or one hand taken out of action, you are better trained at using everything available. Maybe you will later be forced to make tea in a tiny kitchen, where it is only possible to use the kettle with your left arm... and after using dropout, you have experience doing that! You have become more robust to unseen data.
{ "domain": "datascience.stackexchange", "id": 7163, "tags": "machine-learning, deep-learning, keras, regularization, dropout" }
How to extract heat transfer model parameters from empirical data?
Question: I have made a simple model of heat transfer between ambient and a silicon chip (module) from which I can read its internal temperature $T_m$. I do not need fancy equations and an approximate model will do just fine. I have modelled the system using electrical equivalent model (RC), but determining parameters requires equilibrium point data. Using the heat transfer slope I could potentially determine the model parameters, only that the data is noisy and even using Rick Chartrand, "Numerical differentiation of noisy, nonsmooth data" algorithm I still couldn't get a good estimate for the actual derivatives. My solution now is to fit data and extract parameters from empirical data. After I heat the module I let it cool down, but to measure the internal temp I need some part of the module to run, which should account for a residual power $P$. The model is $\frac{\delta T_m}{\delta t} = c (T_a -T_m(t)) + P$ I can easily assume that the room will not change temperature because of a very small chip running, such that $T_a$ is constant. I can determine estimates of $c$ from halftime and $P$ from solving $\frac{\delta T_m}{\delta t} = 0$. A linear sweep of values around this estimates will give probably sufficiently good estimates. My problem comes from the fact that if I solve the diferential equation I end up with this solution $T_m(t) = T_m(0) e^{-ct} + (cT_a+P)t$ which makes no physical sense since it simply states temperature will rise infinetly. What is the right solution for this diferential equation of heat transfer? Answer: The correct solution to your differential equation is: $$ T_m(t)=(T_m(0)-\frac{P}{c}-T_a)e^{-ct} + (T_a+\frac{P}{c}) $$ or more succinctly, $$ T_m(t)=(T_m(0)-T_m(\infty))e^{-ct}+T_m(\infty) $$ where $T_m(\infty)=T_a+\frac{P}{c}$.
{ "domain": "physics.stackexchange", "id": 28659, "tags": "thermodynamics, differential-equations" }
Path integral vs. measure on infinite dimensional space
Question: Coming from a mathematical background, I'm trying to get a handle on the path integral formulation of quantum mechanics. According to Feynman, if you want to figure out the probability amplitude for a particle moving from one point to another, you 1) figure out the contribution from every possible path it could take, then 2) "sum up" all the contributions. Usually when you want to "sum up" an infinite number of things, you do so by putting a measure on the space of things, from which a notion of integration arises. However the function space of paths is not just infinite, it's extremely infinite. If the path-space has a notion of dimension, it would be infinite-dimensional (eg, viewed as a submanifold of $C([0,t] , {\mathbb R}^n))$. For any reasonable notion of distance, every ball will fail to be compact. It's hard to see how one could reasonably define a measure over a space like this - Lebesgue-like measures are certainly out. The books I've seen basically forgo defining what anything is, and instead present a method to do calculations involving "zig-zag" paths and renormalization. Apparently this gives the right answer experimentally, but it seem extremely contrived (what if you approximate the paths a different way, how do you know you will get the same answer?). Is there a more rigorous way to define Feynman path integrals in terms of function spaces and measures? Answer: Path integral is indeed very problematic on its own. But there are ways to almost capturing it rigorously. Wiener process One way is to start with Abstract Wiener space that can be built out of the Hamiltonian and carries a canonical Wiener measure. This is the usual measure describing properties of the random walk. Now to arrive at path integral one has to accept the existence of "infinite-dimensional Wick rotation" and analytically continue Wiener measure to the complex plane (and every time this is done a probabilist dies somewhere). This is the usual connection between statistical physics (which is a nice, well-defined real theory) at inverse temperature $\beta$ in (N+1,0) space-time dimensions and evolution of the quantum system in (N, 1) dimensions for time $t = -i \hbar \beta$ that is used all over the physics but almost never justified. Although in some cases it was actually possible to prove that Wightman QFT theory is indeed a Wick rotation of some Euclidean QFT (note that quantum mechanics is also a special case of QFT in (0, 1) space-time dimensions). Intermezzo This is a good place to point out that while path integral is problematic in QM, whole lot of different issues enter with more space dimensions. One has to deal with operator valued distributions and there is no good way to multiply them (which is what physicist absolutely need to do). There are various axiomatic approaches to get a handle on this and they in fact do look very nice. Except that it's very hard to actually find a theory that satisfies these axioms. In particular, none of our present day theories of Standard model have been rigorously defined. Anyway, to make the Wick rotation a bit clearer, recall that Schrödinger equation is a kind of diffusion equation but for an introduction of complex numbers. And then just come back to the beginning and note that diffusion equation is macroscopic equation that captures the mean behavior of the random walk. (But this is not to say that path integral in any way depends on the Schrödingerian, non-relativistic physics) Others There were other approaches to define the path-integral rigorously. They propose some set of axioms that path-integral has to obey and continue from there. To my knowledge (but I'd like to be wrong), all of these approaches are too constraining (they don't describe most of physically interesting situations). But if you'd like I can dig up some references.
{ "domain": "physics.stackexchange", "id": 22882, "tags": "quantum-mechanics, path-integral" }
Amplitude spectrum for simple RL circuit
Question: I have some problems trying to understand amplitude spectrum for simple RL circuit shown below: for this circuit in order to find amplitude spectrum of response Vo we write folowing equations: base on the final equation we draw aplitude spectrum: My questons: To me this amplitude spectrum doesn't make any sense. for low values of w (omega) Vo voltage on inductor shall be 0 since inductor is short circut for DC. What we have on our amplitude spectrum then - high Vo for low frequencies! And vice versa low Vo for high frequencies...how is that possible? Why in the question for n-th harmonic (second equation) the DC component of Vs has been dropped and sumation has been substituted with phasor -90 degrees? In the first equation which describes square wave input signal, what does this n = 2k - 1 means and how does it influence this signal? What if it was n = 2k+1 or simply 2k? Answer: As you correctly point out, this is a RL high-pass filter and has zero transfer at zero frequency. And indeed, the constant term in the input isn't present in the output. But this doesn't imply that the output signal must have higher frequency components with greater amplitude than lower frequency components. If the lower frequency components in the input signal have much larger amplitudes than the high frequency components, the spectrum of the output may be dominated by the lower frequency components. The transfer function is given by $$H(\omega) = \left(\frac{V_o(\omega}{V_s(\omega)}\right) = \frac{j2\omega}{5 + j2\omega} = \frac{j2\omega}{5 + j2\omega}$$ where $\omega$ is the angular frequency. For a constant, the angular frequency is zero and so $H(0) = 0$. This means that the constant term in the input is multiplied by zero for the output. Setting $\omega_n = 2n\pi$, write the transfer function in terms of $n$: $$H_n = \left(\frac{V_o}{V_s}\right)_n = \frac{j2\omega_n}{5 + j2\omega_n} = \frac{j2n\pi}{5 + j2n\pi}$$ with magnitude $$|H_n| = \frac{2n\pi}{\sqrt{25 + 4n^2\pi^2}} = $$ Now, calculate a few of the $|H_n|$. $$|H_0| = 0$$ $$|H_1| = \frac{2\pi}{\sqrt{25 + 4\pi^2}}\approx 0.783$$ $$|H_3| = \frac{6\pi}{\sqrt{25 + 36\pi^2}}\approx 0.967$$ $$|H_5| = \frac{10\pi}{\sqrt{25 + 100\pi^2}}\approx 0.988$$ $$|H_7| = \frac{14\pi}{\sqrt{25 + 196\pi^2}}\approx 0.994$$ So, although this is high-pass filter, the fundamental $(n=1)$ is high enough in frequency such that it isn't greatly attenuated. In the input signal (a square wave with constant offset), the fundamental and harmonics have amplitude $$A_1 = \frac{2}{\pi} \approx 0.637$$ $$A_3 = \frac{2}{6\pi} \approx 0.212$$ $$A_5 = \frac{2}{10\pi} \approx 0.127$$ $$A_7 = \frac{2}{14\pi} \approx 0.091$$ In the output signal, the fundamental and harmonics have amplitude $$|H_1|A_1 \approx 0.498$$ $$|H_3|A_3 \approx 0.205$$ $$|H_5|A_5 \approx 0.126$$ $$|H_7|A_7 \approx 0.090$$ So you see that the filter mainly affects the amplitudes of the zero frequency term and fundamental while leaving the amplitudes of the harmonics essentially unchanged. Why in the question for n-th harmonic (second equation) the DC component of Vs has been dropped and sumation has been substituted with phasor -90 degrees? I don't know, bad notation? Maybe it should be something like $(\mathbf{V}_s)_n$ instead? And add $(\mathbf{V}_s)_0 = 1/2$? Importantly, one can't add phasors associated with different frequencies so a summation here would be incorrect. However, one can represent each term in the (time domain) summation as a phasor and solve for the output separately for each component (which is what we've done here).
{ "domain": "physics.stackexchange", "id": 49867, "tags": "electric-circuits, frequency" }
Why is th $\hat{r}$ component zero in this integral?
Question: I'm trying to evaluate the magnetic field by calculating the Coloumb integral $\overrightarrow{A}$, and later I will take: $$\overrightarrow{B}=\nabla \times \overrightarrow{A}$$ However, in the middle of everything, I get to (cylindrical coordinates): $$\overrightarrow{A}=\frac{\mu_oI}{4\pi} \oint_{C} \frac{a\hat{\phi'}}{[r²+(z-b)^2-2ar\cos(\phi-\phi')]^{1/2}} d\phi'$$ I should show that the only component of this expression is $\hat{\phi}$, where: $$\hat{r'}=\hat{x}\cos{\phi'}+\hat{y}\sin{\phi}$$ $$\hat{\phi'}=-\hat{x}\sin{\phi'}+\hat{y}\cos{\phi'}$$ $$\hat{r}=\hat{x}\cos{\phi}+\hat{y}\sin{\phi}$$ $$\hat{\phi}=-\hat{x}\sin{\phi}+\hat{y}\cos{\phi}$$ and that it's leading to only this integral: $$\overrightarrow{A}=\frac{\mu_oI}{4\pi} \oint_{C} \frac{a\cos{\left(\phi-\phi'\right)}\hat{\phi}}{[r²+(z-b)^2-2ar\cos(\phi-\phi')]^{1/2}} d\phi'$$ But when I rewrite how $\hat{\phi'}$ relates to $\hat{\phi}$ and $\hat{r}$, it's not so obvious that the $\hat{r}$ component disappears. The writer refers to symmetry, but I can't still figure this out. So I basically have this integral instead because I can't show how the $\hat{r}$ component is zero: $$\overrightarrow{A}=\frac{\mu_oI}{4\pi} \oint_{C} \frac{a\sin{\left(\phi-\phi'\right)}\hat{r}+a\cos{\left(\phi-\phi'\right)}\hat{\phi}}{[r²+(z-b)^2-2ar\cos(\phi-\phi')]^{1/2}} d\phi'$$ Any ideas? I've been stuck at this for a while, and I've tried to show it in Cartesian coordinates without no luck. Answer: Hint: substitute $u=\cos{(\phi - \phi')}$ and integrate.
{ "domain": "physics.stackexchange", "id": 10323, "tags": "electromagnetism" }
Confusion regarding magnetic moment
Question: Is it possible to have a magnetic configuration without a north and a south pole? My school textbook says that it is possible, as in the case of toroid and a wire carrying current. Now, the explanation given suggests that this type of case is possible if and only if a configuration has a net zero magnetic moment, as in the above case, but not for a bag magnet or a solenoid. However, I don't understand this justification, like how the di-polarity is related to the magnetic moment. Is there a mathematical interpretation? Answer: Imagine if you have two dipoles that are right next to each other pointing in opposite directions. Such a configuration has no dipole moment, and at long range the field is a "quadruple" field. That's the next term in the multipole expansion.
{ "domain": "physics.stackexchange", "id": 100613, "tags": "magnetic-fields, magnetic-moment" }
How to solve lookup transform problem between frames?
Question: I am using hector_mapping to create map of my room. I used openni node to get the depth/image_raw from kinect sensor, I then changed it to laser scan data using "depthimage_to_laserscan" and used that "scan" topic to create hector_mapping. When I run all of these in a machine it works fine and creates a map but when I run "openni_launch" and "depthimage_to_laserscan" on odroid and "hector_mapping" on my machine, I get an error the following error: "lookupTransform base_link to camera_depth_frame" timed out. Could not transform laser scan into base frame". What does this error mean and how did it not occur when everything was running on same machine.? My odroid and machine communicates over a wireless network. My machine runs ROS indigo on Ubuntu Trusty. Originally posted by Ros User on ROS Answers with karma: 40 on 2016-08-16 Post score: 0 Original comments Comment by patrchri on 2016-08-16: Do you have a tf connection between your laser scan and your base frame. A node that publishes that connection to the tf topic ? Comment by Mark Rose on 2016-08-16: The easiest way to get what @patrchri describes is to create a URDF file and then use robot_state_publisher and join_state_publisher to publish the transforms automatically. Answer: If you are running on two machines in distributed fashion first be sure that ROS connectivity is fully working. It's important to test both directions, i.e. to make sure publishing and subscribing works. See this Q/A for some info in case it does not. If ROS connectivity works, the next thing to look out for are time sync issues between both machines. See for instance this Q/A. Another reason for things not working could be overloading the wireless network (i.e. if you try to transmit too much data). Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-08-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25532, "tags": "navigation, mapping, hector-mapping, ros-indigo" }
Functional representation of operators in second quantization
Question: In path integral formalism, the operator $\hat{a}$ and $\hat{a}^\dagger$ represented by numbers $\alpha$ and $\bar{\alpha}$ according to $\hat{a}$|$\alpha$⟩=$\alpha$|$\alpha$⟩ and <$\alpha$|$\hat{a}^\dagger$=<$\alpha$|$\bar{\alpha}$, respectively. Here, |$\alpha$⟩ is the coherent state, say the eigenstate of $\hat{a}$. Based on this logic, the second quantization Hamiltonian $\hat{H}(\hat{a},\hat{a}^\dagger)$'s functional representation is thus $H(\alpha,\bar{\alpha})$. However, when including the imaginary time variable $\tau$, in the text book, it seems that the operator $\hat{X}(\hat{a}(\tau),\hat{a}^\dagger(\tau))$ can be automatically represented by $\hat{X}(\alpha(\tau),\bar{\alpha}(\tau))$. I understand that $\hat{X}(\hat{a},\hat{a}^\dagger)$ $\rightarrow$ $\hat{X}(\alpha,\bar{\alpha})$ is fine and is guaranteed by $\hat{a}$|$\alpha$⟩=$\alpha$|$\alpha$⟩. Can someone tell me why $\hat{X}(\hat{a}(\tau),\hat{a}^\dagger(\tau))$ $\rightarrow$ $\hat{X}(\alpha(\tau),\bar{\alpha}(\tau))$ is also valid? I am confused at this point because to me the $\hat{a}(\tau)$|$\alpha(\tau)$⟩=$\alpha(\tau)$|$\alpha(\tau)$⟩ is not at all obvious. Answer: I'm not sure what the reality or not of time is meaning to you--you never revealed your text. Given the standard coherent states, $\hat{a}$|$\alpha$⟩=$\alpha$|$\alpha$⟩ and <$\alpha$|$\hat{a}^\dagger$=<$\alpha$|$\bar{\alpha}$, if you hamiltonian is standard ordered, i.e. with all annihilators to the right of creators, obviously $$ \langle \alpha| {H}(\hat{a},\hat{a}^\dagger)|\alpha\rangle= H(\alpha,\bar{\alpha}). $$ In the Heisenberg picture, $ \hat a(t) =e^{-i\omega t}\hat a ~~~\leadsto$ $$ \hat a(t)|\alpha\rangle =e^{-i\omega t}\hat a|\alpha\rangle= e^{-i\omega t} \alpha|\alpha\rangle\equiv \alpha (t) |\alpha\rangle . $$ In the Schrödinger picture, $\hat a|\alpha(t)\rangle=\alpha(t)|\alpha(t)\rangle$, amounts to the same eigenvalue equation as above, with the suitable redefinitions! We then have $$ \langle \alpha (t)| {H}(\hat{a},\hat{a}^\dagger)|\alpha(t)\rangle= H(\alpha (t),\bar{\alpha}(t)). $$ Again in the Schrödinger picture, by the same token, $$ \hat X=(\hat a + \hat a^\dagger)/\sqrt{2}, ~~~\leadsto \\ \langle \alpha (t)| \hat X |\alpha(t)\rangle= (\alpha (t)+\bar{\alpha}(t))/\sqrt{2}. $$ I'm not sure where the imaginary or real time would enter here.
{ "domain": "physics.stackexchange", "id": 75933, "tags": "quantum-field-theory, hilbert-space, operators, path-integral, coherent-states" }
Covering a graph with M cliques maximizing total edges weight
Question: I am working on a problem that involves distributing a set of N supplements across a predefined number of meals (M) in a way that maximizes the total number of positive interactions and minimizes negative interactions between the supplements. Each interaction between the supplements is known beforehand (positive, neutral or negative: see the image below). I'm seeking advice on the mathematical formulation of this problem and potential algorithms or tools that could be used to solve it. Here's what I came up with: We can build a graph G from table of compatibility, with edges 1/0/-1 for positive, neutral and negative interactions correspondingly Then we can remove -1 edges completely Now we have modified graph G' with 0/1 edges only (see the image below) And we want to cover it with M cliques (bc clique in this graph will guarantee no 'bad' connections), finding a coverage with maximum number of +1 green edges This method is not very good because: it doesn't consider using -1 red edges at all while building meals it has a very big time complexity (hours or days for 1 calculation on ~20 supplements) I'm open for any suggestions. Maybe there are existing algorithms for solving this problem, or tools (like solvers)? Or is there a name for this problem I could search algorithms for? Answer: One approach is to solve this with tools from operations research and combinatorial optimization, and specifically, to view this as an instance of ILP or SAT. Let $x_{c,i}$ be a variable that will be true (i.e., 1) if supplement $i$ is assigned to meal $c$, or false (0) otherwise. Let $y_{c,i,j} = x_{c,i} \land x_{c,j}$. Let $r_{i,j}$ be the reward if supplements $i,j$ are used in the same meal ($+1$ if they interact positively, $0$ if there is no interaction, $-1$ if they interact negatively). Then your goal is to maximize $\sum r_{i,j} y_{c,i,j}$, subject to constraints that require $x$ to be a valid solution. These constraints can be expressed in SAT or ILP pretty easily: Exactly one of $x_{c,i}$ is true, for each $i$. (In ILP: $\sum_c x_{c,i} = 1$. In SAT, this is a 1-out-of-M constraint.) $y_{c,i,j} = x_{c,i} \land x_{c,j}$. (In ILP: see Express boolean logic operations in zero-one integer linear programming (ILP). In SAT, this is $(x_{c,i} \lor \neg y_{c,i,j}) \land (x_{c,j} \lor \neg y_{c,i,j}) \land (\neg x_{c,i} \lor \neg x_{c,j} \lor x_{c,i,j})$.) Then you can use a SAT solver that supports pseudo-boolean constraints (e.g., Z3) or an ILP solver (e.g., Cplex, Gurobi) to solve this system. If you use a SAT solver, you might want to use binary search over the target value of the objective function, using the SAT solver to search for a solution that achieves that target. (Optionally, you can add some additional constraints for symmetry breaking, e.g., WLOG you can force $x_{1,1}$ to be true, force $x_{1,2} \lor x_{2,2}$ to be true, etc.) If $N,M$ are big enough, this won't work, but it might be a simple to implement approach if $N,M$ aren't too big.
{ "domain": "cs.stackexchange", "id": 21804, "tags": "algorithms, graphs, optimization, combinatorics, partitions" }
What physical phenomenon best explains the region of very short half-lives on table of nuclides?
Question: On this interactive table of nuclides, there is a region just "north east" of $^{208}\text{Pb}$ and $^{209}\text{Bi}$ with extremely unstable nuclides (the yellow/pink/light green squares). The longest $\text{T}_{1/2}$ between $^{211}\text{Po}$ and $^{224}\text{U}$ is 0.511 seconds, with most being in the millisecond to nanosecond range. Refer to the area circled in red in the below image. On the nuclide chart, this "gouge" is plainly visible from like a mile away. :-) I'd like to think of it as an "island of instability". From a nuclear physics perspective, What phenomenon best explains this? Answer: It is somewhat ironic that this "Island of Instability" would occur just after one of the most stable large nuclei, i.e. $Pb^{208}$. $Pb^{208}$ owes its stability to the fact that it is doubly magic (consisting of closed shells of both neutrons and protons). These doubly magic systems are spherical and when they occur near the line of beta stability (as is the case for$Pb^{208}$) their stability is further enhanced. So what could explain the marked instability of the nuclei immediately following? The answer to this question requires some fine details of the nuclear shell model and specifically the nature of the neutron and proton orbitals that are being filled in this region. If one looks at the ground state spins of both $Pb^{209}$ and $Bi^{209}$, one sees that they both have spin $\frac92$. From calculations that I performed for my PhD, these orbitals are likely to be $0g\frac92$ and $0h\frac92$ respectively for the neutron and proton orbitals. The unique thing about this situation is that both of these orbitals have very high orbital angular momentum (4 and 5 respectively). When extra neutrons and protons are added to a spherical core, the pairing force acts to render the lowest total nuclear spin possible. For even-even nuclei the ground states are always spin 0. The high angular momentum of the orbitals following $Pb^{208}$ means that these paired entities will form on the nuclear periphery (forced there by the high angular momentum barrier). That means that $\alpha$ decay (for even-even systems) will be more probable in this region than would be the case if either or both of the pairing orbitals had lower angular momentum. As the comments have indicated, these are mostly $\alpha$ emitters, so the enhanced instability is to be expected.
{ "domain": "physics.stackexchange", "id": 33398, "tags": "nuclear-physics, radioactivity, isotopes, half-life" }
Replace string based on multiple rules, don't replace same section twice
Question: Given a 2-column CSV dictionary file that contains translation of certain words, this c# code needs to replace the first occurrence of a word in the dictionary. Once a segment of string has been replaced, it cannot be matched or overwritten by a new dictionary word. This sounded like a fairly simple task but the code I came up with runs horribly slow and I'm looking for ways to improve it. All the offending code is in the TestReplace function, I build up a hashset that keeps track of what character Ids in the string have been touched. When you apply a rule that changes the lenght of the string, all the character ids switch around so they have to be recalculated and I believe it costs a lot of performance. Wish there was a simpler way to do this ! Here is a super simple case of what the code tries to do : Dictionary file: hello >>> hi hi >>> goodbye Input: hello, world! First rule is applied so string becomes -> hi, world. The word hi is now locked. Second rule is applied but the string does not become goodbye, world since this part is locked. using System; using System.Collections.Generic; using System.Linq; using System.Text.RegularExpressions; namespace stringReplacer { class Program { public static Dictionary<string, string> ReplacementRules = new Dictionary<string, string>() { {"John","Freddy" }, {"John walks","Freddy runs" }, {"brown dog","gray dog" }, {"dog","cat" }, {"- not -", "(not)" }, {"(","" }, {")","" }, {"whenever", "sometimes, when"}, {"raining", "snowing" }, {"his", "many" } }; static void Main(string[] args) { string Input = "John walks his brown dog whenever it's - not - raining"; string ExpectedOutput = "Freddy walks many gray dog sometimes, when it's (not) snowing"; string TestReplaceOutput = TestReplace(Input, ReplacementRules); ValidateReplacement("TestReplace", TestReplaceOutput, ExpectedOutput); } public static string TestReplace(string input, Dictionary<string, string> ReplacementRules) { HashSet<int> LockedStringSegment = new HashSet<int>(); foreach (var rule in ReplacementRules) { string from = Regex.Escape(rule.Key); string to = rule.Value; var match = Regex.Match(input, from); if (match.Success) { List<int> AffectedCharacterPositions = Enumerable.Range(match.Index, match.Length).ToList(); if (!AffectedCharacterPositions.Any(x => LockedStringSegment.Contains(x))) { input = Regex.Replace(input, from, to); int LengthDelta = to.Length - rule.Key.Length; LockedStringSegment .Where(x => x > match.Index + rule.Key.Length).OrderByDescending(x => x).ToList() .ForEach(x => { //We shuffle the locked character's place depending on the replacement delta. LockedStringSegment.Remove(x); LockedStringSegment.Add(x + LengthDelta); }); //Add all the new locked character's position to the hashset. Enumerable.Range(match.Index, to.Length).ToList().ForEach(x => LockedStringSegment.Add(x)); } } } return input; } public static void ValidateReplacement(string MethodName, string Actual, string Expected) { Console.Write($"{MethodName} : "); if (Expected != Actual) Console.WriteLine("String replacement doesn't work"); else Console.WriteLine("It works"); Console.WriteLine($"Expected : {Expected}"); Console.WriteLine($"Actual : {Actual} \n\n"); } } } Answer: A recursive solution seems like a good fit for this problem. public static string ReplaceOnce( string input, Dictionary<string, string> ReplacementRules) { var matches = ReplacementRules.Where(rule => input.Contains(rule.Key)); if (!matches.Any()) return input; var match = matches.First(); int startIndex = input.IndexOf(match.Key); int endIndex = startIndex + match.Key.Length; var before = ReplaceOnce( input.Substring(0,startIndex), ReplacementRules ); var replaced = match.Value; var after = ReplaceOnce( input.Substring(endIndex), ReplacementRules ); return before + replaced + after; }
{ "domain": "codereview.stackexchange", "id": 40506, "tags": "c#, performance, strings, regex" }
Robot moving sideways in RVIZ ROS
Question: Hi, I have a wheeled-legged robot as you can see in the picture. When I simulate it in ROS-Gazebo it works fine, however at the same time in Rviz it moves sideways instead of forward or backwards. I have no idea why it is behaving like this. Would greatly appreciate if someone can give me some clue here. Answer: I have found the problem after some long search on the internet. The problem was that in Rviz, X-axis is defined as front of the robot and Y-axis is defined as the sideways. However, in my URDF I defined the robot in a way where Y-axis was its front and X-axis was its sideways. After fixing this, my robot moves normally in Rviz and Gazebo both.
{ "domain": "robotics.stackexchange", "id": 2520, "tags": "ros, rviz" }
Best tutorial about the variational quantum eigensolver (VQE)
Question: I want to find a tutorial on VQE with a good valance between the theoretical background of the method, and it's working implementation on QPU. Covering advanced topics such as quantum error mitigation. Any advice will be appreciated. Answer: The best review of VQE that I have came across is the following: The Variational Quantum Eigensolver: a review of methods and best practices Chapter 8 covers a good amount of material on error mitigation techniques. Furthermore, it has all the references you need to look up for more details.
{ "domain": "quantumcomputing.stackexchange", "id": 4071, "tags": "vqe, error-mitigation" }
$\mathbb{N}$ in intensional MLTT with judgmentally commutative $+$ and $\times$
Question: Is there a way to implement natural numbers in intensional Martin-Löf type theory so that addition and multiplication is judgmentally commutative? Answer: This is impossible. Suppose that we have such a type $T$, with an implementation of addition $\mathit{add} : T \to T \to T$, which is judgementally commutative. Because MLTT is strongly normalising, we know that we can put $\mathit{add}$ in $\beta$-normal, $\eta$-long form. Now suppose that we have two variables $x, y$ of type $T$. Now consider the terms $\mathit{add}\,x\,y$ and $\mathit{add}\,y\,x$. Substituting $x$ and $y$ for the formal parameters of $\mathit{add}$ will not create any new $\beta$-reducible expressions, because $x$ and $y$ are neutral terms. Now, $\eta$-expand the occurences of $x$ and $y$ as demanded by the definition of $T$. Now, the resulting terms will be identical except that we've permuted the occurences of $x$ and $y$. Recall that judgemental equality for $\beta$-normal, $\eta$-long terms is just $\alpha$-equivalence. Since we assumed $\mathit{add}\,x\,y$ and $\mathit{add}\,y\,x$ were assumed to be judgementally equal, this means that the normal forms for these two terms are $\alpha$-equivalent. Since anywhere that $x$ occurs in the normal form for $\mathit{add}\,x\,y$, a $y$ occurs in the corresponding position of the normal form for $\mathit{add}\,y\,x$, the only way that these two terms can be $\alpha$-equivalent is if neither $x$ nor $y$ occurs in the term. This means that $\mathit{add}\,x\,y$ and $\mathit{add}\,y\,x$ do not use their arguments! As a result, we can conclude that $\mathit{add}\,1\,1$ and $\mathit{add}\,0\,0$ are also judgementally equal. Therefore $T$ cannot be the natural numbers, since $2$ and $0$ must not be judgementally equal.
{ "domain": "cstheory.stackexchange", "id": 5231, "tags": "type-theory, dependent-type, typed-lambda-calculus" }
Why do red muscle fibres have more mitochondria than white muscle fibre but less ATP than White muscle fibres?
Question: This is a question given in my Anatomy book and I am really confused because logically the substance which have more mitochondria should have more ATP as mitochondria is the power house of cell.But the question contrast the logic. Answer: This is because of the difference in mitochondrial protein expression of red and white muscle fibres. There is difference in posttranslational modification, which leads to functional difference as red muscle contraction is slow, and white muscle contraction is rapid. Metabolic control in mitochondria contributes in speed of activity and cellular energy production. So, these differences effect the cellular energy levels in both white and red muscle fibres. Following factors also contribute to the ATP production: White muscle mitochondria have higher proton leak. Red muscle contain high palmitic acid and oleic acid, but white muscle possess more phosphate ions. Collagen and elastic fibers are largely distributed in red muscle mitochondria, whereas high titin levels and densely packed sarcomeres are present in white muscles. Whereas high levels of long sarcomeres are present in red muscles. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3118618/ https://www.hindawi.com/journals/bmri/2018/5816875/
{ "domain": "biology.stackexchange", "id": 10149, "tags": "human-biology, human-anatomy, muscles" }
How can one find the area of $B$-$H$ hysteresis loop by the Monte-Carlo method?
Question: How can one find the area of $B$-$H$ hysteresis loop by the Monte-Carlo method? Answer: You're looking for what is called rejection sampling. One of the primary introductions to this method is how one would determine $\pi$ using Monte Carlo (i.e., sampling $x,\,y\in[0,\,1]$ and counting number of times $x^2+y^2<1$ out of total trials, then $\pi\simeq4\times\text{hits}\,/\,\text{total}$ where the 4 comes from sampling one quarter of the square). For magnetic hysteresis, you have $B\in[-b,\,b]$ and $H\in[-h,\,h]$ with $b,\,h$ the maximal values for the given material. If you draw numbers uniformly for $B$ and $H$ from these ranges, it should just be a matter of counting the number of times the drawn point $(B,\,H)$ is in the interior of the curve versus the total number of samples. Whether a point is interior depends on: $B\in[-b,\,b]$ $H\in[-\mathcal{H}(B),\,\mathcal{H}(B)]$ where $\mathcal{H}(B)$ denotes the hysteresis curve. The first point should always be true, since we're forcing that from the uniform sampling. The second one is where the rejection occurs since it is possible that the $H$ you've sampled exceeds the curve. Your area is then the ratio of interior points to total points. Note that for Monte Carlo methods, the error is proportional to $1/\sqrt{N}$ where $N$ is the number of samples. Sampling 10,000 points would give you two decimal places of accuracy, so depending on your needs you may want more or less of these.
{ "domain": "physics.stackexchange", "id": 91954, "tags": "electromagnetism, computational-physics" }
How do I read mmCIF files from Alphafold DB as coordinates to use in replicating their visualization for Unity VR?
Question: https://alphafold.com/entry/Q9Y499 Starting at line 918 of the downloadable mmCIF file, it looks like there might be coordinates in x, y,z: ATOM 1 N N . TRP A 1 1 ? 12.549 -2.565 -21.811 1.0 96.06 ? 1 TRP A N 1 Q9Y499 UNP 1 W ATOM 2 C CA . TRP A 1 1 ? 12.477 -2.290 -20.365 1.0 96.06 ? 1 TRP A CA 1 Q9Y499 UNP 1 W ATOM 3 C C . TRP A 1 1 ? 11.221 -2.928 -19.812 1.0 96.06 ? 1 TRP A C 1 Q9Y499 UNP 1 W ATOM 4 C CB . TRP A 1 1 ? 12.469 -0.785 -20.103 1.0 96.06 ? 1 TRP A CB 1 Q9Y499 UNP 1 W ATOM 5 O O . TRP A 1 1 ? 10.206 -2.922 -20.501 1.0 96.06 ? 1 TRP A O 1 Q9Y499 UNP 1 W ATOM 6 C CG . TRP A 1 1 ? 13.795 -0.114 -20.270 1.0 96.06 ? 1 TRP A CG 1 Q9Y499 UNP 1 W ATOM 7 C CD1 . TRP A 1 1 ? 14.286 0.508 -21.369 1.0 96.06 ? 1 TRP A CD1 1 Q9Y499 UNP 1 W ATOM 8 C CD2 . TRP A 1 1 ? 14.791 0.098 -19.228 1.0 96.06 ? 1 TRP A CD2 1 Q9Y499 UNP 1 W I am a bioinformatics noob (took an intro class in my undergrad CS degree at a Canadian university) but know some Unity game engine with which I want to build a standalone visualization tool for users with VR headsets. Answer: Don't A protein is not only list of atomic coordinates. Bar for a proof of concept (i.e. learn to code in Unity VR), it is an endless spiral of hurt in a field where there is a lot of competition and were you to push a product you'd need to invest more manhours in publicity than in coding... Connectivity Here is the short of it. Chemistry is classically a graph network: nodes=atoms, edges=bonds. The connectivity of the atoms is dictated by their residue's 3-letter name, which is either assumed —software know the 20 AAs (and several modifications, eg. SEP, 4x2 bases and some ligands, e.g. HOH— or a special entry in the PDB or mmCIF file —in PDB it's a CONECT entry. E.g. https://www.rcsb.org/ligand/CFF AlphaFold2 does not have ligands, AlphaFill does. But scientists (not PR specialists) use PDB entries or run their own modelling. AlphaFold2 from EBI is rarely used: oligomers with ligands is where it's at. Especially if doing a drug discovery campaign (and red biotech can and does afford VR headsets). Format mmCIF is a deposition standard, PDB is the workhorse standard. mmTF is a cool idea that is not catching on (cf XKCD comic strip about standard proliferation...). Using PDB is easier: https://www.wwpdb.org/documentation/file-format-content/format33/v3.3.html Representation Problems arise when offering representations —everyone has their favourite. See NGL gallery for an array: https://nglviewer.org/ngl/gallery/index.html Surface display has additional issues —PyMOL simply and openly reuses the Advance Poisson-Boltzmann Solvation tool as it's tricky. Dragons in the minefield In a PDB entry not from AF2, you can have disulfides (SSBOND and/or CIZ entry), isopeptide bonds (LINK), missing atoms, the UNK, UNX and UNL residues, alphatraces, insertion sequences, alt. occupancy, gaps, non-standard usages of all of these etc etc. and my favourite implied proximity bonding.
{ "domain": "bioinformatics.stackexchange", "id": 2238, "tags": "proteins" }
Why can't a static magnetic field have a local maximum?
Question: Reading a brief on magnetic traps I've read that you can't have a local maximum in the magnetic field. I believe this is linked to the divergence of the B-field being zero in vacuum, but I don't see why one can have a local minimum but not a maximum? Answer: For a stationary point to be a local maximum or local minimum, then the eigenvalues of the Hessian matrix must either be all negative or all positive respectively. However, a neat trick is that the trace of the Hessian is equal to the sum of the eigenvalues and is also the Laplacian. So, to answer the question we ask what the Laplacian of the magnetic field magnitude looks like, or more conveniently, the Laplacian of $B^2$. $$\nabla^2 B^2 = \nabla^2(B_{x}^2 + B_{y}^2 + B_{z}^2)$$ $$\nabla^2 B^2 = \nabla \cdot \nabla(B_{x}^2 + B_{y}^2 + B_{z}^2)$$ $$\nabla^2 B^2 = 2\nabla \cdot (B_x \nabla B_x + B_y \nabla B_y + B_z \nabla B_z)$$ $$\nabla^2 B^2 = 2( [\nabla B_x]^2 + [\nabla B_y]^2 + [\nabla B_z]^2 + B_x \nabla^2 B_x + B_y \nabla^2 B_y + B_z \nabla^2 B_z)\ \ \ (1)$$ For a time-independent, current free situation we know that $\nabla \times {\bf B} =0$. If we take the curl of both sides $$\nabla \times \nabla \times {\bf B} = -\nabla^2 {\bf B} + \nabla(\nabla \cdot {\bf B}) = 0$$ and therefore, because $\nabla \cdot {\bf B} = 0$, we can also say $\nabla^2 {\bf B} = 0$ and that the individual components $\nabla^2 B_x = \nabla^2 B_y = \nabla^2 B_z = 0$. Using this in equation (1), we get $$\nabla^2 B^2 = 2( [\nabla B_x]^2 + [\nabla B_y]^2 + [\nabla B_z]^2),$$ but because the square of the individual gradients must always be $\geq 0$, we can say that $$\nabla^2 B^2 \geq 0$$. From the discussion of the Hessian and its relationship with the eigenvalues and the Laplacian at the beginning we can thus say that whilst a local minimum in the magnitude of the magnetic field is possible because the trace of the Hessian can be $>0$, it is impossible to have a local field magnitude maximum, because it cannot be $<0$.
{ "domain": "physics.stackexchange", "id": 93544, "tags": "electromagnetism, magnetic-fields, magnetostatics" }