Chomp Game in Python 3
Question: I have deliberately avoided any OOP for this project. I'm fairly happy with my implementation of the classic strategy game "Chomp". The computer just uses a random valid choice, so no AI. I would appreciate feedback on points of style and correctness. Is the code basically OK? Does it seem to have a coherent style? What improvements would you make? Thanks in advance. """ Chomp - a strategy game """ import random import time NUM_ROWS = 5 NUM_COLS = 6 def print_title(): print(r""" ______ __ __ ______ __ __ ______ /\ ___\ /\ \_\ \ /\ __ \ /\ "-./ \ /\ == \ \ \ \____ \ \ __ \ \ \ \/\ \ \ \ \-./\ \ \ \ _-/ \ \_____\ \ \_\ \_\ \ \_____\ \ \_\ \ \_\ \ \_\ \/_____/ \/_/\/_/ \/_____/ \/_/ \/_/ \/_/ """) def print_instructions(): print("Welcome to Chomp. Choose a square and all squares to the right") print("and downwards will be eaten. The computer will do the same.") print("The one to eat the poison square loses. Good luck!") print() def who_goes_first(): if random.randint(0, 1) == 0: return "computer" else: return "human" def play_again(): print("Would you like to play again? 
(yes or no)") return input().lower().startswith("y") def print_matrix(matrix): for row in matrix: for elem in row: print(elem, end=" ") print() def validate_user_input(player_choice, board): try: row, col = player_choice.split() except ValueError: print("Bad input: The input should be exactly two numbers separated by a space.") return False try: row = int(row) col = int(col) except ValueError: print("Input must be two numbers, however non-digit characters were received.") return False if row < 0 or row > NUM_ROWS - 1: print(f"The first number must be between 0 and {NUM_ROWS - 1} but {row} was passed.") return False if col < 0 or col > NUM_COLS - 1: print(f"The second number must be between 0 and {NUM_COLS - 1} but {col} was passed.") return False if board[row][col] == " ": print("That square has already been eaten!") return False return True def update_board(board, row, col): for i in range(row, len(board)): for j in range(col, len(board[i])): board[i][j] = " " def get_human_move(board): valid_input = False while not valid_input: player_choice = input("Enter the row and column of your choice separated by a space: ") valid_input = validate_user_input(player_choice, board) row, col = player_choice.split() return int(row), int(col) def get_computer_move(board): valid_move = False while not valid_move: row = random.randint(0, NUM_ROWS - 1) col = random.randint(0, NUM_COLS - 1) if board[row][col] == " ": continue else: valid_move = True return row, col def main(): board = [] for i in range(NUM_ROWS): row = [] for j in range(NUM_COLS): row.append("#") board.append(row) board[0][0] = "P" game_is_playing = True turn = "human" print_title() print_instructions() while game_is_playing: if turn == "human": # Human turn print("Human turn.") print() print_matrix(board) print() row, col = get_human_move(board) if board[row][col] == "P": print() print("Too bad, the computer wins!") game_is_playing = False else: update_board(board, row, col) print() print_matrix(board) print() turn 
= "computer" time.sleep(1) else: # Computer turn row, col = get_computer_move(board) print(f"Computer turn. the computer chooses ({row}, {col})") print() if board[row][col] == "P": print() print("Yay, you win!") game_is_playing = False else: update_board(board, row, col) print_matrix(board) print() turn = "human" if play_again(): main() else: print("Goodbye!") raise SystemExit main() Answer: Your game has the same kind of flaw like a lot of the other games that have recently been here on Code Review: an unnecessary recursion in the main game flow: What do I mean by that? Let's look at your main function: def main(): # ... all of the actual game here ... if play_again(): main() else: print("Goodbye!") raise SystemExit As one can clearly see main will call main every time the player chooses to play another round, leading to deeper and deeper recursion. If someone would be really obsessed with your game and would try to play more than sys.getrecursionlimit() games (here on my machine it's 3000), there would be a RuntimeError. Fortunately this can easily be fixed using a simple while loop: def main(): while True: # ... all of the actual game here ... if not play_again(): print("Goodbye!") break Recursion gone, welcome binge playing till your fingers bleed. Now that we have that sorted out, let's look at some of the details of your code. print_* Not much to critique here, but I would like to introduce you to textwrap.dedent. textwrap.dedent allows you to indent text blocks as you would do with code and then takes care to remove the additional leading whitespace. from textwrap import dedent def print_title(): print( dedent(r""" ______ __ __ ______ __ __ ______ /\ ___\ /\ \_\ \ /\ __ \ /\ "-./ \ /\ == \ \ \ \____ \ \ __ \ \ \ \/\ \ \ \ \-./\ \ \ \ _-/ \ \_____\ \ \_\ \_\ \ \_____\ \ \_\ \ \_\ \ \_\ \/_____/ \/_/\/_/ \/_____/ \/_/ \/_/ \/_/ """) ) But it basically boils down to personal taste which version you prefer. 
The same logic could be applied to print_instructions to only need a single call to print. who_goes_first who_goes_first could be simplified a little bit: def who_goes_first(): return random.choice(("computer", "human")) Or even be left out? Your current code does not use it. print_matrix Again, the code could be simplified a little bit, e.g. using str.join: def print_matrix(matrix): for row in matrix: print(" ".join(row)) From what you can actually see, there should be no difference from the previous output, but there is actually no trailing whitespace in this version. You could even go a little bit further than this using a list comprehension: def print_matrix(matrix): print("\n".join(" ".join(row) for row in matrix)) validate_user_input / get_human_move validate_user_input does a good job at validating the given user input, but keeps the results of the parsing to itself. That leads to duplicate code in get_human_move. With a little bit of rewriting that duplication can be removed: def validate_user_input(player_choice, board): try: row, col = player_choice.split() except ValueError: raise ValueError( "Bad input: The input should be exactly two numbers separated by a space." ) try: row = int(row) col = int(col) except ValueError: raise ValueError( "Input must be two numbers, however non-digit characters were received." ) if row < 0 or row > NUM_ROWS - 1: raise ValueError( f"The first number must be between 0 and {NUM_ROWS - 1} but {row} was passed." ) if col < 0 or col > NUM_COLS - 1: raise ValueError( f"The second number must be between 0 and {NUM_COLS - 1} but {col} was passed." ) if board[row][col] == " ": raise ValueError("That square has already been eaten!") return row, col def get_human_move(board): while True: player_choice = input("Enter the row and column of your choice separated by a space: ") try: row, col = validate_user_input(player_choice, board) break except ValueError as ex: print(ex) return row, col So what has happened here?
validate_user_input now does not print the error message itself, but raises an informative exception instead. If no reason to raise an exception has occurred, validate_user_input now returns row and col, so they do not need to be recomputed in get_human_move. get_human_move was adapted to that change and now tries to get the validated user input, prints the reason if that fails, and asks the user to try again. Of course, raising exceptions is just one way to do this. There are many other ways that lead to a similar structure. If you decide to implement these changes, parse_user_input is maybe a more appropriate name now, given its new capabilities. You might want to think about passing NUM_ROWS/NUM_COLS as parameters or determining them from board to cut down on global variables as well. Quick sidenote: 0-based indexing might be unfamiliar to people that have no programming background (or use MATLAB all the time ;-)), so maybe allow the user to enter 1-based indices and subtract 1 before validation. update_board Depending on your choice on the previous point, this is either the way to go, or you should use NUM_ROWS/NUM_COLS here too to be consistent. get_computer_move Maybe you should add some simple heuristics to make your computer a little bit smarter, e.g. it sometimes chooses to take the poison even if it still has alternatives, or even the possibility to win. Also (theoretically), it is possible that this random sampling algorithm never finds a valid solution ;-) To avoid that, generate a list of valid rows and columns, and pick a sample from these lists.
If you stick to the current approach, you can at least get rid of the "support variable" valid_move: def get_computer_move(board): while True: row = random.randint(0, NUM_ROWS - 1) col = random.randint(0, NUM_COLS - 1) if board[row][col] == EMPTY_SPOT: continue else: break return row, col main main already had some structural remarks, so let's look at its code a little bit: This board = [] for i in range(NUM_ROWS): row = [] for j in range(NUM_COLS): row.append("#") board.append(row) can be reduced to a single nested list comprehension: board = [["#" for _ in range(NUM_COLS)] for _ in range(NUM_ROWS)] Depending on your choice regarding the global variables, NUM_ROWS/NUM_COLS would need to be removed here too. Maybe even allow them to be set as command line arguments by the user? The rest of the code has some "magic values" (they are not so magic in your case), e.g. board[0][0] = "P", turn = "computer", and turn = "human" (there was also " " earlier to mark empty spots). The problem with those "magic values" is that your IDE has no chance to help you spot errors. You did write "p" instead of "P" and now the game does weird things? Too bad. You will have to find that bug yourself! The way to go about this would be to use global level constants, because this is what globals are actually good for, or an enum if you have several distinct values like human and computer. This is a possible way to do it (with a sketch of how main would look): FILLED_SPOT = "#" POISON_SPOT = "P" EMPTY_SPOT = " " class Actor(enum.Enum): HUMAN = "human" COMPUTER = "computer" # ... lot of code here ... def main(): while True: board = [[FILLED_SPOT for _ in range(NUM_COLS)] for _ in range(NUM_ROWS)] board[0][0] = POISON_SPOT turn = Actor.HUMAN # or: who_goes_first() # ... while game_is_playing: if turn == Actor.HUMAN: # Human turn # ... if board[row][col] == POISON_SPOT: # ... else: # ... turn = Actor.COMPUTER else: # Computer turn # ... if board[row][col] == POISON_SPOT: # ... else: # ...
turn = Actor.HUMAN if not play_again(): print("Goodbye!") break I chose this to showcase both variants. In your case you could also just use either of those; no need to have both of them in the game. All of this may sound harsh, but it really isn't meant that way! Your code is actually quite enjoyable to review. Your style is quite consistent, the names are good and the general structure is clear. Keep up the good work!
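The answer's closing suggestion for get_computer_move — build the list of valid squares and sample from it instead of rejection sampling — could look like this (a sketch; EMPTY_SPOT is the constant proposed above, and the board's size is taken from the board itself rather than from globals, as also suggested):

```python
import random

EMPTY_SPOT = " "

def get_computer_move(board):
    """Choose uniformly among the squares that have not been eaten yet."""
    valid_moves = [
        (row, col)
        for row, board_row in enumerate(board)
        for col, square in enumerate(board_row)
        if square != EMPTY_SPOT
    ]
    return random.choice(valid_moves)
```

Unlike the rejection-sampling loop, this always terminates in one pass, no matter how few squares remain on the board.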
{ "domain": "codereview.stackexchange", "id": 35254, "tags": "python, game" }
Data type conversion from "word" to Float64: Arduino and ROS
Question: Hello, recently I have been working on the Torxis motors which are used in my robotic arm. I am sending six different joint values to six motors from RViz and I was able to move the robotic arm. The problem arises when I am fetching the joint values from the motor feedback. The feedback we receive is 16-bit integer data that is stored in a variable with datatype "word". Now, I am trying to convert this "word" datatype into Float64 and publish it onto a ROS topic, but there is a lot of variation in the data that I am seeing on the topic. Actually, I was expecting a single value as I was giving stable joint values without any change, but I ended up receiving a range of values varying at a very high rate on the topic. The feedback function is given below. I need to convert the variable data to Float64. word feedBack(byte address) { word data = 0; byte low, high; Serial2.write(header); Serial2.write(address); Serial2.write(feed1); while(!Serial2.available()){} low = Serial2.read(); high = Serial2.read(); //Serial.println(data); //low = 0; high = 1; data = high; data = data << 8 | low; data = 4095-data; return data; } Originally posted by Apuroop on ROS Answers with karma: 3 on 2018-05-28 Post score: 0 Answer: There seem to be two different questions here: why the joint angles are varying so much when they should be stable, and how to convert a word (unsigned 16-bit integer) to a double-precision float on an Arduino. I can't help you with the first question without some more information, such as what device you're reading the serial data from to get the joint position. Regarding converting the integer value to a 64-bit float, this is either very easy or impossible depending on the type of Arduino you're using. The problem is that the standard Arduino boards simply don't support 64-bit float values at all; some of them, however, do support it, such as the Arduino Due and the Teensy 3.x family.
If you use a board which supports 64-bit floats then you can simply assign the word value to the float: double my64BitFloat = wordValue; If you have to use a board which doesn't support 64-bit floats then I can't see a direct solution to this. However, the rosserial documentation states that it will automatically convert between 32-bit and 64-bit floats when serializing messages, with a loss of precision. But since you're converting from a 16-bit integer this loss of precision will not affect you at all. So in short you can actually use a float (32-bit floating point) and assign that value to the message without any problems. Hope this helps. Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-05-29 This answer was ACCEPTED on the original site Post score: 1
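To double-check the arithmetic side of feedBack independently of the board, the byte reassembly can be modelled in plain Python (the 4095 offset comes from the question's code; the byte values in the test below are made up):

```python
def decode_feedback(low: int, high: int) -> float:
    """Mirror the Arduino logic on two received bytes.

    Combines the high and low bytes into a 16-bit word, inverts it
    against 4095 as the sketch does, and widens to a float, which is
    lossless for any 16-bit integer value.
    """
    data = (high << 8) | low   # data = high; data = data << 8 | low;
    data = 4095 - data         # data = 4095 - data;
    return float(data)
```

If the decoded values still jump around, the problem is upstream of this conversion — e.g. in how the two bytes are read off the serial port — not in the integer-to-float step.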
{ "domain": "robotics.stackexchange", "id": 30920, "tags": "ros-kinetic" }
Superposition while plotting the flow lines of an electric field using the "gradient method"?
Question: I've seen that if you wanted to plot the flow lines (field lines) of a vector field $F$, then a simple method is to find expressions for the x and y components of the field, $E_x$ and $E_y$ respectively, and then form an equation: $$\frac{dy}{dx}=\frac{E_y}{E_x}$$ Then integrate (solve the ODE). This would then give you the general form of the flow lines (a set of equations where you can vary the constant of integration to get all flow lines). My question is, given an electric field formed by two arbitrary point charges in space, if you wished to plot the flow lines of the field, could you find $\frac{E_y}{E_x}$ for charge 1, find the same expression for charge 2, and then add them together to find $\frac{dy}{dx}$, according to superposition, or would this not give the correct expression? If it wouldn't give the right flow lines, why not? Mathematically, if we use superscript notation for components due to charges 1 and 2: Combining the components, $$\frac{dy}{dx}=\frac{E_y^1+E_y^2}{E_x^1+E_x^2}$$ which is definitely correct. Adding the gradients of the fields individually, $$\frac{dy}{dx}=\frac{E_y^1}{E_x^1}+\frac{E_y^2}{E_x^2}$$ Is this equivalent? It seems it should not be just by considering general fractions, but for the specific case of electric fields due to separate charges, is it equivalent due to linearity? Answer: You have to be careful where the linearity comes from. The linearity comes from the Maxwell equations in vacuum. Therefore you can just add the two solutions of the equations around the point charges and get a valid electric field for all the vacuum around. Your transformation $E_y / E_x$ is not a linear transformation. So although the Maxwell equations are linear, you lose the linearity there.
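A quick numerical check (with made-up component values for the two fields at a single point) confirms that the two expressions are not equivalent:

```python
# Hypothetical components of the fields of charges 1 and 2 at one point.
ex1, ey1 = 1.0, 2.0
ex2, ey2 = 3.0, 1.0

# dy/dx from the superposed field -- the correct expression.
combined = (ey1 + ey2) / (ex1 + ex2)

# Sum of the individual gradients -- the proposed shortcut.
summed = ey1 / ex1 + ey2 / ex2

print(combined, summed)  # 0.75 vs. about 2.33 -- clearly different
```

The ratio of sums only equals the sum of ratios in special cases (e.g. when the two fields happen to point in the same direction at that point), so the shortcut fails in general.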
{ "domain": "physics.stackexchange", "id": 37858, "tags": "electrostatics, electric-fields, vector-fields" }
What is meant by the rank of the scoring function here?
Question: I've been reading the paper Reinforcement Knowledge Graph Reasoning for Explainable Recommendation (by Yikun Xian et al.) lately, and I don't understand a particular section: Specifically, the scoring function $f((r,e)|u)$ maps any edge $(r,e)$ to a real-valued score conditioned on user $u$. Then, the user-conditional pruned action space of state $s_t$, denoted by $A_t(u)$, is defined as: $A_t(u) = \{(r,e) \mid rank(f((r,e)|u)) \leq \alpha, (r,e) \in A_t\}$ where $\alpha$ is a predefined integer that upper bounds the size of the action space. Details about the scoring function can be found in the attached paper. What I don't understand is: What does rank mean here? Is the thing inside of it a matrix? It would be great if someone could explain the expression for the user-conditional pruned action space in greater detail. Answer: $\text{rank}(f((r,e)|u))$ in $A_t(u)$ means: compute the value of the scoring function $f$ for all pairs $(r,e)\in A_t$ conditioned on $u$, then sort them in descending order. The position of $f((r,e)|u)$ in this order is $\text{rank}(f((r,e)|u))$. Hence $\text{rank}(f((r,e)|u)) \leqslant \alpha$ means selecting the $\alpha$ highest-scoring pairs.
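Concretely, the pruning in the quoted definition amounts to scoring every candidate edge for the fixed user, sorting the scores in descending order, and keeping the top α. A minimal sketch (the scoring function below is a made-up stand-in for the paper's f):

```python
def pruned_action_space(actions, score, alpha):
    """Return the alpha highest-scoring (r, e) pairs from `actions`.

    `score` plays the role of f((r, e)|u) for a fixed user u, so the
    condition rank(f((r, e)|u)) <= alpha keeps exactly the top-alpha pairs.
    """
    ranked = sorted(actions, key=score, reverse=True)  # descending by score
    return ranked[:alpha]
```

So "rank" is just an ordinal position in the sorted score list, not a matrix rank.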
{ "domain": "ai.stackexchange", "id": 2035, "tags": "reinforcement-learning, papers, recommender-system, knowledge-graph, knowledge-graph-embeddings" }
Mosquito in a Bus
Question: My question goes like this: If a mosquito inside a bus wants to move from back to front, will there be any difference depending on whether the bus is stationary or moving (i.e. extra effort in the latter case)? Consider it a closed bus with the mosquito at a fixed point in the bus, i.e. the air column will be constant throughout the process, and the mosquito will always be in that air column and never touch any part of the bus. Here's another question which has a similar answer to the one above: Suppose the bus is at rest, and the mosquito is also at rest (with respect to both ground and bus), in the air column, not touching any part of the bus. If the bus starts moving, will the mosquito stay at rest with respect to the ground, i.e. go to the back, or will it stay at rest with respect to the bus? For both questions, consider that when the bus starts, it instantaneously starts moving at constant velocity (no acceleration). Thanks in advance to everyone who tries to answer. Answer: Small insects and animals have to deal with the viscosity of the air. The motion of the bus takes the air with it; the air does not accumulate at the back either during acceleration or at steady velocity. The effect will be small and occur only during acceleration. The air has the steady velocity of the vehicle, and anything suspended in it by buoyancy will have the same velocity. It is easier to see with water, which will slosh initially backwards during acceleration, and at steady velocity will have the velocity of the container, which is in contact with the vehicle. We feel the effect of acceleration in a car by being pushed back in our seats at start-up and falling forward at slow-down, and the mosquito will feel a bit of this, cushioned by the viscosity of the air. At steady velocities it is as if we are sitting at the table at home, and the mosquito will also be motionless with respect to the vehicle and us.
{ "domain": "physics.stackexchange", "id": 11387, "tags": "newtonian-mechanics, relative-motion" }
Confusion related to effective population size
Question: I am making a demographic model for the African American population in which I say 80% of ancestry comes from an African population and 20% comes from a European population. I am using the time/population time from Gazave et al. (2014). The authors do not mention the effective population size (Ne) of the African American population after admixture. Is there a publication someone can point me to that mentions the effective population size of African Americans (AA) or even European Americans (EA)? I know there is not much difference between the two; it's just that I need a figure to have my simulation up and running. Note: Ne is an estimate of the population size based on genetic diversity and migration, and is associated with the Fst statistic. Answer: That is a fantastic question and in my opinion a really important question in context. The Ne of Africa will be much bigger than that of the European population; however, I'm less sure about the West African population which gave rise to the African American population. I suspect it is still large because of the "out of Africa" theory. My advice is to start by taking a complete look at the entire subject, from Tenesa et al. (2007). Ne is a really important parameter in any population study and extremely useful. I have a feeling this could be complex, but I cannot hypothesize any further because I don't do human genetics (except for immunology).
{ "domain": "bioinformatics.stackexchange", "id": 992, "tags": "phylogeny, phylogenetics" }
How to communicate with the arm_navigation node of ROS Electric
Question: I have a problem and do not know how to solve it. I am using ROS, with its new arm_navigation, launched with roslaunch, but I do not know how to move the robot. I try to run this code to communicate with move_arm, but I cannot move anything: ros::init (argc, argv, "move_arm_pose_goal_test"); ros::NodeHandle nh; actionlib::SimpleActionClient<arm_navigation_msgs::MoveArmAction> move_arm("move_arm",true); move_arm.waitForServer(); ROS_INFO("Connected to server"); arm_navigation_msgs::MoveArmGoal goalA; goalA.planner_service_name="/environment_server/set_planning_scene_diff"; goalA.planning_scene_diff.robot_state.joint_state.name.resize(7); goalA.planning_scene_diff.robot_state.joint_state.name[0]="wam1"; goalA.planning_scene_diff.robot_state.joint_state.name[1]="wam2"; goalA.planning_scene_diff.robot_state.joint_state.name[2]="wam3"; goalA.planning_scene_diff.robot_state.joint_state.name[3]="wam4"; goalA.planning_scene_diff.robot_state.joint_state.name[4]="wam5"; goalA.planning_scene_diff.robot_state.joint_state.name[5]="wam6"; goalA.planning_scene_diff.robot_state.joint_state.name[6]="wam7"; goalA.planning_scene_diff.robot_state.joint_state.position.resize(7); goalA.planning_scene_diff.robot_state.joint_state.position[0]=0.0; goalA.planning_scene_diff.robot_state.joint_state.position[1]=1.0; goalA.planning_scene_diff.robot_state.joint_state.position[2]=0.5; goalA.planning_scene_diff.robot_state.joint_state.position[3]=1.0; goalA.planning_scene_diff.robot_state.joint_state.position[4]=0.5; goalA.planning_scene_diff.robot_state.joint_state.position[5]=0.0; goalA.planning_scene_diff.robot_state.joint_state.position[6]=0.5; if (nh.ok()) { bool finished_within_time = false; move_arm.sendGoal(goalA); finished_within_time = move_arm.waitForResult(ros::Duration(200.0)); if (!finished_within_time) { move_arm.cancelGoal(); ROS_INFO("Timed out achieving goal A"); } else { actionlib::SimpleClientGoalState state = move_arm.getState(); bool success = (state ==
actionlib::SimpleClientGoalState::SUCCEEDED); if(success) ROS_INFO("Action finished: %s",state.toString().c_str()); else ROS_INFO("Action failed: %s",state.toString().c_str()); } } This seems to be missing from the Electric tutorials; does anyone know how to do it? Thanks for all! Originally posted by Ivan Rojas Jofre on ROS Answers with karma: 70 on 2011-10-17 Post score: 1 Answer: We are in the process of updating these tutorials. In the meantime, a good introduction to the arm navigation stacks in Electric can be found in an IROS tutorial: http://www.ros.org/wiki/arm_navigation/Tutorials/IROS2011 Originally posted by Sachin Chitta with karma: 1304 on 2011-11-04 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Ivan Rojas Jofre on 2011-11-11: Thanks Sachin for your answer! Finally, I can communicate with the arm_navigation node after a long time.
{ "domain": "robotics.stackexchange", "id": 6994, "tags": "ros, arm-navigation, move-arm, ros-electric" }
Signal-to-noise ratio of the difference between two signals
Question: Something tells me this must be a fairly simple question, but I have somehow been unable to find an answer to it. In short: I need to calculate the difference between two signals, A and B, each one of them with its own signal-to-noise ratio. What is the SNR of the resulting value? As for the context: we are doing differential photometry, generating the light curve of some celestial bodies by comparing their instrumental magnitudes (a logarithmic measure of their brightness) to those of others. These magnitudes are derived from the flux of each star, which is also used in order to estimate the signal-to-noise ratio, given by the following formula [Astronomical Photometry, A. Henden, 1982]: $$ SNR = \frac{(star\;counts - sky\;counts)}{\sqrt{star\;counts}} $$ Note that, if the background were negligible, the SNR could be obtained as $star\;counts / {\sqrt{star\;counts}}$, as for photon arrivals the statistical noise fluctuation is represented by the Poisson distribution, according to Henden. Also, counts are a measurement of the flux, as we are using a CCD camera in order to conduct photometry. SNR to error (in magnitudes) An S/N of 100 means that the noise causes the counts to fluctuate about the mean by an amount equal to one hundredth of the mean value. To compute this error in magnitudes, we compare the mean number of counts, $c$, to the maximum or minimum values induced by noise, that is $$\Delta m = -2.5 \log{( \frac{c\pm\frac{c}{100}}{c})}$$ $$\Delta m = -2.5 \log{(1\pm\frac{1}{100})}$$ $$= \pm 0.01 ~magnitude$$ In other words, an S/N of 100 implies an observational error of 0.01 magnitude.
[From Henden's Astronomical Photometry, pages 77-78] Answer: Treating the signals as time series: If the first signal $S_1$ has a noise component $N_1$ added to it, then the noisy signal is $S_1+N_1$; similarly the second signal is $S_2+N_2$, so the difference signal would be $(S_1+N_1)-(S_2+N_2)$ and its signal-to-noise ratio would be $\langle(S_1-S_2)^2\rangle\over\langle(N_1-N_2)^2\rangle$ If the signals are uncorrelated, $\langle(S_1-S_2)^2\rangle$ is just $\langle S_1^2\rangle+\langle S_2^2\rangle$. If the signals are correlated, you will have to estimate the covariance $\langle S_1S_2\rangle$ to compute $\langle S_1^2\rangle+\langle S_2^2\rangle-2\langle S_1S_2\rangle$. Ditto for the noise.
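As a numerical sanity check of the ratio above, $\langle(S_1-S_2)^2\rangle / \langle(N_1-N_2)^2\rangle$ can be estimated directly from sampled time series (the sample values in the test are arbitrary):

```python
def snr_of_difference(s1, s2, n1, n2):
    """Empirical SNR of (s1 + n1) - (s2 + n2), with lists as time series."""
    def mean_square(xs):
        # <x^2> estimated as the average of the squared samples
        return sum(x * x for x in xs) / len(xs)

    signal_power = mean_square([a - b for a, b in zip(s1, s2)])
    noise_power = mean_square([a - b for a, b in zip(n1, n2)])
    return signal_power / noise_power
```

With real photometric data, the noise series is of course not observed directly, so its power would be estimated from the Poisson model in the question rather than sampled.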
{ "domain": "physics.stackexchange", "id": 2241, "tags": "astrophysics, photons, signal-processing, noise" }
How to classify text based on more than one column
Question: I have passages of text to classify by topic. I am using scikit-learn, e.g. LinearSVC, but I am open to other options. Currently, I use only the text of each passage (column labeled "En" below). But I feel that using the title of each passage (column labeled "Ref" below) would help.

     Ref              En                                                 Topics
3    Gittin           modifier meaning board referred unique name mo... dinei-haget
11   Even HaEzer      hand katafres explanation hand slanted obvious... dinei-haget
67   Rest on Holiday  similar two baskets untithed fruit front first... laws-of-holidays
118  Beitzah          mishna states one ate food prepared festival e... laws-of-holidays
131  Sabbath          one may mix water salt oil dip one bread put c... rabbinically-forbidden-activities-on-shabbat

One excellent idea, which @Erwan mentioned in the comments, is to simply include the title together with the text of the passage. But I have two issues with that: How do I do that? And don't I want to make the title (or "Ref") count for more weight than the other words in the passage? Answer: In order to make the model take into account the two columns and distinguish whether a word is from the title or the article, you can generate the two sets of features independently and then concatenate the two vectors of features. This way the learning algorithm can assign different weights to a particular word depending on whether it comes from the title or the article. However, there is a disadvantage in doing that: depending on how many instances, how many words, etc. there are, increasing the number of features might make the task harder for the model (for instance, cause overfitting). It's probably worth testing the two options: (1) concatenate the text from the two columns, then generate the features, i.e. no distinction between the columns but a simpler job for the classifier; (2) generate the features independently, then concatenate the two sets of features (as above).
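The "generate features independently, then concatenate" option, with extra weight on the title, might be sketched like this. The toy count featurizer below stands in for real TF-IDF vectors, and the weight of 2.0 is an arbitrary value to tune; with scikit-learn the same idea can be expressed with a ColumnTransformer over the "Ref" and "En" columns and its transformer_weights argument:

```python
def bag_of_words(text, vocabulary):
    """Toy count featurizer over a fixed vocabulary (stand-in for TF-IDF)."""
    words = text.lower().split()
    return [words.count(term) for term in vocabulary]

def featurize(title, body, title_vocab, body_vocab, title_weight=2.0):
    """Featurize the two columns independently, scale the title block,
    and concatenate, so title words and body words get separate features."""
    title_features = [title_weight * v for v in bag_of_words(title, title_vocab)]
    body_features = bag_of_words(body, body_vocab)
    return title_features + body_features
```

Because the two blocks occupy separate positions in the final vector, a linear classifier like LinearSVC can learn different weights for the same word depending on which column it came from, and the scaling factor lets you bias it toward the title up front.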
{ "domain": "datascience.stackexchange", "id": 7724, "tags": "classification, multilabel-classification, text" }
Why does frictional force cause a car to move? Also, is friction a reaction force?
Question: My teacher told me that this was because frictional force resists relative motion between the engine and the car. However, in this case it seems to allow relative motion between the ground and the car. Answer: [...] it seems to allow relative motion between the ground and the car. While there is relative motion between car and ground, there is no relative motion between wheel and ground. And that's what matters, because that's where the friction is. Think of walking. Your body moves relative to the ground. But at each step, your foot on the ground is stationary. Your foot pushes on the ground backwards. Static friction pushes forwards to keep your foot from sliding. On a car's wheel, the same happens, just at each new point that continuously comes in contact with the ground. For the short moment that it is in contact, that point is stationary and there is no sliding. That point pushes backwards, and so static friction pushes forwards to prevent sliding (to avoid wheel spin). Also, is friction a reaction force? You can think of static friction as a reaction force, if you will. It only exists because your leg - or the car's wheel - applies a force backwards on the ground.
{ "domain": "physics.stackexchange", "id": 62152, "tags": "newtonian-mechanics, forces, friction, free-body-diagram" }
Increment object properties based on a particular subclass of an abstract class
Question: I have a Composition object that contains 4 ArrayList objects, each of which contains objects of a particular subclass of my Abstract class Role. The 4 subclasses are Bruiser, Healer, Tank, and DamageDealer. The point of the program is to analyze two opposing Composition objects as they are being built (players fill their Composition object one Role pick at a time), and based on each of their properties, recommend to the player a particular Hero for a particular Role that would suit the player's Composition. The values of the Composition object's properties are incremented accordingly each time a Role subclass gets added to one of its ArrayLists, like so (note: "this" corresponds to the Composition object): public class Composition { private ArrayList<Tank> tanks; private ArrayList<DamageDealer> damageDealers; private ArrayList<Healer> healers; private ArrayList<Bruiser> bruisers; private boolean doubleSupport; private boolean doubleWarrior; private boolean heavyDamage; private boolean lockdownProtection; private int mobility; private int waveclear; private int nuMelee; private int nuRange; private int lockdown; private int burstMitigation; private int sustainedMitigation; private int damage; private int burstDamageRating; private int sustainedDamageRating; private int frontlineRating; private int defensiveRating; private int offensiveRating; public Composition() { this.tanks = new ArrayList<Tank>(); this.damageDealers = new ArrayList<DamageDealer>(); this.healers = new ArrayList<Healer>(); this.bruisers = new ArrayList<Bruiser>(); } private void addMutualContribution(Role r) { this.mobility += r.getMobility(); this.defensiveRating += r.getPeeling(); this.waveclear += r.getWaveclear(); this.burstMitigation += r.getBurstMitigaion(); this.sustainedMitigation += r.getSustainedMitigation(); } public void addBruiserContribution(Bruiser b) { addMutualContribution(b); this.offensiveRating += b.getPressure(); this.damage += b.getDamage(); this.frontlineRating += 
b.getFrontlineRating(); addBruiser(b); } public void addDDContribution(DamageDealer d) { addMutualContribution(d); this.burstDamageRating += d.getBurstDamageRating(); this.sustainedDamageRating += d.getSustainedDamageRating(); this.damage += d.getDamageOutput(); this.offensiveRating += d.getAggression(); addDamageDealer(d); } public void addTankContribution(Tank t) { addMutualContribution(t); this.defensiveRating += t.getAntiMeleeEffectiveness(); this.offensiveRating += t.getEngage(); this.frontlineRating += t.getFrontlineRating(); addTank(t); } public void addHealerContribution(Healer h) { addMutualContribution(h); this.burstMitigation += h.getBurstHealing(); this.sustainedMitigation += h.getSustainedHealing(); if (h.hasLockdownProtection()) { this.lockdownProtection = true; } addHealer(h); } private void addTank(Tank t) { tanks.add(t); if (tanks.size() + bruisers.size() > 1) { setDoubleWarrior(true); } } private void addHealer(Healer h) { healers.add(h); if (healers.size() == 2) { setDoubleSupport(true); } } private void addDamageDealer(DamageDealer d) { damageDealers.add(d); if (damageDealers.size() + bruisers.size() > 2) { setHeavyDamage(true); } } private void addBruiser(Bruiser b) { bruisers.add(b); if(bruisers.size() + damageDealers.size() > 2) { setHeavyDamage(true); } } private void setDoubleWarrior(boolean b) { doubleWarrior = b; } private void setDoubleSupport(boolean b) { doubleSupport = b; } private void setHeavyDamage(boolean b) { heavyDamage = b; } } To me, this implementation feels backwards. Ultimately I should only have the one public method to add a Role subclass to the composition, then the Composition should privately adjust its own properties based on which particular subclass is being added, something like this: public void addRole(CustomEnum roleType, Role r) { // increment properties inherent to all roles like mobility, waveclear, etc. // ... 
switch (roleType) { case BRUISER: addBruiserContribution(r); break; case TANK: addTankContribution(r); break; case DAMAGE_DEALER: addDDContribution(r); break; case HEALER: addHealerContribution(r); break; default: throw new InvalidArgumentException("Wrong roletype provided!"); } } This doesn't work without me having to hardcast the Role abstract class into a particular subclass before passing it to the appropriate add__Contribution() method, which is a big red-flag. Thus it feels like I missed an opportunity for cleaner polymorphism, but I can't figure out where. How could I make my code well-formed and polymorphic to suit my needs? Role Abstract class: public abstract class Role { protected String name; protected int mobility; protected int waveclear; protected int peeling; protected int burstMitigation; protected int sustainedMitigation; protected boolean lockdown; protected boolean aaControl; protected AttackRange range; public String getName() { return name; } public int getMobility() { return mobility; } public int getWaveclear() { return waveclear; } public int getPeeling() { return peeling; } public int getBurstMitigaion() { return burstMitigation; } public int getSustainedMitigation() { return sustainedMitigation; } public boolean hasLockdown() { return lockdown; } public boolean hasAAControl() { return aaControl; } public AttackRange getRange() { return range; } } Bruiser Subclass: public class Bruiser extends Role { private int pressure; private int damage; private int frontlineRating; protected Bruiser() { } public int getPressure() { return pressure; } public int getDamage() { return damage; } public int getFrontlineRating() { return frontlineRating; } } Healer subclass: public class Healer extends Role { private int burstHealing; private int sustainedHealing; private boolean lockdownProtection; protected Healer() { } public int getBurstHealing() { return burstHealing; } public int getSustainedHealing() { return sustainedHealing; } public boolean 
hasLockdownProtection() { return lockdownProtection; } } Tank subclass: public class Tank extends Role { private int antiMeleeEffectiveness; private int frontlineRating; private int engage; protected Tank() { } public int getAntiMeleeEffectiveness() { return antiMeleeEffectiveness; } public int getFrontlineRating() { return frontlineRating; } public int getEngage() { return engage; } } Damage Dealer subclass: public class DamageDealer extends Role { private int burstDamageRating; private int sustainedDamageRating; private int aggression; private int damageOutput; protected DamageDealer() { } public int getBurstDamageRating() { return burstDamageRating; } public int getSustainedDamageRating() { return sustainedDamageRating; } public int getAggression() { return aggression; } public int getDamageOutput() { return damageOutput; } } Composition Class (minus property "getters"): Answer: Commenting on the snippets you published and your question: Thus it feels like I missed an opportunity for cleaner polymorphism, but I can't figure out where. The problem I see is in the mapping from Role subclass properties that aren't present in all Roles to Composition properties. An example is this.offensiveRating += b.getPressure();. I'd recommend including an abstract getPressure() in the Role abstract class (maybe renamed to getOffensiveRating() to better match its intent). For all subclasses that don't have a "pressure", implement the method returning 0. Then you don't need any case distinctions; just one addContribution(Role r) will do the job. An alternative might be to have a non-abstract Role.getPressure() method returning 0, to be overridden only by Roles that have a pressure.
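A compilable sketch of that suggestion (the class names follow the question, but the methods and numeric values here are invented for illustration): every contribution getter lives on Role with a default of 0, so Composition needs a single addContribution(Role) and no casts or switches.

```java
import java.util.ArrayList;
import java.util.List;

// Role exposes every contribution with a default of 0; subclasses override
// only what they actually have (e.g. Bruiser's "pressure" becomes its
// offensive rating). Names and numbers are illustrative, not from the OP.
abstract class Role {
    int getMobility()        { return 0; }
    int getOffensiveRating() { return 0; }
    int getFrontlineRating() { return 0; }
}

class Bruiser extends Role {
    @Override int getOffensiveRating() { return 5; } // its "pressure"
    @Override int getFrontlineRating() { return 3; }
}

class Healer extends Role {
    @Override int getMobility() { return 2; } // contributes nothing offensive
}

class Composition {
    private final List<Role> roles = new ArrayList<>();
    int mobility, offensiveRating, frontlineRating;

    // One public entry point for every Role subtype -- no switch, no casts.
    void addContribution(Role r) {
        mobility        += r.getMobility();
        offensiveRating += r.getOffensiveRating();
        frontlineRating += r.getFrontlineRating();
        roles.add(r);
    }
}

public class Main {
    public static void main(String[] args) {
        Composition c = new Composition();
        c.addContribution(new Bruiser());
        c.addContribution(new Healer());
        System.out.println(c.mobility + " " + c.offensiveRating + " " + c.frontlineRating);
    }
}
```

The role-specific bookkeeping (doubleSupport, heavyDamage, etc.) can then live in addContribution as counts over the roles list, so the Composition never needs to know the concrete subtype.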
{ "domain": "codereview.stackexchange", "id": 27187, "tags": "java, object-oriented, polymorphism" }
Lee's algorithm for synthesis of ranking functions in size-change termination proofs
Question: Consider the following statement that appeared in Lee's Ranking functions for size-change termination (SCT). Let $G$ satisfy the SCT condition. There is an effective procedure to construct a $G$-ranking function expressed using only min, max and lexicographic tuples of program parameters and constants. I'm interested mostly in the computational side of it. Has this been implemented before? If so, in which systems? In fact, Lee's preliminary algorithm was made more practical by subsequent revisions in joint work with Ben-Amram and Codish. There remains the question of whether their claimed complexity is acceptable in current verifiers and whether it has already been tested in practice. Answer: The complexity is acceptable in current verifiers, and has been implemented in at least the AProVE termination analysis tool for term rewrite systems. They describe their implementation in Lazy Abstraction for Size-Change Termination, by Codish, Fuhs, Giesl & Schneider-Kamp, basing their implementation on the papers you refer to. Performance is good both from an expressiveness point of view and a run-time perspective. The additional (clever!) trick is to pre-process the search for an appropriate abstraction and ranking function into a SAT problem (along with other search procedures for termination analysis) and have a SAT solver try to simultaneously find an instance. This trick is common in the termination analysis community.
{ "domain": "cstheory.stackexchange", "id": 4475, "tags": "program-verification, formal-methods" }
Reimer–Tiemann reaction: only one ortho product
Question: Why can we not have two Reimer–Tiemann substitutions on the same phenol? Since the −CHO group is electron-withdrawing, it won't disturb the electron density at its meta position, which is the phenol's ortho position. Answer: There is some literature evidence showing that two Reimer–Tiemann substitutions on the same phenol ring are possible. For example, salicylic acid is a Reimer–Tiemann product from phenol and carbon tetrachloride. Yet it underwent another Reimer–Tiemann reaction to give 5-formyl-2-hydroxybenzoic acid in $17\%$ yield (para-substitution) but no 3-formyl-2-hydroxybenzoic acid (ortho-substitution) when chloroform was replaced by trichloroacetic acid (J. Chem. Soc., 1933, 496-500). For more examples: Read, Chem. Rev., 1960, 60(2), 169–184. J. Chem. Soc., 1933, 496-500.
{ "domain": "chemistry.stackexchange", "id": 10112, "tags": "organic-chemistry, aromatic-compounds, phenols, carbene" }
State of a system after $L_z^2$ has been measured
Question: A system is initially in a state given by $$|{\psi_i}\rangle = \begin{pmatrix}\frac{1}{2}\\ \frac{1}{2}\\ \frac{1}{\sqrt{2}}\end{pmatrix}$$ corresponding to the angular momentum $l=1$ in the $L_z$ basis of states with $m=+1,0,-1$. If $L_z^2$ is measured in this state yielding a result 1, what is the state after the measurement? First, I don’t really understand the phrase “angular momentum $l=1$ in the $L_z$ basis of states with $m=+1,0,-1$”. Does $l=1$ not necessarily imply $m=+1,0,-1$? Second, I think since the measurement leaves the system in an eigenstate of $L_z$, the state after the measurement will be $$|{\psi_f}\rangle = \begin{pmatrix}0\\0\\1\end{pmatrix}$$ Is it correct? Or do I need to use the ladder operators? Answer: "In the basis $m = +1, 0, -1$" means that each row of your vector corresponds to one of these $m$. Column vectors like that make no sense unless you specify a basis; it's like if I gave you a vector in Euclidean space $(1,2,3)$ without telling you whether the basis is Cartesian $(\mathbf{i},\mathbf{j},\mathbf{k})$ or spherical $(\mathbf{r},\mathbf{\phi},\mathbf{\theta})$. In the same way as a vector $(1,2,3)$ in the Cartesian basis can be written as $1\mathbf{i} + 2\mathbf{j} + 3\mathbf{k}$, your $|{\psi_i}\rangle = \begin{pmatrix}\frac{1}{2}\\ \frac{1}{2}\\ \frac{1}{\sqrt{2}}\end{pmatrix}$ can be written as $$ \frac{1}{2}|m=1\rangle + \frac{1}{2}|m=0\rangle + \frac{1}{\sqrt{2}}|m=-1\rangle.$$ $l=1$ means that the total angular momentum squared is $l(l+1)\hbar^2 = 2\hbar^2$. Unless you make another measurement to break the $m$ degeneracy, you have to assume a superposition of the three possible states (in your case $m=-1$ is twice as likely as either of the other two). Given $l$, $m$ ranges from $-l$ to $+l$. Hence the three numbers. If $L_z$ gives you one, it means that $m=1$ and the state after the measurement is $$ |{\psi_f}\rangle = \begin{pmatrix}1\\0\\0\end{pmatrix}, $$ according to my convention above.
However, you are saying that a measurement of $L_z^2$ gives you one, which means that you could have either $m=-1$ or $m=1$. So in that case you throw away the $m=0$ component and renormalise your state: $$|{\psi_f}\rangle = N \begin{pmatrix}\frac{1}{2}\\0\\ \frac{1}{\sqrt{2}}\end{pmatrix},$$ which gives $$ |{\psi_f}\rangle = \sqrt{\frac{4}{3}} \begin{pmatrix}\frac{1}{2}\\0\\ \frac{1}{\sqrt{2}}\end{pmatrix}. $$
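Numerically, the project-and-renormalise step can be checked in a few lines (a sketch; the basis ordering $m=+1,0,-1$ follows the convention above):

```python
import numpy as np

# Initial state in the L_z basis, rows ordered m = +1, 0, -1.
psi = np.array([0.5, 0.5, 1 / np.sqrt(2)])

# Measuring L_z^2 = 1 is compatible with m = +1 and m = -1 only,
# so project out the m = 0 component and renormalise.
projected = psi.copy()
projected[1] = 0.0
N = 1 / np.linalg.norm(projected)
psi_after = N * projected

print(np.isclose(N, np.sqrt(4 / 3)))                      # N matches sqrt(4/3)
print(np.isclose(np.sum(np.abs(psi_after) ** 2), 1.0))    # properly normalised
```

After the measurement the outcome $m=-1$ is twice as probable as $m=+1$ (probabilities $2/3$ and $1/3$).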
{ "domain": "physics.stackexchange", "id": 38045, "tags": "quantum-mechanics, homework-and-exercises, angular-momentum, hilbert-space, measurement-problem" }
Is there an equation for the strong nuclear force?
Question: The equation describing the force due to gravity is $$F = G \frac{m_1 m_2}{r^2}.$$ Similarly, the equation for the electrostatic force is $$F = k \frac{q_1 q_2}{r^2}.$$ Is there a similar equation that describes the force due to the strong nuclear force? What are the equivalents of masses/charges if there is? Is it still inverse square or something more complicated? Answer: From the study of the spectrum of quarkonium (a bound system of a quark and an antiquark) and the comparison with positronium, one finds as the potential for the strong force $$V(r) = - \dfrac{4}{3} \dfrac{\alpha_s(r) \hbar c}{r} + kr$$ where the constant $k$ determines the field energy per unit length and is called the string tension. This resembles the Coulomb law for short distances, while for large distances the $k\,r$ term dominates (confinement). It is important to note that the coupling $\alpha_s$ also depends on the distance between the quarks. We must also keep in mind that this equation gives us the strong potential, not the force. To get the magnitude of the strong interaction force, one must differentiate it with respect to distance. This formula is valid and in agreement with theoretical predictions only for the quarkonium system and its typical energies and distances. For example, charmonium: $r \approx 0.4 \ {\rm fm}$. So it is not as universal as, e.g., the gravity law in Newtonian gravity.
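As a rough numerical illustration (the parameter values below are ballpark figures I am assuming, not part of the answer): with $\alpha_s \approx 0.3$, $\hbar c \approx 197.3\ \mathrm{MeV\,fm}$ and $k \approx 1\ \mathrm{GeV/fm}$, the Coulomb-like term dominates at short range and the confining term at long range.

```python
# Cornell-type potential V(r) = -(4/3) * alpha_s * hbar_c / r + k * r
# Assumed ballpark constants, in MeV and fm (illustrative only):
ALPHA_S = 0.3      # treated as constant here; in reality it runs with r
HBAR_C = 197.3     # MeV * fm
K = 1000.0         # string tension, MeV / fm

def cornell_potential(r_fm):
    """Potential energy in MeV at quark separation r_fm (femtometres)."""
    return -(4.0 / 3.0) * ALPHA_S * HBAR_C / r_fm + K * r_fm

print(cornell_potential(0.1) < 0)   # short range: Coulomb-like term wins
print(cornell_potential(2.0) > 0)   # long range: linear confinement wins
```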
{ "domain": "physics.stackexchange", "id": 94183, "tags": "forces, particle-physics, quantum-chromodynamics, interactions, strong-force" }
Could a non-unitary time evolution violate the no-cloning theorem?
Question: It is written in some places that the unitarity of time evolution is what prevents quantum cloning. However, consider the typical definition of a cloning operator $A$. For all $\left|\psi\right>$ and a standard state $\left|0\right>$, $$ A[\left|\psi\right>\otimes \left|0\right>] = \left|\psi\right>\otimes \left|\psi\right> $$ Without using the unitarity of $A$, I can follow the proof in these notes to demonstrate no-cloning. With a superposition state $\left|\chi\right> = a\left|\psi\right>+b\left|\phi\right>$, $A$ can be applied to find, $$ A[\left|\chi\right>\otimes \left|0\right>] = a(\left|\psi\right>\otimes \left|\psi\right>)+b(\left|\phi\right>\otimes\left|\phi\right>) $$ $A$ could also be applied to find, $$ A[\left|\chi\right>\otimes \left|0\right>] =\left|\chi\right>\otimes\left|\chi\right> = (a\left|\psi\right>+b\left|\phi\right>)\otimes(a\left|\psi\right>+b\left|\phi\right>) $$ These expressions are not equal, so there is a contradiction. This appears to be a proof that cloning is impossible for any linear time evolution, even in an alternate universe where quantum time evolution does not have to be unitary. Is this reasoning correct? Answer: In fact, this is one of the two popular kinds of proofs by contradiction of the no-cloning theorem, which claims the nonexistence of a quantum operation that can duplicate an arbitrary unknown quantum state. It may be helpful to clearly give both kinds of proofs here: (i) proof by contradiction based on linearity: just as you have done, assume that there is a quantum operation $U_{\mathrm{clone}}$ which can duplicate an arbitrary unknown quantum state.
Then for an arbitrary state $|\Psi\rangle=\alpha|\phi_1\rangle+\beta|\phi_2\rangle$, $$U_{\mathrm{clone}}|\Psi\rangle|0\rangle=|\Psi\rangle|\Psi\rangle.$$ However, by the assumption of linearity of this quantum cloning operation, we have $$U_{\mathrm{clone}}|\Psi\rangle|0\rangle=\alpha U_{\mathrm{clone}}|\phi_1\rangle|0\rangle+\beta U_{\mathrm{clone}}|\phi_2\rangle|0\rangle =\alpha |\phi_1\rangle|\phi_1\rangle+\beta |\phi_2\rangle|\phi_2\rangle.$$ We thus arrive at a contradiction. Here I would like to point out that this proof is actually the initial proof of the no-cloning theorem used by Wootters and Zurek in their 1982 paper, and also by Dieks in his 1982 paper, where he also indicated that the linearity of quantum mechanics can be used to prove the impossibility of superluminal communication. (ii) proof by contradiction based on the unitarity of the quantum operation: we first assume that the quantum cloning operation $U_{\mathrm{clone}}$ is unitary; then for arbitrary states $|\Psi\rangle$ and $|\Phi\rangle$, $U_{\mathrm{clone}}|\Psi\rangle|0\rangle=|\Psi\rangle|\Psi\rangle$ and $U_{\mathrm{clone}}|\Phi\rangle|0\rangle=|\Phi\rangle|\Phi\rangle$. Taking the inner product of both sides of the above two equations and using the unitarity assumption of $U_{\mathrm{clone}}$, we arrive at $\langle \Psi|\Phi\rangle=(\langle\Psi|\Phi\rangle)^2$, which is the case only when $|\Psi\rangle$ and $|\Phi\rangle$ are orthogonal. This proof was first proposed by Yuen in his 1986 paper. It is now more popular in quantum information books, e.g., Peres' book Quantum Theory: Concepts and Methods, and Nielsen and Chuang's book Quantum Computation and Quantum Information.
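The linearity contradiction in proof (i) can be made concrete with a small numerical sketch: fix a linear map that clones the two basis states of a qubit, then watch it fail on a superposition.

```python
import numpy as np

# Basis of the two-qubit space: index 2*a + b for |a>|b>.
e = np.eye(4)

# Linear "cloner" fixed only on the basis states:
#   A |0>|0> = |0>|0>,  A |1>|0> = |1>|1>.  (Columns 1 and 3 are unused.)
A = np.zeros((4, 4))
A[:, 0] = e[0]   # |00> -> |00>
A[:, 2] = e[3]   # |10> -> |11>

chi = np.array([1.0, 1.0]) / np.sqrt(2)      # (|0> + |1>)/sqrt(2)
linear_result = A @ np.kron(chi, [1.0, 0.0]) # what linearity forces: a Bell state
cloned_target = np.kron(chi, chi)            # what cloning would require

# Linearity and cloning are incompatible for superpositions.
print(np.allclose(linear_result, cloned_target))
```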
{ "domain": "physics.stackexchange", "id": 56262, "tags": "quantum-mechanics, quantum-information" }
Treating the measured frequency response of a system as a DFT
Question: I have frequency response data of an analog system that I would like to turn into a digital filter in Matlab. Is it kosher to think of my frequency samples as constituting a DFT of the system impulse response, even though there was never a time-domain signal in the first place? If I do an ifft on the frequency response data, can this be treated as the impulse response of the system? If so, what is the sample rate? Do I assume it is twice the highest measured frequency? Is there a way to build a digital filter from impulse response data without knowing the sample rate? Answer: So, after worrying about whether the frequency response is "low pass in nature" (I don't think it needs to be an LPF; it could be high pass in nature, but all of the interesting features of your frequency response should be below Nyquist and maybe well below Nyquist), you might also wonder whether you have the necessary phase information in your frequency response. If it is magnitude-only data, then you might have to guess at the phase. If you iFFT the magnitude-only data (mirroring it about $f=0$), you will come up with an impulse response that is also mirrored about $t=0$. That's fine: give it a delay of half of the length of the impulse response, and you have a linear-phase FIR. You get linear phase because you started knowing nothing of the phase. You can also take that FIR, factor it with some nasty factorization program (maybe MATLAB has that), reflect (using the reciprocal function) all zeros outside the unit circle to inside the unit circle, then re-multiply all of the factors, and you will have a minimum-phase FIR with the same magnitude frequency response.
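A sketch of the magnitude-only recipe in NumPy (the response H below is made up for illustration): mirror the magnitude so the spectrum is real and even, iFFT, then shift the response, which is mirrored about $t=0$, by half its length to get a causal linear-phase FIR.

```python
import numpy as np

N = 65                                      # odd length keeps the symmetry exact
f = np.fft.fftfreq(N)                       # bins, negative half included
H = 1.0 / (1.0 + (np.abs(f) / 0.1) ** 2)    # made-up low-pass magnitude, even in f

h = np.fft.ifft(H).real   # real, even spectrum -> impulse response mirrored about t=0
h = np.fft.fftshift(h)    # delay by (N-1)/2 samples -> causal FIR

# Linear phase shows up as taps symmetric about the centre.
print(np.allclose(h, h[::-1]))
```

The symmetry check is exactly the linear-phase condition; factoring and reflecting the zeros for the minimum-phase version is a separate step (e.g. via the polynomial roots of h).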
{ "domain": "dsp.stackexchange", "id": 2922, "tags": "filters, sampling, frequency-response" }
OpenCV kmeans predict equivalent
Question: I'm moving my project from Python's libraries to OpenCV, and I have one big problem. In Python's scikit-learn, I have a kmeans object, which has two useful methods: fit_predict and predict, which are both necessary for my project. The problem is I can't find any equivalent in OpenCV. There is only the function cv::kmeans, which is not enough for me. When I'm using scikit-learn, the pseudocode of my program looks like: kmeans.fit_predict(train_data) # gives me labels, and computes centers kmeans.predict(test_sample) # gives me labels for the test sample, computed from the centers above Is there any equivalent or way to do the same in OpenCV? Answer: The OpenCV kmeans function is equivalent to fit(). It returns the cluster centers, and you can use these to implement your own predict() function. For a new sample, calculate the distance to each of the cluster centers. The cluster with the lowest distance is the predicted class.
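A minimal sketch of that predict() in NumPy (it works on the centers array returned by cv::kmeans; the centres and samples below are made up):

```python
import numpy as np

def kmeans_predict(samples, centers):
    """Label each sample with the index of its nearest cluster centre."""
    # Pairwise squared Euclidean distances, shape (n_samples, n_centers).
    d2 = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

centers = np.array([[0.0, 0.0], [10.0, 10.0]])   # e.g. the centers from cv2.kmeans
samples = np.array([[1.0, -1.0], [9.0, 11.0]])
print(kmeans_predict(samples, centers))          # sample 0 -> centre 0, sample 1 -> centre 1
```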
{ "domain": "datascience.stackexchange", "id": 4267, "tags": "machine-learning, python, scikit-learn, opencv" }
Filter string with min distance in r
Question: I have the following sample data: target <- "Stackoverflow" candidates <- c("Stackflow", "Stackflow", "Stckoverfow") I would like to filter the string from candidates that has the lowest (Levenshtein) distance to the target string. I decided to use the pipe operator to improve readability. However, I find it suboptimal that adist doesn't return the strings that correspond to the distances. My attempts: Version 1) library(magrittr) candidates %>% adist(y = target) %>% which.min %>% `[`(x = candidates, i = .) Version 2) library(dplyr) candidates %>% adist(y = target) %>% data.frame(dist = ., candidates = candidates) %>% filter(dist == min(dist)) %>% select(candidates) Goal: I want to improve readability. Memory usage or performance do not matter as the data is pretty small. Using additional packages, like dplyr, is also fine. In version 1 I think `[`(...) is bad to read. Version 2 seems better to me. But it does not seem optimal to me that I have to add the strings to the data.frame after having applied adist() to them: data.frame(dist = ., candidates = candidates). Answer: What is wrong with simple code like?: distances <- adist(candidates, target) candidates[distances == min(distances)] This is shorter and, in my opinion, easier to read, as it does not require any knowledge of additional packages. Also, you mentioned that your data is small and it looks like you are working with vectors, so I do not see a point in using data.frame, dplyr, etc.
{ "domain": "codereview.stackexchange", "id": 36379, "tags": "comparative-review, r, edit-distance" }
Should we upsample the channel when upsampling the signal?
Question: I have an OFDM system with number of sub-carriers $N = 1024$; modulated data using $QAM$ modulation is transmitted via those subcarriers as follows: modulation --> ifft --> adding CP --> upsampling --> conv channel --> adding noise Then at the receiving side, the received signal is processed as follows: discarding the channel delay --> downsampling --> CP removal --> fft --> MMSE equalizer --> demodulation The issue which I am facing in that system is in the equalizer step. When using the original channel, I cannot get the performance back. However, when I estimate the channel after the fft step and then use the estimated channel, the performance is OK! Why does that issue happen? I think it is because of the upsampling step: when I don't upsample the signal, everything is fine, but when I use upsampling and then equalize using the original channel, I can't get the performance back. Answer: In simulation, we upsample to simulate the "analog" signal, which is a continuous signal in reality but which we represent at a high time resolution in our digital/discrete simulation. When we upsample our data before going through the channel, we should also upsample the "impulse response" (IR) of the channel. It means that the time base of both the signal and the channel's IR must be the same. For example, take a channel with an IR of {4,0,0,3,0,1.5,0,0,0}. First, we may apply our signal (without upsampling) to this channel. Then, we upsample our signal by a factor of 2 and want to apply it to this channel again. This time we have to upsample our channel's IR by a factor of 2, just like the signal. So, the upsampled channel IR would be: {4,0,0,0,0,0,3,0,0,0,1.5,0, ...}. So, the time base or time resolution of both the signal and the channel should be equal.
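The zero-insertion step for the impulse response takes a couple of NumPy lines. Concretely, the expander puts each tap at L times its original index (so the tap 3 moves from index 3 to index 6, and the tap 1.5 from index 5 to index 10 for L = 2):

```python
import numpy as np

def upsample(x, L):
    """Expander: insert L-1 zeros between consecutive samples."""
    y = np.zeros(len(x) * L)
    y[::L] = x
    return y

h = np.array([4.0, 0, 0, 3, 0, 1.5, 0, 0, 0])   # example IR
h_up = upsample(h, 2)

print(h_up[0], h_up[6], h_up[10])   # taps land at 2x their original indices
```

(In a more physical simulation one would interpolate the continuous-time channel response rather than zero-stuff, but zero insertion is the consistent counterpart of a signal expander.)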
{ "domain": "dsp.stackexchange", "id": 9000, "tags": "digital-communications, ofdm, equalization, channel-estimation" }
Machine learning algorithm which gives multiple outputs from a single input
Question: I need some help. I am working on a problem where I have the OCR of an image of an invoice and I want to extract certain data from it, like the invoice number, amount, date, etc., which is all present within the OCR. I tried a classification model where I individually passed each sentence from the OCR to the model to predict whether it was the invoice number or date or anything else, but this approach takes a lot of time and I don't think it is the right approach. So, I was thinking whether there is an algorithm where I can have an input string and have outputs mapped from that string, like the invoice number, date, and amount present within the string. E.g.: Input string: the invoice #1234 is due on 12 oct 2018 with amount of 287 Output: Invoice Number: #1234, Date: 12 oct 2018, Amount 287 So, my question is: is there an algorithm which I can train on several invoices and then make predictions with? Answer: The Keras functional API is one way you can solve your problem. Using the Keras functional API, we can build models that look more like graphs, e.g. one input feeding several output branches. In order to build a model like this, you can use Keras as follows: from keras.models import Model from keras import layers from keras import Input input_layer = Input(shape=(100,), dtype='float32', name="Input") split_layer = layers.Dense(32, activation='relu', name='split_layer')(input_layer) first_layer = layers.Dense(32, activation='relu', name='first_layer')(split_layer) second_layer = layers.Dense(32, activation='relu', name='second_layer')(split_layer) model = Model(input_layer,[first_layer, second_layer]) model.summary() In order to compile this model, we can define different loss functions for different layers: model.compile(optimizer=optimizer, loss={'first_layer':'mse', 'second_layer':'binary_crossentropy'}, metrics=['accuracy']) Once you are done with building the network, you can simply fit your data as follows: model.fit(X, {'first_layer': first_layer_Y, 'second_layer': second_layer_targets},
epochs=10 )
{ "domain": "datascience.stackexchange", "id": 3920, "tags": "machine-learning, python, deep-learning, ocr" }
Question about aircraft/rockets
Question: Let's say that you're sitting in an inverted airplane. How do you determine how fast the plane must accelerate in order for you not to fall out? Answer: That depends on two things: the coefficient of friction between the pilot and his seat, and the direction of acceleration. First case: The aircraft accelerates along its flight path. The pilot is pressed against the seat by the acceleration, and if that pressure is sufficient, friction will keep him in place. Since the coefficient of static friction $\mu_s$ is equivalent to the tangent of the inner frictional angle, and the ratio between gravity and acceleration is also a tangent, the acceleration $a$ along the flight path must be $$a > \frac{g}{\mu_s}$$ assuming a horizontal flight path and a vertical backrest. For different flight path and backrest angles, correct accordingly. Second case: The aircraft flies a parabola such that the pilot is pressed into his seat by centrifugal forces. If the angular velocity of the pitch motion is $q$, then since the centrifugal force has to be greater than the pilot's weight, the condition is $$q > \frac{g}{v}$$ The higher the speed $v$ you accelerate to, the smaller the minimum pitch rate becomes.
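Plugging in some assumed numbers (the friction coefficient and airspeed are my illustrative picks, not given in the answer): the friction force is $\mu_s$ times the normal force $m\,a$, and it must balance the weight $m\,g$, giving $a > g/\mu_s$; the centripetal condition gives $q > g/v$.

```python
g = 9.81      # m/s^2
mu_s = 0.5    # assumed static friction coefficient, pilot vs. seat
v = 100.0     # m/s, assumed airspeed

# Case 1: friction mu_s * (m*a) must balance the weight m*g.
a_min = g / mu_s
# Case 2: centripetal acceleration q*v must exceed g (q in rad/s).
q_min = g / v

print(f"a > {a_min:.2f} m/s^2, q > {q_min:.4f} rad/s")
```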
{ "domain": "physics.stackexchange", "id": 29926, "tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram" }
How is proving a context free language to be ambiguous undecidable?
Question: I've read somewhere that a Turing machine cannot compute this and it's therefore undecidable, but why? Why is it computationally impossible for a machine to generate the parse trees and make a decision? Perhaps I'm wrong and it can be done? Answer: We reduce from Post's Correspondence Problem. Suppose we can, in fact, decide the language $\{\langle G\rangle\mid G\textrm{ is an ambiguous CFG}\}$. Given $\alpha_1, \ldots, \alpha_m, \beta_1, \ldots, \beta_m$: Construct the following CFG $G = (V,\Sigma,R,S)$: $V = \{S, S_1, S_2\}$, $$\begin{align} R = \{S_{\phantom0}&\rightarrow S_1\mid S_2,\\ S_1&\rightarrow \alpha_1 S_1 \sigma_1 \mid \cdots \mid \alpha_m S_1 \sigma_m \mid \alpha_1 \sigma_1 \mid \cdots \mid \alpha_m \sigma_m,\\ S_2&\rightarrow \beta_1 S_2 \sigma_1\mid \cdots \mid \beta_m S_2 \sigma_m \mid \beta_1 \sigma_1\mid \cdots \mid \beta_m \sigma_m\} \end{align}$$ (where $\sigma_i$ are new characters added to the alphabet, e.g., $\sigma_i = \underline{i}$). If the grammar is ambiguous, then there is a derivation of some string $w$ in two different ways. Supposing, wlog, that the derivations both start with the rule $S\rightarrow S_1$, reading the new characters backwards until they end makes sure there can only be one derivation, so that's not possible. Hence, we see that the only ambiguity can come from one $S_1$ and one $S_2$ 'start'. But then, taking the substring of $w$ up to the beginning of the new characters, we have a solution to the PCP (since the strings of indices used after those points match). Similarly, if there is no ambiguity, then the PCP cannot be solved, since a solution would imply an ambiguity that just follows $S\Rightarrow S_1\Rightarrow^* \alpha\tilde{\sigma}$ and $S\Rightarrow S_2\Rightarrow^* \beta\tilde{\sigma}$, where $\alpha = \beta$ are strings of matching $\alpha$'s and $\beta$'s (since the $\tilde{\sigma}$'s match). Hence, we've reduced from PCP, and since that's undecidable, we're done.
(Let me know if I've done anything boneheaded!)
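The grammar construction itself is mechanical; here is a small sketch that emits the productions of $G$ for a given PCP instance (the string representation and the index-marker symbols are my own choices):

```python
def pcp_to_cfg(alphas, betas):
    """Build the productions of the reduction's grammar G as lists of strings."""
    assert len(alphas) == len(betas)
    marks = [f"#{i}" for i in range(1, len(alphas) + 1)]  # fresh symbols sigma_i
    rules = {"S": ["S1", "S2"], "S1": [], "S2": []}
    for a, b, s in zip(alphas, betas, marks):
        rules["S1"] += [f"{a} S1 {s}", f"{a} {s}"]  # S1 -> alpha_i S1 sigma_i | alpha_i sigma_i
        rules["S2"] += [f"{b} S2 {s}", f"{b} {s}"]  # S2 -> beta_i  S2 sigma_i | beta_i  sigma_i
    return rules

rules = pcp_to_cfg(["a", "ab"], ["aa", "b"])
print(len(rules["S1"]), len(rules["S2"]))   # two productions per index pair
```

The instance has a solution exactly when some string derivable from S1 is also derivable from S2, which is exactly the ambiguity of G.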
{ "domain": "cstheory.stackexchange", "id": 5793, "tags": "computability, turing-machines, context-free, ambiguity" }
Confusion in a trick in solving an energy eigenfunction
Question: Given a non-relativistic energy eigenfunction $\left|\Phi \right>$ for a central potential. In solving the relativistic hydrogen atom, one of the terms is $$ \left<\Phi\middle|\frac{e^2}{r}\middle|\Phi\right> \tag{1} $$ I have read that a trick to solve it is: $$ \left<\Phi\middle|\frac{e^2}{r}\middle|\Phi\right> =\left<\Phi\middle|\frac{-e}{2}\frac{\partial}{\partial e}\left[\frac{\hat {P}^2}{2m}-\frac{e^2}{r}\right]\middle|\Phi\right> = -\frac{e}{2}\left<\Phi\right|\frac{\partial}{\partial e}\left[\hat {H}\left|\Phi\right>\right]\tag{2} $$ and it gives the correct value $$-\frac{me^4}{\hbar^2n^2} .\tag{3}$$ In order to understand this trick, I tried it on another term: $$ \left<\Phi\middle|\frac{e^4}{r^2}\middle|\Phi\right> =\left<\Phi\middle|\frac{e^2}{r} \frac{e^2}{r}\middle|\Phi\right> =\left<\Phi\middle|\frac{e^2}{4}\left[\frac{\partial}{\partial e}\left(\frac{P^2}{2m}-\frac{e^2}{r}\right)\right]^2\middle|\Phi\right> ,\tag{4} $$ assuming $(\frac{\partial}{\partial e}\hat{H})^\dagger = \frac{\partial}{\partial e}\hat{H}$. Then this equals $$ \left<\frac{e}{2}\frac{\partial}{\partial e} \left(\frac{\hat{P}^2}{2m}-\frac{e^2}{r}\right)\Phi\middle|\frac{e}{2}\frac{\partial}{\partial e}\left(\frac{\hat{P}^2}{2m}-\frac{e^2}{r}\right)\Phi\right> = \left(\frac{me^4}{\hbar^2n^2}\right)^2 .\tag{5} $$ However, this very term is supposed to tell us how the relativistic effect destroys the symmetry, while this result gives no degeneracy breaking. As it turns out, the correct answer for this term is $$\frac{m^2e^4}{(\ell+1/2)\hbar^4n^3}.\tag{6}$$ I have spent a long time on this and still can't figure out where it goes wrong; I suspect $\frac{\partial}{\partial e}\hat{H}$ is not Hermitian.
Answer: The theory behind the trick is based on the Hellmann-Feynman (HF) theorem $$ \frac{dE_{\lambda}}{d\lambda}~=~\langle \psi_{\lambda} | \frac{d\hat{H}_{\lambda}}{d\lambda}| \psi_{\lambda} \rangle,\tag{A}$$ which works with a single derivative, but not with a square of a derivative, cf. OP's failed calculation (5) for the expectation value $\langle\frac{1}{r^2}\rangle$. Incidentally on the Wikipedia page, the correct result (6) for $\langle\frac{1}{r^2}\rangle$ is obtained via the HF theorem by varying wrt. the azimuthal quantum number $\ell$ rather than the electric charge $e$. (Concerning a subtlety in the variation wrt. $\ell$, see also this related Phys.SE post.)
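For completeness, here is the single-derivative computation that the HF theorem does license, using $E_n=-\frac{me^4}{2\hbar^2n^2}$ (note that $e^2/r$ is positive-definite, so the expectation value comes out positive; the value quoted in the question's Eq. (3) carries the sign of $\langle V\rangle = -\langle e^2/r\rangle$):

```latex
\frac{\partial \hat H}{\partial e} = -\frac{2e}{r}
\quad\Longrightarrow\quad
\Bigl\langle \frac{2e}{r} \Bigr\rangle
  = -\frac{dE_n}{de}
  = \frac{2me^3}{\hbar^2 n^2}
\quad\Longrightarrow\quad
\Bigl\langle \frac{e^2}{r} \Bigr\rangle
  = \frac{me^4}{\hbar^2 n^2}.
```

No such one-line argument exists for $\langle 1/r^2\rangle$ via the charge $e$, which is why the Wikipedia route varies $\ell$ instead.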
{ "domain": "physics.stackexchange", "id": 32567, "tags": "quantum-mechanics, atomic-physics, symmetry, hydrogen" }
What's the evolutionary reason behind decussation?
Question: A bunch of stuff in the human nervous system decussates. Optical information inputs from the eyes cross over in the optic chiasm. Multiple sensory and motor pathways cross over before ascending the spinal cord or within the brain stem. What are the current hypotheses being explored to justify this anatomical phenomenon? Answer: I don't think it is a clever thing to group all types of decussation and look for a general explanation. I would tend to think that different decussations have different explanations. It is like asking what the hypotheses are to explain the evolution of body size. There is no general answer to that, but only a list of case-specific impacts of different factors on body size (see this post). Consider the two case studies 1) "Why does the left brain control the right part of the body?" and 2) "Torsion in Gastropoda" as examples of how different explanations apply to different decussations. Why does the left brain control the right part of the body? See @shigeta's answer to this post. It is very interesting. Torsion in Gastropoda Torsion in Gastropoda is an interesting case. In this lineage individuals undergo some torsion during their development. This Wikipedia article says: Snails are distinguished by an anatomical process known as torsion, where the visceral mass of the animal rotates 180° to one side during development, such that the anus is situated more or less above the head. This process is unrelated to the coiling of the shell, which is a separate phenomenon. Torsion is present in all gastropods, but the opisthobranch gastropods are secondarily detorted to various degrees.[13][14] Torsion occurs in two mechanistic stages. The first is muscular and the second is mutagenetic. The effects of torsion are primarily physiological - the organism develops an asymmetrical nature with the majority of growth occurring on the left side.
This leads to the loss of right-paired appendages (e.g., ctenidia (comb-like respiratory apparatus), gonads, nephridia, etc.). Furthermore, the anus becomes redirected to the same space as the head. This is speculated to have some evolutionary function, as prior to torsion, when retracting into the shell, first the posterior end would get pulled in, and then the anterior. Now, the front can be retracted more easily, perhaps suggesting a defensive purpose. However, this "rotation hypothesis" is being challenged by the "asymmetry hypothesis" in which the gastropod mantle cavity originated from one side only of a bilateral set of mantle cavities.[15] Gastropods typically have a well-defined head with two or four sensory tentacles with eyes, and a ventral foot, which gives them their name (Greek gaster, stomach, and poda, feet). The foremost division of the foot is called the propodium. Its function is to push away sediment as the snail crawls. The larval shell of a gastropod is called a protoconch. You can find more information there
{ "domain": "biology.stackexchange", "id": 2878, "tags": "evolution, human-anatomy, neuroanatomy" }
Particles acting like waves
Question: Wave–particle duality is kinda bothering me... I read that electrons can act like waves, but I know that electrons are actually particles. The theory says that if you have not observed the particle it acts like a wave and can explore all classically available particle trajectories simultaneously, and when you observe it, it ends up in just one place. But doesn't that just mean that this particle is a particle the whole time, but you can't know its location without observing it, and the wave just presents a range of possible locations where it might be if you do observe it? And if so, why would you call it "wave–particle duality" if this particle is just a particle doing its thing and a wave is just a best guess of where it is? Answer: What you think of as a particle, the electron for example, is a quantum mechanical entity that behaves as a classical billiard ball in some experiments but collectively displays behaviors that cannot be explained by classical mechanics; one of them is a wave nature, i.e. interference phenomena, when studied appropriately. I will repeat some paragraphs from a previous answer. In the quantum mechanical framework, single events/instances can be described by classical trajectories and physics. It is when the statistics are accumulated that the wave behavior appears. The statistical distribution of such scatterings will be a probability distribution given by the quantum mechanical wave equations, and will display the wave nature of the underlying framework. The waves in quantum mechanics are probability waves. Many instances must be accumulated in a distribution to manifest the wave nature. In the double-slit experiment with single electrons, a single electron does not express any wave nature. One can calculate its trajectory classically after the fact. One cannot predict the trajectory unless the probability-wave nature of the underlying framework is taken into account.
So in the double-slit experiment the individual electron appears as a dot on the screen, but its trajectory cannot be predicted by classical mechanics from the momenta and geometry. To summarize: for individual measurements the wave nature may not appear at all, or cannot be predictive of a trajectory. What the wave nature does is predict a statistical distribution for the particles under consideration. In the double-slit experiment with individual electrons, the accumulation of single "particle" hits displays an interference pattern, and this is the particle/wave duality. You ask: But doesn't that just mean that this particle is a particle the whole time, but you can't know its location without observing it and the wave just presents a range of possible locations where it might be if you do observe it? In fact you do not know what it is until you observe it, and if it is one event, you can only pull your beard and wonder at its trajectory as a single particle. By accumulating events you observe the wave nature in the distribution. This insight allowed physicists to describe the microcosm mathematically with the quantum mechanical differential equations, which are wave equations, plus a system of postulates, the main one being "the square of the solution is the probability distribution". The description has been validated by innumerable experiments.
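The point that individual dots accumulate into an interference pattern can be sketched numerically (an illustrative toy model, not a full quantum calculation; the fringe spacing and screen units are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized two-slit probability density |psi(x)|^2 ~ cos^2(5x) on a screen
# (the fringe spacing "5" and the screen units are arbitrary toy choices).
x = np.linspace(-np.pi, np.pi, 2001)
p = np.cos(5 * x) ** 2
p /= p.sum()

# Each electron lands at ONE position -- a single dot on the screen.
hits = rng.choice(x, size=50_000, p=p)

# A single hit shows no wave nature; the accumulated histogram does:
counts, edges = np.histogram(hits, bins=100, range=(-np.pi, np.pi))
```

Bins near the fringe minima end up nearly empty while neighbouring bins collect hundreds of hits, which is the interference pattern emerging purely from single-particle statistics.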
{ "domain": "physics.stackexchange", "id": 16015, "tags": "wave-particle-duality" }
Dissipation caused by Gravitational Wave Emission
Question: Two massive bodies orbiting each other can lose energy through gravitational wave emission until colliding. Can a single massive body, moving with constant velocity with respect to an observer, lose its kinetic energy to gravitational waves? I get that changing the position like this without oscillating doesn't result in a wave exactly, but mustn't the new information about the gravitational field still be transmitted, resulting in energy loss? Answer: The answer and reasoning for this are exactly the same as in electromagnetism. (Assuming the observer is far enough away and has light enough mass that we can ignore the effect the observer has on spacetime for the massive body). Since the massive body is traveling at a constant velocity (below $c$), we can go into its rest frame. Since it obviously isn't emitting waves in its rest frame, and since the laws of physics are the same in any inertial frame, it is also not emitting waves in the observer's rest frame. At this point I am going to transition to talking about electromagnetism, because the correct words are probably more familiar. But, the logic here can also be transferred to gravity (to leading order in perturbation theory). The electric field due to the "static" part of the field (not the wave) falls off as $1/r^2$, where $r$ is the distance to the "charged body" (assuming, since we've moved to electromagnetism, that the body has a net charge). The electric field due to an electromagnetic wave sourced by an accelerating charged body falls off as $1/r$. If the charged body is moving at a constant velocity $v$, the information about the changing $1/r^2$ part of the field does indeed travel at $v$. You can imagine that in some sense, the entire $1/r^2$ field configuration is moving at a constant velocity. If the charge accelerates, this information needs to propagate to the observer somehow, which it does in the form of a wave.
The above image, from the textbook by Purcell and Morin (which I grabbed from this website), shows an electromagnetic wave communicating the change of a particle's velocity to a distant observer. Far away (far enough that light has not had time to propagate since the particle accelerated), the field lines move at a constant velocity, and point to where the particle would be if it hadn't accelerated. Nearby, the field lines point to the current location of the particle. In between the near and far regions, there is a visible shell, which is the wave (remember that the electric field points perpendicular to the direction of propagation of a wave). The wave communicates to the distant observer that the point charge has accelerated. In gravity, at asymptotically large distances away from the massive body, the spacetime curvature falls off as $1/r$ for gravitational waves emitted from the body (if its quadrupole moment changes with time), and $1/r^2$ for a body that is static or moving with a constant velocity. We can make the same conceptual split; gravitational waves carry information that tells a distant observer when the quadrupole moment of the mass distribution of the source changes, much like electromagnetic waves carry information that the dipole moment of the charge distribution changes. A massive body moving at a constant velocity does not lead to gravitational waves. This description and connection to electromagnetism relies on perturbation theory; there are also more rigorous descriptions in terms of quantities describing the asymptotic behavior of the spacetime, like the Bondi news, but this does not change the answer to your question.
{ "domain": "physics.stackexchange", "id": 83937, "tags": "general-relativity, energy-conservation, gravitational-waves, dissipation, binary-stars" }
Does evaporation reverse salt hydrolysis?
Question: It has been a long time since I studied organic chemistry, but one thing I do remember is that when we needed cyanide salts we were told not to bother keeping any remaining solution because the salts hydrolyze very quickly. But can't hydrolyzed salts be fully recovered by simply evaporating the water? E.g., we don't worry about evaporating NaCl from a distilled water solution and finding anything other than NaCl. So I'm wondering if the lab instructions to neutralize and dispose of cyanide salt solution were just based on expediency. Or: can something unsafe happen during the evaporation of a hydrolyzed salt solution? Answer: Simple cyanide salts In aqueous solution, simple cyanide salts more or less completely dissociate into their constituent ions. E.g. for potassium cyanide: $\ce{{KCN}(s) + H2O(l) -> K+(aq) + CN- (aq)}$ Those solutions are relatively stable. There is an acid-base equilibrium between $\ce{CN-}$ and $\ce{HCN}$, but in unbuffered or in alkaline aqueous solution this equilibrium will be almost completely on the left: $\ce{CN- + H2O <<=> HCN + OH-}$ Over very long times, if the solution is left in an open container, some of the minute amount of $\ce{HCN}$ formed in this way could evaporate, leading to a very slow loss of cyanide from the solution. If stored closed this will not be a problem. (Mixing simple cyanide salts with an acid will result in an immediate and dangerous release of gaseous $\ce{HCN}$, so in this question I assume that no acids are involved.) Thus, for simple cyanide salt solutions stored appropriately, yes, you should indeed be able to recover all of the initially added solid salt after boiling off or otherwise removing any water. Cyanide complexes of transition metals Salts such as potassium ferricyanide $\ce{K3Fe(CN)6}$ consist of potassium cations and complexed ferricyanide anions, $\ce{Fe(CN)6^{~3-}}$. They are soluble in water as well, but unlike simple cyanide anions, they can be hydrolyzed.
The hydrolysis is catalyzed by UV light and results in the formation of free $\ce{CN-}$ anions as well as solid iron (hydr)oxide precipitates. See this book for more information.
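The acid–base equilibrium above can be made quantitative with the Henderson–Hasselbalch relation (a quick sketch; the pKa of HCN, taken here as 9.2 at 25 °C, is an assumed textbook value):

```python
# Fraction of total cyanide present as volatile HCN at a given pH.
# pKa of HCN ~ 9.2 at 25 C (an assumed literature value; check a table
# for your exact conditions).
PKA_HCN = 9.2

def hcn_fraction(ph):
    """[HCN] / ([HCN] + [CN-]) from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (ph - PKA_HCN))

alkaline = hcn_fraction(12.0)   # strongly alkaline storage: almost all CN-
neutral = hcn_fraction(7.0)     # near-neutral water: mostly HCN
```

At the pKa itself the fraction is exactly one half, which is a handy sanity check; the numbers show why alkaline storage keeps the volatile-HCN loss slow.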
{ "domain": "chemistry.stackexchange", "id": 3897, "tags": "organic-chemistry, hydrolysis" }
Use the pumping lemma to show the language is not regular
Question: I can use the pumping lemma to prove simpler examples, but I'm finding this problem rather complex, partly due to the notation. Can anyone explain how I would do this problem: For any string $s$ in $\{a, b\}^*$, let $n_a(s)$ be the number of a’s in s, and let $n_b(s)$ be the number of b’s in s. Let L over {a, b} be given by $L = \{x \in \{a, b\}^* | n_b(x) = n_a(x)^2 \space \text{and} \space x \notin ((a^∗b^∗) ∪ (b^∗a^∗))\} $. Prove, using the pumping lemma for regular languages, that the language L is not a regular language. I know we suppose that it is regular; then by the pumping lemma it has to satisfy the properties of the pumping lemma for regular languages. Hence, we need to find a decomposition and break it by pumping it to obtain a string that is no longer in the language, contradicting the hypothesis. However, I fail to see the decomposition or what to pump… Answer: Assume $L$ is regular, and let $p$ be as in the pumping lemma. Note that $s = a^pb^{(2p)^2}a^p\in L$. Now, by the pumping lemma, we have a decomposition of $s = xyz$. As $|xy|\leq p$, we must have that $y$ consists only of $a$'s. Say that $y = a^k$ for some $k\leq p$ (and by the property that $|y|>0$, we have that $k\geq 1$). Then, we have that: $$s = \underbrace{a^{p-k}}_x\underbrace{a^k}_y\underbrace{b^{4p^2}a^p}_z$$ Again, by the pumping lemma, we have that $L\ni xz = a^{p-k}b^{4p^2}a^p$. But, we have that $n_a(xz) = 2p-k$, and $n_b(xz) = 4p^2$, but $n_a(xz)^2 = 4p^2-4pk+k^2\neq 4p^2 = n_b(xz)$, where the lack of equality holds because $1\leq k\leq p$, as stated before. So, we have a contradiction to the pumping lemma, so $L$ isn't regular. This all is entirely standard, and the only "trick" is picking the right string $s$. A few tips for this: $s$ will generally need to depend on $p$ in some way so it's "long enough to be pumped". You'll want to carefully choose the first $p$ characters of $s$.
If it can all be one character repeated (like in this case we had $a^p$), it makes determining what $y$ is much easier, and you can do the proof with a single case. Besides that, you want to choose the rest of the string so $s\in L$, and some "pumped version" of $s$ isn't in $L$. This is clearly the hardest part, and the part I don't have a great procedure for. Generally trying a few things ends up with something working eventually. For "counting" based ones like the one you posted (some number of one symbol is equal to some number of some other symbol), usually having the part you pump be of the form $a^k$ is enough to mess with the counting.
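The proof can be sanity-checked by brute force for a small stand-in value of the pumping length (an illustrative script; `in_L` is a hypothetical helper name, and a real pumping length would come from a DFA, which of course does not exist here):

```python
import re

def in_L(x):
    """Membership test for L = { x : n_b(x) = n_a(x)^2 and x not in a*b* U b*a* }."""
    na, nb = x.count('a'), x.count('b')
    excluded = re.fullmatch(r'a*b*', x) or re.fullmatch(r'b*a*', x)
    return nb == na ** 2 and not excluded

p = 4                                    # a small stand-in for the pumping length
s = 'a' * p + 'b' * (2 * p) ** 2 + 'a' * p
assert in_L(s)                           # the witness string is in L

# Pumping down (taking i = 0) with any y = a^k from the first block
# breaks the count n_b = n_a^2, so the pumped string leaves L:
for k in range(1, p + 1):
    xz = 'a' * (p - k) + 'b' * (2 * p) ** 2 + 'a' * p
    assert not in_L(xz)
```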
{ "domain": "cs.stackexchange", "id": 10384, "tags": "formal-languages, regular-languages, pumping-lemma" }
Why is this statement "The number of negative and positive charges in the Universe does not change" false?
Question: I am not able to understand why this is false, given that total charge in a system is conserved. Does this mean that charge can be created or destroyed such that the sum of charge remains the same? Answer: Suppose that in my universe there are $3$ electrons and $4$ positrons, so that there is $1$ unit of positive charge. Then one day a photon reacts via $$\gamma \rightarrow e^- + e^+$$ so that now there are $4$ electrons and $5$ positrons. That makes the statement wrong, while the total charge still remains $1$ unit. Edit: Please find the details of the pair production here. I haven't gotten into the details about the universe, but I think the central idea is clear. But as raised by a question in a comment, one electron is bound to a proton, forming an atom, and the others are free. The pair production takes place near this atom so that momentum conservation remains valid.
{ "domain": "physics.stackexchange", "id": 78159, "tags": "electrostatics, conservation-laws, charge, universe" }
Simple yet threadsafe file cache in java
Question: I'm attempting to cache contents of a file to avoid frequent reads, using a very simple implementation in a single class. The file contains a list of email IDs, one per line. The file changes rarely, and the cache should be updated immediately when it does. So I've checked the file's timestamp in each read operation. The class will be used in servlets and REST APIs - as a member variable. Although I've synchronized the only public method of the class, I'm not sure whether the code is threadsafe. Do I need to add/change something here to ensure thread safety? public class CachedFile { private File file ; private long lastModified ; private List<String> fileLines ; private boolean fileCached ; public CachedFile( final String inputFileName ) { file = new File( inputFileName ) ; fileCached = false ; } public synchronized List<String> getLines() throws IOException { //if file is not cached or if its modified after the last read, read & cache it if( fileCached == false || fileModifiedAfterLastRead() ) { readFile() ; } return fileLines ; } private void readFile() throws IOException { //read file to cache fileLines = FileUtils.readLines( file ) ; //cache the last modified time lastModified = file.lastModified() ; //set the cached flag to true fileCached = true ; } private boolean fileModifiedAfterLastRead() { return( file.lastModified() > lastModified ) ; } } Answer: Since you mentioned that the file could be written by another process (assuming in another JVM), there is no way you can rely on object-monitor-based thread safety. So you can depend upon the last modified time of the file object. You could also remove the synchronization from your getLines() if you didn't return the instance variable; but you do return it, so the synchronization should stay in getLines(). Basically you want to read the file without data corruption. That could happen if another process is writing to the file when your process reads from it.
If that is the case, you want to re-read the file. Here is how I would implement it. private void readFile() throws IOException { int retryCount = 5; while (retryCount > 0) { long beforeRead = file.lastModified(); //read file to cache fileLines = FileUtils.readLines(file); if (beforeRead == file.lastModified()) break; retryCount--; } //cache the last modified time lastModified = file.lastModified() ; //set the cached flag to true fileCached = true ; } The while loop will retry the read operation if there is a mismatch. Also, the retry count has to be a finite number to avoid looping forever in case of any file system failures.
{ "domain": "codereview.stackexchange", "id": 28040, "tags": "java, multithreading, file, cache" }
How can you test to see if a die is weighted?
Question: I was browsing Etsy today and came across this. What tests are there to see if the dice are usable, i.e., if one side isn't favored over another, and if all sides are balanced? Would this just be to roll the dice a large number of times and collect the data? Or are there other tests that could be done without doing that? Answer: A statistical test would require a large number of rolls of the dice. For a simpler statistical example, tossing a coin $N$ times to test whether it is fair would result in approximately $\frac{N}{2}$ heads, but the standard deviation of that count would be about $\sqrt{N}/2$, so getting the standard deviation of the measured proportion down to $0.01$ around the $0.50$ value that you are trying to measure would require $N = \left(\frac{0.5}{0.01}\right)^2 = 2500$ tosses of the coin. Another test you could try would be to accurately measure where the center of mass of the die is. You could do this by seeing if the die can be approximately balanced on a knife edge which is bisecting one of the die's faces. Do this for 3 of the orthogonal faces of the die, and if they all are balanced, then the center of mass is in the geometric center of the die. However, I don't know what error in this measurement would be achievable, nor how much an off-center center of mass would affect the fairness of the die.
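Rolling many times and applying a chi-square goodness-of-fit test is the standard statistical route; a sketch (the roll counts, bias, and fixed seed are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 6000 rolls of a fair die and of a die loaded toward six
# (the counts, bias, and seed are arbitrary illustration choices).
fair = rng.integers(1, 7, size=6000)
loaded = rng.choice(np.arange(1, 7), size=6000,
                    p=[0.14, 0.14, 0.14, 0.14, 0.14, 0.30])

def chi2_stat(rolls):
    """Pearson chi-square statistic against the fair-die hypothesis."""
    observed = np.bincount(rolls, minlength=7)[1:]
    expected = len(rolls) / 6.0
    return float(((observed - expected) ** 2 / expected).sum())

# Critical value of the chi-square distribution with 5 degrees of freedom
# at the 5% significance level (a standard table value).
CRITICAL_5DF_95 = 11.07

fair_rejected = chi2_stat(fair) > CRITICAL_5DF_95      # should usually be False
loaded_rejected = chi2_stat(loaded) > CRITICAL_5DF_95  # should be True
```

A fair die will occasionally fail the test at the 5% level by chance; the loaded die here is rejected overwhelmingly, since its statistic lands far beyond the critical value.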
{ "domain": "physics.stackexchange", "id": 16013, "tags": "classical-mechanics, probability" }
Does anybody make and sell stickers with the ROS logo and related products/brands/companies?
Question: Where can I get stickers for ROS, related products, and the companies that produce them? I suddenly have a great desire to put stickers on my laptop after living in Silicon Valley for some time now. I also want to stick them on my robots and STOP signs. Does anybody make and sell stickers with the ROS logo and related products/brands/companies? I would love to have stickers for things like OpenCV, Turtlebot, OSRF, Gazebo, Willow Garage, Yujin, Kobuki, OpenNI, Clearpath, etc. Bonus points for scratch N' sniff. Originally posted by mirzashah on ROS Answers with karma: 1209 on 2013-06-20 Post score: 4 Answer: There are stickers available on Zazzle, as well as posters and canvas prints. General marketing materials are listed here. Originally posted by tfoote with karma: 58457 on 2021-09-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 14645, "tags": "ros" }
Problems using avr_bridge, Arduino <-> ROS communication
Question: Hi, I'd like to enable an Arduino Duemilanove to work with ROS. I've found the avr_bridge package, which is supposed to do exactly what I want. The problem is I cannot get the Hello World tutorial to work. I think I did everything as explained in the aforementioned Wiki-pages. I've defined fputc and fgetc functions, corrected some minor errors in library paths, and the project builds nicely. But when I load it onto the ATmega328P it's not echoing the messages sent. I've done some debugging (without JTAG, so just blinking LEDs in different parts of the code) and have established that the UART-read part isn't working (i.e. the callback method is not being called when data from the /call topic is being read). I guess the error is somewhere in node_handle.cpp, in the part: if (com_state == msg_data_state) { packet_data_left--; if (packet_data_left < 0) { resetStateMachine(); if (header->packet_type == 255) this->sendID(); if (header->packet_type == 0){ //topic, //ie its a valid topic tag //then deserialize the msg this->msgList[header->topic_tag]->deserialize(buffer+4); //call the registered callback function this->cb_list[header->topic_tag](this->msgList[header->topic_tag]); } if(header->packet_type == 1){ //service } } } I've been able to publish some data on the /response topic, though. Any clues? Originally posted by tom on ROS Answers with karma: 1079 on 2011-04-06 Post score: 0 Answer: Can you check out the personal repository of the author (http://github.com/adasta/rutgers-ros-pkg.git)? This might be similar to the problem I encountered before. Originally posted by Homer Manalo with karma: 475 on 2011-04-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by tom on 2011-04-06: OK - there is obviously the imu_9drazor code using it to look up. Comment by tom on 2011-04-06: Thanks Homer. Is there a HelloWorld example for this code as well? I ran gen_avr.py and got the avr_ros directory for the callResponse example generated.
But, as naming in the code changed, it would be handy to have a quick example to look at.
{ "domain": "robotics.stackexchange", "id": 5294, "tags": "ros, arduino, embedded" }
Obstacle Position
Question: Hi there, Is it possible to obtain the co-ordinates of an obstacle that we manually add in Gazebo? I want to write a ROS node such that the user adds the obstacle initially and then the node obtains the position of the obstacle from Gazebo and modifies its x-coordinate. Is this possible? Originally posted by ktiwari9 on ROS Answers with karma: 61 on 2015-08-17 Post score: 0 Answer: You can probably achieve what you want by using the Gazebo services that are available when you start Gazebo with ROS support. Once Gazebo is started you can do a rosservice list and you should see multiple services offered by Gazebo that allow retrieving model properties, spawning models, moving them, etc. (here are the srv definitions). Most services offered are also described in the Gazebo "Connect to ROS" tutorial. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2015-08-17 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by ktiwari9 on 2015-08-17: So in the Connect to ROS tutorial they mention something like rosservice call /gazebo/set_model_state to spawn a coke can at a specific location and orientation. Is it possible to do this via a ROS node? Comment by Stefan Kohlbrecher on 2015-08-17: Yes, you can call ROS services from the command line (by typing "rosservice call ..."), but you can of course also call them from code (see the Python or C++ tutorials for "Writing a Simple Service and Client" here: http://wiki.ros.org/ROS/Tutorials). Comment by ktiwari9 on 2015-08-18: Hi Stefan, Please see my comments above. I uploaded my code and I have some trouble adding both the pedestrian and the robot to the scene. Could you help me out?
{ "domain": "robotics.stackexchange", "id": 22459, "tags": "ros, position" }
Feynman Lectures Vol. 1: error in the formula for resolving power?
Question: In chapter 27 of Feynman Lectures on Physics Vol. 1, section 7 on resolving power, Feynman states the rule for optical resolution: two different point sources can be resolved only if one source is focused at such a point that the times for the maximal rays from the other source to reach that point, as compared with its own true image point, differ by more than one period. So the rule as stated by Feynman is $$ t_2 - t_1 > 1/f $$ where $f$ is the frequency of light. Feynman then continues: If the distance of separation of the two points is called $D$, and if the opening angle of the lens is called $θ$, then one can demonstrate that $ \, t_2 - t_1 > 1/f \, $ is exactly equivalent to the statement that $D$ must exceed $λ/(n \ \text{sin} \theta)$, where $n$ is the index of refraction at $P$ and $λ$ is the wavelength. However, what I get is $$ D> \frac{λ}{2 n \ \text{sin} \theta} $$ i.e. an extra factor of $2$ in the denominator. This holds when $D$ is sufficiently small (as is clear from the picture). If we define $t$ as the time to propagate from $P$ to $S$ or $R$ then by small angle trigonometry (not small in $\theta$!) we get that (omitting $c$ and $n$) $\ t_1 = t - D\ \text{sin} \theta \ $ and $\ t_2 = t + D\ \text{sin} \theta \ $, hence my result. Am I making some trivial mistake? If not, how come this has not been spotted before? N.B.: Feynman writes that $\theta$ is the opening angle of the lens, whereas in the picture it is depicted as the half-angle. Taking $\theta$ to mean the full angle $\angle SPR$ the resulting denominator should be $2 n \ \text{sin} \frac{\theta}{2}$. Answer: If you look at the pictures of this lecture and listen to the tape, it's apparent Feynman was winging it. He had a few pages of notes, weighted down under an ashtray on the lecture table, but he never looked at them. What he refers to as the "opening angle of the lens" is, in fact, half that angle, as can be seen in his blackboard figure. 
To the left of his figure you can see that he not only missed the factor 2 in the denominator, he put n*sin(theta) in the numerator! What he actually said was, "I haven't time to derive it here, but I'll leave it as a problem to see if you can figure it out, that this condition is exactly the same as this: that if the distance of separation of these two, on this point here, is called ugh (pause) d! (pause) and if the opening angle of the lens is called theta, then you can demonstrate that that is exactly equivalent to the statement that n times the sine of theta, where n is the index in this region - supposing that this is all built under oil or something with an index n - that n sine theta... (very long pause) [under his breath, to himself: ... must be, which way? ... then d lambda n sine theta...] D must exceed lambda n sine theta. (pause) D must be greater than that or you can't see it." This will be corrected. If 'label' will send me an email and tell me his or her name I will add it to the list of Contributors posted on our FLP Errata page. Michael Gottlieb Editor, The Feynman Lectures on Physics New Millennium Edition mg@feynmanlectures.info
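For completeness, the asker's small-$D$ derivation can be written out (a sketch, with $\lambda$ the vacuum wavelength $c/f$ and $\theta$ the half-angle, as in the blackboard figure):

$$ t_{1,2} = t \mp \frac{n D \sin\theta}{c}, \qquad t_2 - t_1 = \frac{2 n D \sin\theta}{c} > \frac{1}{f} \;\Longrightarrow\; D > \frac{c}{2 n f \sin\theta} = \frac{\lambda}{2 n \sin\theta}, $$

which reproduces the asker's extra factor of $2$ in the denominator, consistent with the errata correction above.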
{ "domain": "physics.stackexchange", "id": 57441, "tags": "optics, geometric-optics, diffraction" }
ValueError: Expected 2D array, got scalar array instead using predict method
Question: I am trying to get a predicted value instead of whole features for a particular level using predict method. import numpy as np import pandas as pd import matplotlib.pyplot as plt #Importing Dataset dataset = pd.read_csv('C:/Users/Rupali Singh/Desktop/ML A-Z/Machine Learning A-Z Template Folder/Part 2 - Regression/Section 7 - Support Vector Regression (SVR)/Position_Salaries.csv') print(dataset) X = dataset.iloc[:, 1:2].values Y = dataset.iloc[:, 2].values # Feature Scaling from sklearn.preprocessing import StandardScaler sc_X = StandardScaler() sc_Y = StandardScaler() X = sc_X.fit_transform(X) Y = sc_Y.fit_transform(Y.reshape(-1,1)) #Fitting SVR model to dataset from sklearn.svm import SVR regressor = SVR(kernel='rbf') regressor.fit(X,Y) #Visualizing the dataset plt.scatter(X, Y, color = 'red') plt.plot(X, regressor.predict(X), color = 'blue') plt.show() # Predicting a new Result Y_pred = regressor.predict(6.5) print(Y_pred) This is my dataset, here I am trying to predict value only for level 6 Position Level Salary 0 Business Analyst 1 45000 1 Junior Consultant 2 50000 2 Senior Consultant 3 60000 3 Manager 4 80000 4 Country Manager 5 110000 5 Region Manager 6 150000 6 Partner 7 200000 7 Senior Partner 8 300000 8 C-level 9 500000 9 CEO 10 1000000 This is the error message I am getting: File "C:/Users/Rupali Singh/PycharmProjects/Machine_Learning/SVR.py", line 34, in <module> Y_pred = regressor.predict(6.5) File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\sklearn\svm\base.py", line 322, in predict X = self._validate_for_predict(X) File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\sklearn\svm\base.py", line 454, in _validate_for_predict accept_large_sparse=False) File "C:\Users\Rupali Singh\PycharmProjects\Machine_Learning\venv\lib\site-packages\sklearn\utils\validation.py", line 514, in check_array "if it contains a single sample.".format(array)) ValueError: Expected 2D array, got scalar 
array instead: array=6.5. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. I would be really grateful for any kind of help. Answer: Try: Y_pred = regressor.predict(np.array([6.5]).reshape(1, 1)) Scikit-learn does not work with scalars (just one single value). It expects a 2-D array of shape $(m\times n)$, where $m$ is the number of observations (samples) and $n$ is the number of features; both are 1 in your case.
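Note also that the model was fitted on scaled data, so strictly the new level should pass through the same scaler, and the output should be inverse-transformed back to a salary. A self-contained sketch using the dataset values printed in the question:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Position levels and salaries copied from the dataset printed in the question.
X = np.arange(1, 11, dtype=float).reshape(-1, 1)
Y = np.array([45000, 50000, 60000, 80000, 110000, 150000,
              200000, 300000, 500000, 1000000], dtype=float).reshape(-1, 1)

sc_X, sc_Y = StandardScaler(), StandardScaler()
X_scaled = sc_X.fit_transform(X)
Y_scaled = sc_Y.fit_transform(Y)

regressor = SVR(kernel='rbf')
regressor.fit(X_scaled, Y_scaled.ravel())

# Two fixes at once: reshape 6.5 into a 2-D (1, 1) array AND pass it through
# the same scaler the model was trained with; then undo the output scaling.
level = np.array([[6.5]])
y_pred = sc_Y.inverse_transform(
    regressor.predict(sc_X.transform(level)).reshape(-1, 1))
```

Without the `sc_X.transform` step, the raw 6.5 sits far outside the range of the scaled training data, so the prediction would be meaningless even though the reshape alone silences the error.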
{ "domain": "datascience.stackexchange", "id": 7341, "tags": "machine-learning, regression, prediction" }
Utility of D latch/flip-flop and how it differs from an SR latch/flip-flop
Question: I understand that in a D latch, whenever the clock signal is high, Q matches D, and while the clock signal is low, it holds the previous state of D. For a D flip-flop, Q will hold whatever value D is at the exact moment C goes high, and will hold that same state until C goes high again. I am able to draw the clock diagram and identify these circuits. But I do not understand the purpose of these components in a high-level context. What exactly do the D latch and D flip-flop do? (Differences and similarities) From my understanding, the purpose is to be able to "store" a bit value, but if that's the case, why not use an SR latch? Edit: I have seen the following post. However, my question is a little more specific. I am asking specifically about a D latch/flip-flop. I also want to be able to differentiate how this is different from an SR latch, since their descriptions seem to describe the same thing. Answer: One big difference is that while the SR flip-flop has a "not-allowed" state (i.e., inputs S=1, R=1), the D flip-flop has no such condition. Another difference is that the D flip-flop just mirrors the input into the output, while SR has two inputs: one for setting the output to 1, and one for resetting it to 0.
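The behavioral difference can be sketched as a tiny simulation (hypothetical helper functions modeling only the next-state logic, not gate-level timing):

```python
def sr_latch(s, r, q):
    """Next state of an SR latch; S=R=1 is the forbidden input combination."""
    if s and r:
        raise ValueError("S=R=1 is not allowed for an SR latch")
    if s:
        return 1
    if r:
        return 0
    return q                    # S=R=0: hold the previous state

def d_flip_flop(d, q):
    """Next state of a D flip-flop at a clock edge: Q simply takes D."""
    return d                    # no forbidden input combination exists

q = 0
q = sr_latch(1, 0, q)           # set:  q becomes 1
q = sr_latch(0, 0, q)           # hold: q stays 1
q = d_flip_flop(0, q)           # D=0:  q becomes 0
```

To store a bit with SR you must drive two inputs consistently; a D latch effectively derives R as NOT S internally, which is exactly what removes the forbidden state.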
{ "domain": "cs.stackexchange", "id": 6713, "tags": "circuits" }
Schrödinger's cat; why was it necessary?
Question: Could someone please explain to me the idea that Schrödinger was trying to illustrate by the cat in his box? I understand that he was trying to introduce the notion of the cat being both alive and dead at the same time. But why was it necessary to introduce this thought experiment, and what did it achieve? Answer: First, a historical subtlety: Schrödinger actually stole the idea of the cat from Einstein. Second, both men – Einstein and Schrödinger – used the thought experiment to "explain" a point that was wrong. They thought it was absurd for quantum mechanics to say that the state $a|{\rm alive}\rangle+b|{\rm dead}\rangle$ was possible in Nature (it was claimed to be possible in quantum mechanics) because it allowed both "incompatible" states of the cat to exist simultaneously. Third, they were wrong, because quantum mechanics does imply that such superpositions are totally allowed and must be allowed, and this fact can be experimentally verified – not really with cats, but with objects of a characteristic size that has been increasing. Macroscopic objects have already been put into similar "general superposition states". The men introduced it to fight against the conventional, Copenhagen-like interpretations of quantum mechanics, and that's how most people are using the meme today, too. But the men were wrong, so from a scientifically valid viewpoint, the thought experiment shows that superpositions are indeed always allowed – it is a postulate of quantum mechanics – even if such states are counterintuitive. Similar superpositions of common-sense states are measured so that only $|a|^2$ and $|b|^2$ from the coefficients matter and may be interpreted as (more or less classical) probabilities. Due to decoherence, the relative phase is virtually unmeasurable for large, chaotic systems like cats, but in principle, even the relative phase matters.
Quite generally, the people who are wrong – who have a problem with quantum mechanics – like to say that the superposition means that the cat is alive "and" dead. But the right, quantum answer is that the addition in the wave function doesn't mean "and". Instead, it means a sort of "or", so the superposition simply says that the cat is dead or alive, with the appropriate probabilities (quantum mechanics determines not only the probabilities but also their complex phases, and those may matter for other questions).
{ "domain": "physics.stackexchange", "id": 14968, "tags": "quantum-mechanics, hilbert-space, superposition, thought-experiment, schroedingers-cat" }
Nuclear Feedback Loop (Fusion and Fission)
Question: What are the main factors that prevent a feedback loop being created by using a hybrid method of fission and fusion? Fusion building up to fissionable materials, and fission breaking down till fusion is possible. Ex: numbers of neutrons, cross section for interaction, the gap between elements, amount of energy needed, probability of correct species being generated. Further, would it be possible to at least get a few occurrences of the fission/fusion cycle? Answer: The cycle you wish to form can be outlined like this. More massive nuclei are produced from less massive nuclei (fusion) with a net output of energy, and this type of process continues to be exothermic until nuclei of approximately the mass of an iron nucleus are being produced. To produce even more massive nuclei there must be an input of energy (and an abundance of neutrons), and such reactions are endothermic. These sorts of processes occur in stars and supernova explosions. Having produced these more massive nuclei, they can decay via a number of processes, including fission, with the release of energy. These reactions are exothermic. Some of the less massive nuclei which are produced are stable, which means that energy is required to split them up further into even less massive nuclei, and these are endothermic reactions. The process of splitting the massive nuclei (less in mass than iron) into less massive nuclei again requires a net input of energy, i.e. such reactions are endothermic. The production of nuclei whose masses move towards the mass of iron nuclei, and the production of nuclei more massive than that of iron, does occur naturally, but the production of less massive nuclei from those nuclei which have a mass less than that of iron nuclei is very unlikely to happen. The fission/fusion cycle that you have described is not really feasible.
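The "exothermic up to iron" point can be illustrated numerically with the semi-empirical mass formula. This is only a rough sketch: the coefficients below are commonly quoted approximate Weizsäcker values, the pairing term is omitted, and the formula is inaccurate for very light nuclei.

```python
def binding_energy_per_nucleon(Z, A):
    """Semi-empirical mass formula (pairing term omitted), result in MeV.

    Coefficients are approximate textbook values: volume, surface,
    Coulomb, and asymmetry terms.
    """
    aV, aS, aC, aA = 15.8, 18.3, 0.714, 23.2
    B = (aV * A
         - aS * A ** (2 / 3)
         - aC * Z * (Z - 1) / A ** (1 / 3)
         - aA * (A - 2 * Z) ** 2 / A)
    return B / A

# Fe-56 sits near the peak of the binding-energy curve, so fusing past it
# (or fissioning nuclei lighter than it) costs energy instead of releasing it.
fe = binding_energy_per_nucleon(26, 56)    # ~8.7 MeV per nucleon
u = binding_energy_per_nucleon(92, 238)    # ~7.6 MeV per nucleon
```

Since uranium's binding energy per nucleon is lower than iron's, splitting it toward iron releases energy, but splitting iron-region nuclei further would require energy: exactly the asymmetry that breaks the proposed cycle.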
{ "domain": "physics.stackexchange", "id": 56960, "tags": "nuclear-physics, fusion" }
High pressures under ocean surface
Question: I read a book about deep-water exploration in the Mariana Trench. They of course talked a lot about pressure, and so this question came to my mind: If you're, say, 2km down in the Mariana Trench there is of course a lot, lot of pressure. Would this pressure push you down to the ocean floor with high speed? Or would it just press you down to the height of a pancake, and then you would slowly sink to the ocean floor? Slowly or fast? Answer: What the pressure would do is compress your body equally on all sides. The parts of your body with air in them would be easily compressed, and so they would be crushed down into solid bone and watery tissue. You would sink slowly with your lungs smashed completely flat.
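For a sense of scale, hydrostatic pressure grows linearly with depth as $P = P_{\rm atm} + \rho g h$. A rough estimate, assuming constant seawater density:

```python
RHO_SEAWATER = 1025.0   # kg/m^3, approximate average seawater density
G = 9.81                # m/s^2
P_ATM = 101_325         # Pa, one standard atmosphere

def pressure_at_depth(depth_m):
    """Absolute pressure (Pa) at a given depth, constant-density approximation."""
    return P_ATM + RHO_SEAWATER * G * depth_m

p_2km = pressure_at_depth(2000)     # ~2.0e7 Pa
atmospheres = p_2km / P_ATM         # ~200 atm
```

So at 2 km you are already under roughly 200 atmospheres, which is why any air-filled cavity collapses, while the water-like tissue (being nearly incompressible) is squeezed equally from all sides rather than pushed downward.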
{ "domain": "physics.stackexchange", "id": 52774, "tags": "pressure, geophysics, oceanography" }
Amount of light blocked by a body at the Earth-Sun L1 point
Question: I came across this question on the World Building SE. It has an accepted answer but it seems to be under some debate (see comments). So I thought I'd put it to some physicists: What formula would describe the amount of light blocked by a body of a given radius placed at the L1 Lagrangian point? Answer: The L1 Lagrange point is at a distance of approximately: $$D_{\oplus\text{-}\rm L1}=D_{\odot\text{-}\oplus}\left(\frac{{\rm M}_\oplus}{3{\rm M}_\odot}\right)^{1/3}$$ from Earth. An object of linear size $L$ subtends an angular size, in radians, of $\Theta=L/D$ (assuming that the size is much smaller than the distance, which in this case it is). The angular size of the Sun is about $\Theta_\odot = 0.5^\circ=0.01\,{\rm rad}$. The fraction of light blocked $f_L$ is given by the square of the ratio of the angular sizes (assuming here a round object): $$f_{L} = \left(\frac{L}{\Theta_\odot D_{\rm \oplus\text{-}L1}}\right)^2$$ This is valid if $\Theta < \Theta_\odot$; if you make the object any bigger, then the light is completely blocked and $f_L=1$, obviously. I've neglected some geometric details here, most importantly that the light source has finite size/is not distant enough for the wavefront to be completely flat. A more realistic treatment would involve thinking along these lines.
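Plugging numbers into the formulas above (a sketch using approximate constants, and the answer's rounded $\Theta_\odot \approx 0.01\,{\rm rad}$):

```python
AU = 1.496e11        # m, mean Earth-Sun distance
M_EARTH = 5.972e24   # kg
M_SUN = 1.989e30     # kg
THETA_SUN = 0.01     # rad, angular size of the Sun (rounded, as in the answer)

# Distance from Earth to L1: D_AU * (M_earth / (3 M_sun))^(1/3)
d_l1 = AU * (M_EARTH / (3 * M_SUN)) ** (1 / 3)   # ~1.5e9 m, i.e. ~1.5 million km

def blocked_fraction(size_m):
    """Fraction of sunlight blocked by an object of linear size size_m at L1."""
    return min((size_m / (THETA_SUN * d_l1)) ** 2, 1.0)

f = blocked_fraction(1_000_000)   # a 1000 km object blocks ~0.4% of the light
```

The ~1.5 million km result matches the usual quoted distance to Sun-Earth L1, and the small blocked fraction shows why even a very large occulter there dims the Sun only slightly.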
{ "domain": "physics.stackexchange", "id": 46096, "tags": "visible-light, astronomy, geometric-optics" }
Tool or script to parse annotated VCF files
Question: I've annotated VCF files using snpEff and looking for a tool or script to parse the VCF file and clean up the file to make it interpretable for a biologist. Answer: (edit)you can filter the VCF annotations with snpsift, I've also written a VcfFilterSequenceOntology http://lindenb.github.io/jvarkit/VcfFilterSequenceOntology.html I've written vcf2table: http://lindenb.github.io/jvarkit/VcfToTable.html It decodes VEP and SNPeff annotations: >>chr1/10001/T (n 1) Variant +--------+--------------------+ | Key | Value | +--------+--------------------+ | CHROM | chr1 | (....) VEP +--------------------------+------+----------------+------------+-----------------+--------+------------------+-----------------------------------------------+-------------+---------+-----------------+----------------------+ | PolyPhen | EXON | SIFT | ALLELE_NUM | Gene | SYMBOL | Protein_position | Consequence | Amino_acids | Codons | Feature | BIOTYPE | +--------------------------+------+----------------+------------+-----------------+--------+------------------+-----------------------------------------------+-------------+---------+-----------------+----------------------+ | probably_damaging(0.956) | 8/9 | deleterious(0) | 1 | ENSG00000102967 | DHODH | 346/395 | missense_variant | R/W | Cgg/Tgg | ENST00000219240 | protein_coding | | | 3/4 | | 1 | ENSG00000102967 | DHODH | | non_coding_exon_variant&nc_transcript_variant | | | ENST00000571392 | retained_intron | | | | | 1 | ENSG00000102967 | DHODH | | downstream_gene_variant | | | ENST00000572003 | retained_intron | | | | | 1 | ENSG00000102967 | DHODH | | downstream_gene_variant | | | ENST00000573843 | retained_intron | | | | | 1 | ENSG00000102967 | DHODH | | downstream_gene_variant | | | ENST00000573922 | processed_transcript | | | | | 1 | ENSG00000102967 | DHODH | -/193 | intron_variant | | | ENST00000574309 | protein_coding | | probably_damaging(0.946) | 8/9 | deleterious(0) | 1 | ENSG00000102967 | DHODH | 344/393 | missense_variant 
| R/W | Cgg/Tgg | ENST00000572887 | protein_coding | +--------------------------+------+----------------+------------+-----------------+--------+------------------+-----------------------------------------------+-------------+---------+-----------------+----------------------+ Genotypes +---------+------+-------+----+----+-----+---------+ | Sample | Type | AD | DP | GQ | GT | PL | +---------+------+-------+----+----+-----+---------+ | M10475 | HET | 10,2 | 15 | 10 | 0/1 | 25,0,10 | | M10478 | HET | 10,4 | 16 | 5 | 0/1 | 40,0,5 | | M10500 | HET | 10,10 | 21 | 7 | 0/1 | 111,0,7 | | M128215 | HET | 15,5 | 24 | 0 | 0/1 | 49,0,0 | +---------+------+-------+----+----+-----+---------+
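If you end up rolling your own, the snpEff `ANN` INFO field is pipe-delimited, with one comma-separated entry per transcript, so a short script can flatten it into rows. This is only a sketch: the field names below follow the leading fields of the published ANN annotation standard, but you should check the header of your own snpEff output, and the sample `info` string is made up for illustration.

```python
# Leading fields of the snpEff/VCF "ANN" annotation standard (check your
# VCF header for the exact order your snpEff version emits).
ANN_FIELDS = ["Allele", "Annotation", "Impact", "Gene_Name", "Gene_ID",
              "Feature_Type", "Feature_ID", "Biotype", "Rank",
              "HGVS.c", "HGVS.p"]

def parse_ann(info):
    """Yield one dict per transcript annotation found in a VCF INFO string."""
    for field in info.split(";"):
        if field.startswith("ANN="):
            for entry in field[4:].split(","):
                parts = entry.split("|")
                # zip truncates to the shorter sequence, so missing trailing
                # sub-fields are simply dropped
                yield dict(zip(ANN_FIELDS, parts))

# Hypothetical INFO string in snpEff's ANN format:
info = ("DP=21;ANN=T|missense_variant|MODERATE|DHODH|ENSG00000102967|"
        "transcript|ENST00000219240|protein_coding|8/9|c.1036C>T|p.Arg346Trp")
rows = list(parse_ann(info))
```

From here it is one step to write the dicts out with `csv.DictWriter` into a spreadsheet a biologist can sort and filter.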
{ "domain": "bioinformatics.stackexchange", "id": 198, "tags": "variant-calling, snp" }
A self contained parser generator implementation
Question: This is a recreational project, I was trying to make a parser generator with a grammar inspired from: https://docs.python.org/3/reference/grammar.html Unfortunately, understanding that specific grammar's syntax (meta-grammar?) ended up being way harder than I expected so, I ended up creating my own. I call it KiloGrammar (sorry for the bad pun). It ended up being very different from what I was planning but seems to do the job. It actually describes a stack-machine and probably is Turing complete, though I did not have the time to attempt implementing something like rule 110 to verify it. here's a snippet of a grammar to parse simple math expressions: # this grammar parses simple math expressions like: a + 10 * (8 + 5) token var "[A-Za-z]+" token int "-?[0-9]+" token float "-?[0-9+]+\.[0-9]+" token whitespace "[ \s]+" keyword "(" keyword ")" keyword "+" keyword "-" keyword "*" keyword "/" shorthand "NUMBER" "int|float|var" shorthand "EXPRESSION" "MATH_NODES|NUMBER" shorthand "MATH_NODES" "ADD|SUB|MUL|DIV" shorthand "operation" "+|-|*|/" rule ignore_whitespace (whitespace) pop(1) rule math_priority (ADD|SUB, *|/, EXPRESSION) pop(3); push([0][1], [0][0], [0][2], [1], [2]) rule math (EXPRESSION, operation, EXPRESSION) pop(3) push(node( pick_name([1], operation, MATH_NODES), [1], [0], [2])) rule parenthesis ("(", EXPRESSION, ")") pop(3); push([1]) here's the full implementation of the script: kilogrammar.py """ This script is a parser for the kilogrammar language It compiles it into a parser by just reusing its own code To use it just type the command: python kilogrammar.py my_parser.kg -compile > output_parser.py to watch the parser working in interactive mode you can do: python kilogrammar.py my_parser.kg -interactive -color This file is quite long, to find meaningful sections, just search for names in this table of contents: colors used for printing: class Color functions used to make the interactive visualization: def pretty_lines def pretty_print main 
Node class used by the parser: class Node main Token class used by the parser: class Token class that implement tokenizing code: class Tokenizer class that implements parsing code as a stack machine: class Parser code used to simplify the implementation of new parser rules using decorators: def match class MatchRuleWrapper default functions avaliable to the kilogrammar language: KG_BUILTINS = " main tokenizer loop: while self.char_ptr < len(self.text): main parser loop: for name, rule in rules Kilogrammar language parser class: class KiloParser(Parser): Kilogrammar tokenizer class: class KiloTokenizer(Tokenizer): """ # this tag is used to mark the start of the section that # is going to be copied as a new file # TAG1: reutilize code start import inspect import os import re class Color: """ colors defined as escape sequences for fancy output. easy to use but dont work on all terminals https://en.wikipedia.org/wiki/ANSI_escape_code """ @classmethod def enable(cls): cls.red = "\u001b[31m" cls.yellow = "\u001b[38;5;221m" cls.pink = "\u001b[38;5;213m" cls.cyan = "\u001b[38;5;38m" cls.green = "\u001b[38;5;112m" cls.reset = "\u001b[0m" @classmethod def disable(cls): cls.red = "" cls.yellow = "" cls.pink = "" cls.cyan = "" cls.green = "" cls.reset = "" Color.disable() class Node: """ Main class to construct an abstract syntax tree, It is expected to encapsulate instances of Node() or Token() """ def __init__(self, node_type, contents): self.type = node_type assert(type(contents) == list) self.contents = contents def pretty_lines(self, out_lines=None, indent_level=0): """ return pretyly formated lines containing the contents of its nodes and subnodes in a printable and human-readable form """ if out_lines is None: out_lines = [] out_lines.append("".join((" " * indent_level, repr(self), ":"))) for sub_thing in self.contents: if isinstance(sub_thing, Node): sub_thing.pretty_lines(out_lines, indent_level + 1) else: out_lines.append("".join( (" " * (indent_level + 1), 
repr(sub_thing)))) return out_lines def __hash__(self): # used for easy comparation with sets return hash(self.type) def __eq__(self, other): if isinstance(other, Node): return self.type == other.type else: return self.type == other def __repr__(self): return f"{Color.green}{self.type}{Color.reset}" def panic(msg, line, col, text): """ raise an SyntaxError and display the position of the error in the text """ text_line = text.split("\n")[line] arrow = " " * col + "^" raise SyntaxError("\n".join(("", text_line, arrow, msg, f" line: {line + 1}, collumn: {col + 1}"))) class Token: """ Main token class, its supposed to encapsulate snippets of the text being parsed """ def __init__(self, token_type, contents, line=None, col=None): self.type = token_type self.contents = contents self.line = line self.col = col def __hash__(self): return hash(self.type) def __eq__(self, other): if isinstance(other, Node): return self.type == other.type else: return self.type == other def __repr__(self): if self.contents in {None, ""}: return f"{Color.cyan}{self.type} {Color.reset}" return f"{Color.cyan}{self.type} {Color.pink}{repr(self.contents)}{Color.reset}" class Tokenizer: """ Main Tokenizer class, it parses the text and makes a list of tokens matched based on the rules defined for it it starts trying to match rules at the start of the file and when the first rule matches, it saves the match as a token and move forward by the length of the match to the next part of the text unless the rule defines a callback function, in this case the callback has to move to the next token using the feed() method If no rule matches any part of the text, it calls a panic() """ rules = ["text", r"(?:\n|.)*"] def __init__(self, text, skip_error=False): self.text = text self.tokens = [] self.errors = [] self.skip_error = skip_error self.char_ptr = 0 self.line_num = 0 self.col_num = 0 self.preprocess() self.tokenize() def preprocess(self): pass @staticmethod def default_callback(self, match, name): 
self.push_token(name, match[0]) self.feed(len(match[0])) return len(match[0]) > 0 def push_token(self, type, value=None): self.tokens.append(Token(type, value, self.line_num, self.col_num)) def pop_token(self, index=-1): return self.tokens.pop(-1) def feed(self, n): for _ in range(n): if self.text[self.char_ptr] == "\n": self.line_num += 1 self.col_num = -1 self.char_ptr += 1 self.col_num += 1 def tokenize(self): import re import inspect rules = [] self.preprocess() for rule in self.rules: if len(rule) == 2: (name, regex), callback = rule, self.default_callback elif len(rule) == 3: name, regex, callback = rule else: raise TypeError(f"Rule not valid: {rule}") try: regex = re.compile(regex) except Exception as e: print(str(e)) raise TypeError(f"{type(self)}\n {name}: {repr(regex)}\n" f"regex compilation failed") rules.append((name, regex, callback)) while self.char_ptr < len(self.text): for name, regex, callback in rules: match = regex.match(self.text, self.char_ptr) if match: done = callback(self, match, name) if done: break else: err = (f"Unexpected character: {repr(self.text[self.char_ptr])}", self.line_num, self.col_num) if self.skip_error: self.errors.append(err) self.feed(1) else: panic(*err, self.text) class MatchRuleWrapper: """ Encapsulates a parser rule definition and tests it against the parser stack, if a match is found, it calls its callback function that does its thing on the parser stack. 
""" def __init__(self, func, rules, priority=0): self.func = func self.rules = rules self.priority = priority def __call__(self, parser, *args): n = len(parser.stack) if len(self.rules) > n or len(self.rules) > n: return i = 0 for rule in reversed(self.rules): item = parser.stack[-1 - i] i += 1 if not (rule is None or item.type in rule): break else: matches = parser.stack[-len(self.rules):] self.func(parser, *matches) return matches def match(*args, priority=0): """ returns decorator that helps defining parser rules and callbacks as if they were simple instance methods. In reality those methods are turned into MatchRuleWrapper callbacks """ import inspect for arg in args: if not isinstance(arg, (set, str)) and arg is not None: raise TypeError(f"match_fun() invalid argument: {arg}") match_rules = [] for arg in args: if arg == None or type(arg) == set: match_rules.append(arg) elif isinstance(arg, str): match_rules.append({s for s in arg.split("|") if s}) else: raise TypeError(f"wrong type of argumment: {type(arg)}, {arg}") arg_count = len([type(arg) for arg in args]) + 1 def decorator(func): paramaters = inspect.signature(func).parameters if len(paramaters) is not arg_count: if not inspect.Parameter.VAR_POSITIONAL in {p.kind for p in paramaters.values()}: raise TypeError( f"function {func} does not contain {arg_count} argumments") return MatchRuleWrapper(func, match_rules, priority) return decorator class Parser: """ A stack machine that simply run its rules on its stack, every time no rule maches the contents of the stack a new token is pushed from the token list """ def __init__(self, tokenizer, preview=999999, token_preview=5): self.tokens = tokenizer.tokens self.text = tokenizer.text self.token_ptr = 0 self.stack = [] self.preview = preview self.token_preview=token_preview self.parse() def push(self, node_type, contents=None): if contents is None: contents = [] if type(node_type) in {Node, Token}: self.stack.append(node_type) else: 
self.stack.append(Node(node_type, contents)) def pop(self, repeat=1, index=-1): for _ in range(repeat): self.stack.pop(index) def parse(self): rules = [(name, rule) for name, rule in inspect.getmembers(self) if isinstance(rule, MatchRuleWrapper)] rules = sorted(rules, key=lambda r: -r[1].priority) while True: for name, rule in rules: matched = rule(self) if matched: break else: if not self.token_ptr < len(self.tokens): break self.stack.append(self.tokens[self.token_ptr]) self.token_ptr += 1 if self.preview > 0: self.pretty_print(self.preview, self.token_preview) print("stack:", self.stack, "\n") if matched: print("matched rule:", name, matched, "\n") else: print("no rule matched\n") inp = input(" Hit enter to continue, type 'e' to exit: ") if inp == "e": self.preview = 0 os.system("cls" if os.name == "nt" else "clear") def pretty_print(self, maximun_tree, maximun_tokens): lines = [] for thing in self.stack: if isinstance(thing, Node): lines.extend(thing.pretty_lines()) else: lines.append(repr(thing)) display_lines = lines[max(-maximun_tree, -len(lines)):] if len(display_lines) < len(lines): print("...") print("\n".join(display_lines)) print("\nNext tokens:") for i, token in enumerate(self.tokens[self.token_ptr:]): print(token) if i == maximun_tokens: break class KiloTokenizer(Tokenizer): last_indent = None indent_deltas = [] def handle_indent(self, match, name): n = len(match[1]) if self.last_indent is None: self.last_indent = n delta = n - self.last_indent self.last_indent = n if delta > 0: self.push_token("indent_increase") self.indent_deltas.append(delta) elif delta < 0: while delta < 0 and self.indent_deltas: self.push_token("indent_decrease") delta += self.indent_deltas.pop(-1) if delta > 0: self.push_token("inconsistent_indent") rules = [["indent", r"\n([ \t]*)(?:[^ \t\n])", handle_indent], # TAG1: reutilize code end ["newline", r"\n"], ["string", r""""(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*'"""], ["whitespace", r"[ ;\t]+"], ["comment", r"#.*\n"], ["integer", 
r"(-?[0-9]+)\b"], ["rule", r"rule"], ["case", r"case"], ["keyword", r"keyword"], ["token", r"token"], ["word", r"\b[A-Za-z_]+[A-Za-z0-9_]*\b"], ["name", r"[^0-9\[\]\(\);\| \t\'\"\n,#>]+"], ["pipe", r"\|"], ["(", r"\("], [")", r"\)"], ["[", r"\["], ["]", r"\]"], [",", r","]] def preprocess(self): shorthand_re = re.compile( r"""shorthand\s*("(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*')\s*("(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*')""" ) new_lines = [] shorthands = [] for line in self.text.split("\n"): for replace, to in shorthands: replaced_line = line.replace(replace, to) if replaced_line != line: line = replaced_line match = shorthand_re.match(line) if match: shorthands.append((match[1][1:-1], match[2][1:-1])) new_lines.append("".join(("# processed ", line))) else: new_lines.append(line) self.text = "\n".join(new_lines) class KiloParser(Parser): """ Implementation of the Kilogrammar parser """ # ================================================= # === tokens and keywords # ================================================= @match("whitespace|newline|comment") def ignore(self, ig): self.pop(1) @match("token", "name|word", priority=1) def token_start(self, token, name): self.pop(2) self.push("TOKEN_DEF_START", [name]) @match("TOKEN_DEF_START", "string", priority=1) def token_def_stage_1(self, tdef, string): self.pop(1) tdef.contents.append(string) tdef.type = "TOKEN_DEFINITION" @match("keyword", "string", priority=1) def keyword_def(self, keyword, string): self.pop(2) self.push("KEYWORD", [string]) # ================================================= # === rule definitions # ================================================= @match("rule", "word") def rule_name(self, rule, name): self.pop(2) self.push("RULE_DEF_NAME", [name]) @match("RULE_DEF_NAME", "(") def rule_def_start(self, name, par): self.pop(1) name.contents.append(Node("MATCH_LIST", [])) name.type = "RULE_DEF_MATCH_LIST" @match("RULE_DEF_MATCH_LIST", "NAME_GRP", ",|)") def rule_def_extend(self, rule, names, sep): self.pop(2) 
rule.contents[1].contents.append(names) self.push(sep) @match("RULE_DEF_MATCH_LIST", ",") def rule_strip_comma(self, rule, comma): self.pop(1) @match("RULE_DEF_MATCH_LIST", ")") def rule_natch_list_finish(self, rule, par): rule.type = "RULE_DEF" self.pop(1) @match("RULE_DEF", "BLOCK") def rule_finish(self, rule, block): self.pop(1) rule.contents.append(block) rule.type = "RULE_DEFINITION" # ================================================= # === name groups # ================================================= @match("NAME_GRP", "pipe", "name|word|string", priority=2) def mane_grp_extend(self, grp, pipe, name): self.pop(2) grp.contents.append(name) @match("name|word|string", ",|pipe|)", priority=-1) def name_grp(self, name, sep): self.pop(2) self.push("NAME_GRP", [name]) self.push(sep) # ================================================= # === indent blocks # ================================================= @match("FUNC_CALL", "indent_decrease") def block_end(self, call, indent): self.pop(2) self.push("BLOCK_END", [call]) @match("FUNC_CALL", "BLOCK_END", priority=0) def block_end_expand(self, call, block): self.pop(2) block.contents.append(call) self.push(block) @match("indent_increase", "BLOCK_END", priority=1) def block_finish(self, indent, block): self.pop(2) self.push("BLOCK", list(reversed(block.contents))) # ================================================= # === function calls # ================================================= @match("word", "(") def func_call_start(self, name, p): self.pop(2) self.push("FUNC_CALL_START", [name, Node("ARGS", [])]) @match("FUNC_CALL_START", "indent_increase|indent_decrease|inconsistent_indent") def ignore_indent(self, call, indent): self.pop(1) @match("FUNC_CALL_START", ",") def ignore_comma(self, func, separator): self.pop(1) @match("FUNC_CALL_START", "integer|NAME_GRP|FUNC_CALL", ",|)") def add_func_arg(self, func, arg, separator): self.pop(2) func.contents[1].contents.append(arg) self.push(separator) 
@match("FUNC_CALL_START", ")") def func_call_finish(self, func, par): self.pop(1) func.type = "FUNC_CALL" # ================================================= # === Node Indexing # ================================================= @match("INDEXES", ",|)") def indexes_to_func(self, indexes, sep): self.pop(2) self.push("FUNC_CALL", [Token("name", "get_node"), Node("ARGS", indexes.contents)]) self.push(sep) @match("[", "integer", "]") def make_index(self, sq, n, sq1): self.pop(3) self.push("INDEXES", [n]) @match("INDEXES", "INDEXES") def sub_index(self, i, j): self.pop(1) i.contents.append(j.contents[0]) KG_BUILTINS = """ # ============================================ # Kilogrammar language builtins # ============================================ def push(parser, matches, *args): for arg in args: parser.stack.append(arg) def pop(parser, matches, *args): if len(args) == 0: parser.stack.pop(-1) else: for _ in range(args[0]): parser.stack.pop(-1) def node(parser, matches, name_grp, *args): return Node(name_grp[0], list(args)) def pick_name(parser, matches, name_selector, name_grp_from, name_grp_to): if isinstance(name_selector, (Node, Token)): name_selector = name_selector.type elif isinstance(name_selector, tuple): name_selector = name_selector[0] return (name_grp_to[name_grp_from.index(name_selector)],) def get_node(parser, matches, *args): node = matches[args[0]] for index in args[1:]: node = node.contents[index] return node """ KG_BUILTINS_FUNC_LIST = [ "push", "pop", "node", "pick_name", "get_node" ] MAIN = r""" if __name__ == '__main__': import sys if "-color" in sys.argv: Color.enable() text = None if "-type" in sys.argv: text = input("\n\n input >>>") elif len(sys.argv) > 2 and os.path.isfile(sys.argv[1]): with open(sys.argv[1], "r") as f: text = f.read() if text is not None: if '-interactive' in sys.argv: preview_length = 999999 else: preview_length = 0 tokens = TokenizerClass(text) parser = ParserClass(tokens, preview=preview_length) parser.pretty_print(999999, 
999999) else: print("this script seems to not have syntax errors.") """ def validate(parser): for node in parser.stack: if node.type not in\ {"indent_decrease", "indent_increase", "RULE_DEF", "TOKEN_DEFINITION", "KEYWORD", "RULE_DEFINITION"}: while isinstance(node, Node): #find a leaf token node = node.contents[0] panic(f"untexpected token: {node}", node.line, node.col, parser.text) def parser_compile(parser): rule_defs = [] token_defs = [] keyword_defs = [] def extract_high_level_parts(contents): for node in contents: if isinstance(node, Node): extract_high_level_parts(node.contents) if node == "RULE_DEFINITION": rule_defs.append(node) elif node == "TOKEN_DEFINITION": token_defs.append(node) elif node == "KEYWORD": keyword_defs.append(node) extract_high_level_parts(parser.stack) final_lines = [] # recicling a usefull piece of code that cant be expressed directly using # this language, with open(__file__, "r") as myself: myself.seek(0) lines = myself.readlines() start = 0 end = 0 for i, line in enumerate(lines): if line.startswith("# TAG1: reutilize code start"): start = i elif line.startswith("# TAG1: reutilize code end"): end = i code = "".join(lines[1 + start:end])[0:-1] code = code.replace("KiloTokenizer", "TokenizerClass") final_lines.append(code) for token_def in token_defs: name = token_def.contents[0].contents regex = token_def.contents[1].contents[1:-1] final_lines.append(f' ["{name}", "{regex}"],') for keyword_def in keyword_defs: name = keyword_def.contents[0].contents[1:-1] regex = re.escape(name) final_lines.append(f' ["{name}", "{regex}"],') final_lines.append(" ]") def make_match_list(match_list): args = [] for name_grp in match_list.contents: arg = [] for name in name_grp.contents: if name == "string": arg.append(f"'{name.contents[1:-1]}'") elif name in {"word", "name"}: arg.append(f"'{name.contents}'") args.append("".join(("{", ", ".join(arg), "}"))) return ", ".join(args) def make_func_call(node): contents = node.contents name_token = contents[0] 
name = name_token.contents if name not in KG_BUILTINS_FUNC_LIST: panic(f"function does not exist: {name}", line=name_token.line, col=name_token.col, text=parser.text) argumments = contents[1].contents args = [] for arg in argumments: if arg.type in "integer": args.append(arg.contents) elif arg.type == "FUNC_CALL": args.append(make_func_call(arg)) elif arg.type == "NAME_GRP": args.append(repr(tuple(node.contents for node in arg.contents))) return f"{name}(parser, matches, {', '.join(args)})" final_lines.extend([KG_BUILTINS]) final_lines.append("class ParserClass(Parser):") final_lines.append("") for i, rule in enumerate(rule_defs): rule = rule.contents func_name = rule[0].contents block = rule[2].contents match_args = make_match_list(rule[1]) final_lines.append(f" @match({match_args}, priority={-i})") final_lines.append(f" def rule{i}_{func_name}(parser, *matches):") for func_call in block: final_lines.append(f" {make_func_call(func_call)}") final_lines.append("") final_lines.append(MAIN) for line in final_lines: print(line) if __name__ == "__main__": import sys if len(sys.argv) > 1: with open(sys.argv[1], "r") as f: tok = KiloTokenizer(f.read() + "\n;") if "-color" in sys.argv: Color.enable() if "-compile" in sys.argv: parser = KiloParser(tok, preview=0) validate(parser) parser_compile(parser) else: if "-interactive" in sys.argv: preview = 999999 else: preview = 0 parser = KiloParser(tok, preview=preview) parser.pretty_print(999999, 999999) validate(parser) you can run it using: python kilogrammar.py some_input_grammar.txt -compile > output_parser.py To test your new parser just python output_parser.py some_input.txt -color it should print a syntax tree. 
or to watch the syntax tree being constructed: python output_parser.py some_input.txt -interactive -color it also works for the parser generator itself: python kilogrammar.py some_input_grammar.txt -interactive -color Even though it's a toy project and I had no idea of what I was making I would like to know your thoughts about the usability and quality of it, especially about the meta-grammar(?) used by it. Answer: Singletons Color has been written as a singleton. That's fine I guess, but it doesn't need the class machinery. All you're effectively doing is making an inner scope. (You're also missing defaults.) You could get away with a submodule called color whose __init__.py consists of RED: str = '' YELLOW: str = '' PINK: str = '' CYAN: str = '' GREEN: str = '' RESET: str = '' def enable(): global RED, YELLOW, PINK, CYAN, GREEN, RESET RED, YELLOW, PINK, CYAN, GREEN, RESET = ( f'\u001b[{code}m' for code in ( '31', '38;5;221', '38;5;213', '38;5;38', '38;5;112', '0', ) ) # Similar disable Type hints I have no way of knowing, in the entire definition of the Token class, what token_type is. If it's a string, declare it : str in the function signatures where it appears. Typo collumn -> column argumments -> arguments recicling -> recycling usefull -> useful Strange logic if len(self.rules) > n or len(self.rules) > n: looks like the second predicate is redundant. Loop like a native Without having tried it, i = 0 for rule in reversed(self.rules): item = parser.stack[-1 - i] i += 1 looks like it could be for rule, item in zip(reversed(self.rules), reversed(parser.stack)): Imports import inspect should appear at the top, not in function scope, unless you have a really good reason.
Generator This match_rules = [] for arg in args: if arg == None or type(arg) == set: match_rules.append(arg) elif isinstance(arg, str): match_rules.append({s for s in arg.split("|") if s}) else: raise TypeError(f"wrong type of argumment: {type(arg)}, {arg}") should be pulled out into a function that, rather than building up match_rules, yields your set on the inside of the loop. Length of a comprehension Isn't arg_count = len([type(arg) for arg in args]) + 1 just arg_count = len(args) + 1 ?
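The generator refactor suggested above might look like this (one possible sketch; the hypothetical name `normalized_rules` is mine):

```python
def normalized_rules(args):
    """Yield each rule argument in its canonical set form."""
    for arg in args:
        if arg is None or isinstance(arg, set):
            yield arg                               # already normalized
        elif isinstance(arg, str):
            yield {s for s in arg.split("|") if s}  # "a|b" -> {"a", "b"}
        else:
            raise TypeError(f"wrong type of argument: {type(arg)}, {arg}")

match_rules = list(normalized_rules(("name|word", None, {"FUNC_CALL"})))
# match_rules == [{"name", "word"}, None, {"FUNC_CALL"}]
```

Yielding inside the loop keeps the validation and the normalization in one place, and the caller decides whether it needs a list at all.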
{ "domain": "codereview.stackexchange", "id": 37893, "tags": "python, python-3.x, parsing, meta-programming" }
Would dimples help a ping pong ball travel faster?
Question: Dimples are used on golf balls to promote the formation of a turbulent boundary layer that stays with the ball longer as it travels through the air, therefore greatly decreasing the pressure drag experienced by the ball. The increased skin friction drag created by the dimples on the ball's surface is very small compared to the pressure drag reduction that it benefits from; therefore, the overall drag decreases as a result of the dimples. This is useful because a golf ball needs to travel far distances with a potentially large amount of air resistance. Conversely, an airplane does not need dimples on its wings or body because it is already aerodynamic in shape (instead of blunt-faced like the golf ball) and therefore the pressure drag is already reduced such that adding dimples would actually increase the skin friction more than it would decrease the pressure drag, and the overall drag on the airplane would likely increase as a result. My question is, based on this knowledge, would ping pong balls travel faster as a result of adding dimples? Would they benefit from dimples at all? They have a similar size and shape but much smaller mass than golf balls, which is where my uncertainty comes from. Answer: You are mostly correct that the dimples on a golf ball reduce its drag by promoting transition from laminar to turbulent flow, which reduces the size of the separated wake behind the ball. But that doesn't mean that a different ball would benefit in a similar way. The Reynolds number (Density * Speed * Diameter / Viscosity) has to be within a certain range for the benefit to occur. At smaller values, the dimples (or other roughness) wouldn't be enough of a disturbance to cause transition, and at higher values the flow would be turbulent without them. To see whether ping pong balls would benefit from dimples, we need to compare their speeds and diameters with those of golf balls. 
A quick Internet search found that golf balls travel at something like 80.5 m/s and have a diameter of about 42.6mm, whereas the numbers for ping pong balls are about 30 m/s and 40mm. Comparing the Reynolds numbers gives about 240,000 for a golf ball and 80,000 for a ping pong ball. The figure below (taken from Fluid Dynamic Drag by S. Hoerner) plots experimental drag measurements for spheres as functions of Reynolds number, and shows a dramatic change in shape at Reynolds numbers in the transitional region between 100,000 and 1,000,000. The next figure (from the same source) zooms in more closely to our region of interest, and shows that the ping pong ball's Reynolds number of about 80,000 is just below the beginning of the transitional region, thus indicating that it would not be likely to benefit from adding dimples.
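As a sanity check on the numbers above, the Reynolds-number comparison can be reproduced in a few lines of Python. The air density and viscosity below are assumed standard sea-level values (my assumption, not stated in the answer), so the results land near, but not exactly on, the quoted 240,000 and 80,000:

```python
# Sketch: compare Reynolds numbers for a golf ball and a ping pong ball
# using the speeds and diameters quoted above.
AIR_DENSITY = 1.225      # kg/m^3, assumed standard sea-level air
AIR_VISCOSITY = 1.81e-5  # Pa*s, assumed dynamic viscosity of air at ~15 C

def reynolds(speed_m_s, diameter_m):
    """Re = rho * V * D / mu for a sphere moving through air."""
    return AIR_DENSITY * speed_m_s * diameter_m / AIR_VISCOSITY

re_golf = reynolds(80.5, 0.0426)
re_ping_pong = reynolds(30.0, 0.040)
print(f"golf ball:      Re ~ {re_golf:,.0f}")       # roughly 2.3e5, transitional range
print(f"ping pong ball: Re ~ {re_ping_pong:,.0f}")  # roughly 8e4, below the transition
```

With these assumed air properties the golf ball sits inside the transitional region and the ping pong ball just below it, matching the answer's conclusion.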
{ "domain": "physics.stackexchange", "id": 73823, "tags": "fluid-dynamics, pressure, projectile, drag, aerodynamics" }
What is the difference between Hadoop and noSQL
Question: I heard about many tools / frameworks for helping people to process their data (big data environment). One is called Hadoop and the other is the noSQL concept. What is the difference in terms of processing? Are they complementary? Answer: Hadoop is not a database; hadoop is an entire ecosystem. Most people will refer to mapreduce jobs while talking about hadoop. A mapreduce job splits big datasets into little chunks of data and spreads them over a cluster of nodes to be processed. In the end the result from each node will be put together again as one dataset. Let's assume you load into hadoop a set of <String, Integer> with the population of some neighborhoods within a city, and you want to get the average population over the whole neighborhoods of each city (figure 1). figure 1 [new york, 40394] [new york, 134] [la, 44] [la, 647] ... Now hadoop will first map each value by using the keys (figure 2): figure 2 [new york, [40394,134]] [la, [44,647]] ... After the mapping it will reduce the values of each key to a new value (in this example the average over the value set of each key) (figure 3): figure 3 [new york, [20264]] [la, [346]] ... Now hadoop would be done with everything. You can now load the result into the HDFS (hadoop distributed file system) or into any DBMS or file. That's just one very basic and simple example of what hadoop can do. You can run much more complicated tasks in hadoop. As you already mentioned in your question, hadoop and noSQL are complementary. I know a few setups where, e.g., billions of datasets from sensors are stored in HBase and are then run through hadoop to finally be stored in a DBMS.
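The map/shuffle/reduce flow in figures 1 to 3 can be sketched in plain single-process Python. (The real thing runs distributed across many nodes; the rounding convention used for the averages is my assumption, chosen to match figure 3.)

```python
from collections import defaultdict

# Figure 1: raw (city, population) pairs
records = [("new york", 40394), ("new york", 134), ("la", 44), ("la", 647)]

# Map/shuffle step (figure 2): group values by key
grouped = defaultdict(list)
for city, population in records:
    grouped[city].append(population)

# Reduce step (figure 3): collapse each value list to its (rounded) average
averages = {city: round(sum(values) / len(values))
            for city, values in grouped.items()}
print(averages)  # {'new york': 20264, 'la': 346}
```

A real mapreduce job would run the grouping and reduction on separate machines and merge the partial results, but the data flow is the same.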
{ "domain": "datascience.stackexchange", "id": 1, "tags": "nosql, tools, processing, apache-hadoop" }
Help me identify this moth?
Question: I found this moth in Bengaluru/India. Its body length was nearly 2cm. Brown colored with three yellow rings. Answer: This looks to be a handmaiden moth (Syntomoides imaon) which has been observed in this part of India. Source: India Biodiversity Portal
{ "domain": "biology.stackexchange", "id": 8952, "tags": "species-identification, entomology, taxonomy, lepidoptera" }
Isn’t there supposed to be vacuum above the closed end of a Mercury manometer?
Question: I found this very old mercury closed-end manometer (I think so). Looking at the graduations, the left side shows very low vacuum values and the right side shows values from 110-0-80. The reading is in mBars. When I looked into some theory on manometers, the closed type has a column of vacuum at the closed end, which I guess is created by the weight of the mercury going down? Is this setup correct, or should the mercury be drained and filled again to get that gap at the section with the red arrow? Sorry, this is the first time I’m using a mercury type gauge. Most of the time I’ve used only analog needle gauges or digital ones. Answer: Yes, at the top of the left tube you need vacuum because you want to measure pressure by only watching the height of the mercury. You don't need vacuum to make this manometer work, but without it the instrument is very impractical: if you had air or some other gas there, its pressure would change with the change in height of the column of mercury (and the graduation would be non-linear). That's why we use vacuum (notice how in your photo the left column has no gap at the top and is completely filled with mercury). But how does it work? Basically, the pressure of the left column of mercury ($P=\delta gh$, where $\delta$ is the mercury density and $h$ the height of the column) must be equal to the air pressure "entering" from the right tube. In fact, the value 0 (i.e. vacuum) is positioned in the middle of the tube, where the pressure of the two columns of mercury (height = h/2) is balanced (the mercury going down in the left column "doesn't leave anything behind", so the vacuum remains).
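The relation $P=\delta gh$ is easy to check numerically. The mercury density and $g$ below are standard textbook values (my assumption, not from the answer); one atmosphere should support roughly a 760 mm mercury column:

```python
MERCURY_DENSITY = 13_534  # kg/m^3, assumed density of mercury near room temperature
G = 9.81                  # m/s^2, assumed standard gravity

def column_pressure_mbar(height_m):
    """P = delta * g * h, converted from Pa to mbar (1 mbar = 100 Pa)."""
    return MERCURY_DENSITY * G * height_m / 100.0

# A 760 mm column should come out near one atmosphere (~1013 mbar)
print(f"{column_pressure_mbar(0.760):.0f} mbar")
```

The small deviation from 1013 mbar comes from the assumed density and gravity values, which vary slightly with temperature and location.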
{ "domain": "physics.stackexchange", "id": 77019, "tags": "pressure, fluid-statics, vacuum" }
How can you stabilize enzymes in pellets made from microcrystalline cellulose?
Question: I want to make pellets consisting of mainly the following: microcrystalline cellulose saccharose rice starch ascorbic acid glycerin and an enzyme as active component. Is it furthermore necessary to add protease inhibitors or other stabilizers to retain the activity of the enzymes for a period of at least one year, OR is the dry environment conservation enough? The enzyme is called histaminase and is extracted from kidney cortex through a series of centrifugation, filtration and dialysis. To be more precise, it is a liquid protein extract containing small amounts of this enzyme. No buffer etc. is added to the protein extract so far. The whole purification process is done at 4°C. I don't know how long the enzyme is stable without further preparation. Probably not that long because some proteases should remain in the natural homogenate from porcine kidney cortex. The other components of the pellets are of pharmaceutical quality. The aim of this experiment is to feed the pellets to dogs with digestive complaints. For this purpose the pellets will get an enteric coating to survive the acidic environment of the stomach. Answer: As you can probably guess from my comment, there are a lot of factors to take into account when thinking about protein stability. Here's what I would recommend you do. First, make sure your extract is in a neutral buffer, like PBS, HEPES, or something similar. This is a fairly standard procedure when working with proteins. Next, since this is a kidney extract, there are likely to be gobs of proteases around, so I would strongly recommend adding broad-spectrum protease inhibitors, and keeping the extract as cold as possible to inhibit any enzyme activity. I would also add BSA at 10 mg/ml, both to stabilize the proteins and keep them from precipitating while in solution, and to lessen the chances that any proteases present will attack your enzyme of interest. Once you have your solution stabilized, I would aliquot it and lyophilize it.
Proteases are most active in solution, so once dried you'll reduce their activity even more. You can now add the inert ingredients (microcrystalline cellulose, etc.), mix thoroughly, press your tablets, and add the enteric coating. Finally, once everything is prepared, I would store the final product at as cold a temperature as is convenient. Again, this will inhibit any enzyme activity and prevent degradation. I don't know if you have a functional assay to determine activity of the enzyme of interest, but if you don't I'd highly recommend developing one if at all possible. This way, you can test the activity of the fresh preparation of extract and compare it to various stages of processing into tablets, especially the finished goods, to ensure that you haven't lost much activity. I would also test random tablets throughout the year to make sure they haven't lost activity in storage, otherwise you won't be able to compare your experimental results.
{ "domain": "biology.stackexchange", "id": 5957, "tags": "pharmacology, enzymes" }
Scalar multiple of inertial frame
Question: Consider an inertial frame of reference $S$. Now take a second frame $S'$, defined as follows: If a point $P$ has coordinates $$(t,x_1,x_2,x_3)$$ in $S$, then it has coordinates $$(t,2x_1,2x_2,2x_3)$$ in $S'$. Now my question is: is $S'$ an inertial frame of reference? I get contradictory results by using the definitions from Wikipedia. On the one hand it says: (...)it is a frame of reference in which Newton's first law of motion holds. Certainly this law holds for $S'$, so it is an inertial frame. But then Wikipedia says: Measurements of objects in motion (but not subject to forces) in one inertial frame can be converted to measurements in another by a simple transformation - the Galilean transformation in Newtonian physics or by using the Lorentz transformation in special relativity(...) There is no Galilean transformation or Lorentz transformation that takes $S$ to $S'$. So $S'$ is not an inertial frame. Answer: You are mixing frames and coordinate vectors, and that is leading to some confusion. In a frame, vector quantities like position and velocity have some vector value. Changing frames may change those vectors (such as how acceleration is changed by transforming into a rotating frame). When you speak of the coordinates changing from $(t, x_1, x_2, x_3)$ to $(t, 2x_1, 2x_2, 2x_3)$, you are using coordinate vectors. You construct a coordinate vector by starting with a frame and then defining a basis -- 3 vectors with which you are "measuring" all vectors. Any vector is thus converted to a coordinate vector by taking the dot products with all 3 vectors. The result is a triple (a quad, if you count time) of real numbers. At the deepest level, your two "frames" are actually the same frame, with different trios of basis vectors. This means you may measure things differently, but fundamentally they are the same vector. This is no more poignant than the fact that I can measure something as 1 inch long or 2.54 cm long.
Fundamentally, the vector from one end of the object to the other didn't change. All I did was change my basis vectors, scaling by "units."
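The inch-versus-centimetre point can be illustrated with a toy calculation (the numbers are made up): when the basis vectors are halved in length, the coordinate triple doubles, but the physical vector reconstructed from basis times components is unchanged.

```python
# Sketch: one position vector expressed in two bases. Basis B uses
# unit-length basis vectors; basis B' uses vectors half as long, so the
# same physical vector gets coordinates twice as large (inches vs cm).

position = (3.0, 4.0, 5.0)   # components in basis B
basis_b = 1.0                # length of each basis vector in B
basis_b_prime = 0.5          # length of each basis vector in B'

# Re-express the same vector in the shorter basis: components double
coords_b_prime = tuple(x * basis_b / basis_b_prime for x in position)
print(coords_b_prime)  # (6.0, 8.0, 10.0)

# Reconstructing the physical vector from either description agrees
recon_b = tuple(x * basis_b for x in position)
recon_b_prime = tuple(x * basis_b_prime for x in coords_b_prime)
print(recon_b == recon_b_prime)  # True
```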
{ "domain": "physics.stackexchange", "id": 100240, "tags": "special-relativity, reference-frames, coordinate-systems, inertial-frames" }
Using the optical theorem to calculate the imaginary part of a loop diagram
Question: I'm trying to calculate the imaginary part of this diagram in $\phi^4$ theory, using the optical theorem, and I'm having trouble. All of the examples I can find use the theorem to relate the imaginary part of the total 2-particle to 2-particle forward scattering amplitude to the total cross-section; that's not what I'm trying to do. I see a lot of equations that look like this: $$ 2 \operatorname{Im} A = \int d\Pi | B |^2 $$ wherein $A$ is the diagram above and $B$ is but nobody really discusses such equations or works any examples. Does that mean that I just take the modulus squared of the tree-level diagram: $|i \lambda|^2 = \lambda^2$? That seems too simple, which brings me to the integration over $d\Pi$. What the heck is $d\Pi$? Peskin and Schroeder don't say, but they make vague mention of the phase space of the intermediate particles, so is $d\Pi$ just the differential phase space of the "two" $\phi$s in the loop? If so, how do I go about setting up and evaluating that integral? If not, what is $d\Pi$, and how do I evaluate the integral over it? Answer: d$\Pi$ is, indeed, the differential phase-space. Peskin and Schroeder have an equation of exactly the form above in figure 7.6 on page 235, and although they don't say what $d\Pi$ is there, they do define a similar, but more specific, quantity $d\Pi_n$, the differential phase-space for $n$ particles, in equation 4.80 on page 106: $$ d\Pi_n = \left ( \prod _f \frac{d^3 p_f}{(2\pi)^3} \frac{1}{2E_f} \right ) (2\pi)^4 \delta^{(4)}\left (P-\sum_f p_f \right) $$ wherein the $\Pi$ on the left is a variable indicating the $n$-body phase space, the $\prod$ on the right is a product symbol, $P$ is the net external 4-momentum, and the subscript $f$ indicates the final state 4-momenta of the $n$ particles. 
Noting that $$\int \frac{d^4 p}{(2\pi)^4} 2\pi\delta(p^2-m^2) = \int\frac{d^3 p}{(2\pi)^3}\frac{1}{2 E_{\vec p}}$$ this prescription for $d\Pi$ yields the same integrals for evaluation of a diagram via the optical theorem as do Cutkosky's cutting rules, confirming that this is the correct $d\Pi$ and not just a coincidence of notation. That said, there are still serious complications in the evaluation of those integrals. See this related question for details.
{ "domain": "physics.stackexchange", "id": 20788, "tags": "quantum-field-theory, feynman-diagrams" }
Measuring a specific observable using cirq
Question: I am simulating a circuit in cirq, and I need to measure an observable $$A(\theta) = R^\dagger_z(\theta)XR_z(\theta),$$ and I am not quite sure how to do that. My first thought was to apply these gates to the qubit and then measure in the computational basis, but that didn't give the right result. I read in the documentation that you can measure specific observables using Pauli strings, so I tried defining a Pauli string along the lines of pauli_string = cirq.Z(q_in)**(-np.pi/4)*cirq.X(q_in)*cirq.Z(q_in)**(np.pi/4) But I am getting an error saying I can't construct Pauli strings out of the exponentiated bit pauli_string = cirq.PauliString(cirq.Z(q_in))**(-np.pi/4)*cirq.X(q_in)*cirq.Z(q_in)**(np.pi/4) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~ TypeError: unsupported operand type(s) for *: 'GateOperation' and 'SingleQubitPauliStringGateOperation' I am not quite sure anymore how to simulate measuring this observable using cirq. If the answer is Pauli strings, how can I make it work? If there is an easier/more correct way to do it, what is it? Answer: By default Cirq offers Z basis measurement, not unlike a physical device (though there is the Pauli string wrapper around it, as you mentioned, which does the same thing under the hood as what I detail here). When you want to change the Z observable measured, you should think of each gate conjugating it. I.e. $X=HZH$, $A(\theta)=R_z(-\theta) H Z H R_z(\theta)$. Thus, you'll need to apply the $R_z$ rotation, then H, then simply cirq.measure. As you are working with arXiv:2209.07345v2, I implemented the circuit from Fig 1, having $|\psi\rangle = T|+\rangle$ and $\theta=0.2\pi$ and I undid the rotations $HZ^sR_z(\theta)$ by applying the inverses in reverse order. The final state vector shows that we have $T|+\rangle$ on the second qubit, while the first one is probabilistically $|0\rangle$ or $|1\rangle$.
a, b = cirq.LineQubit.range(2) theta = 0.2 * np.pi c= cirq.Circuit( cirq.H(a), cirq.T(a) # T rotation on a , cirq.H(b) # plus state on b , cirq.CZ(a,b) , cirq.rz(theta)(a), cirq.H(a) ,cirq.measure(a, key='a'), # measure A(theta) cirq.H(b), # undo H cirq.Z(b).with_classical_controls('a'), # undo classical CZ cirq.rz(-theta)(b), # undoing the theta rotation ) for _ in range(10): res = cirq.Simulator().simulate(c) print(cirq.dirac_notation(res.final_state_vector)) Results in something similar to this, showing that we get back the T rotated state on b: 0.71|10⟩ + (0.5+0.5j)|11⟩ 0.71|00⟩ + (0.5+0.5j)|01⟩ 0.71|10⟩ + (0.5+0.5j)|11⟩ 0.71|00⟩ + (0.5+0.5j)|01⟩ 0.71|10⟩ + (0.5+0.5j)|11⟩ 0.71|10⟩ + (0.5+0.5j)|11⟩ 0.71|10⟩ + (0.5+0.5j)|11⟩ 0.71|00⟩ + (0.5+0.5j)|01⟩ 0.71|00⟩ + (0.5+0.5j)|01⟩ 0.71|10⟩ + (0.5+0.5j)|11⟩
{ "domain": "quantumcomputing.stackexchange", "id": 5446, "tags": "simulation, cirq" }
Implementing a pipe-like program without wait?
Question: So I implemented a program that takes an input file, two command strings and an output file to mimic the behaviour of running: <input cmd1 -option | cmd2 -option > output that's called like this: ./pipe input "cmd1 -opt" "cmd2 -opt" output and I did it without using the wait system call, since while there are open file descriptors on the pipe I opened, the exec'd commands will wait for one another; that is, the pipe takes care of coordination, IIUC. But I feel like I am doing it wrong, since it seems that people are using wait() out of convention. Is it reasonable to think it is not necessary in my case, since I only need the return value of the second command and the pipe ensures communication, or am I missing something? What else am I doing wrong in terms of code structure or style? Here is my code: #include <unistd.h> #include <fcntl.h> #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <string.h> #include "pipex.h" static const char g_cmd_not_found[] = { "command not found: " }; static const char g_empty_string[] = { "The name of the input or output file cannot be an empty string\n" }; static int open_or_die(char *filename, int flags, mode_t mode) { int fd; fd = open(filename, flags, mode); if (fd == -1) { if (*filename == '\0') write(STDERR_FILENO, g_empty_string, sizeof(g_empty_string)); else { write(STDERR_FILENO, "pipex: ", sizeof("pipex: ")); perror(filename); } exit(EXIT_FAILURE); } return (fd); } static void pipe_or_die(int *pipe_fds) { int r; r = pipe(pipe_fds); if (r == -1) { write(STDERR_FILENO, "pipex: ", sizeof("pipex: ")); perror("pipe"); exit(EXIT_FAILURE); } } static void file_is_ok_or_die(char **cmdv, char **pathvar_entries) { if (access(cmdv[0], X_OK) == -1) { write(STDERR_FILENO, "pipex: ", sizeof("pipex: ")); if (cmdv[0][0] != '/') { write(STDERR_FILENO, g_cmd_not_found, sizeof(g_cmd_not_found)); ft_puts_stderr(cmdv[0]); } else perror(cmdv[0]); free_null_terminated_array_of_arrays(cmdv);
free_null_terminated_array_of_arrays(pathvar_entries); if (errno == ENOENT) exit(127); else if (errno == EACCES) exit(126); else exit(EXIT_FAILURE); } } void execute_pipeline(char *cmd_str, int read_from, int write_to, char **env) { char **pathvar_entries; char **cmdv; redirect_fd_to_fd(0, read_from); redirect_fd_to_fd(1, write_to); pathvar_entries = ft_split(get_path_var(env), ':'); cmdv = ft_split(cmd_str, ' '); if (!pathvar_entries || !cmdv) { write(STDERR_FILENO, "pipex: ", sizeof("pipex: ")); perror("malloc"); free_null_terminated_array_of_arrays(cmdv); free_null_terminated_array_of_arrays(pathvar_entries); exit(EXIT_FAILURE); } cmdv[0] = get_command_path(cmdv, get_pwd_var(env), pathvar_entries); file_is_ok_or_die(cmdv, pathvar_entries); execve(cmdv[0], cmdv, env); free_null_terminated_array_of_arrays(cmdv); free_null_terminated_array_of_arrays(pathvar_entries); } int main(int ac, char **av, char **envp) { int pipefd[2]; int child_pid; int infile_fd; int outfile_fd; if (ac != 5) print_usage_exit(); pipe_or_die(pipefd); child_pid = fork(); if (child_pid == -1) perror("fork"); else if (child_pid == 0) { infile_fd = open_or_die(av[1], O_RDONLY, 0000); close(pipefd[0]); execute_pipeline(av[2], infile_fd, pipefd[1], envp); } else { outfile_fd = open_or_die(av[ac - 1], O_WRONLY | O_CREAT | O_TRUNC, 0666); close(pipefd[1]); execute_pipeline(av[3], pipefd[0], outfile_fd, envp); } return (EXIT_SUCCESS); } Answer: Error handling The way you handle errors is quite unusual. Why use write() instead of fprintf(stderr, ...)? Why have some error messages stored in a variable like g_empty_string, but other error messages are passed as literals, like "pipex: "? Why try to open() first and only if it fails check if filename is empty? On Linux, I recommend you use err() to report errors and exit with an error code in one go.
For example: static int open_or_die(const char *filename, int flags, mode_t mode) { if (!filename[0]) err(EXIT_FAILURE, "Empty filename"); int fd = open(filename, flags, mode); if (fd == -1) err(EXIT_FAILURE, "Error trying to open '%s'", filename); return fd; } Don't use access() to check if you can execve() Instead of first checking with access() if a file is executable before calling execve(), just call execve() unconditionally, and then just check the return value of execve(). Otherwise, you will have a TOCTTOU bug. if (execve(cmdv[0], cmdv, env) == -1) err(EXIT_FAILURE, "Could not execute '%s'", cmdv[0]); Note that if execve() succeeds, it will never return, so there's no need to free anything afterwards. Why you should wait() If you don't call wait(), your program will terminate when the second command terminates. However, consider that the first command might still be doing something. It will then continue in the background, but you won't have control over it anymore. Suppose you want to call ./pipe twice, and the second call depends on results from the first call, then you would really want to ensure both processes of the first call have finished.
{ "domain": "codereview.stackexchange", "id": 42274, "tags": "c, linux" }
Why does a circular loop in an external magnetic field flip over when released?
Question: A current-carrying loop of wire is placed in a uniform external magnetic field as shown. If the current in the wire is traveling counterclockwise in the picture, what do you predict the loop will do when released? The answer provided for this question: From the RHR for loops, place the heel of the right hand in the plane of the loop so that the fingers are curled in the direction of the current. The extended thumb points in the direction of the B-field. Since it is anti-parallel to the external B-field, the loop will flip over and then remain at rest. I don't understand what causes that. When you apply the Right-Hand Rule, or Fleming's Left-Hand Rule (which I prefer), the force induced is inward around the loop. How come it makes it flip over? I must be missing something deep here. What is the underlying concept that causes the wire to flip over? Answer: The question is poorly phrased but it's getting at something interesting. You're correct that all the forces on the various pieces of the wire cancel out, but note that they point inwards rather than outwards. This means that the loop is in an equilibrium state. However, this equilibrium state is an unstable equilibrium. Imagine that the loop was perturbed ever so slightly, rotated by a very small angle about a vertical axis in the plane of the paper. In other words, the right side of the loop is slightly above the plane of the paper, and the left side of the loop is slightly below. If this happens, the net force on the right side of the loop will point to the left; and the net force on the left side of the loop points to the right. For both sides, there is a torque on the wire that will tend to rotate the loop further out of the plane. So unless the loop is perfectly placed perpendicularly to the magnetic field, it will not stay in that configuration. In contrast, let's suppose that the loop is flipped over, so that the current is now flowing clockwise in the plane.
A similar argument will show that a small perturbation out of the plane will cause a restoring torque that tends to push it back into the plane. In other words, a clockwise current is stable for a magnetic field into the page, while a counter-clockwise current is unstable.
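The stability argument can be made quantitative with the standard torque-on-a-dipole formula $\tau = \mu B \sin\theta$, with magnetic moment $\mu = IA$ for a flat loop. The numbers below are hypothetical, chosen only to show that the torque vanishes exactly at the anti-parallel orientation ($\theta=\pi$) but reappears under any small perturbation:

```python
import math

def torque_magnitude(current_A, area_m2, b_field_T, angle_rad):
    """|tau| = mu * B * sin(theta), with mu = I * A for a flat current loop.
    All input values below are made-up illustrative numbers."""
    return current_A * area_m2 * b_field_T * math.sin(angle_rad)

I, A, B = 2.0, 0.01, 0.5  # hypothetical current, loop area, field strength

# Exactly anti-parallel: torque is (numerically) zero, an equilibrium
print(torque_magnitude(I, A, B, math.pi))
# A tiny perturbation away from anti-parallel: a nonzero torque appears,
# and it acts to rotate the loop further, i.e. the equilibrium is unstable
print(torque_magnitude(I, A, B, math.pi - 0.01))
```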
{ "domain": "physics.stackexchange", "id": 95939, "tags": "homework-and-exercises, electromagnetism" }
Electric field due to semi-circular ring of charges
Question: If I have a semi-circular ring of charges (charges uniformly distributed), centred at the origin of the $x-y$ plane, with radius $r$, I can easily find that the electric field at the origin is $$-\frac{2kQ}{\pi r^2} $$ in the $y$-direction (not sure how to illustrate y-hat using TeX). However, I am having a lot of difficulty working out the electric field on the $x$-axis for any point inside the ring, i.e. for $-r<x<r$. Is this a complicated generalisation? How do I tackle it? (NB: I prefer a vector-based approach, rather than a magnitude based approach.) Answer: Yes it is a complicated generalization. The electric field at a point $\mathbf r$ is $$ \mathbf E(\mathbf r) = k\int \frac{\mathbf r - \mathbf r'}{|\mathbf r - \mathbf r'|^3} dq'. $$ For the problem you're attempting to solve, let $R$ be the radius of the ring to avoid notational confusion with other "r" variables, then $$ \mathbf r = (x,0,0), \qquad \mathbf r'= (R\cos\theta', R\sin\theta', 0). $$ It follows that \begin{align} \frac{\mathbf r - \mathbf r'}{|\mathbf r - \mathbf r'|^3} = \frac{(x - R\cos\theta', -R\sin\theta', 0)}{((x-R\cos\theta')^2 + R^2\sin^2\theta')^{3/2}}. \end{align} For the problem at hand, the charge measure $dq'$ is $$ dq' = \frac{Q}{\pi R}(Rd\theta') = \frac{Q}{\pi}d\theta'. $$ Plugging these in reveals that to compute the field at a given $x\neq 0$, we'd need to compute the following integral: $$ k\int_{0}^{\pi} \frac{(x - R\cos\theta', -R\sin\theta', 0)}{((x-R\cos\theta')^2 + R^2\sin^2\theta')^{3/2}} \frac{Q}{\pi}d\theta'. $$ This is a hard integral compared to the case in which $x=0$ because in that case it collapses to $$ -\frac{kQ}{\pi R^2}\int_{0}^{\pi} (\cos\theta', \sin\theta', 0)d\theta' = \left(0, -\frac{2kQ}{\pi R^2}, 0\right). $$
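A quick numerical check (a midpoint Riemann sum, not part of the original answer) confirms that the general integral reduces to the closed form at $x=0$. It takes the semicircle in the upper half-plane, $\theta'$ from $0$ to $\pi$, which is the configuration implied by the question's stated result of $-2kQ/(\pi r^2)$ in the $y$-direction:

```python
import math

def field_on_axis(x, R=1.0, kQ=1.0, n=20_000):
    """Midpoint Riemann sum of the field integral above, for a uniformly
    charged semicircle in the upper half-plane. Illustrative check only;
    returns (E_x, E_y) in units where k*Q = kQ."""
    d_theta = math.pi / n
    ex = ey = 0.0
    for i in range(n):
        t = (i + 0.5) * d_theta          # theta' at the interval midpoint
        dx = x - R * math.cos(t)         # x-component of (r - r')
        dy = -R * math.sin(t)            # y-component of (r - r')
        denom = (dx * dx + dy * dy) ** 1.5
        ex += dx / denom * d_theta
        ey += dy / denom * d_theta
    return kQ / math.pi * ex, kQ / math.pi * ey

ex0, ey0 = field_on_axis(0.0)
print(ex0, ey0)  # close to (0, -2/pi), i.e. (0, -2kQ/(pi R^2)) with kQ = R = 1
```

Evaluating `field_on_axis` at nonzero `x` gives the field anywhere on the axis inside the ring, which is exactly the hard integral that has no simple closed form.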
{ "domain": "physics.stackexchange", "id": 40322, "tags": "electromagnetism, electrostatics" }
Birthday validity-checking
Question: I have written code to check any birthday input's validity. As I am new in programming, and after going through several debugging steps, the code became very ugly. month_dict = {'jan':'January', 'feb':'February', 'mar':'March', 'may':'May', 'jul':'July', 'sep':'September', 'oct':'October', 'dec':'December', 'apr':'April', 'jun':'June', 'aug':'August', 'nov':'November'} day = int(raw_input ('Enter your birth day: ')) month = raw_input ("Enter your birth month: ") year_input = int (raw_input ('Enter your birth year: ')) days_31 = ['jan', 'mar', 'may', 'jul', 'aug', 'oct', 'dec'] days_30 = ['apr', 'jun', 'sep', 'nov'] days_28 = ['feb'] def valid_day_finding (): global valid_day if month_name in days_31: if day > 0 and day < 32: valid_day = day else: valid_day = 'invalid' elif month_name in days_30: if day >= 1 and day <= 30: valid_day = day else: valid_day = 'invalid' elif month_name in days_28: if year != 'invalid': if (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0): if day >= 1 and day <= 29: valid_day = day else: valid_day = 'invalid' else: if day >= 1 and day <= 28: valid_day = day else: valid_day = 'invalid' else: valid_day = 'invalid' else: valid_day = 'invalid' def valid_month_finding(): global month_name if month in month_dict.keys(): month_name = month else: month_name = 'invalid' def valid_year_finding(): global year if year_input > 1900 and year_input <2020: year = year_input else: year = 'invalid' def birthday_checking(): if valid_day != 'invalid' and month_name != 'invalid' and year != 'invalid': print 'your birthdate is %d - %s - %d' % (valid_day, month_dict[month_name], year) else: print 'Your Birthday is invalid' valid_year_finding() valid_month_finding() valid_day_finding() birthday_checking() This code is very inefficient. How can I improve this code? I know that there are built-in functions to check this, but I am trying to learn coding, which is why I am giving this a try. 
Answer: Naming and using functions These are all very bad function names: valid_year_finding() valid_month_finding() valid_day_finding() birthday_checking() Think of functions as actions. Typically their names should include a verb, take an input, and either return an output, or modify state. Consider this rephrasing and usage: try: year = get_valid_year(year_input) month = get_valid_month(month_input) day = get_valid_day(year, month, day_input) validate_birthday(year, month, day) print('Your birthday is {} - {} - {}'.format(year, month, day)) except ValueError as e: print('Invalid birthday: ' + e.message) The get_* methods should return valid values, or else raise an exception if the input is invalid. This way if a problem is found with year, there's no need to process month. Notice that above I pass year and month as parameters to get_valid_day. This usage makes sense, as you cannot validate a day without knowing the year and the month. Example implementation: def get_valid_year(year_input): year = int(year_input) if 1900 < year < 2020: return year raise ValueError('year is not within 1900 and 2020') Formatting There are serious formatting issues with this code. Please read and follow the Python style guide. A variable should have one type For example after these statements, year might end up as either integer or string: if year_input > 1900 and year_input <2020: year = year_input else: year = 'invalid' Avoid the global keyword, and global variables It's too hard to avoid using global. For example instead of this: def valid_year_finding(): global year if year_input > 1900 and year_input <2020: year = year_input else: year = 'invalid' You could reorganize the code like this: def valid_year_finding(): if year_input > 1900 and year_input < 2020: return year_input else: return 'invalid' year = valid_year_finding() We got rid of the global keyword, but the year_input global variable still remains. 
Let's make that a parameter of the function instead: def valid_year_finding(year_input): if year_input > 1900 and year_input < 2020: return year_input else: return 'invalid' year = valid_year_finding(year_input) Avoid free variables If a variable is used in a code block but not defined there, it is a free variable. In this function, year_input is a free variable: def valid_year_finding(): if year_input > 1900 and year_input < 2020: return year_input else: return 'invalid' Free variables are acceptable when their scope is small and obvious. In the posted code, year_input is defined in the global scope, making it a global variable, which should be avoided. Use the chained comparison operator This condition can be simplified: if year_input > 1900 and year_input < 2020: Using the chain comparison operator: if 1900 < year_input < 2020: Use the right data structures Instead of lists: days_31 = ['jan', 'mar', 'may', 'jul', 'aug', 'oct', 'dec'] days_30 = ['apr', 'jun', 'sep', 'nov'] days_28 = ['feb'] These should be sets: days_31 = {'jan', 'mar', 'may', 'jul', 'aug', 'oct', 'dec'} days_30 = {'apr', 'jun', 'sep', 'nov'} days_28 = {'feb'} # granted, for this one, list or set doesn't matter Use doc tests Doc tests can greatly help you test your implementations, here's an example: def get_valid_year(year_input): """ >>> get_valid_year('1901') 1901 >>> get_valid_year('1900') Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: year is not within 1900 and 2020 >>> get_valid_year('2020') Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: year is not within 1900 and 2020 >>> get_valid_year('x') Traceback (most recent call last): File "<stdin>", line 1, in ? 
ValueError: invalid literal for int() with base 10: 'x' """ year = int(year_input) if 1900 < year < 2020: return year raise ValueError('year is not within 1900 and 2020') If you have this code in a script called bday.py, you could run the doc tests with: python -m doctest bday.py If the tests pass, it outputs nothing. Otherwise it prints an informative report of what went wrong.
{ "domain": "codereview.stackexchange", "id": 15478, "tags": "python, beginner, datetime, validation" }
How to change the position of group colors in DimPlot of Seurat
Question: I have a Seurat object and plotted the Dimplot for UMAP visualization for 2 variables, as shown in the image below. Now, the problem is that I want the group by variables such as Non-responder and Responder and anti-CLTA4, anti-CLTA4+PD1, anti-PD1 on the top of the UMAP plot and not on the right side. My desired output would look like the following. The timely help is highly appreciated, many thanks. Answer: DimPlot returns a ggplot object, so ggplot functions can be applied to it. You can change the legend position like this: DimPlot(data) + theme(legend.position = "top") # or DimPlot(data) + theme(legend.position = c(.1, .9))
{ "domain": "bioinformatics.stackexchange", "id": 2367, "tags": "scrnaseq, seurat, ggplot2" }
Is multiprocessing possible on a Turing machine?
Question: I recently created a parallel implementation of merge sort, in which the sorting of several groups was accomplished by different processes, and was wondering if this is theoretically possible on a Turing machine? Answer: You can consider two variants of this question. Can a Turing machine simulate parallelism? Can a Turing machine simulate parallelism efficiently? The answer to the first question is a resounding affirmative. Just like your own OS simulates threads on a single-core CPU, so a Turing machine can simulate parallelism. Regarding the more refined second question, it depends on your definition of efficiency. You can probably simulate parallelism with at most a polynomial blow-up in resources (mainly time). Turing machines are not a good model if you care about concrete efficiency. The more appropriate model is the RAM machine. RAM machines can probably simulate parallelism pretty efficiently, though you should not expect a running time any less than the combined running times of the individual threads in your parallel solution.
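As a toy illustration of the first point (a single sequential machine interleaving the steps of several "parallel" computations), here is a hypothetical round-robin scheduler built from Python generators; each yield plays the role of one step of one thread:

```python
from collections import deque

def counter(name, steps):
    # Each yield is one "step" of this thread's computation.
    for i in range(steps):
        yield (name, i)

def run_round_robin(threads):
    """Interleave the steps of several generators on one sequential machine."""
    queue = deque(threads)
    trace = []
    while queue:
        thread = queue.popleft()
        try:
            trace.append(next(thread))  # execute one step
            queue.append(thread)        # re-schedule the thread
        except StopIteration:
            pass                        # thread finished
    return trace

trace = run_round_robin([counter('A', 2), counter('B', 2)])
# steps alternate: [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

This is exactly the kind of bookkeeping a Turing machine can do on its tape, at the cost of the scheduling overhead, which is the "polynomial blow-up" mentioned above.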
{ "domain": "cs.stackexchange", "id": 4595, "tags": "turing-machines, parallel-computing" }
Graph degree as solution for undirected graph paint
Question: The problem is: Given an undirected graph represented as an adjacency matrix and an integer k, write a function to determine whether each vertex in the graph can be coloured such that no two adjacent vertices share the same colour using at most k colours. Source: https://www.dailycodingproblem.com/ The proposed solution uses a backtracking algorithm (https://www.dailycodingproblem.com/blog/graph-coloring), but I would like to know if just evaluating the vertex degree is enough (source: https://youtu.be/LUDNz2bIjWI?t=169).

boolean canBeColored(int[][] adjacencyMatrix, int colors) {
    for (int row = 0; row < adjacencyMatrix.length; row++) {
        int degree = 0;
        for (int column = 0; column < adjacencyMatrix[row].length; column++) {
            if (adjacencyMatrix[row][column] == 1) {
                degree++;
            }
        }
        if (degree > colors) {
            return false;
        }
    }
    return true;
}

Answer: It is not. Consider a triangle graph, which has three nodes and three edges. All vertices have degree two, but three colors are required to color it. Your function would fail for that input. In fact, the problem is NP-complete, meaning that it is not known whether an efficient algorithm can be constructed for solving it. So if you can do it, you'll win fame and fortune and also a very large monetary prize.
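The triangle counterexample is easy to check mechanically. Below is a hypothetical Python port of the degree-based heuristic; it accepts the triangle with k = 2 colors even though the triangle's chromatic number is 3:

```python
def can_be_colored_by_degree(adjacency, k):
    """The (incorrect) heuristic: accept if every vertex degree is <= k."""
    return all(sum(row) <= k for row in adjacency)

# Triangle: three vertices, all pairwise adjacent. Every degree is 2,
# so the heuristic accepts k = 2 -- but a proper colouring needs 3 colours.
triangle = [
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]
print(can_be_colored_by_degree(triangle, 2))  # True, yet the correct answer is False
```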
{ "domain": "codereview.stackexchange", "id": 36968, "tags": "java, graph" }
How does the Hertzsprung-Russell Diagram allow us to calculate the distance to stars?
Question: I understand how to interpret an H-R diagram, in the sense that I know that the upper right corner is occupied by cool stars, but they are very luminous so they must be big; and the bottom left corner is hot stars, not luminous, so they are small in size. However, I have tried to read a textbook and look online, but have yet to understand how, from this information, we can measure the distance to stars of unknown distance. Answer: There are two main techniques that I know of for using HR diagrams to measure distance. The first is to, basically, plot a group of stars that were likely all formed at the same time (that is, a cluster) with apparent brightness against color. Because the stars are all in the same cluster, they are at nearly the same distance. So you can find the distance to the cluster by finding how much you have to shift the diagram up or down to get it to line up with either a similar HR diagram of a cluster at a known distance or an HR diagram of stars with parallax distances. The second way is called "tip of the red giant branch", typically used with galaxies where the main sequence won't be as well defined as it is for single clusters. As a star ages it moves through the HR-diagram on paths at different rates. Most of the star's lifetime is spent on the line where it burns hydrogen in its core, known as the "main sequence". As the fusing layer expands due to the growth of the inert core in the center, the star moves up and to the right (brighter and redder) on the diagram. For stars with mass less than around 1.6 times the sun's, the pressure and temperature eventually get high enough that helium begins fusing, in a process called the helium flash, causing the core to expand and cool, making the outer layers of the star contract. That moves the star back down and to the left in the HR-diagram, leaving a kind of "cusp" in the path through the HR diagram. The location of that cusp is called the "tip of the red giant branch".
Because we know the luminosity and color of that tip, when we find it on an HR diagram of stars in a galaxy, we can work out a lot of information about the galaxy (redshift, distance, etc).
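The "shift up or down" in the first technique can be made quantitative with the distance modulus, m - M = 5 log10(d / 10 pc): the vertical offset between the observed (apparent-magnitude) diagram and the calibrated (absolute-magnitude) one is m - M, which gives the distance directly. A small sketch (function name is mine, not from the answer):

```python
def distance_parsecs(m, M):
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((m - M + 5) / 5)

# A star appearing 5 magnitudes fainter than its absolute magnitude is 100 pc away.
print(distance_parsecs(10.0, 5.0))  # 100.0
```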
{ "domain": "physics.stackexchange", "id": 43812, "tags": "astrophysics, stars, galaxies, distance, luminosity" }
How can I remap topic cloud_in to my sensor's PointCloud2?
Question: Hi friends, I am new to ROS and I want to do a 3D map with my Kinect. I think a good option is octomap_server, but I can't get it to work. I think the easiest way would be using rviz, but I can't manage it. I think I must remap the topic cloud_in to my sensor, but I am not sure if it is going to run and I don't know how to do it. Does anybody know a tutorial? Help me please. Originally posted by Rookie92 on ROS Answers with karma: 47 on 2014-05-23 Post score: 0 Answer: You can remap your PointCloud2 input topic to cloud_in by including the following line in your launch file, e.g., if you are using this file: octomap_mapping / octomap_server / launch / octomap_mapping.launch write <remap from="cloud_in" to="/(point cloud topic)" /> under the node tag. (RGBD SLAM is also a nice option for mapping using Kinect.) Originally posted by Sudeep with karma: 460 on 2014-05-24 This answer was ACCEPTED on the original site Post score: 1
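Spelled out as a minimal launch file, a sketch along those lines (the topic name /camera/depth/points is a placeholder; use whatever topic your Kinect driver actually publishes its PointCloud2 on):

```xml
<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <!-- feed the Kinect point cloud into octomap_server's expected input -->
    <remap from="cloud_in" to="/camera/depth/points" />
  </node>
</launch>
```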
{ "domain": "robotics.stackexchange", "id": 18050, "tags": "kinect, rviz, octomap, octomap-server" }
Does Shor's algorithm end the search for factoring algorithms in the quantum world of computation?
Question: In other words, will factoring research remain solely in the classical world or are there interesting research on-going in the quantum world related to factoring? Answer: Asymptotically, Shor's algorithm is really efficient. Basically it's just: superposition, modular exponentiation (the slowest step), and a fourier transform. Modular exponentiation is what you do to actually use the RSA cryptosystem. That means to a quantum computer, encrypting/decrypting RSA legitimately would be about the same speed as using Shor's algorithm to break the system. So I'm skeptical that there will be any improvements on the basic idea. That said, any improvement to integer addition, integer multiplication, or the quantum fourier transform would improve Shor's algorithm, and those are all very general subroutines that people will almost certainly work on. A short search on Google Scholar shows lots of research on improving quantum arithmetic circuits. I think there will be more research on classical/quantum trade-offs in Shor's algorithm. That is, if you have a small or noisy quantum computer, can you modify Shor's algorithm so that it still works, but maybe needs a lot more pre- and post-processing on a classical computer, or maybe has a lower probability of success, etc.? In this area there's Quantum Algorithms for Computing Short Discrete Logarithms and Factoring RSA Integers. There's also the Quantum Number Field Sieve, an approach where a "small" quantum computer (too small to use Shor's algorithm directly) is used as a subroutine of the classical number field sieve, slightly improving the time complexity (though I am personally convinced that error correction for this will require more physical qubits than vanilla Shor's algorithm). In short, I don't expect any radical new quantum factoring algorithms and I don't think anyone's working on it. But there are a lot of interesting tweaks to be made to fit specific use cases.
{ "domain": "quantumcomputing.stackexchange", "id": 730, "tags": "quantum-algorithms, shors-algorithm" }
groovy rosconsole error
Question: I noticed ROS Groovy is no longer in beta and I wanted to try catkin with Groovy. I am able to create a new package using catkin. Also I am able to include "ros/ros.h" in my code, but whenever I use anything from ROS I sometimes get an error like: beginner_tutorials_node.cpp:(.text+0x13): undefined reference to `ros::console::g_initialized' beginner_tutorials_node.cpp:(.text+0x23): undefined reference to `ros::console::initialize()' beginner_tutorials_node.cpp:(.text+0x6c): undefined reference to `ros::console::initializeLogLocation(ros::console::LogLocation*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ros::console::levels::Level)' beginner_tutorials_node.cpp:(.text+0xa7): undefined reference to `ros::console::setLogLocationLevel(ros::console::LogLocation*, ros::console::levels::Level)' beginner_tutorials_node.cpp:(.text+0xb1): undefined reference to `ros::console::checkLogLocationEnabled(ros::console::LogLocation*)' beginner_tutorials_node.cpp:(.text+0xfc): undefined reference to `ros::console::print(ros::console::FilterBase*, log4cxx::Logger*, ros::console::levels::Level, char const*, int, char const*, char const*, ...)' collect2: ld returned 1 exit status my code: #include "ros/ros.h" #include "ros/console.h" #include "ros/assert.h" #include "ros/static_assert.h" int main(int argc, char ** argv) { ROS_INFO("string"); return 0; } my cmake_lists.txt: cmake_minimum_required(VERSION 2.8.3) project(beginner_tutorials) ## Find catkin macros and libraries ## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz) ## is used, also find other catkin packages find_package(catkin REQUIRED COMPONENTS roscpp std_msgs rosconsole roscpp_serialization message_generation rostime) ## System dependencies are found with CMake's conventions # find_package(Boost REQUIRED COMPONENTS system) ## Uncomment this if the package has a setup.py.
This macro ensures ## modules and scripts declared therein get installed # catkin_python_setup() ####################################### ## Declare ROS messages and services ## ####################################### ## Generate messages in the 'msg' folder # add_message_files( # FILES # Message1.msg # Message2.msg # ) ## Generate services in the 'srv' folder # add_service_files( # FILES # Service1.srv # Service2.srv # ) ## Generate added messages and services with any dependencies listed here # generate_messages( # DEPENDENCIES # std_msgs # ) ################################################### ## Declare things to be passed to other projects ## ################################################### ## LIBRARIES: libraries you create in this project that dependent projects also need ## CATKIN_DEPENDS: catkin_packages dependent projects also need ## DEPENDS: system dependencies of this project that dependent projects also need catkin_package( INCLUDE_DIRS include # LIBRARIES beginner_tutorials CATKIN_DEPENDS roscpp std_msgs rosconsole roscpp_serialization message_generation rostime # DEPENDS system_lib ) ########### ## Build ## ########### ## Specify additional locations of header files include_directories(include ${catkin_INCLUDE_DIRS} ) ## Declare a cpp library # add_library(beginner_tutorials # src/${PROJECT_NAME}/beginner_tutorials.cpp # ) ## Declare a cpp executable add_executable(beginner_tutorials_node src/beginner_tutorials_node.cpp) ## Add dependencies to the executable # add_dependencies(beginner_tutorials_node ${PROJECT_NAME}) ## Specify libraries to link a library or executable target against # target_link_libraries(beginner_tutorials_node # ${catkin_LIBRARIES} # ) ############# ## Install ## ############# ## Mark executable scripts (Python etc.)
for installation ## not required for python when using catkin_python_setup() # install(PROGRAMS # scripts/my_python_script # DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} # ) ## Mark executables and/or libraries for installation # install(TARGETS beginner_tutorials beginner_tutorials_node # ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} # LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION} # RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION} # ) ## Mark cpp header files for installation # install(DIRECTORY include/${PROJECT_NAME}/ # DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION} # FILES_MATCHING PATTERN "*.h" # PATTERN ".svn" EXCLUDE # ) ## Mark other files for installation (e.g. launch and bag files, etc.) # install(FILES # # myfile1 # # myfile2 # DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION} # ) ############# ## Testing ## ############# ## Add gtest based cpp test target and link libraries # catkin_add_gtest(${PROJECT_NAME}-test test/test_beginner_tutorials.cpp) # if(TARGET ${PROJECT_NAME}-test) # target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME}) # endif() ## Add folders to be run by python nosetests # catkin_add_nosetests(test) my package.xml: <?xml version="1.0"?> <package> <name>beginner_tutorials</name> <version>0.0.0</version> <description>The beginner_tutorials package</description> <buildtool_depend>catkin</buildtool_depend> <build_depend>message_generation</build_depend> <build_depend>rosconsole</build_depend> <build_depend>roscpp</build_depend> <build_depend>roscpp_serialization</build_depend> <build_depend>rostime</build_depend> <build_depend>std_msgs</build_depend> <run_depend>message_generation</run_depend> <run_depend>rosconsole</run_depend> <run_depend>roscpp</run_depend> <run_depend>roscpp_serialization</run_depend> <run_depend>rostime</run_depend> <run_depend>std_msgs</run_depend> <!-- The export tag contains other, unspecified, tags --> <export> <!-- You can specify that this package is a metapackage here: --> <!-- 
<metapackage/> --> <!-- Other tools can request additional information be placed here --> </export> </package> Hope anyone can help me out here. Thanks in advance Originally posted by Dickvdsteen on ROS Answers with karma: 103 on 2013-01-09 Post score: 1 Answer: Those are linker errors. Uncomment the linker flag block ## Specify libraries to link a library or executable target against # target_link_libraries(beginner_tutorials_node # ${catkin_LIBRARIES} # ) That should fix it. EDIT: The roscpp tutorials also have a catkin button that will guide you along the building process with catkin: http://www.ros.org/wiki/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29 Originally posted by KruseT with karma: 7848 on 2013-01-10 This answer was ACCEPTED on the original site Post score: 2
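For reference, the uncommented block looks like this; linking the executable against ${catkin_LIBRARIES} is what resolves the undefined ros::console references:

```cmake
add_executable(beginner_tutorials_node src/beginner_tutorials_node.cpp)

## link against the catkin libraries (roscpp, rosconsole, ...)
target_link_libraries(beginner_tutorials_node
  ${catkin_LIBRARIES}
)
```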
{ "domain": "robotics.stackexchange", "id": 12351, "tags": "ros, catkin, ros-groovy, linking" }
Updating Google Sheets table efficiently
Question: Program description: A program accepts a list (uTable). The list consists of at least 4 different lists (rows), each with 8 elements (cells, indexed 0 to 7). Check whether the value at [7] is zero and assign a new random value to it on the basis of what [3] is: 1, 2, 3, 4 or 5. A random chance should be used to decide whether the assignment happens. A cell should be updated using sheet_store.update_cell(i, k, value). A check for admin permissions should be included (if adminId in admins:). My solution:

if adminId in admins:
    uTable = sheet_store.get_all_values()
    for i in range(len(uTable)):
        row = uTable[i]
        if (row[7] == '0'):
            chance = random.uniform(0, 1)
            if (row[3] == '5'):
                if chance <= 0.05:
                    print('5 EXECUTED!')
                    sheet_store.update_cell(i+1, 8, 1)
            elif (row[3] == '4'):
                random_1 = randint(1, 3)
                if chance <= 0.1:
                    print('4 EXECUTED!')
                    sheet_store.update_cell(i+1, 8, random_1)
            elif (row[3] == '3'):
                random_2 = randint(1, 5)
                if chance <= 0.4:
                    print('3 EXECUTED!')
                    sheet_store.update_cell(i+1, 8, random_2)
            elif (row[3] == '2'):
                random_3 = randint(1, 10)
                if chance <= 0.6:
                    print('2 EXECUTED!')
                    sheet_store.update_cell(i+1, 8, random_3)
            elif (row[3] == '1'):
                random_4 = randint(1, 20)
                if chance <= 1:
                    print('1 EXECUTED!')
                    sheet_store.update_cell(i+1, 8, random_4)
    return True
else:
    return False

Input: (TABLE IS TRANSFERRED AS LIST) (Row and column with index 0 are not included)

0 1 2 3 4 5 6 7
1 artifact 0 5 unknown a_1.jpg 07/08/20/15/30 0
2 Edible item 1 3 buff a_2.jpg 0
3 description 2 4 unknown a_3.jpg update time 1
4 Great companion 3 2 companion 7

Runtime: 1 EXECUTED! 2 EXECUTED! success 0.5s Is there any way to improve the code and make the program run faster? Thank you in advance. Answer: These are my suggestions regarding performance: Generate random numbers only if needed The new value for column 7 can be generated after you are sure it needs to be updated.
From:

elif (row[3] == '4'):
    random_1 = randint(1, 3)
    if chance <= 0.1:
        print('4 EXECUTED!')
        sheet_store.update_cell(i+1, 8, random_1)

To:

elif (row[3] == '4'):
    if chance <= 0.1:
        print('4 EXECUTED!')
        sheet_store.update_cell(i+1, 8, randint(1, 3))

In this way, the new random value is generated only if chance <= 0.1. Same for the other cases. API call cost I assume that you are using gspread. The doc says: Under the hood, gspread uses Google Sheets API v4. Most of the time when you call a gspread method to fetch or update a sheet gspread produces one HTTP API call. Given this statement, your code sends 1 API call with get_all_values() and 1 API call for each row to update with update_cell(). In the worst case, there are 5 API calls. One suggestion is to generate the list of all new values for column 7 and send a single update call with update(), so that the total number of API calls will be reduced to 2.
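A data-driven way to fold the repeated branches together (a hypothetical refactor that keeps the original thresholds and ranges, and also generates the random value only when needed):

```python
import random

# Value of column 3 -> (chance threshold, upper bound of the random replacement).
# Thresholds and ranges are taken from the original code.
RULES = {
    '5': (0.05, 1),
    '4': (0.10, 3),
    '3': (0.40, 5),
    '2': (0.60, 10),
    '1': (1.00, 20),
}

def new_cell_value(tag, chance):
    """Return the replacement for column 7, or None when the row is unchanged."""
    rule = RULES.get(tag)
    if rule is None:
        return None
    threshold, upper = rule
    if chance <= threshold:
        return random.randint(1, upper)  # generated only when actually needed
    return None
```

The main loop can then collect the non-None values per row and push them to the sheet in one batched update() call instead of one update_cell() per row.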
{ "domain": "codereview.stackexchange", "id": 40110, "tags": "python, python-3.x, google-sheets" }
Move It! polar robot configuration not moving second joint
Question: Hi all! I'm fighting to create a polar robot configuration with MoveIt. The polar robot has 3 joints: rotational, rotational and prismatic. Polar robot http://www.robotpark.com/academy/PG/Spherical-Robots-Robotpark.png I have modeled it in a xacro file and can move it without problems in RViz with fake controllers. Rviz simulation Here is the xacro file of the robot (be careful to uncomment the lines for the joint with the world) link And to launch it, this is the command: roslaunch arm_description rviz.launch Then I followed the MoveIt! tutorial for a custom robot (link) and generated all the files correctly, but as you can see in the next video, it only moves the first joint (rotational) and the third joint (prismatic), but not the second one (rotational). Video And to launch it, this is the command: roslaunch rrp_moveit demo.launch I would appreciate some help in trying to fix this problem. Kind regards, Jorge EDIT: This is the workspace for the whole project. Originally posted by jdeleon on ROS Answers with karma: 133 on 2018-10-01 Post score: 0 Original comments Comment by gvdhoorn on 2018-10-02: Cross-post of ros-planning/moveit#1074. Answer: The solution is explained in ROS answer #95626. The problem is that the arm needs at least 6 DOF for the solver to find a solution. The fix is to add the following line to the kinematics.yaml file: position_only_ik: True Originally posted by jdeleon with karma: 133 on 2018-10-02 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by nguyentuMTA on 2019-04-03: I'm using position_only_ik: True for a 3-DOF custom arm, but with that, when I use the interactive marker only the 1st and 2nd joints can rotate; the 3rd can't move, so the robot can't reach all of its own workspace. Does anyone know about this?
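In context, the line goes alongside the solver settings in kinematics.yaml; the group name and solver below are illustrative, use whatever the MoveIt Setup Assistant generated for your robot:

```yaml
arm:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_search_resolution: 0.005
  kinematics_solver_timeout: 0.005
  position_only_ik: True
```

With position_only_ik the KDL solver only matches the end-effector position (3 constraints), which a 3-joint arm can satisfy, instead of the full 6-DOF pose.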
{ "domain": "robotics.stackexchange", "id": 31847, "tags": "ros, joint, moveit, ros-kinetic, robot" }
Is "Egyptian Year" the same as a modern sidereal year?
Question: Copernicus uses the term "Egyptian Year" throughout his discussions of the movements of the Earth, and of his and other models of the movements of the planets; but it is unclear from his text, or from the general definitions I've found, what this corresponds to in modern astronomical terms. What, precisely, is an "Egyptian Year"? Is it identical with a modern sidereal year; if not, what are the correct conversions between the two? Answer: The Egyptian year is not the same as the sidereal year. The Egyptian year is exactly 365 days, whereas the sidereal year is approximately 365.256 days.
{ "domain": "physics.stackexchange", "id": 4830, "tags": "astronomy, terminology, history" }
Correlation length at low temperatures?
Question: The correlation length gives (approximately) the distance over which a spin flip has an effect. For systems with ordered phases, at low temperatures the correlation length is then small (since a single spin flip will have little effect due to the large energy needed to rotate all the spins). But the one-dimensional Ising model (although it doesn't have an ordered phase) has a correlation length that diverges at low temperature. Given what I have said above, and given that at $T=0$ the 1D Ising model does have an ordered phase, why is the correlation length diverging instead of going to zero? Answer: I assume that the question is not about how one shows that the correlation length diverges, as this is a simple computation when $d=1$, but rather about intuition behind the result. First, note that the same also happens when you consider the $d$-dimensional Ising model, with $d\geq 2$, as the temperature approaches $T_c$ from above. (Actually, also from below, but let's stick to the situation you are interested in.) In both cases, the reason is the same (although the mechanism is more subtle in higher dimensions). The correlation length is the relevant length-scale in the system. So, above $T_c$, it measures, for example, the typical size of clusters of spins taking the same values. The size of these clusters diverges as the system gets closer and closer to the critical temperature. Rather than the kind of dynamical interpretation you seem to be using, you should interpret the correlation length in the following statistical manner: suppose you only observe a single spin of your system (say, the spin at $0$, $\sigma_0$) and you discover that it takes the value $+1$. What can you say about the value of the spin at $i\neq 0$? Well, if $i$ is "close enough" to $0$, then knowing that you have a $+1$ at $0$ makes it more likely to observe a $+1$ also at $i$. The correlation length quantifies what "close enough" means.
Namely, it is the typical distance up to which the probability of observing a $+1$, given that $\sigma_0=+1$, differs significantly from $1/2$. Now, for the one-dimensional model at low temperature, observing that $\sigma_0=+1$ makes it very likely that you'll see only $+1$s up to very large distances, precisely because the energetic cost of flipping a spin is huge. Spin flips will occur (the cost being finite), but the density of pairs of neighbors with spins taking different values goes to zero as $T\downarrow 0$. The average distance between two consecutive such pairs will be of the order of the correlation length, so it diverges. Addendum Let me stress the difference with what happens as $T\downarrow 0$ in higher dimensions. In this case, the system is in an ordered phase. For definiteness, let's assume that it is in the $+$ state $\mu^+_T$. In this case, the correlation length measures the typical distance over which $$ \mu_T^+(\sigma_i=s \,|\, \sigma_0=s) \text{ differs significantly from } \mu_T^+(\sigma_i=s). $$ And this distance decays to $0$ as $T\downarrow 0$, for reasons explained in this answer.
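For concreteness, the "simple computation" alluded to above (e.g. via the transfer matrix) gives, for the 1D Ising chain with coupling $J$ and $\beta = 1/k_B T$,

$$\langle \sigma_0 \sigma_i \rangle = \left(\tanh \beta J\right)^{|i|} = e^{-|i|/\xi(T)}, \qquad \xi(T) = -\frac{1}{\ln \tanh \beta J} \sim \tfrac{1}{2}\, e^{2\beta J} \quad \text{as } T\downarrow 0,$$

so the correlation length diverges exponentially at low temperature, consistent with the domain-wall picture: the density of unlike neighbor pairs is of order $e^{-2\beta J}$, and $\xi$ is of the order of the spacing between them.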
{ "domain": "physics.stackexchange", "id": 49700, "tags": "condensed-matter, phase-transition, critical-phenomena, scale-invariance" }
Measurement of a State Not in the Eigenbasis of the Operator
Question: Suppose I have a two dimensional Hilbert space $\{ |0 \rangle,|1\rangle \}$ with these states being orthonormal. Now suppose I have the Hamiltonian $H=|1\rangle \langle 0|+|0\rangle \langle 1| .$ It is clear that the (normalised) eigenstates of this are: $|\phi_0\rangle=\frac{1}{\sqrt{2}}\Big(|0\rangle+|1\rangle\Big)$ and $|\phi_1\rangle=\frac{1}{\sqrt{2}}\Big(|0\rangle-|1\rangle\Big)$ with eigenvalues $+1$ and $-1$ respectively. Now what I am confused about is calculating probabilities: I know that given the state $| \psi(t)\rangle$ of the system, if I want to calculate the probability of this state being in an arbitrary state $|\phi\rangle$, then I just need to calculate their overlap $\big($ie. $\mathcal{P}(\phi)=|\langle \phi | \psi(t) \rangle |^2\big).$ Suppose I chose the state to be $|0\rangle$, then I would need to find its overlap. What is the actual grounding for this calculation $\dots$ From the principles of QM the only outcomes from measurement are eigenvalues of the operator in question, and the probability is given by the overlap of the eigenstate with the time evolving state of the system. So I should only be able to find the probabilities of being in states $|\phi_0 \rangle$ or $|\phi_1 \rangle$ by this axiom (obviously not true though!) So how can I find the probability of being in state $|0\rangle$ if it isn't an eigenstate of the Hamiltonian operator. Is there an operator corresponding to the original basis states which we can measure (though this would be the identity)? What is the intuition behind this definition so that I can some appreciation as to why it makes sense to compute the overlap of states? Answer: Perhaps the confusion here is "what is an operator for the measurement whose eigenvector is $|0\rangle$?" The usual operator is the pure density matrix or projection operator $|0\rangle\langle 0|$ for which $|0\rangle$ has eigenvalue 1. The other eigenvector is $|1\rangle$ which has eigenvalue 0. 
If you want to, you can write $|0\rangle$ in the $\phi_j$ basis by using: $|0\rangle = (|\phi_0\rangle + |\phi_1\rangle)/\sqrt{2}$
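Spelled out with the projector above, the Born rule for the outcome $1$ of measuring $P_0 = |0\rangle\langle 0|$ reads

$$\mathcal{P}(0) = \langle\psi(t)|P_0|\psi(t)\rangle = \langle\psi(t)|0\rangle\langle 0|\psi(t)\rangle = |\langle 0|\psi(t)\rangle|^2,$$

which is exactly the overlap rule from the question: the "eigenvalues and eigenstates" axiom is being applied to the observable $P_0$, not to the Hamiltonian.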
{ "domain": "physics.stackexchange", "id": 57277, "tags": "quantum-mechanics, operators, hilbert-space, probability" }
How local is the stress tensor?
Question: I am confused by the definition of the stress tensor in a crystal (let's say a semiconductor); I don't see how it could be "more local" than over a unit cell. I know that in field theory the stress tensor, or more precisely the whole stress-energy tensor, is a local function, a function of the point $\vec{x}$, but I don't understand the interpretation you can give to shearing a point. Plus, in the lattice case I stressed (pun semi-intended), shouldn't you be unable to look at displacements or deformations on smaller scales than your elementary cell? Answer: An interesting question. You are right, the stress in a crystalline solid, or any solid, is treated by engineers as a macroscopic property of matter, assuming matter is a continuous medium. It is given in terms of the external forces acting on the solid per unit area in some direction. Hence the distinction of $\sigma_{xy}, \sigma_{yz}$ etc. This goes with the definition of stress $\sigma_{ij}={\frac {dF_i}{dA_j}}$ where $dF_i, dA_j$ are Cartesian tensors (not like the general contravariant and covariant tensors in general tensor analysis as in GR). This derivative is meaningful locally and, as said above, treats the solid as a continuous medium. In order to define stress in terms of the crystal structure you can relate it via the strain tensor, which can be defined in terms of relative infinitesimal displacements of the atoms in the cell. There are a couple of ways of doing this, one of which is the Lagrangian description. In this, the coordinates $(x_1,x_2,x_3)$ of the atoms in the unrestrained state are taken as the independent variables, while $(u_1,u_2,u_3)$ are the relative displacements of the atoms, and they are the dependent variables.
This leads to the following definition for the strain tensor (Lagrangian strain) $\eta_{ij}={\frac 1 2}\left( {\frac {\partial u_i}{\partial x_j}}+{\frac {\partial u_j}{\partial x_i}}+\Sigma {\frac {\partial u_r}{\partial x_i}} {\frac {\partial u_r}{\partial x_j}} \right)$ where the summation is over the index $r$. Note that this is a function of the coordinates of the atoms in the lattice, so $\eta_{ij}$ is a locally defined quantity. Macroscopically, strain and stress are related via Young's modulus $E$ in the relation $\sigma=\epsilon E$ (Hooke's law), which would be OK for an isotropic material, in which the direction of application of the force is irrelevant. For anisotropic materials, such as crystalline solids in condensed matter physics, this relation is generalised to $\sigma_{ij}=\Sigma C_{ijkl}\eta_{kl}$ where the summation is assumed over the $kl$ indices. The coefficients $C_{ijkl}$ are the second-order 'elastic constants' or elastic stiffness coefficients of the material, and they are fourth-order tensors, defined as derivatives of the elastic energy of the material, i.e. the potential energy of the atoms in the crystal lattice due to their relative displacements. I think this is what TMS meant in his comment. I hope this helps to understand the notion of locality of stress in crystalline solids.
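For small displacement gradients, the quadratic term in the sum can be dropped, leaving the familiar infinitesimal strain tensor

$$\varepsilon_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right),$$

which is the locally defined quantity that enters the linearized Hooke's law $\sigma_{ij} = \Sigma\, C_{ijkl}\,\varepsilon_{kl}$.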
{ "domain": "physics.stackexchange", "id": 6377, "tags": "semiconductor-physics, continuum-mechanics, lattice-model" }
Time Dependent Expectation Value of an Operator
Question: So I have an issue understanding how to solve problems with time-dependent operators (Heisenberg picture). In this case I have a resonator prepared in a coherent state: $|\psi (t=0)\rangle = |\alpha \rangle$. Then I need to find the time evolution of the system that has the following Hamiltonian, $H/\hbar = \omega a ^\dagger a$: $$|\psi (t) \rangle = e^{-i|\alpha|^2\omega t}|\alpha\rangle$$ But then I need to find the "time-dependent expectation value of the operator $Q = \frac{a + a^\dagger}{\sqrt{2}}$". Now I don't understand how you are supposed to get the time dependence here: $$\langle \psi (t)|Q|\psi (t)\rangle = \langle\alpha|e^{i|\alpha|^2\omega t}Qe^{-i|\alpha|^2\omega t}|\alpha\rangle = \langle\alpha|Q|\alpha\rangle = \frac{\alpha + \alpha^*}{\sqrt{2}}$$ Am I missing something here? This feels like it should be very simple, but I can't wrap my head around it. Many thanks! Answer: I will develop Mike Stone's answer. Actually, coherent states are eigenstates of the annihilation operator only, i.e. $\hat{a}|\alpha\rangle = \alpha|\alpha\rangle$, hence $\langle\alpha|\hat{a}^\dagger = \alpha^*\langle\alpha|$ by taking the Hermitian conjugate, but $\hat{a}^\dagger|\alpha\rangle \neq \alpha^*|\alpha\rangle$. Indeed, if the creation and annihilation operators were simultaneously diagonalizable $-$ in other words, if they possessed the same eigenstates $-$ then they would commute, but we know that $[\hat{a},\hat{a}^\dagger] = 1$. In consequence, the relation $\hat{a}^\dagger\hat{a}|\alpha\rangle = |\alpha|^2|\alpha\rangle$ is not true, nor is $e^{-i\omega t\hat{a}^\dagger\hat{a}}|\alpha\rangle = e^{-i\omega t|\alpha|^2}|\alpha\rangle$.
Now, since the eigenbasis of the number operator $\hat{n} = \hat{a}^\dagger\hat{a}$ corresponds to the Fock states $\{|n\rangle\}_{n\in\mathbb{N}}$ and given that the coherent state $|\alpha\rangle$ is represented by $e^{-\frac{1}{2}|\alpha|^2} \displaystyle\sum_{n\ge0} \frac{\alpha^n}{\sqrt{n!}} |n\rangle$ in that basis, one has : $$ |\psi(t)\rangle = e^{-i\hat{H}t/\hbar}|\alpha\rangle = e^{-\frac{1}{2}|\alpha|^2} \sum_{n\ge0} \frac{\alpha^n}{\sqrt{n!}} e^{-i\omega t\, \hat{a}^\dagger\hat{a}}|n\rangle = e^{-\frac{1}{2}|\alpha|^2} \sum_{n\ge0} \frac{(\alpha e^{-i\omega t})^n}{\sqrt{n!}} |n\rangle = |\alpha e^{-i\omega t}\rangle, $$ hence $$ \langle\hat{q}\rangle = \langle\psi(t)|\hat{q}|\psi(t)\rangle = \langle\alpha e^{-i\omega t}| \frac{\hat{a}^\dagger+\hat{a}}{\sqrt{2}} |\alpha e^{-i\omega t}\rangle = \frac{\alpha^*e^{i\omega t} + \alpha e^{-i\omega t}}{\sqrt{2}}, $$ where $\hat{a}^\dagger$ has been applied on the left and $\hat{a}$ on the right.
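Writing $\alpha = |\alpha| e^{i\varphi}$ makes the time dependence explicit:

$$\langle\hat{q}\rangle = \frac{|\alpha|\, e^{-i(\omega t - \varphi)} + |\alpha|\, e^{i(\omega t - \varphi)}}{\sqrt{2}} = \sqrt{2}\,|\alpha| \cos(\omega t - \varphi),$$

i.e. the coherent state's quadrature oscillates like a classical oscillator of amplitude $\sqrt{2}\,|\alpha|$ and phase $\varphi$.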
{ "domain": "physics.stackexchange", "id": 97869, "tags": "quantum-mechanics, operators" }
Battery Swapping in Electric Cars
Question: What are the advantages and disadvantages (technical, financial etc.) of having a battery swapping system over charging stations for electric cars? As per my reading, battery swapping seems feasible for electric buses and 2-wheelers but not for cars. Is that really the case? Also, is it possible for the battery to be housed under the bonnet instead of having it under the floor of the car so as to do away with underground swapping (like the one showcased by Tesla)? I know that the battery is placed there so as to lower the center of gravity. However, if there aren't any other significant benefits, can the aforementioned be done and battery swapping be made easier? Answer: This idea has been suggested often, but there's way more to it than you'd think. One big problem is that batteries are heavy, and there are multiple problems arising because of this. The main one being the packaging in the car. You want a decent range in a car, so the battery will be large and heavy. If you put it in the front, the car will be front-heavy, and it'll understeer. The front will break out in the corner. Same problem when putting it in the back: it'll oversteer, which can be even more dangerous for ignorant drivers. You'll notice that a disturbed weight distribution causes bad driving behaviour in general. Placing it too high will also cause problems: the car will dive under braking and accelerating, and it'll roll in the corners. Long story short, batteries are annoyingly big and heavy and awkward things we have to get stuffed inside a car. If you make people use batteries on an exchange basis, you can only have a few standard battery sizes and configurations, or it'll be unfeasible. This means the battery will never be optimal for a specific car. And this is why exchange batteries aren't a thing. (yet) Others often mention that different batteries will have a different quality, but I don't think this really matters.
If you have a lease contract, or if you pay per kilometer, the battery quality doesn't matter anymore. Bad ones will just be taken out of the cycle and recycled. It would be feasible if only a few different cars were used on the road, so the number of batteries exchange stations need to have in stock would also be limited. Or we'll have to wait for battery technology to advance a lot, so we can have a decent range with small and light batteries. Then the packaging would be less critical and standard battery sizes could be a solution. Until then, I don't think we'll see exchange batteries being offered at petrol stations.
{ "domain": "engineering.stackexchange", "id": 2171, "tags": "mechanical-engineering, electrical-engineering, battery, electric-vehicles" }
Egg-Detection in Pastries: An Analysis of Heat-Altered Molecule Identification
Question: I would like to know if anybody here knows of a method to detect the presence of ovalbumin — or any unique, egg-related molecule — in a baked good. Here I am anticipating that the "unique egg-related molecule" might alter under high temperatures, and so the suspect molecule for detection may or may not be any molecule which is present in a fresh, uncooked egg. I'm thinking of something that can be brushed on to a pastry to evolve a color change. Any help is appreciated. Answer: There are a number of commercial kits available to determine the presence of egg in commercial products, and they are based on a technique called sandwich ELISA (enzyme-linked immunosorbent assay). A recent article in Food Research International by Gomaa and Boye discusses the impact of thermal processing on allergen detection (in this case casein, egg, gluten and soy). Egg recoveries with the kits are low, ranging from 48% down to 0. It appears that in some cases detecting egg in thermally processed samples is either very difficult to do, or the egg products are transformed so significantly that they can no longer be called egg (I did not read the paper in enough detail to determine if the authors speak to this point). The kits tested in this study were from the following companies: Morinaga, Ridascreen and Ridascreen. A web search will reveal these products and additional information, but I'll leave that to the reader as I do not want this question to be an endorsement or critique of any commercial products. The alternative to these kits appears to be flow cytometry, which has better recoveries, but does not fit the OP's conditions of being economical for a mid-income business without access to a laboratory.
{ "domain": "chemistry.stackexchange", "id": 603, "tags": "organic-chemistry, everyday-chemistry, molecules, color, heat" }
Is there a general way to plot the evolutionary track of a star on HR diagram?
Question: HR diagrams for stars are available on the internet, and it is also easy to plot this for thousands of stars from their absolute magnitude and color index values (obtainable from any catalog). However, we don't see many evolutionary tracks on the HR diagram for a particular star. Even if we see one, that is probably a schematic diagram rather than a plot from data. Is it not possible to plot the evolutionary track of a star given its mass, temperature, luminosity, etc.? Is there any software/package where you can achieve this? Answer: One nice resource is the Digital demo room. You can choose the masses of the stars and their metallicity, and it creates a movie where you can see the evolution of those stars in the HR diagram. It's completely online, no need to download anything. If you are looking for something closer to research, you can download the SSE code and run it on your computer. It is a rapid stellar evolution code. Today I'd say it's quite outdated, but many modern rapid population synthesis codes are still based on it. In the related paper, Hurley et al. 2000, you can find the actual formulae used to compute the evolution. A step further would be to abandon the simple (and approximated) analytical formulae and solve the stellar fundamental equations directly, which you can do by downloading and running MESA, a state-of-the-art 1D stellar evolution code that is free, well documented and (arguably) user friendly. It will allow you to see the evolution of the star in the HR diagram, as well as the Kippenhahn diagram of the interior of the star and the temperature-density profile for any given time (and much, much more). Each of these steps that I am suggesting is an order of magnitude more complex than the previous one, requiring more time to set up and run, and more advanced knowledge of how stars work to use properly and interpret the results.
{ "domain": "astronomy.stackexchange", "id": 6499, "tags": "stellar-evolution, hr-diagram" }
Total number of isomers formed on monobromination of 2-methylbutane
Question: What is the total number of isomers formed on monobromination of 2-methylbutane (including configurational and stereoisomers)? I tried working it out. However, I found only five: According to the source, the answer is six. Which structure am I missing? Answer: Here are the six structures that your source is probably referencing.
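As a bookkeeping check on the count of six, the tally can be done site by site (the site labels and chirality annotations below are my own summary of the standard result, not the source's drawn structures):

```python
# Monobromination sites on 2-methylbutane, CH3-CH(CH3)-CH2-CH3,
# and whether substitution at that site creates a stereocentre.
sites = {
    "1-bromo-2-methylbutane": True,    # Br at C1: C2 becomes chiral -> R/S pair
    "2-bromo-2-methylbutane": False,   # Br at C2: no stereocentre
    "2-bromo-3-methylbutane": True,    # Br at C3: that carbon becomes chiral -> R/S pair
    "1-bromo-3-methylbutane": False,   # Br at C4: no stereocentre
}

# Chiral products count twice (two enantiomers), achiral ones once.
total = sum(2 if chiral else 1 for chiral in sites.values())
print(total)  # 6
```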
{ "domain": "chemistry.stackexchange", "id": 13930, "tags": "organic-chemistry, stereochemistry, chirality, halogenation" }
How much of arctic ice is from snow fall versus frozen sea water?
Question: How much of arctic ice is from snow fall versus frozen sea water? I presume arctic ice comes from both sources. Is there a way to tell how the ice was formed? In other words, if you made a core through the arctic ice cap, could you tell how much of the core was from snow versus frozen sea water? What's the difference? Answer: I'm assuming this question is asking about the sea ice in the Arctic Ocean as opposed to ice caps such as that over Greenland. The sea ice on the Arctic Ocean is predominantly (overwhelmingly!) frozen ocean water. The Arctic Ocean loses 17 to 18 thousand cubic kilometers of ice every summer, only to regain most of that melted ice during the long Arctic winter when the ocean freezes at the surface. That corresponds to a layer of ice well over a meter thick that melts and reforms every year. The Arctic is a desert, and an extreme desert during the Arctic winter. At 40 below zero (Fahrenheit or Celsius), it's simply too cold to snow. The amount of precipitation that falls on the Arctic Basin over the six to seven months or so when the ocean freezes is about a tenth of a meter. Snowfall cannot account for anywhere close to the meter+ thick layer of ice that forms during the Arctic winter. The tiny amount of snow that falls during the Arctic winter can't accumulate the way it does over ice caps. Compare with Antarctica. Antarctica, like the Arctic, is a desert. Unlike the Arctic, Antarctica is a continent. The little snow that does fall accumulates over the millennia, and longer. The Antarctic ice caps have existed for millions of years. Now look at the Arctic. There is very little Arctic sea ice that is over ten years old. (Thanks to global warming, there is very little Arctic sea ice that is over five years old nowadays.)
{ "domain": "earthscience.stackexchange", "id": 1563, "tags": "oceanography, snow, sea-ice, arctic, cryosphere" }
Can hydrogen alone be used as a fuel?
Question: I have always thought that fuels such as petrol, diesel, etc. are burnt to move the pistons in a car (I am not bringing in the physics). Why can't hydrogen be used as a fuel? I know about the Hindenburg disaster, but the same thing could have happened if the Hindenburg had been filled with petrol. Both are combustible, and that's what we want; petrol and hydrogen do the same job. When hydrogen is burnt there is no smoke, so it is better for the environment. PS: Just to make this question a bit more understandable, I have mentioned the word "alone" in the question. Answer: Hydrogen fuel cells, which produce electricity from hydrogen and oxygen, are over a hundred years old: https://www.thoughtco.com/hydrogen-fuel-cells-1991799. The recently introduced Toyota Mirai runs off a hydrogen fuel cell (www.toyota.co.uk/mirai), as do the Hyundai ix35 and Honda Clarity. There aren't very many hydrogen filling stations in the UK, which rather limits its appeal.
{ "domain": "chemistry.stackexchange", "id": 8535, "tags": "combustion, hydrogen, fuel" }
Is there a definition for describing nuclear material undergoing fission/fusion?
Question: Is there a term similar to a piece of coal burning as embers? Can nuclear material undergoing the reaction be transported? Answer: Is there a definition for describing nuclear material undergoing fission/fusion? Is there a term similar to a piece of coal burning as embers? No, mainly because of the complexity and size of the systems. Can nuclear material undergoing the reaction be transported? For fusion to happen you need a tokamak or something like that, unless you mean a hydrogen bomb while exploding. For fission, a reactor or a bomb. "Critical" is a term used when fission starts. Nothing like transportable embers.
{ "domain": "physics.stackexchange", "id": 56423, "tags": "nuclear-physics, physical-chemistry, chemical-potential" }
Complexity of 4-coloring a map with constraints
Question: The well-known Four color theorem states that every map which is divided into regions, can be colored using 4 colors such that no two adjacent regions have the same color. In fact, there exists a quadratic algorithm for 4-coloring planar graphs. Suppose you are given a map (e.g. the world's map) and a list of $k$ constraints, e.g. Greece is colored blue, and Spain, Italy and Uruguay are red. Can this problem be solved in poly time if $k$ is part of the input? Can this be solved in poly time if $k$ is fixed (i.e. is the problem fixed parameter tractable with respect to $k$)? Answer: It is $NP$-complete. Consider a graph $G$ which is modified by duplicating every vertex, and connecting every duplicate vertex to its original. Then if we constrain all the duplicate vertices to a fixed color, then the thus obtained graph is 4-colorable (with constraints) if and only if the original graph is 3-colorable.
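To make the reduction concrete, here is a small brute-force sketch (the helper names and test graphs are purely illustrative; brute force is only viable for tiny graphs):

```python
from itertools import product

def colorable(n, edges, colors, fixed=None):
    """Brute-force: can the n-vertex graph be properly colored with the
    given palette, respecting optional fixed color assignments?"""
    fixed = fixed or {}
    for assign in product(range(colors), repeat=n):
        if any(assign[v] != c for v, c in fixed.items()):
            continue
        if all(assign[u] != assign[v] for u, v in edges):
            return True
    return False

def reduce_3col_to_constrained_4col(n, edges):
    """Duplicate every vertex, join each duplicate to its original,
    and pin every duplicate to the fourth color (color 3)."""
    new_edges = list(edges) + [(v, n + v) for v in range(n)]
    fixed = {n + v: 3 for v in range(n)}
    return 2 * n, new_edges, fixed

# Triangle: 3-colorable, so the reduced instance is constrained-4-colorable.
tri = [(0, 1), (1, 2), (0, 2)]
m, e, f = reduce_3col_to_constrained_4col(3, tri)
print(colorable(3, tri, 3), colorable(m, e, 4, f))  # True True

# K4: not 3-colorable, so the reduced instance is not constrained-4-colorable.
k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
m, e, f = reduce_3col_to_constrained_4col(4, k4)
print(colorable(4, k4, 3), colorable(m, e, 4, f))  # False False
```

Each original vertex is adjacent to its pinned duplicate, so it cannot use color 3, which is exactly what forces a 3-coloring of the original graph.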
{ "domain": "cs.stackexchange", "id": 4124, "tags": "complexity-theory, graphs, colorings, parameterized-complexity, planar-graphs" }
QED: Would atoms without electrons be visible?
Question: I have been reading a lot of QED books lately, and understand (as well as possible anyway) the interaction between electrons and photons. But I can't seem to get a clear indication of the interaction between photons and protons. It seems that normal light (not talking about high-energy levels or anything exciting, just the stuff that comes out of a light bulb) would be insufficient to really produce a reflection, but, so far, it depends upon who I ask. That said, to boil down what I am really trying to determine: Would otherwise-normal atoms (or matter, really) with no electrons be visible? Would the protons take up the role normally provided by the electrons and cause a similar scattering of light, or would it really just mess things up? Answer: Ordinary light has far too little energy to significantly affect protons. But gamma rays are the result of interactions between protons, neutrons and photons in an unstable nucleus (i.e., a radioactive atom). Normal atoms without their electrons are positively charged and would not form ordinary matter but an exploding gas. Most ordinary experience would become invalid.
{ "domain": "physics.stackexchange", "id": 5315, "tags": "quantum-electrodynamics" }
A little confusion with the Knapsack problem (a worked example)
Question: I'm going through a worked example on the Knapsack problem: My problem is that I don't quite follow the last bullet point. Where does $x_4 = 4/5$ come from? I know $x_4$ has to be a fraction of 1 (otherwise it breaks the inequality), but why specifically $4/5$? Any insight is highly appreciated. Answer: The (non-negative) variables $x_1,x_2,x_3,x_4$ must satisfy the constraint $$ 2x_1+2x_2+4x_3+5x_4 \leq 8. $$ If $x_1=x_2=1$ then this simplifies to $$ 4x_3+5x_4 \leq 4, $$ which implies in particular that $x_3 \leq 1$ and $x_4 \leq 4/5$ (since $x_3,x_4 \geq 0$).
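To make the arithmetic concrete, here is a tiny sketch (assuming, as in the worked example, that the solution takes $x_1 = x_2 = 1$ and $x_3 = 0$, so item 4 gets whatever fraction fits in the remaining capacity):

```python
from fractions import Fraction

def max_fraction(remaining_capacity, item_weight):
    """Largest x in [0, 1] with x * item_weight <= remaining_capacity."""
    return min(Fraction(1), Fraction(remaining_capacity, item_weight))

capacity = 8
weights = [2, 2, 4, 5]       # w1..w4 from the constraint
x = [1, 1, 0, None]          # x1 = x2 = 1, x3 = 0; x4 still to be set
used = sum(w * xi for w, xi in zip(weights, x) if xi is not None)

# Remaining capacity is 8 - 4 = 4, and item 4 weighs 5, hence x4 = 4/5.
x[3] = max_fraction(capacity - used, weights[3])
print(x[3])  # 4/5
```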
{ "domain": "cs.stackexchange", "id": 4475, "tags": "algorithms, dynamic-programming, linear-programming" }
Saving row versions in MySQL
Question: I just want to confirm that I am versioning my "versionable" fields correctly, and that there isn't another, better way to do this. I have a table event, a record of an event in time. It has some fields that I want to be invariant, like its ID and the event series it is associated with. It also has some fields that I want to be editable, like the date or the description. The other issue is that I have foreign keys on the auto_increment ids of both tables, which is why I think I need two tables and not just one, right? But I also want to keep the history of the variant fields, so I created two tables: invariant id int series int active boolean and variant eventID int //foreign key to the invariant ID field id int //this table needs its own id which serves as a foreign key on another table date Date description varchar(255) active boolean When an edit is made to the variant fields, I am switching the active boolean on existing rows with the same eventID to false. Then when I insert my new version, I can get just the latest version on the invariant-variant join by specifying where active=true. If/when I want to delete the event entirely, I am setting active to false in the invariant table. As I said at the top, I just want to confirm that this is an optimal solution for the specified requirements, or if there are better ways or things I am not understanding. Answer: Do not use an 'active' column as a way of joining a table. The other issue is that I have foreign keys on the auto_increment ids of both tables I'm assuming you mean the invariant & variant tables both have primary keys which are auto-incremented. If so, this is fine. Most tables have an auto-incremented primary key. You could put all the columns onto the variant table (which is where they belong) and have an audit table. The audit table would contain the old value(s) & new value(s) along with the changed date.
If you wish to continue with your unique versioning system, I would still remove the active indicator on both tables, and instead sort by latest date to find the current. If no rows exist on invariant, either variant is 'inactive' or should be deleted.
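To illustrate the audit-table alternative, here is a minimal sketch using Python's built-in SQLite (the table and column names are illustrative, not the question's exact schema; a trigger records old and new values on every update):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE event (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    series INTEGER NOT NULL,
    date TEXT NOT NULL,
    description TEXT NOT NULL
);
CREATE TABLE event_audit (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    event_id INTEGER NOT NULL REFERENCES event(id),
    old_date TEXT, new_date TEXT,
    old_description TEXT, new_description TEXT,
    changed_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Record every change automatically; the live row always holds the current version.
CREATE TRIGGER event_update AFTER UPDATE ON event
BEGIN
    INSERT INTO event_audit (event_id, old_date, new_date,
                             old_description, new_description)
    VALUES (OLD.id, OLD.date, NEW.date, OLD.description, NEW.description);
END;
""")
conn.execute("INSERT INTO event (series, date, description) "
             "VALUES (1, '2014-01-01', 'kickoff')")
conn.execute("UPDATE event SET date = '2014-02-01' WHERE id = 1")

current = conn.execute("SELECT date FROM event WHERE id = 1").fetchone()[0]
history = conn.execute("SELECT old_date, new_date FROM event_audit").fetchall()
print(current, history)  # 2014-02-01 [('2014-01-01', '2014-02-01')]
```

The current version is read straight from the main table with no active flag, while the full edit history lives in the audit table.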
{ "domain": "codereview.stackexchange", "id": 34320, "tags": "sql, mysql" }
What are the effects of working with a categorical dataset?
Question: I am working on a classification problem where 90% of the features in the dataset are categorical. It is a binary classification problem, and the classes are heavily imbalanced. I performed SMOTE oversampling and created a model. I also tried a similar approach with undersampling. Both methods, with logistic regression, give mediocre performance. I want to know how having too many categorical variables impacts the model, and a possibly efficient way to approach the problem. feature1: 1-3 feature2: 0-1 feature3: 0-3 feature4: 1-4 feature5: 0-2 feature6: 0-5 feature7: 1-4 feature8: continuous (max 10) feature9: continuous (max 10) class: 0-1 Answer: You need to distinguish a few problems here: how well your data can classify some outcome, what features should be used, and how to deal with imbalanced classes. Categorical features are not a problem per se. At the current stage, feature selection might be an issue. Thus, use logit with lasso or ridge to shrink features which are not too helpful (this happens automatically). Dummy/one-hot encoding would also be worth a try (jointly with lasso). https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeClassifier.html
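To show what dummy/one-hot encoding actually does, here is a library-free sketch (in practice pandas' `get_dummies` or scikit-learn's `OneHotEncoder` would do this; the sample levels mirror feature1's 1-3 range from the question):

```python
def one_hot(values, categories):
    """Expand each categorical value into 0/1 indicator columns,
    one column per category level."""
    cols = sorted(categories)
    return [[1 if value == c else 0 for c in cols] for value in values]

# feature1 takes levels 1-3; three raw rows:
raw = [1, 3, 2]
encoded = one_hot(raw, {1, 2, 3})
print(encoded)  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```

The encoded columns then feed into a penalized logit, where lasso can shrink the coefficients of unhelpful indicator columns toward zero.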
{ "domain": "datascience.stackexchange", "id": 5803, "tags": "machine-learning, classification, data" }
Carbon with two double bonds and oxygen name
Question: I have a compound like this: $$\ce{CH2=C=O}$$ I wonder if it is a functional group or a normal compound. As a functional group I couldn't find it. Is it a normal compound or a functional group, and what is its IUPAC name? Answer: According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), the name of the given compound is constructed systematically as for ketones. Therefore, the preferred IUPAC name (PIN) is ethenone. The name ketene is retained, but only for general nomenclature.
{ "domain": "chemistry.stackexchange", "id": 3296, "tags": "organic-chemistry, nomenclature" }
Experiment preparation - Box to reduce background photon noise
Question: I am planning to perform an experiment with a photomultiplier tube (PMT) in my lab. The system is not large, with a length scale of about 10 cm. I am looking to reduce the background photon noise as much as I can, and for this I thought to perform the experiment inside a box of the above length scale. Does anyone here have recommendations for a box that can do that job? I was looking at Amazon/eBay/AliExpress, but there are so many options, so I thought asking other people would be beneficial. *My budget is flexible. Answer: Use a "Pelican box" or similar black and waterproof cases. Works perfectly.
{ "domain": "physics.stackexchange", "id": 93419, "tags": "experimental-physics, experimental-technique, noise" }
How accurate is the magnetic field determined by assuming a circular current carrying loop as a magnetic dipole?
Question: I've learnt that a circular loop of area $A$ carrying a current $i$ produces a magnetic moment equal to $iA$, and the field due to the loop can be considered to be due to a magnetic dipole, consisting of two magnetic charges (magnetic monopoles) - positive (north) and negative (south), each of pole strength $m$ and separated by a distance $d$ such that $md=iA$. By considering a circular loop carrying a current as a magnetic dipole, the magnetic field could be easily calculated at any point using Coulomb's law of magnetism and the law of superposition, instead of using the Biot-Savart law. However, this analogy is not completely perfect because magnetic monopoles have not been observed so far. Also, magnetic field lines do not start or end at a particular point; they only form closed loops, and this is contrary to the dipole picture. Further, the magnetic field inside the dipole system is opposite in direction when compared to that of a circular loop. This can be inferred in the central regions in the following diagrams: Left: Magnetic field due to a current-carrying circular loop. Right: Magnetic field due to positive (north) and negative (south) magnetic charges (magnetic monopoles) separated by some distance (magnetic dipole). Image source: Magnetic dipole - Wikipedia It can be seen that the magnetic field lines are similar in both diagrams except in the central region, where the directions are opposite. Also, the field at the assumed pole cannot be determined by using Coulomb's law of magnetism as $r=0$ in $\frac{\mu_0}{4\pi}\frac{m}{r^2}$. However, it can be easily calculated using the Biot-Savart law. My questions are as follows: Is the magnetic dipole picture of a circular loop carrying a current a kind of "approximation"? Or in other words, is the field determined by assuming magnetic fields due to magnetic monopoles different from the original field due to the circular loop?
In short, how accurate is the magnetic field determined by assuming a circular current-carrying loop as a magnetic dipole? Does the pole picture cause variation only in the direction, or does it also include a difference in the magnitude of the field? Where does the pole picture give an accurate result for both magnitude and direction of the magnetic field around a circular loop carrying a current? What are the specifications for the choice of $m$ and $d$? From $md=iA$, the product $md$ is a constant for a particular $i$ and $A$; however, we're free to choose $m$ and $d$ in such a way that it satisfies the condition. So does a small value of $d$ (and large value of $m$) give more accurate results, or is it the other way round? Answer: I feel like a lot of these questions are asking the same thing, so do tell me if I've misinterpreted any. Is the magnetic dipole picture of a circular loop carrying a current a kind of "approximation"? Or in other words, is the field determined by assuming magnetic fields due to magnetic monopoles different from the original field due to the circular loop? Yes. For example, as you can clearly see in the figure, the two fields don't even point in the same direction in the interior. In short, how accurate is the magnetic field determined by assuming a circular current-carrying loop as a magnetic dipole? It gets more accurate the further away you get, because both of them approach the ideal dipole field, $$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4 \pi} \left(\frac{3 \hat{\mathbf{r}} (\mathbf{m} \cdot \hat{\mathbf{r}}) - \mathbf{m}}{r^3} \right).$$ More precisely, at large distances, both of these fields differ from the ideal dipole field, in both magnitude and direction, by a fractional amount proportional to $\ell/r$, where $\ell$ is a characteristic length scale. For the loop picture, $\ell = \sqrt{A}$. For the pole picture, $\ell = d$. As you go to smaller $r$, the divergences get a lot bigger, e.g.
for $r \lesssim \ell$ the fields are completely different from the ideal dipole field, and from each other. Does the pole picture cause variation only in the direction, or does it also include a difference in the magnitude of the field? Yes. For example, as you can see from the figure, the magnitudes diverge at the poles and at the loop, and there's no corresponding divergence in the other picture. Where does the pole picture give an accurate result for both magnitude and direction of the magnetic field around a circular loop carrying a current? At large $r$. What are the specifications for the choice of $m$ and $d$? From $md=iA$, the product $md$ is a constant for a particular $i$ and $A$; however, we're free to choose $m$ and $d$ in such a way that it satisfies the condition. So does a small value of $d$ (and large value of $m$) give more accurate results, or is it the other way round? What is exactly true is that in the limit $d \to 0$ and $A \to 0$, the two fields approach each other, because they both become the same as the ideal dipole field. When $A \neq 0$, it's not clear what value of $d$ gets a more "accurate" result, because it depends on how you define "accuracy". For example, if you wanted the field at $r = 0$ to be as accurate as possible, then you should send $d \to \infty$, because that gets zero field at $r = 0$. But then that would get the field at $r \approx \sqrt{A}$ totally wrong. If an IIT JEE problem asks you what the most "accurate" result is, this is a vague and undefined criterion, and the correct answer is whatever random thing the test writer had in mind at the moment.
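To see the "accurate at large $r$" claim numerically, here is a quick on-axis comparison (the current and radius are made-up illustrative values; the on-axis loop field is the standard Biot-Savart result):

```python
import math

mu0 = 4 * math.pi * 1e-7     # vacuum permeability
i, R = 1.0, 0.05             # illustrative values: 1 A current, 5 cm loop radius
m = i * math.pi * R**2       # magnetic moment m = iA of the loop

def B_loop_axis(z):
    # Exact on-axis field of a circular current loop (Biot-Savart result)
    return mu0 * i * R**2 / (2 * (R**2 + z**2) ** 1.5)

def B_dipole_axis(z):
    # Ideal-dipole field on the axis: B = mu0 * m / (2 * pi * z^3)
    return mu0 * m / (2 * math.pi * z**3)

for z in [0.1, 0.5, 2.0]:    # distances in metres
    ratio = B_dipole_axis(z) / B_loop_axis(z)
    print(f"z = {z} m, dipole/loop = {ratio:.4f}")  # ratio approaches 1 as z grows
```

On the axis the ratio is $\bigl((R^2+z^2)/z^2\bigr)^{3/2}$, which deviates from 1 by a fractional amount of order $(R/z)^2$, consistent with the $\ell/r$ scaling argument above.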
{ "domain": "physics.stackexchange", "id": 65344, "tags": "electromagnetism, magnetic-fields, magnetic-moment, magnetic-monopoles" }
The greatest rotational inertia of a system?
Question: Consider this scenario: There are $n$ tiny balls (tiny means we can fix several balls in one place) with masses $m_1,m_2,...$ fixed on a stick whose mass we may ignore. Now we rotate the system (stick and balls) around the center of mass. I feel like the greatest rotational inertia will be achieved when half the mass of the balls is at one end of the stick while the other half is at the opposite end. But I don't know how to prove it mathematically. Answer: Say there are $n$ balls, the total mass of all the balls is $m$, and the length of the stick is $L$. An arrangement such as $A$ wouldn't be the maximum: the moment of inertia depends on $m_i d_i^2$, and the $d_i$ could be greater by going to arrangement $B$. If we moved a proportion of the mass $km$ from one end of $B$ to make $C$, the COM is now a distance $kL$ from one end and $(1-k)L$ from the other. The moment of inertia for $B$ is $$\frac{mL^2}{4}\tag1$$ For $C$ it's $(kL)^2(1-k)m+((1-k)L)^2km$, which simplifies to $$k(1-k)mL^2\tag2$$ Expression 2) is only greater than 1) if $$k(1-k)\gt \frac{1}{4}\tag3$$ $$4k^2-4k+1 \lt0\tag4$$ $$(2k-1)^2\lt0\tag5$$ That's not possible, although we can get $0$ if $k=1/2$. So changing the proportion of the mass at the ends of $B$ can't increase the moment of inertia. If some of the mass was moved from one end of $B$ to a place not at the other end of the rod, that wouldn't help either, so arrangement $B$ gives the maximum moment of inertia.
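A quick numerical check of the algebra above (the helper name and sample values are my own; mass $km$ sits at one end, $(1-k)m$ at the other, and the moment is taken about the COM):

```python
L, m = 1.0, 1.0  # stick length and total ball mass (arbitrary units)

def inertia_two_ends(k):
    """Moment of inertia about the COM with mass k*m at one end of the
    stick and (1-k)*m at the other; equals k*(1-k)*m*L**2."""
    com = (1 - k) * L                  # COM position measured from the k*m end
    return k * m * com**2 + (1 - k) * m * (L - com) ** 2

# k = 1/2 (equal split) gives m*L^2/4; any other split gives strictly less.
best = inertia_two_ends(0.5)
print(best)  # 0.25
for k in [0.1, 0.3, 0.45, 0.6, 0.9]:
    assert inertia_two_ends(k) <= best
```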
{ "domain": "physics.stackexchange", "id": 84829, "tags": "rotational-dynamics, reference-frames, moment-of-inertia, rigid-body-dynamics" }
Problems with ROSARIA
Question: Hello. I have a problem with running the Pioneer. Please help me. I installed the ROSARIA package on my P3-DX robot and it works well. But what I want to do is move the robot with the keyboard. By the way, I found that there is a lot of source code in the Aria package, such as ArActionKeydrive.cpp, ArActionInput.cpp, etc. I want to know how to use these useful files at the same time. Thank you. Originally posted by Blake on ROS Answers with karma: 31 on 2012-10-30 Post score: 2 Original comments Comment by georgebrindeiro on 2012-10-31: I don't have any experience with ROSARIA, but so far I have had no problem with p2os with both the Pioneer P3-DX and P3-AT. If you are not determined to use Aria specifically, I could try to help you get started. Comment by allenh1 on 2012-10-31: I also use the Pioneer-3DX robots. I have set up 3 of them for ROS. I definitely recommend you approach this with P2OS instead of ROSARIA. You can even avoid setting up a separate laptop for it. Answer: You would use a ROS "client" node that sends messages to RosAria's /RosAria/cmd_vel topic to control robot velocity. See "Writing a Simple Publisher and Subscriber" at http://wiki.ros.org/ROS/Tutorials to get started writing your own client. Some examples you can start from are available at https://github.com/pengtang/rosaria_client, including a simple teleoperation using keyboard control. Another one that uses Python is http://wiki.ros.org/teleop_twist_keyboard; just map its cmd_vel output topic to /RosAria/cmd_vel or vice-versa. Originally posted by ReedHedges with karma: 821 on 2015-06-26 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 11568, "tags": "ros, pioneer, rosaria, aria" }
Array manipulation exercise
Question: While trying to learn more about arrays in C, I tried to write some code that did the following: Read a stream of numbers from the stdin and store them in an array Print the array in the order the numbers have been stored (i.e. print the original array) Print the array in reversed order Sort and print the array Given an array, search whether an entered value is present in an array. If it is present, print the index otherwise print that the value doesn't exist. It will be great if somebody could review it and let me know how to improve it. #include <stdio.h> #include <stdlib.h> static void printArray(int array[],int startIndex,int endIndex){ if(startIndex < endIndex){ while(startIndex <= endIndex){ printf(" %i ",array[startIndex++]); } }else{ while(startIndex >= endIndex){ printf(" %i ",array[startIndex--]); } } printf("\n"); } static int cmpfunc(const void * a,const void * b) { if(*(int*)a > *(int*)b) return 1; else return -1; } static void sortedArray(int* originalArray){ qsort((void*)originalArray,(sizeof(originalArray)/sizeof(originalArray[0])),sizeof(originalArray[0]),cmpfunc); return; } static int getIndex(int value,int array[],int size){ int i; for(i = 0;i < size;i++){ if(array[i] == value){ return i; } } return -1; } static void identifyTheIndices(int *arbitraryArray,int size){ char buf[10]; printf("Enter the value to search for..enter q to exit\n"); while(fgets(buf,sizeof buf,stdin) != NULL){ if(buf[0] == 'q'){ break; } char *end; int value = (int) strtol(buf,&end,0); if(end != buf){ int currentIndex = getIndex(value,arbitraryArray,size); if(currentIndex > -1){ printf("Found the entered value %i at index %i\n",value,currentIndex); }else{ printf("Entered value %i doesn't exist\n",value); } } printf("Enter the value to search for..enter q to exit\n"); } } int main(int argc,char **argv) { int counter = 0; if(argc > 1){ int originalArray[argc-1]; while(counter < (argc - 1)){ int currentValue = atoi(argv[counter+1]); printf("Reading input value %i into 
array \n",currentValue); originalArray[counter] = currentValue; counter++; } int size = sizeof(originalArray)/sizeof(originalArray[0]); printf("Printing out the original array\n"); printArray(originalArray,0,size - 1); printf("Printing out the array in reverse\n"); printArray(originalArray,size - 1,0); printf("Sorting the array in ascending order\n"); qsort((void*)originalArray,size,sizeof(originalArray[0]),cmpfunc); printf("Printing out the sorted array\n"); printArray(originalArray,0,size-1); int arr[] = { 47, 71, 5, 58, 95, 22, 61, 0, 47 }; identifyTheIndices(arr,sizeof(arr)/sizeof(arr[0])); } return 0; } Answer: I disagree with @vishram0709 about taking values from the command line. There is nothing wrong with this. It is often useful to have the option of taking values from the command line; you can also prompt for input values if none are given on the command line. The fault he pointed out is caused by not validating the input values. The same error could occur if the value was input using scanf - and you saw in one of your earlier questions that scanf has its own issues. To check the input values you can use errno = 0; char *end; long value = strtol(string, &end, 0); if (errno == ERANGE) { perror(string); exit(EXIT_FAILURE); } The input values above are now long not int because strtol converts to long and detects ERANGE based upon the size of the maximum possible long. If you wanted to detect over-range int values you could use: char *end; long value = strtol(string, &end, 0); if ((int) value != value) { fprintf(stderr, "%s: Result too large\n", string); exit(EXIT_FAILURE); } Some other observations: printArray would be more normal taking a const start point and a length. static void printArray(const int array[], size_t n); and have another printReverseArray to print the reversed array. cmpfunc should return 0 if the items match. sortedArray is unused. 
It is also wrong in that (sizeof(originalArray)/sizeof(originalArray[0])), gives something meaningless when originalArray is a pointer. sizeof(originalArray) gives the size of the pointer, not the array that it points to. Later on in main, you do it right, with int size = sizeof(originalArray)/sizeof(originalArray[0]); because here originalArray is a real array, not a pointer. But you already had the array size (argc - 1), so this was unnecessary. the array parameter to getIndex should be const. But the function is unreliable, as it fails for negative numbers. To make it work you need to separate the return value from the success/failure. static int getIndex(int value, int array[], int size, int *index); add some spaces, eg after if, while, for, ; and , etc. you use the variable names array, originalArray and arbitraryArray to identify essentially the same thing - an array. Don't use multiple names for equivalent things without good reason. Your reading loop int counter = 0; ... int originalArray[argc-1]; while(counter < (argc - 1)){ int currentValue = atoi(argv[counter+1]); printf("Reading input value %i into array \n",currentValue); originalArray[counter] = currentValue; counter++; } would be neater as: --argc; int array[argc]; for (int i = 0; i < argc; ++i) { int v = atoi(argv[i + 1]); printf("Reading input value %i into array \n", v); array[i] = v; } shorter variable names are ok, and indeed preferable, where their scope is restricted.
{ "domain": "codereview.stackexchange", "id": 5837, "tags": "c, array" }
Filter a list of integers by parity
Question: Exercise 2.20: The procedures +, *, and list take arbitrary numbers of arguments. One way to define such procedures is to use define with dotted-tail notation. In a procedure definition, a parameter list that has a dot before the last parameter name indicates that, when the procedure is called, the initial parameters (if any) will have as values the initial arguments, as usual, but the final parameter's value will be a list of any remaining arguments. For instance, given the definition

    (define (f x y . z) <body>)

the procedure f can be called with two or more arguments. If we evaluate

    (f 1 2 3 4 5 6)

then in the body of f, x will be 1, y will be 2, and z will be the list (3 4 5 6). Given the definition

    (define (g . w) <body>)

the procedure g can be called with zero or more arguments. If we evaluate

    (g 1 2 3 4 5 6)

then in the body of g, w will be the list (1 2 3 4 5 6).

Use this notation to write a procedure same-parity that takes one or more integers and returns a list of all the arguments that have the same even-odd parity as the first argument. For example,

    (same-parity 1 2 3 4 5 6 7)
    (1 3 5 7)
    (same-parity 2 3 4 5 6 7)
    (2 4 6)

I wrote the following:

    (define (same-parity n . l)
      (define (even? n) (= (remainder n 2) 0))
      (define (odd? n) (not (even? n)))
      (define (rec n . l)
        (define (include-and-go)
          (cons (car l) (apply rec (cons n (cdr l)))))
        (define (exclude-and-go)
          (apply rec (cons n (cdr l))))
        (if (null? l)
            null
            (if (or (and (even? (car l)) (even? n))
                    (and (odd? (car l)) (odd? n)))
                (include-and-go)
                (exclude-and-go))))
      (cons n (apply rec (cons n l))))

Is there a better way?

Answer: If one were to abstract out the filter function (named filt here), it would greatly simplify writing same-parity:

    (define (filt test? lis)
      (cond ((null? lis) lis)
            ((test? (car lis)) (cons (car lis) (filt test? (cdr lis))))
            (else (filt test? (cdr lis)))))

    (define (same-parity n . lis)
      (filt (if (odd? n) odd? even?)
            lis))

The filter function is common enough that it is part of the Scheme standard and is appropriately named filter.
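For comparison only (this is an illustration of mine, not part of the original Scheme answer), the same pick-the-predicate-once-then-filter shape reads naturally in Python as well:

```python
def same_parity(first, *rest):
    """Keep the arguments whose even/odd parity matches the first one,
    mirroring the Scheme version: choose the test once, then filter."""
    test = (lambda x: x % 2) if first % 2 else (lambda x: not x % 2)
    return [first] + [x for x in rest if test(x)]

print(same_parity(1, 2, 3, 4, 5, 6, 7))  # [1, 3, 5, 7]
print(same_parity(2, 3, 4, 5, 6, 7))     # [2, 4, 6]
```

The `*rest` parameter plays the role of the dotted-tail parameter in the Scheme definition.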
{ "domain": "codereview.stackexchange", "id": 223, "tags": "lisp, scheme, sicp" }
"Lowest unique number" challenge
Question: Again, out of fun on Saturday night, I decided to solve the following problem from CodeEval. The challenge is finding the index of the lowest unique number.

There is a game where each player picks a number from 1 to 9, writes it on a paper and gives it to a guide. A player wins if his number is the lowest unique one. We may have 10-20 players in our game.

Input sample: Your program should accept as its first argument a path to a filename. You're a guide and you're given a set of numbers from the players for each round of the game. E.g. 2 rounds of the game look this way:

    3 3 9 1 6 5 8 1 5 3
    9 2 9 9 1 8 8 8 2 1 1

Output sample: Print the winner's position or 0 in case there is no winner. In the first line of the input sample the lowest unique number is 6, so player 5 wins:

    5
    0

Suppose you're given a set of numbers from the players for a round of the game. E.g. 2 rounds of the game look this way:

    3 3 9 1 6 5 8 1 5 3
    9 2 9 9 1 8 8 8 2 1 1

The output is the winner's position, or 0 in case there is no winner. In the first line of the input sample the lowest unique number is 6, so player 5 wins (it was player number 5 who threw number 6). And in the second set there is no unique number, so the output will be 0. So, the overall output will be:

    5
    0

I solved this problem correctly using LINQ in the following code, but the performance is not that great at 255 ms for about 100 data points. Any suggestion for improving performance or perhaps another novel solution? The data have to be read from a text file in CodeEval, so my code is structured around that.
    static void Main(string[] args)
    {
        using (StreamReader reader = File.OpenText(args[0]))
        {
            while (!reader.EndOfStream)
            {
                var linevalue = reader.ReadLine();
                if (!string.IsNullOrEmpty(linevalue))
                {
                    var numbers = linevalue.Split(' ');
                    var groups = numbers.GroupBy(x => x).Select(g => new { number = g.Key, freq = g.Count() });
                    var nonRepeated = groups.Where(x => x.freq == 1).ToList();
                    if (nonRepeated.Count() != 0)
                    {
                        var singles = nonRepeated.Select(x => x.number).ToList();
                        var smallest = singles.Min();
                        var playerNumber = Array.IndexOf(numbers, smallest);
                        Console.WriteLine(playerNumber + 1);
                    }
                    else
                    {
                        Console.WriteLine("0");
                    }
                }
            }
        }
    }

Answer: I threw your code through a profiler and it told me that ToList was responsible for quite a bit of your execution time. So why is that? You have 2 ToList calls in your code:

    var nonRepeated = groups.Where(x => x.freq == 1).ToList();
    if (nonRepeated.Count() != 0)
    {
        var singles = nonRepeated.Select(x => x.number).ToList();
        var smallest = singles.Min();

So what happens here? You start by iterating through groups to find those with freq equal to 1. Count is optimized for lists, so that one should be constant time. Then you iterate through nonRepeated to select number and then finally you iterate through it to select the minimum value. That's a lot of iterations. Let's try to cut that down:

    var nonRepeated = groups.Where(x => x.freq == 1)
                            .Min(x => x.number);
    if (nonRepeated != null)
    {
        var playerNumber = Array.IndexOf(numbers, nonRepeated);

By comparing the execution time of this and the other one (on my system) it seems we have achieved a small improvement. Not enough to brag about though. Since we don't have any ToList calls anymore, I profiled the code once again. This time it is Min that takes up the time (not really surprising - it is forcing the GroupBy, Select and Where to be evaluated). So our bottleneck is (still) the LINQ queries. Can we speed it up? Hint: LINQ usually has a slight overhead compared to doing things 'manually'.
Let's try to do these things ourselves, instead of relying on LINQ. Let's replace that GroupBy and Select stuff:

    var values = new Dictionary<string, int>();
    foreach (var number in numbers)
    {
        if (values.ContainsKey(number)) //No curly-brackets to keep it short.
            values[number]++;
        else
            values.Add(number, 1);
    }

A benchmark of the code told me that was a clear improvement. Nice! But the profiler still says that Where and Min are expensive. Let's do those manually too.

    string minValue = null;
    foreach (var value in values)
    {
        if (value.Value == 1)
        {
            if (minValue == null || String.Compare(value.Key, minValue, StringComparison.InvariantCulture) == -1)
                minValue = value.Key;
        }
    }
    if (minValue != null)
    ...

Once again, it seems we got an overall improvement in performance. Interestingly, now the bottlenecks seem to be WriteLine and OpenText. We could try to experiment with other ways of reading from file/writing to console, but I don't really feel like doing that right now (and I don't think the 'easy' methods like File.ReadAllLines and File.ReadLines are much faster than your way, if at all). You could try to play with StringBuilder and only write to console once, but in my experience it is rarely an improvement when the method is only executed once. But maybe you can do some other things to squeeze just a little bit more out of your method. First of all, is the IsNullOrEmpty really necessary? It's a safety mechanism and in real life we would probably want to have it. But this is a coding exercise and we are all about performance right now. Another thing you might want to consider is that strings are expensive. As it is, you are currently working with all your numbers as strings. Split creates a bunch of new strings and every time we compare values we do a string comparison. It might be worth converting your values to int or char instead. HINT: If you try to convert to int values, it can be done by using char ASCII values.
You know that the values will always be in the range [1-9]. The ASCII value for '1' is 49, '2' is 50, etc. So if you do value[0] - 48, you can convert your strings to ints with very little overhead. When you want to optimize something it is always a good idea to measure and profile it and see just where the time is spent. Sometimes you can get big improvements with very few changes. But you can also make things worse with very few changes. After the changes mentioned in this post (except the string to int conversion) are applied, the code will look something like this:

    using (StreamReader reader = File.OpenText(args[0]))
    {
        while (!reader.EndOfStream)
        {
            var linevalue = reader.ReadLine();
            var numbers = linevalue.Split(' ');
            var values = new Dictionary<string, int>();
            string minValue = null;
            foreach (var number in numbers)
            {
                if (values.ContainsKey(number))
                    values[number]++;
                else
                    values.Add(number, 1);
            }
            foreach (var value in values)
            {
                if (value.Value == 1)
                {
                    if (minValue == null || String.Compare(value.Key, minValue, StringComparison.InvariantCulture) == -1)
                        minValue = value.Key;
                }
            }
            if (minValue != null)
            {
                var playerNumber = Array.IndexOf(numbers, minValue);
                Console.WriteLine(playerNumber + 1);
            }
            else
            {
                Console.WriteLine("0");
            }
        }
    }

As you can see, you have 2 loops within the while now. It might be possible to merge them into one by doing something clever. I tried the code on CodeEval and got a 181ms execution time. That is a 74 ms improvement. To motivate you a bit, I know it can be done (at least a little) faster. ;)
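To see the algorithm stripped of the I/O and C# specifics, here is a sketch of the same count-then-scan idea in Python (the function name is mine; this is an illustration, not the reviewed code):

```python
from collections import Counter

def winner(line):
    """1-based position of the player holding the lowest unique number,
    or 0 when every number in the round is repeated."""
    numbers = [int(tok) for tok in line.split()]
    counts = Counter(numbers)                        # one pass to count
    uniques = [n for n in numbers if counts[n] == 1]
    return numbers.index(min(uniques)) + 1 if uniques else 0

print(winner("3 3 9 1 6 5 8 1 5 3"))    # 5  (player 5 threw the 6)
print(winner("9 2 9 9 1 8 8 8 2 1 1"))  # 0  (no unique number)
```

The Counter plays the role of the Dictionary in the manual C# version: one pass to build the frequencies, one pass to find the smallest unique value.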
{ "domain": "codereview.stackexchange", "id": 9920, "tags": "c#, performance, linq" }
Young's Double Slit Experiment Intuition on Constructive Interference in Two Dimensions
Question: I have an intuition problem when imagining the interference of two waves constructively and destructively. Here is a picture of the experiment and the waves (taken from a KhanAcademy video): So how come there is constructive interference exactly in the middle, even though the waves don't add up "peak" to "peak"? (Isn't it just a bit beside the peak?) Answer: If the slits are in-phase sources, then the waves from them will arrive in phase at all points equidistant from the sources (that is, "exactly in the middle"), as well as at certain other points (given by the 'path difference' rules). Where waves arrive in phase, they interfere constructively. One such place would be where two peaks intersect on the pattern, but don't forget that the waves are progressing, and that all the points through which the intersecting peaks pass mark a line of constructive interference. Try to think about in-phase points rather than peak-on-peak.
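The 'equidistant points' argument can be checked numerically. A small sketch (the slit separation, screen distance and wavelength are illustrative values of mine, not from the question):

```python
import math

d, L = 0.25e-3, 1.0   # slit separation and slit-to-screen distance, metres (assumed)
wavelength = 500e-9   # green light (assumed)

def path_difference(y):
    """Exact difference in distance from the two slits to a point
    at height y on the screen."""
    r1 = math.hypot(L, y - d / 2)   # distance from slit 1
    r2 = math.hypot(L, y + d / 2)   # distance from slit 2
    return r2 - r1

# At the exact centre (y = 0) the two paths are equal, so waves from
# in-phase slits arrive in phase: constructive interference.
print(path_difference(0.0))  # 0.0
# The first off-centre bright fringe sits where the difference is one
# wavelength, near y = wavelength * L / d.
print(path_difference(wavelength * L / d))
```

The midpoint is bright not because two drawn peaks happen to cross there, but because the path difference is exactly zero.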
{ "domain": "physics.stackexchange", "id": 42650, "tags": "visible-light, waves, double-slit-experiment" }
How to understand the curvature of this metric?
Question: Suppose we have the metric $ ds^2 = dr^2 + \alpha^2 d\phi^2$, where $\alpha$ is a constant, $0 \leq r \leq \infty$, $ 0 \leq \phi \leq 2 \pi$ and we identify points $\phi = 0$ with points $\phi = 2\pi$. Now since we have a constant metric the Christoffel symbols are zero and thus so too is the curvature tensor. So this should be flat space. When I try to find the length along radial paths from $0$ to $R$ I find the length to be $R$, but when I find the length of fixed radius paths from $0$ to $2\pi$ I find them to be of length $2\pi \alpha$. So regardless of the position in the $r$ direction we have fixed circumference, thus it seems like this space can be embedded in 3D cylindrical coordinates as a cylinder. Question 1: Is the above correct? If it is, why are cylinders flat but not spheres or saddle points? Intuitively is it because flattening a cylinder only introduces folds whereas a sphere would have complex deformations? Question 2: If you draw a sphere in spherical coordinates where $r$, $\theta$ and $\phi$ are all drawn at right angles, a sphere would look like a rectangle with relevant sides of $\theta$ and $\phi$ identified. Since a rectangle looks flat to me, how can you be sure that you are embedding in such a way as to visualise the curvature? Is it that any coordinates where the sides are identified will reveal the curvature? Answer: I answered an imported version of this question on physics overflow. I decided to post that answer here as well. Due to formatting differences between the sites it would be inconvenient to maintain up-to-date versions of both answers, therefore I will only update the original posting of the answer unless a major modification is necessary. To answer the first question: This is the line element for a cylinder, though the variable names might make more sense when the line element is written as, $ds^2 = dz^2 + r^2 d\phi^2$, where $r$ is a constant.
In this context $z$ is the elevation on the cylinder, $r$ is the radius of the cylinder, $\phi$ is the angle which wraps around the cylinder. Cylinders are flat because they can be cut and unfolded, making them into a rectangular sheet, without stretching the material of the cylinder. More formally there exists an isometric transformation from the cylinder to the plane. The identification of $\phi=0$ with $\phi=2\pi$ is a topological property of the cylinder which doesn't necessarily say anything about its curvature. To be clear, when I say that a transformation is isometric I mean that it preserves the relative distances of points as measured locally within the manifold. Distances measured through the embedding space are irrelevant. Now suppose we wanted to map the plane to the sphere. We might start by turning the plane into a cylinder since we know we can do so isometrically. To turn the cylinder into a sphere we would have to identify all the points on the rim at the top of the cylinder. The problem with this last identification is that it sends points that were formerly a finite distance apart to the same location, meaning that the metric cannot have been preserved. The above example isn't really a proof that there is no isometric mapping from a plane to a sphere, but I think it at least gives some insight into what is going on. To answer the second question: It looks like you are thinking of identifying $\theta=0$ with $\theta=\pi$ and $\phi=0$ with $\phi=2\pi$, which would not form a sphere. Instead this would form a torus. To form a sphere from $0 \leq \theta \leq \pi$ and $0 \leq \phi \leq 2\pi$:

- For $0 < \theta < \pi$ identify $\phi=0$ with $\phi = 2\pi$.
- For $\theta = 0$ identify all values of $\phi$ with a single point.
- For $\theta = \pi$ identify all values of $\phi$ with a single point.

An important thing to realize at this point is that we still don't have a proper sphere because we haven't defined a metric.
As it stands our manifold could just be some deflated beach ball. One line element which would be consistent with the boundary conditions is, $$ds^2 = d\theta^2 + \theta(\pi-\theta) d\phi^2, $$ which is not the metric of a $2$-Sphere.
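The "same circumference at every $r$" observation that drives the cylinder identification can be checked with a few lines of throwaway numerics. This sketch, including the $\alpha$ value and the sphere comparison, is my own illustration:

```python
import math

ALPHA = 0.7  # the constant in ds^2 = dr^2 + alpha^2 dphi^2 (value assumed)

def circumference(sqrt_g_phiphi, r, steps=100_000):
    """Length of the closed curve {r = const}: integrate sqrt(g_phiphi)
    over phi in [0, 2*pi] with a plain Riemann sum."""
    dphi = 2 * math.pi / steps
    return sum(sqrt_g_phiphi(r) * dphi for _ in range(steps))

cylinder = lambda r: ALPHA               # g_phiphi = alpha^2: no r-dependence
sphere = lambda theta: math.sin(theta)   # round 2-sphere: g_phiphi = sin^2(theta)

for r in (0.5, 1.0, 1.5):
    print(round(circumference(cylinder, r), 6), round(circumference(sphere, r), 6))
# left column is 2*pi*alpha every time; right column varies, as a sphere must
```

The constant left column is the hallmark of the cylinder; on the round sphere the analogous circles shrink toward the poles, which is one face of its nonzero curvature.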
{ "domain": "physics.stackexchange", "id": 14483, "tags": "differential-geometry, curvature" }
teb_local_planner development status
Question: Hey guys, I don't know if this is the right place to ask this, but I would like to know if the local planner "teb_local_planner" will be further developed? Most of the commits were in 2016 and minor changes in 2017. The reason I am asking this is that I am using the planner at the moment (works pretty well) and just wanted to know if there will be new features in 2018? The best person to answer this would probably be @croessmann (thank you for all your great work). Thank you guys in advance! Originally posted by ce_guy on ROS Answers with karma: 77 on 2018-03-20 Post score: 0 Answer: Hi, yeah, there are still some unanswered issues and pull requests for the package. Unfortunately, I had not much spare time during the last months to address these issues and to provide further improvements. I am going to try to spend some time on improving the planner and check all the pull requests during the next weeks. I also have some ideas which would require a major update on the code; I will see what I can get in soon. Cheers, Christoph Originally posted by croesmann with karma: 2531 on 2018-03-20 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by ce_guy on 2018-03-20: Thank you very much for the fast answer, sounds great. I thought you might have finished your research on this topic. The underlying trajectory control in the planner is a model predictive control, isn't it? Do you have any book recommendations on this topic? I really want to get a deep understanding. Comment by ce_guy on 2018-03-20: I have read most of your conference papers on this topic but it's a little bit hard for me to understand everything. Maybe you have a good hint for me to get deeper into this topic (my background is computer engineering with two semesters of control theory). Thank you very much in advance!
{ "domain": "robotics.stackexchange", "id": 30389, "tags": "ros, teb-local-planner" }
What cools a bottle of water faster: ice or snow?
Question: Imagine you have a pile of snow and a pile of ice shards. You put a soda bottle at room temperature into both piles. Which bottle is going to cool down faster? Answer: The thermal conductivity of ice is higher than the thermal conductivity of snow, so ice is the winner. Ice has higher thermal conductivity because it is denser than snow. When snow is as dense as ice, it is no longer snow, it is ice. Thank you for your answers, without them I wouldn't have guessed it right :)
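As a back-of-envelope check of the conductivity argument (the conductivity values below are textbook ballpark figures I am assuming, not numbers from the answer):

```python
K_ICE, K_SNOW = 2.2, 0.15  # W/(m*K): solid ice vs fresh snow (assumed values)

def heat_flux(k, delta_t, thickness):
    """Steady-state conduction through a slab, Fourier's law: q = k*dT/dx (W/m^2)."""
    return k * delta_t / thickness

dT, dx = 20.0, 0.05  # 20 K difference across a 5 cm layer (toy numbers)
ratio = heat_flux(K_ICE, dT, dx) / heat_flux(K_SNOW, dT, dx)
print(round(ratio, 1))  # ~14.7: ice carries the bottle's heat away much faster
```

The ratio is just $k_{ice}/k_{snow}$ since the geometry cancels, but it makes the "more dense, more conductive" point concrete: trapped air is what makes snow such a good insulator.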
{ "domain": "physics.stackexchange", "id": 1183, "tags": "temperature, ice" }
What angles of releasing a pendulum will affect the speed of a spherical object hit by it?
Question: I tried to search for papers on similar experiments but I couldn't find any. Could someone please help me find resources for my experiment? It would also be great to get some suggestions for my research, like the methodology I should use, the fixed (controlled) variables, and so on. Answer: What you're trying to find out is how the release angle affects the speed. If you think about it, a larger angle increases the potential energy available to be converted into kinetic energy. To start off, think about where the ball will be hit: at the bottom of the swing, or some way up? How would that affect the results, and can you find a theoretical equation to compare the results to? You will also need to think about the momentum in the collision, because you could have energy loss in an inelastic collision. Draw a diagram and write out formulae and facts about pendulums that can help, then follow ideas until they break down in some way and try again. That said, questions need to be a little more specific for this site, because otherwise it is difficult to give a solid answer.
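For the theoretical baseline mentioned above, energy conservation gives the bob's speed just before impact. A sketch (ignoring air drag, string mass and the collision itself, and with a made-up pendulum length):

```python
import math

G = 9.81  # m/s^2
L = 1.0   # pendulum length in metres (assumed)

def speed_at_bottom(angle_deg):
    """Energy conservation m*g*h = (1/2)*m*v^2 with drop height
    h = L*(1 - cos(theta)) gives v = sqrt(2*g*L*(1 - cos(theta)))."""
    h = L * (1 - math.cos(math.radians(angle_deg)))
    return math.sqrt(2 * G * h)

for angle in (15, 30, 60, 90):
    print(angle, round(speed_at_bottom(angle), 2))  # speed grows with release angle
```

Note this is only the pendulum's speed at the bottom; how much of it the struck ball receives depends on the collision (masses and elasticity), which is the momentum part of the experiment.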
{ "domain": "physics.stackexchange", "id": 71257, "tags": "classical-mechanics, speed" }
Has a magnetic field flip of a distant star ever been measured?
Question: The magnetic field of the Sun flips during each solar cycle, with the flip occurring when the sunspot cycle is near its maximum. Levels of solar radiation and ejection of solar material, the number and size of sunspots, solar flares, and coronal loops all exhibit a synchronized fluctuation, from active to quiet to active again, with a period of 11 years. Has this phenomenon ever been measured, directly or indirectly, on stars outside the solar system? Answer: The Sun's magnetic activity cycle of $\sim 22$ years involves a large-scale reversal of the polarity of the magnetic field every $\sim 11$ years. There are very many observations of other solar-type stars that show, indirectly, that they too have magnetic activity cycles in the form of modulated emission of tracers of the magnetic field - starspots, chromospheric and coronal activity (e.g. Olah et al. 2016 and references therein). To directly measure the reversing cycles in magnetic polarity requires spatially resolved maps of the vector magnetic field. Such spatially resolved maps are possible for fast-rotating, and hence highly magnetically active, stars through Zeeman Doppler Imaging. In general, highly magnetically active stars appear not to show magnetic activity variations as strongly as the Sun. Nevertheless, recent instrumental developments have led to (difficult) observations of some solar-type stars with intermediate rotation rates. There is now plenty of evidence for magnetic polarity reversals in many of these (e.g. in Chi$^1$ Ori, Rosen et al. 2016; in LQ Hya, Lehtinen 2019; in V1358 Ori, Willamo et al. 2021). Whilst the observations are not densely sampled enough to say for sure that this is cyclical behaviour following the same pattern as the Sun, the reversals have been found to correlate quite well with maxima or minima in the indirect indicators of magnetic activity.
{ "domain": "astronomy.stackexchange", "id": 6324, "tags": "star, magnetic-field" }
Could all strings be one single string which weaves the fabric of the universe?
Question: This question popped out of another discussion, about whether the photon needs a receiver to exist. Can a photon get emitted without a receiver? A universe containing only one electron was hypothetically suggested. And a similar theory was mentioned in Feynman's Nobel lecture in 1965: As a by-product of this same view, I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, "Feynman, I know why all electrons have the same charge and the same mass" "Why?" "Because, they are all the same electron!" And, then he explained on the telephone, "suppose that the world lines which we were ordinarily considering before in time and space - instead of only going up in time were a tremendous knot, and then, when we cut through the knot, by the plane corresponding to a fixed time, we would see many, many world lines and that would represent many electrons, except for one thing. If in one section this is an ordinary electron world line, in the section in which it reversed itself and is coming back from the future we have the wrong sign to the proper time - to the proper four velocities - and that's equivalent to changing the sign of the charge, and, therefore, that part of a path would act like a positron." "But, Professor", I said, "there aren't as many positrons as electrons." "Well, maybe they are hidden in the protons or something", he said. I did not take the idea that all the electrons were the same one from him as seriously as I took the observation that positrons could simply be represented as electrons going from the future to the past in a back section of their world lines. That, I stole! It was a very successful theft indeed, so let me try a theft too. Since Wheeler came up with this theory, we have come up with string theory, which creates electrons and other elementary particles from string vibrations.
Could all strings also have equal properties or interactions, so they can create different particles with equal charge and mass? So if we merge Wheeler's idea with string theory, we can formulate this into a hypothetical question: Could all strings be one single string which weaves the fabric of the universe? To simplify it even further we can say that a single particle drags the string along and ties the knots in the fabric of the universe together by interactions with itself. So then we get a single-string or single-particle universe, which is the ultimate simplicity. The speed of light can't be a threshold for such a particle, because the particle itself must travel with infinite speed far beyond $c$, and probably doesn't even have a velocity we can put any number on, but just call infinite speed. To go past the speed of light the particle must be without mass, and then it has no inertia and is free to go everywhere to interact with its own string, which is woven into time, space, particles, mass, charge, magnetism, gravity, me, you and the universe itself. Answer: Some of the responses to this question are sour because there are no equations, it speculates in a naive way, etc. However, it has an impressionistic resemblance to some important ideas which really may be part of the final picture in physics. Specifically, I mean (1) the idea that all physical reality consists of vibrations in a single substance (2) the idea that the history of the universe is a knotted worldline. (1) was the physical picture of 11-dimensional supergravity, the leading candidate for a theory of everything before string theory, and still an important limit of string theory. In d=11 SUGRA, everything is just excitations of supergeometry. In string theory and M theory, we have strings and branes, but it seems rather plausible that in the end these will turn out to be extended excitations of some "generalized geometry".
Regarding (2), the question talks about strings, but later it talks about a "single particle [that] drags the string along". So I take that to be a return to Wheeler's idea of a particle weaving back and forth in time, with the "string" being the world-line. Well, that doesn't sound like string theory, where strings describe two-dimensional histories, "worldsheets". However, before the people who know physics stop reading, hear me out... What this reminds me of, first of all, is Witten's work on expectation values of knotted Wilson lines in three dimensions. Although it utilizes the apparatus of quantum field theory, that is usually conceived of as an exercise in mathematics only - a way to get knot invariants from a three-dimensional perspective. However, in his recent work on Khovanov homology, he managed to embed these calculations specifically into M-theory. And I think the work on the Jones polynomial originally derives from the first studies of topological string theory (and it continues to be developed in that context, e.g. see recent papers on the "refined topological string"). Now let us turn to gauge/gravity duality, in its AdS form and its far more problematic dS form. We now have many dualities in which string theory in AdS4 is equivalent to some d=3 theory of Chern-Simons plus matter fields. We also have one dS4 duality, proposed by Strominger et al, in which a Vasiliev gravity theory in dS4 is dual to a d=3 Euclidean field theory. And recall that Vasiliev theory may be a truncation of string theory. So let's suppose, for the sake of discussion, that our asymptotically dS4 universe is dual to a Euclidean Chern-Simons+matter theory. I know this idea has problems, but perhaps it is a step towards something that really works, and not just a dead end... In any case, couldn't you form a loop basis for this dual theory? Similar to what the hated "loop quantum gravitists" do. 
But here you don't even need dynamics because time is going to be holographically emergent. The Hilbert space of the Euclidean theory is a superposition of knotted loop states, and everything about the dual dS4 theory should be obtained by applying the right operators to those states. This may seem rather a long way from where we started. But still, the idea is that the universe is described by a string theory holographically emergent from a superposition of knots. Exactly how it would work is unclear to me, but this seems to define a rather weighty research program, rather than being a vaporous idea you can immediately refute. And yet ideas like this do emerge from vaporous musings like the one in the question! It's just that they have to be informed by logic and by physical and mathematical knowledge, in order to get anywhere. So by all means, criticize people's vague musings. But be aware that sometimes, they have more potential than meets the eye...
{ "domain": "physics.stackexchange", "id": 8468, "tags": "quantum-mechanics, quantum-field-theory, string-theory, quantum-gravity, popular-science" }
Parallel universe and Infinite monkey theorem
Question: Is the Infinite monkey theorem helpful for determining the existence of this very same universe somewhere else? Answer: No. Well, not really, though some amusement can be had by calculating how far you'd have to go to find an exact copy of your mother-in-law. However, these calculations are not based on any rigorous science, so while they're fun take care with them. The basic idea is that if you take some system (e.g. your mother-in-law) containing $n$ Planck volumes then the maximum number of configurations of this system is $2^n$. So you need to look at about $2^n$ such volumes to stand a reasonable chance of finding a duplicate of your mother-in-law. This is the origin of claims that an exact copy of the Earth must exist if you take a big enough region of the universe. Whether such claims have any physical validity is open to debate.
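The flavour of that counting argument is easy to reproduce. A toy calculation (with $n$ kept absurdly small on purpose; any real object contains vastly more Planck volumes):

```python
def max_configurations(n_planck_volumes):
    """Upper bound of 2**n configurations for a region of n Planck volumes."""
    return 2 ** n_planck_volumes

print(max_configurations(10))              # 1024
print(len(str(max_configurations(1000))))  # already a 302-digit number at n = 1000
```

The doubly-exponential blow-up is why these "copy of the Earth" distances come out so incomprehensibly large, and why the argument is entertainment rather than rigorous cosmology.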
{ "domain": "physics.stackexchange", "id": 5837, "tags": "cosmology, universe, probability, multiverse" }
Clustering Formulas for Networks
Question: Consider an undirected, unweighted graph $G=(V,E)$. I want to compute the clustering coefficient of each node. In the publicly available lecture from Stanford, the formula for computing the clustering coefficient $e_v$ for a node $v$ is given as: $$e_v = \frac {\#\text{edges among neighboring nodes}} {\#\text {node pairs among } k_v \text {neighboring nodes}} = \frac {\#\text{edges among neighboring nodes}}{{k_v \choose 2}} \in [0,1]$$ while the documentation of the networkX library for Python defines the clustering coefficient as follows: $$c_u = \frac {2T(u)} {deg(u) (deg(u)-1)}$$ where $T(u)$ is the number of triangles through node $u$ and $deg(u)$ is the degree of $u$. I calculated a few examples (Erdős–Rényi networks) and both gave the same result for every node in the example graphs. Can somebody give me the intuition behind that observation? Answer: $\#\text{edges among neighboring nodes}$ is the same as $T(v)$ since a triangle through $v$ means that the side opposite to $v$ in the triangle is incident on two neighboring nodes. $k_{v} = deg(v)$. Therefore, ${k_v \choose 2} = \frac{k_{v}(k_{v}-1)}{2} = \frac{deg(v)(deg(v)-1)}{2}$ Therefore, $$e_v = \frac {\#\text{edges among neighboring nodes}}{{k_v \choose 2}} = \frac {2T(v)} {deg(v) (deg(v)-1)}$$
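Since the answer shows the two formulas are literally the same expression, this can be sanity-checked on a small hand-built graph. The toy graph and helper names below are mine, not networkX's implementation:

```python
from itertools import combinations

# Toy undirected graph as adjacency sets (example of mine)
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}

def clustering_lecture(v):
    """Edges among the neighbours of v, divided by C(k_v, 2)."""
    nbrs, k = adj[v], len(adj[v])
    if k < 2:
        return 0.0
    edges = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return edges / (k * (k - 1) / 2)

def clustering_nx_style(v):
    """2*T(v) / (deg(v)*(deg(v)-1)); T(v) = triangles through v, which is
    exactly the count of edges among v's neighbours."""
    nbrs, k = adj[v], len(adj[v])
    if k < 2:
        return 0.0
    triangles = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2 * triangles / (k * (k - 1))

assert all(clustering_lecture(v) == clustering_nx_style(v) for v in adj)
print([clustering_lecture(v) for v in sorted(adj)])
```

Both functions count the same quantity and divide by the same number of neighbour pairs, just written with the factor of 2 in different places.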
{ "domain": "cs.stackexchange", "id": 18607, "tags": "graphs, clustering" }
What is the name of this berry producing tree?
Question: I live in Middle Tennessee (USA) and came across this tree while I was hiking through the woods. I'm curious as to what type of tree this is and if the berries have any uses or are poisonous. The tree is rather short, and I was unable to see any more distinctive features given that the tree was surrounded by other brush. I can supply a better picture in a day or two if necessary. Answer: This is some type of Sumac in the genus Rhus. Probably either Rhus glabra (Smooth Sumac) or Rhus typhina (Staghorn Sumac). The berries of those two species are supposedly not poisonous. But they are closely related to other poisonous plants such as poison ivy and poison sumac. Poison ivy and poison sumac are in the genus Toxicodendron, but used to be in the genus Rhus. A photo of Smooth Sumac
{ "domain": "biology.stackexchange", "id": 5945, "tags": "botany, species-identification, trees" }