| anchor | positive | source |
|---|---|---|
Why Does Amplitude Have no Effect on the Energy of a Light Particle? | Question: In my high school physics class, I was taught that the energy of light is dependent only on the frequency, as demonstrated in the equation $E = h \cdot \nu$.
My question is, why isn't amplitude part of the equation? As the amplitude of the light increases, it gets more intense, i.e. brighter (ignoring that more light also makes things brighter), so wouldn't it make sense that light with a greater amplitude also has more energy? And, as an extension of that assumption, that lower amplitudes have less energy?
EDIT: not a duplicate, because I'm asking why amplitude is not part of the energy equation, not if a particle has amplitude.
Answer: As you've said, that's the energy of a light particle, not an arbitrary light wave. As you point out, an electromagnetic plane wave with electric field amplitude $E_0$ has total average energy density $u=\varepsilon_0E_0^2/2$. The formula of energy you have given actually relates this to the number density $n$ of light particles per unit volume, by $u=nh\nu$. As such,
$$n=\frac{\varepsilon_0E_0^2}{2h\nu}.$$
EDIT
The thing to be considered here is, while the amplitude of the wave does matter for its total energy, it has no bearing on the energy of the particles that make it up.
What increases the amplitude of an electromagnetic wave is an increase in the number of photons (light particles, each with energy $E=h\nu$) that are part of this entire wave. | {
"domain": "physics.stackexchange",
"id": 50258,
"tags": "energy, visible-light, waves"
} |
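The relation in the answer above can be checked numerically: amplitude controls how many photons there are, not the energy of each one. A minimal sketch (the field amplitude of 100 V/m and frequency of 5e14 Hz are arbitrary illustrative values):

```python
import math

EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
H = 6.62607015e-34            # Planck constant, J*s

def photon_number_density(e0, nu):
    """Photons per m^3 in a plane wave with electric field amplitude e0 (V/m)
    and frequency nu (Hz): n = eps0 * E0^2 / (2 h nu)."""
    u = 0.5 * EPSILON_0 * e0**2   # average energy density, J/m^3
    return u / (H * nu)           # divide by the energy of one photon

# Doubling the amplitude quadruples the photon count,
# while each photon's energy h*nu is unchanged.
n1 = photon_number_density(100.0, 5e14)
n2 = photon_number_density(200.0, 5e14)
print(n1, n2 / n1)   # the ratio is exactly 4.0
```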
Problem loading cram_physics_utils | Question:
Hi, when I try to load some asd files in rosemacs I encounter the following problem. No idea how to solve, need help!!!
; g++ -m64 -I/opt/ros/fuerte/include -I/opt/ros/fuerte/stacks/cram_core/3rdparty/cffi/cffi-dev/ -o ~/.cache/common-lisp/sbcl-1.0.50-linux-amd64~/ros/fuerte_workspace/stable_stacks/cram_highlevel/cram_physics_utils/src/assimp-grovel ~/.cache/common-lisp/sbcl-1.0.50-linux-amd64~/ros/fuerte_workspace/stable_stacks/cram_highlevel/cram_physics_utils/src/assimp-grovel.c
;
; compilation unit aborted
; caught 1 fatal ERROR condition
I am using Fuerte on Ubuntu 10.04 x64; cram_highlevel was cloned yesterday.
Contents of the inferior-lisp buffer:
> STYLE-WARNING:
> Implicitly creating new generic function STREAM-READ-CHAR-WILL-HANG-P.
>; loading #P"~/.slime/fasl/2012-06-19/sbcl-1.0.50-linux-x86-64/swank-match.fasl"
>; loading #P"~/.slime/fasl/2012-06-19/sbcl-1.0.50-linux-x86-64/swank-rpc.fasl"
>; loading #P"~/.slime/fasl/2012-06-19/sbcl-1.0.50-linux-x86-64/swank.fasl"
>WARNING: These Swank interfaces are unimplemented:
> (DISASSEMBLE-FRAME SLDB-BREAK-AT-START SLDB-BREAK-ON-RETURN)
>; file: ~/.swank.lisp
>; in: SETF SWANK:*GLOBALLY-REDIRECT-IO*
>; (SETF SWANK:*GLOBALLY-REDIRECT-IO* T)
>; ==>
>; (SETQ SWANK:*GLOBALLY-REDIRECT-IO* T)
>;
>; caught WARNING:
>; undefined variable: SWANK:*GLOBALLY-REDIRECT-IO*
>;
>; compilation unit finished
>; Undefined variable:
>; SWANK:*GLOBALLY-REDIRECT-IO*
>; caught 1 WARNING condition
>;; Swank started at port: 57838.
57838
>* STYLE-WARNING: redefining SWANK::SYMBOL-INDENTATION in DEFUN
STYLE-WARNING: redefining SWANK::MACRO-INDENTATION in DEFUN
Debug window
Component "" not found
[Condition of type ASDF:MISSING-COMPONENT]
Restarts:
0: [ABORT] Abort compilation.
1: [*ABORT] Return to SLIME's top level.
2: [TERMINATE-THREAD] Terminate this thread (#<THREAD "new-repl-thread" RUNNING {100305B5C1}>)
Backtrace:
0: ((LAMBDA ()))
1: ((LAMBDA ()))
[No Locals]
2: (ASDF::CALL-WITH-SYSTEM-DEFINITIONS #<CLOSURE (LAMBDA #) {100323AB19}>)
3: ((SB-PCL::FAST-METHOD ASDF:OPERATE (T T)) #<unused argument> #<unused argument> ASDF:LOAD-OP "")
4: ((SB-PCL::EMF ASDF:OPERATE) #<unused argument> #<unused argument> ASDF:LOAD-OP "")
5: ((LAMBDA ()))
6: ((FLET SWANK-BACKEND:CALL-WITH-COMPILATION-HOOKS) #<CLOSURE (LAMBDA #) {100323AA99}>)
7: (SWANK::OPERATE-ON-SYSTEM "" SWANK-IO-PACKAGE::LOAD-OP)
8: ((LAMBDA ()))
9: (SWANK::MEASURE-TIME-INTERVAL #<CLOSURE (LAMBDA #) {100323A9D9}>)
10: (SWANK::COLLECT-NOTES #<CLOSURE (LAMBDA #) {100323A9A9}>)
11: (SB-INT:SIMPLE-EVAL-IN-LEXENV (SWANK:OPERATE-ON-SYSTEM-FOR-EMACS "" (QUOTE SWANK-IO-PACKAGE::LOAD-OP)) #<NULL-LEXENV>)
12: (EVAL (SWANK:OPERATE-ON-SYSTEM-FOR-EMACS "" (QUOTE SWANK-IO-PACKAGE::LOAD-OP)))
13: (SWANK:EVAL-FOR-EMACS (SWANK:OPERATE-ON-SYSTEM-FOR-EMACS "" 'SWANK-IO-PACKAGE::LOAD-OP) "COMMON-LISP-USER" 29)
14: (SWANK::PROCESS-REQUESTS NIL)
15: ((LAMBDA ()))
16: ((LAMBDA ()))
17: (SWANK-BACKEND::CALL-WITH-BREAK-HOOK #<FUNCTION SWANK:SWANK-DEBUGGER-HOOK> #<CLOSURE (LAMBDA #) {1003060119}>)
18: ((FLET SWANK-BACKEND:CALL-WITH-DEBUGGER-HOOK) #<FUNCTION SWANK:SWANK-DEBUGGER-HOOK> #<CLOSURE (LAMBDA #) {1003060119}>)
19: (SWANK::CALL-WITH-BINDINGS ((*STANDARD-OUTPUT* . #) (*STANDARD-INPUT* . #) (*TRACE-OUTPUT* . #) (*ERROR-OUTPUT* . #) (*DEBUG-IO* . #) (*QUERY-IO* . #) ...) #<CLOSURE (LAMBDA #) {1003060139}>)
20: (SWANK::HANDLE-REQUESTS #<SWANK::MULTITHREADED-CONNECTION {1003B98CC1}> NIL)
21: ((FLET #:WITHOUT-INTERRUPTS-BODY-[BLOCK414]419))
22: ((FLET SB-THREAD::WITH-MUTEX-THUNK))
23: ((FLET #:WITHOUT-INTERRUPTS-BODY-[CALL-WITH-MUTEX]301))
24: (SB-THREAD::CALL-WITH-MUTEX ..)
25: (SB-THREAD::INITIAL-THREAD-FUNCTION)
26: ("foreign function: call_into_lisp")
27: ("foreign function: new_thread_trampoline")
New problem after the patch, when I load cram_plan_library
; compiling file "~/ros/fuerte_workspace/stable_stacks/cram_highlevel/cram_plan_library/src/at-location.lisp" (written 14 NOV 2012 01:44:02 PM):
; compiling (IN-PACKAGE :PLAN-LIB)
; compiling (DEFVAR *AT-LOCATION-LOCK* ...)
; compiling (DEFCONSTANT +AT-LOCATION-RETRY-COUNT+ ...)
; compiling (CRAM-PROJECTION:DEFINE-SPECIAL-PROJECTION-VARIABLE *AT-LOCATION-LOCK* ...)
; compiling (DEFUN LOCATION-DESIGNATOR-REACHED ...)
; compiling (DEFMACRO WITH-EQUATE-FLUENT ...)
;
; compilation aborted because of fatal error:
; SB-INT:SIMPLE-READER-PACKAGE-ERROR at 3626 (line 75, column 45) on #<SB-SYS:FD-STREAM
; for "file ~/ros/fuerte_workspace/stable_stacks/cram_highlevel/cram_plan_library/src/at-location.lisp"
; {10044AC5E1}>:
; Symbol "WITH-TRANSFORMS-CHANGED-CALLBACK" not found in the CL-TF package.
;
; compilation aborted after 0:00:00.057
WARNING:
COMPILE-FILE warned while performing #<COMPILE-OP NIL {1003BAB741}> on
#<CL-SOURCE-FILE "cram-plan-library" "src" "at-location">.
;
; compilation unit aborted
; caught 2 fatal ERROR conditions
Originally posted by ZiyangLI on ROS Answers with karma: 93 on 2012-11-15
Post score: 0
Original comments
Comment by Lorenz on 2012-11-15:
Is this really the complete output after the error occurred? To improve readability (newlines don't seem to be displayed correctly), can you please change the formatting of your code from citation to code? Just mark the code and press Ctrl-k.
Comment by Lorenz on 2012-11-15:
Ok. More questions :) For me, the inferior-lisp buffer doesn't really show anything useful. But since you got a compilation error, I guess emacs entered the debugger. Can you please provide the contents of the debugger window? It should contain the actual compiler error message.
Comment by Lorenz on 2012-11-15:
The error message above doesn't seem to be related to assimp. It rather looks like you tried to load an asdf file with name "".
Comment by ZiyangLI on 2012-11-15:
Thank you Lorenz, your patch indeed works!
Comment by Lorenz on 2012-11-15:
Your second issue is completely unrelated to cram_physics_utils. Please open a new question and re-mark my answer as correct if it was. I will answer there.
Comment by ZiyangLI on 2012-11-15:
Done, thanks!
Answer:
My guess is that you do not have assimp installed. I just added the corresponding rosdep rule and pushed it to cram_highlevel. Try installing it with:
sudo apt-get install libassimp-dev
If that doesn't help, please edit your question to provide more information. Please provide more lines around the actual error; maybe it contains something useful that helps debugging the problem. Also, please provide information such as the ROS version you are using and the version of Ubuntu, and make sure you have a look at the support page.
Update: Do you maybe have openrave installed? It requires version 3 of assimp, which unfortunately conflicts with version 2 provided by Ubuntu and used by ROS Fuerte. If you need to use assimp3, apply the following patch in cram_highlevel:
diff --git a/cram_physics_utils/src/assimp-grovel.lisp b/cram_physics_utils/src/assimp-grovel.lisp
index e1b6413..8c43062 100644
--- a/cram_physics_utils/src/assimp-grovel.lisp
+++ b/cram_physics_utils/src/assimp-grovel.lisp
@@ -28,11 +28,11 @@
;;; POSSIBILITY OF SUCH DAMAGE.
;;;
-(include "assimp/assimp.h")
-(include "assimp/aiMesh.h")
-(include "assimp/aiScene.h")
-(include "assimp/aiMaterial.h")
-(include "assimp/aiPostProcess.h")
+(include "assimp/types.h")
+(include "assimp/mesh.h")
+(include "assimp/scene.h")
+(include "assimp/material.h")
+(include "assimp/postprocess.h")
(in-package :physics-utils)
Just copy-paste it to a file, e.g. ~/assimp3.patch. Then execute the following commands:
roscd cram_highlevel
patch -p1 < ~/assimp3.patch
Originally posted by Lorenz with karma: 22731 on 2012-11-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11758,
"tags": "ros"
} |
(Project Euler #1) Find the sum of all the multiples of 3 or 5 below 1000 | Question: I have decent experience in programming before (mostly C++), but I am very, very, very new to Python, and I decided to try out Project Euler as an exercise. Here's the description of Problem 1:
If we list all the natural numbers below 10 that are multiples of 3 or
5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
Here's my solution:
sum = 0
for n in range(0, 1000):
if n % 3 == 0 or n % 5 == 0:
sum += n
print(sum)
The program works fine. What I want to get from a code review:
whether I did anything non-Pythonic;
whether I did anything outdated;
whether I did anything against the PEP 8;
whether there's a better way of doing this;
...
Answer: Welcome to Python!
"Project Euler exists to encourage, challenge, and develop the skills and enjoyment of anyone with an interest in the fascinating world of mathematics."
Like you, I went to Project Euler when I was learning Python as yet another language for my toolbox. Unfortunately, Project Euler is primarily a mathematics challenge site, not a programming challenge site. The emphasis is on solving the problem and getting the right answer, not on programming skills. Moreover, the site asks that you not post your solution, which really discourages you from getting feedback on your programming skills and on proper best practices for the language. So while there are problems you can actually write code to solve, you're still discouraged from getting feedback on your approach. Not exactly an ideal site for learning a new language.
Still, you have violated their request and posted your solution, so let’s try and give you some useful feedback.
Write Functions
You’ve got two sets of data to try your solution on. The first being the “example” data in the problem itself; the second being the dataset you are being asked to solve. If you’re going to do something twice, write a function:
def sum_of_multiples_of_3_or_5_below(limit):
total = 0
for n in range(0, limit):
if n % 3 == 0 or n % 5 == 0:
total += n
return total
Then you can test your function with the example data, as well as solve the problem:
assert sum_of_multiples_of_3_or_5_below(10) == 23
answer = sum_of_multiples_of_3_or_5_below(1000)
print(f"sum of all multiples of 3 or 5 below 1000 is {answer}")
This gives you confidence in your solution. Usually the example data is fairly trivial, so the time needed to solve the problem twice isn’t noticeably increased.
Use a __main__ guard
Now that we have a function, it is possible to import this “module” into another program to reuse the function. Except, it runs that pesky code at the bottom, generating unexpected output. Using a __main__ guard, the code will only execute when we run this script, not when this script is imported:
if __name__ == '__main__':
assert sum_of_multiples_of_3_or_5_below(10) == 23
answer = sum_of_multiples_of_3_or_5_below(1000)
print(f"sum of all multiples of 3 or 5 below 1000 is {answer}")
Generalization
This function is still perhaps too specific. Why just below a limit? Why just multiples of 3 or 5? We can generalize things a wee bit, and maybe actually increase the possibility of reusing the function elsewhere. And perhaps more importantly, explore the capabilities of Python.
First, instead of passing in the limit, let’s pass in the range.
def sum_of_multiples_of_3_or_5_in(iterable):
total = 0
for n in iterable:
if n % 3 == 0 or n % 5 == 0:
total += n
return total
if __name__ == '__main__':
assert sum_of_multiples_of_3_or_5_in(range(0, 10)) == 23
answer = sum_of_multiples_of_3_or_5_in(range(0, 1000))
print(f"sum of all multiples of 3 or 5 below 1000 is {answer}")
range is a first class object in Python. It can be passed as an argument. So now you can easily compute the sum of the multiples of 3 or 5 in range(1000, 2000).
Or ... any iterable object, actually, such as lists.
print(sum_of_multiples_of_3_or_5_in([10, 12, 15, 17, 18, 19, 20]))
How about those multiples? Let’s make them more general:
def sum_of_multiples_in(iterable, m1, m2):
total = 0
for n in iterable:
if n % m1 == 0 or n % m2 == 0:
total += n
return total
if __name__ == '__main__':
m1 = 3
m2 = 5
assert sum_of_multiples_in(range(0, 10), m1, m2) == 23
answer = sum_of_multiples_in(range(0, 1000), m1, m2)
print(f"sum of all multiples of {m1} or {m2} below 1000 is {answer}")
You had a formula for computing the answer before: sum of multiples of 3, plus sum of multiples of 5, minus sum of multiples of 15. Now it is harder, because m1 could be a multiple of m2 or vice versa, so there are more cases to check for. But the above works just fine regardless of whether m1 and m2 are mutually prime or not.
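For the original fixed multiples, that inclusion-exclusion formula can be written with the arithmetic-series sum and no loop at all. A sketch (the function names are my own):

```python
def sum_of_multiples_of(k, limit):
    """Sum of all positive multiples of k strictly below limit,
    using the arithmetic series formula k * m * (m + 1) / 2."""
    m = (limit - 1) // k          # number of multiples of k below limit
    return k * m * (m + 1) // 2

def project_euler_1(limit):
    # Inclusion-exclusion: multiples of 15 would otherwise be counted twice.
    return (sum_of_multiples_of(3, limit)
            + sum_of_multiples_of(5, limit)
            - sum_of_multiples_of(15, limit))

print(project_euler_1(10))    # 23, matching the example data
```

This runs in constant time regardless of the limit, whereas the loop version is linear in the limit.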
any
Why only multiples of 2 numbers? Why not multiples of 3, 5 or 7? Passing yet another argument to the function seems wrong, because we’ll then need another function for 4 multiples, and yet another for 5 multiples. Let’s instead pass a list.
def sum_of_multiples_in(iterable, multiples):
total = 0
for n in iterable:
for m in multiples:
if n % m == 0:
total += n
break
return total
That’s a good start. For each value of n, we start looping over the multiples, and if we find one, we add n to total and break out of the inner loop, to continue with the next n value.
But we can make it clearer. We want to know if n is a multiple of any of the multiples. Python has an any() function, which is true if any of the terms is true:
def sum_of_multiples_in(iterable, multiples):
total = 0
for n in iterable:
        if any(n % m == 0 for m in multiples):
total += n
return total
There is also an all(...) function which returns true only if all of the terms are true. Not needed here, but good to have in your back pocket.
sum
Now that we have a loop, an accumulator, and a filter condition, we can combine the three into a single sum() operation:
def sum_of_multiples_in(iterable, multiples):
return sum(n for n in iterable if any(n % m == 0 for m in multiples))
Variable arguments
Using our above function, we have to pass in a list of multiples:
assert sum_of_multiples_in(range(0, 10), [3, 5]) == 23
It may be desirable to get rid of that explicit list [3, 5], and just pass in the arguments 3, 5 like we did earlier. We can do this by using a variable argument list syntax.
def sum_of_multiples_in(iterable, *multiples):
return sum(n for n in iterable if any(n % m == 0 for m in multiples))
assert sum_of_multiples_in(range(0, 10), 3, 5) == 23
After all explicit arguments (iterable in this case), all remaining (non-keyword) arguments are rolled up into one tuple and assigned to the *args argument ... named multiples in this case.
"""Docstrings"""
Comments are used to describe the code to someone reading the source code. Doc-strings are used to describe how to use the code you’ve written, without the user needing to read your code. Various tools exist to extract the doc-strings, and turn them into webpages, PDF documents and so on. The simplest is Python’s built-in help() command.
"""
A collection of functions for solving problems from Project Euler.
(Currently, only Problem 1)
"""
def sum_of_multiples_in(iterable, *multiples):
"""
From a list of numbers, return the sum of those numbers which
are a multiple of one or more of the remaining arguments.
"""
return sum(n for n in iterable if any(n % m == 0 for m in multiples))
if __name__ == '__main__':
m1 = 3
m2 = 5
assert sum_of_multiples_in(range(0, 10), m1, m2) == 23
answer = sum_of_multiples_in(range(0, 1000), m1, m2)
print(f"sum of all multiples of {m1} or {m2} below 1000 is {answer}")
A doc string is a string appearing at the top of a module, class, and/or function. It can be a single quoted string ("docstring" or 'docstring') or a triple quoted string ("""docstring""" or '''docstring'''). Triple quoted strings are typically used since they can span multiple lines and can contain quotes without needing escaping.
Save the file as pe1.py, then from a Python interpreter, type:
>>> import pe1
>>> help(pe1)
to see your help documentation.
Type Hints
Coming from C++, you will be used to a more “type safe” environment. Python’s fast and loose rules for type safety may be a wee bit difficult to get used to. Fortunately (or unfortunately), Python 3.6 and later allows you to specify “type hints”. These do absolutely nothing ... at least, as far as the Python interpreter is concerned. They can be read by static analysis tools, which can reason about them and ensure variables are being used in their intended fashion. If used for nothing else, they can provide additional “documentation” about the types of arguments for functions, and the return type of the function.
from typing import Iterable

def sum_of_multiples_in(iterable: Iterable[int], *multiples: int) -> int:
    ...
You can use type hints on local variables as well:
total: int = 0
Hope this jump starts your exploration of Python. And once again, welcome to Python! | {
"domain": "codereview.stackexchange",
"id": 35951,
"tags": "python, beginner, python-3.x, programming-challenge"
} |
Find the sum along root-to-leaf paths of a tree | Question: Most of you already know me, please be brutal, and treat this code as if it was written at a top tech interview company.
Question:
Given a binary tree and a sum, find all root-to-leaf paths where each
path's sum equals the given sum.
For example: Given the below binary tree and sum = 22,
5
/ \
4 8
/ / \
11 13 4
/ \ / \
7 2 5 1
return
[
[5,4,11,2],
[5,8,4,5]
]
Time taken: 26 minutes (all 110 test cases passed)
Worst case: \$O(n^2)\$?
Since when I add to resList, it copies all the elements again which can take \$O(n)\$ and I traverse \$O(n)\$ nodes.
Space complexity: \$O(n)\$
My code:
ArrayList<ArrayList<Integer>> res = new ArrayList<ArrayList<Integer>>();
ArrayList<Integer> curList = new ArrayList<Integer>();
public ArrayList<ArrayList<Integer>> pathSum(TreeNode root, int sum) {
if(root==null){
return res;
}
curList.add(root.val);
if(root.left==null && root.right==null){
if(sum - root.val==0){
res.add(new ArrayList<Integer>(curList));
}
}
if(root.left!=null){
pathSum(root.left, sum-root.val);
}
if(root.right!=null){
pathSum(root.right, sum-root.val);
}
curList.remove(new Integer(root.val));
return res;
}
Answer: You are treating res and curList as if they are globals, and, since they are globals, there is no reason to return res from the function at all.
As a result of this, your code is not re-entrant (you can only have one method calling your pathSum at any one point in time).
The right solution to this is to pass the curList and res values as parameters to the method, convert it to private, and create a new public method which creates the instances as you need them.....
public ArrayList<ArrayList<Integer>> pathSum(TreeNode root, int sum) {
ArrayList<ArrayList<Integer>> res = new ArrayList<ArrayList<Integer>>();
ArrayList<Integer> curList = new ArrayList<Integer>();
pathSum(root, sum, curList, res);
return res;
}
private void pathSum(TreeNode root, int sum,
ArrayList<Integer> curList, ArrayList<ArrayList<Integer>> res) {
....
** change the methods called as part of the recursion too**
....
}
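Filled in, the refactor might look like the following self-contained sketch (the TreeNode stub is assumed for illustration). One extra fix: the backtracking step removes by index, because remove(new Integer(root.val)) deletes the first occurrence of that value in the list, which is wrong when a path contains duplicate values:

```java
import java.util.ArrayList;
import java.util.List;

class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class PathSum {
    // Public entry point: creates fresh res/curList per call, so it is re-entrant.
    public List<List<Integer>> pathSum(TreeNode root, int sum) {
        List<List<Integer>> res = new ArrayList<>();
        pathSum(root, sum, new ArrayList<>(), res);
        return res;
    }

    // Private helper carries the working state as parameters.
    private void pathSum(TreeNode root, int sum, List<Integer> curList,
                         List<List<Integer>> res) {
        if (root == null) {
            return;
        }
        curList.add(root.val);
        int remaining = sum - root.val;   // computed once, reused below
        if (root.left == null && root.right == null && remaining == 0) {
            res.add(new ArrayList<>(curList));
        }
        pathSum(root.left, remaining, curList, res);
        pathSum(root.right, remaining, curList, res);
        // Backtrack by index: remove(Object) would delete the first
        // occurrence of the value, not the element just added.
        curList.remove(curList.size() - 1);
    }
}
```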
That is the big structural change, but I would recommend more:
Methods should not return specific List implementation types unless those types have special features you need. Your method should return List<List<Integer>> and not ArrayList<ArrayList<Integer>>
convert curList to an int[] array, and the return value of the method to a List of int[]
you do the calculation sum-root.val in multiple places. Firstly, it should be spaced properly: sum - root.val; and secondly, you should save it as a variable once, and re-use that variable in the places where the expression currently appears
About the complexity
you ask if worst case is \$O(n^2)\$ ... no, it is not.
Worst case is \$O(n \log(n))\$. This is my reasoning:
the depth of a binary tree is about \$\log(n)\$.
The deepest a binary tree can be is depth n, but, in that case there is only one possible solution, so the complexity will be two \$O(n)\$ operations, one to scan the single deep branch, and another to copy the array.
the worst case is actually a fully-balanced tree where every leaf node matches the intended sum, in which case the number of solutions is proportional to \$O(n)\$, but the actual copy to the array will be of \$O(\log(n))\$ elements
So, My assessment is \$O(n \log(n))\$
Feel free to debate this... I am not 100% certain.... | {
"domain": "codereview.stackexchange",
"id": 9229,
"tags": "java, algorithm, interview-questions, tree"
} |
Lack of intuition for distribution function in micro and macro state description | Question: I am a mathematician who is trying to understand statistical mechanics / thermodynamics. I need a hint wrt the interpretation / meaning of the distribution function. Currently I seem to have a basic misunderstanding which is a show stopper for further progress. My current understanding is as follows:
Micro state: I have a large number of particles ($n$) and a phase space ${\cal Q}$ of the corresponding Hamiltonian system, i.e. ${\Bbb R}^{6n}$. I consider points in this phase space and trajectories through it.
Macro state: I want to concentrate on more essential aspects of the system, such as descriptions in terms of energy, pressure, temperature etc. So, one macro state $m$ corresponds to a possibly very large set $M$ of micro states. In principle, given $m$, I could find $M$.
My Problem: Numerous texts now move ahead and introduce a distribution function or probability density on the set of micro states. I do not understand why I should be interested in considering a distribution function on the micro states.
Suppose I have macro state $m$. A micro state description could, in my understanding, already be given by the set $M \subset {\cal Q}$ of micro states. By the indifference principle I could assume that all of them are equally probable, which would give me some sort of distribution function / probability density. However, I read the texts as saying that there could be several different distribution functions / probability densities. I want to understand which additional physical insight / intuition is modeled by the additional structure provided when moving from the set $M$ to a distribution function / probability density on the set.
Answer: Your understanding about macro- and micro-states is absolutely correct.
The reason for introducing a probability distribution on the microstates is rooted in the ensemble formulation of statistical mechanics. Its goal is to allow average values to be evaluated as averages over the set of microstates instead of as time averages. The reason, from the theoretical perspective, is evident: a time average would require solving the equations of motion for a large number of interacting particles, while an ensemble average completely circumvents the step connected to the dynamical description.
Even nowadays, when both approaches are possible with numerical simulation tools (Molecular Dynamics vs Monte Carlo methods), ensemble averages may still provide a significant advantage for some problems.
"domain": "physics.stackexchange",
"id": 57032,
"tags": "thermodynamics, probability"
} |
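The ensemble-average idea can be made concrete with a toy system. A sketch (my own illustrative example, not from the answer above): the exact Boltzmann ensemble average of a two-level system, compared with a Metropolis Monte Carlo estimate that samples microstates according to their probability distribution, with no equation of motion anywhere:

```python
import math
import random

def boltzmann_average(energies, beta):
    """Exact ensemble average <E> = sum_i E_i exp(-beta E_i) / Z."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)                       # partition function
    return sum(e * w for e, w in zip(energies, weights)) / z

def metropolis_average(energies, beta, steps=200_000, seed=0):
    """Estimate <E> by Metropolis sampling over the discrete states."""
    rng = random.Random(seed)
    state = 0
    total = 0.0
    for _ in range(steps):
        proposal = rng.randrange(len(energies))
        d_e = energies[proposal] - energies[state]
        # Accept downhill moves always, uphill moves with prob exp(-beta dE).
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            state = proposal
        total += energies[state]
    return total / steps

levels = [0.0, 1.0]                  # a two-level system, energies in units of kT
exact = boltzmann_average(levels, beta=1.0)
mc = metropolis_average(levels, beta=1.0)
print(exact, mc)   # both should be close to 1/(1 + e) ~ 0.269
```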
Rate-determining step and steady state approximation failure | Question: 1) When can't the rate-determining step be applied, and why? How do you recognize these cases?
2) When can't the steady state approximation be applied, and why?
Answer:
When can't the rate-determining step be applied, and why? How to recognize those mechanisms?
The rate determining step (often abbreviated as "rds") of a reaction is the slowest step in a reaction. Every reaction has a "slowest step"; therefore, every reaction has an rds. The slowest step will be the step where we pass over the transition state with the highest energy, i.e. the highest energy point along the reaction coordinate. Look at the figure below; the reaction coordinate shown in the top left diagram
illustrates a reaction with a transition state, but no intermediate. There is only one step in this reaction; passing over that transition state (marked by the double-headed arrow) is the rds for that reaction. The drawings in the middle and on the right side of the top row show a reaction with an intermediate, and consequently, two transition states. In each case the double-headed arrow identifies the rds - the highest energy point along the reaction coordinate. The 3 drawings in the bottom row illustrate the case of a reaction with 2 intermediates (and therefore 3 transition states) along the reaction coordinate - in each case the arrow identifies the transition state with the highest energy, the rds.
When can't the steady state approximation be applied, and why?
Consider the following reaction
$$\ce{A <=>[{k_1}] [B] <=>[{k_2}] C}$$
$\ce{A}$ is the starting material, $\ce{[B]}$ is an intermediate and $\ce{C}$ represents the product. Either the drawing in the middle or the one on the right in the top row represents the reaction coordinate for this reaction. The steady state assumption applies to reactions that have one or more intermediates along the reaction coordinate. The steady state assumption states that the rate of change of the concentration of the intermediate is approximately zero. In other words, the intermediate is being created at about the same rate at which it is being destroyed:
$$\frac{d[\ce{B}]}{dt} = 0$$
The steady state assumption is best applied to situations where the intermediate is present in low concentration (e.g. $k_2 \gg k_1$). A general rule of thumb for the assumption to be valid is
$$\frac{k_2}{k_1} > 10$$
Looking back at the figure, we now see that this condition is met in the top row, center drawing. Here is a Wikipedia article on the steady state assumption, the section on "Validity" is nice and clear if you'd like to read further on the subject. | {
"domain": "chemistry.stackexchange",
"id": 1723,
"tags": "kinetics"
} |
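The rule of thumb above can be checked numerically. A sketch (my own example, simplified to the irreversible case A -> B -> C rather than the reversible scheme in the answer): integrate the rate equations and compare [B] against the steady-state prediction [B] ~ (k1/k2)[A]:

```python
def integrate_abc(k1, k2, dt=1e-5, t_end=1.0):
    """Forward-Euler integration of A -> B -> C with rate constants k1, k2.
    Starts from [A] = 1, [B] = [C] = 0."""
    a, b, c = 1.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        da = -k1 * a
        db = k1 * a - k2 * b
        dc = k2 * b
        a += da * dt
        b += db * dt
        c += dc * dt
        t += dt
    return a, b, c

k1, k2 = 1.0, 100.0          # k2/k1 = 100, well past the rule of thumb
a, b, c = integrate_abc(k1, k2)
b_qssa = k1 * a / k2         # steady-state prediction for [B]
print(b, b_qssa)             # the two agree to within about 1%
```

With k2/k1 closer to 1 the same comparison fails badly, which is exactly the regime where the approximation should not be applied.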
Entropy change for a real gas via the Peng-Robinson EOS | Question: Consider a process with inlet conditions ~300K, ~50Bar and outlet conditions ~350K, ~150Bar. Entropy departures as per the Peng-Robinson EOS. I am evaluating the entropy change via the following:
$$\Delta S = S_{dep}\big|_{in} + C_p\log(T_{out}/T_{in}) - R\log(P_{out}/P_{in}) - S_{dep}\big|_{out}$$
This gives phenomenal results for my intended application.
However, I'm concerned because technically I must use the natural logarithm, and the natural logarithm is giving rather poor results.
Does anyone have any suggestions or alternative methods I could pursue to evaluate entropy change? Any readings throughout the literature? I highly appreciate any thoughts or suggestions.
Answer: In my judgment, $\Delta S$ cannot be determined for PR without first solving for the initial and final volumes, $V_1$ and $V_2$. Then one can use the equation $dS=\frac{C_V}{T}dT+\left(\frac{\partial P}{\partial T}\right)_VdV$ in conjunction with Hess's law to obtain:$$\Delta S=\int_{V_1}^{\infty}{F(T_1,V)dV}+C_V^{IG}\ln{(T_2/T_1)}+R\ln{(V_2/V_1)}-\int_{V_2}^{\infty}{F(T_2,V)dV}$$where $C_v^{IG}$ is the heat capacity in the ideal gas limit (infinitely large specific volume) and $$F(T,V)=\left(\frac{\partial P}{\partial T}\right)_V-\frac{R}{V}$$
For Peng-Robinson, $$F(T,V)=\frac{R}{V-b}-\frac{R}{V}+\frac{a\kappa}{b}\sqrt{\frac{\alpha}{8TT_c}}\left[\frac{1}{V+b(1-\sqrt{2})}-\frac{1}{V+b(1+\sqrt{2})}\right]$$where $$\alpha(T)=\left(1+\kappa\left(1-\sqrt{\frac{T}{T_c}}\right)\right)^2$$and $$\kappa=0.37464+1.54226\omega-0.26992\omega^2$$So the entropy departure is $$S_{dep}=R\ln{\frac{V}{V-b}}+\frac{a\kappa}{b}\sqrt{\frac{\alpha}{8TT_c}}\ln{\left[\frac{V+b(1+\sqrt{2})}{V+b(1-\sqrt{2})}\right]}$$ | {
"domain": "physics.stackexchange",
"id": 93602,
"tags": "thermodynamics, pressure, entropy, physical-chemistry, gas"
} |
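The departure-function route above can be sketched numerically (my own illustration, not from the answer; the nitrogen critical constants below are assumed textbook values used only as an example): solve the Peng-Robinson equation for the molar volume at the given T and P by bisection, then evaluate the closed-form $S_{dep}$ quoted in the answer:

```python
import math

R = 8.314  # J/(mol K)

# Assumed critical constants for nitrogen (illustrative only)
TC, PC, OMEGA = 126.2, 3.396e6, 0.0372

A = 0.45724 * R**2 * TC**2 / PC     # standard PR a, Pa m^6 / mol^2
B = 0.07780 * R * TC / PC           # standard PR b, m^3 / mol
KAPPA = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2

def alpha(t):
    return (1 + KAPPA * (1 - math.sqrt(t / TC)))**2

def pressure(t, v):
    """Peng-Robinson P(T, V)."""
    return R * t / (v - B) - A * alpha(t) / (v**2 + 2*B*v - B**2)

def solve_volume(t, p):
    """Bisection for the molar volume root; at supercritical T the
    isotherm is monotone in V, so there is a single root."""
    lo, hi = 1.1 * B, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pressure(t, mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def entropy_departure(t, v):
    """Closed-form PR entropy departure quoted above, J/(mol K)."""
    s2 = math.sqrt(2.0)
    term1 = R * math.log(v / (v - B))
    term2 = ((A * KAPPA / B) * math.sqrt(alpha(t) / (8 * t * TC))
             * math.log((v + B * (1 + s2)) / (v + B * (1 - s2))))
    return term1 + term2

v1 = solve_volume(300.0, 50e5)     # inlet: ~300 K, ~50 bar
print(v1, entropy_departure(300.0, v1))
# V of order 1e-4 m^3/mol, S_dep of order 1 J/(mol K)
```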
Milky Way position on the sky | Question: I'm looking for some sort of boundary data to be able to render the milky way on a star map, as visible from Earth. Something that looks like this:
For that, I need something like a collection of RA hours and DEC degrees of the "boundary points" of what's visible from Earth (technically the Galactic Center), possibly with proper motion too. I'm not looking for precise luminosity data or anything like that, just the points of the blob on the sky that most resembles the Milky Way's shape and position from the Earth. It's important that I want to render the sky map for any given surface point on Earth, for any given time (within the last 100 years at least).
Do you know of a database like that? I've been looking on VizieR but I couldn't find what I was looking for.
Answer: There is a very nice project called d3-celestial by Olaf Frohn on GitHub. It contains a data file describing the Milky Way as polygons, see here. A demo showing this Milky Way can be found here. And even better, the source for this data is cited, pointing to the Milky Way Outline Catalog by Jose R. Vieira.
Depending on your project, the json format from d3-celestial might be easier to read than the one from Jose R. Vieira.
Note that you don't have to worry about these "contours" moving on a time scale of hundred years, but this is another question. | {
"domain": "astronomy.stackexchange",
"id": 7030,
"tags": "milky-way, data-analysis, star-catalogues"
} |
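Once you have the outline polygons as RA/Dec points, rendering them for a given observer and time is a standard equatorial-to-horizontal conversion. A rough self-contained sketch (my own; it uses the usual low-precision sidereal-time approximation, which is fine for a star map but not for precise ephemerides):

```python
import math

def gmst_deg(days_since_j2000):
    """Greenwich mean sidereal time in degrees (low-precision formula)."""
    return (280.46061837 + 360.98564736629 * days_since_j2000) % 360.0

def radec_to_altaz(ra_deg, dec_deg, lat_deg, lon_deg, days_since_j2000):
    """Convert equatorial (RA, Dec) to horizontal (alt, az) for an observer
    at the given latitude and east longitude. Azimuth is from north, eastward."""
    lst = gmst_deg(days_since_j2000) + lon_deg        # local sidereal time
    ha = math.radians(lst - ra_deg)                   # hour angle
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(max(-1.0, min(1.0, sin_alt)))
    az = math.atan2(-math.sin(ha) * math.cos(dec),
                    (math.sin(dec) - sin_alt * math.sin(lat)) / math.cos(lat))
    return math.degrees(alt), math.degrees(az) % 360.0

# A polygon point on the local meridian at the observer's own latitude
# lands at the zenith (alt = 90); apply this to every outline vertex
# and clip at alt < 0 to draw the visible part of the Milky Way.
```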
error running _arm_navigation.launch file | Question:
Dear All,
I generated the shadowrobot_arm_navigation package correctly for my shadow robot.
The "planning_components_visualizer.launch" launch file is working, but when I run the "shadowarm_arm_navigation.launch" file I get the following error:
Assimp reports no scene in package://sr_hand/model/meshes/F1.mesh
Assimp reports no scene in package://sr_hand/model/meshes/F1.mesh
Assimp reports no scene in package://sr_hand/model/meshes/F1.mesh
Assimp reports no scene in package://sr_hand/model/meshes/F1.mesh
Assimp reports no scene in package://sr_hand/model/meshes/F1.mesh
Assimp reports no scene in package://sr_hand/model/meshes/F1.mesh
Assimp reports no scene in package://sr_hand/model/meshes/F1.mesh
Assimp reports no scene in package://sr_hand/model/meshes/F1.mesh
waitForService: Service [/register_planning_scene] has not been advertised, waiting...
Robot frame is 'world'
Waiting for robot state ...
Waiting for joint state ...
Waiting for environment server planning scene registration service /register_planning_scene
waitForService: Service [/register_planning_scene] has not been advertised, waiting...
Waiting for environment server planning scene registration service /register_planning_scene
waitForService: Service [/register_planning_scene] has not been advertised, waiting...
Waiting for robot state ...
Waiting for joint state ...
Originally posted by robot_arm on ROS Answers with karma: 1 on 2012-06-14
Post score: 0
Answer:
You need two things in place for the arm_navigation.launch file to work. First, you need to make sure that you are publishing a joint state message with positions and velocities for every joint in your robot. Then you need to make sure that the controller is up and running, advertising the correct action (control_msgs/FollowJointTrajectoryAction), and that you have the correct action name set in the move_arm launch file.
Originally posted by egiljones with karma: 2031 on 2012-06-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9791,
"tags": "ros, shadow-robot, arm-navigation"
} |
Is the direction of angular velocity just a definition or does it have physical significance? | Question: I am a high schooler, so I don't know a lot of fancy maths, but I do know some calculus and multiplication of vectors as dot or cross products. I am learning about angular velocity, and I am confused about whether its direction is just a definition or has physical significance. I looked and searched for this on the internet and in several other places, and of course I found answers, but they are too diverse: some say that it is defined, and others say that it has some significance. I was amazed and confused much more when I saw gyroscopes in action.
Here are some of the search work I did:
Answer on Quora by Bibhusit Tripathy states that it has some significance
Answer on Physics Stackexchange by The Ledge says that it's just a convention
And there are several other pages on the internet that I tried but this remains the same all over. So what I want is not only the answer but also its validity. Thanks and appreciation to anyone who answers or puts his/her effort into this question.
Edit
Many people were getting confused by what I mean by Physical Significance.
Here's what I mean
If a thing has physical significance then its effects will be real and you will be able to see them. Take force: although the force itself is not visible, its effects are, and in the same direction in which the force is said to act. So a direction is real, but a quantity assigned in that direction can be just a device to help us solve some problems or fix some glitches, and it could very well be a mathematical trick, like a pseudo force in an accelerated frame. Hence for this question: is there something physical actually happening in the direction that is said to be the direction of angular velocity? Like motion: you cannot say that a car is moving in the $-X$ direction if it is moving in the $+X$ direction (if the coordinate system is already defined, of course).
Edit 2
Everyone was confused due to a lot of ambiguity in the question. Here's the final edit, and this is the actual question whose answer would indirectly be the answer to this entire title: Could we have defined the direction of angular velocity to be some other direction if we had more options, or, let's say, we had a 4-dimensional reality?
Answer:
I am learning about angular velocity, and I am confused about whether its direction is just a definition or has physical significance.
You are going to get confusing answers, because your question as stated doesn't mean much. But it means something....
There are things in mathematical notation that are basically arbitrary. Somebody chose to write them that way, and they worked, and now everybody does it that way. Like the way we write multiplication distributing over addition: $a(b+c)$. We could have used any other symbol in place of (); $a:b,c:$ would have worked as well. For $a(b-c)$ we could do $a:b,-c:$.
We could even have a convention where each grouping level gets its own line.
$a(b(d+e-f)+c)$ becomes
a:
b:
d,e,-f
,c
That would in some ways work better, though it would take more space on the page. It's basically arbitrary which way we use.
But the fact that $a(b+c)=ab+ac$ is not arbitrary. It's important.
It looks to me like you're asking what's the important part, and what's just convention.
Could we have defined the direction of angular velocity to be some other direction if we had more options, or, let's say, we had a 4-dimensional reality?
It would have to amount to the same thing -- if it gave a different answer then it would be a wrong answer. Unless we changed the concepts around somehow so they combined differently to get the same end result.
But yes, instead of defining a vector axis as the defining direction, we could have two vectors to define the plane the rotation is in. And then at any one moment the velocity would be something in that plane. That wouldn't make any practical difference in 3D but it might be clearer.
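To make the plane-versus-axis point concrete, here is a small sketch (plain Python, with invented numbers for illustration) showing that the 3D axis vector and the antisymmetric rotation-rate matrix encode exactly the same information:

```python
# In 3D the angular-velocity *vector* w and the antisymmetric
# angular-velocity *matrix* Omega carry the same information:
# v = w x r  and  v = Omega r  give identical velocities.
# The matrix (plane) form generalizes to any number of dimensions,
# where no single "axis" direction exists.

def cross(a, b):
    """Ordinary 3D cross product a x b."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def skew(w):
    """Antisymmetric matrix Omega with Omega r == w x r."""
    wx, wy, wz = w
    return [[0.0, -wz,  wy],
            [ wz, 0.0, -wx],
            [-wy,  wx, 0.0]]

def matvec(M, r):
    return [sum(M[i][j]*r[j] for j in range(3)) for i in range(3)]

w = [0.0, 0.0, 2.0]        # spin about the z axis
r = [1.0, 0.5, 0.0]
print(cross(w, r))         # velocity from the axis-vector picture
print(matvec(skew(w), r))  # same velocity from the plane/matrix picture
```

In four or more dimensions only the matrix (plane) description survives, which is why the 3D axis vector is best thought of as a bookkeeping convenience.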
A rotation is in some particular plane. If you use polar notation, rotation changes the angle but not the length. Polar coordinates (or for that matter cartesian coordinates) gives you an arbitrary zero point, and whatever point you rotate around, you arbitrarily subtract its displacement from all the locations so it will be at zero to do the rotation. You can add the displacement back later with no loss.
Using the normal vector is only one possible way to describe which plane the rotation is in. That's arbitrary notation. | {
"domain": "physics.stackexchange",
"id": 72255,
"tags": "newtonian-mechanics, vectors, conventions, angular-velocity"
} |
What is the base cancer rate for an arbitrary carcinogen? | Question: Are all carcinogens equally potent? Is the relationship between dose and probability of cancer roughly equal, or are there some carcinogens that provoke cancer significantly more than their cousins?
Answer: To answer this question in its entirety we have to split it into two questions:
What are the underlying mechanisms of carcinogenicity?
One of the main mechanisms behind carcinogenicity is the mutagenicity of the carcinogens, i.e. the ability to cause mutations: aberrations of the cell DNA leading to uncontrolled proliferation. This classical paper investigates the relation between carcinogenicity and mutagenicity.
One should mention here that there are many possible types of mutations; mutations are not equally dangerous for cells, and some mutations can be successfully repaired using the intact strand.
Therefore the following parameters of the source substances need to be measured to estimate the carcinogenicity:
Substance concentrations or absolute amounts (both being indirect measures of their chemical activities: the lower the concentration needed for causing cancer, the higher the chemical activity, and vice versa).
Substance radioactivity (for the mutagenesis due to radiation).
How can we measure the carcinogenicity of different substances?
The most general approach here is to introduce a certain amount of carcinogen into an animal body or into cultured cells and to observe the effect. The effect is calculated as the percentage of cells that undergo the transformation from normal into cancer cells. Two metrics are available here:
DT = Tumorigenic Dose (the amount of substance causing a certain percentage of cancer in treated animals, with all treated animals taken as 100%)
CT = Tumorigenic Concentration (the same, but expressed as a concentration and used in cell cultures)
(They are written CT and DT because in science people tend to use Latin abbreviations, where the adjective actually follows the noun.)
The common metrics are DT5/CT5 (5% of cells/animals get cancer) and DT50/CT50 (50% of the animals). These are similar to other common metrics, the most common being LD50/LC50, the lethal dose/concentration for 50% of the animals/cells.
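As a toy illustration of how such a DT5/DT50 value is read off a dose-response curve, here is a sketch using linear interpolation; all the numbers are invented for illustration and are not real carcinogen data:

```python
def tumorigenic_dose(points, level):
    """points: (dose, fraction_with_tumour) pairs sorted by dose.
    Returns the interpolated dose at the given response level."""
    for (d0, f0), (d1, f1) in zip(points, points[1:]):
        if f0 <= level <= f1:
            return d0 + (level - f0) * (d1 - d0) / (f1 - f0)
    raise ValueError("level outside the measured response range")

# Hypothetical measurements: (dose, fraction of treated animals with tumours)
dose_response = [(0.0, 0.0), (1.0, 0.1), (5.0, 0.4), (10.0, 0.8)]

dt5 = tumorigenic_dose(dose_response, 0.05)   # DT5
dt50 = tumorigenic_dose(dose_response, 0.50)  # DT50
print(dt5, dt50)
```

A more potent carcinogen simply has a smaller DT50, so comparing this number across substances is what quantifies "some carcinogens provoke cancer significantly more than their cousins".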
Unfortunately I could not find any pre-compiled list of the best-known carcinogens and their DT/CT values. These seem to be of interest primarily to scientists. But going back to your question: you are absolutely right, some carcinogens are much more potent in causing cancer than others! | {
"domain": "biology.stackexchange",
"id": 48,
"tags": "cancer, statistics"
} |
Solar neutrino problem in 1968 and experimental verification of neutrino oscillation in 2001. Why the huge delay? | Question: The solar neutrino deficit was first observed in the late 1960s, and the theory of neutrino oscillation was developed in 1967. But the first convincing evidence of solar neutrino oscillation came from SNO in 2001. Why did it take nearly 35 years to verify neutrino oscillation?
reference: http://en.wikipedia.org/wiki/Neutrino_oscillation
Answer: The unique thing about SNO was that it was simultaneously sensitive to charged-current and neutral-current interactions, because they used deuterated water.
The three main interactions are
Neutrino capture on deuterium, $\nu + n \to e + p$, which generates a fast electron and a slow proton. The lepton and the baryon exchange a $W$ boson (the "charged weak current"). Only electron-type neutrinos may participate in this interaction; $\nu_\mu$ and $\nu_\tau$ would have to generate heavier leptons, but solar neutrinos don't carry enough energy to make those more massive particles.
Deuterium dissociation due to neutrino scattering, $\nu + (np) \to \nu + n + p$. The free neutron will wander around for a while before getting captured on another deuteron and emitting a gamma ray. Because the neutrino's charge doesn't change this reaction is mediated by the "neutral current" (the $Z$ boson) and all neutrinos contribute equally.
Elastic scattering from electrons, $\nu + e \to \nu + e$. This interaction has both charged- and neutral-current contributions, so neutrinos of all flavors may contribute, but electron neutrinos contribute more heavily than the other flavors.
These different interaction channels gave independent measurements of the total neutrino flux and the electron neutrino flux.
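In terms of the electron-neutrino flux $\phi_e$ and the combined mu/tau flux $\phi_{\mu\tau}$, the three channels therefore measure (the $\approx 0.15$ is the approximate ratio of the elastic-scattering cross sections for $\nu_{\mu,\tau}$ versus $\nu_e$):

```latex
\begin{align}
\phi_{CC} &= \phi_e \\
\phi_{NC} &= \phi_e + \phi_{\mu\tau} \\
\phi_{ES} &\approx \phi_e + 0.15\,\phi_{\mu\tau}
\end{align}
```

Finding $\phi_{NC} > \phi_{CC}$ is then direct evidence that some solar $\nu_e$ arrive as $\nu_\mu$ or $\nu_\tau$, independent of any solar-model normalization.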
It's worth noting that the neutral current had only just been predicted in 1967, and was not discovered until the early 1970s.
For the most part the solar neutrino community believed that there was some misunderstood property of neutrino detection that caused everybody to measure one-third the predicted solar neutrino flux. It took many years before the possibility that the misunderstood bit was a property of the neutrino itself was really taken seriously.
I don't know for certain, but I would expect that the design discussions for SNO began in the early 1990s. There are many technical challenges associated with the detector — not least that they have many tons of heavy water suspended in many tons of light water in a thin, transparent membrane. The heavy water is on loan from the Canadian nuclear power industry; SNO has a hefty insurance policy to pay to replace it if the membrane ruptures and the heavy water mixes with the light water and is ruined. | {
"domain": "physics.stackexchange",
"id": 13988,
"tags": "particle-physics, experimental-physics, neutrinos"
} |
Is elitism preferred over non-elitism in the cross-over operator? | Question: There are two potential approaches when performing the cross-over operation in genetic algorithms.
Use only the elites in the pool, probably the ones that are also going to be directly transferred to the next generation.
Use all the population present in the pool.
Is there any evidence that cross-over only with the elites of the population makes the GA converge faster to a good solution? I guess that, in order to escape from local minima, cross-over with all the population is needed. On the other hand, why should we perform cross-over with the least fit individuals?
Any idea?
Answer: First of all, the answer to your question is largely dependent on the problem you are trying to solve, the size of your population, the size of your problem's search space, and the rest of your GA's hyper-parameters, such as your mutation rate.
If the problem has a large search space, then applying the elite strategy you described above will most likely cause your algorithm to never reach the optimal solution, or to reach it only after too many iterations. The reason for this is that if your search space is big, then, since the initialization of your initial population is random, the chances are that the "fittest" individual(s) in the first few iterations are not necessarily going to lead you to the optimal solution. Hence applying the elite strategy too early might make you get stuck on a local optimum forever or for too long. The way I have personally found to be effective in this kind of situation is to allow the algorithm to explore as large a search space as possible early on (i.e. use "all the population present in the pool") and then gradually introduce the elite strategy you described above (this is an idea similar to momentum in neural networks). In short, you want to explore early on, and then gradually start exploiting what you have found.
If the search space is relatively small, I think the elite strategy would be a very good idea. If you really want to stick to using elitism, you might also try to mutate the elite individuals (this allows you to explore a little bit), or choose a higher elitism rate (i.e. the number of individuals that are elite).
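A minimal sketch of that "explore first, exploit later" idea in Python (the linear ramp to 0.8 and the top-10% elite cut are arbitrary illustrative choices, not established defaults):

```python
import random

def pick_parent(population, fitness, generation, max_generations, rng):
    """Select one crossover parent; the chance of drawing from the elite
    pool grows linearly with the generation number."""
    elite_rate = 0.8 * generation / max_generations   # ramps from 0 to 0.8
    ranked = sorted(population, key=fitness, reverse=True)
    if rng.random() < elite_rate:
        pool = ranked[:max(1, len(ranked) // 10)]     # top 10% = "elites"
    else:
        pool = ranked                                 # anyone may reproduce
    return rng.choice(pool)

rng = random.Random(0)
pop = list(range(20))   # toy individuals whose fitness is the value itself
early = pick_parent(pop, lambda x: x, generation=0, max_generations=100, rng=rng)
late = pick_parent(pop, lambda x: x, generation=99, max_generations=100, rng=rng)
print(early, late)
```

Early generations draw parents uniformly from the whole pool (pure exploration); late generations mostly cross over the fittest individuals (exploitation), which is exactly the gradual introduction of elitism described above.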
Now to answer your other questions:
Is there any evidence or established belief that cross-over with only the elites of the population makes the solutions converge faster?
No, but I don't have a reference for this, so I speak under correction; I will explain why I said no:
The problem with pure elitism is that it causes the genetic algorithm to converge to a local maximum instead of the global maximum. Basically, pure elitism is just a race to the nearest local maximum, and as soon as you get there, if you continue with elitism, you get almost no improvement from that point on.
I guess that in order to escape from local minima, cross-over on all the population is needed; on the other hand, I also ask why we should perform cross-over on the weak part of the population?
"why perform cross-over on weaker individuals in the population?" - to explore. The hope is that applying a cross-over or a mutation to a weaker individual will produce a fitter individual. | {
"domain": "ai.stackexchange",
"id": 339,
"tags": "genetic-algorithms, evolutionary-algorithms, crossover-operators, elitism"
} |
How to shorten this terrible HTTP header parser? | Question: I am trying to read a line ending in \r\n from a Handle. This is an HTTP header.
I’m fairly new to functional programming, so I tend to use a lot of case expressions and stuff. This makes the code very long and ugly.
import System.IO (Handle, hGetChar)
import Network.Socket (HostName)

handleRequest :: HostName -> Handle -> IO ()
handleRequest host handle = do
requestLine <- readHeaderLine
putStrLn $ requestLine ++ "\n-------------------"
-- FIXME: This code is bad, and its author should feel bad.
where
readHeaderLine = do
readHeaderLine' ""
where
readHeaderLine' s = do
chr <- hGetChar handle
case chr of
'\r' -> do
nextChr <- hGetChar handle
case nextChr of
'\n' -> return s
_ -> readHeaderLine' $ s ++ [chr, nextChr]
_ -> readHeaderLine' $ s ++ [chr]
How can I reduce the number of case expressions in this code? I thought of using Parsec, but that seemed overkill to me for something this trivial, and I don’t know how well it works with Handles.
Answer: You can use hGetContents to read lazily from a handle, and the split package (which will hopefully be bundled into the next Haskell Platform release) provides some nice ways of dealing with lists.
import Data.List.Split (splitOn)
import System.IO (hGetContents)
... do
...
contentLines <- splitOn "\r\n" <$> hGetContents handle
contentLines will contain a lazy list of the "entire contents" of that handle, split into chunks that were originally separated by \r\n.
Another approach is to use something like the conduit package. See, for example, Data.Conduit.Binary.lines. Keep an eye on conduit & friends; I get the feeling that within the next year or so the Haskell community will start to agree on the "best" implementation of this sort of abstraction, and then some good tutorials will inevitably follow. | {
"domain": "codereview.stackexchange",
"id": 2410,
"tags": "haskell, parsing, functional-programming, http"
} |
torque for a machine shaft | Question: this is a formula for torque and I was wondering what P stands for in this case
Answer: $$M_t = \frac{P}{2\pi\cdot n}$$
where:
$M_t$ is the torque transmitted through the shaft (unit in SI: Nm).
$P$ is the power transmitted through the shaft (unit in SI: W).
$n$ is the rotation rate of the shaft in revolutions per second (rps). (This is important for the units to be correct.)
If you want to use $n$ with revolutions per minute (rpm), you should use the following formula
$$M_t = \frac{60\cdot P}{2\pi\cdot n [rpm]}$$
(square brackets next to a quantity indicate the units that should be used in the equation)
As mentioned elsewhere another common equation (which basically incorporates the conversion between rpm and angular velocity) is the following:
$$M_t =\frac{P}{\omega}$$
where:
$\omega $ is the angular velocity (units in SI: rad/s)
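A quick numeric sanity check that the two forms agree, for a hypothetical shaft transmitting 10 kW at 1500 rpm:

```python
import math

P = 10_000.0      # transmitted power, W
n_rpm = 1500.0    # shaft speed, revolutions per minute

omega = 2.0 * math.pi * n_rpm / 60.0              # angular velocity, rad/s
Mt_from_omega = P / omega                         # Mt = P / omega
Mt_from_rpm = 60.0 * P / (2.0 * math.pi * n_rpm)  # Mt = 60 P / (2 pi n[rpm])

print(round(Mt_from_omega, 2), "Nm")  # both forms give the same torque
```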
The relations of the angular velocity to the rotations per second and per minute, correspondingly, are $\omega = 2\pi \cdot n[rps]$ and $\omega = \frac{2\pi}{60} \cdot n[rpm]$ | {
"domain": "engineering.stackexchange",
"id": 4257,
"tags": "torque, solid-mechanics"
} |
Cosmic Microwave Background seen from a hypothetical foreign Galaxy? | Question: My basic understanding of the CMB tells me that this 'wall of radiation' is currently the furthest electromagnetic radiation from our position that we can detect. Since light travels at a finite speed, objects far away are seen in an 'older' state, as the emitted photons take time to reach our measuring tools.
My understanding is that, as there is effectively no centre of the universe, the CMB was emitted from every point of the universe roughly 300 thousand years after the Big Bang as it cooled and expanded. I understand that the CMB we detect today has travelled roughly 47 billion light years to reach us (EDIT: unsure on figures) and therefore we are effectively seeing the universe as it was roughly 13.8 billion years ago.
My questions are as follows. Suppose we are an observer in a hypothetical Galaxy X located 47 billion light years (EDIT: unsure on figures) from the Milky Way Galaxy. We launch our own microwave detecting satellite and survey our sky.
If there is no centre or starting point of the universe, when we look out far enough from Galaxy X do we see the same CMB beginnings that an observer in the Milky Way Galaxy sees?
When looking back towards the Milky Way Galaxy do we simply see CMB from roughly 13.8 billion years ago?
If everywhere is effectively the centre of the universe, I would expect an observer from every Galaxy in the universe to see the same CMB?
If there was an observer at every point of the universe, would we expect them to all see the same CMB when looking out with microwave measuring equipment?
Would they all argue that they are in the present time and every direction they observe at far enough distances are further back in time?
Am I on the 'right track' here?
Cheers,
Dan.
EDIT: unsure on figures.
Questions still stand to be answered.
Answer: The CMB will look very nearly the same to both us and your observer on a distant galaxy.
Your argument is correct. Our model for the expansion of the universe is based on the assumption that the universe is the same everywhere; technically, that it is isotropic and homogeneous. Though at the current time this is obviously untrue on small scales (because we have dense objects like stars separated by vacuum), it appears to be true if we average the matter distribution on a large scale. And at the time the CMB was emitted, well before any stars formed, the distribution of matter was homogeneous to better than one part in $10^5$.
So we expect that for every observer everywhere in the universe the CMB is going to look the same, to about one part in $10^5$. That one part in $10^5$ is the fluctuations in the CMB most recently measured by the WMAP experiment. The current favourite explanation for these is that they originated in quantum fluctuations during cosmological inflation, and because these fluctuations are random they are randomly distributed and will look different for different observers.
However, even the fluctuations will look the same in some respects. Though the detail of the fluctuations will be different for different observers we expect that their power spectrum will be the same for everyone. | {
"domain": "physics.stackexchange",
"id": 79321,
"tags": "cosmology, galaxies, cosmic-microwave-background"
} |
Relation of spring constant with mean radius of spring | Question: My teacher says that the spring constant depends on its radius. I tried to understand this, and checked many questions on this site and other sites. All of them say that the spring constant depends on the number of windings and the material of the spring, but they don't say anything about its radius. Is it true? If yes, which radius did he mean? Is it the radius of the winding, or the radius of the wire from which the spring is made?
Answer: The stiffness $k$ of a coil spring can be expresses as:
$$k=\frac{E\,d^4}{16\,(1+\nu)\,(D-d)^3\,n}$$
Where $E$ is the modulus of elasticity of the material, $d$ is the diameter of the wire used in the coil, $\nu$ is Poisson's ratio of the material, $D$ is the outer diameter of the coil, and $n$ is the number of wraps in the coil.
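A small numeric sketch of this formula, using steel-like values ($E$ = 200 GPa, $\nu$ = 0.3; these numbers are illustrative, not from the answer above), showing the strong dependence on the coil radius:

```python
def spring_k(E, nu, d, D, n):
    """Coil-spring stiffness k = E d^4 / (16 (1 + nu) (D - d)^3 n)."""
    return E * d**4 / (16.0 * (1.0 + nu) * (D - d)**3 * n)

E, nu = 200e9, 0.3                       # Pa, dimensionless (steel-like)
k1 = spring_k(E, nu, 0.002, 0.022, 10)   # 2 mm wire, 20 mm mean coil diameter
k2 = spring_k(E, nu, 0.002, 0.042, 10)   # same wire, mean diameter doubled

print(k1)       # stiffness in N/m
print(k1 / k2)  # doubling the mean radius softens the spring by 2**3 = 8
```

So the radius the teacher meant is the mean radius of the winding: the stiffness falls off as its cube, while it grows as the fourth power of the wire radius.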
Now, $\frac{D-d}{2}$ is equal to the mean radius of the coil, i.e. the distance from the center of the coil to the middle of the wire. So the equation can be rewritten as:
$$k=\frac{E\,r^4}{8\,(1+\nu)\,{r_{mean}}^3\,n}$$
Where $r$ is the radius of the wire and $r_{mean}$ is the mean radius of the coil. | {
"domain": "physics.stackexchange",
"id": 25481,
"tags": "newtonian-mechanics, classical-mechanics, spring"
} |
Why is angular momentum of the Earth/Moon system conserved? | Question: Why is the angular momentum of the Earth/Moon system conserved, apparently unaffected by external forces such as force of the Sun?
Answer: Angular momentum is conserved when the net external torque is zero (you're thinking of the condition for linear momentum, which requires zero net external force), via $\tau = \frac{dL}{dt}$. Anyway, as long as we assume that the Earth, Moon, and planets are all orbiting around in a disk (which is a decent approximation), then the position vector to the Earth/Moon system from any of the other planets lies in the same plane as their perturbing gravitational forces, and furthermore these other planets are sufficiently far away from the Earth/Moon system that the position vector and force vector are essentially parallel. The torque is then $\vec{\tau} = \vec{r} \times \vec{F}_{Grav} \approx \vec{0}$, since the cross product of (nearly) parallel vectors vanishes, and thus angular momentum is conserved. | {
"domain": "physics.stackexchange",
"id": 51883,
"tags": "angular-momentum, conservation-laws, solar-system, celestial-mechanics, tidal-effect"
} |
What is the maximum pressure that JB Weld can withstand? | Question: JB Weld, a brand name of steel-reinforced epoxy, makes claims of being the "world's strongest bond".
What is the maximum pressure it can withstand, and what is it about the interaction between the steel and the epoxy that makes it so strong?
Answer: This is only a partial answer, addressing the maximum pressure a JB Weld bond can withstand. I am still thinking about the reason for the strength, so I will leave that to the 'real' chemists among you for now.
According to Repair Products, JB Weld in its fully hardened state has the following properties (in psi):
Tensile Strength: 3960
Adhesion: 1800
Flex Strength: 7320
Tensile Lap Shear: 1040
Tensile strength is simply the pressure needed to break the material by pulling it apart. Flex strength is the pressure needed to bend the material, which, in the case of a perfectly homogeneous sample, would equal the tensile strength.
The adhesion is the pressure needed to pull the bond from the surface when pulling perpendicular to the surface (parallel to the bond). The tensile lap shear is also an 'adhesion', but measured when pulling parallel to the surface (perpendicular to the bond).
Since the latter is the lowest, this would be the maximum pressure the JB Weld could withstand, but only if the load is applied exactly perpendicular to the bond. If the load is applied in a different direction, you will get some mixture of the adhesion and the tensile lap shear pressures.
What is clear from these numbers is that the bond itself will not break but rather it will detach from the materials it is keeping together. | {
"domain": "chemistry.stackexchange",
"id": 439,
"tags": "everyday-chemistry, materials"
} |
Models don't appear when setting up Gazebo programmatically | Question:
Hi everyone,
I tried the animated_box example, which sets up the server with a box inside. Then I launch gzclient; it connects to the server and the box appears in the "models" list but not in the render.
I also tried to set up other models programmatically, but they don't move (it's like they're not being simulated): they don't fall down, for example (the gravity flag is true in the model).
Is there a way to make this work?
I'm working on Windows with Gazebo 7 compiled in 32 bits with Visual Studio 2013. Also, unlike the tutorial, I use the latest version of OGRE.
Thank you!
XB32Z
Originally posted by XB32Z on Gazebo Answers with karma: 23 on 2016-03-10
Post score: 1
Original comments
Comment by XB32Z on 2016-03-10:
it appears when gzclient is launched before publishing
Answer:
Sorry for the very late answer :/
I made my own branch available here: https://bitbucket.org/XB32Z/windows_gazebo7 It hasn't been merged with the main because it doesn't compile under linux anymore. I guess that Gazebo is just not made for Windows ...
However I got it to work: launch the client before adding any models!
Thx for your help.
Originally posted by XB32Z with karma: 23 on 2016-03-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Peter Mitrano on 2016-04-01:
The order of launch sounds like the right solution. Side note: we still intend to get gazebo running on Windows eventually, so we would love if you could keep that windows branch up to date with master. Even if it doesn't compile on linux, we can work that part out. Windows contributions are super helpful for us!
Comment by XB32Z on 2016-04-08:
Hi!
I'm planing to make something cleaner soon :) | {
"domain": "robotics.stackexchange",
"id": 3885,
"tags": "windows"
} |
Force applied to a cam follower | Question: Hello, I am a simple rock climber and would like to figure out how much force is applied to the lobe of a cam (and hence the rock) when the cam is loaded. The cam lobe is a circle, and the ramp or cam goes from 7/16”-11/16”; I'm not sure whether that matters, or if just the point of contact matters. If I ignore friction and those kinds of nuisance variables, what is my formula if I, at 180 lbs, simply hang on the cam? (Is it a simple lever calculation?)
Answer: The profile of a spring-loaded cam is a particular curve called a logarithmic spiral, with the unique property that the angle it makes with the wall of the crack at the point of contact remains the same (13-14 degrees) no matter how much the cam expands.
This small angle means the force applied by the cam to the crack is always $F_{horiz} = W \cdot \cot(13^\circ) \approx 4.3\,W$, which after many tests has proven to be the safest and most practical.
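Plugging in the numbers from the question (180 lb climber, 13 degree cam angle) as a quick sketch:

```python
import math

W = 180.0                         # lbf, the climber's weight from the question
cam_angle = math.radians(13.0)    # constant cam angle of the spiral
F_horiz = W / math.tan(cam_angle) # W * cot(13 deg), about 4.3 W

print(round(F_horiz))             # outward force on the rock, in lbf
```

So hanging 180 lb on the cam presses on the rock with roughly 780 lbf; it is this large multiplication, together with friction, that keeps the cam locked in the crack.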
Here are the detailed calculations: cam math. | {
"domain": "engineering.stackexchange",
"id": 2946,
"tags": "mechanical-engineering"
} |
Fourier Transform of $|t|$ | Question: I was going through Papoulis' book (The Fourier Integral and its Applications) when I came across the Fourier Transform for $|t|$. To find it he writes $|t|$ as (I am not sure how):
$$|t| = -\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\cos(\omega t)}{\omega^2}d\omega \tag{1}$$
and then states the Inverse Fourier Transform formula to write:
$$\mathcal{F}\{|t|\} = -\frac{2}{\omega^2}$$
which checks out since:
\begin{align}
\mathcal{F}^{-1}\{-2/\omega^2 \} &= \frac{1}{2\pi}\int_{-\infty}^{\infty}-\frac{2}{\omega^2}e^{j\omega t}d\omega\\
& = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{e^{j\omega t}}{\omega^2}d\omega\\
& = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\cos(\omega t)}{\omega^2}d\omega \qquad 1/\omega^2 \: \text{is an Even Function}
\end{align}
and also for the forward Fourier Transform we can use the definition in $(1)$ as follows:
\begin{align}
\mathcal{F}\{|t|\} & = -\frac{1}{\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{\cos(\omega_0 t)}{\omega_0^2}e^{-j\omega t}dt \:d\omega_0\\
& = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{1}{\omega_0^2}\int_{-\infty}^{\infty}\cos(\omega_0 t)e^{-j\omega t}dt \:d\omega_0\\
& = -\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{1}{\omega_0^2}\left[\pi (\delta(\omega_0-\omega)+\delta(\omega_0 +\omega) \right] \:d\omega_0\\
& = -\int_{-\infty}^{\infty}\frac{1}{\omega_0^2}\left[(\delta(\omega_0-\omega)+\delta(\omega_0 +\omega) \right] \:d\omega_0\\
& = -\left(\frac{1}{\omega^2}+\frac{1}{\omega^2}\right) = -\frac{2}{\omega^2}
\end{align}
(apologies for any confusion with the two different $\omega$'s) but why does the definition in $(1)$ hold? Also is there a better way to find this Fourier Transform? I was thinking perhaps using the fact that:
$$|t| = t \:\mathtt{sgn}(t)$$
I could apply the convolution theorem where:
$$|t| \overset{\mathcal{F}}\longrightarrow \frac{1}{2\pi} \left(\mathcal{F}\{t\}\ast\mathcal{F}\{\mathtt{sgn}(t)\}\right)$$
where $\mathcal{F}\{\mathtt{sgn}(t)\} = \frac{2}{j\omega}$ and $\mathcal{F}\{t\}$ is:
$$tx(t) \overset{\mathcal{F}}\longrightarrow j\frac{d}{d\omega}X(j\omega)$$
where $x(t) = 1$ and $X(j\omega) = 2\pi \delta(\omega)$ which gives us:
$$t \overset{\mathcal{F}}\longrightarrow 2\pi j \delta'(\omega)$$
Writing the convolution we have:
\begin{align}
\frac{1}{2\pi} \left(\mathcal{F}\{t\}\ast\mathcal{F}\{\mathtt{sgn}(t)\}\right) &= \frac{1}{2\pi}\int_{-\infty}^{\infty} 2\pi j \delta'(y)\cdot \frac{2}{j(\omega-y)}dy\\
&= \int_{-\infty}^{\infty} \delta'(y)\cdot \frac{2}{(\omega-y)}dy
\end{align}
which should equal $-\frac{2}{\omega^2}$ but I don't know how since I am not exactly sure how the derivative of the Dirac-Delta Impulse behaves under the integral or otherwise.
This was my attempt at getting a reasonable and understandable solution for the Fourier transform of $|t|$. I would appreciate it if anyone could either explain how we can obtain $(1)$ or complete/fix my attempt using the convolution theorem (perhaps my solution is completely wrong since $t$ is not absolutely integrable, but I treated it in the realm of tempered distributions, which may or may not work; I am not sure). Better yet, if someone could offer an alternative solution that involves some rigorous mathematics, that would be wonderful.
Answer: For the derivation of Eq. $(1)$ in your question, Papoulis refers to Eq. ($I$-$32$) in the appendix on distributions. That equality is basically a consequence of how the derivative of a distribution is defined (see below).
Concerning your suggestion to compute the Fourier transform of $|t|$ by noting that
$$|t|=t \operatorname{sgn}(t)$$
and using the well-known Fourier identities
\begin{align*}
t &\Longleftrightarrow 2\pi j\delta'(\omega)\\
\operatorname{sgn}(t) &\Longleftrightarrow \frac{2}{j\omega}
\end{align*}
you just need to realize that for any differentiable function $f(\omega)$ we have
$$(f\star\delta')(\omega)=f'(\omega)$$
where $\star$ denotes convolution. Hence,
\begin{align*}
\mathcal{F}\{|t|\} &= \frac{1}{2\pi}2\pi j\delta'(\omega)\star\frac{2}{j\omega} \\
&= \delta'(\omega)\star\frac{2}{\omega} \\
&= \left(\frac{2}{\omega}\right)' \\
&= -\frac{2}{\omega^2}
\end{align*}
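The identity $(f\star\delta')(\omega)=f'(\omega)$ used in the second step can be checked directly from the sifting property $\int\delta'(y)\,g(y)\,dy=-g'(0)$:

```latex
(f\star\delta')(\omega)
= \int_{-\infty}^{\infty} f(\omega-y)\,\delta'(y)\,dy
= -\left.\frac{d}{dy}\,f(\omega-y)\right|_{y=0}
= f'(\omega)
```

since differentiating $f(\omega-y)$ with respect to $y$ brings out a factor $-1$ by the chain rule, which cancels the minus sign from the sifting property.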
What follows is not a rigorous proof but an attempt to make Eq. $(1)$ of the question plausible. Let $g(t)$ be a distribution and $\phi(t)$ a well-behaved test function. The derivative $g'(t)$ of a distribution is defined by
$$\int_{-\infty}^{\infty}g'(t)\phi(t)dt=-\int_{-\infty}^{\infty}g(t)\phi'(t)dt\tag{1}$$
Note that this definition is consistent with the rule for integration by parts, assuming that the product $g(t)\phi(t)$ vanishes for $|t|\to\infty$.
With $g'(t)=1/t^2$ and $\phi(t)=\cos\omega t$ we have $g(t)=-1/t$ and $\phi'(t)=-\omega\sin\omega t$. From $(1)$ it follows that
$$\int_{-\infty}^{\infty}\frac{\cos\omega t}{t^2}dt=-\omega\int_{-\infty}^{\infty}\frac{\sin\omega t}{t}dt\tag{2}$$
The integral on the right-hand side of $(2)$ is $\pi$ times the DC value of the frequency response of an ideal lowpass filter with cut-off frequency $\omega$ (for negative $\omega$ we have to invert the sign):
$$\int_{-\infty}^{\infty}\frac{\sin\omega t}{t}dt=\pi\operatorname{sgn}\omega\tag{3}$$
Hence,
$$\int_{-\infty}^{\infty}\frac{\cos\omega t}{t^2}dt=-\omega\,\pi\operatorname{sgn}\omega=-\pi|\omega|\tag{5}$$
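(A numerical aside, not part of the original derivation: identity $(3)$ can be sanity-checked by truncating the improper integral at $|t| = T$; the truncation error decays like $1/T$.)

```python
import numpy as np

# Midpoint-rule check of (3): the integral of sin(w*t)/t over the real
# line equals pi*sgn(w). The integrand is even in t, so integrate over
# t > 0 and double the result.
def sine_integral(omega, T=5000.0, dt=0.002):
    t = np.arange(dt / 2, T, dt)  # midpoints; avoids the removable t = 0 point
    return 2.0 * np.sum(np.sin(omega * t) / t) * dt

print(sine_integral(1.0))   # ≈ 3.1416
print(sine_integral(-2.0))  # ≈ -3.1416
```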
Exchanging the variables $\omega$ and $t$, we obtain
$$|t|=-\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\cos\omega t}{\omega^2}d\omega\tag{6}$$
Note that from $(5)$ we also obtain the Fourier transform pair
$$|\omega|\Longleftrightarrow-\frac{1}{\pi t^2}\tag{7}$$ | {
"domain": "dsp.stackexchange",
"id": 12461,
"tags": "fourier-transform, integration, dirac-delta-impulse"
} |
How to denote the space complexity in terms of output | Question: Normally the space complexity of an algorithm $A$ is denoted $\textrm{SPACE}(A)$, which indicates how much space is needed by the computation itself. I would however like to also describe how much storage an algorithm needs, i.e. if I have a Turing Machine with three tapes (one for the input, one for the computation, and one for the output), I would like to express the size of each one of them.
Is there a standard way of doing so? If so, is there a reference?
Currently I am using the symbol $\textrm{DATA}$. As an example consider function $list(n) = (1,\dots,n)$. The respective algorithm $List$ for computing the function $list$ is s.t. $\textrm{TIME}(List(n)) \in \mathcal{O}(n)$, since we need to write $n$ symbols, $\textrm{SPACE}(List(n)) = \mathcal{O}(1)$, since we only need to store the current symbol, and $\textrm{DATA}(List(n)) = \mathcal{O}(n)$, since the output is a vector of length $n$.
Answer: If your algorithm/TM is called $A$ and the input $x$, it is customary to denote the size of the output by using function and string notation, that is $|A(x)|$. | {
"domain": "cs.stackexchange",
"id": 5500,
"tags": "complexity-theory, space-complexity, notation"
} |
start ros node without a launch file | Question:
I would like to init and start a ROS node as a service in another framework, to offer my data to an already existing data analysis software written in ROS. That means that this node is created within a non-ROS process without any call to launch files.
So far I have managed to call ros::init and create a NodeHandle etc. But I need to start some ROS node first via roslaunch before my process gets fully initialized (otherwise my process is blocked and waits for ROS). I guess roscore etc. is missing.
How can I initialize all that within my own process, so that ROS gets fully set up without any call to roslaunch?
Originally posted by JohnnyRep on ROS Answers with karma: 1 on 2013-03-11
Post score: 0
Answer:
Just run roscore once somewhere else, and you should be good. You'll need to make sure that all of your ROS processes are launched with the proper $ROS_MASTER_URI environment variable so that they point to the machine on which you're running the roscore.
To learn more about how all this works, take a look at the roscore wiki page.
Originally posted by jbohren with karma: 5809 on 2013-03-11
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 13315,
"tags": "roslaunch, roscore"
} |
What happens to resistance if we double current intensity? | Question:
Problem: what happens to the resistance if we double the current intensity?
my attempt:
Ohm's law tells us $R = { V \over I }$, so: $${R \over 2} = { V \over 2I }$$
so the resistance should decrease to half if we double the current intensity; however, my textbook says that the resistance should stay the same. I doubt that, and I wanted to know whether this is a typo or I am mistaken.
Answer: If we have a given physical resistor, and we want to double the current through it, we do that by doubling the voltage across it. The resistance, at least ideally, stays the same.
In the real world, the resistor value will change somewhat due to the temperature of the part rising. Maybe a few 10's of ppm for a high quality resistor, or several per cent if we deliberately choose a resistor material with high thermal coefficient of resistance (TCR).
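A minimal numerical sketch of the ideal case (the values are arbitrary): doubling the voltage doubles the current, and the measured ratio $V/I$ is unchanged.

```python
def apparent_resistance(V, R=8.0):
    """Ideal ohmic resistor: the current scales with the voltage,
    so the measured ratio V/I stays equal to R."""
    I = V / R
    return V / I

print(apparent_resistance(4.0))  # 8.0
print(apparent_resistance(8.0))  # 8.0 -- doubled current, same resistance
```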
If we had a fixed voltage source and wanted to choose a different resistor that would allow twice the current through, then we'd choose one with half the resistance value. If you had this in mind, then you've answered the question correctly, you've just misunderstood what question the instructor wanted to ask (because their wording was ambiguous). | {
"domain": "physics.stackexchange",
"id": 61601,
"tags": "electric-circuits, electric-current, electrical-resistance, voltage"
} |
Differentiating linear and non-linear motion | Question: If there is a Person sitting in a vehicle which is moving, how could he find out whether it's in motion and if he figures that out, how does he find out whether it's accelerated or non-accelerated?
Answer: If the vehicle is moving at constant speed and the individual can't see the exterior of the vehicle, then there is absolutely no way for him to tell if it's moving.
It's actually a fundamental building block of special and general relativity. All inertial frames are equivalent, meaning that you can't determine an absolute velocity for an object, only velocities relative to you.
If the vehicle is accelerating, then you will feel it, because humans have built-in accelerometers in their bodies. A more fundamental truth is that, given different observers, all of the observers will always agree on which ones are accelerating. There is no ambiguity, in contrast with uniform motion at constant velocity, where it is not possible to determine an absolute velocity. | {
"domain": "physics.stackexchange",
"id": 53202,
"tags": "newtonian-mechanics, reference-frames, acceleration, relative-motion, machs-principle"
} |
Getting the number of weeks from a time span | Question: I have very big clinical data, like
Date.Death Date Of Operation
02/08/2015 27/11/2012
02/08/2015 27/11/2012
I want to know how many weeks each patient has lived after operation
I then tried
> t1 <- as.POSIXct("27/11/2012")
> t2 <- as.POSIXct("02/08/2015")
> as.double(difftime(t2,t1,unit="weeks"))
[1] -1317.571
Or
> span <- interval(as.POSIXct("27/11/2012"), as.POSIXct("02/08/2015"))
> span
[1] 0027-11-20 LMT--0002-08-20 LMT
> t <- as.period(span, unit="day")
> t
[1] "-9223d 0H 0M 0S"
> t / as.period(dweeks(1))
estimate only: convert to intervals for accuracy
[1] -1317.571
Both give the wrong number of weeks.
Manually calculating one by one is painful and error prone
Do you know any way to get the number of weeks?
Thanks
Answer: I think the error lies in your use of as.POSIXct, which parses the dates incorrectly because you did not provide a format to the function:
> as.POSIXct("27/11/2012")
[1] "27-11-20 LMT"
> as.POSIXct("27/11/2012", format = "%d/%m/%Y")
[1] "2012-11-27 PST"
So, you first need to convert your dates to the appropriate date format, and then you will be able to calculate the time interval.
Personally, I prefer to manipulate dates with the lubridate package because the names of its functions are a little more intuitive to me. For example, if you have a date in the format "Day/Month/Year", you can use the dmy function.
With your example, you can do:
library(lubridate)
library(dplyr)
df <- data.frame(Date.Death = c("02/08/2015","02/05/2015"),
Date.Operation = c("27/11/2012","27/11/2012"))
df %>% mutate_all(~dmy(.)) %>%
mutate(Diff_Weeks = interval(Date.Operation,Date.Death) / weeks(1),
Diff_days = Date.Death - Date.Operation)
Date.Death Date.Operation Diff_Weeks Diff_days
1 2015-08-02 2012-11-27 139.7143 978 days
2 2015-05-02 2012-11-27 126.5714 886 days
| {
"domain": "bioinformatics.stackexchange",
"id": 1344,
"tags": "r"
} |
"Falling forces" | Question: First, I'm completely ignorant in physics, that's why I need your help and I'm sorry if I don't have a proper technical language.
I was trying to understand the forces that play a role during a fall when climbing. In particular, the carabiners indicate the maximum force in kilo-Newtons ($kN$) a carabiner can carry before break (e.g. $22 kN$).
Reading on Wikipedia I understood (I think) how to calculate the force ($N$) of my body.
$$1 kg = 9.81 N$$
$$75 kg * 9.81 N/kg = 735.75 N = 0.736 kN$$
Now I know that my body, if I stand on Earth, has a weight of $0.736 kN$. This is nice but not really useful to me, because what I want to calculate is the force on the carabiner if my body is in free fall for, for example, 6 metres.
Can anyone teach me the topics I need to know and help me to understand how to find the proper formula? Thanks!
Answer: As a first approximation, a person falling while climbing is subjected to
$$
F_{stop} = \sqrt{2*g*m*k*\frac{Q}{L}}
$$
where $k$ represents the elasticity of the rope, $Q$ is the length of the fall, and $L$ the length of the rope.
The carabiner will be subjected to a force equal to $2F_{stop}$.
This holds in the case of a perfectly vertical fall with no friction.
Keep in mind that 22 kN is not a random choice: it is twice the maximum force a human body can absorb without major damage; carabiners are designed not to break below 22 kN.
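For illustration only, here is the answer's approximation in code; the rope modulus $k$ is an assumed ballpark value (roughly 20 kN for a dynamic climbing rope), not a figure from the answer, and the fall scenario is hypothetical.

```python
import math

def stopping_force(m, fall_factor, k=20_000.0, g=9.81):
    """Peak force (N) on the climber from F = sqrt(2*g*m*k*Q/L),
    where fall_factor = Q/L and k is an assumed rope modulus in newtons."""
    return math.sqrt(2.0 * g * m * k * fall_factor)

m = 75.0                  # climber mass in kg
fall_factor = 6.0 / 10.0  # e.g. a 6 m fall on 10 m of rope (hypothetical)
F = stopping_force(m, fall_factor)
print(F / 1000.0)         # ≈ 4.2 kN on the climber
print(2.0 * F / 1000.0)   # ≈ 8.4 kN on the carabiner
```

Even this fairly hard fall stays well under the 22 kN rating.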
You can find the derivation of the formula in this article:
W. Dan Curtis. Taking a Whipper – The Fall-Factor Concept in Rock Climbing, The College Mathematics Journal, March 2005, Vol. 36, No. 2 | {
"domain": "physics.stackexchange",
"id": 45063,
"tags": "homework-and-exercises, newtonian-mechanics, forces, free-fall"
} |
Calculating area of visible sky | Question: Can we calculate the area of sky visible to us from the point we are standing?I mean is there any idea or experiment to calculate it?
Answer: Approximating the Earth as a sphere with radius $R$, then when viewing from a height $h$ above the surface, the Earth blocks out a cone of some opening angle $2\vartheta$, where $\csc\vartheta = 1+\frac{h}{R}$. Thus, the visible portion has a solid angle of
$$\Omega = 2\pi\left(1+\cos\vartheta\right) = 2\pi\left(1+\frac{\sqrt{h^2+2Rh}}{R+h}\right)$$
steradians. Divide this by $4\pi$ to obtain the fractional area of the visible sky, compared to what you could have if the Earth wasn't blocking your view, since a full sphere subtends a solid angle of $4\pi$ steradians. That is probably a more natural measure of the visible sky than a literal area.
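As an illustration (mine, not part of the original answer), the fraction $\Omega/4\pi$ is easy to evaluate numerically; the Earth radius below is an assumed round value.

```python
import math

R = 6371e3  # mean Earth radius in metres (assumed)

def visible_sky_fraction(h):
    """Fraction of the full sphere visible from height h (in metres)."""
    omega = 2.0 * math.pi * (1.0 + math.sqrt(h * h + 2.0 * R * h) / (R + h))
    return omega / (4.0 * math.pi)

print(visible_sky_fraction(1.7))    # eye level: ≈ 0.5004, barely over half
print(visible_sky_fraction(400e3))  # ISS altitude: ≈ 0.67
```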
For an actual area, you need some sort of reference distance $r$ to measure from, with the visible sky a distance $r$ away having area $A = \Omega r^2$. | {
"domain": "astronomy.stackexchange",
"id": 689,
"tags": "observational-astronomy, deep-sky-observing"
} |
Implementation of a generic Stack in C++ | Question: I've implemented a generic Stack in C++. I would like a code review in regards to my code, especially on whether or not my implementation satisfy the following 4 points:
My Stack class guarantees Strong Exception Safety, using the copy and swap idiom
The container still works if T passed in is not default constructible
Correctness of the implementation of the Rule of Five
General implementation correctness and efficiency (i.e: No memory leaks, dangling pointers ... ...)
Additionally, I have also added 4 questions in the code as comments as my concerns in the correctness of my code's implementation/style. I would greatly appreciate if those questions are addressed in the review as well.
#pragma once
template <typename T>
class Stack
{
public:
Stack();
Stack(const Stack& other);
Stack(Stack&& other) noexcept;
~Stack();
Stack<T>& operator =(const Stack<T>& other);
Stack<T>& operator =(Stack<T>&& other) noexcept;
void swap(Stack<T>& other) noexcept;
friend void swap(Stack<T>& A, Stack<T>& B)
{
A.swap(B);
}
void pop();
T& top();
void push(T item); //Pass T by value? What if T is big and expensive?
private:
struct Node
{
Node* next;
T data; //Should data be stored as a reference instead? (i.e: T& data)
}; //Should I be adding constructors/destructors and such in this struct?
Node* head;
void reverse(Node** head); //reverse method is not specific to *this stack, should I make it inline or something?
};
template <typename T>
Stack<T>::Stack()
:head(nullptr)
{}
template <typename T>
Stack<T>::Stack(const Stack& other)
:Stack()
{
Node* curr = other.head;
while(curr != nullptr)
{
push(curr->data);
curr = curr->next;
}
reverse(&head);
}
template <typename T>
Stack<T>::Stack(Stack&& other) noexcept
:head(nullptr)
{
swap(*this, other);
}
template <typename T>
Stack<T>::~Stack()
{
Node* curr = head;
while(curr != nullptr)
{
Node* tmp = curr;
curr = curr->next;
delete tmp;
}
delete head;
}
template <typename T>
Stack<T>& Stack<T>::operator =(const Stack<T> &other)
{
Stack tmp(other);
swap(*this, tmp);
return *this;
}
template <typename T>
Stack<T>& Stack<T>::operator =(Stack<T>&& other) noexcept
{
swap(*this, tmp);
return *this;
}
template <typename T>
void Stack<T>::swap(Stack& other) noexcept
{
using std::swap;
swap(head, other.head);
}
template <typename T>
void Stack<T>::pop()
{
if(head == nullptr)
throw std::runtime_error("No item found in stack");
Node* curr = head;
head = head->next;
delete curr;
}
template <typename T>
void Stack<T>::push(T item)
{
Node* tmp = new Node;
tmp->next = head;
tmp->data = std::move(item);
head = tmp;
}
template <typename T>
T& Stack<T>::top()
{
if(head == nullptr)
throw std::runtime_error("No item found in stack");
return head->data;
}
template <typename T>
void Stack<T>::reverse(Node **head)
{
if(head == nullptr || *head == nullptr)
throw std::runtime_error("head is null or no head to reverse");
Node *prev = nullptr;
Node *next = *head->next;
while(*head != nullptr)
{
*head->next = prev;
prev = *head;
*head = next;
if(next != nullptr)
next = next->next;
}
*head = prev;
}
Answer: Answers to your questions:
Should push take a value parameter?
Probably not.
As-is, the caller will make a temporary (on the stack), invoke the copy constructor on the temporary, and then push will allocate Node which includes a T, call the default constructor on it, and finally use the move-assignment to fill in Node.data. That means you allocate/construct T twice (once on the stack and once in Node) and reason over the members again in the move-assignment operator. If you're sure the type is small and easily copy-constructed, this is not worth worrying about and allows for more compact code (and a simpler interface). Similarly, if you trust the compiler's optimizer, maybe this isn't worth worrying about.
The simple alternative would be to provide both const T& and T&& versions of push and pass that on to a constructor for Node which takes the same type. A more elegant solution would be to take templatized arguments and std::forward to T's constructor (when it's made in Node). But, maybe then it should be called emplace.
Should Node.data be T& rather than T?
No. This has lifetime and aliasing implications which would violate a reasonably assumed contract for Stack<T>. If the calling code wants references, it can specify Stack<T&>.
Should Stack<T>::Node have constructors and destructors?
Since the struct is private, you should only add those members if the containing code needs it. I think the default destructor will meet your needs, but, for several reasons, you'll want to construct T.data and thus you'll need some constructor for T.
Stack<T>::reverse is independent of this*, how to decorate?
As I'll get to later, I'd recommend eliminating the method entirely, but the direct answer to your question is to use static.
Bugs:
head is deleted twice in destructor
Imagine if you have a single element Stack; you'll assign curr to that node, notice it's not nullptr, set tmp to that node, advance curr to nullptr, delete the node, exit the loop, and then try to delete head.
Stack<T>::push is not exception safe
If something throws an exception between the new Node and the head = tmp (especially T& operator::T(T&& other)), then you'll leak the Node. You should either wrap the logic in try/catch/delete/throw or use a smart pointer (e.g., std::unique_ptr) to guarantee cleanup.
Node* tmp = nullptr;
try
{
tmp = new Node;
// init tmp
}
catch (...)
{
::delete tmp;
throw;
}
or
std::unique_ptr<Node> tmp(std::make_unique<Node>());
// init tmp
head = tmp.release();
You did not meet your goal #2 regarding no default constructor for T
When Node is constructed, it implicitly constructs a T with the default constructor.
Efficiency issues:
Your copy constructor does more work than it needs to.
Just have a Node** which lets you append to the linked list; you don't need to make two passes over the data.
Node** dst = &head;
for (Node* src = other.head; src != nullptr; src = src->next)
{
std::unique_ptr<Node> tmp(std::make_unique<Node>());
// init tmp
*dst = tmp.release();
dst = &(*dst)->next;
}
Of course, there is some shared code with push; I'll leave it to you to common factor it or, more likely, move the initialization for tmp into the constructor.
Give thought to object-creation overhead.
This was touched on regarding the question about push taking a ref.
Style issues:
std::unique_ptr<T> is a safer alternative than T* in most cases
The object referenced by Stack<T>.head is "owned" by the Stack<T> (it has sole control of lifetime). The same is true for Stack<T>::Node.next. Using this and swap, you can get a lot more confidence in exception correctness.
However, you'll have a de-facto implementation for your destructor which will be correct but might not be the implementation you'd choose. (The optimizer might do the right thing with respect to tail recursion; having confidence in this across compilers and versions requires more validation than I'd personally want to do when I could write an explicit loop to free stuff.)
use for for loops when appropriate.
Your copy constructor looks like an obfuscated for loop to me.
Share code between pop and your destructor.
while (head != nullptr) pop();
Your move constructor could use the default constructor.
This is a nit-pick, but logically speaking, you're starting with an empty Stack<T> and swapping it with the temporary Stack<T>&&. | {
"domain": "codereview.stackexchange",
"id": 21259,
"tags": "c++, c++11, stack"
} |
Counting arrays with Euclidean distance at most 2 from a given binary array | Question: I have a binary array like this:
$$A = [0,1,0,0,1,0]\,.$$
I'm trying to find a way to calculate how many arrays of the same length exist that have a Euclidean distance of 2 or less from this array.
So, how many arrays of length 6 exist where
$$\sqrt{\Sigma(A_{_i} - B_{_i})^2}\leq 2\,?$$
I'm trying to find or create a formula that takes an array like above and outputs a count of how many possible binary arrays exist that fit the conditions of the above formula.
I've looked online for a formula without success.
Answer: Don't search for a formula – you'll probably never find something so specific. Instead, try to break up the task into smaller units.
Since your arrays are binary,
$$(A_i-B_i)^2 = \begin{cases}0 &\text{if }A_i=B_i\\ 1&\text{if }A_i\neq B_i\,.\end{cases}$$
So $\sqrt{\sum(A_i-B_i)^2}\leq 2$ if, and only if there are at most four values of $i$ such that $A_i\neq B_i$. So you just need to compute the number of ways that $B$ could have zero, one, two, three or four different entries from $A$ (or, more simply, the number of ways that it could have five or six different entries, and subtract that from the total possible values of $B$). This calculation is high-school combinatorics. Note that, because the arrays are binary, there's only one value of $B_i$ that's the same as $A_i$ and only one that's different. | {
"domain": "cs.stackexchange",
"id": 11198,
"tags": "nearest-neighbour, euclidean-distance"
} |
Why does $[Q,P]=i\hbar$ work for fermion? Shouldn't fermion satisfy anticommuting relation? | Question: For hydrogen, we use $[Q,P]=i\hbar$ for electron, which is a fermion. Does it have a deeper reason such as that we're really considering the proton + electron system, which might be of bosonic nature?
Answer: You can have a consistent picture using second quantisation. In general, you can promote a first quantised operator:
$$
O^1 = \sum O_{ij}|i\rangle\langle j|
$$
to a second quantised operator:
$$
O^2 = \sum O_{ij}c_i^\dagger c_j
$$
which you interpret as the original operator weighted by the distribution of particles in the orbitals.
If you're describing fermions, then the creation/annihilation operators satisfy the CAR's:
$$
\{c_i,c_j^\dagger\} = \delta_{ij}
$$
This suffices to show that the mapping conserves commutators:
$$
\begin{align}
[A^2,B^2] &= \sum A_{ij}B_{kl}[c_i^\dagger c_j,c_k^\dagger c_l] \\
&= \sum A_{ij}B_{kl}(\delta_{jk}c_i^\dagger c_l-\delta_{il}c^\dagger_kc_j) \\
&= \sum (A_{ik}B_{kj}-B_{ik}A_{kj})c_i^\dagger c_j \\
&= [A^1,B^1]^2
\end{align}
$$
Note that the key property:
$$
[c_i^\dagger c_j,c_k^\dagger c_l] = \delta_{jk}c_i^\dagger c_l-\delta_{il}c^\dagger_kc_j
$$
is also valid for bosons satisfying the CCR's, so the result still applies. This is why you can basically identify a creation operator with a ket and an annihilation operator with a bra.
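As a side illustration (not part of the original answer), the identity $[A^2,B^2]=[A^1,B^1]^2$ can be verified numerically for two fermionic modes, representing $c_1, c_2$ as explicit $4\times 4$ Jordan-Wigner matrices:

```python
import numpy as np

# Two fermionic modes as 4x4 matrices: c1 = a (x) I, c2 = Z (x) a.
a = np.array([[0.0, 1.0], [0.0, 0.0]])      # single-mode annihilator
Z = np.diag([1.0, -1.0])                    # Jordan-Wigner string
c = [np.kron(a, np.eye(2)), np.kron(Z, a)]  # these satisfy the CARs

def second_quantize(O):
    """Promote a 2x2 first-quantised operator O to Fock space."""
    return sum(O[i, j] * c[i].conj().T @ c[j]
               for i in range(2) for j in range(2))

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))

A2, B2 = second_quantize(A), second_quantize(B)
lhs = A2 @ B2 - B2 @ A2               # [A^2, B^2]
rhs = second_quantize(A @ B - B @ A)  # ([A^1, B^1])^2
print(np.max(np.abs(lhs - rhs)))      # 0 up to floating-point rounding
```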
Coming back to your electrons, you can similarly define in 1D (setting $\hbar=1$):
$$
\begin{align}
X &= \int xc_x^\dagger c_x dx\\
&= -\int i\frac{dc_p^\dagger}{dp} c_p dp \\
P &= \int pc_p^\dagger c_p \frac{dp}{2\pi} \\
&= \int i\frac{dc_x^\dagger}{dx} c_x dx \\
\end{align}
$$
so $[X,P] = in$ with $n=\int c_x^\dagger c_x dx = \int c_p^\dagger c_p \frac{dp}{2\pi}$ with:
$$
\{c_x,c_y^\dagger\} = \delta(x-y)\\
\{c_p,c_q^\dagger\} = 2\pi\delta(p-q)
$$
Note that you must be careful with the interpretation of momentum. The kinetic energy is not $P^2/2$, but rather:
$$
H_k = \int \frac{p^2}{2}c_p^\dagger c_p \frac{dp}{2\pi} \\
$$
In the case of non-interacting particles, this allows you to reduce a second quantisation problem into a first quantisation problem, and you just need to determine the filling of the orbitals.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 99503,
"tags": "quantum-mechanics, commutator, fermions, hydrogen, anticommutator"
} |
What's the correct link between Dirac notation and wave mechanics integrals? | Question: In wave mechanics when we compute the expectation value of energy we write the following
$$\left<\hat{H}\right>=\int_{-\infty}^\infty\mathrm{d}x\ \psi^*(x)\hat{H}\psi(x)=\int_{-\infty}^\infty\mathrm{d}x\ \psi^*(x)E\psi(x)$$
In Dirac notation it is simply written as
$$\left<\hat{H}\right>=\left<\psi\left|\hat{H}\right|\psi\right>$$
We then chooses some orthogonal basis $\left|x\right>$ and $\left|x'\right>$ in the position space and expand the dirac notation above as follows (with the limits ($-\infty$ to $\infty$) omitted for simplicity)
$$\left<\hat{H}\right>=\left<\psi\left|\hat{H}\right|\psi\right>$$
$$=\int \mathrm{d}x'\int \mathrm{d}x\left<\psi\left|x'\right>\left<x'\right|\hat{H}\left|x\right>\left<x\right|\psi\right>$$
Now since the basis is orthogonal and $\hat{H}$ has eigenvalues $E$ thus the following equation is obeyed
$$\left<x'\left|\hat{H}\right|x\right>=\left<x'\left|E\right|x\right>\tag{1}$$
Since $E$ is just a constant, it can be taken out from the brackets
$$\left<x'\right|E\left|x\right>=E\left<x'\right|\left. x\right>$$
and since the bases are orthogonal
$$\left<x'\right |\left. x\right>=\delta(x'-x)\tag{2}$$
Thus the expectation value integral becomes
$$=\int\mathrm{d}x'\int\mathrm{d}x\left<\psi\left|x'\right>E\delta(x'-x)\left<x\right|\psi\right>$$
Integrating with respect to $x'$
$$=\int\mathrm{d}x\left<\psi\left|x\right>E\left<x\right|\psi\right>=\int\mathrm{d}x\ \psi^*(x)E\psi(x)$$
Are the steps in $(1)$ and $(2)$ legal?
If question 1 is true, is it legal to do a similar treatment for other operators e.g.
$\hat{p}$, $\hat{a}^\dagger$, $\hat{j}$ etc. to recover their wave mechanics counterparts of the expectation value from the dirac notation?
If question 1 is false, how to correctly (and preferably mathematically rigorously) recover the wave mechanics result of the expectation value of any operator $\hat{A}$ (not necessary self adjoint/hermitian) from the Dirac notation $\left<\psi\right|\hat{A}\left|\psi\right>$?
Answer: It seems to me that there is some confusion here. The problem with the passage [1] (and [2]) that you outline is that you are not allowed to do that (on a rigorous level) if the operator has continuous spectrum, for there are no corresponding eigenvectors in the Hilbert space (and it is wrong also on a non-rigorous level, as pointed out by others). Anyway, Dirac notation is nothing fancy (in my opinion), just a convenient way to write scalar products in Hilbert spaces (someone may say there are Gel'fand triples and so on, but they do not add, again in my opinion, any relevant further insight in this particular context).
It is actually perfectly legitimate to look for the "wave mechanics notation" (i.e. on some $L^2$ space) starting from an abstract Hilbert space. When you write $\langle\psi, H\psi\rangle$, you are writing the scalar product on the space $\mathscr{H}$ between $H\psi$, where $H$ is a linear operator (supposedly densely defined, and $\psi$ is in its domain of definition), and $\psi$.
Now every separable Hilbert space (almost any Hilbert space utilized in quantum theories is separable) is isometrically isomorphic to some $L^2(\Omega,d\mu)$ (actually to $l^2$), even if the isomorphism can be difficult.
So, let's suppose that $\mathscr{H}$ is separable and you know explicitly the isometric isomorphism $i:\mathscr{H}\to L^2(\mathbb{R}^d)$, for some $d\in \mathbb{N}^*$. Let's say that $i\phi_n=\phi_n(x)\in L^2(\mathbb{R}^d)$, where $\{\phi_n\}_{n\in\mathbb{N}}\subset \mathscr{H}$ is an orthonormal basis in $\mathscr{H}$, and $\{\phi_n(x)\}_{n\in\mathbb{N}}\subset L^2$ is one on the latter. Then you have that
$$\langle\psi, H\psi\rangle_{\mathscr{H}}= \langle i\psi, (iHi^{-1}) i\psi\rangle_{L^2}=\langle \psi(x), E(x,\partial_x) \psi(x)\rangle_{L^2}\; ,$$
where $\psi(x)=\sum_{n}a_n\phi_n(x)$ (if $\psi=\sum_n a_n \phi_n$), and $E(x,\partial_x)$ is a linear differential operator that corresponds to the operator $H$.
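As a concrete numerical illustration (an assumption-laden sketch, not part of the original answer): discretizing $L^2(\mathbb{R})$ on a grid, the abstract expectation value reduces to the position-space integral. Here $E(x,\partial_x)=-\tfrac12\partial_x^2+\tfrac12 x^2$ (harmonic oscillator with $\hbar=m=\omega=1$) acts on its Gaussian ground state, whose energy is $1/2$:

```python
import numpy as np

# Grid discretization of <psi, H psi> = integral of psi*(x) (H psi)(x) dx
N, L = 4001, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)  # normalized ground state

# Central finite difference for the kinetic term, diagonal potential
d2 = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx ** 2
Hpsi = -0.5 * d2 + 0.5 * x ** 2 * psi

E = np.sum(psi * Hpsi) * dx                 # the L2 inner product
print(E)  # ≈ 0.5
```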
Now the form of $E(x,\partial_x)$ may not be easy, but if the isomorphism $i$ is known explicitly, then it could be recovered from $E(x,\partial_x)=iHi^{-1}$. Also, observe that in general $E(x,\partial_x)$ would be a differential operator, not just a function of $x$: a classical example is the operator $-\partial_x^2 + V(x)$, where $V$ is a suitable function. | {
"domain": "physics.stackexchange",
"id": 21862,
"tags": "quantum-mechanics, wavefunction, notation"
} |
Are common names of substituents accepted by IUPAC? | Question: Are names of substituents like isopropyl and sec-butyl accepted by IUPAC?
Or do we need to name them according to standard IUPAC conventions?
(Ex- 1-methylethyl for isopropyl)
My book uses common names for substituents at a lot of places but I am doubtful about this.
Answer: According to the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book) various prefixes are retained for use in general (IUPAC) nomenclature. However, many prefixes are no longer recommended.
Trivial, common, and traditional prefixes have always been an integral part of organic nomenclature. However, as systematic nomenclature develops and becomes widely used, many of these prefixes fall by the wayside. Accordingly, each set of IUPAC recommendations contains fewer of these traditional prefixes.
The prefix “isopropyl” for $\ce{(CH3)2CH-{}}$ is retained only for use in general (IUPAC) nomenclature.
For the preferred IUPAC name (PIN), the preferred prefix is “propan-2-yl”.
The prefix “1-methylethyl” may be used in general (IUPAC) nomenclature.
The prefix “sec-butyl” for $\ce{CH3-CH2-CH(CH3)-{}}$ was still contained in the 1993 recommendations but is no longer recommended as approved prefix.
For the preferred IUPAC name (PIN), the preferred prefix is “butan-2-yl”.
The prefix “1-methylpropyl” may be used in general (IUPAC) nomenclature. | {
"domain": "chemistry.stackexchange",
"id": 3955,
"tags": "organic-chemistry, nomenclature"
} |
How to observe off-diagonal long range order in superfluid? | Question: off-diagonal long range order in superfluid is an effect that the matrix element of the single particle's density matrix remains finite in the long distance limit.
My question is: how to prove this experimentally?
Answer: It should first be noted that Off-Diagonal Long-Range Order (ODLRO) and superfluidity do not necessarily go hand in hand. ODLRO is associated with a Bose-Einstein condensed phase (BEC), which usually also behaves as a superfluid, and the latter thus inherits the ODLRO property. However, you can have systems that are superfluid but where BEC is not possible (and hence lack true ODLRO, but can display some sort of quasi-ODLRO), such as the BKT phase.
So anyway. Following the above, let's rephrase your question to How to observe off-diagonal long range order in a Bose-Einstein condensate?
Let's look at the asymptotic behaviour of the off-diagonal one-body density matrix, used in the Penrose-Onsager criterion as a rigorous definition for a BEC. In the first quantisation formalism, this is defined as:
\begin{equation}
\begin{gathered}
n^{(1)}(\mathbf{r}, \mathbf{r}') = \sum_i n_i\, \psi_i^\ast(\mathbf{r}) \psi_i(\mathbf{r}') \\ = n_0\,\phi_0(\mathbf{r})^\ast \phi_0(\mathbf{r}')+ \sum_{i\neq 0} n_i\, \psi_i(\mathbf{r})^\ast \psi_i(\mathbf{r}') \\ = n_0\, \phi_0(\mathbf{r})^\ast \phi_0(\mathbf{r}')+ \sum_{\mathbf{p}\neq 0} n_{\mathbf{p}}\, \mathrm{e}^{-\frac{\mathrm{i}}{\hbar}\mathbf{p}\cdot(\mathbf{r}-\mathbf{r}')},
\end{gathered}
\end{equation}
where in the last term a special case of free particles was assumed, to express them as plane waves. $n_0$ is the density of atoms in the $0$ state (ground state), which we have taken out of the sum $\sum_i$ for reasons below. At increasing separation, this tends to a constant value because the contributions of $\mathbf{p} \neq 0$ average out:
\begin{equation}
\lim_{|\mathbf{r} - \mathbf{r}'| \rightarrow \infty} n^{(1)}(\mathbf{r}, \mathbf{r}')\rightarrow n_0 \neq 0.
\end{equation}
This is exactly the definition of Off-Diagonal (because of the $\mathbf{r}$ and $\mathbf{r}'$) Long-Range (because of the limit $|\mathbf{r} - \mathbf{r}'| \rightarrow \infty$) order, ODLRO.
To confirm this experimentally, then, you have to devise an experiment where you can see whether or not phase coherence is preserved over 'long distances'. One such experiment is reported here (plot shown below). What is plotted is the visibility of the fringes in a matter-wave interference pattern as a function of the spatial extent of the atomic cloud (in this sense, $z$ larger than the interatomic spacing is considered 'large distances'). For a thermal state, this decays to zero within the thermal de Broglie wavelength, whereas for a BEC it stays constant owing to the presence of off-diagonal long-range order (the actual constant is related to $n_0$). | {
"domain": "physics.stackexchange",
"id": 69456,
"tags": "statistical-mechanics, condensed-matter, superconductivity, bose-einstein-condensate, superfluidity"
} |
rqt plugin not listed | Question:
Hi,
I tried everything mentioned in other posts and I still can't see my own plugin in rqt. I did --force-discover, I deleted the config file but it won't show.
I am wondering if I am missing some PATH variables or if I need to compile it differently. My plugin works when I run "rosrun gcs_gui gcs_gui". "rqt -s gcs_gui" returns that no plugin with this name was found. Any ideas?
Update: I was obviously missing the plugin.xml. But there I don't know how I need to set it up. What is my class type? I programmed the plugin in C++ but is the base_class_type rqt_gui_py::Plugin anyway? Do I need to use a python file somewhere?
Vincenz
Originally posted by Vinni on ROS Answers with karma: 13 on 2018-05-25
Post score: 0
Answer:
Hi Vinni,
it's rather difficult to help you without any further information.
I suggest you use this (great) repo catkin_create_rqt which creates a boilerplate for an RQT GUI with properly set CMakeLists.txt and plugin export file.
Then you can just copy and paste your functions to the newly created boilerplate.
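As a reference shape (the library path and class names below are placeholders, not taken from the post): a plugin written in C++ uses base_class_type rqt_gui_cpp::Plugin rather than rqt_gui_py::Plugin, and its plugin.xml generally looks like:

```xml
<!-- plugin.xml: all names here are placeholders -->
<library path="lib/libgcs_gui">
  <class name="gcs_gui/GcsGui" type="gcs_gui::GcsGui"
         base_class_type="rqt_gui_cpp::Plugin">
    <description>Ground control station GUI (placeholder description).</description>
    <qtgui>
      <group>
        <label>Plugins</label>
      </group>
      <label>GCS GUI</label>
    </qtgui>
  </class>
</library>
```

The library path has to match the library target name in CMakeLists.txt, and package.xml additionally needs an export entry pointing rqt_gui at this file.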
Regards,
Marco.
Originally posted by Femer with karma: 253 on 2018-05-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Vinni on 2018-05-25:
Hi Marco,
that is actually how I started off (well, with catkin_create_qt_pkg). I altered my code from there but it is not running. I will try setting it up as another package and see if I can get that to work. Main problem for me is setting up the plugin.xml correctly...
Comment by Vinni on 2018-05-25:
I get a strange error like this:
Failed to load nodelet [rqt_gcs/View_1] of type [rqt_gcs/View]: Could not find library corresponding to plugin rqt_gcs/View. Make sure the plugin description XML file has the correct name of the library and that the library actually...
Comment by Vinni on 2018-05-25:
Worked in the end. Great tool.
Comment by Femer on 2018-05-26:
good to hear that it worked eventually! What was the issue?
Comment by Vinni on 2018-05-29:
I tried somehow merging it right away with my old files. That didn't work. Simply running the script and executing the files worked. Unfortunately my workflow is not done yet. I can't get it to work the way I'd like to... Any idea? See https://answers.ros.org/question/292328/gui-creation-through-rqt | {
"domain": "robotics.stackexchange",
"id": 30908,
"tags": "plugin, ros-kinetic, rqt"
} |
Two-dimensional array allocation in Go | Question: I am creating a two-dimensional array, which I am going to process later in ways similar to image MinFilter, procedural labyrinth generation, etc. -- implying using coordinates and neighbors.
Here are two ways I came up with now:
array := make([][]byte, 0, HEIGHT)
for i := 0; i < HEIGHT; i++ {
array = append(array, bytes.Repeat([]byte{5}, WIDTH))
}
array := make([][]byte, HEIGHT)
for i := range array {
array[i] = bytes.Repeat([]byte{5}, WIDTH)
}
They are both less straight-forward than the original Ruby code:
array = Array.new(HEIGHT){ Array.new(WIDTH){ 5 } }
But it is the first time I tried Go, so I ask you:
Which of these solutions is better?
And which is faster?
And why?
Answer: First, let's rewrite your code in idiomatic Go.
// For loop using a for clause
func NewRectangleF(height, width int, value byte) [][]byte {
r := make([][]byte, 0, height)
for i := 0; i < height; i++ {
r = append(r, bytes.Repeat([]byte{value}, width))
}
return r
}
// For loop using a range clause
func NewRectangleR(height, width int, value byte) [][]byte {
r := make([][]byte, height)
for i := range r {
r[i] = bytes.Repeat([]byte{value}, width)
}
return r
}
Then write and run some tests and benchmarks using Go 1.3.
$ go version
go version go1.3rc2 linux/amd64
$ go test -v -bench=.
=== RUN TestNewRectangle
--- PASS: TestNewRectangle (0.00 seconds)
PASS
BenchmarkNewRectangleFL5 50 38272524 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleRL5 50 38195877 ns/op 6340608 B/op 2049 allocs/op
The benchmarks are for a large rectangle ((2 * 1024) by (3 * 1024)) with an initial value of 5.
Not surprisingly, the results are the same because we are doing the same thing. The range form is clearly better because it's easier to read and easier to verify that it's correct.
Are the results fast? Here's a simple alternative basic implementation.
// Benchmark basic
func NewRectangleB(height, width int, value byte) [][]byte {
r := make([][]byte, height)
for i := range r {
w := make([]byte, width)
if value != 0 {
for j := range w {
w[j] = value
}
}
r[i] = w
}
return r
}
The benchmark results:
$ go test -v -bench=. -run=!
PASS
BenchmarkNewRectangleFL5 50 38272524 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleRL5 50 38195877 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleBL5 500 5428041 ns/op 6340608 B/op 2049 allocs/op
The basic implementation looks faster; it uses about 86% less CPU time (5428041 ns/op vs. average 38234200.5 ns/op). It doesn't use the bytes.Repeat function. In Go 1.2 and Go 1.3 the bytes.Repeat function looks slow.
// bytes.Repeat returns a new byte slice consisting of count copies of b.
func Repeat(b []byte, count int) []byte {
nb := make([]byte, len(b)*count)
bp := 0
for i := 0; i < count; i++ {
bp += copy(nb[bp:], b)
}
return nb
}
Let's see if we can write an optimal version by improving the repeat function and reducing the large number of heap allocations (2049 allocs/op). Plus, since make sets the underlying array to the zero value for the type, we can make a zero initial value a special case.
func repeat(b []byte, count int) []byte {
nb := make([]byte, len(b)*count)
if len(b) == 1 && b[0] == 0 {
return nb
}
bp := copy(nb, b)
for bp < len(nb) {
copy(nb[bp:], nb[:bp])
bp *= 2
}
return nb
}
// Benchmark optimization
func NewRectangleO(height, width int, value byte) [][]byte {
r := make([][]byte, height)
a := repeat([]byte{value}, height*width)
start, end := 0, width
for i := range r {
r[i] = a[start:end:end]
start, end = end, end+width
}
return r
}
The benchmark results:
$ go test -v -bench=. -run=!
PASS
BenchmarkNewRectangleFL5 50 38272524 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleRL5 50 38195877 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleBL5 500 5428041 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleOL5 2000 1468944 ns/op 6340608 B/op 2 allocs/op
BenchmarkNewRectangleOL0 50000 34646 ns/op 55296 B/op 2 allocs/op
The optimal implementation looks faster; it uses about 96% less CPU time (1468944 ns/op vs. average 38234200.5 ns/op) and about 99.9% less heap allocations (2 allocs/op vs. 2049 allocs/op). It's even faster for a zero initial value (34646 ns/op vs. average 38234200.5 ns/op).
We should also check that small rectangles (2 by 3 with an initial value of 5) are also reasonable.
$ go test -v -bench=.
PASS
BenchmarkNewRectangleOS5 10000000 233 ns/op 57 B/op 1 allocs/op
BenchmarkNewRectangleOS0 10000000 206 ns/op 57 B/op 1 allocs/op
Go performance is continually being improved. The version at tip, which will be released as Go 1.4, incorporates some of the optimizations in bytes.Repeat. Currently, it doesn't include the special-case optimization for a zero initial value.
// bytes.Repeat returns a new byte slice consisting of count copies of b.
func Repeat(b []byte, count int) []byte {
nb := make([]byte, len(b)*count)
bp := copy(nb, b)
for bp < len(nb) {
copy(nb[bp:], nb[:bp])
bp *= 2
}
return nb
}
For Go 1.4 and later versions, we can at least write:
func NewRectangle14(height, width int, value byte) [][]byte {
r := make([][]byte, height)
var a []byte
if value == 0 {
a = make([]byte, height*width)
} else {
a = bytes.Repeat([]byte{value}, height*width)
}
start, end := 0, width
for i := range r {
r[i] = a[start:end:end]
start, end = end, end+width
}
return r
}
The results at tip for Go 1.4 are:
$ go version
go version devel +7d2e78c502ab Sat Jun 14 16:47:40 2014 +1000 linux/amd64
$ go test -v -bench=. -run=!
PASS
BenchmarkNewRectangleFL5 1000 2061854 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleRL5 1000 2054323 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleBL5 500 5276701 ns/op 6340608 B/op 2049 allocs/op
BenchmarkNewRectangleOL5 1000 1496754 ns/op 6340608 B/op 2 allocs/op
BenchmarkNewRectangleOL0 50000 32721 ns/op 55296 B/op 2 allocs/op
BenchmarkNewRectangleOS5 10000000 221 ns/op 57 B/op 1 allocs/op
BenchmarkNewRectangleOS0 10000000 194 ns/op 57 B/op 1 allocs/op
BenchmarkNewRectangle14L5 1000 1497511 ns/op 6340608 B/op 2 allocs/op
BenchmarkNewRectangle14L0 2000 803754 ns/op 6340608 B/op 2 allocs/op
The optimal implementation is still faster; it uses about 29% less CPU time (1496754 ns/op vs. average 2058088.5 ns/op) and about 99.9% less heap allocations (2 allocs/op vs. 2049 allocs/op). It's even faster for a zero initial value (32721 ns/op vs. average 2058088.5 ns/op).
In addition to the benchmarks, we could have used profiling.
File rectangle.go:
package rectangle
import (
"bytes"
)
// For loop using a for clause
func NewRectangleF(height, width int, value byte) [][]byte {
r := make([][]byte, 0, height)
for i := 0; i < height; i++ {
r = append(r, bytes.Repeat([]byte{value}, width))
}
return r
}
// For loop using a range clause
func NewRectangleR(height, width int, value byte) [][]byte {
r := make([][]byte, height)
for i := range r {
r[i] = bytes.Repeat([]byte{value}, width)
}
return r
}
// Benchmark basic
func NewRectangleB(height, width int, value byte) [][]byte {
r := make([][]byte, height)
for i := range r {
w := make([]byte, width)
if value != 0 {
for j := range w {
w[j] = value
}
}
r[i] = w
}
return r
}
func repeat(b []byte, count int) []byte {
nb := make([]byte, len(b)*count)
if len(b) == 1 && b[0] == 0 {
return nb
}
bp := copy(nb, b)
for bp < len(nb) {
copy(nb[bp:], nb[:bp])
bp *= 2
}
return nb
}
// Benchmark optimization
func NewRectangleO(height, width int, value byte) [][]byte {
r := make([][]byte, height)
a := repeat([]byte{value}, height*width)
start, end := 0, width
for i := range r {
r[i] = a[start:end:end]
start, end = end, end+width
}
return r
}
// Go version tip (go1.4)
func NewRectangle14(height, width int, value byte) [][]byte {
r := make([][]byte, height)
var a []byte
if value == 0 {
a = make([]byte, height*width)
} else {
a = bytes.Repeat([]byte{value}, height*width)
}
start, end := 0, width
for i := range r {
r[i] = a[start:end:end]
start, end = end, end+width
}
return r
}
File rectangle_test.go:
package rectangle
import (
"fmt"
"testing"
)
type nrFunc func(height, width int, value byte) [][]byte
func testNewRectangle(t *testing.T, nr nrFunc) {
/*
#!/usr/bin/env ruby
HEIGHT = 2
WIDTH = 3
array = Array.new(HEIGHT){ Array.new(WIDTH){ 5 } }
# [[5, 5, 5], [5, 5, 5]]
print array
print "\n"
*/
height, width, value := 2, 3, byte(5)
r := nr(height, width, value)
if len(r) != height || cap(r) != height ||
len(r[0]) != width || cap(r[0]) != width ||
fmt.Sprintln(r) != "[[5 5 5] [5 5 5]]\n" {
t.Error("Invalid rectangle:", r, fmt.Sprint(r))
}
}
func TestNewRectangle(t *testing.T) {
var tests = []nrFunc{
NewRectangleF, NewRectangleR,
NewRectangleB, NewRectangleO, NewRectangle14,
}
for _, test := range tests {
testNewRectangle(t, test)
}
}
var (
smallHeight = 2
smallWidth = 3
largeHeight = smallHeight * 1024
largeWidth = smallWidth * 1024
zeroValue = byte(0)
fiveValue = byte(5)
)
func BenchmarkNewRectangleFL5(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangleF(largeHeight, largeWidth, fiveValue)
}
_ = r
}
func BenchmarkNewRectangleRL5(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangleR(largeHeight, largeWidth, fiveValue)
}
_ = r
}
func BenchmarkNewRectangleBL5(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangleB(largeHeight, largeWidth, fiveValue)
}
_ = r
}
func BenchmarkNewRectangleOL5(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangleO(largeHeight, largeWidth, fiveValue)
}
_ = r
}
func BenchmarkNewRectangleOL0(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangleO(largeHeight, smallWidth, zeroValue)
}
_ = r
}
func BenchmarkNewRectangleOS5(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangleO(smallHeight, smallWidth, fiveValue)
}
_ = r
}
func BenchmarkNewRectangleOS0(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangleO(smallHeight, smallWidth, zeroValue)
}
_ = r
}
func BenchmarkNewRectangle14L5(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangle14(largeHeight, largeWidth, fiveValue)
}
_ = r
}
func BenchmarkNewRectangle14L0(b *testing.B) {
b.ReportAllocs()
var r [][]byte
for i := 0; i < b.N; i++ {
r = NewRectangle14(largeHeight, largeWidth, zeroValue)
}
_ = r
} | {
"domain": "codereview.stackexchange",
"id": 8031,
"tags": "array, go, comparative-review"
} |
Why is the work done by the system not stored as potential energy? | Question: Now choose a spring-mass system: the work done by an external agent in slowly moving the mass from the equilibrium position is stored as potential energy, but where has the work done by the spring force gone? For generalization: work done on a system is stored as potential energy, but where does the work done by the system go during the process?
Answer: For generalization: work done on a system is stored as potential energy, but where does the work done by the system go during the process?
The work done by the spring mass system takes the work done on the spring mass system and stores it as potential energy in the spring mass system. The net work done on the mass of the system is zero. The following is presented as an explanation.
When an external agent (for example you) applies a force to a mass on the end of a spring (let's assume the spring is horizontal so that gravity plays no role) it compresses the mass spring system. The force you apply to the mass is in the same direction as the displacement of the mass so you do positive work on the mass equal to $\frac{kx^2}{2}$.
At the same time the spring exerts a force on the mass in a direction opposite the displacement of the mass. Therefore it does negative work, taking the energy you provided to the mass and storing it as potential energy of the spring-mass system. The net work done on the mass is zero, since by the work-energy theorem the net work done on an object equals its change in kinetic energy. For this example, the change in kinetic energy is zero since the mass begins and ends with zero velocity.
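Written out explicitly for a slow compression from $0$ to $x$:
$$W_{\text{you}} = \int_0^x kx'\,dx' = \frac{kx^2}{2}, \qquad W_{\text{spring}} = -\int_0^x kx'\,dx' = -\frac{kx^2}{2},$$
so the net work on the mass is $W_{\text{you}} + W_{\text{spring}} = 0 = \Delta KE$, while the potential energy stored in the spring increases by exactly $kx^2/2$.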
Bottom line, the work you did winds up stored in the spring mass system as potential energy. The work you did in compressing the spring is not gone. It just becomes potential energy.
The gravity analogy is when you lift an object starting at rest on the ground and bring it to a point at rest a height $h$ from the ground you do positive work transferring energy to the object. At the same time the force of gravity, which acts opposite the direction of movement of the object does an equal amount of negative work, taking the energy you gave the object and storing it as gravitational potential energy of the object/earth system.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 58755,
"tags": "newtonian-mechanics, potential-energy, spring"
} |
On these pictures of accelerator equipment, what are the blue metal things? | Question: I have visited many accelerators, and these blue things
can be seen where cryo technology is used. What are they (He valves? safety valves?)? Why are so many needed?
Edit:
Here there are more blue things seen mounted on top of the helium line behind the LHC.
Answer: I'd guess they are valve actuators. A quick Google found an example at http://www.ge-energy.com/products_and_services/products/valves_control_and_safety/masoneilan_type_87_88_pneumatic_multi_spring_actuators.jsp | {
"domain": "physics.stackexchange",
"id": 3546,
"tags": "accelerator-physics"
} |
Why is Venus cloud covered but not in an ice age? | Question: I understand that Venus is closer to the Sun, but shouldn't Venus's cloud cover effect it in the same way as Earth's cloud cover would cause an ice age?
Venus with no clouds.
Answer: The cloud layers on Venus are largely composed of compounds containing sulfur, including sulfuric acid (which may rain on the planet) and sulfur dioxide. Sulfur dioxide is short-lived on Earth, where it reacts with oxygen, but it can stay stable in other atmospheres with low oxygen content. Oxygen is virtually non-existent on Venus, which is why the cloud layers have persisted for so long. They reflect both radiation coming from space and radiation coming from the ground and lower atmosphere.
That said, Venus's atmosphere is over 95% carbon dioxide, which is an extremely powerful greenhouse gas. This is orders of magnitude greater than levels on Earth, which are several hundredths of one percent of the atmosphere. Water vapor, too, contributed to the greenhouse effect on early Venus.
The dust that would be thrown up by an impact, however, would not show high levels of sulfur dioxide or similar gases, and levels of greenhouse gases would not go up significantly. Infrared radiation will not be re-radiated in significant amounts back to the ground, and thus temperatures on Earth will fall, not rise.
On Venus, though, the clouds will indeed reflect radiation, and the extremely high levels of greenhouse gases will, in any case, still contribute to runaway global warming. | {
"domain": "physics.stackexchange",
"id": 34172,
"tags": "thermodynamics, temperature, planets, geophysics, climate-science"
} |
Complete human rDNA sequence | Question: I've been trying to retrieve the complete human rDNA sequence (non-spacers and spacers), which should be about 43-kb in length using Biomart, NCBI, and rnacentral, but I have only been able to find the 13-kb non-spacer sequence. Are the spacer regions not yet sequenced due to difficulty in sequencing a repetitive region? Thanks for your help.
Answer: Managed to find the complete sequence of the repeating unit:
https://www.ncbi.nlm.nih.gov/nuccore/U13369 | {
"domain": "biology.stackexchange",
"id": 11498,
"tags": "dna, human-genetics, dna-sequencing, rna"
} |
Overfitting/Underfitting with Data set size | Question: In the below graph,
x-axis => Data set Size
y-axis => Cross validation Score
Red line is for Training Data
Green line is for Testing Data
In a tutorial that I'm referring to, the author says that the point where the red line and the green line overlap means,
Collecting more data is unlikely to increase the generalization
performance and we're in a region that we are likely to underfit the
data. Therefore it makes sense to try out with a model with more capacity
I cannot quite understand meaning of the bold phrase and how it happens.
Appreciate any help.
Answer: So, underfitting means that you still have capacity left for improving your learning, while overfitting means that you have used more capacity than needed for learning.
The green area is where the test score is still rising, i.e. you should keep providing capacity (either data points or model complexity) to gain better results. The further along the green line you go, the flatter it becomes, i.e. you are reaching the point where the provided capacity (which here is data) is enough, and it is better to try providing the other type of capacity, which is model complexity.
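This plateau can be reproduced with a toy learning-curve experiment (pure standard-library Python; the linear model, the data, and all numbers here are invented for illustration and are not from the tutorial): as the training set grows, the test error of a fixed-capacity model levels off near the noise floor, which is the regime described above.

```python
import random

random.seed(0)

def make_data(n):
    # y = 2x + Gaussian noise
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [2 * x + random.gauss(0, 1) for x in xs]
    return xs, ys

def fit_slope(xs, ys):
    # Closed-form least squares for a line through the origin
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mse(slope, xs, ys):
    return sum((y - slope * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

x_test, y_test = make_data(500)  # fixed held-out test set

results = {}
for n in (5, 50, 500):  # growing training-set size
    x_tr, y_tr = make_data(n)
    slope = fit_slope(x_tr, y_tr)
    results[n] = (mse(slope, x_tr, y_tr), mse(slope, x_test, y_test))
    print(f"n={n:3d}  train MSE={results[n][0]:.2f}  test MSE={results[n][1]:.2f}")
```

Once the test curve has flattened like this, adding more rows no longer helps. For real estimators, scikit-learn's learning_curve utility in sklearn.model_selection produces exactly this kind of score-versus-training-size plot.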
If that does not improve your test score, or even reduces it, it means that the data/complexity combination was already close to optimal and you can stop training. | {
"domain": "datascience.stackexchange",
"id": 6974,
"tags": "machine-learning, cross-validation"
} |
If in the quantum world reality is just based on observation then why molecules (with electrons and protons) are real? | Question: If in the quantum world reality is just based on observation then why atoms and molecules are real?
I mean, it is said that when a quantum particle is not observed it's neither spinning up nor down (if I understood that correctly), and from what I understand everything has its quantum particle, and so, why do molecules down to electrons and protons have a predictable behavior [i.e. become real] when their quantum particles are probabilistic?
Answer:
If in the quantum world reality is just based on observation then why atoms and molecules are real?
In classical physics our human sense of "reality" was described with mathematical formulae which extrapolated down to very small distances and times are only "real" by definition and because the formulae worked .
The need for quantum physics mathematics and formulae came because there were measurements that could not be described with the mathematics of thermodynamics, classical mechanics and classical electrodynamics, and so we have arrived at the present quantum physics which is descriptive and predictive at the level of the microcosm.
Atoms and molecules are described by quantum mechanical formulae, which work. The problem comes with this assumption: "reality is just based on observation" and "it is said that when a quantum particle is not observed it's neither spinning up nor down". The correct statement is "we cannot know how it is spinning unless we observe it".
You can say the same for classical thermodynamics and an ensemble of molecules. The formulae of thermodynamics are such that you do not know whether a molecule is going up or down; it could be doing anything for all you can know, unless you measure it (a particular molecule).
why molecules down to electrons and protons have a predictable behavior [i.e becomes real] when their quantum particles are probabilistic.
Atoms and molecules are also probabilistic in the way they interact with each other, and the formulae predicting their quantum behavior are as real as classical physics formulae, just different. | {
"domain": "physics.stackexchange",
"id": 80813,
"tags": "quantum-mechanics"
} |
Galileo's treatment of uniformly accelerated motion | Question: Galileo asserts that if a body accelerates uniformly, its velocity increases as the integers ($1,2,3,4$ etc.) and therefore, the distances passed by the body in equal times increase as the odd integers ($1,3,5,7$ etc.).
This makes no sense to me. If we suppose that velocity is a continuous function of time, with $v(t) = t$, and that $v(0) = 0$ and $v(1) = 1$, for example, it follows that the distance elapsed in the first period of time is $1/2$. Likewise in the second period of time the distance elapsed is $3/2$ etc.
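Spelling the computation out: with $v(t) = t$, the distance covered in the $n$-th unit interval of time is
$$d_n = \int_{n-1}^{n} t\,dt = \frac{n^2-(n-1)^2}{2} = \frac{2n-1}{2},$$
which gives the sequence $\tfrac12, \tfrac32, \tfrac52, \dots$, not $1, 3, 5, \dots$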
Where does this whole odd-integer business come from?
Edit: you can find the exact statement in Corollary 1 here: https://oll.libertyfund.org/titles/galilei-dialogues-concerning-two-new-sciences
Answer: “As” means “proportional to”. The sequences $\frac12, \frac32, \frac52, \dots$ and $1, 3, 5, \dots$ are proportional. | {
"domain": "physics.stackexchange",
"id": 69575,
"tags": "newtonian-mechanics, classical-mechanics, acceleration, velocity, history"
} |
What is $\mathsf{NP}$ restricted to linear size witnesses? | Question: This is related to the question Is the Witness Size of Membership for Every NP Language Already Known?
Some natural $\mathsf{NP}$(-complete) problems have linear length witnesses: a satisfying assignment for $SAT$, a sequence of vertices for $HAMPATH$, etc.
Consider the complexity class "$\mathsf{NP}$ restricted to linear length witnesses". Formal definition of this complexity class, call it $\mathcal{C}$: $L\in\mathcal{C}$ if $\exists L'\in\mathsf{P}\colon (x\in L \iff \exists w\in\{0, 1\}^{O(|x|)}\colon (x, w)\in L')$.
Is this a known complexity class? What are its properties?
Answer: The class ${\cal C}$ you are proposing is probably not $NP$. (If ${\cal C} = NP$, then every $NP$ language would have linear-size witnesses, which would imply $NP \subseteq TIME[2^{O(n)}]$ (by brute-force search over all $2^{O(n)}$ candidate witnesses) and hence $NP \neq EXP$, among other things.)
It is very natural to consider such classes; they arise in several settings. In this paper, Rahul Santhanam (implicitly) proposed the notation $TIGU(t(n),g(n))$ for time-$t(n)$ computation with $g(n)$-guess bits. Hence ${\cal C} = \bigcup_{k} TIGU(n^k,kn)$. In this paper, I defined an analogous class $NTIBI[t(n),b(n)]$. (NTIBI stands for "nondeterministic time and bits".) Also, Cai and Chen would call your class $GC(O(n), P)$ (GC stands for "Guess and Check", cf. L. Cai and J. Chen. On the amount of nondeterminism and the power of verifying. SIAM Journal on Computing, 1996). Finally, if you search for "bounded nondeterminism" you may find three more notations for the same class... | {
"domain": "cstheory.stackexchange",
"id": 1531,
"tags": "cc.complexity-theory, complexity-classes, np"
} |
Where is the connection between $U(1)$ gauge field and $\mathbb{Z}_2$ gauge theory? | Question: I am a graduate student in condensed matter physics and today I was reading the Wikipedia article Topological Order.
There is the part:
Note that superconductivity can be described by the Ginzburg–Landau
theory with dynamical U(1) EM gauge field, which is a Z2 gauge theory,
that is, an effective theory of Z2 topological order.
Where is the connection between the U(1) gauge field and the Z2 gauge theory? Or in other words, is the Z2 topological order a consequence of the U(1) gauge field?
Answer: This answer requires understanding of various things, so I am just gonna drop googleable names of theories and concepts to keep the text concise and readable.
TL;DR It's all about classifying the superconducting phase transition. Real-life superconductivity does not break any symmetries, hence the symmetry classification (Ginzburg-Landau formalism) is not applicable. Topological phase transitions do not need symmetry breaking. Real-life superconductors are hence labelled by their (non-trivial) topological order.
Classifying phase transitions
We can all agree that there is a definite difference between a normal metallic phase ($T \gg T_{\mathrm{c}}$) and a superconducting phase ($T \leq T_{\mathrm{c}}$). Namely, the lack of electrical resistance and hence unhindered flow of charge carriers.
A very powerful way of classifying phase transition is looking that which (global) symmetry is broken at the transition. Global symmetries are real symmetries, and breaking them has physical effects, namely the emergence of Goldstone modes, which are massless and gapless. Typical examples are the breaking of the rotational invariance of a paramagnet to a ferromagnet, at the Curie temperature. For $T<T_{\mathrm{c}}$, the magnetisation is non-zero and (spontaneously, in the absence of an external magnetic field) chooses a direction. The symmetry of the system goes from $SO(3)$ to $SO(2)$ with two Goldstone bosons generated, called spin-waves or magnons. Or a liquid ($SO(3)$) becoming a solid (no continuous symmetry), generating three phonons.
This is nicely quantified by the Ginzburg-Landau formalism, where the potential energy usually looks like:
$$ V \propto a\phi^2 + b\phi^4,$$
with $a = a_0(T-T_{\mathrm{c}})$ and $a_0, b > 0$, such that for $T<T_{\mathrm{c}}$ the potential no longer has a minimum at $\phi=0$ but rather a ring of minima at $|\phi|^2 = -a/(2b)$. Because the state chooses a particular point out of this degenerate ring, the symmetry is spontaneously broken.
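Minimizing explicitly with respect to $|\phi|$:
$$\frac{\partial V}{\partial |\phi|} = 2a|\phi| + 4b|\phi|^3 = 0 \quad\Longrightarrow\quad |\phi|^2 = -\frac{a}{2b} = \frac{a_0\,(T_{\mathrm{c}}-T)}{2b} > 0 \quad \text{for } T < T_{\mathrm{c}}.$$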
Symmetry breaking is only one way of classifying phase transitions; in particular, it is only applicable to second-order (or "continuous") phase transitions.
Other transitions do not break any symmetries. But the phases are "definitely different" anyway. So how do you quantify this order? Sometimes you can classify it as a topological order, that is you can find a topological invariant which is different in either phase.
Supefluidity ("fake" superconductivity, but sometimes referred to as "textbook", since it's the easiest to show)
Consider an interacting field theory with a global $U(1)$ symmetry. According to Noether's theorem, this is associated with the conservation of particle number.
Breaking this symmetry leads to one Goldstone boson, with a linear (low-energy) dispersion $E \propto |\mathbf{p}|$. This results in the presence of a critical velocity $v_p$ below which the fluid experiences no viscosity.
This is a superfluid: viscosity-free flow. Somehow reminiscent of superconducting flow, eh?
Real-life superconductivity
Superfluidity is described by a complex scalar field theory (hence the $U(1)$ symmetry) with a global (genuine) symmetry. In the non-interacting limit, it's just Bose-Einstein condensation.
Real-life superconductors are not just "Bose-Einstein condensates" of Cooper pairs, as sometimes found in the literature.
By real-life I still mean the most basic form of superconductivity, that is $s$-wave and within the BCS theory.
There is still a $U(1)$ symmetry involved. But it's not a real one. It is a local $U(1)$ symmetry, which is nothing else than a redundancy. Hence it's also called a gauge symmetry, and it cannot be broken, and it cannot lead to Goldstone bosons. So we can't classify superconductivity as a symmetry-breaking transition!
Some people call the $U(1)$ for superfluidity "static", and the one here "dynamical". Local/dynamical gauge symmetries are coupled to gauge fields $\mathbf{A}$ such as the electro-magnetic field. So a local $U(1)$ scalar complex field theory describes a charged (bosonic, spin-$0$) system.
This is the "dynamical U(1) EM gauge field" mentioned in your quote.
There is still an order parameter for the transition. This is not the number of condensed particles (like in a superfluid or a BEC), but it's the superconducting bandgap $\Delta$.
However, as mentioned, local symmetries cannot be broken. Hence there is no symmetry-breaking that one can use to classify the transition.
How to classify the two phases then?
It turns out you can calculate a topological invariant in the superconducting phase, and it's the Pfaffian of the Hamiltonian. It is just a sign and can therefore only take two values, $\pm 1$, which lead to the $\mathbb{Z}_2$ theory that you talk about. This invariant is only defined in the superconducting phase, when the superconducting gap $\Delta \neq 0$. For the metallic phase, the Hamiltonian has the usual particle number conservation: if $\Delta=0$ the numbers of filled electron and hole states are conserved, and so the invariant once again becomes just the number of filled states.
The $\mathbb{Z}_2$ order stems from the particle-hole symmetry of the superconducting Hamiltonian, and is discussed more here .
Further reading
I invite you to read the accepted answer to this question, by the guy who (among others) wrote the book where the Wikipedia passage you quote is taken from almost verbatim. | {
"domain": "physics.stackexchange",
"id": 63719,
"tags": "superconductivity, topological-phase"
} |
Oscillations when using teb_local_planner for car like robot | Question:
Facing an issue with the teb_local_planner, which is also mentioned here - Oscillations around global path while using teb local planner?.
I am using an Ackermann drive based (car like steering mechanism) robot with a teb_local_planner. However, the robot seems to oscillate around the global path, i.e. if the robot starts off to the right of the path, then goes to the left, tries to come back and again goes to the right.
This is my teb_local_planner_params.yaml:
TebLocalPlannerROS:
odom_topic: /vesc/odom
map_frame: map # default value is odom
# ******* Trajectory **********
teb_autosize: True
dt_ref: 0.4
dt_hysteresis: 0.1
global_plan_overwrite_orientation: True
allow_init_with_backwards_motion: True
max_global_plan_lookahead_dist: 3.0
feasibility_check_no_poses: 2
# ********** Robot **********
max_vel_x: 2.0
max_vel_x_backwards: 1.0
max_vel_y: 0.0
max_vel_theta: 0.3 # the angular velocity is also bounded by min_turning_radius in case of a carlike robot (r = v / omega)
acc_lim_x: 0.5
acc_lim_theta: 0.5
# ********************** Carlike robot parameters ********************
min_turning_radius: 0.82
wheelbase: 0.34 # Wheelbase of our robot
cmd_angle_instead_rotvel: False
# ********************************************************************
footprint_model: # types: "point", "circular", "two_circles", "line", "polygon"
type: "circular"
radius: 0.28 # for type "circular"
# ********** GoalTolerance **********
xy_goal_tolerance: 0.2
yaw_goal_tolerance: 3.0
# velocity at goal point can be anything
free_goal_vel: False
# ********** Obstacles **********
min_obstacle_dist: 0.2
include_costmap_obstacles: True
costmap_obstacles_behind_robot_dist: 1.0
obstacle_poses_affected: 30
inflation_dist: 0.3
costmap_converter_plugin: ""
costmap_converter_spin_thread: True
costmap_converter_rate: 5
# ********** Optimization Parameters **********
weight_kinematics_forward_drive: 100.0
weight_acc_lim_x: 0.0
I visualized the trajectories (nav_msgs/Path) in rviz. The global planner's trajectory looked fine, one which the human would have taken. The local planner's trajectory had back and forth motions.
- Fixed the robot_base_frame from the center of my robot to the rear axle
- Reduced both acc_lim_theta and max_vel_theta from about 0.3 to 0.12.
- Increased weight_kinematics_forward_drive, which penalizes backward motion, from 1.0 to 1000. The path planned by the planner remained the same.
This is the video of the actual robot motion (click on the image, redirects to YouTube):
Originally posted by Subodh Malgonde on ROS Answers with karma: 512 on 2018-10-26
Post score: 0
Answer:
My problem was with parameter tuning. I made the following changes:
- Decreased inflation_dist and min_obstacle_dist
- Increased max_vel_theta and acc_lim_theta (I should have guessed that low values for these parameters were limiting the steering angle of the car)
Now there is no back and forth.
Originally posted by Subodh Malgonde with karma: 512 on 2018-10-30
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 31970,
"tags": "navigation, path-planning, ros-melodic, teb-local-planner, move-base"
} |
Can GridSearchCV be used for unsupervised learning? | Question: I'm trying to build an outlier detector to find outliers in test data. That data varies a bit (more test channels, longer/shorter testing).
First I'm applying the train test split because I want to use grid search for hyperparameter tuning. This is time-series data from multiple sensors, and I removed the time column beforehand.
X shape : (25433, 17)
y shape : (25433, 1)
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.33,
random_state=(0))
Standardize afterwards, and then I changed them into an int array because GridSearchCV doesn't seem to like continuous data. This surely can be done better, but I want this to work before I optimize the coding.
'X'
mean = StandardScaler().fit(X_train)
X_train = mean.transform(X_train)
X_test = mean.transform(X_test)
X_train = np.round(X_train,2)*100
X_train = X_train.astype(int)
X_test = np.round(X_test,2)*100
X_test = X_test.astype(int)
'y'
yeah = StandardScaler().fit(y_train)
y_train = yeah.transform(y_train)
y_test = yeah.transform(y_test)
y_train = np.round(y_train,2)*100
y_train = y_train.astype(int)
y_test = np.round(y_test,2)*100
y_test = y_test.astype(int)
I chose the IForest because it's fast, has pretty good results, and can handle huge data sets (I currently only use a chunk of the data for testing). Setting up the GridSearchCV:
clf = IForest(random_state=47, behaviour='new',
n_jobs=-1)
param_grid = {'n_estimators': [20,40,70,100],
'max_samples': [10,20,40,60],
'contamination': [0.1, 0.01, 0.001],
'max_features': [5,15,30],
'bootstrap': [True, False]}
fbeta = make_scorer(fbeta_score,
average = 'micro',
needs_proba=True,
beta=1)
grid_estimator = model_selection.GridSearchCV(clf,
param_grid,
scoring=fbeta,
cv=5,
n_jobs=-1,
return_train_score=True,
error_score='raise',
verbose=3)
grid_estimator.fit(X_train, y_train)
The Problem:
I can't fit the grid_estimator.
GridSearchCV needs a y argument; without y it's giving me the "missing y_true" error.
What should be used as a target here? At the moment I just passed an important data column to y for testing, but I'm getting this error that I don't understand:
ValueError: Classification metrics can't handle a mix of multiclass and continuous-multioutput
targets
I also got the advice that I need a scoring function and the IForest doesn't have one.
I couldn't find useful information on this. Are there any helpful guides or info that can help me?
Answer: The goal of GridSearchCV is to iterate over (hence search) all possible combinations (hence grid) of hyperparameters and evaluate a model on a cross-validation (hence CV). You do need some score to compare models with different sets of hyperparameters. If you can come up with some reasonable way to score a model after the fit, you can write a custom scoring function. If this scoring function does not require the target (y) to be computed, you can simply pass an array of zeros to GridSearchCV. An example of such a scorer is given here.
Otherwise, if you use some supervised model on a filtered (by IsolationTrees) data, you can do that using Pipelines, and run GridSearchCV on that, see examples in sklearn docs:
from sklearn.pipeline import Pipeline
from sklearn.ensemble import IsolationForest
estimators = [('filter_data_it', IsolationForest()),
('clf', LogisticRegression())]
pipe = Pipeline(estimators)
param_grid = dict(filter_data_it__max_features=[5,15,30], clf__C=[0.1, 10])
grid_search = GridSearchCV(pipe, param_grid=param_grid)
Recall that when you use Pipelines you need to prepend param_grid keys with the name of the pipeline step.
UPD1. As stated in the comments, IsolationForest doesn't have a transform method, so simple chaining will not work. The way IF works is by predicting outliers, not by filtering the data (you are supposed to filter outliers afterwards). However, there is a way around this problem. We need to create a new class with a transform method, which will run IF and filter the data based on its predictions. I will update the code snippet.
It turns out there is no clean way to adapt the sklearn API for that purpose, as stated in these questions, 1, 2; this answer also suggests a solution, however it is relatively complex. Thus, I suggest you proceed with the scorer example. | {
"domain": "datascience.stackexchange",
"id": 11253,
"tags": "python, outlier, grid-search, isolation-forest"
} |
publisher not publishing on topic | Question:
I'm trying to publish a message on a topic once, but it never publishes the data at all. This is the code
in main
ros::init(argc, argv, "keeper_env");
ros::NodeHandle n;
ros::Publisher chatter_pub = n.advertise<std_msgs::Empty>("toggle_led", 1000);
std_msgs::Empty msg;
chatter_pub.publish(msg);
Even putting it in a finite while loop doesn't work:
int i=0;
ros::Rate rat(20);
while(i<10)
{
chatter_pub.publish(msg);
ros::spinOnce();
i++;
rat.sleep();
}
But if I put it in an infinite loop, then it publishes the data.
while(ros::ok())
{
chatter_pub.publish(msg);
ros::spinOnce();
}
If I put it in an if condition in that infinite loop, then it still doesn't publish. I don't want it to be called infinitely; I just need it once. Can someone please help me?
int i=0;
while(ros::ok())
{
if(i==0)
{
printf("Doses it go iun here \n");
chatter_pub.publish(msg);
i++;
}
}
Originally posted by vivek rk on ROS Answers with karma: 56 on 2013-06-23
Post score: 2
Answer:
@Philip is right. The publisher needs some time to connect to the subscribers. Your finite loop is just too short for that. I'm not sure what you expect from your condition case in the infinite loop, as that is never true.
The proper solution to your problem is to use pub.getNumSubscribers() and wait until that is > 0. Then publish.
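In Python that pattern is the same: poll until a subscriber is connected, then publish once. Below is a minimal, library-agnostic sketch of the poll-then-act helper (the `wait_for` function is illustrative, not ROS API; the only real ROS calls referenced are the ones named above):

```python
import time

def wait_for(predicate, timeout=5.0, poll=0.05):
    """Poll predicate() until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
    return True

# In a rospy node this would look like:
#   if wait_for(lambda: chatter_pub.get_num_connections() > 0):
#       chatter_pub.publish(msg)   # publish exactly once
# (roscpp equivalent: pub.getNumSubscribers() > 0)
```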
Originally posted by dornhege with karma: 31395 on 2013-06-24
This answer was ACCEPTED on the original site
Post score: 10
Original comments
Comment by Philip on 2013-06-24:
Ah, thanks for the hint about getNumSubscribers! I wasn't aware of that :-)
Comment by aschaefer on 2018-12-05:
In case you run into this issue with rospy: The corresponding function is called get_num_connections(). | {
"domain": "robotics.stackexchange",
"id": 14672,
"tags": "ros, publisher"
} |
Points-in-a-plane from HackerRank | Question: I've been struggling with this problem for days now, making no progress:
There are N points on an XY plane. In one turn, you can select a set of collinear points on the plane and remove them. Your goal is to remove all the points in the least number of turns. Given the coordinates of the points, calculate two things:
The minimum number of turns (T) needed to remove all the points.
The number of ways to to remove them in T turns. Two ways are considered different if any point is removed in a different turn.
-- https://www.hackerrank.com/challenges/points-in-a-plane
I've tried a greedy exhaustive solution, where I draw a line between each pair of points, then start to eliminate the lines in descending order based on the number of points they cross. Unfortunately, there is at least one case where this approach produces suboptimal results:
For ease of discussion, I will call lines "longer" or "shorter", based on how many points they include. The greedy algorithm is simply to eliminate lines in order of descending length.
Suppose we have a set of N longer lines, and another set of M shorter lines. Our greedy algorithm will eliminate the long lines first. But what if every single point of the long lines is also included in a short line? In that case, our initial elimination of the N longer lines was a waste, since we would have gotten those lines "for free" had we just eliminated the shorter lines. Specifically, our greedy approach will require N + M eliminations, where we could have cleared all points in just M steps.
The simplest example input demonstrating this is:
(0, 1), (1, 2), (2, 4), (3, 3)
(0, 0), (1, 0), (2, 0), (3, 0)
(0,-1), (1,-2), (2,-4), (3,-3)
As you can see, we have a line of length 4 running along the X axis, and 4 shorter vertical lines of length 3 perpendicular to it. Our greedy algorithm will first eliminate the longest line, after which there will be 8 points remaining, with no more than 2 of them collinear. Eliminating those will thus take 4 steps, for a total of 5, where we could have eliminated all points in just 4 steps, had we simply eliminated the 4 vertical lines first.
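The counterexample can be verified by brute force. The sketch below is exponential-time and only suitable for tiny inputs; it compares the greedy turn count against the exact minimum (found by branching on maximal lines through the first uncovered point):

```python
def collinear(p, q, r):
    # cross product of (q - p) and (r - p); zero means the three points are collinear
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def min_turns(points):
    """Exact minimum number of collinear-set removals (exponential; tiny inputs only)."""
    memo = {}
    def solve(remaining):
        if len(remaining) <= 2:          # 0, 1 or 2 points: at most one line needed
            return 1 if remaining else 0
        if remaining in memo:
            return memo[remaining]
        p = min(remaining)               # p must be covered by some line; try all partners
        best = None
        for q in remaining - {p}:
            line = frozenset(r for r in remaining if r in (p, q) or collinear(p, q, r))
            cand = 1 + solve(remaining - line)
            if best is None or cand < best:
                best = cand
        memo[remaining] = best
        return best
    return solve(frozenset(points))

def greedy_turns(points):
    """Greedy: always remove a largest collinear subset of the remaining points."""
    remaining = set(points)
    turns = 0
    while remaining:
        pts = sorted(remaining)
        best = {pts[0]}
        for i, p in enumerate(pts):
            for q in pts[i + 1:]:
                line = {r for r in pts if r in (p, q) or collinear(p, q, r)}
                if len(line) > len(best):
                    best = line
        remaining -= best
        turns += 1
    return turns

pts = [(0, 1), (1, 2), (2, 4), (3, 3),
       (0, 0), (1, 0), (2, 0), (3, 0),
       (0, -1), (1, -2), (2, -4), (3, -3)]
```

On these 12 points `greedy_turns` takes 5 turns while `min_turns` finds the optimum of 4.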
Could someone provide at least a hint at the general body of knowledge required to approach this? I solved many other HackerRank questions, but can't make any headway with this one.
Answer: this is the problem of covering points with lines or linear facility location
which is NP-Hard. This is why you couldn't find a greedy solution. There are approximation and exact algorithms for this problem. If you want an exact algorithm that is efficient on large inputs I suggest you a simple version of parametrized solution for this problem. | {
"domain": "cs.stackexchange",
"id": 1880,
"tags": "algorithms, graphs"
} |
Oscilations in loss curve | Question: I saw a similar question, but I think my problem is something different.
While training, the training loss and the validation loss move around one number, not decreasing significantly.
I have 122707 training observations and 52589 test observations, with 55 explanatory variables and one dependent. One Conv1D layer with 24 filters, 2 LSTM layers with 24 units each, and one Dense layer. I've added a dropout rate of 0.2 between the layers. Total parameters: 13417.
It seems like my model is not learning at all. Does it mean that the dataset is not a good representation of the specific problem? Should I increase the number of epochs? I use the Adam optimizer with the default learning rate.
Adding additional info:
I am trying to predict next-hour air pollution based on the previous hour's pollution concentration and meteorological data such as temperature, wind speed, etc. Day, hour and month are also included and encoded with one-hot encoding. Additionally, the wind direction (in degrees) is decomposed into its sin and cos components. Previously I tried normalization of the data but it didn't seem to make any difference. I haven't tried any other models. Here is the model:
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=24, kernel_size=3,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 55]),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.LSTM(24, return_sequences=True),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.LSTM(24, return_sequences=True),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 1000)
])
Somewhere I saw a Lambda layer after the Dense layer in regression examples. I noticed that adding the Lambda layer at the end speeds up the learning. I multiply the output by 1000 because it is the maximum value of the variable I want to predict.
Answer: Rather than oscillations, it looks like white noise, like a random walk. In other words, as you said, your model is not learning anything.
Unfortunately it's impossible to say what's wrong, since we can't see any code. We need more information about dataset, how you processed it, model implementation, all the hyperparams you chose, what other versions you tried before that one, ... the list is countless. But most importantly it's really hard to help you without code.
If the dataset is a good one the problem must be some error you made along the way.
EDIT:
Here's what I think:
You don't need Conv layers followed by RNN layers. This doesn't really make sense. Let the LSTM receive raw input.
Don't use Dropout with RNNs; they don't go along very well together. Dropout makes sense with Dense and Conv layers, but in RNNs, where the sequence is everything, it can actually make things worse. Some people use recurrent dropout as an alternative, but it's not necessary.
Don't use return_sequences=True between an LSTM and a Dense layer. That should be used between LSTMs only.
That Lambda layer at the end is probably causing most of the error. If you multiply all your predictions by 1000, what you get is by definition a prediction that is on a completely different scale than your target value.
The network overall is too deep and has too many parameters. I assume you are working with the famous Beijing air quality dataset. In this case, it is enough to work with one LSTM layer, followed by a Dense node to make the prediction. Everything else is overkill and not necessary for a simple dataset like that.
Something much more simple, like:
model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(24, input_shape=(seq_len, n_vars)),
tf.keras.layers.Dense(1),
])
has higher chance to work. (Please specify the input shape correctly). Try playing with its hyperparameters, after you made sure all variables are properly scaled between train and test data.
Good luck! | {
"domain": "datascience.stackexchange",
"id": 8784,
"tags": "cnn, lstm, loss-function"
} |
Why does gravity increase in star formation? | Question: When a star ignites ( ie. fusion starts ), the star maintains its form by balancing gravity's inward pressure, and radiation's outward pressure.
I get that the fusion of hydrogen atoms releases energy... fine...
How does gravity keep it together if the mass is lessening as a result of fusion( mass being converted into energy from fusion) while gravity is weakening( as mass lessens )?
Wouldn't the radiation overpower the force of gravity and tear the star apart?
Answer: I am going to start with this paragraph from Wikipedia (emphasis mine):
The most important fusion process in nature is the one that powers
stars. In the 20th century, it was realized that the energy released
from nuclear fusion reactions accounted for the longevity of the Sun
and other stars as a source of heat and light. The fusion of nuclei in
a star, starting from its initial hydrogen and helium abundance,
provides that energy and synthesizes new nuclei as a byproduct of that
fusion process. The prime energy producer in the Sun is the fusion of
hydrogen to form helium, which occurs at a solar-core temperature of
14 million kelvin. The net result is the fusion of four protons into
one alpha particle, with the release of two positrons, two neutrinos
(which changes two of the protons into neutrons), and energy.
Different reaction chains are involved, depending on the mass of the
star. For stars the size of the sun or smaller, the proton-proton
chain dominates. In heavier stars, the CNO cycle is more important.
The proton-proton chain set of reactions look like this:
The CNO cycle looks like this:
Net Result
Either way, the net result is 4 protons ($^1\!$H nuclei) are turned into 1 alpha particle ($^4\!$He nucleus) plus 2 positrons (e$^+$). The 2 positrons go on to annihilate 2 electrons, so altogether we have a mass change of $$ \Delta M = M_{\mathsf \alpha} - 2M_{\mathsf e} - 4M_{\mathsf P}\,. $$
Let's find out the fractional change in mass: $$ f_\Delta = \frac{\Delta M}{4M_{\mathsf P}} = \frac{M_{\mathsf \alpha} - 2M_{\mathsf e} - 4M_{\mathsf P}}{4M_{\mathsf P}}\,. $$
Now the ratio of the mass of an alpha particle to a proton is $3.9726$, or $$ M_{\mathsf \alpha} = 3.9726\times M_{\mathsf P}\,. $$
The ratio of the mass of a proton to an electron is $1836.1$, or $$ M_{\mathsf e} = \frac{M_{\mathsf P}}{1836.1} = 0.0005446\times M_{\mathsf P}\,. $$
Substituting into the $f_\Delta$ equation, $$ f_\Delta = \frac{3.9726\times M_{\mathsf P} - 0.0011\times M_{\mathsf P} - 4\times M_{\mathsf P}}{4\times M_{\mathsf P}} = \frac{-0.0285}4 = -0.007125 = -0.7125\%\,\,.$$
So obviously, even if all of the hydrogen were converted (only a fraction actually is), the loss of mass to the star would be far too small to matter.
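Plugging in the numbers above as a quick numerical check of $f_\Delta$:

```python
# Mass ratios quoted above, both in units of the proton mass
m_alpha = 3.9726        # alpha particle / proton mass ratio
m_e = 1 / 1836.1        # electron / proton mass ratio

# Fractional mass change per 4 protons fused (2 electrons annihilated)
f_delta = (m_alpha - 2 * m_e - 4.0) / 4.0
# f_delta is about -0.0071, i.e. roughly 0.7% of the mass is radiated away
```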
A more important mass loss for large stars is that from their stellar wind, which for very large main sequence stars (types O or B) removes a sizable fraction of the star's mass over its lifetime. | {
"domain": "astronomy.stackexchange",
"id": 1189,
"tags": "gravity, star-formation, radiation"
} |
Finding Peaks in an Autocorrelation Function | Question: I'm trying to find the period of a signal. I've used the FFT to compute the autocorrelation of the signal. As can be seen from the autocorrelation function (plotted below) I obtained, there are 70 samples between peaks, which actually indicates the period of my signal.
What is the best way to extract the indices of these peaks from such a data?
Answer: Removing the DC offset from your signal will get rid of the triangular "trend" seen here. Another way to detrend data (which is not specific to autocorrelation functions) is to subtract from your function a median-filtered version of itself (the median-filtered version corresponding to the trend).
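Both steps — removing the DC offset and then picking windowed maxima — can be sketched in a few lines of plain Python. The example below runs on a synthetic autocorrelation with a 70-sample period; `W` is chosen as roughly half the expected period, and all names are illustrative:

```python
import math

def detrend(x):
    """Remove the DC offset (mean); a median-filtered trend could be subtracted instead."""
    m = sum(x) / len(x)
    return [v - m for v in x]

def find_peaks(x, W):
    """Indices n where x[n] is the maximum of the window [n - W, n + W]."""
    peaks = []
    for n in range(len(x)):
        lo, hi = max(0, n - W), min(len(x), n + W + 1)
        window = x[lo:hi]
        if x[n] == max(window) and max(window) > min(window):
            peaks.append(n)
    return peaks

# Synthetic autocorrelation with a 70-sample period
acf = [math.cos(2 * math.pi * n / 70) for n in range(300)]
peaks = find_peaks(detrend(acf), W=35)
# The spacing between consecutive peak indices recovers the 70-sample period.
```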
You can then detect peaks by finding local maxima: if $X(n) = \max_{k \in [n-W, n+W]} X(k)$, then $n$ is a peak. $W$ is a scale factor which indicates how narrow and how close to each other you allow your peaks to be. | {
"domain": "dsp.stackexchange",
"id": 444,
"tags": "fft, cross-correlation"
} |
What is this error regarding easting and northing of ground control points? | Question: I want to understand what is meant by the easting of ground control points (GCPs), and what it would mean to have an RMSE of x m with a minimum of a m.
The average root mean square errors (RMSE) in the easting and northing
of the GCPs were 'x' and 'y' m, respectively, with a minimum value of 'a' m
and a maximum value of 'b' m.
Answer: Easting and northing provide an alternative to latitude and longitude for specifying a point on the Earth ellipsoid. I suggest you read (for example) the wiki.gis.com pages on the Universal Transverse Mercator (UTM) coordinate system and easting and northing. There are lots of other articles on these topics on the internet.
There are advantages and disadvantages with regard to easting and northing versus latitude and longitude. One key advantage of easting and northing is that for nearby points such as the error in ground control points, easting and northing take advantage of the fact that the surface of the Earth is nearly flat on a local scale. One key disadvantage of easting and northing is that it's rather convoluted.
Because this concept is already well-documented, and because it is rather convoluted, I'm not going to go into details in this answer. I suggest instead that you read up on the topic and ask additional questions when you come up with them.
Regarding the specific issue raised in the question,
The average root mean square errors (RMSE) in the easting and northing of the GCPs were 'x' and 'y' m, respectively, with a minimum value of 'a' m and a maximum value of 'b' m.
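Concretely, with made-up per-GCP errors, those quantities could be computed as below. Note the interpretation of the minimum/maximum as per-point radial errors is only one plausible reading of the text, and all numbers here are invented for illustration:

```python
import math

# Hypothetical per-GCP errors in metres, as (easting, northing) pairs
errors = [(0.12, -0.08), (-0.05, 0.10), (0.09, 0.02), (-0.11, -0.06)]

# RMSE in each direction: 'x' (easting) and 'y' (northing) in the quoted text
rmse_e = math.sqrt(sum(de ** 2 for de, _ in errors) / len(errors))
rmse_n = math.sqrt(sum(dn ** 2 for _, dn in errors) / len(errors))

# One guess at 'a' and 'b': min/max of the per-point radial (combined) error
radial = [math.hypot(de, dn) for de, dn in errors]
a, b = min(radial), max(radial)
```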
Another way to put the first part of the quoted text is that the ground control points have a root mean square error of x meters in the east-west direction and y meters in the north-south direction. I would have to see the relevant text to properly interpret the second part (the part about minimum and maximum errors). I suspect that those are the minimum and maximum of the square roots of the sum of the squares of the easting and northing errors, but that's just a guess. | {
"domain": "earthscience.stackexchange",
"id": 2585,
"tags": "remote-sensing, data-analysis, statistics"
} |
A simple electromotor? | Question: I have to build a simple electromotor in the following way: I attach a permanent magnet to a battery, connect some metal supports to the terminals of the battery, and place a coil of wire on the supports, suspended above the magnet. My question is, will it still work just as well (or at all) if the wire is insulated? Thanks!
Answer: It will work just as well provided the part of the coil making contact with the supports is not insulated. The rest of it can be insulated. The important property is that current has the ability to run through the coil because it is this current that experiences a force due to the magnetic field of the permanent magnets causing the coil to rotate. For a bit more detail and pictures:
http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/mothow.html | {
"domain": "physics.stackexchange",
"id": 7661,
"tags": "magnetic-fields, electromagnetism"
} |
Difference between gas combustion engine and gas turbine? | Question: I am a non-engineer and I am using a gas power plant data base. Some plants listed are of the type "combustion engine", some of the type "gas turbine", some "steam turbine" and some "combined cycle". I believe to understand that combined cycle is a combination of combustion engine and steam turbine. But what is the difference between combustion engine and gas turbine then? Or does it mean the same?
Answer: Combustion engine - an internal combustion engine like a car motor, with pistons moved in cycles. It's inefficient but can be easily scaled down to almost arbitrarily small sizes; usually used as a backup.
Gas turbine - in power plants, these are very similar in construction to jet engines, where gas is the fuel - a multi-stage turbine compressor, a turbine on exhaust, high RPM; the torque produced is used to run the generator. It's not as efficient as steam or combined cycle, though more than combustion engine - but the power output can be rapidly tuned to needs, providing a response to changing demand (which is typical for the "more inert" types that use steam.)
Steam turbine - gas heats water in a boiler; superheated steam runs through turbines, then is cooled. This is the same principle as in most other thermal power plants (coal, nuclear, geothermal). It's usually a large installation and may take hours to get up to speed (so no rapid response to demand, and energy is wasted when demand rapidly vanishes), but it has very good efficiency.
Combined cycle - exhaust from gas turbine (the "jet engine") is used to heat water into steam and run a steam turbine. Better efficiency than both above, and provides the much desired rapid response. Of course cost of construction is similar to sum of costs of construction of the two, and maintenance is more complex, but the operational costs are reduced. | {
"domain": "engineering.stackexchange",
"id": 1553,
"tags": "power-engineering, gas"
} |
Which requirements are needed for 2 different species to be able to have offspring? | Question: Sometimes different species mate and are able to have offspring, usually with anomalies. Are there known requirements for 2 different species to produce offspring? Why are species like the lion and the tiger, which seem to have many differences, able to produce offspring, when others that supposedly share 99% of their DNA can't?
Answer: Let's break this down like a logical problem. In reality, it involves the correct gene expression at the correct time, place, etc. But that would be a book.
Prezygotic reproductive isolating mechanisms
Environment
A bull and a whale can't mate; they don't live in the same environment. Lions and tigers similarly do not mate in the wild due to lack of overlap in the environment. In captivity, yes.
Delivery of gametes
Sperm must be able to be delivered to the ovum for fertilization to occur. Some species just can't get this part right. Your house cat would not be a suitable donor for an elephant.* In closer species, say a chicken and a peacock, the rooster's courtship display (which is nowhere near as spectacular at the peacock's) will not be accepted by the female; no opportunity for delivery of gametes.
Some gametes can be delivered but not ova (the female isn't "in season".) Zero there as well.
Unity of gametes
If the gametes are both delivered at the correct time, can the sperm bind to the egg? Successful binding requires a receptor-ligand interaction which actually has a high degree of species specificity. So binding requires that the sperm and the egg can, say, "communicate" successfully. Dog sperm doesn't "speak" cat ovum.
If they do "speak the same language", can the sperm penetrate the ovum's protective coating (the zona pellucida)? Assuming sperm capacitation (a big hurdle right there), the sperm may not have the correct enzymes in its 'head' to penetrate the zona pellucida. Not all enzymes carry out the same function on all molecules, and if the correct enzymes are present to digest/break through the zona pellucida, is there enough? Is the sperm active enough? I'll liken this kind of communication to 'pillow talk'. We're getting much closer. Let's say the sperm has his pillow talk down, and can penetrate the layer.
Postzygotic isolating mechanisms
Viability
The hybrid embryo must be able to develop into a zygote. Hybrid inviability means there wasn't enough similarity to proceed to a blastocyst and the hybrid dies.
Say viability is achievable. The blastocyst/blastomere must "communicate" with the mother so that attachment is not only possible but can be sustained. In effect, it must "shout" to the mom, "I'm here! Pay attention to me!" (In humans, this involves secretion of large quantities of a hormone called chorionic gonadotropin, which stimulates continued secretion of progesterone, thus no shedding of the uterine lining.) Not all mammalian blastocysts communicate the same way. If what we have here is a failure to communicate... as with Cool Hand Luke, the result is not good.
Hybrid sterility
The species were close enough for the conceptus to be heard by the mom. The right chromosomes with the right gene expression results in a live birth.
Success!
If the hybrid is fertile, it is by one definition not a different species. (There is much debate as to what exactly defines a species. See the references.)
If you are still left saying, "But why?", then it must be explained on a genetic/molecular level, with sentences like,
The zona pellucida of mammalian eggs is composed mainly of three glycoproteins, all of which are produced exclusively by the growing oocyte. Two of them, ZP2 and ZP3, assemble into long filaments, while the other, ZP1, cross-links the filaments into a three-dimensional network. The protein ZP3 is crucial: female mice with an inactivated ZP3 gene produce eggs lacking a zona and are infertile.
If this is the level of explanation you seek, there's a book for you: Molecular Biology of the Cell. Alberts B, Johnson A, Lewis J, et al.
Fertilization
The Process of Speciation
Fertilization
*Note here I'm referring to morphological differences, one way used to define species. | {
"domain": "biology.stackexchange",
"id": 6821,
"tags": "reproduction, species"
} |
String compression algorithm using functional JavaScript | Question: I've written an algo to compress a string
eg. aabcbbba to a2bcb3a
I'm pretty new to functional programming. I feel it's easier to debug and learn from functional code if you're good at it already, but the total opposite if you're not.
I'm wondering how I can make this code cleaner and more functional.
I feel like there's gotta be a better way to do compressCharacters without the need of an array or a result variable (perhaps substituting forEach with something else) as well as reducing the lines of code in groupCharacters
const signature = `aabcbbba`;
const compressString = signature => {
return compressCharacters(groupCharacters(signature));
}
const groupCharacters = signature => {
let newSignature = "", arr = [];
// convert signature to an array
let result = [...signature].reduce((accumulator, element, index) => {
// check if last letter in accumulator matches the current element
if (accumulator[accumulator.length -1] !== element) {
// add the accumulator string into an array
arr.push(accumulator);
// clear the newSignature string and set it to current element
newSignature = element;
} else {
// if it matches, add the element to the accumulator
newSignature = accumulator += element;
}
// check if it's the last item - add to array
if (index === signature.length - 1) arr.push(element);
// return newSignature to accumulator
return newSignature;
})
return arr;
}
const compressCharacters = arr => {
let newArray = [];
let result = arr.forEach(e => e.length > 1 ? newArray.push(`${e[0]}${e.length}`) : newArray.push(e))
return newArray.join("");
}
compressString(signature);
Answer: Style notes
Not putting {, } around single line statement blocks is a bad habit. Always delimit statement code blocks.
In compressCharacters newArray should be a constant. In groupCharacters arr should be a constant.
Consistent naming is important. Your naming is all over the place. You call a string, characters (in compressCharacters), string (in compressString), signature, and e in the forEach. A character you call element. You abbreviate an array to arr in one function and call it newArray in another. Most of the names are describing the type and not the abstracted data that they hold.
Don't add useless or redundant code. In compressCharacters you create a variable result that you do nothing with. Not to mention that forEach does not have a return defined. Also result in groupCharacters is never used.
Don't declare variables outside the scope that they are to be used. newSignature is only used inside reduce but declared outside the callback.
Functional programming means that functions should not have side effects (change the state of anything outside the function's scope). The reduce callback uses the array arr, which breaks the no-side-effects rule. And the forEach pushes to newArray, which is also outside the forEach callback's scope (use map or reduce in that case).
Applying the above you would get something like the following code.
const groupRuns = str => [...str].reduce((groups, char) => {
const last = groups.length - 1;
if (last < 0 || groups[last][0] !== char) { groups.push(char) }
else { groups[last] += char }
return groups;
}, []);
const concatGroups = groups => groups.reduce((str, g) =>
str + (g.length > 2 ? g[0] + g.length : g)
, "");
const compressString = str => concatGroups(groupRuns(str));
const signature = `aabcbbbaaaaaabcccccdddddddbbba`;
compressString(signature);
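As a cross-check of the expected outputs, here is an equivalent sketch in Python using `itertools.groupby` for the run grouping. The `min_run` parameter switches between the original spec (append a count for runs of 2 or more) and the convention used in the revised code above (3 or more):

```python
from itertools import groupby

def compress(s, min_run=2):
    """Run-length compress s; append the run length once it reaches min_run."""
    out = []
    for ch, run in groupby(s):
        n = sum(1 for _ in run)                      # length of this run
        out.append(ch + str(n) if n >= min_run else ch * n)
    return "".join(out)

# compress("aabcbbba")             -> "a2bcb3a"  (original spec: count runs of 2+)
# compress("aabcbbba", min_run=3)  -> "aabcb3a"  (counts only for runs of 3+)
```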
Note that rather than adding a count to groups of size 2, a count is only added once a group reaches 3 or more (replacing "aa" with "a2" saves nothing). | {
"domain": "codereview.stackexchange",
"id": 32609,
"tags": "javascript, functional-programming, compression"
} |
Clausius–Clapeyron equation: shape of phase diagrams makes no sense | Question: I am trying to model the melting point of a substance at varying pressures (ranging from very small to very very large). All I am trying to do is make an equation that relates melting temperature to pressure, so $T(P)$ is some function. To do this, I am trying to use the Clausius–Clapeyron equation (CC), which states that
$$\frac{\mathrm dP}{\mathrm dT} = \frac{L}{TΔV}.$$
In other words, the slope of the equilibrium line on the phase diagram should decrease as temperature is increased.
However, this is not the case; the curve of the equilibrium line is exponential and the slope $\mathrm dP/\mathrm dT$ increases as $T$ increases. Integrating CC we arrive at a logarithmic function, which again is not what empirical measurements reflect.
As I see it, the empirical results and the equation that is supposed to describe them are mutually exclusive. There is no way to arrive at an exponential curve from a slope that varies with $1/x.$ The CC equation and phase diagrams cannot be both be true at once and it is driving me mad.
Why is this the case? Is the CC equation valid at all because it seems to be totally false? What function do I use to model melting points at different temperatures?
The results that are so stupefying are these:
The shape of the curve is exponential. But, the supposed derivative is $1/T$, in which case the slope of each curve (red and blue here) should flatten as $T$ increases, but it steepens. Also, integrating that supposed derivative gives us $\ln (T)$ which is definitely not the shape of the phase diagram. This discrepancy is true for both the liquid/solid and liquid/gas curves.
I hope this clarifies the question!
Answer: This is a summary of the equations to use to calculate phase transitions.
The Clapeyron equation $\displaystyle p_2-p_1=\frac{\Delta H}{\Delta V}\ln\left( \frac{T_2}{T_1} \right)$ is used for a solid-liquid transition. The changes in enthalpy and volume relate therefore to changes occurring in fusion.
The Clausius-Clapeyron equation describes solid-vapour and liquid-vapour changes because the final volume is far greater than the initial one, and is $\displaystyle \frac{dp}{dT}=p\frac{\Delta H}{RT^2}$ where $\Delta H$ the enthalpy change at the liquid–vapour or sublimation transition. Integrating this last equation from pressure $p_1 \to p_2$ and temperature $T_1 \to T_2$ gives $\displaystyle \ln\left(\frac{p_2}{p_1} \right) = -\frac{\Delta_{vap}H}{R}\left( \frac{1}{T_2}-\frac{1}{T_1} \right) $.
The change in the volume during fusion is $\displaystyle \Delta_{fus}V = m\left(\frac{1}{d_l}-\frac{1}{d_s} \right)$ where $m$ is the molar mass and $d_l$ and $d_s$ the densities of the liquid and solid. The pressure variation for the solid to liquid (melting or fusion) change is
$$\displaystyle p_2=p_1+\frac{\Delta_{fus}H}{\Delta_{fus}V}\ln\left(\frac{T_2}{T_1} \right)$$
and for evaporation and sublimation
$$\displaystyle p_2=p_1\exp\left( -\frac{\Delta_{vap}H}{R}\left(\frac{1}{T_2}-\frac{1}{T_1} \right) \right)$$
with the appropriate $\Delta H$. This is $\Delta_{vap}H$ for evaporation and $\Delta_{fus}H + \Delta_{vap}H$ for sublimation. Sublimation is treated as two steps merged into one; melting and instantaneous evaporation.
The $p$ vs $T$ plot for benzene is shown in the figure. Notice how this differs from how these diagrams are generally drawn; that is partly because log pressure is sometimes plotted without this being indicated on the figure. Notice also how the solid–liquid line is effectively vertical.
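As a quick check on that last observation, the Clapeyron slope $\mathrm{d}p/\mathrm{d}T = \Delta_{fus}H/(T\,\Delta_{fus}V)$ can be evaluated at the triple point. This is a stdlib-only sketch using the benzene numbers listed below, restated here so it runs standalone:

```python
# Why the solid-liquid line looks vertical: evaluate the Clapeyron slope
# dp/dT = DH_fus / (T * DV_fus) at the triple point for benzene.
mol_mass = 78.0 / 1000.0                    # kg/mol
DV_fus = mol_mass * (1/879.0 - 1/981.0)     # m^3/mol (liquid minus solid)
DH_fus = 10.6e3                             # J/mol
T3 = 5.5 + 273.16                           # K
slope = DH_fus / (T3 * DV_fus)              # Pa/K
print(slope / 101325)                       # roughly 40 atm per kelvin
```

So raising the pressure by a full atmosphere shifts the melting point by only a few hundredths of a kelvin, which is why the fusion line looks vertical on a linear pressure axis.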
The data used is
R = 8.314 #( J/mol/K)
dens_sol = 981.0 #( kg/m^3)
dens_liq = 879.0 #( kg/m^3)
mol_mass = 78.0/1000.0 #( kg/mol)
DH_vap = 30.8*1000 #( J/mol)
DH_fus = 10.6*1000 #( J/mol)
p3 = 36.0/760*101325 #( triple point pressure Pa)
T3 = 5.5 + 273.16 #( triple point temperature K)
DV_fus = mol_mass*(1/dens_liq-1/dens_sol) # delta volume fusion
Python/numpy functions for the pressure are:
import numpy as np
p_liq_vap= lambda T: p3*np.exp( (DH_vap/R)*(1/T3-1/T))
p_sol_vap= lambda T: p3*np.exp( ((DH_fus + DH_vap)/R ) * (1/T3-1/T) )
p_sol_liq= lambda T: p3 + DH_fus/DV_fus*(np.log(T) - np.log(T3)) | {
"domain": "chemistry.stackexchange",
"id": 13792,
"tags": "physical-chemistry, thermodynamics, phase, melting-point, pressure"
} |
Positive 1-in-3 SAT FPT or Fixed Parameter Intractable | Question: There are a number of satisfiability problems that are difficult to solve even in the fixed parameter sense. For example, Weighted q-CNF Satisfiability is W[1]-complete when parameterized by the number of variables that are set to true.
My question is: is there any literature on whether Positive 1-in-3 is W[1]-hard or fixed parameter tractable when parameterized by the number of variables that are set to true? Positive 1-in-3 SAT is the problem where all literals in an expression are positive and exactly one literal in each clause is true.
Answer: No, since even the generalization to 1-in-at-most-3 is FPT for the same reason as vertex cover.
(With at most 3 variables per clause, there will be at most 3 cases for each step.)
I feel like that generalization should have a simple poly-size kernel, but can't figure out any way of showing that.
The corresponding problem for at-most-1-in-3 constraints, where one wants exactly k variables to be true, is W[1]-hard by reduction from independent set:
If k<2 then brute force, else:
There's a variable for each vertex and one other variable.
The constraints are u , other_variable , v for edges u,v.
Since 2≤k and other_variable is in every constraint, other_variable must be false, so the satisfying assignments with exactly k Trues correspond to the independent k-sets. | {
"domain": "cstheory.stackexchange",
"id": 3801,
"tags": "np-hardness, sat, parameterized-complexity"
} |
How to represent impulse function in 2D? | Question: To be more specific I want to show that impulse function in 2D can be represented as $β(r)=δ(r)/πr$.
Also I want to show that each projection of a two dimensional impulse function at the origin is a Delta function.
I could not find any useful information about these two problems, if anyone could help I will be appreciated.
Answer: So dealing with generalized functions like the Dirac delta requires some care, and when dealing with N-dimensional versions you need to be very explicit with your notation to keep things straight.
I'll denote the 2 dimensional delta function in polar coordinates at the origin as ${}^2\delta(r, \theta) = {}^2\delta(r)$, since for the special case of the origin, $\theta$ doesn't matter.
For this derivation, ${}^2\delta(r)$ represents the following limiting sequence of functions (that are asymmetrical about 0):
$${}^2\delta(r) = \lim_{\epsilon \rightarrow 0^+} \dfrac{1}{\pi\epsilon^2} \quad 0 < r < \epsilon$$
Using this limiting sequence of functions, the 2-D delta function at the origin in polar coordinates has the following property of "integrating to 1" for the 2-D integration:
$$\int_0^{2\pi} \int_0^{\infty} {}^2\delta(r) \space r \space \mathrm{d}r \space \mathrm{d}\theta = 1$$
To separate the $r$ and $\theta$ portions of ${}^2\delta(r)$, so we can use a 1D Dirac Delta function, I'll use the following limiting sequence of functions (that are asymmetric about 0) for the 1D Dirac Delta function:
$${}^1\delta_a(r) = \lim_{\epsilon \rightarrow 0^+} \dfrac{1}{\epsilon} \quad 0 < r < \epsilon$$
Using this limiting sequence of functions, the 1-D delta function at the origin has the following property of "integrating to 1" for the 1-D integration:
$$\int_0^{\infty} {}^1\delta_a(r) \space \mathrm{d}r = 1$$
We can then equate the two above integrals to get the relationship between $ {}^2\delta(r)$ and ${}^1\delta_a(r)$
$$\begin{align}\\
\int_0^{\infty} {}^1\delta_a(r) \space \mathrm{d}r &= 1\\
&= \int_0^{2\pi} \int_0^{\infty} {}^2\delta(r) \space r \space \mathrm{d}r \space \mathrm{d}\theta\\
&= \int_0^{\infty} {}^2\delta(r) \space 2\pi r \space \mathrm{d}r\\
\end{align}$$
so we have
$${}^2\delta(r) = \dfrac{{}^1\delta_a(r)}{2\pi r}$$
This differs from your desired result by a factor of $2$, because I did not allow the 1 dimensional delta function to be symmetric around $0$, which wouldn't make sense for a polar origin where $r$ cannot be less than $0$ and $\theta$ ranges in $[0, 2\pi)$.
If one allows $r$ to go negative and $\theta$ to range in $[0, \pi)$, then one can show for a 1 dimensional delta function that is symmetric around $0$
$${}^2\delta(r) = \dfrac{{}^1\delta(r)}{\pi |r|}$$
(note the absolute value bars!)
This is a good example of why delta functions require careful definitions and considerations when making statements.
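As a concrete sanity check, the normalization of the limiting sequence used above can be verified numerically. A stdlib-only sketch, where a midpoint sum stands in for the 2-D polar integral:

```python
import math

def delta2_eps(r, eps):
    # finite-width stand-in for the 2-D delta: 1/(pi*eps^2) on 0 < r < eps
    return 1.0 / (math.pi * eps**2) if 0.0 < r < eps else 0.0

def polar_integral(eps, n=10000):
    # integrate delta2_eps(r) * r dr dtheta over the plane (midpoint rule)
    dr = eps / n
    s = sum(delta2_eps((i + 0.5) * dr, eps) * ((i + 0.5) * dr) * dr
            for i in range(n))
    return 2.0 * math.pi * s

print(polar_integral(1e-3))  # -> 1.0 for every eps, as required
```

The result is 1 independent of eps, which is exactly the "integrating to 1" property that survives the limit.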
To show the projections of a 2-D delta function are themselves delta functions, start with a suitable limiting sequence of functions for ${}^2\delta(x,y)$, like an infinitely thin and tall rectangle function both in x and in y, such that
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} {}^2\delta(x,y) \space \mathrm{d}x \space \mathrm{d}y = 1$$
and the answer should fall out from the limiting sequence of functions you use. | {
"domain": "dsp.stackexchange",
"id": 6874,
"tags": "image-processing, homework, impulse-response, 2d"
} |
Problems with 2 Dynamixels MX-28 over USB2Dynamixel | Question:
Greetings from Germany,
I have a problem: I am trying to run the Pi Robot face tracking system, and I'm using two Dynamixel MX-28 motors for movement. I interfaced the motors with the PC via the USB2Dynamixel USB interface with an RS-485 connection. Running the Robotis node via launch file gets me the following console message:
setting /run_id to bb081e58-9218-11e2-adf3-000bab41dd30
process[rosout-1]: started with pid [5802]
started core service [/rosout]
process[dynamixel_manager-2]: started with pid [5814]
process[dynamixel_controller_spawner1-3]: started with pid [5815]
[INFO] [WallTime: 1363864577.448834] pan_tilt_port: Pinging motor IDs 0 through 25...
[INFO] [WallTime: 1363864577.951461] pan_tilt_port controller_spawner: waiting for controller_manager dxl_manager to startup in global namespace...
[ERROR] [WallTime: 1363864579.650591] Exception thrown while getting attributes for motor 1 - Invalid response received from motor 1. Wrong packet prefix ['\xbf', '\xff']
[ERROR] [WallTime: 1363864579.665361] Exception thrown while getting attributes for motor 1 - Invalid response received from motor 1. Wrong packet prefix ['\xdf', '\xff']
[INFO] [WallTime: 1363864579.833898] pan_tilt_port: Found 1 motors - 1 MX-28 [1], initialization complete.
[INFO] [WallTime: 1363864580.160889] pan_tilt_port controller_spawner: All services are up, spawning controllers...
[INFO] [WallTime: 1363864580.424991] Controller pan_controller successfully started.
[WARN] [WallTime: 1363864580.587375] The specified motor id is not connected and responding.
[WARN] [WallTime: 1363864580.592829] Available ids: [1]
[WARN] [WallTime: 1363864580.595967] Specified id: 2
[ERROR] [WallTime: 1363864580.599861] Initialization failed. Unable to start
controller tilt_controller
[dynamixel_controller_spawner1-3] process has finished cleanly
My config file looks like that:
from math import radians
port = '/dev/ttyUSB0'
servo_param = {
1: {'name': 'head_pan_joint',
'home_encoder': 1024,
'max_speed': radians(200),
'max_ang': radians(160),
'min_ang': radians(-160)
},
2: {'name': 'head_tilt_joint',
'home_encoder': 0,
'max_speed': radians(200),
'max_ang': radians(90.),
'min_ang': radians(-90.)
}
}
Thanks for your Help.
Originally posted by Bison on ROS Answers with karma: 52 on 2013-03-20
Post score: 1
Answer:
THE SOLUTION.
First of all thanks again for all those that answered my question.
We tried a Program under Windows to check if our servos were damaged. There we found out that there is an id preinstalled by Robotis for every motor. In both cases this id had the value 1. So every time we tried to get the motors online our controller found the first motor with id = 1 and added it to the list. But then there was no servo with the id = 2 as we indicated in the config-file.
With the dynamixel-windows-program we could change the motor ids so that we got the values we needed.
Originally posted by Bison with karma: 52 on 2013-03-20
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Victor_ocv2 on 2013-03-20:
Its better if you edit your question with this information | {
"domain": "robotics.stackexchange",
"id": 13455,
"tags": "ros, dynamixel, robotis"
} |
Dynamically adding a static transformation to tf | Question:
I have written an object in C++ which contains position and orientation information in member variables. These variables are not likely to change over the lifetime of the object. I am using these variables to perform my own hard-coded transforms between the world and the frame defined by each object.
I realize now that this is probably not the correct way to do things, and I should be using tf to do this work for me.
My question is how do I setup a static frame for each object when it is constructed?
I know I can create a static_transform_publisher node to do this, but I'm not too excited about creating a bunch of nodes during program operation that don't really do anything. It seems opposed to the ROS design philosophy.
Is there some object I can create that will do this for me?
Originally posted by Sebastian on ROS Answers with karma: 363 on 2015-06-22
Post score: 0
Answer:
multistatic_transform_publisher from lcsr_tf_tools can help with this. You can run a single transform publisher node, and use messages to tell it to set transforms.
Originally posted by Dan Lazewatsky with karma: 9115 on 2015-06-22
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 21982,
"tags": "ros, static-transform-publisher, roscpp, transform"
} |
What does it mean when PDB lists multiple organisms for the same structure? | Question: I'm browsing through ribosomes which have been modeled with Cryoem on PDB and am pretty confused by the fact that for some, PDB, or rather the authors of the deposition probably, put multiple organisms from different domains of life... How does one interpret that?
So far as I understand, Bos taurus is a cow and E. coli is a bacterium. So Eukarya and Bacteria... Thank you.
Answer: Since this is a 40S ribosomal protein, and therefore eukaryotic-specific (as the description says) the listing of E. coli is not intuitive as you note. If you go down to "Macromolecules" and click on "Nucleic Acids/Hybrids" you can see the listed E. coli sequence is just 28bp, and all Us (poly(U) tail?). So, you can safely ignore the bacterial annotation I think. | {
"domain": "bioinformatics.stackexchange",
"id": 1368,
"tags": "pdb"
} |
Astronomical data convertion from Jy/pixel to MJy/sr? | Question: I'm in the middle of collecting infrared data of various wavelengths. My problem is the following: SPIRE data (250µm) are in MJy/sr and PACS data (100µm) are in Jy/pixel. I would like to combine these data, but I'm a bit lost with these units... So I would have several questions:
How can we switch from Jy/pixel to MJy/sr?
Is "Jy/beam" equivalent to "Jy/deg^2" or "Jy/sr"?
Answer: For both conversions, you need a bit more information about the data.
How can we switch from Jy/pixel to MJy/sr? The Jy -> MJy part is just a factor of $10^{6}$ Jy/MJy. The pixel -> sr (steradian) part requires that you know the angular size of a pixel on the sky. For PACS it looks like the pixel size is different for different wavelengths; see Table 3.1. Once you know the pixel size in arcseconds, then the conversion is 206265 arcseconds / radian (or more precisely, 3600*180/pi). Square the size in radians to get the area in steradians. You also may be able to find the pixel size for your data by looking in the FITS header, if you have FITS images.
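The steps above can be sketched as follows; the 3.2-arcsecond pixel scale is an assumption for illustration — read the real value from your FITS header or the PACS documentation:

```python
import math

ARCSEC_PER_RAD = 3600.0 * 180.0 / math.pi    # ~206265 arcsec/radian

def jy_per_pixel_to_mjy_per_sr(value, pixel_scale_arcsec):
    """Convert Jy/pixel to MJy/sr for square pixels of the given scale."""
    pixel_area_sr = (pixel_scale_arcsec / ARCSEC_PER_RAD) ** 2
    return value / pixel_area_sr / 1e6        # Jy -> MJy, /pixel -> /sr

# e.g. a 3.2 arcsec pixel (hypothetical -- check your map's header):
print(jy_per_pixel_to_mjy_per_sr(1.0, 3.2))   # ~4.15e3 MJy/sr
```
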
Is "Jy/beam" equivalent to "Jy/deg^2" or "Jy/sr"? No, but you can convert between the two. Both are flux per angular area on the sky. You need to know the angular area of the telescope beam (e.g. in square degrees, or steradians) and then you can convert between the two.
The README for the ATLAS survey says this about the conversion in their SPIRE maps:
"To convert the maps from Jy/beam to Jy/pixel, in order to carry out aperture photometry,
the values in the maps should be divided by the ratio between the beam area and the pixel
area in arcsec^2 (469/36, 831/64 and 1804/144 at 250, 350 and 500 microns respectively).
The mean of the maps should also be subtracted. Users interested in the most accurate
aperture photometry of extended sources should consider scaling the SPIRE fluxes using the
K4 corrections for extended sources. They should also consider making an aperture
correction." | {
"domain": "astronomy.stackexchange",
"id": 4615,
"tags": "observational-astronomy, data-analysis"
} |
Stuck on thought experiment about light | Question: Say we have a very long fluid pipe with the width of a few astronomical units, and that this pipe is perfectly resistant to sustain the stress of a perfectly incompressible fluid going through it without collapsing.
Assume also that there are no external fields acting on this pipe, nor the fluid; the only force is a device which will apply a pressure source pushing this fluid in one end of the pipe. The pipe is filled with this fluid.
If I look at the other end of the pipe, is the fluid going to come out as soon as the device is activated on the other end? It sounds like it shouldn't be possible because it would be transmitting information faster than a pulse of light travelling across the length of the pipe, but I can't see what is absurd about it.
What's wrong with it?
Answer: As @JonCuster already pointed out in the comments, you are assuming an incompressible fluid. Under unrealistic assumptions you of course get unrealistic results. In reality, you will get a simple pressure wave traveling through your pipe at the speed of sound. | {
"domain": "physics.stackexchange",
"id": 94177,
"tags": "special-relativity, speed-of-light, causality, faster-than-light, thought-experiment"
} |
Potential of an axisymmetric disc with constant rotation velocity | Question: I am having trouble understanding why the form of the 3D potential for a disc with a constant rotation velocity for circular orbits of stars within the disc
\begin{equation}
v(R) = v_0, \tag{1}
\end{equation}
must be of the form
\begin{equation}
\Phi(r,z)=v_0^2 \ln{(r+|z|)},\tag{2}
\end{equation}
where $(r,\theta,\phi)$ are spherical co-ordinates and $(R,\theta,z)$ are cylindrical co-ordinates.
The definition of the potential $\Phi$ by Green (in terms of the point-mass Green's function) is
\begin{equation}
\Phi(\mathbf{x}) = -G \int{\frac{\rho({\mathbf{y}})}{|\mathbf{x}-\mathbf{y}|} \; d^3 \mathbf{y}}.\tag{3}
\end{equation}
And I have already worked out that the surface density is
\begin{equation}
\Sigma(R) = \frac{v_0^2}{2\pi G} \frac{1}{R} \delta(z),\tag{4}
\end{equation}
that is, the disc is infinitesimally thin.
Mathematically, I cannot see how this can possibly give a $z$-dependence, since the $\delta(z)$ knocks it out immediately! I can however see physically that the potential must depend on $z$ independently of $r$, since it should be axisymmetric, not spherically symmetric.
I would be grateful for some advice on this apparent discrepancy between the physics of the problem and its mathematical description.
Answer: Hints:
Note that the derivative of the sign function
$$ {\rm sgn}^{\prime}(z)~=~2\delta(z) \tag{A}$$
is twice the Dirac delta distribution. This fact seems to be at the heart of OP's question.
Repeated differentiations of the Mestel disk potential
$$\Phi~:=~ v_0^2 \ln(r+|z|), \qquad r~:=~\sqrt{R^2+z^2}, \tag{B}$$
leads to
$$\frac{\partial \Phi}{\partial z}~=~v_0^2\frac{{\rm sgn}(z)}{r},\tag{C}$$
$$\frac{\partial ^2\Phi}{\partial z^2}~=~-v_0^2\frac{|z|}{r^3}+\frac{2v_0^2}{R}\delta(z),\tag{D}$$
$$\frac{1}{R}\frac{\partial}{\partial R}R\frac{\partial\Phi}{\partial R}~=~\frac{v_0^2|z|}{r^3},\tag{E}$$
$$4\pi G \rho~=~\nabla^2\Phi~=~\frac{2v_0^2}{R}\delta(z).\tag{F}$$
The above calculations can be given rigorous meaning in distribution theory, i.e. with the help of test functions.
For a thin 2D disk, the mass density is
$$\rho~=~\Sigma \delta(z),\tag{G}$$
so that the surface density is
$$ \Sigma~\stackrel{(F)+(G)}{=}~\frac{v_0^2}{2\pi G R}.\tag{H}$$
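A quick numerical cross-check of eqs. (C)–(F): away from the $z=0$ plane the contributions (D) and (E) cancel, so the cylindrical Laplacian of the potential (B) should vanish there. A finite-difference sketch (stdlib only, with $v_0=1$ assumed):

```python
import math

def phi(R, z):
    # Mestel potential with v0 = 1: ln(r + |z|), r = sqrt(R^2 + z^2)
    return math.log(math.hypot(R, z) + abs(z))

def cyl_laplacian(f, R, z, h=1e-4):
    # axisymmetric Laplacian: f_RR + f_R / R + f_zz (central differences)
    f_RR = (f(R + h, z) + f(R - h, z) - 2.0 * f(R, z)) / h**2
    f_R = (f(R + h, z) - f(R - h, z)) / (2.0 * h)
    f_zz = (f(R, z + h) + f(R, z - h) - 2.0 * f(R, z)) / h**2
    return f_RR + f_R / R + f_zz

print(cyl_laplacian(phi, 1.0, 0.5))   # ~0: harmonic off the disk plane
```

The delta-function piece in (F) lives entirely on the $z=0$ plane, which a pointwise check like this cannot see — consistent with the distributional treatment above.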
References:
J. Binney & S. Tremaine, Galactic Dynamics, 2nd edition (2008); p. 99. | {
"domain": "physics.stackexchange",
"id": 31078,
"tags": "newtonian-gravity, mathematical-physics, astrophysics, galaxies, dirac-delta-distributions"
} |
What did Feynman want to mean by 'artificial energy'? | Question: I was reading Mechanical and electrical energies by Feynman when I came before this:
We have seen an analogous situation in electrostatics. We showed that the energy of a capacitor is equal to $Q^2/2C$. When we use the principle of virtual work to find the force between the plates of the capacitor, the change in energy is equal to $Q^2/2$ times the change in $1/C$. That is,
$$\Delta U=\frac{Q^2}{2}\,\Delta\left(\frac1C\right)=−\frac{Q^2}2 \frac{\Delta C}{C^2}.\tag{15.14}$$
Now suppose that we were to calculate the work done in moving two conductors subject to the different condition that the voltage between them is held constant. Then we can get the right answers for force from the principle of virtual work if we do something artificial. Since $Q=CV$, the real energy is $\frac12 CV^2$. But if we define an artificial energy equal to $−\frac12 CV^2$, then the principle of virtual work can be used to get forces by setting the change in the artificial energy equal to the mechanical work, provided that we insist that the voltage $V$ be held constant. Then
$$\Delta U_\textrm{mech}=\Delta\left(−\frac{CV^2}{2}\right)=−\frac{V^2}{2}\Delta C,\tag{15.15}$$
which is the same as Eq. $(15.14)$. We get the correct result even though we are neglecting the work done by the electrical system to keep the voltage constant. Again, this electrical energy is just twice as big as the mechanical energy and of the opposite sign.
I couldn't really comprehend what he wanted to convey in the above excerpt, especially the bold line.
What did Feynman mean by 'artificial energy'? That is to say, what is the need of introducing it here; what did he want to say in this paragraph?
Answer: He meant that the "energy" function
$$
E_\textrm{art}(C) = - \frac{1}{2}CV^2
$$
is introduced solely for the purpose of getting the right result with the "principle of virtual work". It is not really EM energy in the usual sense as energy stored in the capacitor, available for use. If there is potential difference $V$, the latter energy $E$ is actually always positive and equal to
$$
E = \frac{1}{2}CV^2.
$$ | {
"domain": "physics.stackexchange",
"id": 81723,
"tags": "electrostatics"
} |
Current in parallel circuit and short circuits | Question: I would like to ask a question which is partially addressed in this question:
Current in Parallel Circuits Since the voltage in parallel is constant, why is it that when there is a short circuit, no current flows through the second path? Shouldn't the current in that path be independent of the other short circuit path?
Answer: If you have an ideal short circuit whose resistance is zero there can be no voltage across the short circuit otherwise an infinite (impossible) current will flow. So if there is no voltage across the short circuit then there is no voltage across the other parallel component and so no current will flow through that component. But note that since we already know that we put a nonzero voltage across the circuit, this conclusion is not really meaningful because we already have a contradiction.
What we can conclude is that there is no ideal short circuit. In fact, you are correct in thinking that a current will flow through the parallel component, as a short circuit in real life has a finite resistance. Of course, that current would be much smaller than the current through the (non-ideal) short circuit. | {
"domain": "physics.stackexchange",
"id": 29102,
"tags": "electric-circuits, electric-current"
} |
3D scanner from Phone Camera | Question: 123D software can construct a 3D model from photos taken from your phone. It doesn't process the photos in your phone. Instead, it sends them to the cloud to create 3d model. How can i construct a 3d model like this (only with one camera)? I searched it but i can only find information on laser/procetor scanners (simple and desktop use only). I think 123D uses only IMU sensors and camera why do they use the cloud? Can a beaglebone or rasperry pi create 3d models like this?
Answer: Creating 3D models with this method is very compute intensive, 123d uses many pictures (at least 20), and examines them for feature points that are common in several pictures and by examining how they change between pictures it can help build up a 3ds point cloud which is then textured using the pictures, this is very resource intensive, and could be done by a beaglebone or raspberry pi, but it would take a very long time. If you wanted to make your own system to 3d scan objects with cameras it is soing to be a lot easier to have an array of them pre calibrated to work in sync to generate models. This is how many professional setups work. | {
"domain": "robotics.stackexchange",
"id": 708,
"tags": "computer-vision, 3d-printing, 3d-reconstruction, 3d-model"
} |
Meaning of "intensity" in the optical Kerr effect in optical quantum computation | Question: Kerr media, or mediums displaying the optical Kerr effect, are used in some optical quantum computers - on Chuang and Nielsen's Quantum Computing and Quantum Information, pgs. 289 - 290, it says,
Nonlinear optics provides one final useful component for this exercise: a material whose index of refraction $n$ is proportional to the total intensity $I$ of light going through it:
$$n(I) = n+n_2I$$
This is known as the optical Kerr effect [...]
Examining the Wikipedia article for the Kerr effect mentions three (!) different Kerr effects: the magneto-optic, electro-optic, and just plain old optical Kerr effect. Since the phrasing and equations match, I assume the book means the optical Kerr effect (also known as the AC Kerr effect).
The major difference between the optical/AC Kerr effect and the electro-optic/DC Kerr effect is for the DC version to work, one must manually apply the electric field to the medium, whereas for the AC version, the light going through produces the effect itself. So far, so good.
Now, the problem in my understanding arises when considering the term "intensity" ($I$) and what it means. When talking about optical quantum computers, we're talking about single photons, meaning the only way to vary the energy of the photon is by varying the wavelength (though this is a very small difference in energy).
Reading some papers, it talks about the light being of higher intensity to cause a larger Kerr effect...but it's a single photon. How can you vary intensity here, in a way large enough to really change the effect's significance?
Answer: As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density
$$
u=\frac{\hbar\omega}{V}
$$
is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) divided by something called the mode volume $V$. The mode volume is a crucial quantity but it will change from situation to situation, and it is essentially the volume occupied by the mode in question. For a cavity mode, a good approximation is the cavity length times the focal spot width, for a fiber it's the volume of the fiber, and so on. It will rarely be less than a cubic millimeter.
If you put those naive numbers in, you will get intensities that are some $22$ orders of magnitude weaker than the atomic unit of intensity, and that gives a good feeling for just how much work there is to do. A naive experiment where you just expect one photon to influence another single photon through a good old-fashioned Kerr-effect nonlinearity is simply not feasible.
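Those naive numbers can be sketched explicitly. A 2 eV photon and a 1 mm³ mode volume are assumptions here; the atomic unit of intensity is about $3.5\times10^{16}\ \mathrm{W/cm^2}$:

```python
import math

hbar_omega = 2.0 * 1.602e-19      # photon energy, J (assumed 2 eV)
V = 1e-9                          # mode volume, m^3 (assumed 1 mm^3)
c = 2.998e8                       # speed of light, m/s

I_photon = (hbar_omega / V) * c   # I = u c with u = hbar*omega / V
I_atomic = 3.51e16 * 1e4          # atomic unit of intensity, W/m^2

print(I_photon)                           # ~0.1 W/m^2
print(math.log10(I_atomic / I_photon))    # ~22 orders of magnitude
```
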
That's not to say that the goal is unreachable, and indeed there are many approaches that provide credible roadmaps to nonlinear photon-photon interactions (and interesting partial results in that direction), but they do involve nontrivial work on the interaction medium. Rococo's link on Rydberg-atom media is a good example, atoms near nano-photonic structures is a promising avenue (example), and so on, but none of the proposals is (to my knowledge) scalable enough that you can shoot for an interaction-based photonic quantum computers. Simply put, photons just don't interact with each other strongly enough that you can make two-photon gates with them.
And, as you pointed out in the comments, this leaves open a gap in terms of how the existing photonic quantum computers operate if you can't use any two-photon entangling gates. This is a nontrivial question, and the answer is that to do quantum computation you need entanglement between the constituent qubits, but you don't actually need to do that during 'runtime'.
Instead, the relevant model is called measurement-based quantum computing, and the idea is essentially that you start off with a highly entangled multi-qubit state as a resource, and then you perform a bunch of single-qubit gates and measurements (along with feed-forward so e.g. each measurement can influence what gates and measurements are implemented on the next qubit down the line) without any further entangling operations.
This then shifts the burden to the creation of the entangled state, but that can be done at the start. This is usually through spontaneous parametric down-conversion, plus a bunch of Bell-state measurements, so that's where the nonlinearity ends up: the probability is still low, but you can keep trying until you have a suitable state, and then you run on with the calculation.
For further details on this scheme, I'll refer you to this PhD thesis. | {
"domain": "physics.stackexchange",
"id": 41935,
"tags": "photons, quantum-optics, quantum-computer, intensity"
} |
Pre-commit hook to prevent large file commits | Question: I've written the below bash script to run as a pre-commit hook. The intention is to check the git staging area for any files larger than 1mb, and prevent the commit if any are present.
#!/bin/sh
too_big() {
bytez=$(cat "$(git rev-parse --show-toplevel)/$1" | wc -c)
if [ "$bytez" -gt 1000000 ] ; then
cat <<EOF
Error: Attempting to commit a file larger than approximately 1mb.
Commiting large files slows jenkins builds, clones, and other operations we'd rather not slow down.
Consider generating, downloading, zipping, etc these files.
Offending file - $1
EOF
exit 1
fi
}
# If you want to allow large files to be committed set this variable to true.
allowbigfiles=$(git config --bool hooks.allowbigfiles)
# Redirect output to stderr.
exec 1>&2
if [ "$allowbigfiles" != "true" ]
then
set -e
git diff --name-only --cached $1 | while read x; do too_big $x; done
fi
Edit:
The final script ended up as part of a library of client side Git Hooks
Answer: Although described as a Bash script, this appears to be a portable shell script that can be run by any POSIX-conformant shell. That's a good thing, as it means we can use a much smaller, leaner shell such as Dash.
If you haven't yet installed shellcheck, I recommend you do so (there's also a web version you can try). It highlights the following:
Useless cat here:
bytez=$(cat "$(git rev-parse --show-toplevel)/$1" | wc -c)
That can be simplified to
bytez=$(<"$(git rev-parse --show-toplevel)/$1" wc -c)
Unquoted expansion of $1 - we really wanted to write "$1" there.
Unsafe read x ought to be read -r x
$x is unquoted
Piping the file into wc isn't an efficient way to measure size of a file; we could simply use stat:
bytez=$(stat -c %s "$(git rev-parse --show-toplevel)/$1")
And instead of running git rev-parse for every file in the changeset, run it once and remember the value in a variable.
The error message should go to the standard error stream (I see the whole script is redirected to &2)
It's not obvious why set -e is right down inside the if - I'd normally put that immediately after the shebang.
Consider also set -u to help detect a likely cause of errors.
Spelling: unless you really mean "1 millibit", that should be "1MB".
A suggestion that might fall into the "too cute" category: since git config --bool always produces true or false as output, we can simply execute that as a command:
if ! $(git config --bool hooks.allowbigfiles)
then
Line-based reading (i.e. git diff --name-only | while read) isn't totally robust; there's a -z option provided to produce NUL-separated output. This will require Bash, though, in order to read -d.
Improved code
#!/bin/bash
set -e
too_big() {
if [ "$(stat -c %s "$toplevel/$1")" -gt 1000000 ] ; then
cat <<EOF
Error: Attempting to commit a file larger than approximately 1MB.
Committing large files slows Jenkins builds, clones, and other
operations we would rather not slow down.
Consider generating, downloading, zipping, etc these files.
Offending file - $1
EOF
exit 1
fi
}
# If you want to allow large files to be committed set this variable to true.
allowbigfiles=$(git config --bool hooks.allowbigfiles)
# Redirect output to stderr.
exec >&2
if ! "$allowbigfiles"
then
toplevel=$(git rev-parse --show-toplevel)
git diff --name-only -z --cached "$1" |
while IFS= read -d '' -r x; do too_big "$x"; done
fi | {
"domain": "codereview.stackexchange",
"id": 36108,
"tags": "bash, shell, git"
} |
What are some good practices to follow during EPIC DNA methylation data analysis? | Question: I recently got some EPIC DNA methylation data and I was wondering what are some good practices to follow?
I am interested in knowing about normalization and differential analysis. Thank you.
Answer: EPIC data can be processed in the same manner as the previous iteration of methylation array data from Illumina (450k). This means that starting with .idat files, normalization should be performed (for example, via the minfi package). A recent paper from the creators of minfi is particularly helpful because it makes clear that normalized EPIC data from their package can be immediately compared against, for example, level 3 TCGA data.
After that, I suggest using the manifest to attach genomic coordinates to your probes and segregating them into functional regions. By testing differential methylation in only regulatory regions, for example, you can increase the statistical power by reducing the overall number of tests to the ones you expect to yield major differences.
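As a rough illustration of the power argument (the probe counts below are hypothetical, chosen only for illustration): fewer tests mean a less severe multiple-testing correction, e.g. under a simple Bonferroni adjustment:

```python
# Illustrative sketch: restricting tests to a subset of probes relaxes the
# Bonferroni-corrected significance threshold. Probe counts are hypothetical.
alpha = 0.05
all_probes = 850_000         # roughly the EPIC array scale
regulatory_subset = 150_000  # hypothetical regulatory-region subset

threshold_all = alpha / all_probes
threshold_subset = alpha / regulatory_subset
print(threshold_subset > threshold_all)  # True: the subset threshold is easier to reach
```

The same reasoning applies to FDR-style corrections: the fewer hypotheses you test, the less stringent each individual test needs to be.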
There are existing packages out there for differential methylation analysis, but without knowing your replicate structure or aims, it is difficult to point you in the right direction. | {
"domain": "bioinformatics.stackexchange",
"id": 62,
"tags": "r, normalization, methylation, microarray"
} |
What are the steps to record and play with rosbag2? | Question:
Hi,
I have tried rosbag2 on the latest Crystal build and also ran the following play commands, but it gives me some errors. Please guide me through running rosbag2 with a simple example.
ros2 bag record -a
ros2 bag play rosbag2_2019_01_22-17_57_14/rosbag2_2019_01_22-17_57_14.db3
ros2 bag info rosbag2_2019_01_22-17_41_19/rosbag2_2019_01_22-17_41_19.db3
Error:
DISCLAIMER
ros2 bag is currently under development and not ready to use yet
[ERROR] [rosbag2_storage]: Failed to load metadata: Exception on parsing info file: bad file
[ERROR] [rosbag2_storage]: Could not open 'rosbag2_2019_01_22-18_03_08/rosbag2_2019_01_22-18_03_08.db3' with 'sqlite3'. Error: Failed to read from bag 'rosbag2_2019_01_22-18_03_08/rosbag2_2019_01_22-18_03_08.db3': No metadata found.
[ERROR] [rosbag2_storage]: Could not load/open plugin with storage id 'sqlite3'.
[ERROR] [rosbag2_transport]: Failed to play: No storage could be initialized. Abort
Originally posted by Gabbar on ROS Answers with karma: 49 on 2019-01-22
Post score: 2
Original comments
Comment by Technerd on 2019-05-28:
You should run ros2 bag play [folder name] and mention the folder that contains all rosbag files
(metadata.yaml, ...).
Answer:
The idea for rosbag2 is that you don't load the concrete db3 file, but specify the folder where your data is located.
Can you try to run
ros2 bag play rosbag2_2019_01_22-17_57_14
I believe this should help.
To give you a bit of background: the file you specified gets interpreted as the folder name, and within it rosbag2 looks for a metadata file telling it how to interpret the data. Obviously, this metadata file is not found.
I believe though the error messages are misleading. I think it makes sense to check whether the given file is indeed a directory or not and present the error message accordingly.
Feel free to open a ticket for this here: https://github.com/ros2/rosbag2
Originally posted by Karsten with karma: 643 on 2019-01-22
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by Gabbar on 2019-01-22:
Thanks !! I forgot to see little comment at the page.
One more question, when we play rosbag with melodic (ROS1). It requires roscore (central management) and In ROS2, We don't need roscore, right? We can use this command directly, Right?
Comment by Karsten on 2019-01-23:
that is correct. You can directly use the rosbag2 command.
Feel free to mark my answer as correct. | {
"domain": "robotics.stackexchange",
"id": 32312,
"tags": "ros2"
} |
Qubit ordering in qiskit | Question: I am confused about the qubit ordering in circuit diagrams and the endianness used in qiskit. As far as I understand, qiskit uses little endian (the least significant qubit is rightmost) and, while drawing circuits, qiskit plots the least significant qubit at the top. So we have the following table:
qubits | decimal representation | statevector
$|00\rangle$ | $|0\rangle$ | [1 0 0 0]
$|01\rangle$ | $|1\rangle$ | [0 1 0 0]
$|10\rangle$ | $|2\rangle$ | [0 0 1 0]
$|11\rangle$ | $|3\rangle$ | [0 0 0 1]
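This indexing can be reproduced without qiskit; in a minimal pure-Python sketch, qubit $k$ contributes $2^k$ to the basis-state index:

```python
# Pure-Python sketch (no qiskit): in little-endian ordering the statevector
# is kron(q1, q0), so qubit 0 varies fastest along the vector index.
def kron(a, b):
    # Kronecker product of two flat state vectors
    return [x * y for x in a for y in b]

zero, one = [1, 0], [0, 1]

# qubit 1 = |1>, qubit 0 = |0>  ->  |10>, i.e. amplitude 1 at index 2
print(kron(one, zero))  # [0, 0, 1, 0]
```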
But, when I use QFT on the 2-qubit state $|10\rangle$ (i.e. $|2\rangle$), the result I expect is $\frac{1}{2}\sum_{y=0}^{3}\exp\left(\frac{2\pi j\cdot 2y}{4}\right)|y\rangle = \frac{1}{2}(|0\rangle - |1\rangle + |2\rangle - |3\rangle)$, i.e. the statevector [0.5,-0.5,0.5,-0.5]; however the following code:
from qiskit import QuantumCircuit
from qiskit.circuit.library import QFT
from qiskit import Aer,execute
qc2 = QuantumCircuit(2)
qc2.x(1)
# prepare the state |10>
two_qbit_QFT_ckt = QFT(2)
qft_ckt_2 = qc2+two_qbit_QFT_ckt
# apply QFT on the state |10>
state_backend = Aer.get_backend('statevector_simulator')
qft_res_2 = execute(qft_ckt_2,state_backend).result().get_statevector()
print(qft_res_2)
outputs [0.5,-0.5,0.5j,-0.5j]. I believe there is some qubit ordering problem that I am not getting right, but I can't figure out what it is. I have also seen the following two questions, but they didn't help much.
Q1: Big Endian vs. Little Endian in Qiskit
Q2: qiskit: IQFT acting on subsystem of reversed-ordered qubits state
Can you please help me find the problem?
Answer: I looked at what you pointed out and I think I figured it out. So if you look at the general form of the circuit for the QFT you have this (from this book)
and if you compare with the circuit from the QFT class in Qiskit, you notice they are the same. However, if you look at the implementation they do here, notice they say « Note: Remember that Qiskit's least significant bit has the lowest index (0), thus the circuit will be mirrored through the horizontal in relation to the image in section 5. » and then they build the mirror of what we have in the first picture.
This seems to be a known issue since it has been pointed out here, but in the meantime I found two workarounds. The first one is to use the implementation they create in the Qiskit textbook, just take their functions and you have the QFT working with the little endian. The second one is to still use the QFT class but with some tricks. For example with the code you put:
from qiskit import QuantumCircuit
from qiskit.circuit.library import QFT
from qiskit import Aer,execute
qc2 = QuantumCircuit(2)
qc2.x(1)
# prepare the state |10>
two_qbit_QFT_ckt = QFT(2,do_swaps=False,inverse=True) #here are the changes
qft_ckt_2 = qc2+two_qbit_QFT_ckt
rev_qft_ckt_2 = qft_ckt_2.reverse_bits() #same as putting swaps in the end of the circuit
# apply QFT on the state |10>
state_backend = Aer.get_backend('statevector_simulator')
qft_res_2 = execute(rev_qft_ckt_2,state_backend).result().get_statevector()
print(qft_res_2)
You take the inverse QFT without the swaps, which actually gives the right QFT in Qiskit's notation, then add the swaps at the end yourself by reversing the qubits, and it gives you the result you want:
[ 0.5-6.123234e-17j -0.5+6.123234e-17j 0.5-6.123234e-17j
-0.5+6.123234e-17j]
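As a numerical sanity check (pure Python, independent of qiskit), the expected vector follows directly from the QFT definition:

```python
import cmath

# QFT|x> = (1/2) * sum over y of exp(2*pi*j*x*y/4) |y>, evaluated for x = 2
# on 2 qubits (dimension 4).
x, dim = 2, 4
amps = [cmath.exp(2j * cmath.pi * x * y / dim) / 2 for y in range(dim)]

print([round(a.real, 6) for a in amps])       # [0.5, -0.5, 0.5, -0.5]
print(all(abs(a.imag) < 1e-9 for a in amps))  # True: imaginary parts are round-off
```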
I hope this will help you; if you need anything else feel free to ask! :) | {
"domain": "quantumcomputing.stackexchange",
"id": 2316,
"tags": "programming, qiskit, quantum-fourier-transform"
} |
Generate a nonuniform illumination (bias field) 2D image | Question: I am creating a nonuniform illumination in MATLAB R2015. The nonuniform illumination (known as bias field) is defined as
Nonuniform Illumination (intensity inhomogeneity) which manifests itself as slow intensity variations in the same class over the image domain.
The non-linear degree of intensity inhomogeneity is indicated by the range of values of the bias field in the interval [1 − α, 1 + α] with α > 0
I used the below MATLAB code to generate the intensity inhomogeneity image in case of α = 0.2:
cols=256;rows=256;
alpha=0.2;
x = linspace(1-alpha,1+alpha,cols);
y = linspace(1-alpha,1+alpha,rows);
% Transform into a grid
[X,Y] = meshgrid(x,y);
bias=X;
imshow(bias,[])
However, I am confused about the term 'non-linear'. It means that the values in the range [1 − α, 1 + α] need to change slowly in a non-linear manner. In the above code, it seems that the values change linearly. Is this correct? Could you suggest a method to create an image based on the definition?
This is an expected example of the kind of bias-field image I need to obtain.
Answer: I provide an example for the generation of a nonuniform illumination using polynomials.
First, I take a vertical lineout (see first image) out of your last image, X being the pixel position and Y being the image intensity. For the horizontal direction I am using a simple parabola. Using the Kronecker product, the image is generated.
In order to ensure that the image intensity is between 1-alpha and 1+alpha, a stretching is implemented as proposed by @Carl_Witthoft.
See code and resulting image below:
N=3; % degree of polynomial fit
alpha=0.2;
c = polyfit(X,Y,N);
y_f = zeros(size(Y));
for i=0:N
y_f = y_f + c(N-i+1)*X.^i;
end
figure;
plot(X,Y, 'DisplayName', 'lineout'); hold on;
plot(X,y_f, 'DisplayName', 'fit'); legend show;
y_f = y_f./100;
xr = linspace(-4,4,150);
yr = 2-0.1*xr.^2;
img = kron(yr,y_f);
%% ensure that all values are between 1-\alpha and 1+\alpha
minV = min(img(:));
maxV = max(img(:));
mV = maxV-minV;
img = img-minV; % now from 0 to maxV-minV
img = img/(maxV-minV)*2*alpha; % now from 0 to 2*alpha
img = img + 1-alpha; % now from 1-alpha to 1+alpha
figure; imshow(img,[]) | {
"domain": "dsp.stackexchange",
"id": 4040,
"tags": "matlab, computer-vision, image-processing, image-segmentation"
} |
A web crawler in Python | Question: The crawler crawls for a set of keywords and saves the count in a database:
import re
import time
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
import os
import httplib2
#import Links
#import Keywords
import MySQLdb
import peewee
from peewee import *
from datetime import datetime
import argparse
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('crawler.log')
fh.setLevel(logging.DEBUG)
#ch = logging.StreamHandler()
#ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
#ch.setFormatter(formatter)
#logger.addHandler(ch)
fh.setFormatter(formatter)
logger.addHandler(fh)
parser = argparse.ArgumentParser()
parser.add_argument('-l', '--url', help="The base link to be crawled", required=True)
parser.add_argument('-k', '--keywords', help="Keywords to search", required=True)
args = parser.parse_args()
keywords = (args.keywords).split(',')
mapping = dict()
mapping[args.url] = keywords
logger.info(mapping)
db = MySQLDatabase('WebSpider', user='ruut', passwd='ruut')
parsed = set()
class DATA(peewee.Model):
    parent_link = peewee.CharField()
    sub_link = peewee.CharField()
    keyword = peewee.CharField()
    count = peewee.IntegerField()

    class Meta:
        database = db
        db_table = 'DATA'

def make_soup(s):
    match=re.compile('https://|http://')
    if re.search(match,s):
        try:
            http = httplib2.Http()
            status, response = http.request(s)
            page = BeautifulSoup(response,'lxml')
            return page
        except:
            return None
    else:
        return None

def get_list_of_urls(url):
    match = re.compile('(https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9]\.[^\s]{2,})')
    soup = make_soup(url)
    l = set()
    try:
        for a in soup.find_all('a'):
            try:
                if '?' not in a['href'] and re.search(match,a['href']) and re.search(re.compile(url),a['href']) and a['href']!=url:
                    l.add(str(a['href']))
            except Exception as e:
                logger.info('Exception ' + str(a)+' has no href')
                logger.info(e)
                continue
    except Exception as e:
        logger.info('Exception ' + url+' has no links')
        logger.info(e)
        pass
    return l

def get_all_the_urls(base,list_of_urls,depth):
    logger.info(depth)
    if depth == 10:
        return
    else:
        depth = depth + 1
        for i in list_of_urls: #scan the list of urls
            s = get_list_of_urls(i)
            get_all_the_urls(base,s,depth)
            for j in s: #scan the sublinks
                try:
                    if j in parsed:
                        continue
                    soup = make_soup(j)
                    logger.info('url is '+ j)
                    for k in mapping[base]: #look for keys on the webpage
                        key_count = len(soup(text=re.compile(k, re.IGNORECASE)))
                        logger.info('Key count is '+str(key_count))
                        if(key_count>0):
                            record = DATA(parent_link = base,sub_link = j ,keyword = k ,count = key_count) #i,j,k,key_count
                            record.save()
                    parsed.add(j)
                    logger.info('saved data successfully ' +str(key_count))
                except Exception as e:
                    logger.info('Exception ' +str(e)+' in keywords searching')
                    continue

def populate_db():
    k = set()
    k.add(args.url)
    temp = time.time()
    logger.info(str(datetime.now()))
    get_all_the_urls(args.url,k,0)
    logger.info('time taken '+str(time.time()-temp))

populate_db()
Answer: Some of the general things I would work on:
split the code into separate modules logically. Currently, you have all the code mixed up in a single file - argument parsing, database interactions, and web-scraping code blocks all in one place
consistent indentation. Use 4 spaces for indentation
variable naming. Use descriptive variable names. Variable names like l, i or j are not meaningful and raise questions when reading the code
Code Style
avoid handling broad exceptions with a bare except
remove unused imports and re-group them based on PEP8 recommendations
make sure to properly use whitespaces in expressions in statements
put the main execution logic of the program into the if __name__ == '__main__':
you don't need that pass in the get_list_of_urls() function
depth = depth + 1 could be shortened to depth += 1
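The `if __name__ == '__main__':` point can be sketched like this (the placeholder body is mine, standing in for the real crawl logic):

```python
# Guarding the entry point keeps the module importable without side effects:
# 'import crawler' in a test no longer kicks off a crawl.
def populate_db():
    return "crawl started"  # placeholder for the real crawl logic

if __name__ == '__main__':
    print(populate_db())  # runs only when executed as a script
```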
Performance
since you are requesting the pages from the same host multiple times, consider switching to requests making use of a single session instance which allows to re-use an underlying TCP connection making subsequent requests to the same host faster
importing SoupStrainer was actually a good idea. You can use it and scope the parsing to only the desired parts of the HTML
since you are using regular expressions checks here and there, consider pre-compiling them and using the compiled patterns for searching and matching | {
"domain": "codereview.stackexchange",
"id": 28713,
"tags": "python, web-scraping"
} |
What is the energy operator and from where do we get it? | Question: I am trying to learn quantum mechanics from the MIT OCW video lectures
on quantum mechanics. I have reached the 5th lecture. Please help me in understanding this:
In the middle of the lecture (at 32:08), the professor wrote that the
$$\displaystyle\text{Energy Operator}={\hat p^2\over2m}+V(\hat x).$$
Questions:
From where do we get this equation?
What is $V(\hat x)$ here?
Is $V(\hat x)\overset?=V(x)$?
Afterwards (at 1:15:46), the professor wrote
$$\hat E=i\hbar{\partial\over \partial t}.$$
So are there two energy operators?
Answer: The energy operator is obtained via the so-called correspondence principle. This means that one considers the classical expression for the total energy
$$\frac{p^2}{2m}+V(x)$$
and replaces the momentum and position variables (numbers in classical mechanics) by the momentum and position operators. $p^2/2m$ is the kinetic energy (it's just another way of writing $\frac{1}{2}mv^2$) and $V(x)$ is the potential energy.
$V$ is first of all just a function of its argument. If you write $V(x)$, you evaluate this function at (the number) $x$. If you write $V(\hat x)$, you replace the position (number) with the position operator, so the whole thing, $V(\hat x)$ is also an operator, specifically the potential energy operator.
This is how you obtain the energy operator $\hat E$ (also called Hamiltonian and thus conventionally written as $\hat H$) from the correspondence principle. This is more of an axiom of quantum mechanics, there is no inherent motivation. The idea is that in the classical limit, the results of classical physics, specifically the classical expression for the energy, should be retrievable.
Now, when you want a specific realization of the position and momentum operators on the Hilbert space the wave functions are going to live in, you replace $\hat x\psi(x)$ by $x\psi(x)$, i.e. the position operator acts on a wave function by multiplying it with $x$, and $\hat p\psi(x)$ with $-i\hbar\partial_x\psi(x)$. By the same token, the energy operator is written as $i\hbar\partial_t$. Note that the forms of the momentum operator and the energy operator are somewhat similar: they differ only by $\partial_x$ being replaced with $\partial_t$ (up to a sign). If you know Noether's theorem, you will be able to appreciate this fact.
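As a quick consistency check, apply both realizations to a plane wave $\psi(x,t)=e^{i(kx-\omega t)}$:
$$\hat p\,\psi=-i\hbar\,\partial_x e^{i(kx-\omega t)}=\hbar k\,\psi,\qquad \hat E\,\psi=i\hbar\,\partial_t e^{i(kx-\omega t)}=\hbar\omega\,\psi,$$
which recovers the de Broglie relation $p=\hbar k$ and the Planck relation $E=\hbar\omega$.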
Equating both forms of the energy operator gives the time-dependent Schrödinger equation:
$$i\hbar\partial_t\psi(x,t)=\hat H\psi(x,t)$$
where
$$\hat H = \frac{\hat{p}^2}{2m}+V(\hat x)$$ | {
"domain": "physics.stackexchange",
"id": 14670,
"tags": "quantum-mechanics, energy, operators, schroedinger-equation, hamiltonian"
} |
Degree of Unsaturation or Index of Hydrogen Deficiency | Question: I know the formula for the DOU, but don't really understand why the formula works. Could someone elaborate on that for me?
Answer: The formula for calculating the Degree of Unsaturation (DU) is awkward. The derivation will come later. There is a better way to address this issue by what I shall call the Atom Replacement Method (ARM). The gist of the method is to replace all heteroatoms with C and/or H. This revised formula is compared with the most saturated hydrocarbon bearing the same number of carbon atoms. As an example, consider the molecular formula of the marine dye Tyrian purple: $\ce{C16H8Br2N2O2}$.
Replace halogens with the same number of hydrogens and ignore divalent atoms: $\ce{C16H8Br2N2O2}$ --> $\ce{C16H10N2}$
Replace each nitrogen with one carbon and one hydrogen: $\ce{C16H10N2}$ --> $\ce{C18H12}$ (1)
The most saturated $\ce{C18}$ hydrocarbon is $\ce{C18H38}$ (2)
Subtracting (1) from (2) gives a difference of 26 hydrogens.
DU=26/2=13. There are 4 rings and 9 double bonds in Tyrian purple.
The following derivation of the formula is located here. Here is a synopsis.
Let $c$ = #C, $h$ = #H, $n$ = #N and $x$ = #X, where $x$ is the number of halogens (Br in the example).
Ignoring divalent atoms and replacing $\mathrm{X}_x$ with $\mathrm{H}_x$ in $\mathrm{C}_c\mathrm{H}_h\mathrm{N}_n\mathrm{X}_x$
gives $\mathrm{C}_c\mathrm{H}_{h+x}\mathrm{N}_n$
Replacing the $n$ nitrogens with $n$ carbons and $n$ hydrogens yields $\mathrm{C}_{c+n}\mathrm{H}_{h+x+n}$ (1)
The most saturated hydrocarbon is $\mathrm{C}_{c+n}\mathrm{H}_{2(c+n)+2}$ (2)
Subtracting (1) from (2) and simplifying gives $2c+2n+2-h-n-x$
Further simplification affords $2c+n+2-h-x$
Division by 2 gives
$$\mathrm{DU} = c+1+\frac{n-h-x}{2}$$
Q.E.D.
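The final formula is simple enough to check in a few lines of Python (the function name is mine):

```python
# DU = c + 1 + (n - h - x)/2, with O and other divalent atoms ignored.
def degree_of_unsaturation(c, h, n=0, x=0):
    return c + 1 + (n - h - x) / 2

# Tyrian purple, C16H8Br2N2O2: 4 rings + 9 double bonds = 13
print(degree_of_unsaturation(c=16, h=8, n=2, x=2))  # 13.0
```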
For related discussions on ChemSE, see this link and this one. | {
"domain": "chemistry.stackexchange",
"id": 17343,
"tags": "organic-chemistry"
} |
Atom searcher (basically a file search function) | Question: I apologize for the title, really didn't know what to call this program. In short, the program takes a file of values for the various atoms of amino acids, and then searches this file based on user input. I'm basically looking for any input on how to improve my script. I have a bad habit of using nested loops, splitting all the time, and poor naming. So any type of feedback on my code would be highly appreciated!
The file is a csv file that contains various information:
comp_id,atom_id,count,min,max,avg,std
ALA,H,86795,-0.914,69.229,8.193,0.641,488
ALA,HA,58922,-2.52,17.870,4.244,0.443,1135
ALA,MB,56709,-14.040,5.48,1.352,0.280,1024
ALA,C,55999,0.037,187.2,177.728,3.776,40
ALA,CA,76797,17.007,354.698,53.166,2.773,88
ALA,CB,72862,-40.993,318.868,19.052,3.066,200
ALA,N,82913,0.049,766,123.353,6.027,93
ARG,H,57814,0.011,178,8.241,1.052,36
ARG,HA,40349,1.212,12.57,4.289,0.469,471
....
VAL,CG2,43052,-5.648,320.420,21.346,2.531,92
VAL,N,75697,0.2,529,121.146,7.361,82
There are various amino acids (e.g. ALA, ARG, VAL), each has various different types of atoms (N,HA,CA,etc.). What I care about however is purely the Carbon atoms, and their attached Hydrogen (e.g. CA and HA,CB and MB, etc.). Specifically, the avg and std values (e.g. 8.193 and 0.641). The user can input their own carbon and hydrogen values, to see what amino acid it matches up with. Think of it as coordinates, you put in the latitude and longitude values, and it gives you the location. Since the 2 go together, both the Carbon and Hydrogen must match to get a printout (again, like latitude and longitude). So practice example:
#user inputs 52 and 4, they get a printout
ALA CA 53.166 2.773 ALA HA 4.244 0.443
Since 52 falls within 53.166+/-2.77 and 4 falls within 4.244+/-0.443, these coordinates designate ALA.
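In code form, this matching rule is just two open-interval tests (a minimal sketch; the helper name is illustrative):

```python
# A (carbon, hydrogen) input designates a residue when each value lies
# within avg +/- std of a paired C/H atom of that residue.
def within(value, avg, std):
    return avg - std < value < avg + std

# ALA example from above: CA 53.166 +/- 2.773 and HA 4.244 +/- 0.443
print(within(52, 53.166, 2.773) and within(4, 4.244, 0.443))  # True
```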
I've also added an additional 'High error' printout. Sometimes you get a match because the error is so high, it has a massive range. For these values, the range probably doesn't mean too much (still valuable info, but wanted the user to know if they got a match due to a high std). I chose 25% of the average as the definition for high error.
Finally, thought I'd also mention this since you might notice in my script there is a specific conditional on 'VALN'. This is because the way I determine whether we've moved on to another amino acid is by checking the current looped value against the previous one. However, when we reach the end of the file, the current value will be the same as the final value (and consequently, that amino acid's lists would not get checked/printed). This is my "hackish" way of resolving the issue.
This is what I came up with:
def search_fun(carbon,hydrogen):
    """
    This will go through each amino acid, and check its carbon and hydrogen coordinates.
    If they are within the user inputed range, it will store these in the lists.
    Upon completing an amino acid, it will then go through all the matches, and print them out accordingly"""
    residue_list=[]
    carbon_list=[]
    hydrogen_list=[]
    with open('bmrb.csv') as file:
        for lines in file:
            if lines == '\n':
                continue
            split_lines=lines.split(',')
            residue=split_lines[0]
            if residue == 'comp_id':
                continue
            residue_list.append(residue)
            atom=split_lines[1]
            chemical_shift=float(split_lines[5])
            std=float(split_lines[6])
            lower_half=chemical_shift-std
            upper_half=chemical_shift+std
            if residue_list[0] != residue or (residue+atom) == 'VALN':
                if len(carbon_list) >= 1 and len(hydrogen_list) >= 1:
                    for values in carbon_list:
                        split_carbon=values.split()
                        for values2 in hydrogen_list:
                            split_hydrogen=values2.split()
                            if split_hydrogen[1][1] == split_carbon[1][1]:
                                if float(split_carbon[3]) > (0.25*float(split_carbon[2])) or float(split_hydrogen[3]) > (0.25*float(split_hydrogen[2])):
                                    print(f'{values} {values2} HIGH ERROR')
                                else:
                                    print(values,values2)
                    carbon_list.clear()
                    hydrogen_list.clear()
                else:
                    carbon_list.clear()
                    hydrogen_list.clear()
                residue_list.clear()
                residue_list.append(residue)
            if carbon>lower_half and carbon<upper_half:
                carbon_list.append(f'{residue} {atom} {chemical_shift} {std}')
            if hydrogen>lower_half and hydrogen<upper_half:
                hydrogen_list.append(f'{residue} {atom} {chemical_shift} {std}')

def main_loop():
    while True:
        question=input('input carbon and hydrogen values: ')
        split_question=question.split()
        search_fun(float(split_question[0]),float(split_question[1]))
        print('\n\n\n')

main_loop()
This is a test run of the output you should get using the above code and below csv file:
input carbon and hydrogen values: 42 3.2
ARG CD 43.201 2.938 ARG HD2 3.107 0.266
ARG CD 43.201 2.938 ARG HD3 3.091 0.285
ASP CB 40.895 2.563 ASP HB2 2.716 0.511
PHE CB 39.955 3.611 PHE HB2 2.992 0.381
PHE CB 39.955 3.611 PHE HB3 2.934 0.399
TYR CB 39.307 3.133 TYR HB2 2.898 0.466
TYR CB 39.307 3.133 TYR HB3 2.833 0.483
Here is the entire csv file:
comp_id,atom_id,count,min,max,avg,std
ALA,H,86795,-0.914,69.229,8.193,0.641,488
ALA,HA,58922,-2.52,17.870,4.244,0.443,1135
ALA,MB,56709,-14.040,5.48,1.352,0.280,1024
ALA,C,55999,0.037,187.2,177.728,3.776,40
ALA,CA,76797,17.007,354.698,53.166,2.773,88
ALA,CB,72862,-40.993,318.868,19.052,3.066,200
ALA,N,82913,0.049,766,123.353,6.027,93
ARG,H,57814,0.011,178,8.241,1.052,36
ARG,HA,40349,1.212,12.57,4.289,0.469,471
ARG,HB2,36605,-4.78,27.530,1.790,0.310,470
ARG,HB3,34641,-1.320,27.530,1.759,0.322,500
ARG,HD2,32127,-6.44,5.0,3.107,0.266,638
ARG,HD3,29287,-0.690,5.0,3.091,0.285,615
ARG,HE,10898,1.150,116.661,7.450,2.838,7
ARG,HG2,32714,-1.45,4.2,1.559,0.284,597
ARG,HG3,30376,-1.298,5.47,1.539,0.298,621
ARG,HH11,971,4.41,11.7,6.938,0.576,22
ARG,HH12,740,4.41,10.727,6.881,0.543,17
ARG,HH21,833,1.233,11.352,6.825,0.652,19
ARG,HH22,685,1.233,60.1410,6.905,2.136,1
ARG,C,35275,0.174,184.96,176.415,3.365,13
ARG,CA,49856,8.369,358.124,56.782,3.345,57
ARG,CB,46468,16.52,329.120,30.695,2.515,125
ARG,CD,27783,18.9350,342.642,43.201,2.938,46
ARG,CG,27535,12.17,328.290,27.260,3.041,42
ARG,CZ,743,43.199,184.497,160.136,7.440,8
ARG,N,53676,0.125,433.808,120.816,4.763,83
ARG,NE,6869,-23.150,149.1080,90.097,13.747,53
ARG,NH1,283,6.450,124.7890,78.516,13.368,6
ARG,NH2,248,66.2,128.470,78.360,13.933,7
ASN,H,47608,0.008,121.370,8.331,0.974,128
ASN,HA,33194,0.896,7.110,4.661,0.362,460
ASN,HB2,31112,-0.827,8.883,2.800,0.335,492
ASN,HB3,30047,-0.948,5.806,2.742,0.359,506
ASN,HD21,23425,0.783,111.320,7.337,0.850,48
ASN,HD22,23159,0.905,111.320,7.144,0.867,109
ASN,C,29727,0.114,185.3000,175.215,3.563,17
ASN,CA,41894,2.200,354.022,53.547,3.517,28
ASN,CB,39745,1.9620,342.798,38.727,3.598,45
ASN,CG,2689,0.000,185.503,176.229,8.760,11
ASN,N,44735,0.041,426.314,118.930,5.122,29
ASN,ND2,20306,21.038,1114.29,112.908,12.638,11
ASP,H,68763,-0.35,25.876,8.300,0.590,571
ASP,HA,46632,-3.75,8.66,4.585,0.327,680
ASP,HB2,43472,-5.2,37.4,2.716,0.511,75
ASP,HB3,41794,-1.46,37.2,2.667,0.518,100
ASP,HD2,18,1.160,12.30,5.991,3.334,0
ASP,C,43696,0.106,184.14,176.361,3.568,24
ASP,CA,60457,5.630,354.531,54.690,2.720,67
ASP,CB,57295,9.7,341.273,40.895,2.563,146
ASP,CG,963,2.637,188.215,177.196,18.089,13
ASP,N,66001,0.061,428.093,120.699,4.642,95
CYS,H,23821,3.723,12.660,8.380,0.695,148
CYS,HA,19401,-9.858,43.5,4.680,0.976,58
CYS,HB2,18672,-39.82,363.580,3.134,6.357,41
CYS,HB3,18201,-44.2,363.580,3.055,5.762,43
CYS,HG,254,-1.830,10.700,2.029,1.353,4
CYS,C,11404,1.000,187.591,174.775,3.469,10
CYS,CA,17149,30.6688,82.3,58.022,3.462,20
CYS,CB,16356,17.99,73.920,33.377,6.523,18
CYS,N,18895,-147,628,120.438,18.215,82
GLN,H,48881,0.000,66.542,8.216,0.653,231
GLN,HA,33387,0.403,7.43,4.264,0.432,551
GLN,HB2,30357,-1.514,10.461,2.043,0.276,415
GLN,HB3,28935,-1.4980,20.9,2.013,0.326,349
GLN,HE21,21428,-3.41,23.893,7.219,0.497,188
GLN,HE22,21310,1.025,113.695,7.036,0.879,29
GLN,HG2,28356,-1.76,33.5990,2.314,0.338,327
GLN,HG3,26350,-1.395,34.946,2.293,0.361,357
GLN,C,31356,0.069,1755.998,176.338,9.609,13
GLN,CA,43483,1.733,356.830,56.562,2.640,46
GLN,CB,40787,1.843,328.286,29.194,2.533,126
GLN,CD,2616,6.789,190.624,179.292,7.623,7
GLN,CG,25210,2.097,333.032,33.807,2.562,41
GLN,N,46869,0.000,418.059,119.962,4.176,126
GLN,NE2,19322,33.9,412.160,111.882,2.985,60
GLU,H,89195,0.008,122.9,8.330,0.743,322
GLU,HA,60909,0.433,8.02,4.242,0.413,1077
GLU,HB2,55127,-1.470,4.82,2.018,0.222,781
GLU,HB3,51907,-1.633,8.095,1.994,0.228,751
GLU,HE2,18,0.801,11.96,4.709,2.604,0
GLU,HG2,50906,-0.674,4.69,2.264,0.222,837
GLU,HG3,47453,-0.10,4.69,2.245,0.224,767
GLU,C,57652,0.074,184.71,176.828,4.280,40
GLU,CA,78638,1.056,360.826,57.327,3.270,75
GLU,CB,73549,9.08,330.834,30.019,3.150,117
GLU,CD,1013,0.000,198.609,181.090,14.839,8
GLU,CG,45672,6.16,337.230,36.143,2.948,64
GLU,N,85881,0.044,422.043,120.721,4.689,112
GLY,H,86072,-15.3,121.881,8.327,0.765,735
GLY,HA2,58056,-3.4,8.64,3.961,0.399,937
GLY,HA3,55297,-3.936,43.9930,3.888,0.439,773
GLY,C,54280,1.000,189.533,173.834,3.426,55
GLY,CA,76239,2.200,344.994,45.377,2.219,169
GLY,N,81099,0.2,791,109.680,7.053,192
HIS,H,24445,-0.3,13.34,8.256,0.733,261
HIS,HA,17566,0.676,11.38,4.617,0.565,230
HIS,HB2,16391,-2.168,45.897,3.159,1.118,129
HIS,HB3,15940,-6.2,38.5,3.100,1.087,138
HIS,HD1,1018,-15,86.5,9.987,8.570,23
HIS,HD2,11621,-25.85,67.8,7.148,3.262,90
HIS,HE1,9143,-26.6,134.811,7.831,2.535,63
HIS,HE2,388,-15,76.4,11.107,7.896,11
HIS,C,15093,1.000,184.204,175.133,4.716,15
HIS,CA,21851,11.40,355.084,56.521,3.407,62
HIS,CB,20513,13.496,329.046,30.324,3.186,56
HIS,CD2,7547,7.19,159.946,119.910,5.680,49
HIS,CE1,5913,8.198,166.282,137.244,5.712,55
HIS,CG,270,18.669,139.83,131.179,9.513,3
HIS,N,22875,0.2,427.146,119.658,5.239,41
HIS,ND1,816,31.026,261.013,193.109,32.573,2
HIS,NE2,754,17.0,257.572,180.840,20.342,20
ILE,H,59946,0.008,11.871,8.264,0.692,293
ILE,HA,41048,-9.0,173.538,4.167,1.009,7
ILE,HB,38633,-2.442,38.700,1.783,0.399,210
ILE,HG12,35114,-10.1,5.56,1.263,0.453,270
ILE,HG13,33779,-10.1,9.71,1.192,0.485,250
ILE,MD,38936,-4.15,13.891,0.671,0.332,621
ILE,MG,36922,-3.919,6.23,0.768,0.306,577
ILE,C,38288,0,187.551,175.800,4.524,29
ILE,CA,53038,20.877,362.184,61.623,3.359,62
ILE,CB,49504,-34.477,339.785,38.583,2.926,83
ILE,CD1,35029,2.7,314.600,13.505,3.480,110
ILE,CG1,31261,8.0,329.288,27.757,3.344,137
ILE,CG2,33140,0.79,317.615,17.608,3.243,97
ILE,N,57362,0.0000,531,121.425,6.042,89
LEU,H,99282,-0.3,13.220,8.219,0.651,501
LEU,HA,67703,0.000,119.411,4.303,0.644,70
LEU,HB2,62221,-1.522,8.02,1.607,0.360,803
LEU,HB3,59729,-1.79,8.39,1.523,0.376,865
LEU,HG,55123,-2.08,5.7,1.502,0.348,672
LEU,MD1,63101,-3.42,30.176,0.748,0.331,965
LEU,MD2,60780,-3.42,24.504,0.727,0.358,774
LEU,C,63540,0.071,189.78,176.991,3.682,29
LEU,CA,87816,1.056,158.320,55.653,2.236,189
LEU,CB,82155,7.439,93.180,42.248,2.020,527
LEU,CD1,54890,0.683,120.700,24.674,2.047,209
LEU,CD2,52489,0.280,116.300,24.119,2.125,161
LEU,CG,48288,0.000,75.280,26.805,1.494,354
LEU,N,94665,0.044,627,121.959,7.753,70
LYS,H,84117,0.002,64.423,8.175,0.668,498
LYS,HA,58613,-0.118,32.650,4.258,0.457,643
LYS,HB2,52752,-1.416,10.94,1.774,0.266,854
LYS,HB3,49716,-3.038,9.43,1.746,0.283,821
LYS,HD2,42396,-1.6800,119.620,1.607,0.643,29
LYS,HD3,38017,-2.02,29.047,1.595,0.272,557
LYS,HE2,41666,-0.493,42.02,2.911,0.289,457
LYS,HE3,36694,-0.046,7.344,2.903,0.223,782
LYS,HG2,47718,-1.654,6.7,1.363,0.272,978
LYS,HG3,44019,-1.83,5.575,1.348,0.283,923
LYS,C,51474,0.112,996.253,176.614,5.736,38
LYS,CA,71777,1.155,359.222,56.949,3.205,71
LYS,CB,67058,-26.686,332.988,32.791,2.923,94
LYS,CD,38624,0.834,329.284,28.997,2.640,75
LYS,CE,37258,-0.130,342.334,41.926,3.045,68
LYS,CG,40990,12.109,325.487,24.960,3.133,95
LYS,N,78570,0.041,427.245,121.038,4.691,124
LYS,NZ,303,1.950,177.2,51.816,33.019,2
LYS,QZ,1617,-10.9,10.506,7.339,1.046,44
MET,H,23446,-0.21,177,8.257,1.261,15
MET,HA,16662,-0.93,313.565,4.410,2.443,1
MET,HB2,14928,-27.312,33.750,2.024,0.583,84
MET,HB3,14085,-27.312,12.94,1.995,0.522,104
MET,HG2,13710,-33.86,32.7,2.376,1.463,44
MET,HG3,12981,-33.86,31.7,2.350,1.575,48
MET,ME,10583,-24.86,10.2000,1.773,1.563,79
MET,C,15432,2.200,183.25,176.200,3.324,5
MET,CA,21816,25.7283,85.327,56.149,2.289,59
MET,CB,20187,0.2,332.173,32.973,3.219,49
MET,CE,9592,0.000,317.645,17.254,4.252,53
MET,CG,11803,2.30,332.686,32.077,3.243,28
MET,N,22664,0.000,428.252,120.054,4.996,36
PHE,H,42717,-0.5,12.1759,8.337,0.731,262
PHE,HA,28990,1.33,59.70,4.618,0.727,23
PHE,HB2,27036,-0.463,7.979,2.992,0.381,371
PHE,HB3,26376,-0.212,12.72,2.934,0.399,389
PHE,HD1,22740,0.603,12.154,7.037,0.399,217
PHE,HD2,19220,0.603,12.154,7.038,0.412,194
PHE,HE1,19877,-2.838,14.080,7.062,0.453,167
PHE,HE2,16994,0,12.9,7.060,0.448,158
PHE,HZ,13928,-7.14,43.623,6.993,0.719,115
PHE,C,26768,0.088,184.929,175.449,3.069,9
PHE,CA,37271,4.917,363.618,58.107,3.822,36
PHE,CB,34997,2.161,341.700,39.955,3.611,44
PHE,CD1,13641,7.160,143.4500,131.172,5.998,70
PHE,CD2,9678,7.160,140.309,131.324,4.575,35
PHE,CE1,11887,0.000,149.609,130.316,5.835,61
PHE,CE2,8420,7.472,149.609,130.527,4.030,35
PHE,CG,421,7.229,152.844,137.247,11.620,4
PHE,CZ,8840,7.351,165.611,129.016,4.185,31
PHE,N,40480,0.067,422.843,120.393,5.461,51
PRO,H2,5,8.070,9.673,8.756,0.710,0
PRO,HA,33161,0.636,135.80,4.388,0.803,43
PRO,HB2,30818,-1.501,5.63,2.069,0.371,536
PRO,HB3,29932,-3.48,6.10,1.996,0.382,558
PRO,HD2,28519,-6.56,7.67,3.636,0.447,423
PRO,HD3,27539,-6.56,8.865,3.602,0.469,496
PRO,HG2,27730,-2.35,7.395,1.918,0.342,667
PRO,HG3,25811,-1.520,4.92,1.894,0.351,627
PRO,C,28640,0,183.517,176.630,4.386,30
PRO,CA,41044,0,363.087,63.330,3.613,80
PRO,CB,38296,0,333.586,31.887,3.162,71
PRO,CD,25032,1.155,350.648,50.343,3.214,61
PRO,CG,24932,2.436,327.402,27.277,3.727,44
PRO,N,2050,3.566,430,134.575,24.897,37
SER,H,72252,-15.3,116.95709,8.278,0.723,290
SER,HA,50558,1.277,58.739,4.477,0.475,421
SER,HB2,46319,0.61,9.182,3.867,0.278,725
SER,HB3,43053,0.61,41.7,3.843,0.343,503
SER,HG,924,0.13,11.36,5.422,1.193,23
SER,C,46531,0.000,197.1,174.589,3.254,32
SER,CA,65467,4.331,361.278,58.694,2.805,70
SER,CB,60788,-939.2800,365.087,63.723,4.984,170
SER,N,68552,0.000,416.964,116.292,4.253,189
THR,H,64336,0.02,21.7,8.233,0.640,534
THR,HA,44303,0.87,7.468,4.451,0.479,264
THR,HB,40659,0.087,71.587,4.168,0.655,78
THR,HG1,1629,-1.783,11.01,5.212,1.402,39
THR,MG,40565,-12.1,16.3,1.138,0.279,510
THR,C,40395,4.780,185.918,174.456,4.070,35
THR,CA,56552,0.971,92.659,62.210,2.759,104
THR,CB,52562,-939.2800,629.206,69.590,5.649,162
THR,CG2,34435,7.177,175.6,21.595,1.917,112
THR,N,61259,0.0,402,115.403,6.323,64
TRP,H,14089,3.421,17.315,8.269,0.781,92
TRP,HA,9794,2.043,11.414,4.678,0.534,77
TRP,HB2,9273,0.42,5.35,3.179,0.350,143
TRP,HB3,9017,-0.3776,7.972,3.116,0.372,137
TRP,HD1,8273,1.880,10.75,7.128,0.363,126
TRP,HE1,9199,-1.279,131.711,10.094,1.445,37
TRP,HE3,7185,1.85,12.233,7.299,0.525,128
TRP,HH2,7126,2.84,10.900,6.952,0.455,111
TRP,HZ2,7765,2.63,10.81,7.267,0.412,115
TRP,HZ3,6927,0.76,8.898,6.848,0.472,92
TRP,C,8460,2.500,184.30,175.973,6.049,12
TRP,CA,11894,2.966,362.099,57.713,4.800,12
TRP,CB,11102,1.6,328.795,30.089,4.784,23
TRP,CD1,5274,30.236,183.141,126.325,4.470,23
TRP,CD2,188,1.578,155.174,127.130,13.071,2
TRP,CE2,248,56.4176,177.710,137.535,9.569,6
TRP,CE3,4409,-10.872,174.807,120.173,5.545,29
TRP,CG,259,4.174,116.526,110.100,9.006,2
TRP,CH2,4655,-6.333,160.818,123.539,5.024,22
TRP,CZ2,5025,7.107,159.041,114.037,4.609,30
TRP,CZ3,4434,-8.702,161.540,121.151,4.660,22
TRP,N,12864,6.712,423.160,121.648,6.026,13
TRP,NE1,7540,0.53,435.960,129.269,6.295,31
TYR,H,36554,0.02,12.34,8.294,0.739,180
TYR,HA,25016,0.442,7.160,4.609,0.563,203
TYR,HB2,23316,-21.230,23.28,2.898,0.466,195
TYR,HB3,22790,-21.230,23.28,2.833,0.483,237
TYR,HD1,20167,0.190,10.5,6.920,0.373,237
TYR,HD2,17229,0.5522,10.499,6.916,0.377,211
TYR,HE1,19125,0.08,11.8,6.690,0.309,160
TYR,HE2,16443,0.43,11.7,6.690,0.320,147
TYR,HH,442,-0.788,31,9.103,2.096,5
TYR,C,22274,2.200,184.78,175.368,4.700,22
TYR,CA,31109,2.200,357.681,58.144,3.099,25
TYR,CB,28911,18.38,338.686,39.307,3.133,43
TYR,CD1,12301,19.589,141.572,132.361,5.290,65
TYR,CD2,8449,3.492,139.644,132.362,5.325,48
TYR,CE1,12085,40.435,182.764,117.730,4.101,109
TYR,CE2,8324,34.1221,154.10,117.772,3.349,68
TYR,CG,390,7.113,175.115,128.143,12.323,6
TYR,CZ,287,6.839,165.718,155.511,13.729,3
TYR,N,34074,0.2,818,120.749,11.899,35
VAL,H,78671,-0.41,120.980,8.271,0.790,168
VAL,HA,53950,-2.83,54.971,4.168,0.629,126
VAL,HB,50358,-27.480,31.75,1.979,0.450,389
VAL,MG1,50627,-27.2,24.20,0.819,0.333,562
VAL,MG2,49730,-27.2,56.56,0.801,0.431,245
VAL,C,50693,1,205.699,175.631,3.413,28
VAL,CA,69771,20.668,362.057,62.496,3.197,101
VAL,CB,64788,15.597025,331.747,32.716,2.289,140
VAL,CG1,44602,-7.4,321.185,21.547,2.434,90
VAL,CG2,43052,-5.648,320.420,21.346,2.531,92
VAL,N,75697,0.2,529,121.146,7.361,82
```
Answer: Simplify code!
with open('bmrb.csv') as file: followed by for lines in file: can be simplified into
for lines in open("bmrb.csv").readlines():
with the change above you can completely remove the if (lines == '\n') check
Use Enum for clarity
split_lines[0], split_lines[1]. 0 and 1 are called magic numbers.
A magic number is a numeric literal (for example, 8080, 2048) that is used in the middle of a block of code without explanation. It is considered good practice to avoid magic numbers by assigning the numbers to named constants and using the named constants instead.
Instead what if you made an Enum called Data and named those constants?
Enums in Python
from enum import Enum
class Data(Enum):
    residue = 0
    atom = 1
    # the rest of the elements (values start at 0 so they can be used directly as list indices)
Now when you want to refer to the atom column, you can simply do split_lines[Data.atom.value]. It is a little more typing, but it is also clearer as to what you mean from that line.
This also means you can remove the creation of copies. Not to create a new variable residue but just split_lines[Data.residue.value]
Format your code
if you write x = y + 65 instead of x=y+65, and x = float(y) instead of x=float(y), your code becomes much more readable
More simplification
question=input('input carbon and hydrogen values: ')
split_question=question.split()
search_fun(float(split_question[0]),float(split_question[1]))
becomes
carbon, hydrogen = map(float,input("Enter carbon and hydrogen values: ").split())
search_fun(carbon, hydrogen)
Split work into functions
you have this line
if float(split_carbon[3]) > (0.25*float(split_carbon[2])) or float(split_hydrogen[3]) > (0.25*float(split_hydrogen[2])):
print(f'{values} {values2} HIGH ERROR')
Give a meaningful name to a new function where it would take in the various args and return True or False based on the formula. This way you can get rid of a lot of clunk in the search_fun() function.
if formula_1(Args...) or formula_2(Args...):
print(f'{values} {values2} HIGH ERROR')
The same idea can apply to many other code segments, and make your code much more readable.
Using csv.DictReader
As suggested by @Graipher, it will be much better to use csv.DictReader as it will do a lot of the splitting work for you
from csv import DictReader
with open("csvfile.csv") as csvfile:
reader = DictReader(csvfile, delimiter = ',')
for line in reader:
print(line['atom_id'])
This will split the values into a dictionary, where the keys will be the words at the top of the file comp_id,atom_id,count,min,max,avg,std. This is much better as you won't need to split the lines manually, and there won't be any magic numbers as the keys to your dictionary will be pre-defined by you.
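To make the DictReader suggestion concrete, here is a self-contained sketch. The sample rows and the high_error threshold check are illustrative assumptions using the header quoted above, not the OP's actual data or formula:

```python
import csv
import io

# two made-up rows using the header quoted above (illustrative only)
sample = """comp_id,atom_id,count,min,max,avg,std
ALA,CA,100,40.0,60.0,52.0,2.0
GLY,CA,80,35.0,55.0,45.0,20.0
"""

def high_error(row, ratio=0.25):
    # hypothetical helper: flag a row whose std exceeds a fraction of avg;
    # named column access replaces magic indices like split_lines[2], split_lines[3]
    return float(row["std"]) > ratio * float(row["avg"])

rows = list(csv.DictReader(io.StringIO(sample)))
flagged = [row["comp_id"] for row in rows if high_error(row)]
print(flagged)  # → ['GLY']
```

With a real file you would replace io.StringIO(sample) with an open('bmrb.csv') handle inside a with block.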
csv file handling in Python | {
"domain": "codereview.stackexchange",
"id": 39638,
"tags": "python"
} |
Is this rubber/PVC coupling good enough for small torque (0.1 N.m) | Question: I am working on a project that involves speed regulation of a BLDC motor under no-load and load conditions. I wish to use another machine operated as generator, acting as load on the motor, as shown in this video.
The coupling used in this motor/generator arrangement looks handmade out of a rubber tube or something. I am considering using it as an alternative to a flexible coupling. Purchasing an actual flexible coupling is not an option for me. Moreover, I need the coupling on an urgent basis.
My question is, can this arrangement (or something similar) be used to couple a 15W motor to a similar rating machine, if the rated torque is not exceeding 0.1 N.m?
Answer: Absolutely, I use this method all the time for small robots, and actuators. I can find no information on the formal torque ratings, but I did find in a document called "PVC Piping Systems: Helpful tips for Avoiding Problems"
The recommended best practice is to use a thread sealant (not a thread lubricant) and to assemble
the joint to finger tight plus one and one-half turns, two turns at the most. Finger tight can be
defined as: tightened using the fingers, no tools, to a torque of about 1.2 to 1.7 foot-pounds (1.7
to 2.3Nm).
I am assuming the pipe can withstand the torque of tightening threads. I suspect that the pipe can withstand much more torque than that, though. | {
"domain": "robotics.stackexchange",
"id": 670,
"tags": "control, brushless-motor"
} |
Problem with Publish/Subscribe | Question:
Basically, I have a node that is subscribed to 6 other nodes.
I have an initialize routine that needs a response from each node that they're done initializing. I get a fine response from 3 of them, but not the other 3. The weird thing is, they actually publish their message and I can see it if I echo the topic in the terminal, but the receiving node does not actually receive it. The even weirder part is that if I publish the message to the topic through the terminal, the receiving node actually reacts as it should.
I hope this makes sense.
Thanks in advance!
Originally posted by MrSnail on ROS Answers with karma: 3 on 2013-05-25
Post score: 0
Original comments
Comment by Ben_S on 2013-05-25:
Are all nodes running on the same machine? Network set up correctly?
Comment by MrSnail on 2013-05-25:
Yes, same machine.
Answer:
The six nodes send out one message each when they are done with initialization?
Then my first guess would be that there's some timing issue. Possibilities are e.g.
The "supervision node" starts listening only after the first messages are already sent
The three nodes publish immediately after advertising their publisher, before the "supervision node" is completely subscribed
First attempt at a solution: in the six nodes, include a delay of one or two seconds between creation of the publisher and publishing the first message. Start the supervision node first, then the other nodes. If this works, you found your culprit.
Also, you could enable latching:
When a connection is latched, the last
message published is saved and
automatically sent to any future
subscribers that connect. This is
useful for slow-changing to static
data like a map. Note that if there
are multiple publishers on the same
topic, instantiated in the same node,
then only the last published message
from that node will be sent, as
opposed to the last published message
from each publisher on that single
topic.
If both do not work, check if the subscriber and the publishers use the exact same message type. Differences (e.g. publishing geometry_msgs::Pose, subscribing to geometry_msgs::PoseStamped) might lead to not receiving anything.
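The latching behaviour quoted above is exactly what fixes this kind of race. Its semantics can be sketched outside ROS with a toy publisher (plain Python, illustrative only; in real roscpp you enable it with the latch argument of advertise, and in rospy with latch=True on the Publisher):

```python
class LatchedPublisher:
    """Toy stand-in for a ROS topic publisher (not real ROS code)."""

    def __init__(self, latch=False):
        self.latch = latch
        self.last = None
        self.subscribers = []

    def publish(self, msg):
        if self.latch:
            self.last = msg          # remember the last message for future subscribers
        for callback in self.subscribers:
            callback(msg)

    def subscribe(self, callback):
        self.subscribers.append(callback)
        if self.latch and self.last is not None:
            callback(self.last)      # replay the last message to the late subscriber

received = []
pub = LatchedPublisher(latch=True)
pub.publish("init done")         # published before anyone subscribed
pub.subscribe(received.append)   # subscriber connects too late...
print(received)                  # → ['init done']  ...but still gets the message
```

Without latch=True the late subscriber's list would stay empty, which mirrors the "supervision node misses the init message" symptom described in the question.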
Originally posted by Philip with karma: 990 on 2013-05-25
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by MrSnail on 2013-05-25:
Latching worked! Thank you. | {
"domain": "robotics.stackexchange",
"id": 14288,
"tags": "ros"
} |
Motion of a particle in a wave | Question: I am having trouble solving this question. I need clarification.
Q. Figure A (in the bottom paragraph) shows the equilibrium positions of air
particles 1,2,3, etc. Figure B shows their positions at an instant when a longitudinal travelling wave moves through
air to the right. Which of the following are correct statements regarding particles in figure B?
Options are:
1. Particle 4 is at rest at this instant
2. Particles 3 and 5 are moving in the same direction
3. Particle 3 will move to the right immediately after this instant of time
4. Particle 7 will move to the left immediately after this instant of time
The answers are 2, 3, and 4.
Since particle 4 is at its equilibrium position, its velocity would be at a maximum, hence option 1 is incorrect.
However I am not able to think of the movement of other particles, it looks like 3 and 5 are at the extreme positions so they should move leftward and rightward respectively.
The wave direction is rightward so the particle 7 should move towards the right.
P.S. The question from the beginning, along with the answer choices after it, are both from this image in the link here.
Answer: Great question!
I am used to questions about standing waves, so I had to hunt for the answer to this.
I have looked at several websites, and here's my impression of what is going on. In the animations, where the particles are all bunched up, they are all traveling in the direction of the wave, (they are the wave pulse) so $3,4,5$ and $9,10,11$ should all be moving to the right. The particles in the gaps, where the concentration is lower, should be moving left, to get back to their starting places. So I think $1,7,13$ would all be moving left.
In a traveling longitudinal wave, none of the particles end up traveling, it is only the pattern which travels. Each individual particle oscillates in place. A particle at its equilibrium position is like a mass on a spring passing through its equilibrium position: that is the point where it has maximum velocity.
This isn't like a standing wave where particles stay stationary at node points. None of the particles stay stationary in a traveling wave, they all oscillate. In order for a pulse to travel to the right, new particles have to join in coming from the right, while old particles leave from the left edge, slow down, then turn around and go back, to become part of the next longitudinal pulse.
So the place of high particle concentration is a "wave crest" traveling to the right. The central particle is moving at the wave velocity. The particle to its left is moving slightly slower, so it will pull away, while the particle to its right is also moving slightly slower, so it will get more bunched up. The particle left of the crest is slowing down, while the particle on the right is speeding up towards maximum, when it will get its turn as the wave crest. | {
"domain": "physics.stackexchange",
"id": 78802,
"tags": "homework-and-exercises, waves"
} |
Why doesn't the brightness of a bulb change with time? | Question: Household bulbs get alternating current, which means that the voltage of source and current in circuit keep changing with time, which implies that the power supply isn't constant. However, we don't see any changes in brightness of the bulb. Why is that ?
Answer: Two reasons:
An incandescent bulb glows not (directly) because it has electricity going through it, but because it is hot. Even when the power going through the bulb decreases, it takes some time for the filament to cool down. Even once the bulb is turned off, it takes some time (a fraction of a second) for the light to fade.
What variation there is in the light is too fast for our eyes to see.
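As a side note on point 2: the electrical power goes as the square of the sinusoidal current, so the brightness ripple is at twice the mains frequency, about 100 Hz on a 50 Hz supply. A quick numeric check (illustrative sketch):

```python
import math

f_mains = 50.0          # Hz
dt = 1e-4               # 10 kHz sampling
n = int(1.0 / dt)       # one second of samples

# instantaneous power through a resistive filament is proportional to sin^2
p = [math.sin(2 * math.pi * f_mains * i * dt) ** 2 for i in range(n)]

# count the dark dips (local minima of the power) in one second
dips = sum(1 for i in range(1, n - 1) if p[i] < p[i - 1] and p[i] <= p[i + 1])
print(dips)  # ~100 dips per second (99 counted; the boundary sample is excluded)
```

The filament's thermal inertia (point 1) smooths most of this roughly 100 Hz ripple out, and what remains is too fast for the eye anyway.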
You can see the AC flicker in slow motion videos if the camera has a sufficient frame rate, for instance this one. | {
"domain": "physics.stackexchange",
"id": 47223,
"tags": "electricity, thermal-radiation"
} |
Coulomb interaction and conservation laws | Question: In many-body solid-state physics, the Coulomb interaction term in the Hamiltonian usually implies the momentum conservation law in indicies:
$$H_c=\frac{1}{2} \sum_{\mathbf{k},\mathbf{k}',\mathbf{q} \neq 0} V_{\mathbf{q}} a^{\dagger}_{\mathbf{k}'-\mathbf{q}} a^{\dagger}_{\mathbf{k}+\mathbf{q}}a_{\mathbf{k}'} a_{\mathbf{k}},$$
where $\mathbf{k},\mathbf{k}',\mathbf{q}$ are quasi-momenta and quantum numbers for the continuum spectrum of electron gas simultaneously.
In quantum chemistry textbooks, the Coulomb term usually looks like:
$$H_c=\sum_{i,j,k,l} V_{i,j,k,l} a^{\dagger}_{i} a^{\dagger}_{j}a_{k} a_{l}$$
Numbers $i,j,k,$ and $l$ are running over some discrete energy spectrum. Is it possible to state any conservation laws for quantum numbers $i,j,k,$ and $l$ in the expression above? Should not they obey any conservation law, selection rules or any additional restrictions? I will appreciate any references to textbooks or papers.
Answer: Quantum chemistry treats localized systems for which the conservation of total momentum is a true but not-very-useful fact; it therefore doesn't make much sense to incorporate it very explicitly into the notation. (Solid-state systems, on the other hand, are infinite lattices that are invariant under discrete translations, so that the total electron momentum will be a constant of the motion. The formalism is then adapted to this.)
In the quantum chemical formalism, the interaction coefficients are
$$V_{ijkl}=\langle\phi_i\phi_j|\hat{v}|\phi_k\phi_l\rangle,$$
where the creation operator $a_i^\dagger$ creates an electron in the state $|\phi_i\rangle$. The pairwise Coulomb repulsion $\hat{v}$ is indeed translation invariant, in that it commutes with the total translation $\hat{U}=e^{i(\hat{\mathbf{p}}_1+\hat{\mathbf{p}}_2) \cdot \mathbf{r}}$ by any displacement $\mathbf{r}$. Thus the coefficient is also equal to
$$V_{ijkl}=\langle\phi_i|e^{i\hat{\mathbf{p}} \cdot \mathbf{r}} \otimes\langle\phi_j|e^{i\hat{\mathbf{p}} \cdot \mathbf{r}} \cdot\hat{v}\cdot e^{-i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_k\rangle\otimes e^{-i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_l\rangle.$$
Unlike in solid-state systems, though, orbitals like $e^{-i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_k\rangle$ are not related in any way to the rest of the basis, other than in the single necessary expansion
$$e^{-i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_k\rangle=
\sum_j |\phi_j\rangle\langle\phi_j|e^{-i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_k\rangle.$$
With this you can formulate the global translation invariance as a condition on the $V_{ijkl}$:
$$V_{ijkl}=\sum_{i',j',k',l'}
\langle\phi_i|e^{i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_{i'}\rangle
\langle\phi_j|e^{i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_{j'}\rangle
\langle\phi_{k'}|e^{-i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_k\rangle
\langle\phi_{l'}| e^{-i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_l\rangle
V_{i'j'k'l'}.$$
The reason this looks so ugly is that there is as yet no selection rule on the matrix elements of the translation between the different basis functions, such as $\langle\phi_i|e^{i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_{i'}\rangle$. In quantum chemical applications, the basis is localized around the nuclei and there will not be any such selection rule, so the above is the best you'll get. (In practice this is not a problem as you know $\hat{v}$ beforehand and use it to calculate the $V_{ijkl}$. If you want to postulate some coefficients then you do need to check the above relation for all displacements $\mathbf{r}$ or your hamiltonian will not be translation invariant.)
Note, though, that since the above formalism is completely general, you still have the option to choose a translation-invariant basis, for which $e^{i\hat{\mathbf{p}} \cdot \mathbf{r}} |\phi_{i}\rangle=e^{i{\mathbf{p_i}} \cdot \mathbf{r}} |\phi_{i}\rangle$, as in solid-state applications. In this case the matrix elements will simplify to delta functions and the coefficients will be forced into the first form you give. | {
"domain": "physics.stackexchange",
"id": 7563,
"tags": "many-body, quantum-chemistry"
} |
What do I call this shaft that drives the rubber stopper into the glass vial in a vial stoppering machine? | Question: I'm translating a Russian text about a drug product manufacturing process. The text mentions briefly the vial stoppering machine, in which rubber stoppers are first fed into the hopper and then directed towards vials, and a пуансон drives the stopper into the vial. Multitran provides multiple options for how to translate this "пуансон". It's basically some shaft-like member.
I googled and found a description of some stoppering machine in Russian, and it has a diagram, with "пуансон" under number 60:
The text there says that the axle (59) rotates, making the cam (58) rotate and push this plunger/piston/punch member (?) towards the vial and drive the rubber stopper into the vial.
I'm unsure which word to use for this part.
Answer: A camshaft is rotating a cam that is part of the shaft. It seems the cam acts on a roller, fixed on a push-rod, and so pushes down the push-rod in the guider.
I think the push-rod pushes the rubber down with a punch cap.
I think push-rod is the word you're searching for if you're talking about part 60.
Plunger may be a synonym, but I think of a kind of pump when hearing that term. | {
"domain": "engineering.stackexchange",
"id": 1966,
"tags": "terminology"
} |
Perfect elastic collision and velocity transfer | Question: So my teacher told me that when you have two identical balls in a perfectly elastic collision, the first ball A will collide with B and afterwards A will stop and B continue. Why is this? Doesn't Newton's 3rd law imply both balls would get an equal force into opposite direction during the collision? And if A was heavier than B, does A continue in the same direction after elastically colliding with B (that's the only logical result I can think of if this is true).
Answer: In any collision, momentum is conserved. This means
\begin{equation}
m_1u_1 + m_2u_2 = m_1v_1 + m_2v_2
\end{equation}
For a perfectly elastic collision, kinetic energy is also conserved
\begin{equation}
m_1u_1^2 + m_2u_2^2 = m_1v_1^2 + m_2v_2^2
\end{equation}
Solving these equations simultaneously ($v_1$ and $v_2$ are the variables)
\begin{equation}
v_1 = \frac{u_1(m_1-m_2) + 2m_2u_2}{m_1+m_2};\\
v_2 = \frac{u_2(m_2-m_1) + 2m_1u_1}{m_1+m_2};
\end{equation}
when $m_1=m_2$, these reduce to
\begin{equation}
v_1 = u_2;\\
v_2 = u_1;
\end{equation}
You can check out what happens for other cases as well ($m_1 >> m_2$ or $u_2 = 0$, etc.)
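A quick numerical sketch of these formulas (Python) confirms the velocity swap for equal masses, and that a heavier incoming ball keeps moving in its original direction, as guessed in the question:

```python
def elastic_collision(m1, u1, m2, u2):
    # closed-form post-collision velocities from the equations above
    v1 = (u1 * (m1 - m2) + 2 * m2 * u2) / (m1 + m2)
    v2 = (u2 * (m2 - m1) + 2 * m1 * u1) / (m1 + m2)
    return v1, v2

print(elastic_collision(1.0, 5.0, 1.0, 0.0))     # equal masses → (0.0, 5.0): velocities swap
print(elastic_collision(1000.0, 5.0, 1.0, 0.0))  # heavy ball A barely slows, keeps its direction
```

In the second case A continues forward at just under 5 while the light ball B shoots off at nearly twice A's incoming speed; momentum is conserved in both cases.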
EDIT:
If you look at it from the point of view of forces, you will see the same force act on both objects, in opposite directions. This will cause an acceleration depending on the mass of the object ($F=ma$), but only for the tiny instant that the two are in contact. Now, for example, considering equal masses, the force would decelerate the first object to some velocity, and accelerate the second object to the same velocity (because both have equal masses, and the force acts for an equal amount of time). From the momentum equations, we find that the velocities are swapped.
Important point to remember: Force is not velocity. The same force can produce different accelerations and hence different velocities for different masses. | {
"domain": "physics.stackexchange",
"id": 15450,
"tags": "newtonian-mechanics, energy-conservation, momentum, conservation-laws, collision"
} |
First fundamental form in the Gibbons-Hawking-York boundary term | Question: Let me expose my problem, I am trying to perform the explicit variation of the Gibbons-Hawking-York boundary term,
$$S_{GH}=\int_{\partial M} d^{n-1}x\sqrt{\left|h\right|}K$$
The problem I have is that in the calculation of $\delta\sqrt{\left|h\right|}$, it seems like I should carry the calculation as if $h$ was the first fundamental form
$$h_{\mu\nu}=g_{\mu\nu} - \sigma n_{\mu} n_{\nu} $$
inasmuch as I obtain the good result doing so. First, I use the identity
$$
\delta\sqrt{\left|h\right|} = -\frac12 \sqrt{\left|h\right|} h_{\mu\nu} \delta h^{\mu\nu}
$$
then I express $\delta h$ in terms of $\delta g$ $$\delta h^{\mu\nu} = \delta g^{\mu\nu} -\sigma \delta n^\mu n^\nu -\sigma n^\mu\delta n^\nu $$
and using the fact that $h_{\mu\nu} n^\mu = 0$, we obtain
$$\delta\sqrt{\left|h\right|} = -\frac12 \sqrt{\left|h\right|} h_{\mu\nu} \delta g^{\mu\nu}.$$
My problem is that $h$ is not the first fundamental form in the first expression, but the induced metric. The determinant of the first fundamental form is 0 in gaussian normal coordinates, so it seems like I am skipping a step in this derivation, but I just cannot find what (I never had a course in differential geometry, so my understanding of the difference between the first fundamental form and the induced metric is really poor).
Answer: I got the answer reading a book by E. Poisson; what I was doing was indeed wrong. You have to start with the induced metric given by
$$ h_{ab}= g_{\mu\nu}e^{\mu}_a e^{\nu}_b $$
where $$e^{\mu}_a=\frac{\partial x^{\mu}}{\partial y^a}$$
are the tangent vectors to curves of the hypersurface. Then, you just replace $g$ by $h$ in the usual relation
$$\delta\sqrt{\left|h\right|} = -\frac12 \sqrt{\left|h\right|} h_{ab} \delta h^{ab}.$$ Using the Kronecker invariance, one finds
$$\delta\sqrt{\left|h\right|} = \frac12 \sqrt{\left|h\right|} h^{ab} \delta h_{ab}.$$
Then using the fact that
$$ \delta h_{ab} =\delta g_{\mu\nu}e^\mu_a e^\nu_b$$
since the tangent vectors are invariant, we finally get
$$ \delta\sqrt{\left|h\right|} = \frac12 \sqrt{\left|h\right|} h^{\mu\nu} \delta g_{\mu\nu}$$
where we have used the definition of the PROJECTOR
$$h^{\mu\nu} = h^{ab}e^\mu_a e^\nu_b.$$
And this is where the problem was coming from: $h^{\mu\nu}$ is NOT the induced metric, but a projector associated with that induced metric. | {
"domain": "physics.stackexchange",
"id": 23162,
"tags": "general-relativity, lagrangian-formalism, differential-geometry, variational-principle, boundary-terms"
} |
Derive Frequency Representation of Impulse Train Function | Question: I want to walk through the derivation of the frequency representation of an impulse train.
The definition of the impulse train function with period $T$ and the frequency representation with sampling frequency $\Omega_s = 2\pi/T$ that I would like to derive is:
\begin{align*}
s(t) &= \sum\limits_{n=-\infty}^{\infty} \delta(t - nT) \\
S(j\Omega) &= \frac{2\pi}{T} \sum\limits_{k=-\infty}^{\infty} \delta(\Omega - k\Omega_s) \\
\end{align*}
Using the exponential Fourier series representation of the impulse function and applying the Fourier transform from there results in:
\begin{align*}
s(t) &= \frac{1}{T} \sum\limits_{n=-\infty}^{\infty} e^{-jn\Omega_s t} \\
S(j\Omega) &= \int_{-\infty}^\infty s(t) e^{-j\Omega t} dt \\
S(j\Omega) &= \int_{-\infty}^\infty \frac{1}{T} \sum\limits_{n=-\infty}^{\infty} e^{-jn\Omega_s t} e^{-j\Omega t} dt \\
S(j\Omega) &= \frac{1}{T} \int_{-\infty}^\infty \sum\limits_{k=-\infty}^{\infty} e^{-j(k\Omega_s + \Omega) t} dt \\
\end{align*}
To get from there to the end result, it would seem that the integration would need to be over a period of $2\pi$. Where $\Omega = -k\Omega_s$, the exponent would be $e^0$ and integrate to $2\pi$ and for other values of $\Omega$, there would be a full sine wave that would integrate to zero. However, the limits of integration are negative infinity to positive infinity. Can someone explain this? Thanks!
Answer: You correctly figured out that the occurring integrals don't converge in the conventional sense. The easiest (and definitely non-rigorous) way to see the result is by noting the Fourier transform relation
$$1\Longleftrightarrow 2\pi\delta(\Omega)$$
By the shifting/modulation property we have
$$e^{j\Omega_0t}\Longleftrightarrow 2\pi\delta(\Omega-\Omega_0)$$
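The same comb-to-comb correspondence can be checked numerically in the discrete setting, where the DFT of a sampled impulse train is again an impulse train (plain-Python sketch, an aside to the continuous-time derivation):

```python
import cmath

N, T = 12, 3   # 12 samples, an impulse every 3rd sample
x = [1.0 if n % T == 0 else 0.0 for n in range(N)]

def dft(x):
    # naive O(N^2) discrete Fourier transform
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

X = [abs(v) for v in dft(x)]
print([round(v, 6) for v in X])
# → [4.0, 0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0]
# peaks every N/T = 4 bins, i.e. at multiples of the sampling frequency
```

The non-convergent continuous-time integrals are sidestepped here because everything is finite, yet the comb-in, comb-out structure is the same.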
So each term $e^{jn\Omega_s t}$ in the Fourier series transforms to $2\pi\delta(\Omega-n\Omega_s)$, and the result follows. | {
"domain": "dsp.stackexchange",
"id": 3213,
"tags": "fourier-transform, fourier-series"
} |
Does the magnetic anisotropy state only have two possible directions? | Question: Wikipedia says "The magnetic moment of magnetically anisotropic materials will tend to align with an easy axis". Does this mean that it is completely impossible to orient the magnetic moment with any direction no matter how strong the magnetic field is? Or it is only about spontaneous magnetization?
Answer: Magnetocrystalline anisotropy is all about spontaneous magnetization. Consider a (single domain) ferromagnetic material. Depending on its crystal structure, its spontaneous magnetization will tend to be aligned with a direction that minimizes the interaction energy between magnetic dipoles carried by the atoms contained in one unit cell of the lattice.
That's why, depending on the geometry of the lattice system, you will have different types of magnetic anisotropy: uniaxial (one single easy axis), cubic (2 axes, one easy and one hard), tetragonal, etc.
So far, the most common anisotropy one can meet is the uniaxial one, which is particularly useful for the Stoner-Wohlfarth model.
Now, what about when an external magnetic field is applied? Well, there are two kinds of energies in competition: the anisotropy energy, which is minimum when the magnetization is along the direction of the easy axis, and the Zeeman energy, which is the interaction energy between the magnetization and the external field. This situation is well described by the Stoner-Wohlfarth model and basically consists of minimizing the energy of the ferromagnetic system:
$$
\mathrm{E}(\theta,\phi)=K\sin^2\theta-\mu_0M_sH\cos(\theta-\phi)
$$
The first term is the anisotropy energy ($\theta$ being the angle between the easy axis and the magnetization, and $K$ the anisotropy strength), the second corresponds to the Zeeman energy ($H$ is the norm of the applied magnetic field and $\phi$ the angle that it forms with the easy axis).
For convenience, fix $\phi=\pi$ as an example. Then the energy $\mathrm{E}$ has to be minimized with respect to the variable $\theta$. In dimensionless units, one has:
$$
\mathrm{e}(\theta)=\frac{1}{K}\,\mathrm{E}(\theta,\phi=\pi)=\sin^2\theta+2h\cos\theta\quad\text{with}\quad h=\frac{\mu_0M_sH}{2K}
$$
One can easily find the equilibrium states $\theta_{\mathrm{eq}}$ by computing $\partial_\theta\,\mathrm{e}(\theta=\theta_{\mathrm{eq}})=0$:
$$
\theta_{\mathrm{eq}}\equiv\{0,\pi,\theta_h=\arccos(h)\}
$$
and then study their stability by looking at the sign of the quantity $\partial^2_\theta\,\mathrm{e}(\theta=\theta_{\mathrm{eq}})$, namely:
$$
\partial^2_\theta\,\mathrm{e}(\theta=0)=2(1-h)
$$
$$
\partial^2_\theta\,\mathrm{e}(\theta=\pi)=2(1+h)
$$
$$
\partial^2_\theta\,\mathrm{e}(\theta=\theta_h)=2(h^2-1)
$$
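These three stability conditions are easy to verify numerically; the following Python sketch (an illustrative aside, not part of the original derivation) evaluates $\partial^2_\theta\,\mathrm{e}$ directly:

```python
import math

def d2e(theta, h):
    # second derivative of e(theta) = sin^2(theta) + 2 h cos(theta)
    return 2 * math.cos(2 * theta) - 2 * h * math.cos(theta)

# theta = 0 is a stable minimum (d2e > 0) for h < 1, unstable for h > 1
print(d2e(0.0, 0.5), d2e(0.0, 1.5))  # → 1.0 -1.0

# the canted state theta_h = arccos(h) exists only for |h| <= 1
# and is always unstable there, since 2(h^2 - 1) < 0
print(d2e(math.acos(0.5), 0.5))      # ≈ -1.5
```

The sign change of d2e at $\theta=0$ as $h$ crosses 1 is exactly the field-driven switching described below.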
The take home message here is that it is possible to change the stability of the different equilibrium solutions by varying the magnitude $H$ of the magnetic field, i.e. by varying $h$. By changing $h$, one can actually flip the magnetization from one equilibrium orientation to another. | {
"domain": "physics.stackexchange",
"id": 19291,
"tags": "magnetic-fields, material-science, magnetic-moment"
} |
How to heat water at very high temperatures? | Question: I am working on a project and recently came across a situation where I'll be working with water evaporation. On searching the internet I found that water heated to 350 Celsius would generate almost 1600 newtons of force. But I'm confused: how will I be able to achieve 350 degrees of heat? I mean, the water itself evaporates at 100 degrees, so how will I be able to hold it all the way up to 350 degrees? Or is there any other theory/way which I'm missing?
Answer: Think of a pressure cooker...
The higher the pressure, the higher the boiling temperature. You need a vessel that will hold the pressure at 350C.
Wikipedia gives a formula for the pressure needed to get the boiling temperature of water up to a certain value. The formula is
$$T_b=1730.53/(8.07131-\log_{10}P) -233.426$$
where $T_b$ is the boiling temperature in C, and $P$ is the required pressure in Torr.
Putting 350 in the equation gives approximately $1.3\times10^5\,\rm Torr$ or $170\,\rm Atm$ or $17\,\rm MPa$ (steam tables give a saturation pressure of about $16.5\,\rm MPa$ at 350C).
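The arithmetic can be reproduced by inverting the quoted formula (Python sketch; note that these Antoine-type coefficients are fitted for roughly 1 to 100 C, so pushing them to 350 C is an extrapolation, yet it lands close to the steam-table value of about 16.5 MPa):

```python
def boiling_pressure_torr(tb_celsius):
    # invert Tb = 1730.53 / (8.07131 - log10(P)) - 233.426 for P
    return 10 ** (8.07131 - 1730.53 / (tb_celsius + 233.426))

P_torr = boiling_pressure_torr(350.0)
P_atm = P_torr / 760.0
P_mpa = P_torr * 101325.0 / 760.0 / 1e6   # Torr → Pa → MPa
print(round(P_torr), round(P_atm), round(P_mpa, 1))
```

Sanity check: at 100 C the same formula returns very nearly 760 Torr, i.e. 1 atmosphere, as it should.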
Now build a pressure cooker strong enough to hold that (plus some safety margin I hope). | {
"domain": "physics.stackexchange",
"id": 19809,
"tags": "thermodynamics, water, evaporation, heat-engine"
} |
Best practices for decouple classes in C# | Question: So this is a lot more confusing than it has to be (I could just stick all of this in the main class with the ui event handlers) but I wanted to decouple this class for learning purposes.
Basic information:
I pulled out a bunch of code and put it in a separate class. This separate class opens files so I used exception handling. When an exception is thrown it should update the UI with an error message. To decouple this class I created an event handler and event listeners.
Questions:
Is this a common way to decouple classes?
Is this too loosely coupled, to the point where it creates too much overhead?
Is this so decoupled that it makes it completely too complicated?
My friend suggested passing Form1 to the function, but I would still need to use the name of the label. So it would be less coupled, but not completely decoupled. Is this an acceptable approach?
Are there some other approaches that would work better?
Original class with UI event handlers:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.IO;
namespace compiler
{
public partial class Form1 : Form
{
CompilerControls controls = new CompilerControls();
bool ErrorFlag = false;
public Form1()
{
InitializeComponent();
//add an event listener to handle exceptions
controls.HandleException += new ExceptionCaught(CatchException);
}
private void Form1_Load(object sender, EventArgs e)
{
}
public void CatchException(CustomEventArgs e)
{
UpdateStatus(e.Message, Color.Red);
ErrorFlag = true;
}
private void ctrlOpenFile_Click(object sender, EventArgs e)
{
DialogResult sourceFile = openFileDialog1.ShowDialog();
if (sourceFile == DialogResult.OK)
{
// Read the lines into a list from the file
controls.ReadFile(openFileDialog1.FileName);
//Print the source file to the text box
txtMainBox.Clear();
txtMainBox.Text = controls.GetSourceFile();
if (!ErrorFlag)
{
//Show status message and move forward
UpdateStatus("File Opened Successfully", Color.Green);
ctrlCreateChFile.Enabled = true;
ctrlOpenFile.Enabled = false;
}
else
ErrorFlag = false;
}
}
private void UpdateStatus(string message, Color color)
{
lblStatus.ForeColor = color;
lblStatus.Text = message;
}
private void ctrlCreateChFile_Click(object sender, EventArgs e)
{
//delete everything in the main text box and get/create the contents of the character file
txtMainBox.Clear();
txtMainBox.Text = controls.GetChFile();
//deslect all of the text in the main text box
txtMainBox.GotFocus += delegate { txtMainBox.Select(0, 0); };
//if there wasn't an exception thrown
if (!ErrorFlag)
{
//Show status message and move forward
UpdateStatus("Successfully Created Character File", Color.Green);
ctrlCreateChFile.Enabled = false;
ctrlCreateTokens.Enabled = true;
}
else
ErrorFlag = false; //if there was an exception thrown, ignore the above statements only ONCE
}
}
}
Decoupled class with exception handling:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
namespace compiler
{
public delegate void ExceptionCaught(CustomEventArgs e);
class CompilerControls
{
private List<String> fileLines = new List<string>();
//add an event handler
public event ExceptionCaught HandleException;
//handle the event of an exception being thrown
private void OnCaught(CustomEventArgs e)
{
if (HandleException != null)
HandleException(e);
}
public void ReadFile(String FileName)
{
try
{
using (StreamReader sr = new StreamReader(FileName))
{
string line;
while ((line = sr.ReadLine()) != null)
fileLines.Add(line);
}
}
catch (IOException)
{
OnCaught(new CustomEventArgs("File could not be Opened"));
}
catch (OutOfMemoryException)
{
fileLines.Clear();
OnCaught(new CustomEventArgs("File too large"));
}
}
public string GetSourceFile()
{
string text;
text = "/*******************************************************************" + Environment.NewLine;
text += "/ Stephen Granet" + Environment.NewLine;
text += "/ CS 451 Compiler" + Environment.NewLine;
text += "/" + Environment.NewLine;
try
{
//Format each line from the file and print it to the text box
for (int i = 0; i < fileLines.Count(); i++)
{
text += "/ " + (i + 1) + ": " + fileLines[i] + Environment.NewLine;
}
}
catch (OutOfMemoryException)
{
OnCaught(new CustomEventArgs("File too large"));
return "";
}
//Print footer information to the text box
text += "/******************************************************************/";
return text;
}
public String GetChFile()
{
String text = "";
//Convert the fileLines into one long string, and split each character into its own array element
char[] symbols = (string.Join("", fileLines)).ToCharArray();
//cycle through each symbol and print it to the text box
foreach (char symbol in symbols)
{
if ((symbol != '\n') && (symbol != ' ') && (symbol != '\t'))
text += symbol + Environment.NewLine;
}
CreateChFile(text);
return text;
}
private void CreateChFile(string content)
{
//Write the data to the ch.txt file
try
{
File.WriteAllText("ch.txt", content);
}
catch (IOException)
{
OnCaught(new CustomEventArgs("Could not create Character File"));
}
}
}
}
Note: This is a homework assignment. However, I'm not asking a question on the homework part of the program. This is for my own practice.
Answer: Thanks to ANeves for pointing out that I could also submit my comments as an answer by invoking question 5 :)
Original comment:
Not a direct answer to your question, so a comment: your approach hides useful information from the caller. You handle the exception, which has lots of information in it, by passing much less information to an event. For example, an IOException will tell you why the file couldn't be opened; the event does not. Exceptions have a stack trace; the event does not. Sometimes it's best to let the caller handle the exception, since the caller knows best how to react to a given exceptional condition.
So I should just not check for exceptions in the decoupled class, and make whoever calls it check for exceptions? – Stephen Granet 8 mins ago
Most likely, yes. The ReadFile method takes a file path and fills a collection of strings with the lines of the file. That's pretty simple. By using the event-based pattern you've developed, you require the consumer of the method to subscribe to an event if they want to know about exceptions. The built-in exception handling mechanism, on the other hand, comes for free.
Indeed, the direct caller of ReadFile might not need to catch the exception. It might be appropriate to allow the exception to bubble up to a higher point in the call stack. In general, any given method should only handle exceptions that it knows about, and for which it has some specific course of action. (Such a course of action could be, for example, informing the user that the path was invalid and asking for new input.)
At the entry point of your application (the entry point of each thread, actually), you'll usually want a general exception handler for logging exceptions that weren't handled more specifically.
Back to your program: to decouple the ReadFile logic from the calling class, you could have the method return a List<string> rather than operate on a private member of the class. This has many advantages: easier testing and greater reusability come to mind.
Another suggestion: Since you concatenate all the lines in the end, you could skip a lot of this and use the File.ReadAllLines method instead.
In general, it seems, you might want to focus on the single responsibility principle as a way of arriving at a decoupled design. | {
"domain": "codereview.stackexchange",
"id": 1782,
"tags": "c#, mvc"
} |
Simple speed up of C++ OpenMP kernel | Question: This function calculates the standard deviation of a patch, given a kernel size and greyscale OpenCV image. The middle pixel of the patch is kept if stdev of the patch is below the given threshold, else it is rejected. This is done for each pixel except the border.
I have never worked with OpenMP or optimization of C++, so all help is welcome. I'm probably doing some very stupid things that slow down the process drastically. It doesn't need to be the fastest, but I think some easy tricks will significantly speed it up.
#include "stdafx.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/photo/photo.hpp"
#include <stdlib.h>
#include <stdio.h>
#include "utils.h"
#include <windows.h>
#include <string.h>
#include <math.h>
#include <numeric>
using namespace cv;
using namespace std;
Mat low_pass_filter(Mat img, int threshold, int kernelSize)
{
unsigned char *input = (unsigned char*)(img.data);
Mat output = Mat::zeros(img.size(), CV_8UC1);
unsigned char *output_ptr = (unsigned char*)(output.data);
#pragma omp parallel for
for (int i = (kernelSize - 1) / 2; i < img.rows - (kernelSize - 1) / 2; i++){
for (int j = (kernelSize - 1) / 2; j < img.cols - (kernelSize - 1) / 2; j++){
double sum, m, accum, stdev;
vector<double> v;
v.reserve(kernelSize*kernelSize);
// Kernel Patch
for (int kx = i - (kernelSize - 1) / 2; kx <= i + (kernelSize - 1) / 2; kx++){
for (int ky = j - (kernelSize - 1) / 2; ky <= j + (kernelSize - 1) / 2; ky++){
v.push_back((double)input[img.step * kx + ky]);//.at<uchar>(kx, ky));
}
}
sum = std::accumulate(std::begin(v), std::end(v), 0.0);
m = sum / v.size();
accum = 0.0;
std::for_each(std::begin(v), std::end(v), [&](const double d) {
accum += (d - m) * (d - m);
});
stdev = sqrt(accum / (v.size() - 1));
if (stdev < threshold){
output_ptr[img.step * i + j] = input[img.step * i + j];
}
}
}
return output;
}
Answer: Vector v is not required. Instead of adding items to it, iterate directly over your source array; the identity variance = E(v²) − E(v)² (note: a subtraction, not a division) would even let you do it in a single pass. Your inner code becomes:
double sum = 0;
int n = kernelSize * kernelSize;
// Kernel Patch
for (int kx = ...) {
for (int ky = ...) {
double d = (double)input[img.step * kx + ky];
sum += d;
}
}
const double mean = sum/n;
double sum2 = 0;
for (int kx = ...) {
for (int ky = ...) {
double d = (double)input[img.step * kx + ky];
sum2 += (d - mean) * (d - mean);
}
}
const double stddev = sqrt(sum2/n);
if (stddev < threshold) {
...;
}
After that, consider that the sum of elements centred around (x+1,y) can be found from the result for (x,y) simply by subtracting all the elements in the previous left-hand column, and adding all the elements in the new right-hand column. An analogous operation works vertically.
Also, check your compiler options - are you auto-vectorizing loops, and using SIMD instructions (if available)? | {
"domain": "codereview.stackexchange",
"id": 14015,
"tags": "c++, opencv, openmp"
} |
Subscribing to a topic and storing the message | Question:
Dear ROS users,
I have a model of a robot arm (KUKA youBot) running in Gazebo. In my cpp program, which generates messages for the Joints of the model, I also need to somehow read and store into a variable the current positions of the Joints.
As far as I know, the corresponding messages are published under the topic named "joint_states". So, when I type in a separate Terminal Window "rostopic echo joint_states", I get all the names of the joints together with corresponding positions, velocities, etc., constantly published at some high frequency (+ from the youbot Documentation (Locomotec) it is known, that "the arm periodically publish JointState messages for position and velocities of the arm joints").
So, how can I subscribe to this topic "joint_states", and read out and store the positions of the arm joints?
Hope for your help!
2 dornhege: Please, if possible, explain in some more detail, what do you mean by "just use sensor_msgs::JointState"? Please give a short example on how to, e.g., add 0.03 to all current positions of the Joints, which have currently been read out by "just using sensor_msgs::JointState".. You will help me so much!.. - ASMIK2011ROS (13 mins ago)
Originally posted by ASMIK2011ROS on ROS Answers with karma: 62 on 2011-06-05
Post score: -1
Answer:
OK, I've solved the problem. So, once again ('cause seemingly the original description was not clear enough or too long to read(?) ;), my problem was: I HAD TO BE ABLE TO READ OUT AND STORE THE POSITIONS FROM THE MODEL IN GAZEBO "FROM TIME TO TIME, WHEN NEEDED" INSIDE A "GLOBAL" PROGRAM, WHICH HAS PLENTY OF OTHER DIFFERENT TASKS (So, creating a separate node for subscribing for messages of joint_state topic was of course not the solution I needed). HERE'S THE CODE I USED inside my "global program":
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>
#include <iostream>
using namespace std;
float joint_pos_global;
void callback(const sensor_msgs::JointState & msg)
{
joint_pos_global = msg.position[3];
}
int main(int argc, char** argv) {
//--------//
ros::init(argc, argv, "listener");
ros::NodeHandle nh;
ros::Subscriber sub; sub = nh.subscribe ("joint_states", 10, callback);
//--------//
smth....smth...smth...smth...
//--Here I suddenly need to know the current position of joint Numb. 3--//
ros::spinOnce();
cout << "Position of joint 3 is" << joint_pos_global << "\n";
//--------------------------------------------------------------------------------------------------//
}
Originally posted by ASMIK2011ROS with karma: 62 on 2011-06-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by JonW on 2011-06-10:
That may not work as you expect - I believe that Messages are queued up for delivery by ROS until the spin occurs at which point all callbacks occur. When the queue overflows messages are dropped, so the joint states you are seeing may be the ones from immediately after your last spin.
Comment by Abdul Mannan on 2016-11-25:
Hi, I am facing the same problem. Could you tell me the header files you are including in the beginning. I am getting errors by simply including
#include "ros/ros.h"
#include "std_msgs/String.h"
Thank you so much. | {
"domain": "robotics.stackexchange",
"id": 5755,
"tags": "ros"
} |
Spinor irreducible reps of the Lorentz group and their algebra | Question: Antisymmetric tensor of rank two can be connected with spinor formalism by the formula
$$
M_{\mu \nu} = \frac{1}{2}(\sigma_{\mu \nu})^{\alpha \beta}h_{(\alpha \beta )} - \frac{1}{2}(\sigma_{\mu \nu})^{\dot {\alpha} \dot {\beta} }h_{(\dot {\alpha} \dot {\beta} )},
$$
where
$$
h_{(\alpha \beta )} = (\sigma^{\mu \nu})_{\alpha \beta}M_{\mu \nu}, \quad h_{(\dot {\alpha} \dot {\beta} )} = -(\tilde {\sigma}^{\mu \nu})_{\dot {\alpha }\dot {\beta }}M_{\mu \nu} \qquad (.1)
$$
are irreducible spinor representations (for other definitions look here).
With generator $J_{\mu \nu}$ of the Lorentz group and corresponding irreducible representation $T(g) = e^{\frac{i}{2}\omega^{\mu \nu}J_{\mu \nu}}$, by rewriting antisymmetric tensor $\omega^{\mu \nu}$ with using spinor formalism we can get
$$
T(g) = e^{\frac{i}{2}\left(\omega^{(ab)}J_{(ab)} + \omega^{(\dot {a}\dot {b})}J_{(\dot {a}\dot {b})}\right)},
$$
where (compare with $(.1)$)
$$
J_{(ab)} = \frac{1}{2}(\sigma^{\mu \nu})_{a b}J_{\mu \nu}, \quad J_{(\dot {a} \dot {b} )} = -\frac{1}{2}(\tilde {\sigma}^{\mu \nu})_{\dot {a }\dot {b}}J_{\mu \nu},
$$
so the Lorentz group is generated by two symmetric spinor tensors.
I got commutation relations of these tensors:
$$
[J_{(\dot {a} \dot {b})}, J_{(\dot {c} \dot {d})}] = \frac{i}{2}\left( \varepsilon_{\dot {a}\dot {c}}J_{(\dot {b} \dot {d})} + \varepsilon_{\dot {b} \dot {d}}J_{(\dot {a} \dot {c})} + \varepsilon_{\dot {a} \dot {d}}J_{(\dot {b} \dot {c})} + \varepsilon_{\dot {b} \dot {c}}J_{(\dot {a} \dot {d})}\right),
$$
$$
[J_{(a b)}, J_{(c d)}] = \frac{i}{2}\left( \varepsilon_{ac}J_{(bd)} + \varepsilon_{bd}J_{(ac)} + \varepsilon_{ad}J_{(bc)} + \varepsilon_{bc}J_{(ad)}\right).
$$
But the commutator $[J_{(a b)}, J_{(\dot {c} \dot {d})}]$ isn't equal to zero, contrary to expectations. It's equal to
$$
[J_{(a b)}, J_{(\dot {c} \dot {d})}] = -\frac{i}{8}\left( (\sigma^{\beta})_{b\dot {c}}(\sigma^{\nu})_{a\dot {d}} + (\sigma^{\beta })_{b \dot {d}}(\sigma^{\nu})_{a \dot {c}} + (\sigma^{\beta })_{a \dot {c}}(\sigma^{\nu})_{b \dot {d}} + (\sigma^{\beta})_{a \dot {d}}(\sigma^{\nu})_{b \dot {c}}\right)J_{\beta \nu},
$$
which isn't zero (look here).
Should it be so?
Answer: Your last expression is equal to zero: the bracketed factor is symmetric in $\beta$ and $\nu$ (the four terms map into one another under $\beta\leftrightarrow\nu$), while $J_{\beta\nu}$ is antisymmetric in $\beta$ and $\nu$. Relabeling the dummy indices shows the contraction equals minus itself, hence it vanishes. | {
"domain": "physics.stackexchange",
"id": 9243,
"tags": "special-relativity, group-representations, commutator, spinors"
} |
A small GOTO text adventure game | Question: EDIT_START: I want to thank all people for giving me such good answers! It's hard for me to choose any answer over another, because I see that all of your answers are valid and good in their own perspective. I want to clarify my own question. My question is not "How do I not use GOTO?", but my question is "How do I use GOTO in a better way?". This implies that I want to use GOTO for program room/state transition at all costs. This is for educational purposes and for discovering the limits of C. I will give out a bounty as soon as possible to my question, to give back a reward. Anyway thank you all! I'll place a LABEL for ya all in my program ;-) EDIT_END:
I was discussing with someone about using GOTO at stackoverflow. May someone teach me some hidden tricks in using GOTO? Do you have some suggestions for improvement? You may enjoy my little adventure game, give it a try. ^^
PS play the game before you read the source, otherwise you get spoiled
#include <stdio.h>
#include <stdlib.h>
enum _directions{
DIR_0 = 0b0000,
DIR_E = 0b0001,
DIR_W = 0b0010,
DIR_WE = 0b0011,
DIR_S = 0b0100,
DIR_SE = 0b0101,
DIR_SW = 0b0110,
DIR_SWE = 0b0111,
DIR_N = 0b1000,
DIR_NE = 0b1001,
DIR_NW = 0b1010,
DIR_NWE = 0b1011,
DIR_NS = 0b1100,
DIR_NSE = 0b1101,
DIR_NSW = 0b1110,
DIR_NSWE = 0b1111
} DIRECTIONS;
void giveline(){
printf("--------------------------------------------------------------------------------\n");
}
void where(int room, unsigned char dir){
printf("\nYou are in room %i. Where do you want GOTO?\n", room);
if(dir & 8) printf("NORTH: W\n");
else printf(".\n");
if(dir & 4) printf("SOUTH: S\n");
else printf(".\n");
if(dir & 2) printf("WEST: A\n");
else printf(".\n");
if(dir & 1) printf("EAST: D\n");
else printf(".\n");
}
char getdir(){
char c = getchar();
switch(c){
case 'w' :
case 'W' :
return 'N';
case 's' :
case 'S' :
return 'S';
case 'a' :
case 'A' :
return 'W';
case 'd' :
case 'D' :
return 'E';
case '\e' :
return 0;
}
return -1;
}
int main(int argc, char *argv[]){
START:
printf("THE EVIL GOTO DUNGEON\n");
printf("---------------------\n");
printf("\nPress a direction key \"W, A, S, D\" followed with 'ENTER' for moving.\n\n");
char dir = -1;
ROOM1:
giveline();
printf("Somehow you've managed to wake up at this place. You see a LABEL on the wall.\n");
printf("\"Do you know what's more evil than an EVIL GOTO DUNGEON?\"\n");
printf("You're wondering what this cryptic message means.\n");
where(1, DIR_SE);
do{
dir = getdir();
if(dir == 'S') goto ROOM4;
if(dir == 'E') goto ROOM2;
}while(dir);
goto END;
ROOM2:
giveline();
printf("Besides another LABEL, this room is empty.\n");
printf("\"Let's play a game!\"\n");
where(2, DIR_W);
do{
dir = getdir();
if(dir == 'W') goto ROOM1;
}while(dir);
goto END;
ROOM3:
giveline();
printf("Man, dead ends are boring.\n");
printf("Why can't I escape this nightmare?\n");
where(3, DIR_S);
do{
dir = getdir();
if(dir == 'S') goto ROOM6;
}while(dir);
goto END;
ROOM4:
giveline();
printf("Is this a real place, or just fantasy?\n");
printf("\"All good things come in three GOTOs.\"\n");
where(4, DIR_NSE);
do{
dir = getdir();
if(dir == 'N') goto ROOM1;
if(dir == 'S') goto ROOM7;
if(dir == 'E') goto ROOM5;
}while(dir);
goto END;
ROOM5:
giveline();
printf("This is a big river crossing. I guess I need to JUMP.\n");
where(5, DIR_SWE);
do{
dir = getdir();
if(dir == 'S') goto ROOM8;
if(dir == 'W') goto ROOM4;
if(dir == 'E') goto ROOM6;
}while(dir);
goto END;
ROOM6:
giveline();
printf("This place doesn't look very promising.\n");
where(6, DIR_NSW);
do{
dir = getdir();
if(dir == 'N') goto ROOM3;
if(dir == 'S') goto ROOM9;
if(dir == 'W') goto ROOM5;
}while(dir);
goto END;
ROOM7:
giveline();
printf("\"Give a man a LOOP and you feed him FOR a WHILE;\n");
printf(" teach a man a GOTO and you feed him for a RUNTIME.\"\n");
where(7, DIR_NE);
do{
dir = getdir();
if(dir == 'N') goto ROOM4;
if(dir == 'E') goto ROOM8;
}while(dir);
goto END;
ROOM8:
giveline();
printf("This looks like an endless LOOP of rooms.\n");
where(8, DIR_NW);
do{
dir = getdir();
if(dir == 'N') goto ROOM5;
if(dir == 'W') goto ROOM7;
}while(dir);
goto END;
ROOM9:
giveline();
printf("You've found your old friend Domino. He doesn't looks scared, like you do.\n");
printf("\n\"Listen my friend,\n");
printf(" If you want to escape this place, you need to find the ESCAPE KEY.\"\n");
printf("\nWhat does this mean?\n");
where(9, DIR_N);
do{
dir = getdir();
if(dir == 'N') goto ROOM6;
}while(dir);
goto END;
printf("You never saw me.\n");
END:
giveline();
printf("The End\n");
return 0;
}
Answer:
May someone teach me some hidden tricks in using GOTO?
goto is kind of a one trick pony. But it is part of a family of control transfer constructs. All the looping and decision structures are in a sense a refined or specialized application of a goto. So from that point of view, using goto is advisable only if it cannot be done with any of the "normal" control structures like if, while, for etc.
The next level up is function calls. A function call is a super-powered goto. Not only can you jump and execute a named piece of code, but you can also jump back right where you came from. Plus you can pass arguments and return a value, if desired.
Another level up is making use of function pointers. A pointer to a function can be saved in a variable or an array. It can be passed to and returned from functions. For a state machine like in the question, I'd be very tempted to organize the rooms into an array of function pointers. But I'd probably use a lot of macros, too.
The next level up from functions is setjmp/longjmp. These let you jump back across several levels of the call stack. It's sometimes useful to have a setjmp call in the main loop or initialization of the program and then the program can restart or bail-out if it runs into certain recoverable errors.
I suppose the next level up might be signal handlers and/or forking off child processes. Or maybe loading a dynamic library. | {
"domain": "codereview.stackexchange",
"id": 39678,
"tags": "c, adventure-game"
} |
Abuse of notation in GR? $f(x)$ vs. $(f \circ \psi^{-1})(x)$ | Question: I see some GR books write $f(x)$ (or even $f(x^\mu)$) when talking about a function on a manifold. But with $f: M \to \mathbb R$ and coordinates defined by a chart $\psi: M \to \mathbb R^n$, shouldn't the notation rather be $(f \circ \psi^{-1})(x)$ when evaluating $f$ at the point corresponding to $x$? The only one I see write it like this is Wald; even Carroll, who uses a chart construction similar to Wald's, writes it the first way a lot.
Is this simply a common abuse of notation, or am I actually missing something?
Answer: It's an abuse of notation. If the chart is given by $(U,x)$ with $x:M\rightarrow \mathbb R^n$ the chart map, then for each $p\in U\subseteq M$ we have
$$f(p) = \bigg(f\circ x^{-1}\bigg)\big(x(p)\big) \equiv f_x\big(x(p)\big)$$
where $f_x :\mathbb R^n \rightarrow \mathbb R$ is the local expression of $f$ in the chart $x$. When working with functions on manifolds, one almost always works at the chart level with objects like $f_x$, with the understanding that they "lift" to a well-defined function $f$ at the manifold level. Of course, there are objects that appear in charts which do not exhibit this behavior, such as the connection coefficients $\Gamma$. These objects are only defined in a chart, and are not tensorial in nature.
In any case, in my opinion it's pedagogically very important to make the distinction between functions on a manifold and functions in a chart. Once the issue is fully understood, I might be tempted to relax a bit and write fewer symbols, with clarifications added as necessary. | {
"domain": "physics.stackexchange",
"id": 84068,
"tags": "general-relativity, differential-geometry, notation"
} |
Why does firing electromagnetic radiation at populations of charged particles put them 'in phase'? | Question: I don't have a strong background in physics, but instead I'm educated in biochemistry. A lot of the principles behind the methods we use in structural biology perplex me. To give two examples:
1) FT-ICR MS (Fourier Transform Ion Cyclotron Resonance Mass Spectrometry) involves trapping ions in a Penning trap, and detecting the frequency at which they orbit the trap's longitudinal axis with the induction of current on two 'peripheral' coils. If the ions are not in phase, the signal on both coils will add to zero. To get them in phase, a scan of electromagnetic radiation is shot at them, and they will absorb relevant frequencies to their cycle time. Why is this the case?
2) In NMR (nuclear magnetic resonance) we can fire radio frequencies at samples of huge numbers of atomic nuclei to put their nuclear magnetic dipole moments spins in phase.
By this logic, could we (with a brilliant enough light source) change the spin of the earth, since it somewhat exhibits the properties of a 'bar-magnet' subatomic particle? I know this sounds dumb -- I guess what I'm looking for is... what is this property of charged particles that allows you to bring them positionally 'in phase' just by shooting them with light?
Answer: The principle behind this is similar to the driven harmonic oscillator in classical mechanics in case that is more relatable. The energy of the electromagnetic radiation is absorbed when the driving frequency is close to the frequency of the free harmonic oscillator i.e. its resonance frequency. This is possible, because the electromagnetic field interacts with the electric or magnetic dipole moment of the particles.
Whether this causes the particles to be in phase depends on what you actually mean by being in phase. In your first example it is about the phase of their oscillation in regard to the spatial position. I'm not really familiar with cyclotron resonance but as far as I understand the excitation of the particles to a greater orbit leads to a higher probability of particles bunching in packets (please edit this part if it's wrong). I don't think all of the particles in the trap need to be in phase in order to produce a usable signal.
In the case of NMR, being in phase means that the spins point in the same direction during the temporal dynamics. This entirely depends on the type of NMR measurement you want to perform. In the equilibrium state, about half of the spins point in one direction and the other half in the other, with a small imbalance that causes a net magnetization. These spins are therefore already in phase. For a $T_1$ measurement you would simply flip this magnetization by a pulse, which doesn't change the phase effectively. For a $T_2$ measurement the initial phase is also kept when the magnetization is brought into the equatorial plane by a shorter pulse. Subsequent pulses only lead to a refocusing of the individual phases, which drift apart in the subsequent dynamics (see pulse echo). Therefore it could be said that pulses in NMR suppress dephasing rather than setting a phase.
Lastly, whether the magnetization of the earth could be reversed by some electromagnetic pulses can't really be answered right now, as it is not understood what causes the poles to reverse in the first place. The current theory establishes turbulent plasma currents near the inner core as the main source for the magnetic field. So I guess you would somehow need to reverse the direction of those currents, which isn't quite the same mechanism as reversing the magnetization of a nuclear spin. | {
"domain": "physics.stackexchange",
"id": 53769,
"tags": "quantum-mechanics, electromagnetism"
} |
Search for row number of maximum value in matrix column | Question: Background
I'm writing code that searches for the row number of the maximum value in a partial column of a matrix. The partial column corresponds with the non-zero values of an equivalent lower triangular matrix. That is, starting from the input row index idx through the final row. (This is my own version of a subroutine for Gaussian elimination with partial pivoting, for those who are interested.)
Basically, MaxLocColumnWise searches for the location of the column-wise maximum in the submatrices of A. Passing by reference, MaxLocColumnWise changes the value of k.
For example, for the following matrix
$$
A = \begin{bmatrix}1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9
\end{bmatrix}
$$
when k=0, MaxLocColumnWise(A,k) converts k to the index of maximum element in [1,4,7], which is 2, which is the index of 7 i.e. we get 2 as a result of
int k = 0;
MaxLocColumnWise(A,k);
std::cout << k <<"\n";
when k=1, MaxLocColumnWise(A,k) converts k to the index of maximum element in [5,8], which is 2, which is the index of 8. (This case is highlighted in red here)
when k=2, MaxLocColumnWise(A,k) converts k to the index of maximum element in [9], which is 2, which is the index of 9
Code
Interestingly it turns out that the following code takes quite a long time.
void MaxLocColumnWise(Matrix A, int &idx){
int n = A.size();
int col = idx;
double currentAbsMax = abs(A[col][col]);
for (int i=(col+1); i<n; i++){
double currentVal = abs(A[i][col]);
if (currentVal > currentAbsMax){
currentAbsMax = currentVal;
idx = i;
}
}
}
I have implemented this subroutine to the following Gaussian elimination with partial pivoting routine:
void GaussElimPartialPivot(Matrix& A, Vector& b){
int n = A.size();
for (int j=0; j<(n-1); j++){
int index = j;
MaxLocColumnWise(A, index);
SwapMatrixRows(A, j, index);
SwapVector(b, j, index);
// main loop
for (int i=(j+1); i<n; i++){
double m = A[i][j]/A[j][j];
b[i] -= m*b[j];
for(int k=j; k<n; k++){
A[i][k] -= m*A[j][k];
}
}
}
}
But when A gets large, the program gets slower precisely due to the subroutine MaxLocColumnWise, which was verified by excluding each subroutine from the main code in turn.
But I'm not sure exactly where in MaxLocColumnWise() to blame. Any help will be appreciated.
(An excuse for the exclusion of the code: Matrix is just from typedef std::vector<Vector> Matrix;, and Vector is from typedef std::vector<double> Vector;)
Answer: I see a number of things that may help you improve your code.
Pass by const reference where practical
The first argument to MaxLocColumnWise is a Matrix but that causes the entire input matrix to be duplicated. Better would be to make it const Matrix & because it is not modified and it doesn't need to be duplicated. This is very likely the crux of your code's performance problem. On my machine with a matrix of size 1000, that single change drops the execution time down from 3.1 seconds to 11 milliseconds.
Prefer return value over reference
Instead of modifying one of the passed parameters, it's often better to return a value instead. So in this case, the function would be
int MaxLocColumnWise(const Matrix &A, int idx);
Check parameters before use
If idx is a negative number or beyond the end of the Matrix, your program will invoke undefined behavior and it could crash or worse. Better would be to verify the value is in a valid range before use.
Use appropriate data types
An index into an array is never negative, so instead of int, I'd recommend using std::size_t.
Provide complete code to reviewers
This is not so much a change to the code as a change in how you present it to other people. Without the full context of the code and an example of how to use it, it takes more effort for other people to understand your code. This affects not only code reviews, but also maintenance of the code in the future, by you or by others. One good way to address that is by the use of comments. Another good technique is to include test code showing how your code is intended to be used. I split your code into a header Gauss.h and implementation file Gauss.cpp and then created a test driver. These are the resulting files, after applying all of the suggestions above:
Gauss.h
#ifndef GAUSS_H
#define GAUSS_H
#include <vector>
typedef std::vector<double> Vector;
typedef std::vector<Vector> Matrix;
std::size_t MaxLocColumnWise(const Matrix &A, std::size_t idx);
// Row-swap helpers called by GaussElimPartialPivot (implementations not shown in the question)
void SwapMatrixRows(Matrix &A, std::size_t i, std::size_t j);
void SwapVector(Vector &b, std::size_t i, std::size_t j);
void GaussElimPartialPivot(Matrix& A, Vector& b);
#endif // GAUSS_H
Gauss.cpp
#include "Gauss.h"
#include <cmath>
std::size_t MaxLocColumnWise(const Matrix &A, std::size_t idx){
auto maxIndex{idx};
const auto col{idx};
for (double maxValue{-1}; idx < A.size(); ++idx) {
auto currentVal{std::abs(A[idx][col])};
if (currentVal > maxValue){
maxValue = currentVal;
maxIndex = idx;
}
}
return maxIndex;
}
void GaussElimPartialPivot(Matrix& A, Vector& b) {
const std::size_t n{A.size()};
for (std::size_t j{0}; j + 1 < n; ++j) {
auto maxcol{MaxLocColumnWise(A, j)};
SwapMatrixRows(A, j, maxcol);
SwapVector(b, j, maxcol);
for (std::size_t i{j-1}; i < n; ++i) {
double m{A[i][j]/A[j][j]};
b[i] -= m*b[j];
for (auto k{j}; k < n; ++k){
A[i][k] -= m*A[j][k];
}
}
}
}
main.cpp
#include "Gauss.h"
#include <cstddef>
#include <iostream>
#include <numeric>
int main() {
    constexpr std::size_t n{1000};
    Matrix m;
    m.reserve(n);
    int startval{1};
    for (std::size_t i{0}; i < n; ++i) {
        Vector v(n);
        std::iota(v.begin(), v.end(), startval);
        m.push_back(v);
        startval += n;
    }
    // Each column's largest value is in the last row, so MaxLocColumnWise
    // should always return n-1 for this test matrix.
    for (std::size_t i{0}; i < n; ++i) {
        auto max{MaxLocColumnWise(m, i)};
        if (max != n-1) {
            std::cout << "Error: " << i << ", " << max << '\n';
        }
    }
} | {
"domain": "codereview.stackexchange",
"id": 34362,
"tags": "c++, performance, matrix"
} |
Project Euler 22: Names scores by Loki | Question: Started working through Project Euler this weekend.
Project Euler 22
Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score.
For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
#include <iostream>
#include <fstream>
#include <iterator>
#include <string>
#include <set>
#include <numeric>
#include "PunctFacet.h"
using ThorsAnvil::Util::PunctFacet;
long scoreName(std::string const& name)
{
    return std::accumulate(std::begin(name), std::end(name), 0L,
                           [](long v1, char x){return v1 + x - 'A' + 1;});
}
int main()
{
    // Open a file that considers " and , as space and thus ignores them
    std::ifstream data;
    data.imbue(std::locale(std::locale(), new PunctFacet(std::locale(), "\",")));
    data.open("euler/e22.data");

    // read all the names into a set (it's sorted)
    std::set<std::string> names{std::istream_iterator<std::string>(data),
                                std::istream_iterator<std::string>()};

    // Calculate the result
    long score = 0;
    long loop  = 1;
    for(auto name: names) {
        score += (loop * scoreName(name));
        ++loop;
    }
    std::cout << score << "\n";
}
PunctFacet.h
#ifndef THORSANVIL_UTIL_PUNCT_FACET_H
#define THORSANVIL_UTIL_PUNCT_FACET_H
#include <locale>
#include <string>
#include <sstream>
namespace ThorsAnvil
{
namespace Util
{
// This is my facet that will treat the characters in `extraSpace`
// as space characters and thus ignore them with formatted input
class PunctFacet: public std::ctype<char>
{
    public:
        typedef std::ctype<char>    base;
        typedef base::char_type     char_type;

        PunctFacet(std::locale const& l, std::string const& extraSpace)
            : base(table)
        {
            std::ctype<char> const& defaultCType = std::use_facet<std::ctype<char> >(l);

            // Copy the default value from the provided locale
            static char data[256];
            for(int loop = 0;loop < 256;++loop) { data[loop] = loop;}
            defaultCType.is(data, data+256, table);

            // Modifications to default to include extra space types.
            for(auto space: extraSpace) {
                table[space] |= base::space;
            }
        }
    private:
        base::mask table[256];
};
}
}
#endif
Results
> g++ -O3 -std=c++14 euler/e22.cpp
> time ./a.out
871198282
real 0m0.018s
user 0m0.009s
sys 0m0.007s
Answer: #include <numeric>!
I see no reason why the individual score is computed via std::accumulate while the total uses a loop. I recommend std::inner_product there, at least for consistency.
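For illustration, here is one way that one-pass formulation could look; scoreName mirrors the OP's version, while the totalScore helper and the positions vector are my own hypothetical names, not part of the posted code:

```cpp
#include <functional>
#include <numeric>
#include <string>
#include <vector>

// Alphabetical value of a name, as in the OP's code: A=1, B=2, ...
long scoreName(const std::string& name) {
    return std::accumulate(name.begin(), name.end(), 0L,
        [](long v, char c) { return v + c - 'A' + 1; });
}

// Total of position * score over an already-sorted list of names,
// expressed with std::inner_product instead of a manual loop.
long totalScore(const std::vector<std::string>& sortedNames) {
    std::vector<long> positions(sortedNames.size());
    std::iota(positions.begin(), positions.end(), 1L);   // 1-based positions
    return std::inner_product(
        sortedNames.begin(), sortedNames.end(), positions.begin(), 0L,
        std::plus<long>{},                               // how to combine the products
        [](const std::string& name, long pos) { return pos * scoreName(name); });
}
```

Whether this reads better than the explicit loop is a matter of taste; its advantage is that the accumulation pattern is named rather than hand-rolled.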
Similarly, the
for(int loop = 0;loop < 256;++loop) { data[loop] = loop;}
is
std::iota(data, data + 256, 0);
Of course, the magic 256 should be replaced by a symbolic name.
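One possible spelling of those two suggestions together; the constant's name and the helper are illustrative choices of mine, not prescribed by the review:

```cpp
#include <cstddef>
#include <numeric>

// One entry per possible char value; named instead of the magic 256.
constexpr std::size_t kCharTableSize = 256;

// Fill the lookup table so that data[i] == i, replacing the hand-written loop.
void fillIdentityTable(char (&data)[kCharTableSize]) {
    std::iota(data, data + kCharTableSize, 0);
}
```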
I don't feel comfortable with std::set being used just to avoid sorting: it obscures the intention. You seem to have the same feeling, since you added the clarifying comment. I recommend being explicit: read the names into a vector and sort it. As a bonus, that would also reduce space requirements. | {
"domain": "codereview.stackexchange",
"id": 25496,
"tags": "c++, programming-challenge, c++17"
} |