anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Are charge and energy inter-convertible? | Question: During pair annihilation ($e^- + e^+ \to$ gamma radiation), is it not the case that the charges in the pair (equal yet opposite) are converted into energy? And does this also mean that energy (gamma radiation) can be converted into charge, and vice versa (during the reverse process, pair production)?
In general, does a law of charge-energy equivalence exist?
P.S.: A detailed answer is requested, as I am a beginner. Sorry for the inconvenience.
Answer: No, they're not, because each of them must be conserved separately. A single electron cannot just transform into photons, because the electron has negative charge and the photons have zero charge.
What's going on in the pair annihilation process (the most common case being $e^+ e^- \to \gamma\gamma$) is that different forms of energy are being converted into each other: the electron and positron have energy due to their movement and also due to their mass; the photons have no mass, so their energy is entirely kinetic, but the total is the same before and after.
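A tiny numeric illustration of that bookkeeping, for a pair annihilating at rest (the 0.511 MeV figure is the standard electron rest energy):

```python
M_E_C2 = 0.511  # electron (and positron) rest energy in MeV

# e+ e- annihilation at rest: the kinetic energy is ~0, so the total energy
# before is just the two rest energies; momentum conservation splits it
# equally between the two photons.
total_before = 2 * M_E_C2         # 1.022 MeV in
photon_energy = total_before / 2  # energy of each gamma

# Energy balances (1.022 MeV in, 2 x 0.511 MeV out), and so does charge:
# (+1) + (-1) = 0 before, 0 after -- each quantity is conserved on its own.
print(photon_energy)  # 0.511
```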
Now, this doesn't rule out the opposite process $\gamma\gamma \to e^+ e^-$, and as far as I know it's entirely possible for two gamma photons to transform into an electron-positron pair. But the net charge is always zero. | {
"domain": "physics.stackexchange",
"id": 39406,
"tags": "electrostatics"
} |
How to calculate accuracy on keras model with multiple outputs? | Question: I have a keras model that takes in an image with (up to) 5 MNIST digits and outputs a length and then (up to) 5 digits. I see that model.evaluate() reports accuracies for each of the outputs but how do I determine how good the model is at predicting the numbers? Do I need to write that myself?
Answer: It's going to take a bit of engineering - since you have a variable size output, you need to encode the length into the output in order to evaluate the accuracy of the model overall. If instead of outputting "up to 5 digits", you output an array of 5 predictions, where some non-digit (such as -1) operates as indicating that there is no digit present, you can better evaluate your network. If you retrain your network as such (where $X$ is the array of images and $Y$ is an array containing arrays of form $[1,4,3,-1,-1]$, for example), then model.evaluate($X_{test}$,$Y_{test}$) will work as expected.
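For instance, a minimal sketch of that fixed-length encoding and an exact-match accuracy over it (the helper names and the -1 sentinel handling are illustrative choices, not part of the question's actual model):

```python
PAD = -1  # sentinel meaning "no digit in this slot"

def encode(digits, length=5):
    """Pad a variable-length digit list out to a fixed-length array."""
    return list(digits) + [PAD] * (length - len(digits))

def sequence_accuracy(y_true, y_pred):
    """Fraction of samples whose entire fixed-length encoding matches exactly."""
    matches = sum(t == p for t, p in zip(y_true, y_pred))
    return matches / len(y_true)

y_true = [encode([1, 4, 3]), encode([7])]
y_pred = [encode([1, 4, 3]), encode([7, 2])]  # second prediction has a spurious digit
print(sequence_accuracy(y_true, y_pred))  # 0.5
```

With a real model, y_pred would first be built by decoding the network's raw outputs into digit lists.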
If you don't want to re-train your network, you can write a simple function to take the output from model.predict($X_{test}$) and encode it into the corresponding format. This encode function will simply go from $[1,4,3]$ to $[1,4,3,-1,-1]$. You can then calculate the accuracy by sklearn.metrics.accuracy_score($encode$(model.predict($X_{test}$)),$Y_{test}$), where $encode$ is the aforementioned function. | {
"domain": "datascience.stackexchange",
"id": 1407,
"tags": "keras, accuracy"
} |
Rviz does not display image (invalid drawable) on Mac OSX 10.8.3 | Question:
When I rosrun rviz rviz and add an Image display, a video stream coming from the /image_raw topic should appear at the bottom left. However, the image window freezes. In the console I am getting this:
rosrun rviz rviz
[ INFO] [1370417297.135309000]: rviz version 1.9.30
[ INFO] [1370417297.135369000]: compiled against OGRE version 1.7.4 (Cthugha)
2013-06-05 16:28:17.144 rviz[33669:f0b] *** WARNING: Method userSpaceScaleFactor in class NSView is deprecated on 10.7 and later. It should not be used in new applications. Use convertRectToBacking: instead.
2013-06-05 16:28:17.813 rviz[33669:f0b] invalid drawable
[ INFO] [1370417297.847879000]: OpenGl version: 2.1 (GLSL 1.2).
2013-06-05 16:28:17.921 rviz[33669:f0b] invalid drawable
2013-06-05 16:28:31.311 rviz[33669:f0b] invalid drawable
2013-06-05 16:28:31.376 rviz[33669:f0b] invalid drawable
...
I know this is an old problem; however, I am asking whether any progress has been made on it recently.
I am using mac osx 10.8.3, groovy.
UPDATE
If I rosrun rviz rviz -l after adding image, I get the following console output
[ INFO] [1371039331.242585000]: Texture: ROSImageTexture0: Loading 1 faces(PF_A8R8G8B8,420x300x1) with hardware generated mipmaps from Image. Internal format is PF_A8R8G8B8,420x300x1.
[ INFO] [1371039331.256845000]: GLRenderSystem::_createRenderWindow "OgreWindow(2)", 720x382 windowed miscParams: externalGLControl= externalWindowHandle=140700326980512 macAPI=cocoa macAPICocoaUseNSView=true
[ INFO] [1371039331.256900000]: Creating a Cocoa Compatible Render System
[ INFO] [1371039331.257431000]: Mac Cocoa Window: Rendering on an external plain NSView*
2013-06-12 21:15:31.257 rviz[21491:f0b] invalid drawable
2013-06-12 21:15:31.338 rviz[21491:f0b] invalid drawable
2013-06-12 21:15:31.382 rviz[21491:f0b] invalid drawable
...
UPDATE 2
I’ve built ogre-1.8.1 and linked libOgreRTShaderSystem.dylib to /usr/local/lib/libOgreMain.dylib, because the latest versions of Ogre do not have libOgreMain.dylib. Then I was able to begin building rviz; however, at 87% I got many linking errors
Linking CXX shared library /Users/artemlenskiy/Documents/Research/ros/mycatkin/devel/lib/libdefault_plugin.dylib
Undefined symbols for architecture x86_64:
"Ogre::Quaternion::FromAngleAxis(Ogre::Radian const&, Ogre::Vector3 const&)", referenced from:
Ogre::Quaternion::Quaternion(Ogre::Radian const&, Ogre::Vector3 const&) in camera_display.cpp.o
Ogre::Vector3::getRotationTo(Ogre::Vector3 const&, Ogre::Vector3 const&) const in interactive_marker_control.cpp.o
Ogre::Quaternion::Quaternion(Ogre::Radian const&, Ogre::Vector3 const&) in interactive_marker_control.cpp.o
Ogre::Vector3::getRotationTo(Ogre::Vector3 const&, Ogre::Vector3 const&) const in arrow_marker.cpp.o
Ogre::Quaternion::Quaternion(Ogre::Radian const&, Ogre::Vector3 const&) in shape_marker.cpp.o
Ogre::Quaternion::Quaternion(Ogre::Radian const&, Ogre::Vector3 const&) in odometry_display.cpp.o
Ogre::Quaternion::Quaternion(Ogre::Radian const&, Ogre::Vector3 const&) in pose_display.cpp.o
...
So, unfortunately, I couldn’t test whether Ogre 1.8.1 solves the rviz problem or not. The latest Ogre 1.9.1RC1 does come as a framework in a dmg, but I didn’t want to mess with it.
Originally posted by Artem on ROS Answers with karma: 709 on 2013-06-04
Post score: 4
Original comments
Comment by William on 2013-06-05:
I'll try to reproduce this.
Comment by Artem on 2013-06-12:
I thought maybe the old Ogre version causes it, so I tried to compile ogre-1.9RC1, but after compilation I couldn't find the dylib.
Comment by William on 2013-06-12:
Sorry, I haven't had time to reproduce this. If you can find the dylib manually you can add it to the DYLD_LIBRARY_PATH; this is likely caused by something not properly setting the rpath for the dylib.
Comment by bombilee on 2013-07-14:
@Art if you use ogre-1.8.1, then for the missing libOgreMain.dylib create a symbolic link from the Ogre.framework/Ogre; after that, rviz should be able to compile with ogre 1.8.1.
ex. ln -s /usr/local/Cellar/ogre/1.8.1/lib/Release/Ogre.framework/Ogre /usr/local/Cellar/ogre/1.8.1/lib/libOgreMain.dylib
Comment by Artem on 2013-07-14:
I came to the conclusion that it's not Ogre; there is something else that causes the image view in rviz to freeze. I think it has something to do with OpenGL support in Cocoa. For now I'm using rosrun image_view; everything else works fine in rviz. I hope @William or somebody else will come up with a solution.
Answer:
I can reproduce this, I opened a ticket: https://github.com/ros-visualization/rviz/issues/646
Originally posted by William with karma: 17335 on 2013-06-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 14430,
"tags": "rviz, osx"
} |
Under what circumstances can a Prover in an Interactive Proof System simply simulate the Verifier? | Question: I have read the Arora and Sipser chapters discussing IP, but have not seen mention of such a possibility. Given that a Prover is computationally unbounded, is it not possible under certain circumstances that the Prover could just build all possible Verifiers for the language at hand and simulate one that closely (or perfectly) mimics the one it is interacting with?
Perhaps I'm being ignorant, but it seems to me an aspect of Interactive Proof Systems that is assumed but poorly specified is that the Prover must know something (maybe even everything) about the language we are trying to verify, or else the Prover wouldn't be very good at being convincing. Put another way: It seems the Prover knows what the problem being verified is, and this aspect of the Prover doesn't seem well specified.
If the Prover knows what the problem/language at hand is then is it not possible that even in the private coin model given certain properties of the problem the Prover could just figure out all reasonable verifiers for the problem and simulate them?
Answer: The prover can indeed simulate a verifier: indeed, the prover can simulate all verifiers which perform their task properly, and do so for each possible collection of coin flips. So if there are a million possible verifiers and 100 coins, the prover can just simulate all $1000000\cdot2^{100}$ possibilities for each of the possible proofs it would generate. Indeed, you could even use this as an algorithm for finding a proof for the verifier: generate all proofs of length 1, 2, ... until you find one that works for all possible verifiers and coin states. But for interactive proof systems, we usually just care about whether there is such a proof, or whether even an all-powerful system would not be able to find one. | {
"domain": "cstheory.stackexchange",
"id": 5608,
"tags": "interactive-proofs"
} |
PHP List Iteration | Question: Gentlemen!
I am hoping to find a more efficient (less hairy) way to do the following code.
The public interface will call calculateListItemValue and pass the following variables
$source_list: stdClass() with a structure similar to List JSON below.
$target_list: stdClass() with a structure similar to List JSON below.
$item_index: integer index pointing to the index of the array (list)->items[$item_index]
$type: string value containing the controlling type $source_list->attribute->type to be compared to
calculateListItemValue should check the value of attribute->value and calculate the final value of the operation based on range and operator
If value >= 0 result should be positive
If value < 0 result should be negative
If range is 0 apply Operator() to self
If range is 4 apply Operator() to all items in source_list (or target_list if value < 0)
Else apply to items with attribute->attrNo == range
Code follows.
/* List JSON example
* {
* "attributes": {
* "operator": "+",
* "value": "10",
* "range": "2,3",
* "type": "ATK",
* "leader": true,
* "attrNo": 2
* }
* }
*
*/
private function calculateListItemValue(&$source_list, &$target_list, $item_index, $type = 'ATK')
{
$power = array();
$operator = $source_list->items[$item_index]->attribute->operator;
$value = intval($source_list->items[$item_index]->attribute->value);
$leader = $source_list->items[$item_index]->attribute->leader ? 1 : .3;
$item_type = $source_list->items[$item_index]->attribute->affect;
    $range = explode(',', $source_list->items[$item_index]->attribute->range);
if($value >= 0)
{
if(array_search('0', $range) !== false)
{
if($item_type == $type)
{
$power['power_up'] = $this->Operator($operator, ($type == 'ATK' ? $source_list->items[$item_index]->ATK : $source_list->items[$item_index]->DEF), $value, $leader);
}
}
elseif(array_search('4', $range) !== false)
{
foreach($source_list->items as $item)
{
if($item_type == $type)
{
$power['power_up'] += $this->Operator($operator, ($type == 'ATK' ? $item->ATK : $item->DEF), $value, $leader);
}
}
}
else
{
foreach($source_list->items as $item)
{
if(isset($item->attribute->attrNo))
{
if(array_search($item->attribute->attrNo, $range) !== false)
{
if($item_type == $type)
{
$power['power_up'] = $this->Operator($operator, ($type == 'ATK' ? $item->ATK : $item->DEF), $value, $leader);
}
}
}
}
}
}
elseif($value < 0)
{
if(array_search('0', $range) !== false)
{
return $power;
}
elseif(array_search('4', $range) !== false)
{
foreach($target_list->items as $item)
{
if($item_type == $type)
{
$power['power_down'] += $this->Operator($operator, ($type == 'ATK' ? $item->ATK : $item->DEF), $value, $leader);
}
}
}
else
{
foreach($target_list->items as $item)
{
if(isset($item->attribute->attrNo))
{
if(array_search($item->attribute->attrNo, $range) !== false)
{
if($item_type == $type)
{
$power['power_down'] += $this->Operator($operator, ($type == 'ATK' ? $item->ATK : $item->DEF), $value, $leader);
}
}
}
}
}
}
return $power;
}
private function Operator($operator, $base, $value, $mul)
{
$result = 0;
if($operator == '*')
{
$result = abs(ceil(($base * ($value * $mul)) - $base));
}
elseif($operator == '+')
{
$result = abs(ceil(($base + ($value * $mul)) - $base));
}
return $result;
}
Edit: Let me say that the code itself works fine, no problems, but it's a nightmare to look at and explain to other people.
Answer: Well, first off, you should be wary of using referenced parameters. They sometimes hurt legibility, and in this case they are not even necessary. The method you are using these referenced variables in is private, therefore the class is the only thing that will ever use these values. Why not use properties instead? That accomplishes the same thing, is more legible, and is more extensible.
private
$source_list,
$target_list
;
private function calculateListItemValue( $item_index, $type = 'ATK' ) {
$this->source_list;//do something to source list
$this->target_list;//do something to target list
//etc...
}
Second, why is this method so long? Break this up into multiple methods based on functionality. If we follow the Single Responsibility Principle, then our methods should only do just enough to fulfill their purpose. No more, no less. Methods can call other methods to accomplish their task, but they should not need to know how those other methods do so.
There are a lot of violations of the "Don't Repeat Yourself" (DRY) Principle. The following is a pretty basic example of how to fix one. You should also notice that I'm using type casting instead of intval(). This makes your code just a bit easier to read by removing the need to wrap the entire line in parentheses.
$attribute = $source_list->items[ $item_index ]->attribute;
$operator = $attribute->operator;
$value = ( int ) $attribute->value;
$leader = $attribute->leader ? 1 : .3;
$item_type = $attribute->affect;
Why are you using array_search() here? array_search() returns the array key if found, FALSE otherwise. Because of this you are forced to explicitly check for FALSE. If that is all you want, why not use in_array() instead? It is cleaner because it only ever returns TRUE/FALSE. It's either found or it isn't. BTW: there is no need to treat "0" as a string. Just use the integer; PHP is loosely typed and will allow it.
if( in_array( 0, $range ) ) {
//etc...
}
Aside from the principles I've already mentioned, your code also exhibits the Arrow Anti-Pattern. By this I mean that your code is too heavily indented. The two principles I mentioned above will help. For instance, each if/else statement terminates in a final check ($item_type == $type). This check could be done once, just before the first if statement, returning early. This follows the DRY principle, helps with the Arrow Anti-Pattern by removing a level of indentation across the entire method, and increases efficiency, meaning your program will run faster.
if( $item_type != $type ) {
return $power;//empty array because nothing was done with it.
}
Let's talk about ternary statements for a moment. Ternary is a very powerful tool, and is wonderful if used correctly. However, it quickly becomes a pain and makes your code illegible if used incorrectly. If you find that your statements become too long, then you should opt for a full if/else structure. If you find that you are opting for multiline ternary to make them more legible, then you should opt instead for a full if/else structure. If you find the statement too complex, making it difficult to read, then you should opt for the full if/else structure. So...
$power['power_up'] = $this->Operator($operator, ($type == 'ATK' ? $source_list->items[$item_index]->ATK : $source_list->items[$item_index]->DEF), $value, $leader);
//compared to
if( $type == 'ATK' ) {
    $power[ 'power_up' ] = $this->Operator( $operator, $source_list->items[ $item_index ]->ATK, $value, $leader );
} else {
$power[ 'power_up' ] = $this->Operator( $operator, $source_list->items[ $item_index ]->DEF, $value, $leader );
}
You should notice an immediate difference in how this looks. In the first example, that could easily be mistaken for multiple statements (or so it appeared in my editor), or mistaken for a really long single statement as the ternary is hard to spot. In the second you instantly know what's going on. Of course, you'll notice that the above is still violating the DRY principle and is still a little difficult to read. So let's modify it a little more.
$item = $source_list->items[ $item_index ];
if( $type == 'ATK' ) {
$power_up = $item->ATK;
} else {
$power_up = $item->DEF;
}
//or ternary now
$power_up = $type == 'ATK' ? $item->ATK : $item->DEF;
$power[ 'power_up' ] = $this->Operator( $operator, $power_up, $value, $leader );
There, much better. The rest of your code pretty much follows the same logic. All of it can benefit from the above suggestions, so I will stop here and allow you to do the rest. If you need any clarification, let me know. | {
"domain": "codereview.stackexchange",
"id": 2516,
"tags": "php"
} |
Install on Ubuntu 16.04 fails | Question:
Hello-
I'm trying to install ros-desktop-full and have tried several times ending in the same error. There are dependency problems reported as --configure fails for several sub-components. Here is the first encountered:
sudo apt-get install ros-desktop-full
.....OK so far...numerous packages downloaded and installed...
Setting up ros-kinetic-pcl-msgs (0.2.0-0xenial-20161026-182630-0700) ...
dpkg: dependency problems prevent configuration of ros-kinetic-pcl-conversions:
ros-kinetic-pcl-conversions depends on libpcl-dev; however:
Package libpcl-dev is not configured yet.
ros-kinetic-pcl-conversions depends on libpcl1.7; however:
Package libpcl1.7 is not configured yet.
dpkg: error processing package ros-kinetic-pcl-conversions (--configure):
dependency problems - leaving unconfigured
....etc.
Then a string of similar failures where --configure fails
Originally posted by thealy on ROS Answers with karma: 1 on 2017-01-23
Post score: 0
Answer:
You appear to be mixing Debian upstream packages (ros-desktop-full) with OSRF-hosted packages (ros-kinetic-pcl-msgs). It is not recommended to mix those.
http://wiki.ros.org/UpstreamPackages
Originally posted by tfoote with karma: 58457 on 2017-02-07
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 26804,
"tags": "ubuntu"
} |
launch file to run nodes on multiple machines | Question:
I am trying to write a launch file to run a node on my turtlebot robot and visualize the results in rviz on my workstation. To start the turtlebot I would like to include the minimal.launch file, which is located on the turtlebot.
The launch file I am writing is on the workstation.
Can someone help me configure the "include" statement to let the launch file know that minimal.launch is located on the turtlebot? The "include" tag does not have a "machine=..." option.
What does this mean "You can also use the $(env ENVIRONMENT_VARIABLE) syntax within include tags to load in .launch files based on environment variables (e.g. MACHINE_NAME)."
Also, do you have a launch file to do what is stated in: Talker / listener across two machines
www.ros.org/wiki/ROS/Tutorials/MultipleMachines
Originally posted by Solmaz on ROS Answers with karma: 3 on 2013-08-13
Post score: 0
Answer:
You have to look at section 4 of this page.
Originally posted by Fabien R with karma: 90 on 2013-11-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15248,
"tags": "roslaunch"
} |
Tic-Tac-Toe solver | Question: As a programming exercise, I've attempted a Tic-Tac-Toe solver which takes a finished board and determines a winner.
On top of checks for malformed input/ambiguous games (e.g. both players with winning positions), what else can I do to improve this? In particular, is the coding style acceptable and are there better ways to check for win conditions?
def tictac(b):
"""
Parses a tic-tac-toe board and returns a winner.
Input: a list of lists containing values 0 or 1.
0 corresponds to 'noughts' and 1 to 'crosses'
e.g.
>>> gameboard = [[0,0,1],[0,1,0],[1,1,0]]
>>> tictac(gameboard)
X wins
"""
# If the sum of a column/row/diagonal is 0, O wins.
# If the sum is number of rows/columns, X wins.
winner = ""
board_range = range(len(b))
# Check rows and columns.
for i in board_range:
row_sum = sum(b[i])
col_sum = sum([x[i] for x in b])
if row_sum == 0 or col_sum == 0:
winner = "O wins"
elif row_sum == len(b) or col_sum == len(b):
winner = "X wins"
# Check the diagonals.
fwd_diag_sum = sum([b[i][i] for i in board_range])
bck_diag_sum = sum([b[i][len(b)-i-1] for i in board_range])
if fwd_diag_sum == 0 or bck_diag_sum == 0:
winner = "O wins"
if fwd_diag_sum == len(b) or bck_diag_sum == len(b):
winner = "X wins"
if winner:
print winner
else:
print "Game is a tie!"
For convenience, here's a little test function too:
def test_tic_tac():
def pretty_print(b):
for row in b:
print row
gameboard = [[0,0,1],[0,1,0],[1,1,0]]
pretty_print(gameboard)
tictac(gameboard)
gameboard = [[0,1,1],[0,0,0],[1,1,0]]
pretty_print(gameboard)
tictac(gameboard)
gameboard = [[1,0,1],[0,0,1],[0,1,0]]
pretty_print(gameboard)
tictac(gameboard)
gameboard = [[0,0,1,0],[1,1,0,1],[0,1,1,1],[0,1,0,1]]
pretty_print(gameboard)
tictac(gameboard)
Answer: Missing handling for incomplete boards of finished games
The program works only with complete boards. It won't work with this finished game with incomplete board, which is kind of a big drawback for a game evaluator:
o
x o o
o x x
Return result instead of printing
It would be better to not have the board evaluation logic and printing in the same method.
You should split these steps into separate functions.
This will make some optimizations easier (see my next point).
It will also help improve your testing technique.
Testing by reading output is not convenient.
It's best when tests either pass or fail,
so that you can verify if the program behaves correctly just by checking that everything passes.
One way to achieve this is using assertions, for example:
assert "X wins" == evaluate_board([[0, 0, 1], [0, 1, 0], [1, 1, 0]])
A better way is using the unittest package of Python,
and splitting your test method into distinct test cases for different kinds of winning positions, such as a win by row sum, column sum, or diagonal.
Wasted operations
In the loop that checks the row and column sums,
once you have found a winner, the loop happily continues,
and after the loop you check the diagonals,
when in fact you could return the winner as soon as you see it:
if row_sum == 0 or col_sum == 0:
return "O wins"
By returning sooner,
the final if winner statements could be replaced with a simple return "Game is a tie!"
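Putting the early-return idea together, one possible shape for the evaluator (a sketch only; it still assumes a complete 0/1 board, like the original):

```python
O_WINS, X_WINS, TIE = "O wins", "X wins", "Game is a tie!"

def evaluate_board(b):
    """Return the result string for a finished n x n board (0 = O, 1 = X)."""
    n = len(b)
    lines = []
    lines.extend(b)                                        # rows
    lines.extend([row[i] for row in b] for i in range(n))  # columns
    lines.append([b[i][i] for i in range(n)])              # main diagonal
    lines.append([b[i][n - 1 - i] for i in range(n)])      # anti-diagonal
    for line in lines:
        total = sum(line)
        if total == 0:
            return O_WINS  # return as soon as a winner is seen
        if total == n:
            return X_WINS
    return TIE

print(evaluate_board([[0, 0, 1], [0, 1, 0], [1, 1, 0]]))  # X wins
```

Because the function returns its result instead of printing it, it can be checked with assertions rather than by reading output.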
Duplicated string literals
The "O wins", "X wins" string literals appear multiple times.
It would be better to avoid such code duplication,
as it's prone to errors: if you ever change the message format,
you have to remember to make the same change everywhere.
Coding style
You should follow PEP8. | {
"domain": "codereview.stackexchange",
"id": 11961,
"tags": "python, python-2.x, tic-tac-toe"
} |
About making wiki pages and ROS packages | Question:
Hello,
I designed a wiki page for the ROS wiki. How can I send it for revision or upload it?
Thanks beforehand!
Originally posted by pexison on ROS Answers with karma: 82 on 2015-04-01
Post score: 0
Answer:
You can edit and create wiki pages yourself. Just create an account and go ahead.
Originally posted by dornhege with karma: 31395 on 2015-04-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by pexison on 2015-04-01:
About the design, how can I add the information about the creator, maintainer, status of the package, ...?
I have this till now: turtlebot_2dnav
Comment by Dirk Thomas on 2015-04-01:
If you click on "Documentation Status" on your wiki page you will see that your new repository has not yet been indexed. You have just added the repo to the rosdistro half an hour ago. The job will get generated over night and then it might take up to three days to index your repo. | {
"domain": "robotics.stackexchange",
"id": 21321,
"tags": "ros, wiki"
} |
Significance of single point energy when calculating interaction energies | Question: I am currently investigating the interaction behavior of a few atoms in certain conditions.
Is it possible to use the concept of single point energy to represent the atomic interaction energies, or do I have to go about it another way?
What is the basic difference between potential energy and single point energy?
Answer: Single point energy arises in the framework of the Born–Oppenheimer approximation and corresponds to just one point on the potential energy surface. Physically it is the total energy of the molecular system with its nuclei being fixed (or clamped) at some particular locations in space. In other words, it is the total energy of the molecular system within the so-called clamped nuclei approximation.
Mathematically, if you develop the Born–Oppenheimer approximation step-by-step you can easily see that the single point energy is the sum of the electronic energy and the nuclear repulsion potential energy,
$$
U = E_{\mathrm{e}} + V_{\mathrm{nn}} \, ,
$$
where the electronic energy $E_{\mathrm{e}}$ is the solution of the electronic Schrödinger equation,
$$
\hat{H}_{\mathrm{e}} \psi_{\mathrm{e}}(\vec{r}_{\mathrm{e}}) = E_{\mathrm{e}} \psi_{\mathrm{e}}(\vec{r}_{\mathrm{e}}) \, .
$$
The fact that at this point we use the symbol $U$, which (alongside $V$) is usually used for potential energy, to mean the single point energy is justified a little later. Namely, when we introduce the Born–Oppenheimer approximation, which gives rise to the nuclear Schrödinger equation,
$$
\Big( \hat{T}_{\mathrm{n}} + U(\vec{r}_{\mathrm{n}}) \Big) \psi_{\mathrm{n}}(\vec{r}_{\mathrm{n}}) = E \psi_{\mathrm{n}}(\vec{r}_{\mathrm{n}}) \, ,
$$
it is easy to recognize that the values of the single point energy $U$ for all possible nuclear configurations define the potential energy for nuclear motion. So, it is in this sense that the single point energy is related to the potential energy.
Update: it became clear that the OP misunderstood the notion of the single point energy $U$. Indeed, once we do a few single point calculations for different nuclear configurations, the resulting $U(\vec{r}_{\mathrm{n}})$ is the potential energy for nuclear motion. However, it is not the interaction energy between some fragments, though the interaction energy contributes to it. So if one wants to obtain the interaction energy, one has to decompose the potential energy $U(\vec{r}_{\mathrm{n}})$ into its parts.
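For instance, in the simplest (supermolecular) approach, the interaction energy of two fragments $A$ and $B$ is built from three separate single point energies (this is the standard textbook relation, given here for illustration rather than taken from the decomposition schemes below):
$$
E_{\mathrm{int}} = U_{AB} - U_{A} - U_{B} \, ,
$$
where $U_{AB}$ is the single point energy of the complex and $U_{A}$, $U_{B}$ are those of the isolated fragments. The schemes below go further and split $E_{\mathrm{int}}$ into physically motivated components.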
There are different ways to perform the energy decomposition; to name a few, in no particular order:
SAPT (Symmetry-Adapted Perturbation Theory), a separate program (a few of them, to be more precise) which can be interfaced with different quantum chemistry codes.
NEDA (Natural Energy Decomposition Analysis) which is available as a part of NBO package.
Morokuma decomposition already available in some quantum chemistry codes (GAMESS-US, for instance).
LMO-EDA (Localised Molecular Orbital Energy Decomposition Analysis) also available in some quantum chemistry codes (GAMESS-US, for instance). | {
"domain": "chemistry.stackexchange",
"id": 3609,
"tags": "computational-chemistry, energy, intermolecular-forces, electrostatic-energy"
} |
How is heat transfer calculated for an aqueous salt solution? | Question: I am familiar with using $\dot{Q}=c_p\cdot\dot{m}\cdot\Delta T$ to calculate the heat transfer rate of a fluid given a single value for specific heat capacity (such as with water), but how do I go about calculating the heat transfer rate for an aqueous solution such as $MgCl_2 (aq)$? Do I somehow use the heat capacities of both water and salt together?
Answer: The formula you are quoting is for estimating the heat exchange rate of a fluid that enters a control volume with rate $\dot m$ and has a change in temperature $\Delta T$.
If you know
the mass rate of the solution
the precise per weight ratio of your solution
the heat capacity of the elements
the temperature difference
And provided there are no endothermic or exothermic reactions, it is basically a pretty straightforward sum of the parts.
$$\dot{Q}_{total} = \dot{Q}_{water} + \sum _{i=1}^n \dot{Q}_{sub.i} $$
$$\dot{Q}_{total} = \dot{m}_{water}c_{p,water}\Delta T + \sum _{i=1}^n \dot{m}_{sub.i}c_{p,sub.i}\Delta T $$
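As a quick numerical sketch of that sum (the mass fractions and the salt's $c_p$ below are made-up placeholder values, not real property data):

```python
def q_dot_solution(m_dot, mass_fractions, cp_values, dT):
    """Heat rate of a mixture as the mass-weighted sum of its parts.

    m_dot          -- total mass flow rate [kg/s]
    mass_fractions -- component -> mass fraction (should sum to 1)
    cp_values      -- component -> specific heat [J/(kg*K)]
    dT             -- temperature change [K]
    """
    return sum(m_dot * w * cp_values[c] * dT
               for c, w in mass_fractions.items())

# Illustrative 10 wt% salt solution; the salt's cp is a placeholder number.
fractions = {"water": 0.90, "salt": 0.10}
cp = {"water": 4186.0, "salt": 750.0}  # J/(kg*K)
print(q_dot_solution(1.0, fractions, cp, 5.0))  # ~19212 W
```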
However, you will find that in most cases, because the $c_p$ of water is so much greater than that of most other substances and its weight percentage in most solutions is much higher, you probably don't need to bother. | {
"domain": "engineering.stackexchange",
"id": 3550,
"tags": "thermodynamics, heat-transfer, cooling, refrigeration"
} |
Current without Voltage and Voltage without Current? | Question: At school I've always learned that you can view Current and Voltage like this:
The current is the flow of charge per second and the Voltage is how badly the current 'wants' to flow.
But I'm having some trouble with this view. How can we have a Voltage without a current? There is nothing to 'flow', so how can it be there? Or is it a 'latent' voltage? I mean, is the voltage just always there, and if a current is introduced it flows?
Also, I believe you can't have current without voltage. This seems logical to me from the very definition of current. But if you have a 'charge' without a voltage, doesn't it just stay in one place? Can you view it like that? If you introduce a charge in a circuit without a voltage, does it just not move?
Answer: What flows is not the voltage but the charge, and that flow is called current. There can be voltage without a current; for instance if you have a single charge, that charge induces a voltage in space, even if it's empty. Voltage, in the most physical way, is a scalar field that determines the potential energy per unit charge at every point in space.
Now, you can't have currents without voltages because if there's a current there's a charge moving, and every charge produces a voltage, but you can have currents without voltage differences in space. For example, if you have a charged sphere, and you make it rotate, the charge will be on the surface and by rotating the sphere you will have a current on the surface, but the voltage is the same at every point of the surface. Also magnetization of materials can induce currents by the same way.
If you introduce a charge in a circuit without a voltage it just
doesn't move?
That's true, it won't move, unless you have some changing magnetic field that may introduce "voltage differences" between the same point, making $\nabla\times E\not =0$, although that wouldn't be electrostatic voltage the way you're seeing it. | {
"domain": "physics.stackexchange",
"id": 17253,
"tags": "voltage, electric-current"
} |
Geometry confusion with incident E field | Question: My geometry is very rusty, and I'm having trouble understanding why, for the incident E field, the X component is multiplied by cos and the -Z component is multiplied by sin instead of the other way around. Would someone please explain this?
Answer: Let's zoom in to your diagram a bit and draw in some extra angles:
This should make it obvious why $E_x = E \cos\theta$ and $E_z = E\sin\theta$. | {
"domain": "physics.stackexchange",
"id": 22884,
"tags": "electromagnetism, reflection, geometric-optics"
} |
Partial Legendre transform: understanding a simple example | Question: Consider the following function:
$$f(x_1, x_2) = x_1^2x_2+x_1x_2^3$$
$f$ is a function of $(x_1, x_2)$. The conjugate variables $(u_1, u_2)$ to $(x_1, x_2)$ are $$u_1 = \partial f/ \partial x_1 = 2 x_1 x_2 + x_2^3$$ and $$u_2 = \partial f/ \partial x_2 = x_1^2 + 3x_1x_2^2.$$
One can construct $$g=f - u_1 x_1$$ which, to my understanding, replaces $x_1$ to its conjugate variable, $u_1$. The differential of $g$ is
$$dg = u_2 dx_2 - x_1 du_1$$
thus $g=g(u_1, x_2)$. In words, $g$ is a function of the old variable $x_2$ and the conjucate variable to $x_1$, $u_1$.
Now I fail to see this in the above example. I can compute
$$g = f - u_1x_1 = -x_1^2x_2$$
but $g$ is then still a function of $(x_1, x_2)$, not $(u_1, x_2)$. What am I missing here?
In theory, I could invert the $u_1=u_1(x_1, x_2)$ and $u_2=u_2(x_1, x_2)$ equations, then plug $x_1(u_1, u_2)$ and $x_2(u_1, u_2)$ into $f(x_1, x_2)$ to get $f(u_1, u_2)$. But that's just a simple change of variable. What's the point of the Legendre transformation concept then?
Also, in this particular example, inverting the $u(x)$ equations doesn't seem obvious. How would one get the expression for $g(u_1, x_2)$?
Answer: I can show you the steps that you need to calculate the Legendre transformation; you will have to look at the theory behind it yourself.
For a given function of two variables $f(x,y)$ we are looking for the Legendre function $\tilde{f}(u,v)$.
Step I:
The determinant of the Hessian matrix $A$ must be nonzero, where $A$ is:
$$A= \left[ \begin{array}{cc} \dfrac{\partial ^{2}f}{\partial {x}^{2}} & \dfrac{\partial ^{2}f}{\partial x\,\partial y} \\[2ex] \dfrac{\partial ^{2}f}{\partial x\,\partial y} & \dfrac{\partial ^{2}f}{\partial {y}^{2}} \end{array} \right]
$$
Step II:
with the equations:
$$u=\frac{\partial f(x,y)}{\partial x}\tag 1$$
$$v=\frac{\partial f(x,y)}{\partial y}\tag 2$$
we get $x=f_x(u,y)$ from equation (1) and $y=f_y(u,x)$ from equation (2);
both solutions must exist
Step III:
with this solutions $f_x$ and $f_y$ we obtain the Legendre function
$$\tilde{f}(u,v)=u\,f_x(u,v)+v\,f_y(u,v)-f(f_x(u,v),f_y(u,v))$$
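Before the worked example, here is a quick numeric sanity check of the conjugate-variable definitions in Step II (an illustrative Python sketch, not part of the original answer): the analytic derivatives of $f(x,y)=x^2y+xy^3$ are compared against central finite differences at an arbitrary sample point.

```python
# Hypothetical sanity check: compare the analytic conjugate variables
# u = df/dx and v = df/dy of f(x, y) = x^2*y + x*y^3 against central
# finite differences at a sample point.
def f(x, y):
    return x * x * y + x * y ** 3

x0, y0, h = 1.3, 0.7, 1e-6
u_exact = 2 * x0 * y0 + y0 ** 3           # analytic df/dx
v_exact = x0 ** 2 + 3 * x0 * y0 ** 2      # analytic df/dy
u_num = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
v_num = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
```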
Example:
$$f(x,y)={x}^{2}y+x{y}^{3}$$
$$f_x(u,y)=\frac{u-y^{3}}{2y}$$
$$f_y(u,x)=\frac{\sqrt{3}\,\sqrt{x\left(u-x^{2}\right)}}{3x}$$
$$\tilde{f}(u,v)=\frac{u\left(u-v^{3}\right)}{2v}+\frac{\sqrt{3}}{3}\sqrt{v\left(u-v^{2}\right)}-\frac{\left(u-v^{3}\right)^{2}\sqrt{3}\sqrt{v\left(u-v^{2}\right)}}{12\,v^{3}}-\frac{\left(u-v^{3}\right)\sqrt{3}\left(v\left(u-v^{2}\right)\right)^{3/2}}{18\,v^{4}}
$$ | {
"domain": "physics.stackexchange",
"id": 57866,
"tags": "homework-and-exercises, lagrangian-formalism, hamiltonian-formalism, differentiation"
} |
What does it mean to say that "the fundamental forces of nature were unified"? | Question: It is said that immediately after the Big Bang, the fundamental forces of nature were unified. It is also said that later they decoupled, becoming separate forces.
Indeed, if we look at the list of states of matter on Wikipedia we see:
Weakly symmetric matter: for up to $10^{−12}$ seconds after the Big Bang the strong, weak and electromagnetic forces were unified.
Strongly symmetric matter: for up to $10^{−36}$ seconds after the Big Bang, the energy density of the universe was so high that the four forces of nature — strong, weak, electromagnetic, and gravitational — are thought to have been unified into one single force. As the universe expanded, the temperature and density dropped and the gravitational force separated, a process called symmetry breaking.
Not only is it said that the forces were once unified, but this is also somehow related to the states of matter.
I want to understand all of this better. What does it truly mean, from a more rigorous standpoint, to say that the forces were unified and later decoupled? How this relate to the states of matter anyway?
Answer: When we say that the forces were unified, we mean that the interaction was described by a single gauge group. For example, in the original grand unified theory, this group was $SU(5)$, which spontaneously broke down to $SU(3) \times SU(2) \times U(1)$ as the universe cooled. These three components yield the strong, weak, and electromagnetic forces respectively.
I'll try to give a math-free explanation of what this means. To do so I'll have to do a decent amount of cheating.
First, consider the usual strong force. Roughly speaking, the "strong charge" of a quark is a set of three numbers, the red, green, and blue color charges. However, we don't consider the strong force three separate forces because these charges are related by the gauge group: a red quark can absorb a blue anti-red gauge boson and become blue. In the case of the strong force, we call those bosons gluons, and there are 8 of them.
At regular temperatures, the strong force is separate from the electromagnetic force, whose charge is a single number, the electric charge, and whose gauge boson is the photon. There is no gauge boson that converts between color charge and electric charge; the two forces are independent, rather than unified.
When we say all the forces were unified, we mean that all of the Standard Model forces were described by a common set of charges, which are intermixed by 24 gauge bosons. These gauge bosons are all identical in the same way that the 8 gluons are identical. In particular, you can't point at some subset of the 24 and say "these are the gluons", or "this one is the photon". They were all completely interchangeable.
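The boson counts quoted here (8 gluons, 24 unified gauge bosons) come straight from the dimension of the gauge group, $\dim SU(N) = N^2 - 1$; a trivial illustrative check:

```python
# Number of gauge bosons of SU(N) = number of group generators = N^2 - 1.
def dim_su(n):
    return n * n - 1

gluons = dim_su(3)       # SU(3) color: 8 gluons
gut_bosons = dim_su(5)   # unified SU(5): 24 gauge bosons
```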
As the universe cooled, spontaneous symmetry breaking occurred. To understand this, consider slowly cooling a lump of iron to below the Curie temperature. As this temperature is passed, the iron spontaneously magnetizes; since the magnetization picks out a specific direction, rotational symmetry is broken.
In the early universe, the same process occurred, though the magnetization field is replaced with an analogue of the Higgs field. This split apart the $SU(5)$ gauge group into the composite gauge group we have today.
The process of spontaneous symmetry breaking is closely analogous to phase transitions, like the magnetization of iron or the freezing of water, which is why we talk about 'strongly/weakly unified' matter as separate states of matter. Like the iron, which state we are in is determined by the temperature of the universe. However, an exact theoretical description of this process requires thermal quantum field theory. | {
"domain": "physics.stackexchange",
"id": 33170,
"tags": "quantum-field-theory, forces, grand-unification, unified-theories"
} |
Errors upgrading to the latest roscpp debs | Question:
I'm seeing the following errors when upgrading to the latest versions of the roscpp debs:
$ sudo apt-get dist-upgrade
... snip ...
dpkg: error processing /var/cache/apt/archives/ros-hydro-cpp-common_0.3.17-0precise-20140108-0945-+0000_amd64.deb (--unpack):
trying to overwrite '/opt/ros/hydro/include/ros/header.h', which is also in package ros-hydro-roscpp 1.9.50-0precise-20131015-1959-+0000
... snip ...
dpkg: error processing /var/cache/apt/archives/ros-hydro-roscpp-traits_0.3.17-0precise-20140108-0948-+0000_amd64.deb (--unpack):
trying to overwrite '/opt/ros/hydro/include/ros/message_event.h', which is also in package ros-hydro-roscpp 1.9.50-0precise-20131015-1959-+0000
... snip ...
dpkg: error processing /var/cache/apt/archives/ros-hydro-rosbag-storage_1.9.52-0precise-20140108-1606-+0000_amd64.deb (--unpack):
trying to overwrite '/opt/ros/hydro/include/rosbag/buffer.h', which is also in package ros-hydro-rosbag 1.9.50-0precise-20131015-2159-+0000
... snip ...
Errors were encountered while processing:
/var/cache/apt/archives/ros-hydro-cpp-common_0.3.17-0precise-20140108-0945-+0000_amd64.deb
/var/cache/apt/archives/ros-hydro-roscpp-traits_0.3.17-0precise-20140108-0948-+0000_amd64.deb
/var/cache/apt/archives/ros-hydro-rosbag-storage_1.9.52-0precise-20140108-1606-+0000_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
Has anyone else seen this before? Any suggestions for dealing with these upgrade errors or working around them?
Originally posted by ahendrix on ROS Answers with karma: 47576 on 2014-01-09
Post score: 0
Answer:
Running sudo apt-get install -f fixed this for me.
Originally posted by ahendrix with karma: 47576 on 2014-01-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16625,
"tags": "ros"
} |
What is total magnetic flux through a coil? | Question: According to Gauss's law of magnetism, the total magnetic flux through a closed surface is zero. But during induction, we study that the magnetic field lines passing through a coil change, as does the flux given by $\Phi = LI$. But even if they change, the net lines coming in = the net lines going out. So, shouldn't the flux be zero?
Answer: According to Gauss's law, the net magnetic flux through any closed surface is zero, because magnetic monopoles don't exist. The flux $\Phi=LI$ used in induction is different: it is measured through the open surface bounded by the coil, not through a closed surface, so it need not vanish and can change with time, while the net flux through any closed surface remains zero. | {
"domain": "physics.stackexchange",
"id": 41068,
"tags": "electromagnetism, electricity, magnetic-fields, electric-circuits, electromagnetic-induction"
} |
speed-density relationship transportation question | Question: I have the following questions:
Would the capacity be the area under the curve? Or can I use this formula: $q_c = \frac{k_j \times v_f}{4}$ from the greenshields model, where the capacity flow is related to the free flow speed and jam density. If so then $q_c = 6600$
The following question is :
I am really not sure how to approach this question. AADT = Annual average daily traffic.
Answer: This is one of those vocabulary problems. If you know what they mean, it should be a matter of applying unit conversions.
I'll make some sense of the words. And then I will do my utmost to forget it, because it's pretty useless to learn some model when you don't even know if reality fits it...
Capacity is the maximum possible veh/hr possible.
The formula you have works if the curve is linear (which we can believe from the graph).
More generally it is max(speed*density). And here's a derivation of that qc formula.
With $v_f = 110$ km/h and $k_j = 120$ veh/km, speed as a function of density $k$ is
$$v = v_f\,(1 - k/k_j)$$
so the flow is
$$q = v\,k = v_f\,k - \frac{v_f}{k_j}\,k^2$$
The max of such a parabola occurs where the derivative is 0:
$$0 = v_f - \frac{2\,v_f}{k_j}\,k \quad\Rightarrow\quad k = \frac{k_j}{2},\qquad v = \frac{v_f}{2}$$
$$\max(v\,k) = q_c = \frac{k_j\,v_f}{4}$$
Or 3300 veh/hr (which is probably per direction with an implicit 2 directions based on your mention of 6600)
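A quick numeric check of the capacity figures (illustrative Python, not part of the original answer):

```python
# Greenshields capacity: q_c = k_j * v_f / 4, per direction.
v_f = 110.0   # free-flow speed, km/h
k_j = 120.0   # jam density, veh/km
q_c = k_j * v_f / 4.0      # per-direction capacity, veh/hr
both_dirs = 2 * q_c        # assuming an implicit 2 directions
```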
Next question is irksome in how useless it is. If someone knows 12% of AADT occurs... Then that person could have counted what AADT was when taking measurements to obtain that 12%.
Anyway, it mentions a 55/45 directional split, so 3300 corresponds to the 55%. Word trap is whether that's 55% split of veh/hr or veh/km. Traffic in the measurement sense (rather than "stuck in traffic" sense where it means a section that actually lacks traffic per units of time) is something per time so it is the former (veh/hr). That means the other direction contributes 3300/55*45=2700 So both dirs it's 3300+2700=6000 veh in that hr. Divide by .12 aka 12% and you get 50000. | {
"domain": "engineering.stackexchange",
"id": 4864,
"tags": "transportation"
} |
WCF Duplex service authentication | Question: I have been thinking about a way to implement this and I am not sure that what I have done is correct, because it surely sounds kinda dirty to me.
Basically what I have is a WCF duplex service which multiple clients subscribe to. My problem is the authentication. What I have done is I pass the username and password to the Subscribe method, then check those against the database and add the client's subscription to a Dictionary<UserAccount,ICallback>. Then (just for testing purposes so far) I have a timer which calls a method on each callback every 1 second.
All of this is working fine, with the exception that when I disconnect and connect the same client, the service crashes, which I believe is a pretty trivial problem but I just haven't had much time to look into it. I am not sure this is the best way to handle authentication though. I do know about authentication using a custom UserNameValidator and a certificate with that, but last time I tried to do it, it took me a really long time and I still couldn't get the client right for some reason, even though the service was working fine with a WCF service testing software (can't remember the name right now, but it was pretty decent IMO). Also I really don't want to have to deploy a certificate to every single client, and purchasing a certificate is a big NO. The service is supposedly going to be working in LAN most of the time.
So here is the code:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)]
public class SchoolTestMakerService : ISchoolTestMakerService
{
Dictionary<UserAccount, ISchoolTestMakerCallbackContract> clients =
new Dictionary<UserAccount, ISchoolTestMakerCallbackContract>();
IHashGenerator hashGenerator = new SHA1Generator();
UnitOfWork unitOfWork;
Timer timer;
int statusCode;
public SchoolTestMakerService()
{
SchoolTestMakerDbInitializer dbInitializer = new SchoolTestMakerDbInitializer(hashGenerator);
SchoolTestMakerDbContext context = new SchoolTestMakerDbContext(dbInitializer);
unitOfWork = new UnitOfWork(context);
timer = new Timer((s) => ChangeStatus("Status: " + statusCode++),null,0,1000);
}
public void Subscribe(string userName,string password)
{
string passwordHash = hashGenerator.GenerateHash(password);
UserAccount userAccount = unitOfWork.Repository<UserAccount>().Get(x => x.Username == userName&&x.PasswordHash==passwordHash);
if(!clients.ContainsKey(userAccount))
{
ISchoolTestMakerCallbackContract callback = OperationContext.Current.GetCallbackChannel<ISchoolTestMakerCallbackContract>();
clients.Add(userAccount, callback);
}
}
public void Unsubscribe()
{
ISchoolTestMakerCallbackContract callback = OperationContext.Current.GetCallbackChannel<ISchoolTestMakerCallbackContract>();
clients.Remove(clients.First(x => x.Value == callback).Key);
}
private void ChangeStatus(string newStatus)
{
foreach(var client in clients)
{
client.Value.StatusChanged(newStatus);
}
}
}
[ServiceContract(SessionMode=SessionMode.Required,
CallbackContract=typeof(ISchoolTestMakerCallbackContract))]
public interface ISchoolTestMakerService
{
[OperationContract(IsOneWay=false,IsInitiating=true)]
void Subscribe(string userName,string password);
[OperationContract(IsOneWay = false, IsTerminating = true)]
void Unsubscribe();
}
public interface ISchoolTestMakerCallbackContract
{
[OperationContract(IsOneWay = true)]
void StatusChanged(string newStatus);
}
Client code:
using SchoolTestMaker.DataAccess;
using SchoolTestMaker.DataAccess.Utilities;
using SchoolTestMaker.DataAccess.Utilities.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data.Entity;
using TestClient.SchoolTestMakerServiceProxy;
namespace TestClient
{
class Program
{
static void Main(string[] args)
{
string userNameString = "UserName: ";
string passwordString = "Password: ";
Console.WriteLine(userNameString);
Console.WriteLine(passwordString);
Console.SetCursorPosition(userNameString.Length, 0);
string userName = Console.ReadLine();
Console.SetCursorPosition(passwordString.Length, 1);
string password = Console.ReadLine();
SchoolTestMakerCallback callback = new SchoolTestMakerCallback();
SchoolTestMakerServiceClient client = new SchoolTestMakerServiceClient(new System.ServiceModel.InstanceContext(callback));
client.Open();
client.Subscribe("Pesho Daskala", "parola123");
Console.Read();
client.Unsubscribe();
}
}
}
public class SchoolTestMakerCallback:ISchoolTestMakerServiceCallback
{
public void StatusChanged(string newStatus)
{
Console.WriteLine(newStatus);
}
}
The client is basically a dead simple console application(just for testing purposes of course... haven't written the real client yet). The user account is also just a dummy for testing.
SHA1Generator and IHashGenerator:
public class SHA1Generator:IHashGenerator
{
public string GenerateHash(string input)
{
using (SHA1Managed sha1 = new SHA1Managed())
{
var hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(input));
var sb = new StringBuilder(hash.Length * 2);
foreach (byte b in hash)
{
sb.Append(b.ToString("X2"));
}
return sb.ToString();
}
}
}
public interface IHashGenerator
{
string GenerateHash(string input);
}
By the way, ignore the Session nonsense in the Service code. I forgot to remove it after switching to a single instance.(I am pretty sure it is pointless in that case and does literally nothing).
I'd really appreciate some help. I am pretty new to WCF but not to C# as a whole. I am sorry for the long question. I have tried to provide as much information as possible. If anything is missing please give me a chance to fix it before downvoting :)
Answer: Problems
in Subscribe() you check, without first testing the UserAccount for null, whether it is contained in the dictionary.
in Unsubscribe() you use the callback to look up the Key for Remove() without checking that the lookup succeeds.
Naming and style
Your naming is good. You are using meaningful names and also use the correct casing.
The declaration part of the class variables of SchoolTestMakerService looks kind of ugly (sorry). You should consider initializing clients inside the constructor.
General
I think the unsubscribe part is problematic, because you are using the callback, which is the Value of a KeyValuePair, to retrieve the Key.
I would add a class to the service having two properties, the useraccount and the callback like
class UserCallBack
{
public UserAccount User { get; set; }
public ISchoolTestMakerCallbackContract CallBack { get; set; }
}
which could be extended e.g by adding properties like LastActive, TimeOut and a IsValid() method.
Inside the Subscribe() method I would create a hash using the username and the password hash, which I would use as the key and the UserCallBack as the value of the dictionary.
This hash should be returned by the method and for each call of a service method be passed as parameter.
Dictionary<String, UserCallBack> clients = new Dictionary<String, UserCallBack>();
public string Subscribe(string userName,string password)
{
string passwordHash = hashGenerator.GenerateHash(password);
UserAccount userAccount = unitOfWork.Repository<UserAccount>().Get(x => x.Username == userName && x.PasswordHash == passwordHash);
if (userAccount == null) { return String.Empty; }
string key = CreateKeyHash(userName, passwordHash);
if (!clients.ContainsKey(key))
{
ISchoolTestMakerCallbackContract callback = OperationContext.Current.GetCallbackChannel<ISchoolTestMakerCallbackContract>();
UserCallBack userCallBack = new UserCallBack() { User = userAccount, CallBack = callback };
clients.Add(key, userCallBack );
}
return key;
}
private String CreateKeyHash(string userName, string passwordHash)
{
return hashGenerator.GenerateHash(userName + passwordHash);
}
public void Unsubscribe(string userKey)
{
if (userKey == null) { return; }
clients.Remove(userKey);
} | {
"domain": "codereview.stackexchange",
"id": 11195,
"tags": "c#, authentication, wcf"
} |
What is the nature of the state of a 2 particle system? | Question: If I want do describe two particles #1 and #2 within Quantum Mechancis - let them be different in some way (Spin for example) - how do "I" decide if the total state of the system should be an entangled state, a product state or some density matrix?
I am concerned about this because I read that for some reason the universe is technically in an entangled state with everything and if we ignore it we should write every state as a density matrix (because we end up with a statistical mixture, if we ignore the universe). But this would mean that writing states as Dirac Brakets would be always wrong.
Edit:
I can build up a two particle system ether as product-state or as an entangled state.
Say I want to mix an electron with the rest of the universe.
If I choose a product state I can ignore the universe and just look at the electron without losing any information. My electron description is complete without the universe.
If I choose an entangled state for the electron and the universe and I again want to focus only on the electron, I end up with a statistical mixture. That I can ONLY write as a density matrix: because I choose to ignore the universe, I "forget" some information about the state. I add to the quantum probability the probability due to my incomplete knowledge of the actual electron state. This is not possible with Dirac's braket notation.
My question is how do I choose with what kind of state I start?
Answer: Extending David Z's answer:
If you want to answer your question, you can pose the problem differently: your particle is either independent, or it is part of the system and therefore interacting with it.
In an entangled state, although you lose concrete information about your particle, you gain something much greater: relational information.
I suppose you understand what an entangled state is, so I won't go into details, but if you want I will explain it with pleasure.
So if your particles are interacting with each other, and one's action has a direct effect on the other, then you start as an entangled state, otherwise as a product state. If you have a complex system then entanglement is often broken and remade ( like atom ionization ). | {
"domain": "physics.stackexchange",
"id": 23090,
"tags": "quantum-mechanics"
} |
Offset after Butterworth bandpass filtering | Question: I guess I'm stuck with this filtering problem, and I'm thinking it has something to do with my raw data, but I'm not so sure. The problem is the following (please refer to the figures below): all I'm trying to do is implement a bandpass Butterworth filter to filter the raw data trace (green). The raw data consists of 2501 points and was sampled at 20 kHz (100 ms). I'd like to bandpass filter this trace between 3 Hz and 170 Hz using the Matlab function 'butter' followed by either 'filter' (top figure) or 'filtfilt' (bottom figure), preferably the latter.
My Matlab code is as follows:
A = Average(1,:); % raw data
fs_Hz = 20000; % sampling rate (Hz)
order = 3;
fcutlow = 3; % Hz
fcuthigh = 170; % Hz
[b,a] = butter(order,[fcutlow,fcuthigh]/(fs_Hz/2), 'bandpass');
x = filtfilt(b,a,A);
plot(x);
hold on
plot(A,'color','g');
legend('filtered data','raw data');
As you might guess the problem is the offset that I observe after both filtering functions. These 2501 point traces actually stem from a very long data-trace, should I possibly filter this long trace prior to extracting these smaller traces? Is the reason for the offset the fluctuating pattern of the trace? I can't detrend the data as I need the y-axis information for the subsequent analysis on this data. Should I be using a completely different type of filtering method; FIR instead of IIR??
Any advice on my problem would be greatly appreciated.
IljaPilja
Answer: Your data have a non-zero DC component which is filtered out by the bandpass filter. So the filter output has approximately zero DC, which causes the corresponding vertical shift of the signal. If you want to retain the DC information in the signal you must use a lowpass filter instead of a bandpass filter. | {
"domain": "dsp.stackexchange",
"id": 9800,
"tags": "matlab, filtering, butterworth"
} |
Collection of histories vs. collection of momentary configurations | Question: For a given Hamiltonian, is the space of histories of a classical system the same as the symplectic manifold?
Do I have to take care of gauge equivalences and if so, is this only an issue for fields (not for trajectories)?
Answer: Basically, the answer to both questions is yes. In the field theory case, however, this is true (optimistically speaking, according to Witten-1) when the field equations are hyperbolic wave equations. One then needs the initial conditions to be defined on a global Cauchy hypersurface, and there is a one-to-one correspondence between the space of solutions and the space of initial data.
Actually, we always construct the symplectic structure from the Lagrangian in the course of the procedure of canonical quantization, when we construct the conjugate momenta and declare canonical Poisson brackets between the coordinates and their momenta. As a matter of fact, when the coordinates define an affine space (or superspace) like in the theory of spinless point particles or the case of scalar, spinor or Yang-Mills fields, then the phase space is an affine symplectic vector (super)space.
Now, your second assertion is also true (please see Witten-1 and Witten-2). In the presence of gauge freedom (which is a choice, or a declaration, that we make), the space of solutions modulo the gauge freedom is symplectic and is identified with the physical phase space of the theory. This identification is natural because we measure only gauge invariant quantities. The first assertion, that this space is symplectic, stems from a deep theorem in symplectic geometry called the Marsden-Weinstein reduction. (This theorem is true in finite dimensions but was generalized to many infinite dimensional cases.) It asserts that the reduction of a symplectic manifold by a gauge freedom is also symplectic.
Another way to look at the gauge redundancy is to observe that there are certain functions of the initial data that can be arbitrarily evolved in time and still the field equations are satisfied, please see Belot.
Now, in contrast to the unreduced symplectic manifolds, which in our cases are nice symplectic vector spaces, the reduced manifolds are actually orbifolds. These spaces contain singular submanifolds which may be classically harmless as long as one stays away from them, but which have a profound effect after quantization that is not very well understood until now.
There are cases when the unreduced phase spaces of certain gauge theories are infinite dimensional while the reduced ones are finite dimensional.
These cases led Witten to one of his greatest achievements, namely in the Chern-Simons theory or in the 2+1 dimensional gravity theory in Witten-2. | {
"domain": "physics.stackexchange",
"id": 10874,
"tags": "hamiltonian-formalism"
} |
Can a massive body achieve speed of light in outer space? | Question: If I throw a ball in earth atmosphere,due to its interaction with molecules of air, heat transfer will happen and also due to friction caused by air.
But
If a throw a ball in outer space where there are no molecules. A body can achieve the speed of light (maybe after a long time) in outer space too.
Is that right?
What would happen to it then?
Answer: Remember Newton's laws: an object only accelerates when it has a force acting on it. If you're in free space and you throw and object, you are essentially releasing it at a constant velocity. After this, no force acts on the object and so it will continue to travel at a constant velocity forever. It will not speed up!
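As a numeric aside (illustrative only, not from the linked answers): in special relativity velocities combine as $w = (u+v)/(1+uv/c^2)$ rather than by plain addition, which is why no amount of pushing reaches the speed of light.

```python
def add_velocity(u, v):
    # Special-relativistic velocity addition, in units where c = 1.
    return (u + v) / (1 + u * v)

w = 0.0
for _ in range(10):
    w = add_velocity(w, 0.5)   # ten successive boosts of 0.5c
# w creeps toward 1 (the speed of light) but never reaches it
```

Each boost raises $w$, but the denominator grows too, so $w$ only approaches 1 asymptotically.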
If, on the other hand, you find a way to apply a constant force to it, it will be seen to accelerate. However, while its velocity will increase, it will never cross (or even reach) the speed of light in any finite amount of time. This is fundamentally due to the way that velocities add in special relativity, which is different from our "common-sense" way of looking at things. See this answer for the exact formula. You may also find this answer to a different (but similar) question interesting. | {
"domain": "physics.stackexchange",
"id": 73723,
"tags": "speed-of-light"
} |
[gazebo 11] How to set a default variable/parameter in xacro | Question:
I have a main xacro file where I am calling another xacro as such:
<xacro:include filename="$(find uuv_gazebo_ros_plugins)/urdf/thruster_snippets.xacro"/>
I have modified the included xacro "thruster_snippets.xacro" so that I can change the size of the thrusters if needed (sometimes the .dae file is large and therefore needs to be scaled down). The parameter that I added was scale, in both the "generic_thruster_macro" and the "thruster_module_first_order_basic_fcn_macro".
However, I want to set a default value for scale so that I don't have to include that parameter every time I add a new robot.
Does anyone know how to do this?
This code looks like this:
<?xml version="1.0"?>
<robot xmlns:xacro="http://www.ros.org/wiki/xacro">
<!-- ROTOR DYNAMICS MACROS -->
<!-- First order dynamics -->
<xacro:macro name="rotor_dyn_first_order_macro" params="time_constant">
<dynamics>
<type>FirstOrder</type>
<timeConstant>${time_constant}</timeConstant>
</dynamics>
</xacro:macro>
<!--
MACROS FOR CONVERSION FUNCTIONS BETWEEN ROTOR'S ANG. VELOCITY AND
THRUSTER FORCE
-->
<!--
1) Basic curve
Input: x
Output: thrust
Function: thrust = rotorConstant * x * abs(x)
-->
<xacro:macro name="thruster_cf_basic_macro"
params="rotor_constant">
<conversion>
<type>Basic</type>
<rotorConstant>${rotor_constant}</rotorConstant>
</conversion>
</xacro:macro>
<!--
2) Dead-zone nonlinearity described in [1]
[1] Bessa, Wallace Moreira, Max Suell Dutra, and Edwin Kreuzer. "Thruster
dynamics compensation for the positioning of underwater robotic vehicles
through a fuzzy sliding mode based approach." ABCM Symposium Series in
Mechatronics. Vol. 2. 2006.
Input: x
Output: thrust
Function:
thrust = rotorConstantL * (x * abs(x) - deltaL), if x * abs(x) <= deltaL
thrust = 0, if deltaL < x * abs(x) < deltaR
thrust = rotorConstantR * (x * abs(x) - deltaR), if x * abs(x) >= deltaL
-->
<xacro:macro name="thruster_cf_dead_zone_macro"
params="rotor_constant_l
rotor_constant_r
delta_l
delta_r">
<conversion>
<type>Bessa</type>
<rotorConstantL>${rotor_constant_l}</rotorConstantL>
<rotorConstantR>${rotor_constant_r}</rotorConstantR>
<deltaL>${delta_l}</deltaL>
<deltaR>${delta_r}</deltaR>
</conversion>
</xacro:macro>
<!--
3) Linear interpolation
If you have access to the thruster's data sheet, for example,
you can enter samples of the curve's input and output values
and the thruster output will be found through linear interpolation
of the given samples.
-->
<xacro:macro name="thruster_cf_linear_interp_macro"
params="input_values
output_values">
<conversion>
<type>LinearInterp</type>
<inputValues>${input_values}</inputValues>
<outputValues>${output_values}</outputValues>
</conversion>
</xacro:macro>
<!-- THRUSTER MODULE MACROS -->
<xacro:macro name="generic_thruster_macro"
params="namespace
thruster_id
*origin
mesh_filename
scale
*dynamics
*conversion">
<joint name="${namespace}/thruster_${thruster_id}_joint" type="continuous">
<xacro:insert_block name="origin"/>
<axis xyz="1 0 0"/>
<parent link="${namespace}/base_link"/>
<child link="${namespace}/thruster_${thruster_id}"/>
</joint>
<link name="${namespace}/thruster_${thruster_id}">
<xacro:box_inertial x="0" y="0" z="0" mass="0.001">
<origin xyz="0 0 0" rpy="0 0 0"/>
</xacro:box_inertial>
<visual>
<origin rpy="0 0 0" xyz="0 0 0"/>
<geometry>
<mesh filename="${mesh_filename}" scale="${scale}"/>
</geometry>
</visual>
<collision>
<!-- todo: gazebo needs a collision volume or it will ignore the pose of
the joint that leads to this link (and assume it to be the identity) -->
<geometry>
<cylinder length="0.000001" radius="0.000001"/>
</geometry>
<origin xyz="0 0 0" rpy="0 0 0"/>
</collision>
</link>
<gazebo>
<plugin name="${namespace}_${thruster_id}_thruster_model" filename="libuuv_thruster_ros_plugin.so">
<linkName>${namespace}/thruster_${thruster_id}</linkName>
<jointName>${namespace}/thruster_${thruster_id}_joint</jointName>
<thrusterID>${thruster_id}</thrusterID>
<xacro:insert_block name="dynamics"/>
<xacro:insert_block name="conversion"/>
</plugin>
</gazebo>
<gazebo reference="${namespace}/thruster_${thruster_id}">
<selfCollide>false</selfCollide>
</gazebo>
</xacro:macro>
<!--
Thruster model with first order dynamic model for the rotor dynamics
and a proportional non-linear steady-state conversion from the rotor's
angular velocity to output thrust force
-->
<xacro:macro name="thruster_module_first_order_basic_fcn_macro"
params="namespace
thruster_id
*origin
mesh_filename
scale
dyn_time_constant
rotor_constant">
<xacro:generic_thruster_macro
namespace="${namespace}"
thruster_id="${thruster_id}"
mesh_filename="${mesh_filename}"
scale="${scale}">
<xacro:insert_block name="origin"/>
<xacro:rotor_dyn_first_order_macro time_constant="${dyn_time_constant}"/>
<xacro:thruster_cf_basic_macro rotor_constant="${rotor_constant}"/>
</xacro:generic_thruster_macro>
</xacro:macro>
</robot>
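The LinearInterp conversion described in the comment block above is a standard one-dimensional table lookup. A minimal Python sketch of the idea (illustrative only, not the plugin's actual implementation):

```python
from bisect import bisect_right

def linear_interp(x, xs, ys):
    """Piecewise-linear interpolation of the (xs, ys) samples at x.
    xs must be sorted ascending; x outside the sampled range is clamped."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, x) - 1              # index of the segment containing x
    t = (x - xs[i]) / (xs[i + 1] - xs[i])    # fractional position inside the segment
    return ys[i] + t * (ys[i + 1] - ys[i])

# e.g. an input of 1.5 between the samples (1 -> 10) and (2 -> 40):
print(linear_interp(1.5, [0, 1, 2, 3], [0, 10, 40, 90]))  # 25.0
```

The sample values here are made up; in practice `inputValues` and `outputValues` would come from the thruster's data sheet, as the comment says.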
Originally posted by anonymous53275 on ROS Answers with karma: 1 on 2023-05-26
Post score: 0
Answer:
This is documented in section 7.1 of https://wiki.ros.org/xacro
Example snippet: params="x y:=${2*y} z:=0"
Originally posted by Mike Scheutzow with karma: 4903 on 2023-05-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 38406,
"tags": "xacro"
} |
What does "Giving an overall running time" mean in this context? | Question: In the following image
The author refers to an overall running time linear to |E|. But the author then says this is at most O(n^2). This seems contradictory because O(n^2) is quadratic time. Did the author mean to write something else? Or am I misunderstanding something?
EDIT!: My question is: What does the author mean when writing "overall running time linear to |E|"?
EDIT: More context, define E, which is all the edges and is stored as variable j:
Answer: Maybe, a simpler English version is this:
The computation of $L(j)$ then takes time proportional to the indegree
of $j$; summing the indegrees over all vertices counts each edge exactly once, so in total it takes time linear in $|E|$. Since $|E| \le n(n-1)/2 = O(n^2)$ for a graph on $n$ vertices, a running time linear in $|E|$ is also $O(n^2)$; the author is simply giving a coarser upper bound, not contradicting the linear one. | {
"domain": "cs.stackexchange",
"id": 12376,
"tags": "algorithms"
} |
Does a vehicle with differential gear still move straight? | Question: I am in the concept phase of a driving robot. The two wheels on the front axle will be powered, while the rear will be dragged along. The rear is also responsible for steering, but this has nothing to do with my question.
The robot is required to make relatively sharp turns at high speed, so I have two options to compensate for the different wheel speeds on both sides. On the one hand, a differential gear in the front axle could be used; it would then be powered by a single motor. On the other hand, I could simply use two motors, each directly powering one front wheel, and simulate the differential in software.
I'd like to go for the first approach, using the hardware differential. But I have the one concern with it. Would a robot vehicle with differential gear still move straight, without explicit steering applied?
My imagination is that, with those wheels not being solidly connected, the robot would move in random curves which I'd have to compensate with a lot of steering then. I know that for real cars, differential gears are standard and do their work, but now I am talking about a small robot measuring about 6 inches.
Answer: If I understand your question, you are asking whether a vehicle balancing on two wheels (or two wheels and one caster) will be able to move straight, or at least predictably, if both wheels were driven from the same motor and used a differential.
The answer is yes, but only if you have a way to equalize the forces of friction affecting each wheel. For example, by applying a brake individually to each wheel, you could alter the balance of force between the left and right. It would be crude, but certainly possible to control the steering in this way. In fact, many tractors use independent left and right brake pedals to accomplish this.
Without actively braking (or some other method), your intuition is correct: the amount of rotation of each wheel would depend on an unpredictable set of forces, and the robot would make seemingly random turns. | {
"domain": "robotics.stackexchange",
"id": 212,
"tags": "motor, wheeled-robot, motion, wheel"
} |
Is the number of inequivalent elementary cellular automata rules really 88? | Question: Everywhere from Wolfram's "New Kind of Science" (p. 57) to Wikipedia they say that, out of all possible 256 (=2^8) elementary cellular automata rules, 88 are inequivalent (as defined in the Wikipedia article).
Now, the problem is that I am failing to reproduce the 88 number. No matter how I try - I always get 72.
And it gets worse. Sequence A005418 seems to be defined exactly as Wolfram's inequivalence (see the comment: "Number of bit strings of length (n-1), not counting strings which are the end-for-end reversal or the 0-for-1 reversal of each other as different."). And in that sequence at position 8 they also have 72, not 88.
Of course, there is a chance that my program is wrong and/or I do not fully understand the definition of rule equivalence. Therefore I would like to ask someone to reproduce the computations, and either confirm Wolfram's original number 88, or give me the correct number of inequivalent rules (be it 72, or something else).
This problem is important for a research paper I am working on. And it's not very difficult to implement.
Answer: I tried it myself with the following Python code:
def yflip(a):
D={0:0,1:4,2:2,3:6,4:1,5:5,6:3,7:7} # Vertical flip
#D={0:7,1:6,2:5,3:4,4:3,5:2,6:1,7:0} # String reverse
t=0
for i in range(8):
c=(a>>i)&1
t |= c << D[i]
return t
def complement(a):
return a ^ 0xff
def canonical_form(a):
b=yflip(a)
c=complement(a)
d=complement(yflip(a))
return min(a,b,c,d)
A=set()
for patt in range(256):
A.add(canonical_form(patt))
print(len(A))
The dictionary D defines how to do a vertical flip. I think the correct interpretation is to move the state for the case 110 to the state for the case 011. This results in a different arrangement of the bits to just doing a straight string reverse.
Note wikipedia gives us a test case:
For example, if the definition of rule 110 is reflected through a vertical line, the following rule (rule 124) is obtained:
This works with my dictionary assert yflip(110)==124, but not with the string reverse.
The result for this is that with a straight string reverse I get 72 (same as yours).
With my interpretation of the vertical flip I get 80. Unfortunately, this is still not 88 so I am just as confused now...
UPDATE
I had misunderstood the complementary rule:
The actual complementary rule should be implemented as follows:
def complement(a):
t=0
for i in range(8):
# Flip bits in this case
x = i ^ 7
# Try case x in the original pattern
result = 1 & (a>>x)
# Use the opposite
result ^= 1
t |= result << i
return t
this results in 88 cases being found.
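For reference, the corrected pieces assemble into this self-contained Python 3 script (the bit-reversing "vertical flip" plus the corrected complement), reproducing the count of 88:

```python
def mirror(r):
    # Reflect each 3-cell neighborhood left-right: reverse the 3 bits of its index.
    t = 0
    for n in range(8):
        m = ((n & 1) << 2) | (n & 2) | ((n >> 2) & 1)
        t |= ((r >> n) & 1) << m
    return t

def complement(r):
    # Exchange 0s and 1s in both the neighborhoods and the outputs.
    t = 0
    for n in range(8):
        t |= (1 - ((r >> (n ^ 7)) & 1)) << n
    return t

def canonical(r):
    # Smallest rule number in the orbit under reflection and complementation.
    return min(r, mirror(r), complement(r), mirror(complement(r)))

assert mirror(110) == 124        # Wikipedia's reflection test case
assert complement(110) == 137    # rule 110's complement is rule 137
print(len({canonical(r) for r in range(256)}))  # 88
```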
Note that you need to exchange 0's and 1's in both the cases, and in the final results. | {
"domain": "cs.stackexchange",
"id": 4145,
"tags": "automata, combinatorics, cellular-automata"
} |
Conservative force and its Potential Energy Function | Question: We are given that $\vec{F}=k\left<y,x,0\right>$, and asked whether $\vec{F}$ is a conservative force. If yes, we are asked to find $U(x,y,z)$ and then find $\vec{F}$ back from $U$ and show it matches the original form.
Given $\vec{\nabla} \times\vec{F}=\vec{0}$, force is conservative.
Therefore, $U(x,y,z)=-\int_{r_0}^r \vec{F} \cdot d\vec{r}=-\int_0^x kydx-\int_0^y kxdy=-2xyk.$ (Note that the $z$ component is $0$).
Such a potential function yields $\vec{F}=-\vec{\nabla}U=2k\left<y,x,0\right>.$
Something must be wrong or I must be missing something because I get an extra factor of 2 and I do not understand why.
Answer: Well you integrated it the wrong way.
$$\int ky \, dx + \int kx \, dy = \int k \,d(xy) $$
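Spelled out with an explicit path from the origin, say $(0,0)\to(x,0)\to(x,y)$: on the first leg $y'=0$, so that integral contributes nothing, and only the second leg survives:
$$U(x,y) = -\int_0^x k\cdot 0 \, dx' - \int_0^y k x \, dy' = -kxy,$$
which gives back $\vec{F}=-\vec{\nabla}U=k\left<y,x,0\right>$ with no factor of 2.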
$x$ and $y$ are not independent along the path of integration. | {
"domain": "physics.stackexchange",
"id": 37669,
"tags": "newtonian-mechanics, classical-mechanics, forces, potential-energy"
} |
How to remap from geometry_msgs/PoseStamped to move_base_msgs/MoveBaseActionGoal in ROS node | Question:
I am writing a node where I subscribe to the topic /move_base_simple/goal, which is of message type geometry_msgs/PoseStamped, and I want to publish this information to the topic /move_base/move/goal, which is of message type move_base_msgs/MoveBaseActionGoal. I have the following script to try to do what I have described:
#!/usr/bin/env python
import rospy
import roslib
import actionlib
from std_msgs.msg import String, Empty
from geometry_msgs.msg import PoseStamped
from move_base_msgs.msg import MoveBaseActionGoal
def callback(data):
rospy.loginfo(rospy.get_caller_id() + 'I HEARD %s', data.pose.position)
rospy.loginfo(rospy.get_caller_id() + 'I heard %s', data.pose.orientation)
pub = rospy.Publisher('move_base/move/goal', MoveBaseActionGoal, queue_size = 10)
stuff_to_publish = data
pub.publish(stuff_to_publish)
def listener():
# In ROS, nodes are uniquely named. If two nodes with the same
# name are launched, the previous one is kicked off. The
# anonymous=True flag means that rospy will choose a unique
# name for our 'listener' node so that multiple listeners can
# run simultaneously.
rospy.init_node('navgoallistener', anonymous=True)
rospy.Subscriber('move_base_simple/goal', PoseStamped, callback)
#message = rospy.Subscriber('move_base_simple/goal', PoseStamped, callback)
#print message.pose.orientation
# spin() simply keeps python from exiting until this node is stopped
rospy.spin()
"""def talker():
pub = rospy.Publisher('move_base/move/goal', MoveBaseActionGoal, queue_size = 10)
rospy.init_node('navgoalsetter', anonymous = True)"""
if __name__ == '__main__':
listener()
However I am having a problem because the messages are of different type. The relevant documentation for the message types can be found here http://docs.ros.org/diamondback/api/geometry_msgs/html/msg/PoseStamped.html and here http://docs.ros.org/diamondback/api/move_base_msgs/html/msg/MoveBaseActionGoal.html. Basically I am wondering how to populate the header and goal_id section of the move_base_msgs message.
Originally posted by dkrivet on ROS Answers with karma: 19 on 2018-08-20
Post score: 0
Answer:
For a typical application, you can just increment the goal ID by 1, or even just leave it as 0. It's only important if you're trying to track multiple goals through the action server. For a typical navigation application where you have 1 goal at a time or have some knowledge of the state outside of move_base (which it looks like you do), this is unnecessary. It's primarily for bookkeeping.
You can map the PoseStamped fields into the MoveBaseGoal object; for the actionlib_msgs/GoalID, just fill in the header and increment the goal ID by 1 for completeness' sake.
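A hypothetical sketch of how those fields could be populated inside the callback (this assumes a ROS environment with move_base_msgs available; the field names follow the MoveBaseActionGoal definition linked in the question, and the counter-based id string is just one simple way to keep ids unique, in the spirit of actionlib's goal_id_generator):

```python
import rospy
from move_base_msgs.msg import MoveBaseActionGoal

goal_count = 0  # incremented once per goal so each GoalID is unique

def pose_to_action_goal(pose_stamped):
    """Wrap a geometry_msgs/PoseStamped into a move_base_msgs/MoveBaseActionGoal."""
    global goal_count
    goal_count += 1
    action_goal = MoveBaseActionGoal()
    action_goal.header.stamp = rospy.Time.now()
    action_goal.header.frame_id = pose_stamped.header.frame_id
    action_goal.goal_id.stamp = action_goal.header.stamp
    action_goal.goal_id.id = "navgoallistener-%d" % goal_count  # unique per goal
    action_goal.goal.target_pose = pose_stamped
    return action_goal
```

This is not runnable outside a ROS workspace; it only illustrates which fields need filling.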
Originally posted by stevemacenski with karma: 8272 on 2018-08-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2018-08-21:\
or even just leave it as 0.
it might appear that way, but you'd be violating the contract between clients and servers and then run the risk of undefined and/or unexpected behaviour.
Even if you only have a single goal active at a time, the message documentation of actionlib_msgs/GoalId ..
Comment by gvdhoorn on 2018-08-21:
.. states (from here):
The id provides a way to associate feedback and result message with specific goal requests. The id specified must be unique.
Servers will assume ..
Comment by gvdhoorn on 2018-08-21:
.. that is the case, and not all server implementations will be able to handle non-unique goal ids.
The Python actionlib module has a generator available: goal_id_generator. | {
"domain": "robotics.stackexchange",
"id": 31579,
"tags": "ros, python, ros-kinetic, remapping, publisher"
} |
Why does strenuous exercise cause vision of "lights"? | Question: I had a hard climb a week ago. I got so tired that any time I closed my eyes I saw these lights inside my head. I see these lights almost every time that I run fast or do something like that. What are they and where do they come from?
Here is a picture I drew using Photoshop ;)
Answer: This is a common phenomenon which most of us come across. Seeing flashes of light, stars, and other shapes in the eyes occurs when the body goes through stressful activity. For example, while you are climbing, more blood flows to the most active parts of the body, such as the arms and lower body, so the brain and eyes receive a smaller supply of blood and nutrients. Hence those flashes or stars become visible to us. This phenomenon is called a "phosphene": a phosphene is characterized by the experience of seeing light without light actually entering the eye.
Wikipedia/Phosphene gives an explanation for the occurrence of phosphenes:
Phosphene is "seeing stars", from a sneeze, laughter, a heavy and deep cough, blowing of the nose, a blow on the head or low blood pressure (such as on standing up too quickly or prior to fainting). It is possible these involve some mechanical stimulation of the retina, but they may also involve mechanical and metabolic (such as from low oxygenation or lack of glucose) stimulation of neurons of the visual cortex or of other parts of the visual system.
So seeing flashes of light can be due to:
Blood shifting.
Low blood sugar (causes weakness).
Head trauma.
Headache.
So to summarize: we should keep the body supplied with enough nutrients and water whenever we engage in stressful activities. Phosphenes are temporary and do not harm the eyes. | {
"domain": "biology.stackexchange",
"id": 4059,
"tags": "human-biology, brain, bioenergetics"
} |
Why is a quantum computer in some ways more powerful than a nondeterministic Turing machine? | Question: The standard popular-news account of quantum computing is that a quantum computer (QC) would work by splitting into exponentially many noninteracting parallel copies of itself in different universes and having each one attempt to verify a different certificate, then at the end of the calculation, the single copy that found a valid certificate "announces" its solution and the other branches magically vanish.
People who know anything about theoretical quantum computation know that this story is absolute nonsense, and that the rough idea described above more closely corresponds to a nondeterministic Turing machine (NTM) than to a quantum computer. Moreover, the complexity class of problems efficiently solvable by NTMs is NP and by QCs is BQP, and these classes are not believed to be equal.
People trying to correct the popular presentation rightfully point out that the simplistic "many-worlds" narrative greatly overstates the power of QCs, which are not believed to be able to solve (say) NP-complete problems. They focus on the misrepresentation of the measurement process: in quantum mechanics, which outcome you measure is determined by the Born rule, and in most situations the probability of measuring an incorrect answer completely swamps the probability of measuring the right one. (And in some cases, such as black-box search, we can prove that no clever quantum circuit can beat the Born rule and deliver an exponential speedup.) If we could magically "decide what to measure", then we would be able to efficiently solve all problems in the complexity class PostBQP, which is believed to be much larger than BQP.
But I've never seen anyone explicitly point out that there is another way in which the popular characterization is wrong, which goes in the other direction. BQP is believed to be not a strict subset of NP, but instead incomparable to it. There exist problems like Fourier checking which are believed to not only lie outside of NP, but in fact outside of the entire polynomial hierarchy PH. So with respect to problems like these, the popular narrative actually understates rather than overstates the power of QCs.
My naive intuition is that if we could "choose what to measure", then the popular narrative would be more or less correct, which would imply that these super-quantum-computers would be able to efficiently solve exactly the class NP. But we believe that this is wrong; in fact PostBQP=PP, which we believe to be a strict superset of NP.
Is there any intuition for what's going on behind the scenes that allows a quantum computer to be (in some respects) more powerful than a nondeterministic Turing machine? Presumably this "inherently quantum" power, when combined with postselection (which in a sense NTMs already have) is what makes a super-QC so much more powerful than a NTM. (Note that I'm looking for some intuition that directly contrasts NTMs and QCs with postselection, without "passing through" the classical complexity class PP.)
Answer: From a pseudo-foundational standpoint, the reason why BQP is a differently powerful (to coin a phrase) class than NP, is that quantum computers can be considered as making use of destructive interference.
Many different complexity classes can be described in terms of (more or less complicated properties of) the number of accepting branches of an NTM. Given an NTM in 'normal form', meaning that the set of computational branches forms a complete binary tree (or something similar to it) of some polynomial depth, we may consider classes of languages defined by making the following distinctions:
Is the number of accepting branches zero, or non-zero? (A characterisation of NP.)
Is the number of accepting branches less than the maximum, or exactly equal to the maximum? (A characterisation of coNP.)
Is the number of accepting branches at most one-third, or at least two-thirds, of the total? (A characterisation of BPP.)
Is the number of accepting branches less than one-half, or at least one-half, of the total? (A characterisation of PP.)
Is the number of accepting branches different from exactly half, or equal to exactly half, of the total? (A characterisation of a class called C=P.)
These are called counting classes, because in effect they are defined in terms of the count of accepting branches.
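Four of the five distinctions above are plain yes/no predicates on the count (the BPP case additionally needs the promise that the fraction of accepting branches avoids the middle gap). As a toy illustration:

```python
def branch_count_predicates(accepting, total):
    """Acceptance predicates on the number of accepting branches of an NTM
    (out of `total` branches), one per counting class named above."""
    return {
        "NP":   accepting > 0,           # some accepting branch exists
        "coNP": accepting == total,      # every branch accepts
        "PP":   2 * accepting >= total,  # at least half accept
        "C=P":  2 * accepting == total,  # exactly half accept
    }

# A machine on which exactly half of 8 branches accept:
print(branch_count_predicates(4, 8))
# {'NP': True, 'coNP': False, 'PP': True, 'C=P': True}
```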
Interpreting the branches of an NTM as randomly generated, they are questions about the probability of acceptance (even if these properties are not efficiently testable with any statistical confidence). A different approach to describing complexity classes is to consider instead the gap between the number of accepting branches and the number of rejecting branches of an NTM. If counting the cumulation of NTM computational branches corresponds to probabilities, one could suggest that canceling accepting branches against rejecting branches models the cancellation of computational 'paths' (as in sum-over-paths) in quantum computation — that is, as modeling destructive interference.
The best known upper bounds for BQP, namely AWPP and PP, are readily definable in terms of 'acceptance gaps' in this way. The class NP, however, does not have such an obvious characterisation. Furthermore, many of the classes which one obtains from definitions in terms of acceptance gaps appear to be more powerful than NP. One could take this to indicate that 'nondeterministic destructive interference' is a potentially more powerful computational resource than mere nondeterminism; so that even if quantum computers do not take full advantage of this computational resource, it may nevertheless resist easy containment in classes such as NP. | {
"domain": "quantumcomputing.stackexchange",
"id": 32,
"tags": "speedup, complexity-theory, bqp"
} |
Is post selection a quantum effect? | Question: Suppose I have two bits $x$ and $y$ and a map $M:x,y \to x \veebar y,x \veebar y$ where
$x,y \in \{0,1\}$ and $\veebar$ stands for bitwise XOR. If I post-select the case where the output is $x^{'}=0, y^{'}=1$, no input gives rise to this output. But this is similar to the concept of entanglement. If $M$ is a tensor product of vectors and $x$ and $y$ are qubit states, then an entangled state such as $|\psi\rangle = \frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$ is one state which cannot be expressed as a tensor product of two qubits. Does this make the ability to post-select a quantum effect?
Answer: It sounds like your map $M$ sends $|00\rangle$ and $|11\rangle$ to $|00\rangle$ and sends $|10\rangle$ and $|01\rangle$ to $|11\rangle$. As described, this cannot be done: to be implemented as a physical dynamics, the map needs to be reversible. The information must go somewhere.
Usually the information to reverse your computation is sent out of your computer by the exhaust fan and you call it heat and ignore that the details were there to undo your calculation. There is no such thing as a free deletion.
In fact it is possible to do arbitrarily complex computation with arbitrarily small amounts of energy; you really only need energy to throw away information (send it elsewhere), not to process it in reversible ways.
So there is no quantum version of your algorithm since it is unphysical until you spell out some other registers to preserve the information.
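The irreversibility can be checked directly from the map's truth table:

```python
def M(x, y):
    # The map from the question: (x, y) -> (x XOR y, x XOR y)
    return (x ^ y, x ^ y)

table = {(x, y): M(x, y) for x in (0, 1) for y in (0, 1)}
print(table)
# {(0, 0): (0, 0), (0, 1): (1, 1), (1, 0): (1, 1), (1, 1): (0, 0)}
# Only (0, 0) and (1, 1) ever appear as outputs, each produced by two inputs,
# so the map destroys one bit of information and cannot be reversed.
```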
Entanglement is just a result of linearity. There is no reason to expect or demand factorizability of states. And this isn't related to post selection.
In quantum mechanics there is a concept called post selection. But the point is to start with a subpopulation and after the fact break it into subsubpopulations. It's just grouping things together to compute averages of the groups instead of averages of the whole. Nothing deep. | {
"domain": "physics.stackexchange",
"id": 21754,
"tags": "quantum-mechanics"
} |
Is gauge invariance necessary to have Ward identity hold for off-shell amplitudes? | Question: In this other SE post: Is it really proper to say Ward identity is a consequence of gauge invariance? it is shown that the on-shell Ward identity is a consequence of global $U(1)$ symmetry for QED. This is very logical because the Ward identity is a consequence of the global symmetry and the contact terms are zero once you use the LSZ formula to extract the S-matrix matrix element. But it is also important to know how the Ward identity behaves off-shell. For example, when we consider the self-energy of a photon we know that
$$q_{\mu} \Pi^{\mu \nu} (q) = 0 \tag{1}$$
even if $q$ is off-shell. This is what protects the mass of the photon after renormalization.
Let us therefore write the Ward identity in the most general case:
$$\begin{equation}
\label{eq1}
\partial_{\mu} \langle T j^{\mu} (x) F_{1} (x_1)...F_{n} (x_n)\rangle = -i \sum_{j=1}^{n} \delta^4 (x-x_j) \langle TF_{1} (x_1)...\delta F_{j}(x_j)...F_{n} (x_n)\rangle,\tag{2}
\end{equation}$$
where $F$ can be either a fermion field $\psi$ or a photon field $A^{\mu}$. Let us consider the case that for every fermion field there is an antifermion field. This is what happens in all diagrams since fermion fields inside Green's functions appear in pairs inside currents. By using global $U(1)$ we get that the variations of photon fields are automatically zero, every fermion brings a $+i\theta$, and every antifermion brings a $-i\theta$, therefore the right hand side of Ward's identity is zero. This implies that even off-shell $q_{\mu} \Pi^{\mu \nu} (q) = 0$. Therefore it is not gauge symmetry that protects the mass of the photon but global symmetry. If this is true it would imply that QED need not be a gauge theory to be renormalizable and we can add a mass term for the photon: $ \mathcal{L}_{\gamma mass}=\frac{1}{2} m_{\gamma}^2 A^{\mu}A_{\mu} $ that breaks gauge symmetry but preserves global symmetry. Also, because of the Ward identity the photon propagator after renormalization would be:
$$\begin{equation}
D^{\mu \nu} = \frac{-i g^{\mu \nu}}{(q^2-m_{\gamma}^2)\left(1+ \Pi(q^2)\right)},\tag{3}
\end{equation}$$
where $\Pi^{\mu \nu} (q) \equiv (g^{\mu \nu} q^2-q^{\mu} q^{\nu}) \Pi(q^2)$. This would imply that $U(1)$ symmetry preserves the mass of the photon even if $m_{\gamma} \neq 0$. All of this looks extremely suspicious to me and I am pretty sure I must have made a mistake somewhere in the derivation of the off-shell Ward identity, but I can't tell where, and why assuming gauge symmetry would fix such a mistake.
Answer:
The Ward identities (2) for connected$^1$ correlators (with appropriate contact terms, derived via the Schwinger-Dyson equations for a global symmetry) hold off-shell, cf. Ref. 1.
However, the transversality (1) of the photon 1-particle-irreducible (1PI) vacuum polarization/self-energy is a consequence of local gauge symmetry (and an appropriate class of gauge-fixing conditions), as explained in e.g. this and this Phys.SE posts.
References:
M.E. Peskin & D.V. Schroeder, An Intro to QFT; Section 9.6.
--
$^1$ The Ward identity (2) is originally for not-necessarily-connected correlators, but one can identify connected components, cf. the linked cluster theorem, to make it a statement about connected correlators. | {
"domain": "physics.stackexchange",
"id": 85752,
"tags": "quantum-field-theory, quantum-electrodynamics, gauge-invariance, correlation-functions, ward-identity"
} |
ros c++ pause unpause gazebo | Question:
Hi,
I want to control the Gazebo simulation from C++.
My core code is below:
ros::ServiceClient pauseGazebo = n_.serviceClient<std_srvs::Empty>("/gazebo/pause_physics");
std_srvs::Empty pauseSrv;
pauseGazebo.call(pauseSrv);
std::cout<<"enter q to continue"<<std::endl;
char key;
std::cin>>key;
if('q' == key)
{
pauseGazebo.shutdown();
ros::ServiceClient unpauseGazebo = n_.serviceClient<std_srvs::Empty>("/gazebo/unpause_physics");
std_srvs::Empty unpauseSrv;
pauseGazebo.call(unpauseSrv);
std::cout<<"q is pushed"<<std::endl;
}
The pause works well; after I enter q, "q is pushed" is printed, but Gazebo remains paused.
Can you provide some tips?
Originally posted by TouchDeeper on ROS Answers with karma: 57 on 2019-03-28
Post score: 0
Answer:
You have a typo in your code. The second service call should be
unpauseGazebo.call(unpauseSrv);
whereas you have
pauseGazebo.call(unpauseSrv);
which you have already shutdown.
A couple of more suggestions / information (as I have OCD)
You don't need to explicitly call shutdown as per documentation. When the service client goes out of scope, it will be automatically shutdown.
Also, declaring everything on the top makes the code more readable, like this
ros::ServiceClient pauseGazebo = n_.serviceClient<std_srvs::Empty>("/gazebo/pause_physics");
ros::ServiceClient unpauseGazebo = n_.serviceClient<std_srvs::Empty>("/gazebo/unpause_physics");
std_srvs::Empty pauseSrv;
std_srvs::Empty unpauseSrv;
char key;
pauseGazebo.call(pauseSrv);
std::cout<<"enter q to continue"<<std::endl;
std::cin>>key;
if('q' == key)
{
unpauseGazebo.call(unpauseSrv);
std::cout<<"q is pushed"<<std::endl;
}
Originally posted by janindu with karma: 849 on 2019-03-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by TouchDeeper on 2019-03-28:
Hi,@janindu
Thanks for your point. You are right. I‘m too careless. | {
"domain": "robotics.stackexchange",
"id": 32774,
"tags": "gazebo, c++, ros-kinetic"
} |
Why can an applied force exceed static friction *without slipping* when both objects are free? | Question: Consider the following example:
A mass ($m_1$) is stacked on top of another mass ($m_2$) resting on a frictionless table in a vacuum. There is a nonzero coefficient of static friction ($\mu_s$) between the masses. What is the maximum force $F$ that can be applied to the top mass such that no slipping occurs?
Clearly, the normal force is $F_n = m_1 g$ and so the friction will resist a force up to $F_f = \mu_s m_1 g$ in magnitude. As far as I know, this is correct.
My (incorrect) intuition says that we should stop here, and $F_\text{max} = \mu_s m_1 g$ is the maximum force before $m_1$ starts to slip. However my textbook (and this answer) says that we should continue, and solve for the acceleration of the two masses:
$$
\begin{align*}
F_{\text{net, }1} &= F_\text{max} - F_f & F_{\text{net, }2} &= F_f \\\\
a_1 &= \frac{F_\text{max} - F_f}{m_1} & a_2 &= \frac{F_f}{m_2}
\end{align*}
$$
Now, since we want $m_1$ and $m_2$ to move as a single unit, these accelerations should match:
$$
\begin{align*}
a_1 &= a_2 \\\\
\frac{F_\text{max} - F_f}{m_1} &= \frac{F_f}{m_2} \\\\
F_\text{max} &= \frac{F_fm_1}{m_2} + F_f \\\\
F_\text{max} &= \boxed{\frac{\mu_s m_1^2 g}{m_2} + \mu_s m_1 g}
\end{align*}
$$
This result seems to imply that we can apply a force much larger than the friction (especially if $m_2$ is small) without causing any slipping. Is that true, and why is it possible, conceptually? I understand the algebra behind the calculation, but I'm failing to wrap my head around the reality of it; it seems impossible that we can apply an arbitrarily large force by making $m_2$ sufficiently small, even though the friction is constant.
Answer: To quickly review, friction arises because when two objects are being driven to slide past each other, any sliding generally incurs an energy penalty (because of dissipative interactions at the surfaces—microscale protrusions need to irreversibly deform during rubbing, for example). For sliding to proceed spontaneously, the driving force needs to be large enough that the input force–distance work pays for that energy penalty.
We often consider one object to be fixed in place, or sufficiently massive that it serves as an intuitive frame of reference. That object, acting as a base, never moves in those simple cases.
But what if that object—here, $m_2$—weighs almost nothing?
Any initial frictional force easily accelerates it, and now the driving force for sliding has disappeared, as both objects are moving in concert.
The only hope one has to detach from $m_2$ is essentially to jerk away so quickly that its own slight mass generates an inertial force comparable to the frictional threshold of sliding. In other words, we can't rely on the base staying fixed; we have to consider its mass as a relevant parameter—and we might first guess a simple scaling as $\sim 1/m_2$ from the above reasoning. Combination of the only other known parameters gives $\sim \mu_s m_1^2g/m_2$ as a term with dimensionality of force, and more rigorous analysis confirms the presence of this term.
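A quick numerical check of the scaling just described, with arbitrary illustrative values:

```python
mu_s, m1, m2, g = 0.5, 2.0, 0.1, 9.8   # arbitrary illustrative values (SI units)

F_f = mu_s * m1 * g                              # friction threshold, 9.8 N
F_max = mu_s * m1**2 * g / m2 + mu_s * m1 * g    # the boxed formula, about 205.8 N

a1 = (F_max - F_f) / m1    # top block: applied force minus friction
a2 = F_f / m2              # bottom block: dragged along by friction alone
assert abs(a1 - a2) < 1e-9  # both blocks accelerate together, about 98 m/s^2

print(F_max / F_f)  # the no-slip force exceeds the friction threshold about 21-fold
```

Shrinking $m_2$ further makes the ratio grow without bound, matching the $1/m_2$ term.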
It can be insightful to envision some practical cases of a very light $m_2$. Consider, say, trying to move a sealed ream of paper on a slick surface by brushing one's fingers on the top, in contrast to moving a single sheet of paper on the same surface. | {
"domain": "physics.stackexchange",
"id": 100517,
"tags": "newtonian-mechanics, friction"
} |
Basic Question about Energy-Force Relationship | Question: I was looking at the solution of a problem, and saw
$$\frac {dE}{dt}= \vec v\cdot \vec F$$
Where does that come from? I would've expected
$$\frac {dE}{dt}= m v \cdot \dot v+ \frac {dU}{dt}$$
Answer: If the potential $U$ is constant in both time and space then we have $\frac {dU}{dt} = 0$ and
$\displaystyle \frac {dE}{dt} = m \vec v \cdot \dot {\vec v}$
But $m \dot {\vec v} = m \vec a = \vec F$ so we have
$\displaystyle \frac {dE}{dt} = \vec v \cdot \vec F$ | {
"domain": "physics.stackexchange",
"id": 97417,
"tags": "newtonian-mechanics, forces, energy, work"
} |
Do force fields come from potential fields, or do potential come from forces? | Question: Please excuse me if this question is a duplicate. I tried my best but I didn't find an existing question for this.
In physics class, I was first introduced to gravity in terms of a force, specifically, the inverse square law. At the same time, there is also the concept of gravitational potential energy, and the gravitational force vector always pointed from high potential to low potential.
Mathematically, you could describe that as $F_g = -\nabla g$, where $g$ is the gravitational potential field. On the other hand, you could also say in terms of work $g = -\int F_g \cdot dx$ (assuming a point at $\infty$ has 0 potential).
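The two relations above are indeed inverses of each other; here is a quick numerical sanity check for a point-mass potential (the constants are merely illustrative, roughly Earth-like):

```python
import math

G, M = 6.674e-11, 5.972e24        # gravitational constant, central mass (kg)

def potential(x, y, z):
    return -G * M / math.sqrt(x*x + y*y + z*z)

def force_per_unit_mass(x, y, z, h=1.0):
    # F = -grad(potential), via central finite differences
    return (-(potential(x + h, y, z) - potential(x - h, y, z)) / (2 * h),
            -(potential(x, y + h, z) - potential(x, y - h, z)) / (2 * h),
            -(potential(x, y, z + h) - potential(x, y, z - h)) / (2 * h))

r = 7.0e6                          # 7000 km from the center
Fx, Fy, Fz = force_per_unit_mass(r, 0.0, 0.0)
magnitude = math.sqrt(Fx*Fx + Fy*Fy + Fz*Fz)

# The gradient of -GM/r reproduces the inverse-square law, pointing inward (-x):
assert abs(magnitude - G * M / r**2) / (G * M / r**2) < 1e-6
assert Fx < 0
```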
The way I typically thought about it is that something like flux acts as the "root source" of the fields, and the flux comes from a property such as mass for gravity or charge for electromagnetism. Then, the potentials are a result of the force. That makes sense to me since the force field is like the flux area density, which would be proportional to the inverse square of the distance from the source of the flux. For example, the electric force weakens by a factor of 4 when your distance doubles, since the flux is being spread over a Gaussian surface that's 4 times larger.
My question is, is the flux responsible for setting up the forces, from which the potentials come as a result, or does it set up the potentials, and the forces simply act along the gradient? Or, are these interpretations the same?
In problems I usually thought about it in terms of the differential way but I would love to get some insight on this. Thanks.
Answer: Thinking in terms of "root source" is somehow like going in circles: you might always choose the starting point of your theory however you want and argue that the remaining quantities easily follow from the definitions (or as properties, or else). What makes more sense, in physics, is trying to understand what are the observable quantities (i. e. quantities that one can empirically measure with experiments) versus the mathematical framework that you introduce to describe the theory and make predictions.
In your example the observable quantities are the forces and the charges (by charge we mean gravitational/electrical unit of particles) because most experiments are designed to directly access them; in principle not even fields are observables as it is experimentally difficult to decouple the effect of the force from the unit of charge giving rise to such a force. One might argue that in some cases field potentials may be observable (see the Aharonov-Bohm effect, for instance), but this is a refined topic and open for discussion.
On the other hand, once you are provided with a full mathematical description of your theory, you see that (if some particular conditions hold) conservative fields are uniquely specified in each point in space once you assign the divergence and the curl: essentially one can prove that the solutions to those differential equations are enough to reconstruct the fields at every point. Then it is up to you what quantity you want to use for your equations: usually it comes down to choosing the one such that the equations take the simplest form in terms of symmetries and solutions.
My question is, is the flux responsible for setting up the forces, from which the potentials come as a result, or does it set up the potentials, and the forces simply act along the gradient? Or, are they the same?
Forces exist because they are generated by charges, thus neither the flux, nor the potential, nor the field, nor anything else is responsible for setting them up. Moreover, they are not the same, as one is a scalar function and the other is a vector field, and only in some particular cases can one be derived from the other. | {
"domain": "physics.stackexchange",
"id": 50132,
"tags": "forces, potential, potential-energy, vector-fields, conservative-field"
} |
Does Poisson's effect explain why the necking effect is more apparent in some materials during a tensile test? | Question: I know the ratio at which the cross-sectional area changes relative to length is Poisson's ratio, but is this why necking is more apparent in some materials than others, or is the necking effect only dependent on the material's ductility?
Answer: It is important to think about the underlying physics of a tensile test. What material are you testing? What is the geometry you are testing? How much global strain is being applied? With these questions in mind, I think it is necessary to think about this problem both in terms of elastic and plastic deformations.
Consider the stress-strain behavior for two materials, one with no plasticity (traditionally rubbers) and one that exhibits plasticity (traditionally metals). First let's focus on the elastic material. It is possible to imagine a situation where differences in Poisson's ratio manifest as differences in necking behavior. One might consider there to be necking in a simple rectangular tensile specimen. In this situation the boundary conditions are such that displacements at the ends of the specimen are zero. Therefore, during deformation a curved edge profile might develop. For perfectly elastic deformations this is where it is clear that differences in Poisson's ratio might affect the relative necking between two materials. If the Poisson's ratio is zero, then we would not expect there to be any geometric changes in the sample--the rectangular specimen just gets longer. However, on the other end of the spectrum, if the material is incompressible, $\nu = 0.5$, the initial and final configurations must have the same volume. Geometrically this means, given the fixed zero displacement boundary conditions of our test, there needs to be some distribution of deformation, necking, within the specimen.
In the context of plasticity the picture gets a bit more complicated. The short answer is that the final geometry of a tensile specimen will be dependent on its material properties. This includes both the plastic and elastic behavior (including the Poisson's ratio) of the material. Materials that are more ductile have a higher strain to failure, and this feature, when comparing elastically similar yet plastically dissimilar materials, predominantly drives differences in necking behavior. If you have two materials, one which can withstand 2% strain and one 20% (assuming $\epsilon_{yield} = 1\%$), each increment of strain the second material can withstand relative to the first will contribute to differences in the final geometries of the specimens. In this example the failure of the material is driving the final configuration of the neck.
Finally, it is important to realize that once necking starts to occur things are no longer as simple as they appear. Once the specimen starts to neck the stress state within the gage is, if it ever really was, no longer uniform. This means that at the smallest section of the neck stresses increase (the cross-sectional area drops) and new components of stress start popping up. | {
"domain": "engineering.stackexchange",
"id": 1089,
"tags": "mechanical-engineering, civil-engineering"
} |
Question simplifying boolean expression | Question: This is what I have so far, so I just want to know if I am doing everything right or wrong. I am novice in the topic.
I have been given:
$(A \oplus B) \land \neg A \lor A$
XOR = $\oplus$
By order of operations regroup:
$(A \oplus B ) \land (\neg A \lor A)$ // XOR
(~A B + A ~B) * ~A + A // Distributive law
(~A ~A B + ~A A ~B) + A // Idempotent Law
~A B + A ~A ~B + A // Inverse Law
~A B + 0 ~B + A
~A B + A
Is this statement right?
Answer: Note that $\land$ has higher precedence than $\lor$ (see order of precedence@wiki). Therefore,
$$(A \oplus B) \land \lnot A \lor A = ((A \oplus B) \land \lnot A) \lor A = (((\lnot A \land B) \lor (A \land \lnot B)) \land \lnot A) \lor A$$
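As a sanity check, the equivalence can be verified exhaustively over the four truth assignments (a quick Python sketch; `^` acts as XOR on booleans):

```python
from itertools import product

def original(A, B):
    # (A XOR B) AND (NOT A), then OR A -- AND binds tighter than OR
    return ((A ^ B) and (not A)) or A

def simplified(A, B):
    return A or B

for A, B in product([False, True], repeat=2):
    assert original(A, B) == simplified(A, B)
```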
Then iteratively apply the distributive law and you will get $A \lor B$. | {
"domain": "cs.stackexchange",
"id": 6216,
"tags": "boolean-algebra"
} |
Does this 2D filter for enhancing circular dots in images have a name? | Question: I came across this 2D filter for enhancing circular dots in images; for example, to enhance a dot with a diameter of 5 pixels, the filter is:
$\frac{1}{336}\left( \begin{array}{ccccccc}
0 & 0 & −21 & −21 & −21 & 0 & 0\\
0 & −21 & 16 & 16 & 16 & −21 & 0\\
−21 & 16 & 16 & 16 & 16 & 16 & −21\\
−21 & 16 & 16 & 16 & 16 & 16 & −21\\
−21 & 16 & 16 & 16 & 16 & 16 & −21\\
0 & −21 & 16 & 16 & 16 & −21 & 0\\
0 & 0 & −21 & −21 & −21 & 0 & 0 \end{array} \right)$
Does this filter have a name, like Sobel or Scharr?
What is the rationale behind its coefficients?
Answer: This is a matched filter, where the shape of the filter coincides with the shape of the signal to be detected.
In this case, whenever the filter convolution overlaps a circular dot in the image, a maximum will be present in the output image. Thus, searching for local maxima in the output image will give the center positions of the dots in the original image.
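The coefficients themselves follow from the geometry: the 21 pixels inside the 5-px disk get $+16$ and the 16 pixels of the surrounding ring get $-21$, so that $21 \cdot 16 - 16 \cdot 21 = 0$ and the kernel has zero mean (flat regions give zero response), while $1/336$ normalizes the peak response to 1. A numpy sketch (the test image and dot position are made up for illustration):

```python
import numpy as np

# Rebuild the 7x7 kernel from its geometry: +16 inside the disk
# (r^2 <= 5), -21 on the ring (5 < r^2 <= 10), 0 in the corners.
yy, xx = np.mgrid[-3:4, -3:4]
r2 = yy**2 + xx**2
kernel = np.where(r2 <= 5, 16.0, np.where(r2 <= 10, -21.0, 0.0)) / 336
assert abs(kernel.sum()) < 1e-12     # zero mean

# A 5-px dot pasted into a 13x13 image, centered at (6, 6).
img = np.zeros((13, 13))
img[3:10, 3:10] = (r2 <= 5)

# Brute-force correlation; the response peaks exactly at the dot center.
resp = np.zeros_like(img)
for i in range(3, 10):
    for j in range(3, 10):
        resp[i, j] = (kernel * img[i - 3:i + 4, j - 3:j + 4]).sum()
assert np.unravel_index(resp.argmax(), resp.shape) == (6, 6)
```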
Note that your linked source mentions that it is a matched filter. | {
"domain": "dsp.stackexchange",
"id": 2659,
"tags": "image-processing, filters, convolution, 2d"
} |
Placing interactive markers on joints of a robot model using Ros3djs | Question:
I was trying to tele-operate a robot arm via interactive markers placed on each joint of the model displayed in a web page. I am planning to use ros3djs to address this task. I went through the tutorials mentioned on the package website, but I cannot understand how to place markers on each joint of the robot arm.
How can I do this?
Originally posted by Werewolf_86 on ROS Answers with karma: 125 on 2017-01-02
Post score: 0
Original comments
Comment by gvdhoorn on 2017-01-02:
This would appear to be a duplicate of Placing interactive markers on joints of a robot model using Ros3djs. Can you please not post questions multiple times; it typically results in either both questions being ignored, or duplication of effort.
Comment by Werewolf_86 on 2017-01-02:
Thanks for pointing out my mistake. I deleted the previous post. Would you be able to help me out regarding this problem ? Kind regards.
Answer:
Is this a ros3djs specific problem, i.e. do you have interactive markers working in RViz, but not ros3djs?
If you don't, you'll need a package that generates those markers first.
You can look to http://wiki.ros.org/cob_interactive_teleop or http://wiki.ros.org/turtlebot_interactive_markers/Tutorials/indigo/UsingTurtlebotInteractiveMarkers for inspiration on how to use interactive markers for teleoperation :)
Originally posted by T045T with karma: 48 on 2017-02-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26622,
"tags": "ros, ros3djs"
} |
Trouble accessing Raspberry pi 2 GPIO with python | Question:
I have a scenario in which I have to take readings from a DHT11 Temperature & Humidity sensor and publish them to a topic (I am using Python, so suggestions keeping Python in mind would be helpful). The problem is that I need to use sudo to access the GPIO, but then rosrun is not found when in sudo. What should I do?
Originally posted by nishthapa on ROS Answers with karma: 47 on 2016-03-18
Post score: 0
Answer:
Please try to use the search next time (or Google: $searchterms site:answers.ros.org fi), this has been asked (and answered) many times before. See Raspberry Pi 2, ROS and GPIO access for a recent one.
Originally posted by gvdhoorn with karma: 86574 on 2016-03-18
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by nishthapa on 2016-03-18:
But that case uses the WiringPI Library for C++. I am trying to do this with Python.
Comment by gvdhoorn on 2016-03-18:
The question I linked as an example. There are many more, and also dealing with Python.
Comment by gvdhoorn on 2016-03-18:
Also: WiringPi/WiringPi-Python.
Comment by nishthapa on 2016-03-21:
Wiringpi2 says it's deprecated and wiringpi still requires sudo. What should I do?
Comment by gvdhoorn on 2016-03-21:
From _wiringpi2/README.txt:
This package has been deprecated in favour of using the name "WiringPi" which provides the same exact functionality.
It's just a name change. | {
"domain": "robotics.stackexchange",
"id": 24163,
"tags": "python"
} |
Is there a 'proper time' for a macroscopic system in relativistic statistical mechanics? | Question: In relativity, we define proper time for a particle and can therefore discuss causality, i.e. the order of events being preserved for it.
For statistical mechanics in classical mechanics, macroscopic systems evolving in time follow the same time axis; hence the increase of entropy with time (a.k.a. the second law of thermodynamics) can be accepted 'naturally'. However, for a macroscopic system in equilibrium, can we define proper time? For example, for gases, if the comoving frames of the particles differ, the particles themselves can evolve through their own proper times; however, what about the gas, the macroscopic system itself? Is there a well-defined time describing change for equilibrium statistical mechanics?
P.S.: The anomaly in the definition of temperature in special relativity also led me to this frustration.
Source: http://kirkmcd.princeton.edu/examples/temperature_rel.pdf
Thank you for reading my question. If you have a clear answer, I would appreciate it if you let me know.
Answer: Every macroscopically large box of gas that is not accelerating has a well-defined rest frame of reference, and thus the proper time of that box of gas will be equivalent to the coördinate time in that rest frame of reference in the Special Theory of Relativity.
There is no generally accepted solution if the box of gas is moving. Plenty of authors have argued their case, but you have those that argue it is going to transform by multiplication by the Lorentz factor $\gamma$, others argue it should be a division, yet others say it should be Lorentz invariant, and some others entertain the square of the Lorentz factor in either multiplication or division. My personal belief is that temperature is simply only well-defined for systems at rest.
In the General Theory of Relativity the concept is worse and I am not familiar with attempted solutions.
"domain": "physics.stackexchange",
"id": 94966,
"tags": "special-relativity, statistical-mechanics"
} |
Why is it easier to observe B-meson oscillations than D-meson ones? | Question: B-meson oscillations were observed several years ago by the CDF experiment (B-oscillations), but D-meson oscillations could not be observed until very recently by the LHCb experiment. I'm wondering why it is more difficult to observe D-oscillations than B-oscillations, taking into account that $D^0$ mesons have lower mass than $B^0$ ones.
Answer: The oscillation probability (for K, B, and D mesons) is proportional to the mass difference of the mass eigenstates. This mass difference is related to the oscillation (box) diagrams.
To compute the amplitude of these diagrams you have to consider the CKM matrix elements for decays between quark families.
If you consider all the diagrams you will see that for the D meson this is strongly suppressed with respect to K and B, since it is proportional to $(V_{cb} V_{ub})^2$. So the oscillation is slower and more difficult to observe.
"domain": "physics.stackexchange",
"id": 52871,
"tags": "particle-physics, experimental-physics, particle-detectors, accelerator-physics, particle-accelerators"
} |
Impulse theorem | Question: According to my physics book, the impulse-momentum theorem states that the change in momentum of an object equals the impulse applied to it.
To indicate the average force acting on the object itself, the book uses the following notation $<\vec F>$.
I would like to understand how to indicate the magnitude of this average force. Shall I use the notation $|<\vec F>|$ or $<|\vec F|>$?
I am not sure the second notation is correct.
Answer: The first one is the magnitude of the average force $$|<\vec{F}>|=\Bigg|\frac{1}{T}\int_0^T\vec{F}dt\Bigg|.$$
The second one is the average of the magnitude of the force,
$$<|\vec{F}|>=\frac{1}{T}\int_0^T|\vec{F}|dt.$$
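The two quantities genuinely differ in general; a quick numerical sketch (numpy, with a made-up one-dimensional force $F(t) = \sin t$ averaged over one period):

```python
import numpy as np

# 1-D toy force F(t) = sin(t), sampled uniformly over one full period.
t = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
F = np.sin(t)

mag_of_avg = abs(F.mean())       # |<F>| : magnitude of the average force
avg_of_mag = np.abs(F).mean()    # <|F|> : average of the force magnitude

# Over a full period the average force vanishes, but the average
# magnitude is 2/pi ~ 0.637 -- so the two notations are not equivalent.
assert mag_of_avg < 1e-9
assert abs(avg_of_mag - 2.0 / np.pi) < 1e-6
```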
In the impulse-momentum context, the first one is the one that makes sense. | {
"domain": "physics.stackexchange",
"id": 77486,
"tags": "newtonian-mechanics, forces"
} |
name of log(n+1) plot | Question: I am trying to plot a distribution of positive integers which contains a lot of variance. I opted to use the log of the y-values but that causes issues due to the inclusion of zeros. I thought of plotting log10(n+1), but it seems a bit janky.
Is this solution used more often?
Does it have a name?
Is there a better/more common method?
Answer: I'm not aware of a standard name, but it does appear to be fairly common. Programmatically, it's often implemented as log1p ("log of 1 plus").
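A minimal numpy illustration with made-up counts:

```python
import numpy as np

counts = np.array([0, 1, 3, 10, 250, 10_000])   # made-up skewed counts

# log1p(x) computes log(1 + x): finite at zero, ~log(x) when x is large.
shifted_log = np.log1p(counts)

assert shifted_log[0] == 0.0               # zeros map to 0, not -inf
assert np.all(np.diff(shifted_log) > 0)    # monotone, so ordering is preserved
```

Matplotlib's `symlog` axis scale, mentioned in the links below, handles zeros (and negatives) at the axis level instead of transforming the data.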
stats.SE "Plotting 0 in a log scaled axis"
stats.SE "Interpreting log-log regression with log(1+x) as independent variable", asking about regression with log1p
SAS blog "Log transformations: How to handle negative data values?", suggests $\log(1+\epsilon)$
There's also the "symlog" or "log-modulus" functions when you really care about negative values as well:
SAS blog "A log transformation of positive and negative values" puts forward $\operatorname{sgn}(x)\log(1+|x|)$ as "log-modulus" function
SO "What is the origin of Matplotlib's symlog (a.k.a. symmetrical log) scale?"
SO "Logscale plots with zero values in matplotlib", the answer suggests symlog, but a linked duplicate has an answer suggesting log1p | {
"domain": "datascience.stackexchange",
"id": 11104,
"tags": "distribution, logarithmic"
} |
Understanding formula for wave displacement | Question: I have a brief question regarding the formula for wave displacement that I've just encountered. My textbook says:
For a simple plane wave, we have, for a simple harmonic with displacement $u$:
$$u = A \cos(\kappa x - \omega t)$$
where $\omega$ is the angular frequency, $\kappa$ is the wavenumber ($\omega$/wave-velocity), $A$ is the maximum amplitude, and $t$ is time.
Here's my question. If $\kappa$ is defined as $\omega/v$, then I get, by plugging into the formula:
$$u = A \cos(\frac{\omega}{v}x - \omega t)$$
$$= A \cos(\omega (\frac{x}{v} - t))$$
But since $t = \frac{x}{v}$, we get:
$$u = A \cos(\omega(t - t)) = A$$
So by my reasoning (which is obviously wrong), I always end up with $u = A$, which basically makes the whole $\cos$-term redundant. If anyone can explain to me what is wrong with my reasoning, I would greatly appreciate it!
Answer: It is not the case that $x=vt$. The $x$ and $t$ in your expression for $u$ represent a position $x$ along the wave and a time $t$ of your choosing at which one might want to determine the amplitude of the wave. One can choose $t$ and $x$ completely independently of one another; there is no functional relationship between them, and in particular, they are not directly related to the wave propagation speed.
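To make the independence concrete, one can evaluate $u$ at freely chosen $(x, t)$ pairs (a small numpy sketch; the constants are arbitrary):

```python
import numpy as np

A, kappa, omega = 2.0, 1.0, 3.0   # arbitrary illustrative constants

def u(x, t):
    """Displacement of the plane wave at position x and time t."""
    return A * np.cos(kappa * x - omega * t)

# x and t can be chosen independently -- there is no x = v*t constraint:
assert u(0.0, 0.0) == A           # origin at time zero
assert abs(u(0.0, 0.5)) < A       # same x, later t: a different displacement
```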
For example, you could ask "what is the amplitude of the wave at the origin at time zero?" This would correspond to determining the amplitude at $x=0$ and $t=0$, and in this case the expression
$$
u = A\cos(\kappa x - \omega t)
$$
would give $u=A$. On the other hand, you could ask "what is the amplitude of the wave at the origin but at time $t_0>0$?" This would correspond to $x=0$ and $t=t_0$ which would give amplitude
$$
u = A\cos(\omega t_0)
$$ | {
"domain": "physics.stackexchange",
"id": 8442,
"tags": "waves, displacement"
} |
Pandas: How can I update dataframe values? | Question: I have two spreadsheets where one is updating the other.
How can I update this data using the pandas library?
Example, where 'b' updates 'a':
a = {'field': ['a', 'b', 'c'], 'value': ["", None, 1]}
b = {'field': ['a', 'b', 'd'], 'value': [1, 2, 1]}
Expected outcome:
c = {'field': ['a', 'b', 'c', 'd'], 'value': [1, 2, 1, 1]}
Answer: import pandas

df_a = pandas.DataFrame(a)
df_b = pandas.DataFrame(b)
c = pandas.concat([df_a, df_b], ignore_index=True).drop_duplicates(subset=['field'], keep='last') | {
"domain": "datascience.stackexchange",
"id": 6127,
"tags": "python, pandas"
} |
Scalability of C server implementation based on pthreads | Question: I am wondering about the feasibility of the following basic implementation of a server and how well it would scale. I know that large-scale, distributed servers should probably be written in a language like Erlang, but I'm interested in the viability of the following code "these days".
Other than bugs/issues I'd primarily like to know 3 things:
The C headers have many compatibility methods/structs/etc., some of which do similar things. Is this a correct "modern" way to handle incoming IPv4 and IPv6 connections?
How scalable is it? If I have a single VPS and don't need a distributed server, is it adequate for today's applications? (Potentially thousands/millions of concurrent connections? I appreciate the latter would also very much be hardware dependent!)
// SimpleCServer.c
// Adapted from http://beej.us/guide/bgnet/output/print/bgnet_A4.pdf
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <pthread.h>
// The port users will be connecting to
#define PORT "12345"
// Prototype for processing function
void *processRequest(void *sdPtr);
// Get sockaddr, IPv4 or IPv6
void *get_in_addr(struct sockaddr *sa) {
if (sa->sa_family == AF_INET) {
return &(((struct sockaddr_in*)sa)->sin_addr);
}
return &(((struct sockaddr_in6*)sa)->sin6_addr);
}
int main(int argc, char *argv[]) {
// Basic server variables
int sockfd = -1; // Listen on sock_fd
int new_fd; // New connection on new_fd
int yes=1;
int rv;
struct addrinfo hints, *servinfo, *p;
struct sockaddr_storage their_addr; // connector's address information
socklen_t sin_size;
char s[INET6_ADDRSTRLEN];
// pthread variables
pthread_t workerThread; // Worker thread
pthread_attr_t threadAttr; // Set up detached thread attributes
pthread_attr_init(&threadAttr);
pthread_attr_setdetachstate(&threadAttr, PTHREAD_CREATE_DETACHED);
// Server hints
memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE; // use my IP
if ((rv = getaddrinfo(NULL, PORT, &hints, &servinfo)) != 0) {
fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
return 1;
}
// Loop through all the results and bind to the first we can
for(p = servinfo; p != NULL; p = p->ai_next) {
if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
perror("server: socket");
continue;
}
if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(int)) == -1) {
perror("setsockopt");
exit(2);
}
if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
close(sockfd);
perror("server: bind");
continue;
}
break;
}
if (p == NULL) {
fprintf(stderr, "server: failed to bind\n");
return 3;
}
// All done with this structure
freeaddrinfo(servinfo);
// SOMAXCONN - Maximum queue length specifiable by listen. (128 on my machine)
if (listen(sockfd, SOMAXCONN) == -1) {
perror("listen");
exit(4);
}
printf("server: waiting for connections...\n");
// Main accept() loop
while (1) {
// Accept
sin_size = sizeof their_addr;
new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
if (new_fd == -1) {
perror("accept");
continue;
}
// Get IP Address for log
inet_ntop(their_addr.ss_family, get_in_addr((struct sockaddr *)&their_addr), s, sizeof s);
printf("server: got connection from %s\n", s);
// Process the request on a new thread. Spawn (detaching) worker thread
pthread_create(&workerThread, &threadAttr, processRequest, (void *)((intptr_t)new_fd));
}
return 0;
}
void *processRequest(void *sdPtr) {
int sd = (int)(intptr_t)sdPtr;
fprintf(stderr, "Processing fd: %d\n", sd);
// Processing goes here
FILE *fpIn = fdopen(sd, "r");
FILE *fpOut = fdopen(sd, "w");
fprintf(fpOut, "Processing fd %d on server.", sd);
fflush(fpOut);
//
fclose(fpIn);
fclose(fpOut);
close(sd);
return NULL;
}
Answer: First of all, each connection consumes a local port. Therefore, the number of concurrent connections is hard-limited by an unsigned short, that is, 65536 (so millions are out of the question). There are also other limitations you may or may not care about.
Second, thread creation is somewhat expensive. Consider pre-allocating a thread pool.
Third, code for reading data from the connection is missing. I assume that the intention is for each thread to issue a recv system call. This may lead to many thousands of outstanding system calls, each consuming kernel resources. Using poll is way more scalable.
Finally, a code review: your main does way too much. Variables are declared too far away from their uses. Consider restructuring; at least two functions (setup_listener_socket and mainloop) should be split out. | {
"domain": "codereview.stackexchange",
"id": 7947,
"tags": "c, multithreading, server, pthreads"
} |
Do balanced charges propagate electromagnetic waves? | Question: Two charges with different signs neutralize each other and do not generate an electric field when they are joined. Free electrons inside a conductor are balanced (as a whole) with the positive charge of the lattice.
Accelerated charges propagate electromagnetic waves.
Is the latter true only for separated charges, such as surface charge, or is it also true for electrons inside a conductor despite being balanced?
Answer: This can be a bit tricky. People make the mistake of focusing too much on electrons. What is important is the charge density and the current density. In order to get electromagnetic waves you need to have a charge or current density that has a changing dipole moment.
Example 1: isolated charge moving at uniform velocity. This does not produce an EM wave because the charge and current dipole moments are both not changing.
Example 2: isolated charge moving with uniform circular motion. This produces an EM wave because the current dipole moment is rotating as the charge goes around the loop. The charge dipole moment is merely moving as in Example 1.
Example 3: loop with steady current. This does not produce an EM wave because the current dipole moment is steady and the charge dipole moment is zero. Note that the electrons in the wire are still accelerating as in Example 2. Note also that this case matches the specifications of your question, at all points the charge is balanced so there is no charge density anywhere, only current density. Charges are accelerating but there is no wave.
Example 4: loop with oscillating current. This does produce an EM wave because the current dipole moment is oscillating while the charge dipole moment is zero. Note that this case again matches the specifications of your question. At all points the charge is balanced so there is no charge density anywhere, only current density. Charges are accelerating and there is a wave.
Example 5: straight wire segment with oscillating current. This does produce an EM wave because both the current and charge dipole moments are changing in time. The charge is not always neutralized. | {
"domain": "physics.stackexchange",
"id": 90979,
"tags": "electromagnetic-radiation"
} |
Calculating the molarity of DNA in a cell | Question:
In the following questions use a value of 3 for $\pi$, $6 \times 10^{23}$ for Avogadro’s number and $660$ for the molecular weight of $\pu{1 bp}$ of DNA. The volume of a sphere of radius $r$ is $4/3\,πr^3$. A bacterium has a single copy of a $\pu{4 \times 10^6 bp}$ circular genomic DNA.
If the diameter of this spherical cell is 1 micrometer, what would be the molar concentration of DNA in this cell?
The volume comes to be $\pu{2 \times 10^{-5} L}$.
Amount of substance is $\pu{6.7 \times 10^-18 mol}$.
Dividing this by the volume, my answer comes $\pu{3.3 \times 10^{-13} M}$, but the answer given in my book is $\pu{3.3 \times 10^{-9} M}$.
Answer: If the stated bacterium's cell has a diameter of $\pu{1 \mu m}$, the volume can be derived in liters, remembering the relation between cubic decimeters and liters ($\pu{1 dm3} = \pu{1 L}$):
$$
V_\text{cell}
=\frac{4}{3}\pi \left(\pu{0.5\times10^-6 m}\right)^3
=\frac{4}{3}\pi \left(\pu{0.5\times10^-5 dm}\right)^3
=\pu{5\times 10^-16 L}
$$
Inside this volume, the organism contains a certain number of base pairs; so the total amount of substance has to be known.
From Avogadro's Number, a mole is defined to be that portion (number of particles) of every substance in a defined physical phase; since this is an aqueous solution, the volume, as well as the molecular weight is meaningless: if one mole is defined by a certain number of particles, a different number of particles defines a different amount of substance:
$$
n_{\text{tot bp}}
=\frac{N_{\text{bp}}}{N_{\text{A}}}
=\frac{\pu{4\times 10^6 molecules}}{\pu{6\times 10^23 molecules mol-1}}
=\pu{6.7\times10^-18 mol}
$$
Then, by the definition of molarity, dividing the amount of substance contained inside the cell by its volume gives a decent number, for a cell:
$$
M_\text{DNA}
=\frac{n_\text{tot bp}}{V_\text{cell}}
=\frac{\pu{6.7 \times 10^-18 mol}}{\pu{5 \times 10^-16 L}}
=\pu{1.34 \times 10^-2 M}
$$
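These steps can be replayed numerically (a sketch using the rounded constants the problem prescribes):

```python
# Replaying the derivation with the problem's rounded constants.
pi = 3.0                  # the problem says to use pi = 3
N_A = 6e23                # rounded Avogadro's number

V_cell = 4.0 / 3.0 * pi * (0.5e-6) ** 3 * 1e3   # sphere volume in L (1 m^3 = 1e3 L)
n_bp = 4e6 / N_A                                 # moles of base pairs in the genome
M_dna = n_bp / V_cell                            # molarity, mol/L

assert abs(V_cell - 5e-16) < 1e-20
assert abs(M_dna - 1.33e-2) < 1e-3
```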
I think that the wrong result is due to the volume, because to me it seems rather surprising that one sphere of $\pu{1 \mu m}$ in diameter has:
$$
\pu{2\times 10^-5 L}
=\pu{2\times 10^-2 mL}
=\pu{20 \mu L}
=\pu{20 mm3}
\neq \pu{5 \times 10^-10 mm3}
$$
of occupied volume. I suspect that something went wrong with the conversions, because I don't see (for now) any errors in my derivation. | {
"domain": "chemistry.stackexchange",
"id": 14043,
"tags": "concentration, stoichiometry, mole"
} |
Evolution: How could all useful traits evolve simultaneously? | Question: I have a basic question about evolution, for which I never found an answer. I understand how evolution works if we focus on one specific organ or trait. With each generation, some organism is more likely to reproduce, so the trait that leads to success gets more frequent. My problem is understanding how all the traits can evolve simultaneously.
Looking back to our ancestors, we evolved many different kinds of adaptations (in unrelated areas like eyesight, kidney efficiency or a healthy fear of predators, digestion of certain nutrients, balanced walking etc.).
Natural selection can only work with what's there, so mutations are important. But if mutations are rare, it seems unlikely that many (thousands of) properties of organisms get improved in a single generation. You'd need to get very lucky to randomly improve all the genes responsible for it.
So, if we look back to our ancestors again, did they just improve on a single or a handful of traits in each generation? In that case we would need hundreds of generations before we have improved somewhat on each "front". By then, the other properties could "drift" back to a not-so-advantageous version. (This was supposing that each generation improves on a random trait, as opposed to long sequences of generations each improving on the same).
Or were there always some super lucky organisms that randomly got an improvement on almost all different traits? But just having some of those super lucky ones is not enough. The good traits don't guarantee success, just improve the odds. So we'd need many very lucky ones in each generation.
Has anyone calculated or simulated how the adaptation for many different traits can happen simultaneously?
Answer:
Has anyone calculated or simulated how the adaptation for many different traits can happen simultaneously?
There are lots of studies on the subject but I don't fully understand what your issue is. So I'll try to give some words hoping that helps a bit, but it is possible that I'll totally miss the point you want to make.
The mutation rate in humans is around $1.25 \cdot 10^{-8}$ per base pair per individual per generation. The human genome contains about 3400 Mbp, meaning that you probably carry $1.25 \cdot 10^{-8} \cdot 3.4 \cdot 10^9 = 42.5$ mutations that neither of your parents had in their genome. Now if you consider the population size, you'll see that a fairly big number of mutations occurs each generation in the human population. So mutations are not that rare! It might be unfair to take humans as an example though, as the human population size is very big. Anyway, the great majority of mutations are deleterious. In consequence, you are right: probably not many beneficial mutations will occur in a given population at one given generation, and especially not in the same individual (except if you think of viruses!).
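The back-of-the-envelope count is easy to replay; the population-level line uses a purely illustrative population size of one million, which is not from the text:

```python
mutation_rate = 1.25e-8   # mutations per base pair per generation
genome_size = 3.4e9       # base pairs, as in the text

new_per_individual = mutation_rate * genome_size   # ~42.5 new mutations each
# Scaled to a purely illustrative population of one million individuals:
new_per_generation = new_per_individual * 1_000_000

assert abs(new_per_individual - 42.5) < 1e-6
```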
Now, depending on how beneficial the mutation you are considering is (selection coefficient), on its dominance, and on the (effective) population size, your new mutation might take quite a few generations before reaching fixation in the population. In this time other mutations might occur. And then the question is very interesting. Imagine a population of asexual individuals where two beneficial mutations exist (at two different loci [= positions on a chromosome]). Because there is no recombination (by definition of an asexual population), these two mutations will never be found in the same individual, and in consequence, if one mutation reaches fixation, it necessarily means that the other mutation has disappeared. In sexual populations it is different. Recombination allows the two mutations, once they reach a sufficient frequency, to be found in the same organism, and therefore both mutations can reach fixation.
Hope that helped a bit | {
"domain": "biology.stackexchange",
"id": 2080,
"tags": "evolution, adaptation"
} |
Partitioned overlap-add convolution - strange behavior at buffer boundaries | Question: I've implemented a convolution reverb that operates in real-time, one audio buffer at a time (using FFTS for the fft bits). However, there's some strange behavior at the start of every buffer. Convolving a sinusoid with an impulse (a 1 followed by many zeroes), I don't get a sinusoid as the output:
Instead, I get peaks that are exactly twice the amplitude they should be at the start of every buffer. In fact, even if I don't use the spectra from the actual impulse file and instead multiply the complex portion of the input by 0, I get the same result. Conversely, if I multiply the real portion of the input by 0, I get 0 at the start of every buffer:
It seems like an off-by-one error or something, but then again the input is left intact if I don't modify the frequency components. I've been reading papers and going over my code for the past week, and I'd really appreciate it if someone could verify that it's correct.
class AudioEffectConvolver : public IAudioEffect
{
public:
AudioEffectConvolver(const char *impulse_name);
void process(AudioData *);
void calculateImpulse(unsigned buffer_size);
~AudioEffectConvolver();
private:
std::shared_ptr<AudioData> impulse;
std::vector<float> impulse_bins;
std::vector<std::vector<float>> partitions;
std::vector<std::vector<float>> bin_ring;
std::vector<float> overlap;
ffts_plan_t *forward;
ffts_plan_t *backward;
unsigned ring_index = 0;
bool impulse_calculated = false;
unsigned block_size = 0;
};
static unsigned npo2(unsigned size)
{
size--;
size |= size >> 1;
size |= size >> 2;
size |= size >> 4;
size |= size >> 8;
size |= size >> 16;
size++;
return size;
}
void AudioEffectConvolver::calculateImpulse(unsigned buffer_size)
{
unsigned impulse_size = impulse->frames();
block_size = npo2(buffer_size);
unsigned fft_size = block_size * 2;
forward = ffts_init_1d_real(fft_size, FFTS_FORWARD);
backward = ffts_init_1d_real(fft_size, FFTS_BACKWARD);
overlap.resize(block_size);
std::vector<float> window(fft_size);
for (unsigned i = 0; i * block_size < impulse->frames(); ++i)
{
unsigned offset = i * block_size;
if (impulse->frames() >= offset + block_size)
{
memcpy(window.data(), impulse->split(0) + offset, sizeof(float) * block_size);
memset(window.data() + block_size, 0, sizeof(float) * (fft_size - block_size));
}
else
{
memcpy(window.data(), impulse->split(0) + offset, sizeof(float) * (impulse->frames() - offset));
memset(window.data() + impulse->frames() - offset, 0, sizeof(float) * (fft_size - (impulse->frames() - offset)));
}
partitions.emplace_back(fft_size + 2); // (n / 2 + 1) * 2
ffts_execute(forward, window.data(), partitions[i].data());
bin_ring.resize(i + 1);
bin_ring[i].resize(fft_size + 2);
}
impulse_calculated = true;
}
void AudioEffectConvolver::process(AudioData *buffer)
{
unsigned buffer_size = buffer->frames();
if (!impulse_calculated) calculateImpulse(buffer->frames());
unsigned fft_size = block_size * 2;
buffer->resize(fft_size);
memset(bin_ring[ring_index].data(), 0, sizeof(float) * (fft_size + 2));
ffts_execute(forward, buffer->split(0), bin_ring[ring_index].data());
std::vector<float> convolution;
convolution.resize(fft_size + 2);
for (unsigned k = 0; k < partitions.size(); ++k)
{
int index = ring_index - k;
while (index < 0) index += (int)bin_ring.size();
for (unsigned i = 0; i < fft_size + 2; ++i)
{
convolution[i] += bin_ring[index][i] * partitions[k][i];
}
}
std::vector<float> output;
output.resize(fft_size);
ffts_execute(backward, convolution.data(), output.data());
//output.resize(block_size); // circular convolution; chop off the second half
for (unsigned i = 0; i < buffer_size; ++i)
{
float outsample = (output[i] + overlap[i]) / (fft_size);
buffer->sample(i) = outsample;
}
memcpy(overlap.data(), output.data() + buffer_size, sizeof(float) * (block_size - buffer_size));
ring_index++;
if (ring_index >= bin_ring.size()) ring_index = 0;
buffer->resize(buffer_size);
}
class AudioData
{
public:
AudioData(const char *filename);
AudioData(unsigned frames, unsigned channels = 1, unsigned rate = 44100);
~AudioData();
float sample(unsigned frame, unsigned channel = 0) const;
float& sample(unsigned frame, unsigned channel = 0);
float seek(float seconds, unsigned channel = 0) const;
void resize(unsigned frames);
float *data();
const float *data() const;
const float *split(unsigned channel);
unsigned frames() const;
unsigned rate() const;
unsigned channels() const;
private:
std::vector<float> audio_data;
std::vector<float *> splits;
unsigned frame_count;
unsigned sampling_rate;
unsigned channel_count;
void clearSplits();
};
float& AudioData::sample(unsigned frame, unsigned channel)
{
clearSplits();
return audio_data[frame * channel_count + channel];
}
void AudioData::resize(unsigned frames)
{
clearSplits();
audio_data.resize(frames * channel_count);
frame_count = frames;
}
float *AudioData::data()
{
return audio_data.data();
}
const float *AudioData::split(unsigned channel)
{
if (splits[channel] != nullptr) return splits[channel];
if (channel_count == 1) return data();
float *split = new float[frame_count];
for (unsigned i = 0; i < frame_count; ++i)
{
split[i] = sample(i, channel);
}
splits[channel] = split;
return split;
}
unsigned AudioData::frames() const
{
return frame_count;
}
void AudioData::clearSplits()
{
for (unsigned i = 0; i < splits.size(); ++i)
{
if (splits[i] != nullptr) delete[] splits[i];
splits[i] = nullptr;
}
}
In my specific case, the buffers are 448 samples (something to do with WASAPI shared mode).
Answer: You call a lot of methods and functions that are not included, so it's hard to read. Here is how I would debug this step by step.
Verify your audio framework. Do NOTHING in the process() function other than copying the input to the output
Verify simple processing. Now add multiplication with 0.5 or something simple like this.
Verify the FFT based processing. Do just the zero padding, forward FFT, inverse FFT, and output calculation
Add a "pass through" impulse response. Just a single sample at $n=0$
Verify your overlap handling: use an impulse response with a single tap at $n=256$
Verify the framing of the impulse response: use an impulse response with a single tap at $n=2000$
At each step, calculate the expected result and compute the RMS error against it. For single-precision floating point, that should be on the order of -130 dB or so. Use both a sine wave and a unit impulse as input signals.
If you get a large error, stop and fix this step.
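The known-good reference model need not be elaborate; a NumPy sketch of the same uniformly partitioned overlap-add scheme (block size and test signals here are hypothetical, not taken from your code) can be validated directly against np.convolve:

```python
import numpy as np

def partitioned_ola(x, h, block):
    """Uniformly partitioned overlap-add convolution with a
    frequency-domain delay line; should match np.convolve(x, h)."""
    fft = 2 * block  # zero-padded FFT size avoids circular wrap-around
    parts = [np.fft.rfft(h[i:i + block], fft)
             for i in range(0, len(h), block)]
    ring = [np.zeros(fft // 2 + 1, dtype=complex) for _ in parts]
    overlap = np.zeros(block)
    out, idx = [], 0
    for start in range(0, len(x), block):
        ring[idx] = np.fft.rfft(x[start:start + block], fft)
        acc = sum(ring[(idx - k) % len(parts)] * parts[k]
                  for k in range(len(parts)))
        y = np.fft.irfft(acc, fft)
        out.append(y[:block] + overlap)  # first half plus saved tail
        overlap = y[block:]              # second half becomes next tail
        idx = (idx + 1) % len(parts)
    return np.concatenate(out)

# RMS error vs. direct convolution should be near machine precision.
rng = np.random.default_rng(0)
x, h = rng.standard_normal(512), rng.standard_normal(300)
err = partitioned_ola(x, h, 64) - np.convolve(x, h)[:512]
rms_db = 20 * np.log10(np.sqrt(np.mean(err**2) / np.mean(x**2)))
```

An RMS error near machine precision here confirms the scheme itself; your single-precision C++ version should then land around -130 dB against the same reference.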
If that all checks out, chances are your code is good, but you should still test it with a random input signal and a real room impulse response and calculate the RMS error against a known-good reference model (Matlab, Octave, Python). | {
"domain": "dsp.stackexchange",
"id": 7898,
"tags": "fft, convolution, c++"
} |
A Simple Unix Filter in Racket - Learning the Racket Way | Question: I've written the following simple filter in Racket as my first Racket program and am wondering if I am writing it in an "idiomatic Racket style".
#! /usr/bin/env racket
#lang racket
(require racket/cmdline)
(define a-prolog-mode? (make-parameter #f))
;; parses the options passed on the command-line
(command-line
#:program "patoms"
#:once-any
[("-a" "--aprolog") "output as A-Prolog code"
(a-prolog-mode? #t)])
;; dlv-input? : string -> boolean
;; Returns True if the given text corresponds to the output of DLV
;; and False otherwise. DLV's output is prefixed by one of the following
;; strings: "DLV", "{", or "Best model".
(define (dlv-input? text)
(regexp-match? #rx"^DLV|^{|^Best model" text))
;; text->answer-sets : string -> list of strings
;; Returns a list comprised of all of the answer sets in the given text.
(define (text->answer-sets text)
(cond
[(dlv-input? text) (regexp-match* #rx"{(.*?)}" text #:match-select cadr)]
[else null]))
;; write-as-code : string
;; Writes the given answer set in plain text form to standard output.
(define (write-as-text answer-set)
(let ([literals (map string-trim (string-split answer-set ","))])
(cond
[(empty? literals) (printf "~a~n" "{}")]
[else (for-each (λ (literal) (printf "~a~n" literal)) literals)])
(printf "~a~n" "::endmodel")))
;; write-as-code : string
;; Writes the given answer set as A-Prolog code to standard output.
(define (write-as-code answer-set)
(let ([literals (map string-trim (string-split answer-set ","))])
(for-each (λ (literal) (printf "~a.~n" literal)) literals)
(printf "~a~n" "%%endmodel")))
;; main
;; Serves as the main function of the program. If a-prolog-mode is specified
;; by the user via the command-line, all of the answer sets that may be parsed
;; from standard input are written to standard output as A-Prolog. Otherwise
;; the answer sets are written in a plain text format.
(define (main)
(let* ([text (port->string (current-input-port))]
[answer-sets (text->answer-sets text)])
(cond
[(a-prolog-mode?) (for-each write-as-code answer-sets)]
[else (for-each write-as-code answer-sets)])))
(main)
In general, any and all feedback would be very much appreciated given that this is my first foray into Racket.
Thank you kindly in advance.
Answer: Overall, this is very pleasant code to review, especially given you're new to the language. Your function names follow convention, such as ending predicates with ? and indicating conversions with ->. The comments make this code easy to understand. So bravo, keep at it! That being said, here are some pretty minor suggestions.
"Favor define when feasible". Your let's and let*'s could be changed to internal define's to decrease nesting.
Change (define (main) ...) to (module+ main ...). Submodule support, added in June's 5.3 release, makes having main's and test's easier than ever.
Within main, shouldn't one of the cond branches be using write-as-text instead of write-as-code? Right now, both branches do the same thing. In the same vein, the comment above write-as-text was copied but not changed from ;; write-as-code.
In Racket, we have 3 ways of doing output: display, write, and print. Since your write-as-text/code functions use printing, I would change the names of those functions to print-as-text/code | {
"domain": "codereview.stackexchange",
"id": 2772,
"tags": "scheme, racket"
} |
Do alternate theories for Dark Matter (like MOND) explain its effect on gravitational lensing? | Question: For a long time, I was sceptical about the evidence for dark matter. To me, it seemed like a pretty big leap to make when we have no idea whether or not our current models of gravity should apply exactly to cosmological objects of massive scales like galaxies. Just like Einstein’s relativity replaces Newton's laws, wouldn't MOND be a better explanation for the discrepancies of galactic rotations than some "dark matter" that we have no evidence of?
Apparently though, its effect on galactic rotation is not the only evidence of dark matter, we can also see the effect of dark matter on the gravitational lensing of galaxies. That seems a lot harder to explain using modified theories of gravity than the rotational problem.
Do any of the modified theories of gravity address this evidence for dark matter, or just the galactic rotation problem?
Answer: Milgrom's simple Newtonian MOND cannot, as it is just a MOdification of Newtonian Dynamics (which is what the acronym stands for, after all). Jacob Bekenstein, however, has worked out a relativistic generalization of MOND called TeVeS that does account for gravitational lensing and a variety of other effects:
https://en.wikipedia.org/wiki/TeVeS
TeVeS is ludicrously complex, though, and it is unclear whether it can explain effects like the Bullet Cluster, where gravitating dark matter separates from normal matter. Also, one could argue that, in a lot of ways, TeVeS is just a proposal for the dark matter Lagrangian (though it couples to the metric in such a way as to mimic gravity). | {
"domain": "physics.stackexchange",
"id": 24636,
"tags": "general-relativity, dark-matter, gravitational-lensing, modified-gravity"
} |
How this proof of fractional knapsack works? | Question: I don't understand a step in my book proving the fractional knapsack problem:
Let value of items $v_1\ge v_2\ge \dots\ge v_n$, and assume $X=\langle x_1, \dots,x_n\rangle$ are the solution by greedy, where $0\le x_i\le 1$ is the fraction packed into the knapsack.
Assume $j$ is the first index s.t. $x_j<1$. Let $Y=\langle y_1,\dots, y_n\rangle$ be any solution not $X$.
Consider $$\dfrac{v_i}{w_i}(x_i-y_i)\ge\dfrac{v_j}{w_j}(x_i-y_i),\tag{????}$$
So $\displaystyle\sum_{i=1}^n v_i(x_i-y_i)=\sum\color{blue}{\dfrac{v_i}{w_i}}w_i(x_i-y_i)\ge\color{blue}{\dfrac{v_j}{w_j}}\sum w_i(x_i-y_i)\ge0.$
I can understand the blue part I highlighted, but can anyone help me understand the (????) part? Why it must hold?
Answer: The following condition is implicitly included in the question.
$$\dfrac{v_1}{w_1}, \dfrac{v_2}{w_2}, \cdots, \dfrac{v_n}{w_n} \text{ is in the descending order.}$$
Let $W$ be the total weight to be filled. The greedy algorithm for fractional knapsack problem is the following procedure.
Let $k$ loop through $1, 2, ..., n$ in that order.
Set $$x_k = \min\left(1,\ \dfrac{W- \sum_{1\le i\lt k} w_i x_i}{w_k}\right)$$
If $x_k<1$, set $x_l=0$ for all $l>k$. Break the loop.
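The greedy procedure above can be sketched in a few lines (a minimal illustration, assuming the items are already sorted by non-increasing value density):

```python
def fractional_knapsack(values, weights, W):
    """Greedy fractional knapsack: items must be sorted so that
    values[i]/weights[i] is non-increasing. Returns fractions x_k."""
    x = [0.0] * len(values)
    remaining = W
    for k, w in enumerate(weights):
        x[k] = min(1.0, remaining / w)
        remaining -= x[k] * w
        if x[k] < 1.0:   # knapsack full: all later x_l stay 0
            break
    return x
```

For example, with values [60, 100, 120], weights [10, 20, 30] and W = 50, the result is [1, 1, 2/3]: the first index with x_j < 1 is the last item, matching the role of j in the proof.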
$$\dfrac{v_i}{w_i}(x_i-y_i)\ge\dfrac{v_j}{w_j}(x_i-y_i),\tag{????}$$
There are three cases for $i$.
The case when $i < j$, i.e., $\dfrac{v_i}{w_i}\ge\dfrac{v_j}{w_j}$. Since $j$ is the first index s.t. $x_j<1$, we have $x_i=1$, and since $y_i\le1$, $x_i-y_i\ge0$.
The case when $i = j$: both sides are equal.
The case when i > j, i.e., $\dfrac{v_i}{w_i}\le\dfrac{v_j}{w_j}$. Since $j$ is the first index s.t. $x_j<1$, according to the definition of the greedy algorithm, $x_i=0$. That means $x_i-y_i\le0$. | {
"domain": "cs.stackexchange",
"id": 12935,
"tags": "algorithms, correctness-proof, greedy-algorithms"
} |
Selecting specific lines from a file and focusing on them only? | Question: So I have a file containing a huge list of sentences, some containing keywords and some not, so in order to specifically focus on the ones with keywords, I used this method. It works, but is there another way to do this without having to create a new file?
keyW = ["love", "like", "best", "hate", "lol", "better", "worst", "good", "happy", "haha", "please", "great", "bad", "save", "saved", "pretty", "greatest", 'excited', 'tired', 'thanks', 'amazing', 'glad', 'ruined', 'negative', 'loving', 'sorry', 'hurt', 'alone', 'sad', 'positive', 'regrets', 'God']
with open('tweets.txt') as oldfile, open('newfile.txt', 'w') as newfile:
for line in oldfile:
if any(word in line for word in keyW):
newfile.write(line)
because I'm going to use these specific tweets when doing another function
for line in open('tweets.txt'):
line = line.split(" ")
lat = float(line[0][1:-1]) #Stripping the [ and the ,
long = float(line[1][:-1]) #Stripping the ]
if eastern.contains(lat, long):
eastScore += score(line)
elif central.contains(lat, long):
centralScore += score(line)
elif mountain.contains(lat, long):
mountainScore += score(line)
elif pacific.contains(lat, long):
pacificScore += score(line)
else:
continue
Ultimately, this is what my code looks like.
from collections import Counter
try:
keyW_Path = input("Enter file named keywords: ")
keyFile = open(keyW_Path, "r")
except IOError:
print("Error: file not found.")
exit()
# Read the keywords into a list
keywords = {}
wordFile = open('keywords.txt', 'r')
for line in wordFile.readlines():
word = line.replace('\n', '')
if not(word in keywords.keys()): #Checks that the word doesn't already exist.
keywords[word] = 0 # Adds the word to the DB.
wordFile.close()
# Read the file name from the user and open the file.
try:
tweet_path = input("Enter file named tweets: ")
tweetFile = open(tweet_path, "r")
except IOError:
print("Error: file not found.")
exit()
#Calculating Sentiment Values
with open('keywords.txt') as f:
sentiments = {word: int(value) for word, value in (line.split(",") for line in f)}
with open('tweets.txt') as f:
for line in f:
values = Counter(word for word in line.split() if word in sentiments)
if not values:
continue
keyW = ["love", "like", "best", "hate", "lol", "better", "worst", "good", "happy", "haha", "please", "great", "bad", "save", "saved", "pretty", "greatest", 'excited', 'tired', 'thanks', 'amazing', 'glad', 'ruined', 'negative', 'loving', 'sorry', 'hurt', 'alone', 'sad', 'positive', 'regrets', 'God']
with open('tweets.txt') as oldfile, open('newfile.txt', 'w') as newfile:
for line in oldfile:
if any(word in line for word in keyW):
newfile.write(line)
def score(tweet):
total = 0
for word in tweet:
if word in sentiments:
total += 1
return total
def total(score):
sum = 0
for number in score:
if number in values:
sum += 1
#Classifying the regions
class Region:
def __init__(self, lat_range, long_range):
self.lat_range = lat_range
self.long_range = long_range
def contains(self, lat, long):
return self.lat_range[0] <= lat and lat < self.lat_range[1] and\
self.long_range[0] <= long and long < self.long_range[1]
eastern = Region((24.660845, 49.189787), (-87.518395, -67.444574))
central = Region((24.660845, 49.189787), (-101.998892, -87.518395))
mountain = Region((24.660845, 49.189787), (-115.236428, -101.998892))
pacific = Region((24.660845, 49.189787), (-125.242264, -115.236428))
eastScore = 0
centralScore = 0
pacificScore = 0
mountainScore = 0
happyScoreE = 0
for line in open('newfile.txt'):
line = line.split(" ")
lat = float(line[0][1:-1]) #Stripping the [ and the ,
long = float(line[1][:-1]) #Stripping the ]
if eastern.contains(lat, long):
eastScore += score(line)
elif central.contains(lat, long):
centralScore += score(line)
elif mountain.contains(lat, long):
mountainScore += score(line)
elif pacific.contains(lat, long):
pacificScore += score(line)
else:
continue
print(keywords)
print("The number of tweets in the Pacific region is:", pacificScore)
print("The number of tweets in the Montain region is:", mountainScore)
print("The number of tweets in the Central region is:", centralScore)
print("The number of tweets in the Eastern region is:", eastScore)
Answer: Your code in general
You should definitely familiarize yourself with PEP8 which specifies Python coding style guidelines.
Those include lower-case names joined with underscores for variables and functions, as well as empty lines around function and class definitions and a maximum line length of 79 characters.
Consistency
Whatever you do, be consistent. You mix single and double quotes for string literals, and you do not use the file's built-in context management and iteration capabilities on wordFile.
Divide and conquer
Your code seems to do some complex data processing in several steps. Isolate these different tasks and outsource them into several functions, each specialized for one certain process. You already did this with the functions score() and total() and your Regions() class. Let's have more of those.
Use the script's __name__
Put the part of your code that should run when you execute your script inside an
if __name__ == '__main__':
<your code here>
block. This prevents it to run on the import of the script as a module, if you or some other user one day decide to re-use its members, i.e. functions and classes, in other programs.
Comments
Though you commented parts of your code, those comments are a counterexample of their kind. You state the obvious by commenting on checking the membership of items in a dictionary, which can obviously be read from the code itself. On the other hand, it is not clear why you store comma-separated values as keys in this very dictionary, giving each key the value of 0 without using the dictionary any further, apart from printing it out.
Filtering lines
Regarding your first question, filtering lines of a file by certain keywords is a common example for coroutines. You can use a method like
def grep(keywords):
    """Yields lines containing keywords"""
    file = yield
    yield  # pause here so all matches are delivered by iteration, not by send()
    for line in file:
        if any(keyword in line for keyword in keywords):
            yield line
and invoke it with
with open(tweets_file) as tweets:
fltr = grep(words)
next(fltr)
fltr.send(tweets)
for line in fltr:
print(line)
assuming words is your keyword list. | {
"domain": "codereview.stackexchange",
"id": 22991,
"tags": "python, python-3.x"
} |
Where is the mistake in this reasoning? | Question: 2 identical objects are moving with different constant velocities. Then, in turn same forces act on them for some period. Both times, the same amount of energy is used to produce those forces. As the forces acted, the objects covered different distances. Hence the works done by the forces are different. So the objects gained different amounts of energy.
So we have a contradiction: The same amounts of energy were tranfered to the objects, but they gained different amounts of energy.
Answer: You either have the same force (i.e. the same number of newtons), in which case different amounts of energy are transmitted, or you have the same energy (i.e. the same number of joules), in which case you end up with different forces.
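A quick numeric check of the first case (the numbers here are hypothetical): the same force applied for the same time to two identical objects delivers the same impulse, but different amounts of work, because the faster object covers more distance while being pushed:

```python
# Same force F for the same time t on two identical 1 kg objects that
# start at different velocities. The velocity change is identical
# (same impulse), but the work done, i.e. the energy transferred, is
# not, since power P = F*v grows with speed.
F, t, m = 2.0, 3.0, 1.0                      # N, s, kg
results = []
for v0 in (0.0, 10.0):
    v1 = v0 + F * t / m                      # same delta-v = 6 m/s for both
    d_ke = 0.5 * m * (v1**2 - v0**2)         # kinetic energy gained
    dx = v0 * t + 0.5 * (F / m) * t**2       # distance covered meanwhile
    results.append((d_ke, F * dx))           # work-energy theorem: equal pairs
```

The two pairs come out as (18 J, 18 J) and (78 J, 78 J): the work F*dx always matches the kinetic energy gained, so there is no contradiction, only different amounts of energy transferred.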
A force is an interaction between TWO objects, so you can't just say "the same force is acting upon both objects, independently of the objects". If the objects move at different velocities and you "do the same thing" to them, then you end up with different forces. | {
"domain": "physics.stackexchange",
"id": 56774,
"tags": "newtonian-mechanics, energy, work, inertial-frames"
} |
Why a DFT of two sinusoids is very noisy even with frequency sampling 5 times higher? | Question: I've set up this case to try to understand DFT implementing a real case in Excel
Frame Size $\;\color{blue}{(T = 5 \; s})$
Time Sampling $\;\color{blue}{(T_S = 0.1 \; s})$
Block Size $\;\color{blue}{(N = T/T_S = 50)}\;$ (It was 51 in my question)
Sampling Rate $\;\color{blue}{(F_S = 1/T_S = 10 \; Hz)}$
Frequency Resolution $\;\color{blue}{(F_R = F_S/N = 0.2 Hz)}$
In time spectrum I have put just the sum of 2 sin functions
The first sine has amplitude $\color{blue}{10}$, phase $ \color{blue}{\pi/3}$ and frequency $\color{blue}{2}$ Hz.
The second sine has amplitude $\color{blue}{5}$, phase $\color{blue}{0}$ and frequency $\color{blue}{1}$ Hz and for $ \color{blue}{\; n = 0\; to\; 49 \;}$ we have :
$$ \color{blue}{x_n = 10\, \sin{(2\pi t 2 + \pi/3)}+ 5\, \sin{(2\pi t 1)}}$$
So I have generated DFT values where $\color{blue}{\; k=-25\; to\; 24\, }$ :
$$\color{blue}{\quad X_k=a_k + ib_k}\;$$
$$\color{blue}{a_k = (1/N) \; \Sigma_{n=0,N-1} x_n \cos{(2 \pi n k /N)}} $$
$$\color{blue}{b_k = (1/N) \; \Sigma_{n=0,N-1} x_n \sin{(2 \pi n k /N)}} $$
After, I've calculated module $\color{blue}{A}$ and phase $\color{blue}{\Phi}$ for $ \color{blue}{\; k = -25\; to \;24\,}$:
$$\color{blue}{A_k = |X_k| = \sqrt{(a_k)^2 + (b_k)^2}} $$
$$\color{blue}{\Phi_k = atan2(b_k, a_k)} $$
I consider in my analysis for $ \color{blue}{\; k = -25\; to \;24\,}$:
$$\color{blue}{ Freq_k = k.F_S/N} $$
Old text: Now my doubt arises. In my understanding, the signal is concentrated at the frequencies $\color{blue}{-1}$ Hz, $\color{blue}{-2}$ Hz, $\color{blue}{1}$ Hz and $\color{blue}{2}$ Hz. The negative frequencies I disregard, of course. However, in short, the frequency spectrum is noisy. For instance, the $\color{blue}{2.2}$ Hz amplitude is around $\color{blue}{25\%}$ of the $\color{blue}{2}$ Hz amplitude. Is it normal? What's happening?
New text: As Fat32 has shown me, I was using an additional and spurious point ($\color{blue}{51}$ in total) in my time sample that was distorting the whole calculation. If I use a specific time frame ($\color{blue}{5\;s}$ in this case), I need to suppress the final time point (time = $\color{blue}{5\;s}$)
$$\star\star\star$$
I'm working with frequency sampling $\color{blue}{10}$ Hz, five times than maximum detected frequency ($\color{blue}{2}$ Hz).
To check if I had made a mistake, I had calculated the inverse DTF.
Old text: The original signal was restored well, where $\color{blue}{\;n=0\; to\; 50}\;$ and imaginary part $\;\color{blue}{\sim 0}\;$.
New text: The original signal was restored with no flaws, where $\color{blue}{\;n=0\; to\; 49}\;$ and imaginary part $\;\color{blue}{= 0}\;$.
The only positive frequencies other than $\color{blue}{0}\;$ are $\color{blue}{1\;Hz}\;$ and $\color{blue}{2\;Hz}\;$, with values $\color{blue}{125}\;$ and $\color{blue}{250}\;$ respectively, $\color{blue}{25}\;$ times bigger than the real amplitudes. I guess that the factor is $\color{blue}{25}\;$ because it is half the number of samples ($\color{blue}{N=50}\;$).
$$\star\star\star$$
$$\color{blue}{ Re(x_n) =(1/N)\Sigma_{k=-N/2,N/2-1}A_k \cos{\Phi_k} \cos{(2 \pi k n/N)} - A_k \sin{\Phi_k} \sin{(2 \pi k n/N)}}$$
$$\color{blue}{ Im(x_n) = -(1/N)\Sigma_{k=-N/2,N/2-1}A_k \sin{\Phi_k} \cos{(2 \pi k n /N)} + A_k \cos{\Phi_k}\sin{(2 \pi k n /N)}}$$
The 50 time-domain samples (there were 51 before the answer) are available from this link
The $\color{blue}{50}$ transformed data points (Re(z), Im(z), |z|, $\Phi$(z))
(there were $\color{blue}{51}$ before the answer) are available from this link
Answer: As far as I understood your question, you are complaining about the windowing effect and its consequence of smearing of the spectrum.
So you have a mathematically defined pure sine wave that extends from minus infinity to plus infinity in time, whose theoretical spectrum you expect to be a single line in frequency. But in practice you eventually represent that infinitely long ideal sine wave with a finite-length record (the windowed, aka truncated, version) and take the DFT of that finite-length version instead. Then the result will be that single line broadened into a sinc waveform, mostly having nonzero amplitudes at all frequencies in general, which is what you called noise?
That's the most fundamental manifestation of the effect of time domain windowing (aka aperture effect) on the frequency spectrum; the smearing of the single frequency line and also leakage when multiple tones (including DC) co-exist.
So those are not noise, but an irrecoverable side effect of the practical utilization of finite length DFT. Note however that you might be adding your computational error on top of that. Hence for best verification, you better test your algorithm with an established library from Matlab / Octave kind of platforms.
The following plots are generated by Matlab from your uploaded data, to test your problem, and it confirms the explanation; i.e., those nonzero values you see in the spectrum are due to the windowing effect...
You cannot get rid of it, but the effect is minimized by using longer data, which is however not the most practical approach. Instead, by using special window types such as Hamming, Hanning, Blackman, Kaiser etc., you can trade off certain features of it. Incidentally, you are using a rectangular window, whose peaks are sharpest but whose tails (side lobes) are also largest. All other windows will suppress the tails (side lobes) but widen the peaks...
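The off-by-one is also easy to reproduce numerically. A small NumPy sketch (same two tones and sampling as in the question) shows a clean line spectrum for N = 50, where both tones fall exactly on DFT bins, and leakage everywhere once the spurious 51st sample is included:

```python
import numpy as np

def spectrum(N, Ts=0.1):
    """|DFT|/N of the question's two-tone signal sampled at 10 Hz."""
    t = np.arange(N) * Ts
    x = 10 * np.sin(2*np.pi*2*t + np.pi/3) + 5 * np.sin(2*np.pi*1*t)
    return np.abs(np.fft.fft(x)) / N

clean = spectrum(50)    # 1 Hz and 2 Hz land exactly on bins 5 and 10
smeared = spectrum(51)  # the extra sample shifts every tone off-bin
# clean has peaks of amplitude/2 (2.5 and 5.0) and ~0 elsewhere;
# smeared spreads non-negligible magnitude across the whole spectrum.
```

With N = 50 the off-peak bins sit at machine precision; with N = 51 the tone frequencies no longer coincide with bin frequencies, so every bin picks up leakage, which is exactly the "noise" the question describes.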
And this is the result of Hamming window applied to your data :
Note that your signal $x[n]$ seems 1 sample longer than it should be... | {
"domain": "dsp.stackexchange",
"id": 6792,
"tags": "frequency-spectrum, noise, dft, frequency"
} |
How to implement PPO without using a Critic | Question: I am using the standard policy gradient algorithm, REINFORCE, to solve a RL problem and was thinking about implementing Proximal Policy Optimization (PPO) to increase the sample efficiency of my solution. From the original paper, it seems the clipped loss PPO optimizes includes an Advantage $A_t$ term:
$$
L^{CLIP} (\theta) = \mathbb{E}_t \Big[min(r_t(\theta)A_t,
\: clip(r_t(\theta), 1 - \epsilon, 1 + \epsilon)A_t \Big]
$$
Generally, this advantage $A_t$ is calculated using an actor-critic schema, where we train a network (critic) to predict the value $V(s)$ of a given state, which is then used to calculate $A_t$. However, my RL task is episodic and the trajectories end (i.e., arrive at a terminal state) in just a few actions, so I would rather not use a critic network. Thus, my question is the following: How can I implement PPO without a critic network, i.e., without a network that predicts $V(s)$?
To achieve this, I thought about simply substituting the advantage term $A_t$ used in $L^{CLIP}$ for the discounted sum of rewards $R_t$ used in the REINFORCE loss. Should this work fine or is there a better alternative?
Answer: Your justification for not wanting to use a critic (that the episodes are short) does not make sense to me. I would expect that including the critic would result in a substantial variance reduction (and substantially faster training) due to the extra baseline and also the bootstrapping which is done in the generalized advantage estimation (GAE). I don't see how the episode length being short is relevant.
However, if you don't want to use an advantage, you don't have to. You can simply replace $A_t$ with the (monte carlo sampled) cumulative reward $R_t$. The problem is that the clipping objective only makes sense if you use an advantage. It won't make sense to do that clipping if you are using $R_t$ instead of $A_t$.
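For concreteness, here is the clipped surrogate in a minimal NumPy sketch (the array names are hypothetical); whether adv holds a critic-based $A_t$ or, against the advice above, raw returns $R_t$ is exactly the substitution being discussed:

```python
import numpy as np

def clipped_surrogate(logp_new, logp_old, adv, eps=0.2):
    """PPO's L^CLIP averaged over one batch; maximize the result.
    logp_new/logp_old: log-probs of the taken actions under the
    current and data-collecting policies; adv: advantage estimates."""
    ratio = np.exp(logp_new - logp_old)               # r_t(theta)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    return np.mean(np.minimum(ratio * adv, clipped))
```

With eps = 0.2 and a positive advantage, any ratio above 1.2 earns no additional objective, a cutoff that is only meaningful relative to a baseline-centred advantage, which is why the clipping loses its sense when $R_t$ replaces $A_t$.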
It seems dubious to me to call such an algorithm "PPO", as really it would just be REINFORCE. The only possible difference between what you describe and REINFORCE is that PPO should generate a dataset of transitions and do multiple mini-batch updates per dataset. If you do decide you want to go that route, you should note that you still need the importance sampling term (the unfortunately named $r_t(\theta)$) for the update to be unbiased. | {
"domain": "ai.stackexchange",
"id": 3366,
"tags": "deep-rl, actor-critic-methods, proximal-policy-optimization, reinforce"
} |
Cathode ray tube tungsten filament design? | Question: I am working on building a home cathode ray tube. I'm at the stage of considering the electrical design and filament. From my reading, I've seen that tungsten filaments are a popular choice for thermionic emission devices. However, I haven't seen anything regarding the effect of filament length, diameter, and shape.
Could anyone direct me to resources for learning about this? Or, if possible, can anyone comment on the effects of length, diameter, and filament shape when designing an electron gun?
Answer: You will find this to be an almost impossible DIY task, for a long list of reasons.
Tungsten wire is difficult to form into shape and very difficult to manipulate in thin gauges. Specialized tooling will be required to coil the filament.
The electron emissivity of the wire will determine the physical dimensions of the gun assembly, which must be determined experimentally because the as-manufactured state of the wire will affect this and cannot be predicted in advance.
It is possible to use an indirectly heated electron emitter, in which the filament heats a sleeve it is inserted into, coated with an alkali-metal-oxide mixture that has very high electron emissivity. The sleeve is then part of the overall circuit for the gun, separate from the filament, which allows you to optimize the designs of the heater and the emitter separately. This is common practice in high-vacuum electron tubes, but it requires detailed knowledge of how to compound and apply the coating.
To route the electrical connections out through the glass wall of the gun tube requires the use of a special metal alloy which has been engineered to possess precisely the same thermal expansion coefficient as the glass it is stuck through. You'll need to find a source for this (highly specialized) material.
Once the tube has been assembled and evacuated, it is baked out at high temperatures, causing all the components of the gun assembly to outgas and thereby ruin the vacuum inside the tube. To deal with this you need to include a component called a getter inside the tube, which will adsorb or chemically react with the outgassing, and a getter flash, which will react with any residual oxygen that either leaks into the tube when it is hot or outgasses from the glass envelope of the tube during operation.
If you are sufficiently clever you can part the entire gun assembly off the neck of an old-school cathode ray display (TV) tube after carefully breaking the vacuum in it and then find a glass-blower who can weld it into the device you want to use it in, but you'll need to restore the getters because they will be wrecked upon exposure to ambient air. | {
"domain": "physics.stackexchange",
"id": 85375,
"tags": "vacuum, electronics"
} |
Definition of density operator/ Matrix | Question: In Sakurai's book, the density operator is defined as
$$\rho\equiv \sum_i w_i|i\rangle \langle i|$$
where $w_i$ is statistical weight.
Now, I'm reading a book by Parisi, In which it says in Sec 5.1,
We now introduce the density matrix or density operator of the system, denoted as $\rho$. This operator is a generalization of the projector $P_\psi$. Here we limit ourselves to the discrete case. Let us first discuss the case where the system admits a wave-function description i.e. $\rho=P_\psi$.
I don't see how they defined
$$\rho\equiv|\psi\rangle \langle \psi|$$
What's the benefit of this? It's only the projector. It would also make sense if $|\psi\rangle$ were an eigenstate of the system and we were considering a pure ensemble, so that $w_i$ is zero for all other states. But here, it seems, they take a general state which can be written in the eigenbasis.
$$|\psi\rangle =\sum_i c_i|\phi_i\rangle \rightarrow \rho=\sum_{i,j}c_ic_j^*|\phi_i\rangle\langle \phi_j|$$
Can you explain what's the physical interpretation of the above?
Edit: I'm asking, what's the meaning or physical interpretation of the second definition? The first makes sense in terms of
$$\langle A\rangle =\text{Tr}(\rho A)$$
but not the second. It's simply the projection operator. How does this become the density operator if the system admits a wave-function description?
Answer: "The system admits a wave-function description" means there is just one single state (or "wavefunction") $\lvert \psi\rangle$ the system is in with absolute certainty. So in that case, the set of your $\lvert i\rangle$ is just the single state $\lvert \psi\rangle$ and it occurs with statistical weight $w_\psi = 1$. Obviously
$$ \rho = w_\psi\lvert \psi\rangle\langle \psi\rvert = \lvert \psi\rangle\langle\psi\rvert = P_\psi$$
in this case. | {
"domain": "physics.stackexchange",
"id": 85245,
"tags": "quantum-mechanics, density-operator"
} |
Recreate a spanning tree in a grid graph given vertex descriptions | Question:
Let's assume I have the graph above, with a spanning tree indicated by the blue edges.
Vertex at position (1,1) (row 1, column 1) is connected to the bottom vertex and has degree 1.
Vertex at position (4,2) (row 4, column 2) is connected to the up, left and right vertex and has degree 3.
Let's call (up, down, left, right) a description of a vertex. For a vertex that is connected to a vertex above it and a vertex left to it, description is (1, 0, 1, 0). For a vertex that is connected to a vertex above, below, left and right to it, description is (1, 1, 1, 1).
Given an array of descriptions of vertices that belong to a spanning tree of a grid graph, dimensions of the grid, recreate a spanning tree that decodes to a given array of descriptions.
For example, for the image above an input to the algorithm could be:
[
(0, 0, 1, 0), # this one describes vertex at position (4, 4), it is connected to the vertex on the left
(1, 0, 1, 1), # vertex (4, 2)
(0, 1, 1, 0), # vertex (2, 4)
(1, 0, 1, 1), # vertex (2, 3)
(1, 0, 1, 1), # vertex (2, 2)
(1, 0, 0, 1), # vertex (2, 1)
(0, 1, 0, 0), # vertex (1, 2)
(0, 1, 0, 0), # vertex (3, 1)
(0, 1, 0, 1), # vertex (1, 3)
(1, 0, 1, 0), # vertex (3, 4)
(0, 1, 0, 0), # vertex (1, 1)
(0, 1, 0, 1), # vertex (3, 2)
(0, 0, 1, 0), # vertex (1, 4)
(0, 0, 1, 1), # vertex (4, 3)
(0, 0, 1, 1), # vertex (3, 3)
(1, 0, 0, 1), # vertex (4, 1)
]
One of the possible outputs can be (reordered array of descriptions, but I can go over each description and color the edges in an empty grid and get the image above):
[
(0, 1, 0, 0), # vertex (1, 1)
(0, 1, 0, 0), # vertex (1, 2)
(0, 1, 0, 1), # vertex (1, 3)
(0, 0, 1, 0), # vertex (1, 4)
(1, 0, 0, 1), # vertex (2, 1)
(1, 0, 1, 1), # vertex (2, 2)
(1, 0, 1, 1), # vertex (2, 3)
(0, 1, 1, 0), # vertex (2, 4)
(0, 1, 0, 0), # vertex (3, 1)
(0, 1, 0, 1), # vertex (3, 2)
(0, 0, 1, 1), # vertex (3, 3)
(1, 0, 1, 0), # vertex (3, 4)
(1, 0, 0, 1), # vertex (4, 1)
(1, 0, 1, 1), # vertex (4, 2)
(0, 0, 1, 1), # vertex (4, 3)
(0, 0, 1, 0), # vertex (4, 4)
]
The array can come in any order (there is no way to know for which position the description is). Of course, vertices in the first row can never be connected to a vertex above (there is no row above), similarly, vertices in the last row, first column, last column, cannot be connected to vertices below, left, or right, respectively.
Is there an efficient algorithm that will recreate the spanning tree, or enumerate all possible spanning trees that decode to the given descriptions array?
I think that from the given array there are multiple spanning trees of a grid graph (the array does not uniquely define the spanning tree shown on the picture). Similarly, if I was given a sorted degree array [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3] (decoded from the image above), I could construct a variety of trees that satisfy this degree array but some of these trees would not span a grid.
Problem is related to the game described here https://math.stackexchange.com/questions/3191645/uniqueness-of-spanning-tree-on-a-grid/3191756 but there one can rotate things, here, we just have shuffled grid and need to recreate some spanning tree of the grid by repositioning the elements.
Answer: It's not possible. There isn't enough information in the input, even if you had unlimited computing time.
The order of the input is irrelevant. So, we could equivalently specify the input as follows. Each node can be assigned to one of 16 different descriptions (for instance, a node that is connected to one above it and one to the left of it has description (1,0,1,0)). So, the input could equivalently be described by providing 16 counts: for each description, a count of how many nodes have that description.
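To make this concrete, here is a small sketch (plain Python; the helper name is my own) that reduces the example input from the question to those 16 counts. Any permutation of the list yields the same counts, which is exactly why the order carries no information:

```python
from collections import Counter

# Reduce a list of per-vertex (up, down, left, right) descriptions to the
# equivalent representation: one count per possible description.
def description_counts(descriptions):
    return Counter(descriptions)

descs = [
    (0, 1, 0, 0), (0, 1, 0, 0), (0, 1, 0, 1), (0, 0, 1, 0),  # row 1
    (1, 0, 0, 1), (1, 0, 1, 1), (1, 0, 1, 1), (0, 1, 1, 0),  # row 2
    (0, 1, 0, 0), (0, 1, 0, 1), (0, 0, 1, 1), (1, 0, 1, 0),  # row 3
    (1, 0, 0, 1), (1, 0, 1, 1), (0, 0, 1, 1), (0, 0, 1, 0),  # row 4
]
counts = description_counts(descs)
print(counts[(1, 0, 1, 1)])  # → 3
# Shuffling the input changes nothing:
assert description_counts(list(reversed(descs))) == counts
```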
And now we can see that, for large grid graphs, the number of possible spanning trees exceeds the number of possible inputs. So, you can't hope to uniquely recover the spanning tree from just this information. Moreover, there can be exponentially many spanning trees that are consistent with any one given input, so there is also no efficient algorithm to enumerate all possible spanning trees that match the input.
Why? Suppose you have a $m\times n$ grid graph, where $m>2$ and $n>2$. Then the number of spanning trees of this grid graph is exponential in $mn$. However, the number of possible inputs is at most $(nm+1)^{16}$, since there are 16 descriptions and for each description we have a count that ranges from $0$ to $nm$. The latter number is polynomial in $nm$, so when $nm$ is large, we find that the number of possible spanning trees vastly exceeds the number of possible inputs. | {
"domain": "cs.stackexchange",
"id": 20080,
"tags": "algorithms, enumeration, spanning-trees, planar-graphs, square-grid"
} |
Rotating an NxN matrix | Question: I came up with the following solution for rotating an NxN matrix 90 degrees clockwise, to solve this CodeEval challenge:
Input
The first argument is a file that contains 2D N×N matrices (where 1 <= N <= 10), presented in a serialized form (starting from the upper-left element), one matrix per line. The elements of a matrix are separated by spaces.
For example:
a b c d e f g h i j k l m n o p
Output
Print to stdout matrices rotated 90° clockwise in a serialized form (same as in the input sample).
For example:
m i e a n j f b o k g c p l h d
It looks elegant to me, but can someone explain how its performance compares to the more common solutions mentioned here? Does it have $O(n^2)$ time complexity?
import sys, math
for line in open(sys.argv[1]):
    original = line.rstrip().replace(' ', '')
    nSquared = len(original)
    n = int(math.sqrt(nSquared))
    output = ''
    for i in range(nSquared):
        index = n * (n - 1 - i % n) + int(i / n)
        output += original[index] + ' '
    print(output.rstrip())
The expression n * (n - 1 - i % n) + int(i / n) is something I found while observing common patterns among rotated matrices.
Answer:
The formula is elegant, and the approach is correct.
You have to be careful with complexities though. Specifically, you have to be very clear about what $n$ is. Typically complexity is a function of the size of the input (which, provided that $n$ is a matrix dimension, is $n^2$ itself), so I'd qualify your solution as linear. And since each element must be accounted for, no better solution is possible.
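As a quick sanity check of the index formula (a sketch of my own; the helper name is made up), it reproduces the expected output from the problem statement:

```python
import math

def rotate_serialized(line):
    # Apply the post's index formula to a serialized N x N matrix and
    # return the serialized 90-degrees-clockwise rotation.
    original = line.replace(' ', '')
    n = math.isqrt(len(original))
    return ' '.join(original[n * (n - 1 - i % n) + i // n] for i in range(n * n))

assert rotate_serialized('a b c d e f g h i j k l m n o p') == 'm i e a n j f b o k g c p l h d'
```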
The asymptotic constant could be better. The problem doesn't ask to rotate the matrix; it only asks to print the matrix as if it were rotated. In other words, building output is technically a waste of time. Use your formula to print elements as you enumerate them. | {
"domain": "codereview.stackexchange",
"id": 32447,
"tags": "python, programming-challenge, mathematics, matrix"
} |
How long does it take for fusion to occur in a tokamak? | Question: Once the plasma is in the tokamak reactor, how long does it take until fusion begins? I found that it must reach 150,000,000 degrees, so how long does it take for the plasma to reach this temperature? Is there a graph of time against temperature for the inside of a tokamak, and which reactor reaches this fastest?
Answer: As others have mentioned, speed is not really a key issue when we operate tokamak reactors. Plasma parameters respond on a fast timescale to inputs of external heating sources and stability control schemes.
A plateau of high temperature is reached as soon as the plasma current, radio frequency power and neutral beam injection power all reach their respective plateaus, provided control mechanisms (e.g. resonant magnetic perturbations) have also been applied. In most research reactors today, such plateaus are typically reached within just a couple of seconds.
As an example, you can see this picture from last year's record temperature-duration experiment of the EAST tokamak in China, in which an electron temperature of 50 million kelvin (three times that of the core of the Sun) was maintained for around 100 seconds. There you may observe how fast the temperature plateau was reached. | {
"domain": "physics.stackexchange",
"id": 40424,
"tags": "thermodynamics, fusion"
} |
Do histones constitute the largest proportion of the protein in chromosomes at mitosis? | Question: Do histones contribute more (by mass) than non-histone proteins in the chromosomes formed during mitosis?
Answer: If you are asking whether histone mass represents a larger percentage of total chromosome mass, then the answer is yes when considered at the level of the nucleosome. Each histone octamer wraps ~147 base pairs of DNA around 1.7 turns. The histone octamer consists of two copies of each of the four structural core proteins (H2A, H2B, H3 and H4). The sequence encoding H2A histone family member B1 (H2AFB1) is 517 nucleotides long (RefSeq). Based on nucleotide length alone, and inferring a consistent contribution to mass from nucleotides at the single-base level and in triplets at the amino-acid level, each nucleosome should be roughly 95% histone by mass. But this doesn't account for stochasticity in the mass and density due to DNA methylation and to heterochromatin and euchromatin along the genome, respectively.
Consider DNA methylation: it represents a non-zero contribution to total chromosomal mass that would decrease histone mass as a portion of the total, and it would not be captured if whole-genome sequencing of a single organism were employed to answer this question at the genome level for a single sample. Now consider histone acetylation: this is another feature that represents a non-zero contribution to mass, but it swings the pendulum in the other direction. Beyond a local calculation at the nucleosome level, this relationship would be nearly impossible to quantify.
During replication specifically, histone translation seems to be as tightly regulated as DNA replication.
Ma Y, Kanakousaki K, Buttitta L. How the cell cycle impacts chromatin architecture and influences cell fate. Front Genet. 2015;6:19. Published 2015 Feb 3. doi:10.3389/fgene.2015.00019
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4315090/
Nature has a great open-access overview related to this topic.
https://www.nature.com/scitable/topicpage/chromatin-remodeling-in-eukaryotes-1082 | {
"domain": "biology.stackexchange",
"id": 9835,
"tags": "biochemistry, cell-biology"
} |
Program to compress a string of characters | Question: I was given this small task in an interview recently. I'm usually terrible at coding questions in interviews, because with the time constraint and the lack of google I usually overthink and rush stuff and just end up with a mess of a program.
The task was to write a program that would compress a string, compare the length of the compressed string to that of the input string, and return whichever one is shorter. An example would be: input string "aaabcccc" would be compressed to "a3b1c4".
I essentially split the string into a charArray and then counted each occurrence of a character and stored them into a hashmap, and looped through the map to build the new string. Just from a learning perspective, was there a better way to do this? I was given ~10 minutes to write it as well, it more to assess how I would solve the problem as opposed to the code itself. But regardless, I'd like a review of it:
import java.util.HashMap;
import java.util.Map.Entry;
public class Compressor {

    public static void main(String[] args) {
        String randomString = "aaabccccc";
        HashMap<Character, Integer> map = countCharacters(randomString);
        String compressedString = createCompressedString(map);
        if (randomString.toCharArray().length < compressedString.toCharArray().length) {
            System.out.println(randomString);
        }
        else {
            System.out.println(compressedString);
        }
    }

    /**
     * Create hash map storing each character and its occurrence count
     * @param s
     * @return
     */
    private static HashMap<Character, Integer> countCharacters(String s) {
        HashMap<Character, Integer> characterCount = new HashMap<Character, Integer>();
        char[] characterArray = s.toCharArray();
        for (Character c : characterArray) {
            int newCount;
            Integer count = characterCount.get(c);
            if (count == null) {
                newCount = 1;
            }
            else {
                newCount = count + 1;
            }
            characterCount.put(c, newCount);
        }
        return characterCount;
    }

    /**
     * Convert hash map into a string
     * @param map
     * @return
     */
    private static String createCompressedString(HashMap<Character, Integer> map) {
        String newString = "";
        for (Entry<Character, Integer> entry : map.entrySet()) {
            Character key = entry.getKey();
            Integer value = entry.getValue();
            newString += "" + key + "" + value;
        }
        return newString;
    }
}
Answer: For better performance you should be able to eliminate the Hashmap and use a StringBuilder to count and build the compressed string in one operation:
public static String Compress(String input)
{
    StringBuilder sb = new StringBuilder();
    int i = 0;
    int limit = input.length() - 1;
    char next = '\0';
    while (i < limit)
    {
        int count = 1;
        next = input.charAt(i++);
        while (i <= limit && next == input.charAt(i))
        {
            count++;
            i++;
        }
        sb.append(next);
        sb.append(count);
    }
    if (i == limit)
    {
        sb.append(input.charAt(i));
        sb.append(1);
    }
    return sb.toString();
}
Caveat: This code won't work with the full unicode set | {
"domain": "codereview.stackexchange",
"id": 33847,
"tags": "java, algorithm, strings, compression"
} |
Rationale for titrant standardization in alkalinity measurements [ISO 9963-1:1994] | Question: I work in an environmental chemistry laboratory and I will start to do some alkalinity measurements in my project. My research group is pretty big on following any and all standardization possible, which most often means compliance with ISO standards.
In the ISO document Water quality - Determination of alkalinity - Part 1: Determination of total and composite alkalinity (ISO 9963-1:1994), you need to prepare your titrant, in this case $0.1 \text{M HCl}$, and a standard solution of $0.025 \text{M Na}_{2}\text{CO}_{3}$. Then you need to standardize your titrant using either potentiometric detection (titrate a dilute standard solution to pH 4.5 and note the volume used) or visual endpoint detection (titrate a dilute standard solution with bromocresol green-methyl red indicator until the colour changes). And lastly, a blank using water as the titrand instead of the standard solution.
Using these two acid consumption volumes, you then recalculate your $0.1 \text{M HCl}$ to its actual concentration. And if you keep your stock acid solution for a longer period, you should perform this standardization every week. Then you can perform the actual titration and determination of the alkalinity in the sample.
My question is, why is this necessary? Why does the ISO stipulate this standardization?
I understand that if I prepare an acid it won't be super precise since there are effects that influence the preparation such as my handling, age of chemical, pipetting, graduated cylinder error intervals etc. But the same would apply to my preparation of the sodium carbonate. In principle I am shifting the error effects from the acid preparation to the sodium carbonate preparation which I am standardizing against.
Answer: According to this listing, $\ce{Na2CO3}$ is one of the primary standards used in volumetric analysis, i.e. it is easily available in a reproducible form and composition, has a high molecular mass, is -- once the container is opened -- not (very) hygroscopic, and is stable.
I speculate that a dilute solution of HCl (0.1 mol/L), if stored over weeks in a bottle that is recurrently opened -- perhaps on a window bench (seen so frequently in undergraduate labs), subject to ventilation and to heating by the radiators underneath -- does not have this resilience.
Using a cartridge to prepare a standard solution of HCl, which you rinse into a volumetric flask and fill up with deionised water, may by convention be seen as a valid alternative for a secondary standard. (ISO standards are settled agreements, similar to ASTM standards; if you see things differently, you may contact them.) Obviously, alteration of a single step within a whole chain of procedures may invalidate the protocol's conformance to the ISO standard -- a less severe issue in a teaching lab than for a GMP-accredited lab.
Addendum
"How is the diluted solution of HCl obtained?" brought another element into play that I did not consider earlier: the use of concentrated hydrochloric acid. If purchased and used fresh, it may be a batch of concentrated (37%) or even "fuming" HCl (about 40%), and both are hygroscopic. Small errors in the concentration of the stock solution of this corrosive reagent may indeed yield larger systematic errors down the road of repeated dilutions. (Obviously not so important if the sole intent is to refill the bottle of 1 M or 2 M HCl for the extractive workups ...)
"domain": "chemistry.stackexchange",
"id": 7759,
"tags": "titration"
} |
Summing up magnetic fields | Question: In the case of a filiform current distribution of complex shape, are we allowed to determine the magnetic field created by sections of the distribution and then sum them up, like we do with a discrete distribution of charge when calculating the electric field?
Thank you
Answer: Yes, absolutely. In other words, the magnetic field also obeys the principle of superposition. This does break down if you consider back-reaction (i.e. the currents feel Lorentz forces from the magnetic fields); but it should always be fine for static systems. | {
"domain": "physics.stackexchange",
"id": 6869,
"tags": "electromagnetism, superposition"
} |
What do we mean by wavelength of any electromagnetic wave? | Question:
What do we mean by wavelength of EMW?
Wavelength of oscillating electric field or the oscillating magnetic field?
Or is it that both the electric and magnetic field waves have the same wavelength? If yes...
Why should they have the same wavelength?
P.S. : I just have a superficial knowledge of electromagnetic radiation and waves
Answer: EM waves are formed when an electric field couples with a magnetic field. The magnetic and electric fields of an EM wave are perpendicular to each other and to the direction of propagation. The wavelength is just that -- the length of the wave through one full cycle of oscillation. The two fields oscillate in phase at the same frequency (each sustains the other through Maxwell's equations), so the electric and magnetic components necessarily share the same wavelength. | {
"domain": "physics.stackexchange",
"id": 28232,
"tags": "waves, electromagnetic-radiation, wavelength, microwaves"
} |
Implement selection from array by 'random' or 'if/elif conditionals' | Question: I'm working on some code examples from 'Automate the Boring Stuff with Python' to prepare for my first year of CS. I want to know if it's better to use an array with random selection below, or rather multiple if/elif conditionals, to generate output. Any and all suggestions are welcome.
import sys
import random
import time
answer = ['It is certain', 'It is so', 'Try later', 'Ask Again', 'Not good', 'Doubtful']
r = random.randint(0,5)
def getAnswer():
    print(answer[r])

def Main():
    reply = input('Try your luck. Yes or no.\n')  # 'try' is a reserved keyword
    if reply == 'yes':
        getAnswer()
        sys.exit()
    else:
        print('May luck be on your side.')
        sys.exit()

if __name__ == '__main__':
    Main()
Answer: The array is clean and easy to read, and it takes up much less space than an army of if/elif/else statements. In my opinion, what you have is much better than the alternative.
That said, there are a couple of things that could be changed.
First, either move the random selection into the getAnswer() function, or change the function to accept an argument. What if you decide later on that you want to get multiple answers? Even if you only call getAnswer() once, making one of these changes will make the function's job clearer.
Second, remove the hard-coded array length in the random.randint() arguments. If you later add or remove answers, the arguments to randint() need to be changed, too. This can be avoided by either using random.choice() (ideal if getAnswer() doesn't take any arguments) or by using random.randint(0, len(answer) - 1) or random.randrange(len(answer)).
randrange() is typically preferred over randint(). Everything else in Python treats ranges as including the start index and excluding the end index, but randint() includes the end index. Not only does that make randint() inconsistent with everything else, but it also makes working with lists and sequences more error-prone (notice how clean and neat randrange(len(my_list)) is compared to randint(0, len(my_list) - 1)?).
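A quick illustration of the off-by-one trap (a sketch of my own, with a made-up list):

```python
import random

options = ['It is certain', 'It is so', 'Try later']
# randint's end is inclusive, so random.randint(0, len(options)) could
# return 3 and cause an IndexError; randrange excludes the end index.
idx = random.randrange(len(options))
assert 0 <= idx < len(options)
# choice skips the index bookkeeping entirely:
assert random.choice(options) in options
```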
Here is how these changes might look (along with one or two other changes, unrelated to how the random selection is made).
Option 1: Random selection inside of getAnswer():
import sys
import random
import time

ANSWERS = ['It is certain', 'It is so', 'Try later', 'Ask Again', 'Not good', 'Doubtful']

def getAnswer():
    print(random.choice(ANSWERS))

def Main():
    prompt_string = 'Try your luck. Yes or no.\n'
    while input(prompt_string).strip().lower() in ('yes', 'y'):
        prompt_string = 'Try again (yes or no)?\n'
        getAnswer()
    print('May luck be on your side.')
    sys.exit()

if __name__ == '__main__':
    Main()
Option 2: getAnswer() takes an argument
import sys
import random
import time

ANSWERS = ['It is certain', 'It is so', 'Try later', 'Ask Again', 'Not good', 'Doubtful']

def getAnswer(chosen_index):
    print(ANSWERS[chosen_index])

def Main():
    prompt_string = 'Try your luck. Yes or no.\n'
    while input(prompt_string).strip().lower() in ('yes', 'y'):
        prompt_string = 'Try again (yes or no)?\n'
        getAnswer(random.randrange(len(ANSWERS)))
    print('May luck be on your side.')
    sys.exit()

if __name__ == '__main__':
    Main() | {
"domain": "codereview.stackexchange",
"id": 15183,
"tags": "python, array, python-3.x, random"
} |
Is it possible to create a superposition in IBMQ QISkit which has probability amplitudes $|a|\neq |b|$? | Question: For example, we can create a single qubit state with a polar angle of $\pi/2$ with the Hadamard gate. But, can we create a state such as this,
$$\Psi = \cos(\pi/8)|0\rangle + \sin(\pi/8)|1\rangle$$
where the polar angle does not equal $\pi/2$, in QISkit?
Answer: You use the standard rotations. In this case, you're looking for the ry operator (rotation around the y-axis), whose matrix is
$$R_y(\theta)=\begin{pmatrix}\cos(\theta/2) & -\sin(\theta/2)\\ \sin(\theta/2) & \cos(\theta/2)\end{pmatrix},$$
so that $R_y(\theta)|0\rangle=\cos(\theta/2)|0\rangle+\sin(\theta/2)|1\rangle$. To rotate $|0\rangle$ to your state, call ry with $2\theta$, or in your case $2\cdot\pi/8=\pi/4$.
from qiskit import *
import numpy as np
q = QuantumRegister(1)
qc = QuantumCircuit(q)
qc.ry(2*np.pi/8,q) | {
"domain": "quantumcomputing.stackexchange",
"id": 603,
"tags": "quantum-state, programming, qiskit"
} |
Boltzmann constant in atomic units | Question: I was just wondering what value the Boltzmann constant $k_B$ takes when we are using Hartree atomic units (i.e. $\hbar=e=a_0=m_e=1$), where the unit of energy is 1 hartree. Should we convert $1.38 \times 10^{-23} \rm\: J\:K^{-1}$ to hartree/K if all our other units are expressed in atomic units?
Answer:
Should we convert $1.38 \times 10^{-23} \rm\: J\:K^{-1}$ to hartree/K if all our other units are expressed in atomic units?
In short, yes. Since one hartree is $E_\mathrm{H} \approx 4.35974 \times 10^{-18}\ \mathrm{J}$, the numerical value becomes $k_B = (1.380649 \times 10^{-23}\ \mathrm{J\,K^{-1}})/(4.35974 \times 10^{-18}\ \mathrm{J}/E_\mathrm{H}) \approx 3.167 \times 10^{-6}\: E_\mathrm{H} / \rm K$. This gives the direct route to calculate products of the form $k_BT$ (with $T$ in kelvin) in hartrees, which then give values of energy directly in atomic units. | {
"domain": "physics.stackexchange",
"id": 79104,
"tags": "thermodynamics, units, si-units"
} |
Recursive, non-recursive systems; FIR, IIR systems | Question: I am confused about the classification of LTI systems as recursive or non-recursive, and as FIR or IIR.
I understand what FIR and IIR systems are, but is it correct to say that an FIR system is always non-recursive?
We could express a finite accumulator over the past N inputs (an FIR system) in both non-recursive and recursive forms.
Also, is it correct to say that a non-recursive system is always an IIR system, or vice versa?
Answer: The logical implications are the following:
"non-recursive" $\Longrightarrow$ FIR
IIR $\Longrightarrow$ "recursive"
But the opposites are not necessarily true because a FIR system can be implemented recursively (transfer function poles can be cancelled by zeros).
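For example (a sketch of my own, not from the answer): a length-$N$ moving sum has a finite impulse response, yet it admits the recursive form $y[n] = y[n-1] + x[n] - x[n-N]$, whose pole at $z=1$ is cancelled by a zero of $1 - z^{-N}$:

```python
import numpy as np

def moving_sum_direct(x, N):
    # Non-recursive (direct FIR) form: convolve with N unit taps.
    return np.convolve(x, np.ones(N))[:len(x)]

def moving_sum_recursive(x, N):
    # Recursive form of the exact same FIR system.
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (y[n - 1] if n > 0 else 0.0) + x[n] - (x[n - N] if n >= N else 0.0)
    return y

x = np.random.default_rng(0).standard_normal(50)
assert np.allclose(moving_sum_direct(x, 4), moving_sum_recursive(x, 4))
```

Both forms produce identical output, so "recursive implementation" does not imply an infinite impulse response.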
Of course, when referring to "recursive" or "non-recursive" we always talk about implementations with finitely many operations per output sample. Clearly, any discrete-time LTI system can be described by a generally infinite convolution sum, but that is not what we mean by "non-recursive". | {
"domain": "dsp.stackexchange",
"id": 10031,
"tags": "filters, filter-design, finite-impulse-response, infinite-impulse-response"
} |
very fast rotation | Question:
hi, all,
my robot is rotating very fast. it runs on move_base.
although I have capped the rotation as below in the local path planner and rotate recovery
TrajectoryPlannerROS:
max_vel_x: 0.45
min_vel_x: 0.05
max_rotational_vel: 0.2
min_in_place_rotational_vel: 0.01
acc_lim_th: 0.2
acc_lim_x: 0.5
acc_lim_y: 0
RotateRecovery:
max_rotational_vel: 0.2
min_in_place_rotational_vel: 0.01
acc_lim_th: 0.2
it still turns very fast.
especially when the robot sets off from its original position.
the max rotational vel printed from the cmd_vel topic is up to -1.0
the 0.2 cap seems only effective when the robot reaches its destination and is trying to adjust its final position.
other than the two rotational velocity settings above, is there any other place I should be looking?
thanks
Ray
Originally posted by dreamcase on ROS Answers with karma: 91 on 2014-08-01
Post score: 1
Original comments
Comment by 2ROS0 on 2014-08-02:
the nomenclature might be different as @ahendrix mentioned. But I have had similar problems from bringing down the velocities too much. I don't know what the problem source is but increasing the overall velocities and accelerations moderated that problem for me.
Answer:
You should probably also be setting the min_rotational_vel or min_vel_theta parameter to TrajectoryPlannerROS.
The other parameters to TrajectoryPlannerROS are described on the base_local_planner wiki page
Originally posted by ahendrix with karma: 47576 on 2014-08-01
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18853,
"tags": "ros, navigation, move-base, velocity"
} |
Correlating the decrease in orbital radius and penetrating power with the increase in l value | Question: I have read that s-orbitals have a stronger penetration effect as compared to p-orbitals and p-orbitals have a stronger penetration effect as compared to d-orbitals, etc.
Therefore, the electrons in an s-orbital are closer to the nucleus of the atom and their ionisation energies are greater than for an electron in a p-orbital. (Example: This is what we use to explain why the energy of ionisation decreases from $\ce{Be}$ to $\ce{B}$).
Everything made perfect sense to me until I read:
The orbital radius slightly decreases with an increase of $l$ value.
Since s-orbitals penetrate more, shouldn't that mean that being closer to the nucleus, they have smaller radius as compared to an orbital with higher $l$ value, say, the p-orbital?
How can we justify this? Or are these two pieces of information not related in this way?
Answer: tl;dr
The statement that "the orbital radius slightly decreases with the increase of $l$ value" is true only for orbitals with the same value of $n$ (shell). The penetration of the nucleus by an electron is measured by the relative electron density, which depends on both shell ($n$) and subshell ($l$) of an electron in an atom.
What is Orbital Penetration:
Penetration describes the proximity of electrons in an orbital
to the nucleus. Electrons which experience greater penetration
experience less shielding and therefore experience a larger Effective
Nuclear Charge ($Z_\text{eff}$), but shield other electrons more effectively. Electrons in different orbitals have different
wavefunctions and therefore different distributions around the
nucleus. However, contrary to what many think, penetration is not the
outer electrons penetrating through the shield of the core electrons.
It is actually just how well the electrons feel the nucleus. This is
similar to the idea of outer electrons penetrating, but not the same.
They are not the same because the core electrons have more penetration
than the outer electrons since they (the core electrons) feel the
strongest pull.
Clarification
As you can see from the definition above, the penetration of a nucleus by an electron is measured by the relative electron density near the nucleus of an atom. It is essentially, how effectively electrons can get close to the nucleus. For instance, below are the electron probability densities for s orbitals and a p orbital.
Nodes are regions of zero electron probability (white areas),
The orange color corresponds to regions of space where the phase of the wave function is positive, and the blue color corresponds to regions of space where the phase of the wave function is negative.
In a multi-electron system, the penetration of the nucleus by an electron is measured by the relative electron density near the nucleus of an atom for each shell and subshell of an electron. For example, the 2s electron is penetrating the nucleus of an atom more than the 2p electron, because the 2s has more electron density near the nucleus than the 2p electron. The penetration power of an electron, in a multi-electron atom, is dependent on the values of both the shell (n) and subshell (l) of an electron in an atom.
For the same shell value (n) the penetrating power of an electron follows this trend in subshells:
$$\ce{s > p > d > f}$$
And for different values of shell (n) and subshell (l), penetrating power of an electron follows this trend:
$$\ce{1s > 2s > 2p > 3s > 3p > 4s > 3d > 4p > 5s > 4d > 5p > 6s > 4f ...}$$ | {
"domain": "chemistry.stackexchange",
"id": 5829,
"tags": "orbitals"
} |
termination of two concurrent threads with shared variables | Question: We're in a shared memory concurrency model where all reads and writes to integer variables are atomic.
do: $S_1$ in parallel with: $S_2$ means to execute $S_1$ and $S_2$ in separate threads, concurrently.
atomically($E$) means to evaluate $E$ atomically, i.e. all other threads are stopped during the execution of $E$.
Consider the following program:
x = 0; y = 4
do: # thread T1
while x != y:
x = x + 1; y = y - 1
in parallel with: # thread T2
while not atomically (x == y): pass
x = 0; y = 2
Does the program always terminate? When it does terminate, what are the possible values for x and y?
Acknowledgement: this is a light rephrasing of exercise 2.19 in Foundations of Multithreaded, Parallel, and Distributed Programming by Gregory R. Andrews.
Answer: This trace is possible, in two separate threads T1 and T2. $state$ is $(x,y)$.
T1: ... $state=(0, 4)$
T1: x = x + 1; y = y - 1 $~~state=(1, 3)$
T1: x = x + 1; y = y - 1 $~~state=(2, 2)$
T2: x == y evaluates to true, the loop exits, and then x = 0; $~~state=(0, 2)$
T1: x != y evaluates to true, x = x + 1; y = y - 1 $~~state=(1, 1)$
T2: y = 2 $~~state=(1, 2)$
T1: x != y evaluates to true, x = x + 1; y = y - 1 $~~state=(2, 1)$
$state=(3, 0)$
$state=(4, -1)$
...
(Note that it works even if x = expr; is atomic)
There are other possible interleavings. The point $(2,2)$ is common to all of them, where T1 has pending (logically) atomic instructions:
T1: push x; push y; eq ? stop : push(x + 1); pop@x; push(y - 1); pop@y; repeat
T2: (x != y) ? repeat : x = 0; y = 2;
In the first case, T1 proceeds to stop and then T2 can only proceed and the final state is $(0,2)$.
If T2 finally skips the repeat and (T1:push x) is run before (T2:x = 0) then T1 will stop looping and the same final state is reached.
If T2 finally skips the repeat and (T1:push x) is run after (T2:x = 0) then T2 can proceed after the stop independently of (T1:y = 2).
state = (0, 2)
T1: push(x + 1); pop@x; push(y - 1); pop@y; ...
T2: y = 2;
If T2 is run now then it will loop as above, so T1 proceeds:
state = (1, 2); stack = 1
T1: pop@y; ...
T2: y = 2;
If T2 is run now, this will go to the final state $(1,1)$. Otherwise:
state = (1, 1)
T1: push x; push y; eq ? stop : push(x + 1); pop@x; push(y - 1); pop@y; repeat
T2: y = 2;
If T2 does not act before push y, this will stop and go to the state $(1,2)$. If it does, then the state is $(1, 2)$ and this will loop into $(2,1)$, $(3,0)$, ...
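The case analysis above can also be checked mechanically. The sketch below is mine, not part of the original answer: a bounded breadth-first search over all interleavings, using the same atomic granularity as the push/pop decomposition above (every read and every write of x and y atomic). The BOUND and the pc encoding are my modelling choices; the bound only cuts off the divergent runs, since every terminating run stays at small values.

```python
from collections import deque

BOUND = 10   # terminating runs never leave this range; divergent ones are pruned

def t1_step(st):
    """One atomic step of T1 (pc -1 means T1 has stopped)."""
    x, y, pc, r1, r2, t, pc2 = st
    if pc == 0: return (x, y, 1, x, r2, t, pc2)              # r1 <- x
    if pc == 1: return (x, y, 2, r1, y, t, pc2)              # r2 <- y
    if pc == 2: return (x, y, 3 if r1 != r2 else -1, r1, r2, t, pc2)
    if pc == 3: return (x, y, 4, r1, r2, x + 1, pc2)         # t <- x + 1
    if pc == 4: return (t, y, 5, r1, r2, t, pc2)             # x <- t
    if pc == 5: return (x, y, 6, r1, r2, y - 1, pc2)         # t <- y - 1
    if pc == 6: return (x, t, 0, r1, r2, t, pc2)             # y <- t
    return None

def t2_step(st):
    """One atomic step of T2 (pc2 -1 means T2 has finished)."""
    x, y, pc, r1, r2, t, pc2 = st
    if pc2 == 0:                                             # atomically (x == y)
        return (x, y, pc, r1, r2, t, 1) if x == y else None  # spinning: no change
    if pc2 == 1: return (0, y, pc, r1, r2, t, 2)             # x = 0
    if pc2 == 2: return (x, 2, pc, r1, r2, t, -1)            # y = 2
    return None

init = (0, 4, 0, None, None, None, 0)
seen, frontier, finals = {init}, deque([init]), set()
while frontier:
    st = frontier.popleft()
    x, y, pc, _, _, _, pc2 = st
    if pc == -1 and pc2 == -1:
        finals.add((x, y))       # both threads terminated
        continue
    if abs(x) > BOUND or abs(y) > BOUND:
        continue                 # cut off a divergent run
    for nxt in (t1_step(st), t2_step(st)):
        if nxt is not None and nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)

print(sorted(finals))   # the terminating outcomes
```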
To sum up the possible final states are $(0,2)$, $(1,1)$, $(1,2)$. I don't think it was worth the effort though, since I probably made mistakes. | {
"domain": "cs.stackexchange",
"id": 51,
"tags": "concurrency, shared-memory, imperative-programming"
} |
What does pseudo-irreversible mean? | Question: In this publication (PDF) I encountered the following word:
Binding of rupatadine to histamine H1 receptors isolated from the
guinea-pig cerebellum and lung was demonstrated by inhibition of
3H-mepyramine binding; equilibrium inhibition constant (affinity) [Ki]
values were 26–256 nM in the various experiments. Binding was
time-dependent and pseudo-irreversible. Rupatadine was ~7.5 and
10-times more potent than the structurally-related anti-histamines
loratadine and fexofenadine. H1 receptor occupation by rupatadine in
guinea-pig cerebellum and lung after oral (PO) dosing was relatively
rapid (with maximum binding evident at 2–4 hours post-dose) and
dose-dependent; low or no binding was seen at 48 hours post-dose.
In this context, what does pseudo-irreversible mean? Since it goes on to say the binding had dissipated within 48 hours, it seems to mean persistent or long-lived. Is that correct or does it have a more specific meaning in biology?
Answer:
it seems to mean persistent or long-lived. Is that correct...?
Yes, it's correct, but how persistent the effect is depends on the rate of dissociation or the rate of turnover of the receptor. It is relative, so to speak; relative to the effect of non-irreversible receptor binding.
Irreversible inhibition of, say, a cell surface receptor means that a drug (agonist/antagonist) binds permanently to the target receptor, usually by binding covalently; the chemical reaction is not reversible. This isn't synonymous with a permanent effect on a cell or organism; that depends on turnover of the receptor. A "faulty" receptor may be replaced quite quickly or not; it depends on the cell and the receptor.
Pseudo-irreversible means that the drug acts in a manner similar to irreversibly bound agonists/antagonists, but there is no covalent bond formed; if a drug/molecule which has a sufficiently high affinity for the receptor is introduced (in a sufficiently high quantity), the new molecule will replace that which was strongly bound before (the pseudo-irreversibly bound drug.)
Some of these types of reactions have been studied more than others. One example of this can be seen with h5-HT7 receptors, which are subject to pseudo-irreversible inactivation by risperidone and 9-OH-risperidone.
Clozapine and other competitive antagonists reactivate risperidone-inactivated h5-HT7 receptors: radioligand binding and functional evidence for GPCR homodimer protomer interactions | {
"domain": "biology.stackexchange",
"id": 5174,
"tags": "definitions"
} |
Right-justifying a number with dot leaders | Question: I'm working on a small library to help me format numbers easier as my current project deals with a lot of them. I want to be able to just supply the value, precision, width, and fillchar to a method and have it do all the work for me.
Based on the standard way of doing things with rjust()…
print '{:.2f}'.format(123.456).rjust(10, '.')
# ....123.46
… I could write:
def float_rjust(value, width, precision=2, fillchar=' ', separator='', sign=''):
#[sign][#][0][width][,][.precision][type]
spec = '{{:{0}{1}{2}.{3}f}}'.format(
sign,
width,
separator,
precision
)
return spec.format(value).rjust(width, fillchar)
However, I have found that the format() method actually allows for justifications in it:
print '{:.>10.2f}'.format(123.456)
# ....123.46
… so I could also write it this way:
def float_rjust(value, width, precision=2, fillchar=' ', separator='', sign=''):
#[[fill]align][sign][#][0][width][,][.precision][type]
spec = '{{:{0}>{1}{2}{3}.{4}f}}'.format(
fillchar,
sign,
width,
separator,
precision
)
return spec.format(value)
My main question is, should I use the [[fill]align] formatting spec in format() to do justification, or is it more Pythonic to use format().rjust()?
Answer: In terms of efficiency, the second form is probably slightly better, as it creates fewer string objects:
def float_rjust(value, width, precision=2, fillchar=' ', separator='', sign=''):
#[sign][#][0][width][,][.precision][type]
spec = '{{:{0}{1}{2}.{3}f}}'.format( # 1
sign,
width,
separator,
precision
) # 2
value = spec.format(value) # 3 - slight refactor for clarity
return value.rjust(width, fillchar) # 4
vs.
def float_rjust(value, width, precision=2, fillchar=' ', separator='', sign=''):
#[[fill]align][sign][#][0][width][,][.precision][type]
spec = '{{:{0}>{1}{2}{3}.{4}f}}'.format( # 1
fillchar,
sign,
width,
separator,
precision
) # 2
return spec.format(value) # 3
In general terms, I would use e.g. rjust only where that was the only formatting I required; it seems awkward to split the formatting into two steps.
However, I would probably rearrange this completely. You have combined two steps that I think should probably be separate: creating a specification, and applying it to an object.
Using format(value, spec) rather than spec.format(value) (i.e. the format function vs. the str.format method) lets you simplify the spec (getting rid of the extra braces):
def float_spec(width, precision=2, fillchar=' ', separator='', sign=''):
"""Create a specification for formatting floats."""
return '{}>{}{}{}.{}f'.format(fillchar, sign, width, separator, precision)
You can now inline the format call:
>>> print format(123.456, float_spec(width=10, fillchar='.'))
....123.46
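A self-contained variant of this idea (Python 3 syntax, whereas the post's examples are Python 2; the functools.lru_cache memoization is my own addition along the lines of the caching suggestion):

```python
from functools import lru_cache

@lru_cache(maxsize=None)      # equal arguments -> the same cached spec string
def float_spec(width, precision=2, fillchar=' ', separator='', sign=''):
    """Create a specification for formatting floats."""
    return '{}>{}{}{}.{}f'.format(fillchar, sign, width, separator, precision)

print(format(123.456, float_spec(10, fillchar='.')))    # ....123.46
print(format(1234.5, float_spec(12, separator=',')))    # '    1,234.50'
```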
For greater efficiency, as float_spec would return equal (but not identical) strings for the same inputs, you could add memoization/caching. You could also add validation (of either the parameters or created specs) to ensure that it outputs a valid specification for format. | {
"domain": "codereview.stackexchange",
"id": 14049,
"tags": "python, comparative-review, formatting, floating-point"
} |
Why are stresses on a flywheel similar to those in a pressure vessel? | Question: A spinning wheel or a car engine flywheel has the same maths, regarding the stresses developed when spinning, as a pressure vessel
Maybe someone knows the underlying mechanism for this similarity?
Answer: The outer rim of a flywheel must exert a radially inward force to keep the components of the wheel in circular motion. You could say this is similar to the radially inward force that must be exerted by the walls of a pressure vessel if there is a higher pressure inside than outside.
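To make the partial analogy quantitative, here is a sketch of my own (not part of the original answer) for a thin rim of density $\rho$ at radius $r$ spinning at angular velocity $\omega$, compared with a thin-walled cylindrical vessel of radius $r$ and wall thickness $t$:

```latex
% Hoop stress in a thin spinning rim (it supplies the centripetal force):
\sigma_{\text{rim}} = \rho\, v^{2} = \rho\, \omega^{2} r^{2}
% Hoop stress in a thin-walled cylinder under internal pressure p:
\sigma_{\text{hoop}} = \frac{p\, r}{t}
% Equating the two: rotation acts like an effective internal pressure
p_{\text{eff}} = \rho\, t\, \omega^{2} r
```

So for the rim itself the two problems share the same hoop-stress mathematics; this is the part of the analogy that survives.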
However, the analogy disappears when you consider the upper and lower surfaces of a flywheel. The force that the upper and lower surfaces exert must still be directed radially inward towards the axis of rotation, and varies linearly with distance from this axis, whereas the upper and lower walls of a cylindrical pressure vessel must exert a vertical force that is constant per unit area.
I don't believe any underlying mechanism is required to explain this partial analogy. | {
"domain": "physics.stackexchange",
"id": 65837,
"tags": "angular-momentum, continuum-mechanics"
} |
How to mathematically find principal axes frame for a random distribution of mass $\rho(r)$? | Question: How to mathematically find the principal axes frame for a random distribution of mass with density $\rho(r)$? That is, how to find the origin and orthonormal basis such that the moment of inertia matrix is diagonalized?
Answer: Use any Cartesian coordinate system to compute the moment of inertia tensor. It will be symmetric.
Find its eigenvectors and eigenvalues, considering it as a matrix. The eigenvectors give the directions of the principal axes in the original coordinate system. Because the matrix is symmetric, they will be orthogonal; you can of course normalize them so they have unit length.
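For a distribution of point masses, the first two steps are a few lines of NumPy; this is a sketch of my own, not from the original answer:

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Moment-of-inertia tensor of point masses about their centre of mass."""
    m = np.asarray(masses, dtype=float)
    r = np.asarray(positions, dtype=float)
    r = r - np.average(r, axis=0, weights=m)     # measure positions from the COM
    I = np.zeros((3, 3))
    for mk, rk in zip(m, r):
        # I_ij = sum_k m_k (|r_k|^2 delta_ij - r_k,i r_k,j)
        I += mk * (rk @ rk * np.eye(3) - np.outer(rk, rk))
    return I

# Two unit masses on the line x = y: a "rod" at 45 degrees in the xy-plane
I = inertia_tensor([1.0, 1.0], [(1, 1, 0), (-1, -1, 0)])
moments, axes = np.linalg.eigh(I)   # symmetric => orthonormal eigenvectors
print(moments)       # principal moments, ascending (here 0, 4, 4)
print(axes[:, 0])    # axis of the smallest moment: along (1, 1, 0)/sqrt(2)
```

Because the example is a rod along x = y, the smallest principal moment is zero and its axis lies along the rod, while the two perpendicular moments are degenerate.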
The eigenvalues give you the values on the diagonal of the diagonalized matrix. | {
"domain": "physics.stackexchange",
"id": 62732,
"tags": "reference-frames, rotational-dynamics, moment-of-inertia"
} |
Right Reset Turing Machine | Question: How do you simulate an ordinary turing machine using a right reset turing machine ?
(A right reset machine is one in which you can move left and reset to the rightmost position. Moreover, the tape is unbounded on the left but bounded on the right.)
Answer: A complete description of the simulation would be tedious. In the following I will only be sketching a strategy that allows you to simulate a movement to the right, which seems to be where the difficulty lies.
If the head is in some tape cell and you want to move it to the right you can do the following:
Add a "mark", which we will call $\alpha$, to the symbol in the current cell (formally this introduces a new symbol $x_\alpha$ for each symbol $x$ in the tape alphabet).
Reset the head to the rightmost tape cell.
Place a $\beta$ mark on the current tape cell.
Scan the tape from right to left, while remembering whether the previous symbol had a $\beta$ mark. Stop as soon as you reach the symbol with the $\alpha$-mark.
If the previous symbol had a $\beta$ mark, then:
5.1. Remove the $\alpha$ mark from the current symbol.
5.2. Reset the head position.
5.3. Scan the tape from right to left until you find the symbol with the $\beta$ mark.
5.4. Remove the $\beta$ mark. The head is now one location to the left w.r.t. its position at the beginning of this procedure.
Otherwise (the previous symbol did not have a $\beta$ mark):
6.1. Reset the head position.
6.2. Scan the tape from right to left until you find the symbol with the $\beta$ mark.
6.3. Remove the $\beta$ mark from the current symbol.
6.4. Move left.
6.5. Add the $\beta$ mark to the current symbol.
6.6. Reset the head position and repeat from instruction 4. | {
"domain": "cs.stackexchange",
"id": 19707,
"tags": "turing-machines, automata"
} |
Question about Ryder text (Generating functional) | Question:
The second equality in (6.88) he says was obtained by expanding the denominator by the binomial theorem. It is probably very dumb but I'm not following. I see how the 1 and the vacuum term in the numerator cancel with the denominator and give a 1. But I don't follow how he got the rest.
Answer: The expansion is
$$ \frac{1}{1-x} = 1 + x + x^2 + \cdots $$
where $x$ is the vacuum diagram. You only get the linear term to first order in $g$, which cancels the vacuum diagram in the numerator. There is a combinatorial proof that the cancellation of vacuum diagrams holds to all orders - Ryder should have it. | {
"domain": "physics.stackexchange",
"id": 6541,
"tags": "quantum-field-theory"
} |
Tokamaks and the reason they are still not efficient | Question: I'm curious about why tokamaks are inefficient as generators. In laymans terms, what is the main reason(s) tokamaks still cannot be used as generators?
My limited understanding of tokamaks tells me that the magnetic field required to keep the plasma in place and moving demands a vast amount of energy, much more than the tokamak itself can produce. Is there other ways we could create strong magnetic fields to contain the plasma?
And just how small could a tokamak be?
Answer: Well, after the near break-even in energy (actually about 60% of input energy recovered as output) of the prototype tokamak JET, a large number of countries joined in creating ITER, a prototype tokamak designed to have an output power in the megawatt range.
If interested, you should go to the FAQ of the link given for ITER.
There exist alternate projects:
Of the "magnetic confinement concepts" for fusion (mainly tokamaks and stellarators) the main advantage of ITER and its tokamak technology is that for the time being, the tokamak concept is by far the most advanced toward producing fusion energy. It is consequently pragmatism that dictated the choice of the tokamak concept for ITER. Stellarators are inherently more complex than tokamaks (for example, optimized designs were not possible before the advent of supercomputers) but they may have advantages in reliability of operation. The W7-X Stellarator, presently under construction in Greifswald, Germany, will allow good benchmarking against the performance of comparable tokamaks. These results will be incorporated in decisions about how DEMO, the next-generation fusion device after ITER, will look.
The "inertial fusion concepts" are something quite different. These technologies have mainly been developed to simulate nuclear explosions and were not originally planned to produce fusion energy. The inertial fusion concept has not demonstrated so far that it offers a better or shorter path than magnetic confinement to energy production. In Europe, the Euratom Framework Programs do not fund research on inertial fusion, but the program maintains a "watching brief" on developments.
Efficiency in tokamaks rises with dimensions, and that is why ITER is much larger than JET. | {
"domain": "physics.stackexchange",
"id": 31017,
"tags": "magnetic-fields, plasma-physics, fusion"
} |
JavaScript code for countdown function is maxing CPU usage | Question: I have a script for a countdown function, which is using massive amounts of CPU power on my MBP.
The script updates the countdown display every second and is updating all piecharts as well.
var options = {
scaleColor: false,
trackColor: 'rgba(255,255,255,0.3)',
barColor: '#E7F7F5',
lineWidth: 6,
lineCap: 'butt',
size: 95
};
$('#days').easyPieChart(options);
$('#hours').easyPieChart(options);
$('#minutes').easyPieChart(options);
$('#seconds').easyPieChart(options);
function countdown(endT,callback) {
var first_load = false;
var days,hours,minutes,sec,timer;
end = new Date(endT);
end = end.getTime(); //Get initial Date in Milliseconds
if (isNaN(end)) {
return;
}
var tot_current = new Date();
var tot_remain = parseInt((end - tot_current.getTime())/1000);
var tot_days = parseInt(tot_remain/86400);
timer = setInterval(calculate,1000);
function calculate(){
var current = new Date();
var remaining = parseInt((end - current.getTime())/1000); //remaining seconds
if (remaining <= 0){
clearInterval(timer);
days=0;
hours=0;
minutes=0;
sec=0;
display(days,hours,minutes,sec);
if (typeof callback === 'function' ) {
callback();
}
}else{
days = parseInt(remaining/86400);
remaining = (remaining%86400);
hours = parseInt(remaining/3600);
remaining = (remaining%3600);
minutes = parseInt(remaining/60);
remaining = (remaining%60);
sec = parseInt(remaining);
display(days,hours,minutes,sec);
}
}
function display(days,hours,minutes,sec) {
var dl = days.toString().length;
if (dl == "1") {
sl = 2;
}else{
if (isNaN(dl)) {
sl = 3;
}
sl = dl;
}
days_rem = ("00"+days).slice(-sl);
hrs_rem = ("0"+hours).slice(-2);
min_rem = ("0"+minutes).slice(-2);
sec_rem = ("0"+sec).slice(-2);
$("#days span").text(days_rem);
$("#hours span").text(hrs_rem);
$("#minutes span").text(min_rem);
$("#seconds span").text(sec_rem);
$("#days").data('easyPieChart').update((100/tot_days)*days_rem);
// Disable animation for the first load
if(hrs_rem == 23 && first_load) { $('#hours').data('easyPieChart').disableAnimation(); }
$("#hours").data('easyPieChart').update((100/23)*hrs_rem);
if(hrs_rem == 23 && first_load) { $('#hours').data('easyPieChart').disableAnimation(); }
if(min_rem == 59 && first_load) { $('#minutes').data('easyPieChart').disableAnimation(); }
$("#minutes").data('easyPieChart').update((100/59)*min_rem);
if(min_rem == 59 && first_load) { $('#minutes').data('easyPieChart').enableAnimation(); }
if(sec_rem == 59 && first_load) { $('#seconds').data('easyPieChart').disableAnimation(); }
$("#seconds").data('easyPieChart').update((100/59)*sec_rem);
if(sec_rem == 59 && first_load) { $('#seconds').data('easyPieChart').enableAnimation(); }
first_load = true;
}
}
var d = new Date(2015, 6, 10, 12, 7, 0, 0);
countdown(d,null);
The code can be seen in action here.
Answer: First, I'll get to the fix for your problem. The problem is more in the CSS than in your JavaScript. Because you are updating the graph every second, that area of the page has to be repainted each time. Due to the way browsers handle these things by default, it has to also check the surrounding area to see if it now needs to be changed as well. So the idea is to minimize this. And the simplest way to do this is to add the following lines into your CSS:
.chart {
transform: translateZ(0);
z-index:1;
/* future proofing */
will-change: transform;
}
What this code does is move the charts onto their own rendering layer. This prevents the browser from having to check the surrounding areas because each chart is now separated from the rest of the content. In Chrome, you can see the performance difference this makes by watching the Timeline in the Dev tools. In the original, it spends ~180 ms repainting the screen and ~100 ms performing calculations. In the updated version, it spends ~20 ms for each. Add this over four graphs and you can see why the CPU is pegged. The will-change is a new CSS property that will basically do the same thing as the two lines above in the somewhat-near-future. I have updated the fiddle with just this change and you can see the difference.
Now, the code review:
As always, the first thing I recommend is to remove your code from the global scope. The easiest way to do this is to use an IIFE. This will basically create a private scope for all your code which reduces the amount of code you place in the global scope, which in turn, reduces the chance of collisions with other peoples code. Since you are using jQuery, this also provides a way to make sure $ always points to it.
(function( $, window, document, undefined ) {
//your code here
})( jQuery, window, document );
So in the above statement, we are passing jQuery as $, as well as the window and document objects (for a slight performance boost) and, to deal with older browsers, making sure undefined is actually undefined.
The next thing you could do is cache your jQuery selectors. This will improve the overall performance because selecting an element in the DOM is one of jQuery's slowest performing functions. You are using IDs, which helps, but caching them is even quicker. We want to do this as soon as the DOM is ready. I usually create a single function for everything that has to happen there and then call the single function. Also, since we will be referring to these selectors we will declare them immediately inside of our IIFE. I have also included the reference to create the piecharts:
(function( $, window, document, undefined ) {
var $days, $daysText, $hours, $hoursText,
$minutes, $minutesText, $seconds, $secondsText;
function init() {
var options = {
scaleColor: false,
trackColor: 'rgba(255,255,255,0.3)',
barColor: '#E7F7F5',
lineWidth: 6,
lineCap: 'butt',
size: 95
};
getSelections();
$days.easyPieChart( options );
$hours.easyPieChart( options );
$minutes.easyPieChart( options );
$seconds.easyPieChart( options );
}
function getSelections() {
$days = $('#days'),
$daysText = $days.find('span'),
$hours = $('#hours'),
$hoursText = $hours.find('span'),
$minutes = $('#minutes'),
$minutesText = $minutes.find('span'),
$seconds= $('#seconds'),
$secondsText = $seconds.find('span');
}
$(function() { //shortcut for document.ready event
init();
});
})( jQuery, window, document );
One other big change you need to make is to not update all four pie charts every second. There is only one chart that needs updating that often (seconds obviously). You should only update the minutes chart every minute, the hour every hour and the days every day. That way, on almost every interaction you are only updating the one chart.
Another way you could improve this code is to DRY it out a bit. One place that would greatly benefit from this is the display function. We need to make changes here anyway to help out with the problem mentioned above.
Let's take this code:
days_rem = ("00"+days).slice(-sl);
hrs_rem = ("0"+hours).slice(-2);
min_rem = ("0"+minutes).slice(-2);
sec_rem = ("0"+sec).slice(-2);
$("#days span").text(days_rem);
$("#hours span").text(hrs_rem);
$("#minutes span").text(min_rem);
$("#seconds span").text(sec_rem);
Notice how that is pretty much the same thing repeated over and over? That means it needs to be its own function:
updateText( $daysText, days );
updateText( $hoursText, hours );
updateText( $minutesText, minutes );
updateText( $secondsText, seconds );
function updateText( $el, val) {
var len = val.toString().length; // val is a number, so convert it before measuring
val = ( len === 1 ) ? '0'+val : val;
$el.text( val );
}
With this function, val will always be at least 2 characters long.
Now before updating the charts, check to see what are current value is. If it's the same as the new value (which will be typical for minutes, hours, days), then there is no reason to update them (or there text for that matter). Adding a data element with jQuery is a great way to keep track of this.
if ( $days.data('remaining') !== days_rem) {
// update it
updateText( $daysText, days );
$days.data('remaining', days_rem);
$days.data('easyPieChart').update( ( 100 / tot_days) * days_rem );
}
The first time through, the remaining data element is not set so it will update the chart. Each other time through, it will check that existing value and only update the chart when the value actually changes. Once the code is added for each chart, it becomes obvious that this can be DRY-ed out as well. Maybe with something like this:
updateChart( $days, $daysText, days_rem, ( 100 / tot_days ) * days_rem );
function updateChart( $chart, $chartText, newValue, formula ) {
//code here
}
I will leave that code up to you.
At this point, you should see a noticeable increase in the performance of the page.
Some other things to consider:
In your calculate function you have a bunch of Magic Numbers such as 86400. Although this is fairly straight forward when you know what you are looking at, it might be easier to set up some variables to hold those that provide some explanation for them:
var SECONDS_IN_A_DAY = 86400;
days = parseInt( remaining / SECONDS_IN_A_DAY , 10 );
Also notice the radix parameter for the parseInt function. It isn't required but it's one of those things you should use anyway. That way the parser doesn't take the extra time to figure this out on its own.
I hope that helps! Feel free to post any comments or questions.
Updates based on comments/fiddle
I have created a fiddle with all of the updated code, including some of the things you were trying to do on your own fiddle. To show you the performance improvements, I created a couple of screen captures. Note these are both showing when the seconds chart goes from 1 to 59.
This first one is your original fiddle linked in your post.
Performance Before
This is the updated fiddle I linked to above.
Performance After
By reducing the number of times each of the charts have to update and using the CSS above, you can see the execution time has improved significantly. These samples were taken with Chrome dev tools. If you are seeing significant CPU thrashing, try running in an incognito window to see if it is something else (tab or extention) interfering with your performance. | {
"domain": "codereview.stackexchange",
"id": 14573,
"tags": "javascript, jquery"
} |
Unexpected output of robot_localization EKF | Question:
UPDATE
I have changed the yaw_offset to 0 instead of 3.14. This has caused /odometry/filtered and odometry/gps to give much better results.
Setup
I have a TurtleBot3 robot with a GPS receiver. I am running ROS2 Dashing on Ubuntu 18.04 and a Raspberry Pi 3 Model B+. The transformation between /odom and /base_link is done in the TurtleBot3 bringup process. When I listen to the /odom everything seems good here. NB! The odometry topic has odometric data fused with IMU data. This is done in the hardware driver from ROBOTIS. In /odom: When I drive the robot forward, x-position increases. Moving to the left increases y-position. Turning counterclockwise around z gives an increase in yaw, so everything seems good here.
I want to navigate outside so I want to use my GPS data for position information, and use the fused odometry and IMU data for orientation. I have set up an extended Kalman filter using the ekf_node from robot_localization and the navsat_transform node. The setup of these can be seen in the .yaml files below together with the launch file.
I have a tf2 transformation in the launch file since my GPS is mounted 23 centimeters above base_link and 8 centimeters behind. I have attached a picture (gps_transform) where this transform can be seen in rviz. It appears to be looking alright.
In the setup of the navsat_transform node I have a yaw_offset of 3.14.(Changed to 0 after update to post.) When I rotate my robot so that the imu_link x axis is pointing west I read 0 in yaw. From what the documentation says most IMUs have either north or east heading so there might be an issue here.
I have attached a ros2bag where the robot is stationary from 0:00 to 1:50 and starts moving thereafter, named ekf_04_26-17_27, plus a ros2bag with the new yaw_offset called ekf_28_24.
Questions
As the robot is moving forward the /odom is increasing in x as expected. However, /odometry/filtered is decreasing along the y axis and the same goes for /odometry/gps. How can that be? I suspect this may have something to do with the yaw_offset I have supplied to the navsat_transform node. UPDATE: This was caused by the wrong yaw_offset; finding the correct yaw_offset solved this issue.
I can see that my /odometry/gps starts in position XYZ (0.08, 0.0, -0.23); from the transform I would expect the opposite, XYZ (-0.08, 0.0, 0.23). Is something wrong with the transform itself or is it possible that the problem is arising from some other misconfiguration of the ekf setup? UPDATE: I am not sure that this is even a problem. It might be the correct transform. I just have a hard time grasping this one. Knowing that /odometry/gps is given in the map (world) frame and my transform is defined from /base_link to /gps, my immediate expectation does not make sense.
At the beginning, map, odom, base_link, imu_link and gps are all placed around the same place with the x axes pointing in the same direction. When the robot starts moving, the map frame starts moving away from the odom frame. (This can be seen in the “map_odom.webm” screen recording (old yaw_offset) and the “odometry_filtered_in_map_odom.webm” screen recording (new yaw_offset).) The transformation between /odom and /map is coming from the ekf_node. I don’t have a map frame prior to running the ekf, as I assumed that the ekf creates one. Is this correct?
Also, should /map leave /odom? From REP-105, I understand that /odom is anchored in /map so shouldn’t the transform from map -> odom and vice versa be static? UPDATE: I now believe that /map and /odom should leave each other in the rviz visualization. What rviz is showing is how the /base_link frame is positioned in both /map and /odom. And for my case /base_link is positioned differently, therefore should /map and /odom not be in the same place.
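For anyone else hitting this: regarding the yaw_offset in the first question above, my rough understanding (a sketch, not authoritative) is that navsat_transform corrects the heading by adding yaw_offset and the magnetic declination to the raw IMU yaw:

```python
import math

def corrected_yaw(imu_yaw, yaw_offset=0.0, magnetic_declination=0.0):
    """Roughly the heading correction navsat_transform applies: add
    yaw_offset and the magnetic declination to the raw IMU yaw, wrapped."""
    yaw = imu_yaw + yaw_offset + magnetic_declination
    return math.atan2(math.sin(yaw), math.cos(yaw))   # normalize to (-pi, pi]

# A yaw_offset of pi flips an east-facing reading (0) to point west,
# which is the kind of sign error I was seeing before the update:
print(corrected_yaw(0.0, yaw_offset=math.pi))
print(corrected_yaw(0.0, yaw_offset=0.0, magnetic_declination=0.0849975346))
```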
Link to bag file and video snippet
https://drive.google.com/open?id=1ksf0XDbP5WqcC4gbdiuGuMazEuaQp_hc
Gps transform
Yaml param file for ekf_node
ekf_filter_node_map:
ros__parameters:
frequency: 30.0
sensor_timeout: 0.1
two_d_mode: true
transform_time_offset: 0.0
transform_timeout: 0.0
print_diagnostics: true
debug: false
map_frame: map
odom_frame: odom
base_link_frame: base_link
world_frame: map
odom0: odom
odom0_config: [false, false, false,
false, false, true,
false, true, false,
false, false, false,
false, false, false]
odom0_queue_size: 10
odom0_nodelay: true
odom0_differential: false
odom0_relative: false
odom1: odometry/gps
odom1_config: [true, true, false,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
odom1_queue_size: 10
odom1_nodelay: true
odom1_differential: false
odom1_relative: false
Yaml param file for navsat_transform node
navsat_transform:
ros__parameters:
frequency: 30.0
magnetic_declination_radians: 0.0849975346 # For lat/long 57.452989, 10.021515
yaw_offset: 0.0 # Changed from 3.14159265359 after update to this post
zero_altitude: false
broadcast_utm_transform: true
publish_filtered_gps: true
use_odometry_yaw: false
wait_for_datum: false
Launch file
from launch import LaunchDescription
from ament_index_python.packages import get_package_share_directory
import launch_ros.actions
import os
import yaml
from launch.substitutions import EnvironmentVariable
import pathlib
import launch.actions
from launch.actions import DeclareLaunchArgument
def generate_launch_description():
return LaunchDescription([
launch.actions.ExecuteProcess(
cmd=['ros2', 'run', 'tf2_ros', "static_transform_publisher",
"-0.08", "0.00", "0.23", "0", "0", "0", "base_link", "gps"],
output='screen'
),
launch_ros.actions.Node(
package='localization',
node_executable='gps',
node_name='gps_node',
output='screen',
),
launch_ros.actions.Node(
package='robot_localization',
node_executable='ekf_node',
node_name='ekf_filter_node_map',
output='screen',
parameters=[os.path.join('/home/ubuntu/loc_nav_ws/src/turtle_localization/params', 'ekf_filter_map.yaml')],
),
launch_ros.actions.Node(
package='robot_localization',
node_executable='navsat_transform_node',
node_name='navsat_transform',
output='screen',
parameters=[os.path.join('/home/ubuntu/loc_nav_ws/src/turtle_localization/params', 'navsat_transform.yaml')],
remappings=[("imu/data", "imu")]
),
])
Originally posted by DanielRobotics on ROS Answers with karma: 42 on 2020-04-27
Post score: 1
Original comments
Comment by Tom Moore on 2020-06-24:
Would you say that your update above answers this question, such that we can mark it as answered? Sorry for not looking at this sooner.
Comment by DanielRobotics on 2020-06-24:
Yes, it can be marked answered
Answer:
Original question was edited and answered above:
UPDATE I have changed the yaw_offset to 0 instead of 3.14. This has caused /odometry/filtered and odometry/gps to give much better results.
Originally posted by Tom Moore with karma: 13689 on 2020-06-24
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 34841,
"tags": "navigation, ros2, ekf, gps, navsat-transform-node"
} |
Derivation of Klein Gordon equation from Dirac equation; what does it mean? | Question: In Dirac field (Peskin and Schroeder), there is one equation in which it
multiples the Dirac operator
$$(-i\gamma^{\mu}\partial_{\mu}-m )$$
by
$$(i\gamma^{\nu}\partial_{\nu}-m ),$$
obtaining $\partial^2+m^2 = 0$.
What does it mean?
Answer: It means that if a given field is a solution of the Dirac equation, then its components are automatically solutions of the Klein-Gordon equation.
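Explicitly, acting on a solution $\psi$ of the Dirac equation and using the Clifford algebra relation $\{\gamma^{\mu},\gamma^{\nu}\} = 2\eta^{\mu\nu}$ (note the cross terms in $m$ cancel):

$$(i\gamma^{\nu}\partial_{\nu}-m)(-i\gamma^{\mu}\partial_{\mu}-m)\psi = \left(\gamma^{\nu}\gamma^{\mu}\partial_{\nu}\partial_{\mu}+m^{2}\right)\psi = \left(\tfrac{1}{2}\{\gamma^{\nu},\gamma^{\mu}\}\partial_{\nu}\partial_{\mu}+m^{2}\right)\psi = \left(\partial^{2}+m^{2}\right)\psi = 0,$$

and since $\partial^{2}+m^{2}$ is a scalar operator, this holds component by component.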
This is basically how the Dirac equation is 'derived' (justified, really). The Klein-Gordon equation is nice and relativistically invariant, but the fact that it's second-order in time is awkward, inconsistent with the form we'd like to have for a Schrödinger equation, and ultimately undesirable. It's therefore desirable to have a "square root" of the Klein-Gordon operator, i.e. something which is first-order in time (and therefore in the form of a Schrödinger equation) and which naturally squares to the Klein-Gordon rendering of the relativistic energy-momentum dispersion relation. This is, of course, impossible using $c$-number coefficients, but having non-commutative operators makes the trick work ─ and of course, when you're finished, you're left with the Dirac equation. | {
"domain": "physics.stackexchange",
"id": 58253,
"tags": "quantum-field-theory, dirac-equation, klein-gordon-equation"
} |
Vertical pipe flow | Question: I want to know the flow through a vertical pipe when a valve in the bottom is opened (a disk is lowered as illustrated in the drawing)
I know the heights in the drawing and the diameter of the pipe.
I have tried to use Poiseuille's law (assuming laminar flow). Where I get the following:
Flow: <$Q_{pipe} = \frac{(p_1-p_2-\rho \cdot g \cdot h_{submerged}) \cdot \pi \cdot D_{pipe}^4}{128 \cdot \mu \cdot h_{pipe}} $>
Velocity: <$\bar{v}_{pipe} = \frac{(p_1-p_2-\rho \cdot g \cdot h_{submerged}) \cdot D_{pipe}^2}{32 \cdot \mu \cdot h_{pipe}} $>
where: <$p_1 = \rho \cdot g \cdot h_{submerged} \quad p_2 = p_{atm}$>
However, using this I end up with an average fluid velocity of approximately 9000 m/s.
Can anyone help me set up the equations needed?
parameters:
<$\rho = 1000 \; kg/m^3$>
<$g = 9.82 \; m/s^2$>
<$h_{pipe} = 0.152 \; m$>
<$D_{pipe} = 0.102 \; m$>
<$\mu = 0.00141 \; Pa \cdot s$>
the submerged height can vary in the range of 0.1 to 0.45 m.
Answer: Since the pipe is vertical, the flow will have to overcome its height as well (if $h_{submerged} = h_{pipe}$, there should be no flow). So the proper pressure difference driving the flow will be:
$$\Delta p = \left(h_{submerged} - h_{pipe}\right)\cdot \rho\cdot g$$
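A quick numerical check (Python; parameters from the question, $\Delta p$ from the equation above) shows why a laminar Poiseuille estimate still fails here: the velocity it predicts implies an enormous Reynolds number, so the flow would in fact be turbulent and a turbulent correlation (e.g. Darcy–Weisbach) is needed:

```python
rho, g = 1000.0, 9.82        # water density [kg/m^3], gravity [m/s^2]
mu = 0.00141                 # dynamic viscosity [Pa*s]
D, h_pipe = 0.102, 0.152     # pipe diameter and height [m]
h_sub = 0.45                 # submerged height [m] (upper end of the range)

dp = (h_sub - h_pipe) * rho * g      # driving pressure difference [Pa]
v = dp * D**2 / (32 * mu * h_pipe)   # mean velocity *if* the flow were laminar
Re = rho * v * D / mu                # Reynolds number for that velocity

# v comes out in the km/s range, so the laminar assumption is invalid
print(round(dp), round(v), f"{Re:.1e}")
```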
If calculated flow is not laminar, you can use the approach from this answer. | {
"domain": "engineering.stackexchange",
"id": 4924,
"tags": "fluid-mechanics"
} |
Is my Java SQL connection secure from hackers? | Question: I would like to know if my java db class is enough protected against hackers. (I'm currently developing an Android application). I protect it with a infos.properties file which contains every information that I need.
final class DB {
static final String DRIVER;
static final String URL;
static final String USER;
static final String PASSWORD;
static {
Properties prop = new Properties();
try {
prop.load(new FileInputStream("infos.properties"));
} catch (IOException e) {
e.printStackTrace();
}
DRIVER = prop.getProperty("DRIVER");
URL = prop.getProperty("URL");
USER = prop.getProperty("user");
PASSWORD = prop.getProperty("password");
}
public static ResultSet doQuery(String query)
{
try {
Class.forName(DRIVER);
Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(query);
if (rs != null) rs.close();
if (stmt != null) stmt.close();
if (conn != null) conn.close();
return (rs);
} catch (ClassNotFoundException e) {
e.printStackTrace();
} catch (SQLException e) {
e.printStackTrace();
}
return null;
}
}
If it is not sufficiently protected, what should I do?
Answer: If possible I would try to avoid a method like doQuery(String query). If someone manages to manipulate the query this becomes a classical injection problem.
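To make the injection risk concrete, a minimal, self-contained sketch (table and column names are hypothetical); the commented lines show the PreparedStatement form, in which the driver keeps user input as a value rather than splicing it into the SQL text:

```java
// Why doQuery(String) is dangerous: user input spliced into the statement
// text becomes part of the SQL itself.
public class InjectionDemo {
    static String unsafeQuery(String userId) {
        return "SELECT * FROM users WHERE id = '" + userId + "'";
    }

    public static void main(String[] args) {
        String malicious = "x' OR '1'='1";
        // The WHERE clause now matches every row in the table:
        System.out.println(unsafeQuery(malicious));

        // Safe alternative (sketch) -- the value is sent separately:
        // PreparedStatement ps = conn.prepareStatement(
        //         "SELECT * FROM users WHERE id = ?");
        // ps.setString(1, userId);
        // ResultSet rs = ps.executeQuery();
    }
}
```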
If you do not need the flexibility to make any kind of db requests, use prepared statements for specific requests which only get some parameters. | {
"domain": "codereview.stackexchange",
"id": 7567,
"tags": "java, sql, security"
} |
Why does current remain same in a series circuit? | Question: Current is the rate of flow of charges with time. When passing through each resistance in a series circuit, charges lose energy and "slow" down due to collisions with atoms (metal cations). When time increases, current decreases. I agree number of charges remains same but what about time?
Answer: Imagine a scenario. Take a battery having a potential of $V$, with its ends connected by a resistanceless wire (an idealized approximation). In that case, the electrons in the wire are offered no 'resistance' and hence are free to move from the negative end to the positive end of the battery. From a simple application of Ohm's law $$I=\frac{V}{R}$$ we see that the current is not defined, but in the limiting case of $R\rightarrow 0$, $I\rightarrow \infty$. Ohm's law is an approximate law and we need a better treatment of the scenario to understand what is happening, but we do know that the electrons don't travel through the resistanceless wire instantaneously. There are electrons all along the wire which just 'drift' towards the positive end of the battery without any obstruction, and so quite fast (though not instantaneously).
Now, if we include a resistor of resistance $R$ in this scenario, the drift of the electrons is no doubt restricted, and this restriction of the motion of charges through the resistor is indeed what causes the potential drop across it; the drift through the resistor occurs at a smaller rate, hence the smaller current.
For two resistors in series, this drift is slowed by both of the resistors (hence a smaller current as compared to the single-resistor case). However, the drift velocity has to be the same through the entire wire, since electrons cannot accumulate anywhere in the wire, as pointed out by @Nuclear Wang and the OP in the comments. Electrons do lose energy in collisions, and that loss appears as heat; as the battery runs down, the potential and hence the current decrease over time. But the current cannot have different values at different points of such a series circuit.
"domain": "physics.stackexchange",
"id": 56798,
"tags": "electricity, electric-circuits"
} |
Are charges absolute or relative? | Question: The charge of a particle is (mostly) an intrinsic property of the particle.
One of the few elementary particles that doesn't have a charge are neutrino's.
Does that mean that it is still possible that scientists will ever find a sort of neutrino particle that has a sort of charge-relation with our neutrino, like an electron has with a proton? That is, a relative charge.
Or is it not possible because a neutrino has absolutely no charge in any way?
Answer: I think you use "relative vs absolute" to mean "distinguishable vs undetermined".
If this is the case, we could say that yes, it is possible that the charge of the neutrino is undetermined (relative). This is because having a charge is physics jargon for being "susceptible to a certain kind of interaction".
Thus neutrinos have no electric charge (do not "feel" electric forces) but do have lepton charge (feel weak forces). And the same applies to all elementary particles and all such properties (maybe spin is a little trickier but in a sense, also the same case).
Thus it is plausible that we could detect a new kind of fundamental interaction to which they are susceptible, in which case your assertion would be true. | {
"domain": "physics.stackexchange",
"id": 27836,
"tags": "charge, neutrinos"
} |
How important is it that the generator of a generative adversarial network doesn't take in information about input classes? | Question: I'm building a generative adversarial network that generates images based on an input image. From the literature I've read on GANs, it seems that the generator takes in a random variable and uses it to generate an image.
If I were to have the generator receive an input image, would it no longer be a GAN? Would the discriminator be extraneous?
Answer: Short Answer
Generative networks in generative network arrangements do not learn about input images directly. Their input during training is feedback from the discriminative network.
The Theory in Summary
The seminal paper, Generative Adversarial Networks, Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, and Bengio, June 2014, states, "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models ..." The two models are defined as MLPs (multilayer perceptrons) in the paper.
Generative model, G
Discriminative model, D
These two models are interconnected such that they form a negative feedback loop.
G is trained to capture the feature relational distribution of a set of examples and generate new examples based on that relational distribution well enough to fool D.
D is trained to differentiate G's mocks from the set of externally procured examples.
Applying Concepts
If G were to receive input images, their presence would merely frustrate network training, in that the goal of the training would likely be inadequately defined. The objective of the convergence of G, stated above, is not the learning of how to process the images to produce some other form of output. Its objective in the generative approach is to learn how to generate well, an entirely incompatible objective with either image evaluation or image processing.
Additional Information
Additionally, one image is not nearly enough. There must be a sufficiently large set of example images for training to converge at all and then many more to expect the convergence to be both accurate and reliable. The PAC (probably approximately correct) learning analysis framework may be helpful to determine how many examples are needed for a specific case.
Essential Discriminator
The discriminator is essential to the generative approach because the feedback loop referenced above is essential to the convergence mechanism. The bidirectional interdependence between G and D is what allows a balanced approach toward accuracy in the feature relational distribution. That accuracy facilitates the human perception that the generated images fit adequately within a cognitive class.
Response to Comments
The attempt to use a generative approach, "To paint in gaps in an image," is reasonable. In such a case, using Goodfellow's nomenclature, G would be generating the missing pixels and D would be trying to discriminate between G's gap filling and the pixels that were in the scenes in the regions of gaps prior to their introduction.
There are two additional requirements in the scenario of filling in pixels.
G must be strongly incentivized against allowing a large gradient between the generated pixels and the adjacent non-gap pixels, unless that gradient is appropriate to the scene, as in the case of an object edge, or a change in reflectivity, such as a surface abrasion or the edge of a spray painted shape.
D must train using the entire image, which means the examples should be images without gaps, the gaps must be introduced in a way that matches the expected distribution of features of gaps that may be encountered later, and the result of G must be superimposed over the full image to produce the input arising from G and discriminated from the original by D.
It is recommended to begin with a standard GAN design, create a test for it (in TDD fashion), implement it, experiment with it, and otherwise become familiar with it and the mathematics involved. Most important to understand is how the balance between G's convergence and D's convergence is obtained in the loss (a.k.a. error or disparity) functions for each, and what concepts of feedback are employed using those functions.
Does your point about input images frustrating network training apply to this kind of problem, or just to GANNs that generate from scratch?
It applies to both.
Would I have to have the generator compare the original image with the generated image and pick which one it thinks is better in order to deal with the "adequately defined" issue?
D compares, not G. That is the delegated arrangement. It is not that other arrangements cannot work. They may. But Goodfellow and the others understood what worked in artificial networks long before they discovered a new approach, and they likely worked out the math of that approach and diagrammed it, perhaps on a white board, long before they typed a single line of code. | {
"domain": "ai.stackexchange",
"id": 1086,
"tags": "keras, generative-model, generative-adversarial-networks"
} |
What is the meaning of $Ta_k$ of fourier series or transform? | Question: What is the meaning of $Ta_k$ of fourier series or transform?
I am taking a course on signal and systems.
On page 286 of my textbook, it says that as T becomes arbitrarily large the original periodic square wave approaches a rectangular pulse.
Also it says that all that remains in the time domain is an aperiodic signal corresponding to one period of the square wave.
(textbook: Signals and Systems, second edition, author: Oppenheim)
I have a difficulty understanding this.. I can't connect this idea with fourier transform.
I suggest a link http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-003-signals-and-systems-fall-2011/lecture-videos-and-slides/MIT6_003F11_lec16.pdf
Answer: The idea is that a Fourier series is only defined for periodic signals. In the discussion in the linked slides, the author is considering a rectangular pulse train with period $T$. That is, a pulse of width $2S$ repeats periodically with a spacing of $T$ between them. The pulses are therefore centered at:
$$
[\ldots, -2T, -T, 0, T, 2T, \ldots ]
$$
Now, consider what happens as $T \to \infty$: in the limit, the only pulse that remains is the one centered at zero; the others are infinitely far away. When the author makes the claim that:
$$
\lim_{T\to \infty} T a_k = E(\omega)
$$
He or she is trying to show that the Fourier transform, which is defined for suitably well-formed aperiodic signals, can be thought of as the Fourier series of that signal (which typically wouldn't be defined since the signal is not periodic) in the limiting case of an infinite period. Stated a little differently, you can in some way think of an aperiodic signal as a periodic signal with infinite period.
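Concretely, with the analysis formula for a period-$T$ signal and $\omega_0 = 2\pi/T$ (the notation $E(\omega)$ for the transform of one period follows the slides):

$$a_k = \frac{1}{T}\int_{-T/2}^{T/2} x(t)\, e^{-jk\omega_0 t}\,dt,$$

so

$$T a_k = \int_{-T/2}^{T/2} x(t)\, e^{-jk\omega_0 t}\,dt \;\xrightarrow{\;T\to\infty\;}\; \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\,dt = E(\omega), \qquad \text{with } \omega = k\omega_0 \text{ held fixed}.$$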
The multiplication by $T$ in the limit is to account for the differences in definition between the Fourier series and Fourier transform: the series representation typically has a factor of $\frac{1}{T}$, while the transform does not. I don't know that there is a lot of insight to be gained via this analysis, but it shows that the series and transform representations are intimately related. | {
"domain": "dsp.stackexchange",
"id": 909,
"tags": "fourier-transform"
} |
PyQt5 validator for decimal numbers | Question: This class is similar to QDoubleValidator, but it improves slightly on the editing comfort. The comments mention "comma" but it's actually the decimal point of your locale (comma for me).
class DecimalValidator(QDoubleValidator):
def __init__(self, *args):
QDoubleValidator.__init__(self, *args)
self.setNotation(QDoubleValidator.StandardNotation)
def validate(self, input, pos):
sep = self.locale().decimalPoint()
if pos and (input[pos-1]==sep) and (sep in input[pos:]):
# When we're left of the comma, and comma is pressed,
# remove the inserted comma and move right of the old comma.
input = input[:pos-1] + input[pos:]
pos = input.find(sep)+1
elif sep in input[:pos] and (len(input.split(sep)[1]) > self.decimals()):
# When we're right of the comma, and all decimal places are used already,
# go into overwrite mode (by removing the old digit)
input = input[:pos] + input[pos+1:]
return QDoubleValidator.validate(self, input, pos)
Answer: I noticed your commenting style and decided it's worthy of its own answer.
You admit the comma is not always a comma. So don't call it a comma. I went for separator, which is as neutral as it gets. If you think of something better, go for it.
Also, I think your comments would fit better as a docstring than as comments.
class DecimalValidator(QDoubleValidator):
def __init__(self, *args):
QDoubleValidator.__init__(self, *args)
self.setNotation(QDoubleValidator.StandardNotation)
def validate(self, input, pos):
'''
When we're left of the separator, and separator is pressed,
remove the inserted separator and move right of the old separator.
When we're right of the separator, and all decimal places are used already,
go into overwrite mode (by removing the old digit)
'''
sep = self.locale().decimalPoint()
if pos and (input[pos-1]==sep) and (sep in input[pos:]):
input = input[:pos-1] + input[pos:]
pos = input.find(sep)+1
elif sep in input[:pos] and (len(input.split(sep)[1]) > self.decimals()):
input = input[:pos] + input[pos+1:]
return QDoubleValidator.validate(self, input, pos) | {
"domain": "codereview.stackexchange",
"id": 16691,
"tags": "python, validation, qt, pyqt"
} |
what would a "ceiling effect" (the converse of ground effect planes experience) entail? | Question: Wikipedia describes ground effect as "the increased lift (force) and decreased aerodynamic drag that an aircraft's wings generate when they are close to a fixed surface."
That's all fair and good, but when they state "close to a fixed surface" they pretty much assume the surface would be the ground or another large surface under the wing (hence the name).
What I'm wondering is, in the unlikely event that a plane were to fly near a ceiling (so the fixed surface would be over the plane), what would be the net effect? Will there be more, or less lift? Drag? Is there something else that could be affected which I'm not considering?
I can't think of any realistic scenarios where this could happen besides someone flying an RC toy plane in a warehouse (that's probably why it's much less documented than ground effect) but I was thinking about this and was curious.
Answer: A plane flying just below a ceiling would experience a beneficial effect very similar to a ground effect. It would have greater lift and smaller drag than if there were no ceiling.
This can be seen most easily by considering a typical 3-D wing to be operating in an adverse flow field caused by counter rotating vortices located at the wingtips (normally called a downwash flow).
Adding a planar surface either below or above the wing can be modeled using reflections of the original tip vortices. The reflected vortices have signs opposite to the original ones, so they create an upwash flow that partially offsets the original adverse downwash flow. I repeat, this effect is the same whether the reflection plane is above or below the wing.
This sketch shows the wing with its tip vortices above a ground plane:
And this very similar sketch shows the wing & its tip vortices below a "ceiling plane":
A realistic scenario where the "ceiling effect" can happen is a hydrofoil below the surface of the water. In linearized flow models, there are two conditions where the water's surface can be considered planar. In the low-speed limit (small Froude Number), the reflected vortices have opposite sign to the originals, just like in your "ceiling-effect" case, and the effect is beneficial. At the high speed limit however (large Froude Number), they have the same sign, so the effect is detrimental. | {
"domain": "physics.stackexchange",
"id": 50586,
"tags": "aerodynamics, aircraft"
} |
Why polynomial time is called "efficient"? | Question: Why in computer science any complexity which is at most polynomial is considered efficient?
For any practical application(a), algorithms with complexity $n^{\log n}$ are way faster than algorithms that run in time, say, $n^{80}$, but the first is considered inefficient while the latter is efficient. Where's the logic?!
(a) Assume, for instance, the number of atoms in the universe is approximately $10^{80}$.
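For scale, the crossover in the question can be located by comparing exponents: $n^{\log_2 n} < n^{80}$ exactly when $\log_2 n < 80$, i.e. when $n < 2^{80} \approx 1.2 \times 10^{24}$. A quick check:

```python
import math

def quasipoly_smaller(n):
    # n^(log2 n) < n^80  <=>  log2(n) < 80  <=>  n < 2**80
    return math.log2(n) < 80

print(quasipoly_smaller(10**20))  # True: n^(log2 n) is smaller here
print(quasipoly_smaller(2**81))   # False: n^80 wins only beyond n ~ 1.2e24
```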
Answer: Another perspective on "efficiency" is that polynomial time allows us to define a notion of "efficiency" that doesn't depend on machine models. Specifically, there's a variant of the Church-Turing thesis called the "effective Church-Turing Thesis" that says that any problem that runs in polynomial time on one kind of machine model will also run in polynomial time on another equally powerful machine model.
This is a weaker statement than the general C-T thesis, and is 'sort of' violated by both randomized and quantum algorithms, but it has not been violated in the sense of being able to solve an NP-hard problem in poly-time by changing the machine model.
This is ultimately the reason why polynomial time is a popular notion in theoryCS. However, most people realize that this does not reflect "practical efficiency". For more on this, Dick Lipton's post on 'galactic algorithms' is a great read. | {
"domain": "cs.stackexchange",
"id": 10262,
"tags": "algorithms, complexity-theory, terminology, efficiency"
} |
build of gazebo package not working | Question:
Hello,
I am running the latest Ros electric with Ubuntu 11.10. I am encountering some problems when trying to build gazebo and could really use some help.
The first problem when trying to build gazebo is:
/home/oro/ros/stacks/simulator_gazebo/gazebo/build/gazebo/server/audio_video/AudioDecoder.cc: In member function ‘int AudioDecoder::Decode(uint8_t**, unsigned int*)’:
/home/oro/ros/stacks/simulator_gazebo/gazebo/build/gazebo/server/audio_video/AudioDecoder.cc:105:49: error: ‘avcodec_decode_audio2’ was not declared in this scope
/home/oro/ros/stacks/simulator_gazebo/gazebo/build/gazebo/server/audio_video/AudioDecoder.cc: In member function ‘int AudioDecoder::SetFile(const string&)’:
/home/oro/ros/stacks/simulator_gazebo/gazebo/build/gazebo/server/audio_video/AudioDecoder.cc:152:7: warning: ‘int av_open_input_file(AVFormatContext**, const char*, AVInputFormat*, int, AVFormatParameters*)’ is deprecated (declared at /usr/include/libavformat/avformat.h:1050) [-Wdeprecated-declarations]
/home/oro/ros/stacks/simulator_gazebo/gazebo/build/gazebo/server/audio_video/AudioDecoder.cc:152:75: warning: ‘int av_open_input_file(AVFormatContext**, const char*, AVInputFormat*, int, AVFormatParameters*)’ is deprecated (declared at /usr/include/libavformat/avformat.h:1050) [-Wdeprecated-declarations]
/home/oro/ros/stacks/simulator_gazebo/gazebo/build/gazebo/server/audio_video/AudioDecoder.cc:172:59: error: ‘CODEC_TYPE_AUDIO’ was not declared in this scope
I tried just commenting those parts out, which seems to work so far.
The next problem are a lot of undefined references like the following ones:
rendering/libgazebo_rendering.so: undefined reference to `Ogre::Camera::getCameraToViewportRay(float, float) const'
libgazebo_server.so.0.10.0: undefined reference to `Ogre::AutoParamDataSource::AutoParamDataSource()'
rendering/libgazebo_rendering.so: undefined reference to `Ogre::MovableObject::~MovableObject()'
rendering/libgazebo_rendering.so: undefined reference to `Ogre::Pass::setDiffuse(float, float, float, float)'
The whole list would be too long to post here (if needed I can add it), but they are all undefined references to Ogre methods.
Is this a problem with my ogre installation or can someone tell me where my problem really is?
Thanks in Advance!
Originally posted by oro on ROS Answers with karma: 1 on 2011-11-14
Post score: 0
Answer:
As you're using Ubuntu, I'd suggest you use the prebuilt Debian packages to install gazebo:
sudo apt-get install ros-electric-simulator-gazebo
Originally posted by Steven Bellens with karma: 735 on 2011-11-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 7286,
"tags": "ros, gazebo, build, ogre"
} |
Why Fermi level doesn't change with temperature? | Question: We always draw f(E) vs. E crossing at $[E_F, 0.5]$ for any temperature. But a new temperature is a different steady state. So why the value of $E_F$ (Fermi level) doesn't change with temperature?
No actual reason was really given here.
Answer: I'm not quite sure what you mean by "crossing at $[E_F,0.5]$" but the Fermi level doesn't change with temperature by definition. The Fermi level is defined as the energy of the highest energy electrons at zero temperature when the system is in its ground state. It is a property of the system that is only dependent on the quantum mechanical eigenfunctions and the number of electrons and is independent of thermodynamics or statistical mechanics. When you increase the temperature of the system, you can excite electrons above the Fermi level. | {
"domain": "physics.stackexchange",
"id": 22917,
"tags": "solid-state-physics"
} |
Creating and updating link to instagram image or video | Question: I've been using the InstagramPhotoLink extension for Opera for some time but instagram broke it recently. I thought I could try to fix it... and I did.
The original extension is rather limited and it supports only a link to the first image and no videos so while fixing it I completely rewrote it in order to extend it.
It works by looking for an image or video element with a specific class and extracts the src attribute from it and puts it in a new a-element inside the media-container.
Switching images/videos is supported by the MutationObserver that updates the links when either a div or an attribute changes.
I came across a problem where the MutationObserver reacts too quickly, before the src attribute of the new image is set. I solved it by adding a small delay via setTimeout to wait until everything is updated.
function createMediaLink() {
let mo = new MutationObserver(mutations => {
mutations.forEach(mutation => {
if (mutation.type == "attributes") {
//console.log(mutation.attributeName);
addOrUpdateVideoLink();
}
if (mutation.type == "childList") {
//console.log(mutation.childList);
addOrUpdateImageLink();
}
});
});
return (
addOrUpdateImageLink(mo) ||
addOrUpdateVideoLink(mo)
);
}
function addOrUpdateImageLink(mo) {
let image = document.getElementsByClassName('FFVAD');
if (image.length == 0) {
return false;
}
image = image[0];
if (mo) {
// the nearest div to the image that stays there when switching images
var div = document.getElementsByClassName("rQDP3");
mo.observe(div[0], {
childList: true
});
}
setTimeout(() => {
addOrUpdateMediaLink(image.src);
}, 50);
return true;
}
function addOrUpdateVideoLink(mo) {
let video = document.getElementsByClassName('tWeCl');
if (video.length == 0) {
return false;
}
video = video[0];
if (mo) {
mo.observe(video, {
attributes: true
});
}
setTimeout(() => {
addOrUpdateMediaLink(video.src);
}, 50);
return true;
}
function addOrUpdateMediaLink(src) {
console.log(`src: ${src}`);
let a = document.getElementsByClassName('_open_');
if (a.length == 0) {
a = document.createElement("a");
a.className = "_open_";
a.innerHTML = 'Open in new tab';
a.target = "_blank";
// media container
document.getElementsByClassName("_97aPb")[0].appendChild(a);
} else {
a = a[0];
}
a.href = src;
}
createMediaLink();
I don't write javascript too often so this is the best I can do but I'm pretty sure this can still be improved. What do you think?
Answer:
addOrUpdateImageLink() and addOrUpdateVideoLink() are essentially the same, therefore it would be better to create just one function that would accept different parameters dependent on whether our media is an image or video;
In createMediaLink() you invoke addOrUpdate………Link() functions before you declare them. It is not wrong, but it worsens readability and code flow;
Setting 50 ms delay is not the best way to achieve what you are trying to do. Since I don't know how the DOM workings look like, I left it unchanged though;
Use strict equality operator (===) wherever possible — it performs no type conversion;
Once you pick which quotes you use ('' or ""), stick to it. Generally, single quotes option is more popular and standard.
Rewrite
const addOrUpdateMediaLink = src => {
if (!src) { return; }
console.log(`src: ${src}`);
let a = document.querySelector('._open_');
if (!a) {
a = document.createElement('a');
[a.className, a.textContent, a.target] = ['_open_', 'Open in new tab', '_blank'];
// media container
document.querySelector('._97aPb').appendChild(a);
}
a.href = src;
};
const addOrUpdate = (selector, observeSelector, attr, mo) => {
let media = document.querySelector(selector);
if (!media) { return false; }
if (mo) {
const obj = {};
obj[attr] = true;
mo.observe(document.querySelector(observeSelector), obj);
}
setTimeout(() => addOrUpdateMediaLink(media.src), 50);
return true;
};
const createMediaLink = () => {
const selectors = {
attributes: ['.tWeCl', '.tWeCl'],
childList: ['.FFVAD', '.rQDP3']
};
const mo = new MutationObserver(mutations => mutations.forEach(mutation => addOrUpdate(selectors[mutation.type][0])));
return addOrUpdate(...selectors.childList, 'childList', mo) || addOrUpdate(...selectors.attributes, 'attributes', mo);
};
createMediaLink(); | {
"domain": "codereview.stackexchange",
"id": 30936,
"tags": "javascript, ecmascript-6, instagram"
} |
Is it possible to do a NULL comparison in Relational Algebra? | Question: Please consider the following question given in a booklet:
I have read about databases from Database System Concepts - by Henry F. Korth, but never saw an instance where we could do a NULL comparison in relational algebra queries - unlike SQL queries.
(C) is given the correct option, but my question is:
Is the statement III given above, a valid Relational Algebra query?
Answer:
Is the statement III given above, a valid Relational Algebra query?
Good question; and you're going to get different answers depending on which exact version of RA you're talking about.
III has a restriction (sigma) S.b<>null. So it's presumably expecting that the result from the outer join will "fill values" in S.b where there is no S row matching an R row.
But whatever null is, it's not a 'value'. (Codd described it as a "mark".) So in SQL you can't write ... WHERE S.b <> NULL. (Or rather you can, but it will never select any rows: ... op NULL evaluates to unknown — never true — for any comparison op.)
In SQL you must write ... WHERE NOT ISNULL(S.b) (or some such) -- which shows that testing for null is not merely a comparison, because null is not merely a value.
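This behaviour is easy to verify with an in-memory database (sqlite3 from the Python standard library; the table sketches the question's S, with b nullable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE S (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO S VALUES (?, ?)", [(1, 10), (2, None)])

# A comparison with NULL is never true, so this selects no rows at all:
print(conn.execute("SELECT * FROM S WHERE b <> NULL").fetchall())      # []

# Testing for null requires a dedicated predicate, not a comparison:
print(conn.execute("SELECT * FROM S WHERE b IS NOT NULL").fetchall())  # [(1, 10)]
```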
You can choose different ways to try to make sense of nullable columns; but they all end up in some sort of gibberish like this. There simply is not any coherent semantics. I call it 'two-and-a-half value logic'.
So in any version of RA I would deal with, there's no nulls, no outer join, and III would not be a valid query.
There's a persistent confusion on stackxxx RA questions that SQL and RA are equivalent and that there's some mechanical translation between them. Well you can (as it seems in this case) add bogus semantics to RA so it's something like SQL. But why? It's putting lipstick on a pig. | {
"domain": "cs.stackexchange",
"id": 12536,
"tags": "databases, relational-algebra, tuple-relational-calculus"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.