anchor | positive | source |
|---|---|---|
Finding minimum spanning tree of a special form graph | Question: I'm trying to find an efficient algorithm that will find me the minimum spanning tree of an undirected, weighted graph of this particular form:
My idea was a recursive solution:
Suppose the algorithm receives the graph with n as a parameter,
if n==2 then:
take the lower cost path between v0-->v1-->v2 and v0-->v2
if n==1 then:
take the lower cost path between v0-->v2-->v1 (if v2 exists) and v0-->v1
else:
- recursively call the algorithm for n-1
- take the lower cost edge between v0-->vn and v(n-1)-->vn
I'm really not sure whether this algorithm is correct. I was trying to prove it by induction and got a bit stuck at the base case, which got me thinking maybe the algorithm is flawed.
Any suggestions would be much appreciated.
Answer: That recursive algorithm is wrong, although it was an interesting attempt to improve established algorithms on specialized graphs.
Here is a counterexample with $n=3$.
The graph above was drawn at https://graphonline.ru/en
All edges weigh 2 except that the rightmost two edges, $(v0,v3)$ and $(v2,v3)$ weigh 1. The recursive algorithm will pick only one edge from the rightmost two edges while an MST must include both of them.
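The counterexample can be checked with a short script (my own sketch; the edge set is reconstructed from the algorithm description, since the original figure is not shown here):

```python
# Kruskal's algorithm over an explicit edge list, to verify the
# counterexample numerically. Edge set assumed from the algorithm
# description: each v_i connects to v_0 and to v_{i-1}.
def mst_weight(n_vertices, edges):
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):  # edges are (weight, u, v)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

# All edges weigh 2 except (v0, v3) and (v2, v3), which weigh 1.
edges = [(2, 0, 1), (2, 1, 2), (2, 0, 2), (1, 0, 3), (1, 2, 3)]
print(mst_weight(4, edges))  # true MST weight: 1 + 1 + 2 = 4
# The recursive algorithm solves {v0, v1, v2} first (weight 2 + 2 = 4),
# then adds only the cheaper of (v0, v3), (v2, v3): 4 + 1 = 5 > 4.
```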
By the way, I cannot understand clearly the description of the algorithm in the case of $n=1$ and in the case of $n=2$. Fortunately (or unfortunately), that does not affect this answer since the counterexample depends only on the "else" branch of the algorithm, which is described clearly. | {
"domain": "cs.stackexchange",
"id": 11927,
"tags": "algorithms, graphs, correctness-proof, weighted-graphs, minimum-spanning-tree"
} |
Is the smell coming out from a portable dehumidifier safe? | Question: A portable dehumidifier requires the consumer to plug it into an outlet to regenerate its crystal. When the crystal is thus "regenerated", the indicator will change from pink color to its original blue color. However, I noticed that there is usually a colorless gas (or smell - it smells differently from the surrounding air) accompanying this change.
So, is the gas (or smell) from the portable dehumidifier (when plugged into an outlet for regeneration of its crystal) harmful?
Answer: Can you explain what the crystal is? Based on the common colors you describe and the "Regeneration" I would think that the "crystal" you are describing is just a basic Desiccant (like a silica gel, it's hygroscopic, it absorbs water). Most desiccants are pretty harmless (never heard of any issues with common ones used in consumer goods).
You really need to add more information to your description. What is the make (in case someone wants to look it up)?
The biggest danger I can think of with a portable dehumidifier is bacterial/fungal growth. Both of these can give off a smell; it could be that "regenerating the crystal" (which, if it's a desiccant like I think, would mean heating it) is actually "cooking" the growth, creating a smell. (But this is me trying to imagine what could potentially cause that smell, not what is happening)...
Again, not enough information to reach any conclusion. (But I do think you are describing a desiccant). | {
"domain": "chemistry.stackexchange",
"id": 63,
"tags": "toxicity"
} |
Checking String with is_numeric in C | Question: I am a newbie on Code Review Stack Exchange. I just wrote a C function that checks whether a given string is numeric. What do you think about my way of doing this? And could this be done in other ways?
#include <stdio.h>
#include <stdbool.h>
bool is_numeric(const char* str)
{
while (isspace(*str))
str++;
if (*str == '-' || *str == '+')
str++;
while (*str)
{
if (!isdigit(*str) && !isspace(*str))
return false;
str++;
}
return true;
}
int main(int argc, char** argv)
{
printf("%s\n", is_numeric("123436") ? "true" : "false"); // should be true
printf("%s\n", is_numeric("123.436") ? "true" : "false"); // should be false
printf("%s\n", is_numeric(" 567") ? "true" : "false"); // should be true
printf("%s\n", is_numeric("235 ") ? "true" : "false"); // should be true
printf("%s\n", is_numeric("794,347") ? "true" : "false"); // should be false
printf("%s\n", is_numeric("hello") ? "true" : "false"); // should be false
printf("%s\n", is_numeric("-3423") ? "true" : "false"); // should be true
}
Thanks in advance!
Answer: Bug: is_numeric("") returns true.
Bug: calling the is...() functions with negative values is undefined behavior (aside from is...(EOF)). This is possible when *str < 0, i.e., on platforms where char is signed.
Bugs: as reported by user3629249
Consider allowing hex such as "0xAbC".
Standard library string functions operate as if char is unsigned, even if char is signed. Recommend to do the same here.
What do you think about my way of doing this?
I like the idea of allowing trailing white-space when leading white-space is allowed.
#include <stdbool.h>
#include <ctype.h> // missing in OP code.
bool is_numeric_integer(const char *str) {
const unsigned char *ustr = (const unsigned char *) str;
while (isspace(*ustr)) {
ustr++;
}
if (*ustr == '-' || *ustr == '+') {
ustr++;
}
const unsigned char *begin = ustr;
while (isdigit(*ustr)) {
ustr++;
}
if (begin == ustr) {
return false; // no digits.
}
// If you want to allow trailing white-space
while (isspace(*ustr)) {
ustr++;
}
return *ustr == '\0'; // fail with trailing junk
} | {
"domain": "codereview.stackexchange",
"id": 38950,
"tags": "c"
} |
Method to count all comments in single external C# file | Question: I recently had an interview question:
Write a method that counts all comments in a single external file.
It was a timed question and I wanted to know if this is the best way to accomplish the task. I'm also open to improvements, advice, etc.
(I didn't pass the interview.)
class Program
{
static void Main(string[] args)
{
List<string> FileLines = new List<string>();
const string Filename = @"file.txt";
readInFile(Filename, FileLines);
CountComments(FileLines);
}
public static void CountComments(List<string> value)
{
int finalCount = 0;
List<string> something = value;
bool inQuotes = false;
for (int x = 0; x < something.Count; x++)
{
if (something[x] != @"""")
{
inQuotes = !inQuotes;
}
for (int y = 0; y < something[x].Length; y++)
{
if (something[x][y] == '/' && something[x][y + 1] == '/')
{
finalCount++;
break;
}
else if (something[x][y] == '/' && something[x][y + 1] == '*')
{
finalCount++;
break;
}
}
}
Console.WriteLine("Total number of comments is " + finalCount);
}
public static void readInFile(string Filename, List<string> FileLines)
{
using (StreamReader r = new StreamReader(Filename))
{
string Line;
while ((Line = r.ReadLine()) != null)
{
FileLines.Add(Line);
}
}
}
}
Answer: Naming
The first thing I notice, is inconsistency with naming.
Locals
FileLines
Filename
finalCount
something
inQuotes
x
y
r
Line
Locals should be camelCase, and should have meaningful names. x, y, r and something aren't meaningful names. Filename should be fileName.
Parameters
args
value
Filename
FileLines
Parameters are like locals - they should be camelCase. Filename should be fileName, and FileLines should be fileLines.
Methods
Main
CountComments
readInFile
Method/member names should be PascalCase. readInFile should be ReadInFile - and that's not exactly a great name either. You're reading the file into a List<string> ...that's passed as a parameter. Not the most intuitive approach.
Accessibility
Methods CountComments and readInFile have no business being public.
Constant vs Read-Only
This variable has no business being a const:
const string Filename = @"file.txt";
It should be a private static readonly string field, because a const should be strictly for something that has zero chance of ever changing in a future version. A file path is definitely something that can change; declaring it as a const is a semantic mistake.
Approach
readInFile should have a return type instead of having the side-effect of adding items to a List<string> parameter. While legal, a better way to convey that a method will have side-effects on a parameter, is to ask for an out parameter, or to pass the list by reference using the ref keyword. Your method takes its output through its input channel, and that doesn't feel right at all.
I would have gone with File.ReadAllLines instead, which returns a string[] array where each element is a line in the specified file, or better (given C# 4+), File.ReadLines, which returns an IEnumerable<string> instead: this means the whole readInFile method could have been inlined, and the FileName just specified as a hard-coded parameter value (or read from the args command-line arguments, if you wanted to get fancy).
As for the actual comment-finding logic...
if (something[x][y] == '/' && something[x][y + 1] == '/')
{
finalCount++;
break;
}
else if (something[x][y] == '/' && something[x][y + 1] == '*')
{
finalCount++;
break;
}
Can you spot the smell? Why do you need two branches, if both are going to do exactly the same thing?
I would have expected CountComments to do just that: count the comments. Yours is breaking SRP by also performing output - the method should've had an int return type, and just returned the result, leaving it up to the caller to decide what to do with it.
This one puzzles me:
List<string> something = value;
Why not work off value? Why introduce a new meaningless identifier (something? really?) to make a copy of another meaningless identifier (value would already be better as values, but lines or even content would have been sooooo much better!).
I like that you stop iterating characters in a line after you've found a comment.
But instead of nested loops, I would have written a function that takes a string and returns a bool, to encapsulate the logic that basically says "is there a comment anywhere in that string?" - assuming C# 3.5+, the whole method could have looked like this:
return lines.Count(HasComment);
If you were constrained to 2.0, you would've had to perform the loop explicitly - still (assuming a string[] lines parameter):
int result = 0;
for (int line = 0; line < lines.Length; line++)
{
if (HasComment(lines[line]))
{
result++;
}
}
return result;
And then depending on the time remaining you could have refined the HasComment logic, for example to skip lines that are inside a multiline comment. | {
"domain": "codereview.stackexchange",
"id": 14880,
"tags": "c#, parsing, interview-questions"
} |
On time-evolution of a quantum state | Question: Suppose I have a quantum system governed by a time-independent Hamiltonian $H$. Its eigenvectors $\{|\varphi_n\rangle\}_{n\in\mathbb{N}}$ form a complete orthonormal set (or basis) for the Hilbert space $\mathscr{H}$ hosting the state $|\psi\rangle$ of the system.
If ${\displaystyle \left|\psi (t)\right\rangle }$ is a state at time $t$, then
$${\displaystyle H\left|\psi (t)\right\rangle =i\hbar {\partial \over \partial t}\left|\psi (t)\right\rangle .}$$
The eigenvectors of $H$ at a given time $t_0$ are $\{|\varphi_n\rangle\}_{n\in\mathbb{N}}$ (assuming discrete, non-degenerate spectrum). Then
$$
|\varphi_n(t)\rangle = \exp\left(-\frac{i}{\hbar}E_nt\right)|\varphi_n\rangle \qquad \forall\;t>t_0
$$
Since $\{|\varphi_n\rangle\}_{n\in\mathbb{N}}$ form a complete orthonormal set (or basis) for the Hilbert space $\mathscr{H}$
any state $|\psi(t_0)\rangle \in \mathscr{H}$ at the time $t_0$ can be expressed as
$$
|\psi(t_0)\rangle= \sum_{n\in\mathbb{N}}c_n|\varphi_n\rangle
\qquad \text{at} \ t=t_0$$
What will the general expression for $|\psi(t)\rangle$ at time $t>t_0$ be?
Well, evidently $\{|\varphi_n(t)\rangle\}_{n\in\mathbb{N}}$ is still a basis for $\mathscr{H}$, at any time $t$. Therefore
$$
|\psi(t)\rangle= \sum_{n\in\mathbb{N}}c_n(t)|\varphi_n(t)\rangle
\qquad \text{at} \ t>t_0$$
But what actually turns out to be the case is slightly different:
$$
|\psi(t)\rangle= \sum_{n\in\mathbb{N}}c_n\exp\left(-\frac{i}{\hbar}E_n t\right)|\varphi_n\rangle
\qquad \text{at} \ t>t_0$$
In other words $c_n(t)=c_n$. Why do the coefficients remain the same?
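Here is a quick numerical sketch of my own (a toy two-level system with $\hbar=1$ and $H$ already diagonal in the eigenbasis) confirming that, once the $e^{-iE_nt/\hbar}$ phase is factored out of each component, the coefficients stay constant:

```python
import numpy as np

hbar = 1.0
E = np.array([0.3, 1.7])              # eigenvalues E_n (arbitrary units)
psi0 = np.array([0.6, 0.8 + 0j])      # coefficients c_n at t0 = 0

def evolve(psi, t):
    # U(t) = exp(-i H t / hbar); H is diagonal in the eigenbasis,
    # so U is diagonal with entries exp(-i E_n t / hbar).
    U = np.diag(np.exp(-1j * E * t / hbar))
    return U @ psi

for t in (0.5, 2.0, 7.3):
    psi_t = evolve(psi0, t)
    c_t = psi_t * np.exp(1j * E * t / hbar)   # strip the phase factor
    assert np.allclose(c_t, psi0)             # c_n(t) == c_n, as observed
print("coefficients constant")
```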
Answer: Write
$$
\vert\Psi(t)\rangle= \sum_n c_n(t)e^{-iE_nt/\hbar}\vert\varphi_n\rangle
$$
with $H\vert\varphi_n\rangle=E_n\vert\varphi_n\rangle$. Insert this into the
TDSE:
$$
i\hbar\sum_n \dot{c}_n(t)e^{-iE_nt/\hbar}\vert\varphi_n\rangle
+i\hbar \sum_n \frac{-iE_n}{\hbar}c_n(t)e^{-iE_nt/\hbar}\vert\varphi_n\rangle
= \sum_n c_n(t) E_n e^{-iE_nt/\hbar}\vert\varphi_n\rangle\, ,
$$
where $\dot{c}_m(t)=\frac{d}{dt}c_m(t)$ and where $H\vert\varphi_n\rangle=E_n\vert\varphi_n\rangle$ has been used.
Close with $\langle\varphi_m\vert$ and use orthogonality:
$$
i\hbar\dot{c}_m(t)e^{-iE_mt/\hbar}
+i\hbar \frac{-iE_m}{\hbar}c_m(t)e^{-iE_mt/\hbar}
= c_m(t) E_m e^{-iE_mt/\hbar}\, .
$$
The terms in $c_m(t)$ cancel out and you're left with
$$
i\hbar\dot{c}_m(t)e^{-iE_mt/\hbar}=0
$$
which implies $c_m$ is constant. | {
"domain": "physics.stackexchange",
"id": 92061,
"tags": "quantum-mechanics, schroedinger-equation, hamiltonian, time-evolution"
} |
Successive amplitudes in quantum mechanics | Question: In quantum mechanics we define amplitudes for events, like propagation from one point to some other point. Lets say that from a source to detector we have some amplitude (D/S).
But, lets now say that we have one mid point. Now we can say that the amplitude from S to M (midpoint) and then to D is (D/M)(M/S). Why multiply?
Answer: Because amplitudes are related to probabilities, and that's how the laws of probability work.
A fair die has a probability of $1/6$ of landing on any side. If you roll the die twice, the probability of rolling a 6 the first time and rolling a 6 the second time is $1/6\times 1/6=1/36$. Likewise, for a single roll, the probability of rolling a 5 or rolling a 6 is $1/6+1/6=1/3$. So you see that the laws of probability dictate the following:
The probability of one event happening and another independent event happening is the product of the probabilities of the two events happening.
The probability of one event happening or another mutually-exclusive event happening is the sum of the probabilities of the two events happening.
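The dice arithmetic above, as a quick sketch of my own (the amplitudes at the end are hypothetical values, just to show the magnitudes multiply):

```python
from fractions import Fraction
import cmath

p = Fraction(1, 6)        # probability of any single face
p_and = p * p             # a 6 on the first roll AND a 6 on the second
p_or = p + p              # a 5 OR a 6 on a single roll
print(p_and, p_or)        # 1/36 and 1/3, matching the text

# Amplitudes obey the same product rule, since |a||b| = |ab|:
a = 0.6 * cmath.exp(0.3j)   # hypothetical amplitude for S -> M
b = 0.5 * cmath.exp(1.1j)   # hypothetical amplitude for M -> D
assert abs(abs(a * b) ** 2 - abs(a) ** 2 * abs(b) ** 2) < 1e-12
```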
So the probability of the particle traveling from S to M to D is the probability of traveling from S to M and traveling from M to D, hence is equal to the product of the two individual probabilities. Since the squared magnitude of the amplitude is the probability, it should be straightforward to see that amplitudes should follow the same rule, since $|a||b|=|ab|$. | {
"domain": "physics.stackexchange",
"id": 61570,
"tags": "quantum-mechanics, newtonian-mechanics"
} |
Error Handler in PHP | Question: I am currently developing a Content Management System, and I have just completed the Error Handling system, which handles errors according to the user's settings.
I want to know every single thing I could improve to make it more flexible, robust, lightweight and fast. If there are better ways of doing things, I am ready to redo stuff. I just want comments on the ErrorHandling system; in this global.php you can see how I instantiate them. It would also be helpful if you could suggest renames for variables, etc. to make more sense. This project is going to be up on GitHub soon. Please be as strict as possible, and point out even the slightest flaw.
Global.php
<?php
/**
* @author Hassan Althaf
* @link www.HassanTech.org
* @package IDK CMS
*/
require_once("config.php");
if($configuration['errors']['show_errors'] == 0) {
error_reporting(0);
}
require_once("ErrorHandling/interface.ErrorHandlerMapper.php");
require_once("ErrorHandling/class.ErrorHandlerController.php");
require_once("ErrorHandling/class.ErrorHandlerLocalMapper.php");
require_once("ErrorHandling/class.ErrorHandlerDatabaseMapper.php");
$database = new mysqli(
$configuration['database']['host'],
$configuration['database']['username'],
$configuration['database']['password'],
$configuration['database']['database']
);
if($configuration['errors']['error_log_method'] == 'local') {
$errorHandlerMapper = new CMS\Core\ErrorHandling\ErrorHandlerLocalMapper();
} elseif($configuration['errors']['error_log_method'] == 'database') {
$errorHandlerMapper = new CMS\Core\ErrorHandling\ErrorHandlerDatabaseMapper($database);
}
$errorHandlerController = new CMS\Core\ErrorHandling\ErrorHandlerController($errorHandlerMapper);
if($configuration['errors']['enable_error_logs'] == 1) {
if($database->connect_errno) {
echo $database->connect_error;
}
}
I have 3 classes and 1 interface.
class.ErrorHandlerController.php:
<?php
namespace CMS\Core\ErrorHandling;
class ErrorHandlerController {
private $errorHandlerMapper;
public function __construct(ErrorHandlerMapper $errorHandlerMapper) {
$this->errorHandlerMapper = $errorHandlerMapper;
}
public function handleErrorLogging($error) {
return $this->errorHandlerMapper->log($error);
}
public function handleReturningErrorList() {
return $this->errorHandlerMapper->returnErrorList();
}
public function handleErrorListTruncating() {
return $this->errorHandlerMapper->truncateErrorList();
}
}
class.ErrorHandlerDatabaseMapper.php:
<?php
namespace CMS\Core\ErrorHandling;
use mysqli;
class ErrorHandlerDatabaseMapper implements ErrorHandlerMapper {
private $database;
public function __construct(mysqli $database) {
$this->database = $database;
}
public function log($error) {
$query = $this->database->prepare("INSERT INTO errors VALUES('', ?);");
$query->bind_param('s', $error);
$query->execute();
$query->close();
return true;
}
public function returnErrorList() {
$query = $this->database->prepare("SELECT error FROM errors");
$query->execute();
$query->bind_result($error);
while($query->fetch()) {
$errors[] = $error;
}
$query->close();
return $errors;
}
public function truncateErrorList() {
$query = $this->database->prepare("TRUNCATE TABLE errors");
$query->execute();
$query->close();
return true;
}
}
class.ErrorHandlerLocalMapper.php:
<?php
namespace CMS\Core\ErrorHandling;
class ErrorHandlerLocalMapper implements ErrorHandlerMapper {
public function log($error) {
return file_put_contents('errorLog.txt', file_get_contents('errorLog.txt') . $error . ";");
}
public function returnErrorList() {
return explode(';', file_get_contents('errorLog.txt'));
}
public function truncateErrorList() {
return file_put_contents('errorLog.txt', '');
}
}
interface.ErrorHandlerMapper.php:
<?php
namespace CMS\Core\ErrorHandling;
interface ErrorHandlerMapper {
public function log($error);
public function returnErrorList();
public function truncateErrorList();
}
Answer: Errorhandler or ErrorLogger?
As far as I can see, you wrote an ErrorLogger, not an error handler. You are not registering an error handler. The only thing these classes seem to do is logging and giving access to the errors - with the option to truncate them.
So in fact, you created a Logger that can delete all errors (now why would you do that?), so call it a Logger.
The good peeps over at the PSR standards have created a really nice interface for a Logger. Implementing that interface would be a big plus, since you could easily swap in a completely different logging engine if needed.
Code "errors"
You are using namespaces, that's good. But the location of your files is just illogical. Stick to PSR-4 or PSR-1. Standards exist because they have proved to be good. And instead of using all those requires, use an autoloader. Write a custom one or use a complete solution like the Symfony loader or even Composer.
In your database you are only saving the error, nothing more. Some extra information would be useful. Like when did it occur for instance.
If every method starts with the same word and if that word can also be found in the class name, you probably don't need it. Writing:
$errorHandler = new ErrorHandlerController;
$errorHandler->handleErrorLogging($error);
It feels weird. You say that it should handle logging errors. But you need an error to handle error logging. But the only thing it does is log an error. So what does the ErrorHandlerController do? It passes everything to the ErrorMapper. So why not use the error mapper?
$errorHandler = new ErrorHandlerDatabaseMapper;
$errorHandler->log($error);
Way more readable.
Then your method name: returnErrorList. Return to where? The client? What you are doing is 'getting'. So getErrors or getErrorsAsList. But what is a list? It's just an array.
The same goes for truncateErrorList. What is the List keyword doing there? And why is it returning something? This means that every piece of code that truncates all the errors needs to check for success, so throw an Exception.
When using the local error mapper problems will arise if the error message contains ';'.
And as a last remark. If you are creating an ErrorHandler and after you have created your error handler you write:
if($configuration['errors']['enable_error_logs'] == 1) {
if($database->connect_errno) {
echo $database->connect_error;
}
}
Your class failed. It is not handling errors.
Disclaimer: everything I wrote is here to help you; If you feel offended, I apologise. This is not what I intended. | {
"domain": "codereview.stackexchange",
"id": 9360,
"tags": "php"
} |
Isn't ROS_INFO_STREAM_THROTTLE's signature a little misleading | Question:
The definition for ROS_INFO_STREAM_THROTTLE looks like (question valid for all ROS_**_THROTTLE rosconsole macros ):
#define ROS_INFO_STREAM_THROTTLE(rate, args) ......
However, if I put ROS_INFO_STREAM_THROTTLE( 10, "whatever" ) it will be printed every 10 seconds. If I put ROS_INFO_STREAM_THROTTLE( 0.1, "whatever" ) it will be printed 10 times a second. So unlike ros::Rate, where rate means actions per second, the macro arg rate here means seconds between the actions. So it is more of a timeout, isn't it? Maybe it should be renamed like
#define ROS_INFO_STREAM_THROTTLE(rate, args) ......
??
Do you think that would be clearer to the users of the macro?
Originally posted by Wolf on ROS Answers with karma: 7555 on 2014-12-23
Post score: 0
Original comments
Comment by gvdhoorn on 2014-12-23:
I think you meant to say #define ROS_INFO_STREAM_THROTTLE(period, args) or something similar? Your 'renamed' statement is now identical to the original one.
Answer:
That does look confusing/misleading; period does sound more accurate. It's probably worth a ticket to update the documentation/prototype.
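A sketch of the semantics in question (my own illustration, not the actual rosconsole implementation): the first argument behaves as a period — the minimum number of seconds between successive prints — rather than a rate:

```python
import time

def throttled_print(state, period, msg):
    """Print msg at most once every `period` seconds; True if printed."""
    now = time.monotonic()
    if now - state.get("last", float("-inf")) >= period:
        state["last"] = now
        print(msg)
        return True
    return False

state = {}
throttled_print(state, 10.0, "whatever")   # prints: first call always fires
throttled_print(state, 10.0, "whatever")   # suppressed: under 10 s elapsed
```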
Originally posted by tfoote with karma: 58457 on 2014-12-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20416,
"tags": "ros, rosconsole"
} |
Finding net magnetic and electric force on charged particle | Question:
This is from my textbook, it is not an assigned problem, but I want to understand.
It says:
Consider the situation in the figure, in which there is a uniform electric field in the x direction and a uniform magnetic field in the y direction. For each example of a proton at rest or moving in the x, y, or z direction, what is the direction of the net electric and magnetic force on the proton at this instant?
I believe I need to use the equation
$$ F_{net}=(q\overrightarrow{E})+q(\overrightarrow{v} \times \overrightarrow{B}) $$
But I'm not sure exactly how. We've just started to learn about this, and I want to get a head start. Could anyone put me on the right track?
Answer: First of all, I think you might have written the equation for the net force incorrectly: $$\vec{F}_{net} = q\vec{E} + q(\vec{v} \times \vec{B})$$ The second term is $q(\vec{v} \times \vec{B})$ and not $q(\vec{E} \times \vec{B})$.
From the first term of the force equation ($q\vec{E}$), we can see that the electric field will try push the proton parallel to it (so the proton will be pushed a bit in the $\hat{x}$-direction).
Note that the second term of the equation ($q(\vec{v} \times \vec{B})$) is perpendicular to the velocity (direction of motion) of the proton.
You might remember this from earlier: when a force acts perpendicular to the motion of a body, the force acts centripetally - that is, the body starts revolving in a circle. Therefore, the magnetic force is a centripetal force.
So we have two ways the proton can be pushed: the electric field pushes it in the $\hat{x}$-direction, and the magnetic field (when the proton is moving) tries to get the proton to revolve about an axis along the magnetic field ($\hat{B}$), with the magnetic force always perpendicular to both $\hat{v}$ and $\hat{B}$ (remember, the second term is a cross product).
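To make the cases concrete, here is a small sketch of my own (unit fields, unit charge and speed — only the directions matter) evaluating $\vec{F} = q\vec{E} + q(\vec{v}\times\vec{B})$ for a proton at rest or moving along each axis:

```python
import numpy as np

q = 1.0
E = np.array([1.0, 0.0, 0.0])   # uniform E field along +x
B = np.array([0.0, 1.0, 0.0])   # uniform B field along +y

cases = {
    "at rest":   np.array([0.0, 0.0, 0.0]),
    "moving +x": np.array([1.0, 0.0, 0.0]),
    "moving +y": np.array([0.0, 1.0, 0.0]),
    "moving +z": np.array([0.0, 0.0, 1.0]),
}
for name, v in cases.items():
    F = q * E + q * np.cross(v, B)   # F = qE + q(v x B)
    print(f"{name:10s} F = {F}")
# at rest:   F along +x (electric force only)
# moving +x: F = x_hat + z_hat (since x_hat x y_hat = z_hat)
# moving +y: F along +x (v parallel to B, so no magnetic force)
# moving +z: F = 0 (z_hat x y_hat = -x_hat cancels qE)
```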
The proton gets pushed in both ways at the same time (in the first example, it doesn't move at first, but then starts to move as the electric field pushes it, so the magnetic force appears too). It might be a bit hard to visualize, but I'll talk about the first example: the proton is first pushed in the $\hat{x}$-direction (so its velocity becomes non-zero) and then undergoes the centripetal force (from the magnetic field), which causes it to start revolving in the x-z plane. However, just before it dips under the z-axis, the proton stops moving (the electric field has decelerated it), so the electric field accelerates it again, restarting the process while causing the proton to accumulate a net displacement along the z-axis (since it never rotates back to the origin).
(source: physics-animations.com)
(Note that the directions in the animation are not the same as in the second example).
Can you start to visualize how it will move in the other examples? | {
"domain": "physics.stackexchange",
"id": 20426,
"tags": "homework-and-exercises, electromagnetism, magnetic-fields, electric-fields"
} |
Using a singleton class to get and set program wide settings | Question: The following code works and does what I want, but I'd like to confirm I'm using this OOP concept correctly. I'm using the following class to get and set some configuration parameters for my program. It used $GLOBALS for this before, but I'm trying to use a singleton pattern for this instead.
Below is the Config class. Does anything jump out as aggressively stupid? Seeing as how I'm going to be including this in most files any way I figured it would make sense to throw the autoload function in there too- is this bad design?
<?php
//This class will be included for every file
//needing access to global settings and or
//the class library
spl_autoload_register(function($class){
require_once 'classes/' . $class . '.php';
});
final class Config {
private static $inst = null;
//Array to hold global settings
private static $config = array(
'mysql' => array(
'host' => '127.0.0.1',
'username' => 'root',
'password' => '123456',
'db' => NULL
),
'shell' => array(
'exe' => 'powershell.exe',
'args' => array(
'-NonInteractive',
'-NoProfile',
'-NoLogo',
'-Command'
)
)
);
public static function getInstance() {
if (static::$inst === null) {
static::$inst = new Config();
}
return static::$inst;
}
public function get($path=NULL) {
//work on a copy so repeated calls do not destroy the stored settings
$setting = static::$config;
if($path) {
//parse path to return config
$path = explode('/', $path);
foreach($path as $element) {
if(isset($setting[$element])) {
$setting = $setting[$element];
} else {
//return false if specified path not exist
return false;
}
}
}
//return all configs if path NULL
return $setting;
}
public function set($path=NULL,$value=NULL) {
if($path) {
//parse path to return config
$path = explode('/', $path);
//Modify global settings
$setting =& static::$config;
foreach($path as $element) {
$setting =& $setting[$element];
}
$setting = $value;
}
}
//Override to prevent duplicate instance
private function __construct() {}
private function __clone() {}
private function __wakeup() {}
}
I access it like so:
$aaa = Config::getInstance();
var_dump($aaa->get());
$aaa->set('mysql/host','zzzzzzzzzzzz');
var_dump($aaa->get());
I read a little bit about how singletons are less useful in php. I will research that further at some point, but for the time being I just want to make sure I'm doing this right.
Answer: There should be something about a Singleton class that requires that it allow and present a single, globally available, instance. Otherwise, the pattern is being misused.
The way your class is built, there's not really such a reason. You could have any number of instances, and they'd all work (though they'd all look like the same instance anyway).
As for recommendations to make things less weird...?
Use instance variables.
Everything is essentially global. (Make no mistake; hiding it in a class doesn't do much at all to change that. static is just how a lot of OO languages spell "global". :P) Your "instance" is just an interface to get and set that global stuff. Every "instance" has the same ability, since it gets and sets the same variables -- and in fact, any update through one "instance" will be reflected in every other. So new Config and Config::getInstance() really differ only in whether another instance is created. The caller could say one or the other, and get the exact same results.
If you insist on a singleton, moving the static variables into the instance would legitimize the existence of an instance at all, and provide a less dodgy reason to require exactly one authoritative instance. (Though it'd also prompt the question, "Why can't I create a separate configuration for testing?". Among others.)
Lose the public setter, or provide a way to disable it.
There's next to no encapsulation or data hiding going on here; the only thing that even remotely counts is your path stuff. Any caller can change global settings at will. This kind of "action at a distance" can cause all kinds of weirdness. It is the very reason mutable global state is discouraged, and set makes it even easier to abuse.
You should consider either getting rid of the setter, making it private (and thus disabling outside modification altogether), or adding some way for your setup code to disable it -- effectively "freezing" the instance -- once the configuration is established.
Use self::, not static::.
static (when used in place of a class name) uses PHP's "late static binding" to look up variables through the name of the called class rather than the one where the method lives. It's there to allow subclasses to override static members. But you've explicitly disallowed inheritance...so values will only ever be looked up in this class. static is thus wasteful, and a bit of a mixed message.
As for the autoloader, it's not exactly a mortal sin or anything to put that in the same file as the config class. Unless the classes are going to be moving around, though, their current location isn't really part of the config settings, is it?
I'd suggest making the autoloader a part of the startup code, unless the class files are going to move around (thus making the current location a part of the configuration). | {
"domain": "codereview.stackexchange",
"id": 6569,
"tags": "php, object-oriented, singleton"
} |
Mitigate floating-point numerical errors for very-low corner frequency low-pass filter with DSP | Question: I'm designing a low-pass filter for a digital signal processing application that ideally just passes a very small bandwidth above DC. I'm using an IIR biquad filter for this, where the coefficients are derived using the instructions here. A smaller bandwidth leads to a longer filtering time (larger time constant) but yields a more accurate result whereas a larger bandwidth can be filtered faster but is less accurate. Both of these are valid use cases.
Here's the code I've got
#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import freqz
# calculates filter coefficients using link above
# fc is corner frequency, fs is sample freq
def iir_lp_coeffs(fc, fs):
w0 = 2 * np.pi * fc / fs
q = 1 / np.sqrt(2)
alpha = np.sin(w0) / (2 * q)
b0 = (1 - np.cos(w0)) / 2
b1 = 1 - np.cos(w0)
b2 = b0
a0 = 1 + alpha
a1 = -2 * np.cos(w0)
a2 = 1 - alpha
b0 /= a0
b1 /= a0
b2 /= a0
a1 /= a0
a2 /= a0
a0 /= a0
return (
np.array([b0, b1, b2], dtype=np.float64),
np.array([a0, a1, a2], dtype=np.float64),
)
fc = 2 # low pass corner frequency (Hz)
fsample = 500e3
b, a = iir_lp_coeffs(fc, fsample)
w, h = freqz(b, a, worN=int(1e6), fs=fsample)
fig, ax = plt.subplots()
ax.plot(w, 20 * np.log10(abs(h)))
ax.set_ylim(-40, 10)
ax.set_xscale("log")
plt.show()
print(w[0:10])
print(abs(h[0:10]))
The current settings use 64-bit floating point with a cutoff frequency of $2\,\text{Hz}$. This all works fine, and I can even decrease the corner frequency substantially as long as I increase the granularity of freqz (with worN=).
For instance here's a plot of the gain response with the above code (note that I've cut the x axis off at the higher frequencies):
However, my actual application requires 32-bit floating point. When I do this (set dtype of iir_lp_coeffs to np.float32), I get non-unity gain in the passband. For instance, here's a gain response with fc=10 using 32-bit:
If I set the corner frequency higher, the gain response looks correct again (e.g., fc=100 looks fine).
Am I running up against the limit of what's possible with 32-bit FP? Or, is there another strategy that would allow me to get away with the lower precision of 32-bit? Have I correctly diagnosed this issue as a floating-point issue?
Answer: I think your issue might be coefficient quantization and filter topology. A direct form biquad has poor quantization effects about 0 and π radians. It’s easier to analyze such effects in fixed point, but even though floating point has a much larger range, it still has shortcomings. In particular, if you add a very small number to a very large number, the small number disappears because it can’t be aligned for the operation in the available number of mantissa bits. This can cause the order of operations to affect the result. For instance, where S is a small number and L is large, L - L + S = S, but L + S - L = 0.
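That absorption effect is easy to reproduce directly in single precision (a quick illustrative sketch; 1e8 and 1 are chosen so the small term falls below float32's spacing of 8 near the large one):

```python
import numpy as np

# L is large, S is small: in float32 the gap between adjacent
# representable values near 1e8 is 8, so L + S rounds back to L.
L = np.float32(1e8)
S = np.float32(1.0)

print((L + S) - L)  # 0.0 -- S was absorbed into L
print((L - L) + S)  # 1.0 -- cancelling the large terms first preserves S
```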
Udo Zolzer covers the differences between several filter structures in his book Digital Audio Signal Processing. I borrow direct form quantization effects on pole locations from the book:
See how precision is lost near 0 and π. Other filter topologies might have higher precision near 0, while being much worse near π, which may be a good tradeoff for uses such as yours. The Gold and Rader form has a very even distribution, it looks like a perfect grid.
Another simple and popular filter that has good quantization characteristics at low frequencies is the "Chamberlin" state variable filter. There are improved versions of this filter, as it has problems at higher frequencies (from about one-sixth of the sample rate and up), but the plain Chamberlin is very good at low frequencies, where you need it.
See my article on the Chamberlin state variable filter here:
The digital state variable filter
Zolzer presents modified Chamberlin structures here:
The Modified Chamberlin and Zölzer Filter Structures
In particular, see the graph of quantization effect near zero for the Chamberlin structure—very dense near zero, at the expense of poorer performance at high frequencies, compared to the direct form graph: | {
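As a concrete check of the cancellation near zero (my own sketch, not from the sources above): for fc = 10 Hz at fs = 500 kHz, cos(w0) is within half a float32 ulp of 1, so the (1 - cos(w0)) numerator terms of the biquad vanish entirely in single precision. Rewriting that term with the half-angle identity 1 - cos(w0) = 2 sin²(w0/2) sidesteps the subtraction of nearly equal values:

```python
import numpy as np

fc, fs = 10.0, 500e3
w0 = np.float32(2 * np.pi * fc / fs)

# Direct formula: cos(w0) ~ 1 - 7.9e-9 rounds to exactly 1.0 in float32,
# so the (1 - cos(w0)) term -- and with it b0, b1, b2 -- collapses to zero.
cos_w0 = np.float32(np.cos(np.float64(w0)))
naive = np.float32(1.0) - cos_w0

# Half-angle identity: no cancellation, full float32 relative accuracy
# (~1e-7) on a value of about 7.9e-9.
stable = np.float32(2.0) * np.sin(w0 / np.float32(2.0)) ** 2

print(naive, stable)
```

This only repairs the coefficient computation; the state inside the direct-form recursion still suffers the quantization shown in the pole-location plots, so a low-frequency-friendly topology remains the more robust fix.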
"domain": "dsp.stackexchange",
"id": 9442,
"tags": "lowpass-filter, infinite-impulse-response, floating-point"
} |
Measure how Straight/Smooth the Borders are Rendered in an Image | Question: I have two images:
I want to measure how straight/smooth the text borders are rendered.
The first image is rendered perfectly straight, so it deserves a quality measure of 1. On the other hand, the second image is rendered with a lot of varied curves (rough in a way), which is why it deserves a quality measure less than 1. How will I measure it using image processing or any Python function or any function written in other languages?
Clarification :
There are font styles that are rendered originally with straight strokes but there are also font styles that are rendered smoothly just like the cursive font styles. What I'm really after is to differentiate the text border surface roughness of the characters by giving it a quality measure.
I want to measure how straight/smooth the text borders are rendered in an image. Inversely, it can also be said that I want to measure how rough the text borders are rendered in an image.
Answer: I'd try a very "tinkery" approach here:
Erode the image, so that the black area is shrunk by a fixed radius of pixels from its border (say, 5px).
Dilate the resulting image by the same amount
Measure the amount of difference between the original and the processed image.
The idea is that a locally convex border doesn't suffer significantly through erosion (it's only shrunk), and that this erosion can be reverted by dilation.
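A minimal NumPy-only sketch of that recipe (binary masks where True is text; real code would use cv2.erode/cv2.dilate or scipy.ndimage and tune the radius to the stroke width):

```python
import numpy as np

def erode(mask, r=1):
    """Binary erosion with a (2r+1) x (2r+1) square structuring element."""
    h, w = mask.shape
    p = np.pad(mask, r, constant_values=False)
    out = np.ones_like(mask)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(mask, r=1):
    """Binary dilation with the same structuring element."""
    h, w = mask.shape
    p = np.pad(mask, r, constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def roughness(mask, r=1):
    """Fraction of foreground removed by morphological opening:
    0 for locally smooth borders, larger for rough ones."""
    opened = dilate(erode(mask, r), r)
    return (mask & ~opened).sum() / mask.sum()
```

On the questioner's scale, a quality measure could then be 1 - roughness(mask); the radius r and any pre-smoothing are tuning knobs, not part of the recipe above.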
"domain": "dsp.stackexchange",
"id": 8965,
"tags": "noise, python, image-processing, image-restoration"
} |
How to determine the number of significant figures of a feet/inch system? | Question: How many significant figures are there in $5~\mathrm{ft}\ 10~\mathrm{in}$?
I know that $5~\mathrm{ft}\ 10~\mathrm{in}= 5.8\bar{3}~\mathrm{ft}=70~\mathrm{in}$, so I am not sure exactly how many significant digits are in this number.
Answer: Significant figures is a rule of thumb, to avoid giving overly precise results when the input doesn't warrant it.
In actual scientific calculations - where keeping track of the precision matters - significant figures aren't used. Instead, explicit error ranges (confidence intervals) are used. Notation of these varies, but often takes the form of a plus-minus format (14.5±0.3 m). The concept of significant figures uses implicit error ranges, conventionally set to one half of the last significant digit.
When error ranges are explicitly given, the error of a calculation result is found with formal propagation of uncertainty, rather than the rule-of-thumb significant figure rules. Sometimes this is based on a model of the distribution of error around the central value (for example, assuming that the error is normally distributed around the central value with a given standard deviation), but most commonly the error is taken as the minimum and maximum of an acceptable range of values around the main measurement. Under this scheme, the propagation of uncertainty often takes the form of the "crank three times" method, whereby you run the calculation once for the central value, once for the minimum value, and once for the maximum value.
So if you were doing a calculation like $\pu{14.5\pm0.3 m} - \pu{1.5\pm0.2 m}$, you would calculate once for the central value ($\pu{14.5 m - 1.5 m = 13 m}$), once to give you the smallest value you could get with the range ($\pu{14.2 m - 1.7 m = 12.5 m}$), and once to give the largest value you could get with the range ($\pu{14.8 m - 1.3 m = 13.5 m}$). You'd then convert this into a range, normally rounding both values such that the error value only has 1-2 significant figures: $\pu{13\pm0.5 m}$
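That procedure is easy to mechanize (a small illustrative helper of my own; it evaluates every corner combination of the input ranges, which reduces to the "three cranks" when, as here, the function is monotonic in each argument):

```python
from itertools import product

def crank(f, *quantities):
    """Each quantity is a (central_value, error) pair. Evaluates f at the
    central values and at every corner combination of the input ranges,
    returning (central, minimum, maximum) of the result."""
    central = f(*(c for c, _ in quantities))
    corners = [f(*vals)
               for vals in product(*[(c - e, c + e) for c, e in quantities])]
    return central, min(corners), max(corners)

# The subtraction worked above: (14.5 +/- 0.3 m) - (1.5 +/- 0.2 m)
center, low, high = crank(lambda a, b: a - b, (14.5, 0.3), (1.5, 0.2))
# center ~ 13.0, low ~ 12.5, high ~ 13.5  ->  report as 13 +/- 0.5 m
```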
It turns out that the rules for significant figure calculations are just a quick-and-dirty rule of thumb of doing more-or-less the same thing. I'd encourage you to try it out with a number of examples. If you assume that the error in each quantity is one half of the last significant digit, and you do the crank three times error propagation, you should come up with a result that has an error that is about one half the last significant digit of the answer by the significant figures approach - give or take a digit.
So really what you want to do is to convert the measurement of $\pu{5 ft 10 in}$. into one which has an explicit error associated with it. For this measurement, I'm guessing you'd have an error of $\pu{\pm0.5 in}$. - I say this because rounding to the nearest 10 inches seems rather silly to me. (Feel free to work the calculations below with different assumption, say with ±5 in., if you disagree.) So you have a measurement of $\pu{5 ft 10 in. \pm 0.5 in}$ in other words $\pu{5 ft} (\text{exact}) + 10\pm\pu{0.5 in}. = \pu{5 ft} (\text{exact}) \times \pu{12 in./ft }(\text{exact}) + \pu{10\pm0.5 in.} = \pu{70\pm0.5 in.}$ Note that is the same as $70$. in with two significant digits.
Since we have explicit error bars, we can use the crank three times method to convert it to feet: $70~\mathrm{in.} / (12~\mathrm{in./ft}) = 5.8\bar{3}~\mathrm{ft}$ and $69.5~\mathrm{in.} / (12~\mathrm{in./ft}) = 5.791\bar{6}~\mathrm{ft}$ and $70.5~\mathrm{in.} / (12~\mathrm{in./ft}) = 5.875~\mathrm{ft}$. This is $5.8\bar{3}±0.041\bar{6}~\mathrm{ft}$. Convention holds that we round the error to one or two significant figures, and also round the main value to the same decimal place: $5.83±0.04~\mathrm{ft}$. -- This is basically what you'd have from $\pu{5.8 ft}$ with two significant digits.
The upshot is that you'd never use $\pu{5 ft 10 in.}$ in a calculation. You'll want to convert it to a single consistent unit first. When you do that, keep in mind the actual error associated with the measurement, and use that error as a gauge at how precise the converted quantity is.
(Note that if this is a question on a test or on a homework, ask your teacher how they think you should get the "correct" answer. When dealing with rule-of-thumb things like significant figures, intelligent people can vary in their interpretations. This means an answer valid in one context can be marked as wrong on the answer key because it doesn't match the formalism the teacher is using.) | {
"domain": "chemistry.stackexchange",
"id": 3971,
"tags": "significant-figures"
} |
Is the number of wavelengths of light spanning a distance invariant with respect to spacetime distortion? | Question: I was recently asked by a friend how the expansion of spacetime affects photons. I gave him what I feel is a satisfactory general response, but it got me wondering how, exactly to calculate this effect. It occurred to me that a simple way to conceptualize the change in energy was considering comparing the number of wavelengths that would have spanned the distance originally to the number it would take to span the expanded distance. Given the time contraction of a photon, it seemed reasonable to me to assume that the number of wavelengths spanning the distance traveled would be a constant. Unfortunately, I never had the opportunity to take a GR class and I don't know if this simple concept is in any way valid.
My question is, is this simple naive approach valid (even if it needs modification)? That is, given the relation between wavelength and energy
$$E = \frac{hc}{\lambda}$$
where $E$ is the energy, $h$ is Plank's constant, $c$ is the speed of light, and $\lambda$ is the original wavelength of the light, can I make this
$$E = \frac{nhc}{d}$$
where $d$ is the unexpanded distance of a path traveled by light, and $n$ is the number of wavelengths of the initial energy that would span the distance along the path traveled by light.
From this point, is it proper to calculate the fractional change in energy due to a fractional expansion of spacetime by
$$\frac{dE}{dd} = -\frac{nhc}{d^2} $$
where $dd$ is the change in distance due to expansion. Or, alternatively,
$$E = E_0 + nhc \left(\frac{1}{d_f} - \frac{1}{d_i}\right).$$
There have been several questions about the temperature of the CMB and how the expansion of spacetime reduces it's temperature. In particular the following closely address the topic at hand:
Have CMB photons "cooled" or been "stretched"?
Effect of expansion of space on CMB
though none of them addressed the issue in the way I have described. It is worth noting that in the article linked to in the first of these questions, the authors state "...'all wavelengths of the light ray are doubled' if the scale factor doubles." which seems to give the same proportionality described by my formulation.
My intuition tells me that this is an inappropriate method at least in part because there is no observational frame stationary with reference to the path traveled, but I sense that there are other problems as well.
I just realized that I could equally well refrain from using $n$ and instead just use $d_i/\lambda_0$ or $d_i E_0 /hc$ which gives
$$E = E_0 \left( \frac{d_i}{d_f} \right)$$
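(As a sanity check — mine, not from the referenced posts — the additive form and the ratio form above are algebraically identical once $n = d_i E_0/(hc)$ is substituted; numerically, with an arbitrary 500 nm photon and arbitrary path lengths:)

```python
h, c = 6.62607015e-34, 2.99792458e8   # SI values
E0 = h * c / 500e-9                   # energy of a 500 nm photon (example)
d_i, d_f = 1.0, 2.0                   # arbitrary path lengths (same units)

n = d_i * E0 / (h * c)                # wavelengths spanning the original path
E_additive = E0 + n * h * c * (1 / d_f - 1 / d_i)
E_ratio = E0 * (d_i / d_f)
# E_additive and E_ratio agree to machine precision
```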
Answer: The answer is yes for slow deformation compared to the frequency of light, and for cosmological expansion this is a perfect approximation. The relevant literature is old, it is the theory of adiabatic expansion of a box of light developed by Wilhelm Wien in 1898. The principle is that the number of wavelengths of light that fit in the box stays the same when you gradually move the wall outward, so that the light cools as the box expands in an adiabatic way. This led to Wien's displacement principle, and the theory of adiabatic invariance of the quantum number, which was the Einstein-Sommerfeld-deBroglie-Schrodinger path to quantum mechanics.
The way you can see that this is valid is to imagine that the two objects are two cosmological metal plates a certain distance apart. The number of wavelengths between these two plates is an integer and can't change smoothly; it can only jump up and down. The number of photons in between is also an integer, which can't change smoothly. So when the two walls move apart, the number of photons and the number of wavelengths stay fixed.
"domain": "physics.stackexchange",
"id": 1763,
"tags": "gravity, electromagnetic-radiation, space-expansion, cosmic-microwave-background, wavelength"
} |
Implement MoveIt! on Real Robot | Question:
Hi all,
Unfortunately I am really struggling with implementing MoveIt! on my custom hardware.
Basically I am stuck in connecting to an Action Client.
I have been successful in using a driver package to control my motor drivers (RoboClaw).
link from @bcharrow
Unfortunately in MoveIt am always greeted with:
[ INFO] [1410916361.912676781]: MoveitSimpleControllerManager: Waiting for /full_ctrl/joint_trajectory_action to come up
[ERROR] [1410916366.912904732]: MoveitSimpleControllerManager: Action client not connected: /full_ctrl/joint_trajectory_action
[ INFO] [1410916371.938914542]: MoveitSimpleControllerManager: Waiting for /gripper_ctrl/joint_trajectory_action to come up
[ INFO] [1410916376.939103684]: MoveitSimpleControllerManager: Waiting for /gripper_ctrl/joint_trajectory_action to come up
[ERROR] [1410916381.939338320]: MoveitSimpleControllerManager: Action client not connected: /gripper_ctrl/joint_trajectory_action
[ INFO] [1410916381.957750506]: Returned 0 controllers in list
[ INFO] [1410916381.963234975]: Trajectory execution is managing controllers
My Action Client is based on this:
link
Can anyone offer more of a step-by-step instruction on connecting my robot to MoveIt? I haven't found any such tutorial in resources such as:
link1
link2
Cheers,
Chris.
Originally posted by anonymous8676 on ROS Answers with karma: 327 on 2014-09-15
Post score: 3
Answer:
Okay so for anyone out there with the same problem here is my solution:
I start from the end of the MoveIt Setup Assistant. Note that I use non-standard joint names.
Just as a caveat: This is not the best solution, simply one that worked for me.
I hope it is helpful.
1. Set up your controllers.yaml such as the one below. This specifies which controller is responsible for which actuator.
2. Implement an ActionServer for each controller. An example implementation is given below.
3. Modify the moveit_planning_execution.launch for your system. Again, my launch file is below.
4. Write a launch file to start the action server and moveit_planning_execution.launch.
The moveit_planning_execution.launch:
<launch>
<!-- The planning and execution components of MoveIt! configured to run -->
<!-- using the ROS-Industrial interface. -->
<!-- Non-standard joint names:
- Create a file <robot_moveit_dir>/config/joint_names.yaml
controller_joint_names: [joint_1, joint_2, ... joint_N]
- Update with joint names for your robot (in order expected by rbt controller)
- and uncomment the following line: -->
<rosparam command="load" file="$(find <robot_moveit_dir>)/config/joint_names.yaml"/>
<!-- the "sim" argument controls whether we connect to a Simulated or Real robot -->
<!-- - if sim=false, a robot_ip argument is required -->
<arg name="sim" default="false" />
<arg name="robot_ip" unless="$(arg sim)" />
<!-- load the robot_description parameter before launching ROS-I nodes -->
<include file="$(find <robot_moveit_dir>)/launch/planning_context.launch" >
<arg name="load_robot_description" value="true" />
</include>
<!-- run the robot simulator and action interface nodes -->
<group if="$(arg sim)">
<include file="$(find industrial_robot_simulator)/launch/robot_interface_simulator.launch" />
</group>
<!-- publish the robot state (tf transforms) -->
<node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />
<include file="$(find <robot_moveit_dir>)/launch/move_group.launch">
<arg name="publish_monitored_planning_scene" value="true" />
</include>
<include file="$(find <robot_moveit_dir>)/launch/moveit_rviz.launch">
<arg name="config" value="true"/>
</include>
<include file="$(find <robot_moveit_dir>)/launch/default_warehouse_db.launch" />
</launch>
The controllers.yaml:
controller_list:
- name: /full_ctrl
action_ns: joint_trajectory_action
type: FollowJointTrajectory
default: true
joints: [j_gripper_wrist1, j_wrist1_arm1, j_arm1_arm2, j_arm2_wrist2, j_wrist2_gripper2]
The example action server (full_action_server.cpp):
//http://clopema.felk.cvut.cz/redmine/projects/clopema/wiki/Sending_trajectory_to_the_controller
//http://wiki.ros.org/actionlib_tutorials/Tutorials/SimpleActionServer%28ExecuteCallbackMethod%29
//http://wiki.ros.org/actionlib_tutorials/Tutorials/SimpleActionServer%28GoalCallbackMethod%29
#include <ros/ros.h>
#include <actionlib/server/simple_action_server.h>
#include <control_msgs/FollowJointTrajectoryAction.h>
#include <trajectory_msgs/JointTrajectory.h>
class RobotTrajectoryFollower
{
protected:
ros::NodeHandle nh_;
// NodeHandle instance must be created before this line. Otherwise strange error may occur.
actionlib::SimpleActionServer<control_msgs::FollowJointTrajectoryAction> as_;
std::string action_name_;
public:
RobotTrajectoryFollower(std::string name) :
as_(nh_, name, false),
action_name_(name)
{
//Register callback functions:
as_.registerGoalCallback(boost::bind(&RobotTrajectoryFollower::goalCB, this));
as_.registerPreemptCallback(boost::bind(&RobotTrajectoryFollower::preemptCB, this));
as_.start();
}
~RobotTrajectoryFollower(void)//Destructor
{
}
void goalCB()
{
// accept the new goal
//goal_ = as_.acceptNewGoal()->samples;
}
void preemptCB()
{
ROS_INFO("%s: Preempted", action_name_.c_str());
// set the action state to preempted
as_.setPreempted();
}
};
int main(int argc, char** argv)
{
ros::init(argc, argv, "action_server");
RobotTrajectoryFollower RobotTrajectoryFollower("/full_ctrl/joint_trajectory_action");
ros::spin();
return 0;
}
Regards,
Chris.
Originally posted by anonymous8676 with karma: 327 on 2014-09-21
This answer was ACCEPTED on the original site
Post score: 11
Original comments
Comment by anonymous8676 on 2014-09-21:
Thanks @Rabe for the advice.
Comment by db on 2020-08-12:
I don't see where the control is being done there. You are just receiving the action goals on this code? There isn't even a result?... | {
"domain": "robotics.stackexchange",
"id": 19409,
"tags": "ros, moveit, ros-control, ros-controllers, roscpp"
} |
Difference between 3D Camera(using IR projection) and Stereo Camera? | Question: I am currently busy with a final year project which requires me to track people walking through a doorway.
I initially thought this might be possible using a normal camera and some motion detection functions given in OpenCV. I have, however, come to the conclusion that the camera is mounted too low for this to work effectively. (Height shown in the image below.)
I have now been looking into using a 3D camera or a stereo camera to try and get around this problem.
I have seen similar examples where a Kinect(from Xbox 360) has been used to generate a depth map which is then processed and used to do the tracking, this was however done from a higher vantage point, and I found that the minimum operating range of the Kinect is 0.5m.
From what I have found, the Kinect uses an IR projector and receiver to generate its depth map, and have been looking at the Orbbec Astra S which uses a similar system and has a minimum working distance of 0.3m.
My question now:
What exactly would the difference be between the depth maps produced by a 3D camera that uses an IR projector and receiver, and a stereo camera such as the DUO/ZED type options?
I am just looking for some insight from people who may have used these types of cameras before.
On a side note, am I going about this the right way, or should I be looking into time-of-flight cameras instead?
----EDIT----:
My goal is to count the people moving into and out of the train doorway. I began this using OpenCV, initially with a background subtraction and blob detection method. This only worked for one person at a time, and only with a test video filmed from a higher vantage point, as a "blob-merging" problem was encountered, shown in the left image below.
So the next approach tested involved tracking motion vectors obtained from OpenCV's dense optical flow algorithm.
With this I was able to obtain motion vectors from the higher test videos and track them, as shown in the middle image below; because the motion vectors were densely packed and easily detected, it was simple to cluster them.
But when this same system was attempted with footage taken from inside a train at a lower height, it was unable to give a consistent output. My thinking on the reason for this is the low height of the camera: single-camera tracking is able to function when there is sufficient space between the camera and the top of the person, but as the distance is minimized, the area of the frame that the moving person takes up becomes larger and larger, and the space against which the person can be compared is reduced (or at least that is how I understand it). Below on the right you can see how the color of the person's clothing in the image is almost uniform; optical flow is therefore unable to detect it as motion in both cases.
I only started working with computer vision a few months ago so please forgive me if I have missed some crucial aspects.
From what I have seen in research, most commercial systems make use of 3D cameras, stereo cameras, or time-of-flight cameras, but I am unsure as to how the specifics of each of these would be best suited to my application.
Answer: "3D camera" is a generalisation that covers sensors that can observe a point-cloud or depth map of the scene they are observing.
Some 3D cameras use a projector (via various ways) to improve the resolution and/or reduce noise of these sensors.
The fact that the projector is IR is not important, it could be a red laser projector instead, but IR is used so that humans don't see the projection. Obviously if you use an IR projector you need an IR camera to observe the projection.
The projector will project a pattern that is either:
A texture (not necessarily a pattern) projected onto the scene to give better features for stereo vision by imposing texture on smooth areas (which are otherwise problematic for calculating stereo disparity).
A pattern that is coded, which is decoded to produces a depth map.
Various pattern types can be used (time-multiplexed, spatially-coded, direct-coded or a mixture)
A good example of both of these is the Intel Realsense F200 (coded-light) and the R200 (stereo vision).
Aside: You should start a separate question for your application, and narrow down the scope of your question. | {
"domain": "robotics.stackexchange",
"id": 2606,
"tags": "computer-vision, cameras, stereo-vision"
} |
Stopwatch that uses the abstract factory design pattern | Question: I have written code for a stopwatch that utilizes the abstract factory design pattern and would like to get some feedback on it, so be as harsh as you can.
Note: I used ctime instead of chrono because this is not meant for benchmarking. The code will later be used in a console game, and the format of the std::tm struct is easy to work with.
Also, note that the way I wrote the tests in Source.cpp will result in a small visual bug from time to time. To resolve it, lower the waiting time inside the do-while loop from std::chrono::seconds(1) to std::chrono::milliseconds(250) and increase the value of timer; this will increase the refresh rate.
EDIT: Note that this question is a follow-up to Watch that uses the abstract factory design pattern. Both use the same Console, Constants, and Digits libraries, but they accomplish different tasks.
StopWatch.h:
#ifndef STOP_WATCH
#define STOP_WATCH
#include"Digits.h"
#include<vector>
#include<memory>
#include<ctime>
class StopWatch
{
public:
virtual void printTime() = 0;
void setStopWatchXY(int x, int y);
bool countDownFrom(int seconds);
void updateTime();
void reset();
void start();
void stop();
void lap();
const std::vector<int>& getLapTimes() const;
int getElapsed() const;
virtual ~StopWatch() = default;
protected:
int m_watchXPos;
int m_watchYPos;
int m_seconds;
int m_minutes;
int m_hours;
private:
std::vector<int> m_lapTimes;
bool m_running{ false };
int m_elapsed{};
int m_beg;
std::time_t m_now;
void converter(int seconds);
void clearTime();
};
class DigitalStopWatch final : public StopWatch
{
public:
virtual void printTime() override;
explicit DigitalStopWatch(int x, int y)
{
setStopWatchXY(x, y);
}
};
class SegmentedStopWatch final : public StopWatch
{
public:
virtual void printTime() override;
explicit SegmentedStopWatch(int x, int y)
{
setStopWatchXY(x, y);
}
private:
Digit m_stopWatchDigits[6];
void printDigitAtLoc(Digit digArr[], int index, int x, int y) const;
void printColon(int x, int y);
void printSeconds();
void printMinutes();
void printHours();
void set(Digit digArr[], int startIndex, int unit);
void setDigitsToCurrentTime();
void setSeconds();
void setMinutes();
void setHours();
};
class Factory
{
public:
virtual std::unique_ptr<StopWatch> createStopWatch(int stopWatchXPos = 0, int stopWatchYPos = 0) const = 0;
};
class DigitalStopWatchFactory final : public Factory
{
virtual std::unique_ptr<StopWatch> createStopWatch(int stopWatchXPos = 0, int stopWatchYPos = 0) const override
{
return std::make_unique<DigitalStopWatch>(stopWatchXPos, stopWatchYPos);
}
};
class SegmentedStopWatchFactory final : public Factory
{
virtual std::unique_ptr<StopWatch> createStopWatch(int stopWatchXPos = 0, int stopWatchYPos = 0) const override
{
return std::make_unique<SegmentedStopWatch>(stopWatchXPos, stopWatchYPos);
}
};
#endif
StopWatch.cpp:
#include"StopWatch.h"
#include"Console.h"
#include<thread> //for this_thread::sleep_for
namespace
{
constexpr int maxTime { 356400 };
constexpr int digitPadding { 5 };
constexpr int secondsIndexStart{ 5 };
constexpr int minutesIndexStart{ 3 };
constexpr int timePadding { 2 };
constexpr int hoursIndexStart { 1 };
enum
{
First,
Second,
Third,
Fourth,
Fifth,
Sixth
};
}
/*|---STOP_WATCH_FUNCTIONS_START---|*/
/*|---PUBLIC_FUNCTIONS_START---|*/
void StopWatch::setStopWatchXY(int x, int y)
{
Console::setXY(x, y, m_watchXPos, m_watchYPos);
}
bool StopWatch::countDownFrom(int seconds)
{
if (seconds > maxTime) seconds = maxTime;
while (seconds >= 0)
{
converter(seconds);
printTime();
if (seconds > 0)
{
std::this_thread::sleep_for(std::chrono::seconds(1));
}//end of if
--seconds;
}//end of while
return true;
}
void StopWatch::updateTime()
{
long long curTimeInSec{ static_cast<long long>(std::time(&m_now)) - m_beg + m_elapsed };
if (curTimeInSec > maxTime) curTimeInSec = 0;
converter(curTimeInSec);
}
void StopWatch::reset()
{
m_running = false;
m_lapTimes.clear();
m_lapTimes.shrink_to_fit();
clearTime();
}
void StopWatch::start()
{
if (!m_running)
{
m_beg = static_cast<long long>(std::time(&m_now));
m_running = true;
}//end of if
}
void StopWatch::stop()
{
if (m_running)
{
m_elapsed += static_cast<long long>(std::time(&m_now)) - m_beg;
m_running = false;
}//end of if
}
void StopWatch::lap()
{
if (m_running)
{
stop();
m_lapTimes.emplace_back(m_elapsed);
clearTime();
start();
}//end of if
}
const std::vector<int>& StopWatch::getLapTimes() const
{
return m_lapTimes;
}
int StopWatch::getElapsed() const
{
return m_elapsed;
}
/*|----PUBLIC_FUNCTIONS_END----|*/
/*|---PRIVATE_FUNCTIONS_START---|*/
void StopWatch::converter(int seconds)
{
m_hours = seconds / 3600;
seconds = seconds % 3600;
m_minutes = seconds / 60;
m_seconds = seconds % 60;
}
void StopWatch::clearTime()
{
m_elapsed = 0;
m_seconds = 0;
m_minutes = 0;
m_hours = 0;
}
/*|----PRIVATE_FUNCTIONS_END----|*/
/*|----STOP_WATCH_FUNCTIONS_END----|*/
/*|---DIGITAL_STOP_WATCH_FUNCTIONS_START---|*/
/*|---PUBLIC_FUNCTIONS_START---|*/
/*|---VIRTUAL_FUNCTIONS_START---|*/
void DigitalStopWatch::printTime()
{
Console::gotoxy(m_watchXPos, m_watchYPos);
if (m_hours < 10) std::cout << '0';
std::cout << m_hours << ':';
if (m_minutes < 10) std::cout << '0';
std::cout << m_minutes << ':';
if (m_seconds < 10) std::cout << '0';
std::cout << m_seconds;
}
/*|----VIRTUAL_FUNCTIONS_END----|*/
/*|----PUBLIC_FUNCTIONS_END----|*/
/*|----DIGITAL_STOP_WATCH_FUNCTIONS_END----|*/
/*|---SEGMENTED_STOP_WATCH_FUNCTIONS_START---|*/
/*|---PUBLIC_FUNCTIONS_START---|*/
/*|---VIRTUAL_FUNCTIONS_START---|*/
void SegmentedStopWatch::printTime()
{
setDigitsToCurrentTime();
printHours();
printColon(m_watchXPos + 10, m_watchYPos);
printMinutes();
printColon(m_watchXPos + 22, m_watchYPos);
printSeconds();
}
/*|----VIRTUAL_FUNCTIONS_END----|*/
/*|----PUBLIC_FUNCTIONS_END----|*/
/*|---PRIVATE_FUNCTIONS_START---|*/
void SegmentedStopWatch::printDigitAtLoc(Digit digArr[], int index, int x, int y) const
{
digArr[index].setDigitXY(x + index * digitPadding, y);
digArr[index].printDigit();
}
void SegmentedStopWatch::printColon(int x, int y)
{
Console::putSymbol(x, y + 1, '.');
Console::putSymbol(x, y + 2, '.');
}
void SegmentedStopWatch::printSeconds()
{
printDigitAtLoc(m_stopWatchDigits, Fifth, m_watchXPos + timePadding * 2, m_watchYPos);
printDigitAtLoc(m_stopWatchDigits, Sixth, m_watchXPos + timePadding * 2, m_watchYPos);
}
void SegmentedStopWatch::printMinutes()
{
printDigitAtLoc(m_stopWatchDigits, Third, m_watchXPos + timePadding, m_watchYPos);
printDigitAtLoc(m_stopWatchDigits, Fourth, m_watchXPos + timePadding, m_watchYPos);
}
void SegmentedStopWatch::printHours()
{
printDigitAtLoc(m_stopWatchDigits, First, m_watchXPos, m_watchYPos);
printDigitAtLoc(m_stopWatchDigits, Second, m_watchXPos, m_watchYPos);
}
void SegmentedStopWatch::set(Digit digArr[], int startIndex, int unit)
{
if (unit < 10) digArr[startIndex - 1] = 0;
else digArr[startIndex - 1] = unit / 10;
digArr[startIndex] = unit % 10;
}
void SegmentedStopWatch::setDigitsToCurrentTime()
{
setHours();
setMinutes();
setSeconds();
}
void SegmentedStopWatch::setSeconds()
{
set(m_stopWatchDigits, secondsIndexStart, m_seconds);
}
void SegmentedStopWatch::setMinutes()
{
set(m_stopWatchDigits, minutesIndexStart, m_minutes);
}
void SegmentedStopWatch::setHours()
{
set(m_stopWatchDigits, hoursIndexStart, m_hours);
}
/*|----PRIVATE_FUNCTIONS_END----|*/
/*|----SEGMENTED_STOP_WATCH_FUNCTIONS_END----|*/
Source.cpp:
#include"StopWatch.h"
#include"Console.h" //for Console::gotoxy
#include<thread> //for this_thread::sleep_for
int main()
{
std::unique_ptr<Factory> segFact{ std::make_unique<SegmentedStopWatchFactory>() };
std::unique_ptr<Factory> digFact{ std::make_unique<DigitalStopWatchFactory>() };
std::unique_ptr<StopWatch> stoppers[2];
stoppers[0] = segFact->createStopWatch();
stoppers[1] = digFact->createStopWatch();
stoppers[0]->setStopWatchXY(5, 5);
//to test the second stopper simply change stoppers[0] to stoppers[1]
stoppers[0]->countDownFrom(12); //test countdown
//stoppers[0]->countDownFrom(60 * 60 * 60 * 60); //overflow test
/*
stoppers[0]->start();
while (1)
{
stoppers[0]->updateTime();
stoppers[0]->printTime();
std::this_thread::sleep_for(std::chrono::milliseconds(200)); //this is only responsible for the refresh rate, the lower the better
}//note that no waiting at all will result in visible reprinting.
*/
/*
int timer = 9;
stoppers[0]->start();
do
{
stoppers[0]->updateTime();
stoppers[0]->printTime();
std::this_thread::sleep_for(std::chrono::seconds(1));
}while (--timer);
stoppers[0]->stop(); //test stop
std::this_thread::sleep_for(std::chrono::seconds(3));
timer = 9;
stoppers[0]->start();
do
{
stoppers[0]->updateTime();
stoppers[0]->printTime();
std::this_thread::sleep_for(std::chrono::seconds(1));
}while (--timer);
*/
/*
int timer = 9;
stoppers[0]->start();
do
{
stoppers[0]->updateTime();
stoppers[0]->printTime();
std::this_thread::sleep_for(std::chrono::seconds(1));
}while (--timer);
stoppers[0]->reset(); //test reset
std::this_thread::sleep_for(std::chrono::seconds(3));
timer = 9;
stoppers[0]->start();
do
{
stoppers[0]->updateTime();
stoppers[0]->printTime();
std::this_thread::sleep_for(std::chrono::seconds(1));
}while (--timer);
*/
/*
int timer = 9;
stoppers[0]->start();
do
{
stoppers[0]->updateTime();
stoppers[0]->printTime();
std::this_thread::sleep_for(std::chrono::seconds(1));
} while (--timer);
stoppers[0]->lap(); //lap reset
stoppers[0]->stop();
std::this_thread::sleep_for(std::chrono::seconds(3));
timer = 9;
stoppers[0]->start();
do
{
stoppers[0]->updateTime();
stoppers[0]->printTime();
std::this_thread::sleep_for(std::chrono::seconds(1));
} while (--timer);
stoppers[0]->lap();
stoppers[0]->stop();
Console::gotoxy(0, 0);
std::cout << "m_elapsed: " << stoppers[0]->getElapsed() << '\n';
std::cout << "First lap: " << stoppers[0]->getLapTimes()[0] << '\n';
std::cout << "Second lap: " << stoppers[0]->getLapTimes()[1] << '\n';
*/
return 0;
}
Answer: Okay, after I worked my way through the original code, a few things have become clearer. Since I have never done programming with ncurses I was eager to try my hand at a better design.
Here it comes. It's a sketch only in the sense that I didn't create separate translation units. That is basically a tedious exercise and left for the reader.
However, it does implement stopwatch (including lap times and reset), countdown and a bonus "random timer" task.
Big ideas
I noticed that "Stopwatch" was a bit of a "God Object" antipattern. It does too many things (you can't really usefully do a countdown and a lap time simultaneously).¹
I reckoned it would be nice to simply have tasks (without any UI) that expose duration measurements, and views that can display them as they update. This is akin to a publish/subscribe pattern.
To demonstrate this, I made not only the views generic, but also the tasks.
The UI will be an interactive terminal application that supports the following short cut keys:
Select (multiple) views: [d]igital/[s]egmented
Launch task: [r]andom [c]ountdown s[t]opwatch [?]any
Control tasks: [l]ap [!]reset [k]ill
Other: [z]ap all views [q]uit/Esc
¹ I am aware of the usual implementation in hardware stopwatch devices where the operation modes form a state machine. I also realize that the implementation in code tried to mimic this. Unfortunately, not only did it fall short, it also conflated the timing logic with the UI side of things. Consider this answer a finger exercise on my part.
Code walkthrough
Includes
#include <iostream>
#include <sstream>
#include <iomanip>
We'll be using stream formatting to display hh:mm:ss times.
#include <chrono>
using namespace std::chrono_literals;
#include <memory>
#include <random>
#include <algorithm>
#include <set>
We'll be using standard library containers and algorithms.
#include <boost/signals2.hpp>
A little bit of Boost to aid in the publish/subscribe mechanism. It facilitates multi-cast subscription and automatic (RAII) disconnection.
#include <cursesapp.h>
#include <cursesp.h>
As in the last comment on the old answer, we'll be using ncurses to create an interactive terminal UI (TUI).
Preamble/general declarations
We start at the foundation, some general utilities:
namespace {
using Clock = std::chrono::steady_clock;
using TimePoint = Clock::time_point;
using Duration = Clock::duration;
using Durations = std::vector<Duration>;
Just some convenience shorthands, so we keep our code legible and easy to change.
struct HMS {
int hours, minutes, seconds;
std::string str() const {
std::stringstream ss;
ss << std::setfill('0') << std::setw(2) << hours << ':' << std::setw(2) << minutes << ':' << std::setw(2)
<< seconds;
return ss.str();
}
};
HMS to_hms(Duration duration) {
auto count = std::chrono::duration_cast<std::chrono::seconds>(duration).count();
int s = count % 60; count /= 60;
int m = count % 60; count /= 60;
int h = count;
return {h,m,s};
}
You had these conversions anyways, but here they are in "functional style" (specifically, side-effect free). You'll notice how much this uncomplicates the stopwatch tasks below.
using boost::signals2::signal;
using boost::signals2::scoped_connection;
}
(More shorthands)
There's a subtle point to maintaining Duration at full resolution internally. This means that no rounding errors are introduced when starting a new lap "halfway" a second.
The "Task" hierarchy
struct Task {
virtual ~Task() = default;
virtual void tick() {}
};
The foundation of our task class hierarchy. Note that the most generic base class doesn't even presuppose any published events, which makes the framework extensible to tasks other than time measurement. Our time-related tasks share the following abstract base:
struct TimerTask : Task {
signal<void(Duration, Durations const&)> updateEvent;
virtual void tick() = 0;
};
As you can see, we promise to publish events carrying one or more durations. These can be subscribed to by any view capable of displaying one or more durations.
Implementing the various timer operations on this is peanuts. Let's for example do a random durations generator in 3 lines of code:
struct RandomTimeTask : TimerTask {
virtual void tick() {
updateEvent(rand()%19000 * 1s, {});
}
};
That might seem a cheap example, but countdown is really not much different:
struct CountdownTask : TimerTask {
CountdownTask(Duration d) : _deadline(Clock::now() + d) { }
virtual void tick() {
updateEvent(std::max(Duration{0s}, _deadline - Clock::now()), {});
}
private:
TimePoint _deadline;
};
Even stopwatch isn't harder, before we add lap times:
struct StopwatchTask : TimerTask {
StopwatchTask() : _startlap(Clock::now()) { }
virtual void tick() {
updateEvent(Clock::now() - _startlap, {});
}
private:
TimePoint _startlap;
};
Adding laptimes and reset() the full-featured stopwatch task becomes:
struct StopwatchTask : TimerTask {
StopwatchTask() : _startlap(Clock::now()) { }
virtual void tick() {
updateEvent(elapsed(), _laptimes);
}
void lap() {
_laptimes.push_back(elapsed());
_startlap = Clock::now();
}
void reset() {
_startlap = Clock::now();
_laptimes.clear();
}
private:
Duration elapsed() const { return Clock::now() - _startlap; }
std::vector<Duration> _laptimes;
TimePoint _startlap;
};
View hierarchy
We introduce one more parameter object:
struct Bounds {
int x, y, lines, cols;
};
And then we get down to business: Views will be things that have a TUI element (a panel):
struct View {
View(Bounds const& bounds) : _window(bounds.lines, bounds.cols, bounds.x, bounds.y) { }
virtual ~View() = default;
protected:
mutable NCursesPanel _window;
};
Without delay, let's present the abstract base TimerView that subscribes to any TimerTask:
struct TimerView : View {
TimerView(Bounds const& bounds) : View(bounds) { }
void subscribe(TimerTask& task) {
_conn = task.updateEvent.connect([this](Duration const& value, Durations const& extra) {
//if (value == 0s) _window.setcolor(1);
update(value, extra);
});
}
private:
scoped_connection _conn;
virtual void update(Duration const& value, Durations const& extra) const = 0;
};
I haven't figured out how to configure colors in Curses, but you can see how simple it would be to add generic behaviour to timer views there.
A simple view to implement would be the digital timer view. We already have the to_hms() and HMS::str() utilities. Let's decide that the view should show the total elapsed time regardless of lap-times and it should indicate how many laps have been recorded:
struct DigitalView final : TimerView {
using TimerView::TimerView;
private:
void update(Duration const& value, Durations const& extra) const override {
_window.erase();
auto total = std::accumulate(extra.begin(), extra.end(), value);
auto hms = to_hms(total).str();
hms += "[#" + std::to_string(extra.size()) + "]";
int x = 0;
for (auto ch : hms)
_window.CUR_addch(0, x++, ch);
_window.redraw();
}
};
Note that I have forgone a Console-like abstraction here. Instead I used _window methods directly, which does tie the implementation to Curses. You may want to abstract this again, probably using a buffered approach like in my other answer.
The SegmentView isn't actually much more complicated. I dropped the bitset<> in favour of more obviously readable code. It has the added benefit of making the code short (except for the constants of course).
Functionally, we show
the total elapsed time in a frame caption
the current lap time in the 7-segment large display
a running list of previous lap-times on the right hand side:
struct SegmentView final : TimerView {
using TimerView::TimerView;
private:
void update(Duration const& value, Durations const& extra) const override {
_window.erase();
// total times in frame caption
{
auto total = std::accumulate(extra.begin(), extra.end(), value);
_window.frame(to_hms(total).str().c_str());
}
// big digits show current lap
{
auto hms = to_hms(value).str();
int digits[6] {
hms[0]-'0', hms[1]-'0',
hms[3]-'0', hms[4]-'0',
hms[6]-'0', hms[7]-'0',
};
auto xpos = [](int index) { return 4 + (index * 5); };
int index = 0;
for (auto num : digits)
printDigit(num, xpos(index++), 1);
for (auto x : {xpos(2)-1, xpos(4)-1})
for (auto y : {2, 4})
_window.CUR_addch(y, x, '.');
}
// previous laptimes to the right
{
// print lap times
int y = 1;
for (auto& lap : extra) {
int x = 35;
_window.CUR_addch(y, x++, '0'+y);
_window.CUR_addch(y, x++, '.');
_window.CUR_addch(y, x++, ' ');
for (auto ch : to_hms(lap).str())
_window.CUR_addch(y, x++, ch);
++y;
}
}
_window.redraw();
}
void printDigit(int num, int x, int y) const
{
static char const* const s_masks[10] = {
" -- "
"| |"
" "
"| |"
" -- ",
" "
" |"
" "
" |"
" ",
" -- "
" |"
" -- "
"| "
" -- ",
" -- "
" |"
" -- "
" |"
" -- ",
" "
"| |"
" -- "
" |"
" ",
" -- "
"| "
" -- "
" |"
" -- ",
" -- "
"| "
" -- "
"| |"
" -- ",
" -- "
" |"
" "
" |"
" ",
" -- "
"| |"
" -- "
"| |"
" -- ",
" -- "
"| |"
" -- "
" |"
" -- ",
};
if (num < 0 || num > 9)
throw std::runtime_error("Invalid digit: must be in the range 0..9");
for (auto l = s_masks[num]; *l; l += 4) {
for (auto c = l; c < l+4; ++c)
_window.CUR_addch(y, x++, *c);
++y; x-=4;
}
}
};
Behold, a thing of beauty. One should never underestimate the value of self-explanatory code (if only the constants here).
The Main Application
All that remains to be done is the demo application itself. It sets up some factories, and starts an input event loop to receive keyboard shortcuts. Handling them is pretty straightforward.
The stopwatch-specific operations ([l]ap and [!]reset) are the only mildly complicated ones because they will have to filter for any running tasks that might be stopwatch tasks.
Launching a task without selecting one or more views to connect up will result in a beep() and nothing happening.
Note how clearing the _tasks or _views automatically destroys the right subscriptions (due to the use of scoped_connection).
struct DemoApp : NCursesApplication {
using TaskPtr = std::unique_ptr<TimerTask>;
using ViewPtr = std::unique_ptr<TimerView>;
using TaskFactory = std::function<TaskPtr()>;
using ViewFactory = std::function<ViewPtr()>;
using ViewFactoryRef = std::reference_wrapper<ViewFactory const>;
int run() {
_screen.useColors();
_screen.centertext(lines - 4, "Select (multiple) views: [d]igital/[s]egmented");
_screen.centertext(lines - 3, "Launch task: [r]andom [c]ountdown s[t]opwatch [?]any");
_screen.centertext(lines - 2, "Control tasks: [l]ap [!]reset [k]ill");
_screen.centertext(lines - 1, "Other: [z]ap all views [q]uit/Esc");
::timeout(10);
// replace with set<> to disallow repeats
std::multiset<ViewFactoryRef, CompareAddresses> selected_views;
while (true) {
TaskPtr added;
switch (auto key = ::CUR_getch()) { // waits max 10ms due to timeout
case 'r': added = makeRandomTimeTask(); break;
case 'c': added = makeCountdownTask(); break;
case 't': added = makeStopwatchTask(); break;
case 'l':
for (auto& t : _tasks)
if (auto sw = dynamic_cast<StopwatchTask*>(t.get()))
sw->lap();
break;
case '!':
for (auto& t : _tasks)
if (auto sw = dynamic_cast<StopwatchTask*>(t.get()))
sw->reset();
break;
case '\x1b': case 'q': return 0;
case 'k': _tasks.clear(); break;
case 'z': _views.clear(); break;
case 'd': selected_views.insert(makeDigitalView); break;
case 's': selected_views.insert(makeSegmentView); break;
default:
for (auto& d : _tasks) d->tick();
}
if (added) {
if (selected_views.empty())
::beep();
else {
for (ViewFactory const& maker : selected_views) {
_views.push_back(maker());
_views.back()->subscribe(*added);
}
selected_views.clear();
_tasks.push_back(std::move(added));
}
}
::refresh();
}
}
The remainder is the setup for the DemoApp, which includes rather boring stuff like generating random view positions.
private:
std::mt19937 prng{ std::random_device{}() };
NCursesPanel _screen;
int const lines = _screen.lines();
int const cols = _screen.cols();
int ranline() { return std::uniform_int_distribution<>(0, lines - 9)(prng); };
int rancol() { return std::uniform_int_distribution<>(0, cols - 9)(prng); };
TaskFactory
makeRandomTimeTask = [] { return TaskPtr(new RandomTimeTask); },
makeCountdownTask = [] { return TaskPtr(new CountdownTask(rand()%240 * 1s)); },
makeStopwatchTask = [] { return TaskPtr(new StopwatchTask()); },
taskFactories[2] {
makeRandomTimeTask,
makeCountdownTask,
};
ViewFactory
makeDigitalView = [&] { return std::make_unique<DigitalView>(Bounds { ranline(), rancol(), 1, 13 }); },
makeSegmentView = [&] { return std::make_unique<SegmentView>(Bounds { ranline(), rancol(), 7, 48 }); },
viewFactories[2] = { makeDigitalView, makeSegmentView };
std::vector<ViewPtr> _views;
std::vector<TaskPtr> _tasks;
struct CompareAddresses { // in case you wanted to make selected_views unique
template <typename A, typename B>
bool operator()(A const& a, B const& b) const { return std::addressof(a.get())<std::addressof(b.get()); };
};
};
static DemoApp app;
You can see a "live" demo here: | {
"domain": "codereview.stackexchange",
"id": 26339,
"tags": "c++, object-oriented, design-patterns, datetime, polymorphism"
} |
Are graphic cards using the Map-Reduce model when performing typical gaming rendering | Question: I know that the Map-Reduce model is a common model of parallel computation, perhaps some sort of standard way(?). I also know that graphic cards are specifically built for parallel computation. However when reading about the graphic pipeline I am not sure if there is a point at which the Map Reduce model is used. If not, what kind of parallel computations (just the general model of them so I could read more about it) are done during the typical rendering process of modern graphics cards? If it is used, where would that be on the below pipeline?
Answer: This model isn't appropriate for rendering games, so it's not used in game engines. However, it can be implemented on GPUs. E.g., if you were to implement Spark / relational algebra on GPUs, it could be part of the implementation.
"domain": "cs.stackexchange",
"id": 14259,
"tags": "parallel-computing, graphics, mapreduce"
} |
Calculating the divergence of static electric field without making the dependency argument? | Question: This question is a follow up on this old post here Divergence of electric field
(So this may seem dumb...)
When calculating the divergence of the field at a point through the following equation, where $\left(\vec{\gamma}=\vec{r}-\vec r'\right)$:
$$\nabla_r \cdot \mathbf E(\vec r)=\int_{\text{all space}} \nabla_r\cdot\left(\frac{\vec{\gamma}}{\gamma^2}\right) \rho({\vec r'}) d\tau'$$
Suppose I just mindlessly calculate the integrand by the product rule without considering the dependency of $\nabla_r$, then
$$\nabla_r\cdot\left[\frac{\vec{\gamma}}{\gamma^2}\rho({\vec r'})\right]= \left(\frac{\partial\frac{\vec{\gamma} }{\gamma^2}}{\partial x}\vec{i}+\frac{\partial\frac{\vec{\gamma} }{\gamma^2}}{\partial y}\vec{j}+\frac{\partial\frac{\vec{\gamma}}{\gamma^2}}{\partial z}\vec{k}\right)\cdot\rho({\vec r'}) + \left(\frac{\vec{\gamma} }{\gamma^2}\right)\cdot\left(\frac{\partial\rho(\vec r')}{\partial x}\vec{i}+\frac{\partial\rho(\vec r')}{\partial y}\vec{j}+\frac{\partial\rho(\vec r')}{\partial z}\vec{k}\right)$$
To get the correct result, the last term of the above equation needs to be zero
$$\frac{\partial\rho(\vec r')}{\partial x}\vec{i}+\frac{\partial\rho(\vec r')}{\partial y}\vec{j}+\frac{\partial\rho(\vec r')}{\partial z}\vec{k}=0$$
Let $\rho(\vec r')=|\rho(\vec r')|\cdot\vec r_u'$, where $\vec r_u'$ is the unit vector of $\vec r'$, then
$$\frac{\partial|\rho(\vec r')|}{\partial x}\vec{i}~\vec{r_u}'+\frac{\partial|\rho(\vec r')|}{\partial y}\vec{j}~\vec{r_u}'+\frac{\partial|\rho(\vec r')|}{\partial z}\vec{k}~\vec{r_u}'=0$$
At this point, how am I supposed to show the above equation is zero if I force myself to just calculate it without making the dependency argument?
Answer: The divergence is calculated with respect to $\vec r$, not $\gamma$. $\gamma$ is defined through a dummy variable so it cannot be used outside the RHS integral. Now, it becomes obvious that $\nabla_r \rho(\vec r') =0$, because $\vec r$ and $\vec r'$ are independent variables. | {
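Spelled out, with $\vec r'$ held fixed (and reading $\frac{\vec\gamma}{\gamma^2}$ as the Coulomb kernel $\frac{\hat\gamma}{\gamma^2}$ with constant prefactors dropped, as in the question), the product rule collapses to a single term:
$$\nabla_r\cdot\left[\frac{\hat{\gamma}}{\gamma^2}\,\rho(\vec r')\right]=\rho(\vec r')\,\nabla_r\cdot\frac{\hat{\gamma}}{\gamma^2}+\frac{\hat{\gamma}}{\gamma^2}\cdot\underbrace{\nabla_r\,\rho(\vec r')}_{=\,0}=4\pi\,\delta^3(\vec{\gamma})\,\rho(\vec r'),$$
so the integral over $d\tau'$ immediately yields $\nabla_r\cdot\vec E\propto\rho(\vec r)$.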
"domain": "physics.stackexchange",
"id": 86034,
"tags": "electrostatics, electric-fields, differentiation, gauss-law, calculus"
} |
Why do quantum eraser and original double-slit setups produce different patterns on the detector? | Question: A similar question has been asked here, but there was no answer: Why Delayed choice quantum eraser doesn't produce two lines
I learned that in the original double-slit experiment, an interference pattern is observed after firing many photons on the detector.
I also learned that in Delayed-choice quantum eraser experiment, when you look at the detector, what you initially see is just noise. Only after separating out the photons according to which detector detected their entangled pair, you start to see interference patterns on D1 and D2 detectors.
What I'm wondering is what causes the interference pattern seen on the main detector in the original double-slit experiment to turn into noise in the quantum eraser experiment. I am suspecting it's because the beam splitter BSC but I don't understand why.
Let's consider a simpler experiment than quantum eraser, removing the two initial splitters BSa and BSb. The photons travel from both paths and which-way information is deleted when they pass through beam splitter BSC.
What do we see in the detector D0? Interference pattern or noise? If noise, why? In the original experiment we saw interference pattern, what caused the difference in this case? Since we removed the information, I would expect the result to be equivalent to the one in the original experiment.
In other words, I don't see a difference between this experiment and the original experiment, other than the fact that we have entangled photons here which don't provide any which-path information, thanks to BSC. Why is the outcome different?
Thanks!
Answer: There's no way to affect the observable behavior of the "upper" photons by manipulating the "lower" photons after the initial generation of the entangled pairs in the Glan-Thompson prism (if there were, you could use it to send a signal faster than light).
There's no interference pattern because the lower photons contain which-path information about the upper photons. Even if you never measure that information, its presence in the world prevents visible interference. There's no way to destroy the information except by reversing the process that originally created the entanglement, which is impossible in practice in this setup.
This answer has a more detailed discussion of a different version of this experiment. The key point is the same: if information about the light is preserved elsewhere in any form then there's no interference, and that information can't be and isn't destroyed, despite the name "quantum eraser". | {
"domain": "physics.stackexchange",
"id": 82451,
"tags": "quantum-mechanics, quantum-entanglement, double-slit-experiment, quantum-eraser"
} |
Body falling through Earth | Question: Imagine a body with mass $m$ moves through a pipe through the centre of earth. The gravitational force is given by $\vec{F}=-C \vec{r}$ where $C$ is constant. I want to determine the 1D equation of motion of the body and solve it for the case that the body starts at the surface of the earth with initial velocity $v_0=0$.
My attempt: $F=m\ddot{r}=-Cr$ so $r=Ae^{t\sqrt{-C/m}}+Be^{-t\sqrt{-C/m}}$. But I think this isn't right, shouldn't the body stay at the centre of the earth? Because I think the attraction is largest there.
Answer: The full mass of the earth doesn't contribute to the gravitational field. You can use Gauss' Law for the gravitational field to get the right dependence.
Here $\vec{a}$ is the acceleration due to gravity.
$\nabla\cdot\vec{a}=-4\pi G\rho$
$a4\pi r^2=-4\pi G (\frac{3M}{4\pi R^3})(\frac{4\pi r^3}{3})=\frac{-4\pi GMr^3}{R^3}$
$\vec{a}=\frac{-GMr}{R^3}\hat{r}$
$m\frac{d^2\vec{r}}{dt^2}=\frac{-GMm}{R^3}\vec{r}$
So you have simple harmonic motion.
$\vec{r}=R\cos(\omega t)\,\hat{r}$ where $\omega^2=\frac{GM}{R^3}$.
Note that the angular velocity and the cube of the radius are related as in Kepler's Third Law ($\omega^2 R^3 = GM$) despite the absence of an inverse-square force law. That's a good mnemonic for remembering it. It can be shown that the straight-line back-and-forth round-trip time equals the period of a full orbit about the earth just above its surface, approximately 84 minutes, so just 42 minutes to the opposite side of the Earth.
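As a quick numerical check of that 84 minutes (treating $GM/R^2=g\approx 9.8\,\mathrm{m/s^2}$ and $R\approx 6.37\times 10^6\,\mathrm m$ as given):
$$T=\frac{2\pi}{\omega}=2\pi\sqrt{\frac{R^3}{GM}}=2\pi\sqrt{\frac{R}{g}}\approx 2\pi\sqrt{\frac{6.37\times 10^6\,\mathrm m}{9.8\,\mathrm{m/s^2}}}\approx 5.1\times 10^3\,\mathrm s\approx 84\,\mathrm{min}.$$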
"domain": "physics.stackexchange",
"id": 86146,
"tags": "homework-and-exercises, newtonian-mechanics, newtonian-gravity, harmonic-oscillator, planets"
} |
Why linear polarizing film/sheet are bit dark? | Question: Why linear polarizering film/sheet are bit dark? Is there any transparent linear polarizing film/sheet available in market which absorbs glare?
Answer: The polarizers are only letting one polarization (vertical or horizontal, say) through; the rest is absorbed. Since not all of the light is transmitted, the films necessarily look a bit "dark."
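Quantitatively, an ideal linear polarizer passes at most half of the incident unpolarized intensity, and Malus's law gives the transmitted fraction for already-polarized light at angle $\theta$ to the transmission axis:
$$I_{\text{unpol}}=\tfrac{1}{2}I_0,\qquad I_{\text{pol}}=I_0\cos^2\theta.$$
Real absorptive films transmit somewhat less than 50% of unpolarized light, which is why even good ones look grey rather than fully transparent.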
"domain": "physics.stackexchange",
"id": 63657,
"tags": "polarization"
} |
Questions about Navier-Stokes equations, Einstein notation, tensor rank | Question: I'm looking at Navier-Stokes equation in index notation and how to get them in vector notation:
$$
{\partial u_i \over \partial t}+ u_j {\partial u_i \over \partial x_j}= -\frac{1}{\rho}{\partial p \over \partial x_i}+ \nu {\partial^2 u_i \over \partial x_j \partial x_j}+g_i
$$
1st Question: The local acceleration, pressure, and body forcing terms seem simple. Is it correct to say as they have 1 free index (i) that they are tensors of rank 1, thus vectors where i = 1,2, and 3 for 3D?
2nd Question: For the convection acceleration term (I assume the diffusive term would be similar):
$$
u_j {\partial u_i \over \partial x_j},
$$
is it correct to say the $u_j {\partial \over \partial x_j}$ component, as it has no free indices and 1 dummy index (j), is a rank 0 tensor and thus a scalar (after summation)?
3rd Question: If Question 2 is correct can $u_i$ in the same term, a vector, simply then be multiplied by $u_j {\partial \over \partial x_j}$, a scalar, to then get a vector?
This is probably a softball question for anyone that does a lot of tensor calculus. I'm trying to teach some things about the Navier-Stokes equations and I'm trying to make sure all of my definitions regarding tensor rank and index notation are correct.
Answer:
Is it correct to say as they have 1 free index (i) that they are tensors of rank 1, thus vectors where i = 1,2, and 3 for 3D?
Yes but... not always. In general, an expression with a free index $i$ might be interpreted as the $i$-th component of a $1 \times n$ array of numbers; if it has 2 free indices $i,j$, as the $(i,j)$-th component of a matrix, and so on. Yet, arrays and tensors are not the same thing. Tensors must obey certain transformation rules under rotations, for example, and we 'represent' these tensors with arrays of numbers. For your problem, these quantities are vectors, but you can always avoid getting into the details of the transformation properties and just think about them as arrays. Actually, whenever a free index arises in non-relativistic physics, it is almost always certain that the quantity is also the component of a vector.
is it correct to say the $u_j {\partial \over \partial x_j}$ component, as it has no free indices and 1 dummy index (j), is a rank 0 tensor and thus a scalar (after summation)?
Yes, it is a scalar operator (no free indices) but take into account that this might change depending on the object on which it acts, e.g: if it acts on a function (scalar) it will be scalar, if it acts on a vector (1 free index) it will be a vector, and so on. Also, take into account that you cannot add up quantities with different ranks (i.e: a number + a vector is not a valid operation, therefore, if you identify that an equation is made up of a sum of vectors like Navier Stokes, then... the other terms must also be vectors).
If Question 2 is correct, can $u_i$ in the same term, a vector, simply then be multiplied by $u_j {\partial \over \partial x_j}$, a scalar, to then get a vector?
Yes, read the previous comments.
Also, you might find it useful to interpret things like $\frac{\partial}{\partial x_i}$ as the components of $\vec{\nabla}$ (a gradient), contractions as dot products, $u_j\frac{\partial}{\partial x_j}$ as a directional derivative $\vec{u}\cdot\vec{\nabla}$, $\frac{\partial^2}{\partial x_j \partial x_j}$ as a Laplacian $\nabla^2$, etc. Further, take into account that $u_j\frac{\partial}{\partial x_j}\neq \frac{\partial u_j}{\partial x_j}$ (the first one is a directional derivative, and the latter a divergence).
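Putting those identifications together, the index-notation equation at the top of the question matches, term by term, the familiar vector form:
$$\frac{\partial \vec u}{\partial t}+(\vec u\cdot\vec\nabla)\,\vec u=-\frac{1}{\rho}\vec\nabla p+\nu\,\nabla^2\vec u+\vec g.$$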
"domain": "physics.stackexchange",
"id": 69594,
"tags": "tensor-calculus, navier-stokes"
} |
RPN calculator in C | Question: I had to write a RPN calculator in C as one of my homework problems, but if someone could critique my code and suggest any improvements, that would be fantastic! I haven't added any overflow errors or input errors, but I will add that eventually.
My code takes operands and operators from the command line (e.g., 3 4 - would return -1), and sends it to the decode function which uses the pop and push functions.
#include <stdio.h>
#include <stdlib.h>
void push(float stack[], float value, int *currStack)
{
int i = *currStack;
while (i != 0)
{
stack[i] = stack[i-1];
i--;
}
stack[0] = value;
*currStack += 1;
}
void pop(float stack[], char operation, int *currStack)
{
int i;
switch (operation)
{
case '+':
stack[0] = stack[1] + stack[0];
break;
case '-':
stack[0] = stack[1] - stack[0];
break;
case '*':
stack[0] = stack[1] * stack[0];
break;
case '/':
stack[0] = stack[1] / stack[0];
break;
default:
printf("Invalid character.");
break;
}
for (i=1;i<*currStack;i++)
{
float temp = stack[i];
stack[i] = stack[i+1];
stack[i+1] = temp;
}
*currStack -= 1;
}
int decode(char **instring, float *outval, int size)
{
int i=0, currStack=0;
float stack[size/2];
for (i=1;i<size;i++)
{
if (atof(instring[i]))
push(stack, atof(instring[i]), &currStack);
else
pop(stack, *instring[i], &currStack);
*outval = stack[0];
}
}
int main(int argc, char *argv[])
{
float result;
decode(argv, &result, argc);
printf("The answer is: %.3f\n", result);
return 0;
}
Answer:
for (i=1;i<*currStack;i++)
{
float temp = stack[i];
stack[i] = stack[i+1];
stack[i+1] = temp;
}
This is more complicated than it needs to be.
float temp = stack[1];
for ( i = 1; i < *currStack; i++ )
{
stack[i] = stack[i+1];
}
stack[*currStack] = temp;
This version has the exact same effect but does *currStack + 1 assignments where your original does *currStack * 3 - 3 assignments (although that may drop to *currStack * 2 - 2 assignments with good register management by the compiler).
We can do even better since we don't care about the value that is at stack[1] before we start.
for ( i = 1; i < *currStack; i++ )
{
stack[i] = stack[i+1];
}
The for loop is sufficient. We don't need to muck around with temp at all. It's just junk data at this point.
Note: I'm not disputing vnp's point that the copy is unnecessary. I'm just saying that if you do do the copy, you don't actually have to copy the stack[1] value. And you certainly don't have to copy stack[1] once per element in the stack.
default:
printf("Invalid character.");
break;
}
You don't actually need the break; there. It's all right to let it fall out of the default case on its own. It won't hurt anything, but I think it's more readable to set off default slightly by having it not break.
int decode(char **instring, float *outval, int size)
You say that decode is supposed to return an int but you never return anything. You could declare this of type void.
void decode(char **instring, float *outval, int size)
Then no one would expect it to return anything.
float stack[size/2];
for (i=1;i<size;i++)
You should probably comment why this works. Note that it's only because argc is one greater than the number of arguments that it does. This shows up implicitly in the way that you start with i=1 but could be clearer. As a general rule, any time you do something clever, you should comment to explain it. Otherwise, you have to be clever again to read the code.
for (i=1;i<size;i++)
{
if (atof(instring[i]))
push(stack, atof(instring[i]), &currStack);
else
pop(stack, *instring[i], &currStack);
*outval = stack[0];
}
You assign a value to *outval on every iteration of the loop but only use it once. You could do this outside the loop with the same ultimate effect.
for ( i = 1; i < size; ++i ) {
double temp;
if ( (temp = atof(instring[i])) ) {
push(stack, temp, &currStack);
} else {
pop(stack, *instring[i], &currStack);
}
}
*outval = stack[0];
You also don't need to calculate atof(instring[i]) twice. Save it the first time.
I added curly braces around the statements in the if/else. The single statement version is susceptible to a class of typo bugs that is rather hard to diagnose.
There is an argument that ++i is faster than i++. It's not a big difference and a good compiler should be able to optimize this out, but it doesn't hurt anything to use the prefix notation.
Your code doesn't allow for values of 0.0 and will display "Invalid character" in that case. Yet that's a valid value. Another possibility would be
if ( (temp = atof(instring[i])) || ! strcmp(instring[i], "0.0") ) {
which would allow for 0.0, although it only supports one format. An alternative would be an is_zero function which could do a more exhaustive check.
The string manipulation to check for a number is more expensive (in terms of time) than checking for an operator. Checking for an operator only requires checking two characters to establish that it is an operator.
if ( ! instring[i][1] && is_operator(instring[i][0]) ) {
pop(stack, *instring[i], &currStack);
} else if ( (temp = atof(instring[i])) || ! strcmp(instring[i], "0.0") ) {
push(stack, temp, &currStack);
} else {
printf("Invalid argument: [%s].", instring[i]);
}
Then you just need an is_operator function:
int is_operator(char c) {
switch (c) {
case '+':
case '-':
case '*':
case '/':
return 1;
default:
return 0;
}
}
The atof statement returns a double, but you are only using float variables. It would be better to make stack, outval, etc. hold double values rather than just float values. That would save the implicit conversions as well. Note that %lf in printf expects a double as well. | {
"domain": "codereview.stackexchange",
"id": 22448,
"tags": "c, homework, math-expression-eval, calculator"
} |
Proof of gauge invariance of the massless Fierz-Pauli action | Question: The massless Fierz-Pauli action describing a spin-2 field $h_{\mu\nu}$ is (up to a prefactor) given by,
$$
S[h]=\int dx h^{\alpha\beta}\zeta_{\alpha\beta}^{\mu\nu} h_{\mu\nu},\tag{1}
$$
wherein we define the differential operator,
$$
\zeta_{\alpha\beta}^{\mu\nu}=\square\left(P_\alpha^\mu P_\beta^\nu-P_{\alpha\beta}P^{\mu\nu}\right),\tag{2}
$$
with projection tensor $P_{\mu\nu}=\eta_{\mu\nu}-\partial^{-2}\partial_\mu\partial_\nu$ and d'Alembertian $\square$.
Many references, e.g. Hinterbichler 2011, claim that this action, Eq. (1), is invariant under the gauge transformation,
$$
h_{\mu\nu}\to h_{\mu\nu}+\delta h_{\mu\nu}=h_{\mu\nu}+\partial_\mu\xi_\nu+\partial_\nu\xi_\mu,
$$
wherein we demand $\xi_\mu(x)$ to be continuously differentiable and to fall off sufficiently fast at infinity such that boundary terms vanish.
How do I prove the claimed gauge invariance?
We call a theory invariant under a specific transformation if the equations of motion (EOMs) remain unchanged. From classical mechanics, we know that the EOMs remain unchanged if the action changes by a total time derivative or a constant term, as these drop out of the Euler-Lagrange equations which lead to the EOMs. The total time derivative is less relevant here, since in spacetime we cannot easily separate time from space; thus in our case we are left to show,
$$
S[h+\delta h]-S[h]=\text{const}.\tag{3}
$$
When inserting Eq. (1) into Eq. (3) I struggle with the final steps. Furthermore, I would be grateful for tricks on how to simplify my calculations.
Calculations
We insert Eq. (1) into Eq. (3) and find that the term without $\delta h$ cancels out,
$$
\begin{align}
S[h+\delta h]-S[h]
&=\int dx (h^{\alpha\beta}+\delta h^{\alpha\beta})\zeta_{\alpha\beta}^{\mu\nu}(h_{\mu\nu}+\delta h_{\mu\nu})-\int dx h^{\alpha\beta}\zeta_{\alpha\beta}^{\mu\nu} h_{\mu\nu}\\
&=\int dx \left\{h^{\alpha\beta}\zeta_{\alpha\beta}^{\mu\nu}\delta h_{\mu\nu}+\delta h^{\alpha\beta}\zeta_{\alpha\beta}^{\mu\nu} h_{\mu\nu}+\delta h^{\alpha\beta}\zeta_{\alpha\beta}^{\mu\nu}\delta h_{\mu\nu}\right\}.\tag{A.1}
\end{align}
$$
We note that the first two terms need to cancel each other, as these are the only terms that contain $h_{\mu\nu}$. Consequently, the third term has to be a constant.
We perform partial integration on the second term in Eq. (A.1),
$$
\int dx\delta h^{\alpha\beta} \zeta_{\alpha\beta}^{\mu\nu} h_{\mu\nu}
=-\int dx h_{\mu\nu}\left(\zeta_{\alpha\beta}^{\mu\nu}\delta h^{\alpha\beta}\right),\tag{A.2}
$$
where we used that $\xi_\mu$ falls off rapidly towards the boundaries. That said, I am not sure if it is justified to integrate by parts with $\zeta$ as the differential operator.
Using the Minkowski metric, we can raise and lower indices,
$$
h_{\mu\nu}\zeta_{\alpha\beta}^{\mu\nu}\delta h^{\alpha\beta}
=h^{\sigma\rho}\left(\eta_{\mu\sigma}\eta_{\nu\rho}\zeta_{\alpha\beta}^{\mu\nu}\eta^{\alpha\lambda}\eta^{\beta\gamma}\right)\delta h_{\lambda\gamma}
=h^{\alpha\beta}\zeta_{\alpha\beta}^{\mu\nu}\delta h_{\mu\nu}.\tag{A.3}
$$
In the last step we relabeled the indices such that they match the first term in Eq. (A.1).
We are left with the third term,
$$
\int dx\delta h^{\alpha\beta}\zeta_{\alpha\beta}^{\mu\nu}\delta h_{\mu\nu}
=\int dx (\partial^\alpha\xi^\beta)\zeta_{\alpha\beta}^{\mu\nu}(\partial_\mu\xi_\nu+\partial_\nu\xi_\mu)+\int dx (\partial^\beta\xi^\alpha)\zeta_{\alpha\beta}^{\mu\nu}(\partial_\mu\xi_\nu+\partial_\nu\xi_\mu).
\tag{A.4}
$$
Because of the tensor symmetry $\zeta_{\alpha\beta}^{\nu\mu}=\zeta_{\alpha\beta}^{\mu\nu}=\zeta_{\beta\alpha}^{\mu\nu}$, we can sum the terms in Eq. (A.4) to,
$$
\int dx\delta h^{\alpha\beta}\zeta_{\alpha\beta}^{\mu\nu}\delta h_{\mu\nu}
=4\int dx (\partial^\alpha\xi^\beta)\zeta_{\alpha\beta}^{\mu\nu}(\partial_\mu\xi_\nu).\tag{A.5}
$$
At this point, I don't see any obvious operations on how to show that (A.5) is constant.
Answer: You need to check invariance only at the linear level, because you are considering a linearized theory; the third term is second order in $\xi$.
The integration by parts in (A.2) is incorrect: $\zeta$ is a differential operator quadratic in derivatives, so moving it across involves an even number of integrations by parts and produces no overall minus sign.
I recommend starting from the most general quadratic action and fixing the coefficients by demanding diffeomorphism invariance, as in Zee's book on gravity.
After that, you can rewrite the action in the form that you presented.
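As a sketch of why the linear-level invariance works, using only the definitions above: since $\partial^{-2}\Box=1$, the projector annihilates gradients,
$$
P^{\mu}{}_{\alpha}\partial_\mu\xi_\nu
=\left(\delta^\mu_\alpha-\partial^{-2}\partial^\mu\partial_\alpha\right)\partial_\mu\xi_\nu
=\partial_\alpha\xi_\nu-\partial^{-2}\Box\,\partial_\alpha\xi_\nu
=0,
$$
and likewise $P^{\mu\nu}\partial_\mu\xi_\nu=0$. Every term of $\zeta_{\alpha\beta}^{\mu\nu}$ contracts a projector with each index of $\delta h_{\mu\nu}=\partial_\mu\xi_\nu+\partial_\nu\xi_\mu$, so $\zeta_{\alpha\beta}^{\mu\nu}\delta h_{\mu\nu}=0$ identically, and every $\delta h$ term in (A.1) vanishes.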
"domain": "physics.stackexchange",
"id": 63822,
"tags": "homework-and-exercises, lagrangian-formalism, metric-tensor, field-theory, gauge-theory"
} |
Equation for acceleration vector given a constant acceleration value | Question: I am aware of the formula for acceleration given velocity over time, however I would like a way to apply a constant acceleration (say $4m/s^2$) to a direction vector. How can I write such an equation?
Specifics:
I know the scalar value of my acceleration through f=ma, which is ~$4 m/s^2$
I know the direction I am facing in x, y, z coordinates
I want to apply the acceleration proportionately to the direction I'm facing (ie I can't add $2t^2$ to all axis as I will not be moving at the same rate in all axis). Say I have my normalized direction vector. My x direction is 0.6, my y is 0.5, and my z is 0.3. Would I just divide each value by the sum total of the three and then add that percentage of my acceleration to each component in the direction vector? So I add $% * 1/2at^2$ to each component?
Here's what I'm thinking:
$A_x = {\displaystyle \frac{|i|}{|i+j+k|}} \times 0.5 \times at^2$
$A_y = {\displaystyle \frac{|j|}{|i+j+k|}} \times 0.5 \times at^2 $
$A_z = {\displaystyle \frac{|k|}{|i+j+k|}} \times 0.5 \times at^2$
Answer: No.
Just multiply your direction vector by the scalar magnitude.
If your direction vector is $\hat{d}$ and your scalar acceleration is $a$, then the vector acceleration is $\vec{a}=a\hat{d}$. (The hat on $d$ is a standard way of denoting a vector whose magnitude is 1.)
Note, however, that you have an error. The components you wrote for your direction vector are not normalized. | {
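A minimal sketch of this in Python (the helper name is mine, and the example reuses the asker's unnormalized components):

```python
import math

def accel_vector(direction, magnitude):
    """Normalize a direction vector, then scale it to the given magnitude."""
    norm = math.sqrt(sum(c * c for c in direction))
    return tuple(magnitude * c / norm for c in direction)

# The asker's (unnormalized) direction components, with |a| = 4 m/s^2:
ax, ay, az = accel_vector((0.6, 0.5, 0.3), 4.0)
print(round(math.sqrt(ax**2 + ay**2 + az**2), 6))  # 4.0 — magnitude recovered by construction
```

Note that this divides by the Euclidean norm $\sqrt{i^2+j^2+k^2}$, not by the component sum $|i+j+k|$ proposed in the question.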
"domain": "physics.stackexchange",
"id": 43548,
"tags": "acceleration, vectors"
} |
Self-modifying esoteric language interpreter in Ruby | Question: I recently created Copy, an esoteric language, and wrote an implementation in Ruby.
The language has only 7 instructions:
copy <a> <b> <c> Copy the code block from a to b at c
remove <a> <b> Remove the code block from a to b
skip <value> Skip the next instruction if value is not 0
add <var> <value> Add value to var
negate <var> Negate var
print <value> Print value as an ASCII character
getch <var> Read a character and store it character code in var
All addresses are relative, and values can be variables or signed integers:
require "io/console"
class Copy
def initialize(code)
@code = code.lines.map &:chomp
@len = @code.length*2
@i = 0
@vars = {}
end
def value(s)
if s =~ /-?\d+/
s.to_i
else
@vars[s] or 0
end
end
def step
while @code.length > @len do
@code.shift
@i -= 1
end
line = @code[@i].split " "
case line[0]
when "copy"
@code.insert(
@i + value(line[3]),
*@code.slice((@i + value(line[1]))..(@i + value(line[2])))
)
when "remove"
@code.slice!((@i + value(line[1]))..(@i + value(line[2])))
when "skip"
if value(line[1]) != 0
@i += 1
end
when "add"
@vars[line[1]] = (@vars[line[1]] or 0) + value(line[2])
when "negate"
@vars[line[1]] *= -1
when "print"
STDOUT.write value(line[1]).chr
when "getch"
@vars[line[1]] = STDIN.getch.ord
end
@i += 1
end
def run
while @i < @code.length
step
end
end
end
if __FILE__ == $0
unless ARGV[0]
puts "Copy interpreter in Ruby"
puts "Usage: #{$0} <file>"
end
c = Copy.new File.read(ARGV[0])
c.run
end
Truth machine (If the input is 0, print 0 and exit, if the input is 1, print 1 in an infinite loop):
getch a
add b a
add b -48
print a
skip b
skip 1
copy -4 0 1
I am not really good at Ruby, can someone review this?
Answer: I would suggest refactoring the switch into one method per instruction. This would make scaling easier:
def remove_callback(line)
  @code.slice!((@i + value(line[1]))..(@i + value(line[2])))
end
...
line = @code[@i].split " "
self.send("#{line[0]}_callback", line)
Note that line has to be passed in explicitly (it is local to step), and you may want to guard the send with respond_to? so an unknown instruction doesn't raise NoMethodError.
"domain": "codereview.stackexchange",
"id": 22155,
"tags": "ruby, interpreter, language-design"
} |
openni_params missing | Question:
I'm in the process of upgrading a Turtlebot from Ubuntu 11.04/Electric to 12.04/Fuerte. The first thing to do is to recalibrate, but when I run turtlebot_calibration I'm getting the following:
error loading <rosparam> tag:
file does not exist [/opt/ros/fuerte/stacks/openni_camera/info/openni_params.yaml]
XML is <rosparam command="load" file="$(find openni_camera)/info/openni_params.yaml"/>
Indeed it does appear that the openni_params file is missing. What happened to this?
Originally posted by JediHamster on ROS Answers with karma: 995 on 2012-06-03
Post score: 3
Answer:
I just experienced the same problem migrating my robot setup to Fuerte. It looks like the old openni_camera package got split up and deprecated. There is a migration guide, and you should be able to modify the turtlebot_calibration source accordingly until an official Fuerte version is released.
Originally posted by piyushk with karma: 2871 on 2012-06-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by SivamPillai on 2012-07-24:
I did not well understand the migration guide. should I obtain the deprecated version first and change my dependency names or depend upon the new openni package? I have the same problem as asked above.
Comment by piyushk on 2012-07-25:
I would recommend depending on the new openni package. It also seems a turtlebot stack has been released for Fuerte. You might want to try it out.
Comment by SivamPillai on 2012-07-25:
I'm sorry but I have a different Robot... its Husky A200 :) | {
"domain": "robotics.stackexchange",
"id": 9641,
"tags": "calibration, turtlebot, ubuntu, ros-fuerte, ubuntu-precise"
} |
Plotting the integer values in text format in matplotlib piechart | Question: I have done a clustering algorithm and represented the results in a pie chart as shown below.
fig, ax = plt.subplots(figsize=(20, 10), subplot_kw=dict(aspect="equal"))
contents = []
for k,v in clusters.items():
indi= str(len(clusters[k])) + " users " + "Cluster_"+ str(k)
contents.append(indi)
#contents = ['23 users Cluster_0', '21 users Cluster_1']
data = [float(x.split()[0]) for x in contents]
Cluster= [x.split()[-1] for x in contents]
def func(pct, allvals):
absolute = int(pct/100.*np.sum(allvals))
return "{:.0f}%\n({:d} users)".format(pct, absolute)
wedges, texts, autotexts = ax.pie(data, autopct=lambda pct: func(pct, data),
textprops=dict(color="w"))
ax.legend(wedges, Cluster,
title="CLuster",
loc="center left",
bbox_to_anchor=(1, 0, 0.5, 1))
plt.setp(autotexts, size=10, weight="bold")
ax.set_title("Distribution of users: A pie chart")
Even though the users are 23 and 21 in each cluster, the piechart shows 22 and 20.
This is due to the conversion to int in def func(), which truncates the fractional part of some float values.
But, to fix this I wrote the below code and it works:
def func(percentage, allvals):
absolute = int(np.sum(allvals))
newV = (percentage/100)*absolute
roundnewV = round(newV)
intnewV = int(roundnewV)
return "{:.0f}%\n({:d} users)".format(percentage, intnewV)
Is this a good way to save the original form of integer and not lose out any value?
Answer: You got yourself in trouble by using plt.pie, and especially the keyword argument autopct, beyond its intended use.
The basic idea of the pie chart is to have the wedge labels outside the pie and perhaps percentages inside.
You wanted the wedge label and percentage inside the pie and manipulated the autopct keyword with a function to achieve this. This involved cumbersome calculations from percentages to values you already know.
Another solution could be to use the more simple labels keyword argument and change the resulting texts properties to be inside the pie instead outside, see code changes below:
contents = ['23 users Cluster_0', '21 users Cluster_1']
data = [int(x.split()[0]) for x in contents]
def pie_chart_labels(data):
total = int(np.sum(data))
percentages = [100.0 * x / total for x in data]
fmt_str = "{:.0f}%\n({:d} users)"
return [fmt_str.format(p,i) for p,i in zip(percentages, data)]
wedges, texts = ax.pie(data, labels=pie_chart_labels(data))
# shrink label positions to be inside the pie
for t in texts:
x,y = t.get_position()
t.set_x(0.5 * x)
t.set_y(0.5 * y)
plt.setp(texts, size=10, weight="bold", color="w", ha='center') | {
"domain": "codereview.stackexchange",
"id": 37457,
"tags": "python, python-3.x, integer, floating-point, matplotlib"
} |
DB-to-Java value mapper | Question: In my company, I've inherited some Java library that I'm now writing tests to, refactoring and fixing Sonar issues.
One particular point that Sonar is complaining about is a big chaining of if/else if/else statements, that is used in a class responsible for parsing values that come from the DB to the corresponding values used in Java. The cyclomatic complexity of this method is 32, where Sonar is configured to a threshold of 15.
I would like some ideas of a better way to write this. Maybe some design-pattern that could be applied.
Since this is a pretty critical part of the application, it would be interesting not to add a considerable overhead.
private Object getValue(final Class<?> type, final int index, final ResultSet resultSet) throws SQLException {
Object value = null;
if (Boolean.class.isAssignableFrom(type) || boolean.class.isAssignableFrom(type)) {
String v = resultSet.getString(index);
if (v != null) {
value = "Y".equalsIgnoreCase(v);
}
} else if (Byte.class.isAssignableFrom(type) || byte.class.isAssignableFrom(type)) {
value = resultSet.getByte(index);
} else if (Byte[].class.isAssignableFrom(type) || byte[].class.isAssignableFrom(type)) {
value = resultSet.getBytes(index);
} else if (Short.class.isAssignableFrom(type) || short.class.isAssignableFrom(type)) {
value = resultSet.getShort(index);
} else if (Integer.class.isAssignableFrom(type) || int.class.isAssignableFrom(type)) {
value = resultSet.getInt(index);
} else if (Long.class.isAssignableFrom(type) || long.class.isAssignableFrom(type)) {
value = resultSet.getLong(index);
} else if (Float.class.isAssignableFrom(type) || float.class.isAssignableFrom(type)) {
value = resultSet.getFloat(index);
} else if (Double.class.isAssignableFrom(type) || double.class.isAssignableFrom(type)) {
value = resultSet.getDouble(index);
} else if (BigDecimal.class.isAssignableFrom(type)) {
value = resultSet.getBigDecimal(index);
} else if (Timestamp.class.isAssignableFrom(type)) {
value = resultSet.getTimestamp(index);
} else if (Time.class.isAssignableFrom(type)) {
value = resultSet.getTime(index);
} else if (Date.class.isAssignableFrom(type)) {
value = resultSet.getDate(index);
} else if (LocalTime.class.isAssignableFrom(type)) {
String vlrString = resultSet.getString(index);
if (StringUtils.length(vlrString) == 6) {
value = LocalTime.of(Integer.valueOf(vlrString.substring(0, 2)), Integer.valueOf(vlrString.substring(2, 4)), Integer.valueOf(vlrString.substring(4)));
}
} else if (LocalDate.class.isAssignableFrom(type)) {
Timestamp timeStamp = resultSet.getTimestamp(index);
value = timeStamp == null ? null : timeStamp.toLocalDateTime().toLocalDate();
} else if (LocalDateTime.class.isAssignableFrom(type)) {
Timestamp timeStamp = resultSet.getTimestamp(index);
value = timeStamp == null ? null : timeStamp.toLocalDateTime();
} else if (String.class.isAssignableFrom(type)) {
value = resultSet.getString(index);
} else if (type.isEnum()) {
value = DomainUtils.get((Class<Enum>) type, resultSet.getString(index));
} else {
value = resultSet.getObject(index);
}
if (resultSet.wasNull() && !type.isPrimitive()) {
value = null;
}
return value;
}
Answer: I think, performance wise, you won't get much better than the if-then-else construct. Since isAssignableFrom is not a direct lookup, but a rather complex task, but your runtime types are actually limited, it may be helpful to build a cache around the result. So, if you found a type match, keep that in a direct lookup cache to avoid walking the if-then.
To help with code maintenance, I would refactor the comparisons into separate stateless comparator classes, each of which will implement one function, the getValue method.
The list of comparators will be initialized at some early point in the lifecycle of your application and put in a list. The method getValue will be reduced to a for loop over the comparator-list.
Adding another type is now reduced to changing a comparator class. This makes it very obvious, what you have changed even in changelogs, since the class is mentioned in the committed-files-list.
To help maintenance even more, you could either search your classpath for the comparator interface or introduce an annotation to mark the implementations, for automatic building of the list.
This will give you an almost self-maintaining code for the given task. Whenever you need a new type, you just add it with the right interface/annotation and it will magically be seen on the next run.
This would be your main entry point.
public class ValueGetter {
private static List<TypeConverter> converter;
static {
// might require synchronize
converter = new ArrayList<>(1);
converter.add(new FloatConverter());
// ... add more or build using reflection or whatever
}
public Object getValue(final Class<?> type, final int index, final ResultSet resultSet) throws SQLException {
Object value;
for (TypeConverter c : converter) {
value = c.getValue(type, resultSet, index);
if (value != TypeConverter.NOTTYPE) {
return value;
}
}
throw new IllegalArgumentException("Cannot convert to requested type");
}
}
This is the interface mentioned.
public interface TypeConverter {
public static final String NOTTYPE = "not of this type";
Object getValue(Class<?> type, ResultSet resultSet, int index) throws SQLException;
}
This is taking the float if-else
public class FloatConverter implements TypeConverter {
@Override
public Object getValue(Class<?> type, ResultSet resultSet, int index) throws SQLException {
if (Float.class.isAssignableFrom(type) || float.class.isAssignableFrom(type)) {
return resultSet.getFloat(index);
}
return NOTTYPE;
}
} | {
"domain": "codereview.stackexchange",
"id": 20548,
"tags": "java, design-patterns, database, casting"
} |
Is speed of light invariant in different inertial frame? | Question: Don't get angry at me. I believe in special relativity just as any scientist would.
But reading this article
http://arxiv.org/abs/0708.2687
I realize that I actually haven't done any experiments of my own to test the invariance of the speed of light (and am not able to, anyway).
There are some stories that the experiments's data had been "revised" so that they matched with the relativity's prediction.
I know it may sound meaningless or untrue, but I want to know: is there any trend, action, or experiment in modern science that denies the invariance of the speed of light?
I THINK almost every physicist believes in the invariance of the speed of light the way people believe in God, since all the information about experiments comes from science journals (which, to some extent, can be manipulated).
Answer: To newcomers to relativity it seems to be based on the invariance of the speed of light. While this has some historic significance, these days we regard Lorentz invariance as the fundamental principle, and a constant speed of any massless particle is then just a consequence of Lorentz invariance.
So your question could, and should, be written as the equivalent question:
is there any trend or any action or any experiment in modern science that denies Lorentz invariance?
And the answer is that yes indeed, Lorentz invariance has been questioned many times and continues to be questioned. Rather than attempt a review here let me just point you to the Wikipedia page on the subject.
While various speculative theories suggest there may be small violations of Lorentz invariance under extreme conditions, you should note that Lorentz invariance is at the very worst expected to be an exceedingly accurate approximation.
"domain": "physics.stackexchange",
"id": 14836,
"tags": "special-relativity, speed-of-light, lorentz-symmetry"
} |
Performing add/delete/count operations on a list | Question:
Problem Statement (HackerRank)
You have a list of integers, initially the list is empty.
You have to process Q operations of three kinds:
add s: Add integer s to your list, note that an integer can exist more
than one time in the list
del s: Delete one copy of integer s from the list, it's guaranteed
that at least one copy of s will exist in the list.
cnt s: Count how many integers a are there in the list such that a AND
s = a, where AND is bitwise AND operator
Input Format
First line contains an integer Q. Each of the following Q lines
contains an operation type string T and an integer s.
Constraints
\$1 \le Q \le 200000\$
\$0 \le s \lt 2^{16}\$
Output Format
For each cnt s operation, output the answer in a new line.
Sample Input
7
add 11
cnt 15
add 4
add 0
cnt 6
del 4
cnt 15
Sample Output
1
2
2
Explanation
For first line, we have 15 AND 11 = 11 so the answer is 1
For second line, 6 AND 0 = 0 and 6 AND 4 = 4 so the answer is 2
For third line, 4 has been deleted and we have 15 AND 11 = 11 and 15
AND 0 = 0 so the answer is 2
My working code:
operations = int(raw_input())
current = 1
lst = []
while current <= operations:
count = 0
input_ = raw_input().split()
operation = input_[0]
num = int(input_[1])
if operation == 'add':
lst.append(num)
elif operation == 'cnt':
for number in lst:
if number & num == number:
count += 1
print(count)
elif operation == 'del':
lst.remove(num)
current += 1
Answer: You don't exploit the nature of the question.
Constraints
\$1 \le Q \le 200000\$
\$0 \le s \lt 2 ^ {16}\$
Why are these important? It's so you can achieve greater performance, for a price in memory.
You know that there is \$2 ^ {16}\$ possible numerical inputs,
and there is a maximum of 200000 inputs.
If the last input is count, and all the rest are add,
you will be recounting numbers a lot. Leading to a dramatic performance loss.
First you would want to prevent recounting if possible.
One way to do this is to make a dictionary to count the occurrences of numbers.
However due to constant type changes you can just use a list instead.
lst = [0] * (2 ** 16)
This takes more time than using lst = [], but it will pay off.
Let's look at the performance of your three operations, using the information from Python's time complexity page.
Add
lst.append(num)
This is average case \$O(1)\$, worst case \$O(n)\$ (not amortized worst case).
lst is internally stored as an array,
and so if you grow past the bounds all the data must move.
And so I will say this is \$O(n)\$.
Del
lst.remove(num)
This is average case \$O(n)\$, worst case \$O(n)\$.
And so this is \$O(n)\$.
cnt
for number in lst:
if number & num == number:
count += 1
print(count)
And again this is \$O(n)\$.
Overall
You can have 200000 calls to functions that are \$O(n)\$.
If you ask me that's not good on performance.
You can easly make add and del \$O(1)\$.
If you use lst = [0] * (2 ** 16).
if operation == 'add':
lst[num] += 1
elif operation == 'del':
lst[num] -= 1
It's like using a default-dictionary where the index is only numbers.
However this currently will have the drawback of cnt always being \$O(2 ^ {16})\$.
If you wish to fix that you can use a set, to store the numbers, so it is \$O(n)\$.
Here is a solution that passes a few more of the tests. It does not pass them all.
operations = int(raw_input())
nums = set()
storage = [0] * (2 ** 16)
for _ in xrange(operations):
input_ = raw_input().split()
operation = input_[0]
num = int(input_[1])
if operation == 'add':
nums |= {num}
storage[num] += 1
elif operation == 'cnt':
print(sum(
storage[number]
for number in nums
if (number & num) == number
))
elif operation == 'del':
storage[num] -= 1
This uses a similar solution to yours; my lst is nums. However, I aimed at obtaining \$O(1)\$ in both add and del.
I can't complete the problem: the Python programmers that solved this use [0] * (2 ** 16) together with binary logic I don't know, which makes all the operations \$O(256)\$.
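For reference, here is my reconstruction of how a 256-by-256 table can get every operation down to about 256 steps — this is an assumption about what those solutions do, not taken from any of them. Split each 16-bit value into two bytes, and index counts by the exact high byte and by supermasks of the low byte:

```python
SIZE = 256
# table[hi][b] = how many stored numbers have high byte hi
#                and a low byte that is a submask of b
table = [[0] * SIZE for _ in range(SIZE)]

def add(x, delta=1):            # delta=-1 implements "del"
    hi, lo = x >> 8, x & 0xFF
    for b in range(SIZE):       # update every supermask b of lo
        if b & lo == lo:
            table[hi][b] += delta

def cnt(s):
    # a & s == a  <=>  a's high byte is a submask of s's high byte
    #                  and a's low byte is a submask of s's low byte
    hi, lo = s >> 8, s & 0xFF
    return sum(table[h][lo] for h in range(SIZE) if h & hi == h)

# The sample input from the problem statement:
add(11); print(cnt(15))          # 1
add(4); add(0); print(cnt(6))    # 2
add(4, -1); print(cnt(15))       # 2
```

Both add and cnt touch exactly 256 table entries, independent of how many numbers have been stored.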
"domain": "codereview.stackexchange",
"id": 15055,
"tags": "python, programming-challenge, python-2.x, time-limit-exceeded, bitwise"
} |
Band-gap engineering and semiconductor lasers | Question: I am currently studying Physics of Photonic Devices, second edition by Shun Lien Chuang. Chapter 1.3 The Field of Optoelectronics says the following:
The control of the mole fractions of different atoms also makes the band-gap engineering extremely exciting. For optical communication systems, it has been found that minimum attenuation in the silica optical fibers occurs at $1.30 \ \mu\text{m}$ and $1.55 \ \mu\text{m}$ (Fig. 1.8a). The dispersion of light at $1.30 \ \mu\text{m}$ is actually zero (Fig. 1.8b). It is therefore natural to design sources such as light-emitting diodes and laser diodes, semiconductor modulators, and photodetectors operating at these desired wavelengths. In addition, many wavelengths, or the so-called optical channels for dense wavelength-division multiplexing (DWDM) applications, near $1.55 \ \mu\text{m}$ with constant frequency spacing such as $50$, $100$, or $200$ GHz can be used to take advantage of the broad $24$ THz frequency bandwidth near the minimum attenuation. For example, by controlling the mole fraction of gallium and indium in an $\mathrm{In}_{1 - x}\mathrm{Ga}_{x}\mathrm{As}$ material, a wide tunable range of band gap is possible because $\mathrm{InAs}$ has a $0.354 \ \text{eV}$ band gap and $\mathrm{GaAs}$ has a $1.424 \ \text{eV}$ band gap at room temperature.
The presence of the following two statements is what interests me:
The control of the mole fractions of different atoms also makes the band-gap engineering extremely exciting.
by controlling the mole fraction of gallium and indium in an $\mathrm{In}_{1 - x}\mathrm{Ga}_{x}\mathrm{As}$ material, a wide tunable range of band gap is possible because $\mathrm{InAs}$ has a $0.354 \ \text{eV}$ band gap and $\mathrm{GaAs}$ has a $1.424 \ \text{eV}$ band gap at room temperature.
It seems to me that the author is implying that there is some connection between semiconductor laser wavelength and the band gap? Is this correct, or am I misunderstanding this? Otherwise, what else would be the point of statement 2? I would greatly appreciate it if people would please take the time to clarify this.
Answer: In a direct-gap semiconductor, when an electron and hole recombine in a radiative process, they emit a photon with energy equal to the energy lost by the electron as it recombines with the hole. That is, roughly equal to the band gap energy.
The radiative process could be the spontaneous emission in an LED or below-threshold laser, or the stimulated emission in a laser.
Since the photon energy is directly proportional to the optical frequency or inversely proportional to the wavelength, the connection between bandgap energy of the emitting semiconductor and the LED or laser wavelength is very direct.
In the case of quantum wells or dots, the available energy levels will be slightly above the band gap edges, but the transition energies will still be fairly close (within tenths or hundredths of an eV) to the band gap energy. | {
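To make that connection concrete, the emitted photon's wavelength follows from the band-gap energy as $\lambda = hc/E_g \approx 1240\ \text{eV·nm}/E_g$. A small sketch using the band gaps quoted in the question (the 0.80 eV figure is my back-calculation for the 1.55 µm telecom window, not a value from the text):

```python
H_C_EV_NM = 1239.84  # h*c in eV*nm

def gap_to_wavelength_um(e_gap_ev):
    """Emission wavelength (in micrometres) for a band-gap energy in eV."""
    return H_C_EV_NM / e_gap_ev / 1000.0

print(gap_to_wavelength_um(1.424))  # GaAs: ~0.87 um
print(gap_to_wavelength_um(0.354))  # InAs: ~3.5 um
print(gap_to_wavelength_um(0.80))   # ~1.55 um, the low-loss fiber window
```

Tuning the mole fraction x in $\mathrm{In}_{1-x}\mathrm{Ga}_x\mathrm{As}$ between those two end-point gaps therefore sweeps the emission wavelength across the 1.30 µm and 1.55 µm fiber windows.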
"domain": "physics.stackexchange",
"id": 69776,
"tags": "optics, solid-state-physics, semiconductor-physics, laser, electronic-band-theory"
} |
What does a double red circle mean in rxgraph | Question:
Sometimes nodes appear in rxgraph as a double red circle instead of a black oval (see image below). What does the double red circle mean? The nodes seem to be working fine, ie they are sending and recieving data on topics.
image description http://www.pitchless.org/mark/rxgraph-double-red-circle.png
Originally posted by markpitchless on ROS Answers with karma: 390 on 2012-01-19
Post score: 1
Answer:
double red circle means that the Master reports the node exists, but rxgraph cannot talk to that node.
Originally posted by kwc with karma: 12244 on 2012-01-19
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by felix k on 2012-01-19:
I can just attest that my rxgraph (11.10 electric) gets out of sync a lot, e.g. if I remap a node's topic and restart it, rxgraph shows a connection to both topics.
Comment by markpitchless on 2012-01-19:
I do get other nodes in rxgraph that it says it can't connect to (in the info panel on the right) that don't get the red circle. Just noticed that the port number for node given in the rxgraph error is different to the port that rosnode info gives. Looks like rxgraph gets out of sync. | {
"domain": "robotics.stackexchange",
"id": 7937,
"tags": "ros, rxgraph"
} |
end-effector pose constrained planning | Question:
Hello: I am looking for end-effector pose constrained planning for the manipulator. I know that such functionalities have been provided in "MoveIt!" where you set your kinematics constraints and then give command to plan a path.
I believe that this type of end-effector-constrained planning (in MoveIt!) is done in the end-effector's Cartesian degrees of freedom using inverse kinematics.
Can anyone please point me to: where this task-constrained code sits in OMPL (as MoveIt uses OMPL), or any pointer to the research paper of this implementation in MoveIt!
I want to understand how it is done and what are the steps involved in ROS implementation.
Thanks.
Originally posted by Victor22 on ROS Answers with karma: 51 on 2014-06-19
Post score: 5
Original comments
Comment by Rob.Chen on 2019-01-08:
Hello! I am also very interested in manipulator constraint motion planning, and have some problem about how moveit sent constraint information to ompl, and how ompl deal with it, if you have solve it, please help me, any help is very greatful!
Comment by JohnDoe on 2023-07-18:
@Rob.Chen Hi! Did you make progress on your question? I have the same confusion as you.
Answer:
Here is an example of how to use path constraints (here orientation constraints) with Moveit in Python:
from moveit_msgs.msg import RobotState, Constraints, OrientationConstraint
def init_upright_path_constraints(self,pose):
self.upright_constraints = Constraints()
self.upright_constraints.name = "upright"
orientation_constraint = OrientationConstraint()
orientation_constraint.header = pose.header
orientation_constraint.link_name = self.arm.get_end_effector_link()
orientation_constraint.orientation = pose.pose.orientation
orientation_constraint.absolute_x_axis_tolerance = 0.4
orientation_constraint.absolute_y_axis_tolerance = 0.4
orientation_constraint.absolute_z_axis_tolerance = 0.4
#orientation_constraint.absolute_z_axis_tolerance = 3.14 #ignore this axis
orientation_constraint.weight = 1
self.upright_constraints.orientation_constraints.append(orientation_constraint)
def enable_upright_path_constraints(self):
self.arm.set_path_constraints(self.upright_constraints)
def disable_upright_path_constraints(self):
self.arm.set_path_constraints(None)
Originally posted by fivef with karma: 2756 on 2015-05-28
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by danjo on 2018-10-03:
from moveit_msgs.msg import RobotState, Constraints, OrientationConstraint
is import line
Comment by Mehdi. on 2019-01-27:
what is pose here?
Comment by fivef on 2019-03-04:
pose is the geometry_msgs/Pose of the endeffector right before you enable the path constraints.
Comment by AdriaArroyo on 2021-02-10:
I can confirm that the pose var type is geometry_msgs/PoseStamped and not geometry_msgs/Pose.
geometry_msgs/Pose has no header. :) | {
"domain": "robotics.stackexchange",
"id": 18325,
"tags": "moveit, ompl"
} |
Rotations of higher spin and the adjoint representation | Question: I am trying to understand the following expression for rotations of higher-spin objects, potentially in greater than 3 dimensions (though 3D is probably enough, in which case you can replace $L_{\mu\nu}\rightarrow S_x, S_y,$ or $S_z$)
$$[L_{\mu\nu}, \mathcal{O}^a(0)] = (S_{\mu\nu})_b^a \mathcal{O}^b(0)$$
where $L_{\mu\nu}$ generates rotations within the $x^\mu,x^\nu$ plane, and therefore satisfies the commutation relations for $SO(d)$, and (this is the part that confuses me) $S_{\mu\nu}$ also satisfy the $SO(d)$ commutation relations. I know how things work out nicely for qubits in 3D; a natural basis is $\mathcal{O}^a \rightarrow \{\sigma^x,\sigma^y,\sigma^z\}$, and $S_{\mu\nu}$ satisfies the commutations for $SO(d)$. More generally, if each $\mathcal{O}^a$ is proportional to an $L_{\mu\nu}$, then $S_{\mu\nu}$ is by definition the adjoint representation of $SO(d)$.
However I don't understand how this generalizes nicely to higher spin or a larger basis of operators. It seems intuitive that $S_{\mu\nu}$ would transform like a spatial rotation is some way, but I can't see this from the math. So my question is the following:
Choose an orthonormal basis of operators $\mathcal{O}^a$, and use the above expression to define $(S_{\mu})_b^a$ (replacing $\mu\nu$ with $\mu$ for conciseness). How can we show that $[S_{\alpha}, S_{\beta}]=if_{\alpha\beta}^\gamma S_\gamma$, where $f_{\alpha\beta}^\gamma$ are the structure constants of $SO(d)$ (and therefore $[L_\alpha,L_\beta]=if_{\alpha\beta}^\gamma L_\gamma$)?
Or if this expression is not true in general, then what is the correct way to express that $S_\mu$ forms a representation of the same algebra as $L_\mu$?
I came across this confusion when reading about field theory (this paper, Eq. 1.16), but it applies to regular quantum mechanics as well.
Answer: You might be faked out by the "rotations" involved in the arbitrary-dimensional Lorentz group, and you had a good instinct to illustrate the expressions in the plain 3D rotation subgroup. In that case, an infinitesimal rotation of an arbitrary operator
$\mathcal{O}^a(0)$ amounts to
$$[L^{j}, \mathcal{O}^a(0)] = (S^j) _{~~b}^a \mathcal{O}^b(0),\tag{1}$$
where, as stated, the three spin matrices $S^j$ obey the exact same commutation relations of so(3) summarized by the above abstract generators,
$$
[S^j,S^k]=i\epsilon^{jkl} S^l.\tag{2}
$$
The implied row/column indices of these matrices are suppressed here. These matrices may, in general, be $(2s+1)\times (2s+1)$ ones, s being the spin, 1/2,1, 3/2,2,... and the indices b of your operators take 2s+1 values which these matrices mix among themselves.
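A quick numerical sanity check of (2) (my addition, not part of the original answer): the 3$\times$3 adjoint matrices $(S^j)_{ab} = -i\epsilon_{jab}$ close into so(3) exactly as claimed.

```python
import numpy as np

# Levi-Civita symbol eps[j, k, l]
eps = np.zeros((3, 3, 3))
for j, k, l in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[j, k, l] = 1.0
    eps[j, l, k] = -1.0

# Adjoint (spin-1) matrices: (S^j)_{ab} = -i * eps_{jab}
S = [-1j * eps[j] for j in range(3)]

# Verify [S^j, S^k] = i eps^{jkl} S^l for every pair (j, k)
for j in range(3):
    for k in range(3):
        comm = S[j] @ S[k] - S[k] @ S[j]
        rhs = sum(1j * eps[j, k, l] * S[l] for l in range(3))
        assert np.allclose(comm, rhs)

print("the adjoint matrices represent the so(3) algebra")
```

The same loop run on the spin-1/2 matrices $S^j = \sigma^j/2$ passes as well; only the dimension of the representation changes, never the algebra.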
For vector operators $\mathcal{O}^a(0)$, transforming like 3-vectors, the indices are three, and the
matrices are the familiar 3$\times$3 generators $\vec L$ of basic QM or classical mechanics. Note the indices of the $\mathcal{O}^a(0)$, undeclared and suppressed here, could be anything the kets carry. So, they could be spinorial, as your "qubit" example proffers.
A ready mnemonic is that when kets rotate through R, operators such as $\mathcal{O}^a(0)$ must then rotate adjointly by $R\mathcal{O}^a(0)R^{-1}= {\mathcal D}(R)^a_{~~b} \mathcal{O}^b(0)$, which converts rotations of operator components to a linear combination ( $\leadsto$matrix multiplication) of these components. It is all predicated on Campbell's (1897) fundamental Lemma (1.17), which your reference oddly dubs "Hausdorff's lemma"-- but then again, many people, including myself, would rather refer to it as "Hadamard's lemma", by dint of educational background. At the Lie algebra level, it maps isomorphically adjoint operation to matrix multiplication, as above, and "integrates" to the relation of this paragraph.
Edit addressing the comment request for showing (1) $\leadsto$ (2) more explicitly.
The r.h.s. of (1) is a linearly reordered $\mathcal{O}^a(0)$, so we reapply it yet another time, and utilize the Jacobi identity,
$$[L^k,[L^{j}, \mathcal{O}^a(0)]] -[L^j,[L^{k}, \mathcal{O}^a(0)]]= (S^j) _{~~b}^a [L^k,\mathcal{O}^b(0)]- (S^k) _{~~b}^a [L^j,\mathcal{O}^b(0)]\\
\leadsto [[L^k,L^{j}], \mathcal{O}^a(0)]=\bigl ((S^j) _{~~b}^a (S^k) _{~~c}^b -(S^k) _{~~b}^a (S^j) _{~~c}^b \bigr) \mathcal{O}^c(0) \leadsto \\
i\epsilon^{kjl} [ L^l, \mathcal{O}^a(0) ]= ([S^j,S^k])_{~~c}^a \mathcal{O}^c(0)\leadsto \\
\Bigl ( i\epsilon^{kjl} S^l -[S^j,S^k]\Bigr )_{~~c}^a ~\mathcal{O}^c(0) =0,
$$
leading to (2) through dotting by another $i\epsilon_{kjm}$, up to a rep normalization: the Ss constitute a matrix representation of the abstract Lie algebra elements L, as seen above already. Many good QM texts review these arguments, as does Hamermesh, etc... | {
"domain": "physics.stackexchange",
"id": 91357,
"tags": "quantum-mechanics, quantum-spin, representation-theory, rotation"
} |
Why is $(N,N' G)$ reaction not listed? | Question: If I go here,
http://www.nndc.bnl.gov/ensdf/#
go to the by Nuclide tab, check the "Reaction" box, and search for $\rm^{24}Mg$ in the Nuclide box; various reactions which produce $\rm^{24}Mg$ appear. Included is the $\rm^{24}Mg(N,N'G)$ reaction (Neutron is incident on $\rm^{24}Mg$ Nucleus, Neutron and Gamma are emitted).
If I do the same search but with $\rm^{26}Mg$ instead of $\rm^{24}Mg$, the $\rm^{26}Mg(N,N'G)$ reaction is not listed.
Basically for some Nuclides this reaction is listed and for others it is not. I'm assuming that the reason for this is that this reaction is not allowed/doesn't occur.
My question is why this reaction is permitted for some isotopes but not others?
Answer: It's not (necessarily) a question of what's permitted so much as what's been measured, and published, and entered into the ENDF database. The reference for the 24Mg paper is
1984EL12,
Nucl.Instrum.Methods 228, 62 (1984),
D.Elenkov, D.Lefterov, G.Toumbev,
"Two-Target DSAM following the (n, n'γ) Reaction with Fast Reactor Neutrons"
and the abstract specifically mentions 24Mg; you'd have to find the paper to see whether they used an isotopically purified sample or whether they just selected against data from the 20% of magnesium with extra neutrons.
At the top of the search results is a link steering you to the XUNDL (experimental, un-evaluated data), which contains this reference for data on both (n,$\gamma$) and (n,n) on all stable magnesiums:
2012MA14,
Phys.Rev. C 85, 044615 (2012),
C.Massimi, for the n_TOF Collaboration,
"Resonance neutron-capture cross sections of stable magnesium isotopes and their astrophysical implications"
NUCLEAR REACTIONS 24,25,26Mg(n, γ), E=1 eV-1 MeV; measured Eγ, E(n), time-of-flight, capture yields, σ(E), resonance parameters using n_TOF facility at CERN. 25,26,27Mg; deduced resonances, levels, J, π, neutron and gamma widths, Maxwellian averaged cross sections. R-matrix analysis. Comparison with KADoNiS database. Discussed impact on s-process abundances and neutron resonance contribution to 22Ne(α, n)25Mg reaction.
Happy hunting! | {
"domain": "physics.stackexchange",
"id": 28103,
"tags": "nuclear-physics"
} |
New to TDD, am I doing everything right? How could I improve? | Question: I'm new to TDD; I've never used it before. I understand the basic concepts, but I'm now working on a small project which will be the first time I've ever actually used TDD.
The code is pretty self-explanatory: it's a simple user data storage class. I've written an interface, a UserData class, and then implemented the interface with SimpleUserStore, which uses lists.
I first wrote the tests, and then wrote the code so that they'd fail, and then "filled in" the code so that they would pass. When I realised I needed another method, I wrote the tests, then generated the stubs, then wrote the code so that they would pass.
My question is basically, am I doing this right? Are there things I'm missing out? Big gaping holes that make you go woah, wait a second!?
All responses appreciated.
public interface IUserStore
{
    bool AddUser(UserData user);
    bool DeleteUser(UserData user);
    bool UpdateUser(UserData user);
    UserData GetUserById(int id);
    UserData GetUserByName(string name);
}

public class UserData
{
    public UserData() { }

    public virtual int UserId { get; set; }
    public virtual string Username { get; set; }
    public virtual string PasswordHash { get; set; }
    public virtual string PasswordSalt { get; set; }
}

public class SimpleUserStore : IUserStore, IDisposable
{
    private List<UserData> userdata;

    public SimpleUserStore()
    {
        userdata = new List<UserData>();
    }

    public bool AddUser(UserData user)
    {
        if (user == null) throw new ArgumentNullException("user");
        if (userdata.Contains(user)) return false;
        userdata.Add(user);
        return userdata.Contains(user);
    }

    public bool DeleteUser(UserData user)
    {
        if (user == null) throw new ArgumentNullException("user");
        return userdata.Remove(user);
    }

    public bool UpdateUser(UserData user)
    {
        if (user == null) throw new ArgumentNullException("user");
        if (!userdata.Contains(user)) return false;
        UserData _user = userdata.FirstOrDefault(x => x.UserId == user.UserId);
        _user = user;
        return true;
    }

    public UserData GetUserById(int id)
    {
        if (id < 0) throw new ArgumentException("id");
        return userdata.FirstOrDefault(x => x.UserId == id);
    }

    public UserData GetUserByName(string name)
    {
        if (name == null) throw new ArgumentNullException("name");
        return userdata.FirstOrDefault(x => x.Username == name);
    }

    public void Dispose()
    {
        userdata = null;
    }
}
I have another project for tests, and in it I have the following test helper class:
[ExcludeFromCodeCoverage]
public static class TestHelper
{
    public static void ExpectException<T>(Action action) where T : Exception
    {
        try
        {
            action();
            Assert.Fail("Expected exception " + typeof(T).Name);
        }
        catch (T)
        {
            // Expected
        }
    }
}
And the tests:
[ExcludeFromCodeCoverage]
[TestClass]
public class SimpleUserStoreTests
{
    private SimpleUserStore simpleUserStore;
    private UserData userA, userB;

    [TestInitialize]
    public void TestInitialize()
    {
        simpleUserStore = new SimpleUserStore();
        userA = new UserData { UserId = 1, Username = "UserA", PasswordHash = "", PasswordSalt = "" };
        userB = new UserData { UserId = 2, Username = "UserB", PasswordHash = "", PasswordSalt = "" };
    }

    [TestCleanup]
    public void TestCleanup()
    {
        userA = null;
        userB = null;
        simpleUserStore.Dispose();
    }

    [TestMethod]
    public void AddUserTest()
    {
        Assert.IsTrue(simpleUserStore.AddUser(userA));
    }

    [TestMethod]
    public void AddUserAlreadyExistsTest()
    {
        simpleUserStore.AddUser(userA);
        Assert.IsFalse(simpleUserStore.AddUser(userA));
    }

    [TestMethod]
    public void AddUserNullArgumentTest()
    {
        TestHelper.ExpectException<ArgumentNullException>(() => simpleUserStore.AddUser(null));
    }

    [TestMethod]
    public void DeleteUserTest()
    {
        simpleUserStore.AddUser(userA);
        Assert.IsTrue(simpleUserStore.DeleteUser(userA));
    }

    [TestMethod]
    public void DeleteUserNullArgumentTest()
    {
        TestHelper.ExpectException<ArgumentNullException>(() => simpleUserStore.DeleteUser(null));
    }

    [TestMethod]
    public void UpdateUserTest()
    {
        simpleUserStore.AddUser(userA);
        userA.Username = "Testing";
        Assert.IsTrue(simpleUserStore.UpdateUser(userA));
    }

    [TestMethod]
    public void UpdateUserNullArgumentTest()
    {
        TestHelper.ExpectException<ArgumentNullException>(() => simpleUserStore.UpdateUser(null));
    }

    [TestMethod]
    public void UpdateUserNonExistTest()
    {
        Assert.IsFalse(simpleUserStore.UpdateUser(userA));
    }

    [TestMethod]
    public void GetUserByIdTest()
    {
        simpleUserStore.AddUser(userA);
        Assert.AreEqual(userA.UserId, simpleUserStore.GetUserById(1).UserId);
    }

    [TestMethod]
    public void GetUserByIdNegativeArgumentTest()
    {
        TestHelper.ExpectException<ArgumentException>(() => simpleUserStore.GetUserById(-20));
    }

    [TestMethod]
    public void GetUserByIdNonExistTest()
    {
        simpleUserStore.AddUser(userA);
        Assert.AreEqual(null, simpleUserStore.GetUserById(12));
    }

    [TestMethod]
    public void GetUserByNameTest()
    {
        simpleUserStore.AddUser(userA);
        Assert.AreEqual(userA.UserId, simpleUserStore.GetUserByName("UserA").UserId);
    }

    [TestMethod]
    public void GetUserByNameNullArgumentTest()
    {
        TestHelper.ExpectException<ArgumentNullException>(() => simpleUserStore.GetUserByName(null));
    }

    [TestMethod]
    public void GetUserByNameNonExistTest()
    {
        simpleUserStore.AddUser(userA);
        Assert.AreEqual(null, simpleUserStore.GetUserByName("Testing"));
    }
}
Answer: Your tests in general suffer from trusting your code too much:
[TestMethod]
public void DeleteUserTest()
{
    simpleUserStore.AddUser(userA);
    Assert.IsTrue(simpleUserStore.DeleteUser(userA));
}
You verify the return value, but you don't verify that the user has been removed from the store. The object could very well return success without actually removing the object. You should have a call to GetUser() or similiar to verify that the user isn't there anymore.
Similar with update:
[TestMethod]
public void UpdateUserTest()
{
    simpleUserStore.AddUser(userA);
    userA.Username = "Testing";
    Assert.IsTrue(simpleUserStore.UpdateUser(userA));
}
You don't do anything to verify that update actually did the update. You trust the return value. But you shouldn't do that. Let's actually look at the implementation.
public bool UpdateUser(UserData user)
{
    if (user == null) throw new ArgumentNullException("user");
    if (!userdata.Contains(user)) return false;
    UserData _user = userdata.FirstOrDefault(x => x.UserId == user.UserId);
You don't verify that this works. You never pass a different object than the one you originally inserted in the Store. You need to pass another object to see if that works.
    _user = user;
You don't verify that the user in the store gets updated, so you aren't testing this line either.
    return true;
}
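As a side note on the `_user = user;` line quoted above: assigning to `_user` only rebinds a local variable; the element held by the list is never replaced. That pitfall is language-agnostic, and a minimal sketch of it in Python (illustrative names, not the original C#) looks like this:

```python
class User:
    def __init__(self, user_id, name):
        self.user_id = user_id
        self.name = name

store = [User(1, "UserA")]

def broken_update(updated):
    found = next(u for u in store if u.user_id == updated.user_id)
    found = updated   # rebinds the local name only; the list is untouched
    return True

def working_update(updated):
    for i, u in enumerate(store):
        if u.user_id == updated.user_id:
            store[i] = updated   # actually replaces the stored element
            return True
    return False

broken_update(User(1, "Renamed"))
print(store[0].name)   # still "UserA"
working_update(User(1, "Renamed"))
print(store[0].name)   # now "Renamed"
```

In C# the same distinction applies to reference-type variables: `_user` and the list slot are two separate references, and overwriting one does not change the other.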
Truth be told, I can see two bugs in this implementation. I'll leave it to you to fix your tests and actually find them yourself. However, it's bad enough that if you wrote this in an interview I probably wouldn't hire you. One of the bugs makes it look like you are very confused about the semantics of the language you are using. Maybe it's just a careless error (they happen to the best of us), but it would give me grave doubts. | {
"domain": "codereview.stackexchange",
"id": 3132,
"tags": "c#, .net"
} |
Are superposition and time-evolution of a quantum system unrelated? | Question: Consider a single particle (a single qubit if you will) in some arbitrary state $|\psi\rangle$ and an eigenvector $|\lambda\rangle$ corresponding to the eigenvalue $\lambda.$ Consider the time evolution of this system in some infinitesimal time $\epsilon$ to be given by a unitary operator $U$: $|\psi(\epsilon)\rangle = U|\psi(0)\rangle$.
Time-evolution preserving the inner product:
Consider the following statements, holding that time evolution preserves the inner product $\langle\psi|\lambda\rangle$. I think $|\lambda\rangle$ is non-evolvable, i.e. $|\lambda(\epsilon)\rangle = |\lambda(0)\rangle$, or $U$ does nothing on it. Then the following are true:
$\langle\psi(\epsilon)| = \langle\psi(0)|U^{\dagger}$.
$\implies$ $\langle\psi(\epsilon)|\lambda(\epsilon)\rangle = \langle\psi(0)|U^{\dagger}U|\lambda(0)\rangle = \langle\psi(0)|\lambda(0)\rangle$.
So when you measure $|\psi(\epsilon)\rangle$, you get $|\lambda\rangle$ with probability $|\langle\psi(\epsilon)|\lambda(\epsilon)\rangle|^{2}$ which is equal to $|\langle\psi(0)|\lambda(0)\rangle|^{2}$.
Superposition
If you start with $|\psi(0)\rangle = |0\rangle$ and apply Hadamard operation to it, you get $|\psi(\epsilon)\rangle = \frac{|0\rangle + |1\rangle}{2^{1/2}}$. If you consider $|\lambda(0)\rangle = |\lambda(\epsilon)\rangle = |0\rangle$, then $|\langle\psi(0)|\lambda(0)\rangle|^{2} = 1$ and $|\langle\psi(\epsilon)|\lambda(\epsilon)\rangle|^{2} = \frac{1}{2}$.
Question
Have I done something wrong or is there some problem in my understanding of the time evolution of a quantum system? Is Hadamard-ing a state not considered in the class of operations that qualify as time evolution of a quantum system? In short, why are these probabilities different?
Answer: The Hadamard gate is generated by a hamiltonian which is not diagonal in the computational basis you're using, so it is not true that you're comparing against an eigenstate of the hamiltonian in use.
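A small numerical illustration of this point (my sketch, using $H_d^2 = I$ to write $e^{-i\theta H_d} = \cos\theta\, I - i\sin\theta\, H_d$, which reaches the Hadamard gate up to a global phase at $\theta = \pi/2$): the overlap with an eigenvector of the generating Hamiltonian is conserved, while the overlap with $|0\rangle$ is not.

```python
import numpy as np

Hd = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard: Hermitian and unitary

def U(theta):
    # exp(-i*theta*Hd), using Hd @ Hd = I:  cos(theta) I - i sin(theta) Hd
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * Hd

psi0 = np.array([1.0, 0.0])        # |0>
w, v = np.linalg.eigh(Hd)
lam_ham = v[:, 0]                  # eigenvector of the generating Hamiltonian

for theta in np.linspace(0.0, np.pi / 2, 5):
    psi_t = U(theta) @ psi0
    keep = abs(np.vdot(lam_ham, psi_t))   # conserved overlap
    drift = abs(np.vdot(psi0, psi_t))     # overlap with |0>, NOT conserved
    print(f"theta={theta:.2f}  |<lam|psi>|={keep:.4f}  |<0|psi>|={drift:.4f}")
```

So the invariance argument in the question is fine, but it only applies to eigenvectors of the Hamiltonian that generates $U$, and for the Hadamard gate $|0\rangle$ is not one of them.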
In other words, the reason you're getting green on one side and orange on the other is that you're comparing apples and oranges. | {
"domain": "physics.stackexchange",
"id": 59512,
"tags": "quantum-mechanics, hilbert-space, superposition, time-evolution"
} |
Ring expansion of two fused rings to a larger ring | Question: I saw the following reaction mechanism in the paper Tetrahedron Lett. 1976, 17 (33), 2869–2872:
I'm not able to understand the following parts:
How did the conversion of 27 to 29 take place? I've never seen that kind of a ring expansion, and a detailed explanation would help - maybe including the necessary transition state(s) and why it happens.
27 to 28 and 30 to 31 aren't clear as well; why should such rearrangements happen and how do they happen? What is the mechanism?
Answer: I think a possible mechanism for the intermediate conversions could be as follows. Here the $\ce{R}$ is actually $\ce{-OCHO}$. | {
"domain": "chemistry.stackexchange",
"id": 9896,
"tags": "organic-chemistry, reaction-mechanism, carbonyl-compounds, carbocation"
} |
Use bloom to generate a deb and a -dev.deb package | Question:
I have implemented a library with some functionalities. As an example, let's say this is my library. Whenever I compile with:
mkdir build
cd build
cmake ..
make
and install this library:
sudo make install
another package can find it and use it.
Now I want to create Debian packages for this library. If I use the ROS approach (bloom) with:
bloom-generate debian --os-name ubuntu --os-version jammy
fakeroot debian/rules binary
A binary package is created, and if I install it everything looks OK. However, instead of one package I want to create two: one for the runtime library and one for the development tools. This way, clients that are not developing new apps on top of the library do not need to install all the headers, cmake files, etc.
Is there any way of doing such a thing? I would like to stick to ROS bloom as it simplifies the packaging dependencies a lot (both runtime and build time).
Originally posted by apalomer on ROS Answers with karma: 318 on 2023-01-24
Post score: 0
Answer:
No, this is not supported by Bloom at this time.
See Generating ‘dev’ and runtime artefacts from ROS packages on ROS Discourse for a related discussion.
Originally posted by gvdhoorn with karma: 86574 on 2023-01-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38242,
"tags": "ros, ros2, deb"
} |
How to subscribe to openni topics inside a package | Question:
Hi all,
I have ros-fuerte, ros-fuerte-openni-camera, ros-fuerte-openni-launch installed. I can launch openni from terminal and using rostopic echo /topic_name, I can get all data. The data is also visible on rviz.
Now, I need to use this data inside a program (for example in a cpp file) and I am unable to find out how. Here is what I did:
I created a package using roscreate-pkg and tried putting some cpp files inside src; everything compiled and ran successfully after amending my CMakeLists.txt. I successfully subscribed to some turtlesim topics inside the cpp file.
Now I tried to subscribe to an openni topic (cpp file attached).
My first question is: why is openni-camera located at ros/stacks/openni rather than ros/include/... like other packages (opencv and pcl)? Does it make any difference where openni is actually installed?
Because openni is installed at ros/stacks/, I have to manually give the path to include its .h files (see the #include in the cpp file). Is this the right approach?
The cpp file just does not compile, with the error (fatal error: XnCppWrapper.h: No such file or directory). I tried to search my computer and couldn't find XnCppWrapper.h. Where is that file located, and why can't my program find it?
Do I need to set a path for ROS to use openni while compiling a cpp file inside a package?
I am pretty confused and have spent a lot of time on it without any success. Any help to clarify will be appreciated a lot.
#include "ros/ros.h"
#include "std_msgs/String.h"
#include </opt/ros/fuerte/stacks/openni_camera/include/openni_camera/openni_depth_image.h>

void OpenniCallback(const sensor_msgs::PointCloud2::ConstPtr& msg)
{
    ROS_INFO("I heard")//: [%f]", msg->linear, msg->angular);
    //ROS_INFO("I heard");
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "openni_subscriber");
    ros::NodeHandle n;
    ros::Subscriber sub = n.subscribe("/camera/depth/points", 1000, OpenniCallback);
    ros::spin();
    return 0;
}
Originally posted by Latif Anjum on ROS Answers with karma: 79 on 2014-01-27
Post score: 0
Original comments
Comment by Latif Anjum on 2014-01-27:
#include "ros/ros.h"
#include "std_msgs/String.h"
#include </opt/ros/fuerte/stacks/openni_camera/include/openni_camera/openni_depth_image.h>
void OpenniCallback(const sensor_msgs::PointCloud2::ConstPtr& msg)
{
ROS_INFO("I heard")//: [%f]", msg->linear, msg->angular);
//ROS_INFO("I heard");
}
int main(int argc, char **argv)
{
ros::init(argc, argv, "openni_subscriber");
ros::NodeHandle n;
ros::Subscriber sub = n.subscribe("/camera/depth/points", 1000, OpenniCallback);
ros::spin();
return 0;
}
Answer:
It doesn't really matter where openni is installed. As long as your manifest, CMakeLists, and ROS environment variables are correct, you shouldn't have a problem. Also, you should not be having to include paths to get things to compile.
What you are trying to do doesn't really have much to do with the openni package. The openni package will connect to your depth camera and produce the topic /camera/depth/points. It seems that you are trying to subscribe to that topic. So your package should depend on packages like sensor_msgs, pcl, and maybe pcl_ros.
Here is a manifest.xml
<package>
<depend package="roscpp"/>
<depend package="sensor_msgs"/>
</package>
Here is a CMakeLists.txt
cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
set(ROS_BUILD_TYPE Release)
rosbuild_init()
#set the default path for built executables to the "bin" directory
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
#set the default path for built libraries to the "lib" directory
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)
rosbuild_add_executable(openni_subscriber src/openni_subscriber.cpp)
Here is a slightly modified version of your code that compiles and works correctly:
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void OpenniCallback(const sensor_msgs::PointCloud2ConstPtr &msg)
{
    ROS_INFO("I heard");
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "openni_subscriber");
    ros::NodeHandle n;
    ros::Subscriber sub = n.subscribe("/camera/depth/points", 1, OpenniCallback);
    ros::spin();
    return 0;
}
Check out this page to see examples of how to subscribe to Point Cloud messages, and this page for more complete examples on using PCL with ROS.
Originally posted by jarvisschultz with karma: 9031 on 2014-01-27
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Latif Anjum on 2014-01-27:
Thank you sir. You did solve the problem. Looking forward to your guidance in future as well.
Comment by Latif Anjum on 2014-01-27:
The statement #include <pcl_ros/point_cloud.h> gives an error. It says no such file. This file is actually located in my case at pcl-1.5/pcl/point_cloud.h. I gave this path but there are now other errors. Same is the case with #include <pcl/point_types.h>. Any guidance? | {
"domain": "robotics.stackexchange",
"id": 16780,
"tags": "kinect"
} |
About Vanishing of BRST commutator in path integral | Question: In Witten's paper Topological Quantum Field Theory, about formula (3.2), the property $\langle\{Q,\mathcal{O}\}\rangle=0$ depends on the assertion that,
$$Z_{\varepsilon}(\mathcal{O})= \int \mathcal{D}X \exp(\varepsilon Q) \left[\exp\left(-\frac{\mathcal{L'}}{e^2}\right)\mathcal{O} \right]$$
is independent on $\varepsilon$. Where does the assertion come from?
Answer: Witten clearly writes the justification just on the line above the equation (3.2): the integral is independent because the integration measure is invariant under supersymmetry – the symmetry generated by $Q$.
Just to be sure, $Q$ is the infinitesimal generator which is why $\exp(\varepsilon Q)$ is a finite transformation generated by this generator: $\varepsilon$ is the argument ("Grassmann angle") of the transformation. And $\exp(\varepsilon Q) [{\mathcal A}]$ is the transformed operator ${\mathcal A}$ by this transformation and the integral is a device that produces a scalar out of a function of $X$.
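Spelled out to first order (my paraphrase of the standard argument, with $\varepsilon$ anticommuting so the expansion of $\exp(\varepsilon Q)$ terminates):

```latex
% expand exp(\varepsilon Q) to first order (\varepsilon anticommuting):
Z_{\varepsilon}(\mathcal{O})
  = \int \mathcal{D}X\, e^{-\mathcal{L}'/e^{2}}\,\mathcal{O}
  + \varepsilon \int \mathcal{D}X\, Q\!\left[ e^{-\mathcal{L}'/e^{2}}\,\mathcal{O} \right]
% Q-invariance of \mathcal{D}X makes the second integral vanish; and since
% \mathcal{L}' is Q-invariant,
Q\!\left[ e^{-\mathcal{L}'/e^{2}}\,\mathcal{O} \right]
  = e^{-\mathcal{L}'/e^{2}}\,\{Q,\mathcal{O}\}
\quad\Longrightarrow\quad
\left\langle \{Q,\mathcal{O}\} \right\rangle = 0 .
```

$Q$-invariance of the measure kills the total-$Q$-variation integral, so $Z_\varepsilon$ equals its $\varepsilon = 0$ value; combined with the $Q$-invariance of $\mathcal{L}'$, the order-$\varepsilon$ statement is exactly $\langle\{Q,\mathcal{O}\}\rangle = 0$.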
The independence on $\varepsilon$ holds because the SUSY-transformed integral of the SUSY-transformed (operator-valued) function is the same thing as the original integral of the SUSY-transformed function: I could have erased the adjective "SUSY-transformed" in front of the integral because the integration is SUSY-invariant. Because it doesn't matter whether we transform the integrand by $\exp(\varepsilon Q)$, it's the same thing as saying that the integral is independent of $\epsilon$ because it has the same value as the value for $\varepsilon=0$. | {
"domain": "physics.stackexchange",
"id": 6998,
"tags": "string-theory, path-integral, topological-field-theory, brst"
} |
Angular Momentum and Average Torque | Question:
Refer to number 6. This is the one I'm stuck on. So angular momentum is conserved, right? So initial angular momentum is equal to final angular momentum. Initial is 7.87, so final must be 7.87, right? And so average torque is just change in angular momentum / change in time, so 0/7 = 0. What am I doing wrong?
Answer: The angular momentum of the rod is 0 at the beginning because it is not rotating.
I would proceed like that:
by conservation of angular momentum, calculate the final rotational speed of the rod
with that given, calculate the final angular momentum of the rod
You have that the torque gives the variation (with time) of the angular momentum. So if the torque is constant you just have "torque = angular momentum / $\Delta t$".
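Those three steps can be sketched numerically. Every number below is a placeholder, since the original figure from the problem isn't reproduced here (only the 7.87 and the 7 come from the question):

```python
# All values are assumptions for illustration; the actual problem
# statement is in an image that isn't available in the post.
L_initial = 7.87   # kg*m^2/s, angular momentum delivered to the rod (from the question)
I_rod = 1.5        # kg*m^2, moment of inertia of the rod (assumed)
dt = 7.0           # s, spin-up time (the question's "7")

# Step 1: conservation of angular momentum -> final rotational speed of the rod
omega_final = L_initial / I_rod

# Step 2: final angular momentum of the rod (it starts at rest, so L_rod(0) = 0)
L_rod_final = I_rod * omega_final

# Step 3: average torque = change in the ROD's angular momentum over time
torque_avg = (L_rod_final - 0.0) / dt

print(f"omega_final = {omega_final:.3f} rad/s, average torque = {torque_avg:.3f} N*m")
```

The point survives the made-up numbers: the rod's own angular momentum changes from 0 to L_initial, so the average torque is L_initial / dt, not 0 / dt.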
I can be more specific if you want. Tell me where you find a problem.
Edit: Apparently the steps are done in the previous questions, so this should just be a "put everything together" question. | {
"domain": "physics.stackexchange",
"id": 100,
"tags": "classical-mechanics, homework-and-exercises, angular-momentum"
} |
How to use 1 for loop in a method instead of using the same for loop over and over again | Question: Since I'm still learning C#, I've made an exercise, but it seems like I'm using the same code over and over again.
I do realize I have to use a helper method to get rid of the 'useless' code.
How can I put my for loops into a helper method ?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace NewTryoutExamen
{
    class Ploeg
    {
        private List<Werknemer> werknemers = new List<Werknemer>();

        public void VoegWerknemerToe(Werknemer w)
        {
            for (int i = 0; i < werknemers.Count; i++)
            {
                if (w.Id == werknemers[i].Id)
                    throw new DuplicateWaitObjectException();
            }
            werknemers.Add(w);
        }

        public uint Prestatie(uint werknemerId, uint hoeveelheid)
        {
            for (int i = 0; i < werknemers.Count; i++)
            {
                if (werknemerId == werknemers[i].Id)
                    return werknemers[i].VoegVerkopenToe(hoeveelheid);
            }
            throw new KeyNotFoundException();
        }

        public uint GeefAantalEenheden(uint werknemerId)
        {
            for (int i = 0; i < werknemers.Count; i++)
            {
                if (werknemerId == werknemers[i].Id)
                    return werknemers[i].AantalEenheden;
            }
            throw new KeyNotFoundException();
        }

        public double GeefTotaalLoon(uint werknemerId)
        {
            for (int i = 0; i < werknemers.Count; i++)
            {
                if (werknemerId == werknemers[i].Id)
                    return werknemers[i].TotaalLoon();
            }
            throw new KeyNotFoundException();
        }

        public Werknemer ToonWerknemerDetails(uint werknemerId)
        {
            for (int i = 0; i < werknemers.Count; i++)
            {
                if (werknemerId == werknemers[i].Id)
                    return werknemers[i];
            }
            throw new KeyNotFoundException();
        }

        public double Basisloon(uint werknemerId)
        {
            for (int i = 0; i < werknemers.Count; i++)
            {
                if (werknemerId == werknemers[i].Id)
                {
                    return werknemers[i].Basisloon;
                }
            }
            throw new KeyNotFoundException();
        }

        public double Commissie(uint werknemerId)
        {
            for (int i = 0; i < werknemers.Count; i++)
            {
                if (werknemerId == werknemers[i].Id)
                {
                    return werknemers[i].Commissie;
                }
            }
            throw new KeyNotFoundException();
        }

        public override string ToString()
        {
            string res = "";
            string Id;
            for (int i = 0; i < werknemers.Count; i++)
            {
                Id = string.Format("{0:0000}", werknemers[i].Id);
                res += Id + " - " + werknemers[i].Naam + " " + werknemers[i].Voornaam + " ( eenheden: " + werknemers[i].AantalEenheden + ")" + Environment.NewLine;
            }
            return res;
        }
    }
}
Answer: You should definitely look into LINQ. It saves a lot of space.
A few reworks:
for (int i = 0; i < werknemers.Count; i++)
{
    if (w.Id == werknemers[i].Id)
        throw new DuplicateWaitObjectException();
}
becomes
if (werknemers.Any(wn => w.Id == wn.Id))
{
    throw new DuplicateWaitObjectException();
}
And
for (int i = 0; i < werknemers.Count; i++)
{
    if (werknemerId == werknemers[i].Id)
        return werknemers[i].VoegVerkopenToe(hoeveelheid);
}
becomes
return werknemers.Where(w => w.Id == werknemerId).FirstOrDefault().VoegVerkopenToe(hoeveelheid);
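Since the question literally asks how to put the repeated loop into one helper method, here is the shape of that refactor sketched in Python (field names and values are illustrative, not taken from the original C#): every public method delegates the search to a single lookup helper.

```python
class KeyNotFoundError(Exception):
    pass

class Ploeg:
    def __init__(self):
        self._werknemers = []

    def _find_by_id(self, werknemer_id):
        # The single shared loop; every public method delegates to it.
        for w in self._werknemers:
            if w["id"] == werknemer_id:
                return w
        raise KeyNotFoundError(werknemer_id)

    def voeg_werknemer_toe(self, w):
        try:
            self._find_by_id(w["id"])
        except KeyNotFoundError:
            self._werknemers.append(w)
        else:
            raise ValueError("duplicate id")

    def geef_basisloon(self, werknemer_id):
        return self._find_by_id(werknemer_id)["basisloon"]

    def geef_commissie(self, werknemer_id):
        return self._find_by_id(werknemer_id)["commissie"]

ploeg = Ploeg()
ploeg.voeg_werknemer_toe({"id": 1, "basisloon": 1500.0, "commissie": 0.1})
print(ploeg.geef_basisloon(1))
```

In C# the same idea would be a single private lookup, e.g. a `private Werknemer Find(uint id)` that returns the match or throws `KeyNotFoundException`, with each public method calling it.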
I'll leave the rest as an exercise :) | {
"domain": "codereview.stackexchange",
"id": 13647,
"tags": "c#"
} |
Can average momentum be imaginary? | Question: I am new to quantum physics. We just learnt about wave equations, observables and expectation values today. What really caught my attention was the expectation value of average momentum and energy:
$$\langle p \rangle = \int_{-\infty}^\infty \text{d}x\,\,\, \psi^*(x,t) \frac{\hbar}{i}\frac{\partial}{\partial x}\psi(x,t)$$
$$\langle H \rangle = \int_{-\infty}^\infty \text{d}x\,\,\, \psi^*(x,t) i\hbar\frac{\partial}{\partial t}\psi(x,t)$$
For the first equation, we take the $\hbar/i$ outside the integral. Obviously, the value of the integral has to be either real or complex. If it is complex, then it's completely fine as both the $i$s get cancelled out. But what if it's real? I read online that since momentum is represented by a Hermitian operator, all of its eigenvalues are real. Does this mean that the integral in this case is always zero?
I have the same question regarding the average energy. If the integral is complex, then nothing to worry about. But if it is real, then does it have to be zero? On the other hand, if the integral can be a real non-zero value, what does it mean for the average momentum to be imaginary?
I'm not sure what exactly I'm missing here. It would be great if someone could help me out with this. Please note, I'm a complete beginner to this whole concept (as mentioned in the beginning).
Answer: The integral is indeed zero, and it's quite easy to see why, since if $\psi(x)$ is real, then $\psi^*(x) = \psi(x)$, so:
$$\langle p \rangle = \frac{\hbar}{i}\int_{-\infty}^\infty \text{d}x\,\,\, \psi(x) \frac{\partial \psi}{\partial x} = \frac{\hbar}{2i}\int_{-\infty}^\infty \text{d}x\,\,\,\frac{\partial }{\partial x} \psi^2(x) = \frac{\hbar}{2i} \times \psi^2(x)\Bigg|_{-\infty}^\infty.$$
Since the wavefunction is real, $\psi^2(x) \equiv |\psi(x)|^2$, the probability density. And we know this to be zero at $\pm \infty$, since that is one of the requirements for a wavefunction.
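Both facts are easy to check numerically (my sketch: a discretized $\langle p\rangle$ with $\hbar = 1$ and an assumed Gaussian profile). A real wavefunction gives a vanishing integral, and a complex one gives a purely imaginary integral, so $\langle p\rangle$ comes out real either way:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def expect_p(psi):
    # <p> = (hbar/i) * integral of psi* dpsi/dx  (simple Riemann sum)
    dpsi = np.gradient(psi, dx)
    return (hbar / 1j) * np.sum(np.conj(psi) * dpsi) * dx

# Real, normalized Gaussian: the integrand is a total derivative, so <p> -> 0
psi_real = np.exp(-x**2 / 2)
psi_real = psi_real / np.sqrt(np.sum(np.abs(psi_real)**2) * dx)
p_real = expect_p(psi_real)

# The same Gaussian boosted by exp(ikx): <p> comes out ~ hbar*k, and real
k = 3.0
psi_boost = psi_real * np.exp(1j * k * x)
p_boost = expect_p(psi_boost)

print(p_real)   # ~ 0
print(p_boost)  # real part ~ 3, imaginary part negligible
```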
In general, the expectation value of any Hermitian operator is always real. This is a standard exercise in introductory quantum mechanics courses. It boils down to showing that the expectation value of an operator is a probability-weighted average of its eigenvalues, and in the case of Hermitian operators, these are all real. | {
"domain": "physics.stackexchange",
"id": 75139,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, complex-numbers, observables"
} |
Can we approximate the number of words accepted by an NFA? | Question: Let $M$ be an acyclic NFA.
Since $M$ is acyclic, $L(M)$ is finite.
In a related question, it was suggested that exact counting of the number of words accepted by $M$ is $\#P$-Complete.
The second answer for that question provides a counting algorithm, but only works for unambiguous NFAs (where every word is accepted by at most a single path).
Given an NFA $M$, can we approximate $|L(M)|$ in polynomial time?
As automata is a highly studied subject, I was surprised that I couldn't find anything about this, so if someone knows of a reference it'll be great :).
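To make the setting concrete, here is a toy instance with a brute-force count (illustrative, in Python): a 4-state acyclic NFA over $\{a,b\}$ in which the word $aa$ is accepted along two different paths, which is precisely the ambiguity that makes path counting differ from word counting.

```python
from itertools import product

# Toy acyclic NFA over {a, b}: states 0..3, start 0, accepting {3}.
# delta maps (state, symbol) -> set of next states; all edges go
# "forward", so the NFA is acyclic and L(M) is finite.
delta = {
    (0, "a"): {1, 2},
    (0, "b"): {2},
    (1, "a"): {3},
    (2, "a"): {3},
    (2, "b"): {3},
}
start, accepting, n_states = 0, {3}, 4

def accepts(word):
    current = {start}
    for sym in word:
        current = set().union(*(delta.get((q, sym), set()) for q in current))
        if not current:
            return False
    return bool(current & accepting)

# Acyclic => every accepted word is shorter than the number of states,
# so brute force over all words up to that length is exact (if slow).
language = {
    "".join(t)
    for n in range(n_states)
    for t in product("ab", repeat=n)
    if accepts("".join(t))
}
print(sorted(language), len(language))
```

Here "aa" is reached both via 0-1-3 and via 0-2-3, so naive accepting-path counting would report 5 paths for the 4 words in $L(M)$.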
Answer: There exists a FPRAS (Fully Polynomial Randomized Approximation Scheme) for the problem of counting the words of length $n$ accepted by a NFA in the general case (without restricting to the acyclic NFA case).
The result was published this year at STOC, by Arenas, Croquevielle, Jayaram and Riveros.
Here is the talk
https://youtu.be/tyK-uujHMLU
and the paper
https://arxiv.org/abs/1906.09226 | {
"domain": "cstheory.stackexchange",
"id": 5382,
"tags": "ds.algorithms, fl.formal-languages, approximation-algorithms, automata-theory, counting-complexity"
} |
Why can't the relativistic version of the position operator in QM/QFT be interpreted as a probability amplitude? | Question: Specifically, I've been trying to understand why it is that the standard position operator in QM is defined as a non-relativistic limit, meaning massless particles can't be localized the same way massive ones can, since they have no non-relativistic limit. I asked a similar question last week about what goes wrong in the math if we try to localize photons and I did get a decent answer saying that, unlike the non-relativistic version, the relativistic version of the position operator can't be interpreted as a probability amplitude. But I don't understand why this is the case. Why does having an extra energy term in the denominator mean the expression can't be a probability amplitude? Does that somehow prevent it from being normalized?
Edit: Note that here I'm specifically asking why the relativistic position operator can't be interpreted as a probability amplitude, whereas this other question I asked, which some commenters have suggested it's a duplicate of, is asking about what the actual definition of the relativistic position operator is and how it differs from the NR version. That's a much more general question, which didn't elicit many good answers. This question is much more specific and focused. I did get one decent answer to the other question, and so editing that question drastically enough to focus it just on this specific issue would make the already posted answer irrelevant, which wouldn't be fair to that answerer.
Answer: The issue is very simple, but it can be surprisingly hard to find a clear discussion, because standard quantum field theory books choose to avoid discussing position operators at all, for reasons that will be clear below. Mathematical physicists do still discuss it today, but they like to use lots of jargon, which obscures the simplest and most physically important cases. I will give a short answer using notation familiar from undergraduate quantum mechanics.
The setup
Let $|\mathbf{p} \rangle$ be unit-normalized momentum eigenstates for a relativistic particle. In standard relativistic quantum field theory textbooks, one begins with field operators
$$\hat{\psi}(\mathbf{x}) = \int_{\mathbf{p}} \frac{1}{\sqrt{2 E_p}} \, (a_{\mathbf{p}} e^{i \mathbf{p} \cdot \mathbf{x}} + b_{\mathbf{p}}^\dagger e^{-i \mathbf{p} \cdot \mathbf{x}})$$
and defines relativistically normalized states by their action,
$$|\mathbf{x} \rangle_{\text{rel}} = \hat{\psi}^\dagger (\mathbf{x}) |0 \rangle = \int_{\mathbf{p}} \frac{e^{- i \mathbf{p} \cdot \mathbf{x}}}{\sqrt{2 E_p}} \, |\mathbf{p} \rangle.$$
Alternatively, one may define the so-called Newton-Wigner states, which match the definition in nonrelativistic quantum mechanics,
$$|\mathbf{x} \rangle_{\text{nw}} = \int_{\mathbf{p}} e^{- i \mathbf{p} \cdot \mathbf{x}}\, |\mathbf{p} \rangle.$$
Using each set of states, one can define a position operator,
$$\hat{\mathbf{x}}_{\text{rel}} |\mathbf{x} \rangle_{\text{rel}} = \mathbf{x} |\mathbf{x} \rangle_{\text{rel}}, \quad \hat{\mathbf{x}}_{\text{nw}} |\mathbf{x} \rangle_{\text{nw}} = \mathbf{x} |\mathbf{x} \rangle_{\text{nw}}$$
as well as position-space wavefunctions
$$\psi(\mathbf{x})_{\text{rel}} = {}_{\text{rel}}\langle \mathbf{x} | \psi \rangle, \quad \psi(\mathbf{x})_{\text{nw}} = {}_{\text{nw}} \langle \mathbf{x} | \psi \rangle. $$
So it's easy to define two separate quantities, each of which one might naively call "the amplitude for a particle to be at position $\mathbf{x}$".
The problems
So what's the problem? Why don't these simple expressions show up in introductory textbooks?
The relativistically normalized position states are not orthogonal, as
$${}_{\text{rel}} \langle \mathbf{y} | \mathbf{x} \rangle_{\text{rel}} = \int_{\mathbf{p}} \frac{e^{i \mathbf{p} \cdot (\mathbf{y} - \mathbf{x})}}{2 E_p} \sim e^{-m |\mathbf{x} - \mathbf{y}|}$$
for $|\mathbf{x} - \mathbf{y}| \gg 1/m$. So you can't say a particle is definitely localized at $\mathbf{x}$ if it's in the state $|\mathbf{x} \rangle_{\text{rel}}$, because such a particle also has some amplitude to be in the state $|\mathbf{y}\rangle_{\text{rel}}$.
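As a sanity check on that exponential tail (my addition, not in the original answer): in one dimension the analogous overlap $\int \frac{dp}{2\pi}\, \frac{e^{ip\Delta}}{2E_p}$ with $E_p = \sqrt{p^2 + m^2}$ equals $K_0(m\Delta)/2\pi$, a modified Bessel function with the same $e^{-m\Delta}$ falloff. A crude Riemann sum over the integral representation $K_0(z) = \int_0^\infty \cos(z \sinh u)\, du$ shows the rapid decay:

```python
import math

def K0(z, U=6.0, n=60000):
    """Crude midpoint-rule evaluation of K_0(z) = int_0^inf cos(z*sinh u) du,
    truncated at u = U (truncation error ~ 1 / (z * cosh U))."""
    du = U / n
    return sum(math.cos(z * math.sinh((i + 0.5) * du)) for i in range(n)) * du

k1 = K0(1.0)   # tabulated value: K_0(1) ~ 0.4210
k3 = K0(3.0)   # tabulated value: K_0(3) ~ 0.0347, already ~12x smaller
print(k1, k3)
```

The same qualitative behavior holds for the 3D integral quoted above: the non-orthogonality only becomes negligible on scales much larger than the Compton wavelength $1/m$.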
The Newton-Wigner position states don't have this problem,
$${}_{\text{nw}} \langle \mathbf{y} | \mathbf{x} \rangle_{\text{nw}} = \delta(\mathbf{x} - \mathbf{y}).$$
But they are not Lorentz covariant: if you boost a particle that's in a Newton-Wigner position state, then it won't remain in such a state. That also seems to contradict what it means to be perfectly localized.
The worst issue is that both types of position-space wavefunction obey the equation of motion
$$i \dot{\psi}(\mathbf{x}) = \sqrt{- \nabla^2 + m^2} \, \psi(\mathbf{x})$$
which is simply the Klein-Gordon equation restricted to positive frequencies. This is fatal, as solutions of this equation generically spread superluminally! For instance, if the support of $\psi(\mathbf{x})$ is restricted to a finite region, then after an infinitesimal time, it will spread out infinitely far. So you cannot say that $|\psi(\mathbf{x})|^2$ is the probability density to detect a particle at $\mathbf{x}$. If that were actually true, then one could signal faster than light, violating causality.
Because of this final issue, it is very difficult to see how relativistic quantum mechanics can be causal, at the level of particle operators. Causality is only straightforward if you write all interactions in terms of local products of Lorentz covariant fields -- which is the approach taken by all modern textbooks. The tradeoff is that this language doesn't let you speak of the precise positions of particles. | {
"domain": "physics.stackexchange",
"id": 95397,
"tags": "quantum-mechanics, quantum-field-theory, special-relativity, quantum-electrodynamics"
} |
How might clock synchronization work with RSA SecurID tokens? | Question:
My workplace uses these things to generate one-time passwords which only work within a short time period. I have always been curious about how the clock synchronisation between the authentication server and the token might work. I'm not sure whether there is any communication between the token and the outside world at all, but I would doubt it because they are small, light, and they must make these things to be as cheap as possible for lowest unit cost. So, maybe there isn't any synchronization at all? But then, wouldn't the clocks gradually drift apart, especially if the battery was running down, eventually resulting in a bricked token?
Answer: The authentication server keeps track of the clock drift in each token and adjusts its expected code calculations based on that. See http://www.rsa.com/products/securid/sb/AS51_SB_0607-lowres.pdf; search for "clock drift". | {
"domain": "physics.stackexchange",
"id": 5635,
"tags": "time"
} |
Classifying a branch as master or dev | Question: I am dealing constantly with the following code.
The thing is that it literally makes no sense to me to have all the spacing and divisions in the if-else when the code is so small that it can be a block by itself.
When I see a statement set off by line breaks, I expect something difficult to understand or something that deserves special attention when reading, not just a line break after every function or if definition by default.
if (branchInfo.name === BRANCHNAMES.MASTER) {
branchInfo.isMaster = true;
} else if (
branchInfo.name === BRANCHNAMES.RELEASE ||
branchInfo.name === BRANCHNAMES.HOTFIX
) {
branchInfo.isDev = true;
}
Given that the line doesn't exceed 120 characters and that these are small truthy statements that don't deserve their own function, I want to remove that unnecessary split in the else if, because, like the other line breaks, it doesn't seem necessary and in my opinion just leads to weird code.
if (branchInfo.name === BRANCHNAMES.MASTER) {
branchInfo.isMaster = true;
} else if (branchInfo.name === BRANCHNAMES.RELEASE || branchInfo.name === BRANCHNAMES.HOTFIX) {
branchInfo.isDev = true;
}
Am I being over paranoid about it? Is this actually a good practice?
PS: I know it's good practice to pick a coding style and stick to it, but really, I can't stand some things that just make code look weird. Sometimes too many line breaks also lead to unreadable code, don't you think?
Answer: Personally, I see no problem doing it either way. I prefer to keep my lines shorter than that but 120ish characters isn't unreasonable.
Another option you have is to store those values in some other variables and use those instead.
let isMaster = branchInfo.name === BRANCHNAMES.MASTER;
let isRelease = branchInfo.name === BRANCHNAMES.RELEASE;
let isHotfix = branchInfo.name === BRANCHNAMES.HOTFIX;
if (isMaster) {
branchInfo.isMaster = true;
} else if (isRelease || isHotfix) {
branchInfo.isDev = true;
}
Another way would be to use a switch statement with case fallthrough. Some people refuse to allow fallthrough so it's your call.
switch (branchInfo.name) {
case BRANCHNAMES.MASTER:
branchInfo.isMaster = true;
break;
case BRANCHNAMES.RELEASE:
case BRANCHNAMES.HOTFIX:
branchInfo.isDev = true;
}
Finally, I would consider just assigning directly.
branchInfo.isMaster = branchInfo.name === BRANCHNAMES.MASTER;
branchInfo.isDev = branchInfo.name === BRANCHNAMES.RELEASE ||
branchInfo.name === BRANCHNAMES.HOTFIX;
There's also one trick I use if I need to compare against multiple things at once.
let devBranches = [BRANCHNAMES.RELEASE, BRANCHNAMES.HOTFIX];
branchInfo.isMaster = branchInfo.name === BRANCHNAMES.MASTER;
branchInfo.isDev = devBranches.includes(branchInfo.name);
If you don't perform any extra work in either branch of the if statement, I would use the last or second to last option. If you need to perform additional work, I would use the additional variables to make it more readable. | {
"domain": "codereview.stackexchange",
"id": 27423,
"tags": "javascript"
} |
Intensity of a light beam at material transition | Question: The definition of the intensity of light is given as
$$I=0.5\varepsilon_0n_0\vert E\vert^2$$
Now, when transitioning from one material with $n_1=1$ to another one with $n_2=2$ at a normal incident angle, I will have a reflection coefficient of
$$r=\left\vert\frac{n_1 - n_2}{n_1 + n_2}\right\vert^2\approx0.11$$
which means that
$$I_2=0.89I_1$$
and
$$\begin{split}\vert E_2\vert^2&=\frac{2\cdot0.89\,I_1}{\varepsilon_0 n_2}\\
&=\frac{2\cdot0.89}{\varepsilon_0 n_2}\cdot\frac{\varepsilon_0 n_1\vert E_1\vert^2}{2}\\
&=0.445\vert E_1\vert^2\end{split}$$
If I interpret that correctly, that means that I will have a non-continuous transition between the magnitude of the electrical fields at the border of the material. Is that correct?
Answer: The transmitted wave into the second material will have a magnitude of $|E_2|^2=0.445|E_1|^2$ as you say (actually is $|E_2|^2=\displaystyle\frac{4}{9}|E_1|^2$ if you do the calculations without losing decimals). Therefore $|E_2|=\displaystyle\frac{2}{3}|E_1|$.
In the first material, however, you also have to take into account the reflected wave, which will have a magnitude of
$$|E_{reflected}|=\left|\frac{n_1-n_2}{n_1+n_2}\right||E_1|=\frac{1}{3}|E_1|.$$
This reflected wave has a phase shift of $\pi$, because the reflected wave always acquires a phase shift when light passes from an optically thinner medium to an optically thicker one ($n_1<n_2$). Hence the continuity of the component of $E$ parallel to the interface is fulfilled
$$|E_{incident}|-|E_{reflected}|=|E_{transmitted}|$$
or with your notation
$$|E_1|-|E_{reflected}|=|E_2|$$
$$|E_1|-\frac{1}{3}|E_1|=\frac{2}{3}|E_1|$$
where the minus sign accounts for the phase shift. | {
"domain": "physics.stackexchange",
"id": 68977,
"tags": "optics, refraction"
} |
Prints each letter of a string in different colors | Question: I'm practicing for my Technical Interview and I'm worried my code does not meet the bar. What can I do to improve this answer?
Here is the question: https://www.careercup.com/question?id=5739126414901248
Given
colors = ["red", "blue", "green", "yellow"]; and a string
str = "Lorem ipsum dolor sit amet";
write a function that prints each letter of the string in different colors. ex. L is red, o is blue, r is green, e is yellow, m is red, after the space, i should be blue.
My answer:
static void TestPrintColors()
{
string[] colors = new string[4] {"red", "blue", "green", "yellow"};
string str = "Lorem ipsum dolor sit amet";
PrintColors(colors, str);
}
static void PrintColors(string[] colors, string str)
{
char log;
ConsoleColor originalColor = Console.ForegroundColor;
int colorIndex = 0;
ConsoleColor currentColor = originalColor;
for (int i = 0; i < str.Length; i++)
{
log = str[i];
if (log == ' ')
{
Console.WriteLine(log);
continue;
}
switch(colors[colorIndex])
{
case "red":
currentColor = ConsoleColor.Red;
break;
case "blue":
currentColor = ConsoleColor.Blue;
break;
case "green":
currentColor = ConsoleColor.Green;
break;
case "yellow":
currentColor = ConsoleColor.Yellow;
break;
default:
currentColor = originalColor;
break;
}
colorIndex++;
if (colorIndex >= colors.Length)
{
colorIndex = 0;
}
Console.ForegroundColor = currentColor;
Console.WriteLine(log);
}
Console.ForegroundColor = originalColor;
}
Answer: Don't worry about performance.
You're writing a handful of strings to the console, and it takes milliseconds. When you start writing tens of thousands of strings, and notice it taking seconds, then you can start to look for optimizations. Until then, the cleanliness of your code is much more important.
Your function is too long.
This is a fifty-line function. It has the responsibilities of iterating through two lists in parallel, skipping spaces, parsing color names, setting the console color, writing to console, and resetting the console color when it's all done. That's a lot! Break it up into smaller functions.
Switch statements are ugly.
I don't mean that they are never appropriate, but ConsoleColor is an enum, and it's possible to parse enums (while ignoring case sensitivity). You should replace this switch statement with a function call.
Don't initialize variables until you need them.
With few exceptions, modern languages are very good about optimizing variable allocation. Putting char log = str[i] inside the loop will not result in additional memory usage, and it will save me (a potential reviewer or maintenance programmer) from having to think about that character before or after the loop.
Other tips...
You say this is practice for an interview, so it could be a good place for you to show off your knowledge of C#. With a little trouble, you could leverage Regular Expressions and LINQ to save you from manually manipulating array indexes. With a little more trouble, you could leverage IDisposable to ensure the original ForegroundColor is restored when all is said and done.
On the other hand, you could also shoot yourself in the foot attempting to do those things. If you don't honestly have in-depth knowledge about C#, it might be best just to aim for code that is as simple as possible. I think the best way to do that is to make small functions with clear names, to show you are thinking about the maintainability and reusability of your code. | {
"domain": "codereview.stackexchange",
"id": 34550,
"tags": "c#, performance"
} |
Doubt in vertex connectivity less than edge connectivity | Question: I recently started graph theory. I understand why the edge connectivity is at most the minimum degree (remove all edges incident to a minimum-degree vertex). My doubt is about the second part of the proof, when the given graph is not a complete graph: how do we prove there that the vertex connectivity is at most the edge connectivity?
I'm confused here. Please clarify my doubt.
Answer: I'm not sure I understand your question, so this is the specific question I'm answering:
Why is the vertex connectivity of a graph always less than or equal to its edge connectivity?
If that's wrong, please let me know in the comments or edit the question.
The vertex connectivity of a graph is defined as the smallest number of vertices you can delete to make the graph no longer connected. The edge connectivity is the same, except substitute "edge" for "vertex".
So, let's take a graph $G$, and say its edge connectivity is $e_c$. This means, by definition, there's some set of edges $E_c$, such that deleting all those edges will make $G$ no longer connected, and $|E_c| = e_c$ (there are $e_c$ different edges in the set).
Let's assume that the vertex connectivity is greater than $e_c$. I'm going to show that this leads to a contradiction.
Let's go through $E_c$ and take one arbitrary endpoint from each edge. (At random, or always take the tail, etc, doesn't matter.) Call this new set of endpoints $V_{Ec}$. We pick one per edge, so there are at most $e_c$ in total (fewer if several edges share the chosen endpoint).
Deleting a vertex includes deleting every edge that touches it, so deleting every vertex in $V_{Ec}$ must delete every edge in $E_c$. Thus, deleting every vertex in $V_{Ec}$ makes the graph disconnected.
But we only deleted at most $e_c$ different vertices! If the vertex connectivity is any larger than $e_c$, we have a contradiction—since the vertex connectivity is the smallest number of vertices you can delete to disconnect the graph. And we just showed you can do it with $e_c$ or fewer.
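Here is a toy, standard-library-only illustration of that construction (the example graph is mine): two triangles joined by a single bridge edge, so the edge cut $E_c$ is that one edge, and deleting one chosen endpoint per cut edge disconnects the graph:

```python
from collections import deque

# Two triangles {0,1,2} and {3,4,5} joined by the bridge (2,3): e_c = 1
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
E_c = [(2, 3)]                       # a minimum edge cut

def is_connected(edges, removed=()):
    adj = {}
    for u, v in edges:
        if u in removed or v in removed:
            continue                 # deleting a vertex deletes its incident edges
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    vertices = {x for e in edges for x in e} - set(removed)
    start = next(iter(vertices))
    seen, queue = {start}, deque([start])
    while queue:                     # plain BFS from an arbitrary start vertex
        for w in adj.get(queue.popleft(), ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == vertices

V_Ec = {u for u, v in E_c}           # one arbitrary endpoint per cut edge

print(is_connected(edges))                 # True: the original graph is connected
print(is_connected(edges, removed=V_Ec))   # False: <= e_c vertex deletions disconnect it
```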
Therefore, the vertex connectivity can never be larger than the edge connectivity. | {
"domain": "cs.stackexchange",
"id": 13962,
"tags": "graphs"
} |
Why is there a shared matrix W in graph attention networks instead of the query-key-value trio like in regular transformers? | Question: In section 2.1 of the Graph attention network paper
The graph attention layer is described as
as an initial step, a shared linear transformation, parametrized by a weight matrix $W \in \mathbb{R}^{F' \times F}$, is applied to every node. We then perform self-attention on the nodes—a shared attentional mechanism $a : \mathbb{R}^{F'} \times \mathbb{R}^{F'} \to \mathbb{R}$ computes attention coefficients $e_{ij} = a(Wh_i, Wh_j)$ that indicate the importance of node $j$'s features to node $i$.
(forgive my amateur formatting)
The function $a$ represents a fully connected neural network that takes the concatenated vector of $Wh_i$ and $Wh_j$, then outputs a single value which is pushed through a softmax to get the attention score $a_{ij}$. Then, the embedding vectors $Wh_j$ (for all neighboring nodes $j$ of $i$) are weighted by the attention scores and summed as usual.
If my understanding is correct, this means that matrix W represents the query, the key and the value transformation matrices all in one. But how can it do that? I feel that the difference may lie in the way the additive attention is calculated vs the dot-product one, but I cannot comprehend how this works. Why can we use a shared matrix here and not there? Is it theoretical or technical?
Answer: To my understanding, there isn't any theoretical reason why the query, key and value weight matrices are absent.
I feel that the difference may lie in the way the additive attention is calculated vs the dot-product one.
In the equations for the Graph Attention Network (GAT), there is for sure a difference between GAT and Transformers. However, I think you might be misunderstanding the relationship between the two. The attention mechanism in GATs is not equivalent to the one in Transformers. This means that one is not simply a rearrangement of the other; they are different inductive biases.
If my understanding is correct, this means that matrix W represents the query, the key and the value transformation matrices all in one. But how can it do that?
The equation states:
$$
e_{ij} = a(Wh_i,Wh_j )
$$
You are correct that a single weight matrix $W$ is used for the projection from the feature space to the feature map. However, this doesn't mean that $W$ represents the query, key, and value matrices, although it serves those roles if compared to the Transformer architecture.
I feel that you understand the forward pass of the GAT but lack intuition on why it is done like that. According to Aleks Gordic's commentary in the following video, he mentioned that he spoke with the authors of the paper. They said they had indeed tried using separate query, key, and value weight matrices but encountered overfitting in their experiments. They concluded that using a single weight matrix yielded more generalizable results. This makes sense when considering that the goal of GATs is not to construct a language model, which is a more complex problem. GATs most often operate on static graph data with features to learn. They are designed not to copy Transformers, but to adapt Transformer principles to graph data and objectives.
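For concreteness, here is a dependency-free sketch of that coefficient computation (toy sizes and random weights are mine; real implementations vectorise this and compute $Wh_i$ once per node). Note that the single shared $W$ projects both $h_i$ and $h_j$; there are no separate query/key/value matrices:

```python
import math
import random

random.seed(0)

F, F_out = 4, 3                                                     # input / output feature sizes
h = {i: [random.gauss(0, 1) for _ in range(F)] for i in range(3)}   # 3 toy node feature vectors
W = [[random.gauss(0, 1) for _ in range(F)] for _ in range(F_out)]  # the ONE shared weight matrix
a = [random.gauss(0, 1) for _ in range(2 * F_out)]                  # attention vector ("a" in the paper)

def matvec(M, x):
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def e(i, j):
    # e_ij = LeakyReLU(a . [W h_i || W h_j]) -- both arguments use the same W
    z = matvec(W, h[i]) + matvec(W, h[j])   # list concatenation plays the role of "||"
    return leaky_relu(sum(ak * zk for ak, zk in zip(a, z)))

# Softmax-normalise the coefficients over node 0's neighbourhood {0, 1, 2}
neighbours = [0, 1, 2]
exps = [math.exp(e(0, j)) for j in neighbours]
alpha = [x / sum(exps) for x in exps]       # the alpha_0j weights used for aggregation
```

Replacing `W` here with separate projection matrices for the two arguments would move this toward a Transformer-style mechanism; per the discussion above, the GAT authors reportedly found the shared matrix generalised better on their graph benchmarks.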
Is it theoretical or technical?
Based on what I understand, most inductive biases, if not all, are not theoretically grounded. If you think about it, there is no paper that says why one method theoretically performs better than another in neural networks. This is because it ultimately depends on your data and your final goal for the model. We only have experimental data to support our claims and a sense of intuition. The difficult part is to develop that intuition, which is based on previous experiments.
What I'm trying to say is that there is no reason why you couldn't try adding three different weight matrices instead of one. For the original idea of the Transformer, it makes sense to divide the attention into queries and keys because we are trying to simulate language, and language has a semantic ordering component. This means that it is not the same to say "Hello World" as it is to say "World Hello." Based on this, it is logical to have extra weights that can identify this difference.
For the datasets that were tried with GAT, they were citation networks and protein predictions. Predicting classes in citations may not require using more weights because the problem statement does not seem to require it. However, this is not set in stone. In the end, the only way to know if one architecture is better than another is through experimentation. | {
"domain": "ai.stackexchange",
"id": 3988,
"tags": "transformer, attention, graph-neural-networks"
} |
Understanding dot product in quantum mechanics | Question: Let's say we have a two-state-system with state $\vert 1\rangle$ and state $\vert 2\rangle$. From my understanding one can assume the base vectors of this system to be $\vert1\rangle \mapsto (1,0)^\top$ and $\vert 2\rangle \mapsto (0,1)^\top$.
Now assume we have a particle in state $\vert \phi\rangle = (\sqrt{\frac{1}{3}}, \sqrt{\frac{2}{3}})^\top$ and a different state $\vert \psi\rangle = (\sqrt{\frac{1}{2}}, \sqrt{\frac{1}{2}})^\top$. $\langle\psi|\phi\rangle$ should now define the amplitude to go from $\vert \phi\rangle$ to $\vert \psi\rangle$. This can be rewritten as $\langle\psi|1\rangle\langle 1|\phi\rangle + \langle\psi| 2\rangle\langle2|\phi\rangle$.
So putting in the numbers we get
$$\langle\psi|\phi\rangle = \sqrt{\frac{1}{2}}\cdot\sqrt{\frac{1}{3}} + \sqrt{\frac{1}{2}}\cdot\sqrt{\frac{2}{3}} \approx 0.985.$$
Is this thinking correct? Can a particle without any influence even change its state? (I suppose yes, as it would be otherwise in a stable state, which are only a small subset of the vectors in the vector space).
Answer: The inner (dot) product is completely correct and all, though to get a probability you'll want to square its modulus, so $|\langle \psi | \phi \rangle |^2$.
The interpretation of the number $0.985^2$ depends on the context though. The relevant postulate in QM is
When a particle is in state $|\phi \rangle$ and a measurement is made on some observable with corresponding operator $\hat{p}$ which has eigenvectors $|p\rangle$ and corresponding eigenvalues $p$, then the probability of getting a result equal to $p$ in that measurement is $|\langle p | \phi \rangle | ^2$.
So, in order to interpret your inner product squared as a probability, it first has to be the case that you are in a physical scenario in which the state $|\psi \rangle$ is an eigenvector of the operator corresponding to a quantity you measured. Outside of a specified measurement scenario, that squared inner product does not carry any special meaning and is just a random number.
For that reason you can't interpret it in general like "the state has a probability of randomly transitioning to $|\psi \rangle$ with probability $0.985^2$" without further context. This doesn't just happen willy-nilly, these transition/jumps occur as a result of someone measuring the particle.
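For concreteness (a small check of my own), the number computed in the question and the squared modulus the answer refers to:

```python
import math

# Basis |1> -> (1, 0), |2> -> (0, 1); the two states from the question
phi = [math.sqrt(1 / 3), math.sqrt(2 / 3)]
psi = [math.sqrt(1 / 2), math.sqrt(1 / 2)]

# <psi|phi> = <psi|1><1|phi> + <psi|2><2|phi>; all amplitudes here are real,
# so complex conjugation of the bra components is a no-op
amplitude = sum(p * q for p, q in zip(psi, phi))
probability = abs(amplitude) ** 2   # the measurable quantity |<psi|phi>|^2

print(amplitude, probability)       # ≈ 0.9856 and ≈ 0.9714
```

As stressed above, $0.9714$ is only a transition probability in the context of a measurement whose relevant eigenstate is $|\psi\rangle$.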
Edit: Swapped the definition of $\psi$ and $\phi$ to be consistent with the question. | {
"domain": "physics.stackexchange",
"id": 90104,
"tags": "quantum-mechanics, homework-and-exercises, linear-algebra"
} |
Static electric and magnetic fields from quantised EM field | Question: From the Wikipedia article, the electric $\mathbf{E}$ and magnetic $\mathbf{B}$ fields can be quantised as:
$$ \mathbf{E}(\mathbf{r}) = i \sum_{\mathbf{k},\mu} \sqrt{\frac{\hbar \omega}{2 V \epsilon_0}} \left ( \mathbf{e}^{(\mu)}a^{(\mu)}(\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{r}} - \overline{\mathbf{e}}^{(\mu)}a^{\dagger{(\mu)}}(\mathbf{k})e^{-i\mathbf{k}\cdot\mathbf{r}} \right ), $$
$$ \mathbf{B}(\mathbf{r}) = i \sum_{\mathbf{k},\mu} \sqrt{\frac{\hbar }{2\omega V \epsilon_0}} \left ( (\mathbf{k}\times\mathbf{e}^{(\mu)})a^{(\mu)}(\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{r}} - (\mathbf{k}\times\overline{\mathbf{e}}^{(\mu)})a^{\dagger{(\mu)}}(\mathbf{k})e^{-i\mathbf{k}\cdot\mathbf{r}} \right ). $$
How do I get a dynamical (in time t) field? And how would I get a static field, e.g. the Coulomb field, from these equations?
Answer: How do I get a dynamical (in time $t$) field?
You will have to impose the quantized equations of motion. Their form depends on the field content of your theory. In general this is a hard problem, which can only be solved perturbatively through the use of Feynman diagrams. In the special case of the free Maxwell field (i.e. there are no dynamical charged quantum fields interacting with your electromagnetic field), the problem can be solved exactly though.
Another complication comes from the gauge invariance (which is actually not specific to the quantum case, as the same problem arises in the Hamiltonian formulation of classical electromagnetism). One way of solving this is by imposing a gauge-fixing condition. Let's suppose that you are working in the Lorentz gauge
$$ \partial_{\mu} A^{\mu} = 0. $$
In this gauge, the dynamics is encoded in the time-dependence of $a({\bf k})$ and $a^{\dagger}({\bf k})$:
$$ a({\bf k}, t) = a({\bf k}) e^{- i |{\bf k}| \cdot t}, \quad a^{\dagger}({\bf k}, t) = a^{\dagger} ({\bf k}) e^{i |{\bf k}| \cdot t}. $$
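As a minimal sanity check (my sketch, with the mode operators replaced by c-number amplitudes), this time dependence indeed solves the free-mode equation of motion $i\,\dot a = \omega\, a$ with $\omega = |\mathbf{k}|$:

```python
import cmath

omega = 2.5          # mode frequency |k| (units with c = hbar = 1)
a0 = 0.7 + 0.3j      # initial amplitude standing in for the operator a(k)

def a(t):
    return a0 * cmath.exp(-1j * omega * t)

# Central-difference check that i * da/dt = omega * a(t)
t, dt = 1.3, 1e-6
lhs = 1j * (a(t + dt) - a(t - dt)) / (2 * dt)
rhs = omega * a(t)
print(abs(lhs - rhs))  # ≈ 0
```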
But note that in this gauge there's an additional requirement that you have to impose on your Hilbert space. Generally, if you write down the expression for the gauge condition operator, it does not vanish:
$$ \partial_{\mu} A^{\mu} \neq 0. $$
This additional constraint has to be imposed weakly (this is called the Gupta-Bleuler method). I will not describe it here as it is an interesting topic on its own and not directly related to the question.
And how would I get a static field, e.g. the Coulomb field, from these equations?
If you are quantizing a free electromagnetic field in vacuum, the Coulomb potential is not actually a solution of the equations of motion (since it has a singularity at the center). The actual solutions are given by superpositions of plane waves.
You have two options here. Either add a static classical charge at position zero (which would break Lorentz invariance) and write down the modified quantum equations of motion for the electromagnetic field, or consider a full interacting quantum theory of matter and light, such as QED.
In the second case, the problem of reconstructing a quantum state which corresponds to a given classical solution is a hard one. For the linear field, the answer is actually known (it is given by coherent states), but AFAIK for nonlinear theories this problem is unsolved (please correct me if I am wrong). You can still derive the properties of the Coulomb interaction (including the form of the potential, and the Lamb shift as a next-order correction) using perturbative QED. | {
"domain": "physics.stackexchange",
"id": 46445,
"tags": "electromagnetism, quantum-field-theory, quantum-electrodynamics"
} |
Does the AlphaZero algorithm keep the subtree statistics after each move during MCTS? | Question: This question is regarding the Monte Carlo Tree Search (MCTS) algorithm presented in the AlphaZero paper (arXiv). As described in the paper, each MCTS used 800 simulations to determine the next action. This process builds a search subtree downwards from the root node. During this process, statistics about the nodes (e.g. values & visit counts) are updated in backward passes upwards through the tree. After all 800 simulations are complete, the most promising child node is selected (i.e. the node with the most visits, normalized by temperature), and then 800 new MCTS simulations are started using the selected child node as the new root node.
Question: Once the next round of 800 MCTS simulations starts, do we discard the statistics from the previous tree and thereby start with a "fresh" subtree, or do we keep the statistics gathered from the previous round of simulations?
I have found several tutorials/blog posts/repositories that implement either of these options and are contradictory. Furthermore, the wording in the paper seems ambiguous as they speak of "restarting" but it is not clear whether they restart after every round of 800 MCTS simulations or after each game is complete.
Answer: The supplementary material of the AlphaZero paper states the following:
Unless
otherwise specified, the training and search algorithm and parameters are identical to AlphaGo
Zero.
I didn't see any mention of whether or not the subtree was kept when reading the rest of the AlphaZero paper; therefore, we defer to the AlphaGo Zero algorithm. The appendix of the AlphaGo Zero paper states the following:
The search tree is reused at subsequent time steps: the child node corresponding to the played action becomes the new root node; the subtree below this child is retained along with all its statistics, while the remainder of the tree is discarded.
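In implementation terms, retaining the subtree just means re-rooting at the played child and letting the rest of the old tree be garbage-collected; a hypothetical minimal sketch (not DeepMind's actual code):

```python
class Node:
    """Minimal MCTS node; field names are illustrative, not AlphaZero's."""
    def __init__(self, prior=1.0):
        self.prior = prior
        self.visit_count = 0
        self.value_sum = 0.0
        self.children = {}          # action -> Node

def play_and_reuse(root, action):
    """Re-root at the played child, keeping its subtree and statistics;
    dropping the old root reference discards the remainder of the tree."""
    return root.children[action]

# Toy tree: pretend 800 simulations already populated the root's children
root = Node()
root.children = {0: Node(), 1: Node()}
root.children[0].visit_count = 600  # statistics gathered during search
root.children[1].visit_count = 200

root = play_and_reuse(root, 0)      # play the most-visited action
print(root.visit_count)             # 600: the next search starts "warm"
```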
Thus, the subtree and statistics are retained. | {
"domain": "ai.stackexchange",
"id": 3473,
"tags": "reinforcement-learning, monte-carlo-tree-search, alphazero"
} |
How to zoom and focus image from object at fixed position on fixed screen? | Question: Suppose there's an object located at some position $x_O$, which emits light in the direction of a sensor screen located at $x_S$, and the screen has some lens nearby at $x_L$, which partially focuses the image at the screen.
I need to magnify the image on the screen and focus it better. I guess both of these actions would require placing some lens at $x\in(x_O,x_L)$. But:
What should be different in the magnification case compared to the focusing case?
How can these actions be combined?
Answer: In essence you are describing making a microscope: in the simplest case that is a two-lens system, where the magnification comes about from having a lens near the object (the objective lens) close to the focal distance of that lens, and a second lens (the ocular) collecting that light and focusing it for you (onto your eye, or onto a screen / sensor etc).
The basic equation for a microscope can be found at http://www.schoolphysics.co.uk/age16-19/Optics/Optical%20instruments/text/Microscope_/index.html
In your case, you need to move the ocular lens so that you get a real image instead of a virtual image - in other words, you have to put the objective lens slightly farther from the object than its focal length (an object inside the focal length gives only a virtual image).
What I just described essentially amounts to "taking a photograph of the object with a magnifying glass in front of it", with the added proviso that you can "focus" the image in the camera by playing around with the distance from the magnifying glass to the object. It's much easier to do this by just playing around with the lens rather than computing it - because the actual distance from a lens (especially a compound lens) to a screen / sensor can be quite hard to determine from physical measurements of the lens, but is easy to derive from the optical performance.
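If it helps to play with the numbers first, the thin-lens relation $\frac{1}{f}=\frac{1}{d_o}+\frac{1}{d_i}$ with lateral magnification $m=-d_i/d_o$ is easy to explore numerically; a small sketch (the focal length and distances below are illustrative values only):

```python
def image_distance(f, d_o):
    # Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i.
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o, d_i):
    # Lateral magnification; negative sign means the image is inverted.
    return -d_i / d_o

# Object just outside the focal length -> a real, magnified, inverted image,
# which is the regime you want for projecting onto a screen or sensor.
f, d_o = 10.0, 12.0
d_i = image_distance(f, d_o)      # 60.0: image forms well behind the lens
m = magnification(d_o, d_i)       # -5.0: inverted and magnified 5x
```

Moving `d_o` toward `f` pushes the image out and increases the magnification, which is the trade-off being described above.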
Do you know how to take it from there, or do you need more help with the equations? | {
"domain": "physics.stackexchange",
"id": 16299,
"tags": "geometric-optics"
} |
How to configure ROS networking to minimise packet broadcast redundancies? | Question:
We have three physical machines: A, B and logger. A and B actively publish to various topics and logger subscribes to everything that A and B publish. The plan is to network them together through a single router/switch running DHCP.
My understanding of how ROS on top of DHCP will work is that each time A publishes to a topic that B and logger are subscribed to - the information in that message will be encoded into TCP packets and transmitted to the router/switch twice - once for each subscribing machine’s IP address.
This strikes me as rather wasteful - a more efficient, ideal situation would be: A publishes to a topic, the message is encoded into TCP packets with the subnet broadcast address, this packet transmits once to the router/switch and is relayed to both B and logger. Given that 100% of messages being sent over the network should be received by every other machine on the network this would double the effective available bandwidth.
Could such a networking scheme be practically configured and if so, how?
Originally posted by LukeAI on ROS Answers with karma: 131 on 2019-03-21
Post score: 0
Original comments
Comment by gvdhoorn on 2019-03-21:\
My understanding of how ROS on top of DHCP
should this read: "on top of TCP"?
Also note: if that is the case, then it's only partly correct: ROS supports both UDP and TCP (although UDP support in rospy is not complete).
minimise packet broadcast redundancies
As broadcasting is not used in the default ROS transports, I'm wondering what you mean by this.
Answer:
Given that 100% of messages being sent over the network should be received by every other machine on the network this would double the effective available bandwidth.
you are ignoring some things here, such as packet loss, processing delay in the subscribers (with resulting message loss) and other factors that will affect your dataflows. That makes this significantly more complex than you describe here.
A publishes to a topic, the message is encoded into TCP packets with the subnet broadcast address
I may not be completely up-to-date, but I don't believe TCP can be used to broadcast traffic. TCP is connection based and requires ACKs for every packet transmitted. That would get rather complex with a (potentially) unlimited number of recipients of each message sent by the publisher (also: what would something like 'window size' mean with 1000 receivers?)
What you're looking for I believe is multicast, and that is typically done using UDP.
One example of a transport that supports this would be ethzasl_message_transport, but I'm not sure of the state of that. Note also the trade-offs: reliable UDP multicasting is not trivial and I'm not sure whether there is a ready-to-be-reused package that can do that with ROS.
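For reference, setting up a plain UDP multicast sender in Python looks roughly like the sketch below (the group address and port are arbitrary examples; a subscriber would additionally bind the port and join the group with `IP_ADD_MEMBERSHIP`):

```python
import socket

# Hypothetical group/port; any address in 224.0.0.0/4 is a multicast group.
MCAST_GRP, MCAST_PORT = "239.0.0.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
# A TTL of 1 keeps the datagrams on the local network segment.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

# The sender transmits each datagram once, regardless of how many
# subscribers have joined the group:
# sock.sendto(b"message bytes", (MCAST_GRP, MCAST_PORT))
```

Note this only gives the fire-and-forget semantics of UDP; the reliable-multicast layer mentioned above still has to be built on top of it.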
Originally posted by gvdhoorn with karma: 86574 on 2019-03-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 32708,
"tags": "ros-kinetic"
} |
How can I define the saturation function in Matlab? | Question: I would like to write a Matlab code for an angular velocity of this type:
$\omega=\omega_{fb}+sat(\omega_{ff})$
where $\omega_{fb}$ is a feedback angular velocity and $\omega_{ff}$ is a feedforward angular velocity.
As you can see, the feedforward term has been saturated.
The problem is that there is not a saturation function in Matlab. Moreover, I would like to do this inside a Matlab function block, and not use the saturation block of Simulink.
I have heard that some use the atan() function to perform the saturation, but I am not sure I can do this.
So, my question is:
How can I define the saturation function in Matlab?
Answer: You could always employ conditional expressions that are clear and explicit.
Instead, if you like compact forms, you may implement saturation as below:
w = w_fb + min(hi, max(low, w_ff));
where hi and low are the upper and lower bounds of the saturation. | {
"domain": "robotics.stackexchange",
"id": 2263,
"tags": "control, matlab"
} |
How can we assume constant energy for a particle in arbitrary potential energy to solve Schrödinger's equation using separation of variable method? | Question: Edit: I have corrected some conceptual errors in the question. I actually intend to ask about a situation in which potential changes with position.
When we need to solve Schrödinger's equation for a particle, let's say, undergoing harmonic oscillations, the first approach is usually to assume the wave function as a product of two functions each depending only on space or on time, respectively.
Solving for the time part, we get a solution which describes a particle oscillating with constant energy. So what remains is finding the spatial part of the solution to get the complete wave function.
But there is a problem in this approach. If the particle is in a potential whose value is changing with position, then as it moves, let's say, down the potential, it should gain energy. So the temporal part of the wave function should oscillate faster and faster for spatial positions down the potential. In other words, given a non constant potential (with respect to spatial coordinates), the temporal part of the wave function can't be independent of space. So, it is wrong to assume that a valid solution for the wave function can be had by making such an assumption.
For example, in Griffiths's book, in order to solve the Schrödinger equation for a harmonic oscillator using the separation-of-variables method, the author concludes that we need to solve the following equation:
$-\frac{h^{2}}{2m}\frac{d^{2}\psi}{dx^{2}}+\frac{1}{2}m\omega^{2}x^{2}\psi=E\psi$
But my argument is that this method should not even be applicable in a situation where the potential changes with position.
Answer:
If the particle is in a constant [I assume you mean a function of
position, constant in time] potential, then as it moves, let's say,
down the potential, it should gain energy.
No. What you call energy when solving the time-independent Schrödinger equation is an eigenvalue of the Hamiltonian, i.e. of the total energy. This is conserved in time: in the separable ansatz $\Psi(x,t)=\psi(x)e^{-iEt/\hbar}$, the single constant $E$ labels the whole stationary state; it is not a local kinetic energy that varies as the particle moves through the potential. | {
"domain": "physics.stackexchange",
"id": 39679,
"tags": "quantum-mechanics, schroedinger-equation"
} |
Are the path integral formalism and the operator formalism inequivalent? | Question: Abstract
The definition of the propagator $\Delta(x)$ in the path integral formalism (PI) is different from the definition in the operator formalism (OF). In general the definitions agree, but it is easy to write down theories where they do not. In those cases, are the PI and OF actually inequivalent, or is it reasonable to expect that the $S$ matrices of both theories agree?
For definiteness I'll consider a real scalar field $\phi$, with action
$$
S_0=\int\mathrm dx\ \frac{1}{2}(\partial \phi)^2-\frac{1}{2}m^2\phi^2 \tag{1}
$$
Path integral formalism
In PI we insert a source term into the action,
$$
S_J=S_0+\int\mathrm dx\ \phi(x)\ J(x) \tag{2}
$$
The mixed term $\phi J$ can be simplified with the usual trick: by a suitable change of variables, the action can be written as two independent terms
$$
S_J=S_0+\int \mathrm dx\;\mathrm dy\ J(x)\Delta_\mathrm{PI}(x-y)J(y) \tag{3}
$$
and this relation defines $\Delta_\mathrm{PI}(x)$: to get the action into this form we have to solve $(\partial^2+m^2)\Delta_{\mathrm{PI}}=\delta(x)$, i.e., in PI the propagator is defined as the Green function of the Euler-Lagrange equations of $S_0$. This definition is motivated by the fact that when $S_J$ is written as $(3)$ the partition function can be factored as
$$
Z[J]=Z[0]\exp\left[i\int \mathrm dx\;\mathrm dy\ J(x)\Delta_\mathrm{PI}(x-y)J(y) \right] \tag{4}
$$
which makes functional derivatives trivial to compute. For example, if we differentiate $Z[J]$ twice we get
$$
\langle0|\mathrm T\ \phi(x)\phi(y)|0\rangle=\Delta_\mathrm{PI}(x-y) \tag{5}
$$
In this formalism, the propagator is always the Green function of the differential operator of the theory.
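(For the free scalar this Green function can be written explicitly, up to sign and metric conventions, as the familiar Feynman propagator
$$
\Delta_\mathrm{PI}(x)\propto\int\frac{\mathrm d^4k}{(2\pi)^4}\ \frac{e^{-ik\cdot x}}{k^2-m^2+i\epsilon}
$$
where the $i\epsilon$ prescription selects the time-ordered boundary conditions.)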
Operator formalism
In OF the propagator is defined as the contraction of two fields:
$$
\Delta_\mathrm{OF}(x-y)\equiv \overline{\phi(x)\phi}(y)\equiv \begin{cases}[\phi^+(x),\phi^-(y)]&x^0>y^0\\ [\phi^+(y),\phi^-(x)]&x^0<y^0\end{cases} \tag{6}
$$
where $\phi^\pm$ are the positive and negative frequency parts of $\phi$.
In general, $\Delta_\mathrm{OF}$ is an operator, but if it commutes with everything (or, more precisely, if the propagator is in the centre of the operator algebra) we can prove Wick's theorem, which in turn means that
$$
\langle 0|\mathrm T\ \phi(x)\phi(y)|0\rangle=\Delta_\mathrm{OF}(x-y) \tag{7}
$$
i.e., the propagator coincides with the two-point function. This makes it very easy to see, for example, that
$$
\Delta_\mathrm{OF}=\Delta_\mathrm{PI} \tag{8}
$$
In this theory, the fact that the propagator is a Green function is a corollary and not a definition. The theorem may fail if the assumptions are not satisfied.
The discrepancy
The positive/negative frequency parts of $\phi$ are the creation and annihilation operators, which in OF usually satisfy
$$
[\phi^+(x),\phi^-(y)]\propto\delta(x-y)\cdot1_\mathcal H \tag{9}
$$
and therefore $\overline{\phi(x)\phi}(y)$ is a c-number. This means that the assumptions of Wick's theorem are satisfied and $(8)$ holds.
The relation $(9)$ can be derived from one of the basic assumptions of OF: the canonical commutation relations:
$$[\phi(x),\pi(y)]=\delta(x-y)\cdot1_\mathcal H\tag{10}
$$
But if we use any non-trivial operator in the r.h.s. of $(10)$ instead of a $1_\mathcal H$, Wick's theorem is violated and in general $\Delta_\mathrm{PI}\neq \Delta_\mathrm{OF}$. One could argue that the r.h.s. of $(10)$ is fixed by the Dirac prescription $\{\cdot,\cdot\}_\mathrm{D}\to\frac{1}{i\hbar}[\cdot,\cdot]$, where $\{\cdot,\cdot\}_\mathrm{D}$ is the Dirac bracket. In the Standard Model, it's easy to prove that $\{\cdot,\cdot\}_\mathrm{D}$ is always proportional to the identity, but in a more general theory we may have complex constraints which would make the Dirac bracket non-trivial - read, not proportional to the identity - and therefore $\Delta_\mathrm{OF}\neq\Delta_\mathrm{PI}$.
Scalar, spinor and vector QFT's always satisfy a relation similar to $(10)$, and therefore OF and PI formalisms agree. But in principle it is possible to study more general QFT's where we use a commutation relation more complex than $(10)$. I don't know of any practical use of this, but it seems to me that we can have a perfectly consistent theory where PI and OF formalisms predict different results. Is this correct? I hope someone can shed some light on this.
EDIT
I think it can be useful to add some details to what I said about Dirac brackets, defined as
$$
\{a,b\}_\mathrm{DB}=\{a,b\}_\mathrm{PB}-\{a,\ell^i\}_\mathrm{PB} M_{ij}\{\ell^j,b\}_\mathrm{PB}\tag{11}
$$
where $\ell^i$ are the constraints and $M^{ij}=\{\ell^i,\ell^j\}_\mathrm{PB}$. As $\{q,p\}_\mathrm{PB}=\delta(x-y)$, the only way to get non-trivial Dirac brackets is through the second term. This may happen if we have non-linear constraints such that the second term is a function of $p,q$: in that case the matrix $M$ will depend on $p,q$ and $\{p,q\}_\mathrm{DB}$ will be a function of $p,q$. If we translate this into operators, we shall find
$$
[\pi,\phi]=\delta(x-y)\cdot1_\mathcal H+f(\pi,\phi)\tag{12}
$$
and so $[\pi,\phi]$ won't commute with neither $\pi$ nor $\phi$ as required by Wick's theorem. (This term $f$ is perhaps related to the $\hbar^2$ terms in QMechanic's answer, and the higher order terms in, e.g., the Moyal bracket).
Answer: General comments to the question (v1):
Any textbook derivation of the correspondence between
$$\tag{1} \text{Operator formalism}\qquad \longleftrightarrow \qquad
\text{Path integral formalism}$$
is just a formal derivation, which discards contributions in the process, cf. e.g. this Phys.SE post.
Rather than claiming complete understanding and existence of the correspondence (1), it is probably more fair to say that we have a long list of theories (such as e.g. Yang-Mills, Chern-Simons, etc.), where both sides of the correspondence (1) have been worked out.
The correspondence (1) is mired with subtleties. Example: Consider a non-relativistic point particle on a curved target manifold $(M,g)$ with classical Hamiltonian
$$\tag{2} H_{\rm cl} ~=~\frac{1}{2} p_i p_jg^{ij}(x), $$
which we use in the Hamiltonian action of the phase space path integral. Then one may show that the corresponding Hamiltonian operator is
$$\tag{3}\hat{H}~=~ \frac{1}{2\sqrt[4]{g}} \hat{p}_i\sqrt{g}~ g^{ij} ~\hat{p}_j\frac{1}{\sqrt[4]{g}}+ \frac{\hbar^2R}{8} +{\cal O}(\hbar^3),$$
cf. Refs. 1 & 2. The first term in eq. (3) is the naive guess, cf. my Phys.SE answer here. The two-loop correction proportional to the scalar curvature $R$ is a surprise, which foretells that a full understanding of the correspondence (1) is going to be complicated.
References:
F. Bastianelli and P. van Nieuwenhuizen, Path Integrals and Anomalies in Curved Space, 2006.
B. DeWitt, Supermanifolds, Cambridge Univ. Press, 1992. | {
"domain": "physics.stackexchange",
"id": 33616,
"tags": "quantum-field-theory, operators, path-integral"
} |
What is the probability of finding the second qubit as $0$ in the state $|\psi\rangle=\frac1{\sqrt2}|00\rangle+\frac12|10\rangle-\frac12|11\rangle $? | Question: Assuming two qubits start in the state:
$|\psi\rangle = \frac{1}{\sqrt 2}|00\rangle + \frac{1}{2}|10\rangle- \frac{1}{2}|11\rangle $
What is the probability of measuring the second qubit as 0? And what is the new state of the system after measuring the first qubit as 1?
I know that for a single qubit state the probabilities are the squared magnitudes of the coefficients. In a two qubit system, are the probabilities distributed to the individual states? I.e. from this example, does each zero state in the state $|00⟩$ have a 50% chance? And I don't really understand the second question; any suggestion on where to review or study?
Answer: If we have the state $|\psi \rangle = \dfrac{1}{\sqrt{2}}|00\rangle + \dfrac{1}{2}|10\rangle - \dfrac{1}{2}|11\rangle$ then the probability of the second qubit being in the state $|0\rangle$ is the probability of the state $|\psi \rangle$ having $|0\rangle$ on the second qubit. In this case, it is from the states $|00\rangle$ and $|10\rangle$. So the probability of measuring the second qubit in the state $|0\rangle$ is $\bigg| \dfrac{1}{\sqrt{2}} \bigg|^2 + \bigg| \dfrac{1}{2} \bigg|^2 = \dfrac{3}{4} $.
You can also work this out more explicitly as well. That is, we have
$$
|\psi \rangle = \begin{pmatrix} 1/\sqrt{2} \ \ \\ 0 \\ 1/2 \\ -1/2 \end{pmatrix}
$$
We are looking for the probability that the second qubit is in the state $|0\rangle$ so the projective measurement $M$ is
$$
M = I \otimes |0\rangle \langle 0 | = \begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
$$
and so according to Born's rule we have that the probability to measure the second qubit in the state $|0\rangle$ is
$$
\langle \psi | M | \psi \rangle = \begin{bmatrix} 1/\sqrt{2} & 0 & 1/2 &-1/2 \end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 0
\end{bmatrix} \begin{bmatrix} 1/\sqrt{2} \ \ \\ 0 \\ 1/2 \\ -1/2 \end{bmatrix} = \dfrac{1}{2} + \dfrac{1}{4} = \dfrac{3}{4}
$$
Also, the state post measurement is $|\psi_{post} \rangle = \dfrac{M|\psi\rangle}{\sqrt{3/4}}$.
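As a sanity check, both results can be reproduced numerically without any linear-algebra library; a small sketch, using the basis ordering $|00\rangle,|01\rangle,|10\rangle,|11\rangle$:

```python
import math

# Amplitudes of |psi> = (1/sqrt2)|00> + (1/2)|10> - (1/2)|11>
psi = [1 / math.sqrt(2), 0.0, 0.5, -0.5]

# P(second qubit = 0): sum |amplitude|^2 over the states |00> and |10>.
p_second_0 = abs(psi[0]) ** 2 + abs(psi[2]) ** 2          # 3/4

# After measuring the FIRST qubit as 1: keep the |10>, |11> amplitudes
# and renormalize by the square root of their total probability.
p_first_1 = abs(psi[2]) ** 2 + abs(psi[3]) ** 2           # 1/2
post = [0.0, 0.0,
        psi[2] / math.sqrt(p_first_1),
        psi[3] / math.sqrt(p_first_1)]                    # (|10> - |11>)/sqrt2
```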
You can extend this to the case where the first qubit is measured in the state $|1\rangle$ too. In this case, the projective measurement $M = |1\rangle \langle 1| \otimes I$ | {
"domain": "quantumcomputing.stackexchange",
"id": 2144,
"tags": "quantum-state, measurement, textbook-and-exercises"
} |
What could be attractive in a desert? | Question: It is not the Sahara, but the possibility that there isn't anything interesting until the GUT scale. On Wikipedia, I've read:
"The attraction of a desert is that, in such a scenario, measurements
of TeV scale physics at the near-future colliders LHC and ILC will
allow extrapolation all the way up to the GUT scale."
Suppose we have the negative information, i.e. somehow we would know, or could suspect with quite high probability, that there is really nothing until the GUT scale. Could this negative information also be useful? If yes, how?
Answer: This is very speculative stuff, but here's how I understand this. The `desert' means there are no phase transitions. The whole thing about the TeV scale is that we know there is a phase transition there. The theory above this phase transition is different from the one below the phase transition. Currently we have a potential theory that is proposed just above the TeV phase transition, namely the Standard Model, but it has some theoretical problems. That is why we are investigating the physics at the TeV phase transition so that we can see if we can improve the Standard Model. If we can somehow improve the Standard Model so that these theoretical problems disappear then we would have a theory that applies all the way up to the next phase transition. If it is true that there is a desert all the way to the GUT scale, then the next phase transition would be the one at the GUT scale and then we can start to investigate the GUT scale physics in terms of this new theory. Hence, the benefit of a desert. | {
"domain": "physics.stackexchange",
"id": 32636,
"tags": "beyond-the-standard-model, grand-unification"
} |
In a neural network model, how many hidden units should we select? | Question: In a neural network model, how many hidden units do we need to keep to get an optimal result? The Cybenko theorem demonstrates that a single hidden layer is sufficient to solve any regression/classification problem, but the choice of the number of units in the hidden layer is very important because it impacts the model's performance. Is there a theory to tell us how to choose the optimal number of units for a hidden layer?
Answer: Unfortunately not, there is no theory to tell us the right number of units to choose, just as there is no theory for the number of hidden layers. In this respect, Deep Learning is still more an art than a science.
It's true that with one hidden layer we could theoretically solve any problem, but most problems are complicated enough to require unimaginable amounts of computation.
I think in the end it all boils down to two main issues:
Your specific task, i.e. how large and deep a Network must be so that your model works as you need.
The compute power at your disposal.
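When weighing candidate layer sizes against your compute budget, it can help to simply count trainable parameters; a toy sketch (the layer widths below are arbitrary examples, not recommendations):

```python
def mlp_param_count(layer_sizes):
    # Weights plus biases for a fully connected network with the given widths.
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# One very wide hidden layer vs. a deeper stack of narrow ones:
wide = mlp_param_count([100, 4096, 1])        # single hidden layer of 4096 units
deep = mlp_param_count([100, 64, 64, 64, 1])  # three hidden layers of 64 units
# For the same input/output sizes, the deep variant here uses far fewer parameters.
```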
When it comes to this, I strongly suggest prioritizing depth over width. Deeper Networks (the ones with many hidden layers) are much more efficient and powerful than large Networks (the ones with fewer, larger layers). It seems they are better at producing abstractions of the input data, transforming and processing the signal in more sophisticated ways. | {
"domain": "datascience.stackexchange",
"id": 7421,
"tags": "machine-learning, neural-network, deep-learning, ann"
} |
Hydrophobic proteins in the body? | Question: I know that we can get hydrophobic amino acids, but are there any proteins in the body whose surface is hydrophobic? If so what is their typical function and where can they typically be found and if not why not?
Answer:
Are there any proteins in the body whose surface is hydrophobic?
Sure. Although you are right in thinking that most proteins have hydrophilic surfaces, some are very hydrophobic. My favorite example is elastin, the main component of the skin that grants it elasticity. In fact, the hydrophobic nature of elastin is what gives it its function. The idea, briefly said, is that an insoluble/hydrophobic structure will tend to minimize its surface area with water, so if you stretch such a substance (i.e. increase the surface area), it will pull back to the minimum surface area state.
Reference: Li, Daggett. Molecular basis for the extensibility of elastin. Journal of Muscle Research & Cell Motility (2002). | {
"domain": "biology.stackexchange",
"id": 5376,
"tags": "biochemistry, proteins"
} |
Changing the text of a few elements when a tile is clicked | Question: I have created a basic web page that contains a panel which displays some details about an object, and a panel which contains some tiles that are used to choose which object to view. The idea is straight-forward, and with plenty of C# experience, the learning curve was small. However, since I am relatively new to JavaScript, I'd like a brief code review of my JavaScript to determine if there are any gotchas or perhaps alternatives that are more graceful.
// The available titles and their associated data.
var titles = ["Pong", "Tetris", "Pac-Man", "Snake", "Super Mario", "Pokemon", "Match 3", "Blossom Blast", "Driven", "Terrible Zombies"];
var data = [
["Pong is the equivalent of Hello World in the game development industry.", "Vectors, Rendering", "Directions", "Adding obstacles.", "Add effects."],
["Tetris is an iconic title. Though I am unable to provide the greatest theme music of all time, I hope you thoroughly enjoy the remake!", "Multi-Dimensional Arrays", "Arrays of Arrays", "Creating new pieces.", "Add effects."],
["Pac-Man is an iconic title providing a lot of fun and some great concepts.", "Collision Detection", "Building the walls.", "Object pooling.", "Add more ghosts and effects."],
["Dialing it back down, Snake is another iconic title with great concepts but simple implementation.", "Collision Detection", "None", "Linked Lists", "None"],
["Another iconic title, Super Mario is known around the world.", "Parallaxing Backgrounds", "None", "None", "None"],
["Pokemon is a favorite of multiple generations, this remake is simple and for demonstrations only.", "None", "None", "None", "None"],
["This is a simple demonstration of the match three game logic.", "None", "None", "None", "None"],
["The beautiful game Blossom Blast was my inspiration to go mobile.", "None", "None", "None", "None"],
["Driven is a basic 3D driving game.", "None", "None", "None", "None"],
["Terrible zombies is just as the name states, a terrible zombie game.", "None", "None", "None", "None"]
];
function swapGame(domElement, title) {
// The tile collection.
var tileCollection = document.getElementsByClassName('tiles');
var tiles = tileCollection[0].getElementsByTagName('li');
// The labels for display.
var lblTitle = document.getElementById('dTitle');
var lblDescription = document.getElementById('dDescription');
var lblConcepts = document.getElementById('dConcepts');
var lblChallenges = document.getElementById('dChallenges');
var lblAdvanced = document.getElementById('dAdvanced');
var lblHomework = document.getElementById('dHomework');
for (var i = 0; i < tiles.length; i++)
tiles[i].className = '';
domElement.classList.toggle('active');
lblTitle.innerHTML = title;
var id = titles.indexOf(title);
lblDescription.innerHTML = data[id][0];
lblConcepts.innerHTML = 'Concepts: ' + data[id][1];
lblChallenges.innerHTML = 'Challenges: ' + data[id][2];
lblAdvanced.innerHTML = 'Advanced: ' + data[id][3];
lblHomework.innerHTML = 'Homework: ' + data[id][4];
}
.tile-container {
width: 100%;
height: 100%;
display: flex;
}
.view-panel {
width: 30%;
box-shadow: 0px 5px 15px #111;
background-color: #fff;
color: #333;
padding: 5px;
}
.launch-button {
background-color: #fc0;
border-radius: 5px;
height: 50px;
width: 95%;
margin: auto;
cursor: wait;
transition: 0.5s all;
}
.launch-button:hover {
background-color: #da0;
}
.launch-button a {
text-decoration: none;
color: #333;
width: 100%;
height: 100%;
display: flex;
align-items: center;
justify-content: center;
}
.tiles {
width: 70%;
display: flex;
align-items: flex-start;
justify-content: flex-start;
}
.tile-view {
list-style: none;
width: 100%;
display: flex;
flex-wrap: wrap;
align-items: flex-start;
justify-content: center;
padding: 0;
}
.tile-view li {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
will-change: all;
width: 150px;
height: 150px;
cursor: pointer;
transition: 0.5s background-color;
margin-left: 5px;
margin: 5px;
background-color: #222;
color: #eee;
box-shadow: 2px 2px 5px #111;
}
.tile-view li>a {
color: #eee;
text-decoration: none;
}
.tile-view li:hover {
background-color: #fc0;
box-shadow: 5px 5px 15px #111;
}
.tile-view li.active {
background-color: #fc0;
}
.tile-view li.active,
.tile-view li.active>a,
.tile-view li:hover,
.tile-view li:hover>a {
color: #333;
}
@media screen and (max-width: 750px) {
.view-panel {
display: none;
}
.tiles {
width: 100%;
}
.tile-view li {
width: 300px;
}
}
<div class="tile-container">
<div class="view-panel">
<h1 id="dTitle">Pong</h1>
<p id="dDescription">Pong is the equivalent of Hello World in the game development industry.</p>
<ul>
<li><span id="dConcepts">Concepts: Vectors, Rendering</span></li>
<li><span id="dChallenges">Challenges: Directions</span></li>
<li><span id="dAdvanced">Advanced: Adding obstacles.</span></li>
<li><span id="dHomework">Homework: Add effects.</span></li>
</ul>
<div class="launch-button"><a href="#">Launch</a></div>
</div>
<div class="tiles">
<ul class="tile-view">
<li onclick="swapGame(this, 'Pong')" class="active">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Pong</a>
</li>
<li onclick="swapGame(this, 'Tetris')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Tetris</a>
</li>
<li onclick="swapGame(this, 'Pac-Man')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Pac-Man</a>
</li>
<li onclick="swapGame(this, 'Snake')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Snake</a>
</li>
<li onclick="swapGame(this, 'Super Mario')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Super Mario</a>
</li>
<li onclick="swapGame(this, 'Pokemon')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Pokemon</a>
</li>
<li onclick="swapGame(this, 'Match 3')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Match 3</a>
</li>
<li onclick="swapGame(this, 'Blossom Blast')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Blossom Blast</a>
</li>
<li onclick="swapGame(this, 'Driven')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Driven</a>
</li>
<li onclick="swapGame(this, 'Terrible Zombies')">
<i class="fas fa-gamepad" aria-hidden="true"></i>
<a href="#">Terrible Zombies</a>
</li>
</ul>
</div>
</div>
Answer: It could go either way, but for this small example, I would suggest moving all the content in the HTML. You could even skip the JS and use :target to show and hide stuff in conjunction with hashed link hrefs.
.panel {
display: none
}
.panel:target {
display: block
}
<div class="app">
<div class="panels">
<div class="panel" id="pong">
<h1>Pong</h1>
<p>Lorem ipsum sit dolor amet...</p>
</div>
<div class="panel" id="pokemon">
<h1>Pokemon</h1>
<p>Lorem ipsum sit dolor amet...</p>
</div>
<div class="panel" id="donkey-kong">
<h1>Donkey Kong</h1>
<p>Lorem ipsum sit dolor amet...</p>
</div>
</div>
<nav class="menu">
<ul>
<li><a href="#pong">Pong</a></li>
<li><a href="#pokemon">Pokemon</a></li>
<li><a href="#donkey-kong">Donkey Kong</a></li>
</ul>
</nav>
</div>
Now onto your code...
I recommend using const over var. Nothing wrong with var, but const guarantees the value referenced by the variable never changes (i.e. you cannot reassign it). This ensures that whatever you set to it is the same thing later in code. It's also block-scoped, so if you're in ifs or fors, it scopes it in the block.
In JS, there's document.querySelector and document.querySelectorAll. These allow you to fetch DOM elements using CSS selectors. You target DOM elements in the same way you target them when writing CSS. This way, you can be more expressive instead of being limited to getElementById, getElementsByTagName, getElementsByClassName.
element.innerHTML is fine. But if you're just updating text, consider using element.textContent instead.
Instead of onclick on the HTML, use element.addEventListener in JavaScript instead. Inline scripts, while legit, are discouraged due to separation of concerns. Also, in inline scripts, the function is a global. Globals are to be avoided in JS to avoid clobbering stuff in the global namespace.
Avoid targetting HTML elements in your CSS selectors. For instance .tile-view li targets all descendant li elements under .tile-view. This is fine for small apps, but this is a bad habit to have when working on larger apps. On larger apps, where components are composed of smaller independent components, you never know what's in them. You may be hitting an li you did not originally anticipate to be under there. | {
"domain": "codereview.stackexchange",
"id": 33339,
"tags": "javascript, dom"
} |
Role based permissions in Express.js | Question: This is something I've done a few times, but I've found it to feel a bit error-prone with so many conditions, and am wondering if anyone can point me in the direction of a cleaner way. This is a PATCH route for editing a user. Super admin and admin users are both able to change other users (with some limitations) while other types of users can only edit themselves.
router.patch('/:userId', async (req, res) => {
const patcher = (req as AuthRequest).user;
const otherUser = await database.getUserById(req.params.userId);
const requestedUpdate = req.body;
// 404 if user is not found.
if (!otherUser) {
return sendCannotFind(res);
}
// Basic validation of requestedUpdate
if (requestedUpdate.userType && !isUserTypeValid(requestedUpdate.userType)) {
return sendInvalidUserType(res);
}
if (patcher.userType === 'superAdmin') {
// Super admin cannot demote self.
if (otherUser.id === patcher.id && requestedUpdate.userType) {
return sendCannotSetUserType(res);
}
// Super admin cannot edit other super admins
if (otherUser.userType === 'superAdmin' && otherUser.id !== patcher.id) {
return sendCannotEdit(res);
}
} else if (patcher.userType === 'admin') {
// Admin cannot edit super admins
if (otherUser.userType === 'superAdmin') {
return sendCannotEdit(res);
}
// Admin cannot edit other admins
if (otherUser.userType === 'admin' && otherUser.id !== patcher.id) {
return sendCannotEdit(res);
}
// Admin cannot promote or demote themselves
if (otherUser.id === patcher.id && requestedUpdate.userType) {
return sendCannotSetUserType(res);
}
// Admin cannot promote anyone to admin or superAdmin
if (requestedUpdate.userType === 'admin' || requestedUpdate.userType === 'superAdmin') {
return sendCannotSetUserType(res);
}
} else {
// Non-admins cannot edit anyone but themselves
if (otherUser.id !== patcher.id) {
return sendCannotEdit(res);
}
// Non-admins cannot promote or demote themselves
if (requestedUpdate.userType && requestedUpdate.userType !== otherUser.userType) {
return sendCannotSetUserType(res);
}
}
await doEdit(otherUser, requestedUpdate);
return res.json(otherUser);
});
Answer: I think I came up with a better and more declarative way using Joi schemas in nested dictionaries.
The idea is that we first figure out these three things:
Is the user editing themselves? (bool)
What type of user is the editor? (admin, etc)
What type of user is being edited? (admin, etc)
And based on those three things, we can lookup the correct Joi schema to use.
So the schema dictionary looks like this for the PATCH route:
// No one is allowed to edit their own user type, everyone can edit their own name.
const selfEditSchemas: { [key in UserType]?: Joi.Schema } = {
[UserType.superAdmin]: Joi.object({
userType: Joi.forbidden(),
name: Joi.string().pattern(/^[a-zA-Z0-9_\-. ]{3,100}$/).optional(),
}),
[UserType.admin]: Joi.object({
userType: Joi.forbidden(),
name: Joi.string().pattern(/^[a-zA-Z0-9_\-. ]{3,100}$/).optional(),
}),
[UserType.user]: Joi.object({
userType: Joi.forbidden(),
name: Joi.string().pattern(/^[a-zA-Z0-9_\-. ]{3,100}$/).optional(),
}),
};
// We only allow admins and super admins to edit others, and we use different schemas
// depending on what type of user they are trying to edit.
const otherEditSchemas: { [key in UserType]: { [key in UserType]?: Joi.Schema } } = {
[UserType.superAdmin]: {
[UserType.admin]: Joi.object({ // (superAdmin can edit admins according to this schema)
userType: Joi.string().valid(...Object.values(UserType)).optional(),
name: Joi.string().pattern(/^[a-zA-Z0-9_\-. ]{3,100}$/).optional(),
}),
[UserType.user]: Joi.object({ // (superAdmin can edit regular users according to this schema)
userType: Joi.string().valid(...Object.values(UserType)).optional(),
name: Joi.string().pattern(/^[a-zA-Z0-9_\-. ]{3,100}$/).optional(),
}),
},
[UserType.admin]: {
[UserType.user]: Joi.object({ // (admin can edit regular users according to this schema)
userType: Joi.forbidden(), // (admin cannot promote regular users)
name: Joi.string().pattern(/^[a-zA-Z0-9_\-. ]{3,100}$/).optional(),
}),
},
[UserType.user]: {}, // (user is not allowed to edit anyone else)
};
And now the logic for validating edits becomes a lot simpler and less error-prone IMO:
function validateUserEdit(req: express.Request, res: express.Response, next: express.NextFunction) {
const userReq = req as UserRequest;
const isSelf = userReq.user.id === userReq.otherUser.id;
const schema = isSelf
? selfEditSchemas[userReq.user.userType]
: otherEditSchemas[userReq.user.userType][userReq.otherUser.userType];
if (!schema) {
return res.status(403).json({
message: 'Not allowed to edit that user.',
code: ErrorCodes.FORBIDDEN_TO_EDIT_USER,
});
}
const validateRes = schema.validate(req.body, { stripUnknown: true });
if (validateRes.error) {
res.status(400).json({
message: `Invalid arguments: ${validateRes.error}`,
code: ErrorCodes.INVALID_ARGUMENTS,
});
} else {
req.body = validateRes.value;
next();
}
} | {
"domain": "codereview.stackexchange",
"id": 39352,
"tags": "javascript, typescript, express.js, authorization"
} |
Substitution Cipher Machine | Question: Here is an attempt I made at a simple substitution-based cipher machine. It shifts input characters by an amount, then returns an unreadable string which can then be decrypted back to its original self:
package Pack;
import java.io.UnsupportedEncodingException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
/**
* Start of CiphermachineMK3.
**/
public class CipherMachineMK3 {
/**
* Encryption character map before Encryption.
*/
private List<String> encryptMappingfrom = Arrays.asList(":", "/", "?", "#", ".", " ");
/**
* Encryption character map after Encryption.
*/
private List<String> encryptMappingto = Arrays.asList("!", "-", "+", ",", "]", " ");
/**
* Decryption character map before Decryption.
*/
private List<String> decryptMappingfrom = Arrays.asList("!", "-", "+", ",", "]", " ");
/**
* Decryption character map after Decryption.
*/
private List<String> decryptMappingto = Arrays.asList(":", "/", "?", "#", ".", " ");
/**
* HashMap to switch inputted character with encryption value.
*/
private static HashMap<String, String> encryptMapping = new HashMap<>();
/**
* HashMap to switch inputted character with decryption value.
*/
private static HashMap<String, String> decryptMapping = new HashMap<>();
/**
* Amount of shift required to loop the characters of the alphabet from A-Z and vice
* versa.
*/
private static final int SHIFT_ROTATE_VALUE = 26;
/**
* Encoding shift amount.
*/
private static final int NUMBER_TO_SHIFT_DIGITS = 3;
/**
* Shift amount required for decoding challenge.
*/
private static final int SECRET_DECODE_VALUE = 18;
/**
* Storage for lower case alphabet variables.
*/
private static final String ALPHABET_LOWERCASE = "abcdefghijklmnopqrstuvwxyz";
/**
* Storage for upper case alphabet variables.
*/
private static final String ALPHABET_UPPERCASE = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
/**
* Mapping for the encoding of specific characters.
*
* @param encryptMappingfrom
* Characters to be encrypted.
* @param encryptMappingto
* Characters encrypted to.
* @return Encryption map
*/
static HashMap<String, String> encryptFixedmappings(final List<String> encryptMappingfrom,
final List<String> encryptMappingto) {
for (int i = 0; i < encryptMappingfrom.size(); i++) {
encryptMapping.put(encryptMappingfrom.get(i), encryptMappingto.get(i));
}
return encryptMapping;
}
/**
* Mapping for the decoding of specific characters.
*
* @param decryptMappingfrom
* Characters to be decrypted.
* @param decryptMappingto
* Characters decrypted to.
* @return Decryption map
*/
static HashMap<String, String> decryptFixedmappings(final List<String> decryptMappingfrom,
final List<String> decryptMappingto) {
for (int i = 0; i < decryptMappingfrom.size(); i++) {
decryptMapping.put(decryptMappingfrom.get(i), decryptMappingto.get(i));
}
return decryptMapping;
}
/**
* Start of Encrypt method.
*
* @param plainText
* Input of clear readable text.
* @param shiftKey
* Amount to shift text by when encoding.
* @return cipherText The final encoded result.
*/
public String encrypt(final String plainText, final int shiftKey) {
String cipherText = "";
String charasString = "";
encryptMapping = CipherMachineMK3.encryptFixedmappings(encryptMappingfrom, encryptMappingto);
if (plainText == null) {
return null;
}
// Loop begins
for (int i = 0; i < plainText.length(); i++) {
// Checks if character is alphabetic
if (Character.isAlphabetic(plainText.charAt(i))) {
// Performs tasks if character is Lower-case
if (Character.isLowerCase(plainText.charAt(i))) {
int lowerCharPosition = ALPHABET_LOWERCASE.indexOf(plainText.charAt(i));
int keyVal = (shiftKey + lowerCharPosition) % SHIFT_ROTATE_VALUE;
if (keyVal < 0) {
keyVal = ALPHABET_LOWERCASE.length() + keyVal;
}
char sortEncryptLowerCaseASCII = (char) keyVal;
char replaceVal = ALPHABET_LOWERCASE.charAt(sortEncryptLowerCaseASCII);
cipherText += replaceVal;
}
// Performs tasks if character is Upper-case
if (Character.isUpperCase(plainText.charAt(i))) {
int upperCharPosition = ALPHABET_UPPERCASE.indexOf(plainText.charAt(i));
int keyVal = (shiftKey + upperCharPosition) % SHIFT_ROTATE_VALUE;
if (keyVal < 0) {
keyVal = ALPHABET_UPPERCASE.length() + keyVal;
}
char sortEncryptUpperCaseASCII = (char) keyVal;
char replaceVal = ALPHABET_UPPERCASE.charAt(sortEncryptUpperCaseASCII);
cipherText += replaceVal;
}
} else {
// Begins symbol checker loop
if (Character.isWhitespace(plainText.charAt(i))) {
cipherText += plainText.charAt(i);
continue;
} else {
charasString = String.valueOf(plainText.charAt(i));
if (!encryptMapping.containsKey(charasString)) {
// Default value for unrecognised characters
cipherText += "_";
} else {
cipherText += encryptMapping.get(charasString);
}
}
// Checks if character is alphabetic
}
}
return cipherText;
}
/**
* Start of Decrypt method.
*
* @param cipherText
* Input of encoded text.
* @param shiftKey
* Amount to shift text by when decoding.
* @return plainText The final decoded result.
*/
public String decrypt(final String cipherText, final int shiftKey) {
String plainText = "";
String charasString = "";
decryptMapping = CipherMachineMK3.decryptFixedmappings(decryptMappingfrom, decryptMappingto);
if (cipherText == null) {
return null;
}
// Loop begins
for (int i = 0; i < cipherText.length(); i++) {
// Checks if character is alphabetic
if (Character.isAlphabetic(cipherText.charAt(i))) {
// Performs tasks if character is Lower-case
if (Character.isLowerCase(cipherText.charAt(i))) {
int charPosition = ALPHABET_LOWERCASE.indexOf(cipherText.charAt(i));
int keyVal = (charPosition - shiftKey) % SHIFT_ROTATE_VALUE;
if (keyVal < 0) {
keyVal = ALPHABET_LOWERCASE.length() + keyVal;
}
char sortDecryptLowerCaseASCII = (char) keyVal;
char replaceVal = ALPHABET_LOWERCASE.charAt(sortDecryptLowerCaseASCII);
plainText += replaceVal;
}
// Performs tasks if character is Upper-case
if (Character.isUpperCase(cipherText.charAt(i))) {
int charPosition = ALPHABET_UPPERCASE.indexOf(cipherText.charAt(i));
int keyVal = (charPosition - shiftKey) % SHIFT_ROTATE_VALUE;
if (keyVal < 0) {
keyVal = ALPHABET_UPPERCASE.length() + keyVal;
}
char sortDecryptUpperCaseASCII = (char) keyVal;
char replaceVal = ALPHABET_UPPERCASE.charAt(sortDecryptUpperCaseASCII);
plainText += replaceVal;
}
}
// Checks if character is non-alphabetic
if (!Character.isAlphabetic(cipherText.charAt(i))) {
// Begins symbol checker loop
if (Character.isWhitespace(cipherText.charAt(i))) {
plainText += cipherText.charAt(i);
continue;
} else {
charasString = String.valueOf(cipherText.charAt(i));
if (!decryptMapping.containsKey(charasString)) {
// Default value for unrecognised characters
plainText += "*";
} else {
plainText += decryptMapping.get(charasString);
}
}
}
}
return plainText;
}
/**
* Start of main method.
*
* @param args
* args
* @throws UnsupportedEncodingException Throws exception in case string to byte conversion fails.
*/
public static void main(final String[] args) throws UnsupportedEncodingException {
CipherMachineMK3 cipherMachine = new CipherMachineMK3();
System.out.println(cipherMachine.encrypt("Hello World", NUMBER_TO_SHIFT_DIGITS));
System.out.println(cipherMachine.encrypt("Well done: #Decoded am i right?", NUMBER_TO_SHIFT_DIGITS));
System.out.println(cipherMachine.decrypt("Khoor Zruog", NUMBER_TO_SHIFT_DIGITS));
System.out.println(cipherMachine.decrypt("Ugfyjslmdslagfk ]]] Lwdd fgtgvq lzwhskkogjv ak! ,Uzwvvsj",
SECRET_DECODE_VALUE));
}
}
Written in Eclipse. Any feedback regarding potential improvements or even just "Get rid of X and do X instead" would be greatly appreciated.
Answer:
First, about your two strings ALPHABET_LOWERCASE and ALPHABET_UPPERCASE. You are re-inventing the wheel, because the alphabetical ordering of the 26 letters is already reflected by the values of the code points these letters have been assigned to in the Unicode standard: The letters "A" to "Z" correspond to the code points U+0041 to U+005A, and the letters "a" to "z" correspond to the code points U+0061 to U+007A (the code points are typically referred to by the hexadecimal representation of their values). And a char in Java is nothing but a 16-bit value that contains the value of this Unicode code point. For example, the assignments char a = 'a' and char a = 0x61 (or char a = 97, which would be the decimal representation) are equivalent, and you can perform mathematical operations on chars just like you can with other primitive values that represent integers.
This means that, in order to represent the alphabet, you don't need to store every single character. In fact, for the purposes of your code, I don't think it is necessary to store anything, since you only use the two strings for shifting a character by a certain number of positions in the alphabet, and you can do that using simple addition. For example, to shift an uppercase character by 3 positions, you can just do this:
char encryptedChar = (char) ((originalChar - 'A' + 3) % 26 + 'A')
The possibility of the shift key being negative makes this a bit more complicated, because chars can only have positive values. On the other hand, in the expression originalChar - 'A' + 3 above, originalChar and 'A' would first be converted to ints before the expression originalChar - 'A' is evaluated. Here is a link to Chapter 4.2.2. from the Java Language Specification where this is explained. The following two paragraphs are relevant:
If an integer operator other than a shift operator has at least one operand of type long, then […].
Otherwise, the operation is carried out using 32-bit precision, and the result of the numerical operator is of type int. If either operand is not an int, it is first widened to type int by numeric promotion.
To illustrate this, here is a code sample:
char zeroChar = 0;
int testInt = zeroChar - 1; // -1
char testChar = (char) testInt; // 65535
In the second line, zeroChar is converted to an int before 1 is subtracted from it, so no overflow occurs. But since -1 cannot be represented by a char, information is lost in the narrowing conversion from int to char in the third line, and the result is \$2^{16}-1\$, which is the maximum value of a char.
You seem to be under the impression that Character.isAlphabetic(int) checks whether a character is one of the 52 characters from "A" to "Z" and from "a" to "z". This is not true, as the method's documentation reveals. There are many more characters that are considered alphabetic by this method which your code does not deal with. Likewise, the methods Character.isLowerCase(char) and Character.isUpperCase(char) return true for many more upper- and lowercase characters than your code deals with. So if you want to check whether a character is one of the 52 characters mentioned above, this can easily be done with relational operators like > and <, taking into account what I have explained in the previous point.
Why is the variable sortEncryptLowerCaseASCII a char and not an int? It doesn't make a difference for the result of the program, but it is confusing, because sortEncryptLowerCaseASCII represents the index of a character in a string, not a character itself, and when you pass it to String.charAt(int), it is implicitly cast to an int anyway, so there is absolutely no point in making this variable a char.
Instead of constructs like these:
if (Character.isAlphabetic(plainText.charAt(i))) {
// ...
} else {
if (Character.isWhitespace(plainText.charAt(i))) {
// ...
} else {
// ...
}
}
You can write this:
if (Character.isAlphabetic(plainText.charAt(i))) {
// ...
} else if (Character.isWhitespace(plainText.charAt(i))) {
// ...
} else {
// ...
}
This reduces the level of nesting, which makes the flow of the program a bit easier to follow.
Your code contains a lot of duplicate code. For example, the decryption and encryption methods perform essentially the same operations. From what I can tell, the only differences are the character mappings for non-alphabetic characters, the replacement character for unrecognized characters, and whether you add or subtract the shift key when converting alphabetic characters. So instead of duplicating the conversion algorithm in two methods, you could just make one method that accepts the character mapping and the replacement character as a parameter and adds the shift key, and then create two wrapper methods for decryption and encryption that merely call this method with the appropriate parameters (which would include the negated shift key when calling the method for a decryption).
Likewise, the two blocks for converting upper- and lowercase characters are almost identical. You should try to extract the common functionality to one place and isolate the differences in order to eliminate as much duplicate code as possible. The only difference here is whether you use the String ALPHABET_LOWERCASE or ALPHABET_UPPERCASE. So you could declare a local variable String referenceString, which gets assigned ALPHABET_LOWERCASE if the character is lowercase, and ALPHABET_UPPERCASE if the character is uppercase, and then, you just replace every occurrence of ALPHABET_LOWERCASE/ALPHABET_UPPERCASE with referenceString and you can annihilate the duplicate code.
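To make the de-duplication advice concrete, here is a minimal sketch of that design (in Python for brevity; the structure carries over to Java directly, and all names are illustrative): one shared shift routine takes the key, the symbol mapping, and the replacement character, and decryption is just a call with the negated key and the inverse mapping.

```python
# Sketch of the de-duplicated design suggested above (Python used for
# brevity; the same structure applies in Java). Names are illustrative.

def shift_text(text, key, symbol_map, unknown_char):
    """Shift A-Z/a-z by `key` positions; map other symbols via `symbol_map`."""
    out = []
    for ch in text:
        if 'a' <= ch <= 'z':
            base = ord('a')
        elif 'A' <= ch <= 'Z':
            base = ord('A')
        elif ch.isspace():
            out.append(ch)          # whitespace passes through unchanged
            continue
        else:
            out.append(symbol_map.get(ch, unknown_char))
            continue
        # Python's % is non-negative for a positive modulus; in Java you
        # would write ((x % 26) + 26) % 26 to handle negative keys.
        out.append(chr(base + (ord(ch) - base + key) % 26))
    return ''.join(out)

ENC_MAP = {':': '!', '/': '-', '?': '+', '#': ',', '.': ']'}
DEC_MAP = {v: k for k, v in ENC_MAP.items()}

def encrypt(text, key):
    return shift_text(text, key, ENC_MAP, '_')

def decrypt(text, key):
    # Decryption is the same routine with a negated key and inverse map.
    return shift_text(text, -key, DEC_MAP, '*')
```

Note how the upper-/lowercase branches collapse into one by choosing the base character, and how `decrypt` no longer duplicates any logic.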
A more appropriate type for your List<String> fields would be List<Character>. This should be self-explanatory.
Finally, your usage of static and non-static fields and methods is very confusing, and I don't know if there is a purpose behind this or if you just didn't know what you were doing. I don't really know how to review this aspect of your code, because I cannot discern a meaning behind your choices of what to make static and what to make non-static. For instance, the four lists containing the non-alphabetic characters are instance variables, but they only ever carry the completely instance-independent value to which they were initialized, rendering the fact that they are non-static variables absurd. The HashMaps on the other hand, which are needed for encryption and decryption, are static, yet the encrypt and decrypt methods are non-static.
You might not have understood the purpose of static and non-static fields and methods, and I don't think this example with the cipher machine lends itself to explaining the difference very well. You might take a look at this. I'm sure there are also plenty of other resources available on the internet. | {
"domain": "codereview.stackexchange",
"id": 30354,
"tags": "java, caesar-cipher"
} |
Calculate the mass of Mg needed to produce 5 kg of vanadium | Question: I need to calculate the mass of magnesium needed to produce $\pu{5 kg}$ of vanadium from the following reaction:
$$\ce{3Mg + 2VCl3 -> 2V + 3MgCl2}$$
given relative atomic masses for $\ce{Mg}$ and $\ce{V}$ as $24$ and $51$, respectively.
I multiplied $24 × 3$ and $51 × 2$ to get
$72 : 102 $
I changed $\pu{5 kg}$ into grams, which is $5000$.
I cross-multiplied to find the answer and then divided the answer by $1000$ to turn it back to kilograms because the question stated it. My answer is $\pu{3.5 kg}$. Is this correct?
Answer: Your answer is correct.
It is given that for every 3 moles of $\ce{Mg}$ that react, 2 moles of $\ce{V}$ are produced. Here's one possible way to proceed towards the answer.
First, find out how many moles of $\ce{V}$ are in a mass of $5000\,\mathrm{g}$:
$$5000\,\mathrm{g}\cdot\left({1\,\mathrm{mol}\,\ce{V}}\over 51\,\mathrm{g}\right) = 98\,\mathrm{mol}\,\ce{V}$$
Second, calculate how many moles of $\ce{Mg}$ will yield $98$ moles of $\ce{V}$ by multiplying the ratio of the stoichiometric coefficients (3:2) by the number from step 1:
$$\left({3\,\mathrm{mol}\,\ce{Mg}\over 2\,\mathrm{mol}\,\ce{V}}\right)\cdot 98\,\mathrm{mol}\,\ce{V} = 147\,\mathrm{mol}\,\ce{Mg}$$
Finally, calculate the mass of $\ce{Mg}$ by multiplying the number of moles from step 2 by the atomic mass:
$$147\,\mathrm{mol}\,\ce{Mg}\cdot\left({24\,\mathrm{g}\over 1\,\mathrm{mol}\,\ce{Mg}}\right) = 3528\,\mathrm{g} \approx 3.5\,\mathrm{kg}$$ | {
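The three steps can also be verified numerically; here is a quick sketch in Python using the rounded atomic masses given in the question:

```python
# Quick numerical check of the three steps above, using the rounded
# atomic masses from the question (Mg = 24, V = 51).
M_V, M_Mg = 51.0, 24.0
mass_V = 5000.0                  # g (5 kg of vanadium)

mol_V = mass_V / M_V             # step 1: ~98 mol V
mol_Mg = mol_V * 3 / 2           # step 2: 3 mol Mg per 2 mol V -> ~147 mol Mg
mass_Mg = mol_Mg * M_Mg          # step 3: ~3529 g

print(round(mass_Mg / 1000, 1))  # -> 3.5 (kg)
```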
"domain": "chemistry.stackexchange",
"id": 6188,
"tags": "inorganic-chemistry, stoichiometry"
} |
When separating an Oreo cookie, why does the cream stick to just one side only? | Question: There is probably some reason for this, but I can't figure out what it is. I agree that it probably doesn't happen 100% of the time, but most all of the time, the cream is clinging to just one of the cookie sides.
Answer: The "stuff" sticks to itself better than it sticks to the cookie. Now if you pull the cookies apart, you create a region of local stress, and one of the two interfaces will begin to unstick. At that point, you get something called "stress concentration" at the tip of the crack (red arrow) - where the tensile force concentrates:
To get the stuff to start separating at a different part of the cookie, you need to tear the stuffing (which is quite good at sticking to itself) and initiate a delamination at a new point (where there is no stress concentration).
Those two things together explain your observation.
Cookie picture credit (also explanation about manufacturing process introducing a bias)
Update
A plausible explanation was given in this article describing work by Cannarella et al:
Nabisco won’t divulge its Oreo secrets, but in 2010, Newman’s Own—which makes a very similar “Newman-O”—let the Discovery Channel into its factory to see how their version of cookies are made. The key aspect for twist-off purposes: A pump applies the cream onto one wafer, which is then sent along the line until a robotic arm places a second wafer on top of the cream shortly after. The cream always adheres better to one of these wafers—and all of the cookies in a single box end up oriented in the same direction.
Which side is the stronger wafer-to-cream interface? “We think we know,” says Spechler. The key is that fluids flow better at high temperatures. So the hot cream flows easily over the first wafer, filling in the tiny cracks of the cookie and sticking to it like hot glue, whereas the cooler cream just kind of sits on the edges of those crevices. | {
"domain": "physics.stackexchange",
"id": 40884,
"tags": "pressure, everyday-life, surface-tension"
} |
intialpose is in what reference? | Question:
What frame is the initialPose generated by setPose from the navigation panel in? Shouldn't I do a translation to base_link first before using it? I am guessing odom, so I would translate base_link -> odom before publishing tf and odometry.
Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-09-21
Post score: 2
Answer:
amcl subscribes to the initialpose topic, and initial pose estimates are accepted in the global frame. You can see the relevant part of the code inside the initialPoseReceived function from amcl_node.cpp:
else if(tf_->resolve(msg->header.frame_id) != tf_->resolve(global_frame_id_))
{
ROS_WARN("Ignoring initial pose in frame \"%s\"; initial poses must be in the global frame, \"%s\"",
msg->header.frame_id.c_str(),
global_frame_id_.c_str());
return;
}
by default amcl sets global frame id to map
for more details check out here
Originally posted by cagatay with karma: 1850 on 2013-09-21
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 15610,
"tags": "navigation, amcl"
} |
rostopic publish message to remote machine? | Question:
Hi folks,
I'm trying to connect to ROS running on a remote machine. I can remotely echo the topics from the machine however I cannot publish anything.
rostopic echo /odom
works like a charm, printing all the odom messages published by the remote ROS on my local laptop.
rostopic pub /cmd_vel geometry_msgs/Twist -- ""
however does not. I have traced the communication using Wireshark and all I can see are some registerService, registerPublisher and getParam commands. My message however doesn't arrive. I've also tried to simply publish a std_msgs/String to a topic called /blubb. But even that does not work. I cannot see the message itself on the network (however I can see the mentioned remote procedure calls) and thus it doesn't arrive at the remote machine.
ROS_IP is set to the remote machine's IP on both machines.
ROS_MASTER_URI is set to http://:11311 on both machines.
netcat works, rostopic echo does, rqt_plot does, even rviz does using the remote connection. I just can't send messages to the remote machine.
Any idea what might be wrong?
Thanks a lot!
Cheers,
Hendrik
Originally posted by Hendrik Wiese on ROS Answers with karma: 1145 on 2014-01-16
Post score: 2
Answer:
I think you've got the meaning of ROS_IP wrong. ROS_IP is to be set to each machine's own IP, i.e. the IP of the machine where a node is running. It is not something like ROS_MASTER_URI!
So, if you have machine a at 10.1.1.1 and machine b at 10.1.1.2, each node running on a must have ROS_IP 10.1.1.1 and each node on machine b ROS_IP 10.1.1.2. The reason is that machines need to be able to connect forth and back. That is why some connections work (in one direction), while others don't.
Originally posted by dornhege with karma: 31395 on 2014-01-16
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by tfoote on 2014-01-16:
See http://wiki.ros.org/ROS/NetworkSetup for an overview of networking setup.
Comment by Hendrik Wiese on 2014-01-18:
I see, I see. Thanks a lot! That has solved this problem. | {
"domain": "robotics.stackexchange",
"id": 16664,
"tags": "ros, remote-roscore, publisher, rostopic, remote"
} |
How can I convert a BED file to GTF/GFF with gene_ids? | Question: Given a .bed (BED12), how can I convert it to GTF/GFF formats with gene_id attributes? What is the fastest way or available tools to do it?
For example, given an input like this:
chr27 17266469 17281218 ENST00000541931.8 1000 + 17266469 17281218 0,0,200 2 103,74, 0,14675,
How can I convert it to:
chr27 bed2gtf gene 17266470 17285418 . + . gene_id "ENSG00000151743";
chr27 bed2gtf transcript 17266470 17281218 . + . gene_id "ENSG00000151743"; transcript_id "ENST00000541931.8";
chr27 bed2gtf exon 17266470 17266572 . + . gene_id "ENSG00000151743"; transcript_id "ENST00000541931.8"; exon_number "1"; exon_id "ENST00000541931.8.1";
Answer: Here I provide an ordered list of options (note that I am the author of bed2gtf and bed2gff):
bed2gtf
A high-performance BED-to-GTF converter written in Rust from https://github.com/alejandrogzi/bed2gtf.
Usage: bed2gtf[EXE] --bed/-b <BED> --isoforms/-i <ISOFORMS> --output/-o <OUTPUT> --threads/-t <THREADS>
where:
--bed <BED>: a .bed file
--isoforms <ISOFORMS>: a tab-delimited file
--output <OUTPUT>: path to output file (*.gtf)
The isoforms file specification:
a tab-delimited .txt/.tsv/.csv/... file with genes/isoforms (all the transcripts in .bed file should appear in the isoforms file):
> cat isoforms.txt
ENSG00000198888 ENST00000361390
ENSG00000198763 ENST00000361453
ENSG00000198804 ENST00000361624
ENSG00000188868 ENST00000595977
Converts
Homo sapiens GRCh38 GENCODE 44 (252,835 transcripts) in 3.25 seconds.
Mus musculus GRCm39 GENCODE 44 (149,547 transcripts) in 1.99 seconds.
Canis lupus familiaris ROS_Cfam_1.0 Ensembl 110 (55,335 transcripts) in 1.20 seconds.
Gallus gallus bGalGal1 Ensembl 110 (72,689 transcripts) in 1.36 seconds.
bed2gff
A Rust BED-to-GFF3 translator that runs in parallel from https://github.com/alejandrogzi/bed2gff.
Usage: bed2gff[EXE] --bed/-b <BED> --isoforms/-i <ISOFORMS> --output/-o <OUTPUT> --threads/-t <THREADS>
where:
--bed <BED>: a .bed file
--isoforms <ISOFORMS>: a tab-delimited file
--output <OUTPUT>: path to output file (*.gff)
The isoforms file specification:
a tab-delimited .txt/.tsv/.csv/... file with genes/isoforms (all the transcripts in .bed file should appear in the isoforms file):
> cat isoforms.txt
ENSG00000198888 ENST00000361390
ENSG00000198763 ENST00000361453
ENSG00000198804 ENST00000361624
ENSG00000188868 ENST00000595977
Converts
Homo sapiens GRCh38 GENCODE 44 (252,835 transcripts) in 4.16 seconds.
Mus musculus GRCm39 GENCODE 44 (149,547 transcripts) in 2.15 seconds.
Canis lupus familiaris ROS_Cfam_1.0 Ensembl 110 (55,335 transcripts) in 1.30 seconds.
Gallus gallus bGalGal1 Ensembl 110 (72,689 transcripts) in 1.51 seconds.
bedToGenePred + genePredToGtf + refTable
UCSC offers a fast way to convert BED into GTF files through KentUtils or specific binaries using:
bedToGenePred in.bed /dev/stdout | genePredToGtf file /dev/stdin out.gtf
You can install these tools with bioconda, or download them here. The gene_id is only achieved when using refTables (a format specified in UCSC's web browser), you can see a more elaborate answer here Obtaining Ucsc Tables Via Ftp And Converting Them To Proper Gff3 Via Genepredtogtf?.
Other options
Other scripts/tools that DO NOT produce a complete GTF file (lacking gene_id attributes) are:
gtf2bed
gtf2bed < foo.gtf | sort-bed - > foo.bed
awk '{print $1"\t"$7"\t"$8"\t"($2+1)"\t"$3"\t"$5"\t"$6"\t"$9"\t"(substr($0, index($0,$10)))}' foo.bed > foo_from_gtf2bed.gtf
kscript
from https://github.com/holgerbrandl/kscript:
kscript https://git.io/vbJ4B my.bed > my.gtf
pfurio/bed2gtf
from https://github.com/pfurio/bed2gtf:
python bed2gtf [options] <mandatory>
AGAT
Considering only the options that produce gene_id attributes, bed2gtf and bed2gff are faster by ~3-4 seconds than UCSC's C binaries. More detailed instructions for these tools are explained in the sources linked.
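For readers who want to understand what these converters do internally, the core BED12-to-GTF coordinate arithmetic (BED is 0-based, half-open; GTF is 1-based, inclusive; exons come from blockSizes/blockStarts) can be sketched in a few lines of Python. This is only an illustration, not a replacement for the tools above: it emits transcript and exon lines only, and a gene_id would still have to come from an isoforms table:

```python
# Minimal illustration of the BED12 -> GTF coordinate arithmetic.
# BED is 0-based, half-open; GTF is 1-based, inclusive. This sketch emits
# only transcript/exon features; a gene_id lookup (isoforms file) would be
# needed for complete output, as with the tools above.

def bed12_to_gtf(line, source="bed2gtf"):
    # Real BED is tab-delimited; a whitespace split also tolerates the
    # space-separated example shown in the question.
    f = line.split()
    chrom, start, end, name, _, strand = f[0], int(f[1]), int(f[2]), f[3], f[4], f[5]
    sizes = [int(x) for x in f[10].rstrip(",").split(",")]
    offsets = [int(x) for x in f[11].rstrip(",").split(",")]
    attrs = f'transcript_id "{name}";'
    rows = [(chrom, source, "transcript", start + 1, end, ".", strand, ".", attrs)]
    for i, (sz, off) in enumerate(zip(sizes, offsets), 1):
        exon_attrs = f'{attrs} exon_number "{i}";'
        rows.append((chrom, source, "exon", start + off + 1, start + off + sz,
                     ".", strand, ".", exon_attrs))
    return ["\t".join(map(str, r)) for r in rows]
```

Running it on the example BED line from the question reproduces the exon coordinates shown there (17266470-17266572 and 17281145-17281218).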
"domain": "bioinformatics.stackexchange",
"id": 2622,
"tags": "bed, format-conversion, gtf, gff"
} |
What plotting / visualization tools are useful in generating good plots? | Question: It has been said a picture is worth a thousand words!
There is a large amount of software for writing DSP code and analyzing signals and algorithms. It would be good to share data plotting tools; that is, given a data set (1-D, 2-D, 3-D, or higher dimensions, including creative ways to visualize the latter), what are the different tools available and the advantages of each for plotting such data sets?
Advantages can be with respect to:
(a) quick/easy plotting
(b) formats of figures (.eps, .svg) which can be generated
(c) ease of latex font embedding
(d) PostScript figures
Tools like matlab, octave, gnuplot, tikz, pgf, etc. are widely used by many of us. It would be useful to share your experiences (advantages/disadvantages) and the best plot you have drawn using the tool you prefer!
Answer: I guess Python has great tools for this, especially IPython Notebook: http://ipython.org/notebook.html . It has good visualization capabilities, and at the same time you can document your code and make it re-configurable and available online. Of course it is best for visualizing data in low dimensions; creative visualization algorithms can be prepared either in OpenGL or in high-level languages such as MATLAB.
In 3D, if appropriate, time-varying surfaces can be embedded in 2D and shown as a 3D volume. For generic 4D visualizations you could use either videos or accumulated volume maps. For visualization of high-dimensional data, always make sure you check methods such as Multi-Dimensional Scaling.
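As a concrete illustration of point (b) from the question, here is a minimal matplotlib sketch (assuming matplotlib is installed; the file name is illustrative) that exports the same figure to both .eps and .svg, with mathtext giving LaTeX-style labels:

```python
# Minimal matplotlib example: one figure exported to both .eps and .svg.
# Assumes matplotlib is installed; uses the non-interactive Agg backend.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

x = [n / 100 for n in range(200)]
y = [xi ** 2 for xi in x]

fig, ax = plt.subplots()
ax.plot(x, y, label=r"$y = x^2$")   # mathtext gives LaTeX-style labels
ax.set_xlabel(r"$x$")
ax.set_ylabel(r"$y$")
ax.legend()

for ext in ("eps", "svg"):          # same figure, two vector formats
    fig.savefig(f"demo_plot.{ext}")
```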
"domain": "dsp.stackexchange",
"id": 1404,
"tags": "dsp-core, visualization"
} |
Separating the topics of general and special relativity | Question: So, I have managed to confuse myself beyond the point of repair. I am not a physics student, so my physical knowledge is limited.
Anyway, there are a few topics of relativity which I cannot seem to separate into either the general or the special one. For one, I know a lot of things probably overlap, but what is more important to know is which topics do NOT belong to either.
So, when trying to explain special relativity I think I mixed up a couple of things. Correct me if I am wrong:
Special relativity describes that transformation of mass into energy and back is possible. Time is moving relatively (slower for faster moving objects). A ball falling to earth is equivalent to a ball 'falling' in a at g accelerating rocket. A ball in a falling elevator is equivalent to a ball floating in space. Nothing can move faster than light. Light in vacuum is always travelling at the same speed. Light is affected by gravity. Mass alters the time-space-fabric and attracts mass. The actual mass of something depends on the objects absolute velocity. Physical rules are the same in all not-moving systems. Everything moves along spacetime on the shortest possible route.
I know those are in no order whatsoever, but are there some things which do not belong in this heap, or did I forget anything? I find it hard to find anything that directly separates all those topics.
Answer: Special relativity addresses the null result of the Michelson-Morley experiment, which failed to measure motion relative to an assumed ether.
I wouldn't say time is moving at all. Rather, measured time and space intervals collected by pairs of observers will only match if there is no relative uniform motion between them. Relative motion causes a mixing of the data of one observer as seen by the other observer. This has some interesting consequences, specifically for simultaneity. However, the bending of light by gravity does not come into play at all in SR. SR provides us with the correct coordinate invariance of the laws of physics and puts time on an equal footing with space. But the space-time of SR is flat and extends to infinity in all directions, like Euclidean space. The notion of boosting mass is a bit out of date. We don't really see things that way any more. The reason is that there is really only one frame of reference that makes sense for measuring a particle's mass, its own rest frame. So M0 is the true mass of any particle. Also, we have a mathematical paradigm for expressing all these results that basically says Energy is the time component of a 4-dim vector, with momentum the other space components. The 4-vector (E, px, py, pz) lives in space-time, and the inner product (measure of distance) in this new space gives the magnitude of this vector as -E^2 + |p|^2 (I've set c = 1 for simplicity).
The minus sign is very important here. The magnitude is invariant under Lorentz transforms. The "size" of this vector is -m^2 --> E^2 = |p|^2 + m^2. The point is, in this paradigm "mass" is the norm of a vector and is constant.
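The invariance of this norm can be checked numerically. A minimal Python sketch (with c = 1), using a particle of rest mass m = 3 and momentum p = 4 along x, boosted at v = 0.6:

```python
import math

def minkowski_norm_sq(E, px, py, pz):
    # Invariant "length" squared of the energy-momentum 4-vector (c = 1).
    return -E**2 + px**2 + py**2 + pz**2

def boost_x(E, px, py, pz, v):
    # Lorentz boost along the x axis with speed v (c = 1).
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return (gamma * (E - v * px), gamma * (px - v * E), py, pz)

# Particle with rest mass m = 3 and momentum p = (4, 0, 0), so E = sqrt(16 + 9) = 5.
E, px, py, pz = 5.0, 4.0, 0.0, 0.0
before = minkowski_norm_sq(E, px, py, pz)
after = minkowski_norm_sq(*boost_x(E, px, py, pz, 0.6))
print(before, after)  # both ≈ -9, i.e. -m^2, unchanged by the boost
```

The norm comes out as -m^2 = -9 in both frames, which is exactly the "mass is constant" point made above.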
I know I tossed a lot of math and you said you did not have a background in physics but you can look for some popular books that explain this, including Einstein's text.
Now, once Einstein worked out the correct symmetry for light, and set his postulates for SR, he set out to make all other laws of physics obey the same symmetry. This required modification to Newton's second law of motion and eventually gravity. This is where GR is born. GR replaced the force of gravity with the curvature of the space-time continuum. Astronomical bodies move the way they move because strong sources of curvature create non-trivial geodesic paths. This is why light paths bend in the presence of large massive bodies: the mass curves space-time and the light follows a geodesic. Since light (photons) has no mass, its energy-momentum vector has ZERO length! That's right, those vectors have no length, or null length, in relativity. This is the fundamental separation of GR and SR: the curvature of space and time.
I hope that helps a little. | {
"domain": "physics.stackexchange",
"id": 58406,
"tags": "general-relativity, special-relativity"
} |
From CMB anisotropy data observed in 1992, did astronomers figure out that the universe should be accelerating before its discovery in 1998? | Question: CMB anisotropy was measured in 1992. I assume that astronomers, then, like now, would have been able to deduce the cosmological constant and things like that from the CMB anisotropy data. Then, from the cosmological constant and things like that, they would have been able to find that the current universe was accelerating. Did they indeed know that the current universe should be accelerating before its observation in 1998? Or, was the discovery in 1998 a great surprise? What was the exact situation between 1992 and 1998?
Answer: The only CMB anisotropies that were well determined in 1992 (and by the date, I assume you refer to the results from the COBE satellite) were those at small angular frequencies. This revealed the overall scale of the CMB structure (which tells you about a need for dark matter in order to get the large scale structure seen at the present epoch) and the dipole anisotropy, which tells you about the Galactic motion with respect to the CMB rest frame. There were no constraints on any possible dark energy - or rather, the effect of dark energy on the low frequency anisotropies is degenerate (i.e. cannot be distinguished from) the effects of the overall curvature of the universe.
To measure the curvature of the universe and constrain the mix of dark matter and dark energy (and constrain other cosmological parameters) needs information from higher angular frequencies - in particular information from the location and amplitudes of the first three "acoustic peaks" at angular scales of . This really only became available with the WMAP data (launched in 2001) and with improvements in ground based CMB measurements that took place at the end of the 1990s. The results and conclusions from this emerged over several years from 2000 onwards (e.g. Durrer et al. 2003; Spergel et al. 2003).
In actual fact, the evidence for $\Lambda$ from the CMB alone is rather weak. As explained in one of Wayne Hu's cosmology tutorials, the strongest evidence from the CMB is that the total density of the universe $\Omega = 1$ from the CMB and this can then be combined with other data (e.g. primordial abundances of D, He; galaxy cluster dynamics; gravitational lensing) to suggest that the matter contribution to this is only $\simeq 0.3$. | {
"domain": "astronomy.stackexchange",
"id": 3306,
"tags": "expansion, history, cosmic-microwave-background"
} |
A Machine learning model that has an undefined input size but a fixed output? | Question: I don't know too much about ML but I can't seem to figure out how to train something like this. If you guys could list some possible ways to do this, thank you.
Answer: You have multiple options to "collapse" a variable-length input into a single value:
Recurrent neural networks (RNN), either vanilla RNNs or more powerful variants like long-short term memories (LSTM) or gated recurrent units (GRU). With these, you "accumulate" the information of each time step in the input and at the end, you get your fixed-size output.
Pooling, either average pooling or max pooling. You just compute the average/maximum of the representation across the time dimension.
Padding. You just assume a maximum length for the data and create a neural network that receives a piece of data of such a length. For any input data that is shorter, you just add some "padding" elements at the end until they have the maximum size. | {
"domain": "datascience.stackexchange",
"id": 7393,
"tags": "machine-learning, python, tensorflow"
} |
Binary Search Tree insert while keeping track of parent for node to be added | Question:
This question has a follow up question:
Binary Search Tree insert while keeping track of parent for node to be added - iteration 2
I am implementing a red black tree for fun and am wondering how I should modify my basic BST insert. Note: this happens before the red black tree rules are applied, it just finds the correct place within the tree to add the node, places it, sets references, value and defaults the color to RED. I am mainly struggling to see if there may be a better way to tack on the parent reference for the newly added node. The implementation I have here looks ahead one step with a NULL check that a BST insert which does not need to track the parent would not require.
struct node * bstInsert(struct node *n, int x) {
if (n != NULL) {
int cmp = (n->val < x) ? -1 : (n->val > x);
if (cmp == -1) {
if (n->left == NULL) {
n->left = createAChild(n, n->left, x);
} else {
bstInsert(n->left, x);
}
} else if (cmp > 0) {
if (n->right == NULL) {
n->right = createAChild(n, n->right, x);
} else {
bstInsert(n->right, x);
}
}
}
return n;
}
struct node * createAChild(struct node *par, struct node *n, int x) {
n = malloc(
sizeof(struct node)
);
n->parent = par;
n->left = n->right = NULL;
n->val = x;
n->color = RED;
return n;
}
Is there a cleaner solution to setting the parent reference for the node to be added?
Answer:
Instead of if (n != NULL) and then indenting the whole body of the function, it is better to write:
if (n == NULL) { return NULL; }
The line int cmp = (n->val < x) ? -1 : (n->val > x); is very hard to read.
I suggest finding a better name to cmp - for example isGreater and changing its type to boolean. | {
"domain": "codereview.stackexchange",
"id": 22779,
"tags": "c, recursion, tree, reference"
} |
Xtion_pro_live Skeleton_tracker cannot work | Question:
Hello everyone,
I can use xtion pro live by openni2 . `roslaunch openni2_launch openni2.launch`
I want to use xtion pro live to detect human skeleton . But I cannot use `NiTE v2` as I cannot download it from [here](http://openni.ru/files/nite/index.html).
The openni2_tracker package cannot be used either.
And other tutorials like [skeleton_tracker](https://github.com/Chaos84/skeleton_tracker) also do not work.
Do I understand correctly that we cannot do skeleton tracking now without NiTE v2?
Or is there any package that I can use, or another way that could solve my problem?
Thanks first!
Originally posted by wsAndy on ROS Answers with karma: 224 on 2015-08-04
Post score: 0
Answer:
Hi @wsAndy !
You can download nite2 from here. If you have issues, follow these guidelines. Then you can make this skeleton_tracker work.
Please, let me know if you have issues.
Originally posted by Chaos with karma: 396 on 2015-08-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by wsAndy on 2015-08-05:
Chaos , thanks for your answer ! The website you advice me is really good but the problem is that I could not download NiTE2 from this website. I don't know the reason.
Comment by wsAndy on 2015-08-05:
Then I tried another way. I was surprised to find that NiTE2 is included in the cob_people_perception package.
Comment by wsAndy on 2015-08-05:
Now , I can detect people by skeleton_tracker.
Comment by Pototo on 2016-09-30:
Chaos, for how long have you ran your skeleton tracker while publishing depth and rgb at the same time? Do you see your program crashing after running for a while? What's the longest you've voluntarily ran your program for?
Comment by Chaos on 2016-10-03:
@Pototo, the last time I used this node was just a couple of weeks ago, and it ran for hours. | {
"domain": "robotics.stackexchange",
"id": 22368,
"tags": "ros, skeleton-tracker, asus-xtion-pro-live, openni2, nite"
} |
Are there any estimates of the Roche Limit for 152830 Dinkinesh? | Question: The Lucy spacecraft recently flew past the asteroid 152830 Dinkinesh on its way through the asteroid belt and photos show that Dinkinesh has a moon consisting of a contact binary.
(Image is Public Domain: en.wikipedia.org/wiki/152830_Dinkinesh#/media/File:Dinkinesh-family-portrait-2.png)
From Wikipedia:
The Roche limit, also called Roche radius, is the distance from a
celestial body within which a second celestial body, held together
only by its own force of gravity, will disintegrate because the first
body's tidal forces exceed the second body's self-gravitation.
Inside the Roche limit, orbiting material disperses and forms rings,
whereas outside the limit, material tends to coalesce. The Roche
radius depends on the radius of the first body and on the ratio of the
bodies' densities.
The estimated diameter of Dinkinesh is 790 metres but I haven't seen any estimates of the orbital distance of the moon, or of the densities of these bodies. Given that the fly-by is so recent, and the moon hasn't even been named yet, are there any estimates of the Roche Limit of Dinkinesh?
Answer: Looking at the photo and assuming the densities of the objects are similar, we can make an estimate.
The same wikipedia page you cite has the equation that for rigid bodies
$$d \simeq R_1\left(2\frac{\rho_1}{\rho_2}\right)^{1/3}\ , $$
where $R_1$ is the primary radius and $\rho_1, \rho_2$ are the densities of the primary and secondary.
Thus the Roche limit is not much bigger than the radius of the primary object.
This might be increased by a factor of 2 or so for an oblate secondary or for a less rigid body. It could be decreased if internal forces other than gravity are important in holding the object together.
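To make the estimate concrete, here is a minimal Python sketch of the rigid-body formula above, using Dinkinesh's estimated ~790 m diameter and assuming (as stated) equal densities for primary and secondary:

```python
def rigid_roche_limit(primary_radius, rho_primary, rho_secondary):
    # d ≈ R1 * (2 * rho1 / rho2)**(1/3), the rigid-body Roche limit.
    return primary_radius * (2.0 * rho_primary / rho_secondary) ** (1.0 / 3.0)

R1 = 790.0 / 2.0  # metres; Dinkinesh's estimated diameter is ~790 m
d = rigid_roche_limit(R1, rho_primary=1.0, rho_secondary=1.0)  # equal densities assumed
print(d, d / R1)  # d / R1 = 2**(1/3) ≈ 1.26 primary radii
```

With equal densities the limit sits only about 1.26 primary radii out, consistent with "not much bigger than the radius of the primary object".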
Looking at the photograph you can see that the satellite is separated by at least (given possible projection effects) 10 times the radius of the primary so should be well outside the Roche limit (which is I suppose obvious from its existence).
AFAIK, given that the discovery only happened a few days ago, there are no masses available for these objects, but some constraints may be possible from analysing the time-series of relative positions. | {
"domain": "astronomy.stackexchange",
"id": 7172,
"tags": "asteroids, natural-satellites, roche-limit"
} |
Better way to return data via AJAX? | Question: Let's say I have three files: a page that is used for searching data called search, a file that handles requests called controller, and a file that has functions which the controller calls based on what POST parameter is passed. From the search page, I post some data and specify my action (search) via AJAX to the controller. The controller then takes that and calls a function that returns the data (which is inserted into the appropriate place in the DOM) which looks something like this:
<?php
function fetchStudentData($search, $roster) {
//If a user is not logged in or the query is empty, don't do anything
if(!isLoggedIn() || empty($search)) {
return;
}
global $db;
//Placement roster search
if($roster == "placement") {
$query = $db->prepare("SELECT t1.id,
t2.placement_date,
t2.placement_room,
t1.student_id,
t1.student_type,
t1.last_name,
t1.first_name,
t1.satr,
t1.ptma,
t1.ptwr,
t1.ptre,
t1.ptlg,
t1.waiver,
t2.tests_needed,
t2.blackboard_id
FROM student_data AS t1
JOIN additional_data AS t2
ON t1.id = t2.student_data_id
WHERE t2.placement_date = :search
ORDER BY t1.last_name ASC");
$query->bindParam(":search", $search);
$query->execute();
if($query->rowCount() == 0) {
return;
}
echo "<div id='searchHeader'>
<div class='searchSubHead'><em>Placement Roster For $search</em></div>
<br />
<span>Placement Room</span>
<span>Student ID</span>
<span>Student Type</span>
<span>Last Name</span>
<span>First Name</span>
<span>SATR</span>
<span>PTMA</span>
<span>PTWR</span>
<span>PTRE</span>
<span>PTLG</span>
<span>Waiver</span>
<span>Tests Needed</span>
<span>Blackboard ID</span>
</div>";
while($row = $query->fetch()) {
echo "<div id='{$row['id']}' class='placementRosterRecord'>
<span>{$row['placement_room']}</span>
<span>{$row['student_id']}</span>
<span>{$row['student_type']}</span>
<span>{$row['last_name']}</span>
<span>{$row['first_name']}</span>
<span>{$row['satr']}</span>
<span>{$row['ptma']}</span>
<span>{$row['ptwr']}</span>
<span>{$row['ptre']}</span>
<span>{$row['ptlg']}</span>
<span>{$row['waiver']}</span>
<span>{$row['tests_needed']}</span>
<span>{$row['blackboard_id']}</span>
</div>";
}
}
}
?>
This is obviously working very well for me but I wanted to know if there's a better way I should be doing this? Should I be returning the raw data from the database and encoded as JSON and then have my JavaScript take care of creating elements/structure? I see that a lot with various API's but this is something internal to my company and not something that other developers will be using. Is there anything wrong with my approach?
Answer: First of all, you should read about MVC.
There are a lot of tutorials out there, as well as frameworks (like CodeIgniter) that implement it. Basically your code should be divided into 3 main categories:
Controllers - where all logic should be placed like validations, checks. This is also where Views and Models are used, hence "controller"
Models - basically the source of data. Usually this is the abstraction of the data storage (databases, sessions, external requests) in the form of functions.
Views - the presentation, or in this case, the HTML.
Reasons to use MVC or MV* patterns is
Portability.
The files, especially the views and models, can be plucked out of the architecture, and be used elsewhere in the system without modification.
Models are simply function declarations one can just plug in the controller and call their functions. Views are simply layouts that, when plugged with the right values, display your data. They can be used and reused with little or no modification unlike with mixed code.
Purpose-written code and Readability
You would not want your HTML mixed with the logic, nor the data gathering mechanism. This goes for all other parts of your code. For example, when you edit your layout, all you want to see on your screen is your layout code, and nothing more.
Never, and I mean NEVER return marked-up data using AJAX (use JSON).
The main reason AJAX is used is because it's asynchronous. This means no page-loading, no more request-wait-respond, and you could do stuff in parallel. It's pseudo-multitasking for the human.
On the same train of thought, the reason why JSON was created was because XML was too heavy for data transfer and we needed a lightweight data-transport format. XML is NOT a data-transport format but a document and mark-up language (though, true to the "X" for extensibility, it has served a lot of purposes beyond documents).
Use client-side templating
JavaScript engines have become fast these days that even developers create parsers/compilers on JavaScript. And with that comes client-side templating, like Mustache.
Coupled with raw data from JSON, which is very light, you can integrate that data into a template to form your view and have JavaScript display it, like you already do.
Have a rendered version and a data version
Taking into account the previous answer, if you are making a website rather than a web app, where JS is optional rather than required, one should create a website that works without JS.
However, for the same data but different format, the URL should not change (much). You should then take into account in the request what data type is requested by the client. And so you must accept a parameter which indicates the data type.
An example url would look like this:
//for a website page format (default: HTML):
http://example.com/my/shop/available/items.php
//for the JSON version
http://example.com/my/shop/available/items.php?format=JSON
//for the XML version
http://example.com/my/shop/available/items.php?format=XML
This means, that for the same page (available items), you have 3 formats you can use. This can be seen in the Joomla Framework (the framework that powers the Joomla CMS)
With these 3 tips, your code will be cleanly separated and purposely written (MVC), fast on transport and rendering (JSON + client-side templating) and extensible (multi-format support). | {
"domain": "codereview.stackexchange",
"id": 2861,
"tags": "javascript, php, mysql, ajax, json"
} |
how to compare between kmeans and hierarchical clustering results | Question: I am using 2 types of clustering algorithm
I apply hierarchical clustering and K-means clustering using the Python sklearn library.
Now the results are a little bit different, so how can I compare the results, and which algorithm should I use? I ask because I want to write a conclusion for a set of unlabeled data.
Is there any benefit to use multiple algorithms and compare between them?
Answer: In general, even using k-means more than once will create slightly different clusters (that is, if you do not set a random seed).
Ideally, the profiles of the clusters and the distribution of data points into them will be similar. If this is the case then it shouldn't really matter which one you choose. If not, then I suggest you examine carefully what caused any major differences.
If you really want to chose, then:
you can use a metric like total within distance or silhouette (ex. select the clustering solution with the lowest total within distance or with the maximum average silhouette),
you can use your judgement to select the clustering solution that makes more sense business wise.
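The first of these criteria can be illustrated with a small, dependency-free Python sketch of the mean silhouette score (in practice you would use sklearn.metrics.silhouette_score), applied to a toy 1-D dataset, comparing the natural partition against an arbitrary one:

```python
def mean_silhouette(points, labels):
    # Mean silhouette for 1-D points (assumes every cluster has >= 2 members).
    def mean_dist(p, members):
        return sum(abs(p - q) for q in members) / len(members)
    scores = []
    for i, p in enumerate(points):
        own = [q for j, q in enumerate(points) if labels[j] == labels[i] and j != i]
        a = mean_dist(p, own)  # mean distance to the point's own cluster
        b = min(mean_dist(p, [q for j, q in enumerate(points) if labels[j] == lab])
                for lab in set(labels) if lab != labels[i])  # nearest other cluster
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

points = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
good = mean_silhouette(points, [0, 0, 0, 1, 1, 1])  # the "natural" two groups
bad = mean_silhouette(points, [0, 0, 1, 1, 0, 1])   # an arbitrary partition
print(good, bad)  # good is close to 1; bad is much lower
```

Two clustering solutions can then be compared by their mean silhouette, with the higher score preferred — keeping in mind the caveat above that the metrics for reasonable solutions often do not differ much.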
As mentioned before, ideally, metrics like total within distance or silhouette should not differ much. Hence, it all comes down to using the second criterion.
As for the part of the question about any benefits from using multiple algorithms,
if you are referring to k-means and hierarchical clustering, you could first perform hierarchical clustering and use it to decide the number of clusters and then perform k-means. This is usually in the situation where the dataset is too big for hierarchical clustering in which case the first step is executed on a subset.
in general, since not all clustering algorithms are suitable for every case it is useful to use multiple algorithms. For example k-means and hierarchical clustering are good at detecting clusters that have a spherical/ball shape. They would perform poor on more complex shaped data. A problem that DBSCAN does not have (though DBSCAN has other disadvantages, see discussion at Wikipedia) | {
"domain": "datascience.stackexchange",
"id": 8848,
"tags": "clustering, k-means, unsupervised-learning"
} |
Is colocalisation of a protein with a presynaptic marker sufficient evidence to say that the protein is a component of axon terminals? | Question: I am reading journal papers about the subcellular localisation of the insulin receptor (IR) in neurons.
I have read a paper stating that IR is highly enriched at synapses, localising to both the presynaptic axon terminal and the postsynaptic density compartments.
The above statement saying that IR localises to presynaptic axon terminals comes from immunofluorescence images of cultured neurons where IR colocalises with synaptophysin puncta.
^ https://pubmed.ncbi.nlm.nih.gov/16978790/
In contrast, another paper showed via subcellular fractionation of the rat brain that IR is enriched in the post-synaptic density (PSD) fraction (through analysing the IR levels in the PSD fraction via Western blotting).
I think this is stronger evidence compared to immunocytochemistry.
To say that a protein is a component of the presynaptic terminal, is colocalisation with a presynaptic marker (e.g. synaptophysin) sufficient evidence? Or should there be other forms of evidence to back this statement up?
Answer: Colocalization of a protein with another structure is necessary but not sufficient to say that the protein is a component of that structure.
Mixed into the concept of a structure's existence is the concept of an observational timescale. A presynaptic terminal is a structure that exists over specific timescales; all of its constitutive (and identity-conferring) components must colocalize with it, over all of its lifetime. Otherwise it would be more apt to specify something which colocalizes over more limited timescales to be cofunctional structures with the presynaptic terminal.
Experimental techniques tend to probe specific timescales. For example, immunostaining is based on the assumption that the dissociation rate constant for a specifically bound state is small compared to nonspecific bound states (allowing for the 'washing' steps to have sufficient efficacy). So, in general, spatial correlation of two molecular structures, based on a single observational approach, is insufficient evidence to qualify the structures as parts of a whole. | {
"domain": "biology.stackexchange",
"id": 12267,
"tags": "cell-biology, neuroscience, synapses, fluorescent-microscopy, neuron"
} |
what does "abandoned" mean with catkin build? | Question:
catkin build tells me that some number of packages in my workspace are "abandoned". what does this mean? Is there some way to list which packages are abandoned?
Originally posted by LukeAI on ROS Answers with karma: 131 on 2021-04-21
Post score: 0
Answer:
It basically means those have not been included in the current build (as in: not by choice or request of the user, but because circumstances made it impossible to include them or finish building them).
Typically caused by (one of) their dependencies having failed to build.
Taken from the source:
# List of jobs whose deps failed
abandoned_jobs = []
Originally posted by gvdhoorn with karma: 86574 on 2021-04-21
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 36348,
"tags": "ros-melodic, catkin"
} |
Rotation direction of Pulsars | Question: Pulsars are rotating neutron stars observed to have pulses of radiation at very regular intervals that typically range from milliseconds to seconds. They have very strong magnetic fields which funnel jets of particles out along the two magnetic poles.
My question is: how do pulsars rotate, clockwise or anticlockwise?
Answer: Does the Earth rotate clockwise or anticlockwise?
If you are floating above the north pole of Earth you would see the Earth rotating anticlockwise(*); if you were floating above the south pole, you would see the Earth rotating clockwise.
image from ASTRONOMÍA ASTRÓNOMOS UNIVERSO
So the direction of rotation depends on the point of view. You're going to find that many of the truths we cling to depend greatly on our own point of view...
You could say "I'll take the North Pole view". Now how do we define "North Pole" on a pulsar (which is many many times smoother than a billiard ball)? One way to define "North Pole" is "the pole from which an object appears to rotate anticlockwise". Can you see how the definition is going round in circles?
Or you can define "North Pole" in terms of the magnetic field. The North pole of the Earth has a magnetic pole (that attracts the North-seeking pole of a compass). You can try to define the "North Pole" of a pulsar as that which has the magnetic pole corresponding to the one near the North pole of Earth.
The rotation of a pulsar is unrelated to its magnetic field. 50% of pulsars rotate clockwise about their North magnetic pole, and 50% rotate anticlockwise. The rotational direction of a pulsar is derived from the angular momentum of the star from which it is derived. The magnetic field is also inherited from the star. But a star can swap its magnetic field without changing its rotational direction.
There is no preferred direction of rotation, so a pulsar is equally likely to rotate clockwise as anticlockwise.
(*) and so the shadow of the gnomon of a sundial rotates clockwise, and clocks which were made by northern hemisphere horologists follow the convention. If the technological revolution had occurred in Patagonia not Europe, clocks might be very different. | {
"domain": "astronomy.stackexchange",
"id": 6330,
"tags": "star, rotation, neutron-star, pulsar, neutrons"
} |
Calculation For Thermal Conductivity | Question: Total newbie here way over my head on a project I'm working on. I need to protect some electronics while they are inside an injection mould of liquid latex that measures 180 degrees celsius. The electronics are 'baked' for lack of a better word for 16 minutes, then dropped into cold water out of the mould to cool. I need to make sure I don't melt/destroy the electronics by keeping them below 60 degrees celsius during this step.
My plan is to use a silicone potting compound which measures at 0.06 watt/meter/K.
I'm trying to calculate how thick the layer of silicone needs to be between the electronics and the heated latex, and what temperature will be reached per minute the ball is exposed to this heat, so I don't need to destroy prototype electronics doing trial and error.
Any help would be amazing! In summary:
180 degrees Celsius for 16 minutes
electronics measure 12mm in diameter (sphere)
Silicone thermal conductivity of 0.06 watt/meter/K
What thickness is required to make sure the electronics don't get above 60 degrees celsius?
Update - I found the specific heat value on the spec sheet of 1200 J/kg.K
Answer: Provided some material constants are known, a reasonable estimate could be obtained by lumped thermal analysis.
The blue shell is the protective silicone. The surrounding temperature is $T_{\infty}$ ($180\:\mathrm{C}$) and we're looking for the function $T(t)$.
Lumped analysis assumes the temperature of the sphere is uniform (no radial temperature gradients).
Using Newton's law of cooling/heating we can describe the heat flux entering the sphere as:
$$\frac{\mathbf{d}Q}{\mathbf{d}t}=uA[T_{\infty}-T(t)],\tag{1}$$
where $u$ is the overall heat transfer coefficient and $A$ the surface area of the sphere (assuming the silicone layer isn't too thick).
An infinitesimal heat flow $\mathbf{d}Q$ causes the sphere to heat up by $\mathbf{d}T$, acc.:
$$\mathbf{d}Q=mc_p\mathbf{d}T(t),\tag{2}$$
where $m$ is the mass of the sphere and $c_p$ its specific heat capacity.
Substituting $(2)$ into $(1)$ we get:
$$\frac{mc_p\mathbf{d}T(t)}{\mathbf{d}t}=uA[T_{\infty}-T(t)]\tag{3}$$
$(3)$ is a first order linear differential equation, solvable by separation of variables and it yields:
$$\ln\Bigg[\frac{T_{\infty}-T(t)}{T_{\infty}-T_0}\Bigg]=-\frac{uA}{mc_p}t,\tag{4}$$
where $T_0$ is the initial temperature of the sphere and $T(t)$ the temperature after time $t$. So $(4)$ describes the approx. temperature evolution of the sphere.
The overall heat transfer coefficient $u$ can be estimated for not too large thicknesses $\theta$ of the silicone shell by:
$$\frac1u\approx\frac{1}{k_1}+\frac{\theta}{\kappa}+\frac{1}{k_2},\tag{5}$$
where:
$k_1$ is the heat transfer coefficient from heating medium to silicone and $k_2$ is the heat transfer coefficient from silicone to sphere.
$\kappa$ is the heat conductivity of the silicone.
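Putting the two equations together numerically: $(4)$ rearranged gives the largest allowable $u$, and $(5)$ then gives the minimum thickness $\theta$. The Python sketch below uses the question's numbers plus two assumed values that are not given anywhere (an effective density of ~1500 kg/m³ for the potted sphere, and film coefficients $k_1 = k_2 = 100$ W/m²K):

```python
import math

# Values from the question:
T_inf, T_0, T_max = 180.0, 20.0, 60.0   # mould, initial, and safe temps (deg C)
t = 16 * 60.0                           # 16 minutes, in seconds
c_p = 1200.0                            # J/(kg K), from the spec sheet
kappa = 0.06                            # silicone conductivity, W/(m K)
r = 0.012 / 2.0                         # 12 mm diameter electronics sphere

# Assumed values (NOT given in the question -- replace with real data):
rho = 1500.0                            # effective density of the potted sphere, kg/m^3
k1 = k2 = 100.0                         # assumed film coefficients, W/(m^2 K)

m = rho * (4.0 / 3.0) * math.pi * r**3  # mass of the sphere
A = 4.0 * math.pi * r**2                # its surface area

# Eq. (4) rearranged: the largest u that still keeps T(t) <= T_max after time t
u = -(m * c_p) / (A * t) * math.log((T_inf - T_max) / (T_inf - T_0))

# Eq. (5) rearranged: the minimum silicone thickness for that u
theta = kappa * (1.0 / u - 1.0 / k1 - 1.0 / k2)
print(f"u <= {u:.2f} W/(m^2 K)  ->  theta >= {theta * 1000:.0f} mm of silicone")
```

With these assumptions the conductive term dominates (1/u is much larger than 1/k1 + 1/k2), so the result is not very sensitive to the guessed film coefficients; lumped analysis also assumes internal temperature gradients in the sphere are small.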
By setting $T(t)$ to the desired 'safe' value and using $(4)$, $u$ can then be estimated. And by using $(5)$, the minimum $\theta$ can be calculated so $T(t)$ doesn't exceed the safe temperature. | {
"domain": "physics.stackexchange",
"id": 35782,
"tags": "thermodynamics, electronics, thermal-conductivity"
} |
What are the consequences of connecting a non-ideal conductor to a battery in open loop? | Question: Suppose I have a battery with $\Delta V=5[\mathrm{V}]$. Now I connect a piece of a metal wire to the "+" side of the battery only.
Let's assume that the ambient air is not conductive at all:
Are any charges going to reposition themselves between the battery and the attached conductor in this open-loop configuration?
If no charges are repositioning themselves between the battery and the conductor, why is the potential of the conductor immediately equal to that of the "+" of the battery upon attachment ($5[\mathrm{V}]$, minus a possible voltage drop due to non-ideality of the conductor)?
Since this conductor is not ideal, can the whole open-loop system of battery+conductor be treated as an effective plate capacitor in closed-loop with the battery?
Answer: In answer
1) Yes, charge will flow to make the potential of the wire the same as the + terminal of the battery, given that the wire is neutral or at earth prior to making contact. The amount of charge that would flow depends on the difference in potential between the wire before you attach it to the battery and the battery - we could calculate that with $Q=CV$ where $Q$ is the charge, $V$ is the potential difference and $C$ is the capacitance of the wire (no charge flows if $V=0$, which would mean the wire is at the exact potential of the + terminal before it is attached).
2) The answer is there will probably be a bit of shifting of charges... but note that there will be no voltage drop - because we have an open circuit, no current is flowing. The voltage drop would be calculated from $V = IR$, but $I$ is zero and so the voltage drop will be zero.
3) Yes the wire is like a capacitor, but the capacitance is very low - less than $10^{-12}~F$ typically.
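To put numbers on points 1 and 3: with the sub-picofarad capacitance mentioned above and the 5 V battery from the question, the repositioned charge is tiny. A quick Python check:

```python
ELEM_CHARGE = 1.602e-19   # elementary charge, C
C_wire = 1e-12            # F, a generous upper bound for the wire's capacitance
V = 5.0                   # battery potential relative to the neutral wire, V

Q = C_wire * V            # Q = C V, the charge that repositions itself
electrons = Q / ELEM_CHARGE
print(Q, electrons)       # ≈ 5 pC, roughly 3e7 electrons
```

So only tens of millions of electrons shift - utterly negligible on macroscopic scales, which is why the wire reaches the terminal's potential essentially instantly.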
There are similarities between this question and this one although this question and the other one are different. | {
"domain": "physics.stackexchange",
"id": 17426,
"tags": "electrostatics, charge, capacitance, batteries, conductors"
} |
Leetcode: Map Sum Pairs C# | Question:
Implement a MapSum class with insert, and sum methods.
For the method insert, you'll be given a pair of (string, integer).
The string represents the key and the integer represents the value. If
the key already existed, then the original key-value pair will be
overridden to the new one.
For the method sum, you'll be given a string representing the prefix,
and you need to return the sum of all the pairs' value whose key
starts with the prefix.
Example 1:
Input: insert("apple", 3), Output: Null
Input: sum("ap"), Output: 3
Input: insert("app", 2), Output: Null
Input: sum("ap"), Output: 5
Please comment on performance and style
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using Microsoft.VisualStudio.TestTools.UnitTesting;
namespace TrieQuestions
{
[TestClass]
public class TrieMapSum
{
[TestMethod]
public void MapSumTest()
{
MapSum mapSum = new MapSum();
mapSum.Insert("apple", 3);
Assert.AreEqual(3, mapSum.Sum("ap"));
mapSum.Insert("app", 2);
Assert.AreEqual(5, mapSum.Sum("ap"));
}
}
public class MapSum
{
private TrieMSNode Head;
/** Initialize your data structure here. */
public MapSum()
{
Head = new TrieMSNode();
}
public void Insert(string key, int val)
{
var current = Head;
foreach (var letter in key)
{
if (!current.Edges.ContainsKey(letter))
{
current.Edges.Add(letter, new TrieMSNode());
}
current = current.Edges[letter];
}
current.IsTerminal = true;
current.Value = val;
}
public int Sum(string prefix)
{
int sum = 0;
var current = Head;
foreach (var letter in prefix)
{
if (!current.Edges.ContainsKey(letter))
{
return sum;
}
current = current.Edges[letter];
}
// we use dfs for each edges of the trie;
return DFS(current);
}
private int DFS(TrieMSNode current)
{
if (current == null)
{
return 0;
}
int sum = current.IsTerminal ? current.Value : 0;
foreach (var edge in current.Edges.Values)
{
sum += DFS(edge);
}
return sum;
}
}
internal class TrieMSNode
{
public Dictionary<char, TrieMSNode> Edges { get; set; }
public bool IsTerminal;
public int Value;
public TrieMSNode()
{
Edges = new Dictionary<char, TrieMSNode>();
IsTerminal = false;
Value = 0;
}
}
}
Answer: In Insert(...) you should use TryGetValue instead of ContainsKey for efficiency:
foreach (var letter in key)
{
if (!current.Edges.TryGetValue(letter, out var edge))
{
edge = current.Edges[letter] = new TrieMSNode();
}
current = edge;
}
Name your methods after what they do, not after their implementation: DFS(...) could be GetSumFrom(...)
public int Sum(string prefix)
{
int sum = 0;
var current = Head;
foreach (var letter in prefix)
{
if (!current.Edges.ContainsKey(letter))
{
return sum;
}
current = current.Edges[letter];
}
Here the sum variable is redundant because it is never changed so you could just return 0 from the loop
Again the loop can be optimized:
foreach (char letter in prefix)
{
if (!current.Edges.TryGetValue(letter, out current))
return 0;
}
DFS() can be simplified using LINQ:
private int GetSumFrom(TrieMSNode current)
{
if (current == null)
{
return 0;
}
return current.Edges.Aggregate(current.Value, (sum, kvp) => sum + GetSumFrom(kvp.Value));
}
You really don't have to check for IsTerminal because you know that only terminal nodes have a value different from 0.
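As a cross-check of the overall trie logic (not part of the original C# review), the same MapSum behavior can be sketched in a few lines of Python; the nested-dict representation and the '$' terminal key are assumptions of this sketch:

```python
class MapSum:
    """Trie-based MapSum: insert(key, val) overrides, sum(prefix) totals matching keys."""

    def __init__(self):
        self.root = {}  # char -> child dict; the '$' key stores a node's value

    def insert(self, key, val):
        node = self.root
        for ch in key:
            node = node.setdefault(ch, {})
        node['$'] = val  # overwrite semantics: the last insert for a key wins

    def sum(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node:
                return 0  # no key starts with this prefix
            node = node[ch]
        return self._sum_from(node)

    def _sum_from(self, node):
        # Depth-first accumulation of every stored value below this node.
        total = node.get('$', 0)
        for ch, child in node.items():
            if ch != '$':
                total += self._sum_from(child)
        return total


m = MapSum()
m.insert("apple", 3)
print(m.sum("ap"))  # 3
m.insert("app", 2)
print(m.sum("ap"))  # 5
```

Running it reproduces the example from the problem statement, including the override behavior when the same key is inserted twice.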
You should test this LINQ approach against your own plain foreach loop, and you may find the latter faster. | {
"domain": "codereview.stackexchange",
"id": 34600,
"tags": "c#, programming-challenge, trie"
} |
Is the information in the brain stored in the connections rather than neurons? | Question: Can I understand the difference between the grandmother-neuron model and the interconnected-neuron-network model as meaning that the information isn't primarily stored in the neurons (respectively, in their states) but in the connections between them?
Answer:
Is the information in the Brain stored in the connections rather than
in the neurons ?
It depends on what information you are referring to.
The brain does not just store one type of information. The model of a grandmother cell and the model of an interconnected neuron network are two subsystems of a greater system, working together perhaps in a competitive or complementary arrangement. The purpose of this system is to extract information from the environment and identify it again for the organism's benefit. (1)
So let's say we see something that will benefit us in the future, or that we might want to avoid. This is a simplification, but as you see that something, neurons in your neocortex light up, specifically in region V1 at the back of your head. The signals arrive there after a lot of computations along the way from your retina, but also project to other parts of your brain where they are perhaps processed consciously; they fire in an orderly fashion, in a map. This bouncing around of electrical signals represents your conscious and unconscious processing of the image, so the information exists as a network of electrical signals. (2)
During recollection (there are different types of memories, so the structure in charge could be the neocortex, basal ganglia, hippocampus, or even a muscle) (3), this same map (or a smaller subset) gets replayed. The mechanism(s) by which this information is first encoded and later retrieved is still not entirely understood, but it involves a map and a gateway or switchboard. (4) This switchboard ties together different percepts (each a smaller network of electrical signals), so in a way the information is represented at the switchboard level; if you disable this switchboard, you can't make new memories, or you lose previous ones.
A grandmother cell, in the scheme of what I just explained, is the result of another subsystem that helps recognize something. Once you successfully encode a percept (the thing we want to remember seeing) you might want to associate it with something (good, bad, delicious, etc.); at the top of a series of associations lies a grandmother cell, which in turn is a handy evolutionary way of signaling back to other systems when some previous information has been experienced. So in this way, the information of whether a subset of relationships between networks of electrical signals has been experienced and encoded lies here.
References/Sources:
The idea of grandmother cells and object recognition in general is summarized in the Object Recognition chapter of ( Cognitive Neuroscience,Gazzinga-3rd ed 222-225) along with the problems a grandmother cell introduces in object recognition (ie if you lose the grandmother cell you would lose all the information about your grandmother)
Koch( Quest for Consciousness-2004) chapters on vision provide a very readable account of how the information goes from retina to v1. Vision Science (Palmer) is a more thorough account.
The chapter on learning and memory (Cognitive Neuroscience) gives an overview.
(Gluck -2001-Gateway to memory) presents different models of this arrangement. | {
"domain": "biology.stackexchange",
"id": 6354,
"tags": "neuroscience, brain, memory"
} |
This spec creates very few connects, is that plausible? | Question: I'm attempting exercise A.2.1 from the book http://alloytools.org/book.html. Having got this far, I find that when stepping through instances I only ever get 2 connects links, no matter how many requests links, when using the MiniSat solver. When using MiniSat plus Unsat core, I sometimes get more connects, but not many requests. Obviously, these all satisfy the model, but it feels wrong to me. Am I over-thinking this?
module exercises/telephone
sig Phone {
, requests : set Phone
, connects: lone Phone
}
fact minimum_system {
some requests
some connects
}
fact dont_call_yourself {
no p: Phone | p.connects = p
no p: Phone | p in p.requests
}
fact connect_from_a_request {
no (connects - requests)
}
fact only_receive_once {
all p: Phone.connects | one ~connects[p]
}
fact receiver_or_caller_only {
no (Phone.connects & Phone.~connects)
}
pred show {}
run show for exactly 12 Phone
Answer: When you get lots of instances, it's not really practical to look through them all. What most Alloy users would do in this case is to add a constraint to the show predicate to see if you can get the instance you expect. So I modified it to
pred show {
#connects > 4
#requests > 4
}
and now I see instances with more than 4 connects and requests.
Another suggestion: don't use exactly unless you really need to. You usually want to look for all instances up to some bound, and if you set it to exactly, you might find that you get no instances because it's not possible to construct one with exactly that number. For example, if you don't allow conference calls and you require phones to be matched one to one, and you then set the bound to "exactly 3", you would get no solutions.
"domain": "codereview.stackexchange",
"id": 38979,
"tags": "alloy"
} |
rostopic pub /topic_name sensor_msgs/JointState [arguments] | Question:
Hi !
I run this command line:
rostopic pub servo sensor_msgs/JointState '{header: {seq:0, stamp: {secs:0, nsecs:0}, frame_id:""}, name:["art1"],position:[150.0], velocity:[0.0], effort:[0.0]}' --once
and i get the following error:
emilio@emilio-N61Vg:~/catkin_ws$ rostopic pub servo sensor_msgs/JointState '{header: {seq:0, stamp: {secs:0, nsecs:0}, frame_id:""}, name:["art1"],position:[150.0], velocity:[0.0], effort:[0.0]}' --once
Usage: rostopic pub /topic type [args...]
rostopic: error: Argument error: while scanning a plain scalar
in "<string>", line 1, column 11:
{header: {seq:0, stamp: {secs:0, nsecs:0}, ...
^
found unexpected ':'
in "<string>", line 1, column 14:
{header: {seq:0, stamp: {secs:0, nsecs:0}, fr ...
^
Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.
Originally posted by Emilio on ROS Answers with karma: 13 on 2015-08-12
Post score: 0
Answer:
I guess the question is "What is wrong with this command?", right?
Try with:
rostopic pub servo sensor_msgs/JointState '{header: {seq: 0, stamp: {secs: 0, nsecs: 0}, frame_id: ""}, name: ["art1"], position: [150.0], velocity: [0.0], effort: [0.0]}' --once
Note the space after each :. This is mandatory!
Originally posted by mgruhler with karma: 12390 on 2015-08-13
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Emilio on 2015-08-13:
That works, thank you!
Comment by mgruhler on 2015-08-13:
please mark the answer as correct, if it helped :-)
Comment by Emilio on 2015-08-13:
sorry, this is the first time i post here ;) | {
"domain": "robotics.stackexchange",
"id": 22443,
"tags": "ros"
} |
actionlib Action client missing some callbacks | Question:
I have a simple action server that currently runs a simple procedure : it just generates 10 random numbers and publishes each of them as feedback, after that it publishes a result. I then have a simple action client that has some callback functions, which just print out the values received.
For some reason though, not all feedback sent is received by the client, or it doesn't activate the callback, even though it is all published to the /feedback topic. To be clear, I publish feedback 10 times in my server. I can see 10 feedback messages in /feedback topic, but the callback in the client is only called around 4 times.
I thought this might have something to do with the callback queue size in the client, so I added the following lines to my server code:
nh_.setParam("actionlib_client_sub_queue_size", 30);
nh_.setParam("actionlib_client_pub_queue_size", 30);
I understand that the ActionClient looks for these parameters to set the queue size, am I correct? Setting them made no difference unfortunately, even though I have verified that the param is being set correctly.
Am I misunderstanding something? Is there another way to set the queue size for the ActionClient?
Could there be another reason for this behavior? Clearly the client and server are connected properly, since some of the feedback is received.
Any help would be greatly appreciated.
Originally posted by campergat on ROS Answers with karma: 11 on 2018-10-03
Post score: 1
Original comments
Comment by l4ncelot on 2018-10-04:
We need to see your code to debug anything. The queue size plays part only if the publishing rate of the message is greater then subscribing rate.
Answer:
In the end, the problem was that the parameters were being set in the wrong namespace. They have to be in the same namespace as the Action itself. That wasn't obvious to me.
So if you declare your Client something like this:
actionlib::SimpleActionClient<WhateverAction> ac("/robot/your/action", true);
Then your parameters need to be set like this:
nh_.setParam("/robot/your/action/actionlib_client_sub_queue_size", 30);
nh_.setParam("/robot/your/action/actionlib_client_pub_queue_size", 30);
Originally posted by campergat with karma: 11 on 2019-01-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by l4ncelot on 2019-01-17:
@campergat or you can use private namespace of the node with ~ sign. Have a look at q#250092
Comment by jayess on 2019-01-17:
@l4ncelot's suggestion is a better solution IMO. Hardcoding stuff like this makes your code less portable/reusable which @l4ncelot's suggestion avoids | {
"domain": "robotics.stackexchange",
"id": 31858,
"tags": "c++, ros-kinetic, actionlib"
} |
Skin effect in a 2D conductor | Question: We know that the skin effect is the tendency of current to flow mainly near the surface of a conductor that happens when it is driven by a time-varying current. This situation is usually explained in case of a cylindrical conductor, like that shown in the following picture at left.
But in some situations, our conductor is so thin to be approximated as a foil (2D), for instance in case of strip lines, patches etc. Let's consider a rectangular foil as that shown in the picture at the middle. Since it's a foil, it does not have (ideally) a cross-section. Will the AC current flow uniformly along all the area of the foil, or in this case does the skin mean that AC current will flow in the edges of the foil (which are segments)?
I'd say that current will flow uniformly on all the area of the foil. But someone told me that in this case, the current on the edges is more important.
Picture taken from here.
Answer: Yes, as you suggest, edge current density is greater than center-of-foil current density.
Skin effect comes from the magnetic field that a current generates, and which impinges on adjacent conductive material. Conductive materials are often diamagnetic (they resist the penetration of magnetic field) and, at high AC frequencies, are very much diamagnetic.
The magnetic field generated by an edge-of-foil current only impinges on adjacent metal on ONE SIDE of the current, while the magnetic field generated by a center-of-foil current impinges on two sides. The foil-center current is lessened by this, because the magnetic field generates an opposing electromotive force. It isn't as simple as a 'skin depth' situation, because the highly symmetric effect on a cylinder conductor can be approximated easily, while the strip of foil has no such symmetry.
"domain": "physics.stackexchange",
"id": 73980,
"tags": "electromagnetism, electric-circuits, electric-current, electrical-resistance, conductors"
} |
Are Feynman's Hamiltonian computers unitary? | Question: In this paper, Feynman gave the idea of creating a time-independent Hamiltonian from a quantum circuit. Is there anyway to say that these Hamiltonians will always be Hermitian? Moreover, will these Hamiltonians be always unitary?
Answer: The Hamiltonian will always be Hermitian (as any Hamiltonian must). To see that this is true of Feynman's Hamiltonians, note that each term can be written in the form $h + h^\dagger$, where $h^\dagger$ denotes the Hermitian conjugate of $h$. Feynman indicates this by writing "+ complex conjugate", which is shorthand for what I just wrote. It is always true that $(h+h^\dagger)$ is Hermitian, since $(h+h^\dagger)^\dagger = h^\dagger+h = h+h^\dagger$, and therefore the whole Hamiltonian is Hermitian.
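The $(h+h^\dagger)$ argument is easy to verify numerically. This pure-Python sketch (the matrix $h$ and the time $t$ are arbitrary illustrative choices) checks that $h+h^\dagger$ is Hermitian, and also that $U=e^{-iHt}$ is unitary for the simple case $H=$ Pauli-X, where $e^{-iXt}=\cos(t)I - i\sin(t)X$:

```python
import cmath

def dagger(m):
    # Conjugate transpose of a 2x2 matrix given as nested lists.
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Any matrix h gives a Hermitian H = h + h^dagger:
h = [[1 + 2j, 3 - 1j], [0.5j, -2 + 0j]]
hd = dagger(h)
H = [[h[i][j] + hd[i][j] for j in range(2)] for i in range(2)]
Hd = dagger(H)
assert all(abs(H[i][j] - Hd[i][j]) < 1e-12 for i in range(2) for j in range(2))

# For H = Pauli-X, exp(-iHt) = cos(t) I - i sin(t) X, which is unitary:
t = 0.7
U = [[cmath.cos(t), -1j * cmath.sin(t)],
     [-1j * cmath.sin(t), cmath.cos(t)]]
UUd = matmul(U, dagger(U))
identity = [[1, 0], [0, 1]]
assert all(abs(UUd[i][j] - identity[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print("H is Hermitian and U is unitary")
```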
It is not true that Hamiltonians need to be unitary. Indeed, they generally are not unitary. What is unitary is the time-evolution operator $U$. If the Hamiltonian is time-independent, they are related by $U(t) = e^{-iHt}$. The fact that $H$ is Hermitian necessarily implies that $U$ is unitary. | {
"domain": "physics.stackexchange",
"id": 29518,
"tags": "quantum-mechanics, quantum-information"
} |
Summing over the values of a dict | Question: I'm coming from the Java world and making my first steps on Python and see how good it is to express more with less code. I'd like to know if, via Python, the size of the following code could be reduced:
column_entropy = 0
for k in frequencies.keys():
p = frequencies[k] / 10
column_entropy += p * log(p, 2) if p > 0 else 0
column_entropy = -column_entropy
Answer: If frequencies is a dict, then the code could be reduced and optimized a little bit by iterating over the items in it instead of the keys and then doing a lookup,
like this:
column_entropy = 0
for k, v in frequencies.items():
p = v / 10
column_entropy += p * log(p, 2) if p > 0 else 0
column_entropy = -column_entropy
And, if you change += to -= then you can drop the final negation of column_entropy.
Actually, it would be nice to give a name to the calculation inside the loop:
def calculate(k, v):
p = v / 10
return p * log(p, 2) if p > 0 else 0
With the help of this function, the complete calculation becomes:
column_entropy = -sum([calculate(k, v) for k, v in frequencies.items()])
This is just an example: you should give better names to all of these (calculate, k, v), according to what they represent.
Note that in Python you can define functions inside functions.
This is perfectly legal (and I'd go as far as to say recommended):
def calculate_column_entropy(frequencies):
def calculate(k, v):
p = v / 10
return p * log(p, 2) if p > 0 else 0
return -sum([calculate(k, v) for k, v in frequencies.items()])
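For completeness, here is a self-contained runnable version of that refactoring with a quick sanity check (the divisor 10 comes from the question; since k is unused, this sketch iterates over values only):

```python
from math import log

def calculate_column_entropy(frequencies, total=10):
    """Shannon entropy of a column, assuming counts out of `total` observations."""
    def calculate(count):
        p = count / total
        return p * log(p, 2) if p > 0 else 0
    return -sum(calculate(v) for v in frequencies.values())

# Two symbols, each observed 5 times out of 10 -> exactly 1 bit of entropy:
print(calculate_column_entropy({'A': 5, 'B': 5}))  # 1.0
```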
The good thing about inner functions is that they have a name:
instead of writing a comment to explain what the body of a for loop does,
you can express the same information by giving it a name.
And you never know,
you might want to reuse that function multiple times, later in the future. | {
"domain": "codereview.stackexchange",
"id": 10612,
"tags": "python, beginner"
} |
How do GPS devices work? | Question: I have GPS in my phone but I have only a very rough idea of how it works. I have checked the Wikipedia article but that gone over my head (I can not understand that maths and other strange stuff there). Please explain its working in simple words.
What I think is that it sends a signal to the satellites, the satellite checks the location of the sending device, and replies with the location of that device.
Answer: GPS signal is actually a time signal. Each satellite sends data that includes:
time according to the satellite's clock
location of the satellite (or data from which the location can be calculated)
some error correction data
Now, the receiver calculates the distance to the satellites: distance = time * velocity. Velocity is a known constant (+- the error correction). The receiver then gets multiple distances to different satellites, so it can calculate its location.
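As a toy illustration of that distance = time * velocity step (the travel time below is an assumed, roughly GPS-like value):

```python
C = 299_792_458.0  # speed of light in m/s, the known constant

def distance_m(travel_time_s):
    # Distance covered by the signal in the given travel time.
    return travel_time_s * C

# A signal that took ~67 ms to arrive came from roughly 20,000 km away,
# about the altitude of a GPS satellite:
print(round(distance_m(0.067) / 1000), "km")
```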
Note that four distances to known locations (satellites) are enough to determine the location of the receiver. Imagine that you tie four strings to four different stationary objects and other ends of the strings to your finger. There is only one (or none) position where all the strings are tight.
To be more accurate, the time in the above equation is relative to the time of the other satellites. In short, you need one satellite to be your reference time. The longer explanation is that there is an additional free parameter which requires you to solve a group of equations. See comments and Ibrahim's answer for more details. | {
"domain": "physics.stackexchange",
"id": 1385,
"tags": "popular-science, gps"
} |
Output signal in digital communication | Question: In digital communications systems we say that the received signal can be expressed as
$$y(t)= x(t)+ n(t)\tag1$$
where $x(t)$ is the transmitted signal and $n(t)$ is the noise.
I am wondering whether a better model would be
$$y(t)= x(t-t_0)+ n(t)\tag2$$
To account for delay caused by channel. Is my logic correct?
If so, then: what are the assumptions made when using (1), and what are we accounting for if we use (2)? I.e., what type of errors are we accounting for (delay spread, time acquisition error, ...)?
I am looking forward to hearing your responses and discussion on this. thanks
Answer: Equation (1) in your question is a pure additive noise channel without any further distortion of the signal. Equation (2) takes a delay into account, but this is usually not relevant. A pure delay does not affect the quality of the communication link in terms of bit error rate. (I'm not talking about the annoying effect of delay in voice communication).
So the model in Equation (2) is often not relevant and can be replaced by Equation (1). A more general and useful model, which includes Eq. (2) as a special case, is
$$y(t)=(x*h)(t)+n(t)$$
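In discrete time this model is easy to check numerically. The pure-Python sketch below (illustrative values, noise term omitted) shows that convolving with a delta delayed by two samples just shifts the signal, which is the discrete analogue of the special case $h(t)=\delta(t-t_0)$:

```python
def convolve(x, h):
    # Discrete linear convolution: y[n] = sum_k x[k] * h[n - k]
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

x = [1.0, 2.0, 3.0]
h = [0.0, 0.0, 1.0]  # delta delayed by two samples: h[n] = delta[n - 2]
print(convolve(x, h))  # [0.0, 0.0, 1.0, 2.0, 3.0] -- x shifted by two samples
```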
where $h(t)$ is the impulse response of the channel, and $*$ denotes convolution. This model represents a linear time-invariant channel with additive noise. Eq. (2) is obtained as a special case with $h(t)=\delta(t-t_0)$. | {
"domain": "dsp.stackexchange",
"id": 2766,
"tags": "digital-communications"
} |
What is the significance of recent demonstration of a passive photon–atom qubit swap operation? | Question: In reference to this recent nature article: https://www.nature.com/articles/s41567-018-0241-6
Specifically, does this warrant a new type of gate?
Answer: This does not warrant a new type of gate. When we write down quantum circuits, each 'wire' corresponds to a single qubit. However, we do not (usually) specify what technology any of these qubits is made out of. You might typically assume that they're all the same technology (e.g. solid state, photonic,...) but there is no need to do so. There are very good reasons for wanting to interface different types of qubit, particularly static and flying qubits, to take advantage of the benefits of both.
So, on your quantum circuit, you specify that you want a swap gate. This is an abstraction, and does not tell you how you're physically going to realise it. The cited paper is showing one way of implementing it when the two qubits are two specific (different) types. But it's not a new gate.
As for the significance, that comes back to why you want to have both static and flying qubits in your experiment. Flying qubits are great if you need to connect two distant components because they can travel long distances with relatively little decoherence (usually more relevant to different quantum information protocols, rather than computation specifically, but some current quantum computer designs require distributed blocks). Static qubits are great if you actually want to manipulate the quantum state, interacting with other qubits etc. For example, we often talk in quantum information about processes such as "Alice sends a qubit to Bob" and then we talk about Bob holding the particular state he's received. So Alice probably sent it using a photon because they were far apart. But if Bob wants to hold onto the state, planning on doing something with it later, he needs to transfer it onto a static qubit. Perhaps, for example, Alice and Bob want to share a Bell pair, but their quantum communication channel is noisy, so Alice will send many Bell pair halves to Bob, and they will later perform a distillation protocol. The distillation process will be highly non-linear, and probably makes more sense on static qubits. | {
"domain": "quantumcomputing.stackexchange",
"id": 358,
"tags": "quantum-gate, experimental-realization"
} |
Converting Seconds to Millimeters | Question: The concept is that time is another dimension, complementary to those we can observe and measure directly. For those three, I can take a ruler and measure how many millimeters one point in space is far from another. If time is another dimension, and if I had the ability to observe that dimension as I observe the other three, I should be able to take my ruler and measure a distance in millimeters along that dimension as well.
This leads me to ask the question: How many millimeters are in a second? As in, if I take my ruler and measure between a point located in a 3D space at a particular time, measure along the time axis of spacetime to the same point in 3D space one second later (or earlier), I should be able to come up with a conversion ratio between millimeters and seconds, right?
If this is so, would there be a meaning to a question like How fast are we moving through spacetime, along the time dimension? As in, if we are following a vector in spacetime, what is the meaning of the magnitude of the time component of that vector?
Answer: Putting aside the fact that time and space are different, and that while they are unified as part of a single Lorentzian manifold there are important differences between "timelike" directions and "spacelike" directions, the conversion factor you're looking for is the speed $c$.
More specifically, there are $\delta X = 3\times 10^8 \frac{\mathrm m}{\mathrm s} \times 1 \ \mathrm{s} \approx 3 \times 10^{11} \ \mathrm{mm}$ in one second, insofar as light will travel $3\times 10^{11} \ \mathrm{mm}$ in one second (assuming flat spacetime, etc).
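In code form, the conversion is just a multiplication by $c$ expressed in mm/s (the constant name is an assumption of this sketch):

```python
C_MM_PER_S = 299_792_458 * 1000  # speed of light, converted from m/s to mm/s

def seconds_to_mm(t_seconds):
    # Distance light covers in the given time interval, in millimeters.
    return t_seconds * C_MM_PER_S

print(f"{seconds_to_mm(1):.3e} mm in one second")  # ~2.998e+11 mm
```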
If this is so, would there be a meaning to a question like How fast are we moving through spacetime, along the time dimension?
For a particle moving with 4-velocity $\mathbf u$, the quantity $u^0 = \frac{d(ct)}{d\tau}$ is the number of ticks of my laboratory clock for every tick of the particle's wristwatch. Its magnitude is always greater than $c$, and is a measure of the time dilation effect. More precisely, $u^0/c$ is the factor by which time appears to be dilated for the moving particle. | {
"domain": "physics.stackexchange",
"id": 91209,
"tags": "special-relativity, spacetime, speed-of-light, dimensional-analysis, si-units"
} |