anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
How do chameleons signal cells to change color? | Question: I have read about how they can change color, but is there literature about the chemical signaling process they use to do so? I read that it could be some combination of hormones and neurotransmitters, but I couldn't find specific information on the receptors or chemical mechanisms.
Answer: As said by @dblyons, there has not been a lot of biochemical research on chameleons, so the exact part of the mechanism you're looking for is still not understood. However, we have recently caught the broad end of the stick. Chameleons don't have special cells for color; their entire skin has a layer of pigment-containing cells (dermal iridophores) which helps them change color. See this (or this) article:
Chameleon skin has a superficial layer which contains pigments, and under the layer are cells with guanine crystals. Chameleons change color by changing the space between the guanine crystals, which changes the wavelength of light reflected off the crystals which changes the color of the skin.
These guanine crystals are actually like colorless mirrors, but by changing the distance between these crystals, wavelength of light absorbed (and reflected) can be changed. It is quite similar to how the color of ozone changes from pale blue in gas to violet-black in solid form. However, how chameleons trigger this change in distance of crystals, is not yet known.
This is how the crystal lattice works:
This is what a guanine crystal looks like ((a) is guanine):
You can also see a nice real-time video of guanine crystals changing a chameleon's color here.
The color changing portion of skin has 3 main layers:
superficial s-iridophores, which help in changing color in the visible region rapidly.
deep d-iridophores, which reflect light in the infrared region and are thought to provide thermal protection to the chameleon.
a layer of melanin, which is actually yellow in color, making chameleons naturally yellow(!). When the guanine crystals come closer, they reflect the blue portion of the visible spectrum. This blue light, together with the natural yellow of the melanin, appears green, which gives the chameleon its green color. So the color actually visible to us is a combination of the color reflected by the guanine crystals and the yellow of the melanin.
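The spacing-to-wavelength relation described above can be sketched numerically. This is my own illustration (not from the cited articles), assuming simple first-order Bragg reflection at normal incidence; the refractive index and spacing values are made-up, not measured chameleon data.

```python
# Rough sketch: treat the guanine crystal lattice as a Bragg reflector.
# First-order reflection at normal incidence peaks at lambda = 2 * n * d,
# so widening the spacing d shifts the reflected color toward red.
# The refractive index and spacings below are illustrative assumptions.

def reflected_wavelength_nm(spacing_nm, refractive_index=1.4):
    """Peak reflected wavelength (nm) for lattice spacing d (nm)."""
    return 2 * refractive_index * spacing_nm

tight = reflected_wavelength_nm(165)  # tighter spacing -> shorter (bluer) wavelength
wide = reflected_wavelength_nm(215)   # wider spacing -> longer (redder) wavelength
print(tight, wide)
```

With these made-up numbers the "relaxed" spacing reflects around 462 nm (blue) and the "excited" spacing around 602 nm (orange-red), mirroring the qualitative shift described above.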
As I said already, the exact biochemical signalling behind this process is not yet known. It could involve complex hormonal or neurotransmitter signals. However, hormones are considered the main triggers, because chameleons have been shown to change color in response to mood rather than for camouflage.
Cephalopods such as the octopus have complex chromatophore organs controlled by muscles to achieve this, whereas vertebrates such as chameleons generate a similar effect by cell signalling. Such signals can be hormones or neurotransmitters and may be initiated by changes in mood, temperature, stress or visible changes in the local environment. | {
"domain": "biology.stackexchange",
"id": 6550,
"tags": "biochemistry, endocrinology, cell-signaling"
} |
Effect of competitive inhibitor on substrate inhibition | Question: In an enzyme that undergoes substrate inhibition, how would the presence of a competitive inhibitor affect said substrate inhibition? Would the substrate concentration at which substrate inhibition begins be affected? Would the rate of enzyme activity decrease with increasing substrate be affected?
Answer: We need to deal with the mechanism of competitive inhibition+substrate inhibition:
\begin{align}
\ce{E + S <=> ES} \quad &K_\text{M} = \frac{\ce{[E]}\ce{[S]}}{\ce{[ES]}} \tag{R1/1} \\
\ce{ES -> E + P} \quad &r = k[\ce{ES}] \tag{R2/2} \\
\ce{E + I <=> EI} \quad &K_\text{I} = \frac{\ce{[E]}\ce{[I]}}{\ce{[EI]}} \tag{R3/3} \\
\ce{ES + S <=> ES2} \quad &K_\text{S} = \frac{\ce{[ES]}\ce{[S]}}{\ce{[ES2]}} \tag{R4/4}
\end{align}
where:
$\ce{E}$ is the enzyme.
$\ce{ES}$ is the enzyme-substrate complex.
$\ce{I}$ is the inhibitor.
$\ce{EI}$ is the enzyme-inhibitor complex.
$\ce{ES2}$ is the complex formed when a second substrate molecule binds to the enzyme-substrate complex, taking the enzyme out of the productive cycle.
The $K$'s are the equilibrium constants, e.g., $K_\text{M}$ is the Michaelis constant. Note that, as is standard practice in biochemistry, they are written "upside down", i.e., as dissociation constants rather than association constants.
Our task is to find an expression for the rate of formation of the product $\ce{P}$, in terms of the substrate concentration $\ce{S}$, the variable we can easily track.
Enzyme balance
The initial concentration of enzyme is equal to
\begin{align}
C_\text{E0} &= \underbrace{[\ce{E}]}_1 +
[\ce{ES}] +
\underbrace{[\ce{EI}]}_3 +
\underbrace{[\ce{ES2}]}_4 \tag{5} \\
\end{align}
The underbraced parts (1), (3), and (4) are obtained via Eqs. (1), (3), and (4)
\begin{align}
K_\text{M} = \frac{\ce{[E]}\ce{[S]}}{\ce{[ES]}} &\rightarrow
\ce{[E]} = \frac{K_\text{M}\ce{[ES]}}{\ce{[S]}} \tag{6} \\
K_\text{I} = \frac{\ce{[E]}\ce{[I]}}{\ce{[EI]}} &\rightarrow
\ce{[EI]} = \left(\frac{\ce{[I]}}{K_\text{I}}\right)\ce{[E]} \tag{7} \\
K_\text{S} = \frac{\ce{[ES]}\ce{[S]}}{\ce{[ES2]}} &\rightarrow
\ce{[ES2]} = \frac{\ce{[ES]}\ce{[S]}}{K_\text{S}} \tag{8}
\end{align}
and combining Eqs. (5-8)
\begin{align}
C_\text{E0} &=
\frac{K_\text{M}[\ce{ES}]}{[\ce{S}]} + [\ce{ES}] +
\left(\frac{\ce{[I]}}{K_\text{I}}\right)\color{blue}{\ce{[E]}} +
\frac{[\ce{ES}][\ce{S}]}{K_\text{S}} \tag{9} \\
\end{align}
In Eq. (9) the enzyme concentration appears again (in blue), so we combine Eq. (9) with Eq. (6) once more
\begin{align}
C_\text{E0} &=
\frac{K_\text{M}[\ce{ES}]}{[\ce{S}]} + [\ce{ES}] +
\left(\frac{\ce{[I]}}{K_\text{I}}\right)
\frac{K_\text{M}[\ce{ES}]}{[\ce{S}]} +
\frac{[\ce{ES}][\ce{S}]}{K_\text{S}} \\
C_\text{E0} &=
\color{blue}{\frac{K_\text{M}[\ce{ES}]}{[\ce{S}]}} + [\ce{ES}] +
\color{blue}{\frac{K_\text{M}\ce{[ES]}}{[\ce{S}]}
\left(\frac{\ce{[I]}}{K_\text{I}}\right)} +
\frac{[\ce{ES}][\ce{S}]}{K_\text{S}} \\
C_\text{E0} &=
\frac{K_\text{M}[\ce{ES}]}{[\ce{S}]}
\underbrace{\left(1 + \frac{[\ce{I}]}{K_\text{I}}\right)}_\alpha
+ [\ce{ES}] +
\frac{[\ce{ES}][\ce{S}]}{K_\text{S}} \tag{10} \\
\end{align}
where we factored out the repeating term in both blue parts of the equation.
The main issue with Eq. (10) is that $\ce{[I]}$ is still there. We could use an inhibitor balance, but unfortunately we can show that this does not yield a simple final expression (if you want the further math, just say so in the comments). I will call the underbraced term $\alpha$ and name it the inhibitor power; we will explore its consequences below. Eq. (10) now becomes
\begin{align}
C_\text{E0} &=
\frac{[\ce{ES}]}{[\ce{S}]}K_\text{M} \alpha +
[\ce{ES}] +
\frac{[\ce{ES}][\ce{S}]}{K_\text{S}} \\
C_\text{E0} &= [\ce{ES}] \left(
\frac{K_\text{M} \alpha}{[\ce{S}]} + 1 + \frac{[\ce{S}]}{K_\text{S}}\right) \\
C_\text{E0} &= [\ce{ES}] \left(
\frac{K_\text{M}\alpha + [\ce{S}]}{[\ce{S}]} + \frac{[\ce{S}]}{K_\text{S}}\right) \\
C_\text{E0} &= [\ce{ES}] \left(
\frac{K_\text{S}K_\text{M}\alpha + K_\text{S}[\ce{S}] + [\ce{S}]^2}
{K_\text{S}[\ce{S}]}\right)\\
[\ce{ES}] &= \left(\frac{K_\text{S}[\ce{S}]}
{K_\text{S}K_\text{M}\alpha + K_\text{S}[\ce{S}] + [\ce{S}]^2}\right)C_\text{E0}
\tag{11} \\
\end{align}
Combining Eqs. (2) and (11)
\begin{equation}
r = \frac{kK_\text{S}[\ce{S}]C_\text{E0}}
{K_\text{S}K_\text{M}\alpha + K_\text{S}[\ce{S}] + [\ce{S}]^2} \rightarrow
\boxed{\frac{r}{kC_\text{E0}} = \frac{[\ce{S}]}
{K_\text{M}\alpha + [\ce{S}] + \dfrac{[\ce{S}]^2}{K_\text{S}}}} \tag{12} \\
\end{equation}
Eq. (12) is exactly what we want, a rate which is only a function of the substrate concentration. I also put it in a dimensionless form, which will be easier to analyze. The plot of Eq. (12) is the following, where I employed for simplicity $K_\text{S} = K_\text{M} = 1 \; \pu{mol/dm3}$:
Observations:
If $\alpha = 1$ (no inhibitor), the inhibitor has no power and $\ce{R3}$ simply vanishes. We are left with pure substrate inhibition.
As $\alpha$ goes up, $\ce{R3}$ becomes more aggressive. More enzyme is tied up as inactive $\ce{EI}$, so the rate drops at every substrate concentration.
Operating at very high substrate concentrations is discouraged, since $\ce{R4}$ is favored and substrate inhibition predominates.
In every case there is an optimum substrate concentration where the rate is maximum. This can be seen directly in Eq. (12): the numerator is a linear function, which wins when $\ce{[S]}$ is low, while the denominator contains a quadratic term, which wins when $\ce{[S]}$ is high. This is a typical signature of substrate inhibition. We can show that the maximum occurs at
$$ \frac{\mathrm{d}(r/kC_\text{E0})}{\mathrm{d}[S]} = 0 \rightarrow
\boxed{[\ce{S}]_\text{opt} = \sqrt{K_\text{S}K_\text{M}\alpha}} \tag{13} $$
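A quick numerical sanity check of Eqs. (12) and (13), a sketch using the same illustrative $K_\text{S} = K_\text{M} = 1$ as in the plot:

```python
import math

def rate(S, Km=1.0, Ks=1.0, alpha=1.0):
    """Dimensionless rate r/(k*C_E0) from Eq. (12)."""
    return S / (Km * alpha + S + S**2 / Ks)

alpha = 2.0  # inhibitor power > 1, i.e. some inhibitor present
S_opt = math.sqrt(1.0 * 1.0 * alpha)  # Eq. (13): sqrt(Ks * Km * alpha)

# the rate at S_opt beats the rate slightly to either side (it is a maximum)
assert rate(S_opt, alpha=alpha) > rate(0.9 * S_opt, alpha=alpha)
assert rate(S_opt, alpha=alpha) > rate(1.1 * S_opt, alpha=alpha)

# increasing the inhibitor power lowers the rate at every [S]
for S in (0.1, 1.0, 10.0):
    assert rate(S, alpha=2.0) < rate(S, alpha=1.0)
print("Eqs. (12)-(13) behave as described")
```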
This is the curve in magenta; thus, the best "ideal" operation is the one that follows this curve at every instant in time in the biochemical reactor, disregarding spatial effects. | {
"domain": "chemistry.stackexchange",
"id": 17604,
"tags": "kinetics, biochemistry, proteins, enzymes"
} |
(Apparent) Paradoxes in General Relativity | Question: I find that studying the paradoxes of special relativity, such as the twin paradox, ladder paradox, and Ehrenfest paradox, and their resolutions, helps me to understand it.
Are there any similarly helpful paradoxes in General Relativity?
Answer: Submarine paradox
J. M. Supplee, Am. J. Phys. 57, 75 (1989).
Matsas, 2008,
http://arxiv.org/abs/gr-qc/0305106
http://en.wikipedia.org/wiki/Supplee%27s_paradox
A submarine is neutrally buoyant at rest. An apparent contradiction arises when it starts to move: an observer at rest in the water says it sinks, because length contraction increases the submarine's density, while the sailors say it rises, because in their frame it is the water that is denser.
Tethered galaxies
Suppose two galaxies are tied together with a rigid cable. What happens due to cosmological expansion? What happens to the Doppler shift of the galaxy?
Davis, Lineweaver, and Webb, https://arxiv.org/abs/astro-ph/0104349
Clavering, https://arxiv.org/abs/astro-ph/0511709
Straddling the horizon
What happens if you have half your body on one side of a black hole's event horizon, and half on the other side? | {
"domain": "physics.stackexchange",
"id": 53826,
"tags": "general-relativity, special-relativity, soft-question, education"
} |
Summations and products in F# | Question: I was bored and looking to do something that involved anonymous functions, and it was suggested by @Quill that I create a summation function. I decided to also include a product function to top the whole thing off.
For those who don't know, a summation is usually defined like this:
$$\sum_{n=a}^{b}f(n)$$
And a product is usually defined like this:
$$\prod_{n=a}^{b}f(n)$$
I'd like to know the following things:
Am I doing this in a proper functional way?
Is there a way to reduce the repetitiveness of the code?
Is it okay to use List.fold to multiply all the elements of a list together? Is there a better way?
Am I violating any style guidelines?
Anything else?
Here's the code:
/// <summary>
/// Find the summation in a specific range, given
/// a specific function.
/// </summary>
let summation func low high =
let function_result_set: int list = [for x in low..high do yield x |> func]
let result = function_result_set |> List.sum
result
/// <summary>
/// Find the product in a specific range, given
/// a specific function.
/// </summary>
let product func low high =
let function_result_set: int list = [for x in low..high do yield x |> func]
let result = function_result_set |> List.fold (*) 1
result
And finally, here's a few tests:
[<EntryPoint>]
let main argv =
System.Console.WriteLine(summation (fun n -> n) 1 10)
System.Console.WriteLine(product (fun n -> n) 1 10)
System.Console.ReadKey() |> ignore
0
And here's the desired output from the above tests:
55
3628800
Answer: Starting with summation,
let summation func low high =
let function_result_set: int list = [for x in low..high do yield x |> func]
let result = function_result_set |> List.sum
result
First of all, we can remove result
let summation func low high =
let function_result_set: int list = [for x in low..high do yield x |> func]
function_result_set |> List.sum
Next, we don't need to construct a list containing func(low) .. func(high); we can instead operate over a sequence, meaning we don't have to have all those values in memory at once.
let summation func low high =
let function_result_set = Seq.map func { low .. high }
function_result_set |> Seq.sum
Now we can make it a bit more succinct
let summation func low high =
Seq.map func { low .. high } |> Seq.sum
Finally, we can make it more generic by using statically resolved type parameters. At the moment, if we have a call to summation like this:
printfn "%d" <| summation id 1 10
then summation will be constrained to be of type (int -> int) -> int -> int -> int. We can fix this by making the function inline:
let inline summation func low high =
Seq.map func { low .. high } |> Seq.sum
Now summation has type ('a -> 'b) -> 'a -> 'a -> 'b and the following code will compile:
printfn "%d" <| summation id 1 10
printfn "%f" <| summation id 1.0 10.0
Similar comments hold for product, but with two additional things:
To make it generic we need to get the One property of the type ^a, and
I've used the checked version of * to check for overflow
let inline product func low high : ^a =
Seq.map func { low .. high } |> Seq.fold Checked.(*) LanguagePrimitives.GenericOne< ^a > | {
"domain": "codereview.stackexchange",
"id": 16195,
"tags": "programming-challenge, functional-programming, mathematics, f#"
} |
Rolling down an arbitrary hill with friction | Question: I had in mind this physical situation of a point mass rolling down a hill of some arbitrary shape ($y=y(x)$, for instance a parabola about $x=0$), with a peculiar friction force whose "expended work" is proportional to the arclength since $t=t_0$. If $x=x(t)$ and $y=y(t)$, then $$W=k \int_{x=x_0}^x \sqrt{1+y'(x)^{2}} dx=k\int_{t=t_0}^t\sqrt{x'(t)^{2}+y'(t)^{2}}dt$$
I've tried to get equations of motion starting from the energy theorem:
$$\frac{1}{2}m(x'(t)^{2}-v_0^{2})=-mg(y(t)-y_0) +k\int_{t=t_0}^t\sqrt{x'(t)^{2}+y'(t)^{2}}dt$$
and then differentiating with respect to $t$:
$$m x'(t) x''(t)=-mgy'(t)+k\sqrt{x'(t)^{2}+y'(t)^{2}}$$
I thought I would bring in the landscape $y(x)$ by using $$y'(t)=\frac{dy}{dx}x'(t)$$
Then: $$mx''(t)=-mg\frac{dy}{dx} +k\sqrt{1+y'(x)^{2}}$$
But I don't know how to work the equations further. Is there a better method to go at it (maybe even further from energetics)?
EDIT:
As shown in the comments exchange, I had to modify the method by expanding on $W=\int \mu N ds$ with $\vec N $ being the normal force. I obtained the peculiar result of $$N=mg \frac{1}{\sqrt{1+y'(x)^2}} \rightarrow W=\int \mu mg dx$$ seemingly path-independent.
First method: By generalizing that for the inclined plane $N=mg \cos{\theta}$ for $\theta$ the "slope angle". $\cos{\theta}$ then becomes $$=\frac{1}{\sqrt{1+\tan{\theta}^{2}}}=\frac{1}{\sqrt{1+y'(x)^{2}}}$$
Second method: I went in more rigorously, by using the definition of $N$ as the projection of "$\overrightarrow{\text{weight}}$" onto the instantaneous normal axis: $N=|\overrightarrow{\text{weight}} \cdot \hat n|$ with $\hat n$ the unit normal vector for the parametrization $(x,y(x))$. As $$\hat n=\frac{\hat t'}{|\hat t'|}$$ and $$\hat t=\frac{\vec r'}{|\vec r'|}=\frac{\left(1,y'(x)\right)}{\sqrt{1+y'(x)^{2}}} \rightarrow \hat t'=\left(\frac{-y''(x)y'(x)}{\left(1+y'(x)^{2}\right)^{3/2}},\frac{y''(x)}{\left(1+y'(x)^{2}\right)^{3/2}}\right) \rightarrow \hat n=\left(\frac{-y'}{\sqrt{1+y'^{2}}},\frac{1}{\sqrt{1+y'^{2}}}\right)$$
Using $\overrightarrow{\text{weight}}=(0,-mg)$ I got the same result as from method 1, and thus cancellation (??)
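The path independence of that "peculiar result" can be checked numerically. This is a sketch with arbitrary values of $\mu$, $m$ and $g$: integrating $W=\int \mu N\,ds$ with the question's $N = mg/\sqrt{1+y'(x)^2}$ over two different curves between the same $x$ endpoints gives the same work.

```python
import math

def friction_work(y_prime, x0, x1, mu=0.3, m=1.0, g=9.81, n=100_000):
    """Midpoint-rule value of W = int mu * N ds with N = m*g/sqrt(1+y'^2)."""
    h = (x1 - x0) / n
    total = 0.0
    for i in range(n):
        yp = y_prime(x0 + (i + 0.5) * h)
        N = m * g / math.sqrt(1 + yp**2)            # the question's normal force
        total += mu * N * math.sqrt(1 + yp**2) * h  # ds = sqrt(1 + y'^2) dx
    return total

W_parabola = friction_work(lambda x: 2 * x, 0.0, 1.0)   # y = x^2
W_cubic = friction_work(lambda x: 3 * x**2, 0.0, 1.0)   # y = x^3
print(W_parabola, W_cubic)  # both collapse to mu * m * g * (x1 - x0)
```

The integrand collapses algebraically to $\mu m g$, so both curves give $\mu m g\,(x_1-x_0)$, which is exactly the OP's path-independence observation.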
Answer: I propose redefining this problem as follows (because I'm not sure it has a solution the way the OP has defined it).
Let $y=f(x)$ be some function symmetric about the $y$-axis, like $x^2$. Let the point mass experience a friction force according to the usual simple model $F_f=\mu F_N$, with $F_N$ the normal force acting on the point mass at the point $(x,y)$ ($N$ is the normal line at $(x,y)$) and $\mu$ a constant coefficient of friction.
Now, from the balances of forces in the $x$ and $y$ directions, equations of motion can be set up. | {
"domain": "physics.stackexchange",
"id": 24470,
"tags": "homework-and-exercises, newtonian-mechanics, friction, potential-energy"
} |
The definition of entropy | Question: As the history of thermodynamics tells us, it was a mystery what the required condition is for a given energy conversion to take place: there can be two possible events, each conserving energy, but only one is chosen. To resolve this, Clausius introduced a quantity called entropy, given by $\int dq/T$. But may I know why Clausius chose this particular integral, and not some other quantity (changes in which, positive or negative, would decide the occurrence of a given event)? I hope there is an explanation that does not use statistical mechanics.
Answer: If you don't want to use statistical mechanics, you can view it as a completely mathematical thing.
When you write the differential form $\delta Q$, you are not speaking of an exact differential, i.e. it is not really the differential of any function of the thermodynamical state.
Temperature is, in this case, called the integrating factor, which means that $\delta Q/T$ is an exact form, in particular, it's $dS$, the differential of Entropy. This is a way to let entropy come out.
On the other hand, much more physical explanations can be given.
The first obviously uses stat mech, but without making calculations I can just tell you that $S$ turns out to be very closely connected with the number of possible microscopic states that a thermodynamical (thus macroscopic) state can admit.
Finally, a reason is that the quantity $\int \delta Q/T$ is never negative in normal thermodynamical transformations, that is, it allows a simple formulation of the second principle.
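A small numerical illustration of the integrating-factor point (my own sketch, not part of the original answer): for a monatomic ideal gas taken around a rectangular cycle in the $(T,V)$ plane, $\oint \delta Q \neq 0$ while $\oint \delta Q/T = 0$. The gas amount and cycle corners are arbitrary.

```python
import math

n, R = 1.0, 8.314        # moles, gas constant (J/mol/K)
Cv = 1.5 * R             # monatomic ideal gas
T1, T2, V1, V2 = 300.0, 600.0, 1.0, 2.0

# Heat exchanged on each leg of the cycle (dQ = n*Cv*dT + n*R*T*dV/V):
Q_legs = [
    n * Cv * (T2 - T1),              # isochoric heating at V1
    n * R * T2 * math.log(V2 / V1),  # isothermal expansion at T2
    n * Cv * (T1 - T2),              # isochoric cooling at V2
    n * R * T1 * math.log(V1 / V2),  # isothermal compression at T1
]

# The same legs, integrating dQ/T instead:
S_legs = [
    n * Cv * math.log(T2 / T1),
    n * R * math.log(V2 / V1),
    n * Cv * math.log(T1 / T2),
    n * R * math.log(V1 / V2),
]

print(sum(Q_legs))  # nonzero: delta-Q is not an exact differential
print(sum(S_legs))  # ~0: delta-Q / T is exact; its loop integral vanishes
```

The loop integral of $\delta Q$ comes out around $1.7\,\text{kJ}$ here, while the loop integral of $\delta Q/T$ cancels term by term, which is precisely what makes $1/T$ an integrating factor.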
I hope this is what you were looking for. | {
"domain": "physics.stackexchange",
"id": 6247,
"tags": "thermodynamics, entropy"
} |
Basic news/blog post system | Question: I had made some pages and they work but I'm not sure if I coded it in best way so I want your suggestions and ideas to make my code better.
connection.php
<?php
$mysql_host = 'localhost';
$mysql_user = 'root';
$mysql_pass = 'root';
$mysql_data = 'project_eye';
$connect = mysql_connect($mysql_host, $mysql_user, $mysql_pass) or mysql_error();
$db_sele = mysql_select_db($mysql_data);
mysql_query("set names 'utf8'");
mysql_query("SET character_set_client=utf8");
mysql_query("SET character_set_connection=utf8");
mysql_query("SET character_set_database=utf8");
mysql_query("SET character_set_results=utf8");
mysql_query("SET character_set_server=utf8");
?>
phpCodes.php:
It's a page that contains the header. I'm using it to call the function from my pages.
<?php
function headerCode(){
echo '
<div class="header">
<div class="header-top">
<div class="logform">
';
accountLinks();
echo '
</div>
<div class="social-newtork">
<a><img src="images/f.png"></a>
<a><img src="images/t.png"></a>
<a><img src="images/g.png"></a>
</div>
</div>
<div class="menu-content">
<img src="images/eye.jpg">
<img src="images/compelete.jpg" style="width: 78.3%">
<div class="desc">
<span class="first">عينٌـــــــــ</span>
<span class="second">على الحقيقة</span>
</div>
</div>
<div class="menu">
<ul>
<li><a href="">محلية</a></li>
<li><a href="">عالمية</a></li>
<li><a href="">رياضية</a></li>
<li><a href="">طبية</a></li>
<li><a href="">طرائف</a></li>
</ul>
</div>
</div>
';
}
function accountLinks(){
if( !isset($_SESSION['id']) ){
echo '
<form method="post" action="index.php">
<input type="submit" name="submit" value="دخول" class="log">
<input type="password" id="password" name="password" placeholder="كلمة المرور" class="mem-information">
<input type="text" placeholder="اسم المستخدم" id="username" name="username" class="mem-information">
</form> ';
}else{
echo '
<form method="post" action="index.php">
<a href="controlpanel/" class="account">لوحة التحكم</a>
<input type="submit" href="" class="logout" name="logout" onclick="logout(this);" value="تسجيل الخروح">
</form>
';
}
}
?>
con.php:
This page contains a simple control panel; it isn't complete yet.
<?php
ob_start();
session_start();
include('../includes/connect.php');
include('../includes/phpCodes.php');
?>
<!DOCTYPE html>
<html>
<head>
<title>لوحة التحكم</title>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="../css/mainstyle.css">
<link rel="stylesheet" type="text/css" href="css/controlstyle.css">
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function(){
$('#tabs div').hide();
$('#tabs div:first').show();
$('#tabs ul li:first').addClass('active');
$('#tabs ul li a').click(function(){
$('#tabs ul li').removeClass('active');
$(this).parent().addClass('active');
var currentTab = $(this).attr('href');
$('#tabs div').hide();
$(currentTab).show();
return false;
});
});
</script>
</head>
<body>
<div class="wrapper">
<?php headerCode(); ?>
<div class="content">
<div id="tabs">
<ul>
<li><a href="#add">اضافة موضوع</a></li>
<li><a href="#remove">حذف موضوع</a></li>
<li><a href="#edit">تعديل موضوع</a></li>
<li><a href="#edit">التحكم بالاقسام</a></li>
</ul>
<div id="add">
<form method="POST" action="includes/add.php" dir="rtl" enctype="multipart/form-data">
<br>
حدد القسم : <select name="section">
<?php
$query = "SELECT * FROM `sections`";
$result = mysql_query($query);
while($row=mysql_fetch_array($result, MYSQL_ASSOC)){
echo "<option value='".$row['id']."'>".$row['sectionName']."</option>";
}
?>
</select><br>
عنوان الموضوع :<input type="text" name="title" class="mem-information"/><br>
الموضوع : <br /><textarea name="subject" rows="10" cols="50" class="mem-information" style="width: 500px"></textarea><br /><br>
الصورة :<input type="file" name="image"><br>
<input type="submit" value="إرسال" name="send" class="log" style="color: black">
</form>
</div>
<div id="remove">
<form method="POST" action="includes/remove.php" dir="rtl"><br>
حدد القسم :
<select name ="sectionsName">
<option value="">dd</option>
</select>
<input type="submit" value="حذف" name="send" class="log" style="color: black">
</form>
</div>
<div id="edit">
</div>
<div id="addDep">
</div>
</div>
</div>
</div>
</body>
</html>
add.php:
Its job is to add values to the database.
<?php
session_start();
include('../../includes/connect.php');
$sectionID = $_POST["section"];
$title = $_POST['title'];
$subject = $_POST['subject'];
$visiable = 1;
$imageName = mysql_real_escape_string($_FILES["image"]["name"]);
$imageData = mysql_real_escape_string(file_get_contents($_FILES["image"]["tmp_name"]));
$imageType = mysql_real_escape_string($_FILES["image"]["type"]);
$query = "insert into news (title, subject, visiable, image, section_id) values ('$title','$subject', '$visiable', '$imageData', '$sectionID')";
$result = mysql_query($query);
$id = mysql_insert_id();
$data = array(
'id' => $id
);
$base = '../../show.php';
$url = $base. '?' . http_build_query($data);
header("Location: $url");
exit();
?>
show.php:
After adding to the database, this page will display the values.
<?php
ob_start();
session_start();
include('includes/connect.php');
include('includes/phpCodes.php');
function showNews(){
$id = $_GET['id'];
echo '<img src="includes/getImage.php?id=' . $id . '" class="newsImage">';
echo '
<h1><p class="subjecTitle">هنا العنوان</p></h1>
<div class="newsContent">
hihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihihi
</div>
';
}
?>
<!DOCTYPE html>
<html>
<head>
<title>عينٌ على الحقيقة</title>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="css/mainstyle.css">
<link rel="stylesheet" type="text/css" href="css/showstyle.css">
<script type="text/javascript">
function logout(myFrame){
myFrame.submit();
}
</script>
</head>
<body>
<div class="wrapper">
<?php headerCode(); ?>
<div class="content" dir="rtl">
<?php showNews(); ?>
</div>
</div>
</body>
</html>
Answer: Ok, where to start.
mysql_* functions are bad. Burn them, throw them away, but don't use them. Use mysqli_* instead, or even better, PDO.
Ever heard of SQL injection? I think so, since you are using mysql_real_escape_string. A simple search on the interwebz for how to bypass mysql_real_escape_string shows its weaknesses.
That being said, let's look at your code.
The mysql_* issue left aside, your code isn't bad at all. Everything works, and at the end of the day that is all that matters. But after that day come other days, and in two months' time you might need to change certain parts of your code, fix a bug, add some functionality, ...
With that in our minds lets look at the code:
There is no real separation of concerns. Every file has multiple responsibilities and knows way too much.
connection.php only handles the database connection. So this is good!
phpCodes.php supplies functions that handle presentation (I'll come to this later)
con.php handles presentation, sql queries, sessions and buffering
add.php knows about sessions AND how to add an Image.
show.php knows about session, buffers, other functional files AND it knows about presentation. Now that is a lot for a simple file.
I don't think I need to tell you what is wrong ;)
One of the important programming rules (imo) is DRY, Don't repeat yourself.
Yet somehow you repeat the HTML in two files. So if you want to make a change to the HTML, you have to check all the files that contain it and change it in each one. Make sure that you don't write duplicate code, not for "increased performance" because of a smaller file size, but because it is far easier to maintain.
So, a little todo list for you:
Separate business logic from presentation. In your presentation files there should only be HTML with some echo $someVar statements. In your business logic there should be no HTML, just plain, simple code: code that does its job without having to know about other parts of the system.
Then create a 'controller' file that includes the correct files, starts the session and the buffer, ...
This way you can easily add functionality, change presentation, ...
Always think SOLID
One last remark on phpCodes.php: don't use functions to output HTML. Functions should provide some sort of functionality: you give them something and they do something with it. Here you are simply using a function as a glorified variable holding some HTML.
Apart from that, a function should ideally be stateless, yet your function returns different things depending on a variable that doesn't even get passed in.
If a function changes its output depending on a variable, that variable should be passed into the function, not hard-coded in it. And if the sole purpose of the function is to return HTML, don't use a function; simply include the correct template in the controller. | {
"domain": "codereview.stackexchange",
"id": 4252,
"tags": "php, mysql"
} |
Aggregate Rate and Poisson Process in Aloha | Question: I'm having a hard time understanding what is meant by the aggregate rate $\lambda$ and what it means for throughput in unslotted and slotted Aloha protocols. The more I read about Poisson processes, the more confused I get.
I understand that the probability a packet is transmitted in a time interval $\delta t$ is $\lambda \times \delta t$, where you have $N$ senders and $\lambda$ is the aggregate rate from all $N$ senders.
However, I don't understand what is meant by aggregate rate. It's quite a peculiar phrase. Does it mean the probability that any of the $N$ senders will try to transmit, or is it the number of packets sent by the $N$ senders over time?
Next, we're told that the individual rate for a sender is $\lambda / N$. Because I'm so confused by what is meant by rate, it's hard to begin to think about what this even means.
Answer: The dictionary definition of aggregate might be helpful: https://www.wordnik.com/words/aggregate
You have $N$ senders. Each sender sends packets on a network at some rate, say, $\lambda/N$. Now think about all the packets that are sent over that network, without regard to who sent it. What's the rate at which such packets appear on the network? That's a total over all packets. The aggregate rate will be something like $\lambda$.
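A small simulation sketch may make this concrete (my own illustration with arbitrary parameters): $N$ senders, each an independent Poisson process of rate $\lambda/N$, together produce a packet stream whose rate is the aggregate $\lambda$, since a superposition of independent Poisson processes is Poisson with the summed rate.

```python
import random

random.seed(1)
N, lam, T = 10, 5.0, 10_000.0   # senders, aggregate rate, observation time

total_packets = 0
for _ in range(N):
    # one sender: exponential inter-arrival times at individual rate lam/N
    t = random.expovariate(lam / N)
    while t < T:
        total_packets += 1
        t += random.expovariate(lam / N)

observed = total_packets / T    # count of ALL packets over the time window
print(observed)                 # close to the aggregate rate lam = 5.0
```

Counting packets from everyone (not just one sender) and dividing by the observation time is exactly the "count over a time period" reading of the aggregate rate.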
It might also be helpful to refresh your understanding of what a rate is. A rate is an amount divided by a time period: e.g., the number of packets sent by Alice per unit time interval. The aggregate rate is also a count divided by a time period, but in this case, the count is of a different quantity: it's a count of all packets, not a count of the packets sent by Alice. So, the number of packets sent (in total, from anyone) per unit time interval is a rate. That is what your book is calling the aggregate rate. | {
"domain": "cs.stackexchange",
"id": 3568,
"tags": "computer-networks, communication-protocols"
} |
Merging black holes makes them less dense, so | Question: According to What is exactly the density of a black hole and how can it be calculated? (more specifically, John's answer there), I started thinking: if you merge a whole load of chunks of an element heavier than iron (to prevent them from fusing), the resulting object would either be more dense than a black hole of the same mass, or would become less dense by becoming a black hole.
So which one of these would happen in this hypothetical situation? Or would neither happen, but something completely different? Both seem impossible to me: such heavy objects would have no way to prevent gravity from crushing them down (which implies they must become a black hole), but if a black hole formed, it would require gravitational energy input in order to become less dense. So that would exclude both possibilities, right?
Of course this situation would never occur in real life, but this hypothetical situation would have no angular momentum in the system, so no mass would be ejected.
Answer: If I understand you correctly you are concerned that a black hole somehow manages to become less dense than the matter that made it, as if it somehow expands against its own gravity to increase its volume.
However a black hole event horizon is not an object - it is just a place in spacetime. Although we can calculate a density by calculating the volume inside the event horizon this density is of no physical significance. The matter inside the event horizon is not uniformly distributed, as it is in a ball of iron, so all we are calculating is an average density.
Anything falling into a black hole rapidly reaches the singularity at the centre where the density is infinite (actually it's undefined, but let's save that complication for another day). So inside the event horizon you have empty space with a singularity at the centre. While there's nothing to stop you calculating an average density for this object your result doesn't have any special meaning. | {
"domain": "physics.stackexchange",
"id": 31418,
"tags": "black-holes, relativity, density"
} |
Extensible FizzBuzz in Ruby | Question: The title says it all. This is yet another FizzBuzz implementation where you can specify the words you want to use, though it autoselects the divisors as the first n primes other than 2, rather than having you enter them.
I'm looking for tips on variable/function naming and that kind of thing; I couldn't think of good ones when I wrote this. I'd also like any performance improvements that can be put in.
def n_primes(count)
primes = []
number = 2
until primes.length == count
primes << number if primes.inject(true) { |memo, cur| memo and number % cur != 0 }
number += 1
end
primes
end
puts 'Enter the words you would like to use'
word_array = gets.chomp.split
puts 'Enter the number to count up to'
up_to = gets.chomp.to_i
words = {}
primes = n_primes(word_array.length + 1)[1..-1]
primes.each_with_index { |prime, index| words[prime] = word_array[index] }
1.upto(up_to) do |number|
to_print = words.reject { |n, _| number % n != 0 }.values
to_print = [number] if to_print.length == 0
puts to_print.join ''
end
Note: I extracted n_primes into a function because it means I don't have to use meaningless names like p or num to avoid naming conflicts. If there's a better solution, I'd like to know it.
Sample output:
Enter the words you would like to use
> Fizz Buzz Wolf Foo Bar
Enter the number to count up to
> 100
1
2
Fizz
4
Buzz
Fizz
Wolf
8
Fizz
Buzz
Foo
Fizz
Bar
Wolf
FizzBuzz
16
17
Fizz
19
Buzz
FizzWolf
Foo
23
Fizz
Buzz
Bar
Fizz
Wolf
29
FizzBuzz
31
32
FizzFoo
34
BuzzWolf
Fizz
37
38
FizzBar
Buzz
41
FizzWolf
43
Foo
FizzBuzz
46
47
Fizz
Wolf
Buzz
Fizz
Bar
53
Fizz
BuzzFoo
Wolf
Fizz
58
59
FizzBuzz
61
62
FizzWolf
64
BuzzBar
FizzFoo
67
68
Fizz
BuzzWolf
71
Fizz
73
74
FizzBuzz
76
WolfFoo
FizzBar
79
Buzz
Fizz
82
83
FizzWolf
Buzz
86
Fizz
Foo
89
FizzBuzz
WolfBar
92
Fizz
94
Buzz
Fizz
97
Wolf
FizzFoo
Buzz
(The user input is prefixed with >, though it doesn't show up when you actually run the code)
Answer:
#n_primes
Ruby has the Prime class as part of its stdlib, so you don't need to roll your own prime number generator.
n_primes(word_array.length + 1)[1..-1]
A more Rubyesque version might use drop(1) instead of the [1..-1] slice.
words = {} + primes.each_with_index
Instead of creating an empty Hash and then adding elements to it, you can use Array#zip to create an array of word/number pairs. You can turn that into a hash with Hash[], if you'd like.
words.reject
Purely from a semantic standpoint, I think select would be more natural to use. I think of it as "finding the words to print" rather than "discarding the words that shouldn't be printed" which sounds like a double negative.
You could also consider using #inject or #each_with_object to collect matches instead of #reject + #values
to_print.length == 0
Use Array#empty? instead.
to_print.join ''
You don't need the argument for Array#join.
I'd do this:
require "prime"
puts "Enter the words you would like to use (separated by spaces)"
words = gets.strip.split
puts "Enter the number to count up to"
count = gets.strip.to_i
pairs = Prime.first(words.count + 1).drop(1).zip(words)
1.upto(count) do |n|
line = pairs.each_with_object([]) do |(prime, word), memo|
memo << word if (n % prime).zero?
end
puts line.empty? ? n : line.join
end
Or, more in line with yours, the last part could be:
1.upto(count) do |n|
line = pairs.select { |prime, _| (n % prime).zero? }.map(&:last)
puts line.empty? ? n : line.join
end | {
"domain": "codereview.stackexchange",
"id": 13975,
"tags": "ruby, fizzbuzz"
} |
Projective representations and understanding "non-trivial" two-cocycles | Question: A symmetry group $\mathcal{G}$ may be represented on the physical Hilbert space by unitary operators $U(g)$ such that it satisfies the composition rule $$U(g_1)U(g_2)=e^{i\phi(g_1,g_2)}U(g_1g_2).\tag{1}$$ If $\phi(g_1,g_2)$ is of the form $$\phi(g_1,g_2)=\alpha(g_1g_2)-\alpha(g_1)-\alpha(g_2)\tag{2}$$ the projective representations in (1) with such a phase (2) can be replaced by an ordinary representation by replacing $U(g)$ with $$\tilde{U}(g)=e^{i\alpha(g)}U(g).\tag{3}$$ If I understand it correct, Weinberg says in his QFT book that $\phi=0$ satisfies relation (2), and is therefore, a trivial two-cocycle. I don't understand why $\phi=\pi$ cannot satisfy the relation (2). It is satisfied by choosing $$\pi+\alpha(g_1g_2)=\alpha(g_1)+\alpha(g_2)\tag{4}.$$
Therefore, projective representations corresponding to $\phi=\pi$ (or any constant value of $\phi$), can be eliminated by defining a new representation of the form (3). Do I misunderstand something?
Answer: You are right: Any projective representation $U(g)$ with $\phi(g_1,g_2)=\pi$ can be made a linear representation by replacing
$$
\tilde U(g) = -U(g)\ .
$$
This corresponds to choosing $\alpha(g)=\pi$ in Eq. (2). (Note that while this satisfies (4), it is not clear whether Eq. (4) itself is a good definition -- it is not clear that $\alpha(g)$ for each $g$ is uniquely defined this way.)
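As a toy numeric check (my own illustration, not from Weinberg), take the two-element group $\mathbb{Z}_2$ with the constant cocycle $\phi = \pi$ and verify that the sign flip $\tilde U(g) = -U(g)$ really does produce a linear representation:

```python
import cmath

# Z2 = {0, 1} under addition mod 2, with constant cocycle phi(g1, g2) = pi.
# An illustrative one-dimensional projective "representation":
# U(0) = -1, U(1) = +1 satisfies U(g1) U(g2) = e^{i pi} U(g1 + g2).
U = {0: -1.0, 1: 1.0}
phase = cmath.exp(1j * cmath.pi)  # = -1 up to rounding

for g1 in (0, 1):
    for g2 in (0, 1):
        assert abs(U[g1] * U[g2] - phase * U[(g1 + g2) % 2]) < 1e-12

# Rescale with alpha(g) = pi, i.e. U~(g) = e^{i pi} U(g) = -U(g):
U_tilde = {g: -u for g, u in U.items()}

# Now the phase is gone and the composition rule is exactly linear.
for g1 in (0, 1):
    for g2 in (0, 1):
        assert abs(U_tilde[g1] * U_tilde[g2] - U_tilde[(g1 + g2) % 2]) < 1e-12
```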
Indeed, this is generally true when $\phi(g_1,g_2)=\vartheta$ is constant: In each of these cases, you can define $\tilde U(g)=e^{-i\vartheta}U(g)$, or $\alpha(g)=-\vartheta$. | {
"domain": "physics.stackexchange",
"id": 56681,
"tags": "quantum-field-theory, group-representations, representation-theory, poincare-symmetry"
} |
Static equilibrium | A beam hanging by two ropes at angles, with a block on it | Question: I have this problem model and I'm trying to derive the equations for the distance $d$ and the tension $\vec{T_2}$ in terms of the known values: the angles $\alpha$ and $\theta$, $\vec{P_1}$, $\vec{P_2}$ and $l$.
The green $\vec{P}$ are the weights
The pink $τ$ are torques
I'm not sure about the signs of the torques or how to combine the equations to get final expressions that involve only the known values
Answer: You need three equations: The sum of the vertical components of the forces = 0. The sum of the horizontal components of the forces = 0. and The sum of the torques about a chosen point = 0. I would take the torques about point (A). Your three unknowns will be: $T_1, T_2$, and d. | {
"domain": "physics.stackexchange",
"id": 84981,
"tags": "homework-and-exercises, newtonian-mechanics, free-body-diagram, string"
} |
Components of Electrostatic energy to components of electric field? | Question: I'm trying to understand the relationship between electrostatic energy and the electric field so that I can compute electric field components from electrostatic energy components.
Is it correct to assume that $E_x=\frac {F_x}q $ where $ F_x$ is the force computed from the electrostatic energy?
Answer:
I'm trying to understand the relationship between electrostatic energy
and the electric field so that I can compute electric field components
from electrostatic energy components.
The energy associated with an electrostatic field is called electrostatic potential energy. All forms of energy, including electrostatic potential energy, are scalar quantities and do not have directional components. The electric field is a vector quantity and therefore has directional components. So you cannot compute electric field components from electrostatic potential energy components as the latter do not exist.
Is it correct to assume that $E_x=\frac{F_x}{q}$ where $F_x$
is the force computed from the electrostatic energy?
No.
$E_x$ is the electric field in the $x$ direction and is a vector quantity. Its direction is by convention the direction of the force that a positive charge would experience if placed in the field. It is not electrostatic potential energy. $F_x$ is the electrostatic force in the x direction and is also a vector quantity. It, too, is not electrostatic potential energy.
Relationship between electrostatic potential energy and the electric field
Now, to help understand this relationship, consider the following using the analogy of gravitational potential energy:
When charge is moved in an electric field its electrostatic potential energy either increases or decreases. This is analogous to moving a mass in a gravitational field which results in an increase or decrease in gravitational potential energy.
The work involved in moving a charge $q$ a distance $x$ given a constant electric field $E_x$, is given by
$$W=F_{x}x=qE_xx$$
The gravity analogy is
$$W=mgh$$
The sign for the work in each case will depend on whether the force acts in the same or opposite direction to the displacement of the charge or mass.
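As a small numeric illustration of the analogy (all values below are made up for the example, not taken from the question):

```python
# Work done moving a charge a distance x along a uniform field,
# compared with the gravitational analogue W = m * g * h.
q = 1.6e-19    # charge, C (one elementary charge)
E_x = 1.0e4    # uniform field along x, V/m
x = 0.05       # displacement along the field, m
W_electric = q * E_x * x   # joules

m = 2.0        # mass, kg
g = 9.81       # gravitational acceleration, m/s^2
h = 0.05       # height change, m
W_gravity = m * g * h      # joules

# Reversing the displacement (x -> -x, or h -> -h) flips the sign of
# the work, which is the sign convention discussed above.
```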
A force applied by an external agent does positive work to move charge in a direction opposite to the direction of the force exerted by the electric field on the charge, such as moving positive charges towards each other. This is analogous to the work done by an external force to lift an object in opposition to the downward force of gravity. At the same time, the electric field does an equal amount of negative work on the charge taking the energy the external agent gave the charge and storing it as electrostatic potential energy. This is analogous to the gravitational field doing negative work on the object being lifted taking the energy provided by the external force and storing it as gravitational potential energy.
On the other hand, if the work is done by the electric field on a charge placed in the field, the direction of the electrostatic force is the same as the movement of the charge, and the force gives the charge kinetic energy at the expense of electrostatic potential energy. This is analogous to the gravitational field doing positive work on a falling object giving it kinetic energy at the expense of gravitational potential energy.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 60619,
"tags": "electrostatics, electric-fields, potential-energy, voltage"
} |
Time Dilation Experiment | Question: I am currently thinking of doing an experiment for science fair involving time dilation which will use the equations for both time dilation based off of speed and for gravitational dilation to determine if there is a speed and orbit altitude a satellite can maintain to nullify the effects of time dilation.
From the sources I have looked at, satellites gain about an extra 1.7-1.9 seconds every century because they are not affected by gravitational time dilation as much as we are on Earth. What I need a little help with is finding the equation for gravitational time dilation.
I have found out how to do it for velocity, which was pretty straightforward with the Lorentz equation and finding gamma. Is there a different formula for an orbiting object as opposed to an object on the surface?
Any response would be appreciated and thanks for taking the time to read this.
Original question: https://www.reddit.com/r/Physics/comments/2mic2l/time_dilation_experiment/
Answer: The time dilation factor (relative to a stationary observer at infinity) for a moving clock at a constant radius is:
$$\frac{d \tau}{dt} = \sqrt{1-\frac{2GM}{c^2 r} - \frac{v^2}{c^2}}$$
where G is the gravitational constant, M is the mass of Earth, r is the radius from the center of the Earth, and v is the velocity of the clock. So, a stationary observer sitting on the surface of Earth would experience a factor:
$$\sqrt{1-\frac{2GM}{c^2 r_\oplus}}$$
where $r_\oplus$ is the radius of the Earth. What we are then looking for is a stable orbit where the amount of time dilation experienced by the satellite is exactly equal to the amount experienced by someone stationary on Earth. In other words:
$$\sqrt{1-\frac{2GM}{c^2 r_\oplus}} = \sqrt{1-\frac{2GM}{c^2 r} - \frac{v^2}{c^2}}$$
This can be simplified a bit:
$$\frac{2GM}{c^2 r_\oplus} =\frac{2GM}{c^2 r} + \frac{v^2}{c^2}$$
Additionally, we know from ordinary Newtonian mechanics that the orbital velocity of a test particle is given by:
$$v^2=\frac{GM}{r}$$
So plugging that in to our equation above we get:
$$\frac{2GM}{c^2 r_\oplus} =\frac{2GM}{c^2 r} + \frac{GM}{c^2r}$$
Simplifying:
$$r= \frac{3}{2} r_\oplus$$
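A quick numeric sanity check of this result (my own sketch, for a non-rotating Earth with illustrative constants):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M = 5.972e24       # mass of Earth, kg
r_earth = 6.371e6  # radius of Earth, m

def rate_surface():
    """d(tau)/dt for a clock at rest on the surface (at a pole, so no rotation)."""
    return math.sqrt(1 - 2 * G * M / (c**2 * r_earth))

def rate_orbit(r):
    """d(tau)/dt for a circular orbit of radius r, using v^2 = GM/r."""
    return math.sqrt(1 - 2 * G * M / (c**2 * r) - G * M / (c**2 * r))

# The orbital rate matches the surface rate exactly at r = 1.5 * r_earth:
r_star = 1.5 * r_earth
```

Lower orbits run slower than the surface clock and higher orbits run faster, with the crossover at the radius above.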
In other words, a satellite orbiting at a height of $\frac{1}{2} r_\oplus$ above the Earth's surface will age at the same rate as someone stationary on Earth at either the North or South pole. You can do exactly the same calculation for someone not at the poles, but their velocity due to Earth's rotation will depend on their distance from the equator. | {
"domain": "physics.stackexchange",
"id": 17733,
"tags": "time-dilation"
} |
How does LOFAR pass through the ionosphere? | Question: The LOFAR project opens a new window on the study of the universe because it allows us to receive low frequencies from the universe that we could not previously detect because of the ionosphere.
But I'm wondering: how can LOFAR see through the ionosphere? Is it because of the large fields of antennas? Or is it something else?
Thanks for any answers that can clear this up.
Answer: LOFAR does not go 'through the ionosphere' as it is ground-based. Rather, it is able to receive signals from outside the ionosphere due to the very low frequencies involved. These frequencies (naturally) have very long wavelengths, which means that LOFAR must be very large to obtain a decent level of resolution. The number of antennae will affect sensitivity and reliability.
"domain": "astronomy.stackexchange",
"id": 506,
"tags": "radio-astronomy"
} |
What trajectory has the Cosmic Microwave Background radiation taken to get to earth? | Question: I have a few related questions:
Where is the CMB coming (emitted/reflected/remitted) from?
When CMB hits the earth, is that the first thing those photons hit since they were emitted 400 thousand years after the big bang?
Why isn't the CMB at the edges of the universe? Why is it flying around in the middle? Has the trajectory of the photons been bent by masses in the universe until it bends back inward? Or is the theory that the universe wraps around on itself?
Answer: At a basic level:
The universe, in the beginning was very hot. So hot in fact that there were no atoms, only electrons and protons and neutrons and photons flying around. The photons were scatting off of the electrons and protons, as they interacted strongly because the electrons and protons are charged. The universe was much like the plasma you find in plasma balls, but turned up to 11. It was opaque. You could not see through it.
As the universe expanded, it cooled and at around 380,000 years after the big bang, it was cold enough that stable atoms could form. At this point, all of the photons that were flying around suddenly stopped reacting with all of the free electrons and protons, since they started to form atoms that had no net charge, behaving much like a very dilute gas, like the air. The universe became transparent. Just as we can see through air, at this point the photons could travel unimpeded. This is referred to as the "surface of last scattering", but you shouldn't think of it as a surface, you should think of it as a moment in time where the universe went from being opaque to light to being mostly transparent to light.
Having suddenly nothing to interact with, those photons just started travelling in straight lines. Some of those photons were just the right distance from us, and pointed in just the right direction, that they are hitting us just now. In fact, they are hitting us continuously, since the entire universe was filled with these photons just before it went "transparent".
So, the CMB isn't at the edges, it's everywhere: it's all of the photons that are still to this day flying off in every which direction. Occasionally those photons hit something, but since the universe is mostly empty space, the fraction that hit something is completely negligible. It is safe to assume they have not interacted with anything since the "surface of last scattering" nearly 14 billion years ago.
Nowadays, those photons are long in wavelength, nearly 1 mm, because as the universe has continued to expand, they continue to cool and stretch in wavelength. | {
"domain": "physics.stackexchange",
"id": 24679,
"tags": "cosmology, photons, big-bang, cosmic-microwave-background"
} |
Who was the first to define the Flow Shop / Job Shop problem? | Question: I spent some time searching the internet for who was the first to formally define the Flow Shop and Job Shop problems, but without success. I'm especially interested in an article/book so I can cite it in my master's thesis.
I'm guessing that there may be no single author of these problems, but that information would also help me a lot.
Answer: Coffman and Denning's famous Operating Systems Theory, 1973, section on Job Shop and Flow Shop problems (pp. 123-128), cites Conway, Maxwell, and Miller, 1967 as "a comprehensive treatment" of Job Shop scheduling. In the next paragraph, they introduce Flow Shop scheduling as a specialization of Job Shop, and cite
Johnson, S.M.; "Optimal two- and three-stage production schedules with set-up times included." Naval Research Logistics Quarterly, 1(1):61-68, 1954.
Naval Research Logistics Quarterly seems to be a Wiley pay-walled journal, but my library has a subscription to it. Johnson's paper begins:
Let us consider a typical multistage problem formulated in the following terms by R. Bellman:
"There are n items which must go through one production stage or machine and then a second one. There is only one machine for each stage. At most one item can be on a machine at a given time.
"Consider $2n$ constants $A_i, B_i, i = 1, 2, \cdots, n$. These are positive but otherwise arbitrary. Let $A_i$ be the setup time plus work time of the $i$th item on the first machine, and $B_i$ the corresponding time on the second machine. We seek the optimal schedule of items in order to minimize the total elapsed time."
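For context, the rule Johnson derived for this two-machine problem is simple enough to sketch (my own illustrative implementation, not code from the paper; the example data are made up): schedule jobs with $A_i \le B_i$ first in increasing $A_i$, then the remaining jobs in decreasing $B_i$.

```python
import itertools  # only needed for the brute-force check below

def johnson_two_machine(jobs):
    """Johnson's (1954) rule for the two-machine flow shop.

    jobs: list of (A_i, B_i) processing times.  Returns a processing
    order (list of job indices) minimizing the total elapsed time.
    """
    first = sorted((i for i, (a, b) in enumerate(jobs) if a <= b),
                   key=lambda i: jobs[i][0])               # ascending A_i
    last = sorted((i for i, (a, b) in enumerate(jobs) if a > b),
                  key=lambda i: jobs[i][1], reverse=True)  # descending B_i
    return first + last

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0
    for i in order:
        a, b = jobs[i]
        t1 += a                # machine 1 finishes job i here
        t2 = max(t2, t1) + b   # machine 2 starts once both are free
    return t2

jobs = [(3, 2), (1, 4), (5, 4), (2, 3)]    # made-up example data
order = johnson_two_machine(jobs)          # -> [1, 3, 2, 0]
best = min(makespan(jobs, p)
           for p in itertools.permutations(range(len(jobs))))
assert makespan(jobs, order) == best       # the rule is provably optimal
```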
He doesn't give the full reference to Bellman. Johnson's paper extends the definition in the obvious way to 3 machines. | {
"domain": "cs.stackexchange",
"id": 6726,
"tags": "scheduling"
} |
Stim's dem.diagram (detector error model diagram) not working for me? | Question: I've copied these lines from Stim's Getting Started notebook:
circuit = stim.Circuit.generated(
"repetition_code:memory",
rounds=30,
distance=9,
before_round_data_depolarization=0.03,
before_measure_flip_probability=0.01)
dem = circuit.detector_error_model()
dem.diagram("matchgraph-svg")
but when I run it the final line gives an error:
TypeError Traceback (most recent call last)
Cell In[99], line 8
1 circuit = stim.Circuit.generated(
2 "repetition_code:memory",
3 rounds=30,
4 distance=9,
5 before_round_data_depolarization=0.03,
6 before_measure_flip_probability=0.01)
7 dem = circuit.detector_error_model()
----> 8 dem.diagram("matchgraph-svg")
TypeError: diagram(): incompatible function arguments. The following argument types are supported:
1. (self: stim._stim_avx2.DetectorErrorModel, *, type: str) -> stim._stim_avx2._DiagramHelper
Invoked with: stim.DetectorErrorModel('''
error(0.02) D0
error(0.02) D0 D1
error(0.01) D0 D8
error(0.02) D1 D2
error(0.01) D1 D9
error(0.02) D2 D3
error(0.01) D2 D10
error(0.02) D3 D4
error(0.01) D3 D11
...
detector(9, 1) D20
detector(11, 1) D21
detector(13, 1) D22
detector(15, 1) D23
'''), 'matchgraph-svg'
Answer: Run print(stim.__version__) and confirm you're on v1.11 or later. Probably you're on v1.10; the Getting Started notebook was bumped to v1.11 last week, and one of the changes was the removal of type= as a required keyword argument for diagram methods.
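Note that the traceback itself shows the supported signature, (self, *, type: str), so on the installed version the keyword form dem.diagram(type="matchgraph-svg") should work without upgrading. If you want to automate the version check, a small stim-free helper (my own sketch) could look like:

```python
def version_at_least(version, required):
    """True if dotted version string `version` is >= `required`.

    E.g. version_at_least("1.10.0", "1.11") is False, in which case
    you would either upgrade stim or keep the type= keyword form.
    """
    def parse(s):
        return tuple(int(part) for part in s.split("."))
    return parse(version) >= parse(required)

# Intended usage (requires stim to be installed):
#   import stim
#   if not version_at_least(stim.__version__, "1.11"):
#       print("upgrade stim, or call dem.diagram(type=...)")

assert version_at_least("1.11", "1.11")
assert not version_at_least("1.10.0", "1.11")
```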
The pip install line in the notebook does include the minimum version. | {
"domain": "quantumcomputing.stackexchange",
"id": 4572,
"tags": "stim"
} |
Are physical probabilities also quantized? | Question: In physics there are quanta, and energy comes in multiples of these units. Is it then reasonable that probability is also quantized, since energy is?
Answer: Probability is a statistical measure used widely in predicting both classical and quantum mechanical behavior. It is not a variable entering the differential equations either classically or quantum mechanically.
In quantum mechanics variables turn into operators which then enter differential equations and show, depending on the boundary conditions on the solutions, a quantized behavior of the variable that the operator describes, for example "energy." Now the square of the wavefunction, which describes the state of the system as a function of energy, gives the probability of finding the system with that energy. If the boundary conditions are such that the energy values are quantized it means that the probability will be high (near 1) for the quantized states for specific energy and low to zero at the rest.
Probability always goes from 0 to 1. If one scans probability versus energy against an energy-quantized spectrum, the result will be a sawtooth plot with maxima close to 1 and minima close to 0. Measurements confirm this:
Look at this spectrum plot of intensity versus energy.
If each line is turned into a probability plot by normalizing the number of photons making it up to one, there will be a width, but the sawtooth pattern is evident.
So no, probability is not a variable and cannot be quantized. | {
"domain": "physics.stackexchange",
"id": 88836,
"tags": "energy, quantum-information, probability, discrete"
} |
Tag wikis under a certain length | Question: I wrote a SEDE query to list all tag wikis with a body or an excerpt under a given number of characters. I've used this to find empty wikis or wikis with very little information. The tags can be sorted by post count to make choosing which to improve easier.
This is one of the first times I've written anything in SQL, I took ideas from other queries I found and threw them together. The conditional WHERE clause is my biggest concern. It looks really messy, but I'm not sure what the best way to achieve the same effect would be.
I'm aware that it doesn't actually get wikis with a length under the given number, but at or under it. I wanted 0 to mean an empty wiki as that feels more intuitive than 1. I'm not sure about this design decision; what do you think?
-- MaxBodyLength: Max body length "Set to -1 to disable"
DECLARE @max_body_length INT = ##MaxBodyLength:INT?100##;
-- MaxExcerptLength: Max excerpt length "Set to -1 to disable"
DECLARE @max_excerpt_length INT = ##MaxExcerptLength:INT?-1##;
-- Max SE post length (see http://meta.stackexchange.com/a/176447/299387)
DECLARE @LEN_MAX INT = 30000;
BEGIN
SELECT
t.TagName,
t.Count AS [Post count],
LEN(pExcerpt.body) AS [Excerpt length],
LEN(pWiki.Body) AS [Wiki length]
FROM Tags t
LEFT JOIN Posts pExcerpt ON t.ExcerptPostId = pExcerpt.Id
LEFT JOIN Posts pWiki ON t.WikiPostId = pWiki.Id
WHERE
LEN(pWiki.body) <= CASE
WHEN @max_body_length >= 0 THEN
@max_body_length
ELSE
@LEN_MAX
END
AND LEN(pExcerpt.body) <= CASE
WHEN @max_excerpt_length >= 0 THEN
@max_excerpt_length
ELSE
@LEN_MAX
END
END
Answer: Overall, even for a non-first-timer with SQL, this is a good query. The consistency and styling makes it very clear you're not a programming newbie.
To answer your concern about "under" versus "under or on", and 0 meaning an empty wiki - I totally agree. For me, 0 meaning empty is the only logical interpretation - empty strings have length 0 in every language I've come across. The way you've named your parameters (MaxSomething) implies clearly that the user is inputting the highest number that will be included in the results - and that's what they get. This is all perfectly clear and sensible to me, no need to worry.
Minor niggles
You don't need the BEGIN and END wrappers here. They don't hurt, but they're only required for control-of-flow blocks (usually, IF and WHILE constructs).
You don't need @LEN_MAX in the current version of the query.
For a "report" type query like this, I'd always prefer to have default ordering set by the query (an ORDER BY clause) so the results are guaranteed consistent.
The WHERE clause
Looking at this half a year on from your original post, I can see you've refined the WHERE clause yourself. At the time of my writing the WHERE clause on the linked SEDE query is:
WHERE
(@max_body_length >= 0 AND LEN(pWiki.body) <= @max_body_length)
OR (@max_excerpt_length >= 0 AND LEN(pExcerpt.body) <= @max_excerpt_length)
This is definitely a much clearer way to express your desired logic than the CASE statement in the original post. For me, the reason it's clearer is that it maps quite easily to reading in plain English:
Show me rows where the max body length is 0 or more and the length of the Wiki Body is less than or equal to that, or where...
I reckon we could go further though :)
Different operators can improve readability
This is a matter of opinion only, but if your aim is to make query logic easily readable in plain English, sometimes inverting your operators can help that.
@max_body_length >= 0
"max body length is greater than or equal to 0, I mean 0 or more"
This is what I think when I read this code; I re-word it post-hoc to "0 or more" after understanding the meaning.
@max_body_length !< 0
"max_body_length is not less than 0"
For me this reads more clearly first-pass, and doesn't need a post-hoc re-scan. It'll depend a lot on the reader and how familiar they are with ! as a negation operator (some people prefer <>).
An even shorter alternative
Shorter isn't always better: for complex logic, spelling it out step by step (more verbosely) as you have can be much clearer, because of the "mapping to plain English" property.
However, we could go shorter here using nullif() to equate your -1 default values with SQL NULL i.e. "unknown"/"not applicable":
WHERE
LEN(pWiki.body) <= nullif(@max_body_length,-1)
or LEN(pExcerpt.body) <= nullif(@max_excerpt_length,-1)
This takes advantage of the all-consuming nature of NULL: if I ask "is the length of this (known) thing less than an unknown number", the only possible answer can be "I don't know" - another NULL.
Then, because SQL NULL doesn't evaluate to true, the overall WHERE clause as a whole works out just as you intend.
However, the catch here (you probably spotted it) is that this only works for -1 as a default, not any negative input. Since negative inputs apart from -1 are meaningless in this query, you could validate this at the start of the query:
IF (@max_body_length<-1) or (@max_excerpt_length<-1)
RAISERROR('Input parameters should be -1, 0 or positive integers',16,1)
Frankly this is probably overkill for a SEDE query but demonstrates the right approach for more generic application.
Ideally, one would actually make the default input for each parameter be NULL (rather than -1) and only permit 0 or positive input values: then you wouldn't need the nullif functions at all. However I'm not aware of a way to do that in SEDE. | {
"domain": "codereview.stackexchange",
"id": 18674,
"tags": "beginner, sql, sql-server, t-sql, stackexchange"
} |
Carbon-13 NMR for chloroform | Question: I am slightly confused by what the spectrum would show for carbon-13 NMR of $\ce{CHCl3}$.
My initial guess would be that the peak would be split by coupling to both the proton and the 3 chlorines, as both nuclei have a net spin.
If the peak were split by the chlorine only, then as there are three chlorine atoms we would get a quartet peak. However this cannot be correct because the spectrum for CDCl3 shown here has only three peaks:
I also don't understand why the areas of the three peaks are the same. Should they not be in some other ratio (for the four peaks I expected, the ratio would be $1:3:3:1$)?
Then I would expect that the presence of the proton would split every one of the four peaks from $\ce{C-Cl}$ coupling further into a doublet, giving a quartet of doublets.
I didn't find any spectra showing this, only showing very tiny peaks for $\ce{CHCl3}$ compared with $\ce{CDCl3}$ which I don't understand:
To make it more confusing, my lecture notes say that for chloroform, $\ce{CHCl3}$,
each $\ce{^13C}$ is attached to a spin $1/2$ proton so unless we applied broadband proton decoupling, coupling to the proton would mean that the $\ce{^13C}$ signal would appear as a doublet.
And it makes no mention of the effect of the chlorine atoms on the spectrum.
I would be very grateful if someone could explain what the spectrum for carbon-13 NMR of chloroform, $\ce{CHCl3}$, would actually look like!
Answer: You assumed that coupling to the three chlorines would yield some type of quartet. This is correct in principle. However, chlorine is one of the many quadrupolar nuclei that are basically unobservable by NMR due to their rapid relaxation. I hope another answer comes along explaining that, as I am not good at it.
I can, however, help you interpret the $1:1:1$ triplet of your spectrum. It is not, as you have assumed, the signal for $\ce{CHCl3}$ but that for $\ce{CDCl3}$. The difference between the two in NMR terms is that chloroform contains the spin ½ nucleus protium while deuterochloroform contains the spin 1 nucleus deuterium — and the latter does not relax rapidly as the chlorine isotopes do. Hence, deuterium is well-observable by NMR.
Now your first thought should be something like the following:
But I am only coupling to one nucleus. That should give me a $1:1$ doublet and not a $1:1:1$ triplet, shouldn’t it?
That simplification is only correct for spin ½ nuclei. The actual formula for calculating the multiplicity of a peak is $2\,n\,I + 1$, where $n$ is the number of chemically equivalent nuclei your observed nucleus is coupling to and $I$ is the spin of the coupling nuclei. Since deuterium is spin 1 ($I = 1$), the formula gives us:
$$2\,n\,I + 1 = 2 \times 1 \times 1 + 1 = 3$$
Or an expected triplet. Since this triplet does not derive from two equivalent couplings to spin ½ nuclei, it is not a $1:2:1$ triplet but a $1:1:1$ one.
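The $2nI+1$ rule is easy to tabulate (a quick illustrative script of my own; the chlorine line count assumes its $I=3/2$ nuclei were observable, i.e. it ignores the quadrupolar relaxation discussed above):

```python
from fractions import Fraction

def multiplicity(n, I):
    """Lines from coupling to n equivalent nuclei of spin I: 2nI + 1."""
    return int(2 * n * I + 1)

# One proton, I = 1/2: the doublet mentioned in the lecture notes.
lines_H = multiplicity(1, Fraction(1, 2))   # 2
# One deuteron, I = 1: the 1:1:1 triplet of CDCl3.
lines_D = multiplicity(1, Fraction(1))      # 3
# Three equivalent I = 3/2 chlorines, if they were observable:
lines_Cl = multiplicity(3, Fraction(3, 2))  # 10
```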
Your second spectrum shows a $\ce{^13C}$ NMR of deuterochloroform which still contains residues of chloroform — at least, that is what it is supposed to show. For some reason unknown to me, the protochloroform signal has not been decoupled, i.e. the spectrum is in fact $\ce{^13C}$ and not $\ce{^13C\{^1H\}}$. Thus, you are observing a strong coupling to the single hydrogen nucleus giving a $1:1$ doublet as expected if chlorine is ignored. | {
"domain": "chemistry.stackexchange",
"id": 7037,
"tags": "organic-chemistry, nmr-spectroscopy, halides, isotope, spin"
} |
Spider Identification: Is this an Arizona Recluse? | Question: I found this spider in my toilet bowl. It was after dark. We live in Prescott Valley, AZ at around 5100' elevation at the edge of a development fairly close to the mountains in grass land.
I'm concerned it's an Arizona recluse (or desert recluse). We saw a few of these around the house during the warmer months; this is the first I've seen in a while since it's gotten cold, but my wife just did some extra cleaning.
I don't see a violin shape, but this article points out that desert recluses don't have a noticeable violin. It also looks like it has 3 sets of eyes, but the 4th may just be hidden due to the angle, since it looks like the 3 I see in the pics aren't centered quite right. Anyways, if anyone has some insight I would be happy to know. The leg span was a bit bigger than a penny.
Answer: I don't think it's an Arizona recluse. Characteristic of all recluse spiders (including the five varieties found in Arizona):
Long thin legs
Oval shaped abdomen
6 eyes in dyads (pairs)
Uniformly colored abdomen with fine hairs
No spines on legs
Legs are uniformly colored
Light tan to dark brown in color
Distinct violin-shaped mark on the back points to the posterior of the spider (less obvious on Arizona recluses)
Body not more than 3/8" in length
Your spider has spiny legs. That, plus the fact that Arizona recluses are even less likely to live where humans do (as your article states), and are attracted to dry places, makes your spider an unlikely candidate, but I could be wrong.
Loxascelidae, Loxosceles reclusa | {
"domain": "biology.stackexchange",
"id": 7997,
"tags": "zoology, species-identification, arachnology"
} |
FFT equivalent for generalized unitary transforms | Question: The DFT has the FFT, Hadamard transform has the Fast Hadamard Transform and so do a number of other unitary transforms (operators). Is there or has there been an attempt at creating FFT style algorithms for generalized unitary transforms?
Answer: It's all about structure. One early paper on this is A Unified Treatment of Discrete Fast Unitary Transforms, 1977:
A set of recursive rules which generate unitary transforms with a
fast algorithm (FUT) are presented. For each rule, simple relations
give the number of elementary operations required by the fast
algorithm. The common Fourier, Walsh-Hadamard (W-H), Haar, and Slant
transforms are expressed with these rules. The framework developed
allows the introduction of generalized transforms which include all
common transforms in a large class of "identical computation
transforms". A systematic and unified view is provided for unitary
transforms which have appeared in the literature. This approach leads
to a number of new transforms of potential interest. Generalization to
complex and multidimensional unitary transforms is considered and some
structural relations between transforms are established.
Among the most common, the discrete sine (DST), cosine (DCT), Hartley, wavelet transforms and many others (Walsh, Hadamard, Paley or Waleymard, triangle, jacket, slant, Hermite) have fast counterparts.
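To make the "structure gives a fast algorithm" idea concrete, here is a minimal radix-2 Cooley-Tukey sketch in Python. This is an illustration, not the construction from the papers above: the length-$N$ DFT ($N$ a power of two) splits into two half-length DFTs plus $O(N)$ twiddle-factor operations, giving $O(N \log N)$ overall.

```python
import cmath

def fft(x):
    """Minimal radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])          # DFT of even-indexed samples
    odd = fft(x[1::2])           # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

The same split-recombine pattern, with a different (sign-only) "twiddle" structure, is what yields the fast Walsh-Hadamard and Haar transforms.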
One global initiative towards a systematic construction was termed Algebraic Signal Processing Theory. Let me mention two papers laying some foundation using basic algebraic structures:
Algebraic Signal Processing Theory: Foundation and 1-D Time, 2008
This paper introduces a general and axiomatic approach to linear
signal processing (SP) that we refer to as the algebraic signal
processing theory (ASP). Basic to ASP is the linear signal model
defined as a triple ($\mathcal{A}$, $\mathcal{M}$, $\Phi$) where
familiar concepts like the filter space and the signal space are cast
as an algebra $\mathcal{A}$ and a module $\mathcal{M}$, respectively.
The mapping $\Phi$ generalizes the concept of $z$-transform to
bijective linear mappings from a vector space of signal samples into
the module $\mathcal{M}$. Common concepts like filtering, spectrum, or
Fourier transform have their equivalent counterparts in ASP. Once
these concepts and their properties are defined and understood in the
context of ASP, they remain true and apply to specific instantiations
of the ASP signal model. For example, to develop signal processing
theories for infinite and finite discrete time signals, for infinite
or finite discrete space signals, or for multidimensional signals, we
need only to instantiate the ASP signal model to a signal model that
makes sense for that specific class of signals. Filtering, spectrum,
Fourier transform, and other notions follow then from the
corresponding ASP concepts. Similarly, common assumptions in SP
translate into requirements on the ASP signal model. For example,
shift-invariance is equivalent to $\mathcal{A}$ being commutative. For
finite (duration) signals shift invariance then restricts
$\mathcal{A}$ to polynomial algebras. We explain how to design signal
models from the specification of a special filter, the shift. The
paper illustrates the general ASP theory with the standard time shift,
presenting a unique signal model for infinite time and several signal
models for finite time. The latter models illustrate the role played
by boundary conditions and recover the discrete Fourier transform
(DFT) and its variants as associated Fourier transforms. Finally, ASP
provides a systematic methodology to derive fast algorithms for linear
transforms. This topic and the application of ASP to space dependent
signals and to multidimensional signals are pursued in companion
papers.
Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs, 2008
This paper presents a systematic methodology to derive and classify
fast algorithms for linear transforms. The approach is based on the
algebraic signal processing theory. This means that the algorithms are
not derived by manipulating the entries of transform matrices, but by
a stepwise decomposition of the associated signal models, or
polynomial algebras. This decomposition is based on two generic
methods or algebraic principles that generalize the well-known
Cooley-Tukey FFT and make the algorithms' derivations concise and
transparent. Application to the 16 discrete cosine and sine transforms
yields a large class of fast general radix algorithms, many of which
have not been found before.
Aside, when those transforms are considered in the filter bank framework, other fast versions exist, based on polyphase decomposition or lifting transforms.
Additional references: I do not know of a full book on Algebraic Signal Processing Theory. The above link has a handful of references. For book-style, you have PhD theses:
A. Sandryhaila, 2010, Algebraic Signal Processing: Modeling and Subband Analysis
M. Püschel, 2008, DFT and FFT: An Algebraic View, Computer Algebra Handbook, Foundations, Applications, Systems, Eds. J. Grabmeier, E. Kaltofen, V. Weispfenning
On filter banks:
L. Liu, On Filter Bank and Transform Design with the Lifting Scheme
K. Soman et al., 2020, Insight into Wavelets: from Theory to Practice | {
"domain": "dsp.stackexchange",
"id": 5259,
"tags": "fft, transform"
} |
How will the super massive black hole affect our galaxy? | Question: I've recently learned that the general consensus is that several (if not, most) galaxies have super massive black holes in their center, in particular the Milky Way. This, at least to me, makes perfect sense seeing as we are in a spiral galaxy which means we need something to "spiral" around (a large body or a bunch of mass).
But seeing as we're rotating around this massive black hole, won't we inevitably end up sucked in by it? Aren't we spinning towards the center of the galaxy, or are we staying steady where we are?
Answer: Look at the question a different way: will the Earth get "sucked into" the sun? Answer: no, it's in orbit.
Now, black holes are a little different because inside three Schwarzschild radii (the innermost stable circular orbit) there are no stable orbits, but at very large distances gravity is gravity and orbits are orbits. | {
"domain": "physics.stackexchange",
"id": 15211,
"tags": "gravity, black-holes, orbital-motion, galaxies"
} |
How to do GMapping and SLAM Navigation using RPLIDAR A2 and Kobuki? | Question:
I have a Kobuki and installed Turtlebot software on my Turtlebot laptop and set everything up for Turtlebot, and I just got an RPLIDAR A2 yesterday. I couldn't figure out how to get it to work with Turtlebot Gmapping and AMCL to navigate autonomously. On the remote computer the scan doesn't show up, but the Kobuki's odometry does. I don't think there were any errors. In the launch file I deleted the line that starts the 3D camera, and I start the RPLIDAR node from another command line. But again, nothing! So please help me if you can. I am running Ubuntu 16.04 and ROS Kinetic on the Turtlebot laptop and Ubuntu 14.04 with ROS Indigo on the remote computer. Thanks!
Edit: I am now using hector_slam to do this. I have a pretty noisy map but I guess it could work, but now the question is how does the Turtlebot use the RPLIDAR A2 to navigate in the map generated by hector_slam? What parameters should I use in RVIZ? Is there a hector navigation file?
Edit 2: I could still use Gmapping, and obviously that would be SO much easier since I know how to do gmapping, but how do I implement it with an RPLIDAR A2? I will edit this again when I can get the error codes. Also, the errors happen when I run the rplidar node, then the turtlebot minimal node, and then the gmapping node; I think it's because of a transform that I didn't put in the launch file.
Originally posted by RedstoneTaken on ROS Answers with karma: 3 on 2017-06-02
Post score: 0
Original comments
Comment by ufr3c_tjc on 2017-06-05:
Can you edit your question to include relevant code, as well as ROS version and OS info? When you say it "doesn't show up", what does this mean? Does the node not launch, or is there no messages being published?
Comment by aarontan on 2018-06-18:
have you figured this out?
Comment by lukewd on 2019-03-25:
Did y'all try the hokuyo tutorials, I think adding any lidar is similar process. I had luck doing an analogous ydlidar addition to my turtlebot.
Answer:
I was able to update my Turtlebot 2 (with Create 2 Base) to a 360 degree YDLidar to use for navigation.
I made a copy of the minimal.launch file in turtlebot launch folder, and renamed it. I added this to it:
<node name="ydlidar_node" pkg="ydlidar" type="ydlidar_node" output="screen">
<param name="port" type="string" value="/dev/ydlidar"/>
<param name="baudrate" type="int" value="115200"/>
<param name="frame_id" type="string" value="laser_frame"/>
<param name="angle_fixed" type="bool" value="true"/>
<param name="low_exposure" type="bool" value="false"/>
<param name="heartbeat" type="bool" value="false"/>
<param name="resolution_fixed" type="bool" value="true"/>
<param name="angle_min" type="double" value="-180" />
<param name="angle_max" type="double" value="180" />
<param name="range_min" type="double" value="0.08" />
<param name="range_max" type="double" value="16.0" />
<param name="ignore_array" type="string" value="" />
<param name="samp_rate" type="int" value="9"/>
<param name="frequency" type="double" value="7"/>
</node>
<node pkg="tf" type="static_transform_publisher" name="base_link_to_laser4"
args="0.0 0.0 0.2 0.0 0.0 0.0 /base_footprint /laser_frame 40" />
I also added the ros-compatible ydlidar library to the src folder.
After a few tries I got it to work and could then map a room very quickly with 360 degree view instead of the limited range of the kinect.
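For the RPLIDAR A2 from the original question, an analogous fragment might look like the following. This is a sketch, not a tested configuration: the package and node names (`rplidar_ros`/`rplidarNode`) and parameter names come from the standard RPLIDAR ROS driver, while the serial port, frame names, and transform offsets are assumptions you will need to adapt to your robot:

```xml
<node name="rplidarNode" pkg="rplidar_ros" type="rplidarNode" output="screen">
  <param name="serial_port" type="string" value="/dev/ttyUSB0"/>
  <param name="serial_baudrate" type="int" value="115200"/>
  <param name="frame_id" type="string" value="laser_frame"/>
  <param name="angle_compensate" type="bool" value="true"/>
</node>

<node pkg="tf" type="static_transform_publisher" name="base_link_to_laser"
      args="0.0 0.0 0.2 0.0 0.0 0.0 /base_footprint /laser_frame 40"/>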
If I recall, I used links like these to help me:
https://answers.ros.org/question/173122/how-to-crate-a-map-with-gmapping-and-hokuyo-laser/
http://wiki.ros.org/turtlebot/Tutorials/indigo/Adding%20a%20lidar%20to%20the%20turtlebot%20using%20hector_models%20%28Hokuyo%20UTM-30LX%29
Originally posted by lukewd with karma: 116 on 2019-03-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 28044,
"tags": "slam, navigation, rplidar, turtlebot, gmapping"
} |
Immersing copper and zinc in hydrochloric acid | Question: I am wondering what would happen if a metal strip of copper and a metal strip of zinc were immersed in a solution of $\ce{HCl}$ under standard electrochemical conditions. I have calculated the standard change in Gibbs free energy $\Delta_\mathrm rG^0$ using the relation $\Delta_\mathrm rG^0=-zF\Delta E^0$, where $z$ is the number of electrons exchanged, $F$ is Faraday's constant, and $\Delta E^0$ is the standard change in electrode potential. I have found the following values:
$$\Delta_\mathrm rG^0_{\ce{Zn}}=-147\ \mathrm{kJ/mol}, \Delta_\mathrm rG^0_{\ce{Cu}}=-66\ \mathrm{kJ/mol}$$
It appears that the reaction between $\ce{Zn}$ and $\ce{H+}$ is more spontaneous. Does this mean that the reaction with $\ce{Cu}$ will not take place at all?
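For reference, the arithmetic behind those numbers can be reproduced from the standard reduction potentials ($E^0_{\ce{Zn^2+/Zn}} = -0.76$ V, $E^0_{\ce{Cu^2+/Cu}} = +0.34$ V). Note that the copper couple lies above hydrogen, so the cell potential for copper oxidation by $\ce{H+}$ is negative and $\Delta_\mathrm rG^0$ comes out positive (about $+66$ kJ/mol, i.e. non-spontaneous); only the magnitude matches the value quoted above.

```python
F = 96485.0  # Faraday constant, C/mol

def gibbs_kj(z, e_cell):
    """Standard Gibbs free energy change in kJ/mol from cell potential in volts."""
    return -z * F * e_cell / 1000.0

# M + 2 H+ -> M2+ + H2 : E_cell = E0(H+/H2) - E0(M2+/M) = -E0(M2+/M)
dG_zn = gibbs_kj(2, -(-0.76))  # zinc:   E_cell = +0.76 V
dG_cu = gibbs_kj(2, -(+0.34))  # copper: E_cell = -0.34 V

print(round(dG_zn), round(dG_cu))  # -> -147 66
```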
Answer: Thermodynamic spontaneity and reaction kinetics are two different things. Just because something has more driving force doesn't mean it is going to go faster. In fact, there are cases where a higher driving force actually causes the reaction to go slower. For more, I would recommend reading the Nobel lecture of Rudolph A. Marcus, who developed theories explaining what is now known as the "Marcus inverted region".
Kinetics of reactions depend on more complex issues that are harder to control and predict. | {
"domain": "chemistry.stackexchange",
"id": 5784,
"tags": "thermodynamics, electrochemistry"
} |
Installing the deprecated kinect stack | Question:
I'm trying to install the old kinect stack that was based on libfreenect because I need it for a SLAM solution that I want to install.
Apparently, the stack source that I downloaded does not come with the freenect source, it should download it from github during make. The problem is that the old github repo for libfreenect is dead, so make breaks when it tries to download it. I downloaded libfreenect elsewhere, but I'm not sure if that will help. Obviously, the simplest solution of copy-pasting the libfreenect into the kinect stack dir doesn't work.
So, how can I successfully install the deprecated kinect stack?
Originally posted by kameleon on ROS Answers with karma: 68 on 2012-03-19
Post score: 1
Original comments
Comment by Mac on 2012-03-19:
I think you're solving the wrong problem. What exact functionality does the old Kinect stack provide that openni_camera does not?
Comment by kameleon on 2012-03-21:
I usually work with openni, but this specific SLAM application works with the old stack. I changed the makefile to look for the actual libfreenect repo, but now the connection still times out
Comment by Mac on 2012-03-21:
If it only depends on the stack for (say) point cloud topics, topic remapping will let you use the new stack. If it needs a specific library to link against, that's trickier.
Answer:
You can download the libfreenect code and it still works fine. I seem to have all kinds of trouble with openni on OSX still.
Originally posted by Kevin with karma: 2962 on 2012-04-01
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8631,
"tags": "ros, kinect, installation, libfreenect"
} |
Kinect is not available in my rostopic list! | Question:
Hi all,
I installed ROS Hydro along with the Gazebo package (1.9) for ROS on Ubuntu 12.04. I run Gazebo with this command:
rosrun gazebo_ros gazebo
and I add sensors like a Hokuyo and a Kinect, but I can't see their topics in rostopic list. I know that I should add a plugin for them, but I don't know the format or the specific plugin for them, and I am also not sure where the plugin should go. What I mean is: should I add the plugin tag in the SDF file of these sensors, or somewhere else?
Please, if it is possible, write the complete plugin tag for these sensors, especially the Kinect. I have tried many times but it doesn't work!
Originally posted by Vahid on Gazebo Answers with karma: 91 on 2013-10-12
Post score: 0
Answer:
Hi, before you can see/access any topics for sensors in gazebo, you need to load the corresponding plugin.
I actually do not have an SDF-version, but my URDF file for the kinect looks like this. (I assume that you already have a link and a joint for the sensor, e.g. kinect)
<!-- SENSOR -->
<gazebo reference="openni_camera_link">
<sensor type="depth" name="openni_camera">
<always_on>1</always_on>
<visualize>true</visualize>
<camera>
<horizontal_fov>1.047</horizontal_fov>
<image>
<width>640</width>
<height>480</height>
<format>R8G8B8</format>
</image>
<depth_camera>
</depth_camera>
<clip>
<near>0.1</near>
<far>100</far>
</clip>
</camera>
<plugin name="camera_controller" filename="libgazebo_ros_openni_kinect.so">
<alwaysOn>true</alwaysOn>
<updateRate>10.0</updateRate>
<cameraName>camera</cameraName>
<frameName>openni_camera_link</frameName>
<imageTopicName>rgb/image_raw</imageTopicName>
<depthImageTopicName>depth/image_raw</depthImageTopicName>
<pointCloudTopicName>depth/points</pointCloudTopicName>
<cameraInfoTopicName>rgb/camera_info</cameraInfoTopicName>
<depthImageCameraInfoTopicName>depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudCutoff>0.4</pointCloudCutoff>
<hackBaseline>0.07</hackBaseline>
<distortionK1>0.0</distortionK1>
<distortionK2>0.0</distortionK2>
<distortionK3>0.0</distortionK3>
<distortionT1>0.0</distortionT1>
<distortionT2>0.0</distortionT2>
<CxPrime>0.0</CxPrime>
<Cx>0.0</Cx>
<Cy>0.0</Cy>
<focalLength>0.0</focalLength>
</plugin>
</sensor>
</gazebo>
Originally posted by psei with karma: 166 on 2013-10-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Vahid on 2013-10-14:
thank you so much my friend, it works ;) I have another question: as I understand it, the Kinect returns depth, IR data, and an image. Now I have these topics, but there are just IR and depth topics. How can I get the Kinect's image? Maybe I am wrong!
Comment by psei on 2013-10-15:
With the configuration above you should see a "/camera/rgb/image_raw" topic. This is the RGB image. As far as I can see, the simulated kinect does not have an IR-Image topic (only depth).
Comment by Vahid on 2013-10-22:
thank you man, it works. | {
"domain": "robotics.stackexchange",
"id": 3491,
"tags": "ros, gazebo-plugin, sdformat"
} |
Why does the Direct form I become our first choice for fixed-point implementation? | Question: I heard that the Direct form II transposed is better for floating-point and the Direct form I is better for fixed-point. Is it true?
I understand that DF2 and DF1T should never be used in a fixed-point implementation because the poles come first, resulting in overflow in these two cases. But why is DF1 better than DF2T? Is it related to the following property, that DF1 avoids internal overflow?
In this book the author says that
It is a very useful property of the direct-form I implementation that it cannot overflow internally in two's complement fixed-point arithmetic: As long as the output signal is in range, the filter will be free of numerical overflow. Most IIR filter implementations do not have this property.
I understand the two's complement wrap-around example the author gives. It seems that this property has two prerequisites:
There is fundamentally only one summation point in the filter.
The final result $y(n)$ is in range.
The second one is easy to understand but the first one is not. What does "only one summation point" mean and why do the other forms of IIR filters not satisfy this condition?
Thank you in advance.
Answer:
I heard that the Direct form II transposed is better for floating-point and the Direct form I is better for fixed-point. Is it true?
"Better" is a relative term, but generally yes. Direct Form II filters require less memory which is often an important consideration in floating-point calculations. Direct Form I filters require more memory, but the accumulator is never multiplied which is important in fixed-point calculations for the following reason.
Before we get into it though, there are three concepts of fractional fixed point computations that are important to understand:
The absolute value of any two's complement fractional fixed point number is less than 1.
Because of the first concept, it is impossible to obtain a number with an absolute value greater than 1 by multiplying two fixed point numbers, therefore it is impossible to overflow through multiplication. It is however possible to overflow through addition (e.g. $0.75 + 0.75 = 1.5$, but $0110 + 0110 = 1100 = -0.5$)
If an overflow occurs during a series of additions, the final result will still be correct if an underflow also occurs before the end result. You can even double overflow through addition as long as you double underflow before the final addition. This property does not hold if there is a multiplication between the overflow and the underflow.
Once numbers are added together in Direct Form I filters, there are no multiplications between the additions or before the final output which means they cannot have an overflow unless the final output itself is out of range. Let's look at some example code for a Direct Form I filter to better understand this:
// This code calculates one output of a second-order IIR filter.
// All arithmetic is 6-bit two's complement: results wrap into [-32, 31].
int a[2] = {3, 4};
int b[3] = {5, 6, 7};
int x[3] = {1, 2, 3}; // holds current sample and previous samples
int y[3] = {0, 4, 5}; // y[0] is the current output; y[1] and y[2] are previous outputs
// Unroll the loop to talk about it
y[0] = b[0] * x[0]; // = 5 (no overflow)
y[0] += b[1] * x[1]; // = 5 + 12 = 17 (no overflow)
y[0] += b[2] * x[2]; // = 17 + 21 = -26 (overflow :( sad day)
y[0] -= a[0] * y[1]; // = -26 - 12 = 26 (underflow...)
y[0] -= a[1] * y[2]; // = 26 - 20 = 6 (which is the correct answer, how cool!)
In this example, $y[0]$ is used as an accumulator, and the values in this accumulator are only added or subtracted, so there are no multiplications between additions and subtractions. As long as the final output is within range, you will not have any errors due to overflow within the filter. The same could not be said for Direct Form II or Transposed Direct Form II. This is why Direct Form I is usually the way to go as long as you use the proper scale factor.
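The same arithmetic is easy to simulate. The sketch below (Python, with the 6-bit two's complement wraparound made explicit as a helper, and small integers standing in for fractional coefficients as in the example above) reproduces the trace: an intermediate overflow and a later underflow cancel, and the final accumulator value is exact.

```python
def wrap6(v):
    """Wrap an integer into the 6-bit two's complement range [-32, 31]."""
    return ((v + 32) % 64) - 32

b = [5, 6, 7]       # feedforward coefficients
x = [1, 2, 3]       # current and previous inputs
a = [3, 4]          # feedback coefficients
y_prev = [4, 5]     # previous outputs y[n-1], y[n-2]

acc = 0
for coeff, sample in zip(b, x):
    acc = wrap6(acc + coeff * sample)   # overflows to -26 on the last add
for coeff, out in zip(a, y_prev):
    acc = wrap6(acc - coeff * out)      # underflow cancels the earlier overflow

exact = sum(c * s for c, s in zip(b, x)) - sum(c * o for c, o in zip(a, y_prev))
print(acc, exact)  # -> 6 6
```

This works only because the accumulator is never multiplied between additions; a saturating accumulator (or any intermediate multiply) would destroy the cancellation.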
One important note is that some fixed-point processors saturate overflows by default, which makes this property of the Direct Form I filter unhelpful. | {
"domain": "dsp.stackexchange",
"id": 10463,
"tags": "infinite-impulse-response, fixed-point"
} |
A simple way to store the factors selected by (BE) Stepwise Regression n run on N datasets via lapply, a For Loop then an lapply, or a function? | Question: I am currently doing research with a coauthor and collaborator comparing a new optimal model selection procedure he has proposed via Monte Carlo Simulation of the new procedure vs 2 benchmarks, LASSO & Backward Elimination Stepwise. In order to compare the % of factors correctly selected and the % of factors selected which are spurious by his new procedure vs LASSO vs (BE) Stepwise head-to-head, all we will be comparing are the regressors/"factors" selected by each of the 3 in terms the aforementioned criteria.
This is made not only possible, but extremely simple because when my collaborator created the randomly generated synthetic sample observations on which to run his procedure vs LASSO & BE, he did so in such a way that the correct number of factors in the true underlying population model is known for each of the 47,000 individual (500 by 31) datasets stored in their own csv files within the same file folder.
The main issue here is that I am still a novice when it comes to writing and running code in R unfortunately. So, I have already written the following code, all of which works/runs (besides the last 3 lines which is why I am asking this question here):
directory_path <- "~/DAEN_698/sample_obs"
file_list <- list.files(path = directory_path, full.names = TRUE, recursive = TRUE)
head(file_list, n = 2)
> head(file_list, n = 2)
[1] "C:/Users/Spencer/Documents/DAEN_698/sample_obs2/0-5-1-1.csv"
[2] "C:/Users/Spencer/Documents/DAEN_698/sample_obs2/0-5-1-2.csv"
# Create another list with the just the "n-n-n-n" part of the names of of each dataset
DS_name_list = stri_sub(file_list, 49, 55)
head(DS_name_list, n = 3)
> head(DS_name_list, n = 3)
[1] "0-5-1-1" "0-5-1-2" "0-5-1-3"
# This command reads all the data in each of the N csv files via their names
# stored in the 'file_list' list of characters.
csvs <- lapply(file_list, read.csv)
### Run a Backward Elimination Stepwise Regression on each of the N csvs.
# Assign the full model (meaning the one with all 30 candidate regressors
# included as the initial model in step 1).
# This is crucial because if the initial model has less than the number of
# total candidate factors for Stepwise to select from in the datasets,
# then it could miss 1 or more of the true factors.
full_model <- lapply(csvs, function(i) {
lm(formula = Y ~ ., data = i) })
# my failed attempt at figuring it out myself
set.seed(50) # for reproducibility
BE_fits3 <- lapply(full_model, function(i) {step(object = i[["coefficients"]],
direction = 'backward', scope = formula(full_model), trace = 0)})
When I hit run on the above 2 lines of code after setting the seed, I get
the following error message in the Console:
Error in terms(object) : object 'i' not found
Post Script/Mini Appendix: To briefly elaborate on why the choice of initial model is
absolutely essential when running a Backward Elimination version of Stepwise
Regression, consider the following example:
Suppose we start with an initial model of 25 candidate regressors, so X1:X25 instead
of X1:X30. In that case, Stepwise Regression run on dataset j could never
select any of the IVs/factors X26 through X30, even if one or more of those
really are included in the true underlying population model that characterizes
dataset j.
Answer: If you are willing to deal with the runtime issues of going with a for-loop, try this:
full_model <- vector("list", length = length(csvs))
BE_fits <- vector("list", length = length(csvs))
This is to initialize everything before running the following loop:
for(i in seq_along(csvs)) {
full_model[[i]] <- lm(formula = Y ~ ., data = csvs[[i]])
BE_fits[[i]] <- step(object = full_model[[i]],
scope = formula(full_model[[i]]),
direction = 'backward', trace = 0) }
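For completeness, the same fits can also be obtained with `lapply`, as the question's title asks. The attempt in the question fails because it passes `i[["coefficients"]]` (a plain numeric vector) to `step()` instead of the fitted `lm` object itself, and because its `scope` refers to the whole list rather than the individual model. A sketch of the corrected call:

```r
BE_fits <- lapply(full_model, function(m) {
  step(object = m,
       scope = formula(m),
       direction = "backward",
       trace = 0)
})
```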
It is important to remember to include the formula argument within the scope argument of the step function, it may throw you an error if you omit this detail. | {
"domain": "datascience.stackexchange",
"id": 11419,
"tags": "r, feature-selection, research, lasso, monte-carlo"
} |
Question with Einstein Convention | Question: Quick question, lets say I have the following term
\begin{equation}
a_{\mu}b^{\mu}c_{\nu}d^{\nu}\tag{1}
\end{equation}
I have repeated indices over $a$ and $b$ so their components are summed over, and I have repeated indices over $c$ and $d$ so their components are summed over. However, since the indices are repeated, I may relabel them:
\begin{equation}
a_{\mu}b^{\mu}c_{\mu}d^{\mu} \tag{2}
\end{equation}
This suggests that I can now sum the components of $b$ and $c$ and likewise with $a$ and $d$, etc. however I don't think this is valid and I just wanted to confirm. More specifically, as a general rule you should keep repeated indices unique to other repeated indices so confusions such as this don't occur. Can anyone confirm?
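A quick numerical check (plain Python, arbitrary sample components) shows why the relabeling is not valid: summing each pair separately gives $(a\cdot b)(c\cdot d)$, whereas collapsing everything onto one index gives $\sum_\mu a_\mu b^\mu c_\mu d^\mu$, a different number.

```python
a, b, c, d = [1, 2], [3, 4], [5, 6], [7, 8]

# Separate dummy indices: (a . b)(c . d)
separate = sum(ai * bi for ai, bi in zip(a, b)) * \
           sum(ci * di for ci, di in zip(c, d))

# All four factors forced onto one shared index
collapsed = sum(ai * bi * ci * di for ai, bi, ci, di in zip(a, b, c, d))

print(separate, collapsed)  # -> 913 489
```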
Answer: Yes, your conclusion is correct. The initial expression evaluates to just $(a \cdot b)(c \cdot d)$, while the rewriting of it to $a_{\mu}b^{\mu} c_{\mu} d^{\mu}$ is meaningless (in the Einstein summation convention a summed index appears exactly once as an upper index and once as a lower index, and each summed pair must carry a label distinct from every other pair). | {
"domain": "physics.stackexchange",
"id": 37874,
"tags": "conventions, notation"
} |
Tic-Tac-Toe code using c# | Question: I am a beginner coder and was wondering how to improve my c# console code. It makes a tic-tac-toe game.
using System;
namespace TicTacToe
{
class Program
{
static string[] options = { "1", "2", "3", "4", "5", "6", "7", "8", "9" }; //stores the variables to change
static bool Playing = true; //stops the game once someone wins
static int turn = 0;
static void Main(string[] args)
{
Intro();
Board();
while (Playing) //is to stop the game once someone wins
{
if (turn%2 == 0)
{
Console.WriteLine("Player 1's turn");
}
else
{
Console.WriteLine("Player 2's turn");
}
int playerInput1;
Console.WriteLine("type in your response, player");
bool torf = int.TryParse(Console.ReadLine(), out playerInput1);
playerInput1--;
// makes sure the input is put in a valid value
if (torf && playerInput1 < 9 && playerInput1 > -1) //makes sure again the input is valid and continues
{
if (options[playerInput1] == "x" || options[playerInput1] == "o")
{
Console.WriteLine("Stop stealing other people's place");
}
else
{
if (turn%2 == 0)
{
options[playerInput1] = "x";
turn++;
Board();
WinCondition();
Tie();
}
else
{
options[playerInput1] = "o";
turn++;
Board();
WinCondition();
Tie();
}
/*int playerInput2;
Console.WriteLine("o");
bool torf2 = int.TryParse(Console.ReadLine(), out playerInput2);
playerInput2--;
if (torf2 && playerInput2 < 9 && playerInput2 > -1)
{
if (options[playerInput2] == "x" || options[playerInput2] == "o")
{
Console.WriteLine("Stop stealing other people's space");
}
else
{
options[playerInput2] = "o";
Board();
WinCondition();
torf2 = false;
torf = false;
}
}*/
}
}
else
{
Console.WriteLine("Please input a valid expression");
}
}
}
public static void Board() // makes the board
{
Console.Clear();
Console.WriteLine(" | | ");
Console.WriteLine($" {options[0]} | {options[1]} | {options[2]}");
Console.WriteLine("_____|_____|_____ ");
Console.WriteLine(" | | ");
Console.WriteLine($" {options[3]} | {options[4]} | {options[5]}");
Console.WriteLine("_____|_____|_____ ");
Console.WriteLine(" | | ");
Console.WriteLine($" {options[6]} | {options[7]} | {options[8]}");
Console.WriteLine(" | | ");
}
public static void WinCondition()
{
if (options[0] == options[1] && options[1] == options[2])
{
Playing = false;
if(turn % 2 == 0)
{
Console.WriteLine("Congrats on winning player 1, better luck next time player 2");
}
else
{
Console.WriteLine("Congrats on winning player 2, better luck next time player 1");
}
}
else if (options[3] == options[4] && options[4] == options[5])
{
Playing = false;
if(turn % 2 == 0)
{
Console.WriteLine("Congrats on winning player 1, better luck next time player 2");
}
else
{
Console.WriteLine("Congrats on winning player 2, better luck next time player 1");
}
}
else if (options[6] == options[7] && options[7] == options[8])
{
Playing = false;
if(turn % 2 == 0)
{
Console.WriteLine("Congrats on winning player 1, better luck next time player 2");
}
else
{
Console.WriteLine("Congrats on winning player 2, better luck next time player 1");
}
}
// checks for horizontal wins
else if (options[0] == options[3] && options[3] == options[6])
{
Playing = false;
if(turn % 2 == 0)
{
Console.WriteLine("Congrats on winning player 1, better luck next time player 2");
}
else
{
Console.WriteLine("Congrats on winning player 2, better luck next time player 1");
}
}
else if (options[1] == options[4] && options[4] == options[7])
{
Playing = false;
if(turn % 2 == 0)
{
Console.WriteLine("Congrats on winning player 1, better luck next time player 2");
}
else
{
Console.WriteLine("Congrats on winning player 2, better luck next time player 1");
}
}
else if (options[2] == options[5] && options[5] == options[8])
{
Playing = false;
if(turn % 2 == 0)
{
Console.WriteLine("Congrats on winning player 1, better luck next time player 2");
}
else
{
Console.WriteLine("Congrats on winning player 2, better luck next time player 1");
}
}
// checks for vertical wins
else if (options[0] == options[4] && options[4] == options[8])
{
Playing = false;
if(turn % 2 == 0)
{
Console.WriteLine("Congrats on winning player 1, better luck next time player 2");
}
else
{
Console.WriteLine("Congrats on winning player 2, better luck next time player 1");
}
}
else if (options[2] == options[4] && options[4] == options[6])
{
Playing = false;
if (turn%2 == 0)
{
Console.WriteLine("Congrats on winning player 1, better luck next time player 2");
}
else
{
Console.WriteLine("Congrats on winning player 2, better luck next time player 1");
}
}
// checks for diagonal wins
}
public static void Intro()
{
Console.WriteLine("Welcome to\n");
Console.WriteLine(@"____________ ________ ________ ");
Console.WriteLine(@"___ __/__(_)______ ___ __/_____ _______ ___ __/__________ ");
Console.WriteLine(@"__ / __ /_ ___/________ / _ __ `/ ___/________ / _ __ \ _ \");
Console.WriteLine(@"_ / _ / / /__ _/_____/ / / /_/ // /__ _/_____/ / / /_/ / __/");
Console.WriteLine(@"/_/ /_/ \___/ /_/ \__,_/ \___/ /_/ \____/\___/ ");
Console.WriteLine("\n1.The game is played on a grid that's 3 squares by 3 squares.\n\n2.Player 1 is \"X\" and Player 2 is \"O\". Players take turns putting their marks in empty squares.\n\n3.The first player to get 3 of her marks in a row(horizontally, vertically or diagonally) is the winner.\n\n4.When all 9 squares are full, the game is over. If no player has 3 marks in a row, the game ends in a tie.\n\n5.You can put x or o in by typing the number you want to put it at");
Console.ReadKey(false);
Console.Clear();
}
public static void Tie()
{
if (options[0] != "1" && options[1]!= "2" && options[2] != "3" && options[3] != "4" && options[4] != "5" && options[5] != "6" && options[6] != "7" && options[7] != "8" && options[8] != "9")
{
Console.WriteLine("The game is a tie");
Playing = false;
}
}
}
}
Answer: Choosing good identifiers
When I see a name like options I think of something a player can choose from, like "single player" (i.e., play against the computer) or "multiplayer". A better name would be board, because that's what it is.
torf? Does it mean "true or false"? Every Boolean is true or false, so this name does not reflect the meaning the variable has in this context. Better: isValidInt. But I would inline this variable (see later).
Board(). A board is a thing, but here it stands for an action. Better PrintBoard(). Same for PrintIntro().
The field Playing starts with an upper-case letter and is therefore PascalCase. This casing is reserved for type names, method names and property names. Use camelCase for fields, parameters and variables.
See C# Coding Standards and Naming Conventions for a full list.
Logic and structure can be simplified and clarified.
At many places you calculate turn % 2 == 0. Instead, I suggest directly calculating the player number. We would declare static int player = 0; and a method getting the next player as well as a method switching between players
private static int NextPlayer()
{
return 1 - player;
}
private static void SwitchPlayer()
{
player = NextPlayer();
}
This leads to another simplification. The first if-else statement can be replaced by (I am using string interpolation here)
Console.WriteLine($"Player {player + 1}'s turn"); // Display player as 1-based number.
I would inline the variable you called torf and declare playerInput in an out variable declaration like this.
if (Int32.TryParse(Console.ReadLine(), out int playerInput) &&
playerInput is >= 1 and <= 9)
{
playerInput--; // Make it a 0-based index.
...
}
I used pattern matching to test if the value is in a valid range. But you can replace it by a classic Boolean expression if you prefer or if you are using a pre C# 9.0 version.
You differentiate two cases doing the same with a small exception (shown with the new identifiers):
if (player == 0) {
board[playerInput] = "x";
SwitchPlayer();
PrintBoard();
WinCondition();
Tie();
} else {
board[playerInput] = "o";
SwitchPlayer();
PrintBoard();
WinCondition();
Tie();
}
This can easily be simplified by declaring a new field playerMark as an array. While we're at it, we can do the same for player names. This saves us the base-0 to base-1 conversion of player numbers for display and could easily be extended to store real names.
static readonly char[] playerMark = { 'x', 'o' };
static readonly string[] playerName = { "1", "2" };
It becomes
board[playerInput] = playerMark[player];
PrintBoard();
WinCondition();
Tie();
SwitchPlayer(); // Doing this after printing winner!
Even without this new static field, you could move all but the first line out of the if-else statement, since they are exactly the same in both cases:
if (player == 0) {
board[playerInput] = 'x';
} else {
board[playerInput] = 'o';
}
PrintBoard();
WinCondition();
Tie();
SwitchPlayer();
The WinCondition method tests conditions and prints to the console. And it does so by repeating the same print statements over and over. Better separate the two concerns.
private static bool IsWinCondition()
{
return
board[0] == board[1] && board[1] == board[2] ||
board[3] == board[4] && board[4] == board[5] ||
board[6] == board[7] && board[7] == board[8] ||
board[0] == board[3] && board[3] == board[6] ||
board[1] == board[4] && board[4] == board[7] ||
board[2] == board[5] && board[5] == board[8] ||
board[0] == board[4] && board[4] == board[8] ||
board[2] == board[4] && board[4] == board[6];
}
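As an aside, these eight hard-coded clauses are the 3×3 special case of a row/column/diagonal scan that works for any n×n board (this connects to the Additional thought at the end). A quick sketch, written in C++ rather than C# simply to keep it standalone and compilable; all names are hypothetical:

```cpp
#include <cassert>
#include <vector>

// Check whether 'mark' owns any full row, column or diagonal of an n x n
// board -- a generalized version of the eight hand-written clauses above.
bool hasWon(const std::vector<std::vector<char>>& b, char mark)
{
    const std::size_t n = b.size();
    bool diag = true, anti = true;
    for (std::size_t i = 0; i < n; ++i) {
        bool row = true, col = true;
        for (std::size_t j = 0; j < n; ++j) {
            row = row && (b[i][j] == mark);   // row i
            col = col && (b[j][i] == mark);   // column i
        }
        if (row || col) return true;
        diag = diag && (b[i][i] == mark);          // main diagonal
        anti = anti && (b[i][n - 1 - i] == mark);  // anti-diagonal
    }
    return diag || anti;
}
```

The same double loop also prints the board for any n, which is the extensibility argument made in the closing paragraph.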
and call it with (yet another simplification here)
if (IsWinCondition()) {
playing = false;
Console.WriteLine(
$"Congrats player {playerName[player]}, better luck next time player {playerName[NextPlayer()]}");
}
Same issue as above with Tie. Also, it does not test whether we have a tie but only if the board is full. We only have a tie if we do not have a win situation at the same time. Therefore, I changed the method to (requires a using System.Linq; before the namespace)
private static bool IsBoardFull()
{
return board.All(square => square > '9'); // Because 'x' and 'o' are greater
}
To make this work I changed the type of board and playerMark to char[]. char is considered to be a numeric type in C# and can be compared like numbers. (You must change the corresponding double quotes to single quotes.)
See: Enumerable.All extension method.
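The "digits sort below letters" trick carries over to any language where characters are numeric. Here is the equivalent check sketched in C++ with std::all_of (names hypothetical):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// Same idea as the C# IsBoardFull above: the board starts as digits '1'..'9',
// and both marks 'x' and 'o' compare greater than '9' in ASCII.
bool is_board_full(const std::array<char, 9>& board)
{
    return std::all_of(board.begin(), board.end(),
                       [](char square) { return square > '9'; });
}
```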
The conditions then become
if (IsWinCondition()) {
playing = false;
Console.WriteLine(
$"Congrats player {playerName[player]}, better luck next time player {playerName[NextPlayer()]}");
} else if (IsBoardFull()) {
playing = false;
Console.WriteLine("The game is a tie");
}
Now playing can be made a local variable since it is not used or set in other methods
bool playing = true; //stops the game once someone wins
while (playing) //is to stop the game once someone wins
{
...
}
The full solution (without the enclosing namespace and class):
static char[] board = { '1', '2', '3', '4', '5', '6', '7', '8', '9' }; //stores the variables to change
static readonly char[] playerMark = { 'x', 'o' };
static readonly string[] playerName = { "1", "2" };
static int player = 0;
public static void Main()
{
PrintIntro();
PrintBoard();
bool playing = true; // Stops the game once someone wins
while (playing)
{
Console.WriteLine($"Player {playerName[player]}'s turn");
Console.WriteLine("Please, type in your response: ");
if (int.TryParse(Console.ReadLine(), out int playerInput) && playerInput is >= 1 and <= 9) {
playerInput--; // Make it a 0-based index.
if (board[playerInput] is 'x' or 'o') {
Console.WriteLine("Stop stealing other people's place");
} else {
board[playerInput] = playerMark[player];
PrintBoard();
if (IsWinCondition()) {
playing = false;
Console.WriteLine(
$"Congrats player {playerName[player]}, better luck next time player {playerName[NextPlayer()]}");
} else if (IsBoardFull()) {
playing = false;
Console.WriteLine("The game is a tie");
}
SwitchPlayer();
}
} else {
Console.WriteLine("Please input a valid expression");
}
}
}
private static int NextPlayer()
{
return 1 - player;
}
private static void SwitchPlayer()
{
player = NextPlayer();
}
private static bool IsBoardFull()
{
return board.All(square => square > '9');
}
private static bool IsWinCondition()
{
return
board[0] == board[1] && board[1] == board[2] ||
board[3] == board[4] && board[4] == board[5] ||
board[6] == board[7] && board[7] == board[8] ||
board[0] == board[3] && board[3] == board[6] ||
board[1] == board[4] && board[4] == board[7] ||
board[2] == board[5] && board[5] == board[8] ||
board[0] == board[4] && board[4] == board[8] ||
board[2] == board[4] && board[4] == board[6];
}
private static void PrintBoard() // makes the board
{
Console.Clear();
Console.WriteLine(" | | ");
Console.WriteLine($" {board[0]} | {board[1]} | {board[2]}");
Console.WriteLine("_____|_____|_____ ");
Console.WriteLine(" | | ");
Console.WriteLine($" {board[3]} | {board[4]} | {board[5]}");
Console.WriteLine("_____|_____|_____ ");
Console.WriteLine(" | | ");
Console.WriteLine($" {board[6]} | {board[7]} | {board[8]}");
Console.WriteLine(" | | ");
}
private static void PrintIntro()
{
// ...
}
Additional thought
You are storing the game board in a one-dimensional array. There exist variations of Tic-tac-toe having more rows and columns. Using a 2-d array would make the game more extensible, because this would allow testing win conditions and printing the board by simply iterating rows and columns. Now we use a manually composed win condition. This is okay as the board is very small. But doing this on larger boards would be both tedious and error prone. | {
"domain": "codereview.stackexchange",
"id": 40841,
"tags": "c#, beginner, console, tic-tac-toe"
} |
Very Massive Relativistic Body | Question: You're observing a massive object (probably a neutron star), and it is moving at a significant fraction of the speed of light relative to you. The mass of the object is just below the mass necessary to form a black hole of the corresponding size (ie, if a relatively small amount of mass was added, or the current mass was compressed, it would form a black hole). In the moving reference frame of the object, it is not observed to be a black hole, and it doesn't have sufficient density to form one.
The interesting conundrum is that from your point of view, the object undergoes length contraction. In this case, if an object of the same mass were to have the size that you're observing due to length contraction, it would be of sufficient density to form a black hole. Obviously, you don't observe the moving object becoming a black hole, because it isn't actually doing that, but you observe what appears to be an object of sufficient density to become a black hole, but is not a black hole.
What makes this possible? Is the observed mass from your reference frame different for some reason? Does what you observe not matter? Is it something else?
Answer: The black hole solution you are referring to is the Schwarzschild solution that applies to a static centrally symmetric object. If the object is moving in your reference frame, it is not static, so this solution does not apply in your coordinates. In other words, the Schwarzschild spacetime is not Lorentz invariant just like most anything in General Relativity.
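Two standard formulas, added here as a supplement (they are not part of the original answer), make this frame-independence concrete:

```latex
% Rest-frame collapse criterion: a static body of mass M and radius R is a
% black hole only if R lies inside the Schwarzschild radius
r_s = \frac{2GM}{c^2}.
% The M entering here is the invariant mass, fixed by the four-momentum:
M^2 c^4 = E^2 - |\vec{p}\,|^2 c^2 .
% A boost raises E and |\vec{p}| together but leaves M unchanged, so neither
% r_s nor the rest-frame criterion is affected by the motion; the
% length-contracted size seen by the moving observer never enters the
% (static) Schwarzschild condition.
```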
The easiest way to resolve this is to describe the object in its rest frame and then use the equivalence principle that physics does not depend on the reference frame. If the object is not a black hole in its own rest frame, then, according to the equivalence principle, this object is not a black hole in any reference frame. | {
"domain": "physics.stackexchange",
"id": 61763,
"tags": "special-relativity, black-holes, reference-frames, relativity, observers"
} |
Why are $SU(N)$ gauge theories easier to handle for $N\rightarrow \infty$? | Question: I was wondering if there was a intuitive/heuristic argument to understand why generalizing the QCD gauge group $SU(3)$ to $SU(N)$ and taking $N\rightarrow \infty$ simplifies the analysis of the theory. Since in this limit only planar diagrams survive, the others being suppressed.
At first I would expect things to get a lot nastier by introducing such large number $N$ of colors...
Answer: The intuitive idea is based on the Central Limit Theorem. Because suppose (as is usually the case) that your matter fields are in the fundamental representation of $SU(N)$, then the multiplet contains $N$ independent fields. Now the central limit tells us that the arithmetic average of $N$ independent random variables self-averages to a normal distribution, i.e. have small fluctuations. So in QCD for example, hadrons are always color singlets; a pion is $\pi = \sum_{c = 1}^N q_c\bar{q}_c$, so they have to be an average over all quark colors, so for $N\rightarrow \infty$ the fluctuations of $\pi$ (and hadrons in general) are much smaller than those of $q$ (quarks), because they should self-average according to the CLT. And in particular $$\left\langle \pi(x) \pi(y) \right\rangle \xrightarrow{N\rightarrow\infty}\left\langle \pi(x)\right\rangle\left\langle \pi(y)\right\rangle$$ | {
"domain": "physics.stackexchange",
"id": 20363,
"tags": "quantum-field-theory, quantum-chromodynamics"
} |
OpenGL 4.5 Core Buffer wrapper | Question: I recently wrote this OpenGL buffer wrapper which covers the 4.5 Core specification. I feel like the typed interface could be done better. Any feedback is greatly appreciated.
#ifndef MAKINA_CORE_RENDERING_BACKENDS_OPENGL_BUFFER_HPP_
#define MAKINA_CORE_RENDERING_BACKENDS_OPENGL_BUFFER_HPP_
#include <cstddef>
#include <type_traits>
#include <vector>
#include <makina/core/rendering/backends/opengl/opengl.hpp>
#ifdef MAKINA_OPENGL_CUDA_INTEROP
#include <cuda_gl_interop.h>
#include <cuda_runtime_api.h>
#endif
#include <export.hpp>
namespace mak
{
namespace gl
{
template<GLenum target>
class MAKINA_EXPORT buffer
{
public:
// 6.0 Buffer objects.
buffer()
{
glCreateBuffers(1, &id_);
}
buffer(GLuint id) : id_(id), managed_(false)
{
}
buffer(const buffer& that) : buffer()
{
that.is_immutable()
? set_data_immutable(that.size(), nullptr, that.storage_flags())
: set_data (that.size(), nullptr, that.usage ());
copy_sub_data(that, 0, 0, size());
}
buffer( buffer&& temp) = default;
~buffer()
{
if(managed_)
glDeleteBuffers(1, &id_);
}
buffer& operator=(const buffer& that)
{
that.is_immutable()
? set_data_immutable(that.size(), nullptr, that.storage_flags())
: set_data (that.size(), nullptr, that.usage ());
copy_sub_data(that, 0, 0, size());
return *this;
}
buffer& operator=( buffer&& temp) = default;
// 6.1 Create and bind buffer objects.
void bind () const
{
glBindBuffer(target, id_);
}
static void unbind ()
{
glBindBuffer(target, 0);
}
template<typename = typename std::enable_if<target == GL_ATOMIC_COUNTER_BUFFER || target == GL_SHADER_STORAGE_BUFFER || target == GL_UNIFORM_BUFFER || target == GL_TRANSFORM_FEEDBACK_BUFFER>::type>
void bind_range(GLuint index, GLintptr offset, GLsizeiptr size) const
{
glBindBufferRange(target, index, id_, offset, size);
}
template<typename = typename std::enable_if<target == GL_ATOMIC_COUNTER_BUFFER || target == GL_SHADER_STORAGE_BUFFER || target == GL_UNIFORM_BUFFER || target == GL_TRANSFORM_FEEDBACK_BUFFER>::type>
void bind_base (GLuint index) const
{
glBindBufferBase(target, index, id_);
}
// 6.2 Create / modify buffer object data (bindless).
void set_data_immutable (GLsizeiptr size, const void* data = nullptr, GLbitfield storage_flags = GL_DYNAMIC_STORAGE_BIT)
{
glNamedBufferStorage(id_, size, data, storage_flags);
}
void set_data (GLsizeiptr size, const void* data = nullptr, GLenum usage = GL_DYNAMIC_DRAW )
{
glNamedBufferData (id_, size, data, usage);
}
void set_sub_data (GLintptr offset, GLsizeiptr size, const void* data)
{
glNamedBufferSubData(id_, offset, size, data);
}
void clear_sub_data (GLenum internal_format, GLintptr offset, GLsizeiptr size, GLenum format, GLenum data_type, const void* data)
{
glClearNamedBufferSubData(id_, internal_format, offset, size, format, data_type, data);
}
void clear_data (GLenum internal_format, GLenum format, GLenum data_type, const void* data)
{
glClearNamedBufferData(id_, internal_format, format, data_type, data);
}
// 6.3 Map / unmap buffer data (bindless).
void* map_range (GLintptr offset, GLsizeiptr size, GLbitfield access_flags = GL_MAP_READ_BIT | GL_MAP_WRITE_BIT) const
{
return glMapNamedBufferRange(id_, offset, size, access_flags);
}
void* map ( GLenum access = GL_READ_WRITE) const
{
return glMapNamedBuffer(id_, access);
}
void flush_mapped_range (GLintptr offset, GLsizeiptr size) const
{
glFlushMappedNamedBufferRange(id_, offset, size);
}
void unmap () const
{
glUnmapNamedBuffer(id_);
}
// 6.5 Invalidate buffer data (bindless).
void invalidate_sub_data(GLintptr offset, GLsizeiptr size)
{
glInvalidateBufferSubData(id_, offset, size);
}
void invalidate ()
{
glInvalidateBufferData(id_);
}
// 6.6 Copy between buffers (bindless).
void copy_sub_data (const buffer& source, GLintptr source_offset, GLintptr offset, GLsizeiptr size)
{
glCopyNamedBufferSubData(source.id_, id_, source_offset, offset, size);
}
// 6.7 Buffer object queries (bindless).
bool is_valid () const
{
return glIsBuffer(id_);
}
std::vector<GLbyte> sub_data (GLintptr offset, GLsizeiptr size) const
{
std::vector<GLbyte> data(size);
glGetNamedBufferSubData(id_, offset, size, static_cast<void*>(data.data()));
return data;
}
GLsizeiptr size () const
{
return get_parameter(GL_BUFFER_SIZE);
}
GLenum usage () const
{
return get_parameter(GL_BUFFER_USAGE);
}
GLenum access () const
{
return get_parameter(GL_BUFFER_ACCESS);
}
GLbitfield access_flags () const
{
return get_parameter(GL_BUFFER_ACCESS_FLAGS);
}
bool is_mapped () const
{
return get_parameter(GL_BUFFER_MAPPED);
}
bool is_immutable () const
{
return get_parameter(GL_BUFFER_IMMUTABLE_STORAGE);
}
GLbitfield storage_flags() const
{
return get_parameter(GL_BUFFER_STORAGE_FLAGS);
}
GLintptr map_offset () const
{
return get_parameter_64(GL_BUFFER_MAP_OFFSET);
}
GLsizeiptr map_size () const
{
return get_parameter_64(GL_BUFFER_MAP_LENGTH);
}
void* map_pointer () const
{
void* pointer;
glGetNamedBufferPointerv(id_, GL_BUFFER_MAP_POINTER, &pointer);
return pointer;
}
GLuint id() const
{
return id_;
}
#ifdef MAKINA_OPENGL_CUDA_INTEROP
void cuda_register (cudaGraphicsMapFlags flags = cudaGraphicsMapFlagsNone)
{
if (resource_ != nullptr)
cuda_unregister();
cudaGraphicsGLRegisterBuffer(&resource_, id_, flags);
}
void cuda_unregister()
{
if (resource_ == nullptr)
return;
cudaGraphicsUnregisterResource(resource_);
resource_ = nullptr;
}
template<typename type>
type* cuda_map ()
{
type* buffer_ptr;
size_t buffer_size;
cudaGraphicsMapResources(1, &resource_, nullptr);
cudaGraphicsResourceGetMappedPointer(static_cast<void**>(&buffer_ptr), &buffer_size, resource_);
return buffer_ptr;
}
void cuda_unmap()
{
cudaGraphicsUnmapResources(1, &resource_, nullptr);
}
#endif
protected:
GLint get_parameter (GLenum parameter) const
{
GLint result;
glGetNamedBufferParameteriv(id_, parameter, &result);
return result;
}
GLint64 get_parameter_64(GLenum parameter) const
{
GLint64 result;
glGetNamedBufferParameteri64v(id_, parameter, &result);
return result;
}
GLuint id_ = 0;
bool managed_ = true;
#ifdef MAKINA_OPENGL_CUDA_INTEROP
cudaGraphicsResource* resource_ = nullptr;
#endif
};
template<typename type, GLenum target>
class MAKINA_EXPORT typed_buffer : public buffer<target>
{
public:
// 6.0 Buffer objects.
using buffer<target>::buffer;
using buffer<target>::operator=;
// 6.1 Create and bind buffer objects.
template<typename = typename std::enable_if<target == GL_ATOMIC_COUNTER_BUFFER || target == GL_SHADER_STORAGE_BUFFER || target == GL_UNIFORM_BUFFER || target == GL_TRANSFORM_FEEDBACK_BUFFER>::type>
void bind_range(GLuint index, GLintptr offset, GLsizeiptr size) const
{
buffer<target>::bind_range(index, sizeof(type) * offset, sizeof(type) * size);
}
// 6.2 Create / modify buffer object data (bindless).
void set_data_immutable (GLsizeiptr size, const type* data = nullptr, GLbitfield storage_flags = GL_DYNAMIC_STORAGE_BIT)
{
buffer<target>::set_data_immutable(sizeof(type) * size, static_cast<void*>(data), storage_flags);
}
void set_data (GLsizeiptr size, const type* data = nullptr, GLenum usage = GL_DYNAMIC_DRAW )
{
buffer<target>::set_data(sizeof(type) * size, static_cast<void*>(data), usage);
}
void set_sub_data (GLintptr offset, GLsizeiptr size, const type* data)
{
buffer<target>::set_sub_data(sizeof(type) * offset, sizeof(type) * size, static_cast<void*>(data));
}
void clear_sub_data (GLenum internal_format, GLintptr offset, GLsizeiptr size, GLenum format, GLenum data_type, const void* data)
{
buffer<target>::clear_sub_data(internal_format, sizeof(type) * offset, sizeof(type) * size, format, data_type, data);
}
// 6.3 Map / unmap buffer data (bindless).
type* map_range (GLintptr offset, GLsizeiptr size, GLbitfield access_flags = GL_MAP_READ_BIT | GL_MAP_WRITE_BIT) const
{
return static_cast<type*>(buffer<target>::map_range(sizeof(type) * offset, sizeof(type) * size, access_flags));
}
type* map ( GLenum access = GL_READ_WRITE) const
{
return static_cast<type*>(buffer<target>::map(access));
}
void flush_mapped_range (GLintptr offset, GLsizeiptr size) const
{
buffer<target>::flush_mapped_range(sizeof(type) * offset, sizeof(type) * size);
}
// 6.5 Invalidate buffer data (bindless).
void invalidate_sub_data(GLintptr offset, GLsizeiptr size)
{
buffer<target>::invalidate_sub_data(sizeof(type) * offset, sizeof(type) * size);
}
// 6.6 Copy between buffers (bindless).
void copy_sub_data (const buffer<target>& source, GLintptr source_offset, GLintptr offset, GLsizeiptr size)
{
buffer<target>::copy_sub_data(source, sizeof(type) * source_offset, sizeof(type) * offset, sizeof(type) * size);
}
// 6.7 Buffer object queries (bindless).
std::vector<GLbyte> sub_data (GLintptr offset, GLsizeiptr size) const
{
return buffer<target>::sub_data(sizeof(type) * offset, sizeof(type) * size);
}
GLsizeiptr size () const
{
return buffer<target>::size() / sizeof(type);
}
GLintptr map_offset () const
{
return buffer<target>::map_offset() / sizeof(type);
}
GLsizeiptr map_size () const
{
return buffer<target>::map_size() / sizeof(type);
}
type* map_pointer() const
{
return static_cast<type*>(buffer<target>::map_pointer());
}
};
template<typename type> using array_buffer = typed_buffer<type, GL_ARRAY_BUFFER>;
template<typename type> using atomic_counter_buffer = typed_buffer<type, GL_ATOMIC_COUNTER_BUFFER>;
template<typename type> using copy_read_buffer = typed_buffer<type, GL_COPY_READ_BUFFER>;
template<typename type> using copy_write_buffer = typed_buffer<type, GL_COPY_WRITE_BUFFER>;
template<typename type> using dispatch_indirect_buffer = typed_buffer<type, GL_DISPATCH_INDIRECT_BUFFER>;
template<typename type> using draw_indirect_buffer = typed_buffer<type, GL_DRAW_INDIRECT_BUFFER>;
template<typename type> using element_array_buffer = typed_buffer<type, GL_ELEMENT_ARRAY_BUFFER>;
template<typename type> using pixel_pack_buffer = typed_buffer<type, GL_PIXEL_PACK_BUFFER>;
template<typename type> using pixel_unpack_buffer = typed_buffer<type, GL_PIXEL_UNPACK_BUFFER>;
template<typename type> using query_buffer = typed_buffer<type, GL_QUERY_BUFFER>;
template<typename type> using shader_storage_buffer = typed_buffer<type, GL_SHADER_STORAGE_BUFFER>;
template<typename type> using texture_buffer = typed_buffer<type, GL_TEXTURE_BUFFER>;
template<typename type> using transform_feedback_buffer = typed_buffer<type, GL_TRANSFORM_FEEDBACK_BUFFER>;
template<typename type> using uniform_buffer = typed_buffer<type, GL_UNIFORM_BUFFER>;
template<typename type> using vertex_buffer = array_buffer <type>;
template<typename type> using index_buffer = element_array_buffer<type>;
}
}
#endif
Answer:
template<GLenum target>
As I've explained elsewhere, buffer objects are not typed. There's no such thing as a "vertex buffer object", "transform feedback buffer object", "uniform buffer object", or any other such thing. There are just buffer objects. You can transform feedback into a buffer, then use that same buffer as vertex input for rendering, or even as a UBO or SSBO.
So templating your buffer object on the bind target is absolutely incorrect.
buffer(GLuint id) : id_(id), managed_(false)
It's good that you allow your buffer object type to be able to be given an already created buffer object. However, it's not good that you can give it such a buffer without allowing it to adopt ownership of that buffer. That is, if a user creates a buffer object and wants to wrap it in your type, and then allow your type to destroy it, that should be allowed.
Think about how unique_ptr works. Yes, there's make_unique, but you can give it a pointer you allocated yourself and it will delete it.
I'm not saying that this behavior should necessarily be the default. But if you're going to allow wrapping user-created buffers, you should also give the user the option to allow the wrapped buffer to delete it.
Regardless of any of that, this constructor must be explicit. Otherwise, buffer is implicitly convertible from integers, and that is something that can really get out of control. Do you really want someone to be able to pass NULL as the argument to a function that takes a buffer?
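That combination — an explicit constructor plus an opt-in ownership flag — can be sketched without any OpenGL at all. Below, a plain integer stands in for the GL name and a counter stands in for glDeleteBuffers; all names are hypothetical:

```cpp
#include <cassert>

static int deleted_count = 0;  // stands in for glDeleteBuffers calls

class buffer_handle
{
public:
    // Tag type making adoption an explicit, readable choice at the call site.
    enum class ownership { adopt, borrow };

    explicit buffer_handle(unsigned id, ownership o = ownership::borrow)
        : id_(id), owns_(o == ownership::adopt) {}

    buffer_handle(const buffer_handle&)            = delete;  // non-copyable
    buffer_handle& operator=(const buffer_handle&) = delete;

    ~buffer_handle()
    {
        if (owns_) ++deleted_count;  // real code: glDeleteBuffers(1, &id_)
    }

    unsigned id() const { return id_; }

private:
    unsigned id_;
    bool     owns_;
};
```

Because the constructor is explicit, an integer (or NULL) can no longer silently convert into a buffer_handle at a call site.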
buffer(const buffer& that)
Given the sheer expense of the copy operation (glCopyBufferSubData is not exactly cheap), I would strongly suggest not making it a function of the copy constructor. Make the type non-copyable and give it a member function to do copies, if you even want to support buffer object copying.
buffer( buffer&& temp) = default;
buffer& operator=( buffer&& temp) = default;
This is wrong. If managed_ is true, this will result in multiple objects deleting the same OpenGL object. That's bad.
You need a real move constructor. Remember the Rule of 5.
glNamedBufferData (id_, size, data, usage);
glMapNamedBuffer(id_, access);
I find it curious that you're allowing the creation of buffers using the older APIs, despite still requiring OpenGL 4.5. Sure, those are valid 4.5 calls, but they're effectively obsolete, having been superseded by superior functions.
As for all of the get_* calls, I would personally file those under the YAGNI principle. Yes, OpenGL does allow you to get pretty much any state you set. But really, how often do you ever need to?
Lastly, typed_buffer is not a good type. By combining both the binding target and an object type with the buffer object, you encourage users to allocate lots and lots of buffer objects. This is well known to be a bad idea. You should try to have a few large buffers, and sub-section between them. There's no reason why you should have two separate buffer objects, just because you use different vertex formats.
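Sub-sectioning one large buffer mostly comes down to handing out aligned offsets. A minimal bump-style suballocator might look like this (the alignment value is hypothetical — in real code it would come from glGetIntegerv with GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT or similar):

```cpp
#include <cassert>
#include <cstddef>

// Hands out offsets into one large buffer instead of creating many buffers.
class bump_suballocator
{
public:
    bump_suballocator(std::size_t capacity, std::size_t alignment)
        : capacity_(capacity), alignment_(alignment) {}

    // Returns the offset for a block of 'size' bytes,
    // or SIZE_MAX (all bits set) when the buffer is exhausted.
    std::size_t allocate(std::size_t size)
    {
        const std::size_t aligned =
            (next_ + alignment_ - 1) / alignment_ * alignment_;
        if (aligned + size > capacity_)
            return static_cast<std::size_t>(-1);
        next_ = aligned + size;
        return aligned;
    }

private:
    std::size_t capacity_, alignment_, next_ = 0;
};
```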
All of your vertex data ought to be able to live in one buffer. Even if your engine can't really do it, it shouldn't be because it's impossible because of a quirk of your buffer object abstraction. | {
"domain": "codereview.stackexchange",
"id": 28067,
"tags": "c++, c++11, opengl, wrapper"
} |
Problem with str in service client | Question:
I am writing a service and this is the message :
std_msgs/String pose
---
sensor_msgs/JointState finalpos
This is the client :
#!/usr/bin/env python
import rospy
from interbotix_moveit.srv import pickitstr,pickitstringResponse
from std_msgs.msg import String
rospy.init_node('service_arm_client')
word=String()
rospy.wait_for_service('/widowxl/arm_service')
pose = rospy.ServiceProxy('/widowxl/arm_service',pickitstr)
word = pickitstringResponse()
word= 'Upright'
finalpose = pose(word)
print(finalpose.finalpose)
And I get this error :
Traceback (most recent call last):
  File "/home/michael/catkin_ws/src/interbotix_ros/interbotix_moveit/src/service_client_test.py", line 13, in <module>
    finalpose = pose(word)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 435, in __call__
    return self.call(*args, **kwds)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 512, in call
    transport.send_message(request, self.seq)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/impl/tcpros_base.py", line 665, in send_message
    serialize_message(self.write_buff, seq, msg)
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/msg.py", line 152, in serialize_message
    msg.serialize(b)
  File "/home/michael/catkin_ws/devel/lib/python2.7/dist-packages/interbotix_moveit/srv/_pickitstr.py", line 58, in serialize
    _x = self.pose.data
AttributeError: 'str' object has no attribute 'data'
What is the problem?
Originally posted by MichaelDUEE on ROS Answers with karma: 15 on 2020-02-25
Post score: 0
Answer:
Well, in your case pose is a service proxy, not an actual pose or a string request as defined in your srv. Also, you are not creating your service request for the call properly. Check the documentation for std_msgs/String.
You can try the following:
word = String()
word.data = 'Upright'
srv_proxy = rospy.ServiceProxy('/widowxl/arm_service', pickitstr)
response = srv_proxy(word)
The response should contain a sensor_msgs/JointState (the finalpos field of your srv) and you should be able to access its data with response.finalpos (for example response.finalpos.name[0] to get the name of the first joint).
Originally posted by pavel92 with karma: 1655 on 2020-02-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34499,
"tags": "ros, ros-kinetic, service"
} |
Why do we hear lesser noise when plastic bags/wrappers are crumpled inside water than when they are crumpled in air? | Question: When plastic bags/wrappers are crumpled in air they make noise. But when crumpled inside water we hear very little noise. Why is it so? Would I hear more noise if I go inside water and crumple the wrapper? I have tried this and the noise was less inside water but I was uncomfortable as the water went inside my ears. Why is it so?
Answer: If you mean listening in the air while crushing the bag under water, the main reason is the different acoustic impedances of air and water. Transmission of the sound of the bag popping through the water probably plays a secondary role.
Acoustic impedance is defined as
$I =\rho c$
where $\rho $ is the density of the medium, and $c$ the sound speed in the medium.
The acoustic impedance of water is much larger than that of air (about 3400 times), primarily because water is so much denser. The intensity transmission coefficient of sound from water to air (and vice versa) is roughly proportional to that impedance ratio, i.e. of order 1/3400.
The actual value of the intensity transmission coefficient from medium 1 to 2 with acoustic impedances $I_1$ and $I_2$ is
$T =4\dfrac{\dfrac{I_1}{I_2}}{(1 + \dfrac{I_1}{I_2})^2}$
Which works out to 0.0012 for medium 1 = water, medium 2 = air
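The quoted 0.0012 can be checked directly. The impedance values below are typical room-temperature textbook figures (roughly 415 rayl for air and 1.48×10⁶ rayl for water), not taken from the answer itself:

```cpp
#include <cassert>
#include <cmath>

// Intensity transmission coefficient between two media with acoustic
// impedances I1 and I2:  T = 4 r / (1 + r)^2,  where  r = I1 / I2.
double transmission(double I1, double I2)
{
    const double r = I1 / I2;
    return 4.0 * r / ((1.0 + r) * (1.0 + r));
}
```

Note that the formula is symmetric under swapping the two media, so the water-to-air and air-to-water coefficients are identical.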
So hardly any underwater sound transmits to the air above. | {
"domain": "physics.stackexchange",
"id": 15209,
"tags": "experimental-physics, everyday-life, acoustics"
} |
Is this a weather phenomenon or an instrumental artifact? | Question: The radar image of the midwest provided by Weatherunderground at 10:30 PM Central time, May 8 2011 has odd patterns.
Are these patterns real? Perhaps caused by large scale convection over cities? Or are they artifacts of radar placement?
Here is the image that I am referring to, where green indicates light and yellow moderate rain:
Answer: I have a master's degree in meteorology so I think I can clear this up for you!
This is simply ground clutter. You will see this sort of thing happening on evenings where the relative humidity is very high, more so when the mixing ratio is high also. The radar beam can actually start to interact with water droplets in the air when your humidity values are very high. You are more likely to max out your humidity values during the night as the air temperature falls and approaches the dewpoint. Indeed, if you look at the image I posted below you will see that in the area your radar image depicts, the relative humidities are near 100% in most of these areas. You can see that in many areas the temperature/dewpoint ratio is near 100%. For example, there are values on the map such 44/40, 55/54, 57/54, 50/49...very humid.
(Surface analysis on May 8, 2011, 0300Z (10 PM CDT). Image taken from http://www.hpc.ncep.noaa.gov/html/sfc_archive.shtml, where you can retrieve the surface analysis for any day you wish back to March 30, 2006.) | {
"domain": "physics.stackexchange",
"id": 4350,
"tags": "atmospheric-science, weather, convection"
} |
K&R Exercise 1-18. Remove trailing blanks and tabs from each line | Question: Intro
I'm going through the K&R book (2nd edition, ANSI C ver.) and want to get the most from it: learn (outdated) C and practice problem-solving at the same time. I believe that the author's intention was to give the reader a good exercise, to make him think hard about what he can do with the tools introduced, so I'm sticking to program features introduced so far and using "future" features and standards only if they don't change the program logic.
Compiling with gcc -Wall -Wextra -Wconversion -pedantic -std=c99.
K&R Exercise 1-18
Write a program to remove trailing blanks and tabs from each line of input, and to delete entirely blank lines.
Solution
My solution reuses functions coded in the previous exercises (getline & copy) and adds a new function size_t trimtrail(char line[]); to solve the problem. For lines that can fit in the buffer, the solution is straightforward. However, what if they can't? The main routine deals with that.
Since dynamic memory allocation hasn't been introduced, I don't see a way to completely trim arbitrary-length lines in one pass. Therefore, the solution does the next best thing: it trims the ends it can see, and signals whether there's more work to be done. This way, the shell can re-run the program as many times as necessary to finish the job.
Code
/* Exercise 1-18. Write a program to remove trailing blanks and tabs
* from each line of input, and to delete entirely blank lines.
*/
#include <stdio.h>
#include <stdbool.h>
#define BUFSIZE 10 // line buffer size
size_t getline(char line[], size_t sz);
void copy(char to[], char from[]);
size_t trimtrail(char line[]);
int main(void)
{
size_t len; // working length
size_t nlen; // peek length
size_t tlen; // trimmed length
char line[BUFSIZE]; // working buffer
char nline[BUFSIZE]; // peek buffer
bool istail = false;
bool ismore = false;
len = getline(line, BUFSIZE);
while (len > 0) {
if (line[len-1] == '\n') {
// proper termination can mean either a whole line, or end
// of one
tlen = trimtrail(line);
if (istail == false) {
// base case, whole line fits in the working buffer
// print only non-empty lines
if (line[0] != '\n') {
printf("%s", line);
}
}
else {
// long line case, only the tail in the working buffer
printf("%s", line);
if (len != tlen) {
// we couldn't keep the whole history so maybe more
// blanks were seen which could not be processed;
// run the program again to catch those
ismore = true;
}
}
// this always gets the [beginning of] next line
len = getline(line, BUFSIZE);
istail = 0;
}
else {
// if it was not properly terminated, peek ahead to
// determine whether there's more of the line or we reached
// EOF
nlen = getline(nline, BUFSIZE);
if (nlen > 0) {
if (nline[0]=='\n') {
// if next read got us just the '\n'
// we can safely trim the preceding buffer
tlen = trimtrail(line);
if (tlen > 0) {
printf("%s", line);
if (len != tlen)
ismore = 1;
}
}
else {
// if still no '\n', we don't know if safe to trim
// and can only print the preceding buffer here
printf("%s", line);
}
// we didn't yet process the 2nd buffer so copy it into
// 1st and run it through the loop above
len = nlen;
copy(line, nline);
istail = 1;
}
else {
// EOF reached, peek buffer empty
// means we can safely trim the preceding buffer
tlen = trimtrail(line);
if (tlen > 0) {
if (line[0]!='\n') {
printf("%s", line);
}
else {
ismore = 1;
}
}
if (len != tlen) {
ismore = 1;
}
// and we don't need to run the loop anymore, exit here
len = 0;
}
}
}
// if there were too long lines, we could not trim them all;
// signal to the environment that more runs could be required
return ismore;
}
/* getline: read a line into `s`, return string length;
* `sz` must be >1 to accommodate at least one character and the string
* termination '\0'
*/
size_t getline(char s[], size_t sz)
{
int c;
size_t i = 0;
bool el = false;
while (i < sz-1 && el == false) {
c = getchar();
if (c == EOF) {
el = true;
}
else {
s[i] = (char) c;
++i;
if (c == '\n') {
el = true;
}
}
}
if (i < sz) {
s[i] = '\0';
}
return i;
}
/* copy: copy a '\0' terminated string `from` into `to`;
* assume `to` is big enough;
*/
void copy(char to[], char from[])
{
size_t i;
for (i = 0; from[i] != '\0'; ++i) {
to[i] = from[i];
}
to[i] = '\0';
}
/* trimtrail: trim trailing tabs and blanks, returns new length
*/
size_t trimtrail(char s[])
{
size_t lastnb;
size_t i;
// find the last non-blank char
for (i = 0, lastnb = 0; s[i] != '\0'; ++i) {
if (s[i] != ' ' && s[i] != '\t' && s[i] != '\n') {
lastnb = i;
}
}
// is it a non-empty string?
if (i > 0) {
--i;
// is there a non-blank char?
if (lastnb > 0 ||
(s[0] != ' ' && s[0] != '\t' && s[0] != '\n')) {
// has non-blanks, but is it properly terminated?
if (s[i] == '\n') {
++lastnb;
s[lastnb] = '\n';
}
}
else {
// blanks-only line, but is it properly terminated?
if (s[i] == '\n') {
s[lastnb] = '\n';
}
}
++lastnb;
s[lastnb] = '\0';
return lastnb;
}
else {
// empty string
return 0;
}
}
Test
Input File
1
2
444 4
5555 5
66666 6 6
777777 7
8888888 8
99999999 9
000000000 0
1
Test Script (Bash)
i=0
j=1
./ch1-ex-1-18-01 <test.txt >out1.txt
while [ $? -eq 1 ] && [ $j -lt 20 ]; do
let i+=1
let j+=1
./ch1-ex-1-18-01 <out${i}.txt >out${j}.txt
done
Answer: Review covers only minor stuff.
getline()
Avoid a technical exploit when sz == 0. Although this code only passes sizes greater than 0, the function is exploitable when called with sz == 0.
When sz == 0, sz - 1 wraps around to a huge value because size_t is unsigned. Simply add 1 on the left-hand side instead.
// while (i < sz-1 && el == false)
while (i + 1 < sz && el == false)
Advanced: getline()
When a rare reading error occurs, getchar() returns EOF. Standard functions like fgets() return NULL even if some characters were successfully read prior to the error. This differs from OP's getline() functionality. Since getline() uses a return of 0 to indicate end-of-file (and no data read), a parallel functionality to fgets() would also return 0 when an input error occurs (even if some good data read prior).
Easy, yet pedantic, change suggested:
if (i < sz) {
// add if
if (c == EOF && !feof(stdin)) { // EOF due to error
i = 0;
}
s[i] = '\0';
}
Consider const
When the source data does not change, using const can make for 1) more clarity in function usage 2) greater applicability as then const char *f; copy(..., f); is possible. 3) potentially more efficient code.
// void copy(char to[], char from[]);
void copy(char to[], char const from[]);
Advanced: Consider restrict
restrict, roughly, implies that the data referenced by pointer only changes due to the code's function without side effects. Should from/to overlap, copy() as presently coded, can dramatically fail. restrict informs the caller that to/from should not overlap and thus allows the compiler to perform additional optimizations based on that.
// void copy(char to[], char const from[]);
void copy(char * restrict to, char const * restrict from);
Inconsistent documentation/function
Code is described as "trim trailing tabs and blanks" yet then trims ' ', '\t' and '\n'. Recommend making the documentation and the function consistent.
Sentinels
When printing string test output, especially ones with white-space removal, use sentinels to help show problems.
// printf("%s", line);
printf("<%s>", line);
bool deserves boolean syntax
Style issue.
// while (i < sz-1 && el == false) {
while (i < sz-1 && !el) {
No major issues noted. Well done. | {
"domain": "codereview.stackexchange",
"id": 32506,
"tags": "beginner, c, strings, io"
} |
Are there any results on the following "generalized matching" problems? | Question: Given a graph $G = (V, E)$, one can view a matching $M$ on the graph as a partition of $V$ into vertex sets $S_{i, j}$ for $j \in \{1, 2\}$, where each $S_{i, j}$ induces a subgraph in $G$ isomorphic to $K_{j}$. That is, the edges in $M$ correspond to subgraphs isomorphic to $K_{2}$ and the leftover (unmatched) vertices correspond to subgraphs isomorphic to $K_{1}$. A matching algorithm can then be viewed as partitioning the vertex set into subsets which induce complete graphs, where we try to include as many $K_{2}$s as possible before resorting to $K_{1}$.
I'm interested in a generalization of this problem where $j$ is as large as possible (or perhaps some fixed j > 2, if there are nice results). In particular, I'd like an algorithm which first tries to find subsets of $V$ which induce subgraphs isomorphic to $K_{l}$, for $l$ as large as possible, and then $K_{l-1}$, etc.
Is there any work on problems like this?
Answer: In http://www.sciencedirect.com/science/article/pii/S0012365X13003543 , we use exactly this process of picking the largest possible clique, then the next largest available clique, and so on (without repeating any vertices). When applied to a cograph $G$, this algorithm forms a cluster graph subgraph of $G$ with the max number of edges possible. The paper contains some combinatorial results on sizes of cliques obtained during such a process.
This paper http://publicaciones.dc.uba.ar/Publications/2015/BDNV15/BDNV15.pdf generalizes our result to a class of graphs beyond cographs.
I'm not sure which way you want to generalize matchings ... towards triangle covers or clique covers, but either way, it seems you will be moving from P-territory into NP-complete land. Unless you restrict your initial graph to certain structural classes. | {
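As a concrete illustration of the greedy process described above (pick a largest clique, then the next largest among the remaining vertices, and so on), here is a minimal brute-force sketch. It is exponential-time and only meant for tiny graphs, and the adjacency-set representation is an assumption for illustration, not something taken from the papers:

```python
from itertools import combinations

def is_clique(adj, nodes):
    """True iff every pair in `nodes` is joined by an edge."""
    return all(b in adj[a] for a, b in combinations(nodes, 2))

def greedy_clique_partition(adj):
    """Repeatedly carve a largest clique out of the remaining vertices.

    Brute force (exponential), only meant to illustrate the process on
    tiny graphs; `adj` maps each vertex to its set of neighbours.
    """
    remaining = set(adj)
    parts = []
    while remaining:
        for size in range(len(remaining), 0, -1):   # largest first
            best = next((set(c)
                         for c in combinations(sorted(remaining), size)
                         if is_clique(adj, c)), None)
            if best is not None:
                parts.append(best)
                remaining -= best
                break
    return parts

# A triangle plus an isolated vertex: one K3, then one K1.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}
print(greedy_clique_partition(adj))   # → [{1, 2, 3}, {4}]
```

As the answer notes, doing this exactly on general graphs means solving maximum clique repeatedly, which is why the cited results restrict attention to structured classes such as cographs.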
"domain": "cstheory.stackexchange",
"id": 3810,
"tags": "graph-theory, graph-algorithms, matching"
} |
Generate a random integer of length N with unique digits | Question: The task is simple: given N, generate a random integer with no repeating digits.
'''
Generate a random number of length N in which all digits
are unique
'''
from __future__ import print_function
import random
from collections import OrderedDict
# keep generating till all are unique
# This is a brute force approach where I store the digits generated
# so far in a OrderedDict and if the next random number is already
# there, i ignore it.
def randN(n):
digits = OrderedDict()
while len(digits) < n:
d = random.randint(0, 9)
if d == 0 and not digits.keys():
continue
else:
if not digits.get(str(d), None):
digits[str(d)] = 1
return int(''.join(digits.keys()))
def _assert(randi, n):
assert len(str(randi)) == n
assert len(set(str(randi))) == n
for _ in range(100000):
_assert(randN(10), 10)
_assert(randN(1), 1)
_assert(randN(5), 5)
I have a feeling there is a better approach to solve this, which is why I am posting it here.
Answer: Not quite one line, but here's a much simpler solution:
import random
def randN(n):
assert n <= 10
l = list(range(10)) # compat py2 & py3
while l[0] == 0:
random.shuffle(l)
return int(''.join(str(d) for d in l[:n]))
Explanation:
The problem wants N unique digits. To get those, first make a list of all the digits, then shuffle them and cut off the unwanted digits (it should be noted that if you have many more than 10 digits, shuffling the whole lot would be expensive if you only want a handful).
To turn the list of digits into an integer, the shortest way is to stringify each digit and then join them into one string, and parse that back into an integer.
But wait! If the first digit is 0, the parsed integer will come out one digit short. So, earlier in the code, simply repeat the shuffle until some other digit comes first.
Since the initial order of the list has the 0 at the beginning, we don't need to do an extra shuffle before the loop. | {
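Following up on the note about shuffle cost, here is a hypothetical variant (not part of the answer above) using random.sample, which draws n distinct digits directly so only n elements are permuted rather than all 10; it re-draws only when a leading zero comes up:

```python
import random

def randN_sample(n):
    """Random n-digit integer with all-distinct digits, via sampling."""
    assert 1 <= n <= 10
    while True:
        digits = random.sample(range(10), n)   # n distinct digits, random order
        if digits[0] != 0:                     # reject a leading zero
            return int(''.join(map(str, digits)))

print(randN_sample(5))   # e.g. a 5-digit number with all-distinct digits
```

The rejection loop terminates quickly since at most one draw in ten starts with 0.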
"domain": "codereview.stackexchange",
"id": 10610,
"tags": "python, random"
} |
Why isn't option C the correct answer for the given question? | Question: Please answer this question. I am getting option C as my answer by putting the input frequency of 3 rad/s into the frequency response of the system.
Answer: You're right, option C is correct. The system is an ideal lowpass filter with a delay of $\tau=1$ eliminating all components with frequencies higher than $\omega_c=4$. Consequently, only the component with index $k=1$ remains, and the only thing that happens to it is that it is delayed by $\tau=1$, i.e., $x(t)=\frac12\sin(3t)$ becomes $x(t-\tau)=\frac12\sin(3(t-\tau))$. | {
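A quick numerical check of this reasoning (the second, filtered-out component at $\omega=5$ is assumed here for illustration, since the original figure is not reproduced): sample one common period, apply the ideal lowpass with cutoff $\omega_c=4$ and delay $\tau=1$ in the frequency domain, and compare against $\frac12\sin(3(t-1))$:

```python
import numpy as np

# One common period of the (assumed) two-component input: the omega = 3
# term from the question plus an omega = 5 term that the filter removes.
N = 1024
t = np.arange(N) * 2 * np.pi / N            # fundamental frequency: 1 rad/s
x = 0.5 * np.sin(3 * t) + 0.5 * np.sin(5 * t)

# Ideal lowpass with cutoff omega_c = 4 rad/s and delay tau = 1 s:
# H(j omega) = exp(-j omega tau) for |omega| <= 4, and 0 otherwise.
X = np.fft.fft(x)
omega = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # bin freqs in rad/s
tau = 1.0
H = np.where(np.abs(omega) <= 4, np.exp(-1j * omega * tau), 0)
y = np.fft.ifft(X * H).real

expected = 0.5 * np.sin(3 * (t - tau))      # option C: the delayed sinusoid
print(np.max(np.abs(y - expected)))         # numerically zero
```

The residual is at machine precision: only the $\omega=3$ component survives, delayed by $\tau=1$.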
"domain": "dsp.stackexchange",
"id": 9295,
"tags": "linear-systems"
} |
Python3 conflicts with 2ython2 | Question:
Hello.
I am using the CanUsb package on ROS Noetic. As you know, Noetic uses Python 3, but this package is written in Python 2, so is there any way to solve the incompatibility between these versions and run the canusb.py script?
Of course I have installed Python 2 on my PC, but it still runs canusb with Python 3.
Thanks in advance.
Best regards.
Alessandro
Originally posted by Alessandro Melino on ROS Answers with karma: 113 on 2020-07-14
Post score: 0
Original comments
Comment by sloretz on 2020-07-14:\
it runs canusb with python3.
I'm not sure how to reproduce this. What runs canusb with Python3? What command is being run? It is possible to use Python 2 scripts though it's not recommended and the script won't be able to import other Noetic python packages.
Comment by Alessandro Melino on 2020-07-14:
Is a script called canusb.py and it is executed from the roslaunch file (<node name="canusb" pkg="canusb" type="canusb.py" output="screen">). You can find it in can usb package.
Comment by sloretz on 2020-07-14:
Oh, I was thinking the package was depending on something that required Python 2. It looks like the Package doesn't have any Python 2 only dependencies, and instead just hasn't been ported to Python 3 yet. Porting that package to Python 3 is the best way forward, and maybe it is the only way forward since it imports genpy in _CAN.py which will only be available for Python 3. The issue you opened is a good start https://github.com/spiralray/canusb/issues/2 .
The package package looks pretty small. It may not be much work to port it, and the maintainer may be interested in a contribution that ports it. Here's a guide to porting a package to Python 3 that might be helpful http://wiki.ros.org/UsingPython3/
Comment by Alessandro Melino on 2020-07-15:
Okay, thank you for the information. I hope that the maintainer does the porting soon.
Best regards.
Answer:
Solved by creating a script in Python 3 from scratch.
Originally posted by Alessandro Melino with karma: 113 on 2021-05-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35275,
"tags": "ros, python3"
} |
Do correlations matter when building neural networks? | Question: I am new to working with neural networks. However, I have built some linear regression models in the past. My question is, is it worth looking for features with a correlation to my target variable as I would normally do in a linear regression or is it better to feed the neural network with all the data I have?
Assuming that the data I have is all related to my target variable of course. I am working with this dataset and building a neural network regressor for it.
https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0101EN/labs/data/concrete_data.csv
Here is a snippet of the data. The target variable is the concrete strength rate given a certain combination of materials for that concrete sample.
I greatly appreciate any tips and explanations. Excuse me if this is too noob of a question, but unfortunately I did not find any info about it on Google. Thanks again!
Answer: If there is some correlation between features, that is what the network will ideally find out on its own and learn to utilize. So, in general, don't take correlated samples or features out of the training loop only because they look correlated. After all, they could convey a lot of valuable information.
When it comes to correlation between data samples during training, this correlation is commonly broken up by training a network on randomly selected mini-batches of training data samples. So, you randomly sample e.g. 16 or 32 (or so) training examples based on which you apply a single update of the weights using some Stochastic Gradient Descent variant. Since the members of a mini-batch are sampled at random, chances for finding highly correlated training samples in some mini-batch shall be sufficiently minimized in order not to negatively affect the training outcome.
Having said that, if you are concerned about overfitting of your model or weights that would overly weight just a small subset of all available input features, you could try applying regularization techniques like L1 (encouraging sparse representations) or L2 (encouraging low weights in general) regularization or dropout.
In your particular case, since the main concern is an excessive contribution of only a small set of input features, L2 shall yield better results (avoiding excessively large weights that would be required to excessively much weight just a small number of features).
Besides that: Commonly, you split your training dataset into 3 parts:
Data used for fitting the model (actual training data)
Data used for assessing the training progress & possibly for determining when to apply early stopping (validation data)
Test data used to assess the performance of the system after all training & intermediate testing is done
The final evaluation on the test dataset shall reveal then the generalization ability of your trained model to novel data.
So, with regularization in place during training and relatively low error rates on the validation and test datasets, you are pretty much safe even without checking for correlated data beforehand. Only when you really struggle to decrease the validation loss might it be worth inspecting further what exactly is going wrong in terms of correlations and such.
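To make the L2 point concrete, here is a minimal NumPy sketch (synthetic data, not the concrete dataset; plain gradient descent stands in for whatever optimizer your network uses) showing that an L2 penalty shrinks the overall weight norm when two input features are nearly duplicates:

```python
import numpy as np

# Synthetic illustration (NOT the concrete dataset): the target depends only
# on feature 0, but feature 1 is a near-duplicate of it, so an unregularized
# fit has no pressure to keep the total weight norm small.
rng = np.random.default_rng(0)
n = 200
f0 = rng.normal(size=n)
f1 = f0 + 0.01 * rng.normal(size=n)     # highly correlated feature
X = np.column_stack([f0, f1])
y = 3.0 * f0 + 0.1 * rng.normal(size=n)

def fit(X, y, l2, steps=5000, lr=0.05):
    """Gradient descent on MSE + l2 * ||w||^2 (a stand-in for any optimizer)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * l2 * w
        w -= lr * grad
    return w

w_plain = fit(X, y, l2=0.0)
w_l2 = fit(X, y, l2=0.1)
# The penalized weights keep roughly the same predictive content
# (w0 + w1 near 3) at a smaller overall norm.
print(w_plain, w_l2)
```

The same mechanism is what a per-layer kernel regularizer applies inside a neural network; correlated inputs are handled by the penalty rather than by manual feature pruning.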
"domain": "ai.stackexchange",
"id": 2170,
"tags": "neural-networks, python, keras, linear-regression"
} |
Find the sum of the first K subsets of integer array | Question: We have given a multiset of $N$ integer, both positive or negative. Consider all $2^N$ subsets, sorted by their sum (the empty subset has sum 0). We want an algorithm that outputs only the first $K$ sums.
The problem is that $N$ can be a very big number (up to 100000), so we cannot generate all subsets. How do we optimize the search so we won't generate more than $K$ subsets?
$K$ in the problem will also be up to 100000.
I was thinking to get the smallest possible subset and then try to make it bigger and bigger, but I couldn't come to something that will work in all cases.
The problem is from a past CS Academy contest:
https://csacademy.com/contest/round-79/task/smallest-subsets/
Answer: Say the multiset is $S=M\cup P$ where $M$ is the set of negative numbers and $P$ is the set of non-negative numbers. Define $S^+=(-M)\cup P$ where $-M$ means the set of opposite numbers of elements of $M$. For a subset $T=(-M_T)\cup P_T$ of $S^+$ where $M_T\subseteq M$ and $P_T\subseteq P$, define $f(T)=(M\backslash M_T)\cup P_T$, which is a subset of $S$. Now easy to see $f$ is a bijection, and the sum of $f(T)$ is the sum of $T$ plus a constant (the sum of $M$). So we only need solve the problem on $S^+$, and transform the result, say $T_1,T_2,\ldots, T_K$ to $f(T_1),f(T_2),\ldots, f(T_K)$. That is to say, we only need to solve the problem where all elements are non-negative.
This is solved in this post. For completeness of this answer, I quote it here.
(Going to assume nonempty subsets for simplicity. Handling the empty subset is a line or two.)
Given a nonempty subset of indices S, define the children of S to be S \ {max(S)} U {max(S) + 1} and S U {max(S) + 1}. Starting with {1}, the child relation forms a tree that includes every nonempty subset of positive integers.
{1}
| \
{2} {1,2}______
| \ \ \
{3} {2,3} {1,3} {1,2,3}
Keyed by the sum of corresponding array elements, this tree is a min-heap. If you compute the keys carefully (by adding and subtracting rather than summing from scratch) and implement the min-heap deletion lazily, then you get an O(k log k)-time algorithm. | {
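Putting the two pieces together — the sign-flip transform for negative numbers and the lazy min-heap walk over the subset tree — a Python sketch might look like this (the empty subset is included, per the original problem):

```python
import heapq

def smallest_subset_sums(nums, k):
    """Return the k smallest subset sums of nums, empty subset included.

    First applies the sign-flip transform: flip each negative element and
    shift every sum by the (negative) total of the negatives. Then walks
    the subset tree lazily with a min-heap: O(n log n + k log k).
    """
    shift = sum(x for x in nums if x < 0)   # sum of the negative elements
    b = sorted(abs(x) for x in nums)        # transformed, all non-negative
    out = [shift]                           # empty subset of b
    if b:
        heap = [(b[0], 0)]                  # (subset sum, index of its max element)
        while len(out) < k and heap:
            s, i = heapq.heappop(heap)
            out.append(shift + s)
            if i + 1 < len(b):
                # two children from the quoted tree, both with max index i+1
                heapq.heappush(heap, (s - b[i] + b[i + 1], i + 1))  # swap max
                heapq.heappush(heap, (s + b[i + 1], i + 1))         # extend
    return out[:k]

print(smallest_subset_sums([1, -2, 3], 8))  # → [-2, -1, 0, 1, 1, 2, 3, 4]
```

Each subset appears exactly once in the tree, and keys are updated by adding and subtracting single elements, as the quoted post prescribes.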
"domain": "cs.stackexchange",
"id": 11396,
"tags": "algorithms, optimization, sets"
} |
Rapid/ Slow Expansion in Gases | Question: How is rapid expansion of gases different from slow expansion?
Answer: Most physics courses teach learners about slow vs. rapid expansion of gases in thermodynamics. This is often misleading, but only because many miss the point that slow expansion accounts for heat flow to and from the gaseous system, where it is assumed that the gas reaches thermodynamic equilibrium with the external system; and that rapid expansion assumes heat flow is negligibly small since the process happens over a short time, so the process is idealized as adiabatic. Ultimately these two modes of gas expansion have no difference in their underlying mechanisms; it is only the differing importance of heat flow that makes each model important in different scenarios.
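A quick numerical contrast makes the two idealizations concrete (a monatomic ideal gas with $\gamma = 5/3$ and round initial values are assumed here for illustration):

```python
# Doubling the volume isothermally (slow: heat flows in, T held by the
# surroundings) vs adiabatically (rapid: no heat exchange) from the same
# initial state of an assumed monatomic ideal gas.
gamma = 5 / 3
P1, V1, T1 = 100_000.0, 1.0, 300.0         # Pa, m^3, K (illustrative values)
V2 = 2 * V1

P2_iso = P1 * V1 / V2                      # PV = const  -> 50 kPa, T stays 300 K
P2_adi = P1 * (V1 / V2) ** gamma           # PV^gamma = const -> about 31.5 kPa
T2_adi = T1 * (V1 / V2) ** (gamma - 1)     # the gas cools, to about 189 K

print(P2_iso, round(P2_adi), round(T2_adi))
```

Same gas, same mechanism; only the assumed heat flow differs, and that is what separates the two final states.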
"domain": "physics.stackexchange",
"id": 23276,
"tags": "thermodynamics, ideal-gas"
} |
(Leetcode) Valid parentheses | Question: This is a Leetcode problem -
Given a string containing just the characters '(', ')', '{', '}', '[' and ']', determine if the input string is valid.
An input string is valid if -
Open brackets must be closed by the same type of brackets.
Open brackets must be closed in the correct order.
Note that an empty string is also considered valid.
Example 1 -
Input: "()"
Output: True
Example 2 -
Input: "()[]{}"
Output: True
Example 3 -
Input: "(]"
Output: False
Example 4 -
Input: "([)]"
Output: False
Example 5 -
Input: "{[]}"
Output: True
Here is my solution to this challenge -
def is_valid(s):
if len(s) == 0:
return True
parentheses = ['()', '[]', '{}']
flag = False
while len(s) > 0:
i = 0
while i < 3:
if parentheses[i] in s:
s = s.replace(parentheses[i], '')
i = 0
flag = True
else:
i += 1
if len(s) == 0:
return True
else:
flag = False
break
return False
So I would like to know whether I could improve performance and make my code shorter.
Answer: Not a performance suggestion, but you can make use of the fact that an empty collection is Falsey.
if len(s) == 0:
Is functionally the same as just:
if not s:
And similarly,
while len(s) > 0:
Can be just:
while s:
Relevant PEP entry (search for "For sequences" under the linked heading). | {
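Applying both simplifications to the original function might look like this (a sketch; it also uses Python's for/else and drops the unused flag variable):

```python
def is_valid(s):
    parentheses = ['()', '[]', '{}']
    while s:                      # loop until the string is empty
        for pair in parentheses:
            if pair in s:
                s = s.replace(pair, '')
                break             # something was removed; scan again
        else:
            return False          # nothing removable left, but s is non-empty
    return True                   # empty string (including the input "") is valid
```

The algorithm is unchanged; only the truthiness checks and the loop bookkeeping are simplified.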
"domain": "codereview.stackexchange",
"id": 34929,
"tags": "python, performance, python-3.x, programming-challenge, balanced-delimiters"
} |
Do these cars rotate on themselves? | Question: I was reading about the moon rotation around earth and the tidal lock related. I found some interesting information already here and on astronomy.stackexchange.com as well. The moon is known to have a full rotation period of ~27 days.
However I really can't wrap my head around this.
Let's consider these vehicles going around earth:
✓ They always "face" the same side to the earth
✓ They find themselves in the same position after a full revolution around earth
... but are they rotating on themselves and do they have this tidal lock effect? I would have said no but after reading about the tidal lock of the moon, should I say so? The same question can be asked for planes (if the fact that these cars touch the ground is an issue).
Answer: It is not the same as the tidal lock because the angular position of each car is determined by the reaction forces between their wheels and the surface of the planet, not by tidal forces which would tend to hold them at right angles to their depicted positions.
Tidal forces pull along the line joining the two bodies and compress transverse to that line.
"domain": "physics.stackexchange",
"id": 14221,
"tags": "space, rotation, moon, relative-motion"
} |
Why didn't a concept like "pointers" in Computer Science evolve in the genome? | Question: I see that the genome contains large regions of repeating sequences called interspersed or dispersed elements. The long dispersed elements (LINES) such as LINE-1, can reach up to 6-8 kb in length.
I'm wondering, given the amount of repetition that takes place in these regions, wouldn't a system where a pointer (such as in computer programming) existed be more efficient? For example, instead of including a LINE, include a unique 5 base pair sequence acting as a pointer to that LINE. A separate chromosome (containing one copy of each LINE) would then be read at the correct position once the pointer was read.
Do you think that given enough time in evolution, such a system would be more sustainable? Sort of like using a proper functional language instead of Assembler in computer programming, where f(x) can be defined once, and accessed via pointers, instead of being repeated many times?
Answer: Short answer: Pointers already exist within the genome, in terms of transcription elements (such as repressor/activator systems). These systems can remotely activate a specific gene for transcription based on concentrations of specific chemicals within the cell.
The problem with LINEs is that they are thought to be ancient retroviruses which lost their capacity to infect other cells, instead jumping around within the genome. They are therefore parasitic in nature, and they reproduce by inserting multiple copies of themselves into the genome. It makes more sense to think of LINEs as nearly inactivated retroviruses instead.
"domain": "biology.stackexchange",
"id": 3751,
"tags": "genetics"
} |
Are we comoving observers of space expansion? | Question: In cosmology:
A comoving observer is the only observer that will perceive the
universe, including the cosmic microwave background radiation, to be
isotropic. (Wikipedia)
According to this definition, is Earth considered as a comoving reference frame, or are we supposed to have a "peculiar velocity"?
What is the current precision for measuring whether a frame is comoving or not, and for measuring its peculiar velocity? Or: above what speed (with respect to Earth) would a frame be considered peculiar?
Answer: We have a small peculiar velocity with respect to the comoving frame; this can be seen as a dipole in the CMB data (the CMB gets Doppler shifted).
This dipole (and the monopole) is usually subtracted before doing further analysis of the CMB. I think (but I am not sure about this) that measuring the CMB dipole is the best and easiest way to find Earth's peculiar velocity with respect to the comoving frame.
There is no sharp division between an object with a peculiar velocity and one without; the question is how large the peculiar velocity is compared to the scales we are talking about.
The numerical value for the peculiar velocity is:
(369 $\pm$ 0.9) km/s.
You can find it here: https://arxiv.org/abs/1303.5087 | {
"domain": "physics.stackexchange",
"id": 19202,
"tags": "cosmology, reference-frames, time, space-expansion"
} |
Initial charge in parallel plate capacitor | Question: Two capacitors $C_1$ and $C_2$ (where $C_1 > C_2$) are charged to the same initial potential difference $\Delta V_i$. The charged capacitors are removed from the battery, and their plates are connected with opposite polarity. The switches S1 and S2 are then closed. Find the final potential difference $\Delta V_f$ between $a$ and $b$ after the switches are closed as in figure down below.
The solution finds the initial charge $Q_i = Q_{1i}+Q_{2i} = C_1 \Delta V_i -C_2 \Delta V_i = (C_1 - C_2) \Delta V_i$.
I don't understand why is there a negative sign for $Q_{2i}$. Can someone explain ?
Answer: The capacitor plates in figure 26.12(a) are connected in a way that the positive plate of one capacitor is connected by a wire to the negative plate of the other capacitor (It's written in your question, the plates are connected with opposite polarity). Because there is no battery or an external source of voltage, the electrons from the negative plate neutralize a part of the positive charge, so the charges would be balanced, and hence lower, which is why the negative sign is used for $Q_{2i}$
Hope this helps | {
"domain": "physics.stackexchange",
"id": 80193,
"tags": "homework-and-exercises, electromagnetism, charge, capacitance"
} |
Mac Bash: how to combine the open and str subtract to one line code? | Question: new to bash , I use the following code to open the super folder with a file path.
function openUp(){
cd $(echo $1 | sed 's@\(.*\)/.*@\1@' );
open ..
}
like dragging a file to the terminal, then opening the parent folder conveniently.
openUp /Users/de/Downloads/32/S01E32-array-arrayslice-collection-collections-1-master.zip
How to combine the two line code to one line?
I originally thought about using open directly, without changing the directory.
Answer: You can use open with an absolute path, it doesn't have to be a relative path.
That is, you could write the same in one line, with some basic improvements:
openUp() {
open "$(sed 's@\(.*\)/.*@\1@' <<< "$1")/.."
}
The basic improvements:
Instead of echo "..." | cmd, use here strings: cmd <<< "..."
Double-quote variables used as command line arguments (in your example, of cd and echo)
It's not recommended to use the function keyword, write without
A more important improvement would be to stop using sed to get the name of the base directory. Using a regex is error-prone and not as intuitive as the dirname command:
openUp() {
open "$(dirname "$1")/.."
}
Notice that the arguments of dirname and open are both double-quoted,
as mentioned earlier.
This is necessary, to protect from word-splitting and globbing. | {
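For reference, dirname just strips the last path component (using the path from the question):

```shell
dirname "/Users/de/Downloads/32/S01E32-array-arrayslice-collection-collections-1-master.zip"
# prints: /Users/de/Downloads/32
# so "$(dirname "$1")/.." points at /Users/de/Downloads, which open then shows
```

Unlike the sed regex, dirname also behaves sensibly for edge cases such as paths with no slash (it returns ".").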
"domain": "codereview.stackexchange",
"id": 33603,
"tags": "bash, macos"
} |
Commutators in Poincare algebra | Question: Consider the method of induced representations for the Poincare algebra, i.e. given a field $\phi$ (which need not be a scalar field despite its notation), we have the commutator $$[J^{\mu\nu},\phi(0)]=-\mathcal{J}^{\mu\nu}\phi(0)$$ where $J$ are the Lorentz generators, promoted to operators, while $\mathcal{J}$ are operators acting on the Hilbert space of fields. I want to find the commutator for non-zero $x$. My idea was to translate $\phi(0)=\mathcal{T}^{-1}(x)\phi(x)\mathcal{T}(x)$, where $\mathcal{T}(a)=e^{-ia^\mu P_\mu}$, where $P$ is the generator of translations (and so we may define an operator $\mathcal{P}_\mu=-i\partial_\mu$ on the Hilbert space).
When I do this, I find: $$[J^{\mu\nu},e^{ix^\alpha P_\alpha}\phi(x)e^{-ix^\alpha P_\alpha}]=-\mathcal{J}^{\mu\nu}e^{ix^\alpha P_\alpha}\phi(x)e^{-ix^\alpha P_\alpha}$$
My idea was to then multiply both sides by $e^{-ix^\alpha P_\alpha}$ from the left and $e^{ix^\alpha P_\alpha}$ from the right, to find $$[e^{-ix^\alpha P_\alpha}J^{\mu\nu}e^{ix^\alpha P_\alpha},\phi(x)]=-e^{ix^\alpha P_\alpha}\mathcal{J}^{\mu\nu}e^{-ix^\alpha P_\alpha}\phi(x) \tag{1}$$
Now on the left hand side i can use $$e^{-ix^\alpha P_\alpha}J^{\mu\nu}e^{ix^\alpha P_\alpha}=J^{\mu\nu}-ix^\alpha[P_\alpha,J^{\mu\nu}]=J^{\mu\nu}-x^\mu P^\nu+x^\nu P^\mu \tag{2}$$
where I used the Poincare algebra. Substituting this into (1), (I might have gotten a sign wrong somewhere) $$[J^{\mu\nu},\phi(x)]+i(x^\mu\partial^\nu-x^\nu\partial^\mu)\phi(x)=-e^{ix^\alpha P_\alpha}\mathcal{J}^{\mu\nu}e^{-ix^\alpha P_\alpha}\phi(x)$$ where I used $[P^\mu,\phi(x)]=-i\partial^\mu\phi(x)$. Now for the right hand side, I am tempted to use (in analogy with (2)) $$e^{ix^\alpha P_\alpha}\mathcal{J}^{\mu\nu}e^{-ix^\alpha P_\alpha}=\mathcal J^{\mu\nu}-ix^\alpha[P_\alpha,\mathcal J^{\mu\nu}]$$ however I am unsure as to how to proceed, because, as far as I know, the usual commutation relations hold between $P$ and $J$ (or equivalently on their representations $\mathcal P$ and $\mathcal J$), but here I have a "mixed" commutator, between P and $\mathcal J$.
I know the answer should be $$[J^{\mu\nu},\phi(x)]=-\mathcal J^{\mu\nu}\phi(x)+i(x^\mu\partial^\nu-x^\nu\partial^\mu)\phi(x)$$ so if what I wrote above is right (which it isn't, at the very least due to a sign error somewhere, which I'm not too bothered about at the moment), then it must be that $[P_\alpha,\mathcal J^{\mu\nu}]=0$, which leaves me a bit perplexed.
Answer: (signs might be completely wrong here) In the following I use hats on quantum Hilbert-space operators to distinguish them from the differential operators acting on fields, which have no hats.
Further I use that for any operators $\hat O(x)$ we have
$$
\hat O(x) = e^{-ix\cdot \hat P} \hat O(0) e^{i x \cdot \hat P}.
$$
This is equivalent to the statement that
$$
[\hat P^{\mu},\hat O(x)] \equiv \widehat{P^{\mu} O}(x) = -i (\partial^{\mu} \hat O)(x)
$$
where in the "field representation" we have $P^{\mu} = -i \partial^{\mu}$. Also I say that
$$
[\hat J^{\mu \nu}, \hat \phi(0)] \equiv \widehat{J^{\mu \nu} \phi}(0) = S^{\mu \nu} \hat \phi(0),
$$
where $S^{\mu \nu}$ are matrices in some internal space in which the fields live. The question is now, given that we know $\widehat{J^{\mu \nu} \phi}$ at space-time pt $x = 0$, namely $S^{\mu \nu} \hat \phi$, what is $\widehat{J^{\mu \nu} \phi}$ at arbitary pt $x$. This is of course determined by the Poincare algebra.
$$
[\hat J^{\mu \nu}, \hat \phi(x)] = [\hat J^{\mu \nu}, e^{-i x \cdot \hat P} \hat \phi(0) e^{ix \cdot \hat P}] = e^{-i x \cdot \hat P} [e^{i x \cdot \hat P} \hat J^{\mu \nu} e^{-i x \cdot \hat P}, \hat \phi(0)] e^{i x \cdot \hat P}
\\
= e^{-i x \cdot \hat P} [\hat J^{\mu \nu} + x^{\mu} \hat P^{\nu} - x^{\nu} \hat P^{\mu}, \hat \phi(0)] e^{i x \cdot \hat P}
= e^{-i x \cdot \hat P} \Big ( [\hat J^{\mu \nu}, \hat \phi(0)] + x^{\mu} [\hat P^{\nu},\hat \phi(0)] - x^{\nu} [\hat P^{\mu},\hat \phi(0) ] \Big) e^{i x \cdot \hat P}
\\
= e^{-i x \cdot \hat P} \Big ( \widehat{ J^{\mu \nu} \phi}(0) + x^{\mu} \widehat{ P^{\nu} \phi}(0) - x^{\nu} \widehat{ P^{\mu} \phi}(0) \Big) e^{i x \cdot \hat P}
= e^{-i x \cdot \hat P} \Big ( S^{\mu \nu} \hat \phi(0) - i x^{\mu} (\partial^{\nu}\hat \phi)(0) + ix^{\nu} (\partial^{\mu}\hat \phi)(0) \Big) e^{i x \cdot \hat P}
\\
= S^{\mu \nu} \hat \phi(x) - i x^{\mu} (\partial^{\nu}\hat \phi)(x) + ix^{\nu} (\partial^{\mu}\hat \phi)(x).
$$
I.e.
$$
\widehat{J^{\mu \nu} \phi}(x) \equiv [\hat J^{\mu \nu}, \hat \phi(x)] = S^{\mu \nu} \hat \phi(x) - i x^{\mu} (\partial^{\nu}\hat \phi)(x) + ix^{\nu} (\partial^{\mu}\hat \phi)(x).
$$ | {
"domain": "physics.stackexchange",
"id": 83910,
"tags": "quantum-field-theory, special-relativity, operators, lie-algebra, poincare-symmetry"
} |
Tensors and rotations | Question: All the tensors that I have studied so far have always appeared with some kind of rotation. For example, spherical tensors rotate as spherical harmonics, and tensors in the context of special relativity transform via the Lorentz matrices, which are just rotations in the 4-dimensional space-time. My question is the following: do all objects that are called tensors always have to have some kind of rotation associated with them?
Answer: There is a rather general notion of tensor in physics for which the answer to your question is yes provided you replace the word "rotation" with "group element." In particular, the notion of a tensor that is often used in physics is not restricted to that of multilinear maps on manifolds (which naturally have a particular transformation law under coordinate transormations).
Notice that both the set of rotations, and the set of Lorentz transformations form groups under matrix multiplication. The group of rotations on $3$-dimensional Euclidean space, $\mathbb R^3$, is called $\mathrm{SO}(3)$. The group of Lorentz transformations on Minkowski space $\mathbb R^{3,1}$ is called $\mathrm{SO}(3,1)$. So when we say that a tensor is an object that transforms in a specific way under rotations, we are specifying that the object transforms in a specific way when acted on by elements of certain groups. For example, a $(k,0)$ tensor under rotations transforms as
$$
T^{i_1\cdots i_k} \to R^{i_1}_{\phantom{i_1}j_1}\cdots R^{i_k}_{\phantom{i_k}j_k}T^{j_1\cdots j_k}
$$
We can generalize this to arbitrary groups as follows. Let $G$ be a group, and let $\rho$ be a representation of the group acting on a vector space. This means that $\rho$ assigns a linear transformation $\rho(g)$ on the vector space to each group element $g$. In a given basis for the vector space, we can write the matrix representation $\rho(g)^i_{\phantom ij}$. Then we can define a $(k,0)$ tensor with respect to the representation $\rho$ as an object $T^{i_1\cdots i_k}$ that transforms as
$$
T^{i_1\cdots i_k} \to \rho(g)^{i_1}_{\phantom{i_1}j_1}\cdots\rho(g)^{i_k}_{\phantom{i_k}j_k}T^{j_1\cdots j_k}
$$
Notice that tensors under rotations and Lorentz tensors are a special case of this definition when the representations in question are those that map each rotation to itself and each Lorentz transformation to itself (often called the defining representation). | {
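As a quick numerical illustration of this representation property (my addition, not part of the original answer; assumes numpy), the following checks that transforming a $(2,0)$ tensor by two rotations in sequence is the same as transforming once by their product, and that the full contraction (trace) is rotation-invariant:

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the z-axis, an element of SO(3) in the defining representation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def transform(T, R):
    """Transform a (2,0) tensor: T^{ij} -> R^i_a R^j_b T^{ab}."""
    return np.einsum('ia,jb,ab->ij', R, R, T)

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))
R1, R2 = rotation_z(0.3), rotation_z(1.1)

# Transforming by R1 then R2 equals transforming by the product R2 @ R1,
# which is exactly what makes R -> (T -> R T R^T) a group representation.
assert np.allclose(transform(transform(T, R1), R2), transform(T, R2 @ R1))

# The trace (full contraction) is invariant, as expected for orthogonal R.
assert np.isclose(np.trace(transform(T, R1)), np.trace(T))
```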
"domain": "physics.stackexchange",
"id": 8076,
"tags": "rotation, tensor-calculus"
} |
HackerRank 'Bot Saves Princess 2' | Question: I submitted the following Python 3 code for this HackerRank challenge. Given an n × n grid with an m and a p in random cells of the grid, it prints the next step in a path that moves the m to the p. I'd like any feedback on this.
def nextMove(n,r,c,grid):
#First, find the princess
for i in range(len(grid)):
for j in range(len(grid[0])):
if grid[i][j] == "p":
a = i
b = j
# i = row, j = column
nm = ("UP" if (r-a) > 0 else "DOWN")
if r-a == 0:
nm = ("LEFT" if (c-b) > 0 else "RIGHT")
return nm
n = int(input())
r,c = [int(i) for i in input().strip().split()]
grid = []
for i in range(0, n):
grid.append(input())
print(nextMove(n,r,c,grid))
Answer: Mathias gave an excellent answer. However, the problem gave some starting "suggestions", like working with the whole grid. If we are to stick to that, I suggest you make a separate function for finding the princess:
def findPrincess(grid):
for r, line in enumerate(grid):
if "p" in line:
return (r, line.index("p"))
This way, the function stops the moment it finds "p", instead of going through the rest of the lines, like in your code.
Also, we avoid (fairly slow) indexing by using enumerate, which is almost always better than range(len(...)).
I'll also add that Mathias used exception handling where I chose if. There is some cost to exception handlers whenever an exception occurs, but there is also the fact that my code looks twice for "p" in the last line. On average, I have no idea which is faster, nor do I think it matters for small grids. In the end, I picked if a) to show an alternative approach, and b) because it looked visually a bit nicer to me.
As for the rest of your function, for the sake of readability, I'd refactor
nm = ("UP" if (r-a) > 0 else "DOWN")
if r-a == 0:
nm = ("LEFT" if (c-b) > 0 else "RIGHT")
return nm
like this:
if r > a:
return "UP"
elif r < a:
return "DOWN"
elif c > b:
return "LEFT"
elif c < b:
return "RIGHT"
else:
return "ALREADY THERE"
The last elif...else... block can be replaced with else: return "RIGHT", but you'll need this version soon, for the cleaning bot problems. | {
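For completeness, combining the findPrincess helper with the refactored branching gives one possible final version (my assembly of the suggestions above; the original signature is kept, even though n goes unused):

```python
def findPrincess(grid):
    # Stops at the first line containing "p" instead of scanning the whole grid.
    for r, line in enumerate(grid):
        if "p" in line:
            return (r, line.index("p"))

def nextMove(n, r, c, grid):
    a, b = findPrincess(grid)
    if r > a:
        return "UP"
    elif r < a:
        return "DOWN"
    elif c > b:
        return "LEFT"
    elif c < b:
        return "RIGHT"
    else:
        return "ALREADY THERE"

print(nextMove(3, 0, 0, ["m--", "---", "--p"]))  # DOWN
```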
"domain": "codereview.stackexchange",
"id": 20425,
"tags": "python, programming-challenge, python-3.x"
} |
Approximately how much would the pressure drop? | Question: A container made from aluminium ($0.5~\mathrm{mm}$ wall thickness, $10^3~\mathrm{cm^3}$ volume) is a closed system; it holds nitrogen at 50 psi and a temperature of $80^\circ~\mathrm C.$
Approximately :
How much pressure will be lost after 10 years?
0.1 psi or 1 psi or more ...
I know that gas permeation through a solid is extremely slow and negligible in most cases. It's hard to get a general idea without an approximate number.
Answer: Consider Fick's second law of diffusion, in one dimension, where $u$ is the concentration of the diffusing gas (in $\mathrm{mol/m^3}$) and $D$ the diffusion coefficient:
$$\frac{\partial u}{\partial t}=D\frac{\partial^2 u}{\partial x^2}$$
If we assume the concentration of gas outside the container to be much smaller than inside (a reasonable assumption), then the right hand concentration gradient can be determined from Fick's first law to be (with $\tau$ the thickness of the container walls):
$$\frac{\partial u}{\partial x}= -\frac{u-u_0}{\tau}$$
Where $u_0$ is the nitrogen concentration outside the box (assumed constant).
With the Ideal Gas law:
$$pV=nRT$$
$$p=\frac{n}{V}RT=uRT$$
$$u=\frac{p}{RT}$$
Also:
$$u_0=\frac{\chi p_a}{RT}$$
Where $p_a$ is atmospheric pressure and $\chi=0.78$ the molar fraction of nitrogen in air.
With those relations and $A$ the total surface area of the container, we get:
$$-\frac{V}{RT}\frac{dp}{dt}=\frac{AD}{\tau}\Big(\frac{p}{RT}-\frac{\chi p_a}{RT}\Big)$$
$$-V\frac{dp}{dt}=\frac{AD}{\tau}(p-\chi p_a)$$
$$-\frac{dp}{p-\chi p_a}=\frac{AD}{\tau V}dt$$
$$\int_{p_0}^{p(t)}\frac{dp}{p-\chi p_a}=-\frac{AD}{\tau V}\int_0^tdt$$
$$\ln\Bigg(\frac{p(t)-\chi p_a}{p_0-\chi p_a}\Bigg)=-\frac{AD}{\tau V}t$$
We'll call:
$$\alpha=-\frac{AD}{\tau V}$$
$$\implies \Large{p(t)=\chi p_a+(p_0-\chi p_a)e^{-\alpha t}}$$
Note that this expression does not contain $T$. $D$ however is temperature dependent as the following figure shows, for diffusion of gases through solids:
(Source, page 6)
As specific values for aluminium/nitrogen are hard to find we'll use the above values to get a crude estimate. First we need to calculate $\alpha$ from:
$A=600 \times 10^{-4}\mathrm{m^2}$
$D=10^{-12}\mathrm{m^2s^{-1}}$
$\tau=0.5\times 10^{-3}\mathrm{m}$
$V=10^{-3}\mathrm{m^3}$
$\implies \alpha \approx 10^{-7}\mathrm{s^{-1}}$
For 10 years:
$t=10\times365\times24\times60\times60=3.15 \times 10^8\mathrm{s}$
This puts the upper estimate of $-\alpha t \approx -30$ and thus $p(\text{10 years})\approx p_0e^{-30} \approx \chi p_a=11.5\mathrm{psi}$.
10 years is of course also quite a long time and $0.5\:\mathrm{mm}$ quite thin for a container holding a gas starting at $5.5\:\mathrm{Bar}$! | {
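For the record, the estimate above is easy to reproduce numerically (same assumed figures as in the answer):

```python
import math

# Geometry and material estimates from the answer above
A   = 600e-4      # total surface area, m^2 (10 cm cube: 6 sides of 100 cm^2)
D   = 1e-12       # diffusion coefficient, m^2/s (crude upper estimate)
tau = 0.5e-3      # wall thickness, m
V   = 1e-3        # volume, m^3

chi = 0.78        # molar fraction of N2 in air
p_a = 14.7        # atmospheric pressure, psi
p_0 = 50.0        # initial pressure, psi

alpha = A * D / (tau * V)        # decay constant, 1/s
t = 10 * 365 * 24 * 3600         # 10 years in seconds

p = chi * p_a + (p_0 - chi * p_a) * math.exp(-alpha * t)
print(f"alpha = {alpha:.2e} 1/s, alpha*t = {alpha*t:.1f}, p(10 yr) = {p:.2f} psi")
```

With these numbers the exponent comes out near $-38$ rather than $-30$, but either way $e^{-\alpha t}$ is utterly negligible and the pressure equilibrates to $\chi p_a \approx 11.5$ psi.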
"domain": "physics.stackexchange",
"id": 32385,
"tags": "fluid-dynamics, material-science, physical-chemistry"
} |
What is a dead comet? | Question: How is a Dead Comet different from the normal comet?
How are they formed?
And why is the Halloween asteroid 2015 TB145 called a dead comet?
Answer: A comet is usually characterized by its tail. A dead comet has lost all its ices and gases (responsible for producing this tail), leaving just a rocky core.
The Halloween comet is such a dead comet, in that it has no tail, but furthermore it resembles a skull, making it particularly relevant for Halloween. | {
"domain": "astronomy.stackexchange",
"id": 1180,
"tags": "asteroids, comets"
} |
PageRank for a Non-Random Searcher | Question: I'm looking to adapt the PageRank algorithm as a centrality measure in a network. This network however, unlike the "random surfer" of the original paper on PageRank, or the random library browser for Eigenfactor.org, doesn't have a random browser who can leave and jump off to some other network. The theoretical reader is reading only this literature, and reading it completely.
As I understand it, the damping factor in the usual implementation of PageRank is 1 - probability of the random surfer jumping to a different site, and is usually set at 0.85. Is it reasonable then, in an entirely closed network, to set this value = 1.0, or is there something I'm not seeing?
Some details of the network, which would probably be helpful:
All the networks I will be looking at are fairly small, less than 1000 nodes, and directed. They're citation networks - with papers as nodes and edges as citations between papers, so inherently there are no isolated nodes not connected to any other nodes, as their inclusion in the network is conditional on there being a link to or from the network. There's no reason to believe the network is strongly connected - indeed, I'm pretty sure they're inherently not.
Answer: I'm not familiar at all with details of the PageRank theory, but here's an intuitive answer: Suppose you have a huge connected graph plus a single isolated vertex that you wish to reach. Without random jumps there's no hope to stop surfing. Does the algorithm exclude such bad instances? More generally if the graph is disconnected the jumps would be necessary to reach every vertex. | {
"domain": "cstheory.stackexchange",
"id": 1033,
"tags": "ds.algorithms, advice-request, formal-modeling, ni.networking-internet"
} |
When you increase the tension on a string, how is the standing wave affected? | Question: I know that wave velocity is the product of wavelength and frequency, and that velocity is proportional to string tension. Does this mean that if you increase the tension on a string, the wavelength, frequency, and velocity will all increase?
Answer: This is something that I always see students getting tripped up with.
The equation $v=f\lambda$ just relates these variables together. It does not tell us, in general, how changing one variable changes another. You need additional information.
For example, the velocity of waves on a string is determined by $$v=\sqrt{\frac{F}{\mu}}$$ where $F$ is the tension in the string and $\mu$ is the linear mass density. The wavelength and frequency of the wave doesn't determine the speed. It's a property of the string. Also note that your statement that the velocity is proportional to the tension is incorrect.
If we are talking about generating waves on a string by wiggling one end, then the frequency is determined by how you wiggle the string. Then you can determine the wavelength of these waves by $v=f\lambda$
However, if we are talking about standing waves where only certain wavelengths are allowed, then we can determine the frequency we need to wiggle the string at by $v=f\lambda$
So, your question
Does this mean that if you increase the tension on a string, the wavelength, frequency, and velocity will all increase?
does not have an answer unless you specify what system you are looking at (although the velocity increases for sure). | {
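To make the standing-wave case concrete, here is a small sketch (the length, density and tensions are made-up values): with both ends fixed, the wavelength $\lambda = 2L/n$ is set by the geometry, so raising the tension raises $v = \sqrt{F/\mu}$ and hence the frequency $f = v/\lambda$:

```python
import math

def wave_speed(F, mu):
    """Speed of transverse waves on a string: v = sqrt(F / mu)."""
    return math.sqrt(F / mu)

# Standing wave on a string of length L fixed at both ends: the n-th mode
# always has wavelength 2L/n, so only the frequency responds to tension.
L, mu, n = 0.65, 0.005, 1          # metres, kg/m, fundamental mode (assumed values)
lam = 2 * L / n
for F in (50.0, 100.0, 200.0):     # doubling the tension each time
    f = wave_speed(F, mu) / lam
    print(f"F = {F:6.1f} N  ->  v = {wave_speed(F, mu):6.1f} m/s,  f = {f:6.1f} Hz")
```

Each doubling of the tension multiplies the frequency by $\sqrt 2$, while the wavelength stays put.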
"domain": "physics.stackexchange",
"id": 57287,
"tags": "forces, waves, frequency, oscillators"
} |
Answer the questions in one word with help of clues and images given | Question: Name the things associated with Antonie Van Leeuwenhoek - (6 pictures are given for help and four of them are associated with these clues)
a)Leeuwenhoek was the first to describe these and hence called the "Father of Microbiology"
b)He found many of these (rotifers) in the rainwater collected in ditches and canals
c)Leeuwenhoek was the first to describe plant vessels in this cross section of an ash tree
d)Leeuwenhoek used this to observe microorganisms.
Answer: Since you have apparently tried to answer the question here is my interpretation. I list the six pictures (numbered from top to bottom) and link four of them to an answer.
1 - yeast cells?
2 - ash tree section (c)
3 - rotifer (b)
4 - muscle fibre??
5 - bacteria? (a)
6 - Leeuwenhoek's microscope (d)
According to the WP page on van Leeuwenhoek he did describe the banded pattern in muscle, but there is no mention of yeast (although he would have had no trouble seeing these cells if he could see bacteria).
It's a very odd question. | {
"domain": "biology.stackexchange",
"id": 2825,
"tags": "microbiology, homework"
} |
Why Not Refine Crude Oil At Source | Question: It seems like refining of oil is often done far away from where the oil was initially captured and contained from the upstream source. Why?
This is not an engineering question is it?
Answer: Some places, like the North Slope of Alaska, are inhospitable to the construction of refineries. It is cheaper and more convenient in this case to transport the crude to a distant refinery via a pipeline.
Some wells by themselves do not produce enough crude oil to justify the construction of a dedicated refinery next door. In this case, it is far cheaper to run a pipeline or railroad cars from an entire field of wells to a distant refinery, where the crude can be economically fed into the process flow along with crude from a large number of other wells.
Refineries require intensive infrastructure development to support their operation. So even where the weather is not a barrier and flow from the wells is large, it is cheaper to transport the crude to an existing refinery and enlarge it if necessary to handle the extra product. | {
"domain": "engineering.stackexchange",
"id": 4705,
"tags": "petroleum-engineering, infrastructure"
} |
How to determine the lateral earth pressure in a double-walled cofferdam? | Question: The design of a retaining wall commonly involves determining the lateral earth pressure using either Rankine theory or Coulomb theory. Both theories involve mobilising the shear resistance of a triangular wedge of soil extending for a considerable distance away from the base of the wall.
In the case of a double-walled cofferdam, such as the one in the picture below, the short distance between the two walls would prevent such failure wedge from extending all the way down to the bottom. In which case, how does one go about determining the earth pressure from the sand fill material in between the two walls?
Answer: From what I read, you are looking at the pressure the sand between the sheet piling exerts on them. In this case, I see two possibilities: (1) log-spiral analysis or (2) elastic analysis of Boussinesq.
Log Spiral Analysis
The log spiral analysis assumes that soil pressure is mobilised by a soil mass that follows the shape of a log spiral curve. This is commonly used for braced trench excavations, and the curve of the mass must intersect the surface at the perpendicular. The analysis is indeterminate, so a trial-and-error graphical (scaled) method is recommended, but we have worked out a computer-based algorithm that performs this trial-and-error process computationally.
In this case though, in your trial and error analysis, you can consider that the curve must be forced to occur within the geometrical limits of the distance between piled walls. So it could represent a realistic condition.
Log spiral is suggested as applicable to all passive soil retention problems. I think this assumption would be applicable to your situation, but this is something that should be verified.
Boussinesq Elasticity Theory
Boussinesq theory can be used to look at lateral (and vertical) pressure problems where deformation does not occur. In your case deformation will likely occur, but assuming that it cannot will produce higher stresses/pressures than are expected (there is no relaxation under the theory) so it will be a conservative result.
Also there is the assumption of an elastic half space within Boussinesq theory. As your system is restricted by hydrostatic pressure, it could be considered to behave as an elastic half space. But more information would be required.
Other Considerations
A very good, comprehensive, but dated information source is the Steel Sheet Piling Design Manual (1984). Cellular cofferdams and pressure analysis is included, however, and a copy can be viewed at scribd.com here.
In the photo provided there is no doubt going to be construction traffic travelling along the region between the piles. I have used Boussinesq (as modified) specifically for this purpose on previous projects, to ensure that the structure can withstand these loadings. This is another very important issue to be studied - it will require the analysis of the specific equipment, track patterns and loadings - essentially the equipment manufacturers data. Your analysis should also be closely coordinated with the construction programme, to include the numbers and likely configurations of equipment that will be used. Not an easy task.
Schematic Of Suggested Analysis
In the figure below, the suggested approach is shown. Of course all of the conditions are not known, for example the locations of sea/river bed, the hydrostatic conditions between the sheet piled retaining elements, etc.
Construction loadings at the top of the section can be modelled using the track patterns/footprints and associated loadings. Boussinesq theory is used to compute the lateral stresses at the retaining structure as illustrated by the yellow and green stress envelopes, and these can be superimposed to accommodate any surface loading configuration that is desired.
The log spiral analysis, however, is an iterative process, where the origin of the curve, at point O must be perturbed such that the curve always intersects point A at right angles and also intersects point C at the base of the excavation. This yields a series of soil envelopes within ABC that reach a maximum value as illustrated by the curve and points above point A.
Note that this considers a curved failure surface. The assumption of passive conditions is difficult to assess, but near to corners of the cofferdam the box effect should provide sufficient rigidity. Towards the centre of the sides of the box this assumption needs further examination.
The traditional way to carry out the log spiral analysis is graphically. That is to construct a log spiral template to scale according to a scale drawing and shift it around the drawing under the constraints of points A and C. The area of ABC is calculated for each trial until a clear maximum is reached. However we have developed an algorithm that will carry this out computationally, so no graphical analysis is needed.
Depending upon your geometry, you may not encounter a maximum, instead you may be limited by point D. In this case the envelope defined by DBC would be the value of interest.
One of the most difficult aspects of such an analysis will be to establish the worst case base condition. Careful consideration will be needed to determine which events could coincide, in terms of equipment configurations, fluctuations in water levels and other issues, such as potential de-watering risks. A risk-based approach may be advised, that warrants more than the traditional factor of safety methods. | {
"domain": "engineering.stackexchange",
"id": 479,
"tags": "civil-engineering, geotechnical-engineering, soil"
} |
Symbolic Evaluation for Type Inference in a Dynamic Language | Question: Say I have the following contrived example code:
a(1)
function a(x) {
var n = b(x)
var m = c(x)
return n + m
}
function b(x) {
var n = d(x / 2)
var m = e(x)
return n - m
}
function c(x) {
var n = d(x)
var m = e(x)
return n * m
}
function d(x) {
return x + 2
}
function e(x) {
return x + 3
}
Then, I am analyzing the method d(x). From just analyzing it by itself, we can infer that x must be some sort of number. We do this by what seems like simulating x + 2, and realizing that for that to be satisfied x must be a number. Not quite sure how to implement the type inference here, not sure if it uses symbolic evaluation too.
But then we get to the function call a(1). In this case to do typechecking / type inference on d(x), we have to somehow traverse down the tree of functions, simulating how x is transformed along the way. It finds out that it is divided by 2 somewhere in there, so it goes from integer to float. So we check based on our original assumption that d(x) is a number, and agree that it will be valid.
That is just me roughly trying to figure out how to do type checking / type inference.
I'm wondering two things:
If you need to do some sort of symbolic evaluation to do type checking / inference. If so, any suggestions on resources or places/ways to better understand that.
Say we have a gigantic app with millions of lines of code. Say between a(x) and d(x) there were 500 function calls, doing all sorts of things to x. Wondering if we have to simulate that entire process to figure out if x will be a valid type, or if we can somehow limit the scope and do some sort of shortcut. If we had to traverse the 100's of functions for every variable, that would be a ton of evaluation and would be slow. So wondering how to limit the scope of the search somehow, to do type checking / inference.
Basically I am figuring out how to do type checking / inference. The resources are mostly on the lambda calculus from what I've found, which I am not too familiar with and works differently I would imagine than an imperative program.
For example, they seem to be mentioning that here:
(1) 3.2 Type Graph
After each method has been converted into its intermediate
representation, zscript gradually builds a type graph by each
method called by the program.
The CPA is non-iterative. Only the methods that could
potentially be called are processed, and (except for templates)
they are only processed one time only.
In our implementation of the algorithm, we use a work list. First,
the constructor of the class containing the main method, and the
main method itself is added to the work list. Then, while the
work list is not empty, the methods of the work list are
processed. During the processing of a method, more methods
may be added to the work list. The algorithm terminates when no
more methods remain to be analyzed.
Answer: I am not sure you actually understand the terms you use here.
Symbolic execution has nothing to do with type inference
Symbolic execution is used for static analysis of programs, and is a special case of abstract interpretation. The symbolic executor provides an symbolic input value and traces the program execution given that symbolic input value.
Type inference is much simpler: it involves automatically deducing the type of programs. Nowhere is symbolic execution of the program involved.
Type inference is usually undecidable
For a dynamically typed language, type inference is not decidable. Moreover, even if we have a static type system, type inference is usually still undecidable.
The only (?) type system we know of that has decidable type inference is the Hindley-Milner type system, i.e. System F with only Rank-1 types (plus a few conservative extensions). It is known that H-M can type imperative programs (such as Standard ML), as long as you can give good typing rules. When I was an undergraduate, we did an exercise in which we implemented H-M type inference for a restricted, strongly-typed version of Scheme (which was imperative). It was a lot of fun, but it also proves that
type inference is possible for imperative programs.
Update: oh, how could I have left out the canonical reference for polymorphic, inferable typing of imperative programs? The canonical reference would be Xavier Leroy's PhD thesis, Typage polymorphe d'un langage algorithmique (translated into English as Polymorphic typing of an algorithmic language).
I suggest that you start reading on Hindley-Milner and Algorithm W, as a first step (see, e.g., Chapter 22 of Types and Programming Languages by Benjamin C. Pierce). I also suggest that you do more basic reading and learning before asking on CS.SE, as that would make this site work much better. The theory of programming languages is not like coding and must be studied systematically if you want to have a solid grasp of it :-) | {
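To give a flavour of what H-M inference does mechanically, here is a heavily stripped-down sketch of its unification step (my illustration, not Algorithm W itself; the occurs check is omitted). Types are modelled as plain tuples:

```python
# A minimal sketch of the unification step at the heart of Algorithm W.
# Types are tuples: ('var', name) | ('int',) | ('fun', arg, res).
def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if a[0] == 'var':
        return {**subst, a[1]: b}
    if b[0] == 'var':
        return {**subst, b[1]: a}
    if a[0] == b[0] == 'fun':
        subst = unify(a[1], b[1], subst)
        return unify(a[2], b[2], subst)
    raise TypeError(f"cannot unify {a} and {b}")

def walk(t, subst):
    # Follow a chain of variable bindings to the representative type.
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

# Inferring d(x) = x + 2: '+' forces x's type variable to unify with int.
s = unify(('var', 'x'), ('int',), {})
print(walk(('var', 'x'), s))  # ('int',)
```

This is exactly the mechanism that lets the checker conclude "x must be a number" from `x + 2` without ever executing the program, which is why no symbolic execution is needed.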
"domain": "cs.stackexchange",
"id": 11793,
"tags": "type-checking, type-inference"
} |
Is hermiticity a basis-dependent concept? | Question: I have looked in wikipedia:
Hermitian matrix
and
Self-adjoint operator,
but I still am confused about this.
Is the equation:
$$ \langle Ay | x \rangle = \langle y | A x \rangle \text{ for all } x \in \text{Domain of } A.$$
independent of basis?
Answer: The relation
$$ \langle Ay | x \rangle = \langle y | A x \rangle \text{ for all } x \in \text{Domain of } A\tag1$$
makes no reference to any basis at all, so it is indeed basis-independent.
In fact, this definition, which seems pretty strange when you first meet it, arises precisely out of a desire to make things basis-independent. The particular observation that sparks the definition is this:
Let $V$ be a complex vector space with inner product $⟨·,·⟩$, and let $\beta=\{v_1,\ldots,v_n\}$ be an orthonormal basis for $V$ and $A:V\to V$ a linear operator with matrix representation $A_{ij}$ over $\beta.$ Then, if this matrix representation is hermitian, i.e. if $$A_{ji}^*=A_{ij}\tag2$$ when $A$ is represented on any single orthonormal basis, then $(2)$ holds for all such orthonormal bases.
(Similarly, for a real vector space simply remove the complex conjugate.)
Now this is a weird property: it makes an explicit mention of a basis, and yet it is basis independent. Surely there must be some invariant way to define this property without any reference to a basis at all? Well, yes: it's the original statement in $(1)$.
To see how we build the invariant statement out of the matrix-based porperty, it's important to keep in mind what the matrix elements are: they are the coefficients over $\beta$ of the action of $A$ on that basis, i.e. they let us write
$$ Av_j = \sum_i A_{ij}v_i.$$
Moreover, in an inner product space, the coefficients of a vector on any orthonormal basis are easily found to be the inner products of the vector with the basis: if $v=\sum_j c_j v_j$, then taking the inner product of $v$ with $v_i$ gives you
$$\langle v_i,v\rangle = \sum_j c_j \langle v_i,v_j\rangle = \sum_j c_j \delta_{ij} = c_i,$$
which then means that you can always write
$$v=\sum_i \langle v_i,v \rangle v_i.$$
(Note that if $V$ is a complex inner product space I'm taking $⟨·,·⟩$ to be linear in the second component and conjugate-linear in the first one.)
If we then apply this to the action of $A$ on the basis, we arrive at
$$ Av_j = \sum_i A_{ij}v_i = \sum_i \langle v_i, Av_j\rangle v_i, \quad\text{i.e.}\quad A_{ij} = \langle v_i, Av_j\rangle,$$
since the matrix coefficients are unique. We have, then, a direct relation between matrix element and inner products, and this looks particularly striking when we use this language to rephrase our property $(2)$ above: the matrix for $A$ over $\beta$ is hermitian if and only if
$$
A_{ji}^* = \langle v_j, Av_i\rangle^* = \langle v_i, Av_j\rangle = A_{ij},
$$
and if we use the conjugate symmetry $\langle u,v\rangle^* = \langle v,u\rangle$ of the inner product, this reduces to
$$
\langle Av_i, v_j\rangle = \langle v_i, Av_j\rangle. \tag 3
$$
Now, here is where the magic happens: this expression is exactly the same as the invariant property $(1)$ that we wanted, only it is specialized for $x,y$ set to members of the given basis. This means, for one, that $(1)$ implies $(2)$, so that's one half of the equivalence done.
In addition to this, there's a second bit of magic we need to use: the equation in $(3)$ is completely (bi)linear in both of the basis vectors involved, and this immediately means that it extends to any two vectors in the space. This is a bit of a heuristic statement, but it is easy to implement: if $x=\sum_j x_j v_j$ and $y=\sum_i y_i v_i$, then we have
\begin{align}
\langle A y, x\rangle
& =
\left\langle A \sum_i y_i v_i, \sum_j x_j v_j\right\rangle &&
\\ & =
\sum_i \sum_j y_i^* x_j \langle A v_i, v_j\rangle &&\text{by linearity}
\\ & =
\sum_i \sum_j y_i^* x_j \langle v_i, Av_j\rangle &&\text{by }(3)
\\ & =
\left\langle \sum_i y_i v_i, A\sum_j x_j v_j\right\rangle &&\text{by linearity}
\\ & =
\langle y, A x\rangle,&&
\end{align}
and this shows that you can directly build the invariant statement $(1)$ out of its restricted-to-a-basis version, $(3)$, which is itself a direct rephrasing of the matrix hermiticity condition $(2)$.
Pretty cool, right? | {
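A quick numerical sanity check of the equivalence (my addition, assuming numpy): `np.vdot` conjugates its first argument, matching the convention above that the inner product is conjugate-linear in the first slot:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.conj().T                      # Hermitian: A[j, i].conj() == A[i, j]

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The basis-free property (1): <Ay, x> == <y, Ax> for a Hermitian A.
assert np.isclose(np.vdot(A @ y, x), np.vdot(y, A @ x))

# A change of orthonormal basis (a unitary U) preserves hermiticity:
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A2 = Q.conj().T @ A @ Q
assert np.allclose(A2, A2.conj().T)
```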
"domain": "physics.stackexchange",
"id": 37295,
"tags": "quantum-mechanics, operators, hilbert-space, observables"
} |
How can a molecule of a few angstroms absorb visible light of a few hundred nanometers? | Question: I guess visible light is visible because it has the right wavelength to be absorbed (or not) and emitted (or not) by many different molecules. Now visible light has a wavelength on the order of a few hundred nanometers, while the typical size of a molecule is rather on the order of a few angstroms.
I guess the difference between the energy levels of the molecules will have the right magnitude that corresponds to the energy carried by a single photon. But it's still slightly strange, because a classical antenna would need a length of a quarter wavelength to be most effective at absorbing or emitting energy.
I also wonder what the effective size of the scattering cross-section of a molecule to absorb a photon of an appropriate wavelength will be. Is it more in the order of the size of the molecule, or more in the order of the geometric mean between the wavelength and the size of the molecule?
Answer: In quantum mechanics you calculate the charge density by taking the square of the wave function. If you do this for a hydrogen atom in a superposition of the ground state the first excited state (1s and 2p) you get an oscillating charge density. If you analyze this oscillating charge using Maxwell's equations, you get all the properties of the hydrogen atom: the absorption, the emission, the line-width...you name it. Everything the atom does in its normal interactions with light makes sense according to Maxwell's equations.
In quantum mechanics if you have a slab of material made of atoms, then at any given temperature there is a wave function made up of the superposition of the different thermal states of the slab. If you square the amplitude of this wave function you get a time-varying charge density full of oscillations. If you use Maxwell's equations and treat these oscillations as classical antennas, you will obtain the correct black-body spectrum for the solid slab. All the thermal properties of matter in its interaction with radiation (including the photo-electric effect) are consistent with Maxwell's equations.
It is true that the quantum oscillator is hundreds or thousands of times smaller than the quarter-wave dipole which is the most efficient classical absorber. But you can make a classical antenna shorter by adding an inductance. In an atom, the mass of the electron is the parameter which effectively increases the apparent inductance of the atomic antenna. The only difference with classical antennas is that you cannot normally get such a small size-to-wavelength ratio because of the high resistance of copper.
The reason physicists don't talk about this is that they have a poor understanding of classical antenna theory. I explain how this works in more detail in a series of blog posts starting here: | {
"domain": "physics.stackexchange",
"id": 14998,
"tags": "quantum-mechanics, visible-light"
} |
Analyzing movement of fringes | Question: I have taken a series of images of a fringe pattern at regular time intervals. This fringes are generated by shining a laser beam onto the CCD camera. The laser goes through a lot of optics, which is why it creates these fringes.
There is some vibration in the laser system setup which causes these fringes to move around although not by much, couple of pixels at most. I want to analyze how much these fringes are moving around. Is there an easy way to do this?
One idea that I had, and I don't know if it's right, is to take the FFT of the image, and discard the DC (constant) components. I could analyze just one frequency component. Now, if the fringe pattern is moving about, then that should change the phase of that frequency component. Does that make sense?
Thanks.
Update
As suggested by A_A, I took one frame as a reference and subtracted it from the rest:
Out of the mess of fringes, I can clearly identify one pattern which is moving about. In particular, the diagonal fringes do not seem to be changing from frame to frame. In the end, I just want to identify whether any one particular optic is responsible for this vibration, so I could set up my live camera feed to display the difference between consecutive images and play around with the optics to see if I can reduce the fringes.
Answer:
...take the FFT of the image, and discard the DC (constant) components.
That would result in removing the mean brightness from the image. Because grayscale images contain non-negative values between 0 and 255, what would be returned from that "high pass" filter (one that only "cuts" the DC) would be an image with the same grayscale variance but now centered around 0 (instead of its original mean level).
I want to analyze how much these fringes are moving around. Is there an easy way to do this?
You could obtain the absolute difference between two successive frames and then the sum of all values of that quantity as a simple metric of how much movement there was.
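Both this difference metric and a displacement estimate can be sketched in a few lines of NumPy. The function and variable names below are illustrative, and the phase-correlation variant assumes equal-size grayscale frame arrays:

```python
import numpy as np

def movement_metric(frame_a, frame_b):
    # Sum of absolute pixel differences: one number for "how much changed".
    return np.abs(frame_a.astype(float) - frame_b.astype(float)).sum()

def estimate_shift(frame_a, frame_b):
    # Phase correlation: normalised cross-correlation computed via the FFT.
    # The location of the correlation peak gives the integer-pixel
    # translation of frame_a relative to frame_b.
    A = np.fft.fft2(frame_a.astype(float))
    B = np.fft.fft2(frame_b.astype(float))
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # FFT indexing wraps negative shifts around to the far end; unwrap them.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

For the couple-of-pixel movements described in the question, tracking `estimate_shift` between consecutive frames over the whole image series gives a displacement-versus-time trace.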
If you would also like to estimate how much displacement there was between two successive frames, then the simplest thing would be to use cross-correlation (on successive frames) and track the position of its maximum. | {
"domain": "dsp.stackexchange",
"id": 320,
"tags": "image-processing, highpass-filter"
} |
A "function of proportionality" in a rate law | Question: I am currently studying rate laws and the determination of the order of a reaction. So first-order reactions remind me of linear functions $f(x)=kx$ while second-order reactions remind me of quadratic functions like $g(x)=cx^2$ in high-school algebra. Since we are dealing with reactants here, the input of the functions would be the concentrations of our substrates $[\ce{S_1}]$ and $[\ce{S_2}]$. I am wondering if there are any fancier functions that arise in "real-life" chemistry?
Answer: If by "fancier" you mean "uglier", then absolutely yes. Real chemistry is messy, much more than we can fully handle. The simple rate laws we learn in undergraduate physical chemistry are just approximations.
I've written elsewhere that, in general, a full description of chemical kinetics for any overall reaction requires studying an entire tree of simultaneous chemical reactions. There can be dozens, even hundreds of intermediate species involved, all mutually linked in complex ways, and with more than one final product ("by-products"). The difficulty in performing this full description has led to the whole field of chemical reaction network theory. I hope you like matrix algebra.
To make chemical kinetics tractable in most cases, two major approximations made are:
Forget about the whole reaction network, and look at only the overall kinetically fastest path from reagents to products
Forget about all the steps and intermediates in the fastest path, and describe only the bottleneck - the slowest step
Even with these two crushing approximations, it's possible to describe many useful reactions with a fair level of accuracy. This is partly luck, and partly unavoidable (due to simplicity naturally arising in randomness). This level is what most undergraduate and even graduate work sticks to. Even when it's a bad fit, we'll sometimes stubbornly fit a system into a particular rate law, do some variational analysis, and hope the error isn't too great.
A nice example of a reaction whose kinetics is slightly more complex than the standard "nice" rate laws is the radical reaction between hydrogen and bromine.
$$\ce{H2 + Br2 -> 2HBr}$$
According to experimental data, it's possible to obtain the following rate law (which may still be approximate!):
$$\mathrm{r_{HBr}=\frac{k_1[H_2][Br_2]^{3/2}}{[HBr]+k_2[Br_2]}}$$
The deduction is nicely described in this site. As you can see, depending on the reaction parameters, the denominator can potentially be simplified down to a single term, with some degree of approximation, and a simple-ish rate law could be used. But in the general case, it's messier. Because of these reaction networks, even the concept of a "reaction order" does not exist in general. An example of a natural but extremely difficult reaction to describe kinetically in full would be the combustion of a moderately sized alkane, such as octane ($\ce{C8H18}$).
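As a toy illustration of the rate law above, the snippet below evaluates it directly. The rate constants and concentrations are made-up placeholder numbers (not measured data), chosen only to show the self-inhibition built into the denominator: as $\ce{HBr}$ accumulates, the rate drops.

```python
# r_HBr = k1 [H2][Br2]^(3/2) / ([HBr] + k2 [Br2]), per the rate law above.
# k1, k2 and all concentrations here are placeholder values for illustration.
def rate_hbr(h2, br2, hbr, k1=1.0, k2=0.1):
    return k1 * h2 * br2 ** 1.5 / (hbr + k2 * br2)

r_early = rate_hbr(h2=1.0, br2=1.0, hbr=1e-6)   # almost no product yet
r_late = rate_hbr(h2=1.0, br2=1.0, hbr=0.5)     # product has accumulated
```

Note that because the denominator mixes two concentrations, no single "reaction order" can be assigned to this law except in limiting regimes, which is exactly the point made above.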
When taking full chemical networks into account, there appear cases with pretty unique kinetics. For example, if you haven't heard of oscillating reactions, they're a topic of considerable experimental and theoretical study. With a particular set of coupled reactions, it's possible for chemical systems to display the hallmarks of mathematical chaos. Even if individual steps can be described by simple kinetics, the global process cannot. There's a nice discussion of oscillating kinetics in this Chem.SE question (and possibly others).
I'm sure there are other ways to get strange rate equations. Some reactions just don't follow kinetic theory in its simplest form. This can lead to weird conclusions, such as negative activation energies. Furthermore, rate laws almost always assume macroscopic amounts of matter. When a tiny amount of reagents are involved, the discrete nature of matter calls for statistical corrections to chemical kinetics, because an integer number of molecules can't follow continuous analytical curves. | {
"domain": "chemistry.stackexchange",
"id": 12972,
"tags": "kinetics, analytical-chemistry, stoichiometry"
} |
How is LMS/FXLMS noise cancelling different than simple polarity inversion? | Question: Consider a noise-cancelling headphone:
If I have a noise signal from the outside world, mic it, flip the polarity by multiplying by -1 (with an op-amp or digitally), delay by the appropriate amount, and playback out of the speaker -- the summation of the noise and anti-noise at the ears should approach zero (with better performance at low frequencies).
How does FXLMS or LMS improve upon this? Why is polarity inversion ('phase cancellation') not as effective at cancelling noise, at least in the literature (e.g. Kuo 1994)?
Answer: Your polarity inversion method simply applies the same phase shift (delay) and gain to all frequencies, and is presumably not adaptive (or even closed loop). It may work reasonably well for some limited cases, where the phase and gain have been tailored for that particular environment, but any change to the environment will tend to reduce its effectiveness.
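LMS addresses this by adapting a filter online. Here is a minimal sketch of the plain LMS update, not the filtered-x (FxLMS) variant, which additionally needs an estimate of the secondary speaker-to-ear path; the function name and parameters are illustrative:

```python
import numpy as np

def lms_cancel(reference, desired, num_taps=32, mu=0.005):
    """Plain LMS: adapt FIR weights w so that w * reference tracks `desired`.

    In an ANC headset, `reference` would be the outside-noise microphone
    signal and `desired` the noise to cancel at the ear; the residual
    `error` is what the wearer actually hears.  (FxLMS additionally
    filters the reference through a secondary-path model, omitted here.)
    """
    w = np.zeros(num_taps)
    error = np.zeros(len(desired))
    for n in range(num_taps, len(desired)):
        x = reference[n - num_taps:n][::-1]  # most recent sample first
        y = w @ x                            # adaptive filter output
        error[n] = desired[n] - y            # residual after cancellation
        w += 2 * mu * error[n] * x           # steepest-descent weight update
    return error, w
```

Because the weights are driven by the residual itself, the filter keeps re-tuning its gain and phase response per frequency as the acoustic environment changes, which is exactly what a fixed polarity-inversion circuit cannot do.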
LMS however is an adaptive method which constantly "tunes" a filter to give you optimum cancellation using an appropriate gain and phase response which varies with frequency, and which adapts to a changing environment. | {
"domain": "dsp.stackexchange",
"id": 12088,
"tags": "noise, acoustics"
} |
The role of GPS in INS/GPS navigation systems | Question: Ideally, a gyroscope and an accelerometer would be enough for a complete navigation solution (attitude + position), using dead reckoning. Together they comprise an Inertial Navigation System (INS). In the non-ideal world, we couple the INS with GPS. The GPS helps 'fix' the navigation solution at certain times, to help mitigate the drift in the INS output.
My question.
Suppose I have no access to GPS, but have access to a spirit level instrument with an associated camera and image processing algorithms. This system provides absolute values of the pitch and roll angles, but with a slower update rate and lower resolution. Still, it is assumed to be drift-free. Can such a system be used in place of normal GPS to achieve a dead-reckoning solution?
P.S. The reason for not using GPS is that I am trying to find a self-supporting navigation solution...
Answer: By itself, no, it cannot.
Your method of fixing the INS system is pretty clever, but it only accounts for drift in the gyroscope. Accelerometers are also (very) prone to drift, generally showing up as an increasing velocity bias. The typical method to account for accelerometer drift is to use zero-velocity updates. By detecting points when you are not moving, you can periodically remove any non-zero velocities from your model. | {
"domain": "dsp.stackexchange",
"id": 2003,
"tags": "kalman-filters"
} |
How far can a moving object embed itself | Question: Let's say we have an electron moving at 10% the speed of light toward a 10 cm thick block of pure carbon. How do we calculate the distance that the electron travels into the carbon?
Even though there is a chance of the electron bouncing off the surface of the carbon, I am only asking about the range of distances that the electron could travel before stopping. Is it possible to calculate such a thing? I also believe that relativistic effects are going to come into play.
Answer: I think these electrons will penetrate, on average, a macroscopic distance into carbon: 40 cm. So many of them would go right through your 10 cm block.
For $v/c=0.1$, we have $\gamma=1/\sqrt{1-v^2/c^2}=1.005$, so the electron's kinetic energy is $K=(\gamma-1)mc^2=(0.005)(511\,\text{KeV})=2.5\,\text{KeV}$. Note that this is considerably more than the kinetic energies of electrons in carbon atoms; for example, since carbon has atomic number 6, the fast-moving inner electrons have an energy about 6 times those in hydrogen, or only about $80\,\text{eV}$. The incident electrons are at least 30 times more energetic.
So you are asking for the mean free path of a 2.5 KeV electron in carbon. The electron is scattering off of the electrically charged protons and the electrons in the carbon atoms. But at this incident energy, I think they are mainly scattering off the nuclei. The atomic electrons just "get out of the way". The nuclei are too massive to do that.
When a scattering process occurs, the mean free path is given by $$\ell=\frac{1}{\sigma n}$$ where $\sigma$ is the scattering cross section and $n$ is the number density of the scattering sites (i.e., how many per unit volume).
Carbon has a mass density of $\delta=2.3\times 10^3\,\text{kg}/\text{m}^3$ and a single carbon atom has a mass of $m=2.0\times 10^{-26}\,\text{kg}$ so the number density of carbon nuclei causing the incident electron to scatter is $n=\delta/m=1.1\times 10^{29}/\text{m}^3$.
The cross section of a 2.5 KeV electron on a carbon nucleus (or on a carbon atom) is harder to calculate; it's a quantum electrodynamics calculation. Let's just make a simple order-of-magnitude estimate that the cross section is $\sigma=\pi a_C^2=2.3\times 10^{-29}\,\text{m}^2$ where $a_C=2.7\times 10^{-15}\,\text{m}$ is the radius of a carbon nucleus.
Then the mean free path is $$\ell=\frac{1}{(2.3\times 10^{-29}\,\text{m}^2)(1.1\times 10^{29}/\text{m}^3)}=0.4\,\text{m}.$$
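The arithmetic of this first estimate is easy to check numerically; a sketch reproducing the numbers above:

```python
import math

# Order-of-magnitude estimate: l = 1/(sigma * n), with sigma taken as the
# geometric cross section of a carbon nucleus (the answer's first guess).
density = 2.3e3               # kg/m^3, carbon
m_atom = 2.0e-26              # kg, one carbon atom
n = density / m_atom          # ~1.1e29 nuclei per m^3

r_nucleus = 2.7e-15           # m, carbon nuclear radius
sigma = math.pi * r_nucleus ** 2   # ~2.3e-29 m^2

mfp = 1.0 / (sigma * n)       # ~0.4 m
```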
The reason that it penetrates so far is basically that atoms are mostly empty space. Nuclei are very tiny.
Correction: Even though this answer has been accepted, I'm not happy with it. I have used a gross underestimate of the cross section, and therefore gotten too long a mean free path. The scattering is essentially electrostatic, and of course the electric field of the nucleus extends far outside the nucleus itself (but then gets screened by the fields of the electrons, so it doesn't extend far outside the atom). I think the right way to think of the scattering is as (essentially classical!) Rutherford scattering of a Z=1 electron on a Z=6 nucleus. However, I need to remember how to handle the divergence of the Rutherford cross section.
Second try: Rutherford scattering should be a reasonable approximation because the incident electron isn't highly relativistic. The differential cross section for Rutherford scattering is
$$\frac{d\sigma}{d\Omega}=\frac{a_K^2}{\sin^4{\frac{\theta}{2}}}$$
where
$$a_K=\frac{Z_1 Z_2 \alpha\hbar c}{4K}$$
Here $\alpha$ is the fine structure constant, $\hbar$ is the reduced Planck constant, and $c$ is the speed of light.
Using $\hbar c=197\,\text{MeV fm}$, the numerical value of $a_K$ when $Z_1=1$, $Z_2=6$, and $K=2.5\,\text{KeV}$ is
$$a_K=8.6\times 10^{-13}\,\text{m}.$$
Now we have to integrate the differential cross section over all scattering angles:
$$\sigma=\int d\Omega \frac{d\sigma}{d\Omega}=2\pi a_K^2\int_0^\pi \frac{\sin{\theta}\,d\theta}{\sin^4{\frac{\theta}{2}}}$$
This integral diverges as $\theta\rightarrow 0$, but physically we cut it off at some small angle $\theta_0$ because the electric field of the nucleus gets screened by the atomic electrons. Doing this gives
$$\sigma=2\pi a_K^2 \int_{\theta_0}^\pi \frac{\sin{\theta}\,d\theta}{\sin^4{\frac{\theta}{2}}}=4\pi a_K^2\left(\frac{1}{\sin^2{\frac{\theta_0}{2}}}-1\right).$$
So, what to use for the cutoff angle $\theta_0$?
In Rutherford scattering, the scattering angle $\theta$ is related to the impact parameter $b_K$ (which is the distance at which the electron would pass by the nucleus if there was no electrostatic attraction) by
$$b_K=2 a_K \cot{\frac{\theta}{2}}$$
Since screening of the nucleus by the atomic electrons means that there should be little scattering when the impact parameter exceeds the radius of the atom (which for a carbon atom is $7.0\times 10^{-11}\,\text{m}$), we can find the cutoff angle from
$$\frac{1}{2}\frac{7.0\times 10^{-11}\,\text{m}}{8.6\times 10^{-13}\,\text{m}}=\cot{\frac{\theta_0}{2}}.$$
This gives $$\theta_0=0.049,$$ from which we then find $$\sigma=1.5\times 10^{-20}\,\text{m}^2$$
and
$$\ell=6.1\times 10^{-10}\,\text{m},$$
a reduction of about nine orders of magnitude from my previous result. (Oops!)
Since the inter-atomic spacing in carbon is about $$d=n^{-1/3}=2.1\times 10^{-10}\,\text{m},$$
I now think the incident electron gets only about 3 atomic layers into the carbon. Basically, at low energy, the scattering cross section is much closer to the cross-section size of the atom (because the electrostatic field of the nucleus matters throughout this region) than to the cross-sectional size of the nucleus.
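The corrected, screened-Rutherford estimate can likewise be checked numerically; a sketch reproducing the numbers above, with rounded constants:

```python
import math

# Screened Rutherford scattering estimate, as derived above.
alpha = 1 / 137.036            # fine structure constant
hbar_c = 197.327e-15           # MeV*m  (i.e. 197.327 MeV*fm)
K = 2.5e-3                     # MeV, kinetic energy (2.5 KeV)
Z1, Z2 = 1, 6

a_K = Z1 * Z2 * alpha * hbar_c / (4 * K)     # ~8.6e-13 m

# Cutoff angle from screening: cot(theta0/2) = r_atom / (2 a_K)
r_atom = 7.0e-11               # m, carbon atomic radius
theta0 = 2 * math.atan(2 * a_K / r_atom)     # ~0.049 rad

# Cross section integrated down to the cutoff angle
sigma = 4 * math.pi * a_K ** 2 * (1 / math.sin(theta0 / 2) ** 2 - 1)  # ~1.5e-20 m^2

n = 2.3e3 / 2.0e-26            # nuclei per m^3, as before
mfp = 1 / (sigma * n)          # ~6e-10 m: a few atomic layers
```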
I'll be interested to see what other people think. | {
"domain": "physics.stackexchange",
"id": 53596,
"tags": "special-relativity, particle-physics, electrons, particle-accelerators"
} |
Why is relative motion at constant velocity the same as being at rest? | Question: If I am a passenger who plays catching-the-ball game inside a vehicle that moves with a constant velocity in a straight road, why can I catch the ball repeatedly that as if the vehicle is at rest? How to explain this using first law of motion by Newton?
Answer: The explanation from the point of view of a stationary observer outside of the vehicle is that the ball has a certain horizontal velocity, which will remain constant unless it is acted on by a horizontal force (this is a consequence of Newton's first law). If the ball is thrown straight up there is no horizontal force acting on it, so it continues to move with the same horizontal velocity. The thrower also has an identical horizontal velocity, so the ball remains directly above the thrower throughout its motion, and the thrower can catch it again as it falls.
From the point of view of an observer in the vehicle, there are no horizontal forces acting on either the thrower or the ball, so it is not surprising that the thrower can throw the ball vertically upwards and catch it again.
If your question is “why is Newton’s first law true” then the only answer is “that’s just the way our universe works”. | {
"domain": "physics.stackexchange",
"id": 74233,
"tags": "newtonian-mechanics, kinematics, inertial-frames, relative-motion, galilean-relativity"
} |
How are they different: foxy and foxy-release branches | Question:
I am about to check out the source of ros2 foxy, using this command vcs import src < ros2.repos. The file ros.repos is here https://github.com/ros2/ros2/blob/master/ros2.repos.
The GitHub repository has a branch called foxy-release, and another branch called foxy. How are they different and which branch should be used?
Originally posted by Zhoulai Fu on ROS Answers with karma: 186 on 2021-02-17
Post score: 1
Original comments
Comment by gvdhoorn on 2021-02-17:
It was only a comment, but I believe this was discussed in #q371016 as well:
If you replace eloquent-release with eloquent, you'd get the state of the branches targetting Eloquent in those repositories, which is potentially different from the packages released into Eloquent.
Answer:
If you want to checkout the source code for Foxy, you should use the ros2.repos file from the foxy branch. This is the branch referenced by the "install from source" instructions for Foxy. It will give you the latest development for Foxy.
The master branch is always used for the next upcoming release (currently Galactic), and is not compatible with Foxy.
The foxy-release branch uses git tags for each source repository, representing what we call a "patch release". This is a more formal release, representing a snapshot of all the core packages that we used to produce binaries for all platforms (e.g. archives for Windows, macOS, and Linux).
Originally posted by jacobperron with karma: 1870 on 2021-02-17
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2021-02-18:
From a user perspective, the "source code for Foxy" will typically be understood to mean
the code used to build the packages I install when running apt install ros-foxy-....
That's different from "the current state of development of Foxy", which is what the foxy branch gets you.
With that in mind, it might be clearer to just say:
$ROS_DISTRO-release contains/points to the code used to build the binary packages for each Foxy release. The $ROS_DISTRO branches are the development branches, which may have already changed from what was released during the last release.
Comment by jacobperron on 2021-02-18:
Unfortunately, it's not completely true that the $ROS_DISTRO-release branch points to the code used for all binary packages. Strictly speaking it points to the code that is used to produce "patch release" tarballs. If the user is installing from debs on Ubuntu, then they are getting whatever version was most recently bloomed (and synced).
Typically, if a user wants to build Foxy from source, they are wanting to make changes to the code (and hopefully contribute back!). Or maybe they're trying to get a recent patch that hasn't been released yet. So, I would recommend working with the latest set of fixes. If you are not planning to change code, then I'd recommend installing the binaries instead.
Comment by gvdhoorn on 2021-02-18:
Unfortunately, it's not completely true that the $ROS_DISTRO-release branch points to the code used for all binary packages. Strictly speaking it points to the code that is used to produce "patch release" tarballs.
that is certainly unfortunate.
Is this documented somewhere?
Comment by jacobperron on 2021-02-18:
I thought so, but couldn't find it. We could probably document this caveat on this page: https://index.ros.org/doc/ros2/Installation/Maintaining-a-Source-Checkout/#release-versions
Comment by jacobperron on 2021-02-18:
I've proposed adding a note to the docs: https://github.com/ros2/ros2_documentation/pull/1120 | {
"domain": "robotics.stackexchange",
"id": 36096,
"tags": "ros"
} |
Calculating momentum change? | Question:
A 100 g ball with a speed of 5 m/s hits a wall at an angle of 45 degrees. The ball then bounces off the wall at a speed of 5m/s at an angle of 45 degrees. What is the change in the momentum of the ball?
When I look at this problem it seems intuitive to me that the answer should be 0, as the ball's mass remains constant and the speed remains unchanged. However, this is wrong as the velocity changes in this problem. After some calculations, you get around 0.7 kg m/s.
I'm having trouble understanding what this value really represents. How might this answer be useful?
Consider the diagram above. A ball travelling at 5m/s hits the wall and then travels at 5m/s again. What's the change in velocity? Assuming the angle to the wall is 45 degrees.
The ball has the same magnitude of velocity before and after hitting the wall, so it doesn't make sense to me to give a numerical value to the change in velocity. The only change is in the direction. However, when you solve for the change in velocity there is a magnitude. What is that representing? Why is there even a magnitude when the speed is the same?
Edit: The magnitude of the velocity and momentum remains the same after collision with the wall. Assuming this is true, then why is the change in momentum 0.7 kg m/s? To me, this is saying there is a change in the magnitude of the momentum which is not true. So what is the value representing? I understand the magnitude remained constant while the direction changed. So why is there a numerical value in the change of momentum?
Links and resources to learn more would be appreciated.
Answer: One way to look at this is the following: for any system, you can relate the force the object experiences to its change in momentum by
$$
\Delta \vec{p} = \int \vec{F} \, dt
$$
In particular, if the force is constant throughout the period during which it acts, you have
$$
\Delta \vec{p} = \vec{F} \Delta t.
$$
This integral (or product, for constant force) is called the impulse delivered to that object. This vector equation works equally well if $\vec{p}$ changes direction, magnitude, or both.
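For the numbers in the question, the change-in-momentum vector can be computed component by component; a sketch, with the x axis chosen normal to the wall:

```python
import math

m = 0.100                      # kg, mass of the ball
v = 5.0                        # m/s, speed before and after the bounce
theta = math.radians(45)       # angle between the path and the wall's normal

# x points into the wall, y runs along the wall.  The bounce reverses the
# normal (x) component of the momentum and leaves the y component alone.
p_before = (m * v * math.cos(theta), m * v * math.sin(theta))
p_after = (-m * v * math.cos(theta), m * v * math.sin(theta))

dp = (p_after[0] - p_before[0], p_after[1] - p_before[1])
dp_mag = math.hypot(dp[0], dp[1])   # ~0.71 kg*m/s, directed out of the wall
```

The y component of `dp` is exactly zero, which is the point of the answer: the ~0.7 kg·m/s is entirely along the wall's normal, telling you the direction the wall pushed on the ball.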
This means that calculating the change in momentum $\Delta \vec{p}$ for an object tells us something valuable about the direction that a force acted on that object to effect that change. In your case, this means that the wall must have exerted a force directly to the left — any vertical components to the force between the wall and the ball must have been negligible, because $\Delta \vec{p}$ doesn't have a significant vertical component. | {
"domain": "physics.stackexchange",
"id": 94538,
"tags": "homework-and-exercises, newtonian-mechanics, momentum, velocity, collision"
} |
Gravitational field neutralization | Question: For the sake of this question, if gravitons existed, and anti gravitons existed, if a field of anti-gravitons was generated, would it not neutralize the attraction in the area of the field, creating a zone of minimal gravity?
I understand anti-graviton wouldn't mean anti gravity, but wouldn't the two particles annihilate and reduce the field strength?
Answer: So it's fairly common for uncharged bosons to be their own antiparticles; this is true of the photon, the Z boson, and the Higgs, and we believe it would be true of the graviton too. For this not to be true, gravitons would need to carry a new charge that we have not yet discovered in the rest of physics.
As for whether two gravitons could annihilate, please note that for any two-particle collision, pure annihilation is inconsistent with conservation of momentum in any frame other than the rest frame, and is always inconsistent with conservation of energy. Rotating the diagram, the same conservation-of-momentum objection forbids any particle spontaneously scattering off the vacuum: all Feynman diagram vertices need to have at least three edges. So we could maybe think about them annihilating to form a Z boson, say, since that has mass.
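The energy bookkeeping behind that comparison is a one-liner; a sketch with rounded constants:

```python
import math

# One quantum of a 150 Hz gravitational wave: E = h*f.
h = 4.135667e-15          # Planck constant, eV*s
f = 150.0                 # Hz, roughly the LIGO merger band
e_graviton = h * f        # ~6.2e-13 eV, i.e. ~0.6 pico-electron-volts

# Compare with the Z boson rest energy.
m_z = 91.19e9             # eV
orders = math.log10(m_z / e_graviton)   # ~23 orders of magnitude
```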
This sort of thing is unlikely to happen for a simple reason; the recent gravitational waves observed from black hole mergers were 150 Hz, but these quanta would have an energy that's only around 0.6 pico-electron-volts whereas the Z boson has a mass of 90 giga-electron-volts. So we're talking about a difference of 23 orders of magnitude or so. | {
"domain": "physics.stackexchange",
"id": 43480,
"tags": "gravity, quantum-gravity, antimatter"
} |
What is the difference in CFM, at a given static pressure, between a CPU fan and a Squirrel Cage Blower fan when moving air through ductwork? | Question: I have a Delta ASB0912L DC Brushless fan. PDF here.
In the datasheet it says,
RPM 3800
CFM 67.80
IN H2O 0.302
On the blower, squirrel cage fan, Dayton from Grainger, found here (Rectangular Permanent Split Capacitor OEM Specialty Blower, Flange: Yes, Wheel Dia: 3", 115VAC)
It says,
RPM 3010
CFM @ 0.200-In. SP: 78
CFM @ 0.300-In. SP: 74
I want to exhaust fumes from a spray booth. I know there are many articles and plans online. I am trying to understand the math here.
It seems intuitive to me the Grainger fan is more powerful and will remove more air than the CPU fan. But the math makes no sense to me.
On the CPU fan it has .302 for Static Pressure and .3 for the Grainger and the CFM is pretty close.
I am moving air through 6' of ductwork, 4" in diameter, and neither datasheet tells me what CFM means in this context.
Given a 2'Lx2'Wx2'H spray box. Again, I'm not asking for plans. I can get that already. I only want to understand what these numbers mean and the spray box application gives the question context.
Why doesn't the CPU fan move almost as much air as the blower, based on the specifications? Also, is the Delta IN H2O value a static pressure, or is that something else?
Answer:
Why doesn't the CPU fan move almost as much as the Blower based on the specifications?
It is because of the working principles of the fan and the blower. The axial fan takes in air along its axis of rotation and pushes it out in the same axial direction. The blower works like a centrifugal compressor, with an axial inlet and a radial/tangential outlet. Fundamentally, due to the way pressure develops in them, centrifugal pumps/turbines are more effective against back-pressure than axial pumps/turbines. That is what you are seeing in the data sheets.
Also, is the Delta IN H2O value a static pressure, or is that something else?
Both are in the units of inches of water column. | {
"domain": "engineering.stackexchange",
"id": 2509,
"tags": "pressure, statics, airflow, hvac"
} |
Optimizing code in codewars | Question: I was practicing python on codewars.com with this question:
Create a function that takes a positive integer and returns the next bigger number that can be formed by rearranging its digits. For example:
12 ==> 21
513 ==> 531
2017 ==> 2071
But when I attempted my solution, it said it was not optimized enough.
Can someone say why it said that?
from itertools import permutations
def next_bigger(n):
per = list(permutations(str(n)))
result = []
for j in per:
result.append(int("".join(j)))
for i in sorted(result):
if i > n:
return i
Answer: Time & Memory Complexity
Your code uses a lot of time and memory.
With an \$d\$ digit number, you will get roughly \$d!\$ different permutations of digits. list(permutations(str(n))) will cause all of these permutations to be realized in a list \$d!\$ elements long.
You take this list, and then repeatedly take one permutation from it, convert it into a number, and append it to the end of a result list, growing the list one element at a time. This can be an \$O(k^2)\$ operation, and since \$k\$ in this case is \$d!\$, you have \$O({d!}^2)\$ time complexity.
Next, you take this list, and sort it, which is an \$O(k \log k)\$ operation, or in this case \$O(d! \log {d!})\$.
Finally, you take this sorted list, and loop through it until you find the first value larger than the starting value.
Space complexity: \$O(d!)\$. Time complexity: \$O({d!}^2)\$.
Removing the \$O(k^2)\$
The simplest way to avoid the creation of list, and repeated append operation (which may cause a relocation & copy of all previous elements for each additional element added to it), is to allocate an array of the correct size ahead of time.
Since this is such a common operation, Python even gives us a shortcut: list comprehension. Any code of the form:
destination = []
for value in source:
destination.append(func(value))
can be rewritten as:
destination = [func(value) for value in source]
The Python interpreter can (via the __length_hint__ method from PEP 424) tell that it will be allocating a new list of len(container) elements, and may pre-allocate that storage, and then start populating the elements of that list.
(Note: CPython, IronPython, Anaconda, and other big snakes may implement things differently under the hood, possibly amortizing individual appends down to an \$O(1)\$ operation, and may or may not use __length_hint__, but the point still is using list comprehension will always be faster than allocating a list and repeatedly appending elements one at a time.)
We can re-write your function as:
def next_bigger(n):
per = list(permutations(str(n)))
result = [int("".join(j)) for j in per]
for i in sorted(result):
if i > n:
return i
Space complexity: \$O(d!)\$. Time complexity: \$O(d! \log {d!})\$.
Removing the \$O(k \log k)\$
Sorting is an expensive operation. We don't want to do it if we don't have to. And we definitely don't have to here; we only want one value as a result.
You are looking for the smallest value, from a list of values, that is larger than the input. We can filter out all values which aren't larger than the input:
candidates = [value for value in result if value > n]
And then return the minimum of all candidates:
return min(candidates)
No sorting. Space complexity: \$O(d!)\$. Time complexity: \$O({d!})\$.
Removing the \$O(d!)\$ Space Complexity
There is no reason to store any intermediate results. You could generate your list of permutations, and for each permutation, convert it back to a number, discard any value that isn't larger that the original, and remember the smallest value which passes.
def next_bigger(n):
per = permutations(str(n))
result = (int("".join(j)) for j in per)
candidates = (i for i in result if i > n)
return min(candidates)
permutations(...) is a generator function, so per is assigned a generator object. We use that generator to construct our own generator that converts the digits of a permutation into an integer, and store that generator in result. We take that generator and create another one which only produces values larger than the original, assigning the generator to candidates.
At this point, no permutations have been created yet. No digits have been joined, and converted to an integer. And no values have been tested for whether they are greater than the original. Only generators for these actions have been created.
Then, we pass the last generator to min(...). The min(...) function asks this generator for a value, then for another value and saves the smaller of the two, then another value and saves the smaller, and so on. Only the smallest value is preserved at each step.
No list creation. Space complexity: \$O(1)\$. Time complexity: \$O({d!})\$.
Readability
I've used your variable names (n, per, result, j, and i) unmodified in my "improvements". But someone reading the code has to inspect the code to determine that j is a tuple of digits, and i is the corresponding integer value. Better variable names go a long way towards more understandable code. Type hints and """docstrings""" are also extremely useful.
from itertools import permutations
def next_bigger(number: int) -> int:
"""
For a positive integer, returns the next larger number that can
be formed by rearranging its digits. For example:
>>> next_bigger(12)
21
>>> next_bigger(513)
531
>>> next_bigger(2017)
2071
"""
permuted_values = (int("".join(permuted_digits))
for permuted_digits in permutations(str(number)))
return min(value for value in permuted_values if value > number)
if __name__ == '__main__':
import doctest
doctest.testmod(verbose=True)
Of course, for a Code Wars submission, you'd strip out the docstrings, doctests, and type hints in the pursuit of speed.
A Better Algorithm
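The better approach hinted at in the comments is, presumably, the classic next-lexicographic-permutation method, which constructs the answer directly from the digits in O(d) time; a sketch, with `-1` used here to signal that no bigger arrangement exists:

```python
def next_bigger(n: int) -> int:
    # Classic "next lexicographic permutation": O(d) time, no factorials.
    digits = list(str(n))
    # 1. Scan from the right for the first digit smaller than its neighbour.
    i = len(digits) - 2
    while i >= 0 and digits[i] >= digits[i + 1]:
        i -= 1
    if i < 0:
        return -1  # digits already in descending order: no bigger number
    # 2. Swap it with the smallest digit to its right that is still larger.
    j = len(digits) - 1
    while digits[j] <= digits[i]:
        j -= 1
    digits[i], digits[j] = digits[j], digits[i]
    # 3. Reverse the (descending) suffix so it becomes the smallest possible.
    digits[i + 1:] = reversed(digits[i + 1:])
    return int("".join(digits))
```

For 12, 513, and 2017 this returns 21, 531, and 2071, the same results as the brute-force versions, without ever materialising a permutation.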
The refinements in the earlier sections only polish the brute-force method; as comments and other posts indicate, a better algorithm constructs the next-bigger arrangement directly from the digits, in O(d) time, without enumerating permutations at all. | {
"domain": "codereview.stackexchange",
"id": 39515,
"tags": "python, programming-challenge"
} |
Single price grid component (HTML & CSS Project) | Question: I would appreciate some feedback on this single price grid component project completed using HTML and CSS.
I am trying to recreate the content shown by this image: https://res.cloudinary.com/practicaldev/image/fetch/s--FzGw2rbZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3y0bhisv135j7979dk3m.jpg
HTML file
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="styles.css">
<title>Price Grid</title>
</head>
<body>
<div class="container">
<div class="description">
<h2>Join our community</h2>
<h3>30-day, hassle-free money back guarantee</h3>
<h3>Gain access to our full library of tutorials along with expert code reviews.
<br>Perfect for any developers who are serious about honing their skills.
</h3>
</div>
<div class="pricing">
<h3>Monthly Subscription</h3>
<h2>$29 per month</h2>
<h2>Full access for less than $1 a day</h2>
<button>Sign up</button>
</div>
<div class="featuredContent">
<h2>Why us?</h2>
<ul>
<li>Tutorials by industry experts</li>
<li>Peer & expert code review</li>
<li>Coding exercises</li>
<li>Access to our GitHub repos</li>
<li>Community forum</li>
<li>Flashcard decks</li>
<li>New videos every week</li>
</ul>
</div>
</div>
</body>
</html>
CSS File
html {
margin: 0;
padding: 0;
font-family: 'Abel', sans-serif;
}
body {
background-color: rgb(230, 230, 230);
}
.container {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
width: 800px;
height: 500px;
border: 20px solid rgb(20, 148, 20);
border-radius: 25px;
}
.description {
padding: 20px;
text-align: left;
background-color: rgb(255, 255, 255);
}
.pricing {
position: relative;
float: left;
top: 5.5%;
height: 55%;
width: 50%;
color: white;
background-color: rgb(43, 179, 177);
text-align: center;
}
.featuredContent {
position: relative;
float: right;
top: 5.45%;
color: rgb(255, 255, 255);
background-color: rgb(74, 190, 189);
height: 55%;
width: 50%;
}
.featuredContent h2 {
text-align: center;
}
.featuredContent ul {
position: relative;
list-style: none;
left: 20%;
}
Answer: HTML
Several elements are marked up as headlines, although they certainly aren't ones:
The text "Gain access..." in the description element should be two(!) paragraphs. (Don't use <br> to create paragraph-like line breaks.)
<p>Gain access to our full library of tutorials along with expert code reviews.</p>
<p>Perfect for any developers who are serious about honing their skills.</p>
"$29 per month" and "Full access..." in pricing shouldn't be headlines either - especially not h2s after a h3. Also the price ("$29") itself needs to be markuped separately, since it's highlighted compared to the rest of the line. For example:
<div class="price"><strong>$29</strong> per month</div>
<p>Full access for less than $1 a day</p>
"Why us?" looks identical and is structurally identical to the headline "Monthly Subscription" in pricing, so it should have the same headline level.
The "button" "Sign up" functionally looks more like a link, than a <button>. (Buttons are primarily used to submit forms, which this is not.)
The class names are too generic and so can collide with other classes used on the site. Instead of container something like price-component would be more appropriate. And for the inner components either use a child/descendant selector in the CSS so that the rules only apply to your component:
.price-component > .description
Or use a CSS naming scheme (such as BEM) to create a unique class names for the inner components such as price-component__description.
CSS
Avoid absolute positioning. Centering the price component like that is only appropriate, if it's only used as a modal dialog, that covers the rest of the content of the page and the screen shot doesn't seem to indicate that. In order to "just" center it, use CSS grid or flex.
Also the use of relative positioning isn't appropriate. On those floats it is strange. It just creates a gap where there shouldn't be one and lets the elements overlap the bottom border. And on the list use either a left-margin or just a general padding on the featuredContent element.
Float is not appropriate for layout any more. This component is the perfect example for a grid layout.
Hard coding the width and height in pixels makes the component unresponsive to the screen and font size. For the max-width use relative units such as rem and don't set the height at all, letting it resize dynamically based on the content size. Also consider an alternative media query based layout for smaller screen sizes where it would be too wide.
Don't center all the text. For one, it contradicts the screen shot, but more importantly centered text is difficult to read.
Example: https://jsfiddle.net/vjr2ogpq/ | {
"domain": "codereview.stackexchange",
"id": 40044,
"tags": "html, css"
} |
Are all the matter & the force fields part of spacetime? | Question: I have checked at these below:
Is matter a continuous part of the field of space-time?
Is spacetime all that exists?
What I could understand is that particles are different kinds of local fluctuation states of force fields. Particles make up matter. So matter is part of various force fields. Is that correct?
The metric of spacetime is the gravitational field. So is it correct to say gravitational field is part of spacetime? What about other force fields? We have various force fields like electromagnetic field, gluon field, higgs field,etc. Do these fields lie on top of the spacetime (but separate from it) or are they too a part of the spacetime?
Answer:
What I could understand is that particles are different kinds of local fluctuation states of force fields.
Particles are specific kinds of states (a certain asymptotic states defined according to the LSZ formalism) of the quantum fields--they do not necessarily correspond to the states of force fields. For example, the electromagnetic field is what can be called the "force field" responsible for mediating electromagnetic forces between electrically charged particles. And photons are the particle states corresponding to this field. But, electrons are also particle states of a quantum field, called the electron field. And the electron field is not responsible for mediating any force between any particles. Such fields are often called matter fields.
Particles make up matter. So matter is part of various force fields. Is that correct?
This is a bit of an oversimplification to say that particles make up matter. For example, the electron in an atom is not in a particle state. The key thing is that a quantum field has many physical states which are not necessarily particle states. Especially, if one is talking about strongly interacting fields such as the quark fields, there is no meaningful way in which one can talk of a "quark particle". However, it is correct that matter corresponds to states of various quantum fields--even if it is not made up of particles. Again, all quantum fields are not force fields. There are also the so-called matter fields such as the quantum field corresponding to the electron.
The metric of spacetime is the gravitational field. So is it correct
to say gravitational field is part of spacetime?
Yes, this is essentially correct. Gravitation is a manifestation of the intrinsic dynamical structure of the spacetime itself. However, notice that we do not have a quantum theory of gravity. And our notion of spacetime and gravity is likely to be massively revised at a conceptual level when we understand a full quantum theory of gravity.
What about other force fields? We have various force fields like
electromagnetic field, gluon field, higgs field,etc. Do these fields
lie on top of the spacetime (but separate from it) or are they too a
part of the spacetime?
No, such fields are additional structure on top of the intrinsic structure that spacetime has. Of course, they live on a spacetime manifold, i.e., a quantum field is simply a field of operators--one at each point in spacetime. But, the operator is an additional structure on top of spacetime.
However, let me mention an interesting point. In theories involving extra dimensions, such as the Kaluza-Klein theory, fields such as the electromagnetic field are not added on top of the basic geometrical structure of spacetime. Rather, they simply arise out of the geometric structure of spacetime--just like gravity does in general relativity.
Hope this helps! :) | {
"domain": "physics.stackexchange",
"id": 62126,
"tags": "forces, gravity, spacetime, field-theory"
} |
Different representations of frequency space of 2D image FFT | Question: I'm learning images processing using FFT. In my test example provided below the input pixel values are clamped 0-1 (0-255), but I do eventually want to process floating point heightfield pixel values.
The software I'm using (Houdini v18) provides forward and inverse FFT functions and as a base line I can successfully convert an image to frequency space and back.
However when I look at the 2D representation of the frequency space it looks different to any examples I've found online.
Test image:
This is the result of Houdini's FFT with zero frequencies at center (height offset is pixel intensity):
The function returns 2x "images" representing real an imaginary values.
This however is the frequency space representation I find everywhere online:
From what I understand the radial type FFT images represent frequency between 0-2PI in u and v and the pixel value is the magnitude. And that most images are offset so 0 is centered.
Do I need to convert the real and imaginary components to frequency and magnitude and plot those? If so how?
EDIT: My end goal is to apply radial pass filters to the FFT so I just need to apply any spacial transforms to get the FFT to that state.
Answer: The second image you find online as you mentioned is magnitude and the first image must be either real part or the imaginary part (but not both). If you calculate complex value magnitude for each point, i.e. $Mag_{i,j} = R_{i,j}^2 + I_{i,j}^2$ you get something similar to the following image.
I = imread('cameraman.tif');
F = fft2(I);
F_Centered = fftshift(F);
Mag = abs(F);
imshow(Mag);
but taking log10 harnesses the large magnititde of some frequency bins so for visulasation the normalization does not round them to zero.
imshow(log10(Mag),[])
Thus applying the filter on the above image which is only processed for visualization is not correct and inverse FFT of the values would not have much meaning.
To apply your filter you get the fft of the image, F(image) , then you need to make a same size mask image (all 1 and the frequency section you need to be removed must be zero, also remember symmetry feature of Fourier) and then apply do a element wise multiplication of fft values with this mask.
Also, I would suggest to watch this short lecture by Hoff on this subject:
https://youtu.be/02c6ohQV2TA | {
"domain": "dsp.stackexchange",
"id": 8266,
"tags": "image-processing, fft, fourier-transform"
} |
How is a plasma different from a metal or an ionized gas? | Question: I've just started looking at plasmas and I have some confusion.
A metal is a lattice of positive ions bathed in a sea of delocalised electrons and conducts electricity. A plasma is a 'gas' of free ions and electrons and can also conduct electricity. Is a metal a solid-state plasma?
Are all ionized gases a plasma? Even if they have not been heated?
Answer: There are several sorts of plasmas. An ionized gas is a sort of plasma, also called thin plasma.
Thin plasmas, while physically different from a metal, share a surprisingly large number of properties with it:
They both have two separate charge carriers, cations and electrons.
On both, the cations have a negligible contribution to current and the electrons form a gas.
They have similar dispersion relations (leading to similar "local" Ohm's law in both).
It's because cations don't contribute much that they seem similar, in spite of one being a gas and the other a solid.
When you heat a thin plasma, for a while it's still an ionized gas. As it gets hotter, you get the energy to rip more and more electrons from atoms, but the nuclei remain untouched. However, cations start to contribute more so properties differ noticeably from a metal. It's the field of magnetohydrodynamics.
When a plasma is hot enough to rip nucleons from the nucleus, it becomes a thermonuclear plasma, and its properties change again. This is the sort of plasma the Sun is made of.
Heat such a plasma again (a lot!) and you have the energy to rip quarks and gluons from the nucleons. This is the quark-gluon plasma.
Overall, you get a different sort of plasma each time you become able to extract a new sort of particle from the system. The similarity with a metal only exists at low energy. | {
"domain": "physics.stackexchange",
"id": 89466,
"tags": "plasma-physics"
} |
Control a robot with 2 motors and a H-bridge with Arduino and ROS? | Question:
Hi all.
Does any one have a project which makes you connect an arduino to ROS, in order to control a robot with two motors connected to a h-bridge, just as a beginner project to get started?
Thanks, best regards!
Originally posted by Dynamite on ROS Answers with karma: 21 on 2015-06-22
Post score: 2
Original comments
Comment by Rai on 2017-03-10:
hey, did you find a way to it? I need exactly this right now.
Comment by Dynamite on 2017-03-10:
Hey unfortunately not. I have tried to learn more c/c++ programming meanwhile and maybe I should look into it again.
Any way if you figure it out or find more information please share it here for all to see. Thanks.
Comment by Rai on 2017-03-10:
Yeah,I'll make sure to share it here if I come up with something useful, thank you
Answer:
see also http://wiki.ros.org/rosserial_arduino
Originally posted by duck-development with karma: 1999 on 2015-06-22
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 21980,
"tags": "arduino"
} |
Unable to view to the clustering results in rviz | Question:
Hello, I am trying to perform euclidean clustering in ROS. I can generate data and also store them .pcd files. But i wish to see the output of the file in rviz which I am unable to. It throws the following warning:
[WARN] [1426601373.806774699]: Invalid argument passed to canTransform argument source_frame in tf2 frame_ids cannot be empty
My code looks like this:
ros::Publisher pub;
void cloud_cb(const sensor_msgs::PointCloud2ConstPtr& input){
sensor_msgs::PointCloud2::Ptr clusters (new sensor_msgs::PointCloud2);
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>), cloud_f (new pcl::PointCloud<pcl::PointXYZ>);
pcl::fromROSMsg(*input, *cloud);
pcl::PointCloud<pcl::PointXYZ>::Ptr clustered_cloud (new pcl::PointCloud<pcl::PointXYZ>);
std::cout << "PointCloud before filtering has: " << cloud->points.size () << " data points." << std::endl;
// Create the filtering object: downsample the dataset using a leaf size of 1cm
pcl::VoxelGrid<pcl::PointXYZ> vg;
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered (new pcl::PointCloud<pcl::PointXYZ>);
vg.setInputCloud (cloud);
vg.setLeafSize (0.01f, 0.01f, 0.01f);
vg.filter (*cloud_filtered);
std::cout << "PointCloud after filtering has: " << cloud_filtered->points.size () << " data points." << std::endl;
// Create the segmentation object for the planar model and set all the parameters
pcl::SACSegmentation<pcl::PointXYZ> seg;
pcl::PointIndices::Ptr inliers (new pcl::PointIndices);
pcl::ModelCoefficients::Ptr coefficients (new pcl::ModelCoefficients);
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_plane (new pcl::PointCloud<pcl::PointXYZ> ());
pcl::PCDWriter writer;
seg.setOptimizeCoefficients (true);
seg.setModelType (pcl::SACMODEL_PLANE);
seg.setMethodType (pcl::SAC_RANSAC);
seg.setMaxIterations (100);
seg.setDistanceThreshold (0.02);
int i=0, nr_points = (int) cloud_filtered->points.size ();
while (cloud_filtered->points.size () > 0.3 * nr_points)
{
// Segment the largest planar component from the remaining cloud
seg.setInputCloud (cloud_filtered);
seg.segment (*inliers, *coefficients);
if (inliers->indices.size () == 0)
{
std::cout << "Could not estimate a planar model for the given dataset." << std::endl;
break;
}
// Extract the planar inliers from the input cloud
pcl::ExtractIndices<pcl::PointXYZ> extract;
extract.setInputCloud (cloud_filtered);
extract.setIndices (inliers);
extract.setNegative (false);
// Get the points associated with the planar surface
extract.filter (*cloud_plane);
std::cout << "PointCloud representing the planar component: " << cloud_plane->points.size () << " data points." << std::endl;
// Remove the planar inliers, extract the rest
extract.setNegative (true);
extract.filter (*cloud_f);
*cloud_filtered = *cloud_f;
}
// Creating the KdTree object for the search method of the extraction
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
tree->setInputCloud (cloud_filtered);
std::vector<pcl::PointIndices> cluster_indices;
pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
ec.setClusterTolerance (0.02); // 2cm
ec.setMinClusterSize (10);
ec.setMaxClusterSize (2500);
ec.setSearchMethod (tree);
ec.setInputCloud (cloud_filtered);
ec.extract (cluster_indices);
std::vector<pcl::PointIndices>::const_iterator it;
std::vector<int>::const_iterator pit;
int j = 0;
for(it = cluster_indices.begin(); it != cluster_indices.end(); ++it) {
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_cluster (new pcl::PointCloud<pcl::PointXYZ>);
for(pit = it->indices.begin(); pit != it->indices.end(); pit++) {
//push_back: add a point to the end of the existing vector
cloud_cluster->points.push_back(cloud_filtered->points[*pit]);
cloud_cluster->width = cloud_cluster->points.size ();
cloud_cluster->height = 1;
cloud_cluster->is_dense = true;
std::stringstream ss;
ss << "cloud_cluster_" << j << ".pcd";
writer.write<pcl::PointXYZ> (ss.str (), *cloud_cluster, false); //*
j++;
}
//Merge current clusters to whole point cloud
*clustered_cloud += *cloud_cluster;
}
pcl::toROSMsg (*clustered_cloud , *clusters);
pub.publish (*clusters);}
int main (int argc, char** argv)
{
// Initialize ROS
ros::init (argc, argv, "clust");
ros::NodeHandle nh;
// Create a ROS subscriber for the input point cloud
ros::Subscriber sub = nh.subscribe ("input", 1, cloud_cb);
// Create a ROS publisher for the output model coefficients
//pub = nh.advertise<pcl_msgs::ModelCoefficients> ("output", 1);
pub = nh.advertise<sensor_msgs::PointCloud2> ("clusters", 1);
// Spin
ros::spin ();
}
Originally posted by blackmamba591 on ROS Answers with karma: 38 on 2016-01-27
Post score: 1
Original comments
Comment by jarvisschultz on 2016-01-27:
I haven't looked at this too closely, but I'd try making sure the header field of the clusters cloud matches the header field of the input cloud. Is the conversion to PCL and back to PointCloud2 properly preserving the frame information?
Comment by blackmamba591 on 2016-01-27:
Thanks jarvis, I have done so. Its solved.
Comment by jarvisschultz on 2016-01-27:
I just came back here to tell you that I was convinced that was the problem... glad to hear that it was an easy fix!
Answer:
frame_id of the headerof the pointcloud-message needs to be set.
clusters->header.frame_id = "/camera_depth_frame";
clusters->header.stamp=ros::Time::now();
pub.publish (*clusters);
This will solve the problem.
Originally posted by blackmamba591 with karma: 38 on 2016-01-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 23575,
"tags": "ros, kinect, pcl, 3d-object-recognition, ros-indigo"
} |
tf tutorial: turtle2 doesn't follow turtle1 | Question:
I'm working on this tutorial: http://www.ros.org/wiki/tf/Tutorials/Introduction%20to%20tf
After execute "roslaunch turtle_tf turtle_tf_demo.launch" I can see 2 turtles spawned, however turtle2 doesn't move.
Output of "rosrun tf tf_echo turtle1 turtle2" is
At time 1310028986.770
Translation: [-3.380, 3.556, 0.000]
Rotation: in Quaternion [0.000, 0.000, 0.000, 1.000]
in RPY [0.000, -0.000, 0.000]
I'm using ROS diamondback in Ubuntu 10.04
Originally posted by Tien Thanh on ROS Answers with karma: 231 on 2011-07-06
Post score: 0
Answer:
I just ran the demo and it took a couple seconds for turtle2 to start following turtle1 but it did follow eventually. Can you run tf_monitor to see if the turtle1 and turtle2 frames are being published by their broadcasters.
mwise@bws:~/maintained_stacks$ rosrun tf tf_monitor
RESULTS: for all Frames
Frames:
Frame: turtle1 published by /turtle1_tf_broadcaster Average Delay: 0.000284624 Max Delay: 0.000325169
Frame: turtle2 published by /turtle2_tf_broadcaster Average Delay: 0.000212012 Max Delay: 0.000221337
All Broadcasters:
Node: /turtle1_tf_broadcaster 325.392 Hz, Average Delay: 0.000284624 Max Delay: 0.000325169
Node: /turtle2_tf_broadcaster 328.424 Hz, Average Delay: 0.000212012 Max Delay: 0.000221337
RESULTS: for all Frames
Frames:
Frame: turtle1 published by /turtle1_tf_broadcaster Average Delay: 0.000256403 Max Delay: 0.000325169
Frame: turtle2 published by /turtle2_tf_broadcaster Average Delay: 0.000237316 Max Delay: 0.000270265
All Broadcasters:
Node: /turtle1_tf_broadcaster 69.198 Hz, Average Delay: 0.000256403 Max Delay: 0.000325169
Node: /turtle2_tf_broadcaster 69.1953 Hz, Average Delay: 0.000237316 Max Delay: 0.000270265
Originally posted by mmwise with karma: 8372 on 2011-08-23
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 6058,
"tags": "ros, tutorials, transform"
} |
Theory invariance after substitution of theory's field equations back into theory's action functional? | Question: Suppose I have a theory $A$ concerning the evolution of a set of fields $T_1, \dots, T_n$. Let the action functional for this theory be $S[T_1, \dots, T_n]$. Suppose in the action, in addition to possible other functions, there is a function $f(T_i, \dots, T_{i + j})$ of a subset of the fields. Finally, suppose the variation of $S$ gives an explicit form for $f$, say $f = g$. My question: If we substitute $g$ for $f$ in the action $S$, does the action still describe the theory $A$?
As a particular example, consider the Einstein-Hilbert action with a matter action
$$
S = \frac{1}{\kappa^2} \int d^4x \sqrt{-g}R + S_m.\tag{1}
$$
Variation yields the Einstein field equation (EFE) $R_{\mu\nu} = \kappa^2 T_{\mu\nu} + \frac{1}{2}R g_{\mu\nu}$, whose trace tells us that $R = -\kappa^2 T$, where $T \equiv T^\mu_\mu$. If we substitute this into $(1)$ above, we obtain the action
$$
S = -\int d^4x \sqrt{-g}T + S_m.\tag{2}
$$
My question: does this action still describe GR? I think it should because the action $(2)$ ought to be an extremum precisely when $(1)$ is, but I still have my doubts since the only curvature coupling inherent in the theory $(2)$ is strictly to the metric and none whatever to the Ricci scalar.
Answer: TL;DR: Generically$^1$ an action principle gets destroyed if we apply EOMs in the action.
Examples:
This is particularly clear if we try to vary wrt. a dynamical variable that no longer appears in the action after substituting an EOM.
The 1D static model $$V(q)~=~\frac{k}{2}q^2+{\cal O}(q^3), \qquad k~\neq ~0,$$ has a trivial stationary point $q\approx 0$. (We ignore here possible non-trivial stationary points for simplicity.) We can replace the potential $V$ with a new potential $$\tilde{V}(q)~=~a+bq+\frac{c}{2}q^2+{\cal O}(q^3), \qquad \qquad c~\neq ~0,\qquad b~=~0,$$
without changing the trivial stationary point $q\approx 0$.
Note that it is crucial that $b=0$, i.e. it only works for a zero-measure set.
For the 2D kinetic term $L=T = \frac{m}{2}\left(\dot{r}^2+r^2\dot{\theta}^2\right)$ in polar coordinates, if we substitute the angular variable $\theta$ with its EOM, the remaining Lagrangian for the radial variable $r$ gets a wrong sign in one of its terms! See e.g. this & this Phys.SE posts for an explanation.
Specifically, we can not derive EFE from OP's action (2).
--
$^1$ The word generically means here generally modulo a zero-measure set of exceptions. | {
"domain": "physics.stackexchange",
"id": 50820,
"tags": "general-relativity, lagrangian-formalism, variational-principle, action, classical-field-theory"
} |
How electric field changes on changing area of wire? | Question: I know that in an isolated conductor when an external field is applied the charges rearrange themselves end field inside conductor becomes zero. But when this conductor is connected to a battery then charges are not able to accumulate and current flows through the conductor. So, I concluded that internal field is not able to develop and the electric field inside the conductor is solely due to external factors.
Now, while studying current electricity I came across this problem that Electric field changes on changing the area of wire.
I can understand it using the equations i=j.A and j=E/ρ. But this challenges my previous notions.
Where am I going wrong ?
Edit: Perhaps I am skipping the fact that electrons themselves also produce electric field and there would be more electron per unit volume passing through the narrower area(A1). So, they might increase the electric field. Is this the correct explanation?
Answer: I don't see a contradiction. A power supply takes electrons from one end of a conductor and put them into the other. This separation of charge puts an electric field into the conductor. The field moves electrons down the wire. The strength of the field at any point depends on the gradient of the charge density. If the cross section increases, the current density and field get smaller, but the total current must be constant. | {
"domain": "physics.stackexchange",
"id": 79339,
"tags": "electric-fields, electric-current, conductors"
} |
Do we know anything about the nature of Earth's core that hasn't come from magnetic or seismic measurements? | Question: There is much known about Earth's core from painstaking analysis of seismic data, and from detailed magnetic field maps and trends over time.
Are there any other measurements that have contributed to current understanding of Earth's core besides these two?
Answer:
Are there any other measurements that have contributed to current understanding of Earth's core besides these two?
The answer is of course "yes". Other answers have already alluded to laboratory experiments that attempt to re-create conditions similar to those well inside the Earth.
I'll provide two others; there are many more.
One is radio astronomy. Determining the apparent locations of quasars has drastically increased the accuracy of the Earth's orientation. Doing this in conjunction with modern communication techniques results in Very Long Baseline Interferometry. The combination of the two has reduced the uncertainties in the Earth's orientation to well under a milliarcsecond. This gives deep insight (pun intended) into the nature of the Earth's core. The Earth's Chandler wobble does not behave quite like that of a rigid body. How this varies over time gives insights into the nature of the Earth's core. The Earth's free core nutation is also observable from the precise Earth orientation parameters.
Another is precise gravity models of the Earth. These too give insights into the Earth's core, including the Earth's moment of inertia, the Chandler wobble, and the free core nutation. Going beyond the Earth, gravity models provide one of the key observational techniques for studying the interior of the Moon, Mars, and Jupiter. Scientists know that the Moon and Mars have partially molten cores thanks to gravity models developed from precise orbit determination based on the many satellites that have orbited the Moon and Mars. Scientists know that Jupiter has a diffuse core thanks to precise orbit determination of the Juno spacecraft's orbit about the planet. | {
"domain": "earthscience.stackexchange",
"id": 2120,
"tags": "seismology, planetary-science, geomagnetism, core"
} |
How is it possible for a new species to evolve? | Question: Suppose a new species is created from a random mutation that happened during an instance of reproduction in an existing species. How can that new species survive and flourish if there only exists one of it's kind and therefore it's not able to reproduce? By definition a species can only reproduce with others of the same species, no? It seems to me the only way it could is if a mate is created due to a separate occurance of a similar mutation, and the chances of both mutations occurring at around the same point in timespace must be very close to zero. And even if it were to happen, it seems to me the offspring wouldn't be able to reproduce due to problems with inbreeding.
Answer:
Suppose a new species is created from a random mutation that happened
during an instance of reproduction in an existing species.
No honest informed person would define species in this manner.
This is a better analogy | {
"domain": "biology.stackexchange",
"id": 8484,
"tags": "evolution, reproduction, species"
} |
Web scraper for e-commerce sites Part II | Question: I asked the same question Web scraper for e-commerce sites yesterday and I'm now posting the revised code here.
I'm building web scraper application which takes name, code and price from few sites. I thought factory pattern would fit in my application. I would like to someone review my code and tell if I'm missing something.
I have class Item which holds scraped data.
public class Item
{
public string Code { get; set; }
public string Name { get; set; }
public string Price { get; set; }
}
An interfacem which has a method RunScrapingAsync with a list of item codes as the single parameter, which I need to scrape.
public interface IWebScraper
{
Task<List<Item>> RunScrapingAsync(List<string> itemCodes);
}
Then I have implementations for three scrapers (Amazon, EBay, AliExpress):
public class AmazonWebScraper : IWebScraper
{
private static HttpClient client;
public List<string> ItemCodes { get; set; }
public AmazonWebScraper()
{
client = new HttpClient(new HttpClientHandler() { Proxy = null });
client.BaseAddress = new Uri("https://amazon.com");
}
public async Task<List<Item>> RunScrapingAsync(List<string> itemCodes)
{
ConcurrentBag<Item> itemsConcurrentBag = new ConcurrentBag<Item>();
//for simplicity this logic is not important no need to go in details
return itemsConcurrentBag.ToList();
}
}
public class EBayWebScraper : IWebScraper
{
private static HttpClient client;
public List<string> ItemCodes { get; set; }
public EBayWebScraper()
{
client = new HttpClient(new HttpClientHandler() { Proxy = null });
client.BaseAddress = new Uri("https://ebay.com");
}
public async Task<List<Item>> RunScrapingAsync(List<string> itemCodes)
{
ConcurrentBag<Item> itemsConcurrentBag = new ConcurrentBag<Item>();
//for simplicity this logic is not important no need to go in details
return itemsConcurrentBag.ToList();
}
}
public class AliExpressWebScraper : IWebScraper
{
private static HttpClient client;
public List<string> ItemCodes { get; set; }
public AliExpressWebScraper()
{
client = new HttpClient(new HttpClientHandler() { Proxy = null });
client.BaseAddress = new Uri("https://aliexpress.com");
}
public async Task<List<Item>> RunScrapingAsync(List<string> itemCodes)
{
ConcurrentBag<Item> itemsConcurrentBag = new ConcurrentBag<Item>();
//for simplicity this logic is not important no need to go in details
return itemsConcurrentBag.ToList();
}
}
Here is my factory class WebScraperFactory:
public enum WebSite
{
Amazon,
EBay,
AliExpress
}
public class WebScraperFactory
{
public IWebScraper Create(WebSite website)
{
switch (website)
{
case WebSite.Amazon:
return new AmazonWebScraper();
case WebSite.EBay:
return new EBayWebScraper();
case WebSite.AliExpress:
return new AliExpressWebScraper();
default:
throw new NotImplementedException($"Not implemented create method in scraper factory for website {webSite}");
}
}
}
The WebScraper class, which holds all scrapers in a dictionary and uses them in the Execute method for the provided WebSite.
public class WebScraper
{
private readonly WebScraperFactory _webScraperFactory;
public WebScraper()
{
_webScraperFactory = new WebScraperFactory();
}
public async Task<List<Item>> Execute(WebSite webSite, List<string> itemCodes) =>
await _webScraperFactory.Create(webSite).RunScrapingAsync(itemCodes);
}
This is a WinForms app, so users have the option to run one or more scrapers (not all of them are mandatory). If a user chooses to run Amazon and AliExpress, the user picks two files with codes; the app adds them to a dictionary and calls the web scraper factory for every chosen website.
Example usage:
var codes = new Dictionary<WebSite, List<string>>
{
{WebSite.Amazon, amazonCodes},
{WebSite.AliExpress, aliExpressCodes}
};
var items = new Dictionary<WebSite, List<Item>>
{
{WebSite.Amazon, null},
{WebSite.AliExpress, null}
};
var webScraper = new WebScraper();
foreach(var webSite in codes.Keys)
{
items[webSite] = await webScraper.Execute(webSite, codes[webSite]);
}
Answer: IWebScraper
Based on the implementations, it seems you would gain more by defining this interface as an abstract class, giving you a common place for the shared logic:
public abstract class WebScraperBase
{
private readonly HttpClient client;
public WebScraperBase(Uri domain)
=> client = new HttpClient() { BaseAddress = domain };
public async Task<List<Item>> RunScrapingAsync(List<string> itemCodes, CancellationToken token = default)
{
ConcurrentBag<Item> itemsConcurrentBag = new();
await ScrapeAsync(itemCodes, itemsConcurrentBag, token);
return itemsConcurrentBag.ToList();
}
protected abstract Task ScrapeAsync(List<string> itemCodes, ConcurrentBag<Item> items, CancellationToken token);
}
I think the whole handler object is unnecessary: new HttpClientHandler() { Proxy = null }
I changed your RunScrapingAsync to be a template method
I've added a CancellationToken as a parameter to allow user cancellation/timeout
I also defined a ScrapeAsync method as a step method
I'm not sure why you have a class-level List<string> property in addition to the List<string> parameter
I've removed the former because, based on the shared code fragment, it was not in use
If it was in use inside the part of the code not shown under RunScrapingAsync, then please be aware that this property could be modified while the async method is running!
XYZWebScraper
After the above changes the implementation of a given scraper could look like this
public class AliExpressWebScraper : WebScraperBase
{
public AliExpressWebScraper() : base(new Uri("https://aliexpress.com"))
{}
protected override async Task ScrapeAsync(List<string> itemCodes, ConcurrentBag<Item> items, CancellationToken token)
{
//for simplicity this logic is not important no need to go in details
}
}
WebScraperFactory
If you can take advantage of switch expressions, then you can rewrite your Create like this:
public static WebScraperBase Create(WebSite website)
=> website switch
{
WebSite.Amazon => new AmazonWebScraper(),
WebSite.EBay => new EBayWebScraper(),
WebSite.AliExpress => new AliExpressWebScraper(),
_ => throw new NotImplementedException($"Not implemented create method in scraper factory for website {website}"),
};
WebScraper
I do believe this whole class is unnecessary if you define the WebScraperFactory and its Create method as static (like above)
UPDATE #1
I completely overlooked that the client was defined as static inside the WebScraperBase. That implementation does not work so here are two alternatives:
Store the static HttpClient instances inside the WebScraperFactory and pass them to the ctor of the concrete scraper instances
WebScraperFactory
private static Dictionary<WebSite, HttpClient> clients = new Dictionary<WebSite, HttpClient>
{
{ WebSite.Amazon, new HttpClient() { BaseAddress = new Uri("https://amazon.com") } },
{ WebSite.EBay, new HttpClient() { BaseAddress = new Uri("https://ebay.com") } },
{ WebSite.AliExpress, new HttpClient() { BaseAddress = new Uri("https://aliexpress.com") } }
};
public static WebScraperBase Create(WebSite website)
{
var client = clients.ContainsKey(website) ? clients[website] : throw new InvalidOperationException("...");
switch(website)
{
case WebSite.Amazon: return new AmazonWebScraper(client);
case WebSite.EBay: return new EBayWebScraper(client);
case WebSite.AliExpress: return new AliExpressWebScraper(client);
default: throw new NotImplementedException("...");
};
}
WebScraperBase
private readonly HttpClient client;
public WebScraperBase(HttpClient client)
=> this.client = client;
Use IHttpClientFactory to delegate the life cycle management of the underlying HttpClientHandlers to a dedicated component
Because you are using .NET Framework, not .NET Core, you need the following workaround to be able to use IHttpClientFactory:
WebScraperFactory
private static readonly IServiceProvider serviceProvider;
static WebScraperFactory()
{
var serviceCollection = new ServiceCollection().AddHttpClient();
serviceCollection.AddHttpClient(WebSite.Amazon.ToString(), c => c.BaseAddress = new Uri("https://amazon.com"));
serviceCollection.AddHttpClient(WebSite.EBay.ToString(), c => c.BaseAddress = new Uri("https://ebay.com"));
serviceCollection.AddHttpClient(WebSite.AliExpress.ToString(), c => c.BaseAddress = new Uri("https://aliexpress.com"));
serviceProvider = serviceCollection.BuildServiceProvider();
}
public static WebScraperBase Create(WebSite website)
{
var factory = serviceProvider.GetRequiredService<IHttpClientFactory>();
var client = factory.CreateClient(website.ToString());
if(client.BaseAddress == null) throw new InvalidOperationException("...");
switch(website)
{
case WebSite.Amazon: return new AmazonWebScraper(client);
case WebSite.EBay: return new EBayWebScraper(client);
case WebSite.AliExpress: return new AliExpressWebScraper(client);
default: throw new NotImplementedException("...");
};
} | {
"domain": "codereview.stackexchange",
"id": 44351,
"tags": "c#, factory-method"
} |
Calculate work from Force and time | Question: Everywhere it is stated that time does not matter when calculating work, but can't you do this:
$$F=ma$$
so $a=F/m$
integrating twice with respect to $t$ (assuming the object starts from rest): $$s=\frac{1}{2}\frac{F}{m}t^2$$ since the force is constant.
now plugging that into $W=Fs$:
$$W=\frac{1}{2}\frac{F^2}{m}t^2$$
Now you can put the m, F and t into the equation and get W.
Did I make a mistake, or is it possible to do this?
Answer: Your answer is correct - assuming no other forces act.
The general statement:
$$W=F\cdot s$$
holds for the work done by the force $F$ but also permits other forces to be present.
The above formula is indeed independent of $t$; however, if $F$ is the only force, then greater $t$ means greater $s$ and hence greater $W$.
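This can be checked numerically. The values below are arbitrary illustrations (not from the question), for a constant force acting alone on a mass starting from rest:

```python
# Arbitrary example values: F in newtons, m in kilograms, t in seconds.
F, m, t = 10.0, 2.0, 3.0

a = F / m                              # acceleration from F = m a
s = 0.5 * a * t**2                     # distance covered, starting from rest
W_from_distance = F * s                # W = F s
W_from_time = 0.5 * F**2 * t**2 / m    # W = F^2 t^2 / (2 m)

assert abs(W_from_distance - W_from_time) < 1e-9
print(W_from_distance)  # 225.0 (joules)
```

Doubling $t$ quadruples both $s$ and $W$, which is exactly the point: more time under the same force means more distance, hence more work.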
A better statement would be that:
time does not matter when calculating work provided that the distance covered remains the same.
In your calculation the distance covered changes. | {
"domain": "physics.stackexchange",
"id": 50145,
"tags": "newtonian-mechanics, forces, work, time"
} |
Why do gas discharge tubes operate indefinitely? (Or do they?) | Question: Question
Can a gas discharge tube with a perfectly sealed glass tube operate indefinitely? Or do the ions get "used up"?
Further detail
My (limited) understanding of gas discharge tubes is that a voltage is applied between the cathode and anode at opposite ends of the tube. Electrons escape from the cathode due to the strong electric field and may bounce into gas particles with enough energy to dislodge more electrons. With high voltage and low pressure, this collision-electron-generation process can cascade to form a "non-thermal" plasma of free electrons, cations, and neutral gas particles. From what I've read, the glow comes from photons released during de-excitation of gas particles (and maybe ions).
Any corrections to this summary would be greatly appreciated.
Eventually, wouldn't all of the gas particles get converted to cations and collect at the cathode, while all the free electrons would flow to the anode? And then the gas discharge and current across the tube would stop?
Answer: Once the positive gas ion hits the cathode (= roughly an infinite bucket of electrons) it neutralizes and flows back into the tube as neutral gas. You're right that otherwise the discharge would be quite short-lived. | {
"domain": "physics.stackexchange",
"id": 85408,
"tags": "electricity, plasma-physics"
} |
What happens if the holonomy group lies in $SU(2)$ for a CY 3-fold? | Question: I am a mathematician, and I am reading a physics paper about the holonomy group of Calabi-Yau 3-folds.
In that paper, a Calabi-Yau 3-fold $X$ is defined as a compact 3-dimensional complex manifold with Kahler metric such that the holonomy group $G \subset SU(3)$ but not contained in any $SU(2)$ subgroup of $SU(3)$.
They remark "the condition that $G$ is not contained in $SU(2)$ is a really serious condition for physics since otherwise it would change the supersymmetry".
Could anyone kindly explain this sentence in more detail? I think that if $G\subset SU(2)$ the physics derived from the Calabi-Yau 3-fold has more supersymmetry (because there is less restriction), but what is wrong with that? One possibility is that a theory which is too symmetric is trivial.
I would appreciate it if someone could kindly explain the physics behind it to a mathematician.
Answer: Let me elaborate on Ryan's correct comments.
The flat background makes all components of the spinors covariantly constant; so the geometry is compatible with all of SUSY.
A generic curved 6-real-dimensional manifold has an $O(6)$ holonomy or $SO(6)\sim SU(4)$ if it is orientable. The $SU(3)$ subgroup preserves 1/4 of the original supercharges – it is the single charge among $4$ in $SU(4)$ that is not included in $3$ of $SU(3)$ and therefore "not participating in the mixing" that destroys the covariant constancy. If the holonomy is $SU(2)$, then 2/4 of the original spinor components i.e. 1/2 of the supersymmetry is preserved.
In reality, the $SU(3)$ holonomy manifolds are the usual generic Calabi-Yau three-folds. Starting from 16 supercharges, i.e. $N=4$ of heterotic string theory, for example, they produce the realistic $N=1$. However, $SU(2)$ holonomy would produce $N=2$ in four dimensions, which is too much. $N=2$ SUSY is too large for realistic models – at least for quarks and leptons – because it guarantees too-large multiplets, left-right symmetry of spacetime (no chirality), and other strong constraints on the spectrum and the strength of various interactions that would disagree with observations.
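The supercharge counting in the previous paragraph can be spelled out with a little arithmetic (my own illustrative sketch of the bookkeeping, not a statement from the paper):

```python
# Heterotic string theory starts with 16 supercharges (N = 4 in four dimensions).
total_supercharges = 16

# Fraction of spinor components left covariantly constant by each holonomy group:
preserved_fraction = {"SU(3)": 1 / 4, "SU(2)": 1 / 2}

for group, fraction in preserved_fraction.items():
    remaining = int(total_supercharges * fraction)
    # In four dimensions, 4 supercharges correspond to N = 1, 8 to N = 2.
    print(f"{group} holonomy: {remaining} supercharges -> N = {remaining // 4}")
```

This reproduces the statement above: $SU(3)$ holonomy yields the realistic $N=1$, while $SU(2)$ holonomy yields $N=2$.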
The manifolds with the $SU(2)$ holonomy are pretty much just Calabi-Yaus of the form $K3\times T^2$ and perhaps some orbifolds of this manifold. So two of the six dimensions remain flat and decoupled from the other, curved four. | {
"domain": "physics.stackexchange",
"id": 10873,
"tags": "string-theory, differential-geometry, supersymmetry, compactification, calabi-yau"
} |
Hector_slam and IMU | Question:
I'm interested in how an IMU is used as part of Hector_slam. Is it part of the odom data, or is the IMU purely to stabilize/orient the laser scanner, with data from the IMU not passed to hector_slam?
Originally posted by TJump on ROS Answers with karma: 160 on 2012-08-05
Post score: 1
Answer:
Information about the LIDAR attitude has to be available. If the platform experiences roll/pitch motion, using an IMU and tf is a straightforward way to supply this information. Of course, if the LIDAR is mounted to a heavy platform traveling on flat ground, the roll/pitch estimate can be considered fixed and no IMU is needed.
Low cost IMUs as commonly used in robotics (as opposed to military grade/aerospace ones) are hard to use for odometry, as the double integration of accelerometer data leads to very high translational errors within seconds. For this reason, IMU data generally has to be filtered and fused with other sources to be useful for (translational) motion estimation. An EKF-based filter for this is available in the hector_localization stack. Unfortunately we didn't get around to writing a proper tutorial for that so far. This state estimation filter estimates the 6DOF pose of the vehicle (as well as IMU sensor biases). The tf generated by the filter can be used as a start estimate for scan matching in hector_mapping, while the 2D pose estimate by the latter in turn can be used to update the EKF solution again.
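The drift from double integration is easy to see in a toy simulation. The noise figures below are made up but plausible for a low-cost MEMS accelerometer:

```python
import random

random.seed(0)
dt = 0.01      # 100 Hz IMU sample rate
sigma = 0.05   # accelerometer noise standard deviation in m/s^2 (assumed)

v = x = 0.0
for _ in range(1000):                       # 10 simulated seconds
    a_measured = random.gauss(0.0, sigma)   # true acceleration is zero
    v += a_measured * dt                    # first integration: velocity
    x += v * dt                             # second integration: position
print(f"position error after 10 s: {x:.3f} m")
```

For white accelerometer noise the position error grows roughly like $t^{3/2}$, so after a minute the accelerometer-only estimate is useless on its own, hence the fusion with scan matching described above.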
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2012-08-06
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by TJump on 2012-08-06:
I have not found many of the low cost robot IMUs supported in ROS. Is there a specific node/other to use for these low cost IMUs to work with the EKF-based filter?
Comment by TJump on 2012-08-11:
I've decided to test out the imu_um6 node.
Comment by Astronaut on 2013-01-02:
So is there any help or documentation how to use IMU in hector_localization and hector_mapping?? | {
"domain": "robotics.stackexchange",
"id": 10485,
"tags": "slam, imu, navigation, odometry, hector-slam"
} |
Duality computers | Question: There is a lot on the internet about quantum computers and how they could factor integers.
However, there is a type of computer which also uses the principles of quantum mechanics, which can be used to do much more than this, including solving NP-complete problems in an instant. See http://arxiv.org/abs/cs/0507003 and http://arxiv.org/abs/quant-ph/0605087. These computers have been called "Duality computers".
Why isn't more written about these types of computers?
Answer: Summary. All of these papers misunderstand the notion of quantum superpositions and interference, and lead to analyses which do not conserve probability (i.e. in which the probabilities of outcomes do not add to one) without specifying an interpretation for this fact. This may be considered to correspond to post-selection — conditioning on a particular outcome of a measurement; a subject which has indeed been studied in the context of quantum computing, and which is embodied as the complexity class PostBQP. But mostly, I suppose that these articles are ignored because while they propose to extend quantum computation, they seem to display a lack of awareness of fundamentals of quantum computation.
Details. I've added some elaboration on the failures of the analyses of these articles.
The article by Shiekh [cs/0507003] describes "adding quantum interference to quantum computers". This ignores the fact that:
Quantum interference is already a feature of quantum computation (and it is in fact difficult to conceive of how a quantum-mechanical version of computation could avoid exhibiting it); and
The way that quantum interference works is not how Shiekh describes it (in particular, his analysis seems to ignore normalization of probabilities). Shiekh supposes that if you have a superposition such as |000⟩ + |001⟩ + ... + |111⟩, that you can simply make a "second copy" of that superposition with some phases (e.g. on the standard basis states which are not satisfying solutions to a SAT formula), obtaining a superposition such as −|000⟩ + |001⟩ − ... − |111⟩, and somehow interfere them to obtain a state of the form |001⟩, cancelling all of the terms with opposing signs. But note that the two superpositions described above are not normalized: this is already a warning sign — the cancellation should at best yield |001⟩/21−n/2 if you keep track of the normalization, and allow this alternative "interference" process. That's a probability of 1/2n−2 of obtaining the result you want, which is still vanishingly small. But what happens if this result is not realized? Shiekh's analysis doesn't even propose any answer, and so it is incomplete. Worse still, what happens if there are no satisfying solutions at all, and all the terms cancel out? You're left with the zero vector: the system is in no state at all, which could be construed as a contradiction of the concept of "state", unless you propose that the system has been utterly destroyed (which would presumably affect the state of some other system, e.g. a detector, which you should be accounting for).
The article by Gudder (which you link to in the comments) describes a "quantum wave divider" Dp (in the words of Long's article, see below) which, for a probability distribution p = (p1 , ... , pn), performs a mapping $$\begin{align} |\psi\rangle \mapsto \frac{1}{\|\mathbf p\|_2} \bigoplus_{j=1}^n \; p_j |\psi\rangle \;. \end{align}$$
Yes, that's the Euclidean norm in the denominator, which effectively replaces the probability distribution p by the Euclidean unit-norm vector q = p / ||p||. This "wave splitter" prepares an independent quantum register Q in a state |q⟩ which is a superposition over |1⟩ , ... , |n⟩, with coefficients given by qk = ek† q, which is therefore in a tensor product with an input state |ψ⟩. So it's no wonder that Dp so described is an isometry.
The adjoint "wave combiner" operation, Cp = Dp† effectively describes the effect of trying to project that register Q back onto the state |q⟩. So of course it will be isometric on the image of Dp ; but unless Q is in the state |q⟩, this operation is norm-decreasing, and therefore is not probability-conserving. Everything in that article is fine until he proposes to apply Cp ; for instance, the block-diagonal unitary operators are perfectly good coherently-controlled unitary operators conditioned on the register Q.
Unfortunately, the operation Cp which Gudder actually describes is not the adjoint of Dp — although he does claim that it is — but is the adjoint of |1⟩ + ... + |n⟩, which is a non-normalized version of the uniform superposition over all computational basis states. One can see that this is not actually the adjoint of Dp by noting that $$\begin{align} C_{\mathbf p} D_{\mathbf p} |\psi\rangle \;=\; C_{\mathbf p} \left[ \frac{1}{\|\mathbf p\|_2} \bigoplus_{j = 1}^n \; p_j |\psi\rangle \right] \;=\; \sum_{j = 1}^n \;\frac{p_j}{\| \mathbf p \|_2} |\psi\rangle \;=\; \left( \sum_{j = 1}^n \; q_j \right) |\psi\rangle \end{align}$$ which, contrary to his claim in Lemma 2.2, is equal to |ψ⟩ if and only if p is a point-mass function. This result is in fact super-normalized: it enables events with probability greater than 1. It's not clear what this would mean, and Gudder proposes no such meaning.
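A quick numerical illustration of this super-normalization (my own sketch, directly evaluating the factor $\sum_j q_j$ from the formula above):

```python
import math

def combiner_factor(p):
    """Return sum_j q_j with q = p / ||p||_2, the scalar factor in C_p D_p |psi>."""
    norm = math.sqrt(sum(x * x for x in p))
    return sum(x / norm for x in p)

print(combiner_factor([1.0, 0.0, 0.0]))  # 1.0: point mass, probability conserved
print(combiner_factor([0.25] * 4))       # 2.0: uniform over 4 outcomes, norm doubled
```

For any distribution that is not a point mass the factor exceeds 1 (and by Cauchy-Schwarz it is at most $\sqrt{n}$, with equality for the uniform distribution), so the "combined" state always has norm greater than one.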
Gudder's article actually has a number of such problems with linear algebra. Some of the mistakes he makes in the context of mixed states are addressed by Long's paper [quant-ph/0605087] (which, however, does not address the problems with failure to preserve probability, or what the actual adjoint of Dp is).
So, that's likely why these ideas are not widely studied. Generally, whenever there is an article which claims to generalize quantum computation in some bold new manner, one should check whether probability is conserved (and what significance the authors attribute to it not being conserved), and whether the algebra is otherwise sound. | {
"domain": "cstheory.stackexchange",
"id": 803,
"tags": "quantum-computing"
} |
Boltzmann distribution for chemical potentials | Question: I read that if we have a system with two co-existing phases with chemical potentials $\mu_1$ and $\mu_2$, respectively, then, at the equilibrium, the concentrations $X_1$ and $X_2$ are related by the equation:
$$X_1=X_2\exp\left(\frac{-(\mu_1-\mu_2)}{kT}\right).$$
Is that true? How can I derive this expression?
Answer: It is just the Boltzmann distribution, with the energy being given by the chemical potential. | {
"domain": "physics.stackexchange",
"id": 353,
"tags": "thermodynamics, statistical-mechanics, equilibrium, chemical-potential"
} |
Would an adhesive surface have more air resistance? | Question: Imagine spreading double-sticky tape all over the surface of a car or a plane. Would there more significantly more aerodynamic drag as a result of the adhesive 'sticking' to air molecules and slowing down? This would certainly cause more resistance in solid mediums but would this cause more resistance in air or water? If an adhesive surface can drastically increase friction with solid surfaces, could one also increase friction with the air?
Answer: It seems the question assumes that because the tape is sticky, it will somehow strike or capture air molecules which will increase drag. This is incorrect since air molecules have very weak intermolecular forces (you might get a layer of gas molecules to form on the tape) and are moving too quickly to begin with to stick. Also, even though it's a sticky surface, it's still fairly flat/smooth.
Aerodynamic drag does depend on the roughness of a surface$^1$, and
even though objects with a rough surface will have more drag, it's hard to imagine that a sticky taped surface that is more or less smooth (though sticky) has an appreciable skin friction coefficient (see below).
And as pointed out by Niels, objects moving through a gas have a boundary layer adjacent to the surface which is more or less stationary relative to the surface. For surfaces with skin roughness (see links below) enough to noticeably affect the boundary layer profile, you would expect the surface to be a lot more rough than sticky tape. So sticky tape would not have the effect you ask of, in air.
You're right that the same cannot be said about solids, but solids that have sticky tape will experience cohesive/adhesive forces when in contact.
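As a rough order-of-magnitude check of the footnote below: for a constant skin friction coefficient, density, and speed, the surface integral reduces to $F = c_f\,\tfrac{1}{2}\rho v^2 A$. The numbers here are illustrative guesses, not measured values:

```python
c_f = 0.003    # order of magnitude for a turbulent flat plate (assumed)
rho = 1.225    # kg/m^3, air density at sea level
v = 30.0       # m/s, roughly highway speed
A = 8.0        # m^2 of wetted surface (assumed)

# Constant-c_f case: the double integral over the surface collapses to a product.
F = c_f * 0.5 * rho * v**2 * A
print(f"skin friction drag: {F:.1f} N")  # about 13.2 N
```

A few tens of newtons at most, which is why a smooth-but-sticky film would not noticeably change the drag budget.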
$^1$ This is referred to as skin friction. In the equation for drag force,
skin friction is included in the drag coefficient $c_D$, and the total skin friction drag force can be computed by $$ F=\iint \limits _{S}c_{f}{\frac {\rho v^{2}}{2}}\,dA$$ where $c_f$ is the skin friction coefficient. | {
"domain": "physics.stackexchange",
"id": 95716,
"tags": "fluid-dynamics, drag, air, adhesion"
} |
Morse code translator in Java | Question: I'm writing an application called "Learn Morse" in Java using Maven. A Morse code translator is a part of this app. I created encode and decode methods to translate Morse code in both directions, and I moved the HashMap to the properties file. I also created a helper method to get keys by value.
pl/hubot/dev/learn_morse/model/Encoder.java:
package pl.hubot.dev.learn_morse.model;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;
public class Encoder {
public final String encode(final String input)
throws IOException,
NoSuchFieldException {
StringBuilder encoded = new StringBuilder();
String lowerCaseInput = input.toLowerCase();
for (int i = 0; i < lowerCaseInput.length(); i++) {
char current = lowerCaseInput.charAt(i);
encoded.append(getMorseCode().getOrDefault(
Character.toString(current), " "));
encoded.append(" ");
}
return encoded.toString();
}
public final String decode(final String input)
throws IOException,
NoSuchFieldException {
StringBuilder decoded = new StringBuilder();
String lowerCaseInput = input.toLowerCase();
for (String current : lowerCaseInput.split(" ")) {
String key = getKeyByValue(getMorseCode(), current);
if (key != null) {
decoded.append(key);
}
decoded.append(" ");
}
return decoded.toString();
}
final Map<String, String> getMorseCode()
throws IOException,
NoSuchFieldException {
Class<?> aClass = Encoder.class;
ClassLoader classLoader = aClass.getClassLoader();
String filename = "morse_code.properties";
try (InputStream input = classLoader
.getResourceAsStream(filename)) {
if (input == null) {
throw new FileNotFoundException(
"Sorry, unable to find "
+ filename);
}
Properties properties = new Properties();
properties.load(input);
Map<String, String> translations = new HashMap<>();
for (String key : properties.stringPropertyNames()) {
String value = properties.getProperty(key);
translations.put(key, value);
}
return translations;
}
}
private <T, E> T getKeyByValue(final Map<T, E> map, final E value) {
for (Map.Entry<T, E> entry : map.entrySet()) {
if (Objects.equals(value, entry.getValue())) {
return entry.getKey();
}
}
return null;
}
}
resources/morse_code.properties:
a = ._
b = _...
c = _._.
d = _..
e = .
f = .._.
g = __.
h = ....
i = ..
j = .____
k = _._
l = ._..
m = __
n = _.
o = ___
p = .__.
q = __._
r = ._.
s = ...
t = _
u = .._
v = ..._
w = .__
x = _.._
y = _.__
z = __..
\u0105 = ._._
\u0107 = _._..
\u0119 = .._..
\u00E9 = .._..
ch = ____
\u0142 = ._.._
\u0144 = __.__
\u00F3 = ___.
\u015B = ..._...
\u017A = __.._
\u017C = __.._.
0 = _____
1 = .____
2 = ..___
3 = ...__
4 = ...._
5 = .....
6 = _....
7 = __...
8 = ___..
9 = ____.
. = ._._._
, = __..__
' = .____.
" = ._.._.
_ = ..__._
\: = ___...
; = _._._.
? = ..__..
\! = _._.__
- = _...._
+ = ._._.
/ = _.._.
( = _.__.
) = _.__._
\= = _..._
@ = .__._.
Answer: My main complaint about your code is that you rebuild the table used for translation for each and every character you are encoding or decoding. This is a major waste of computational power. In addition I've got some other minor comments:
Why the throws SomeException? – Do you really need these, as you don't explicitly throw them from your code? I'm not entirely sure on common or best practices here, so this might be a moot point, but couldn't these be removed in your code?
Add more space – The density of your code makes it harder to read. Add a little vertical space here and there, and it will look a lot better. Consider opening your methods with a newline, so that the signature doesn't interfere with the start of the method.
How to access the ch from the properties – In the properties file, you have an entry for ch. How do you get to this one? Or is this a typo?
Also, adding space before for loops or if blocks helps structure the code within a method. Finally, leaving two newlines between methods helps separate them.
Most variables are named well – Mostly good naming, but input is kind of vague. I would prefer text in encode() and morseCode in decode(), and I'd avoid lowerCaseInput. Having a variable name describing what has happened to it seems a little strange.
Avoiding the getMorseCode() rebuild
You're in a class, and a class can have a constructor. Utilize this to build the needed HashMap. And instead of having a rather costly getKeyByValue() I would advise, in this case, maintaining a duplicate reverse map as well. (Another option would be to use an external library that implements a bidirectional map, like the BiMap from the Guava library.) This would lead to code resembling this untested code:
public class Coder {
private HashMap <String, String> encodeMap;
private HashMap <String, String> decodeMap;
Coder() {
this("morse_code.properties");
}
Coder(String filename) {
InputStream input = null;
Properties properties = new Properties();
try {
input = Coder.class.getClassLoader().getResourceAsStream(filename);
if (input == null) {
System.out.println("Sorry, unable to find " + filename);
return; // Or throw bad exception... :-)
}
encodeMap = new HashMap<>();
decodeMap = new HashMap<>();
properties.load(input);
for (String text : properties.stringPropertyNames()) {
String morseCode = properties.getProperty(text);
encodeMap.put(text, morseCode);
decodeMap.put(morseCode, text);
}
// Catch and cleanup, if needed...
} catch (IOException ex) {
ex.printStackTrace();
} finally{
if (input!=null) {
try {
input.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
public final String encode(final String textInput) {
StringBuilder encoded = new StringBuilder();
String text = textInput.toLowerCase();
for (int i = 0; i < text.length(); i++) {
String lookup = Character.toString(text.charAt(i));
encoded.append(encodeMap.getOrDefault(lookup, " "));
encoded.append(" ");
}
return encoded.toString();
}
public final String decode(final String morseInput) {
StringBuilder decoded = new StringBuilder();
for (String morseCode : morseInput.toLowerCase().split(" ")) {
decoded.append(decodeMap.getOrDefault(morseCode, " "));
decoded.append(" ");
}
return decoded.toString();
}
}
Here I've also renamed the class to Coder, as it seemed kind of strange to use the Encoder class to decode stuff... I've also removed some temporary variables to avoid clutter.
I also added a second constructor allowing for providing a different filename to the constructor. This way you could change the entire functionality of the encode/decode by providing a different properties file.
Code is untested as I don't have access to do the resources thingy using my online java compiler, but you'll get the gist of the idea.
PS! Using try-with-resources, as Vogel612 exemplifies in his answer, is most likely a better, more up-to-date way to do the properties reading.
"domain": "codereview.stackexchange",
"id": 25469,
"tags": "java"
} |
How to predict the solubility of an organic compound in different kinds of solvents? | Question: How is it possible to know whether a particular compound will dissolve in a particular kind of solvent, i.e. organic, polar, protic...?
Are there any indicators to look for in molecules or elements that allow one to predict what kind of solvent should be used to dissolve it?
Answer: There are some general guesses one can make from looking at the structure but the Abraham solvation equation is commonly used to estimate the solubility of a compound in a given organic solvent:
$$\log P_s = c + e E + s S + a A + b B + v V$$
This equation relies on a set of descriptors to characterize the properties of solvents and solutes to give the water/solvent partition coefficient $P_s$ (the ratio of the solubilities in water and solvent, under certain assumptions):
$c,e,s,a,b,v$ are coefficients that describe the solvent. Abraham and coworkers have collected coefficients for 85 different solvents.
The other variables are descriptors of the solute:
$E$ is the excess molar refractivity, which is a measure of the polarizability of a molecule.
$S$ is the solute dipolarity/polarizability.
$A$ is the overall (summation) hydrogen bond acidity.
$B$ is the overall (summation) hydrogen bond basicity.
$V$ is the McGowan characteristic volume.
You can look at the paper if you want to know exactly what these descriptors are, but their values can be obtained through various experimental and computational means, and combined with the solubility in water, used to estimate the solubility of a given compound in an organic solvent.
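As a sketch of how the equation is evaluated, here is the linear combination in code. The coefficient and descriptor values below are made-up placeholders, not values from Abraham's compilation:

```python
def log_partition(solvent_coeffs, solute_descriptors):
    """log P_s = c + e*E + s*S + a*A + b*B + v*V (Abraham solvation equation)."""
    c, e, s, a, b, v = solvent_coeffs
    E, S, A, B, V = solute_descriptors
    return c + e * E + s * S + a * A + b * B + v * V

solvent = (0.09, 0.56, -1.05, 0.03, -3.46, 3.81)   # hypothetical (c, e, s, a, b, v)
solute = (1.50, 1.60, 0.32, 1.28, 1.36)            # hypothetical (E, S, A, B, V)
print(round(log_partition(solvent, solute), 3))
```

Given the compound's aqueous solubility, the resulting water/solvent partition coefficient then gives an estimate of its solubility in the organic solvent.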
Fortunately, you don't have to do all this by hand, as several websites, like this, offer lookup and computation where you only have to specify a compound and pick a solvent from a list. | {
"domain": "chemistry.stackexchange",
"id": 2349,
"tags": "aqueous-solution, solvents"
} |
Bash function that allows running user aliases with sudo on Ubuntu | Question: I wrote this little Bash function to replace my sudo command.
It checks whether I have an alias for the given command and runs the aliased command instead of the literal one in that case.
As this is going to be used with sudo, I want to be sure there are no critical mistakes in it which generate security or safety issues. Please check whether everything is okay with it, like quoting issues or improvements to the sed pattern maybe. Thanks.
Here is my function:
sudo() { if alias "$1" &> /dev/null ; then $(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}" ; else command sudo $@ ; fi }
Or nicely formatted:
sudo() {
# check if the command passed as argument is an alias
if alias "$1" &> /dev/null ; then
# extract the aliased command from the output of 'type'
# and run it with the remaining arguments
$(type "$1" | sed -E 's/^.*`(.*).$/\1/') "${@:2}"
else
# run the original 'sudo' command with all arguments
command sudo $@
fi
}
Answer: First of all, you need to quote $@:
command sudo "$@"
If you don't your function will break if the command you pass to sudo contains a space in its path.
Then, this will break if your input is aliased to something which contains a backtick. For example:
$ alias foo='ls `which sudo`'
$ type foo
foo is aliased to `ls `which sudo`'
Now, the output of your sed command is not what you expect:
$ type foo | sed -E 's/^.*`(.*).$/\1/'
$
That will return nothing since the ^.*` means "match the longest possible string from the beginning of the string to a backtick" and, therefore, will match everything except the final '. A better approach would be to use a tool that has non-greedy matching like perl:
$ type foo | perl -pe 's/^.*?\`(.*).$/\1/'
ls `which sudo`
You will also need to eval it in order for it to run correctly:
$ "$(type foo | perl -pe 's/^.*?\`(.*).$/\1/')"
bash: ls `which sudo`: command not found
$ eval $(type foo | perl -pe 's/^.*?\`(.*).$/\1/')
/sbin/sudo
Putting all this together, gives:
sudo() {
    # check if the command passed as argument is an alias
    if alias "$1" &> /dev/null ; then
        # extract the aliased command from the output of 'type'
        # and run it with the remaining arguments
        eval $(type "$1" | perl -pe 's/^.*?\`(.*).$/\1/') "${@:2}"
    else
        # run the original 'sudo' command with all arguments
        command sudo "$@"
    fi
}
I can't guarantee that it will work for all cases though. All things considered, I wouldn't use this approach at all. I would instead use
alias sudo='sudo '
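The trailing space is what makes this work: when an alias's value ends in a blank, bash also checks the next word for alias expansion, so your own aliases survive being passed through sudo. A minimal sketch with a harmless stand-in (the wrap/say aliases are made up for the demo):

```shell
#!/bin/bash
shopt -s expand_aliases           # aliases are off by default in non-interactive shells
alias say='echo hello'
alias wrap='env '                 # trailing space: the word after "wrap" gets alias-expanded too
wrap say                          # runs: env echo hello
wrap say > "${TMPDIR:-/tmp}/alias_demo.out"   # capture the output for inspection
```

Without the trailing space, wrap say would try to execute a literal command named "say" and fail.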
As explained here. | {
"domain": "codereview.stackexchange",
"id": 22184,
"tags": "bash, linux"
} |
How do I build a world for a line follower bot in gazebo? | Question: I am just starting off in robotics with my own little pet project: a line follower using openCV. However, I have no idea how to create a world in Gazebo for the bot. I went through the official Gazebo tutorials, however there doesn't seem to be stuff that can help me with this.
I intend to build a plane which will be the floor of the room and "paint" a line on it for the bot to follow. I know many pre-made models exist, but I would like to try it out on my own. How can I achieve this? Can Blender be used to create such a room?
Answer: At a high level, the simplest approach is to add a texture to the ground plane; the line itself is just part of that texture. There's a Gazebo tutorial on adding color and textures, and there's also a section on textures in the Gazebo model appearance tutorial.
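For the ground plane itself, the texture is usually referenced from the model's SDF. A minimal sketch of what that might look like (the model name, the material script name LineFollower/Track, and the directory layout are assumptions for illustration, not taken from the tutorial):

```xml
<!-- sketch: SDF fragment for a static textured ground plane -->
<model name="line_track">
  <static>true</static>
  <link name="link">
    <visual name="visual">
      <geometry>
        <plane><normal>0 0 1</normal><size>10 10</size></plane>
      </geometry>
      <material>
        <script>
          <uri>model://line_track/materials/scripts</uri>
          <uri>model://line_track/materials/textures</uri>
          <name>LineFollower/Track</name>
        </script>
      </material>
    </visual>
    <collision name="collision">
      <geometry>
        <plane><normal>0 0 1</normal><size>10 10</size></plane>
      </geometry>
    </collision>
  </link>
</model>
```

The material script and the image with the painted line would then live under the model's materials/scripts and materials/textures directories, following the conventions from the texture tutorials above.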
There are many different tools that you can use to create the textures and models with textures. Blender is a commonly used tool, but you can use any 3D drawing tool.
However, I'd suggest looking for projects that have already done this so you can play with them. A good way to learn is to start with an existing model and modify it rather than building everything from scratch.
A quick search finds several projects with line follower worlds already implemented.
https://github.com/sudrag/line_follower_turtlebot
http://edu.gaitech.hk/turtlebot/line-follower.html
https://upcommons.upc.edu/bitstream/handle/2117/111138/tot.pdf | {
"domain": "robotics.stackexchange",
"id": 2449,
"tags": "gazebo, line-following"
} |
Is it okay to have different versions of ROS on the remote PC and the turtlebot PC? | Question:
For example, I have ros-melodic on the remote PC and
ros-kinetic on the turtlebot PC.
Originally posted by turtlebot3 on ROS Answers with karma: 42 on 2018-12-18
Post score: 0
Original comments
Comment by gvdhoorn on 2018-12-19:
We're all here to help, but this specific question has come up at least 10 times in the past 2 years. A quick Google (different ROS versions site:answers.ros.org) returns at least 1700 results for me.
Please try to use the search, ROS Answers is not a slow chat forum.
Answer:
There may be some issues between versions, but I've had success using three different versions of ROS at once (Hydro, Indigo, and Kinetic). I had to do some workarounds (mainly for Hydro, which is pretty old), but everything worked pretty well. Other than a "you should be OK, but there may be issues", I don't think there's much that can be said except: try it out and see what happens.
Originally posted by jayess with karma: 6155 on 2018-12-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by turtlebot3 on 2018-12-18:
Thank you for your comment :D | {
"domain": "robotics.stackexchange",
"id": 32190,
"tags": "ros-melodic, ros-kinetic"
} |