| anchor | positive | source |
|---|---|---|
Commutator of parity and Hamiltonian operators under even potential function | Question: I need to show what is $[H,P]$ where $H$ is the Hamiltonian and $P$ the parity operator. $V(\underset{\sim}x) = V(-\underset{\sim}x)$ in this case.
I start off with
$$
\langle \underset{\sim}x|HP|\psi\rangle
= \langle \underset{\sim}x|(\frac{p^2}{2m}+V)P|\psi\rangle
= \langle \underset{\sim}x|\frac{p^2}{2m}P|\psi\rangle+\langle \underset{\sim}x|VP|\psi\rangle
$$
and since $\langle \underset{\sim}x|V = V(\underset{\sim}x)\langle \underset{\sim}x|$ (is this step valid?) and $ \langle \underset{\sim}x|P = \langle -\underset{\sim}x|$, the above equation becomes
$$
\langle \underset{\sim}x|HP|\psi\rangle
= \langle \underset{\sim}x|\frac{p^2}{2m}P|\psi\rangle + V(\underset{\sim}x)\psi(-\underset{\sim}x)
$$
Similarly I have
$$
\langle \underset{\sim}x|PH|\psi\rangle
= \langle \underset{\sim}x|P\frac{p^2}{2m}|\psi\rangle + V(-\underset{\sim}x)\psi(-\underset{\sim}x)
= \langle \underset{\sim}x|P\frac{p^2}{2m}|\psi\rangle + V(\underset{\sim}x)\psi(-\underset{\sim}x)
$$
Taking the difference of the two, I find that
$$
\langle \underset{\sim}x|HP-PH|\psi\rangle
= \langle \underset{\sim}x|\frac{p^2}{2m}P-P\frac{p^2}{2m}|\psi\rangle
= -\langle \underset{\sim} x|\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x_i^2}P|\psi \rangle +\langle -\underset{\sim} x|\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x_i^2}|\psi\rangle
$$
which I had trouble evaluating. Any hints?
Answer: I think there is an easier way to go about this. Acting with the Parity operator on the Hamiltonian we have:
\begin{align}
P \hat{H} P & = \hat{H} ( - x ) \\
\Rightarrow P \hat{H} & = \hat{H} ( - x ) P
\end{align}
So the Hamiltonian commutes with the Parity operator if $ \hat{H} ( x ) = \hat{H} ( - x ) $. Now
\begin{equation}
\frac{ p ^2 }{ 2 m } = - \frac{\hbar^2}{ 2m} \frac{ \partial ^2 }{ \partial x ^2 } \xrightarrow{P} - \frac{\hbar^2}{ 2m} \frac{ \partial ^2 }{ \partial (-x) ^2 } = - \frac{\hbar^2}{ 2m} \frac{ \partial ^2 }{ \partial x ^2 }
\end{equation}
So the momentum squared is invariant. Furthermore, if
\begin{equation}
V ( - x ) = V ( x )
\end{equation}
then the potential is also invariant. Thus we have,
\begin{equation}
\hat{H} ( - x ) = \hat{H} ( x )
\end{equation}
and the Hamiltonian must commute with the parity operator. | {
"domain": "physics.stackexchange",
"id": 11912,
"tags": "quantum-mechanics, homework-and-exercises"
} |
Why can't magnetic field lines intersect with each other? | Question: Why can't magnetic field lines intersect with each other? My teacher said that if they happen to intersect with each other then the compass needle will show two different directions at a time which is not possible. But I thought that the compass needle will show the resultant direction of the intersecting field lines.
Answer: You are correct. But so is your teacher.
If two magnetic fields are added at a point then the direction of the magnetic field at that point is given by the resultant, which is the same as the direction of the compass needle.
Magnetic fields are vectors and there is always only one resultant no matter how many vectors are added together.
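The single-resultant point is just vector addition; here is a tiny numeric illustration (my own sketch, with made-up field values):

```python
import math

# Two magnetic fields at the same point (made-up values, in microtesla):
B1 = (30.0, 0.0)   # one field pointing along +x
B2 = (0.0, 40.0)   # another pointing along +y

# Superposition: add the vectors component-wise to get ONE resultant.
Bx, By = B1[0] + B2[0], B1[1] + B2[1]
magnitude = math.hypot(Bx, By)                 # 50.0 uT
direction = math.degrees(math.atan2(By, Bx))   # about 53.1 degrees from +x

# A compass needle at that point therefore has exactly one direction to show.
assert math.isclose(magnitude, 50.0)
assert 53.0 < direction < 53.3
```

However many fields you add, the sum is still a single vector, which is the direction the needle settles into.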
Magnetic field lines do not actually exist. To "see" them we have to use things like compass needles. If we put small compass needles end to end they will trace out a single line. From a different starting point we could trace out another line. Will these 2 lines ever cross? No. If they did the compass where they intersect would point in 2 different directions at the same time. Clearly impossible as your teacher says.
The same can almost be said for electric field lines. [See Why can two (or more) electric field lines never cross?]. They do not intersect except where they start or end on a point charge.
This does not happen for magnetic field lines because there is no magnetic equivalent of an electric charge. Magnets always have a North and South pole - they are dipoles. Even when you chop them up into small pieces each piece always has a North and a South pole. Nobody has ever found an isolated North or South pole - a magnetic monopole. | {
"domain": "physics.stackexchange",
"id": 64824,
"tags": "electromagnetism, magnetic-fields, vector-fields"
} |
Equation for reaction between chlorine fluoride and potassium bromide | Question: The equation given in the answer key was
$$\ce{ClF + 2KBr → KCl + KF + Br2}$$
However won't the $\ce{KCl}$ also react with $\ce{ClF}$ to form $\ce{KF}$ and $\ce{Cl2}$, resulting in the net reaction being
$$\ce{2ClF + 2KBr → 2KF + Cl2 + Br2}~?$$
Answer: Let's subtract the two reactions from each other to simplify things a bit:
$$\ce{2ClF + 2KBr -> 2KF + Cl2 + Br2}\ \ \ \ \ \ \ \ \ \ [1]$$
$$\ce{ClF + 2KBr -> KCl + KF + Br2}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ [2]$$
$$\ce{ClF + KCl -> KF + Cl2}\ \ \ \ \ \ \ \ \ \ \ \ \ \ [3] = [1]-[2]$$
Now we can discuss [2] and [3] separately by splitting them up into half reactions.
Reaction 2
$$\ce{ClF + 2KBr -> KCl + KF + Br2}$$
Potassium is a spectator ion. The reduction half reaction is:
$$\ce{ClF + 2e- -> Cl- + F-}$$
The oxidation half reaction is:
$$\ce{2Br- -> Br2 + 2e-}$$
Reaction 3
$$\ce{ClF + KCl -> KF + Cl2}$$
Potassium is a spectator ion. Fluorine does not change oxidation state. Chlorine undergoes comproportionation (oxidation states +1 and -1 to oxidation state of 0). The reduction half reaction is:
$$\ce{ClF + e- -> 1/2 Cl2 + F-}$$
The oxidation half reaction is:
$$\ce{Cl- -> 1/2 Cl2 + e-}$$
Which reaction happens?
That depends on the reduction potentials, the kinetics of the two reactions and the concentration of species present. I'm not sure how you would figure it out without experimental (or computational) data.
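To make the "depends on reduction potentials" point concrete, here is a toy comparison in Python. The ClF couple's potential is an invented placeholder (as noted above, the real number would have to come from experiment), so treat the comparison as purely illustrative; the Cl2 and Br2 values are the familiar standard reduction potentials.

```python
# Illustrative only: compare cell potentials E_cell = E_red(cathode) - E_red(anode).
E_RED = {
    'ClF/Cl-,F-': 1.50,   # HYPOTHETICAL value, chosen between Cl2 and F2 for illustration
    'Cl2/Cl-':    1.36,   # standard reduction potential, volts
    'Br2/Br-':    1.07,   # standard reduction potential, volts
}

def cell_potential(cathode, anode):
    # A positive E_cell means the overall redox reaction is thermodynamically favored.
    return E_RED[cathode] - E_RED[anode]

e2 = cell_potential('ClF/Cl-,F-', 'Br2/Br-')   # reaction 2: ClF oxidizes Br-
e3 = cell_potential('ClF/Cl-,F-', 'Cl2/Cl-')   # reaction 3: ClF oxidizes Cl-

# With these (partly made-up) numbers both are favored, bromide more strongly:
assert e2 > e3 > 0
```

With a real measured potential for the ClF couple, the same two subtractions would tell you which reaction is thermodynamically preferred (kinetics aside).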
Comparing the oxidation half reactions, you could make an argument about electronegativities of bromine vs chlorine, but I have no insight to compare the two reduction half reactions, which are very different.
The Wikipedia article on chlorine monofluoride states:
Many of its properties are intermediate between its parent halogens, $\ce{Cl2}$ and $\ce{F2}$.
If you substitute $\ce{Cl2}$ for $\ce{ClF}$ in reactions 2 and 3, they both look reasonable (reaction 3 becomes a self-exchange, i.e. equilibrium constant = 1, whereas reaction 2 reduces chlorine and oxidizes bromine, products are favored). | {
"domain": "chemistry.stackexchange",
"id": 11789,
"tags": "inorganic-chemistry, redox, halides, reactivity"
} |
BF interpreter written in C# | Question: I have recently written a Brainfuck interpreter in C#. I tested it with the examples given on EsoLang website. It does not handle errors right now.
Questions:
Even though it can only run one program at a time, should I put the variables into a class (create a new class) or just leave them inside the BF method?
Should I split all of the code from the cases and create separate methods for them, accessing them from the switch?
Of course, I am also looking for all other suggestions on how I can improve this code.
using System;
namespace BF
{
class MainClass
{
public static void Main(string[] args)
{
if (args.Length == 0) {
Console.WriteLine("Specify the source file!");
} else {
if (System.IO.File.Exists(args[0]) == true) {
BF(System.IO.File.ReadAllText(args[0]).ToCharArray());
} else {
Console.WriteLine("The path to the file is not valid!");
}
}
}
private static void BF(char[] instructions)
{
int instructionPointer = 0;
int[] memory = new int[30000];
int pointer = 1;
while (instructionPointer < instructions.Length) {
switch (instructions[instructionPointer]) {
case '>': {
pointer += 1;
break;
}
case '<': {
pointer -= 1;
break;
}
case '+': {
memory[pointer] += 1;
break;
}
case '-': {
memory[pointer] -= 1;
break;
}
case '.': {
Console.Write((char)memory[pointer]);
break;
}
case ',': {
memory[pointer] = byte.Parse(Console.Read().ToString());
break;
}
case '[':
{
if (memory[pointer] == 0) {
int s = 0;
int ptr = instructionPointer + 1;
while (instructions[ptr] != ']' || s > 0) {
if (instructions[ptr] == '[') {
s += 1;
} else if (instructions[ptr] == ']') {
s -= 1;
}
ptr += 1;
instructionPointer = ptr;
}
}
break;
}
case ']': {
if (memory[pointer] != 0) {
int s = 0;
int ptr = instructionPointer - 1;
while (instructions[ptr] != '[' || s > 0) {
if (instructions[ptr] == ']') {
s += 1;
} else if (instructions[ptr] == '[') {
s -= 1;
}
ptr -= 1;
instructionPointer = ptr;
}
}
break;
}
}
instructionPointer += 1;
}
}
}
}
Answer:
int pointer = 1;
Looks like you're throwing away memory[0] here; I'd expect the pointer to initialize at the very start of the "tape".
if (System.IO.File.Exists(args[0]) == true)
The == true part is redundant, and the fully-qualified File.Exists feels... crowded. I'd remove System.IO. and add using System.IO; at the top of the file.
Arguably if no argument is specified, you should be throwing some ArgumentException instead of just saying "specify the source file" and exiting with a code-0, which would be interpreted as a successful run. Such a guard clause would help reduce nesting in the Main procedure:
public static void Main(string[] args)
{
if (args.Length == 0)
{
throw new ArgumentException("No source file was specified.");
}
if (!File.Exists(args[0]))
{
throw new FileNotFoundException("Specified source file was not found.");
}
    BF(File.ReadAllText(args[0]).ToCharArray());
}
Notice the consistency with brace positioning; your code alternates between C#-style and Java-style braces, depending on whether we're looking at a member or its body - it really doesn't matter which one you prefer, but pick a style, and stick to it.
"BF" might perhaps make a passable class name - it's a noun. But here BF is a method, and methods do something, they're verbs. Interpret or even Run would be a better choice.
BF is a simple language; you don't need to go out of your way and implement a lexer to tokenize input and a parser to make sense out of it - basically every single character is an instruction, so it makes sense to process it one character at a time.
But is it ideal to read an entire file's contents, iterate every single character to create a character array, and then pass that array to a method that will iterate it one item at a time? Seems rather inefficient, considering you could be streaming the file content into the interpreter, and finish interpreting the BF program as you finish reading the last character in the file, having iterated the contents exactly once.
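Independent of language, the bracket-matching scans can also be precomputed up front instead of rescanned on every loop iteration. A compact sketch of the interpreter core (my own, in Python since the pattern is language-independent - not the reviewer's code; `,` input handling is omitted for brevity):

```python
import sys

def run(program, tape_size=30000):
    # Pass 1: pair up matching brackets so loops become O(1) jumps
    # instead of rescanning the source on every iteration.
    stack, match = [], {}
    for i, ch in enumerate(program):
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            j = stack.pop()
            match[i], match[j] = j, i

    tape = [0] * tape_size
    ptr = ip = 0
    while ip < len(program):
        ch = program[ip]
        if ch == '>':
            ptr += 1
        elif ch == '<':
            ptr -= 1
        elif ch == '+':
            tape[ptr] = (tape[ptr] + 1) % 256   # byte-sized cells with wraparound
        elif ch == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == '.':
            sys.stdout.write(chr(tape[ptr]))
        elif ch == '[' and tape[ptr] == 0:
            ip = match[ip]                       # jump past the loop body
        elif ch == ']' and tape[ptr] != 0:
            ip = match[ip]                       # jump back to the loop start
        ip += 1
    return tape
```

The same two-pass structure translates directly to C#: build a `Dictionary<int, int>` of bracket pairs once, then the `[` and `]` cases become single lookups.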
I think I'd make a separate dedicated class for the interpreter, with the pointer and memory[] as instance fields; I'd make a Dictionary<char, Action> to map each BF token to a dedicated method, too. | {
"domain": "codereview.stackexchange",
"id": 22553,
"tags": "c#, interpreter, brainfuck"
} |
GATK workflow for Cancer | Question: I am just starting to learn to use bioinformatics tools. My university has a limited and expensive bioinformatics team, so I'm mostly on my own except for big questions.
I am planning to use GATK to run 58 cancer control/normal pairs of Exome sequencing data (Illumina) from FASTQ or BAM file format, through the pipeline, with an output of a VCF & MAF format for analysis.
The current GATK pipeline is used for disease but not cancer, so I was wondering if anyone knew if there should be changes made for cancer. Here's the current pipeline starting with BAM files:
(Non-GATK) Picard Mark Duplicates or Samtools roundup
Indel Realignment (RealignerTargetCreator + IndelRealigner)
Base Quality Score Recalibration (BaseRecalibrator + PrintReads)
HaplotypeCaller
VQSR (VariantRecalibrator and ApplyRecalibrator in SNP and INDEL mode)
Annotation using Oncotator (?)
I'd like some verification that this pipeline will output what I need to run my samples on MuTect, MutSig, or some other analysis program. I appreciate any advice.
Answer: MuTect2 was just released into beta as part of GATK 3.5. It's based on HaplotypeCaller but makes somatic SNV and INDEL calls. You can find more information about MuTect2 on the GATK blog and ask any additional questions on the forum.
As a note: indel realignment is not needed with MuTect2, and there is no VQSR available for somatic calls.
MarkDuplicates -> BQSR -> Mutect2 -> Oncotator is a good basic workflow for somatic variant calling. | {
"domain": "biology.stackexchange",
"id": 4834,
"tags": "bioinformatics, cancer"
} |
Merge Intervals in JavaScript | Question: This is a task taken from Leetcode -
Given a collection of intervals, merge all overlapping intervals.
Example 1:
Input: [[1,3],[2,6],[8,10],[15,18]]
Output: [[1,6],[8,10],[15,18]]
/** Explanation: Since intervals `[1,3]` and `[2,6]` overlap, merge them into `[1,6]`. */
Example 2:
Input: [[1,4],[4,5]]
Output: [[1,5]]
/** Explanation: Intervals `[1,4]` and `[4,5]` are considered overlapping. */
My imperative solution -
/**
* @param {number[][]} intervals
* @return {number[][]}
*/
var merge = function(intervals) {
const sortedIntervals = intervals.sort((a,b) => a[0] - b[0]);
const newIntervals = [];
for (let i = 0; i < intervals.length; i++) {
if (!newIntervals.length || newIntervals[newIntervals.length - 1][1] < sortedIntervals[i][0]) {
newIntervals.push(sortedIntervals[i]);
} else {
newIntervals[newIntervals.length - 1][1] = Math.max(newIntervals[newIntervals.length - 1][1], sortedIntervals[i][1]);
}
}
return newIntervals;
};
My functional solution -
/**
* @param {number[][]} intervals
* @return {number[][]}
*/
function merge(intervals) {
const mergeInterval = (ac, x) => (!ac.length || ac[ac.length - 1][1] < x[0]
? ac.push(x)
: ac[ac.length - 1][1] = Math.max(ac[ac.length - 1][1], x[1]), ac);
return intervals
.sort((a,b) => a[0] - b[0])
.reduce(mergeInterval, []);
};
Answer: The code is mostly readable and clear:
the variable names are descriptive (for the most part - x is a little unclear)
there is good use of const and let instead of var
Some of the lines are a little lengthy - the longest line appears to be 117 characters long (excluding indentation):
newIntervals[newIntervals.length - 1][1] = Math.max(newIntervals[newIntervals.length - 1][1], sortedIntervals[i][1]);
I considered suggesting a for...of loop to replace the for loop in the imperative solution, after seeing the suggestion to use arr.entries() in this answer; that would allow a variable like interval instead of sortedIntervals[i]. However, when comparing in FF and Chrome with this jsPerf test, it seems that would be slower and thus less optimal, perhaps because each iteration would add a function call.
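For what it's worth, giving the current interval its own named start/end makes the algorithm read almost like prose. A Python restatement of the same merge (my own sketch, just to show the logic, not the review's code):

```python
# Merge overlapping intervals, keeping the merged list sorted by start.
def merge(intervals):
    merged = []
    for start, end in sorted(intervals):          # sort by start point
        if merged and start <= merged[-1][1]:     # overlaps the last merged one
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

assert merge([[1, 3], [2, 6], [8, 10], [15, 18]]) == [[1, 6], [8, 10], [15, 18]]
assert merge([[1, 4], [4, 5]]) == [[1, 5]]
```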
When intervals.sort() is called the array is sorted in place, so intervals could be used instead of sortedIntervals; that would reduce the storage by \$O(n)\$. | {
"domain": "codereview.stackexchange",
"id": 36156,
"tags": "javascript, algorithm, programming-challenge, functional-programming, ecmascript-6"
} |
Rate of reaction graph | Question: An autocatalytic reaction is a reaction in which one of the products catalyses the reaction.
In general what will be the shape of curve if the rate of reaction was plotted against time for an autocatalytic reaction?
I think the answer should be A (in the image), as an autocatalyst should keep increasing the rate of reaction until there is a limiting factor (for example, concentration), but the answer is B.
Answer: Let's suppose the mechanism is:
$$\ce{A + B <=> (AB) -> B + B}$$
Here, A is decomposed catalytically by B into B through the formation of a transient (AB) complex.
We can decompose the two-step process into two reactions:
$$\ce{A + B <=>[k_1][k_2] (AB)} $$
$$\ce{(AB) ->[k_3] B + B}$$
Supposing these are "elementary" reactions, then,
$$\frac{d}{dt}(AB) = k_1[A][B] - k_2 (AB) - k_3[B]$$
If we apply the pseudo-steady state hypothesis to (AB):
$$\frac{d}{dt}(AB) = k_1[A][B] - k_2 (AB) - k_3[B] = 0$$
$$(AB) = \frac{k_1[A][B] - k_3[B]}{k_2}$$
We can additionally eliminate [A] from this equation because by stoichiometry, $[A] + (AB) + [B] = C$, where C is a constant. For simplicity, let's also suppose that (AB) will always be very low compared to [A] and/or [B]. But if (AB) is always low, we can approximate this by $[A] + [B] = C$. If so, then $[A] = C - [B]$. Substituting this into the equation above:
$$(AB) = \frac{k_1\left(C - [B] \right)[B] - k_3[B]}{k_2}$$
$$(AB) = [B]\frac{k_1\left(C - [B] \right) - k_3}{k_2}$$
Now, the rate of formation of [B] is what we are interested in. The 2nd reaction gives us the rate:
$$\frac{d}{dt}[B] = 2 k_3 (AB)$$
We can sub in the equation for (AB) we got above:
$$\frac{d}{dt}[B] = 2 k_3 [B]\frac{k_1\left(C - [B] \right) - k_3}{k_2}$$
$$\frac{d}{dt}[B] = 2 \frac{k_3}{k_2} \left([B]k_1\left(C - [B] \right) - k_3[B]\right)$$
$$\frac{d}{dt}[B] = r_B = 2 \frac{k_3}{k_2} \left( \left(k_1 C -k_3\right)[B] - k_1 [B]^2 \right)$$
$$\frac{d}{dt}[B] = r_B = 2 \frac{k_3 k_1}{k_2} \left(- [B]^2 + \left(C - \frac{k_3}{k_1}\right)[B] \right)$$
That factor in parenthesis is a quadratic in [B]. We can find when the rate will be a maximum by differentiating this term with respect to [B] and setting the result to zero.
$$-2 [B]_{maxrate} + (C - \frac{k_3}{k_1}) = 0$$
$$[B]_{maxrate} = \frac{1}{2}(C - \frac{k_3}{k_1})$$
This shows that the rate of formation of [B] will have a maximum. (Well technically an extremum but if you take the 2nd derivative you will find the extremum we found is in fact a maximum.)
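One can also watch curve B emerge numerically. Below is a minimal sketch (my own, not part of the original answer; it assumes the (AB) intermediate decomposes quickly, so the net rate collapses to $k[\mathrm{B}](C-[\mathrm{B}])$):

```python
# Forward-Euler integration of db/dt = k*b*(C - b), the logistic-type rate
# law obtained once the (AB) intermediate is assumed to decompose quickly.
def simulate(k=1.0, C=1.0, b0=0.01, dt=0.001, steps=20000):
    b = b0
    rates = []
    for _ in range(steps):
        r = k * b * (C - b)   # instantaneous rate of formation of B
        rates.append(r)
        b += r * dt
    return rates

rates = simulate()
peak = rates.index(max(rates))

# The rate starts small (little catalyst present), climbs to an interior
# maximum near [B] = C/2 where it equals k*C^2/4, then dies as A runs out:
assert 0 < peak < len(rates) - 1
assert rates[0] < rates[peak] and rates[-1] < rates[peak]
assert abs(max(rates) - 0.25) < 1e-3
```

Plotting `rates` against time gives exactly the rise-then-fall shape of curve B.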
You indicated in your question that the curves you drew represented plots not of concentration, but of the reaction rate. The only curve that has a maximum rate is curve B, therefore it must be the right answer. | {
"domain": "chemistry.stackexchange",
"id": 12136,
"tags": "catalysis"
} |
Does the Moon's magnetic field affect Earth's magnetic field? | Question: I wanted to ask a question; it's simple but I cannot find any possible and perfect solution.
Earth has poles, North and South, by which we can get directions using a compass, but that's not the concern here.
My Question:
Would the Moon cause a change in the magnetic field of Earth when the two fields pass through each other, as the Moon does come within the magnetic reach of the Earth?
So would the Moon's magnetic field affect the Earth's magnetic field, just as its gravitational pull affects Earth's gravitational pull for oceans?
Answer:
So would the Moon's magnetic field affect the Earth's magnetic field, just as its gravitational pull affects Earth's gravitational pull for oceans?
Yes, but only slightly. Firstly, magnetic fields can superimpose, so the field at any point is the sum of the field due to the Earth and the field due to the moon.
However, the moon is rather far away (and has a weak magnetic pole strength), so the magnetic field due to the moon on Earth's surface is nearly negligible (a dipole's magnetic field also falls off rapidly, as the inverse cube of distance)
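To put a rough number on "nearly negligible", here is a back-of-envelope sketch (my own, assuming simple point-dipole $1/r^3$ scaling and round values):

```python
# Round illustrative values, not precise measurements.
EARTH_SURFACE_B = 50e-6      # tesla, typical geomagnetic field strength
R_EARTH = 6.371e6            # metres
EARTH_MOON_DIST = 3.844e8    # metres

# A dipole field scales as 1/r^3, so Earth's own field diluted out to
# lunar distance shrinks by a factor of (R_earth / d)^3:
b_at_moon = EARTH_SURFACE_B * (R_EARTH / EARTH_MOON_DIST) ** 3
print(f"Earth's field at lunar distance: {b_at_moon:.1e} T")  # roughly 2e-10 T

# The Moon, lacking a global dynamo, is far weaker still, so its field
# back at Earth's surface is negligible next to Earth's ~5e-5 T:
assert b_at_moon < 1e-9
```

By symmetry of the scaling, whatever weak field the Moon does carry is diluted by a comparable factor on its way to Earth.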
In addition, the magnetic field of the Moon may bolster or erode the Earth's field as magnets moving relative to each other tend to either lose magnetization or become stronger. But this process has a negligible effect when we take the Moon and Earth. | {
"domain": "astronomy.stackexchange",
"id": 72,
"tags": "the-moon, earth, magnetic-field"
} |
Text extraction of document files | Question: I have this script that pulls text out of .docx, .doc and .pdf files and uploads that text to an Azure SQL Server. This is so users can search on the contents of those documents without using Windows Search / Azure Search.
The filenames are all in the following format:
firstname surname - id.extension
The id is incorrect though, the ID is from an outdated database and the new database that I am updating holds both (newID and oldID).
COLUMNS:
ID - New ID of the candidate record
OldID - Old ID of the candidate record (old database schema)
OriginalResumeID - Document link ID for the candidate table to the document table
CachedText - The field I am updating (holds the document text) at the moment this will mostly be NULL
Here is the script:
## Get resume list
$params = @{
'Database' = $TRIS5DATABASENAME
'ServerInstance' = $($AzureServerInstance.FullyQualifiedDomainName)
'Username' = $AdminLogin
'Password' = $InsecurePassword
'query' = "SELECT id, OldID, OriginalResumeID FROM Candidate WHERE OriginalResumeID IS NOT NULL"
}
$IDCheck = Invoke-Sqlcmd @params
## Word object
$files = Get-ChildItem -force -recurse $documentFolder -include *.doc, *.pdf, *.docx
$word = New-Object -ComObject word.application
$word.Visible = $false
$saveFormat = [Enum]::Parse([Microsoft.Office.Interop.Word.WdSaveFormat], "wdFormatText")
foreach ($file in $files) {
Write-Output "Processing: $($file.FullName)"
$doc = $word.Documents.Open($file.FullName)
$fileName = $file.BaseName + '.txt'
$doc.SaveAs("$env:TEMP\$fileName", [ref]$saveFormat)
Write-Output "File saved as $env:TEMP\$fileName"
$doc.Close()
$4ID = $fileName.split('-')[1].replace(' ', '').replace(".txt", "")
$text = Get-Content "$env:TEMP\$fileName"
$text = $text.replace("'", "''")
$resumeID = $IDCheck | where {$_.OldID -eq $4id} | Select-Object OriginalResumeID
$resumeID = $resumeID.OriginalResumeID
<# Upload to azure #>
$params = @{
'Database' = $TRIS5DATABASENAME
'ServerInstance' = $($AzureServerInstance.FullyQualifiedDomainName)
'Username' = $AdminLogin
'Password' = $InsecurePassword
'query' = "Update Document SET CachedText = '$text' WHERE id = $ResumeID"
}
Invoke-Sqlcmd @params -ErrorAction "SilentlyContinue"
Remove-Item -Force "$env:TEMP\$fileName"
}
$word.Quit()
The problem is that running this on a large dataset, let's say 750000 documents takes far too long per document. I'm fairly certain that this is because it has to search through the entire $IDCheck object of 750000 records before it can get the originalResumeID of the record to upload to.
Running this on a smaller database is quite quick (around 200000 per 24 hours). I was thinking I could check the documents table and only pull rows where the CachedText field is null and loop that to run every 50000 documents so it would get quicker as it goes. Problem is the documents table will be massive and will take a long time to search through every time this is called.
Any help on speeding this up would be much appreciated.
EDIT:
Looks like it is the upload to azure causing the delay:
<# Upload to azure #>
$params = @{
'Database' = $TRIS5DATABASENAME
'ServerInstance' = $($AzureServerInstance.FullyQualifiedDomainName)
'Username' = $AdminLogin
'Password' = $InsecurePassword
'query' = "Update Document SET CachedText = '$text' WHERE id = $ResumeID"
}
Invoke-Sqlcmd @params -ErrorAction "SilentlyContinue"
Answer: I would try to use bulkcopy to load all of the IDs and their CachedText at once into a staging table in Azure, and then do a single update on your document table.
CREATE TABLE document
(docKey BIGINT IDENTITY(1, 1) PRIMARY KEY,
CachedText NVARCHAR(MAX),
id INT
);
CREATE TABLE document_stage
(CachedText NVARCHAR(MAX),
id INT
);
As you iterate over the files, you create a PSObject with the properties you want in your SQL table and add it to a collection. Then, after all files are done (or at set batching limits), you can use Out-DataTable to convert the collection into a data table and let SqlBulkCopy upload to the stage table in one batch; a single UPDATE will then update your primary table.
UPDATE Document
SET
CachedText = stg.CachedText
FROM document_stage stg
WHERE document.id = stg.id;
PS script
$files = Get-ChildItem -force -recurse $documentFolder -include *.doc, *.pdf, *.docx
$stagedDataAsArray = @()
foreach ($file in $files) {
$fileName = $file.name
$4ID = $fileName.split('-')[1].replace(' ', '').replace(".txt", "")
$text = Get-Content "$($file.FullName)"
$text = $text.replace("'", "''")
$resumeID = $IDCheck | where {$_.OldID -eq $4id} | Select-Object OriginalResumeID
$resumeID = $resumeID.OriginalResumeID
    <# create the row and add it to our collection #>
$fileInstance = New-Object -TypeName psobject
$fileInstance | add-member -type NoteProperty -Name cachedText -Value $text
$fileInstance | add-member -type NoteProperty -Name resumeID -Value $resumeID
$stagedDataAsArray += $fileInstance
}
$stagedDataAsTable = $stagedDataAsArray | Out-DataTable
$cn = new-object System.Data.SqlClient.SqlConnection("YOUR AZURE DB CONNECTION STRING");
$cn.Open()
$bc = new-object ("System.Data.SqlClient.SqlBulkCopy") $cn
$bc.DestinationTableName = "dbo.document_stage"
$bc.WriteToServer($stagedDataAsTable)
$cn.Close()
$params = @{
'Database' = $TRIS5DATABASENAME
'ServerInstance' = $($AzureServerInstance.FullyQualifiedDomainName)
'Username' = $AdminLogin
'Password' = $InsecurePassword
'query' = "UPDATE Document
SET
CachedText = stg.CachedText
FROM document_stage stg
WHERE document.id = stg.id;"
}
Invoke-Sqlcmd @params -ErrorAction "SilentlyContinue" | {
"domain": "codereview.stackexchange",
"id": 33369,
"tags": "time-limit-exceeded, powershell, pdf, ms-word, azure"
} |
Cause of Wallace line | Question: First off, I am a biologist, not an earth scientist.
Very recently I heard about a very interesting biological phenomenon that has its origins rooted in geology : The Wallace Line.
From a biological perspective, this is very interesting. Wikipedia explains it very well, but in short: the Wallace Line describes a very strong divide in the occurrence of species in south east Asia when comparing everything from Bali westward with everything from Lombok eastward. E.g. the large mammals characterizing Sumatra and Java are tigers, elephants and rhinos, whereas eastward of the Wallace Line you have your marsupials. The same goes for many other animal and plant species.
The explanation given is that many years ago, when the sea level was much lower, there was still an oceanic divide between the east and west of south east Asia. This is supposed to be the result of tectonic movement. However, I have not been able to find a detailed explanation of what could have caused this deeper oceanic divide. If I look at maps of tectonic plates and tectonic activity in the region, there are no tectonic boundaries that overlap the Wallace line.
So, all this leading to my question: what could have been the cause of this deep divide?
Here a link of the Wallace Line
Here a link of Indonesian Bathymetry
Answer:
So, all this leading to my question: what could have been the cause of this deep divide?
Think about what animals need to travel between Islands. It's very difficult unless the water between the islands either drops or freezes over (Deer, for example, have crossed from New Jersey to Staten Island when the Hudson River freezes over), but the ocean doesn't freeze near the Wallace line.
Or, during ice ages, when sea level falls by as much as 410 feet, islands can be joined as the water drops. In Japan, for example, a land bridge formed through Hokkaido to mainland Asia. Likewise, the UK connects to France, and the Bering Strait becomes land as well.
There's nothing particularly remarkable about the water over the Wallace line other than it being too deep for any land bridges to form during ice ages. The relevant geological feature for land bridges is shallow oceans/seas/bays/straits/channels, etc. The Java Sea is sufficiently shallow that all the islands northwest of the Wallace line are joined when sea level falls.
It was pointed out that it's rough water too, but that's not the primary reason. The primary reason is simply the ocean's depth and width. Animals have little incentive to try to cross any fairly large body of ocean. Animals sometimes migrate across raging rivers, but in that situation the other side is clearly visible. The islands on each side of the Wallace line, even with the sea level drop, would (I assume) appear very far away to the eye, if they could be seen at all. | {
"domain": "earthscience.stackexchange",
"id": 2340,
"tags": "geology, plate-tectonics"
} |
What's the difference between the global transformations of $SU(2)$ and $U(1)$? | Question: I am studying the spontaneously broken global non-Abelian symmetry. Suppose we have an $SU(2)$ doublet of bosons $\Phi = (\phi^+, \phi^0)^T$, with Lagrangian density
$$
\mathcal L = (\partial_\mu\Phi^\dagger)(\partial^\mu\Phi)+\mu^2\Phi^\dagger\Phi-\frac{\lambda}{4}(\Phi^\dagger\Phi)^2
$$
This theory has $SU(2)\times U(1)$ symmetry. For global $SU(2)$ transformations, we have
$$
\Phi\rightarrow \Phi' = \exp(-i\vec\alpha\cdot\vec\tau/2)\Phi
$$
where $\vec\alpha = (\alpha_1,\alpha_2, \alpha_3 )$; While for global $U(1)$ symmetry, we have
$$
\Phi\rightarrow \Phi' = \exp(-i\beta)\Phi
$$
My question is what's the difference between these two transformations? Is it right to say for $SU(2)$ the Lagrangian is invariant under 3-dimensional rotation, but for $U(1)$ I can imagine there is only one axis of rotation? After the spontaneous symmetry breaking, do we still have the transformation for the unbroken subgroup?
Answer: As your text (for which there can be no substitute!) should stress, the SU(2) transformation mostly scrambles the spinor components of your complex doublet, whereas the U(1) of hyper charge simply alters their phase, in lockstep. So the sombrero potential you are looking at breaks both SU(2) and this U(1). (Indeed, you may think of SU(2)~O(3) and U(1)~O(2), but this sets you up for confusions, unless you are computationally adept.)
Specifically,
$$
\delta \Phi=-i(\beta+ \vec \alpha\cdot \vec \tau/2 )\Phi,
$$
so for $\langle \Phi\rangle= (0,v)^T$, you have
$$
\langle \delta \Phi\rangle= -i{v\over 2} \begin{pmatrix} \alpha_1-i\alpha_2\\ 2\beta-\alpha_3 \end{pmatrix} ,
$$
which cannot vanish for nonvanishing real angles, unless $\beta=\alpha_3/2$, as your instructor must have computed for you.
That is, a linear combination of $T_3$ of su(2) and Y of u(1) escapes spontaneous breaking, and amounts to the generator of unbroken EM, $Q= T_3+Y/2$, a different U(1): the net number of independent generators broken is 3, not 4! The unbroken Q transformation, then, is $\Phi'=\exp(-i(\beta +\alpha_3/2))\Phi$, with the angle parameter which dropped out of $\langle \delta \Phi\rangle$. (Check it for the Higgs, $H^0$.)
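The generator bookkeeping is easy to check numerically (my own sketch, not part of the original answer; it assumes the hypercharge convention $Y=+1$ for this doublet, so that $Q=T_3+Y/2=\mathrm{diag}(1,0)$):

```python
# 2x2 matrices as nested lists; matrix-vector product done by hand
# to keep the check dependency-free.
def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

T3 = [[0.5, 0.0], [0.0, -0.5]]           # tau_3 / 2
Yhalf = [[0.5, 0.0], [0.0, 0.5]]         # Y/2, with Y = +1 (assumed convention)
Q = [[T3[i][j] + Yhalf[i][j] for j in range(2)] for i in range(2)]  # diag(1, 0)

vac = [0.0, 1.0]                         # <Phi> = (0, v)^T, with v = 1 for simplicity

assert matvec(Q, vac) == [0.0, 0.0]      # Q annihilates the vacuum: EM survives
assert matvec(T3, vac) != [0.0, 0.0]     # T3 alone is broken
assert matvec(Yhalf, vac) != [0.0, 0.0]  # Y alone is broken
```

Three independent directions move the vacuum (broken), one does not (unbroken), matching the 3-not-4 count above.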
The essence of the SM is weak mixing.
Consider a one-generation lepton doublet to illustrate these statements and their consequences. | {
"domain": "physics.stackexchange",
"id": 94856,
"tags": "symmetry, field-theory, group-theory, symmetry-breaking"
} |
Command line boolean parameters to ros node | Question: I'm working with ROS melodic and Gazebo 9.9.0 on an Ubuntu 18.04.2 LTS.
I want to get two boolean parameters from command line. To do it, I have this code:
int main(int argc, char **argv)
{
bool doDiagonalSteps = false;
bool checkDiagonalCornerUnBlocked = false;
/**
* We must call one of the versions of ros::init() before using any other
* part of the ROS system.
*/
ros::init(argc, argv, "astar_controller");
/**
* NodeHandle is the main access point to communications with the ROS system.
*/
ros::NodeHandle n;
/**
* Get parameters (if there are parameters to get.)
*/
if (!n.getParam("diagonalSteps", doDiagonalSteps))
doDiagonalSteps = false;
else
{
std::cout << "Diagonal steps: " << std::endl;
}
if (!n.getParam("checkDiagonalStepCorner", checkDiagonalCornerUnBlocked))
checkDiagonalCornerUnBlocked = false;
else
{
std::cout << "Diagonal Corner: " << checkDiagonalCornerUnBlocked << std::endl;
}
When I run this command on terminal:
rosrun nav_astar_nodes astar_controller _diagonalSteps:=true _checkDiagonalStepCorner:=false
I don't get anything on the output.
How can I get a boolean parameter from command line?
Answer: When you create the node handle, you need to set it to the private namespace:
ros::NodeHandle n("~");
Then the call to n.getParam("diagonalSteps", doDiagonalSteps) will look for the parameter in the namespace /astar_controller (the name of your node) where it was set when you passed an under-scored variable to it from the command line, instead of the global namespace which is what happens by default. | {
"domain": "robotics.stackexchange",
"id": 1959,
"tags": "ros, c++"
} |
Palindrome checker in JavaScript | Question: I've built a Palindrome checker in JavaScript. It's fairly simple.
I'm still learning JS and am looking to learn.
So I would love to hear ideas on how to improve the JavaScript in this, like more efficient ways to solve the task of checking for a palindrome.
Source code is below. Link to the CodePen is here: http://codepen.io/MarkBuskbjerg/pen/JWMWwN?editors=1010
HTML:
<div class="cover">
<div class="container header">
<div class="row row-margin">
<div class="col-sm-12">
<h1 class="text-center">Palindrome checker</h1>
</div>
<div class="col-sm-12">
<p class="text-center text-uppercase">A nut for a jar of tuna</p>
</div>
</div>
</div>
<div class="container">
<div class="row row-margin">
<div class="col-md-2">
</div>
<div class="col-md-8">
<textarea class="col-md-12" id="inputPalindrome" rows="5">Borrow or rob?</textarea>
</div>
</div>
<div class="row row-margin">
<div class="col-sm-3">
</div>
<button id="checkPalindrome" class="btn btn-default btn-lg btn-block col-sm-6 col-sm-offset-4" type="submit">Check palindrome</button>
</div>
</div>
<div class="container">
<div class="row row-margin">
<div class="col-md-2">
</div>
<div class="col-md-8 text-center">
<div id="notification" class="alert alert-info">Palindrome has not been checked yet</div>
</div>
</div>
</div>
CSS:
.cover {
height: 100vh;
}
.row-margin {
margin: 4vh auto;
}
body {
background: #FF4E50;
background: -webkit-linear-gradient(135deg, #FF4E50, #F9D423);
background: linear-gradient(135deg, #FF4E50, #F9D423);
}
.header {
color: #fff;
}
h1 {
font-family: "Pacifico";
}
.btn-default {
background-color: black;
color: #fff;
font-weight: bold;
text-transform: uppercase;
}
JavaScript:
var checkButton = document.getElementById("checkPalindrome");
function isPalindrome(str) {
str = str.toLowerCase().replace(/[^a-z0123456789]+/g,"");
var reversedStr = str.split("").reverse().join("");
if (str == reversedStr) {
return true
}
return false
}
checkButton.addEventListener("click", function() {
var palindromeInput = document.getElementById("inputPalindrome").value;
var palindromeReturn = isPalindrome(palindromeInput);
if(palindromeReturn === true) {
document.getElementById("notification").innerHTML = "Yay! You've got yourself a palindrome";
document.getElementById("notification").className = "alert alert-success";
} else {
document.getElementById("notification").innerHTML = "Nay! Ain't no palindrome";
document.getElementById("notification").className = "alert alert-danger";
}
});
Link to CodePen with the Palindrome Checker
http://codepen.io/MarkBuskbjerg/pen/JWMWwN
Answer: RegEx
In a character class, 0123456789 can be written as the range 0-9. Although this has no effect on the working of the RegEx, it saves some keystrokes and looks consistent with the alphabet range a-z.
toLowerCase after replace
First remove the special characters and then convert the string to lower case. This, in my opinion, will run faster than the other way round, as the number of characters to work on is reduced. (Not tested.)
When doing this, don't forget to add i flag on the RegEx.
str.replace(/[^a-z0-9]+/gi, "").toLowerCase();
return boolean;
if (str == reversedStr) {
return true
}
return false
can be written as return str == reversedStr. It is also recommended to use strict equality operator.
return str === reversedStr;
See Which equals operator (== vs ===) should be used in JavaScript comparisons?
Caching DOM element reference
In the click handler of the button, #notification is referenced four times. Each time, it is read again from the DOM, which can be slower. The element can be cached and used whenever required.
var notification = document.getElementById('notification');
...
...
notification.innerHTML = 'Hello World!';
Complete Code
Updated after suggestions from Ismael Miguel
// After DOM is completely loaded
document.addEventListener("DOMContentLoaded", function() {
"use strict";
// Cache
var palindromeInput = document.getElementById("inputPalindrome");
var notification = document.getElementById("notification");
function isPalindrome(str) {
str = str.replace(/[^a-z0-9]+/gi, "").toLowerCase();
return str.split("").reverse().join("") === str;
}
document.getElementById("checkPalindrome")
.addEventListener("click", function() {
if (isPalindrome(palindromeInput.value)) {
notification.innerHTML = "Yay! You've got yourself a palindrome";
notification.className = "alert alert-success";
} else {
notification.innerHTML = "Nay! Ain't no palindrome";
notification.className = "alert alert-danger";
}
});
}); | {
"domain": "codereview.stackexchange",
"id": 28382,
"tags": "javascript"
} |
What is the asymptotic runtime of the best known TSP solving algorithm? | Question: I always thought that TSP currently requires time exponential in the number of cities to solve.
How, then, has Concorde optimally solved a TSP instance with
85,900 cities?!?
Is this a typo? Is the base of the exponential 1.0000000000000001 or similar? Was it an instance specifically constructed to be solvable easily? What is the asymptotic runtime of the best known TSP solving algorithm?
Answer: The worst case running time of Concorde or any other known method is exponential in the size of the input. However, sometimes heuristics or other pruning techniques are effective and you are able to solve some, even large, instances pretty quickly. You should define exactly what you mean by TSP as there are many variants, and many algorithms with different worst case run times.
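For a sense of scale, the classic exact algorithm with the best-known provable bound is the Held-Karp dynamic program, which runs in $O(n^2 2^n)$ time: still exponential, but far better than brute force's $O(n!)$. A minimal sketch of it (my own illustration; Concorde itself uses branch-and-cut with LP relaxations, not this):

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP tour cost via the Held-Karp dynamic program.
    dist is an n x n distance matrix; runs in O(n^2 * 2^n) time."""
    n = len(dist)
    # C[(S, j)] = cost of the cheapest path that starts at city 0,
    # visits every city in frozenset S exactly once, and ends at j.
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in S if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in full)

# Small 4-city example: the optimal tour 0-1-3-2-0 costs 80.
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
```

Even at this much better bound, doubling $n$ squares the $2^n$ factor, which is why the 85,900-city solve relied on instance-specific pruning rather than the worst-case algorithm.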
See the accepted answer in this question on CSTheory to see a list of algorithms. | {
"domain": "cs.stackexchange",
"id": 2696,
"tags": "algorithms, np-hard, traveling-salesman"
} |
Why do such different Lagrangians give the same Euler-Lagrange equations? | Question: The Lagrangians $$L_\pm:=\frac{1}{2}\left( \dot{q}_1^2\pm\dot{q}_2^2\right)-\frac{1}{2}m^2\left( q_1^2\pm q_2^2\right)$$ each have Euler-Lagrange equations $\ddot{q}_i=-m^2q_i,\,i\in\left\{ 1,\,2\right\}$. The obvious reason is that these equations are uncoupled, so any linear combination of "one-equation" Lagrangians, including these $\pm$ choices, will give the same pair.
The Lagrangian $L_0:=\dot{q}_1\dot{q}_2-m^2q_1q_2$ has the same ELEs as well, so it doesn't couple the $q_i$ even though it looks like it might. What's the physical interpretation of $L_0$ obtaining the same ELEs as $L_+$? It doesn't seem to be of the form $a_+L_++a_-L_-+b+\dot{f}$.
Thinking in terms of Hamiltonians doesn't make it any clearer. With $L_0$ as our choice of Lagrangian, the Hamiltonian is $H_0:=p_1p_2+m^2q_1q_2$, which bears no obvious equivalence to $$H_\pm:=\frac{1}{2}\left( p_1^2\pm p_2^2\right)+\frac{1}{2}m^2\left( q_1^2\pm q_2^2\right)$$ (The momenta have different definitions in terms of the $\dot{q}_i$ in the two cases, but of course we still get the same equations of motion in all cases.)
So is there a more general principle that explains why $L_\pm$ is equivalent to $L_0$, or $H_\pm$ to $H_0$?
Answer: Having thought about this, I think I've found the key point. A Lagrangian of the form $\frac{1}{2}A\dot{q}^2-\frac{1}{2}Bq^2$ with $1$-dimensional $q$ and numbers $A,\,B$ with $A\ne 0$ has equations of motion that depend only on $A^{-1}B$, as scaling the Lagrangian is physically irrelevant. In particular, scaling the Lagrangian can be thought of as a scaling of the coefficients. For multi-dimensional $\mathbf{q}$, we can generalise this result. In a Lagrangian $L_{AB}:=\frac{1}{2}\dot{\mathbf{q}}^TA\dot{\mathbf{q}}-\frac{1}{2}\mathbf{q}^TB\mathbf{q}$ we can assume without loss of generality that $A,\,B$ are symmetric, in which case $\mathbf{p}=A\dot{\mathbf{q}}$ so the equation of motion is $A\ddot{\mathbf{q}}=-B\mathbf{q}$. If $A$ is invertible this simplifies to $\ddot{\mathbf{q}}=-A^{-1}B\mathbf{q}$, which is invariant under a "scaling" of the matrix coefficients of the form $A,\,B\to KA,\,KB$, with $K$ an arbitrary invertible square matrix conformable with $A,\,B$ for which $KA,\,KB$ are symmetric. We can find the same result in the Hamiltonian formalism, viz $H_{AB}:=\frac{1}{2}\mathbf{p}^TA^{-1}\mathbf{p}+\frac{1}{2}\mathbf{q}^TB\mathbf{q}$. | {
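To make the matrix "scaling" concrete for the Lagrangians in the question (my own worked example in the answer's notation): $L_\pm$ with the $+$ sign corresponds to $A=\mathbb{1}$ and $B=m^2\mathbb{1}$, while $L_0$ corresponds to $A'$ and $B'$ obtained from them with the invertible symmetric matrix
$$K = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad A' = KA = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad B' = KB = m^2\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Both $KA$ and $KB$ are symmetric, and since $A'^{-1}=A'$ we get $A'^{-1}B' = A^{-1}B = m^2\mathbb{1}$, so both pairs yield the same equations of motion $\ddot{\mathbf q} = -m^2\mathbf q$.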
"domain": "physics.stackexchange",
"id": 39149,
"tags": "lagrangian-formalism, hamiltonian-formalism, hamiltonian"
} |
What are the criteria for a system to be considered intelligent? | Question: For example, could you provide reasons why a sundial is not "intelligent"?
A sundial senses its environment and acts rationally. It outputs the time. It also stores percepts. (The numbers the engineer wrote on it.)
What properties of a self driving car would make it "intelligent"?
Where is the line between non intelligent matter and an intelligent system?
Answer: Typically, I think of intelligence in terms of the control of perception. [1] A related, but different, definition of intelligence is the (at least partial) restriction of possible future states. For example, an intelligent Chess player is one whose future rarely includes 'lost at chess to a weaker opponent' states; they're able to make changes that move those states to 'won at chess' states.
These are both broad and continuous definitions of intelligence, where we can talk about differences of degree. A sundial doesn't exert any control over its environment; it passively casts a shadow, and so doesn't have intelligence worth speaking of. A thermostat attached to a heating or cooling system, on the other hand, does exert control over its environment, trying to keep the temperature of its sensor within some preferred range. So a thermostat does have intelligence, but not very much.
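The thermostat case can be written as a one-line control law, which makes the contrast with the sundial explicit (my own sketch, not taken from the control-theory literature the answer cites):

```python
# A thermostat as a minimal control loop: it perceives a temperature
# and acts to push it back into a preferred band. The band limits
# below are illustrative values.
def thermostat_step(temp, low=19.0, high=21.0):
    """Return the action the thermostat takes for a perceived temperature."""
    if temp < low:
        return "heat"
    if temp > high:
        return "cool"
    return "off"

# A sundial, by contrast, has no such branch: nothing it "does"
# feeds back into the shadow it will cast next.
```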
Self-driving cars obviously fit those definitions of intelligence.
[1] Control is meant in the context of control theory, a branch of engineering that deals with dynamical systems that perceive some fact about the external world and also have a way by which they change that fact. When perception is explicitly contrasted to observations, it typically refers to an abstract feature of observations (you observe the intensity of light from individual pixels, you perceive the apple that they represent) but here I mean it as a superset that includes observation. The thermostat is a dynamical system that perceives temperature and acts to exert pressure on the temperature it perceives.
(There's a philosophical point here that the thermostat cares directly about its sensor reading, not whatever the temperature "actually" is. I think that's not something that should be included in intelligence, and should deserve a name of its own, because understanding the difference between perception and reality and seeking to make sure one's perceptions are accurate to reality is another thing that seems partially independent of intelligence.) | {
"domain": "ai.stackexchange",
"id": 121,
"tags": "definitions, intelligence-testing"
} |
Do we need to underline the name of a gene while handwriting? | Question: While teaching about the cry genes and the Cry proteins in Biology class, my teacher told us that the names of genes are always written in lowercase and should be italicized, and the name of protein coded by that gene should always start with a capital letter.
Now, while reading the notes taken in the same lectures, I am wondering whether the name of the genes should be underlined while handwriting, as we use to do while writing the biological names; or are there some other rules related to it?
Moreover, please tell me whether it is really necessary to always write the name of a protein (I am talking about every protein) with a capital letter at the start.
Answer: Handwriting and notes
How you handwrite is totally up to you. No important information in research science is disseminated widely with the use of handwriting anymore; whatever the conventions were, you'd be wise not to care about it one bit.
Nomenclature in scientific research
The nomenclature can be different across fields. In Drosophila genetics, the prescribed norm, which is admittedly not really often followed, is as follows for capitalizing the first letters (contrary to what you were taught):
1.2.2. Selection of lower or upper case of initial letter. Gene symbols/names begin with a lowercase letter if the gene is FIRST named for the phenotype of a recessive mutant allele, and begin with an uppercase letter if they are FIRST named for the phenotype of a dominant mutant allele. Gene symbols/names also begin with an uppercase letter if they are FIRST named for an aspect of the wild-type molecular function or activity of the gene product, which includes genes named after an ortholog or paralog.
Rules of thumb
Keep in mind the distinction between nucleic acid and protein; this is usually differentiated by formatting (e.g. italics).
Keep in mind that macromolecule formatting nomenclature often specifies capitalization (entirely capitalized, first character, if non-first characters are allowed to be capitalized, etc.)
Keep in mind the difference between symbol (ID or shorthands) and full gene names.
Keep in mind that synonymous gene names and symbols exist for many genes.
The search engine is your friend. This is a good landing page on the topic. | {
"domain": "biology.stackexchange",
"id": 12231,
"tags": "proteins, molecular-genetics, terminology, gene, biotechnology"
} |
hydrostatic storage of gasses | Question: This question is a bit general but hopefully someone can point me in the right direction.
If you have a metal container, welded shut that's fitted into a hole that is say 10 meters wide and 30 meters deep.
This has a volume of about 21,205
If you pump pure oxygen into the tank until all the water was pushed out of that cylinder, what would be the pressure of the oxygen in the tank. How much oxygen would be in the tank?
Answer: If the hole that the water drains out from has a check valve and is located at a height H below the top of the tank, the pressure of the gas is equal to the pressure of the water at depth H, and the part of the tank below that level will remain water only.
However, if you want to drain the entire tank and replace the water with the gas, you should have the hole on the bottom of the tank; otherwise there is always water remaining below the discharge level.
In that case the pressure of the gas will be 3 atmospheres. Therefore you can use the ideal gas law to calculate the weight of the oxygen:
$$ PV=nRT $$
-P is the pressure, in atmospheres
-V is the volume, in liters
-n is the number of moles of gas
-R is the ideal gas constant = 0.08206 L atm mol$^{-1}$ K$^{-1}$
-T is the temperature, in kelvin.
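As a numeric illustration of this recipe (my own example; the question leaves the tank volume's units unstated, so I assume a cylinder 10 m in diameter and 30 m deep, a temperature of 293 K, and the 3 atm pressure from the answer):

```python
import math

# n = PV/(RT), with everything in the units the answer's R expects.
# All values below are assumptions, not taken from the question.
P_atm = 3.0
radius_m, depth_m = 5.0, 30.0
V_litres = math.pi * radius_m**2 * depth_m * 1000.0   # m^3 -> litres
R = 0.08206        # L atm mol^-1 K^-1
T = 293.0          # kelvin

n_moles = P_atm * V_litres / (R * T)
mass_kg = n_moles * 32.0 / 1000.0    # O2 molar mass = 32 g/mol
# Roughly 2.9e5 mol of O2, on the order of 9.4 tonnes.
```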
"domain": "engineering.stackexchange",
"id": 2761,
"tags": "mechanical-engineering, pressure, pressure-vessel, compressed-gases"
} |
RLC circuit, turning off the voltage source | Question:
An RLC circuit (pictured above) is governed by two equations:
\begin{align}
-I_1 R &= -L \frac{dI_2}{dt} = \frac{q}{C}+V(t) \\
\frac{dq}{dt} &= I_1+I_2 \, .
\end{align}
$q$ satisfies the equation
$$\frac{d^2q}{dt^2}+\frac{1}{RC}\frac{dq}{dt}+\frac{1}{LC}q=-\frac{1}{R}\frac{dV}{dt}-\frac{1}{L}V \, .$$
The system is held in a steady state (i.e. $dq/dt=0$ and $V(t) = Q/C$) for negative time. At $t=0$ the voltage is switched off and $V(t) = 0$ for $t \geq 0$.
How does one derive the initial conditions for the system, i.e. $q(0)=Q$ and $\dot{q}(0)=Q/RC$?
My attempt: to calculate the charge in the steady state (just before $t=0$), I can set all derivatives with respect to time to 0. Then I get $V=-q/C$ and I can define $Q=-CV$. I don't know how to handle the discontinuity at $t=0$ to obtain $\dot{q}(0)$ though.
Answer: This is how ideal and impossible circuits elements behave, but it's a starting point for a simple analysis:
At discontinuous changes in circuits,
1) inductors have the same current immediately before and after the discontinuity, but can have discontinuous voltage changes. The current will then change exponentially/sinusoidally/both.
2) capacitors have the same voltage immediately before and after the discontinuity, but can have discontinuous current changes. The capacitor voltage will then change exponentially/sinusoidally/both.
3) the current and voltage associated with resistors can both change discontinuously, following $V_R = I_R R$.
4) You must be meticulous with sign conventions on these relationships.
At $t=0^-$, the current through the inductor is constant, so the voltage across the inductor is zero. That means $i$ (through the resistor) is also zero and the voltage across the capacitor is $V(t)$ with the rightmost plate at the higher potential, if $V(t)>0$. There is no current flowing into the capacitor because it is fully charged, so no current is flowing through the inductor.
At $t=0^+$, the voltage is turned off. Technically, there are two ways to interpret this: $V(t)$ is replaced by a straight wire (a short, which is what EEs do when they kill a voltage source) or, $V(t)$ is totally removed and an open takes its place (which would be like having a switch in series with the source). The behaviors will be different, but the starting conditions of the inductor current and capacitor voltage are the same.
The inductor current will initially be zero, and the voltage across the capacitor is $V(0^-)$. | {
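Once the initial conditions are fixed, the $t\ge 0$ behaviour (with $V=0$, so the driving terms on the right of the ODE vanish) can be integrated numerically. A sketch with illustrative component values of my own choosing, not taken from the question:

```python
# Integrate q'' + q'/(RC) + q/(LC) = 0 for t >= 0 (source off) with
# the initial conditions derived above: q(0) = Q, q'(0) = Q/(RC).
# Component values are illustrative, not from the question.
R, L, C, Q = 100.0, 1e-3, 1e-6, 1e-6   # ohms, henries, farads, coulombs

def deriv(q, qdot):
    """Right-hand side of the first-order system (q, qdot)."""
    return qdot, -qdot / (R * C) - q / (L * C)

q, qdot = Q, Q / (R * C)
dt, steps = 1e-7, 10_000               # integrate to t = 1 ms with RK4
for _ in range(steps):
    k1q, k1v = deriv(q, qdot)
    k2q, k2v = deriv(q + dt/2*k1q, qdot + dt/2*k1v)
    k3q, k3v = deriv(q + dt/2*k2q, qdot + dt/2*k2v)
    k4q, k4v = deriv(q + dt*k3q, qdot + dt*k3v)
    q    += dt/6 * (k1q + 2*k2q + 2*k3q + k4q)
    qdot += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)

# The charge rings down: 1 ms is five damping times 2RC here, so the
# oscillation has decayed to well under 1% of Q.
```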
"domain": "physics.stackexchange",
"id": 23898,
"tags": "homework-and-exercises, electric-circuits, electrical-resistance, capacitance, inductance"
} |
How can titanium burn in nitrogen? | Question: I was going through the properties of titanium when a certain thing caught my eye: It was the reaction of burning of titanium in nitrogen. I was astonished to read it as I knew that neither is nitrogen a supporter of combustion nor does it burn itself. But then how can titanium burn in nitrogen? Moreover it is one of the few elements to do so.
It was written that titanium burns in pure nitrogen.
Answer: There are two questions one has to answer when asking ‘will $\ce{X}$ react with $\ce{Y}$?’
Thermodynamics
This basically asks ‘will the products of a hypothetical reaction be more stable than the reactants?’, i.e. whether the entire process will release energy. A quick web search tells me that $\Delta_\mathrm{f} H^0 (\ce{TiN}) = - 337.65~\mathrm{\frac{kJ}{mol}}$ and therefore the reaction is thermodynamically possible.
Kinetics
This is asking ‘can I overcome the activation energy required for said hypothetical reaction to occur?’ And this is where it gets complicated.
The process of titanium burning can be split into the following sub-equations:
$$\ce{2 Ti + N2 -> 2 TiN} \\
~\\
\ce{Ti(s) -> Ti(g)}\\
\ce{Ti(g) -> Ti^3+ (g) + 3 e-}\\
\ce{N2(g) -> 2 N(g)}\\
\ce{N(g) + 3 e- -> N^3- (g)}\\
\ce{Ti^3+ (g) + N^3- (g) -> TiN(g)}\\
\ce{TiN(g) -> TiN(s)}$$
Broken down in this way, we should immediately realise:
The first step is generally okay-ish but may require some activation. (We don’t need to make the entire material gaseous in one step; an atom at a time is enough.)
The second step requires energy (no surprises there).
The third step is extremely hard to do, since the $\ce{N#N}$ is so strong.
The fourth step also requires energy since nitrogen has an endothermic electron affinity.
The fifth and sixth steps are the ones where huge amounts of energy are liberated; especially the sixth one where lattice enthalpy is liberated.
Now if we consider a block of titanium, as DavePhD mentioned, the first step is already rather hard to do. The block’s crystal lattice is rather stable as is, so we cannot easily extract atoms from there. However, the larger the surface to volume ratio gets, the easier this can happen, especially since we have multiple ‘attacking’ points at the same time. Thus, solid metal blocks are usually hard to burn, while metal powder often burns just by putting it in pure oxygen. (Iron wool doesn’t even need to be a powder; its surface to volume ratio is already large enough for it to oxidise in pure oxygen at room temperature.)
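The surface-to-volume point is easy to quantify: for a sphere it is $3/r$, so shrinking the particle size multiplies the exposed surface per unit of material by the same factor. A quick check with sizes of my own choosing:

```python
# Surface-to-volume ratio of a sphere of radius r is
# (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r. The sizes below are illustrative.
def surface_to_volume(radius_m):
    return 3.0 / radius_m

block  = surface_to_volume(0.01)     # a 1 cm-radius metal lump
powder = surface_to_volume(10e-6)    # a 10 micrometre powder grain
ratio  = powder / block              # powder exposes 1000x more surface
```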
As Jon Custer mentioned in a comment and as I mentioned earlier, the cleavage of dinitrogen molecules is also extremely difficult. It will take a lot of activation energy to do so. This means, that a lot of energy must be gained in the formation of the ion lattice so that at least the dinitrogen cleavage can be overcome and combustion can continue.
Thus, the kinetics part of the question can be answered with: ‘it can, but.’
The question remains how to supply activation energy. Since we are most likely dealing with powder, it could be too slow to ignite titanium in air and then immediately immerse in nitrogen. Since nitrogen cleaving is so hard, many other ignition ideas that rely on, say, a match’s flame won’t work (the match is too weak). Maybe the activation energy is ‘low’ enough that heating to a few hundred degres will suffice. Unfortunately, I do not know the experimental details. | {
"domain": "chemistry.stackexchange",
"id": 4590,
"tags": "inorganic-chemistry, redox, combustion, transition-metals"
} |
Angular position as a function of time for elliptical orbits | Question: It is a well-known fact that there is no analytical solution to the two-body problem as a function of time. However, I wanted to derive the differential equation for the angular position of an elliptical orbit, such that I could use it to code a numerical solution.
According to Kepler's laws,
$$\frac{1}{2}r^2\frac{\mathrm{d}\theta}{\mathrm{d}t} = \frac{\pi a b}{P}$$
Where $a$ and $b$ are the semi-major and semi-minor axis.
Since
$$r=\frac{a(1-e^2)}{1+e \cos(\theta)}$$
and
$$b=a \sqrt{1-e^2}$$
it means that
$$\frac{\mathrm{d}\theta}{\mathrm{d}t} = \frac{2\pi a^2 \sqrt{1-e^2}}{Pr^2}$$
$$\frac{\mathrm{d}\theta}{\mathrm{d}t} = \frac{2\pi(1+e \cos(\theta))^2 }{P (1-e^2)^{3/2}} $$
Which, according to WolframAlpha, has a solution for $t$ in terms of $\theta$. In this sense, I wanted to know if this equation is correct and, if so, to ask for suggestions of methods for computing a numerical solution. I have tried the well-known Euler method, but I don't think it is the most appropriate approach.
Answer: Your final differential equation for $\theta(t)$ looks correct.
Euler's method for solving a first-order differential equation is the easiest method.
There are other methods (see Numerical methods for ordinary differential equations - Methods)
which are more elaborated and give more precise results.
I suggest to try a Runge-Kutta method.
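A minimal RK4 sketch of exactly this (my own illustration; the values of $e$ and $P$ are arbitrary, and a good sanity check is that after one full period $\theta$ advances by exactly $2\pi$):

```python
import math

# RK4 integration of the question's equation
#   dtheta/dt = 2*pi*(1 + e*cos(theta))**2 / (P*(1 - e**2)**(3/2)),
# with illustrative values e = 0.3, P = 1 (my choice, not from the post).
e, P = 0.3, 1.0

def dtheta_dt(theta):
    return 2 * math.pi * (1 + e * math.cos(theta))**2 / (P * (1 - e**2)**1.5)

theta, dt = 0.0, P / 10_000
for _ in range(10_000):          # integrate over exactly one period
    k1 = dtheta_dt(theta)
    k2 = dtheta_dt(theta + dt/2 * k1)
    k3 = dtheta_dt(theta + dt/2 * k2)
    k4 = dtheta_dt(theta + dt * k3)
    theta += dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# The orbit closes: theta(P) should equal 2*pi to RK4 accuracy.
```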
For nearly circular orbits (i.e. for $e\ll 1$)
there is also the option to use a Fourier expansion for $\theta(t)$.
Taken from Mean anomaly - Formula:
$$\begin{align}
\theta(t) &= M \\
&+ \left(2e - \frac{1}{4}e^3 + \cdots \right)\sin(M) \\
&+ \left(\frac{5}{4}e^2 + \cdots \right) \sin(2M)\\
&+ \left(\frac{13}{12}e ^3 + \cdots \right) \sin(3M) \\
&+ \cdots
\end{align}$$
where $M=\frac{2\pi}{P}t$ is the so-called mean anomaly.
And $\cdots$ stands for higher-order terms in $e$
which become neglectable in case of nearly circular orbits. | {
"domain": "physics.stackexchange",
"id": 86690,
"tags": "newtonian-mechanics, newtonian-gravity, orbital-motion"
} |
Falling - is the gain in KE linear? | Question: When a ball falls from a high place, it experiences a gravitational force. Forces make objects accelerate (in this case, it is constantly increasing the velocity). Because $KE = \frac{1}{2}mv^2$ this should mean that Kinetic Energy should grow quadratically (please correct me if wrong) because of the increasing velocity right?
But also, $GPE=mgh$ where the potential energy is a linear equation. How can this happen? If energy has to be conserved wouldn't both equations have to change linearly?
Can you please explain how the $KE$, $GPE$ and conservation of energy are reconciled in this system? Could you also confirm the shape of the graph of $KE$ and $GPE$ against time?
(I had initially come up with this problem for electric fields but I think that it might've been easier to answer the question in terms of gravitational fields)
Answer: You are mixing up two different SUVAT equations. The change of velocity with time is given by:
$$ v = u + at $$
So velocity increases linearly with time. However the change of velocity with distance is given by:
$$ v^2 = u^2 + 2as $$
So velocity increases as the square root of distance, not linearly with distance. That's why the kinetic energy increases linearly with distance. The kinetic energy does increase quadratically with time. | {
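A quick numerical check of both claims (my own example, using $g = 9.8~\mathrm{m/s^2}$ and a 1 kg mass dropped from rest):

```python
# KE grows quadratically in time but linearly in distance fallen.
# g and m below are illustrative values.
g, m = 9.8, 1.0

def falling_state(t):
    v = g * t               # v = u + a t, with u = 0
    s = 0.5 * g * t**2      # distance fallen
    return v, s

v1, s1 = falling_state(1.0)
v2, s2 = falling_state(2.0)
ke1, ke2 = 0.5 * m * v1**2, 0.5 * m * v2**2
```

Doubling the time quadruples both the distance and the kinetic energy, while KE divided by distance stays fixed at $mg$, which is exactly the statement that KE is linear in distance.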
"domain": "physics.stackexchange",
"id": 52854,
"tags": "newtonian-mechanics, energy-conservation, potential-energy, projectile, free-fall"
} |
DNA Sequencing Types & Qualities | Question: I've been searching for a list of the types of DNA sequencing (e.g. Sanger, Next-Generation) and how prone they are to sequencing errors (None, Somewhat, Very), but I haven't been able to find anything. Here are a few scenarios:
I have a bacterial culture I want to sequence to see if a plasmid was properly transformed.
I want to verify a PCR product.
I want to sequence a large organism's genome.
I want to verify a mutation found in a human tumor (found using exome data)
Given these examples, and any other you use regularly in the lab, can you please give me information on overall platforms (not specific like Illumina, but general like Sanger sequencing) and their error prone-ness?
Answer:
Sequencing does not seem necessary for this... most plasmids contain an antibiotic resistance gene (ex. kanamycin-resistance) that will allow cells that have been appropriately transformed to survive in the presence of kanamycin.
If it's just a PCR product, Sanger sequencing seems the best way to go. It is quite specific and not prone to algorithmic errors as are next-gen sequencers.
Next-gen sequencing would be ideal as you're dealing with the whole genome. Sanger would prove laborious and low-throughput. Any variants of interest you find can always be confirmed through Sanger. Illumina sequencers are quite robust at sequence generation at the whole-genome level.
This mutation can be verified through Sanger sequencing, but you could also conduct targeted re-sequencing of the particular exon (although this seems like overkill). Since it is a single mutation, I believe the specificity of Sanger would be your best bet.
"domain": "biology.stackexchange",
"id": 5771,
"tags": "dna-sequencing"
} |
Selection of copper alloy for electrical and wear applications | Question: I am working on a project which involves applying electrical load across a metal contact filled with lubricants (bearing applications). However, I am missing one of the blocks as attached and I cannot figure out what material it is. Looking at the posts here, I am presuming this must be some copper alloy such as brass; however, I know that the rotating disc that this block sits on is made of brass, and that can raise issues such as wear due to the presence of similar materials.
Any suggestions what material I can use for this application to not damage the discs while providing good conductivity?
Answer: It looks like phosphor bronze to me which makes sense as it has good bearing properties and reasonable electrical conductivity.
It could also potentially be sintered (ie porous), oil filled bronze bushings are fairly common. | {
"domain": "engineering.stackexchange",
"id": 1781,
"tags": "electrical-engineering, materials"
} |
Finger painting code | Question: I have a very simple view that handles touch events and draws accordingly. It's nothing significant, but it does use a bit more CPU than I would like (35%). Again, it is the bare minimum (<90 lines), but I can't really think of a simpler way to draw using user input. What would you change?
import UIKit
class DrawView: UIView {
var mainImageView = UIImageView()
var tempImageView = UIImageView()
var r = CGFloat(0)
var g = CGFloat(0)
var b = CGFloat(0)
var a = CGFloat(1)
var lineWidth = CGFloat(5)
var dot = true
var lastPoint = CGPoint.zeroPoint
required init(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
self.backgroundColor = UIColor.clearColor()
self.multipleTouchEnabled = false
mainImageView.frame = self.frame
tempImageView.frame = self.frame
self.addSubview(mainImageView)
self.addSubview(tempImageView)
}
override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) {
dot = true
if let touch = touches.first as? UITouch {
lastPoint = touch.locationInView(self)
}
}
func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint) {
UIGraphicsBeginImageContextWithOptions(self.frame.size, false, 0)
let context = UIGraphicsGetCurrentContext()
tempImageView.image?.drawInRect(self.frame)
CGContextMoveToPoint(context, fromPoint.x, fromPoint.y)
CGContextAddLineToPoint(context, toPoint.x, toPoint.y)
CGContextSetLineCap(context, kCGLineCapRound)
CGContextSetLineWidth(context, lineWidth)
CGContextSetRGBStrokeColor(context, r, g, b, a)
CGContextSetBlendMode(context, kCGBlendModeNormal)
CGContextStrokePath(context)
tempImageView.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
}
override func touchesMoved(touches: Set<NSObject>, withEvent event: UIEvent) {
dot = false
if let touch = touches.first as? UITouch {
let currentPoint = touch.locationInView(self)
drawLineFrom(lastPoint, toPoint: currentPoint)
lastPoint = currentPoint
}
}
override func touchesEnded(touches: Set<NSObject>, withEvent event: UIEvent) {
if dot {
drawLineFrom(lastPoint, toPoint: lastPoint)
}
UIGraphicsBeginImageContextWithOptions(self.frame.size, false, 0)
mainImageView.image?.drawInRect(self.frame, blendMode: kCGBlendModeNormal, alpha: a)
tempImageView.image?.drawInRect(self.frame, blendMode: kCGBlendModeNormal, alpha: a)
mainImageView.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
tempImageView.image = nil
}
func setColor(color: UIColor) {
let components = CGColorGetComponents(color.CGColor)
r = components[0]
g = components[1]
b = components[2]
}
func erase() {
mainImageView.image = nil
}
}
Answer: The first thing that strikes me as odd about your code is having the four single-character variables to represent color components. I would rather see a single drawColor variable with the UIColor type. Not only does this eliminate the need for the setColor method, but it also more easily lets us pull that color out for numerous reasons (to make some other thing the same color as our DrawView's current drawColor, to see if the DrawView's drawColor is the same shade as something else, etc.). Then, only internal to the specific necessary methods, do we break that color down into its components for the necessary Core Graphics function call.
Our code has no sense of organization. We should put our event-handling methods together, rather than separate them by the big drawLine method.
Our drawLine method isn't very nicely named. First of all, we can inspect the method to know that it expects points, and the "from" outside of the parenthesis looks and feels very odd. I'd prefer the method be declared something more like this:
drawLine(from fromPoint: CGPoint, to toPoint: CGPoint)
Which means when it's called by an end user, the call looks like this:
drawView.drawLine(from: pointA, to: pointB)
But internally, we refer to the points as the fromPoint and the toPoint.
mainImageView.frame = self.frame
tempImageView.frame = self.frame
This bit isn't really going to work. If these two views are subviews of the DrawView and we intend for them to be in the same spot, we need to do better.
For a start, their frame property should be set to self's bounds property. For any UIView, the frame describes its position in its superview, so the X and Y will be how far it is from the top left corner. Meanwhile, its bounds property will always have an X and Y of zero. So if I set my subview's frame to be equal to its parent's bounds property, it will be the same size, but it will be positioned to the parent's top left corner--and I think this is what we're actually trying to achieve right?
But even this isn't enough. What happens when self is resized by something? Now our manually sized subviews are no longer the correct size. We need to programmatically add autolayout constraints so that the four edges of these subviews are pinned to its parent view, so they will always be the same size.
Ultimately, there's no real reason to use our image view subviews. Using Core Graphics, we can add lines like this directly to any view. This Code Review question shows an example of drawing lines directly on a view. The only reason we need an image view is if we want to display a UIImage. And there's no need to do so here.
Even if we want to be able to generate a UIImage from what the user has drawn on our DrawView, there are means for doing that in iOS (that question is Objective-C, but it can be done in Swift too). So there really is no reason to be using the image views here at all. | {
"domain": "codereview.stackexchange",
"id": 14488,
"tags": "ios, event-handling, swift, graphics, uikit"
} |
Do one dimensional SHMs follow a specific pattern when it comes to their displacements with respect to time? | Question: I was looking at different kinds of SHMs on the $x$-axis, and I was wondering whether the positions of the particle at different times are similar for different SHMs to those in the most general SHM (the one in which the particle starts from the origin at $t=0$). I have attached an image to make this clearer (it's just to describe what I mean by 'different SHMs').
NOTE: these obviously aren't 'different' SHMs the only difference is when and where they start. But the mean position remains the Origin.
As we know, for the SHM in which the particle starts from the origin at t=0 towards positive x, its positions are quite symmetrically determined at different times.
That is, at $t=\frac{T}{4}$ , the particle will be at $+A$ where A is the amplitude and T is the time period.
At $t=\frac{T}{2}$, the particle will again be at the origin moving towards negative x.
At $t=\frac{3T}{4}$ the particle will be at $-A$
and finally at $t=T$ the particle will again be at origin completing 1 cycle.
Now my question is will this be the same for the 'other SHMs'? That is, if we consider the first image,
will this particle's displacement be $-A/2$ at $t=T/4$ ? and would it again follow some symmetrical displacement at these specific times?
Would the time period of this even be T?
NOTE: In all the cases we are taking here, the mean position is origin.
I know I might be missing out on a lot of basic points but I'm really confused and need some clear explanation about this. Please explain this on the basis of the first image only.
Answer: I think that the answer to your question is: no.
You seem to be suggesting that the body might cover equal distances in equal time intervals from whatever point in its cycle we start our time interval (or at least that it covers distances of $A$ if we start a time interval of $T/4$ not just at $x=A,\ x=0,\ \text{or}\ x=-A$ but also at certain other points).
Everything is contained in the equation
$$x= A\ \cos \left(\frac{2\pi}{T}t + \epsilon\right).$$
If we agree to call $t=0$ when the body has its maximum $x$-wise displacement, $A$, we can put $\epsilon =0$.
For $x=\frac A2$ we find that the smallest positive value of $t$ is $t=\frac16 T$.
For $x=-\frac A2$ we find that the smallest positive value of $t$ is $t=\frac13 T$.
So the time interval for the body to go from $x=\frac A2$ to $x=-\frac A2$ is $\frac13 T-\frac16 T=\frac16T.$
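Both of these values can be checked numerically. Here is a pure-Python sketch, taking $T=1$ and $A=1$ for concreteness (with $\epsilon=0$):

```python
import math

# Times at which x = A*cos(2*pi*t/T) first reaches a given displacement,
# taking T = 1 and A = 1 for concreteness (epsilon = 0).
T, A = 1.0, 1.0

def first_time_at(x):
    """Smallest positive t with A*cos(2*pi*t/T) == x, for -A <= x <= A."""
    return T / (2 * math.pi) * math.acos(x / A)

assert abs(first_time_at(A / 2) - T / 6) < 1e-12     # x = +A/2 at t = T/6
assert abs(first_time_at(-A / 2) - T / 3) < 1e-12    # x = -A/2 at t = T/3
```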
The reason why this is less than $\frac 14 T$ is that, for a body doing shm, the nearer the body is to its equilibrium position, the faster it moves. The relationship between $x$ and $t$ is non-linear. | {
"domain": "physics.stackexchange",
"id": 77073,
"tags": "homework-and-exercises, newtonian-mechanics, harmonic-oscillator, oscillators"
} |
JavaScript function for the get the formatted date of tomorrow | Question: I need a JavaScript-function which always computes the date of the next day. Then returns the date as a string, formatted in the way: dd.mm.yyyy
I wrote this code:
const getDateStringForTomorrow = () => {
const millisOfDay = 1000 * 60 * 60 * 24;
const oTomorrow = new Date(Date.now() + millisOfDay);
const day = ("0" + oTomorrow.getDate()).slice(-2);
const month = ("0" + (oTomorrow.getMonth() + 1)).slice(-2);
const year = oTomorrow.getFullYear();
return `${day}.${month}.${year}`;
};
console.log(getDateStringForTomorrow());
Can I expect my function to work as expected and to provide correct results?
What's your opinion about the way I have written the function? Do you think it's overly verbose?
Answer: With Intl.DateTimeFormat you can also format the date elements to 2 and 4 digits. You code would then look something like this:
const getDateStringForTomorrow = () => {
const tomorrow = new Date();
tomorrow.setDate(tomorrow.getDate() + 1);
const day = new Intl.DateTimeFormat('en', { day: '2-digit' }).format(tomorrow);
const month = new Intl.DateTimeFormat('en', { month: '2-digit' }).format(tomorrow);
const year = new Intl.DateTimeFormat('en', { year: 'numeric' }).format(tomorrow);
return day + '.' + month + '.' + year;
};
console.log(getDateStringForTomorrow());
Another variant uses the European date format to get the order of the elements correctly and then only replaces dashes by dots:
const getDateStringForTomorrow = () => {
const tomorrowDate = new Date();
tomorrowDate.setDate(tomorrowDate.getDate() + 1);
const tomorrowStr = new Intl.DateTimeFormat('uk', { day: '2-digit', month: '2-digit', year: 'numeric' }).format(tomorrowDate);
return tomorrowStr.replaceAll('-','.');
};
console.log(getDateStringForTomorrow());
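Regarding the "can I expect my function to work" part of the question: adding a fixed 86,400,000 ms to Date.now() can misbehave around daylight-saving transitions in some time zones. On a 25-hour fall-back day, adding 24 hours shortly after midnight can land on the same calendar date. A sketch of a variant that sidesteps this by letting setDate() do the rollover:

```javascript
// Variant that avoids millisecond arithmetic: setDate() handles
// month/year rollover and daylight-saving shifts correctly.
const getDateStringForTomorrow = () => {
  const d = new Date();
  d.setDate(d.getDate() + 1);
  const pad = (n) => String(n).padStart(2, "0");
  return `${pad(d.getDate())}.${pad(d.getMonth() + 1)}.${d.getFullYear()}`;
};

console.log(getDateStringForTomorrow());
```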
The advantage of using Intl.DateTimeFormat() is that it writes out very clearly what you want. | {
"domain": "codereview.stackexchange",
"id": 40249,
"tags": "javascript, algorithm, datetime"
} |
MDP and a given policy and the correctness of the state-value function | Question: Is the following statement correct?
"For an MDP and a given policy, the Bellman equation can be used to check the correctness of the state-value function."
Answer: Generally speaking, it's correct. The Bellman equation is really a set of linear equations (one per state) rather than a single equation in the usual tabular case. If we put these equations together, it becomes clear how to calculate all the state values. See the S&B book, page 60:
The value function $v_{\pi}$ is the unique solution to its Bellman equation.
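To make the check concrete, here is a pure-Python sketch for a hypothetical two-state MDP under a fixed policy (all numbers invented): the linear system $(I-\gamma P_\pi)v = r_\pi$ is solved, and the solution is then verified against the Bellman equation, which is exactly the "correctness check" the question asks about.

```python
# Hypothetical two-state MDP under a fixed policy (all numbers invented).
gamma = 0.9
P = [[0.8, 0.2],      # P_pi: state-transition probabilities under the policy
     [0.3, 0.7]]
r = [1.0, 0.5]        # expected one-step rewards under the policy

# Solve the linear Bellman system (I - gamma * P) v = r with Cramer's rule.
a = 1 - gamma * P[0][0]; b = -gamma * P[0][1]
c = -gamma * P[1][0];    d = 1 - gamma * P[1][1]
det = a * d - b * c
v = [(d * r[0] - b * r[1]) / det,
     (a * r[1] - c * r[0]) / det]

# Correctness check: v must satisfy v(s) = r(s) + gamma * sum_s' P(s'|s) v(s')
for i in range(2):
    assert abs(v[i] - (r[i] + gamma * sum(P[i][j] * v[j] for j in range(2)))) < 1e-12
```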
$v$ is the state-value function and $\pi$ is a given policy as you specified. | {
"domain": "ai.stackexchange",
"id": 4192,
"tags": "reinforcement-learning, markov-decision-process"
} |
Can I damage a biotinylated antibody in solution by vortexing or mixing too vigorously? | Question: I am running an immunoassay using biotinylated antibodies and have noticed a decreasing trend from duplicate assay runs (completed within minutes of each other) from the same vial.
In between each run:
I switch to a new low protein binding pipette tip.
I vortex the biotinylated antibody in solution in a low protein binding vial.
Then I prewet the pipette tip and deliver the reagent.
Note that degradation seems unlikely.
Any ideas on what may cause this?
Could it be vortexing or shearing during pipette tip mixing?
Answer: I never worked in the field but applied mechanical stress (French Press, sonication) while extracting chlorophyll a from cyanobacteria.
Anyway, a cursory search furnished some sources like this, where vigorous vortexing of antibody samples is advised against. Instead, gentle or pulse vortexing or simple spinning is suggested.
But please take this with a grain of salt since I'm not really familiar with the standard protocols here. | {
"domain": "chemistry.stackexchange",
"id": 14403,
"tags": "organic-chemistry, biochemistry"
} |
Fourier Transform of Impulse Train | Question: Why is the fourier transform of impulse train a impulse train? Is there a intuitive reason behind it?
Answer: Intuition can sometimes be misleading. But here are some ideas that might help one move towards creating a mental picture.
An infinitely long pure sinewave in the time domain (consisting of just one frequency FT or DFT basis function) will be a single impulse in the frequency domain.
Distort the sinewave a little, but leave the waveform perfectly periodic, and the impulse will be followed by an evenly spaced harmonic series. Usually, the narrower and sharper the distortion (keeping the waveform perfectly periodic), the longer the harmonic series. In what might be considered the limiting case of maximum distortion, the narrowest waveform with the sharpest edge will have the longest harmonic series. That is, an infinitely long sine wave in the time domain, maximally distorted into an infinitely periodic impulse train, will produce an impulse followed by an infinitely long harmonic series, which looks a lot like another periodic impulse train.
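That limiting case can be checked numerically with a discrete Fourier transform. A pure-Python sketch (the choices N = 64 samples and impulse period P = 8 are arbitrary):

```python
import cmath
import math

# Discrete sketch: an impulse train with period P inside N samples.
N, P = 64, 8
x = [1.0 if n % P == 0 else 0.0 for n in range(N)]

# Plain DFT (no libraries): X[k] = sum_n x[n] * e^{-2*pi*i*k*n/N}
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

# The spectrum is itself an impulse train: nonzero bins occur every
# N/P = 8 bins, each of magnitude N/P = 8 (the number of impulses in x).
for k in range(N):
    if k % (N // P) == 0:
        assert abs(abs(X[k]) - N / P) < 1e-9
    else:
        assert abs(X[k]) < 1e-9
```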
Make it an even (cosine) function, and all the impulses in the FT will be real and thus symmetric around 0. Add a DC offset to the distorted sine wave to complete the pulse train at 0. | {
"domain": "dsp.stackexchange",
"id": 4230,
"tags": "fourier-transform"
} |
Location of the milky way in August in Death Valley | Question: I am going to be in Death Valley around August 8th. I was hoping to take a milky way shot with my camera when there. I would be shooting around 10pm.
I am using google maps to scout out popular viewpoints in Death Valley for potential locations for the shot, as I will be getting terrestrial objects in the frame.
From what I have read, I believe the Milky Way will be visible most of the night, and best viewable in the earlier hours of the night. In addition it will be in the south sky.
So will it be precisely due south? Or will it be slightly south west or south east? If I could pinpoint the location, it would be easier to scout out a spot.
Thanks!
Answer: The Milky Way will be in a straight line that goes from SSW, (a bearing of about 200 degrees) almost directly overhead and to the NE. The brightest part of the Milky way will be low in the sky in the SSW.
In future, you can check the visibility and direction of any astronomical object with planetarium software, such as the (free) Stellarium. | {
"domain": "astronomy.stackexchange",
"id": 2725,
"tags": "amateur-observing, milky-way, photography, star-gazing"
} |
some question about RGBD-6D-SLAM | Question:
I would like to run RGBD-6D-SLAM, but I do not know what hardware configuration this program requires. To give as much detail as possible: my graphics card is an Intel GMA HD 3000, and my CPU is an Intel Core i5 2520M at 2.5 GHz. I hope to get your reply, thank you.
Originally posted by longzhixi123 on ROS Answers with karma: 78 on 2012-08-14
Post score: 0
Answer:
As far as I know, Ubuntu does not support Intel graphics cards very well. My GMA950 graphics card cannot get 3D acceleration support under Ubuntu, and the system freezes when I launch the program. You'd better check your driver support for the graphics card.
AMD should be the best choice for the graphic card.
It works well with my two years laptop, lenovo T410 with a NVidia graphic card after I have updated the driver.
edit:
Note for myself also.
if you wanna check whether your laptop is suitable for OpenGL,
/usr/lib/nux/unity_support_test -p
and you will see something like
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 660/PCIe/SSE2
OpenGL version string: 4.4.0 NVIDIA 340.96
Not software rendered: yes
Not blacklisted: yes
GLX fbconfig: yes
GLX texture from pixmap: yes
GL npot or rect textures: yes
GL vertex program: yes
GL fragment program: yes
GL vertex buffer object: yes
GL framebuffer object: yes
GL version is 1.4+: yes
Unity 3D supported: yes
if the unity 3D supported is no, definitely it won't work well.
Originally posted by tianb03 with karma: 710 on 2013-02-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10606,
"tags": "slam, navigation, rgbd-slam"
} |
Remove duplication in SELECT statement | Question: Say I have the following SQL:
SELECT
amount,
amount*.1,
(amount*.1)+3,
((amount*.1)+3)/2,
(((amount*.1)+3)/2)+37
FROM table
Instead of repeating that identical code every time, I really want to be able to do something like this:
SELECT
amount,
amount*.1 AS A,
A+3 AS B,
B/2 AS C,
C+37 AS D,
FROM table
But this code doesn't work.
So, is there another way to avoid duplication in the working query that I have?
Answer: That's not how SQL works. As mentioned in the SO answer linked by @rolfl:
You can only refer to a column alias in an outer select, so unless you recalculate all the previous values for each column you'd need to nest each level, which is a bit ugly
Which means:
SELECT amount, A, B, C, C+37 D
FROM (SELECT amount, A, B, B/2 C
FROM (SELECT amount, A, A+3 B
FROM (SELECT amount, amount*0.1 A FROM table) firstPass) secondPass) thirdPass
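An alternative that avoids deep nesting is a chain of common table expressions, where each intermediate value is named exactly once. Here is a sketch verified with Python's built-in sqlite3 driver (the table and column names `expenses`/`amount` are invented for the demo; the same WITH syntax works on SQL Server):

```python
import sqlite3  # stdlib, used only to demonstrate the rewrite

# Sketch: chained CTEs name each intermediate value exactly once.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE expenses (amount REAL)")
con.execute("INSERT INTO expenses VALUES (10.0)")

query = """
WITH s1 AS (SELECT amount, amount * 0.1 AS A FROM expenses),
     s2 AS (SELECT amount, A, A + 3    AS B FROM s1),
     s3 AS (SELECT amount, A, B, B / 2 AS C FROM s2)
SELECT amount, A, B, C, C + 37 AS D FROM s3
"""
row = con.execute(query).fetchone()
print(row)  # (10.0, 1.0, 4.0, 2.0, 39.0)
```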
That said, the last comma in your 2nd snippet is a pseudo syntax error (given it's a pseudo query) | {
"domain": "codereview.stackexchange",
"id": 5708,
"tags": "sql, sql-server"
} |
Family finances - tracking and reporting | Question: I am looking for a clean way to accumulate amounts within an input dataset. In this example, there are 18 rows of data, and the output is 3 rows of data, accumulated by Key, by ExpenseType.
My biggest concern is whether a HashMap is the correct utility for this, or if there is an easier/cleaner way.
FamilyExpenseProcessor.java:
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
public class FamilyExpenseProcessor {
enum ExpenseType {
Grocery, Entertainment,Transportation
}
public static void main(String[] args) {
Map<String, FamilyExpense> feHashMap = new HashMap<>();
accumulate(feHashMap, "Jeffersons",ExpenseType.Entertainment, new BigDecimal(5.00));
accumulate(feHashMap, "Jetsons", ExpenseType.Entertainment, new BigDecimal(10.00));
accumulate(feHashMap, "Johnsons", ExpenseType.Entertainment, new BigDecimal(15.00));
accumulate(feHashMap, "Jeffersons",ExpenseType.Entertainment, new BigDecimal(20.00));
accumulate(feHashMap, "Jetsons", ExpenseType.Entertainment, new BigDecimal(25.00));
accumulate(feHashMap, "Johnsons", ExpenseType.Entertainment, new BigDecimal(30.00));
accumulate(feHashMap, "Jeffersons",ExpenseType.Grocery, new BigDecimal(35.00));
accumulate(feHashMap, "Jetsons", ExpenseType.Grocery, new BigDecimal(40.00));
accumulate(feHashMap, "Johnsons", ExpenseType.Grocery, new BigDecimal(45.00));
accumulate(feHashMap, "Jeffersons",ExpenseType.Grocery, new BigDecimal(50.00));
accumulate(feHashMap, "Jetsons", ExpenseType.Grocery, new BigDecimal(55.00));
accumulate(feHashMap, "Johnsons", ExpenseType.Grocery, new BigDecimal(60.00));
accumulate(feHashMap, "Jeffersons",ExpenseType.Transportation, new BigDecimal(15.00));
accumulate(feHashMap, "Jetsons", ExpenseType.Transportation, new BigDecimal(20.00));
accumulate(feHashMap, "Johnsons", ExpenseType.Transportation, new BigDecimal(25.00));
accumulate(feHashMap, "Jeffersons",ExpenseType.Transportation, new BigDecimal(30.00));
accumulate(feHashMap, "Jetsons", ExpenseType.Transportation, new BigDecimal(35.00));
accumulate(feHashMap, "Johnsons", ExpenseType.Transportation, new BigDecimal(40.00));
Iterator<Map.Entry<String, FamilyExpense>> entries = feHashMap.entrySet()
.iterator();
while (entries.hasNext()) {
Entry<String, FamilyExpense> entry = entries.next();
System.out.println("Key = " + entry.getKey() + ", Grocery = "
+ entry.getValue().getGroceryExpense());
System.out.println("Key = " + entry.getKey() + ", Entertainment = "
+ entry.getValue().getEntertainmentExpense());
System.out.println("Key = " + entry.getKey() + ", Transportation = "
+ entry.getValue().getTransportationExpense());
}
}
private static void accumulate(Map<String, FamilyExpense> feMap,
String key, ExpenseType expenseType, BigDecimal value) {
switch (expenseType) {
case Grocery:
if (feMap.containsKey(key)) {
feMap.get(key).setGroceryExpense(
feMap.get(key).getGroceryExpense().add(value));
} else {
FamilyExpense fe = new FamilyExpense();
fe.setGroceryExpense(value);
feMap.put(key, fe);
}
break;
case Entertainment:
if (feMap.containsKey(key)) {
feMap.get(key).setEntertainmentExpense(
feMap.get(key).getEntertainmentExpense().add(value));
} else {
FamilyExpense fe = new FamilyExpense();
fe.setEntertainmentExpense(value);
feMap.put(key, fe);
}
break;
case Transportation:
if (feMap.containsKey(key)) {
feMap.get(key).setTransportationExpense(
feMap.get(key).getTransportationExpense().add(value));
} else {
FamilyExpense fe = new FamilyExpense();
fe.setTransportationExpense(value);
feMap.put(key, fe);
}
break;
}
}
}
FamilyExpense.java:
import java.math.BigDecimal;
class FamilyExpense {
private String familyId;
private BigDecimal groceryExpense;
private BigDecimal entertainmentExpense;
private BigDecimal transportationExpense;
public FamilyExpense(){
this.groceryExpense = new BigDecimal(0);
this.transportationExpense = new BigDecimal(0);
this.entertainmentExpense = new BigDecimal(0);
}
public String getFamilyId() {
return familyId;
}
public void setFamilyId(String familyId) {
this.familyId = familyId;
}
public BigDecimal getGroceryExpense() {
return groceryExpense;
}
public void setGroceryExpense(BigDecimal groceryExpense) {
this.groceryExpense = groceryExpense;
}
public BigDecimal getEntertainmentExpense() {
return this.entertainmentExpense;
}
public void setEntertainmentExpense(BigDecimal entertainmentExpense) {
this.entertainmentExpense = entertainmentExpense;
}
public BigDecimal getTransportationExpense() {
return transportationExpense;
}
public void setTransportationExpense(BigDecimal transportationExpense) {
this.transportationExpense = transportationExpense;
}
}
Answer: Well, I have changed your code a bit to make it cleaner. It is recommended to have a uniform interface for methods, so the FamilyExpense class now has two methods, addExpense and getExpense, rather than a separate getter and setter being called explicitly for each expense type.
I am not sure why you have used getFamilyId and setFamilyId.
public static void main(String[] args) {
Map<String, FamilyExpense> feHashMap = new HashMap<String, FamilyExpense>();
feHashMap.put("Jeffersons", new FamilyExpense());
feHashMap.put("Jetsons", new FamilyExpense());
feHashMap.put("Johnsons", new FamilyExpense());
accumulate(feHashMap, "Jeffersons", ExpenseType.Entertainment, new BigDecimal(5.00));
accumulate(feHashMap, "Jetsons", ExpenseType.Entertainment, new BigDecimal(10.00));
accumulate(feHashMap, "Johnsons", ExpenseType.Entertainment, new BigDecimal(10.00));
Iterator<Map.Entry<String, FamilyExpense>> entries = feHashMap.entrySet()
.iterator();
while (entries.hasNext()) {
Entry<String, FamilyExpense> entry = entries.next();
System.out.println("Key = " + entry.getKey() + ", Grocery = "
+ entry.getValue().getExpense(ExpenseType.Grocery));
System.out.println("Key = " + entry.getKey() + ", Entertainment = "
+ entry.getValue().getExpense(ExpenseType.Entertainment));
System.out.println("Key = " + entry.getKey() + ", Transportation = "
+ entry.getValue().getExpense(ExpenseType.Transportation));
}
}
public static void accumulate(Map<String, FamilyExpense> feMap,
String key, ExpenseType expenseType, BigDecimal value) {
FamilyExpense familyExpense=null;
if(feMap.containsKey(key)){
familyExpense=feMap.get(key);
familyExpense.addExpense(value,expenseType);
}
else{
FamilyExpense newFamily = new FamilyExpense();
newFamily.addExpense(value,expenseType);
feMap.put(key,newFamily);
}
}
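Going one step further, the per-type switch statements can be removed entirely by letting each family hold a map from expense type to running total. This is a hypothetical alternative design, not part of either code sample above. (As an aside, `new BigDecimal(5.00)` uses the double constructor, which can introduce binary-fraction artifacts; the String constructor, as used below, is usually preferred.)

```java
import java.math.BigDecimal;
import java.util.EnumMap;
import java.util.Map;

// Hypothetical alternative: each family holds an EnumMap from expense
// type to running total, so accumulation needs no switch at all.
class ExpenseTotals {
    enum ExpenseType { GROCERY, ENTERTAINMENT, TRANSPORTATION }

    private final Map<ExpenseType, BigDecimal> totals =
            new EnumMap<>(ExpenseType.class);

    void add(ExpenseType type, BigDecimal value) {
        totals.merge(type, value, BigDecimal::add); // insert-or-accumulate
    }

    BigDecimal get(ExpenseType type) {
        return totals.getOrDefault(type, BigDecimal.ZERO);
    }

    public static void main(String[] args) {
        ExpenseTotals t = new ExpenseTotals();
        t.add(ExpenseType.GROCERY, new BigDecimal("35.00"));
        t.add(ExpenseType.GROCERY, new BigDecimal("50.00"));
        System.out.println(t.get(ExpenseType.GROCERY)); // prints 85.00
    }
}
```

Adding a new expense type then means adding one enum constant, with no other code changes.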
This is your family expense class.
import java.math.BigDecimal;
public class FamilyExpense {
private String familyId;
private BigDecimal groceryExpense;
private BigDecimal entertainmentExpense;
private BigDecimal transportationExpense;
public FamilyExpense(){
this.groceryExpense = new BigDecimal(0);
this.transportationExpense = new BigDecimal(0);
this.entertainmentExpense = new BigDecimal(0);
}
public BigDecimal getExpense(FamilyExpenseProcessor.ExpenseType expenseType){
switch (expenseType){
case Entertainment:
return this.entertainmentExpense;
case Grocery:
return this.groceryExpense;
case Transportation:
return this.transportationExpense;
default:
return new BigDecimal(0.0);
}
}
public void addExpense(BigDecimal expense, FamilyExpenseProcessor.ExpenseType expenseType){
switch (expenseType){
case Entertainment:
this.entertainmentExpense=this.entertainmentExpense.add(expense);
break;
case Grocery:
this.groceryExpense=groceryExpense.add(expense);
break;
case Transportation:
this.transportationExpense=this.transportationExpense.add(expense);
break;
}
}
} | {
"domain": "codereview.stackexchange",
"id": 9022,
"tags": "java, hash-map"
} |
In which direction would gravitational waves be emitted when two black holes collide? | Question: Imagine two black holes on the x-axis coming together at the origin (not rotating around each other, just falling towards each other).
In which direction would the most intense gravitational waves be emitted when they collided?
Would they be emitted most strongly along the x-axis, namely in the directions the black holes came from?
Or would they radiate in circles on the yz-plane?
Or would they radiate in spherical shells evenly in all directions?
Or would there be no waves at all, since the black holes are not rotating around each other?
Answer: In a comment you explained that you are interested in the angular distribution of the power radiated as gravitational waves.
When the two black holes are close together, the analysis is probably messy, requiring numerical relativity. But when the separation of the two black holes is considerably greater than their Schwarzschild radii, the nonrelativistic analytical approach in the 1963 paper "Gravitational Radiation from Point Masses in a Keplerian Orbit" by Peters and Matthews is valid.
The angular power distribution is given by equation (5),
$$\frac{dP}{d\Omega}=\frac{G}{8\pi c^5}\left[\dddot{Q}_{ij}\dddot{Q}_{ij}-2n_i\dddot{Q}_{ij}n_k\dddot{Q}_{kj}-\frac12(\dddot{Q}_{ii})^2+\frac12(n_in_j\dddot{Q}_{ij})^2+\dddot{Q}_{ii}n_jn_k\dddot{Q}_{jk}\right],$$
where $\hat n$ is the unit vector in the direction of radiation and
$$Q_{ij}=\sum_n m^{(n)}x^{(n)}_ix^{(n)}_j$$
is the mass quadrupole tensor of the two masses around their center of mass.
Taking the $z$ (not $x$) axis to be along the direction of motion of the two holes, there is only one nonzero component of $Q_{ij}$, namely $Q_{zz}$, and it will be proportional to the square of the separation. Its third time derivative is irrelevant to the question of the angular distribution.
Since only $Q_{zz}$ is nonzero, one finds that only $n_z=\cos\theta$, where $\theta$ is the usual polar angle from the $z$-axis, enters into the contractions. The result is easily found to be
$$\begin{align}
\frac{dP}{d\Omega}&=\frac{G}{8\pi c^5}\left[\dddot{Q}_{zz}\dddot{Q}_{zz}-2n_z\dddot{Q}_{zz}n_z\dddot{Q}_{zz}-\frac12(\dddot{Q}_{zz})^2+\frac12(n_zn_z\dddot{Q}_{zz})^2+\dddot{Q}_{zz}n_zn_z\dddot{Q}_{zz}\right]\\
&=\frac{G}{8\pi c^5}(\dddot{Q}_{zz})^2\left[1-2\cos^2\theta-\frac12+\frac12\cos^4\theta+\cos^2\theta\right]\\
&=\frac{G}{16\pi c^5}(\dddot{Q}_{zz})^2\left(1-\cos^2\theta\right)^2=\frac{G}{16\pi c^5}(\dddot{Q}_{zz})^2\sin^4\theta.
\end{align}$$
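A quick numerical check of this contraction (pure-Python sketch, with $\dddot{Q}_{zz}$ set to 1) confirms that the bracketed expression reduces to $\tfrac12\sin^4\theta$:

```python
import math

# With Q_zz the only nonzero component (its third time derivative set
# to 1), evaluate the bracket of the angular-distribution formula.
def bracket(theta):
    n = (math.sin(theta), 0.0, math.cos(theta))
    Q = [[0.0] * 3 for _ in range(3)]
    Q[2][2] = 1.0
    t1 = sum(Q[i][j] * Q[i][j] for i in range(3) for j in range(3))
    t2 = -2 * sum(n[i] * Q[i][j] * n[k] * Q[k][j]
                  for i in range(3) for j in range(3) for k in range(3))
    tr = sum(Q[i][i] for i in range(3))
    nQn = sum(n[i] * n[j] * Q[i][j] for i in range(3) for j in range(3))
    return t1 + t2 - 0.5 * tr * tr + 0.5 * nQn * nQn + tr * nQn

# The bracket equals (1/2) sin^4(theta) at every angle:
for theta in (0.3, 1.0, 2.0):
    assert abs(bracket(theta) - 0.5 * math.sin(theta) ** 4) < 1e-12
```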
So no power is radiated along the line of motion; the radiated power is greatest perpendicular to the line of motion; and the radiation pattern is symmetric relative to that perpendicular plane even when the two black holes have different masses.
Addendum: As @mmeent explains in another answer, my last point about the symmetry relative to the perpendicular plane no longer holds when smaller, higher-order multipole contributions are also considered. The calculation here is a leading-order (quadrupole) calculation. | {
"domain": "physics.stackexchange",
"id": 65482,
"tags": "black-holes, astrophysics, gravitational-waves, binary-stars"
} |
Non-locality and quanta | Question: Quantum mechanics is non-local in that long distance correlations are present, though there is no signalling possible. But QFT is Lorentz invariant and contains quantum mechanics as a special case. I assume this is not a paradox as paradoxes do not exist but I do not understand the details. Can anyone supply a reference or satisfactory explanation?
Answer: Correlations between the results of measurement procedures on the parts of an entangled system in QM (and thus also in QFT) are fixed at "the moment" of the observation, and not beforehand, as instead happens for long-range correlated systems in statistical mechanics. In this sense, because the two measured parts of the system can be arbitrarily far from each other (so that no physical signal can propagate from one part to the other with speed $<c$ during the measurement procedures), non-locality is manifest in quantum theories. Even in QFT, in spite of the fact that fields obey covariant and local equations. It is because non-locality is due to entangled quantum states and not to the field equations.
This is just what the experimental failure of Bell's inequalities proves: (1) these correlations show up and (2) they were not fixed before performing measurement on the system (as it would be if there were local hidden variables, more fundamental than the quantum description of the system).
It is worth stressing that these correlation do not imply any transfer of energy or momentum or other physical quantities from one part of the system to the other, and there is no violation of causality with them (also because the time order of the pair of distant observations may depend on the used reference frame, since the involved pair of events are spacelike separated).
Moreover, since the outcome of measurements is stochastic one cannot transfer information through these correlations.
The situation is similar to this one where the entangled system is replaced by a pair of magical quantum dice. I have a die and you have another one. It happens that, no matter the distance between us, once you get a number from your die, I get the same number from mine.
In principle we could communicate through these correlations, in practice we cannot, because as the outcome is stochastic I cannot impose to my die to produce the outcome I want.
There is another possibility to communicate through our magic quantum dice: I could communicate something to you simply by throwing my die. You would see your die reproduce my numbers and would know, this way, that I am throwing my die.
Conversely, with correlations of QM even this possibility is forbidden: It is possible to prove that the outcome of your measurement procedures on your part of the system have the same statistics, independently from the fact that I perform measurement on my part of system or not, though each pair of outcomes (on both sides of the system) appear to be correlated. So you cannot know whether or not I am "observing" my part of system. | {
"domain": "physics.stackexchange",
"id": 11246,
"tags": "quantum-mechanics, quantum-field-theory, locality"
} |
What do $SU(N)$ Dynkin labels say about a multi-component bosonic wavefunction? | Question: Suppose you have a system of A bosons of $N$ different types, each of which can be in different angular momentum orbitals. We define the creation operator of a boson of type $a$ with angular momentum $k$ by $ b_{a,k}^\dagger $.
A basis for the states of such bosons with angular momentum $ L $ can then be created by acting with a products of $ b_{a,k}^\dagger $s on the vacuum state in such a way that the total angular momentum is correct.
Suppose now that the Hamiltonian (including interaction) is $SU(N)$
symmetric (i.e. homogeneous - same interaction between every pair of particles) and commutes with the total angular momentum operator. We then know that the eigenstates can be labelled by SU(N)-quantum numbers (called Dynkin labels, I think).
My question is: what are these numbers and how do they reflect the symmetries of the corresponding eigenstates?
Answer: The Dynkin labels $(\lambda_1,\lambda_2,\ldots,\lambda_{N-1})$ label irreducible representations of $SU(N)$. Using Young diagrams and Schur-Weyl duality is it possible to infer the permutation symmetry properties of the multiparticle states from the Dynkin labels.
These, however, will not help you get angular momentum labels $L$. For these you need branching rules from an irrep $(\lambda_1,\lambda_2,\ldots,\lambda_{N-1})$ to the $SO(3)$ subgroup of rotations. The branching rules are usually clarified using a subgroup chain, i.e. something like
$$
SU(N)\supset SO(N)\supset SO(N-1)\ldots \supset SO(3)\tag{1}
$$
with each link in the chain supplying quantum numbers to properly label your states.
The problem is actually highly non-trivial. For $SU(3)\supset SO(3)$ the branching rules are known from the work of Elliott on the nuclear $SU(3)$ model, and actually constructing basis states is extremely non-trivial as some $SO(3)$ irreps will usually occur more than once. For example, in the irrep $(3,2)$, one can find the $SO(3)$ irreps $L=1,2,3,3,4,5$ with $L=3$ occurring twice.
Of course the subgroup chain of Eq.(1) is not the only way to get to $SO(3)$, i.e one could use $SU(N)\supset SU(N-1)\ldots SU(3)\supset SO(3)$. This chain and the labels it supplies will have a different physical interpretation than the chain of (1).
The place to look are the tables of the review article by Slansky: Group theory for unified model building, Phys.Rep. vol. 79 (1981) p.1-128. The physics of the simplest subgroup chains is best explained in nuclear physics texts, and a good modern one is the one by Rowe and Wood. Otherwise, the actual first principle computation use modified tableaux and Schur functions, which are not for the faint-at-hearts. | {
"domain": "physics.stackexchange",
"id": 39924,
"tags": "quantum-mechanics, group-theory, representation-theory, bosons"
} |
Why is the flux not infinite around an isolated charge? | Question: I was wondering that the density of electric field lines determine the strength of the electric field .Now let's say you have an isolated charge ; you know the flux through a closed surface which I take to be a unit sphere suddenly limited by the Gauss law. If the flux is limited then it's bound to be the field lines should be also Limited as if its Infinite if then you know that $\int \int E.ds$ is infinite and the flux becomes infinite as well. Given that the electric field lines are not infinite and they indicate where a unit point charge goes if kept at that point now it create some empty spaces where there are no field lines; so does it suggest that if I put a point charge in those empty spaces for there are no field lines that it would not interact with the field or feel a force outward or inward depending on the charge? Please show that where am I misunderstanding?
Answer: I was wondering that the density of electric field lines determine the strength of the electric field
First we need to realize that electric field lines are used to visualize and analyze electric fields, and therefore should be considered a pictorial tool as opposed to a physical entity. For example, one cannot think of the space between the field lines in the picture, which gets larger and larger the further away we go from the point charge producing the field, as being "devoid" of an electric field.
Now let's say you have an isolated charge ; you know the flux through a closed surface which I take to be a unit sphere suddenly limited by the Gauss law.
Not quite sure what you mean here, but the total flux is the same for any spherical closed surface surrounding the charge, no matter how large the sphere. Since the flux is the product of the field strength and the area perpendicular to the field, as the surface area increases with increasing radius, the electric field strength decreases proportionally so that $E \times A$ is a constant.
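To make the "limited flux" point concrete: for a point charge $q$, the flux through a sphere of any radius $r$ is

```latex
\Phi \;=\; \oint \vec{E}\cdot d\vec{A}
\;=\; \frac{q}{4\pi\varepsilon_0 r^2}\,\bigl(4\pi r^2\bigr)
\;=\; \frac{q}{\varepsilon_0},
```

so the total flux is finite and independent of $r$, even though the field strength itself grows without bound as $r \to 0$.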
If the flux is limited then it's bound to be the field lines should be also Limited as if its Infinite if then you know that the double integral of E.dS is infinite and the flux becomes infinite as well.
As said previously, pictorially the number of field lines is the same, but their density decreases with increasing area, so that the total flux bounded by any sized closed surface remains the same.
Given that the electric field lines are not infinite and they indicate where a unit point charge goes if kept at that point now it create some empty spaces where there are no field lines; so does it suggest that if I put a point charge in those empty spaces for there are no field lines that it would not interact with the field or feel a force outward or inward depending on the charge? Please show that where am I misunderstanding?
As previously said, field lines are a pictorial tool. You cannot think of the space between lines as having no field. A charge placed anywhere will experience the field, albeit weakly the further out you go. Instead of thinking of individual, discrete field lines, think of the electric field as a continuum or smear of field lines each line getting thinner and thinner to represent lessening strength. This would be much harder to show in a picture, thus we use the discrete field lines representation.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 55486,
"tags": "electrostatics, vector-fields"
} |
Future pointing light cones in Black Hole in Schwarzschild Coordinates | Question: In examining black holes in Schwarzschild coordinates (i.e. without resorting to other coordinates), the r coordinate becomes timelike within the event horizon and the t coordinate spacelike.
Therefore the light cone is tilted by 90 degrees. However, how do we say which direction in r is the future and which is the past? (Textbooks jump to the forward light cone being radially inward with minimal explanation.)
This question is further complicated by the theoretical existence of white holes (even though they are not thought to be a physical reality), where the future light cone points radially outward. Is there a less hand-wavy approach to explaining all of this?
Answer: You can't expect an explanation if you stick to singular coordinates,
where there is no smooth way to go from without to within.
Shift to well-behaved coordinates (e.g. Kruskal-Szekeres). Then
light-cone behaviour is quite simple and you can easily see what
their relationship must be with respect to Schwarzschild coordinates.
In K-S coordinates light-like geodesics are all at 45°, $t=$ const
lines are straight lines through the origin, and $r=$ const lines are
equilateral hyperbolas with 45° asymptotes (see e.g. the Wikipedia article
https://en.wikipedia.org/wiki/Kruskal-Szekeres_coordinates for
help). | {
"domain": "physics.stackexchange",
"id": 54349,
"tags": "general-relativity"
} |
Ambiguity in Earth's "Tilt" | Question: It's well known that the axial tilt of the Earth (with respect to the ecliptic) is about 23.4 degrees. However, two angles are needed to specify the orientation of any rigid body, so it's unclear to me exactly how Earth is oriented with respect to its orbital plane.
To be more precise, consider the plane that satisfies these two conditions:
1. Perpendicular to the ecliptic plane
2. Contains the line connecting the Sun and the Earth when Earth is at perihelion (Jan. 2).
(Basically, this plane cuts through the orbit of the Earth such that for half the year Earth is one side and for the other half Earth is on the other side.)
Questions:
1. When the Earth is at perihelion, what is the angle between its rotational axis and the plane described above? I would guess this angle is very small from the images of Earth’s orbit. Is there a technical term for this angle?
2. Does the direction of the axis remain approximately the same throughout its entire orbit?
Please ignore precession for this question.
Answer: The fact that the perihelion is close to the winter solstice is purely coincidental. The perihelion might occur in any season. It wanders around in a somewhat chaotic fashion because of the gravitational influence of the other planets. At the moment the perihelion moves a day every 58 years.
Let A = Earth's axis, and E = the line perpendicular to the orbital plane. The angle between A and E is called the axial tilt. Let S = the line between the centers of the Earth and the Sun. Let P = the plane containing S and E.
Q1 is (maybe) asking what the angle is between A and P on Jan 2. It will be about 23.4° × sin(2π · 12/365) ≈ 4.8°, because there are 12 days from Dec 21 to Jan 2.
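A one-line check of this estimate, using the numbers given above (illustrative; it ignores the difference between 365 and 365.25 days):

```python
import math

# Angle between Earth's axis A and the plane P at perihelion: the axial tilt
# projected by the ~12-day offset between the Dec 21 solstice and Jan 2.
tilt_deg = 23.4
days_from_solstice = 12
angle = tilt_deg * math.sin(2 * math.pi * days_from_solstice / 365)
print(round(angle, 1))  # roughly 4.8 degrees
```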
Q2. yes. | {
"domain": "astronomy.stackexchange",
"id": 1234,
"tags": "orbit, earth, rotation"
} |
Gazebo Static Map Plugin not loading | Question:
I am trying to use the Gazebo static map plugin following the tutorial at http://gazebosim.org/tutorials?tut=static_map_plugin&cat=build_world. However, when I launch Gazebo, with the command gazebo --verbose static_map_plugin.world, the world appears white, and an error shows in the console.
[Err] [Plugin.hh:180] Failed to load plugin libStaticMapPlugin.so: libStaticMapPlugin.so: cannot open shared object file: No such file or directory
I've looked throughout my computer, but I can't find this file, and as far as I can tell, it should come with Gazebo. What do I need to do to fix this?
This is on Gazebo 9.0.0 with ROS Melodic on Ubuntu 18.04.
Originally posted by BillThePlatypus on Gazebo Answers with karma: 1 on 2019-09-13
Post score: 0
Original comments
Comment by kumpakri on 2019-09-16:
the link appears to be broken. Can you fix it? I never heard of Static Map Plugin and was not able to find any reference on google search.
Comment by BillThePlatypus on 2019-09-16:
I've fixed the link. I made a mistake copying it over from ROS answers.
Comment by kumpakri on 2019-09-17:
I found this plugin in folder /usr/lib/x86_64-linux-gnu/gazebo-7/plugins.
Comment by BillThePlatypus on 2019-09-17:
I looked in that location, and the libStaticMapPlugin.so file isn't there. I have installed Gazebo through the ros-melodic-desktop-full package. Did you install Gazebo another way, or with extra packages?
Comment by kumpakri on 2019-09-19:
I have definitely used one of those approaches to install Gazebo.
Answer:
After a lot of searching and building from source, I discovered that the static map plugin is for Gazebo 10, not Gazebo 9.
Originally posted by BillThePlatypus with karma: 1 on 2019-09-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4437,
"tags": "gazebo-plugin, gazebo-9"
} |
navigate to NEAREST goal pos (if goal pos is blocked) | Question:
Hello all,
I am working on a project where my robot needs to navigate to a goal position (and that part is easy and working fine). But sometimes the goal position may be blocked, and there is a possibility that the planner fails, resulting in no movement at all.
In this case I want my robot to move to the nearest reachable point to the goal position (this could be done by taking a radius around the goal position, assigning a new goal there, and navigating again, but that might take a lot of time). Please help me make my algorithm more robust to implement these changes. (Using AMCL.)
Below is my code for navigation to the goal position. Please share your thoughts:
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
import actionlib
from actionlib_msgs.msg import *
from geometry_msgs.msg import Pose, Point, Quaternion

class GoToPose():
    def __init__(self):
        self.goal_sent = False

        # What to do if shut down (e.g. Ctrl-C or failure)
        rospy.on_shutdown(self.shutdown)

        # Tell the action client that we want to spin a thread by default
        self.move_base = actionlib.SimpleActionClient("move_base", MoveBaseAction)
        rospy.loginfo("Wait for the action server to come up")

        # Allow up to 5 seconds for the action server to come up
        self.move_base.wait_for_server(rospy.Duration(5))

    def goto(self, pos, quat):
        # Send a goal
        self.goal_sent = True
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose = Pose(Point(pos['x'], pos['y'], 0.000),
                                     Quaternion(quat['r1'], quat['r2'], quat['r3'], quat['r4']))

        # Start moving
        self.move_base.send_goal(goal)

        # Allow TurtleBot up to 60 seconds to complete task
        success = self.move_base.wait_for_result(rospy.Duration(60))
        state = self.move_base.get_state()
        result = False
        if success and state == GoalStatus.SUCCEEDED:
            # We made it!
            result = True
        else:
            self.move_base.cancel_goal()

        self.goal_sent = False
        return result

    def shutdown(self):
        if self.goal_sent:
            self.move_base.cancel_goal()
        rospy.loginfo("Stop")
        rospy.sleep(1)

if __name__ == '__main__':
    try:
        rospy.init_node('nav_test', anonymous=False)
        navigator = GoToPose()

        # Customize the following values so they are appropriate for your location
        position = {'x': 13.3, 'y': 3.9}
        quaternion = {'r1': 0.000, 'r2': 0.000, 'r3': 0.000, 'r4': 1.000}

        rospy.loginfo("Go to (%s, %s) pose", position['x'], position['y'])
        success = navigator.goto(position, quaternion)

        if success:
            rospy.loginfo("Hooray, reached the desired pose")
        else:
            rospy.loginfo("The base failed to reach the desired pose")

        # Sleep to give the last log messages time to be sent
        rospy.sleep(1)

    except rospy.ROSInterruptException:
        rospy.loginfo("Ctrl-C caught. Quitting")

#### code by mark sulliman
Originally posted by RoboRos on ROS Answers with karma: 37 on 2018-09-25
Post score: 0
Original comments
Comment by Delb on 2018-09-25:
Which global planner are you using ? The carrot_planner does it but your robot only move straight forward. There is also the navfn which accept a tolerance on a goal point.
Comment by RoboRos on 2018-09-26:
Thanks for your help. Can you post it as an answer so i can accept it and close it. @Delb
Thanks for your help too @choco93
Comment by Delb on 2018-09-26:
You can answer yourself to tell exactly what solved your problem/which solution you used, and then accept it (and don't close the question, only accept it).
Glad your problem is solved !
Answer:
You can set your tolerance parameter to be high; that should solve your problem.
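To expand on this (the parameter names below are those of the default base_local_planner local planner, TrajectoryPlannerROS; the values are illustrative and should be tuned for your robot):

```yaml
# Loaded into move_base's namespace, e.g. via rosparam in your launch file.
TrajectoryPlannerROS:
  xy_goal_tolerance: 0.5    # metres: accept any pose within 0.5 m of the goal
  yaw_goal_tolerance: 3.14  # radians: effectively ignore the final heading
```

A large xy_goal_tolerance trades goal accuracy for robustness: move_base can then declare success at a nearby reachable pose even when the exact goal cell is blocked.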
Originally posted by Choco93 with karma: 685 on 2018-09-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 31821,
"tags": "slam, navigation, ros-kinetic, amcl"
} |
Does weight affect the Earth's magnetic field? | Question: Does the weight of a city influence the Earth's magnetic field, sort of like you can crush a magnet by putting more weight on the magnet than it can handle, for example with an extremely heavy hammer?
Answer: The Earth's magnetic field is not due to magnetized matter distributed uniformly from the surface to the core. Most of the field comes from the motions of the liquid core:
Unlike a bar magnet, Earth's magnetic field changes over time because it is generated by a geodynamo (in Earth's case, the motion of molten iron alloys in its outer core).
Cutaway diagram of Earth's internal structure (to scale) with inset showing detailed breakdown of structure (not to scale)
We live where the mountains are in the figure above and it is obvious that only catastrophic impacts with similar size celestial bodies would affect the geometry of the core and thus change the boundary conditions for the dynamo.
Any effect of weight on the liquid core has already happened at the creation period of the earth. | {
"domain": "physics.stackexchange",
"id": 41011,
"tags": "newtonian-mechanics, magnetic-fields, mass, weight, geomagnetism"
} |
A shorter impulse-contact period than a rod's mechanical longitudinal wave propagation period may fail to accelerate the rod? | Question: I have viewed and read several papers related to this subject but can't find any help in understanding a seemingly simple and straightforward point: whether a short impulse period, compared to the wave propagation period in a rod, may or may not fail to impart motion to the rod. It seems that with a short-duration impulse contact between a ball and a long rod, no matter how powerful the impulse is, the struck long rod would fail to accelerate/move, due to the rod's wave propagation period being longer than the ball-rod contact impulse period(?)
To illustrate my point here is an example
A small ball hits one end of a long rod (i.e. a very short-lived impulse on the rod, where the ball "hits and runs away" from the rod). Let the two objects be made of the same material (steel, wood, etc.) so that the wave propagation speed is the same in both, and the travel time depends only on the length of the rod or the diameter of the ball.
Now, we know that the mechanical longitudinal (force-carrying) wave starting right after the impact propagates through both the ball and the rod; it takes a different time to propagate through each of them, due to the different lengths of the objects. This period, for the wave to propagate from the rod's first end to the other end and then reflect back to the rod's original first end, is much longer than the period of the small ball's wave travel.
Let's assume that the ball's impact/impulse duration is shorter than the rod's wave propagation period but longer than the ball's wave propagation period; i.e. the ball hits the rod, has time to have its own wave transmitted/reflected, and moves away, and right afterwards the ball is removed very quickly from the rod.
Now let's see if I understand this correctly:
First, in order for any object (the rod in this case) to move/accelerate as a whole, the wave from the original ball impulse at the first end of the rod must propagate all the way to the other end of the rod and then reflect back to the first end again. Only then, when the wave has completed a full reflection, does the motion of the "rod as a whole" reveal itself, and the rod as a whole can accelerate forward away from its original place, moving for miles and miles :). Before that instant, no matter how hard the ball hits the rod, the rod as a whole cannot move/accelerate, i.e. cannot displace itself away from its original location. Of course, only local small compression and rarefaction of the rod, plus minor periodic shifts of the centre of gravity, occur during the wave's propagation through the rod, starting at the instant of impact.
Here is my main question:
Now, keeping in mind that the impulse duration was shorter than the rod's wave period, i.e. the ball "hit and ran away" from the rod before the propagation wave inside the rod had time to reflect back: the ball is now no longer in contact with the rod. The wave in the rod is finally (after a "long" time) reflected back to the rod's first end, on which the original impulse occurred, but there is no ball touching it! Will the rod now as a whole start to move/accelerate even though the ball is no longer touching the rod end? Or will the rod just stay there, since there is no ball for the rod's reflected wave to push on, with all the motion we get being the compression wave going back and forth along the rod's length? I know, I can already hear your thoughts, but that is why I am asking.
Comments
My point is actually not so complicated, although it may very well sound so. If you are not understanding anything, please don't hesitate to ask me to make it clearer. I am surely making mistakes here, so that is where your help comes in.
Oh, by the way, the ball-rod example is just a rather inefficient example; the best method to get a very short impulse (hitting an object and detaching near-instantaneously) would be electromagnetic, i.e. a capacitor bank discharging into a coil.
Looking forward to your replies; it is truly appreciated. Thanks.
Regards
Answer: If we have two objects colliding in free space, conservation of momentum is required, regardless of what's happening inside each object. If the rod initially has no momentum in our reference frame, it will gain momentum after the ball hits it. It is possible that this momentum may initially be in the form of a wave propagating through the rod, with most of the atoms stationary except near the wavefront, but first, the momentum of the center of mass of the rod has increased because we're averaging over all the atoms, and second, once the wave hits the other end of the material it will rebound and the other stationary atoms will now start moving forward until the wave gets back to the front end. (It would be like a box with a ball bouncing back and forth between the sides. Even if the box is initially stationary when the ball is moving to the right, it'll start moving once the ball hits the wall and rebounds.) It is possible that we'd end up with a rod where most of the atoms move forward only half the time, but nevertheless, it is on average moving forward and will continue to move forward (several times its length or whatever) for as long as we wait. The center of mass will move forward continuously the whole time.
Will the rod now as a whole start to move/accelerate even if the ball is no longer touching the rod-end ?
Yes.
Or will the rod only stay there since there is no ball for the rod´s reflected wave to push on, and all motion we got is the compression wave going back and forth through the rod´s length?
No, the rod will move. The wave reflects off the ends of the rod and doesn't need the ball's help except to act as an initial impulse.
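This claim is easy to check with a toy model (a sketch of my own, not from the answer above): a 1-D chain of masses and springs stands in for the rod, the leftmost mass gets an impulsive kick, and nothing touches the chain afterwards. The centre of mass still drifts steadily forward while the compression wave bounces back and forth.

```python
# A 1-D toy "rod": point masses joined by identical springs. The leftmost mass
# gets an impulsive kick (the "ball") and nothing touches the chain afterwards.
# Internal spring forces cancel pairwise, so total momentum is conserved and
# the centre of mass keeps drifting forward while the wave bounces inside.

N = 20                              # number of masses
m = 1.0                             # mass of each
k = 100.0                           # spring stiffness between neighbours
dt = 0.001                          # time step
x = [float(i) for i in range(N)]    # rest positions, unit spacing
v = [0.0] * N
v[0] = 1.0                          # the impulse: only the first mass moves

p0 = m * sum(v)                     # initial total momentum
for step in range(20000):           # simulate 20 time units
    f = [0.0] * N
    for i in range(N - 1):          # Hooke's law, natural length 1
        s = k * (x[i + 1] - x[i] - 1.0)
        f[i] += s
        f[i + 1] -= s
    for i in range(N):              # symplectic Euler update
        v[i] += f[i] / m * dt
        x[i] += v[i] * dt

com = sum(x) / N
print(com)          # centre of mass has moved forward from its start at 9.5
print(m * sum(v))   # total momentum is still p0
```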
Now, whether an impact will cause all the atoms to move as one or if some waves are created instead (or if both happen) is an interesting question. For example, Mossbauer spectroscopy, which involves a gamma ray photon hitting an atom's nucleus, depends on the collision resulting in the atoms all recoiling in unison rather than a wave being formed. | {
"domain": "physics.stackexchange",
"id": 16566,
"tags": "newtonian-mechanics"
} |
How large can an atom get? What's the farthest an electron can be from its nucleus? | Question: For example, would it be possible to excite a hydrogen atom so that it's the size of a tennis ball? I'm thinking the electron would break free at some point, or it just gets practically harder to keep the electron at higher states as it gets more unstable. What about in theoretical sense?
What I know is that the atomic radius is related to the principal quantum number $n$. There seems to be no upper limit as to what $n$ could be (?), which is what led me to this question.
Answer: Atoms with electrons at very large principal quantum number ($n$) are called Rydberg atoms.
Just by coincidence the most recent Physics Today reports on a paper about the detection of extra-galactic Rydberg atoms with $n$ as high as 508(!), which makes them roughly 250,000 times the size of the same atom in the ground state. That is larger than a micrometer.
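As a rough order-of-magnitude check (treating the atom as hydrogen-like and ignoring carbon's quantum defect), the orbital radius scales as $n^2$ times the Bohr radius:

```python
# Hydrogen-like estimate: orbital radius ~ a0 * n^2 (ignores quantum defects).
a0 = 5.29e-11                 # Bohr radius in metres
n = 508
radius = a0 * n**2
print(radius)                 # over ten micrometres for n = 508
print(n**2)                   # ~258,000x the ground-state size
```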
The paper is Astrophys. J. Lett. 795, L33, 2014. and the abstract reads
Carbon radio recombination lines (RRLs) at low frequencies ($\lesssim 500 \,\mathrm{MHz}$) trace the cold, diffuse phase of the interstellar medium, which is otherwise difficult to observe. We present the detection of carbon RRLs in absorption in M82 with the Low Frequency Array in the frequency range of $48-64 \,\mathrm{MHz}$. This is the first extragalactic detection of RRLs from a species other than hydrogen, and below $1\,\mathrm{GHz}$. Since the carbon RRLs are not detected individually, we cross-correlated the observed spectrum with a template spectrum of carbon RRLs to determine a radial velocity of $219\,\mathrm{km\,s^{-1}}$. Using this radial velocity, we stack 22 carbon-$\alpha$ transitions from quantum levels $n = 468$–$508$ to achieve an $8.5\sigma$ detection. The absorption line profile exhibits a narrow feature with peak optical depth of $3 \times 10^{-3}$ and FWHM of $31\,\mathrm{km\,s^{-1}}$. Closer inspection suggests that the narrow feature is superimposed on a broad, shallow component. The total line profile appears to be correlated with the 21 cm H I line profile reconstructed from H I absorption in the direction of supernova remnants in the nucleus. The narrow width and centroid velocity of the feature suggests that it is associated with the nuclear starburst region. It is therefore likely that the carbon RRLs are associated with cold atomic gas in the direction of the nucleus of M82.
"domain": "physics.stackexchange",
"id": 17792,
"tags": "quantum-mechanics, atomic-physics, atomic-excitation"
} |
Does resonance stablization of carbocation increase ease of $\ce{S_N1}$ reaction or decrease it due to delocalization? | Question: (A)
(B)
In both of the cases, during an $\ce{S_N1}$ reaction a carbocation is formed at the position of the $\ce{Cl}$ atom. I know that the rate of $\ce{S_N1}$ increases with increasing carbocation stability, and in (B) the carbocation will be resonance stabilized. But at the same time, won't the positive charge be delocalized as well, leading to a decreased effect of the positive charge at the leaving-group position? Won't that negatively affect the $\ce{S_N1}$ reaction?
Answer: (A) Yes, the rate of $\mathrm{S_N1}$ increases with the stability of the intermediate, as the energy of the transition state (partially positively charged at the $\alpha$-$\ce{C}$) also decreases due to stabilization by delocalization.
(B) Since the rate-determining step (RDS) of $\mathrm{S_N1}$ is the formation of the carbocation, the second step (which does slow down due to delocalization) does not control the overall rate; the rate depends on how fast the carbocation forms, not on how fast the nucleophile approaches the positively charged intermediate. Therefore, the overall reaction will speed up.
Points to consider
1-Chloropropane may undergo $\mathrm{S_N2}$ because the $\alpha$-$\ce{C}$ is primary, for which $\mathrm{S_N1}$ rates are usually slow. Some $\mathrm{S_N1}$ is always observed at primary $\ce{C}$ atoms as well, but the rate of $\mathrm{S_N2}$ is usually much larger.
For $\ce{3-chloroprop-1-ene}$, nucleophile can approach at both the $\alpha$ and $\gamma$ positions. Here the product is the same, but it won't be the same for $\ce{4-chloroprop-2-ene}$. | {
"domain": "chemistry.stackexchange",
"id": 17465,
"tags": "organic-chemistry, reaction-mechanism, halides, stability, nucleophilic-substitution"
} |
Why are polysaccharides not sweet in taste? | Question: Polysaccharides are defined as polyhydroxy aldehyde or ketone which on hydrolysis yield many units of monosaccharides.
I got one answer(to my question above) as:
On our tongue, we have things called taste receptors. These receptors are loosely categorised into sweet, sour, bitter and salty. Our sweet-receptors bind to specific types of molecules, namely monosaccharides and disaccharides. Polysaccharides are not as sweet because they do not readily bind to the sweet-receptors on our tongue, as the other smaller molecules do!
My question is that in our body (due to the presence of water),polysaccharides and oligosaccharides are hydrolysed to simpler units of monosaccarides...
So why does the latter respond to the receptors(indicating sweetness) while the former does not(while both yield monosaccarides on hydrolysis)?
Answer: Well, while polysaccharides are indeed metabolised to simpler units, even in the mouth (saliva can account for up to 30% of initial starch digestion), this usually doesn't happen fast enough, so full digestion follows later in your digestive system.
You can't sense the sweetness of polysaccharides because they do not fit the receptors.
"domain": "chemistry.stackexchange",
"id": 9474,
"tags": "carbohydrates, hydrolysis, taste"
} |
Dependence of BRST Quantization on the Choice of Gauge-Fixing Function | Question: There is a point which confuses me about BRST procedure. One shows that, if we define physical states as the ones that are annihilated by BRST charge $Q$, the scattering amplitudes don't depend on gauge fixing function. But doesn't charge $Q$ depend on the auxiliary field $B$, and therefore (after integrating it out) on gauge fixing function?
Answer: The short answers are:
Yes, the non-minimal BRST charge $Q$ does depend on the Lautrup-Nakanishi (LN) auxiliary field $B$.
No, the BRST charge $Q$ does not depend on the gauge-fixing condition (which instead is part of the gauge-fixing fermion $\psi$).
One of the benefits of the BRST formalism is that it shows formally that the path integral $Z$ doesn't depend on the gauge-fixing fermion $\psi$, cf. e.g. this post.
Integrating out the $B$-field implies that $Q^2$ only vanishes on-shell. However, integrating out fields cannot spoil the gauge-fixing independence of pt. 3. | {
"domain": "physics.stackexchange",
"id": 56066,
"tags": "gauge-theory, path-integral, gauge-invariance, gauge, brst"
} |
Transducers in cuff-style blood pressure monitors | Question: I have been doing research into blood pressure monitors, and I was wondering when the cuff tightens, how do the transducers actually measure the systolic/diastolic pressure in the arteries.
Answer: This is actually a really neat question, and it involves a few engineering and anatomical concepts.
Here's what happens when an automatic blood pressure cuff measures your blood pressure:
The cuff inflates until it reaches a pressure just above systolic pressure (the maximum pressure your blood reaches during a heartbeat). When it reaches this pressure, the main artery in your arm gets closed off because its pressure is not enough to resist the compression of the cuff.
The cuff then starts to deflate slowly. There is a pressure sensor that measures pressure fluctuations in the cuff. Once the cuff reaches your systolic pressure, the artery opens up just enough to let some blood through. Since the artery is mostly closed, this causes tiny vibrations as the blood squeezes through (this is because the blood flow is "turbulent"). These vibrations are transmitted through your skin and into the air in the cuff. The pressure sensor is sensitive enough to detect these tiny vibrations in the air of the cuff once they start, and it records the current pressure of the cuff as the "systolic pressure".
The pressure in the cuff keeps decreasing, until the tiny vibrations stop (the blood flow becomes "laminar" instead of turbulent). This is the "diastolic pressure", i.e. the minimum pressure during your heartbeat.
It turns out that doctors do the exact same thing with a manual cuff and a stethoscope: they pump up the cuff until they can't hear any gurgling noises in the stethoscope, then they slowly release until they hear the first gurgling noise (systolic pressure reached). They keep releasing pressure until they can't hear the noise any more, at which point they've found the diastolic pressure.
I've actually performed these measurements myself in a Biomechanical Engineering Measurements course. Cool stuff!
Source: Oscillatory Blood Pressure Monitoring Devices | {
"domain": "engineering.stackexchange",
"id": 1016,
"tags": "electrical-engineering, sensors, biomedical-engineering"
} |
Positive Kaon decay into $\pi^+$ and $\pi^0$ suppressed | Question: I am looking at the reactions of positive Kaons and am looking at
$$K^+\to\pi^++\pi^0$$
Since the strangeness is not conserved, this has to be a Weak decay. When I try to make the Feynman diagrams for this decay, I cannot make one that is not a flavor changing neutral current.
However, PDG says the fraction of this decay is $21\%$. This does not make sense as this is a very high fraction for a suppressed decay mode.
What am I missing?
Answer:
Cabibbo suppressed, but not FCNC
(Apologies for poor drawing quality) | {
"domain": "physics.stackexchange",
"id": 65156,
"tags": "particle-physics, standard-model, mesons, pions"
} |
How to automate NCBI genome download | Question: I need to download all of the completely assembled cyanobacterial genomes' GenBank files (.gbff) from NCBI (RefSeq or INSDC FTP data).
For this, I think the steps are:
Find the completely assembled genomes.
Find the GenBank file URLs based on the taxonomic name.
Download the GenBank files (.gbff).
Is there any way to do this using a Python module, or any other approach?
Answer: Outline of solution:
get this file: ftp://ftp.ncbi.nlm.nih.gov/genomes/refseq/bacteria/assembly_summary.txt
filter for lines having "Complete Genome" in column 12
filter for lines having a taxid (column 6) that is a descendant of taxon id 1117 (phylum Cyanobacteria), you can use nodes.dmp from ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/taxdump.tar.gz
The file download URL is the concatenation of column 20 and the last field of column 20, after separating it at '/', followed by appending '_genomic.gbff.gz'. For example: ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/007/365/GCF_000007365.1_ASM736v1/GCF_000007365.1_ASM736v1_genomic.gbff.gz
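The URL construction in step 4 can be sketched in Python (the helper name gbff_url is mine, not an NCBI tool):

```python
# Build the .gbff download URL from the ftp_path field (column 20) of
# assembly_summary.txt, as described in step 4 above.
def gbff_url(ftp_path):
    assembly = ftp_path.rstrip('/').split('/')[-1]  # last path component
    return ftp_path + '/' + assembly + '_genomic.gbff.gz'

print(gbff_url('ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/007/365/'
               'GCF_000007365.1_ASM736v1'))
```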
Edit:
Since I will need something like that soon, I made a Perl script for downloading genomes.
For Cyanobacteria, you would do ./download-refseq-genomes.pl 1117 | {
"domain": "bioinformatics.stackexchange",
"id": 2324,
"tags": "python, phylogenetics"
} |
K nearest neighbour | Question: Is the k-nearest neighbour algorithm a discriminative or a generative classifier? My first thought on this was that it was generative, because it actually uses Bayes' theorem to compute the posterior. Searching further, it seems like it is a discriminative model. But I couldn't find the explanation.
So is KNN discriminative first of all? And if it is, is that because it doesn't model the priors or the likelihood?
Answer: See a similar answer here. To clarify, k nearest neighbor is a discriminative classifier.
The difference between a generative and a discriminative classifier is that the former models the joint probability whereas the latter models the conditional probability (the posterior) starting from the prior.
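To make the k-NN case concrete: the conditional probability $P(\text{class} \mid x)$ is estimated directly as the fraction of the $k$ nearest neighbours belonging to that class. A minimal sketch with 1-D toy data (all names and values illustrative):

```python
# k-NN models the posterior P(class | x) directly: it is just the fraction of
# the k nearest neighbours that carry each label.
from collections import Counter

def knn_posterior(train, x, k):
    """train: list of (point, label) pairs; returns {label: P(label | x)}."""
    nearest = sorted(train, key=lambda pl: abs(pl[0] - x))[:k]
    counts = Counter(label for _, label in nearest)
    return {label: c / k for label, c in counts.items()}

train = [(0.0, 'a'), (0.1, 'a'), (0.2, 'a'), (1.0, 'b'), (1.1, 'b')]
print(knn_posterior(train, 0.15, k=3))   # all three nearest are class 'a'
print(knn_posterior(train, 1.05, k=3))   # mostly class 'b'
```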
In the case of nearest neighbors, the conditional probability of a class given a data point is modeled. To do this, one starts with the prior probability on the classes. | {
"domain": "datascience.stackexchange",
"id": 171,
"tags": "classification"
} |
Did dinosaurs have more than one brain? If so, why? | Question: I once remember reading (15 years ago) that dinosaurs had two brains. One for their head and another one for their digestive functions. What is the current opinion on this theory? Has more evidence become available?
Answer: It would appear that at one time it was thought that a "gap" in the skeleton of a Stegosaurus was a space for another brain. This is now thought to be a storage space for extra food.
Googling your question brings up a number of answers along these lines:
https://en.wikipedia.org/wiki/Stegosaurus#%22Second_brain%22
https://dinosaurs.about.com/od/dinosaurdiscovery/tp/dinoblunders.htm
https://wiki.answers.com/Q/Did_a_dinosaur_have_two_brains | {
"domain": "biology.stackexchange",
"id": 9598,
"tags": "zoology, anatomy"
} |
Evolved networks fail to solve XOR | Question: My implementation of NEAT consistently fails to solve XOR completely. The species converge on different sub-optimal networks which map all input examples but one correctly (most commonly (1,1,0)). Do you have any ideas as to why that is?
Some information which might be relevant:
I use a plain logistic activation function in each non-input node 1/(1 + exp(-x)).
Some of the weights seem to grow quite large in magnitude after a large number of epochs.
I use the sum squared error as the fitness function.
Anything over 0.5 is considered a 1 (for comparing the output with the expected)
Here is one example of an evolved network. Node 0 is a bias node, the other red node is the output, the green are inputs and the blue "hidden". Disregard the labels on the connections.
EDIT: following the XOR suggestions on the NEAT users page of steepening the gain of the sigmoid function, a network that solved XOR was found for the first time after ca 50 epochs. But it still fails most of the time. Here is the network which successfully solved XOR:
Answer: The problem was due to the following issues in my implementation:
The offspring generated in the crossover was not mutated (!)
The mutations did not occur with the expected frequencies (too few link and weight mutations)
The sigmoid activation had to be steepened
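For reference, the steepened activation mentioned above is the logistic function with a larger gain; the original NEAT paper uses a gain of 4.9. A sketch:

```python
import math

def sigmoid(x, gain=1.0):
    # plain logistic for gain=1; NEAT's steepened version uses gain=4.9
    return 1.0 / (1.0 + math.exp(-gain * x))

# Steepening sharpens the transition around 0, making near-binary outputs
# (needed to separate XOR's 0/1 targets) easier to reach with modest weights.
print(sigmoid(0.5))            # plain logistic
print(sigmoid(0.5, gain=4.9))  # steepened: noticeably closer to 1
```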
Another thing that previously caused issues was the network.activate function. Make sure that you wait for the network to stabilize when doing classification tasks, so all signals have time to propagate through the network. | {
"domain": "ai.stackexchange",
"id": 2397,
"tags": "neural-networks, activation-functions, neat, neuroevolution"
} |
Why are there no dihedral mirror planes in the D3h point group? | Question: Consider the molecule $\ce{BH3}$, which belongs to the $D_\mathrm{3h}$ point group. Why are the three mirror planes in this point group labelled as $\sigma_\mathrm v$ instead of $\sigma_\mathrm d$? Don't they bisect the angles between pairs of $C_2$ rotation axes, as shown in the diagram below?
Answer: The $\sigma_v$ planes in a molecule are a direct result of the $C_n$ main axis, i.e. they are interchanged when you perform the $C_n$ operation on them. The $\sigma_d$ planes, called dihedral planes, on the other hand bisect the dihedral angles between members of the $\sigma_v$ set. Therefore molecules with an odd-order $C_n$, such as $\ce{BH3}$, won't have any $\sigma_d$ planes; only $n$ (3 for $\ce{BH3}$) $\sigma_v$'s. In contrast, a molecule such as benzene, which has a $C_6$ main axis, will have 3 $\sigma_v$'s (the ones containing the carbon atoms) and 3 $\sigma_d$'s, which pass between the carbon atoms. Of course it doesn't really matter which set is called dihedral and which vertical; it could be the other way round as well. Ref: Chemical Applications of Group Theory by Cotton. | {
"domain": "chemistry.stackexchange",
"id": 9643,
"tags": "symmetry, group-theory"
} |
How do mosquitoes maintain telomere length? | Question: While the vast majority of eukaryotic organisms maintain their chromosome ends (telomeres) via telomerase, an enzyme system that generates short, tandem repeats on the ends of chromosomes, other mechanisms such as the transposition of retrotransposons (e.g. Drosophila) or recombination (e.g. Yeast) can also be used in some species.
I would like to know your thoughts on the possible mechanisms that regulate telomere length in mosquitoes.
Answer: It seems that the current model is that recombination is the mechanism by which telomere length is maintained in the malaria vector Anopheles gambiae. Here is an excerpt of the Roth et al. paper that proposed the recombination mechanism:
The insertion of a transgenic pUChsneo plasmid at the left end of chromosome 2 provided a unique marker for measuring the dynamics of the 2L telomere over a period of about 3 years. The terminal length was relatively uniform in the 1993 population with the chromosomes ending within the white gene sequence of the inserted transgene. Cloned terminal chromosome fragments did not end in short repeat sequences that could have been synthesized by telomerase. By late 1995, the chromosome ends had become heterogeneous: some had further shortened while other chromosomes had been elongated by regenerating part of the integrated pUChsneo plasmid. A model is presented for extension of the 2L chromosome by recombination between homologous 2L chromosome ends by using the partial plasmid duplication generated during its original integration.
You might also be interested in this paper: Genes required to maintain telomeres in the absence of telomerase in Saccharomyces cerevisiae, and then look at whether there are mosquito homologs of the identified yeast genes that might be involved in telomere maintenance. | {
"domain": "biology.stackexchange",
"id": 1896,
"tags": "genetics, cell-biology, dna, senescence, telomere"
} |
What does Fraunhofer diffraction around a circular shadow look like? | Question: I want to combine the set-ups of the Arago spot (small disk in the light path, diffraction pattern imaged directly behind it) with the set-up of the Airy disk (small aperture in the light path, diffraction pattern imaged far away). That is, I have a fairly small solid sphere in the light path (radius $R = 30\lambda$), and I'm imaging far away ($d = 4000 R$). What should I expect to see on the screen? I'd like to start with the assumption that the light source is infinitely far away, although if it's easy to extend it to the case of light rays that aren't exactly parallel, that would be neat too.
(Background: I'm a mathematician by training but have ended up working with optical systems.)
Answer: This Numerical simulation by GONDRAN Alexandre is for $R=10\lambda$
The figure on p.5 here goes out to 30R.
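For orientation, the circular-aperture (Airy) pattern that serves as the comparison point can be computed with the standard library alone — a sketch, not from the original answer; $J_1$ is evaluated via its integral representation $J_1(x) = \frac{1}{\pi}\int_0^\pi \cos(t - x\sin t)\,dt$:

```python
import math

def bessel_j1(x, n=2000):
    """Bessel J1 via its integral representation, composite trapezoid rule."""
    h = math.pi / n
    # endpoint terms: cos(0 - x*sin 0) = 1 and cos(pi - x*sin pi) = -1
    s = 0.5 * (1.0 + math.cos(math.pi))
    for k in range(1, n):
        t = k * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

def airy_intensity(x):
    """Normalized Fraunhofer intensity of a circular aperture: [2 J1(x)/x]^2."""
    if abs(x) < 1e-9:
        return 1.0
    return (2.0 * bessel_j1(x) / x) ** 2

# locate the first dark ring by bisection on J1 between 3 and 4.5
lo, hi = 3.0, 4.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if bessel_j1(lo) * bessel_j1(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(0.5 * (lo + hi))  # ≈ 3.8317
```

The printed value is the well-known first dark ring of the Airy pattern at $x = ka\sin\theta \approx 3.8317$.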
At $4000R$ you are out of the Fresnel regime. (See the condition on the Fresnel number in Wikipedia). I can't find an image of Fraunhofer diffraction from a disk, but by Babinet's principle it looks (roughly) like the negative of the Airy disk from an aperture of the same radius. | {
"domain": "physics.stackexchange",
"id": 80039,
"tags": "optics, diffraction"
} |
If two particles are entangled and you collapse the wave function of one of the particles. Does the other particle collapse as well? | Question: Let's suppose you entangled two photons, you separate the photons, and then you measure the polarization of one the photons collapsing its wave function. The wave function of the other photon collapses also?
Answer: Each photon does not have its own wave function. They are entangled. By definition, there is only one wave function between them. One function describes both particles simultaneously. If you do something to one particle that alters the wave function, then that's it; the wave function is altered.
Here's an analogy: I have a bag with two apples in it. Then I pose this question. If I were to tie a knot in the top of the first apple's bag, would the second apple's bag remain unchanged? The answer is obvious: both apples are in the same bag so if you make changes to the bag of the first apple, the bag of the second can't remain unchanged.
It's the same with entangled particles. The wave function is like the bag; there's only one that describes both particles. | {
"domain": "physics.stackexchange",
"id": 38597,
"tags": "quantum-mechanics, wavefunction, quantum-entanglement"
} |
If stars are ionized, where are the electrons? | Question: As far as I know, universe is electrically neutral so,
If stars are ionized, where are the electrons?
Answer: The electrons are still inside the stars. A stellar plasma is electrically (almost) neutral, the electrons in a plasma are simply not bound to individual nuclei. If we could take some plasma out of a star and we would let it cool down to room temperature, most of the matter in that gas (at least from main sequence stars) would be ordinary neutral hydrogen and helium gas. | {
"domain": "physics.stackexchange",
"id": 16697,
"tags": "electrons, stars"
} |
depthimage_to_laserscan: Cannot call rectifyPoint when distortion is unknown | Question:
I'm trying to run the depthimage_to_laserscan with an MYNT EYE camera. I get this output (see the error in the end):
SUMMARY
========

PARAMETERS
 * /depthimage_to_laserscan/output_frame_id: base_link
 * /depthimage_to_laserscan/scan_height: 3
 * /rosdistro: melodic
 * /rosversion: 1.14.3

NODES
  /
    depthimage_to_laserscan (depthimage_to_laserscan/depthimage_to_laserscan)
    gmapping_node (gmapping/slam_gmapping)

ROS_MASTER_URI=http://localhost:11311

process[depthimage_to_laserscan-1]: started with pid [17275]
process[gmapping_node-2]: started with pid [17276]
[ERROR] [1584483469.757240526]: Could not convert depth image to laserscan: Cannot call rectifyPoint when distortion is unknown.
My platform: Ubuntu 18.04 (x64). I use ROS Melodic. The "depthimage_to_laserscan" package version is 1.0.8.
The depthimage_to_laserscan node is subscribed to 2 topics (I checked it with rqt_graph):
/mynteye/depth/image_raw
/mynteye/depth/camera_info
I used "rostopic echo" command to check the messages published to the "/mynteye/depth/camera_info" topic. This message is published continuously:
---
header:
  seq: 22519
  stamp:
    secs: 1584483432
    nsecs: 228966713
  frame_id: "mynteye_depth_frame"
height: 480
width: 752
distortion_model: "KANNALA_BRANDT"
D: [-0.031170006043024615, 0.02301973375803421, -0.0352997004132725, 0.016822330752111488, 0.0]
K: [366.9709661238975, 0.0, 370.60059198845664, 0.0, 366.9835673211366, 235.15553431559664, 0.0, 0.0, 1.0]
R: [0.9999976403807529, 0.0012593887617488196, -0.0017700770811053344, -0.0012566534577968802, 0.9999980160222749, 0.0015455657866886375, 0.0017720200374941424, -0.0015433377662572895, 0.9999972390229515]
P: [367.166403809061, 0.0, 338.11387634277344, 0.0, 0.0, 367.166403809061, 188.29586029052734, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
It looks like the distortion coefficients are not empty. What could be going wrong? Maybe the problem is in the unsupported distortion model "KANNALA_BRANDT"?
Thanks,
Eugene
Originally posted by Eugene on ROS Answers with karma: 3 on 2020-03-17
Post score: 0
Answer:
Maybe the problem is in the unsupported distortion model "KANNALA_BRANDT"?
That is a likely cause, yes.
As sensor_msgs/CameraInfo mentions, legal values for that field are defined in sensor_msgs/distortion_models.h. Looking at that file shows (from here):
const std::string PLUMB_BOB = "plumb_bob";
const std::string RATIONAL_POLYNOMIAL = "rational_polynomial";
const std::string EQUIDISTANT = "equidistant";
KANNALA_BRANDT is not in that list.
It looks like it's also called a fisheye distortion model, and for that model there are several PRs open. The one against ros/common_msgs would be ros/common_msgs#151.
You may be able to get an idea of the current state of support for this particular distortion model from the linked issues and PRs.
Originally posted by gvdhoorn with karma: 86574 on 2020-03-18
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 34603,
"tags": "ros, ros-melodic, depthimage-to-laserscan"
} |
Why publish a static tf every 100ms? | Question:
I have a multi-lidar, multi-camera system with several static tf transforms (no sensors will ever move), I note that the tf wiki page recommends publishing static transforms at 10Hz - I wondered what the rationale for this is? Why not publish, for example every 5 seconds? or even less often? How much of a performance cost is there here? Is the benefit just so that nodes which die and are reborn can quickly get the tf?
EDIT: I'm using tf, not tf2 - would the situation be different if I was using tf2?
Originally posted by LukeAI on ROS Answers with karma: 131 on 2020-03-13
Post score: 1
Original comments
Comment by gvdhoorn on 2020-03-13:
Please be precise: tf or tf2?
Comment by LukeAI on 2020-03-13:
have clarified
Answer:
Looking at the tf wiki page, section static_transform_publisher, we see this:
static_transform_publisher x y z yaw pitch roll frame_id child_frame_id period_in_ms
Publish a static coordinate transform to tf using an x/y/z offset in meters and yaw/pitch/roll in radians. (yaw is rotation about Z, pitch is rotation about Y, and roll is rotation about X). The period, in milliseconds, specifies how often to send a transform. 100ms (10hz) is a good value.
Assuming you are referring to this specific part of this specific page: in tf (so not tf2), there was actually no real concept of "static TFs", neither was there support in the infrastructure. A static transform there is just a transform which doesn't change any of its values during the runtime of the application.
This means that in order for each listener to not consider the "static frame" (note again: nothing special, just a regular frame with unchanging values) stale after some time, you'll have to make sure to republish it. That's what that last sentence is about.
With tf2, this changed, and static transform is actually supported now. From wiki/tf2: Adding static transform support:
The goal of static transforms was to remove the need for recommunicating things that don't change. The ability to update the values was implemented in case they are subject to uncertainty and might be re-estimated later with improved values. But importantly those updated values are expected to be true at all times.
Technically, tf2 broadcasts regular transforms over the regular topic (ie: /tf) with normal Publishers, while static transforms are published using a latched Publisher on a special topic (ie: /tf_static) and only once (or very infrequently).
So tf2 would seem to correspond to what you imply: "never" changing data should not need to be republished.
Edit:
EDIT: I'm using tf, not tf2 - would the situation be different if I was using tf2?
while you may be using functions, classes et al. from tf, you're actually already using tf2:
Migration: Since ROS Hydro, tf has been "deprecated" in favor of tf2. tf2 is an iteration on tf providing generally the same feature set more efficiently. As well as adding a few new features.
As tf2 is a major change the tf API has been maintained in its current form. Since tf2 has a superset of the tf features with a subset of the dependencies the tf implementation has been removed and replaced with calls to tf2 under the hood.
Originally posted by gvdhoorn with karma: 86574 on 2020-03-13
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by tfoote on 2020-03-13:
If you specifically call the tf static_transform_publisher executable it maintains the original behavior. You can simply switch to the tf2 static_transform_publisher it will leverage the static transform primitives and not have to rebroadcast.
Comment by cferone on 2020-09-25:
When updating the static transform publisher in your launch files to use tf2 rather than tf, you have to make two changes: 1) update the pkg from tf to tf2_ros 2) delete the period_in_ms argument at the very end of the command.
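For concreteness, those two changes look like this in a launch file — a minimal sketch; the frame names and offsets are placeholders:

```xml
<!-- tf version: pkg="tf", with the trailing period_in_ms argument (100) -->
<node pkg="tf" type="static_transform_publisher" name="laser_tf"
      args="0.1 0 0.2 0 0 0 base_link laser 100" />

<!-- tf2 version: pkg="tf2_ros", same arguments but WITHOUT the period;
     the transform is latched on /tf_static and sent only once -->
<node pkg="tf2_ros" type="static_transform_publisher" name="laser_tf"
      args="0.1 0 0.2 0 0 0 base_link laser" />
```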
See section 5 of this page for proper syntax for publishing static transforms. | {
"domain": "robotics.stackexchange",
"id": 34586,
"tags": "ros-kinetic, transform"
} |
catkin_make install and dynamic_reconfigure issue | Question:
Hello,
I am getting this weird behavior under ROS Indigo:
I am using dynamic_reconfigure: e.g. cfg/Node.cfg
In CMakelists I am calling generate_dynamic_reconfigure_options()
Files are generated nicely, e.g. NodeConfig.h
catkin_make works
Node works in general
However when I do: catkin_make install I get an error that the file NodeConfigConfig.h is missing. So for some reason it adds another Config string to the generated header and naturally it cannot find it. Weird.
Any ideas what could be happening? I'm not getting this in other packages. And I don't know how to debug this...
Originally posted by tanasis on ROS Answers with karma: 97 on 2018-01-10
Post score: 0
Answer:
See if #q203700 helps.
Originally posted by gvdhoorn with karma: 86574 on 2018-01-10
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by tanasis on 2018-01-10:
Yes this was it. Name file and .cfg parameter name missmatch. Nasty error! Thanks! | {
"domain": "robotics.stackexchange",
"id": 29713,
"tags": "catkin-make"
} |
Parallel transport of a vector | Question: There is a parameterized curve $\gamma(\tau)$ on a $4$-dim manifold. The self-parallel vector $X^{\alpha}(\tau)$ to the curve is to be found. By definition of auto parallel vectors, the covariant derivative of a vector along the curve must be zero.
In a textbook, it is given as follow:
$$\frac{\partial X^{\alpha}}{\partial\tau}+\Gamma^{\alpha}_{\beta\sigma}X^{\beta}\frac{d\gamma^{\sigma}}{d\tau}=0$$
I am confused about why the term $\frac{d\gamma^{\sigma}}{d\tau}$ is added? There is nothing similar to that in the definition of the covariant derivative.
Answer: As you note, a vector is parallel-transported when its covariant derivative along a curve is zero. To put it another way, if $v^\alpha = d \gamma^\alpha/d\tau$ is the tangent to the curve, then the parallel transport equation is
$$
v^\sigma \nabla_\sigma X^\alpha = v^\sigma \partial_\sigma X^\alpha + v^\sigma \Gamma^\alpha {}_{\beta \sigma} X^\beta = 0.
$$
(In terms of conventional vector calculus, this is like saying that $(\vec{v} \cdot \vec{\nabla}) \vec{X} = 0$.)
But $$
v^\sigma \partial_\sigma = \frac{d \gamma^\sigma}{d \tau} \frac{\partial}{\partial \gamma^\sigma} = \frac{d}{d\tau}
$$
and so the above equation becomes
$$
v^\sigma \nabla_\sigma X^\alpha = \frac{d X^\alpha}{d \tau} + \frac{d \gamma^\sigma}{d \tau}\Gamma^\alpha {}_{\beta \sigma} X^\beta = 0,
$$
as desired. | {
"domain": "physics.stackexchange",
"id": 64029,
"tags": "general-relativity, differential-geometry, tensor-calculus, vectors"
} |
Are all collisions involving photons and electrons elastic? | Question: In my textbook it asks me to calculate the energy gained by an electron that scatters an incoming x ray through a given angle.
Using the Compton scattering equation you can work out the change in the x ray's energy. The difference is given to the electron as its gain in KE.
My question is : is it possible for the electron to gain less than the photons change in energy?
Answer: Total energy is always conserved, even in inelastic collisions. Some energy might 'disappear' because the internal energy of one or both particles changes and this is not obvious from observing the motion of the particles. Such collisions are called 'inelastic.' For example, in a collision between molecules the internal vibrations or the rotation of one or both molecules might change. Or in a collision between atoms there might be a temporary change in the energy levels which electrons occupy within each atom.
Electrons do not have any internal structure so they cannot store energy internally. They have no energy levels as atoms do, and they cannot rotate to store rotational KE.
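To make the textbook calculation concrete, here is a small worked example — a sketch with illustrative numbers, not taken from the question — using the standard Compton formula for the scattered photon energy:

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def scattered_photon_energy(e_kev, theta_rad):
    """Compton: E' = E / (1 + (E / m_e c^2) * (1 - cos theta))."""
    return e_kev / (1.0 + (e_kev / M_E_C2_KEV) * (1.0 - math.cos(theta_rad)))

e_in = 100.0                                        # incoming x-ray, keV
e_out = scattered_photon_energy(e_in, math.pi / 2)  # 90-degree scatter
electron_ke = e_in - e_out  # all of the photon's loss goes to the electron
print(round(e_out, 2), round(electron_ke, 2))  # ≈ 83.63 16.37
```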
The total energy lost by the photon must equal that gained by the electron, and vice versa if a moving electron is slowed by a collision with a photon. | {
"domain": "physics.stackexchange",
"id": 56635,
"tags": "photons, electrons, atomic-physics, scattering"
} |
(Newbie) Decision Tree Randomness | Question: I'm starting out in Data Science and, to get something going, I just ran the code from Siraj Raval's Intro to Data Science video. He implements a simple Decision Tree Classifier, but I couldn't help noticing that, given the same training set, the classifier doesn't always yield the same prediction (nor, apparently, the same fit), which I find terribly weird, since, from what I've learned, a Decision Tree is supposed to be deterministic.
The only thing I can think of that could be causing the randomness would be that the branches are being chosen at random at some point because 2 options might be identically valued. I would say this could be corrected with a little bit more training data, but even if I add 5 more people, nothing changes. Does anybody have an explanation for what's going on?
Following is the code (in Python) from the video in a for loop to count how many predictions for male and female the Decision Tree has yielded.
from sklearn import tree
from sklearn.svm import SVC

n_male_pred_tree = 0
n_female_pred_tree = 0
n_male_pred_svm = 0
n_female_pred_svm = 0

for i in range(1, 1000):
    # This loop tests the consistency of the CLF
    # The Decision Tree is not very consistent (It's 50-50)
    X = [[181, 80, 44], [177, 70, 43], [160, 60, 38], [154, 54, 37],
         [166, 65, 40], [190, 90, 47], [175, 64, 39], [177, 70, 40],
         [159, 55, 37], [171, 75, 42], [181, 85, 43]]
    Y = ['male', 'female', 'female', 'female',
         'male', 'male', 'male', 'female',
         'male', 'female', 'male']

    tree_clf = tree.DecisionTreeClassifier()
    svm_clf = SVC()

    tree_clf.fit(X, Y)
    svm_clf.fit(X, Y)

    tree_prediction = tree_clf.predict([[190, 70, 43]])
    svm_prediction = svm_clf.predict([[190, 70, 43]])

    if tree_prediction == 'male':
        n_male_pred_tree += 1
    else:
        n_female_pred_tree += 1

    if svm_prediction == 'male':
        n_male_pred_svm += 1
    else:
        n_female_pred_svm += 1

print(f"MALE pred Tree: {n_male_pred_tree}")
print(f"FEMALE pred for Tree: {n_female_pred_tree}")
print("\n")
print(f"MALE pred for SVM: {n_male_pred_svm}")
print(f"FEMALE pred for SVM: {n_female_pred_svm}")
Answer: From sklearn:
The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data and max_features=n_features, if the improvement of the criterion is identical for several splits enumerated during the search of the best split. To obtain a deterministic behaviour during fitting, random_state has to be fixed.
If you manually set the random_state variable when you create your tree object, you'll find that it does become deterministic.
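To see the tie-breaking mechanism in isolation, here is a pure-Python toy — a sketch that mimics, but does not reproduce, sklearn's internals:

```python
import random

def choose_split(gains, seed=None):
    """Pick the best-gain split; ties are broken by a (seeded) random
    permutation, mimicking sklearn's feature permutation at each node."""
    rng = random.Random(seed)
    order = list(range(len(gains)))
    rng.shuffle(order)                         # random feature order
    return max(order, key=lambda i: gains[i])  # first max in shuffled order wins

gains = [0.3, 0.7, 0.7, 0.1]  # splits 1 and 2 have identical information gain

# Unseeded, the winner can differ from run to run; with a fixed seed the
# same tied split is chosen every time.
picks = {choose_split(gains, seed=42) for _ in range(100)}
print(picks)  # a single element, e.g. {1} or {2}
```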
In simpler terms, the data you are feeding it is a little small, and there are several splits that have the same information gain, so the split that is chosen is subject to random factors. | {
"domain": "datascience.stackexchange",
"id": 3591,
"tags": "python, decision-trees"
} |
vmware subscriber multiplemachines problem | Question:
Hello,
I try to connect my Ubuntu system [a] to a virtual Ubuntu machine (VMware)[b] by network.
It seems to work. I can ping and ssh from one to the other machine.
Furthermore when I start a roscore on [a] I can run for example "rosrun rospy_tutorial listener.py" on machine [b] and it finds the roscore. If I start the talker on [a] I can see on [b] the topic "/chatter" and the node "/talker_3586_1356978819462".
But if I run the listener now on [b] (still with the roscore of [a]) it doesn't hear anything. I have no clue where the problem is. Maybe someone can help me with that?
Some things that I have already tried:
Start a talker and listener only on machine [a] works.
Start a talker and listener only on machine [b] (with core of [a]) works.
Previously I had the Ubuntu system [a] also on a virtual machine, and this example worked with both virtual machines [a] and [b] running in Windows.
Best Jonathan
Originally posted by hej on ROS Answers with karma: 1 on 2012-12-31
Post score: 0
Answer:
ROS relies on bidirectional communications. Make sure you work all the way through the Network Setup Guide. This sounds like you have hostnames which cannot be fully looked up on both machines.
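In practice that means making sure each machine advertises an address the other can actually reach — a sketch with placeholder IPs (the addresses are assumptions; adapt them to your network):

```sh
# On machine [a] (runs the roscore); 192.168.1.10 is a placeholder IP
export ROS_IP=192.168.1.10
export ROS_MASTER_URI=http://192.168.1.10:11311

# On machine [b] (the VM); 192.168.1.11 is a placeholder IP
export ROS_IP=192.168.1.11
export ROS_MASTER_URI=http://192.168.1.10:11311

# Alternatively set ROS_HOSTNAME on both machines, but only if each
# hostname resolves correctly from the OTHER machine as well.
```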
Originally posted by tfoote with karma: 58457 on 2013-01-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 12239,
"tags": "ros, subscribe, multiplemachines"
} |
rgbdslam map storing | Question:
I get the output from Kinect using the rgbdslam package, but I don't know how to store the obtained map or register the point clouds. I want to store the obtained map - not the continuous Kinect output, but sets of data. How can I do it?
Originally posted by shubh991 on ROS Answers with karma: 105 on 2012-06-07
Post score: 1
Answer:
Use octomap_server to build 3D maps. Details are on the rgbdslam and octomap_server wiki pages. Did you read rgbdslam - Usage?
Originally posted by AHornung with karma: 5904 on 2012-06-07
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 9714,
"tags": "ros"
} |
Incorporating Radial Distortion in Imaging Model | Question: I'm currently developing a multi-view stereo system but I'm confused as to how all the standard equations in Structure from Motion are affected in the presence of radial and decentering distortion. For example, the standard imaging model is that of
a) World to Camera Coordinates --> translate to camera center and rotate to match orientation of camera (R,t)
b) Perspective Projection --> x = fX/Z and y = fY/Z
c) Conversion to Pixel Coordinates --> scale by no. of pixels/m and add principal point
(b and c can be combined into matrix K)
As a result, we can build a projection matrix P = K[R|t]. Thus a point X in world coordinates can be represented in pixel coordinates as u = PX.
We can also create a fundamental matrix F between two images such that a pair of corresponding point (f on image I1 and f' on image I2) satisfy the epipolar constraint;
x'Fx = 0.
The matrix F depends upon the matrix P. But, my question is that if we incorporate radial and decentering distortion, the matrix P can no longer represent the imaging model. How do we get an epipolar "curve" representation ?
Moreover, for things like triangulation back to 3D coordinates from 2D points, how do we do it?
Answer: If you know the radial distortion parameters (e.g. by calibrating the cameras), then you should simply compensate for the distortion before you do structure from motion. You can either undistort the images before you do anything else, or you can undistort the coordinates of the points that you are trying to match. | {
"domain": "dsp.stackexchange",
"id": 523,
"tags": "image-processing, computer-vision, stereo-vision"
} |
Gauge invariance and canonical versus kinetic momenta for a charged particle in an EM field | Question: Hi all, I am struggling to grasp the notion of gauge invariance when talking about an object like the canonical momenta $\frac{\partial L}{\partial \dot{q}_i}$ or kinetic momenta $m\dot{q}_i$.
I am very comfortable with gauge theory in the field theory context, starting with a Lagrangian, requiring its invariance under a local symmetry, partial $\rightarrow$ covariant derivatives and the corresponding transformation of the gauge field connections,etc. But showing an object like a momentum is gauge invariant is new for me.
I am looking into the difference between the canonical and kinetic momenta in the case of a charged particle in an EM field, described by the standard Lagrangian
\begin{equation}
L = \frac{1}{2} m\dot{r}^2 - q \phi + q \dot{r}\cdot A
\end{equation}
The canonical momenta are $\vec{p}_c=m\dot{\vec{r}} +q\vec{A}$ and the kinetic are just $\vec{p}_k=m\dot{\vec{r}}$.
I am trying to figure out how to explicit show that $\vec{p}_c$ are not gauge-invariant (presumably under the U(1) symmetry of EM?) whereas $\vec{p}_k$ are. I know this to be the case by ACuriousMind's answer here, Emilio Pisanty's answer here, and the following section of Wikipedia's minimal coupling article.
Any tips are appreciated! :)
Answer: The canonical momentum is always (in position "$q$" basis) given by $-i\hbar\partial_q$ so the mapping of the commutator to the Poisson bracket $$[q,p]=i\hbar \leftrightarrow \{q,p\}=1$$ stays true. The nice covariant object however involves the covaraint derivative $\nabla_q$ as
$$
-i\hbar\nabla_q =-i\hbar\left(\partial_q-\frac{i}{\hbar}qA\right)= p-qA
$$
which in your example represents the gauge invariant $m\dot q$. On its own $\partial_q$ does not map nicely under gauge transformations
$|x\rangle \to e^{i\Lambda(x)} |x\rangle $, which translate to $\psi(x)=\langle x|\psi\rangle \to e^{-i\Lambda(x)} \psi(x)$. | {
"domain": "physics.stackexchange",
"id": 70168,
"tags": "electromagnetism, classical-mechanics, lagrangian-formalism, momentum, gauge-invariance"
} |
Publishing to a topic via subscriber callback function | Question:
Hello,
I am wondering if anybody could help me with this problem. I have a listener that publishes to one topic whenever data is published on another topic it is subscribed to. I have tried putting a global variable in the code whose change would carry over into the main function, but ROS is not allowing global variables. I also tried to define the nodehandle as global, but ROS is not allowing this either. (I think before rosinit no ros components will be recognized.) My code is in C++. Any help as to what could be done would be really appreciated. I am including a basic skeleton of the code I am trying:
void func_cb(std_msgs type param as defined in ros chatter example)
{
    // *need some code here to publish to a topic*
    // *problem: nodehandle out of scope, cannot define new nodehandle*
}

int main(...)
{
    // ros initialization
    // ros node handle initialization
    // ros subscription initialization
    // callback spinner
    return 0;
}
Originally posted by joy on ROS Answers with karma: 85 on 2013-04-01
Post score: 6
Answer:
You could define a class that handles everything and avoid using ugly global variables:
#include <ros/ros.h>

class SubscribeAndPublish
{
public:
  SubscribeAndPublish()
  {
    // Topic you want to publish
    pub_ = n_.advertise<PUBLISHED_MESSAGE_TYPE>("/published_topic", 1);

    // Topic you want to subscribe
    sub_ = n_.subscribe("/subscribed_topic", 1, &SubscribeAndPublish::callback, this);
  }

  void callback(const SUBSCRIBED_MESSAGE_TYPE& input)
  {
    PUBLISHED_MESSAGE_TYPE output;
    // .... do something with the input and generate the output...
    pub_.publish(output);
  }

private:
  ros::NodeHandle n_;
  ros::Publisher pub_;
  ros::Subscriber sub_;
};  // End of class SubscribeAndPublish

int main(int argc, char **argv)
{
  // Initiate ROS
  ros::init(argc, argv, "subscribe_and_publish");

  // Create an object of class SubscribeAndPublish that will take care of everything
  SubscribeAndPublish SAPObject;

  ros::spin();

  return 0;
}
Originally posted by Martin Peris with karma: 5625 on 2013-04-01
This answer was ACCEPTED on the original site
Post score: 46
Original comments
Comment by joy on 2013-04-02:
Thanx Martin...One question though: If I create multiple objects how will ROS handle multiple instances of class publishing in the same topic especially if it is some topic like geomtry_msgs::Twist type? Is it a sequential/interleaved type execution?
Comment by Martin Peris on 2013-04-02:
That is a good question. If I am not mistaken, when you create multiple instances of SubscribeAndPublish object, lets say n, then for each message received on the topic "/subscribed_topic" you will generate n messages on the topic "/published_topic". No guarantees about the order though.
Comment by joy on 2013-04-05:
So the message is tied with the object id. Thanx so much for clearing that up. :)
Comment by drewm1980 on 2015-10-20:
Yeah, I was also going to warn that the published messages won't actually be sent until after the callback returns. I landed here looking for a workaround.
Comment by OMC on 2016-06-29:
I just wanted to warn you that I think there is a semicolon missing at the end of the class declaration. Thank you for your suggestion!
Comment by Martin Peris on 2016-07-06:
Thanks @Billie, fixed.
Comment by Milliau on 2016-12-06:
I used this example for publishing velocity from my joystick to my motor. But there is a delay. Maybe I can solve the problem by changing the publish rate. How can I change the publish rate in this example?
Comment by Martin Peris on 2016-12-11:
You can substitute the call to ros::spin() with:
while (true)
{
    ros::Rate(YOUR_DESIRED_RATE).sleep();
    ros::spinOnce();
}
Comment by OMC on 2017-02-24:
Would it make sense to make n_, pub_, and sub_ static in case we expect several objects to be created?
I have tried but I am unable to initialize them inside the constructor in the .cpp (the class definition is in a header).
Comment by lhnguyen on 2017-06-12:
Hi, Milliau,
I'm working with an AUV. I need to transfer angular velocity (received from a joystick mounted on the Ground Station Computer) via RJ45 cable to control motors (connected to a Pixhawk, which is connected to an Odroid companion computer). I think it's almost the same as you said. Could you share your code?
Comment by Blupon on 2017-08-07:
no actual need to set Nodehandle, Publisher and Subscriber as private variables, isn't it ? and what's the (last) "this" argument for n_.subscribe ? Thanks for this nice example code !
Comment by Martin Peris on 2017-08-13:
Hi @Blupon, absolutely no need to make NodeHandle, Publisher and Subscriber private variables if you don't mind other objects having access to them. The last argument in n_.sibscribe is a pointer to the object instance that implements the SubscribeAndPublish::callback
Comment by aks on 2018-04-20:
In the above example, what exactly is it publishing and subscribing to? Or is it just a sample?
Let's say in the callback function I do output.data = input.data + 5, then publish the output using pub_.publish(output); and then use ROS_INFO to get the info on the console. Shouldn't it work?
Comment by Martin Peris on 2018-04-20:
It is a pretty much complete example, you only need to change PUBLISHED_MESSAGE_TYPE and SUBSCRIBED_MESSAGE_TYPE to the types that you need. In your case, if output.data and input.data are the same type (for example int), then your idea should work
Comment by aks on 2018-04-20:
It isn't working and I cannot understand why. Could it be because the two topics are different?
Comment by dj95 on 2020-03-06:
Is there a particular nomenclature for having trailing underscores in variable name definition? (pub_,sub_,n_)
Comment by Martin Peris on 2020-03-10:
I am not aware of a particular nomenclature, I was following Google's C++ Style Guide: https://google.github.io/styleguide/cppguide.html#Variable_Names "The names of variables (including function parameters) and data members are all lowercase, with underscores between words. Data members of classes (but not structs) additionally have trailing underscores. For instance: a_local_variable, a_struct_data_member, a_class_data_member_."
Comment by dj95 on 2020-03-11:
Thanks Martin!
Comment by Thomasraynal on 2020-03-18:
4 hours of search to finally find you, i love you dude you are a life saver <3
Comment by Martin Peris on 2020-03-19:
Glad you found it useful :) | {
"domain": "robotics.stackexchange",
"id": 13632,
"tags": "ros"
} |
Scale invariance in curved spacetime? | Question: Question
What does it mean for the metric to be scale invariant in curved spacetime (in the sense in which I say a property is scale invariant in thermodynamics)? I'm confused about how to define this. Is it by means of a Weyl scaling or a conformal transformation where the scaling factor is a constant? Or would the correct way be via coordinate transformations? Is there some nice mathematical condition such a metric would satisfy?
Motivation
Consider the stress energy tensor for a perfect fluid:
$$T^{\mu \nu} = \left(\rho + \frac{p}{c^2} \right) U^{\mu} U^\nu + p g^{\mu \nu}, $$
Now keeping our notation ambiguous:
$$g^{\mu \nu} \to \lambda^2 g^{\mu \nu}$$
But $$g^{\mu \nu} g_{\mu \nu} = 4$$
Thus
$$ g_{\mu \nu} \to \frac{1}{\lambda^2}g_{\mu \nu} $$
We also know:
$$ g_{\mu \nu} U^{\mu} U^\nu = c^2 $$
Thus,
$$ U^{\mu} U^\nu \to \lambda^2 U^{\mu} U^\nu $$
Thus we have effectively done the following:
$$ T^{\mu \nu} \to \lambda^2 T^{\mu \nu} $$
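The chain of scalings above can be checked numerically. The following is a sketch (assumptions: numpy, a Minkowski metric with signature $(+,-,-,-)$, and illustrative values for $\lambda$, $\rho$, $p$ and the velocity):

```python
import numpy as np

lam, c = 3.0, 1.0
g_lower = np.diag([1.0, -1.0, -1.0, -1.0])   # g_{mu nu}, signature (+,-,-,-)
g_upper = np.linalg.inv(g_lower)             # g^{mu nu}

# a normalized 4-velocity: g_{mu nu} U^mu U^nu = c^2
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2 / c**2)
U = gamma * np.array([c, v, 0.0, 0.0])
assert np.isclose(np.einsum('ab,a,b->', g_lower, U, U), c**2)

rho, p = 2.0, 0.3
T = (rho + p / c**2) * np.outer(U, U) + p * g_upper

# the scaling chain: g^{mu nu} -> lam^2 g^{mu nu}, hence
# g_{mu nu} -> g_{mu nu} / lam^2 (so that g^{mu nu} g_{mu nu} = 4 holds),
# and U^mu -> lam U^mu (so that g_{mu nu} U^mu U^nu = c^2 holds)
g_upper_s = lam**2 * g_upper
g_lower_s = g_lower / lam**2
U_s = lam * U

assert np.isclose(np.einsum('ab,ab->', g_lower_s, g_upper_s), 4.0)
assert np.isclose(np.einsum('ab,a,b->', g_lower_s, U_s, U_s), c**2)

# the stress tensor then picks up an overall factor lam^2
T_s = (rho + p / c**2) * np.outer(U_s, U_s) + p * g_upper_s
assert np.allclose(T_s, lam**2 * T)
```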
Answer: You can't say that your metric is scale "invariant". What you perhaps meant is to say that the theory is scale invariant and the metric is scale covariant.
Assuming you mean the latter, scale invariance of a theory simply implies that if I scale the metric, velocities, stress tensor, etc. as you have shown, then the physics of the system remains unchanged.
A Weyl transformation is
$$
g_{ab}(x) \to \Omega^2(x) g_{ab}(x)
$$
Here, $\Omega(x)$ is an arbitrary positive function. When $\Omega(x)$ is a constant, we call this a scale transformation.
A conformal transformation is a diffeomorphism $x^a \to x'^a(x)$ such that the metric transforms as
$$
g_{ab}(x) \to g'_{ab}(x) = \Omega(x)^2 g_{ab}(x)
$$
Here, the Weyl factor $\Omega(x)$ is NOT an arbitrary function. Rather it is related to the diffeomorphism $x'^a(x)$ [which is also not arbitrary]. | {
"domain": "physics.stackexchange",
"id": 86140,
"tags": "thermodynamics, general-relativity, dimensional-analysis, stress-energy-momentum-tensor, scale-invariance"
} |
Why are these two QFT circuits equivalent? | Question: I am new to quantum computing and have been trying to understand the Quantum Fourier Transform (QFT). Through my research using both the Qiskit textbook and other sources, I see differences in how the circuit is actually implemented. My question is: Why are the two circuits below equivalent? Does the order of the wires in each circuit matter? To me they look significantly different as even the operations are being done on different qubits.
For example;
From the Qiskit Textbook:
From other sources:
Answer: Qiskit uses little endian convention, meaning that in the state $|ABC\rangle$, $q_0$ corresponds to $|C\rangle$, $q_1$ to $|B\rangle$, and $q_2$ to $|A\rangle$. In literature, it is common to see big endian convention which is the other way around.
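A quick numerical sketch of the two conventions (assumption: plain numpy rather than Qiskit itself, applying $X$ to $q_0$ on three qubits):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

n, N = 3, 8
# little-endian (Qiskit): q0 is the least-significant bit, so a gate
# on q0 sits rightmost in the Kronecker product
X_on_q0_little = np.kron(np.kron(I, I), X)
# big-endian: q0 is the most-significant bit, so the same gate sits leftmost
X_on_q0_big = np.kron(X, np.kron(I, I))

# reversing the wire order is the bit-reversal permutation on basis states
P = np.zeros((N, N))
for i in range(N):
    rev = int(format(i, f'0{n}b')[::-1], 2)
    P[rev, i] = 1.0

# conjugating by the wire reversal maps one convention onto the other
assert np.allclose(X_on_q0_big, P @ X_on_q0_little @ P)
```

Since bit reversal is an involution, `P` is its own inverse, which is why a single conjugation suffices.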
It can also be because of qubit ordering on the circuit. Qiskit uses the top qubit as $q_0$. But you can find literature in which the bottom qubit is $q_0$.
If you reverse the wires, they are the same circuit. | {
"domain": "quantumcomputing.stackexchange",
"id": 2880,
"tags": "qiskit, quantum-fourier-transform, quantum-circuit"
} |
How to write Schrodinger equation! | Question: Quantum mechanics: suppose a particle has orbital angular momentum |L|, but the particle also has spin |S|. The question is: how do I reflect this in the Schrödinger equation? I would like to know what the Schrödinger equation becomes in each case: when the particle has a particular orbital angular momentum, when it has some spin, and especially when both occur.
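For a spin-1/2 particle in an electromagnetic field with scalar potential $\phi$ and vector potential $\mathbf{A}$, the standard form of the Pauli equation (acting on a two-component spinor $\psi$) is

$$ i\hbar \frac{\partial \psi}{\partial t} = \left[ \frac{1}{2m} \bigl( \boldsymbol{\sigma} \cdot (\mathbf{p} - q\mathbf{A}) \bigr)^2 + q\phi \right] \psi $$

For $\mathbf{A} = 0$, the identity $(\boldsymbol{\sigma} \cdot \mathbf{p})^2 = p^2$ reduces this to the ordinary Schrödinger equation acting on each spinor component.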
Answer: The Schroedinger equation does not describe spin. If you need to describe spin as well, you should use the Pauli equation or the Dirac equation (for spin 1/2). | {
"domain": "physics.stackexchange",
"id": 4950,
"tags": "quantum-mechanics, angular-momentum, quantum-spin, schroedinger-equation"
} |
Work out if a string is "fieldname" or "tablename.fieldname", and assign variables | Question: Is there a neater way to write this code?
var table, field, dbFieldEntries;
var s = 'one.two';
// The DB field is always the first parameter
dbFieldEntries = s.split( '.' );
if( dbFieldEntries.length === 1 ){
field = dbFieldEntries[ 0 ];
} else {
table = dbFieldEntries[ 0 ];
field = dbFieldEntries[ 1 ];
}
Basically, s can be either something or something.other.
If it's something, then table is undefined and field is something; if it's something.other, then table is something and field is other.
Answer: Assuming that field always gets the value of the last part of dbFieldEntries you could do:
field = dbFieldEntries[ dbFieldEntries.length - 1 ];
if ( dbFieldEntries.length > 1 ) {
table = dbFieldEntries[ 0 ];
}
Besides this, s is a bad variable name. | {
"domain": "codereview.stackexchange",
"id": 11959,
"tags": "javascript, strings"
} |
How to localize and navigate in a 3D map? | Question:
I have a Velodyne 3D lidar. I can build a 3D map with RTAB-Map or Cartographer. But after that, how can I use only the 3D map and the 3D lidar to do autonomous navigation with obstacle avoidance?
Originally posted by ShyamGanatra on ROS Answers with karma: 23 on 2020-03-05
Post score: 0
Answer:
If you can build and store the 3D map, both of those packages contain the ability to relocalize in a given map (please see relevant information in their packages, I won't describe here).
If you have a map, the ability to localize in it, and a position out, that "solves" the state estimation problem, for the first order case. Now how do you do autonomous navigation? Nothing about your setup changes the usual paradigm. You have some sensors whose data you can buffer into a costmap using costmap_2d, for example, and avoid obstacles that way. You use those costmaps to plan and compute control efforts. The navigation stack will do this for you for the canonical planar-environment case.
Tl;dr: Nothing in the navigation stack requires 2D laser scanners or 2D positioning or mapping. If you supply your own positioning system, everything will work fine out of the box.
Originally posted by stevemacenski with karma: 8272 on 2020-03-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 34547,
"tags": "ros-kinetic"
} |
Deterministic Parity Automata require unbounded index | Question: Deterministic parity automata $(Q, \Sigma, q_0, \Delta, c)$ are powerful enough to recognize all $\omega$-regular languages. However, the number of priorities they require for a language can become arbitrarily high. For example, the languages $L_{i,j} = \{ \alpha \in \{i, \dots, j\}^\omega \mid \text{The highest number occurring infinitely often in } \alpha \text{ is even} \}$ require at least $j-i+1$ priorities. I was not able to prove this theorem so far by myself, in particular I am stuck at the case that the automaton uses exactly the priorities from $i+1$ to $j$. I could not lead this assumption to a contradiction.
Any help or link to a paper would be appreciated.
Answer: Take a look at the following paper:
Relating hierarchies of word and tree automata.
They cite the result you are interested in.
As a more concrete answer, consider the following argument: let's look at the language $M_n=\{w\in \{0,...,n\}^\omega: \limsup w_i \text{ is even}\}$ (the same as your language with $i=0$), and assume it has a parity automaton $A$ with ranks $1,...,n+1$ (note that proving it doesn't have a parity automaton with ranks $1,...,n$ is not enough, as it's still possible for it to have ranks $0,...,n-1$).
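To make the language concrete, membership of an ultimately periodic word in $M_n$ can be checked with a small sketch (the function name is my own): only the letters of the repeated cycle occur infinitely often, so the limsup letter is the maximum of the cycle.

```python
def in_language(prefix, cycle):
    """Membership of the ultimately periodic word prefix . cycle^omega in
    M_n = { w : the highest letter occurring infinitely often is even }.
    Only letters of the repeated cycle occur infinitely often."""
    return max(cycle) % 2 == 0

# 0^omega is accepted (limsup letter 0, even)
assert in_language([], [0])
# (0^k 1)^omega is rejected (limsup letter 1, odd), for any k
k = 4
assert not in_language([], [0] * k + [1])
# ((0^k 1)^k 2)^omega is accepted again (limsup letter 2, even)
assert in_language([], ([0] * k + [1]) * k + [2])
```

These three words are exactly the alternating family used in the argument below.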
We will construct a word that is accepted by $A$, but is not in the language (or vice-versa). Denote by $k$ the number of states in $A$. Consider the run of $A$ on the word $0^\omega$. It's accepting, so the limsup degree is even, and so at least $2$. Moreover, it must occur during the first $k$ steps, otherwise, since $A$ is deterministic, we have a non-accepting cycle. In addition, this is true for every $0^k$ infix of any word.
Now, consider the word $(0^k1)^\omega$. This word is not accepted, so the limsup degree is odd, but must be at least 3, since the $0^k$ infixes induce the degree $2$. Finally, this odd degree must occur within the first $k$ cycles of $0^k1$, for similar reasons as above.
We can repeat these alternating examples, constructing a word of the form $(0^k1)^k2$, and so on, with each word requiring a higher degree. However, since we only have degrees $1,...,n+1$, then eventually we will have a word that uses the letters $0,...,n-1$, whose limsup degree is $n+1$. From there, continuing this construction will still leave the limsup degree $n+1$, but will change the word from accepted to non-accepted, or vice-versa. In either case, the automaton fails to recognize the language. | {
"domain": "cstheory.stackexchange",
"id": 3392,
"tags": "fl.formal-languages, automata-theory, regular-language"
} |
Does the elasticity of a collision depend on the object's mass? | Question: This refers to my other question Why is the ideal gas law only valid for hydrogen?. In the update, my teacher said that hydrogen is closer to an ideal gas because its mass is lower: $m_{\rm H} \thickapprox \displaystyle\frac {1}4 m_{\rm He}$. Since the mass of the object is not included in the ideal Gas law which is $PV=nRT$, I concluded that they probably mean that the mass is relevant to the elasticity of a collision (which is one of the properties of an ideal gas).
Is that true? I see that the mass of the objects seems to play a role as it is included in the formula for elastic collisions:
$$u=\frac {m_1 v_1 + m_2 v_2}{m_1 + m_2}$$
But this equation does not describe how elastic a collision is, it only describes the behaviour of two objects after a perfectly elastic collision.
As far as I know, the elasticity of a collision is mainly determined by the ability of the objects to deform non-permanently (e.g. billiard balls or bouncy balls are close to being elastic on the macroscopic scale).
In my example, I compare the elasticity of the collision between two hydrogen or rather helium atoms. Does the higher mass of helium (and thus the higher volume of its nucleus) affect the elasticity of the collision?
As explained in the other question, I would be grateful if you could provide some sources for your answers in the case they cannot be simply deduced by known formulas or facts.
Answer:
In the update, my teacher said that hydrogen is closer to an ideal gas
because its mass is lower: $m_{\rm H} \approx \frac{1}{4} m_{\rm He}$
As pointed out by @nasu the mass of the hydrogen gas molecule is 1/2 of helium, not 1/4. You are comparing the masses of the atoms.
Although the mass of the hydrogen gas molecule is less than that of helium, the radius of the hydrogen atom, 53 pm, is greater than that of the helium atom, 31 pm. So the size of the helium gas atom is less than that of the hydrogen diatomic molecule. A gas behaves more ideally the smaller its size relative to the separation between atoms/molecules, all other things being equal.
In addition, all other things equal, there will be fewer collisions between the helium atoms than between the hydrogen molecules. In this regard, helium has a smaller "kinetic diameter" (260pm) than hydrogen (289pm). Per Wikipedia the "kinetic diameter is a measure applied to atoms and molecules that expresses the likelihood that a molecule in a gas will collide with another molecule. It is an indication of the size of the molecule as a target."
Since the mass of the object is not included in the ideal Gas law
which is $PV=nRT$, I concluded that they probably mean that the mass
is relevant to the elasticity of a collision (which is one of the
properties of an ideal gas).
The mass is included in the equation since the number of moles $n$ of the gas is the mass of the gas divided by its molecular weight. The ideal gas law can also be written in terms of mass $m$ as:
$$PV=mR_{g}T$$
Where in this case $R_g$ is the specific gas constant (specific to the gas under consideration). $R$ in the formula $PV=nRT$ is the universal gas constant.
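As a sketch of this equivalence (assumptions: illustrative values, and hydrogen's standard molar mass), $PV = nRT$ and $PV = mR_gT$ give the same pressure when $R_g = R/M$:

```python
# consistency check of PV = nRT vs PV = m R_g T with R_g = R / M
R = 8.314          # J/(mol K), universal gas constant
M_H2 = 2.016e-3    # kg/mol, molar mass of H2 (assumed standard value)

m = 1.5            # kg of gas
T, V = 300.0, 0.1  # K, m^3

n = m / M_H2       # number of moles
R_g = R / M_H2     # specific gas constant for H2

P_from_moles = n * R * T / V
P_from_mass = m * R_g * T / V
assert abs(P_from_moles - P_from_mass) < 1e-6 * P_from_moles
```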
Is that true? I see that the mass of the objects seems to play a role
as it is included in the formula for elastic collisions:
$$u=\frac {m_1 v_1 + m_2 v_2}{m_1 + m_2}$$
But this equation does not describe how elastic a collision is, it
only describes the behaviour of two objects after a perfectly
elastic collision.
Your equation does not appear to be for an elastic collision. It seems to be for a perfectly inelastic collision, where the two objects stick together following the collision with a final velocity of $u$, based on conservation of momentum. In any case, I have not heard of mass, as a fundamental property of matter, playing a role in the elasticity of a collision. But I would be interested if anyone else has knowledge to the contrary.
In my example, I compare the elasticity of the collision between two
hydrogen or rather helium atoms. Does the higher mass of helium (and
thus the higher volume of its nucleus) affect the elasticity of the
collision?
Again, I don't see how a higher mass affects the elasticity of the collision. I only see that it affects the final momenta and kinetic energies of the colliding objects. To the best of my knowledge, it is the mechanical properties of the materials (are they elastic? Viscoelastic? (part elastic and part inelastic), etc.) that determine collision elasticity. Again, however, perhaps someone else can point to a reliable source to the contrary.
I am pretty sure that this equation is for elastic collisions. At
least, this is what I also found on Wikipedia.
I looked at the Wikipedia article. It looks like $u$ in your equation is the velocity in the center of mass frame, which does not change before and after the collision. Since you didn't state what $u$ was, I assumed it was the velocity of the two masses stuck together following the collision, which would also satisfy conservation of momentum.
Regardless, I believe your equation only requires conservation of momentum. It would apply whether the collision is elastic or inelastic. So the equation does not only apply to elastic collisions, as stated.
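To illustrate that the centre-of-mass velocity $u$ is unchanged by the collision and that kinetic-energy conservation holds for any mass ratio in an elastic collision, here is a small sketch (function name and numbers are illustrative, using the standard 1D elastic-collision result):

```python
def elastic_1d(m1, v1, m2, v2):
    # standard 1D elastic-collision formulas (momentum and KE both conserved)
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

m1, v1, m2, v2 = 1.0, 2.0, 4.0, -1.0
v1p, v2p = elastic_1d(m1, v1, m2, v2)

# momentum is conserved, so the centre-of-mass velocity u is unchanged
u_before = (m1 * v1 + m2 * v2) / (m1 + m2)
u_after = (m1 * v1p + m2 * v2p) / (m1 + m2)
assert abs(u_before - u_after) < 1e-12

# kinetic energy is conserved for any mass ratio: the elasticity of the
# collision is about whether KE is conserved, not about the masses
ke = lambda m, v: 0.5 * m * v * v
assert abs(ke(m1, v1) + ke(m2, v2) - ke(m1, v1p) - ke(m2, v2p)) < 1e-12
```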
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 69963,
"tags": "classical-mechanics, collision"
} |
Flows with Negative Values? | Question: Define a "non-standard" flow to be a flow where the quantity flowing through an edge may be negative.
Formally, given a directed graph $G$, and two designated and distinct
vertices $s$ and $t$ (such that no edges are leaving $t$, and no edges
are entering $s$), a non-standard flow $f : G_E \to \mathbb{R}$ is a
valuation on the edges such that
For each $v \in G_V\setminus \{s, t\}$, the sum of the values of $f$
on the edges entering $v$ is equal to the sum of the values of $f$ on
the edges leaving $v$.
Assuming all the capacities are positive, does the Max-Flow Min-Cut theorem still hold for non-standard flows?
Furthermore, is there a network with positive edge capacities, such that the maximum non-standard flow through the network is larger than the maximum (standard) flow through the network?
The standard proof for Max-Flow Min-Cut theorem uses the non-negativity of the flow to claim that each flow is smaller than each cut. I would guess that this tells us that the counterexample (if any) would have a cut that has negative flow into its incoming edges.
Answer: The maximum flow calculated using only positive flow values on each edge can indeed be smaller than the maximum if flow can also be negative. You can easily see why in a trivial graph with only two nodes and two edges:
---- capacity 1 --->
A B
<--- capacity 1 ----
If A is the source node and B is the sink, you can obviously see that if only positive flows are allowed, you get a max flow of 1 but if negative flows are allowed you'd get 2 (with the bottom edge flowing "backwards").
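Both numbers can be checked with any standard max-flow routine. A minimal Edmonds-Karp sketch (function name and graph encoding are my own) applied to this graph:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap is a dict-of-dicts of edge capacities
    and is modified in place (it becomes the residual network)."""
    # make sure every residual (reverse) edge exists with capacity 0
    for u in list(cap):
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, c in cap[u].items():
                if v not in parent and c > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # recover the path, push the bottleneck along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap[v][u] += b
        flow += b

# original graph: A->B and B->A, each with capacity 1;
# with only positive flows the maximum is 1
assert max_flow({'A': {'B': 1}, 'B': {'A': 1}}, 'A', 'B') == 1

# restated problem: add a reversed edge for every original edge
# (so A->B and B->A each have total capacity 1 + 1 = 2); the
# positive-only maximum is now 2, matching the "non-standard" flow
assert max_flow({'A': {'B': 2}, 'B': {'A': 2}}, 'A', 'B') == 2
```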
As I commented, the way to figure out the capacity of a graph that allows negative flows as well as positive ones is to restate the problem. Instead of allowing negative flows, add an additional reversed edge everywhere there's a normal edge in the original graph. Then solve the problem with only positive flows. In the trivial example above, you'd have two edges running each direction, and when you found the maximum using only positive flows, you'd get the right answer, 2. | {
"domain": "cs.stackexchange",
"id": 10205,
"tags": "network-flow"
} |
How to setup bloom to generate a local deb, outside of ros.org buildfarm | Question:
Hi.
This question is related to this one: http://answers.ros.org/question/173804/generate-deb-from-ros-package/
Basically, what we need to do is to provide our ros packages to project partners and clients.
But we do not want to distribute the source code for some of the nodes (some yes, and some no).
And we don’t want either to appear yet in the official ROS buildfarm/jenkins/repo for project confidentiality required by some clients.
And we will want in the future to add obfuscation of python code and license management for C++ code with LMX for example...
So we’ve been digging around the ROS release process.
But things are not really clear to us yet.
Would you know how to do, or how would you advise us to do in order to generate such installers so that our clients can easily install it and that we can check release numbers, and send only compiled code.
I think there are 2 ways:
The “easier” option, though I'm not sure it works well: generate a tar.gz of the catkin workspace after having cleaned it of source code… Do you have experience with this? Do we need to remove the devel folder? If we remove the src folder this doesn't work; do we need to leave at least the config and launch files? More? Is this documented somewhere? Maybe we need to compile catkin with the release flag?
The more complex but cleaner and more long-term option: generating a debian and putting it on a server.
I've seen that we can use Bloom to generate a debian automatically on GitHub. And then a pull request is generated to populate the ROS buildfarm…
But then, where is it compiled? In local? By github? By ros.org?
How can we specify/include in the debian only compiled code and not source code? (or a mix)
Can we generate only one debian for a metapackage? or will we have to generate as many .debs as packages?
Do you know how to disable this pull to ros.org? and to github (or how to configure the deb to be generated on our git server?)
If we don’t publish it on ros.org, can we still get the track feature to manage release version and tags etc…
If/when we'll have generated the .deb, what kind of server would we need to implement to get a private server to be able to run apt-get? (or any similar method) Might PPA be the solution? Or can one call an apt-get on a private git server since the release debian will be there?
And finally, how/where does the obfuscation popsup in this process?
Thanks in advance for any advice and guidelines.
You will have understood that we know more or less what we want, but have little idea of the available options and how to implement this... what a program :-)
For the question point 2, for a first package that in particular defines its own services, we started to follow some steps found here http://answers.ros.org/question/173804/generate-deb-from-ros-package/ but without success yet...
We do run bloom-generate rosdebian --os-name ubuntu --os-version precise --ros-distro hydro without error in the prompt.
But when running fakeroot debian/rules binary the compiler doesn’t find the include files that are located in the "devel/include/" directory of the catkin workspace and thus aborts.
Those header files are auto-generated by catkin_make when defining services in my main code... Any idea how to solve this?
Also, could you elaborate on the 2 alternatives you are proposing: checkinstall and dpkg-buildpackage ? Thanks in advance.
Thanks in advance
Damien
Originally posted by Damien on ROS Answers with karma: 203 on 2014-09-10
Post score: 8
Original comments
Comment by William on 2014-09-12:
@Damien I haven't forgotten about this, I'm at ROSCon right now, so it might be a few days before I get to this. Sorry!
Answer:
I'll see if I can address your questions and then give a suggested approach:
The “easier” option, though I'm not sure it works well: generate a tar.gz of the catkin workspace after having cleaned it of source code… Do you have experience with this? Do we need to remove the devel folder? If we remove the src folder this doesn't work; do we need to leave at least the config and launch files? More? Is this documented somewhere? Maybe we need to compile catkin with the release flag?
When you build a catkin workspace, you can tell it to do make install. This can be done with catkin_make install or catkin_make_isolated --install, depending on the build tool you are using. The result will be an install folder along side your src, build, and devel folders. This install folder stands on its own, you can delete the src, build, and devel folders and it will still function. It is also relocatable (a feature that cross-compiler's frequently use), meaning you can move the folder and it should still work. Debian package generation does just this, it takes the result of bloom (source + debian configuration files) and builds then installs your packages into an install folder, then it packages up that install folder and that is what is distributed in the resulting .deb file.
The more complex but cleaner and more long-term option: generating a debian and putting it on a server. I've seen that we can use Bloom to generate a debian automatically on GitHub. And then a pull request is generated to populate the ROS buildfarm…
The bloom-release command will generate the required debian configuration files, and then open a pull request on github. The pull request is only required if you are going to have our build farm compile your packages into .deb files. This requires that your code is open source, so this will not be what you want. As @IsaacSaito suggested, this answer should help you manually build the .deb from the result of bloom without a pull request or our farm:
http://answers.ros.org/question/67345/build-debian-package-locally/?answer=67373#post-id-67373
Then you can automate that process with a CI server like Jenkins or buildbot, see buildbot-ros:
https://github.com/mikeferguson/buildbot-ros
But then, where is it compiled? In local? By github? By ros.org?
If you open the pull request it will be built on jenkins.ros.org and the result hosted on packages.ros.org (open source only).
How can we specify/include in the debian only compiled code and not source code? (or a mix)
If you are using our farm, the source code and binaries are both always provided in the result, there is no configuration, hence the requirement for open source.
Can we generate only one debian for a metapackage? or will we have to generate as many .debs as packages?
If you are using bloom, there will be one .deb per package and per metapackage (containing just the metapackage, not the packages which belong to it). If you want to generate a single .deb for multiple catkin packages, consider combining catkin_make install (which is really just cmake + make + make install) with checkinstall:
https://help.ubuntu.com/community/CheckInstall
Do you know how to disable this pull to ros.org? and to github (or how to configure the deb to be generated on our git server?)
The answer linked above tells you how to use bloom to generate the debian configurations, but not open a pull request.
If we don’t publish it on ros.org , can we still get the track feature to manage release version and tags etc…
No, all of our tools require our infrastructure, but they are all open source and designed with as few assumptions about infrastructure as possible, so it would be possible to adapt them to private use, though at this time we do not provide documentation on how to do that.
If/when we'll have generated the .deb, what kind of server would we need to implement to get a private server to be able to run apt-get? (or any similar method) Might PPA be the solution? Or can one call an apt-get on a private git server since the release debian will be there?
To host .debs you will need an apt repository:
https://wiki.debian.org/HowToSetupADebianRepository
And finally, how/where does the obfuscation popsup in this process?
We don't support this functionality at all, this is entirely up to the user, we provide our process and infrastructure to facilitate federated collaboration, so it isn't a feature which is requested often enough for us to support.
But when running fakeroot debian/rules binary the compiler doesn’t find the include files that are located in the "devel/include/" directory of the catkin workspace and thus aborts.
So the fakeroot process assumes that you have your dependencies available for whatever package you are trying to build; you likely did not source the setup.bash file for the workspace containing the packages on which your package depends.
Hopefully this gives you some directions in which to go investigating solutions. If you find something that works really well for your use case please consider providing details here for others.
Originally posted by William with karma: 17335 on 2014-09-14
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Damien on 2014-09-16:
Thanks a lot William.
We'll need a bit of time to digest all this and find our way to generate our packages.
The buildbot-ros seems a good option... the checkinstall to generate a single deb also...
Comment by Damien on 2014-09-18:
William, I've came up to the conclusion that I first need to get catkin_make install produce the /install correctly.
So, Can I/Should I/Where (adding a /install/src?) should I add the source code of a restricted number of my nodes to my future deb? ROS is only hosting it in github, right?
Comment by William on 2014-09-18:
I don't understand your comment. You don't need source code to ship the install space. If you want to include source code I would do it separately. ROS is not hosting anything on your behalf; if your code is on github it is because you put it there. I think I am just misunderstanding the question.
Comment by Damien on 2014-09-18:
Goal is to ship a complete product to a client, with binaries, docs and some source code. I don't want ros.org to host anything. But I'm not sure how to ship these src examples... So I thought to configure CMakeLists to generate both binaries and copied source files in /install; get a deb with FPM and ship
Comment by Damien on 2014-09-18:
I referred to ROS just as a reference: in /opt/ros there is no source code, just the equivalent of /install. Apart from that, you provide source on GitHub for us to compile.
My option might be to run make install, clean my catkin_ws folder of the build, devel and some src packages, then build an FPM deb or a tar.gz to ship.
Comment by joq on 2014-09-18:
If your customers can use Debian packages, provide them with both source and binary packages.
Comment by Damien on 2014-09-18:
@joq any hint on how to generate a source debian for ros? Bloom will only generate a binary package I think. no?
Comment by joq on 2014-09-18:
The build farm uses the bloom git build package to generate source packages for each target, then uses them to generate the binaries. I don't know the details, but the information is all there.
Comment by Damien on 2014-09-23:
Hi again. I'm going step by step...
To comply with git-buildpackage, I've set up a GitHub server for my source and release
Now I want to run bloom-release --rosdistro hydro --track hydro repository_name but I'm not sure about what to do to make sure that no Pull request is sent to ROS
Could you elaborate?
Comment by Damien on 2014-09-23:
Or maybe I should run bloom-generate rosdebian --os-name ubuntu --os-version precise --ros-distro hydro instead?
I really don't want to mix up with the official ROS build farm....
"domain": "robotics.stackexchange",
"id": 19368,
"tags": "ros, catkin, deb, bloom-release, package"
} |
2D vertex of multiple countries | Question: CodeChef - QPOINT
Given the 2D vertices of multiple countries, find which country contains a certain point.
Input format:
"Number of countries"
"Number of vertices" "x1" "y1" "x2" "y2" ...
"Number of vertices" "x1" "y1" "x2" "y2" ...
...
"Number of queries"
"x1" "y1"
"x2" "y2"
...
Please give me any advice on code style or design patterns.
class QPOINT
{
class Point
{
public double X;
public double Y;
public Point()
{
X = 0;
Y = 0;
}
public Point(double x, double y)
{
X = x;
Y = y;
}
}
class Line
{
public Point point1;
public Point point2;
public Line()
{
point1 = new Point();
point2 = new Point();
}
public Line(Point point1, Point point2)
{
this.point1 = point1;
this.point2 = point2;
}
}
class CountryUnit
{
public List<Point> vertex = new List<Point>();
public List<Line> boundaries = new List<Line>();
}
List<CountryUnit> CountryList = new List<CountryUnit>();
public static void Main(string[] args)
{
QPOINT qpoint = new QPOINT();
qpoint.ReadCountry();
qpoint.ReadQueryAndPrintResult();
}
public void ReadCountry()
{
// First line is the total number of countries
int countryCount = int.Parse(Console.ReadLine());
// Each following N lines contain all vertex of a single country
for (int i = 0; i < countryCount; i++)
{
string[] vertexInfo = Console.ReadLine().Split(' ');
CountryList.Add(CreateNewCountry(vertexInfo));
}
}
CountryUnit CreateNewCountry(string[] vertexInfo)
{
CountryUnit newCountry = new CountryUnit();
// The first number is the total number of vertex
int vertexCount = int.Parse(vertexInfo[0]);
// The following numbers are N pairs of x,y coordinates
for (int j = 0; j < vertexCount; j++)
{
newCountry.vertex.Add(
new Point(
int.Parse(vertexInfo[j * 2 + 1]),
int.Parse(vertexInfo[j * 2 + 2])
)
);
}
// Add the first coordinate to the list again to complete the circuit
newCountry.vertex.Add(newCountry.vertex[0]);
// Get boundaries of the country
for (int j = 0; j < vertexCount; j++)
{
newCountry.boundaries.Add(
new Line(
newCountry.vertex[j],
newCountry.vertex[j + 1]
)
);
}
return newCountry;
}
public void ReadQueryAndPrintResult()
{
// Continuing line is the total number of queries
int queryCount = int.Parse(Console.ReadLine());
for (int i = 0; i < queryCount; i++)
{
// Each following line is a query
string[] pointInfo = Console.ReadLine().Split(' ');
Point thisQuery = new Point(
int.Parse(pointInfo[0]),
int.Parse(pointInfo[1])
);
// Output the result
Console.WriteLine(GetContainingCountry(thisQuery));
}
}
int GetContainingCountry(Point query)
{
for (int i = 0; i < CountryList.Count; i++)
{
if (CountryContainsPoint(CountryList[i], query))
{
return (i + 1);
}
}
return -1;
}
bool CountryContainsPoint(CountryUnit country, Point query)
{
List<Line> boundaries = country.boundaries;
// Create a line that extends towards x, starting from the query point
Line infiniteLine = new Line(
query, new Point(1000000, query.Y)
);
// Count the total times of intersection between the infinite line and boundaries
int intersectCount = 0;
foreach (Line boundary in boundaries)
{
// If the point is on the boundary, return true
if (PointOnLine(query, boundary))
{
return true;
}
// Else, count intersections
else if (TwoLinesIntersect(infiniteLine, boundary))
{
intersectCount++;
}
}
// if total times of intersection is odd, return true
if (intersectCount % 2 == 1)
{
return true;
}
return false;
}
bool PointOnLine(Point point, Line line)
{
Point a = point;
Point c = line.point1;
Point d = line.point2;
if ((a.X - d.X) * (c.Y - d.Y) - (a.Y - d.Y) * (c.X - d.X) == 0)
{
return true;
}
return false;
}
bool TwoLinesIntersect(Line line1, Line line2)
{
Point a = line1.point1;
Point b = line1.point2;
Point c = line2.point1;
Point d = line2.point2;
Point intersection = new Point();
// lines are parallel
if ((b.Y - a.Y) * (c.X - d.X) - (b.X - a.X) * (c.Y - d.Y) == 0)
{
return false;
}
intersection.X = ((b.X - a.X) * (c.X - d.X) * (c.Y - a.Y) - c.X * (b.X - a.X) * (c.Y - d.Y) + a.X * (b.Y - a.Y) * (c.X - d.X)) / ((b.Y - a.Y) * (c.X - d.X) - (b.X - a.X) * (c.Y - d.Y));
intersection.Y = ((b.Y - a.Y) * (c.Y - d.Y) * (c.X - a.X) - c.Y * (b.Y - a.Y) * (c.X - d.X) + a.Y * (b.X - a.X) * (c.Y - d.Y)) / ((b.X - a.X) * (c.Y - d.Y) - (b.Y - a.Y) * (c.X - d.X));
if ((intersection.X - a.X) * (intersection.X - b.X) <= 0 && (intersection.X - c.X) * (intersection.X - d.X) <= 0 && (intersection.Y - a.Y) * (intersection.Y - b.Y) <= 0 && (intersection.Y - c.Y) * (intersection.Y - d.Y) <= 0)
{
return true;
}
else
{
return false;
}
}
}
Answer: I like that you defined some classes to help solve the problem, but your Point and Line have a pretty glaring issue: they're mutable.
class Point
{
public double X;
public double Y;
In my mind, you shouldn't be able to change a point or a line after it's been created. That's because if you change it, it is effectively a different point. I would only allow the coordinates to be set via the constructor and change these from public fields to get-only properties. You should also consider whether or not a struct would be sufficient for your needs.
I would also chain your constructors so that the parameterless one calls the other.
class Point
{
public double X { get; private set; }
public double Y { get; private set; }
public Point()
:this(0,0) { }
public Point(double x, double y)
{
this.X = x;
this.Y = y;
}
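For comparison — purely illustrative, since the review is about C# — the same "immutable value object" idea in Python is a frozen dataclass, which makes the no-mutation rule directly testable:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
try:
    p.x = 5.0              # any attempt to mutate raises
    mutated = True
except FrozenInstanceError:
    mutated = False
```

Value equality comes for free with the dataclass, which mirrors the "a changed point is effectively a different point" argument above.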
The second issue I see here is that your implementation is bound to the console. Many of your methods should be taking in arguments instead of directly getting input from the user. There should be another class solely responsible for interacting with the user. Main should be responsible for passing information to/from that class to/from the one that is calculating the solution. This will make it easier to use your class anywhere and put it under test. | {
"domain": "codereview.stackexchange",
"id": 12673,
"tags": "c#, programming-challenge, coordinate-system"
} |
Cannot achieve good result while Transfer Learning CIFAR-10 on ResNet50 - Keras | Question: I'm trying to Transfer Learn ResNet50 for image classification of the CIFAR-10 dataset.
It's stated in the original paper and also ResNet50 documentation on keras.io that the ResNet should have a minimum input shape of 32x32. But I cannot achieve any good results.
Here I have created and compiled the sequential model:
model = Sequential()
model.add(ResNet50(include_top=False, weights='imagenet', input_shape=(32,32,3)))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(128, activation='relu')) #Dense Layer
model.add(Dropout(0.5)) #Dropout
model.add(Dense(10, activation='softmax')) #Output Layer
model.layers[0].trainable = False #Set ResNet as NOT trainable
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
and here I have loaded the CIFAR-10 dataset:
(x_train, y_train), (x_test, y_test) = cifar10.load_data() #Load dataset
num_classes = len(np.unique(y_test))
(x_train, x_valid) = x_train[5000:],x_train[:5000] #set 5000 validation data
(y_train, y_valid) = y_train[5000:],y_train[:5000]
y_train = to_categorical(y_train,num_classes) #Convert to one-hot
y_test = to_categorical(y_test,num_classes)
y_valid = to_categorical(y_valid,num_classes)
Note that on the keras.io website documentation for the resnet50.preprocess_input, it is stated that the input data has 0-255 range.
So then to preprocess data I used:
x_train = preprocess_input(x_train.copy()) #preprocess training images for the resnet50
x_test = preprocess_input(x_test.copy()) #preprocess test images for the resnet50
x_valid = preprocess_input(x_valid.copy()) #preprocess validation images for the resnet50
And here's the fitting section:
ModelHistory = model.fit(x_train, y_train,
batch_size=32,
epochs=10, verbose=0,
validation_data=(x_valid,y_valid),
callbacks=[earlystop]) #Use Early Stopping
But the result I get is terrible even after ~20 epochs (~67% testing accuracy)
Where am I going wrong?
Answer: I was able to achieve better results by upsampling the input images (using UpSampling2D((6,6)))
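For clarity: the Keras layer is spelled UpSampling2D, and with size (6, 6) it repeats each pixel six times along each spatial axis, so a 32x32 CIFAR image becomes 192x192 before it reaches ResNet50. A pure-Python sketch of that shape change (illustrative only; the real model stays in Keras):

```python
def upsample2d(image, size=(6, 6)):
    """Nearest-neighbour upsampling: repeat each pixel `size` times per axis.

    `image` is a list of rows, each row a list of pixels. This mirrors what
    Keras's UpSampling2D does to the height/width axes of a batch tensor.
    """
    sy, sx = size
    out = []
    for row in image:
        wide = [px for px in row for _ in range(sx)]   # repeat along width
        out.extend([list(wide) for _ in range(sy)])     # repeat along height
    return out

img = [[(r, c) for c in range(32)] for r in range(32)]  # fake 32x32 "image"
big = upsample2d(img)
```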
Accuracy of ~82% was achieved. For better results, I think one could upsample more, or add another Dense layer at the output. | {
"domain": "datascience.stackexchange",
"id": 10752,
"tags": "classification, keras, tensorflow, transfer-learning"
} |
Correct way to implement linked list | Question: I'm doing this challenge on Hacker Rank that wants me to implement a linked list.
It seems to want me to find the last-added instance of node and change its head to link to my new instance of node. Therefore the last instance added would have head=None (I'm using Python).
This is the pic they provide -
Wouldn't it make more sense to create a node instance with its head linked to the previous node? That way the only node with head=None would be the first node created.
I've seen conflicting suggestions so far. I'm not a CS student or developer.
EDIT -
This example from Youtube (1.36) suggests the second method.
EDIT -
Sorry if this seemed like a programming question. I'm trying to see if there's a logical way to set up linked lists for my own benefit... solving the HackerRank challenge is not the issue.
Answer: First of all, don't have a head pointer in your structure that doesn't point to the head of the list: it will confuse everyone. Call the pointer in your structure next or something!
You are right that in a singly-linked list (one with a next pointer but no prev pointer) it is illogical and inefficient to add new items to the end of the list. Singly-linked lists work best when you add each new item to the beginning of the list. Then all you need to do is:
Create a new item and fill in its data.
Set its next pointer to the value stored in the global head variable.
Set the global head variable to point to the new item.
Thus, as you say, only the first item to be added to the list will have next=null.
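In Python (the language the asker is using), those three steps look like this — the names are illustrative:

```python
class Node:
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node   # "next", not "head": it points to the next node

class LinkedList:
    def __init__(self):
        self.head = None        # the single real head pointer

    def prepend(self, data):
        # 1. create the item  2. point it at the old head  3. move head to it
        self.head = Node(data, self.head)

    def to_list(self):
        out, node = [], self.head
        while node is not None:
            out.append(node.data)
            node = node.next
        return out

lst = LinkedList()
for x in (1, 2, 3):
    lst.prepend(x)
```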
Adding items to the end of the list involves walking through the whole of the list every time, which is not efficient or sensible.
If you wanted a singly linked list to which you could add items at the end, you would have a second global variable (call it tail) pointing to the last item in the list. This makes for inelegant code, since you now have different kinds of behaviour when tail is null (which it will be at the beginning) and when tail is not null. In fact, a good way of handling matters in this case is to create the list with one fictional item in it already, and tail pointing to that item. This saves a lot of tests, but you will of course have to remember, when walking through the list, not to pay attention to that fictional item. | {
"domain": "cs.stackexchange",
"id": 6939,
"tags": "data-structures, linked-lists"
} |
What is the cross-section area of an electrode when measuring electrolytic conductivity? | Question: If we have two flat plates,
then the area is understood easily. However, if our two electrodes are like this:
then will we take the area of $S_1$ or $S_2$?
Answer: Neither.
The connection between ohmic resistance $R$ and conductivity $\sigma$,
$$
\sigma = \frac{L}{RA},
$$
is based on
the assumption that the material follows Ohm’s Law, with the current density proportional to the electric field, $\vec J = \sigma \vec E$
the assumption that the electric field in the test volume is uniform, so that integrating $\frac1\sigma \vec J = \vec E$ over the volume to obtain $RI = V$ depends only on the geometry and not on variations in the strength of the electric field.
This second assumption is valid in a long thin conducting wire, with $L\gg \sqrt A$, because charges can rearrange on the outside of the wire to cancel out any external field. It’s also valid in a parallel-plate capacitor geometry, with $L \ll \sqrt A$, because two nearby parallel plates can have negligible “fringing field.”
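In the regimes where the formula does apply, the computation is a one-liner; here is a numeric illustration with made-up cell dimensions (not taken from the question):

```python
# sigma = L / (R * A) for a parallel-plate-like cell
L = 0.01     # m, electrode separation (illustrative)
A = 1.0e-4   # m^2, plate area (illustrative)
R = 500.0    # ohm, measured cell resistance (illustrative)

sigma = L / (R * A)   # conductivity in S/m
```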
However, neither of your diagrams obeys the assumption of uniform field. Your upper diagram has $L \approx \sqrt A$, which means there will be a substantial variation in the electric field through your electrolyte volume. And your lower diagram seems to have surfaces with different areas which are not parallel to each other. There is a volume integral in your future. | {
"domain": "physics.stackexchange",
"id": 87963,
"tags": "measurements, electrochemistry"
} |
Constant Temperature Cooling | Question: In my thermodynamics textbook there is part of a question that seems to be a contradiction.
...Superheated refrigerant R-134a at 20 C, 0.5 MPa is cooled in a piston/cylinder arrangement at constant temperature to a final two-phase state with quality of 50%...
Why would it say it is "cooled ... at a constant temperature" ? I understand there is a change in volume and pressure but if the temperature is constant how can it be cooled?
Answer: By "cooling" the book means "subtracting heat/energy". Even if usually adding and subtracting energy at constant pressure changes the temperature of a substance, this is not true at a phase transition point where you can add and subtract energy at fixed temperature and pressure by varying the amount of the two phases, and thus usually volume.
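The freezing-water example below can be made quantitative: the heat that must be removed at constant temperature and pressure is the latent heat of fusion, roughly 334 kJ/kg for water (a standard textbook value, quoted from memory for illustration):

```python
m = 2.0            # kg of water already at 0 degC (illustrative)
L_fusion = 334e3   # J/kg, approximate latent heat of fusion of water
Q = m * L_fusion   # heat to subtract while T and p stay constant
```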
This is, for example, what happens when you cool water to make ice: once you reach 0 °C both liquid water and ice would be stable, but they differ in the amount of internal energy. In order to turn the water into ice you need to subtract more heat/energy, and both temperature and pressure may stay constant. Volume usually changes, and the phase transition is the flat line in the temperature/volume diagram at constant pressure. | {
"domain": "physics.stackexchange",
"id": 29048,
"tags": "thermodynamics, temperature, physical-chemistry"
} |
Get length of the longest sequence of numbers with the same sign | Question: I need to get length of the longest sequence of numbers with the same sign. Assumes that zero is positive. For example:
{10, 1, 4, 0, -7, 2, -8, 4, -2, 0} → 4
{0, 1, 2, 3, -2, -4, 0} → 4
{1, -2, 0, -1} → 1
I wrote a function:
unsigned getLongestSameSignSequenceLength(std::vector<int> const& a)
{
unsigned maxlen = 1;
/* Assumes that zero is positive. */
#define SIGN(a) (a >= 0)
for (size_t i = 1, len = 1; i < a.size(); i++, len++) {
if (SIGN(a[i]) != SIGN(a[i - 1])) {
maxlen = std::max(maxlen, len);
len = 0;
} else {
if (i == a.size() - 1)
return std::max(maxlen, len + 1);
}
}
#undef SIGN
return maxlen;
}
Can you please give me tips to improve my code?
Answer: A small portability bug: std::size_t is in the std namespace, assuming it's declared by including <cstddef> (recommended).
No unit tests are included, but I'd expect one that tests that the result is zero when the input collection is empty. We need to initialize maxlen to zero for that test to pass.
When comparing consecutive elements of a collection, always consider using std::adjacent_find(). With a suitable predicate function, we can find changes from negative to non-negative and vice versa without needing to code our own loop or do any indexing.
(More advanced) Consider making your algorithm generic, templated on an iterator type, so that it can be applied to any collection (or even to an input stream directly).
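As a cross-language sanity check — not part of the review itself — the same run-length computation is compact in Python with itertools.groupby, which can be handy for generating expected values for the unit tests:

```python
from itertools import groupby

def longest_same_sign_run(nums):
    # Zero counts as positive, matching the question's SIGN(a) == (a >= 0)
    runs = (sum(1 for _ in run)
            for _, run in groupby(nums, key=lambda x: x >= 0))
    return max(runs, default=0)   # default handles the empty input
```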
Here's a version that applies all of these suggestions (and some from other answers that I've not repeated above):
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iterator>
template<typename ForwardIt>
std::size_t getLongestSameSignSequenceLength(ForwardIt first, ForwardIt last)
{
auto const signdiff =
[](auto a, auto b){ return std::signbit(a) != std::signbit(b); };
std::size_t maxlen = 0;
while (first != last) {
ForwardIt change = std::adjacent_find(first, last, signdiff);
if (change != last) { ++change; }
std::size_t len = std::distance(first, change);
if (len > maxlen) { maxlen = len; }
first = change;
}
return maxlen;
}
// tests:
#include <vector>
int main()
{
struct testcase { std::size_t expected; std::vector<int> inputs; };
std::vector<testcase> tests
{
{0, {}},
{1, {1}},
{1, {1, -2}},
{1, {1, -2, 3}},
{1, {-1, 2, -3}},
{2, {1, 2}},
{2, {1, 2, -3}},
{2, {-1, -2, 3}},
{2, {-1, 2, 3}},
{2, {-1, 2, 3, -4}},
};
int failures = 0;
for (auto const& [e, v]: tests) {
failures += getLongestSameSignSequenceLength(v.begin(), v.end()) != e;
}
return failures;
} | {
"domain": "codereview.stackexchange",
"id": 33763,
"tags": "c++, vectors"
} |
Setting Class Properties | Question: I am adding classes to some legacy code and wondering if there is a more appropriate way to be setting the properties. I am also trying to understand the trade offs, if there is another way to do this.
I'm trying to find out if there is a better way to do this? If there is, why? If there isn't, why is this better than the alternative?
class Foo
{
public $id;
public function __construct($id)
{
$this->id = $id;
$this->getDetails();
}
/**
*
* Assigns all properties
*
*/
public function getDetails()
{
$db = Database::getDatabase();
$sql = "SELECT * FROM foos WHERE id = '{$this->id}' ";
$result = $db->getRow($sql);
foreach($result as $key => $value) {
$this->$key = $value;
}
}
}
Answer: Security
Use prepared statements. Classes are reusable, so even if the id is not user provided right now, this may very well change in the future, making your code vulnerable to SQL injection.
Naming
getDetails is rather vague. What details? Additionally, getX usually signals that the method returns X, which isn't the case here.
Your comment is a bit better, something like assignProperties might work. I would prefer something like loadById, populateById, or more explicit populateFieldsById.
General Structure
getDetails should be private, right?
As for the general structure, it's fine.
I would probably separate the Foo object from the code retrieving it from the database like this (pseudo code):
// just setters, getters, and business logic
class Foo {
constructor(...) {...}
getPropertyX() {...}
setPropertyX(...) {...}
getPropertyY() {...}
setPropertyY(...) {...}
performSomeBusinesslogicOnObject() {...}
}
// performing database interactions and populating Foo objects
class FooDAO {
Foo getById(id) {...}
Foo getByPropertyZ(z) {...}
save(Foo foo) {...}
}
My main reason for doing it this way is that it separates the database operation from the model.
Now, you can use the model without being forced to have a database interaction (maybe one isn't needed right now, maybe you want to run tests, maybe the current instance of the object comes from somewhere else than the database, etc).
On the other hand, if most of your business logic happens directly on the database, not the object, this might get confusing. So it really also depends on what Foo is exactly. But for simple objects which are mainly retrieved, updated, saved, etc, this is the way I would go. | {
"domain": "codereview.stackexchange",
"id": 15956,
"tags": "php"
} |
Is there any relationship between cross-terms in QFT and double slit experiment? | Question: Suppose I have a fermion-fermion interaction with two channels $t$ and $u$, so the matrix element is $\tilde M = \tilde M_t + \tilde M_u$. Then when we square the matrix element, we have $$\sum_s |\tilde M|^2 = \sum_s \left( |\tilde M_t|^2 + |\tilde M_u|^2 + \tilde M_t^*\tilde M_u + \tilde M_t\tilde M_u^* \right).$$
I wonder if there is any relationship between the cross-terms $\tilde M_t^*\tilde M_u+ \tilde M_t\tilde M_u^*$ and interference in the double-slit experiment. If so, what does constructive and destructive interference mean here? Since the matrix element describes the amplitude, rather than probability, is there a connection between matrix elements and a wave function?
Answer: They are similar. In both cases, we are interested in the components of the time-evolved wavefunction. In the double-slit case, we are interested in the position-basis components. In scattering, we're interested in the Fock-basis components.
Now, if, as a result of our computation techniques, we compute this vector component as a sum:
$$\langle i|\psi\rangle=\langle i| \psi_1\rangle +\langle i|\psi_2\rangle=a_1+a_2$$
Then inevitably:
$$|\langle i|\psi\rangle|^2=|a_1|^2+|a_2|^2+a_1a_2^*+a_1^*a_2$$
This is just pure mathematics.
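That pure mathematics is easy to check numerically with two toy complex amplitudes (illustrative values, not real matrix elements):

```python
import cmath

a1 = 0.6 * cmath.exp(1j * 0.3)   # toy amplitudes with different phases
a2 = 0.5 * cmath.exp(1j * 2.1)

total = abs(a1 + a2) ** 2                 # |a1 + a2|^2
incoherent = abs(a1) ** 2 + abs(a2) ** 2  # the two "classical" terms
cross = 2 * (a1.conjugate() * a2).real    # a1* a2 + a1 a2*
```

The sign of the cross term is set by the relative phase: it equals $2|a_1||a_2|\cos\Delta\phi$, so it can be constructive or destructive.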
Now, why is it that our computation techniques result in expressing our desired vector component as a sum?
This has different answers for both cases, because the computation techniques are different.
In the double slit case, we are exploiting the linear nature of the Schrodinger equation. We're evolving the wavefunctions from Slit1 and Slit2 individually and adding the results, which is why this computation technique results in a sum as the final answer. This technique is reminiscent of the path integral technique, which is interpreted as a sum over possibilities.
In the scattering case, we are using perturbation theory techniques like the Dyson series and Wick's theorem to split up the time-evolution operator:
$$\langle i| e^{-iHt}|\psi \rangle= \langle i| (\sum S_n) |\psi \rangle = \sum a_n$$
I've written this naive time evolution for simplicity. In general, we should be using the perturbation series arising from the LSZ formula. But in either case, we end up expressing our desired vector component as a sum, which is why there are cross terms between the sum components in the final probability.
Terms of this sum are tensor products of Green's functions, which is why they can be drawn pictorially.
Should we be interpreting these diagrams as a sum over physical possibilities?
I would say perturbation theory is only a computation technique. Only the wavefunction is measurable.
Nevertheless, there is an interpretation of the individual lines of Feynman diagrams as a sum over possibilities. This is because it happens that you can calculate the factor contributed by each line as a path integral of the relativistic particle action.
Still, AFAIK it takes perturbation theory to derive the full diagram description: like how many lines should be attached to a vertex. I would say they are a purely computational technique without any "particles splitting into virtual particles" description. | {
"domain": "physics.stackexchange",
"id": 92586,
"tags": "quantum-field-theory, interference, double-slit-experiment"
} |
Move_base: replanning issue | Question:
Hello:
I'm trying to configure move_base with my mobile robot platform and I am facing a serious issue. I am using ROS Hydro and navigation stack 1.11.15
I have configured the navigation package to follow the generated path as closely as possible, and when an unexpected obstacle appears in the path from outside the local window, the navfn global path is replanned and everything works OK. But if the obstacle appears near the robot (where the global path is already planned), it is not able to replan and approaches the obstacle until the robot stops.
If I configure the global planner to replan at a certain rate, it is able to replan and avoid the obstacle, but this periodic planning is not the behaviour we want in the real application; we want replanning only when the path is blocked.
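For reference, the periodic replanning mentioned above corresponds to a move_base parameter along these lines — shown only as the stopgap, not the desired behaviour:

```yaml
# Stopgap only: forces a periodic global replan (here once per second),
# which is exactly the behaviour the application does not want long-term.
planner_frequency: 1.0
```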
My configuration files are as follows:
Base local planner params:
base_global_planner: navfn/NavfnROS
base_local_planner: base_local_planner/TrajectoryPlannerROS
recovery_behaviors: [{name: conservative_reset, type: clear_costmap_recovery/ClearCostmapRecovery}, {name: rotate_recovery, type: rotate_recovery/RotateRecovery}, {name: aggressive_reset, type: clear_costmap_recovery/ClearCostmapRecovery}]
controller_frequency: 10.0
planner_patience: 3.0
controller_patience: 5.0
conservative_reset_dist: 3.0
recovery_behavior_enabled: true
clearing_rotation_allowed: true
shutdown_costmaps: false
oscillation_timeout: 0.0
oscillation_distance: 0.5
planner_frequency: 0.0
global_frame_id: map_navigation
TrajectoryPlannerROS:
acc_lim_x: 0.4
acc_lim_y: 0.4
acc_lim_theta: 0.8
max_vel_x: 0.2
min_vel_x: 0.1
max_trans_vel: 0.2
min_trans_vel: 0.1
max_rotational_vel: 0.6
max_vel_theta: 0.6
min_vel_theta: -0.6
min_in_place_vel_theta: 0.3
escape_vel: 0.0
holonomic_robot: false
y_vels: []
xy_goal_tolerance: 0.5
yaw_goal_tolerance: 0.3
latch_xy_goal_tolerance: true
sim_time: 1.0
sim_granularity: 0.025
angular_sim_granularity: 0.025
vx_samples: 3
vtheta_samples: 20
controller_frequency: 10
public_cost_grid_pc: true
meter_scoring: true
#DWA
heading_scoring: false
dwa: false
forward_point_distance: 0.325
path_distance_bias: 32
goal_distance_bias: 24
#TRAJECTORY PLANNER
pdist_scale: 0.9
gdist_scale: 0.8
occdist_scale: 0.01
heading_lookahead: 0.325
publish_cost_grid_pc: false
global_frame_id: map_navigation
oscillation_reset_dist: 0.05
prune_plan: true
NavfnROS:
allow_unknown: false
planner_window_x: 0.0
planner_window_y: 0.0
default_tolerance: 0.0
visualize_potential: false
Local costmap params:
local_costmap:
# Coordinate frame and TF parameters
global_frame: map_navigation
robot_base_frame: base_link
transform_tolerance: 1.0
# Rate parameters
update_frequency: 2.0
publish_frequency: 2.0
#Map management parameters
rolling_window: true
width: 8.0
height: 8.0
resolution: 0.05
origin_x: 0.0
origin_y: 0.0
static_map: false
obstacle_layer:
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: hokuyo_link, data_type: LaserScan, topic: hokuyo/scan, marking: true, clearing: true, observation_persistence: 0.0, expected_update_rate: 0.0, max_obstacle_height: 2.0, min_obstacle_height: -2.0, obstacle_range: 4.0, raytrace_range: 5.0, inf_is_valid: false}
max_obstacle_height: 2.0
obstacle_range: 4.0
raytrace_range: 5.0
track_unknown_space: false
inflation_layer:
inflation_radius: 5.52
cost_scaling_factor: 2.0
plugins:
- name: obstacle_layer
  type: "costmap_2d::ObstacleLayer"
- name: inflation_layer
  type: "costmap_2d::InflationLayer"
Global costmap params:
global_costmap:
# Coordinate frame and TF parameters
global_frame: map_navigation
robot_base_frame: base_link
transform_tolerance: 1.0
# Rate parameters
update_frequency: 5.0
publish_frequency: 2.0
#Map management parameters
rolling_window: false
resolution: 0.05
static_map: true
#Static Layer
static_layer:
unknown_cost_value: -1
lethal_cost_threshold: 100
map_topic: map_navigation
obstacle_layer:
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: hokuyo_link, data_type: LaserScan, topic: hokuyo/scan, marking: true, clearing: true, observation_persistence: 0.0, expected_update_rate: 0.0, max_obstacle_height: 2.0, min_obstacle_height: -2.0, obstacle_range: 4.0, raytrace_range: 5.0, inf_is_valid: false}
max_obstacle_height: 2.0
obstacle_range: 4.0
raytrace_range: 5.0
track_unknown_space: false
inflation_layer:
inflation_radius: 5.52
cost_scaling_factor: 2.0
plugins:
- name: static_layer
  type: "costmap_2d::StaticLayer"
- name: obstacle_layer
  type: "costmap_2d::ObstacleLayer"
- name: inflation_layer
  type: "costmap_2d::InflationLayer"
Thank you in advance for any idea about how to solve it.
Originally posted by E. Molinos on ROS Answers with karma: 56 on 2015-03-26
Post score: 3
Original comments
Comment by Naman on 2015-06-12:
Did you solve this issue? I am having a similar problem. TIA
Answer:
It may be accomplished if you make the global planner replan whenever the cost of the trajectory calculated by base_local_planner is less than zero. Have a try, and good luck to you.
Originally posted by pengjiawei with karma: 138 on 2018-01-10
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21256,
"tags": "navigation, move-base"
} |
In the context of block-encoding, what does $|0\rangle\otimes I$ represent? | Question: New to quantum and ran into the block-encoding. Having a bit of trouble understanding $|0\rangle \otimes I$.
$|0\rangle$ is just a vector but $I$ is an $n$ by $n$ matrix? Not clear how vector can be tensorproducted with a matrix? I know I am missing something here.
Any help could be appreciated.
Answer: This is probably best done with an example. Let's consider a $4\times 4$ matrix $U$ which acts on two qubits. Then $|0\rangle \otimes I$ is equivalent to
$$
\left(\begin{array}{c} 1 \\ 0 \end{array}\right)\otimes\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)=\left(\begin{array}{cc}
1 & 0 \\
0 & 1 \\
0 & 0 \\
0 & 0
\end{array}\right)
$$
(if you don't know where this comes from, go back to the definition of the tensor product). This is a $4\times 2$ matrix, meaning it's the right size for you to pre-multiply by $U$. Similarly,
$$
\langle 0|\otimes I\equiv \left(\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{array}\right)
$$
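Both identities can be checked mechanically with a tiny Kronecker-product routine (numpy.kron gives the same results), reproducing the matrices above:

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of rows."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

I2 = [[1, 0], [0, 1]]
ket0_I = kron([[1], [0]], I2)   # |0> (a 2x1 column) tensor I  -> 4x2
bra0_I = kron([[1, 0]], I2)     # <0| (a 1x2 row) tensor I     -> 2x4
```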
so that, overall your matrix $A$ comes out as $2\times 2$ (it's the action of what happens to the second qubit if the first qubit starts and ends in state $|0\rangle$). | {
"domain": "quantumcomputing.stackexchange",
"id": 2890,
"tags": "mathematics, linear-algebra"
} |
Find equivalent LTL formula, without Y (Yesterday) operator. How can I handle first state? | Question:
The task is to find an equivalent LTL formula for $G(a \Rightarrow Yb)$, which doesn't contain the Y operator. My idea is to search for invalid path patterns with 2 $a$'s in a row, e.g. bbbbaab.
Therefore I'm thinking of $G(Xa \Rightarrow b)$: whenever an $a$ occurs in the next state, the current state must satisfy $b$. But I have a problem with the starting states. Consider a path starting with $a$, e.g. s3->s4->...: this path would be false for the original formula (iv), since $Yb$ is always false in the first state. But this exact path would be true for the formula $G(Xa \Rightarrow b)$.
How do I have to modify my formula to also cover the starting states correctly?
Answer: Initial states are somewhat of an anomaly. A similar problem is encountered in the $X$ operator when you try to define LTL over finite words. That is, it somehow would have made more sense to define PLTL over computations that are "infinite on both sides".
So, being an anomaly, it is probably easiest to treat it as such, and just explicitly deal with the initial state, requiring the first state not to have $a$.
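A quick finite-trace sanity check of this idea — treating $Xa$ as false at the last position and $Yb$ as false at the first, and enumerating all short traces over $\{a,b\}$ — supports the equivalent formula given below (a sketch, not a proof for infinite paths):

```python
from itertools import product

def orig(trace):
    # G(a -> Yb): wherever a holds, b must have held at the previous position
    return all(not a or (i > 0 and trace[i - 1][1])
               for i, (a, _) in enumerate(trace))

def rewritten(trace):
    # G(Xa -> b) and not a, with strong X (false at the last position)
    g = all(not (i + 1 < len(trace) and trace[i + 1][0]) or b
            for i, (_, b) in enumerate(trace))
    return g and not trace[0][0]

states = list(product([False, True], repeat=2))   # valuations of (a, b)
agree = all(orig(t) == rewritten(t)
            for n in range(1, 5)
            for t in product(states, repeat=n))
```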
Thus, an equivalent formula would be $G(Xa\to b)\wedge \neg a$. | {
"domain": "cs.stackexchange",
"id": 19360,
"tags": "model-checking, linear-temporal-logic"
} |
Units of variable acceleration | Question: If we have a function, which describes, how a displacement in space along a line varies as a function of time: e.g.: $s(t)=vt$, its units are meters because $[v]=\frac{\text{meters}}{\text{seconds}}$ and $[t]=\text{seconds}$ thus $[vt]=\frac{\text{meters}}{\text{seconds}}\text{seconds}=\text{meters}$
A linearly accelerated motion would have the units $\mathrm{\frac {\frac ms} s=\frac m{s^2}}$
A variably accelerated motion (jerk) could have the units $\mathrm{\frac {\frac {\frac ms} s}s=\frac m{s^3}}$
What units would the variable acceleration described by the function $v\sin(\frac{t}{t_{MAX}})$ have?
EDIT:
The meaning of that last function is supposed to be "a speed along a straight line, whose magnitude varies from $0$ to $v$ (as time varies from 0 to $t_{MAX}$), with the $v(t)$ "shape" like the sine function which varies from 0 to 1". Consider only the domain from $0$ to $\frac\pi 2$.
Answer: With the last revision, you have the expression $v \sin (t/t_{MAX})$ where I assume that $t_{MAX}$ is a time in consistent units. (Given the rest of the question, those units should be seconds.)
That has the same units as $v$, which in your case is a velocity with units of m/s.
The corresponding acceleration is
$$ \frac{d}{dt} \left[ v \sin (t/t_{MAX}) \right] = \frac{v}{t_{MAX}} \cos(t/t_{MAX}) $$
and the corresponding jerk (time-derivative of acceleration) is
$$ \frac{d^2}{dt^2} \left[ v \sin (t/t_{MAX}) \right] = -\frac{v}{t_{MAX}^2} \sin(t/t_{MAX}) .$$
Note the importance of having the dimensions of the argument of the trig functions correct, i.e. dimensionless. Those factors of $t_{MAX}$ in the derivative make the dimensions of acceleration and jerk work out to m/s$^2$ and m/s$^3$, respectively, which would not have happened in your original formulation where the argument was just "$t$".
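A quick numerical check of the derivative above, using made-up values for $v$ and $t_{MAX}$:

```python
import math

v, t_max = 3.0, 2.0                        # m/s and s (illustrative values)
f = lambda t: v * math.sin(t / t_max)      # a velocity, in m/s

t, h = 0.7, 1e-6
numeric = (f(t + h) - f(t - h)) / (2 * h)  # central difference, m/s^2
analytic = (v / t_max) * math.cos(t / t_max)
```

The factor $1/t_{MAX}$ in the analytic expression is exactly what carries the extra power of 1/s that turns a velocity into an acceleration.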
If I define $\omega = 1/t_{MAX}$, then you could adopt units where $\omega = 1$. But you need to do it consistently such that you are no longer measuring time in seconds but rather in $\omega s$ (where $\omega$ has units of 1/s and the "s" in the last expression is the second). If you do that consistently, it's ok to write $\sin t$ in those units but then your velocity will, in consistent units, be in meters not meters-per-second. | {
"domain": "physics.stackexchange",
"id": 60445,
"tags": "kinematics, units, dimensional-analysis, si-units"
} |
Find the bending moment of a pole attached to a moving block | Question: I'm having trouble with the following problem.
What I've done so far:
x-y is the usual coordinate system.
$a=\frac{F}{m}=\frac{800}{60}$ and the y component of this is $a_y=a\sin{60^\circ}$.
To figure out the force acting on the rod and contributing to the momentum, I take the sum of all forces in the y direction. Note that from now on, I am thinking about the rod as being separated from the rest of the body. The weld is represented by forces ($O_y, O_x$), and the rod is moving upwards with the acceleration calculated above.
$\Sigma F_y= O_y-20g = 20a_y\implies O_y=20a_y+20g$
Now, the momentum equation about the center of mass of the rod should be
$\Sigma M_G=0.7O_y+\bar{I}\omega_G=\bar{I}\alpha=0$
Because the rod is not rotating. Is this equation right? (Why not?)
If it is right, it implies that $\bar{I}\omega_G=-0.7O_y$. Where do I go from here? The things I've tried from here on don't yield the right answer, and I do not have much confidence in those strategies.
The correct answer, if anyone cares, is 196 Nm.
Answer: The weld acts on the rod with force components $B_x$ and $B_y$ as well as a moment $M$. The equations of motion for the rod are
$$ B_x = m_{rod} \left( - \cos\theta \, \ddot{q} \right) $$
$$ B_y = m_{rod} \left( \sin\theta \, \ddot{q} + g \right) $$
$$ M - \frac{L}{2} B_y = I_{rod} \dot{\omega}_{rod} = 0 $$
where $q$ is the distance along the guide of travel and the last equation is the sum of moments about the center of gravity of the rod. Gravity is included above as $g$.
The equations of motion for the block are
$$ -F\,\cos\theta + N \sin\theta - B_x = m_{block} \left(- \cos\theta\, \ddot{q} \right) $$
$$ F\,\sin\theta + N \cos\theta - B_y = m_{block} \left(\sin\theta\, \ddot{q} + g\right) $$
where $N$ is the contact normal force.
Combined the above is
$$ m_{rod} \left( \ddot{q} + g\sin\theta \right) - F = -m_{block} \left( \ddot{q} + g\sin\theta \right) $$
$$ N-m_{rod} g \cos\theta = m_{block} g \cos\theta $$
with solution
$$ N = \left(m_{rod}+m_{block}\right)\, g \cos\theta $$
$$ \ddot{q} = \frac{F}{m_{rod}+m_{block}} - g \sin \theta $$
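Numerically, with $M = \frac{L}{2} B_y$ (the moment relation noted below) and the question's values (F = 800 N, 60 kg total, 20 kg rod, θ = 60°, moment arm 0.7 m, g = 9.81 m/s²), this reproduces the quoted 196 N·m:

```python
import math

F, m_total, m_rod = 800.0, 60.0, 20.0   # N, kg, kg (from the question)
theta = math.radians(60.0)
g, half_L = 9.81, 0.7                   # m/s^2, m

q_dd = F / m_total - g * math.sin(theta)    # acceleration along the guide
B_y = m_rod * (math.sin(theta) * q_dd + g)  # vertical weld force on the rod
M = half_L * B_y                            # bending moment at the weld, N*m
```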
Now going back to moment, $M=\frac{L}{2} B_y$ which you can solve now. | {
"domain": "physics.stackexchange",
"id": 7222,
"tags": "homework-and-exercises, newtonian-mechanics, forces, torque, statics"
} |
What causes destruction in car crash? | Question: Suppose a car crashes at a speed $v$ against a wall and comes to a stop. Now if the car crashes at $2v$, does that mean it suffers twice as much destruction, if that can be objectively measured?
If force is what causes the damage and $F = ma$ and $a = \Delta v/\Delta t$, so $F = m \Delta v/\Delta t$. Assuming $\Delta t$ remains the same, then the force acting upon the car in the crash at $2v$ is just twice the force of the crash at $v$.
But by conservation of energy, if the cars come to a stop all the kinetic energy is converted into other forms of energy. So a crash at twice the speed is four times as energetic, since $E = m v^2 / 2$. If that's the case, shouldn't it cause 4 times as much destruction?
Answer: OK, for a car, its a bit hard to calculate. Aside from the fact that "destruction" is not something you can quantify (there is no meaning for "twice as much destruction"), a car has just too many complicated parts. You'd need to know Young's moduli and breaking stresses for each part, and have a large computer to do the calculations.
Anyways, here are some points that may make the situation clearer.
1) When you have any sort of collision, there is a rapid deceleration. This implies a large force in a short time. As $F=\frac{dp}{dt}$, the impulse felt by the body is $$\Delta p=J=\int F\,dt$$ Now, we cannot calculate the value of $F$ from just this equation, as we do not know how the body behaves when deformed. For a simple body which obeys Hooke's law/Young's modulus, $F$ as a function of time (for the duration of the collision with the wall) will be:
$$v\sqrt{km}\,\sin\!\left(\sqrt{\frac{k}{m}}\,t\right)$$
Where $k=\frac{YA}{L}$ of the body obeying Young's modulus (or $k$ can be thought of as the spring constant). The above equation is easily calculated using the SHM equations of a spring-mass system. If you want the equations for two bodies striking each other, replace $k$ with $\frac{k_1k_2}{k_1+k_2}$, and $m$ with $\frac{m_1m_2}{m_1+m_2}$. You will also need to replace $v$ with $v_{rel}$ between the two bodies before collision.
So, if by deformation, you want the amount of force that acts on it, then $F\propto v$.
2) On the other hand, you may only want the amount that the body is crushed; i.e. the length by which it is shortened. This is easy to calculate for a spring-mass system (one that obeys Hooke's law/Young's modulus).
Equating energies, we get $$\frac{1}{2}mv^2=\frac{1}{2}kx^2$$ in standard symbols, thus $$x=v\sqrt{\frac{m}{k}}$$
The same substitutions as above will generalize this for a two-body problem.
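For the spring-mass model of points 1) and 2), the scaling with $v$ is easy to check numerically (a Python sketch of the toy model, not of a real car):

```python
import math

def collision_metrics(m, k, v):
    """Peak compression, peak force, and kinetic energy for a Hooke's-law
    body of mass m and stiffness k hitting a rigid wall at speed v."""
    x_max = v * math.sqrt(m / k)   # compression scales linearly with v
    f_max = k * x_max              # = v * sqrt(k m), also linear in v
    energy = 0.5 * m * v ** 2      # absorbed energy scales with v^2
    return x_max, f_max, energy
```

Doubling $v$ doubles the compression and the peak force but quadruples the energy, which is exactly the tension raised in the question.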
Again, we got that destruction is proportional to initial velocity.
3) We may also look at destruction as the amount of energy used in destroying the body. In this case, we get that it is proportional to $v^2$, as long as no energy is lost through sound (in car collisions, quite a lot is).
In conclusion:
As destruction is qualitative not quantitative, we cannot say that "destruction increases 4x" until we define what we mean by destruction in a quantitative manner.
Car collisions are complex; generalizing any of the above formulae for them is not even a good approximation, due to loss of energy by heat and sound, as well as the complexity of parts (overall, a car does NOT obey Hooke's law or Young's modulus; and neither do half of its parts at the magnitudes of collision forces). The answer will vary from car to car.
So really your question is unanswerable, we can only elaborate on the underlying concept.
If you want, though, you could try to find a automobile research center. They computer-simulate these crashes on their computers, so they'll have a more accurate answer to this question. | {
"domain": "physics.stackexchange",
"id": 2353,
"tags": "forces, energy-conservation, collision"
} |
Number of conservation laws | Question: I saw a discussion about the relation between symmetries of the Lagrangian and conservation laws in a textbook of analytical mechanics. A part that was counterintuitive to me was that all the discussion was done in a generalized coordinate system.
Does it imply that the number of conservation laws does not change after a coordinate transformation?
(This question might not be precise, because you can make another conservation law by simply adding a constant. But I want to know the case up to such trivial changes.)
Answer: Along with adding constants, an arbitrary function of existing conserved quantities will also be conserved. If $X_1, X_2, \dots X_n$ are all conserved, then $f(X_1, X_2, \dots X_n)$ will also be conserved for any function $f$. So really the conserved quantities should form a space, with some number of dimensions. The number of dimensions is the number of "nontrivial conserved quantities".
Any particular conserved quantity we examine is going to be a scalar that varies across this space. Any given state of the system corresponds to a point in the space, which doesn't move as the system evolves. Since the point doesn't move, the value of the scalar doesn't change, and so the scalar is conserved.
If the system has $N$ degrees of freedom (i.e. its phase space is $2N$ dimensional) then the dimension of the conserved space is also $2N$. Why? Conserved quantities are allowed to depend on the $t$ coordinate in general, so given a particular state at a particular time, we can just run time backwards to $t=0$, and we get the $N$ initial position coordinates, and the $N$ initial momentum coordinates. At any point in the trajectory, running time back to $t=0$ will give the same initial condition, so these $2N$ variables must all be conserved. (And the initial condition can vary any of these variables independently, so the space must have a full $2N$ dimensions.)
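The harmonic oscillator ($N=1$) makes this concrete: running the state back to $t=0$ yields two quantities that stay constant along any trajectory (a Python sketch; the parameters are toy values):

```python
import math

M, K = 1.0, 1.0           # toy oscillator mass and stiffness
W = math.sqrt(K / M)      # angular frequency

def evolve(x0, p0, t):
    """State at time t of a harmonic oscillator started at (x0, p0)."""
    x = x0 * math.cos(W * t) + (p0 / (M * W)) * math.sin(W * t)
    p = p0 * math.cos(W * t) - M * W * x0 * math.sin(W * t)
    return x, p

def conserved(x, p, t):
    """Run the state backwards to t = 0: these 2N numbers never change."""
    x0 = x * math.cos(W * t) - (p / (M * W)) * math.sin(W * t)
    p0 = p * math.cos(W * t) + M * W * x * math.sin(W * t)
    return x0, p0
```

Evaluating `conserved` at any point along a trajectory returns the same $(x_0, p_0)$, so these $2N = 2$ (time-dependent) functions of the state are conserved.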
A coordinate transformation can't change the number of dimensions in a space, so there will always be $2N$ conserved quantities, though some of them may end up looking very complicated. | {
"domain": "physics.stackexchange",
"id": 80285,
"tags": "classical-mechanics, lagrangian-formalism, conservation-laws, symmetry, noethers-theorem"
} |
What does the following diagram represent? | Question: Suppose, the following is a diagram of a protein's polypeptide chain:
What does this diagram represent?
What do the letters A, E, M, W, L, N, S, etc. represent? (I suppose these are amino acid codes. Right?)
Which one is the c-alpha atom here?
What do the sequence numbers i-2, i-1, i, i+1, i+2, etc. represent? Why are there negative and positive increments from i?
The following is a related diagram:
Answer:
Yes, those are 1-letter amino acid codes
The image just shows each amino acid as a single point, so it doesn't show the individual atoms, including the C−α.
This may require knowing where this figure came from to be more specific, but they seem to be interested in the distance between amino acids before and after some particular point in the chain. | {
"domain": "chemistry.stackexchange",
"id": 15941,
"tags": "proteins, amino-acids"
} |
Deficient Numbers | Question: I have found the following interesting challenge on the web:
Deficient Numbers
A number is considered deficient if the sum of its factors is less than twice that number.
For example: 10 is a deficient number because its factors are 1, 2, 5, 10 and their sum is 1 + 2 + 5 + 10 = 18, which is less than 10 * 2 = 20.
Challenges
Easy level: write a program to verify whether a given number is deficient or not.
Medium level: write a program to find all the deficient numbers in a range.
Hard level: given a number, write a program to display its factors, their sum and then verify whether it's deficient or not.
I implemented it in C:
#include <stdlib.h>
#include <stdio.h>
#define DEBUG 1
/*
XXX EASY-OPTION IMPLEMENTATION
A list data structure is used to store the factors of a given number.
A simple boolean-like data type is then returned by the main isDeficient function.
*/
typedef enum bool bool;
enum bool {
false, true
};
typedef struct ListElem ListElem;
struct ListElem {
int value;
ListElem* next;
};
typedef struct List List;
struct List {
ListElem* head;
int length;
};
List* listInit() {
List* list = (List*) malloc(sizeof(List));
list->head = NULL;
list->length = 0;
return list;
}
ListElem* listAppend(List* list, int elem) {
ListElem* newHead = (ListElem*) malloc(sizeof(ListElem));
newHead->value = elem;
newHead->next = list->head;
list->head = newHead;
list->length++;
return newHead;
}
void listPrint(List* list) {
ListElem* head = list->head;
printf("list > ");
do {
printf("%i ", head->value);
head = head->next;
} while (head != NULL);
printf("\n");
}
void listSort(List* list) {
int inactive = 0, debug = 0;
while (inactive < list->length - 1) {
inactive = 0;
ListElem* previous = NULL; // Initialise pointers...
ListElem* current = list->head;
ListElem* next = list->head->next;
for (int i = 0; i < list->length - 1; i++) {
if (current->value > next->value) {
if (i != 0)
previous->next = next;
current->next = next->next;
next->next = current;
if (i == 0) // Update list head...
list->head = next;
inactive = 0;
previous = next;
next = current->next;
} else {
inactive++;
previous = current;
current = next;
next = next->next;
}
}
debug += list->length - 1;
}
#if DEBUG
printf("Debug info: sorting took %i iterations.\n", debug);
#endif
}
bool isInList(List* l, int elem) {
ListElem* listElem = l->head;
for (int i = 0; i < l->length; i++) {
int value = listElem->value;
if (value == elem)
return true;
listElem = listElem->next;
}
return false;
}
List* findFactors(int n) {
List* factors = listInit();
int tempRes = 0;
int debug = 1;
listAppend(factors, 1);
if (!(isInList(factors, n))) // Avoid duplicates only when n = 1...
listAppend(factors, n);
for (int d = 2; d <= n / 2; d++) { // The <= operator is necessary when n = 4 or the iterations don't start...
if (!(n % d)) {
tempRes = n / d;
if (isInList(factors, tempRes))
break;
listAppend(factors, d);
if (!(isInList(factors, tempRes))) // Avoid duplicate entries when the divisor and the result of the division are the same...
listAppend(factors, tempRes);
}
debug++;
}
#if DEBUG
printf("Debug info: %i's factors found with %i iterations.\n", n, debug);
listSort(factors);
listPrint(factors);
#endif
return factors;
}
int calculateSum(List* list) {
int sum = 0;
ListElem* listElem = list->head;
for (int i = 0; i < list->length; i++) {
sum += listElem->value;
listElem = listElem->next;
}
return sum;
}
bool isDeficient(int n) {
List* f = findFactors(n);
if (calculateSum(f) < 2 * n)
return true;
return false;
}
// XXX MEDIUM-OPTION IMPLEMENTATION
List* findDeficients(int min, int max) {
List* deficients = listInit();
for (int i = min; i <= max; i++) {
if (isDeficient(i))
listAppend(deficients, i);
}
return deficients;
}
// XXX HARD-OPTION IMPLEMENTATION
int main(int argc, char* argv[]) {
if (argc != 2) {
printf("Error: arguments are not one!\n");
return 0;
}
int n = atoi(argv[1]);
List* facts = findFactors(n);
printf("The factors of %i are:\n", n);
listPrint(facts);
int sum = calculateSum(facts);
printf("Their sum is:\n%i\n", sum);
if (sum < 2 * n)
printf("%i IS deficient.\n", n);
else
printf("%i IS NOT deficient.\n", n);
return 0;
}
What about it? In particular, I would like to ask you for a feedback about the List structure and the sorting function. Is their code readable and understandable? Could it be improved with regard to performance?
Answer: The algorithm:
Well, findFactors is far too inefficient. Better do a prime-factoring, and then create the list of all factors from that.
If you need to do enough prime-factorings, consider pre-calculating all candidate primes, be it during or before compilation or before use.
Only use a list if you have to: Using and when needed realloc-ing a dynamically-allocated array is far more efficient and simpler in most cases. And it allows you to rely on qsort() for sorting.
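For comparison, here is the divisor sum as a sketch (Python for brevity) using trial division only up to $\sqrt{n}$ — simpler than full prime-factoring, but already far cheaper than scanning up to n/2:

```python
def divisor_sum(n):
    """Sum of all divisors of n; each divisor d <= sqrt(n) pairs with n // d."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:       # avoid counting a perfect-square root twice
                total += n // d
        d += 1
    return total

def is_deficient(n):
    return divisor_sum(n) < 2 * n
```

Note that the pairing `d` / `n // d` also sidesteps the duplicate-checking that `isInList` does in the original code.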
Your code:
Avoid over-long lines, as horizontal scrolling kills readability. Yes, your IDE might allow for longer lines without scrolling or auto-wrapping, but it still makes things difficult. Even if everyone used an IDE on a big screen and gave the code-window the full screen-width, it slows reading considerably.
If you use the preprocessor for configuration, allow overriding by pre-defining.
This:
#define DEBUG 1
becomes
#ifndef DEBUG
#define DEBUG 1
#endif
You might want to upgrade to C99, then you will have a true boolean type. If you don't, stay with an integer. Emulating a true boolean type with an enum instead is painful.
If you want to introduce a typedef-name for a struct- / enum-name, consider merging that:
typedef struct ListElem ListElem;
struct ListElem {
int value;
ListElem* next;
};
becomes:
typedef struct ListElem {
int value;
struct ListElem* next;
} ListElem;
Anyway, the customary name for a ListElem is ListNode, which, though the same length, avoids curious abbreviations.
"Do I cast the result of malloc?" No, we are writing C here.
Getting any resource, even memory, can fail. Handle it, don't ignore it.
Also, prefer sizeof expr over sizeof(TYPE), doing so couples size requested and the use of the memory, making errors less likely, whether at first writing or after re-factoring.
listAppend() is mis-named, it should be listPrepend(). I also wonder why it returns the new node, which the caller probably doesn't care about, and can easily and efficiently get anyways.
listPrint() has Undefined Behaviour for an empty list. findDeficients() for example can result in an empty list.
listSort() doesn't really have to re-order the nodes to sort the values. Not doing so allows for some simplification.
There is no reason to use the lists size to iterate over the whole list. That way, your code gets simpler and more efficient:
bool isInList(List* l, int v) {
for (ListElem* p = l->head; p; p = p->next)
if (p->value == v)
return true;
return false;
}
The same holds for calculateSum().
If you have to choose one of two expressions, you should remember that's what the conditional operator cond ? true_exp : false_exp excels at.
Consider marking all internal functions static to avoid exporting the symbol and encourage inlining.
Please output diagnostic messages to stderr instead of stdout, so the caller can separate them easily.
The error-message for wrong use should explain proper use. argv[0] contains the program-name for that. Try to follow convention with your text.
Some comments about the challenge:
The second level is only harder than the first if you take advantage of efficiencies of scale, as you should.
The last level is only harder if you insist on sorted output, and didn't use any container earlier. | {
"domain": "codereview.stackexchange",
"id": 31914,
"tags": "c, programming-challenge, sorting, linked-list, factors"
} |
Reading and classifying lines from a file | Question: I'm new to C which I'm learning in university now, and I'm not sure if the following is considered in C good practices or not.
For an assignment in a simple Classification problem I wrote the following code:
int train(FILE *file, const dim dim, const int m, vector *linearSeparator)
{
size_t len = 0;
char *line;
bool stop = false;
Sample *sample = NULL;
for(int i = 0; i < m && getline(&line, &len, file) != -1 && !stop; i++)
{
if (strToSample(line, SEPARATOR, dim, sample) != EXIT_SUCCESS ||
calibrateLinearSeparator(sample, linearSeparator, dim) != EXIT_SUCCESS)
{
printError(TRAINING, "Failed processing line");
stop = true;
}
free(line);
freeSample(sample);
}
line = NULL;
return stop ? EXIT_FAILURE : EXIT_SUCCESS;
}
The functions:
strToSample gets a char* line and constructs a Sample object out of it, setting the *sample pointer. It returns EXIT_SUCCESS if succeeded and an error code if failed.
calibrateLinearSeparator uses the sample set by the previous function to manipulate the linearSeparator object (vector - a custom struct to represent a double[]). This function too returns EXIT_SUCCESS if succeeded and an error code if failed.
Both these functions print details about their internal errors and at this level I'm printing in what sample id that internal error occurred.
My question is:
Is this way of writing the if statement a good practice? Is it readable? Should I do it differently?
It feels as if these functions have side effects. They get pointers to set, but are used in an if statement to check whether they succeeded, in a way where order matters.
The idea behind writing it this way is that if either of these functions fail I want to print an error and not continue the loop. If the first one fails then the second is not performed (which is what I want). In any case of success or failure I want to free the allocated memory.
Is it a good practice in C to write the for condition the way I did, with 3 conditions in it (checking i, trying to read the next line, and !stop)? It seems to me that this helps make the code cleaner, as there are fewer "details" lines (e.g. incrementing i++ if the loop were a while loop on getline).
Answer: The review from @chux covers most of what I'd have said, so I'll just add a few additional points.
Avoid variable type and name clashes
The function prototype has const dim dim as a parameter which is very peculiar. The only way this compiles is if dim is a user-defined type and also the parameter name. That's a recipe for confusion and should be avoided.
Understand the meaning of const
When a formal parameter is labeled const as with dim and m in this function, it means they can't be altered. For complex types and pointers this is a good idea, but if we're passing an int, it probably doesn't make much difference. The reason is that plain types like int and char are passed by value, so the function only has a copy of the variable anyway. This means that one could, in this case, eliminate const from parameter m (and make it of type size_t -- negative numbers don't seem to have any plausible semantic meaning here) and then we no longer really need variable i because the for loop can be written like this:
for ( ; m && !stop && getline(&line, &len, file) != -1; --m)
In this case, I've slightly rearranged the terms to make sure we only read a line when both m is nonzero and stop is false.
Set a boolean variable directly
Instead of testing a condition and then explicitly setting a boolean based on that, I'd suggest simply setting the boolean value directly. I'd name the variable ok inverting its sense, and write the loop like this:
for ( ; m && ok && getline(&line, &len, file) != -1; --m)
{
ok = strToSample(line, SEPARATOR, dim, sample) == EXIT_SUCCESS &&
calibrateLinearSeparator(sample, linearSeparator, dim) == EXIT_SUCCESS;
freeSample(sample);
}
Then cleanup and error handling is done after the loop:
free(line);
if (!ok) {
printError(TRAINING, "Failed processing line");
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
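The shape of this loop translates directly; here is a Python sketch of the "set the boolean from the condition" pattern, with stub callables standing in for strToSample and calibrateLinearSeparator:

```python
def train(lines, m, parse, calibrate):
    """Process at most m lines; the ok flag is assigned straight from the
    condition rather than being set inside an if, as suggested above."""
    ok = True
    for line in lines[:m]:
        if not ok:
            break
        ok = parse(line) and calibrate(line)
    return ok
```

As in the C version, `and` short-circuits, so `calibrate` is never called on a line that failed to parse.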
Alternatively, if you want to give the user a little hint as to where the error might be, reintroduce line counter i and alter the printError code within the loop to be something like this:
printError(TRAINING, "Failed processing line %d", i+1);
Use the most appropriate return value type
It appears that train only returns EXIT_SUCCESS or EXIT_FAILURE. That strongly suggests to me that its return type should be bool and that it should return true or false instead. If that's done, instead of this:
return stop ? EXIT_FAILURE : EXIT_SUCCESS;
One could write this:
return stop;
I'd prefer the shorter, simpler version. The same thing probably applies to strToSample and calibrateLinearSeparator as well. | {
"domain": "codereview.stackexchange",
"id": 26919,
"tags": "beginner, c, file, error-handling, c99"
} |
What is the function of nitrobenzene as a solvent in Friedel Crafts alkylation reaction? | Question: I would like to know what is the function of nitrobenzene as a solvent in Friedel Crafts alkylation reaction.
The answer to this question suggests that a Friedel–Crafts reaction is not possible with nitrobenzene, and I understand that it's because of the big loss of electron density of the benzene ring due to the highly deactivating nature of $\ce{-NO2}$
Answer: The aluminium halide complexes with nitro compounds (e.g., nitrobenzene) are known to display catalytic properties in some organic reactions, including Friedel–Crafts alkylations. Those complexes are believed to be more soluble in the solvent (nitrobenzene).
Reference: Russian Journal of Coordination Chemistry 2001, 27(7), 469–475. | {
"domain": "chemistry.stackexchange",
"id": 9888,
"tags": "organic-chemistry, aromatic-compounds, solvents, nitro-compounds"
} |
Calculating cutoff frequency for Butterworth filter | Question: I have a problem while calculating the cutoff frequency. Suppose we have these specs.
Firstly, I calculated the order of the filter and got $N=5.8858$ and round it up to get $N=6$.
Now I'm supposed to get $\Omega_c$. Using these equations:
\begin{cases}1+\left(\frac{0.2\pi}{\Omega_c}\right)^{2N} = \left(\frac{1}{0.89125}\right)^2 \quad&\quad (1)\\
1+\left(\frac{0.3\pi}{\Omega_c}\right)^{2N} = \left(\frac{1}{0.17783}\right)^2\quad&\quad (2)
\end{cases}
Now with $N=6$ and $T=1$, substituting in $(1)$
\begin{align}
\left(\frac{0.2\pi}{\Omega_c}\right)^{12} &=\left(\frac{1}{0.89125}\right)^2 - 1 =0.25893\\
\implies (\Omega_c)^{12} &= \frac{{(0.2\pi)}^{12}}{0.25893}\\
&= 40.29
\end{align}
But in the textbook it says $\Omega_c = 0.7032$; what did I do wrong? Any help would be appreciated.
Answer: Everything is alright. You made a mistake in the final calculation.
$$\Omega_c^{12} = \frac{{(0.2\pi)}^{12}}{0.25893} \implies \Omega_c = \sqrt[12]{\frac{{(0.2\pi)}^{12}}{0.25893}} =\frac{0.2\pi}{\sqrt[12]{0.25893}} = 0.7032$$
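The same arithmetic, checked numerically (a quick Python sketch; the function name is mine):

```python
import math

def butterworth_cutoff(omega_p, delta_p, N):
    """Cutoff from the passband-edge spec |H(w_p)| = delta_p for an
    N-th order Butterworth magnitude 1 / sqrt(1 + (w/w_c)^(2N))."""
    eps2 = (1.0 / delta_p) ** 2 - 1.0          # = 0.25893 for delta_p = 0.89125
    return omega_p / eps2 ** (1.0 / (2 * N))   # w_c = w_p / eps2^(1/(2N))
```

Both the closed form and the numeric check give $\Omega_c \approx 0.7032$,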
which is the desired result. | {
"domain": "dsp.stackexchange",
"id": 4662,
"tags": "filters, proof, butterworth"
} |
Range class to represent limited values, like HP, Stamina and so on | Question: I made a small script for Unity3D, meant to be use as GameComponent for GameObjects. It is supposed to represent values that have a max value, and an actual value. In games that would be the case, for instance, for the Health Points, the Stamina, Magic Points and so on. I would love to hear your feedback.
/////////////
//
// This is meant for things like Stamina, or HP. You have a max
// value, and an actual value.
//
// It inherits from MonoBehaviour because it is meant to be used
// as a Game Object Component
//
/////////////
using UnityEngine;
using UnityEngine.UI;
public class Range : MonoBehaviour {
private int val, max;
public Slider slider;
// Is/Has/Can
public bool IsDepleted() { return val == 0; }
public bool IsFull() { return val == max; }
public bool HasAtLeast(int _val) { return this.val >= _val; }
// Actions
public void Fill() { val = max; }
public void Deplete() { val = 0; }
public void Increase(int _val) { this.SetValue(this.val + _val); }
public void Decrease(int _val) { this.SetValue(this.val - _val); }
// Getters
public int GetValue() { return this.val; }
public int GetMax() { return this.max; }
// Setters
public void SetValue(int amount) {
if (amount > this.max) this.Fill();
else if (amount < 0) this.Deplete();
else this.val = amount;
UpdateSlider();
}
public void SetMax(int amount) {
this.max = amount;
if (this.val > this.max) this.Fill();
UpdateSlider();
}
private void UpdateSlider() {
if (slider) {
slider.maxValue = this.max;
slider.value = val;
}
}
}
Answer: Problematic
Fill and Deplete are public, but don't call UpdateSlider, whereas Increase/Decrease do.
SetMax should reject values < 0 (0 could be allowed, for value forced to 0)
Design Style
There is a function in the Unity Mathf static class that does bounds checking Mathf.Clamp
C# has properties, while you can have the pair GetX/SetX they are preferred.
public int Value {
    get { return val; }
    set {
        val = Mathf.Clamp(value, 0, Max);
        UpdateSlider();
    }
}
public int Max {
    get { return max; }
    set {
        if (value < 0) throw new ArgumentOutOfRangeException("Max");
        max = value;
    }
}
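The same clamping-property pattern, transliterated into Python for illustration (a toy stand-in — the real thing stays a Unity C# property as above):

```python
class ClampedValue:
    """Value/max pair where the value can never leave [0, max]."""

    def __init__(self, maximum):
        if maximum < 0:
            raise ValueError("max must be non-negative")
        self._max = maximum
        self._val = 0

    @property
    def value(self):
        return self._val

    @value.setter
    def value(self, amount):
        # clamp into [0, max], like Mathf.Clamp(value, 0, Max)
        self._val = max(0, min(amount, self._max))
```

The point in both languages is the same: every write goes through one clamping setter, so Fill/Deplete/Increase/Decrease can't leave the value out of range.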
Consider
Increase and Decrease could be replaced by overriding + and -, as += and -= will use such overloads. | {
"domain": "codereview.stackexchange",
"id": 20091,
"tags": "c#, unity3d, role-playing-game"
} |