anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
How does an episode end in OpenAI Gym's "MountainCar-v0" environment? | Question: I am working on OpenAI's "MountainCar-v0" environment. In this environment, each step that an agent takes returns (among other values) the variable named done of type boolean. The variable gets a True value when the episode ends. However, I am not sure how each episode ends. My initial understanding was that an episode should end when the car reaches the flagpost. However, that is not the case.
What are the states/actions under which the episode terminates in this environment?
Answer: The episode ends when either the car reaches the goal, or a maximum number of timesteps has passed. By default the episode will terminate after 200 steps. You can customize this with the _max_episode_steps attribute of the environment. | {
"domain": "ai.stackexchange",
"id": 2177,
"tags": "reinforcement-learning, environment, gym"
} |
Lorentz group in SUSY | Question: Why is the Lorentz group also included when we go to supersymmetry? That is, after we extend our symmetry to supersymmetry, we carry the Lorentz group with us. Why not some other group instead?
Answer: The Haag-Lopuszanski-Sohnius (HLS) theorem yields a preference for the super-Poincare algebra. When the assumptions of the HLS theorem are not fulfilled, other non-trivial extensions of the spacetime Poincare algebra are possible, cf. e.g. this Phys.SE post. | {
"domain": "physics.stackexchange",
"id": 27591,
"tags": "quantum-field-theory, special-relativity, supersymmetry, group-theory, lorentz-symmetry"
} |
Proving the given language isn't context free using Pumping Lemma | Question: I want to prove that the following language isn't context-free:
$L=\{a^kba^kba^k|k\in \mathbb{N}\}$
Let $z=a^nba^nba^n$ be a String from $L$
$n$ is the pumping length and
$|z|=3n+2 > n$
and $z=uvwxy$ with $|vx|\geqslant 1$, $|vwx|\leqslant n$
but I'm not sure how to determine the correct substrings. How should I choose them? I tried the cases below; are they correct?
Case 1 :
$u=a^n,\ vwx=ba^n,\ y=ba^n$
Case 2 :
$u=a^{n-l},\ vwx=a^lb,\ y=a^nba^n$
Case 3 :
$u=a^nb,\ vwx=a^nb,\ y=a^n$
Case 4 :
$u=a^nba^{n-l},\ vwx=a^{l-k},\ y=a^kba^n$
Case 5 :
$u=a^nb,\ vwx=a^n,\ y=ba^n$
Answer: The pumping segments $v$ and $x$ can be anywhere in the string assuming they are at most $n$ apart, i.e., $|vwx|\le n$. Like you I get five possibilities, but I think you wrote some special cases rather than the general picture.
Inside one of the $a$-segments, for instance :
the first $u=a^i$, $vwx=a^j$, $y=a^{n-i-j}ba^nba^n$ where $i+j\le n$ and $j\ge 1$. Or
the second $u=a^nba^i$, $vwx=a^j$, $y=a^{n-i-j}ba^n$ where $i+j\le n$ and $j\ge 1$. Or
the third $\dots$ .
Or overlapping with one of the $b$'s. Of course because of the lengths we cannot overlap with both $b$'s.
First: $u=a^{n-i}$, $vwx=a^iba^j$, $y=a^{n-j}ba^n$ where $0\le i,j\le n$, $i+j< n$ .
Second $\dots$ .
However this does not solve the pumping problem. Note we repeat the strings $v,x$ when pumping. In these last cases we do not know how $v,x$ look. Do they contain the $b$? More cases.
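As a mechanical sanity check (not a substitute for the proof), a short script can confirm for one concrete decomposition that pumping inside a single $a$-segment leaves the language; the membership test and the particular choice of $u,v,w,x,y$ below are my own illustration:

```python
import re

def in_L(s):
    """Membership test for L = { a^k b a^k b a^k : k in N }."""
    m = re.fullmatch(r"(a*)b(a*)b(a*)", s)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

n = 4
z = "a" * n + "b" + "a" * n + "b" + "a" * n
assert in_L(z)

# Choose u = "", v = "a", w = "", x = "a", y = the rest, so v and x sit in
# the first a-segment and |vwx| = 2 <= n.  Pumping with i = 2 inserts
# |vx| = 2 extra a's into the first segment only, so the result leaves L.
pumped = "a" * (n + 2) + "b" + "a" * n + "b" + "a" * n
print(in_L(pumped))  # False
```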
A shortcut to the proof would be to make the following observation.
The pumping segments cannot contain $b$.
That leaves five cases for the position of $v$, $x$ in the three $a$-segments. Either deal with them or argue:
Now that $v$, $x$ can only contain $a$'s, we can pump either one or two of the three $a$-segments in $z$. That means after pumping they cannot all contain the same number of $a$'s. | {
"domain": "cs.stackexchange",
"id": 9339,
"tags": "formal-languages, context-free, pumping-lemma"
} |
Does anything like this covalent concerted bond exchange reaction exist? | Question: I'm wondering if any reaction pathway exists (not involving enzymes) to do the following in aqueous solvent at temperatures < ~100°C:
To clarify, it's important however that the reaction is concerted, and that the W - X and Y - Z bonds are not broken until both the W-Y and X-Z bonds are formed. Bonds here must be covalent, but otherwise can be of any type. Any kind of catalyst can be used. Of course, it would be nice to know of an efficient reaction with minimally exotic conditions or reagents (i.e. it would be nice to avoid heavy metals for oxidative-addition type reactions), but I won't push my luck.
Does something like this exist?
Answer: $$\ce{W=X + Y=Z ->[\mathrm{cat}] W=Y + X=Z}$$ is known as olefin metathesis, achieved using Grubbs/Hoveyda/Fürstner catalysts.
Water-soluble Ru catalysts have been prepared in the Grubbs group (DOI). | {
"domain": "chemistry.stackexchange",
"id": 1200,
"tags": "reaction-mechanism"
} |
When was it realised that most major moons orbit in the equatorial plane of their parent planets? | Question: Inspired by the discussion of the moons of Uranus providing a clue to the planet's axis of rotation in this question, I'm wondering when it was realised that the major satellites are typically located in the equatorial plane of their planets. Our own Moon is a counterexample to this trend, so presumably it would have taken a bit of detective work.
Answer: Since the question has sat unanswered for months, I'm trying to give an admittedly incomplete answer.
In summary: By 1700 it would have been a reasonable guess. By 1800 it was becoming an observed trend.
TL;DR:
It's hard to decide when a consensus formed on the typical orbits of satellites, because there are only a few data points (planets with satellites), but we can pin down when each of those few data points became known.
Obviously, the only data point known in antiquity was the Moon, but by that time the whole idea of satellites was far from mainstream.
In 1610 Galileo discovered the four biggest moons of Jupiter and that they orbit Jupiter in roughly the same plane. However, it took until the 1660s for Cassini to discover features on Jupiter (and hence its rotation) and thereby realise that its moons orbit in the equatorial plane.
By 1655 Huygens had discovered Saturn's rings and Saturn's satellite Titan, both of them in the same plane. Between 1671 and 1684, Cassini discovered Tethys, Dione, Rhea and Iapetus, all of them in the plane of the rings except for Iapetus.
The rotation of Saturn is a bit harder to detect, and it wasn't measured until 1794 by Herschel.
There is an additional data point even before anyone could see Saturn rotating: both Jupiter and Saturn appeared elliptical in 17th-century telescopes. Although a causal relation with rotation wasn't established until Newton, as soon as Jupiter's rotation had been discovered it was known that the planet was rotating around the smallest axis of the ellipsoid.
No more data points were added for a long time, because the moons of Mars and the rotation of Uranus were discovered much later, although the first two moons of Uranus had been discovered by Herschel in 1787.
Additionally, it was known from antiquity that all planets were in similar planes, and when sunspots were observed (by Galileo) and the rotation of the Sun could be measured, the Sun's equatorial plane turned out to be close to the orbital planes of all the planets. Therefore, when the idea of planets and satellites as "little solar systems" arrived, the idea of the central body rotating in the same plane as the system wouldn't have seemed strange.
Then we can summarise what an informed observer at the end of the 17th century would have known:
Jupiter's rotation, Jupiter's oblateness, and Jupiter's moons were in the same plane.
Saturn's rings, Saturn's oblateness and most of Saturn's moons were in the same plane.
There were only two known exceptions to the rule of "everything around the same planet in the same plane": Earth's Moon and Iapetus, which aren't in the equatorial plane - although they aren't very far away from it.
With all that at hand, I would say that the idea that most satellites orbit in the equatorial plane of their planet would have been a reasonable guess by 1700.
By 1800 the evidence would have increased with:
The rotation of Saturn had actually been observed, confirming that most satellites and the rings orbit in the equatorial plane.
Uranus had two satellites in the same plane.
With that, it seems even more reasonable to suppose that Uranus' satellites were orbiting in its (unknown) equatorial plane. | {
"domain": "astronomy.stackexchange",
"id": 4430,
"tags": "orbit, solar-system, natural-satellites, history"
} |
My first random password generator | Question: I'm making a simple program that generates a random password of some length with or without special characters, just for the sake of learning the C language. Finally I've got this working very well based on the outputs below:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
char *generate_random_password(int password_lenght, int has_special_characters)
{
const char *letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
const char *digits = "0123456789";
const char *special_characters = "!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~";
char *random_password = malloc(sizeof(char) * (password_lenght+1));
srandom(time(NULL));
if(has_special_characters)
{
char to_be_used[95] = "\0";
strcat(to_be_used, letters);
strcat(to_be_used, digits);
strcat(to_be_used, special_characters);
for(int i = 0; i < password_lenght; i++)
{
const int random_index = random() % strlen(to_be_used);
const char random_character = to_be_used[random_index];
random_password[i] = random_character;
}
}
else
{
char to_be_used[63] = "\0";
strcat(to_be_used, letters);
strcat(to_be_used, digits);
for(int i = 0; i < password_lenght; i++)
{
const int random_index = random() % strlen(to_be_used);
const char random_character = to_be_used[random_index];
random_password[i] = random_character;
}
}
return random_password;
free(random_password);
}
int main(void)
{
printf("%s\n", generate_random_password(17, 1));
printf("%s\n", generate_random_password(17, 0));
return 0;
}
The output is:
|ZzN>^5}8:i-P8197
vPrbfzBEGzmSdaPPP
It's working!
But I'm completely in doubt about these strings, pointers, char arrays, etc. I have no idea if this is written "the right way" or how it could be better. I'm concerned if I allocated the right amount for each string/char array, and if it can break or crash in some future.
PS: I'm new at C programming, that's why I don't know much about pointers and memory management yet.
If anyone can give me some feedback about it I will be very grateful!
Answer: Typo
lenght is spelled length.
Magic numbers
What does 95 signify? You'll want to put this in a named #define or a const.
Allocation failure
After calling malloc, always check that you've been given a non-null pointer. Allocation failure does happen in real life.
Indentation
You'll want to run this through an autoformatter, because your if block has wonky indentation and needs to be shifted further to the right.
Unreachable statement
return random_password;
free(random_password);
This free will never be called; delete it.
Random
The larger conceptual problem with this program is that it uses a very cryptographically weak pseudorandom number generator. This is a large and fairly complex topic, so you'll need to do some reading, but asking for random data from an entropy-controlled system source will already be better than using C rand.
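For comparison (the review concerns C, but the idea carries over to any language), here is a sketch of the same generator built on an OS-backed CSPRNG in Python; note that string.punctuation happens to match the special-character set used in the C code:

```python
import secrets
import string

def generate_password(length, special=True):
    """Each character is drawn via secrets, which uses the OS entropy source."""
    alphabet = string.ascii_letters + string.digits
    if special:
        alphabet += string.punctuation  # same 32 specials as the C version
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password(17))         # a random 17-character string
print(generate_password(17, False))  # letters and digits only
```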
That aside: you aren't calling rand, you're calling random:
The random() function uses a nonlinear additive feedback random number generator employing a default table of size 31 long integers to return successive pseudo-random numbers in the range from 0 to RAND_MAX. The period of this random number generator is very large, approximately 16 * ((2^31) - 1).
It's probably not appropriate for cryptographic purposes. Have a read through this:
https://stackoverflow.com/questions/822323/how-to-generate-a-random-int-in-c/39475626#39475626 | {
"domain": "codereview.stackexchange",
"id": 36161,
"tags": "beginner, c"
} |
Why is it that when I do some substitution, I get $p=1$ for a photon? | Question: I know that this is wrong, but where did i make a mistake? I just use a few equations and substitutions and I get that the momentum of a photon is one. Here is my math:
$p=h/λ$
$p=hf/c$ (substituted using $λ=c/f$)
$c=hf/p$
$c=hc/λp$ (substituted $f=c/λ$)
$1=h/λp$ (divided by $c$)
$1=hλ/λph$ (substituted $p=h/λ$)
$1=p$
After that I substituted $p=h/λ$ and got that $λ=h$. What did I do wrong?
Answer: Your attempt is right up to $1=\frac{h}{\lambda{P}}$. But you then say you substitute $P=\frac{h}{\lambda}$, so how do you get the next relation with $P$ still included? If you replace $P$ by $\frac{h}{\lambda}$, your equation becomes $1=\frac{h\lambda}{h\lambda}$, which gives $1=1$. So this relation has no physical significance, although mathematically the equation is valid. | {
"domain": "physics.stackexchange",
"id": 26082,
"tags": "homework-and-exercises, special-relativity, kinematics, photons"
} |
A reaction based question from stoichiometry | Question: A 110.0 g sample of a mixture of CaCl2 and NaCl is treated with Na2CO3 to precipitate the calcium as calcium carbonate. This CaCO3 is heated to convert all the calcium into CaO, and the final mass of CaO is 11.62 grams. The % by mass of the CaCl2 in the original mixture has to be found out.
I've tried finding the number of moles of CaCO3, which I got as 0.2075, and, assuming that the moles of CaCO3 and CaCl2 should be equal, the percentage came out to 20.75% with this method. If I instead separately found the masses of Ca and Cl2 and summed them up to find the percentage over 110, the result turned out to be 20.09%.
I need some suggestions to get the percentage anywhere near 15.2%.
In this process, take all of the compounds to be anhydrous.
Answer: There is no way to produce $11.62 \pu{g}$ of $\ce{CaO}$ from a $\ce{CaCl2 + NaCl}$ sample weighing $110.0 \pu{g}$ which is $15.2\%$ $\ce{CaCl2}$ by weight. Your calculations are correct.*
Assuming $100\%$ yield, and, as mentioned, no hydration whatsoever, you need at least $20.96\%$ $\ce{CaCl2}$ in a $110.0 \pu{g}$ $\ce{CaCl2 + NaCl}$ sample to produce $11.62 \pu{g}$ of $\ce{CaO}$.
*However, if the final product is hydrated, this is possible. Let me know if you need help with calculating percentage hydration.
Underlying Principle
This is an application of the well-known law of conservation of matter, also known as the principle of atom conservation (POAC). You may refer to R. C. Mukherjee (2004), Modern Approach to Chemical Calculations: An Introduction to the Mole Concept, 7$^\text{th}$ edition for a thorough reading with lots of practice problems.
Calculation
Your overall (contracted) reaction is:
$$
\ce{CaCl2 -> -> CaO}
$$
We simply apply POAC:
$$
\text{mass of }\ce{Ca}\text{ in }\ce{CaO} = \text{mass of }\ce{Ca}\text{ in }\ce{CaCl2} \tag{1}
$$
$$
\text{mass of }\ce{Ca}\text{ in }\ce{CaO} = \dfrac{m_\ce{CaO}}{M_\ce{CaO}}M_\ce{Ca} = \dfrac{11.62}{56.077}\times 40.078 \pu{g} = 8.30 \pu{g} \tag{2}
$$
$$
\text{mass of }\ce{Ca}\text{ in }\ce{CaCl2} = \dfrac{m_\ce{CaCl2}}{M_\ce{CaCl2}}M_\ce{Ca} = \dfrac{m_\ce{CaCl2}}{110.98}\times 40.078 = 0.36\,m_\ce{CaCl2} \tag{3}
$$
where, in Equations (2) and (3), $M_i$ is the molar mass of $i$ and $m_i$ is the mass of $i$ in the sample.
Equating Equations (2) and (3) in accordance with Equation (1):
$$
8.30 \pu{g} = 0.36\,m_\ce{CaCl2}
\implies m_\ce{CaCl2} = 23.06 \pu{g}
$$
Thus, if the original $\ce{CaCl2 + NaCl}$ sample weighed $110.0 \pu{g}$, $\ce{CaCl2}$ was present in a $\dfrac{23.06}{110.0}\times100 \%$ weight ratio, which is $20.96\%$.
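The arithmetic can be reproduced in a few lines (molar masses taken from the answer above; the small gap to the quoted 23.06 g / 20.96% comes from rounding intermediate values there):

```python
# Molar masses in g/mol, as used in the answer
M_CaO, M_Ca, M_CaCl2 = 56.077, 40.078, 110.98

mass_Ca = 11.62 / M_CaO * M_Ca        # mass of Ca locked in 11.62 g of CaO
m_CaCl2 = mass_Ca * M_CaCl2 / M_Ca    # POAC: the same Ca came from CaCl2
pct = m_CaCl2 / 110.0 * 100           # % by mass in the 110.0 g sample

print(round(mass_Ca, 2), round(m_CaCl2, 2), round(pct, 2))
```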
Note: All calculations have been rounded off to the second decimal place. | {
"domain": "chemistry.stackexchange",
"id": 17491,
"tags": "experimental-chemistry, stoichiometry, mole"
} |
Aerodynamics of wet versus dry sail | Question: I recently read in a history book that sailing ships that really needed to go fast would pour water on their sails because "wet sails capture the wind better than dry sails."
Why might this be? Does it have to do with changing the friction properties of the air-cloth interface? Or does it help the sail hold a good shape against the wind by making it heavier?
Answer: Sails used to be made of relatively porous fabric which could not sustain pressure differences very efficiently. Getting them wet reduced the porosity and improved their performance.
With the introduction of synthetic materials in the 2nd half of the 20th century, the fabric porosity was almost completely eliminated and is no longer a factor. | {
"domain": "physics.stackexchange",
"id": 48628,
"tags": "drag, aerodynamics, lift"
} |
Programmatically close optical drive tray | Question: I really liked the close tray program that CD-ROM drivers programs used to include in MS-DOS days.
Since I live in a place where even getting an internship is impossible, I've decided to learn by myself.
I implemented this "Hello World" C# script to do that with a twist.
It will check the optical drive status and if open, it will close it, otherwise it will tell you that a disk is present.
Note, I went through Stack Overflow and couldn't find a way to programmatically find if the tray is open.
Another note, it will not close the tray on most laptops where such mechanism does not exist.
And finally, I used an online code beautifier for indentation.
using System;
using System.Runtime.InteropServices;
using System.Text;
using System.Windows.Forms;
using System.Management;
class Program {
[DllImport("winmm.dll")] protected static extern int mciSendString(string Cmd, StringBuilder StrReturn, int ReturnLength, IntPtr HwndCallback);
static void Main(string[] args) {
ManagementObjectSearcher searcher = new ManagementObjectSearcher("SELECT MediaLoaded FROM Win32_CDROMDrive");
ManagementObjectCollection moc = searcher.Get();
var enumerator = moc.GetEnumerator();
if (!enumerator.MoveNext()) throw new Exception("No elements");
ManagementObject obj = (ManagementObject) enumerator.Current;
bool status = (bool) obj["MediaLoaded"];
if (!status) {
MessageBox.Show("The drive is either open or empty", "Optical Drive Status");
mciSendString("set cdaudio door closed", null, 0, IntPtr.Zero);
}
else MessageBox.Show("The drive is closed and contains an optical media", "Optical Drive Status");
}
}
Answer: Note that the Visual Studio C# code editor has an integrated code beautifier. Depending on your setup, it can be executed with different shortcut keys. Call it from the menu for the first time, so that you can see the active shortcut keys (it is Ctrl-E-D for me). Menu: Edit > Advanced > Format Document.
By using a technique called LINQ (Language INtegrated Query), you can get the management object easier. It uses extension methods from the namespace System.Linq. Therefore, you must include a using System.Linq; at the top of your code. Then you can query with
ManagementObject managementObject = moc.FirstOrDefault();
if (managementObject == null) {
// Handle error (output text or throw exception)
Console.WriteLine("Could not find a management object!");
} else {
...
}
This FirstOrDefault extension method calls GetEnumerator and MoveNext internally and returns null if no object is available.
Do not give it the name obj. Almost everything is an object, so this name is not very informative.
Exceptions are a complex matter. It raises a lot of questions like "should I throw an exception or output a message to the user?", "should I create my own exception types?", "should I log the exception?" etc., etc. There is no single best answer to these questions. Therefore, I content myself with giving you a link: C# Exception Handling Best Practices.
If you intend to reuse this functionality, you could encapsulate it into your own class:
public enum OperationResult
{
Failure,
NoDriveFound,
DriveIsOpenOrEmpty,
MediaIsLoaded
}
public class OpticalDriveCloser
{
public OperationResult CloseFirst()
{
...
}
public OperationResult[] CloseAll()
{
...
}
}
It returns an operation status as enumeration type and lets the calling application decide how to proceed. I.e. it does neither call Console.WriteLine nor throw exceptions and focuses on pure, non-UI logic. A console application will have a different UI-logic than a windows forms application or a web application. | {
"domain": "codereview.stackexchange",
"id": 39226,
"tags": "c#, beginner"
} |
The significance of state complexity in automata and regular languages? | Question: I'm reading "Concatenation of Regular Languages and Descriptional Complexity" by Galina Jiraskova, 2009, on the state complexity resulting from the concatenation of two regular languages, but I can't understand what the practical implications of state complexity would be. The first trivial thought that struck me was that higher complexity would require more time and space by the machine. Is this correct? Also, are there any other places where state complexity is relevant and significant?
Edit:
The state complexity of a regular language is the smallest number of states in any
deterministic finite automaton (dfa) accepting the language. The nondeterministic
state complexity of a regular language is defined as the smallest number of states in any nondeterministic finite automaton (nfa) for the language.
Answer: State complexity is really about concise description of an object (in this case, a regular language), not about computational complexity. The general topic is called "descriptional complexity" in the literature and draws its inspiration, in part, from the classic 1971 paper of Meyer and Fischer entitled "Economy of Expression by Automata, Grammars, and Formal Systems" (see http://people.csail.mit.edu/meyer/economy-of-description.pdf ). This is still an active area, with a yearly conference (DCFS - Descriptional Complexity of Formal Systems).
As for applications, any place where your program essentially relies on a finite-state machine (e.g., parsers) it will be good to have this finite-state machine as small as possible. | {
"domain": "cstheory.stackexchange",
"id": 2225,
"tags": "fl.formal-languages, automata-theory, regular-language"
} |
Understanding how quicksort operates | Question: I am having a hard time understanding the quick sort partition operation.
I understand what partition is supposed to do, I just don't understand how partition does it. Specifically, I don't understand how the two subarrays (q-1, q+1) are constructed.
I have looked at various other SE articles, such as:
How does Hoare's quicksort work, even if the final position of the pivot after partition() is not what its position is in the sorted array?
Randomized Selection
I understand the notion of:
1. Select a pivot (typically the last item in the array, at least in simplified methods)
2. Create one subarray to the left of items smaller than the pivot
3. Create one subarray to the right of items larger than the pivot
4. Recursively call partition on each of the subarrays
However, the following example is where I get lost:
8 1 9 3 5 7 6
Here, 6 is my pivot
7 is larger than 6, so my array now looks like:
8 1 9 3 5 6 7
Here is where I get lost. Does the rest of my sequencing look like:
8 1 3 5 6 9 7
1 3 5 6 8 9 7
Where:
1 3 5 is my smaller array and
8 9 7 is my larger array (where each then get partition recursively called?)
I am most confused by the proper construction of the larger and smaller arrays and in which order to put the numbers in. Any help is appreciated. I am happy to edit my post for clarity if I am unclear.
EDIT:
I tried using some base code from this address, and edited it slightly to create more print statements, but still don't understand why it would sort to:
1 3 5 6 9 7 8 and not 1 3 5 6 8 9 7 initially
Here is the code:
class QuickSort
{
/* This function takes last element as pivot,
places the pivot element at its correct
position in sorted array, and places all
smaller (smaller than pivot) to left of
pivot and all greater elements to right
of pivot */
int partition(int arr[], int low, int high)
{
int pivot = arr[high];
int i = (low-1); // index of smaller element
for (int j=low; j<high; j++)
{
// If current element is smaller than or
// equal to pivot
if (arr[j] <= pivot)
{
i++;
// swap arr[i] and arr[j]
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
}
// swap arr[i+1] and arr[high] (or pivot)
int temp = arr[i+1];
arr[i+1] = arr[high];
arr[high] = temp;
return i+1;
}
/* The main function that implements QuickSort()
arr[] --> Array to be sorted,
low --> Starting index,
high --> Ending index */
void sort(int arr[], int low, int high)
{
printArray(arr);
if (low < high)
{
/* pi is partitioning index, arr[pi] is
now at right place */
int pi = partition(arr, low, high);
// Recursively sort elements before
// partition and after partition
sort(arr, low, pi-1);
sort(arr, pi+1, high);
}
}
/* A utility function to print array of size n */
static void printArray(int arr[])
{
int n = arr.length;
for (int i=0; i<n; ++i)
System.out.print(arr[i]+" ");
System.out.println();
}
// Driver program
public static void main(String args[])
{
int arr[] = {8, 1, 9, 3, 5, 7, 6};
int n = arr.length;
QuickSort ob = new QuickSort();
ob.sort(arr, 0, n-1);
System.out.println("sorted array");
printArray(arr);
}
}
/*This code is contributed by Rajat Mishra */
Answer: Please run your code with four more lines of printing statements, as shown below. Once you have seen the output, it will be immediate for you see what is happening. Basically, given the sequence 8 1 9 3 5 7 6, the partition routine rearranges it to 1 3 5 8 9 7 6 at first. Then it exchanges 6 with 8, the first element beyond the end of elements not larger than 6. So the sequence becomes 1 3 5 6 9 7 8.
System.out.print("Pivot with " + pivot + ": "); // add this line
printArray(arr); // add this line
// swap arr[i+1] and arr[high] (or pivot)
int temp = arr[i+1];
arr[i+1] = arr[high];
arr[high] = temp;
System.out.print("Exchange " + pivot + ": "); // add this line
printArray(arr); // add this line
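The same Lomuto partition can be sketched in Python (an illustration mirroring the Java code above, not a replacement for tracing it in a debugger):

```python
def partition(a, low, high):
    """Lomuto scheme: pivot = a[high]; returns the pivot's final index."""
    pivot = a[high]
    i = low - 1                      # right boundary of the <= pivot region
    for j in range(low, high):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    # For [8, 1, 9, 3, 5, 7, 6] the array is now [1, 3, 5, 8, 9, 7, 6]
    a[i + 1], a[high] = a[high], a[i + 1]   # swap the pivot into place
    return i + 1

a = [8, 1, 9, 3, 5, 7, 6]
p = partition(a, 0, len(a) - 1)
print(a, p)   # [1, 3, 5, 6, 9, 7, 8] 3
```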
Aside from the question proper, what you need most is an Integrated Development Environment (IDE) where you can trace code execution and check the values of all relevant variables step by step easily. Using the command line can only serve as a temporary measure. I recommend IntelliJ Community Edition; others might have different recommendations. Once you are able to use that machinery, you will be able to answer others' questions such as this one just like I have answered your question. Well, as long as you program in Java, an IDE is a must, unless you do not mind lagging behind others by a factor of 2 to 10. Just a bit of unsolicited advice that I offer to every one of my local students. It is entirely up to you what to do, of course. | {
"domain": "cs.stackexchange",
"id": 12349,
"tags": "algorithms, algorithm-analysis, sorting, quicksort"
} |
Estimating the complexity by the calculation of the number of real multiplications in a general polynomial function | Question: For the following memory polynomial equation,
$y(n)=\displaystyle\sum_{m=0}^{M-1}\displaystyle\sum_{p =1\\p\,\textrm{odd}}^{P}a_{m_p} \, x(n-m) \, |x(n-m)|^{(p-1)}$
The total number of coefficients is $ M\frac{(P+1)}{2}$
I need to calculate:
1- the total number of complex multiplications
2- the total number of real multiplications
Answer: The direct computation should be straightforward. Since I suspect the exercise has other purposes (more advanced algorithmic complexity), there are additionally a couple of tricks to reduce the complexity of power evaluations.
First, if you have to evaluate all powers $y^0,y^1,\,\cdots,y^Q$, you can store the power-of-two powers, and use the binary expansion of a power $q$. It can be called exponentiation by squaring. For instance, to compute $y^7$, you need one multiply for $y^2 = y\times y$, one for $y^4=y^2\times y^2$ and two more for $y \times y^2 \times y^4$, so four multiplies instead of six by direct computation. And the $y^2$ and $y^4$ can be stored for the next powers.
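A sketch of that idea in Python, counting multiplications to check the $y^7$ example (four multiplies instead of six):

```python
def pow_by_squaring(y, q):
    """Return (y**q, number of multiplies used), for integer q >= 1."""
    result, base, mults = None, y, 0
    while q:
        if q & 1:                 # this bit of q contributes base = y^(2^k)
            if result is None:
                result = base     # taking the first factor costs nothing
            else:
                result *= base
                mults += 1
        q >>= 1
        if q:                     # square only while more bits remain
            base *= base
            mults += 1
    return result, mults

print(pow_by_squaring(2.0, 7))   # (128.0, 4): y^2, y^4, y*y^2, (y*y^2)*y^4
```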
Second, a classical polynomial evaluation is "Horner's scheme":
$$a_0 x^n + a_1 x^{n-1} + \cdots + a_n = \left(\left(\left( a_0x+a_1\right) x \right)\ldots a_{n-1}\right)x+ a_n$$
which can be adapted to your case. | {
"domain": "dsp.stackexchange",
"id": 8223,
"tags": "polynomial"
} |
Custom Data Input to RTAB instead of Kinect data | Question:
I have a Python script which, after processing, gives me a depth map and an RGB input image in real time. I cannot connect the Kinect directly, but I want RTAB-Map to create a map based on the live stream of depth maps and RGB images processed by Python. How can I do that? I have reasonable experience in Python, but I'm a rookie in ROS.
Does RTAB-Map access data via topics/subscribing to some topic? I have little clue; please help me with this.
Originally posted by Karan Shah on ROS Answers with karma: 1 on 2019-07-15
Post score: 0
Answer:
rtabmap_ros uses topics as input. The topics you need are an RGB image, a registered depth image, and the camera info of the RGB image. See http://wiki.ros.org/rtabmap_ros/Tutorials/HandHeldMapping for many examples using different cameras. You should be able to do the same with your python script; just make sure you republish the images under different topic names and adjust the rgb_topic, depth_topic and camera_info_topic arguments of rtabmap.launch:
$ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" rgb_topic:="my/python/rgb/image/topic" depth_topic:="my/python/depth/image/topic" camera_info_topic:="my/python/rgb/camera_info/topic"
cheers,
Mathieu
Originally posted by matlabbe with karma: 6409 on 2019-07-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33432,
"tags": "slam, navigation, rtabmap-ros, ros-kinetic"
} |
Building ROS2 Rolling from source: issues with gfortran / lit | Question:
I am trying to compile ROS2 from source on Docker (for development purposes) but am running into several issues while doing so. Any pointers at what might be wrong here?
Operating System:
Ubuntu22.04 (Jammy)
Installation type:
source
Version or commit hash:
ros2.repos commit
Steps to reproduce issue
Essentially follow the instructions at Linux Development Setup for Rolling. I created a Dockerfile that executes these in this gist
Expected behavior
Build completes and the example can be run inside the container
Actual behavior
colcon build --symlink-install fails at FortranCInterface which seems to be a CMake dependency.
Before that, I get several variations of this error which seems unrelated:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 10, in <module>
import distutils.core
File "/usr/lib/python3/dist-packages/numpy/distutils/__init__.py", line 24, in <module>
from . import ccompiler
File "/usr/lib/python3/dist-packages/numpy/distutils/ccompiler.py", line 9, in <module>
from distutils.ccompiler import (
ImportError: cannot import name 'compiler_class' from partially initialized module 'distutils.ccompiler' (most likely due to a circular import) (/usr/lib/python3/dist-packages/numpy/distutils/ccompiler.py)
[1.143s] ERROR:colcon.colcon_core.package_identification:Exception in package identification extension 'python_setup_py' in 'lib/python3/dist-packages/numpy': Command '['/usr/bin/python3', '-c', "import sys;from setuptools.extern.packaging.specifiers import SpecifierSet;from distutils.core import run_setup;dist = run_setup( 'setup.py', script_args=('--dry-run',), stop_after='config');skip_keys = ('cmdclass', 'distclass', 'ext_modules', 'metadata');data = { key: value for key, value in dist.__dict__.items() if ( not key.startswith('_') and not callable(value) and key not in skip_keys and key not in dist.display_option_names )};data['metadata'] = { k: v for k, v in dist.metadata.__dict__.items() if k not in ('license_files', 'provides_extras')};sys.stdout.buffer.write(repr(data).encode('utf-8'))"]' returned non-zero exit status 1.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/colcon_core/package_identification/__init__.py", line 142, in _identify
retval = extension.identify(_reused_descriptor_instance)
File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 48, in identify
config = get_setup_information(setup_py)
File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 241, in get_setup_information
_setup_information_cache[hashable_env] = _get_setup_information(
File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 281, in _get_setup_information
result = subprocess.run(
File "/usr/lib/python3.10/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-c', "import sys;from setuptools.extern.packaging.specifiers import SpecifierSet;from distutils.core import run_setup;dist = run_setup( 'setup.py', script_args=('--dry-run',), stop_after='config');skip_keys = ('cmdclass', 'distclass', 'ext_modules', 'metadata');data = { key: value for key, value in dist.__dict__.items() if ( not key.startswith('_') and not callable(value) and key not in skip_keys and key not in dist.display_option_names )};data['metadata'] = { k: v for k, v in dist.metadata.__dict__.items() if k not in ('license_files', 'provides_extras')};sys.stdout.buffer.write(repr(data).encode('utf-8'))"]' returned non-zero exit status 1.
Compilation finally fails with the following:
Starting >>> IntelFortranImplicit
-- stderr: FortranCInterface
CMake Error at CMakeLists.txt:5 (project):
No CMAKE_Fortran_COMPILER could be found.
Tell CMake where to find the compiler by setting either the environment
variable "FC" or the CMake cache entry CMAKE_Fortran_COMPILER to the full
path to the compiler, or to the compiler name if it is in the PATH.
Failed
The same errors occur when using humble rather than rolling.
When I add gfortran to the apt install list, I get another error:
Starting >>> lit
WARNING:colcon.colcon_cmake.task.cmake.build:Could not run installation step for package 'IntelFortranImplicit' because it has no 'install' target
Finished >> my-test-package
stderr: lit
CMake Warning (dev) in CMakeLists.txt: No project() command is present. The top-level CMakeLists.txt file must contain a literal, direct call to the project() command. Add a line of code such as project(ProjectName) near the top of the file, but after cmake_minimum_required(). CMake is pretending there is a "project(Project)" command on the first line.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Error at CMakeLists.txt:4 (configure_lit_site_cfg):
Unknown CMake command "configure_lit_site_cfg".
CMake Warning (dev) in CMakeLists.txt:
No cmake_minimum_required command is present. A line of code such as
cmake_minimum_required(VERSION 3.22)
should be added at the top of the file. The version specified may be lower
if you wish to support older CMake versions for this project. For more
information run "cmake --help-policy CMP0000".
This warning is for project developers. Use -Wno-dev to suppress it.
Failed
I don't know what lit or my-test-package are.
Additional information
I couldn't determine what depends on Fortran with certainty and searching the ros2 org didn't yield any results, so this issue might be elsewhere or with documentation.
I looked at the official docker images but they install the binary packages from apt rather than build from source.
question while I'm here: If I want to develop on one of the component libraries (say rclpy), is building everything from source the recommended approach? Is there a way to use the prebuilt docker images ros:rolling and then clone just rclpy?
related issue on ros2 issue tracker
Originally posted by achille on ROS Answers with karma: 464 on 2023-01-24
Post score: 0
Answer:
You're executing a colcon operation in the root of the container's filesystem.
The default value for colcon's --base-paths argument is actually ., not src. This is why colcon marks the build, install, and log subdirectories of the workspace with COLCON_IGNORE.
You could modify the colcon invocation with --base-paths src, but it would be better to create an empty subdirectory in the container to serve as the workspace rather than dumping all of these files directly in /.
To explain why this is failing, colcon is crawling the entire filesystem and finds things that look an awful lot like packages, and then it tries to build them and unsurprisingly fails to do so.
EDIT: Here's the section of the installation instructions where a new directory is created for the workspace: https://docs.ros.org/en/rolling/Installation/Alternatives/Ubuntu-Development-Setup.html#get-ros-2-code
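As a sketch, that workspace layout looks like the following (paths are illustrative, and the colcon call is guarded so the snippet also runs on a machine without colcon installed):

```shell
# Create a dedicated workspace instead of running colcon in /.
ws="$(mktemp -d)/ros2_ws"
mkdir -p "$ws/src"
cd "$ws"
# (fetch sources into src/ here, e.g. with: vcs import src < ros2.repos)
if command -v colcon >/dev/null 2>&1; then
  colcon build --symlink-install   # the default --base-paths . now finds only ./src
else
  echo "colcon not installed; workspace skeleton created at $ws"
fi
```

Because colcon's package discovery starts from the current directory, running it from this empty workspace root means it only ever crawls src/ plus its own build/install/log directories.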
Originally posted by cottsay with karma: 311 on 2023-01-24
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by achille on 2023-01-25:
Great find, thanks! It's curious to me that the default isn't src but perhaps there's an explanation for that. | {
"domain": "robotics.stackexchange",
"id": 38243,
"tags": "ros2"
} |
Use local storage event | Question: I use the following code which is working OK to register and emit event for local storage on the browser. It will be great if you can give me some ideas on how to improve it for production use since I'm fairly new to JS.
"use strict";
function lsHelper() {
let status = "available";
let timeout = 5000;
//Register a function by its name, itself (callback) and the needed time span for the check cycle
this.RegisterLocalStorageFunction = function(name, callback) {
//Set local storage items for function and its result
resetFunction(name);
//Setting the event
window.addEventListener('storage', function(e) {
//Getting the params or eventually just the status
//Call the function if params are set and reset all
if (e.key === name && e.newValue != status) {
let result = callback(e.newValue);
//Set result
localStorage.setItem(name + "Result", result);
//Reset call
localStorage.setItem(name, status);
}
});
}
this.CallLocalStorageFunction = function(name, params, callback) {
//Check if the function is already called
if (localStorage.getItem(name) == placeholder
&& localStorage.getItem(name + "Result") == status) {
//Call remote function
localStorage.setItem(name, params);
//Define timeout and event for future usage
let timeout = {};
let storageEvent = {};
//Check in interval for result
storageEvent = function(e) {
//Check whether the result is set
if (e.key == name + "Result" && e.newValue != status) {
//Call callback with results
callback(e.newValue);
//Reset result in local storage
localStorage.setItem(name + "Result", status);
//Dismiss listener and timeout
clearTimeout(timeout);
window.removeEventListener('storage', storageEvent, false);
}
}
//Add listener to event
window.addEventListener('storage', storageEvent);
//Start timeout to cancel listener if no result is set, hence no remote function was called
timeout = setTimeout(function() {
//Dismiss interval
clearTimeout(timeout);
//Clean up
window.removeEventListener('storage', storageEvent, false);
}, timeout);
}
}
//Reset local storage values
function resetFunction(name) {
localStorage.setItem(name, status);
localStorage.setItem(name + "Result", status);
}
};
Answer: First of all, remove your global "use strict".
Why? A global "use strict" declaration can cause serious problems in your software.
From Mozilla website:
This syntax has a trap that has already bitten a major site: it isn't possible to blindly concatenate non-conflicting scripts. Consider concatenating a strict mode script with a non-strict mode script: the entire concatenation looks strict! The inverse is also true: non-strict plus strict looks non-strict. Concatenation of strict mode scripts with each other is fine, and concatenation of non-strict mode scripts is fine. Only concatenating strict and non-strict scripts is problematic.
So what you can do is to use the module pattern:
var lsHelper = (function() {
"use strict";
let status = "available";
let timeout = 5000;
return {
//Register a function by its name, itself (callback) and the needed time span for the check cycle
RegisterLocalStorageFunction: function(name, callback) {
//Set local storage items for function and its result
resetFunction(name);
//Setting the event
window.addEventListener('storage', function(e) {
//Getting the params or eventually just the status
//Call the function if params are set and reset all
if (e.key === name && e.newValue != status) {
let result = callback(e.newValue);
//Set result
localStorage.setItem(name + "Result", result);
//Reset call
localStorage.setItem(name, status);
}
});
},
CallLocalStorageFunction: function(name, params, callback) {
//Check if the function is already called
if (localStorage.getItem(name) == placeholder &&
localStorage.getItem(name + "Result") == status) {
//Call remote function
localStorage.setItem(name, params);
//Define timeout and event for future usage
let timeout = {};
let storageEvent = {};
//Check in interval for result
storageEvent = function(e) {
//Check whether the result is set
if (e.key == name + "Result" && e.newValue != status) {
//Call callback with results
callback(e.newValue);
//Reset result in local storage
localStorage.setItem(name + "Result", status);
//Dismiss listener and timeout
clearTimeout(timeout);
window.removeEventListener('storage', storageEvent, false);
}
}
//Add listener to event
window.addEventListener('storage', storageEvent);
//Start timeout to cancel listener if no result is set, hence no remote function was called
timeout = setTimeout(function() {
//Dismiss interval
clearTimeout(timeout);
//Clean up
window.removeEventListener('storage', storageEvent, false);
}, timeout);
}
},
//Reset local storage values
resetFunction: function(name) {
localStorage.setItem(name, status);
localStorage.setItem(name + "Result", status);
}
};
})();
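The scoping benefit of this pattern can be seen in a standalone miniature (no browser APIs; names are illustrative): both "use strict" and the private state are confined to the IIFE, so concatenating this file with non-strict scripts is safe.

```javascript
// Miniature module-pattern example: strict mode and state stay private.
var counterModule = (function () {
  "use strict";
  let count = 0; // invisible outside the closure
  return {
    increment: function () { count += 1; return count; },
    current: function () { return count; }
  };
})();
```

Callers can only go through the returned methods; counterModule.count is undefined from the outside.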
Second. The name lsHelper does not tell what the function does. I think that localStorageHelper is a better name. | {
"domain": "codereview.stackexchange",
"id": 22658,
"tags": "javascript, beginner, node.js, html5, google-chrome"
} |
Mapping a list of cells in seurat featureplot | Question: I have 209 cells, I clustered them by Seurat to 4 clusters. By Featureplot I am able to track a gene in clusters:
Higher color shows higher expression.
Now, for some genes I want to highlight some cells in Featureplot so that apart from yellow or red colours I want to colour a subsets of cells with another color. I mean I want to map a list of cells in Featureplot or tSNE plot. Let's say I want to know the location of cells 1, 4, 80 and highlight them with another color.
My Seurat object in this link.
Seurat itself beautifully maps the cells in Featureplot for defined genes with a gradient of colours showing the level of expression. Saying I have genes A and B, in excel.
I have coloured cells that express a gene > mean + se, < mean - se or between these values. For instance, for this gene, 36 cells express this gene > mean + se, I want to map these cells in Featureplot or tSNE plot in distinct colour so I can locate them in clusters easily. Something like binary (on off) expression to relative expression.
How I can change the colour of some cells based on my threshold for this gene please?
Answer: To color the TSNEPlot, you can generate a new column in metadata with the expression levels (High, low, etc). Then use pt.shape to set a shape for each identity.
To show binary expression based on expression you first have to define the list of cells that are below or over your threshold. Once you have those lists you can use SetIdent() in Seurat to color those groups.
load(file= "~/seuset_16.RData")
high <- c("s1.1","s1.4", "s1.80")
low <- setdiff(colnames(seuset_h16@data), high)
seuset_h16 <- SetIdent(seuset_h16, cells.use = high, ident.use = "high")
seuset_h16 <- SetIdent(seuset_h16, cells.use = low, ident.use = "low")
seuset_h16 <- StashIdent(seuset_h16, save.name = "expr")
seuset_h16 <- SetAllIdent(seuset_h16, id="res.1")
t <- TSNEPlot(seuset_h16, pt.size = 2, do.return = T, pt.shape ="expr")
t
high <- c("s1.1","s1.4", "s1.80")
low <- setdiff(colnames(seuset_h16@data), high)
seuset_h16 <- SetIdent(seuset_h16, cells.use = high, ident.use = "high")
seuset_h16 <- SetIdent(seuset_h16, cells.use = low, ident.use = "low")
t <- TSNEPlot(seuset_h16, pt.size = 2, do.return = T, colors.use = c("red", "grey"), do.hover = T, data.hover = c("ident", "res.1"))
t | {
"domain": "bioinformatics.stackexchange",
"id": 1233,
"tags": "r, single-cell, seurat, ggplot2"
} |
Expectation value of operators in quantum mechanics | Question: Can the expectation value of an operator be zero?
Answer: Expectation value of an operator is calculated in a specific state. For instance let's consider for simplicity a beam of electrons in a state $|\psi\rangle$ polarized in the direction $x$. The average $\langle \psi|\hat S_x|\psi\rangle = \hbar/2$. However, if the beam is in a state $|\phi\rangle$ polarized in the $z$ direction, $\langle \phi|\hat S_x|\phi\rangle = 0$.
In general, it can be proved that if a particle is in a state $|l,m\rangle$ where $l$ is the orbital quantum number and $m$ the magnetic quantum number i.e. an eigenvalue of $\hat L_i$, where $i =$ $x$, or $y$, or $z$, then
$$\langle l,m|\hat L_j|l,m\rangle = 0, \tag{i}$$
where $\hat L_j$ is one of the two angular-momentum component operators besides $\hat L_i$, i.e. if $i=y$ then $j = z$ or $x$.
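A quick sketch of why $\text{(i)}$ holds, taking $i=z$, $j=x$ and writing $\hat L_x$ in terms of the ladder operators $\hat L_\pm = \hat L_x \pm i\hat L_y$:
$$\langle l,m|\hat L_x|l,m\rangle = \tfrac{1}{2}\left(\langle l,m|\hat L_+|l,m\rangle + \langle l,m|\hat L_-|l,m\rangle\right) = 0,$$
since $\hat L_\pm|l,m\rangle \propto |l,m\pm 1\rangle$ and eigenstates with different $m$ are orthogonal.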
Comment: another question was posted in this site asking for a proof of eq. $\text {(i)}$, but it was deleted as home-work. | {
"domain": "physics.stackexchange",
"id": 20159,
"tags": "quantum-mechanics, operators, observables"
} |
Joule-Thomson and heat exchanger combination | Question: Suppose a refrigerant (say propane) at pressure P1 and temperature T1 in two different setups (as shown in the diagram below):
#1: The refrigerant passes through a Joule-Thomson valve and exits at pressure P2 and temperature T2.
#2: The refrigerant first flows through a heat exchanger and its temperature drops, then flows through the Joule-Thomson valve and finally passes through the heat exchanger, exchanging heat with the warm stream. It exits the exchanger at pressure P2 and temperature T3. (Note the exit pressure is P2, the same exit pressure as in #1.)
Can we say T3 is always greater than T2?
Suppose friction losses in the pipes and exchanger are negligible and there is no heat transfer with the environment.
Answer: Your system contains a pure component refrigerant, which is the usual case for refrigeration systems. The boiling temperature of a pure component is strictly dependent on the pressure that the pure component experiences, as given by the Antoine equation, seen here.
For case 1, a high temperature (T1), high pressure (P1) refrigerant goes through an expansion valve, where there is substantial pressure drop due to the refrigerant running through a restriction at a relatively high flow rate. This pressure drop causes the high temperature liquid to boil, and the heat required for boiling comes from the refrigerant itself, causing its temperature to drop dramatically. When the refrigerant reaches pressure P2, it is a mixture of cold vapor and cold liquid (e.g., 10% vapor and 90% liquid) whose temperature (T2) depends strictly on its ambient pressure.
For case 2, the diagram shows heat exchange between the high temperature refrigerant entering the expansion valve and the low temperature liquid and vapor exiting the expansion valve. By cooling off stream 1, its lower temperature entering the expansion valve means that there is less heat available to boil the refrigerant as it enters the expansion valve. In addition, the heat exchange from the hot refrigerant to the cold refrigerant means that some of the cold refrigerant is boiled in the heat exchanger. Due to conservation of energy, the net effect should be the same amount of cold liquid and cold vapor downstream of the heat exchanger as was seen downstream of the expansion valve in case 1, meaning that you are wasting the cost of a heat exchanger in case 2 (you don't get any "gain" from that heat exchanger).
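This bookkeeping can be written as a steady-flow energy balance. With no shaft work and no heat exchange with the environment, a control volume drawn around the valve alone (case 1) or around the exchanger plus valve (case 2) gives the same condition:
$$h(T_1,P_1) = h(T_2,P_2) \qquad\text{and}\qquad h(T_1,P_1) = h(T_3,P_2),$$
so the exit streams of the two cases have equal enthalpy at equal pressure, i.e. they are the same thermodynamic state (same temperature and same vapor fraction).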
Regarding temperature T3, the refrigerant exits the heat exchanger at pressure P2, just as it did in case 1. Because the exit pressure is the same in both cases, the exit temperatures are the same in both cases, which means that T2 = T3. | {
"domain": "physics.stackexchange",
"id": 85232,
"tags": "thermodynamics"
} |
Differentiation algebra with index notation | Question: Consider $p_\mu p^\mu$ and let us differentiate it with respect to $p^\nu$. Then,
$$\frac{\partial}{\partial p^\nu}(p_\mu p^\mu) = p^\mu \eta_{\mu \sigma}\delta^\sigma_\nu + p_\mu \delta_\nu^\mu = 2 p_\nu$$
But if you consider the relation $p_\mu p^\mu =-m^2$, then
$$\frac{\partial}{\partial p^\nu}(p_\mu p^\mu) = \frac{\partial}{\partial p^\nu} (-m^2) = 0$$
What am I doing wrong?
Answer: The partial derivative of a function is defined like this:
$$\frac{\partial}{\partial x} f(x,y) := \lim_{\epsilon \rightarrow 0} \frac{f(x+\epsilon,y)-f(x,y)}{\epsilon}$$
In words, when computing the partial derivative of an expression with respect to $x$, you vary $x$ while holding everything else fixed and compute the corresponding difference quotient. For example,
$$\frac{\partial}{\partial x}(x^2+y^2)=2x \qquad \frac{\partial}{\partial y}(x^2+y^2)=2y$$
On the other hand, the expression $x^2+y^2=1$ defines a relationship between $x$ and $y$. If you want this expression to remain true, then $x$ and $y$ are not independent of one another; you generally cannot vary one without simultaneously varying the other. Naively differentiating both sides yields
$$2x = \frac{\partial}{\partial x}(x^2+y^2) = \frac{\partial}{\partial x}(1) = 0$$
$$\implies x=0, y=\pm 1$$
What you're doing here is the following. You are computing the rate of change of the quantity $x^2+y^2$ by varying $x$ and holding $y$ fixed - and also restricting your attention to the points which satisfy the constraint $x^2+y^2=1$.
In this sense it may be surprising that you obtained any solutions at all; the reason you do is that at the points $(x,y)=(0,\pm 1)$, varying $x$ by an infinitesimal amount $\delta$ corresponds to a variation in $y$ which goes like $\delta^2$ (i.e. the first order variation in $y$ vanishes). This is another way of saying that $y$ is stationary (with respect to variations in $x$) at those points.
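Explicitly, if instead one respects the constraint by treating $y$ as a function of $x$ on the circle, the chain rule gives
$$\frac{d}{dx}\left(x^2 + y(x)^2\right) = 2x + 2y\,\frac{dy}{dx} = 0 \quad\Longrightarrow\quad \frac{dy}{dx} = -\frac{x}{y},$$
and the first-order variation of $y$ vanishes exactly where $x = 0$, i.e. at the points $(0,\pm 1)$.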
If you understand this, then the answer to your question is basically identical. The expression $p_\mu p^\mu = \eta_{\mu\nu} p^\mu p^\nu$ is just a quadratic expression in the variables $p^0,p^1,p^2,$ and $p^3$. In principle, it can take any value, and differentiating it with respect to $p^\mu$ yields $2p_\mu$, as you say.
On the other hand, for massive particles the 4-momentum is not completely arbitrary; it is constrained by the on-shell constraint $p_\mu p^\mu=-m^2$. As a result, you generically cannot vary any individual component of the momentum without varying the others. The only time you can is when $p_\mu=0$, for the same reason as the simpler example given above. | {
"domain": "physics.stackexchange",
"id": 89783,
"tags": "special-relativity, differentiation"
} |
Decidability of PDA | Question: I have following problem:
INFPDA={⟨A⟩ |A is PDA and L(A)=infinite language}
Prove that this is decidable problem.
So my idea how to solve this problem is the following:
k = number of states of A; create a finite automaton D which accepts all words of length k or more
Create context-free grammar G based on A
And now I am lost. If the automaton were a DFA, then I would do:
L(M) = L(A) ∩ L(D), then I would check whether L(M) = ∅ using a Turing machine for emptiness, and I would use the pumping lemma.
But as far as I know, I cannot use ∩ with context-free languages.
Thank you
Answer: Consider a context-free grammar $\mathcal{G}$ for $L(A)$ in Greibach normal form with no useless nonterminals (i.e., non-terminals that cannot be transformed into a sequence of only terminals by applying production rules). Such a grammar can be mechanically constructed from $A$.
Now build a directed graph $G = (V,E)$ as follows:
Each nonterminal of $\mathcal{G}$ is a vertex of $G$.
For each production of $\mathcal{G}$ where a nonterminal $A$ appears on the left side and a (not necessarily distinct) nonterminal $B$ appears on the right side, add the edge $(A,B)$ to $G$.
Clearly if there is a cycle in $G$ then the cardinality of $L(\mathcal{G}) = L(A)$ is infinite. Indeed, for every $k \in \mathbb{N}$, there exists a walk from $S$ on $G$ of length $k$. By first applying the productions corresponding to the edges of this walk, and then a suitable set of productions to get rid of the leftover nonterminals, you can build a word $w \in L(A)$ such that $|w| \ge k$.
Conversely, if the language is infinite, for every $k >0$ there must be a derivation in $\mathcal{G}$ that uses at least $k \ge |V|$ productions. This induces a walk (from $S$) on $G$ of the same length $k$. This walk traverses at least $k+1 \ge |V|+1$ (not necessarily distinct) vertices of $G$ and, by the pigeonhole principle, at least one vertex must appear two or more times. This shows the existence of a cycle in $G$.
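This cycle condition is directly checkable by depth-first search. A sketch in Python, assuming the grammar-to-graph extraction has already been done (the names and example grammars below are illustrative):

```python
def has_cycle(productions):
    """Decide whether the nonterminal graph of a Greibach-normal-form
    grammar (with no useless nonterminals) contains a cycle.

    `productions` maps each nonterminal to the list of nonterminals
    appearing on the right-hand sides of its rules (the edges of G)."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {v: WHITE for v in productions}

    def dfs(v):
        color[v] = GRAY
        for w in productions.get(v, []):
            if color.get(w, WHITE) == GRAY:            # back edge => cycle
                return True
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in productions)

# S -> a S B | b : S reaches itself, so the language is infinite.
infinite = has_cycle({"S": ["S", "B"], "B": []})
# S -> a B, B -> b : the graph is acyclic, so the language is finite.
finite = has_cycle({"S": ["B"], "B": []})
```

A cycle is reported exactly when the search finds a back edge, i.e. an edge into a nonterminal still on the DFS stack.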
Determining whether a directed graph is cyclic is a decidable problem (a simple DFS visit suffices). | {
"domain": "cs.stackexchange",
"id": 17154,
"tags": "turing-machines, context-free, pushdown-automata"
} |
Thermal Expansion in a rigidly fixed rod | Question: I was reading this topic, and this is what I found:
Consider a rod of length $L$ which is fixed between two rigid ends separated by a distance $L$. Now, if the temperature of the rod is increased by $Δθ$, then the strain produced in the rod will be:
Thermal strain, $$ε = \frac{ΔL}{L}=\frac{\text{Final length} - \text{Original length}}{\text{Original length}}=\alpha \Delta\theta,$$
where $\alpha=\frac{\Delta L}{L} \times \frac{1}{\Delta \theta}.$
Now my doubt is: why are we taking the original length as $L$ and not as $L(1+\alpha\Delta\theta)$? When we consider a rigidly fixed rod, we can visualise that the wall/support has caused the rod to shorten or compress from its changed length of $L(1+\alpha\Delta\theta)$ back to $L$, and so the value of the strain $\epsilon$ should have been $\frac{\Delta L}{L(1+\alpha\Delta\theta)}$. So please explain to me where I am wrong.
Answer: In addition to nice @Gandalf61 answer there's some caveat in what you propose.
The original definition of thermal strain is $$\tag 1 \varepsilon_0 = \frac{\Delta L}{L},$$
while you propose it to be:
$$\tag 2 \varepsilon_1 = \frac{\Delta L}{L+\Delta L}$$
Equation (1) measures the expansion ratio relative to the length before expansion, while your equation (2) measures it relative to the length after expansion.
At first glance this would not be so bad, but... in (1) the thermal strain is always proportional to the rod's elongation $\Delta L$, that is
$$\tag 3 \varepsilon_0 \propto \Delta L$$
While in (2) it does not, i.e.
$$\tag 4 \varepsilon_1 \not \propto \Delta L$$
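In closed form, the two measures are related by
$$\varepsilon_1 = \frac{\Delta L}{L + \Delta L} = \frac{\varepsilon_0}{1+\varepsilon_0} = \frac{\alpha\,\Delta\theta}{1 + \alpha\,\Delta\theta},$$
which saturates at 1 no matter how large the elongation becomes.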
in case $\Delta L \gg L$, then $\varepsilon_1 \approx 1$, and the proportionality is lost. What's the point of a thermal strain measure if it stops responding to large elongations? | {
"domain": "physics.stackexchange",
"id": 99497,
"tags": "thermodynamics, temperature, stress-strain"
} |
Photoelectric effect in space floating metal | Question: I have read this question:
Electrical neutrality in photoelectric effect
Now the answer by HiddenBabel says:
Metals are conductors. As electrons escape, new electrons easily flow from ground into the metal to maintain neutrality.
Now if I have a metal floating in space, and light shines on it, creating the photoelectric effect, electrons start to get knocked off the metal.
In this case, there is no connection to the ground; there is nowhere to get new electrons from. Will the metal's lattice atoms become positively charged (meaning they will have fewer electrons)?
Would this not restructure the metal lattice?
Question:
Can we have the photoelectric effect on a metal floating in space if we shine light on it? Obviously it cannot get any new electrons (to replace the ones that get knocked off) from anywhere.
What will happen to the lattice structure of the metal? It will obviously lose electrons and cannot replace them.
Answer:
Obviously it cannot get any new electrons (to replace the ones that get knocked off) from anywhere.
The ones knocked off? They will feel the increasing attraction of the metal, which has turned positive. (I ignore gravitation since it is so tiny.) My guess: there will be a continuous cloud of ejected electrons which will continuously be attracted back and reabsorbed.
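This guess can be made semi-quantitative. As the isolated metal charges to potential $\Phi$, an escaping electron must also climb the extra barrier $e\Phi$, so emission with photon energy $h\nu$ and work function $W$ continues only while
$$h\nu > W + e\Phi,$$
and the metal charges up until $e\Phi_{\max} = h\nu - W$, after which further photoelectrons are pulled back, consistent with the steady cloud picture.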
What will happen to the lattice structure of the metal
The effects are only surface effects: the photons do not penetrate, and beyond the first few layers there should be no difference in the lattice. | {
"domain": "physics.stackexchange",
"id": 58685,
"tags": "quantum-mechanics, electromagnetic-radiation, electrons, photoelectric-effect"
} |
Is adding the word "solution" necessary when listing the materials used in a synthesis? | Question: I am editing a chemistry-related paper and the synthesis procedure described is as follows:
Then, 10 mL of the intermediate were mixed with 10 mL of 1% (w/v)
VCPL and 2 mL of 0.1% (w/v) KPS.
I was wondering if one should add the word "solution" after the name of the ingredients as follows:
Then, 10 mL of the intermediate were mixed with 15 mL of a 2% (w/v)
VCPL solution and 3 mL of a 0.2% (w/v) KPT solution.
I think that indicating a volume and a % (w/v) gives it away that you are referring to a solution containing the active ingredient, but should the word "solution" still be added?
Thanks.
Answer: No, the repeated addition of the word "solution" is not needed.
All you want here is to provide a measured quantity (e.g., volume), concentration of your reagent in the solvent, and inferred quantity (e.g., mmol). This may be reported in the following pattern, too:
"1-Cinnamyl-4-methylbenzene (4a): The compound 4a was synthesized from
1-methoxy-4-{[(2 E)-3-phenylprop-2-en-1-yl]oxy}benzene 2b ($\pu{240.9 mg}$,
$\pu{1.00 mmol}$) with p-tolylmagnesium bromide 3a ($\pu{1.4 mL}$, $\pu{1.1 M}$ in THF, $\pu{1.5 mmol}$) in the presence of 1b ($\pu{4.1 mg}$, $\pu{1 mol\%}$) in $\ce{Et2O}$ ($\pu{5 mL}$) at room temperature for $\pu{24 h}$ ($\pu{198.0 mg}$, $\pu{95\%}$ yield, colorless liquid)."
(source: Hashimoto et al. in Molecules 2019, 24, 2296 (doi.org/10.3390/molecules24122296), section 3.2; open access publication.)
This pattern is seen for inorganic reagents like $\ce{HCl}$ dissolved in water, yet equally available in methanol, 1,4-dioxane, etc. (example); and organometallic reagents, e.g. butyllithium, methylmagnesium bromide. | {
"domain": "chemistry.stackexchange",
"id": 12350,
"tags": "solutions"
} |
How to verify publications using rostest? | Question:
I have several pieces of software in ROS, all of which are pretty thoroughly unit and integration tested. However, there's one aspect of testing within ROS that I have yet to accomplish with much success: Verifying publications.
Note that I'm still stuck in the dark ages of Fuerte, but I believe my question is relevant to more recent releases as well.
I currently have several tests within rostest, which looks like this in my CMakeLists.txt:
rosbuild_add_executable(my_tests EXCLUDE_FROM_ALL ${MY_TESTS})
rosbuild_add_gtest_build_flags(my_tests)
rosbuild_add_rostest(${PROJECT_SOURCE_DIR}/test/node/node.test)
Running this with make test also runs a roscore (since node.test is a launch file), so these tests can make use of classes that require a connection to roscore. I'd like to use this to verify publications, but it's not working reliably, which makes me think I'm doing something wrong. As an example:
class TestSubscriber
{
public:
TestSubscriber() : receivedMessage(false) {}
void callback(const std_msgs::Int32ConstPtr &newMessage)
{
receivedMessage = true;
message = newMessage;
}
bool receivedMessage;
std_msgs::Int32ConstPtr message;
};
TEST(MyTest, TestPublication)
{
// This class will publish on the "output" topic. It will create a node handle
// and advertise in its constructor.
MyPublishingClass publishingClass;
ros::NodeHandle nodeHandle;
TestSubscriber subscriber;
ros::Subscriber rosSubscriber = nodeHandle.subscribe("output",
1,
&TestSubscriber::callback,
&subscriber);
ASSERT_EQ(1, rosSubscriber.getNumPublishers()); // This passes
publishingClass.publishMessage();
ros::spinOnce(); // Spin so that publication can get to subscription
EXPECT_TRUE(subscriber.receivedMessage); // This may or may not be true
}
Depending on how the MyPublishingClass is written, this test may pass or fail due to that last line. To give a specific example, I have a class that reads through a bagfile and publishes specific topics. If that class publishes topics with:
publisher.publish(rosBagViewIterator->instantiate<std_msgs::Int32>());
the test passes. If instead the class publishes with:
publisher.publish(*rosBagViewIterator);
the test fails (even though the node still runs normally outside of the test, so I know that publication is actually happening). I have more examples of similar weirdness, but I'm hoping this is enough to describe my problem.
So, down to my questions:
Why is this so finicky? I realize that it's basically a node subscribing to its own publication, but that shouldn't be a problem. Should it?
Am I doing this wrong? Is there a better way to test ROS publications? Note that the class in question is not a complete ROS node, and cannot be tested as such.
Originally posted by kyrofa on ROS Answers with karma: 347 on 2014-10-15
Post score: 3
Original comments
Comment by kyrofa on 2014-10-16:
Thanks for the link Dirk! That is indeed exactly what I'm doing-- hopefully I'll be able to narrow it down a bit more.
Comment by DavidN on 2016-09-12:
Hi Kyle, did you manage to find out why the callback was inconsistent? I am having exactly the same problem. Let me know if you found something
Answer:
There is a unit test covering exactly your use case (https://github.com/ros/ros_comm/blob/ebd9e491e71947889eb81089306698775ab5d2a2/test/test_roscpp/test/src/subscribe_star.cpp#L123) which reliably passes. So I guess it must be something in your code, maybe in the parts not posted here.
Originally posted by Dirk Thomas with karma: 16276 on 2014-10-15
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 19744,
"tags": "ros, rostest, gtest"
} |
ROS Answers SE migration: pycharm setup | Question:
I am trying to use PyCharm as my IDE and it doesn't seem to recognize any of the ROS libraries (rospy, etc.) even though they are properly installed. If it helps, I am using Python 3.4.0 and PyCharm 4.0.4.
Originally posted by Mr. CEO on ROS Answers with karma: 143 on 2015-03-05
Post score: 3
Answer:
This is probably similar to running an IDE and getting it to code-complete ROS code.
Have you tried starting pycharm from a terminal in which you have sourced your setup.bash?
Otherwise make sure to check your PYTHONPATH, it should contain (at least) /opt/ros/$ros_release/lib/python2.7/dist-packages.
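A sketch of the sourced-environment approach (the ROS release name and paths are illustrative, and the source line is guarded so the snippet also runs on a machine without ROS installed):

```shell
# Source the ROS environment if present (release name is an example).
[ -f /opt/ros/indigo/setup.bash ] && source /opt/ros/indigo/setup.bash
# Make sure the ROS Python packages end up on PYTHONPATH either way.
export PYTHONPATH="${PYTHONPATH}:/opt/ros/indigo/lib/python2.7/dist-packages"
# Launch PyCharm from this same shell so it inherits the environment:
# pycharm.sh &
echo "$PYTHONPATH"
```

Starting the IDE from this shell means its interpreter sees rospy and friends without any per-project configuration.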
Originally posted by gvdhoorn with karma: 86574 on 2015-03-05
This answer was ACCEPTED on the original site
Post score: 8
Original comments
Comment by Mr. CEO on 2015-03-05:
Thanks, adding the PYTHONPATH to the Pycharm Environment Variables fixed the problem
Comment by gvdhoorn on 2015-03-05:
It probably works, but I'd go for the "sourced environment" option. If anything ever changes in ROS wrt Python setup/paths, you don't have to do anything. Setting the PYTHONPATH (or adding it to pycharm's setup) would mean you'd have to track any changes in ROS manually. | {
"domain": "robotics.stackexchange",
"id": 21060,
"tags": "python"
} |
Map Display: Use different frame then /map | Question:
Hi,
I might have a very simple question but haven't found the solution yet. I am using Map display to visualize the Occupancy Grid type of message. Occupancy grid's frame should be centered at the vehicle (in-vehicle local frame e.g. like in an AV's occupancy grid visualizations).
However, in my stack, someone is already using /map frame to define the global frame of the world map, i.e. it doesn't translate with the vehicle.
The question is: how can I change the frame used by the Map display in rviz so that it uses some newly defined frame, e.g. /local_vehicle_tf, rather than /map? Should I just define a new display type in rviz? That seems like overkill just to change the frame.
It's kind of a simple question but I haven't figured it out.
Thanks
Originally posted by BenQ1110 on ROS Answers with karma: 36 on 2019-11-12
Post score: 0
Answer:
There's no need to make a new type of display in Rviz. Instead, there are 3 simple things to do:
1. Make sure the occupancy grid message you publish contains the correct frame name in the header (e.g. /local_vehicle_tf)
2. In RViz in the map display settings, make sure the "topic" field is set to listen to this new occupancy grid topic and not the one publishing the world map.
3. Change the "Fixed Frame" field in Global Options to your new frame ("/local_vehicle_tf")
That should be enough to display your local "map" and any other data that can be transformed into that frame. Eventually, you will want to make sure that this new TF frame is connected to the other frames in your robot. For instance, you could have a transform between the global map frame and your local_vehicle_tf frame. When you have a fully connected transform tree, step 3 is no longer necessary.
One last thing, the /local_vehicle_tf frame sounds like the frame that's normally called "base_link" (See https://www.ros.org/reps/rep-0105.html). You might want to use the standard name if that is correct for your situation.
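As a plain-Python sketch of step 1 (no ROS imports here — in a real node this would be a nav_msgs/OccupancyGrid published with rospy; the dict below is only a stand-in mirroring the message fields):

```python
# Stand-in for building a nav_msgs/OccupancyGrid-like message; the dict
# keys mirror the fields of the real message type.
def make_local_grid(width, height, resolution, frame_id="local_vehicle_tf"):
    return {
        "header": {"frame_id": frame_id},   # step 1: stamp the local frame name
        "info": {"width": width, "height": height,
                 "resolution": resolution},
        "data": [0] * (width * height),     # all cells initially free
    }

grid = make_local_grid(4, 4, 0.5)
print(grid["header"]["frame_id"])
```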
Originally posted by Carl D with karma: 303 on 2019-11-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by BenQ1110 on 2019-12-02:
Thanks! That solves it. | {
"domain": "robotics.stackexchange",
"id": 34005,
"tags": "ros, rviz, ros-melodic"
} |
Are there somebodies use OpenCV2.2 with ROS? | Question:
I want to port my previous code, written with OpenCV 2.2, into ROS, and I have found many differences between OpenCV and ROS in the image type and some other types. So I wonder: if someone has already spent a long time developing with this, could they please give me some advice?
Thanks a lot!
Originally posted by Yongqiang Gao on ROS Answers with karma: 133 on 2011-02-22
Post score: 1
Answer:
ROS default install uses OpenCV 2.0
You can upgrade to 2.2 if you replace the ROS opencv2 package with the one available here:
http://www.ros.org/wiki/vision_opencv
just do svn co https://code.ros.org/svn/ros-pkg/stacks/vision_opencv/trunk
and make it.
Originally posted by Yogi with karma: 411 on 2011-02-23
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by fergs on 2011-02-23:
Just a note -- cturtle used OpenCV 2.0, diamondback/unstable are using 2.2 | {
"domain": "robotics.stackexchange",
"id": 4839,
"tags": "ros, vision-opencv, cv-bridge, opencv2.2"
} |
Feedback on ArrayList using OOPs | Question: Please review my code for the problem statement below. Many Thanks.
Your job is to create a simple banking application.
There should be a Bank class:
It should have an ArrayList of Branches
Each Branch should have an ArrayList of Customers
The Customer class should have an ArrayList of Doubles (transactions)
Customer:
Name, and the ArrayList of Doubles.
Branch:
Need to be able to add a new customer and initial transaction amount.
Also needs to add additional transactions for that customer/branch
Bank:
Add a new branch
Add a customer to that branch with initial transaction
Add a transaction for an existing customer for that branch
Show a list of customers for a particular branch and optionally a list of their transactions. Demonstrate autoboxing and unboxing in your code.
import java.lang.Double;
import java.util.ArrayList;
public class Customer{
private String name;
private ArrayList<Double> transactions = new ArrayList<>();
public Customer(String name,double initialTransaction){
this.name = name;
this.transactions.add(Double.valueOf(initialTransaction));
}
public String getCustomerName(){
return this.name;
}
public Double getInitialTransaction(){
return this.transactions.get(0);
}
public void addTransaction(double transaction){
this.transactions.add(Double.valueOf(transaction));
}
public String getCustomerDetails(){
return getCustomerName()+"\t\t\t"+transactions;
}
}
import java.lang.Double;
import java.util.ArrayList;
public class Branch{
private String name;
private ArrayList<Customer> customers;
public Branch(String name){
this.name = name;
this.customers = new ArrayList<>();
}
public String getBranchName(){
return this.name;
}
public void addCustomer(Customer customer){
if(customer!=null){
if(!searchCustomer(customer)){
this.customers.add(customer);
//System.out.println("Customer with name "+customer.getCustomerName()+" added to branch "+getBranchName());
}else{
//System.out.println("Customer with name "+customer.getCustomerName()+" already present in branch "+getBranchName());
}
}else{
//System.out.println("Customer with null values entered!!!");
}
}
public void addTransaction(Customer customer,double transaction){
if(customer!=null || transaction<=0.0){
if(searchCustomer(customer)){
this.customers.get(this.customers.indexOf(customer)).addTransaction(transaction);
//System.out.println("Transaction of "+transaction+" added in account of Customer with name "+customer.getCustomerName());
}else{
//System.out.println("Customer with name "+customer.getCustomerName()+" not present in branch "+getBranchName()+". Please add the customer.");
}
}else{
//System.out.println("Customer with null values or no transaction entered!!!");
}
}
public void printCustomerList(){
System.out.println("Customer Name \t\t Transactions");
for(Customer cx:customers){
System.out.println(cx.getCustomerDetails());
}
}
public boolean searchCustomer(Customer customer){
for(int i=0;i<this.customers.size();i++){
if(this.customers.get(i).equals(customer)){
return true;
}
}
return false;
}
}
import java.lang.Double;
import java.util.ArrayList;
public class Bank{
private String name;
private ArrayList<Branch> branches;
public Bank(String name){
this.name = name;
this.branches = new ArrayList<>();
}
public String getBankName(){
return this.name;
}
public void addBranch(Branch branch,Customer customer){
if(branch!=null){
if(!searchBranch(branch)){
branch.addCustomer(customer);
this.branches.add(branch);
System.out.println("Customer with name "+customer.getCustomerName()+" with an initialTransaction of "+customer.getInitialTransaction().doubleValue()+" is added to Branch with name "+branch.getBranchName()+" of bank "+getBankName());
}else{
System.out.println("Branch with name "+branch.getBranchName()+" already present");
}
}else{
System.out.println("Branch with null values entered!!!");
}
}
public void addTransaction(Branch branch, Customer customer,double transaction){
if(branch!=null || customer!=null){
if(searchBranch(branch) && branch.searchCustomer(customer)){
branch.addTransaction(customer,transaction);
System.out.println("Transaction of "+transaction+" added in account of Customer with name "+customer.getCustomerName()+" under branch "+branch.getBranchName()+" of bank "+getBankName());
}else{
System.out.println("Customer with name "+customer.getCustomerName()+" not present in branch "+branch.getBranchName()+". Please add the customer.");
}
}else{
System.out.println("Branch or Customer with null values entered!!!");
}
}
public void printCustomersOfBranch(Branch branch){
if(branch!=null){
if(searchBranch(branch)){
branch.printCustomerList();
}
}else{
System.out.println("Branch with null values entered!!!");
}
}
private boolean searchBranch(Branch branch){
for(int i=0;i<branches.size();i++){
if(this.branches.get(i).equals(branch)){
return true;
}
}
return false;
}
}
public class BankTest{
public static void main(String[] args){
Bank pnb = new Bank("Punjab National Bank");
Branch peachtree = new Branch("Peach Tree");
Branch rodeodrive = new Branch("Rodeo Drive");
Branch goodearth = new Branch("Good Earth");
Customer harsh = new Customer("Harsh",0.5);
Customer nidhi = new Customer("Nidhi",600.75);
Customer yuv = new Customer("Yuv",1785.95);
pnb.addBranch(peachtree,harsh);
pnb.addBranch(rodeodrive,nidhi);
pnb.addBranch(goodearth,yuv);
pnb.printCustomersOfBranch(peachtree);
pnb.printCustomersOfBranch(rodeodrive);
pnb.printCustomersOfBranch(goodearth);
pnb.addTransaction(peachtree,harsh,500.60);
pnb.addTransaction(rodeodrive,nidhi,150000.70);
pnb.addTransaction(goodearth,yuv,121798.12);
pnb.printCustomersOfBranch(peachtree);
pnb.printCustomersOfBranch(rodeodrive);
pnb.printCustomersOfBranch(goodearth);
}
}
Answer:
Demonstrate autoboxing and unboxing in your code.
Java performs these conversions automatically, so you can remove the calls to
Double#valueOf(double) and Double#doubleValue():double. Then you can replace
Double with double when applicable (e.g. the return type of public methods).
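A minimal, runnable sketch of that advice (class and method names here are mine, not from your code) — the compiler inserts both conversions for you:

```java
import java.util.ArrayList;

public class AutoboxingDemo {

    // Autoboxing: the double literal is wrapped into a Double by the
    // compiler; unboxing: get(0) converts the Double back to a double.
    static double firstTransaction() {
        ArrayList<Double> transactions = new ArrayList<>();
        transactions.add(500.60);      // no Double.valueOf(..) needed
        return transactions.get(0);    // no .doubleValue() needed
    }

    public static void main(String[] args) {
        System.out.println(firstTransaction());
    }
}
```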
Searching items in list
At some times you are using List#indexOf(Object):int but in other places, you
are looping on the list to test the equality of an item (searchCustomer,
searchBranch).
The Javadoc of ArrayList#indexOf(Object):int states that :
returns the lowest index i such that (o==null ? get(i)==null : o.equals(get(i)))
So you can easily replace your loops in your search methods by
return theList.indexOf(something) > -1
But there is still a better way to check if a list contains an item: ArrayList#contains(Object):boolean
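As a quick illustration (searchLoop below is my stand-in for the loop in your searchCustomer/searchBranch), the hand-written loop and contains(..) agree on both hits and misses:

```java
import java.util.ArrayList;
import java.util.List;

public class ContainsDemo {

    // Mirrors the hand-written loop in searchCustomer/searchBranch.
    static boolean searchLoop(List<String> items, String target) {
        for (int i = 0; i < items.size(); i++) {
            if (items.get(i).equals(target)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> customers = new ArrayList<>();
        customers.add("Harsh");
        customers.add("Nidhi");
        // the loop and contains(..) give the same answers
        System.out.println(searchLoop(customers, "Harsh") == customers.contains("Harsh"));
        System.out.println(searchLoop(customers, "Yuv") == customers.contains("Yuv"));
    }
}
```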
Naming
Based on the previous comment, you could also rename your searchXyz methods to
contains or another more meaningful name. This may be a cultural issue but,
most of the time when "we" (BE_fr devs) read search, "we" expect to receive a
collection of found objects.
Complexity
Instead of nesting your ifs like in Branch#addTransaction(..) you can fail quickly :
if ( customer==null || transaction <= 0.0 ) {
// fail
} else if ( !contains(customer) ) {
// fail
} else {
// This is fine
}
If you rely on Exception for your failure you can also reduce a bit the nesting
of your method :
if ( isThisInvalid(..) ) {
throw new InvalidOperation(..);
}
// This is fine
The fewer paths/branches you have in your code, the better it is. That's also why
I introduced a validation method, to move the validation rules outside of the
business code.
Encapsulation
Not all your methods have to be public, try to mark as much as possible as private
or package protected to reduce your public API.
Separation of concerns
Most of the time, for maintainability and testability reasons, you try to extract
concerns into different classes. That's why there are some popular high-level patterns
like MVC, MVP, ...
In your case you could move the printing to a dedicated class that will be
responsible to gather, format and print values from your model.
Here are some excerpts of what your final code may look like:
class Branch {
private final ArrayList<Customer> customers;
private final String name;
public Branch(String name) {
this.customers = new ArrayList<>();
this.name = name;
}
public String getBranchName() {
return this.name;
}
public ArrayList<Customer> getCustomers() {
return new ArrayList<>(customers);
}
public void addCustomer(Customer customer) {
if (!isValidCustomer(customer)) {
throw new InvalidCustomerException(customer);
}
customers.add(customer);
}
private boolean isValidCustomer(Customer customer) {
return customer!=null &&
!contains(customer);
}
public void addTransaction(Customer customer, double transaction) {
if (!isValidTransaction(customer, transaction)) {
throw new InvalidTransactionException(customer, transaction);
}
getCustomer(customer).addTransaction(transaction);
}
private boolean isValidTransaction(Customer customer, double transaction) {
return customer != null &&
transaction > 0.0 &&
contains(customer);
}
private boolean contains(Customer customer) {
return customers.contains(customer);
}
private Customer getCustomer(Customer customer) {
return customers.get(customers.indexOf(customer));
}
}
class BranchCustomersPrinter implements Consumer<Branch> {
private final PrintWriter output;
BranchCustomersPrinter(PrintWriter output) {
this.output = output;
}
@Override
public void accept(Branch branch) {
output.append("Customer Name \t\t Transactions");
branch.getCustomers().forEach(customer ->
output.append(customer.getCustomerName())
.append("\t\t")
.append(customer.getTransactions().toString())
);
output.flush();
}
}
Bank bank = ...
bank.print(new BranchCustomersPrinter(..));
Note: your validation in Branch#addTransaction(..) seems to be buggy. Because it
uses || instead of &&, it will accept a negative transaction for a known customer,
and a null customer with a non-positive transaction slips past the check and causes
a NullPointerException further down.
Note: Java uses references, and since your program will always run in memory you
don't need to get an item from the list to update its values.
Customer c1 = new Customer("One", 10);
ArrayList list = new ArrayList();
list.add(c1);
c1.addTransaction(5); // Is the same as :
list.get(list.indexOf(c1)).addTransaction(5);
But most of the time; you have to save and retrieve the customer from a persistent
storage.
This is also why list.contains(c1) returns true: it is effectively the same
"object". Doing the same with two customers that have the same properties will
not work:
Customer c1 = new Customer("Customer", 10);
Customer c2 = new Customer("Customer", 10);
c1.equals(c2); // false
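You can see both effects with a stripped-down stand-in for the Customer class (names here are illustrative, not from your code; note that equals() is not overridden):

```java
import java.util.ArrayList;
import java.util.List;

public class EqualsDemo {

    // Stripped-down stand-in for the Customer class under review;
    // equals() is NOT overridden, so reference equality applies.
    static class Customer {
        final String name;
        Customer(String name) { this.name = name; }
    }

    static boolean sameReferenceFound() {
        Customer c1 = new Customer("One");
        List<Customer> list = new ArrayList<>();
        list.add(c1);
        return list.contains(c1);                   // same object reference
    }

    static boolean lookalikeFound() {
        List<Customer> list = new ArrayList<>();
        list.add(new Customer("One"));
        return list.contains(new Customer("One"));  // distinct object, same fields
    }

    public static void main(String[] args) {
        System.out.println(sameReferenceFound());   // true
        System.out.println(lookalikeFound());       // false
    }
}
```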
But I guess that you will learn the subtleties of object references and equals
soon. | {
"domain": "codereview.stackexchange",
"id": 38492,
"tags": "java, object-oriented"
} |
B.Tech Project for final year of College | Question: I am an Engineering student in NSIT, Delhi, India. In our last year of the B.Tech degree we have a B.Tech project, aka BTP. I am in the ECE (Electronics and Communication Engineering) dept.
Usually, my ECE batchmates do their projects mainly on electronics, but I want to do mine in data science or data manipulation under another department (COE or IT). I approached a COE mentor under whose supervision I would like to do the project. But I have been struggling to choose the right topic for the project.
There are a lot of suggestions online, but many have already been done and executed by others and are mostly picked by students who just want to complete a project. I want some decent suggestions on how to choose the right project topic, as I am quite confused right now.
What student level data Science or, specifically, Data manipulation
projects can I do in my final year of College?
Answer: Do machine learning on microcontrollers or other edge devices (link1, link2, link3, link4, link5). This is a hot topic with new emerging technologies, and it will make your profs from both the electronics and IT departments happy.
There was a recent news article (link) where a team from the Fraunhofer IMC did handwritten digit recognition on an Arduino Uno. They claim that they not only run a trained neural network (which would be easy) but actually train the network on the Arduino.
However, they don't show their code. On their license page (link) they only say some blah-blah-blah about discussing possibilities with partners and customers and boast 30 years of experience in the development of microelectronic circuits. This is lame. I am sure that this can be done by an undergraduate student without 30 years of experience and that student will write a blog post about it.
If you develop machine learning code that does such tasks on an Arduino or even something smaller, like an ATtiny85, that would be cool. Your model should be able to be trained on the microcontroller given the restrictions of its memory and computational capacity. Then upload your code to GitHub or a blog and put your link here. If your code is good, it will get lots of likes, you will showcase your abilities to your employers, and will put those Fraunhofer dinos who hide their "great discoveries" to shame. Besides, it is very useful because of emerging technologies like IoT, 5G, etc.
By the way, it does not have to be a neural network. Other models, like SVM, KRR would also be great. Also, it does not have to be Arduino. Other microcontrollers, like PIC or ST are also OK. | {
"domain": "datascience.stackexchange",
"id": 5333,
"tags": "data-science-model"
} |
Optimize way of reading and writing file in node.js | Question: I want to read the content of file then want to make few updates to content and then write that content to another file. I have created the code with node.js as follows.
var filePath = path.join('public/index.html'),
exactFileName = path.basename(filePath),
outputDir = "output/";
function readContent(callback) {
//start reading file
fs.readFile(filePath, {encoding: 'utf-8'}, function (err, content) {
if (err) return callback(err)
callback(null, content);
//Check if directory available or not, if no then create new directory
if(!fs.existsSync(outputDir)){
fs.mkdirSync(outputDir, 0766, function(err){
if(err){
console.log(err);
response.send("ERROR! Can't make the directory! \n");
}
});
}
/*
will process content here and then write to output file
*/
//write the data in another file
fs.writeFile(outputDir+''+exactFileName, content, function (err) {
if (err) return console.log(err);
console.log('Content written to '+ exactFileName);
});
})
}
readContent(function (err, content) {
console.log(content);
});
I need inputs on:
optimizing this code
direction on how we can apply it for multiple files
Answer: Interesting question,
there is really not much to optimize from a speed perspective. From a code organization perspective you should really investigate libraries that help with asynchronous coding.
From a pure coding perspective, fs.mkdirSync will ignore the callback function you provide.
From a reliability perspective, your code will try to write to the output folder if that folder could not be created. That is not good. Also where would response be defined in that callback function?
You can apply this to multiple files by providing filePath as a parameter to readContent. readContent is by the way a terrible function name, it misleads the reader in thinking it just reads content whereas it reads contents, checks directories, optionally creates directories, processes content and writes out content. You get my point..
I am also not sure why you call the callback in readContent before all the work is done, an error might occur in creating a directory or in writing out the new content and the callback would never know..
Update:
This code works and organizes the functionality into steps without using a 3rd party library for handling asynchronous calls:
var path = require('path'),
fs = require('fs'),
outputDir = "output/";
function readFile( path, callback )
{
fs.readFile( path, {encoding: 'utf-8'}, callback );
}
function verifyFolder( folderName , callback )
{
if( fs.existsSync(outputDir) )
return callback(true);
fs.mkdir( outputDir, 0766, function(err){
callback( err ? false : true );
});
}
function processContent( content , callback ){
callback( undefined, content.toUpperCase() );
}
function writeContent( path, content , callback )
{
fs.writeFile( path, content, function (err) {
if (!err)
console.log( 'Content written to '+ path );
callback( err, content );
});
}
function processFile( filePath, callback) {
var content;
readFile( filePath, readFileCallback );
function readFileCallback( err, data ){
if(err)
return callback( err, data );
content = data;
verifyFolder( outputDir , verifyFolderCallback );
}
function verifyFolderCallback( exists ){
exists ? processContent( content , processContentCallback ) : callback( outputDir + ' does not exist' );
}
function processContentCallback( err, content )
{
if(err)
return callback( err, content );
writeContent( outputDir + path.basename(filePath) , content, callback );
}
}
function logResult( err, result ){
console.log( err || result );
}
processFile( path.join('killme.js') , logResult ); | {
"domain": "codereview.stackexchange",
"id": 8217,
"tags": "javascript, node.js, file-system, io"
} |
gazebo failed on virtualbox with fuerte | Question:
I use ubuntu 11.10 on virtualbox with 3D support.
I run
roslaunch gazebo_worlds empty_world.launch
And when I click plane1_model's body and move cursor to gazebo world, gazebo exited.
OpenGL Warning: crPackDrawElements: trying to use bound but empty elements buffer, ignoring.
Qt has caught an exception thrown from an event handler. Throwing
exceptions from an event handler is not supported in Qt. You must
reimplement QApplication::notify() and catch all exceptions there.
terminate called after throwing an instance of 'Ogre::InternalErrorException'
what(): OGRE EXCEPTION(7:InternalErrorException): Vertex Buffer: Out of memory in GLHardwareVertexBuffer::lock at /tmp/buildd/ros-fuerte-visualization-common-1.8.4/debian/ros-fuerte-visualization-common/opt/ros/fuerte/stacks/visualization_common/ogre/build/ogre_src_v1-7-3/RenderSystems/GL/src/OgreGLHardwareVertexBuffer.cpp (line 126)
/opt/ros/fuerte/stacks/simulator_gazebo/gazebo/scripts/gui: line 2: 3056 Terminated `rospack find gazebo`/gazebo/bin/gzclient -g `rospack find gazebo`/lib/libgazebo_ros_paths_plugin.so
[gazebo_gui-3] process has died [pid 3053, exit code 134, cmd /opt/ros/fuerte/stacks/simulator_gazebo/gazebo/scripts/gui __name:=gazebo_gui __log:=/home/sam/.ros/log/51435328-a021-11e1-9fcc-080027ef39bc/gazebo_gu i-3.log].
log file: /home/sam/.ros/log/51435328-a021-11e1-9fcc-080027ef39bc/gazebo_gui-3*.log
How to solve it?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2012-05-17
Post score: 4
Original comments
Comment by joq on 2012-05-17:
Ogre often has trouble doing 3D graphics in a virtual machine. You may need to run it with a real driver (i.e. without virtualbox).
Answer:
From @joq Ogre often has trouble doing 3D graphics in a virtual machine. You may need to run it with a real driver (i.e. without virtualbox).
Originally posted by tfoote with karma: 58457 on 2012-06-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9432,
"tags": "ros, gazebo, virtualbox"
} |
Desktop notifications for queue items | Question: I wasn't satisfied with Simon's script only alerting on the orange alert. I wanted to know if even 1 review item was available, so I made some modifications to the userscript so that it runs on the Review page, and only alerts you when there are reviews that show a positive number.
This is Simon's version, and here is my version:
/** @preserve
// ==UserScript==
// @name Review Queue Notification
// @author Malachi Edited Simon Forsberg Created
// @description Shows a desktop notification when there are review items in the queue.
// @namespace https://github.com/Zomis/SE-Scripts
// @grant GM_getValue
// @grant GM_setValue
// @grant GM_notification
// @match *://*.stackexchange.com/review*
// @match *://*.stackoverflow.com/review*
// @match *://*.superuser.com/review*
// @match *://*.serverfault.com/review*
// @match *://*.askubuntu.com/review*
// @match *://*.stackapps.com/review*
// @match *://*.mathoverflow.net/review*
// ==/UserScript==
*/
var KEY_NEXT = 'NextReload';
var DELAY = 60 * 1000; //60,000 milliseconds
var currentTime = Date.now ? Date.now() : new Date().getTime();
var lastTime = GM_getValue(KEY_NEXT, 0);
var nextTime = currentTime + DELAY;
GM_setValue(KEY_NEXT, nextTime);
var timeDiff = Math.abs(lastTime - currentTime);
setTimeout(function(){
window.location.reload();
}, DELAY);
var title = document.title.split(' - '); // keep the site name
document.title = 'Desktop Notifications - ' + title[1];
// a way to detect that the script is being executed because of an automatic script reload, not by the user.
if (timeDiff <= DELAY * 2) {
var notifications = [];
var reviewCount = 0;
var reviewItems = document.getElementsByClassName('dashboard-num');
for (var i = 0; i < reviewItems.length; i++){
reviewCount += parseInt((reviewItems[i].getAttribute("title")).replace(',',''), 10);
console.log(reviewItems[i]);
}
console.log(reviewCount);
if (reviewCount > 0) {
notifications.push(reviewCount + ' Review Items');
}
if (notifications.length) {
var details = {
title: document.title,
text: notifications.join('\n'),
timeout: 0
};
GM_notification(details, null);
}
}
This is also available on my GitHub as a userscript.
Follow-up question with new GitHub repo
Answer: This is mostly personal preference, but, I usually stack the @matches in descending size:
// @match *://*.stackexchange.com/review*
// @match *://*.stackoverflow.com/review*
// @match *://*.superuser.com/review*
// @match *://*.serverfault.com/review*
// @match *://*.askubuntu.com/review*
// @match *://*.stackapps.com/review*
// @match *://*.mathoverflow.net/review*
into:
// @match *://*.stackexchange.com/review*
// @match *://*.stackoverflow.com/review*
// @match *://*.mathoverflow.net/review*
// @match *://*.serverfault.com/review*
// @match *://*.askubuntu.com/review*
// @match *://*.stackapps.com/review*
// @match *://*.superuser.com/review*
Just looks nicer, I suppose.
var title = document.title.split(' - ');:
You don't use title for anything other than title[1], so just use var title = document.title.split(' - ')[1] instead.
.replace(',',''): You should have a space after ','.
You could consider replacing the .length style for loop with a for (var i in reviewItems) style loop. If not, you can also declare reviewCount in the for loop like: for (var reviewCount = 0, i = 0;.
var details = {
title: document.title,
Desktop Notifications - Code Review Stack Exchange sounds a little bulky and over-the-top. Review Items is a better name for it, and you could consider removing Stack Exchange, but, consider the effect that would have on MSE.
I can't see the point in using notifications as an array, as it should really just have the one notification.
if (reviewCount > 0) {
notifications.push(reviewCount + ' Review Items');
}
if (notifications.length) {
var details = {
text: notifications.join('\n'),
if (reviewCount > 0) {
notification = reviewCount + ' Review Items';
}
if (notification) {
var details = {
text: notification,
or even just move reviewCount + ' Review Items' into details, like:
if (reviewCount > 0) {
var details = {
text: reviewCount + ' Review Items',
As of ECMAScript 5, the default radix used in parseInt is supposed to be 10; before that, a string with a leading 0 would get parsed as an octal number instead of a decimal number, and, as it seems, you can just omit that optional radix argument entirely. Although, as @EthanBierlein pointed out in the comments, MDN recommends not to omit it.
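A small sanity check of that radix behavior (runs in any modern, ES5+ JS engine):

```javascript
// With the comma stripped, base-10 parsing gives the expected count
console.log(parseInt('1,234'.replace(',', ''), 10)); // 1234

// In ES5+ engines the radix defaults to 10 for ordinary digit strings
// (pre-ES5 engines could treat the leading 0 as octal and give 0)
console.log(parseInt('08'));        // 8

// ...but hex-prefixed strings are still auto-detected without a radix
console.log(parseInt('0x10'));      // 16
console.log(parseInt('0x10', 10));  // 0 (parses the leading "0", stops at "x")
```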
"domain": "codereview.stackexchange",
"id": 15072,
"tags": "javascript, stackexchange, rags-to-riches, userscript"
} |
Generalities about propagators and its application to scalar chromodynamics | Question: In an exercise for a course on quantum field theory, I am given the following Lagrangian:
$$
\mathcal{L} = -\frac{1}{2} G_{\mu\nu}^a G^{a\mu\nu} + 2 (D_\mu \phi^\dagger)^a(D^\mu \phi)^a - 2 m^2\phi^{\dagger a}\phi^a -2\lambda (\phi^{\dagger a}\phi^a)^2,
$$
where the covariant derivative and the strength tensor are given as
$$
G_{\mu\nu} = \partial_\mu G_\nu - \partial_\nu G_\mu - i g [G_\mu, G_\nu],
$$
$$
D_\mu = \partial_\mu - i g G_\mu \equiv \partial_\mu - K_\mu,
$$
and where the scalar field is complex and decomposes under the fundamental representation of ${\rm SU}(N)$ as $\phi = \phi^a T^a$, for some given $N\geq 2$. Of course, repeated indices are summed over.
Now the first question I am asked is to calculate the propagator for the scalar field, and here a conceptual problem that has been haunting me for a while makes its appearance again: when we talk about the propagator of a given field for a specific Lagrangian, aren't we talking about the Green's function for the differential equation that gives the dynamics of the (classical) field? Applying the calculus of variations to the action corresponding to this Lagrangian under a variation $\phi\mapsto \phi+\delta \phi$, I reach the equations of motion
$$
\Omega^{ab} \phi^b = \Omega^{ab} \phi^{\dagger b} = 0 \tag{1}
$$
for an operator
$$
\Omega^{ab} = \partial_\mu K^{\mu a b} - K_\mu^{ab} \partial^\mu + K^{ca}_\mu K^{\mu cb} - (\partial^2 +m^2)\delta^{ab}
$$
where, as usual, inside a linear operator terms like $\partial_\mu A$ act as $\phi\mapsto \partial_\mu(A \phi)$. Now the provided solution to the exercise says
The propagator for the scalar field is contained in the kinetic term
$$\mathcal{L}_{\rm kin} = -\phi^{\dagger a}(\partial^2 + m^2)\delta^{ab}\phi^b \tag{2}$$
so according to the solution the propagator is actually the Green's function for the differential equation induced by $(2)$ and not by $(1)$, but why? Extremizing the action under a variation of $\phi$ for the full Lagrangian leads to (up to possible mistakes in my development) equation $(1)$, and as such I would expect that to be what determines the propagator. Is it because the (free) propagator we are looking for here derives from the interaction-free part of the Lagrangian, which by definition only includes (for $\phi$) the last term in my $\Omega^{ab}$?
If this is the reason, I would still appreciate further information on why the propagator doesn't include the whole $\Omega$. I understand that the "practical" objective is to express through the propagator $\Delta$ a free partition function as products of terms like
$$
Z_{\rm free}\propto \exp\left[i\int d^4xd^4y J(x)\Delta(x-y)J(y)\right]
\qquad (3)
$$
but, to my intuition, the interactions between a field and a gauge field are of different "nature" than the interactions added by hand such as the $\lambda$ term, because the former appear by the necessity of imposing gauge invariance. However, I believe these other interactions "added by hand" are also incorporated by a necessity, namely making the theory renormalizable, so perhaps they are not that different in nature after all, are they?
Answer: You are right. In perturbation theory, the propagator consists of the terms quadratic in the field, and the interaction terms consist of terms of third order or higher in the field.
One way to understand why the second-order terms are given special treatment is that, in the path-integral formalism, only the part of the action quadratic in the field can be integrated exactly (i.e., as Gaussian integrals).
Let us try to understand this from a pragmatic point of view.
For example, if we decide to stand on the path integral formalism, the perturbation expansion is constructed in the following form:
$$Z[j,h]=\int D\phi D\varphi\ e^{iS_0+iS_I+\int_x(j\phi+h\varphi)}=\int D\phi D\varphi\ \Big(\sum_n\frac{(iS_I)^n}{n!}\Big)e^{iS_0+\int_x(j\phi+h\varphi)},$$
$$S_0+S_I=\int_x\Big\{\frac{1}{2}(\partial\phi\partial\phi-m^2\phi^2)+\frac{1}{2}(\partial\varphi\partial\varphi-m^2\varphi^2)+\frac{g}{2}\varphi\phi\phi\Big\}.$$
Here, the propagator is contained in $S_0$. Also, for the case including fermions, see an appropriate textbook;
the reason why the second-order terms are important remains the same even when fermions are included.
Now suppose that only quadratic terms for the field are included in the $S_0$:
$$S_0=\int_x\Big\{\frac{1}{2}(\partial\phi\partial\phi-m^2\phi^2)+\frac{1}{2}(\partial\varphi\partial\varphi-m^2\varphi^2)\Big\}$$
We perform the path integral for the free field above and obtain
$$Z[j,h]=\Big(\sum_n\frac{(iS_I[\frac{\delta}{i\delta j},\frac{\delta}{i\delta h}])^n}{n!}\Big)\int D\phi D\varphi\ e^{iS_0+\int_x(j\phi+h\varphi)}$$
$$=\Big(\sum_n\frac{(iS_I[\frac{\delta}{i\delta j},\frac{\delta}{i\delta h}])^n}{n!}\Big)\ e^{-\int_{x,y}j_x(\Box+m^2)^{-1}j_y}e^{-\int_{x,y}h_x(\Box+m^2)^{-1}h_y}.$$
Here we performed the Gaussian integral; this is a special property of integrands quadratic in the field.
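Schematically, the identity being used is the functional Gaussian integral — written here up to normalization, with the same suppressed constant factors as in the formulas above:

```latex
\int D\phi\;
  \exp\!\Big[-\tfrac{i}{2}\int_x \phi\,A\,\phi + i\int_x j\,\phi\Big]
\;\propto\;
  \exp\!\Big[-\tfrac{i}{2}\int_{x,y} j_x\,A^{-1}_{xy}\,j_y\Big],
\qquad A=\Box+m^2 .
```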
On the other hand, if there is a third-order term for the field;
$$S_0=\int_x\Big\{\frac{1}{2}(\partial\phi\partial\phi-m^2\phi^2+g\varphi\phi\phi)+\frac{1}{2}(\partial\varphi\partial\varphi-m^2\varphi^2)\Big\}$$
it cannot be handled by a Gaussian integral, so we no longer know how to evaluate it exactly; in fact, in this case we will obtain
\begin{align}
Z[j,h]&=\Big(\sum_n\frac{(iS_I[\frac{\delta}{i\delta j},\frac{\delta}{i\delta h}])^n}{n!}\Big)\\ \times&\int D\phi D\varphi\ e^{i\int_x\{\frac{1}{2}(\partial\phi\partial\phi-m^2\phi^2+g\varphi\phi\phi)\}+i\int_x(j\phi)}\\&\times e^{-\int_x\{i\varphi(\Box+m^2)\varphi\}+i\int_x(h\varphi)}.
\end{align}
We do not know how to treat this complicated integral exactly.
This is why the propagator consists of second order terms about the field. (In operator formalism, it is difficult to construct the inverse of a differential operator with interaction terms like the one you mentioned, so we use the inverse of a free-field differential equation that can be solved.)
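As a zero-dimensional toy version of this statement (my addition; a Euclidean analogue rather than the Minkowski integrals above): the "generating functional" with a quadratic action has the exact closed form $\int dx\,e^{-ax^2/2+jx}=\sqrt{2\pi/a}\,e^{j^2/2a}$, which a direct numerical quadrature confirms, whereas adding a cubic term to the exponent destroys the closed form:

```python
import math

def source_integral(j, a=1.3, x_max=20.0, n=100_000):
    """Midpoint-rule estimate of the 0-D 'generating functional'
    Z(j) = integral of exp(-a x^2 / 2 + j x) dx  (Euclidean toy model)."""
    dx = 2 * x_max / n
    total = 0.0
    for k in range(n):
        x = -x_max + (k + 0.5) * dx
        total += math.exp(-a * x * x / 2 + j * x)
    return total * dx

# Exact Gaussian ("free-field") result: completing the square gives
# Z(j) = sqrt(2*pi/a) * exp(j^2 / (2a)); no analogous formula exists
# once a cubic term g*x^3 is added to the exponent.
j, a = 0.7, 1.3
exact = math.sqrt(2 * math.pi / a) * math.exp(j * j / (2 * a))
```

The $j$-dependence $e^{j^2/2a}$ of the exact result is the zero-dimensional analogue of the $e^{-\int j(\Box+m^2)^{-1}j}$ factor obtained above.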
On the other hand, if, for example, in the example above, $\varphi$ is an external field, i.e., we do not perform path integral about $\varphi$, the story changes:
\begin{align}
Z[j,h]=\Big(\sum_n\frac{(iS_I[\frac{\delta}{i\delta j},\frac{\delta}{i\delta h}])^n}{n!}\Big) \times\int D\phi \ e^{-i\int_x\{\frac{1}{2}\phi(\Box+m^2-g\varphi)\phi\}+\int_x(j\phi)}\\
\end{align}
In this case, the Gaussian integral can be performed even if $S_0$ contains terms that include the external field $\varphi$. Thus, in this case, the propagator can be defined in a form that includes $\varphi$.
Regarding the last part: the symmetry-based construction of the theory has little to do with the propagator. Concepts such as propagators and vertices are objects introduced in order to construct a perturbation theory. One reason the propagator is built from the terms quadratic in the fields is that, as we saw above, quadratic terms have various useful properties, such as admitting exact Gaussian integration. On the other hand, when defining a field theory on a lattice, it is not always necessary to introduce propagators at all. In other words, for perturbation theory it is convenient to keep the second-order terms of the theory separate from the rest of the theory, so we keep them separate. | {
"domain": "physics.stackexchange",
"id": 95346,
"tags": "quantum-field-theory, lagrangian-formalism, quantum-chromodynamics, greens-functions, propagator"
} |
any relation/ overlap between small world graphs, scale free graphs, and expander graphs? | Question: Small-world graphs (e.g. the Watts-Strogatz model and others) and scale-free graphs are relatively recently discovered graph types, identified mainly via empirical analysis of large real-world graphs (e.g. via big-data/data-mining techniques). They have since been found to be quite ubiquitous and longstanding in many diverse graphs arising in nature and in human constructions (e.g. biology/genes, social networks, man-made networks such as the WWW/internet/telecommunication/electrical grids, airport connectivity, etc.).
In contrast, expander graphs are far older and were invented mainly as a theoretical device in math/(T)CS, but have since found very broad, widespread, and key applications. I am looking for e.g. refs/surveys/overviews on their interrelation.
What are the relations between the following (e.g. is there any overlap for some parameters)?
small world networks
scale free graphs
expander graphs
Answer: There are lots of overlaps between small world and scale-free, but I think much less so between those two and expanders.
The terms "small world" and "scale-free" are often used informally, but formal definitions are often along the lines of:
Small-world means short average (or maximum) path length (typically $O(\log n)$, with $n$ vertices) and highly clustered (meaning that for any vertex $v$, the fraction of pairs of neighbors of $v$ which are adjacent is high)
Scale-free is often taken to mean that the degree distribution follows some variant of a power-law (e.g. power-law with cutoff, etc.), but more generally/informally is used to mean that the degree distribution is long-tailed, in contrast to, say, Erdos-Renyi random graphs which have a degree distribution that is exponentially concentrated around its mean.
Expander, of course, has a formal definition that is almost 100% standardized.
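To make the "highly clustered" part of the small-world definition concrete, here is a minimal sketch (plain Python; the adjacency representation and function name are my own) of the local clustering coefficient, the fraction of a vertex's neighbor pairs that are themselves adjacent:

```python
def local_clustering(adj, v):
    """Fraction of pairs of neighbors of v that are adjacent.

    adj: dict mapping each vertex to the set of its neighbors.
    """
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0  # conventionally zero for vertices of degree < 2
    links = sum(
        1
        for i in range(k)
        for j in range(i + 1, k)
        if nbrs[j] in adj[nbrs[i]]
    )
    return 2.0 * links / (k * (k - 1))

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # every neighbor pair adjacent
path = {0: {1}, 1: {0, 2}, 2: {1}}            # no neighbor pair adjacent
```

A small-world graph has a high average of this quantity together with $O(\log n)$ path lengths; an Erdos-Renyi random graph of the same density would have a much lower average.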
Expanders by their nature have logarithmic diameter, similar to small-world networks. Beyond that, however, as far as I know there is little overlap between the concepts.
In practice, expanders are often bounded-degree or even regular of bounded degree, whereas "real-world" graphs typically have a long-tailed degree distribution, with a small (but surprisingly large - e.g. not exponentially small) number of high-degree "hubs" (as they are usually called). Furthermore, scale-free graphs (almost by definition, depending on your definition) have a large number of vertices of very low degree - in most "real-world" graphs there are a large number of vertices of degree 1 or 2. This makes real-world graphs very unlike expanders, in that it is very easy to disconnect real-world graphs by removal of targeted edges, whereas expanders are by their nature highly connected. (Real-world graphs often also have the property that removal of random edges is very bad at disconnecting them.)
I'm sure there is something to be said about the use of spectral techniques for things like community detection, etc. in real-world networks, and from that viewpoint there may or may not be a little more overlap with expanders, but I'm not an expert in that area (maybe we can get Mark Newman to join cstheory.SE to comment...) | {
"domain": "cstheory.stackexchange",
"id": 2794,
"tags": "reference-request, graph-theory, big-picture, application-of-theory"
} |
Is "Cloning" exact or almost similar to parent? | Question: I was studying "Reproduction in Organism" as an interest of my own and there was a line
offspring produced through single parent are exact copies of their parent. Also, they are genetically identical.
So here it says "genetically identical" (I assume that identical here means exactly the same), but then the next line confused me:
The term clone is used to describe such morphologically and genetically similar individuals.
Here, the term "genetically similar" is used.
So, are clones (offspring produced through asexual reproduction, i.e., involving one parent) genetically identical to their parents, or merely genetically similar?
If "genetically identical", then how is that possible, since DNA replication is never 100% accurate?
Answer: Biology is a lot messier than pure math, so if you insist on the same level of perfection and rigidity in definitions you are going to be very frustrated. You are correct that DNA replication is never 100% efficient, so "genetically similar" is probably a more literally accurate description of a clone. However "genetically similar" is a very vague term, which we might apply (depending on context) to two cells from the same individual, clones and parents, two siblings, a child and a parent, two unrelated members of the same species, or even two members of the same biological kingdom. It is sometimes useful to distinguish different levels of similarity, even if we acknowledge that the boundaries of our definitions are fuzzy and arbitrary. Saying two individuals are "genetically identical" is really a short-hand for "in the context of the phenomena we're looking at, genetic differences are so small as to be irrelevant"
If you look at the level of DNA base pairs, it's estimated the average number of base pair differences between two humans is about 20 million or 0.6% of the 3.2 billion base pairs in the human reference genome. Clearly that 0.6% difference can generate a lot of variation in readily observable traits. During development some cells in an individual will undergo somatic mutations, so even two cells from the same individual may not be literally genetically identical. For the most part (excluding catastrophes like cancer), these differences will be in only a handful of bases out of the 3.2 billion, and won't cause any observable difference in the two cells. The difference between a parent and their clone will mostly be these somatic mutations, probably with some additional mutations caused by the cloning process. | {
"domain": "biology.stackexchange",
"id": 10605,
"tags": "genetics, reproduction"
} |
What is the relation between thermodynamic entropy and the information in the brain (or a book)? | Question: Maximum entropy is equivalent to minimum information in statistical mechanics and quantum mechanics and the universe as a whole is tending towards an equilibrium of minimum information/maximum entropy. What seems to be part of the canonical picture as well is that all the structure formation we see on earth is the result of the earth being an open system, a thermodynamic machine so to say, that is driven by the energy of the sun, and so the apparent violation of the second law of thermodynamics on earth is only an illusion that is resolved by including the rest of the universe.
But, what I have asked myself just yesterday:
is it legitimate to consider the information that is stored in the brain a part of the thermodynamic information/negentropy? Or is this a completely different level of information?
The discrepancy between these two aspects of information probably becomes more apparent if I ask the same question about the information that is stored in a computer. The computer is completely deterministic, and hence it seems to me that it is not valid to consider it as part of a statistical-mechanical or quantum-mechanical system (at least unless the computer is destroyed, or an electric discharge changes one of its storage flip-flops, or something along these lines).
Or to take it yet another step further to absurdity: do the books in the US library of congress, or the art shown in the Louvre in Paris contain any substantial fraction of thermodynamic negentropy, or isn't it rather irrelevant thermodynamically if a page of a book or a painting contains a black or green or red spot of paint that is part of a letter "a" or the eye of the Mona Lisa, or if it contains no paint whatsoever?
Of course, I know that we can define Shannon's information for computer memory straightforwardly as well as we can define the entropy of the brain as a electrochemical-thermodynamic system. But my question is, are these concepts actually the same, or is one included in the other, or are they completely disjoint, complementary or whatever unrelated?
Edit: this question (thanks to Rococo) already has several answers, each of which I find pretty enlightening. However, since the bounty is now already on the table, I encourage everybody to provide their own point of view, or even statements conflicting with the ones in the linked question.
Answer: in short:
Yes, information and negentropy are measuring the same thing, and can be directly compared and just differ by a constant scale factor.
But this introduces a problem when talking about the valuable information in a book, brain, or computer, because the valuable information is overwhelmed by the relatively numerically gargantuan entropy of the arrangement of the mass that is storing that information. This problem, though, is often easily solved by asking information questions that are phrased in a way to select the information of interest. So in a computer, where this separation is more obvious, for example, it's important to differentiate between the entropy of a transistor (a crystalline structure with very high information) and the information contained in its logical state (a much much lower information, but usually more interesting).
Therefore, the question ends up being only, do we understand the system well enough to determine what information we are interested in? Once we know that, it's usually possible to estimate it.
are negentropy and information measuring the same thing?:
Yes. This is spelled out very clearly by many people, including this answer on PSE, but I'll go with an old book by Brillouin, Science and Information Theory (i.e., this is the Brillouin of Brillouin zones, etc., and also the person who coined the term "negentropy").
The essential point is to show that any observation or experiment made on a physical system automatically results in an increase of the entropy of the laboratory. It is then possible to compare the loss of negentropy (increase of entropy) with the amount of information obtained. The efficiency of an experiment can be defined as the ratio of information obtained to the associated increase in entropy.
information vs valuable information:
Brillouin also distinguishes between "information" and "valuable information", and says that a priori there's no mathematical way to distinguish these, although in certain cases we can define what we consider to be the valuable information, and in those cases we can calculate it.
We completely ignore the human value of the information. A selection of 100 letters is given a certain information value, and we do not investigate whether it makes sense in English, and, if so, whether the "meaning" of the sentence is of any practical importance. According to our definition, a set of 100 letters selected at random (according to the rules of Table 1.1), a sentence of 100 letters from a newspaper, a piece of Shakespeare or a theorem of Einstein are given exactly the same information value. In other words, we define "information" as distinct from "knowledge," for which we have no numerical measure. We make no distinction between useful and useless information, and we choose to ignore completely the value of the information. Our statistical definition of information is based only on scarcity. If a situation is scarce, it contains information. Whether this information is valuable or worthless does not concern us. The idea of "value" refers to the possible use by a living observer.
So then, of course, to address the information in the Principia, the question is to separate information from valuable information, and note that a similar book with the same letters in a specific but random sequence will have the same information, but different valuable information.
In his book, Brillouin provides many ordinary examples, but also computes some broader and more interesting examples that are closely related to some subtopics of this question. Instead of a picture (as the OP's question suggests), Brillouin constructs a way to quantify the information of a schematic diagram. Instead of a physics text (as the OP's question suggests), he analyses a physical law (in this case, the ideal gas law), and also calculates its information content. It's not a surprise that these information values are swamped by the non-valuable information in the physical material in which they are embodied.
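To attach numbers to this scarcity-based measure (my sketch, not from Brillouin's book): the information per symbol of an ensemble is the Shannon entropy $H=-\sum_i p_i\log_2 p_i$, so 100 letters drawn uniformly from a 26-letter alphabet carry $100\log_2 26\approx 470$ bits whether they spell Shakespeare or noise, while a skewed letter distribution lowers the per-letter value:

```python
import math

def shannon_entropy_bits(probs):
    """H = -sum p * log2(p) over nonzero probabilities, in bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [1 / 26] * 26            # all letters equally likely
skewed = [0.5] + [0.5 / 25] * 25   # one letter dominates the ensemble

h_uniform = shannon_entropy_bits(uniform)  # log2(26), about 4.70 bits/letter
h_skewed = shannon_entropy_bits(skewed)    # strictly lower
```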
a specific case, the brain:
Of the three topics brought up by the question, the most interesting one to me is the brain. Here, asking what the information in the brain is creates a similar ambiguity as for a computer: "are you talking about the crystalline transistors or are you talking about their voltage state?" In the brain it is more complex for various reasons, but the most difficult to sort out seems to be that there is not a clear distinction between structure, state, and valuable information.
One case where it's clear how to sort this out, is the information in spikes within neurons. Without giving a full summary of neuroscience, I'll say that neurons can transmit information via voltages that appear across their membranes, and these voltages can fluctuate in a continuous way or exist as discrete events called "spikes". The spikes are the easiest to quantify their information. At least for afferent stimulus-encoding neurons where people can make a reasonable guess what stimulus they are encoding, it's often possible to quantify bits/spike, and it is usually found to be 0.1 to 6 bits/spike, depending on the neuron (but there's obviously some pre-selection of the neurons going on here). There is an excellent book on this topic titled Spikes, by Fred Rieke, et al, although a lot of work has been done since its publication.
That is, given a model of 1) what is being encoded (e.g., aspects of the stimuli), and 2) what the physical mechanism for encoding that information is (e.g., spikes), it's fairly easy to quantify the information.
Using a similar program it's possible to quantify the information stored in a synapse, and in continuous voltage variations, although there's less work on these topics. To find these, search for things like "Shannon information synapse", etc. It seems to me not hard to imagine a program that continues along this path, and it if it were to scale to the large enough size, could eventually estimate information in the brain from these processes. But this will only work for the processes that we understand well enough to ask the questions that get at the information we are interested in. | {
"domain": "physics.stackexchange",
"id": 78503,
"tags": "thermodynamics, statistical-mechanics, entropy, information"
} |
Calculating moment of inertia for a cylinder? | Question: I'm trying to calculate the moment of inertia for a cylinder about a longitudinal axis, but I don't know where I went wrong with my approach.
$$I=\int r^2 dm$$
Assuming constant density:
$$\frac{M}{V} = \frac{dm}{dv} $$
Then, in order to find $dv$, I found the volume by summing all the chords in the base of the cylinder from $R$ to $-R$ and multiplying by the length of the cylinder. (Here $R$ is the radius and $r$ is the distance from the axis of rotation, which I kept at the origin.)
$$V = 2L\int \sqrt{R^2-r^2} dr$$
Thus,
$$dv = 2L\sqrt{R^2-r^2} dr$$
And I know that the volume of the cylinder is:
$$V = πR^2L$$
So then...
$$dm = \frac{2M}{πR^2}\sqrt{R^2-r^2}$$
Substituting the original definition of moment of inertia from $-R$ to $R$ gives me:
$$\frac{2M}{πR^2}\int_{-R}^{R}r^2\sqrt{R^2-r^2}dr = \frac{1}{4}MR^2$$
However, the moment of inertia I looked up in a physics textbook is exactly two times this (the factor is $1/2,$ not $1/4$). I also solved for the moment of inertia of a sphere and similarly got exactly half of the accepted answer. I have looked over this thoroughly and don't know where I went wrong, but I suspect it is something to do with my integrating bounds?
Answer: What is wrong with your answer?
I am going to write an answer explaining why your solution is wrong, because I don't think a comment would be enough. First, I am going to make a change of variable. You used the variable $r$ to refer to the distance from the axis of the cylinder. I feel more comfortable using the symbol $x$ for this variable, so that is what I am going to do. The reason I feel more comfortable with this choice is that you don't integrate over cylindrical shells of radius $r$; you integrate over surfaces of constant $x$.
Anyway, let's look at what you did. Your equation $$dV = 2L \sqrt{R^2-x^2}dx$$ is correct. This is great so far.
The next equation I want to look at is
$$dm = \frac{2M}{\pi R^2}\sqrt{R^2-x^2}dx.$$
Notice you forgot the $dx$ in your original question, but it's obvious that is what you meant. Now let's think about what this equation means. It means that the mass in the surface of constant $x$ of width $dx$ is the $dm$ given by the equation. This equation is also totally fine, but it isn't as useful as you think.
I do have a problem with your next equation. Your next equation is
$$I = \frac{2M}{\pi R^2}\int x^2 \sqrt{R^2-x^2}dx.$$
The reason I have a problem with this equation is that it is really saying $I=\int x^2 dm /M$, but of course we know that it should be an $r$ instead of an $x$: $I=\int r^2 dm /M$. The reason this distinction is important is because your surfaces of constant $x$ are not surfaces of constant $r$. Now we can't fix this problem by just writing
$$I = \frac{2M}{\pi R^2}\int r^2 \sqrt{R^2-x^2}dx$$
because each surface of constant $x$ is not at a well-defined $r$: the part of the surface near the surface of the cylinder has $r=R$, but the part in the middle of the surface has $r=x$. So the integral above doesn't make sense.
Correct way to get the answer
There are two ways to find the answer then. One way is just to write out the integral in rectangular coordinates and the other way is to use the parallel axis theorem to find the moment of inertia of each surface of constant $x$ and the integrate over constant $x$.
First way of getting the answer
Let's look at the first method first. We get the following expression for the moment of inertia:
$$I = \frac{M}{\pi R^2}\int^R_{-R} \int^\sqrt{R^2-x^2}_{-\sqrt{R^2-x^2}} (x^2+y^2) dy\,dx $$
Doing the inner integral, you get
\begin{equation}
\begin{aligned}
I &= \frac{2M}{\pi R^2}\int^R_{-R} \left(\frac{1}{3} R^2 + \frac{2}{3}x^2 \right) \sqrt{R^2-x^2} dx \\
&=\frac{1}{3} \frac{2M}{\pi R^2}\int^R_{-R}R^2\sqrt{R^2-x^2} dx+\frac{2}{3} \frac{2M}{\pi R^2}\int^R_{-R}x^2\sqrt{R^2-x^2} dx
\end{aligned}
\end{equation}
Now you have already evaluated the integral in the second term in your question. The integral in the first term is easy because it is just the area of a semicircle. So we get
\begin{equation}
\begin{aligned}
I &= \frac{1}{3} \frac{2M}{\pi R^2} R^2 \pi R^2/2 +\frac{2}{3} \frac{M R^2}{4} \\
&= MR^2/2
\end{aligned}
\end{equation}
Second way of getting the answer
Now let's look at the second way of getting the answer. This time we will just calculate the moment of inertia of each surface of constant $x$ and add them up. We could find in a table that the moment of inertia of a rectangle of width $2\sqrt{R^2-x^2}$ and mass $dm$ about its center of mass is $\frac{1}{3} dm \left(R^2-x^2\right).$ But we are more interested in the moment of inertia about the origin when this surface of constant $x$ is displaced a distance $x$ from the origin. Using the parallel axis theorem, we find that the moment of inertia is $\frac{1}{3} dm \left(R^2-x^2\right) + dm x^2$. This is the moment of inertia of each surface of constant $x$. Adding these up we get the total moment of inertia:
$$I = \int dI = \int \frac{1}{3} dm \left(R^2-x^2\right) + dm x^2 = \int \frac{1}{3} R^2 dm + \frac{2}{3} x^2 dm.$$
Now plugging in our expression for $dm$, we get that $$I = \frac{2M}{\pi R^2} \int_{-R}^R \frac{1}{3} R^2 \sqrt{R^2-x^2} + \frac{2}{3} x^2 \sqrt{R^2-x^2} dx.$$
This is the same expression we got from the first way of computing the moment of inertia. So this will give us the correct answer. Also, we can see that the $y$ integral of the first method just gave us the moment of inertia of each surface of constant $x$.
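As an independent sanity check (my addition, not part of the original answer), a brute-force midpoint sum of $\int (x^2+y^2)\,\sigma\,dA$ over the disk cross-section reproduces $I = MR^2/2$ to grid accuracy:

```python
import math

def disk_moment_numeric(M=1.0, R=1.0, n=500):
    """Midpoint-grid estimate of I = integral of (x^2 + y^2) * sigma dA over
    the disk x^2 + y^2 <= R^2, with uniform surface density sigma = M/(pi R^2)."""
    sigma = M / (math.pi * R * R)
    dx = 2.0 * R / n
    total = 0.0
    for i in range(n):
        x = -R + (i + 0.5) * dx
        for j in range(n):
            y = -R + (j + 0.5) * dx
            r2 = x * x + y * y
            if r2 <= R * R:          # keep only grid cells inside the disk
                total += r2
    return sigma * total * dx * dx
```

For $M=R=1$ this returns a value close to $0.5$, matching all three derivations.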
I should add that the easiest way to get the moment of inertia is to integrate over surfaces of constant $r$ (where $r$ is the distance from the axis of the cylinder). You get that $dI = r^2 dm$ where $dm = \frac{M}{\pi R^2} 2 \pi r dr.$ So then you get $I = \int dI = \frac{M}{\pi R^2}\int_0^R r^2 2 \pi r dr = \frac{2 M}{R^2} \int_0^R r^3 dr = MR^2 /2$ | {
"domain": "physics.stackexchange",
"id": 27296,
"tags": "homework-and-exercises, rotational-kinematics, moment-of-inertia"
} |
Center of Pressure Line from Barefoot Scan (EMED) | Question: I have raw data obtained from EMED barefoot scan containing a matrix of pressure sensors over about 70 frames. This totals 70 matrices that record a snapshot of the pressure over the duration of a person's natural walk over the pressure plate.
I wanted to ask if anyone knows the algorithm that is typically used to determine the Center of Pressure Line (indicated by the path on the heatmap below). One thing I have tried is to consider each row as an array and find the index of the maximum pressure across the row, and create a line by matching up all the points that correspond to these indices. My approach fails to produce a smoothed line (even when using Gaussian smoothing).
Answer: Actually, scipy has a function that does exactly this. I used it on each of the 70 arrays that make up the time series, and the resulting points form a smooth line.
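For reference, here is a dependency-free sketch (my naming) of the pressure-weighted centroid that this scipy function computes for a single frame; the COP line is that point tracked across the ~70 frames:

```python
def center_of_pressure(frame):
    """Pressure-weighted centroid (row, col) of one frame.

    frame: 2-D list of non-negative pressure readings.
    Returns None for an all-zero frame.
    """
    total = row_sum = col_sum = 0.0
    for r, row in enumerate(frame):
        for c, p in enumerate(row):
            total += p
            row_sum += r * p
            col_sum += c * p
    if total == 0:
        return None
    return (row_sum / total, col_sum / total)

# The COP path is this point computed for every frame in the series:
# path = [center_of_pressure(f) for f in frames]
```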
https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.measurements.center_of_mass.html | {
"domain": "engineering.stackexchange",
"id": 1114,
"tags": "biomechanics"
} |
Calculating the ionic strength for an aqueous solution containing potassium ferricyanide and sodium sulfate | Question: I have $100~\mathrm{\mu M}$ of $\ce{K3Fe(CN)6}$ dissolved in water, and the following data:
$$\begin{array}{c|c}
\text{Experiment} & \ce{[Na_2SO_4]}/\pu{M} \\ \hline
\mathrm{A} & 0 \\
\mathrm{B} & 1\cdot10^{-3} \\
\mathrm{C} & 3\cdot10^{-3} \\
\mathrm{D} & 8\cdot10^{-3} \\
\mathrm{E} & 1\cdot10^{-2} \\
\hline
\end{array}$$
I want to calculate the ionic strength for each experiment and I know that I should use the equation:
$$I = \frac {1}{2} Σ c_i z_i^2$$
and that it for this case will look like:
$$I = \frac {1}{2} \left((1)^2\cdot[\ce{K+}]+(-3)^2\cdot[\ce{Fe(CN)6^3-}] + (1)^2\cdot[\ce{Na+}]+(-2)^2\cdot[\ce{SO4^2-}] \right)$$
However, when I calculate this for the different experiments, I don't get the numbers in our answer key. There are no calculations in the answers, and my answers differ quite a lot. This is the way I calculate the different ionic strengths:
$$I_{\mathrm{A}} = \frac {1}{2} ((1)^2\cdot(1\cdot10^{-4})+(-3)^2\cdot(1\cdot10^{-4}) + (1)^2\cdot0+(-2)^2\cdot0 ) = 5\cdot 10^{-4}$$
$$I_{\mathrm{B}} = \frac {1}{2} ((1)^2\cdot(1\cdot10^{-4})+(-3)^2\cdot(1\cdot10^{-4}) + (1)^2\cdot(1\cdot10^{-3})+(-2)^2\cdot(1\cdot10^{-3}) ) = 3\cdot10^{-3}$$
$$I_{\mathrm{C}} = \frac {1}{2} ((1)^2\cdot(1\cdot10^{-4})+(-3)^2\cdot(1\cdot10^{-4}) + (1)^2\cdot(3\cdot10^{-3})+(-2)^2\cdot(3\cdot10^{-3}) ) = 8\cdot10^{-3}$$
and so on.
In the answer key $I_{\mathrm{A}}=0.6\cdot10^{-3}$, $I_{\mathrm{B}}=3.6\cdot10^{-3}$ and $I_{\mathrm{C}}=9.6\cdot10^{-3}$. As you can see the difference increases and I don't understand what I am doing wrong. I would truly appreciate if someone could point out my mistakes. Thank you!
EDIT
I am also given this but I don't think they matter for the ionic strength:
Answer: In aqueous solutions, $\ce{K3Fe(CN)6}$ and $\ce{Na2SO4}$ will completely dissociate:
$$\ce{K3Fe(CN)6 (aq) -> 3 K+ (aq) + Fe(CN)6^3- (aq)} \tag1$$
$$\ce{Na2SO4 (aq) -> 2 Na+ (aq) + SO4^2- (aq)} \tag2$$
Thus, from the equation $(1)$, $c_\ce{K+} = 3c_\ce{Fe(CN)6^3-} = 3c_\ce{K3Fe(CN)6}$ and from the equation $(2)$, $c_\ce{Na+} = 2c_\ce{SO4^2-} = 2c_\ce{Na2SO4}$
Hence for condition A $([\ce{Na2SO4}] = 0)$:
$$I_{\mathrm{A}} = \frac {1}{2} \left((1)^2 \cdot (3\cdot10^{-4}) + (-3)^2\cdot(1\cdot10^{-4}) + (1)^2 \cdot 0 + (-2)^2\cdot 0 \right) = 6\times 10^{-4} = 0.6\times 10^{-3}$$
Similarly, for condition B $([\ce{Na2SO4}] = 1\cdot10^{-3})$:
$$I_{\mathrm{B}} = \frac {1}{2} \left((1)^2\cdot(3\cdot10^{-4})+(-3)^2\cdot(1\cdot10^{-4}) + (1)^2\cdot(2\cdot10^{-3})+(-2)^2\cdot(1\cdot10^{-3}) \right) = 3.6 \times 10^{-3}$$
And, for condition C $([\ce{Na2SO4}] = 3\cdot10^{-3})$:
$$I_{\mathrm{C}} = \frac {1}{2} \left((1)^2\cdot(3\cdot10^{-4})+(-3)^2\cdot(1\cdot10^{-4}) + (1)^2\cdot(6\cdot10^{-3})+(-2)^2\cdot(3\cdot10^{-3}) \right) = 9.6 \times 10^{-3}$$
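The same arithmetic for all the experiments can be checked with a short script (a sketch; the function name and defaults are mine):

```python
def ionic_strength(c_na2so4, c_k3fecn6=1.0e-4):
    """I = 1/2 * sum(c_i * z_i**2), concentrations in mol/L.

    Assumes complete dissociation of both salts, as in equations (1)-(2).
    """
    species = [
        (3 * c_k3fecn6, +1),  # K+    (3 per formula unit)
        (c_k3fecn6,     -3),  # Fe(CN)6^3-
        (2 * c_na2so4,  +1),  # Na+   (2 per formula unit)
        (c_na2so4,      -2),  # SO4^2-
    ]
    return 0.5 * sum(c * z * z for c, z in species)
```

This reproduces the answer key: $I_\mathrm{A}=6\times10^{-4}$, $I_\mathrm{B}=3.6\times10^{-3}$, $I_\mathrm{C}=9.6\times10^{-3}$, and so on.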
Similarly for the other conditions. Note that in every calculation the ionic-strength contribution of $\ce{K3Fe(CN)6}$ is always $0.6\times 10^{-3}$, since $[\ce{K3Fe(CN)6}] = \pu{1.0 \times 10^{-4} M}$ in all experiments. | {
"domain": "chemistry.stackexchange",
"id": 15601,
"tags": "concentration, ionic-compounds"
} |
What is the IF_ELSE basis gate of the processor type Falcon r5.11 on the IBM Quantum platform? | Question: I am a master's student currently working on a project using the IBMQ platform with qiskit. We are using the available quantum hardware with the processor type Falcon r5.11.
We would like to have a better understanding of the circuits we have composed and how it is really implemented onto the quantum computers. For the processor mentioned above, the basis gates are:
CX, ID, IF_ELSE, RZ, SX, and X.
Since we are able to compose any quantum gate in qiskit, I believe that these 6 gates form a universal set, and therefore any gate can be built from a combination of the basis gates above.
Can anyone please tell me what this IF_ELSE basis gate is?
What does it look like in matrix form?
Any input is appreciated, thank you in advance!
Answer: It is not a quantum gate operation and therefore does not have a unitary gate representation; instead it is an entirely classical operation. The term that IBMQ uses to refer to this circuit type is "dynamic circuits". These operations are supported in OpenQASM3. This youtube video from Qiskit gave a clear explanation of how to use dynamic circuits.
The key thing is that the operation is classical, but it can impact the circuit; you can specify: if bit X == 1, do A; otherwise, do B. Basic control flow, but the condition is based on a classical bit of information. Meaning that you need to measure the state of a qubit, or qubits, into a classical register, and then use this classical conditional to potentially influence how the circuit is executed from that point onward. | {
"domain": "quantumcomputing.stackexchange",
"id": 4471,
"tags": "qiskit, universal-gates, ibm-quantum-devices"
} |
Is a Turing Machine that only takes strings of the form $0^*$ Turing Complete? | Question: You have a Turing machine that only processes input of the form $0^*$. If it is given an input not of this form, it will simply halt without accepting or doing anything else. Is it Turing Complete?
The set $0^*$ is countably infinite, since you can make the bijective function $f(x) : 0^* → \mathbb{N} $:
$f(x) = length(x)$
Where $length(x)$ is the length of the string (so you treat them as Peano Numbers).
I understand that the set of all programs (the programs that a Turing machine can run) are countable, and that the set of a Turing machines are also countable. But, can the set of string that the Turing machine can process (with no guarantees of halting) only be countably infinite (as in this case), or does it have to be uncountable?
My understanding of undecidable problems with regards to Turing machines is that they arise because there are languages that have a cardinality strictly greater than the natural numbers, e.g. $B^*$, where $B = \{0,1\}$, which has a cardinality equal to the real numbers. It seems to me that, although you can encode any integer with the language $0^*$, you can't encode an arbitrary language. The problem is: how can you encode recursively enumerable languages when all you have is unary notation? If this is indeed impossible (though I have a feeling it is possible; I can't see how the representation of numbers should be a fundamental hindrance), then it turns out that this particular Turing machine is not Turing Complete (or maybe you would say that it is not really a Turing machine).
Answer: It looks like Turing machines remain Turing-complete when the alphabet is restricted to have one symbol, $0$. First, preprocess your input by replacing every $0$ with $0\square$, where $\square$ is the blank symbol. Now, you can simulate a 2-symbol alphabet by using $0\square$ to represent zero and $\square0$ to represent one.
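A small sketch (names are mine) of the encoding just described: each binary symbol becomes a two-cell block over the one-symbol alphabet plus the blank $\square$:

```python
ZERO, ONE = "0□", "□0"  # two-cell blocks over the alphabet {0} plus blank

def encode(bits):
    """Encode a binary string as a tape over {0, blank}."""
    return "".join(ZERO if b == "0" else ONE for b in bits)

def decode(tape):
    """Read the tape back two cells at a time."""
    blocks = [tape[i:i + 2] for i in range(0, len(tape), 2)]
    return "".join("0" if blk == ZERO else "1" for blk in blocks)
```

Since the encoding is a bijection on two-cell blocks, a machine over the one-symbol alphabet can simulate any two-symbol machine by working block by block.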
My understanding of undecidable problems with regards to Turing machines is that they arise because there are languages that have a cardinality strictly greater than the natural numbers
This is incorrect. A language is, by definition, a set of finite strings and there are only countably many finite strings over any finite alphabet (proof: you get a bijection with the natural numbers by treating each string as a number written in base-$k$, where $k$ is the size of the alphabet). However, there are uncountably many different languages and that observation along with the countability of the set of Turing machines lets you deduce that there must be some undecidable languages.
It seems to me that, although you can encode any integer with the language 0∗, you can't encode an arbitrary language.
This is also incorrect. You can encode any natural number $n$ with the single string $0^n$ and, therefore, you can code any set of natural numbers with a unary language, i.e., a subset of $0^*$. And, conversely, any language can be coded as a set of natural numbers. In particular, you can code any language over an alphabet of size $k>1$ by using the base-$k$ trick I described above. | {
"domain": "cs.stackexchange",
"id": 2757,
"tags": "turing-machines, turing-completeness, machine-models"
} |
Why is a nonlinear crystal necessary to stimulate quantum fluctuations that entangle photons? | Question: I've been reading about spontaneous parametric down conversion (SPDC). The Wikipedia article on it says:
A nonlinear crystal is used to split photon beams into pairs of photons that, in accordance with the law of conservation of energy and law of conservation of momentum, have combined energies and momenta equal to the energy and momentum of the original photon and crystal lattice, are phase-matched in the frequency domain, and have correlated polarizations... SPDC is stimulated by random vacuum fluctuations, and hence the photon pairs are created at random times [...]
Why is a crystal necessary for these fluctuations to occur, and how do the fluctuations entangle the incoming photons?
Answer: Maxwellian electrodynamics, in vacuum, is a linear theory: that is, it obeys the principle of superposition, and the sum of any two given solutions will still be a solution, so that e.g. two beams crossing each other will pass by without affecting each other in any way.
Moreover, most materials that you meet in everyday life (at the intensities of EM radiation that you get in everyday life), are also linear: more specifically, unless they're opaque, they are dielectrics that are characterized by an electric polarization density $\mathbf P$ that depends linearly on the local electric field,
$$
\mathbf P = \epsilon_0\chi\mathbf E,
$$
for a constant $\chi$ called the electric susceptibility of the material, and this polarization density then feeds back linearly into the response of the material to the light (going into e.g. the refractive index). Because of this linear constitutive relation, linear dielectrics also obey the superposition principle, exactly as in vacuum.
Now, here's the thing: the superposition principle is all well and good for finding solutions and so on, but ultimately it is a boring property for a system to have. Why? because under linear conditions, the modes of the radiation are fixed and there is no way for the state of any given mode to 'talk to' the state of any other mode, at all, and that precludes any interesting dynamics from happening with the photons.
As a more relevant example, in a linear material, a light beam of frequency $\omega$ propagating down the material is a solution of the Maxwell equations (plus the constitutive relation), or, to put it another way: parametric down-conversion, where the beam's energy is transferred into modes of frequencies $\omega_\mathrm{s}$ and $\omega_\mathrm{i}$ (such that $\omega_\mathrm{s}+\omega_\mathrm{i}=\omega$) is completely impossible. Similarly, any kind of 'gating' behaviour, where the phase or propagation of one beam is affected by a second beam, like you might want in a photonic computer, is also completely ruled out.
This is why we turn to nonlinear optical components. These have nonlinear constitutive relations, where the dielectric polarization depends on higher powers of the electric field which break the linearity, in the form
$$
\mathbf P = \epsilon_0\chi\mathbf E + \epsilon_0\chi^{(2)} \cdot \mathbf E^{\otimes 2} + \epsilon_0\chi^{(3)} \cdot \mathbf E^{\otimes 3} + \cdots
$$
(where the $\chi^{(n)}$ and $\mathbf E^{\otimes n}$ are tensors and the dots are contractions, none of which is essential here) where now if the medium is subjected to the superposition of two beams, its response will differ from the vector addition of the individual responses. That then allows the modes to affect each other and it breathes dynamics back into optics.
Like I said in a previous question of yours, nonlinearity is a key requirement to be able to do anything interesting, and particularly for computational purposes. As far as quantum computing goes, nonlinearity in the interactions between components is a key resource to be sought and treasured, because it enables the whole game to play. (This is also true of classical computing, which only became possible on electronic substrates when nonlinearity, in the form of vacuum tubes, and later transistors, became available. Classical computing using only linear circuit elements is impossible.)
So, what about parametric down-conversion? This is a second-order nonlinear process, which means that it rides on the $\chi^{(2)} \cdot \mathbf E^{\otimes 2}$ term. To see how it works, suppose that we have a medium that has a nonzero $\chi^{(2)}$ (so, typically a BBO or LiNbO$_3$ crystal) along the $\chi^{(2)}_{zzz}$ directions, and that we apply to it two fields: a driver field
$$
\mathbf E_\mathrm{d}(t) = \hat{\mathbf e}_z E_{\mathrm{d},0} \cos(\omega_\mathrm{d} t),
$$
and a signal field
$$
\mathbf E_\mathrm{s}(t) = \hat{\mathbf e}_z E_{\mathrm{s},0} \cos(\omega_\mathrm{s} t),
$$
and we look at the nonlinear polarization:
\begin{align}
P^{(2)}_z(t)
&=
\epsilon_0 \chi^{(2)}_{zzz} (E_{\mathrm{d},z}(t) + E_{\mathrm{s},z}(t))^2
\\ & \cong
2\epsilon_0 \chi^{(2)}_{zzz} E_{\mathrm{d},z}(t)E_{\mathrm{s},z}(t)
\\ & =
2\epsilon_0 \chi^{(2)}_{zzz} E_{\mathrm{d},0} E_{\mathrm{s},0} \cos(\omega_\mathrm{d} t)\cos(\omega_\mathrm{s} t)
\\ & =
\epsilon_0 \chi^{(2)}_{zzz} E_{\mathrm{d},0} E_{\mathrm{s},0} \left[
\cos((\omega_\mathrm{d} -\omega_\mathrm{s})t)
+
\cos((\omega_\mathrm{d} +\omega_\mathrm{s}) t)
\right]
\\ & \cong
\epsilon_0 \chi^{(2)}_{zzz} E_{\mathrm{d},0} E_{\mathrm{s},0}
\cos((\omega_\mathrm{d} -\omega_\mathrm{s})t)
,
\end{align}
where $\cong$ means that I'm neglecting terms that don't contribute to the process I want to describe. The important thing to notice here is that the polarization (contains a term that) depends on the product $E_{\mathrm{d},z}(t)E_{\mathrm{s},z}(t)$, and that this is a product of cosines that decomposes into trigonometrics at different frequencies: to wit, the idler frequency
$$
\omega_\mathrm{i} = \omega_\mathrm{d}-\omega_\mathrm{s}.
$$
This is the down-conversion process, where we've taken light at the driver frequency $\omega_\mathrm{d}$ and diverted some of its energy into frequency $\omega_\mathrm{i}$, with an amplitude that can be quite big even if $E_{\mathrm{s},0}$ is small. If you do the math in full, it also turns out that a similar amount of energy ends up reinforcing the signal field $E_{\mathrm{s},z}(t)$.
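The product-to-sum step used above can be checked numerically (the angular frequencies below are arbitrary illustrative values):

```python
import math

wd, ws = 100.0, 30.0   # illustrative driver and signal angular frequencies
for i in range(200):
    t = i / 200.0
    # 2 cos(wd t) cos(ws t) = cos((wd - ws) t) + cos((wd + ws) t)
    lhs = 2 * math.cos(wd * t) * math.cos(ws * t)
    rhs = math.cos((wd - ws) * t) + math.cos((wd + ws) * t)
    assert abs(lhs - rhs) < 1e-9
```

The two terms on the right are exactly the difference-frequency (idler) and sum-frequency components discussed in the derivation.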
To be a bit more precise, this process is stimulated parametric down-conversion, because we required an initial seed on $E_{\mathrm{s},z}(t)$, however small, to fix the phase (a.k.a. time of emission) of the signal and idler beams, onto which the energy from the driver could 'congeal'.
In addition to this, there is also a spontaneous parametric down-conversion process, where (if the phase-matching conditions are right) light at the driver frequency will split into beams at the signal and idler frequencies without any external prompting. As described in Wikipedia, this cannot happen within classical nonlinear optics, and it requires the QED vacuum fluctuations to initiate the process, and therefore it is not surprising that (i) it happens on a per-photon basis, and (ii) it can produce highly entangled states of the signal and idler beams.
But, either way, it should be clear that without a way to have a physical response that's proportional to both the driver and signal fields that we can then see as having a different frequency content, i.e. without a nonlinear component of the dynamics, none of this would be possible at all. | {
"domain": "physics.stackexchange",
"id": 42124,
"tags": "quantum-mechanics, photons, quantum-entanglement, vacuum"
} |
where to find / what to call a 3/4 inch air valve that can be opened/closed electrically? | Question: I'm looking for something like this air valve:
(as discussed in https://engineering.stackexchange.com/a/12404/12007)
But I need the tube to be wider (3/4 in), the tube to be shorter (the shorter the better), and I would like it if the mechanism for opening and closing the valve wasn't so bulky. This is a picture of an air valve at a much larger scale:
https://www.homedepot.com/p/Suncourt-12-in-Normally-Closed-Automated-Damper-ZC112/204268743?cm_mmc=Shopping%7cVF%7cG%7c0%7cG-VF-PLA%7c&gclid=CjwKCAjwlIvXBRBjEiwATWAQIp6R6KVXc7l3PrZILy2NjFr2LsexUXmn6eohfsIsHXPDVXo3r9prZBoC_QgQAvD_BwE&dclid=CIXgsZqh2toCFQN6fgodAd8ElA
I'm hoping to find something like that at a 3/4 in scale, where the mechanism is proportionally smaller also.
I only need the valve to be able to withstand the pressure of a person exhaling, which I think is like 3 psi max. So I'm hoping something less bulky than the solenoid in the first link could be used.
Answer: You are looking for a 3/4" magnetic valve or solenoid valve. Google these terms, then talk to a supplier. As you want the mechanism inside the duct (or so it appears), you could also look for a magnetic or electrical ball valve. However, I believe at this small size the electrical components outside the duct will be about the same size as the physical body of the valve. | {
"domain": "engineering.stackexchange",
"id": 2083,
"tags": "airflow, valves, electrical"
} |
Why ball thrown with higher velocity hurts more? | Question: When a ball is thrown at a person at a certain distance "d", first with a lower velocity and then with a higher velocity, it is obvious that it would hurt more in the second case. So, I guess we can say the force is greater in the second case. But why?
I came up with a thought that the change in momentum is higher in the second case, as the velocity changes from high to zero (supposed for easier understanding). And force is change in momentum per unit time.
Well that led me to come up with another confusion: is the time of contact (required for change in momentum) same or different in both the cases?
I also thought about the change in kinetic energy being equal to the work done. So the work done would be higher in the second case, but again I got confused whether the distance $d$ (of $W=Fd$) is the distance between the thrower and the person who experiences the impact or not. If it is indeed the distance between them (which tends to be constant) then I guess the force will be greater in the second case according to the formula.
Answer:
change in momentum is higher in second case as velocity changes from high to zero (supposed for easier understanding). And force is change in momentum per unit time.
Yes.
is the time of contact (required for change in momentum) same or different in both the cases?
My guess is that the time is more or less the same. At higher impact speed, the object will "sink deeper into" the soft skin and tissue but that longer distance will be passed faster. It might take more or less the same time.
This of course depends on the part of the body you throw at. Throwing at someone's head is a place on the body with thinner skin than, say, the thigh. There the bones and skull underneath the skin will soon be reached, which will abruptly stop the motion over a very short time. This is a momentum change over a very short time, corresponding to a much larger force.
In general, harder surfaces (asphalt, tiles, glass, bone...) cause much higher impact forces over shorter time than softer or more flexible surfaces (pillow, trampoline, grass field, fat tissue...).
If the person is not being hit but rather catches the object, then he can control the time himself by using joints and moving along with the motion. When catching a basketball you can stretch your arms just before reaching it and let the arms and body follow the motion backwards during the catch. All this happens to increase the time of the impact, causing more gradual momentum transfer giving less impact force.
Same happens when you land after a jump and bend in your knees. Just try to catch a ball with stretched, non-flexing arms or land with stretched, non-bending legs and you will feel the force difference.
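To put rough numbers on this (all values below are illustrative, not from the answer): the same momentum change spread over a longer contact time gives a proportionally smaller average force.

```python
m, v = 0.45, 12.0        # kg and m/s: a football-sized ball, moderate throw
p = m * v                # momentum to absorb during the catch, 5.4 kg m/s

F_rigid = p / 0.01       # stiff-armed catch, ~10 ms contact time -> ~540 N
F_soft = p / 0.20        # arms following the ball, ~200 ms       -> ~27 N
```

Stretching the contact time by a factor of twenty cuts the average force by the same factor, which is exactly why catching with flexing arms hurts less.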
confused whether the distance $d$ of ($W=Fd$) is the distance between the thrower and the person who experiences the impact or not.
No, it is not. $d$ is only the distance over which work is being done. No work is done while the object flies through the air; work is only done when there is contact, i.e. during the impact. So the distance we are talking about here is the distance that the body allows the object to "sink into the skin", as mentioned above.
If the distance is short, then the force must be larger in order to still absorb all the kinetic energy. | {
"domain": "physics.stackexchange",
"id": 45535,
"tags": "newtonian-mechanics, energy, momentum"
} |
Filtering out employees that do not meet some criteria, using list comprehensions | Question: Right now I have a list of dictionaries. In order to filter out all the entries I do not want included in the list, I am running it through multiple list comprehensions.
Is there a more Pythonic or better way to write these comprehensions that seem repetitive?
#FILTER OUT ALL EMPLOYEES WHO WILL NOT BE FACTORED INTO METRICS
inScope = [s for s in set2 if s['LEAD_TIME'] >= 0]#lead times 0 or more
set2 = inScope
inScope = [s for s in set2 if s['WRK_RLTN_DSC'] != "INDEPENDENT CONTRACTOR"]#not independent contractor
set2 = inScope
inScope = [s for s in set2 if s['WRK_RLTN_DSC'] != "OUTSIDE CONSULTANT"]#not outside consultant
set2 = inScope
inScope = [s for s in set2 if s['WRK_RLTN_DSC'] != "OUTSIDE CONTRACTOR"]#not outside contractor
set2 = inScope
inScope = [s for s in set2 if s['WRK_RLTN_DSC'] != "EMP OF STRATEGIC INVESTMENT CO"]#not emp of strategic investment co
set2 = inScope
inScope = [s for s in set2 if s['BUS_GRP_CD'] != "FIL"]#In scope after removal of FIL Int'l Associates
set2 = inScope
Answer: The simplest refactoring is as follows:
CONDITIONS = [
('WRK_RLTN_DSC', 'INDEPENDENT CONTRACTOR'),
...
]
set2 = [s for s in set2 if s['LEAD_TIME'] >= 0]
for key, excl in CONDITIONS:
set2 = [s for s in set2 if s[key] != excl]
Note:
You can assign straight back to set2, getting rid of the temporary inScope variable;
I have factored out the multiple checks into a simple loop over a list of keys and values; and
I have consistently used single quotes for string literals, rather than a mix of single and double.
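Run end-to-end on a few made-up records (the field values here are invented for illustration), the refactoring behaves like this:

```python
CONDITIONS = [
    ('WRK_RLTN_DSC', 'INDEPENDENT CONTRACTOR'),
    ('WRK_RLTN_DSC', 'OUTSIDE CONSULTANT'),
    ('BUS_GRP_CD', 'FIL'),
]

records = [
    {'LEAD_TIME': 3, 'WRK_RLTN_DSC': 'EMPLOYEE', 'BUS_GRP_CD': 'ABC'},
    {'LEAD_TIME': -1, 'WRK_RLTN_DSC': 'EMPLOYEE', 'BUS_GRP_CD': 'ABC'},
    {'LEAD_TIME': 5, 'WRK_RLTN_DSC': 'OUTSIDE CONSULTANT', 'BUS_GRP_CD': 'ABC'},
    {'LEAD_TIME': 2, 'WRK_RLTN_DSC': 'EMPLOYEE', 'BUS_GRP_CD': 'FIL'},
]

set2 = [s for s in records if s['LEAD_TIME'] >= 0]
for key, excl in CONDITIONS:
    set2 = [s for s in set2 if s[key] != excl]

assert set2 == [records[0]]   # only the ordinary employee survives the filters
```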
However, although neater than your current code, this is still relatively inefficient, as it builds multiple lists; you could instead try something like:
set2 = [s for s in set2 if s['LEAD_TIME'] >= 0 and
        all(s[key] != excl for key, excl in CONDITIONS)]
which reduces it to a single list comprehension. An alternative would be to incorporate e.g.:
INVALID_WRK = {
'INDEPENDENT CONTRACTOR',
...
}
... s['WRK_RLTN_DSC'] not in INVALID_WRK ...
as you check that key multiple times. | {
"domain": "codereview.stackexchange",
"id": 33309,
"tags": "python"
} |
Problem deriving kinetic energy from work | Question: I'm currently reading a nice introductory book (German; the title could be translated as "Physics with a pencil"). The author works a lot with differential calculus and antiderivatives (integrals will be used later). I'm stuck on a kind of mathematical question amid the easier physics:
So he takes the equation of force:
$m \ddot{\vec{r}} = \vec{F}$
and dot-multiplies it with the speed $\dot{\vec{r}}$:
$m \dot{\vec{r}} \cdot \ddot{\vec{r}} = \dot{\vec{r}} \cdot \vec{F}$
but I do not understand how he gets to the next step:
$\delta_t \left(\frac{1}{2}m\dot{\vec{r}}^2\right) = \frac{d\vec{r} \cdot \vec{F}}{dt}\ \ \ $ (with $\frac{mv^2}{2} = E_{kin}$)
because if I resolve the last equation again I find something different than what we began with, because differentiating the left part leaves me no single $\dot{\vec{r}}$.
$\frac{2}{2} m \ddot{\vec{r}} = \dot{\vec{r}} \cdot \vec{F}$
Maybe I misinterpret the application of $\delta_t$ to both $\vec{r}$, or the $\frac{d\vec{x}}{dr}$-notation not applying to $\vec{F}$ here? (The author states before that he uses the partial differentiation symbol $\delta_t$ in the same way as the total $\frac{d\vec{x}}{dr}$ because the partial states exactly what is to be differentiated.)
Answer: I think you're just making a calculus error.
$$
{d\over dt}\left({1\over 2}m\dot{\bf r}^2\right)={1\over 2}m{d(\dot{\bf r}\cdot\dot{\bf r})\over dt}={1\over 2}m(\ddot{\bf r}\cdot\dot{\bf r}+\dot{\bf r}\cdot\ddot{\bf r})=m\dot{\bf r}\cdot\ddot{\bf r}.
$$
You can tell that something has gone wrong in your calculation, because the units of $(d/dt)({1\over 2}m\dot{\bf r}^2)$ aren't the same as those of $m\ddot{\bf r}$. Also, the first one is a scalar, and the second is a vector.
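The identity can also be checked numerically on an arbitrary sample trajectory (the trajectory, mass and sample time below are freely chosen):

```python
m, h, t = 2.0, 1e-6, 1.3      # mass, finite-difference step, sample time

def v(t):
    # velocity of the sample trajectory r(t) = (t^3, 2 t^2, t)
    return (3 * t**2, 4 * t, 1.0)

def a(t):
    # its acceleration, the time derivative of v(t)
    return (6 * t, 4.0, 0.0)

def dot(u, w):
    return sum(ui * wi for ui, wi in zip(u, w))

def kinetic(t):
    return 0.5 * m * dot(v(t), v(t))

# Numerical d/dt of the kinetic energy vs the analytic m * (v . a)
dK = (kinetic(t + h) - kinetic(t - h)) / (2 * h)
assert abs(dK - m * dot(v(t), a(t))) < 1e-3
```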
By the way, I wrote vectors in boldface rather than with arrows. Both notations are fine. I don't like having multiple things (arrows and dots) piled on top of symbols, but that's just my taste. | {
"domain": "physics.stackexchange",
"id": 766,
"tags": "newtonian-mechanics, mathematics"
} |
How is the interchange of the radial space component and time on the black hole horizon to be interpreted physically? | Question: On the event horizon of a black hole (or just behind it), the radial component of the metric is interchanged with its time component. How can this be interpreted? Extra question: what happens to these components when you look exactly on the horizon?
Answer: In the case of a Schwarzschild BH, there is a $coordinate~singularity$ in Schwarzschild coordinates $t,~r,~\theta,~\phi$ at the event horizon. This is similar to asking what happens at the poles on Earth, where the longitude coordinate is not well defined. If you perform a coordinate transformation to something like Kruskal–Szekeres coordinates, which can describe spacetime inside an event horizon, you can see that there isn't actually a singularity at the event horizon. | {
"domain": "physics.stackexchange",
"id": 77048,
"tags": "black-holes, event-horizon"
} |
Should the output of regression models, like SVR, be normalized? | Question: I have a regression problem which I solved using SVR. Accidentally, I normalized my output along with the inputs by removing the mean and dividing by standard deviation from each feature.
Surprisingly, the $R^2$ score increased by 10%.
How can one explain the impact of output normalization for svm regression?
Answer: In regression problems it is customary to normalize the output too, because the scale of the output and input features may differ. After getting the result of the SVR model, you have to undo the normalization: multiply the result by the standard deviation and then add the mean, if that is what you did during normalizing.
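Concretely, with $z = (y - \mu)/\sigma$ the inverse transform is $y = z\sigma + \mu$: multiply by the standard deviation first, then add the mean. A quick round-trip check on toy numbers:

```python
ys = [3.0, 7.0, 11.0, 19.0]                  # toy regression targets
mu = sum(ys) / len(ys)                       # mean, here 10.0
sigma = (sum((y - mu) ** 2 for y in ys) / len(ys)) ** 0.5

z = [(y - mu) / sigma for y in ys]           # what the model trains on
back = [zi * sigma + mu for zi in z]         # sigma first, then the mean
assert all(abs(b - y) < 1e-9 for b, y in zip(back, ys))
```

Applying the two steps in the other order would not invert the normalization.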
How can one explain the impact of output normalization for svm regression?
If you normalize your data, you will have a cost function which is well behaved, which means that the optimizer can find good solutions more easily. The reason is that you have to construct the output using the input features in regression problems. It is difficult to build large output values from small normalized features, but with small numbers a normalized output is easier to construct and can be learned faster. | {
"domain": "datascience.stackexchange",
"id": 2508,
"tags": "machine-learning, regression, svm, svr"
} |
How to derive the nuclear spin of 23Na? | Question: Is it possible to derive the nuclear spin I=3/2 for $\ce{^23Na}$ from a term scheme or from something else from spectroscopy?
I thought the nuclear spin is empirical (and cannot be calculated from J and F values), or can it be?
Answer: The nuclear shell model is a useful "first approach" to determining nuclear spin. It doesn't always work, but it is a relatively simple way to make a first attempt.
Here is a nuclear shell energy diagram. As you can see it is somewhat analogous to an electron orbital energy diagram.
One set of shells is filled by neutrons and a separate set is filled by protons. The protons will pair when possible; the same applies to the neutrons. Notice that shells with J=1/2 hold 2 nucleons (a nucleon can be either a proton or a neutron), shells with J=3/2 can hold 4 nucleons, shells with J=5/2 can hold 6 nucleons, and so on. This is because the shell with (for example) J=5/2 has 3 sub-shells: J=5/2, J=3/2 and J=1/2, each capable of holding 2 nucleons. Higher J sub-shells are "usually" filled first.
Let's apply this model to a few examples.
$\ce{^2H}$ (deuterium): 1 neutron, 1 proton; the neutron goes into the neutron $\ce{1s_{1/2}}$ shell and is unpaired, spin = 1/2; the proton goes into the proton $\ce{1s_{1/2}}$ shell and is unpaired, spin = 1/2; total nuclear spin = 1/2 + 1/2 = 1
$\ce{^{17}O}$: 9 neutrons, 8 protons; the first 8 neutrons go into the neutron 1s and two 1p shells and are all paired, the 9th neutron goes into the neutron $\ce{1d_{5/2}}$ shell and is unpaired, spin = 5/2; all 8 protons are paired in the 1s and two 1p shells, spin = 0; total nuclear spin = 5/2 + 0 = 5/2
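The filling scheme above can be encoded in a few lines (this implements only the answer's heuristic sub-shell ordering, not a full shell-model calculation):

```python
from fractions import Fraction

# J of each successive nucleon *pair*, in the filling order used above:
# 1s1/2; 1p3/2 sub-shells J=3/2, 1/2; 1p1/2; 1d5/2 sub-shells J=5/2, 3/2, 1/2.
PAIR_J = [Fraction(1, 2),
          Fraction(3, 2), Fraction(1, 2),
          Fraction(1, 2),
          Fraction(5, 2), Fraction(3, 2), Fraction(1, 2)]

def unpaired_spin(n):
    """Contribution of n protons (or n neutrons): 0 if all are paired."""
    return Fraction(0) if n % 2 == 0 else PAIR_J[n // 2]

def nuclear_spin(protons, neutrons):
    return unpaired_spin(protons) + unpaired_spin(neutrons)

assert nuclear_spin(1, 1) == 1                   # 2H
assert nuclear_spin(8, 9) == Fraction(5, 2)      # 17O
assert nuclear_spin(11, 12) == Fraction(3, 2)    # 23Na
```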
$\ce{^{23}Na}$: 12 neutrons, 11 protons; the first 8 neutrons go into the neutron 1s and two 1p shells and are all paired, the last 4 neutrons go into the neutron $\ce{1d_{5/2}}$ shell and each set of two is paired, spin = 0; the first 8 protons fill the proton 1s and two 1p shells and are all paired, the next 2 protons go into the $\ce{1d_{5/2}}$ J=5/2 sub-shell and are paired, spin = 0, the last proton goes into the $\ce{1d_{5/2}}$ J=3/2 sub-shell and has spin 3/2; total nuclear spin = 0 + 3/2 = 3/2 | {
"domain": "chemistry.stackexchange",
"id": 2058,
"tags": "nuclear, spin"
} |
Why do we say gravitational waves are analogous to sound? | Question: In every popsci discussion of gravitational waves, the waves are said to be like "sound", and that gravitational waves allow us to "hear" the universe. Despite this, I have no idea how gravitational waves are any more like sound than like light. Some possible explanations are:
Gravitational waves are produced by the vibrations of objects, just like sound waves. But light waves are also produced by vibrating (charged) objects.
Gravitational waves propagate in a medium (spacetime), just like sound propagates in some material medium. But so does light; it propagates in the quantum electromagnetic field. This is as much a "medium" as spacetime is.
Gravitational waves are described by rank 2 tensors, like sound waves in a solid. But no popular explanation ever brings this up; tensors are not popsci level.
Gravitational waves are classical, like sound waves. But there's quantum sound (phonons) and a perfectly good classical description of light.
I must be missing something basic here. What makes gravitational waves more analogous to sound waves?
Answer: I was using the "sound" analogy for gravitational waves exclusively in my popular talks – although I also mentioned that at the fundamental level, the gravitational waves are more similar to electromagnetic waves because they propagate by the speed of light and they are both waves on a fundamental field.
However, the "sound" analogy sounds vastly superior to me for these reasons:
the frequency of the gravitational waves in LIGO is actually of the order of 100 Hz, so the signal may be converted to a perfectly audible sound signal (a long signal chirps, but the black holes LIGO discovered were large enough that the signal was short, which made it more similar to a heartbeat)
creatures living at a planet close enough to the source of the gravitational waves (the two merging gravitational black holes) would actually hear (or did hear, if there were some observers) the signal literally in the form of the sound because the gravitational waves made the pressure and planetary radius oscillate with the same dependence of time. Gravitational waves cause the stretching and shrinking of all masses, just like the sound, so they are a form of sound that also propagates in the vacuum.
for light, our gadgets (telescopes plus eyes or detectors) may easily identify the direction from which the light is coming, and we may always shield specific or most regions of the (celestial) sphere and only look into one direction. On the other hand, our ears hear the sound from the whole space and it's not easy to "shield" against the sources of sound that are coming from a particular direction. We always hear all five children screaming around us, there's no easy way to "focus". The same property of the sound holds for the gravitational waves, too. The LIGO detectors also hear the sound coming from all directions simultaneously. Helpfully, the number of LIGO detectors was two – just like the number of human ears – and the information from the two sources may be used to deduce the direction from which the gravitational waves are coming in a similar way in which ears+brains are doing it, but one can't ever "shield the rest of the sky", something that is easy to do with light.
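The ear-like direction finding rests on arrival-time differences. With illustrative numbers (the separation is roughly that of the two LIGO sites):

```python
import math

c = 299_792_458.0     # speed of light, m/s
d = 3.0e6             # m, roughly the Hanford-Livingston separation

dt_max = d / c        # largest possible arrival-time difference, about 10 ms
assert 0.009 < dt_max < 0.011

def angle_from_delay(dt):
    """Angle (degrees) between the source direction and the detector baseline."""
    x = max(-1.0, min(1.0, c * dt / d))   # clamp against rounding
    return math.degrees(math.acos(x))
```

A single time delay only fixes the angle between the source and the detector baseline, i.e. a ring on the sky, which is why adding more detectors sharpens the localization.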
The gravitational waves are qualitatively different from the electromagnetic waves, and because the electromagnetic waves are generally linked to "eyes" (even though they include waves of many frequencies that eyes are not sensitive to), it's natural to pick different organs as the symbols of the gravitational waves, and the ears are clearly the most suitable ones. | {
"domain": "physics.stackexchange",
"id": 31240,
"tags": "gravity, waves, gravitational-waves"
} |
Volumetric Dilatation Rate, Material derivatives, and Divergence | Question: in class we derived the following relationship:
$$\frac{1}{V}\frac{dV}{dt}= \nabla \cdot \vec{v}$$
This was derived though the analysis of linear deformation for a fluid-volume, where:
$$dV = dV_x +dV_y + dV_z$$
I understood the derived relation as:
$$\frac{1}{V}V'(t) = \nabla \cdot \vec{v}$$
However, my professor recently told me that the $d/dt$ operator before V, stood for the material derivative and not the common derivative. I am very confused as to how is that the case, given that we did an infinitesimal analysis of linear deformation, in a way I could call analogous to any other infinitesimal analysis that results in the common derivative.
I also tried deriving the equation by taking the material derivative of $V$, and dividing by $V$:
$$ \frac{1}{V}\frac{DV}{Dt} = \frac{1}{V}\frac{\partial V}{\partial t} + \frac{1}{V}(\vec{v} \cdot \nabla V)$$
but I was unable to.
Answer: The continuity equation reads $$\frac{\partial \rho}{\partial t}+v\cdot \nabla \rho+\rho \nabla \cdot v=0$$ where $\rho$ is the fluid density. Dividing this by $\rho$ gives $$\frac{1}{\rho}\left(\frac{\partial \rho}{\partial t}+v\cdot \nabla \rho\right)+\nabla \cdot v=0$$ i.e. $\frac{1}{\rho}\frac{D\rho}{Dt}+\nabla \cdot v=0$. But, since the density is the inverse of the specific volume $V$, the chain rule gives $\frac{1}{\rho}\frac{D\rho}{Dt}=-\frac{1}{V}\frac{DV}{Dt}$, and therefore $$\frac{1}{V}\left(\frac{\partial V}{\partial t}+v\cdot \nabla V\right)=\frac{1}{V}\frac{DV}{Dt}=\nabla \cdot v$$ | {
"domain": "physics.stackexchange",
"id": 75408,
"tags": "fluid-dynamics, flow, density, continuum-mechanics, volume"
} |
Update "old positions" when gmapping updates its map | Question:
Hi everybody,
I want to generate a map which also contains the positions of some found objects. In a first step I want to store the positions where the robot took an image. This is quite straightforward (only take the camera position in the map frame). The only problem is that the map might change (e.g. on loop closure) and then the stored camera positions are inaccurate, probably even in the wrong room.
Is there a way to update the camera positions correctly when gmapping does the map update? Or is there a way that the camera positions are updated automatically?
Thanks for any help in advance!
Originally posted by senfti on ROS Answers with karma: 1 on 2017-10-03
Post score: 0
Answer:
AFAIK there is no provision for this in standard Gmapping. Doing this correctly would require keeping track of the robot's trajectory along with every particle, which would mean substantial changes to the code. I'm not sure somebody has done it, but it might be worth skimming the gmapping forks to see if you find something. There is also a somewhat related ticket that has been sitting around for 3 years :)
The better and future-proof way to do things would perhaps be to switch to cartographer_ros. It uses submaps and a constraint graph between them, which lends itself more naturally to doing what you want. For instance, one could localize objects in submaps and thus have them positioned correctly even after loop closure. I'm not sure about how this would work API-wise, but that could be discussed on the cartographer mailing list or issue tracker.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2017-10-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by senfti on 2017-10-06:
Thanks, I will give cartographer_ros a try | {
"domain": "robotics.stackexchange",
"id": 28993,
"tags": "navigation, gmapping"
} |
Why does the Supernova 2006cm give a very different value for the Hubble constant? Why doesn't it increase error bars for the Hubble constant? | Question: The Supernova 2006cm has a redshift of 0.0153 which translates into a recession speed of 4600 km/s.
It has a distance modulus of 34.71 which translates into a luminosity distance of 87 Mpc.
This gives a value of 53 km/s/Mpc for the Hubble constant.
Why doesn't this anomalous observation create sufficient doubt in the value of the Hubble constant?
Answer: At a distance of $d = 87\,\mathrm{Mpc}$, with a Hubble constant of roughly $H_0 = 70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$ cosmological expansion should make the host galaxy UGC 11723 recede at $v=H_0 \,d\simeq6100 \,\mathrm{km}\,\mathrm{s}^{-1}$.
However, galaxies also move through space, at typical velocities from several
$100\,\mathrm{km}\,\mathrm{s}^{-1}$ in galaxy groups (e.g. Carlberg et al. 2000) to some $1000\,\mathrm{km}\,\mathrm{s}^{-1}$ in rich clusters (e.g. Girardi et al. 1993; Karachentsev et al. 2006).
The observed velocity of $4900\,\mathrm{km}\,\mathrm{s}^{-1}$ (Falco et al. 2006) is hence $\sim1200\,\mathrm{km}\,\mathrm{s}^{-1}$ smaller than the Hubble flow, but quite consistent with what may be expected.
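The question's numbers can be reproduced directly (using $H_0 = 70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$ as above; with the quoted Falco et al. velocity of $4900\,\mathrm{km}\,\mathrm{s}^{-1}$ instead, the residual comes out near the $\sim1200\,\mathrm{km}\,\mathrm{s}^{-1}$ stated above):

```python
c = 299_792.458                 # speed of light, km/s

z = 0.0153                      # redshift from the question
mu = 34.71                      # distance modulus from the question

d = 10 ** ((mu - 25) / 5)       # luminosity distance in Mpc, ~87
v_obs = c * z                   # ~4600 km/s
H_naive = v_obs / d             # ~53 km/s/Mpc, the question's value

v_hubble = 70.0 * d             # expected Hubble-flow speed for H0 = 70
v_pec = v_hubble - v_obs        # ~1500 km/s, within the range quoted above
```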
This is why supernovae at such small distances cannot be used to deduce the Hubble constant, unless a very large number is observed such that statistical errors cancel out. | {
"domain": "astronomy.stackexchange",
"id": 4125,
"tags": "cosmology, supernova, expansion, redshift, hubble-constant"
} |
How does the Fermi level of different materials reach equilibrium when in contact? | Question: I'm a college student studying semiconductors. I can't understand what is happening inside the materials when two or more different materials are in contact. Can I get some help?
So, basically at thermodynamic equilibrium, the Fermi level of all materials should be the same. Yes, I understand that, because the electrons of each material will tend to move to a lower energy level, ultimately reaching equilibrium.
However, in metal & n-doped semiconductor junctions, as you can see in the figure below, the Fermi level of the semiconductor is higher than that of the metal. Therefore, I assumed that electrons will flow from the semiconductor into the metal. If that's the case, shouldn't the metal's Fermi level increase? Why does only the Fermi level of the semiconductor change, instead of both? Also, I learned that at the surface of each material, the charges pile up, creating a barrier (Schottky barrier). However, if electrons migrate into the metal, which has high conductivity and freedom of movement of electrons, in what way do charges pile up?
My second question features the MOS structure. As you can see in the figure below, all the materials are in contact, reaching an equilibrium. However, the oxide layer acts as an insulator, forbidding electron movement through the material. In this situation, by which mechanism does the Fermi level reach equilibrium throughout all materials?
Thank you.
Answer:
Why does only the Fermi level of the semiconductor change, instead of both materials?
In principle, the Fermi level changes in both materials. However, the metal has a lot more electrons than the semiconductor does. In a semiconductor, only dopants contribute electrons, and dopants make up a small fraction of the total number of atoms. In a metal, basically all atoms contribute electrons. So, shifting the Fermi level in the metal takes a lot more energy than shifting the Fermi level in the semiconductor, and to a good approximation, the Fermi level in the metal doesn't move; the Fermi level in the semiconductor does. This is is just an approximation; the Fermi level in the metal does move a little, but it's such a small effect that it's usually ignored. (An exception is when you have a small amount of metal relative to the amount of semiconductor.)
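An order-of-magnitude comparison makes the point (the densities below are illustrative, not from the answer):

```python
# Illustrative carrier numbers, orders of magnitude only.
n_metal = 10**23      # conduction electrons per cm^3 in a typical metal
n_semi = 10**17       # donor electrons per cm^3 in moderately doped n-type

# The same transferred charge is a ~10^6 times larger fraction of the
# semiconductor's electron population, so its Fermi level responds far
# more strongly; the metal's is treated as pinned.
ratio = n_metal // n_semi
assert ratio == 10**6
```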
in what way do charges pile up?
To get a constant Fermi level, the bands have to bend near the interface. This basically creates a potential well that the electrons fall into. (Arguably, cause and effect run in the opposite direction.)
In [a MOS structure], by what mechanism does the Fermi level reach equilibrium throughout all materials?
In practice, it doesn't reach equilibrium. (Unless you have a crummy oxide layer.) If you waited a really long time (sometimes on the order of years or even decades), eventually electrons would tunnel across the barrier (or be thermally excited over the barrier) and the Fermi level would equilibrate. So, MOS devices generally aren't in equilibrium; they're in a metastable state. This is basically how flash memory works: a structure similar to the gate gets charged up and takes so long to discharge that it can store information (i.e. retain its charge) for years or even decades. If you made a sufficiently thick, high-quality oxide layer, it would retain its charge effectively forever. | {
"domain": "physics.stackexchange",
"id": 80832,
"tags": "potential, semiconductor-physics, electrical-engineering, chemical-potential"
} |
Function to find text between two tags | Question: I've just finished this function and wanted to know if anyone know another way to do the same:
function findtext(archivo,Delimit1, Delimit2 :String) :String;
var
Buffer :AnsiString;
ResLength :Integer;
i :Integer;
PosDelimit :Integer;
begin
Buffer := read_file_z(archivo);
if Pos(Delimit1, Buffer) > Pos(Delimit2, Buffer) then
PosDelimit := Length(Buffer)-(Pos(Delimit1, Buffer)+Length(Delimit1))
else PosDelimit := Length(Buffer)-(Pos(Delimit2, Buffer)+Length(Delimit2));
Buffer := Copy(Buffer, (Length(Buffer)-PosDelimit), Length(Buffer));
ResLength := Pos(Delimit2, Buffer)-(Pos(Delimit1, Buffer)+Length(Delimit1));
for i := 0 to (Reslength-1) do
Result := Result+Buffer[Pos(Delimit1, Buffer)+(Length(Delimit1)+i)];
end;
The auxiliary function read_file_z is this:
function read_file_z(const FileName: String): AnsiString;
var
F: File;
DefaultFileMode: Byte;
begin
DefaultFileMode := FileMode;
try
FileMode := 0;
AssignFile(F, FileName);
{$I-}
Reset(F, 1);
{$I+}
if IoResult=0 then
try
SetLength(Result,FileSize(F));
if Length(Result)>0 then begin
{$I-}
BlockRead(F,Result[1],LENGTH(Result));
{$I+}
if IoResult<>0 then Result:='';
end;
finally
CloseFile(F);
end;
finally
FileMode := DefaultFileMode;
end;
end;
The function finds text between two tags. An example would be [hi]hi world[hi], and the function will return "hi world".
Example:
findtext('test.exe','[hi]','[hi]');
I want to implement the function using only the default "uses" units.
What alternatives do I have to make the function "findtext"?
Answer: Your code is extremely hard to read. But I believe this is what you are looking for:
function FindText(FileName: TFilename; const Delimit1, Delimit2: string): string;
var
Buffer: TStringList;
PD1, PD2: Integer;
begin
Buffer := TStringList.Create;
try
Buffer.LoadFromFile(FileName);
    PD1 := Pos(Delimit1, Buffer.Text) + Length(Delimit1);
    PD2 := PosEx(Delimit2, Buffer.Text, PD1); // PosEx is in StrUtils; searching after the first tag handles identical tags
Result := Copy(Buffer.Text, PD1, PD2 - PD1);
finally
FreeAndNil(Buffer);
end;
end; | {
"domain": "codereview.stackexchange",
"id": 12315,
"tags": "delphi"
} |
Estimation of Instantaneous Amplitude | Question: I'm reading a paper on EMG analysis. The formulas are all clear to me, but the paper refers to the signal amplitude as "instantaneous" amplitude.
I know what instantaneous means, but what does it mean in the context of signal processing?
Answer: The instantaneous amplitude (or envelope) is usually defined as the magnitude of the (complex-valued) analytic signal $x_a(t)$ associated with the given signal $x(t)$:
$$x_a(t)=x(t)+j\mathcal{H}\{x(t)\}\tag{1}$$
where $\mathcal{H}$ denotes the Hilbert transform. So the instantaneous amplitude (envelope) of $x(t)$ is given by $|x_a(t)|$.
As a very simple example, take $x(t)=\cos(\omega_0t)$. Its Hilbert transform is $\sin(\omega_0t)$, and, consequently, its associated analytic signal is
$$x_a(t)=\cos(\omega_0t)+j\sin(\omega_0t)=e^{j\omega_0t}\tag{2}$$
The instantaneous amplitude (envelope) of $x(t)$ is $|x_a(t)|=1$. | {
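A quick numeric check of this worked example (standard-library Python; the Hilbert transform of $\cos$ is taken to be $\sin$, as stated above, rather than computed):

```python
import math

w0 = 2 * math.pi * 5.0                  # arbitrary angular frequency
ts = [n / 1000.0 for n in range(1000)]  # sample times

# analytic signal of cos(w0 t):  x(t) + j*H{x}(t) = cos + j*sin = e^{j w0 t},
# so its magnitude (the instantaneous amplitude) should be 1 everywhere
envelope = [abs(complex(math.cos(w0 * t), math.sin(w0 * t))) for t in ts]
```

In practice one would compute the analytic signal of an arbitrary (e.g. EMG) signal numerically, for instance via an FFT-based discrete Hilbert transform, and take its magnitude the same way.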
"domain": "dsp.stackexchange",
"id": 8683,
"tags": "signal-analysis, hilbert-transform, emg"
} |
Returning total number of rows in query | Question: Oftentimes I find myself wanting the total number of rows returned by a query even though I may only display 50 or so per page. Instead of doing this in multiple queries like so:
SELECT first_name,
last_name,
(SELECT count(1) FROM sandbox.PEOPLE WHERE trunc(birthday) = trunc(sysdate) ) as totalRows
FROM sandbox.PEOPLE
WHERE trunc(birthday) = trunc(sysdate);
It has been recommended to me to do this:
SELECT first_name,
last_name,
count(*) over () totalRows
FROM sandbox.PEOPLE
WHERE trunc(birthday) = trunc(sysdate);
I am just looking for what is better as far as performance goes, and if performance is a wash, does this really improve the readability of the SQL? It is certainly cleaner/easier to write.
Answer: The latter query will be much more efficient-- it only requires hitting the table once. You can do a quick test yourself to confirm this.
I'll create a simple two-column table with 1 million rows where the second column is one of 10 distinct values
SQL> create table t (
2 col1 number,
3 col2 number
4 );
Table created.
SQL> insert into t
2 select level, mod(level,10)
3 from dual
4 connect by level <= 1000000;
1000000 rows created.
Now, I'll run two different queries that retrieve 10% of the data. I've set SQL*Plus to not bother displaying the data but to display the query plan and the basic execution statistics. With the first query, note that the query plan shows that Oracle has to access the table twice and then do a sort and aggregate. The query does ~10,000 consistent gets which is a measure of the amount of logical I/O being done (note that this is independent of whether data is cached so it is a much more stable measure-- if you run the same query many times, the consistent gets figure will fluctuate very little)
SQL> set autotrace traceonly;
SQL> select col1
2 ,(select count(*) from t where col2=3)
3 from t
4 where col2 = 3;
100000 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3335345748
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 85706 | 2176K| 525 (3)| 00:00:07 |
| 1 | SORT AGGREGATE | | 1 | 13 | | |
|* 2 | TABLE ACCESS FULL| T | 85706 | 1088K| 525 (3)| 00:00:07 |
|* 3 | TABLE ACCESS FULL | T | 85706 | 2176K| 525 (3)| 00:00:07 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("COL2"=3)
3 - filter("COL2"=3)
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
32 recursive calls
1 db block gets
10465 consistent gets
0 physical reads
176 redo size
2219528 bytes sent via SQL*Net to client
73850 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
100000 rows processed
On the other hand, with the analytic function approach, the query plan shows that we only have to hit the table once. And we only have to do ~1,900 consistent gets-- less than 20% of the logical I/O that the first query had to do.
SQL> select col1,
2 count(*) over ()
3 from t
4 where col2 = 3;
100000 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2291049666
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 85706 | 2176K| 525 (3)| 00:00:07 |
| 1 | WINDOW BUFFER | | 85706 | 2176K| 525 (3)| 00:00:07 |
|* 2 | TABLE ACCESS FULL| T | 85706 | 2176K| 525 (3)| 00:00:07 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("COL2"=3)
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
1892 consistent gets
0 physical reads
0 redo size
2219510 bytes sent via SQL*Net to client
73850 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
100000 rows processed
Now, to be fair, you probably won't cut out 80% of your consistent gets moving to the analytic function approach using this particular query because it is likely that far fewer than 10% of the rows in your PEOPLE table have a birth date of today. The fewer rows you return, the less the performance difference will be.
Since this is Code Review, the analytic function approach is much easier to maintain over time because you don't violate the DRY principle and don't have to remember to make all the same changes to your inline query that you make in the main query.
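The single-pass window-function pattern is not Oracle-specific. As a self-contained illustration (a sketch of my own using Python's sqlite3 with a made-up table; window functions require SQLite ≥ 3.25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (first_name TEXT, last_name TEXT, birthday TEXT)")
conn.executemany(
    "INSERT INTO people VALUES (?, ?, ?)",
    [("Ann", "Xu", "2024-01-01"), ("Bob", "Yi", "2024-01-01"), ("Cam", "Zed", "1999-05-05")],
)
# COUNT(*) OVER () attaches the total matching-row count to every row in one pass
rows = conn.execute(
    "SELECT first_name, last_name, COUNT(*) OVER () AS total_rows "
    "FROM people WHERE birthday = '2024-01-01'"
).fetchall()
```

Each of the two returned rows carries `total_rows = 2`, and the table is scanned only once.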
"domain": "codereview.stackexchange",
"id": 528,
"tags": "sql, oracle"
} |
Am I headed in the right direction with PHP? | Question: I have been doing PHP for about three years now, with little direction other than the books I read and trial and error.
I am currently coding what I hope to be a pretty nice application and I have decided to throw myself into the fire and get torn apart (haha you think this is a nice application!!)
Over the last year I have tried to improve my coding by separating it better and moving towards an MVC type idea. At this point most of my work is procedural, but I am starting to finally get into the OOP side and at least get my feet wet.
The code below is what I believe to be my controller. But really that isn't too important; I am more concerned with whether what I am trying to do in this script is being attacked in the right way.
This is for a lodge/resort application; we have ten different cabins, and this page, cabins.php, is responsible for pulling the various data needed for each cabin.
Please note that my comments are to explain my thoughts to this group, they are not proper comments in the true sense, simply to try and help explain what I think is going on!
If this is too broad or helpless, forgive me.
Here we go:
<?php
include("includes/configure.php");
include("includes/content_generation.php");
//Generate sub menus and collect info from database
if ($filename == 'cabins'){
$cabin_title = 'Neato Lodge Cabins';
if(isset($_GET['cabin'])){
$cabin = $_GET['cabin'];
if(!isset($type)){
header('Location: http://www.mysite.com/cabins.php?cabin='.$cabin.'&type=cabin_information');
}
$sub_menu_items['cabin_information']= 'Cabin Information';
$sub_menu_items['reservation_info']= 'Reservation Info';
$sub_menu_items['rates']= 'Rates';
$sub_menu_items['cabin_pictures']= 'Cabin Pictures';
if(isset($_GET['type'])){
$type = $_GET['type'];
switch ($type){
case 'cabin_information':
//I do a join here because I am storing cabin photos in their own table that shares cabin ID. This is my feeble attempt at trying to normalize my data, seemed like a good start. The image pulled is needed later in the view.
$query = "SELECT cabin_name,cabin_description,bed_one,bed_two,bed_extra,image FROM cabin_content INNER JOIN cabin_images ON cabin_content.cabin_id = cabin_images.cabin_id WHERE cabin_content.cabin_id = '$cabin'";
$data = mysqli_query($dbc, $query);
$row = mysqli_fetch_array($data);
$main = '<p class ="first">'.$row['cabin_description'];
$main .= '<p class ="big">Sleeping Information</p>
<p class ="bedrooms">Bedroom One: <span class = "bed">'.$row['bed_one'].'</span></p>
<p class ="bedrooms">Bedroom Two: <span class = "bed">'.$row['bed_two'].'</span></p>
<p class ="bedrooms">Additional Sleeping: <span class = "bed">'.$row['bed_extra'].'</span></p>
<p>For occupany information, available dates and other reservation rules click the reservation info tab</p>';
break;
case 'reservation_info':
$query = "SELECT cabin_name, peak_week, max_occupancy,image FROM cabin_content INNER JOIN cabin_images ON cabin_content.cabin_id = cabin_images.cabin_id WHERE cabin_content.cabin_id = '$cabin'";
$data = mysqli_query($dbc, $query);
$row = mysqli_fetch_array($data);
$main = '<p class ="first">'.$row['cabin_name'].' is available to be rented from May through September, please read below for occupancy information and reservation rules and requirements. '.$row['cabin_name'].' has a maximum occupancy of '.$row['max_occupancy'].' adults for the posted rates, extra charges apply for additional adults or children.</p>
<p class ="big">Seasonal Reservation Information</p>
<p class ="bedrooms">Spring Season: <span class = "bed">'.$row['cabin_name'].' is available for rent with a five night minimum stay with a special discounted rate.</span></p>
<p class ="bedrooms">Peak Season: <span class = "bed">'.$row['cabin_name'].' is available for rent with a six night minimum stay checking in on '.$row['peak_week'].' and checking out the following '.$row['peak_week'].'. If booking within thirty days of planned stay, cabin may be available with a five night minimum.</span></p>
<p class ="bedrooms">Fall Season: <span class = "bed">'.$row['cabin_name'].' is available for rent with a three night minimum stay.</span></p>';
break;
case 'rates':
$query = "SELECT cabin_name, spring_weekly, peak_weekly, fall_weekly, adult_weekly,spring_nightly, peak_nightly, fall_nightly, adult_nightly, youth_nightly, youth_weekly, max_occupancy,image FROM cabin_content INNER JOIN cabin_images ON cabin_content.cabin_id = cabin_images.cabin_id WHERE cabin_content.cabin_id = '$cabin'";
$data = mysqli_query($dbc, $query);
$row = mysqli_fetch_array($data);
$main = '<p class ="first"> Weekly and nightly rates are for up to the cabin maximum occupancy of '.($row['max_occupancy']).'.</p>
<p class ="big">Seasonal Rate Information</p>
<table cellspacing="2" cellpadding="3" >
<tr>
<td class ="clm_head_lft">Season</td>
<td class ="clm_head">Weekly Rate</td>
<td class ="clm_head">Nightly Rate</td>
<td class ="clm_head">Additional Occupants</td>
</tr>
<tr>
<td class ="table_head">Spring Season (May 5th - June 15th:</td>
<td><span class = "rate"><del>'.$row['fall_weekly'].'</del> </span><span class = "special_rate">'.$row['spring_weekly'].'</span></td>
<td><span class = "rate">'.$row['spring_nightly'].'</span></td>
<td rowspan="3"><span class = "additional">Additional adults are '.$row['adult_weekly'].' per week and '.$row['adult_nightly'].' per night, youth under 18 are '.$row['youth_weekly'].' per week and '.$row['youth_nightly'].' per night. </span></td>
</tr>
<tr>
<td class ="table_head">Peak Season:</td>
<td><span class = "rate">'.$row['peak_weekly'].'</span></td>
<td><span class = "rate">'.$row['peak_nightly'].'</span></td>
<td></td>
</tr>
<tr>
<td class ="table_head">Fall Season:</td>
<td><span class = "rate">'.$row['fall_weekly'].'</span></td>
<td><span class = "rate">'.$row['fall_nightly'].'</span></td>
<td></td>
</tr>
</table>';
break;
}
}
}
}
include("layout_includes/header.php");
include("layout_includes/cabins_layout.php");
include("layout_includes/footer.php");
?>
Answer: This is not a Controller. The purpose of MVC is separation of concerns, more specifically the separation of domain logic from the user interface.
Let's see where it fails:
$main .= '<p class ="big">Sleeping Information</p>
<p class ="bedrooms">Bedroom One: <span class = "bed">'.$row['bed_one'].'</span></p>
<p class ="bedrooms">Bedroom Two: <span class = "bed">'.$row['bed_two'].'</span></p>
<p class ="bedrooms">Additional Sleeping: <span class = "bed">'.$row['bed_extra'].'</span></p>
<p>For occupany information, available dates and other reservation rules click the reservation info tab</p>';
Scenarios:
You need to add another css class to any of your paragraphs
You need to change a css class to any of your paragraphs
You need to convert a paragraph to anything else
You need to change the text of a paragraph
blah blah blah (there are a lot of other likely scenarios, but I think the first 4 are enough to illustrate my point)
Points 1 to 3 are user interface specific concerns. In an MVC approach they belong to the View and not in the Controller.
Point 4 can be a user interface concern, if the texts are static, but it can also be viewed as a data concern. The same way you get data from the database, you could collect all these static texts in a configuration file and only have to look at one file to change them across every View in your application. That could be considered a Model approach. Having said that, this:
$query = "SELECT cabin_name,cabin_description,bed_one,bed_two,bed_extra,image FROM cabin_content INNER JOIN cabin_images ON cabin_content.cabin_id = cabin_images.cabin_id WHERE cabin_content.cabin_id = '$cabin'";
$data = mysqli_query($dbc, $query);
$row = mysqli_fetch_array($data);
Doesn't really belong to the Controller as well. The easiest approach would be to have all your database specific functionality in functions in a separate file:
// model file, let's call it "cabinModel.php"
function getCabinInformation($cabinID) {
    global $dbc; // the mysqli connection opened in includes/configure.php
    $query = "SELECT cabin_name,cabin_description,bed_one,bed_two,bed_extra,image FROM cabin_content INNER JOIN cabin_images ON cabin_content.cabin_id = cabin_images.cabin_id WHERE cabin_content.cabin_id = '$cabinID'";
    return mysqli_fetch_array(mysqli_query($dbc, $query));
}
function getAllCabins() {
...
}
function deleteCabin($cabinID) {
...
}
// controller
include "cabinModel.php"
$row = getCabinInformation($cabin);
This way a function to get cabin information is available to all your controllers, you don't have to rewrite it every time you need it. Don't repeat yourself.
If there's any HTML / CSS or any other presentation logic in your Controller, and of course any persistent data logic, you are doing it wrong (in MVC terms). But MVC for small sites may be an overkill. It's a correct approach conceptually, but you will have to decide for yourself if it's the right one for your application. But if you decide it is, you should follow it as is.
A very easy approach to separate presentation from logic would be to use a template engine. There are quite a few of them out there, and there isn't one that's better than the others. Using one is more important than which one.
And of course it wouldn't hurt if you didn't try to reinvent the wheel and started using an MVC framework. Or if that feels too much, a micro framework.
There's is an often quoted article by Rasmus Lerdorf that some people perceive as advocating against template engines and frameworks. It's not, the only point of the article is that you don't have to use them. In the article there's a very nice and tidy approach on how to get an MVC kind of structure out of the box, without the added complexity of any third library. If you really don't want to use any third library, you should copy Rasmus' style. | {
"domain": "codereview.stackexchange",
"id": 939,
"tags": "php"
} |
Adjoint representation of the Lorentz group | Question: Is it possible to construct an adjoint representation for the Lorentz group?
Answer: I suspect the impeccable WetSavannaAnimal answer is too abstract and general to satisfy the OP. Let me vulgarize it a bit. The Lorentz group has 6 generators, so your adjoint rep will be a 6-dimensional rep, a set of 6x6 matrices, which satisfy the same Lie algebra (commutators) as the conventional 4x4 matrices transforming the fundamental (x,y,z,t) 4-vector. As an aside, note the curvature form (parity inv. 2-form) of the Lorentz group is also 6-dimensional.
Recall the Lorentz group may be written in a pretty form, that is rotations $J_i\equiv −\epsilon_{imn} M_{mn} /2 $ and boosts, $K_i\equiv M_{0i}$, so that
$[J_m,J_n] = i \epsilon_{mnk} J_k$ , $[J_m,K_n] = i \epsilon_{mnk} K_k$ , $[K_m,K_n] = -i \epsilon_{mnk} J_k$.
One might further note the important simplification $[J_m +i K_m, J_n −i K_n]=0$, which permits reduction of the Lorentz algebra to su(2)⊕su(2) and efficient treatment of its associated representations.
Now consider the six 6x6 matrices with the 3 J+iKs on the upper left 3x3 subspace of these 6x6 ones, and the 3 J-iKs on the lower right block,
spanned by the 4,5,6 indices. Keep the indices of the upper left block to be the usual 1,2,3; and rename the indices of the lower right block from 1,2,3 to 4,5,6. The commutation relations are then manifest, and the structure constants $f_{mnl}$ are sparse, basically $\epsilon_{ijk}$ for
indices (1,2,3) or (4,5,6), and zero otherwise.
So, then, as you probably learned from SU(3), these very structure constants f provide your 6x6 matrices in the adjoint, with one of the 3 indices (taking 6 values) specifying the particular generator of the Lorentz group represented. | {
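These commutation relations are easy to check numerically. The sketch below (my own illustration, using 0-based indices 0–2 and 3–5 for the two blocks) builds the adjoint matrices $(T_a)_{bc} = -i f_{abc}$ from the block-diagonal structure constants and verifies $[T_a, T_b] = i f_{abc}\, T_c$:

```python
N = 6  # six generators: J+iK on indices 0-2, J-iK on indices 3-5

def f(a, b, c):
    """Structure constants: Levi-Civita on each 3-index block, zero across blocks."""
    if max(a, b, c) < 3:
        return ((a - b) * (b - c) * (c - a)) // 2
    if min(a, b, c) >= 3:
        a, b, c = a - 3, b - 3, c - 3
        return ((a - b) * (b - c) * (c - a)) // 2
    return 0

# adjoint representation: (T_a)_{bc} = -i f_{abc}
T = [[[-1j * f(a, b, c) for c in range(N)] for b in range(N)] for a in range(N)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def satisfies_algebra(a, b):
    """Check [T_a, T_b] = i f_{abc} T_c entry by entry."""
    XY, YX = matmul(T[a], T[b]), matmul(T[b], T[a])
    return all(
        abs((XY[i][j] - YX[i][j])
            - sum(1j * f(a, b, c) * T[c][i][j] for c in range(N))) < 1e-12
        for i in range(N) for j in range(N))

ok = all(satisfies_algebra(a, b) for a in range(N) for b in range(N))
```

Since the structure constants are block-diagonal, the T's split into the two commuting su(2) blocks, exactly as described above.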
"domain": "physics.stackexchange",
"id": 86081,
"tags": "special-relativity, group-theory, group-representations"
} |
How reversible is decerebrate posturing caused by brain stem damage? | Question: This is a follow-up question to How likely would Abraham Lincoln be to survive his wounds today?
You don't have to see a CT scan or autopsy to know if the brainstem is
injured (directly or indirectly), if it doesn't work right.
The description of the first doc at the scene mentioned that Lincoln
was not breathing, and one pupil was dilated (the latter a clear and
unequivocal sign of dysfunction of the third cranial nerve or the
upper brainstem, from where it comes). Unfortunately, the second doc
described the enlarged pupil being the right one (it's extremely
unlikely to have been one and then the other - one of the docs was probably mistaken as to the side).
By 3 hours after injury, both pupils were fixed and dilated, and
Lincoln showed extensor (decerebrate) posturing - again, all signs of
profound brainstem dysfunction (but not yet brain death, though pretty
close to it).
Now, what is decerebrate posturing? See http://en.wikipedia.org/wiki/Abnormal_posturing#Decerebrate
Decerebrate posturing is also called decerebrate response, decerebrate
rigidity, or extensor posturing. It describes the involuntary
extension of the upper extremities in response to external stimuli. In
decerebrate posturing, the head is arched back, the arms are extended
by the sides, and the legs are extended.[6] A hallmark of decerebrate
posturing is extended elbows.[12] The arms and legs are extended and
rotated internally.[13] The patient is rigid, with the teeth
clenched.[13] The signs can be on just one or the other side of the
body or on both sides, and it may be just in the arms and may be
intermittent.[13]
A person displaying decerebrate posturing in
response to pain gets a score of two in the motor section of the
Glasgow Coma Scale (for adults) and the Pediatric Glasgow Coma Scale
(for infants). Decerebrate posturing indicates brain stem damage,
specifically damage below the level of the red nucleus (e.g.
mid-collicular lesion). It is exhibited by people with lesions or
compression in the midbrain and lesions in the cerebellum
Answer: Decerebrate posturing is a sign of major brainstem dysfunction.
The reversibility of this state (which is considered to be very critical and hardly reversible) depends upon the origin of the brainstem dysfunction.
Primary brainstem dysfunction means that the brainstem was directly damaged (by the bullet), and there is no way that this can be quickly reversed. This is exactly the case here, and there is no chance that the brainstem might recover with time.
Secondary brainstem dysfunction may be caused by cerebral edema and thereby dislocation of the brainstem towards the foramen magnum, where the brainstem is coned. The coning of the brainstem can be reversible if the brain edema is treated quickly and effectively (e.g. by intravenous administration of diuretics).
With the brainstem regaining its normal functions, the decerebrate posture will eventually resolve, and depending upon the residual brain injury, the recovery of brain function (and compensation) may be almost complete.
"domain": "biology.stackexchange",
"id": 113,
"tags": "human-biology, neuroscience"
} |
How to show whether two states are indistinguishable or not by measuring in a different basis? | Question: I'm struggling with understanding a bit of basic quantum mechanics math that I was hoping someone could clarify.
If I have two states such as these:
$$\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$$
and
$$\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$$
What is the right way to go about showing whether they're indistinguishable or not? I know that because these two states differ by a relative phase change that they're not the same. Should I substitute in an alternate basis like $|+\rangle$ and $|-\rangle$ for the $|0\rangle$ and $|1\rangle$, and then take measurements? Is showing that they're not orthogonal enough to show that they are either the same state (if they're not orthogonal) or different states (if they are orthogonal)?
Thanks!
Answer: If you wish to distinguish two states $|\psi\rangle$ and $|\phi\rangle$, you can only guarantee to do this if $\langle\psi|\phi\rangle=0$. You do this by measuring in a basis defined by the two states (alternatively, you apply a unitary $U$ such that
$$
U|\psi\rangle=|0\rangle,\qquad U|\phi\rangle=|1\rangle,
$$
and then measure in the standard $Z$ basis).
However, provided $|\langle\psi|\phi\rangle|\neq 1$, you can distinguish the states with some non-zero probability. There are a couple of different strategies that you can follow depending on how you want to interpret the result.
For example, to succeed with maximum probability, construct the operator $|\psi\rangle\langle\psi|-|\phi\rangle\langle\phi|$, and construct two projectors $P_+$ and $P_-$ which project onto the positive and negative eigenspaces of that operator. When you measure using the projectors $P_{\pm}$, if you get the + answer, assume you had $|\psi\rangle$, while if you get the - answer, assume you had $|\phi\rangle$. This is known as the Helstrom measurement, and you can show it has the maximum success probability.
Alternatively, if you don't want there to be any ambiguity in the result (thinking it was $|\psi\rangle$ when it was actually $|\phi\rangle$), you can use a POVM. Define
$$
E_1=p|\psi^\perp\rangle\langle\psi^\perp|,\qquad E_2=p|\phi^\perp\rangle\langle\phi^\perp|,\qquad E_3=1-E_1-E_2.
$$
The states $|\psi^\perp\rangle$ and $|\phi^\perp\rangle$ are orthogonal to $|\psi\rangle$ and $|\phi\rangle$ respectively. You must choose the parameter $p$ to be as large as possible, but such that $E_3$ has no negative eigenvalues. When you measure with these, if you get answer $E_1$, you definitely did not have $|\psi\rangle$, hence you definitely had $|\phi\rangle$. Similarly, if you got answer 2, you definitely had $|\psi\rangle$. However, if you get answer 3, this corresponds to a "not sure" answer.
In the case of orthogonal states, such as your example, all these strategies are equivalent and have a probability of success of 1. You can describe the strategy either as "measure in the $X$ basis" or "apply Hadamard and measure in the standard ($Z$) basis". | {
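For the two orthogonal states in the question specifically, the "apply Hadamard and measure in the $Z$ basis" strategy can be checked numerically (a standard-library sketch of my own):

```python
import math

s = 1 / math.sqrt(2)
plus = [s, s]     # (|0> + |1>)/sqrt(2)
minus = [s, -s]   # (|0> - |1>)/sqrt(2)

H = [[s, s], [s, -s]]  # Hadamard gate: maps |+> -> |0> and |-> -> |1>

def apply(U, v):
    return [U[0][0] * v[0] + U[0][1] * v[1],
            U[1][0] * v[0] + U[1][1] * v[1]]

# Born-rule probabilities after the rotation: both outcomes are certain,
# so the two states are distinguished with probability 1
p0_given_plus = abs(apply(H, plus)[0]) ** 2    # P(outcome 0 | state was |+>)
p1_given_minus = abs(apply(H, minus)[1]) ** 2  # P(outcome 1 | state was |->)
```

For non-orthogonal states the same check would show both probabilities strictly below 1, matching the discussion above.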
"domain": "quantumcomputing.stackexchange",
"id": 965,
"tags": "quantum-state, mathematics, measurement"
} |
STEP files for URDF model | Question:
Dear Ros Users,
I am trying to write a URDF model for the Robotiq 2F-140 gripper. http://support.robotiq.com/pages/viewpage.action?pageId=5963876
Unfortunately only STEP files are provided. Is there a way to use these files to create an URDF file usable in ROS?
Thanks!
Originally posted by Rahndall on ROS Answers with karma: 133 on 2017-05-22
Post score: 0
Original comments
Comment by gvdhoorn on 2017-05-22:
This is not an answer, but I wanted to point you to ros-industrial/robotiq. I'm not sure whether or not the 2F-140 is supported already, but might be worth a look. See ros-industrial/robotiq#97.
Answer:
URDF only supports STL and DAE mesh formats (as far as I know). If you can only find STEP files, you can import those into most CAD programs and export them as STL easily.
Solidworks is the most popular CAD software but it is expensive. If you don't already have access to it, there are some free ones you can try such as FreeCAD.
One small hiccup: if your STEP files don't contain color information then STL will work fine. But STL does not support colors so if your STEP DOES contain color you'll want to use DAE instead of STL, but most CAD packages don't export DAE. Most CAD software WILL export .wrl which can then be imported into Blender and then converted to DAE and color will be maintained.
To summarize:
if STEP file doesn't have color->import STEP into FreeCAD->export as STL
if STEP file does contain color->import STEP into FreeCAD->export as .wrl->import .wrl into Blender->export as DAE
(I know this works with Solidworks, I'm assuming FreeCAD can do it)
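For reference, once the mesh is exported, a URDF link references it along these lines (the package and file names here are hypothetical; note that STL exports are often in millimetres, which the scale attribute can correct):

```xml
<link name="gripper_body">
  <visual>
    <geometry>
      <!-- package-relative path to the exported mesh; use a .dae here to keep colors -->
      <mesh filename="package://robotiq_2f_140/meshes/body.stl" scale="0.001 0.001 0.001"/>
    </geometry>
  </visual>
</link>
```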
Originally posted by Airuno2L with karma: 3460 on 2017-05-22
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by pmuthu2s on 2019-02-05:
When I convert WRL format to DAE format using blender, color is lost. Any clue guys? | {
"domain": "robotics.stackexchange",
"id": 27962,
"tags": "urdf"
} |
Question regarding average velocity | Question: We know that average velocity, in its strictest sense, means total displacement over total time taken: $$\frac{X_f-X_i}{T_f-T_i}$$
There's a special case, when a body is moving in a straight line with a constant acceleration. Of course since its acceleration is constant, it has to be a rectilinear motion.
In this case average velocity (over a time interval) is simply
$$V_{avg}=\frac{V_1+V_2}{2} \tag 1$$
It can be proved easily, using the equations of motion.
The important point is that this formula works only when a body is moving with a "constant acceleration", as per my teachers and the books I have.
The problem, which is why I'm actually putting this post up, is that there's another case. If the body moves with a velocity $V_1$ for a time interval $t$, and then it moves with a velocity $V_2$ for the same amount of time $t$. In other words, the body traveling with a velocity $V_1$, takes time $t$ to go from a point $A$ to another point $B$, and then it goes from point $B$ to another point $C$ and again, it takes time $t$, traveling with a velocity $V_2$.
In this case, when the time intervals are equal, we calculate the average velocity (or average speed) by taking the arithmetic mean of the individual velocities, using this formula (as per my teachers and the books I have):
So, in this case :
$$V_{avg}=\frac{V_1+V_2}{2}\tag 2$$
But equation (1) and equation (2) are completely identical. Isn't that strange? And odd? Because equation (1) should be valid 'only' when the acceleration is constant. But in the second case, the acceleration of the body is not constant during the course of its motion. Its acceleration is zero as it goes from $A$ to $B$, then its acceleration changes as its velocity changes from $V_1$ to $V_2$, and then its acceleration is zero again from $B$ to $C$. Even though its acceleration is non-constant, the formula for finding out its average velocity is exactly the same, as in the first case.
Please explain what's going on here. Because the formula $V_{avg}=\frac{V_1+V_2}{2}$ should be valid if and only if acceleration is uniform. (As per my books and my teachers).
Thanks
Answer: You are mixing things. Those formulae are not the same; the problem is that you are using the same symbols, but the meaning is different.
For the rectilinear uniformly accelerated movement, you use
$$v_{average}=\dfrac{v_{initial}+v_{final}}{2}$$
Whereas in the other case, you use
$$v_{average}=\dfrac{v_{1}+v_{2}}{2}$$
Meaning "velocity in the first part" and "velocity in the second part", but those velocities are constant.
So, in the first case, $v$ is continuously changing (constant acceleration), while in the second one, those are two constant velocities. First one value for some interval, then another value along the rest of the movement. They are different things.
Of course there's something else that tells you why they look so similar (not being the same). Take the definition of a mean value. In this case, we will talk about velocity but it works for any magnitude.
The movement lasts a total time $T$, and it is divided into 2 "parts", in which velocity is different. Let's say
1→$v_1, t_1$
2→$v_2, t_2$,
Meaning that "part one" lasts $t_1$ seconds and velocity is $v_1$ in that part. Same for the 2nd one. Obviously, $T=t_1+t_2$ because it is the total time.
Then, the average velocity is
$$ v_{average}=\dfrac{v_1t_1+v_2t_2}{T}$$
(Please, don't say "as my book and teachers say", you need to understand where this formula comes from. If you don't, let me know in the comments and I'll add it here).
And, of course, you can rewrite it as
$$ v_{average}=v_1 \frac{t_1}{T} + v_2 \frac{t_2}{T}$$
So, no matter what the values are, you have
$ v_{average}=\dfrac{v_1+v_2}{2}$, as long as, and only if
Those velocities are constant along their intervals.
All the intervals have the same duration.
That is,
If $t_1=t_2$, you can use that formula.
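A quick numeric check of the equal-intervals condition (a Python sketch; the velocities and durations are arbitrary):

```python
# Time-weighted average velocity for piecewise-constant motion:
# v_avg = sum(v*t) / sum(t)
def time_weighted_average(segments):
    """segments is a list of (velocity, duration) pairs."""
    total_time = sum(t for _, t in segments)
    return sum(v * t for v, t in segments) / total_time

v1, v2 = 3.0, 7.0
# Equal durations: reduces to the simple mean (v1 + v2) / 2
assert time_weighted_average([(v1, 5.0), (v2, 5.0)]) == (v1 + v2) / 2
# Unequal durations: the simple mean is no longer correct
assert time_weighted_average([(v1, 1.0), (v2, 9.0)]) != (v1 + v2) / 2
```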
But this formula can be generalized for any number of intervals
$$ v_{average}=\dfrac{v_1t_1+v_2t_2+v_3t_3+...}{T}$$
Even in the case where the intervals are so small that velocity is continuously changing, this definition still holds. So, you can write
$$ v_{average}=\dfrac{\int_{ini}^{final} v dt}{T} $$
So what happens if velocity varies linearly, as in the uniformly accelerated straight motion? Then $v(t)=v_0+at$, so
$$ v_{average}=\dfrac{\int_{ini}^{final} [v_0+at] dt}{T}= $$
$$ = \frac{v_0T + aT^2/2}{T}= v_0+\frac{(v_F-v_0)}{2}=\frac{v_0+v_F}{2}$$
A formula with the same form, but the meaning is different, because now they're not 2 intervals, but a continuously changing velocity. | {
"domain": "physics.stackexchange",
"id": 52390,
"tags": "kinematics, acceleration"
} |
Discussion of the Rovelli's paper on the black hole entropy in Loop Quantum Gravity | Question: In a recent discussion about black holes, space_cadet provided me with the following paper of Rovelli: Black Hole Entropy from Loop Quantum Gravity which claims to derive the Bekenstein-Hawking formula for the entropy of the black hole.
Parts of his derivation seem strange to me, so I hope someone will able to clarify them.
All of the computation hangs on the notion of distinguishable (by an outside observer) states. It's not clear to me how does one decide which of the states are distinguishable and which are not. Indeed, Rovelli mentions a different paper that assumes different condition and derives an incorrect formula. It seems to me that the concept of Rovelli's distinctness was arrived at either accidentally or a posteriori to derive the correct entropy formula.
Is the concept of distinguishable states discussed somewhere more carefully?
After this assumption is taken, the argument proceeds to count the number of ordered partitions of a given number (representing the area of the black hole), and this can easily be seen to be exponential by combinatorial arguments, leading to the proportionality of the area and entropy.
But it turns out that the constant of proportionality is wrong (roughly 12 times smaller than the correct B-H constant). Rovelli says that this is because a number of issues were not addressed. The correct computation of area would also need to take into account the effect of nodes intersecting the horizon. It's not clear to me that addressing this would not spoil the proportionality even further (instead of correcting it).
Has a more proper derivation of the black hole entropy been carried out?
Answer: Dear Marek, it has been shown that the paper by Rovelli was invalid for lots of reasons, including those related to yours.
First of all, as you hint, it is incorrect to treat the interior and exterior of the black hole asymmetrically because the location of the event horizon may only be determined a posteriori - after a star collapses. So there's no qualitative difference between the interior and the exterior.
It follows that in the "real LQG", there would also be an entropy coming from the interior which would be volume-extensive. No one has ever shown that this term is absent; its absence is just wishful thinking, so the proportionality law to the surface is just a result of an omission.
However, even if one removes the interior by hand, Rovelli's paper was shown to be incorrect. The numerical constant turned out to be incorrect, and newer calculations showed that even with the assumption that the black hole entropy comes from the horizon - which could make the area-law for the entropy tautological - the actual calculable entropy is actually not proportional to the area at all. The corrections to Rovelli's paper - showing that his neglecting of the higher spins etc. was invalid - appeared e.g. in
http://arxiv.org/abs/gr-qc/0407051
http://arxiv.org/abs/gr-qc/0407052
If you're looking for papers that show that it suddenly makes sense, you will be disappointed. Quite on the contrary, it has been shown that none of the early dreams that LQG could produce the right black hole entropy works. This is also particularly self-evident in the case of the quasinormal modes that were hypothesized to know about the "right" unnatural value of the Immirzi parameter - a multiplicative discrepancy in the Rovelli-like calculations.
I showed that for the Schwarzschild, the result really contained $\ln(3)/\sqrt{2}$ and similar right things, but we also showed with Andy Neitzke - and with many other people who followed - that the number extracted for other black holes is totally different and excludes the heuristic conjecture.
So today, it's known that the relationship supported by the same Immirzi parameter on "both sides" was actually wrong on both sides, not just one. There is no calculation of an area-extensive entropy in LQG or any other discrete model of quantum gravity, for that matter.
Best wishes
Lubos | {
"domain": "physics.stackexchange",
"id": 4562,
"tags": "research-level, quantum-gravity, loop-quantum-gravity"
} |
Partial measurement (destructive) collapses |1> to |0> | Question: In the following QScript program:
VectorSize 6
// The system can be in only one state - |000001>
SigmaX 0
// Non-destructive full measurement returns 1, as expected
Measure
Print measured_value
// Destructive measurement collapses the last qubit to |0>?
MeasureBit 0
Measure
Print measured_value
Why does the last qubit collapse to |0> when it's in fact |1>? Is this a bug in the simulator or did I misunderstand how the partial destructive measurement should work? In the about page they give an example where a qubit collapses to |1>, so it should be possible...
Cross-posted on Stack Overflow.
Answer: Your understanding is correct. A destructive partial measurement leaves the measured qubit in a definite state, but it can never collapse to a state that had zero probability, as in your example. I have tried the MeasureBit(b) example from their about page. However, after the partial measurement, the state always corresponds to a measurement result of 0, no matter whether the actual measurement result gives 0 or 1. So it is a bug.
// Trying MeasureBit() described in the about page http://qcplayground.withgoogle.com/#/about
VectorSize 6
// Create 4 qubit state with equal probability
Hadamard 0
Hadamard 1
Hadamard 2
Hadamard 3
// Measure the most significant bit
MeasureBit 3
Print measured_value | {
"domain": "physics.stackexchange",
"id": 14791,
"tags": "quantum-information, quantum-computer"
} |
JavaScript BitArray Implementation | Question: I am working on a project in which an array can easily grow beyond 50M in length. It's an array holding only boolean (0/1) values. Using Uint8Array is just fine and it's very performant compared to normal arrays. Still, I just wanted to try implementing a BitArray by extending the DataView object. BitArray enormously reduces the memory footprint compared to normal Arrays (1/32) and Uint8Array (1/8). So that's a certain gain, but as for speed, despite doing everything to my knowledge, BitArray is still slightly slower compared to both unless the size hits the 33,554,433 limit (> 2²⁵). As I have read from this document, at this point the normal Array reaches the ~268MB memory limit and its internal structure gets switched to NumberDictionary[16], yielding a dramatic slow down in v8. Note that when the length is 33,554,433 the BitArray uses only 4MB of memory.
One other point to note is, for my application I need the total number of 1s in the BitArray (population count), which is the popcount property in the code below. Thanks to the blazingly fast Hamming Weight algorithm, BitArray is like 80 times faster compared to counting the existing items in the array, though I am not testing it here.
Any ideas to boost the BitArray up a little are most welcome.
class BitArray extends DataView{
constructor(n,ab){
var abs = n >> 3; // ArrayBuffer Size
super(ab instanceof ArrayBuffer ? ab
: n & 31 ? new ArrayBuffer(abs + 4 - (abs & 3))
: new ArrayBuffer(abs));
}
get length(){
return this.buffer.byteLength*8;
}
get popcount(){
var m1 = 0x55555555,
m2 = 0x33333333,
m4 = 0x0f0f0f0f,
h01 = 0x01010101,
pc = 0,
x;
for (var i = 0, len = this.buffer.byteLength >> 2; i < len; i++){
x = this.getUint32(i << 2);
x -= (x >> 1) & m1; //put count of each 2 bits into those 2 bits
x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
x = (x + (x >> 4)) & m4; //put count of each 8 bits into those 8 bits
pc += (x * h01) >> 56;
}
return pc;
}
// n >> 3 is Math.floor(n/8)
// n & 7 is n % 8
at(n){
return this.getUint8(n >> 3) & (1 << (n & 7)) ? 1 : 0;
}
set(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) | (1 << (n & 7)));
}
reset(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) & ~(1 << (n & 7)));
}
slice(a = 0, b = this.length){
return new BitArray(b-a,this.buffer.slice(a >> 3, b >> 3));
}
toggle(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) ^ (1 << (n & 7)));
}
toString(){
return new Uint8Array(this.buffer).reduce((p,c) => p + Array.prototype.reduce.call(c.toString(2).padStart(8,"0"),(f,s) => s+f), "")
.slice(0,this.length);
}
}
// Test code starts from here
var len = 1e6, // array length
tst = 10, // test count
arr = Array(len),
bar = new BitArray(len),
uia = new Uint8Array(len),
r1,r2,r3,t = 0;
console.log(`There are ${bar.popcount} 1s in the BitArray`);
for (var i = 0; i < len; i++){
(Math.random() > 0.5) && ( t++
, bar.set(i)
);
}
console.log(`${t} .set() ops are made and now there are ${bar.popcount} 1s in the BitArray`);
console.time("Array");
for (var k = 0; k < tst; k++){
for (var i = 0; i < len; i++) arr[i] = Math.random() > 0.5 ? 1 : 0;
for (var i = 0; i < len; i++) t = arr[i];
r1 = arr.slice();
}
console.timeEnd("Array");
console.time("BitArray");
for (var k = 0; k < tst; k++){
for (var i = 0; i < len; i++) Math.random() > 0.5 ? bar.set(i) : bar.reset(i)
for (var i = 0; i < len; i++) t = bar.at(i);
r2 = bar.slice();
}
console.timeEnd("BitArray");
console.time("Uint8Array");
for (var k = 0; k < tst; k++){
for (var i = 0; i < len; i++) uia[i] = Math.random() > 0.5 ? 1 : 0;
for (var i = 0; i < len; i++) t = uia[i];
r3 = uia.slice();
}
console.timeEnd("Uint8Array");
Answer:
abs + 4 - (abs & 3)
Could be simplified: (abs + 3) & -4
They are not equivalent expressions, but they are equivalent when abs is not a multiple of 4. Actually the simpler one is more convenient in this case: with this expression, you can skip the ternary, because it won't improperly round up multiples of 4 so that's no longer a special case.
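A small check of the two expressions (sketched in Python, whose bitwise semantics match JavaScript's for non-negative 32-bit values):

```python
# Compare the two round-up-to-multiple-of-4 expressions from the post.
for abs_size in range(0, 64):
    simple = (abs_size + 3) & -4              # suggested expression
    original = abs_size + 4 - (abs_size & 3)  # expression from the post
    if abs_size % 4 != 0:
        assert simple == original             # they agree off multiples of 4
    else:
        assert simple == abs_size             # no spurious extra word
        assert original == abs_size + 4       # the original over-allocates here
```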
popcount trickery
You're using a good trick already, but there is still a possibility for more. One interesting property of the trick you used is that when it has done a couple of steps, it starts to waste some of its potential, by which I mean for example that the third step, (x + (x >> 4)) & m4, adds adjacent nibbles, but the values in those nibbles range from 0 to 4 (inclusive), not the range 0 to 15. If the loop was unrolled by a factor of two, then instead of having two whole copies of the popcount trick, only the first half needs to be duplicated: then the intermediate results could be added (producing nibbles with values in the range 0 to 8, inclusive), and the second half of the trick (the third and fourth steps) needs only to be performed once.
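For reference, here is the standard 32-bit form of the trick used in the post, checked in Python against naive bit counting. (Note that the post's `>> 56` behaves as `>> 24` in JavaScript, since shift counts are taken modulo 32.)

```python
# SWAR popcount for a 32-bit value, mirroring the masks in the post.
def popcount32(x):
    m1, m2, m4 = 0x55555555, 0x33333333, 0x0F0F0F0F
    x -= (x >> 1) & m1                 # 2-bit partial counts
    x = (x & m2) + ((x >> 2) & m2)     # 4-bit partial counts
    x = (x + (x >> 4)) & m4            # 8-bit partial counts (each byte 0..8)
    return ((x * 0x01010101) & 0xFFFFFFFF) >> 24  # byte sum lands in top byte

for v in (0, 1, 0xFFFFFFFF, 0xDEADBEEF, 0x80000001):
    assert popcount32(v) == bin(v).count("1")
```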
There are more tricks, for example given 3 uint32s (a, b, c), we only need 2 32-bit popcounts to count all the bits:
b0 = a ^ b ^ c
b1 = (a & (b | c)) | (b & c)
count = popcount(b0) + 2 * popcount(b1)
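A sketch verifying the identity above (Python ints standing in for 32-bit words; the construction is a bitwise full adder):

```python
import random

def popcount(x):
    return bin(x).count("1")

random.seed(0)
for _ in range(100):
    a, b, c = (random.getrandbits(32) for _ in range(3))
    b0 = a ^ b ^ c                     # bit 0 of each per-position sum
    b1 = (a & (b | c)) | (b & c)       # bit 1 (the majority / carry bits)
    assert popcount(a) + popcount(b) + popcount(c) == popcount(b0) + 2 * popcount(b1)
```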
The bit-manipulation steps are effectively a bit-level 3-input addition, with the 2-bit result spread across two variables: at each position, b0 holds bit 0 of the sum at that position, b1 holds bit 1 of the sum. This can be generalized to combining 7 inputs into 3, 15 into 4, etc. Even better versions of this idea can be found in Faster Population Counts Using AVX2 Instructions (of course you won't be using AVX2 in JavaScript, but some of the same techniques can be applied to reduce the number of actual popcounts). | {
"domain": "codereview.stackexchange",
"id": 43999,
"tags": "javascript, array"
} |
What is a compact model? | Question: I am a domain scientist and I study biophysical systems. When I asked a colleague in CS for suggestions about how to design a multi-scale simulator, he mentioned the term "compact model".
Now, I can't find a very good definition of this term. I can find the book "Compact Modeling" edited by Gildenblat but this is a collection of domain specific applications, similar to what I found on google scholar.
I haven't found a first-principles treatment of what a compact model is or the advantages of using one. It would be great to know where I could find this topic in a textbook or annual reviews type paper.
Q: What is a compact model, and when is it useful?
Answer: This paper uses the phrases "compact model" and "multiscale simulation". Perhaps it is relevant. Indeed, Googling "multiscale simulation compact model" yielded lots of results.
The back cover of the book you refer to states:
Compact Models of circuit elements are models that are sufficiently simple to be incorporated in circuit simulators and are sufficiently accurate to make the outcome of the simulators useful to circuit designers. The conflicting objectives of model simplicity and accuracy make the compact modeling field an exciting and challenging research area for device physicists, modeling engineers and circuit designers.
My understanding is that a compact model is a parametric equation with certain parameters determined that models the phenomenon of interest to some degree of precision. | {
"domain": "cs.stackexchange",
"id": 12092,
"tags": "terminology"
} |
Parsing parameters for a potential function | Question: Background
A forcefield is a collection of functions and parameters that is used to calculate the potential energy of a complex system. I have text files which contain data about the parameters for a forcefield. The text file is split into many sections, with each section following the same format:
A section header which is enclosed in square brackets
On the next line the word indices: followed by a list of integers.
This is then followed by 1 or more lines of parameters associated with the section
Here is a made-up example file to showcase the format.
############################################
# Comments begin with '#'
############################################
[lj_pairs] # Section 1
indices: 0 2
# ID eps sigma
1 2.344 1.234 5
2 4.423 5.313 5
3 1.573 6.321 5
4 1.921 11.93 5
[bonds]
indices: 0 1
2 4.234e-03 11.2
6 -0.134545 5.7
The goal is to parse such files and store all of the information in a dict.
Code
Main function for review
""" Force-field data reader """
import re
from dataclasses import dataclass, field
from typing import Any, Dict, Iterable, List, Optional, TextIO, Tuple, Union
def ff_reader(fname: Union[str, TextIO]) -> Dict[str, "FFSections"]:
""" Reads data from a force-field file """
try:
if _is_string(fname):
fh = open(fname, mode="r")
own = True
else:
fh = iter(fname)
own = False
except TypeError:
raise ValueError("fname must be a string or a file handle")
# All the possible section headers
keywords = ("lj_pairs", "bonds") # etc... Long list of possible sections
# Removed for brevity
re_sections = re.compile(r"^\[(%s)\]$" % "|".join(keywords))
ff_data = _strip_comments(fh)
# Empty dict that'll hold all the data.
final_ff_data = {key: FFSections() for key in keywords}
# Get first section header
for line in ff_data:
match = re.match(re_sections, line)
if match:
section = match.group(1)
in_section_for_first_time = True
break
else:
raise FFReaderError("A valid section header must be the first line in file")
else:
raise FFReaderError("No force-field sections exist")
# Read the rest of the file
for line in ff_data:
match = re.match(re_sections, line)
# If we've encountered a section header the next line must be an index list.
if in_section_for_first_time:
if line.split()[0] != "indices:":
raise FFReaderError(f"Missing index list for section: {section}")
idx = _validate_indices(line)
final_ff_data[section].use_idx = idx
in_section_for_first_time = False
in_params_for_first_time = True
continue
if match and in_params_for_first_time:
raise FFReaderError(
f"Section {section} is missing parameters. "
+ "Sections must contain at least one set of coefficients"
)
if match: # and not in_section_for_first_time and in_params_for_first_time
section = match.group(1)
in_section_for_first_time = True
continue
params = _validate_params(line)
final_ff_data[section].coeffs.update([params])
in_params_for_first_time = False
# Close the file if we opened it
if own:
fh.close()
for section in final_ff_data.values():
# coeff must exist if use_idx does
if section.use_idx is not None:
assert section.coeffs
return final_ff_data
Other stuff for the code to work
def _strip_comments(
instream: TextIO, comments: Union[str, Iterable[str], None] = "#"
) -> Iterable[str]:
""" Strip comments from a text IO stream """
if comments is not None:
if isinstance(comments, str):
comments = [comments]
comments_re = re.compile("|".join(map(re.escape, comments)))
try:
for lines in instream.readlines():
line = re.split(comments_re, lines, 1)[0].strip()
if line != "":
yield line
except AttributeError:
raise TypeError("instream must be a `TextIO` stream") from None
@dataclass(eq=False)
class FFSections:
"""
FFSections(coeffs,use_idx)
Container for forcefield information
"""
coeffs: Dict[int, List[float]] = field(default_factory=dict)
use_idx: Optional[List[int]] = field(default=None)
class FFReaderError(Exception):
""" Incorrect or badly formatted force-Field data """
def __init__(self, message: str, badline: Optional[str] = None) -> None:
if badline:
message = f"{message}\nError parsing --> ({badline})"
super().__init__(message)
def _validate_indices(line: str) -> List[int]:
"""
Check if given line contains only a whitespace separated
list of integers
"""
# split on indices: followed by whitespace
split = line.split("indices:")[1].split()
# import ipdb; ipdb.set_trace()
if not set(s.isdecimal() for s in split) == {True}:
raise FFReaderError(
"Indices should be integers and separated by whitespace", line
)
return [int(x) for x in split]
def _validate_params(line: str) -> Tuple[int, List[float]]:
"""
Check if given line is valid param line, which are
an integer followed by one or more floats separated by whitespace
"""
split = line.split()
id_ = split[0]
coeffs = split[1:]
if not id_.isdecimal():
raise FFReaderError("Invalid params", line)
try:
coeffs = [float(x) for x in coeffs]
except (TypeError, ValueError):
raise FFReaderError("Invalid params", line) from None
return (int(id_), coeffs)
I consider myself a beginner in Python and this is my first substantive project. I'd like the review to focus on the ff_reader function, but feel free to comment on the other parts too if there are better ways to do some things. I feel like the way I've written ff_reader is kind of ugly and inelegant. I'd be especially interested if there is a better way to read such files, perhaps parsing the whole file instead of line by line.
Answer: I have written parsers for several similar file formats, and one time I started with the same idea as you: iterate over the lines and record the current state in some boolean variables. Over time, these parsers got too large to understand. Therefore I switched to a different strategy: instead of recording the current state in variables, record it implicitly in the code that is currently executed. I structured the parser like this:
def parse_file(lines: Lines):
sections = []
while not lines.at_end():
section = parse_section(lines)
if section is None:
break
sections.append(section)
return sections
def parse_section(lines: Lines):
name = lines.must_match(r"^\[(\w+)\]$")[1]
indices_str = lines.must_match(r"^indices:\s*(\d+(?:\s+\d+)*)$")[1]
data = []
while not lines.at_end():
row = parse_row(lines)
if row is None:
break
data.append(row)
indices = map(int, indices_str.split())
return Section(name, indices, data)
As you can see, each part of the file structure gets its own parsing function. Thereby the code matches the structure of the file format. Each of the functions is relatively small.
To make these functions useful, they need a source of lines, which I called Lines. This would be another class that defines useful function such as must_match, which makes sure the "current line" matches the regular expression, and if it doesn't, it throws a parse error. Using these functions as building blocks, writing and modifying the parser is still possible, even when the file format becomes more complicated.
Another benefit of having these small functions is that you can test them individually. Prepare a Lines object, pass it to the function and see what it returns. This allows for good unit tests.
The Lines class consists of a list of lines and the index of the current line. As you parse the file, the index will advance, until you reach the end of the lines.
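A minimal version of such a Lines helper might look like this (a hypothetical sketch; the method names are chosen to match the must_match/at_end calls used in the parser above):

```python
import re

class Lines:
    """A list of lines plus a cursor; parsing advances the cursor."""
    def __init__(self, lines):
        self.lines = list(lines)
        self.pos = 0

    def at_end(self):
        return self.pos >= len(self.lines)

    def must_match(self, pattern):
        """Match the current line against `pattern` or raise; on success, advance."""
        m = re.match(pattern, self.lines[self.pos])
        if m is None:
            raise ValueError(f"parse error at line {self.pos + 1}: {self.lines[self.pos]!r}")
        self.pos += 1
        return m

lines = Lines(["[bonds]", "indices: 0 1"])
assert lines.must_match(r"^\[(\w+)\]$")[1] == "bonds"
assert lines.must_match(r"^indices:\s*(\d+(?:\s+\d+)*)$")[1] == "0 1"
assert lines.at_end()
```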
Regarding your code:
I don't like the union types very much. They make the code more complicated than necessary. For example, when stripping the comments, you actually only need the single comment marker #. Therefore all the list handling can be removed, and the comment character doesn't need to be a parameter at all.
Stripping the comments at the very beginning is a good strategy since otherwise you would have to repeat that code in several other places.
In that comment removal function you declared that the comment may also be None, but actually passing None will throw an exception.
Be careful when opening files. Every file that is opened must be closed again when it is not needed anymore, even in case of exceptions. Your current code does not close the file when a parse error occurs. This is another reason against union types. It would be easier to have separate functions: one that parses from a list of strings and one that parses from a file. How big are the files, does it hurt to load them into memory as a single block? If they get larger than 10 MB, that would be a valid concern. | {
"domain": "codereview.stackexchange",
"id": 34450,
"tags": "python, python-3.x, parsing"
} |
What is the minimum requirement for the dataset for time series forecasting? | Question: I have a dataset of patients where, for each patient, a measurement is taken 3 times per day. For example, patient 1 has recordings at 7.30 am, 12.30 pm and 8.30 pm. Patient 1 has a collection of 30 days with such data recorded thrice a day.(altogether 3 recordings per day * 30 days = 90 data points).
Is this type of data suitable for time-series forecasting?
Thanks in advance for the help.
Answer: Welcome to the site!
Forecasting can be done using any length of time series. For example, if I have a set of data {1, 10, 19, 28}, then I can be pretty sure that the next value in the set is going to be 37 (because there is a strong pattern here: 10=1+9, 19=10+9, etc.).
So if you have a strong signal, then even if you don't have a very long sequence, you can get a pretty accurate forecast.
The question becomes: what type of time series forecasting model should you use?
I would avoid any type of neural network here (the Data Science forum often talks about neural networks of some type). Your data are not rich enough to support estimation of the hundreds (to millions!) of parameters that such models require.
Instead, I would try something fairly simple to start, like a moving average or perhaps an ARIMA-type model.
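As an illustration of the "fairly simple" end of that spectrum, a moving-average forecast is only a few lines (a sketch; the readings and window size here are made up):

```python
# Predict the next value as the mean of the last `window` observations.
def moving_average_forecast(series, window=3):
    recent = series[-window:]
    return sum(recent) / len(recent)

readings = [5.0, 6.0, 7.0, 9.0, 8.0, 10.0]  # one patient's recent values
assert moving_average_forecast(readings, window=3) == 9.0  # (9 + 8 + 10) / 3
```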
Remember, each patient is independent (hopefully!) but the measurements within a patient are dependent (i.e. patient 1's readings at times 1, 2, and 3 are correlated with each other). So if you want to forecast "the value at 7:30 AM regardless of the patient", that's different from forecasting "patient 1's value at 10 pm".
Hope that at least gives you a starting point! | {
"domain": "datascience.stackexchange",
"id": 6390,
"tags": "machine-learning, time-series, forecasting"
} |
How to calculate normal force acting on part attached with a screw? Can the force on screws be decreased by lowering the height of the normal force? | Question: I have an L-shaped beam (equal height and width lengths) hanging from a screw on the wall, and I model the screw as a pin joint (hopefully okay approximation even though it does apply a constant force on the beam). I'm trying to figure out how well secured the screws need to be to the wall. Diagram below.
A (pin joint)
o --> Ax
|
| <-- N
| h
._____. B
:
v
F
Then if the rest of the vertical part of the beam is touching the wall, where approximately, in terms of height, could I model a point normal force applying? In reality the normal force applies along the entire height of the part touching the wall, but varies with height (and the amount of force applied by the screw). Specifically, I am interested in the case where a downward force is applied at the far end of the L beam. Intuitively, I feel the normal force will be applied closer to B the higher F is. But I am not sure how to start solving this.
For example:
The forces acting on my part are Ax, Ay, N, and F. The dimensions of the vertical and horizontal components of the part are 1m for simplicity and I want to solve in terms of F. The unknowns are Ax, Ay, N, and h, but solving for Ay is trivial I think.
Using sum of x forces gives: Ax - N = 0 --> Ax=N. Therefore, we have that the normal force will be equal to the force pulling the screws out of the wall. Is that right?
Then additional equations includes moments about B: F - Ax + h*N = 0 which simplifies to F + (h-1)*N=0
And sum of moments about A: F - (1-h)*N = 0 which simplifies to F + (h-1)*N = 0.
This is the same as sum of moments about B, and that makes sense since it's all the same part, unfortunately. The sum of moments about the point where F is applied also simplifies to the same equation after substituting Ay=F.
Therefore, I don't have enough equations to solve this problem and don't know how to proceed here. How do I find another equation to give me h? Or is there a different approach to modeling this problem? Finding h could be an integration problem if I know the equation form of load applied to the wall, but I don't know how to get started finding that equation form - linear from 0 at A to F at B? (if so, why?)
Lastly, I have that Ax = F / (1-h) where h is inversely related to how far from A the normal force is. This should mean that the lower on the wall I get the normal force, the less force is applied to the screws, right? Like would a T design for a wall bracket (diagram below) reduce the force pulling out the screws? It kind of makes intuitive sense, but again, I would like to be able to calculate it to make sure.
o-->Ax
|
----|
: |<--N
v |
F
Answer: For the most general case, of a T-design, I think you are looking at something like this
Where $F$ is the applied force, and $B_x$, $A_y$, and $A_x$ represent the forces provided by the screws to keep the bracket stable.
As you will see, moving the shelf up and down (changing the $d$ value) does not affect the forces on the screws.
The standard way to proceed is to form the three equilibrium equations and solve for the 3 unknown forces.
$$ \left. \begin{aligned} A_x + B_x & = 0 \\ A_y - F & = 0 \\ h B_x +\ell F & = 0 \end{aligned} \right\} \begin{aligned} A_x & = \tfrac{\ell}{h} F \\ A_y & = F \\ B_x & = \mbox{-}\tfrac{\ell}{h} F \end{aligned} $$
And there you have it. Calculate $A_x$ to get the required screw tension to keep the shelf in place. If you know what the screw tension $A_x$ is, then you can back solve for the minimum $h$ value
$$ \boxed{ h \ge \frac{F}{A_x} \ell }$$
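A quick numeric check of that solution (a sketch with arbitrary values):

```python
# Plug the solved forces back into the three equilibrium equations.
F, ell, h = 10.0, 1.0, 0.5
Ax = (ell / h) * F
Ay = F
Bx = -(ell / h) * F

assert abs(Ax + Bx) < 1e-12            # sum of horizontal forces = 0
assert abs(Ay - F) < 1e-12             # sum of vertical forces = 0
assert abs(h * Bx + ell * F) < 1e-12   # sum of moments = 0
```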
For completion, you could also slide all the forces to find where they meet
and solve the force balance only, with no need to take moments. Notice that the direction $\theta$ of the net force at the top screw is fixed by the geometry as $\tan \theta = \tfrac{\ell}{h}$. Then the force balance equations are
$$ \left. \begin{aligned} A \sin \theta + B & = 0 \\ A \cos \theta - F & = 0 \end{aligned} \right\} \begin{aligned} A & = \frac{F}{\cos \theta} \\ B & = \mbox{-} F \tan \theta \end{aligned} $$
And the pull-out force of the top screw is $$A_x = A \sin \theta = (\tan \theta ) F$$
So again, if you know the screw tension $A_x$, solve for the minimum design angle
$$ \boxed{ \tan \theta \ge \frac{A_x}{F} }$$ | {
"domain": "physics.stackexchange",
"id": 98554,
"tags": "homework-and-exercises, forces, statics"
} |
Simple majority classifier question | Question: one of my training questions for my exam is the following one:
Suppose you are testing a new algorithm on a data set consisting of
100 positive and 100 negative examples. You plan to use leave-one-out
cross-validation (i.e. 200-fold cross-validation) and compare your
algorithm to a baseline function, a simple majority classifier. Given
a set of training data, the majority classifier always outputs the
class that is in the majority in the training set, regardless of the
input. You expect the majority classifier to achieve about 50%
classification accuracy, but to your surprise, it scores zero every
time. Why?
My only solution about it is that the training data is inverse to the real data.
But I'm not sure about my answer. May anybody help me?
Regards,
Patrick
Answer: You have 100 examples of the positive class and 100 examples of the negative class.
Now you do:
examples = "List of all 200 examples"
accuracies = Empty list
for(i=0; i<|examples|; i++) {
one = examples[i]
training = examples \ {one}
# !!!!!!!!!!!!!!!!!!!!!!!!!
majority_clf = get majority in training. This is the other class than the class of "one"
# !!!!!!!!!!!!!!!!!!!!!!
accuracies.append(majority_clf.predict(one) == class(one))
}
overall_accuracy = sum(accuracies) / |accuracies| | {
"domain": "cs.stackexchange",
"id": 11292,
"tags": "machine-learning, classification"
} |
What's the difference between "Ohmic dissipation", "Joule heating", "ion drag" and "resistive heating"? | Question: The following terms are sometimes used to refer to ... more or less ... the same thing by different people and in different contexts (electronic circuits vs. plasma physics, etc.):
Ohmic dissipation
Joule heating
Ion drag
Resistive heating
[other terms??]
I've heard that some of these terms aren't quite the same, or are potentially misleading. See, for instance, the discussion in the following article:
Vasyliūnas, V. M., and P. Song (2005), Meaning of ionospheric Joule heating, J. Geophys. Res., 110, A02301, doi:10.1029/2004JA010615. (link)
What do each of these terms properly or formally refer to, and what are some "best practices" as to when to use one term over the other? What is the best context for each term? Is there a difference in the nature of energy transfer in each case? Which of the various MHD equations would each term show up in?
Note: I'm looking to split hairs, so "they're all the same" isn't a good answer. :-)
Answer: Joule heating is typically associated with increases in random kinetic energy (i.e., heat) due to $\mathbf{j} \cdot \mathbf{E}$. Ohmic dissipation and resistive heating are similar in a sense to Joule heating, as all three result from fluctuating electric fields acting as an effective drag force on an otherwise free flowing charged particle.
Ion drag is likely associated with fluid terms like viscosity or Coulomb collisions, which can act to inhibit the bulk flow of charged particles.
Generally in a plasma, one refers to anomalous resistivity or anomalous viscosity. The use of the word anomalous comes from the fact that the interactions are not rigorously fluid-like (i.e., not from collisions). They are typically the result of waves or fluctuating fields radiated by an instability, which then act to remove the free energy that created them. Note that I am not referring to the "fudge factor" that MHD simulations will often use to account for or introduce some form of dissipation.
These terms are used in MHD, though in a plasma we have found through observation and particle-in-cell (PIC) simulations that resistive/drag terms arise from purely kinetic effects. Meaning, to have these effects in MHD is to artificially insert them (i.e., throw in some adaptive anomalous resistivity or allow numerical resistivity).
I will edit this answer later with a more thorough response, but my newborn just woke up and needs attention.
Edit/Additions
The $\mathbf{j} \cdot \mathbf{E}$ comes from Poynting's theorem where:
$$
\partial_{t} W_{EM} + \nabla \cdot \mathbf{S} = - \mathbf{j} \cdot \mathbf{E}
$$
where $W_{EM}$ is the electromagnetic energy density (= $\varepsilon_{o} E^{2}/2$ + $B^{2}/(2 \ \mu_{o})$) and $\mathbf{S}$ is the Poynting flux (= $\mathbf{E} \times \mathbf{B}/\mu_{o}$). Another way to say this is the time rate of change of the energy density of the electromagnetic fields plus the rate of electromagnetic energy flux flowing out of a surface equals the energy lost due to momentum transfer between particles and fields.
When you can approximate $\mathbf{E}$ as $\overleftrightarrow{\eta} \cdot \mathbf{j}$, where $\overleftrightarrow{\eta}$ is a resistivity tensor, then we have:
$$
\mathbf{j} \cdot \mathbf{E} \rightarrow \mathbf{j} \cdot \overleftrightarrow{\eta} \cdot \mathbf{j}
$$
which is often approximated to be ~$\eta \ j^{2}$, where we have reduced the tensor to a scalar. In this form, one would call this Ohmic heating or resistive heating. I think of it this way because the conversion of $\mathbf{E}$ to a function of $\mathbf{j}$ is referred to as Ohm's law.
Drag Force
Drag forces are often written in a form similar to:
$$
\mathbf{F} = - b \ \mathbf{v}
$$
where $b$ is a constant and $\mathbf{v}$ is the velocity of the object experiencing the drag. In a collisional medium, $b$ $\rightarrow$ $m \ \nu$, where $m$ is the mass of the object and $\nu$ is a characteristic frequency which is a collision rate in this case.
The advantage of this form, $\mathbf{F}$ = -$m \ \nu \ \mathbf{v}$, is that $\nu$ can be applied to binary collisions, Coulomb collisions, or wave-particle collisions (what I referred to as anomalous collisions before).
Relation to Resistivity
The collision frequency can be related to resistivity by:
$$
\eta = \frac{ m \ \nu }{ n_{e} \ e^{2} }
$$
where $n_{e}$ is the electron number density and $e$ is the fundamental charge. In the ionosphere, the dominant resistive terms arise from electron-neutral and electron-ion collisions. In the solar wind, however, the Coulomb collision rates are roughly one per day near Earth (assuming $90^{\circ}$ deflections, i.e., not including small angle deflections). So in the presence of a waves/instabilities, the dominant terms arise from wave-particle collisions where the wave fields act as scattering centers.
The proper form for the wave-particle collision rate depends upon the dispersion relation for the wave. In the quasi-linear approximation for ion-acoustic waves, for instance, the anomalous collision frequency is given by:
$$
\nu = \omega_{pe} \frac{ \varepsilon_{o} \ \delta E^{2} }{ 2 \ n_{e} \ k_{B} \ T_{e} }
$$
where $\omega_{pe}$ is the electron plasma frequency, $\delta E$ is the wave electric field, $k_{B}$ is the Boltzmann constant, and $T_{e}$ is the electron temperature.
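For concreteness, the plasma frequency and the quasi-linear collision-frequency estimate above can be evaluated numerically. The script below is an illustrative sketch in SI units; the parameter values are made-up solar-wind-like numbers chosen for this example, not taken from the post:

```python
import math

EPS0 = 8.854187817e-12   # vacuum permittivity [F/m]
E_CH = 1.602176634e-19   # elementary charge [C]
M_E = 9.1093837015e-31   # electron mass [kg]
K_B = 1.380649e-23       # Boltzmann constant [J/K]

def plasma_frequency(n_e):
    """Electron plasma frequency: omega_pe = sqrt(n_e e^2 / (eps0 m_e))."""
    return math.sqrt(n_e * E_CH**2 / (EPS0 * M_E))

def anomalous_collision_freq(n_e, T_e, dE):
    """Quasi-linear ion-acoustic estimate:
    nu = omega_pe * (eps0 dE^2) / (2 n_e k_B T_e)."""
    return plasma_frequency(n_e) * (EPS0 * dE**2) / (2.0 * n_e * K_B * T_e)

# Illustrative inputs: n_e = 5e6 m^-3, T_e = 1e5 K, dE = 1 mV/m
nu = anomalous_collision_freq(5e6, 1e5, 1e-3)
print(f"anomalous collision frequency ~ {nu:.3e} s^-1")
```

Note the scalings made explicit by the formula: the collision rate grows quadratically with the wave field amplitude.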
Summary
I think the best thing to do is explicitly state the term(s) that you are referring to in your work. Meaning, write out $\mathbf{j} \cdot \mathbf{E}$ as the definition of Joule heating, for instance. If you explicitly show the term to which you refer, you will not have an issue. The confusion largely arises from implied relationships between the jargon and the actual mathematical expressions that are used incorrectly or carelessly.
I also agree with Vytenis, in that the frame of reference for which you define these terms is critical because both $\mathbf{j}$ and $\mathbf{E}$ depend upon the frame of reference. However, if you clearly define each of the terms to which you refer, this should not be an issue either. | {
"domain": "physics.stackexchange",
"id": 21709,
"tags": "electromagnetism, fluid-dynamics, terminology, plasma-physics, magnetohydrodynamics"
} |
Evaluation of a solution for Run-length encoding algorithm | Question: I wrote an answer for the following question and am wondering whether there is a better approach to it.
Using the Java language, have the function RunLength(str) take the str
parameter being passed and return a compressed version of the string
using the Run-length encoding algorithm. This algorithm works by
taking the occurrence of each repeating character and outputting that
number along with a single character of the repeating sequence.
For example: "wwwggopp" would return 3w2g1o2p. The string will not
contain any numbers, punctuation, or symbols.
Code
public class App {
void runLength(String str) {
HashMap<Character, Integer> hash = new HashMap<Character, Integer>();
Character c;
for (int i = 0; i < str.length(); i++) {
c = str.toLowerCase().charAt(i);
if (hash.containsKey(c)) {
int value = hash.get(c);
value++;
hash.put(c, value);
} else {
hash.put(c, 1);
}
}
int value = 0;
StringBuffer buf = new StringBuffer();
String temp;
for (int j = 0; j < str.length(); j++) {
c = str.toLowerCase().charAt(j);
value = hash.get(c);
temp = str.substring(j, j + 1);
if (buf.indexOf(c.toString()) == -1) {
buf.append(value);
buf.append(temp);
}
}
System.err.println(buf.toString());
}
public static void main(String[] args) {
new App().runLength("wwwggopp");
}
}
Output
3w2g1o2p
Answer: Avoid printing in methods, unless the purpose of the method is specifically to print. The main purpose of the runLength method is certainly not to print. It should return the compressed value instead.
The description doesn't say that you should treat upper and lowercase characters equal. It's not correct to do that. And if you really wanted to do that,
it would have been simpler to lowercase the entire input string once.
This is a very wasteful operation:
for (int i = 0; i < str.length(); i++) {
c = str.toLowerCase().charAt(i);
This would have been much better:
String lowered = str.toLowerCase();
for (int i = 0; i < lowered.length(); i++) {
c = lowered.charAt(i);
Declare variables with the interface type, when possible.
Instead of:
HashMap<Character, Integer> hash = new HashMap<Character, Integer>();
This would have been better:
Map<Character, Integer> hash = new HashMap<Character, Integer>();
Btw are you still on Java6? You should definitely migrate to at least Java7,
where the above declaration becomes simply:
Map<Character, Integer> hash = new HashMap<>();
Note that in most use cases, StringBuilder is preferred over StringBuffer.
StringBuffer is synchronized, StringBuilder is not, which makes it faster.
In this program you don't need synchronization when building the compressed string.
Suggested implementation
This is simpler, without a hashmap:
public String compress(String str) {
if (str.isEmpty()) {
return "";
}
char[] chars = str.toCharArray();
StringBuilder builder = new StringBuilder();
int count = 1;
char prev = chars[0];
for (int i = 1; i < chars.length; i++) {
char current = chars[i];
if (current == prev) {
count++;
} else {
builder.append(count).append(prev);
count = 1;
}
prev = current;
}
return builder.append(count).append(prev).toString();
}
Unit testing
It's always good to have unit tests to verify correctness:
@Test
public void test_aabcccccaaa() {
assertEquals("2a1b5c3a", compress("aabcccccaaa"));
}
@Test
public void test_a5() {
assertEquals("5a", compress("aaaaa"));
}
@Test
public void test_empty() {
assertEquals("", compress(""));
}
@Test
public void test_a() {
assertEquals("1a", compress("a"));
}
@Test
public void test_a3b4() {
assertEquals("3a4b", compress("aaabbbb"));
}
@Test
public void test_abc() {
assertEquals("1a1b1c", compress("abc"));
}
@Test
public void test_wwwggopp() {
assertEquals("3w2g1o2p", compress("wwwggopp"));
} | {
"domain": "codereview.stackexchange",
"id": 10028,
"tags": "java, strings, compression"
} |
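As an aside, in a language with grouping primitives the same run-length encoding collapses to a one-liner. The Python sketch below is only an illustration of the algorithm, not part of the Java exercise:

```python
from itertools import groupby

def run_length(s):
    """Run-length encode a string: 'wwwggopp' -> '3w2g1o2p'."""
    # groupby yields (char, iterator-over-the-run) for each maximal run
    return "".join(f"{sum(1 for _ in group)}{char}" for char, group in groupby(s))

print(run_length("wwwggopp"))  # 3w2g1o2p
```

It passes the same test cases used in the review above ("aabcccccaaa" -> "2a1b5c3a", "" -> "").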
Does the cosmic censorship conjecture limit the charge/mass ratio for particles? | Question: For charged black holes, supposing the cosmic censorship conjecture is true, this inequality must hold (in natural units):
$GM^2 > Q^2$
Where of course $M$ and $Q$ are the mass and charge of the black hole. Now suppose you "throw" a bunch of charged particles into the black hole. I know that the definition of charge and mass of a black hole is complicated and I know almost nothing about those. What I know is that they are usually defined based on their asymptotic values far away from the hole.
A quick Google search seems to tell me that a particle falling into the black hole causes its mass and charge to increase by the mass and charge of the particle.
Does this mean that the inequality implies a limit on the charge/mass ratio of charged particles? Because if there were no limit, one could keep throwing in particles with a suitable ratio until the inequality no longer holds, creating a naked singularity.
Answer: No, this does not imply a charge/mass limit, but this is a really interesting question, and was answered for the first time in a famous paper of Wald from 1974.
His paper showed that, at least in the specific case where the charge of the particle is much smaller than the charge of the black hole, any particle with a charge/energy ratio that could result in a naked singularity if it were absorbed by the black hole will, in fact, be deflected before it reaches the event horizon. So there is no limit on the charge/energy ratio of the particle itself due to cosmic censorship, but there IS a limit on what kinds of particles can actually be absorbed by the black hole. That limit corresponds exactly to the limit needed to preserve cosmic censorship (at least in the case of an extremal black hole).
A 1999 paper due to Hubeny (http://arxiv.org/abs/gr-qc/9808043) raised some new questions about this result by looking at potential higher-order problems when one considers black holes that are nearly-extremal rather than extremal. This potential issue was seemingly resolved by Poisson and collaborators in 2013 (http://arxiv.org/abs/1211.3889).
NB: You may notice that I wrote "charge/energy" rather than "charge/mass" when discussing the particle's parameters. This is no mistake: the mass change in the black hole when it absorbs a particle is governed by the particle's total energy, not its rest mass. Even if you make a black hole absorb a particle with an arbitrarily high charge and arbitrarily small rest mass, for example by pushing the particle to the edge of the event horizon with a rocket ship, the particle will pick up enough energy over the course of the "pushing" process to counterbalance its charge and preserve cosmic censorship. | {
"domain": "physics.stackexchange",
"id": 33955,
"tags": "general-relativity, black-holes, charge, reissner-nordstrom-metric, cosmic-censorship"
} |
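The bookkeeping in the question and answer can be made concrete with a toy calculation (geometric units with G = c = 1). This only checks the inequality before and after absorption; it does not model the dynamics of Wald's argument, and `is_black_hole`/`absorb` are hypothetical helper names for this sketch:

```python
def is_black_hole(M, Q):
    """Cosmic censorship bound for Reissner-Nordstrom, G = c = 1: M^2 >= Q^2."""
    return M * M >= Q * Q

def absorb(M, Q, energy, charge):
    """The hole's mass grows by the particle's total *energy*, its charge by q."""
    return M + energy, Q + charge

# Extremal hole: M = Q = 1. Naively, a particle carrying more charge than
# energy would overcharge it...
M, Q = absorb(1.0, 1.0, energy=0.1, charge=0.2)
print(is_black_hole(M, Q))   # False -- would be a naked singularity

# ...but Wald's bound says such a particle is repelled before reaching the
# horizon: absorption requires energy >= charge * Q / r_horizon (= charge
# here), and any particle satisfying that keeps the inequality intact.
M, Q = absorb(1.0, 1.0, energy=0.2, charge=0.2)
print(is_black_hole(M, Q))   # True -- still (marginally) a black hole
```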
sw_urdf_exporter preview and sw crash | Question:
When I have finished specifying links and joints in the sw_urdf_exporter plugin, with the other settings left on automatic, I click Preview. The green progress bar at the bottom loads for a while, then SolidWorks crashes and throws an error.
I checked the issue posted here and realized that the reference coordinate systems and axes should be specified in the model before export. Am I right?
However, my model file type is STEP, which contains no assembly relations, reference coordinate systems, or axes. So, in order to proceed with the URDF export, should I add reference coordinate systems and axes to the model manually?
If so, that is pretty time-consuming; in other words, using the original assembly file, which contains the assembly relations, is the correct way to export a URDF. Am I right? However, the exporter plugin might only fit SW2012, while the original assembly was built in a newer version.
Originally posted by shawnysh on ROS Answers with karma: 339 on 2017-01-10
Post score: 1
Original comments
Comment by Panason on 2019-08-26:
no need to assign reference frame
Answer:
So, in order to proceed with the URDF export, should I add reference coordinate systems and axes to the model manually?
If you want to have joints in your exported URDF: yes.
If so, that is pretty time-consuming; in other words, using the original assembly file, which contains the assembly relations, is the correct way to export a URDF. Am I right?
That would certainly seem to be the most straightforward way of doing it, yes.
However, the exporter plugin might only fit SW2012, while the original assembly was built in a newer version.
I would try the plugin with a newer version of SW. It can work, it's just not supported / recommended.
Originally posted by gvdhoorn with karma: 86574 on 2017-01-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by shawnysh on 2017-01-10:
I tried the export process on SW2012 and failed; however, I succeeded on SW2015 using someone else's PC. Maybe something is wrong with my SW2012.
Comment by shawnysh on 2017-01-10:
I asked someone, and he told me that he used to turn the STEP file into an assembly file after adding assembly relations. If assembly relations exist, can both reference coordinate systems and reference axes be automatically detected by the plugin, without specifying them manually? | {
"domain": "robotics.stackexchange",
"id": 26683,
"tags": "ros, coordinate, sw-urdf-exporter, model"
} |
Subscribe to information output to screen by Gmapping? | Question:
Is there a way to subscribe to the information output to the screen by Gmapping? It prints out a lot more information than it allows subscriptions to or services for. I want to be able to get that information without modifying the Gmapping code.
Originally posted by IFLORbot on ROS Answers with karma: 33 on 2012-07-19
Post score: 0
Original comments
Comment by allenh1 on 2012-07-19:
In which datum specifically are you interested?
Comment by IFLORbot on 2012-07-19:
Specifically, I'm interested in the Average Scan Matching Score and the Scan Matching Failed alert.
Comment by IFLORbot on 2012-07-19:
I'm unfamiliar with how to open the output in a pipe....
EDIT: After reading about ROS launch output, (either output to "screen" or "log") piping does seem to be the ONLY way. Thanks for the advice
Comment by IFLORbot on 2012-07-19:
@dornhege was right. I was able to pipe it into a simple publisher I wrote. Thanks. I am now opening up a new Question to see if I can add pipes to roslaunch xml files.
Answer:
IIRC gmapping does not use ROS_... output, so it is not possible in a ROS way. You could still open the output in a pipe or something like that.
Originally posted by dornhege with karma: 31395 on 2012-07-19
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10274,
"tags": "slam, navigation, gmapping-demo, gmapping, slam-gmapping"
} |
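A minimal sketch of the piping approach accepted above: run gmapping with its console output piped into a script that extracts the interesting lines and republishes them. The exact wording of gmapping's log lines below is an assumption; check your own console output and adjust the patterns accordingly:

```python
import re
import sys

# Assumed log-line formats -- verify against your gmapping console output.
SCORE_RE = re.compile(r"Average Scan Matching Score\s*=\s*([0-9.]+)")
FAIL_RE = re.compile(r"Scan Matching Failed")

def parse_line(line):
    """Classify one line of gmapping output.

    Returns ("score", value) for an average-scan-matching-score line,
    ("failed", None) for a scan-matching-failure line, or None otherwise.
    """
    m = SCORE_RE.search(line)
    if m:
        return ("score", float(m.group(1)))
    if FAIL_RE.search(line):
        return ("failed", None)
    return None

def main():
    # Pipe gmapping into this script, e.g. (topic name is an assumption):
    #   rosrun gmapping slam_gmapping 2>&1 | python gmapping_filter.py
    for line in sys.stdin:
        event = parse_line(line)
        if event is not None:
            print(event)  # replace with a rospy publisher in a real node

# main()  # uncomment when running inside the pipeline
```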
How to add walls or other solid objects to Euler's equations? | Question: I've recently started learning about the physics of fluids, and I've found that Euler's equations exist and that I can use them to compute the flow of fluids with zero viscosity. But I don't see any easy way of adding an obstacle to a compressible flow, as shown in this gif
How should I do it? Is something similar also possible with the Navier-Stokes equations?
(I am aware that this question is probably a duplicate of some older question, because it sounds like a really basic problem, but I've searched for the answer for some time, and I even asked on the physics Discord server and found nothing.)
Answer: You do not need to add anything to the Euler or Navier-Stokes equations. To solve the equations you need boundary conditions. The walls appear in those. | {
"domain": "physics.stackexchange",
"id": 97068,
"tags": "fluid-dynamics, flow, boundary-conditions"
} |
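The answer's point can be illustrated with the standard finite-volume treatment of a solid wall: reflecting ghost cells. Scalars (density, pressure) are mirrored across the wall, while the normal velocity has its sign flipped, which enforces zero mass flux through the boundary. The sketch below assumes a 1-D grid with two ghost cells at each end; the layout and helper name are choices of this example, not a fixed convention:

```python
NG = 2  # number of ghost cells on each side of the domain

def apply_wall_bc(rho, v, p):
    """Reflecting (solid-wall) boundary condition for 1-D lists that carry
    NG ghost cells at each end: mirror the scalars across the wall (even
    reflection) and negate the normal velocity (odd reflection), so the
    wall sees zero normal mass flux."""
    for q in (rho, p):                               # even reflection
        q[:NG] = q[2*NG - 1:NG - 1:-1]
        q[-NG:] = q[-NG - 1:-2*NG - 1:-1]
    v[:NG] = [-x for x in v[2*NG - 1:NG - 1:-1]]     # odd reflection
    v[-NG:] = [-x for x in v[-NG - 1:-2*NG - 1:-1]]
    return rho, v, p

# Demo: interior cells 2..7 hold the state; the BC fills the ghosts.
rho = [0.0] * 2 + [1.0] * 6 + [0.0] * 2
v = [0.0] * 2 + [0.5] * 6 + [0.0] * 2
p = [0.0] * 2 + [1.0] * 6 + [0.0] * 2
apply_wall_bc(rho, v, p)
print(v)  # ghost velocities come out as -0.5: no flow through the wall
```

Each time step of an Euler solver re-applies this before computing fluxes; the same idea with the viscous no-slip condition (all velocity components negated) carries over to Navier-Stokes.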
Does 'Battery' matter to simulated pr2 robot | Question:
I noticed that, when running the PR2 in simulation for a long time, the battery will eventually go down to 0. Does that matter when the battery is empty, and if so, how do I charge it?
Thanks
Originally posted by vincent on ROS Answers with karma: 311 on 2011-08-15
Post score: 1
Answer:
No, it does not matter.
Originally posted by Tim Field with karma: 191 on 2011-08-15
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 6420,
"tags": "ros, battery, pr2-simulator"
} |
How to analyze/test a binary search algorithm? | Question: I was asked to "Compute the average runtime for a binary search, ordered array, and the key is in the array." I'm not quite sure how to approach this problem. Isn't the runtime of binary search O(log n)? And the average would be something like n + n/2 + n/4... etc?
I'm then asked to "Implement a program performing an empirical test of the binary search (using a fixed number of random arrays for each n), then do a ratio test justifying your analytical answer." How would I go about doing this? Could I perform a basic binary search algorithm on a number of random arrays, counting the basic operations, and compare that to my original analysis from the first question?
I appreciate any help/guidance here.
Answer: First of all, the worst-case running time of binary search is $\Theta(\log n)$. What you are looking for is the average running time of binary search under some reasonable random model, for example when the element to be looked for is chosen uniformly at random from the (distinct) elements in the array.
Second, the average running time can't be $n + n/2 + n/4 + \cdots$, since the average running time is lower than the worst-case running time. I'm not sure what exactly you're counting here. The average running time is the average number of operations taken by your algorithm, on a random input. In contrast, the worst-case running time is the maximal number of operations taken by your algorithm on any input.
Perhaps the following example will make things clear. Consider the following task: given an array of size $2n$ containing $n$ zeros and $n$ ones, find the index of a zero. The worst-case running time of any algorithm is $\Omega(n)$. In contrast, consider the algorithm which scans the array from left to right, and returns the first zero index. Its worst-case running time is $\Theta(n)$. Under the assumptions that the input array is chosen randomly among all legal arrays, the average running time is $\Theta(1)$ (why?). You can modify the algorithm to one which probes locations randomly, and then the worst-case expected running time is $\Theta(1)$. (If you don't understand the last remark, wait until you learn about randomized algorithms.)
Finally, you're required to empirically demonstrate the claimed average running time $\Theta(f(n))$. The idea is that for large $n$, the average running time will be $\approx C f(n)$ for some constant $C$ whose units are seconds. In other words, $C$ is the constant "hidden" inside the big O notation. Suppose you calculate for several values of $n$ an average $T(n)$ over several runs of the algorithms. Then you expect $T(n)/f(n)$ to be roughly constant, and that would demonstrate your claim concerning the average running time. | {
"domain": "cs.stackexchange",
"id": 1866,
"tags": "algorithm-analysis, runtime-analysis, binary-search"
} |
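The empirical part of the exercise can be set up roughly as follows: instrument a binary search to count probes, average over random present keys, and check that average/log2(n) settles toward a constant. A sketch, assuming the random model where the key is drawn uniformly from the array:

```python
import math
import random

def binary_search(arr, key):
    """Standard binary search; returns (index, number of probes)."""
    lo, hi, probes = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if arr[mid] == key:
            return mid, probes
        elif arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

def average_probes(n, trials=200, rng=random.Random(0)):
    """Average probe count when searching for a uniformly random present key."""
    arr = list(range(n))  # sorted array of n distinct keys
    total = 0
    for _ in range(trials):
        _, probes = binary_search(arr, rng.randrange(n))
        total += probes
    return total / trials

# The ratio test: if the average cost is Theta(log n), these ratios
# should settle toward a constant as n grows.
for n in (1 << 8, 1 << 12, 1 << 16):
    print(n, average_probes(n) / math.log2(n))
```

For these sizes the ratios hover a little below 1, consistent with the analytical average of roughly log2(n) - 1 probes.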
Thickness of proton exchange membrane (Nafion)? | Question: I am looking into using a proton exchange membrane (PEM) for a bio-energy cell experiment. Previous scholars have been using Nafion as the PEM placed between the anode and the cathode (air-cathode). The thickness of the material is about 7 mil. I was wondering: would the ion transfer rate get better if I used a thinner version of the same material, due to the decrease in distance? Or would it cause electrons to go through?
I would really appreciate it if someone can point me in the right direction.
Thanks
Answer: Ideally, the ion transfer rate gets better with a decrease in distance. However, it is harder to implement a thinner PEM, depending on the application. The method of fabrication might cause thinner PEMs to be less selective, despite their shortening the distance between the electrodes. | {
"domain": "chemistry.stackexchange",
"id": 16433,
"tags": "electrochemistry, experimental-chemistry, ions, electrons, protons"
} |
ROS-Gazebo Failed to load joint_state_controller | Question:
Hello guys,
I'm following this tutorial about ROS integration on Gazebo 1.9 (http://gazebosim.org/wiki/Tutorials/1.9/ROS_Control_with_Gazebo).
My issue is:
As an example I consider RRBot. When I type these commands:
roslaunch rrbot_gazebo rrbot_world.launch
to start simulation, and
roslaunch rrbot_control rrbot_control.launch
to load controllers, it gives me these errors:
[INFO] [WallTime: 1397731938.843439] [11.566000] Loading controller: joint_state_controller
[ERROR] [WallTime: 1397731939.851478] [12.570000] Failed to load joint_state_controller
[INFO] [WallTime: 1397731939.852413] [12.571000] Loading controller: joint1_position_controller
[ERROR] [WallTime: 1397731940.860462] [13.576000] Failed to load joint1_position_controller
[INFO] [WallTime: 1397731940.861270] [13.576000] Loading controller: joint2_position_controller
[ERROR] [WallTime: 1397731941.869383] [14.581000] Failed to load joint2_position_controller
[INFO] [WallTime: 1397731941.870228] [14.581000] Controller Spawner: Loaded controllers:
[INFO] [WallTime: 1397731941.876315] [14.588000] Started controllers:
Then, if I type:
rosservice call controller_manager/list_controller_types
it gives me this error:
ERROR: Service [/controller_manager/list_controller_types] is not available.
How can I resolve this?
Thanks
Originally posted by Il_Voza on ROS Answers with karma: 56 on 2014-04-17
Post score: 4
Answer:
I've got the same problem and I've figured out how to solve it.
You are having this problem because you don't have the controllers installed.
Just run these commands in your terminal:
sudo apt-get update
sudo apt-get install ros-indigo-ros-control ros-indigo-ros-controllers
After that, when you run the same commands, you'll be able to control the arm of RRBot.
Originally posted by Joao Luis with karma: 110 on 2014-07-31
This answer was ACCEPTED on the original site
Post score: 8 | {
"domain": "robotics.stackexchange",
"id": 17682,
"tags": "ros, microcontroller, gazebo, rosservice"
} |
Ugly workaround to get the vbext_ProcKind of a procedure is breaking encapsulation | Question: This is a follow up to Extending the VBAExtensibility Library. It turns out that code had a really nasty bug. Anytime vbeProcedure.StartLine got called, I was running the risk of hitting runtime error 35 because CodeModule.ProcStartLine has to be told what kind of procedure it's looking for. Everything blows up when you call it with vbext_pk_proc, but you're really looking for a class property.
To clarify, this is a dangerous call.
CodeModule.ProcStartLine(procedureName, vbext_pk_proc)
My solution was to parse the code module line by line checking for some keywords so I could determine what vbext_ProcKind to pass it. I have one huge issue with how I've fixed it: vbeCodeModule now knows more about what it means to be a procedure than I like. I feel like I'm breaking encapsulation. In my original version, vbeCodeModule knew just enough to create a list of procedures. That's it. VbeProcedures were responsible for reporting information about themselves. My concerns are deepened by the difficulty I'm having in testing this code. If it was part of VbeProcedure, I could expose it publicly and testing would be a breeze. As part of VbeCodeModule, I don't really want it to be public.
I already posted this code as an answer, but I would like to have it reviewed. I'm posting the entirety of both classes here, but I'm particularly interested in IsSignature, GetProcedureType, and GetProcedures in the vbeCodeModule class.
The full project is over at GitHub.
vbeCodeModule
' requires Microsoft Visual Basic for Applications Extensibility 5.3 library
Option Explicit
Private mCodeModule As CodeModule
Private mVbeProcedures As vbeProcedures
Public Property Get CodeModule() As CodeModule
Set CodeModule = mCodeModule
End Property
Public Property Let CodeModule(ByRef codeMod As CodeModule)
Me.Initialize codeMod
End Property
Public Property Get vbeProcedures()
Set vbeProcedures = mVbeProcedures
End Property
Public Sub Insert(ComponentType As vbext_ComponentType)
'Dim project As VBProject
'Set project = VBIDE.VBE
'project.VBComponents.Add ComponentType
End Sub
Public Function Create(codeMod As CodeModule) As vbeCodeModule
' allows calls from other projects without breaking the existing API
Set Create = New vbeCodeModule
Create.Initialize codeMod
End Function
Public Sub Initialize(codeMod As CodeModule)
Set mCodeModule = codeMod
Set mVbeProcedures = GetProcedures(mCodeModule)
End Sub
Private Sub Class_Terminate()
Set mVbeProcedures = Nothing
Set mCodeModule = Nothing
End Sub
Private Function GetProcedures(codeMod As CodeModule) As vbeProcedures
Dim procName As String
Dim procs As New vbeProcedures
Dim proc As vbeProcedure
Dim line As String
Dim procKind As vbext_ProcKind
Dim lineNumber As Long
For lineNumber = 1 To codeMod.CountOfLines
line = codeMod.Lines(lineNumber, 1)
If IsSignature(line) Then
procKind = GetProcedureType(line)
procName = codeMod.ProcOfLine(lineNumber, procKind)
Set proc = New vbeProcedure
proc.Initialize procName, codeMod, procKind
procs.Add proc ' without this the returned collection is always empty (assumes vbeProcedures exposes an Add member)
End If
Next lineNumber
Set GetProcedures = procs
End Function
Private Function GetProcedureType(signatureLine As String) As vbext_ProcKind
If InStr(1, signatureLine, "Property Get") > 0 Then
GetProcedureType = vbext_pk_Get
ElseIf InStr(1, signatureLine, "Property Let") > 0 Then
GetProcedureType = vbext_pk_Let
ElseIf InStr(1, signatureLine, "Property Set") > 0 Then
GetProcedureType = vbext_pk_Set
ElseIf InStr(1, signatureLine, "Sub") > 0 Or InStr(1, signatureLine, "Function") > 0 Then
GetProcedureType = vbext_pk_Proc
Else
Const InvalidProcedureCallOrArgument As Long = 5
Err.Raise InvalidProcedureCallOrArgument
End If
End Function
Private Function IsSignature(line As String) As Boolean
If line = vbNullString Then Exit Function
If IsDeclaration(line) Then Exit Function ' IsDeclaration requires its argument; calling it bare does not compile
' pattern:
' any number of characters;
' Doesn't start with a comment;
' any number of characters;
' space;
' word;
' space;
' any number of characters
If line Like "[!']* Property *" Then
IsSignature = True
ElseIf line Like "[!']* Function *" Then
IsSignature = True
ElseIf line Like "[!']* Sub *" Then
IsSignature = True
End If
End Function
Private Function IsDeclaration(line As String) As Boolean
IsDeclaration = InStr(1, line, "Const") > 0 Or InStr(1, line, "Dim") > 0
End Function
vbeProcedure
' requires Microsoft Visual Basic for Applications Extensibility 5.3 library
Option Explicit
' error handling values
Private Const BaseErrorNum As Long = 3500
Public Enum vbeProcedureError
vbeObjectNotIntializedError = vbObjectError + BaseErrorNum
vbeReadOnlyPropertyError
vbeInvalidArgError
End Enum
Public Enum MemberType
mt_PropertyGetter
mt_PropertyLetter
mt_PropertySetter
mt_Function
mt_Sub
End Enum
Public Enum MemberAccessibility
ma_Public
ma_Private
ma_Friend
End Enum
Private Const ObjectNotIntializedMsg = "Object Not Initialized"
Private Const ReadOnlyPropertyMsg = "Property is Read-Only after initialization"
' exposed property variables
Private Type TVbeProcedure
ParentModule As CodeModule
Name As String
procKind As vbext_ProcKind
End Type
Private this As TVbeProcedure
' truly private property variables
Private isNameSet As Boolean
Private isParentModSet As Boolean
Public Property Get Name() As String
If isNameSet Then
Name = this.Name
Else
RaiseObjectNotIntializedError
End If
End Property
Public Property Let Name(ByVal vNewValue As String)
If Not isNameSet Then
If vNewValue = vbNullString Then
RaiseInvalidArgError "Name", "The Name property can not be set to an empty string."
End If
this.Name = vNewValue
isNameSet = True
Else
RaiseReadOnlyPropertyError
End If
End Property
Public Property Get ParentModule() As CodeModule
If isParentModSet Then
Set ParentModule = this.ParentModule
Else
RaiseObjectNotIntializedError
End If
End Property
Public Property Let ParentModule(ByRef vNewValue As CodeModule)
' Object assignments should use Set, but that forces Initialize() through the Getter, raising ObjectNotInitialized
If Not isParentModSet Then
Set this.ParentModule = vNewValue
isParentModSet = True
Else
RaiseReadOnlyPropertyError
End If
End Property
Public Property Get procKind() As vbext_ProcKind
procKind = this.procKind
End Property
Public Property Get StartLine() As Long
ValidateIsInitialized
StartLine = Me.ParentModule.ProcStartLine(Me.Name, this.procKind)
End Property
Public Property Get EndLine() As Long
ValidateIsInitialized
EndLine = Me.StartLine + Me.CountOfLines
End Property
Public Property Get CountOfLines() As Long
ValidateIsInitialized
CountOfLines = Me.ParentModule.ProcCountLines(Me.Name, this.procKind)
End Property
Public Sub Initialize(Name As String, codeMod As CodeModule, procKind As vbext_ProcKind)
Me.Name = Name
Me.ParentModule = codeMod
this.procKind = procKind
End Sub
Public Property Get Lines() As String
ValidateIsInitialized
Lines = Me.ParentModule.Lines(Me.StartLine, Me.CountOfLines)
End Property
Public Property Get Signature() As String
' @Mat's Mug [https://codereview.stackexchange.com/users/23788/mats-mug] wrote this.
Dim code() As String
code = Split(Me.ParentModule.Lines(Me.StartLine, Me.CountOfLines), vbNewLine)
Dim i As Long
For i = 0 To UBound(code)
If code(i) <> vbNullString And Left(Trim(code(i)), 1) <> "'" Then
Signature = code(i)
Exit Property
End If
Next
End Property
'TODO: Property Body
'Public Property Get Body() As String
'End Property
Public Property Get ModuleMemberType() As MemberType
' @Mat's Mug [https://codereview.stackexchange.com/users/23788/mats-mug] wrote this.
Dim result As MemberType
Dim code() As String
code = Split(Trim(Signature), " ")
Dim modifier As String
modifier = code(0)
Dim mType As String, mPropType As String
If modifier = "Property" Or modifier = "Function" Or modifier = "Sub" Then
mType = modifier
mPropType = code(1)
Else
mType = code(1)
mPropType = code(2)
End If
Select Case mType
Case "Property"
If mPropType = "Get" Then
ModuleMemberType = mt_PropertyGetter
ElseIf mPropType = "Let" Then
ModuleMemberType = mt_PropertyLetter
ElseIf mPropType = "Set" Then
ModuleMemberType = mt_PropertySetter
Else
Const InvalidProcedureCallOrArgument As Long = 5
Err.Raise InvalidProcedureCallOrArgument
End If
Case "Function"
ModuleMemberType = mt_Function
Case "Sub"
ModuleMemberType = mt_Sub
End Select
End Property
Property Get Accessibility() As MemberAccessibility
' @Mat's Mug [https://codereview.stackexchange.com/users/23788/mats-mug] wrote this.
Dim code() As String
code() = Split(Trim(Signature), " ")
Dim modifier As String
modifier = code(0)
If modifier = "Property" Or modifier = "Function" Or modifier = "Sub" Then modifier = "Public"
Select Case modifier
Case "Public"
Accessibility = ma_Public
Case "Private"
Accessibility = ma_Private
Case "Friend"
Accessibility = ma_Friend
Case Else
Err.Raise 5
End Select
End Property
' TODO: Property ReturnType; get the properties Type
' TODO: "Create" or "Append" sub
' TODO: "Insert" Sub
' TODO: Sort function
Private Sub RaiseObjectNotIntializedError()
Err.Raise vbeProcedureError.vbeObjectNotIntializedError, GetErrorSource, ObjectNotIntializedMsg
End Sub
Private Sub RaiseReadOnlyPropertyError()
Err.Raise vbeProcedureError.vbeReadOnlyPropertyError, GetErrorSource, ReadOnlyPropertyMsg
End Sub
Private Sub RaiseInvalidArgError(propertyName As String, Optional additonalInfo As String = vbNullString)
Dim message As String
message = "Invalid Argument" & vbCrLf & "Property: " & propertyName
If additonalInfo = vbNullString Then
Err.Raise vbeProcedureError.vbeInvalidArgError, GetErrorSource, message
Else
Err.Raise vbeProcedureError.vbeInvalidArgError, GetErrorSource, message & vbCrLf & additonalInfo
End If
End Sub
Private Sub ValidateIsInitialized()
If Me.ParentModule Is Nothing Then
RaiseObjectNotIntializedError
End If
End Sub
Private Function GetErrorSource() As String
GetErrorSource = TypeName(Me)
End Function
Answer:
My concerns are deepened by the difficulty I'm having in testing this code. If it was part of VbeProcedure, I could expose it publicly and testing would be a breeze. As part of VbeCodeModule, I don't really want it to be public.
VbeCodeModule needs a way of "detecting" its members. You can test whether it can find all expected members, and walk away happy. If you want to test whether a function is capable of identifying a member's procedure kind, then indeed, you need to extract it out of VbeCodeModule and make it Public. It all comes down to what you consider being a testable unit.
The only code that needs to determine a procedure's kind, is the code that scans a code module and creates VbeProcedure objects. Hence, I'd keep it a private member, and test that a module indeed has all the members you're expecting.
Here's how I solved it, based on a recommendation you made!
Start looping at the first line following the declarations section.
Find the procedure kind for the current line - that's either one of the VBE enum values, or -1 if the line isn't a signature.
If you've got a procedure kind, you've found a signature; if not, you can skip to the next line.
Using the procedure kind, you can now safely get the procedure's name, signature and body - create your VbeProcedure instance here.
You can skip to the end of the procedure right away, using ProcStartLine and ProcCountOfLines.
Public Property Get Members() As Collection
Dim result As New Collection
Dim module As CodeModule
Set module = this.encapsulated.CodeModule
Dim procedureName As String
Dim procedureBody As String
Dim currentLine As Long
currentLine = module.CountOfDeclarationLines + 1
While currentLine < module.CountOfLines
Dim procedureKind As vbext_ProcKind
procedureKind = GetProcedureKind(module, currentLine)
If procedureKind <> -1 Then
procedureName = module.ProcOfLine(currentLine, procedureKind)
Dim procedureLines As Long
procedureLines = module.ProcCountLines(procedureName, procedureKind)
procedureBody = module.lines(module.ProcStartLine(procedureName, procedureKind), procedureLines)
result.Add Member.Create(procedureName, procedureBody)
currentLine = module.ProcStartLine(procedureName, procedureKind) + procedureLines
Else
currentLine = currentLine + 1
End If
Wend
Set Members = result
End Property
Where Member is, in my project, analogous to your VbeProcedure class.
GetProcedureKind is pretty straightforward:
Private Function GetProcedureKind(ByVal module As CodeModule, ByVal line As Long) As vbext_ProcKind
Dim result As vbext_ProcKind
If Framework.Strings.StartsWithAny(module.lines(line, 1), False, "End", "'") Then
GetProcedureKind = -1 ' must assign the return value; Exit Function would otherwise discard "result"
Exit Function
End If
If Framework.Strings.ContainsAny(module.lines(line, 1), False, " Sub ", " Function ") _
Or Framework.Strings.StartsWithAny(module.lines(line, 1), False, "Sub ", "Function ") Then
result = vbext_pk_Proc
ElseIf Framework.Strings.Contains(module.lines(line, 1), "Property Get ") Then
result = vbext_pk_Get
ElseIf Framework.Strings.Contains(module.lines(line, 1), "Property Let ") Then
result = vbext_pk_Let
ElseIf Framework.Strings.Contains(module.lines(line, 1), "Property Set ") Then
result = vbext_pk_Set
Else
result = -1
End If
GetProcedureKind = result
End Function
The StartsWithAny case covers signatures that use the default accessibility modifier, and the leading & trailing whitespace in the ContainsAny case covers members that would have "Sub" or "Function" as part of their identifier, so all of these correctly get picked up:
Sub Foo()
Function Foo()
Private Sub FooFunction()
Private Function FooProperty()
And this correctly gets ignored:
'Public Function Bar()
' is this function ignored?
'End Function
Note that IsSignature becomes moot with this approach, since the GetProcedureKind function will only return a value other than -1 when the current line is the signature line.
vbeCodeModule now knows more about what it means to be a procedure than I like. I feel like I'm breaking encapsulation.
I think it's the other way around: VbeProcedure knows too much about a code module!
The ParentModule As CodeModule member merely enables StartLine, EndLine, CountOfLines and Lines members to come into existence... but all these can also be obtained at the parent module's level, by passing the procedure's name and kind.
I think if VbeProcedure had a Body property, you wouldn't need to expose Lines, and you could remove the dependency on CodeModule, and remove the need to "kill" procedures, since VbeProcedure would simply become a container for some strings and enum values that VbeCodeModule assigns when it parses a module.
Doing this also keeps the CodeModule parsing logic within VbeCodeModule, and limits the parsing capabilities of VbeProcedure to parsing its own body or signature, say, to expose an Accessibility property. | {
"domain": "codereview.stackexchange",
"id": 10144,
"tags": "object-oriented, strings, parsing, vba, meta-programming"
} |
Is randomness deterministic? | Question: Is randomness based on a lack of knowledge, or is the behavior of the universe truly random?
Or in other words,
are the allegations by EPR about hidden variables in QM justifiable? What evidence could prove or disprove EPR?
Answer: This is a very general question, and can be answered from several perspectives. I shall try to give an overview so you can perhaps research the areas that interest you a bit more.
Firstly, the most fundamental interpretation of probability (as considered by most mathematicians) is Bayesian probability. This effectively states that probability measures state of knowledge of an observer.
This view has interesting ties with physics, in particular quantum mechanics. One could consider the random outcome of a QM measurement (wavefunction collapse) from a frequentist approach, but it is often more appealing philosophically to consider it as a state of knowledge. (The famous thought experiment of Schrodinger's cat is a good example - until one opens the box, we can only say it is an "alive-dead" cat!)
Interestingly, Bayesian probability does not explicitly preclude determinism (or non-determinism). Our current understanding of quantum mechanics does however. In other words, even knowing perfectly the state of a system at a given time, we cannot predict the state of the system at a future time. This most famously upset Albert Einstein, who spent many years of his life looking for a more fundamental deterministic theory - a so-called hidden-variables theory. Since then, however, we have learnt of Bell's theorem, which implies the non-existence of local hidden variables, suggesting that there is no more fundamental theory that "explains away" the non-determinism of QM. This is however a very contentious issue, and in any case does not rule out the existence of non-local hidden variable theories - the most famous of which is Bohm's interpretation.
In summary, this issue is far from settled, and creates a lot of contention between different groups of physicists as well as philosophers today. | {
"domain": "physics.stackexchange",
"id": 679,
"tags": "determinism"
} |
How does a physical knock initiate freezing of supercooled water? | Question: I just stumbled across this YouTube video:
http://youtube.com/watch?v=iihz16t6MHs
What's the mechanism behind it? With a knock, I added some energy. So what?
With a knock, I also increased pressure, but water behaves opposite to most substances (whose freezing point is raised by pressure), meaning by that logic it would stay in the liquid state.
Answer: The water is in a supercooled state. That means its temperature is well below freezing (it was put in a freezer for a couple of hours). However, it can stay liquid at that temperature, unless it has impurities that help the formation of ice crystals. Note that the guy used "purified water". Try distilled water that has been boiled to remove any oxygen and other gasses dissolved in it. Very pure water can stay liquid down to $-48\,^{\circ}{\rm C}$. Freezing of water in this state can be caused by even a small shake. Once a single ice crystal forms, suddenly the water has sites to crystallize further, and freezing is almost instantaneous.
In rare circumstances, supercooled water can occur in rain. When that rain hits the ground it instantly freezes, causing the very dangerous "black ice". | {
"domain": "physics.stackexchange",
"id": 27199,
"tags": "states-of-matter"
} |
Using Factory pattern for parsing system | Question: I am currently trying to write a Parser system for my project. There are different types of files to be parsed (file1, file2, file3):
file1 -> AData // stored in AData class using AParser class's parsing logic
file2 -> BData // stored in BData class using BParser class's parsing logic
file3 -> CData // stored in CData class using BParser class's parsing logic
Files could be binary or text. Different files require different parsing logic because of the way the files are written.
I have used a factory pattern for this purpose. The Base class is Parser which is an abstract class.
#include <fstream>
#include <iostream>
#include <memory>
#include <string>
// Base class
template <class T>
class Parser {
public:
virtual void DoParsing(T&, std::ifstream& fs) = 0;
};
// Base class for data
struct Data {};
//////
struct AData : Data {
int data;
};
class AParser final : public Parser<AData> {
public:
void DoParsing(AData& data, std::ifstream& fs) {
// implementation goes here
}
};
///
struct BData : Data {
char* data;
};
class BParser final : public Parser<BData> {
public:
void DoParsing(BData& data, std::ifstream& fs) {
// implementation goes here
}
};
template <class T>
class IParsing {
public:
void action(std::shared_ptr<Parser<T>> p, T d, std::ifstream& fs) {
p->DoParsing(d, fs);
}
};
class FParsing {
public:
FParsing() {}
void HandleParsing(std::string type, Data& d, std::ifstream& fs) {
if (type == "AParsing") {
std::shared_ptr<IParsing<AData>> iparse =
std::make_shared<IParsing<AData>>();
iparse->action(std::make_shared<AParser>(), static_cast<AData&>(d),
fs);
} else if (type == "BParsing") {
// iparse->action(std::make_shared<BParser>(), fs);
std::shared_ptr<IParsing<BData>> iparse =
std::make_shared<IParsing<BData>>();
iparse->action(std::make_shared<BParser>(), static_cast<BData&>(d),
fs);
} else {
std::cout << "Need shape\n";
}
}
private:
};
int main() {
std::ifstream iFile("data.txt");
FParsing fparse;
//AData is passed by ref because
// it will be populated during parsing process
AData ad;
fparse.HandleParsing("AParsing", ad, iFile);
// BData
BData ab;
fparse.HandleParsing("BParsing", ab, iFile);
}
My questions are:
Do you think this is the right approach for creating a parsing system?
Is the factory pattern implementation correct?
I just wanted to make sure I am not making things more complex than they need to be.
Are there other design patterns better than the factory pattern for this purpose?
Answer: Design issues
You are unnecessarily restricting the Parser interface by forcing fs to be a std::ifstream& when std::istream& would work as well.
AParser::DoParsing(AData&, std::ifstream&) should be marked override. Same for BParser::DoParsing(BData&, std::ifstream&).
I can't see any specific purpose for IParsing: Why not call AParser::DoParsing or BParser::DoParsing directly?
Besides, parameter d of IParsing::action(std::shared_ptr<Parser<T>> p, T d, std::ifstream& fs) should be of type T&.
FParsing::HandleParsing(std::string, Data&, std::ifstream&) doesn't use any class instance members. Maybe make it static?
Also, parameter type from that function should probably be an enum - no need to use strings for that.
Why allocate two std::shared_ptr in case a valid parser is found if they won't be shared and will be deallocated once the function returns?
What advantage do you hope to get from this extravagant implementation over overloading operator>>(std::istream&, AData&) and operator>>(std::istream&, BData&) instead? | {
"domain": "codereview.stackexchange",
"id": 27852,
"tags": "c++, c++11, design-patterns"
} |
Complexity of solving two different LP problems | Question: I have one LP problem (LP1) to solve, where a term in a constraint is to be substituted after solving another LP problem (LP2) (with a different variable vector). Suppose I call the dimension of the variable vector in LP1 as $n$ and in LP2 as $m$. I know that the complexity of solving an LP problem is $O(n^3L)$ ($L$ is the input length).
My doubts are: 1. Is it correct that the total complexity of solving the two LP problems together is $O((n^3+m^3)L)$? (I am worried that, since I am substituting the optimal value of one LP into the other and the order must be maintained, the complexities might need to be multiplied.)
2. $L$ here is the input size, so should it differ for the two problems? Or should I take it as the maximum of the input sizes of LP1 and LP2?
e.g. suppose the problem is
$$\min c^Tx \,\text{s.t.} \, a_1^Tx\leq b_1, a_i^Tx\leq b_i,i=2,\ldots n.$$
Here $a_1$ is the optimal solution of another LP problem, so we solve that LP to get $a_1$ and substitute it into this LP.
Answer: We measure the running time of an algorithm $A$ as a function of the input to that algorithm, not as a measure of the length of inputs to any subroutines or subalgorithms that $A$ might invoke.
Let $L_1$ be the size of the input to LP1, $L_2$ be the size of the input to LP2 before substitution, and $L_3$ be the size of the input to LP2 after substitution.
By your assumptions, the time to solve LP1 is $O(n^3 L_1)$. If this is true, it follows that the length (number of bits) needed to express the solution can be at most $O(n^3 L_1)$. Therefore, $L_3 = L_2 + O(n^3 L_1)$, so the time to solve LP2 is $O(m^3 L_3) = O(m^3 L_2 + n^3 m^3 L_1)$, and the total time to solve both is $O(n^3 L_1 + m^3 L_2 + n^3 m^3 L_1)$. The length of the input to the overall algorithm is $L_1 + L_2$, so as a function of $L_1 + L_2$, the best bound we can give on the total running time is $O(n^3 m^3 (L_1+L_2))$.
This assumes you are measuring running time in the bit-complexity model or the word RAM model. However, I am not sure whether your $O(n^3 L)$ bound is actually in those models, or if it is an arithmetic model where it is assumed it is possible to do an operation on unlimited-length integers in $O(1)$ time. If the latter, then the reasoning above is not valid. | {
"domain": "cs.stackexchange",
"id": 20356,
"tags": "complexity-theory, time-complexity"
} |
How do I write the equation of motion for a pulley on a fixed rope? | Question: I've got complex dynamics problem I'm trying to solve, but the crux of my problem at the moment is describing the path a pulley would take on a rope of fixed length.
Assume the rope is fixed at points $A$ and $B$ and that the rope length is longer than the distance from $A$ to $B$.
I know how long the rope is, the pulley radius, where the fixed points are, and I know the angle the pulley makes to the center of the rope field.
My initial approach is to start with a diagram like the following:
This is a lot to take in, so I'll go over each part in particular to approach the solution I'm currently using, which utilizes a numeric solver to find the answer. I'm attempting to write this into a dynamics model, though, with the ultimate goal of building a state feedback controller around the system, so what I really need is an equation of motion.
First, the pulley location:
This is straightforward enough. I don't know what the distance from the pulley center is to the center of the rope field ($L$), but I know the angle the pulley makes to the center of the rope field, $\theta$. I then know that the horizontal and vertical positions should be:
$$
L_x = L\sin{\theta} \tag{1}\\
$$
$$
L_y = -L\cos{\theta} \tag{2}\\
$$
Now, moving on to the length from the $A$ fixed point to the tangent of the pulley:
The tangent side of the triangle formed, $T_a$, forms a right triangle with the pulley radius $r$. The hypotenuse, $L_a$, is the distance from the pulley center to the fixed point $A$. Finally, the angle of the triangle that occupies a portion of the pulley is $\alpha_{\mbox{minor}}$, so-named because it's the a-side angle of the smaller triangle.
I know that $L_a$ is the distance between the pulley center and $A$:
$$
L_a = \sqrt{(L_x-A_x)^2 + (L_y-A_y)^2} \tag{3} \\
$$
Since I know the pulley radius $r$, I can calculate:
$$
\alpha_{\mbox{minor}} = \mbox{acos}{(r/L_a)} \tag{4} \\
$$
And then the tangent length to be:
$$
T_a = L_a \sin{\alpha_{\mbox{minor}}} \tag{5} \\
$$
The process is the same for the b-side:
Following the same steps:
$$
L_b = \sqrt{(L_x-B_x)^2 + (L_y-B_y)^2} \tag{6} \\
$$
$$
\beta_{\mbox{minor}} = \mbox{acos}{(r/L_b)} \tag{7} \\
$$
$$
T_b = L_b \sin{\beta_{\mbox{minor}}} \tag{8} \\
$$
Then I need to find the wrap angle, but in order to do so I first find the "major" angles, to find out what portion of the interior angle of the pulley isn't being used for rope wrap:
This is straightforward, like calculating the minor angles, but here the "short" leg of the right triangle is the distance $-L_y$ (taking the negative so that the net length is positive). The hypotenuse for the a-side and b-side is still $L_a$ and $L_b$, respectively, so:
$$
\alpha_{\mbox{major}} = \mbox{acos}{(-L_y/L_a)} \tag{9} \\
$$
$$
\beta_{\mbox{major}} = \mbox{acos}{(-L_y/L_b)} \tag{10} \\
$$
Finally, looking at the wrap angle:
Now it's clear that the wrap angle is what's left over after taking away the a- and b-side major and minor angles. The rope wrapped around the pulley is then just the arc lengths through that angle:
$$
\mbox{wrapAngle} = 2\pi - \left(\alpha_{\mbox{major}} + \alpha_{\mbox{minor}} + \beta_{\mbox{major}} + \beta_{\mbox{minor}}\right) \tag{11} \\
$$
$$
\mbox{wrap} = r\left(\mbox{wrapAngle}\right) \tag{12} \\
$$
Finally, I solve for the pulley center distance $L$ in the equation:
$$
\mbox{ropeLength} = T_a + \mbox{wrap} + T_b \tag{13} \\
$$
When I try to solve for $L$ with Matlab, I get:
Warning: Unable to find explicit solution. For options, see help.
> In solve (line 317)
ans =
Empty sym: 0-by-1
My real hope here was that I could start with the basic pendulum equation:
$$
\frac{d^2\theta}{dt^2} + \frac{g}{L}\sin{\theta} = 0 \tag{14}\\
$$
And have the "pendulum length" $L$ as a function of $\theta$, such that I wind up with a more complex pendulum equation, and then can substitute that in my control equations, but I can't get past the non-solution presented here.
Am I expressing the problem poorly? Is there a way to formulate these equations such that there is an analytic solution?
Answer: Here I'm exploring the 'simplification' of looking at an ellipse only:
A point mass is moving without friction inside an ellipse with focal points $A$ and $B$:
Due to inertia the mass will oscillate. We want to find the Equation of Motion (EoM).
Assume the particle starts at height $y_0$ with $v=0$; then the energy equation is:
$$mg(y-y_0)+\frac12 mv^2=0$$
Or:
$$g(y-y_0)+\frac12(v_x^2+v_y^2)=0$$
$$g(y-y_0)+\frac12(\dot{x}^2+\dot{y}^2)=0$$
To get the EoM, derive to time, so:
$$g\dot{y}+\ddot{x}\dot{x}+\ddot{y}\dot{y}=0\tag{1}$$
In the case where $|AP|+|BP|\gg|AB|$ the ellipse then approximates a circle and a simple EoM is then obtained (I think). But we're stuck with an ellipse:
$$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$
where $a$ is the semi-major axis and $b$ the semi-minor axis. Also:
$$x=\pm\frac{a}{b}\sqrt{b^2-y^2}$$
$$\dot{x}=\pm\frac{a}{b}(b^2-y^2)^{-1/2}y\dot{y}$$
Now we could certainly still obtain $\ddot{x}$ and then inserting $\dot{x}$ and $\ddot{x}$ into $(1)$ would yield an expression in $y$, $\dot{y}$ and $\ddot{y}$. But it's very messy.
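For completeness, carrying the differentiation through (a step added here, not in the original answer) gives:

$$\ddot{x}=\pm\frac{a}{b}\left[\frac{y\ddot{y}+\dot{y}^2}{\sqrt{b^2-y^2}}+\frac{y^2\dot{y}^2}{(b^2-y^2)^{3/2}}\right]$$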
And there's no way it could be made explicit in $\ddot{y}$ like in:
$$\ddot{y}=F(y,\dot{y})$$
as is possible for the SHO. | {
"domain": "physics.stackexchange",
"id": 58248,
"tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram, string"
} |
Does a ceiling fan block air flow from a window or a door? | Question: A ceiling fan creates a vertical circular airflow. Does this airflow along a window or a door in the room reduce the airflow through them? I would guess air trying to flow into the room would face more resistance as the ceiling fan's speed increases.
As an example imagine there's a window along the right-hand side I contour:
Will having the ceiling fan spinning, reduce the rate of exchange of air between the room and the outside world or the rest of the house?
Answer: The question reduces to: "Can a parallel airstream to an open window in a room prevent wind from outside the room from entering the room?"
In wind, the air molecules have a specified direction (on the average). The average kinetic energy of this extra directed velocity (the wind velocity) is pretty insignificant (assuming typical air conditions) compared to the average kinetic energy of the air molecules when there's no wind:
One can look up:
For typical air at room conditions, the average molecule is moving at about 500 m/s (close to 1000 miles per hour)
Just for information, because these (relative to wind velocities) high velocities (due to the heat content of the air), have nothing to do with this problem.
Let's assume typical air conditions. If there is no airstream parallel to the window (the fan is off) and a wind is blowing towards the open window, then the wind can blow freely into the room, making you feel a wind blowing in the room.
It's the averaged wind velocity that's involved.
Now if we apply an airstream parallel to the window (fan switched on), you are creating a "wind" with a momentum parallel to the window. The difference from the case where this wind is not there (fan switched off) is that there is a net momentum, in contrast to non-moving air (which contains kinetic energy due to its heat content). This wind parallel to the window can deflect a bigger part of the incoming wind momentum.
But why? Well, if the momentum of the wind is directed toward the window, the wind (fan switched on) parallel to the window gives the incoming wind a change in momentum upward, and the wind deflects from the window (of course, the upward momentum of the fan wind can't change the incoming wind's momentum completely). Because the layer of wind parallel to the window has a certain thickness, a part of the air molecules will not reach the room, which they would have if the fan were switched off.
Imagine a very strong fan. The wind caused by this fan can surely prevent (as you guessed) a part of the incoming wind from reaching your room. | {
"domain": "physics.stackexchange",
"id": 70020,
"tags": "fluid-dynamics, flow"
} |
Can the Na+/K+ pump run backwards to generate ATP? | Question: The standard physiological direction of the Na+/K+ pump is to export 3 Na+, import 2 K+, and hydrolyze one ATP to ADP. Can it be driven backwards, importing 3 Na+, exporting 2 K+, and generating ATP? Does this happen in normal cells, and under what conditions?
Answer: All enzymes can theoretically catalyze the reverse reaction.
Researchers have driven the Na+/K+ ATPase to synthesize ATP with artificial ion concentrations:
We have studied the apparent affinity for K at its intracellular discharge sites by measuring the rate of ATP synthesis as a function of the internal K concentration in resealed red blood cell ghosts, where the Na-K pump is driven in reverse by the downhill efflux of K and influx of Na… | {
"domain": "biology.stackexchange",
"id": 7929,
"tags": "biochemistry, metabolism, cell-membrane, bioenergetics, membrane-transport"
} |
Generating an ikfast solution for 4 DOF arm | Question:
Hi,
I'm trying to make a 4DOF ikfast plugin.
ROS version: Kinetic
Ubuntu: Xenial
I'm following these instructions:
http://wiki.ros.org/Industrial/Tutorials/Create_a_Fast_IK_Solution
The first problem I ran into is that I can't install the OpenRAVE package:
sudo apt update
Ign:3 http://ppa.launchpad.net/openrave/release/ubuntu xenial InRelease
Err:4 http://ppa.launchpad.net/openrave/release/ubuntu xenial Release
404 Not Found
and of course:
parallels@ubuntu:~/matilda-ikfast$ sudo apt install openrave0.8-dp-ikfast
[sudo] password for parallels:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package openrave0.8-dp-ikfast
E: Couldn't find any package by glob 'openrave0.8-dp-ikfast'
E: Couldn't find any package by regex 'openrave0.8-dp-ikfast'
So I installed from source, following these instructions
http://fsuarez6.github.io/blog/workstation-setup-xenial/
This resulted in OpenRAVE 0.9 (the 0.8.0 and 0.8.2 tags on the GitHub account gave compilation errors).
Then I successfully made a Collada file (openrave matilda.dae shows my robot)
parallels@ubuntu:~/matilda-ikfast$ openrave-robot.py matilda.dae --info links
name index parents
-------------------------
base 0
base_link 1 base
link_1 2 base_link
link_2 3 link_1
link_3 4 link_2
link_4 5 link_3
tool0 6 link_4
-------------------------
name index parents
When I try to generate the cpp file, I get a sympy.matrices.matrices.ShapeError:
parallels@ubuntu:~/matilda-ikfast$ python ../openrave/python/ikfast.py --robot=matilda.dae --iktype=translationyaxisangle4d --baselink=0 --eelink=6 --savefile=output_ikfast61.cpp
...
...
INFO: trying to guess variable from [j0]
INFO: have only one variable left j0 and most likely it is not in equations [-pz*sin(j1) + cos(j1)/20 + 11029*cos(j2)/20000, pz*cos(j1) + sin(j1)/20 + 11029*sin(j2)/20000 + 123/200, sin(j1)*sin(j2)*cos(j3) - sin(j1)*sin(j3)*cos(j2) + sin(j2)*sin(j3)*cos(j1) + cos(j1)*cos(j2)*cos(j3) - cos(r00), -pz**2 - 123*pz*cos(j1)/100 - 123*sin(j1)/2000 - 30651159/400000000, -pz*sin(j1)*sin(j2) - pz*cos(j1)*cos(j2) - sin(j1)*cos(j2)/20 + sin(j2)*cos(j1)/20 - 123*cos(j2)/200, -pz*sin(j1)*cos(j2) + pz*sin(j2)*cos(j1) + sin(j1)*sin(j2)/20 + 123*sin(j2)/200 + cos(j1)*cos(j2)/20 + 11029/20000]
INFO: depth=0, c=2, iter=0/1, adding newcases: set([Abs(px) + Abs(py)])
Traceback (most recent call last):
File "../openrave/python/ikfast.py", line 9548, in <module>
chaintree = solver.generateIkSolver(options.baselink,options.eelink,options.freeindices,solvefn=solvefn)
File "../openrave/python/ikfast.py", line 2297, in generateIkSolver
chaintree.leftmultiply(Tleft=self.multiplyMatrix(LinksLeft), Tleftinv=self.multiplyMatrix(LinksLeftInv[::-1]))
File "../openrave/python/ikfast.py", line 1122, in leftmultiply
self.Pfk = Tleft[0:2,0:2]*self.Pfk+Tleft[0:2,3]
File "/home/parallels/.local/lib/python2.7/site-packages/sympy/matrices/matrices.py", line 404, in __mul__
return matrix_multiply(self,a)
File "/home/parallels/.local/lib/python2.7/site-packages/sympy/matrices/matrices.py", line 2458, in matrix_multiply
raise ShapeError()
sympy.matrices.matrices.ShapeError
Any pointers on how to tackle this?
Thanks,
Originally posted by Bas on ROS Answers with karma: 5 on 2017-06-15
Post score: 0
Original comments
Comment by gvdhoorn on 2017-06-16:
After Hydro, I've never bothered installing OpenRAVE - using binaries or from sources. I've always just used docker/personalrobotics/ros-openrave.
Comment by gvdhoorn on 2017-06-16:
Also: see #q196753 for a question about 5dofs, but the same approach should work for 4dofs. I'm not sure that will solve / work around the issue that you are seeing, but it's worth a try.
Finally: probably not something you want to hear, but unfortunately we're only users of OpenRAVE/IKFast just as you. If something is really broken / not working as it should, rdiankov/openrave/issues would be the place to report it.
Comment by Bas on 2017-07-05:
I've tried the linked docker image, where I get other kinds of errors (matrices size mismatch etc, but not the error above). I followed your links re. 5DOF's too. This looks like it's going to be a non-trivial exercise with limited chance of success...
Comment by Bas on 2017-07-05:
...Going to look into manually coding the IK. Thanks for the links and looking into my problem
Comment by gvdhoorn on 2017-07-05:
This looks like it's going to be a non-trivial exercise [..]
It's actually about the same amount of work as the regular procedure to generate an IK plugin. The .dae you need to make anyway, and then only the small additional wrapper is needed. After that copy-paste the command line I already gave as an example in #q196753.
I'm probably a little biased, but it shouldn't be too much work actually.
Comment by Bas on 2017-07-05:
Ok, I get held back by the links to the openrave mailing list and my experience with my non-standard kinematics. I'll try the wrapper approach.
Comment by gvdhoorn on 2017-07-05:
Don't spend too much time on it: I can't guarantee that it'll actually work. I just wanted to clarify that it shouldn't be too different from the 'regular' approach wrt the amount of work.
Comment by gvdhoorn on 2017-07-06:
I just spent some time on this, and after fiddling a little with the personalrobotics Docker image and writing a very minimal OpenRAVE wrapper xml file I got it to generate a translationyaxisangle4d cpp. I haven't tested it yet, will do that later today.
Comment by gvdhoorn on 2017-07-06:
And a general comment: you linked to the "Create a kinematics solution using IK Fast" tutorial in your OP. You did notice that that tutorial has been deprecated, right? There is an admonition at the top of the page linking to the correct / updated version of the instructions.
Comment by Bas on 2017-07-06:
I must have missed the info that it's deprecated. Thanks for thinking along and helping out. I have recently fixed my macro file and added the collada file in this branch
Comment by Bas on 2017-07-06:
I'm hampered by some other work and I'm freeing up this evening to work on this. If you gain some insights I would be very interested.
Comment by gvdhoorn on 2017-07-06:
re: branch: I've just used master. But the process is relatively straightforward and repeatable, so you should be able to do it yourself as well with any urdf.
Answer:
This is what I did to get OpenRAVE to generate an IKFast plugin for your urdf.
I'll assume that you've already:
converted the top-level .xacro to a .urdf, and
have converted it to Collada (using rosrun collada_urdf urdf_to_collada ..)
I decided to use the personalrobotics/ros-openrave Docker image, as installing OpenRAVE from sources on recent Ubuntu / Debian versions is not trivial, and takes quite some time (as you probably discovered).
We need to tweak the image a bit though, so:
$ mkdir /tmp/prro_work && cd /tmp/prro_work
$ cat << EOF > Dockerfile
FROM personalrobotics/ros-openrave
RUN apt-get update || true && apt-get install -y --no-install-recommends build-essential python-pip liblapack-dev && apt-get clean && rm -rf /var/lib/apt/lists/*
RUN pip install sympy==0.7.1
EOF
# Replace `$DOCKER_IMAGE` with some descriptive name here
$ docker build -t $DOCKER_IMAGE .
This makes pip install an older version of sympy in /usr/local, which is then used by OpenRAVE instead of the sympy that is installed by apt. The IKFast generator is very specific about the version of sympy and this version seems to work - in any case it works with your .dae.
Now wait for Docker to build the new image.
Then we need a small wrapper 'robot' for OpenRAVE:
<robot file="$NAME_OF_YOUR_COLLADA_FILE">
<Manipulator name="NAME_OF_THE_ROBOT_IN_URDF">
<base>base_link</base>
<effector>tool0</effector>
</Manipulator>
</robot>
save this as wrapper.xml (you can obviously change the name if you want). OpenRAVE supports relative filenames for the file attribute of the robot element in our wrapper.xml, so it's easiest if you place wrapper.xml in the same directory that contains the .dae of your robot model.
And make sure to update base_link and tool0 to whatever you want as your base and effector links (these elements correspond to the --baselink and --eelink arguments that you pass to ikfast.py normally, but by name, not an index). In almost all cases these should correspond to whatever you have modelled as those links in your urdf.
Now generate the plugin:
$ cd /to/wherever/you/have/your/dae/and/your/wrapper
$ mkdir output
# the first volume here is the 'work dir', ie: where we have stored the wrapper and
# the Collada file.
#
# the second volume is where OpenRAVE stores the generated `.cpp`.
# there is no 'output file' argument to `openrave.py --database inversekinematics ..`,
# so this will have to do.
#
# make sure to replace $DOCKER_IMAGE with whatever you used earlier
$ docker run -it --rm -v `pwd`:/ikfast -v `pwd`/output:/root/.openrave $DOCKER_IMAGE openrave0.9.py --database inversekinematics --robot=/ikfast/wrapper.xml --iktype=translationyaxisangle4d --iktests=1000
Because we installed build-essential inside the image, we can ask OpenRAVE to run some IK tests for us. OpenRAVE will automatically compile the plugin after it has generated it and run the tests. Pay attention to the final output. On my machine it is something like:
openravepy.databases.inversekinematics: testik, success rate: 1.000000, wrong solutions: 0.000000, no solutions: 0.000000, missing solution: 0.036000
If everything went well, you should now have a ikfast71.TranslationYAxisAngle4D.0_1_2_3.cpp and a ikfast.h in the output/ dir that we created earlier. You should be able to continue with the tutorial now and finish creating the MoveIt IKFast plugin.
Documentation I used for this is basically what I referenced in #q196753.
Originally posted by gvdhoorn with karma: 86574 on 2017-07-06
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by gvdhoorn on 2017-07-06:
Note that I didn't bother with uid and gid with Docker, so the generated files will be owned by root.
Comment by gvdhoorn on 2017-07-06:
Note also that depending on your urdf it may be necessary to add a direction child tag to the Manipulator tag in the wrapper xml. Refer to Defining Manipulators in the OpenRAVE docs for more info.
Comment by gvdhoorn on 2017-07-06:
This same procedure should work for other robots as well, just make sure to select a proper iktype for those models (instead of translationyaxisangle4d).
Comment by Bas on 2017-07-06:
Thanks @gvdhoorn This generates the files as you described, and indeed I experienced installing openrave as non-trivial. I will continue with the correct tutorial. Thanks for all your patience! Next time we meet I'll buy you a nice cold beverage and a big Burger!
Comment by gvdhoorn on 2017-07-06:
I don't think it's changed too much, but the most recent version of the tutorial can be found here.
Comment by hamzamerzic on 2017-12-07:
To add to this - in case you don't go with the Docker image, make sure you don't have mpmath installed.
Comment by yijiangh on 2018-03-17:
Complementing gvdhoorn's answer, I've posted an example xml file here:
https://answers.ros.org/question/196753/generating-ikfast-plugin-for-5-dof-robot/?answer=285707#post-id-285707
Comment by hamzamerzic on 2018-03-17:
I have created a tool for online generation of ikfast solvers - meaning no need to do any manual setting up. You can find the tool here: https://www.hamzamerzic.info/ikfast_generator/
Comment by lv on 2019-10-28:
Hi, I'm also generating an ikfast plugin for 3 DOF arm, after running command "docker run -it --rm -v ...", I got the result as follows.
openravepy.databases.inversekinematics: RunFromParser, testing the success rate of robot /ikfast/wrapper.xml
2019-10-29 00:32:27,979 openrave [WARN] [plugindatabase.h:577 InterfaceBasePtr OpenRAVE::RaveDatabase::Create] Failed to create name fcl_, interface collisionchecker
2019-10-29 00:32:27,987 openrave [WARN] [kinbody.cpp:1504 KinBody::SetDOFValues] dof 0 value is not in limits 0.000000e+00<5.260000e-01
openravepy.databases.inversekinematics: testik, success rate: 1.000000, wrong solutions: 0.000000, no solutions: 0.000000, missing solution: 0.997000
what does missing solution mean, is it OK?
Comment by gvdhoorn on 2019-11-04:
Missing solution means that for some IK queries, the solver could not return a solution. In your specific case the solver could not solve the IK query in 0.3 percent of the requests. So for 1000 requests, 3 would go unanswered. | {
"domain": "robotics.stackexchange",
"id": 28122,
"tags": "ros, ros-kinetic, ubuntu, ikfast, openrave"
} |
Interesting answer as a range of tension in pulley-block-plane system | Question: I want some intuitive understanding on why there will be a range in tension in the below question. (On solving we will get that the system is at rest ($a=0$) and since its starts from rest the blocks will be stationary). Now, this seems experimentally feasible to have a unique absolute tension and I wonder why this is not the case (I initially thought friction might change but as the blocks are stationary it's not the case).
A system of two blocks and a light string are kept on two inclined faces (rough) as shown in the figure below. All the required data are mentioned in the diagram. Pulley is light and frictionless. (Take $g=10\,\text m/\text s^2$, $\sin37^\circ=3/5$) If the system is released from rest then what is the range of the tension in the string?
Note: Please assume the wedge to be at rest even though not mentioned.
Answer:
I initially thought friction might change but as the blocks are stationary it's not the case
It's not that friction is changing over time, it's that the specific value for friction (in the static case) is unknown, so the specific value for tension is also unknown. You know the maximum possible value for static friction on the blocks, but not the specific value.
Let's go to a more extreme case. Imagine that the coefficient of friction is so high that the blocks can remain in place on the ramp without a rope. When you set them down, what is the tension on the rope? The answer is that it depends. You could put them down with 0 tension and they would stay in place. You could put them down with very high tension and they would stay in place.
For the problem given, the low coefficient of friction limits the possible values for tension, but doesn't indicate a unique value. | {
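To make the indeterminacy concrete, here is a minimal numeric sketch in Python. The numbers (a 10 kg block, sin37 = 3/5, cos37 = 4/5, mu = 0.3, g = 10) are hypothetical stand-ins, since the original figure's data isn't reproduced here; equilibrium along the incline only requires |T - W_parallel| <= f_max, so any tension in that band is consistent:

```python
def tension_range(w_parallel, f_max):
    """Equilibrium along the incline needs |T - w_parallel| <= f_max."""
    low = max(0.0, w_parallel - f_max)   # a rope can't push, so clip at 0
    high = w_parallel + f_max
    return low, high

# hypothetical block: m = 10 kg, g = 10, sin37 = 3/5, cos37 = 4/5, mu = 0.3
w_par = 10 * 10 * 3 / 5          # 60 N of gravity component along the slope
f_max = 0.3 * (10 * 10 * 4 / 5)  # 24 N of maximum static friction
print(tension_range(w_par, f_max))  # any T between 36 N and 84 N keeps the block static
```

With a very high friction coefficient, f_max grows and the band widens, matching the extreme case described above.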
"domain": "physics.stackexchange",
"id": 81569,
"tags": "homework-and-exercises, newtonian-mechanics, forces, friction, relative-motion"
} |
What is spectrum permutation? | Question: This paper refers to the concept of "spectrum permutation" on page 4. I'm having a difficult time understanding what it means to permute a signal. This is mainly due to the fact that I don't understand what "invertible mod n" means.
What exactly happens to a signal when we permute it? What does it look like in the time domain and what does it look like in the frequency domain?
Answer: Reminder what a permutation is: The exchange of indices in a sequence.
That's exactly what they're doing here.
Invertible in this context (only read the paragraph so far) means
"the multiplication with $\sigma$ under mod $n$ is invertible, i.e. given the result of that multiplication and $\sigma$, you can calculate the original value in all cases."
Let's do an example:
let's say we have $x\in \mathbb Z \setminus 13$, and you multiply by $\sigma = 7$, getting $x\sigma = 4 \mod 13$. You know that $x=8$, indisputably! (because 7·8 = 56, and 56 mod 13 = 4, and it's only that easy because this is a prime field).
That works for any possible result $x\sigma\in \mathbb Z\setminus 13$, because $\sigma$ is coprime to 13; so, $\sigma$ is invertible for $(\cdot,\mathbb Z \setminus 13)$.
I picked that 13 because it's prime, and prime numbers are by definition coprime to any smaller integer.
Let's do a counter-example of that:
For FFTs, the length is usually not a prime number, but can be any natural integer. $n=2^m, m \in \mathbb N$ is a very common FFT length (because you can decompose it easily using simple butterflies); then, not every $\sigma$ is invertible in the sense above: for example, if your multiplication gives $x\cdot4 = 0 \mod 32$, you can't say whether $x=0$, $x=8$, $x=16$ or $x=24$; it's not invertible.
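A quick numeric check of both cases in Python (the moduli 13 and 32 are the ones from the examples above; the three-argument `pow(sigma, -1, n)` modular inverse assumes Python 3.8+):

```python
from math import gcd

# sigma is invertible mod n exactly when gcd(sigma, n) == 1
assert gcd(7, 13) == 1               # 7 is coprime to 13 -> invertible
print((8 * 7) % 13)                  # 4, as in the prime-field example
inv = pow(7, -1, 13)                 # modular inverse of 7 mod 13
print((4 * inv) % 13)                # recovers x = 8 uniquely

# counter-example: 4 is not coprime to 32, so x*4 mod 32 is ambiguous
assert gcd(4, 32) != 1
print([x for x in range(32) if (x * 4) % 32 == 0])  # [0, 8, 16, 24]
```

The `gcd` test is the general criterion: a permutation of the spectrum via index multiplication only works when the multiplier is coprime to the FFT length.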
"domain": "dsp.stackexchange",
"id": 8744,
"tags": "fft, frequency-spectrum"
} |
Extracting a part of a URL path | Question: I have a string like the one below from which I am extracting the tariff name.
const url = "https://any-api/supplier/tariffSelectionG_E5449168?t=randomChars"
So from the above string, I want text tariffSelectionG_E5449168 and I am using the code below at the moment, which is working fine.
const supplyUrl = "https://any-api/supplier/tariffSelectionG_E5449168?t=randomChars"
if(supplyUrl.includes('?')) {
const url = supplyUrl.split('?')
if(url.length > 0) {
const tariff = url[0].split('supplier/')
return tariff[1]
}
}
Is there any better way to do the above extracting?
Answer: While it may be unlikely to occur, the URL could contain a hostname with a top-level domain supplier. Currently .supplies is registered to Donuts Inc and someday URLs could have a TLD with supplier, which would lead the current code to not correctly find the target text.
const supplyUrl = "https://my.supplier/supplier/tariffSelectionG_E5449168?t=randomChars"
if(supplyUrl.includes('?')) {
const url = supplyUrl.split('?')
if(url.length > 0) {
const tariff = url[0].split('supplier/')
console.log('tariff:', tariff[1])
}
}
A more robust solution would use the URL API with the pathname property. For example, if it was known that the last part of the pathname was the tariff string then the array method .pop() could be used:
const supplyUrl = "https://my.supplier/supplier/tariffSelectionG_E5449168?t=randomChars"
console.log('tariff: ', new URL(supplyUrl).pathname.split(/\//).pop());
Otherwise if there is a need to find strings anywhere in the path that start with tariff then a pattern match could be used for that - for example:
const supplyUrl = "https://my.supplier/supplier/tariffSelectionG_E5449168?t=randomChars"
function findTariff(url) {
const pathParts = new URL(url).pathname.split(/\//);
for (const part of pathParts) {
if (part.match(/^tariff/)) {
return part;
}
}
return -1; // or something to signify not found
}
console.log('tariff: ', findTariff(supplyUrl));
Or one could take a functional approach, which can be more concise:
const supplyUrl = "https://my.supplier/supplier/tariffSelectionG_E5449168?t=randomChars"
function findTariff(url) {
const pathParts = new URL(url).pathname.split(/\//);
return pathParts.find(part => part.match(/^tariff/));
}
console.log('tariff: ', findTariff(supplyUrl)); | {
"domain": "codereview.stackexchange",
"id": 42501,
"tags": "javascript, parsing, ecmascript-6"
} |
Redox titration of iron(II) | Question:
The concentration of $\ce{Fe^2+(aq)}$ can be determined by a redox titration using
A. $\ce{KBr}$
B. $\ce{SnCl2}$
C. $\ce{KMnO4}$ (basic)
D. $\ce{KBrO3}$ (acidic)
Can anyone please help me with this question? I know the answer is between C and D, since those two are the only ones that give redox reactions. However, I don't know how to choose between C and D, as both will act as oxidizing agents and Fe(II) will be oxidized in both cases (the correct answer is D).
Answer:
Can you please elaborate on that part? I know that the E value of
Mno4- is +1.51, and the one for BrO3- is +1.48. What does that
suggest?? that Mno4- will more likely to be reduced by Fe+ than
Bro3-?? please explain
The first and foremost hint, even before you calculate the electrode potentials is the solubility rule taught in general chemistry. Hydroxides of most metals are insoluble. Write the equations : Fe(II) + hydroxide ions = iron (II) hydroxide. This is why alkaline permanganate is not a good choice. However, this is not the final answer because you would like to further confirm whether C or D is the correct choice.
Next determine the E_cell= E(cathode)-E(anode) > 0 V or not.
Now you need to oxidize Fe(II) to Fe(III), so the iron half cell will be the anode. As a mnemonic, anode and oxidation both start with vowels; cathode and reduction don't. You would like to reduce the permanganate in an alkaline medium, this half cell is the cathode.
Repeat the same for the bromate half cell in acidic medium and Fe(II). Both (C) and (D) would turn out to give positive E_cells. However, (C) is not the answer because you don't want iron to precipitate out as iron(II) hydroxide.
In real life analysis, Fe(II) can be routinely titrated with potassium permanganate but in highly acidic medium. For practice, check the E_cell after writing a balanced reaction. | {
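As a rough numeric sketch of the E_cell check described above: the MnO4- and BrO3- potentials are the ones quoted in the question's comment, while the +0.77 V for Fe3+/Fe2+ is the usual textbook value, assumed here for illustration:

```python
# Standard reduction potentials (volts); MnO4- and BrO3- values are the ones
# quoted in the question's comment, Fe3+/Fe2+ is the common textbook value.
E_Fe = 0.77      # Fe3+ + e- -> Fe2+  (the iron half cell acts as the anode)
E_MnO4 = 1.51    # MnO4- in acidic medium (cathode candidate)
E_BrO3 = 1.48    # BrO3- in acidic medium (cathode candidate)

for name, e_cathode in [("MnO4-", E_MnO4), ("BrO3-", E_BrO3)]:
    e_cell = e_cathode - E_Fe  # E_cell = E(cathode) - E(anode)
    print(f"{name}: E_cell = {e_cell:+.2f} V, spontaneous: {e_cell > 0}")
```

Both cells come out positive, which is exactly why the potential alone cannot decide between (C) and (D); the hydroxide precipitation in basic medium is what rules out (C).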
"domain": "chemistry.stackexchange",
"id": 12140,
"tags": "inorganic-chemistry, redox, aqueous-solution, analytical-chemistry, titration"
} |
Warnings when running turtlebot_gazebo | Question:
Environment
Ubuntu 12.04.2 (run on Virtualbox on Mac OSX Mavericks)
hydro
Questions
I installed turtlebot-simulator.
sudo apt-get install ros-hydro-turtlebot-simulator
sudo apt-get install ros-hydro-turtlebot-apps
sudo apt-get install ros-hydro-turtlebot-rviz-launchers
Run roscore and launched the simulator.
roslaunch turtlebot_gazebo turtlebot_empty_world.launch
I got these warning.
OpenGL Warning: No pincher, please call crStateSetCurrentPointers() in your SPU
OpenGL Warning: You called glBindTexture with a target of 0x806f, but the texture you wanted was target 0xde1 [1D: de0 2D: de1 3D: 806f cube: 8513]
Are these warnings serious? If so, how can I fix them?
Originally posted by wkentaro on ROS Answers with karma: 175 on 2014-10-12
Post score: 0
Answer:
Running OpenGL applications inside VMs is often fraught with problems. If things are generally working it's likely ok to ignore them. Otherwise you may need to look into updating your video drivers.
Originally posted by tfoote with karma: 58457 on 2015-01-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 19714,
"tags": "turtlebot, ros-hydro"
} |
Simple Python tic tac toe | Question: I'm a beginner in python so I decided to take a simple challenge and wrote tictactoe
How to write it better in future?
What's wrong with my code?
from random import choice
again = ''
board = '[1][2][3]\n[4][5][6]\n[7][8][9]'
win_combination = [(1,2,3),(4,5,6),(7,8,9),(1,4,7),(2,5,8),(3,6,9),(1,5,9),
(3,5,7)]
pleyer_numbers = []
computer_numbers = []
numbers = [x for x in range(1,10)
def replece_board(board):
board = board.replace(str(pleyer_number),'X')
board = board.replace(str(computer_number),'O')
return board
def change_number():
pleyer_numbers.append(int(pleyer_number))
computer_numbers.append(int(computer_number))
def check_win(who):
check = 0
for win in win_combination:
for number in win:
if number in who:
check += 1
if check == 3:
return True
else:
continue
check = 0
return False
print('Welcome in simple tic-tac-toe!\nYour enemy is computer.')
while True:
board = '[1][2][3]\n[4][5][6]\n[7][8][9]'
pleyer_numbers = []
computer_numbers = []
numbers = [x for x in range(1,10)]
while True:
print(board)
pleyer_number = input('Enter the field number: ')
if int(pleyer_number) not in range(1,10):
print('There is no filed with this number.')
continue
elif int(pleyer_number) not in numbers:
print('This field is occupied.')
continue
numbers.remove(int(pleyer_number))
if numbers:
computer_number = choice(numbers)
numbers.remove(computer_number)
print('Computer chose field: ' + str(computer_number))
change_number()
board = replece_board(board)
if check_win(pleyer_numbers):
print(board.replace(str(pleyer_number),'X'))
print('----------YOU WON!----------')
break
if check_win(computer_numbers):
print(board)
print('----------YOU LOST!----------')
break
if not numbers:
print('\n----------TIE!----------')
break
again = input('Do you want to play again? y/n ')
while True:
if again != 'y' and again != 'n':
again = input('Only y/n ')
else:
break
if again == 'n':
break
else:
print('----------------------------\nYou started new game!')
Answer: Welcome to code review.
Bug:
unmatched parenthesis at line 9:
numbers = [x for x in range(1,10)
Style:
I suggest you check PEP0008 https://www.python.org/dev/peps/pep-0008/ the official Python style guide and here are a few comments:
win_combination = [(1,2,3),(4,5,6),(7,8,9),(1,4,7),(2,5,8),(3,6,9),(1,5,9),(3,5,7)]
a space should be left after commas for readability like this:
win_combination = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (1, 4, 7), (2, 5, 8), (3, 6, 9), (1, 5, 9), (3, 5, 7)]
Spelling
pleyer_numbers = []
def replece_board(board):
print('There is no filed with this number.')
Code is also designed for human beings to read; the machine does not care, but if names were not significant we would be using barcodes as variable names instead.
Code
General remarks: your code is a bit longer than it should and disorganized and you might want to consider breaking down your code into functions with separate roles.
def replece_board(board):
replace what board? what is board?
def change_number():
what number?
def check_win(who):
what is who? a number? a string? a list? something else?
for each public function you make now and in the future you should include a docstring answering such obvious questions.
Docstrings: Python documentation strings (or docstrings) provide a convenient way of associating documentation with Python modules, functions, classes, and methods. It's specified in source code that is used, like a comment, to document a specific segment of code.
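As a sketch of these two suggestions combined — a clearly named, documented win check (the winning combinations are taken from the original code):

```python
WIN_COMBINATIONS = [(1, 2, 3), (4, 5, 6), (7, 8, 9), (1, 4, 7),
                    (2, 5, 8), (3, 6, 9), (1, 5, 9), (3, 5, 7)]

def check_win(chosen_numbers):
    """Return True if chosen_numbers covers any winning combination.

    chosen_numbers: iterable of field numbers (1-9) owned by one player.
    """
    owned = set(chosen_numbers)
    return any(owned.issuperset(combo) for combo in WIN_COMBINATIONS)
```

Using a set plus `any`/`issuperset` replaces the manual counter in the original `check_win` and makes the intent readable at a glance.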
if __name__ == '__main__':
guard to be used at the end of the script to test your functions; this allows your module to be imported by other modules without running the whole script.
again = ''
board = '[1][2][3]\n[4][5][6]\n[7][8][9]'
win_combination = [(1,2,3),(4,5,6),(7,8,9),(1,4,7),(2,5,8),(3,6,9),(1,5,9),
(3,5,7)]
pleyer_numbers = []
computer_numbers = []
numbers = [x for x in range(1,10)
Global variables: are to be avoided as much as possible; a better approach is to enclose your variables inside the corresponding functions that use them.
Catching invalid inputs: since you're dealing with user input, it is very likely that someone enters a wrong value, for example:
Enter the field number: w
results in an error; you should handle errors that would otherwise terminate your program:
confirm_number = input('Enter field number: ')
while not confirm_number.isdecimal():
print('You should enter integer numbers only!')
confirm_number = input('Enter field number: ')
f-strings: since you're using Python 3 (I suppose, from the print statements), f-strings are a new string formatting mechanism known as Literal String Interpolation, or more commonly as f-strings (because of the leading f character preceding the string literal). In Python source code, an f-string is a literal string, prefixed with 'f', which contains expressions inside braces, providing a way to embed expressions inside string literals using a minimal syntax.
a statement like
print('Computer chose field: ' + str(computer_number))
can be written:
print(f'Computer chose: {computer_number}.')
which improves readability.
print('----------YOU WON!----------')
print('----------YOU LOST!----------')
print('----------------------------\nYou started new game!')
String multiplication: the statements above can be replaced with:
print(f"{10 * '-'}YOU WON!{10 * '-'}")
print(f"{10 * '-'}YOU LOST!{10 * '-'}")
print(f"{30 * '-'}\nYou started a new game!") | {
"domain": "codereview.stackexchange",
"id": 35864,
"tags": "python, game, tic-tac-toe"
} |
Remotely launch files that are not part of current workspace? | Question:
I'm aware of the `machine` tag which is used to launch remote nodes. However, this seems to fail when the remote node workspace is different from the local workspace.
Specifically, if I have a package "remote_pkg" in my remote workspace, but not in my local workspace (where I am calling it from), I get the following error:
File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 200, in get_path
raise ResourceNotFound(name, ros_paths=self._ros_paths)
ResourceNotFound: remote_pkg
My launch file is as follows:
<launch>
<machine name="remote_machine" address="remote_machine.local" user="ubuntu"/>
<include machine="remote_machine" file="$(find remote_pkg)/launch/main_online.launch"/>
</launch>
Thanks
Originally posted by sr71 on ROS Answers with karma: 87 on 2016-03-31
Post score: 0
Answer:
It's a bit of a footnote, but the docs for roslaunch do clearly state:
Substitution args are currently resolved on the local machine. In other words, environment variables and ROS package paths will be set to their values in your current environment, even for remotely launched processes.
This includes the $(find pkg) substitution.
This means that any launch files that you want to launch remotely need to exist on the local machine, and they need to be at the same absolute path on both the local and remote machines.
Originally posted by ahendrix with karma: 47576 on 2016-03-31
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sr71 on 2016-03-31:
I see. For now, I will pass in an absolute path. Thank you. | {
"domain": "robotics.stackexchange",
"id": 24289,
"tags": "ros, roslaunch, remote"
} |
Maximum Subsequence Sum : Mark Weiss: | Question:
In the highlighted part below how is Weiss concluding that the array starting at an arbitrary index "p" and ending at "j" can never be larger than the array starting at "i" and ending at "p-1"?
By his own example (the inner loop of algorithm 2 shown below) this is demonstrably false.
Let i = 0, and j = 3.
Let a[ 0 ] = -14, a[ 1 ] = -4, a[ 2 ] = -2, a[ 3 ] = -1.
If p = 2, the subsequence sum from "p" to "j" is clearly bigger than "i" to "p-1".
Even more concerning, Mr. Weiss seems to pull an assumption out of thin air ("j is the first index that causes the subsequence starting at index i to become negative") when nothing in the above could possibly imply that! Indeed, Weiss only mentions "detection" of the subsequence sum being negative between index "i" and "j", but never says why this is the only place it could occur. Where is this coming from?
Thanks for any help!
Answer: I think you are confused about which algorithm he is describing. His description isn't for algorithm 2 but for algorithm 4, where an imaginary index i in reintroduced.
Let me write this algorithm with this index to make it clear to you:
maxSubSum(array a):
maxSum = 0, thisSum = 0;
i = 0;
for j going from 0 to length(a)-1:
thisSum += a[j];
if thisSum > maxSum:
maxSum = thisSum;
else if thisSum < 0:
i = j+1;
thisSum = 0;
return maxSum;
Now you can see that in this algorithm, whenever thisSum < 0, then j is indeed the "first index that causes the subsequence starting at index i to become negative", and the rest of the claim follows. Notice in particular that the example you gave cannot occur with this algorithm (you will never have i=0 and j=3 in that case). | {
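The same algorithm as a runnable Python sketch — a direct transcription of the pseudocode above:

```python
def max_sub_sum(a):
    max_sum = this_sum = 0
    i = 0  # start index of the current candidate subsequence (kept for clarity)
    for j, value in enumerate(a):
        this_sum += value
        if this_sum > max_sum:
            max_sum = this_sum
        elif this_sum < 0:
            # j is the first index making the subsequence starting at i negative
            i = j + 1
            this_sum = 0
    return max_sum

print(max_sub_sum([-14, -4, -2, -1]))   # 0: i is reset on every step, so i=0, j=3 never co-occurs
print(max_sub_sum([4, -3, 5, -2, -1, 2, 6, -2]))  # 11
```

Tracing the first call shows why the questioner's scenario can't arise: after `a[0] = -14` makes `this_sum` negative, `i` immediately jumps to 1, and so on for each subsequent element.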
"domain": "cs.stackexchange",
"id": 15436,
"tags": "algorithms, algorithm-analysis, subsequences, maximum-subarray"
} |
Achieve a control gate with 2 hadamard coins | Question: I want to implement two Toffoli gates with 4 qubits:
3 serving as control qubits (the 2 hadamard coins and one other qubit) and
the last one as target qubit and 3 qubits (2 coins 1 target qubits) as shown below :
Is there a way of achieving this with qiskit's ccx function ?
I tried :
qnodes = QuantumRegister(2)
qsubnodes = QuantumRegister(2)
qc.ccx(subnode[0], subnode[1], q[1])
if (q[1] ==1) :
qc.x(q[0])
qc.x(q[1])
qc.ccx(subnode[0], subnode[1], q[1])
Answer: You can implement three-input Toffoli gate with three two-input Toffoli gates and one ancilla qubit as shown below.
Assume that qubits $q_0$, $q_1$ and $q_2$ are the three inputs and qubit $q_4$ is the target. Qubit $q_3$ is an ancilla qubit. The first gate implements the function $q_0~ \mathrm{AND}~ q_1$ and saves the result of this operation to $q_3$. The second gate implements the function $q_2~ \mathrm{AND}~ q_3 = q_2~\mathrm{AND}~(q_0~ \mathrm{AND}~ q_1) = q_0~\mathrm{AND}~q_1~ \mathrm{AND}~ q_2$. As a result, qubits $q_0$, $q_1$ and $q_2$ together control qubit $q_4$. The last gate is used to uncompute the ancilla qubit $q_3$ back to state $|0\rangle$ (this is necessary because ancilla qubits entangled with other qubits can cause interference and increase the error rate; note that the inverse of the Toffoli gate is again a Toffoli).
Here is a code in QASM (sorry, I am not experienced in Qiskit)
OPENQASM 2.0;
include "qelib1.inc";
qreg q[5];
creg c[5];
ccx q[0],q[1],q[3];
ccx q[2],q[3],q[4];
ccx q[0],q[1],q[3]; | {
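A classical sanity check of this decomposition on computational basis states — plain Python, no quantum libraries, since on basis states CCX is just a reversible AND-into-XOR:

```python
from itertools import product

def ccx(bits, c1, c2, t):
    """Toffoli on a list of classical bits: flip t iff both controls are 1."""
    if bits[c1] and bits[c2]:
        bits[t] ^= 1

# check all 16 basis states of (q0, q1, q2, q4) with the ancilla q3 = 0
for q0, q1, q2, q4 in product([0, 1], repeat=4):
    bits = [q0, q1, q2, 0, q4]   # q3 is the ancilla, starts at |0>
    ccx(bits, 0, 1, 3)           # q3 = q0 AND q1
    ccx(bits, 2, 3, 4)           # q4 ^= q2 AND q3
    ccx(bits, 0, 1, 3)           # uncompute the ancilla
    assert bits[3] == 0                    # ancilla restored to 0
    assert bits[4] == q4 ^ (q0 & q1 & q2)  # acts as a 3-controlled X
print("decomposition verified on all 16 basis states")
```

This only verifies behaviour on basis states, not phases, but for Toffoli-built circuits (which are permutation matrices) that is sufficient.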
"domain": "quantumcomputing.stackexchange",
"id": 1289,
"tags": "quantum-gate, qiskit, circuit-construction"
} |