anchor | positive | source |
|---|---|---|
What species of wasp is this? And how to get rid of them? | Question:
Yesterday while moving a chair in my garden I got stung by a group of wasps. During the night I went hunting for their nests and managed to neutralize 3 of them: one was in the chair's legs, about the size of a child's fist, with a dozen of them inside. It was made of grey hexagonal cells, similar to paper in appearance.
(Sorry for the blurry image, I was a bit scared :-P)
Then I've found another one in a water hose tube that was slightly smaller, but quite active.
And finally a third one in a vase, it was very small, only had a couple of cells in it and wasps were working on it, almost as if they were building it from scratch.
Given the number and size of the nests I'm afraid that they could be spreading in the garden. I have already noticed that a couple of them survived the massacre of last night and I'm worried that unless I kill every one of them they might keep nesting.
I don't know much about the hierarchical society of wasps, but I was not able to identify the queen (although I haven't checked every single one I've killed). Reading online I've found that killing the queen might mean that the workers would eventually die.
So what kind of wasp is this? What can I do to make sure that they do not keep bothering me?
EDIT: by the way, I live in northern Italy
Answer: These are European paper wasps.
(They look very similar to yellowjacket wasps but you can tell them apart by the antennae; they are brown here whereas they are black in the case of yellowjackets. There are other subtle differences too.)
See this post on Gardening & Landscaping stackexchange about getting rid of wasps.
(This part is probably not really on-topic here and I don't want to copy-paste an existing answer in another stackexchange site.)
If you are upvoting this post then please also upvote the original one in G&L SE | {
"domain": "biology.stackexchange",
"id": 9906,
"tags": "species-identification, entomology, pest-control, wasps"
} |
Sphere-bot CNC that can engrave in a high resolution? | Question: What kind of machine would be best suited to engrave a photograph (grayscale) on a glass sphere?
I have checked out the eggbot, and openbuilds "spherebots"....but neither of these options can do something like a photograph.
Would I need a 5-axis machine? If so what type? I know nothing about them.
What types of cnc machines out there could achieve this?
Answer: Laser cutters like this Epilog can engrave grayscale images as well as glass and curved surfaces. See some sample images below. But I am not sure how much grayscale you will be able to achieve on glass. You might need to achieve this effect with halftone.
You will need an added rotary attachment for curved surfaces. This will limit the size of work piece you can use. The laser with rotary attachment is really only suited for cylindrical objects, not spheres. As the sphere surface curves away, the laser will de-focus and lose power and resolution. Fixing a sphere in the rotary attachment will also be a challenge.
That being said, I have engraved logos on pumpkins in the past with reasonable results. | {
"domain": "robotics.stackexchange",
"id": 2245,
"tags": "cnc"
} |
Thevenin equivalent voltage | Question: I am trying to find the Thevenin equivalent of the following circuit:
In regards to the equivalent voltage, I know (I have the answers) that it should be V + IR, but I am not sure how to get there. This is what I have so far, where R1 is the resistor between the voltage source and A, and R2 is the other resistor (in parallel with the current source):
Using KVL on the loop V-->R-->A-->B-->R-->V,
$$V=I_{R1}R+V_{out}+I_{R2}R$$
Using KCL on the node that joins the current source, the resistor and the voltage source:
$$I+I_{R2}=I_{R1}$$
However, I need another equation to be able to solve these, as I currently have three unknowns.
Answer: Hint: to find equivalent resistance replace all ideal voltage sources with short circuits and all ideal current sources with open circuits.
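A possible third equation is the open-circuit condition at the terminals: for the Thevenin (open-circuit) voltage, no current flows out of A, so $I_{R1} = 0$. A minimal numeric check (plain Python; this assumes the topology described in the question, with both resistors equal to $R$):

```python
# Unknowns: I_R1, I_R2, V_out.  Equations (from the question):
#   (1) V = I_R1*R + V_out + I_R2*R   (KVL around the outer loop)
#   (2) I + I_R2 = I_R1               (KCL at the middle node)
#   (3) I_R1 = 0                      (open-circuit condition at A-B)
def thevenin_voltage(V, I, R):
    I_R1 = 0.0                         # eq. (3): no load current
    I_R2 = I_R1 - I                    # eq. (2) rearranged
    V_out = V - I_R1 * R - I_R2 * R    # eq. (1) rearranged
    return V_out

print(thevenin_voltage(5.0, 2.0, 3.0))  # 11.0 == 5 + 2*3, i.e. V + I*R
```

For any values this reproduces $V_{th} = V + IR$, the expected answer.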
Hope this helps | {
"domain": "physics.stackexchange",
"id": 81804,
"tags": "homework-and-exercises, electric-circuits, electric-current, electrical-resistance, voltage"
} |
ROS for raspberry pi OS 2021 (using raspberry pi 4, 8GB)? | Question:
I need to know if there is any ROS distribution version for raspberry pi OS 2021 (using raspberry pi 4, 8GB) because I need to use ROS LIDAR SICK with this OS. (I used noetic on ubuntu 20.04, but it doesn't work well on raspberry OS)
I appreciate your help,
Originally posted by julius82 on ROS Answers with karma: 1 on 2021-11-03
Post score: 0
Answer:
Hello @julius82,
I used noetic on ubuntu 20.04
If you are using Noetic, try to use Noetic on the Raspberry Pi as well, because communication and connection establishment will be easier with the same distribution on both machines.
Below are some ways to install Noetic on a Raspberry Pi 4.
You can put ROS Noetic directly onto Raspberry Pi OS.
You can have a look at the below steps for installing.
Or else, you can install Ubuntu 20.04 on your Raspberry Pi 4 and install Noetic directly in it.
Originally posted by Ranjit Kathiriya with karma: 1622 on 2021-11-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37087,
"tags": "ros"
} |
3D bin packing algorithm using Java? | Question: I wrote a 3D bin packing algorithm but I am still not sure if it is correct or not.
I did not follow any code or pseudo-code, which is why I would like to know whether it is an efficient algorithm for the 3D bin packing problem or not.
Each container has a length, height and breadth.
Each item has a length, height and breadth.
This is the code I wrote to pack items one by one without exceeding the container's length, height or breadth:
private double x,y,z=0;
private double[] remainingLength;
private double[] remainingHeight;
private double[] remainingBreadth;
//----initialize the remaining dimensions' arrays
public void init(int n) {
remainingLength=new double[n];
remainingHeight=new double[n];
remainingBreadth=new double[n];
for (int i=0; i<n; i++) {
remainingLength[i]=length;
remainingHeight[i]=height;
remainingBreadth[i]=breadth;
}
}
public boolean put3D(ItemsUnit item, int p,int n) {
init(n);
if(x<length){
if(putL(item,p)) {
packedItems.add(item); // if item fits add it to the packedItems into the container
return true;
}
}
if(y<breadth) {
if(putB(item,p)){
packedItems.add(item); // if item fits add it to the packedItems into the container
return true;
}
}
if(z<height){
if(putH(item,p)){
packedItems.add(item); // if item fits add it to the packedItems into the container
return true;
}
}
return false;
}
public boolean putL(ItemsUnit item, int p) {
//remaining dimensions arrays already initialized in the optimization algorithm
double minRemL=remainingLength[0];
int i=0;
for (int j=0; j<remainingLength.length; j++){
if ((remainingLength[j]!=0)&&(minRemL>=remainingLength[j])&&(remainingLength[j]>=item.getLength())){
i=j; //choosing the item to which we should put the new packed item next to
minRemL=remainingLength[j]; //minimum length left
}else {
return false;
}
}
remainingLength[p]=remainingLength[i]-item.getLength();
remainingBreadth[p]-=item.getBreadth();
remainingHeight[p]-=item.getHeight();
remainingLength[i]=0;
x+=item.getLength(); //increment x
return true;
}
public boolean putB(ItemsUnit item, int p) {
//remaining dimensions arrays already initialized in the optimization algorithm
double minRemB=remainingBreadth[0];
int i=0;
for (int j=0; j<remainingBreadth.length; j++){
if ((remainingBreadth[j]!=0)&&(minRemB>=remainingBreadth[j])&&(remainingBreadth[j]>=item.getBreadth())){
i=j; //choosing the item to which we should put the new packed item next to
minRemB=remainingBreadth[j]; //minimum length left
}
else {
return false;
}
}
remainingBreadth[p]=remainingBreadth[i]-item.getBreadth();
remainingHeight[p]-=item.getHeight();
remainingLength[p]-=item.getLength();
remainingBreadth[i]=0;
y+=item.getBreadth(); //increment y
return true;
}
public boolean putH(ItemsUnit item, int p) {
//remaining dimensions arrays already initialized in the optimization algorithm
double minRemH=remainingHeight[0];
int i=0;
for (int j=0; j<remainingHeight.length; j++){
if ((remainingHeight[j]!=0)&&(minRemH>=remainingHeight[j])&&(remainingHeight[j]>=item.getHeight())){
i=j; //choosing the item to which we should put the new packed item next to
minRemH=remainingHeight[j]; //minimum length left
}
else {
return false;
}
}
remainingHeight[p]=remainingHeight[i]-item.getHeight();
remainingBreadth[p]-=item.getBreadth();
remainingLength[p]-=item.getLength();
remainingHeight[i]=0;
z+=item.getHeight(); //increment z
return true;
}
I tested the algorithm and it worked fine without exceeding the dimensions of the container but I am not fully certain if the code is correct.
Can anyone read the code and tell me if it has a problem somewhere or if it is correct?
Answer: Each of the putL, putB and putH methods is public without needing to be. A call to any of these methods can throw an exception if the put3D method hasn't been called before, because the arrays haven't been initialised.
The call to init should be done in the constructor to avoid such side effects.
Having spaces before and after (conditional) operators will increase the readability. E.g. this
if ((remainingHeight[j]!=0)&&(minRemH>=remainingHeight[j])&&(remainingHeight[j]>=item.getHeight())){
would be better like this
if ((remainingHeight[j] != 0) && (minRemH >= remainingHeight[j]) && (remainingHeight[j] >= item.getHeight())){ | {
"domain": "codereview.stackexchange",
"id": 24325,
"tags": "java, performance, algorithm, combinatorics"
} |
Choosing the right condenser | Question: What are the advantages and disadvantages of the various types of condensers commonly found in the laboratory? Obviously more intricate pieces of glassware are more costly but assuming they are all available why would one use a Dimroth condenser vs. a Friedrichs when refluxing something? I've heard that Graham condensers are to be avoided when refluxing due to the possibility of clogging, and yet it is still very common - when is it appropriate to use one?
Illustration made with ChemDraw. Feel free to reference other designs but please attach a schematic for clarity.
Edit: This question was motivated by an organic preparation which involved the bromination of an alkene in boiling water. Bromine has a boiling point of 58.8 °C and on top of that the reaction was exothermic. It was difficult to avoid the loss of Br2 gas with your typical Allihn condenser, but the increased cooling capacity provided by a Friedrichs returned Br2 back to the flask as it was produced.
Also, while the Friedrichs forces vapors up a spiral path, the path itself is wide, and in my condenser at least, there was a bit of leeway for liquid to drop down the sides, helping to prevent blockages. I didn't try a Graham condenser but I imagine a much slower rate of addition would be supported by this condenser.
Answer: @Mart 's comment impelled me to return to this question and correct my answer. I've deleted incorrect material and expanded the discussion to, hopefully, provide correct information. There is a good discussion (better than the reference previously cited) of the issue here.
Reflux is the process of boiling reactants while continually cooling the vapor returning it back to the flask as a liquid. It is used to heat a mixture for extended periods and at certain temperatures...A condenser is attached to the boiling flask, and cooling water is circulated to condense escaping vapors.
If you are refluxing a mixture, as you might in organic synthesis to increase the speed of the reaction by doing it at a higher temperature (i.e., the boiling point of the solvent), then any of the condensers that worked well enough to avoid the loss of solvent and avoid "flooding" would work equally well. When you're refluxing, you want the "reflux ring", the place where the vapor is visibly condensing into a liquid, to be no more than 1/3 of the way up the reflux column.
You have two different basic types of condensers shown, Graham-type condensers (the first 3) and coil condensers (the last two). In the coil condensers (the left condenser in the picture below), the water flows through the coil and the vapor moves up in the larger, outside area of the condenser, condenses onto the cooled coils, then drips back into the pot. In a Graham-type condenser (the right condenser in the picture below), the water flows around a tube (whether straight or coiled) that contains the vapor/condensed liquid. The Graham-type condensers clog (or flood) more easily since they have a more restricted path for the liquid to return to the pot.
Graham-type condensers: The Liebig condenser is simple, but has low cooling capacity and can be fairly easily clogged as the condensed liquid flows back into the flask and blocks the vapor that is trying to escape. The Allihn improves on this design by having a wider bore at the bottom and condensing the liquid on the "bubbles" where it can run down the sides and avoid blocking the vapor. (I've used this to good effect in refluxing many reactions.) The Graham condenser is the same basic design as the other two, but the condensation tube is coiled which provides more surface area for cooling...but also tends to send the condensed liquid right into the path of the vapor trying to move up. It is particularly prone to flooding.
Coil condensers, such as the Dimroth and Friedrichs, have high capacity for cooling with fewer problems from flooding, since the vapor condenses on the coils and drips back from the little prominence at the bottom of the coils into the center of the pot. The vapor has an easy time getting past the drops falling into the pot. If you can afford it, this seems like a good choice for most applications. Friedrichs condensers, which incorporate a cold-finger with the spiral, are higher capacity, but quite bulky and heavy. I have seen them used with rotovaps where you are taking a lot of solvent off quickly, but not with an ordinary reflux apparatus. This would be overkill for a simple reflux reaction situation.
Sorry for the incorrect information (for those of you who looked at this before) and hope this is helpful. | {
"domain": "chemistry.stackexchange",
"id": 11939,
"tags": "experimental-chemistry, equipment"
} |
A Paragraph Merger with Removing Overlapped Duplicated Lines in C# | Question: I am trying to make a paragraph merger that takes multiple paragraphs and outputs a concatenated result with the redundant overlapping lines removed. Each input paragraph has the following specifications.
The leading/trailing spaces in each line have been removed.
No empty line.
The output merged paragraph follows the rules as below.
Each paragraph passed to the Update method is concatenated after the previous input.
If the line(s) at the start of the paragraph passed to the Update method form the same sequence as (overlap with) the end of the previous input, keep only a single copy of the duplicated lines.
The definition of duplicated lines here:
The content of the two lines must be exactly the same; no "partial overlapping" cases need to be considered.
The content sequence of the two blocks of lines must be exactly the same.
Example Input and Output
Besides the rules, here's a use case.
Inputs
Input paragraph 1 example:
Code Review is a question and answer site for seeking peer review of your code.
It's built and run by you as part of the Stack Exchange network of Q&A sites.
We're working together to improve the skills of programmers worldwide by taking working code and making it better.
We're a little bit different from other sites. Here's how:
Ask questions, get answers, no distractions
This site is all about getting answers.
It's not a discussion forum.
There's no chit-chat.
Input paragraph 2 example:
We're a little bit different from other sites. Here's how:
Ask questions, get answers, no distractions
This site is all about getting answers.
It's not a discussion forum.
There's no chit-chat.
Good answers are voted up and rise to the top.
The best answers show up first so that they are always easy to find.
The person who asked can mark one answer as "accepted".
Accepting doesn't mean it's the best answer, it just means that it worked for the person who asked.
Get answers to practical, detailed questions
Focus on questions about an actual problem you have faced.
Include details about what you have tried and exactly what you are trying to do.
Expected Output
The two blocks of text overlap, so keep a single copy of the overlapped part after merging.
Code Review is a question and answer site for seeking peer review of your code.
It's built and run by you as part of the Stack Exchange network of Q&A sites.
We're working together to improve the skills of programmers worldwide by taking working code and making it better.
We're a little bit different from other sites. Here's how:
Ask questions, get answers, no distractions
This site is all about getting answers.
It's not a discussion forum.
There's no chit-chat.
Good answers are voted up and rise to the top.
The best answers show up first so that they are always easy to find.
The person who asked can mark one answer as "accepted".
Accepting doesn't mean it's the best answer, it just means that it worked for the person who asked.
Get answers to practical, detailed questions
Focus on questions about an actual problem you have faced.
Include details about what you have tried and exactly what you are trying to do.
The experimental implementation
The experimental implementation is as below.
[Serializable]
class ParagraphMerger
{
private List<string> paragraphContents;
public ParagraphMerger(List<string> input)
{
this.paragraphContents = input;
}
public ParagraphMerger(string[] input)
{
this.paragraphContents = input.ToList();
}
public ParagraphMerger Update(List<string> input)
{
return Update(input.ToArray());
}
public ParagraphMerger Update(string[] input)
{
bool flag = false;
foreach (var element in input)
{
if (((!IsStringExist(paragraphContents, element)) || flag) && (!String.IsNullOrWhiteSpace(element)))
{
paragraphContents.Add(element);
flag = true;
}
}
return this;
}
public override string ToString()
{
StringBuilder sb = new StringBuilder();
foreach (var element in this.paragraphContents)
{
sb.AppendLine(element);
}
return sb.ToString();
}
private bool IsStringExist(List<string> strings, string target)
{
if (strings is null)
{
return false;
}
if (target is null)
{
throw new ArgumentNullException();
}
foreach (var element in strings)
{
if (element.Equals(target))
{
return true;
}
}
return false;
}
}
Test cases
The test case here uses the ParagraphMerger class with File.ReadAllText and File.WriteAllText. The content of "paragraph1.txt" is as in "Input paragraph 1" above and "paragraph2.txt" as in "Input paragraph 2" above.
var paragraph1 = File.ReadAllText("paragraph1.txt").Split(new[] { Environment.NewLine }, StringSplitOptions.None);
var paragraph2 = File.ReadAllText("paragraph2.txt").Split(new[] { Environment.NewLine }, StringSplitOptions.None);
ParagraphMerger paragraphMerger = new ParagraphMerger(paragraph1);
paragraphMerger.Update(paragraph2);
File.WriteAllText("output.txt", paragraphMerger.ToString());
Console.WriteLine(paragraphMerger.ToString());
All suggestions are welcome. If there is any issue about:
Data processing performance
The naming and readability
Potential drawbacks of the implemented methods
, please let me know.
Answer: Readonly Field
As you assign the field only once, in the constructor, you may make it readonly. This lets the compiler apply better optimisations.
List constructor
You store a reference to the list passed as the argument; store a copy instead, because when you modify the passed list you'll change the source list. That can cause unexpected behavior for the external code. input.ToList() makes a copy of the collection even if it's already a List.
IEnumerable
You made the class accept both arrays and lists. You may use IEnumerable<string> instead, which is compatible with both.
IsStringExist()
Consider using List.Contains(T) instead of Equals in a loop.
The strings is null check can be moved into the return statement.
Update()
The name can be improved, e.g. AppendLines(), based on what the method does.
When there are multiple conditions in one logical expression, the cheapest can be checked first. Move flag to the first place; then IsStringExist() will not be called if flag is true. But !String.IsNullOrWhiteSpace(element) can be moved in front of everything to remove the null argument check from the IsStringExist() method. And then IsStringExist() reduces to a single List.Contains() call.
Small things
As an option, !String.IsNullOrWhiteSpace(element) can be replaced with element?.Trim().Length > 0, but it changes nothing.
ToString() can be implemented with a simple return string.Join(Environment.NewLine, paragraphContents). string.Join uses a StringBuilder under the hood too. The behavioral difference is no CRLF at the end of the output.
[Serializable]
class ParagraphMerger
{
private readonly List<string> paragraphContents;
public ParagraphMerger(IEnumerable<string> input)
{
paragraphContents = input.ToList();
}
public ParagraphMerger AppendLines(IEnumerable<string> input)
{
bool flag = false;
foreach (var element in input)
{
if (element?.Trim().Length > 0 && (flag || !paragraphContents.Contains(element)))
{
paragraphContents.Add(element);
flag = true;
}
}
return this;
}
public override string ToString()
{
return string.Join(Environment.NewLine, paragraphContents);
}
}
And finally a usage example
var paragraph1 = File.ReadAllLines("paragraph1.txt"); // the File class has a lot of interesting methods, check the docs
var paragraph2 = File.ReadAllLines("paragraph2.txt");
ParagraphMerger paragraphMerger = new ParagraphMerger(paragraph1);
paragraphMerger.AppendLines(paragraph2);
string text = paragraphMerger.ToString(); // call ToString() once and reuse
File.WriteAllText("output.txt", text);
Console.WriteLine(text); | {
"domain": "codereview.stackexchange",
"id": 40566,
"tags": "c#, object-oriented, strings, classes, stream"
} |
TISE asymmetric infinite potential well boundary conditions and normalisation | Question: I am attempting to solve the time-independent Schrodinger equation as a numerical analysis exercise, but my QM is a bit weak. I have the following potential and I want the energy/eigenvalue. \begin{equation*} V(x) = \begin{cases}
\infty & x \in (- \infty , 0)\cup (2l, \infty) \\
0 & x \in [0,l]\\
V_0 & x \in [l,2l]
\end{cases}
\end{equation*}
I was wondering if this was a correct way of attacking it.
I have found solutions $\psi_1(x)$ and $\psi_2(x)$ for $x \in [0,l]$ and $x \in [l,2l]$ by making initial guesses of $\psi_1'(0)$, $\psi_2'(2l)$ and $E$, which I want to use to extract the true solution by the shooting method. For the $x \in [l,2l]$ case I used a negative step size to traverse backwards; I was unsure if this was correct, but I don't think starting at $\psi_2'(l)$ would suffice because it's not at the boundary where the potential is infinite and the wave function is 0, so there's no good information for an initial/boundary value.
My main question is when it is time to "clean up" my guesses for the true values of the constants should I normalise with $\displaystyle \int_{0}^{l} |\psi_1|^2$ and $\displaystyle \int_{l}^{2l} |\psi_2|^2$ or $\displaystyle \int_{-\infty}^{\infty} |\psi_1|^2$ and $\displaystyle \int_{-\infty}^{\infty} |\psi_2|^2$ or even $\displaystyle \int_{-\infty}^{\infty} |\psi|^2$ where $\psi$ takes the appropriate values depending on the region. I am also unsure if I am looking for a single eigenvalue that works for both $\psi_1(x)$ and $\psi_2(x)$ or for $E_i$ s.t. $\hat{H}\psi_i(x) = E_i\psi_i(x)$
Apologies if this should be in scicomp.stackexchange or is a little basic, thanks.
Answer: Because you are only interested in the energy eigenvalues, you don't need to normalize $\psi$. Observe that if $\psi$ is a solution to the Schrodinger equation
$$-\frac{\hbar^{2}}{2m}\frac{{\rm d}^{2}\psi}{{\rm d}x^{2}}+V\left(x\right)\psi=E\psi$$
then so is $A\psi$ for every $A\neq 0$, with the same $E$. Also, since this is a second-order differential equation, you only need to set two conditions on $\psi$. Because you are trying to solve the problem numerically, you need to set the conditions at $x=0$ (I assume you solve forwards and not backwards). I argue that you must require
$$\begin{cases}\psi\left(x=0\right)=0\\\psi^{\prime}\left(x=0\right)=1\end{cases}$$
The first condition is the consequence of the infinite potential at the boundaries. The second condition is equivalent to $\psi^{\prime}\left(x=0\right)\neq 0$ - because remember! We don't care about the normalization! Now you use the shooting method. You guess $E$, solve the ODE and get $\psi\left(x=2L\right)$. Keep guessing till you get $\psi\left(x=2L\right)=0$. In other words, use a root finder to solve the equation
$$\psi_{E}\left(x=2L\right)=0$$
for $E$. Start the iterations at the guess $E_{0}=0$ for the ground state. For higher states you'll need to use trial and error. I'd recommend you to plot $\psi_{E}\left(x=2L\right)$ as a function of $E$. The zeros of this graph are the eigenvalues. | {
"domain": "physics.stackexchange",
"id": 52526,
"tags": "quantum-mechanics, wavefunction, potential, schroedinger-equation, computational-physics"
} |
openni_launch problems with kinect and asus, electric and fuerte | Question:
I've tried getting openni_launch to work on Ubuntu 12.04 with fuerte and a Kinect, also on 11.10 with electric and Kinect and on 11.10 with an Asus Xtion Pro Live and now on a second computer running 11.10, electric and the Asus camera. I'm convinced that there's got to be something wrong with the package. I've followed the instructions to the letter. Latest attempt with 11.10, electric and Asus Xtion.
Are there known problems? I've spent probably 40 hours trying to get it working and hired an experienced ROS developer who has probably 15-20 hours into trying to get it working. Are there known problems with it? Has the project been abandoned?
rosrun openni_launch openni.launch
[rosrun] Couldn't find executable named openni.launch below /opt/ros/electric/stacks/openni_kinect/openni_launch
[rosrun] Found the following, but they're either not files,
[rosrun] or not executable:
[rosrun] /opt/ros/electric/stacks/openni_kinect/openni_launch/launch/openni.launch
chris@chris-VirtualBox:/opt/ros/electric/stacks/openni_kinect$
rosmake openni_kinect --rosdep-install
[ rosmake ] Packages requested are: ['openni_kinect']
[ rosmake ] Logging to directory/home/chris/.ros/rosmake/rosmake_output-20120713-101739
[ rosmake ] Expanded args ['openni_kinect'] to:
['openni_tracker', 'openni_camera', 'depth_image_proc', 'openni_launch', 'openni', 'nite']
[ rosmake ] Generating Install Script using rosdep then executing. This may take a minute, you will be prompted for permissions. . .
Failed to find rosdep nite-dev for package nite on OS:ubuntu version:oneiric
Failed to find rosdep openni-dev for package openni on OS:ubuntu version:oneiric
Failed to find rosdep ps-engine for package nite on OS:ubuntu version:oneiric
[ rosmake ] rosdep install failed: Rosdep install failed
chris@chris-VirtualBox:/opt/ros/electric/stacks/openni_kinect$
Originally posted by punching on ROS Answers with karma: 78 on 2012-07-13
Post score: 0
Answer:
it has to be
roslaunch openni_launch openni.launch
and also in fuerte use
rosdep install package_name
instead of rosmake package_name --rosdep-install
Originally posted by cagatay with karma: 1850 on 2012-07-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 10184,
"tags": "kinect, openni"
} |
Dalton's law for a gas mixture | Question:
If $\pu{200 mL}$ of $\ce{N2}$ at $\ce{25 ^\circ C}$ and a pressure of $\pu{250 torr}$ are mixed with $\pu{350 mL}$ of $\ce{O2}$ at $\pu{25 ^\circ C}$ and a pressure of $\pu{300 torr}$, so that the resulting volume is $\pu{300 mL}$, what would be the final pressure in $\pu{torr}$ of the mixture at $\pu{25 ^\circ C}$?
I understand that to find final pressure, I need to add the partial pressure of $\ce{N2}$ and $\ce{O2}$. I'm not sure how to find the partial pressure using Dalton's law. To find partial pressure of $\ce{N2}$ first:
$$p(\ce{N2}) = X(\ce{N2}) \cdot \pu{0.3289 atm}$$
where $X$ is the mole fraction. How do I find it?
Answer: There is no need to use the ideal gas law in $pV = nRT$ form or to find the mole fraction $X$. The process is isothermal ($T_1 = T_2 = \pu{25 ^\circ C}$, hence $p_1V_1 = p_2V_2$), so it's a matter of finding the sum of the partial pressures of each gaseous component $i$ in the system:
$$p = \sum_i{p_{2i}} = \sum_i{\frac{p_{1i}V_{1i}}{V_{2i}}}$$
Also, converting the pressure back and forth between $\pu{torr}$ and $\pu{atm}$ is counterproductive as you are explicitly asked to provide an answer in $\pu{torr}$.
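Sticking to torr throughout, the arithmetic is quick to check (a small Python calculation using the values stated in the question — note the $\ce{O2}$ sample is at $\pu{300 torr}$):

```python
# Isothermal mixing: p2 = p1 * V1 / V2 for each component (T held constant)
p_N2 = 250 * 200 / 300   # torr; N2: 250 torr in 200 mL -> 300 mL
p_O2 = 300 * 350 / 300   # torr; O2: 300 torr in 350 mL -> 300 mL
total = p_N2 + p_O2
print(round(total, 1))
```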
$$p = p_2(\ce{N2}) + p_2(\ce{O2}) = \frac{\pu{250 torr} \cdot \pu{200 mL}}{\pu{300 mL}} + \frac{\pu{300 torr} \cdot \pu{350 mL}}{\pu{300 mL}} \approx \pu{516.7 torr}$$ | {
"domain": "chemistry.stackexchange",
"id": 9216,
"tags": "physical-chemistry, gas-laws"
} |
When to choose character instead of factor in R? | Question: I am currently working on a dataset which contains a name attribute, which stands for a person's first name. After reading the csv file with read.csv, the variable is a factor by default (stringsAsFactors=TRUE) with ~10k levels. Since name does not reflect any group membership, I am uncertain whether to leave it as a factor.
Is it necessary to convert name to character? Are there some advantages in doing (or not doing) this? Does it even matter?
Answer: Factors are stored as numbers and a table of levels. If you have categorical data, storing it as a factor may save lots of memory.
For example, if you have a vector of length 1,000 stored as character and the strings are all 100 characters long, it will take about 100,000 bytes. If you store it as a factor, it will take about 8,000 bytes plus the sum of the lengths of the different factors.
Comparisons with factors should be quicker too because equality is tested by comparing the numbers, not the character values.
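The memory argument can be made concrete with a rough simulation of the two storage schemes (a Python sketch, not R; exact byte counts are implementation-specific and only indicative):

```python
import sys

# 1,000 name entries drawn from only 4 distinct values
names = ["Alexandra", "Bartholomew", "Christopher", "Dominique"] * 250

# "character"-style storage: one string per entry
char_bytes = sum(sys.getsizeof(s) for s in names)

# "factor"-style storage: integer codes plus one copy of each level
# (sys.getsizeof(codes) measures only the list's pointer array; small
# integers are shared objects, so this is a fair approximation)
levels = sorted(set(names))
codes = [levels.index(s) for s in names]
factor_bytes = sys.getsizeof(codes) + sum(sys.getsizeof(s) for s in levels)

print(char_bytes > factor_bytes)  # the factor-style encoding is much smaller
```

With ~10k levels for ~10k distinct names, as in the question, the level table is as big as the data itself and the saving disappears — which is why character is the better fit there.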
The advantage of keeping it as character comes when you want to add new items, since you are now changing the levels.
Store them as whatever makes the most sense for what the data represent. If name is not categorical, and it sounds like it isn't, then use character. | {
"domain": "datascience.stackexchange",
"id": 8773,
"tags": "r, data-wrangling"
} |
Is buckminsterfullerene aromatic? | Question: According to Wikipedia,
The $\ce{C60}$ molecule is extremely stable,[26] withstanding high temperatures and high pressures. The exposed surface of the structure can selectively react with other species while maintaining the spherical geometry.[27] Atoms and small molecules can be trapped within the molecule without reacting.
Fullerenes smaller than $\ce{C60}$ are distorted so heavily that they're not stable, even though $\ce{M@C28}$ is stable where $\ce{M\,=\,Ti, Zr, U}$.
Some of us have heard and learned about the "rules" of aromaticity: The molecule needs to be cyclic, conjugated, planar and obey Huckel's rule (i.e. the number of the electrons in $\pi$-system must be $4n+2$ where $n$ is an integer).
However, I'm now very skeptical of these so-called rules:
The cyclic rule is violated due to a proposed expansion of aromaticity. (See what is Y-aromaticity?)
The must-obey-Huckel rule is known to fail in polycyclic compounds. Coronene (figure 1) and pyrene (figure 2) are good examples, with 24 and 16 $\pi$ electrons, respectively.
Again, Huckel fails in sydnone. The rule tells you that it's aromatic, while it's not.
The planar rule is not a rule at all. We're talking about "2D" aromaticity when we're trying to figure out the $n$ in $4n+2$. The "3D" rule is as following:
In 2011, Jordi Poater and Miquel Solà extended the rule to determine when a fullerene species would be aromatic. They found that if there were $2n^2+2n+1$ π-electrons, then the fullerene would display aromatic properties. - Wikipedia
This would mean $\ce{C60}$ is not aromatic, since there is no integer $n$ for which $2n^2+2n+1 = 60$.
On the other hand, $\ce{C60-}$ is ($n = 5$). But then this rule strikes me as peculiar because then no neutral or evenly-charged fullerene would be aromatic. Furthermore, outside the page for the rule, Wikipedia never explicitly states that fullerene is not aromatic, just that fullerene is not superaromatic. And any info on superaromaticity is unavailable or unhelpful to me; including the Wikipedia "article" on that topic.
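One can tabulate the electron counts the $2n^2+2n+1$ rule allows (a quick illustrative check, not from the original post):

```python
# Electron counts satisfying the 2n^2 + 2n + 1 spherical-aromaticity rule.
def magic_counts(n_max):
    return [2 * n * n + 2 * n + 1 for n in range(n_max + 1)]

print(magic_counts(6))  # [1, 5, 13, 25, 41, 61, 85]
# 60 is absent (neutral C60) while 61 is present (C60^-, n = 5).
# Every value is odd (2n^2 + 2n is even, plus 1), so no neutral,
# even-electron fullerene can ever satisfy the rule.
```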
So, is $\ce{C60}$ aromatic? Why, or why not?
Answer: Aromaticity is not binary, but rather there are degrees of aromaticity. The degree of aromaticity in benzene is large, whereas the spiroaromaticity in spiro[4.4]nonatetraene is relatively small. The aromaticity in naphthalene is not twice that of benzene.
Aromaticity has come to mean a stabilization resulting from p-orbital (although other orbitals can also be involved) overlap in a pi-type system. As the examples above indicate, the stabilization can be large or small.
Let's consider $\ce{C_{60}}$:
Bond alternation is often taken as a sign of non-aromatic systems. In $\ce{C_{60}}$ there are different bond lengths, ~1.4 and 1.45 angstroms. However, this variation is on the same order as that found in polycyclic aromatic hydrocarbons, and less than that observed in linear polyenes.
Conclusion: aromatic, but less so than benzene.
Magnetic properties are related to electron delocalization and are often used to assess aromaticity. Both experiment and calculations suggest the existence of ring currents (diamagnetic and paramagnetic) in $\ce{C_{60}}$.
Conclusion: Although analysis is complex, analysis is consistent with at least some degree of aromaticity.
Reactivity - Substitution reactions are not possible as no hydrogens are present in $\ce{C_{60}}$. When an anion or radical is added to $\ce{C_{60}}$ the electron(s) are not delocalized over the entire fullerene structure. However, most addition reactions are reversible suggesting that there is some extra stability or aromaticity associated with $\ce{C_{60}}$.
Conclusion: Not as aromatic as benzene
Resonance energy calculations have been performed and give conflicting results, although most suggest a small stabilization. Theoretical analysis of the following isodesmic reaction
$$\ce{C_{60} + 120 CH4 -> 30 C2H4 + 60 C2H6}$$
suggested that it only took half as much energy to break all of the bonds in $\ce{C60}$ compared to the same bond-breaking reaction with the appropriate number of benzenes.
Conclusion: Some aromatic stabilization, but significantly less than benzene.
This brief overview suggests that $\ce{C_{60}}$ does display properties that are consistent with some degree of aromatic stabilization, albeit less than that found with benzene. | {
"domain": "chemistry.stackexchange",
"id": 4177,
"tags": "organic-chemistry, carbon-allotropes, aromaticity"
} |
Help using epos_hardware | Question:
Hello everyone!
I am trying to add 4 maxon motors controlled by 4 maxon drives (EPOS2 70/10) and I have been unsuccessful so far. I have communicated with the EPOS2 in Windows and have gone through the initial setup correctly. EDIT: I will summarize the post in order to make it easier to read.
At first I had some USB permission problem which was solved by giving the device special permissions in udev/rules.d. I thought that was not enough and made this post but as ahendrix pointed out probably only by rebooting my computer did the permission change take effect.
I am still trying to make the example work with my own EPOS2 + Motor and the following issue still puzzles me: I have edited the example.yaml file that came with the epos_hardware to fit my motor. Still when I run the example.launch file I get:
[ERROR] [1440402458.067846767]: Could not find motor
[ERROR] [1440402458.067882353]: Could not configure motor: my_joint_actuator
[FATAL] [1440402458.067895978]: Failed to initialize motors
I realized something trying to troubleshoot the problem. My device's serial number is:
Serial Number: 0x662080006193
The package documentation states:
~/motor_name/serial_number (string)
The serial number of the motor (in hex). This is used to locate the motor.
So I have introduced: serial_number: '662080006193'. No problems so far but whenever I try to run the example and check rqt_console I get this ROS_INFO message:
Initializing: 0x7461757463416e6f
Which could be the cause of the problem since to my understanding it is trying to connect to a different device (correct me if I'm wrong). I have tried introducing the number in a variety of ways but I either got the same number (0x7461757463416e6f) or zero (0x0). Does anybody have any clues as to what's happening behind the curtain? Is the number being converted to hex twice? I have tried introducing the decimal equivalent but then I get zero in rqt_console...
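A telling detail about that value: read as raw bytes it is printable ASCII (and byte-reversed it contains the fragment "onActuat", as in "actuator"), which suggests eight characters of some unrelated string were packed into the 64-bit serial rather than a parsed hex number. A quick illustrative check:

```python
# The hex value the node reported, interpreted as raw bytes.
mystery = bytes.fromhex("7461757463416e6f")
print(mystery)         # b'tautcAno'
print(mystery[::-1])   # b'onActuat' -- readable text when byte-reversed
```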
Now the latest update I have is I have tried running roswtf with epos_hardware's example.launch file and this is what I get:
ERROR The following packages have rpath issues in manifest.xml:
* epos_hardware: found flag "-L/opt/ros/indigo/share/controller_manager/lib", but no matching "-Wl,-rpath,/opt/ros/indigo/share/controller_manager/lib"
* controller_manager: found flag "-L/opt/ros/indigo/share/controller_manager/lib", but no matching "-Wl,-rpath,/opt/ros/indigo/share/controller_manager/lib"
Could this be the source of the problem or at least a part of it? How do I fix it? I have checked package.xml in both epos_hardware and controller_manager but I am not exactly sure about what to do. I have tried commenting the following line in controller_manager's package.xml but although roswtf gave me no errors I still get the same problem when I try to execute the example:
<export>
<cpp lflags="-L${prefix}/lib -lcontroller_manager" cflags="-I${prefix}/include"/>
</export>
Originally posted by maztentropy on ROS Answers with karma: 35 on 2015-08-21
Post score: 2
Answer:
I sort of made it work for now. As I initially suspected the problem was in the way the serial number is handled internally. I manually changed the file "epos.cpp" and recompiled and the issue was solved. This is the change I did in case anyone is interested:
if(!config_nh_.getParam("serial_number", serial_number_str)) {
ROS_ERROR("You must specify a serial number");
valid_ = false;
}
else {
ROS_ASSERT(SerialNumberFromHex(serial_number_str, &serial_number_));
}
I changed it to:
config_nh_.getParam("serial_number", serial_number_str);
ROS_ASSERT(SerialNumberFromHex(serial_number_str, &serial_number_));
I think that serial_number_str was not being collected properly but I am no expert. How can I make it known to the author?
Originally posted by maztentropy with karma: 35 on 2015-08-31
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2015-08-31:
According to the docs the serial nr should be specified in hex, as you had found out. Have you tried setting the parameter to 0x662080006193 (if that is your serial nr in hex)?
As to how to contact the author, use the issue tracker at https://github.com/RIVeR-Lab/epos_hardware.
Comment by maztentropy on 2015-08-31:
I did try that form also, but in the example provided in the package it was entered as '662080006193'; that's why I have used it for the post. Thanks for your interest anyway.
Comment by mitchell on 2015-09-02:
The serial number you have is probably in decimal. Try converting it to hex with the 0x prefix and see if that works. Feel free to add a PR to support both formats of SN. Not sure about the controller_manager issue, but I would try filing a bug against controller_manager
Comment by maztentropy on 2015-09-03:
I used the serial number obtained from "list_devices" (which I think is already hex). I have anyway tried different ways of inputting the number and none of them worked. The ROS_INFO message which parses the device you're connecting to kept saying "0x7461757463416e6f" or "0x0".
Comment by jdeleon on 2016-06-17:
@maztentropy did you finally get the EPOS work? I'm having the same 3 errors as you.
Comment by johnyang on 2017-09-13:
I'm running into the same problem here now. However, the exact same code works on a computer running Ubuntu 14.04 but failed on another machine running Ubuntu 14.02. If I removed ROS_ASSERT, then it works fine. Any explanation for this? | {
"domain": "robotics.stackexchange",
"id": 22507,
"tags": "ros, controller-manager, ros-indigo"
} |
Perpendicular Horizontal Velocities in a Projectile? | Question:
Two balls A and B are both thrown with horizontal initial velocities of 6 m/s and 8 m/s from the edge of a vertical cliff. Their initial velocities are perpendicular and the points where they hit the ground are 120 meters apart. Find the height of the cliff.
The answer given is height of cliff is 720 meters. But I am kind of confused as to how the horizontal velocities can be perpendicular? Is this scenario possible? Also after that if they are perpendicular, how do you go about finding the solution?
Answer: Imagine it as a 3-dimensional problem, not a 2-dimensional one. I assume you may ignore effects of air resistance. Suppose it's an east-west cliff, and we're standing on the south side of the edge. Then one ball is (at time $t=0$) thrown horizontally in the north-east direction, the other one north-west.
Their vertical velocity at time $t$ is entirely due to Earth's gravity and is equal to $gt$. Their vertical distance travelled is $\frac12gt^2$, so if the height of the cliff is $h$, that happens at a time $t_h$ when $\frac12gt^2=h \implies t_h=\sqrt{\frac{2h}g}.$
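Running the numbers from the problem through these relations (a quick check of my own, using $g = 10\ \mathrm{m/s^2}$ since the problem allows that approximation):

```python
import math

g = 10.0              # m/s^2, the approximation the problem permits
v_a, v_b = 6.0, 8.0   # horizontal speeds, thrown perpendicular to each other
separation = 120.0    # m between the landing points

relative_speed = math.hypot(v_a, v_b)   # 10 m/s, Pythagoras in the top view
t_h = separation / relative_speed       # 12 s of flight
h = 0.5 * g * t_h ** 2                  # height of the cliff
print(h)  # 720.0
```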
Now look at the horizontal distance they travelled in that time: $6t_h$ and $8t_h$. When viewed from above, their paths are straight lines (in NE and NW direction) and a right-angled triangle with leg lengths 6 and 8 has a third leg of length 10 (by Pythagoras' theorem). So $10t_h = 120$ and $t_h = 12$. You can plug that into the formula $\frac12gt^2$ and (if you're allowed to approximate $g$ with 10), the result is 720m. | {
"domain": "physics.stackexchange",
"id": 83447,
"tags": "homework-and-exercises, kinematics, projectile"
} |
$\omega$-automata where string is accepted iff a final state is accessible from starting state | Question: I am wondering if $\omega$-automata with the following acceptance condition are valid.
An input string is accepted iff one of the final states occurs at least once.
This differs from Buchi automata in that the final state only has to occur once, not infinitely often.
Does this kind of automata have a name? Is it interesting or important?
Answer: Yes, it was studied before. In one of the early papers on accepting infinitary languages, Landweber introduced five acceptance types that included those of Büchi and Muller. On the lowest level were two types that referred to the set of states entered, and the other three levels considered the more classic set of states entered infinitely often.
Your type is called 1-acceptance (if I recall right) and a string is accepted if a state from an accepting set $D$ is entered at least once. The dual 1'-acceptance required that the states entered are always within $D$.
One level up one requires that a state from $D$ is entered infinitely often (Büchi), respectively that from some moment on the states are within $D$.
These definitions can be considered not only for finite state automata, but also for automata with external storage, like pushdown automata. One common fact for all such automata is that the 1-acceptance (your type) is of low "topological" complexity. Those $\omega$-languages are of the form $L\cdot \Sigma^\omega$, where $L$ is an ordinary language of the type of automata. This means one cannot accept $(a^*b)^\omega$.
In general there is a clear link between topological complexity of the $\omega$-language and the acceptance type.
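A small illustration of why 1-acceptance yields only $L\cdot\Sigma^\omega$ languages (my own sketch, not from Landweber's paper): for a deterministic automaton, acceptance is decided by a reachability check on finite prefixes, and once the accepting set has been visited nothing can undo it.

```python
# 1-acceptance for a deterministic automaton: an omega-word is accepted iff
# its run enters the accepting set D at least once, i.e. some finite prefix
# already decides acceptance.
def run_visits_D(delta, start, D, prefix):
    """True if the run on this finite prefix ever enters D."""
    state = start
    if state in D:
        return True
    for sym in prefix:
        state = delta[(state, sym)]
        if state in D:
            return True
    return False

# Toy DFA over {a, b}: the accepting state 1 is entered on the first 'b'.
delta = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 1}

# Every omega-word extending an accepted prefix is accepted -- exactly the
# L . Sigma^omega shape, here (a*b) . Sigma^omega.
print(run_visits_D(delta, 0, {1}, "aaab"))   # True
print(run_visits_D(delta, 0, {1}, "aaaa"))   # False (no 'b' seen yet)
```

In particular nothing in this setup can enforce a condition "infinitely often", which is why $(a^*b)^\omega$ is out of reach.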
L.H. Landweber, Decision problems for $\omega$-automata, Math. Systems Theory 3 (1969) 376-384. | {
"domain": "cs.stackexchange",
"id": 18274,
"tags": "automata, finite-automata, omega-automata"
} |
Automating a WordPress install | Question: I do not have a whole lot to do over winter break, so I wrote this little script to automate a WordPress install (currently it can only install one instance) on a fresh Debian server (tested, working with Wheezy). It may be pretty sloppy because it's the first thing I've actually tried, but it's a start I guess. I was not too worried about security with this script, but I tried to handle the passwords as best as possible, and they are not printed out at any time (except in .my.cnf, which gets deleted).
I heard somewhere that it is better to print variables like ${DOCUMENT_ROOT} instead of just $DOCUMENT_ROOT. Are there any other recommended tips like this to make scripts perform better / easier to maintain?
#!/bin/bash
#auto wordpress installer
DOCUMENT_ROOT="/var/www/wordpress"
MYSQL_ROOT_PASS="$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1)"
## uses this server email to set up apache's config file
echo "Enter in the email for the server administrator:"
read SERVER_ADMIN_EMAIL
apt-get update
apt-get upgrade
## Set up passwords so mysql-server install doesn't have password prompt
debconf-set-selections <<< "mysql-server mysql-server/root_password password $MYSQL_ROOT_PASS"
debconf-set-selections <<< "mysql-server mysql-server/root_password_again password $MYSQL_ROOT_PASS"
## install the required packages to run
apt-get -y install apache2 install libapache2-mod-php5 install libapache2-mod-auth-mysql install php5-mysql
apt-get -y install mysql-server
## download and extract wordpress
wget http://wordpress.org/latest.tar.gz
tar -xzvf latest.tar.gz
## sets up variables for wordpress installation
MYSQL_DB=wordpress$(echo "$RANDOM")
MYSQL_USER=wordpress$(echo "$RANDOM")
MYSQL_USER_PASS="$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1)"
## creates a .my.cnf so you can run mysql from the command line without password prompt
printf "[mysql]\nuser=root\npassword=\""$MYSQL_ROOT_PASS"\"\n" > ~/.my.cnf
## adds a wordpress user with own password and creates database for wordpress
mysql --defaults-file=~/.my.cnf -e "create database $MYSQL_DB; create user "$MYSQL_USER"@localhost; set password for "$MYSQL_USER"@localhost = PASSWORD(\""$MYSQL_USER_PASS"\"); GRANT ALL PRIVILEGES ON "$MYSQL_DB".* TO "$MYSQL_USER"@localhost IDENTIFIED BY '"$MYSQL_USER_PASS"'; flush privileges;"
## removes the .my.cnf file which contains mysql's root password
rm -r ~/.my.cnf
## sets up wordpress to use the newly created user and password
cp ~/wordpress/wp-config-sample.php ~/wordpress/wp-config.php
sed -i s/database_name_here/$MYSQL_DB/ ~/wordpress/wp-config.php
sed -i s/username_here/$MYSQL_USER/ ~/wordpress/wp-config.php
sed -i s/password_here/$MYSQL_USER_PASS/ ~/wordpress/wp-config.php
## puts wordpress in the appropriate place and changes permissions
mv wordpress /var/www/
sudo chown www-data:www-data /var/www/wordpress -R
## configures apache to serve wordpress as the site root
cp /etc/apache2/sites-available/default ./default.bak
sed -i s/webmaster@localhost/$SERVER_ADMIN_EMAIL/ /etc/apache2/sites-available/default
sed -i s@/\var\/www@${DOCUMENT_ROOT}@ /etc/apache2/sites-available/default
service apache2 reload
## removes the password used to do an unattended install of mysql-server
echo PURGE | debconf-communicate mysql-server
## browse to this URL to configure the wordpress install
echo "browse to the url /wp-admin/install.php to configure wordpress"
Answer: This line is long and hard to read with many commands jammed inside:
mysql --defaults-file=~/.my.cnf -e "create database $MYSQL_DB; create user "$MYSQL_USER"@localhost; set password for "$MYSQL_USER"@localhost = PASSWORD(\""$MYSQL_USER_PASS"\"); GRANT ALL PRIVILEGES ON "$MYSQL_DB".* TO "$MYSQL_USER"@localhost IDENTIFIED BY '"$MYSQL_USER_PASS"'; flush privileges;"
A more readable way to write this:
cat << EOF | mysql --defaults-file=~/.my.cnf
create database $MYSQL_DB;
create user "$MYSQL_USER"@localhost;
set password for "$MYSQL_USER"@localhost = PASSWORD(\""$MYSQL_USER_PASS"\");
GRANT ALL PRIVILEGES ON "$MYSQL_DB".* TO "$MYSQL_USER"@localhost IDENTIFIED BY '"$MYSQL_USER_PASS"';
flush privileges;
EOF
Drop the -r here, as that's useful for recursively removing directories, but you have a simple file here:
rm -r ~/.my.cnf
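A further nitpick in the same spirit (my addition to the review): the password generation pipes `cat /dev/urandom` into `tr` — a useless use of `cat` — and `fold -w 16 | head -n 1` can be collapsed into `head -c 16`:

```shell
# Same 16-character alphanumeric password, without the extra cat process.
# LC_ALL=C keeps tr byte-oriented when reading raw /dev/urandom bytes.
PASS="$(LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16)"
echo "${#PASS}"   # 16
```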
It seems the script is designed to set up a single WordPress site per system. It would be useful to extract the logic of installing a WordPress site, so that you could set up multiple sites per system easily if needed. | {
"domain": "codereview.stackexchange",
"id": 17536,
"tags": "mysql, bash, linux, wordpress, installer"
} |
Calculating Efficiency from a $pV$ Diagram | Question: I Am trying the calculate the efficiency of this engine however, I'm not sure whether my result is making intuitive sense. The $pV$ diagram of the engine is as follows;
Here we note that the process $2\to3$ is an isothermal expansion of the engine. So, the efficiency of the engine is defined as;
$$\epsilon=\frac{W}{Q_H}$$
Where $W$ is the net work. So, to then determine the efficiency of the engine, we must determine the net work of the system and the expression for $Q_H$. Namely, we note that the net work of the system is the area enclosed by the cycle. This can in turn be given by;
$$W_{net}=NkT_h\int_{V_i}^{V_f}\frac{1}{V}dV-P_i\int_{V_i}^{V_f} dV=NkT_h\ln{(\frac{V_f}{V_i})}-P_i\Delta V=NkT_h\ln{(\frac{V_f}{V_i})}-Nk\Delta T$$
And since $Q_H$ is the heat added during the isochoric process,
$$Q_H=C_V\Delta T$$
Now we can substitute these expressions into our formula for $\epsilon$;
$$\epsilon=\frac{NkT_h\ln{(\frac{V_f}{V_i})}-Nk\Delta T}{C_V\Delta T}$$
And, given that $\frac{V_f}{V_i}=\frac{T_h}{T_c}$ for this process, where $T_c$ is the temperature of the engine at $(1)$, we can rewrite the efficiency;
$$\epsilon=\frac{NkT_h\ln{(\frac{T_h}{T_c})}-Nk\Delta T}{C_V\Delta T}$$
Also, if we assume the gas to be monatomic, $C_V=\frac{3}{2}Nk$, which again simplifies the expression to;
$$\epsilon=\frac{2}{3}\left(\frac{T_h\ln{(\frac{T_h}{T_c})}-\Delta T}{\Delta T}\right)=\frac{2}{3}\left(\frac{T_h\ln{(\frac{T_h}{T_c})}-(T_h-T_c)}{(T_h-T_c)}\right)$$
Is this process correct? When I consider the limiting case where $\frac{T_h}{T_c}\to \infty$ these seems to be no maximum efficiency as I would expect. Any help would be greatly appreciated!
Answer: Your equation for the heat added is incorrect. There is also heat added during isothermal expansion. So heat added is $$Q=C_v(T_h-T_l)+nkT_h\ln{(V_f/V_i)}=C_vT_h\left(1-\frac{V_i}{V_f}\right)+nkT_h\ln{(V_f/V_i)}$$and the work done is $$W=nkT_h\ln{(V_f/V_i)}-nkT_h\left(1-\frac{V_i}{V_f}\right)$$So the efficiency is:
$$\epsilon=\frac{\ln{(V_f/V_i)}-\left(1-\frac{V_i}{V_f}\right)}{\ln{(V_f/V_i)}+\frac{C_v}{nk}\left(1-\frac{V_i}{V_f}\right)}=\frac{1-\alpha}{1+\frac{C_v}{nk}\alpha}$$ with $$\alpha=\frac{\left(1-\frac{V_i}{V_f}\right)}{\ln{(V_f/V_i)}}$$ | {
"domain": "physics.stackexchange",
"id": 66530,
"tags": "homework-and-exercises, thermodynamics, statistical-mechanics, condensed-matter"
} |
Multi-Agent simulation causing trouble in Rviz | Question: I'm trying multi-agent simulations using ROSBOT 2.0 by Husarion. I'm comfortable with a single agent, but when it comes to multiple agents I'm able to spawn them and control them independently — the problem comes with Rviz. I'm unable to visualize the robot in Rviz. It spawns upside down, and I'm unable to see the camera feed and lidar output, even though the topics are present in the rostopic list.
The below is the launch file which I run.
<launch>
<arg name="world_name" default="worlds/empty.world"/>
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="world_name" value="$(arg world_name)"/> <!-- world_name is wrt GAZEBO_RESOURCE_PATH environment variable -->
<arg name="paused" value="false"/>
<arg name="use_sim_time" value="true"/>
<arg name="gui" value="true"/>
<arg name="headless" value="false"/>
<arg name="debug" value="true"/>
<arg name="verbose" value="true"/>
</include>
<include file="$(find rosbot_gazebo)/launch/multi_rosbot.launch"/>
</launch>
Below is multi_rosbot.launch file.
<launch>
<param name="robot_description" command="$(find xacro)/xacro.py '$(find rosbot_description)/urdf/rosbot.xacro'"/>
<!-- BEGIN ROBOT 1-->
<group ns="rover1">
<param name="tf_prefix" value="rover1" />
<include file="$(find rosbot_gazebo)/launch/rosbot_empty_world.launch" >
<arg name="init_pose" value="-x 0 -y 0 -z 0" />
<arg name="robot_name" value="rover1" />
</include>
</group>
<!-- BEGIN ROBOT 2-->
<group ns="rover2">
<param name="tf_prefix" value="rover2" />
<include file="$(find rosbot_gazebo)/launch/rosbot_empty_world.launch" >
<arg name="init_pose" value="-x -2 -y 0 -z 0" />
<arg name="robot_name" value="rover2" />
</include>
</group>
<!-- BEGIN ROBOT 3 -->
<group ns="rover3">
<param name="tf_prefix" value="rover3" />
<include file="$(find rosbot_gazebo)/launch/rosbot_empty_world.launch" >
<arg name="init_pose" value="-x -4 -y 0 -z 0" />
<arg name="robot_name" value="rover3" />
</include>
</group>
</launch>
Below is the rosbot_empty_world.launch
<launch>
<arg name="init_pose"/>
<arg name="robot_name"/>
<param name="robot_description" command="$(find xacro)/xacro.py '$(find rosbot_description)/urdf/rosbot.xacro'"/>
<arg name="model" default="$(find xacro)/xacro.py '$(find rosbot_description)/urdf/rosbot.xacro'"/>
<node name="rosbot_spawn" pkg="gazebo_ros" type="spawn_model" ns="/$(arg robot_name)"
args="$(arg init_pose) -unpause -urdf -model $(arg robot_name) -param /robot_description " respawn="false" output="screen"/>
<node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher" ns="/$(arg robot_name)" args="--namespace=/$(arg robot_name)">
<param name="publish_frequency" type="double" value="30.0" />
<param name="tf_prefix" value="/$(arg robot_name)" type="str"/>
</node>
<rosparam command="load" file="$(find joint_state_controller)/joint_state_controller.yaml" ns="/$(arg robot_name)"/>
<node name="controller_spawner_$(arg robot_name)" pkg="controller_manager" type="spawner" ns="/$(arg robot_name)"
respawn="false" args=" joint_state_controller --namespace=/$(arg robot_name)" output="screen"/>
</launch>
I cannot visualize anything in Rviz. The robot spawns upside down, and the camera is blank.
Here's the rostopic list output.
wally1002@wally:~$ rostopic list
/clock
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
/rosout
/rosout_agg
/rover1/camera/depth/camera_info
/rover1/camera/depth/image_raw
/rover1/camera/depth/points
/rover1/camera/parameter_descriptions
/rover1/camera/parameter_updates
/rover1/camera/rgb/camera_info
/rover1/camera/rgb/image_raw
/rover1/camera/rgb/image_raw/compressed
/rover1/camera/rgb/image_raw/compressed/parameter_descriptions
/rover1/camera/rgb/image_raw/compressed/parameter_updates
/rover1/camera/rgb/image_raw/compressedDepth
/rover1/camera/rgb/image_raw/compressedDepth/parameter_descriptions
/rover1/camera/rgb/image_raw/compressedDepth/parameter_updates
/rover1/camera/rgb/image_raw/theora
/rover1/camera/rgb/image_raw/theora/parameter_descriptions
/rover1/camera/rgb/image_raw/theora/parameter_updates
/rover1/cmd_vel
/rover1/imu
/rover1/joint_states
/rover1/odom
/rover1/range/fl
/rover1/range/fr
/rover1/range/rl
/rover1/range/rr
/rover1/scan
/rover2/camera/depth/camera_info
/rover2/camera/depth/image_raw
/rover2/camera/depth/points
/rover2/camera/parameter_descriptions
/rover2/camera/parameter_updates
/rover2/camera/rgb/camera_info
/rover2/camera/rgb/image_raw
/rover2/camera/rgb/image_raw/compressed
/rover2/camera/rgb/image_raw/compressed/parameter_descriptions
/rover2/camera/rgb/image_raw/compressed/parameter_updates
/rover2/camera/rgb/image_raw/compressedDepth
/rover2/camera/rgb/image_raw/compressedDepth/parameter_descriptions
/rover2/camera/rgb/image_raw/compressedDepth/parameter_updates
/rover2/camera/rgb/image_raw/theora
/rover2/camera/rgb/image_raw/theora/parameter_descriptions
/rover2/camera/rgb/image_raw/theora/parameter_updates
/rover2/cmd_vel
/rover2/imu
/rover2/joint_states
/rover2/odom
/rover2/range/fl
/rover2/range/fr
/rover2/range/rl
/rover2/range/rr
/rover2/scan
/rover3/camera/depth/camera_info
/rover3/camera/depth/image_raw
/rover3/camera/depth/points
/rover3/camera/parameter_descriptions
/rover3/camera/parameter_updates
/rover3/camera/rgb/camera_info
/rover3/camera/rgb/image_raw
/rover3/camera/rgb/image_raw/compressed
/rover3/camera/rgb/image_raw/compressed/parameter_descriptions
/rover3/camera/rgb/image_raw/compressed/parameter_updates
/rover3/camera/rgb/image_raw/compressedDepth
/rover3/camera/rgb/image_raw/compressedDepth/parameter_descriptions
/rover3/camera/rgb/image_raw/compressedDepth/parameter_updates
/rover3/camera/rgb/image_raw/theora
/rover3/camera/rgb/image_raw/theora/parameter_descriptions
/rover3/camera/rgb/image_raw/theora/parameter_updates
/rover3/cmd_vel
/rover3/imu
/rover3/joint_states
/rover3/odom
/rover3/range/fl
/rover3/range/fr
/rover3/range/rl
/rover3/range/rr
/rover3/scan
/tf
/tf_static
I'm unable to solve this. Any help is appreciated. Thank You.
Answer: I have managed to solve the problem.
It was a problem with the Fixed Frame used in the visualization:
tf_prefix was not used correctly.
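For anyone else debugging this: the usual mismatch is that robot_state_publisher publishes prefixed frames (rover1/odom, rover1/base_link, ...) while RViz's Fixed Frame is left at an unprefixed name — and a leading slash in tf_prefix (as in value="/$(arg robot_name)" above) can produce frame names that tf2 rejects. A sketch of the kind of change involved (illustrative, not the poster's exact fix):

```xml
<node pkg="robot_state_publisher" type="robot_state_publisher"
      name="robot_state_publisher" ns="/rover1">
  <param name="publish_frequency" type="double" value="30.0"/>
  <!-- no leading slash: frames become rover1/odom, rover1/base_link, ... -->
  <param name="tf_prefix" value="rover1" type="str"/>
</node>
<!-- In RViz, set Global Options -> Fixed Frame to a prefixed frame such as
     rover1/odom so the displays resolve against the right TF tree. -->
```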
You can look at this answer for more details. Thanks to anyone who tried. | {
"domain": "robotics.stackexchange",
"id": 2294,
"tags": "ros, rviz"
} |
Why does hydrogen ionization happen in HII regions? | Question: Why does hydrogen ionization happen in HII regions? Why is the hydrogen there ionized?
Answer: Stars are responsible.
HII regions$^\dagger$ can refer to several things, but usually I guess one thinks of the volumes around star-forming regions. The more massive a star is, the faster it burns its fuel, and at a higher temperature, meaning that the peak of their spectra are more toward the high frequencies. The most massive stars of a stellar population — the so-called O and B stars — produce enough photons with wavelengths below the hydrogen ionization threshold of $\lambda = 912$ Å that they carve out bubbles in their surrounding HI clouds, giving rise to HII regions.
Right: The HII region NGC 604 (from Wikipedia). Left: The spectra of three different stars. Only the B star has a significant portion of its spectrum above the hydrogen ionization threshold. Note the logarithmic scale on the intensity (from here).
Because of the high densities, the HII quickly recombines to HI. If the recombination goes directly to the ground state, a new ionizing photon is emitted, which again is absorbed by a hydrogen atom, but if it goes to one of the higher states, the emitted radiation is no longer capable of ionizing the gas. In this way the ionizing radiation is converted into photons of specific wavelengths, corresponding to the energy differences between the excited levels of hydrogen, most notably the Lyman $\alpha$ emission line with $\lambda = 1216$ Å.
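As a quick cross-check of the wavelengths quoted here (my own aside, not part of the original answer), converting them to photon energies recovers the familiar hydrogen values of 13.6 eV (ionization) and 10.2 eV (Lyman $\alpha$):

```python
# Photon energy E = h*c/lambda for the wavelengths quoted above.
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # J per eV

def photon_energy_eV(wavelength_angstrom):
    return H * C / (wavelength_angstrom * 1e-10) / EV

print(round(photon_energy_eV(912), 1))   # 13.6 -- hydrogen ionization energy
print(round(photon_energy_eV(1216), 1))  # 10.2 -- Lyman-alpha photon
```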
Because hydrogen is the most abundant element in the Universe, and because Lyman $\alpha$ is the most common transition, the Lyman $\alpha$ line is an excellent probe of the most distant galaxies where other wavelengths are not observable. This is especially so because the most distant galaxies are also the earliest and hence still in the process of forming, meaning a high star formation rate which, in turn, means that the shortlived OB stars are still present.
In addition to this distinct regions of HII, ionized hydrogen also exist in a more diffuse component between the stars of a galaxy, in huge bubbles caused by stellar feedback and supernovae, and in the intergalactic medium.
$^\dagger$The terms HI and HII refers to neutral and ionized hydrogen, respectively. | {
"domain": "astronomy.stackexchange",
"id": 1221,
"tags": "nebula, hydrogen"
} |
error unable communicate with master | Question:
Hello,
I'm trying to setup a workstation to work with a Turtlebot.
I've followed the tutorial here : wiki.ros.org/Robots/Turtlebot/Network%20Setup
Robot IP is : 192.168.0.152
Workstation IP is 192.168.0.122
In .bashrc
I set ROS_MASTER_URI in both .bashrc files to this value: http://192.168.0.152:11311
Both hostnames are set to the IP of the respective machine
I can ping in both ways.
roscore is launched on the robot, and rostopic list returns "/rosout" and "/rosout_agg"
But still when I launch "rostopic list" on the workstation, it doesn't work and returns "Unable to connect with master"
Does anyone have any leads we could follow, please?
Thank you :D
Originally posted by gautier on ROS Answers with karma: 16 on 2019-03-12
Post score: 0
Original comments
Comment by billy on 2019-03-12:
Have you set the IP of the workstation as ROS id? If workstation isn't properly identified, roscore can't connect it to the topic.
"export ROS_IP=192.168.0.122" in the bashrc file on the workstation?
Comment by gautier on 2019-03-13:
hello,
thanks for your answer but the bashrc file already have "export ROS_IP=192.168.0.122"
have you other ideas ?
Answer:
Got it!! It was the firewall of the machine linked to the robot. Thanks for your help anyway
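For readers hitting the same symptom, a quick way to distinguish a firewall/connectivity problem from a ROS configuration problem is to test whether the master's TCP port is reachable at all (my own addition, not from the original thread):

```python
import socket

def master_reachable(host, port=11311, timeout=2.0):
    """True if a plain TCP connection to the ROS master port succeeds.
    If ping works but this fails, suspect a firewall, not ROS config."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. from the workstation: master_reachable("192.168.0.152")
```

If this returns False while ping succeeds, a firewall (e.g. ufw/iptables on either machine) is the likely culprit — which was the resolution here.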
Originally posted by gautier with karma: 16 on 2019-03-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 32640,
"tags": "ros, roscore, ros-indigo"
} |
Could an asteroid impact cause radioactive fallout? | Question: Could a (regional or global) fallout of radioactive material be a "bonus" disaster effect of an asteroid impact? My reasoning for how such a scenario maybe could be possible:
A) Some asteroids may consist of lots of radioactive heavy metals because they were initially formed from ejecta from the cores of planets during planetary collisions. Maybe even an airburst of a rather small such asteroid could cause widespread, dangerous radioactive fallout?
As I understand it, Earth's initial uranium has fallen into the core. The uranium mines dig into concentrated deposits of uranium in the crust which came from uranium rich asteroids.
B) If any kind of asteroid large enough hits a uranium rich part of the Earth crust (making it the second one on the same spot), it could maybe eject much of Earth's own uranium and thus cause a dangerous fallout?
Answer: I'm not aware of any relevant uranium ore deposit that is related to meteoritic material.
Dangerous fallout is caused overwhelmingly by short-lived radioactive isotopes. Those isotopes are rare in meteorites as well as in rock on Earth.
Natural nuclear reactors, which would produce short-lived radioactive isotopes, don't occur any more on Earth, since the natural U-235/U-238 isotope ratio is too low, now. This is due to the half-life of U-235 of about 713 million years in contrast to 4,468 million years for U-238.
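Running those half-lives backwards makes the point quantitative (my illustrative calculation, using the answer's figures and today's ~0.72% U-235 fraction): around two billion years ago the natural enrichment was in the few-percent range that natural reactors required, whereas today it is far too low.

```python
T_HALF_235 = 713.0    # million years (the answer's figure)
T_HALF_238 = 4468.0   # million years
RATIO_TODAY = 0.0072  # present-day U-235 / U-238 atom ratio (~0.72%)

def u_ratio(mya_ago):
    """U-235/U-238 ratio `mya_ago` million years in the past."""
    back_235 = 2.0 ** (mya_ago / T_HALF_235)
    back_238 = 2.0 ** (mya_ago / T_HALF_238)
    return RATIO_TODAY * back_235 / back_238

print(round(100 * u_ratio(0), 2))     # 0.72 (today)
print(round(100 * u_ratio(2000), 1))  # ~3.7 (two billion years ago)
```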
If an asteroid would impact into a uranium ore deposit, most of the molten ejected material would fall back to Earth as tektites.
Evaporated ejecta and aerosols (dust) would probably increase health risk for some time, until it's washed out from the atmosphere. But other, less-radioactive aerosols would be a health risk, too. | {
"domain": "astronomy.stackexchange",
"id": 351,
"tags": "asteroids, impact"
} |
A capacitor with a conductor between the plates | Question: Consider a parallel plate capacitor of area $A$ with a distance between the plates $d$, disconnected from the battery. I wonder how the capacitance of the system would change if we placed a conductor or an insulator (which also has some width $l < d$) of smaller area $A'$ between the plates?
Apparently, we can't neglect edge effects in such a scenario, so it is not clear how to calculate the voltage between the plates. This system is equivalent to two capacitors in series (1-3 and 4-2); however, both of these capacitors seem to have different charges on each plate. In the case of a conductor, since the field inside is zero, the charge on each side must be $Q_\text{conductor}=Q_\text{plates}\,A'/A$, and in the case of an insulator, since the field inside must be decreased by a factor $\epsilon$, $Q_\text{insulator}=Q_\text{plates}\,A'\left(1-1/\epsilon\right)/A$. So we get two capacitors of different surface areas and with different charges. How can I calculate the capacitance of either of them? Is there another way to deal with the initial problem?
Answer: You have two capacitors in parallel - represented by the two areas - and one of those capacitors is actually two capacitors in series.
Those two series capacitors either have different dielectrics (one of each type), or, if it is a conductor that is placed between the plates, they are just two capacitors with the same dielectric.
So a pair of capacitors in series in parallel with another capacitor.
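Neglecting edge effects, this equivalent circuit can be evaluated numerically. The sketch below is an idealized estimate with made-up dimensions: the column of area $A'$ containing a centered conducting slab is treated as two air-gap capacitors in series, in parallel with the remaining column of area $A - A'$:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_cap(area, gap, eps_r=1.0):
    """Ideal parallel-plate capacitance (edge effects neglected)."""
    return eps_r * EPS0 * area / gap

def cap_with_conductor(A, A_prime, d, l):
    """Capacitance with a conducting slab of area A_prime and thickness l
    centered between plates of area A separated by d.  The slab column is
    two series capacitors whose gaps sum to d - l; the rest of the plate
    area is a plain capacitor in parallel."""
    gap = (d - l) / 2.0
    c1 = plate_cap(A_prime, gap)
    c2 = plate_cap(A_prime, gap)
    series = c1 * c2 / (c1 + c2)   # equals eps0 * A_prime / (d - l)
    return series + plate_cap(A - A_prime, d)

# Example numbers (assumed): 0.01 m^2 plates, 1 mm gap, slab covering
# half the area with half the gap thickness.
print(cap_with_conductor(A=0.01, A_prime=0.005, d=1e-3, l=0.5e-3))
```

Note the series combination collapses to $\epsilon_0 A'/(d-l)$, so the conductor's position between the plates drops out - which is the "order does not matter" point made below in this answer.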
Later
If the shaded bit is a conductor then there is no capacitor there just a conducting wire.
You can move the shaded area to any convenient spot. So in my diagram it goes top right.
Later still
Hopefully this shows that order does not matter. | {
"domain": "physics.stackexchange",
"id": 28828,
"tags": "homework-and-exercises, electrostatics, capacitance"
} |
Why is gas mileage typically better when traveling on the highway than on country roads or in the city? | Question: The gas mileage of my vehicle tends to improve the more I have been driving on the interstate on that tank of gas - if I go through a tank of gas without at any point driving on the interstate, I will typically get 24-26 MPG, but when I have driven almost exclusively on the interstate, I will typically get 26-29 MPG.
What confuses me is that, when traveling on the interstate, I am driving great distances at higher RPMs, which I would expect to correlate to more gas usage and thus worse gas mileage. Why does the inverse seem to be true?
Answer: It is all about engine operating regime and how much braking and re-starting you do
The efficiency of internal combustion engines varies enormously over their range of safe operating conditions. In miles per gallon terms, of course, idling at rest is as bad as it could possibly get. Manufacturers take some trouble (in designing the whole powertrain) to ensure that the engine runs in a relatively efficient regime at highway speeds.
Accelerating the car takes more energy than just tooling along at a steady speed (that extra kinetic energy has to come from the fuel, after all), but when you brake that energy is not recovered---it is converted to heat. So city driving with its stops and starts means that you keep pouring energy into kinetic form and then promptly converting it to heat so that you have to go get some more from the fuel tank. Over and over again. That can't be good.
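A rough back-of-the-envelope number makes the point. All figures below are assumptions chosen only for illustration: a ~1500 kg car, a city speed of 15 m/s, gasoline at ~34 MJ per litre, and ~25% engine efficiency:

```python
def braking_loss_litres(mass_kg=1500.0, speed_ms=15.0,
                        fuel_mj_per_litre=34.0, engine_eff=0.25):
    """Fuel (litres) burned to replace the kinetic energy thrown away
    as heat in one full stop-and-restart cycle."""
    kinetic_j = 0.5 * mass_kg * speed_ms ** 2   # energy lost to the brakes
    fuel_j = kinetic_j / engine_eff             # fuel energy needed to restore it
    return fuel_j / (fuel_mj_per_litre * 1e6)

# ~169 kJ per stop; with a 25%-efficient engine that is roughly 0.02 L
# of fuel per traffic light -- which adds up quickly in city driving.
print(braking_loss_litres())
```

Because kinetic energy scales with the square of speed, each stop from twice the speed costs four times the fuel.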
Try it with an electric or a good hybrid and your mileage will go down on the highway because it is dominated by air resistance. | {
"domain": "physics.stackexchange",
"id": 56499,
"tags": "forces"
} |
Is there a way to quantify the amount of radiofrequency interference? | Question: Is there a way to quantify the amount of radiofrequency interference? Do any units exist that quantifies how much radiofrequency interference is there?
Answer: The severity of radio frequency interference is described by comparing the power content of the interfering noise to that of the desired signal. Power ratios are expressed in decibels, and the ratio of signal power to noise power gives the signal-to-noise ratio.
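As a small sketch of the arithmetic, the signal-to-noise ratio in decibels is just ten times the base-10 logarithm of the power ratio:

```python
import math

def snr_db(p_signal, p_noise):
    """Signal-to-noise ratio in decibels from two power levels
    (any consistent units, e.g. watts)."""
    return 10.0 * math.log10(p_signal / p_noise)

# Doubling the signal power relative to the noise gives ~+3 dB,
# the rule of thumb quoted in this answer.
print(snr_db(2.0, 1.0))   # ~3.01 dB
```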
For a signal to be readable it must be stronger than the underlying noise interference (this is called the noise floor). Decibel math states that for two signals that differ by 3 decibels, the power content of the stronger signal is twice that of the weaker signal. So to lift a signal above the noise floor by +3dB requires the signal to be twice the strength of the noise floor. | {
"domain": "engineering.stackexchange",
"id": 3965,
"tags": "electromagnetism, radio"
} |
Access field in message inside a message | Question:
Hello
Being new to ROS, I've stumbled into a stupid problem..
I'm writing a Python listener, and I want to get the orientation from an IMU.
The variable I want to access can be echoed with: rostopic echo /imu/orientation/z
Here is what I have so far - which is not really working out...:
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Imu
from geometry_msgs.msg import Quaternion
def imu_callback(data):
    rospy.loginfo(Imu.orientation.z)
    # do something with z

def imu_listener():
    rospy.init_node('listener', anonymous=True)
    rospy.Subscriber("imu", Imu, imu_callback)
    rospy.spin()

if __name__ == '__main__':
    imu_listener()
Originally posted by Oevli on ROS Answers with karma: 30 on 2015-11-24
Post score: 0
Answer:
In imu_callback try
rospy.loginfo(data.orientation.z)
You need to use the message variable passed into your callback function (which you have named data), not the message type.
Be careful though, the orientation is represented as a quaternion, so if you are thinking that the z component is yaw you are going to have issues. See this answer for ways to convert rotations in rospy.
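The linked answer covers the rospy options; as a minimal self-contained sketch, here is the standard yaw-from-quaternion formula in plain Python (the argument names mirror the Imu orientation fields, but this is not the tf library itself):

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Yaw (rotation about the z axis, in radians) from a unit quaternion.
    Note this is NOT simply the quaternion's z component."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))

# A 90-degree yaw: q = (0, 0, sin(45 deg), cos(45 deg))
print(yaw_from_quaternion(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
```

Inside imu_callback this would be called as yaw_from_quaternion(data.orientation.x, data.orientation.y, data.orientation.z, data.orientation.w).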
Originally posted by Thomas D with karma: 4347 on 2015-11-24
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Oevli on 2015-11-24:
Thanks :-) | {
"domain": "robotics.stackexchange",
"id": 23070,
"tags": "ros"
} |
Optimizing search by id in some data in JSON structure | Question: I have an object created from JSON data. I need to retrieve an object by its id, and I would like to know if a faster way exists, considering I am using a for..in loop.
var test = {
id: 0,
cnt: [
{
id: 1,
cnt: [
{
id: 2,
cnt: [
{
id: 3,
cnt:[]
}
]
}
]
},
]
};
getById: function (id) {
function getByIdInner(obj, id) {
var result;
for (var p in obj) {
if (obj.id == id) {
return obj;
} else {
if (typeof obj[p] === 'object') {
result = getByIdInner(obj[p], id);
if (result) {
return result;
}
}
}
}
return result;
}
return getByIdInner(test, id);
},
Answer: It looks like this is just a snippet of the code you are using, but if this is the standard structure, you could simplify your search by doing something like:
getById: function(id) {
function innerGetById(obj, id) {
var result;
if (obj.id == id) {
return obj;
} else {
for (var i = 0; i < obj.cnt.length; i++) {
result = innerGetById(obj.cnt[i], id);
if (result) {
return result;
}
}
}
}
return innerGetById(test, id);
}
This will only work if you know the names of the keys you are dealing with, if that is unknown, you will need to do a for in loop like you have. The key here is that you are testing for the id outside of the loop through the array.
I would also define innerGetById outside of your getById function if possible. I just wrote it this way because I don't know the context that you are defining the getById function in and it looks like you may be creating it as part of an object.
To answer your question, I'm not sure that a faster way exists than manually searching through the structure unless you are willing to index the structure before doing the search. If you want to do that, you can use the same sort of recursive method, but create a flat object with the ids as keys and the associated object as the value. This will make lookups very simple.
You can turn your test object into something like this:
lookupTable = {
0: { id: 0, cnt: [ ... ] },
1: { id: 1, cnt: [ ... ] },
2: { id: 2, cnt: [ ... ] },
etc.
}
By using something like:
var lookupTable = {};
function buildLookupTable(obj) {
lookupTable[obj.id] = obj;
for (var i = 0; i < obj.cnt.length; i++) {
buildLookupTable(obj.cnt[i]);
}
}
buildLookupTable(test); // Using your original test data
Then your lookup is simply: lookupTable[id] | {
"domain": "codereview.stackexchange",
"id": 11911,
"tags": "javascript, performance"
} |
Is it possible to sustain an net electric charge density in a moving conductor? (MHD application) | Question: The conservation of electric charge, deduced from the divergence of the Maxwell-Ampère equation, takes the form:
$$ \nabla \cdot \textbf{J} = -\frac{\partial \rho_e}{\partial t}.$$
For low frequency application, the quasi-magnetostatic approximation leads us to:
$$\nabla \times \textbf{B} = \mu_0\textbf{J} \qquad\text{and}\qquad \nabla \cdot \textbf{J} = 0.$$
This relationship is typical of highly conductive media in which the phenomenon of electric charge relaxation appears. This can be illustrated by taking the divergence of Ohm's law [$\textbf{J} = \sigma \left(\textbf{E}+\textbf{u}\times\textbf{B}\right)$] associated with the Maxwell-Gauss law:
$$\frac{\partial\rho_e}{\partial t}+\frac{\sigma}{\varepsilon_0}\rho_e+\sigma\nabla\cdot(\textbf{u}\times\textbf{B})=0.$$
If we first assume a static situation where $\textbf{u}= \textbf{0}$, the previous equation
is simplified and the solution is:
$$\rho_e(t) = \rho_e(0)\exp(-t/\tau),$$
where $\tau = \varepsilon_0/\sigma$ is a characteristic charge disappearance time and is about $10^{-18}$ s in
metallic media. The charge density is therefore approximately zero
in stationary conductors after a short transient regime. In the case of a moving
conductor however, and by neglecting the first term of the previous equation from the previous analysis, the charge density is given by:
$$\rho_e = -\varepsilon_0\nabla\cdot(\textbf{u}\times\textbf{B}).$$
It seems therefore possible to maintain a volumetric charge density in a moving conductor.
My questions are the following : If we take a moving conductive fluid and manage to produce a diverging $(\textbf{u}\times\textbf{B})$ field, can we really sustain a net volumetric charge density in the bulk (and not only a surface charge density)? If so, has it been observed experimentally? Is it measurable? If $\textbf{B}$ is oscillating, does it mean that new currents appear?
The whole argument is taken from:
[1] P.A. Davidson. Introduction to magnetohydrodynamics, 2nd edition. Cambridge Texts in Applied Mathematics (2017) and
[2] J.A. Shercliff. A textbook of magnetohydrodynamics. Pergamon Press (1965).
Answer: Yes, the simplest example is a solid conductive disk rotating in external magnetic field, with lines of force perpendicular to the disk. Charge of one sign concentrates on the rim, charge of the opposite sign is distributed in the disk (with uniform spatial density). A disk made of liquid would behave the same - as long as it is conductive and the flow in it is not too mixing (turbulent), the charge separation effect will be present.
So the effect is really due to motion of conductor in external magnetic field ($\mathbf u $ is velocity of the conductor element), whether the conductor is liquid or not does not matter.
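For the rotating-disk example, the formula $\rho_e = -\varepsilon_0\nabla\cdot(\textbf{u}\times\textbf{B})$ from the question can be evaluated in closed form: with $\textbf{u} = \boldsymbol{\omega}\times\textbf{r}$ and a uniform axial $\textbf{B}$, one finds $\nabla\cdot(\textbf{u}\times\textbf{B}) = 2\omega B$, so the bulk density is uniform. A quick numeric check, with example values for $\omega$ and $B$ assumed only for illustration:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def disk_charge_density(omega, b_field):
    """Uniform bulk charge density in a conducting disk spinning at
    omega (rad/s) in a uniform axial field B (tesla):
    rho = -eps0 * div(u x B) = -2 * eps0 * omega * B."""
    return -2.0 * EPS0 * omega * b_field

# Even for a fast disk (1000 rad/s) in a strong 1 T field the density
# is tiny, about -1.8e-8 C/m^3, which suggests why the bulk effect is
# hard to observe directly.
print(disk_charge_density(1000.0, 1.0))
```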
In the inertial frame co-moving with a disk element, there is an "induced" electric field due to the magnet $\mathbf E_m'$ that has value $\mathbf E_m' = \mathbf u\times\mathbf B$; this follows from the rules of how fields transform in special relativity. This electric field pushes on the free charge in the conductor; in case of $\mathbf B$ parallel to $\boldsymbol{\omega}$, it pushes positive charge out towards the rim. When the disk is spun up, or magnetic field is increased from zero, it thus creates a short-lived current that redistributes charge in the disk. Due to this redistribution, additional electric field $\mathbf E_{d}'$ appears in the same frame, which is due to charge in the disk: it is the sum/integral of all the Coulomb fields of the charges in the disk and its rim. Very quickly, an equilibrium charge distribution and its electric field $\mathbf E_{d}'$ is established, where in the disk, $\mathbf E_{d}' = -\mathbf E_m' = -\mathbf u\times\mathbf B$, so total force is zero (ignoring the centripetal force needed to make the charge go in circles, which is negligible for achievable angular velocities). | {
"domain": "physics.stackexchange",
"id": 97452,
"tags": "electromagnetism, electric-fields, charge, magnetohydrodynamics"
} |
Expressing that a functor is natural | Question: The Haskell List: Type -> Type constructor implements the Functor typeclass with function fmap f = map f. This functor, which applies the morphism f to each element of a list, works, but there are other equally valid functors, for example fmap f = map f . reverse which first reverses the list before mapping over it. Is there a concept in category theory that expresses the fact that the first choice of functor is in some sense "more natural" for this type constructor?
Answer: Being a functor requires two properties:
fmap id = id
fmap (f . g) = fmap f . fmap g
This definition of fmap fails the first condition:
fmap id = map id . reverse = reverse != id
So that's not a functor at all!
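The failure of the identity law is easy to check mechanically; here is the same definition transliterated into Python, with Python lists standing in for Haskell lists:

```python
def fmap(f, xs):
    """The proposed 'fmap f = map f . reverse', transliterated."""
    return [f(x) for x in reversed(xs)]

identity = lambda x: x

xs = [1, 2, 3]
# The first functor law requires fmap(identity, xs) == xs, but here
# we get the reversed list instead:
print(fmap(identity, xs))   # [3, 2, 1]
assert fmap(identity, xs) != xs
```

The composition law fails the same way: composing two fmaps reverses the list twice, while a single fmap of the composed function reverses it once.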
And it even fails the second condition:
fmap (f . g) = map (f . g) . reverse
= map f . map g . reverse
= map f . fmap g
= reverse . map f . reverse . fmap g
= reverse . fmap f . fmap g
!= fmap f . fmap g
Which is quite clear if you think about it:
fmap (f . g) reverses the list once and then maps its elements via f . g
fmap f . fmap g reverses the list, maps via g, then reverses the list again and then maps via f! | {
"domain": "cs.stackexchange",
"id": 9903,
"tags": "category-theory"
} |
Deleting temporary rows and columns from 21-sheet workbook | Question: I've created a VBA code to delete extra rows and columns that were needed for initial calculations but are required to be removed before converting/importing a csv into a database. The code loops through 21 sheets and runs for about 4 minutes. Is this a decent run time or can it be shortened?
Public Sub Test()
Dim xWs As Worksheet
Set xWs = ActiveSheet
Dim Firstrow As Long
Dim Lastrow As Long
Dim Lrow As Long
Dim CalcMode As Long
Dim ViewMode As Long
'SETTING DEPENDENT VALUES TO ABSOLUTE VALUES============================='
For Each xWs In Application.ActiveWorkbook.Worksheets
xWs.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
xWs.DisplayPageBreaks = False
xWs.UsedRange.Value = xWs.UsedRange.Value
Next
'DELETING ROWS BASED ON COLUMN B VALUES=================================='
For Each xWs In Application.ActiveWorkbook.Worksheets
xWs.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
xWs.DisplayPageBreaks = False
Firstrow = xWs.UsedRange.Cells(1).Row
Lastrow = xWs.UsedRange.Rows(xWs.UsedRange.Rows.count).Row
For Lrow = Lastrow To Firstrow Step -1
With xWs.Cells(Lrow, "B")
If Not IsError(.Value) Then
If .Value = "0" Then .EntireRow.Delete
End If
End With
Next Lrow
Next
'DELETING DUPLICATE IP ADDRESSES=========================================='
With Sheets("IP-Unassigned")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Firstrow = .UsedRange.Cells(1).Row
Lastrow = .UsedRange.Rows(.UsedRange.Rows.count).Row
For Lrow = Lastrow To Firstrow Step -1
With .Cells(Lrow, "H")
If Not IsError(.Value) Then
If .Value = "1" Then .EntireRow.Delete
End If
End With
Next Lrow
End With
'DELETING EXTRA COLUMNS========================================================'
With Sheets("IP-FSW")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Columns(8).EntireColumn.Delete
Columns(7).EntireColumn.Delete
End With
With Sheets("IP-2070")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Columns(8).EntireColumn.Delete
Columns(7).EntireColumn.Delete
End With
With Sheets("IP-MNTR")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Columns(8).EntireColumn.Delete
Columns(7).EntireColumn.Delete
End With
With Sheets("IP-BBS")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Columns(8).EntireColumn.Delete
Columns(7).EntireColumn.Delete
End With
With Sheets("IP-DET")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Columns(8).EntireColumn.Delete
Columns(7).EntireColumn.Delete
End With
With Sheets("IP-TTR")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Columns(8).EntireColumn.Delete
Columns(7).EntireColumn.Delete
End With
With Sheets("IP-CCTV")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Columns(8).EntireColumn.Delete
Columns(7).EntireColumn.Delete
End With
With Sheets("IP-Unassigned")
.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
.DisplayPageBreaks = False
Columns(16).EntireColumn.Delete
Columns(15).EntireColumn.Delete
Columns(14).EntireColumn.Delete
Columns(13).EntireColumn.Delete
Columns(12).EntireColumn.Delete
Columns(11).EntireColumn.Delete
Columns(10).EntireColumn.Delete
Columns(9).EntireColumn.Delete
Columns(8).EntireColumn.Delete
End With
'=========================================================================='
End Sub
Answer: Portland Runner gave some good hints in the comments. Your selection and view changes do not add any value to what you want to achieve. I was able to remove all of them without issues. When doing macro-based Excel manipulation, you should always consider:
Setting Application.ScreenUpdating to False
Setting Application.Calculation to xlManual
Setting Application.EnableEvents to False
Resetting all of these values when you have completed the work
Of course, there will be exceptions, but these should be rare.
Always remember Option Explicit in VBA
Looking at absolute values:
For Each xWs In Application.ActiveWorkbook.Worksheets
xWs.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
xWs.DisplayPageBreaks = False
xWs.UsedRange.Value = xWs.UsedRange.Value
Next
This can simply be:
For Each xWs In Application.ActiveWorkbook.Worksheets
xWs.UsedRange.Value = xWs.UsedRange.Value
Next
Cleaner, easier to see what it does, and easier to maintain.
Looking at Column B conditionals:
For Each xWs In Application.ActiveWorkbook.Worksheets
xWs.Select
ViewMode = ActiveWindow.View
ActiveWindow.View = xlNormalView
xWs.DisplayPageBreaks = False
Firstrow = xWs.UsedRange.Cells(1).Row
Lastrow = xWs.UsedRange.Rows(xWs.UsedRange.Rows.count).Row
For Lrow = Lastrow To Firstrow Step -1
With xWs.Cells(Lrow, "B")
If Not IsError(.Value) Then
If .Value = "0" Then .EntireRow.Delete
End If
End With
Next Lrow
Next
Can become:
For Each xWs In Application.ActiveWorkbook.Worksheets
Firstrow = xWs.UsedRange.Cells(1).Row
Lastrow = xWs.UsedRange.Rows(xWs.UsedRange.Rows.count).Row
For Lrow = Lastrow To Firstrow Step -1
With xWs.Cells(Lrow, "B")
If Not IsError(.Value) Then
If .Value = "0" Then .EntireRow.Delete
End If
End With
Next Lrow
Next
But I see repetition here. You can save your self a small loop. Admittedly, in the example you gave not a big issue but still a good practice to have.
Combining the loops - take note of the order in which I have done this. Do the most work first, and then do the simple clean-up. You could do it the other way (unless the xlError gets converted to a value on the way through?) but considering how much work is done in each step and how each step impacts on the amount of future work is a good habit.
For Each xWs In Application.ActiveWorkbook.Worksheets
Firstrow = xWs.UsedRange.Cells(1).Row
Lastrow = xWs.UsedRange.Rows(xWs.UsedRange.Rows.count).Row ' this is yet another different way I have seen to get the last row!
For Lrow = Lastrow To Firstrow Step -1 ' Good that you know to go backwards.
With xWs.Cells(Lrow, "B")
If Not IsError(.Value) Then
If .Value = "0" Then .EntireRow.Delete
End If
End With
Next Lrow
xWs.UsedRange.Value = xWs.UsedRange.Value ' Good that you know the simple way to convert formulas to values.
Next
Deleting extra columns - I see a lot of repetition here, and room for a subroutine.
Private Sub Delete78(xWs as Worksheet) 'Sheet could also include a Chart sheet
xWS.Columns(8).EntireColumn.Delete
xWS.Columns(7).EntireColumn.Delete
End Sub
Interestingly, you could also do Col7.Del, and then Col7.Del again to achieve the same effect! But at least the way you have written it shows the intent to remove the two columns.
You main part of the code then becomes:
Delete78 Sheets("IP-FSW")
Delete78 Sheets("IP-2070")
Delete78 Sheets("IP-MNTR")
Delete78 Sheets("IP-BBS")
Delete78 Sheets("IP-DET")
Delete78 Sheets("IP-TTR")
Delete78 Sheets("IP-CCTV")
Still some repetition - and this could be fixed as well. But that can be another day. Or perhaps now. Because your last code block seems different but is really the same. So let us try a new subroutine.
Private Sub DeleteColumnBlock(xWs as Worksheet, LastColumn as Long, FirstColumn as Long) ' Get the user to enter the values in a logical order. I chose this way.
Dim ColIterator as Long
' Do some input validation. If they have entered bad values, fix it.
For ColIterator = LastColumn to FirstColumn Step -1
xWs.Columns(ColIterator).EntireColumn.Delete
Next ColIterator
End Sub
Because we are dealing with a contiguous block, you could also do the slightly more obscure - same effect, but slightly harder to see at a glance what you intend to do. Add some good comments if you intend to do this!
For ColIterator = FirstColumn to LastColumn
xWs.Columns(FirstColumn).EntireColumn.Delete ' continually remove a column until the right number have been removed.
Next ColIterator
Your main part of that entire block then becomes:
DeleteColumnBlock Sheets("IP-FSW"), 8, 7
DeleteColumnBlock Sheets("IP-2070"), 8, 7
DeleteColumnBlock Sheets("IP-MNTR"), 8, 7
DeleteColumnBlock Sheets("IP-BBS"), 8, 7
DeleteColumnBlock Sheets("IP-DET"), 8, 7
DeleteColumnBlock Sheets("IP-TTR"), 8, 7
DeleteColumnBlock Sheets("IP-CCTV"), 8, 7
DeleteColumnBlock Sheets("IP-Unassigned"), 16, 8
Summary
Turn off the parts of Excel that you don't need when doing grunt
work.
Try to run through a loop only once, don't repeat your loops
unless it is really really necessary.
DRY (don't repeat yourself) - repetition is a sign that you can
modularise some code making it easier to maintain.
Use explicit addressing, avoid Active and Select unless there is
no other way (e.g. copying sheets to a new workbook is a subroutine,
not a function so ActiveWorkbook is the only way to immediately
reference that new workbook).
-- If you do Activate or Select, make sure your following code is actually using those elements.
-- And then consider if you can already reference it explicitly and do so!
Addendum
In my examples above, I provided a very mechanistic way of deleting columns (based on the OP code). Two more ways to do this without using a loop are to set a range:
e.g. xWS.Range(xWS.Cells(1,7), xWS.Cells(1,8)).EntireColumn.Delete
Setting a union of the columns, and then deleting | {
"domain": "codereview.stackexchange",
"id": 30108,
"tags": "performance, vba, excel"
} |
Energy and Non-conservative forces | Question: I understand that one cannot assign a potential energy to all points in space in the presence of a non-conservative force field due to the work done by the force being dependent on the path taken. However, I have often come across this statement that energy is "irrecoverably" lost when work is being done in a non-conservative force field (look at the photo below for example). What does this mean exactly? What do we mean by "irrecoverable" here? Where does this extra work/energy actually go? Does this mean that some part of the energy fed into the system must be wasted in the form of heat or sound (somewhat like how there's an upper limit on the theoretical efficiency of a Carnot engine)? I would really appreciate some more clarity on this notion, and I apologise if there are duplicates I couldn't find.
Answer:
I understand that one cannot assign a potential energy to all points in space in the presence of a non-conservative force field due to the work done by the force being dependent on the path taken.
This is not true. One can define a potential (and thus potential energy of a single test charge) in almost all points of space, derived from the field, even if it isn't conservative.
Every field can be decomposed uniquely into a conservative and a divergence-less part, and we can use the conservative part to define a potential and thus also potential energy of a test charge.
This is fine and it is actually how potential and potential energy is used in practice, for example, for real electric circuits. Total electric field is never conservative everywhere, because there is always some non-zero non-conservative field contribution due to moving charges somewhere, and one could always devise a complicated extended path that comes close to those charges, to make the line integral of total field dependent on that path.
Electric potential can be defined in many different but mathematically equivalent ways, but in the most important convention, the Coulomb gauge, it is equal to line integral of the conservative part of the electric field, and this leads to the familiar Coulombic formula for electric potential $V(\mathbf x) = \sum_a K\frac{q_a}{|\mathbf x- \mathbf r_a|}$ and potential energy of charge $b$:
$$
P.E. = q_b V(\mathbf r_b).
$$
What is true is that in non-conservative field, particle's kinetic and potential energy do not obey any simple law like
$$
K.E. + P.E. = const.
$$
which holds in conservative field, because in non-conservative field, total force on the particle $b$ is not described completely by gradient of this potential energy function.
The correct statement here would be that non-conservative field cannot be expressed as gradient of a potential, or no potential function can fully describe the non-conservative field.
However, I have often come across this statement that energy is "irrecoverably" lost when work is being done in a non-conservative force field (look at the photo below for example). What does this mean exactly?
The usual kind of energy - kinetic plus potential - is not guaranteed to be conserved in time, when total force on a particle is such a function of position that is a non-conservative field.
For example, if there is an electric field with lines of force circling around a cylinder in space, such field is non-conservative, and if a charged particle was constrained to move along those circles, its kinetic energy would change in time, and also sum of kinetic and potential energy would change in time.
However, this does not mean "irrecoverable loss of energy", because while those energies assigned to the particle are not conserved, total energy of the system, including the source of the non-conservative field, and EM field everywhere, can be conserved.
For example, electric field near and inside conductive path of an LC circuit with an inductor and a capacitor is not conservative, and current-forming particles' energies are not conserved, but this does not mean energy gets lost irrecoverably; energy just transforms from electric to magnetic form repeatedly, and non-conservative electric field takes part in that conversion.
That statement is correct when a particular interpretation of terms is adopted, such as when we talk about friction force, and mechanical energy only. Friction force is, in general, not even a function of position, it often depends on velocity or other details of the system which are not traced, and result of its action is often that mechanical energy gets "lost irrecoverably", as it turns into internal energy of the bodies experiencing friction. This is a different notion of a "non-conservative force", which does not really refer to properties of line integrals of a field, but refers to the fact that mechanical energy transforms into heat. | {
"domain": "physics.stackexchange",
"id": 99729,
"tags": "thermodynamics, forces, energy-conservation, dissipation, conservative-field"
} |
What does 'Linear regularities among words' mean? | Question: Context: In the paper "Efficient Estimation of Word Representations in Vector Space" by T. Mikolov et al., the authors make use of the phrase: 'Linear regularities among words'.
What does that mean in the context of the paper, or in a general context related to NLP?
Quoting the paragraph from the paper:
Somewhat surprisingly, it was found that similarity of word
representations goes beyond simple syntactic regularities. Using a
word offset technique where simple algebraic operations are performed
on the word vectors, it was shown for example that vector(”King”) -
vector(”Man”) + vector(”Woman”) results in a vector that is closest to
the vector representation of the word Queen [20].
In this paper, we try to maximize accuracy of these vector operations
by developing new model architectures that preserve the linear
regularities among words. We design a new comprehensive test set for
measuring both syntactic and semantic regularities1 , and show that
many such regularities can be learned with high accuracy. Moreover, we
discuss how training time and accuracy depends on the dimensionality
of the word vectors and on the amount of the training data.
Answer: By linear regularities among words, the authors meant that the vectorized forms of words should follow linear additive properties!
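A toy illustration of the claimed additive property (the 2-d vectors below are entirely made up, just to show the arithmetic; real word2vec vectors have hundreds of dimensions learned from data):

```python
# Hypothetical 2-d embeddings: one axis loosely "royalty", one "gender".
vecs = {
    "king":  [0.9, 0.8],
    "man":   [0.1, 0.8],
    "woman": [0.1, -0.8],
    "queen": [0.9, -0.8],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def nearest(v, table):
    """Word whose vector is closest (squared Euclidean distance) to v."""
    return min(table, key=lambda w: sum((x - y) ** 2
                                        for x, y in zip(table[w], v)))

result = add(sub(vecs["king"], vecs["man"]), vecs["woman"])
print(nearest(result, vecs))   # queen
```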
V("King") - V("Man") + V("Woman") ~ V("Queen") | {
"domain": "datascience.stackexchange",
"id": 4710,
"tags": "nlp, language-model, representation"
} |
Are the Trappist-1 planets in stable orbits? | Question: The Trappist-1 planets all orbit very close to each other. During NASA's press release, they mentioned that these planets are close enough to disturb each others orbits. Is this system stable over a long time scale? Or could we perhaps just have imaged this system before one or more planets is ejected or destroyed?
Later into the press conference, one of the scientists says "these planets should have formed further out and migrated inwards". Could they possibly still be moving inwards, or are their orbits now stable?
Answer: We don't know.
Much of what we do know - or at least think we know - about the planets' orbital parameters comes from simulations the team ran via $n$-body methods[1]. Some methods of integration led to short-term disruptions, on the order of less than $\sim10^6$ years. That said, the system is at least $5\times10^8$ years old, and it would be odd if the observations came just before the final instability, in the authors' opinions.
Fortunately, using a different (statistical) method, the team got different results. They found that the system has
a 25% chance of suffering an instability over 1 Myr, and an 8.1% chance of surviving for 1 billion years
This does not seem great for long- or short-term instabilities. However, the authors were extremely cautious, emphasizing that there is plenty of uncertainty. There are many other factors that need to be taken into account:
Tidal interactions between the planets and the star and between each other
Possible other planets in the system (although giant planets may be ruled out)
The masses and orbits are not known to great accuracy, and so interactions can not be modeled as ideally as one would like.
Resonances could be important, and the planets in the system fall into near-integer resonances, which is interesting.
Whether or not the planets are still changing their orbits is an interesting question. The authors believe that the planets moved inwards via gas disk migration, which would require interactions with the initial protoplanetary disk. However, as the disk has clearly dissipated, it would seem that the migration has stopped, and has not happened for some time. | {
"domain": "astronomy.stackexchange",
"id": 2135,
"tags": "orbital-mechanics, trappist-1"
} |
Why is the Fourier transform of a Dirac comb a Dirac comb? | Question: This doesn't make sense to me, because the Heisenberg inequality states that $\Delta t\,\Delta\omega \sim 1$.
Therefore when you have something perfectly localized in time, you get something completely distributed in frequency. Hence the basic relationship $\mathfrak{F}\{\delta(t)\} = 1$ where $\mathfrak{F}$ is the Fourier transform operator.
But for the Dirac comb, applying the Fourier transform, you receive another Dirac comb. Intuitively, you should also get another line.
Why does this intuition fail?
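A small discrete analogue makes the comb-to-comb fact concrete. Take a length-8 impulse train with spacing 4; its DFT is again an impulse train, now with spacing 8/4 = 2 (computed below with a naive DFT, no external libraries):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# Impulse train of period 4 in a length-8 window: spikes at n = 0 and 4.
comb = [1, 0, 0, 0, 1, 0, 0, 0]
spectrum = [abs(v) for v in dft(comb)]
print([round(s, 6) for s in spectrum])
# -> [2.0, 0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0]: another comb, spacing 2.
```

Widening the time spacing narrows the frequency spacing and vice versa, the discrete counterpart of the reciprocal comb spacings in the continuous case.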
Answer: I believe that the fallacy is to believe that a Dirac comb is localized in time. It isn't because it is a periodic function and as such it can only have frequency components at multiples of its fundamental frequency, i.e. at discrete frequency points. It can't have a continuous spectrum, otherwise it wouldn't be periodic in time. Just like any other periodic function, a Dirac comb can be represented by a Fourier series, i.e. as an infinite sum of complex exponentials. Each complex exponential corresponds to a Dirac impulse in the frequency domain at a different frequency. Summing these Dirac impulses gives a Dirac comb in the frequency domain. | {
"domain": "dsp.stackexchange",
"id": 2864,
"tags": "fourier-transform, dirac-delta-impulse"
} |
Android / Rocon_app_manager pairing error | Question:
When trying to pair an Android device with the Rocon_app_manager as per [1], the 'Robot Remocon' Android app does find the robot when pressing 'Add a robot > Scan the local network', but when pressing 'select' it tries to connect but displays an error:
org.ros.exception.RosRuntimeException: timed out waiting for a gateway_info publication
I tried on different networks (university-wide and at home through a local router) and two different pc's and android devices. There is at least some communication as there's a different error when the rocon_app_platform hasn't been started on the pc.
I tried to look up where this error originates but I am quite hopeless. Does anyone have an idea of how to solve this?
[1] wiki.ros.org/rocon_app_manager/Tutorials/hydro/Pairing%20with%20Androids
Originally posted by The Fonz on ROS Answers with karma: 13 on 2013-12-13
Post score: 1
Original comments
Comment by Daniel Stonier on 2014-06-29:
Any chance that some of the problems here are due to the fact that you are using android devices that have multiple interfaces? You can diagnose by checking rostopic list and rosservice list to see if the connections are there. rostopic info on an android topic will show strange ip's.
Comment by Daniel Stonier on 2014-06-29:
This error is quite vague sorry....we have simplified communications hugely getting ready for indigo with better error feedback (yet to be released though).
Answer:
I think probably your ROS_IP/ROS_HOSTNAME/ROS_MASTER_URI are not correctly set on the turtlebot for multi-device-pc communication. Some information on the roswiki and the turtlebot wiki.
The 'dumb' error message unfortunately is a late hydro regression I have yet to fix - it used to actually provide a hint to make sure you have your ROS_IP/ROS_HOSTNAME set.
Originally posted by Daniel Stonier with karma: 3170 on 2013-12-16
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by The Fonz on 2014-01-17:
You're right - setting ROS_IP and ROS_HOSTNAME in .bashrc works, per the instructions in the turtlebot wiki. Still not very stable, sometimes it connects, sometimes on retry, sometimes on app restart.
Comment by Daniel Stonier on 2014-01-19:
Still quite beta, occasionally state gets confused if something goes awry. We're working hard to upgrade/stabilise the android environment better for igloo. | {
"domain": "robotics.stackexchange",
"id": 16452,
"tags": "ros, rocon, android"
} |
Implementation of a sparse Markov Chain | Question: I need to create a sparse Markov chain. It is supposed to receive text, so the number of rows or columns can easily go up to 20000. Besides, if I want to consider higher orders of the Markov chain (creating pairs of consecutive words) the dimension can become much bigger. Hence the need to have something sparse.
I added the constraint to have a "uniform prior" on the transitions (so as to avoid having infinite log likelihood).
I am not sure this is the cleanest way to proceed.
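For reference, the counting-plus-uniform-prior scheme below is additive (Laplace) smoothing. A minimal Python sketch of the same kind of estimate (hypothetical, not a translation of the C# class — in particular the state count is tracked differently here):

```python
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))  # counts[p1][p2]: observed transitions
leaving = defaultdict(int)                      # leaving[p1]: total transitions out of p1
states = set()

def add_transition(p1, p2):
    counts[p1][p2] += 1
    leaving[p1] += 1
    states.update((p1, p2))

def transition(p1, p2):
    # Add-one smoothing: every possible transition gets a pseudo-count of 1,
    # so unseen transitions keep a small nonzero probability (finite log-likelihood).
    n = len(states)
    return (1 + counts[p1][p2]) / (leaving[p1] + n)

add_transition("a", "b")
add_transition("a", "b")
add_transition("a", "c")
print(transition("a", "b"))   # (1 + 2) / (3 + 3) = 0.5
```

With this convention the smoothed probabilities out of a seen state sum to exactly 1 over the known states.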
using System.Collections.Generic;
using System;
namespace rossum.Machine.Learning.Markov
{
public class SparseMarkovChain<T>
{
private Dictionary<T, Dictionary<T, int>> _sparseMC = new Dictionary<T, Dictionary<T, int>>();
private Dictionary<T, int> _countEltLeaving = new Dictionary<T, int>();
private int _size = 0;
public double GetTransition(T p1, T p2)
{
if (_sparseMC.ContainsKey(p1))
{
if (_sparseMC[p1].ContainsKey(p2))
return (1f + _sparseMC[p1][p2]) / (_countEltLeaving[p1] + _size);
else
return 1f / (_countEltLeaving[p1] + _size);
}
else
return 1f / _size;
}
public void AddTransition(T p1, T p2)
{
if (_sparseMC.ContainsKey(p1))
{
_countEltLeaving[p1]++;
if (_sparseMC[p1].ContainsKey(p2))
_sparseMC[p1][p2] += 1;
else
_sparseMC[p1].Add(p2, 1);
}
else
{
_size++;
if (!_sparseMC.ContainsKey(p2))
_size++;
Dictionary<T, int> nd = new Dictionary<T, int>();
nd.Add(p2, 1);
_sparseMC.Add(p1, nd);
_countEltLeaving.Add(p1, 1);
}
}
public double LogLikelihood(T[] path)
{
double res = 0;
for (int i = 1; i < path.Length; i++)
res += Math.Log(GetTransition(path[i - 1], path[i]));
return res;
}
}
}
Answer: If you're using ContainsKey, you're probably doing it wrong and you need to use Dictionary<TKey, TValue>.TryGetValue Method (TKey, TValue).
Your first method would then become:
public double GetTransition(T p1, T p2)
{
Dictionary<T, int> p1Value;
if (!_sparseMarkovChain.TryGetValue(p1, out p1Value))
{
return 1f/_size;
}
int p2Value;
if (p1Value.TryGetValue(p2, out p2Value))
{
return (1f + p2Value)/(_countEltLeaving[p1] + _size);
}
return 1f/(_countEltLeaving[p1] + _size);
}
Your naming could be improved. The "MC" in _sparseMC makes sense in context, but why not write "MarkovChain" in full? p1 and p2 aren't clear to me, but I don't know the convention.
These three lines could be a single one:
Dictionary<T, int> nd = new Dictionary<T, int>();
nd.Add(p2, 1);
_sparseMC.Add(p1, nd);
versus:
_sparseMC.Add(p1, new Dictionary<T, int>{{ p2, 1 }});
Use braces everywhere to avoid introducing bugs, even for code like this:
if (_sparseMC[p1].ContainsKey(p2))
{
_sparseMC[p1][p2] += 1;
}
else
{
_sparseMC[p1].Add(p2, 1);
}
_sparseMC and _countEltLeaving can be made readonly. | {
"domain": "codereview.stackexchange",
"id": 16360,
"tags": "c#, algorithm, markov-chain"
} |
Conways Game of Life in cmd batch file | Question: Just for fun I have written a Conway's Game of Life in cmd batch file.
I like writing in batch - its restrictions and limits are its appeal.
However - it is slow, very slow on a large grid. Any tips to speed it up?
I think the slowest part is the function GETNCOUNT - this gets the count of the neighbouring 'live' cells so it is called once per cell.
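For reference, what GETNCOUNT computes is an eight-neighbour count on a grid that wraps around at the edges (a torus). The same logic in a few lines of Python (an illustration only, not part of the batch script):

```python
def neighbour_count(grid, x, y):
    """Count live ('@') cells among the 8 neighbours of (x, y), wrapping at edges."""
    h, w = len(grid), len(grid[0])
    count = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue  # skip the cell itself
            if grid[(y + dy) % h][(x + dx) % w] == "@":
                count += 1
    return count

# A vertical "blinker": the middle cell has 2 live neighbours.
grid = ["     ",
        "  @  ",
        "  @  ",
        "  @  ",
        "     "]
print(neighbour_count(grid, 2, 2))  # -> 2
```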
@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION
IF "%3"=="" GOTO HELP
SET WIDTH=%1
SET HEIGHT=%2
SET DENSITY=%3
SET LIFECYCLE=0
::::::::::::::::::::
:: Generate Grid 'A'
:: Also for safety, delete any Grid 'B' cells that might be in memory
FOR /L %%h IN (1, 1, %HEIGHT%) DO (
FOR /L %%w IN (1, 1, %WIDTH%) DO (
SET /A RAND=!RANDOM!*100/32768
SET /A RAND=!RAND!+1
IF !DENSITY! GEQ !RAND! (
SET A[%%w][%%h]=@
) ELSE (
SET "A[%%w][%%h]= "
)
SET B[%%w][%%h]=
)
)
::::::::::::::::::::::::::::::
:: TOP OF MAIN PROCESSING LOOP
::
:: Loop through all the Grid 'A' cells
:: - Count number of neighbours
:: - Check if alive or not
:: - If required assign new alive/dead status in Grid 'B'
::
:PROCESS
SET /A LIFECYCLE=%LIFECYCLE%+1
CLS
ECHO Conway's Game of Life.
CALL :DISPLAY
ECHO Current lifecycle: %LIFECYCLE%
FOR /L %%h IN (1, 1, %HEIGHT%) DO (
FOR /L %%w IN (1, 1, %WIDTH%) DO (
CALL :GETNCOUNT %%w %%h
IF "!A[%%w][%%h]!"=="@" (SET ALIVE=Y) ELSE (SET ALIVE=N)
IF "!ALIVE!"=="Y" (
IF !NCOUNT! LSS 2 (
SET "B[%%w][%%h]= "
)
IF !NCOUNT! EQU 2 (
SET B[%%w][%%h]=@
)
IF !NCOUNT! EQU 3 (
SET B[%%w][%%h]=@
)
IF !NCOUNT! GTR 3 (
SET "B[%%w][%%h]= "
)
)
IF "!ALIVE!"=="N" (
IF !NCOUNT! EQU 3 (
SET B[%%w][%%h]=@
)
)
)
)
:: Now check if we have any Grid 'B' cells
:: If so, assign these to Grid 'A' cells
FOR /L %%h IN (1, 1, %HEIGHT%) DO (
FOR /L %%w IN (1, 1, %WIDTH%) DO (
IF DEFINED B[%%w][%%h] (
IF "!B[%%w][%%h]!"==" " (
SET "A[%%w][%%h]= "
)
IF "!B[%%w][%%h]!"=="@" (
SET A[%%w][%%h]=@
)
)
)
)
:: loop back to the top of process to start again
GOTO PROCESS
::::::::::::::::::::::::::
:: BOTTOM OF PROCESS LOOP
::::::::::::::::::::::::::
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: THIS FUNCTION COUNTS THE NUMBER OF NEIGHBOURS FOR THE GIVEN X AND Y CO-ORDINATES
:: THE COUNT IS STORED IN VARIABLE 'NCOUNT'
::
:: TL | TM | TR
:: ML | | MR
:: BL | BM | BR
::
:: %1=x %2=y
::
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:GETNCOUNT
SET NCOUNT=0
::::::
:: TOP
IF %2 EQU 1 (SET Y=%HEIGHT%) ELSE (SET /A Y=%2-1)
:: TOP-LEFT
IF %1 EQU 1 (SET X=%WIDTH%) ELSE (SET /A X=%1-1)
IF !A[%X%][%Y%]! EQU @ (SET /A NCOUNT=!NCOUNT!+1)
:: TOP-MIDDLE
SET X=%1
IF !A[%X%][%Y%]! EQU @ (SET /A NCOUNT=!NCOUNT!+1)
:: TOP-RIGHT
IF %1 EQU %WIDTH% (SET X=1) ELSE (SET /A X=%1+1)
IF !A[%X%][%Y%]! EQU @ (SET /A NCOUNT=!NCOUNT!+1)
:::::::::
:: MIDDLE
SET Y=%2
:: MIDDLE-LEFT
IF %1 EQU 1 (SET X=%WIDTH%) ELSE (SET /A X=%1-1)
IF !A[%X%][%Y%]! EQU @ (SET /A NCOUNT=!NCOUNT!+1)
:: MIDDLE-RIGHT
IF %1 EQU %WIDTH% (SET X=1) ELSE (SET /A X=%1+1)
IF !A[%X%][%Y%]! EQU @ (SET /A NCOUNT=!NCOUNT!+1)
:::::::::
:: BOTTOM
IF %2 EQU %HEIGHT% (SET Y=1) ELSE (SET /A Y=%2+1)
:: BOTTOM-LEFT
IF %1 EQU 1 (SET X=%WIDTH%) ELSE (SET /A X=%1-1)
IF !A[%X%][%Y%]! EQU @ (SET /A NCOUNT=!NCOUNT!+1)
:: BOTTOM-MIDDLE
SET X=%1
IF !A[%X%][%Y%]! EQU @ (SET /A NCOUNT=!NCOUNT!+1)
:: BOTTOM-RIGHT
IF %1 EQU %WIDTH% (SET X=1) ELSE (SET /A X=%1+1)
IF !A[%X%][%Y%]! EQU @ (SET /A NCOUNT=!NCOUNT!+1)
::ECHO BR=X:%X% Y:%Y%
GOTO EOF
::::::::::::::::::::::::::::::::::::::::::::
:: THIS FUNCTION DISPLAYS GRID 'A' ON SCREEN
::::::::::::::::::::::::::::::::::::::::::::
:DISPLAY
SET TOP=
SET BOT=
FOR /L %%h IN (1, 1, %height%) DO (
IF %%h EQU 1 (FOR /L %%w IN (1, 1, %width%) DO (SET TOP=_!TOP!))
IF %%h EQU 1 ECHO .!TOP!.
SET ROW=
FOR /L %%w IN (1, 1, %WIDTH%) DO (
SET ROW=!ROW!!A[%%w][%%h]!
)
ECHO ^|!ROW!^|
IF %%h EQU %height% (FOR /L %%w IN (1, 1, %width%) DO (SET BOT=~!BOT!))
IF %%h EQU %height% ECHO `!BOT!'
)
GOTO EOF
:HELP
ECHO/
ECHO 'Conway's Game of Life' - Batch Edition - Chazjn 25/02/2017
ECHO ===========================================================
ECHO Usage is as follows:
ECHO life [width] [height] [%%density]
ECHO E.g.
ECHO life 8 5 25
ECHO/
ECHO For more information visit: https://en.wikipedia.org/wiki/Conway's_Game_of_Life
GOTO EOF
:EOF
Answer: OK, so it turns out that using CALL in a batch script is very slow. So calling a function once per cell was very expensive time-wise.
So I knew I had to move the logic from the function GETNCOUNT into the main processing loop. However the main issue I ran into here was variable expansion.
I am storing each cell value in a variable named A[x][y] e.g. A[1][1], A[1][2], A[1][3] etc. So after I had calculated the x and y value of the neighbouring cell that I wanted to check, I had to get the value of that variable somehow.
I tried all kinds of double-expansion-nested syntax, e.g. !A[!X!][!Y!]! but this just resulted in XY because the script was trying to expand variables !A[! !][! !]!.
In the end I tried a FOR loop and this worked very nicely. E.g.:
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (ECHO !A[%%a][%%b]!)
So what this is doing is expanding and assigning !X! and !Y! to variables local to the FOR loop %%a and %%b. Thus I can inject them into my cell variable and expand this to get the value it's assigned.
So here is the 'final' code, it works much much faster than before. I am pretty satisfied with the performance now, I don't think I can get much more speed out of it.
What I would like to do next is figure out a way to exit gracefully as the only way currently is to press CTRL+C. But that's for another time...
@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION
IF "%3"=="" GOTO HELP
SET WIDTH=%1
SET HEIGHT=%2
SET DENSITY=%3
SET GENERATION=0
SET /A CELLCOUNT=%WIDTH%*%HEIGHT%
SET ALIVECOUNT=0
::::::::::::::::::::
:: Generate grid 'A'. This grid holds the cell layout for display
:: Also for safety, delete any grid 'B' cells that might be in memory
:: Grid 'B' used to store temporary cell values before they are assigned to grid 'A'
FOR /L %%h IN (1, 1, %HEIGHT%) DO (
FOR /L %%w IN (1, 1, %WIDTH%) DO (
SET /A RAND=!RANDOM!*100/32768
IF !DENSITY! GEQ !RAND! (
SET A[%%w][%%h]=@
SET /A ALIVECOUNT=!ALIVECOUNT!+1
) ELSE (
SET "A[%%w][%%h]= "
)
SET B[%%w][%%h]=
)
)
::::::::::::::::::::::::::::::
:: TOP OF MAIN PROCESSING LOOP
::
:: Display grid 'A'
:: Loop through all the Grid 'A' cells:
:: - Get values neighbouring cells
:: - Get count of alive neighbours
:: - Apply 'Game of Life' rules and store resulting value in grid 'b' cell
:: Assign grid 'b' cell values to grid 'a' cell values
:: Loop back to start process again
:PROCESS
SET /A GENERATION=%GENERATION%+1
CLS
ECHO Conway's Game of Life.
ECHO Generation: %GENERATION%
ECHO Live Cells: %ALIVECOUNT%/%CELLCOUNT%
CALL :DISPLAY
IF "%ALIVECOUNT%"=="0" (GOTO EOF)
SET ALIVECOUNT=0
SET COUNTER=0
FOR /L %%h IN (1, 1, %HEIGHT%) DO (
FOR /L %%w IN (1, 1, %WIDTH%) DO (
SET /A COUNTER=!COUNTER!+1
TITLE Calculating Cell !COUNTER!/%CELLCOUNT%
SET X=0
SET Y=0
SET NCOUNT=0
REM Find the 3 cells above this cell
IF %%h EQU 1 (SET Y=%HEIGHT%) ELSE (SET /A Y=%%h-1)
IF %%w EQU 1 (SET X=%WIDTH%) ELSE (SET /A X=%%w-1)
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (IF "!A[%%a][%%b]!"=="@" (SET /A NCOUNT=!NCOUNT!+1))
SET X=%%w
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (IF "!A[%%a][%%b]!"=="@" (SET /A NCOUNT=!NCOUNT!+1))
IF %%w EQU %WIDTH% (SET X=1) ELSE (SET /A X=%%w+1)
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (IF "!A[%%a][%%b]!"=="@" (SET /A NCOUNT=!NCOUNT!+1))
REM Find the 2 cells left and right of this cell
SET Y=%%h
IF %%w EQU 1 (SET X=%WIDTH%) ELSE (SET /A X=%%w-1)
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (IF "!A[%%a][%%b]!"=="@" (SET /A NCOUNT=!NCOUNT!+1))
IF %%w EQU %WIDTH% (SET X=1) ELSE (SET /A X=%%w+1)
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (IF "!A[%%a][%%b]!"=="@" (SET /A NCOUNT=!NCOUNT!+1))
REM Find the 3 cells below this cell
IF %%h EQU %HEIGHT% (SET Y=1) ELSE (SET /A Y=%%h+1)
IF %%w EQU 1 (SET X=%WIDTH%) ELSE (SET /A X=%%w-1)
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (IF "!A[%%a][%%b]!"=="@" (SET /A NCOUNT=!NCOUNT!+1))
SET X=%%w
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (IF "!A[%%a][%%b]!"=="@" (SET /A NCOUNT=!NCOUNT!+1))
IF %%w EQU %WIDTH% (SET X=1) ELSE (SET /A X=%%w+1)
FOR /F "tokens=1,2" %%a IN ("!X! !Y!") DO (IF "!A[%%a][%%b]!"=="@" (SET /A NCOUNT=!NCOUNT!+1))
REM Check if this cell is alive or not
IF "!A[%%w][%%h]!"=="@" (
SET ALIVE=Y
SET /A ALIVECOUNT=!ALIVECOUNT!+1
) ELSE (
SET ALIVE=N
)
REM Assign live status to grid 'B' based on rules
IF "!ALIVE!"=="Y" (
IF !NCOUNT! LSS 2 (
SET "B[%%w][%%h]= "
)
IF !NCOUNT! EQU 2 (
SET B[%%w][%%h]=@
)
IF !NCOUNT! EQU 3 (
SET B[%%w][%%h]=@
)
IF !NCOUNT! GTR 3 (
SET "B[%%w][%%h]= "
)
)
REM Assign dead status to grid 'B' based on rules
IF "!ALIVE!"=="N" (
IF !NCOUNT! EQU 3 (
SET B[%%w][%%h]=@
)
)
)
)
:: Now check if we have set any Grid 'B' cells
:: If so, assign these cell values to Grid 'A' cells
FOR /L %%h IN (1, 1, %HEIGHT%) DO (
FOR /L %%w IN (1, 1, %WIDTH%) DO (
IF DEFINED B[%%w][%%h] (
IF "!B[%%w][%%h]!"==" " (
SET "A[%%w][%%h]= "
)
IF "!B[%%w][%%h]!"=="@" (
SET A[%%w][%%h]=@
)
)
)
)
:: Loop back to the top of process to start again
GOTO PROCESS
::::::::::::::::::::::::::::::::::::::::::::
:: THIS FUNCTION DISPLAYS GRID 'A' ON SCREEN
::::::::::::::::::::::::::::::::::::::::::::
:DISPLAY
SET TOP=
SET BOT=
FOR /L %%h IN (1, 1, %height%) DO (
IF %%h EQU 1 (FOR /L %%w IN (1, 1, %width%) DO (SET TOP=_!TOP!))
IF %%h EQU 1 ECHO .!TOP!.
SET ROW=
FOR /L %%w IN (1, 1, %WIDTH%) DO (
SET ROW=!ROW!!A[%%w][%%h]!
)
ECHO ^|!ROW!^|
IF %%h EQU %height% (FOR /L %%w IN (1, 1, %width%) DO (SET BOT=~!BOT!))
IF %%h EQU %height% ECHO `!BOT!'
)
GOTO EOF
:HELP
ECHO/
ECHO 'Conway's Game of Life' - Batch Edition - Chazjn 01/03/2017
ECHO ===========================================================
ECHO Usage is as follows:
ECHO life [width] [height] [%%density]
ECHO E.g.
ECHO life 15 10 25
ECHO/
ECHO For more information visit: https://en.wikipedia.org/wiki/Conway's_Game_of_Life
GOTO EOF
:EOF | {
"domain": "codereview.stackexchange",
"id": 38398,
"tags": "game-of-life, batch"
} |
Is 4D $\phi^4$ theory with a complex mass term UV finite? | Question: Consider the path integral expression for the (unrenormalised) propagator in $\phi^4$ theory with a real scalar field $\phi$ on 4D Minkowski space and with a complex mass term:
$$
G[f,g] = \int\mathcal{D}\phi\,{\textstyle(\int f\phi)(\int g\phi)\exp(i\int[\frac12(\partial_\mu\phi)(\partial^\mu\phi) - \frac12(m^2 - i\epsilon)\phi^2 - \frac1{4!}\lambda\phi^4])}
$$
where $f$ and $g$ are smooth, square-integrable functions on Minkowski space. In the limit $\epsilon\to0$ this is just the propagator of $\phi^4$ theory which I know is UV divergent. My question is whether this expression is also divergent for finite $\epsilon>0$.
I have one argument which shows that $G[f,g]$ is finite and one which shows that it is infinite and I can't figure out which one is wrong:
Argument 1: I know that $\mathcal{D}\phi$ is just a sloppy physicist notation for an ill-defined functional integration measure, but if my understanding of Gaussian functional integration measures (which is somewhat limited and comes mostly from math-ph/0510087) is correct then
$$
d\mu_\epsilon(\phi)\equiv\mathcal{D}\phi{\textstyle\exp(-\frac12\epsilon\int\phi^2)}
$$
can be regarded as a well-defined functional integration measure which satisfies
$$
\int d\mu_\epsilon(\phi) = 1
\quad,\quad
\int d\mu_\epsilon(\phi){\textstyle(\int f\phi)(\int g\phi)} = \frac1\epsilon{\textstyle\int fg}
$$
If this is true we may write
$$
G[f, g] = \int d\mu_\epsilon(\phi){\textstyle(\int f\phi)(\int g\phi)\exp(i\int[\frac12(\partial_\mu\phi)(\partial^\mu\phi) - \frac12m^2\phi^2 - \frac1{4!}\lambda\phi^4])}
$$
and since the expression in the exponent is now purely imaginary the exponential has magnitude 1 and we have
$$
|G[f,g]| \leq \left|\int d\mu_\epsilon(\phi){\textstyle(\int f\phi)(\int g\phi)}\right| = \frac1\epsilon|{\textstyle\int fg}| < \infty
$$
Argument 2: The integrals we encounter when we calculate $G[f, g]$ in perturbation theory have the same UV divergences irrespective of whether $\epsilon$ is zero or not. For example, the one-loop 'tadpole' integral is
$$
\int d^4l \frac1{l^2 - m^2 + i\epsilon}
= -i\int_0^\infty dr\frac{2\pi^2r^3}{r^2 + m^2 - i\epsilon}
$$
where we obtained the right-hand side by Wick rotating and integrating out the angular variables. The UV divergence is associated with the behaviour of the integrand for $r\to\infty$ and for this the value of $\epsilon$ is irrelevant. Thus, if $G[f, g]$ is UV divergent for $\epsilon\to0$ it should be UV divergent for all $\epsilon>0$.
My instinct is to trust the non-perturbative argument 1 over the perturbative argument 2, but this would mean that a complex mass term can serve as a UV regulator and I've never heard of anyone use or even discuss such a regularisation.
Answer: Argument 1 is incorrect because (without additional regularisations) the functional integral is still ill-defined. The reason is that the derivatives in the integrand are only defined on a dense subspace of $L^2(\mathbb{R}^4)$ (the space of square-integrable fields on Minkowski space) but we are trying to integrate over the whole space $L^2(\mathbb{R}^4)$. The issue is discussed in detail in Example 2.6 of these lecture notes. | {
"domain": "physics.stackexchange",
"id": 49629,
"tags": "quantum-field-theory, path-integral"
} |
Markov Decision Process representation | Question: I'm attempting to model a simple process using a Markov Decision Process.
Let $A$ be the set of actions: $A = \{b,s\}$.
$T(s,a,s')$ represents the probability that, being in state $s$ and taking action $a$, the system ends up in state $s'$.
Notation for the MDP diagram is as follows :
Here is my MDP diagram which models 7 states:
The outgoing actions for each state sum to 1.
$T(1,b,2) = .7 $
$T(1,b,3) = .3 $
$T(1,s,4) = .9 $
$T(1,s,5) = .05 $
$T(1,s,6) = .05 $
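A quick way to check the normalisation claim above — the outgoing probabilities for each (state, action) pair summing to 1 — is to tabulate the listed values (a sketch added here, not part of the original post):

```python
from collections import defaultdict

# Transition probabilities as listed in the post: T[(s, a, s')] = p
T = {
    (1, "b", 2): 0.7,
    (1, "b", 3): 0.3,
    (1, "s", 4): 0.9,
    (1, "s", 5): 0.05,
    (1, "s", 6): 0.05,
}

totals = defaultdict(float)
for (s, a, _next_state), p in T.items():
    totals[(s, a)] += p

for key, total in sorted(totals.items()):
    print(key, total)  # each (state, action) total should be 1 (up to float rounding)
```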
I've tried to keep this as simple as possible to check my understanding. Are my representations & probabilities correct ?
Answer: Looks 'correct' to me, in the sense that it satisfies the requirements for being an MDP. Whether it models the underlying real-world problem correctly cannot be validated with the information given here. | {
"domain": "datascience.stackexchange",
"id": 9118,
"tags": "machine-learning, reinforcement-learning, q-learning, markov-process"
} |
How to use mixed data for image segmentation? | Question: I have a task for which I have to do image segmentation (cancer detection on MRIs). If possible, I would also like to include clinical data (i.e. numeric/categorical data which comes in the form of a table with features such as age, gender, ...).
I know that for classification purposes, it's possible to create a model that uses both numeric data as well as image data (as mentioned in the paper by Huang et al. : "Multimodal fusion with deep neural networks for leveraging CT imaging and electronic health record: a case‑study in pulmonary embolism detection"
The problem I have is that, for image segmentation tasks, it doesn't really make sense to me as to how to use both types of data.
In the above-mentioned paper, they create one model with only the image data and another with only the numeric data, and then they fuse them (there are multiple strategies for fusing them together). For classification tasks, it makes sense. However, for my task, it does not make sense to have a model which only uses the clinical data for image segmentation and that's where I get confused.
How do you think I should proceed with my task? Is it even possible to mix both types of data for image segmentation tasks?
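For what it's worth, one common recipe for this situation is multi-task training: the network outputs a segmentation map plus an auxiliary image-level prediction (which is where the clinical features can enter), and training minimises a weighted sum of the two losses. An abstract sketch of such a combined loss in plain NumPy (the weight alpha and the cross-entropy form are illustrative assumptions, not taken from any specific paper):

```python
import numpy as np

def combined_loss(seg_pred, seg_target, cls_pred, cls_target, alpha=0.7):
    """Weighted sum of a per-pixel loss and an image-level loss.

    seg_pred / seg_target: (H, W) arrays of probabilities / {0, 1} masks.
    cls_pred / cls_target: scalar probability / {0, 1} label.
    alpha: hypothetical weight trading segmentation off against classification.
    """
    eps = 1e-7  # avoid log(0)
    seg_loss = -np.mean(seg_target * np.log(seg_pred + eps)
                        + (1 - seg_target) * np.log(1 - seg_pred + eps))
    cls_loss = -(cls_target * np.log(cls_pred + eps)
                 + (1 - cls_target) * np.log(1 - cls_pred + eps))
    return alpha * seg_loss + (1 - alpha) * cls_loss

seg_pred = np.full((4, 4), 0.9)        # confident, mostly-correct mask
seg_target = np.ones((4, 4))
loss = combined_loss(seg_pred, seg_target, cls_pred=0.8, cls_target=1.0)
print(round(loss, 4))                  # -> 0.1407
```

The auxiliary task acts as a regulariser on the shared encoder, which is one way to exploit non-image data while the primary output stays an image.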
Answer: You can try doing image segmentation the traditional way, just using the image data. If you want to use the non-image data, then, you can introduce classification as another task for your network. It will provide some regularization to your model. But, this is one way you can still use non-image data whilst still working with image outputs. | {
"domain": "ai.stackexchange",
"id": 2671,
"tags": "deep-learning, image-segmentation, u-net"
} |
Convergence and representation theorems for machine learning | Question: I come from a pure math background and am not very familiar with machine learning. So, I'll start with an example to compensate for my confused grasp of the terminology.
Let's say we have a function $f:X \to Y$, and we want to develop a neural network to compute this function. For instance, $X$ could be $\{0, 1, ... 255\}^N$ for some large $N$, and $Y$ could be $\{0, 1\}$, if we wanted to build a network that performs a binary classification on images of $N$ pixels. We have some algorithm which adjusts, maybe stochastically or maybe deterministically, the weights and parameters of our network, creating a sequence of networks which should converge to $f$.
What theorems exist to guarantee that we do indeed converge to $f$, or even that a network exists to represent $f$? If we don't want to make assumptions about $f$, we would need some theorem establishing that given any function $f$, and any initial network, the sequence of networks generated by the learning algorithm will converge to $f$.
I'm not just interested in neural nets, though. Are there textbooks on machine learning which cover this sort of thing in depth? What keywords should I be looking out for?
Answer: I recommend Neural Network Learning: Theoretical Foundations by Anthony and Bartlett. You'll find exhaustive answers to your questions there. Very briefly, theoretical learning problems tend to separate very cleanly into two orthogonal components: statistical and computational. Thus, a universal learner unconstrained by computational issues could construct the smallest Turing machine (or MATLAB program) consistent with the input data; such a learner would be near-optimal statistically in a well-defined sense.
Nobody studies such learners (beyond intro-class homework problems) because they are computationally infeasible (actually uncomputable, in my example). When computational constraints enter the picture, it becomes much more nuanced: often times, what is the obvious course of action for the statistician is dismissed out of hand as infeasible by the algorithmist.
In any case, the Anthony+Bartlett book seems like the perfect introduction for a mathematically inclined beginner. | {
"domain": "cstheory.stackexchange",
"id": 3788,
"tags": "machine-learning"
} |
Spin of hydrogen orthonormal | Question: In this video lecture, the lecturer wrote spin up is $|\alpha\rangle = [\frac{1}{2}, \frac{1}{2}]$ and spin-down is $|\beta\rangle= [\frac{1}{2}, -\frac{1}{2}]$.
He then wrote that $\langle\alpha|\alpha\rangle = 1$ and $\langle\alpha|\beta\rangle = 0$
But I'm thinking that $\langle\alpha|\alpha\rangle = \alpha \cdot \alpha = \frac{1}{4} + \frac{1}{4} = \frac{1}{2}$ ?
Above I'm basically taking the dot product.
Can someone explain what I'm getting wrong?
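For reference (an addition, not from the original post): if the states are written as explicit two-component column spinors — the eigenvectors of $\hat S_z$ — the orthonormality relations follow directly:

```latex
|\alpha\rangle = |1/2,\,+1/2\rangle \doteq \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\qquad
|\beta\rangle = |1/2,\,-1/2\rangle \doteq \begin{pmatrix} 0 \\ 1 \end{pmatrix},
\qquad\text{so}\qquad
\langle\alpha|\alpha\rangle = 1\cdot 1 + 0\cdot 0 = 1,
\quad
\langle\alpha|\beta\rangle = 1\cdot 0 + 0\cdot 1 = 0.
```

The pair $(1/2,\pm 1/2)$ labels the quantum numbers $(s, m_s)$, not vector components.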
Answer: The terminology $|\alpha\rangle = |1/2,1/2\rangle$ does not mean that this is a two-element vector with components 1/2 and 1/2. Instead, the first element refers to the total spin $s=1/2$, and the second element refers to the $z$-component of the spin. These are just labels for the state that correspond to eigenvalues of the spin operators. Thus, these are the quantum numbers of the state. The state as written is assumed to be the normalized eigenvector of those operators with the corresponding eigenvalues, i.e.
$$\hat{S}^2|1/2,1/2\rangle = \hbar^2\left(\frac{1}{2}\left(\frac{1}{2}+1\right)\right)|1/2,1/2\rangle,
$$
and
$$\hat{S}_z|1/2,1/2\rangle = \hbar\frac{1}{2}|1/2,1/2\rangle.
$$ | {
"domain": "physics.stackexchange",
"id": 82322,
"tags": "quantum-mechanics, hilbert-space, quantum-spin, hydrogen"
} |
Validity checks for a user signup process | Question: Background:
I have been working on a service which allows users to sign up on different apps and, on each login, validates the request through a series of checks.
The snippet below is a small part of the whole application, but it covers my question and is working fine for now:
Code:
'use strict';
const bcrypt = require('bcrypt');
const boom = require('boom');
const joi = require('joi');
const flatten = require('lodash/flatten');
const pick = require('lodash/pick');
const models = require('../../models');
const { AccessToken, App, User } = models;
const debug = require('debug')('microauth:test');
const loginSchema = joi
.object({
appname: joi.string().required(),
email: joi.string().required(),
password: joi.string().required(),
})
.required();
async function run(req, res, next) {
const { appname, email, password } = joi.attempt(req.body, loginSchema);
const app = await getApp(appname);
if (!app) {
throw boom.badRequest(`Invalid app name: ${appname}.`);
}
if (app.isInactive()) {
throw boom.badRequest('App is not active.');
}
const { isAuthorized, user } = await authorize({ email, password });
if (!user) {
throw boom.notFound('User not found.');
}
debug(`User ${user.get('email')} is authorised? ${isAuthorized}`);
if (!isAuthorized) {
throw boom.unauthorized('Invalid email or password.');
}
const { result } = await isUserBelongsToApp(user, app.get('name'));
if (!result) {
throw boom.badRequest(`User is not authorised to access app.`);
}
return successResponse(email, app.get('secret'), res);
}
async function getApp(name) {
return await App.findOne({ name });
}
async function authorize({ email, password }) {
const user = await User.findOne(
{ email, status: 'active' },
{ withRelated: ['apps', 'roles.permissions'] }
);
let isAuthorized = false;
if (user) {
isAuthorized = await bcrypt.compare(password, user.get('password'));
}
return { isAuthorized, user };
}
async function isUserBelongsToApp(user, appname) {
let result = false;
let app = null;
app = user.related('apps').findWhere({ name: appname });
if (app) {
result = true;
}
return { result, app };
}
async function successResponse(email, secret, res) {
const userFields = [
'device',
'email',
'firstname',
'language',
'lastname',
'phone',
'uid',
];
const roleFields = ['name', 'description'];
const permissionFields = ['name', 'object', 'action'];
let user = await User.findOne(
{
email: email,
},
{
withRelated: ['roles.permissions'],
}
);
user = user.toJSON();
const result = Object.assign({}, { ...user });
result.roles = [];
result.permissions = [];
if (user.roles) {
result.roles = user.roles.map(role => pick(role, roleFields));
result.permissions = user.roles.map(role => {
return role.permissions.map(permission =>
pick(permission, permissionFields)
);
});
}
result.permissions = flatten(result.permissions);
const { token, expiration } = new AccessToken(secret).create(result);
res.json({ token, expiration });
}
module.exports = run;
Questions:
The code above belongs to the application's controller; is that the right place to do all these checks?
Right now the main logic seems pretty obvious but each step depends on the previous step. Is there any better way to write the same logic?
Answer: This is only a partial review.
I would declare your constants outside of your functions, as they are constant.
Also, the constants you did declare outside of your functions should be chained.
Finally, you should never call a function more than once. If you are truly using functional-programming then you should do the following:
const required = joi.string().required(),
loginSchema = joi
.object({
appname: required,
email: required,
password: required,
})
.required();
because a given function must return the same output for the same input.
Rewrite
'use strict';
const bcrypt = require('bcrypt'),
boom = require('boom'),
joi = require('joi'),
flatten = require('lodash/flatten'),
pick = require('lodash/pick');
const models = require('../../models'),
{ AccessToken, App, User } = models;
const debug = require('debug')('microauth:test');
const userFields = [
'device',
'email',
'firstname',
'language',
'lastname',
'phone',
'uid',
],
roleFields = ['name', 'description'],
permissionFields = ['name', 'object', 'action'];
const required = joi.string().required(),
loginSchema = joi
.object({
appname: required,
email: required,
password: required,
})
.required();
async function run(req, res, next) {
const { appname, email, password } = await joi.attempt(req.body, loginSchema);
const app = await getApp(appname);
if (!app) throw boom.badRequest(`Invalid app name: ${appname}.`);
if (app.isInactive()) throw boom.badRequest('App is not active.');
const { isAuthorized, user } = await authorize({ email, password });
if (!user) throw boom.notFound('User not found.');
debug(`User ${user.get('email')} is authorised? ${isAuthorized}`);
if (!isAuthorized) throw boom.unauthorized('Invalid email or password.');
const { result } = await isUserBelongsToApp(user, app.get('name'));
if (!result) throw boom.badRequest(`User is not authorised to access app.`);
return successResponse(email, app.get('secret'), res);
}
async function getApp(name) {
return await App.findOne({ name });
}
async function authorize({ email, password }) {
const user = await User.findOne(
{ email, status: 'active' },
{ withRelated: ['apps', 'roles.permissions'] }
);
let isAuthorized = false;
if (user) {
isAuthorized = await bcrypt.compare(password, user.get('password'));
}
return { isAuthorized, user };
}
async function isUserBelongsToApp(user, appname) {
let result = false;
let app = null;
app = user.related('apps').findWhere({ name: appname });
if (app) {
result = true;
}
return { result, app };
}
async function successResponse(email, secret, res) {
let user = await User.findOne(
{
email: email,
},
{
withRelated: ['roles.permissions'],
}
);
user = user.toJSON();
const result = Object.assign({}, { ...user });
result.roles = [];
result.permissions = [];
if (user.roles) {
result.roles = user.roles.map(role => pick(role, roleFields));
result.permissions = user.roles.map(role => {
return role.permissions.map(permission =>
pick(permission, permissionFields)
);
});
}
result.permissions = flatten(result.permissions);
const { token, expiration } = new AccessToken(secret).create(result);
res.json({ token, expiration });
}
module.exports = run; | {
"domain": "codereview.stackexchange",
"id": 30758,
"tags": "javascript, object-oriented, node.js, ecmascript-6"
} |
Rigorous version of field Lagrangian | Question: In Classical Mechanics the configuration of a system can be characterized by some point $s\in \mathbb{R}^n$ for some $n$. In particular, if it's a system of $k$ particles then $n = 3k$ and if there are holonomic constraints then in truth $s$ lies in some submanifold of $\mathbb{R}^n$. Even if the constraints are not holonomic, the configuration of a system can still be given by elements of some finite dimensional smooth manifold.
In that case, the Lagrangian becomes a smooth function $L: TM\to \mathbb{R}$ where $TM$ is the tangent bundle of the configuration manifold. Given coordinates $(q^1,\dots,q^n)$ on $M$ we can therefore make coordinates $(q^1,\dots,q^n,\dot{q}^1,\dots,\dot{q}^n)$ on $TM$ such that $q^i$ on $TM$ is really $q^i\circ \pi$ and $\dot{q}^i$ is characterized by the fact that if $v \in T_aM$ is
$$v = \sum_{i=1}^n v^i\dfrac{\partial}{\partial q^i}\bigg|_a$$
Then $\dot{q}^i(v) = v^i$. In that way, differentiating with respect to $q^i$ and $\dot{q}^i$ is perfectly well defined and Lagrange's equation is totally meaningful:
$$\dfrac{d}{dt} \dfrac{\partial L}{\partial \dot{q}^i}(c(t),c'(t)) = \dfrac{\partial L}{\partial q^i}(c(t),c'(t))$$
When it comes then to studying fields like electromagnetic fields and so on, things get a little messy. Now, the system is the field and a configuration of the field is not anymore a certain list of numbers but a function like $\mathbf{E}: \mathbb{R}^3\to T\mathbb{R}^3$ or $\phi : \mathbb{R}^3\to \mathbb{R}$.
If we insist on building a configuration space $M$, it will be infinite dimensional and locally modeled on Banach spaces. If we try to mimic the Lagrangian formalism here, it'll end up in some infinite dimensional bundle, and this is not something nice to work with.
Now, most books work formally. For example, they let $\mathcal{L} = \dfrac{1}{2}g_{\mu\nu}(\partial^\nu \phi)(\partial^\mu\phi)- \dfrac{1}{2}m^2\phi^2$. Then they compute formally:
$$\dfrac{\partial \mathcal{L}}{\partial \phi} = -m^2\phi \\ \dfrac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} = \partial^\mu \phi$$
And then Lagrange's Equations becomes
$$\dfrac{\partial \mathcal{L}}{\partial \phi} = \partial_\mu \dfrac{\partial \mathcal{L}}{\partial (\partial_\mu\phi)}\Longrightarrow \partial_\mu\partial^\mu\phi + m^2\phi = 0$$
Now this brings some questions:
First, it is not clear on which space this $\mathcal{L}$ is defined and where it takes values. Some people say it is just a $3$-form on spacetime, but it doesn't seem like that; it looks like a scalar to me.
Second, we take derivatives of $\mathcal{L}$ with respect to functions. This is very confusing to me. It even conflicts with the first point of view: if $\mathcal{L}$ is a $3$-form, it can only be differentiated with respect to the coordinates of the manifold on which it is defined.
So how can we make all of this rigorous? I mean, in which space is $\mathcal{L}$ defined? What these derivatives really mean and why they make any sense at all? How to make a connection between this and the Classical Mechanics Lagrangian formalism?
Answer: Let us start from Minkowski spacetime $M$ and construct the trivial bundle $\Phi=\mathbb R \times M \to M$ whose sections $\phi : M \ni p \mapsto (p,\phi(p))$ are the scalar fields you want to discuss their dynamics.
Since you correctly wish to see the partial derivatives of $\phi$ as variables independent from $\phi$ itself (this is your second raised issue), the convenient space is the so called first jet bundle $j^1 \Phi$.
I will not enter here into the details of the mathematical notion of jet bundle, I will simply illustrate how it can be used to clarify your issues.
$j^1 \Phi$ is a fiber bundle over $M$ such that each fiber at $p\in M$ has the structure (is diffeomorphic to) $\mathbb R \times \mathbb R^4$. The first factor $\mathbb R$, on shell,
embodies the information of $\phi(p)$ and the second $\mathbb R^4$ on shell refers to the derivatives $\partial_\mu \phi(p)$ at the same point of the basis $p$.
However, in general these components must be viewed as independent variables: They are related just when the equations of motion are imposed, i.e., on shell.
Coming back to your first issue, in this picture, the Lagrangian is a map $${\cal L} : j^1\Phi \to \mathbb R$$
so that ${\cal L}= {\cal L}(p, \phi(p), d_\mu(p))$. Euler-Lagrange equations
determine sections $$M \ni p \mapsto (p, \phi(p), d_\mu(p)) \in j^1\Phi$$ and read
$$\partial_\mu \left(\frac{\partial {\cal L}}{\partial d_\mu}\right) - \frac{\partial {\cal L}}{\partial \phi} = 0\:, \quad \partial_\mu \phi = d_\mu\:.$$
You see that the field equations themselves establish that $d_\mu = \partial_\mu \phi$, otherwise $\phi(p)$ and $d_\mu(p)$ would be independent variables.
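As a consistency check (my own addition, not part of the original answer), feeding the question's Klein–Gordon Lagrangian through these equations, with $d_\mu$ treated as an independent variable, gives
$$\mathcal{L}(p,\phi,d_\mu) = \frac{1}{2} d^\mu d_\mu - \frac{1}{2}m^2\phi^2\:, \qquad \frac{\partial \mathcal{L}}{\partial d_\mu} = d^\mu\:, \qquad \frac{\partial \mathcal{L}}{\partial \phi} = -m^2\phi\:,$$
so the Euler-Lagrange equations read
$$\partial_\mu d^\mu + m^2\phi = 0\:, \qquad \partial_\mu \phi = d_\mu\:,$$
which together are exactly $\partial_\mu\partial^\mu\phi + m^2\phi = 0$ — the formal computation quoted in the question, but now with every derivative taken on the finite-dimensional manifold $j^1\Phi$.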
Also in classical mechanics the convenient picture is that of a jet bundle (more natural than the one based on a tangent bundle). In that case, $M$ is replaced by the line of time $\mathbb R$ and each fiber $Q_t$ of the fiber bundle $\Phi \cong \mathbb R \times Q$ is the configuration space at time $t$ covered by coordinates $q^1,\ldots, q^n$. In this sense $\Phi$ is the spacetime of configurations. All Lagrangian mechanics is next constructed in $j^1\Phi$. Here the fiber $A_t$ at $t\in \mathbb R$ admits natural local coordinates $q^1,\ldots,q^n, \dot{q}^1,\ldots, \dot{q}^n$. The Lagrangian function is nothing but a map
$$j^1\Phi \ni (t,q^1,\ldots,q^n, \dot{q}^1,\ldots, \dot{q}^n) \mapsto L(t, q^1,\ldots,q^n, \dot{q}^1,\ldots, \dot{q}^n)\in \mathbb R$$
and Euler-Lagrange equations now read
$$\frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}^k}\right) - \frac{\partial L}{\partial q^k} = 0\:, \quad \frac{dq^k}{dt} = \dot{q}^k\:.$$ | {
"domain": "physics.stackexchange",
"id": 17280,
"tags": "classical-mechanics, mathematical-physics, lagrangian-formalism, field-theory"
} |
How can I turn custom robot into a PROTO in Webots 2023? | Question: I created a custom robot using the Robot Node by importing some meshes and creating boundary boxes with primitive shapes. I want to use it in another world, and I know the process involves creating a Proto. My question is: How can I turn my robot into a PROTO? In the past, I remember that right-clicking the node would display the "Export to Proto" option. Is there a way to do this in Webots 2023?
Answer: The only way to do this is to:
Open a text editor
Load the world file containing your Robot node in the text editor
Create a new PROTO file from scratch in the text editor
Copy your Robot node from the world file
Paste it in the body of your PROTO node
This whole procedure is explained in detail in this tutorial.
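For orientation, a minimal PROTO skeleton has the following shape (a hedged sketch: the header version string, PROTO name, and exposed fields are illustrative and should match your Webots release and your robot):

```
#VRML_SIM R2023b utf8
PROTO MyRobot [
  field SFVec3f    translation 0 0 0
  field SFRotation rotation    0 0 1 0
  field SFString   name        "MyRobot"
]
{
  Robot {
    translation IS translation
    rotation    IS rotation
    name        IS name
    # Paste the rest of your Robot node (children, boundingObject,
    # physics, controller, ...) from the world file here.
  }
}
```

The `IS` bindings expose chosen fields of the inner Robot node as parameters of the PROTO, so each instance can be placed and named independently.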
Disclaimer: I am a Webots developer working at Cyberbotics. | {
"domain": "robotics.stackexchange",
"id": 38656,
"tags": "webots"
} |
Why Angular velocity is a vector quantity and why it got direction perpendicular to the plane | Question: Linear velocity is a vector, which I can accept, since its direction comes from the displacement. But angular velocity is the angle covered in a certain time, and angles have no direction, so how come it has one?
P.S. I came to know it's something of a pseudo-vector but didn't understand it, so please explain it in simple terms. Thanks!
Answer: In the simplest case, a point mass or particle or any other suitable abstraction at ($\vec r$) moving (at velocity $\vec v$) through space (with an origin), one can define an angular velocity about the origin:
$$ \vec \omega = \vec r \times \vec v $$
where the cross product's three components are defined by:
$$ \omega_i = \epsilon_{ijk}r_jv_k $$
What's that? That's the same as defining an antisymmetric rank-2 tensor:
$$ \omega_{ij} = r_iv_j - r_jv_i ,$$
which has 3 independent components that transform under rotations just like an ordinary vector.
Under reflections (aka coordinate inversion, aka parity transformations), the angular velocity does not transform like a vector:
$$ \vec r \rightarrow -\vec r$$
$$ \vec v \rightarrow -\vec v$$
(that is, vectors are odd), while:
$$ \vec\omega \rightarrow +\vec\omega $$
The angular velocity "vector" is even, just like a rank-2 tensor. It is for this reason that it is called an axial vector.
Sometimes "axial-vector" is considered synonymous with "pseudo-vector", but their is a distinction: pseudo-vectors depend on the origin.
If I translate the orgin:
$$ \vec r \rightarrow \vec r + \vec a, $$
then $\vec \omega$ changes. Real vectors, like $\vec v$ and $\vec a$, don't do that. Of course, that leaves $\vec r$ out in the lurch, because it's not really a vector either, since it has an orgin which breaks translation symmetry. Really $\vec r$ is an affine point, and in any serious physics formula you're always talking about $\vec r - \vec r'$, which is a vector. | {
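A quick numerical sketch of these transformation properties (the position and velocity values are arbitrary choices of mine, for illustration only):

```python
import numpy as np

r = np.array([1.0, 2.0, 0.5])    # arbitrary position (assumption)
v = np.array([0.3, -0.4, 1.0])   # arbitrary velocity (assumption)

omega = np.cross(r, v)

# Parity: true vectors flip sign, but the axial vector omega = r x v does not.
omega_after_parity = np.cross(-r, -v)

# Translating the origin changes omega, unlike a real vector such as v --
# this is the origin-dependence that makes it a pseudo-vector in the
# answer's sense.
a = np.array([1.0, 0.0, 0.0])
omega_shifted_origin = np.cross(r + a, v)
```

Running the two checks confirms that `omega` is even under parity but not invariant under a shift of origin.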
"domain": "physics.stackexchange",
"id": 66557,
"tags": "rotational-dynamics, vectors, angular-velocity"
} |
FFT of SIN waves with different phase delays | Question: I have come across a peculiarity of FFTs which has got me somewhat baffled.
I've simply summed up 101 sine waves and taken the FFT using this matlab script :
clear all
f=1e9; % Centre Frequency 1GHz
df=2.5e6; % Carrier spacing 2.5MHz
Time=linspace(-100e-9,100e-9,1000); % Region of time to simulate over
delay=0;
Voltage=Time.*0; % Initialise Voltages to zero
for loop=-50:50 % Sum 101 carrier Frequencies
Voltage=Voltage+sin(2.*pi().*(f+df.*loop).*(Time-delay));
end
figure(1) %Plot Time dependent response
subplot(2,1,1)
plot(Time,Voltage)
subplot(2,1,2) %Plot Frequency Content
dt=Time(2)-Time(1);
frequency=linspace(-0.5/dt,0.5/dt,1000);
spectrum=fftshift(fft(Voltage));
plot(frequency,abs(spectrum))
The output is as I had expected, with the correct frequency content :
However, if I simply add a significant time delay (by re-running the script with delay=150e-9;, such that the main constructive interference lobe disappears outside the calculation window) the frequency content of the resulting time trace collapses to two peaks.
However, the time trace is still the summation of 101 sine waves, albeit now out of phase because of the introduced delay? Intuitively I would have expected the absolute frequency content of the trace to be preserved and only the phases modified by the delay. Upon reflection I can perhaps understand that the frequency content must be modified on energy conservation grounds, but can anybody rationalise what is going on here?
Answer: You aren't looking at the frequency content of your sum of sinewaves. You are looking at the frequency content of a rectangular window on your sinewaves, and the window (FFT length) is shorter than the least common multiple of all the sinewave periods.
Signals that are not orthogonal within a window can cancel each other out partially or completely within that window. What has happened is that you've chosen a window of a length such that each sinewave can get nearly completely cancelled out by the next higher and lower frequency pair of sinusoids of the "right" phase. The ones on the ends are the only ones that aren't sandwiched, and thus aren't cancelled. | {
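The cancellation is easy to reproduce; here is an equivalent NumPy sketch (my own port of the MATLAB script above, not part of the original post) comparing the windowed spectra with and without the delay:

```python
import numpy as np

f0, df = 1e9, 2.5e6                       # centre frequency and carrier spacing
t = np.linspace(-100e-9, 100e-9, 1000)    # 200 ns observation window

def windowed_spectrum(delay):
    # Sum of 101 carriers, then the magnitude of the (shifted) FFT of the
    # rectangular-windowed record.
    v = sum(np.sin(2*np.pi*(f0 + k*df)*(t - delay)) for k in range(-50, 51))
    return np.abs(np.fft.fftshift(np.fft.fft(v)))

s_no_delay = windowed_spectrum(0.0)
s_delayed = windowed_spectrum(150e-9)     # main lobe pushed outside the window

# With the delay, the carriers largely cancel inside the window: the total
# spectral energy collapses and far fewer bins carry significant magnitude.
```

Comparing the two arrays shows the delayed record carries only a small fraction of the undelayed record's energy, concentrated in a few bins, matching the "two peaks" observation.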
"domain": "dsp.stackexchange",
"id": 3441,
"tags": "fft, signal-analysis, fourier-transform, fourier-series"
} |
Filling a buret tip before an experiment | Question: I have a question that asks if the buret tip is not filled before the experiment what would happen (effect on molarity for acid/base titration). The way I'm thinking about it, it's not like you're putting a pre measured solution into it. You fill it as high as you fill it and you can't put anything in without getting the tip filled up too, so I don't see where that will change anything. Am I thinking about that right?
Answer:
You fill it as high as you fill it and you can't put anything in without getting the tip filled up too,
No, I don't think it will work like that. Usually you have the stopcock or valve closed at the bottom. Consequently, the tip will be empty until you bleed the buret. Further, you will often bleed (allow the titrant to flow through) a buret while tapping the tip to remove any air bubbles that may form directly under the stopcock when your titrant flows initially through the tip. Once you have titrant in the tip and the tip is devoid of air bubbles, you will then close the valve and fill the buret to the desired level.
At this point you can better determine just how much you have used.
The graduation marks on the buret itself take into account what is in the tip below the stopcock. But if you do not bleed the tip initially, filling it with your titrant, then your first addition will be a bit off, because the tip does not fill entirely with liquid before you allow some to drip through.
Here is a good write-up on titrations with burets.
A bubble in the nozzle of a buret will produce an inaccurate volume reading if the bubble escapes during a titration. Bubbles may be large and visible as shown above left, or so small as not to be seen, above center. During a titration such small bubbles begin to move in the direction of the nozzle but may remain in place even though there is a moderate flow of titrant (above right). Even when the buret valve is wide open some bubbles remain in place until you take your eyes off them. Then they sneak through the nozzle and ruin your titration. Also, if you let an air bubble stay because you think it will remain, and later it leaves through the tip, you will have to start over, because you have no idea what volume the bubble displaced.
"domain": "chemistry.stackexchange",
"id": 2199,
"tags": "acid-base, experimental-chemistry, titration"
} |
Do the polynomials in "polynomial time" have integer, real or complex coefficients? | Question: This is probably a very basic question but do the polynomials in "polynomial time" have integer, real or complex coefficients?
Everywhere I looked it just says "polynomial expression". I am guessing the polynomial must have integer coefficients?
Answer: When people say polynomial time, they mean that the time has polynomial growth, that is if we denote the time by $T(n)$, then $T(n) = O(n^c)$ for some real $c$. We get the same definition if we only allow integer $c$. Since $T(n)$ is real-valued, it wouldn't make sense to consider complex polynomials here.
The exact running time need not actually be a polynomial. For example, an algorithm running in time $n\log n$ (exactly) still runs in polynomial time, since (for example) $n\log n = O(n^2)$. | {
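A one-line numerical illustration of that closing claim (my own addition): since $\log n \le n$ for $n \ge 1$, the exact running time $n \log n$ is bounded by $n^2$, exhibiting $T(n) = O(n^2)$ with constant $1$:

```python
import math

# T(n) = n*log(n) is bounded above by n^2 for every n >= 1, because
# log(n) <= n; this is a concrete witness that n*log(n) is O(n^2).
bound_holds = all(n * math.log(n) <= n * n for n in range(1, 100_000))
```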
"domain": "cs.stackexchange",
"id": 2467,
"tags": "complexity-theory"
} |
How to produce a bigger rotational velocity from two rotational velocities? | Question: If I have two electronic motors, both running at the same voltage and current, delivering the same rotational velocity, how could I produce a bigger rotational velocity from them?
Answer: Let the first and second rotational velocities of the two electric motors be $\overrightarrow{\omega}_1$ and $\overrightarrow{\omega}_2$, respectively.
To achieve the largest resultant of these vectors, we must add them in the same direction along the same rotational axis ($\theta = 0$).
$$
\begin{align*}
\overrightarrow{\omega}_{\mathrm{resultant,max}}
&= \overrightarrow{\omega_1} + \overrightarrow{\omega_2} \\
\left|\overrightarrow{\omega}_{\mathrm{resultant,max}}\right|
&= \sqrt{\left|\overrightarrow{\omega_1}\right|^2 + \left|\overrightarrow{\omega_2}\right|^2 + 2 \left|\overrightarrow{\omega_1}\right|\left|\overrightarrow{\omega_2}\right|\cos{\theta}}\\
&= 2\omega \qquad \left(\text{taking } \left|\overrightarrow{\omega}_1\right| = \left|\overrightarrow{\omega}_2\right| = \omega,\ \theta = 0\right)
\end{align*}
$$
"domain": "physics.stackexchange",
"id": 9641,
"tags": "newtonian-mechanics, electricity, electrical-engineering"
} |
catkin_make » /usr/bin/ld: cannot find -lcsparse | Question:
Hi,
I'm trying to get LSD_SLAM working but when running the catkin_make command at 91% I get the error message /usr/bin/ld: cannot find -lcsparse.
I already tried to softlink using this command sudo ln -s /usr/local/lib/libg2o_solver_csparse.so /usr/bin/lcsparse.so but this doesn't fix it.
here is the full error message:
[ 91%] Linking CXX shared library /home/adas/lsd-slam_workspace/devel/lib/liblsdslam.so
/usr/bin/ld: cannot find -lcsparse
collect2: error: ld returned 1 exit status
lsd_slam/lsd_slam_core/CMakeFiles/lsdslam.dir/build.make:755: recipe for target '/home/adas/lsd-slam_workspace/devel/lib/liblsdslam.so' failed
make[2]: *** [/home/adas/lsd-slam_workspace/devel/lib/liblsdslam.so] Error 1
CMakeFiles/Makefile2:2385: recipe for target 'lsd_slam/lsd_slam_core/CMakeFiles/lsdslam.dir/all' failed
make[1]: *** [lsd_slam/lsd_slam_core/CMakeFiles/lsdslam.dir/all] Error 2
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed
To be honest I don't know what else I should try, as there is nothing called lcsparse in apt either.
Best regards,
Alex
Software:
Ubuntu 18.04
ROS-Melodic
catkin_tools 0.4.5
Originally posted by MrMinemeet on ROS Answers with karma: 41 on 2019-07-17
Post score: 0
Answer:
I opened an issue on Stackoverflow too.
Eventually found the solution myself.
Here is the link to Stackoverflow
https://stackoverflow.com/questions/57077832/catkin-make-usr-bin-ld-cannot-find-lcsparse/57092644#57092644
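For readers who cannot follow the link: note that `-lcsparse` makes the linker search the library path for a file named `libcsparse.so` (or `.a`) — not for `lcsparse.so`, and `/usr/bin` is not on the library search path, which is why the symlink attempted in the question had no effect. A hedged sketch of the usual shape of the repair (the package and paths are assumptions for Ubuntu 18.04; locate the actual CSparse library on your own system first):

```shell
# Find an existing CSparse/CXSparse build on the system
sudo apt-get install libsuitesparse-dev
ldconfig -p | grep -i sparse

# Create a properly named lib<name>.so symlink in a real library directory
# (replace the source path with the one found above), then refresh the cache
sudo ln -s /usr/lib/x86_64-linux-gnu/libcxsparse.so.3 /usr/lib/libcsparse.so
sudo ldconfig
```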
Originally posted by MrMinemeet with karma: 41 on 2019-07-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33455,
"tags": "ros-melodic"
} |
Can you sail up/down a moving river on a windless day? | Question: It's not possible to sail into the wind directly, but only at an angle. It can be shown (it may help for below) by considering the pressure forces on a flat sail and flat keel that sailing upwind is only possible because of the angle between the sail and keel.
Suppose a sailboat is on a long, wide, straight river with a current of $0.5\,\text{m/s}$ on a windless day (there's no wind with respect to the river bank).
Can the sailboat travel faster downstream than $0.5\,\text{m/s}$ by raising its sails and sailing?
Is there any way the sailboat can sail upstream on the slow-moving river on this windless day?
The source of this question is a bathroom physics problem posted above university physics department toilets near grad offices. Problems are submitted by department members. To the best of my knowledge, none of these were ever homework questions. This question is minimally modified from a problem that was provided by Matt Kleban.
Answer: The answer to 1 is yes. It is easiest to analyze all this in the frame of the water.
In the frame of the water there is a $0.5\frac{\mathrm{m}}{\mathrm{s}}$ wind directed upriver. Sailboats are able to sail into the wind, albeit at an angle. This is called sailing close-hauled, or beating. In the frame of the water the boat will be moving into the wind --- albeit at an angle --- and hence in the frame of the riverbank the sailboat will be moving downriver faster than $0.5\frac{\mathrm{m}}{\mathrm{s}}.$ You can see how close-hauled sailing works in this sketch
The sail acts like an airplane wing. As the wind passes the sail, the wind is redirected, creating both lift and drag. If the lift is large enough and the angles are right, the net force can have a component that pushes the boat forward. This will accelerate the boat into the wind at an angle until the drag from the water balances the driving force from the sail.
The answer to 2 is also yes. Sailing upriver in the riverbank frame means sailing downwind faster than the wind in the frame of the river. Modern sailboats are actually able to move downwind faster than the wind by sailing at an angle to the wind. See "High-performance sailing", Wikipedia. So it is possible (in theory) to sail upriver in this case.
Here is a sketch demonstrating how this works.
The boat sails downwind at an angle to the wind, called "broad reach". As it picks up speed the wind in the frame of the boat changes direction. If the downwind component of the velocity is greater than the speed of the wind then in the boat frame it looks like the boat is sailing into the wind. Since we already know that boats can sail into the wind, the boat can maintain its downwind velocity.
You may wonder how the boat can get going fast enough to do this in the first place. To understand how this could happen, first imagine sailing sideways on the river from bank to bank. The only limit on how fast you can go in this direction is the drag from the water and how well your sails are designed; the $0.5\frac{\mathrm{m}}{\mathrm{s}}$ crosswind can accelerate you to arbitrary speeds. First attain a very high speed this way.
Next, turn your boat upriver. If you angle your boat right and you obtained a high enough speed, the upriver component of your velocity can be arbitrarily greater than $0.5\frac{\mathrm{m}}{\mathrm{s}}$, and so you will be moving upriver in the riverbank frame at arbitrarily high speeds. You can then set your sails for the apparent wind to maintain your faster-than-downwind motion.
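The broad-reach argument can be made concrete with a tiny vector computation (the boat velocity here is an illustrative assumption of mine, not a value from the original answer). Work in the frame of the water, where the wind blows upriver along $+x$ at 0.5 m/s:

```python
import numpy as np

# True wind in the water frame: 0.5 m/s along +x (upriver).
wind = np.array([0.5, 0.0])

# Assumed boat velocity on a broad reach: its upriver (+x) component
# already exceeds the wind speed itself.
boat = np.array([0.8, -1.2])

# Apparent wind felt on board. Its x-component is negative: in the boat
# frame the wind has a head-on component, i.e. the boat is effectively
# sailing close-hauled -- a point of sail we already know works.
apparent = wind - boat
```

So a boat moving downwind faster than the wind simply sees an apparent headwind component, which a close-hauled sail setting can sustain.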
"domain": "physics.stackexchange",
"id": 49655,
"tags": "newtonian-mechanics, classical-mechanics, pressure, reference-frames"
} |
Reference Point for Potential Energy | Question: The title is quite misleading, sorry for that; I couldn't think of any other title.
Energy is a certain quantity whose value is constant anywhere in space. There are different forms of energy such as kinetic and potential energy which interchange in the prescence of a gravitational field.
Suppose a ball from infinity is brought on earth and kept on a tall building. What potential energy would it have? Would it be the same if a ball from ground is kept on a that same tall building?
I am asking this because it is given in my book that "the choice of reference point is your own".
Answer: If the text says to calculate the potential energy, it means the potential energy difference measured from your chosen reference point. Take the following example, where I choose $y_0$ to be my reference point.
$$\int_{U(y_0)}^{U(y)}dU=\int_{y_0}^{y}mgdy$$
$$U(y)-U(y_0)=mg(y-y_0)$$
Now, if I choose $U(y_0)$ to be $0$, it's a matter of convenience: now $U(y)$ is the potential energy difference w.r.t. my reference point. So you can absolutely take your reference at infinity and bring the ball from infinity to the roof of the building, but be aware that the potential energy difference is now w.r.t. infinity. The value of your "potential energy" varies depending on your reference point.
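A two-line numerical check (my own sketch; the mass and heights are arbitrary assumptions) that the potential energy *difference* between two heights does not depend on where $U=0$ is placed:

```python
m, g = 1.0, 9.8  # arbitrary mass (kg) and gravitational acceleration (m/s^2)

def U(y, y_ref):
    """Potential energy with the zero of potential chosen at height y_ref."""
    return m * g * (y - y_ref)

# Delta U between y = 10 m and y = 30 m, computed with three different
# reference points, is always the same: m*g*(30 - 10) = 196 J.
deltas = [U(30.0, ref) - U(10.0, ref) for ref in (0.0, -100.0, 30.0)]
```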
"domain": "physics.stackexchange",
"id": 67378,
"tags": "energy, reference-frames, energy-conservation, potential-energy, conventions"
} |
What happens to the hollow nerve cord? | Question: The dorsal nerve cord of vertebrates is a hollow structure that develops into the nervous system. This embryonic tissue is hollow and I wonder what happens to this 'hollow' later?
Does it form the ventricles of brain?
Answer: The "hollow" i.e. the neural canal develops into the ventricles, the cerebral aqueduct and the spinal canal (Wikipedia; also see this site).
For a more authoritative reference, see this book by Haines and Ard† (page 82 onwards in google books; google book links are apparently transient and they expire; therefore I didn't provide one).
† Haines, Duane E., and M. D. Ard. Fundamental neuroscience for basic and clinical applications. Philadelphia, PA: Elsevier/Saunders, 2013. ISBN: 9781437702941 | {
"domain": "biology.stackexchange",
"id": 6213,
"tags": "brain, development"
} |
Why is the volume related to the probability of finding an electron? | Question: Thank you for your time.
I was wondering about the correlation between volume and probability of finding an electron.
We know that if we move away from the nucleus, we get a lower probability of finding an electron; this happens with a 1s sub-level.
But when I want to know (graphically) the probability of finding an electron in a 2s sub-level, the volume comes up.
My professor said during the lessons: "The PROBABILITY increases because the PROBABILITY DENSITY decreases, but the VOLUME increases."
He also said that this is true up to a certain point, since after that it (the volume) starts to decrease.
He then said that the DENSITY OF PROBABILITY is 0 at the nucleus.
What I wrote above is probably messy. I'm just looking for a better explanation of this concept, since I didn't find anything online that talks about probability density and volume.
Answer: There is a straightforward relationship between g(r) - the radial distribution function of the electron (probability per unit radius) - and $\rho(r)$ - the electron density (probability per unit volume).
You can write that
$$\mathrm{g(r) = 4\pi r^2\rho(r)}$$
Here all that's been done is to integrate $\rho(r)$ over a concentric spherical shell of area $4\pi r^2$ surrounding the origin, removing the angular dependence of the density. This works for an s-orbital function with no angular variations in the density.
The $r^2$ factor means that even when the density becomes small away from a nucleus, the total radial probability can be substantial (and reaches a maximum). Inversely, you can have a very high volume density but very low radial probability, for instance at the origin of an s-orbital, where $r^2$ goes to 0.
This is shown schematically (not intended to depict a real H-atom) in the following:
where the red curve illustrates the behavior of g(r), the blue curve of the density. | {
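A short numerical illustration (my own sketch, using the hydrogen 1s orbital in atomic units as an assumed concrete case): the density $\rho(r)=e^{-2r}/\pi$ is largest at the nucleus, while $g(r)=4\pi r^2\rho(r)$ vanishes there and peaks at the Bohr radius $r=1$:

```python
import numpy as np

r = np.linspace(0.0, 10.0, 100001)   # radius in Bohr radii
rho = np.exp(-2.0 * r) / np.pi       # H 1s probability density (atomic units)
g = 4.0 * np.pi * r**2 * rho         # radial distribution function

# The density is maximal at r = 0, but g(0) = 0 because the shell volume
# 4*pi*r^2*dr shrinks to zero there; g peaks where the two effects balance.
r_peak = r[np.argmax(g)]
```

This is exactly the professor's point: moving outward, the density falls but the shell volume grows, and the competition produces a maximum in the radial probability.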
"domain": "chemistry.stackexchange",
"id": 11467,
"tags": "quantum-chemistry"
} |
Questions regarding Vacuum in space | Question: The mention of vacuum has always created questions in my mind. Isn't there a near-complete vacuum in space?
If there is, then why doesn't the oxygen travel from the suit to space (because of the large pressure difference)?
And what happens to our blood pressure in space?
I read somewhere that the only things that could kill us in space are lack of oxygen and boiling of body fluids. Is this true?
Answer: The vacuum of space is only 14.7 psi below normal atmospheric pressure. You probably wouldn't want to vacation there, but it's not enough to explode you Hollywood style.
Medical oxygen tanks are pressurized up to 2200 psi. That's a pressure difference roughly 150 times greater than the one between the inside of your body and the vacuum of space. Obviously, human bodies aren't as strong as oxygen tanks, but the same principle applies.
It's not about the pressure difference between inside and out. It's about the internal pressure overpowering the vessel it's in. As long as your skin and blood vessels don't break, the oxygen has nowhere to go. (Of course, the strain on your skin and blood vessels is related to that pressure difference, but there's just not enough to tear through most places on your body.)
For the same reasons, your blood pressure won't change unless something ruptures. It's likely you get some capillaries near the surface that do rupture, but they're so tiny the blood in your body can't escape very quickly. I'd guess your body would stop the bleeding before your blood loss was high enough to substantially alter blood pressure, but I've never tried it.
Now, if your lungs are exposed, things change. We're designed to work at normal atmospheric pressure. In a vacuum, the blood/lung interface will likely allow bodily fluids to boil out, but you'll probably pass out and die from the lack of oxygen before your blood boils out substantially.
Similarly, your sweat glands will start dehydrating you, because space is extremely dry. But your body isn't an open system, so it won't instantly suck all the moisture out of your body.
Also, I'm guessing your ears, nose, tear ducts, etc. won't like you after your sojourn to space if you leave them unprotected, but I'm not sure how much actual damage would be done. The air pressure inside your eardrums would evacuate via the Eustachian tube, but the tube is normally closed by muscles in the back of the throat. It's possible the pressure could rupture your eardrum before the pressurized air forced its way through your throat.
Your bowels and bladder would tend to empty themselves, and the extra pressure might be too much for the usual protections to function, but I'm not sure. The same problem would happen at the other end, and you might well end up with your lunch flying off into space.
Space itself has no temperature, but you're still bound by blackbody radiation. The ~2.7 K microwave radiation isn't enough to keep you warm, so you'd freeze to death as your body heat evaporates and radiates out of you without nearby heat sources. And something like the 6000 K radiation from the sun would burn you alive if you were too close, because you couldn't remove heat fast enough.
Also, I'm sure you'll end up with some nasty skin problems with how dry space is. The longer you're up there, the worse the problems will get.
With a spacesuit on, the pressure differences are placed on the suit, rather than your sensitive bits, so you can remain comfortable indefinitely. Additionally, spacesuits have oxygen tanks and are designed to keep you at a decent temperature. | {
"domain": "physics.stackexchange",
"id": 25211,
"tags": "pressure, vacuum, biophysics"
} |
Picturing the density inhomogeneity of matter at recombination | Question: The CMB was formed at the time of recombination which suggests that the analysis of the temperature anisotropies of the CMB helps to infer the profile of density fluctuations at the time of recombination. If that is true, is it possible to reconstruct a picture (analogous to this) of the density distribution of matter (including ordinary and dark stuff) at that moment?
Answer: The answer is yes. At the time of last scattering, on large scales ($\theta>\theta_H$), the blueshifted and redshifted parts of the CMBR can be explained by the dark matter distribution/fluctuations. This effect is also called the "non-integrated Sachs–Wolfe effect".
Dark matter was dominant in these fluctuations because at the time of the last scattering when we calculate the energy densities we see that,
$\epsilon_{dm}>\epsilon_{rad}>\epsilon_{bary}$
Essentially, photons can pass through less dense areas without losing much energy, so these areas appear blueshifted; meanwhile, photons coming from denser areas become redshifted.
We can also use the idea of potential wells and potential maxima to describe the situation. In climbing out of a potential well, a photon loses energy and consequently is redshifted. Conversely, a photon which happens to be at a potential maximum when the universe became transparent gains energy as it falls down the "potential hill", and thus is blueshifted.
So when we look at the CMBR the blueshifted areas represent less dense areas meanwhile, the redshifted ones more dense areas.
In the CMBR we can also see the matter effects, but in small-scale fluctuations. This effect can be explained by the acoustic oscillations of the photon-baryon fluid, caused by the dark matter potential wells. After the photon decoupling era, when the fluid makes an oscillatory motion (compresses and expands), it emits photons which are redshifted and blueshifted due to the Doppler effect caused by this compression and expansion of the fluid.
We can also see why these fluctuations happen on the small scales: since the dark matter energy density is greater than the others, it creates the largest effect on the CMBR.
"domain": "physics.stackexchange",
"id": 55558,
"tags": "cosmology, cosmic-microwave-background"
} |
Validity of elastic collision equation | Question: There is a 2.0-kg disk travelling at 3.0 m/s on frictionless ice. It strikes a 1.0-kg stick of length 4.0 m that is lying flat on the same frictionless ice. Assume elastic collision and that the disk does not deviate from its original line of motion, find the translational speed of the disk, $v_{df}$. Let $m_d$ be the mass of the disk and $m_s$ be the mass of the stick. Let $v_{d}$ be the velocity of the disk and $v_{s}$ be the velocity of the stick.
The solution used the conservation of linear momentum, conservation of angular momentum and conservation of mechanical energy:
$$m_dv_{di}= m_dv_{df}+m_sv_s $$
$$\frac{l}{2} m_dv_{di} = \frac{l}{2} m_dv_{df} + I_{CM} \omega $$
$$\frac{1}{2} m_dv_{di}^2 = \frac{1}{2} m_dv_{df}^2 +\frac{1}{2} m_sv_{s}^2+ \frac{1}{2} I_{CM} \omega^2 $$
Which, after solving, got a solution of:
$v_{df} = \frac{7}{3}\ \mathrm{m/s}$
My question is: Why does the relation $u_1 -u_2 = v_2 - v_1$ not work in this case? Isn't it an elastic collision? When I used this:
By conservation of linear momentum,
$$m_dv_{di}= m_dv_{df}+m_sv_s $$
$$(2.0)(3.0) = (2.0)(v_{df}) +(1.0)(v_s)$$
hence
$$v_s = 6-2 v_{df}$$
from $u_1 -u_2 = v_2 - v_1$,
$$3.0 - (0) = (v_s)-(v_{df})$$
hence
$$v_s = 3+v_{df}$$
Combining and doing some basic manipulation,
$$v_{df}=1.0\ \text{m/s}$$
Which contradicts the solution.
Please let me know where I may have made some wrong assumptions. Thanks!
Answer: You are comparing the velocities of the centers of mass, instead of the velocities of the points of contact.
The true elastic contact relationship is expressed for the contact point A.
$$ (v_{df} - v_{sf}^A) = -\epsilon (v_{di} - v_{si}^A) $$ where $v_{si}^A = 0 $, and $v_{sf}^A = v_{sf} + c\,\omega_f $, with $c$ the distance of the center of mass of the stick to the contact point, and $\epsilon=1$ the COR. | {
"domain": "physics.stackexchange",
"id": 44148,
"tags": "angular-momentum, energy-conservation, collision"
} |
Probability of collision between two particles (Statistical Mechanics) | Question: I'm pretty new to statistical mechanics. While reading an introductory book ("Fisica - Meccanica e termodinamica", translated "Physics - Mechanics and Thermodynamics" by C. Mencucci and V. Silvestrini, it's an italian book). I've stumbled on the section talking about the velocity distribution function (chapter 17, pages 583 - 584 - 585). Now the book introduces three functions:
$n(\vec{r}) = {\rho (\vec{r}) \over m}$ which is the particles density. $\rho (\vec{r})$, function of position $\vec{r}$, is the density, $m$ is the mass of a particle of an ideal gas.
$dN(\vec{r}) = n(\vec{r}) dV$ which is the number of particles inside the volume element
$P(\vec{r}) = {n(\vec{r}) \over N}$ which is the particle space distribution function, that is said to satisfy the property for which:
$\iiint P(\vec{r}) dV = {1 \over N} \iiint n(\vec{r}) dV = {1 \over N} \iiint dN(\vec{r}) = 1$
Where $N$ is the total number of particle in $dV$. Now these functions can be generalized to the velocity distribution context, thus:
$N(\vec{v}) = n(\vec{v}) dV$
$P(\vec{v}) = {n(\vec{v}) \over N}$
Now the first question: how do I physically interpret this functions? Should I even try to physically interpret them or should I just stick with the mathematics instead?
Later the author continues and infers that the function $P(\vec{v})$, if there are no outer stresses (if the velocity distribution is isotropic) could be defined to be only a function of $v^2$ thus a function of $K$ (kinetic energy). Henceforth, from there on, the author uses $P = P(K)$.
The following section is the most problematic to me: imagine two particles, whose initial energies are $K_1$ and $K_2$, colliding with each other, with resulting energies $K_1'$ and $K_2'$. The probability for this to happen is said to be proportional to $P(K_1)P(K_2)$ with proportionality constant $C$:
$p = CP(K_1)P(K_2)$
my question is why? I tried to figure it out by substituting $P(K)$ with ${n(\vec{v}) \over N}$. $dN(\vec{v}) = n(\vec{v}) dV$ represents - as it has been explained - all the particles in $dV$ with velocities between $\vec{v}$ and $\vec{v} + d\vec{v}$. Therefore I could say that, since for a collision to happen the particles need to have a velocity directed almost towards the impact point (considering the particle not a point but a sphere with non-zero radius), thus between a certain velocity $\vec{v}$ and a certain other velocity $\vec{v} + d\vec{v}$, and since $dN(\vec{v})$ represents all the possible velocity configurations (or at least I think so, as $N$, in the case of $n(\vec{r})$, represented the number of particles in $dV$), $P(\vec{v}) dV$ can be thought of as some kind of probability. But even if my reasoning is correct, which seems unlikely, where is the $dV$ factor? Is $P(\vec{v}) = P(v) = P(K)$? Why is there the need for the $C$ constant?
Answer:
$\iiint P(\vec{r}) dV = {1 \over N} \iiint n(\vec{r}) dV = {1 \over N} \iiint dN(\vec{r}) = 1$
Where $N$ is the total number of particle in $dV$. Now these functions can be generalized to the velocity distribution context, thus:
$N(\vec{v}) = n(\vec{v}) dV$
Here you are changing from a number density $n(\vec r)$ that changes in space to a number density $"n"(\vec v)$ that changes in velocity space. In the latter function, I put the "n" in quotes, since it is clearly a different function than the first n function. This kind of bad notation turns out to be a running theme in much of physics.
But you are using the same symbol for both, which can cause some confusion. The functional form of $n(\vec r)$ and the functional form of $n(\vec v)$ are not the same, so it is confusing to use the same symbol for the function (since the only differentiator is the letter you chose to denote the function argument, which is just a dummy variable).
$P(\vec{v}) = {n(\vec{v}) \over N}$
Same here. This is now a probability that the particle has velocity $v$, which is a different function than, say, the spatial probability.
Now the first question: how do I physically interpret this functions?
You are likely leading towards the interpretation that
$$
P(\vec v)d^3v
$$
is the probability that a particle has velocity in the range $d^3v$ about $\vec v$. For example, at a fixed temperature T, a classical gas of particles would have
$$
P(\vec v) = Ae^{-\frac{mv^2}{2T}}\;,
$$
where $A$ is chosen such that the integral over all velocities is 1.
Should I even try to physically interpret them or should I just stick with the mathematics instead?
Yes, both. The physical interpretation is as stated above. But also stick with the mathematics.
Later the author continues and infers that the function $P(\vec{v})$, if there are no outer stresses (if the velocity distribution is isotropic) could be defined to be only a function of $v^2$ thus a function of $K$ (kinetic energy). Henceforth, from there on, the author uses $P = P(K)$.
The author is performing a common abuse of notation. The functional form of
$P(K)$ is not the same as the functional form of $P(v)$ but the author is using the same letter to denote the function because of some strange desire to always denote a probability density with the letter $P$.
If we don't force ourselves to (confusingly) use the same letter, instead we would write
$$
1 = \iiint d^3v P(v) = 4\pi \int dv v^2 P(v) = 4\pi \int dK \frac{dv}{dK} v^2(K) P(v(K))\equiv \int dK \tilde P(K)\;,
$$
and then we would read off the probability density in kinetic energy $\tilde P(K)$ (where I use $\tilde P$ instead of just plain $P$ to avoid confusion) as:
$$
\tilde P(K) = \frac{4\pi}{mv(K)} v^2(K)P(v(K)) = \frac{4\pi}{m\sqrt{2K/m}}\frac{2K}{m}P(\sqrt{\frac{2K}{m}})
$$
The following section is the most problematic to me: imagine two particles, whose initial energies are $K_1$ and $K_2$, colliding with each other, with resulting energies $K_1'$ and $K_2'$. The probability for this to happen is said to be proportional to $P(K_1)P(K_2)$ with proportionality constant $C$:
$p = CP(K_1)P(K_2)$
my question is why?
At this point, your $P(K)$ denotes the probability density that the particle has kinetic energy $K$. The probability you are interested in is roughly the probability that a particle with energy close to $K_1$ gets close (in space) to a particle with energy close to $K_2$. In general, this is some complicated function:
$$
P_2(\vec r, K; \vec r', K')\;,
$$
but as above, we assume we can ignore spatial variation (i.e., there is no dependence on $r$ and $r'$), and then we further go to the so-called "mean field" or "molecular chaos" approximation in which we can treat the particles as independent. This means that the probability of "A and B", $P_2(\vec r_A, K_A; \vec r_B,K_B)$, can be factored into the unconditional probability of "A" multiplied by the unconditional probability of "B". I.e.,
$$
P_2(\vec r, K ; \vec r', K') \to CP(K)P(K')\;,
$$
where $C$ is whatever constant arises from all the spatial integrals, etc., i.e., just whatever it needs to be to make the total integral equal to 1. Of course, if you have already normalized the $P(K)$ correctly, the $C$ might already be equal to 1, but it's not clear without more context.
I tried to figure it out by substituting $P(K)$ with ${n(\vec{v}) \over N}$.
Use the example above for the Boltzmann distribution to understand how to make the change of variables correctly. You basically use $P(K) = \frac{dv}{dK}P(v(K))$ with the understanding that the two $P$s are actually different functions on each side of the equation. (Bad confusing physicist notation). | {
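As a numeric sanity check of this change of variables, here is a sketch with $m = T = 1$ (units where $k_B$ is absorbed into $T$, as in the Boltzmann example above), verifying that both $P(\vec v)$ over $d^3v$ and the derived $\tilde P(K)$ over $dK$ integrate to 1:

```python
import numpy as np

# Check, with m = T = 1, that the velocity-space density P(v) and the
# derived kinetic-energy density P~(K) both integrate to 1.
m, T = 1.0, 1.0
A = (m / (2.0 * np.pi * T)) ** 1.5          # normalizes P(v) over d^3v

def trapezoid(y, x):
    """Plain trapezoid rule, to keep the sketch self-contained."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

v = np.linspace(1e-9, 30.0, 200_001)
P_v = A * np.exp(-m * v**2 / (2.0 * T))
norm_v = trapezoid(4.0 * np.pi * v**2 * P_v, v)       # ∫ d^3v P(v)

K = np.linspace(1e-12, 60.0, 200_001)
vK = np.sqrt(2.0 * K / m)
P_K = (4.0 * np.pi / (m * vK)) * vK**2 * A * np.exp(-K / T)
norm_K = trapezoid(P_K, K)                            # ∫ dK P~(K)
```

Both norms come out equal to 1 to within the integration error, confirming that the Jacobian factor $dv/dK$ is exactly what keeps the probability normalized after the change of variables.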
"domain": "physics.stackexchange",
"id": 88952,
"tags": "energy, statistical-mechanics, collision, density"
} |
Classical force keeping electron in orbit around proton in hydrogen atom | Question: In terms of Classical Physics, what force would be required to hold an electron in orbit around a proton, i.e., an hydrogen atom? I have done a calculation, but need verification.
Answer: I've made a numerical answer to complement the existing answer, with some sanity checks along the way. I trust that you know that this is entirely a toy universe, and none of this stuff has very much meaning, as said in the other answer.
Speaking entirely classically, an electron orbiting a proton will feel a Coulomb force of:
$$F=-\frac{e^2}{4\pi \varepsilon_0 r^2}.$$
If the electron is staying at constant $r=r_B$, then this force must be balanced by another force. The magnitude of this force is found by plugging in the right constants, as:
$$F=8.24\times 10^{-8}~N.$$
Sounds small right? Consider the acceleration this force would correspond to, through Newton's second law as:
$$a= \frac{F}{m} = 9.05 \times 10^{22}~m s^{-2}.$$
Now that is huge! But that's of course understandable. Consider this force coming from centripetal acceleration by being in circular motion, then the velocity the electron would be moving at would be:
$$v=\sqrt{\frac{F r}{m}} = 2.18 \times 10^6~ ms^{-1}.$$
That does of course mean the electron orbits ~$10^{15}$ times a second, around a tiny, tiny amount of space. To get an electron travelling at $2.18 \times 10^6 ~ms^{-1}$ to completely change its velocity vector $10^{15}$ times a second is going to require a mighty acceleration.
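For reference, the numbers above can be reproduced directly from standard constants (values rounded from CODATA; `r_B` is the Bohr radius):

```python
import math

# Constants (rounded CODATA values)
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e  = 9.1093837015e-31    # electron mass, kg
r_B  = 5.29177210903e-11   # Bohr radius, m

F = e**2 / (4.0 * math.pi * eps0 * r_B**2)     # Coulomb force, ~8.24e-8 N
a = F / m_e                                    # acceleration, ~9.0e22 m/s^2
v = math.sqrt(F * r_B / m_e)                   # orbital speed, ~2.19e6 m/s
orbits_per_second = v / (2.0 * math.pi * r_B)  # ~6.6e15 orbits per second
```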
"domain": "physics.stackexchange",
"id": 41451,
"tags": "electromagnetism, atomic-physics, coulombs-law, hydrogen"
} |
How is the many worlds interpretation of quantum mechanics compatible with the no cloning theorem? | Question: In the many worlds interpretation of quantum mechanics all possible outcomes of a measurement are realized, however, in different universes. Every time a measurement occurs we register one outcome, and for the others our universe is copied and the other possible outcomes appear in these copies.
In quantum mechanics there also holds the no cloning theorem, which says that it is not possible to copy a quantum state. For example, on a quantum computer, it is not possible to construct a gate mapping the state $|\psi\rangle \otimes |0\rangle$ to $|\psi\rangle \otimes |\psi\rangle$. It is only possible to use a CNOT gate as a fan-out, but in this case the state $|\psi\rangle$ ends up entangled with its "copy" (they are not independent).
Imagine we do a measurement of a qubit, for example, so only two possible outcomes can occur - $|0\rangle$ and $|1\rangle$. Assume that we measured $|0\rangle$ and the rest of our universe remains unchanged. In the many worlds interpretation, our universe has been copied with only one exception - the result of the qubit measurement is $|1\rangle$. These two universes are independent. So, how is this copying of universes compatible with the no cloning theorem?
Answer: In the many-worlds interpretation, the measurement process amounts to the mapping
\begin{align*}
&\lvert\phi_\mathrm{system}\rangle\otimes\lvert\mbox{observer hasn't measured yet}\rangle \otimes \lvert \psi_{\mathrm{rest}}\rangle
\\
&\qquad\mapsto\ \ \ a\lvert\mathrm{system}=0\rangle\otimes\lvert\mbox{observer sees }0\rangle \otimes \lvert \psi_{\mathrm{rest}}\rangle
\\&\qquad\quad +
b\lvert\mathrm{system}=1\rangle\otimes\lvert\mbox{observer sees }1\rangle \otimes \lvert \psi_{\mathrm{rest}}\rangle\ ,
\end{align*}
with $\lvert\phi_\mathrm{system}\rangle=a|0\rangle+b|1\rangle$.
This is perfectly unitary and entirely compatible with the no-cloning theorem.
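The CNOT fan-out mentioned in the question can also be checked numerically: it copies basis states, but on a superposition it produces the entangled state $a|00\rangle + b|11\rangle$ rather than a product of two independent copies (a small numpy sketch):

```python
import numpy as np

# CNOT "fan-out" applied to (a|0> + b|1>) ⊗ |0> yields the entangled
# state a|00> + b|11>, not the product state |psi> ⊗ |psi>.
a, b = 1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)
psi = np.array([a, b])                 # a|0> + b|1>
zero = np.array([1.0, 0.0])            # |0>

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

fanout = CNOT @ np.kron(psi, zero)     # -> [a, 0, 0, b] = a|00> + b|11>
clone = np.kron(psi, psi)              # what a true cloner would have to output
```

`fanout` and `clone` differ whenever `psi` is a genuine superposition, which is exactly the content of the no-cloning theorem.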
"domain": "physics.stackexchange",
"id": 78293,
"tags": "quantum-mechanics, quantum-information, quantum-interpretations, quantum-computer, measurement-problem"
} |
Why is $FOLLOW$ not necessary for $LL(1)$ grammars with no $\epsilon$ transitions? | Question: I'm aware of how $FIRST$ and $FOLLOW$ sets are used to construct a parsing table for $LL(1)$ grammars.
However, I've encountered this statement from my notes:
With $\epsilon$ productions in the grammar, we may have to look
beyond the current non-terminal to what can come after it
In my opinion, this suggests that $FOLLOW$ is not necessary for $LL(1)$ grammars that have no $\epsilon$ transition. Am I wrong? And if I'm not, why is this the case?
Thanks
Answer: That's correct for $LL(1)$: if there are no $\epsilon$ productions, the $LL$ parser-generation algorithm will never consult $FOLLOW$, because it only does that if it finds $\epsilon$ in the $FIRST$ set for the first non-terminal in the right-hand side of a production. (So it might not need the $FOLLOW$ sets even if there are some $\epsilon$ productions, provided that none of those productions occur at the beginning of a right-hand side.)
The observation doesn't generalise well to other values of $k$. You'll need $FOLLOW_k(\alpha)$ if any production can derive a string whose length is less than $k$. | {
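To see the $LL(1)$ case concretely, here is a minimal sketch with a small hypothetical $\epsilon$-free grammar: $FIRST$ of each right-hand side is just $FIRST$ of its first symbol, and the parse table is filled from $FIRST$ alone, never touching $FOLLOW$:

```python
# Hypothetical epsilon-free grammar: nonterminal -> list of productions.
grammar = {
    "S": [["a", "A"], ["b", "B"]],
    "A": [["c", "S"], ["d"]],
    "B": [["e"]],
}
nonterminals = set(grammar)

def first(symbol, seen=frozenset()):
    """FIRST set. With no epsilon productions, FIRST of a right-hand side
    is just FIRST of its first symbol, so FOLLOW is never consulted."""
    if symbol not in nonterminals:
        return {symbol}                       # a terminal is its own FIRST set
    out = set()
    for prod in grammar[symbol]:
        if prod[0] not in seen:               # guard against cycles
            out |= first(prod[0], seen | {symbol})
    return out

# Build the LL(1) table purely from FIRST sets.
table = {}
for nt, prods in grammar.items():
    for prod in prods:
        for t in first(prod[0]):
            assert (nt, t) not in table, "grammar is not LL(1)"
            table[(nt, t)] = prod
```

Adding an $\epsilon$ production (say `"B": [[]]`) is precisely the point where this construction would have to fall back on $FOLLOW(B)$ to decide which table cells get the empty production.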
"domain": "cs.stackexchange",
"id": 15843,
"tags": "formal-languages, formal-grammars, compilers, parsers, lexical-analysis"
} |
Why is the pointcloud pointing upwards? | Question:
I am using a camera remotely and sending the compressed image frames to a host PC. I am using RTABmap to create a 2D map.
The problem occurs when the PCL is being published and the points are located above my robot. Why is that? Have a look at the picture:
My TF is normal and I am using this CPP code to send the images:
https://github.com/duo3d/duo3d_driver/blob/master/src/duo3d_driver.cpp
And using this launch file with static TF:
<launch>
<arg name="pi/2" value="1.5707963267948966" />
<node pkg="tf" type="static_transform_publisher" name="duo3d_base_link" args="0 0 0 0 0 01 /duo3d/camera_frame duo3d_camera 100" />
<node name="duo3d" pkg="duo3d_driver" type="duo3d_driver" output="screen">
<param name="frame_rate" value="30.0"/>
<rosparam param="image_size">[320, 240]</rosparam>
<param name="dense3d_license" value="OVMZU-ZHFE2-9K41K-NQL44-WX3DV"/>
<param name="gain" value="0"/>
<param name="exposure" value="50"/>
<param name="auto_exposure" value="true"/>
<param name="vertical_flip" value="true"/>
<param name="led" value="35"/>
<param name="processing_mode" value="1"/>
<param name="image_scale" value="0"/>
<param name="pre_filter_cap" value="8"/>
<param name="num_disparities" value="7"/>
<param name="sad_window_size" value="3"/>
<param name="uniqueness_ratio" value="2"/>
<param name="speckle_window_size" value="256"/>
<param name="speckle_range" value="2"/>
</node>
<node pkg="rqt_gui" type="rqt_gui" name="rqt_gui" args="--perspective-file $(find duo3d_driver)/launch/rqt/depth_view.perspective"/>
<node pkg="rviz" type="rviz" name="rviz" args="-d $(find duo3d_driver)/launch/rviz/depth.rviz"/>
</launch>
So why does the PCL print on top of my robot? That makes my map occupied everywhere.
Also, this is what PCL of obstacles look like produced by RTABMAP:
I am using default stereo_mapping launch file.
EDIT!
It seems to have something to do with the IMU, but I do not know yet what is happening. When I tilt my IMU, the pcl also tilts.
If you can debug together with me that would be great.
My rosbag is:
https://drive.google.com/file/d/1qRQrpndWqA_F7vqKuDCZKGiTedbIf4-S/view?usp=sharing
Originally posted by EdwardNur on ROS Answers with karma: 115 on 2019-03-23
Post score: 0
Answer:
It seems there is a missing optical rotation TF. You may try this:
<arg name="pi/2" value="1.5707963267948966" />
<node pkg="tf" type="static_transform_publisher" name="duo3d_base_link" args="0 0 0 -$(arg pi/2) 0 -$(arg pi/2) /duo3d/camera_frame duo3d_camera 100" />
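For context: static_transform_publisher's six pose arguments are x y z yaw pitch roll, and the yaw = -π/2, roll = -π/2 pair suggested above is the usual body-to-optical-frame rotation. A small Python sketch (ZYX Euler convention, as tf applies it) computing the equivalent quaternion:

```python
import math

# static_transform_publisher's rotation args are yaw pitch roll, applied
# as Rz(yaw) * Ry(pitch) * Rx(roll). Quaternion returned as (x, y, z, w).
def quat_from_yaw_pitch_roll(yaw, pitch, roll):
    cy, sy = math.cos(yaw / 2.0), math.sin(yaw / 2.0)
    cp, sp = math.cos(pitch / 2.0), math.sin(pitch / 2.0)
    cr, sr = math.cos(roll / 2.0), math.sin(roll / 2.0)
    return (sr * cp * cy - cr * sp * sy,      # x
            cr * sp * cy + sr * cp * sy,      # y
            cr * cp * sy - sr * sp * cy,      # z
            cr * cp * cy + sr * sp * sy)      # w

q = quat_from_yaw_pitch_roll(-math.pi / 2.0, 0.0, -math.pi / 2.0)
# q comes out as (-0.5, 0.5, -0.5, 0.5), the familiar optical-frame
# quaternion that turns x-forward/z-up body axes into z-forward optics.
```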
Originally posted by matlabbe with karma: 6409 on 2019-03-23
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by EdwardNur on 2019-03-24:
@matlabbe Hi, thank you. Indeed, there is a problem with the tf but I fixed that by assigning PCL values in a different order: pcl.x = z and etc., as static_tf did not fix the problem. I guess I need to send TF myself.
Also, when I send compressed images without RTABmap I get 30 Hz but as soon as I turn on RTABmap, my frequency of images drop to 10 Hz, do you know the reason?
Comment by EdwardNur on 2019-03-24:
@matlabbe Actually, if you can have a look at my new question I would appreciate:
https://answers.ros.org/question/319346/all-spaces-are-occupied-using-rtabmap-stereo/
Comment by matlabbe on 2019-03-31:
Make sure you changed frame_id of rtabmap from duo3d_camera to /duo3d/camera_frame. If you don't start rtabmap, it is maybe because there are no subscribers on the topic images, so they are not compressed (thus no processing time used to compress image) | {
"domain": "robotics.stackexchange",
"id": 32727,
"tags": "slam, navigation, ros-melodic, rtabmap, rtabmap-ros"
} |
catkin_make unable to create executable & automatically copy .h files to devel | Question:
When I run catkin_make, I understand that it should automatically copy the header files (in this case mosquitto.h) which I included in the main cpp file into devel/include and create an executable; however, it is not doing so. devel/include doesn't even exist in my catkin_ws. My header file is in catkin_ws/src/package_name/include/package_name
Error:
Linking CXX executable /home/catkin_ws/devel/lib/mqtt_pub/mqtt_pub_node
/usr/bin/ld: cannot find -lmosquitto.h
collect2: error: ld returned 1 exit status
make[2]: *** [/home/catkin_ws/devel/lib/mqtt_pub/mqtt_pub_node] Error 1
make[1]: *** [mqtt_pub/CMakeFiles/mqtt_pub_node.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j1 -l1" failed
Note that mqtt_pub_node doesn't exist. Why is it looking for something that doesn't exist? It should be automatically created. From what I know, the executable should be in devel/lib/mqtt_pub; I'm not sure why the system is looking for devel/lib/mqtt_pub/mqtt_pub_node. If I create devel/lib/mqtt_pub/mqtt_pub_node and put my header file in it, catkin_make is successful, but the executable is not created.
CMakeList.txt
find_package(catkin REQUIRED COMPONENTS
roscpp
std_msgs
)
catkin_package(
INCLUDE_DIRS include
LIBRARIES mqtt_pub
CATKIN_DEPENDS roscpp std_msgs
DEPENDS system_lib
)
include_directories(
${catkin_INCLUDE_DIRS}
/catkin_ws/src/mqtt_pub/include/mqtt_pub
include
)
link_directories(
/catkin_ws/src/mqtt_pub/include/mqtt_pub
)
link_libraries(
mosquitto.h
)
add_executable(mqtt_pub_node src/mqtt_publish.cpp)
target_link_libraries(mqtt_pub_node ${catkin_LIBRARIES})
package.xml
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>std_msgs</build_depend>
<run_depend>roscpp</run_depend>
<run_depend>std_msgs</run_depend>
Would appreciate guidance in solving this issue. I'm really clueless about the error; I have researched online and my cmake and xml files seem to be alright. Errors in my main cpp file have been rectified. Thanks!
Originally posted by cechster on ROS Answers with karma: 3 on 2016-12-22
Post score: 0
Answer:
First: this is not a ROS problem per se, but really a matter of using CMake correctly.
I understand that it should automatically copy the header files [..] which I included in the main cpp file into devel/include
No, that is not the case: header files will not be copied to the devel space by catkin (unless they are auto-generated, or there are statements in the CMakeLists.txt that do that). It's also not necessary: the dependents include path will automatically include the correct paths (in the src space) if the providing package properly exports those include locations.
Error:
Linking CXX executable /home/catkin_ws/devel/lib/mqtt_pub/mqtt_pub_node
/usr/bin/ld: cannot find -lmosquitto.h
collect2: error: ld returned 1 exit status
[..]
Note that mqtt_pub_node doesn't exist. Why is it looking for something that doesn't exist? It should be automatically created.
yes, it should, but only if linking actually succeeded. Which it didn't.
From what I know, the executable should be in devel/lib/mqtt_pub, not sure where did the system think about devel/lib/mqtt_pub/mqtt_pub_node [..].
the error message you refer to simply states that while linking a binary called mqtt_pub_node (which will be placed in devel/lib/mqtt_pub if linking is successful), no library called mosquitto.h could be found. In your CMakeLists.txt, you defined the name of the binary as mqtt_pub_node with the add_executable(mqtt_pub_node ..) line.
catkin_package(
INCLUDE_DIRS include
LIBRARIES mqtt_pub
CATKIN_DEPENDS roscpp std_msgs
DEPENDS system_lib
)
include_directories(
${catkin_INCLUDE_DIRS}
/catkin_ws/src/mqtt_pub/include/mqtt_pub
include
)
link_directories(
/catkin_ws/src/mqtt_pub/include/mqtt_pub
)
link_libraries(
mosquitto.h
)
add_executable(mqtt_pub_node src/mqtt_publish.cpp)
target_link_libraries(mqtt_pub_node ${catkin_LIBRARIES})
Multiple things are not as they should be here:
mosquitto.h is not a library (obviously), it's a header. In your CMakeLists.txt you are asking CMake to link_libraries(..) against that header. That will obviously not work, resulting in the error that you are seeing. Note that this will never work, not even if search paths were setup correctly
location of your workspace is slightly unorthodox (directly under /home). That is definitely legal, but perhaps placing it in your user's home directory (ie: /home/$USER) would be better
I doubt your catkin workspace is located at /catkin_ws, as that would be in the root of your file system. From the error messages, it should be at least /home/catkin_ws, but see my previous comment
it's typically recommended to place your package's include directories before anything else, as that allows you to provide files that override system includes
the use of link_directories(..) is typically not recommended with CMake. Prefer to use absolute paths to libraries, as that will avoid surprises if anyone ever overrides your linker's search path
your catkin_package(..) invocation states that your package depends on something called system_lib. That is probably not correct
in the same catkin_package(..) invocation a LIBRARY called mqtt_pub is exported, but that library is not built by your CMakeLists.txt (or at least, not in the snippet that you included)
nowhere do you (or CMake) search for anything MQTT related, neither is that dependency made explicit anywhere. Assuming that headers and / or libraries exist on a system is not very robust (they may not exist, have different names or may not be installed): always try to search for what your program needs
link_libraries(..) is also not recommended. In general it's better to be as precise and explicit about dependencies as possible, so the per target version target_link_libraries(..) is preferred
Would appreciate the guidance in solving this issue. I'm really clueless about the error, have research online and my cmake and xml file seems to be alright.
Suggestion for a correct CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.3)
project(mqtt_pub)
find_package(catkin REQUIRED COMPONENTS
roscpp
std_msgs
)
# make sure we have Mosquitto on our system somewhere, our project
# depends on it.
#
# Note: this requires a 'FindMosquitto.cmake' file to be present
# on the CMake module search path. An example of such a file can be
# found at https://github.com/tarantool/mqtt/blob/master/cmake/FindMosquitto.cmake
find_package(Mosquitto REQUIRED)
catkin_package(
INCLUDE_DIRS include
CATKIN_DEPENDS roscpp std_msgs
)
include_directories(
include
${MOSQUITTO_INCLUDE_DIR}
${catkin_INCLUDE_DIRS}
)
add_executable(mqtt_pub_node src/mqtt_publish.cpp)
target_link_libraries(mqtt_pub_node ${catkin_LIBRARIES} ${MOSQUITTO_LIBRARIES})
# add install(..) if necessary
Note that this assumes that Mosquitto has been installed somewhere on the system, or at least in a location that CMake / FindMosquitto.cmake knows about / can find. Refer to the CMake documentation for how to configure that.
Finally: unfortunately there doesn't seem to be a rosdep rule for Mosquitto yet, so you can't add that build and run dependency to your package.xml. If you'd like to do that (instead of blindly assuming that Mosquitto is installed on the system your package is/gets built on), then you'd have to contribute such a rule and then add the appropriate bits to your package manifest.
Originally posted by gvdhoorn with karma: 86574 on 2016-12-22
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by cechster on 2016-12-22:
Thank you for your prompt reply! Will try out your suggestions.
Regarding point 2 & 3, opps sorry, I accidentally missed out when typing out the $User. Yes my catkin_ws is in home/$user/catkin_ws | {
"domain": "robotics.stackexchange",
"id": 26553,
"tags": "ros, catkin-make, devel"
} |
How do I use the model generated by the R package poLCA to classify new data as belonging to one of the classes? | Question: For example, in the election example from the documentation, if I create a new set of answers to the questions, how can I use the poLCA model to tell me what class (cluster) it's most likely to be in?
There doesn't appear to be a function to do this, though the model has a df within it that lists the probabilities of class membership for each value of each manifest variable. I'm tasked with converting some sql code that takes a second dataset and classifies the patients there as members of the clusters created from a first. Superficially this is a programming question. It seems like a function to do this would be a reasonable addition to the package. More deeply, if indeed there isn't such a function, it would become a question about how to use the table of probabilities to classify new data.
If readers aren't familiar with the R package poLCA, it's an LCA package that works with discrete/categorized data.
(full disclosure: I asked on cross-validated and a shorter version of this question was put on hold.)
Answer: As Paolo says, use the poLCA.posterior() function. The data comes out in the same format as the lca_model$posterior structure returned by the poLCA function.
library(poLCA)
data(election)
column_names <- c('MORALG', 'CARESG', 'KNOWG', 'LEADG', 'DISHONG',
'INTELG', 'MORALB',
'CARESB', 'KNOWB', 'LEADB', 'DISHONB', 'INTELB')
election_matrix = as.matrix(mapply(as.numeric,election[,column_names]))
election_matrix_no_na =election_matrix[apply(election_matrix, 1,
function(x) all(is.finite(x)) ),]
# lca_model is the fit object returned by a previous poLCA(...) call
preds = poLCA.posterior(lc=lca_model, y=election_matrix_no_na)
"domain": "datascience.stackexchange",
"id": 4427,
"tags": "r, clustering"
} |
Is it possible to use HC-SR04 ultrasonic range sensor to indicate thickness of a material | Question: The HC-SR04 is directly connected to an Arduino board with the receiver end(echo) connected to analog pin 2 and the transmitter (trigger) connected to digital pin 4.
I am wondering if I can use the sensor to sense the change in saturation when an object blocks its path. The receiver and transmitter will be positioned like this
The line in the middle is supposed to be a paper. I'll be using it to see the difference between one paper and two papers as they pass between the two.
Now I'm not sure if this is possible, but the way I see it working is similar to an IR LED Arduino setup, where when one paper passes through, the light gets a little bit weaker, and with two it takes a heavier hit.
Is this possible?
Answer: The short answer is "no, a sonic range sensor can't do it".
It might "work" under very controlled conditions, but relying on only the attenuation of the returned signal to determine thickness may leave you open to incorrect results due to multipath propagation effects.
The more traditional way to measure thickness with sound is called profiling. The following is excerpted from a USGS Woods Hole Science Center page on Seismic Profiling systems:
reflection profiling is accomplished by [emitting] acoustic energy in timed intervals [...]. The transmitted acoustic energy is reflected from boundaries between various layers with different acoustic impedances [i.e. the air and the paper]. Acoustic impedance is defined by the bulk density of the medium times the velocity of the sound within that medium. The reflected acoustic signal is received [by one or more microphones]. The receiver converts the reflected signal to an analog signal [which is digitized and heavily processed to determine the makeup of the materials].
Rather than just measuring the time of the incoming pulse, you'd need to analyze both the time and frequency domain of the recovered signal to solve for the acoustic properties necessary to transform your transmitted pulse into the received pulse.
So the long answer is that it can be done sonically, although a sonic range sensor is generally insufficient for this purpose. | {
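As a rough illustration of the pulse-echo idea behind profiling, thickness follows from the two-way travel time of the reflected pulse. All numbers below are hypothetical, not calibrated for paper:

```python
# Pulse-echo thickness in one line: half the round-trip time times the
# sound speed in the material. Values here are illustrative only.
c_material = 2500.0          # m/s, assumed sound speed in the material
t_round_trip = 8.0e-8        # s, measured echo delay (hypothetical)
thickness = c_material * t_round_trip / 2.0     # 1e-4 m, i.e. 0.1 mm
```

The hard part, as the answer notes, is not this arithmetic but recovering a clean echo delay at all from a cheap range sensor's transducer.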
"domain": "robotics.stackexchange",
"id": 243,
"tags": "arduino, sensors"
} |
How can we confirm the number of protons in an atom? | Question: The periodic table tells us that there are 6 protons in a carbon atom. Is there a way to verify this first-hand? Or are we just expected to believe it unquestioned?
Answer: 1) While not easy, it is possible to obtain a mass spectrum of the full range of ions, including fully ionized ones. By the number of peaks, with mass-to-charge ratios of X, X/2, X/3 ... X/n, it is possible to ensure that an element has exactly n electrons and protons. It is still difficult to ensure full ionization of heavier atoms, so the method is not applicable for heavier elements. This, however, is the only direct method I can think of. It should work for carbon, though.
2) Various X-ray-derived spectral methods. While not providing direct evidence, they do provide information on electronic shell structure, that can be compared with theoretical calculations. | {
"domain": "chemistry.stackexchange",
"id": 2228,
"tags": "molecules, atoms, periodic-table, elements"
} |
How to determine resonance frequency of a piezoceramic element? | Question: Lets say we have a circular piezoelectric element with radius R and height H. We assume that the piezoelectric element has density $\rho$. How do you find the resonance frequency ? Is there a formula for it ?
Answer: First of all, a piezoelectric crystal has not only one resonance frequency but many.
Therefore I guess you want only the lowest resonance frequency.
Common piezoelectric crystals are very flat cylinders ($H \ll R$)
with the electrodes attached to both circular faces.
For this case the physics is simple enough to calculate the resonance
without sophisticated math.
(Actually the calculation below works not only for flat cylinders,
but also for flat cuboids, or any other flat forms with a homogenous
height $H$ much smaller than its lateral size.)
The lowest resonance is such that there fits half a wavelength
$\lambda$ into the height $H$:
$$ H = \frac{\lambda}{2} $$
Wavelength $\lambda$ and frequency $f$ are connected
by the speed of sound $c_s$: $$ \lambda f = c_s $$
Hence, you get the frequency
$$ f = \frac{c_s}{2 H}. $$
For common piezo-materials you can look up their speeds of sound in
The Free Dictionary - Piezoelectric Materials.
Example:
Quartz has a speed of sound $c_s = 5.47 \cdot 10^3\ \text{m/s}$.
For a flat cylindrical quartz slab with height $H = 1\ \text{mm}$
you get a frequency
$$f = \frac{c_s}{2 H}
= \frac{5.47 \cdot 10^3\ \text{m/s}}{2\cdot 1\ \text{mm}}
= 2.73\ \text{MHz}$$ | {
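The example above reduces to a one-line calculation:

```python
c_s = 5.47e3            # m/s, speed of sound in quartz
H = 1.0e-3              # m, slab height
f = c_s / (2.0 * H)     # lowest thickness-mode resonance, ~2.74 MHz
```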
"domain": "physics.stackexchange",
"id": 99449,
"tags": "frequency, resonance, piezoelectric"
} |
Cross Correlation dimensions | Question: I have been trying to do the cross correlation of two signals, and I am expecting the output to have the same dimensions as the two signals. However, what I am seeing is that the cross correlation has twice the number of points.
The red here is the cross correlation of the green and blue. The two starting signals cover $1$ hour of time, sampled at $100$ Hz, whereas the cross correlation seems to represent $2$ hours; what I want to see is how correlated they are over the 1 hour. Can anyone explain this? (Also, apologies, the graphs look like the joker.)
Answer: When you compute cross correlations, you shift one signal over the other and compute the (normalized) inner product of the overlapping sections of the two signals. The point is that you can shift in both directions, i.e., your time shift (in samples) can be positive or negative.
Assume both signals have $N$ samples. Your central data point in the cross correlation is when you haven't shifted your second signal at all, i.e., all $N$ points are overlapping. Now, you can shift "to the right" by 1 sample: your overlap is $N-1$. Shift another one: your overlap is $N-2$. Keep going, until you shifted $N-1$ samples: there is only an overlap of 1 sample. This gives one half of your correlation function. You get the other half by shifting in the other direction, "to the left", you get another $N-1$ samples like that. In total, this gives $2N-1$ samples, which explains the "doubling" you observe.
Example: Correlate the sequences [1,2,3] and [4,-5,6]. Normalization left aside, we get as inner products:
[1,2,3]'*[0,0,4] = 12
[1,2,3]'*[0,4,-5] = -7
[1,2,3]'*[4,-5,6] = 12
[1,2,3]'*[-5,6,0] = 7
[1,2,3]'*[6,0,0] = 6
Therefore, from N=3 original samples, we get 2N-1=5 lags in the correlation function.
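The count and the values of this worked example can be checked with NumPy. Note that `np.correlate` lists lags from most negative to most positive, so the five values come out in the reverse order of the shifts listed above:

```python
import numpy as np

# 'full' mode returns all 2N-1 lags of the correlation
c = np.correlate([1, 2, 3], [4, -5, 6], mode='full')
print(c)       # [ 6  7 12 -7 12]: same five values, opposite lag order
print(len(c))  # 2*3 - 1 = 5
```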
If you have a good reason to believe that your signals are periodic, you can use a circular (cyclic) correlation, which will result in exactly N samples. But this only makes sense for periodic signals. | {
"domain": "dsp.stackexchange",
"id": 7616,
"tags": "cross-correlation"
} |
CPU Registers and Computation | Question: How exactly does the control unit in the CPU retrieve data from registers? Does it retrieve it bit by bit?
For example, if I'm adding two numbers, A+B, how does the computation take place at the memory level?
Answer: Let’s say you have a processor with sixteen 32-bit registers and a unit that can add 32-bit integers. A very simplified way to add register x and register y and store the result in register z is this:
First, you build hardware that, given a number x, can pick the first bit of register x and move it to the 1st bit of “operand1”. This is actually quite complicated because x could be one of 16 values, and the hardware must be capable of reading the first bit of any of the 16 registers.
Second, you take this hardware and make 32 copies of it, one copy for each bit in a register. Now we have some quite complex hardware that can move 32 of 512 bits.
Third, we duplicate all of this with the appropriate changes to move register y to “operand2”.
Fourth, we build hardware that can add operand1 and operand2 and store the sum in a 32 bit location “output”
And fifth, we build the hardware that can take the value in “output”
and store it into register z. | {
"domain": "cs.stackexchange",
"id": 16526,
"tags": "cpu, memory-access, assembly"
} |
Given regular grammars (each either left- or right-linear), does there exist a word that can be derived from all of them? | Question: Given regular grammars (each either left- or right-linear), does there exist a word/string that can be derived from all of the regular grammars, i.e. a word/string derivable from each one?
Suppose that S1, ..., Sn are regular grammars. Then is the following statement true:
∃w: S1→w ∧ ... ∧ Sn→w ?
In other words:
∃w∀i: 1≤i≤n→(Si→w) ?
Does there exist a polynomial algorithm (in both time and space) that can find the answer to this question quickly?
EDIT: The language of each given regular grammar is finite and 'w' is a word of length m, where m is a natural number given as input to the algorithm.
Also, the alphabet of each regular grammar is Σ={0,1} and thus |Σ|=2.
In other words, does there exist a word of length m that can be derived from each regular grammar, where the language of each regular grammar is finite and the alphabet of each regular grammar is {0,1}?
Answer: Your problem is PSPACE-complete. Indeed, the easier problem of DFA intersection is already PSPACE-complete, see for example Descriptional and computational complexity of finite automata—A survey by Holzer and Kutrib. For comparable results from the point of view of exponential time algorithms, see Problems on Finite automata and the exponential time hypothesis by Fernau and Krebs.
It is well-known that regular grammars are essentially equivalent in power to NFAs (even considering description complexity), and for this reason your problem is essentially equivalent to the NFA intersection problem, in which you are given a collection of NFAs and have to decide whether the intersection of languages accepted by them is empty — this is the problem considered in the papers above.
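For intuition, here is the product construction behind the intersection problem, as a Python sketch of my own (not from the papers cited; it assumes complete DFAs over {0,1}). It decides emptiness of the intersection, but the product automaton can have exponentially many states in the number of machines, so this is not a polynomial algorithm, consistent with the PSPACE-completeness above:

```python
from collections import deque

def intersection_nonempty(dfas, alphabet=('0', '1')):
    """Emptiness test for the intersection of complete DFAs.

    Each DFA is a tuple (start, accepting, delta), where delta maps
    (state, symbol) -> state.  We do a BFS over the product automaton,
    whose state set is the cartesian product of the individual state
    sets -- this is where the exponential blow-up comes from.
    """
    start = tuple(s for s, _, _ in dfas)
    seen, queue = {start}, deque([start])
    while queue:
        states = queue.popleft()
        # A product state is accepting iff every component is accepting.
        if all(s in acc for s, (_, acc, _) in zip(states, dfas)):
            return True
        for a in alphabet:
            nxt = tuple(d[(s, a)] for s, (_, _, d) in zip(states, dfas))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

For example, the intersection of "ends in 1" and "ends in 0" is empty, while "ends in 1" and "has an even number of 1s" share the word 11.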
The same problem for regular expressions is also PSPACE-complete, see for example Complexity of decision problems for simple
regular expressions by Martens, Neven and Schwentick, which considers restricted cases of the problem. | {
"domain": "cs.stackexchange",
"id": 9347,
"tags": "algorithms, complexity-theory, formal-languages, regular-languages, formal-grammars"
} |
Algorithm for searching in BST with only < | Question: How could one construct an algorithm for finding a node in a binary search tree that only requires the presence of $<$ on the key type? The ones I can easily come up with also require $=$.
Answer: If what you mean is that you want to build a BST and you only have the $<$ operation, and you only know the algorithms with the $\leq$ operation, you can notice that :
$$a \leq b \Leftrightarrow \neg (a > b)$$
$$\Leftrightarrow \neg(b < a )$$
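In code, the trick is that if neither $a<b$ nor $b<a$ holds, the keys must be equal. A Python sketch (the minimal Node class is my illustration, not part of the original answer):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def contains(node, key):
    """BST lookup using only the < operator on keys."""
    while node is not None:
        if key < node.key:
            node = node.left
        elif node.key < key:
            node = node.right
        else:
            # neither key < node.key nor node.key < key holds,
            # so the keys must be equal
            return True
    return False
```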
Hence, using a negation in the right place, you can build your usual algorithm. | {
"domain": "cs.stackexchange",
"id": 13632,
"tags": "search-algorithms, binary-search-trees"
} |
rosmake + make test? Automated testing? | Question:
I am wondering how I could automate the build of my unit tests.
Because right now I have to call make test in every package.
But I would like to automate this, to build for every package that has a test in it.
Something like a rosmake command for testing would be ideal.
Or can I add something to the makefile, so rosmake will also build the test?
Originally posted by madmax on ROS Answers with karma: 496 on 2013-06-05
Post score: 1
Answer:
Ok, I should have searched for rosmake options. ;-)
There is a command rosmake [PACKAGE] -t that builds and tests the package.
But it calls make test only for the package specified and not for the packages it depends on...
Originally posted by madmax with karma: 496 on 2013-06-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 14435,
"tags": "ros, rostest, unit-testing, gtest"
} |
Generic equality checker | Question: I use this method to check if two reference types are equal
public static bool AreEquals<T>(T source, object obj) where T : class
{
if (ReferenceEquals(source, obj))
return true;
var convertedTarget = obj as T;
if (ReferenceEquals(convertedTarget, null))
return false;
List<object> equalityMembers = typeof (T).GetProperties(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).ToList<object>();
equalityMembers.AddRange(typeof(T).GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).ToList<object>());
if (equalityMembers.Count == 0)
return false; //The references were not equals and there is nothing to compare..
var enumerator = equalityMembers.GetEnumerator();
bool areEquals = true;
while (enumerator.MoveNext() && areEquals)
{
var current = enumerator.Current;
//We know that FieldInfo and PropertyInfo both have the GetValue method
var methodInfo = current.GetType().GetMethod("GetValue", new [] {typeof (object)});
object valueSource = methodInfo.Invoke(current, new object[] {source});
object valueObj = methodInfo.Invoke(current, new object[] {convertedTarget});
areEquals = valueSource.Equals(valueObj);
}
return areEquals;
}
I use attributes on properties/fields to find which of them are used to decide if the two objects are equals ex :
public class MyClass
{
[EqualityMember]
public int ID{get;set;}
public override bool Equals(object obj)
{
return MyStaticClass.AreEquals(this,obj);
}
}
It works very well, but I am bothered by how I get the properties and fields into a list and use reflection to invoke the GetValue method. Would you have any alternatives? Or is there anything that seems flawed in my method?
Answer:
List<object> equalityMembers = typeof (T).GetProperties(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).ToList<object>();
equalityMembers.AddRange(typeof(T).GetFields(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).ToList<object>());
I would consider caching these values. Usually, the custom attributes do not change during runtime, and having the complete list cached after the first run should improve performance of subsequent comparisons a lot.
while (enumerator.MoveNext() && areEquals)
Since areEquals is initialized as true, you could use while (areEquals && enumerator.MoveNext()) instead, thus avoiding an additional call to MoveNext if the process has already determined that the objects are not equal. However, I do not know why you use the IEnumerator manually; I would be inclined to write the whole loop as a simple foreach loop and just return false on the first value that is not equal in both instances.
areEquals = valueSource.Equals(valueObj);
This throws a NullReferenceException if valueSource is null.
Honestly, instead of using GetValue via reflection, I would just save PropertyInfo and FieldInfo in separate lists and compare them in separate loops. For example
public static class EqualityComparer<T> where T : class
{
private static readonly BindingFlags flags = BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance;
private static readonly IReadOnlyCollection<PropertyInfo> propertiesForEquality = typeof(T).GetProperties(flags).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).ToList();
private static readonly IReadOnlyCollection<FieldInfo> fieldsForEquality = typeof(T).GetFields(flags).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).ToList();
private static readonly bool hasEqualityMembers = propertiesForEquality.Any() || fieldsForEquality.Any();
public static bool AreEquals(T source, object obj)
{
if (ReferenceEquals(source, obj))
return true;
var convertedTarget = obj as T;
if (ReferenceEquals(convertedTarget, null))
return false;
if (!hasEqualityMembers)
return false; //The references were not equals and there is nothing to compare..
foreach (var propertyInfo in propertiesForEquality)
{
var valueSource = propertyInfo.GetValue(source);
var valueObj = propertyInfo.GetValue(obj);
if (!object.Equals(valueSource, valueObj)) // static object.Equals is null-safe, including when both are null
{
return false;
}
}
foreach (var fieldInfo in fieldsForEquality)
{
var valueSource = fieldInfo.GetValue(source);
var valueObj = fieldInfo.GetValue(obj);
if (!object.Equals(valueSource, valueObj))
{
return false;
}
}
return true;
}
}
In a quick benchmark (really simple class with one field and one property used for equality comparison), my version was a lot faster. You may want to benchmark that with your concrete classes. :)
If you do not want to have multiple foreach loops, you can also build a small wrapper around PropertyInfo and FieldInfo. In my simple test, the following did not lead to any noticeable performance degradation.
public interface IValueGetter
{
object GetValue(object obj);
}
public class FieldInfoValueGetter : IValueGetter
{
private readonly FieldInfo fieldInfo;
public FieldInfoValueGetter(FieldInfo fieldInfo)
{
if (fieldInfo == null)
{
throw new ArgumentNullException("fieldInfo");
}
this.fieldInfo = fieldInfo;
}
public object GetValue(object obj)
{
return this.fieldInfo.GetValue(obj);
}
}
public class PropertyInfoValueGetter : IValueGetter
{
private readonly PropertyInfo propertyInfo;
public PropertyInfoValueGetter(PropertyInfo propertyInfo)
{
if (propertyInfo == null)
{
throw new ArgumentNullException("propertyInfo");
}
this.propertyInfo = propertyInfo;
}
public object GetValue(object obj)
{
return this.propertyInfo.GetValue(obj);
}
}
public static class InterfacedEqualityComparer<T> where T : class
{
private static readonly BindingFlags flags = BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance;
private static readonly IReadOnlyCollection<IValueGetter> valueGetters = typeof(T).GetProperties(flags).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).Select(p => (IValueGetter)new PropertyInfoValueGetter(p))
.Union(typeof(T).GetFields(flags).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).Select(p => new FieldInfoValueGetter(p)))
.ToList();
private static readonly bool hasEqualityMembers = valueGetters.Any();
public static bool AreEquals(T source, object obj)
{
if (ReferenceEquals(source, obj))
return true;
var convertedTarget = obj as T;
if (ReferenceEquals(convertedTarget, null))
return false;
if (!hasEqualityMembers)
return false; //The references were not equals and there is nothing to compare..
foreach (var valueGetter in valueGetters)
{
var valueSource = valueGetter.GetValue(source);
var valueObj = valueGetter.GetValue(obj);
if (!object.Equals(valueSource, valueObj))
{
return false;
}
}
return true;
}
}
I know that I changed the method signature, but having the method be not generic but embedding it in a generic class is probably the easiest way to cache loaded property and field info data, but that can be changed to a generic method in a non-generic class.
public class EqualityComparer
{
private static readonly BindingFlags flags = BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance;
private static readonly IDictionary<Type, object> valueGetterCache = new Dictionary<Type, object>();
public static bool AreEquals<T>(T source, object obj) where T : class
{
if (ReferenceEquals(source, obj))
return true;
var convertedTarget = obj as T;
if (ReferenceEquals(convertedTarget, null))
return false;
var valueGetters = GetValueGettersForType<T>();
if (!valueGetters.Any())
return false; //The references were not equals and there is nothing to compare..
foreach (var valueGetter in valueGetters)
{
var valueSource = valueGetter.GetValue(source);
var valueObj = valueGetter.GetValue(obj);
if (!object.Equals(valueSource, valueObj))
{
return false;
}
}
return true;
}
private static IReadOnlyCollection<IValueGetter> GetValueGettersForType<T>() where T : class
{
var type = typeof(T);
object dictionaryValue;
IReadOnlyCollection<IValueGetter> valueGetters;
if (!valueGetterCache.TryGetValue(type, out dictionaryValue) || (valueGetters = dictionaryValue as IReadOnlyCollection<IValueGetter>) == null)
{
valueGetters = type.GetProperties(flags).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).Select(p => (IValueGetter)new PropertyInfoValueGetter(p))
.Union(type.GetFields(flags).Where(p => p.GetCustomAttribute<EqualityMemberAttribute>() != null).Select(p => new FieldInfoValueGetter(p)))
.ToList();
valueGetterCache.Add(type, valueGetters);
}
return valueGetters;
}
}
Note that the main advantage seems to be the caching - if you remove it, this performs about as fast as your original approach. :) | {
"domain": "codereview.stackexchange",
"id": 9092,
"tags": "c#, generics"
} |
[Turtlebot] Turtlebot doesn't walk in a straight line | Question:
Hi All,
I was fiddling my turtlebot today and I found something a bit surprising... It doesn't actually go in a straight line!!
I was running the .cpp script from turtlebot_teleop (yep, I made another package and built the script), and the turtlebot moves in an arc... Unsure what is happening here, and I tried running calibration and guess what, not only does it not work, but my turtlebot actually goes backwards! lolz
I was trying to find something on the net but apparently no one else has this problem... Any ideas? Is there something wrong with my parameters or is hardware giving me a hard time? Let me know if you need any data/log/code/etc.
Cheers,
Mid
Hardware: turtlebot (the old type)
ROS version: electric
OS: Ubuntu 10.04
Originally posted by Midnight on ROS Answers with karma: 5 on 2013-05-23
Post score: 0
Answer:
It sounds like one of your motors has died. You can try sending commands directly to the base with twists to command the bot to drive straight, turn in place to the right, and turn in place to the left. If you need more help, describe the behavior when sending different commands, both commanded and observed.
Based on the calibration driving backwards my guess is that your right wheel motor has failed.
Unfortunately, there's no good fix for that. Check to make sure that the wheels aren't bound on hair or other debris. And beyond that you may actually have a bad motor which will require you to go inside the Create and debug. Unfortunately the Creates are known to wear out over time.
Originally posted by tfoote with karma: 58457 on 2013-05-25
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Midnight on 2013-05-26:
Hey Tully, thanks for the reply.
Righto, I did some test and here what I found out:
Twist.linear.x = 1 => Forward and leftwards
Twist.angular.z = 1 => Rotates anti-clockwise while going forward and right
Twist.angular.z = -1 => Rotates clockwise while going back and right (0.o)
Thanks
Comment by Midnight on 2013-05-26:
Also, I wasn't able to take the wheel apart, since it doesn't belong to me. But from observation, I don't see any debris at all... Maybe suggest a way to debug, please?
Thanks again!
Comment by tfoote on 2013-05-26:
Unfortunately this seems to me to be a hardware problem. There's not much lower level you can go than sending raw twist commands.
Comment by Midnight on 2013-05-27:
Got it. I report on.
Thanks a lot =D
Comment by benabruzzo on 2017-09-10:
Does anyone know of a vendor or source for replacement wheel motors? I'm pretty sure I have a failed motor on a kabuki platform, but I have not been able to locate a replacement. | {
"domain": "robotics.stackexchange",
"id": 14275,
"tags": "ros, turtlebot, turtlebot-calibration, ros-electric"
} |
ROS2 rolling header location for different Ubuntu version | Question:
I have two clean installed Ubuntu, 20.04 and 22.04, and both of them are installed with ROS2 rolling.
Somehow I found their std_msgs headers are located in different folders as shown .
The Ubuntu 22.04 has one more extra include layer for the std_msgs. Is this the expected behavior for different Ubuntu versions?
Thanks!
Originally posted by Chris7462 on ROS Answers with karma: 3 on 2023-02-10
Post score: 0
Answer:
I found their std_msgs headers are located in different folders as shown here.
The Ubuntu 22.04 has one more extra include layer for the std_msgs. Is this the expected behavior for different Ubuntu versions?
I wouldn't call it behaviour, but: yes, this is expected. See ros2/ros2#1150 for context.
Note: unless you're (planning to) manually manage your include paths, this change should not have any effect on your development workflow. CMake will take care of sorting things out for you.
I have two clean installed Ubuntu, 20.04 and 22.04, and both of them are installed with ROS2 rolling.
please note: Rolling on Focal (20.04) is no longer being updated. It saw its last sync on 2022-01-28, which is over a year ago. See the following discussions for context:
Preparing ROS 2 Rolling for the transition to Ubuntu 22.04
Preparing for final Rolling sync on Ubuntu Focal 2022-01-25
Rolling Ridley has rolled onto Ubuntu Jammy
New packages for ROS 2 Rolling Ridley 2022-01-28
Basically: Rolling on Focal is very different from Rolling on Jammy.
Originally posted by gvdhoorn with karma: 86574 on 2023-02-11
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Chris7462 on 2023-02-15:
I see. Thank you for your reply. Really appreciate it. | {
"domain": "robotics.stackexchange",
"id": 38274,
"tags": "ros"
} |
Black Holes: More Than One Entry Point? | Question: Most animations and drawings of Black Holes that I've seen usually depict some kind of funnel which is the "entrance" to the black hole; let's call this the front.
Are there more than one way to enter the Black Hole, such as the back side (180 degrees from the "front" side)?
Can a Black Hole have more than one entry point?
(It's kind of difficult viewing a ball {the Black Hole} with multiple entry points around the surface, sucking in matter and light).
Answer: What you are talking about is an embedding diagram. These are ways of visualising the curvature of 3D space by projecting it onto a 2D surface. These can be very misleading - for example, the trajectory of something in freefall around a black hole is not simulated by rolling a ball on this surface.
Such diagrams make no attempt to represent the full dimensionality of a real black hole.
Black holes exist in all three spatial dimensions. If we are talking about a non-spinning black hole, it is absolutely spherically symmetric and looks and behaves the same approaching it from any direction. | {
"domain": "astronomy.stackexchange",
"id": 5700,
"tags": "black-hole"
} |
Addition of cyclohexanone to indole with MeONa and MeOH | Question: I need to find this reaction mechanism and I have run out of ideas. I have tried using -OMe on both indole and on cyclohexanone in order to create an enolate, but I can't think of anything else. Thank you in advance.
Answer: The Comments have pretty much ruled out the existence of this reaction in the chemical literature. Like many exam questions the proposed reaction is conceptually possible but that doesn't mean that it occurs. The reaction for indole itself (1b), in principle, would be catalytic in base in that alkoxide is regenerated. Your indole (1a) requires stoichiometric base to neutralize the carboxylic acid in addition to the catalytic amount of base required to effect the reaction. In practice, an excess of base would normally be employed.
Indole (1b) has a pKa of ~21 while methanol's pKa is 15.5. This means that the concentration of resonance stabilized anion 2 is quite low. However, it is a better nucleophile than indole itself. Any addition of the nitrogen of 2 to cyclohexanone (3) will be unproductive because the reverse reaction is its only option. Addition of the carbon site to 3 leads to protonation of the alkoxide and regeneration of the indole nucleus as shown in structure 4. During the course of these steps either 4 or 5 may be protonated and deprotonated. Species 5 effects elimination of hydroxide affording 6, which upon vinylogous deprotonation leads on to the final product 8 via 7.
The foregoing reaction is a case of a strong nucleophile (2) and a decent electrophile (3). What about the use of a strong electrophile (9) and a decent nucleophile (1)? An acid-catalyzed route is less problematic having the Vilsmeier-Haack formylation of indole as a precedent. Accordingly, species 10, which is formed from protonated cyclohexanone 9 and indole 1, aromatizes to indole 11. Acid-catalyzed loss of water from 12 affords 8 via 13.
If I were setting out to accomplish this reaction, I would certainly opt for the acid-catalyzed route. | {
"domain": "chemistry.stackexchange",
"id": 17404,
"tags": "organic-chemistry, reaction-mechanism"
} |
Flavonoid Xanthine Oxydase Inhibitors vs Hydroxychloroquine | Question: Regarding COVID-19 and hydroxychloroquine (HCQ), HCQ is known to increase pH in lysosomes by 1-2 orders of magnitude, and also results in lethal intracellular heme overload in parasites responsible for hemolytic diseases. Question is, if circulating levels of xanthine oxidase (XO) are also known to be increased in hemolytic diseases, if one considers the natural flavonoids Kaempferol, quercetin, or luteolin, which are known xanthine oxidase (XO) inhibitors, antivirals, and anti-oxidants, what would be different about pharmacology of natural XO inhibitors and HCQ?
Is HCQ an XO inhibitor because it results in heme overload in parasites? What about anti-oxidant properties of HCQ in terms of radical scavenging? There is no information supporting the notion that HCQ is an antiviral.
Sticking with hemolytic diseases, for which the goal is overload of intracellular heme in parasites, what would happen if a human took high levels of an XO inhibitor that wasn't HCQ during malaria?
XO inhibitors usually result in an overall decrease in uric acid, essentially rendering them as a natural supplement for gout -- i.e., replacement to allopurinol, which has horrendous side effects.
Answer: HCQ does show antiviral properties in vitro, but, in vivo (in humans), when taken after an initial dose of 800mg for 4 days at 600 mg/day, it was not found to prevent COVID-19 in high-risk exposure settings. Thus, it didn't stop people from getting SARS-CoV-2 infection followed by onset of symptoms/severity.
I suspect this is exactly what would happen with flavonoids, many of which have shown antiviral properties in vitro (laboratory settings). The information on flavonoids and SARS-CoV (2003) is very sparse, but several flavonoids did inhibit SARS-CoV in vitro (see last paragraph below). I do know there was a clinical trial approved in 2020 by NIH in which the investigators received an IND (from the FDA) to use diosmin as a supplement to treat patients with COVID-19 (i.e. hospitalized). However, I don't know if it was completed. Diosmin is available OTC as a supplement, and used via prescription under the brand names Dafflon® and Vasculera® for chronic venous insufficiency (CVI), which is caused by venous hypertension (VH). VH, in turn, aggravates metabolic imbalances leading to a self-perpetuating cycle of further metabolic changes, including venous acidosis. These changes promote further inflammation in vascular tissue leading to edema, skin damage and possible ulceration and deep vein thrombosis (DVT). Varicose veins and hemorrhoids are also part of the spectrum of CVI disorders.
Interestingly, it took 16 years before a group published the most comprehensive paper on in vitro inhibition if SARS-CoV (2003) by many flavonoids that were considered. | {
"domain": "chemistry.stackexchange",
"id": 15406,
"tags": "radicals"
} |
Why is Stokes flow reversible? | Question: Stokes flow is reversible because it is linear and instantaneous. Instantaneous means that is entirely the boundary conditions that define the movement at any given time.
What does the definition of "instantaneity" really mean?
And why is this flow linear?
Answer: I think the statement is false: Stokes flow is irreversible.
Viscosity is essential in understanding the Stokes problem. Without viscosity there is no force on the sphere. This is sometimes called d'Alembert's paradox. But viscosity will lead to irreversible dissipation of heat in the fluid. Another way to say the same thing: Pushing the sphere through the fluid (or pushing the fluid past a stationary sphere) does work, and that work goes into heating the fluid.
The only thing I can imagine that the "reversible" statement refers to is the following: The Navier Stokes equation is
$$
\partial_t v + v\cdot \nabla v = -\frac{1}{\rho}\nabla P + \frac{\eta}{\rho}
\nabla^2 v .
$$
Under $T$ the velocity and $\partial_t$ are odd. So the LHS is even, the first term on the RHS is even, and the viscosity term is odd (as expected). In Stokes flow, the first term on the LHS is zero (stationary flow) and the second term is neglected. That still leaves the RHS, which does not transform simply under $T$. But now we can take the curl and get an equation that only involves $v$
$$
\nabla^2 \nabla\times v =0
$$
which does not contain $\eta$ and transforms homogeneously under $T$. | {
"domain": "physics.stackexchange",
"id": 38443,
"tags": "fluid-dynamics, reversibility"
} |
What is the physical meaning of logarithm? | Question: What sense can be made of the natural logarithm when it appears in a physical process?
For example, this integral in thermodynamics: $\int_i^f \frac{dV}{V}=\ln\frac{V_f}{V_i}$, where $V$ denotes volume.
In general, $\ln\frac{Q_f}{Q_i}$, where $Q$ denotes a physical quantity.
Or in the formula for entropy:
$S=k_B\ln\Omega$
Why can the natural logarithm sometimes be interpreted as part of a physical process? What are the odds of that happening?
Answer: This is a pretty vague question, but I take it that you're groping for some "physical significance". The clearest one is that the logarithm is the inverse of the exponential function $x\mapsto e^x$ which itself arises whenever the rate of quantity's variation is equal to or proportional to that quantity, a fairly common statement describing physical processes. For example: rates of chemical reactions, radio active decays, attenuation of light or other EM radiation through mediums all follow such laws. Given this "physical definition" it follows then that the inverse function is simply that given by $x\mapsto \int_1^x \frac{\mathrm{d}z}{z}$ and then this definition is broadened into the punctured complex plane $\mathbb{C}\sim \{0\}$ by analytic continuation. Moreover the functions $\exp$ and $\log$ defined in this way have particularly simple Taylor series (the former is universally convergent, the latter convergent in an open unit radius circle about $z=1$) that make their definitions relatively easy to broaden to objects other than numbers such as matrices, operators and so forth.
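To make the "rate of variation proportional to the quantity" point concrete, here is a small Python sketch (the decay constant and initial amount are made-up numbers): the exponential solves such a law, and the natural logarithm inverts it:

```python
import math

k, N0 = 0.3, 1000.0  # assumed decay rate (1/s) and initial amount

def N(t):
    """Solution of dN/dt = -k N: the quantity's rate of change is
    proportional to the quantity itself."""
    return N0 * math.exp(-k * t)

def time_from_amount(n):
    """The natural logarithm inverts the exponential: recover t from N(t)."""
    return -math.log(n / N0) / k

t = 5.0
print(time_from_amount(N(t)))  # recovers 5.0 (up to rounding)
```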
The idea of a rate of a quantity's variation being proportional to that quantity is further generalized in operator equations and, in particular, in the theory of Lie groups, where $\exp$ and its inverse $\log$ play central roles in mapping neighbourhoods of the group's identity to and from the "Lie algebra", i.e. the space of the linear transformations that play the role of generalized "rates of change" - these can now be complex numbers, quaternions or in general square matrices (for the Lie algebra they can always be thought of as square matrices - Ado's theorem - but this is not always so for the Lie group). Again, it is the natural base $e$ logarithm that falls from the definitions by dint of its Taylor expansion around the identity. The theory of Lie groups, with its fundamental reliance on $\exp$ and $\log$, plays many important roles in physics and the sciences in general. In an even more generalized setting, the Schrödinger equation is also a generalized "rate of change proportional to the quantity" equation, as are the descriptions of flows and the exponential map defining geodesics in differential geometry.
Lastly, since you ask about thermodynamics and the formula graven on Boltzmann's headstone, the logarithm is the natural encoding of the idea that numbers of possibilities (volumes of phase spaces) multiply, whereas intuitively the corresponding "entropies", as extensive properties of thermodynamic systems, should add. Whilst it should be clear that the logarithm's base does not matter for this definition (indeed information theorists choose base 2 logarithms to write informational entropies in binary digits, or bits), one could argue that the natural base $e$ logarithm is the "prototypical" isomorphism (which is what Boltzmann's intuitive idea is all about) between the group of reals under addition and the group of strictly positive reals under multiplication, arising from the Lie theoretical idea of mapping the Lie group $(\mathbb{R}^+\sim\{0\},\,\times)$ onto its Lie algebra $(\mathbb{R},\,+)$.
What is the probability of all this happening? It's precisely equal to unity: for the above ideas are how we define the natural logarithm (i.e. as the ones defined above as opposed to logarithms with another base or even indeed other functions altogether). | {
"domain": "physics.stackexchange",
"id": 9299,
"tags": "thermodynamics"
} |
Major product formed when HBr is added to 1-phenyl but-2-ene | Question: Addition of $\ce{HBr}$ to $\ce{Ph-CH2-CH=CH-CH3}$ possibly yields two products:
$\ce{Ph-CH2-CH2-CHBr-CH3}$ or
$\ce{Ph-CH2-CHBr-CH2-CH3}$
Which one of them should possibly be the major product? Markovnikov's rule (incorrectly) predicts that both of them should be the major product, because the double-bonded carbon atoms in the substrate contain an equal number of hydrogen atoms.
Answer: Phenyl groups are bulky and cause steric hindrance.
This makes it easier for the $\ce{Br^-}$ nucleophile to target the double-bond carbon that is closer to the right side of the reactant molecule.
On the other hand, $\ce{H^+}$ is small enough to bypass said steric hindrance.
Therefore, the first product is more likely to form. | {
"domain": "chemistry.stackexchange",
"id": 16972,
"tags": "organic-chemistry, reaction-mechanism, c-c-addition"
} |
Quantum circuit to implement matrix exponential | Question: I want to build a circuit which will implement $e^{iAt}$, where $ A=
\begin{pmatrix}
1.5 & 0.5\\
0.5 & 1.5\\
\end{pmatrix}
$ and $t= \pi/2 $.
We see that $A$ can be written as, $A=1.5I+0.5X$. Since $I$ and $X$ commute, $e^{iAt}=e^{i(1.5I)t}e^{i(0.5X)t}$.
Evaluating manually, I get $e^{iAt}=1/2\begin{pmatrix}
e^{2it}+e^{it} & e^{2it}-e^{it}\\
e^{2it}-e^{it} & e^{2it}+e^{it}\\
\end{pmatrix}.$
Question
How can I decompose the matrix $\frac{1}{2}\begin{pmatrix}
e^{2it}+e^{it} & e^{2it}-e^{it}\\
e^{2it}-e^{it} & e^{2it}+e^{it}\\
\end{pmatrix}$ into elementary quantum gates?
Answer: I think the factorization $e^{iAt}= e^{i(1.5I)t} e^{i(0.5X)t}$ is enough for constructing the circuit. From rx and u3:
$$R_x(-t) = e^{i(0.5X)t} \qquad R_x(\theta) = u3(\theta, -\pi/2, \pi/2)$$
The $e^{i(1.5I)t}$ is a global phase gate that can be implemented via the following circuit for the q[0] qubit. Here is the whole circuit for the $e^{iAt}$:
# Rx part
circuit.u3(-t, -pi/2, pi/2, q[0])
# Global phase part
circuit.u1(1.5*t, q[0])
circuit.x(q[0])
circuit.u1(1.5*t, q[0])
circuit.x(q[0])
The more general approach can be found in this paper (especially 4.1 Trotter decomposition). | {
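As a sanity check (my addition, not part of the original answer), the factorization can be verified numerically with plain NumPy, using the matrix identity $e^{i\theta X} = \cos\theta\, I + i\sin\theta\, X$, which holds because $X^2 = I$:

```python
import numpy as np

t = np.pi / 2
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Since X^2 = I, Euler's identity for matrices gives
#   e^{i(0.5 X)t} = cos(t/2) I + i sin(t/2) X,
# and e^{i(1.5 I)t} is just the global phase e^{1.5 i t}.
U = np.exp(1.5j * t) * (np.cos(0.5 * t) * I + 1j * np.sin(0.5 * t) * X)

# Closed form of e^{iAt} from the question.
closed = 0.5 * np.array(
    [[np.exp(2j * t) + np.exp(1j * t), np.exp(2j * t) - np.exp(1j * t)],
     [np.exp(2j * t) - np.exp(1j * t), np.exp(2j * t) + np.exp(1j * t)]])

assert np.allclose(U, closed)          # factored form matches e^{iAt}
assert np.allclose(U @ U.conj().T, I)  # and is unitary, as it must be
```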
"domain": "quantumcomputing.stackexchange",
"id": 1391,
"tags": "quantum-gate, qiskit, hamiltonian-simulation, hhl-algorithm"
} |
JPEG DCT padding | Question: Since the JPEG DCT block used is 8x8, how does the method deal with images with dimensions that are not multiples of 8? What kind of padding does it use? How are the 8x8 blocks of the image chosen?
Answer: The filling is performed to the right ($[1\,,1\,,3\,,x_1\,,x_2\,,x_3\,,x_4\,,x_5]$) or the bottom ($[1\,,1\,,3\,,y_1\,,y_2\,,y_3\,,y_4\,,y_5]^T$), line by line or column by column. The extended values, as far as I know, are not fixed; they depend on the encoder's choices.
Remember that blocks are formed on luminance/chrominance transformed images, after color space transformation (RGB>YUV) and chroma subsampling. Images are parsed in raster scan: left to right, top to bottom.
The partly occupied blocks on the right and the bottom are filled into Minimum Coded Units of $8\times8$, see JPEG Minimum Coded Unit (MCU) and Partial MCU:
In the case where there are not enough pixels in a row or column to complete a full tile, a partial MCU is used. A partial MCU is automatically extended to be the size of a full MCU but then the overall image dimensions are used to indicate where to cut off the extra later. This extension is generally done by repeating the last pixel of the row or column as necessary.
From Baseline JPEG:
The image is partitioned into blocks of size 8x8. Each block is then independently transformed using the 8x8 DCT. If the image dimensions are not exact multiples of 8, the blocks on the lower and right hand boundaries may be only partially occupied. These boundary blocks must be padded to the full 8x8 block size and processed in an identical fashion to every other block. The compressor is free to select the value used to pad partial boundary blocks.
This last image is taken from Heiko Schwarz, Source Coding and Compression:
A mere zero-padding can be applied, but the risk of strong artifacts at the borders is very high. For advanced applications, and with more recent image coders, one can benefit from the symmetry in the basis function to extend the image more inherently with symmetry/antisymmetry.
You can check for instance On Reconstruction Methods for Processing Finite-Length Signals with Paraunitary Filter Banks , Oct. 1995 (online version). | {
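As an illustration (not from the original answer), here is a minimal NumPy sketch of the common "repeat the last pixel" extension described above; the function name is mine, and real encoders are free to pad differently:

```python
import numpy as np

def pad_to_multiple_of_8(img):
    """Extend an image to multiples of 8 by replicating the last
    row/column (np.pad mode='edge'), the common encoder choice."""
    h, w = img.shape[:2]
    pad_h = (-h) % 8
    pad_w = (-w) % 8
    return np.pad(img, ((0, pad_h), (0, pad_w)), mode='edge')

img = np.arange(5 * 11).reshape(5, 11)   # 5x11 toy luma plane
padded = pad_to_multiple_of_8(img)
assert padded.shape == (8, 16)           # next multiples of 8

# The extension just replicates the boundary samples:
assert (padded[5:, :] == padded[4, :]).all()
assert (padded[:, 11:] == padded[:, 10:11]).all()
```

The original image dimensions are kept in the headers, so the decoder knows to discard the padded samples after the inverse DCT.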
"domain": "dsp.stackexchange",
"id": 4416,
"tags": "image-processing, image-compression, jpeg"
} |
Electric Potential of Two Oppositely Charged Adjacent Spheres | Question: I am confused about part d to the following question (defining the reference point $\displaystyle\lim_{x\to\infty}\phi(x)=0$). My instructor has told me that $\displaystyle\lim_{x\to-\infty}\phi(x)=0$ but this does not make sense to me. I include my work on the problem (linked below), which concludes that this potential would usually be non-zero, depending on the values for $R$, $d$, and $\rho$ (only if $d=0$ do my workings predict a potential at negative infinity of $0$). Is the potential actually zero at negative infinity, necessarily? If so, please explain.
My work typed up on LaTex: link
Answer: Keep a few things in mind and you won't have to do much integration. The law of superposition will help. So will the consideration of equivalent charge distributions.
The field of a uniformly charged sphere outside of that sphere is the same as it would be had all the charge been located at the center of the sphere.
For A), we are outside of both spheres, so we just calculate the field for two point charges.
For B), we pretend we only have one sphere and calculate the field from the center. Using Gauss' law, $E\,4\pi r^2=\rho(4\pi r^3)/(3\epsilon_0)\implies \vec{E}=\frac{\rho r}{3\epsilon_0}\hat{r}$. Keep in mind that $\vec{r}$ is the vector pointing from the center of the sphere to the field point. Now, for a given field point, we figure out the vectors to the respective centers, adjust this electric field formula appropriately, and sum the electric field contributed by both spheres.
For C), we combine the approaches of A) and B), i.e., since we are outside of one of the spheres, we treat it as if all of its charge was located at its center, for the part contributed by the other sphere, which the field point is in, we use the radially dependent formula above. Keeping symmetry considerations in mind, a reflection and change in charge sign gives us the field in the complementary region.
For D), we integrate from $x=+\infty$ to the near surface of the sphere on the right. Since the field there is effectively the field of two point charges, the integral is elementary. We evaluate the boundary points of the integration asserting the potential is zero at infinity. Once we are inside one of the spheres, the integral becomes somewhat more complicated: we have two field contributions, one term proportional to $r$ and the other proportional to $\approx 1/r^2$ (adjusting $r$ to represent the distance to the off-center sphere's center). In short, the potential will involve terms in $r^2$ and $1/r$ in the spherical regions that don't overlap; where they do overlap, the contribution comes only from terms proportional to $r^2$. Finally, given symmetry considerations, we can use reflection principles to obtain the potential along the rest of the x axis.
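As a quick numeric check of the single-sphere building block (my own sketch, not part of the original answer), with units chosen so that $\rho/\epsilon_0 = 1$: outside, the potential falls off as $1/r$ like a point charge; inside, it is quadratic in $r$; and the two branches meet continuously at the surface. The two-sphere potential then follows by superposition of shifted copies.

```python
import numpy as np

R = 1.0  # sphere radius; units chosen so rho / epsilon_0 = 1

def phi(r):
    """Potential of a uniformly charged sphere, with phi -> 0 at infinity."""
    r = np.asarray(r, dtype=float)
    outside = R**3 / (3.0 * r)            # ~ 1/r, point-charge-like
    inside = (3.0 * R**2 - r**2) / 6.0    # quadratic in r
    return np.where(r >= R, outside, inside)

# Both branches agree at the surface r = R:
assert np.isclose(phi(R), R**2 / 3.0)
# Far away it matches a point charge Q = rho * (4/3) pi R^3:
assert np.isclose(phi(10 * R), (4.0/3.0) * np.pi * R**3 / (4.0 * np.pi * 10 * R))
```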
"domain": "physics.stackexchange",
"id": 62468,
"tags": "electrostatics, potential"
} |
Example of PyQt5 simple turn-based game code | Question: I've made my first turn-based game in PyQt5. I suppose that its idea can also be used by other novice GUI programmers.
There is a 5 by 5 squared unpainted field and 4 players. Each player starts at a corner square and has his own colour. A player can move to an adjacent square and fill it with his colour if it isn't occupied by another player and isn't already filled in the player's own colour.
If a player has nowhere to move, he is randomly teleported to a non-filled square. The game ends when all squares are filled. The player who has the most squares filled with his colour wins.
Also I would like to hear any suggestions about code improvement. The problem parts are probably self.turn() and self.turn_loop() in MyApp class since they have a bit complicated "if/elif/else" logic.
Main module (PainterField.py):
#!/usr/bin/env python
'''
Game name: PainterField
Author: Igor Vasylchenko
As this module and its sub-modules use PyQt5 they are distributed and
may be used under the terms of the GNU General Public License version 3.0.
See http://www.gnu.org/copyleft/gpl.html
'''
import random
import sys
import traceback
from PyQt5.QtCore import (QRectF, QTimer, Qt)
from PyQt5.QtGui import (QBrush, QColor, QImage)
from PyQt5.QtWidgets import (QApplication, QGraphicsItem, QGraphicsScene,
QGraphicsView, QMainWindow, QPushButton)
# Custom classes
from FieldClasses import (PlayerQGraphics, SquareQGrapics)
from FieldFunctions import (create_obstacles, create_squares,
create_players, print_main, print_rules)
# Gui generated by Qt5
from gui import Ui_Field as Ui_MainWindow
'''Exceptions handling block. Needed to track errors during
operation.'''
sys._excepthook = sys.excepthook
def exception_hook(exctype, value, traceback):
sys._excepthook(exctype, value, traceback)
sys.exit(1)
sys.excepthook = exception_hook
class MyApp(QMainWindow, Ui_MainWindow):
def __init__(self):
QMainWindow.__init__(self)
Ui_MainWindow.__init__(self)
self.setupUi(self)
# Important property that deletes all widgets (including QTimer)
# on close.
self.setAttribute(Qt.WA_DeleteOnClose)
# Variables
self.key = None
self.players = create_players()
self.scene = QGraphicsScene()
self.squares = create_squares(9, 9)
'''Timer is used to repeat self.turn_loop and handle the turn sequence.
It is stopped by default and starts after self.start() is called.
Stops when the current game ends or the application closes (see note above).'''
self.timer = QTimer(self)
self.draw_field(self.squares, self.players)
self.QGraph_field.setScene(self.scene)
# Connecting signals to slots
self.QBut_main.clicked.connect(lambda: self.print_text(print_main()))
self.QBut_reset.clicked.connect(self.reset)
self.QBut_rules.clicked.connect(lambda: self.print_text(print_rules()))
self.QBut_start.clicked.connect(self.start)
self.timer.timeout.connect(self.turn_loop)
def draw_field(self, squares, players):
for xy in squares.keys():
self.scene.addItem(squares[xy])
for ID in players.keys():
player = players[ID]
self.scene.addItem(player)
self.squares[player.xy].fill(player.colour)
def isEnd(self):
for square in self.squares.values():
if square.colour == 'cyan':
end = 0
break
else:
end = 1
return end
def keyPressEvent(self, event):
key = event.key()
# Player movement
if key in (Qt.Key_Left, Qt.Key_Right,
Qt.Key_Up, Qt.Key_Down):
self.key = key
# Start new game
elif key in (Qt.Key_Enter, Qt.Key_Return):
self.start()
elif key == Qt.Key_Escape:
self.close()
# Reset current field and draw new one.
# Starting the game is still needed.
elif key == Qt.Key_R:
self.reset()
def print_text(self, source):
self.QText_status.setHtml(source)
def reset(self):
self.timer.stop()
self.key = None
self.players = create_players()
self.squares = create_squares(9, 9)
self.scene.clear()
self.draw_field(self.squares, self.players)
self.print_text(print_main())
def results(self):
results = ()
text = ''
for ID in self.players:
player = self.players[ID]
colour = player.colour
score = 0
for square in self.squares.values():
if colour == square.colour:
score += 1
results += (score, )
score = '<p>Player {0} ({1}): {2}</p>'.format(ID, player.colour,
score)
text += score
max_score = max(results)
text += 'Player {} won with score of {}!'.format(
results.index(max_score),
max_score
)
text = text.replace('Player 0', 'You')
# Tie between player and computer is still considered a win)))
return text
def start(self):
self.key = None
if not self.isEnd():
self.timer.start(50)
# Initially was divided in two functions turn_pl and turn_ai,
# but they shared a lot of code. Still a bit messy though.
def turn(self, ID):
'''If there is no room to move, the player is teleported onto an
unpainted square.'''
player = self.players[ID]
obstacles = create_obstacles(self.players)
free_directions = player.findDirections(self.squares, obstacles)
# Setting parameters to teleport
if not free_directions:
for xy in self.squares.keys():
square = self.squares[xy]
if square.colour == 'cyan':
player.xy = xy
player.update()
square.fill(player.colour)
break
# Returns 1 so next ai player can make move
return 1
# For human player pressed key is used for player movement
elif ID == 0:
key = self.key
# Ai moves in random direction
else:
direction = random.sample(free_directions, 1)
key = direction[0]
# Moving player in designated direction
obstacles = create_obstacles(self.players)
xy = player.goto(key, obstacles, self.squares)
if xy is not None:
self.squares[xy].fill(player.colour)
return xy
def turn_loop(self):
end = self.isEnd()
if not end:
# Player turn after arrow key is pressed
if self.key is not None:
xy = self.turn(0)
self.key = None
# Waiting on player turn
else:
xy = None
# Computer ('ai') turn
if xy is not None:
for ID in range(1, len(self.players)):
self.turn(ID)
# Ending game and printing results
else:
self.timer.stop()
self.print_text(self.results())
if __name__ == "__main__":
app = QApplication(sys.argv)
window = MyApp()
window.show()
sys.exit(app.exec_())
Custom classes for players and squares (FieldClasses.py):
from PyQt5.QtCore import (QRectF, Qt)
from PyQt5.QtGui import (QBrush, QColor, QImage)
from PyQt5.QtWidgets import QGraphicsItem
class PlayerQGraphics(QGraphicsItem):
def __init__(self, xy=(-1,-1),
colour='green', icon='graphics/player.png'):
QGraphicsItem.__init__(self)
self.colour = colour
self.icon = icon
self.xy = xy
def boundingRect(self): #Is set to field dimensions
return QRectF(0,0,270,270)
def findDirections(self, squares, obstacles, exceptColours=None):
'''Returns list of free directions to move.
Direction is not free if colour of target square is similar
to player’s or target square is already occupied.'''
if exceptColours is None:
exceptColours = (self.colour, )
obstacles = tuple(obstacles)
free_directions = []
for n in range(1, 5):
xy = self.prepareGoto(n, exceptColours, obstacles, squares)
if xy is not None:
free_directions.append(n)
return free_directions
def goto(self, direction, obstacles, squares, exceptColours=None):
'''Checks direction by self.prepareGoto(...) and moves
player if direction is free.'''
if exceptColours is None:
exceptColours = (self.colour, )
xy = self.prepareGoto(direction, exceptColours, obstacles, squares)
if xy is not None:
self.xy = xy
self.update()
return xy
def paint(self, painter, option, widget):
x, y = self.xy
target = QRectF(x*30, y*30, 28, 28)
source = QRectF(0, 0, 28, 28)
painter.drawImage(target, QImage(self.icon), source)
def prepareGoto(self, direction, exceptColours, obstacles, squares):
'''Checks if the selected direction is free and returns the actual
coordinates to move to if so. Otherwise returns None.'''
x, y = self.xy
if direction in (Qt.Key_Up, 'u', 1):
y = y - 1
elif direction in (Qt.Key_Down, 'd', 2):
y = y + 1
elif direction in (Qt.Key_Left, 'l', 3):
x = x - 1
elif direction in (Qt.Key_Right, 'r', 4):
x = x + 1
xy = (x, y)
try:
for n in obstacles:
if xy == n:
xy = None
break
if (xy is not None
and squares[xy].colour in exceptColours):
xy = None
except KeyError:
xy = None
return xy
class SquareQGrapics(QGraphicsItem):
def __init__(self, xy=(-1,-1), colour='cyan'):
QGraphicsItem.__init__(self)
self.colour = colour
self.xy = xy
def boundingRect(self):
x, y = self.xy
return QRectF(x*30, y*30, 28, 28)
def fill(self, new_colour='red'):
'''Fills square with selected colour by updating self.colour.'''
self.colour = new_colour
self.update()
def paint(self, painter, option, widget):
x, y = self.xy
colour = QBrush(QColor(self.colour))
painter.setBrush(colour)
painter.drawRect(x*30, y*30, 28, 28)
Custom functions (FieldFunctions.py):
#Custom classes
from FieldClasses import (PlayerQGraphics, SquareQGrapics)
#Obstacles are players' coordinates. It is easier to return them as a
#generator since they need to be up to date when called.
def create_obstacles(players):
for player in players.values():
yield player.xy
def create_squares(cols, rows):
squares = {}
for x in range(cols):
for y in range(rows):
squares[(x,y)] = SquareQGrapics((x,y))
return squares
def create_players():
players = {}
ai = 'graphics/ai.png'
players[0] = PlayerQGraphics(xy=(0,0)) # Human controlled player
players[1] = PlayerQGraphics(xy=(8,0), colour='red', icon=ai)
players[2] = PlayerQGraphics(xy=(0,8), colour='blue', icon=ai)
players[3] = PlayerQGraphics(xy=(8,8), colour='black', icon=ai, )
return players
def print_main():
text = ('''<p>Welcome to PainterField!</p>
<p>To start game press "Start" or "Enter"
key. Use arrow keys to move player icon from the top left
corner.</p>
<p>Click "Main menu" to read this message.</p>
<p>Click "Rules" to read them.</p>
<p>Press "Reset" or "R" key to reset game
field (arrow keys will be frozen).</p>''')
return text
def print_rules():
text = ('''<p>Each player goes to one of four adjacent squares
and fills it with his colour. Squares of player’s
colour are not allowed to enter.</p>
<p>If player has nowhere to move he is randomly teleported
to an empty square.</p>
<p>Game ends when all squares are painted. Player,
who painted the most squares, wins.</p>''')
return text
Player icons (rename to ai.png and player.png and place in '/graphics'):
GUI code generated from .ui file (gui.py):
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'PainterField.ui'
#
# Created by: PyQt5 UI code generator 5.7
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Field(object):
def setupUi(self, Field):
Field.setObjectName("Field")
Field.resize(270, 570)
Field.setMinimumSize(QtCore.QSize(270, 570))
Field.setMaximumSize(QtCore.QSize(270, 570))
self.QGraph_field = QtWidgets.QGraphicsView(Field)
self.QGraph_field.setGeometry(QtCore.QRect(0, 300, 270, 270))
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.QGraph_field.sizePolicy().hasHeightForWidth())
self.QGraph_field.setSizePolicy(sizePolicy)
self.QGraph_field.setMinimumSize(QtCore.QSize(270, 270))
self.QGraph_field.setMaximumSize(QtCore.QSize(270, 270))
self.QGraph_field.setFocusPolicy(QtCore.Qt.NoFocus)
self.QGraph_field.setFrameShape(QtWidgets.QFrame.NoFrame)
self.QGraph_field.setLineWidth(0)
self.QGraph_field.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.QGraph_field.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.QGraph_field.setSizeAdjustPolicy(QtWidgets.QAbstractScrollArea.AdjustIgnored)
self.QGraph_field.setSceneRect(QtCore.QRectF(0.0, 0.0, 270.0, 270.0))
self.QGraph_field.setAlignment(QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignTop)
self.QGraph_field.setObjectName("QGraph_field")
self.verticalLayoutWidget = QtWidgets.QWidget(Field)
self.verticalLayoutWidget.setGeometry(QtCore.QRect(10, 10, 251, 271))
self.verticalLayoutWidget.setObjectName("verticalLayoutWidget")
self.verticalLayout = QtWidgets.QVBoxLayout(self.verticalLayoutWidget)
self.verticalLayout.setContentsMargins(0, 0, 0, 0)
self.verticalLayout.setObjectName("verticalLayout")
self.QText_status = QtWidgets.QTextBrowser(self.verticalLayoutWidget)
font = QtGui.QFont()
font.setPointSize(10)
self.QText_status.setFont(font)
self.QText_status.setFocusPolicy(QtCore.Qt.NoFocus)
self.QText_status.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff)
self.QText_status.setReadOnly(True)
self.QText_status.setObjectName("QText_status")
self.verticalLayout.addWidget(self.QText_status)
self.horizontalLayout_4 = QtWidgets.QHBoxLayout()
self.horizontalLayout_4.setObjectName("horizontalLayout_4")
self.QBut_start = QtWidgets.QPushButton(self.verticalLayoutWidget)
font = QtGui.QFont()
font.setPointSize(14)
self.QBut_start.setFont(font)
self.QBut_start.setFocusPolicy(QtCore.Qt.NoFocus)
self.QBut_start.setObjectName("QBut_start")
self.horizontalLayout_4.addWidget(self.QBut_start)
self.QBut_rules = QtWidgets.QPushButton(self.verticalLayoutWidget)
font = QtGui.QFont()
font.setPointSize(14)
self.QBut_rules.setFont(font)
self.QBut_rules.setFocusPolicy(QtCore.Qt.NoFocus)
self.QBut_rules.setObjectName("QBut_rules")
self.horizontalLayout_4.addWidget(self.QBut_rules)
self.verticalLayout.addLayout(self.horizontalLayout_4)
self.horizontalLayout_3 = QtWidgets.QHBoxLayout()
self.horizontalLayout_3.setObjectName("horizontalLayout_3")
self.QBut_reset = QtWidgets.QPushButton(self.verticalLayoutWidget)
font = QtGui.QFont()
font.setPointSize(14)
self.QBut_reset.setFont(font)
self.QBut_reset.setFocusPolicy(QtCore.Qt.NoFocus)
self.QBut_reset.setObjectName("QBut_reset")
self.horizontalLayout_3.addWidget(self.QBut_reset)
self.QBut_main = QtWidgets.QPushButton(self.verticalLayoutWidget)
font = QtGui.QFont()
font.setPointSize(14)
self.QBut_main.setFont(font)
self.QBut_main.setFocusPolicy(QtCore.Qt.NoFocus)
self.QBut_main.setObjectName("QBut_main")
self.horizontalLayout_3.addWidget(self.QBut_main)
self.verticalLayout.addLayout(self.horizontalLayout_3)
self.retranslateUi(Field)
QtCore.QMetaObject.connectSlotsByName(Field)
def retranslateUi(self, Field):
_translate = QtCore.QCoreApplication.translate
Field.setWindowTitle(_translate("Field", "PainterField"))
self.QText_status.setHtml(_translate("Field", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n"
"<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n"
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:\'MS Shell Dlg 2\'; font-size:10pt; font-weight:400; font-style:normal;\">\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\">Welcome to PainterField! </p>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\">To start game press "Start" or "Enter" key. Use arrow keys to move player icon at top left corner. </p>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\">Click "Main menu" to read this message. </p>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\">Click "Rules" to read them. </p>\n"
"<p style=\" margin-top:12px; margin-bottom:12px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\">Press "Reset" or "R" key to reset game field (arrow keys will be frozen).</p></body></html>"))
self.QBut_start.setText(_translate("Field", "Start"))
self.QBut_rules.setText(_translate("Field", "Rules"))
self.QBut_reset.setText(_translate("Field", "Reset"))
self.QBut_main.setText(_translate("Field", "Main menu"))
Also see my answer below for further improvements of coding style/readability.
Follow-up question on this post: Example of PyQt5 Snake game
Answer: List of major edits in PainterField module:
1) Removed useless import from beginning (main module doesn’t directly use custom classes).
2) Iteration over dictionary elements:
Had a lot of (see def draw_field()):
for xy in squares.keys():
square = squares[xy]
do_smth
Fixed with:
for square in squares.values():
do_smth
3) Used dictionary mapping instead of multiple elif statements in def keyPressEvent():
keymap = {Qt.Key_Enter: self.start,
Qt.Key_Return: self.start,
Qt.Key_Escape: self.close,
Qt.Key_R: self.reset
}
# Player movement
if key in {Qt.Key_Left, Qt.Key_Right,
Qt.Key_Up, Qt.Key_Down}:
self.key = key
# Starting new game, exiting or resetting current field
elif key in keymap:
keymap[key]()
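A Qt-free sketch of the same dictionary-dispatch idea (function and key names here are purely illustrative): handlers are stored as callables and looked up in a dict, replacing an if/elif ladder.

```python
# Toy handlers standing in for self.start / self.close / self.reset.
def start():
    return "started"

def close():
    return "closed"

def reset():
    return "reset"

# Keys map straight to bound callables; note the functions are NOT
# called when building the dict, only when dispatched.
keymap = {"Enter": start, "Return": start, "Esc": close, "R": reset}
movement_keys = {"Left", "Right", "Up", "Down"}

def handle(key):
    if key in movement_keys:
        return "move:" + key
    elif key in keymap:
        return keymap[key]()   # look up and invoke the handler
    return None                # unrecognized key: ignore

assert handle("Left") == "move:Left"
assert handle("R") == "reset"
assert handle("X") is None
```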
4) def results(), which creates text was edited a bit and moved to FieldFunctions.py
5) def turn() and def turn_loop() readability was improved:
Improved iteration over dictionary elements as in p. 2.
Instead of
elif ID == 0:
key = self.key
...
xy = self.turn(0)
used:
elif player == self.players['You']:
key = self.key
...
xy = self.turn(self.players['You'])
self.players now looks like {'You': PlayerQGraphics(), 'Player 1': PlayerQGraphics(), ...}
random.choice() instead of random.sample(, 1)
"Computer ('ai') turn" section now is:
if xy is not None:
ai_players = sorted(self.players.keys())
ai_players.remove('You')
for ai in ai_players:
self.turn(self.players[ai])
So computer players now make turn in strict order.
List of major edits in FieldClasses module:
1) Bounding rectangles of QGraphicsItem subclasses now fit the images and are changed where needed by the .prepareGeometryChange() method instead of .update()
def boundingRect(self):
x, y = self.xy
return QRectF(x*30, y*30, 28, 28)
2) obstacles is now a set, as it is used multiple times inside a nested function.
3) In def findDirections() instead of:
for n in range(1, 5):
xy = self.prepareGoto(n, exceptColours, obstacles, squares)
used
for key in {Qt.Key_Left, Qt.Key_Right,
Qt.Key_Up, Qt.Key_Down}:
xy = self.prepareGoto(exceptColours, key, obstacles, squares)
which is self explanatory inside code)
4) Yet again beautiful dictionary mapping in def prepareGoto():
x, y = self.xy
moves_map = {Qt.Key_Up: (x, y-1),
Qt.Key_Down: (x, y+1),
Qt.Key_Left: (x-1, y),
Qt.Key_Right: (x+1, y)
}
xy = moves_map[key]
5) And the second part of def prepareGoto() also looks more readable since try: except statements handle exactly one operation:
# Prevent movement outside the field bounds
try:
target_square = squares[xy]
except KeyError:
return None
# Prevent movement into other players. Can be replaced with an
# or statement, but that looks ugly and might be a bit slower than elif.
if xy in obstacles:
xy = None
elif target_square.colour in exceptColours:
xy = None
return xy
Brief edits in FieldFunctions:
1) Yet AGAIN dictionary mapping was used in def create_players() to iterate over:
colours_map = {'You': ((0,0), 'green'), 'Player 1': ((8,0), 'red'),
'Player 2': ((0,8), 'blue'), 'Player 3': ((8,8), 'black')}
for name, parameters in colours_map.items():
xy, colour = parameters
players[name] = PlayerQGraphics(xy, colour, icon='graphics/ai.png')
else: # else here looks a bit better than just one line of code after for loop
players['You'].icon = 'graphics/player.png'
return players
2) Slightly different version of def print_results() was moved from main module to FieldFunctions.py
Conclusion:
I'm still open to suggestions, though it seems that the code is quite optimized now. So probably the next post will be about a snake game clone, which I wrote 2 times faster because I used this game as a template))) | {
"domain": "codereview.stackexchange",
"id": 22272,
"tags": "python, game, python-3.x, pyqt"
} |
Is prion a term used to describe the normal form of the protein as well as the disease causing form? | Question: I've been reading my textbook and it refers to prions as a normal protein with a helpful function but it can turn into a disease causing form. However, I look in my other textbook and it refers to the word prion as solely being a disease causing protein.
I'd like to know which is the correct definition. Ie. Would I be correct in saying "The prion protein is normally involved in synaptic transmission but can turn into a disease causing form"?
Thanks in advance!
Answer: The normal isoform of the protein is called PrPC, which stands for cellular prion protein, while the infectious isoform is called PrPSc, which stands for scrapie prion protein.
According to Riesner (2003):
The biochemical properties of the prion protein which is the major, if not only, component of the prion are outlined in detail. PrP is a host-encoded protein which exists as PrPC (cellular) in the non-infected host, and as PrPSc (scrapie) as the major component of the scrapie infectious agent. (emphasis mine)
If you search for "cellular prion protein" you're gonna find several papers that use the name prion protein to the normal isoform. Some examples:
Prado, M., Alves-Silva, J., Magalhães, A., Prado, V., Linden, R., Martins, V. and Brentani, R. (2004). PrPc on the road: trafficking of the cellular prion protein. Journal of Neurochemistry, 88(4), pp.769-781.
Ramljak, S. (2008). Physiological function of the cellular prion protein (PrPc_1hnc). 1st ed. Berlin: Logos-Verl.
Pantera, B., Bini, C., Cirri, P., Paoli, P., Camici, G., Manao, G. and Caselli, A. (2009). PrPc activation induces neurite outgrowth and differentiation in PC12 cells: role for caveolin-1 in the signal transduction pathway. Journal of Neurochemistry, 110(1), pp.194-207.
Martins, V., Mercadante, A., Cabral, A., Freitas, A. and Castro, R. (2017). Insights into the physiological function of cellular prion protein.
And many others.
Therefore, following this nomenclature, the answer to your question ("Would I be correct in saying 'The prion protein is normally involved in synaptic transmission but can turn into a disease causing form'?") is yes. The difference is the adjective: cellular or scrapie.
Finally, pay attention to this: you have two different questions here. In the title you say "Is prion a term used...", but in the last paragraph you say "Is the prion protein normally involved in...". As extensively discussed in the other answer, the term prion alone (instead of prion protein) is normally used only when referring to the abnormal isoform. More on that here: https://www.cdc.gov/prions/pdfs/public-health-impact.pdf
Source: Detlev Riesner; Biochemistry and structure of PrPC and PrPSc. Br Med Bull 2003; 66 (1): 21-33. | {
"domain": "biology.stackexchange",
"id": 7208,
"tags": "molecular-biology, proteins, terminology, protein-structure, prion"
} |
What is a wavefunction, in the context of String Theory? | Question: I have to admit that I don't know much about String Theory (or QFT, for that matter ..), but, if we assume that String Theory is the correct description of fundamental particles, what is the correct way to think about a wavefunction, in the context of String Theory?
For example, take an electron being shot from an emitter to a target. I usually imagine the wavefunction as spreading out from the emitter in three dimensions, possibly traveling through two or more slits; interfering with itself along the way; and then spontaneously deciding to collapse down onto one of the atoms that makes up the target surface. Where does the string fit into this picture?
Does String Theory say anything about the mysterious wavefunction collapse? (I assume it must do, otherwise can it really be described as a 'theory of everything'?)
Edit:
It was mentioned in one of the answers that, in string theory, 'point particles' are described as strings. Hang on though .. in QM we were told: "there are no point particles, there are only wavefunctions." But, now in string theory, apparently these point particles are back again, only they're now described as strings. So, where did the wavefunctions go?
Answer: I think one important thing to mention is that Copenhagen Interpretation of Quantum Mechanics is not the only interpretation out there; in particular there is no "need" for wavefunction collapse if you don't want it. Alternative interpretations include the Many Worlds Interpretation (which makes sense in light of the development of Decoherence in quantum theories; for more on this interpretation I'd recommend this paper by Max Tegmark).
I don't think String Theory should be viewed as something that can resolve between different interpretations of Quantum Mechanics; after all String Theory is still a Quantum Theory. In particular, you still have wavefunctions as before; but instead of describing what you might think of as 'point particles', they describe strings. | {
"domain": "physics.stackexchange",
"id": 84768,
"tags": "quantum-mechanics, string-theory, wavefunction"
} |
Broadcaster & Listener | Question:
Hi all,
May I know what is the difference between broadcaster/listener and tf broadcaster/listener?
Still new to ROS fuerte. Thanks.
Originally posted by FuerteNewbie on ROS Answers with karma: 123 on 2013-09-23
Post score: 0
Answer:
The tf broadcaster/listener is specific to tf messages and interprets them to give easy access to tf.
A non-tf broadcaster/listener is usually called publisher/subscriber.
Originally posted by dornhege with karma: 31395 on 2013-09-24
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 15630,
"tags": "ros, broadcaster, transform"
} |
Family tree for edible plants? | Question: I am looking for a family tree for plants, particularly veg / herbs / fruit.
Something similar to:
If it could be slightly less technical than all the Latin names too :). The aim is to easily find out what plants are related such as Cabbage, Broccoli, sprouts, Cauliflower (all Brassicaceae). Currently having to look them all up on Wikipedia which is quite painful.
Another example, but a plant version.
Answer: Basically just search "thing you want" and "phylogeny" and you'll find a million results on Google. For you, I might recommend the Botanist in the Kitchen blog, which has a whole page on the subject and has assembled this phylogeny, including many, many others. It's pretty impressive! | {
"domain": "biology.stackexchange",
"id": 1469,
"tags": "botany, taxonomy, phylogenetics, ethnobiology"
} |
Is there any reason the common housefly continues to return to an area? | Question: This might come off as a really silly question. But I'm wondering (especially in the case of food) if there is any reason a fly would continue to try and sit on top of a piece of food even after swatting it away. I assume (it could be a misconception) that it is instinctive that animals and insects would leave an area if it is harmful / dangerous to their existence after having close encounters more than once. Is this not the same for the fly?
I have this question mainly because I recall waving a fly away several times while eating lunch, and I couldn't understand why the fly wouldn't just find another place where there is food or somewhere safer.
Answer: I don't think it's a silly question, but it is a common error to anthropomorphise animals.
Insects respond to cues which they have evolved to respond to, and this is how they 'make decisions'. They do not have free will or any more complex decision-making process like common sense. This is evident in lots of insect behaviour: flying repeatedly at a closed window; landing on brightly coloured clothes instead of flowers; and returning to a food source when they are in real danger of being swatted!
When a fly senses the food (often by olfactory receptors), they are 'programmed' to fly towards it in response to some chemical they sense depending on the species and food. They may not have adapted a response to swatting, or perhaps the food cue overrides others. In nature swatting is not so much of a threat to a fly. Some animals may brush them away, but since they are not really doing any harm in feeding from another animal's food, they are mostly ignored.
CO2 traps are used to entice and kill mosquitoes. The mosquitoes are attracted to CO2 (as it is expelled by the animals from which they blood-feed); they will only evolve to avoid the traps if there is another cue which they could eventually associate with a negative effect.
Another thing to remember when thinking about insect behaviour is that their life strategy is very different to ours. Insects are more r-selected than humans, meaning that each individual life has not had so much energy put into it as a more K-selected animal (such as humans), and to compensate for this, many more young are produced. This often results in more risks being taken by individuals since there will still be a viable population even after many deaths.
Chapter 4 of 'The Insects' by Gullan & Cranston gives a good introduction to the sensory responses of insect behaviour. There are other books on the subject, 'Introduction to insect behaviour' by Atkins looks like a good starting point, but I have not read it yet. | {
"domain": "biology.stackexchange",
"id": 438,
"tags": "zoology, entomology, ethology, behaviour"
} |
Why are the ozonides of heavier elements more stable? | Question: The ozonides of $\ce{Cs}$, $\ce{Rb}$, $\ce{K}$ are well known and relatively stable, but there is little mention of the ozonides of $\ce{Na}$, $\ce{Li}$. Why is this?
Answer: Potassium and heavier alkali metals ozonides are formed by treating with ozone, or by treating the alkali metal hydroxide with ozone. They are very sensitive explosives that have to be handled at low temperatures in an atmosphere consisting of an inert gas.
Lithium and sodium ozonide are extremely unstable and must be prepared by low-temperature ion exchange starting from $\ce{CsO3}$. Sodium ozonide, $\ce{NaO3}$, which is prone to decomposition into $\ce{NaOH}$ and $\ce{NaO2}$, was previously thought to be impossible to obtain in pure form. However, with the help of cryptands and methylamine, pure $\ce{NaO3}$ may be obtained as red crystals isostructural to $\ce{NaNO2}$.
As pointed out by user2617804, lithium and sodium ozonide are extremely unstable because the ozonide ion is too big for the tiny sodium and lithium ions: it is stretched around them, and the distributed charge on the ozonide ion puts it under large internal stresses.
See wikipedia for more details. | {
"domain": "chemistry.stackexchange",
"id": 5592,
"tags": "inorganic-chemistry, stability, ionic-compounds"
} |
Why, intuitively, are tension forces the same but opposite in a taut rope/string? | Question: I'm having a little trouble understanding why conceptually the two oppose each other with equal magnitudes. Let's say you have two people playing tug of war, with person A on the left much stronger than person B on the right. Person B is getting pulled by the rope because they are not strong enough. Person A is pulling with greater force, so shouldn't the tension force pulling person B to the left be greater than the tension pulling person A to the right? Or, from what I understand about Atwood Machines, they reach some sort of "equilibrium", where the two forces oppose each other just enough so they accelerate with F/m: m being the combined mass and F being the net force.
Answer: I gave a detailed answer to this in here.
A quick answer, though, is if the rope has mass and is accelerating because A is stronger than B, then, yes the tension at A will be greater. If the rope is moving at a constant speed (or zero) then the net force on the rope is zero, and the tension on either end is the same.
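To make this concrete, here is a small numeric sketch of my own (not part of the original answer): applying Newton's second law to the rope by itself gives T_A − T_B = m·a, so the end tensions can differ only when the rope has mass.

```python
def end_tension_difference(m_rope, accel):
    # Newton's second law applied to the rope alone:
    # (tension at A's end) - (tension at B's end) = m_rope * accel
    return m_rope * accel

# A heavy rope accelerating at 5 m/s^2 has unequal end tensions:
print(end_tension_difference(2.0, 5.0))   # 10.0 N difference
# In the massless-rope approximation the difference vanishes,
# no matter how the rope accelerates:
print(end_tension_difference(0.0, 5.0))   # 0.0
```

The numbers (2 kg, 5 m/s²) are arbitrary illustrative values; the point is only that the difference scales with the rope's mass.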
However if the rope is massless, which is an approximation frequently made, then Newton's second law tells us that the net force on the rope is zero regardless of acceleration. In the massless rope approximation, the tension on either end is the same. | {
"domain": "physics.stackexchange",
"id": 25859,
"tags": "homework-and-exercises, newtonian-mechanics, forces, string"
} |
Enabling discard pending changes on DbContext | Question: This code review request is tightly coupled with this SO question; this is the solution I implemented to solve the problem being asked about there.
All my ViewModels get constructor-injected with a Model that itself gets constructor-injected with a DbContext-derived class, and that works well in all cases, except when the View has a command to allow the user to discard pending changes, in which case the DbContext should be disposed and then reinstantiated.
Obviously since the DbContext is created by an IoC container it would be a very bad idea to just dispose and reinstantiate the context, so I came up with a solution involving a factory.
public interface IContextFactory<out TContext> where TContext : DbContext
{
TContext Create();
}
public interface IDiscardModelChanges
{
void DiscardChanges();
}
Now IDiscardModelChanges is implemented by the model, so to facilitate this I've created a base class that will ensure I have a context factory at hand:
public abstract class DiscardableModelBase<TContext> : IDiscardModelChanges, IDisposable
where TContext : DbContext
{
private readonly IContextFactory<TContext> _factory;
protected TContext Context { get; private set; }
protected DiscardableModelBase(IContextFactory<TContext> factory)
{
_factory = factory;
Context = _factory.Create();
}
public virtual void DiscardChanges()
{
Context.Dispose();
Context = _factory.Create();
}
public void Dispose()
{
Context.Dispose();
}
}
The model that needs to control its DbContext then derives from this class:
public class SomeModel : DiscardableModelBase<SomeContext>, ISomeModel
{
public SomeModel(IContextFactory<SomeContext> factory)
: base(factory)
{ }
/* methods that act upon protected Context property */
}
And then the ViewModel that needs to discard pending changes can do it like this (given a private readonly ISomeModel _model):
var model = _model as IDiscardModelChanges;
if (model != null) model.DiscardChanges();
As far as binding conventions are concerned, I'm keeping the existing "Context" convention, and adding a "ContextFactory" one:
_kernel.Bind(t => t.From(_dataLayerAssembly)
.SelectAllClasses()
.Where(type => type.Name.EndsWith("ContextFactory"))
.BindSingleInterface());
Bottom line, as far as ViewModels are concerned, nothing changes; the Model is still injected as an interface that exposes the available model methods, and the IDiscardModelChanges interface is only needed if the ViewModel needs to use it, and the ViewModel can't assume the interface is implemented by the Model.
Any known issues with this approach, any blatant mistake made?
Answer: Somewhat similar but still a bit different approach is to encapsulate your context into some container class and put this wrapper into IoC container instead of context itself. Might look like this:
interface IContextManager<TContext> : IDisposable
{
TContext Context { get; }
void ReloadContext();
event Action ContextChanged;
}
The obvious advantage (which might as well be a disadvantage, depending on context) is that this way you will always have a single instance of DbContext, and once it is reloaded, every model will use the new instance (given they access it via the IContextManager.Context property).
Also IDiscardModelChanges is a bad name for an interface. Interface name should not be a verb. It should either name a property of an object (e.g. IDiscardable), or name object itself (e.g. IDiscardableModel). | {
"domain": "codereview.stackexchange",
"id": 8549,
"tags": "c#, entity-framework, wpf, dependency-injection"
} |
Electric field outside a capacitor | Question: I know that the electric field outside of a capacitor is zero and I know it is easy to calculate using Gauss's law. We create a cylindrical envelope that holds the same amount of charge (of opposite signs) on each plate.
My question is: why can't I pick an envelope which includes only part of one of the plates? Gauss's law states, specifically, that I can pick any envelope I want.
Note: I have encountered this question a couple of years a go and I got an answer which I was not completely happy with.
Answer: Outside two infinite parallel plates with opposite charge the electric field is zero, and that can be proved with Gauss's law using any possible Gaussian surface imaginable. However, it might be extremely hard to show if you don't choose the Gaussian surface in a smart way.
The usual way you'd show that the electric field outside an infinite parallel-plate capacitor is zero, is by using the fact (derived using Gauss's law) that the electric field above an infinite plate, lying in the $xy$-plane for example, is given by
$$
\vec{E}_1=\frac{\sigma}{2\epsilon_0}\hat{k}
$$
where $\sigma$ is the surface charge density of the plate. If you now put another plate with opposite charge, i.e. opposite $\sigma$, some distance below or above the first one, then that contributes its own electric field,
$$
\vec{E}_2=-\frac{\sigma}{2\epsilon_0}\hat{k}
$$
in the region above it. Since the electric field obeys the principle of superposition, the net electric field above both plates is zero. The same happens below both plates, while between the plates the electric field is constant and nonzero.
Your way of doing it is a little more tricky, but again gives the same answer. For example, if you choose the Gaussian surface to have an hourglass shape with different radii for the two sides, then indeed the net charge enclosed is not zero. However, when you calculate the total electric flux through that surface, you have to be careful to realize that there is nonzero electric field between the two plates, and therefore there is a nonzero flux through the part of the Gaussian surface that lies between the plates. That flux, of course, has to be accounted for. Assuming that you know the electric field inside the capacitor, $\vec{E}_\text{inside}$, you can do the integral $\oint\vec{E}_\text{inside}\cdot d\vec{A}$ for such a Gaussian surface (it's not that hard actually), and you find that the flux through the part of the surface that lies between the plates is exactly equal to $q_{\text{enclosed}}/\epsilon_0$. Thus, the net flux through the part of the Gaussian surface that lies outside the plates has to be zero, proving, after a little thought, that the electric field outside the capacitor is zero.
The final answer for $\vec{E}$ never depends on the Gaussian surface used, but the way to get to it always does. That's why the Gaussian surface has to be chosen in a smart way, i.e. in a way that makes the calculation of $\oint\vec{E}\cdot d\vec{A}$ easy. | {
"domain": "physics.stackexchange",
"id": 50358,
"tags": "electrostatics, electric-fields, capacitance, gauss-law"
} |
Voltage drop across capacitors in series, why? | Question: Basically, when you connect more than one capacitor in series, the charge on each capacitor is the same but there is a voltage drop across each capacitor. I have no intuition as to why the voltage drop occurs. Please help me visualize the situation and understand why there is a voltage drop across capacitors.
Answer: Here is a slightly different way of considering two capacitors in series.
Diagram 1 shows an ideal parallel plate capacitor with a potential difference of 5 V across its plates $AA'$ and $BB'$.
The capacitance of this capacitor is $C = \frac Q 5 $
Also shown in red are some equipotential surfaces one example being labelled $DD'$.
If an uncharged, very thin conducting plane is introduced on an equipotential surface then charges are induced on the surface of the conducting plane as shown in diagram 2.
The charge must be induced to ensure that the electric field within the conducting plane is zero.
The introduction of an uncharged, very thin conducting plane does not change anything else.
Now there are two parallel plate capacitors of capacitance $C_1 = \frac Q 2$ and $C_2 = \frac Q 3$
So there you have the voltage drop and zero net charge on plate $DD'$.
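A quick numeric check of my own (using the answer's 2 V + 3 V split across a 5 V stack and an arbitrary charge Q) confirms the reciprocal rule for series capacitances that this construction implies:

```python
Q = 1e-6          # in series, the charge on every section is the same (C)
C  = Q / 5.0      # original capacitor: 5 V across AA'-BB'
C1 = Q / 2.0      # upper section: 2 V across AA'-DD'
C2 = Q / 3.0      # lower section: 3 V across DD'-BB'

# 5/Q = 2/Q + 3/Q, i.e. 1/C = 1/C1 + 1/C2
assert abs(1.0 / C - (1.0 / C1 + 1.0 / C2)) < 1e-3
```

The value of Q is arbitrary; only the ratios of the voltages matter.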
Furthermore $\frac 5 Q = \frac 2 Q + \frac 3 Q \Rightarrow \frac 1 C = \frac 1 C_1 + \frac 1 C_2$. | {
"domain": "physics.stackexchange",
"id": 29595,
"tags": "electrostatics, capacitance, voltage"
} |
Correction of amplitude after zero padding for upsampling purposes | Question: I have a time sequence in which the data is sampled at 0.8 Hz. The data is related to chromatography (chemical analysis), which is why the sampling frequency is relatively low. The instrument cannot sample faster at this moment.
I was exploring the idea of upsampling by zero padding in the frequency domain as follows in MATLAB.
FFT_S=fft(Signal); % FFT of Signal of Interest
FFT_ZP=[FFT_S(1:length(t)/2,1); zeros(1000,1); FFT_S(length(t)/2+1:end,1)]; % Zero padding with 1000 zeros.
Signal_Up=real(ifft(FFT_ZP)); % Upsampled data
The original data consists of 716 points. The upsampled data has 1716 points but the amplitude has reduced - which is undesirable.
Is there a simple multiplying factor to correct the amplitude in MATLAB based on the total number of points before and after upsampling?
EDIT:
Fortunately for this analytical chemistry purpose the trade-offs of zero padding are not relevant. Qualitatively, in order to keep the amplitude the same, I found that if we double the sampling rate, the amplitude after the inverse FFT had to be multiplied by 2; if we triple the sampling rate by zero padding, the amplitude had to be multiplied by 3. Ignoring the trade-offs, there must be a generalized method to correct the final amplitude based on the initial and final numbers of data points?
Thanks.
Answer: If you use a 1/N-normalized DFT, then you shouldn't have any problems with your amplitude when you take the inverse DFT (with no normalization factor, i.e. a factor of 1). Consider the case of a pure tone with a whole number of cycles in the frame. Only one bin pair will be non-zero. No matter how many extra zeroes you insert, the inverse DFT will reconstruct the same signal in the same duration, just appropriately upsampled. The amplitude of the signal will be twice the magnitude of each bin value, independent of your sample count.
If the normalization factors are already hard coded in your routines as "1" forward, "1/N" inverse, you can simply use them in reverse roles. Take the "inverse" of your signal, zero pad it (splitting the Nyquist), then take the "forward".
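Here is a hedged numpy sketch of my own (not from the original answer) showing zero padding with numpy's un-normalized forward FFT, and the rescaling that restores the amplitude:

```python
import numpy as np

N, M = 16, 64                      # original and upsampled lengths
n = np.arange(N)
x = np.sin(2 * np.pi * 3 * n / N)  # 3 cycles per frame, amplitude 1

X = np.fft.fft(x)                  # numpy's un-normalized forward DFT
# Insert zeros at the middle of the spectrum. (For this signal the
# Nyquist bin X[N//2] is zero, so no Nyquist splitting is needed.)
Xp = np.concatenate([X[:N // 2], np.zeros(M - N), X[N // 2:]])

# np.fft.ifft carries the 1/M factor, so rescale by N_new / N_old = M / N:
xp = np.real(np.fft.ifft(Xp)) * (M / N)
```

Every (M // N)-th sample of `xp` reproduces `x` exactly, and the unit amplitude of the original tone is preserved after the rescaling.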
Alternatively, just multiply your final results by a factor of $\frac{N_{new}}{N_{old}}$, which confirms your observations in the update. | {
"domain": "dsp.stackexchange",
"id": 9313,
"tags": "fft, fourier-transform, sampling"
} |
Are there Goldstone bosons in the spontaneous normal to superfluid transition? | Question: The normal-to-superfluid transition of liquid helium breaks a U(1) global symmetry. Since it is a continuous, global symmetry (unlike superconductivity, which is a gauge theory), I expect there to be Goldstone bosons in the theory, like the phonons or magnons which result from the spontaneous breakdown of translational invariance in crystals and of rotational symmetry in ferromagnets, respectively. However, while reading online and in textbooks on statistical mechanics, I rarely encounter Goldstone bosons in the context of the superfluid transition.
Can someone suggest a reference (a condensed matter physics reference, in particular) which mentions about Goldstone modes in superfluids? If my guess is incorrect (i.e., there are no such modes) do correct me. If the presence of such modes is debated and controversial in the condensed matter community, and therefore not a standard textbook material, also let me know.
Answer: The Goldstone boson for a superfluid is the phonon.
See Wikipedia:
A version of Goldstone's theorem also applies to nonrelativistic theories (and also relativistic theories with spontaneously broken spacetime symmetries, such as Lorentz symmetry or conformal symmetry, rotational, or translational invariance).
It essentially states that, for each spontaneously broken symmetry, there corresponds some quasiparticle with no energy gap—the nonrelativistic version of the mass gap. [...] However, two different spontaneously broken generators may now give rise to the same Nambu–Goldstone boson. For example, in a superfluid, both the U(1) particle number symmetry and Galilean symmetry are spontaneously broken. However, the phonon is the Goldstone boson for both.
And also this article:
An example of spontaneous symmetry breaking is the breaking of the global U(1)-symmetry in $^4$He and the appearance of superfluidity below a certain critical temperature. Associated with the breaking of the symmetry, there is a nonzero value of an order parameter and a condensate of particles in the zero-momentum state. When a global continuous symmetry is broken, the Goldstone theorem states that there is a gapless excitation for each generator that does not leave the ground state invariant. In the case of nonrelativistic Bose gases, one identifies the Goldstone mode with the phonons.
Other relevant sources:
A. Schmitt, Introduction to Superfluidity
K. Huang, Statistical Mechanics (chap. 13) | {
"domain": "physics.stackexchange",
"id": 41968,
"tags": "resource-recommendations, phase-transition, symmetry-breaking, superfluidity"
} |
Compressing large jpeg images | Question: I'm working with thousands of large image files in a regularly updated library. The following script does the job (on average reduces my file size ~95%) but costs me around 25 seconds to compress one image. Obviously, I can just let the script run overnight, but it would be cool if I can shave some time off this process. I'm mostly looking for any unnecessary redundancies or overhead in the script that can be trimmed out to speed up the process. I'm still new to Python, so go easy on me.
from PIL import Image
from pathlib import Path
import os, sys
import glob
root_dir = "/.../"
basewidth = 3500
for filename in glob.iglob(root_dir + '*.jpg', recursive=True):
p = Path(filename)
img = p.relative_to(root_dir)
new_name = (root_dir + 'compressed/' + str(img))
print(new_name)
im = Image.open(filename)
wpercent = (basewidth/float(im.size[0]))
hsize = int((float(im.size[1])*float(wpercent)))
im = im.resize((basewidth,hsize), Image.ANTIALIAS)
im.save(new_name, 'JPEG', quality=40)
Answer: I think what greybeard is getting at in the comments is that you could get away with squishing these images much more than you presently are. It sounds like you're basically using the reduced versions as thumbnails, but these "thumbnails" are almost twice the width (over three times the area) of a standard HD monitor.
Dropping basewidth to 1920 (or possibly much lower still) seems like a good idea. Contrary to greybeard, I think your JPEG "quality" setting is fine, but you could play around with different variations of smaller-vs-larger and crisp-vs-compressed.
A minute of googling suggests that PIL is in fact the normal choice for image handling. Maybe there's something better, but I'll suppose not. Given that the originals are large (50MB? more?), it may simply be that there's a lot of work to be done. That said, there are some things you can try.
General hardware: Given that this is a heavy task, you may have some luck just running the same code on a fancier computer. This is an expensive option; even if you feel like throwing money at the problem still do plenty of research first.
Storage hardware: One possible bottleneck is the hard drive you're reading from and writing to. If you can easily try the task against a different (faster or slower) harddrive while keeping everything else the same, that may be informative.
Memory: Does your computer have a lot of RAM? Is that memory available for this python script to use? If you have a way of making half the currently available memory unavailable, check if that makes the process twice as slow.
Memory leaks: I see the line im = Image.open(filename), and looking at the docs suggests that you should probably have a call to load or more likely close someplace.
Parallelism: Your computer probably has multiple processor cores. It's hard to say if or how well this script is using all of those cores. If you try running your script twice at the same time (against non-overlapping target sets), how much slower is each concurrent process?
It's not clear if ANTIALIAS is a valid option to the resize method in the 3.x version. Try NEAREST.
Oh hey, there's a thumbnail function that affects the way the file is read from disk! Playing around with the underlying draft function may also be helpful.
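To sketch the parallelism idea concretely (my own illustration; `compress_one`, `compress_all`, and the worker count are all hypothetical names, and the PIL body is stubbed out so the sketch runs without Pillow installed):

```python
from concurrent.futures import ThreadPoolExecutor

BASEWIDTH = 1920  # a smaller target than the original 3500, per the advice above

def target_size(width, height, basewidth=BASEWIDTH):
    # Same aspect-ratio arithmetic as the script under review.
    wpercent = basewidth / float(width)
    return basewidth, int(float(height) * wpercent)

def compress_one(path):
    # Hypothetical worker: in the real script this body would be the PIL
    # calls (Image.open, resize/thumbnail, save). Stubbed here so the
    # sketch is runnable on its own.
    return path

def compress_all(paths, workers=4):
    # Pillow releases the GIL for much of its decode/encode work, so even
    # threads (not only processes) can overlap work on several images.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(compress_one, paths))
```

Whether threads, processes, or neither actually helps depends on the disk and CPU involved; measuring against your real files is the only way to know.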
And because this is code review:
Move your format and quality constants up to the same place as basewidth. | {
"domain": "codereview.stackexchange",
"id": 37213,
"tags": "python, performance, python-3.x, image, compression"
} |
"Cookie Clicker Alpha" solution | Question: I have been trying to learn Clojure for some time. In my experience, it has been rather too easy to produce write-only code.
Here is a solution to a simple problem with very little essential complexity. The input and output formats are extremely simple, too, which means all complexity in it must be accidental. How can I improve its legibility and intelligibility?
Is the decomposition of the problem into functions all right?
Also other specific problems:
How to input/output numbers? Is there any benefit to use read-string instead of Double/parseDouble? How to format floating point numbers to fixed precision without messing with the default locale?
How to avoid the explicit loop/recur, which is currently a translation of a while loop?
Are there definitions that should/shouldn't have been private/dynamic?
Problem
You start with 0 cookies. You gain cookies at a rate
of 2 cookies per second [...]. Any time you
have at least C cookies, you can buy a cookie farm. Every time you buy
a cookie farm, it costs you C cookies and gives you an extra F cookies
per second.
Once you have X cookies that you haven't spent on farms, you win!
Figure out how long it will take you to win if you use the best
possible strategy.
(ns cookie-clicker
(:use [clojure.string :only [split]])
(:require [clojure.java.io :as io]
[clojure.test :refer :all]))
;;See http://code.google.com/codejam/contest/2974486/dashboard#s=p1
(defn parse-double [s] (java.lang.Double/parseDouble s))
(defn parse-row [line]
(map parse-double (split line #"\s+")))
(defn parse-test-cases [rdr]
(->> rdr
line-seq
rest
(map parse-row)))
(def initial-rate 2.0)
(defn min-time [c f x]
(loop [n 0 ; no of factories used
tc 0 ; time cost of factories built
r initial-rate ; cookie production rate
t (/ x r)] ; total time
(let [n2 (inc n)
tc2 (+ tc (/ c r))
r2 (+ r f)
t2 (+ tc2 (/ x r2))]
(if (> t2 t)
t
(recur n2 tc2 r2 t2)))))
(java.util.Locale/setDefault (java.util.Locale/US))
(defn ans [n t]
(str "Case #" n ": " (format "%.7f" t)))
(defn answers [test-cases]
(map #(ans %1 (apply min-time %2))
(rest (range))
test-cases))
(defn spit-answers [in-file]
(with-open [rdr (io/reader in-file)]
(doseq [answer (answers (parse-test-cases rdr))]
(println answer))))
(defn solve [in-file out-file]
(with-open [w (io/writer out-file :append false)]
(binding [*out* w]
(spit-answers in-file))))
(def ^:dynamic *tolerance* 1e-6)
(defn- within-tolerance [expected actual]
(< (java.lang.Math/abs (- expected actual))
*tolerance*))
(deftest case-3
(is (within-tolerance
63.9680013
(min-time 30.50000 3.14159 1999.19990))))
(defn -main []
(solve "resources/cookie_clicker/B-large-practice.in"
"resources/cookie_clicker/B-large-practice.out"))
Answer: I have to admit, the actual "solving the problem" component of this is a little over my head. But, I thought I'd try to answer your questions and give you some style/structure feedback, for what it's worth :)
You can simplify your ns declaration like this:
(ns cookie-clicker
(:require [clojure.string :refer (split)]
[clojure.java.io :as io]
[clojure.test :refer :all]))
(:require foo :refer (bar) does the same thing as :use foo :only (bar), and is generally considered preferable, especially as an alternative to having both :use and :require in your ns declaration)
I think Double/parseDouble is a good approach to parsing doubles in string form. Integer/parseInt is usually my go-to for doing the same with integers in string form. This is just a hypothesis, but Double/parseDouble might be faster and/or more accurate than read-string because it's optimized for doubles.
FYI, you can leave out the java.lang. and just call it as Double/parseDouble in your code. In light of that, you might consider getting rid of your parse-double function altogether and just using Double/parseDouble whenever you need it. The only thing is that Java methods aren't first-class in Clojure, so you would need to do things like this if you go that route:
(defn parse-row [line]
(map #(Double/parseDouble %) (split line #"\s+")))
(Personally, I still like that better, but you might prefer to keep it wrapped in a function parse-double like you have it. It's up to you!)
I think needing to mess with the locale might be a locale-specific problem... I tried playing around with (format "%.7f" ... without changing my locale and it worked as expected. Granted, I'm in the US :)
I think the legibility issues you're seeing might be related to having too many functions. You might consider condensing and renaming things and see if you like that better. I would re-structure your program so that you parse the data into the data structure at the top, something like this:
(defn parse-test-cases [in-file]
(with-open [rdr (io/reader in-file)]
(let [rows (rest (line-seq rdr))]
(map (fn [row]
(map #(Double/parseDouble %) (split row #"\s+")))
rows))))
(I condensed your functions parse-row, parse-test-cases and half of spit-answers into the function above)
Then define the functions that "do all the work" like min-time, and then, at the end:
(defn spit-answers [answers out-file]
(with-open [w (io/writer out-file :append false)]
(.write w (clojure.string/join "\n" answers))))
(defn -main []
(let [in "resources/cookie_clicker/B-large-practice.in"
out "resources/cookie_clicker/B-large-practice.out"
test-cases (parse-test-cases in)
answers (map-indexed (fn [i [c f x]]
(format "Case #%d: %.7f" (inc i) (min-time c f x)))
test-cases)]
(spit-answers answers out)))
I came up with a few ideas above:
In your answers function you use (map ... (rest (range)) test-cases) in order to number each case, starting from 1. A simpler way to do this is with map-indexed. I used (inc i) for the case numbers, since the index numbering starts at 0.
I condensed (str "Case #" n ": " (format "%.7f" t))) into a single call to format.
I used destructuring over the arguments to the map-indexed function to represent each case as c f x -- that way it's clearer that each test case consists of those three values, and you can represent the calculation as (min-time c f x) instead of (apply min-time test-case).
As for your min-time function, I don't think loop/recur is necessarily a bad thing, and I often tend to rely on it in complicated situations where you're doing more involved work on each iteration, checking conditions, etc. I think it's OK to use it here. But if you want to go a more functional route, you could consider writing a step function and creating a lazy sequence of game states using iterate, like so:
(note: I'm writing step as a letfn binding so that it can use arbitrary values of c, f and x that you feed into a higher-order function that I'm calling step-seq -- this HOF takes values for c, f and x and generates a lazy sequence of game states or "steps.")
(defn step-seq [c f x]
(letfn [(step [{:keys [factories time-cost cookie-rate total-time result]}]
(let [new-time-cost (+ time-cost (/ c cookie-rate))
new-cookie-rate (+ cookie-rate f)
new-total-time (+ new-time-cost (/ x new-cookie-rate))]
{:factories (inc factories)
:time-cost new-time-cost
:cookie-rate new-cookie-rate
:total-time new-total-time
:result (when (> new-total-time total-time) total-time)}))]
(iterate step {:factories 0, :time-cost 0, :cookie-rate 2.0,
:total-time (/ x 2.0), :result nil})))
Now, finding the solution is as simple as grabbing the :result value from the first step that has one:
(defn min-step [c f x]
(some :result (step-seq c f x))) | {
"domain": "codereview.stackexchange",
"id": 7497,
"tags": "clojure"
} |
Geometry of anticommutation relations | Question: I am asking this question as a mathematician trying to understand quantum theory, so please forgive my naivety.
Systems satisfying the canonical commutation relations are naturally modeled with symplectic geometry: for example, in the discrete setting, there is a deep connection between the stabilizer formalism and affine symplectic geometry.
Is there a analogous geometry which naturally models systems satisfying the anticommutation relations?
Answer: The algebra of a finite set of anticommuting $a_k$ and $a^\dagger_k$ is naturally connected with the orthogonal group. In particular the set
$$
\gamma^{2n-1}=\hat a_n^{\dagger}+ \hat a_n, \nonumber\\
\gamma^{2n} = i(\hat a_n^\dagger-\hat a_n),
$$
$n=1,\ldots N$ generates the Clifford algebra ${\rm Cl}_{2N}$. | {
"domain": "physics.stackexchange",
"id": 99584,
"tags": "fermions, geometry, phase-space, anticommutator"
} |