anchor | positive | source |
|---|---|---|
Convert XML to CSV | Question: I'm pretty sure this code can be optimized, but I'm not talented enough in Linq to do it myself. Here's what I'm trying to do: I have an XML file that needs to be converted into a .csv file. The XML looks like this:
<Inventory>
<Item>
<Name>Super Mario Bros</Name>
<Count>14</Count>
<Price>29,99</Price>
<Comment>-No Comment-</Comment>
<Artist>N/A</Artist>
<Publisher>Nintendo</Publisher>
<Genre>Video Games</Genre>
<Year>1985</Year>
<ProductID>001</ProductID>
</Item>
<Item>
<Name>The Legend of Zelda</Name>
<Count>12</Count>
<Price>34,99</Price>
<Comment>-No Comment-</Comment>
<Artist>N/A</Artist>
<Publisher>Nintendo</Publisher>
<Genre>Video Games</Genre>
<Year>1986</Year>
<ProductID>002</ProductID>
</Item>
</Inventory>
(There are many more Items in the list, but they are all the same.)
The code I'm currently using is working as intended, here it is:
public void fileConvert_XMLToCSV() {
//This method converts an xml file into a .csv file
XDocument xDocument = XDocument.Load(FilePath_CSVToXML);
StringBuilder dataToBeWritten = new StringBuilder();
var results = xDocument.Descendants("Item").Select(x => new {
title = (string)x.Element("Name"),
amount = (string)x.Element("Count"),
price = (string)x.Element("Price"),
year = (string)x.Element("Year"),
productID = (string)x.Element("ProductID")
}).ToList();
for (int i = 0; i < results.Count; i++) {
string tempTitle = results[i].title;
string tempAmount = results[i].amount;
string tempPrice = results[i].price;
string tempYear = results[i].year;
string tempID = results[i].productID;
dataToBeWritten.Append(tempYear);
dataToBeWritten.Append(";");
dataToBeWritten.Append(tempTitle);
dataToBeWritten.Append(";");
dataToBeWritten.Append(tempID);
dataToBeWritten.Append(";");
dataToBeWritten.Append(tempAmount);
dataToBeWritten.Append(";");
dataToBeWritten.Append(tempPrice);
dataToBeWritten.Append(";");
dataToBeWritten.Append(0);
dataToBeWritten.Append(";");
dataToBeWritten.Append(0);
dataToBeWritten.Append(Environment.NewLine);
}
Console.WriteLine(dataToBeWritten.ToString());
Console.ReadLine();
var testpath = AppDomain.CurrentDomain.BaseDirectory + @"frMediaShop\test.csv";
File.WriteAllText(testpath, dataToBeWritten.ToString());
}
Running this method outputs a file (test.csv) that looks just like I want it. But the code is repetitive and dull. Please help me optimize it.
Answer: First of all, I'd split the convert method out into its own thing - separate from the loading and saving:
// Load xml
XDocument xDocument = XDocument.Load(FilePath_CSVToXML);
// Convert
string data = Convert(xDocument);
// Do whatever it is you want to do with the results
Console.WriteLine(data);
Console.ReadLine();
var testpath = AppDomain.CurrentDomain.BaseDirectory + @"frMediaShop\test.csv";
File.WriteAllText(testpath, data);
We can simplify the actual conversion by using string interpolation and rolling it all up in a single LINQ statement:
private static string Convert(XDocument xDocument)
{
var data = new StringBuilder();
foreach (var result in xDocument.Descendants("Item").Select(x => new {
title = (string)x.Element("Name"),
amount = (string)x.Element("Count"),
price = (string)x.Element("Price"),
year = (string)x.Element("Year"),
productID = (string)x.Element("ProductID")
}))
{
data.AppendLine($"{result.year};{result.title};{result.productID};{result.amount};{result.price};{0};{0}");
}
return data.ToString();
} | {
"domain": "codereview.stackexchange",
"id": 25099,
"tags": "c#, linq, csv, xml"
} |
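As a cross-check of the conversion logic above, here is a hypothetical Python translation using only the standard library (element names are taken from the question's XML; the two trailing constant "0" columns are kept as in the original C#):

```python
# Hypothetical Python translation of the same XML -> CSV conversion,
# using only the standard library.
import xml.etree.ElementTree as ET

XML = """<Inventory>
  <Item><Name>Super Mario Bros</Name><Count>14</Count><Price>29,99</Price>
        <Year>1985</Year><ProductID>001</ProductID></Item>
  <Item><Name>The Legend of Zelda</Name><Count>12</Count><Price>34,99</Price>
        <Year>1986</Year><ProductID>002</ProductID></Item>
</Inventory>"""

def convert(root):
    """Build one semicolon-separated line per <Item>."""
    lines = []
    for item in root.iter("Item"):
        get = lambda tag: item.findtext(tag, default="")
        lines.append(";".join(
            [get("Year"), get("Name"), get("ProductID"),
             get("Count"), get("Price"), "0", "0"]))
    return "\n".join(lines)

csv_text = convert(ET.fromstring(XML))
print(csv_text)
```

As in the accepted answer, loading, converting, and writing stay separate, so the conversion can be tested on its own.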
Can anyone explain how this code extracts features from the graph? | Question: I have this code from the DGCNN neural network, but I don't understand how it extracts features.
In particular, I understand that we get the top-k nearest points, but I don't understand the idx_base.
import torch

def knn(x, k):
    inner = -2 * torch.matmul(x.transpose(2, 1), x)
    xx = torch.sum(x**2, dim=1, keepdim=True)
    pairwise_distance = -xx - inner - xx.transpose(2, 1)
    idx = pairwise_distance.topk(k=k, dim=-1)[1]  # (batch_size, num_points, k)
    return idx

def get_graph_feature(x, k=20, idx=None):
    batch_size = x.size(0)
    num_points = x.size(2)
    x = x.view(batch_size, -1, num_points)
    if idx is None:
        idx = knn(x, k=k)  # (batch_size, num_points, k)
    device = torch.device('cuda')
    idx_base = torch.arange(0, batch_size, device=device).view(-1, 1, 1) * num_points
    idx = idx + idx_base
    idx = idx.view(-1)
    _, num_dims, _ = x.size()
    x = x.transpose(2, 1).contiguous()  # (batch_size, num_points, num_dims) -> (batch_size*num_points, num_dims)
    feature = x.view(batch_size * num_points, -1)[idx, :]
    feature = feature.view(batch_size, num_points, k, num_dims)
    x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
    feature = torch.cat((feature - x, x), dim=3).permute(0, 3, 1, 2).contiguous()
    return feature
Answer:
I don't understand how it extracts features.
What you show is not really extracting any features, but collecting them:
Compute a distance from each point to each other point in x (idx = knn(x, k))
Collect the features of the k-nearest points = x.view(batch_size*num_points, -1)[idx, :]
Now for each x you have k other vectors, so x needs to be repeated k-times: x.repeat(1, 1, k, 1)
Compute the difference between x and all k nearest points and concat x: feature = torch.cat((feature-x, x), dim=3)
The last step provides you with one feature for each of the k-nearest neighbor points. Your features now have the shape [ batch_size, num_dims, num_points, k ]. You can now apply some neural network to them.
[...] I don't understand the idx_base.
Something like this is common in GNNs and relates to graph batching.
Notice the transpose:
(batch_size, num_points, num_dims) -> (batch_size*num_points, num_dims)
To get an index that works with this shape, you cannot use the raw node indices, you have to offset them using the batch number and the number of points per sample.
This is what idx_base is doing, it offsets the point indices. This DGL-documentation has a nice visualization for what graph-batching means. The same applies here for idx_base. | {
"domain": "ai.stackexchange",
"id": 3494,
"tags": "neural-networks, graph-neural-networks"
} |
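The idx_base offsetting above can be illustrated without torch at all. This is a plain-Python sketch (toy sizes and made-up neighbour indices, my own illustration) of the same gather-after-flattening trick:

```python
# Plain-Python sketch (no torch) of the idx_base trick: after flattening a
# (batch_size, num_points, num_dims) tensor to (batch_size*num_points, num_dims),
# per-sample point indices must be offset by sample_index * num_points.
batch_size, num_points = 2, 3
# two samples, three 1-D points each; the value encodes (sample, point)
x = [[[b * 10 + p] for p in range(num_points)] for b in range(batch_size)]
flat = [row for sample in x for row in sample]   # rows 0..5

# per-sample neighbour indices, each in 0..num_points-1 (as knn would return)
idx = [[1, 2, 0], [2, 0, 1]]

gathered = []
for b in range(batch_size):
    base = b * num_points            # this is idx_base for sample b
    for j in idx[b]:
        gathered.append(flat[base + j])
```

Without the `base` offset, sample 1's indices would wrongly address sample 0's rows in the flattened array.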
How do you implement an INS from an accelerometer and (optionally) gyros and a magnetometer? | Question: I'm building a walking robot that will need to know when it moves forward. I'm using on-board intelligence and I plan on using accelerometers, gyros, and magnetometers (if needed) to be able to detect if the robot moves forward. The problem is, I don't know how to program an inertial navigation system (INS) or an IMU. What software algorithms are needed?
To clarify my problem, I need to know how to program the microcontroller to read the sensors and be able to tell if the robot has displaced itself forward since a previous measurement.
Also, if I used this sensor board (or similar), could I use it to determine the displacement?
Answer: To get relative displacement between two time instants all you need to do is integrate the values given off by the accelerometer (twice for linear displacement) and gyro (once for angular displacement).
Due to measurement errors, which can many times be adequately modeled as Gaussian (you might have to estimate a bias and/or scale factor to the measurement), there will be drift in your estimate (i.e. errors accumulate and your estimate diverges). Because of that, if you plan to use the IMU to obtain position and orientation estimates relative to a fixed frame, you will also have to use more information to correct that estimate. These corrections can be made using a Kalman Filtering approach.
Many people use the accelerometer and magnetometer data to do that, assuming your robot isn't moving too fast (i.e. $g \gg a_{robot}$), there aren't many magnetic field disturbances (i.e. $m_{earth} \gg m_{other\_stuff}$), and both vectors are perpendicular and have fixed orientation with respect to the ground. See, for instance, the TRIAD algorithm.
But then again, back to your question, if what you mean by "robot has displaced itself forward since a previous measurement" is:
Relative to its own body, regardless of its orientation in space, all you need to do is check the sign of the accelerometer output in the forward direction (you might want to set a minimum threshold or perform median filtering due to sensor noise)
Relative to a fixed frame, you have to take everything I talked about into consideration | {
"domain": "robotics.stackexchange",
"id": 60,
"tags": "software, imu, deduced-reckoning, artificial-intelligence"
} |
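As a minimal illustration of "integrate twice for linear displacement", here is a hedged one-axis Python sketch with made-up accelerometer samples; a real INS must also estimate sensor bias and track orientation (e.g. with a Kalman filter), as the answer notes:

```python
# Hedged one-axis dead-reckoning sketch: integrate acceleration twice to get
# displacement. Sample values and dt are made up for illustration.
def integrate(samples, dt):
    """Trapezoidal integration of a uniformly sampled signal; returns a running total."""
    out, total = [0.0], 0.0
    for a, b in zip(samples, samples[1:]):
        total += 0.5 * (a + b) * dt
        out.append(total)
    return out

dt = 0.1                                 # seconds between samples (assumed)
accel = [0.0, 1.0, 1.0, 0.0, 0.0]        # m/s^2 along the forward axis (made up)
velocity = integrate(accel, dt)          # m/s
position = integrate(velocity, dt)       # m
moved_forward = position[-1] > 0.0       # crude "has it displaced forward?" check
```

In practice the integration error grows without bound, which is exactly why the answer recommends fusing in corrections from other sensors.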
Is there a contradiction of the theory of relativity here? -- Length contraction and EMR amplitude | Question: Suppose there is a laser beam powerful enough to burn through iron aimed at a piece of iron. You observe this event while you are in the same frame as the piece of iron and the laser-beam generator. In this frame, there is a certain part of space that you know that the light is traveling through.
Now say that you get in a rocket that travels a few meters away from the laser beam, in a direction perpendicular to it, eventually reaching a constant speed arbitrarily close to the speed of light. As you do so, the region that you knew the light was traveling through contracts in the direction that you are traveling in.
Say that the amplitude that you think that the light has decreases to the point where the light would no longer be carrying enough energy to burn through the piece of iron. If you see the iron stop being burned by the laser, our universe is seriously weird. I don't think this would happen. Would you see the laser continue to burn through the iron even though it does not seem to you to have the energy necessary to do so? Would this mean that, instead of having the length contract, that there is such a thing as absolute distance?
(If we suppose that the amplitude you perceive the laser to have remains constant regardless of what inertial frame you are in, then the laser would appear to have a constant amplitude even as objects around it continued to contract in the direction you are traveling through. This would mean that the laser would have to seem to affect more and more of space as you traveled faster and faster, so it seems it would have to seem to burn through more and more objects as you went faster, which doesn't seem right to me.)
Does this contradict the theory of relativity? Is there an error in here somewhere?
Thanks.
Answer: You mean frequency, not amplitude--- you mean chasing the light until it is too weak to burn through the iron. But then the iron is rushing toward the light, and the relative motion of the iron and the light is what determines the impact energy, and this is unchanged in any frame.
EDIT: Perpendicular motion
Now that you said what you meant--- you meant perpendicular motion. Then the beam is slanting down, and like a flashlight shining at an angle, you assume that it covers more area and is reduced in intensity. This is just not true. The reason a flashlight gets dimmer at an angle is that the same number of photons are hitting more area, because a given angle-spread at the emitter gets turned into more area at a further distance.
The laser beam is just tilted by your motion, not spread in angle, so it hits the exact same area when you are moving; in fact, a smaller area because of the Lorentz contraction. The intensity goes up, not down, but the atoms Lorentz contract just the same, so that the number of photon collisions per atom stays the same. Each collision is physical, so there is no mystery why it should be invariant.
The frequency of the light is also increased by your motion. But the relative motion of the light and the atom is unchanged, as I said above.
EDIT: Amplitude decreasing
The amplitude of light is not a length, and does not extend in physical space. It's an internal thing. A high amplitude wave can extend over a big, or small, area, which is independent of the amplitude. There is no relation between space and "amplitude space". | {
"domain": "physics.stackexchange",
"id": 1798,
"tags": "special-relativity"
} |
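A quick way to make the answer's counting argument quantitative (my notation, not the original answer's): for transverse motion at speed $v$, with $\gamma = 1/\sqrt{1-v^2/c^2}$, the beam's footprint on the iron contracts along the motion to $A' = A/\gamma$, while the surface density of iron atoms grows by the same factor, $n' = \gamma n$. The number of photon hits per atom is therefore unchanged:

$$ \frac{N_\gamma/A'}{n'} = \frac{\gamma N_\gamma/A}{\gamma n} = \frac{N_\gamma/A}{n}. $$

Each collision is a frame-independent physical event, so the burning happens in every frame.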
Magic Formula: Calculation of lateral forces | Question: To obtain the lateral force as a function of slip angle, I use the Pacejka Magic Formula, but my graph's shape is not similar to the picture I added. I cannot find where the mistake is.
clear all
clc
a1y=-22.1;
a2y=1011;
a3y=1078;
a4y=1.82;
a5y=0.208;
a6y=0.00;
a7y=-0.354;
a8y=0.707;
Fz=2;
Sh=-0.28;
Sv=-118;
Cy=1.5;
Dy=a1y*Fz^2+a2y*Fz;
BCDy=a3y*sind(a4y*atand(a5y*Fz));
By=BCDy/(Cy*Dy);
Ey=a6y*Fz^2+a7y*Fz+a8y;
alpha_r=-10:0.01:10;
PCKy=(Dy*sind(Cy*atand(By*rad2deg(alpha_r+Sh) - Ey*(By*rad2deg(alpha_r+Sh) - atand(By*rad2deg(alpha_r+Sh))))))+Sv;
plot(alpha_r,PCKy)
Answer: The offending line was
PCKy=(Dy*sind(Cy*atand(By*rad2deg(alpha_r+Sh) - Ey*(By*rad2deg(alpha_r+Sh) - atand(By*rad2deg(alpha_r+Sh))))))+Sv;
The correct form is without the rad2deg
PCKy=(Dy*sind(Cy*atand(By*(alpha_r+Sh) - Ey*(By*(alpha_r+Sh) - atand(By*(alpha_r+Sh))))))+Sv;
If you just correct that, you should get the expected curve. | {
"domain": "engineering.stackexchange",
"id": 3684,
"tags": "automotive-engineering, wheels"
} |
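For reference, the degree-based Magic Formula the MATLAB code implements (with sind/atand conventions) can be sketched in Python like this; this is my translation for illustration, and any coefficient values come from the a1y..a8y fit in the question:

```python
# Python sketch mirroring MATLAB's degree-based sind/atand Magic Formula.
import math

def pacejka_lateral(alpha, B, C, D, E, Sh=0.0, Sv=0.0):
    """Lateral force for slip angle alpha (degrees); trig done in degrees."""
    x = alpha + Sh                       # note: no rad2deg here -- that was the bug
    arg = B * x - E * (B * x - math.degrees(math.atan(B * x)))
    return D * math.sin(math.radians(C * math.degrees(math.atan(arg)))) + Sv
```

At `alpha = -Sh` the output is exactly `Sv`, and with `Sh = Sv = 0` the curve is odd in the slip angle, which matches the characteristic Magic Formula shape.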
is this a Context free Language : $L=\{W_1W_2 \mid W_1 \ne W_2 \: \text{and} \: |W_1|=|W_2|\}$ | Question: $L=\{W_1W_2 \mid W_1 \ne W_2 \: \text{and} \: |W_1|=|W_2|\}$
Alphabet = { a , b }*
Considering that L={WW} is not context-free, shouldn't this be non-context-free as well? Otherwise, can you provide a machine or grammar which accepts this?
Answer: This is a classical example of a context-free language whose complement is not context-free. It is context-free since every word $w \in L$ has the form
$$ \Sigma^i a \Sigma^j \Sigma^i b \Sigma^j = \Sigma^i a \Sigma^i \Sigma^j b \Sigma^j $$
or the similar form with the locations of $a$ and $b$ switched. | {
"domain": "cs.stackexchange",
"id": 9861,
"tags": "automata, context-free, pushdown-automata"
} |
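For concreteness (this grammar is my addition, not part of the original answer), a standard CFG over $\Sigma=\{a,b\}$ generating $L$ is

$$ S \to AB \mid BA, \qquad A \to XAX \mid a, \qquad B \to XBX \mid b, \qquad X \to a \mid b. $$

Here $A$ derives exactly the odd-length strings with $a$ at the centre, so $AB$ derives $\Sigma^i a \Sigma^i \Sigma^j b \Sigma^j$: the first half $w_1$ carries $a$ at position $i+1$ while the second half $w_2$ carries $b$ at the same position, guaranteeing $w_1 \ne w_2$.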
Black Hole Ripped Apart | Question: Could a black hole be ripped apart if it passed directly between two other black holes that were millions of times bigger?
Answer: First, a black hole between two black holes has two places where two black holes are near each other. This is much like two black holes merging. The two do not rip each other apart.
You have probably seen videos showing two black holes circling each other until - blip - there is one black hole. If not, here is one from CalTech. Two Black Holes Merge into One
Here is a numerical simulation from CalTech that shows the final moment of the first merger detected in 2019. You can see the shape is distorted, but not ripped apart. GW190412: Binary Black Hole Merger | {
"domain": "physics.stackexchange",
"id": 100107,
"tags": "black-holes, cosmological-inflation, tidal-effect"
} |
What is the senior parent chain in the following compound? | Question:
Will the IUPAC name be (2-hydroxymethyl) but-3-ynoic acid (or) 2-ethynyl-3-hydroxy propanoic acid?
Answer: The most important simplified criteria for the choice of a principal chain are:
greater number of suffixes
longest chain
greater number of multiple bonds
lower locants for suffixes
lower locants for multiple bonds
greater number of prefixes
lower locants for prefixes
lower locants for substituents cited first as a prefix in the name
The corresponding wording of the rules taken from Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book) is as follows.
P-44.1 SENIORITY ORDER FOR PARENT STRUCTURES
When there is a choice, the senior parent structure is chosen by applying the following criteria, in order, until a decision is reached. These criteria must always be applied before those applicable to rings and ring systems (see P-44.2) and to chains (see P-44.3). Then criteria applicable to both chains and rings or ring systems given in P-44.4 are considered.
P-44.1.1 The senior parent structure has the maximum number of substituents corresponding to the principal characteristic group (suffix) or senior parent hydride in accord with the seniority of classes (P-41) and the seniority of suffixes (P-43).
(…)
P-44.3.2 The principal chain has the greater number of skeletal atoms [criterion (b) in P-44.3].
(…)
P-44.4.1 If the criteria of P-44.1 through P-44.3, where applicable, do not effect a choice of a senior parent structure, the following criteria are applied successively until there are no alternatives remaining. These criteria are illustrated in P-44.4.1.1 through P-44.4.1.12.
The senior ring, ring system, or principal chain:
(a) has the greater number of multiple bonds (P-44.4.1.1);
(b) has the greater number of double bonds (P-44.4.1.2);
(…)
(h) has the lower locant for an attached group expressed as a suffix (P-44.4.1.8);
(…)
(j) has the lower locant(s) for endings or prefixes that express changes in the level of hydrogenation, i.e., for ‘ene’ and ‘yne’ endings and ‘hydro/dehydro’ prefixes (P-44.4.1.10);
(…)
P-45.2.1 The preferred IUPAC name is based on the senior parent structure that has the maximum number of substituents cited as prefixes (other than ‘hydro/dehydro’) to the parent structure.
P-45.2.2 The preferred IUPAC name is based on the senior parent structure that has the lower locant or set of locants for substituents cited as prefixes (other than ‘hydro/dehydro’) to the parent structure.
P-45.2.3 The preferred IUPAC name is based on the senior parent structure that has the lower locant or set of locants for substituents cited as prefixes to the parent structure (other than ‘hydro/dehydro’ prefixes) in their order of citation in the name.
(…)
You have correctly identified the suffix (“oic acid”) for the principal characteristic group. The next criterion for the principal chain is the greater number of skeletal atoms (i.e. the longest chain). Thus, the principal chain corresponds to the but-3-ynoic acid part and not to the 3-hydroxypropanoic acid part since but-3-ynoic acid has a longer chain than propanoic acid. Therefore, the correct name is 2-(hydroxymethyl)but-3-ynoic acid. | {
"domain": "chemistry.stackexchange",
"id": 9764,
"tags": "organic-chemistry, nomenclature"
} |
Blow the fuse if threshold exceeded | Question: While implementing the Retry & Breaker patterns I decided that the Breaker does more than it should, so I extracted two responsibilities into their own classes. Here they are.
I stripped the Threshold down to plain data:
public class Threshold
{
public Threshold(int count, TimeSpan interval, TimeSpan timeout)
{
Count = count;
Interval = interval;
Timeout = timeout;
}
public int Count { get; }
public TimeSpan Interval { get; }
public TimeSpan Timeout { get; }
public override string ToString()
{
return $"Count = {Count} Interval = {Interval} Timeout = {Timeout}";
}
}
The second responsibility is the object counting the events and checking if the threshold is exceeded. I call it Fuse and implemented it this way:
public class Fuse
{
public Fuse(Threshold threshold)
{
Threshold = threshold;
}
// todo: add null check
public IClock Clock { get; set; } = new SystemClock();
public Threshold Threshold { get; }
public int Count { get; private set; }
public DateTime? Point { get; private set; }
public bool Blown
{
get
{
return
Clock.GetUtcNow() - Point <= Threshold.Interval &&
Count >= Threshold.Count;
}
}
public bool TimedOut
{
get { return (Clock.GetUtcNow() - Point) > Threshold.Timeout; }
}
public Fuse Increase(int value)
{
if (TimedOut) { Reset(); }
Count += value;
Point = Clock.GetUtcNow();
return this;
}
public Fuse Increase()
{
Increase(1);
return this;
}
public Fuse Reset()
{
Count = 0;
Point = null;
return this;
}
public override string ToString()
{
return $"Count = {Count} Point = \"{Point?.ToString(CultureInfo.InvariantCulture)}\" Blown = {Blown} TimedOut = {TimedOut}";
}
}
To be able to better test it I created an abstraction for the DateTime and called it IClock:
public interface IClock
{
DateTime GetNow();
DateTime GetUtcNow();
}
public class SystemClock : IClock
{
public DateTime GetNow() => DateTime.Now;
public DateTime GetUtcNow() => DateTime.UtcNow;
}
public class TestClock : IClock
{
public DateTime Now { get; set; }
public DateTime UtcNow { get; set; }
public DateTime GetNow() => Now;
public DateTime GetUtcNow() => UtcNow;
}
Without the breaker the two new modules are now easier to test:
var threshold = new Threshold(count: 3, interval: TimeSpan.FromSeconds(5), timeout: TimeSpan.FromSeconds(10));
var fuse = new Fuse(threshold) { Clock = new TestClock { UtcNow = new DateTime(2016, 11, 12, 9, 0, 0) } };
fuse.Increase(2).ToString().Dump();
fuse.Increase().ToString().Dump();
(fuse.Clock as TestClock).UtcNow = new DateTime(2016, 11, 12, 9, 0, 0).AddSeconds(20);
fuse.ToString().Dump();
fuse.Increase().ToString().Dump();
Output:
Count = 2 Point = "11/12/2016 09:00:00" Blown = False TimedOut = False
Count = 3 Point = "11/12/2016 09:00:00" Blown = True TimedOut = False
Count = 3 Point = "11/12/2016 09:00:00" Blown = False TimedOut = True
Count = 1 Point = "11/12/2016 09:00:20" Blown = False TimedOut = False
Answer: Your interface IClock exposes the methods GetNow and GetUtcNow.
Your TestClock class implements it, but because you also want to be able to modify the values you end up implementing the properties Now and UtcNow as well.
In my opinion a better alternative would be for IClock to expose read-only properties,
and for your test class to expose the public setters:
public interface IClock
{
DateTime Now { get; }
DateTime UtcNow { get; }
}
public class TestClock : IClock
{
public DateTime Now { get; set; }
public DateTime UtcNow { get; set; }
}
Your Increase method in the Fuse class is hard to get a grasp on.
So... I might be wrong, but you do not want to keep track of the current time every time the Increase method is called.
What this means is that this line Point = Clock.GetUtcNow(); would become
Point = Point ?? Clock.GetUtcNow();
Furthermore, I would consider moving this to some other method.
It could be a wise choice to move it to the constructor; however, if you want better control over exactly which time it relates to, you could have a Start method.
public Fuse Start()
{
//if you want to you can check if Point is null and throw an exception if it's not
Point = Clock.GetUtcNow();
return this;
}
It also seems that your Increase method sometimes decides to do its trick and call the Reset method.
If I were a consumer of an API, I would never guess that a method called Increase would reset something.
In my opinion, one simple thing you can do is have a boolean flag that decides whether Reset should be called "automatically".
Obviously this change would be accompanied by a slightly better-named method.
Unfortunately nothing better than ShortenFuse comes to mind.
public class Fuse
{
public Fuse(Threshold threshold)
{
Threshold = threshold;
AutoReset = true;
}
public bool AutoReset{ get; set; }
public Fuse ShortenFuse(int value)
{
if (TimedOut && AutoReset) { Reset(); }
Count += value;
return this;
}
//...
}
One more thing. In your "tests" you do this:
(fuse.Clock as TestClock).UtcNow = new DateTime(2016, 11, 12, 9, 0, 0).AddSeconds(20);
Wouldn't it be much better to put the seconds in the constructor? | {
"domain": "codereview.stackexchange",
"id": 22896,
"tags": "c#, datetime, timeout"
} |
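The same design, a threshold plus a fuse with an injectable clock for testing, can be sketched in Python; the names and API below are my own, not a translation of the reviewed C# code:

```python
# Python sketch: a fuse counting failures against a threshold, with the clock
# injected as a zero-argument callable so tests can control time.
from datetime import datetime, timedelta

class Fuse:
    def __init__(self, threshold, interval, timeout, clock=datetime.utcnow):
        self.threshold = threshold       # failures allowed within `interval`
        self.interval = interval
        self.timeout = timeout
        self.clock = clock               # any zero-arg callable returning datetime
        self.count = 0
        self.point = None                # time of the last increase

    @property
    def blown(self):
        return (self.point is not None
                and self.clock() - self.point <= self.interval
                and self.count >= self.threshold)

    @property
    def timed_out(self):
        return self.point is not None and self.clock() - self.point > self.timeout

    def increase(self, value=1):
        if self.timed_out:
            self.reset()
        self.count += value
        self.point = self.clock()
        return self

    def reset(self):
        self.count = 0
        self.point = None
        return self

# A mutable list stands in for TestClock: the lambda always reads the current value.
now = [datetime(2016, 11, 12, 9, 0, 0)]
fuse = Fuse(threshold=3, interval=timedelta(seconds=5),
            timeout=timedelta(seconds=10), clock=lambda: now[0])
fuse.increase(2).increase()              # count reaches 3 within the interval
blown_at_three = fuse.blown              # fuse is blown
now[0] += timedelta(seconds=20)          # advance the fake clock past the timeout
was_timed_out = fuse.timed_out
fuse.increase()                          # timed out -> reset, then count 1
```

The closure-based clock achieves what the answer's read-only IClock properties do: production code never sees the setter, while tests can move time freely.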
Scattering of blue wavelength and red wavelength in our atmosphere | Question: I've read that the reason the sky appears blue is that blue wavelengths are scattered by gas molecules, dust particles, etc. Thus, because of this scattering, we are basically being bombarded with blue light the most, from the atmosphere in all directions. During sunset, the blue wavelength is scattered to a maximum and we only see the least-scattered longer wavelengths of yellow and red hues. But if the blue wavelength is scattered to a maximum, then shouldn't our eyes perceive even more blue colour? I know there's a mistake in my reasoning but I just wanted to clear this up in my head. Thank you
Answer: It's not quite correct to say that blue light is scattered to a maximum at sunset and other wavelengths are scattered the least. As far as Rayleigh scattering is concerned shorter wavelengths are always scattered more than the longer wavelengths and this amount stays the same at all hours of the day.
Each particular wavelength of light experiences the same amount of scattering per distance traveled regardless of the time of day, and shorter wavelengths scatter more than longer wavelengths.
But what is different is that at sunset the light must travel through more atmosphere to reach your eye. That means, EVERY wavelength experiences more chances to be scattered, however much each wavelength scatters by. But since the blue scatters more than red, by the time it reaches your eye, so much blue light has been scattered (perpendicular to your line of vision) along the way that almost none is left to actually enter your eye.
Your eye only detects light that actually travels in a line to enter your eye. When the blue light is scattered, it is scattered in all directions. Let's say the 6 sides of a cube, for example. One of those directions is directly away from your eye, four are perpendicular to your line of vision, and only one is actually towards your eye. And each time that light scatters in 6 directions, the scattered light scatters again in 6 directions, over and over again; each time sending more of the light that was initially traveling on a line straight to your eye in directions away from your eye.
"domain": "physics.stackexchange",
"id": 78467,
"tags": "electromagnetic-radiation, visible-light, scattering, atmospheric-science"
} |
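The wavelength dependence behind this is Rayleigh's $1/\lambda^4$ law; a two-line check with typical blue and red wavelengths (values are representative, not exact band centres):

```python
# Rayleigh scattering strength scales as 1/wavelength**4, so shorter (blue)
# wavelengths always scatter more than longer (red) ones, at any time of day.
blue_nm, red_nm = 450.0, 650.0           # typical blue and red wavelengths
ratio = (red_nm / blue_nm) ** 4          # how much more blue scatters than red
print(f"blue scatters ~{ratio:.1f}x more than red")
```

That fixed per-distance ratio, combined with the much longer atmospheric path at sunset, is what depletes the blue before it reaches your eye.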
Autoencoder network for feature selection not converging | Question: I am training an undercomplete autoencoder network for feature selection. I am using one hidden layer each in the encoder and decoder networks. The ELU activation function is used for each layer. For optimization, I am using the Adam optimizer, and to improve convergence I have also introduced learning-rate decay. The model shows good convergence initially, but later starts to generate very large losses (12-digit values) in the same range for several epochs, and stops converging. How can I solve this issue?
Answer: The trick was to normalize the input dataset values with the respective mean and standard deviation in each column. This reduced the loss drastically, and my network is training more efficiently now. Moreover, normalizing the data also helps you calculate the weights associated with each input node more easily, especially when trying to find out variable importance. | {
"domain": "ai.stackexchange",
"id": 1668,
"tags": "autoencoders, learning-rate"
} |
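A minimal sketch of the fix described above, column-wise z-score normalization of the input data (pure Python; the helper name is mine):

```python
# Column-wise z-score normalization: (x - column_mean) / column_std.
def standardize_columns(rows):
    """Return rows with each column rescaled to zero mean and unit std."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [(sum((x - m) ** 2 for x in c) / len(c)) ** 0.5 or 1.0  # guard constant columns
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(row, means, stds)] for row in rows]

normalized = standardize_columns([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
```

In practice the means and stds should be computed on the training split only and reused for validation and test data, so no information leaks between splits.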
Where is the latest document of Erratic Simulation? | Question:
In http://www.ros.org/wiki/simulator_gazebo/Tutorials/TeleopErraticSimulation
I got an error by running : rosmake erratic_gazebo teleop_base
sam@/home/sam/code/ros/gazebo/stl$ rosmake erratic_gazebo teleop_base
[ rosmake ] Packages requested are: ['erratic_gazebo', 'teleop_base']
[ rosmake ] Logging to directory/home/sam/.ros/rosmake/rosmake_output-20110731-113912
[ rosmake ] Expanded args ['erratic_gazebo', 'teleop_base'] to:
[]
[ rosmake ] WARNING: The following args could not be parsed as stacks or packages: ['erratic_gazebo', 'teleop_base']
[ rosmake ] ERROR: No arguments could be parsed into valid package or stack names.
sam@/home/sam/code/ros/gazebo/stl$
Originally posted by sam on ROS Answers with karma: 2570 on 2011-07-30
Post score: 0
Original comments
Comment by sam on 2011-08-01:
Wouldn't it be installed with the original installation of ROS? I used "ROS version-*" to install every package.
Comment by Bram van de Klundert on 2011-08-01:
are you sure you have those packages on your pc and in a location where ros commands can find them?
Answer:
Please see this question. Instead of installing the CTurtle version of erratic_robot stack, install Diamondback version by running sudo apt-get install ros-diamondback-erratic-robot if you are running ROS Diamondback.
Note: I have also updated the linked tutorial to use erratic_robot stack to prevent future confusion.
Originally posted by arebgun with karma: 2121 on 2011-08-01
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 6303,
"tags": "ros, erratic, erratic-gazebo"
} |
Golang Flood Fill | Question: I started learning Go a few months ago and am trying to shake the rust off after a project took me out of the Go world for several weeks. I'd appreciate a code review on this flood fill algorithm I wrote for practice.
Specifically, I'm looking for pointers on any code I've written that is unidiomatic Go, and of course any glaring mistakes that my manual testing didn't catch. Performance is a more minor concern. I'm sure I could tell each node where it came from to improve performance by ~25%, but that's more trouble than it's worth for a little practice code.
package fill

import "sync"

type OrderedPair struct {
x, y int
}
var wg sync.WaitGroup
func FloodFill(graph [][]int, origin OrderedPair) []OrderedPair {
val := graph[origin.y][origin.x]
length := 0
for _, row := range graph {
length += len(row)
}
q := make(chan OrderedPair, length)
q <- origin
seen := make(map[OrderedPair]struct{})
for {
select {
case op := <-q:
if _, found := seen[op]; found {
continue
} else {
seen[op] = struct{}{}
}
wg.Add(4)
for _, mods := range [][]int{{-1, 0}, {1, 0}, {0, -1}, {0, 1}} {
go func(xmod, ymod int) {
newx := op.x + xmod
newy := op.y + ymod
if 0 <= newy && newy < len(graph) && 0 <= newx && newx < len(graph[newy]) {
if graph[newy][newx] == val {
q <- OrderedPair{newx, newy}
}
}
wg.Done()
}(mods[0], mods[1])
}
wg.Wait()
default:
result := make([]OrderedPair, 0, len(seen))
for key := range seen {
result = append(result, key)
}
return result
}
}
}
Particularly I'm not a big fan of the
for {
select {
case DO SOMETHING:
// ...
default:
// wrap up the function and return
}
}
I would much rather my default case break me out of the infinite loop so I handle the cleanup and return on the parent level, but it seems a break in a select statement just breaks the select. Any better way to handle that?
I'd also like to generalize the function if possible. Since I'm not using any sort of fuzzy-match and I'm giving back []OrderedPair regardless, I should be able to operate on a graph [][]interface{} but the mechanics of that aren't clear to me (I can't pass in a concrete [][]int or [][]string anymore. Do I have to go through the reflect package? That seems like a pain, but might be out of scope for CR)
Answer: Have you tried to benchmark your solution against a simpler non-parallel one?
go-routines are really cheap, but that's relative to real threads. Your use of the go-routines at such a granular level, and the heavy use of the channel as a queue, are bound to be causing all sorts of memory contention.
I suspect that if you just process the whole lot in a single routine, with a simple slice, that things will be a whole lot faster.... and simpler.
So, turn q into a make([]OrderedPair, 0, length) (a slice with capacity for possibly everything), and then append flood-candidates to that.
Your seen map should also possibly be a map[OrderedPair]bool instead of map[OrderedPair]struct{}. It makes the logic easier later... instead of:
if _, found := seen[op]; found {
continue
you can instead have:
if seen[op] {
continue
Also, there's no reason to have an 'else' clause to that if. The continue breaks the code block scope, so the else is redundant.
More completely, the following:
if _, found := seen[op]; found {
continue
} else {
seen[op] = struct{}{}
}
should be:
if seen[op] {
continue
}
seen[op] = true
The WaitGroup is also a problem. It should be declared inside the function.... but really, you don't want goroutines here anyway. Promise.
The up-down-left-right slice could also be simplified a bunch too without the routines. No need for the closures and shadow-copies, and so on.
Instead of the seen map, I would consider having a boolean 2-D slice that matches the same dimensions as the input data (screw the memory footprint, worst-case memory footprint is probably less than worst-case memory for the map anyway)
All told, I would reduce the code to something like:
type OrderedPair struct {
x, y int
}
var mods = [...]struct {
x, y int
}{
{-1, 0}, {1, 0}, {0, -1}, {0, 1},
}
func FloodFill(graph [][]int, origin OrderedPair) []OrderedPair {
if origin.y < 0 || origin.y >= len(graph) || origin.x < 0 || origin.x >= len(graph[origin.y]) {
    // origin is not part of the graph!?!?!?!?
    return nil
}
val := graph[origin.y][origin.x]
seen := make([][]bool, len(graph))
for i, row := range graph {
seen[i] = make([]bool, len(row))
}
// let go sort out the appended size.
fill := []OrderedPair{}
// go will shuffle memory too when adding/removing items from q
q := []OrderedPair{origin}
for len(q) > 0 {
// shift the q
op := q[0]
q = q[1:]
if seen[op.y][op.x] {
continue
}
seen[op.y][op.x] = true
fill = append(fill, op)
for _, mod := range mods {
newx := op.x + mod.x
newy := op.y + mod.y
if 0 <= newy && newy < len(graph) && 0 <= newx && newx < len(graph[newy]) {
if graph[newy][newx] == val {
q = append(q, OrderedPair{newx, newy})
}
}
}
}
return fill
} | {
"domain": "codereview.stackexchange",
"id": 19154,
"tags": "go, graphics"
} |
Why is Minkowski spacetime in polar coordinates treated in texts as flat spacetime? | Question: Taking the (3+1)-dimensional Minkowski spacetime line element in General Relativity:
$$ds^2=-c^2dt^2+dx^2+dy^2+dz^2, $$
changing to spherical spatial coordinates leads to:
$$ds^2=-c^2dt^2+dr^2+r^2\left(d\theta^2+\sin^2\theta\,d\phi^2\right).$$
In several books, it is said that this is still flat space-time, for it is only a change of coordinates describing the same geometry. But my big question is: in what sense is this still flat, since the Levi-Civita connection $\Gamma^{\alpha}_{\,\,\beta\lambda}$ for this new space-time is not zero for some components? Is the vanishing of these symbols a necessary condition for flat space-time?
I have not computed the components of the Riemann tensor for the polar-coordinate spacetime yet, but it is easy to see that for Cartesian coordinates they are all zero. If they were nonzero, would the geodesic deviation still vanish? From what I can remember, if the components of the Riemann tensor $R^{\alpha}_{\,\,\beta\mu\nu}$ are all zero, you get zero deviation and you can talk about flat space-time. I also remember that the Ricci scalar $R=0$ if and only if space-time is flat. Am I correct?
Answer: Under a coordinate change, the metric may change form, but it is fundamentally the same manifold you are dealing with, and curvature scalars are diffeomorphism invariants.
While $\Gamma^a_{bc} \neq 0$, Minkowski space in any set of coordinates has $R^a_{bcd} = 0$. To convince yourself without calculating, see a coordinate change as a relabelling of positions. Rather than a grid, you might use a spherical coordinate system, but the points you are labelling on the surface are not being moved. The distance between any two is still the same.
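For readers who do want to calculate: taking just the spatial polar block $ds^2 = dr^2 + r^2\,d\theta^2$, the only nonvanishing Christoffel symbols are

$$\Gamma^{r}_{\,\,\theta\theta} = -r, \qquad \Gamma^{\theta}_{\,\,r\theta} = \Gamma^{\theta}_{\,\,\theta r} = \frac{1}{r},$$

and the single independent Riemann component is

$$R^{r}_{\,\,\theta r\theta} = \partial_r \Gamma^{r}_{\,\,\theta\theta} - \partial_\theta \Gamma^{r}_{\,\,r\theta} + \Gamma^{r}_{\,\,r\lambda}\Gamma^{\lambda}_{\,\,\theta\theta} - \Gamma^{r}_{\,\,\theta\lambda}\Gamma^{\lambda}_{\,\,r\theta} = -1 - 0 + 0 - (-r)\frac{1}{r} = 0,$$

so the nonzero connection coefficients cancel exactly in the curvature, as claimed.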
The notion of curvature has to be independent of any coordinate system, since that is something we impose on the manifold and is not an intrinsic property. | {
"domain": "physics.stackexchange",
"id": 50085,
"tags": "general-relativity, differential-geometry, metric-tensor, tensor-calculus, curvature"
} |
OpenNI_Camera on Pandaboard (Fuerte + Ubuntu 12.04) | Question:
Hi,
As is mentioned in the title, I have a Pandaboard (pandaboard.org) which features a dual core ARMv7 processor. I currently have Ubuntu 12.04 running, as well as have compiled the "full - desktop" version of ROS Fuerte for it (wiki here: www.ros.org/wiki/fuerte/Installation/Ubuntu/Source). This Pandaboard is intended to be the brain of a robot which features a Kinect sensor for SLAM.
I am currently grappling with compiling OpenNI_Camera for my platform. I have installed the latest (unstable) OpenNI drivers as per this guide (www.pansenti.com/wordpress/?page_id=1772). I have not been able to sudo apt-get install any ROS packages on the Pandaboard. Is there a repository I'm missing or something?
Not sure if that was the right move, but that's what I've attempted so far. I realize this is not ros-fuerte-openni-camera as one would sudo apt-get on a normal x86 processor, but I believe it to be a dependency for the openni_camera (ros.org/wiki/openni_camera - fuerte version) package.
I guess my question/problem is that according to sources like this (kinect-with-ros.976505.n3.nabble.com/Ros-kinect-kinect-on-ARM-td2654041.html), people have openni_camera and openni_launch compiled and running on ARM architectures, but I simply cannot find out how to do it. I thought I'd start by compiling its dependencies by hand, but libopenni-sensor-primesense-dev, for example, has no ARM compilation options according to the README (github.com/jspricke/openni-sensor-primesense) as well as a quick peek into the git repository.
Any direction or advice for how to do this would be greatly appreciated! This is for an undergraduate senior design course, for those interested. Apologies for the lack of karma and henceforth, real links - I've been lurking around for an answer on the internet for a couple weeks now.
EDIT 1:
Having followed the directions provided here:(http://www.pansenti.com/wordpress/?page_id=1772) (openNI and kinect drivers) and here:(http://www.novemberkiloecho.com/2012/05/08/how-to-install-opencv-2-4-on-your-pandaboard-an-addendum/) (OpenCV - Hansg91 mentioned it as a dependency, not sure if its necessary) The first link basically outlines what Hansg91 has said below. I have successfully read data from the kinect on the pandaboard, as per the testing listed at the bottom of the first link. This data is not in ROS yet, but will begin to look at Hansg91's source code to see if it can read the data as I have it.
It is worth noting that to get the test to execute successfully I had to change the default USB interface. Hansg91 recommended BULK, however I needed ISO, as is mentioned under the driver installation (UsbInterface=1) in the first link. I plan on continuing to update this post as I make progress. It is still very possible that his/her code needs BULK usb interface instead (UsbInterface=2)
EDIT 2:
Moving on from The OpenNI drivers, we found we would like to use these packages in ccny_rgbd. Its faster than rgbd_slam and doesn't require openni_camera (but still requires the drivers). Unfortunately upon attempting to make this package, it requires PCL. Which is currently only working on the pandaboard in an unstable version, pcl 1.7. Which has been released, but occupies the pcl17 namespace, instead of the pcl namespace as the makefiles are going to reference. So to have this compile we would have to change all of those files in addition to compiling pcl17 and all of its dependencies from source.
We have also acquired an eee pc (first gen) which is indeed much slower, but also allows us to "sudo apt-get install" all of these things. We are advancing on this front as well. These would be used in conjunction with each other, such that the eee pc would do the imaging, and the pandaboard would do basically everything else. We would have a router onboard the robot, so that they can seamlessly communicate via a single roscore running on the (faster) pandaboard.
EDIT 3
Alas, I seem to have succumbed to the ARM gods for the time being. It is simply not worth the time to compile all this right now, as we have several other pressing issues regarding this project, especially when the solution for this problem is to simply put a full computer in the frame to do the image processing. I personally, however, will not be giving up. It's become somewhat of an unhealthy obsession of mine to get this working. So for the time being there may not be updates regarding progress on this front, but hopefully there will be at a later date. Good luck all.
Originally posted by aman501 on ROS Answers with karma: 33 on 2013-02-19
Post score: 3
Original comments
Comment by Claudio on 2013-02-19:
I'll follow your question: I'm using fuerte on a Panda too but didn't even try to compile the whole desktop variant.
Unfortunately there is (at this time) no repository for ROS on ARM.
So no debs.
Comment by aman501 on 2013-02-20:
Yeah, that's what I assumed, hence the installation from source. Thanks.
Comment by Hansg91 on 2013-03-06:
Ah I see you have it working, very good :) You actually don't even need OpenCV, but it makes things a bit easier (and you probably need it sometime anyway). I thought I had changed it to the BULK interface, but I could very well have been mistaken, ISO sounds familiar too ;)
Comment by Ivan Dryanovski on 2013-03-07:
Hello, have you made any progress with ccny_rgbd? I'd be curious to see if it worked on your setup. If you need help recompiling it against a custom pcl, let me know
Answer:
Getting a Kinect to work on a Pandaboard is possible, but I will tell you now that it is computationally very expensive for the Pandaboard to read out the Kinect data. As I said in another question, if you have the Kinect working and sending its data to the Pandaboard, there is little computational power left for SLAM; sending it over WiFi is a bad idea as well. I have not tried to use the Kinect on the Pandaboard with just SLAM, but my experience is that the Pandaboard would get hot rather quickly, scale down its CPU, and then you have one very slow brain.
So to sum up: be careful if you want to do this, with what you let the Pandaboard do as calculations.
I have a Pandaboard with Ubuntu 11.10 and ROS Fuerte installed, indeed from source as none of the packages are available for ARM.
What I did was compile the OpenNI drivers from source, using (if I remember correctly) these steps:
Download the OpenNI source from
https://github.com/OpenNI/OpenNI and
compile it using the ARM compiler and
install it
Download the NITE middleware source (for which I cannot find the link at the moment?) and compile it using ARM and install it
Download the avin2 SensorKinect
driver
(https://github.com/avin2/SensorKinect),
compile it for ARM and install it
I am not sure if the NITE middleware SDK was mandatory or optional, you can try without. Also, I believe the OpenNI had to be set to use the BULK USB interface for ARM devices, otherwise it wouldn't communicate properly.
If you got this far, try one of the OpenNI samples and see if they work. If they do, it is just a matter of letting ROS know OpenNI is installed and where it is installed.
Good luck!
Originally posted by Hansg91 with karma: 1909 on 2013-02-24
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 12970,
"tags": "openni, ccny-rgbd, ubuntu, ros-fuerte, pandaboard"
} |
JavaScript and callback: apply an action on each element of a callback input | Question: I'm quite new to js. I don't even know what to call this problem: applying an action to each element of a callback input, where the elements are actually an output...
I crash a lot on this situation: I have a nested callback where first I load a json array and then I perform an action on each element of this array.
$.getJSON(DATA_URL+'/groups', function( groups ) {
groups.forEach( function(g){
appendDropdownItem(g);
} );
});
I want to do it in one line. Not because I think one line is cool, but because I think that this nested callback only makes the code dirty and affects readability.
In a pythonic approach I imagine somethings like a comprehension:
[appendDropDownItem(g) for g in loadAjax(DATA_URL+'/groups')]
So.. how to make it one liner/more readable?
Answer: First you can realize that function(x){return f(x)} == f, so you can reduce one line there:
$.getJSON(DATA_URL +'/groups', function(groups){
groups.forEach(appendDropdownItem)
})
It is harder to go further with these builtin functions. But if you have a curried forEach, with the arguments in the right order, such as the one in Rambda, or in Essentialjs (shameless plug), then you can reduce it even more:
$.getJSON(DATA_URL +'/groups', forEach(appendDropdownItem))
There you have a readable one-liner.
You can implement your own curried forEach in any case:
var forEach = function(f) {
return function(xs) {
return xs.forEach(f)
}
}
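For comparison, the same curried pattern in Python (an illustrative analogue of the idea, not one of the libraries mentioned above):

```python
def for_each(f):
    """Curried for-each: take the function first, return a consumer of iterables."""
    def apply_to(xs):
        for x in xs:
            f(x)
    return apply_to

# Point-free, one-line usage, analogous to
# $.getJSON(DATA_URL + '/groups', forEach(appendDropdownItem)):
collected = []
for_each(collected.append)(["a", "b", "c"])
```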
But these libraries include many other helpers that provide a more functional workflow that leads to these one-liners often by means of currying and composition. | {
"domain": "codereview.stackexchange",
"id": 9616,
"tags": "javascript, callback"
} |
Why do the $u$ and $d$ quark not have an associated quantum number? | Question: All the other quarks ($c$,$s$,$b$ and $t$) have quantum numbers of charmness, strangeness, bottomness and topness that are conserved in strong interactions.
This allows, among other things, flavour changing neutral currents in $K^0$, $B^0$ and $D^0$ mesons as I discussed in this question but prevents them in pions $\pi^0$.
Is there any physical meaning as to why there are no quantum numbers of 'upness' and 'downness'?
Answer: They do. It's the third component of isospin, which came about before Murray Gell-Mann's quark model. With the definition below, the proton ($uud$) has $I_3 = +\frac{1}{2}$ and the neutron ($udd$) has $I_3 = -\frac{1}{2}$.
\begin{equation}
I_3 = \frac{(N_u-N_\bar{u})-(N_d-N_\bar{d})}{2}
\end{equation} | {
"domain": "physics.stackexchange",
"id": 40508,
"tags": "particle-physics, symmetry, standard-model, quarks, isospin-symmetry"
} |
Does the Newton's law break scale invariance? | Question: Under a scale transformation $$t\rightarrow \bar{t}=\mu t\hspace{0.3cm}\text{and}\hspace{0.3cm}\textbf{r}\to\bar{\textbf{r}}=\lambda\textbf{r},\tag{1}$$ Newton's law take the form $$m\frac{d^2\textbf{r}}{dt^2}=\textbf{F}\Rightarrow m\frac{d^2\bar{\textbf{r}}}{d\bar{t}^2}=\frac{\lambda}{\mu^2}\textbf{F}.\tag{2}$$ which shows that Newton's law is not scale-invariant for a time-independent $\textbf{F}$.
This looks surprising to me because scaling investigates whether the physics is same at all scales (of magnification), and scale invariance is broken/spoiled if there is a built-in length scale or time scale in the problem. Now, Newton's law for a particle of mass $m$ is not scale invariant as I've shown in (2).
What is the reason for this? There is no built-in length scale or time scale in the problem that one can construct from the $\textbf{F}$ and $m$. Therefore, physically it is surprising to me. Does it mean that breakdown of scale invariance has nothing to do with intrinsic length scale or time-scale?
Answer: What you have shown is that Newton's law is not scale-invariant for a force $F(x,\dot{x},t)$ that is scale-invariant, since you implicitly assumed that $F$ transforms as a scalar under the dilation¹. This is kind of a trivial statement: If the l.h.s. of an equation transforms non-trivially and you assume that the r.h.s. transforms trivially, the equation as a whole cannot be in- or covariant.
The point is that it is a priori undetermined how $F$ transforms under such a transformation. It is the precise functional form of $F$ that determines whether or not the equation of motion is invariant under any transformation, in particular the scale transformation.
Your confusion seems to be that you expect "Newtonian mechanics" to exhibit scale symmetry. But symmetries are properties of physical systems, not of physical theoretical frameworks. Since many Newtonian systems have equivalent Lagrangian descriptions in which we can apply Noether's theorem, expecting all Newtonian systems to have scale invariance is patently absurd, since this would expect all of them to have a corresponding conserved quantity. Your "explicit length/time scales" are simply hidden from you because you haven't picked a particular system and therefore an explicit expression for $F$.
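As a concrete illustration (a standard example, not part of the original answer): take the gravitational force $\mathbf{F} = -k\,\mathbf{r}/r^{3}$. Under the transformation (1) it picks up a factor

$$\bar{\mathbf{F}} = -k\,\frac{\bar{\mathbf{r}}}{\bar{r}^{3}} = \frac{1}{\lambda^{2}}\,\mathbf{F},$$

so comparing with (2), the equation of motion is invariant precisely when $\lambda/\mu^{2} = 1/\lambda^{2}$, i.e. $\mu^{2} = \lambda^{3}$. This is the scaling relation behind Kepler's third law, and it shows how the functional form of $\mathbf{F}$, not the framework, decides the symmetry.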
¹ Time-independence is not enough to guarantee that Newton's law is not scale-invariant; consider the force $F = \frac{\dot{r}^2}{r}$ as a counter-example. | {
"domain": "physics.stackexchange",
"id": 37208,
"tags": "newtonian-mechanics, classical-mechanics, symmetry, scale-invariance, scaling"
} |
Axial Equatorial NMR graph difference | Question: I wanted to ask a question about NMR for Axial and Equatorial molecules.
I was asked to describe how to separate the following two product molecules from this reaction:
and the answer that my colleague mentioned was NMR.
He stated that for the product on the left, the $\ce{P}$ atoms were in the same environment (axial), so they would show only one peak; for the right-hand product, the $\ce{P}$ atoms were in different environments (equatorial and axial), hence two different peaks would be shown.
I had a previous introductory course to NMR where coupling constants were discussed, and as the maximum bond coupling "length" was considered to be $3$, this made perfect sense.
My question is, how are axial and equatorial substituents (if they are the same substituent as in this case) in such different environments? Is it to do with the surrounding substituents deshielding by varying amounts? What is the case?
I failed to find a definitive answer on Google or StackExchange, but only statistical interpretations were given, no actual explanations.
Answer: When discussing about same environment and different environment, not only the different position should be noted (axial/equatorial), but also the different electronic effects implied.
In the first case:
In short: the molecule has a plane of symmetry, and the substituents are "mirrored" through the plane.
"Long" explanation: This means that they "see" the same electronic environment: the
equatorial substituents have the same electronic effects on the two
axial substituents. This means that their resonance frequency, is the same.
In the second case:
Short answer: no symmetry exists between the substituents
"Long" answer: every group neighboring the substituents is different. One of the two substituents feels the effects of $\ce{CO}$, the other one feels the effect of -$\ce{O}$-. Except in some unlikely and unlucky cases, this means two different signals.
Note to the reader: shielding, resonance frequencies and NMR behavior arise from (way more) complex interactions: I did not mention, above, any possible effect of non-neighbouring substituents, for instance, nor did I talk about the shielding or deshielding effects of nearby substituents.
Nonetheless, a perception of the "symmetry" of the involved groups might give you a "dirt cheap" and operational insight on what could be the outcome of an NMR experiment, in cases as simple as this. | {
"domain": "chemistry.stackexchange",
"id": 12110,
"tags": "nmr-spectroscopy"
} |
Conservation of energy and work done by a torque | Question: Suppose you let a solid roll down an incline without slipping, from height $h$. My textbook gives the following conservation of energy relation
$$mgh = \frac{1}{2}mv_{cm}^2 + \frac{1}{2}I\omega^2.$$
Why do we not have to include the work done by the static friction (nonconservative force) on the left side? I know it is supposed to do zero work, as there is no motion where it acts, but it is the only force providing a torque and $\omega$ is obviously increasing, so in my opinion it should be doing (rotational) work.
Answer: The solid is assumed to be a rigid body. Friction causes rotation and does do rotational work with respect to the center of mass. But, for no slipping of a rigid body, the net work from friction is zero because the decrease in translational kinetic energy of the center of mass due to friction is exactly matched by the increase in rotational energy with respect to the center of mass due to friction. Said another way, the net work from friction is zero because the point where friction acts is instantaneously at rest in the inertial frame of reference. For a detailed discussion of both of these reasons see Consistent Approach for Calculating Work By Friction for Rigid Body in Planar Motion and Is work done by torque due to friction in pure rolling?. An answer by @Dale in the second reference provides a very simple way to determine whether or not friction does net work; this is a much clearer answer than many confusing answers given elsewhere.
With slipping, the work done by friction is not zero, it is negative; the negative increase in translational kinetic energy is greater in magnitude than the positive increase in rotational energy. Said another way, the point where friction acts is not instantaneously at rest in the inertial frame of reference. And, in the limit with no rotation (a box just sliding) the net work from friction is its most negative.
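A quick numerical check of the energy balance for a uniform solid cylinder (the numbers below are illustrative assumptions, not from the question):

```python
import math

# Solid cylinder rolling without slipping from height h: I = (1/2) m R^2.
m, R, h, g = 2.0, 0.1, 1.5, 9.81       # assumed mass (kg), radius (m), drop (m), gravity
I = 0.5 * m * R**2
beta = I / (m * R**2)                   # rotational share of the inertia (1/2 here)

# From m g h = (1/2) m v^2 + (1/2) I (v/R)^2  =>  v = sqrt(2 g h / (1 + beta))
v = math.sqrt(2 * g * h / (1 + beta))
omega = v / R                           # rolling (no-slip) constraint

translational = 0.5 * m * v**2
rotational = 0.5 * I * omega**2
# The two kinetic terms add back up to m g h: friction moved energy between
# them without removing any, exactly as the answer states.
```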
Note: the total work on the body is that from friction and gravity. For a rigid body there is no increase in the internal energy of the body (no "heating"). (In reality, no body is truly rigid, so heating cannot be ignored.) | {
"domain": "physics.stackexchange",
"id": 88176,
"tags": "newtonian-mechanics, energy, rotational-dynamics, energy-conservation, work"
} |
Trajectory of a particle in a string or a rope that goes under the effect of a wave pulse | Question: What is the trajectory of a particle in a string or a rope that is subjected to a wave pulse?
The illustration in this image is not what I am asking about, I just attached it in order to give you an idea of the case I mean.
If you give me a picture of the trajectory, I'd be grateful. Thanks in advance.
Answer: Since the particle is part of the string and attached to it, it undergoes a time-dependent vertical (transverse) displacement as the pulse passes: it moves up and back down while staying at the same horizontal position.
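A small numerical sketch makes this visible (the Gaussian pulse and all parameters below are assumptions for illustration):

```python
import math

# Travelling transverse pulse y(x, t) = A * exp(-((x - c*t) / w)**2).
A, c, w = 0.05, 2.0, 0.3     # amplitude (m), pulse speed (m/s), pulse width (m)
x0 = 4.0                     # watch one string particle at this fixed position

def displacement(t):
    """Vertical position of the particle at x0 at time t."""
    return A * math.exp(-(((x0 - c * t) / w) ** 2))

# Sample the motion: the particle rises and falls once, moving only vertically;
# the peak arrives when the pulse centre reaches x0, i.e. at t = x0 / c.
times = [i * 0.01 for i in range(401)]
trace = [displacement(t) for t in times]
peak_time = times[trace.index(max(trace))]
```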
Imagine you put a ball on the string and record its shadow while the wave propagates. | {
"domain": "physics.stackexchange",
"id": 34581,
"tags": "newtonian-mechanics, waves, continuum-mechanics, string"
} |
Stone, Paper, Scissors in Python | Question: Recently I started programming with python.
Today I was trying to make a stone, paper, scissors game. After scratching my head for a long time, I finally got working code.
from random import choice
def play_again():
print "Do you want to play again:- Choose 'yay' or 'nay'"
user_again = raw_input('->')
if user_again == 'yay':
SPCa()
elif user_again == 'nay':
print "Ok bye! hope you enjoyed the game. See you soon! :)"
else:
print "Please choose correct option."
play_again()
def SPCa():
computer_choice = choice( ['stone', 'paper', 'scissor'] )
computer_choosed = "Computer choosed %s" % computer_choice
print "Make an choice"
print "Choose stone, paper, scissor"
user_choice = raw_input('->')
if user_choice == computer_choice:
print computer_choosed
print "So it's a tie"
play_again()
elif user_choice == 'stone':
if computer_choice == 'paper':
print computer_choosed
print "So, You loose"
play_again()
elif computer_choice == 'scissor':
print computer_choosed
print "So, Cheers! You won!"
play_again()
elif user_choice == 'paper':
if computer_choice == 'scissor':
print computer_choosed
print "So, you loose."
play_again()
elif computer_choice == 'stone':
print computer_choosed
print "So, Cheers! You won!"
play_again()
elif user_choice == 'scissor':
if computer_choice == 'stone':
print computer_choosed
print "So, you loose."
play_again()
elif computer_choice == 'paper':
print computer_choosed
print "So, Cheers! You won!"
play_again()
else:
print "please choose correct option"
SPCa()
def SPC():
computer_choice = choice( ['stone', 'paper', 'scissor'] )
computer_choosed = "Computer choosed %s" % computer_choice
print "You are playing Stone, Paper, Scissor."
print "Make an choice"
print "Choose stone, paper, scissor"
user_choice = raw_input('->')
if user_choice == computer_choice:
print computer_choosed
print "So it's a tie"
play_again()
elif user_choice == 'stone':
if computer_choice == 'paper':
print computer_choosed
print "So, You loose"
play_again()
elif computer_choice == 'scissor':
print computer_choosed
print "So, Cheers! You won!"
play_again()
elif user_choice == 'paper':
if computer_choice == 'scissor':
print computer_choosed
print "So, you loose."
play_again()
elif computer_choice == 'stone':
print computer_choosed
print "So, Cheers! You won!"
play_again()
elif user_choice == 'scissor':
if computer_choice == 'stone':
print computer_choosed
print "So, you loose."
play_again()
elif computer_choice == 'paper':
print computer_choosed
print "So, Cheers! You won!"
play_again()
else:
print "please choose correct option"
SPCa()
SPC()
But I found my code so repetitive, it has not even a single loop. Please suggest how can I improve my code. What are things that I should learn?
Answer: Little issues
Your indentation seems wrong: too many spaces in a few places, not enough in a few other places. Whitespace matters in Python so this is definitely something you should fix before going any further.
Style
Python has a style guide called PEP 8 which is definitely worth reading and worth following if you do not have good reasons not to. In your case, your usage of whitespace around parentheses and the trailing whitespace, for instance, are not compliant with PEP 8. You'll find tools online to check your code's compliance with PEP 8 in an automated way if you want to. This could also help you to detect and fix your indentation issues.
Don't Repeat Yourself
You've already realised that your code was repeating itself and that is was a bad thing. Let's see how this can be improved.
The only difference between SPCa and SPC is that SPC has an additional line printed at the beginning. It might be easier to replace all the calls to SPC() by a call to print and then a call to SPCa. Once this is done, we can get rid of SPC and maybe rename SPCa to play_game.
At this stage, the code looks like:
from random import choice
def play_again():
print "Do you want to play again:- Choose 'yay' or 'nay'"
user_again = raw_input('->')
if user_again == 'yay':
play_game()
elif user_again == 'nay':
print "Ok bye! hope you enjoyed the game. See you soon! :)"
else:
print "Please choose correct option."
play_again()
def play_game():
computer_choice = choice(['stone', 'paper', 'scissor'])
computer_choosed = "Computer choosed %s" % computer_choice
print "Make an choice"
print "Choose stone, paper, scissor"
user_choice = raw_input('->')
if user_choice == computer_choice:
print computer_choosed
print "So it's a tie"
play_again()
elif user_choice == 'stone':
if computer_choice == 'paper':
print computer_choosed
print "So, You loose"
play_again()
elif computer_choice == 'scissor':
print computer_choosed
print "So, Cheers! You won!"
play_again()
elif user_choice == 'paper':
if computer_choice == 'scissor':
print computer_choosed
print "So, you loose."
play_again()
elif computer_choice == 'stone':
print computer_choosed
print "So, Cheers! You won!"
play_again()
elif user_choice == 'scissor':
if computer_choice == 'stone':
print computer_choosed
print "So, you loose."
play_again()
elif computer_choice == 'paper':
print computer_choosed
print "So, Cheers! You won!"
play_again()
else:
print "please choose correct option"
play_game()
print "You are playing Stone, Paper, Scissor."
play_game()
if main guard
In Python, it is a good habit to move your code actually doing things (by opposition to merely defining things) behind an if __name__ == "__main__": guard. This is useful if you want to reuse the code : you can import the file and get all the benefits from it (the definition of values/functions/classes) without having it performing unwanted actions.
In your code, the end of the script becomes :
if __name__ == "__main__":
# execute only if run as a script
print "You are playing Stone, Paper, Scissor."
play_game()
Now we can get into the actual changes in your code. One of the issues is that you have multiple functions calling each other, which makes things difficult to understand.
All branches in play_game end up calling play_again (except when the option is not correct). It may be easier to call it once, at the end of the function, like this:
else:
print "please choose correct option"
play_game()
return
play_again()
However, an even simpler option would be to check at the beginning that the value is correct. You could define a list with the correct options and use it like this:
from random import choice
game_options = ['stone', 'paper', 'scissor']
def play_again():
print "Do you want to play again:- Choose 'yay' or 'nay'"
user_again = raw_input('->')
if user_again == 'yay':
play_game()
elif user_again == 'nay':
print "Ok bye! hope you enjoyed the game. See you soon! :)"
else:
print "Please choose correct option."
play_again()
def play_game():
computer_choice = choice(game_options)
computer_choosed = "Computer choosed %s" % computer_choice
print "Make an choice"
print "Choose stone, paper, scissor"
user_choice = raw_input('->')
if user_choice not in game_options:
print "please choose correct option"
play_game()
return
if user_choice == computer_choice:
print computer_choosed
print "So it's a tie"
elif user_choice == 'stone':
if computer_choice == 'paper':
print computer_choosed
print "So, You loose"
elif computer_choice == 'scissor':
print computer_choosed
print "So, Cheers! You won!"
elif user_choice == 'paper':
if computer_choice == 'scissor':
print computer_choosed
print "So, you loose."
elif computer_choice == 'stone':
print computer_choosed
print "So, Cheers! You won!"
elif user_choice == 'scissor':
if computer_choice == 'stone':
print computer_choosed
print "So, you loose."
elif computer_choice == 'paper':
print computer_choosed
print "So, Cheers! You won!"
play_again()
if __name__ == "__main__":
# execute only if run as a script
print "You are playing Stone, Paper, Scissor."
play_game()
Also, this may call for a better option. You could define a function asking the user for a value in a list. This function could be used in two places and make your code easier to follow and less repetitive.
from random import choice
game_options = ['stone', 'paper', 'scissor']
def get_user_input_in_list(lst):
    while True:
        user_input = raw_input('->')
        if user_input in lst:
            return user_input
        print "Please choose correct option."
def play_again():
print "Do you want to play again:- Choose 'yay' or 'nay'"
user_again = get_user_input_in_list(['yay', 'nay'])
if user_again == 'yay':
play_game()
elif user_again == 'nay':
print "Ok bye! hope you enjoyed the game. See you soon! :)"
def play_game():
computer_choice = choice(game_options)
computer_choosed = "Computer choosed %s" % computer_choice
print "Make an choice"
print "Choose stone, paper, scissor"
user_choice = get_user_input_in_list(game_options)
if user_choice == computer_choice:
print computer_choosed
print "So it's a tie"
elif user_choice == 'stone':
if computer_choice == 'paper':
print computer_choosed
print "So, You loose"
elif computer_choice == 'scissor':
print computer_choosed
print "So, Cheers! You won!"
elif user_choice == 'paper':
if computer_choice == 'scissor':
print computer_choosed
print "So, you loose."
elif computer_choice == 'stone':
print computer_choosed
print "So, Cheers! You won!"
elif user_choice == 'scissor':
if computer_choice == 'stone':
print computer_choosed
print "So, you loose."
elif computer_choice == 'paper':
print computer_choosed
print "So, Cheers! You won!"
play_again()
if __name__ == "__main__":
# execute only if run as a script
print "You are playing Stone, Paper, Scissor."
play_game()
This is better but we still have play_game calling play_again and play_again calling play_game.
Maybe play_game should be used to play a single game and shouldn't call play_again at all. This can be done by removing the call to play_again in play_game and moving it after the call to play_game in play_again. That way, we'd just have play_again calling itself.
You'd have something like :
from random import choice
game_options = ['stone', 'paper', 'scissor']
def get_user_input_in_list(lst):
    while True:
        user_input = raw_input('->')
        if user_input in lst:
            return user_input
        print "Please choose correct option."
def play_again():
print "Do you want to play again:- Choose 'yay' or 'nay'"
user_again = get_user_input_in_list(['yay', 'nay'])
if user_again == 'yay':
play_game()
play_again()
elif user_again == 'nay':
print "Ok bye! hope you enjoyed the game. See you soon! :)"
def play_game():
computer_choice = choice(game_options)
computer_choosed = "Computer choosed %s" % computer_choice
print "Make an choice"
print "Choose stone, paper, scissor"
user_choice = get_user_input_in_list(game_options)
if user_choice == computer_choice:
print computer_choosed
print "So it's a tie"
elif user_choice == 'stone':
if computer_choice == 'paper':
print computer_choosed
print "So, You loose"
elif computer_choice == 'scissor':
print computer_choosed
print "So, Cheers! You won!"
elif user_choice == 'paper':
if computer_choice == 'scissor':
print computer_choosed
print "So, you loose."
elif computer_choice == 'stone':
print computer_choosed
print "So, Cheers! You won!"
elif user_choice == 'scissor':
if computer_choice == 'stone':
print computer_choosed
print "So, you loose."
elif computer_choice == 'paper':
print computer_choosed
print "So, Cheers! You won!"
if __name__ == "__main__":
# execute only if run as a script
print "You are playing Stone, Paper, Scissor."
play_game()
play_again()
This is a bit better but you can go further in the separation of concerns. It is probably a better option to have play_again return a boolean and have a while loop ensure we call play_game as long as required.
This would look like:
from random import choice

game_options = ['stone', 'paper', 'scissor']

def get_user_input_in_list(lst):
    user_input = raw_input('->')
    while True:
        if user_input in lst:
            return user_input
        else:
            print "Please choose correct option."
            user_input = raw_input('->')

def play_again():
    print "Do you want to play again:- Choose 'yay' or 'nay'"
    user_again = get_user_input_in_list(['yay', 'nay'])
    return user_again == 'yay'

def play_game():
    computer_choice = choice(game_options)
    computer_choosed = "Computer choosed %s" % computer_choice
    print "Make an choice"
    print "Choose stone, paper, scissor"
    user_choice = get_user_input_in_list(game_options)
    if user_choice == computer_choice:
        print computer_choosed
        print "So it's a tie"
    elif user_choice == 'stone':
        if computer_choice == 'paper':
            print computer_choosed
            print "So, You loose"
        elif computer_choice == 'scissor':
            print computer_choosed
            print "So, Cheers! You won!"
    elif user_choice == 'paper':
        if computer_choice == 'scissor':
            print computer_choosed
            print "So, you loose."
        elif computer_choice == 'stone':
            print computer_choosed
            print "So, Cheers! You won!"
    elif user_choice == 'scissor':
        if computer_choice == 'stone':
            print computer_choosed
            print "So, you loose."
        elif computer_choice == 'paper':
            print computer_choosed
            print "So, Cheers! You won!"

if __name__ == "__main__":
    # execute only if run as a script
    print "You are playing Stone, Paper, Scissor."
    while True:
        play_game()
        if not play_again():
            print "Ok bye! hope you enjoyed the game. See you soon! :)"
            break
Now we can get into the internals of play_game.
First, you could get rid of the various ways in which the printing is repeated.
def play_game():
    computer_choice = choice(game_options)
    computer_choosed = "Computer choosed %s" % computer_choice
    print "Make an choice"
    print "Choose stone, paper, scissor"
    user_choice = get_user_input_in_list(game_options)
    print computer_choosed
    if user_choice == computer_choice:
        print "So it's a tie"
    else:
        win = False  # unused value
        if user_choice == 'stone':
            win = (computer_choice == 'scissor')
        elif user_choice == 'paper':
            win = (computer_choice == 'stone')
        elif user_choice == 'scissor':
            win = (computer_choice == 'paper')
        if win:
            print "So, Cheers! You won!"
        else:
            print "So, you loose."
There are various ways to define who wins in a game of paper/scissors/rock. You could define a dictionary mapping the different possible combinations to the winner. I quite like using modulo arithmetic to find the result.
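For comparison, here is a minimal sketch of the dictionary alternative (my own illustration, not part of the reviewed code; it is written with return values instead of print so it works in both Python 2 and 3):

```python
# Sketch of the dictionary approach: map each option to the option it defeats.
BEATS = {'stone': 'scissor', 'paper': 'stone', 'scissor': 'paper'}

def outcome(user_choice, computer_choice):
    """Result from the user's point of view: 'tie', 'win' or 'lose'."""
    if user_choice == computer_choice:
        return 'tie'
    return 'win' if BEATS[user_choice] == computer_choice else 'lose'
```

Keeping the game logic in a pure function like this also makes it trivial to unit-test, independently of the printing.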
Final code looks like:
from random import choice

game_options = ['stone', 'paper', 'scissor']

def get_user_input_in_list(lst):
    user_input = raw_input('->')
    while True:
        if user_input in lst:
            return user_input
        else:
            print "Please choose correct option."
            user_input = raw_input('->')

def play_again():
    print "Do you want to play again:- Choose 'yay' or 'nay'"
    user_again = get_user_input_in_list(['yay', 'nay'])
    return user_again == 'yay'

def play_game():
    computer_choice = choice(game_options)
    computer_choosed = "Computer choosed %s" % computer_choice
    print "Make an choice"
    print "Choose stone, paper, scissor"
    user_choice = get_user_input_in_list(game_options)
    computer_idx = game_options.index(computer_choice)
    user_idx = game_options.index(user_choice)
    result = (computer_idx - user_idx) % 3
    print computer_choosed
    if result == 0:
        print "So it's a tie"
    elif result == 1:
        print "So, you loose."
    else:
        assert result == 2
        print "So, Cheers! You won!"

if __name__ == "__main__":
    # execute only if run as a script
    print "You are playing Stone, Paper, Scissor."
    while True:
        play_game()
        if not play_again():
            print "Ok bye! hope you enjoyed the game. See you soon! :)"
            break | {
"domain": "codereview.stackexchange",
"id": 20972,
"tags": "python, beginner, python-2.x, rock-paper-scissors"
} |
Plane wave focused by lens to a point | Question: I want to mathematically recover the situation in the following picture, that is a plane wave which is incident on a (thin) lens, such that the outgoing beam focuses to a finite spot size at a distance equal to the focal length of the lens.
I have my incident plane wave $e^{ikz}$ and I know that a thin lens causes a phase delay to the wave-front of $e^{-\frac{ik}{2f}(x^2+y^2)}$.
I have plotted it in Mathematica, just looking at the $x$-axis (so $y = 0$); I get a constant curvature in the correct direction but never a focus:
(the horizontal axis is $z$, the direction of propagation)
Just to be clear here I have plotted $$ e^{ikz}\quad \mathrm{for} \quad z<0 $$
and $$ e^{ikz} \exp\left ({-\frac{k x^2}{\lambda f}} \right) \quad \mathrm{for} \quad z>0.$$
I checked for very large $z$ and it never goes to a spot.
Is this because I am using the thin lens approximation?
Answer: The equations you write are not those of a focussing wave (I think you're missing an $i\,\pi$ in your exponent for the $x$ variation for $z>0$). There is no way for the wavefront curvature to change with $z$ in your equations, thus no focussing. You need to model the effect of diffraction on your wavefront.
The easiest way to model diffraction is to assume a Gaussian intensity variation in the input, instead of a plane wave as you have done. You simply have to have the spotsize large enough to model the beamwidth you are dealing with. Then you impart the thin lens phase mask so that the field at $z=0$ which is immediately to the right of the lens output has the $x$ variation:
$$E(x,\,0) = \exp\left(-\frac{x^2}{2\,\sigma^2}\right) \,\exp\left(i\frac{k\, x^2}{2\,f}\right)\tag{1}$$
One can model the effect of diffraction on a field variation like this by taking heed of the following formula for the propagation of a generalized Gaussian beam in a homogeneous medium:
$$E(x,\,z) = \frac{1}{\sqrt{z-z_0 + i\,z_R}}\, \exp\left(-i \,k\, \frac{x^2}{2 \,(z-z_0 + i\,z_R)}\right)\tag{2}$$
where $z_R$ is the Rayleigh length for the beam. So your task is to find $z_0$ and $z_R$ in (2) to match (1) and then you can use (2) to propagate the Gaussian beam.
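As a sketch of that matching (my own working, not part of the original answer): writing $q = -z_0 + i z_R$ and equating the $x^2$ exponents of (1) and (2) at $z=0$ gives $1/q = -1/f - i/(k\sigma^2)$, from which $z_0$ and $z_R$ follow directly:

```python
def match_gaussian(sigma, f, k):
    """Find z0 and z_R so that eq. (2) at z = 0 reproduces eq. (1).

    Writing q = -z0 + 1j*z_R and equating the x**2 exponents of (1)
    and (2) gives -1j*k/(2*q) = -1/(2*sigma**2) + 1j*k/(2*f), i.e.
    1/q = -1/f - 1j/(k*sigma**2).
    """
    q = 1.0 / complex(-1.0 / f, -1.0 / (k * sigma ** 2))
    return -q.real, q.imag  # z0, z_R
```

For $k\sigma^2 \gg f$ the recovered $z_0$ comes out very close to $f$: the beam waist sits essentially at the focal plane, which is exactly the focusing behaviour the question was looking for.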
Note that the above is not the exact scalar diffraction operator; it makes a paraxial approximation that the transverse component $k_x$ of the wavevector is small compared to $k$; alternatively, that the beam's numerical aperture is small (less than about 0.3, depending on the accuracy you need).
Otherwise, you need to calculate the exact diffraction integral, which I outline below.
Full Diffraction Calculation
You begin with the Helmholtz equation in a homogeneous medium $(\nabla^2 + k^2)\psi = 0$. If the field comprises only plane waves in the positive $z$ direction then we can represent the diffraction of any scalar field on any transverse (of the form $z=c$) plane by:
$$\begin{array}{lcl}\psi(x,y,z) &=& \frac{1}{2\pi}\int_{\mathbb{R}^2} \left[\exp\left(i \left(k_x x + k_y y\right)\right) \exp\left(i \left(\sqrt{k^2 - k_x^2-k_y^2}-k\right) z\right)\,\Psi(k_x,k_y)\right]{\rm d} k_x {\rm d} k_y\\
\Psi(k_x,k_y)&=&\frac{1}{2\pi}\int_{\mathbb{R}^2} \exp\left(-i \left(k_x u + k_y v\right)\right)\,\psi(x,y,0)\,{\rm d} u\, {\rm d} v\end{array}$$
To understand this, let's put carefully into words the algorithmic steps encoded in these two equations:
Take the Fourier transform of the scalar field over a transverse plane to express it as a superposition of scalar plane waves $\psi_{k_x,k_y}(x,y,0) = \exp\left(i \left(k_x x + k_y y\right)\right)$ with superposition weights $\Psi(k_x,k_y)$;
Note that plane waves propagating in the $+z$ direction fulfilling the Helmholtz equation vary as $\psi_{k_x,k_y}(x,y,z) = \exp\left(i \left(k_x x + k_y y\right)\right) \exp\left(i \left(\sqrt{k^2 - k_x^2-k_y^2}-k\right) z\right)$;
Propagate each such plane wave from the $z=0$ plane to the general $z$ plane using the plane wave solution noted in step 2;
Inverse Fourier transform the propagated waves to reassemble the field at the general $z$ plane.
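These four steps can be sketched in pure Python for a single transverse dimension (my own illustrative naive-DFT version, not part of the original answer; a practical implementation would use an FFT, as the Mathematica code below does):

```python
import cmath
import math

def propagate_1d(field, d, k, Dx):
    """Angular-spectrum propagation of a 1-D sampled field over distance d.

    A naive O(n^2) DFT is used for clarity; Dx is the width of the
    simulation domain, as in the Mathematica code below.
    """
    n = len(field)
    # Step 1: Fourier transform into plane-wave weights Psi(kx).
    spectrum = [sum(field[m] * cmath.exp(-2j * math.pi * j * m / n)
                    for m in range(n)) / n for j in range(n)]
    out = [0j] * n
    for j, amp in enumerate(spectrum):
        jj = j if j <= n // 2 else j - n      # signed frequency index
        kx = 2.0 * math.pi * jj / Dx
        if kx * kx > k * k:
            continue  # evanescent component: masked out, as in Diffract
        # Steps 2 and 3: each plane wave gains exp(i (sqrt(k^2-kx^2) - k) d).
        phase = cmath.exp(1j * (math.sqrt(k * k - kx * kx) - k) * d)
        # Step 4: inverse transform to reassemble the field at distance d.
        for m in range(n):
            out[m] += amp * phase * cmath.exp(2j * math.pi * j * m / n)
    return out
```

A quick sanity check: a pure plane wave (constant samples) has only the $k_x = 0$ component, whose propagation factor is $e^{i(k-k)d}=1$, so it is unchanged by propagation.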
Here is a short, well tested Mathematica code of mine to implement the above. In the following $f$ is a square array of complex values of the input field, $d$ the axial ($z$) distance we wish to diffract the wave, $k$ the wavenumber and $Dx,\,Dy$ are the widths of the simulation domain in the $x$ and $y$ directions. It is easy to modify this code to cope with one transverse direction.
Diffract[f_, d_, k_, Dx_, Dy_] /;
  If[MatrixQ[f], True, Message[Diffract::nnarg]; False] :=
 Module[{lenX, lenY, phase, mask, jx, jy},
  (
   lenX = Length[f];
   lenY = Length[f[[1]]];
   phase =
    Table[N[(jx - If[jx > lenX/2, lenX, 0])/Dx]^2 +
      N[(jy - If[jy > lenY/2, lenY, 0])/Dy]^2,
     {jx, 0, lenX - 1}, {jy, 0, lenY - 1}];
   phase = k^2 - (4 Pi^2 phase);
   mask =
    Table[If[phase[[jx, jy]] < 0, 0, 1],
     {jx, 1, lenX}, {jy, 1, lenY}];
   phase =
    Table[If[phase[[jx, jy]] < 0, 0,
      d (k - Sqrt[phase[[jx, jy]]])],
     {jx, 1, lenX}, {jy, 1, lenY}];
   Return[InverseFourier[Fourier[f] Exp[-I phase] mask]];
   );]; | {
"domain": "physics.stackexchange",
"id": 46914,
"tags": "optics, geometric-optics, lenses"
} |
An extension to the StringBuilder | Question: This is another pretty basic class I wrote for a library as I hate the way the default StringBuilder in .NET works.
Essentially, I wanted to have the + operator, as well as implicit conversions to strings. (Rather than needing .ToString() all the time.)
It's pretty small and simple, so there may not be a lot to critique.
Also, before you say "just inherit StringBuilder and extend it", it's sealed.
/// <summary>
/// This wraps the .NET <code>StringBuilder</code> in a slightly more easy-to-use format.
/// </summary>
public class ExtendedStringBuilder
{
    private StringBuilder _stringBuilder;

    public string CurrentString => _stringBuilder.ToString();

    public int Length => _stringBuilder.Length;

    public ExtendedStringBuilder()
    {
        _stringBuilder = new StringBuilder();
    }

    public ExtendedStringBuilder(int capacity)
    {
        _stringBuilder = new StringBuilder(capacity);
    }

    public ExtendedStringBuilder Append(string s)
    {
        _stringBuilder.Append(s);
        return this;
    }

    public ExtendedStringBuilder Append(char c)
    {
        _stringBuilder.Append(c);
        return this;
    }

    public ExtendedStringBuilder Append(object o)
    {
        _stringBuilder.Append(o);
        return this;
    }

    public static ExtendedStringBuilder operator +(ExtendedStringBuilder sb, string s) => sb.Append(s);
    public static ExtendedStringBuilder operator +(ExtendedStringBuilder sb, char c) => sb.Append(c);
    public static ExtendedStringBuilder operator +(ExtendedStringBuilder sb, object o) => sb.Append(o);

    public static implicit operator string(ExtendedStringBuilder sb) => sb.CurrentString;

    public override string ToString() => CurrentString;

    public string ToString(int startIndex, int length) => _stringBuilder.ToString(startIndex, length);
}
I didn't implement all the overloads of the .Append method (yet) or the + variants of them.
This can literally be used in the exact same manner as the .NET StringBuilder, or you can use += or + instead of .Append, and you can implicitly convert it to a string.
Answer: A couple of quick comments:
You can use the see tag's cref attribute. If you generate documentation, some tools will generate hyperlinks for you.
/// <summary>
/// This wraps the .NET <see cref="StringBuilder"/> in a slightly easier to use format.
/// </summary>
The length property of a StringBuilder is read and writable. It's also really useful for it to be so:
var sb = new StringBuilder();
foreach (var i in Enumerable.Range(0, 10))
{
    sb.AppendFormat("{0},", i);
}
sb.Length--; // removes the trailing comma.
That's a contrived example which is trivially served with string.Join but setting the length can be useful!
The _stringBuilder field should be readonly.
I'd say CurrentString is superfluous. Just call _stringBuilder.ToString()
I must admit, personally I think the StringBuilder api is really good, I've never needed an implicit conversion to a string or felt the need to + rather than append to them. | {
"domain": "codereview.stackexchange",
"id": 19594,
"tags": "c#, strings, reinventing-the-wheel"
} |
groupGO parameters explanation (ont and level) | Question: I need to use groupGO in clusterProfiler to find functional profile of a list of genes, and I am having trouble finding out what some parameters of the function mean and which I should select for my specific case.
The function is defined as:
groupGO(gene, OrgDb, keyType = "ENTREZID", ont = "CC", level = 2,
readable = FALSE)
I have a mouse gene dataset. What should I select for the ont and level parameters? What do the ont types MF, BP, and CC mean? What is level? Where can I find this info? It is definitely not in the vignette...
Answer: As mentioned in the comments, MF is molecular function, BP is biological process and CC is cellular component. These are the 3 domains of the ontology. The level refers to the level in the ontology graph. In the example in that link, pigmentation would be level 1, pigmentation during development level 2, regulation of pigmentation during development level 3 and so on. You can find more discussion of this in this post on biostars from a year ago.
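To make the notion of "level" concrete, here is a toy sketch (my own illustration, not clusterProfiler code) that walks a term up to its root; note that the real Gene Ontology is a DAG in which a term can have several parents, so tools have to pick a path, whereas this toy uses a single-parent chain:

```python
# Toy single-parent fragment built from the example terms above.
PARENT = {
    'pigmentation during development': 'pigmentation',
    'regulation of pigmentation during development':
        'pigmentation during development',
}

def go_level(term):
    """Depth of a term from its domain root; the root itself is level 1."""
    level = 1
    while term in PARENT:
        term = PARENT[term]
        level += 1
    return level
```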
Regarding the exact settings you should use, I don't personally find CC to be an informative domain. The first couple levels are typically too generic to be useful and they'll include too many genes anyway for you to likely pick up any changes. I'd start around level 3 and see how things go (in other programs you would get all of the levels and domains at once). | {
"domain": "bioinformatics.stackexchange",
"id": 528,
"tags": "rna-seq, groupgo, clusterprofiler"
} |
Getting non-Clifford after performing several Clifford gates in qiskit | Question: I'm trying to test Clifford gates in qiskit according to the table in Fault-tolerant SQ, page 101. I tried 4 Cliffords in the test $$-X/2 - X -X/2,Y/2,X/2 - -X/2,Y/2,-X/2$$
using the following code
import numpy as np
from qiskit import QuantumCircuit, Aer, execute
from qiskit.quantum_info import Operator
qc = QuantumCircuit(1)
qc.rx(-np.pi/2, 0)
qc.rx(np.pi, 0)
qc.rx(-np.pi/2, 0)
qc.ry(np.pi/2, 0)
qc.rx(np.pi/2, 0)
# 4th
qc.rx(-np.pi/2, 0)
qc.ry(np.pi/2, 0)
qc.rx(-np.pi/2, 0)
print('Final matrix:', Operator(qc).data)
qc.draw('mpl')
which gives me the output:
Final matrix: [[ 0. +0.707j -0.707+0.j ]
[ 0.707+0.j 0. -0.707j]]
Next, I found the inverse of this:
np.linalg.inv(Operator(qc).data)
which gives me:
[[ 0. -0.707j 0.707+0.j ]
[-0.707+0.j -0. +0.707j]]
and didn't find the corresponding Clifford in the table as I was expecting.
What do I do wrong in my calculations in qiskit?
Answer: The matrix
$$
M = \frac{1}{\sqrt{2}}\begin{bmatrix}-i & 1\\-1 & i\end{bmatrix}
$$
resembles
$$
X/2 = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & -i\\-i & 1\end{bmatrix}\tag1
$$
where we follow the notation $\pm X/2$ for the $\pm\frac{\pi}{2}$ rotation around the $X$ axis as used in the table B.6 on page 101 in Julian Kelly's PhD thesis. We can make the similarity more apparent by multiplying $M$ by the imaginary unit. We get
$$
M\equiv iM = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & i\\-i & -1\end{bmatrix}\tag2
$$
where $\equiv$ signifies equality up to global phase.
Comparing $(1)$ and $(2)$, we see that up to global phase $M$ differs from $X/2$ only by the relative phase of $\pi$ between the two columns. We can introduce this phase difference using right-multiplication by $Z$. We have
$$
\begin{align}
M &\equiv (X/2) Z \\
M &\equiv (X/2) XY \\
M &\equiv (-X/2) Y \\
M &\equiv Y (X/2)
\end{align}
$$
where we used the identity $(X/2)X\equiv(-X/2)$ which follows from $X^2=I$ and $(-X/2)Y=Y(X/2)$ which follows from the fact that $X$ and $Y$ anti-commute.
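This chain of identities is easy to double-check numerically with plain 2x2 matrix arithmetic (my own sketch; it assumes the standard convention $R_x(\theta)=e^{-i\theta X/2}$ for the $\pm X/2$ matrices):

```python
def matmul(A, B):
    """Product of two 2x2 complex matrices."""
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def equal_up_to_phase(A, B, tol=1e-9):
    """True when A = exp(1j*phi) * B for some real global phase phi."""
    phase = None
    for i in range(2):
        for j in range(2):
            if phase is None and abs(B[i][j]) > tol:
                phase = A[i][j] / B[i][j]
    if phase is None or abs(abs(phase) - 1) > tol:
        return False
    return all(abs(A[i][j] - phase * B[i][j]) < tol
               for i in range(2) for j in range(2))

s = 2 ** -0.5
M          = [[-1j * s, s], [-s, 1j * s]]    # the matrix analysed above
X_half     = [[s, -1j * s], [-1j * s, s]]    # +pi/2 rotation about X
neg_X_half = [[s, 1j * s], [1j * s, s]]      # -pi/2 rotation about X
Y          = [[0, -1j], [1j, 0]]
```

Both $M \equiv Y\,(X/2)$ and $M \equiv (-X/2)\,Y$ check out, with global phase $i$.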
Finally, we find $M\equiv Y(X/2)$ in the third row of the "Hadamard-like" section of the table B.6 and conclude that $M$ is a Clifford gate as expected. | {
"domain": "quantumcomputing.stackexchange",
"id": 3173,
"tags": "qiskit, programming, quantum-gate, circuit-construction, clifford-group"
} |
Did the Sun form around a solid core? | Question: When Jupiter formed I assume like the other planets it started as tiny clumps of matter that eventually came together, became gravitationally bound and then eventually captured a lot of gas. I've also heard it was capable of collecting a lot of solid ice due to its distance from the Sun. Anyway, if Jupiter were larger we might be living in a binary star system. So, my question then becomes, did the Sun have a similar beginning to Jupiter and in what way was it different? Did the Sun form around a solid core?
Answer: Star formation isn't completely answered, but it is well believed that a solid core is not necessary. However if the sun did form around a planetary-sized solid core we would not know the difference. Due to the very high temperature of the sun, the result is not meaningfully different from colliding with planetary bodies early on (which is plausible given the number of planetary body collisions that are invoked to explain the solar system). | {
"domain": "physics.stackexchange",
"id": 23470,
"tags": "astrophysics, sun, stars, stellar-evolution"
} |
Generalisation of pancake sorting with arbitrary flipped slices? | Question: In pancake sort, the primary operation is: flip all pancakes above a given position. What about flipping all pancakes between two given positions? Anybody knows if this has been studied?
To illustrate the problem, here is a quick brute-force, greedy implementation in Python 2.7 (not quite sure it always converges though):
def sort(a):
    entropy = 0
    for i in range(len(a) - 1):
        entropy += abs(a[i] - a[i+1])
    while True:
        max_improvement = 0
        for i in range(len(a) - 1):
            for j in range(i + 1, len(a)):
                improvement = 0
                if i > 0:
                    improvement += abs(a[i] - a[i-1]) - abs(a[j] - a[i-1])
                if j < len(a) - 1:
                    improvement += abs(a[j] - a[j+1]) - abs(a[i] - a[j+1])
                if improvement > max_improvement:
                    max_improvement = improvement
                    (next_i, next_j) = (i, j)
        if max_improvement == 0:
            if a and a[0] > a[-1]:
                a[:] = a[::-1]
                print a
            return
        entropy -= max_improvement
        a[next_i:next_j+1] = a[next_i:next_j+1][::-1]
        print a

a = [7, 1, 3, 8, 6, 0, 4, 9, 2, 5]
print a
sort(a)
Output:
[7, 1, 3, 8, 6, 0, 4, 9, 2, 5]
[7, 6, 8, 3, 1, 0, 4, 9, 2, 5]
[7, 6, 8, 9, 4, 0, 1, 3, 2, 5]
[7, 6, 8, 9, 4, 5, 2, 3, 1, 0]
[9, 8, 6, 7, 4, 5, 2, 3, 1, 0]
[9, 8, 7, 6, 4, 5, 2, 3, 1, 0]
[9, 8, 7, 6, 5, 4, 2, 3, 1, 0]
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Answer: Yes, it has been studied quite a lot. The general problem is called sorting by reversal. It is important as it is related to finding the similarity between genomes of two species that have the same genes, but in different order.
There are two variants of the problem, one where the orientation of the pancake [= gene] also matters. This is modelled using so-called signed permutations. See also the last paragraph and references in the Wikipedia page you quote.
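One elementary tool from this literature is worth sketching (my own illustration, not from the cited sources): counting breakpoints gives a simple lower bound on how many arbitrary-slice reversals any algorithm needs.

```python
def breakpoints(perm):
    """Breakpoints of a permutation of 0..n-1, with sentinels -1 and n.

    A breakpoint is an adjacent pair whose values are not consecutive.
    A single reversal can remove at most two breakpoints, so
    breakpoints(perm) / 2 is a classic lower bound on the number of
    reversals needed to sort perm.
    """
    seq = [-1] + list(perm) + [len(perm)]
    return sum(1 for x, y in zip(seq, seq[1:]) if abs(x - y) != 1)
```

For the example input in the question, every adjacent pair is a breakpoint, so at least six reversals are needed; the greedy run shown above uses eight.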
There is a very precise characterization of the number of reversals needed for signed permutations in terms of the structure of a certain graph. For the unsigned variant, which you use in your question, the problem seems NP-hard (see the Wiki page) and even hard to approximate. | {
"domain": "cs.stackexchange",
"id": 4138,
"tags": "algorithms, sorting"
} |
Why don't we feel the subtle speed change of Earth's elliptical orbit? | Question: Earth's orbit is a slight ellipse, so to conserve momentum its speed increases when it is closest to the Sun. If the speed changes there is an acceleration. If there is an acceleration there is a force. Even if the change is small and gradual, wouldn't we experience a force because the Earth is so massive?
Answer: We don't feel any acceleration because the Earth and all of us humans on it is in free fall around the Sun. We don't feel the centripetal acceleration any more than the astronauts on the ISS feel the acceleration of the ISS towards the Earth.
This happens because of the way general relativity describes motion in gravitational field. The motion of a freely falling object is along a line called a geodesic, which is basically the equivalent of a straight line in curved spacetime. And because the freely falling object is moving in a straight line it experiences no force.
To be a bit more precise about this, the trajectory followed by a freely falling object is given by the geodesic equation:
$$ \frac{\mathrm d^2x^\alpha}{\mathrm d\tau^2} = -\Gamma^\alpha_{\,\,\mu\nu}U^\mu U^\nu \tag{1} $$
Explaining what this means is a bit involved, but actually we don't need the details. All we need to know is that the four-acceleration of a body $\mathbf A$ is given by another equation:
$$ A^\alpha = \frac{\mathrm d^2x^\alpha}{\mathrm d\tau^2} + \Gamma^\alpha_{\,\,\mu\nu}U^\mu U^\nu \tag{2} $$
But if use equation (1) to substitute for $d^2x^\alpha/d\tau^2$ in equation (2) we get:
$$ A^\alpha = -\Gamma^\alpha_{\,\,\mu\nu}U^\mu U^\nu + \Gamma^\alpha_{\,\,\mu\nu}U^\mu U^\nu = 0 $$
So for any freely falling body the four acceleration is automatically zero. The acceleration you feel, the "g force", is the size of the four-acceleration - technically the norm of the four-acceleration or the proper acceleration.
Nothing in this argument has referred to the shape of the orbit. Whether the orbit is hyperbolic, parabolic, elliptical or circular the same conclusion applies. The orbitting observer experiences no acceleration.
You might be interested to read my answer to How can you accelerate without moving?, where I discuss this in a bit more detail. For an even more technical approach see How does "curved space" explain gravitational attraction?. | {
"domain": "physics.stackexchange",
"id": 32479,
"tags": "newtonian-mechanics, newtonian-gravity, angular-momentum, orbital-motion, earth"
} |
Shape Recognition and finding the location | Question:
Hello Everyone,
I have a 3D point cloud of a human head. I attached a few markers (spheres/hemispheres) to the head before scanning it with a laser scanner. I need to find the locations (xyz coordinates) of these markers. I was wondering if anyone could help in identifying these markers through shape/pattern recognition or any other technique so that I get the locations of the markers. Does this software package have the capability of doing this? Thanks in advance.
Thanks
Santosh
Originally posted by santosh on ROS Answers with karma: 11 on 2011-03-22
Post score: 1
Answer:
I haven't looked at the implementation myself but perhaps the shape_detection package from the object_recognition stack is a solution?
Originally posted by KoenBuys with karma: 2314 on 2011-03-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sam on 2012-10-28:
Where is shape_detection doc details? It seems nothing... Thank you~ | {
"domain": "robotics.stackexchange",
"id": 5170,
"tags": "ros, object-recognition, laserscan, pointcloud"
} |
Why do chameleons move back and forth? | Question: I was always curious, why do chameleons have this strange gait?
Answer: The movement you observed serves two functions:
Imitation of leaves to protect against predators
Improved stereoscopic vision while scanning for their own prey
Imitation of leaves (mimesis)
Chameleons in the wild live in trees and are surrounded by leaves. In order to protect themselves from predators they move back and forth to blend in with leaves moving in the wind.
Also note that their torso shape resembles a leaf:
Why do they move like this even when there is no wind and they are not sitting in a tree? I suppose they just follow their instincts. Evolution "did not prepare" them for sitting in a cage. The inability to know when to move and when not to was not an evolutionary disadvantage, so there was no pressure to get rid of it.
Improved stereoscopic vision
Chameleons are predators that use their tongue to catch prey e.g. grasshoppers. While they are looking for their next target, they scan their surroundings with both eyes independently, i.e. one scans the left hemisphere, the other eye scans the right hemisphere, thus during this time they have no stereoscopic vision.
Moving back and forth helps alleviate this. They can estimate the distance to their target better through the use of motion parallax (closer objects seem to move more than objects further away when we move sideways). Once they have found a target they turn in its direction and then use both eyes to focus on it, preparing to shoot out their tongue.
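To make the parallax point concrete, here is a tiny sketch (my own, simple two-position geometry, not from the cited sources): the angular shift of a point seen from two vantage positions a fixed baseline apart grows as the point gets closer.

```python
import math

def parallax_shift(baseline, distance):
    """Angular shift (radians) of a point at `distance` when the viewer
    moves sideways by `baseline` (small-baseline geometry sketch)."""
    return 2.0 * math.atan((baseline / 2.0) / distance)
```

So a swaying head provides depth cues even while each eye is scanning a different hemisphere.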
(Disclaimer: I am not a biologist. This is from my own reasoning and the German Wikipedia article about Chameleons) | {
"domain": "biology.stackexchange",
"id": 7515,
"tags": "zoology, behaviour"
} |
Why aren't resistors equivalent to breaks when $R\to\infty$? | Question:
Find the voltage across each resistor as $R\to\infty$
Kirchoff's voltage law gives
$$10\ \mathrm{V} - V_R-V_R = 0 \implies V_R = 5\ \mathrm{V}$$
However, don't we get two holes in the circuit as the resistances approach infinity ? The dangling wire in the middle is confusing me a lot.
Answer: I think the question is quite interesting actually.
When the resistances are equal, the voltage will divide itself equally because the resistances are connected in series. Then we take each R to infinity, assuming we do this to each resistance in the same manner. I think that the voltage across each resistance remains 5 volts in this case when you take R to infinity.
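A quick numeric sketch of that equal-split argument (my own illustration): the series divider gives $V\,R/(R+R) = V/2$ across each resistor no matter how large $R$ is, so the limit keeps 5 V.

```python
def divider_voltage(v_total, r1, r2):
    """Voltage across r1 in a two-resistor series divider driven by v_total."""
    return v_total * r1 / (r1 + r2)
```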
I think your second drawing is good and shows that you have a good intuition for what is happening. Essentially, when you let R go to infinity you get something like a capacitor! This is what you have drawn. It then also becomes clear, I hope, that there should indeed be 5 volts across each 'capacitor'. | {
"domain": "physics.stackexchange",
"id": 30847,
"tags": "homework-and-exercises, electric-circuits, electric-current, electrical-resistance"
} |
Rotational dynamics | Question: In studying rotational dynamics of a rigid body, I can't seem to understand why you can solve the problem correctly only using certain points in a body and not all? Angular momentum and torque leads to correct answer only in some cases.
Answer: Firstly, the definition of torque is $\vec{r}\times \vec{F}$ and that of angular momentum is $\vec{r}\times \vec{p}$.
Now, with respect to your frame, $\vec{F}$, $\vec{p}$ and $\vec{r}$ are all relative, but Newton's second law of rotation holds in all frames.
This is because all points are just frames, and to maintain the distances in a frame you have to move with that frame. As force and momentum are both relative to your frame, so are torque and angular momentum; but the point is that they will all give the correct angular accelerations, angular velocities and linear velocities relative to that frame, since Newton's laws can be made valid in all frames (by applying a pseudo force in some).
And the answer in your book must be given in absolute terms; you can find the correct answer by then applying Galilean relativity to your frame. | {
"domain": "physics.stackexchange",
"id": 7431,
"tags": "rotational-dynamics"
} |
BMI Calculator in Java | Question: My task:
Body Mass Index (BMI) is a measure of health based on height and weight. It can be calculated by taking your weight in kilograms and dividing it by the square of your height in meters.
Write a java code to let the user enter weight, feet, and inches and interpret the users BMI.
My code:
//import scanner
import java.util.Scanner;
//import Math class
import java.lang.Math;

public class ComputeBMI {

    public static void main(String[] args) {
        // TODO Auto-generated method stub
        //create scanner
        Scanner input = new Scanner(System.in);

        //declare variables
        double weight;
        int feet;
        int inches;

        //prompt user
        System.out.print("Enter weight in pounds: ");
        weight = input.nextFloat();
        System.out.print("Enter feet: ");
        feet = input.nextInt();
        System.out.print("Enter inches: ");
        inches = input.nextInt();

        //convert measurements
        double weightInKilos = weight * 0.453592;
        double heightInMeters = (((feet * 12) + inches) * .0254);
        double bmi = weightInKilos / Math.pow(heightInMeters, 2.0);
        // double bmi = weightInKilos / (heightInMeters * heightInMeters);

        //display output
        System.out.println("Your BMI is: " + bmi);

        //interpret BMI
        if (bmi < 18.5) {
            System.out.print("Underweight");
        }
        else if (bmi >= 18.5 && bmi < 25) {
            System.out.print("Normal");
        }
        else if (bmi >= 25 && bmi < 30) {
            System.out.print("Overweight");
        }
        else if (bmi >= 30) {
            System.out.print("Obese");
        }
        // Do I need this last else if there?
        // else {
        //     System.out.print("");
        // }

        input.close();
    }
}
I used only the material we've been taught thus far to complete my code. My POC is clarity/fluidity of my code and my variable data types. In my mind, feet and inches should be integers and weight should be a double. Valid hypothesis?
Thanks, y'all.
Answer:
You run all the computations in the main method. It's not a good practice. One method should do one focused thing. That's why I'd recommend to create a separate method for "step" of your computation:
converting measurements (something like double poundsToKilograms(double weightInPounds))
computing the BMI given the height and the weight of the user
converting a numeric value of the BMI to a human-readable message
This way, your main method would just read the input, call these methods and print the result. It would make your code more readable and testable (you would be able to test your methods separately)
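As an illustrative sketch of that decomposition (in Python for brevity, since the advice is language-neutral; the names mirror the suggested methods and are my own):

```python
KG_PER_POUND = 0.453592
METERS_PER_INCH = 0.0254

def pounds_to_kilograms(pounds):
    return pounds * KG_PER_POUND

def feet_inches_to_meters(feet, inches):
    return (feet * 12 + inches) * METERS_PER_INCH

def compute_bmi(weight_kg, height_m):
    """BMI = weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def interpret_bmi(bmi):
    """Convert a numeric BMI to a human-readable category."""
    if bmi < 18.5:
        return "Underweight"
    elif bmi < 25:
        return "Normal"
    elif bmi < 30:
        return "Overweight"
    else:
        return "Obese"
```

Each function is now trivially testable on its own, and the input/output code stays in one place.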
The comments should explain what the code does and why it does what it does. They shouldn't describe how it works. That is, it's a good idea to create a doc comment for each method saying what it does, how it behaves if it gets an incorrect input and so on. Conversely, comments like //declare variables or //create scanner actually harm the readability. They just create noise. They don't add anything useful to the code itself. Writing self-documenting code is a good practice (that is, ideally it should be clear what your code does from the code itself).
ComputeBMI doesn't sound like a good class name to me. I'd rather call it a BMICalculator (it's conventional to name classes with nouns and methods with verbs).
The message Enter feet: looks kind of strange. I think it should say that it requests the user's height (it's not clear from the message itself).
It's fine to keep the height as an int, but I would show a message to the user saying that. Otherwise, they may get an unexpected error.
You could also add some kind of input validation and error handling so that your program doesn't fail with an exception (it might be confusing for the user) but rather prints a more suitable message and possibly prompts the user again. | {
"domain": "codereview.stackexchange",
"id": 24806,
"tags": "java, beginner, calculator"
} |
A liquid that forms a surface membrane that prevents evaporation but allows oxygen to pass through | Question: I'm a biologist. I have a water solution with living cells in a tiny well (10 um). I need to keep the solution from evaporating for several days in a warm incubator. I'm looking for a liquid to place on the surface of the solution that will create a surface membrane/film that prevents or slows evaporation but continues to allow oxygen exchange. I welcome any other suggestions for achieving the same result, prevent/slow evaporation while allowing oxygen exchange.
Here are some things that we've tried and some constraints:
Increasing the humidity in the incubator is not sufficient (we've tried)
Lowering the temperature in the incubator is not an option (we want the cells to keep doing what they do)
Sealing the well is not an option (we need the solution to remain oxygenated)
What can be added to the solution is limited to substances we are sure won't react with the cells
Answer: Thank you for the responses and suggestions. It turns out that there exists a product for exactly this purpose. "Light Mineral Oil is a sterile light mineral oil intended for use as an overlay when culturing in reduced volumes of media to prevent evaporation, and to protect the media from changes in osmolality and pH."
https://en.wikipedia.org/wiki/Mineral_oil#Cell_culture | {
"domain": "chemistry.stackexchange",
"id": 11259,
"tags": "chemical-biology"
} |
How to recognize when tf has stopped publishing? | Question:
I use the /tf topic to calculate a parameter with C++. I use what is written in the tf tutorial to listen to the transforms. I use a callback with "ros::spinOnce(); rate.sleep();", so the program stops only when I press Ctrl+C.
My problem is that I need to know when tf is no longer publishing; that is, when the /tf topic stops publishing. Is there any function that checks the existence of /tf and, for example, becomes False when /tf is not published anymore?
Originally posted by Antares on ROS Answers with karma: 27 on 2012-07-12
Post score: 1
Original comments
Comment by Antares on 2012-07-17:
Thanks really cagatay it is working!!
Answer:
hello,
you may check out waitForTransform()
bool tf::TransformListener::waitForTransform (const std::string &target_frame, const std::string &source_frame, const ros::Time &time, const ros::Duration &timeout, const ros::Duration &polling_sleep_duration=ros::Duration(0.01), std::string *error_msg=NULL) const
Test if source_frame can be transformed to target_frame at time time.
The waitForTransform() methods return a bool whether the transform can be evaluated. It will sleep and retry every polling_duration until the duration of timeout has been passed. It will not throw. If you pass a non NULL string pointer it will fill the string error_msg in the case of an error. (Note: That this takes notably more resources to generate the error message.)
or
The canTransform() methods return a bool whether the transform can be evaluated. It will not throw. If you pass a non NULL string pointer it will fill the string error_msg in the case of an error.
bool tf::TransformListener::canTransform (const std::string &target_frame, const std::string &source_frame, const ros::Time &time, std::string *error_msg=NULL) const
Originally posted by cagatay with karma: 1850 on 2012-07-12
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 10172,
"tags": "ros, topic, publish, transform"
} |
Construct PDA for $\{0^m 1^n 0^{2n} \mid n > 0\}$ | Question: I have to construct a PDA for $\{0^m 1^n 0^{2n} \mid n > 0\}$
So my idea is (informally) to not push anything onto the stack while reading the 0s at first; then, when the automaton starts accepting 1s, it should push X onto the stack twice for every 1; and then, when it starts accepting 0s, it should pop X once for every 0 until the stack is empty.
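This informal construction can be checked mechanically. Below is a minimal Python simulation of it; the three phases, the bottom-of-stack handling, and the flag that enforces $n > 0$ are one possible formalization, not the only valid one.

```python
def accepts(w):
    """Simulate a PDA for { 0^m 1^n 0^(2n) | n > 0 }, m >= 0.

    Phase 0: skip leading 0s without touching the stack.
    Phase 1: push two X's for every 1.
    Phase 2: pop one X for every trailing 0.
    Accept iff at least one 1 was read and the stack empties exactly.
    """
    stack, phase, seen_one = [], 0, False
    for c in w:
        if phase == 0 and c == '0':
            continue                      # leading 0s: ignore
        if c == '1' and phase in (0, 1):
            phase, seen_one = 1, True
            stack += ['X', 'X']           # two pushes per 1
        elif c == '0' and phase in (1, 2):
            phase = 2
            if not stack:
                return False              # more trailing 0s than 2n
            stack.pop()                   # one pop per trailing 0
        else:
            return False                  # e.g. a 1 after the trailing 0s
    return seen_one and not stack
```

The `seen_one` flag is the "bit more mechanism" needed to rule out strings with no 1 at all.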
Is my understanding correct? If not, how should I proceed?
Answer: Yes, your understanding is correct. You may need a bit more mechanism to ensure there is at least one 1 after the initial 0's. | {
"domain": "cs.stackexchange",
"id": 12960,
"tags": "pushdown-automata"
} |
Which approach of mine for an algorithm upper bound is correct? | Question: Say we have this algorithm in Python.
def secret(S: list):
n = len(S)
while n > 0:
n = n // 2
for j in range(n):
if j in S:
S.append(j)
return S
I need to find the strictest upper bound (Big-$O$).
I have two approaches to solve this, which give me different solutions and I would appreciate your help deciding which one is correct!
The first approach:
According to the python documentations, the x in s operator is $O(n)$ where $n$ is the length of the list.
(Provided here: https://wiki.python.org/moin/TimeComplexity)
So, in the worst case, in each pass of the for loop we search through $|S|$ elements. Say that $|S| = N$; then in each iteration the length of $S$, in the worst case, grows by $1$, because it does contain $j$ each time, and it adds this $j$.
The first iteration of the while loop: $\text{range}(N/2)$
$$ N + (N+1) + (N+2) + \dots + \left(N + \frac{N}{2}\right) = \frac{N^2}{2} + 1 + 2 + \dots + \frac{N}{2} = \frac{N^2}{2} + \frac{\frac{N}{2}(\frac{N}{2} + 1)}{2}$$
At the second iteration of the while loop: $\text{range}(N/4)$
$$N + \frac{N}{2} + (N + \frac{N}{2} + 1) + (N + \frac{N}{2} + 2) + \dots + (N + \frac{N}{2} + \frac{N}{4}) = \frac{N^2}{4} + \frac{N^2}{8} + 1 + 2 + 3 + \dots + \frac{N}{4} = \frac{N^2}{4} + \frac{N^2}{8} + \frac{\frac{N}{4}(\frac{N}{4} + 1)}{2}$$
The $k$-th iteration is by:
$$I_k = \frac{N^2}{2^k} + \frac{N}{2^{k-1}} \cdot \frac{N}{2^k} + \sum_{i=1}^{N/2^k} i$$
and over and over... exactly $\lg(N)$ times.
$$ \sum_{k=1}^{\lg(N)} I_k$$
Which is (if I am not wrong here): $$O(N^2 \lg (N))$$
The second approach:
The second approach is like the first, but I noticed that while it's true that the x in S operator is $O(|S|)$, it is not $|S|$ (the current size) in this case specifically, as we insert these numbers only if they are already in the array! So each iteration will be at most $O(N)$ and not $O(|S|)$
And so the math gets easier and we do on the first iteration:
$$\overbrace{N + N + N + \dots + N}^{N/2 \text{ times}} = N^2 / 2$$
On the second iteration it would be:
$$\overbrace{N + N + N + \dots + N}^{N/4 \text{ times}} = N^2 / 4$$
and this keeps going of course, $\lg(N)$ times, so we have this sum:
$$ \sum_{k=1}^{\lg(N)} N^2 / 2^k$$
Which at the end goes to be:
$$O(N^2)$$
Which one is correct?
When we recognize the 'live' updating size of $N$ (starting at $N$, then $N+1$, then all the way to $N+N/2$, then $N+N/2+1$ all the way to $N+N/2+N/4$, ...)
Or do we not recognize the live-updating size, as the in operator would surely find the number within at most $N$ iterations? ($N$ is the starting size of $S$.)
Answer: A simple proof of $\Theta(N^2)$ time-complexity
Each iteration of the while loop will cut $n$ in half and, at worst, increase the size of $S$ by $n$. In the $i$-th iteration of the while loop, $n\le\frac{N}{2^i}$.
Hence the line S.append(j) can be executed at most $N/2 + N/4 + N/8+\cdots = N$
times. So, $|S|$ is at most $N + N= 2N$ at any time.
Each append takes $O(1)$ time except possibly when the capacity of $S$ is doubled, at which point the append takes $O(N)$ time, assuming the usual implementation of Python lists. So, the total time that is spent on that line is at most $O(N)$.
Now consider the total time spent on if j in S. Each execution will take at most $|S|$ time. So the total time is no more than
$$ \frac{N}2 (2N) + \frac{N}{2^2}(2N) + \frac{N}{2^3}(2N) + \cdots = 2N^2$$
Hence, $O(N^2)$ is an upper bound.
Consider the first iteration of the while loop, which is basically
for j in range(N//2):
if j in S:
#
If j is not in $S$, if j in S will take $\Theta(|S|)$ time. To make the executions of if j in S as cheap as possible, we can assume all $j$s are in $S$. Since it takes a different number of lookups to find different $j$s, the least total number of lookups needed to execute if j in S for j in range(N//2) is
$$1 + 2 + \cdots + (N//2-1)=\Theta(N^2).$$
Hence, $\Omega(N^2)$ is a lower bound.
So, the time-complexity is $\Theta(N^2)$.
In particular, the strictest upper bound in big $O$-notation is $O(N^2)$.
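The two facts the proof rests on are easy to sanity-check empirically. The sketch below instruments the original function with an assumed cost model for j in S (a linear scan that stops at the first hit, or scans all of len(S) on a miss) and lets you verify that $|S|$ never exceeds $2N$ and that the total scan cost stays within the $2N^2$ bound.

```python
def secret_instrumented(S):
    """The original `secret`, plus a cost counter for `j in S`.

    Cost model (assumed): a linear scan that stops at the first hit,
    or scans all of len(S) on a miss.
    """
    comparisons = 0
    n = len(S)
    while n > 0:
        n = n // 2
        for j in range(n):
            if j in S:
                comparisons += S.index(j) + 1  # scan stopped at first hit
                S.append(j)
            else:
                comparisons += len(S)          # miss: full scan
    return S, comparisons
```

For S = list(range(N)), every lookup hits early, which is the cheap case the lower-bound argument considers; the counts still grow quadratically (doubling N roughly quadruples the total).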
"You can improve the bound in your first approach"
Let us do the math more carefully.
The $k$-th iteration is by:
$$I_k = \frac{N^2}{2^k} + \frac{N}{2^{k-1}} \cdot \frac{N}{2^k} + \sum_{i=1}^{N/2^k} i$$
So, $$I_k = \frac{N^2}{2^k} + \frac{N^2}{2^{k}}\frac1{2^{k-1}} + \frac N{2^k}(\frac N{2^k}+1)/2= \frac{N^2}{2^k}(1 + \frac1{2^{k-1}}+\frac1{2^k}+\frac1N)\lt\frac{N^2}{2^k}(1+1+1+1)=\frac{4N^2}{2^k}$$
So, $$ \sum_{k=1}^{\lg(N)} I_k\le 4N^2\sum_{k=1}^{\infty}\frac1{2^k}=4N^2.$$ | {
"domain": "cs.stackexchange",
"id": 19835,
"tags": "asymptotics, big-o-notation, python"
} |
Is there a threshold on distance/size for tidal locking? | Question: I know that some systems tend to tidal locking (such as Earth-Moon), which occurs basically because the gravitational pull on one side is significantly different from the pull on the opposite side. (There are a lot of great answers here.) But what can be considered a significant difference?
In other words, is there a threshold on distance or size in order for systems to tend to tidal lock?
Thanks!
Answer: You can see an example of tidal locking and atmosphere simulation for a planet closely orbiting a dim star. They show a simulation of the atmosphere and some interesting theories about the movement of gasses due to tidal locking (convection) that occurs between the bright and dark side of the planet. The link goes directly to the discussion of tidal locking:
KEPLER 186F - LIFE AFTER EARTH
A similar question was posted in the physics forum and it looks like that could be close to the answer you are looking for:
Tidal Lock Radius in Habitable Zones
I am also interested in tidal locking and hope someone may post a more concise answer. | {
"domain": "astronomy.stackexchange",
"id": 832,
"tags": "the-moon, orbit, gravity"
} |
Differentiation of a vector with respect to a vector | Question: Does differentiation of a vector with respect to a vector make any sense? Even if it makes sense, how does it make any physical meaning? I mean what is the physical interpretation?
Answer: Well, a good example is thinking in terms of components. In several areas of physics, the math gets more intuitive when you think in terms of components of the vectors. So, instead of writing the vector $\mathbf r$ for the position of a particle, you write $x^i$ as the $i$-th component of a vector. The $i$ on top indicates a contravariant vector, as opposed to the $i$-th component of a covariant vector: $x_i$. In Euclidean geometry, those differences are irrelevant, so let's forget about them. Thus, I am going to always use lower indices.
Say you have a scalar function $\phi$, dependent on position: $\phi(\mathbf r(t))$. In component notation: $\phi(x_i(t))$. Its time derivative:
$$
\frac{d\phi}{dt} = \sum_i \frac{\partial\phi}{\partial x_i} \frac{d x_i}{dt}.
$$
So, transforming it to vector notation, how would one write this? Yes: using "vector division", since $x_i$ represents a component of a vector:
$$
\frac{d\phi}{dt} = \frac{d \phi}{d \mathbf r} \frac{d\mathbf r}{dt}.
$$
Now is a simple chain rule. On this small example, the derivative of the scalar function with respect to a vector, would be what you call gradient:
$$
\frac{d \phi}{d \mathbf r} = \nabla\phi \quad\Longrightarrow\quad
\frac{d\phi}{dt} = \nabla\phi\cdot\frac{d\mathbf r}{dt}.
$$
Similarly, instead of a scalar field, suppose it was a vector field $\mathbf E = \mathbf E(\mathbf r(t))$, say, an electric field. We can use component-notation: $E_i = E_i(x_k(t))$. So, the time derivative:
$$
\frac{dE_i}{dt} = \sum_k \frac{\partial E_i}{\partial x_k} \frac{d x_k}{dt} \quad\Longrightarrow\quad
\frac{d\mathbf E}{dt} = \frac{d\mathbf E}{d\mathbf r} \frac{d\mathbf r}{dt}
$$
That one is a little bit more tricky, but the component-notation makes it clear: It has two ranks instead of one. Yes, a matrix! Lets call it matrix $J$, and write it in component-notation $J_{ik}$:
$$
J_{ik} = \frac{\partial E_i}{\partial x_k} = \left(\frac{d\mathbf E}{d\mathbf r}\right)_{ik}
$$
That matrix is called the Jacobian Matrix. So, it makes sense to "differentiate" by vectors, if you look at the component-notation.
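A quick numerical check of this chain rule makes it tangible. The field $\mathbf E$ and the path $\mathbf r(t)$ below are made up purely for illustration: the Jacobian is built by finite differences, and $J \, d\mathbf r/dt$ is compared against a direct numerical derivative of $\mathbf E(\mathbf r(t))$.

```python
import math

def E(r):                      # a made-up 2-D vector field
    x, y = r
    return [x * y, math.sin(x)]

def r_of_t(t):                 # a made-up path through space
    return [t * t, t]

def jacobian(f, r, h=1e-6):
    """J[i][k] = dE_i/dx_k by central finite differences."""
    m = len(f(r))
    J = [[0.0] * len(r) for _ in range(m)]
    for k in range(len(r)):
        rp, rm = list(r), list(r)
        rp[k] += h
        rm[k] -= h
        fp, fm = f(rp), f(rm)
        for i in range(m):
            J[i][k] = (fp[i] - fm[i]) / (2 * h)
    return J

t, h = 1.0, 1e-6
r = r_of_t(t)
drdt = [(a - b) / (2 * h) for a, b in zip(r_of_t(t + h), r_of_t(t - h))]
J = jacobian(E, r)
# chain rule: dE/dt = J . dr/dt
dEdt_chain = [sum(J[i][k] * drdt[k] for k in range(2)) for i in range(2)]
dEdt_direct = [(a - b) / (2 * h)
               for a, b in zip(E(r_of_t(t + h)), E(r_of_t(t - h)))]
```

Both routes give the same answer up to finite-difference error, which is exactly the statement of the chain rule above.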
For the sake of curiosity: the second order derivative of the scalar field would give a second-rank object, or a matrix, called the Hessian matrix. The second order derivative of the vector field would give rise to third-rank objects. The $n$-rank generalization is called a tensor. And when space is not Euclidean, one can build an $r$-rank contravariant and $s$-rank covariant tensor, or an $(r,s)$-rank tensor. | {
"domain": "physics.stackexchange",
"id": 25118,
"tags": "differential-geometry, vectors, differentiation, vector-fields"
} |
Simple function that simulates survey results based on sample size and probability | Question: What is this:
This is a simple function, part of a basic Monte Carlo simulation. It takes sample size and probability as parameters. It returns the simulation result (positive answers) plus the input parameters in a tuple.
What I'm asking:
I'm trying to avoid using temporary variables, I have two questions.
Do I really save memory by avoiding storing interim results?
How could I improve readability without adding variables?
def simulate_survey(sample_size, percent_subscribes):
return (
sample_size,
percent_subscribes,
round(
(
sum([
r.random() < percent_subscribes
for _ in range(sample_size)
]) / sample_size
),
2
)
)
Answer:
As I discovered recently, summing a lot of booleans, where the chance that the value is False is not negligible, can be surprisingly slow.
So I would change your survey result calculation to:
sum([1 for _ in range(sample_size) if r.random() < percent_subscribes])
This allows sum to use its faster integer implementation and you do not sum a bunch of zeros.
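To make the comparison concrete, here are the two formulations side by side (the module alias r from the question is replaced by an explicit random.Random instance so both versions can be seeded identically; that renaming is mine, not from the original code):

```python
import random

def frac_bool(sample_size, p, rng):
    # original style: sums a mix of True/False values
    return sum(rng.random() < p for _ in range(sample_size)) / sample_size

def frac_filtered(sample_size, p, rng):
    # suggested style: sums integer 1s for the hits only
    return sum(1 for _ in range(sample_size) if rng.random() < p) / sample_size
```

Both draw exactly one random number per respondent, so with the same seed they produce identical results; only the summing strategy differs.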
Alternatively, you could look at this problem as an application of the binomial distribution. You have some chance that a certain result is obtained and you want to know how often that chance was true for some population. For this you can use numpy.random.binomial:
import numpy as np
def simulate_survey(sample_size, percent_subscribes):
subscribers = np.random.binomial(sample_size, percent_subscribes)
return sample_size, percent_subscribes, round(subscribers / sample_size, 2)
Using numpy here may also speed up your process in other places. If you need to run this function many times, you probably want to use the third argument to generate multiple values at once.
IMO, the readability is also greatly increased by using one temporary variable here, instead of your many levels of parentheses.
I am not a fan of your function returning its inputs. The values of those should already be available in the scope calling this function, so this seems unnecessary. One exception would be that you have other, similar, functions which actually return different/modified values there.
You should add a docstring describing what your function does. | {
"domain": "codereview.stackexchange",
"id": 34675,
"tags": "python, functional-programming, random, simulation, numerical-methods"
} |
How to make sense of `Pauli observables` in the gate cutting code here? | Question: I am following the tutorial mentioned in the circuit-knitting-toolbox.
This tutorial explains how you can replace non-local gates with local operations in the superoperator representations and effectively cut the gates, reducing the number of qubits required to run the circuit.
From the tutorial, I see they just make a random circuit:
Then they want it to be cut into two subcircuits of two qubits each, and then they write the code:
from qiskit.quantum_info import PauliList
observables = PauliList(["ZZII", "IZZI", "IIZZ", "XIXI", "ZIZZ", "IXIX"])
They are calculating the expectation value of the circuit, but I still don't understand the choice of these Pauli matrices, the order in which they appear, and the way they are chosen. Can anyone point me in the right direction to understand how this is done?
Answer: The Pauli list specifies the operator, or the basis, in which you want to measure the quantum state. Whenever you implement QAOA, or any other code that requires you to find the expectation value of an operator, it does so by measuring the state in some basis, like this:
$$\langle\psi|A|\psi\rangle$$
The measurement is usually in the computational basis, which is the $Z$ basis, but you can measure in another basis as well, like the other Pauli operators. That is what is mentioned here.
Like in QAOA, when you have to minimize the energy, you do:
$$E = \langle\psi|O|\psi\rangle.$$
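To see what such an expectation value is concretely, with no framework at all, here is a dependency-free sketch (plain Python; the state $|11\rangle$ and the two-qubit operators are chosen only for illustration):

```python
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def kron(A, B):
    """Kronecker product of two matrices as nested lists."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def expectation(op, psi):
    """<psi|op|psi> for a real state vector psi."""
    op_psi = [sum(op[i][j] * psi[j] for j in range(len(psi))) for i in range(len(op))]
    return sum(p * q for p, q in zip(psi, op_psi))

psi = [0, 0, 0, 1]                      # |11> in the computational basis
zz = expectation(kron(Z, Z), psi)       # measure both qubits in the Z basis
xx = expectation(kron(X, X), psi)       # measure both qubits in the X basis
```

For $|11\rangle$, $\langle ZZ\rangle = 1$ while $\langle XX\rangle = 0$: the same state gives different numbers depending on the basis you measure in, which is exactly the role of the Pauli strings in the observable list.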
You need to define the operator you're interested in and the state $|\psi\rangle$ with respect to which you want to compute the expectation value.
Using the code like this:
# you can define your operator as circuit
circuit = QuantumCircuit(2)
circuit.z(0)
circuit.z(1)
op = CircuitOp(circuit) # and convert to an operator
# or if you have a WeightedPauliOperator, do
op = weighted_pauli_op.to_opflow()
# but here we'll use the H2-molecule Hamiltonian
from qiskit.aqua.operators import X, Y, Z, I
op = (-1.0523732 * I^I) + (0.39793742 * I^Z) + (-0.3979374 * Z^I) \
+ (-0.0112801 * Z^Z) + (0.18093119 * X^X)
# define the state you w.r.t. which you want the expectation value
psi = QuantumCircuit(2)
psi.x(0)
psi.x(1)
# convert to a state
psi = CircuitStateFn(psi)
and then just straightforward calculating the expectation value, gives you your result.
# easy expectation value, use for small systems only!
print('Math:', psi.adjoint().compose(op).compose(psi).eval().real)
The same goes for the operators here: you can keep them all $Z$, or all $X$; it's for you to decide in which basis you want to measure. Just keep in mind that each Pauli string should have as many letters as there are qubits in your quantum circuit, one for each. | {
"domain": "quantumcomputing.stackexchange",
"id": 5483,
"tags": "qiskit, circuit-construction, quantum-circuit"
} |
How to determine gerade & ungerade symmetry of a MO orbital? | Question: J.D.Lee writes in his book Concise Inorganic Chemistry:
[...] An alternative method for determining the symmetry of the molecular orbital is to rotate the orbital about the line joining the two nuclei and then rotate the orbital about the line perpendicular to this. If the sign of the lobes remains the same, the orbital is gerade, and if the sign changes, the orbital is ungerade.
Now, it can be detected easily which MO, formed due to the overlapping of s-s or p-p orbitals, is gerade or ungerade as shown in these pics:
But what about the molecular orbitals formed due to the overlapping of s-p orbitals or p-d orbitals?
mo due to to overlapping of s-p orbitals:
mo due to overlapping of p-d orbitals:
If I rotate the first orbital about an axis perpendicular to the inter-nuclear axis, the smaller lobe only goes to the right; the sign always remains the same, just the position of the lobe has changed. But in the first picture, only the bonding MO is gerade while the anti-bonding MO is ungerade; I'm not getting how, by rotating along the axis perpendicular to the inter-nuclear axis, the sign of the antibonding MO changes, making it ungerade.
Can anyone please help me how to apply the procedure for the molecular orbitals formed due to the overlapping of s-p & p-d orbitals?
Answer: The better way to do it is to check what happens under inversion ($i$ or $\bar 1$). If the orbital stays the same, it is g, otherwise u.
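As a toy illustration of this inversion test, take two 1s-like orbitals centred at $x = \pm a$ on the internuclear axis (a hypothetical 1-D model, not a real wavefunction): the in-phase combination is unchanged under $x \to -x$ (gerade, bonding $\sigma_\mathrm{g}$), while the out-of-phase combination flips sign (ungerade, antibonding $\sigma_\mathrm{u}$).

```python
import math

A = 1.0  # half the orbital separation (arbitrary units)

def s(x, center):
    """1s-like orbital profile along the axis (1-D toy model)."""
    return math.exp(-abs(x - center))

def sigma_g(x):   # bonding: in-phase sum, even under inversion
    return s(x, +A) + s(x, -A)

def sigma_u(x):   # antibonding: out-of-phase difference, odd under inversion
    return s(x, +A) - s(x, -A)
```

Evaluating either combination at $x$ and $-x$ reproduces J. D. Lee's sign criterion numerically.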
However, as Orthocresol mentioned in the comments, checking that is only possible if the entire molecule contains inversion symmetry. Not all point groups do, and the vast majority of molecules do not (partly because $C_1$ is likely the most prevalent point group out there).
For example, consider transition metals’ d orbitals. In octahedral complexes, they are labelled $\mathrm{t_{2g}}$ and $\mathrm{e_g}$. In tetrahedral complexes, which do not have inversion symmetry, they are $\mathrm{t_2}$ and $\mathrm{e}$. | {
"domain": "chemistry.stackexchange",
"id": 5775,
"tags": "molecular-orbital-theory"
} |
What are the CIP rules for cyclic substituents? | Question: Here's what I read on Wikipedia (section "Cycles"):
To handle a molecule containing one or more cycles, one must first expand it into a tree (called a hierarchical digraph by the authors) by traversing bonds in all possible paths starting at the stereocenter. When the traversal encounters an atom through which the current path has already passed, a ghost atom is generated in order to keep the tree finite. A single atom of the original molecule may appear in many places (some as ghosts, some not) in the tree.
I don't understand how to assign CIP priorities to cyclic systems, for example, consider this:
How do we decide the CIP priorities? What does that paragraph I found on Wikipedia really mean? Could someone please give a detailed explanation?
Answer: I presume you want to know the E/Z geometry of the double bond. You need to determine on each end of the double bond which ring has priority. Intuitively one may argue that cyclohexane is larger than cyclopentane and cyclobutane is larger than cyclopropane. In this simple case, this is true and the double bond has the Z-configuration. However, digraphs are used in more complex cases. Your structure below has four colored dots. Each attached ring bond is "cut" and stretched out until one returns to the original atom. The three red atoms from the cyclohexane ring are identical. The same is true of the other three rings. Now treat the digraph as you would any acyclic alkene. The digraph has the Z-configuration.
Addendum: For two more complex cases, go here and here. | {
"domain": "chemistry.stackexchange",
"id": 9499,
"tags": "nomenclature, stereochemistry"
} |
Pauli matrices and Wikipedia | Question: Wikipedia claims Pauli Matrices with an $i$: $i \sigma_1, i \sigma_2, i \sigma_3$ form a basis of $\mathfrak{su}(2)$.
But what about the following relation?:
$$[\frac{1}{2} \sigma_i, \frac{1}{2} \sigma_j]=\frac{i}{2}\epsilon_{ijk}\sigma_k$$
Pauli cooked up the matrix set for spin.
There is no other sane way to get the $1/2$ angle.
I did try the Dirac Plate Trick, didn't manage. :D
Is the article wrong?
Answer: You are right, one often uses $S_i = \sigma_i / 2$ as the generators of $\mathfrak{su}(2)$, they satisfy
$$ [ S_i, S_j ] = \mathrm i\epsilon_{ijk}\, S_k . $$
However, $\mathfrak{su}(2)$ is a vector space, and the $\{ \sigma_i \}$ are a basis of it just as well as the $\{ S_i \}$.
This shows that the structure constants of an algebra are not uniquely defined, they depend on the choice of basis.
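The commutation relation for $S_i = \sigma_i/2$ is easy to verify numerically with nothing but plain Python and complex entries:

```python
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

# [S_1, S_2] should equal i * S_3 (and cyclically for the other pairs)
lhs = commutator(scale(0.5, s1), scale(0.5, s2))
rhs = scale(1j, scale(0.5, s3))
```

Dropping the factors of $1/2$ gives $[\sigma_1, \sigma_2] = 2\mathrm i\,\sigma_3$ instead, which is the structure-constant change of basis discussed above.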
Edit: @doetoe is right in his answer that the convention in math is different by a factor of $\mathrm i$, because we talk about the real algebra $\mathfrak{su}(2)$. What I wrote above is the usual notation in Physics and technically applies only to the complexified $\mathbb C \otimes \mathfrak{su}(2)$. | {
"domain": "physics.stackexchange",
"id": 50337,
"tags": "lie-algebra, complex-numbers"
} |
Can we define time as a field? | Question: The main question is: can we describe time in terms of a field? I know time differs in many properties from a usual field, but I always imagine time as a forward-moving field, and we all know it is affected by the gravitational field, so can we treat time as a field?
Answer: A field is a quantity which typically depends on the location. As a simple example, consider a temperature distribution over a piece of metal: on one end of the metal the temperature is low and on the other end the temperature is high.
So the temperature is a function of the location where it is measured:
$$T = T(x)$$
That would be a temperature field.
In particular, the temperature distribution can be predicted by physical laws, i.e. there is a relationship between the temperature value at a certain position and the temperature values in its neighbourhood.
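That neighbour relationship is what makes $T(x)$ a field. For instance, in a discretized 1-D heat equation each value is updated from its immediate neighbours (the step coefficient and fixed-end boundary handling below are illustrative choices):

```python
def diffuse_step(T, alpha=0.25):
    """One explicit finite-difference step of the 1-D heat equation.

    Each interior value relaxes toward the mean of its neighbours;
    the two ends are held at fixed temperature.
    """
    return [T[0]] + [
        T[i] + alpha * (T[i - 1] - 2 * T[i] + T[i + 1])
        for i in range(1, len(T) - 1)
    ] + [T[-1]]
```

A hot spot spreads outward step by step, and (away from the boundaries) the total heat is conserved: the value at each point is determined by the values around it, which is precisely the field-like behaviour described above.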
Of course on top of that one could be interested in the temperature distribution in time and location, i.e.
$$T =T(t,x)$$
On the other hand time is just a parameter, nothing more. Time depends on the chosen reference system, but not on the location.
One can consider slices of spacetime where the time is the same; these slices depend on the reference system.
Different observers at different locations have their own proper time that they measure, but one would never consider this a field, simply because there is no relationship between the individual (proper) times and the locations where they are measured; rather, they depend on the reference systems of the different observers. In particular, each observer can choose their own reference system, which means that such a data set would be rather arbitrary.
"domain": "physics.stackexchange",
"id": 99591,
"tags": "spacetime, coordinate-systems, field-theory, time"
} |
Using the XOR operator to calculate a checksum | Question: As part of a Google Foobar challenge, I'm trying to answer a rather difficult problem that involves the use of the XOR operator to calculate a checksum. While my solution works, my algorithm works in O(n) time, while the desired solution seems to need to be faster.
You can see the text of the challenge here.
While I know the help presented in that thread is valid, the solution is in Python, and I'm working in Java. Could someone explain why my solution is inefficient, and how it could be improved? I'm a self-taught coder, so some of the arcana of compsci is a mystery to me.
static int answer(int start, int length) {
int checksum = 0;
int dividerPos = length;
int count = start;
int idx = 0;
while(true) {
if(dividerPos == 0)
return checksum;
else if (idx == dividerPos) {
count += (length - dividerPos);
--dividerPos;
idx = 0;
continue;
}
checksum ^= count;
++idx;
++count;
}
}
Answer:
There are situations where an infinite loop while (true) is warranted; this is not one of them. The code implies that the loop shall be broken when dividerPos reaches zero. Say it explicitly:
while (dividerPos > 0) {
if (idx == dividerPos) {
count += (length - dividerPos);
--dividerPos;
idx = 0;
continue;
}
checksum ^= count;
++idx;
++count;
}
return checksum;
Now it is easy to see that the body of the loop is another loop in disguise. Again, be explicit:
while (dividerPos > 0) {
for (idx = 0; idx < dividerPos; idx++, count++) {
checksum ^= count;
}
count += (length - dividerPos);
--dividerPos;
}
Now it is clear what the code is doing, and what it is doing inefficiently. An immediately identified bottleneck is the inner loop. What it does is compute an XOR of a range of numbers. The first hit for an "xor of a range" query is this discussion. Study it, and see how it is applicable here. That by itself will give your code a boost. Then try to optimize the outer loop.
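The trick in question, sketched here in Python (like the linked discussion; it ports to Java one-for-one): the running XOR of $0..n$ repeats with period 4, so each inner loop above, which XORs a consecutive run of counts, collapses to a single O(1) call.

```python
def xor_up_to(n):
    # XOR of 0 ^ 1 ^ ... ^ n, using the period-4 pattern: n, 1, n+1, 0
    return [n, 1, n + 1, 0][n % 4]

def xor_range(a, b):
    # XOR of a ^ (a+1) ^ ... ^ b by prefix cancellation:
    # (0^...^b) ^ (0^...^(a-1)) leaves exactly a..b
    return xor_up_to(b) ^ xor_up_to(a - 1)
```

With one xor_range call per row of the checksum, the whole computation becomes linear in the number of rows instead of quadratic in the count of values.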
PS: It is crucial to understand that the language doesn't matter.
PPS: Sorry if I sound harsh, but when I say study, I mean study. Don't skim. Understand how it works. Prove that it works correctly.
PPPS: Both answers in the page you've linked miss the point. | {
"domain": "codereview.stackexchange",
"id": 25401,
"tags": "java, algorithm"
} |
Can this custom Test::Unit assert be replaced/refactored? Am I missing test cases? | Question: I have a unit test that needs to compare two arrays, but it shouldn't care about the order of the arrays. I didn't see anything in the Test::Unit docs that provided this, so I wrote my own. I really don't want to use this code if I don't have to (I want to avoid Not Invented Here syndrome).
If you can give me some feedback on the following points it would be a huge help, thanks!
Should I even use this? Is there a built-in Test::Unit assertion I can use?
I've refactored this as much as I can but any further improvements are encouraged and welcome.
I've included my tests for this custom assertion, can you see any cases that I may be missing?
Here is the custom assertion:
def assert_have_same_items(expected, actual, message = nil)
full_message = build_message(message, "<?> was expected to contain the same collection of items as \n<?> but did not.\n", actual, expected)
test = expected | actual
assert_equal expected.size, actual.size, full_message
assert_equal expected.size, test.size, full_message
end
And the tests:
test "assert_have_same_items with two arrays with same items in same order reports success" do
assert_have_same_items [1,2,3], [1,2,3]
end
test "assert_have_same_items with two arrays with same items in different order reports success" do
assert_have_same_items [1,2,3], [2,1,3]
end
test "assert_have_same_items with actual missing an item, reports failure" do
assert_should_fail do
assert_have_same_items [1,2,3], [2,1]
end
end
test "assert_have_same_items with actual having more items, reports failure" do
assert_should_fail do
assert_have_same_items [1,2,3], [2,1,3,4,5,6]
end
end
test "assert_have_same_items with actual having duplicates, reports failure" do
assert_should_fail do
assert_have_same_items [1,2,3], [2,1,3,2,1,3]
end
end
test "assert_have_same_items with actual having duplicates and expected having same number of items, reports failure" do
assert_should_fail do
assert_have_same_items [1,2,3,4,5,6], [2,1,3,2,1,3]
end
end
protected
def assert_should_fail
begin
yield if block_given?
flunk "Expected failure but assertion reported success"
rescue Test::Unit::AssertionFailedError
assert true
end
end
Answer: You have adequate test coverage for integers, and this will probably extend to floating-point values too (although you really should test those as well, IMO). However, you should also test for objects. Also, try a couple of combinations of objects, numbers, and strings to be really sure that your assertion is correct.
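As an aside, the semantics this assert is aiming for ("same items, any order, duplicates matter") is multiset equality. A Python sketch with collections.Counter pins that down (for unhashable items one would compare sorted copies instead):

```python
from collections import Counter

def have_same_items(expected, actual):
    """Order-insensitive, duplicate-aware comparison (hashable items)."""
    return Counter(expected) == Counter(actual)
```

Note that a union-plus-size check can reject some multiset-equal pairs where both sides contain duplicates (e.g. [1, 1, 2] vs [2, 1, 1]), which may or may not be the behavior you want; a duplicate-heavy test case like that is worth adding either way.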
If you only need to test for integer arrays, you should make it clear that the assert only works for them and its behavior with other arrays is undefined. In that case maybe you should change the name to assert_same_items_integer or something similar for clarity. | {
"domain": "codereview.stackexchange",
"id": 1064,
"tags": "ruby, unit-testing"
} |
Can the wave-function of any particle in any basis be written as a matrix? | Question: Can the wave-function of any particle in any basis be written as a matrix?
If not, how can we explain this, where the Hamiltonian $H$ in
$U$ is a QM operator that can be written as a linear transformation, therefore a matrix? Taking the matrix exponential of $H$ gives us yet another matrix. So surely we can write $\Psi$ as a matrix, right?
Answer: The wavefunction is a linear span of eigenvectors of the differential equations (which is why it is more convenient to study the eigenvectors). So yes, it is a matrix in the basis of eigenvectors. | {
"domain": "physics.stackexchange",
"id": 59898,
"tags": "quantum-mechanics, operators, hilbert-space, wavefunction, hamiltonian"
} |
(Almost) double light speed | Question: Let's say we have $2$ particles facing each other, each traveling (almost) at the speed of light.
Let's say I'm sitting on particle #$1$, so from my point of view particle #$2$'s speed is (almost) $c+c=2c$, double the speed of light? Please say why I am incorrect :)
EDIT: Me sitting on the particle is just an example; so, from the point of view of particle #$1$, does the second one move at (almost) $c+c=2c$?
Answer: One of the results of special relativity is that a particle moving at the speed of light does not experience time, and thus is unable to make any measurements. In particular, it cannot measure the velocity of another particle passing it. So, strictly speaking, your question is undefined. Particle #1 does not have a "point of view," so to speak. (More precisely: it does not have a rest frame because there is no Lorentz transformation that puts particle #1 at rest, so it makes no sense to talk about the speed it would measure in its rest frame.)
But suppose you had a different situation, where each particle was moving at $0.9999c$ instead, so that that issue I mentioned isn't a problem. Another result of special relativity is that the relative velocity between two particles is not just given by the difference between their two velocities. Instead, the formula (in one dimension) is
$$v_\text{rel} = \frac{v_1 - v_2}{1 - \frac{v_1v_2}{c^2}}$$
If you plug in $v_1 = 0.9999c$ and $v_2 = -0.9999c$, you get
$$v_\text{rel} = \frac{1.9998c}{1 + 0.9999^2} \approx 0.99999999c$$
which is still less than the speed of light. | {
"domain": "physics.stackexchange",
"id": 26583,
"tags": "special-relativity, speed-of-light, velocity, inertial-frames, faster-than-light"
} |
Creating a vacuum around a body | Question: Imagine, for abstraction purposes, a spherical object with a single surface (and surface material). I was wondering about the effects of giving that object a steady stream of positive charge (that is, maintaining a certain positive charge) on the surface.
Would this create a vacuum around the material because all the protons in the gases (the bulk of the mass) would be repelled by the surface? On further application, could this be used to reduce the air resistance to near zero for a suspended flying sphere in a gaseous environment? What are some complications that could occur?
Answer: The protons in gases are bound in atoms, which are electrically neutral, pretty strongly bound, and thus, to zeroth order, would be pretty unaffected by anything but a very strong electric field (which would require a lot of charge on the surface).
But atoms are also not point particles; they have a finite charge distribution (an approximately point-like proton surrounded by a diffuse electron cloud). A strong electric field can deform this electron cloud, creating an atomic polarization which would turn the atom into a weak electric dipole. The relationship between the applied electric field and the induced polarization is dictated by the atomic polarizability, which is typically a pretty small number, indicating that it's pretty hard to polarize atoms. However, if the gas is composed of molecules rather than single atoms, its electron clouds may already be asymmetric, creating a much stronger polarization. However, even a molecular polarization wouldn't affect the macroscopic gas dynamics that much, because the interaction energy between the dipole and the applied electric field is quite a bit lower than the average kinetic energy of gas molecules at any reasonable temperatures.
Note that the above isn't true for dust particles in the air (which are much heavier, move much slower, and may even be able to accommodate a small electric charge for short periods). As such, your device will end up effectively being an electrostatic air filter, gradually accumulating more and more dust as it operates.
If you make your electric field strong enough (roughly 30 kV/cm in dry air), then you get to the point where the electric field can strip electrons from atoms/molecules and ionize the air around it. Note that for a positive surface charge, which is what is assumed in the question, your device would attract electrons and repel protons. Anyway, ionization of air creates a bunch of fast-moving charged particles, which quickly slam into other molecules/atoms and create further ionization, which gives a chain reaction, causing a stream of plasma to be created from the air - in other words, a spark. This spark deposits a significant amount of negative charge on the sphere's surface, which quickly drives the produced electric field below the ionization threshold. But if you can replace the charge as fast as it is neutralized by the incoming ion flux, then you can get a device that produces sparks at a frequency that increases with increasing input power. At that point, you basically just have a Van de Graaff generator. (https://en.wikipedia.org/wiki/Van_de_Graaff_generator)
In order for your device to produce the effect you want it to, it would have to ionize all of the air around it at once, and attract/repel the charge faster than new air can come in. This is not at all a stable configuration, and sparks would rapidly degrade this arrangement and allow air to leak into the vacuum that you may have instantaneously created. So no, it's not really possible to do what you're asking about on any large scale. That said, if the air you're working with is already at extremely low pressures, and there's a finite amount of it around (i.e. you're not trying to use this thing in open air, but rather in a sealed container), then there actually is a device that creates a vacuum by ionizing atoms. It's called an ion pump (https://en.wikipedia.org/wiki/Ion_pump_(physics)), and it works by ionizing the gas in the chamber and collecting the ions on charged plates. Since there is a finite amount of gas that can be absorbed into/adsorbed onto a plate, an ion pump will be quickly overwhelmed by too much gas, so this only works with a small chamber of low-pressure gas (i.e. it gets you from "almost a vacuum" to "even closer to a vacuum"). | {
"domain": "physics.stackexchange",
"id": 48357,
"tags": "electrostatics, kinematics, charge, vacuum"
} |
Why can't omega meson decay into two neutral pions? | Question:
Here it says this decay mode is a violation of C-parity. I don't understand how that works.
So $\omega$ has $C=-1$ and $J=1$, and $\pi^0$ has $C=1$ and $J=0$; that means the orbital angular momentum of the final state is $L=1-0=1$. Considering the C-parity of the final state, we get $C=(-1)^L=-1$. This seems to agree with C-parity conservation?
I've also seen an argument saying that our final state with $L=1$ is anti-symmetric if we exchange two $\pi^0$, which is forbidden since they are bosons. But I don't understand how that's related to C-parity.
Answer: @anna_v 's comment is your answer.
The ω has C = -1 and G = -1,
while the C of each π0 is +1, and thus that of their aggregate is +1.
(Your involving the angular momentum here in connection to C is fundamentally unsound; you are probably confusing the aggregate C of a fermion-antifermion pair or π+ π- which go into each other. Stay away from it here.)
The G of each pion is -, so their aggregate's is +, so G-parity also forbids the decay. That's what G parity is: evenness versus oddness of the number of pions. The ω is thus destined to only decay to an odd number of π s. However, G parity is violated more than C, since it involves isospin in its definition, which is not as perfect a quantum number as C in the strong interactions.
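Spelled out, since the C and G of a multi-pion state are the products of the individual eigenvalues:

```latex
C(\pi^0\pi^0) = (+1)(+1) = +1 \;\neq\; C(\omega) = -1, \qquad
G(\pi^0\pi^0) = (-1)(-1) = +1 \;\neq\; G(\omega) = -1
```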
The PDG entry is thus spot on. | {
"domain": "physics.stackexchange",
"id": 47976,
"tags": "particle-physics, conservation-laws, standard-model, mesons"
} |
Show distribution of users affected by outlier response times | Question: My dataset is performance metrics (response time) of a web page over the course of a single day.
The data looks roughly like this:
| Date | User Id | Response time in milliseconds |
|------+---------+-------------------------------|
| ... | U1 | 390 |
| ... | U2 | 1965 |
| ... | U2 | 7789 |
| ... | U1 | 479 |
| ... | U1 | 9876 |
Charting the 50th, 75th, 90th, 95th and 99th percentiles, I could see that the response time for the vast majority of users is below 600 ms - which is seen as acceptable.
But there are extreme spikes in response times: I tried to sketch that in the data example above.
What I would like to do is visualize if it's always the same users affected by these spikes or if the behaviour is more erratic than that. In the example above, user U2 would experience frequent spikes, whereas U1 has better performance with one spike. (I know the sample size is obviously small in the example, but I hope it helps to illustrate what I'm after.)
The goal of this effort is to reveal whether it's always the same few users that are affected by spikes; if so, we could narrow down the problem.
An idea would be to use a bubble chart, which visualizes the following triplet for each user:
Count of how often we saw the user id during the day. (This will be the size of the bubble)
99th percentile for this user (x-axis)
75th percentile for this user (y-axis)
What I thought we could see from such kind of chart:
The bigger the bubble, the more frequently we saw the user.
The closer to 0 on both axes, the better the overall performance.
The further away from 0 on the x-axis, the more extreme the performance peaks.
The further away from 0 on the y-axis, the worse the common case performance.
Assumption is, users that are shown as large bubbles near the upper right corner could be the ones to investigate.
I thought a while about this, but I can't really say I'm totally convinced this is what I'm after.
Does this approach make sense to find out if we have mostly the same users experiencing the worst performance?
Answer: One option would be to look at conditional probability based on percentiles.
First, find percentiles based on all the data. For example, the 99th percentile is 1432 milliseconds.
Then, find the percentage of a specific user's requests above that threshold. For example, U1 has 50% of requests above the 99th percentile.
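A minimal sketch of this idea on the toy data from the question (the 600 ms "acceptable" cutoff mentioned there stands in for a global percentile threshold; all names are illustrative):

```python
from collections import defaultdict

# Toy (user, response time in ms) rows from the question.
rows = [("U1", 390), ("U2", 1965), ("U2", 7789), ("U1", 479), ("U1", 9876)]
threshold = 600  # in practice: a global percentile such as the 99th

per_user = defaultdict(list)
for user, t in rows:
    per_user[user].append(t)

# Conditional probability per user: fraction of that user's requests
# exceeding the global threshold.
share_above = {u: sum(t > threshold for t in ts) / len(ts)
               for u, ts in per_user.items()}
print(share_above)  # U2 spikes on every request, U1 only on one of three
```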
This could be made into a cross-tabs table for easier exploration. The rows would be each user and the columns would be binned response time percentiles. | {
"domain": "datascience.stackexchange",
"id": 9471,
"tags": "clustering, statistics, visualization"
} |
Adding minutes to a time value in Ruby | Question: I recently submitted a code-challenge for an interview according to the following guidelines:
About
Write a function or method that accepts two mandatory arguments and returns a result.
Requirements:
Use Ruby or another language of your choosing, and write production quality code.
Your function or method should not use any date/time library.
The first argument is a time value in the form of a string with the following format: [H]H:MM {AM|PM}
The second argument is the number of minutes, as an integer value
The return value of the function should be a string of the same format as the first
argument, which is a result of adding the minutes to the time.
For example, add_minutes('9:13 AM', 10) would return 9:23 AM
Additional notes:
We just want to see how you code, so this exercise is not meant to be too hard or take too long.
If you spend an hour on this, stop coding and finalize by adding some notes about what you would do if you had more time.
Although my Ruby proficiency has somewhat atrophied due to primarily working with PHP for the last 8 months, I was able to produce the following well within one hour of beginning the challenge:
# FIRST ATTEMPT
# require 'time'
# def flux_capacitor(time,mins)
# the_time = Time.parse(time.to_s)
# mins = the_time + 10*60
# end
# FINAL REFACTOR
def flux_capacitor(time,mins)
the_time = time.scan(/\d/).join('').to_i
meridian = time.scan(/(A|M|P)/).join('')
new_time = the_time + mins
back_in_time = new_time.to_s.insert(-3,':') + " #{meridian}"
end
puts flux_capacitor("9:13 AM",10)
puts flux_capacitor("9:13 PM",10)
p flux_capacitor("10:13 PM",10).class
# DOX
# http://ruby-doc.org/core-2.2.3/Time.html
# https://www.ruby-forum.com/topic/125709
# https://www.ruby-forum.com/topic/104359
# http://apidock.com/ruby/String/insert
# http://rubular.com/
Simply stated, is there a better approach? I've asked others and they think my solution is fairly clean. Obviously, I know my solution could benefit from a conditional statement for edge cases (e.g. flux_capacitor("9:50 AM", 15)).
Answer: More testing is required:
flux_capacitor("11:59 AM", 62)
=> "12:21 AM"
Definitely test some corner cases.
Updated Thoughts
I think you should completely rethink the way you are handling time.
Use a regex to validate that time is a valid string, then simply use string.split to get each portion of the time: hours, minutes, and AM/PM.
Next you should convert those into a quantity of minutes:
hours += 12 if meridian == 'PM'
total_minutes = hours*60+minutes+mins
Next I would recalculate the time as a 24-hour time (which goes up to 23 hours, 59 minutes):
# this is integer math so the decimal portion will be discarded
# adjusted_hours may be greater than 23, but adjusted_minutes should always be less than 60
adjusted_hours = (total_minutes / 60)
adjusted_minutes = total_minutes - (60*adjusted_hours)
# modulus 24 converts adjusted_hours into 24-hour time
adjusted_hours = adjusted_hours % 24
Now convert to 12 hour time if necessary and determine if this is AM or PM
# the added mins may have been several days worth, but after the modulus above, adjusted_hours is at most 23.
if adjusted_hours > 12
meridian = "PM"
adjusted_hours -= 12
else
meridian = "AM"
end
Finally return a string of the new time:
# zero pad the minutes, as this is expected for a time
return "%d:%02d %s" % [adjusted_hours, adjusted_minutes, meridian]
Final Solution
Since a solution that reliably passes all test cases has yet to be put forth...
def flux_capacitor(time, mins)
#check if time in valid format
time_match = time.strip.match /^(12|11|10|0?\d):([012345]\d)\s+(AM|PM)/
#throw error on invalid time
raise(ArgumentError, "Invalid time: #{time.strip}") if not time_match
#calculate new time
strhours, strminutes, meridian = time_match.captures
hours = strhours.to_i % 12 # "12:xx" is hour 0 before any PM shift
hours += 12 if meridian == "PM"
total_minutes = hours * 60 + strminutes.to_i + mins
total_minutes = total_minutes % (24*60) # we only want the minutes that fit within a day
adjusted_hours, adjusted_minutes = total_minutes.divmod(60)
meridian = adjusted_hours >= 12 ? "PM" : "AM"
adjusted_hours = adjusted_hours % 12
adjusted_hours = 12 if adjusted_hours == 0 # 12-hour clocks display 12, not 0
"%d:%02d %s" % [adjusted_hours, adjusted_minutes, meridian]
end
Testing:
[ "11:13 PM",
"13:09 PM", #invalid!
"1:59 AM",
"04:49 PM",
"4:79 PM" #invalid!
].each do |time|
begin
puts flux_capacitor(time, 0)
rescue
puts $!
end
end
puts "----------"
puts "#{flux_capacitor("11:13 PM", 10)}, expected: 11:23 PM"
puts "#{flux_capacitor("11:13 PM", 12*60)}, expected 11:13 AM"
puts "#{flux_capacitor("11:13 PM", 24*60)}, expected 11:13 PM"
puts "#{flux_capacitor("11:13 PM", 24*60 + 1)}, expected 11:14 PM"
puts "#{flux_capacitor("11:59 AM", 62)}, expected 1:01 PM"
puts "#{flux_capacitor("11:59 PM", 62)}, expected 1:01 AM" | {
"domain": "codereview.stackexchange",
"id": 18782,
"tags": "ruby, datetime, regex, interview-questions"
} |
What do you lose if you rewrite the Dirac equation in terms of $\mid\Psi\mid^{2}=\Phi$? | Question: Taking a look at the Dirac equation (taking $\hbar$ to be unity):
$$\bar{\Psi}(i\gamma^{a}e_{a}^{\mu}\partial_{\mu}-m)\Psi=0$$
The operator is Hermitian and hence we may rewrite it as:
$$\Psi(i\gamma^{a}e_{a}^{\mu}\partial_{\mu}-m)\bar{\Psi}=0$$
Though the bar here denotes the Dirac adjoint, I believe it's still valid (if not, we can pull $\gamma^{0}$ out of $\bar{\Psi}$). I find it useful to remember that all physically measurable information comes from considering expectation values, so I like writing it as:
$$\int_{\text{all space}}\bar{\Psi}(i\gamma^{a}e_{a}^{\mu}\partial_{\mu}-m)\Psi \, d^{3}x=0$$
This is just personal “taste” yet it seems to encode all pertinent information regarding the Dirac equation (not counting normalization which is another constraint). We can put the first two equations together as:
$$\frac{1}{2}\left[\bar{\Psi}(i\gamma^{a}e_{a}^{\mu}\partial_{\mu})\Psi+\left\{ (i\gamma^{a}e_{a}^{\mu}\partial_{\mu})\bar{\Psi}\right\} \Psi\right]-m\Psi\bar{\Psi}=0$$
Maybe I messed something simple up, but something like that should be achievable. At this point, we may as well write:
$$(i\gamma^{a}e_{a}^{\mu}\partial_{\mu}-2m)\mid\Psi\mid^{2}=0$$
Then what purpose does that serve? Let's just use a scalar such that $\mid\Psi\mid^{2}=\Phi$.
$$(i\gamma^{a}e_{a}^{\mu}\partial_{\mu}-2m)\Phi=0$$
This seems to encode all the same physical information as the initial Dirac equation, though clearly the solutions for a given problem will be of another “flavor”.
I'm wondering here, just what information have we lost?, (I'm sure it's something, maybe the negative energy solutions?)
EDIT:
It was really late when I asked this and so I confused quite a bit of notation. I plan on coming back to rewrite this later (in a manner not invalidating the comments and answer below).
Answer: Below I will try to formalise a little the objects that define the theory; of course many more mathematical details must be filled in, so do not take this as exhaustive.
We assume, to start with, the existence of a Dirac algebra of operators $\gamma^{\mu}$ satisfying the below (anti)-commutation relations
$$
\lbrace\gamma^{\mu},\gamma^{\nu}\rbrace = 2g^{\mu\nu}\mathbf{1}
$$
for some metric $g$. Once this is done, one starts looking for representations of such an algebra over some space. One can show that the fields, as operator-valued distributions, defined by $f\mapsto B(f)$ such that
$$
B(f)^{\dagger} = B(\Gamma f)\qquad
$$
$$
\big{\lbrace} B(f), B(g)\big{\rbrace} = \sigma(\Gamma f,g)\mathbf{1}
$$
for some unitary involution $\Gamma$ and some scalar product $\sigma$ (that the above metric $g$ comes from) is a good representation of the initial Dirac algebra.
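As a concrete sanity check (a sketch added here, not part of the formal construction), the defining Clifford relation $\lbrace\gamma^{\mu},\gamma^{\nu}\rbrace = 2g^{\mu\nu}\mathbf{1}$ can be verified numerically in the standard Dirac representation with metric signature $(+,-,-,-)$:

```python
# Pure-Python check of {gamma^mu, gamma^nu} = 2 g^{mu nu} * 1 for the
# standard Dirac-representation matrices (built from the Pauli matrices).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

g0 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
g1 = [[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]
g2 = [[0, 0, 0, -1j], [0, 0, 1j, 0], [0, 1j, 0, 0], [-1j, 0, 0, 0]]
g3 = [[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]
gammas = [g0, g1, g2, g3]
metric = [1, -1, -1, -1]  # diagonal entries of g^{mu nu}

for mu in range(4):
    for nu in range(4):
        AB = matmul(gammas[mu], gammas[nu])
        BA = matmul(gammas[nu], gammas[mu])
        expected = 2 * metric[mu] if mu == nu else 0
        for i in range(4):
            for j in range(4):
                target = expected if i == j else 0
                assert abs(AB[i][j] + BA[i][j] - target) < 1e-12
print("Clifford relations hold for all 16 index pairs")
```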
Given the above one defines the dynamics as the set of all fields fulfilling the following equation of motion
$$
\Psi^{\dagger}(i\partial_{\mu}\gamma^{\mu}-m)\Psi = 0
$$
whose solution is certain families of objects $\Psi$ whose precise sense in operator terms is given by the aforementioned construction.
Once we are equipped with the solution of the equation of motion (some special $\Psi$) then we define correlation functions thereof, namely we postulate that we have knowledge (that can be derived even by means of arguments of symmetries and invariances) of
$$
\omega_0\big(B(f)B(g)\big) = (\Gamma f, P g)
$$
where $\omega_0$ is a state of the system and $P$ is some special (projection) operator. Higher order correlation functions can be derived from the 2-point function as sums of products thereof (Wick's theorem).
It can be shown that the knowledge of the correlation functions is, under some suitable assumptions, sufficient to reconstruct the whole theory in terms of states, operators and scattering amplitudes.
Of course there is much more to be filled in, the above is just a short draft of how one goes around to make sense of the objects in QFT. One would have to prove that all the operators and their representations are bounded, well defined on some domains and that their commutation relations are physically meaningful, namely that causality is preserved for space-like intervals.
References
Local Quantum Physics, Rudolf Haag.
Mathematical Theory of Quantum Fields, Huzihiro Araki. | {
"domain": "physics.stackexchange",
"id": 42107,
"tags": "quantum-mechanics, operators, dirac-equation, spinors"
} |
ARMA vs. AR and then what? | Question: Sorry if this sounds elementary but I am struggling to grasp the physical idea behind ARMA (auto-regressive, moving average) process. The "AR" part is intuitive and so is "MA", but put together?
If I model my time series using "AR", I can predict the next sample using linear terms of the previous samples, and the difference between the predicted and the real one should then have a normal distribution, provided the noise was gaussian white (the unpredictable part).
But what does it mean to say that my time series can be ARMA modelled?
thanks
Answer: The ARMA approach is to model the current output of the system explicitly as a sum of past outputs and past inputs. The assumption of a gaussian model for the noise statistics can still be used for the unpredictable part of the signal, which cannot be modeled as ARMA.
From the spectrum-modeling perspective: an AR model fits the spectral peaks better (using poles), while an MA model fits the valleys better (using zeros). Hence, by using an ARMA model we can capture both peaks and valleys with a reasonable order for both the AR and MA parts.
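This peaks-versus-valleys picture can be sketched numerically (the pole/zero placements below are illustrative, not from the original answer):

```python
import cmath, math

def freq_mag(b, a, w):
    # |B(z)/A(z)| on the unit circle, z = e^{-jw}; b are MA (zero)
    # coefficients, a are AR (pole) coefficients of a rational filter.
    z = cmath.exp(-1j * w)
    B = sum(bk * z**k for k, bk in enumerate(b))
    A = sum(ak * z**k for k, ak in enumerate(a))
    return abs(B / A)

theta, r = math.pi / 4, 0.95
coeffs = [1.0, -2 * r * math.cos(theta), r * r]  # roots at r*e^{±j*theta}

# Poles near the unit circle at angle theta -> the AR spectrum peaks there...
assert freq_mag([1.0], coeffs, theta) > freq_mag([1.0], coeffs, math.pi / 2)
# ...while zeros at the same angle -> the MA spectrum dips there.
assert freq_mag(coeffs, [1.0], theta) < freq_mag(coeffs, [1.0], math.pi / 2)
```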
However, it is always possible to model a zero (in the z-domain) using an infinite number of poles, so an approximation to an ARMA model (whose parameter estimation is relatively involved) is to use a high-order AR model (whose parameter estimation is relatively easy). | {
"domain": "dsp.stackexchange",
"id": 1162,
"tags": "autoregressive-model"
} |
Does my grammar contradict LL ⊆ LR(1)? | Question: This answer claims that $ LL \subseteq LR \left( 1 \right )$ where $LL = \bigcup_k LL(k)$.
But is this true? Is this grammar a valid counterexample?
$ S \rightarrow a | Aaa $, $ A \rightarrow \varepsilon $
It's easy to show that this grammar is not $LR \left( 1 \right )$, but I think that it is in $LL(2)$ - the longest string that can be derived is two letters so a lookahead of two tokens should be sufficient.
Answer: Confusingly, the terms $LL$ and $LR$ are overloaded. There are two related but fundamentally different classes of objects that we'd call $LL$ or $LR$:
The sets of all grammars that are $LL(k)$ or $LR(k)$ for some choice of $k$. Let's call these sets $LL_{CFG}$ and $LR_{CFG}$.
The sets of all languages that have an $LL(k)$ or $LR(k)$ grammar for some choice of $k$. Let's call these sets $LL_{LANG}$ and $LR_{LANG}$.
The example you've given is a grammar that is $LL(2)$ (I believe) but not $LR(1)$. This shows that $LL_{CFG} \not\subseteq LR(1)_{CFG}$. However, there is a different grammar that you could pick for the same language that is $LR(1)$, which follows because $LL_{LANG} \subseteq LR(1)_{LANG}$. This happens because $LR(1)_{LANG}$ is precisely the set of all deterministic context-free languages and all $LL$ languages are deterministic. | {
"domain": "cs.stackexchange",
"id": 7073,
"tags": "formal-languages, formal-grammars, parsers"
} |
Mean first passage time and Kramer's problem | Question: I have a HW problem and I've no idea how to proceed. Could someone provide any hints?
Consider a particle trapped in a potential $V(x)$ with infinite boundary condition, in between these reflecting barriers exist finite barriers the particle can cross. The mean first-passage time is the average time a particle needs to cross a barrier and reach $x_{B}$ for the first time when starting at position $x_{0} < x_{B}$. The formula for the mean-passage time is
$$
\tau_{MPT} = \frac{1}{D} \int_{x_{0}}^{x_B} dx^{'} e^{\beta V(x^{'})} \int_{x_{L}}^{x^{'}} dx^{''} e^{- \beta V (x^{''})},
$$
with $x_{L} < x_{0}$ denoting the position of the reflective barrier and $D = 1 / (\gamma \beta)$ as the diffusion constant.
a.) Consider the system with one barrier in the middle region
$$
\begin{equation}
V(x) =
\begin{cases}
\infty, & x \leq x_{L} \; \text{or} \; x \geq x_{B} \\
V_{A}, & x_{L} < x < x_{0} \\
0, & x_{0} \leq x < x_{B}
\end{cases}
\end{equation}
$$
sketch the potential and compute the mean first-passage time to reach $x_{B}$ starting at $x_{0}$. Discuss the limits $V_{A} \rightarrow \pm \infty$.
b.) Derive Kramer's formula for the barrier-crossing rate $\kappa$
$$
\begin{equation}
\kappa = \frac{\sqrt{V_{1}^{''} V_{2}^{''}}}{\pi \gamma} e^{- \beta (V(x_{2}) - V(x_1))}
\end{equation}
$$
from the formula for the mean first-passage time $\tau_{MPT}$ using a saddle point approximation. Here, $V_{1,2}^{''}$ is the second derivative of the potential with respect to $x$ at $x_{1,2}$.
$Hint$: A saddle point approximation is a harmonic approximation at extremal points $x_{1}$, $x_{2}$ of the potential $V(x)$ in the exponent.
Answer: I'll try to help you with the first part by doing the integral.
$$
\tau_{MPT} = \frac{1}{D} \int_{x_{0}}^{x_B} dx^{'} e^{\beta V(x^{'})} \int_{x_{L}}^{x^{'}} dx^{''} e^{- \beta V (x^{''})}
$$
Divide the inner integral into $x_{L} \to x_{0}$, where $V=V_A$, and $x_{0} \to x'$, where $V=0$.
$$
=\frac{1}{D} \int_{x_{0}}^{x_B} dx^{'} e^{\beta 0} \{ \int_{x_{L}}^{x_{0}} dx^{''} e^{- \beta V_A} + \int_{x_{0}}^{x'} dx^{''} e^{- \beta 0} \}
$$
$$
= \frac{1}{D} \int_{x_{0}}^{x_B} dx^{'} \{ (x_{0}-x_{L}) e^{- \beta V_A} + ( x'-x_{0} ) \}
$$
$$
= \frac{1}{D} \{ ( x_{B}-x_{0} ) (x_{0}-x_{L}) e^{- \beta V_A} + \frac{1}{2} (x_B - x_0 )^2\}
$$
Thus,
$$
\tau_{MPT} = \frac{1}{D} \{ ( x_{B}-x_{0} ) (x_{0}-x_{L}) e^{- \beta V_A} + \frac{1}{2} (x_B - x_0 )^2\}
$$
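This closed form can be checked against a direct numerical evaluation of the double integral (a sketch with illustrative parameter values, using units where $\beta = D = 1$):

```python
import math

def mfpt_numeric(xL, x0, xB, VA, beta=1.0, D=1.0, n=400):
    # Midpoint-rule evaluation of the double integral for the step potential:
    # V = VA on (xL, x0) and V = 0 on (x0, xB).
    def V(x):
        return VA if x < x0 else 0.0
    dx = (xB - x0) / n
    outer = 0.0
    for i in range(n):
        xp = x0 + (i + 0.5) * dx
        dxx = (xp - xL) / n
        inner = sum(math.exp(-beta * V(xL + (j + 0.5) * dxx))
                    for j in range(n)) * dxx
        outer += math.exp(beta * V(xp)) * inner * dx
    return outer / D

def mfpt_closed(xL, x0, xB, VA, beta=1.0, D=1.0):
    return ((xB - x0) * (x0 - xL) * math.exp(-beta * VA)
            + 0.5 * (xB - x0) ** 2) / D

num = mfpt_numeric(0.0, 1.0, 2.0, 1.0)
exact = mfpt_closed(0.0, 1.0, 2.0, 1.0)
print(num, exact)  # the two agree to well under 1%
assert abs(num - exact) / exact < 1e-2
```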
The diffusion barrier is located at $x_0$ with barrier height $V_A$. | {
"domain": "physics.stackexchange",
"id": 75087,
"tags": "homework-and-exercises, statistical-mechanics, stochastic-processes"
} |
What is the difference between these two formulas for pH of weak acids? | Question: In the German Wikipedia article on $\mathrm{pH}$, I found the following formula for calculating the $\mathrm{pH}$ of weak acids (which are there defined as having $4.5 < \mathrm{p}K_\mathrm{a} < 9.5$):
$$c(\ce{H3O+}) = c^\circ\cdot\sqrt{K_\mathrm{a}\cdot c_0/c^\circ}\label{eqn:1}\tag{1}$$
I am a bit confused about this as in school, I learned that the $\mathrm{pH}$ (or more precisely, the concentration of $\ce{H3O+};$ one would have to take $-\log c(\ce{H3O+})$ to get $\mathrm{pH})$ for weak acids is
$$c(\ce{H3O+}) = \sqrt{K_\mathrm{a}\cdot c_0}\label{eqn:2}\tag{2}$$
I see that these formulae are somewhat similar, but they aren't identical. The Wikipedia formula \eqref{eqn:1} includes $c^\circ$, whose meaning I could not find out (it also appears in the equations for strong and very weak acids).
So I have two questions:
What is $c^\circ$?
Are these formulae different and if yes, how?
Answer: The purpose of this $c^\circ$ is to ensure dimensional consistency. Keep in mind that Wikipedia can be edited by anyone; although the content is often of very good quality, sometimes the volunteer writer misses some points and assumes that the reader is aware of his/her symbols. In your link, the writer does not explicitly define $c^\circ$.
The German version is ensuring that the equilibrium constant is dimensionless and all the quantities which are mathematically operated upon are dimensionless numbers. Therefore $c^\circ$ must be 1 mol/L, if $c_0$ has molar units.
$$c(\ce{H3O+}) = c^\circ\cdot\sqrt{K_a\cdot c_0/c^\circ} \tag{1}$$
Note that the quantity under the square root has been made dimensionless. In order to attach a unit to the square-root term, the result has been multiplied by $c^\circ$.
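Numerically the two formulas agree whenever $c^\circ = 1$ mol/L, since the $c^\circ$ factors only carry the units (a sketch with illustrative values for acetic acid, not taken from the article):

```python
import math

# Illustrative numbers: Ka ~ 1.8e-5 (acetic acid), c0 = 0.1 mol/L.
Ka, c0 = 1.8e-5, 0.10
c_std = 1.0  # c° = 1 mol/L, carrying the units

h_school = math.sqrt(Ka * c0)                  # formula (2)
h_wiki   = c_std * math.sqrt(Ka * c0 / c_std)  # formula (1), dimensionless root
assert abs(h_school - h_wiki) < 1e-15

pH = -math.log10(h_wiki)
print(round(pH, 2))  # 2.87
```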
If you want to dig deeper, there is something called quantity calculus, which I quote again from my previous answer
Are the units of mole of oxygen molecules the same with the units of mole of nitrogen molecules?
Calculus here is not the integration / differentiation, but rather the
Latin calculus implying a method of calculation.
There is a very nice article "Quantity Calculus: Unambiguous
Designation of Units in Graphs and Tables" by Mary Anne White in the
Journal of Chemical Education. Please read this if you are seriously
interested. Search on Google Scholar and it is free to download from
there.
In quantity calculus Each physical quantity as the product of a
numerical value and a unit:
physical quantity = numerical value × unit
This approach was introduced by British scientists and many leading
physicists used it. Now there is there is nothing less or nothing
more. Therefore your ambiguity arises from introducing another factor
such as "oxygen" or "nitrogen". The unit mol does not know whether it
belongs to oxygen or nitrogen.
As explained in the comments, suppose we write L symbolizing the
height of a tree, then I can only write, L = 10 m. For mathematical
purposes, I will not introduce "tree" anywhere in this equation. The
tree is already incorporated in L (in your mind) but not in the
mathematical equation. One can also write L/m =10. Now you have a pure
number on both sides. | {
"domain": "chemistry.stackexchange",
"id": 14830,
"tags": "acid-base, aqueous-solution, ph, notation"
} |
Is the elimination reaction between 1,1-dibromocyclopropane and phenyllithium feasible in alkaline medium (and some heat) | Question: I found this question in one of my question papers for the JEE.
My sir gave me an explanation that it happens due to resonance between the lone pairs of the bromide and the π bond formed.
This reasoning doesn't seem satisfying, because the cyclopropyl ring strain is not relieved even at the end of the reaction.
Please give an alternate pathway if the above product is not correct.
Answer: The alkene formation is not correct under these conditions. This is an example of the Doering–LaFlamme allene synthesis, a reaction that converts alkenes to allenes by insertion of a carbon atom, introduced in 1958 (Ref.1; see user user55119's comment). The abstract states that:
In a two-step sequence, of which the first step involves addition of dibromocarbene to an olefin and the second involves reaction of the resulting substituted 1,1-dibromocyclopropane with magnesium or sodium, allenes are obtained. The overall structural change involves the insertion of a single carbon atom between the two of the original double bond and therefore represents a novel way of increasing carbon chain lengths by one atom.
In your example, the compound has undergone the insertion of an additional carbon atom (between the two $\mathrm{sp^2}$-carbon atoms of the original double bond). Therefore, what remains is the rearrangement step assisted by the organolithium compound, the mechanism of which is depicted in the following diagram (Ref.2):
Accordingly, the first step of the reaction of the gem-dihalocyclopropane (I) with alkyllithium can be depicted as a halogen-lithium interconversion with formation of a 1-lithio-1-halocyclopropane derivative II. The reaction is dependent on the nature of both the halide and the lithium reagent, but its mechanism is not fully understood (Ref.2). For example, bromides react much more readily than the corresponding chlorides, and butyllithium is generally much more reactive than methyllithium. It is expected that intermediate II would readily eliminate lithium halide as discussed in Ref.1. This may occur by two different mechanisms: (i) Pathway $\color{blue}{\bf{a}}$: Concerted elimination and ring opening to an allene; or (ii) Pathway $\color{maroon}{\bf{b}}$: An $\alpha$-elimination to the carbene intermediate III (Ref.2). The author comments that pathway $\color{blue}{\bf{a}}$ may well be the way by which the allenic product is formed, although it appears that cyclopropyl anions have considerable stability. However, the formation of non-allenic isomers (byproducts) in a number of these reactions is difficult to explain (Ref.2).
References:
W. von E. Doering, P. M. LaFlamme, "A two-step of synthesis of allenes from olefins," Tetrahedron 1958, 2(1–2), 75–79 (https://doi.org/10.1016/0040-4020(58)88025-4).
Lars Skattebøl, "The Synthesis of Allenes from 1.1-Dihalocyclopropane Derivatives and Alkyllithium," Acta Chem. Scand. 1963, 17(6), 1683 – 1693 (DOI: 10.3891/acta.chem.scand.17-1683). | {
"domain": "chemistry.stackexchange",
"id": 13959,
"tags": "organic-chemistry, elimination"
} |
DSP Book with More Applications | Question: My DSP class used the Oppenheim DSP book as its text. Although I can easily understand the math in the book, I found it lacking in examples and applications; hence I still cannot build a picture of how people actually use these techniques under different circumstances. Can someone recommend a DSP book that contains more practical examples and applications, rather than just "This example uses the mathematical formula above to derive some numbers from some numbers"? Thanks in advance.
Answer: Hopefully you'll get a bunch of answers here from the very general to the super specific. I'll put in my two cents here.
The recommendations I would make are from the field of radar and communication systems. These systems tend to exercise almost all aspects of signal processing:
Signal generation and mixing
Sampling, decimating/upsampling
Signal conditioning
General filter design and implementation
Detection theory
Target/observable tracking
Advanced techniques such as eigen-decomposition and super-resolution
Unfortunately books in the radar and communications fields are not free, but in my opinion are worth their asking price (as far as texts are concerned).
Bassem Mahafza: Radar System Analysis and Design using MATLAB, Chapman & Hall
This book is an awesome treatment of many of the aspects of a radar/comms system. As the title suggests, buying this book gives you access to all of the MATLAB functions provided by the author. As you work through the theory of the book, many sections have accompanying examples that you can play with allowing you to both see implementation details and visuals.
Merril I. Skolnik: Radar Handbook, Third Edition, McGraw-Hill
This book is a very large collection of different authors' work. It covers almost everything you can imagine about signal processing in a radar system. Theory, practicalities, and implementation specifics are all covered here. This book is to be used as a reference for applications and does not really contain examples of how to code certain things.
Harry L. Van Trees: Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory, Wiley & Sons
This one is dense and I would consider to be the more hardcore of the three. This text exposes to you a bunch of different techniques in signal processing when you have access to an antenna array, which would not be possible otherwise. Applications include radar, astronomy, communications, seismology, medical diagnosis, etc.
I could list many more, but these three come to mind as a good mix of theory, applications of that theory, and examples. | {
"domain": "dsp.stackexchange",
"id": 9633,
"tags": "digital-filters"
} |
Array manipulation object | Question: I'm trying to write a class that simplifies the most common array operations, which I then want to distribute; maybe it can help someone.
But I'm having trouble making the object simple and intuitive to use.
Here a summary of the public methods:
Array to range
Array to string
Array to text file
Array Filter
Merge Arrays
Range to array
Array Sort
String to array
Text file to array
Transpose
Array Filter:
Here I have to let the user set the filters he needs, and that means exposing public methods that mean nothing outside the filter context.
Those are the methods:
FilterIncludeEquals
FilterExcludeEquals
FilterIncludeIfBetween
FilterIncludeIfContains
and then:
FilterApplyTo
How To use (complete code on class module named ArrayManipulation):
Public Sub Test()
Dim testObject As ArrayManipulation
Set testObject = New ArrayManipulation
Dim arrayOfNumbers As Variant
ReDim arrayOfNumbers(12)
Dim numbers As Long
For numbers = 0 To 11
arrayOfNumbers(numbers) = numbers
Next
With testObject
' setup filters
.FilterExcludeEquals 3, 0 'column is not considered for 1d arrays
.FilterIncludeIfBetween 1, 4, 0
' filter the array
.FilterApplyTo arrayOfNumbers
' this creates a txt file storing the array
.ArrayToTextFile arrayOfNumbers, Environ("USERPROFILE") & "\Desktop\Test.txt"
' this reads the array back from the just-created file
.TextFileToArray Environ("USERPROFILE") & "\Desktop\Test.txt", arrayOfNumbers
' this writes the array to the active sheet of your active workbook, starting from D3
.ArrayToRange arrayOfNumbers, Cells(3, 4)
End With
End Sub
I think the best solution would be to create a second object, compose the two classes, and expose a property that returns the "filter" object.
But I'm concerned that two modules are less immediate, and a person who is not familiar with the IDE might find them more difficult. So I've decided to put a "Filter" prefix on all filter-related methods.
Do you have any advice?
Sort: At the moment the sort uses merge sort, but I also want to try writing insertion sort and introsort (as soon as I understand it). More importantly, how can I sort by multiple columns? I can't find examples that I can understand... How did you do it?
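To make the question concrete, here is the kind of comparison cascade I imagine is needed, sketched in Python since the idea is language-agnostic (the function names here are my own invention, not part of the class): the merge step compares the primary sort column first and falls back to the next column only on ties.

```python
def compare_rows(a, b, sort_columns):
    """Return -1/0/1 comparing two rows over sort_columns in priority order."""
    for col in sort_columns:
        if a[col] < b[col]:
            return -1
        if a[col] > b[col]:
            return 1
    return 0  # equal on every sort column


def merge_sort(rows, sort_columns):
    """Merge sort a list of rows by several columns at once."""
    if len(rows) < 2:
        return rows
    mid = len(rows) // 2
    left = merge_sort(rows[:mid], sort_columns)
    right = merge_sort(rows[mid:], sort_columns)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # the only multi-column part: a cascade of comparisons instead of one
        if compare_rows(left[i], right[j], sort_columns) <= 0:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

In the class's Merge2D this would correspond to replacing the single `<=` test on columnToSort with a helper that loops over an array of sort columns.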
Results: All the methods take ByRef arguments, and the results of the routine overwrite those arguments.
Is this approach acceptable? Or is it necessary, or at least good practice, to use functions instead?
I would like some feedback on the code and on the idea. Thank you!
Option Explicit
Private pColumnsToReturn As Variant
Private pFiltersCollection As Collection
Private pPartialMatchColl As Collection
Private Enum filterType
negativeMatch = -1
exactMatch = 0
isBetween = 1
contains = 2
End Enum
Public Property Let ColumnsToReturn(arr As Variant)
pColumnsToReturn = arr
End Property
' FILTER METHODS ******************************************************************
Public Sub FilterIncludeEquals(ByRef equalTo As Variant, ByRef inColumn As Long, _
Optional ByRef isCaseSensitive As Boolean = False)
If inColumn > -1 Then
Dim thisFilter As Collection
Dim thisFilterType As filterType
Set thisFilter = New Collection
thisFilterType = exactMatch
With thisFilter
.Add thisFilterType
.Add inColumn
.Add IIf(isCaseSensitive, equalTo, LCase(equalTo))
.Add isCaseSensitive
End With
If pFiltersCollection Is Nothing Then Set pFiltersCollection = New Collection
pFiltersCollection.Add thisFilter
Set thisFilter = Nothing
End If
End Sub
Public Sub FilterExcludeEquals(ByRef equalTo As Variant, ByRef inColumn As Long, _
Optional ByRef isCaseSensitive As Boolean = False)
If inColumn > -1 Then
Dim thisFilter As Collection
Dim thisFilterType As filterType
Set thisFilter = New Collection
thisFilterType = negativeMatch
With thisFilter
.Add thisFilterType
.Add inColumn
.Add IIf(isCaseSensitive, equalTo, LCase(equalTo))
.Add isCaseSensitive
End With
If pFiltersCollection Is Nothing Then Set pFiltersCollection = New Collection
pFiltersCollection.Add thisFilter
Set thisFilter = Nothing
End If
End Sub
Public Sub FilterIncludeIfBetween(ByRef lowLimit As Variant, ByRef highLimit As Variant, ByRef inColumn As Long)
If inColumn > -1 Then
Dim thisFilter As Collection
Dim thisFilterType As filterType
Set thisFilter = New Collection
thisFilterType = isBetween
With thisFilter
.Add thisFilterType
.Add inColumn
.Add lowLimit
.Add highLimit
End With
If pFiltersCollection Is Nothing Then Set pFiltersCollection = New Collection
pFiltersCollection.Add thisFilter
Set thisFilter = Nothing
End If
End Sub
Public Sub FilterIncludeIfContains(ByRef substring As String, Optional ByRef inColumns As Variant = 1)
If IsArray(inColumns) Or IsNumeric(inColumns) Then
Dim thisFilterType As filterType
Set pPartialMatchColl = New Collection
thisFilterType = contains
With pPartialMatchColl
.Add thisFilterType
.Add inColumns
.Add substring
End With
End If
End Sub
Public Sub FilterApplyTo(ByRef originalArray As Variant)
If Not IsArray(originalArray) Then Exit Sub
If isSingleDimensionalArray(originalArray) Then
filterOneDimensionalArray originalArray
Else
filterTwoDimensionalArray originalArray
End If
End Sub
Private Sub filterTwoDimensionalArray(ByRef originalArray As Variant)
Dim firstRow As Long
Dim lastRow As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim row As Long
Dim col As Long
Dim arrayOfColumnToReturn As Variant
Dim partialMatchColumnsArray As Variant
Dim result As Variant
result = -1
arrayOfColumnToReturn = pColumnsToReturn
If Not pPartialMatchColl Is Nothing Then partialMatchColumnsArray = pPartialMatchColl(2)
' If the caller doesn't pass the array of columns to return,
' create an array with all the columns and preserve the order
If Not IsArray(arrayOfColumnToReturn) Then
ReDim arrayOfColumnToReturn(LBound(originalArray, 2) To UBound(originalArray, 2))
For col = LBound(originalArray, 2) To UBound(originalArray, 2)
arrayOfColumnToReturn(col) = col
Next col
End If
' If the caller doesn't pass an array for the partial match,
' check whether it passed the special value 1; if so, the
' partial match will be performed on the values in the columns to return
If Not IsArray(partialMatchColumnsArray) Then
If partialMatchColumnsArray = 1 Then partialMatchColumnsArray = arrayOfColumnToReturn
End If
firstRow = LBound(originalArray, 1)
lastRow = UBound(originalArray, 1)
' main loop
Dim keepCount As Long
Dim filter As Variant
Dim currentFilterType As filterType
ReDim arrayOfRowsToKeep(lastRow - firstRow + 1) As Variant
keepCount = 0
For row = firstRow To lastRow
' exact, exclude and between checks
If Not pFiltersCollection Is Nothing Then
For Each filter In pFiltersCollection
currentFilterType = filter(1)
Select Case currentFilterType
Case negativeMatch
If filter(4) Then
If originalArray(row, filter(2)) = filter(3) Then GoTo Skip
Else
If LCase(originalArray(row, filter(2))) = filter(3) Then GoTo Skip
End If
Case exactMatch
If filter(4) Then
If originalArray(row, filter(2)) <> filter(3) Then GoTo Skip
Else
If LCase(originalArray(row, filter(2))) <> filter(3) Then GoTo Skip
End If
Case isBetween
If originalArray(row, filter(2)) < filter(3) _
Or originalArray(row, filter(2)) > filter(4) Then GoTo Skip
End Select
Next filter
End If
' partial match check
If Not pPartialMatchColl Is Nothing Then
If IsArray(partialMatchColumnsArray) Then
For col = LBound(partialMatchColumnsArray) To UBound(partialMatchColumnsArray)
If InStr(1, originalArray(row, partialMatchColumnsArray(col)), pPartialMatchColl(3), vbTextCompare) > 0 Then
GoTo Keep
End If
Next
GoTo Skip
End If
End If
Keep:
arrayOfRowsToKeep(keepCount) = row
keepCount = keepCount + 1
Skip:
Next row
' create results array
If keepCount > 0 Then
firstRow = LBound(originalArray, 1)
lastRow = LBound(originalArray, 1) + keepCount - 1
firstColumn = LBound(originalArray, 2)
lastColumn = LBound(originalArray, 2) + UBound(arrayOfColumnToReturn) - LBound(arrayOfColumnToReturn)
ReDim result(firstRow To lastRow, firstColumn To lastColumn)
For row = firstRow To lastRow
For col = firstColumn To lastColumn
result(row, col) = originalArray(arrayOfRowsToKeep(row - firstRow), arrayOfColumnToReturn(col - firstColumn + LBound(arrayOfColumnToReturn)))
Next col
Next row
End If
originalArray = result
If IsArray(result) Then Erase result
End Sub
Private Sub filterOneDimensionalArray(ByRef originalArray As Variant)
Dim firstRow As Long
Dim lastRow As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim row As Long
Dim col As Long
Dim arrayOfColumnToReturn As Variant
Dim partialMatchColumnsArray As Variant
Dim result As Variant
result = -1
firstRow = LBound(originalArray)
lastRow = UBound(originalArray)
' main loop
Dim keepCount As Long
Dim filter As Variant
Dim currentFilterType As filterType
ReDim arrayOfRowsToKeep(lastRow - firstRow + 1) As Variant
keepCount = 0
For row = firstRow To lastRow
' exact, exclude and between checks
If Not pFiltersCollection Is Nothing Then
For Each filter In pFiltersCollection
currentFilterType = filter(1)
Select Case currentFilterType
Case negativeMatch
If filter(4) Then
If originalArray(row) = filter(3) Then GoTo Skip
Else
If LCase(originalArray(row)) = filter(3) Then GoTo Skip
End If
Case exactMatch
If filter(4) Then
If originalArray(row) <> filter(3) Then GoTo Skip
Else
If LCase(originalArray(row)) <> filter(3) Then GoTo Skip
End If
Case isBetween
If originalArray(row) < filter(3) _
Or originalArray(row) > filter(4) Then GoTo Skip
End Select
Next filter
End If
' partial match check
If Not pPartialMatchColl Is Nothing Then
If InStr(1, originalArray(row), pPartialMatchColl(3), vbTextCompare) > 0 Then
GoTo Keep
End If
GoTo Skip
End If
Keep:
arrayOfRowsToKeep(keepCount) = row
keepCount = keepCount + 1
Skip:
Next row
' create results array
If keepCount > 0 Then
firstRow = LBound(originalArray, 1)
lastRow = LBound(originalArray, 1) + keepCount - 1
ReDim result(firstRow To lastRow)
For row = firstRow To lastRow
result(row) = originalArray(arrayOfRowsToKeep(row - firstRow))
Next row
End If
originalArray = result
If IsArray(result) Then Erase result
End Sub
' TRANSPOSE ARRAY ******************************************************************
Public Sub Transpose(ByRef originalArray As Variant)
If Not IsArray(originalArray) Then Exit Sub
If isSingleDimensionalArray(originalArray) Then Exit Sub
Dim row As Long
Dim column As Long
Dim firstRow As Long
Dim lastRow As Long
Dim firstColumn As Long
Dim lastColumn As Long
firstRow = LBound(originalArray, 1)
firstColumn = LBound(originalArray, 2)
lastRow = UBound(originalArray, 1)
lastColumn = UBound(originalArray, 2)
ReDim tempArray(firstColumn To lastColumn, firstRow To lastRow) As Variant
For row = firstColumn To lastColumn
For column = firstRow To lastRow
tempArray(row, column) = originalArray(column, row)
Next column
Next row
originalArray = tempArray
Erase tempArray
End Sub
Private Function isSingleDimensionalArray(myArray As Variant) As Boolean
Dim testDimension As Long
testDimension = -1
On Error Resume Next
testDimension = UBound(myArray, 2)
On Error GoTo 0
isSingleDimensionalArray = (testDimension = -1)
End Function
' ARRAY TO STRING ******************************************************************
Public Sub ArrayToString(ByRef originalArray As Variant, ByRef stringToReturn As String, _
Optional colSeparator As String = ",", Optional rowSeparator As String = ";")
Dim firstRow As Long
Dim lastRow As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim row As Long
Dim col As Long
If Not IsArray(originalArray) Then Exit Sub
' Join single dimension array
If isSingleDimensionalArray(originalArray) Then
stringToReturn = Join(originalArray, colSeparator)
Exit Sub
End If
firstRow = LBound(originalArray, 1)
lastRow = UBound(originalArray, 1)
firstColumn = LBound(originalArray, 2)
lastColumn = UBound(originalArray, 2)
ReDim rowArray(firstRow To lastRow) As Variant
ReDim tempArray(firstColumn To lastColumn) As Variant
For row = firstRow To lastRow
' fill array with values of the entire row
For col = firstColumn To lastColumn
tempArray(col) = originalArray(row, col)
Next col
rowArray(row) = Join(tempArray, colSeparator)
Next row
' convert rowArray to string
stringToReturn = Join(rowArray, rowSeparator)
Erase rowArray
Erase tempArray
End Sub
' STRING TO ARRAY ******************************************************************
Public Sub StringToArray(ByRef myString As String, ByRef arrayToReturn As Variant, _
Optional colSeparator As String = ",", Optional rowSeparator As String = ";")
If myString = vbNullString Then Exit Sub
Dim rowArr As Variant
ReDim tempArr(0, 0) As Variant
Dim colArr As Variant
Dim firstRow As Long
Dim lastRow As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim row As Long
Dim col As Long
' get the dimensions of the resulting array
rowArr = Split(myString, rowSeparator)
firstRow = LBound(rowArr)
lastRow = UBound(rowArr)
colArr = Split(rowArr(firstRow), colSeparator)
firstColumn = LBound(colArr)
lastColumn = UBound(colArr)
' return one dimension array
If firstColumn = lastColumn Then
arrayToReturn = rowArr
Exit Sub
End If
' dim result array
ReDim tempArr(firstRow To lastRow, firstColumn To lastColumn)
For row = firstRow To lastRow
' split each row
colArr = Split(rowArr(row), colSeparator)
For col = firstColumn To lastColumn
' fill result array
If IsDate(colArr(col)) Then
tempArr(row, col) = CDate(colArr(col))
Else
tempArr(row, col) = colArr(col)
End If
Next col
Next row
arrayToReturn = tempArr
Erase tempArr
Erase rowArr
Erase colArr
End Sub
' ARRAY TO TEXT FILE ******************************************************************
Public Sub ArrayToTextFile(ByRef originalArray As Variant, ByRef fullPath As String, _
Optional colSeparator As String = ",", Optional rowSeparator As String = ";")
Dim fso As FileSystemObject
Dim resultingString As String
Set fso = New FileSystemObject
Me.ArrayToString originalArray, resultingString, colSeparator, rowSeparator
With fso.CreateTextFile(fullPath)
.Write resultingString
End With
Set fso = Nothing
End Sub
' TEXT FILE TO ARRAY ******************************************************************
Public Sub TextFileToArray(ByRef fullPath As String, ByRef arrayToReturn As Variant, _
Optional colSeparator As String = ",", Optional rowSeparator As String = ";")
Dim fso As FileSystemObject
Dim resultingString As String
Set fso = New FileSystemObject
If fso.FileExists(fullPath) Then
With fso.OpenTextFile(fullPath)
resultingString = .ReadAll
End With
Me.StringToArray resultingString, arrayToReturn, colSeparator, rowSeparator
End If
Set fso = Nothing
End Sub
' ARRAY TO RANGE ******************************************************************
Public Sub ArrayToRange(ByRef myArray As Variant, ByRef TopLeftCell As Range)
Dim totRows As Long
Dim totColumns As Long
If Not IsArray(myArray) Then Exit Sub
If isSingleDimensionalArray(myArray) Then
totRows = 1
totColumns = UBound(myArray) - LBound(myArray) + 1
Else
totRows = UBound(myArray, 1) - LBound(myArray, 1) + 1
totColumns = UBound(myArray, 2) - LBound(myArray, 2) + 1
End If
TopLeftCell.Resize(totRows, totColumns).value = myArray
End Sub
' RANGE TO ARRAY *******************************************************************
Public Sub RangeToArray(ByRef TopLeftCell As Range, ByRef ResultingArray As Variant)
ResultingArray = TopLeftCell.CurrentRegion.value
End Sub
' MERGE *****************************************************************************
Public Sub MergeArrays(ByRef MainArray As Variant, ByRef ArrayOfArrays As Variant)
If isSingleDimensionalArray(MainArray) Then
MergeArrays1D MainArray, ArrayOfArrays
Else
MergeArrays2D MainArray, ArrayOfArrays
End If
End Sub
Private Sub MergeArrays2D(ByRef MainArray As Variant, ByRef ArrayOfArrays As Variant)
Dim arrayOfColumnToReturn As Variant
Dim totRows As Long
Dim row As Long
Dim column As Long
Dim resultRow As Long
Dim currentArray As Variant
Dim i As Long
If Not IsArray(MainArray) Then Exit Sub
arrayOfColumnToReturn = pColumnsToReturn
' If the caller doesn't pass the array of columns to return,
' create an array with all the columns and preserve the order
If Not IsArray(arrayOfColumnToReturn) Then
ReDim arrayOfColumnToReturn(LBound(MainArray, 2) To UBound(MainArray, 2))
For column = LBound(MainArray, 2) To UBound(MainArray, 2)
arrayOfColumnToReturn(column) = column
Next column
End If
' calculate dimensions of the result array
totRows = UBound(MainArray)
For row = LBound(ArrayOfArrays) To UBound(ArrayOfArrays)
totRows = totRows + UBound(ArrayOfArrays(row)) - LBound(ArrayOfArrays(row)) + 1
Next row
ReDim tempArray(LBound(MainArray) To totRows, LBound(arrayOfColumnToReturn) To UBound(arrayOfColumnToReturn)) As Variant
' fill result array from main array
For row = LBound(MainArray) To UBound(MainArray)
For column = LBound(arrayOfColumnToReturn) To UBound(arrayOfColumnToReturn)
tempArray(row, column) = MainArray(row, column)
Next column
Next row
resultRow = row
' fill result array from ArrayOfArrays
For i = LBound(ArrayOfArrays) To UBound(ArrayOfArrays)
If IsArray(ArrayOfArrays(i)) Then
currentArray = ArrayOfArrays(i)
For row = LBound(currentArray) To UBound(currentArray)
For column = LBound(arrayOfColumnToReturn) To UBound(arrayOfColumnToReturn)
tempArray(resultRow, column) = currentArray(row, column)
Next column
resultRow = resultRow + 1
Next row
End If
Next i
MainArray = tempArray
End Sub
Private Sub MergeArrays1D(ByRef MainArray As Variant, ByRef ArrayOfArrays As Variant)
Dim totRows As Long
Dim row As Long
Dim resultRow As Long
Dim currentArray As Variant
Dim i As Long
If Not IsArray(MainArray) Then Exit Sub
' calculate dimensions of the result array
totRows = UBound(MainArray)
For row = LBound(ArrayOfArrays) To UBound(ArrayOfArrays)
totRows = totRows + UBound(ArrayOfArrays(row)) - LBound(ArrayOfArrays(row)) + 1
Next row
ReDim tempArray(LBound(MainArray) To totRows) As Variant
' fill result array from main array
For row = LBound(MainArray) To UBound(MainArray)
tempArray(row) = MainArray(row)
Next row
resultRow = row
' fill result array from ArrayOfArrays
For i = LBound(ArrayOfArrays) To UBound(ArrayOfArrays)
If IsArray(ArrayOfArrays(i)) Then
currentArray = ArrayOfArrays(i)
For row = LBound(currentArray) To UBound(currentArray)
tempArray(resultRow) = currentArray(row)
resultRow = resultRow + 1
Next row
End If
Next i
MainArray = tempArray
End Sub
' SORT ****************************************************************************************
Public Sub Sort(ByRef myArray As Variant, Optional ByVal columnToSort As Long, _
Optional Ascending As Boolean = True)
If Not IsArray(myArray) Then Exit Sub
If isSingleDimensionalArray(myArray) Then
Divide1D myArray, Ascending
Else
Divide2D myArray, columnToSort, Ascending
End If
End Sub
Private Sub Divide1D(thisArray As Variant, _
Optional Ascending As Boolean = True)
Dim Length As Long
Dim i As Long
Length = UBound(thisArray) - LBound(thisArray)
If Length < 1 Then Exit Sub
Dim Pivot As Long
Pivot = Length / 2
ReDim leftArray(Pivot) As Variant
ReDim rightArray(Length - Pivot - 1) As Variant
Dim Index As Long
For Index = LBound(thisArray) To Pivot + LBound(thisArray)
leftArray(i) = thisArray(Index)
i = i + 1
Next Index
i = 0
For Index = Index To UBound(thisArray)
rightArray(i) = thisArray(Index)
i = i + 1
Next Index
Divide1D leftArray
Divide1D rightArray
Merge1D leftArray, rightArray, thisArray, Ascending
End Sub
Private Sub Merge1D(leftArray As Variant, rightArray As Variant, _
arrayToSort As Variant, Ascending As Boolean)
Dim lLength As Long
Dim rLength As Long
Dim leftLowest As Long
Dim rightLowest As Long
Dim resultIndex As Long
resultIndex = IIf(Ascending, LBound(arrayToSort), UBound(arrayToSort))
lLength = UBound(leftArray)
rLength = UBound(rightArray)
Do While leftLowest <= lLength And rightLowest <= rLength
If leftArray(leftLowest) <= rightArray(rightLowest) Then
arrayToSort(resultIndex) = leftArray(leftLowest)
leftLowest = leftLowest + 1
Else
arrayToSort(resultIndex) = rightArray(rightLowest)
rightLowest = rightLowest + 1
End If
resultIndex = resultIndex + IIf(Ascending, 1, -1)
Loop
Do While leftLowest <= lLength
arrayToSort(resultIndex) = leftArray(leftLowest)
leftLowest = leftLowest + 1
resultIndex = resultIndex + IIf(Ascending, 1, -1)
Loop
Do While rightLowest <= rLength
arrayToSort(resultIndex) = rightArray(rightLowest)
rightLowest = rightLowest + 1
resultIndex = resultIndex + IIf(Ascending, 1, -1)
Loop
End Sub
Private Sub Divide2D(thisArray As Variant, ByRef columnToSort As Long, _
Optional Ascending As Boolean = True)
Dim Length As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim column As Long
Dim i As Long
firstColumn = LBound(thisArray, 2)
lastColumn = UBound(thisArray, 2)
Length = UBound(thisArray) - LBound(thisArray)
If Length < 1 Then Exit Sub
Dim Pivot As Long
Pivot = Length / 2
ReDim leftArray(0 To Pivot, firstColumn To lastColumn) As Variant
ReDim rightArray(0 To Length - Pivot - 1, firstColumn To lastColumn) As Variant
Dim Index As Long
For Index = LBound(thisArray) To Pivot + LBound(thisArray)
For column = firstColumn To lastColumn
leftArray(i, column) = thisArray(Index, column)
Next column
i = i + 1
Next Index
i = 0
For Index = Index To UBound(thisArray)
For column = firstColumn To lastColumn
rightArray(i, column) = thisArray(Index, column)
Next column
i = i + 1
Next Index
Divide2D leftArray, columnToSort
Divide2D rightArray, columnToSort
Merge2D leftArray, rightArray, thisArray, Ascending, columnToSort
End Sub
Private Sub Merge2D(leftArray As Variant, rightArray As Variant, _
arrayToSort As Variant, Ascending As Boolean, ByRef columnToSort As Long)
Dim lLength As Long
Dim rLength As Long
Dim leftLowest As Long
Dim rightLowest As Long
Dim resultIndex As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim column As Long
resultIndex = IIf(Ascending, LBound(arrayToSort), UBound(arrayToSort))
firstColumn = LBound(arrayToSort, 2)
lastColumn = UBound(arrayToSort, 2)
leftLowest = LBound(leftArray)
rightLowest = LBound(rightArray)
lLength = UBound(leftArray)
rLength = UBound(rightArray)
Do While leftLowest <= lLength And rightLowest <= rLength
If leftArray(leftLowest, columnToSort) <= rightArray(rightLowest, columnToSort) Then
For column = firstColumn To lastColumn
arrayToSort(resultIndex, column) = leftArray(leftLowest, column)
Next column
leftLowest = leftLowest + 1
Else
For column = firstColumn To lastColumn
arrayToSort(resultIndex, column) = rightArray(rightLowest, column)
Next column
rightLowest = rightLowest + 1
End If
resultIndex = resultIndex + IIf(Ascending, 1, -1)
Loop
Do While leftLowest <= lLength
For column = firstColumn To lastColumn
arrayToSort(resultIndex, column) = leftArray(leftLowest, column)
Next column
leftLowest = leftLowest + 1
resultIndex = resultIndex + IIf(Ascending, 1, -1)
Loop
Do While rightLowest <= rLength
For column = firstColumn To lastColumn
arrayToSort(resultIndex, column) = rightArray(rightLowest, column)
Next column
rightLowest = rightLowest + 1
resultIndex = resultIndex + IIf(Ascending, 1, -1)
Loop
End Sub
EDIT:
Corrected an error in the 1D filter subroutine
Answer: The first comment has to do with your question about Results. IMO you are far better off implementing your ArrayToX and XToArray subroutines as functions. Also, I tried to use your module (Class Module or Standard Module? => recommend Class Module) and had difficulty understanding how to use the filters. In fact, I never did figure it out. I wrote a test subroutine in a Standard Module to try out the code. (I would suggest you could improve your question by providing a similar example of how the class is intended to be used.)
Here's the test subroutine I was working with:
Option Explicit
Public Sub Test()
Dim testObject As ArrayOps
Set testObject = New ArrayOps
Dim arrayOfNumbers(12)
Dim numbers As Long
For numbers = 0 To 11
arrayOfNumbers(numbers) = numbers
Next
Dim result As String
testObject.ArrayToString arrayOfNumbers, result
Dim result2 As String
result2 = testObject.ArrayToString2(arrayOfNumbers)
Dim result3 As String
result3 = testObject.ArrayToString2(arrayOfNumbers, testObject.FilterIncludeEquals2(3, 0))
End Sub
The first use of ArrayToString is the version in the posted code. I've added some functions to your module to support the code for result2 and result3.
To my eye, the code reads easier using Functions rather than Subroutines. Also, using ByRef to allow passed-in values to change is probably not the best practice - especially for arrays. As the user, I probably do not want to pass in an array and get back a modified version. The user might have wanted to retain the original array for other downstream logic. Using a Function will make the input versus output very clear.
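To illustrate that point outside of VBA, here is a minimal Python sketch (the names are illustrative, not part of the class under review): the subroutine style mutates the caller's array in place, so the caller must make a defensive copy to keep the original, while the function style leaves the input untouched.

```python
def filter_in_place(values, keep):
    """Subroutine style: overwrites the caller's list in place."""
    values[:] = [v for v in values if keep(v)]


def filtered(values, keep):
    """Function style: returns a new list; the input is preserved."""
    return [v for v in values if keep(v)]


data = [0, 1, 2, 3, 4]
backup = list(data)                      # defensive copy needed with in-place API
filter_in_place(data, lambda v: v != 3)  # data is now modified

original = [0, 1, 2, 3, 4]
result = filtered(original, lambda v: v != 3)  # original still intact
```

With the function style, the input-versus-output distinction is visible at the call site, which is the clarity argument above.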
The added ArrayToString2 and FilterIncludeEquals2 are basically copies of the original subroutines with some edits and comments. They are:
Public Function ArrayToString2(ByRef originalArray As Variant, Optional filter As Collection = Nothing, _
Optional colSeparator As String = ",", Optional rowSeparator As String = ";") As String
Dim firstRow As Long
Dim lastRow As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim row As Long
Dim col As Long
If Not IsArray(originalArray) Then Exit Function
' Join single dimension array
If isSingleDimensionalArray(originalArray) Then
ArrayToString2 = Join(originalArray, colSeparator)
If Not filter Is Nothing Then
ArrayToString2 = FilterApplyTo2(ArrayToString2)
End If
Exit Function
End If
firstRow = LBound(originalArray, 1)
lastRow = UBound(originalArray, 1)
firstColumn = LBound(originalArray, 2)
lastColumn = UBound(originalArray, 2)
'No need to use module variables - locals would be better
Dim rowArray As Variant
ReDim rowArray(firstRow To lastRow) As Variant
Dim tempArray As Variant
ReDim tempArray(firstColumn To lastColumn)
For row = firstRow To lastRow
' fill array with values of the entire row
For col = firstColumn To lastColumn
tempArray(col) = originalArray(row, col)
Next col
rowArray(row) = Join(tempArray, colSeparator)
Next row
' convert rowArray to string
ArrayToString2 = Join(rowArray, rowSeparator)
If Not filter Is Nothing Then
ArrayToString2 = FilterApplyTo2(ArrayToString2)
End If
'Now using local variables
'Erase rowArray
'Erase tempArray
End Function
Public Function FilterIncludeEquals2(ByRef equalTo As Variant, ByRef inColumn As Long, _
Optional ByRef isCaseSensitive As Boolean = False) As Collection
'Declaring thisFilter outside the If block so that the function always returns a
'collection (possibly empty) rather than nothing
Dim thisFilter As Collection
Set thisFilter = New Collection
'There's an upper limit to check for as well since only 1 and 2 dimensional
'arrays are handled?
If inColumn > -1 And inColumn < 2 Then
'Dim thisFilter As Collection
'Dim thisFilterType As filterType
'Set thisFilter = New Collection
'thisFilterType = exactMatch
With thisFilter
.Add exactMatch
.Add inColumn
.Add IIf(isCaseSensitive, equalTo, LCase(equalTo))
.Add isCaseSensitive
End With
'To use this filter as a parameter in ArrayToString2 I return it directly.
'This is different than the original design...just an example to consider
'If pFiltersCollection Is Nothing Then Set pFiltersCollection = New Collection
'pFiltersCollection.Add thisFilter
'Set thisFilter = Nothing
End If
Set FilterIncludeEquals2 = thisFilter
End Function
Based on your update, I better understand what you are working toward - thanks! After looking at your example, I would suggest that there is a definite advantage to creating a class module for the filter operations. Establish a "Filter" property in the ArrayManipulation class. You mention concerns that adding a second module could be confusing to the user. IMO it creates less confusion.
Below is another version of the test module with a revised Test Subroutine using the ArrayManipulation class with an ArrayManipulationFilter class member available as Public Property Get Filter().
Option Explicit
Public Sub Test()
Dim testObject As ArrayManipulation
Set testObject = New ArrayManipulation
Dim arrayOfNumbers As Variant
ReDim arrayOfNumbers(12)
Dim numbers As Long
For numbers = 0 To 11
arrayOfNumbers(numbers) = numbers
Next
Dim arrayReturned As Variant
With testObject
' setup filters
.Filter.ExcludeEquals 3, 0
.Filter.IncludeIfBetween 1, 4, 0
' this creates a txt file storing the array
' The filter can now be applied inline or separately.
' Or, "applyFilters As Boolean" can also be added as a parameter to the ArrayToX subroutine signatures
.ArrayToTextFile .Filter.ApplyTo(arrayOfNumbers), Environ("USERPROFILE") & "\Desktop\Test.txt"
' this reads the array back from the just-created file
.TextFileToArray Environ("USERPROFILE") & "\Desktop\Test.txt", arrayReturned
' this writes the array to the active sheet of your active workbook, starting from D3
'arrayOfNumbers is still the original set of numbers
.ArrayToRange arrayOfNumbers, Cells(3, 4)
.ArrayToRange arrayReturned, Cells(5, 4)
End With
End Sub
Below is the ArrayManipulationFilter class, which is a copy of the filter subroutines from the original class (with the "Filter" prefix removed from the subroutine names) plus the additional code below.
Private Sub Class_Initialize()
Set pFiltersCollection = New Collection
End Sub
Public Function ApplyTo(ByRef originalArray As Variant) As Variant
If Not IsArray(originalArray) Then Exit Function
Dim result As Variant
If isSingleDimensionalArray(originalArray) Then
ApplyTo = filter1DArray(originalArray)
Else
ApplyTo = filter2DArray(originalArray)
End If
End Function
Private Function isSingleDimensionalArray(myArray As Variant) As Boolean
Dim testDimension As Long
testDimension = -1
On Error Resume Next
testDimension = UBound(myArray, 2)
On Error GoTo 0
isSingleDimensionalArray = (testDimension = -1)
End Function
Private Function filter2DArray(ByRef originalArray As Variant) As Variant
Dim firstRow As Long
Dim lastRow As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim row As Long
Dim col As Long
Dim arrayOfColumnToReturn As Variant
Dim partialMatchColumnsArray As Variant
Dim result As Variant
result = -1
arrayOfColumnToReturn = pColumnsToReturn
If Not pPartialMatchColl Is Nothing Then partialMatchColumnsArray = pPartialMatchColl(2)
' If the caller doesn't pass the array of columns to return,
' create an array with all the columns and preserve the order
If Not IsArray(arrayOfColumnToReturn) Then
ReDim arrayOfColumnToReturn(LBound(originalArray, 2) To UBound(originalArray, 2))
For col = LBound(originalArray, 2) To UBound(originalArray, 2)
arrayOfColumnToReturn(col) = col
Next col
End If
' If the caller doesn't pass an array for the partial match,
' check whether it passed the special value 1; if so, the
' partial match will be performed on the values in the columns to return
If Not IsArray(partialMatchColumnsArray) Then
If partialMatchColumnsArray = 1 Then partialMatchColumnsArray = arrayOfColumnToReturn
End If
firstRow = LBound(originalArray, 1)
lastRow = UBound(originalArray, 1)
' main loop
Dim keepCount As Long
Dim Filter As Variant
Dim currentFilterType As filterType
ReDim arrayOfRowsToKeep(lastRow - firstRow + 1) As Variant
keepCount = 0
For row = firstRow To lastRow
' exact, exclude and between checks
If Not pFiltersCollection Is Nothing Then
For Each Filter In pFiltersCollection
currentFilterType = Filter(1)
Select Case currentFilterType
Case negativeMatch
If Filter(4) Then
If originalArray(row, Filter(2)) = Filter(3) Then GoTo Skip
Else
If LCase(originalArray(row, Filter(2))) = Filter(3) Then GoTo Skip
End If
Case exactMatch
If Filter(4) Then
If originalArray(row, Filter(2)) <> Filter(3) Then GoTo Skip
Else
If LCase(originalArray(row, Filter(2))) <> Filter(3) Then GoTo Skip
End If
Case isBetween
If originalArray(row, Filter(2)) < Filter(3) _
Or originalArray(row, Filter(2)) > Filter(4) Then GoTo Skip
End Select
Next Filter
End If
' partial match check
If Not pPartialMatchColl Is Nothing Then
If IsArray(partialMatchColumnsArray) Then
For col = LBound(partialMatchColumnsArray) To UBound(partialMatchColumnsArray)
If InStr(1, originalArray(row, partialMatchColumnsArray(col)), pPartialMatchColl(3), vbTextCompare) > 0 Then
GoTo Keep
End If
Next
GoTo Skip
End If
End If
Keep:
arrayOfRowsToKeep(keepCount) = row
keepCount = keepCount + 1
Skip:
Next row
' create results array
If keepCount > 0 Then
firstRow = LBound(originalArray, 1)
lastRow = LBound(originalArray, 1) + keepCount - 1
firstColumn = LBound(originalArray, 2)
lastColumn = LBound(originalArray, 2) + UBound(arrayOfColumnToReturn) - LBound(arrayOfColumnToReturn)
ReDim result(firstRow To lastRow, firstColumn To lastColumn)
For row = firstRow To lastRow
For col = firstColumn To lastColumn
result(row, col) = originalArray(arrayOfRowsToKeep(row - firstRow), arrayOfColumnToReturn(col - firstColumn + LBound(arrayOfColumnToReturn)))
Next col
Next row
End If
filter2DArray = result
If IsArray(result) Then Erase result
End Function
Private Function filter1DArray(ByRef originalArray As Variant) As Variant
Dim firstRow As Long
Dim lastRow As Long
Dim firstColumn As Long
Dim lastColumn As Long
Dim row As Long
Dim col As Long
Dim arrayOfColumnToReturn As Variant
Dim partialMatchColumnsArray As Variant
Dim result As Variant
result = -1
firstRow = LBound(originalArray)
lastRow = UBound(originalArray)
' main loop
Dim keepCount As Long
Dim Filter As Variant
Dim currentFilterType As filterType
ReDim arrayOfRowsToKeep(lastRow - firstRow + 1) As Variant
keepCount = 0
For row = firstRow To lastRow
' exact, exclude and between checks
If Not pFiltersCollection Is Nothing Then
For Each Filter In pFiltersCollection
currentFilterType = Filter(1)
Select Case currentFilterType
Case negativeMatch
If Filter(4) Then
If originalArray(row) = Filter(3) Then GoTo Skip
Else
If LCase(originalArray(row)) = Filter(3) Then GoTo Skip
End If
Case exactMatch
If Filter(4) Then
If originalArray(row) <> Filter(3) Then GoTo Skip
Else
If LCase(originalArray(row)) <> Filter(3) Then GoTo Skip
End If
Case isBetween
If originalArray(row) < Filter(3) _
Or originalArray(row) > Filter(4) Then GoTo Skip
End Select
Next Filter
End If
' partial match check
If Not pPartialMatchColl Is Nothing Then
If InStr(1, originalArray(row), pPartialMatchColl(3), vbTextCompare) > 0 Then
GoTo Keep
End If
GoTo Skip
End If
Keep:
arrayOfRowsToKeep(keepCount) = row
keepCount = keepCount + 1
Skip:
Next row
' create results array
If keepCount > 0 Then
firstRow = LBound(originalArray, 1)
lastRow = LBound(originalArray, 1) + keepCount - 1
ReDim result(firstRow To lastRow)
For row = firstRow To lastRow
result(row) = originalArray(arrayOfRowsToKeep(row - firstRow))
Next row
End If
filter1DArray = result
If IsArray(result) Then Erase result
End Function | {
"domain": "codereview.stackexchange",
"id": 37611,
"tags": "object-oriented, array, vba"
} |
Are these a pair of diastereomers or identical compounds? | Question: I attempted this question (e) and found that they are diastereomers, but the answer from the book is that they are identical. I worked out the configuration at each of the centres for the molecules and then compared the results. If you find that one is RS and the other is SS, then you'll know that they differ at only one stereocentre, and hence I think the book is wrong. I've got this question from Solomons: Organic Chemistry, 11th edition.
Answer: Maybe the answer in the book is wrong? For the Fischer projection, it is definitely a diastereomer.
"domain": "chemistry.stackexchange",
"id": 5706,
"tags": "organic-chemistry, stereochemistry, erratum"
} |
Fundamental problems with the understanding of probability currents | Question: For the Normalization of the Schrödinger wave equation we need the following to be true
$$\int_{-\infty}^{+\infty}|\psi(x,t)|^2\,\mathrm{d}x=1$$
Now, if I write in variable separable form the wave function $|\psi(x,t)|=|f(x)||g(t)|$, then we have
$$\int_{-\infty}^{+\infty}|f(x)|^2|g(t)|^2\,\mathrm{d}x=|g(t)|^2\int_{-\infty}^{+\infty}|f(x)|^2\,\mathrm{d}x=1,$$ since for any square-integrable function $\int_{-\infty}^{+\infty}|f(x)|^2\,\mathrm{d}x$ will be a finite constant; hence $|g(t)|$ will also be a constant.
Now, let us look at the probability current of the particle in a localized neighbourhood $(a<x<b)$.
$$\frac{\partial}{\partial t}P(a<x<b)=\left(\int_{a}^{b}|f(x)|^2\,\mathrm{d}x\right)\frac{\partial}{\partial t}|g(t)|^2$$
Now, since I've established above that $|g(t)|$ is a constant, the probability current through a predetermined neighbourhood should always be zero. Where am I going wrong? Kindly help; I'm not a physicist, but I'm having to do some materials science right now.
Answer: There is no probability current for steady states: their probability density is time-independent. However for linear combinations of time-dependent solutions (which are not separable in $t$ and $x$) the probability current can be $\ne 0$.
Take $\Psi(x,t)= a \psi_0(x)e^{-i\omega t/2} + b\psi_1(x)e^{-i3\omega t/2} $ as linear combinations of harmonic oscillator states with any $a,b$ such that $aa^*+bb^*=1$ as an example... | {
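To see the answer's point explicitly (a sketch; $\psi_0,\psi_1$ are taken real, as harmonic-oscillator eigenfunctions can be chosen to be), the density of the superposition above is
$$|\Psi(x,t)|^2 = |a|^2\psi_0^2(x) + |b|^2\psi_1^2(x) + 2\,\mathrm{Re}\!\left[a\,b^{*}\,\psi_0(x)\,\psi_1(x)\,e^{i\omega t}\right],$$
whose cross term oscillates in time, so $\frac{\partial}{\partial t}\int_a^b|\Psi|^2\,\mathrm{d}x\neq 0$ in general, and the continuity equation then demands a nonzero probability current at the endpoints. The separable argument in the question only shows that $|g(t)|$ is constant for a single stationary state, not for such superpositions.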
"domain": "physics.stackexchange",
"id": 39828,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, probability"
} |
Polynomial curve-fitting over a large 3D data set | Question: I have a list of 4 images, called listfile.list, which looks like this:
image1
image2
image3
image4
Each image has 10 frames containing a 2000 x 2000 array of pixels, so the size of each image is [10,2000,2000]. The pixel value for each frame increases from 0 to 10, so for example for one pixel in image1:
image1[:,150,150] = [435.8, 927.3, 1410. , 1895.1, 2374.6, 2847.1,
3340.5, 3804.8, 4291.6, 4756.1]
The other pixels show similar values and a roughly linear increase across the frames. I need to fit several functions to pixel values across the frames for each pixel in each image and then average over the images. The data sets are 3D and very large, so I don't know how to paste them here. My code does exactly what I want it to; the issue is that it can take days (> 3) to run my full script.
One function is frame_fit to return rates and intercepts. There are several other functions. My code is structured as follows:
import itertools
import numpy as np
from scipy.optimize import curve_fit
def frame_fit(xdata, ydata, poly_order):
    '''Function to fit the frames and determine rate.'''
    # Define polynomial function. Here, 'b' will be ideal rate and
    # 'c', 'd', 'e', etc. describe magnitudes of deviation from
    # the ideal rate.
    if poly_order == 5:
        def func(t, a, b, c, d, e, f):
            return a + b*t + c*(b*t)**2 + d*(b*t)**3 + e*(b*t)**4 + f*(b*t)**5
    elif poly_order == 4:
        def func(t, a, b, c, d, e):
            return a + b*t + c*(b*t)**2 + d*(b*t)**3 + e*(b*t)**4
    # Initial values for curve-fitting.
    initvals = np.array([100, 4.e+01, 7.e-03, -6.e-06, 3.e-08, -8.e-11])
    # Provide uncertainty estimate
    unc = np.sqrt(64 + ydata)
    beta, pcov = curve_fit(func, xdata, ydata,
                           sigma=unc, absolute_sigma=False,
                           p0=initvals[:poly_order+1],
                           maxfev=20000)
    # beta[1] is rate, beta[0] is intercept
    return beta[1], beta[0]

all_rates = np.zeros((number_of_exposures, 2000, 2000), dtype=np.float64)
all_intercepts = np.zeros((number_of_exposures, 2000, 2000), dtype=np.float64)
all_results = np.zeros((2, 2000, 2000), dtype=np.float64)

pix_x_min = 0
pix_y_min = 0
pix_x_max = 2000  # max number of pixels in one direction
pix_y_max = 2000

xdata = np.arange(0, number_of_frames)  # so xdata is [0,1,2,3,4,5,6,7,8,9]

# Here is where I need major speed improvements
for exp_idx, exposure in enumerate(list_of_exposures):
    for i, j in itertools.product(np.arange(pix_x_min, pix_x_max),
                                  np.arange(pix_y_min, pix_y_max)):
        ydata = exposure[:, i, j]
        rate, intercept = frame_fit(xdata, ydata, 5)
        # Plus other similar functions ...
        # results2, residuals2 = function2(rate, intercept)
        # results3, residuals3 = function3(results2, specifications)
        all_rates[exp_idx, i, j] = rate
        all_intercepts[exp_idx, i, j] = intercept

avg_rates = np.average(all_rates, axis=0)
avg_intercepts = np.average(all_intercepts, axis=0)
all_results[0, :, :] = avg_rates
all_results[1, :, :] = avg_intercepts
# all_results is saved to a file after everything is finished.
There are usually at least 4 images with 2000x2000 arrays of pixels in my input list. Besides the frame_fit function, there are several others that I have to run for each pixel. I am a relatively new Python programmer so I often don't know about all available tools or best practices to improve speed. I know a nested for-loop as I've written it is going to be slow. I have considered cython or multiprocessing, but those are complicated to tackle for a new programmer. Is there a better way to perform a function on each pixel and then average over the results? I am hoping to stick with standard python 3.5 modules (rather than installing extra packages).
Answer: In terms of possible speed optimisations there's not much to work with, but one thing does stand out:
def func(t, a, b, c, d, e, f):
return a+ b*t+ c*(b*t)**2+ d*(b*t)**3+ e*(b*t)**4+ f*(b*t)**5
(and similarly for the four-element case). It's fairly standard knowledge that polynomials evaluation can be optimised for speed using Horner's method. I'll also pull out that multiplication by b:
def func(t, a, b, c, d, e, f):
bt = b*t
return a + bt*(1 + bt*(c + bt*(d + bt*(e + bt*f)))) | {
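As a quick sanity check that the Horner rewrite reproduces the original polynomial (a sketch; the coefficients reuse the question's initvals and the frame axis mimics xdata, purely for illustration):

```python
import numpy as np

def func_naive(t, a, b, c, d, e, f):
    # Original form: five separate ** evaluations of (b*t)
    return a + b*t + c*(b*t)**2 + d*(b*t)**3 + e*(b*t)**4 + f*(b*t)**5

def func_horner(t, a, b, c, d, e, f):
    # Rewritten form: b*t computed once, nested multiplications only
    bt = b * t
    return a + bt*(1 + bt*(c + bt*(d + bt*(e + bt*f))))

t = np.arange(10, dtype=float)  # one pixel's frame axis, like xdata above
coeffs = (100.0, 4.0e+01, 7.0e-03, -6.0e-06, 3.0e-08, -8.0e-11)
assert np.allclose(func_naive(t, *coeffs), func_horner(t, *coeffs))
```

The saving per call is small, but curve_fit evaluates the model many times per pixel and there are 2000×2000 pixels per exposure, so it accumulates.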
"domain": "codereview.stackexchange",
"id": 24055,
"tags": "python, time-limit-exceeded, image, numerical-methods, scipy"
} |
Is the direction detection of one VIRGO/LIGO interferometer limited to two dimensions? | Question: I went to a conference dealing with the recent detection of gravitational waves (dated end of 2015). I then learnt that there are several interferometers across the world.
The principle of detection is interferometry along two long arms making a 90° angle. The arms are in a common 2D plane. What was presented is that when a gravitational wave passes through the device, the arms stretch or contract by a certain amount of deformation that is measurable.
Does it mean that the deformation is measured only along this axis, and thus with one interferometer it is not possible to get all variations of the $g_{\mu\nu}$ metric tensor but only $g_{ij}$ with $i,j=1,2$?
If one wants other components, another VIRGO/LIGO interferometer is necessary, and this second one must be oriented in another plane and in other directions. Does that sound OK, or am I missing some points?
Answer: Yeah, that's pretty much it.
(Moreover, like light, gravitational waves have two polarizations, usually denoted $+$ and $×$, with deformations along those axes, and each LIGO detector is also incapable of detecting $×$ waves when its arms are aligned along $+$. For astronomical sources this is less important, as we don't expect a particular polarization from a random source in the sky, but the effect is still there.)
This directional sensitivity is one of the main reasons why people are building more detectors, including a new detector in India and links to the existing VIRGO facilities. (The other reason being, of course, that more detectors allow for better coincidence detection, so less chances of spurious signals, as well as a better ability to pinpoint the spatial origin of the signals.) | {
"domain": "physics.stackexchange",
"id": 34935,
"tags": "general-relativity, gravitational-waves"
} |
Lifecycle node consuming CPU in deactivated state | Question: My node, which is a critical component of a larger system, contains the following interfaces for communicating with other nodes:
Two action servers
One tf listener
One service server
Four subscribers
Two publishers
[running in ROS 2 Foxy in a Docker container]
I recently implemented the lifecycle version of this node and the idea was to deactivate the node when not required and save on CPU usage. I followed the following practices while implementing the lifecycle version:
Defined all parameters in constructor of class
Initializing pubs, subs, servers, etc. all in the on_configure callback.
Activating things which could be activated in on_activate callback and controlling rest with bools
Deactivating things which could be deactivated in on_deactivate callback and resetting the bools for all others
resetting all pointers in cleanup
My external code which controls the lifecycle of this node uses activate and deactivate states while in operation and configure / cleanup only once per run.
The CPU usage before on_configure is less than 1% which is expected. It goes to 15% on configure but it doesn't change much on activate or deactivate.
(The compute might be going into spinning and waiting, I guess, but I'm not sure.)
So activating and deactivating is not really giving me what I needed.
I tried moving a few things from cleanup and configure into the deactivate and activate CBs and got some improvement; for example, moving the tf_listener saved me around 4-5% of CPU. But the rest is still being utilized under the hood in spinning, I guess.
Can we do anything else to reduce the usage further ?
Answer: I'm going to speculate it's your TF system. TF is not a lifecycle system, so unless you're resetting the TF buffer / listener in on_deactivate, you're still processing incoming transformation information. Also, subscribers similarly have no lifecycle equivalent; I'd look at resetting those in on_deactivate as well.
"domain": "robotics.stackexchange",
"id": 38912,
"tags": "ros2, rclcpp, spinning"
} |
Convert Object to a DateTime | Question: public static DateTime ObjectToDateTime(object o, DateTime defaultValue)
{
if (o == null) return defaultValue;
DateTime dt;
if (DateTime.TryParse(o.ToString(), out dt))
return dt;
else
return defaultValue;
}
The code feels too wordy and smells bad. Is there a better way?
Answer: On the offhand chance that your object is already a DateTime, you're performing unnecessary conversions to and from strings.
if (o is DateTime)
return (DateTime)o;
This also strikes me as something you might be doing for a database item, for example. In which case, I'd encourage you to know and trust your data types and then use existing methods of retrieval.
For example, if you have a DataTable with a column CreatedDate, you should know it's a date; what you might not know is whether it has a value, if the column is nullable in the database. That's fine, you can handle that in code.
var createdDate = row.Field<DateTime?>("CreatedDate");
There we go, a DateTime?, no coding of a conversion necessary. You can even specify a default and type it to DateTime
var createdDate = row.Field<DateTime?>("CreatedDate").GetValueOrDefault(DateTime.Now);
var createdDate = row.Field<DateTime?>("CreatedDate") ?? DateTime.Now; | {
"domain": "codereview.stackexchange",
"id": 26665,
"tags": "c#, datetime, converting"
} |
Complexity of Topological Properties. | Question: I am a computer scientist taking a course on Topology (a sprinkling of point-set topology heavily flavored with continuum theory). I have become interested in decision problems testing a description of a space (by simplices) for topological properties; those preserved up to homeomorphism.
It is known, for example, that determining the genus of a knot is in PSPACE and is NP-Hard. (Agol 2006; Hass, Lagarias, Pippenger 1999)
Other results have a more general feel: A. A. Markov (the son of the Markov) showed in 1958 that testing two spaces for homeomorphism in dimension $4$ or higher is undecidable (by showing the undecidability for 4-manifolds). Unfortunately, this last example is not a perfect exemplar of my question, as it deals with the homeomorphy problem itself rather than properties preserved under homeomorphism.
There seems to be a large amount of work in "low dimensional topology": knot and graph theory. I am definitely interested in results from low dimensional topology, but am more interested in generalized results (these seem to be rare).
I am most interested in problems which are NP-Hard on average, but feel encouraged to list problems not known to be so.
What results are known about the computational complexity of topological properties?
Answer: Computational topology encompasses an enormous body of research. A complete summary of every complexity result would be impossible. But to give you a small taste, let me expand on your example.
In 1911, Max Dehn posed the word problem for finitely presented groups: Given a string over the generator alphabet, does it represent the identity element? One year later, Dehn described an algorithm for the word problem in fundamental groups of orientable surfaces; equivalently, Dehn described how to decide whether a given cycle on a given orientable surface is contractible. Properly implemented, Dehn's algorithm runs in $O(n)$ time. In the same 1912 paper, Dehn opined that “Solving the word problem for all groups may be as impossible as solving all mathematical problems.”
In 1950, Turing proved that the word problem in finitely presented semigroups is undecidable, by reduction from the halting problem (surprise, surprise).
Building on Turing's result, Markov proved in 1951 that every nontrivial property of finitely-presented semigroups is undecidable. A property of semigroups is nontrivial if some semigroup has the property and some other semigroup does not. Theoretical computer scientists know the similar result about partial functions as "Rice's Theorem".
In 1952, Novikov proved that the word problem in finitely presented groups is undecidable, thereby proving that Dehn's intuition was correct. The same result was independently proved by Boone in 1954 and Britton in 1958.
In 1955, Adyan proved that every nontrivial property of finitely-presented groups is undecidable. The same result was proved independently by Rabin in 1956. (Yes, that Rabin.)
Finally, in 1958, Markov described algorithms to construct 2-dimensional cell complexes and 4-dimensional manifolds with any desired fundamental group, given the group presentation as input. This result immediately implied that a huge number of topological problems are undecidable, including the following:
Is a given cycle in a given 2-dimensional complex contractible? (This is the word problem.)
Is a given 2-complex simply connected? ("Is this group trivial?")
Is a given cycle in a given 4-manifold contractible?
Is a given 4-manifold contractible?
Is a given 4-manifold homeomorphic to a particular 4-manifold (constructed by Markov)?
Is a given 5-manifold homeomorphic to the 5-sphere (or any other fixed 5-manifold you choose)?
Is a given 6-complex a manifold?
My favorite corollary of these results is more recent and more subtle: It is undecidable whether a given finitely-presented group is the fundamental group of a 3-manifold. Perelman's recent proof of Thurston's geometrization conjecture implies the existence of an algorithm to determine whether a given 3-manifold has a trivial fundamental group. (As @SamNead points out, results of Rubinstein and Casson imply an algorithm that runs in exponential time.) If a given group $G$ is not a 3-manifold group, then $G$ cannot be trivial, because $\pi_1(S^3)$ is trivial. Thus, if you could decide whether $G$ is a 3-manifold group, you could decide whether $G$ is trivial, which is impossible.
"domain": "cstheory.stackexchange",
"id": 2491,
"tags": "cc.complexity-theory, big-list, topology"
} |
Calculating the radius of a hole for a desired outflow rate | Question: I am tasked with finding the radius of a hole at the bottom of a tank to obtain the desired outflow rate when the tank is full.
The tank is a cylinder with dimensions Height: 4.8m and Radius: 1.79m. It holds roughly 48316.6897 liters of water.
I have already determined $C_d$ (The coefficient of discharge) to be 0.6372.
The outflow rate I desire is 3500 liters per hour.
I know that $\frac{dh}{dt}= \frac{a_0C_d\sqrt{2gH}}{A}$, where $a_0$ is the radius I need to obtain and $A$ is the surface area.
From the chain rule I can get $\frac{dh}{dt}$ in terms of radius and $\frac{dv}{dt}$ (the desired outflow rate):
$\frac{dh}{dt} = \frac{\Delta{V}}{\pi r^2} $
Now I let them equal each other
$\frac{\Delta{V}}{\pi r^2} = \frac{a_0 C_d\sqrt{2gH}}{A}$
This gives me the equation
$$a_0=\frac{\Delta{V}}{C_d\sqrt{2gH}} $$
Now, substitute in what we know
$3500$ Litres per hour is $\frac{35}{36000} \frac{m^3}{s} $
$$ a_0 = \frac{\frac{35}{36000}}{0.6372\sqrt{2\cdot 9.8 \cdot 4.8}} $$
According to this $a_0$ is $0.157304491$ millimeters in radius.
I am doubting my calculations because $3500$ liters going past a hole of radius $0.157304491$ millimeters every hour seems absurd.
Answer: According to this a0 is 0.157304491 millimetres in radius.
I am doubting my calculations because 3500 litres going past a hole of radius 0.157304491 millimetres every hour seems absurd.
This is the area. You need the diameter: multiply it by 4, then divide by pi, then take the square root of the result. That will be the diameter of the hole.
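Spelling the correction out numerically (a sketch using the question's numbers and g = 9.8 m/s²; the variable names are mine):

```python
import math

dV = 35 / 36000   # desired outflow rate, m^3/s (3500 L/h)
Cd = 0.6372       # coefficient of discharge
H = 4.8           # head when the tank is full, m
g = 9.8           # m/s^2

area = dV / (Cd * math.sqrt(2 * g * H))   # this is the hole AREA, m^2
diameter = math.sqrt(4 * area / math.pi)  # multiply by 4, divide by pi, take root
radius = diameter / 2

# area comes out ~1.573e-4 m^2 (the 0.157... figure), i.e. ~157 mm^2;
# the corresponding radius is ~7.1 mm, a much more plausible hole size
```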
"domain": "engineering.stackexchange",
"id": 1485,
"tags": "mechanical-engineering, fluid-mechanics, fluid"
} |
Generating IK solution for use in MoveIT for 5 DOF arm | Question:
I am working on a custom 5 DOF arm (image at the bottom). I have generated the URDF, the MoveIt configs, etc. for it.
I want to have the basic IK, planning and collision detection features of MoveIt. Before programming my application, I tried running the simple planning in RViz, using the interactive marker etc. Now, since it is a 5 dof system, I had to enable "Allow Approximate IK solutions", for it to let me use the interactive markers.
It was using OMPL by default. I tried planning for some random-valid goal states. The problem is that the planning success rate was very very low. 4 out of 5 times the planning failed. I also tried changing the planners from the dropdowns, but none of them were consistent in planning successfully; all of them were failing quite frequently.
After looking up this problem here on Ros-Answers and many other places, we found that IKFast would be what we should use, as it works reliably for 5 dof systems.
After installing openrave and ikfast via source, we ran the commands to generate a IKfast cpp file(using this tutorial) for our robot model. We tried the TranslationDirection5D ik type. Now here we are getting a tuple index out of range error which is something that others have faced before and have not been able to solve. See this and this. Basically, we've now hit a wall trying to get the IK on my 5 dof robot to work properly. Openrave is not even able to generate an Ikfast cpp.
Here's the exact error I get when I try to generate an IKFast cpp:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 9521, in <module>
chaintree = solver.generateIkSolver(options.baselink,options.eelink,options.freeindices,solvefn=solvefn)
File "/usr/local/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 2281, in generateIkSolver
chaintree = solvefn(self, LinksRaw, jointvars, isolvejointvars)
File "/usr/local/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 2796, in solveFullIK_TranslationDirection5D
coupledsolutions,usedvars = solvemethod(rawpolyeqs2[index],newsolvejointvars,endbranchtree=[AST.SolverSequence([endbranchtree2])], AllEquationsExtra=AllEquations)
File "/usr/local/lib/python2.7/dist-packages/openravepy/_openravepy_/ikfast.py", line 5226, in solveLiWoernleHiller
hassinglevariable |= any([all([__builtin__.sum(monom)==monom[i] for monom in newpeq.monoms()]) for i in range(3)])
IndexError: tuple index out of range
Is there a recommended way to get IKfast to generate a cpp or maybe even get the normal KDL to work reliably for my 5 dof system? I would really appreciate any kind of help to get this to work, thanks
Here's the picture of the bot from RViz
Originally posted by t27 on ROS Answers with karma: 68 on 2017-01-04
Post score: 1
Original comments
Comment by Humpelstilzchen on 2017-01-05:
Welcome to the 5dof club, I also had lots of problems. Note that OpenRave is very picky about sympy version. The combination OpenRave git b89980e with sympy 0.7.1 worked for me.
Comment by t27 on 2017-01-05:
I installed Sympy 0.7.1, but i still get the same error with openrave. I've added the exact error in the above question...
Comment by Humpelstilzchen on 2017-01-05:
Have you tried a known working robot, e.g. youbot?
Answer:
We have been trying to debug this issue for the past 2 days.
With no clear reason for this problem, we decided to try solving the IK of a robot of the same configuration with a simple URDF file. So we wrote a URDF by hand, with the same joint and link config as our robot. The URDF we used earlier was generated from our original CAD model using the Solidworks URDF Converter plugin.
With this simple URDF, we did the same process as mentioned above. With this new MoveIt config, the plans were working significantly better. With the earlier URDF, hardly 1 out of 10 plans would work, and even simple plans where only 1 motor had to move were failing. But with this new URDF model, many plans are working really well, even with the basic KDL solver with OMPL. And this is the same 5 DOF configuration we had before. Our guess is that the Solidworks URDF exporter plugin, which is severely dated, might be causing the issues in the planning. But we are still figuring out the actual, fundamental cause of this issue.
As far as solving the above problem is concerned, building a URDF by hand solves this problem for us. Even the IKfast library generates an analytical cpp solution.
Originally posted by t27 with karma: 68 on 2017-01-06
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 26640,
"tags": "manipulator, moveit, ikfast, kdl, openrave"
} |
Vertex algebra confusion | Question: In Blumenhagen's book on CFT, the authors have defined $\bar{v}(\bar{z})$ to be the antiholomorphic part of the vertex operator for a free bosonic CFT, $V(z,\bar{z})=:\exp{(\alpha X(z,\bar{z})}):$ where $X$ is the field.
Then on page 52, right after the 3rd equation they claim that $[L_0, \bar{v}(\bar{z})]=0$. $L_n$ are the Laurent modes of the EM tensor. I don't understand why this is true.
$\bar{v}$ is composed of the operators $\bar{j}_n$, the Laurent modes of the operator $\bar{j}=i\bar{\partial}X$. Using the OPE of primary anti-chiral fields with the EM tensor (equation (2.40) in the same book), I tried to prove that the EM tensor modes commute with the anti-chiral modes, but it's not working.
Answer: They have put all the anti-holomorphic dependence into $\bar{v}(\bar{z})$. So for holomorphic modes of stress energy tensor $[L_n, \bar{v}(\bar{z})]=0$. For anti-holomorphic generators $[\bar{L}_n, \bar{v}(\bar{z})]\neq0$.
The field $X(z, \bar{z})$ has holomorphic and anti-holomorphic parts in its Laurent expansion (2.89). Then from equation (2.40) with $h=\bar{h}=0$ you can find the commutation relations for $L_n$ and the anti-holomorphic modes of $X(z,\bar{z})$. It's straightforward to show that the result is zero.
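A compact way to see the vanishing commutator (a sketch in standard CFT conventions, not the book's exact steps): the OPE of the chiral stress tensor $T(z)$ with a purely anti-holomorphic field $\bar{\phi}(\bar{w})$ has no singular terms, so the contour integral defining the commutator picks up no residue:
$$[L_n, \bar{\phi}(\bar{w})] = \oint \frac{\mathrm{d}z}{2\pi i}\, z^{n+1}\, T(z)\, \bar{\phi}(\bar{w}) = 0.$$
In particular $[L_0,\bar{v}(\bar{z})]=0$, while the anti-holomorphic modes $\bar{L}_n$ generically do not commute with $\bar{v}(\bar{z})$.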
"domain": "physics.stackexchange",
"id": 63778,
"tags": "quantum-field-theory, conformal-field-theory, bosons"
} |
Help with General ROS query | Question:
I am trying to communicate ROS (on a virtual machine) with Optitrack (on a separate computer, Windows) and there have been some suggestions as to what might help.
I am completely new to ROS and Linux systems, and the documentation for packages is really disappointing. I have tried to look up how to download or build packages which are not on the wiki, and tried to understand what any particular package can do and how to go about carrying out a task, but nothing has been helpful. I am sure I am not as smart as you all might be, but I have spent a lot of time trying to understand how to work with ROS and failed miserably at it.
Can anyone please guide me towards some (preferably general) tutorials on how to go about using ROS, or certain packages in it, so that I can get a better idea about it all? For example, I am trying to use "gt-ros-pkg" to access "ros_vrpn_client" for my task; could someone explain how that can be done, or point me in the right direction?
I apologize for any rudeness, I am just frustrated 'cause of not being able to work it out properly. If I am missing something which has already been explained before, then I would gladly accept any criticism for my post above and would appreciate it if anyone could guide me in the right direction.
Originally posted by nemesis on ROS Answers with karma: 237 on 2013-02-17
Post score: 0
Answer:
I am sorry for your frustration. Please note that, while valid, this is really not a general ROS query.
Many organizations throughout the world develop code using the ROS framework. The authors of ros_vrpn_client do not seem to have released their package using any of the normal ROS mechanisms. So, the answer to your reasonable question is not apparent to me or (probably) most other ROS users.
If you can figure out who maintains this package, you should try to contact them directly. I took the liberty of tagging your question, hoping that those developers may be more likely to see it.
UPDATE: There seems to be more information on the Georgia Tech repository here.
Originally posted by joq with karma: 25443 on 2013-02-17
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by joq on 2013-02-18:
Try contacting some of the people mentioned here: http://code.google.com/p/gt-ros-pkg/ | {
"domain": "robotics.stackexchange",
"id": 12933,
"tags": "ros"
} |
Who has gotten WiringPi working in a ROS 2 C++ node? Want to control GPIO on Raspberry Pi 4 running Ubuntu 22 | Question: What I've tried and works to know the WiringPi library is working on my Raspberry Pi 4.
g++ compiling a test cpp script using WiringPi.
Terminal gpio commands (gpio -g mode 18 output sets pin 18 to be an output)
Physically wiring an output to an input. Writing to the output changed the input correctly
All results of changing pinMode and digitalWrite were correct in the gpio readall output
After properly linking the external WiringPi library into my ROS 2 package, including the wiringPi.h file in my cpp node, and using the same code as the test cpp script, it doesn't work. Is there something special I need to account for? Is there a ROS 2 background thread preventing WiringPi from functioning?
Code
class MyNode : public rclcpp::Node
{
public:
  MyNode() : Node("my_node")  // note: ROS 2 node names may not contain '-'
  {
    if (wiringPiSetupGpio() == -1)
    {
      throw std::runtime_error("Failed to initialize WiringPiSetup");
    }
    pinMode(18, OUTPUT);
    pinMode(17, INPUT);
    // Jumper wire connecting pin 18 to 17
    digitalWrite(18, HIGH);
    RCLCPP_INFO(get_logger(), "%d", digitalRead(17));
  }
};

int main(int argc, char **argv)
{
  rclcpp::init(argc, argv);
  try
  {
    auto node = std::make_shared<MyNode>();
    RCLCPP_INFO(node->get_logger(), "Starting node");
    rclcpp::spin(node);
  }
  catch (const std::exception &ex)
  {
    RCLCPP_ERROR_STREAM(rclcpp::get_logger("main"), "Exception during node initialization: " << ex.what());
  }
  rclcpp::shutdown();
  return 0;
}
18 does not change to OUTPUT nor is it set to HIGH. 17 is by default an INPUT.
Reference
GPIO terminal comamnds I was testing: https://learn.sparkfun.com/tutorials/raspberry-gpio/c-wiringpi-setup
Example I used in my cpp file: https://learn.sparkfun.com/tutorials/raspberry-gpio/c-wiringpi-example
Answer: Put the wiringPiSetupGpio() call inside main, before spinning the node (rclcpp::spin(node)). It should look like this:
int main(int argc, char** argv)
{
  rclcpp::init(argc, argv);
  if (wiringPiSetupGpio() == -1)
  {
    return -1;
  }
  auto node = std::make_shared<MyNode>();
  RCLCPP_INFO(node->get_logger(), "Starting node");
  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}
Although putting wiringPiSetupGpio() in the constructor doesn't throw any errors, it looks like you want the initialization to be separate from the rest of the ROS node. I have no references for this solution.
"domain": "robotics.stackexchange",
"id": 38807,
"tags": "ros2, c++, ros-humble, raspberry-pi"
} |
Can we extract positrons from gamma rays? | Question: If gamma rays undergo pair production is there a way to say, deflect and collect the positrons using magnetic fields?
Answer: Rather than using a magnetic field, you are better off using a strong electric field to separate them - since the initial direction of the electron / positron is somewhat random, a magnetic field will deflect but not separate in a meaningful way. An electric field can pull the positrons one way, and the electrons the other way - regardless of their initial direction. Thus the positrons can always be directed towards your "collector"; of course you will need a confinement device (magnetic, presumably) which means you will ultimately need both. But if you use just a magnetic field, you will trap both electrons and positrons and they won't stay apart for long - it's not in their nature. | {
"domain": "physics.stackexchange",
"id": 23676,
"tags": "quantum-mechanics, antimatter, gamma-rays"
} |
Lost sync with rosserial_python and wrong key transmission | Question:
Hi everyone,
I've been making maps with a two-wheeled robot using the navigation stack and gmapping. For some reason, when I supply my encoders with a lower voltage, the maps I get are more accurate than with higher voltages.
The thing is, I have a huge problem with synchronization. For the different supply voltages I use for the encoders, rosserial_python, which uses serial_node, suddenly loses sync while I'm mapping. There is also something I haven't figured out about teleop_twist_keyboard: it seems that it doesn't receive the messages through topics when it should, probably because of insufficient transmission time. It works perfectly when I don't power the encoders, but if I do, I cannot map because it won't read them.
So, if I want to make a good map and navigate through it, I first need to be able to teleop the robot without sync problems and have it detect all the keys I press. Right now it sometimes doesn't detect some keys I press, like turn left or right, and I have to wait a few seconds for it to read the keys I've already pressed.
I'm sure that once I solve this I'll be able to navigate through the maps I've got, because I've followed all the instructions on ROS; currently, when the robot tries to navigate from one point to another, it gets lost and keeps advancing in one direction, surely because of the transmission delay.
The problem must reside in motorcontroller6.ino. Here is my github page with the files, specifically that file. I've read other related topics like this and this one; I have spinOnce() in the Arduino loop function, and millis() seems fine to me. Also, changing LOOPTIME only affects the response time, but doesn't change the behavior.
Here is motorcontroller6.ino
The typical error of sync
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/rosserial_python/serial_node.py", line 85, in <module>
client.run()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rosserial_python/SerialClient.py", line 503, in run
self.requestTopics()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rosserial_python/SerialClient.py", line 389, in requestTopics
self.port.flushInput()
File "/usr/lib/python2.7/dist-packages/serial/serialutil.py", line 532, in flushInput
self.reset_input_buffer()
File "/usr/lib/python2.7/dist-packages/serial/serialposix.py", line 566, in reset_input_buffer
termios.tcflush(self.fd, termios.TCIFLUSH)
termios.error: (5, 'Input/output error')
[serial_node-26] process has died [pid 30108, exit code 1, cmd /opt/ros/kinetic/lib/rosserial_python/serial_node.py __name:=serial_node __log:=/home/gerson/.ros/log/58647904-9a10-11e7-aafb-3ca0672cc307/serial_node-26.log].
log file: /home/gerson/.ros/log/58647904-9a10-11e7-aafb-3ca0672cc307/serial_node-26*.log
I hope you guys can help me solve this problem, which is keeping me stuck on navigation.
Originally posted by gerson_n on ROS Answers with karma: 43 on 2017-09-15
Post score: 0
Original comments
Comment by jayess on 2017-09-15:
Could you please copy and past the error instead of uploading an image? Thanks.
Comment by gerson_n on 2017-09-15:
I've updated the post. Thanks
Answer:
I've solved the synchronization problem just by using the original teleop_twist_keyboard.py file. I realized that if I change the raw keys for others, even if there are no mistakes, I'll always have sync problems. So it's better not to change anything in there.
On the other hand, for navigation I thought that, because of the odd key reading and sync, the wheeled robot wasn't navigating as it should. But its behavior hasn't changed; I definitely have a wrong configuration for navigation. At least I can rule out this problem for navigation :)
Cheers
Originally posted by gerson_n with karma: 43 on 2017-09-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28862,
"tags": "ros, rosserial-python"
} |
Printing a week range string in Python | Question: For some reporting, I need to print a human readable date string for the week covered by the report, using just the standard python module library. So the strings should look like:
Dec 29, 2013 - Jan 4, 2014
Jan 26 - Feb 1, 2014
Jan 19 - 25, 2014
Below is a script I threw together to achieve this. But is there a simpler way to do this?
from datetime import (datetime, date, timedelta)
def week_string(dt):
(year, week, weekday) = dt.isocalendar()
week_0 = dt - timedelta(days=weekday)
week_1 = dt + timedelta(days=(6-weekday))
month_0 = week_0.strftime("%b")
day_0 = week_0.strftime("%e").strip()
year_0 = week_0.strftime("%Y")
month_1 = week_1.strftime("%b")
day_1 = week_1.strftime("%e").strip()
year_1 = week_1.strftime("%Y")
if year_0 != year_1:
return "%s %s, %s - %s %s, %s" %(
month_0, day_0, year_0,
month_1, day_1, year_1)
elif month_0 != month_1:
return "%s %s - %s %s, %s" %(
month_0, day_0,
month_1, day_1, year_1)
else:
return "%s %s - %s, %s" %(
month_0, day_0, day_1, year_1)
print week_string(date(2013, 12, 30))
print week_string(date(2014, 01, 30))
print week_string(datetime.date(datetime.now()))
Since the report script is going to be shared with other people, I want to avoid adding dependencies on anything they'd need to install.
Answer: 1. Comments on your code
There's no docstring. What does the week_string function do, and how should I call it? In particular, what is the meaning of the dt argument?
You've put your test cases at top level in the script. This means that they get run whenever the script is loaded. It would be better to refactor the test cases into unit tests or doctests.
Your code is not portable to Python 3 because the test cases use the statement form of print.
You import datetime.datetime but don't use it. And you import datetime.date but only use it in the test cases.
There's no need for parentheses in these lines:
from datetime import (datetime, date, timedelta)
(year, week, weekday) = dt.isocalendar()
The variables are poorly named, as explained by unholysampler.
You format the components (year, month, day) of the dates and then compare the formatted components:
year_0 = week_0.strftime("%Y")
year_1 = week_1.strftime("%Y")
if year_0 != year_1:
# ...
but it would make more sense to compare the year property of the dates directly:
if begin.year != end.year:
# ...
The %e format code for strftime was not defined by the C89 standard and so may not be portable to all platforms where Python runs. See the strftime documentation where this is noted. Also, even where implemented, the %e format code outputs a leading space, which doesn't seem appropriate in your case.
So I would follow unholysampler's technique and use Python's string formatting operation on the day field of the date objects.
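For example, a format string can mix a strftime code for the month with plain attribute access for the day and year, so there is no leading space to strip (a quick illustration):

```python
from datetime import date

d = date(2014, 1, 4)
# {0:%b} goes through strftime; {0.day} reads the integer attribute directly.
print("{0:%b} {0.day}, {0.year}".format(d))  # Jan 4, 2014
```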
Date-manipulating code is often tricky, and you made a mistake in the case where dt is on a Sunday, as pointed out by 200_success. So it would be worth putting in some assertions to check that the manipulations are correct. You can see in the revised code below that I've added assertions checking that begin is on a Sunday, that end is on a Saturday, and that d lies between these two dates.
2. Revised code
from datetime import timedelta
def week_description(d):
"""Return a description of the calendar week (Sunday to Saturday)
containing the date d, avoiding repetition.
>>> from datetime import date
>>> week_description(date(2013, 12, 30))
'Dec 29, 2013 - Jan 4, 2014'
>>> week_description(date(2014, 1, 25))
'Jan 19 - 25, 2014'
>>> week_description(date(2014, 1, 26))
'Jan 26 - Feb 1, 2014'
"""
begin = d - timedelta(days=d.isoweekday() % 7)
end = begin + timedelta(days=6)
assert begin.isoweekday() == 7 # Sunday
assert end.isoweekday() == 6 # Saturday
assert begin <= d <= end
if begin.year != end.year:
fmt = '{0:%b} {0.day}, {0.year} - {1:%b} {1.day}, {1.year}'
elif begin.month != end.month:
fmt = "{0:%b} {0.day} - {1:%b} {1.day}, {1.year}"
else:
fmt = "{0:%b} {0.day} - {1.day}, {1.year}"
return fmt.format(begin, end) | {
"domain": "codereview.stackexchange",
"id": 5832,
"tags": "python, strings, datetime"
} |
pcl_ros(hydro) fails to compile on Mac OSX 10.8.5 | Question:
Here is what I get compiling Hydro on OSX 10.8.5. In particular, I get the following errors during pcl_ros compilation.
/Users/artemlenskiy/ros/hydro/src/pcl_ros/src/pcl_ros/features/feature.cpp:50:10: fatal error: 'pcl/io/io.h' file not found
#include <pcl/io/io.h>
...
The reason is IO is turned off in homebrew PCL, and therefore the headers are missing.
Does William or anyone else know how to overcome this problem?
Thanks in advance!
Originally posted by Artem on ROS Answers with karma: 709 on 2013-10-01
Post score: 1
Answer:
Sorry, I fixed this a while back and forgot to push it! Thanks for the heads up!
https://github.com/ros/homebrew-hydro/commit/b0694465dd3b096f1e00661228d50df23a8f8dd8
You should be able to:
brew uninstall pcl
brew update
brew install pcl
To resolve the issue.
Originally posted by William with karma: 17335 on 2013-10-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Artem on 2013-10-01:
Thanks William, it perfectly works! | {
"domain": "robotics.stackexchange",
"id": 15722,
"tags": "pcl, ros-hydro, osx, pcl-ros"
} |
How does ANF increase GFR? | Question: ANF, as we know, reduces Na+ uptake and K+ removal in the distal tubules, and it also functions as a vasodilator (?). But then it says that ANF increases the glomerular filtrate? If it is acting as a vasodilator (i.e., antagonistic to vasopressin), how is it increasing the GFR? Shouldn't it lower the GFR?
ANF - Atrial Natriuretic Factor also called Atrial Natriuretic Peptide
GFR - Glomerular Filtration Rate
Answer: ANF (Atrial Natriuretic Factor more commonly known as ANP - atrial natriuretic peptide) squeezes (vasoconstricts) the efferent arteriole. This means the pressure in the glomerulus is higher (like if you squeeze the end of a hose) and so more fluid is squeezed out i.e. the glomerular filtration rate (GFR) is higher. It also dilates the afferent which means more fluid is going in, further increasing GFR (Marin-Grez et al.) | {
"domain": "biology.stackexchange",
"id": 3522,
"tags": "human-biology, physiology, kidney, human-physiology"
} |
Two simple sorting algorithms in Python | Question: For a personal programming challenge, I wanted to try writing simple bubble sort and insertion sort functions in Python 3. I haven't looked at the standard pseudo-code, I just read about how they worked. Are these good implementations?
def bubble_sort(array):
"""
Sorts array using a bubble sort.
>>> bubble_sort([43, 10, 100, 24, 1, 6, 10, 3])
[1, 3, 6, 10, 10, 24, 43, 100]
"""
array2 = array[:] # Save a copy, so that original is not mutated
last_index = len(array) - 1 # Iterate up to this position
while last_index > 0:
for i in range(last_index):
a, b = array2[i], array2[i + 1] # Consecutive numbers in array
if a > b:
array2[i], array2[i + 1] = b, a # Swap positions
last_index -= 1 # A new number has bubbled up, no need to inspect it again
return array2
def insertion_sort(array):
"""
Sorts array using an insertion sort.
>>> insertion_sort([43, 10, 100, 24, 1, 6, 10, 3])
[1, 3, 6, 10, 10, 24, 43, 100]
"""
sorted_array = []
for a in array:
# Loop backwards through sorted_array
for i, b in reversed(list(enumerate(sorted_array))):
if a > b:
sorted_array.insert(i + 1, a) # Insert a to the right of b
break
else:
# a is less than all numbers in sorted_array
sorted_array.insert(0, a) # Add a to beginning of list
return sorted_array
Answer: The docstrings and doctests are nice. I would prefer a more explicit description of the behaviour: "Return a copy of the array, sorted using the bubble sort algorithm." Typically, if you're implementing these sorting algorithms as an exercise, you would perform the sorting in place.
In bubble_sort(), the while last_index > 0 loop would be written more idiomatically as:
for last_index in range(len(array) - 1, 0, -1):
…
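Put together, an in-place version built around that loop might look like this (my sketch, returning the array for convenience):

```python
def bubble_sort_in_place(array):
    """Sort array in place using bubble sort and return it."""
    for last_index in range(len(array) - 1, 0, -1):
        for i in range(last_index):
            if array[i] > array[i + 1]:
                array[i], array[i + 1] = array[i + 1], array[i]  # swap
    return array

print(bubble_sort_in_place([43, 10, 100, 24, 1, 6, 10, 3]))
# [1, 3, 6, 10, 10, 24, 43, 100]
```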
In insertion_sort(), the reversed(list(enumerate(sorted_array))) would be making temporary copies of sorted_array. I would therefore consider it an improper implementation of the algorithm. | {
"domain": "codereview.stackexchange",
"id": 22716,
"tags": "python, python-3.x, sorting"
} |
A simple quicksort implementation | Question: I have the following class for Quicksorting.
import java.util.Arrays;
import java.util.Random;
public class QuickSorter {
public static void main(String[] args) {
Random random = new Random();
int[] randoms = new int[10000];
for(int i = 0; i < randoms.length; i++) randoms[i] = random.nextInt(10000);
quickSort(randoms, 0, randoms.length);
System.out.println(Arrays.toString(randoms));
}
/** Same as {@link #quickSort(int[], int, int)}, but assumes the whole array should be sorted. */
public static void quickSort(int[] in) {quickSort(in, 0, in.length);}
/** Sorts an array of integers using the Quicksort algorithm, in the range [{@code start}, {@code end}).
* @param in The full array to sort
* @param start The starting index, inclusive
* @param end The ending index, exclusive
* @see #quickSort(int[]) */
public static void quickSort(int[] in, int start, int end) {
int pivot = (start + end) / 2;
/* Temporary array containing the ordered sub-list */
int[] sub = new int[end - start];
int left = 0, right = sub.length;
/* Populate the sub-list */
for(int i = start; i < end; i++) {
if(i == pivot) continue;
if(in[i] < in[pivot]) {
sub[left++] = in[i];
} else {
sub[--right] = in[i];
}
}
/* Add in the original pivot in its new position */
sub[left] = in[pivot];
/* Copy back into the original array */
for(int k = start; k < end; k++) {
in[k] = sub[k - start];
}
/* Translate new pivot position into full list index */
left += start;
if(left - start > 0) quickSort(in, start, left);
/* The start of the right branch should not include the pivot */
left++;
if(end - left > 0) quickSort(in, left, end);
}
}
Is this the most efficient way to Quicksort? I feel as if copying the values from sub back into in can be done better somehow.
And of course, I know this won't cause any huge issues with arrays of size 10000. But, I'd like this to hold up all the way up to 2^31-1 (to the limits of max array size).
Answer: Integer overflow
Since you said you cared about array sizes up to the max size, you should know that this line could cause an overflow with two large indices, meaning that pivot could become negative:
int pivot = (start + end) / 2;
You can fix this by using the following expression instead:
int pivot = start + (end - start) / 2;
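The wrap-around is easy to demonstrate; here is a sketch in Python that emulates Java's signed 32-bit int arithmetic:

```python
def to_int32(x):
    """Wrap a Python integer into Java's signed 32-bit int range."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

start, end = 1_500_000_000, 2_000_000_000
bad = to_int32(start + end) // 2       # (start + end) wraps negative first
good = start + (end - start) // 2      # stays in range
print(bad, good)                       # -397483648 1750000000
```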
Skip subarrays of length 1
You can do slightly better by ignoring subarrays of length 1, since they are already sorted. In other words, this line:
if(left - start > 0) quickSort(in, start, left);
could be:
if(left - start > 1) quickSort(in, start, left);
The same goes for the right subarray.
In-place version should be faster
Although there is nothing incorrect with your version, it allocates a temporary array and copies elements back and forth. If you switched to an in-place algorithm, you will reduce your array writes by 50%. This means that the in-place version should be faster than your current version, assuming you use a proper version that swaps elements from both ends working inwards. | {
"domain": "codereview.stackexchange",
"id": 16819,
"tags": "java, quick-sort"
} |
Were alchemists right? | Question: When I was in school, I learned about alchemists, a group of scientists who sought a way to convert other materials into gold. They were never successful, so whenever I studied or read about them, they were portrayed as failures or foolish people, and that’s the impression I had about them.
However, more recently, I was preparing for something else, and I read a question which said:
Copper can be converted into gold by artificial radioactivity
I’m not a science guy, but when I looked into this, it seemed to be true. Is it? And if it's true, were the alchemists right all along, not foolish people as history portrays them?
Answer: They were wrong in the same way the people who made human-sized wings to fly were wrong. The goal of flight/metal transmutation is not impossible, but the methodology is naive and hopeless. | {
"domain": "physics.stackexchange",
"id": 69601,
"tags": "nuclear-physics, physical-chemistry, history, radioactivity, elements"
} |
Can every chemical compound be melted? | Question: Are there any chemical compounds that disintegrate (without going into other chemical reactions; let's say in a vacuum) before reaching a melting temperature?
Answer: Some compounds decompose before they reach the melting point. In the literature, you'll see something like "m.p. $40\ ^{\circ}\mathrm{C}$ (dec.)"
Ammonium chloride is an example. | {
"domain": "chemistry.stackexchange",
"id": 6963,
"tags": "phase, temperature, melting-point"
} |
How do you measure proton's spin? | Question: I've probably read it somewhere in Sakurai but I cannot recall it at the moment. So how does one really measure the proton's spin? I mean the proton's spin and not its constituents.
Do you measure it using a Stern-Gerlach type of setup?
How about atoms and other stuff, how is their spin measured?
Answer: Spin can be measured as you say by the Stern Gerlach experiment for individual particles.
Spin for atoms can be found from measuring electromagnetic spectra of transitions between energy levels and fitting/assigning a solution of the potential problem which identifies the spin state for us.
As the answer by Dwin says there are specific energy levels that can be excited in nuclei and fitted with the appropriate model to identify the spin.
In accelerator experiments the spin of resonances can be determined from their decay products, by fitting their angular distributions. I have provided some links in the answer to a similar question.
"domain": "physics.stackexchange",
"id": 11327,
"tags": "particle-physics, experimental-physics, angular-momentum, quantum-spin, measurements"
} |
Does TM $M$ exist, when $L_{\leq3} \subset L(M) \subset L_{\leq4}$ | Question: and
$L_{\leq k} = \{\langle M \rangle : |L(M)|\leq k\}$
The solution that I saw is:
Proof by contradiction, assume such $M$ exists.
So reduction $f$ from $\overline{HP}$ to $L(M)$,
when $\overline{HP}=\{(\langle M\rangle,x ) | M $ doesn't halt on $ x\}$
$f(\langle M'\rangle,x ) = \langle M_x\rangle$
When $M_x$ on input $w$ implemented in the following way:
execute $M'$ on $x$
accept if M' halts
I can't understand the validity of it, I mean why
$\langle M_x\rangle \in L(M) \Leftrightarrow (\langle M'\rangle,x )\in \overline{HP}$
is true?
The next step is quite simple: if $M$ exists then $L(M)\in RE$, and based on the reduction this means that $\overline{HP}\in RE$, a contradiction.
Maybe I found a wrong solution?
Answer: To fix the solution you just need to accept any 3 elements of your choice in $M_x$. Now, $M_x$ will look something like this:
If $w$ (the input) is $0$, $1$ or $00$, accept.
Otherwise, emulate $M'$ on $x$.
Accept if $M'$ halted.
Now, you are guaranteed to have exactly 3 elements in $L(M_x)$ if $M'$ doesn't halt on $x$, and otherwise $L(M_x)=\Sigma^*$.
You can now continue with the proof as you have written. | {
"domain": "cs.stackexchange",
"id": 18665,
"tags": "turing-machines"
} |
Where does the energy come from when a laser beam exits a lens at c? | Question: Take a laser. Fire it at a lens. Prior to arrival at the lens, the laser's pulsed beam's speed is c.
Traversing the lens, its speed is less than c.
Exiting the lens (and back into the vacuum), its speed is what?
c or less than c?
I'm inclined to think that its speed post-lens is the same as its speed pre-lens.
If that is so, didn't the beam of light "accelerate" post-lens? Where did the energy come from?
Answer: It is a misconception to think that inside the lens (the medium, which is different from vacuum in your case) photons travel with speed less than c.
There are a few things to clarify:
It is the wavefront that slows down inside the medium.
Photons always travel with speed c when measured locally in vacuum.
And photons do travel in vacuum inside the medium: inside the lattice, between the molecules and atoms, the photons travel in vacuum, with speed c when measured locally.
It is when the photons interact with the atoms in the lattice structure of the medium (the lens glass) that the individual photons' speed over the whole length of the glass slows down compared to c. This is because the EM interactions between the photons and the atoms take time. Now do not misunderstand, please: the individual photons travel at speed c, in vacuum, between the atoms. It is the interaction with the atoms that slows them down (there are actually two theories on this site about this; one says there is a phase shift, the other says the photons get scattered elastically, but either way the interaction causes the wavefront to slow down compared to c).
Now, when we say that the individual photons' speed over the whole length of the glass slows down, that means that while they travel inside the glass they interact with the atoms and go zigzag, not straight, and the interactions take time. If you take the times when they enter and exit the glass, compute the difference, and measure the straight path, you will get a speed less than c. Of course, that is because the individual photons do not go straight; they zigzag.
Now, as the photons travel inside the glass, the wavefront slows down. This happens for the same reason the photons go zigzag: the average path of the wavefront's photons is longer than the straight path. So you will get a speed less than c for the wavefront inside the glass.
Now you say that when they exit the glass they travel with speed c again, so they sped up. In reality the photons go with speed c in vacuum both inside the glass and outside it. It is just that the individual photons go zigzag inside the glass (because of the interaction with the glass atoms) and straight outside it. So they do not speed up; they always go with speed c in vacuum when measured locally. It is the wavefront that slows down inside the glass.
Even the questions that confuse you, where somebody asks why the photon speeds up from 0 to c when it gets emitted: it does not. In reality, when the photon gets emitted, it already travels at speed c. Before the emission the photon did not exist, so it did not have a speed.
"domain": "physics.stackexchange",
"id": 58560,
"tags": "visible-light, speed-of-light, lenses"
} |
How to calculate this string-dissimilarity function efficiently? | Question:
Migrated from stackoverflow.
Hello,
I was looking for a string metric that have the property that moving around large blocks in a string won't affect the distance so much. So "helloworld" is close to "worldhello". Obviously Levenshtein distance and Longest common subsequence don't fulfill this requirement. Using Jaccard distance on the set of n-grams gives good results but has other drawbacks (it's a pseudometric and higher n results in higher penalty for changing single character).
[original research]
As I thought about it, what I'm looking for is a function f(A,B) such that f(A,B)+1 equals the minimum number of blocks that one have to divide A into (A1 ... An), apply a permutation on the blocks and get B:
f("hello", "hello") = 0
f("helloworld", "worldhello") = 1 // hello world -> world hello
f("abba", "baba") = 2 // ab b a -> b ab a
f("computer", "copmuter") = 3 // co m p uter -> co p m uter
This can be extended for A and B that aren't necessarily permutations of each other: any additional character that can't be matched is considered as one additional block.
f("computer", "combuter") = 3 // com uter -> com uter, unmatched: p and b.
Observing that instead of counting blocks we can count the number of pairs of indices that are taken apart by a permutation, we can write f(A,B) formally as:
f(A,B) = min { C(P) | P ⊆ |A|×|B|, P is a bijective function, ∀i∈dom(P) A[P(i)]=B[P(i)] }
C(P) = |A| + |B| − |dom(P)| − |{ i | i,i+1∈dom(P) and P(i)+1=P(i+1) }| − 1
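For small inputs, f can be computed directly from this definition by brute force: enumerate the ways to cut A into blocks and check whether those blocks can tile B (exponential, of course; just a sketch for checking the examples above, for the case where A and B are permutations of each other):

```python
from collections import Counter
from itertools import combinations

def can_tile(b, blocks):
    """Can the multiset of blocks be concatenated, in some order,
    to form exactly the string b?"""
    if not b:
        return all(v == 0 for v in blocks.values())
    for blk in blocks:
        if blocks[blk] > 0 and b.startswith(blk):
            blocks[blk] -= 1
            ok = can_tile(b[len(blk):], blocks)
            blocks[blk] += 1  # restore the count after trying
            if ok:
                return True
    return False

def f(a, b):
    """Minimum number of cuts of a whose blocks permute into b
    (assumes a and b are permutations of each other)."""
    n = len(a)
    for k in range(1, n + 1):                      # k blocks = k - 1 cuts
        for cuts in combinations(range(1, n), k - 1):
            pos = (0,) + cuts + (n,)
            blocks = Counter(a[pos[i]:pos[i + 1]] for i in range(k))
            if can_tile(b, blocks):
                return k - 1
    return n - 1

print(f("helloworld", "worldhello"), f("abba", "baba"))  # 1 2
```

The extension with unmatched characters is not handled here; each unmatched character would have to be counted as one extra block.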
The problem is... guess what...
... that I'm not able to calculate this in polynomial time.
Can someone suggest a way to do this efficiently (possibly with minor modifications)? Or perhaps point me to already known metric that exhibits similar properties?
Answer: What you seem to be looking for is called the minimum common string partition, and it has been well studied. In particular, it's known to be NP-hard. There is also a closely related concept of edit distance with block moves, which would capture the extension you mention, where substitutions are allowed. | {
"domain": "cstheory.stackexchange",
"id": 4079,
"tags": "ds.algorithms, metrics, string-matching"
} |
Can someone explain to me what the T gate and tdg are, like I'm five? | Question: I'm a newbie to quantum computing. I read about these two gates in the IBM quantum computing docs, but I can't seem to understand what the T gate and tdg (or T-dagger) are. Can someone give me a simple explanation of these two gates? Also the S gate, if that is possible!
Thank you so much.
Answer: tdg is the method used to apply $T^\dagger$ (read T dagger). Thus, there are no differences between these two.
For a quantum gate $U$, $U^\dagger$ is the inverse of $U$. That is, if you apply $U$ on a given state $|\psi\rangle$ and then $U^\dagger$, you will be back in the state $|\psi\rangle$ again.
So now, what is $T$? Or rather, why do we care about $T$? You see, there are these gates, $H$, $S$ and the $CNOT$ using which we can do some things. However, we are quite limited with them, there is no way to construct a Toffoli gate (that is, an $X$ gate controlled on two qubits) using them for instance. However, if we do allow ourselves to use these gates and the $T$ gate, then we can construct any quantum gate we'd like.
The $S$ gate is simply the $T$ gate applied twice: $T^2=S$.
On a more "math" level, the $T$ gate is the following matrix:
$$\begin{pmatrix}1&0\\0&\mathrm{e}^{\mathrm{i}\frac\pi4}\end{pmatrix}$$
That is, it applies a phase of $\frac\pi4$ to the $|1\rangle$ state and leaves $|0\rangle$ untouched.
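The algebra above is easy to check numerically with plain 2×2 matrices; no quantum library is needed (a quick sketch):

```python
import cmath

def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[1, 0], [0, cmath.exp(1j * cmath.pi / 4)]]
Tdg = [[1, 0], [0, cmath.exp(-1j * cmath.pi / 4)]]  # T dagger: conjugate transpose
S = [[1, 0], [0, 1j]]

TT = matmul2(T, T)        # T applied twice...
TTdg = matmul2(T, Tdg)    # ...and T followed by its inverse
print(abs(TT[1][1] - S[1][1]) < 1e-12)    # T^2 == S
print(abs(TTdg[1][1] - 1) < 1e-12)        # T * Tdg == identity
```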
I'm not sure that this is a gate you'll have to deal with often. It does appear a lot in the quantum circuits you'll build, but I'm not sure that you'll use it directly when defining your own quantum gates. For instance, if you use a Toffoli gate in your circuit, under the hood it is decomposed into a sequence of $H$, $T$, $T^\dagger$ and $CNOT$ gates applied to your three qubits.
Thus, while you will reason without this $T$ gate in most cases, it will be there when the circuit is decomposed into the basic gates from which it is built.
"domain": "quantumcomputing.stackexchange",
"id": 3840,
"tags": "qiskit, programming, quantum-gate, quantum-state, ibm-q-experience"
} |
Read a single int from an optional .properties file | Question: I am no newcomer to Java. One thing that confounds me is why it is so messy to load values from .properties files.
I have an application where, if the .properties file is found, then the value, if found, should be used; in any other circumstance, the default should be used. (In the correct deployment, the server is run with sufficient permissions to engage with the socket listener on 443, whereas in the development environment, or any other environment where a person goes to the trouble to insert the .properties file, another port will be used.)
The location of the file is com/foo/bar/webserver.properties, in the same directory as the class file com/foo/bar/WebServer.class.
The content of webserver.properties is simply:
listen=4444
Since this such a terribly simple function I would like to know if the community of reviewers sees a more elegant/succinct/secure/complete/correct way to load the value. I feel like I must be overlooking something basic for what feels like, to me, should be a one-liner more like the imaginary API:
// set int value for key "listen", or else default value 443
int port = Properties.loadProperties( "webserver.properties" ).getInt( "listen", 443 );
And here is my real code for review:
int port;
try ( InputStream webserverProperties = WebServer.class.getResourceAsStream( "webserver.properties" ) ) {
if ( webserverProperties == null ) {
port = 443;
} else {
Properties p = new Properties();
p.load( webserverProperties );
String listen = p.getProperty( "listen", "443" );
port = Integer.parseInt( listen );
}
}
catch ( NumberFormatException e ) { port = 443; }
finally {}
Answer: You could at least get rid of the if by combining the exceptions:
int port;
try ( InputStream webserverProperties = WebServer.class.getResourceAsStream( "webserver.properties" ) ) {
Properties p = new Properties();
p.load( webserverProperties );
String listen = p.getProperty( "listen", "443" );
port = Integer.parseInt( listen );
}
catch ( NumberFormatException|NullPointerException e ) { port = 443; }
finally {}
or ignore exceptions at all:
int port= 443;
try ( InputStream webserverProperties = WebServer.class.getResourceAsStream( "webserver.properties" ) ) {
Properties p = new Properties();
p.load( webserverProperties );
String listen = p.getProperty( "listen", "443" );
port = Integer.parseInt( listen );
}
catch ( Exception e ) {}
finally {} | {
"domain": "codereview.stackexchange",
"id": 32759,
"tags": "java, beginner, configuration, properties"
} |
Moving Base set distance without Navigation stack | Question:
I am using a Beaglebone Black on a iRobot Create 2 for some experimenting. I want to be able to move the base a set amount without using the navigation stack, as the Beaglebone would be overwhelmed. I am using the new Create driver that came out and it publishes /odom and subscribes to /cmd_vel. Is there a way to accomplish this? I only need to move about 12 inches but I need to move as precisely as possible.
Thanks,
luketheduke
Originally posted by luketheduke on ROS Answers with karma: 285 on 2016-04-21
Post score: 0
Answer:
The wheel odometry from the Create is not the greatest. I would expect it to be able to travel 12 inches (+- 0.5inch) on wheel encoders alone, but you can let me know.
I would record a reference odometry message before you command the robot to move. Then continually compute the distanced travelled by taking the difference between the current odometry message and the reference. When the robot has travelled the desired distance, send a zero command to /cmd_vel. Don't forget the units reported on /odom are in meters. I'd expect the slower you command the robot to move, the closer you'll be able to stop at the desired mark. Also note that when rotation is involved, the odometry estimate becomes worse.
EDIT: To clarify, I am suggesting writing your own controller node that subscribes to /odom and publishes to /cmd_vel.
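The distance computation in that callback is then just a Euclidean norm in the XY plane. A sketch with plain tuples standing in for the position fields of nav_msgs/Odometry (the real message structure is not shown here):

```python
import math

def distance_travelled(reference_xy, current_xy):
    """Euclidean distance in the XY plane, in meters
    (Z is ignored for a ground robot)."""
    dx = current_xy[0] - reference_xy[0]
    dy = current_xy[1] - reference_xy[1]
    return math.hypot(dx, dy)

TARGET = 12 * 0.0254            # 12 inches in meters; /odom reports meters
reference = (0.0, 0.0)          # recorded before commanding the robot
current = (TARGET, 0.0)         # latest position from /odom
if distance_travelled(reference, current) >= TARGET:
    pass                        # here you would publish a zero /cmd_vel
```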
Originally posted by jacobperron with karma: 1870 on 2016-04-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by luketheduke on 2016-04-21:
How would I listen to /odom and derive the distance moved?
Comment by jacobperron on 2016-04-21:
I would follow the tutorial on how to write a ROS publisher and subscriber.
In the subscriber callback for /odom, you can implement the logic I described.
Comment by luketheduke on 2016-04-21:
Specifically, in the odom message, which part should I record? The robot moves in 2D so the Z location should be ignored, correct?
Comment by jacobperron on 2016-04-21:
Right, it should be following REP 103.
Positive X is forward
Positive Y is to the left
Positive Z is up
You'll probably want to compute the Euclidean distance in XY plane. | {
"domain": "robotics.stackexchange",
"id": 24423,
"tags": "ros, navigation, create-autonomy, create2, beagleboneblack"
} |
Quantization Image using MATLAB | Question: I'm trying to quantize an 8-bit image to 4 or 2 bits uniformly. I searched the internet; interestingly, I could not find exactly what I want. Then I wrote a simple piece of code for it myself. I'm curious about whether there is a built-in function in MATLAB which converts an 8-bit image to 4 bits uniformly. My results using the methods from the internet are not good. Am I doing something wrong when I use these methods? Thank you in advance.
Method from the internet, giving strange results:
reducedImage = uint8((single(monalisa)/256)*2^4);
Method written by me. I'm including the code to clarify what I want to do.
monalisa_2= monalisa;
figure
monalisa_2(monalisa<=63) = 32 ; %% 00
monalisa_2(monalisa<=127&monalisa>63) = 85 ; %% 01
monalisa_2(monalisa<=191&monalisa>127) = 159 ; %% 10
monalisa_2(monalisa<=255&monalisa>191) = 223 ; %% 11
imshow(monalisa_2)
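For reference, the uniform requantization being attempted here — split the 8-bit range into 2^b equal bins and replace each bin with its midpoint — can be sketched language-agnostically (Python here, purely illustrative; note the exact midpoints for 2 bits are 32/96/160/224, slightly different from the hand-picked 32/85/159/223 above):

```python
def requantize_uniform(pixels, bits):
    """Uniformly requantize 8-bit gray values (0..255) to 2**bits levels,
    mapping each equal-width input range to its midpoint value."""
    levels = 2 ** bits
    step = 256 // levels                          # width of each input bin
    return [(p // step) * step + step // 2 for p in pixels]
```

For a whole image this would be applied element-wise (e.g. with NumPy integer division on the array).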
Answer: I point you to this https://stackoverflow.com/questions/12723699/changing-image-bit-depth-using-matlab it can be done. This particular post is only for png, but if I'm not mistaken all image types have the depth parameter so they can likely all be changed in a similar manner
EDIT
For your particular problem, since your images are already grayscale you would do something like
%change this as needed
desired_bit_depth = 2;
my_pixel_depth = 2 ^ desired_bit_depth;
%converts grayscale to an indexed image at the appropriate depth
[ind_im, reduced_colormap ] = gray2ind(my_gray_image, my_pixel_depth)
%displays your new image
figure()
subplot(1,2,1);imshow(my_gray_image); title('original')
subplot(1,2,2);imshow(ind_im,reduced_colormap); title(sprintf('reduced to %d bits',desired_bit_depth))
% save indexed png
imwrite(ind_im, reduced_colormap, 'test.png', 'bitdepth', desired_bit_depth);
EDIT #2: While reducing the "bits" to display the image, the "colormap" is made by taking the average of each range
I kept the original answer because it still works and may suit someone else's needs in the future.
For this solution, rather than using evenly spaced values for our "colormap" we actually use the gray value that is the average over the specified range
ex
to convert to 1 bit image we have two ranges 0,128 and 128,256
replace all values between 0,128 with the average value between 0,128
replace all values between 128,256 with the average value between 128,256
function result_im = ChangeBitDepthGrayImage(gray_im, desired_bit_depth)
if (desired_bit_depth < 1)
disp('converting to binary(1 bit) image');
desired_bit_depth = 1;
end
if (desired_bit_depth > 8)
disp('converting to 8 bit image');
desired_bit_depth = 8;
end
%assuming we start with 8 bit image 256 levels
num_levels = 2 ^ desired_bit_depth;
%figures out how big each range should be, we use +1 because if we
%divide the data into N levels, there should be N+1 boundaries
limits = linspace(0,256,num_levels + 1);
result_im = uint8(zeros(size(gray_im)));
for i = 1:num_levels
lower_lim = limits(i);
upper_lim = limits(i+1);
%creates a binary mask of values between the limits, the output is
%0 or 1, but we need to make it uint8 for the next step
temp_mask = uint8((gray_im >= lower_lim) & (gray_im < upper_lim));
%multiplies image by mask, this isolates only pixels in the given
%range
image_only_in_range = temp_mask .* gray_im;
%finds the mean of that small part of the image. this weird notation is
%taking the average of nonzero elements
avg_val_for_range = round(mean(image_only_in_range(image_only_in_range~=0)));
%replaces all pixels in that range with the average val
result_im = result_im +(avg_val_for_range * temp_mask);
end
%i just picked some random figure
figure(32)
subplot(1,2,1);imshow(gray_im);title('original image');
subplot(1,2,2);imshow(result_im);title(sprintf('modified to %dbit image',desired_bit_depth));
end | {
"domain": "dsp.stackexchange",
"id": 2480,
"tags": "matlab, quantization"
} |
AutoCAD circle dimension annotation meanings | Question: What do the following annotations in this AutoCAD drawing mean?
R1.25 TYP (circle/hole Left)
2XØ1.0 (circle/hole Right)
Ø1.25 (circle/hole Bottom)
Based on those annotations, what is the diameter (or radius) of each of the three circles/holes in this AutoCAD drawing?
Also, if I'm reading correctly, the FILLET command can be used for "rounds" and "fillets", correct? In reference to the dimension annotation R.5 (meaning radius 0.5).
Answer: The outside circle like you said has a radius of 1.25. The two holes at the top have a radius of 0.5 as you said and the bottom hole has a radius of 0.625.
What the drawing doesn't specify, I think, is the "outside circle" of the bottom and right holes. | {
"domain": "engineering.stackexchange",
"id": 5423,
"tags": "mechanical-engineering, autocad"
} |
Are ultracold atoms only created by intelligent life? | Question: Nature has particle accelerators that are far beyond our capacity, but occasionally I hear atomic physicists claim that they are able to make something that has never been formed in any natural process (that is, not coming from intelligent life). This has always seemed plausible to me, but is it really true?
For definiteness, let's say we have produced 100,000 atoms at 100 nK for 10 seconds, which is quite conservative in every parameter. What is the probability of this many atoms having ever taken this temperature for this amount of time due to spontaneous fluctuations, anywhere in the observable universe? How about 10 atoms? Supposing the universe is infinite in size, what volume would we have to look at before we had a decent probability of this ever occurring? Feel free to make any simplifying assumptions that would give an upper bound to this probability, such as assuming that the universe has been at 3 K the entire time since the Big Bang.
Edit: since no one has bitten, I will point out the farthest I've gotten on this question: the fluctuation theorem states, if I understand it correctly, that following:
$\text{Pr}(\Delta S=-S_{UC})=\text{Pr}(\Delta S=+S_{UC})e^{-S_{UC}t}$
where $S_{UC}$ is the entropy decrease needed to take the atoms from 3 K to 100 nK. In other words, it is exponentially more likely that they will spontaneously increase by the needed entropy, itself presumably an unlikely event.
Answer: Although not a complete answer, one place to start is with the coldest naturally occurring place in the universe, which is the Boomerang Nebula, a planetary nebula that is around 1 K. As best as I can tell, this cooled below the CMB temperature simply by adiabatic expansion, and is insulated in its interior from CMB heating. Is this a feasible way to get to ultracold temperatures?
For a monoatomic gas, recall that adiabatic expansion is $TV^{2/3}=const$. So something that cools to 100 nK from 3 K would have to expand in volume by a factor of ~$10^{11}$. Which is, obviously, a lot, but space is big and has more than enough room. The Boomerang nebula is about 1 light-year across and is expanding out at about 164 km/s. So we could imagine, for example, a similar object that starts out with a radius of around 10^(-4) LY (which is still 10,000 times larger than the Sun) and expands to the same size at the same rate, which would take around 1000 years. This doesn't seem particularly implausible, although I'm no astronomer.
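The factor-of-$10^{11}$ claim above is easy to sanity-check numerically (simple arithmetic, not new physics):

```python
# Monoatomic adiabat: T * V**(2/3) = const  =>  V_final/V_initial = (T_i/T_f)**(3/2)
T_start = 3.0        # K, roughly the CMB temperature
T_end = 100e-9       # K, the 100 nK target
expansion = (T_start / T_end) ** 1.5
print(f"required volume expansion ~ {expansion:.2e}")  # ~1.64e11
```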
The harder question to answer is what the heating rate from the CMB would be in the interior of this cloud. It would only have to be very small, of course, to counteract the adiabatic cooling. Looking at one of the papers on the Boomerang nebula, the authors there estimate the cosmic ray heating as $4*10^{-28}$ erg/s, while the cooling rate is around $10^{-25}$ in the same units. So since adiabatic cooling goes slower as the gas gets colder (indeed, in this simple model we have $\dot{T}(t)=-T/t$), we would probably expect that by the time the gas has cooled to about 1/1000th of the CMB temperature, if not sooner, the heating rate would match the cooling rate.
All in all, my very crude best guess then is that adiabatic expansion of this sort could not lead to a temperature below the mK scale. | {
"domain": "physics.stackexchange",
"id": 26286,
"tags": "cosmology, astronomy, astrophysics, atomic-physics, non-equilibrium"
} |
Online-Offline Class Manager | Question: Router is a generic class that manages multiple contracts. It is able to find out whether it's an online or offline situation at the very moment an operation is being made.
There's a really easy way of doing it: a class for each Online-Offline pair that implements the contract, checks in every method whether it's online or not, and makes the right call. And that's exactly what I want to avoid.
Just FYI, behind the scenes it would be an Online scenario connected to WCF services and an Offline scenario connected to a client local database.
FYI 2: I've tried to accomplish this avoiding Interception and AOP stuff, but I hit a dead end. You can see this post where I implement what seems to be a good solution, but it establishes whether it's connected or not in the constructor, while the real-world scenario needs this check at operation level, not constructor level.
It's ready to run & test: just copy/paste on a new console application.
using System;
using System.Reflection;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.InterceptionExtension;
namespace ConsoleApplication1
{
public class Unity
{
public static IUnityContainer Container;
public static void Initialize()
{
Container = new UnityContainer();
Container.AddNewExtension<Interception>();
Container.RegisterType<ILogger, OnlineLogger>();
Container.Configure<Interception>().SetInterceptorFor<ILogger>(new InterfaceInterceptor());
}
}
class Program
{
static void Main(string[] args)
{
Unity.Initialize();
var r = new Router<ILogger, OnlineLogger, OfflineLogger>();
try
{
r.Logger.Write("Method executed.");
}
catch (CantLogException ex)
{
r.ManageCantLogException(ex);
}
Console.ReadKey();
}
}
public class Router<TContract, TOnline, TOffline>
where TOnline : TContract, new()
where TOffline : TContract, new()
{
public TContract Logger;
public Router()
{
Logger = Unity.Container.Resolve<TContract>();
}
public void ManageCantLogException(CantLogException ex)
{
// Is this an ugly trick? I mean, the type was already registered with online.
Unity.Container.RegisterType<TContract, TOffline>();
Logger = Unity.Container.Resolve<TContract>();
var method = ((MethodBase)ex.MethodBase);
method.Invoke(Logger, ex.ParameterCollection);
}
}
public interface ILogger
{
[Test]
void Write(string message);
}
public class OnlineLogger : ILogger
{
public static bool IsOnline()
{
// A routine that check connectivity
return false;
}
public void Write(string message)
{
Console.WriteLine("Logger: " + message);
}
}
public class OfflineLogger : ILogger
{
public void Write(string message)
{
Console.WriteLine("Logger: " + message);
}
}
[System.Diagnostics.DebuggerStepThroughAttribute()]
public class TestAttribute : HandlerAttribute
{
public override ICallHandler CreateHandler(IUnityContainer container)
{
return new TestHandler();
}
}
public class TestHandler : ICallHandler
{
public int Order { get; set; }
public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
{
Console.WriteLine("It's been intercepted.");
if (!OnlineLogger.IsOnline() && input.Target is OnlineLogger)
{
Console.WriteLine("It's been canceled.");
throw new CantLogException(input.MethodBase, input.Inputs);
}
return getNext()(input, getNext);
}
}
public class CantLogException : Exception
{
public MethodBase MethodBase { get; set; }
public object[] ParameterCollection { get; set; }
public CantLogException(string message)
: base(message)
{
}
public CantLogException(MethodBase methodBase, IParameterCollection parameterCollection)
{
this.MethodBase = methodBase;
var parameters = new object[parameterCollection.Count];
int i = 0;
foreach (var parameter in parameterCollection)
{
parameters[i] = parameter;
i++;
}
this.ParameterCollection = parameters;
}
}
}
Questions
Performance? Handling online-offline status through exceptions smells really bad.
Would multi-threading operations expose this design as thread-unsafe?
Isn't there any other way of preventing method execution?
Any other constructive comments are appreciated too.
I'm not interested on paid third-party stuff, so sadly things like PostSharp aren't options for me.
Answer: With Castle DynamicProxy, I would create an interface proxy without a target and an interceptor that would choose which implementation to use. Something like:
class OnlineOfflineInterceptor<TInterface> : IInterceptor
{
private readonly TInterface m_online;
private readonly TInterface m_offline;
public OnlineOfflineInterceptor(TInterface online, TInterface offline)
{
m_online = online;
m_offline = offline;
}
public void Intercept(IInvocation invocation)
{
invocation.ReturnValue = invocation.Method.Invoke(
Connectivity.IsConnected() ? m_online : m_offline, invocation.Arguments);
}
}
…
var proxyGenerator = new ProxyGenerator();
ILogger logger = proxyGenerator.CreateInterfaceProxyWithoutTarget<ILogger>(
new OnlineOfflineInterceptor<ILogger>(new OnlineLogger(), new OfflineLogger()));
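Language aside, the dispatch idea this interceptor implements — one proxy that picks the online or offline implementation per call, at call time — can be sketched generically (Python here, purely illustrative, not Castle DynamicProxy itself):

```python
class OnlineOfflineProxy:
    """Forward every attribute/method access to the online or offline
    implementation, deciding connectivity at call time rather than in
    the constructor (cf. the interceptor above)."""
    def __init__(self, online, offline, is_connected):
        self._online = online
        self._offline = offline
        self._is_connected = is_connected
    def __getattr__(self, name):
        # Only called for names not set in __init__, i.e. contract members.
        target = self._online if self._is_connected() else self._offline
        return getattr(target, name)
```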
I think this is more elegant than binding offline implementation to the online implementation by having a property on the online version that returns the offline version. | {
"domain": "codereview.stackexchange",
"id": 2428,
"tags": "c#, generics"
} |
Help with understanding Simulated Annealing algorithm | Question: I'm trying to wrap my head around it, but no matter what I read, I still can't fully understand it.
I tried to read a little bit about the annealing process in physics, but I have no background whatsoever in physics, let alone in thermodynamics, so I couldn't understand what is it exactly, and how it fits into the algorithm.
Here is the algorithm:
In the Hill Climbing algorithm, the reasoning can be easily described: Of all the successors of the current state - choose the highest-valued. But in Simulated Annealing... well, I can see what the algorithm does, I just don't understand the reasoning behind it:
1. It starts a timer.
2. It chooses a random successor.
3. It evaluates how "far away" the randomly chosen successor from current.
4. If the successor is indeed a "progress in the right direction" ($\Delta E > 0$) then we move ahead towards the direction of successor; otherwise, we move ahead towards the direction of successor with some (odd) probability that depends on the timer(?).
Why is randomly choosing a successor better than the Hill Climbing method?
Can someone please explain the reasoning behind it?
Do I really have to understand the annealing process? If so, can someone please explain it in layman terms?
Answer: The variable $T$ is not a timer but the temperature. It starts very high, making it more likely that transitions are taken, and then slowly cools. It's called annealing due to the metallurgical technique of the same name.
Simulated annealing handles one problematic aspect of the hill climbing algorithm, namely that it can get stuck at a local optimum which is not a global optimum. Instead of getting stuck, simulated annealing offers a way out: if the proposed change only marginally deteriorates the objective function, then with some reasonable probability we make it anyway. This way we can escape local optima.
As the process proceeds, we increase the threshold of making unfavorable changes by decreasing the temperature; when the temperature is zero, simulated annealing is the same as hill climbing (and when the temperature is infinite, it is the same as a random walk). The idea is that at first we need to escape local optima more vigorously, but eventually we settle at a good solution which does not require as much tinkering with (except for hill climbing).
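The acceptance rule described above can be sketched as follows (a generic illustration; sign conventions vary, and here $\Delta E > 0$ means the move improves the objective, matching the question):

```python
import math
import random

def accept(delta_e, temperature, rng=random.random):
    """Simulated-annealing acceptance: always take improving moves; take a
    worsening move with probability exp(delta_e / T), which shrinks as the
    temperature T cools toward zero."""
    if delta_e > 0:
        return True            # improving move: always accept
    if temperature <= 0:
        return False           # T = 0: behaves exactly like hill climbing
    return rng() < math.exp(delta_e / temperature)
```

At very high temperature the exponential is close to 1, so almost every move is accepted (a random walk); at zero temperature only improving moves pass (hill climbing).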
You mention that hill climbing always chooses the best neighbor, but this is only one version of the algorithm, and not necessarily the best one. In other versions we choose an improving neighbor in some other fashion. I don't have much intuition which approach is better, though you are right that choosing the best neighbor sounds like a reasonable heuristic. | {
"domain": "cs.stackexchange",
"id": 4354,
"tags": "constraint-satisfaction"
} |
The phylogenetic definition of the clade Dinosauria | Question: Up until two years ago the clade of Dinosauria was defined as all the descendants of the most recent common ancestor of Triceratops and birds. From Wikipedia:
Under phylogenetic nomenclature, dinosaurs are usually defined as the group consisting of the most recent common ancestor (MRCA) of Triceratops and Neornithes, and all its descendants. It has also been suggested that Dinosauria be defined with respect to the MRCA of Megalosaurus and Iguanodon, because these were two of the three genera cited by Richard Owen when he recognized the Dinosauria.
However, after the resurrection of the clade Ornithoscelida by Baron et al in 2017 (see here), as the clade consisting of Ornithischia and Theropoda, the MRCA of Triceratops and birds defines Ornithoscelida and not the whole Dinosauria, assuming we still define Sauropodomorphs as dinosaurs.
So if dinosaurs include Sauropodomorpha, Herrerasaurs, Ornithischia and Theropoda, what is the most up-to-date phylogenetic definition of the clade Dinosauria?
Answer: As of right now it is still the same; the evidence for Sauropodomorpha being an outgroup is not statistically more reliable. This may change, but even so it will not have much impact. There are several ways in which Dinosauria is defined.
The most recent common ancestor of Megalosaurus and Iguanodon is also sometimes used since they were the original animals used to define the group.
The last common ancestor of Triceratops horridus, Passer domesticus, Diplodocus carnegii, and all of its descendants is also used just because of Baron's work.
Finally there are several unambiguous synapomorphies used to define the group, including such things as ankle structure. A full list can be found here.
In a group as well known as dinosaurs, a discovery like Baron's (if it holds up) is just going to lead to a change in the definition. | {
"domain": "biology.stackexchange",
"id": 9541,
"tags": "phylogenetics, dinosaurs, cladistics"
} |
RViz throws "undefined symbol" exception when including from include folder | Question:
Rough Description: (intro / tl;dr)
The Problem is that when i move my header files for an RViz panel plugin to a dedicated include folder, the project will compile, but rviz will throw a "Poco exception = ...my_panel.so: undefined symbol: _ZTVN16my_panel14MyPanelE" Exception.
more precisely, this happens when i change the folder structure from:
meta_package/my_panel/my_panel.h
meta_package/my_panel/my_panel.cpp (including the header with "#include my_panel.h")
to:
meta_package/my_panel/include/meta_package/my_panel.h
meta_package/my_panel/src/my_panel.cpp (including the header with "#include <my_panel/my_panel.h>")
Project in Detail:
meta_package/my_panel/include/meta_package/my_panel.h , former @ meta_package/my_panel/my_panel.h
#ifndef MY_PANEL_H
#define MY_PANEL_H
#ifndef Q_MOC_RUN
#include <ros/ros.h>
#include <rviz/panel.h>
#endif //Q_MOC_RUN
class QLineEdit;
namespace my_panel
{
class MyPanel: public rviz::Panel
{
Q_OBJECT
public:
MyPanel(QWidget *parent = 0);
virtual void load( const rviz::Config& config);
virtual void save( rviz::Config config ) const;
public Q_SLOTS:
void savePathFile (const QString& save_file_name);
void loadPathFile (const QString& load_file_name); //"const QString& " ?
protected Q_SLOTS:
void dumpParamServer();
void clearParamServer();
protected:
QString open_file;
QString save_file_name;
QString load_file_name;
QLineEdit* save_file_editor;
QLineEdit* load_file_editor;
ros::NodeHandle nh;
};
} //end namespace my_panel
#endif //MY_PANEL_H
meta_package/my_panel/src/my_panel.cpp , former @ meta_package/my_panel/my_panel.cpp
#include <stdio.h>
#include <QPainter>
#include <QLineEdit>
#include <QHBoxLayout>
#include <QVBoxLayout>
#include <QLabel>
#include <my_panel/my_panel.h> //former #include "my_panel.h"
namespace my_panel
{
MyPanel::MyPanel( QWidget *parent )
: rviz::Panel( parent )
{
QHBoxLayout* save_file_layout = new QHBoxLayout;
save_file_layout->addWidget( new QLabel( "Save File with Name:" ));
save_file_editor = new QLineEdit;
save_file_layout->addWidget( save_file_editor );
QHBoxLayout* load_file_layout = new QHBoxLayout;
load_file_layout->addWidget( new QLabel( "Load File with Name:" ));
load_file_editor = new QLineEdit;
load_file_layout->addWidget( load_file_editor );
QVBoxLayout* layout = new QVBoxLayout;
layout->addLayout( load_file_layout );
layout->addLayout( save_file_layout );
setLayout( layout );
}
void MyPanel::savePathFile(const QString& save_file_name)
{
//TODO
}
void MyPanel::loadPathFile(const QString& load_file_name)
{
//TODO
}
void MyPanel::dumpParamServer()
{
//TODO
}
void MyPanel::clearParamServer()
{
//TODO
}
void MyPanel::save( rviz::Config config ) const
{
rviz::Panel::save( config );
//config.mapSetValue( "String", suff );
}
// Load all configuration data for this panel from the given Config object.
void MyPanel::load( const rviz::Config& config )
{
rviz::Panel::load( config );
/* if( config.mapGetString( "String", &stuff ))
{
updateStuff();
}*/
}
} //end namespace my_panel
#include <pluginlib/class_list_macros.h>
PLUGINLIB_EXPORT_CLASS(my_panel::MyPanel,rviz::Panel )
meta_package/my_panel/CMakeLists.txt
cmake_minimum_required(VERSION 2.8.3)
project(my_panel)
find_package(catkin REQUIRED COMPONENTS rviz)
catkin_package()
include_directories(${catkin_INCLUDE_DIRS} include)
link_directories(${catkin_LIBRARY_DIRS})
## This setting causes Qt's "MOC" generation to happen automatically.
set(CMAKE_AUTOMOC ON)
## This plugin includes Qt widgets, so we must include Qt.
## We'll use the version that rviz used so they are compatible.
if(rviz_QT_VERSION VERSION_LESS "5")
message(STATUS "Using Qt4 based on the rviz_QT_VERSION: ${rviz_QT_VERSION}")
find_package(Qt4 ${rviz_QT_VERSION} EXACT REQUIRED QtCore QtGui)
## pull in all required include dirs, define QT_LIBRARIES, etc.
include(${QT_USE_FILE})
else()
message(STATUS "Using Qt5 based on the rviz_QT_VERSION: ${rviz_QT_VERSION}")
find_package(Qt5 ${rviz_QT_VERSION} EXACT REQUIRED Core Widgets)
## make target_link_libraries(${QT_LIBRARIES}) pull in all required dependencies
set(QT_LIBRARIES Qt5::Widgets)
endif()
## The tutorial prefers the Qt signals and slots to avoid defining "emit",
##"slots", etc because they can conflict with boost signals, so define
## QT_NO_KEYWORDS here.
add_definitions(-DQT_NO_KEYWORDS)
set(HDR_FILES
include/my_panel/my_panel.h
)
set(SRC_FILES
src/my_panel.cpp
)
qt_wrap_cpp(${PROJECT_NAME} ${SRC_FILES} ${HDR_FILES})
## An rviz plugin is just a shared library, so here we declare the
## library to be called ``${PROJECT_NAME}`` and specify the list of
## source files we collected above in ``${SRC_FILES}``.
add_library(${PROJECT_NAME} ${SRC_FILES})
## Link the myviz executable with whatever Qt libraries have been defined by
## the ``find_package(Qt4 ...)`` line above, or by the
## ``set(QT_LIBRARIES Qt5::Widgets)``, and with whatever libraries
## catkin has included.
target_link_libraries(${PROJECT_NAME} ${QT_LIBRARIES} ${catkin_LIBRARIES})
## Mark executables and/or libraries for installation
install(TARGETS ${PROJECT_NAME}
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
## Mark cpp header files for installation
install(DIRECTORY include/${PROJECT_NAME}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
FILES_MATCHING PATTERN "*.h"
)
install(FILES
plugin_description.xml
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION})
install(DIRECTORY media/
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/media)
install(DIRECTORY icons/
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}/icons)
meta_package/my_panel/package.xml
<package>
<name>my_panel</name>
<version>0.10.1</version>
<description>
RViz Plugin for My project
</description>
<maintainer email="My@email.example">MyName</maintainer>
<license>BSD</license>
<author>MyName</author>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>qtbase5-dev</build_depend>
<build_depend>rviz</build_depend>
<run_depend>libqt5-core</run_depend>
<run_depend>libqt5-gui</run_depend>
<run_depend>libqt5-widgets</run_depend>
<run_depend>rviz</run_depend>
<export>
<rosdoc config="${prefix}/rvizdoc.yaml"/>
<rviz plugin="${prefix}/plugin_description.xml"/>
</export>
</package>
meta_package/my_panel/plugin_description.xml
<library path="lib/libmy_panel">
<class name="my_panel/MyPanel"
type="my_panel::MyPanel"
base_class_type="rviz::Panel">
<description>
A RViz plugin panel for the My project
</description>
</class>
</library>
Error Message:
[ERROR] [1501182670.698822643]: PluginlibFactory: The plugin for class 'my_panel/MyPanel' failed to load. Error: Failed to load library /home/ros/ros/devel/lib//libmy_panel.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/ros/ros/devel/lib//libmy_panel.so: undefined symbol: _ZTVN16my_panel14MyPanelE)
Thanks in advance!
Originally posted by G on ROS Answers with karma: 46 on 2017-07-27
Post score: 0
Original comments
Comment by ahendrix on 2017-07-29:
I remember seeing a similar question recently, and I think it ended up being a problem with cmake's automoc no longer running MOC on the headers when they were in a different directory.
Comment by ahendrix on 2017-07-29:
http://answers.ros.org/question/265610/undefined-reference-to-vtable-for-myviz/
Comment by G on 2017-07-30:
@ahendrix thanks for the suggestion! I followed your link and their answers, so I added this to the CMakeLists.txt: "qt_wrap_cpp(${PROJECT_NAME} ${SRC_FILES} ${HDR_FILES})". Sadly it didn't work, and it was the only suggestion except putting headers into the src folder. Am I doing it wrong?
Answer:
I simply needed to add my new "HDR_FILES" variable to the add_library() call!
in total:
set(HDR_FILES
include/heika_panel_beta/heika_panel_beta.h
)
set(SRC_FILES
src/heika_panel_beta.cpp
)
add_library(${PROJECT_NAME} ${SRC_FILES} ${HDR_FILES})
Works now! Thanks to everyone!
Originally posted by G with karma: 46 on 2017-08-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by jayess on 2017-08-07:
If this solved your problem, you should mark it as the correct answer. | {
"domain": "robotics.stackexchange",
"id": 28465,
"tags": "rosparam, roscpp"
} |
Find the frequency response if I have the magnitude response? | Question: If I have the magnitude response of a system, is there a method by which I could calculate the full frequency response?
For example, the magnitude response specification is:
$ 3db \pm 3.5db $ for $|ν|<0.1$
$ <-55db $ for $|ν|<0.2$
Answer: The frequency response of a system can be represented in polar format, in which the magnitude and phase response are considered separately:
$$
H(\omega) = |H(\omega)| \angle H(\omega)
$$
With this representation, it should be clear that the magnitude response alone is not sufficient to characterize the full frequency response of a system; you have to know (or assume) what its phase response is also. For instance, a pure delay $H(\omega) = e^{-j\omega}$ has exactly the same magnitude response as the identity system $H(\omega) = 1$, but a completely different phase response. | {
"domain": "dsp.stackexchange",
"id": 732,
"tags": "frequency-response, transfer-function"
} |
Strouhal number motivation | Question: I am looking for a nice way to motivate the Strouhal number definition. Let me illustrate what I mean on the Reynolds number. (As ususal, $\mathbf{u}$, $p$, $\rho$, $\nu$ denote the flow velocity, pressure, density and kinematic viscosity respectively.)
Sure, there are multiple good ways to show the importance and properties of the Reynolds number. I particularly like the one based on the momentum equation (the Navier-Stokes equation) scaling. The equation reads:
$$
\frac{\partial \mathbf{u}}{\partial t} + \left( \mathbf{u} \cdot \nabla \right)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu \nabla^2 \mathbf{u}
$$
then by introducing $X_i = x_i/L$, $U_i = u_i/V$, $P = p/(\rho V^2)$, $\tau = \nu t/L^2$ we obtain:
$$
\frac{\partial U_i}{\partial \tau} + \text{Re}\left( U_k \frac{\partial U_i}{\partial X_k} + \frac{\partial P}{\partial X_i} \right) = \frac{\partial^2 U_i}{\partial X_k \partial X_k}
$$
i.e. the Reynolds number $\text{Re} = \frac{VL}{\nu}$ appears naturally as the only control parameter of the scaled system.
And now, is there a way to obtain the Strouhal number by a similar procedure?
Notes and notions:
The particular beauty of the aforementioned procedure is that "it is autonomous". I presume that for the Strouhal number there should be an assumption such as "let the flow instability be described by a time-harmonic function".
It must be based on Euler equations rather than Navier-Stokes equations.
Would recasting the momentum equation in Crocco's form be of any help?
Answer: In your definition of the dimensionless time you have assumed that the characteristic scale for time is $\frac{L^2}{\nu}$. If you instead assume that the characteristic scale is the inverse of the vortex-shedding frequency $f^{-1}$ and redo the analysis you will retrieve the Strouhal number. Concretely (a sketch: with $X_i = x_i/L$, $U_i = u_i/V$, $P = p/(\rho V^2)$, $\tau = f t$, dividing the momentum equation by $V^2/L$):
$$
\text{St}\,\frac{\partial U_i}{\partial \tau} + U_k \frac{\partial U_i}{\partial X_k} = -\frac{\partial P}{\partial X_i} + \frac{1}{\text{Re}}\,\frac{\partial^2 U_i}{\partial X_k \partial X_k},
\qquad \text{St} = \frac{fL}{V}
$$
You will need to rescale the characteristic scale for the pressure accordingly. | {
"domain": "physics.stackexchange",
"id": 46048,
"tags": "fluid-dynamics, frequency, scaling"
} |
slideToggle plugin | Question: I would like a code review for my first simple slideToggle jQuery plugin.
Demo
(function($) {
$.fn.ezToggle = function(options) {
var defaults = {
selector : '.yourSelector',
speed : 300,
openedClassName : 'opened',
closedClassName : 'closed',
},
options = $.extend(defaults, options);
return this.each(function() {
var originalHeight = $(this).outerHeight(true);
options.minHeight = options.minHeight || $(this).find(defaults.selector).outerHeight(true);
if (!$(this).hasClass(defaults.openedClassName)) {
$(this).addClass(defaults.closedClassName).height(options.minHeight);
}
$(this).find(defaults.selector).on('click', function(e) {
e.preventDefault();
var $parent = $(this).parent();
if ( $parent.hasClass(defaults.closedClassName) ) {
$('.'+defaults.openedClassName)
.removeClass(defaults.openedClassName)
.addClass(defaults.closedClassName)
.animate( {
height : options.minHeight
}, defaults.speed );
$parent.removeClass(defaults.closedClassName)
.addClass(defaults.openedClassName)
.animate({
height : originalHeight
}, defaults.speed);
} else if ( $parent.hasClass(defaults.openedClassName) ) {
$parent.removeClass(defaults.openedClassName)
.addClass(defaults.closedClassName)
.animate({
height : options.minHeight
}, defaults.speed);
}
});
});
};
}) (jQuery);
Answer: If you didn't know already, this type of plugin is referred to as an accordion. I would suggest reading the code for the jQuery plugin (since you are using jQuery) to find out how they do it. With this you can get ideas, see their organization, and if you find something that can be fixed/improved you'll have the power to contribute!
To add to redexp's answer about the preventDefault() method.
In your click functions you might want to prevent the default browser action on a link, which is to direct the page to that link. Since you just want to perform something on your page and don't actually want the browser to leave the page you should prevent that action. The difference between the two is that return false; does that and at the same time stops event propagation. Propagation being when you click on an element, it triggers an event on the element, and any events on its parent elements (because technically they were also clicked). Whether or not you need to stop propagation is up to you, so pick accordingly.
So basically:
function() {
return false;
}
// Is the same as doing
function(e) {
e.preventDefault();
e.stopPropagation();
}
It's all probably a lot more complicated than this and articles like this probably explain it all a lot better.
As you progress in plugin development you should start to think about implementing design patterns. There are almost endless options of patterns you can use and some you can event make sort of a hybrid pattern. Don't feel overwhelmed with all the options pick out a couple to start with and try them out. I'd suggest the Module Pattern (another good article) since you've already sort of implemented it in this plugin. Also look at the Observer Pattern (aka Pub/Sub) it's great for dealing with custom events. This video by Jeffery Way does a great job of explaining the concept. I'd recommend you'd watch the rest of the episodes from that series as well because he does cover some good ground on plugins. | {
"domain": "codereview.stackexchange",
"id": 4117,
"tags": "javascript, jquery, plugin"
} |