| anchor | positive | source |
|---|---|---|
A circular motion problem | Question:
A car goes on a horizontal circular road of radius $R$, the speed increasing at a constant rate $\frac{dv}{dt}=a(constant)$. The friction coefficient of the tyre is $\mu$. Find the speed at which the car will skid.
For vertical equilibrium, the normal force $\mathcal{N}=mg$, where $m$ is the mass of the car. For skid, $\frac{mv^2}{R}-f=ma=\frac{mv^2}{R}-\mu mg$.
Now it is well known that $v=\sqrt{\mu gR}$, but it does not help me. I cannot proceed from here. The answer is pretty weird and it seems like I require integration, as the answer possesses a lot of squares. So I tried differentiating, but it takes me nowhere. Can you please help me?
Note: Please do not close this or mark it as opinion based because it is not any homework. I simply am fond of physics so like doing problems and i encountered this one. I am just asking for help or the strategy.
Answer: You probably are not arriving at the answer because you haven't considered the tangential acceleration, which will provide a pseudo force.
So there are two forces acting on the car: the centrifugal force and a tangential force ($ma$) due to the tangential acceleration $a$.
The car will skid when the frictional force equals the resultant of other forces acting on the car.
$f_{max} = F_{net}$
Now $F_{net}$ on the car is the resultant of centrifugal force and tangential force.
$$ \mu mg= \sqrt{ (\frac{mv^2}{r})^2 + (ma)^2}$$
$$ \mu ^2 m^2g^2= (\frac{mv^2}{r})^2 + (ma)^2$$
$$ \mu ^2 g^2= (\frac{v^2}{r})^2 + (a)^2 $$
$$ (\mu ^2 g^2-a^2)r^2= v^4 $$
$$v= ((\mu ^2 g^2-a^2)r^2)^{\frac{1}{4}} $$
The formula that you mentioned, $v= \sqrt{\mu Rg}$, is valid for constant speed, but here the speed changes due to the acceleration $a$. | {
"domain": "physics.stackexchange",
"id": 57711,
"tags": "homework-and-exercises, newtonian-mechanics, friction, centripetal-force"
} |
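A quick numerical check of the answer's final formula: at the skid speed, the resultant of the centripetal and tangential accelerations should exactly reach the friction limit $\mu g$ (the values below are hypothetical, chosen only for illustration):

```python
import math

# Hypothetical values, chosen only for illustration
mu, g, R, a = 0.8, 9.81, 50.0, 2.0

# Skid speed from the answer's final formula
v_skid = ((mu**2 * g**2 - a**2) * R**2) ** 0.25

# Magnitude of the net specific force: centripetal v^2/R combined with tangential a
f_net = math.hypot(v_skid**2 / R, a)

print(math.isclose(f_net, mu * g))  # True: the friction limit is exactly reached
```

Note that the resulting speed is slightly below the constant-speed limit $\sqrt{\mu g R}$, as expected, since part of the friction budget is spent on the tangential acceleration.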
use rviz and urdf to simulate the differential_drive robot | Question:
I want to use rviz and a URDF to simulate a differential-drive robot, and I have got the robot model and can show it in rviz. I can publish cmd_vel to make the robot move, but here is the problem: moving the robot just changes the entire robot's velocity and orientation, whereas I want to command the individual wheel speeds to achieve the same goal. What can I do?
Please give me a hand. Thank you very much!
Originally posted by Tomas yuan on ROS Answers with karma: 56 on 2017-02-19
Post score: 0
Original comments
Comment by gvdhoorn on 2017-02-19:
RViz is not a simulator. It is a visualisation tool. If you want to simulate dynamics, use Gazebo or V-REP or a similar tool.
Comment by Tomas yuan on 2017-02-19:
so if I use the Gazebo,how can I move the robot by changing the individual wheel's velocity ?
Comment by Tomas yuan on 2017-02-19:
thank u very much ! :)
Comment by Tomas yuan on 2017-02-20:
hello, sorry to disturb you again. I read the diff_drive_controller code on GitHub, but I have some problems with speed_limiter.cpp. I have been quite confused by "SpeedLimiter::limit(double& v, double v0, double v1, double dt)" — how does it work?
Comment by gvdhoorn on 2017-02-20:
I did not write that code, so I can't answer you right now. I'd advise you to open a new question.
Comment by Tomas yuan on 2017-02-20:
ok,thank u
Answer:
It's up to you, but I'd look at gazebo_ros_control in combination with the diff_drive_controller.
re: individual wheel velocity: that's not how we typically control mobile bases, but if you want, even that would be possible.
Originally posted by gvdhoorn with karma: 86574 on 2017-02-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 27062,
"tags": "rviz"
} |
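On the "individual wheel speed" part: a differential-drive controller converts the body twist carried by cmd_vel into wheel angular velocities using the wheel separation and wheel radius. This is essentially what diff_drive_controller does internally; the sketch below is a generic illustration with made-up parameter names, not the controller's actual code:

```python
def twist_to_wheel_speeds(v, omega, track_width, wheel_radius):
    """Convert a body twist (linear v [m/s], angular omega [rad/s])
    into left/right wheel angular velocities [rad/s]."""
    v_left = v - omega * track_width / 2.0   # left wheel rim speed
    v_right = v + omega * track_width / 2.0  # right wheel rim speed
    return v_left / wheel_radius, v_right / wheel_radius

# Pure rotation in place: the wheels spin in opposite directions
wl, wr = twist_to_wheel_speeds(v=0.0, omega=1.0, track_width=0.4, wheel_radius=0.05)
print(wl, wr)  # -4.0 4.0
```

Commanding the wheels directly and commanding a twist are therefore equivalent descriptions of the same motion, related by this linear map.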
Could a bolt this size actually be used on a project? | Question: Could there ever be a structure that would require something like this?
Answer: From my location it appears the image is broken; however, with my imagination and the comments I can assume it's a pretty big bolt.
Exceptionally large bolts are surprisingly not that uncommon. They often show up in the oil industry for bolting flanges and for use in huge structures. But they are usually custom, low-production bolts designed for a specific use case, not mass-manufactured. M180 is the largest I've ever seen in a bolt list to purchase from a supplier.
24″ nuts were “common” for a time; however, this has mostly changed to higher-quality, smaller-diameter alloys and bolts.
At one point there were 98″-diameter and nearly 30′-long bolts made in the 60s at Penrith Engineering Works in Clydesdale, Scotland, in 1967; the bolts measured 27′ 4″ long each and had a diameter of 4′ 2″. As far as I remember it was an experiment to bolt oil tankers together that failed miserably.
As mentioned, in the oil industry it is common to use large-diameter bolts; sizes as large as 20″ (w510) can be “easily” purchased from ITH, for example.
There are of course big problems with large bolts: fitting, handling, manufacturing, weight, transportation, actually bolting them, and so on. Often, when the design requires such large bolts, one will generally choose a different (permanent) connection method. | {
"domain": "engineering.stackexchange",
"id": 4359,
"tags": "mechanical-engineering, design, structural-analysis, steel"
} |
Spin hamiltonian matrix representation | Question: To preface, I'm an applied mathematician trying to parse the meaning of physics notation I've come across in a paper. My goal is to understand the setting in terms of matrices and vectors so that I can test an algorithm I'm studying. Since I don't know the terminology or notation, I haven't been able to figure out how to read more about the topic.
I'm reading about spin systems and I see expressions like:
$$
\mathbf{H} = -\sum_{i,j} J_{i,j} \mathbf{s}(i) \cdot \mathbf{s}(j)
$$
where $\mathbf{s}(i)$ is the spin operator at site $i$.
My understanding is that $\mathbf{H}$ can be represented as a matrix of size $(2s+1)^N$, where $N$ is the number of spins and $s$ is the spin number. I have also seen the spin matrices for specific values of $s$, which are of size $2s+1$ (although I'm not sure if these are the same as the $\mathbf{s}(i)$, since they seem to carry $x, y, z, +, -$ labels). So, the missing piece for me is the meaning of $\mathbf{s}(i) \cdot \mathbf{s}(j)$, as well as the meaning of the sum.
Answer: To start: What is $\mathbf{s}(i)$? This is not an operator (not a matrix) but rather meant to denote a vector of operators.
\begin{equation}\mathbf{s}(i)=(s_x(i),s_y(i),s_z(i)),\end{equation}
where $s_\sigma(i)$ is an operator, a $(2s+1)^N$-dimensional matrix.
What does the dot product between them mean? It means to imitate the usual dot product, as in
\begin{equation}
\mathbf{s}(i)\cdot\mathbf{s}(j) = s_x(i)s_x(j)+s_y(i)s_y(j)+s_z(i)s_z(j)
\end{equation}
where the multiplication $s_\sigma(i)s_\sigma(j)$ is simply matrix-matrix multiplication.
Finally: What are the matrices $s_\sigma(i)$, and how are they $(2s+1)^N$-dimensional? You probably already know the form of the matrix $s_\sigma$ for $\sigma=x,y,z$; these are $(2s+1)$-dimensional matrices. When we describe multiple spins, the total Hilbert space is a tensor product of the Hilbert space of the individual spins, so our total Hilbert space for $N$ particles is $\bigotimes^N\mathbb{C}^{2s+1}$, where the $i$th copy of $\mathbb{C}^{2s+1}$ in the tensor product represents the state of the $i$th particle. This is a $(2s+1)^N$-dimensional space, which is why the overall Hamiltonian is a $(2s+1)^N$-dimensional matrix. When we write $s_\sigma(i)$, we mean the spin operator $s_\sigma$ that acts only on the part of the Hilbert space associated to the $i$th spin. So, strictly speaking, we have
\begin{equation}
s_\sigma(i) = I\otimes I\otimes\cdots I\otimes \underbrace{s_\sigma}_{i\text{th position}}\otimes I\cdots \otimes I
\end{equation}
which makes $s_\sigma(i)$ a $(2s+1)^N$ dimensional matrix. | {
"domain": "physics.stackexchange",
"id": 85212,
"tags": "quantum-spin, hamiltonian, notation, spin-models"
} |
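The tensor-product formula in the answer translates almost line by line into NumPy. Below is a minimal sketch for spin-1/2 with nearest-neighbour couplings $J_{i,j}=J$ (my own example; the paper's $J_{i,j}$ can of course be arbitrary):

```python
import numpy as np

# Spin-1/2 matrices (hbar = 1): each local operator is (2s+1) x (2s+1) = 2 x 2
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def spin_op(s_sigma, i, N):
    """Embed s_sigma at site i of an N-site chain (identity on every other
    site, via Kronecker products); the result is a 2^N x 2^N matrix."""
    op = np.array([[1.0 + 0j]])
    for site in range(N):
        op = np.kron(op, s_sigma if site == i else np.eye(2))
    return op

def heisenberg(N, J=1.0):
    """H = -J * sum_i s(i).s(i+1), the dot product expanded as in the answer."""
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(N - 1):
        for s_sigma in (sx, sy, sz):
            H -= J * spin_op(s_sigma, i, N) @ spin_op(s_sigma, i + 1, N)
    return H

H = heisenberg(3)
print(H.shape)  # (8, 8): (2s+1)^N with s = 1/2, N = 3
```

For two sites this reproduces the textbook spectrum of $-\mathbf{s}(1)\cdot\mathbf{s}(2)$: a threefold eigenvalue $-1/4$ (triplet) and a single eigenvalue $+3/4$ (singlet).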
How to deal with infinite loops in the MCTS search of AlphaTensor when using a transposition table? | Question: In the published version of the AlphaTensor algorithm, there are two mentions of a transposition table:
In addition, a transposition table is used to recombine different action sequences if they reach the exact same tensor. This can happen particularly often in TensorGame as actions are commutative
and
After simulating $N(s)$ trajectories from state $s$ using MCTS, the normalized visit counts of the actions at the root of the search tree $N(s, a)/N(s)$ form a sample-based improved policy. Differently from AlphaZero and Sampled AlphaZero, we use an adaptive temperature scheme to smooth the normalized visit counts distribution as some states can accumulate an order of magnitude more visits than others because of sub-tree reuse and transposition table.
I don't understand how this use of a transposition table does not lead to infinite loops in the MCTS search, or how to avoid them.
For instance, say that a state $A$ has many children $B_1, \cdots, B_n$. Each of these children is associated with a certain prior given by the neural network. Let us assume that the child maximizing the PUCT score is $B_1$. The MCTS search will then go to $B_1$ and expand it if it is a leaf node. Now, say that $A$ is one of the children of $B_1$. It may very well happen at some point that $A$ is the child of $B_1$ maximizing the PUCT score. So at some point, the algorithm will:
Start from $A$, reckon that $B_1$ maximizes the PUCT score, and go to $B_1$
Reckon that $A$ maximizes the PUCT score and go back to $A$
But at that point, $A$ is not a leaf node; its children are known, and in particular, the one maximizing the PUCT score is $B_1$.
Hence the infinite loop. How does one deal with this situation? You want to treat $A$ as a child of $B_1$, so that you don't have to query the neural network for its priors once again, to update the visit count, etc. But at the same time, considering it as a non-leaf node may make this loop appear.
One possible solution would be to create a "copied node", such that all of its characteristics are shared with $A$, but its children are treated as new nodes (though they can also be in the transposition table). That way, each time we get to a leaf node, we will expand the tree instead of circling back. But this seems rather artificial. Is this the best way to deal with this problem?
Answer: I would like to acknowledge the excellent work by Nebuly in creating this implementation of AlphaTensor. My answer to your question will be based on their implementation, and I will reference some of their code accordingly. If you are keen on understanding the finer details of AlphaTensor, I highly recommend reading through their code.
Just to clarify, the "infinite loop" you mentioned isn't technically an infinite loop. When executing MCTS, you always need to specify the number of simulations you want the algorithm to perform. Moreover, theoretically, as models are expected to improve with each iteration, such loops should occur less frequently. This is because the model learns from its mistakes, and repeating the same states worsens the returned rewards. Hence, the use of a transposition table in AlphaTensor doesn’t exactly address the repetition of states in a loop but rather serves as a tool to economize the inference cost of the Neural Network.
def monte_carlo_tree_search(
    model: torch.nn.Module,
    state: torch.Tensor,
    n_sim: int,
    t_time,
    n_steps: int,
    game_tree: Dict,
    state_dict: Dict,
):
    # More code
    for _ in range(n_sim):
        simulate_game(model, state, t_time, n_steps, game_tree, state_dict)
    # return next state
    possible_states_dict, _, repetitions, N_s_a, q_values, _ = state_dict[
        state_hash
    ]
    possible_states = _recompose_possible_states(possible_states_dict)
    next_state_idx = select_future_state(
        possible_states, q_values, N_s_a, repetitions, return_idx=True
    )
    next_state = possible_states[next_state_idx]
    return next_state
Here's an excerpt from the MCTS function which demonstrates the use of transposition tables, represented by two dictionaries: game_tree and state_dict. The process begins with a for loop that runs n_sim times (the limit of simulations per Monte Carlo play), executing the simulate_game() function. Essentially, this function plays the TensorGame and updates the dictionaries.
def simulate_game(
    model,
    state: torch.Tensor,
    t_time: int,
    max_steps: int,
    game_tree: Dict,
    states_dict: Dict,
    horizon: int = 5,
):
    """Simulates a game from a given state.

    Args:
        model: The model to use for the simulation.
        state (torch.Tensor): The initial state.
        t_time (int): The current time step.
        max_steps (int): The maximum number of steps to simulate.
        game_tree (Dict): The game tree.
        states_dict (Dict): The states dictionary.
        horizon (int): The horizon to use for the simulation.
    """
    idx = t_time
    max_steps = min(max_steps, t_time + horizon)
    state_hash = to_hash(extract_present_state(state))
    trajectory = []
    # selection
    while state_hash in game_tree:
        (
            possible_states_dict,
            old_idx_to_new_idx,
            repetition_map,
            N_s_a,
            q_values,
            actions,
        ) = states_dict[state_hash]
        possible_states = _recompose_possible_states(possible_states_dict)
        state_idx = select_future_state(
            possible_states, q_values, N_s_a, repetition_map, return_idx=True
        )
        trajectory.append((state_hash, state_idx))  # state_hash, action_idx
        future_state = extract_present_state(possible_states[state_idx])
        state = possible_states[state_idx]
        state_hash = to_hash(future_state)
        idx += 1
    # expansion
    if idx <= max_steps:
        trajectory.append((state_hash, None))
        if not game_is_finished(extract_present_state(state)):
            state = state.to(model.device)
            scalars = get_scalars(state, idx).to(state.device)
            actions, probs, q_values = model(state, scalars)
            (
                possible_states,
                cloned_idx_to_idx,
                repetitions,
                not_dupl_indexes,
            ) = extract_children_states_from_actions(
                state,
                actions,
            )
            not_dupl_actions = actions[:, not_dupl_indexes].to("cpu")
            not_dupl_q_values = torch.zeros(not_dupl_actions.shape[:-1]).to("cpu")
            N_s_a = torch.zeros_like(not_dupl_q_values).to("cpu")
            present_state = extract_present_state(state)
            states_dict[to_hash(present_state)] = (
                _reduce_memory_consumption_before_storing(possible_states),
                cloned_idx_to_idx,
                repetitions,
                N_s_a,
                not_dupl_q_values,
                not_dupl_actions,
            )
            game_tree[to_hash(present_state)] = [
                to_hash(extract_present_state(fut_state))
                for fut_state in possible_states
            ]
            leaf_q_value = q_values
    else:
        leaf_q_value = -int(torch.linalg.matrix_rank(state).sum())
    # backup
    backward_pass(trajectory, states_dict, leaf_q_value=leaf_q_value)
The states_dict will contain values such as q_values, N_s_a (the visit counter of the state), actions, children's states, and a repetition map to address the commutative nature of multiplication, which results in many repeated states.
Conclusion
In conclusion, if you are interested in examining the implementation of AlphaTensor, I would urge you to read through the repository. If an infinite loop occurs, whether it's within the transposition table or not, it will eventually terminate. Based on this implementation, the simulation maintains a trajectory of the state hashes with each iteration. If a repeated state is found, it will append a reference to the state dictionary and append a copy to the state in the game tree. Therefore, your initial idea of creating a copy is valid. I hope this information is helpful to you. | {
"domain": "ai.stackexchange",
"id": 4011,
"tags": "deep-rl, monte-carlo-tree-search"
} |
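To make the "not technically an infinite loop" point concrete, here is a stripped-down toy sketch (my own, far simpler than the Nebuly code) showing how a step budget bounds the selection walk even when the transposition table contains a cycle:

```python
def select_until_leaf_or_horizon(tree, root, choose_child, horizon):
    """Walk the (possibly cyclic) transposition 'tree' from root.
    The walk stops either at an unexpanded state or after `horizon`
    steps, so a cycle A -> B -> A cannot loop forever."""
    state, trajectory = root, []
    for _ in range(horizon):
        if state not in tree:        # leaf: needs expansion
            break
        state = choose_child(state, tree[state])
        trajectory.append(state)
    return state, trajectory

# Deliberately cyclic "tree": A's best child is B, and B's best child is A.
tree = {"A": ["B"], "B": ["A"]}
leaf, traj = select_until_leaf_or_horizon(tree, "A", lambda s, ch: ch[0], horizon=5)
print(leaf, len(traj))  # B 5  (bounced A->B->A->B->A->B, then stopped)
```

In the real code the same role is played by `max_steps = min(max_steps, t_time + horizon)` and the fixed `n_sim` budget, so each simulation terminates even if selection revisits a state.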
What happens to constants in a .msg? | Question:
Hello,
If I have the following message:
uint8 val
uint8 a=0
uint32 b=0
string c=this is the string
Which I publish as follows:
#include "ros/ros.h"
#include "test_types/type33.h"
int main(int argc, char **argv)
{
ros::init(argc, argv, "test_types_publisher_type33");
ros::NodeHandle n;
ros::Publisher chatter_pub = n.advertise<test_types::type33>("/test_types/type33", 10);
ros::Rate loop_rate(1);
while (ros::ok())
{
test_types::type33 msg;
msg.val=10;
chatter_pub.publish(msg);
ros::spinOnce();
loop_rate.sleep();
}
return 0;
}
However, when I echo the topic or check with Wireshark, I only see val, and the length of the message in the header is also 1 byte: 01 00 00 00 0a. So where are the constants transmitted?
Thanks.
Originally posted by Ariel on ROS Answers with karma: 65 on 2021-01-23
Post score: 0
Original comments
Comment by gvdhoorn on 2021-01-23:
string c=this is the string
is this a verbatim copy-paste? Because string constants should be enclosed in quotes.
Answer:
So where are the constants transmitted?
Nowhere (*).
They are compile time constants (in compiled languages such as C and C++) or predefined (constant) variables in the namespace of the message class / struct / dictionary for languages like Python. They are not part of the data that gets (de)serialised when exchanging messages.
So in your case, since you don't use test_types::type33::a or any of the other constants anywhere in your code, you'll not see their values anywhere in the serialised data stream.
It would also not be very efficient to do that for each and every message.
The assumption is that publisher and subscriber both have access to the message structure (ie: definition file or code generated from those). As the constants would be present in those representations, the receiving side can use them.
*) I wrote nowhere, but that's not exactly true. The complete message definition is exchanged between publisher and subscriber during setup of the connection, so technically the constant values are also sent to the subscriber.
But that's not as part of the actual message data exchange, hence the nowhere.
Originally posted by gvdhoorn with karma: 86574 on 2021-01-23
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 36001,
"tags": "ros-kinetic"
} |
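In Python the same point can be illustrated with a hand-written stand-in for a generated message class. This is not actual rospy-generated code, just a sketch of the idea that constants live on the class while only declared fields are serialized:

```python
import struct

class Type33:
    # Constants from the .msg become class attributes...
    A = 0
    B = 0
    C = "this is the string"

    __slots__ = ("val",)  # ...but only 'val' is a data field

    def __init__(self, val=0):
        self.val = val

    def serialize(self):
        # Only the actual fields go on the wire: a single uint8 here.
        return struct.pack("<B", self.val)

msg = Type33(val=10)
print(msg.serialize().hex())  # '0a': one byte, no constants
print(Type33.C)               # constants remain usable in code
```

Serializing the message yields exactly the one payload byte the question observed on the wire, while the constants stay on the class object.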
Friction-Newtonian Mechanics | Question:
Here we have two blocks of mass $2~\rm kg$ and $4~\rm kg$ on an incline with angle of inclination $30^\circ$. The block of mass 2 kg has a coefficient of friction $\mu_1=0.2$ and the block of mass 4 kg has a coefficient of friction $\mu_2=0.3$.
In a book, it was written that as $\mu_1<\mu_2$, hence $a_{\mathrm{2~kg}}>a_{\mathrm{4~kg}}$, where $a_{i}= \text{acceleration of the blocks}$.
I checked the cases separately and that came out to be true. I just wanted to ask: can I always use that (the $\mu_1<\mu_2$ argument) while solving a problem, or should I always check the cases separately to see whose acceleration is bigger?
Answer: You always need to make a force diagram to be safe; however, for this particular configuration (assuming the blocks are not in contact, otherwise they will have a common acceleration) you almost always get that result. The reason is that if you calculate the forces along an axis parallel to the plane you get, after simplifying:
$a=g\left(\sin\theta-\mu \cos\theta\right)$.
Thus, regardless of angle (assuming $\theta \in (0^\circ,90^\circ)$), the larger $\mu$, the smaller $a$, independently of the individual masses, which cancel when simplifying the equation.
The above argument assumes that the masses are moving and $a$ is positive (down the incline); if $\mu$ makes $a$ negative, then the object will not move and $a$ will be zero. | {
"domain": "physics.stackexchange",
"id": 31725,
"tags": "homework-and-exercises, newtonian-mechanics, forces, friction, free-body-diagram"
} |
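A quick numerical check of the answer's point, with the factor of $g$ written out explicitly and the incline from the question ($30^\circ$, $\mu_1 = 0.2$, $\mu_2 = 0.3$):

```python
import math

def incline_accel(mu, theta_deg, g=9.81):
    """Acceleration down a frictional incline, a = g(sin θ − μ cos θ),
    clipped at zero when friction is enough to hold the block."""
    theta = math.radians(theta_deg)
    a = g * (math.sin(theta) - mu * math.cos(theta))
    return max(a, 0.0)

a_2kg = incline_accel(mu=0.2, theta_deg=30)  # mass never enters
a_4kg = incline_accel(mu=0.3, theta_deg=30)
print(a_2kg > a_4kg)  # True: smaller mu gives larger acceleration
```

The mass cancels entirely, so the comparison depends only on the two friction coefficients, exactly as the answer argues.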
Given regular language $L$, is $L_1 = \{ w \mid \text{each prefix of } w \text{ of odd length} \in L \}$ regular? | Question: I was given a question and don't really know how to solve it.
Given a regular language $L$, is the following language also regular?
$$L_1 = \{ w \mid \text{each prefix of } w \text{ of odd length is in $L$} \}.$$
I think that $L_1$ should be regular, but I don't have a clue how to prove it.
Thank you for any input.
Answer: Try constructing an NFA for $\hat L:=\{w \mid \text{there is an odd-length prefix of } w \text{ that is} \notin L\}$, given a DFA for $\overline L$, and then show $L_1=\overline {\hat L}$.
If I'm not mistaken, this should work out. | {
"domain": "cs.stackexchange",
"id": 16299,
"tags": "formal-languages, automata, regular-languages"
} |
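To convince yourself the hint works, you can brute-force the definition of $L_1$ against a direct product-style DFA construction: run $L$'s DFA, track the parity of the prefix length, and fall into a dead state the first time an odd-length prefix leaves $L$. This is a different route from the complement trick in the answer, and the example language $L$ (words over $\{a,b\}$ ending in $a$) is made up for illustration:

```python
from itertools import product

# DFA for the example language L = { w over {a, b} : w ends in 'a' }
delta = {("q0", "a"): "q1", ("q0", "b"): "q0",
         ("q1", "a"): "q1", ("q1", "b"): "q0"}
start, accept = "q0", {"q1"}

def in_L(w):
    q = start
    for ch in w:
        q = delta[(q, ch)]
    return q in accept

def in_L1_by_definition(w):
    # Every odd-length prefix of w must be in L (vacuously true for epsilon)
    return all(in_L(w[:k]) for k in range(1, len(w) + 1, 2))

def in_L1_by_dfa(w):
    # Product construction: (state of L's DFA, prefix-length parity) + dead flag
    q, parity, dead = start, 0, False
    for ch in w:
        q, parity = delta[(q, ch)], 1 - parity
        if parity == 1 and q not in accept:
            dead = True
    return not dead

words = ["".join(p) for n in range(8) for p in product("ab", repeat=n)]
print(all(in_L1_by_definition(w) == in_L1_by_dfa(w) for w in words))  # True
```

Since the second checker is a finite automaton over pairs (DFA state, parity) plus a dead state, agreement with the definition on all short words illustrates why $L_1$ is regular.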
Is the statement $dF=(\delta Q-TdS)-pdV-SdT$ more general than $dF=-SdT-pdV$? | Question: So the Helmholtz free energy is defined as $F=U-TS$, which means $$dF=dU-TdS-SdT$$
$$=(\delta Q-pdV)-TdS-SdT$$
$$=(\delta Q-TdS)-pdV-SdT.$$
Since $\delta Q \neq TdS$ in general ($dS \geq \delta Q/T$), does that mean $dF=-pdV-SdT$ only holds when the process is reversible?
Answer: The equation $dF=-SdT-pdV$ describes the mutual variations in these parameters between two closely neighboring (differentially separated) thermodynamic equilibrium states. It doesn't matter whether the process that took the system between these two thermodynamic equilibrium states was reversible or whether it was a highly tortuous irreversible process path which, in the end, put the final state very close to the initial state. But, for the irreversible path, $\delta Q$ was definitely not equal to $TdS$.
Consider any irreversible process in which the initial and final thermodynamic equilibrium states are not close together, but the system is in contact with a constant temperature reservoir at temperature T, which happens to be the same as the initial temperature of the system. Under these circumstances, the final equilibrium temperature of the system will also be T. For this situation, it follows from the first and 2nd laws of thermodynamics that $$\Delta U=Q-\int{p_{ext}dV}$$and $$Q=T\Delta S-T\sigma$$where $\sigma$ is the entropy generated as a result of irreversibility (always positive). So, if we combine these two equations, we obtain: $$\Delta U=T\Delta S-T\sigma-\int{p_{ext}dV}$$or, equivalently, $$W=\int{p_{ext}dV}=-\Delta F-T\sigma$$From this it follows that the maximum work that can be obtained by all processes between the specified initial and final thermodynamics equilibrium states of the system is $-\Delta F$. | {
"domain": "physics.stackexchange",
"id": 87445,
"tags": "thermodynamics, energy, temperature, entropy"
} |
How is deviated nasal septum related to cold | Question: I have a deviated nasal septum and I always suffer from the common cold. I was just wondering how a deviated nasal septum can be the cause of chronic common cold.
Answer: Nasal septum deviation is a condition where the cartilage which divides the nasal cavity into two nasal passages isn't centrally aligned, making one of the nostrils smaller and thus obstructed. Now, in our nose we have something called mucociliary clearance and that is the mechanism getting rid of unwanted particles and pathogens that get into contact with the mucous membranes in the nose.
Studies show that in individuals with a deviated septum the nasal mucociliary clearance time is slower than in individuals whose nasal passages are symmetrical. From this information one can conclude that the particles and pathogens you inhale get to spend more time inside your nasal passages, thus giving the pathogens a better chance at thriving and giving you sinusitis.
By looking at the information above you could think that the connection between nasal septum deviation and chronic sinusitis is clear. This literature review, however, states that the correlation between a deviated nasal septum and chronic sinusitis is still unclear. | {
"domain": "biology.stackexchange",
"id": 1600,
"tags": "human-biology"
} |
Is it needed to "beautify" this Stack implementation? | Question: I'm just learning OOP, and I have an assignment to create a Stack. I came up with the following. Is it "beautiful" enough, or too "schooly"? Where can I improve?
This is the class:
class Stack
{
    private int _length;
    private List<int> _members = new List<int>();

    public Stack()
    {
        _length = 10;
    }

    public Stack(int length)
    {
        _length = length;
    }

    public List<int> Members
    {
        get { return _members; }
    }

    public int Length
    {
        get { return _members.Count; }
    }

    public bool IsEmpty
    {
        get { return (_members.Count == 0); }
    }

    public void Push(int member)
    {
        if (_members.Count < _length) _members.Add(member);
        else throw new Exception("The Stack is full.");
    }

    public int Pop()
    {
        if (!IsEmpty)
        {
            int last_one = _members[_members.Count - 1];
            _members.RemoveAt(_members.Count - 1);
            return last_one;
        }
        else throw new Exception("The Stack is empty.");
    }
}
And this is my Form:
public partial class Form1 : Form
{
    Stack verem = new Stack(20);
    Random rnd = new Random();

    public Form1()
    {
        InitializeComponent();
    }

    private void refreshStack()
    {
        lbStack.Items.Clear();
        foreach (int member in verem.Members)
        {
            lbStack.Items.Insert(0, member);
        }
    }

    private void btnIsEmpty_Click(object sender, EventArgs e)
    {
        MessageBox.Show(verem.IsEmpty.ToString());
    }

    private void btnLength_Click(object sender, EventArgs e)
    {
        MessageBox.Show(verem.Length.ToString());
    }

    private void btnPush_Click(object sender, EventArgs e)
    {
        verem.Push((int)rnd.Next());
        refreshStack();
    }

    private void btnPop_Click(object sender, EventArgs e)
    {
        verem.Pop();
        refreshStack();
    }
}
Answer: class Stack
Stack (or any other collection) is a prime example of type that should be generic.
private int _length;
I think length is not a good name for this. You should rename it to something like _maxLength, or even better, _maxCount (see below).
Also, I'm not completely sure if this feature is necessary, but it could make sense (for example System.Collections.Generic.Stack<T> doesn't have anything like this, but System.Collections.Concurrent.BlockingCollection<T> does).
_length = 10;
I don't think 10 is a reasonable default here. It would certainly be unexpected to me. A much better default would be infinity. Or maybe don't have a default at all.
public List<int> Members
Unless absolutely necessary, you should never expose private implementation details like this. Using this member, anyone can do anything to the underlying collection (including adding items beyond the allowed maximum number).
If you want to be able to iterate over the items in the collection, you should implement IEnumerable<T>.
public int Length
The name Length makes some sense for arrays, but not so much for other collections. For consistency with other collections, the best name would be Count.
return (_members.Count == 0);
The parentheses are not necessary here.
if (_members.Count < _length) _members.Add(member);
else throw new Exception("The Stack is full.");
You probably shouldn't write the body of if and else on the same line like this. Putting them on a line of their own will make your code clearer to read.
throw new Exception("The Stack is full.")
You shouldn't throw Exception directly, you should create your own type that inherits from Exception and throw that. Alternatively, you could use an existing exception type, like InvalidOperationException.
if (!IsEmpty)
{
…
}
else throw new Exception("The Stack is empty.");
You could simplify this by reversing the condition and putting the exception first. This way, the main code will be less indented.
public partial class Form1 : Form
You should name your forms with more descriptive names, even if it's something as simple as MainForm.
lbStack
This form of Hungarian notation is usually frowned upon. If you want to indicate that the variable is a list box, you can name it something like stackListBox.
(int)rnd.Next()
You don't need a cast here, Next() already returns an int. | {
"domain": "codereview.stackexchange",
"id": 4863,
"tags": "c#, stack"
} |
Is the dominating set problem restricted to planar bipartite graphs of maximum degree 3 NP-complete? | Question: Does anyone know about an NP-completeness result for the DOMINATING SET problem in graphs, restricted to the class of planar bipartite graphs of maximum degree 3?
I know it is NP-complete for the class of planar graphs of maximum degree 3 (see the Garey and Johnson book), as well as for bipartite graphs of maximum degree 3 (see M. Chlebík and J. Chlebíková, "Approximation hardness of dominating set problems in bounded degree graphs"), but could not find the combination of the two in the literature.
Answer: What if you simply do the following: Given a graph $G = (V,E)$, construct another graph $G' = (V \cup U, E')$ by subdividing each edge of $G$ in 4 parts; here $U$ is the set of new nodes that we introduced, and $|U| = 3|E|$.
The graph $G'$ is bipartite. Moreover, if $G$ is planar and has max. degree 3, then $G'$ is also planar and has max. degree 3.
Let $D'$ be a (minimum) dominating set for $G'$. Consider an edge $(x,y) \in E$ that was subdivided to form a path $(x,a,b,c,y)$ in $G'$. Now clearly at least one of $a,b,c$ is in $D'$. Moreover, if we have more than one of $a,b,c$ in $D'$, we can modify $D'$ so that it remains a valid dominating set and its size does not increase. For example, if we have $a \in D'$ and $c \in D'$, we can equally well remove $c$ from $D'$ and add $y$ to $D'$. Hence w.l.o.g. we have $|D' \cap U| = |E|$.
Then consider $D = D' \cap V$. Assume that $x \in V$ and $x \notin D'$. Then we must have a node $a \in D'$ such that $(x,a) \in E'$. Hence there is an edge $(x,y) \in E$ such that we have a path $(x,a,b,c,y)$ in $G'$. Since $a,b,c \in U$ and $a \in D'$, we have $b, c \notin D'$, and to dominate $c$ we must have $y \in D'$. Hence in $G$ node $y$ is a neighbour of $x$ with $y \in D$. That is, $D$ is a dominating set for $G$.
Conversely, consider a (minimum) dominating set $D$ for $G$. Construct a dominating set $D'$ for $G'$ so that $|D'| = |D| + |E|$ as follows: For an edge $(x,y) \in E$ that was subdivided to form a path $(x,a,b,c,y)$ in $G'$, we add $a$ to $D'$ if $x \notin D$ and $y \in D$; we add $c$ to $D'$ if $x \in D$ and $y \notin D$; and otherwise we add $b$ to $D'$. Now it can be checked that $D'$ is a dominating set for $G'$: By construction, all nodes in $U$ are dominated. Now let $x \in V \setminus D'$. Then there is a $y \in V$ such that $(x,y) \in E$, and hence along the path $(x,a,b,c,y)$ we have $a \in D'$, which dominates $x$.
In summary, if $G$ has a dominating set of size $k$, then $G'$ has a dominating set of size at most $k + |E|$, and if $G'$ has a dominating set of size $k + |E|$, then $G$ has a dominating set of size at most $k$.
Edit: Added an illustration. Top: the original graph $G$; middle: graph $G'$ with a "normalised" dominating set; bottom: graph $G'$ with an arbitrary dominating set. | {
"domain": "cstheory.stackexchange",
"id": 316,
"tags": "cc.complexity-theory, graph-theory"
} |
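The equality $\gamma(G') = \gamma(G) + |E|$ is easy to sanity-check by brute force on a tiny instance. Here is a throwaway script (my own, not from the answer) using the 4-cycle, which is planar, bipartite, and of maximum degree at most 3:

```python
from itertools import combinations

def min_dominating_set_size(nodes, adj):
    """Brute-force gamma(G); fine for tiny graphs only."""
    nodes = list(nodes)
    for k in range(len(nodes) + 1):
        for D in combinations(nodes, k):
            covered = set(D)
            for d in D:
                covered |= adj[d]
            if covered == set(nodes):
                return k

def subdivide_each_edge_in_4(nodes, edges):
    """Replace every edge (x, y) by a path x-a-b-c-y (3 new nodes per edge)."""
    adj = {v: set() for v in nodes}
    all_nodes = set(nodes)
    for idx, (x, y) in enumerate(edges):
        a, b, c = f"e{idx}a", f"e{idx}b", f"e{idx}c"
        all_nodes |= {a, b, c}
        for v in (a, b, c):
            adj[v] = set()
        path = [x, a, b, c, y]
        for u, v in zip(path, path[1:]):
            adj[u].add(v)
            adj[v].add(u)
    return all_nodes, adj

# Example G: the 4-cycle
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}

gamma_G = min_dominating_set_size(nodes, adj)
new_nodes, new_adj = subdivide_each_edge_in_4(nodes, edges)
gamma_G2 = min_dominating_set_size(new_nodes, new_adj)
print(gamma_G, gamma_G2, gamma_G2 == gamma_G + len(edges))  # 2 6 True
```

Here $G'$ is the 16-cycle, whose minimum dominating set has $\lceil 16/3 \rceil = 6$ vertices, matching $\gamma(G) + |E| = 2 + 4$.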
What is the radius of the helix a charged particle makes when entering a magnetic field at an angle? | Question: Here is the equation I found in my textbook but it doesn't make sense:
$R=\frac{mv\sin α}{QB}$. Looking at this formula, we can say that a particle A moving through the magnetic field at an angle $α<β$ would follow a helical path with radius $r_1<r_2$, and a particle B moving through the same field at an angle $β$ would follow a helical path with radius $r_2$. But we know that the force on a particle with a smaller angle is smaller than on a particle with a bigger angle, and hence this formula doesn't seem to make sense?
Answer: The centripetal acceleration required to hold an object in a circular path is $a_c=\frac{v^2}{r}$. What matters here is the projection of the helical path (which is a circle) and the component of $v$ perpendicular to $B$, which is $v_\perp=v\sin\theta$. So the required acceleration for the circular part is $a_c=\frac{v_\perp^2}{r}=\frac{v^2 \sin^2\theta}{r}$. Substituting $r=\frac{mv\sin\theta}{qB}$ gives $a_c=\frac{v^2 \sin^2\theta \, q B}{m v \sin\theta} = \frac{v \sin\theta \, q B}{m}$, which is consistent. Yes, force and acceleration decrease with angle, but that is because the particle isn't going as fast around the circular part of the helix. It has more velocity along the linear component of the helical path, which takes no force to maintain. | {
"domain": "physics.stackexchange",
"id": 58604,
"tags": "electromagnetism, charge"
} |
Does the galGal5 chicken assembly have a chromosome 29? | Question: The chromosome sizes at UCSC don't seem to contain chr29:
ftp://hgdownload.soe.ucsc.edu/goldenPath/galGal5/bigZips/galGal5.chrom.sizes
It has a chr28 and a chr30. Am I missing something or is there some piece of history that led to the omission of this chromosome?
Answer: The current Ensembl entry doesn't have a 29 either. The archived Ensembl assembly lacks 29, 30, 31, 33, and LGE64.
The chromosomes after 30 are tiny, so they might not be visible in a karyotype. They probably realized that "chr 29" was really attached to some other chromosome. | {
"domain": "bioinformatics.stackexchange",
"id": 547,
"tags": "genome, assembly, reference-genome, ucsc"
} |
How can ingesting a prion "infect" someone? | Question: That's something that's been bugging me for a while...
Our gastrointestinal tract produces proteases that degrade proteins. Prions are proteins. Shouldn't they be broken down by proteases?
Also, how can a prion pass the gastrointestinal barrier? Shouldn't the intestine be able to absorb amino acids only (instead of full proteins)?
Answer: Prions are misfolded proteins with abnormal tertiary or quaternary structures. That grants them resistance (to some extent, at least) to proteases (1).
Also researchers believe that prions are able to replicate (2), by changing the structure of other proteins.
Regarding the gastrointestinal barrier, that isn't exactly true. It has been shown that small quantities of intact proteins do cross the gastrointestinal tract in animals and adult humans (3), and that this is a physiologically normal process required for antigen sampling by subepithelial immune tissue in the gut.
So, the resistance to the proteases and the ability to replicate in certain conditions might explain the odds of a prion crossing the gastrointestinal tract and infecting an individual.
References:
http://www.ncbi.nlm.nih.gov/pubmed/24338008
http://www.nejm.org/doi/full/10.1056/NEJM199609193351218
http://www.ncbi.nlm.nih.gov/pubmed/3060169 | {
"domain": "biology.stackexchange",
"id": 2912,
"tags": "human-biology, prion, human-physiology"
} |
What causes drag in a fluid? | Question: What causes resistance of an object to motion within a fluid like water? Please explain to me the molecular dynamics of the situation.
Answer: When an object moves in a fluid, the fluid molecules just around it get entrained along with it due to friction. In turn, this layer of fluid entrains the next layer at a lower velocity, and so on. This is due to the fact that individual molecules switch layer and thus "diffuse" the momentum (a molecule coming from a fast layer to a slower transfers some momentum to it, and vice-versa). In liquids, molecular interactions are added on top of that.
The total energy dissipated by this friction corresponds to the total energy needed to achieve the displacement of the object, and the ratio of the force to the velocity attained is called the drag. | {
"domain": "physics.stackexchange",
"id": 12819,
"tags": "fluid-dynamics, drag, viscosity"
} |
How to calculate the equilibrium constant for nitrogen, hydrogen, and ammonia? | Question:
$\pu{2 mol}$ of $\ce{N2}$ is mixed with $\pu{6 mol}$ of $\ce{H2}$ in a closed vessel of $\pu{1 L}$ capacity. If $50\%$ of $\ce{N2}$ is converted into $\ce{NH3}$ at equilibrium, what is the value of $K_c$ (equilibrium constant) for the reaction
$$\ce{N2 + 3 H2 <=> 2 NH3}?$$
This is what I tried to solve it:
The answer to this question is $\frac{4}{27}$ as given in the solution key.
It is only possible if the concentration of $\ce{NH3}$ is not raised to its stoichiometric coefficient.
Am I doing it wrong or is the answer key wrong?
Answer: A few notes about your solution:
"Both are limiting reactant"
You can neglect this idea in this problem, as the reaction does not go to completion; it is an equilibrium reaction.
Your solution indicates that the amount of $\ce{NH3}~ \text{at equilibrium}=n_\mathrm{e}(\ce{NH3})=\pu{4mol}$
That is not correct; it equals $\pu{2 mol}$, which can be calculated as follows:
\begin{align}
n_\mathrm{e}(\ce{NH3})
&=n_\text{ formed}(\ce{NH3})\\
&=2\times{n_\text{ reacted}(\ce{N2})}\\
&= 2\times \left(2\times\frac{50}{100}\right)\\
&= 2\times{(1)}\\
&= \pu{2mol}
\end{align}
Use the molar ratio to calculate the amount of $\ce{H2}$ reacted:
\begin{align}
n_\text{ reacted} (\ce{H2})
&= 3\times{n_\text{ reacted}(\ce{N2})}\\
&= 3\times{(1)}\\
&= \pu{3mol}
\end{align}
Calculate The amount of $\ce{H2}$ at equilibrium:
\begin{align}
n_\mathrm{e}(\ce{H2})
&=n_\mathrm{I}(\ce{H2})
-n_\text{reacted}(\ce{H2})\\
&= 6-3\\
&=\pu{3mol}
\end{align}
Use the following table to determine the amount of each species at equilibrium
in order to calculate $K_c$ :
\begin{array}{c | c c c c c}
&\ce{N2} & + &\ce{3H2} &\ce{<=>}& \ce{2NH3} \\\hline
\text{I} &(\pu{2 mol}) & &(\pu{6 mol}) & &(\pu{0 mol}) \\
\text{C} &(\pu{-1 mol}) & &(\pu{-3 mol}) & &(\pu{2 mol}) \\
\text{E} &(\pu{1 mol}) & &(\pu{3 mol}) & &(\pu{2 mol})
\end{array}
Use the expression $[i]_\mathrm{e}= \frac{n_\mathrm{e}(i)}{V}$
to calculate the concentration of each species at equilibrium:
\begin{align}
[\ce{N2}]_\mathrm{e} &= \frac{\pu{1 mol}}{\pu{1 L}}=\pu{1 M}\\
[\ce{H2}]_\mathrm{e} &= \frac{\pu{3 mol}}{\pu{1 L}}=\pu{3 M}\\
[\ce{NH3}]_\mathrm{e} &= \frac{\pu{2 mol}}{\pu{1 L}}=\pu{2 M}\\
\end{align}
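This bookkeeping can also be checked numerically; a minimal sketch (the variable names are my own):

```python
# ICE-table check for N2 + 3 H2 <=> 2 NH3 in a 1 L vessel
V = 1.0                                    # vessel volume in L
n_N2_i, n_H2_i, n_NH3_i = 2.0, 6.0, 0.0    # initial amounts in mol
x = 0.5 * n_N2_i                           # 50% of the N2 reacts -> x = 1 mol

n_N2  = n_N2_i  - x                        # 1 mol at equilibrium
n_H2  = n_H2_i  - 3 * x                    # 3 mol at equilibrium
n_NH3 = n_NH3_i + 2 * x                    # 2 mol at equilibrium

c_N2, c_H2, c_NH3 = n_N2 / V, n_H2 / V, n_NH3 / V
Kc = c_NH3**2 / (c_N2 * c_H2**3)
print(Kc)  # 4/27, about 0.148
```

The printed value matches the $\frac{4}{27}$ from the answer key.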
Substitute the concentration of each species at equilibrium in the following formula to calculate $K_c$:
\begin{align}
K_c &=\frac{[\ce{NH3}]^2_\mathrm{e}}{[\ce{N2}]_\mathrm{e}[\ce{H2}]^3_\mathrm{e}}\\
&=\frac{(\pu{2 M})^2}{(\pu{ 1M})(\pu{3 M})^3}\\
&=\left(\frac{4}{27}\right)\cdot \pu{L^2//mol^2}
\end{align} | {
"domain": "chemistry.stackexchange",
"id": 11688,
"tags": "equilibrium, stoichiometry"
} |
Difference between the courses "Digital signal processing" and "Real Time Digital signal processing"? | Question: Different universities around the globe offer DSP courses with different names
Especially some name the course "Digital signal processing" and some name "Real Time Digital signal processing"
I want to know,is there any difference between these two courses or they referring to exact same course?
Forexample,please check the below two links,both are from texas university
http://signal.ece.utexas.edu/~arslan/courses/dsp/index.html
http://signal.ece.utexas.edu/~arslan/courses/realtime/index.html
Both courses at the above links have different course codes and websites, from which I am assuming that the two courses are not exactly the same. Is my understanding correct?
Answer: In this case these are actually very different courses.
The first one goes through the mathematical foundations of Digital Signal Processing: continuous vs discrete, sampling theorem, Z-transform, LTI systems, Fourier Transform, some light-weight filter design, etc.
You'll probably use mostly pen & paper plus Matlab or Python for this one.
The second one appears focused on getting this to run on an actual piece of hardware. You'll most likely learn how to get the hardware up and running, get development tools installed (C compiler, board interface), figure out how to configure the processor and the peripherals so that you can actually receive and send data in the right format and at the correct speed and timing, build a data path through your processor, express an abstract algorithm in terms of actual working code, track performance & system metrics, and test & debug (A LOT).
These are very different skill sets. IMO the first course is a pre-requisite to the second: Before you implement an algorithm in hardware, you need to understand how the algorithm actually works, what the expected behavior is and what you can or cannot tweak. | {
"domain": "dsp.stackexchange",
"id": 10329,
"tags": "terminology"
} |
Interpretable xgboost - Calculate cover feature importance | Question: When trying to interpret the results of a gradient boosting (or any decision tree) one can plot the feature importance.
There are several such parameters in the xgb API: weight, gain, cover, total_gain and total_cover. I am not quite getting cover.
”cover” is the average coverage of splits which use the feature where coverage is defined as the number of samples affected by the split
I am looking for a better definition of cover and perhaps some pseudocode to understand it better.
Answer: A more detailed explanation of cover can be found in the code
cover: the sum of second order gradient of training data classified to the leaf, if it is square loss, this simply corresponds to the number of instances in that branch. Deeper in the tree a node is, lower this metric will be
You can find this here: cover definition in the code
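To make the quoted definition concrete, here is a small illustration of how the cover of a single node would be accumulated (my own sketch, not xgboost's actual implementation):

```python
import math

def node_cover(raw_scores, loss="squared"):
    """Cover of one tree node: the sum of the second-order gradients
    (hessians) of the training rows routed to that node."""
    if loss == "squared":
        # the hessian of (pred - y)^2 / 2 w.r.t. pred is 1 for every row,
        # so cover reduces to the number of rows in the node
        return float(len(raw_scores))
    if loss == "logistic":
        cover = 0.0
        for s in raw_scores:
            p = 1.0 / (1.0 + math.exp(-s))  # predicted probability
            cover += p * (1.0 - p)          # hessian of the log-loss
        return cover
    raise ValueError("unknown loss: %s" % loss)
```

For squared loss this is just the row count, exactly as the quote says; for log-loss each row contributes $p(1-p)$ instead.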
This basically means that for each split, the second-order gradient (hessian) of the specified loss is computed per sample and summed over the samples in the node; the reported "cover" importance is then this number averaged over the splits that use the feature. | {
"domain": "datascience.stackexchange",
"id": 7027,
"tags": "machine-learning, python, decision-trees, xgboost, explainable-ai"
} |
Confused about Xacro and URDF | Question:
Hey together,
I am a little bit confused about the use case of xacro and urdf. In my case I have a robot equipped with 3 laser scanners and modeled a URDF file for it following the build-your-URDF tutorial. After finishing and testing this, I read that it is possible to clean up a URDF with Xacro. That's what's confusing me: I don't have a Xacro file, and I should use a Xacro file to clean up my previously made URDF?
So was it incorrect to create a URDF? Should I have created a Xacro file instead?
Thanks in advance
Originally posted by schultza on ROS Answers with karma: 232 on 2015-11-27
Post score: 1
Answer:
In fact it is up to you to use xacro. You do not have to, but especially if you have a large URDF and possible duplicate codes it will be beneficial.
xacro provides you some advantages with URDF. As the tutorial you linked points out,
It does three things that are very helpful.
Constants
Simple Math
Macros
This way you can use constants throughout your URDF, therefore it will be more parametrized;
<xacro:property name="width" value=".2" />
Also you can do some simple math like;
<cylinder radius="${wheeldiam/2}" length=".1"/>
And you can use macros like;
<xacro:macro name="default_origin">
<origin xyz="0 0 0" rpy="0 0 0"/>
</xacro:macro>
<xacro:default_origin />
For example in your case, you say that you have 3 laser range finders. Therefore, possibly without xacro, you are using the same code block again and again with some small differences like x, y, z and r, p, y. If you use xacro at this point, you can have a macro called "laser_scanner" and you can just use it three times when you need it.
You can check PAL's repo for PMB2 in this link for such an example.
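For the three-scanner case, such a macro could look roughly like this (a sketch only; the names, parameters and geometry are made up):

```xml
<xacro:macro name="laser_scanner" params="name parent x y z yaw">
  <joint name="${name}_joint" type="fixed">
    <origin xyz="${x} ${y} ${z}" rpy="0 0 ${yaw}"/>
    <parent link="${parent}"/>
    <child link="${name}_link"/>
  </joint>
  <link name="${name}_link">
    <visual>
      <geometry>
        <cylinder radius="0.03" length="0.05"/>
      </geometry>
    </visual>
  </link>
</xacro:macro>

<xacro:laser_scanner name="laser_front" parent="base_link" x="0.2" y="0.0"  z="0.1" yaw="0"/>
<xacro:laser_scanner name="laser_left"  parent="base_link" x="0.0" y="0.15" z="0.1" yaw="1.57"/>
<xacro:laser_scanner name="laser_right" parent="base_link" x="0.0" y="-0.15" z="0.1" yaw="-1.57"/>
```

Each instantiation expands to a full joint/link pair, so the three scanner blocks differ only in their arguments.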
Another advantage of using xacro is that it will be interpreted and converted to URDF on run-time if you want. Therefore it will be a dynamic URDF file.
Originally posted by Akif with karma: 3561 on 2015-11-27
This answer was ACCEPTED on the original site
Post score: 7
Original comments
Comment by schultza on 2015-11-30:
Thanks for the answer, but do I understand it right that you also write xacro by hand, and you can't create a xacro from a URDF, only the other way around?
Comment by Akif on 2015-11-30:
Yes, you write your xacro files and convert them to one single URDF file before run-time, or use this URDF in memory during run-time without having a file on the hard drive. | {
"domain": "robotics.stackexchange",
"id": 23097,
"tags": "ros, urdf, robot-model, xacro"
} |
What are the differences between stage and stageros? | Question:
When using stage, diamondback has 2 executables at /opt/ros/diamondback/stacks/simulator_stage/stage/bin: stage and stageros. How are these two different, particularly in usage? Using these two in simple environments shows no differences.
Originally posted by Arkapravo on ROS Answers with karma: 1108 on 2011-08-28
Post score: 2
Answer:
Hmm, good question. There are 2 stage executable files in the stage bin directory: stage and stageros.
stage is the one compiled from the original stage files. It can be used as the original stage simulator if the libraries are linked correctly. (By default the libraries are not linked for stage, the stage library has to be added to ldconfig)
However, stage does not provide the interface to ros. So if you use stage, the simulation data will not be available to ros although it will probably still work with player.
stageros on the other hand makes a portion of the stage interface available to ROS.
It publishes the following topics to the ROS interface, so that it works with nodes written for ROS.
/base_pose_ground_truth
/base_scan
/clock
/cmd_vel
/odom
/tf
So if you want to run simulation in ros, you gotta use the stageros instead of stage; in order for ros to interpret the simulation data.
Originally posted by Zack with karma: 216 on 2011-08-28
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Arkapravo on 2011-10-03:
@Zack : One more observation to confirm to your answer; stageros needs an instance of roscore, stage can function without roscore.
Comment by Zack on 2011-08-29:
simulations in ros.
Comment by Zack on 2011-08-29:
Nope, there's only one executable for gazebo. stage is useful for 2D simulations. It's easier to set up and is good for simple models. However, for more complex models, gazebo would be able to model them more accurately. You can take a look at the provided launch files to get an idea of how to launch
Comment by Arkapravo on 2011-08-29:
@Zack : Thank you ! I am not really into gazebo - but do we have a similar situation there too ? gazebo and gazeboros (?) ? | {
"domain": "robotics.stackexchange",
"id": 6543,
"tags": "simulation, stage, ros-diamondback"
} |
How can I get the axes of the polarization ellipse from the Jones vector of the light? | Question: I am analyzing the polarization state of a monochromatic, coherent light source, for which I know the Jones vector of the polarization,
$$
\mathbf E
=\begin{pmatrix}E_x\\E_y\end{pmatrix}
=\begin{pmatrix}|E_x|e^{i\varphi_x}\\|E_y|e^{i\varphi_y}\end{pmatrix},
$$
and I would like to expand it in terms of a major and a minor axis of ellipticity, i.e. in the form
$$
\mathbf E=
e^{i\varphi}\left(
A \hat{\mathbf u} + i B\hat{\mathbf v}
\right)
=
e^{i\varphi}\left(
A
\begin{pmatrix}\cos(\theta)\\ \sin(\theta)\end{pmatrix}
+ i B
\begin{pmatrix}-\sin(\theta)\\ \cos(\theta)\end{pmatrix}
\right),
$$
or as shown graphically as follows:
Image source
Wikipedia provides a multi-step procedure going through the Stokes parameters, but I'm thinking there is surely a cleaner and more direct way to get $A$, $B$, $\hat{\mathbf u}$, $\hat{\mathbf v}$, $\theta$, and the components $A \hat{\mathbf u}$ and $B\hat{\mathbf v}$, from $E_x$ and $E_y$, and it's not particularly obvious from the search results I can find. What's the cleanest way to do this?
To be clear: what I think is lacking from the existing resources, and what the question is directly asking for, is an explicit set of connections, as simple as possible, for the named parameters (all of $A$, $B$, $\hat{\mathbf u}$, $\hat{\mathbf v}$, $\theta$, and the components $A \hat{\mathbf u}$ and $B\hat{\mathbf v}$), in terms of the Cartesian components $E_x$ and $E_y$. Schemes that simply send to some other set of complex manipulations are already available from Wikipedia and are not what the question is asking for.
Answer: The cleanest way to do this is offered by Michael Berry in the paper
Index formulae for singular lines of polarization. M V Berry, J. Opt. A: Pure Appl. Opt. 6, 675–678 (2004), author eprint.
In Berry's notation, the electric field can be written as
$$
\mathbf E=\mathbf P +i \mathbf Q = e^{i\gamma} \left(\mathbf A+i\mathbf B\right),
$$
where $\mathbf P$, $\mathbf Q$, $\mathbf A$ and $\mathbf B$ are real-valued vectors, $\mathbf A$ and $\mathbf B$ are respectively the major and minor axes of the polarization ellipse, and those two are defined up to a sign by $\mathbf A\cdot\mathbf B=0$ and $|\mathbf A|\geq |\mathbf B|$. With this notation, the polarization axes and the phase are defined as
$$
\gamma = \frac12 \arg(\mathbf E\cdot\mathbf E)
\quad\text{and}\quad
\mathbf A+i\mathbf B = \frac{\sqrt{\mathbf E^*\cdot \mathbf E^*}}{\left|\sqrt{\mathbf E^*\cdot \mathbf E^*}\right|}\mathbf E,
$$
or in other words
$$
\mathbf A = \frac{1}{\left|\sqrt{\mathbf E^*\cdot \mathbf E^*}\right|}\mathrm{Re}\mathopen{}\left[\sqrt{\mathbf E^*\cdot \mathbf E^*} \: \mathbf E\right]\mathclose{}
\quad\text{and}\quad
\mathbf B = \frac{1}{\left|\sqrt{\mathbf E^*\cdot \mathbf E^*}\right|}\mathrm{Im}\mathopen{}\left[\sqrt{\mathbf E^*\cdot \mathbf E^*} \: \mathbf E\right]\mathclose{}.
$$
There is an obvious sign ambiguity, in that flipping both $\mathbf A$ and $\mathbf B$ and adding $\pi$ to $\gamma$ will not change anything (i.e. rotating the polarization ellipse by 180° is equivalent to adding a phase), which is reflected in the branch cuts of both the argument and the square root functions. These naturally mesh together so long as both branch cuts are taken on the same cut, ideally along the negative real axis.
As another tricky point, one should note that these formulas are not defined when $\mathbf E\cdot\mathbf E=0$, which corresponds to circular polarization; in this case both the polarization axes, as well as the phase $\gamma$ at the major axis, are ill-defined, so this is not a problem.
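The formulas above translate directly into code; a minimal numerical sketch using only the standard library (the function and variable names are my own):

```python
import cmath

def ellipse_axes(Ex, Ey):
    """Split a Jones vector E = (Ex, Ey) into e^{i gamma} (A + i B),
    following the Berry/Dennis formulas above.  Not defined for exactly
    circular polarization, where E.E = 0."""
    EdotE = Ex * Ex + Ey * Ey             # complex scalar E.E
    gamma = 0.5 * cmath.phase(EdotE)      # phase at the major axis
    root = cmath.sqrt(EdotE.conjugate())  # sqrt(E*.E*)
    u = root / abs(root)                  # unit-modulus factor e^{-i gamma}
    ax, ay = u * Ex, u * Ey               # components of A + iB
    A = (ax.real, ay.real)                # major axis
    B = (ax.imag, ay.imag)                # minor axis
    return gamma, A, B
```

For instance, $E_x = 2$, $E_y = i$ gives $\gamma = 0$, $\mathbf A = (2, 0)$ and $\mathbf B = (0, 1)$, and one can check $\mathbf A\cdot\mathbf B = 0$ and $|\mathbf A| \geq |\mathbf B|$. As noted above, the branch cuts of the argument and the square root must sit on the same ray for the signs to mesh.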
As an added bonus, this approach also naturally gives the direction of the normal to the plane of the polarization ellipse, in the form
$$
\mathbf C
= \frac12 \mathrm{Im}\mathopen{}\left(\mathbf E^*\times\mathbf E\right)\mathclose{}
=\mathbf P\times\mathbf Q
=\mathbf A\times\mathbf B,
$$
where the cross product $\mathbf E^*\times\mathbf E$ is naturally imaginary, as its conjugate is minus itself. Of course, this will vanish if $\mathbf E$ and $\mathbf E^*$ (or $\mathbf P$ and $\mathbf Q$) are linearly dependent, which corresponds to linear polarization; in this case, $\mathbf B$ will vanish, because $\sqrt{\mathbf E^*\cdot \mathbf E^*} \: \mathbf E$ is naturally real.
Berry credits
Polarization singularities in paraxial vector fields: morphology and statistics. M R Dennis, Opt. Commun. 213,
201–21 (2002), eprint.
for this form, and that reference contains a fuller proof of how and why the decomposition works.
This is, in fact, rather simple, once you realize that the decomposition as $\mathbf E = e^{i\gamma} \left(\mathbf A+i\mathbf B\right)$, as above, must exist, because under those conditions the dot product reduces to
$$
\mathbf E\cdot\mathbf E
=e^{2i\gamma} \left(\mathbf A+i\mathbf B\right)\cdot \left(\mathbf A+i\mathbf B\right)
=e^{2i\gamma}(\mathbf A^2-\mathbf B^2),
$$
where $\mathbf A^2-\mathbf B^2$ is real and positive, so taking the argument of both sides naturally gives the phase as $2\gamma=\arg(\mathbf E\cdot\mathbf E).$ Similarly, taking the modulus of that equation returns $\mathbf A^2-\mathbf B^2=|\mathbf E\cdot\mathbf E|$, so we can simply get the phase factor as
$$
e^{2i\gamma}
=\frac{\mathbf E\cdot\mathbf E}{\mathbf A^2-\mathbf B^2}
=\frac{\mathbf E\cdot\mathbf E}{|\mathbf E\cdot\mathbf E|}
,\ \text{so}\
e^{i\gamma}
=\frac{\sqrt{\mathbf E\cdot\mathbf E}}{|\sqrt{\mathbf E\cdot\mathbf E}|}
,\ \text{and therefore}\
e^{-i\gamma}
=\frac{\sqrt{\mathbf E^*\cdot\mathbf E^*}}{|\sqrt{\mathbf E^*\cdot\mathbf E^*}|};
$$
the characterization for $\mathbf A+i\mathbf B$ then follows from $\mathbf E = e^{i\gamma} \left(\mathbf A+i\mathbf B\right)$ by simply dividing by $e^{i\gamma}$. | {
"domain": "physics.stackexchange",
"id": 38076,
"tags": "electromagnetism, optics, electromagnetic-radiation, polarization"
} |
How can a star emit light if it is in Plasma state? | Question:
I understand that star is in Plasma state (all nucleus and electrons are not bound to each other and moving around freely)
Photon is emitted when an excited electron moves back to lower orbit.
So in a star if electrons are not in any orbit then how can photons be produced?
I am sure some of my understanding above is incorrect :) please help me understand.
Answer:
1.I understand that star is in Plasma state (all nucleus and electrons are not bound to each other and moving around freely)
While hydrogen only has one electron, all other neutral atoms have more than one electron. When one electron is removed, this is referred to as the "first ionization". Removing one of several electrons from an atom still makes it plasma. Also, the term "plasma" is used when a substantial fraction of the atoms are ionized, not necessarily all. So in the sun or other stars, there are still electrons bound to nuclei, as well as free electrons.
For these reasons, in the spectrum below, one still sees lines from transitions between electron energy levels of atoms.
2.Photon is emitted when an excited electron moves back to lower orbit.
Yes, and absorbed when going to a higher level, that is why we see the lines in the above spectrum.
3.So in a star if electrons are not in any orbit then how can photons be produced?
The main reason is that gamma ray photons are produced in the core of the sun by hydrogen fusion to helium, and create a cascade of lower energy photons as they travel to the surface. Also, all materials emit black body radiation. The overall shape of the above spectrum fits well to a black body model. | {
"domain": "physics.stackexchange",
"id": 13397,
"tags": "astrophysics, thermal-radiation, plasma-physics, stars, stellar-physics"
} |
Why does a subduction zone produce a serpentinization diapir rather than volcanism? | Question: The classic Troodos Ophiolite in Cyprus has been uplifted by a 'serpentinization event'. Upper mantle (peridotite) has been serpentinized creating a buoyant diapir. This has uplifted the ocean crust sufficiently for it to be thrust over continental crust (underlying continental crust has been inferred from gravitational surveys).
The last sentence of the abstract of "Tertiary uplift history of the Troodos massif, Cyprus", AHF Robertson says:
The dominant driving force may have involved the liberation of water
from a subduction zone dipping northward beneath Cyprus.
This makes sense - serpentinization is the alteration process of olivine -> serpentine in the presence of water. And, as we know, subduction slabs release fluids (primarily water) into the overlying mantle wedge.
These fluids usually promote partial melting leading to andesite arc volcanism. However, Cyprus has this big serpentinite diapir instead of volcanism. Why?
What causes the fluids from the subducting slab to occasionally result in serpentinization of the mantle wedge instead of partial melting?
Answer: First of all, your statement implies that volcanism didn't occur in Troodos. That is not true. Troodos was even referred to as "Troodos Volcano" once in Miyashiro's 1973 article about Troodos.
A geological map of Cyprus clearly shows that a large portion of the ophiolite is composed of lavas (volcanic) and dykes (sub-volcanic):
(source)
Note the red, brown and pink colors. This is however not the arc volcanism you were referring to but rather ocean floor volcanism in a spreading center setting.
As for your question, why there isn't any arc volcanism in Troodos, the answer is simply that it wasn't hot enough. Let's put the question of where the water comes from aside, and concentrate on what happens when water meets an ultramafic rock, which is what mantle rocks are.
(source)
You can see that serpentine is stable at temperatures lower than 600°C, and even lower at (the more relevant for our case) low pressures. At higher temperatures serpentine is not stable anymore and the stable mineral is olivine (a rock forming mineral in lherzolite and harzburgite). Olivine is a dry mineral, so what happens to all the water? The water acts to lower the temperature needed to melt the mantle rocks.
You can see the dry versus the wet solidus (the temperature in which a rock begins to melt) in the following diagram.
So to sum it up, serpentinite forms when water reacts with ultramafic rocks at low temperature, for example when seawater infiltrates the mantle rocks, or when fluids from the subducted dehydrated slab reach the colder shallow mantle rocks. Melting and consequently arc volcanism occurs for example when fluids from the subducted dehydrated slab rise to deep and hot mantle rocks and suppress the melting temperature to below the ambient temperature.
There are several reasons that the mantle rocks in Troodos were "cold" enough for this to occur. First of all, you are talking about very shallow rocks, very close to the plutonic section of the ophiolite.
(source)
You can see that shallow mantle rocks are in the 300-500°C range. The arc volcanoes that you see in that figure are situated above the areas where fluids from the subducting slab can infiltrate hot (1000ish°C) mantle rocks. So you can use the Japan Sea as an analogue for Troodos in this case. Now, it is true that temperatures were likely higher because magmatic activity did occur in Troodos, but as Troodos was a slow-spreading center, the magmatic activity was rather sporadic and localized. The Nuriel et al. (2009) paper that Lanzafame refers to actually advocates the idea that Troodos was a core-complex. That is, the mantle rocks were directly exposed to seawater due to faulting, which both cooled them considerably and facilitated serpentinization.
The Troodos ophiolite is indeed a supra-subduction zone ophiolite. And it is reasonable to think that arc volcanism occurred somewhere, but the record is absent from the Troodos ophiolite itself. If you are interested, look up "a h f robertson" on Google Scholar and read some of his newer work. The article you cited is from the 70s and much research has been conducted since then. | {
"domain": "earthscience.stackexchange",
"id": 238,
"tags": "geology, plate-tectonics, petrology, subduction"
} |
Influence of pressure on thermal conductivity of water vapour (steam) | Question: I am looking for an analytical correlation of the thermal conductivity of water vapour (steam) in function of pressure. Apparently the effect is not large, but I would like to confirm this by some correlations.
The only thing I found so far is this plot from The Engineering Toolbox. Unfortunately, no sources/references are given.
Thanks!
Answer: Bird, Stewart, and Lightfoot, Transport Phenomena, Chapter 9 presents molecular derivation of pure species thermal conductivity as a function of temperature and pressure, and, in Section 9.2 presents a Corresponding States plot of reduced thermal conductivity as a function of reduced temperature and reduced pressure. At low pressures, all the curves converge to a limiting curve that is a function only of reduced temperature. | {
"domain": "physics.stackexchange",
"id": 60260,
"tags": "thermodynamics, pressure, water, thermal-conductivity"
} |
Signal in frequency domain with OpenCV dft | Question: I am experimenting with cv::dft: a 1HZ sinus signal is generated, and displayed in the frequency domain. But for some reason it hasn't got the maximum component at 1Hz. My code is the following:
const int FRAME_RATE = 20; //!< sampling rate in [Hz]
const int WINDOW_SIZE = 256;
double len = double(WINDOW_SIZE)/double(FRAME_RATE); // signal length in seconds
double Fb = 1./len; // frequency bin in Hz
// Constructing frequency vector
std::vector<double> f;
double freq_step = 0;
for (int i = 0; i < WINDOW_SIZE; ++i)
{
f.push_back(freq_step);
freq_step += Fb;
}
// Create time vector
std::vector<double> t;
double time_step = 0;
for(int i = 0; i<WINDOW_SIZE; ++i)
{
t.push_back(time_step);
time_step += 1./double(FRAME_RATE);
}
// Creating sin signal with 1Hz period
std::vector<double> y;
for(auto val : t)
{
y.push_back(sin(1*FRAME_RATE*val));
}
// Compute DFT
cv::Mat fd;
cv::dft(y, fd, cv::DFT_REAL_OUTPUT);
fd = cv::abs(fd);
If I plot the signal in time and frequency domain: plot(t, y); plot(f, fd) the result is the following:
The time signal is good, but the frequency signal has its maximum around 6 Hz instead of 1 Hz.
Where did I make the mistake?
Answer: Your sine wave should use 2pi instead of frame rate. Also, your frequency data should have window size/2 points and go from 0 to frame rate/2, not frame rate. | {
"domain": "dsp.stackexchange",
"id": 6814,
"tags": "fourier-transform, frequency-spectrum, opencv, c++"
} |
Transform class | Question:
Where is the Transform class defined? What members and functions does it have? I have been able to find the Transforms class, but not the Transform class.
Originally posted by qdocehf on ROS Answers with karma: 208 on 2011-07-12
Post score: 0
Answer:
tf::Transform is a typedef for a Bullet btTransform class.
Originally posted by arebgun with karma: 2121 on 2011-07-12
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 6116,
"tags": "transform"
} |
Get dynamic World Pose of a link | Question:
Hey guys
I am struggling to get the dynamic world pose of a link. I have written a plugin which computes the distance to certain objects in the world. Therefore I need to know the world pose of different links in a model which is moving. To get the world pose of a link I have done this in the OnUpdate() function:
physics::Link_V links = this->world->ModelByName("desk_0")->GetLinks();
for (physics::Link_V::iterator iter = links.begin(); iter != links.end(); ++iter){
    if((*iter)->GetName().find(this->anchorPrefix) == 0){
        physics::LinkPtr anchor = *iter;
        ignition::math::Pose3d anchorPose = anchor->GetWorldPose();
        RCLCPP_INFO(ros_node->get_logger(), "Anchor Pose: %s: x: %f, y: %f, z: %f", anchor->GetName().c_str(), anchorPose.Pos().X(), anchorPose.Pos().Y(), anchorPose.Pos().Z());
    }
}
As expected, I get the world pose of the different links, but the pose does not change when I move the model around in the world. The model is not static; only the gravity for those specific "anchor" links is disabled. I also disabled the auto-disable feature (<allow_auto_disable>0</allow_auto_disable>) for the model.
I hope somebody could give me a hint what i could have done wrong or an alternative approach to get the World pose.
Thanks in advance
Jannis
Originally posted by jl on Gazebo Answers with karma: 1 on 2022-03-07
Post score: 0
Answer:
I finally found it out myself:
The problem was that I hadn't connected the links to the base link of the model through fixed joints, so they were not moving when I moved the model.
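In SDF, that connection can be expressed with one fixed joint per anchor link, roughly like this (a sketch; the link names are made up):

```xml
<joint name="anchor_0_joint" type="fixed">
  <parent>base_link</parent>
  <child>anchor_0_link</child>
</joint>
```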
Hope this helps someone sometime.
Originally posted by jl with karma: 1 on 2022-03-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4647,
"tags": "gazebo-11"
} |
Frequency difference when water splashes at different temperatures | Question: When I pour hot water (near boiling) and cold water ($5 \unicode{x2103}$) from a height on a platform, there is a distinct difference in the sound that is generated. I feel that hot water splashing has a lower frequency than cold water splashing. What can be the possible reason behind this?
Edit 1: I used a tea kettle to heat the water and dropped it on a marble platform. I did the same experiment with cold (refrigerated) water using the same kettle. The height would be around 1.5 m. There's a distinct difference between the sounds produced.
Edit 2: I guess I won't need to do the experiment as @Deep suggested. Please view the link given by @Porges. Also, I was incorrect in relating the frequencies: hot water makes a higher-frequency sound. The only thing is, how does bubbling make it more shrill?
Answer: This is a guess since I have never done the experiment, but the viscosity of water falls by a factor of 5 on heating from 5°C to 100°C. The viscosity is one of the two factors (the other being density) that control the water flow, so it is quite reasonable to suppose that water at 100°C splashes in a noticably different way to water at 5°C.
I mentioned above that the density also affects the flow. However the density of water only changes by about 4% over this temperature range. So it seems likely that the change in the viscosity is the main factor. | {
"domain": "physics.stackexchange",
"id": 38011,
"tags": "fluid-dynamics, acoustics, temperature, frequency, viscosity"
} |
Digital Genomic Footprinting for ENCODE | Question: I'm reading over the ENCODE Nature papers, and one of the papers referred to is "Global mapping of protein-DNA interactions in vivo by digital" by Hesselberth et al[1].
Genomic footprinting is a massively parallel variant of DNAse I hypersensitivity screening, which I feel I have a good handle on, but I don't understand this paragraph from the beginning of the results (the troublesome part is in bold):
To visualize regulatory protein occupancy across the genome of Saccharomyces cerevisiae, we coupled DNase I digestion of yeast nuclei with massively parallel DNA sequencing to create a dense whole-genome map of DNA template accessibility at nucleotide-level. We analyzed a single well-studied environmental condition, yeast a cells treated with the pheromone α-factor, which synchronizes cells in the G1 phase of the cell cycle. We isolated yeast nuclei and treated them with a DNase I concentration sufficient to release short (<300 bp) DNA fragments while maintaining the bulk of the sample in high molecular weight species (Supplementary Fig.1). These small fragments derive from two DNase I “hits” in close proximity, and therefore their isolation minimizes contamination by single fragment ends derived from random shearing. Because each end of the released DNase I ‘double-hit’ fragments represents an in vivo DNase I cleavage site, the sequence and hence genomic location of these sites can be readily determined by sequencing (Supplementary methods).
There's a reference to another paper by Sabo et al.[2] about genome-scale DNase assays using microarrays in that paragraph that I'm reading, but if someone understands the biology well, I'd really appreciate an answer.
Hesselberth JR, Chen X, Zhang Z, Sabo PJ, Sandstrom R, Reynolds AP, Thurman RE, Neph S, Kuehn MS, Noble WS, Fields S, Stamatoyannopoulos JA. 2009. Global mapping of protein-DNA interactions in vivo by digital genomic footprinting. Nature Methods 6(4): 283–289, doi:10.1038/nmeth.1313.
Sabo PJ, Kuehn MS, Thurman R, Johnson BE, Johnson EM, Cao H, Yu M, Rosenzweig E, Goldy J, Haydock A, Weaver M, Shafer A, Lee K, Neri F, Humbert R, Singer MA, Richmond TA, Dorschner MO, McArthur M, Hawrylycz M, Green RD, Navas PA, Noble WS, Stamatoyannopoulos JA. 2006. Genome-scale mapping of DNase I sensitivity in vivo using tiling DNA microarrays. Nature Methods 3: 511-518, doi:10.1038/nmeth890.
Answer: After reading the paper cited I think the logic goes like this: DNAse I will create free ends at accessible sites in the genome. However shearing during subsequent DNA isolation is also a source of free ends, and these represent noise in the analysis. You have to put in a lot of energy to shear DNA to very small fragments, so I infer that mild shearing during DNA isolation is very unlikely to create a new end that is near to an existing DNase I-generated end. Therefore, under the conditions used, any small fragments are much more likely to have been generated by two closely-spaced DNase I hits. By limiting the analysis to these fragments the noise from shearing is minimised.
Now that I've written this answer it doesn't seem like much more than a rewrite of the emboldened sentence, but I hope it is helpful anyway. | {
"domain": "biology.stackexchange",
"id": 568,
"tags": "bioinformatics, dna-sequencing"
} |
Learning about EXPTIME and EXPSPACE | Question: I'd like to know some good starting points (such as books, papers, lecture notes, etc.) on EXPTIME and EXPSPACE. I'd like to learn more about these two topics, but I'm not sure what the best approach is....
Answer: You may start from the connection between the polynomial identity problem and exponential time classes. Polynomial identity testing has significant importance in both algorithm design and
complexity theory. The recent breakthrough faster exponential-time exact algorithm for the Hamiltonian cycle problem
was derived via a reduction to polynomial identity. From the complexity point of view,
derandomization of polynomial identity testing implies the separation of NEXP or PP from P/poly.
Therefore, I believe that polynomial identity will continue to play an important role in both
algorithm and complexity theory. It is closely related to some exponential-time algorithms, and
also to the separations between exponential time classes and non-uniform circuit classes.
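To get a concrete feel for the topic, here is a minimal sketch of the randomized polynomial identity test (the Schwartz-Zippel approach); it is the derandomization of exactly this kind of test that yields the circuit lower bound consequences mentioned above. The function names and the black-box interface are illustrative, not from any particular paper:

```python
import random

random.seed(0)  # deterministic demo

def prob_equal(p, q, trials=20, field=10**9 + 7):
    """Randomized identity test for two polynomials given as black boxes.
    By the Schwartz-Zippel lemma, if p != q (as polynomials of modest
    degree), a random evaluation point exposes the difference with high
    probability; agreement on every trial suggests p == q."""
    for _ in range(trials):
        x = random.randrange(field)
        if p(x) % field != q(x) % field:
            return False   # definitely not identical
    return True            # identical with high probability

# (x + 1)^2 and x^2 + 2x + 1 are the same polynomial...
assert prob_equal(lambda x: (x + 1) ** 2, lambda x: x * x + 2 * x + 1)
# ...while x^2 + 1 differs, and a random point catches it
assert not prob_equal(lambda x: (x + 1) ** 2, lambda x: x * x + 1)
```

The open question is whether this coin-flipping can be removed without an exponential blow-up.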
"domain": "cstheory.stackexchange",
"id": 657,
"tags": "cc.complexity-theory, reference-request, complexity-classes, exp-time-algorithms"
} |
Problem while running navigation stack for turtlebot | Question:
After going through ROS tutorials on navigation stack, I was trying to implement it on Turtlebot with Kinect. I have done the following to get navigation stack up and running:
I have created a launch file for pointcloud to laser transformation first (kinect_laser.launch).
I am using turtlebot_navigation package for all config yaml costmap files, move_base and amcl.
Created a package for publishing odometry data and for setting up the /map to /odom link, as this can't be a static transform publisher.
Then I created one launch file with "tf" static transform publisher for /base_link to /camera_link (Kinect), included the above created node in step 3, included amcl and move_base (which has all yaml config files).
When I launch this final file, everything goes fine as per frames.pdf and rosrun tf tf_echo /map /odom (values change as the bot moves), but after every topic gets published, it throws the following message:
" The sensor at ( , ) is out of map bound. Hence, the costmap can't raytrace it. "
I am not sure which costmap files I need to update and how I should decide on costmap parameters like footprint, robot_radius etc. so that I don't get this warning. Also, when clicking on 2D Pose Estimate I am not getting a laser beam surrounding that point, and on clicking 2D Nav Goal, the robot just rotates in the same position without translating to the goal.
Anyone please suggest on this ?
Originally posted by Devasena Inupakutika on ROS Answers with karma: 320 on 2013-03-28
Post score: 1
Original comments
Comment by Devasena Inupakutika on 2013-04-05:
Issue is resolved now. Figured out the problem: The transform from /map to /odom is static but /odom to /base_link varies. Hence, edited and made changes to the tf publisher for /odom to /base_link and respective launch files. Also, first tried navigating bot in the environment and saved map then
Comment by Devasena Inupakutika on 2013-04-05:
tried using that map for setting 2D pose estimate and giving 2D Nav goal which then worked as expected and robot can now autonomously navigate through environment. However, I had to include launch file for SLAM that I created for SLAM in navigation launch file.
Answer:
Issue is resolved now. Figured out the problem: The transform from /map to /odom is static but /odom to /base_link varies. Hence, edited and made changes to the tf publisher for /odom to /base_link and respective launch files. Also, first tried navigating bot in the environment and saved map then tried using that map for setting 2D pose estimate and giving 2D Nav goal which then worked as expected and robot can now autonomously navigate through environment. However, I had to include launch file for SLAM that I created for SLAM in navigation launch file.
Originally posted by Devasena Inupakutika with karma: 320 on 2013-04-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13585,
"tags": "navigation, turtlebot, transform"
} |
Are eigenvectors always orthogonal to each other? | Question: When an observable/self-adjoint operator $\hat{A}$ has only discrete eigenvalues, the eigenvectors are orthogonal to each other. Similarly, when an observable $\hat{A}$ has only continuous eigenvalues, the eigenvectors are orthogonal to each other.
But what if $\hat{A}$ has both discrete eigenvalues and continuous ones?
CONTEXT:
According to my teacher, an observable $\hat{A}$ can have discrete eigenvalues and continuous ones simultaneously.
$$\hat{A} |n\rangle = \alpha _n |n\rangle$$
$$\hat{A} |\xi\rangle = \xi |\xi\rangle$$
The completeness relation is this.
$$\sum _n |n\rangle\langle n| + \int d\xi |\xi\rangle\langle\xi| = \hat{1}$$
Now the following are true.
$$\langle n|m\rangle = \delta _{nm}$$
$$\langle \xi | \xi '\rangle = \delta (\xi - \xi ')$$
QUESTION:
Are $|n\rangle$ and $|\xi\rangle$ orthogonal to each other? I mean, is the equation below true?
$$\langle n|\xi\rangle = 0$$
I want this to be true. I have to use this to prove the expectation value formula
$$E[A] = \frac{\langle \psi|\hat{A}|\psi\rangle}{\langle \psi|\psi\rangle}.$$
Answer: You need to formalize the notion of discrete/continuous. If we assume that this is a well-defined property of the system, then there must exist an observable $D$ that has the same eigenstates as $A$, with eigenvalue $0$ for the discrete eigenstates and $1$ for the continuous ones. You can then prove that a discrete eigenstate $\left|n\right>$ and a continuous eigenstate $\left|\xi\right>$ are orthogonal even when $\alpha_n = \xi$ (otherwise, with different eigenvalues of $A$, we would already know that they have to be orthogonal), using the fact that their eigenvalues of $D$ are different.
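For completeness, here is a short sketch of why the equal-eigenvalue case works, assuming such an observable $D$ exists (so that $D|n\rangle = 0\cdot|n\rangle$ and $D|\xi\rangle = 1\cdot|\xi\rangle$). Evaluating the matrix element $\langle n|D|\xi\rangle$ in two ways gives
$$\langle n|D|\xi\rangle = \langle n|\left(D|\xi\rangle\right) = 1\cdot\langle n|\xi\rangle \qquad\text{and}\qquad \langle n|D|\xi\rangle = \left(\langle n|D\right)|\xi\rangle = 0\cdot\langle n|\xi\rangle,$$
so $\langle n|\xi\rangle = 0$.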
"domain": "physics.stackexchange",
"id": 39875,
"tags": "quantum-mechanics, operators, hilbert-space, eigenvalue, observables"
} |
Regularization - Combine drop out with early stopping | Question: I'm building a RNN (recurrent neural network) with LSTM cells. I'm using time series to perform anomaly detection.
When training my RNN I'm using a dropout of 0.5, and I'm using early stopping with a patience of 5 epochs when my validation loss is increasing.
Does it make sense to use early stopping in combination with dropout?
Answer: It does make sense, they are just two different things.
Dropout only makes your model's learning harder, and by doing this it helps the parameters of the model act in different ways and detect different features, but even with dropout you can potentially overfit your training set.
On the other hand, early stopping prevents your model from overfitting by taking the best model on your validation data so far.
However, for the sake of simplicity, I think it is easier to just use dropout (training a neural network is not easy and the training may not be successful for many different reasons, so it is good practice to reduce the possible causes of failure as much as possible). Unless you have a short time to train your network, with a sufficiently high amount of dropout you will ensure that your model is not overfitting.
My final recommendation is: just use dropout. If using a 0.5 dropout rate still overfits, set a higher dropout rate. | {
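For what it's worth, the patience logic is easy to sketch in a framework-agnostic way. This is an illustrative stand-in, not any particular library's API; in a real setup the training/validation steps would be your RNN's epochs, and the 0.5 dropout rate would additionally be passed to the LSTM layers:

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience`
    epochs, remembering the best epoch so its weights can be restored."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best_loss = float("inf")
        self.best_epoch = -1
        self.bad_epochs = 0

    def step(self, epoch, val_loss):
        """Return True if training should stop after this epoch."""
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.best_epoch = epoch
            self.bad_epochs = 0
            return False
        self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Simulated validation losses: improvement until epoch 3, then a plateau.
losses = [1.0, 0.8, 0.7, 0.65, 0.66, 0.67, 0.70, 0.72, 0.75, 0.80]
stopper = EarlyStopping(patience=5)
for epoch, loss in enumerate(losses):
    if stopper.step(epoch, loss):
        break
# Best model was at epoch 3 (loss 0.65); training stops 5 epochs later.
```

The point is that the stopper only looks at validation loss, so it composes cleanly with whatever regularization (dropout included) the model uses internally.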
"domain": "datascience.stackexchange",
"id": 5204,
"tags": "machine-learning, deep-learning, rnn, dropout"
} |
How to install XGBoost or LightGBM on Windows? | Question: I'm a Windows user and would like to use those mentioned algorithms in the title with my Jupyter notebook which is a part of Anaconda installation.
I've tried in anaconda promt window:
pip install xgboost
which returned:
Could not find a version that satisfies the requirement xgboost (from
versions: ) No matching distribution found for xgboost
Likewise, I've tried:
conda install lightgbm
which returned:
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url
https://repo.anaconda.com/pkgs/r/noarch/repodata.json.bz2 Elapsed: -
Help would be appreciated.
Answer: Follow this XGBoost installation guide: https://xgboost.readthedocs.io/en/latest/build.html
If you are using python3 then make sure that you run: pip3 install xgboost
To fix the problem with lightgbm on windows try installing OpenSSL first, refer this: https://www.cloudinsidr.com/content/how-to-install-the-most-recent-version-of-openssl-on-windows-10-in-64-bit/n | {
"domain": "datascience.stackexchange",
"id": 6200,
"tags": "python, xgboost, algorithms, windows"
} |
Merge table stored procedure | Question: I wrote a stored procedure that will do a merge of two tables:
CREATE PROCEDURE [dbo].[merge_tables]
-- Add the parameters for the stored procedure here
@SourceTable varchar(50),
@DestinationTable varchar(50),
@PrimaryKey varchar(50)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @update_query varchar(max) =
(select Concat('SET ', string_agg(cast(@DestinationTable + '.[' + name + '] = '+ @SourceTable +'.[' + name +']' as varchar(max)),','))
from sys.columns
WHERE object_id = OBJECT_ID(@DestinationTable) and name != @PrimaryKey and generated_always_type = 0 and system_type_id != 189 and is_identity = 0);
DECLARE @insert_query varchar(max) = (select Concat('([', string_agg(cast(name as varchar(max)),'],['), '])', ' VALUES ([', string_agg(cast(name as varchar(max)),'],['), '])')
from sys.columns
WHERE object_id = OBJECT_ID(@DestinationTable) and generated_always_type = 0 and is_identity = 0 and system_type_id != 189);
DECLARE @merge_query varchar(max) = 'MERGE ' + @DestinationTable +
' USING ' + @SourceTable +
' ON (' + @SourceTable + '.' + @PrimaryKey + ' = ' + @DestinationTable + '.' + @PrimaryKey + ')' +
' WHEN MATCHED THEN UPDATE ' + @update_query +
' WHEN NOT MATCHED BY TARGET THEN INSERT ' + @insert_query +
' WHEN NOT MATCHED BY SOURCE THEN DELETE;';
select @merge_query;
EXEC(@merge_query)
END
GO
It appears to work. Is there any way in which this code can be improved? Also, I am not sure how to make this work with compound keys, but that might be off topic for this site.
Answer:
Is there any way in which this code can be improved?
The procedure parameters should reflect the underlying data types. Each component of an identifier is sysname, a synonym for nvarchar(128) NOT NULL. Using varchar(50) artificially limits the length, and characters that might be used. Likewise, Unicode should be used for the dynamic SQL itself.
The user-supplied table identifier strings should be split using PARSENAME into schema and object portions, then quoted using QUOTENAME. This is safer and more correct than manually surrounding the whole identifier with square brackets.
QUOTENAME returns nvarchar(258), so a safe type for the input is twice this, plus one for the . separator, giving nvarchar(517).
Add a @Debug bit parameter, or similar, so the user can view the generated dynamic SQL (and the associated execution plan) before deciding to execute it.
The procedure should have a basic error handling framework, for example as described by Erland Sommarskog in Error and Transaction Handling in SQL Server.
Add some basic checks for things like object existence, the presence of a primary key, and so on.
STRING_AGG should normally have a WITHIN GROUP clause to provide deterministic ordering.
Compute the distinct column lists once, then reuse this value as necessary.
Separate the code into more steps to make it clearer, and easier to debug. Try to avoid code that scrolls off to the right, and deeply nested function calls.
It is difficult to anticipate all the conditions that could cause the MERGE statement to fail, but consider adding extra checks for e.g. computed columns. Even if this is covered by another condition, being explicit makes the intent clearer.
Avoid hard-coding a numeric type id. Use the TYPE_ID function instead.
Using CONCAT_WS can result in cleaner code than CONCAT when using separators.
Also, I am not sure how to make this work with compound keys...
This can be done by consulting sys.key_constraints and sys.index_columns, see the example code below.
Example
This illustrates how the above points might be incorporated:
CREATE OR ALTER PROCEDURE dbo.MergeTables
@Source nvarchar(517),
@Target nvarchar(517),
@Debug bit = 0
AS
BEGIN
SET NOCOUNT, XACT_ABORT ON;
BEGIN TRY
-- Cannot specify a database or server in the input table names
IF PARSENAME(@Source, 3) IS NOT NULL
BEGIN
RAISERROR(N'Three- or four-part names are not supported for the table identifer %s.', 16, 1, @Source);
END;
IF PARSENAME(@Target, 3) IS NOT NULL
BEGIN
RAISERROR(N'Three- or four-part names are not supported for the table identifer %s.', 16, 1, @Target);
END;
-- Break and quote input e.g. 'My Schema.My Object' - > [My Schema].[My Object]
SET @Source = CONCAT_WS(N'.',
QUOTENAME(PARSENAME(@Source, 2)),
QUOTENAME(PARSENAME(@Source, 1)));
SET @Target = CONCAT_WS(N'.',
QUOTENAME(PARSENAME(@Target, 2)),
QUOTENAME(PARSENAME(@Target, 1)));
-- Check tables are accessible
DECLARE @SourceID integer = OBJECT_ID(@Source, N'U');
DECLARE @TargetID integer = OBJECT_ID(@Target, N'U');
IF @SourceID IS NULL
BEGIN
RAISERROR(N'Table %s not found in the current database.', 16, 1, @Source);
END;
IF @TargetID IS NULL
BEGIN
RAISERROR(N'Table %s not found in the current database.', 16, 1, @Target);
END;
IF CONVERT(integer, OBJECTPROPERTYEX(@SourceID, 'TableHasPrimaryKey')) = 0
BEGIN
RAISERROR(N'Table %s does not have a primary key.', 16, 1, @Source);
END;
IF CONVERT(integer, OBJECTPROPERTYEX(@TargetID, 'TableHasPrimaryKey')) = 0
BEGIN
RAISERROR(N'Table %s does not have a primary key.', 16, 1, @Target);
END;
/* TODO: Check source and target schemas are compatible*/
DECLARE
-- Find the primary key columns for the MERGE statement's ON clause
@PrimaryKeyColumnList nvarchar(max) =
(
SELECT
STRING_AGG(
CONVERT(nvarchar(max),
CONCAT(N'S.', QUOTENAME(C.[name]), N'=T.', QUOTENAME(C.[name]))),
N' AND ')
WITHIN GROUP (ORDER BY IC.key_ordinal)
FROM sys.key_constraints AS KC
JOIN sys.index_columns AS IC
ON IC.[object_id] = KC.parent_object_id
AND IC.index_id = KC.unique_index_id
JOIN sys.columns AS C
ON C.[object_id] = IC.[object_id]
AND C.column_id = IC.column_id
WHERE
KC.parent_object_id = @SourceID
AND KC.[type_desc] = N'PRIMARY_KEY_CONSTRAINT'
),
@InsertClause nvarchar(max) = N'INSERT (column_list) VALUES (column_list)',
-- Find the insert columns list
@InsertColumnList nvarchar(max) =
(
SELECT
STRING_AGG(
CONVERT(nvarchar(max),
QUOTENAME(C.[name])),
N',')
WITHIN GROUP (ORDER BY C.column_id)
FROM sys.columns AS C
WHERE
C.[object_id] = @SourceID
AND C.is_computed = 0
AND C.is_identity = 0
AND C.is_column_set = 0
AND C.is_hidden = 0
AND C.generated_always_type = 0
AND C.system_type_id != TYPE_ID(N'timestamp')
),
-- Generate the UPDATE SET clause
@UpdateClause nvarchar(max) = N'UPDATE SET ' +
(
SELECT
STRING_AGG(
CONVERT(nvarchar(max),
CONCAT(N'T.', QUOTENAME(C.[name]), N'=S.', QUOTENAME(C.[name]))),
N',')
WITHIN GROUP (ORDER BY C.column_id)
FROM sys.columns AS C
WHERE
C.[object_id] = @SourceID
AND C.is_computed = 0
AND C.is_identity = 0
AND C.is_column_set = 0
AND C.is_hidden = 0
AND C.generated_always_type = 0
AND C.system_type_id != TYPE_ID(N'timestamp')
AND NOT EXISTS
(
-- Exclude column if it is part of the primary key
SELECT 1
FROM sys.key_constraints AS KC
JOIN sys.index_columns AS IC
ON IC.[object_id] = KC.parent_object_id
AND IC.index_id = KC.unique_index_id
WHERE
KC.parent_object_id = C.[object_id]
AND KC.[type_desc] = N'PRIMARY_KEY_CONSTRAINT'
AND IC.column_id = C.column_id
)
),
-- Basic form of the merge statement
@MergeStatement nvarchar(max) =
CONCAT_WS(
N' ',
N'MERGE @Target AS T',
N'USING @Source AS S',
N'ON primary_key_match_list',
N'WHEN MATCHED THEN merge_matched',
N'WHEN NOT MATCHED BY TARGET THEN merge_not_matched',
N'WHEN NOT MATCHED BY SOURCE THEN DELETE;');
-- Substitute the insert column list into the INSERT clause
SET @InsertClause = REPLACE(@InsertClause, N'column_list', @InsertColumnList);
-- Substitute other values into the MERGE template
SET @MergeStatement = REPLACE(@MergeStatement, N'@Target', @Target);
SET @MergeStatement = REPLACE(@MergeStatement, N'@Source', @Source);
SET @MergeStatement = REPLACE(@MergeStatement, N'primary_key_match_list', @PrimaryKeyColumnList);
SET @MergeStatement = REPLACE(@MergeStatement, N'merge_matched', @UpdateClause);
SET @MergeStatement = REPLACE(@MergeStatement, N'merge_not_matched', @InsertClause);
BEGIN TRANSACTION;
IF @Debug = 0
BEGIN
-- Peform the MERGE
EXECUTE (@MergeStatement);
END;
ELSE
BEGIN
PRINT @MergeStatement;
END;
COMMIT TRANSACTION;
RETURN 0;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
THROW;
RETURN -1;
END CATCH;
END; | {
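For illustration, usage might look like the following (the schema and table names here are hypothetical):

```sql
-- Preview the generated MERGE statement without executing it
EXECUTE dbo.MergeTables
    @Source = N'dbo.StagingCustomers',
    @Target = N'dbo.Customers',
    @Debug = 1;

-- Run the merge for real
EXECUTE dbo.MergeTables
    @Source = N'dbo.StagingCustomers',
    @Target = N'dbo.Customers';
```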
"domain": "codereview.stackexchange",
"id": 31581,
"tags": "sql, sql-server, t-sql"
} |
When drawing a vector on paper, does it matter where I'm putting the arrow? | Question:
Let a vector $\vec{P}$ pass along the line segment $AB$. Now, from the figure, can we say that $|\vec{P}|=AD$? [$D$ is the point where I drew the arrow]
In other words, when drawing a vector on paper, does it matter where I'm putting the arrow, or is the arrow's sole purpose just to indicate direction?
Answer:
In other words, when drawing a vector on paper, does it matter where I'm putting the arrow, or is the arrow's sole purpose just to indicate direction?
Arrow is usually placed at the end of the line that represents a vector and indicates vector direction.
Let a vector $\vec{P}$ pass along the line segment $AB$. Now, from the figure, can we say that $|\vec{P}|=AD$?
I do not see from your schematic how is vector $\vec{P}$ related to the line that passes through points $A$ and $B$. I would conclude that $\vec{P} = \vec{AD}$. | {
"domain": "physics.stackexchange",
"id": 87002,
"tags": "vectors, notation"
} |
How to compute performance of a detection-classification system? | Question: I use a yolo (y) to detect only one object and a multiclassifier (mc) that classifies that object.
Now, the problem is: what do I have to do with yolo's false positives and false negatives if I want to compute the whole system's accuracy, precision and recall?
Now I'm computing overall accuracy like this:
acc = (tp_mc + tn_mc) / (tp_mc + tn_mc + fp_mc + fn_mc + fn_y + fp_y)
To compute precision and recall I'm doing that for each class of mc:
precision_i = tp_mc_i / (tp_mc_i + fp_mc_i + fp_y_i)
recall_i = tp_mc_i / (tp_mc_i + fn_mc_i + fn_y_i)
Where fp_y_i and fn_y_i are the yolo's false positive and false negative that belongs to the class i of the multiclassifier.
Do you think that this is the correct way to compute accuracy, precision and recall?
Answer: What matters is whether the instance is classified correctly or not at the end, so it's only about mapping correctly the cases from the y classifier:
tp_y is easy, it's the classification status of the mc classifier
tn_y is a TN for every class $i$.
fp_y is a FP for the class $i$ predicted by mc and a TN for every other class.
fn_y is a FN for the true class $i$ and a TN for every other class.
I'm not sure what you count as tn_mc here? Normally in a multiclass setting accuracy is the sum of $TP_i$ for every class $i$, because there's no TN. That would give us:
$$acc=\frac{\sum_{i}TP_i}{n}$$
With $n$ the total number of instances (from the start, not only the ones fed to mc)
Your precision and recall formulas look correct to me, assuming that fp_y_i are the FP instances where $i$ is the predicted class whereas fn_y_i are the FN instances where $i$ is the true class. | {
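As an illustration of this bookkeeping, here is a small sketch (the class names and counts are made up; fp_y is attributed per the class mc predicted, fn_y per the true class, as assumed in the question):

```python
from collections import Counter

def combined_metrics(tp_mc, fp_mc, fn_mc, fp_y, fn_y, classes, n_total):
    """Fold detector (y) errors into the per-class multiclass counts.

    tp_mc/fp_mc/fn_mc: per-class Counters from the multiclassifier.
    fp_y: detector false positives, keyed by the class mc predicted.
    fn_y: detector false negatives, keyed by the true class.
    """
    precision, recall = {}, {}
    for c in classes:
        p_den = tp_mc[c] + fp_mc[c] + fp_y[c]
        r_den = tp_mc[c] + fn_mc[c] + fn_y[c]
        precision[c] = tp_mc[c] / p_den if p_den else 0.0
        recall[c] = tp_mc[c] / r_den if r_den else 0.0
    accuracy = sum(tp_mc.values()) / n_total  # no TN term in multiclass
    return accuracy, precision, recall

# Hypothetical counts for two classes over 100 images
tp_mc = Counter(cat=40, dog=30)
fp_mc = Counter(cat=5, dog=5)
fn_mc = Counter(cat=5, dog=5)
fp_y = Counter(cat=2, dog=3)   # detector fired on background
fn_y = Counter(cat=3, dog=2)   # detector missed the object
acc, prec, rec = combined_metrics(tp_mc, fp_mc, fn_mc, fp_y, fn_y,
                                  ["cat", "dog"], 100)
# acc == 0.7; prec["cat"] == 40/47; rec["dog"] == 30/37
```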
"domain": "datascience.stackexchange",
"id": 9885,
"tags": "accuracy, performance"
} |
Dyes to bind to double stranded DNA? | Question: Are there any commercially available fluorescent dyes that will bind only to double stranded DNA (not RNA, single stranded DNA etc.) for studying in vitro using confocal microscopy?
Answer: Most of the dyes used for visualization bind with a much higher affinity to dsDNA. This would be SybrGreen, EtBr (although this will bind RNA as well).
There is a pretty comprehensive website from Life Technologies about Nucleic acid stains that is worth a look.
There is as well a publication on this topic: "DNA Staining for Fluorescence and Laser Confocal Microscopy" | {
"domain": "biology.stackexchange",
"id": 3913,
"tags": "biochemistry, microscopy"
} |
Does coronal mass ejection include radiation at optical wavelengths? | Question: It is known that during Solar/Stellar Flare, plasma from the corona of the star is released into the space, and travels comparatively slowly.
In an hypothetical situation, does the coronal mass also consist of electromagnetic radiation within the optical wavelengths? And if so, is it possible to observe the incoming coronal mass ejection from the ground/surface of the Earth way before it approaches?
Answer: Coronal mass ejections consist of a very hot, but thin, plasma. Their very weak intrinsic emission would be dominated by ultraviolet and X-ray lines and bremsstrahlung continuum. There is very little optical radiation.
However, CMEs can and are monitored at optical wavelengths using the light that they scatter from the Sun. The process is Thomson scattering from the free electrons in the plasma. This is done by spacecraft because the Earth's atmosphere makes it very difficult to achieve the contrast necessary to observe the scattered light close to the Sun (except during total solar eclipses).
A CME takes a couple of days to travel from the Sun to the Earth and such monitoring can give "space weather forecasts" associated with magnetospheric phenomena triggered by the impact of CMEs.
An example picture taken by the LASCO white light imager on board the SOHO spacecraft is shown below. Everything here is viewed in scattered light. The Sun is obscured by an occulting disk. I have picked an example showing the eruption of a CME.
"domain": "astronomy.stackexchange",
"id": 3630,
"tags": "astrophysics, solar-flare, coronal-mass-ejection"
} |
Map array of objects and apply condition | Question: I'm mapping through an array of objects and applying conditions on each object by adding a new value according to the condition. I use map in ramda. Can I do this better?
events = map(el => {
if (!['ts', 'sxi', 'ht', 'hd'].includes(el.value)) {
el['extras'] = {type: 'cx', label: 'Es', name: 'es', options: ['cf']}
}
return el;
}, events);
Answer: You can skip Ramda altogether. JS has a built-in array.map().
I'd avoid doing side-effects on a map operation. The goal of a map operation is to transform one array of things into another array of things without affecting the original array or its contents. Not following this breaks expectations, and thus your code's reliability.
If you intend to mutate, use array.forEach() instead. Otherwise, if you want to continue using array.map(), create new objects.
You can do either of the following:
// Not mutating elements of events
const newEvents = events.map(e => {
if (!['ts', 'sxi', 'ht', 'hd'].includes(e.value)) {
// Copy e to a new object, append extras to this new object.
return { ...e, extras: {type: 'cx', label: 'Es', name: 'es', options: ['cf']} }
} else {
// Changed nothing, return original element.
return e
}
})
// Mutating elements of events
events.forEach(e => {
if (['ts', 'sxi', 'ht', 'hd'].includes(e.value)) return
e['extras'] = {type: 'cx', label: 'Es', name: 'es', options: ['cf']}
}) | {
"domain": "codereview.stackexchange",
"id": 31657,
"tags": "javascript"
} |
How to obtain an envelope of my mass-spring damping curve? | Question: I have my program for a mass-spring system made with the Euler method... and I can't manage to obtain the envelope of the BLUE damping curve. Can you help me?
#****************************************************************************************************************************************************************************************
# Damped spring-mass oscillator
# ****************************************************************************************************************************************************************************************
from pylab import *
from scipy.signal import hilbert,chirp
g = 0 #gravity [m/s2], 0 for no gravity
g = -9.80665 #gravity [m/s2], 0 for no gravity
m = 0.4532 #mass of the weight [kg]
k = 875.60 #stiffness [N/m]
# c = (2*m)*sqrt(k/m)
c = 0
c_kr = 2*sqrt(k*m)
c = 2*sqrt(k*m)
c = 5.0 #damping [Ns/m]
omega=sqrt(k/m)
f=omega/(2*pi)
T=1/f
T=2*pi/omega
f1=1/T
gamma=c/c_kr
p=c/(2*m)
print('damping decrement',gamma,' [-]')
print('damping', c)
print('critical damping',c_kr)
print('omega natural frequency',omega,' [rad]')
print('frequency',f,' [Hz=1/s]')
print('frequency',f1,' [Hz=1/s]')
print('period',T,' [s]')
# print('e',e,' [s]')
u_pocz=-0.0254 #initial deflection/displacement of the spring [m]
u_pocz= 0.0 #initial deflection/displacement of the spring [m]
v_pocz= 0.0 #velocity [m/s]
t_pocz= 0.0 #time [s]
env_pocz= 0.0 #initial envelope value
dt=0.001 #time step [s]
t_kon=0.15 #end time of the computation [s]
t_kon=1.00 #end time of the computation [s]
print('time',t_kon,' [s]')
u=u_pocz #displacement [m]
v=v_pocz #velocity [m/s]
t=t_pocz #time [s]
env=env_pocz #envelope [m]
#This I can compute because I know the deflection (extension) of the spring in [m] - and why is that?
#because k*u=G (the weight force), i.e. k*u=m*g, therefore u=m*g/k
# u_ugiecie_wlasne=m*g/k-u_pocz #static deflection [m]
# a=-g-(k/m)*(u-u_ugiecie_wlasne)-c*v*abs(v)/m #initial acceleration [m/s2], but this will only be needed later
u_ugiecie_wlasne=m*(g)/k #static deflection [m], without LOAD_BODY
print('u_ugiecie_wlasne',u_ugiecie_wlasne,' [m]')
przechowalnia_u=[]
przechowalnia_v=[]
przechowalnia_a=[]
przechowalnia_F=[]
przechowalnia_t=[]
przechowalnia_env=[]
# ****************************************************************************************************************************************************************************************
# Main calculations
# ****************************************************************************************************************************************************************************************
while (t<t_kon):
a=-g-(k*(u-u_ugiecie_wlasne)/m)-(c*v/m)+g # dv/dt=a=(-g-k*(u+u_ugiecie_wlasne)/m-c*v/m) with LOAD_BODY, i.e. gravitational acceleration *g
v=v+a*dt # v=v+dt*dv/dt
u=u+v*dt # u=u+dt*du/dt
F=(-g-(k*(u-u_ugiecie_wlasne)/m)-(c*v/m))*m
t=t+dt
przechowalnia_u.append(u)
przechowalnia_v.append(v)
przechowalnia_a.append(a)
przechowalnia_F.append(F)
przechowalnia_t.append(t)
przechowalnia_env.append(env)
# ****************************************************************************************************************************************************************************************
# Main calculations
# ****************************************************************************************************************************************************************************************
# ****************************************************************************************************************************************************************************************
# Graphics
# ****************************************************************************************************************************************************************************************
# coding: utf-8
from matplotlib.font_manager import FontProperties
# okno = get_current_fig_manager()
def quit_figure(event):
if event.key == 'escape':
close(event.canvas.figure)
if event.key == 'f10':
savefig('0.png',dpi=150)
# rcParams.update({'font.size': 24})
rcParams['font.size'] = 24 #set the value globally
rcParams['axes.linewidth'] = 2 #set the value globally
rcParams['toolbar'] = 'None'
font = {'family':'ISOCPEUR','weight':'normal','color':'black'}
# font = {'family':'ISOCPEUR','weight':'normal','color':'black','size':10}
fig=figure(num=None, frameon='False', figsize=(16, 7), facecolor='w')
# title('Identification of Johnson-Cook constitutive equations in terms of FEM simulation\n$\mathrm{Y=}$',font)
ylabel('Displacement [m]')
xlabel('Time [s]')
plot(przechowalnia_t, przechowalnia_u, linewidth=4, color='b',label='u - displacement, [m]')
# plot(analytical_signal, linewidth=2, color='b',label='u - przemieszczenie, [m]')
# plot(amplitude_envelope, linewidth=4, color='r',label='u - przemieszczenie, [m]')
# plot(przechowalnia_t, przechowalnia_a, linewidth=4, color='r',label='a - acceleration, [m/s$^2$]')
# plot(przechowalnia_t, przechowalnia_v, linewidth=4, color='r',label='v - velocity, [m/s]')
# plot(przechowalnia_t, przechowalnia_F, linewidth=4, color='r',label='F - force, [N]')
axvline(0,linestyle='--',linewidth=2,color='r')
axhline(0,linestyle='--',linewidth=2,color='r')
hlines(u_ugiecie_wlasne,t_pocz,t_kon,linestyle='-',linewidth=4,color='g',label='ugięcie własne w [m]')
legend(loc='upper right')
quit = gcf().canvas.mpl_connect('key_press_event', quit_figure)
show()
# ****************************************************************************************************************************************************************************************
# Graphics
# ****************************************************************************************************************************************************************************************
I did it, but I am afraid I wanted something else - I wanted the logarithmic damping decrement, and I have to figure out how to extract the level of damping from the envelope. Nevertheless - thank you.
Answer: So, you have numerically solved a second order differential equation and plotted its result. Now it seems to be a damped sinusoidal response of the type
$$y(t) = K e^{-\alpha t} \cos(\omega_0 t + \phi) $$
for some constants $\alpha$, $\omega_0$ and $K$. And you want to compute the damping parameter $\alpha$.
A crude approximation to $\alpha$ can be obtained by the following. Consider two points on the curve, one at an arbitrary time $t_0$ and the other one period later at $t_0 + T_0$. You don't know the period? It can be approximately deduced by looking at two consecutive peaks of the oscillations. Note that those peaks are slightly misplaced due to being weighted by the exponential; nevertheless it will be fine for a crude approximation. Then the values at those two points will be $A_1$ and $A_2$; it can be seen that
$$ \frac{ |A_1| }{|A_2| } = \frac{ K e^{-\alpha t_0} }{K e^{-\alpha (t_0 + T_0) }} = e^{\alpha T_0} $$
Note that you have already measured the period $T_0$ in the first step; hence you can find the damping parameter $\alpha$ to be
$$ \alpha = \frac{1}{T_0} \ln ( |A_1| / |A_2| ) $$
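As a concrete illustration of this estimate, here is my own sketch on a synthetic damped cosine; all names and values are made up:

```python
import numpy as np

# Synthetic damped oscillation: y(t) = exp(-alpha*t) * cos(w0*t)
alpha_true, w0 = 0.5, 2 * np.pi * 2.0
t = np.linspace(0, 5, 50000)
y = np.exp(-alpha_true * t) * np.cos(w0 * t)

# Locate the peaks: interior points higher than both neighbours
peaks = np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1

# Period from two consecutive peaks; amplitudes one period apart
T0 = t[peaks[1]] - t[peaks[0]]
A1, A2 = abs(y[peaks[0]]), abs(y[peaks[1]])

# alpha = (1/T0) * ln(|A1| / |A2|)
alpha_est = np.log(A1 / A2) / T0
print(T0, alpha_est)   # close to the true values, 0.5 s and 0.5
```

Here a peak is simply a grid point higher than both neighbours; for noisy measured data a proper peak finder would be needed.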
If you want to estimate it more accurately, you might consider using different estimators for it. A Kalman filter can also be designed to estimate it, but that's a lot more complex. | {
"domain": "dsp.stackexchange",
"id": 8097,
"tags": "signal-analysis"
} |
Boyle's law- what's the big deal if temperature is not constant? | Question:
Boyle's law: At constant temperature of the gas, the volume of a given mass of a gas is inversely proportional to its pressure.
So, Boyle's law is talking about isothermal conditions, right? But what if the temperature is not constant?
My thinking:
Suppose there is a given gas in a frictionless cylinder fitted with a piston. Its initial temperature is $t_0$ and its pressure is $p_0$. It is at equilibrium initially, and hence the piston also exerts pressure $p_0$ so as to maintain mechanical equilibrium. Now heat is applied to it by a burner. As the gas gains thermal energy, its temperature increases from $t_0$ to $t_k$. Since the molecules now have more KE, the pressure exerted by the gas on the piston is higher. As a result the gas expands and the volume increases. But then the pressure of the gas is lower, as the KE is used to displace the piston. And so finally we can write $$P\propto \frac{1}{V}$$ which is what Boyle's law says, but without the temperature being constant.
So, what is the importance of constant temperature? The gas still follows the law even if the temperature varies. Or am I mistaken somewhere? Please help. I am confused.
Answer: I agree with the answers of John Rennie and t.c, but would like to add a few examples.
If the piston is displaced then work is done by the system (the system being the gas inside the cylinder), which is equal to $\int_{x_0}^{x_1} p_0 dx$ if the piston applies a constant pressure, $p_0$, and it starts at position $x_0$ and moves to position $x_1$. In this case the heat energy applied to the gas by the burner partly gets converted to work done in pushing the piston and partly heats up the gas - the final result would be gas at the same pressure $p_0$, but higher volume $V$ and higher temperature $T$, following the equation shown by John Rennie
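A small numerical sketch of this constant-pressure case (an ideal gas is assumed; all numbers are illustrative):

```python
# Constant-pressure (isobaric) heating: the piston keeps P = P0 while V and T rise
n, R = 1.0, 8.314                 # mol, J/(mol K), illustrative values
P0 = 1.0e5                        # Pa, pressure applied by the piston

T0 = 300.0
V0 = n * R * T0 / P0              # initial volume from PV = nRT

T1 = 360.0                        # gas heated by the burner
V1 = n * R * T1 / P0              # same pressure, larger volume

A, x0 = 0.01, 0.0                 # piston area (m^2) and start position
x1 = x0 + (V1 - V0) / A           # how far the piston moves
W = P0 * A * (x1 - x0)            # work done on the piston, the integral of P0 dx
print(V0, V1, W)                  # W also equals n*R*(T1 - T0) for this process
```

Part of the supplied heat goes into this work $W$ and the rest raises the internal energy of the gas.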
In your question you talk about heating up the gas, the gas losing energy by doing work on the piston and its pressure dropping - for that case the piston must reduce its pressure on the gas, otherwise the piston would compress the gas.
The best example I can think of to relate directly to your question is if the cylinder is in a heat bath that keeps it at constant temperature and the piston is free to move. Then the volume, $V$, can increase and the pressure, $P$, will drop, and heat will go into the gas, which will be converted into work done in pushing back the piston. The temperature of the gas will be constant. | {
"domain": "physics.stackexchange",
"id": 17615,
"tags": "thermodynamics"
} |
Why the resultant spring constant different in the following two cases? | Question:
In these two cases in the first case my book The Physics Of Waves And Oscillations by NK Bajaj says:
That the restoring force exerted on the mass by each spring is the same, so he added the elongations and got $k = \frac{k_1 k_2}{k_1 + k_2}$ (for case a). And in the second case the total restoring force is $-k_2 x - k_1 x = F$, so $-k'x = -(k_2 + k_1)x$ and $k' = k_1 + k_2$ (case b).
Now, in the first case (case a), when the mass is released from its stretched position, the spring attached to the mass wants to restore it to its equilibrium, and so does the second spring. So shouldn't the attached spring pull the mass while simultaneously being pulled by the other spring? In that case the mass would get double the acceleration, so the restoring force should be $k_1 x_1 + k_2 x_2$ instead of $F$. So the question is: if this is true in both cases, we should get the same restoring force. Where am I wrong?
Answer: As you say, case (b) is straightforward, but case (a) needs a bit of thought.
Suppose we displace the mass by a distance $x$. This means we stretch both the springs. Let's call the stretch of the first spring $x_1$ and the stretch of the second spring $x_2$. Together these must add up to the total stretch, so we have:
$$ x_1 + x_2 = x \tag{1} $$
The tension in the first spring is $k_1 x_1$ and the tension in the second spring is $k_2 x_2$, and these must be equal for the springs to be in equilibrium with each other. So we get a second equation:
$$ k_1 x_1 = k_2 x_2 \tag{2} $$
So we have a pair of simultaneous equations for $x_1$ and $x_2$. If we use equation (1) to write:
$$ x_1 = x - x_2 $$
And substitute for $x_1$ in equation (2) we get:
$$ k_1 ( x - x_2) = k_2 x_2 $$
and this rearranges to give us $x_2$:
$$ x_2 = \frac{k_1}{k_1 + k_2} x $$
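As an aside, this solution of the pair of simultaneous equations can be sanity-checked numerically (the spring constants and displacement below are made-up values):

```python
# Series springs: check x1 + x2 = x and k1*x1 = k2*x2 for the derived solution
k1, k2 = 3.0, 6.0            # N/m, illustrative values
x = 0.10                     # total displacement of the mass, m

x2 = k1 / (k1 + k2) * x      # the solution just derived
x1 = x - x2                  # equation (1)

print(x1 + x2, x)            # equation (1) is satisfied
print(k1 * x1, k2 * x2)      # equation (2): equal tensions in the two springs
print((k2 * x2) / x)         # force/displacement = k1*k2/(k1 + k2) = 2.0
```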
So the tension in spring 2, $k_2 x_2$ is:
$$ F_2 = \frac{k_1 k_2}{k_1 + k_2} x $$
This is equal to the tension in spring 1, and both are equal to the force on the mass. So we find the combined spring constant is:
$$ k = \frac{k_1 k_2}{k_1 + k_2} $$ | {
"domain": "physics.stackexchange",
"id": 52010,
"tags": "forces, waves, oscillators, elasticity"
} |
Integrating high momentum modes using Wilson's approach to renormalization | Question: In Section 12.1 of Peskin & Schroeder, they introduce the renormalization group for $\phi^4$ theory.
Let $b < 1$ and let $\Lambda$ be some UV cutoff. Define
$$\hat{\phi}(k) = \begin{cases}
\phi(k) , \quad b\Lambda \leq |k| < \Lambda\\
0, \quad \text{otherwise}
\end{cases}$$
and redefine $\phi(k)$ as
$$\phi(k) = \begin{cases}
\phi(k) , \quad |k| < b\Lambda\\
0, \quad \text{otherwise}
\end{cases}$$
so that we replace the original $\phi$ by $\phi + \hat{\phi}$. This leads to the functional integral:
$$Z = \int \mathcal{D}\phi e^{-\int \mathcal{L}(\phi)} \int \mathcal{D} \hat{\phi} \exp\Big(-\int d^dx\big[\frac{1}{2}(\partial_\mu \hat{\phi})^2 + \frac{1}{2} m^2\hat{\phi}^2 + \lambda(\frac{1}{6} \phi^3 \hat{\phi} + \frac{1}{4}\phi^2 \hat{\phi}^2 + \frac{1}{6}\phi \hat{\phi}^3 + \frac{1}{4!}\hat{\phi}^4)\big]\Big). \tag{12.5}$$
They introduce a term $\mathcal{L}_\text{eff}$ and note that it involves only the Fourier components $\phi(k)$ with $|k| < b\Lambda$. To carry out the integrals over the high momentum modes $\hat{\phi}$ they say on p. 396-397
...we will see below that the new terms in $\mathcal{L}_\text{eff}$ can be written in diagrammatic form. In this analysis, we treat the quartic terms in (12.5), all proportional to $\lambda$, as perturbations. Since we are mainly interested in the situation $m^2 \ll \Lambda^2$, we will also treat the mass term $\frac{1}{2}m^2\hat{\phi}^2$ as a perturbation. Then the leading-order term in the portion of the Lagrangian involving $\hat{\phi}$ is $$\int \mathcal{L}_0 = \frac{1}{2} \int_{b\Lambda \leq |k| < \Lambda} \frac{d^d k}{(2\pi)^d} \hat{\phi}^*(k) k^2 \hat{\phi}(k). \tag{12.7}$$
I am having a hard time following their work.
Where did (12.7) come from?
By quartic terms do they mean all terms of the form $\phi^n \hat{\phi}^m$ such that $n + m = 4$?
Answer: Assuming that you are comfortable with eq. (12.5) of Peskin-Schroeder,
$$\begin{align}Z &= \ldots \\ &= \int \! \mathcal{D} \phi \, e^{-\int \mathcal{L}(\phi)} \! \! \int \! \mathcal{D}\hat{\phi} \\ &{} \exp \left(\!- \!\int \! \! d^d x \left[\frac{1}{2}\left(\partial_\mu \hat{\phi}\right)^2 \!+\frac{m^2}{2}\hat{\phi}^2 +\lambda \left( \frac{1}{6}\phi^3 \hat{\phi}+\frac{1}{4}\phi^2 \hat{\phi}^2 + \frac{1}{6} \phi \hat{\phi}^3 +\frac{1}{4!} \hat{\phi}^4 \right) \right] \right), \end{align} $$ they define $$\mathcal{L}_0= \frac{1}{2}\left(\partial_\mu \hat{\phi} \right)^2.$$ Using the Fourier decomposition $$\hat{\phi}(x) = \int \! \frac{d^d k}{(2\pi)^d} \, e^{i k x}\, \hat{\phi}(k)= \! \! \!\int \limits_{b \Lambda \le |k| \lt \Lambda} \! \! \frac{d^d k}{(2 \pi)^d} \, e^{ikx}\, \hat{\phi}(k), $$ where $\hat{\phi}^\ast \!(x)=\hat{\phi}(x)$ implies $\hat{\phi}^\ast\!(k) = \hat{\phi}(-k)$, they compute $$\begin{align}\int \! d^d x \, \mathcal{L}_0 &= \frac{1}{2} \int \! d^d x \, \left(\partial_\mu \hat{\phi}(x)\right)^2 \\ &= \frac{1}{2} \int \! d^d x \int\! \frac{d^d \ell}{(2 \pi)^d} \, i \ell_\mu \, e^{i \ell x} \, \hat{\phi}(\ell) \int \! \frac{d^d k}{(2 \pi)^d}\, i k_\mu \, e^{ikx} \, \hat{\phi}(k) \\ &= - \frac{1}{2} \int \! \frac{d^d \ell}{(2 \pi)^d}\int \frac{d^d k}{(2 \pi)^d} \, \ell \cdot k \, \, \hat{\phi}(\ell) \, \hat{\phi}(k) \, (2\pi)^d \delta^{(d)}(\ell +k) \\ &= \frac{1}{2} \int \frac{d^dk}{(2 \pi)^d}\, k^2 \, \hat{\phi}(-k) \, \hat{\phi}(k) \\&=\frac{1}{2} \! \! \int\limits_{b \Lambda \le |k| \lt \Lambda} \! \! \frac{d^d k}{(2 \pi)^d} \, \hat{\phi}^\ast \!(k) \, k^2 \, \hat{\phi}(k), \end{align} $$ being just the formula given in eq. (12.7) of the book. | {
"domain": "physics.stackexchange",
"id": 94401,
"tags": "quantum-field-theory, renormalization, path-integral, effective-field-theory"
} |
Is air a pure substance? | Question: I was referring to some Thermodynamics text books and found that their definitions of a "pure substance" seem very subjective.
"Air, for example, is a mixture of several gases, but it is often considered
to be a pure substance because it has a uniform chemical composition" (Fundamentals of thermal-fluid sciences, fifth edition - YUNUS A.
ÇENGEL, JOHN M.
CIMBALA, ROBERT H.
TURNER)
"A system consisting of air can be regarded as a pure substance as
long as it is a mixture of gases; but if a liquid phase should form on cooling, the liquid would
have a different composition from the gas phase, and the system would no longer be considered a pure substance" (Fundamentals of
Engineering Thermodynamics - Michael J. Moran, Howard N. Shapiro)
Answer: I have seen these statements. They can be confusing. They can also be misleading. In trying to avoid giving the fuller details, these statements sometimes hinder the ability to make valid projections later.
The term substance generally means a particular kind of matter with uniform properties. In chemistry, a substance is matter that has a specific composition and specific properties.
Here is the first lesson. Every pure compound is a substance. But not every substance is a (single) compound.
Air is NOT a pure compound. So, it cannot be classed as a substance by default. Air (as a single gas phase) is ONLY a substance by the fact that it has a specific uniform composition and specific uniform properties throughout. What are the specifics? Air (as a gas phase) contains a well-defined set of chemical compounds (nitrogen, oxygen, and so on) in well-defined relative concentrations (79 mol%, 21 mol%, and so on) mixed uniformly as a single gas-phase solution (not as multi-phase system).
What then is air as a "pure" substance as opposed to simply calling air a substance? The inference by using the word "pure" is that, as long as the composition and properties of (gas phase) air are uniform throughout, we may as well just believe that air is composed of "air molecules". We do not need to know that air is truly composed of nitrogen molecules and oxygen molecules (and water molecules and argon atoms and ...). We never get that far in our treatment of air to discover where it makes a difference.
Now the second lesson. Even when the gas of a pure compound forms a liquid, we still regard the gas as a substance. In fact, we also regard the pure liquid as a substance. Why? Because the default is that pure compounds are substances. The term "substance" for pure compounds is not tied to the phase state of the compound.
But, even though each phase is itself a substance, we never say that the entire two-phase system is a substance. Why not? Because anything that has more than one phase by definition does not have uniform properties throughout the entire system. In particular, at a fundamental (molecular) level, either the chemistry or the structure (or both) within each of the various phases must be different from any of the other phases in order to have more than one phase.
By example, we do not call a system that contains a two phase mixture of PURE water vapor + PURE liquid water a substance. With this same reasoning, we cannot call a two phase system of gas phase air + liquid phase air a substance either.
To be clear, in a multi-phase system, each phase is by itself a substance, so long as that phase retains its own uniform composition and properties throughout.
What about composition? Why is that notion even raised?
We need to explore what happens to air when it forms a liquid. Because air is not a pure compound, it will not form a pure compound as a liquid. In fact, the liquid state of air will naturally have a different composition of the components that it contains. The normal boiling point of nitrogen is -320 $^o$F and of oxygen is -297 $^o$F. As we lower the temperature on gaseous air, the oxygen will prefer to form liquid before the nitrogen. So, the liquid phase that forms will be richer in oxygen than in nitrogen. At any temperature between -297 $^o$F and -320 $^o$F, the equilibrium state of the two phase system will have a liquid that is richer in oxygen than the gas phase state.
Does this composition difference make air no longer a substance?
This finding does NOT make the GAS PHASE AIR no longer a substance by itself. This finding also does NOT make the LIQUID PHASE AIR no longer a substance by itself. Both phases by themselves completely follow the guidelines for uniform composition and properties throughout.
So, part of the confusion is due to a redundancy. Simply put, we do not call a SYSTEM of gas + liquid a substance because no SYSTEM that contains more than one phase ever has uniform properties throughout. Regardless of whether the system contains pure water, pure benzene, or ... "pure" air, a multi-phase system is never a substance.
Return now to the ending phrase "the liquid would have a different composition from the gas phase, and the system would (therefore) no longer be considered a pure substance". This phrase is conceptually misleading if not wrong. It projects to you that the reason to believe that a system of gas phase air + liquid phase air is no longer a substance is because the compositions in the two phases are different from each other. The real reason is simply because you have more than one phase in the system. There is absolutely no reason to have to talk about composition.
In summary, there is a lot that is hidden in the simplified statements that you quote. They make life so much easier to solve homework problems (oh ... this system has gas phase air ... it is a pure substance ... there is an ideal equation for this case). They make life so much harder when it comes time to solve problems to the real world (oh ... my gas phase air pump is experiencing cavitation ... why would a "pure substance" cause such a thing). | {
"domain": "engineering.stackexchange",
"id": 3701,
"tags": "mechanical-engineering, thermodynamics"
} |
Sampling $x(t)=\cos(4\pi t)+\cos(2\pi t)$ | Question: Imagine that we sample the signal $x(t)=\cos(4\pi t)+\cos(2\pi t)$ with a certain sample frequency $f_s$ and we obtain $x[n]$. Now, by ideal interpolation, we get $y(t)$ from $x[n]$.
How can we know the sample frequency $f_s$ that was used by looking at $y(t)$?
As a particular case, for which $f_s$ do we get $y(t)=2\cos(2\pi t)?$
Answer:
How can we know the sample frequency fs that was used by looking at y(t)?
You can't.
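One way to see this concretely: the sketch below approximates ideal interpolation with a long truncated sinc sum and reconstructs the signal from samples taken at two different valid rates (the rates and window length are arbitrary choices). Both reconstructions return the same $y(t)$.

```python
import numpy as np

def sinc_reconstruct(t, n, xn, fs):
    # Ideal interpolation (truncated): y(t) = sum_n x[n] * sinc(fs*t - n)
    return np.sum(xn[None, :] * np.sinc(fs * t[:, None] - n[None, :]), axis=1)

x = lambda t: np.cos(4 * np.pi * t) + np.cos(2 * np.pi * t)

t = np.linspace(0.2, 0.8, 7)     # evaluation instants, well inside the window
n = np.arange(-5000, 5000)       # long (truncated) sample window
errors = {}
for fs in (5.0, 8.0):            # two different rates, both above 4 Hz
    y = sinc_reconstruct(t, n, x(n / fs), fs)
    errors[fs] = np.max(np.abs(y - x(t)))
print(errors)                    # both errors are tiny: the same y(t) either way
```

The small residual errors come only from truncating the sinc sum and shrink as the window grows, so the reconstructed $y(t)$ carries no trace of which $f_s$ was used.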
The sampling theorem states that any sampling rate higher than twice the highest signal frequency allows for perfect reconstruction. In your case, any sampling frequency higher than 4 Hz would result in the same $y(t)$. | {
"domain": "dsp.stackexchange",
"id": 8188,
"tags": "sampling, frequency, interpolation, cosine"
} |
"Jolly Jumper" challenge | Question: I'm a C#/Java full-time developer who is trying to pick up C++ purely for educational reasons. I've tried to solve a pretty simple ACM ICPC problem and would love to hear all kinds of criticisms on how to write acceptable modern C++ code. (Seriously, tell me that my code is a joke, as long as you can tell me how to make it better.)
Specifically, I'd like to hear comments on the following aspects:
Does the code follow typical C++ coding convention?
Is it cross platform? Especially regarding how I am handling strings.
Is it C++11 compliant? Am I making a reasonable use of it?
Is the code readable?
You're also welcome to comment on my algorithm on solving the problem, although that's less of a concern for me at the moment because the primary purpose for me is to learn how to properly use C++.
Full source (VS2012)
The challenge is to determine whether each line of input contains a "jolly jumper" sequence:
A sequence of n > 0 integers is called a jolly jumper if the absolute values of the difference between successive elements take on all the values 1 through n-1. For instance,
1 4 2 3
is a jolly jumper, because the absolute differences are 3, 2, and 1, respectively.
Below is a test input you can try running against the program. The program works as intended as far as I can tell.
4 1 4 2 3
5 1 4 2 -1 6
10 1 2 3 4 5 6 7 8 9 10
10 1 2 4 7 11 16 22 29 37 46
10 -1 -2 -4 -7 -11 -16 -22 -29 -37 -46
10 -1 -1 -4 -7 -11 -16 -22 -29 -37 -46
1 1
2 1 2
2 2 1
4 0 4 2 3
4 1 3 2 4
1 2
6 1 4 3 7 5 10
main.cpp
#if defined(WIN32) || defined(_WIN32) || defined(__WIN32) && !defined(__CYGWIN__)
#define WIN32_DEF 1
#else
#define WIN32_DEF 0
#endif
#if WIN32_DEF
#include "win32helper.h"
#endif
#include "jolly_jumper.h"
#include <string>
#include <iostream>
#include <fstream>
using std::cin;
using std::cout;
using std::endl;
using std::string;
using std::ifstream;
int main(int argc, char *argv[], char *envp[]) {
#if _DEBUG && WIN32_DEF
//Visual studio does some wacky things with setting current directory when ran under debug mode..
//This could be changed in the project settings, but might as well as add code here to
//manually control current working directory...
cout << "[Running under debug mode]" << endl;
setCurrentDirectoryForVSDebug();
#endif
string fileName = "";
if (argc > 1) {
fileName = string(argv[1]);
}
else {
cout << "Input file name: ";
getline(cin, fileName);
}
while (fileName.empty()) {
cout << "Filename cannot be empty!" << endl;
cout << "Input file name: ";
getline(cin, fileName);
}
cout << "Reading from: " << fileName << endl;
ifstream fileStream(fileName);
if (!fileStream || !fileStream.is_open()) {
cout << endl << fileName << " could not be opened. Make sure the file exists and the read permission has been set correctly.";
return 0;
}
JollyJumper jollyJumper;
string currLine = "";
while (getline(fileStream,currLine)) {
cout << currLine << endl;
if (jollyJumper.IsJolly(currLine)) {
cout << "Jolly" << endl;
}
else {
cout << "Not Jolly" << endl;
}
}
fileStream.close();
cin.get();
return 0;
}
string_helper.h
#ifndef __STRING_HELPER_H
#define __STRING_HELPER_H
#include <vector>
#include <string>
#include <istream>
#include <sstream>
#include <algorithm>
#include <cctype>
#include <iostream>
//Static Class Declaration
class StringHelper {
private:
StringHelper() {};
public:
static std::vector<std::string> &Split(const std::string &s, char delim, std::vector<std::string> &elems);
static std::vector<std::string> Split(const std::string &s, char delim);
static bool IsNumber(const std::string& s);
};
#endif
jolly_jumper.h
#ifndef __JOLLY_JUMPER_H_
#define __JOLLY_JUMPER_H_
#include <iostream>
#include <string>
#include <vector>
#include <set>
class JollyJumper
{
private:
public:
JollyJumper();
bool IsJolly(std::string input);
};
#endif
win32helper.h
#ifndef __WIN32_HELPER_H_
#define __WIN32_HELPER_H_
#include <windows.h>
#include <tchar.h>
#include <Shlwapi.h>
#include <iostream>
void setCurrentDirectoryForVSDebug();
#endif
jolly_jumper.cpp
#include "jolly_jumper.h"
#include "string_helper.h"
using std::string;
using std::vector;
using std::cout;
using std::endl;
using std::set;
#define MAX_INPUT_VALUE 2000
JollyJumper::JollyJumper()
{
}
bool JollyJumper::IsJolly(string input)
{
if (input.empty())
return false;
vector<string> elements = StringHelper::Split(input, ' ');
if (elements.size() == 1)
return true;
//First value determines the count of input
if (!StringHelper::IsNumber(elements.front()))
return false;
int inputLength = stoi(elements.front());
if (inputLength > MAX_INPUT_VALUE)
return false;
if (elements.size() - 1 != inputLength)
return false;
//Skip first element since it only tells us the size of the elements
auto it = elements.begin();
it++;
//Validate string inputs and make sure they are integers.
//Once validated, push the difference of curr iterator and next iterator to jolly_diff vector
set<int> jolly_diff;
for (; it != elements.end(); it++) {
if (!StringHelper::IsNumber(*it)) {
return false;
}
int value = abs(stoi(*it));
auto nx = std::next(it);
if (nx == elements.end()) {
break;
}
if (!StringHelper::IsNumber(*nx)) {
return false;
}
int nextValue = abs(stoi(*nx));
int difference = abs(value - nextValue);
//Difference cannot logically be greater than the input length
if (difference >= inputLength)
return false;
//Duplicate difference means it is not a jolly jumper
if(jolly_diff.find(difference) != jolly_diff.end()) {
return false;
}
jolly_diff.insert(difference);
}
return true;
}
string_helper.cpp
#include "string_helper.h"
using std::vector;
using std::string;
using std::getline;
using std::stringstream;
vector<string>& StringHelper::Split(const string &s, char delim, vector<string> &elems) {
stringstream ss(s);
string item;
while (getline(ss, item, delim)) {
elems.push_back(item);
}
return elems;
}
vector<string> StringHelper::Split(const string &s, char delim) {
vector<string> elems;
Split(s, delim, elems);
return elems;
}
bool StringHelper::IsNumber(const std::string& s)
{
std::string::const_iterator it = s.begin();
if(s.size() > 1 && (s[0] == '-' || s[0] == '+')) it++;
while (it != s.end() && std::isdigit(*it)) ++it;
return !s.empty() && it == s.end();
}
win32helper.cpp
#include "win32helper.h"
using std::wcout;
using std::endl;
void setCurrentDirectoryForVSDebug() {
TCHAR currentFilePath[MAX_PATH];
//Get the full path including the file name from the executable
GetModuleFileName(NULL, currentFilePath, MAX_PATH);
//Remove filename and get directory only
PathRemoveFileSpec(currentFilePath);
wcout << "Setting current directory to: " << currentFilePath << endl << endl;
SetCurrentDirectory(currentFilePath);
}
Answer: I'll just focus on your jolly_jumper.h:
#ifndef __JOLLY_JUMPER_H_
#define __JOLLY_JUMPER_H_
#include <iostream>
#include <string>
#include <vector>
#include <set>
class JollyJumper
{
private:
public:
JollyJumper();
bool IsJolly(std::string input);
};
#endif
Issues I see include:
Pay attention to const correctness. The parameter should be const, and the method should be const.
The method is going to be doing too much: it needs to parse the input string as a vector, then run its algorithm. The IsJolly() method should take a vector (as a const std::vector<int>&).
Your class is "degenerate". There's no state. Your constructor does nothing. You might as well write it as a standalone function, or perhaps a class that contains a static function.
The private keyword is just noise.
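Putting the second point into practice, here is one possible sketch of the check as a standalone function on already-parsed numbers. This is my own illustration of the suggestions, not code from the review; parsing the input line into a `std::vector<int>` is assumed to happen elsewhere.

```cpp
#include <cstdlib>
#include <set>
#include <vector>

// Standalone, const-correct jolly-jumper check on already-parsed integers.
bool IsJolly(const std::vector<int> &values) {
    const std::size_t n = values.size();
    if (n <= 1) return true;  // a single number is trivially jolly

    std::set<int> seen;
    for (std::size_t i = 0; i + 1 < n; ++i) {
        const int diff = std::abs(values[i] - values[i + 1]);
        // Each difference must lie in 1..n-1 and must not repeat; since there
        // are exactly n-1 differences, they are then forced to cover 1..n-1.
        if (diff < 1 || diff >= static_cast<int>(n) || !seen.insert(diff).second)
            return false;
    }
    return true;
}
```

With this, `IsJolly({1, 4, 2, 3})` is true while `IsJolly({1, 4, 2, -1, 6})` is false, matching the sample input above.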
In summary, I suggest:
struct JollyJumper
{
static bool IsJolly(const std::vector<int> &input);
}; | {
"domain": "codereview.stackexchange",
"id": 9531,
"tags": "c++, windows, stl"
} |
F# inventory system | Question: To learn F#, I've implemented this very simple inventory system. While I'm proud that it's my first program and that it works, there are still a few areas that I'd like tips on, namely these:
I don't like how I'm using a for ... in ... loop in Inventory.ChangeSelectedItem to find the length of Inventory.Items.
Is this the correct usage of F#'s type/class system? Should this be made in a more "functional" way?
Am I using getters/setters correctly? Do I need to declare the mutable variables internalName in my types, or is this automatically done?
Is my code properly styled?
open System
/// <summary>
/// Represents an item. This type is only
/// used in the Inventory type.
/// </summary>
/// <param name="name">The item's name.</param>
/// <param name="count">The amount of this specific item.</param>
type InventoryItem(name:string, count:int) =
let mutable internalCount = count
member this.Name = name
member this.Count
with get() = internalCount
and set(value) = internalCount <- value
/// <summary>
/// Increment or decrement how many items there are.
/// It will not allow for a negative count.
/// </summary>
/// <param name="amount">The amount to change the item count by.</param>
member this.ChangeCount(amount) =
if this.Count >= 1 then
this.Count <- this.Count + amount
override this.ToString() =
String.Format("Name: \"{0}\", Count: \"{1}\"", this.Name, this.Count)
/// <summary>
/// This type represents an inventory, a collection
/// of values of the InventoryItem type.
/// </summary>
/// <param name="items">A list of InventoryItems.</param>
/// <param name="selectedItem">The index of the currently selected item.</param>
type Inventory(items:InventoryItem list, selectedItem:int) =
let mutable internalItems = items
let mutable internalSelectedItem = selectedItem
member this.Items
with get() = internalItems
and set(value:InventoryItem list) = internalItems <- value
member this.SelectedItem
with get() = internalSelectedItem
and set(value:int) = internalSelectedItem <- value
/// <summary>
/// Add an item to the inventory.
/// </summary>
/// <param name="item">The item to add.</param>
member this.AddItem(item:InventoryItem) =
this.Items <- item :: this.Items
/// <summary>
/// Change the currently selected item.
/// </summary>
/// <param name="amount">The amount to increase by.</param>
member this.ChangeSelectedItem(amount:int) =
let mutable itemCount = -1;
for _ in this.Items do
itemCount <- itemCount + 1
if this.SelectedItem < itemCount && itemCount <> -1 then
this.SelectedItem <- this.SelectedItem + amount
override this.ToString() =
String.Format(
"Items: \"{0}\", Selected Item: \"{1}\"",
this.Items,
this.Items.[this.SelectedItem]
)
Here's some example usage:
let myInventory = new Inventory([], 0)
myInventory.AddItem(new InventoryItem("Great Sword", 1))
myInventory.AddItem(new InventoryItem("Gold", 5))
myInventory.AddItem(new InventoryItem("Silver", 10))
myInventory.AddItem(new InventoryItem("Copper", 15))
Console.WriteLine(myInventory)
myInventory.ChangeSelectedItem(2)
Console.WriteLine(myInventory)
myInventory.ChangeSelectedItem(-1)
Console.WriteLine(myInventory)
Answer: Some things:
let mutable itemCount = -1;
for _ in this.Items do
itemCount <- itemCount + 1
can become
let itemCount = this.Items |> List.length
In terms of types, in F# I would probably make InventoryItem a record rather than a class.
The easiest way to know whether something needs to be mutable, if you are stuck, is to leave it off and see if the compiler complains.
member this.ChangeCount(amount) =
if this.Count >= 1 then
this.Count <- this.Count + amount
should probably be
member this.ChangeCount(amount) =
if this.Count+amount >= 1 then
this.Count <- this.Count + amount | {
"domain": "codereview.stackexchange",
"id": 14830,
"tags": "beginner, game, f#"
} |
Yukawa decay at one-loop | Question: I am trying to calculate the amplitude for a decay $\phi \to e^+e^-$ under a Yukawa interaction $\mathcal{L}_I = -g\phi \bar{\psi}\psi$ to one-loop order (with massless fermions for simplicity).
If I'm not wrong, there are 4 diagrams that contribute at one loop: three diagrams involving self-energy corrections (i.e. inserting a loop into the external lines) and an extra diagram with a vertex correction (a $\phi$ field exchanged between $e^+$ and $e^-$).
I have no problem calculating the integrals, but I'm not sure if the condition I use for renormalization is correct. Following the example of QED, to apply on-shell renormalization I used the following conditions;
The scalar propagator in the limit $p^2 \to M^2$ should be $\frac{i}{p^2-M^2}$
The fermion propagator in the limit $\not{\!p} \to 0$ should be $\frac{i}{\not{p}}$
The vertex function in the limit $p^2 \to M^2$ should be $-ig$. ($p$ is the momentum of the scalar particle.)
Now, because the self-energy diagrams are all in external legs, the first two corrections mean that those diagrams vanish.
But the third condition tells that the vertex correction must also vanish when the scalar particle is on-shell (as in my diagram). Therefore all the diagrams here vanish trivially due to renormalization conditions.
Is this analysis correct? Or did I make some mistake in the renormalization part?
EDIT: I think that the fact I'm working with massless fermions is irrelevant to the discussion. Also, I'm considering a general Yukawa interaction, not related to the Higgs, so even for massless fermions there is still a non-zero interaction.
Answer: Since you want to use on-shell renormalization I will think of your scalar as some physical field, i.e. its quanta represent proper mass-eigenstates. Furthermore, I will assume a minimal model in the sense that there are kinetic terms for the fermion and the scalar and that there is a simple mass term for the scalar.
I assume that you are using the real on-shell scheme, which is sufficient for one-loop calculations anyway. Since the fermions are massless there is no on-shell condition needed to fix the mass renormalization and we are hence left with the condition
$$
\lim_{p^2\to 0}\left\{\frac{\not{p}}{p^2} \widetilde{\text{Re}}
\left(\Gamma^{\bar f f}_{R,ii}(-p,p)\right)u_i(p)\right\}=u_i(p),$$
which assures that close to the pole of the renormalized propagator the propagator is given by its lowest-order expression. For your scalar particle we need two on-shell conditions. One to renormalize the field and one for the mass-renormalization. These are given by
$$
\lim_{p^2\to M^2}\left\{\frac{1}{p^2-M^2} \widetilde{\text{Re}}
\left(\Gamma^{\phi\phi}_{R}(-p,p)\right)\right\}=1,$$
$$
\lim_{p^2\to M^2} \widetilde{\text{Re}}
\left(\Gamma^{\phi\phi}_{R}(-p,p)\right)=0.$$
The last renormalization condition needed is the one for your coupling $g$. For example, in QED there would be the condition that the electric charge is given by the Thomson limit. (That the Thomson limit in fact matches the QED charge with the classical charge is assured by Thirring's theorem.)
In your case you therefore need to specify what a suitable renormalization condition for $g$ would be. In the OP this condition is given by
$$
\lim_{p^2\to 0}
\left(\Gamma^{\phi\bar f f}_{R}(M,-p,p)\right)=g,\qquad (1)$$
which in fact assures that the renormalized vertex corrections should vanish for an on-shell scalar. And since the OP did not further specify the theory they are working in, there is no relation between the renormalization of the coupling and the renormalization of the masses and fields.
So far so good. But there is one thing which really should be stressed whenever talking about renormalization.
Renormalization is the procedure to make the connection between a theory parameter, which we use in a perturbative manner, and a measurement.
The renormalization conditions you chose assure that the rate of the decay $\phi\to\bar f f$ exactly fixes the coupling $g$. Hence you are not able to calculate any corrections to that decay! You chose it as an input parameter.
"domain": "physics.stackexchange",
"id": 93773,
"tags": "homework-and-exercises, quantum-field-theory, renormalization, propagator"
} |
Computing every boolean function with a polynomial over $\mathbb{F}_3$? | Question: The following paper briefly mentions the power of $MOD_6$ gates (page 3), and relies on the unstated fact that every boolean function can be computed with an arithmetic circuit of depth 2 over $\mathbb{F}_3$. I'm not sure how this is done.
Answer: Every function $\mathbb{F}_p^n \longrightarrow \mathbb{F}_p$ (where $p$ is prime) can be written as a polynomial. For the proof, consider all $p^n$ monomials, and show that they are linearly independent. | {
"domain": "cstheory.stackexchange",
"id": 1517,
"tags": "cc.complexity-theory, circuit-complexity, boolean-functions, arithmetic-circuits"
} |
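The interpolation argument in the answer just above can be made concrete with a short sketch (Python; $p=3$ and the example function are illustrative). By Fermat's little theorem, $1-(x-a)^{p-1}$ equals $1$ when $x=a$ and $0$ otherwise, so a sum of such indicator products interpolates any function $\mathbb{F}_p^n \to \mathbb{F}_p$ as a polynomial:

```python
import itertools

P = 3  # any prime works

def indicator(x, a):
    # 1 if x == a (mod P), else 0 -- Fermat's little theorem
    return (1 - pow(x - a, P - 1, P)) % P

def as_polynomial(f, n):
    """Return the polynomial (as a callable) agreeing with f on all of F_P^n."""
    def poly(*xs):
        total = 0
        for point in itertools.product(range(P), repeat=n):
            term = f(*point)
            for x, a in zip(xs, point):
                term *= indicator(x, a)  # product of per-variable indicators
            total += term
        return total % P
    return poly

# Example: multiplication (boolean AND on {0,1} inputs), evaluated over F_3
g = as_polynomial(lambda x, y: (x * y) % P, 2)
```

Each variable appears with degree at most $p-1$, and the whole expression is a sum of monomials, which is where the depth-2 (sum-of-products) arithmetic circuit comes from.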
Is there any material that can survive a nuke? | Question: Is there a material known to man that I can tape to a Tsar-Bomba-yield nuclear warhead and find kilometers away after detonation?
This question is quite similar but a nuclear explosion is quite instantaneous. The sun, on the other hand, exposes a material to the same conditions continuously until it disintegrates.
Answer: One has to think about what happens in the explosion.
In a conventional explosion, a chemical reaction creates a whole lot of hot gas - that volume is initially contained inertially, and it expands as a shock wave travels outward. Any object on the boundary will experience a large thermal and pressure gradient; the pressure and flow of matter behind the shock wave propel things outwards. The "heat" is external to any nearby objects, and whether they survive is partly a question of size, melting point and strength.
A nuclear explosion is different. Instead of a chemical reaction creating a large amount of hot gas, the heat is radiated outward by photons, neutrons and other particles. And because the neutron flux is so high, there is a very large heat transfer to any objects in the close vicinity - both the air (think mushroom cloud) and any objects. This means that an object nearby will be heated "through and through", as the neutron flux diffuses through it.
For an object to survive, it would have to have a combination of extremely small neutron cross section, high heat capacity, and high melting point.
I don't believe materials exist with a sufficient combination of these properties. | {
"domain": "physics.stackexchange",
"id": 29983,
"tags": "nuclear-physics, material-science"
} |
How to make µg/ml concentrations of proteinase-K? | Question: How does one prepare concentrations in the mass/volume (weight/volume) form, for substances like nucleic acids or in this case, proteinase? A detailed example would be helpful.
I need to prepare squishing buffer for a DNA extraction/PCR exercise to help me learn the materials better for an in-class "internship".
The squishing buffer recipe I found from a laboratory class manual describes it as: 10 mM of Tris-Cl, pH 8.2, 1 mM of EDTA, 25 mM of NaCl, and 200 µg/ml of proteinase K (freshly diluted).
I understand molarity concentrations well enough. One only has to determine the total molar mass of a substance, in this case NaCl, and with that molar mass (g/mol) then use the given desired molarity (in this case 25 mM of NaCl, which equals 0.025 mol per liter) to get the number of grams of the substance needed. In this case, since we need 0.025 moles / 1 liter, we would need to multiply that by the molar mass to get: (g/mol) × (mol/l) = g/l of NaCl.
I understand percent solutions (to a degree) as well. If I needed 1% agarose gel, I would take 1 gram of agarose solid powder and bring that to 100 ml total to obtain a 1%, using either water or buffer?
I am having trouble understanding the concepts of concentration, in reference to mg/ml.
From what I understand, µg/ml is a weight (mass)/volume type of concentration. Therefore, if I wanted to prepare
200 µg/ml of proteinase (freshly diluted), how would I go about it? Would I add 5 ml of deionized water to get a working volume of 1000 µg of proteinase per 5 ml of water, which would be equivalent to 1 mg of proteinase per 5 ml of water? From that 1 mg of proteinase in 5 ml of water, should I then draw 0.2 ml (or 200 µl) of solution to then have the 200 µg/ml? Or am I totally wrong? I tried googling for more information, but I became more and more confused.
Answer: It depends on what form your proteinase K is in.
You may have a stock solution of some concentration, in which case you just add a specific volume according to $C_1V_1=C_2V_2$.
Example: Let's say you want to prepare 5 mL ($V_2$) of 200 µg/mL ($C_2$) proteinase K from a stock of 1000 µg/mL ($C_1$). You're looking for the volume of stock solution to add ($V_1$).
$$V_1=\frac{5\ mL\ \cdot \ 200\ \mu g/mL}{1000\ \mu g/mL}=1\ mL$$
So you take 1 mL of the stock and bring it up to 5 mL.
Or you may have a lyophilized powder. In this case, you have two options:
1) You can make a stock solution from the powder and, again, use $C_1V_1=C_2V_2$. This has the advantage of not having to weigh the powder every time you need to use it. This may not be a viable option, however, as I'm not sure of the stability of proteinase K in solution.
2) You can just weigh out the amount that will give you the desired concentration (which depends on the volume of buffer you're making).
Example: Again, you want to prepare 5 mL of 200 µg/mL proteinase K. The total mass you need is:
$$200\ \mu g / mL\ \cdot \ 5\ mL=1000\ \mu g$$
So weigh out a milligram and bring it up to 5 mL.
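Both routes reduce to one-line arithmetic; here is a tiny calculator sketch (Python; the function names and the example numbers are just illustrative):

```python
def stock_volume_needed(c_stock, c_target, v_target):
    # C1*V1 = C2*V2 solved for V1; any units work as long as they match
    return c_target * v_target / c_stock

def mass_needed(c_target, v_target):
    # mass of powder needed for a target mass/volume concentration
    return c_target * v_target

# Route 1: dilute a 1000 ug/mL stock to 5 mL of 200 ug/mL -> 1 mL of stock
v1_mL = stock_volume_needed(1000, 200, 5)

# Route 2: weigh powder directly -> 200 ug/mL * 5 mL = 1000 ug (1 mg)
mass_ug = mass_needed(200, 5)
```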
There's nothing special about mass concentration; you can convert it to molarity given the molecular weight of proteinase K. According to Wikipedia, proteinase K is 28.9 kDa or 28900 g/mol. If you are making this conversion, however, make sure you find the actual molecular weight of the protein you're using. 200 µg/mL is equal to:
$$\frac{0.2\ g/L}{28900\ g/mol}=6.92\ \mu M$$ | {
"domain": "biology.stackexchange",
"id": 3656,
"tags": "lab-techniques, dna-isolation"
} |
Meaning of symbol, 'curly N' in the equation of Linear Gaussian system dynamics | Question: In the article of Topological Based Representation (Page no. 12), the equation of the Linear Gaussian system dynamics is given as
In the above equation, what is the meaning of the 'curly N'?
Answer: The 'curly N' is the standard symbol $\mathcal{N}$ for a normal (Gaussian) distribution: they are modeling the probability as a normal distribution with the given mean and (co)variance. | {
"domain": "robotics.stackexchange",
"id": 1031,
"tags": "mobile-robot, control, robotic-arm, wheeled-robot, motion-planning"
} |
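Operationally, $\mathcal{N}(\mu, \sigma^2)$ just names a normal distribution, so simulating linear-Gaussian dynamics means drawing Gaussian noise around a deterministic mean. A minimal 1-D sketch (Python standard library; the model $x' = ax + bu + w$ and its parameters are illustrative, not from the paper):

```python
import random

# x' ~ N(a*x + b*u, sigma^2): deterministic mean plus zero-mean Gaussian noise
def step(x, u, a=1.0, b=0.5, sigma=0.1, rng=random):
    return a * x + b * u + rng.gauss(0.0, sigma)

rng = random.Random(0)  # seeded for reproducibility
samples = [step(1.0, 2.0, rng=rng) for _ in range(2000)]
mean = sum(samples) / len(samples)  # concentrates near a*1.0 + b*2.0 = 2.0
```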
Feature extraction for sound classification | Question: I'm trying to extract features from a sound file and classify the sound as belonging to a particular category (eg : dog bark, vehicle engine e.t.c). I'd like some clarity on the following things :
1) Is this doable at all? There are programs that can recognize speech, and differentiate between different types of dog bark. But is it possible to have a program that can receive a sound sample and just say what kind of a sound it is? (Assume there's a database containing a lot of sound samples to refer to). The input sound samples can be a bit noisy (microphone input).
2) I assume that the first step is audio feature extraction. This article suggests extracting MFCCs and feeding them to a machine learning algorithm. Is MFCC enough? Are there any other features that are generally used for sound classification?
Thank you for your time.
Answer:
By a long shot it is doable - to what extent? You will see. This task of environmental sound classification is not very well studied. Also the choice of machine learning paradigm is crucial - a statistical approach or maybe a binary classifier? You can start with GMMs, ANNs and SVMs - I opt for GMMs and ANNs.
Yes, most people are using MFCCs because they are well correlated with what people actually hear, and also no one has come up with anything better since. You might also want to add extra features such as MPEG-7 descriptors. Proper feature optimisation must be performed because sometimes you don't need so many features, especially when they are not separable. For more info please refer to my previous answers:
Feature extraction from spectrum
MFCC extraction
Detection of sounds | {
"domain": "dsp.stackexchange",
"id": 7950,
"tags": "audio, mfcc, machine-learning, classification, sound-recognition"
} |
Is neutron-neutron fusion viable? | Question: Do I get unusual baryon maybe like pentaquark or just a pair of conjoined twin that is very unstable?
Answer: Under isospin symmetry, the dineutron should be a "mirror nucleus" with the diproton and the spin-zero deuteron. Neither of those are bound (the deuteron has spin $\hbar$, and no stable excited states), and so there's no stable dineutron to fuse into.
Stipe Galic points out the possibility of the weak interaction process
$$
\rm n + n \to d + e^- + \bar\nu
$$
as the isospin analogue to the proton-proton reaction in the core of the Sun,
$$
\rm p + p \to d + e^+ + \nu
$$
The core of the Sun is dense hydrogen under enormous pressure with a power density of about $100\rm\,W/m^3$; I'll let you work out for yourself the (in)feasibility of observing neutron-neutron fusion under terrestrial conditions.
High-energy neutron-neutron collisions will excite the baryon-meson spectrum in the same way as high-energy proton-proton collisions, but it's hard to make high-energy free neutrons and there aren't pure neutron targets. | {
"domain": "physics.stackexchange",
"id": 65625,
"tags": "nuclear-physics, standard-model, fusion, neutrons"
} |
Is the integral action invariant under metric signature convention? | Question: There are $2$ convenient metric signatures: (-,+++) and (+,---). If one considers the integral action:
$$
S=\frac{1}{16 \pi G_n} \int d^4x \sqrt{-g} [R-2\Lambda-F_{\mu \nu}F^{\mu \nu}],
$$
is it then invariant under which metric signature you use?
I would say yes because S represents just the energy of the system.
Answer: Firstly, note that $S$ does not at all represent the energy of the system. Indeed,
$$
[S]=J\cdot s=[ET].
$$
Besides, it contains the Lagrangian but, for a proper definition of the energy, you will need the Hamiltonian and, in general relativity, this is a quite complex and interesting issue.
Then, note that the choice of the signature of the metric is an arbitrary matter and cannot have any impact on the action.
Finally, you can check the action proposed in Weinberg (1972), eq.(12.2.4) and (12.4.2) and Landau and Lifshitz (1951) eq.(93.1) where they postulated identical actions with opposite metrics. | {
"domain": "physics.stackexchange",
"id": 76450,
"tags": "general-relativity, lagrangian-formalism, metric-tensor, conventions, action"
} |
Do the concepts of Newtonian and non-Newtonian fluids still hold at a nanoscopic level? | Question: Recently I've spent a great deal of time thinking about enzyme diffusion in cytoplasm. One thing that keeps me awake at night is the constant referral to cytoplasm as a non-Newtonian fluid (which it surely is when looking at it as a whole), and the implications this has for the diffusive properties of singular enzymes.
If I think about a single enzyme in the cytoplasm with its immediate surroundings, I would expect to find the protein with its solvation shells, and only a few non-water components. In this context I'm unable to understand how this single enzyme could be thought to be in a non-Newtonian medium.
Furthermore, when thinking about the diffusion of the enzyme, quite a few authors refer to the Scallop theorem, and define it in the context of non-Newtonian fluids. If the enzyme is mostly surrounded by water molecules, and only occasionally runs in to other proteins or peptides, shouldn't the protein be considered to reside in a Newtonian medium?
Shortly put: How far down can you go until the classification between Newtonian and non-Newtonian fluids starts to break down?
Answer: Generally Fick's laws of diffusion have been found to apply pretty well to molecular translational and rotational diffusion in solution. This means that we assume that the solvent is a continuous structureless fluid, which in practice means that the diffusing molecule should be far bigger than the solvent molecules so that the solvent's molecular nature can be ignored. Surprisingly this works well with 'slip' and 'stick' boundary conditions down to situations where the solute is not that much bigger than the solvent, e.g. rhodamine and similar dye molecules in solvents such as acetonitrile or ethanol. In your example of a protein in water you might then expect that it would behave 'normally' as just described.
However, in the cytoplasm the close packing of many similar sized and large molecules means that these can easily block the diffusion of any other (except for far, far smaller ones such as water) and so the situation is totally different and will be more like a percolation process where a slight reduction in packing can lead to a large, almost stepwise increase in diffusion coefficient. The idea of a single diffusion coefficient is therefore not helpful. | {
"domain": "chemistry.stackexchange",
"id": 13634,
"tags": "diffusion"
} |
Validating files and returning errors messages/boolean values | Question: I am writing simple file validator for my java ee app and I am stack with my class api. I need specific error descriptions, but also I would like to have boolean values indicating whether file is valid or not. Please give me some hints how to do it 'smart'.
package pl.poznan.put.ims.business.attachments;
import java.util.List;
import java.util.ArrayList;
import com.google.inject.Inject;
import pl.poznan.put.ims.business.entities.Attachment;
import pl.poznan.put.ims.business.exceptions.AttachmentValidationException;
import pl.poznan.put.ims.settings.IFileSystemSettings;
public class AttachmentValidator implements IAttachmentValidator
{
IFileSystemSettings messageSettings;
private List<String> errors;
@Inject
public AttachmentValidator(IFileSystemSettings messageSettings)
{
this.messageSettings = messageSettings;
this.errors = new ArrayList<String>();
}
@Override
public boolean validate(List<Attachment> attachments) throws AttachmentValidationException
{
for (Attachment attachment : attachments) {
validate(attachment);
}
if(attachments.size() > messageSettings.getMaxFilesCount())
errors.add("Attachments max number exceeded.");
if(!errors.isEmpty())
throw new AttachmentValidationException(errors);
return true;
}
@Override
public boolean validate(Attachment attachment)
{
boolean isValid = validateLimitExceeded(attachment);
isValid &= validateNull(attachment);
return !isValid;
}
private boolean validateLimitExceeded(Attachment attachment)
{
boolean exceeded = attachment.getFileSize() > messageSettings.getMaxFileSize();
if(exceeded)
errors.add("Max file size limit exceeded. File: " + attachment.getFileDisplayName() + ". Size: "+ attachment.getFileSize()/1024/1024 + "MB.");
return exceeded;
}
private boolean validateNull(Attachment attachment)
{
if(attachment == null) {
errors.add("No attachment.");
return false;
}
return true;
}
@Override
public List<String> getErrors()
{
return errors;
}
@Override
public long getTotalMaxFilesSize() {
return messageSettings.getMaxFilesCount() * messageSettings.getMaxFileSize();
}
}
Answer: Some notes:
The boolean validate(...) method never returns false, so it should be a void method.
I'd pass immutable ValidationResult objects to the clients. The clients could check the results and could signal to their clients if there is an error or prints the messages etc.
I'd omit the &= operator, it's really hard to read.
getTotalMaxFilesSize() should be in the FileSystemSettings class. (Feature or data envy smell.)
After a few refactoring steps the following came out:
AttachmentValidatorService.java
import java.util.List;
public interface AttachmentValidatorService {
ValidationResult validate(List<Attachment> attachments);
ValidationResult validate(Attachment attachment);
long getTotalMaxFilesSize();
}
ValidationResult.java
import static com.google.common.base.Preconditions.checkNotNull;
import java.util.ArrayList;
import java.util.List;
import javax.annotation.concurrent.Immutable;
@Immutable
public class ValidationResult {
private final List<String> errors;
public ValidationResult(final List<String> errors) {
checkNotNull(errors, "errors cannot be null");
this.errors = new ArrayList<String>(errors);
}
public boolean isValid() {
if (errors.isEmpty()) {
return true;
}
return false;
}
public List<String> getErrors() {
return new ArrayList<String>(errors);
}
// If lots of the clients use this
// public void checkErrors() {
// if (!isValid()) {
// throw new ...
// }
// }
}
AttachmentValidatorServiceImpl:
import static com.google.common.base.Preconditions.checkNotNull;
import java.util.Collections;
import java.util.List;
public class AttachmentValidatorServiceImpl implements
AttachmentValidatorService {
private final FileSystemSettings messageSettings;
public AttachmentValidatorServiceImpl(
final FileSystemSettings messageSettings) {
this.messageSettings = checkNotNull(messageSettings,
"messageSettings cannot be null");
}
@Override
public ValidationResult validate(final List<Attachment> attachments) {
final AttachmentValidator attachmentValidator = new AttachmentValidator(
messageSettings, attachments);
return attachmentValidator.getValidationResult();
}
@Override
public ValidationResult validate(final Attachment attachment) {
return validate(Collections.singletonList(attachment));
}
@Override
public long getTotalMaxFilesSize() {
return messageSettings.getMaxFilesCount()
* messageSettings.getMaxFileSize();
}
}
AttachmentValidator.java
import static com.google.common.base.Preconditions.checkNotNull;
import java.util.ArrayList;
import java.util.List;
public class AttachmentValidator {
private final List<String> errors = new ArrayList<String>();
private final FileSystemSettings messageSettings;
public AttachmentValidator(
final FileSystemSettings messageSettings,
final List<Attachment> attachments) {
this.messageSettings = checkNotNull(messageSettings,
"messageSettings cannot be null");
checkNotNull(attachments, "attachments cannot be null");
for (final Attachment attachment : attachments) {
validate(attachment);
}
validateSize(attachments);
}
private void validate(final Attachment attachment) {
validateNull(attachment);
validateLimitExceeded(attachment);
}
private void validateSize(final List<Attachment> attachments) {
if (attachments.size() > messageSettings.getMaxFilesCount()) {
errors.add("Attachments max number exceeded.");
}
}
private void validateLimitExceeded(final Attachment attachment) {
if (attachment == null) {
return;
}
final int fileSize = attachment.getFileSize();
final boolean exceeded = fileSize > messageSettings
.getMaxFileSize();
if (exceeded) {
final String fileDisplayName = attachment
.getFileDisplayName();
final int fileSizeInMegaBytes;
if (fileSize == 0) {
fileSizeInMegaBytes = 0;
} else {
fileSizeInMegaBytes = fileSize / 1024 / 1024;
}
final String errorMsg = "Max file size limit exceeded. File: "
+ fileDisplayName
+ ". Size: "
+ fileSizeInMegaBytes + "MB.";
errors.add(errorMsg);
}
}
private void validateNull(final Attachment attachment) {
if (attachment == null) {
errors.add("No attachment.");
}
}
public ValidationResult getValidationResult() {
return new ValidationResult(errors);
}
}
Feel free to ask if you have any questions. | {
"domain": "codereview.stackexchange",
"id": 791,
"tags": "java, exception-handling"
} |
Help with a difficult expected runtime recurrence | Question: I developed an algorithm and have a recurrence for its runtime; I want to show the expected runtime is $O(\sqrt{n})$.
At each iteration $i$, I have a random variable $k_i$ that is equal to the number of heads after flipping $\frac{n}{2^i}$ coins minus $\frac{n}{2^{i+1}}$ (i.e. a binomial distribution centered at 0 with width $\frac{n}{2^i}$). The runtime of my algorithm is then given by the following recurrence:
$$B(0) = 0$$
$$B(i) = \max\{B(i-1)+k_i, 0\}$$
When $B(i)$ is large, the negative and positive $k_i$ can cancel each other out, so intuitively one should be able to come up with a pretty good bound on $\mathbb{E}[B(n)]$, but I have no idea where to start.
Answer: Yes, you'll have $B(i) = O(\sqrt{n})$ (with very high probability).
Note that $k_i$ is binomially distributed, so it is approximately Gaussian. It has mean $0$ and standard deviation
$$\sigma_i = \sqrt{n} \times 2^{-i/2 - 1}.$$
The probability that $k_i$ is more than, say, 10 standard deviations above the mean is incredibly small. We'll define the event $\textsf{BAD}_i$ to be true if $k_i \ge 10 \sigma_i$, and we'll define the event $\textsf{BAD}$ to hold if there exists $i$ such that $\textsf{BAD}_i$ holds, i.e.,
$$\textsf{BAD} = \textsf{BAD}_1 \lor \dots \lor \textsf{BAD}_m.$$
By a union bound, we have
$$\Pr[\textsf{BAD}] \le \sum_i \Pr[\textsf{BAD}_i],$$
which is still very small. Moreover, if $\textsf{BAD}$ does not happen, then we have
$$B(m) \le \sum_i 10 \sigma_i = 10 \sqrt{n}/2 \sum_i {1 \over \sqrt{2}^i} \le 10 \sqrt{n}/2 \times 3.5 = O(\sqrt{n}).$$
In other words, there is a constant $c$ such that $B(m) \le c \times \sqrt{n}$ with overwhelmingly high probability.
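A quick seeded simulation of the recurrence (a sketch; the choice of $n$ and the number of trials are arbitrary) is consistent with this bound:

```python
import math
import random

# B(i) = max(B(i-1) + k_i, 0), where k_i = heads(n / 2^i) - n / 2^(i+1)
def simulate(n, rng):
    b, i = 0, 1
    while (n >> i) >= 1:
        flips = n >> i
        heads = sum(1 for _ in range(flips) if rng.random() < 0.5)
        b = max(b + (heads - flips // 2), 0)
        i += 1
    return b

rng = random.Random(0)  # seeded for reproducibility
n = 1 << 14
runs = [simulate(n, rng) for _ in range(20)]
# every run stays well below a small constant times sqrt(n)
```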
(I've shamelessly handwaved and glossed over details all over the place to present the intuition in a clean way, but even correcting for them isn't going to change the bottom-line answer.) | {
"domain": "cs.stackexchange",
"id": 7642,
"tags": "algorithm-analysis, runtime-analysis, recurrence-relation, randomized-algorithms"
} |
Stuck with derivation of the integrated rate equation for a pseudo first order equilibrium reaction | Question: I was reading about the integrated rate law. However, I have trouble following the solution.
I have an equilibrium reaction:
$$\ce{A + B<=>[{k_{on}}][{k_{off}}]AB}$$
with forward and back reaction. I approximate the measurement to a pseudo first order reaction. The complex can be measured.
B can be calculated:
$$[\ce{B}]\overset{\text{def}}{=}[\ce{B_0}]-[\ce{AB}]$$
Therefore the equilibrium is
$$0 = k_\mathrm{on} \cdot [\ce{A_0}] \cdot ([\ce{B_0}]-[\ce{AB}])-k_\mathrm{off} \cdot [\ce{AB}]$$
which, writing $F$ for the complex concentration $[\ce{AB}]$, can be expressed also like this:
$$0 = k_\mathrm{on} \cdot \ce{[A_0]} \cdot \ce{[B_0]} - (k_\mathrm{on} \cdot \ce{[A_0]} + k_\mathrm{off})F$$
To solve the differential equation I simplified to this:
\begin{align}
\frac{\mathrm{d}F(t)}{\mathrm{d}t} &= c_1 - c_2 \cdot F(t)\\
c_1 &= k_\mathrm{on} \cdot A_0 \cdot B_0\\
c_2 &= k_\mathrm{on} \cdot A_0 + k_\mathrm{off}\\
\end{align}
So far the theory was easy and I am sure everything is correct, but now I am stuck. The solution of this differential equation would be straightforward:
$$F(t) = \frac{c_1}{c_2}+K\cdot \mathrm{e}^{-(c_2\cdot t)}$$
However, the solution used to fit and simulate the kinetics should be the following (in the simplified notation). This equation is published in http://afm1.pharm.utah.edu/pnscourse/Anal_Biochem_1995.pdf.
$$F(t) = \frac{c_1\cdot (1-\mathrm{e}^{-(c_2\cdot t)})}{c_2}$$
I really would like to know, where is my mistake and how the derivations of this formula has to be.
Answer: You have made no mistake. Your solution and the solution from the paper are practically identical. The solution from the paper is
\begin{align}
F(t)=c_1 \frac{1−e^{−c_2 t}}{c_2}=\frac{c_1}{c_2} - \frac{c_1}{c_2} e^{−c_2 t}
\end{align}
which is exactly the form your equation has, i.e.
\begin{align}
F(t)=\frac{c_1}{c_2} + K e^{−c_2 t} \qquad \text{with} \qquad K = - \frac{c_1}{c_2} \ .
\end{align}
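A quick numerical sanity check of this closed form (the values of $c_1$ and $c_2$ are hypothetical): a forward-Euler integration of $\mathrm{d}F/\mathrm{d}t = c_1 - c_2 F$ with $F(0)=0$ reproduces $\frac{c_1}{c_2}\left(1-\mathrm{e}^{-c_2 t}\right)$.

```python
import math

c1, c2 = 2.0, 0.5          # illustrative rate constants
dt, steps = 1e-4, 50_000   # integrate up to t = 5.0

F = 0.0
for _ in range(steps):
    F += (c1 - c2 * F) * dt  # Euler step of dF/dt = c1 - c2*F

t_end = steps * dt
exact = c1 / c2 * (1.0 - math.exp(-c2 * t_end))  # closed-form solution
```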
The integration constant $K$ is not defined the way it is but fixed via the boundary conditions of the reaction. In this case you get $K = - \frac{c_1}{c_2}$ by requiring that at the start of the reaction, i.e. $t=0$, the reactants have not yet reacted with each other and there is no product $\ce{AB}$ present initially, i.e. $F(t\!=\!0) = 0$. This leads to the desired result:
\begin{align}
F(t\!=\!0) \overset{!}{=} 0 &=\frac{c_1}{c_2} + K e^{0} \qquad \Rightarrow \quad K=- \frac{c_1}{c_2} \ .
\end{align} | {
"domain": "chemistry.stackexchange",
"id": 2313,
"tags": "physical-chemistry, equilibrium, kinetics"
} |
What is the relation between input and output PSDs given system transfer function $H(s)$ | Question: If I have the system transfer function $H(s)$ in the complex frequency domain, how would I relate the input/output power spectral densities?
I have come across the relation $P_{out}(f) = |H(f)|^2P_{in}(f)$ in the frequency domain, where $P_{out/in}(f)$ refer to the input and output PSDs. Would I be able to use this same relation in the complex frequency domain as $P_{out} = |H(i\omega)|^2P_{in}$? Although I suppose that would mean the PSD would be in complex frequency domain as well?
This is all very new to me so any clarification or resources that I could look at would be greatly appreciated.
Answer: If the system described by the transfer function $H(s)$ is stable, you can obtain its frequency response by substituting $s=j\omega$, and use the relation that you found:
$$S_Y(\omega)=S_X(\omega)\big|H(j\omega)\big|^2\tag{1}$$
where $S_X(\omega)$ and $S_Y(\omega)$ denote the power spectra of the system's input and its output, respectively. | {
"domain": "dsp.stackexchange",
"id": 9663,
"tags": "power-spectral-density, linear-systems, transfer-function, random-process"
} |
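As a concrete illustration of relation (1) from the answer above, take a hypothetical stable one-pole low-pass $H(s) = \frac{1}{1+s\tau}$ (the choice of filter is just for illustration): substitute $s=j\omega$ and scale the input PSD by $|H(j\omega)|^2$.

```python
# Scale an input PSD value by |H(j*omega)|^2 for H(s) = 1 / (1 + s*tau)
def psd_out(psd_in, omega, tau=1.0):
    H = 1.0 / (1.0 + 1j * omega * tau)  # frequency response at s = j*omega
    return psd_in * abs(H) ** 2

# White (flat, unit) input PSD: unattenuated at DC, half power at the corner
dc = psd_out(1.0, 0.0)       # -> 1.0
corner = psd_out(1.0, 1.0)   # ~ 0.5 (half-power at omega = 1/tau)
```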
CNN Architecture for Multiple Instance Learning | Question: I have a binary classification problem where I have a bag of documents (image files) that I need to classify - the bag that is, not the individual document. However, a bag can have a different number of documents and the combination of documents will determine the classification. E.g.
Bag1: A, B, C label = Pass
Bag2: D, E Fail
Bag3: F Pass
In Bag1, the presence of A, B, or C alone may not be enough to Pass, but together they do. In Bag3, F itself might be enough to pass.
I have already done OCR to represent all the text at the Bag level. For the images, all I can think of is to average the pixel values of each image in the Bag to represent it as a single image, and then train with a CNN.
Is there any better architecture or method for handling something like this?
Answer: As I understand it, there is some information that needs to be present in order for the example to "pass".
I have a few questions as well as a suggestion.
First off, does the information come in different forms? Meaning different fonts, different kinds of documents for the same information, etc.?
Is the CNN mandatory? Any more info regarding the domain of the application or the nature of the problem that you could share?
Suggestions:
Since you have done the OCR I would suggest a pipeline of different models. Process the outputs of the OCR to create a secondary dataset in order to employ NLP: decode the OCR results to produce plaintext outputs, which would serve as the secondary dataset.
Another thing you could try is to identify the category of each document just like any other multi-class classification. After that you could employ another classifier based on which categories pass your test (if there's a discrete number of them) and continue like so.
Example: Identify Type A through Type F documents. Set the combinations that would serve as true labels, e.g. occurrence of Type F, or occurrence of Types A, B, C (any other combination would be False), and then train.
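For the bag-level decision itself, one simple pooling rule over per-document scores is a noisy-OR (the rule and the threshold below are illustrative, not from the question):

```python
def bag_passes(doc_scores, threshold=0.5):
    # Noisy-OR pooling: several weakly-indicative documents can jointly pass
    # (like Bag1's A, B, C), and one decisive document can pass alone (Bag3's F)
    p_none = 1.0
    for p in doc_scores:
        p_none *= 1.0 - p
    return (1.0 - p_none) >= threshold

strong_single = bag_passes([0.9])        # one decisive document -> pass
combined = bag_passes([0.4, 0.4, 0.4])   # jointly sufficient -> pass
weak = bag_passes([0.1, 0.1])            # insufficient -> fail
```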
Finally there's a paper on a different domain, though an approach similar to theirs could be followed. They present a Bag-of-Visual Words (BOV) method for object-based classification in land-use/cover mapping.
...we can characterize an image by a histogram of visual-word count [6]. The visual vocabulary provides a “midlevel” representation which helps to bridge the huge semantic gap between the low-level features extracted from an image and the high-level concepts to be categorized
Object Classification of Aerial Images With Bag-of-Visual Words | {
"domain": "datascience.stackexchange",
"id": 5534,
"tags": "machine-learning, python, neural-network, image-classification"
} |
Quantum Decoherence | Question: If I understand correctly, quantum coherence is when two wave functions with the same phase, frequency and period superimpose to form peaks or troughs.
On the macroscopic scale, say, a chair is there because by the time you saw it (measured it) the wave functions had collapsed and no longer cohere, therefore leaving the object there. This happens in $10^{-31}$ seconds, to be exact.
Basically, when we observe, wave functions decohere super fast, leaving us only one object to be seen. Is this a correct assumption?
Answer: Let me clarify a couple of misconceptions in the question - perhaps this will be enough to answer it:
Coherence does not mean the same phase: it means that the two waves can interfere. Depending on the phase they may interfere constructively or destructively. We could take as an example the conventional light sources, such as light bulbs: they emit lots of waves with completely random phases, some of which interfere constructively, others destructively, and we observe some average. The quantum light sources, such as lasers and masers, emit coherent radiation, which explains their wonderful properties in terms of observing interference patterns, low beam divergence, transmitting energy on large distances, etc.
Interfering wave functions correspond to the states of the same object. Decoherence in this context means that we will observe only one of these states. The double-slit experiment is a good example: by trying to detect through which slit the light has passed the observer causes decoherence and destroys the interference pattern. | {
"domain": "physics.stackexchange",
"id": 66384,
"tags": "quantum-mechanics"
} |
Class Inheritance in C# (possibly generics) | Question: I'm working on a segment of code where it runs a number of tasks and then combine individual task results to construct a complete task result object, there's no concurrency involved so it's purely a question of the best use of class inheritance/pattern.
Below is the skeleton of the code I've written, it loops through IList<ITask> where each ITask returns an ITaskResult, the main method will then have to examine the type of each ITaskResult and set the property on CompleteTaskResult accordingly.
The goal is to keep the code easy to understand and easy to add new Task/TaskResult.
What I'm not happy about is the part where it has to check the type of ITaskResult individually and then set the value on CompleteTaskResult, it feels a little cumbersome with lots of repetitions.
Is there a better way to structure this code?
Thanks.
public interface ITask
{
ITaskResult Execute();
}
public class AlphaTask : ITask
{
public ITaskResult Execute()
{
return new AlphaTaskResult();
}
}
public class BetaTask : ITask
{
public ITaskResult Execute()
{
return new BetaTaskResult();
}
}
public class GammaTask : ITask
{
public ITaskResult Execute()
{
return new GammaTaskResult();
}
}
public interface ITaskResult
{
}
public class AlphaTaskResult : ITaskResult
{
}
public class BetaTaskResult : ITaskResult
{
}
public class GammaTaskResult : ITaskResult
{
}
public class CompleteTaskResult
{
public AlphaTaskResult AlphaTaskResult { get; set; }
public BetaTaskResult BetaTaskResult { get; set; }
public GammaTaskResult GammaTaskResult { get; set; }
}
static void Main(IList<ITask> tasks)
{
var taskResults = tasks.Select(x => x.Execute());
var completeTaskResult = new CompleteTaskResult();
foreach (var taskResult in taskResults)
{
if (taskResult is AlphaTaskResult alphaTaskResult)
{
completeTaskResult.AlphaTaskResult = alphaTaskResult;
continue;
}
if (taskResult is BetaTaskResult betaTaskResult)
{
completeTaskResult.BetaTaskResult = betaTaskResult;
continue;
}
if (taskResult is GammaTaskResult gammaTaskResult)
{
completeTaskResult.GammaTaskResult = gammaTaskResult;
continue;
}
        throw new InvalidOperationException("unsupported task");
}
}
Edit: The intention is that one can pass a number of tasks in the form of IList<ITask> to the library and then retrieve concrete results back from CompleteTaskResult. CompleteTaskResult may have null properties if no tasks related to those properties were executed. The main caller is well aware of which concrete types of ITask were sent in, so it also knows which properties of CompleteTaskResult to query for the results.
Answer: You could simply add the task results to a collection. Different types of collections can be used:
List<ITaskResult>: You can add several results of the same type. You must enumerate to find a task of a specific type.
Dictionary<Type, ITaskResult>: Each task result type can be added only once. You can query specific task types.
if (results.TryGetValue(typeof(BetaTaskResult), out ITaskResult taskResult)) {
...
}
or if you know a result is there for sure:
ITaskResult taskResult = results[typeof(BetaTaskResult)];
Of course, you can also use other types of keys, like enums or strings.
Dictionary<Type, List<ITaskResult>>: Each task result type can be added several times. You can query specific task result types. Handling is a bit more complex.
If you want to access members who are not part of the interface, you must cast the result to specific types.
// Assuming each result type occurs only once.
Dictionary<Type, ITaskResult> results = tasks
.Select(x => x.Execute())
.ToDictionary(r => r.GetType()); // alternative: .ToList()
If you prefer to keep your current solution, you can simplify it a bit. This will enumerate the tasks several times; however, this is acceptable, because you have a very small number of tasks.
var taskResults = tasks
.Select(x => x.Execute())
.ToList();
var completeTaskResult = new CompleteTaskResult {
AlphaTaskResult = taskResults.OfType<AlphaTaskResult>().FirstOrDefault(),
BetaTaskResult = taskResults.OfType<BetaTaskResult>().FirstOrDefault(),
GammaTaskResult = taskResults.OfType<GammaTaskResult>().FirstOrDefault(),
};
It is important to call .ToList(), otherwise this would execute the tasks several times.
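The same pitfall can be illustrated outside C# (my own hedged Python sketch; the generator pipeline below plays the role of the unenumerated LINQ query, and all names are made up):

```python
# Hypothetical illustration of the lazy-evaluation pitfall warned about above:
# enumerating a lazy pipeline twice runs the tasks twice, while materializing
# it first (the analogue of .ToList()) runs them exactly once.
executions = []

def execute(task_id):
    executions.append(task_id)          # side effect: record every run
    return f"result-{task_id}"

tasks = [1, 2, 3]

def query():
    # Builds a fresh lazy pipeline, like tasks.Select(x => x.Execute()):
    # nothing executes until the result is enumerated.
    return (execute(t) for t in tasks)

list(query()); list(query())            # two enumerations -> tasks run twice
assert len(executions) == 6

executions.clear()
cached = list(query())                  # the analogue of .ToList()
_ = cached[:], cached[:]                # re-reading the list is free
assert len(executions) == 3
```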
See also: Lazy Evaluation (and in contrast, Eager Evaluation) | {
"domain": "codereview.stackexchange",
"id": 35308,
"tags": "c#, object-oriented, design-patterns, inheritance"
} |
Which is Worse: Car vs. Car or Car vs. Wall? | Question: So I got myself wondering what could be worse for the driver: a collision of two identical cars at equal speed (a frontal crash), or the same car at the same speed crashing into a wall? In the first case, I see that it would double the impact, but the other car's structure would also absorb some of the energy; against a solid, rigid wall, on the other hand, all the energy would come back to the vehicle.
Which situation is worse for the passengers?
Answer: From the point of view of the driver of a car, impacting another car is about as bad as crashing against an ideal wall (a wall with zero deformation whatsoever).
If there were a plane reflection between the two cars, then vs. Car would be exactly equal to vs. Wall (the contact points between both cars would all be on the same plane, due to reflection, so each car could be considered a wall for the other). But this plane reflection does not exist:
What we have instead is a 2-fold rotational reflection.
Let's say the left part of the car is heavier than the right part. The left and right parts will get crushed differently, with the left part of each car going further than if there had been an immovable wall. Heavy parts of each car will slide beside each other, with a lot of the energy absorbed by steel deformation, and a longer distance between point of impact and final point, thus lesser deceleration. In this scenario, if you happen to sit on the heavy side you are lucky, but if you happen to sit on the light side it might be worse than a wall.
Also, rather than all forces having the same direction, some of the energy will be converted into rotation, which can be either a good or bad thing depending on where you sit.
Finally, cars have a few hard structural beams (or parts that can be considered as beams) and most of the rest is softer. If hitting a wall, deceleration is immense as soon as a beam touches the wall. If hitting another car, the beams will probably enter the other car's soft parts. Here again, distance between point of impact and final position will be longer, thus a less violent deceleration. This is especially true at very high speed, with beams of each car piercing through most of the opposite car.
All in all, crashing into an ideal wall is probably a bit worse than crashing into another car, but better drive safely and avoid crashes :-) | {
"domain": "engineering.stackexchange",
"id": 241,
"tags": "automotive-engineering, safety"
} |
a polynomial representation of boolean functions | Question: I came up with this linear transformation to map boolean functions to polynomials and it seems to have some nice properties. I was wondering if there is any reference describing this (and/or similar) mappings and their application.
For three variables, the transformation matrix is given below:
I have posted this with some details and a description for reducing 3-SAT to checking the existence of specific term in an ordinary polynomial product on math.stackexchange (here). I have not got much feedback there. I was wondering if someone here could help.
If this is not the right forum to ask this sort of questions, or more details are needed, please let me know in comments.
EDIT: I found similarities to Zhegalkin polynomials and the $f:\left \{ -1,1 \right \}^n \rightarrow \left \{ -1,1 \right \}$ representation described here with the difference that this is a $f:\left \{ -1,1 \right \}^n \rightarrow \left \{ 0,1 \right \}$ representation.
Answer: Well done on your independent discovery.
This is a Hadamard matrix of Sylvester type, written in a different order. There is a massive literature on this topic. It is used in coding theory, cryptography, is directly related to Reed-Muller codes of degree 1, it can be used to obtain best affine approximations of functions, etc.
Ryan O'Donnell's notes on Analysis of Boolean Functions (available here), which are now published as a book, and Claude Carlet's chapters on Boolean functions (here) are easily accessible online.
Just use the order $[1,X,Y,XY,Z,ZX,ZY,ZXY]$ which corresponds to lexicographic order over the integers, and apply the corresponding permutation to the columns as well.
In that order your matrix is simply the 3-fold Kronecker product $$H_2 \otimes H_2 \otimes H_2$$ of the basic 2-by-2 Hadamard matrix
$$
H_2=\left(\begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array}
\right).
$$
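To see the Kronecker structure concretely, here is a small sketch (mine, not part of the answer) that builds $H_2\otimes H_2\otimes H_2$ and checks the defining orthogonality $HH^{T}=8I$:

```python
# Numerical check: build the 3-fold Kronecker product H2 (x) H2 (x) H2 and
# verify the Hadamard property that distinct rows are orthogonal (H * H^T = 8I).
def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    return [
        [a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
        for i in range(len(a)) for k in range(len(b))
    ]

H2 = [[1, 1], [1, -1]]
H8 = kron(kron(H2, H2), H2)

for i in range(8):
    for j in range(8):
        dot = sum(H8[i][k] * H8[j][k] for k in range(8))
        assert dot == (8 if i == j else 0)   # rows pairwise orthogonal
```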
This paper considers efficient evaluation of Hadamard representation coefficients. | {
"domain": "cstheory.stackexchange",
"id": 4360,
"tags": "cc.complexity-theory, computability, sat, boolean-functions, polynomials"
} |
How can we keep Schrödinger's cat alive? | Question: We know, Schrödinger's cat inside the box is in the equal superposition state of both alive and dead. We can express its state as $$|\text{cat}_\phi\rangle= \frac{|\text{alive}\rangle+e^{i\phi}|\text{dead}\rangle}{\sqrt{2}} \hspace{10mm} \text{where }\phi\text{ is relative phase}$$
If $\phi$ were $0$ or $\pi$ we could use Grover's algorithm to keep the cat alive.
But since we don't know $\phi$ and we don't want to measure the cat without being $100\%$ sure that the cat is now in $|\text{alive}⟩$ state, how can we proceed? Can we develop a more general version of Grover's algorithm?
Answer: TL;DR: This is probably going to be disappointing. If a cat enters a superposition and we lose track of the relative phase $\phi$ then there is only one deterministic operation that returns to the $|\text{alive}\rangle$ state: the state preparation channel. In other words, we have to get a new cat.
Let us represent the states of the cat on the Bloch sphere with $|\text{alive}\rangle$ at the North pole and $|\text{dead}\rangle$ at the South pole. The states $|\text{cat}_\phi\rangle$ are on the equator. Further, let us denote with $\mathcal{E}:L(\mathbb{C}^2)\to L(\mathbb{C}^2)$ the required quantum operation that saves the cat. In other words,
$$
\mathcal{E}(|\text{cat}_\phi\rangle\langle \text{cat}_\phi|) = |\text{alive}\rangle\langle \text{alive}|\quad\text{for all}\,\phi.\tag1
$$
Thus, $\mathcal{E}$ maps the equator of the Bloch sphere to the North pole. This immediately tells us that $\mathcal{E}$ is not bijective and hence not unitary.
Moreover, by linearity, $\mathcal{E}$ maps the entire equatorial plane of the Bloch sphere to the North pole. In particular, $\mathcal{E}$ maps the maximally mixed state $\frac{I}{2}$ to the North pole
$$
\mathcal{E}\left(\frac{I}{2}\right) = |\text{alive}\rangle\langle \text{alive}|.\tag2
$$
On the other hand,
$$
\mathcal{E}\left(\frac{I}{2}\right)=\mathcal{E}\left(\frac{|\text{alive}\rangle\langle \text{alive}|+|\text{dead}\rangle\langle \text{dead}|}{2}\right) = \frac12\rho_1+\frac12\rho_2\tag3
$$
where $\rho_1 = \mathcal{E}(|\text{alive}\rangle\langle \text{alive}|)$ and $\rho_2 = \mathcal{E}(|\text{dead}\rangle\langle \text{dead}|)$. Combining $(2)$ and $(3)$, we have
$$
|\text{alive}\rangle\langle \text{alive}| = \frac12\rho_1+\frac12\rho_2.
$$
However, $|\text{alive}\rangle$ is an extreme point of the Bloch sphere and hence not a convex combination of states other than $|\text{alive}\rangle$. Therefore, $\rho_1=\rho_2=|\text{alive}\rangle\langle \text{alive}|$. Finally, since the set consisting of the equator and the poles contains a basis, we conclude that
$$
\mathcal{E}(\rho) = |\text{alive}\rangle\langle \text{alive}|\tag4
$$
for all states $\rho$. Thus, the only quantum operation satisfying $(1)$ is the state preparation channel $(4)$ for the $|\text{alive}\rangle$ state. | {
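As a quick numerical sanity check of the step from $(2)$ to $(3)$ (my own sketch, not part of the original argument): averaging the projectors $|\text{cat}_\phi\rangle\langle\text{cat}_\phi|$ over uniformly spaced phases does reproduce the maximally mixed state $I/2$.

```python
# Averaging |cat_phi><cat_phi| over M equally spaced relative phases gives the
# maximally mixed state I/2: the off-diagonal terms (1/2) e^{±i phi} cancel.
import cmath
import math

M = 8  # number of equally spaced phases; any M >= 2 works
avg = [[0j, 0j], [0j, 0j]]
for k in range(M):
    phi = 2 * math.pi * k / M
    amp = [1 / math.sqrt(2), cmath.exp(1j * phi) / math.sqrt(2)]  # |cat_phi>
    for i in range(2):
        for j in range(2):
            avg[i][j] += amp[i] * amp[j].conjugate() / M  # add |psi><psi| / M

# avg should now be [[0.5, 0], [0, 0.5]] up to rounding error
assert all(abs(avg[i][j] - (0.5 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```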
"domain": "quantumcomputing.stackexchange",
"id": 3079,
"tags": "quantum-state, quantum-algorithms, grovers-algorithm, superposition"
} |
Distribution of Charges on a Conducting Cup | Question:
A multiple-choice problem goes as follows:
A small positively charged sphere is lowered by a nonconducting thread
into a grounded metal cup without touching the inside surface of the
cup, as shown above. The grounding wire attached to the outside
surface is disconnected and the charged sphere is then removed from
the cup. Which of the following best describes the subsequent
distribution of excess charge on the surface of the cup?
and the given correct answer is:
Negative charge resides on the outside surface, and no charge resides
on the inside surface.
with an explanation online:
When lowered inside, the charged sphere induces a negative charge on
the inner surface of the cup. The outer surface remains neutral since
it is grounded. When the grounding wire is removed, the cup has a net
negative charge, which when the sphere is removed, will move to the
outer surface of the cup.
I agree with this right until the sphere is removed and the charges reconfigure. I can see how a much larger amount of charge would reside on the outer surface, and I can see that the charges on the inner surface will initially be repelled away from the inner surface, but is it true that exactly no charge will be on the inner surface?
My thought process is that on the inwards bends of the inner surface, it is going to be very hard for charge to stay, since nearby charges will push it off.
So if there is a lot of charge on the inner surface, a lot of it will leak off from these inwards bends, but it could still accumulate in the outwards bends on the inner surface. And if there is a small enough amount of charge on the inner surface, the charge on the outer surface could hold the charges at those inwards bends in place, making me think the charge distribution could look something like:
and if no charge resides on the inner surface, wouldn't that imply there is independence between outer and inner surfaces, so we could change the inner surface to not affect the outer surface distribution (as long as its still an "inner" surface):
and get that the red circled region has an E-field of 0 in both pictures, since in one it's inside the conductor? Not sure I believe that.
So is it true that no charge will reside on the inner surface of this cup?
Answer: It is not strictly true that there is no charge on the inner surface. Note that as far as the field/charge is concerned, there is no abrupt transition between the outer surface and the inner surface, so it would be odd if the charge density suddenly dropped to zero at some point on the cup, and naturally it doesn't.
The figure above shows the electrostatic simulation result for such a cup. The arrows indicate the E-field direction and magnitude (arrow length is proportional to the magnitude), and the colors correspond to the logarithm of the E-field magnitude normalized to the peak value:
$$ \log_{10}\left(\frac{E}{E_{max}}\right)$$
Note that the surface charge density on the cup is proportional to the E-field magnitude at the surface. You can see that the E-field (and hence the surface charge density) gradually drops as you go downward on the inner surface of the cup. The charge density on the inner surface can be as low as ~4 orders of magnitude less than on the outer surface. So it is understandable that one might consider the charge density to be zero on much of the inner surface for practical purposes, but as you can see, this is not strictly true. | {
"domain": "physics.stackexchange",
"id": 88096,
"tags": "electrostatics, electric-fields, charge, conductors"
} |
Temperature Advection using finite differences with gridded data | Question: Advection of a scalar quantity, such as temperature (T), by the horizontal wind, is defined as follows:
$-\textbf{U}\cdot\nabla T$
where $\textbf{U}$ is the horizontal wind vector, $\textbf{U}=(u,v)$, being $u$ and $v$ its zonal and meridional components, respectively (source).
To compute it with gridded data, it is convenient to use finite differences (or similar numerical approaches), as pointed here.
Using this method, the following lines written in Python calculate the advection of temperature (TAdv):
import numpy as np

dy = 111000  # meridional grid spacing [m]; assumes 1-degree latitude steps
lonres = lon[1] - lon[0]  # longitude grid spacing [degrees], constant
tadv = np.zeros(np.shape(T)); tadv.fill(np.nan)
for t in range(np.shape(T)[0]):
for x in np.arange(1,len(lon)-1):
for y in np.arange(1,len(lat)-1):
dx = abs(111000*np.cos(lat[y]*(2*np.pi/360))*lonres)
tadv[t,y,x] = -(u[t,y,x]*(T[t,y,x+1]-T[t,y,x-1])/(2*dx) +\
v[t,y,x]*(T[t,y+1,x]-T[t,y-1,x])/(2*dy))
I would expect a similar result from scaling the wind components by the temperature, at each grid cell, which is done as follows:
ut = np.zeros(np.shape(T))
vt = np.zeros(np.shape(T))
UT = np.zeros(np.shape(T))
for t in range(np.shape(T)[0]):
for x in range(len(lon)):
for y in range(len(lat)):
ut[t,y,x] = u[t,y,x]*abs(T[t,y,x])
vt[t,y,x] = v[t,y,x]*abs(T[t,y,x])
UT[t,y,x] = ((ut[t,y,x]**2 + vt[t,y,x]**2)**0.5)*np.sign(T[t,y,x])
The magnitude of the scaled wind vectors (UT) would then be similar to this "amount of transport of T". It is multiplied by the sign of T (-1 or 1) because the T values are anomalies (there are values with T<0), in order to keep its sign.
However, the results are totally different. What am I missing here? Is it a mistake in this assumption (that both results should look fairly similar)? Or is it a coding problem?
In the figure below, I show the original T, u and v fields (left), and the results of the two above calculations: Tadv with u and v (middle), and the magnitude of the scaled vectors, UT, with the scaled vectors themselves uT and vT (right). TAdv is multiplied by 24*3600, so that the units are K/day.
Here is a link to a Dropbox folder, where the full script and sample data are available for download: https://www.dropbox.com/sh/kcrb08h72jjj3rn/AAAIAUgKVrRrICAQtXhOHS2ta?dl=0
Answer: When I look at problems like these I first check to see if there is a well-tested and well-documented implementation already, rather than reinventing the wheel. In this case MetPy's temperature advection is well-tested software that does many of the things meteorologists want, including calculating finite differences with the right map scale factors. Since you are using Python, I would at least look at their implementation first and correct your code, even if you are not going to use their code.
Please do this first and validate your data before you attempt any downscaling operation.
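If you do end up rolling your own finite differences for comparison, the triple loop in the question can be vectorized; here is a sketch with synthetic data (variable names mirror the question's, and note that dy here includes the latitude grid spacing, which the question's code implicitly assumes to be 1 degree):

```python
# Vectorized centered-difference temperature advection with the cos(lat) map
# scale factor, mirroring the loop in the question (interior points only).
# Synthetic fields are used here just to exercise the code.
import numpy as np

lat = np.arange(-60.0, 60.1, 2.5)      # degrees
lon = np.arange(0.0, 30.1, 2.5)        # degrees
T = np.random.default_rng(0).standard_normal((4, lat.size, lon.size))
u = np.full_like(T, 5.0)               # m/s, illustrative constant wind
v = np.full_like(T, -2.0)

latres = lat[1] - lat[0]
lonres = lon[1] - lon[0]
dy = 111000.0 * latres                               # meters per grid step
dx = 111000.0 * np.cos(np.deg2rad(lat)) * lonres     # meters per step, per lat

tadv = np.full_like(T, np.nan)
tadv[:, 1:-1, 1:-1] = -(
    u[:, 1:-1, 1:-1] * (T[:, 1:-1, 2:] - T[:, 1:-1, :-2])
    / (2 * dx[None, 1:-1, None])
    + v[:, 1:-1, 1:-1] * (T[:, 2:, 1:-1] - T[:, :-2, 1:-1]) / (2 * dy)
)
# tadv now holds K/s on interior points, NaN on the boundary ring
```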
And finally, AtmosphericPrisonEscape is correct: if you are using a lat/lon grid and calculating finite differences, you need to include map scale factors. If you are using global grids, the finite differences will not work at the poles; with regional grids that is not a problem. Please include as much detail as possible when you ask a computational question. | {
"domain": "earthscience.stackexchange",
"id": 1926,
"tags": "meteorology, temperature, wind, numerical-modelling, atmospheric-circulation"
} |
First Order interpretation of arbitrary structures as a graph | Question: I am currently trying to get some intuition on the concept of First Order reductions, and have come across this exercise question by Immerman, dubbed "Everything is a Graph".
Given some arbitrary relational structure $S$ of some vocabulary $\sigma$, show that there are first-order queries $I$ and $I^{-1}$, such that $G:=I(S)$ is a directed graph, and $I^{-1}(G)$ is isomorphic to $S$.
I would be grateful for some hints and proof ideas, as I struggle a bit with seeing how the encoding would work.
Answer: The following is, as requested, a hint rather than a full answer. I'll hint at one way of doing the coding; I'll not touch on how to define the queries but those formulas will always be horrible and you'd probably end up describing them in words anyway. We'll code a structure $\cal{A}$ that has universe $A$ as a graph $G_\cal{A}$. The graph will contain one vertex for each element of $A$, plus a whole bunch of other vertices that encode the relations and so on.
You can assume a purely relational vocabulary by using standard codings of constants and functions as relations.
Coding is done by adding "gadgets": small graphs that you can recognize. Remember that, for every graph $H$ on $k$ vertices, there is a formula $\varphi_H(x_1, \dots, x_k)$ such that $(G, v_1, \dots, v_k)\vDash \varphi_H$ iff the subgraph of $G$ induced by $v_1, \dots, v_k$ is $H$.
You probably need a way to say "This vertex is an element of $A$." Come up with a gadget, make a copy of the gadget for each element of $A$ and add an edge between the gadget and the vertex coding that element. (You might not need to do this but it's normally easier to just do it than to prove that it's unnecessary.)
You definitely need a way to say "This tuple $a_1\dots a_r$ is in the $r$-ary relation $R$." Come up with a gadget for each relation and a way to attach each tuple in $R$ to its own copy of the gadget. Remember that the tuple $a_1a_2a_3$ is different from $a_2a_3a_1$ so your coding needs to make those different.
If you don't understand what I mean about gadgets, mouse-over the following for a concrete example. However, the example gives away a lot of the answer, so try not to look at it.
You could mark a vertex $v\in G_\cal{A}$ as coding an element of $A$ by adding a $k$-clique to your graph, along with an edge from one vertex of the clique to $v$. So, being adjacent to a $k$-clique means a vertex codes an element of $A$. Make $k$ big enough so there are no other $k$-cliques in $G_\cal{A}$.
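As a toy illustration of that clique-gadget idea (my own hypothetical encoding, not part of the exercise), here is a sketch that marks element vertices with a private $K$-clique and then decodes them back:

```python
# Toy illustration (hypothetical encoding) of the clique-gadget idea: vertices
# of G_A coding elements of A are marked by attaching a private K-clique.
from itertools import combinations

K = 4  # gadget clique size; choose K so no other K-clique occurs in G_A

def add_edge(g, a, b):
    g.setdefault(a, set()).add(b)
    g.setdefault(b, set()).add(a)

def mark_element(g, v):
    """Attach a fresh K-clique to v, marking v as coding an element of A."""
    clique = [f"{v}!gadget{i}" for i in range(K)]
    for a, b in combinations(clique, 2):
        add_edge(g, a, b)
    add_edge(g, clique[0], v)          # one edge from the gadget into v

def in_k_clique(g, u):
    """True iff u belongs to some K-clique (this is first-order definable)."""
    return any(
        all(b in g[a] for a, b in combinations(others, 2))
        for others in combinations(g.get(u, set()), K - 1)
    )

def is_element(g, v):
    """Decoder: v is adjacent to a K-clique but not inside one."""
    return (not in_k_clique(g, v)
            and any(in_k_clique(g, u) for u in g.get(v, set())))

G = {}
add_edge(G, "a", "b")                  # the coded structure: universe {a, b}
for elem in ("a", "b"):
    mark_element(G, elem)

assert is_element(G, "a") and is_element(G, "b")
assert not is_element(G, "a!gadget0")  # gadget vertices are not elements
```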
If you still don't understand, leave a comment and I'll try to explain it better. | {
"domain": "cs.stackexchange",
"id": 3075,
"tags": "complexity-theory, graphs, logic, descriptive-complexity"
} |
Calculating pretension of a spring with an initial tension | Question: Simple one. Some extension springs are wound with an initial tension which acts to pull the coils together, even when not under external loading.
When calculating the preload in the spring, should the initial tension be added to the spring force (e.g. initial length x spring rate)? I have seen a few examples and some people include it, others don't. Which is correct?
Answer: After being pointed in the correct direction by an acquaintance I think I have the answer now:
It appears that initial tension should be included when calculating load in an extension spring, as shown in the formulae above. However, in some applications the initial tension may be insignificant compared with the load at a given extension. People either ignore it due to it being negligible, or in error. | {
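In other words, the load at extension $x$ is $F = F_i + kx$, where $F_i$ is the initial tension and $k$ the spring rate. A minimal numeric sketch with assumed values:

```python
# Numeric sketch (hypothetical values): load of an extension spring with
# initial tension F_i and rate k at extension x is F = F_i + k * x.
def spring_load(x_m, rate_n_per_m, initial_tension_n):
    """Tensile load [N] once the coils have separated (x >= 0)."""
    return initial_tension_n + rate_n_per_m * x_m

k = 500.0    # N/m, assumed spring rate
F_i = 2.0    # N, assumed initial tension
x = 0.010    # m, 10 mm extension

with_init = spring_load(x, k, F_i)   # ~7 N
without = k * x                      # ~5 N: ignoring F_i underestimates the load
print(with_init, without)
```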
"domain": "engineering.stackexchange",
"id": 1993,
"tags": "springs"
} |
Is it possible to construct a quiver diagram for electromagnetism? | Question: I have been trying to learn about quiver diagrams and quiver gauge theory for a summer project. All of the lecture notes/papers on the topic give example diagrams that are mathematically simple but don't correspond to any real theory.
I was wondering: what do the quiver diagrams for fields I already know about (electromagnetism, scalar spin-0 fields, Dirac spinors, etc.) look like?
Answer: There is no such quiver diagram for electromagnetism because it is not a quiver gauge theory. A quiver gauge theory for which you can draw a quiver diagram is a gauge theory with at least 2 gauge groups $\mathrm{U}(N_i)$ and at least one matter field field that transforms in the fundamental representations of two of these groups. In addition the theory is usually assumed to be supersymmetric.
Neither the standard model nor its minimal supersymmetric extension (MSSM) is a quiver gauge theory, either, because they have fields you can't fit into the quiver because they transform in the wrong representations. However, there is a "minimal quiver extension" of the MSSM constructed in "Building the Standard Model on a D3-brane" by H. Verlinde and M. Wijnholt. | {
"domain": "physics.stackexchange",
"id": 54730,
"tags": "symmetry, field-theory, gauge-theory"
} |
A* versus Bidirectional Dijkstra's algorithm | Question: I have added bidirectional Dijkstra's algorithm into my pathfinding "framework", and I would like to make good use of C++ programming idioms, eliminate all possible memory leaks, otherwise improve readability, but I need your help for that to happen.
That's what I have scrambled:
shortest_path.h:
#ifndef SHORTEST_PATH_H
#define SHORTEST_PATH_H
#include <algorithm>
#include <cmath>
#include <limits>
#include <queue>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>
namespace coderodde {
template<class NodeType>
class AbstractGraphNode {
protected:
using Set = std::unordered_set<NodeType*>;
public:
AbstractGraphNode(std::string name) : m_name{name} {}
virtual void connect_to(NodeType* other) = 0;
virtual bool is_connected_to(NodeType* other) const = 0;
virtual void disconnect_from(NodeType* other) = 0;
virtual typename Set::iterator begin() const = 0;
virtual typename Set::iterator end() const = 0;
class ParentIterator {
public:
ParentIterator() : mp_set{nullptr} {}
typename Set::iterator begin()
{
return mp_set->begin();
}
typename Set::iterator end()
{
return mp_set->end();
}
void set_list(Set* p_list)
{
this->mp_set = p_list;
}
private:
std::unordered_set<NodeType*>* mp_set;
};
virtual ParentIterator* parents() = 0;
bool operator==(const NodeType& other) const
{
return m_name == other.m_name;
}
std::string& get_name() {return m_name;}
protected:
std::string m_name;
};
template<class T, class FloatType = double>
class AbstractWeightFunction {
public:
virtual FloatType& operator()(T* p_node1, T* p_node2) = 0;
};
template<class FloatType>
class Point3D {
private:
const FloatType m_x;
const FloatType m_y;
const FloatType m_z;
public:
Point3D(const FloatType x = FloatType(),
const FloatType y = FloatType(),
const FloatType z = FloatType())
:
m_x{x},
m_y{y},
m_z{z} {}
FloatType x() const {return m_x;}
FloatType y() const {return m_y;}
FloatType z() const {return m_z;}
};
template<class FloatType>
class AbstractMetric {
public:
virtual FloatType operator()(coderodde::Point3D<FloatType>& p1,
coderodde::Point3D<FloatType>& p2) = 0;
};
template<class FloatType>
class EuclideanMetric : public coderodde::AbstractMetric<FloatType> {
public:
FloatType operator()(coderodde::Point3D<FloatType>& p1,
coderodde::Point3D<FloatType>& p2) {
const FloatType dx = p1.x() - p2.x();
const FloatType dy = p1.y() - p2.y();
const FloatType dz = p1.z() - p2.z();
return std::sqrt(dx * dx + dy * dy + dz * dz);
}
};
template<class T, class FloatType = double>
class LayoutMap {
public:
virtual coderodde::Point3D<FloatType>*& operator()(T* key)
{
return m_map[key];
}
~LayoutMap()
{
typedef typename std::unordered_map<T*,
coderodde::Point3D<FloatType>*>::iterator it_type;
for (it_type iterator = m_map.begin();
iterator != m_map.end(); iterator++)
{
delete iterator->second;
}
}
private:
std::unordered_map<T*, coderodde::Point3D<FloatType>*> m_map;
};
template<class NodeType, class DistanceType = double>
class HeapNode {
public:
HeapNode(NodeType* p_node, DistanceType distance) :
mp_node{p_node},
m_distance{distance} {}
NodeType* get_node()
{
return mp_node;
}
DistanceType get_distance()
{
return m_distance;
}
private:
NodeType* mp_node;
DistanceType m_distance;
};
template<class NodeType, class DistanceType = double>
class HeapNodeComparison {
public:
bool operator()(HeapNode<NodeType, DistanceType>* p_first,
HeapNode<NodeType, DistanceType>* p_second)
{
return p_first->get_distance() > p_second->get_distance();
}
};
template<class NodeType, class FloatType = double>
class DistanceMap {
public:
FloatType& operator()(const NodeType* p_node)
{
return m_map[p_node];
}
private:
std::unordered_map<const NodeType*, FloatType> m_map;
};
template<class NodeType>
class ParentMap {
public:
NodeType*& operator()(const NodeType* p_node)
{
return m_map[p_node];
}
bool has(NodeType* p_node)
{
return m_map.find(p_node) != m_map.end();
}
private:
std::unordered_map<const NodeType*, NodeType*> m_map;
};
template<class NodeType>
std::vector<NodeType*>* traceback_path(NodeType* p_touch,
ParentMap<NodeType>* parent_map1,
ParentMap<NodeType>* parent_map2 = nullptr)
{
std::vector<NodeType*>* p_path = new std::vector<NodeType*>();
NodeType* p_current = p_touch;
while (p_current != nullptr)
{
p_path->push_back(p_current);
p_current = (*parent_map1)(p_current);
}
std::reverse(p_path->begin(), p_path->end());
if (parent_map2 != nullptr)
{
p_current = (*parent_map2)(p_touch);
while (p_current != nullptr)
{
p_path->push_back(p_current);
p_current = (*parent_map2)(p_current);
}
}
return p_path;
}
template<class T, class FloatType = double>
class HeuristicFunction {
public:
HeuristicFunction(T* p_target_element,
LayoutMap<T, FloatType>& layout_map,
AbstractMetric<FloatType>& metric)
:
mp_layout_map{&layout_map},
mp_metric{&metric},
mp_target_point{layout_map(p_target_element)}
{
}
FloatType operator()(T* element)
{
return (*mp_metric)(*(*mp_layout_map)(element), *mp_target_point);
}
private:
coderodde::LayoutMap<T, FloatType>* mp_layout_map;
coderodde::AbstractMetric<FloatType>* mp_metric;
coderodde::Point3D<FloatType>* mp_target_point;
};
template<class NodeType, class WeightType = double>
std::vector<NodeType*>*
astar(NodeType* p_source,
NodeType* p_target,
coderodde::AbstractWeightFunction<NodeType, WeightType>& w,
coderodde::LayoutMap<NodeType, WeightType>& layout_map,
coderodde::AbstractMetric<WeightType>& metric)
{
std::priority_queue<HeapNode<NodeType, WeightType>*,
std::vector<HeapNode<NodeType, WeightType>*>,
HeapNodeComparison<NodeType, WeightType>> OPEN;
std::unordered_set<NodeType*> CLOSED;
coderodde::HeuristicFunction<NodeType,
WeightType> h(p_target,
layout_map,
metric);
DistanceMap<NodeType, WeightType> d;
ParentMap<NodeType> p;
OPEN.push(new HeapNode<NodeType, WeightType>(p_source, WeightType(0)));
p(p_source) = nullptr;
d(p_source) = WeightType(0);
while (!OPEN.empty())
{
HeapNode<NodeType, WeightType>* p_heap_node = OPEN.top();
NodeType* p_current = p_heap_node->get_node();
OPEN.pop();
delete p_heap_node;
if (*p_current == *p_target)
{
// Found the path.
return traceback_path(p_target, &p);
}
CLOSED.insert(p_current);
// For each child of 'p_current' do...
for (NodeType* p_child : *p_current)
{
if (CLOSED.find(p_child) != CLOSED.end())
{
// The optimal distance from source to p_child is known.
continue;
}
WeightType cost = d(p_current) + w(p_current, p_child);
if (!p.has(p_child) || cost < d(p_child))
{
WeightType f = cost + h(p_child);
OPEN.push(new HeapNode<NodeType, WeightType>(p_child, f));
d(p_child) = cost;
p(p_child) = p_current;
}
}
}
// p_target not reachable from p_source.
return nullptr;
}
template<class T, class FloatType>
class ConstantLayoutMap : public coderodde::LayoutMap<T, FloatType> {
public:
ConstantLayoutMap() : mp_point{new Point3D<FloatType>()} {}
~ConstantLayoutMap()
{
delete mp_point;
}
Point3D<FloatType>*& operator()(T* key)
{
return mp_point;
}
private:
Point3D<FloatType>* mp_point;
};
/***************************************************************************
* This function template implements Dijkstra's shortest path algorithm. *
***************************************************************************/
template<class NodeType, class WeightType = double>
std::vector<NodeType*>*
dijkstra(NodeType* p_source,
NodeType* p_target,
coderodde::AbstractWeightFunction<NodeType, WeightType>& w)
{
ConstantLayoutMap<NodeType, WeightType> layout;
EuclideanMetric<WeightType> metric;
return astar(p_source,
p_target,
w,
layout,
metric);
}
template<class NodeType, class WeightType = double>
std::vector<NodeType*>*
bidirectional_dijkstra(
NodeType* p_source,
NodeType* p_target,
coderodde::AbstractWeightFunction<NodeType, WeightType>& w)
{
std::priority_queue<HeapNode<NodeType, WeightType>*,
std::vector<HeapNode<NodeType, WeightType>*>,
HeapNodeComparison<NodeType, WeightType>> OPENA;
std::priority_queue<HeapNode<NodeType, WeightType>*,
std::vector<HeapNode<NodeType, WeightType>*>,
HeapNodeComparison<NodeType, WeightType>> OPENB;
std::unordered_set<NodeType*> CLOSEDA;
std::unordered_set<NodeType*> CLOSEDB;
DistanceMap<NodeType, WeightType> DISTANCEA;
DistanceMap<NodeType, WeightType> DISTANCEB;
ParentMap<NodeType> PARENTA;
ParentMap<NodeType> PARENTB;
OPENA.push(new HeapNode<NodeType, WeightType>(p_source, 0.0));
OPENB.push(new HeapNode<NodeType, WeightType>(p_target, 0.0));
DISTANCEA(p_source) = WeightType(0);
DISTANCEB(p_target) = WeightType(0);
PARENTA(p_source) = nullptr;
PARENTB(p_target) = nullptr;
NodeType* p_touch = nullptr;
WeightType best_cost = std::numeric_limits<WeightType>::max();
while (!OPENA.empty() && !OPENB.empty())
{
if (OPENA.top()->get_distance() +
OPENB.top()->get_distance() >= best_cost)
{
return traceback_path(p_touch, &PARENTA, &PARENTB);
}
if (OPENA.top()->get_distance() < OPENB.top()->get_distance())
{
HeapNode<NodeType, WeightType>* p_heap_node = OPENA.top();
NodeType* p_current = p_heap_node->get_node();
OPENA.pop();
delete p_heap_node;
CLOSEDA.insert(p_current);
for (NodeType* p_child : *p_current)
{
if (CLOSEDA.find(p_child) != CLOSEDA.end())
{
continue;
}
WeightType g = DISTANCEA(p_current) + w(p_current, p_child);
if (!PARENTA.has(p_child) || g < DISTANCEA(p_child))
{
OPENA.push(new HeapNode<NodeType,
WeightType>(p_child, g));
DISTANCEA(p_child) = g;
PARENTA(p_child) = p_current;
if (CLOSEDB.find(p_child) != CLOSEDB.end())
{
WeightType path_len = g + DISTANCEB(p_child);
if (best_cost > path_len)
{
best_cost = path_len;
p_touch = p_child;
}
}
}
}
}
else
{
HeapNode<NodeType, WeightType>* p_heap_node = OPENB.top();
NodeType* p_current = p_heap_node->get_node();
OPENB.pop();
delete p_heap_node;
CLOSEDB.insert(p_current);
typename coderodde::AbstractGraphNode<NodeType>::ParentIterator*
p_iterator = p_current->parents();
for (NodeType* p_parent : *p_iterator)
{
if (CLOSEDB.find(p_parent) != CLOSEDB.end())
{
continue;
}
WeightType g = DISTANCEB(p_current) +
w(p_parent, p_current);
if (!PARENTB.has(p_parent) || g < DISTANCEB(p_parent))
{
OPENB.push(new HeapNode<NodeType,
WeightType>(p_parent, g));
DISTANCEB(p_parent) = g;
PARENTB(p_parent) = p_current;
if (CLOSEDA.find(p_parent) != CLOSEDA.end())
{
WeightType path_len = g + DISTANCEA(p_parent);
if (best_cost > path_len)
{
best_cost = path_len;
p_touch = p_parent;
}
}
}
}
}
}
return nullptr;
}
class DirectedGraphNode : public coderodde::AbstractGraphNode<DirectedGraphNode> {
public:
DirectedGraphNode(std::string name) :
coderodde::AbstractGraphNode<DirectedGraphNode>(name)
{
this->m_name = name;
}
void connect_to(coderodde::DirectedGraphNode* p_other)
{
m_out.insert(p_other);
p_other->m_in.insert(this);
}
bool is_connected_to(coderodde::DirectedGraphNode* p_other) const
{
return m_out.find(p_other) != m_out.end();
}
void disconnect_from(coderodde::DirectedGraphNode* p_other)
{
m_out.erase(p_other);
p_other->m_in.erase(this);
}
ParentIterator* parents()
{
m_iterator.set_list(&m_in);
return &m_iterator;
}
typename Set::iterator begin() const
{
return m_out.begin();
}
typename Set::iterator end() const
{
return m_out.end();
}
friend std::ostream& operator<<(std::ostream& out,
DirectedGraphNode& node)
{
return out << "[DirectedGraphNode " << node.get_name() << "]";
}
private:
Set m_in;
Set m_out;
ParentIterator m_iterator;
};
class DirectedGraphWeightFunction :
public AbstractWeightFunction<coderodde::DirectedGraphNode, double> {
public:
double& operator()(coderodde::DirectedGraphNode* node1,
coderodde::DirectedGraphNode* node2)
{
if (m_map.find(node1) == m_map.end())
{
m_map[node1] =
new std::unordered_map<coderodde::DirectedGraphNode*,
double>();
}
return (*m_map.at(node1))[node2];
}
private:
std::unordered_map<coderodde::DirectedGraphNode*,
std::unordered_map<coderodde::DirectedGraphNode*, double>*> m_map;
};
}
#endif // SHORTEST_PATH_H
main.cpp:
#include <iostream>
#include <chrono>
#include <random>
#include <string>
#include <tuple>
#include <vector>
#include "shortest_path.h"
using std::cout;
using std::endl;
using std::get;
using std::make_tuple;
using std::mt19937;
using std::random_device;
using std::string;
using std::to_string;
using std::tuple;
using std::vector;
using std::uniform_int_distribution;
using std::uniform_real_distribution;
using std::chrono::duration_cast;
using std::chrono::milliseconds;
using std::chrono::system_clock;
using coderodde::astar;
using coderodde::bidirectional_dijkstra;
using coderodde::dijkstra;
using coderodde::DirectedGraphNode;
using coderodde::DirectedGraphWeightFunction;
using coderodde::EuclideanMetric;
using coderodde::HeuristicFunction;
using coderodde::LayoutMap;
using coderodde::Point3D;
/*******************************************************************************
* Randomly selects an element from a vector. *
*******************************************************************************/
template<class T>
T& choose(vector<T>& vec, mt19937& rnd_gen)
{
uniform_int_distribution<size_t> dist(0, vec.size() - 1);
return vec[dist(rnd_gen)];
}
/*******************************************************************************
* Creates a random point in a plane. *
*******************************************************************************/
static Point3D<double>* create_random_point(const double xlen,
const double ylen,
mt19937& random_engine)
{
uniform_real_distribution<double> xdist(0.0, xlen);
uniform_real_distribution<double> ydist(0.0, ylen);
return new Point3D<double>(xdist(random_engine),
ydist(random_engine),
0.0);
}
/*******************************************************************************
* Creates a random directed, weighted graph. *
*******************************************************************************/
static tuple<vector<DirectedGraphNode*>*,
DirectedGraphWeightFunction*,
LayoutMap<DirectedGraphNode, double>*>
create_random_graph(const size_t length,
const double area_width,
const double area_height,
const float arc_load_factor,
const float distance_weight,
mt19937& random_gen)
{
vector<DirectedGraphNode*>* p_vector = new vector<DirectedGraphNode*>();
LayoutMap<DirectedGraphNode, double>* p_layout =
new LayoutMap<DirectedGraphNode, double>();
for (size_t i = 0; i < length; ++i)
{
DirectedGraphNode* p_node = new DirectedGraphNode(to_string(i));
p_vector->push_back(p_node);
}
for (DirectedGraphNode* p_node : *p_vector)
{
Point3D<double>* p_point = create_random_point(area_width,
area_height,
random_gen);
(*p_layout)(p_node) = p_point;
}
DirectedGraphWeightFunction* p_wf = new DirectedGraphWeightFunction();
EuclideanMetric<double> euclidean_metric;
size_t arcs = arc_load_factor > 0.9 ?
length * (length - 1) :
(arc_load_factor < 0.0 ? 0 : size_t(arc_load_factor * length * length));
while (arcs > 0)
{
DirectedGraphNode* p_head = choose(*p_vector, random_gen);
DirectedGraphNode* p_tail = choose(*p_vector, random_gen);
Point3D<double>* p_head_point = (*p_layout)(p_head);
Point3D<double>* p_tail_point = (*p_layout)(p_tail);
const double cost = euclidean_metric(*p_head_point,
*p_tail_point);
(*p_wf)(p_tail, p_head) = distance_weight * cost;
p_tail->connect_to(p_head);
--arcs;
}
return make_tuple(p_vector, p_wf, p_layout);
}
/*******************************************************************************
* Returns the amount of milliseconds since Unix epoch. *
*******************************************************************************/
static unsigned long long get_milliseconds()
{
return duration_cast<milliseconds>(system_clock::now()
.time_since_epoch()).count();
}
/*******************************************************************************
* Checks that a path has all needed arcs. *
*******************************************************************************/
static bool is_valid_path(vector<DirectedGraphNode*>* p_path)
{
for (size_t i = 0; i < p_path->size() - 1; ++i)
{
if (!(*p_path)[i]->is_connected_to((*p_path)[i + 1]))
{
return false;
}
}
return true;
}
/*******************************************************************************
* Computes the length (cost) of a path. *
*******************************************************************************/
static double compute_path_length(vector<DirectedGraphNode*>* p_path,
DirectedGraphWeightFunction* p_wf)
{
double cost = 0.0;
for (size_t i = 0; i < p_path->size() - 1; ++i)
{
cost += (*p_wf)(p_path->at(i), p_path->at(i + 1));
}
return cost;
}
/*******************************************************************************
* The demo. *
*******************************************************************************/
int main(int argc, const char * argv[]) {
random_device rd;
mt19937 random_gen(rd());
cout << "Building a graph..." << endl;
tuple<vector<DirectedGraphNode*>*,
DirectedGraphWeightFunction*,
LayoutMap<DirectedGraphNode, double>*> graph_data =
create_random_graph(50000,
1000.0,
700.0,
0.0001f,
1.2f,
random_gen);
DirectedGraphNode *const p_source = choose(*std::get<0>(graph_data),
random_gen);
DirectedGraphNode *const p_target = choose(*std::get<0>(graph_data),
random_gen);
cout << "Source: " << *p_source << endl;
cout << "Target: " << *p_target << endl;
EuclideanMetric<double> em;
unsigned long long ta = get_milliseconds();
vector<DirectedGraphNode*>* p_path1 =
astar(p_source,
p_target,
*get<1>(graph_data),
*get<2>(graph_data),
em);
unsigned long long tb = get_milliseconds();
cout << endl;
cout << "A* path:" << endl;
if (!p_path1)
{
cout << "No path for A*!" << endl;
return 0;
}
for (DirectedGraphNode* p_node : *p_path1)
{
cout << *p_node << endl;
}
cout << "Time elapsed: " << tb - ta << " ms." << endl;
cout << std::boolalpha;
cout << "Is valid path: " << is_valid_path(p_path1) << endl;
cout << "Cost: " << compute_path_length(p_path1, get<1>(graph_data)) << endl;
cout << endl;
cout << "Dijkstra path:" << endl;
ta = get_milliseconds();
vector<DirectedGraphNode*>* p_path2 =
dijkstra(p_source,
p_target,
*get<1>(graph_data));
tb = get_milliseconds();
if (!p_path2)
{
cout << "No path for Dijkstra's algorithm!" << endl;
return 0;
}
for (DirectedGraphNode* p_node : *p_path2)
{
cout << *p_node << endl;
}
cout << "Time elapsed: " << tb - ta << " ms." << endl;
cout << "Is valid path: " << is_valid_path(p_path2) << endl;
cout << "Cost: " << compute_path_length(p_path2, get<1>(graph_data)) << endl;
cout << endl;
cout << "Bidirectional Dijkstra path:" << endl;
ta = get_milliseconds();
vector<DirectedGraphNode*>* p_path3 =
bidirectional_dijkstra(p_source,
p_target,
*get<1>(graph_data));
tb = get_milliseconds();
if (!p_path3)
{
cout << "No path for bidirectional Dijkstra's algorithm!" << endl;
return 0;
}
for (DirectedGraphNode* p_node : *p_path3)
{
cout << *p_node << endl;
}
cout << "Time elapsed: " << tb - ta << " ms." << endl;
cout << "Is valid path: " << is_valid_path(p_path3) << endl;
cout << "Cost: " << compute_path_length(p_path3, get<1>(graph_data)) << endl;
vector<coderodde::DirectedGraphNode*>* p_vec = get<0>(graph_data);
while (!p_vec->empty())
{
delete p_vec->back();
p_vec->pop_back();
}
delete get<0>(graph_data);
delete get<1>(graph_data);
delete get<2>(graph_data);
return 0;
}
Answer: You have posted a lot of code, which makes it hard (for me) to find structural issues. However, a few style items directly caught my attention:
All your code seems to be in a single header file. When you write libraries (or a framework, as you indicate) you want to expose your end user to as few details as possible. Consider splitting the logic into a C++ implementation file and a header file.
I see a lot of functions accepting pointers. Consider using references, e.g., with is_valid_path - by using references you can reduce the ASCII art in this line: !(*p_path)[i]->is_connected_to((*p_path)[i + 1])
The function create_random_graph takes floats and doubles, perhaps a simplified interface with only doubles makes usage simpler.
There are a lot of new statements. Especially with trivial functions such as create_random_point consider returning by value or using one of the smart pointers from the <memory> header.
There are no code comments. For example: When you compare arc_load_factor I can only guess why you used 0.9. Similarly vague issues arise when you cast floats to size_t.
I cannot vouch for your use of std::priority_queue, but the standard does not impose very strict performance upper bounds. For example: a Fibonacci heap achieves amortized O(1) on some operations for which the C++ standard requires only O(log n).
Highly personal, but I'd prefer std::function over functors. Especially when operator() is not declared const. Theoretically, your operator could maintain an internal state.
The <chrono> header has all kinds of type-safe features and operators. With your get_milliseconds function you reduce all the glory to an unsigned long long. Just use std::chrono::time_point, it has 'difference' operators defined.
I find it confusing to see some variable names completely in uppercase. | {
"domain": "codereview.stackexchange",
"id": 13247,
"tags": "c++, algorithm, graph, library, pathfinding"
} |
What are the advantages and disadvantages of PN sequence over Walsh code | Question: I would like to ask what the advantages and disadvantages of PN sequences over Walsh codes are, and also of Gold codes over Walsh codes.
I have checked online, but I didn't get clear explanation for that.
Thank you
Answer: A Pseudo-random noise (PRN) sequence is a closer approximation to white random noise in that its energy is spread equally over the occupied frequency band (The energy is spread as a Sinc function if reconstructed with pulses just because of the pulse shape but the underlying code as a stream of impulses has a more uniform distribution), and its auto-correlation function approximates a single impulse; providing a strong correlation when the sequence is aligned with a copy of itself in time and very low correlation when there is any offset between the sequence and a copy of itself. This property makes it ideal for time alignment such as used in acquisition to find the start of a packet, or in RADAR and sounding applications to resolve a time delay with high precision. (This post details such an example with a PRN sequence: Autocorrelation to diagnose faults ) Also due to it filling an occupied frequency band evenly, PRN sequences are also ideal for training patterns to equalize a channel since the equalizer can resolve solutions only at frequencies where a signal is present.
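The impulse-like autocorrelation is easy to reproduce numerically with a short maximal-length (m-) sequence. The sketch below uses a 3-stage Fibonacci LFSR for the primitive polynomial $x^3 + x^2 + 1$ (register size, taps, and seed chosen purely for illustration): for the resulting length-7 sequence, the periodic autocorrelation is 7 at zero lag and exactly -1 at every other lag.

```python
def msequence_3stage():
    # Fibonacci LFSR for the primitive polynomial x^3 + x^2 + 1 (illustrative choice)
    b2, b1, b0 = 0, 0, 1          # any non-zero seed works
    bits = []
    for _ in range(7):            # period of a 3-stage m-sequence: 2^3 - 1 = 7
        bits.append(b0)           # output the low bit
        fb = b0 ^ b1              # feedback from the tap positions
        b2, b1, b0 = fb, b2, b1   # shift right, feed back into the top bit
    return bits

def periodic_autocorrelation(bits, lag):
    # map {0,1} -> {+1,-1} chips and correlate against a cyclic shift of itself
    chips = [1 - 2 * b for b in bits]
    n = len(chips)
    return sum(chips[i] * chips[(i + lag) % n] for i in range(n))
```

Any non-zero lag gives the same flat -1 floor, which is why a receiver sliding a local replica over the incoming stream sees a single sharp correlation peak at the correct alignment.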
Walsh Codes in contrast are not spread equally over frequency (as is clear if you consider the sequence of all 1's is a Walsh Code), but are completely orthogonal when time aligned. Given Walsh Codes are always an even number of digits, when you multiply one code to another in the same set and sum the digits, it will always add to 0. PRN codes generated with linear feedback shift registers (LFSR's) are always an odd number of digits, so are not able to add to complete 0 (be completely orthogonal) and further different codes can have higher cross correlations to each other. Walsh Codes are ideal for allocating users or resources in orthogonal code space when you have tight control of the time alignment of each user or resource (such as broadcasting to multiple users from a single transmitter). This orthogonality property is disrupted when the Walsh Codes have a time delay between them. This post demonstrates the channel or resource allocation and also shows the similarity to the DFT which is also simply another set of orthogonal codes: How CDMA receiver extract it's corresponding data from the receiving modulated & superposition-ed signal?
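Both properties are easy to check numerically: the rows of a Sylvester-type Hadamard matrix form a Walsh code set (the first row is the all-ones code mentioned above), distinct codes sum to exactly zero when aligned, and a single-chip misalignment can be catastrophic. In the length-8 set sketched below, code 2 cyclically shifted by one chip happens to become identical to code 3:

```python
def walsh_codes(n):
    # Sylvester construction: H_{2k} = [[H, H], [H, -H]]; n must be a power of 2
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

W = walsh_codes(8)
# one-chip cyclic misalignment: in this set, code 2 shifted left equals code 3
shifted = W[2][1:] + W[2][:1]
```

So a receiver despreading with code 3 would fully "hear" a one-chip-late user 2, which is why Walsh channelization needs tight time alignment across users.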
Gold Codes are generated by adding two LFSR outputs together, each generated with a separate polynomial. The advantage of this is we get many more usable codes of a given order, and the disadvantage is higher cross-correlation and sidelobes of the auto-correlation. This is what is used to generate the codes for GPS satellites and I explain Gold Codes in more detail specific to the GPS implementation at this link:
GPS Coarse Acquisition PRN Codes | {
"domain": "dsp.stackexchange",
"id": 8312,
"tags": "digital-communications, spread-spectrum"
} |
Half cell reactions for oxidation of water by acidified solution of potassium dichromate | Question: From Chemguide:
I don't understand the first equation. The chromium atoms in the dichromate ion receive 6 electrons, and their oxidation state declines from 6+ to 3+. But where do the 14 hydrogen cations get their 14 electrons in order to form water with the seven oxygens?
Answer: Look closely at the oxidation state of all atoms in the equation.
$$\ce{Cr2O7^2- (aq) + 14 H+ (aq) + 6e- <=> 2Cr^3+ (aq) + 7 H2O (l)}$$
The oxidation state of chromium changes from +6 to +3; this is what the electrons in the equation are needed for. Oxygen retains its oxidation state of -2 on both sides of the equation, and hydrogen likewise retains its +1 oxidation state.
In the other equation
$$\ce{O2 (g) + 4H+ (aq) +4e- <=> 2H2O (l)}$$
the oxygen gets reduced by changing its oxidation state from 0 to -2. Again for hydrogen, the oxidation state of +1 is retained.
The total equation for this process would therefore be
$$\ce{2 Cr2O7^2- (aq) + 16 H+ (aq) <=> 4 Cr^3+ (aq) + 8 H2O (l) + 3 O2 (g)}.$$
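Combining the two half-reactions (doubling the dichromate reduction to consume 12 electrons, tripling the reversed oxygen half-reaction to supply them) fixes the stoichiometric coefficients at 2 and 16 on the left and 4, 8, 3 on the right. A small bookkeeping sketch confirms that atoms and charge then balance (the species table is hand-written for just this reaction):

```python
# Each species: (atom counts, charge)
SPECIES = {
    "Cr2O7^2-": ({"Cr": 2, "O": 7}, -2),
    "H+":       ({"H": 1}, +1),
    "Cr^3+":    ({"Cr": 1}, +3),
    "H2O":      ({"H": 2, "O": 1}, 0),
    "O2":       ({"O": 2}, 0),
}

def totals(side):
    # sum up atoms and charge for a list of (species, coefficient) pairs
    atoms, charge = {}, 0
    for name, coeff in side:
        formula, q = SPECIES[name]
        charge += coeff * q
        for el, n in formula.items():
            atoms[el] = atoms.get(el, 0) + coeff * n
    return atoms, charge

left  = totals([("Cr2O7^2-", 2), ("H+", 16)])
right = totals([("Cr^3+", 4), ("H2O", 8), ("O2", 3)])
```

Both sides come out to 4 Cr, 14 O, 16 H and a net charge of +12.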
From the standard potentials you can see that this reaction may happen; they do not tell you anything about the conditions under which the reaction will happen. There are a lot more factors to consider. | {
"domain": "chemistry.stackexchange",
"id": 5615,
"tags": "redox"
} |
How is the complexity of recursive algorithms calculated and do they admit better complexity than non-recursive algorithms? | Question: How are asymptotic time complexities calculated for recursive algorithms?
Recursive algorithms call themselves and therefore take up more space compared to non-recursive algorithms. But are they better taking time complexity into account? If they are better than non-recursive algorithms, then how are they better and why (not)?
Answer: When the design of an algorithm is ready, one can evaluate its running time. If one wishes to implement the algorithm, (s)he can do so recursively or iteratively. It's just an implementation detail; it won't affect the asymptotic running time.
As an example, consider the problem of sorting an array of size $n$.
There are a lot of algorithms to solve this problem with a lot of different time complexities. Two examples are:
Merge Sort, which runs in $O(n\times log(n))$;
Insertion Sort, which runs in $O(n^2)$.
Both algorithms can either be implemented using a recursive approach or an iterative approach. It will not affect their asymptotical running time.
Assume we're given a recursive version of Merge Sort and we wish to evaluate its running time.
func mergesort( var a as array )
if ( n == 1 ) return a
var l1 as array = a[0] ... a[n/2]
var l2 as array = a[n/2+1] ... a[n]
l1 = mergesort( l1 )
l2 = mergesort( l2 )
return merge( l1, l2 )
end func
This pseudocode takes an array as input, divides the given array in half, recursively calls mergesort on both halves and then merges both halves. The merging process looks like
func merge( var a as array, var b as array )
var c as array
while ( a and b have elements )
if ( a[0] > b[0] )
add b[0] to the end of c
remove b[0] from b
else
add a[0] to the end of c
remove a[0] from a
while ( a has elements )
add a[0] to the end of c
remove a[0] from a
while ( b has elements )
add b[0] to the end of c
remove b[0] from b
return c
end func
The (iterative) merge function takes two sorted arrays as input. It compares the first elements of both arrays; the smaller of the two is appended to the end of a new array $c$ and removed from its source array. This is repeated until both given arrays are empty; the now fully filled array $c$, which is sorted, will be returned.
One can calculate the running time of merge sort by solving its recurrence:
$T(n) = 2T(\frac{n}{2}) + O(n)$.
The running time of any recursive algorithm can be represented by an equation similar to the one above.
$T(n)$ represents our asymptotic running time. The term $2T(\frac{n}{2})$ is there because in our algorithm we recursively call the same algorithm twice and pass it an array with input size which is half of our original input size. The term $O(n)$ is there because we call the algorithm merge afterwards, which runs in $O(n)$ (do you see why?).
To solve the recurrence, we can apply the master theorem.
Case two applies here (do you see why?), so we can conclude that merge sort runs in $T(n) = \Theta(n \times log(n))$.
I will not discuss an iterative implementation of merge sort here. But if you were to look it up and evaluate its running time, you would see that it runs in the same asymptotic time.
To summarize, the design/general idea of an algorithm defines its running time. Recursive/Iterative versions of it are just an implementation detail.
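To make the equivalence concrete, here is a sketch of the pseudocode above translated into runnable Python (chosen for brevity; the structure ports directly to any language), together with a bottom-up iterative variant. Both produce the same sorted output and share the $O(n \times log(n))$ bound:

```python
def merge(a, b):
    # merge two sorted lists into one sorted list -- O(len(a) + len(b))
    c, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            c.append(a[i]); i += 1
        else:
            c.append(b[j]); j += 1
    c.extend(a[i:])
    c.extend(b[j:])
    return c

def mergesort_recursive(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(mergesort_recursive(a[:mid]), mergesort_recursive(a[mid:]))

def mergesort_iterative(a):
    # bottom-up: repeatedly merge adjacent runs of width 1, 2, 4, ...
    runs = [[x] for x in a] or [[]]
    while len(runs) > 1:
        runs = [merge(runs[i], runs[i + 1]) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0]
```

The iterative version performs the same $\Theta(log(n))$ levels of $O(n)$ merge work as the recursion tree, just driven by a loop instead of the call stack.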
I sense you are also wondering what the advantage/disadvantage of a recursive algorithm is over its iterative sibling since they're equivalent in running time anyway.
The disadvantage you mention (it takes more space on the stack) is generally a very small one and is only a significant problem in a few circumstances. The advantage of recursive algorithms could be that their implementation results in much cleaner, shorter and more readable code. Merge sort is a nice example to express this advantage. | {
"domain": "cs.stackexchange",
"id": 6502,
"tags": "algorithm-analysis, runtime-analysis, recursion"
} |
Can a diver swim a short distance in great depths without being physically crushed by the pressure? | Question: I recently saw "The Abyss".
Does it make sense that they do dives in these depths (700m) with soft suits?
Also - what is all the depressurization talk about? Why do divers need to depressurize long periods after they resurface? Does that mean crew members can't escape drowning submarines?
Answer: A diver's body is basically made of water, and water is incompressible to a good approximation over the relevant pressure range. So the physiological effect of the water pressure itself will be negligible: I expect there is no problem with diving at 700m in soft suits in that sense.
However, in order for the diver to inflate her lungs, she must have air supplied at the same pressure as the surrounding water. Breathing pressurised gas brings its own set of problems. The most prominent of these is a serious class of injuries called "the bends". This is a danger because the pressure increases the solubility of nitrogen gas in the blood and tissues of the body. The concentration of nitrogen dissolved in the diver's tissues increases with the time spent at depth, as the diver breathes more and more gas. If the diver then ascends too quickly, the sudden decrease in pressure causes the nitrogen gas to come out of solution and form bubbles. Potentially this results in excruciating pain or death.
Submarines have a rigid outer shell which allows the air inside to remain at atmospheric pressure even at great depth. So as long as the poor sailors can hold their breath all the way up, then there is pretty much no risk of the bends. More likely, submarine escapees will require some compressed air source if they are to have any chance of making it alive to the surface from the depths usually plumbed by submarines. In this case decompression illness does become an issue.
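For a sense of scale at the depths discussed, the ambient pressure follows from $P = P_0 + \rho g h$. A quick sketch, assuming a typical seawater density of about 1025 kg/m³:

```python
ATM = 101_325.0      # Pa, one standard atmosphere
RHO_SEA = 1025.0     # kg/m^3, typical seawater density (assumed)
G = 9.81             # m/s^2

def ambient_pressure_atm(depth_m):
    """Absolute pressure at a given depth, in atmospheres: P = P0 + rho*g*h."""
    return (ATM + RHO_SEA * G * depth_m) / ATM
```

At 700 m this comes out to roughly 70 atmospheres, which is the pressure the diver's breathing gas must be supplied at.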
If I remember the Abyss well enough, the kind of deep diving they were doing would have required weeks and weeks of gradual decompression in a hyperbaric chamber before the return to normal surface life. The idea of trained divers forgetfully decompressing themselves by accident from 70 atmospheres to 1 atmosphere in a matter of seconds is so ridiculous that I burst out laughing every time I watch the end of that film. It's still an absolute classic, though :) | {
"domain": "physics.stackexchange",
"id": 15030,
"tags": "water, pressure"
} |
Why does DNA have two sets of bases? | Question: DNA has two sets of bases. Why is this the case? If one gets mutated, does the other as well?
Answer: One could argue that any informative polymer must have more than two different monomers in its primary structure in order to contain biological information. One could make an analogy to a language in which every word is built from only two letters. Any word in such a language, like aaaabbbbb or ababababa for example, would essentially be devoid of information, as we can predict the sequence of letters. So it is clear that if DNA was to be the molecule carrying the genetic information of a particular cell, then its primary structure must contain more than two different monomers. Now, answering exactly why it uses 4 different nucleotides is quite hard.
When you use an even number of nucleotides, all the nitrogen bases can have their complementary base, which allows for the formation of two or three stabilizing hydrogen bonds, in sort of a one-to-one relationship (cytosine to guanine and adenine to thymine). These hydrogen bonds can form because the functional groups of the complementary bases are arranged in space in such a way that allows the formation of these stabilizing electrostatic interactions. If DNA used any odd number of nucleotides, like $3$ or $5$, this one-to-one relationship would not be possible, since you would always have an extra base without a complementary partner. If DNA uses $3$ different nucleotides, let's say thymine, adenine and cytosine, we can see that in this example, cytosine lacks a complementary base to bind to. The same can be applied to a usage of $5$ different bases, and in general, to a usage of $2n + 1$ bases. Then, the extra base must bind to a non-complementary base and form a much weaker interaction with it, as the functional groups of the bases will not complement in space well enough, which will decrease the stability of the molecule. So we can see that any DNA molecule that used an odd number of nucleotides would be far more unstable than any other that used an even number of nucleotides.
Now, in terms of an even number of nucleotides, from above, we can see that DNA could not use $2$ different nucleotides, as it would hardly contain any biological information. The usage of an even number of nucleotides greater than $4$, such as $6$ or $8$, seems to be quite unfavorable, as this would imply that we would have to recognize hundreds of different codons. If we pick greater even numbers than these two, the situation only becomes worse, as the number of codons would grow rapidly ($n^3$) into the thousands. So it seems that using $4$ different nucleotides is, by far, the best pathway to take in terms of stability and practicality.
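The codon arithmetic behind this is simple: with $n$ distinct bases and codons of fixed length 3, there are $n^3$ possible codons. A one-line sketch:

```python
def codon_count(n_bases, codon_length=3):
    # distinct codons = (number of bases) ** (codon length)
    return n_bases ** codon_length
```

So 2 bases give only 8 codons, 4 bases give the familiar 64, while 6 or 8 bases would already require recognizing 216 or 512 distinct codons.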
As for the second question, no, a gene mutation on a particular base will not necessarily alter the complementary base, but it will affect the number of hydrogen bonds the pair can form, and therefore, the stability of the molecule.
NOTE: Credits to Bryan Krause for his insight. | {
"domain": "biology.stackexchange",
"id": 10441,
"tags": "dna"
} |
Not seeing a diffraction pattern, what could be the cause? | Question: Basically took a piece of paper, cut out a small slit with some scissors, and held my phone's flashlight right on the slit. What could be causing the lack of diffraction?
Does the fact that the light source is held very close to the slit matter? Or is it because of the fact that an LED light source was used?
Answer: There are a number of problems with your experimental set up which are really easily resolved, and you will be able to see a white light diffraction pattern with very little effort.
What you need :
Some thin black (opaque) card about $\frac 13$ mm thick. A pile of 15 sheets will be approximately 5 mm thick.
A really sharp Stanley/utility knife - I used a new blade.
A ruler preferably metal
The first problem is that you have used an extended source, the LED in your phone.
Cut out a 3 mm square of the black card and cut through it in one go a line about 10 mm long; this you will place on top of the light source in your phone.
Your next problem was that your slit was probably too wide.
So on another bit of your black card cut through it for a distance of about 10 mm.
This is the single slit which is going to produce the diffraction pattern.
Switch on the light in the phone and place the phone on a table.
Cover the light with the 3 mm square card, adjusting its position so that the emergent light has maximum intensity.
Hold the other card about 400 mm above the card which is on the phone and orientate the slits so that they are approximately parallel to one another.
Look at the bottom slit through the top slit with the tip of your nose touching the top card.
Do not have your eye too close to the top slit.
You should see a series of white/coloured bands as illustrated in the diagram below.
Move and rotate the top slit about a vertical axis to improve the visibility of your single slit diffraction fringes.
When you used an extended source each part of the source resulted in a diffraction pattern offset from the other diffraction patterns that were being produced.
So you were seeing diffraction patterns offset from one another $\Rightarrow$ no visible diffraction pattern.
When you cut a slit you made it too wide, which resulted in the fringes being too close to one another for you to see them.
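To see why the slit width matters so much, the spacing between adjacent single-slit minima on a screen at distance $L$ is roughly $\lambda L / a$. A small sketch, assuming a mid-visible wavelength of 550 nm and the 400 mm viewing distance used above (the slit widths are illustrative guesses for a scissors cut versus a knife cut):

```python
def fringe_spacing_mm(wavelength_nm, screen_distance_mm, slit_width_mm):
    # spacing between adjacent single-slit minima: dy ~= (lambda * L) / a
    lam_m = wavelength_nm * 1e-9
    L_m = screen_distance_mm * 1e-3
    a_m = slit_width_mm * 1e-3
    return (lam_m * L_m / a_m) * 1e3  # back to millimetres
```

A roughly 1 mm scissors-cut slit gives fringes only about 0.2 mm apart, far too fine to resolve by eye, while a 0.1 mm knife-cut slit spreads them to about 2 mm.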
A double slit is much harder to make! | {
"domain": "physics.stackexchange",
"id": 48673,
"tags": "optics, double-slit-experiment, home-experiment, experimental-technique"
} |
How to detect when move_base is executing recovery behaviors | Question:
Hi all,
My problem is the following. When a robot is stuck and move_base is not able to find a valid plan I can see on rqt_console the corresponding ERROR and WARN messages:
In this case, there's no path to the goal and move_base starts its recovery behaviors.
My question is: if I want to write a simple subscriber that is able to detect this situation (e.g. so that it can warn somehow the user) to which topic should I listen and how can I intercept this message?
Thanks.
Originally posted by schizzz8 on ROS Answers with karma: 183 on 2020-01-31
Post score: 0
Answer:
The page about rqt_console has the answer to this: "rqt_console is a viewer in the rqt package that displays messages being published to rosout". You would also be able to see this in rqt_graph by deselecting the hide debug box so that nodes that are providing information about the system are visible (nodes considered debug are hidden by default, as they are involved in introspecting the system rather than directly contributing to the robot's behaviour, so they are usually a distraction from what you are trying to see in the graph).
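A minimal sketch of the subscriber the question asks for, listening to the aggregated /rosout_agg feed (the same one rqt_console displays). The node-name filter, the warning threshold, and the "move_base_watchdog" node name are illustrative assumptions; the fields and level constants come from the rosgraph_msgs/Log message:

```python
def is_move_base_problem(logger_name, level, threshold=4):
    """True for WARN-or-worse messages whose publishing node mentions move_base.

    Levels follow rosgraph_msgs/Log: DEBUG=1, INFO=2, WARN=4, ERROR=8, FATAL=16.
    Matching on "move_base" in the node name is an assumption about how the
    messages are attributed.
    """
    return "move_base" in logger_name and level >= threshold

def make_rosout_watcher():
    # ROS wiring; imports are local so the filter above stays testable without ROS
    import rospy
    from rosgraph_msgs.msg import Log

    def callback(msg):
        if is_move_base_problem(msg.name, msg.level):
            # warn the user however is convenient; re-logging is the simplest
            rospy.logwarn("move_base reported a problem: %s", msg.msg)

    rospy.init_node("move_base_watchdog")
    return rospy.Subscriber("/rosout_agg", Log, callback)
```

Only the pure `is_move_base_problem` helper is ROS-independent; `make_rosout_watcher` needs a running ROS master and is shown purely as wiring.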
Originally posted by nickw with karma: 1504 on 2020-02-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 34360,
"tags": "navigation, move-base, ros-kinetic"
} |
Finding the pair of nodes with maximum distance in an arbitrary rooted tree | Question: Suppose we are given an arbitrary rooted tree. We want to find two nodes that have the maximum distance among all pairs of nodes. I am looking for an algorithm with time complexity $\mathcal{O}(n)$, where $n$ is the number of nodes in the tree. Note that there may be more than one pair of nodes that satisfy this condition, but finding any one pair is enough.
Also, I am writing code in Java, and the input format is as follows: the first line gives the number of nodes, and the next $n-1$ lines give two numbers $u_i$ and $v_i$ each, indicating that there is an edge between nodes $u_i$ and $v_i$. What would be a good data structure to store and represent the rooted tree for this problem and implement the algorithm?
Any help is greatly appreciated!
Answer: I don't know much about Java, but there could be several ways to represent the tree:
an array of $n$ adjacency lists, like an undirected graph;
an array of $n$ integers, that contains the parent of each node (and so that the root is its own parent);
an inductive structure, with a node and the list of its children.
The idea of the algorithm is simple:
find a leaf $x$ of maximum depth;
find the farthest node $y$ from $x$ (either by using BFS, or by finding a leaf of maximum depth in the tree rooted in $x$).
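A sketch of this two-pass idea in Python (the question mentions Java; the structure ports over directly), storing the tree as an adjacency list built from the $u_i$, $v_i$ edge pairs. Starting the first BFS from an arbitrary node plays the same role as picking a deepest leaf of the tree rooted there:

```python
from collections import deque

def farthest(adj, start):
    # BFS returning (node, distance) for a vertex farthest from `start` -- O(n)
    dist = {start: 0}
    queue = deque([start])
    far_node = start
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[far_node]:
                    far_node = v
                queue.append(v)
    return far_node, dist[far_node]

def diameter_endpoints(adj):
    # two BFS passes over the adjacency-list tree: O(n) total
    x, _ = farthest(adj, next(iter(adj)))
    y, d = farthest(adj, x)
    return x, y, d
```

The adjacency list here is a plain dict of lists; in Java the same thing would be, say, a `Map<Integer, List<Integer>>` filled while reading the $n-1$ edge lines.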
Let $V$ be the set of nodes of the tree.
To prove that $d(x, y) = \max\limits_{(u,v) \in V^2}d(u, v)$, consider two nodes $u$ and $v$ that maximize $d(u, v)$, let $w$ be their lowest common ancestor, and consider two cases:
if $w$ is an ancestor of $x$: assume without loss of generality that $u$ lies at least as deep as $v$. Since the leaf $x$ has depth at least that of $u$ and both descend from $w$, we get $d(x, w) \geqslant d(u, w)$, so:
$$d(u, v) = d(u, w) + d(w, v) \leqslant d(x, w) + d(w, v) = d(x, v)$$
if $w$ is not an ancestor of $x$, then let $z$ be the lowest common ancestor between $w$ and $x$. Since $x$ has greater depth than $w$, then:
$$d(u, v) = d(u, w) + d(w, v) \leqslant d(u, z) + d(w, v) \leqslant d(x, w) + d(w, v) = d(x, v)$$
In both cases, the maximum distance can be reached with $x$ as one of the two nodes, which proves the correctness of the algorithm. | {
"domain": "cs.stackexchange",
"id": 21700,
"tags": "algorithms, graphs, data-structures, trees, graph-traversal"
} |
Why does the flow of charge even create electricity? | Question: Okay this is a question I’ve asked a lot of places but I always get its the flow of charges and it’s like a property. What I don’t really understand is how is this flow of charges creating electric current.
My guess is that as these charges get closer to the desired potential (to satisfy the potential difference), energy is released. This happens continuously and is the reason for electric current, at least in a conductor.
Can I get some insight into what is happening down at the quantum level?
Answer: First of all you have to understand that flow of electrical current and dissipation of energy are two completely different concepts.
Electrical current: The flow of electrical charges is called electrical current. This is like a definition and has nothing to do with dissipation. There are systems where current flows without dissipation. At the elementary level, you get the electrical current $I$ if you count how many elementary charges $e$ cross a specific cross-sectional area of your "conductor" per second. Mathematically this means:
$$ I := \frac{e\Delta N}{\Delta t},$$
where $I$ is the current, $\Delta t$ is the time interval (e.g. 1 second), $e$ is the elementary charge, and $\Delta N$ is the number of elementary charges that you count within time $\Delta t$.
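As a quick illustration of this definition, one can count how many elementary charges per second make up a given current (a sketch; the 1 A figure is just an assumed example, not from the answer):

```python
e = 1.602176634e-19    # elementary charge in coulombs
I = 1.0                # assumed example current: 1 ampere
delta_t = 1.0          # time interval: 1 second
delta_N = I * delta_t / e   # number of elementary charges crossing the area
print(f"{delta_N:.2e} elementary charges per second")  # roughly 6.24e+18
```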
Usually, conductors are metals, and you may think of the cross sectional area of a copper wire, for example. But you can also imagine other "conductors" that are liquids with ions in them, or even gases with charged atoms in them.
Electrical resistance: Flowing charge carriers dissipate energy, if they scatter with other particles and thereby lose energy. In metals, for example, electrons forming the electrical current will scatter from lattice vibrations (phonons) and thereby dissipate energy. This energy dissipation leads to electrical resistance, usually denoted by $R$.
Electrical voltage: Yet another question is, why charge carriers would start to flow at all. In metals, the reason is the applied electrical voltage $U$ between the two ends of a wire. An electrical potential represents the potential energy of charge carriers in an electric field. The voltage is just the difference of the electrical potential at two points (i.e., the difference of the potential energy for the charge carriers between two points). The charge carriers start to flow, because they try to reduce their potential energy (same for the apple falling from a tree: it tries to reduce its potential energy).
Ohm's law: The three concepts explained above now are related in Ohm's law, which you may know:
$$ U = R I,$$
which holds for so-called Ohmic conductors. | {
"domain": "physics.stackexchange",
"id": 56329,
"tags": "electrostatics, electricity, electric-current, charge, flow"
} |
What is the average rate of passage of time in the observable universe relative to the passage of time on earth? | Question: Main Question: If you were to average out the rate of the passage of time in the observable universe relative to earth, what would it be?
Alternative Precise Question: What is the rate of passage of time at the halfway point between Andromeda and The Milky Way relative to earth? I imagine it must be very fast.
Assume 1 time unit = 1 earth time unit.
--
I think it would be interesting to find out that earth experiences time vastly different from the rest of the universe.
Answer: Gravitational time dilation in a gravity well is equal to the relativistic time dilation due to the speed required to escape that gravity well (see this Wikipedia article for more information). Escape velocity from the Earth is about 11.2 km/s. Solar escape velocity from Earth's orbit is about 42.1 km/s. Escape velocity from the Milky Way is about 550 km/s. So total escape velocity from Earth to a point halfway between Andromeda and the Milky Way is somewhere in the vicinity of 600 km/s, which produces a time dilation factor of about 1.00000200277612; that is, for each second that passes on Earth around 1.00000200277612 seconds pass at your hypothetical distant point.
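The quoted factor can be reproduced directly from the escape-velocity figure (a quick sketch; 600 km/s is the rough combined value used above):

```python
import math

c = 299_792_458.0   # speed of light in m/s
v = 600e3           # rough combined escape velocity in m/s

# gravitational time dilation matches the special-relativistic factor
# for the escape velocity: gamma = 1 / sqrt(1 - v^2 / c^2)
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"{gamma:.14f}")  # close to 1.00000200277612
```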
(This calculation assumes the observers are at rest relative to one another. If the distant observer is at rest relative to the cosmic microwave background, then there's another factor of about 600 km/s to account for due to the Earth's movement relative to the CMB.) | {
"domain": "physics.stackexchange",
"id": 79075,
"tags": "general-relativity, cosmology, spacetime, time, observable-universe"
} |
Ansible playbook to generate webserver folder structure with users and projects | Question: I am trying to use Ansible to generate a folder structure for a webserver with users and linked projects.
The playbook works. But with more users and projects it gets quite complex to handle.
How can I simplify my code or vars sections?
I read a hint about using blocks so that all tasks inside it could use the same loop.
Is that the way to go?
The docs on blocks don't have a lot of examples. Any hint or link to examples is most appreciated.
Of course if you have any other pointers, please let me know.
---
- hosts: localhost
become: yes
become_method: sudo
# check_mode: yes
vars:
project_base_path: /usr/src/project
www_base_path: "/var/www/{{ansible_hostname}}"
projects:
mars: "{{ project_base_path }}/mars"
venusnew: "{{ project_base_path }}/venus/new"
venusold: "{{ project_base_path }}/venus/old"
users:
goku:
present: true
pw: #vault
luffy:
present: true
pw: #vault
user_projects:
- uname: goku
uproject:
- lname: mars
spath: "{{ projects.mars }}"
- uname: luffy
uproject:
- lname: venusnew
spath: "{{ projects.venusnew }}"
- lname: project
spath: "{{ projects.venusold }}"
tasks:
- name: check www path | Check if the path is available or create it
file:
path: "{{ www_base_path }}"
owner: www-data
group: www-data
state: directory
- name: check user www directory | Check if the directory of the user is available or create it
file:
owner: www-data
group: www-data
state: directory
path: "{{ www_base_path }}/{{ item.key}}"
with_dict: "{{ users }}"
- name: check projects in usr src | Check if the directory of the user is available or create it
file:
owner: www-data
group: www-data
state: directory
path: "{{ item.value}}"
with_dict: "{{ projects }}"
- name: link projects to users | Create links for the users
file:
state: link
# force: yes
owner: www-data
group: www-data
src: "{{ item.1.spath }}"
dest: "{{ www_base_path }}/{{ item.0.uname }}/{{ item.1.lname }}"
with_subelements:
- "{{ user_projects }}"
- uproject
Answer: I can think of 3 options.
Using a role
As you work in Ansible progresses you'll find the number of tasks and variables harder to manage as a single file. You'll likely want to create an ansible role.
ansible-galaxy init <your role name>
This will create a structure of folders with defaults, vars, files, handlers, tasks, meta and templates.
You can then separate out your variables into separate files while keeping them originally defined in defaults/main.yml in order to easily override them while controlling how they're initialized.
Force 'manual' loading of vars conditionally within the role by overriding defaults/main.yml with custom logic in tasks.
This can apply to stand alone task files also, but I prefer using roles.
Here are two examples:
# including vars that are outside your role on inventory dir path level.
- include_vars:
file: "{{inventory_dir | dirname}}/group_vars/rdu.yml"
when: env == 'rdu'
# including vars that are within your role to a var called dataset.
- include_vars:
file: "{{role_path}}/defaults/datasets/{{dataset_name}}.yml"
when: dataset_name is defined
You can also use the vars folder within the role but I prefer to use defaults/main.yml to set defaults and then write over the defaults later.
If the values are static and should not be changed, then you can consider using vars/main.yml; this assumes you are using a role.
Put variables which are associated with a specific group under your group_vars and then make the association in your inventory file.
See http://docs.ansible.com/ansible/latest/intro_inventory.html#group-variables
My suggestion is to create a role, then organize your variables into similar groups where possible, keep defaults in roles/role-name/defaults/main.yml, and then override them based on the logic of your variables.
So looking at your code it seems that each project (mars, venusnew, and venusold) could likely get its own yml file. Using conditional logic you can decide which vars to load. | {
"domain": "codereview.stackexchange",
"id": 27054,
"tags": "beginner, hash-map, ansible"
} |
Bash script to create directories and files with a numbered prefix | Question: I wrote a pretty simple bash script. It takes a directory with subdirectories with incremental prefixes in their names (01test, 02test, 03test), creates a new directory with the next highest prefix and creates a couple of files and fills them with some default text.
I'd like to do this in fewer lines of code and eliminate redundancies.
#!/bin/bash
LAST=`exec ls example_dir | sed 's/\([0-9]\+\).*/\1/g' | sort -n | tail -1`
PREFIX="${LAST:0:2}"
PREFIX=$((PREFIX + 1))
PREFIX="$PREFIX"_
ARGS=$@
TESTNAME="${ARGS// /_}"
DIRNAME="example_dir/$PREFIX$TESTNAME"
mkdir $DIRNAME
mkdir $DIRNAME/some_directory
mkdir $DIRNAME/expected
touch $DIRNAME/first.txt
touch $DIRNAME/second.txt
touch $DIRNAME/third.json
echo "{" >> $DIRNAME/test.json
echo -e "\t\"enabled\": true" >> $DIRNAME/test.json
echo "}" >> $DIRNAME/test.json
echo "{" >> $DIRNAME/some_directory/$PREFIX.json
echo -e " \"entities\": [" >> $DIRNAME/some_directory/$PREFIX.json
echo " ]" >> $DIRNAME/some_directory/$PREFIX.json
echo "}" >> $DIRNAME/some_directory/$PREFIX.json
Answer:
tail must read the whole file. By calling sort with the -r (aka --reverse) option, you can use head instead.
touch takes multiple arguments. You may
touch $DIRNAME/first.txt $DIRNAME/second.txt $DIRNAME/third.json
and further use brace expansion:
touch $DIRNAME/{first.txt,second.txt,third.json}
I would consider here-documenting the json files instead of echoing them, along the lines of:
cat << EOF >> $DIRNAME/test.json
{
"enabled": true
}
EOF | {
"domain": "codereview.stackexchange",
"id": 18403,
"tags": "bash"
} |
Dynamic field performance | Question: I have a model with a dynamic field like so:
def resolveClient() {
if (prevCall && prevCall.client) {
return prevCall.client
} else {
return client
}
}
Pretty simple. However, using a dynamic field makes me unable to do queries at the database level. I can't query using HQL or Criteria. This seems to be impacting performance pretty heavily. What's a better method to do something like this?
Answer: In order to resolve the client with a GORM/Hibernate query you basically need to persist the resolved client value.
Based on your resolveClient() method I'm assuming your domain class model goes something like this:
class SomeDomainClass {
Call prevCall
Client client
def resolveClient() {
if (prevCall && prevCall.client) {
return prevCall.client
} else {
return client
}
}
}
class Call {
Client client
}
class Client { }
If you were dealing with two properties that were in the SomeDomainClass then you'd probably be able to use a derived property. But since one of the properties is an association, a derived property won't work. Instead, you can do this:
class SomeDomainClass {
Call prevCall
Client client
Client resolvedClient
def resolveClient() {
prevCall?.client ?: client
}
def beforeInsert() {
resolvedClient = resolveClient()
}
def beforeUpdate() {
resolvedClient = resolveClient()
}
}
The new property resolvedClient, which is maintained by the beforeInsert() and beforeUpdate() methods, takes care of saving the current resolved client to the GORM store (database). With that value persisted you can use it in GORM queries:
def instances = SomeDomainClass.where {
resolvedClient == someClient
}.list()
On the Groovy side, it would still be best to use the resolveClient() method because the resolvedClient property can get out of sync if prevCall changes.
def client = instance.resolveClient() | {
"domain": "codereview.stackexchange",
"id": 16668,
"tags": "performance, groovy, hibernate, grails"
} |
Searching an entire workbook for keywords and then printing the results on another worksheet | Question: I made a search page at the beginning of a workbook. This page has a drop down that allows you to choose what category to search in, then you type your phrase or keyword into the search box and it finds all of the results. It then takes those results and pastes them into the chart at the bottom of the search page. You then have the option to print out those results or clear the chart. Pictures are also posted for better understanding.
Private Sub ComboBox1_Change()
End Sub
Private Sub ComboBox2_Change()
UpdateSearchBox
End Sub
Private Sub CommandButton1_Click()
Select Case TextBox1.Value
Case "F"
TextBox1.Value = "G"
Case "E"
TextBox1.Value = "F"
Case "D"
TextBox1.Value = "E"
Case "C"
TextBox1.Value = "D"
Case "B"
TextBox1.Value = "C"
Case "A"
TextBox1.Value = "B"
Case "G"
TextBox1.Value = "A"
End Select
End Sub
Private Sub CommandButton2_Click()
FindOne
End Sub
Private Sub TextBox1_Change()
UpdateSearchBox
End Sub
Sub UpdateSearchBox()
Dim PageName As String, searchColumn As String, ListFiller As String
Dim lastRow As Long
If TextBox1.Value <> "" Then
PageName = TextBox1.Value
Else
Exit Sub
End If
Select Case ComboBox2.Value
Case "EQUIPMENT NUMBER"
searchColumn = "A"
Case "EQUIPMENT NAME"
searchColumn = "C"
Case "DUPONT NUMBER"
searchColumn = "F"
Case "SAP NUMBER"
searchColumn = "G"
Case "SSI NUMBER"
searchColumn = "H"
Case "PART NAME"
searchColumn = "I"
Case ""
MsgBox "Please select a value for what you are searching by."
End Select
lastRow = Sheets(PageName).Range("A65536").End(xlUp).Row
If lastRow <> 0 And PageName <> "" And searchColumn <> "" Then
ListFiller = PageName & "!" & searchColumn & "2" & ":" & searchColumn & lastRow
ComboBox1.ListFillRange = ListFiller
End If
End Sub
Sub FindOne()
Range("B19:J1500") = ""
Application.ScreenUpdating = False
Dim k As Integer, EndPasteLoopa As Integer
Dim myText As String, searchColumn As String
Dim totalValues As Long
Dim nextCell As Range
k = ThisWorkbook.Worksheets.Count
myText = ComboBox1.Value
Set nextCell = Range("B20")
If myText = "" Then
MsgBox "No Address Found"
Exit Sub
End If
Select Case ComboBox2.Value
Case "EQUIPMENT NUMBER"
searchColumn = "A"
Case "EQUIPMENT NAME"
searchColumn = "C"
Case "DUPONT NUMBER"
searchColumn = "F"
Case "SAP NUMBER"
searchColumn = "G"
Case "SSI NUMBER"
searchColumn = "H"
Case "PART NAME"
searchColumn = "I"
Case ""
MsgBox "Please select a value for what you are searching by."
End Select
For i = 2 To k
totalValues = Sheets(i).Range("A65536").End(xlUp).Row
ReDim AddressArray(totalValues) As String
For j = 0 To totalValues
AddressArray(j) = Sheets(i).Range(searchColumn & j + 1).Value
Next j
For j = 0 To totalValues
If (myText = AddressArray(j)) Then
EndPasteLoop = 1
If (Sheets(i).Range(searchColumn & j + 2).Value = "") Then EndPasteLoop = Sheets(i).Range(searchColumn & j + 1).End(xlDown).Row - j - 1
For r = 1 To EndPasteLoop
Range(nextCell, nextCell.Offset(0, 8)).Value = Sheets(i).Range("A" & j + r, "I" & j + r).Value
Set nextCell = nextCell.Offset(1, 0)
Next r
End If
Next j
Next i
Application.ScreenUpdating = True
End Sub
Answer: Here are some of the changes I'd make
Use Option Explicit at the top of the module
Eliminate all errors found by compiling the code (Debug -> Compile VBA Project)
(declare all variables, first of all)
Remove the empty Sub ComboBox1_Change() declaration if you don't intend to use it
Replace string comparisons such as PageName <> "" with
PageName <> vbNullString (faster)
Len(PageName) > 0 (even faster)
As pointed out in the comment, generalize the lastRow and totalValues statements
From:
lastRow = Sheets(PageName).Range("A65536").End(xlUp).Row
To
With Sheets(PageName)
lastRow = .Cells(.Rows.Count, 1).End(xlUp).Row
End With
To reduce total lines of code and make it a bit easier to maintain I'd replace the three Select statements with one separate Function, and use it like this:
Private Sub CommandButton1_Click()
TextBox1.Value = GetSelectedLetter(TextBox1.Value)
End Sub
Sub UpdateSearchBox()
...
searchColumn = GetSelectedLetter(ComboBox2.Value)
If InStr(searchColumn, "Please") > 0 Then MsgBox searchColumn
...
End Sub
Sub FindOne()
...
searchColumn = GetSelectedLetter(ComboBox2.Value)
If InStr(searchColumn, "Please") > 0 Then MsgBox searchColumn
...
End Sub
Public Function GetSelectedLetter(ByVal sel As String) As String
Select Case sel
Case "G", "EQUIPMENT NUMBER": GetSelectedLetter = "A": Exit Function
Case "B", "EQUIPMENT NAME": GetSelectedLetter = "C": Exit Function
Case "E", "DUPONT NUMBER": GetSelectedLetter = "F": Exit Function
Case "F", "SAP NUMBER": GetSelectedLetter = "G": Exit Function
Case "D": GetSelectedLetter = "E": Exit Function
Case "SSI NUMBER": GetSelectedLetter = "H": Exit Function
Case "PART NAME": GetSelectedLetter = "I": Exit Function
Case "C": GetSelectedLetter = "D": Exit Function
Case "A": GetSelectedLetter = "B": Exit Function
Case vbNullString:
GetSelectedLetter = "Please select a value for what you are searching by."
End Select
End Function
(not the standard indentation, but I find it visually easier to work with)
Performance
One improvement: start the search after moving all data to arrays (array of arrays specifically)
Example:
Public Sub SearchAllSheetsArrayOfArrays()
Const SRC_VAL = "Test"
Dim ws As Worksheet, arr() As Variant, i As Long, j As Long, tc As Long, tw As Long
Dim r As Long, c As Long, lbr As Long, ubr As Long, lbc As Long, ubc As Long
i = 1
tw = ThisWorkbook.Worksheets.Count
ReDim arr(i To i + tw - 1) 'Place all data from each sheet in Variant sub-arrays
For Each ws In ThisWorkbook.Worksheets
If ws.UsedRange.Cells.Count > 1 Then
arr(j + 1) = ws.UsedRange
j = j + 1
End If
Next
tw = j
ReDim Preserve arr(i To tw) 'Trim the main Array if there are empty sheets
For i = 1 To tw 'tw = total worksheets
lbr = LBound(arr(i), 1): ubr = UBound(arr(i), 1) 'dimension 1 (rows)
lbc = LBound(arr(i), 2): ubc = UBound(arr(i), 2) 'dimension 2 (cols)
For r = lbr To ubr
For c = lbc To ubc
If Len(arr(i)(r, c)) > 0 Then
If InStr(arr(i)(r, c), SRC_VAL) > 0 Then tc = tc + 1 'collect vals
End If
Next
Next
Next
Debug.Print tc
End Sub
PS. I used very short variable names to make the code fit in its window without scroll bars, but more descriptive names should be used, like in your posted code. Also try to keep naming conventions consistent: all variables start with lower-case letters, all procedures start with upper-case letters | {
"domain": "codereview.stackexchange",
"id": 26249,
"tags": "vba, excel, search"
} |
Why are there usually grids in front of sound speakers? | Question: Usually, the sound speakers have this metal grid.
However, not all of them have this.
Is there any purpose, maybe related to the sound quality, that justifies this?
Or is it only for physically protect the speaker?
Answer: No, it has nothing to do with sound quality. In fact, the grid or covering is carefully chosen to interfere with the sound as little as possible.
Speaker cones must be light weight, so are made from paper or other thin and delicate material. The grill is to physically protect the delicate speaker cone from getting dinged, a curious cat, or some moron with a poky finger.
The tweeter in your picture doesn't have a covering because it is significantly recessed and behind a rigid and narrow barrier. Poky fingers can't fit in there to hurt the tweeter. Also, the high frequencies that the tweeter produces are more susceptible to attenuation by a cover, and a cover would cause diffusion and alter the radiation pattern. From the horn shape, it looks like the sound is intended to be directed in a somewhat narrow beam straight out. | {
"domain": "physics.stackexchange",
"id": 10836,
"tags": "acoustics, everyday-life"
} |
Physical Interpretation of $\nabla \cdot \vec{E} = \frac{\rho}{\epsilon_0} $ | Question: The differential's form of Gauss' Law is $$\nabla \cdot \vec{E} = \frac{\rho}{\epsilon_0}. $$
This suggests that at every point in space, the electric field $\vec{E}$ is determined by the charge density $\rho$ at that point.
But if charge density $\rho$ is non-zero at a point in space, it means that there is a point charge $q$ present at that point. If we now evaluate $\vec{E}$ on this point, we should get $\vec{E}=\infty $ since now we evaluating the electric field $\vec{E}$ on a point charge $q$. In other words, considering only the contribution from this point charge $q$, by Coulomb's law we get
$$\vec{E} = \frac{1}{4\pi\epsilon_0}\frac{q}{r^2}=\frac{1}{4\pi\epsilon_0}\frac{q}{0^2}=\infty.$$
Is my interpretation of Gauss' Law wrong? How can this problem be avoided?
Answer:
This suggests that at every point in space, the electric field is determined by the charge density at that point.
Technically it says that the divergence of the field at a point in space is determined by the charge density at that point. We know that the actual electric field at a point in space is influenced by charges not at that point in space (for example, Coulomb's law that you give). It seems like you are thinking that the equation reads $\mathbf E=\frac{\rho}{\epsilon_0}$, which is not what it says.
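To make this concrete: away from a point charge, the Coulomb field has zero divergence, matching $\rho = 0$ everywhere except at the charge itself. Here is a quick numerical check (a sketch with central finite differences; units chosen so that $q/4\pi\epsilon_0 = 1$):

```python
def E(x, y, z):
    # Coulomb field of a unit point charge at the origin: E = r_vec / |r|^3
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

def divergence(x, y, z, h=1e-5):
    # central finite differences for dEx/dx + dEy/dy + dEz/dz
    return ((E(x + h, y, z)[0] - E(x - h, y, z)[0])
            + (E(x, y + h, z)[1] - E(x, y - h, z)[1])
            + (E(x, y, z + h)[2] - E(x, y, z - h)[2])) / (2 * h)

print(divergence(1.0, 2.0, -0.5))  # ~0: zero charge density away from the origin
```

At the origin itself the field blows up and this check breaks down, which is exactly the singular point the answer goes on to discuss.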
In your point charge example, at the point charge $\rho=\infty$ (excuse my horrible math), and so it makes sense that you have an infinite divergence at the location of the point charge. (We usually get around this by using Dirac Delta functions, but this has been covered at other places on this site and in many introductory text books, so I won't go into it here). | {
"domain": "physics.stackexchange",
"id": 55346,
"tags": "electrostatics, electric-fields, gauss-law, singularities, coulombs-law"
} |
light point and photosensor | Question: When a laser pointer is pointed at a screen or at a target that spot can be detected because the laser beam falls on the target and bounces back and then is detected by a photosensor.
But, what if it was possible for an electronic eye / photosensor to detect a spot or point of light in just air, without the need of a surface for that spot/point to bounce off.
What would be the practical applications of such an invention?
Answer: In order to see something, that "something" must either emit or reflect photons into your eye (or detector). If photons leave a light source in some direction, but are seen/detected "in the air", then the air must have somehow directed those photons toward your eye/detector.
Like this:
I can only see two ways for this to happen:
Some particles in the air scatter the photons. This happens in many cases.
Magic makes the photons change direction. To my knowledge, this has not happened.
Practical implications of the first scenario include
guiding the eye toward a star with a laser pointer (photons scatter on particles in the air and thus create a visible ray),
studying supernovae looking at the light echo from the light scattering off of surrounding dust particles in their remnants,
measuring the size of the broad-line region around supermassive black holes, using the technique known as reverberation mapping (see also this answer),
breaking into banks using flour to see laser traps, and, most importantly,
summoning Batman.
Practical implications of the second scenario are limited only by your imagination. | {
"domain": "physics.stackexchange",
"id": 37500,
"tags": "visible-light, electromagnetic-radiation, photons, electronics, sensor"
} |
How can I correct odometry using rtabmap? | Question:
I know that rtabmap uses loop closure to correct the odometry error, but did I turn it on? That is the question.
What parameter should I turn on to activate the loop closure or is it turned on by default?
Also, given that I have wheel odometry, is it worth also turning on stereo odometry? If yes, how can I fuse them both?
Originally posted by EdwardNur on ROS Answers with karma: 115 on 2019-06-26
Post score: 0
Answer:
I think the following parameters will help you:
<param name="RGBD/ProximityBySpace" type="string" value="true"/> is for
Local loop closure detection (using estimated position) with locations in WM
<param name="Reg/Strategy" type="string" value="2"/> is for Loop closure transformation: 0=Visual, 1=ICP, 2=Visual+ICP
Originally posted by kosmastsk with karma: 210 on 2019-06-26
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by EdwardNur on 2019-06-26:
@kosmastsk isn't ICP stands for the lidar odometry? I only have the wheel odometry
Comment by kosmastsk on 2019-06-26:
You're right! Take a look in here: http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot
RTAB-Map gives you the way to syncronize all of the odometry sources, or you can just ignore the laser odometry and use only the wheel odometry . You can remap the odom topic to the topic of your wheel odometry.
Also, ICP is not for a specific odometry type, it is an alignment method that leads to loop closure, independent of the source of odometry
Comment by EdwardNur on 2019-06-26:
@kosmastsk Ok, so the second param would be value=0 as I will only use visual | {
"domain": "robotics.stackexchange",
"id": 33263,
"tags": "slam, navigation, ros-melodic, rtabmap"
} |
Maximal Minimum Spanning Tree by Removing $k$ Edges | Question: The problem is as follows:
Consider a connected, undirected, and weighted graph $G = (V, E, w)$ and an integer $0 < k < |E| - |V| + 1$. Describe and analyze an efficient algorithm to remove at most $k$ edges from $E$ such that the resulting graph $G' = (V, E' \subseteq E, w)$ has a maximal weight minimum spanning tree over all possible $G'$.
I initially thought a greedy algorithm would work with "just remove the $k$ smallest edges as long as the graph remains connected." However, this does not work, consider the following graph and $k = 1$:
The MST of $G$ has weight 9. If we remove the minimal edge $(B, C)$, the MST of the resulting graph has weight 12. However, if we remove the edge $(A, B)$, the MST of the resulting graph has weight 13. So this greedy strategy does not work.
As for other strategies, we can first note that it only helps to remove edges that are in the MST of $G$ initially. So we can first determine $T = MST(G)$. The next (inefficient) thing we could do is consider each edge $e \in T$ and do:
Remove $e$ from $T$, cutting $T$ into $T_1$ and $T_2$.
Determine the next smallest edge $e'$ spanning $T_1$ and $T_2$ in $G$.
Keep track of $e$ such that it maximizes $w(e') - w(e)$.
Repeat this $k$ times. This doesn't seem very efficient though. This would be something like $O(k \cdot n \cdot (n + m))$ I think. We could optimize this a bit by keeping track of an ordered set of edges on the cuts.
I am wondering if there exists an algorithm that is $O(km \log n)$ or better. Any approaches / advice would be appreciated.
Answer: This is known as the $k$ most vital edges problem for the minimum spanning tree. It has been proved to be NP-hard [1].
For fixed $k$, Weifa Liang proposed an $O\left(n^k\alpha((k+1)(n-1),n)\right)$ algorithm, where $\alpha$ is a functional inverse of Ackermann’s function [2]; this can be improved to $O\left(n^k\log\alpha((k+1)(n-1),n)\right)$ using Seth Pettie's sensitivity analysis [3].
[1] Frederickson, G. N., & Solis-Oba, R. (1999). Increasing the weight of minimum spanning trees. Journal of Algorithms, 33(2), 244-266.
[2] Liang, W. (2001). Finding the k most vital edges with respect to minimum spanning trees for fixed k. Discrete Applied Mathematics, 113(2-3), 319-327.
[3] Pettie, S. (2005, December). Sensitivity analysis of minimum spanning trees in sub-inverse-Ackermann time. In International Symposium on Algorithms and Computation (pp. 964-973). Springer, Berlin, Heidelberg. | {
"domain": "cs.stackexchange",
"id": 13658,
"tags": "algorithms, graphs, weighted-graphs, minimum-spanning-tree"
} |
message types of topics concerning map | Question:
I am developing an Android app communicating with ROS, but I lack information about the map. I have topic names, but I don't know which message types I should use. I searched on the Internet but failed. Here is the list of topics I need. It would be better if I could know more details about those topics.
1. Move_base_simple/goal
2. initial_pose
3. Move_base/local_costmap/inflated_obstacles
4. move_base/NavfnROS/plan
5. move_base/local_costmap/robot_footprint
6. move_base/local_costmap/obstacles
Thanks a lot.
Originally posted by maoqizhen on ROS Answers with karma: 7 on 2013-05-10
Post score: 0
Answer:
move_base_simple/goal - geometry_msgs/PoseStamped
It is a topic (non-action type) on which one can post a goal if he does not want to monitor its status (success or not) after posting.
initialpose - geometry_msgs/PoseWithCovarianceStamped
It is used to re-state the current position of the robot in the map if the robot seems lost. 2d pose estimate in rviz does this.
Move_base/local_costmap/inflated_obstacles - nav_msgs/GridCells
It specifies the cells in the map grid that are considered occupied due to the inflation radius of the robot. The inflation radius of the turtlebot is mentioned in turtlebot_navigation/config/local_costmap_params.yaml
move_base/local_costmap/obstacles - nav_msgs/GridCells
It specifies the cells in the map grid that are occupied by obstacles.
move_base/local_costmap/robot_footprint - geometry_msgs/PolygonStamped
In a map, the robot is represented as a polygon which is called the 'footprint' of the robot. This topic lists the points on the circumference of the polygon that form the footprint.
move_base/NavfnROS/plan - nav_msgs/Path
Once you give a nav goal the robot marks a path along which it has to move to reach the goal point (shown as a thin green line in rviz). This topic lists the points in the local costmap that lie on the path.
Read more here, here, here and here.
Originally posted by prasanna.kumar with karma: 1363 on 2013-05-11
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 14138,
"tags": "rosjava"
} |
Least-balanced possible red-black tree of n distinct nodes | Question: Let's say we have a red-black tree of $n$ total nodes where all keys are distinct. The subtree rooted at the root node's left child has $n_L$ nodes, and similarly the subtree rooted at the root node's right child has $n_R$ nodes.
What is the range of values on $n_L$ and $n_R$ for any $n \geq 3$?
From Red Black Tree vs AVL Tree
For example, in the above image: $n = 8, n_L = 1, n_R = 6$.
Answer: Let $b$ be the black height of the left subtree, which is also the black height of the right subtree.
The number of nodes in the left subtree is at least $2^{b}-1$. $2^b-1$ is reached when the left subtree is a perfect binary tree of black nodes and height $b-1$.
The number of nodes in the right subtree is at most $2^{2b+1}-1$. $2^{2b+1}-1$ is reached when the root is black and the right subtree is a perfect binary tree of height $2b$ with nodes colored red, black, red, black, and so on alternately from top to bottom.
$$2^b-1 + 2^{2b+1}-1\ge n-1$$
Solving for $b$, we obtain
$$ b\ge f(n) := \lceil \log_2(\sqrt{8n+9}-1)\rceil-2$$
So, the range for $n_L$ is $[2^{f(n)}-1, n - 2^{f(n)}]$, which is, by symmetry, also the range for $n_R$.
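These formulas are easy to sanity-check numerically (a small Python sketch):

```python
import math

def f(n):
    # minimal black height b of the root's subtrees for an n-node tree
    return math.ceil(math.log2(math.sqrt(8 * n + 9) - 1)) - 2

def subtree_size_range(n):
    low = 2 ** f(n) - 1          # smallest possible n_L (or n_R)
    return (low, n - 2 ** f(n))  # largest is n minus root minus smallest other side

print(subtree_size_range(8))     # (1, 6)
```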
For example, when $n=8$, $f(n)=1$, the range is $[1, 6]$ as illustrated in the question. | {
"domain": "cs.stackexchange",
"id": 19616,
"tags": "binary-search-trees, red-black-trees"
} |
Converting frequency from $\textrm{Hz}$ to radians-per-sample | Question: In MATLAB I have to pass a cut-off frequency for designing a filter. But this cut-off frequency is in radians-per-sample. How do I convert my analog cut-off frequency in $\textrm{Hz}$ into the required radians-per-sample for MATLAB?
Answer: Problems like these are best attacked using some dimensional analysis:
$$f_{[\rm rad/samples]} = f_{[\rm cycles/sec]}\cdot \frac{\text{sec}}{\text{samples}}\cdot \frac{\text{rad}}{\text{cycle}}$$
$$f_{[\rm rad/samples]} = f_{[\rm cycles/sec]}\cdot \frac{2\pi}{f_s}$$
where $f_s$ is the sample rate in $\textrm{Hz}$. | {
"domain": "dsp.stackexchange",
"id": 714,
"tags": "matlab, sampling"
} |
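As a quick numeric check of the formula above (note that MATLAB's filter-design functions such as `butter` and `fir1` expect the cutoff normalized so that 1.0 corresponds to Nyquist, i.e. $\omega/\pi$):

```python
import math

def hz_to_rad_per_sample(f_hz, fs):
    # w = 2*pi*f / fs  (rad/sample)
    return 2 * math.pi * f_hz / fs

w = hz_to_rad_per_sample(1000, 8000)
print(w)            # pi/4 ~= 0.7854 rad/sample
print(w / math.pi)  # 0.25 -- the normalized form MATLAB's butter/fir1 expect
```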
Build a Pubmed query given long gene list | Question: I would like to search the literature using the Pubmed search engine for a set of genes that are associated with a brain region of interest. Unfortunately, there are a lot of them. It's possible there is no solution if there is a character limit on the query. I think it would be easiest to do this with 'nix tools but I'm open to R libraries or Python implementations too.
The list is 908 genes long. It's from the Gensat annotation search engine, with the filters Structure IS olfactory bulb & Expression level IS moderate to strong signal & Expression pattern IS region-specific.
The list looks like, scraped from the dropdown list in the HTML source with a regular expression:
Aatf
Abcb1b
Abhd3
Abhd4
Abhd5
...
The Pubmed query should look like:
(((((((Aatf) OR Abcb1b) OR Abhd3) OR Abhd4) OR Abhd5) OR Abl2) AND olfactory bulb
but with 908 genes separated by OR and parentheses.
Bonus points if you find/write a utility which adds gene synonyms to the query!
Answer: After getting it out on paper (so to speak) I was able to accomplish what I wished with bash:
#!/bin/bash
gene_list=($(cat ./markers_clean.txt))
query=${gene_list[@]:0:1}
parens="("
for gene in ${gene_list[@]:1}
do
query="${query} OR ${gene})"
parens="${parens}("
done
echo "${parens}${query} AND olfactory bulb)"
With the output being:
((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((Aatf OR Abcb1b) OR Abhd3) OR Abhd4) OR Abhd5) OR Abl2) OR Ablim2) OR Acaa2) OR Ackr3) OR Acsbg1) OR Acvr1c) OR Acvr2a) OR Adam11) OR Adam23) OR Adcy1) OR Adcyap1) OR Adgrg1) OR Adora2a) OR Adra2a) OR Adra2c) OR AF529169) OR Agrp) OR Agt) OR Agtr1a) OR Agtr2) OR Ak4) OR Akap12) OR Akap5) OR Akr1c18) OR Aldh1l1) OR Aldh7a1) OR Aldoc) OR Als2) OR Amfr) OR Amigo1) OR Amigo2) OR Angpt2) OR Ank) OR Ank2) OR Ankrd34b) OR Api5) OR Apod) OR Arc) OR Arf6) OR Arhgap33) OR Arhgdig) OR Arhgef6) OR Arhgef7) OR Arl6ip1) OR Arpp21) OR Arvcf) OR Arx) OR Ascl1) OR Aspm) OR Atg4c) OR Atic) OR Atoh1) OR Atp1a1) OR Atp1b1) OR Atp2a2) OR Atp5a1) OR Atp5j2) OR Atp6v1b2) OR Atpif1) OR Avp) OR Avpr1a) OR Avpr1b) OR B2m) OR B3gat1) OR Bahcc1) OR Baiap2) OR Bcar1) OR Bdnf) OR Becn1) OR Bend5) OR Bmp1) OR Bmp6) OR Bmp7) OR Bnip3) OR Bsx) OR Btbd17) OR Btg1) OR Btg2) OR Btg3) OR C1ql2) OR C1ql3) OR C2cd2l) OR C33) OR Cacna1h) OR Cacna1i) OR Cacnb1) OR Cacng3) OR Cacng5) OR Cadps) OR Calb1) OR Calb2) OR Calca) OR Calcb) OR Calcr) OR Camk2a) OR Camkk1) OR Camkk2) OR Camkv) OR Card6) OR Cartpt) OR Casp9) OR Casr) OR 
Cbln1) OR Cbln2) OR Cbln4) OR Cbx5) OR Ccdc163) OR Ccdc184) OR Ccdc187) OR Ccdc3) OR Cck) OR Cckbr) OR Ccna2) OR Ccnd1) OR Ccne1) OR Ccne2) OR Ccsap) OR Cd63) OR Cdc42bpg) OR Cdc42ep4) OR Cdc6) OR Cdh1) OR Cdh11) OR Cdh3) OR Cdh6) OR Cdh9) OR Cdhr1) OR Cdk17) OR Cdk4) OR Cdk5) OR Cdk5r1) OR Cdk5r2) OR Cdk6) OR Cdkn1a) OR Cdkn1b) OR Cdkn2a) OR Cdkn2c) OR Cdkn2d) OR Cdon) OR Cdr2) OR Celf6) OR Cemip) OR Chd7) OR Chn2) OR Chrm1) OR Chrm4) OR Chrna2) OR Chrna3) OR Chrna4) OR Chrna5) OR Chrna7) OR Chrnb2) OR Chrnb4) OR Chst15) OR Chst2) OR Cic) OR Cisd1) OR Cited1) OR Ckmt1) OR Clcn1) OR Cldn1) OR Clu) OR Cnr1) OR Cntfr) OR Cntn2) OR Cntnap1) OR Coch) OR Col6a1) OR Colgalt2) OR Cpe) OR Cpne6) OR Crabp1) OR Creb1) OR Crh) OR Crhr1) OR Crhr2) OR Crim1) OR Crip2) OR Crk) OR Crym) OR Csnk1g3) OR Ctgf) OR Ctxn1) OR Cx3cl1) OR Cxcl11) OR Cxcl12) OR Cxcr4) OR Cxxc4) OR Cyb561) OR Cyp19a1) OR Cyp4v3) OR Cyr61) OR D8Ertd738e) OR Dbi) OR Dbp) OR Dcdc2a) OR Dct) OR Dctn1) OR Dcx) OR Ddit4l) OR Dgcr14) OR Dgcr6) OR Dhcr24) OR Dhrs3) OR Dhx57) OR Dkk3) OR Dlg4) OR Dlk1) OR Dll1) OR Dlx1) OR Dlx2) OR Dlx5) OR Dlx6) OR Dmbx1) OR Dnajb1) OR Dnajc12) OR Dnmt1) OR Dnmt3a) OR Doc2g) OR Drd1) OR Drd2) OR Drd3) OR Dsp) OR Dtx4) OR Dtymk) OR Dusp1) OR Dusp14) OR Dusp16) OR Dusp3) OR Dusp5) OR Dyrk1a) OR Dyx1c1) OR Dzip1) OR E2f8) OR Ebf2) OR Ebf3) OR Eef1a1) OR Efna1) OR Efnb1) OR Efnb2) OR Efr3a) OR Efs) OR Egr1) OR Eif1ax) OR Eif3h) OR Eif4e3) OR Eif4g2) OR Elavl2) OR Elavl3) OR Elavl4) OR Elfn1) OR Ell3) OR Emp1) OR Emx1) OR En2) OR Enah) OR Enc1) OR Eno1) OR Eno2) OR Entpd4) OR Eomes) OR Epha4) OR Epha7) OR Epha8) OR Eps8l2) OR Erp29) OR Esr2) OR Esyt3) OR Evx1) OR F2r) OR Fam171b) OR Fam81a) OR Fam84a) OR Fbxl5) OR Fbxo32) OR Fezf1) OR Fezf2) OR Fgd5) OR Fgf1) OR Fgf1) OR Fgf13) OR Fgf15) OR Fgfr1) OR Fign) OR Fkbp2) OR Fkbp3) OR Flrt1) OR Flrt2) OR Fn1) OR Fos) OR Foxi1) OR Foxi2) OR Foxm1) OR Foxp4) OR Frat1) OR Fryl) OR Frzb) OR Fxyd6) OR Fxyd7) OR Fzd2) OR Fzd3) OR Fzd5) OR Fzd8) OR 
Gaa) OR Gabbr1) OR Gabrb2) OR Gabre) OR Gabrg1) OR Gabrg2) OR Gad2) OR Gal) OR Galnt1) OR Galnt5) OR Galnt9) OR Galr3) OR Gap43) OR Gas1) OR Gbx2) OR Gck) OR Gcm2) OR Gdf1) OR Gfap) OR Gja1) OR Gjd2) OR Glce) OR Gldc) OR Glrb) OR Gls) OR Glud1) OR Gng1) OR Gng13) OR Gng3) OR Gng4) OR Gng5) OR Gng7) OR Gpc4) OR Gpm6a) OR Gpr12) OR Gpr139) OR Gpr151) OR Gpr165) OR Gpr21) OR Gpr27) OR Gpr37) OR Gpr68) OR Gpr83) OR Gpr88) OR Gprin3) OR Gpsm1) OR Gria2) OR Grid2ip) OR Grin2c) OR Grm1) OR Grm2) OR Grm4) OR Grp) OR Gsx2) OR Gtf2i) OR Gucy1a3) OR Hap1) OR Hbegf) OR Hcfc1) OR Hcn2) OR Hcn4) OR Hdac11) OR Hdc) OR Herc3) OR Hes6) OR Hey1) OR Hey2) OR Hgs) OR Hmgn2) OR Hmgn3) OR Hnrnpab) OR Hpcal1) OR Hrasls) OR Hrh2) OR Hsd11b1) OR Hspa12a) OR Hspb8) OR Htr1a) OR Htr1b) OR Htr1d) OR Htr3a) OR Htr3b) OR Htr4) OR Htr5b) OR Htr6) OR Icam5) OR Ict1) OR Id1) OR Id4) OR Idh3a) OR Ier5) OR Igf1) OR Igfbp2) OR Igfbp4) OR Igfbp5) OR Igsf9) OR Inpp5j) OR Insig2) OR Iqsec3) OR Irs4) OR Isl2) OR Islr2) OR Itgb3) OR Itm2b) OR Jag1) OR Jag2) OR Jak2) OR Jam3) OR Jph4) OR Jup) OR Kcnc1) OR Kcnc4) OR Kcnip2) OR Kcnj5) OR Kcnk1) OR Kcnq2) OR Kctd13) OR Kctd4) OR Kif11) OR Kif26a) OR Kit) OR Kitl) OR Klf1) OR Klf13) OR Kremen1) OR Krt19) OR Lamb1) OR Lamp5) OR Layn) OR Lbh) OR Ldb1) OR Lef1) OR Lfng) OR Lgals3) OR Lgi1) OR Lhx1) OR Lhx2) OR Lhx4) OR Lhx5) OR Lhx6) OR Lhx8) OR Limk1) OR Lix1) OR Lmo1) OR Lmo4) OR Lpar1) OR Lpl) OR Lrfn5) OR Lrig1) OR Lrig2) OR Lrp1) OR Lrp12) OR Lrp2) OR Lrrk2) OR Lrrn1) OR Lrrn3) OR Lrrtm2) OR Lss) OR Ly6h) OR Lynx1) OR Lypd1) OR Lypd2) OR Lypd6) OR Lzts1) OR Mafb) OR MAGEA8) OR Man1a2) OR Map1b) OR Map1lc3a) OR Map2k6) OR Map4k5) OR Mapk1) OR Mapk8) OR Mapt) OR March4) OR Matk) OR Mb21d2) OR Mbd2) OR Mbd4) OR Mc3r) OR Mc4r) OR Mdk) OR Med12) OR Mef2c) OR Meis1) OR Met) OR Mfap2) OR Mfap4) OR Mfn1) OR Mmp17) OR Mmp2) OR Mpc1) OR Mst1r) OR Msx1) OR Mtap) OR Mturn) OR Mxi1) OR Mycn) OR Nae1) OR Napb) OR Nars) OR Ncor1) OR Ndn) OR Ndnf) OR Ndrg1) OR Ndufv1) OR 
Ndufv2) OR Nectin3) OR Nefh) OR Nefl) OR Nefm) OR Neurod1) OR Neurod4) OR Neurod6) OR Neurog1) OR Neurog2) OR Nfil3) OR Nhlh1) OR Nhp2) OR NKAIN2) OR Nkain3) OR Nkx2) OR Nlgn3) OR Nmb) OR Nmbr) OR Nmu) OR Nnat) OR Nog) OR Nomo1) OR Nos1) OR Notch1) OR Notch2) OR Nov) OR Nova1) OR Npc1) OR Npc2) OR Nptx1) OR Nptx2) OR Npy) OR Npy1r) OR Npy2r) OR Npy5r) OR Nr2f2) OR Nr4a1) OR Nrep) OR Nrgn) OR Nrip3) OR Nrp1) OR Nrtn) OR Nrxn2) OR Nsf) OR Nsg1) OR Nsmf) OR Ntn1) OR Ntng2) OR Ntrk1) OR Ntrk2) OR Ntrk3) OR Nts) OR Nudt2) OR Numb) OR Numbl) OR Odc1) OR Olfm1) OR Olfr734) OR Olig2) OR Oma1) OR Ophn1) OR Opn3) OR Opn4) OR Oprk1) OR Otof) OR Otx2) OR Oxt) OR Oxtr) OR P2rx7) OR Pacsin3) OR Pak1) OR Pard6a) OR Pcbd1) OR Pcdh2) OR Pcdh9) OR Pcnt) OR Pcp4) OR Pcsk9) OR Pde1b) OR Pde1c) OR Pdp1) OR Pdyn) OR Peg3) OR Penk) OR Per2) OR Pex5l) OR Pgrmc1) OR Phactr1) OR Phactr4) OR Phox2b) OR Pias3) OR Pik3r1) OR Pirt) OR Pkd1) OR Plagl1) OR Plcxd2) OR Pld3) OR Plekhg1) OR Plk2) OR Plk3) OR Plp1) OR Plxdc1) OR Plxna1) OR Plxnc1) OR Pmp22) OR Pnmt) OR Pnoc) OR Podxl2) OR Polr1d) OR Polr3e) OR Pomc) OR Pou2f1) OR Pou3f4) OR Ppfia4) OR Ppm1e) OR Ppp1ca) OR Ppp1r14c) OR Ppp1r16b) OR Ppp1r17) OR Ppp1r1a) OR Ppp1r1b) OR Ppp1r2) OR Ppp1r7) OR Ppp2r2a) OR Ppp4c) OR Ppp6c) OR Prkacb) OR Prkcd) OR Prkcq) OR Prmt2) OR Prnp) OR Prodh) OR Prok2) OR Prokr2) OR Prox1) OR Prss12) OR Prtg) OR Psen1) OR Psen2) OR Ptch2) OR Pten) OR Ptf1a) OR Ptgds) OR Ptn) OR Ptpn2) OR Ptpn5) OR Ptprf) OR Pus1) OR Pygb) OR Rab37) OR Rac1) OR Ramp3) OR Rap1a) OR Rasa3) OR Rasal1) OR Rasgrp1) OR Rasgrp2) OR Rasl11b) OR Rbp1) OR Rcan2) OR Rcn1) OR Rel) OR Rest) OR Rfng) OR Rgs12) OR Rgs16) OR Rgs2) OR Rgs3) OR Rgs4) OR Rgs5) OR Rgs8) OR Rgs9) OR Rhob) OR Rhou) OR Rif1) OR Rlbp1) OR Rnd1) OR Rnd2) OR Rnd3) OR Rorb) OR Rpl18) OR Rpl31) OR Rprm) OR Rraga) OR Rtn4rl1) OR Rubcn) OR Rufy3) OR Runx1) OR Rxfp3) OR S1) OR Satb2) OR Scg2) OR Scg3) OR Scn3b) OR Scrn1) OR Scrt1) OR Scube1) OR Scube2) OR Sdc3) OR Selenom) OR 
Selenow) OR Sema3a) OR Sema3b) OR Sema3c) OR Sema3e) OR Sema3f) OR Sema4a) OR Sema6d) OR Sema7a) OR Sept3) OR Sept6) OR Serf1) OR Serpina9) OR Serpini1) OR Sertm1) OR Sez6) OR Sez6l2) OR Sf1) OR Sfrp1) OR Sfrp2) OR Sfrp5) OR Sim1) OR Sirpa) OR Sirt2) OR Skil) OR Sla) OR Slc1) OR Slc12a5) OR Slc13a3) OR Slc17a6) OR Slc17a7) OR Slc18a2) OR Slc1a1) OR Slc1a2) OR Slc1a3) OR Slc1a6) OR Slc22a3) OR Slc25a12) OR Slc32a1) OR Slc33a1) OR Slc41a3) OR Slc6a1) OR Slc6a12) OR Slc6a2) OR Slc6a3) OR Slc6a6) OR Slc6a9) OR Slc7a14) OR Slc8a2) OR Slc9a3r1) OR Slit1) OR Slitrk2) OR Slitrk3) OR Slitrk4) OR Slitrk5) OR Slitrk6) OR Smad6) OR Smarce1) OR Smoc2) OR Snap25) OR Snap91) OR Sncg) OR Snhg11) OR Snx17) OR Sort1) OR Sox1) OR Sox11) OR Sox2) OR Sox4) OR Sox8) OR Sox9) OR Sp4) OR Sprn) OR Spry1) OR Sptan1) OR Sqle) OR Srgap3) OR Srsf7) OR Sstr1) OR Sstr2) OR St8sia5) OR Stard5) OR Stau1) OR Strip2) OR Strn) OR Stum) OR Stx1a) OR Sufu) OR Sulf2) OR Sun2) OR Suv39h1) OR Sv2a) OR Sv2b) OR Syndig1l) OR Syngr3) OR Synj2) OR Syt13) OR Syt17) OR Syt2) OR Syt3) OR Syt4) OR Syt6) OR Syt7) OR Tac1) OR Tac2) OR Tacr3) OR Taf13) OR Tagln3) OR Tbc1d8) OR Tcf3) OR Tgif1) OR Thbs1) OR Thbs2) OR Thbs3) OR Thbs4) OR Them6) OR Thra) OR Thrsp) OR Tigar) OR Tle1) OR Tle3) OR Tmeff1) OR Tmem35a) OR Tmem59l) OR Tnc) OR Tor1b) OR Tpbg) OR Tpd52l1) OR Tph2) OR Tppp3) OR Traf4) OR Trappc3) OR Trf) OR Trh) OR Trhr) OR Trib2) OR Tsc22d2) OR Tspan14) OR Tspan7) OR Tuba1b) OR Tubb3) OR Tubb4a) OR Ube3a) OR Ubxn11) OR Uchl1) OR Ucp2) OR Uhmk1) OR Uncx) OR Uso1) OR Vat1) OR Vcan) OR Vdac2) OR Vegfa) OR Vim) OR Vip) OR Vipr2) OR Vldlr) OR Vmn1r2) OR Vsnl1) OR Vsx2) OR Washc2) OR Wdr46) OR Wfs1) OR Wif1) OR Wnt2b) OR Wnt5b) OR Wnt7a) OR Wnt8b) OR Wrb) OR Wscd1) OR Wwc1) OR Ypel4) OR Zbtb18) OR Zbtb2) OR Zcchc12) OR Zfp5) OR Zfp532) OR Zic1) OR Zic2) OR Zic3) OR Zic4) OR Zic5) OR Zwint) AND olfactory bulb)
As far as I can tell, this does not surpass any character limits on Pubmed. | {
"domain": "bioinformatics.stackexchange",
"id": 764,
"tags": "text-processing, literature-search"
} |
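The same query can also be built with a flat `OR` list: PubMed evaluates Boolean operators left to right, so the deeply nested parentheses produced by the bash loop are equivalent but harder to read. A Python sketch (`build_query` is just an illustrative name; the filename follows the question):

```python
def build_query(genes, region="olfactory bulb"):
    # PubMed evaluates Boolean operators left to right, so one flat OR list
    # inside a single pair of parentheses is equivalent to the nested form.
    return "(({}) AND {})".format(" OR ".join(genes), region)

# Usage, assuming one gene symbol per line as in the question:
# with open("markers_clean.txt") as fh:
#     genes = [line.strip() for line in fh if line.strip()]
print(build_query(["Aatf", "Abcb1b", "Abhd3"]))
# ((Aatf OR Abcb1b OR Abhd3) AND olfactory bulb)
```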
Convert C to Swift | Question: I am trying to convert this C code, which calculates distance from RSSI, into Swift. I tried to do it myself, but since I'm a beginner I need help with how to do it.
Here is C code:
#define QUEUE_SIZE 16
#define INCREASE_INDEX(x) ({x++; if(x >= QUEUE_SIZE) x = 0;})
int rssi_array[QUEUE_SIZE] = {0};
int sort_rssi[QUEUE_SIZE] = {0};
int rssi_index = 0;
static double getDistance(double rssi, int txPower) {
/*
* RSSI = TxPower - 10 * n * lg(d)
* n = 2 (in free space)
*
* d = 10 ^ ((TxPower - RSSI) / (10 * n))
*/
if (rssi == 0) {
return -1.0; // if we cannot determine accuracy, return -1.
}
return pow(10, ((double) txPower - rssi) / (10 * 2));
}
static double calculateAccuracy(double rssi, int txPower) {
if (rssi == 0) {
return -1.0; // if we cannot determine accuracy, return -1.
}
double ratio = rssi * 1.0 / txPower;
if (ratio < 1.0) {
return pow(ratio, 10);
} else {
double accuracy = (0.89976) * pow(ratio, 7.7095) + 0.111;
return accuracy;
}
}
int cmpfunc (const void * a, const void * b) {
return ( *(int*)a - *(int*)b );
}
static double calculate_average() {
double average = 0;
int i = 0;
int drop = 3;
memcpy(sort_rssi, rssi_array, QUEUE_SIZE * sizeof(int));
qsort(sort_rssi, QUEUE_SIZE, sizeof(int), cmpfunc);
for (i = 0; i < QUEUE_SIZE - drop; ++i) {
if(sort_rssi[i + drop] == 0) break;
average += sort_rssi[i];
}
return average / i;
}
// For adding new rssi we can use this code:
rssi_array[rssi_index] = rssi;
INCREASE_INDEX(rssi_index);
double mean_rssi = calculate_average();
Here is what I managed to do, but I'm not sure if it's good.
Swift code:
let queueSize = 16
func increaseIndex(_ x: Int) -> Int {
var x = x + 1
if x >= queueSize {
x = 0
}
return x
}
var rssiArray = [0]
var sortRssi = [0]
var rssiIndex = 0
private func getDistance(rssi: Double, txPower: Int) -> Double {
if rssi == 0 {
return -1.0 // if we cannot determine accuracy, return -1.
}
return pow(10, (Double(txPower) - rssi) / (10 * 2))
}
private func calculateAccuracy(rssi: Double, txPower: Int) -> Double {
if rssi == 0 {
return -1.0 // if we cannot determine accuracy, return -1.
}
let ratio = rssi * 1.0 / Double(txPower)
if ratio < 1.0 {
return pow(ratio, 10)
} else {
let accuracy = (0.89976) * pow(ratio, 7.7095) + 0.111
return accuracy
}
}
// This part is probably for comparing results from two functions,
// but I don't understand where it is called from or what it compares
func compareFunction(_ a: UnsafeRawPointer?, _ b: UnsafeRawPointer?) -> Int {
return Int(a ?? 0) - Int(b ?? 0)
}
private func calculateAverage() -> Double {
var average: Double = 0
var i = 0
let drop = 3
memcpy(sortRssi, rssiArray, queueSize * MemoryLayout<Int>.size)
qsort(sortRssi, queueSize, MemoryLayout<Int>.size, cmpfunc)
for i in 0..<queueSize - drop {
if sortRssi[i + drop] == 0 {
break
}
average += Double(sortRssi[i])
}
return average / Double(i)
}
I know this will be very useful for others. Code help will be much appreciated.
Thank you all in advance!
Answer: A few suggestions:
Don't have everything be part of a class. In Swift, you can have free functions, just like in C. Remove the class AverageRSSI: NSObject { ... }.
Keep casing consistent. You have calculateAccuracy and calculate_average. The preferred casing in Swift is camelCase.
It's customary in Swift not to capitalize constant and function names. Everything that's not a class name should be camelCase.
Have more meaningful names. calculateAccuracy is good, cmpfunc is not. rssi_array could also be rssi_values or simply rssi.
Make both the function definition and the function call as clear as possible. For example, when you read calculateAccuracy(_ rssi: Double, _ txPower: Int), it's clear that the first parameter is rssi and the second is txPower, but when you call it as calculateAccuracy(1.0, 2), it's no longer clear. Keep the parameter names with calculateAccuracy(rssi: Double, txPower: Int) and call it as calculateAccuracy(rssi: 1.0, txPower: 2).
getDistance and calculateAccuracy return the magic value -1 in case of error. In Swift it's better to throw an error detailing what the problem was, instead of returning an invalid value.
Since rssi_array and sort_rssi are arrays, you don't need to use memcpy to copy the values of one to another, you can simply assign the values with sort_rssi = rssi_array.
In Swift, you don't need to use the C qsort() function, since any MutableCollection has a sort() method.
Since sort_rssi is only used inside a function, it doesn't need to be file global, it can be a local variable.
You could replace
memcpy(sort_rssi, rssi_array, QUEUE_SIZE * sizeof(int));
qsort(sort_rssi, QUEUE_SIZE, sizeof(int), cmpfunc);
with:
var sortedRssi = rssiArray
sortedRssi.sort(by: >)
See:
Swift API Design Guidelines for a list of Swift coding guidelines.
Collection Types for details about collections, including arrays.
Error Handling for details about how to create, throw and catch errors. | {
"domain": "codereview.stackexchange",
"id": 40191,
"tags": "c, swift, ios, objective-c, converting"
} |
Proof of Kirchhoff's second law | Question: There are 2 approaches I have stumbled upon to prove Kirchhoff's second law:
Derive it from the conservation of energy.
You can derive it from $\oint \vec{E}\cdot d\vec{s} = 0$.
Can you show me a full or at least very convincing proof using both approaches?
Answer: Here is an answer I propose for the 2nd way.
I'm sorry for the bad drawings. | {
"domain": "physics.stackexchange",
"id": 31805,
"tags": "homework-and-exercises, electric-circuits, voltage"
} |
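Since the hand-drawn figures from that answer are not reproduced here, here is a symbolic sketch of the second approach, assuming quasi-static conditions so the electric field is conservative:

```latex
% Around any closed loop, a conservative field satisfies
\oint \vec{E}\cdot d\vec{s} = 0 .
% Splitting the loop into the circuit elements it traverses,
% with element k running from a_k to b_k:
\oint \vec{E}\cdot d\vec{s}
  = \sum_k \int_{a_k}^{b_k} \vec{E}\cdot d\vec{s}
  = -\sum_k \left( V_{b_k} - V_{a_k} \right)
  = -\sum_k \Delta V_k = 0 ,
% so the potential differences around the loop sum to zero,
% which is Kirchhoff's second (voltage) law.
```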
Calculation of an impulse response of h[n] | Question: I am currently looking at the z-transform and am using a great YouTube reference to help me, but I am struggling with a basic step. How do I get the impulse response array h[n] = [ ... ] shown in the lecture at the following URL - https://www.youtube.com/watch?v=dEJp46SFgV4&ab_channel=DavidDorran
I can do the following which I believe to be correct but do not know how to get the array out as shown in purple.
Answer: That purple array is giving the impulse response over time. You can get it directly from the difference equation. Assume initial rest, $y[-1]=0$, then write out the impulse response for $n=0, 1, 2, ...$. If you do that you will get:
\begin{align}
h[0]&=1+0 \\
h[1] &= 0 + 0.5 \\
h[2] &= 0 + 0.25 \\
h[3] &= 0 + 0.125 \\
&\;\vdots
\end{align} | {
"domain": "dsp.stackexchange",
"id": 9362,
"tags": "z-transform, transfer-function"
} |
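The recursion is easy to verify numerically. A Python sketch, assuming the difference equation in the lecture is $y[n] = x[n] + 0.5\,y[n-1]$ (which matches the values above):

```python
def impulse_response(n_samples):
    # Assumed difference equation: y[n] = x[n] + 0.5*y[n-1],
    # driven by a unit impulse x = [1, 0, 0, ...] from initial rest (y[-1] = 0).
    h = []
    y_prev = 0.0
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0
        y = x + 0.5 * y_prev
        h.append(y)
        y_prev = y
    return h

print(impulse_response(4))  # [1.0, 0.5, 0.25, 0.125]
```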